DUE TO SOME HEADACHES IN THE PAST, PLEASE NOTE LEGAL CONDITIONS:
David Yakobovitch owns the copyright in and to all content in and transcripts of The HumAIn Podcast, with all rights reserved, as well as his right of publicity.
WHAT YOU’RE WELCOME TO DO: You are welcome to share the below transcript (up to 500 words but not more) in media articles (e.g., The New York Times, LA Times, The Guardian), on your personal website, in a non-commercial article or blog post (e.g., Medium), and/or on a personal social media account for non-commercial purposes, provided that you include attribution to “The HumAIn Podcast” and link back to the humainpodcast.com URL. For the sake of clarity, media outlets with advertising models are permitted to use excerpts from the transcript per the above.
WHAT IS NOT ALLOWED: No one is authorized to copy any portion of the podcast content or use David Yakobovitch’s name, image or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books, book summaries or synopses, or on a commercial website or social media site (e.g., Facebook, Twitter, Instagram, etc.) that offers or promotes your or another’s products or services. For the sake of clarity, media outlets are permitted to use photos of David Yakobovitch from the media room on humainpodcast.com or (obviously) license photos of David Yakobovitch from Getty Images, etc.
Welcome to our newest season of the HumAIn podcast in 2021. HumAIn is your first look at the startups and industry titans that are leading and disrupting ML and AI, data science, developer tools, and technical education. I am your host, David Yakobovitch, and this is HumAIn. If you like this episode, remember to subscribe and leave a review. Now on to our show.
Welcome back, listeners, to the HumAIn podcast, your channel to launch AI products, discover tech trends and augment humans. Today on the episode I'm bringing you the Chief Scientist of ASAPP, Ryan McDonald. Ryan is joining us from the UK and has had a seasoned career in all things customer experience around AI and advanced systems with humans. Prior to joining ASAPP, Ryan was a researcher at Google and an associate researcher at a university in Europe, the Athens University of Economics and Business. Ryan, thanks so much for joining us on the show.
Thanks for having me, David.
Ryan, prior to joining ASAPP, I'd love to hear about your career and what got you excited about doing research at Google and at the university level. How did that career take you to where you are today?
Absolutely. So, I did my PhD at the University of Pennsylvania on Natural Language Processing; that was almost 20 years ago now. After that I joined Google, where I was working on machine learning and artificial intelligence for a number of different consumer products: search quality, machine translation, Google Assistant, you name it.
I was very fortunate to, sort of, see artificial intelligence really bloom there as it bloomed throughout the industry. It went from kind of a niche technology to something that’s at the heart of most technological companies today. About, I would say five years ago, I really got interested in enterprise problems. So, problems in customer care, problems in document understanding and healthcare. I really got interested in these because they offered a different set of challenges over consumer artificial intelligence, consumer technologies in the consumer space.
So, think search or ads: there's basically a single service that everybody uses, and the research and artificial intelligence being driven at that particular modality is about getting incremental gains, because incremental gains can lead to large wins in terms of revenue, quality, these kinds of things. Enterprise is totally different.
The kinds of problems that deploying AI runs into for enterprise are more about scalability. Instead of having a single user of the technology, we have hundreds of users of the technology, and the question is how to deliver a unique experience, and an excellent experience, for each of those users. This necessitates questions around adapting machine learning and natural language processing models to new domains: training new models with the least amount of training data and human intervention that we can get away with. I enjoyed working on that a lot, and in that process I got introduced to ASAPP. I was very excited when I started conversations with ASAPP, because what I really liked about their approach is that they were laser focused on one particular area, which at that time was customer care.
There's a broader vision about AI for augmented human activity. But at this moment in time, we have this laser focus on trying to leverage AI to really transform customer care. What I was really excited about was that I had come to the conclusion that the only way to really make transformational changes in enterprise is to be fully integrated with the clients and have holistic solutions for the area where you're trying to make an impact. At ASAPP, in customer care, we've developed our AI native customer experience platform. That's really what it is: a holistic solution that is developed with AI from the bottom up. So everything about the experience, the user experience, when the models are called, is just seamless.
That’s really how AI can have an impact on enterprise software, in particular, in customer care. So, that’s kind of where I was coming from. I got super excited about this laser focus on one area, and generally the hypothesis and the attitude that we’re really only gonna make large changes by having holistic and seamless solutions to these problems.
The customer care industry, as you've mentioned, is so important. It's fantastic to see how your experience has led you right up to building the future of customer care and customer experience through directed automation at ASAPP. As you mentioned, automation isn't only binary, as we talked about prior to the show; it's about augmenting humans. Can you talk to us more about your thoughts on automation and how you're working on that continuum at ASAPP?
Absolutely. At ASAPP, we really want to put humans at the center of AI. So we're really interested in problems where we think AI can be used to automate or augment human activity for some positive outcome, whether that's efficiency, better satisfaction with the work, whatever that outcome is, but some positive outcome.
As you mentioned, customer experience is this great space for customer care, and there are lots of challenges there. Just from a business perspective, $1.6 trillion is lost by American companies due to people changing services because of bad customer care. We've all been there. We've all called into a call center and had a really bad experience. Companies are spending $600 billion a year addressing this with the various customer experience and customer care platforms out there. But there's also a really high attrition rate in customer care, 40% per year.
So, clearly, because of all these things, there's this big opportunity, and quite frankly, there are antiquated solutions in the space. So there's a great chance for AI to have a large impact. When I say antiquated solutions, I also mean antiquated thinking around artificial intelligence. A lot of players in this space see a binary landscape, as you put it: there are bots and there are humans, AI should be focused on bots, we should just be trying to automate as much as we can, and we should be trying to contain as many customer care issues as possible in bots, away from humans.
Of course, bots are an important part of any solution in the space. But at ASAPP, we also recognize that there's a huge opportunity to augment human activity. For those calls that bots can't handle, because they're too complex, or maybe the customer calling in is irritated or having a problem, the customer has to speak to a human agent, and there's just a huge amount of opportunity for AI to augment that particular activity.
So we really want to look at cases where we can make that human more efficient. They're dealing with these complex issues, they're dealing with potentially angry customers. How can we recommend things for them to do and things for them to say, so that they can solve the issue more efficiently and reach the highest customer satisfaction? Because the current landscape of solutions is not that great.
We also have a lot of willing partners, which is excellent. They really want to try out innovative things and push the needle, because they recognize that big solutions in this space are going to pay a lot of dividends for them. That's what I mentioned earlier about our AI native customer experience platform, and that's essentially what it's designed for. It's designed to have humans at the center: for those calls that get to humans, a seamless experience where artificial intelligence is helping those human agents solve these really complex problems. It's our belief that only by building this full experience can we really have that kind of transformational impact on the space.
I've seen that firsthand in the industry. In fact, my sister has worked as a customer care lead for one of the major healthcare companies, UnitedHealth Group. So I hear a lot of anonymized stories about the pain points that she experiences working in the field. Part of that is, as you rightfully said, the attrition of her colleagues over the last couple of years, because of the demands of working in the customer care space.
It's about making sure that you're handling those agent calls and delivering that success with the limited resources you have. So I've learned from her, and it sounds like a solution like what's being built at ASAPP could be great for a company like UHG, and many others as well.
Yeah, definitely. That’s exactly the sort of use case we’re going after.
I also think about the enterprise space as well. In my current role at SingleStore, I report to the director of customer success. So while we've been thinking about the customer care case, with the call centers and the users, you also mentioned, Ryan, the enterprise space.
So, thinking of customer success: when our customer success managers and team leaders are having calls with enterprises, things can get lost. Just a few weeks ago, one of our customer success managers reached out to me and said: hey, we have some follow-ups from the client based on this project. We started discussing, and a lot of information was lost in translation. I felt that in the conversation we were potentially going around in circles, trying to work out who said what and what insights were gleaned.
I think we have to get back on another call with the client; we missed some information. So it sounds like, as humans, our memory is fallible. There's a lot of information that we need to be augmented with. It sounds like that might even be a good use case for ASAPP.
Absolutely. It shares a lot of commonalities with the customer care case. A big objective of call centers is that they don't want customers to have to call back about the same issue. So part of that is making sure that the agents, when they're dealing with that issue the first time, are following the right flow, asking the right questions, and getting the right information that they need in order to solve that particular case.
This is exactly the technology we're building out. How can we regularize that? How can we look at the conversation and the issue that the customer is having, which is embodied in the dialogue up to a point in time, and then allow the AI to make recommendations to the agent: here is a workflow that we think you should use and all the steps you need to follow in order to solve this issue.
Here’s a knowledge base article that we think will have a lot of information that will help you solve this issue. Here are some suggestions of what to say next, and these can be questions, information to elicit so that we get you on the right path to solving this issue.
These are really the same problems. We want to make sure that information is gathered in the right way and in an efficient way, but in a way that is ultimately going to solve the issue, in the most satisfactory way for the customer and the company.
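As a rough sketch of what surfacing a relevant knowledge base article could look like mechanically, here is a minimal bag-of-words retrieval example in Python. The articles, the conversation text, and the `recommend_article` function are all hypothetical illustrations; a production system would use learned models rather than raw word overlap.

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    # Bag-of-words vector: lowercase word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend_article(conversation: str, articles: dict) -> str:
    # Return the title of the article whose text best matches the conversation so far.
    return max(articles, key=lambda title: cosine(bow(conversation), bow(articles[title])))

# Hypothetical knowledge base.
articles = {
    "Reset your password": "reset forgotten password account login email link",
    "Change your flight": "change flight booking reschedule travel date fee",
    "Refund policy": "refund cancel charge billing money back policy",
}

print(recommend_article("hi i need to reschedule my flight for tomorrow", articles))
# → Change your flight
```

The same similarity score could rank suggested replies or workflows instead of articles; only the candidate set changes.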
I can't help thinking, about customer success, that there's so much being solved, not only for the company and the customer, but also for the agent. I'm thinking back to the scenario with my sister, in the sense that there could be so much cognitive load. I'm just imagining her right now, taking all these calls back to back, and there could be a lot of routine issues that are not very exciting or challenging from the agent's perspective.
Software like what's being built at ASAPP can help reduce that cognitive load. It sounds like, from what you mentioned before, Ryan, that helps improve satisfaction and improve the experience, not only for the customer but also for the agent.
Absolutely. A key area of focus for us is how to build our platform so that it helps the agent, and it can help the agent in a number of different ways. One is, like you said, it can try to balance that cognitive load. We can think of eliminating some mundane or routine tasks from their job, but there is a flip side to that. We've actually just today released a report called CX: The Human Factor, which is a large survey study of agents about their job. One interesting aspect there is that agents do, once in a while, want to do mundane and routine things. It's like anybody else in any job: you want a mix of easy wins that motivate you, but you also want to be challenged, you want to handle complex issues, and you do want the ultimate satisfaction of solving a really hard problem.
So, there are a couple of things we can do in this space. One is to have our software optimize how we assign issues to agents, to make sure that their cognitive load through the day doesn't get too high or too low. This also applies to dealing with customers. Customers have different personas, ranging from angry to happy, and we want to make sure agents are not only dealing with angry customers all day. So that's a particularly exciting area.
How can artificial intelligence, first of all, identify which issues are hard and what the traits of the customers calling in are? And then how can we optimize to make the agent's job as satisfying as possible? The other thing from the study, which I found quite interesting, is that agents find real-scenario training to be much more effective than reading a manual, and that makes sense, of course. In particular, since COVID started and a lot of these agents are working at home, the opportunities for these kinds of training have lessened, and in the survey they've definitely responded that that's the case. So how can we use artificial intelligence to build tools to train agents and give them the skills they need in order to handle these complex cases?
So they're not just thrown into the fire, having to deal with a new issue that they've never dealt with. Are there ways artificial intelligence can be used to help them? Certainly the tools we've already built so far, where we make recommendations to agents like "use this workflow" or "say this thing next", definitely help, and we can already see the learning curves of agents coming into the call center. Those learning curves are much steeper, because artificial intelligence, by suggesting these things to the agents, is ultimately teaching them the right way to handle certain issues.
But even going beyond that is quite exciting. How do we actually onboard agents? Get them up to speed as quickly as possible, but also offer them continuous training throughout their jobs so that they feel happier about their job, and they’re less likely to leave.
So, diving deeper into the training narrative: I have a huge passion for the education industry. I previously led scaled training for both General Assembly and Galvanize. So it's something that I find really fascinating. A few weeks ago, I spoke with Daniel Bianco, one of the chief partners in the venture industry, about how they're working on training with healthcare workers and legal workers. All across the industry, training is definitely what has been needed the most, even beyond the pandemic. We've seen that acceleration in all regards.
So, thinking about that here with agents, I'm wondering what that solution could be. We know that in September 2021, Amazon came out and said that for all call center and logistics workers who stay at the company for more than 90 days, Amazon is going to pay for a full bachelor's degree, for over 750,000 workers. So I'm wondering if you have some thoughts on that on the training side.
I read about the Amazon case a couple of days ago. That's quite amazing. My general thought, in particular for agents and customer care: one really great thing about that domain is that we have lots of examples.
We have historical records of conversations that have happened in the past. We sometimes know what the issues were. We often know what tools were used to solve them. We know, for those historical cases, what the characteristics of the agent were. Was the agent a tenured agent? Somebody who's been at the call center a long time?
We also know what the outcome was: did that conversation lead to a good outcome? That could be something like a high customer satisfaction score, or the customer not having to call back, things like that. For me, in customer care, that is the gold.
We need to mine that gold to understand: okay, tenured agents have a lot of experience in solving particular kinds of issues; here are the things that they tend to use and tend to do to solve those problems. How can we have the models learn that and then transform it into artifacts that can be used to train new agents on those issues? There's just a rich tapestry of research in that space, and the nice thing about it is that it's not some second-order effect. We will be creating tools that are going to be used directly by the agents to solve the problems they face in their job. That's going to help them get the tools they need to be happier and more effective at the job.
I couldn't agree more. What I'm hearing is that the common theme here in the show today is about augmenting humans. It's different from necessarily prioritizing AI to do tasks that humans do better today. I think of the catastrophizing examples of the Boston Dynamics parkour robots, or Elon Musk bringing to stage a robot dressed as a human, or rather a human dressed as a robot, that would be your agent in the future. But that's not necessarily the future that we're moving towards, and it may or may not be a better modality.
It might be about focusing on the repetitive or mundane tasks where automation can help a human workflow do better. Can you tell us some examples of things you've seen humans do well but that would be difficult to automate, and vice versa?
There's definitely a split. Artificial intelligence and machine learning are really good at multivariate problems. They're certainly good at recognizing repetitive things, memorizing that behavior, and repeating it. They're also really good at multivariate problems like optimization. So something that's very important in the call center is: how do you route calls? You get a number of different calls, there's a bunch of queues of people with problems, and we have agents on the other side.
We know a little bit about what's happening in the queues of the people waiting to be served, and we know the current state of the agents. How do we optimize that for things like throughput, to get as many issues through as possible? But also, as I mentioned before, how do we optimize without burning agents out, where one agent is getting all the hard issues and other agents are getting all the easy issues? That's the kind of thing where artificial intelligence and machine learning thrive, obviously.
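As a toy illustration of that balancing act, the greedy sketch below assigns each incoming issue to the agent with the lowest projected load, so hard calls get spread out rather than piling onto one agent. The difficulty scores and the routing rule are made up for illustration; this is not the production routing system described above.

```python
import heapq

def route_issues(issue_difficulties, num_agents):
    """Greedy routing sketch: assign each issue to the agent with the
    lowest projected load, so no single agent absorbs all the hard calls.
    Returns a dict mapping agent index to its assigned difficulty scores."""
    # Min-heap of (current_load, agent_index); hardest issues are placed
    # first so the greedy choice spreads them across agents.
    heap = [(0.0, i) for i in range(num_agents)]
    heapq.heapify(heap)
    assignments = {i: [] for i in range(num_agents)}
    for difficulty in sorted(issue_difficulties, reverse=True):
        load, agent = heapq.heappop(heap)
        assignments[agent].append(difficulty)
        heapq.heappush(heap, (load + difficulty, agent))
    return assignments

# Six issues of varying difficulty routed to two agents.
result = route_issues([5, 1, 4, 2, 3, 1], num_agents=2)
print({agent: sum(load) for agent, load in result.items()})  # → {0: 8, 1: 8}
```

A real router would fold in the other signals Ryan mentions (queue state, customer persona, agent fatigue over the day) as extra terms in the load estimate.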
On the flip side, there are certain cases, in particular interpersonal cases and ones requiring real-world knowledge. It's going to be much harder for a bot to deal with an upset person, a person who's got an urgent need, or some complex exception, something like that. Certainly, humans have more empathy and can deal with those problems better at this point in time. And these aren't necessarily complex issues, just complex emotional states. But there are other cases. The one I always like to think about is somebody calling in, let's say, to an airline. They're calling the airline's call center and they need to change their flight for tomorrow. The agent looks at the information and sees that the person is traveling with three children, or something like that. They will immediately understand that someone who's traveling with three children would not want to be on a flight at midnight or very late at night; there are constraints there.
So they will probably prioritize flights that are more family friendly, more friendly to the kids who would be flying. Whereas a machine learning model, unless it's explicitly programmed for that, or sees a huge amount of data to really learn that kind of subtle, complex need of someone who's trying to travel, is never going to really understand that.
It's probably just going to suggest the next available flight, or the cheapest flight, or optimize on some other dimension. Those are the kinds of complex cases that really need some level of world knowledge, just understanding the implicit constraints that people may have when they're calling into these call centers.
Humans definitely fit the bill there to solve those problems. Then, of course, like we've talked about, the drive becomes: how do we get them the tools they need to solve that particular problem as effectively as possible?
Speaking of flying a lot: prior to the pandemic I would be on quite a lot of flights a year, all over the States and even into Europe, and I've been following a lot of the flight changes that we've been seeing in the past year. I'm part of a couple of flight groups where I see different members who frequently travel, and one airline has actually been using different AI technologies to select where people are sitting, especially when they get upgrades from economy to economy plus or into first class.
It's so fascinating to see the AI at work, putting people next to each other or separating them because of COVID concerns or other considerations. So it seems that there are a lot of tasks here where AI techniques are able to augment human workflows and help the agents. And agents are not always someone with a headset on; it could be someone in person being augmented, like a flight attendant or other agent examples as well.
Absolutely, that's a great case, exactly the kind of case where AI thrives: constraint satisfaction problems, getting these constraints into place, similar to other things that I've talked about earlier. There are also the kinds of augmentations around finding needles in haystacks: what is the exact right piece of information in the company's database that the agent is going to need at that moment in time to solve that case? Those situations where the range of possibilities is so large, exponential even, really require efficiency, but efficiency coupled with learned intelligence, and they are exactly the automation and augmentation opportunities that we see.
Taking this all together: one phrase from the ASAPP mission that I found really powerful is about surfacing information and workflows that will empower humans to execute on problems better. Wow, that's so powerful to me. Thinking through that, can you share with us, from your insights as a leading researcher: why, Ryan? Why are humans better at surfacing relevant information from a customer to really help solve these problems?
Why is the AI better at surfacing relevant information from a customer to solve these problems?
Well, I guess it depends on which problem. So why would the AI be better? Or why would humans be better? That’s right.
Exactly. It depends on what information you need to surface. If we want to surface workflows, there are a number of situations where the AI is going to be better. As I mentioned before, a large telecommunications company is going to have a lot of different reasons why people might be calling into the call center.
The differences between some of those reasons might be quite subtle. So trying to understand which workflow the agent should follow is a really difficult problem. We have to understand that from the language and then search inside a large database to pull that information out. That's in the case where the call center, as a company, has actually codified workflows for issues in a searchable database. If that doesn't exist, another great artificial intelligence problem is how you actually detect these workflows. Even for companies that have codified workflows, there might be updates that haven't been made.
So how do you detect when there are certain sequences of repetitive actions, or sets of actions, used to solve tasks? How do you do that? Because that's all implicit in the language and the conversation. It's understanding that the language used in this conversation embodies the same set of actions and problems as the language in some other conversation, and then aggregating that information and surfacing it in a useful way so that people can search over these workflows.
But on the other hand, humans are the ones conversing with the customers, and we do have to get that information out from the customers, the subtle information like "I see you're traveling with three children; are there constraints on the time of day you want to fly?" Those sorts of things make it still critical to have humans in the loop, because of all the corner cases that come up when trying to solve issues for people calling in; humans are better at recognizing those corner cases and adapting correctly.
And having that synergy, the artificial intelligence system getting the agent on the right track, making sure that they're saying the right things to move the issue along in the right way and executing the right actions, coupled with a human who is, basically, flagging unusual situations and pulling out the right information, is this great opportunity to see effectiveness and proficiency improve.
Today's most data-intensive applications and workflows do require real-time insights and real-time results, so that agents' decisions can be augmented in the moment. That means latency is very important. In your experience, how much does latency matter, and how does ASAPP design for a low-latency system?
That's a great question. It ties back to one of the things I said very early on: that we really want to design AI products with humans at the center, and to do that, the experience has to be seamless. We can't have a tool or a piece of software in front of an agent making recommendations or suggestions for things for the agent to do, but having them arrive two or three seconds after the moment when those actions should have been executed. The agent will just learn to ignore them, because at that speed and that latency, they're not helping them solve the particular problem.
Going back to my days at Google: if you think about Google 20 years ago, one of the reasons they really succeeded in the search space is latency. The results were excellent, but they also made sure that you got those results almost instantaneously, and that user experience was great. In the AI native customer experience platform we've designed, we want that kind of seamless, immediate AI response to be part of it.
So we design everything, and that's why it's critical to design these things from the bottom up with AI in mind, so that all of our artificial intelligence serves those latency needs. To give you a couple of examples, the first is automatic speech recognition. A huge number of calls that come into call centers are still voice. They're not digital; it's not people contacting us over chat, it's people calling in on their phone. In those situations, if we want to make any suggestions to agents, we need super fast automatic speech recognition so that we can transcribe what's happening.
We use the information we're pulling from the conversation, send that to our artificial intelligence models, and get the information back. So there are two sides there. One is that the AI models have to be fast in their suggestions, but we also have to make sure that the automatic speech recognition is getting us the information as quickly as possible.
So we've actually done some great research in that space, on what we call multi-mode ASR. The idea here is that automatic speech recognition can be more accurate, and we also want top-line accuracy when building these models, if you have a little bit of look-ahead. That means you let the conversation play out a little bit before you transcribe what happened before, but the longer you let the conversation play out, the longer the latency is going to be.
So the work we've done here is about building general models where you can basically specify, based on the latency, how far you want that look-ahead to be. You really have this knob that allows you to balance automatic speech recognition accuracy with latency, and find the sweet spot so that we get the information out of the system as quickly as possible.
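To make that knob concrete, here is a hypothetical back-of-the-envelope calculation: given an end-to-end latency budget and the time the model itself needs, how many frames of audio look-ahead can the recognizer afford? The 40 ms frame size and the timing figures are made-up parameters, not numbers from the actual system.

```python
def lookahead_frames(latency_budget_ms: float,
                     model_time_ms: float,
                     frame_ms: float = 40.0) -> int:
    """How many future audio frames the ASR model may wait for before
    emitting a transcript, given an end-to-end latency budget and the
    time the model itself needs to run. More look-ahead generally means
    higher accuracy; less look-ahead means faster responses."""
    spare = latency_budget_ms - model_time_ms
    # Never return a negative frame count when the budget is already blown.
    return max(0, int(spare // frame_ms))

# A tight 200 ms budget with a 120 ms model leaves room for 2 frames of
# look-ahead; a relaxed 600 ms budget allows 12 frames.
print(lookahead_frames(200, 120))  # → 2
print(lookahead_frames(600, 120))  # → 12
```

Turning the knob is then just choosing the latency budget per deployment and letting the model consume as much future context as that budget allows.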
On the artificial intelligence model side, we've done a bunch of work on specialized architectures that are super fast. We live in a world of deep learning, which uses architectures that are very big; they use a lot of memory, a lot of CPU and GPU. But obviously, like I said, we can't wait many seconds to get predictions back to agents.
So we've focused on building out something called SRU++, an architecture where we can take super accurate AI models and then distill them into these faster architectures, which gets us into the millisecond range. So we can get responses back to agents in milliseconds, and that really affects how much they use those suggestions at the end of the day. So it's extremely important, we definitely design for it, and without it, ultimately, the augmentations that we suggest just wouldn't be used.
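The distillation step described here, compressing a big accurate teacher model into a smaller, faster student, is commonly trained with a cross-entropy loss against the teacher's temperature-softened probabilities. Below is a minimal pure-Python sketch of that generic objective, with made-up logits; it is not the actual SRU++ training recipe.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the distribution.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: the core objective when distilling a large, accurate model
    into a smaller, faster one."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [4.0, 1.0, 0.5]
aligned_student = [3.9, 1.1, 0.4]   # mimics the teacher closely
random_student = [0.2, 2.5, 1.0]    # does not

# The loss is lower when the student matches the teacher's behavior.
print(distillation_loss(teacher, aligned_student) <
      distillation_loss(teacher, random_student))  # → True
```

Minimizing this loss over many examples nudges the small model toward the big model's behavior, which is what lets the fast architecture keep most of the accuracy.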
It's exciting to hear about research like SRU++ and how it's accelerating the industry. Thinking from what's been built today at ASAPP to what's next on the product roadmap, as a result of lessons learned or the biggest feature requests from your customers, can you tease to the audience what we should be seeing in the coming months and years?
There are some things that I'm very excited about. One I'm particularly excited about is something we call Agent Journey. What we've recognized is that task-oriented dialogue, the modality where two people are having a conversation and want to solve a particular problem, is exactly what customer care is.
Almost all the research at ASAPP is focused under the umbrella of task-oriented dialogue. When solving problems in that space, obviously, there's a lot of information in the conversation: the customer is telling us things and the agent is responding. But there's also a lot of information in what's happening outside of the conversation.
The agent has opened some tools. They might be reading a knowledge base. They might have opened up a tool that allows them to search flight recommendations, if the company is in the airline industry. In order to really make headway in task-oriented dialogue, we have to recognize that we need to go beyond what's happening in the conversation and see all the information and all the actions that the agent can possibly take on their computer. So Agent Journey is a product where we put a piece of software on the agent's computer that allows us to access all the tools they're using, how they're using them, and how that interacts with the ongoing conversation.
Initially on our roadmap, we have already partnered to do analytics: trying to understand the tools that agents are using, which are the most effective and which are the least effective, and what paths and tools agents use to try to solve problems, to really understand the space. Then we blow that out into: now that we understand the tools they're using outside of the conversation to solve issues, can we integrate all that information? Can we start to integrate what they're looking at in another piece of software to make suggestions for what they do next, or suggestions for what they say next, and vice versa? If we see the conversation up to some point in time, the artificial intelligence should know: oh, you're going to have to open up this tool and make this particular query.
Agent Journey is really our effort in that space to understand, holistically, everything the agent is doing, to really make headway in task-oriented dialogue.
Well, I'm super excited to see the continued growth in ASR and task-oriented dialogue, and in all these different elements of the growth of AI for customer success, customer care, and the customer experience industry. Ryan McDonald, Chief Scientist of ASAPP, thanks so much for joining us today on HumAIn.
Thanks so much for having me.
Thank you for listening to this episode of the HumAIn podcast. Did the episode measure up to your thoughts on ML and AI, data science, developer tools and technical education? Share your thoughts with me at humainpodcast.com/contact. Remember to share this episode with a friend, subscribe, leave a review, and listen for more episodes of HumAIn.