Ryan McDonald: How To Position People at the Center of AI Native Solutions
Ryan McDonald is the Chief Scientist at ASAPP, working on NLP and ML research focused on CX and the enterprise. He is also an associate researcher in the NLP group at Athens University of Economics and Business. Ryan was a Research Scientist on the Language Team at Google for 15 years, where he helped build state-of-the-art NLP and ML technologies and pushed them to production.
He managed research and production teams in New York and London that were responsible for a number of innovations used in Translate, Assistant, Cloud and Search. He was the first NLP research scientist in both New York and London, and helped grow those groups into world-class research organizations. Prior to that, he did his Ph.D. in NLP at the University of Pennsylvania.
Ryan McDonald’s LinkedIn: https://www.linkedin.com/in/ryanmcd/
Ryan McDonald’s Twitter: https://twitter.com/asapp
Ryan McDonald’s Website: http://www.ryanmcd.com
CX: The Human Factor Report: https://ai.asapp.com/LP-2021-09-CX-The-Human-Factor_Landing-Page.html
Podcast website: https://www.humainpodcast.com
YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag
Support and Social Media:
– Check out the sponsors above; it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/humain/creators
– Twitter: https://twitter.com/dyakobovitch
– Instagram: https://www.instagram.com/humainpodcast/
– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/
– Facebook: https://www.facebook.com/HumainPodcast/
– HumAIn Website Articles: https://www.humainpodcast.com/blog/
Here are the timestamps for the episode:
(3:00)- The kinds of problems that deploying AI runs into in the enterprise are mostly about scalability. Instead of having a single user of the technology, we have hundreds of users, and the question is how we can deliver a unique, excellent experience for each of them. This raises questions about adapting machine learning and natural language processing models to new domains.
(10:49)- And this is exactly the technology we’re building out. How can we regularize that? How can we look at the conversation and the issue the customer is having, as embodied in the dialogue up to a point in time, and then allow AI to make recommendations to the agent: here is the workflow we think you should use, and all the steps you need to follow to solve this issue.
(28:33)- That’s why it’s critical to design these things from the bottom up with AI in mind. All of our artificial intelligence has been designed to serve those latency needs. To give you an example, the first is automatic speech recognition. A huge number of calls that come into call centers are still voice, not digital; it’s not people contacting over chat, it’s people calling in on their phone.
(30:41)- So we’ve focused on building out something called SRU, an architecture where we can take highly accurate AI models and distill them into these faster architectures, which lets us get into the millisecond range. We can get responses back to agents in milliseconds, and that really affects how much they use those suggestions at the end of the day.
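The episode doesn’t go into ASAPP’s actual SRU or distillation code, but the idea Ryan describes — training a small, fast “student” model to match a large, accurate “teacher” — can be sketched with the standard soft-target distillation loss. This is a minimal illustration only; the function names, temperature, and toy logits below are assumptions for demonstration, not ASAPP’s implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperatures yield softer
    # distributions that expose more of the teacher's "dark knowledge".
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened output distribution
    # and the student's: the usual knowledge-distillation objective.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Toy example: a large teacher and a small, fast student each scoring
# three candidate reply suggestions for an agent (illustrative values).
teacher = np.array([[4.0, 1.0, 0.5]])
matched = np.array([[4.0, 1.0, 0.5]])   # student mimics the teacher
diverged = np.array([[0.5, 1.0, 4.0]])  # student disagrees

loss_matched = distillation_loss(teacher, matched)
loss_diverged = distillation_loss(teacher, diverged)
```

Minimizing this loss over training data pushes the small model toward the large model’s behavior, which is what makes millisecond-latency serving possible without giving up most of the accuracy.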
(32:38)- We go beyond what’s happening in the conversation to see all the information and all the actions the agent can possibly take on their computer. Agent Journey is a product where we put a piece of software on the agent’s computer that gives us access to all the tools they’re using, how they’re using them, and how that interacts with the conversation.
(33:49)- Agent Journey is our effort in that space to understand holistically everything the agent is doing, to really make headway in task-oriented dialogue.