Why Responsible AI Is Critical for Every Enterprise Company with Brett Greenstein
Brett Greenstein is a Senior Vice President and Global Head of Artificial Intelligence at Cognizant. His experience spans the Internet of Things, technology consulting, and solutions in banking, healthcare, customer service, and retail, with organizations including IBM and many Fortune 500 companies.
Brett Greenstein’s LinkedIn: https://www.linkedin.com/in/bretgreenstein/
Brett Greenstein’s Twitter: https://twitter.com/bretgreenstein?s=20
Brett Greenstein’s Website: https://www.cognizant.com
Podcast website: https://www.humainpodcast.com/
YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag
Support and Social Media:
– Check out the sponsors above – it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/humain/creators
– Twitter: https://twitter.com/dyakobovitch
– Instagram: https://www.instagram.com/humainpodcast/
– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/
– Facebook: https://www.facebook.com/HumainPodcast/
– HumAIn Website Articles: https://www.humainpodcast.com/blog/
Here are the timestamps for the episode:
(00:00) – Introduction
(02:53) – There are ethical implications of putting people out of work that scare them, and there’s also the fear that when AI is biased it can cause damage: it can cause people not to be hired, and it can cause things that reflect badly on your brand to be used in business.
(03:19) – People have begun to extrapolate their inner fears, transfer them onto AI, and assume that using AI must be ethically dangerous. AI might be able to solve a problem better than not using it, and this has come up increasingly because the accuracy of AI-based systems is consistently better than people’s in very narrow tasks.
(04:17) – Everyone wants to be an AI-first company. And it sounds great. It sounds efficient and powerful and smart. It’s really good at some things, but it’s also not good at everything.
(05:14) – If AI could do everything we could do, we’d let the machines do the work. But in practice, it’s usually a very specialized skill set that is fairly narrow, and ultimately we’re still responsible for business and commerce and government and family. We can’t delegate that to a system.
(06:10) – A human being is accountable to other human beings in a way that AI is not, but it would be irresponsible to do certain types of diagnosis and not ask AI if it spotted anything.
(07:32) – We should manage the exceptions instead of managing the bulk of the work, and recognize where the strengths are.
(09:23) – Best-in-class for cars gets you into level three, conditional automation. The world is not really designed for self-driving cars as much as self-driving cars are not designed to fully take advantage of the world. We built all of our traffic systems and everything else under the assumption that people drive cars and people cross streets, with lanes and bike lanes.
(11:01) – In the U.S. there’s a backlash in several cities around facial recognition and other things, but as regulations help protect our privacy, cameras can still help drive enormous efficiency and safety in cities.
(14:07) – The amount of information and work is so high that it induces strain, errors, and stress on people. But if you had an AI do all the photos and then you touched up, tweaked, and fixed the ones that needed it, you’d get more done with less stress, and all of our jobs are filled with those kinds of tasks.
(15:42) – There are so many extensions and packages that claim to be AI-ready or AI-enabled, but they’re really using presets that perform repetitive tasks over and over. You no longer need the human to do that, but the human could double-check.
(16:36) – Like with Facebook and Instagram, it’s pretty cool when you’re FaceTiming with a friend or doing something social, but there are also the bad actors, when someone tries to hack the system. There should be regulations put in place there.
(17:53) – On the content moderation side, using AI to pre-filter out the really awful stuff so people don’t have to look at it is just an ethical thing to do, because it’s really unfair to make people look at that material. It’s necessary, but it’s awful.
(19:57) – Responsible use of data: when AI is used, you should know that it was used and have some ability to discuss or escalate if you disagree with an outcome, because solving it will enhance the AI for everybody else, and you should know whether a decision was generated by an algorithm or a person.
(23:03) – As these customer service human interaction systems become better, they will also have a little more transparency about what you can do, because if it were an algorithm, it could have told you the decision was because of this and this and this, which is then correctable.
(24:23) – With these new AI recruiting tools that are beginning to emerge, perhaps we’re going to move into a process that better serves humans, but also frees up the hiring manager to work on more challenging tasks. The combination of people owning the HR process, policies, and governance with AI that can actually tell you a little more about why it made the decisions it made is better than relying purely on people or purely on AI.
(27:27) – Setting policies means knowing what criteria to look for in candidates. Students are reverse-engineering their resumes so they have the right buzzwords for an algorithm to pick them, helping them get to the top of an algorithmic decision. That’s a whole different world, and it’s a really interesting result.
(28:28) – Once we give AI a task, it runs on its own, but in reality, people are still ultimately responsible for every system in a business. You can’t really just delegate this; you are still responsible for the policies, quality, bias, and all the other things that go into making a system work well.
(30:24) – We run an ethical AI council at Cognizant, which is a subset of our corporate responsibility office. It specifically focuses on making sure that for the projects we do, we’ve considered the ethical implications of doing them as well as the ethical implications of not doing them. If you have the ability to save someone’s life and choose not to, that’s unethical; it’s like walking by someone whose life you could save and doing nothing, and the same applies when you’re involved in AI.
(33:22) – You should be able to set and define a system’s behavior in some way and at least be informed in the moment. When you can’t make a decision fast enough, having an AI tell you what’s going on would be better than nothing telling you and the system just guessing.