Bret Greenstein

Please welcome back Bret Greenstein to HumAIn to learn what has changed in evolutionary AI, ethical AI, and responsible AI, as a continuation of our April 2019 discussion on the rapidly emerging AI race. Listen in as we explore topics including why ethical AI is on the minds of everyone, when it is unethical not to use AI, and how responsible AI could be critical for every enterprise company. This is HumAIn.

Welcome to HumAIn. My name is David and I will be your host throughout this series. Together we will explore AI through fireside conversations with industry experts, from business executives and AI researchers to leaders who advance AI for all. HumAIn is the channel to release new AI products, to learn about industry trends, and to bridge the gap between humans and machines in the fourth industrial revolution. If you like this episode, remember to subscribe and leave a review.

David Yakobovitch

Welcome back to another episode of the HumAIn Podcast. I am joined again by Bret. We were together on the HumAIn Podcast just about six months ago and got to chat about a lot of interesting evolution in AI, conversational AI, and ethical AI. Since then there have been so many hot developments, especially around ethics, so I'm pleased to have you on to talk about that topic and what's going on in the space.

Bret Greenstein

People have begun to realize that there are very hard questions to answer when it comes to where AI should be used. Obviously, as people think about capabilities that could do some of the work that people do, there are the ethical implications of putting people out of work, and that scares people. There's also the fear that when AI is biased, it can cause damage: it can cause people to not be hired, and it can cause things that reflect badly on your brand when used in business.

So there are many of these cases where people have begun to extrapolate their inner fears, transfer them onto AI, and assume that by using something as powerful as AI, it must be ethically dangerous. But over the last few months, I'm beginning to see the conversation shift from asking whether it is scary to use AI to asking: what if I don't use AI? Is that more unethical? Because AI might be able to solve a problem better than not using it. And this has come up increasingly because the accuracy of AI-based systems is consistently better than people at very narrow tasks, and that's really hard to ignore. When you work on these AI-based systems and you see a conversational system that is now more accurate than the people answering the phone, at some point it would actually be foolish to have people answer the phone if a system could answer better.

David Yakobovitch

Everyone wants to be an AI-first company.

Bret Greenstein

And it sounds great. It sounds efficient and powerful and smart, and I really get why people are nervous about it as well, because there have been a lot of scary stories in the news. But when you get past the stories, the real issue is that people have extrapolated to imagine: what if AI were me? What if AI replaced all of me? And that terrifies people. In reality, it's really good at some things, but it's also not good at everything.

David Yakobovitch

I don't think it is going to do everything yet, but there's this new show on Netflix I was watching in the past couple of weeks. It's called Better Than Us. It was recorded in Moscow, and they're showing a future where there's AR and VR everywhere and robots performing all the tasks we talk about, like Sophia and all the other bots. But that's still kind of a doomsday scenario, and we're nowhere near that.

Bret Greenstein

We aren't in a perfect world. If AI could do everything we could do, we could sit on the beach, drink piña coladas, and let the machines do the work. But in practice, it's usually a very specialized skill set that is fairly narrow, and ultimately we're still responsible for business and commerce and government and family. We can't delegate that to a system, but we can let certain things be done by systems; we've been doing that for a hundred years, and this is no different. The key is understanding what it's good at and what it's not, and making sure we're leveraging it for what it's good at.

David Yakobovitch

I would love a bartender that makes me my Tanqueray and tonic, or one that makes me my Starbucks espresso. We're doing ourselves a disservice by not having the robots do that, and that's some of the premise, the thesis, that you and I were chatting about before the recording: when it is unethical to not use AI.

Bret Greenstein

You can look at a lot of different cases in this, but there are certain things that AI already does better. I'm not saying that people shouldn't have radiologists, but image recognition is now so good that there are certain types of things in x-rays that can be read better by an AI than by a person.

Now, you'd want to complement that, because you want the judgment of a doctor; a human being is accountable to other human beings in a way that AI is not. But it would be irresponsible to do certain types of diagnoses and not ask the AI: did you spot anything? Because I didn't. So we're getting to this tipping point where it's almost irresponsible to not ask whether the AI saw something we couldn't see in addition to what we saw, or whether it disagrees with what we saw. It's still the doctor's judgment in the end what the diagnosis is and what you should do, but how could you not check when it's available to you as a tool?

David Yakobovitch

I definitely think that's the direction we're moving. It's, if you will, human in the loop, and we've seen a few ventures in the Bay Area moving in this direction. It used to be that humans labeled data completely from scratch, but now we're at a tipping point where these AI systems will label the data and humans will check the data for accuracy and confirm the results. It sounds quite similar to the radiology example that you just presented.
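
To make that workflow concrete, here is a minimal sketch of confidence-based routing, where the model labels everything and only low-confidence items go to a human reviewer. The model.predict interface and the 0.9 threshold are hypothetical assumptions, not any particular vendor's API.

```python
# A minimal human-in-the-loop labeling sketch. `model` is any object
# with a hypothetical predict(item) -> (label, confidence) method.
def route_labels(model, items, threshold=0.9):
    accepted, needs_review = [], []
    for item in items:
        label, confidence = model.predict(item)  # assumed interface
        if confidence >= threshold:
            accepted.append((item, label))       # trust the machine label
        else:
            needs_review.append((item, label))   # queue for a human check
    return accepted, needs_review
```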

Bret Greenstein

You're starting to look at managing the exceptions instead of managing the bulk of the work, and recognizing where the strengths are. Take a look at self-driving cars: a self-driving car generally can hold a lane better than any person. It stays dead center in the middle of that lane as long as you need it to, so on lane keeping I'd say it's better. But in exception situations, like when you're near construction or near schools or other places, why would you delegate that responsibility? The risk is high. So on highways, in safe environments, keeping a lane is probably better done by the AI. You could also use cameras in self-driving cars that can look at infrared and other frequencies that we can't see.

Wouldn't you want to at least have a warning ahead of time that there might be a person? Even today, self-driving cars can already project when you might hit something better than we can. The car will calculate, based on your rate of speed and the speed of whatever is in front of you, whether you're accelerating or decelerating toward it and whether you're likely to have a collision. We can do that in our minds, but wouldn't it be great to have an assistant that is also doing it to make sure we don't miss something?
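
To make that calculation concrete, here is a minimal sketch of a time-to-collision check of the kind described above. The function, parameter names, and the 2.5-second warning threshold are illustrative assumptions, not any vendor's actual system.

```python
# Time-to-collision (TTC) from the gap to the car ahead and the
# closing speed; all numbers here are illustrative.
def time_to_collision(gap_m, own_speed_mps, lead_speed_mps):
    """Return seconds until collision, or None if the gap is opening."""
    closing_speed = own_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None  # not closing on the vehicle ahead
    return gap_m / closing_speed

# 30 m behind a car while closing at 5 m/s -> 6.0 s to react.
ttc = time_to_collision(gap_m=30.0, own_speed_mps=25.0, lead_speed_mps=20.0)
print(f"time to collision: {ttc:.1f} s")  # warn if below ~2.5 s
```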

David Yakobovitch

On the last few episodes of HumAIn, I've been chatting with some of our presenters about self-driving and automation, and how we're moving from level zero to level five systems, where zero means no automation, one means driver assistance, two means partial automation, three means conditional automation, four means high automation, and five means full automation. My question to you is in two parts. First, where do you think we are in that progression with self-driving cars? And two, how applicable is this framework of five levels of automation to other industries, like radiology and data labeling, and what you're seeing emerging? So let's start with levels zero to five. Where do you think we are?
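
For reference, the five levels David lists can be captured in a simple sketch like the following; the enum and its comments are just a restatement of the list above, not an official SAE artifact.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The five driving-automation levels as listed above."""
    NO_AUTOMATION = 0           # human does everything
    DRIVER_ASSISTANCE = 1       # e.g. cruise control OR lane keeping
    PARTIAL_AUTOMATION = 2      # combined assistance, driver monitors
    CONDITIONAL_AUTOMATION = 3  # system drives, driver must take over
    HIGH_AUTOMATION = 4         # no takeover needed in a defined domain
    FULL_AUTOMATION = 5         # no human driver needed anywhere
```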

Bret Greenstein

So obviously best-in-class for cars gets you into level three: you are able to go for periods of time with only light attention paid to the vehicle, and it will go for miles and miles without incident. But would you go to sleep, as some people have, and let your car take you 30 miles to work? That would be insane right now. Even with the best in class, it's just not a safe thing to do, and obviously it's not legal either. From a safety point of view, there are still lots of things that come up.

In many respects, the world is not really designed for self-driving cars, as much as self-driving cars are not designed to fully take advantage of the world. We built all of our traffic systems under the assumption that people drive cars and people cross streets, with lanes and bike lanes. We sort of know how things work because we designed them for people. So as AI gets better, you'll also have the roads more optimized for it as well.

Lane lines will always be clear, and even traffic signals might have additional information in them to help cars better determine when the light is going to change. For example, today you just see red, yellow, and green, but your car could look at that light and get information that says that in 12 seconds the other light is going to turn red.

David Yakobovitch

It's amazing to think about smart cities and the future of how cities are designed with self-driving cars. I recently saw a research report that ranked the top cities in the world for smart cameras and cameras tracking all that data, and New York City didn't even come in the top 30. In fact, New York City had about one hundredth the number of cameras of cities like Beijing and Shanghai in China.

Bret Greenstein

We've talked about some of the differences in culture that allow for increased camera usage in other places, and now in the U.S. there's a backlash in several cities around facial recognition and other things. But as regulations help protect our privacy, cameras can still help drive enormous efficiency and safety in cities, and I guess it really comes down to what's an acceptable use of images, not whether they should be used at all.

What's acceptable in 2019 is very new. Because it's foreign, it's strange, it's a change to a lot of people, it's natural that a lot of people fear using AI and aren't sure what is or will not be ethical. But that's going to change very quickly in the next couple of years.

David Yakobovitch

Now, you brought up New York City. I saw a video just this past week from 1911; AI was used to automatically colorize the film, but effectively it was the city streets of New York City in 1911. It was amazing to watch, because you've got cars and horses and people interacting, and you can see how much the city infrastructure has changed to optimize for cars, where before they all had to coexist in the same physical space. Now we've got obviously better traffic signals, like warning lights and lanes and everything, to make it better. It's still chaotic, but now we can travel much faster and get more people through the same city, where back then you could barely go two miles an hour and not have problems.

Bret Greenstein

Especially with the horses. But kidding aside, beyond city design and what that looks like, what I thought was fascinating is that you mentioned colorizing the video, and the question of when you use an AI versus when you use a human. For many years, all the major animation studios, like Pixar and Disney, have had full-time colorists.

In fact, today, when you see new videos popping up on Kickstarter all the time, they're hiring colorists to make sure everything is at the right shade. For me, an interesting case actually came up a few weeks ago. I hadn't seen my grandparents for a few months; we met in New Jersey, and they shared with me one of their photos from 1935. It was a wedding at one of these progress clubs right after the Great Depression in New Jersey, and the photo was just completely destroyed.

It had tears and rips and parts of faces missing, and in that moment I asked: is there an AI today that can really do a good job on that? I started thinking of deepfakes and all the issues with hair and eyes and noses being misplaced, and I said, probably not. So I found one of the best photo restoration artists in New York City, who manually and painstakingly went through the photo and completely restored it, and it's so beautiful. The story you just shared reminded me of where we have humans doing certain tasks and, again, of when it becomes unethical not to use AI. Does coloring a video or photo really require a human once a system is trained on so many tasks?

David Yakobovitch

You picked a level-five case, with tears and aging and all this stuff that requires a tremendous human touch to know how a picture would look natural when fully corrected and restored with lots of missing information. But for the bulk of things, if you were taking a whole bunch of decent-quality black and white photos, colorizing them with a computer would be a really good first step to get close enough, and then you might do the refinement with a person. That's where you look at where systems can augment what we do and help us get our jobs done more productively. One of the other cases we think about, for where it's unethical to not use AI, is where the cognitive load on people is overwhelming.

The amount of information and work is so high that it actually induces strain, it induces errors, and it induces stress on people. So if you had to do 5,000 photos, you'd be pretty stressed trying to get them done in a week. But if you had an AI do all the photos, and then you touched up and tweaked and fixed the ones that needed it, you'd get more done with less stress. And all of our jobs are filled with those kinds of tasks.

Bret Greenstein

A case that's going to happen to most people in their lives is getting married and having a wedding. Think about how it works today: you hire a photographer, and the photographer manually goes through every single photo, spending days or weeks retouching everything, and you pay a premium for that; sometimes you could pay hundreds or thousands of dollars for it. But we're moving away from that, and where are we moving?

If we look further at retouching with software like Photoshop and Lightroom, there are so many extensions and packages that claim to be AI-ready or AI-enabled, which are really presets performing a repetitive task over and over. As you just mentioned, Bret, you no longer need the human to do that, but a human could double-check: the shading on this cloud against the sky is a little off, or no, that dress is not fully white-balance corrected, so let me double-check that.

David Yakobovitch

The selfie feature on my phone automatically has a filter that smooths your face when you're taking a selfie. It does it live: by the time you click, it's already done, rather than you going through an Instagram filter or something else afterwards. I look at that and, for one, I maybe look younger, so I'm happy. But isn't it amazing that you can image-process like that while you're snapping photos? It's correcting them on the fly: smoothing, adjusting color, adjusting focus.
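
For the curious, the kind of live smoothing described here can be approximated with an edge-preserving blur. Below is a minimal sketch using OpenCV's bilateral filter; the file names and parameter values are illustrative assumptions, not what any phone actually runs.

```python
import cv2

# Edge-preserving smoothing: softens skin texture while keeping
# edges (eyes, hair, jawline) sharp. Parameters are illustrative.
frame = cv2.imread("selfie.jpg")  # hypothetical input file
smoothed = cv2.bilateralFilter(
    frame,
    d=9,            # pixel neighborhood diameter
    sigmaColor=75,  # how dissimilar colors can be and still blend
    sigmaSpace=75,  # how far apart pixels can be and still blend
)
cv2.imwrite("selfie_smoothed.jpg", smoothed)
```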

Bret Greenstein

Like with Facebook and Instagram, that's pretty cool when you're video-chatting with a friend or doing something social. But there are also the bad actors, if you will, using these corrections. You may have heard there was a case in the past couple of months in China involving one of these celebrity vloggers, who used one of these live video corrections to make herself look like a 23-year-old model.

She had something like a hundred thousand fans and was making around $15,000 a month, mind you, in China. Apparently, during one of her vlogs the video correction failed, and viewers saw her real identity: she was a middle-aged, ordinary-looking person. Almost overnight, all her fans left. So it begs the question: when is this a good use case, when is someone trying to hack the system, and should there maybe be regulation put in place there?

David Yakobovitch

But in part, we're all adjusting to a world that effectively has live correction. We didn't have that before, so we expected people to be real. Then we watched television and noticed how much makeup and other work they do to make themselves look good. Then HD television came out and we began to see a lot more imperfections on people, but that just raised the bar for better editing and better video features that act like plastic surgery, all of it to be basically 4K-ready when you're on TV.

In a world where you can do live overlays and corrections for people on video, we're going to start to expect it and get used to it, knowing that people don't look the same. There's a whole website devoted to what famous people look like without their makeup, and they look just like the rest of us, but in makeup they look great. We don't complain that they look good on film, because we know they're made up, and we expect good lighting and all that stuff, and we pay for it to see them in a great movie. That enhancement is becoming more normal, and we're getting used to it.

We'll adjust. Now, you also mentioned social media. One of the other things that strikes me about the ethical use of AI is that social media is filled with horrible things in addition to great things, a lot of horrible things, and companies spend a lot of energy filtering out the horrible, making sure that all of us aren't going to be exposed to things that are untrue, that are hateful, that are violent, that are against the law in some way. Up until now, that has required a lot of people looking at a lot of really bad images, bad videos, and bad text and scrubbing it out, and obviously not all of it gets scrubbed out, because we see some of it.

But it's a really difficult task for human beings, and looking at it and being exposed to it is enormously stressful. So where you can use AI to pre-filter out the really awful stuff on the content moderation side, so people don't have to look at it, that's just an ethical thing to do, because it's really unfair to make people look at that stuff. It's necessary, but it's awful.

Bret Greenstein

For myself as a consumer, where I see content moderation the most is in places like reviews on sites like Amazon and Yelp and TripAdvisor. This summer I actually had a personal case with content moderation that negatively impacted me, from a consumer brand I wasn't expecting it from. The story is: I have a dog, as many of us have pets, and I love my dog just like a baby, just like a kid. My dog got sick because I had recently bought a new dog food and the dog wasn't reacting well to it.

I tested several times just to make sure whether it was or wasn't the food, and it was the food. So I started investigating, and reading into it, apparently Amazon does not temperature-control their warehouses, unlike Chewy and these other competitors. I read into it and said: this is terrible! So I requested a refund and got the refund; all of that worked out fine. But then I wanted to protect other consumers from having a sick dog from this dog food, so I wrote a review on the Amazon platform for this dog food: this dog food got my dog sick.

I had requested the refund, Amazon provided the refund, and I recommended that they provide more temperature control, because I don't want other dogs to get sick. Then, when I tried to post the review, Amazon said: we're checking with our review curation team and we'll let you know when your review gets posted. Two days later I got an email that said: we could not process your review because it violated one of our policies by providing factually misleading information. And I thought that was so fascinating. This was in August 2019, and it raises the questions: was that done by an AI? Was that done by a human? Or was there something there that wasn't serving consumers' best interests, or was negatively impacting people?

David Yakobovitch

Well, you're touching on a lot of things in that same example. First, obviously, lots of positive reviews get through that are not factually based and not even necessarily generated by people, which is why ratings are generally very high on review sites. So there's a challenge in making sure the positives are not overly positive, while when you put in a negative you have a high bar to meet to prove that it's factual. That's a challenge. But you should have a right to know whether a person or an AI made that determination, and how it was determined, so you could respond.

We talked earlier about responsible use of data, like from cameras. The other side of it is that when AI is used, you should know that it was used, and you should have some ability for discussion or escalation if you disagree with an outcome, because once you resolve it, it will enhance the AI for everybody else. And you should know whether the decision was generated by an algorithm or by a person.

Bret Greenstein

Absolutely, because in this situation there was no recourse: there was no email to respond to, no chat, nothing. One platform whose content moderation I love is Facebook, because whenever I see something that I may not find appropriate, they give me the options to go and say: maybe this is not great, or maybe this should be approved. I feel empowered there, and I didn't feel empowered in this experience. So I kind of hope that Amazon adopts a Facebook-type platform for reviewing content.

David Yakobovitch

I look at all of these cases of customer service and human interaction where algorithms are used without making sure that people know algorithms are being used. Human beings are very biased. The person, or the thing, that reviewed your review: it could be the personal bias of a human being, or it could be the bias of a system, or the system could be correct but you just need to know what you should change to fix it. As these customer service systems become better, they will also tell you why something happened, with a little more transparency about what you can do about it, because if it were an algorithm, it could have told you: it was because of this and this and this, which are then correctable.

If it's a person, they may or may not be able to tell you that; they may just make a judgment call. In some ways, an AI-based system, which builds its decisions on confidence levels and percentages and probabilities, can also tell you why it made a decision. Not every time, but it could certainly give you the factors that mattered most to its decision, where a human being cannot always do that. So there are times when you want to use it, if nothing else, just to give you a chance to know why something happened.

Bret Greenstein

And what's so interesting is that in this use case, I didn't know if it was an AI or a human, and I didn't even have the option to appeal or submit a clarification or an adjustment to the review. Typically when I buy products, there are Consumer Reports and certain consumer protection bureaus that help empower you, and that's important here too. As we're moving into a society that is driven by AI with humans in the loop, this question about when it is unethical not to use AI comes down a lot to the cost of labor for certain tasks, but also to the accuracy of humans: can they perform better without the AI, or even better with the AI?

David Yakobovitch

So take a look at hiring. For a while, a lot of people talked about how hiring might introduce bias by using AI for filtering resumes and other things, and there is a risk of that. But human beings are inherently biased, all of us, and so a human being looking through a stack of resumes will be just as biased or worse, because you can't actually calculate it, you can't measure it. We can measure the bias of an AI-based system against expected or desired distributions. Let's say you have a hiring AI that's going through all the resumes coming in, and it produces a result where 60% of the ones that get accepted are male. Well, there are a lot of factors you can calculate to determine whether that was actually a biased result or not: how many came in, what resume distribution came in, what skill sets exist in your field.

All kinds of different quantitative things. But with a person looking at a resume and just kind of tossing one or grabbing one, you have no idea what bias influenced that person. So the combination of people owning the HR process, with whatever policies and governance, and an AI that can actually tell you a little bit more about why it made the decisions it made, is better than purely doing it with people or purely doing it with AI.
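
As a concrete illustration of the kind of check described here, this minimal sketch asks whether a 60% male acceptance rate is statistically surprising given the applicant pool; the pool share and the counts are hypothetical numbers.

```python
from scipy.stats import binomtest

# Hypothetical numbers: 55% of incoming resumes were from male
# applicants; the AI accepted 50 candidates, 30 of them (60%) male.
applicant_male_share = 0.55
accepted_total = 50
accepted_male = 30

# Two-sided test: is 30 of 50 consistent with drawing from a 55% pool?
result = binomtest(accepted_male, accepted_total, applicant_male_share)
print(f"p-value = {result.pvalue:.3f}")
# A large p-value means the 60% outcome is not, by itself, evidence
# of bias relative to the applicant distribution.
```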

Bret Greenstein

I like that: have your hiring manager or recruiter, and when the resumes come to them, the AI says: note this on the resume, or this is the score we gave, for these reasons, and then the recruiter can look it over. I've also had the opportunity to do a lot of hiring and interviewing of candidates over the years. When I was just getting started in that, I remember shadowing a hiring manager at a Fortune 500 company. For one of these positions, an engineering role, they got 300 resumes, and I was so fascinated by how the hiring manager was going to go through them. They had a couple of folders.

They had the resumes, and they opened each one and looked at it for, I kid you not, eight seconds, and in that time apparently knew whether or not it was good enough to advance to the next round. I was mind-blown, and I said, there has to be a better way. It sounds like now, with these new AI recruiting tools that are beginning to emerge, perhaps we're going to move to a process that better serves humans but also frees up the hiring manager to work on more challenging tasks, such as executive placement, or better ensuring that they can build an organization where the culture is uniform and all people and bots are having a good experience.

David Yakobovitch

And setting policies to know what sort of criteria to look for in candidates. I don't know about you, but when I was first coming into the workforce, I was told to make sure your resume is a slightly different color or a little bit larger, so that when people shuffle the papers, yours stands out. Now students are actually reverse-engineering their resumes to have the right buzzwords in them so that an algorithm will pick them. They have to think about what's being looked for, and then work backwards to make sure they're covering all the words, terms, or formatting that will help get them to the top of an algorithmic decision. That's a whole different world, and it's a really interesting result.

Bret Greenstein

And still, believe it or not, even with all the algorithm tuning that college students and those in the workforce are applying today, the best way to get into a company you want to work for is having an informational conversation with someone in that company. After that great conversation, you ask them for a referral, and if they say yes, that's probably going to beat the AI. It's probably going to put you at the top of the pile. What would you say?

David Yakobovitch

It is, because relationships matter, and that's the other part of it. People worry about AI and incorrectly think that once we give AI a task, we've delegated it and it now runs on its own. But in reality, people are still ultimately responsible for every system in a business. Take a company like Cognizant, with 280,000 people: we are responsible for the behavior of everybody. At the same time, if we also have a bunch of AI doing work for our company, we're still responsible for how it behaves too.

I heard somebody recently say that they're a CIO, but they're also effectively the HR leader for all the automation systems; they're like a technology HR leader, because they're responsible for the behavior and the outputs and the performance of all the AI systems in their company. You can't really just delegate this; you are still responsible for the policies, quality, bias, and all the other things that go into making a system work well.

Bret Greenstein

And that's the challenge. With 280,000 people, you could put all these augmented tasks and bots in place to help you run decisions, but you're responsible for the AI, which means you need reports and check-ins and committees to make sure everything you are doing is ethical. We've seen different products go live and then face a backlash because the committees weren't set up or the responsible processes weren't set up. Going back to what you mentioned earlier on facial recognition, we're starting to see that there's a need for regulation; it seems that that is the direction we're moving, at least in the U.S. So what's the best partnership we can set up so we're moving on a good path, one that's going to advance AI, not limit it, but also help with that understanding and explainability?

David Yakobovitch

There are a couple of best practices. We run an ethical AI council at Cognizant, which is a subset of our corporate responsibility office, and it specifically focuses on making sure that for the projects we do, we've considered the ethical implications of doing them, as well as the ethical implications of not doing them. Once we're aware a technology has a capability, we have to look and say: should we be using it here? Should we not be using it over there?

We actually have a cross-functional team to do that, and many companies I'm working with are establishing similar structures. At a company level, that's good, but at a government and policy level, we have to set a sort of bill of rights for the use of AI that helps establish what is a reasonable use of data, what is needed to deliver without bias, what the ethical ground rules are, and when it is unethical to not use it.

I'm not a lawyer and I don't know exactly how the policy works, but if you have the ability to save someone's life and you choose not to, that's unethical. I don't know if it's legal, but it is unethical to walk by someone whose life you could save and not do something.

The same applies when you're involved in AI, for example in business decisions. Most business leaders, most human beings, can only balance about three parameters when they're optimizing. Anything more than three, four, five, six parameters, and you're pretty much guessing what the best decision to make is. But an AI can balance hundreds of optimization points and parameters. So if you're making important decisions for your company and for your shareholders and for your employees and your customers, and you don't consult an AI that could help you balance better, isn't that unethical? Isn't it irresponsible to not consider all the factors?
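
As a toy illustration of balancing more parameters than a person can hold in mind, here is a minimal sketch of weighted multi-criteria scoring; the options, criteria, and weights are all hypothetical, and a real decision-support system would be far more sophisticated.

```python
import numpy as np

# Hypothetical decision: 4 options scored on 6 criteria (cost, risk,
# revenue, customer impact, employee impact, time to deliver), each
# normalized to 0..1 where higher is better.
scores = np.array([
    [0.8, 0.6, 0.4, 0.9, 0.5, 0.7],  # option A
    [0.5, 0.9, 0.7, 0.4, 0.8, 0.6],  # option B
    [0.9, 0.3, 0.8, 0.6, 0.4, 0.5],  # option C
    [0.6, 0.7, 0.5, 0.7, 0.7, 0.8],  # option D
])
weights = np.array([0.25, 0.20, 0.20, 0.15, 0.10, 0.10])  # sum to 1

totals = scores @ weights  # weighted score per option
best = int(np.argmax(totals))
print(f"Best option: {'ABCD'[best]} with score {totals[best]:.2f}")
```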

Bret Greenstein

With all those factors, we're especially thinking about that scenario where you're in New York City, you're biking along the streets, and you see a bicyclist get hit by a car. Is it your obligation, your duty as a citizen, to help the bicyclist and call 911? Where does that line fall as far as moral hazard is concerned? It's very subjective, and it goes back to the classic problem for self-driving cars: if you're driving and an old lady crosses the street, do you hit the old lady and injure her, or do you swerve and hit the mom with her kid?

Assuming you have to hit one or the other, it's quite complicated, and with moral hazard there's no clear decision yet. But the truth is: can you use the technology to help us make the better decision? Because as you rightfully said, Bret, if you give me more than three of the six inputs, suddenly I'm saying these two inputs are very important, like the speed the person is going on the bicycle or the direction the car is driving, but beyond that it's just guesswork. So we might as well get as much data as possible so we can responsibly and ethically make better decisions.

David Yakobovitch

You're on really what is the big point of this entire conversation, which is that as a person driving in that situation, I hate to be morbid, but if you're driving and something happens, you have to choose who you're going to hit. It's a horrible decision, and you can't possibly optimize it: you can't factor in life expectancy, insurance costs, liability, any of these things. You just can't figure it out, even the risk of death.

One choice might be riskier than the other; you can't figure out any of that stuff. You have to pretty much go with your gut while you're driving and hope everything works out. But we always extrapolate and say: well, an AI system has to make that choice, when we actually can't make it well ourselves. If I were driving, I would certainly want to know as much information as an AI can calculate, to tell me what is going to have the least impact by whatever factors mattered to me: number of lives, life expectancy, whatever factors would matter to me.

Systems should know that. You should be able to set it and define it in some way, and at least be informed. In that moment, when you can't make a decision fast enough, at least having an AI tell you what's going on would be better than having nothing and just guessing.

Bret Greenstein

The whole industry has evolved so quickly in the last 20 years. Before data science and AI went through their revival as industries, everyone was focused on the actuarial sciences, and actuarial science is always about loss experience: can you monitor and see how changes in different decisions cause financial impact? But the big question that you're asking here, Bret, is: what's important to me? Is it financial? Is it life? Is it something else? And when we're put in those uncomfortable situations where either choice is unfortunately going to cause a negative impact, how do you best mitigate against that?

It could be building a smarter AI system, or it could be empowering you with those insights to make better choices. That is the heart of what we've been connecting on today: what is ethical and what is unethical about when you should or should not use AI? And that's going to be the talking point at industry conferences and in products, especially as we're moving into CES 2020 and seeing what's emerging.

David Yakobovitch

I'm glad you brought it up, and I'm glad we were able to have a conversation about it. For all of us, we should recognize that the advancements in AI are growing very fast. It was only a couple of years ago that we were happy a camera could tell a dog from a cat; now we're talking about diagnosing illness in x-rays and real-time monitoring of traffic and road safety conditions. It's an amazing set of leaps, and as the leaps come, the times when we should consider using AI instead of not using AI grow too; it would be my assertion that it's unethical to not consider that, more and more every year. So all of us have to be watching for the advancements in AI, how they could be used, and what would happen if we don't use them.

Bret Greenstein

Absolutely. Speaking of the future of the AI space, one of my favorite shows I come back to, whether for its sensationalism or just because it's fun overall, is Grey's Anatomy, and in one of the recent season's episodes they have actually started to show AI being used for diagnosing tumors, and even AR and VR devices, like the Microsoft HoloLens, being used to perform operations.

I'm really excited to see where we're going to continue to move in the industry with ethics. This has been a fun conversation, as always. I can't wait for our next one. Thank you very much.

David Yakobovitch

I enjoyed it too, Bret. Thanks for being on HumAIn. As always, have a great day.

Bret Greenstein

Thank you, David.

David Yakobovitch

Hey humans. Thanks for listening to this episode of HumAIn. My name is David Yakobovitch, and if you like HumAIn, remember to click subscribe on Apple Podcasts, Spotify, or Luminary. Thanks for tuning in, and join us for our next episode. New releases every Tuesday.