You are listening to the HumAIn Podcast. HumAIn is your first look at the startups and industry titans that are leading and disrupting artificial intelligence, data science, future of work and developer education. I am your host, David Yakobovitch and you’re listening to HumAIn. If you like this episode, remember to subscribe and leave a review. Now, on to the show.

David Yakobovitch

Welcome back listeners to the HumAIn Podcast. Today, we are going to talk all about ethics. Ethics is a topic the entire world has been up in arms over, everything from COVID to protests to data science. And today, we have an ethics expert: Ben Byford. Ben is the host of the Machine Ethics Podcast. He’s an AI teacher, ethicist, and games developer joining us from the United Kingdom. Ben, thanks for joining us on the HumAIn Podcast.

Ben Byford

Thanks so much for having me.

David Yakobovitch

Well, Ben, 2020 has been a wild ride so far. Between COVID, the protests, and everything happening with data science and ethics, what are some of the early signals you see emerging that are of interest today?

Ben Byford

It’s really good that we’re talking about ethics. In my opinion, it has really flourished in this industry, the technology industry, and in the news and general parlance. People are talking more about how we share data, and that’s really coming to the fore, and we’re talking about different ways of tracking individuals within the COVID situation.

So, that’s the big question and the moral quandary we’re battling with: how much information do we give to organizations and governments about our movements? That’s always been a question, but in the face of a pandemic we’re now having to think differently about how much we can give away and what can be done with that data.

But also, last year we had this kind of introduction to face tracking. There was a lot of new information about how face tracking has been used and how it has evolved, technology-wise, into this kind of force in public life. Citizens can just be tracked out in the environment, whether by police officers or by other means, with their images matched against mass databases, tracked, and things like that.

So it’s really good that we’re talking more about ethics in this digital world we’ve built, because it sounds silly, but we live in this global situation and we can’t just move to Mexico. There’s very little we can do to get out of the situation we’re building. So we’d better build it the right way, or the way we want to live.

David Yakobovitch

It’s so interesting, Ben, that you’re sharing all these unique cases we’ve seen arise in 2020. Of course, there’s contact tracing for COVID: we’ve seen Singapore, China, and the European Union come out with different systems, and even the US with Google and Apple. Contact tracing has been on everyone’s mind, because no one wants to die from COVID, but do people want to give up their data to be healthy? And where will the data be used? So, what’s your take on giving up our data privacy or data rights for this shared contact tracing?

Ben Byford

It’s a really good question. My instinct is that we already do that; we are already partially on the way there. If you take that stance, then what you’re really concerned with is whom you’re giving that data to. Can they be transparent about how they’re using it? Can they keep it secure and delete it when appropriate? And it’s very hard to actually believe or have trust in organizations when they say these things. So, we could give Google information about who we see and where we go, and they can correlate that with GPS data, as well as other things like wifi data and signals, anything they can use to work out who is within the same building, for example.

All these sorts of things come together. Organizations can say they’re doing certain things with our data, but it comes down to signaling that information, and to the trust within society that what they say will actually be carried out, and carried out correctly. Traditionally, we’ve lost a lot of trust in these organizations, because again and again they do something strange, or, let’s not say evil, but things which are maybe not in our interest and are in their own interests. And that has to do with how organizations are run for shareholders and boards.

And with how people operate in the world. So I think it’s a trust issue with technology, on both sides of the coin, and it’s a very difficult one to square. I don’t necessarily trust these big organizations. I’d prefer to trust my own government, but again, that’s difficult, because obviously they will farm this information out to some third party, and then I have to trust the third party too. So, I think contact tracing is a good thing to be doing, but the trust issue is a big one.

David Yakobovitch

And when we look at the trust issue, we’re seeing Apple and Google collaborating on contact tracing in the United States, along with a lot of third-party providers where trust has been destroyed over the years. With Banjo and Clearview AI, some of these face tracking apps you’ve talked about previously on the Machine Ethics Podcast and in your other work, a lot of our trust has been eroded. And now we’re seeing Google say, “We are here, please trust us. We want to be the caretakers of your health.”

But then, of course, the news keeps coming through its cycle. It was recently revealed that, through Google Analytics, the Incognito privacy mode in the browser has not been as incognito as people thought. In fact, millions of people who had trusted Google with their privacy learned that although the data cookies weren’t stored on their local machines, the data was still going back to Google’s servers.

And I know this is probably something you have an opinion on, because you’re based in the United Kingdom, where GDPR was passed. That’s been a whole process from 2016 to today, and the US is still grappling with what our version will be. But this incognito mode, I mean, this is probably the biggest GDPR red flag I can imagine.

Ben Byford

Whether Americans have similar legislation in place is, in my opinion, irrelevant, because the internet is cross-boundary, cross-continental. If you deal with anyone outside your own jurisdiction, your own country, then you fall under someone else’s legislation. And it just so happens that GDPR is one of the most robust data regulations we have at the moment. So, unless you are a small shop working in a small locality, you’re going to be affected. That’s just to get it out of the way.

But the technologies we have are very open, and open to exploitation, especially by the providers. I mean, there was a time when Google didn’t have a web browser; funny things have happened since then. And companies like Zoom have done some pretty interesting and awful things, technology-wise, that make our data available to anyone who wants to see it.

And now Google Chrome is effectively the default browser, for example. There are other browsers, like Firefox, which I use, that try their very best to look after your data, but it is a problem. And for me, as a web developer, I can just implement some code to snoop on the people who come to my website and use my tools.

That is a power I have, and really it comes down to making that power transparent. So it’s partly an educational piece: can we educate citizens about the power of these technologies and their use, so that they know enough? But also, can we educate the people building them to do the right thing? If you’re sitting down and making something, you could do things to elicit behavior change. You just could. Facebook and Instagram and companies like that do this sort of thing all the time.
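To make concrete how low the bar is for the kind of snooping Ben describes, here is a minimal sketch of a first-party visit tracker. This is an illustration only, not anything discussed in the conversation: the endpoint URL and all field names are hypothetical, and the point is simply how little code a site owner needs.

```typescript
// Illustrative sketch only: the endpoint and field names below are
// hypothetical, not any real analytics product.

interface VisitPayload {
  page: string;      // path the visitor viewed
  referrer: string;  // where they came from
  userAgent: string; // coarse browser/OS fingerprint
  timestamp: number; // when the visit happened (ms since epoch)
}

// Pure helper: bundle up what a page script can read without any prompt.
function buildVisitPayload(
  page: string,
  referrer: string,
  userAgent: string,
  now: number,
): VisitPayload {
  return { page, referrer, userAgent, timestamp: now };
}

// In a browser this would fire on every page load, e.g.:
//   fetch("https://example.com/collect", {          // hypothetical endpoint
//     method: "POST",
//     body: JSON.stringify(buildVisitPayload(
//       location.pathname, document.referrer,
//       navigator.userAgent, Date.now())),
//   });

const demo = buildVisitPayload(
  "/pricing", "https://search.example/", "Mozilla/5.0", 1_700_000_000_000,
);
console.log(JSON.stringify(demo));
```

The visitor sees nothing; every page view silently reports where they are and where they came from, which is exactly the transparency problem being described.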

And they also do research on us all the time. Should we be doing these things? That is the question, along with how we know what the right thing to do is, and that’s where all these ethical questions come in. So, I have this real belief that we should spend more time in the educational system reflecting on some of these problems, and maybe even reflecting on them in context, so we can talk about them, especially in secondary school in the UK; in the States, that’s roughly middle and high school.

David Yakobovitch

Yes. Our primary school would be elementary, and then we go to middle and high school. That’s right.

Ben Byford

And these kids are growing up in an augmented-reality existence now, where they can be bullied from afar, they can bully other people from afar, and they can be poked and prodded in different ways. Constantly. All the time. I don’t really have an answer to that specific problem, but I think we should be teaching people to reflect on the situation within our educational institutions, so that we are priming the people who are going to be making this stuff in the future to make design decisions and technical decisions with full respect for other people, and for the environment. I’m not sure we’ve seen much of that traditionally. There are lots of issues we could talk about, but we just need to be more mindful about our impact on other people in the world.

David Yakobovitch

I think one of the big challenges we saw in 2020 has been the eruption of support for George Floyd, and how these protests have driven social change around accessible and equitable outcomes. And this isn’t just about education, but society as a whole. Of course, it’s been inside the United States bubble of what social change looks like, but it is part of the conversation we’ve seen. There have been conversations with leaders from Teach for America and Girls Who Code, who have come out saying that education is not necessarily a pipeline problem, but that it is all about quotas.

And it’s all about understanding that there is equity that is not being realized. And I wanted to share this story with you right now, about what we’re talking about with Black Lives Matter, because I think it is systemic in society. Ben, you’ve talked about Zoom and its software, and I find in their case the same kind of systemic inequity we’re seeing in society. Zoom announced on May 7th that they had acquired Keybase, an end-to-end encryption startup, to provide security for their users.

And then, in the early stages of summer 2020, they came out and said they were going to bring end-to-end encryption into their product, but only for paid users. Which means, for the users from non-traditional backgrounds and low socioeconomic status who cannot have these accessible, equitable outcomes: “Sorry, we’re not going to secure your data. We’re still going to sell the data. We’re going to put it on Google. We’re going to track your data and monitor that.”

I’m not sure how that’s priming society for these smart design decisions. Where are we going wrong? Can you share some insight into what you’ve been seeing, from Black Lives Matter to Zoom to some of these other issues? Are they unethical by design?

Ben Byford

Yes and no. The Zoom example is a really good one, because Zoom is doing its utmost to make me hate them at the moment. I think they’re the epitome of where we’re going wrong with how we build and market technology. I don’t think they go out there and say, “If you don’t pay, we’re going to sell your data and track your data.” They’re not going to say that. They’re going to say, “For those who pay, we have a higher-level feature set. And part of those features is that, if you’re worried about encryption and your data privacy, then you have to pay us.”

But really, we should all be worried about these things. We should all be worried about security as citizens, and about our data privacy as citizens, because we don’t necessarily want to tell everyone what we’re talking about, and that leads into the discrimination issue. You can be discriminated against, in different countries, for all sorts of different things. And you might not want to tell your neighbor or your government certain things about your person, because those things aren’t deemed normal, acceptable, or legal in that country.

So there are many reasons why you would want to keep your privacy and your security intact. And if you’re using a service like this, it’s almost like you’re using a utility, and the utility doesn’t respect the user. Some things on the internet, I just believe, should be taken up by non-profit organizations or quangos or governments. Email is a distributed technology, and at the moment it is quite rapidly becoming non-distributed. That has a wide and worrying effect on our autonomy as citizens, as human beings. Most of our email goes through Google or Microsoft now. So that’s a problem: what was once a decentralized thing is becoming centralized.

Zoom is another way we communicate, and it’s a centralized platform, and they do not have any respect for you. They seem to have a dubious way of implementing what is, in my mind, a utility. And there’s no reason why organizations can’t do this sort of thing in a non-profit way and enable us to communicate as a community, in the way that we want to, to get things done, right? We are recording this podcast online, using screen sharing and voice, things like this.

There are hundreds of these tools. So, why don’t we just make one really good one that you don’t have to pay for, or that you pay for through your taxes or whatever? There are certain things within the internet environment which I think should be like this but traditionally haven’t been; internet access in general is another one of them.

If we’re saying water and electricity are general needs, civil needs, then the internet is certainly up there as a civil need. And I know that in the States you have some terrible internet connections in lots of places. It’s bizarre to me that having some of these things provided, as part of a tax-paying situation, isn’t a citizen requirement. It makes me feel bad that we are being exploited by a small number of people for using things which are quite basic and quite enabling, things we need just to exist at the moment.

It’s hard to exist within COVID without using Zoom, which is annoying for me, because everyone’s using it. It’s quite hard to exist without using Google Hangouts and Gmail, or at least going through their servers. My mere existence as a human being in the West at the moment, maybe less so in China, just requires these things. And I think we should look at those requirements and think about how we can adjust them, so that we live in an environment which is more equitable and which, like you were saying, can actually enable more people to work, to live, and to be more fulfilled. So that was a mini rant about that. Sorry.

David Yakobovitch

But I think it’s so timely, because what we’ve seen, at least in the States, is everything between internet throttling and net neutrality. It is not commonplace that all citizens have the right to internet access, but I think COVID and a lot of these demonstrations have shown that the right to internet access could be, or should be, universal. Europe has made much stronger strides there than the US, and I think that’s because in the United States trust is broken.

It is a broken process. We’ve seen, even in the government sector, organizations choosing to hold licenses for both Microsoft Teams and Zoom: Microsoft Teams for the private domain, because it has been known for a while to support end-to-end encryption, and Zoom only for the public domain.

So, there is a clear divide; trust is broken. But from what I’ve seen, Europe has begun to figure out how to repair trust, and GDPR has been part of that solution. What have you seen in these first couple of years of GDPR being mainstream in both the UK and the European Union?

Ben Byford

It’s been really interesting, actually. I think it’s been a real smack in the face. Lots of companies have been scrambling around trying to make sense of what it means for how they operate. And one of the major things is that, at some of these companies, there used to be a kind of default mode of operation where you have some sort of technology and you just hang on to as much data as you can.

When people are using our app or our website, we can get X, Y, and Zed from them, and we’ll just store that. It doesn’t matter what we do with it later; we’ll probably do something with it, we just don’t really know what yet. And that really can’t be the default behavior anymore.

As you build technology, you have to obtain consent under GDPR, you have to stipulate usage under GDPR, and you have to give terms of access under GDPR. Users must be able to have their data amended or deleted, or have it shared back to them so they can see what specific data is held on them. All of that has to be implemented.
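As a sketch of what those obligations can look like in code (the record shape and function names here are hypothetical, not from any particular framework), subject access and erasure reduce to two small operations over wherever user data is held:

```typescript
// Minimal sketch of GDPR-style subject access and erasure, assuming a
// simple in-memory store; all names here are hypothetical.

interface UserRecord {
  email: string;
  consentGiven: boolean; // consent must be recorded explicitly
  purposes: string[];    // the stipulated usages of the data
}

const store = new Map<string, UserRecord>();

// Right of access: return exactly what is held on the user, or null.
function exportUserData(userId: string): UserRecord | null {
  return store.get(userId) ?? null;
}

// Right to erasure: delete everything held on the user; report success.
function eraseUserData(userId: string): boolean {
  return store.delete(userId);
}

// Usage
store.set("u1", {
  email: "a@example.com",
  consentGiven: true,
  purposes: ["newsletter"],
});
console.log(exportUserData("u1")); // the record held on u1
console.log(eraseUserData("u1"));  // true
console.log(exportUserData("u1")); // null: nothing held anymore
```

The hard part in practice is not these two functions but making sure every copy of the data, in backups, logs, and third-party services, is reachable by them.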

And if you don’t implement that, then you can be taken to court and fined a lot of money. So there’s been a real sea change in the default modes of operation within the design and build of technology, and that can only be, in my mind, a good thing. It’s obviously a transitional period, and I think we’re still transitioning, as anyone who still sees cookie notices on websites can tell.

There are ways of building websites without cookies, or at least without twelve cookies, and things like that. So we’re in a transition: “Okay, the old world actually isn’t compatible with what people now say about privacy and data protection.” We’re still moving there, and not everyone has worked out the solutions or ways of operating that we can use. Maybe we only store the data we actually need. Maybe we ask permission for that data. Maybe we actually use it for the benefit of that user.

Often data is taken and used for other purposes, for sale, or put into a big bucket and made into some sort of data store that other people can contribute to and sell on as a third-party service. So there are these insidious things that have been happening behind closed doors for many years, and basically, if you’re doing them now, you have to tell us, because it’s bad and, again, you can be fined a lot of money. I actually started my career at a company which did this sort of stuff; my first job ever was in the digital world.

That job in technology was building sites, doing some scraping activities, and producing large data sets for sending marketing materials. Companies like that just can’t operate in the same way as they did; they can’t do the same sorts of things. Permission is required now; it’s not a given. And technology can do all this stuff automatically, but again, it comes down to: what should we be doing?

Some of that stuff is now illegal, but within the ethics of AI, the ethics of technology, and the ethics of mass automation, we have to go beyond what is covered by GDPR, beyond what is legal or illegal, and think again about what kind of world we’re making. What is equitable for most people? What is useful to people, and not just useful to shareholders?

David Yakobovitch

And making the world by design means seeing how we can bridge the gap in this equitable-and-accessible conversation. I think what both of us know, as leaders in the technology space, is that technology always moves faster than governance. So GDPR is definitely reactionary, but it is now helping support citizens in their rights.

But you mentioned face tracking earlier in the show, Ben. This has been one of the exciting, and not so exciting, technologies of the last few years. There are three use cases of face tracking I’d like to dive deeper into with you on the show, and where the ethics may lie there: credit, COVID, and crime.

We’re going to go from least controversial to most controversial, so I think credit is probably in the middle. Let’s start with COVID. COVID has been on everyone’s mind, rightfully so. We’ve seen recently that South Korea has come out with robots that use face tracking to run temperature checks and contact lists, and if you do have COVID, then you have to be quarantined. How do you feel about face tracking for COVID prevention?

Ben Byford

You’ve put me on the spot there. So, just as a caveat, I haven’t spent a lot of time in my career thinking specifically about face tracking. I’m much more interested in autonomous moral agency and machine ethics, as well as helping organizations implement what is now called “Responsible AI”.

But the face tracking topic is great, because it’s a microcosm of what is essentially a really big ethical quandary, one with both positive and negative effects. So it is really interesting and really frightening at the same time.

And we’ve been talking about this stuff in, I’d like to say science fiction, but really in culture, for many, many years: 1984, George Orwell. In terms of the South Korean robots, you have to create trust. Suppose it is known that these machines are very, very good and work very well, that the information doesn’t really leave the robot in any meaningful way, or is anonymized in all aspects, and that it isn’t actually restricting citizens’ mobility. You don’t want a situation where you get stopped in the street by one of these machines while buying lifesaving medicine for your partner, just because you’re slightly hot.

There are all these kinds of circumstances associated with this sort of technology. We have to appreciate whether we’ve built something that actually really does work, and works well enough, and knowing when it works well enough is itself an ethical question. And then we also have to allow humans to be in the loop somewhere. Again, I’ve only read a short article about these robots, so I don’t fully know how they operate in the world. But you can imagine that if you got stopped and had some sort of recourse, it could be a useful tool. If instead it’s a “computer says no” situation, and a squad of medically trained police come down on you, and all the citizens are scared to go outside, then that’s not a good situation. So there’s a nuance here in the context of using these sorts of technologies.

With the other example you gave, the Clearview stuff, it’s unclear where the nuance is. It is unclear where the buck stops and where we can actually have some sort of recourse, and that’s the worrying thing for me. So the robots are much less worrying, though again, I don’t know the full extent of their implementation, and if they have lots of false positives, or false negatives, that could be an issue.

David Yakobovitch

Absolutely. Let’s go into our next one. From what we’ve just talked about with robots, I think the big takeaway is that humans need to be in the loop, and the engineers who build these systems need to see what the different directions or outcomes can be, and not pigeonhole humans into “Well, this is your choice” when there really isn’t a choice.

And I think that’s what we see not only with COVID but also with credit. In the last couple of years it’s been well publicized that China, which is a country of largely state-owned enterprise, has enabled a social scoring system, and that system is meant to improve transactions and financial well-being in China. This means that if you purchase sugary drinks, perhaps you will be docked a few credit points. If you go for a fantastic run in the park, you might gain a few credit points.

So China presents this as building good social policies and enabling a healthy society; that’s the viewpoint from China and the Chinese government, which is why I’m sharing it that way. And this is all being done through face tracking technology as well. There’s no opt-in and there’s no opt-out; you are opted in from day one. What’s your take as China, and maybe the rest of the world, moves toward that kind of society? How should face tracking be seen with respect to ethics as well as credit?

Ben Byford

On an episode we ran on the Machine Ethics Podcast, there was this question about the tracking in China. It hadn’t yet happened, but it was coming in: the social credit system you were talking about. And from what I’ve seen in my travels recently, it’s something that’s coming into the UAE as well. I’ve been to Dubai a few times in the last year, and my face is now held in a database with the UAE. I can actually get through the gates quicker, but at what cost? Where is the cost-benefit there, for me as a citizen and for us as a society?

And the Chinese situation is really different, and a very broad implementation. To operate within their society, you basically have to conform to certain rules, and part of that set of rules is having a profile on the Sesame Credit system, having your face tracked, your biometrics tracked, and all sorts of things. There’s a lot I could say about how they’re doing that badly, technically: stupid things like leaving your databases open. There was a story a couple of years ago about someone just finding a database of Chinese citizens on the internet, because it had the default database settings and was simply accessible, stuff like that.

So obviously security is a big issue. But we all have this contract within our own society, with our government, to conform to certain social norms, legal norms. The obvious difference here is that the Chinese system seems to be very heavy-handed in its use of technology to enforce those social norms. We don’t really have a similar approach in the West, I don’t think. I mean, you could say that some private companies, as private entities, do a similar thing.

We have to conform in certain ways to operate within their systems, and that’s a kind of peer-pressure or economic situation. But the civil situation in China is a really different kind of experiment, and it’s too heavy-handed. There are other ways of doing some of these things, and whether the goals are good or bad is debatable. But the way they’ve used technology to create this kind of authoritarian set of rules goes beyond what is appropriate, to a Westerner, obviously.

And I’m someone who cares about my autonomy, my autonomy within the digital world, the right to be forgotten, and my right to evolve as a person without a history dragging me down, things I might have looked at on the internet when I was 15, for example. These are all potent issues. And I think the Chinese system is going to see a possible collapse when people start understanding what it actually means, 10 or 20 years down the road, when they can’t be normal people who evolve over time and with experience in the same way, because they’re being tracked.

Because we have this data, which is building a picture of what each person should or shouldn’t be doing. And again, that’s an abstraction away from whether you think it’s right or wrong; it’s an interesting experiment regardless. But in my opinion it’s not a necessary experiment. There are other ways of doing similar things. So, what do you think about it?

David Yakobovitch

What I’ll share is something very contrarian about what we’re seeing today. China has done this Sesame Credit system, and again, as you’ve shared, Ben, there are some pros and some cons, especially on the technology side, that need to be worked out.

But my contrarian statement is that I think what Experian and Equifax have done with our credit system in the United States for decades has been quite similar, with the tracking of transactions and the data, potentially just not with the facial tracking yet. In fact, I would even go so far as to say that China is only doing what’s fair: they’re playing catch-up to be on par with the monitoring we’ve already seen in the United States.

Ben Byford

I guess you’re right, in a sense. But it takes that model and takes it to the extreme. If you add face tracking to Equifax, you’re saying that we don’t just care about what you buy; we also care about who you see, where you go, how you feel.

And that’s an extreme version of tracking that already exists in lots of places. The extreme version is: are you going to see your grandma once a week? If you’re not, you’re a bad person, and the government can do something about that. They can restrict you in ways that make it hard to operate in society.

Are you a truant at school? Well, you can’t be a truant at school, because then you can’t go to the local leisure center; you can’t even go to a shop without getting picked up and taken home. Those extremes are the worrying factor in this case.

Do you actually have any choice within that society, at the extremes? Your autonomy is debatable if you’re not toeing the line, obviously. So that’s where I’d say it’s taking some of these existing ideas, running with them, and probably taking them too far. Because again, it comes down to who’s making the rules and whether the rules are appropriate for millions of people, and it is millions of people we’re talking about. So it’s a big problem, I would say.

David Yakobovitch

And since we’re talking about extreme action, let’s get to the most extreme facial tracking scenario of the three, going from COVID to credit to crime. Again, this has been mostly isolated to the United States, but with the killing of George Floyd, it has been quite a past few weeks, and even as people listen to this show in the future, it will stand as a moment of social change in the United States. It has not been without violence, and it has come in different modalities: we’ve seen peaceful protesters, we’ve seen bad-actor looters, and we’ve seen protesters who felt so helpless that they became rioters in violent demonstrations.

So we’ve seen multiple personas of people, and a lot of this has been captured through facial tracking: body cams on police officers in the United States, cell phone video footage of different crimes occurring. George Floyd, of course, is not an isolated incident; this has been going on systemically in the United States for decades. But where I’m leading with this conversation is: should we be moving to a world of face tracking for crime? Should we be moving toward a Minority Report-type world? That is, of course, ever more extreme, but where do we stop when it comes to face tracking for crime?

Ben Byford

That’s exactly the problem we have. Whether you think face tracking is good or bad isn’t actually the point. The point is: where does the buck stop, and where is it not appropriate to use it? For what purposes? Police use of face tracking, taking imagery and relating it in real time to a database, seems like the first step. The idea here is that if we have this technology at all, and that’s, again, debatable, we don’t have to use it; it’s just technology, just a tool.

Then where are the low-hanging fruits for using it? It might be said that policing, or better policing, was initially the low-hanging fruit. But equally, this stuff can be used in any situation. You could use it in shopping malls to track people who are shoplifting, to track whether people are happy or sad when they walk out of a shop, and to track people’s reactions to items in the store. There are lots of different applications for this sort of technology.

So it’s not just that we are tracking people’s facial features and then relating them and trying to find out who they are. That’s one big aspect of this, but we can really do lots and lots of different things. We can track people’s gait, see if the shoes that they’re wearing are affecting them, and sell them coupons for new shoes. Those are things that we can just do now. So that seems innocuous. But again, like you said, where does it stop? There’s been some conversation in the European Union about face tracking. And I know there’s been conversation in California and other places that have banned some uses of face tracking.

And it’s really interesting, because you have all these really good, really interesting applications. And then you have applications which restrict people’s rights, or human rights. And again, it might be that we have to look at what human rights actually mean in the digital world, because obviously human rights were codified back in the fifties. So we have to be context-specific about these things, but it’s a slippery slope situation.

The police say they are using it to track citizens who are already under surveillance, or already known to have committed crime, and to be able to track them in real time in the environment. Many people would say that is a useful tool, but there is no knowing how they are actually using it, nor what other uses that same technology could be put to. So, I am very fearful of people who can just look at technology and go, “Great, I can use it for X or Y instead.” That is going to be a problem that I can see becoming a real issue going into 2021.

David Yakobovitch

Right, because how I’m hearing this from you, Ben, is, “Look, if we can face track citizens, is that part of the problem or the solution?” But how about the big man? Should we be face tracking them as well? What we’ve seen, at least in the United States, in New York State, as the whole movement around George Floyd and Black Lives Matter has continued to grow for social change, is a push to repeal a piece of legislation called 50-a. 50-a says that if a police officer has a record of misconduct, that record is sealed and never revealed to the public.

Unlike Congressmen and Congresswomen, whose crimes are revealed to the public with names attached, the repeal of 50-a says that these sealed documents on police misconduct, and the verdicts, must be released to the public. So I think what I’m hearing consistently throughout today’s episode is that trust is often a broken process, and we need to set policies to re-enable trust and re-enable ethics by design.

Ben Byford

We want to live in a world where George Floyd, or anyone who has traditionally been discriminated against in a society, can walk up to a police officer, can walk up to a person of power in that society, and know that they are going to be trustworthy. You don’t want to be in a situation where you are in grave danger and you can’t trust your own environment, your own society. That’s not a world we want to continue living in, really. That repeal obviously sounds like a good one.

David Yakobovitch

Now, taking everything that we’ve spoken about today, and these are huge topics that we could go even more in depth into, I’d love to hear your perspective as you’ve seen 2020 unfold with ethics overall. What are some of the themes or “aha moments” that you’re seeing in the near term, 2020 to 2021; the midterm, 2025; and even longer term, like a 2030 vision?

Ben Byford

There has been a wealth of interest in ethics in technology, in data science, machine learning, and AI. There’s just been an explosion of interest in this area over the last few years. Some of that interest has been rhetorical and not necessarily useful per se, though maybe it harnesses a conversation which has already happened, which is probably good. And some of it has been thought-provoking and very useful. The conversation at large is a useful one to have. What matters going forward is an appreciation for what needs to happen, and then the implementation of it.

I feel like three years ago we were asking the questions. And now we’ve moved past asking the questions, because we have a lot of questions that we need solutions to, and now we are trying to produce solutions. We are quite well versed in how a lot of these machine learning tools work, how we’ve traditionally used and implemented technology in the digital world, and how we design and build it.

And now we really just have to be able to say: in these situations you are legally required to do certain things, it is good to do certain things, and you might consider these other things. Going into 2021, we really should get to the point where we have solutions that, as a designer, a data scientist, or a business person, you can take and run with. Those are the sorts of aims I have within the community to really focus on.

We’ve asked really good questions. We’ve had really good conversations. Now is the time to take those conversations into action. And I’m seeing that with the emergence of quite a few workshops, talks, and conversations around AI responsibility, transparency, diversity, equity, and all those sorts of terms. So next year I’m hoping that we’ll be able to actually grab tools and frameworks and use them.

Going forward into 2025, was it? So, into the future, I am most interested in how moral agency appears in technologies that we actually use, and in society’s reaction to it. The easy example of this is automated cars. There’s been a lot of conversation around automated cars for many years now, but we haven’t really seen general public acceptance of automated cars on the road. So by 2025, I would imagine we’ll start seeing a general kind of normality of autonomous systems being in the environment, not just in the virtual environment.

So you might have autonomous algorithms doing certain things at Netflix or Amazon or in Google search, but also autonomous things that interact with us in the physical world, or things that we have in our pockets and our phones: a more general acceptance of moral autonomy, and the really, really interesting aspects of that.

And there’s a conversation I had on my podcast about social morality. If you have an AI helper in your phone, and it has some capacity to make decisions on your behalf, let’s say, and it also has the capacity to nudge you and shape your behavior, what is the moral agency of that device, and who controls it? And what kinds of things is it going to do? That’s a really interesting question.

If you have an AI teaching assistant, what kind of social moral agency is that going to have? All these sorts of questions are coming up as we start building this technology. And likewise with the robots in South Korea, the health robots: they have to do a job, and they have to make that job worthwhile to citizens for it to be worth having the robot in the first place.

Is it going to be faster, more efficient, better, safer, whatever it is, in terms of the cost? But is it also going to make us feel good? Is it going to give us autonomy back where we didn’t have it before? Is it going to communicate with us in the way we would like to be communicated with? Is it going to notice when we’re sad?

Is it going to notice when we’re excited? All these sorts of aspects are, for me, just very interesting. So I’m loving all that sort of stuff, and that’s going to be much more important going into the 2025 timeframe. Past that, I couldn’t speculate. It’s just going to be a fascinating time, and I hope to be here, talking and doing something in that environment.

David Yakobovitch

Well, those are some amazing trends that you’ve just shed some light on, Ben. Bringing it back to our audience and keeping everything practical, what call to action would you like to share with our listeners on the HumAIn Podcast today?

Ben Byford

Well, if I’m allowed to pitch, I would say check out the website and the podcast, that’s the Machine Ethics Podcast, for more of these types of conversations about ethics, and also some technology stuff, some technical stuff to do with robots and the internet and things like that.

I think, just be mindful. “You are not an island” is a really good way of thinking about it. If you’re producing some technology, or producing something that is going to make people behave in a certain way, how are your parents going to feel about it? How are your grandparents going to feel about it? How are your children going to feel about it? Are you producing something for them that is going to be useful? Foremost, is it going to be safe? And is it going to create the behavior you want to see in the world? So, be mindful.

And there’s this idea that, as a technology person, a developer, or a designer: “I’m only a designer. I’m only a developer. I can’t think about all these things. I’m only the guy who writes the code, or whatever, and gets told what to do.” That has never been the case. We all have our autonomy, and we all should be thinking about the things we are doing. You should be empowered to think about what you are doing. So obviously it’s easy for me to say on this podcast, but please be mindful of how you affect the world.

David Yakobovitch

Ben Byford: AI teacher, ethicist, and games developer, host of the Machine Ethics Podcast. Thanks for joining us on the HumAIn Podcast.

Ben Byford

Thanks very much.

David Yakobovitch

Thank you for listening to this episode of the HumAIn Podcast. What do you think? Did the show measure up to your thoughts on artificial intelligence, data science, future of work, and developer education? Listeners, I want to hear from you so that I can offer you the most relevant and educational content on the market.

You can reach me directly by email. Remember to share this episode with a friend, subscribe, leave a review on your preferred podcasting app, and tune into more episodes of HumAIn.

Works Cited

¹Ben Byford