DUE TO SOME HEADACHES IN THE PAST, PLEASE NOTE LEGAL CONDITIONS:

David Yakobovitch owns the copyright in and to all content in and transcripts of The HumAIn Podcast, with all rights reserved, as well as his right of publicity.

WHAT YOU’RE WELCOME TO DO: You are welcome to share the below transcript (up to 500 words but not more) in media articles (e.g., The New York Times, LA Times, The Guardian), on your personal website, in a non-commercial article or blog post (e.g., Medium), and/or on a personal social media account for non-commercial purposes, provided that you include attribution to “The HumAIn Podcast” and link back to the humainpodcast.com URL. For the sake of clarity, media outlets with advertising models are permitted to use excerpts from the transcript per the above.

WHAT IS NOT ALLOWED: No one is authorized to copy any portion of the podcast content or use David Yakobovitch’s name, image or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books, book summaries or synopses, or on a commercial website or social media site (e.g., Facebook, Twitter, Instagram, etc.) that offers or promotes your or another’s products or services. For the sake of clarity, media outlets are permitted to use photos of David Yakobovitch from the media room on humainpodcast.com or (obviously) license photos of David Yakobovitch from Getty Images, etc.

Welcome to our newest season of HumAIn Podcast in 2021. HumAIn is your first look at the startups and industry titans that are leading and disrupting ML and AI, data science, developer tools, and technical education. I am your host, David Yakobovitch, and this is HumAIn. If you like this episode, remember to subscribe and leave a review. Now onto the show.

David Yakobovitch 

Welcome to the HumAIn Podcast, where we cover topics deep diving on augmenting humans, developer tools, the future of work, and how AI is part of humans and machines. Today's guest speaker is Nell Watson. Nell is an AI ethicist and machine intelligence engineer who now works to define IEEE standards, and she has worked on certifications in AI as well. Thanks so much for joining us on the show.

Nell Watson

Thank you so much. It’s a pleasure. 

David Yakobovitch 

It's such a pleasure to have you. As we've seen over the last couple of years, there's been a lot of evolution of standards and certifications across the world, with the gold standard going to the IEEE. You've been part of some of these core AI standardization efforts: one of those is the IEEE 7000 series, and the other is ECPAIS. Can you tell us more about these standardizations and certifications that you've been a part of?

Nell Watson

It's still very much early days in the world of AI ethics and the establishment of best practices. In other areas, such as medicine, we've had bioethics for decades now, and it's actually rather mature as a domain; it's been very much pulled apart and put back together again, and now, if there is an ethical issue in something relating to medicine, we can quite easily understand what to do. But in the world of AI ethics there is still a lot to be done, a lot to be understood, and a lot of terminology to be agreed upon.

And there's a lot of work yet to be done in translating principles into actionable criteria. Principles are great because they're timeless: if we think of the Peelian policing principles, they're almost 200 years old, but the idea of policing being done with the consent and cooperation of the community, etcetera, is a timeless value, even though the science of forensics and police work has changed so much in those last two centuries. So principles are great, but it's very important that we create something actionable out of them: criteria with defined metrics, so that we can know whether we are achieving those principles and to what degree.

With that in mind, I've been working with teams at the IEEE Standards Association to create standards for transparency, which are a little bit traditional: a big document up front, very deep, working on many different levels for many different use cases and different people, for example investigators or managers of organizations, etcetera.

We've also done quite a bit of work in certification, which is a little bit different from standardization; certification looks at the specific elements within the systemic as well as the organizational ecosystem.

For example, with our certification criteria we can understand the elements which tend to drive, or tend to inhibit, the emergence of an ethical quality such as upholding ethical transparency.

To give an example, if an organization is concerned about intellectual property that will tend to detract from transparency. If, on the other hand, it has a culture which rewards and welcomes the sharing of information between departments and perhaps even with the public, then that will tend to strengthen or drive transparency in this case. 

And so by breaking down these different elements, sometimes across several levels where they can feed into each other in a tree structure, we can gain a very wide, broad, and deep understanding of all of the elements that contribute towards good ethics, for example within transparency.
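
To make the tree-structure idea concrete, here is a minimal sketch of how drivers and inhibitors might roll up into a single transparency score. This is an invented illustration, not the actual ECPAIS methodology; every name, weight, and value below is hypothetical.

```python
# Hypothetical sketch of a certification criteria tree, loosely inspired by
# the driver/inhibitor idea described above. NOT the actual ECPAIS
# methodology; all names, weights, and values are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Criterion:
    name: str
    weight: float = 1.0          # how strongly this element drives (+) or inhibits (-)
    satisfied: float = 0.0       # observed degree of satisfaction, 0.0 to 1.0
    children: List["Criterion"] = field(default_factory=list)

    def score(self) -> float:
        """Aggregate this node and its sub-criteria into one signed score."""
        own = self.weight * self.satisfied
        return own + sum(child.score() for child in self.children)

transparency = Criterion("ethical transparency", weight=0.0, children=[
    Criterion("open information-sharing culture", weight=+1.0, satisfied=0.8),
    Criterion("IP-protection pressure", weight=-1.0, satisfied=0.4),
    Criterion("documentation for investigators", weight=+0.5, satisfied=0.6),
])

print(f"transparency score: {transparency.score():+.2f}")  # prints +0.70
```

Because sub-criteria can themselves have children, the same aggregation walks arbitrarily deep trees, mirroring the "several levels feeding into each other" structure described above.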

It's important as well to understand not only the system, what it's doing, its functions, and its limitations, but also the organization: for example, what kinds of incentives might be in that organization? What kinds of constraints might management have, or what culture is built into it? If we think of commercial air travel, it used to be quite dangerous, say back in the 1940s, fifties, and sixties, but it's gotten a lot less dangerous since then, even though the number of people traveling by air has increased massively.

And a lot of that is because we have black box systems on board these aircraft, which can tell us what happened if an incident occurs. We have two of those: one in the cockpit, which records the voices of the pilot, copilot, and so forth, and one for flight instrumentation. So we have a way of understanding the human element as well as the system, and generally, when things go wrong, it's some interaction between the two. It's only by having an understanding of the human element and the machine element that we can really put together what occurred during an incident.

And it's the same with transparency: it's about understanding AI systems not only in terms of what they're doing electronically or algorithmically, but also the organizational context in which they have been designed, developed, and deployed.

David Yakobovitch 

Transparency is the area that, in the last couple of years, we as a society have seen is not being held to the standard that it should be. When I think of autonomous and intelligent systems, I think of the new Netflix Korean show that came out in 2021, Squid Game, a sensational thriller that, without any spoilers here, implements AI to perform certain actions, and there are a lot of questionable decisions that occur in that game and show.

And again, without doing a spoiler there, the challenge is where the transparency of data may be available. In the States, we've seen companies like Clearview AI that were not transparent with the use of data for the police and the military, and where other systems have not used data for good.

And to the contrary, there have been systems using data for good, such as, during the pandemic, identifying who might be ill or who needs to cover their face with a mask to ensure compliance for the greater good. So it's both sides of the coin, and the challenge is: are there enough processes in place to ensure the right transparency for these systems?

Nell Watson

Absolutely. Transparency is really the foundation of all other aspects of AI and ethics. We need to understand how an incident occurred, or how a system performs a function, in order to analyze how it might be biased, where there might be some malfunction, what might occur in a certain situation or scenario, or indeed who might be responsible for something having gone wrong. It is really the most basic element of protecting ourselves, our privacy, and our autonomy from these kinds of advanced algorithmic systems.

There are many different elements that might influence these kinds of systems. You mentioned data: that data might be incomplete, it might have a bunch of erroneous elements in there that were accidentally collected, and that might cause the system to have a different impression of the world, or a different impression of certain people, which might be biased, for example, or simply outright broken. That data typically trains models.

And sometimes, therefore, the models themselves can have issues even though the data itself might be relatively clean. Another issue, of course, is where that data goes after it's been processed by a model. The model has made a prediction or a judgment about something, but quite often that flow of data can end up in third-party hands, where it is then exchanged again and again, and so your data exhaust might end up being transferred eight or nine times between different players in this ecosystem. That can end up eventually in the hands of, say, foreign intelligence services, or even law enforcement agencies, who today, instead of bothering to get a warrant, will often simply use one of these services.

They can buy the information necessary to basically state where you were at a certain time and place, and that speeds up their investigation, I imagine, but it sidesteps a lot of our legislative processes, which we've carefully built over centuries, and there's no way of knowing whether that information is indeed accurate. So there are lots of dangerous implications for these technologies in terms of protecting people from injustices, as well as ensuring that our privacy and our dignity remain intact. And I would say that dignity is really part of privacy in a broader sense: privacy is not just about keeping secrets, but about keeping information in its correct contextual container.

For example, you might be using a music streaming service and happen to be playing a bunch of breakup songs. That service might pass some of that data on to a third party, who might decide that now is a good time to try and sell you high-proof alcohol. Now, that would not be respectful of your dignity, and you would not be expecting your data to be used in that way.

You might expect somebody to try and sell you concert tickets, but not to use it in this way, which might ultimately have a strong influence on your mood or your wellbeing. So there is a lot to be done in understanding the pitfalls of these technologies, in sharing that information so that people can become aware of ways in which AI systems have gone wrong, and in putting together the standards, certifications, and best practices for us as an industry to be able to better preempt these kinds of situations.

David Yakobovitch 

Data should be taken seriously. One of my advisors, Christopher Sanchez, launched the Data Path project, which also talks about ethics and how it is a work in progress, pushed forward by the changes in society and the standards society wishes to obtain. I've learned a lot in these last few years working in data science and building end-to-end products for Fortune 500 companies, which led me to start investing in data-driven startups.

And through that journey in the past couple of years, I realized that data should be taken seriously not only in products for consumers, enterprise, and the public at large, but everywhere. To that extent, with my venture capital firm DataPower Ventures, we are launching the industry's first data transparency rider. In the VC world, riders are additional clauses or side letters that say the founders must agree to certain terms: if you're going to play the game, we are going to play the same game together. What that data transparency rider indicates is that there will be no harm from data. The scenarios that you're painting today with data exhaust: should those be the scenarios that citizens have to give in to? Or is there a better way? And it does start all across the ecosystem, even with investors, to build better products and to invest in teams that take data seriously.

Nell Watson

Absolutely, and there is a great opportunity for leadership at all levels, whether it is from the venture capital side or from the board side. Indeed, sometimes a lot of ethical leadership can come from lower-rung employees, who might be at the coalface in terms of understanding how something might be having an ethical impact, even if you're doing tech support or customer service.

You might be among the first to realize that some aspect of a product is affecting a group of people in a way that may not be fair, and where there might be a reasonable way to make that situation better. There's a lot of opportunity as well for new ways of training people in how to deal with these kinds of situations. Quite often someone might go to their immediate line manager and say, I don't really like this situation, I feel a bit uncomfortable about it.

But somewhere along the line between their immediate manager and the company heads, somebody may decide: we don't have time for this right now, we have to concentrate on pushing out the next update. And ultimately, if that pressure keeps building, it can deeply affect people's morale. They can experience a moral injury if they are forced to participate in something that they feel is actively harming people, or maybe they end up having to be a whistleblower; they feel compelled to take on that role, which is very often the end of a lot of people's careers.

Even if they've done the right thing, it comes with a heck of a stigma, which is difficult to move on from. So we need to better establish the practices of how we have fair, open, charitable, and agreeable conversations about these things in a non-antagonistic way, to ensure that where issues are discovered, the right people, who have the power to make important decisions about them, are hearing that and therefore are able to take action, because quite often these days that doesn't work so well.

David Yakobovitch 

And so, thinking about what hasn't worked well versus what could work well: a lot of this requires collaboration among all people and machines, and that includes having a common language and learning with the right context, perhaps as you shared previously in your TED Talk about teaching AI behavioral norms. What does teaching AI behavioral norms mean for you?

Nell Watson

Even before we expect a young human child to know right from wrong, which is typically somewhere around the age of six to eight years, maybe by the age of three we're starting to teach them basic things: if you're in a movie theater, please don't make noise; if you make a mess, try to clean it up; ask someone else for help if you find yourself in some difficulty. These are the basic premoral socialization rules which help to make life nicer for other people and help to make a happier society as well. And so I've been researching, for a number of years now (it's become my doctoral research as well), how we can better annotate examples of human behaviors to provide role models for machines to learn from.

That little kid who is beginning to learn the rules of socialization will tend to ingest a lot of moral rules through things like Saturday morning cartoons: the behavior of the virtuous heroes will be different from that of the nefarious villains. If we can annotate that in a way that machines can better understand, so they know who the goodies and the baddies are and which actions are generally preferable or dispreferable, we can begin to socialize AI; we can begin to teach it basic social graces, politeness, gentleness, and friendliness.

Now, this will mean different things for different people. Behavior that might be welcomed by your great aunt might be different from behavior welcomed by some youths hanging out in an alleyway. They will have different subcultures, and different ethnoreligious creeds will have an effect; different situations will have an effect. What might be appropriate at a wedding may not be appropriate at a funeral, or vice versa.
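
As a concrete sketch, behavior annotations of this kind might be recorded as structured examples like the following. Every field name and score here is hypothetical, invented for illustration rather than drawn from any published annotation standard.

```python
# A minimal, hypothetical schema for annotating observed behaviors so that
# machines can learn social norms from them. Field names and scores are
# invented for illustration; this is not a published annotation standard.

from dataclasses import dataclass

@dataclass
class BehaviorAnnotation:
    description: str      # what was done
    actor_role: str       # e.g. "hero" or "villain" in the source material
    context: str          # the situation in which it occurred
    culture: str          # community whose norms are being annotated
    preferability: float  # -1.0 (strongly dispreferred) to +1.0 (strongly preferred)

examples = [
    BehaviorAnnotation("talks loudly during the film", "villain",
                       "movie theater", "US-general", -0.8),
    BehaviorAnnotation("cleans up a mess they made", "hero",
                       "shared kitchen", "US-general", +0.9),
    # The same act can score differently in different contexts:
    BehaviorAnnotation("tells an irreverent joke", "hero",
                       "wedding", "US-general", +0.3),
    BehaviorAnnotation("tells an irreverent joke", "hero",
                       "funeral", "US-general", -0.7),
]
```

Note how the last two records capture exactly the wedding-versus-funeral point: the same behavior carries a different preferability score depending on the context and culture fields.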

But if we can begin to provide these examples and map them across time, across space, and across different cultures, we can give machines the competence to know how best to act in certain situations, in certain geographies, or with certain people. One of the nice things comes from the new technologies that have emerged just in the last year or two, for example GPT-3: these transformers, or foundation or large language models (we still don't have a fixed terminology).

We're learning that simply by giving a few prompts to these systems, like asking one to give you an example of something and then saying, well, that's quite good, but I'd like it to be a bit more formal, or a little bit more gentle, or something like that (it can be very abstract feedback), just a couple of iterations of these back-and-forth revisions of prompts can change these systems very strongly. We can influence how they perform a certain action, and the behavior or flavor of their output can be changed very much.
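
A minimal sketch of that prompt-revision loop might look like the following. The `generate` function is a stand-in for whatever large language model API is in use (GPT-3 or similar); its name and signature are assumptions, not a real client library.

```python
# Sketch of the iterative prompt-refinement loop described above: generate,
# give abstract feedback, regenerate. `generate` is a hypothetical stand-in
# for a call to a large language model; wire it to a real provider yourself.

def generate(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("connect this to your LLM provider")

def refine(task: str, feedback_rounds: list) -> str:
    """Revise an output by folding abstract feedback back into the prompt."""
    prompt = task
    output = generate(prompt)
    for feedback in feedback_rounds:
        # Each round appends the previous attempt plus the user's reaction,
        # steering tone and behavior without any gradient-based training.
        prompt = (f"{task}\n\nPrevious attempt:\n{output}\n\n"
                  f"Feedback: {feedback}\nRevised version:")
        output = generate(prompt)
    return output

# Example usage:
# refine("Write a short apology to a customer.",
#        ["a bit more formal", "gentler in tone"])
```

The point of the sketch is that the feedback strings can stay entirely abstract ("more formal", "gentler"), yet a couple of iterations can still reshape the output's flavor substantially.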

And so, if we can make it easy to have those kinds of reinforcement sessions, then literally everyone on the planet, in just the space of one or two hours, might be able to select and then refine their very own personal set of values. AI can then act as an ambassador for ourselves, with our own values: it can not only act in a way that we would prefer, but act on our behalf in a way that reflects our intentions. And that's going to be the next wave. Today we are working on standard terminology and standard ways of understanding systems and organizations, but as we move through this decade, it's going to be increasingly about corrigibility, about aligning AI to be more responsive to our social needs and to be more of a welcome presence as it becomes more able to take on a sophisticated social role.

David Yakobovitch 

And that social role is very powerful, because from my experience when I've traveled, I need to learn these social cues. My partner is Taiwanese, and when I traveled to Taiwan, the culture was completely different from what I see in New York City. Replicating that with a human is difficult enough; implementing it with a machine, thinking of it as social for good, will unlock new opportunities from a social perspective. Though it also begs the question: what opportunities will be unlocked, or risked, from an economic perspective when we launch these systems?

For example, with new robots: there's one company that we very excitedly invested in, called Embodied, that created a robot called Moxie. Moxie was featured on the cover of Time in 2020 as a breakthrough invention, and this is a little robot with a screen that can be in hospitals next to patients who are recovering, so that they can have some mental relief and comfort from this little robot, and have support or care from this device. And the question becomes: there are a lot of benefits on the social side, but to what extent could economic disruption occur, and what ethical economics will there be?

Nell Watson

We're really coming to a Sputnik moment in AI. We've gotten used to the idea of talking to our embodied smart speakers and asking them about sports results or what tomorrow's weather is going to be, but they're not truly conversational: you can't have a chat for 20 minutes, or a conversation that spans several days. They're really more of an interrogation system than a conversational assistant. But that's about to change: transformer foundation model technologies like GPT-3 are going to enable these kinds of conversations to be had.

And that's going to freak a lot of people out, because these conversations are going to be very entertaining; they're going to be funny, even. In fact, some people will get sucked into them. They will find them to be a supernormal stimulus: that's when something is more exciting and enticing than the real thing. Some people can get hooked on junk food or porn or video games, which are all manifestations of a larger-than-life version of the real thing.

That will probably have quite a strong effect on people's mental health, some of it positive and some of it negative. In the negative sense, people may become obsessed with these bots, or they may become unduly influenced, because, as the old adage goes, we are the sum of the five people closest to us, and if one of our interactions is with AI, then we are likely to take on its mannerisms and its opinions about things.

However, in a positive sense, a lot of people out there could probably use a friend, or somebody to chat with about mundane things or esoteric things that real human beings might not necessarily have an interest in, and that might be profoundly life-enhancing for a lot of people, especially today in an increasingly atomized world, where fewer people report that they have close friends in their life, or at least people they can reach out to if they feel the need. As these technologies become more sophisticated, they will tend to take a greater role within business.

They will perhaps take more of a front-facing, customer-facing role as well, as they become more trustworthy, less likely to put their foot in it or cause embarrassment. Just as AI has begun to take over many clerical tasks, in the future it will begin to take over some, though not all, social tasks, like calling to make an appointment, or reception-style tasks. Those kinds of things will have a large effect on the economy.

A mixed bag, positive and negative: it will unlock many new possibilities, particularly for smaller companies that otherwise might not be able to afford a full-time receptionist, for example, but a lot of people will naturally end up being somewhat disrupted by these systems, just as the advent of desktop computing in the nineties also put a lot of people out of work or left them scrambling to keep up with the new development.

David Yakobovitch 

With the context of what we've discussed today, from the social to the economic to the greater good of humans and machines, it seems that as a society we are moving to a place where there is a demand for transparency, and a request for data for good and data for all. So we're beginning to move more towards a humane society, though there's a lot of work to be done, and your team and yourself, among many leaders, are building those foundations for a humane world. What do you see as the next steps that our audience and listeners can take to heart in building for HumAIn?

Nell Watson

G.K. Chesterton, the writer behind Chesterton's Fence, was once asked what is wrong with the world, and he thought about it for a long time. This was over 110 years ago, a long time back, but he basically remarked that "when the world goes wrong, it's because we try to meet the needs of systems instead of the needs of humans; we sacrifice human needs to serve systems, and that's when the world goes wrong."

And I find that incredibly profound and wise, and very prescient as well for our times today, which are increasingly driven by algorithmic processes that are beyond our knowledge and our ability to understand. Fundamentally, humane technologies and a humane society are about putting the human first, putting human needs first, and adapting systems to serve those needs and to truly better the human condition, not sacrificing everything for the sake of efficiency.

It means leaving a bit of slack, and ensuring that the costs to society or to the environment of a new innovation are properly taken into account: that we understand what those kinds of influences will be and attempt to mitigate them, or at the very least to offset them in some way. If we keep that in mind as a guiding star, we are more likely to build a humane technological society more easily and more quickly.

David Yakobovitch 

Nell Watson, AI ethicist and machine intelligence engineer, working on IEEE standards and certifications in AI. Nell, thanks so much for joining us today on HumAIn.

Nell Watson

My pleasure. Thank you.

David Yakobovitch 

Thank you for listening to this episode of the HumAIn Podcast. Did the episode measure up to your thoughts on ML and AI, data science, developer tools, and technical education? Share your thoughts with me at humainpodcast.com/contact. Remember to share this episode with a friend, subscribe, and leave a review, and listen to more episodes of HumAIn.