You are listening to the HumAIn podcast. HumAIn is your first look at the startups and industry titans that are leading and disrupting artificial intelligence, data science, future of work and developer education. I am your host, David Yakobovitch, and you’re listening to HumAIn. If you like this episode, remember to subscribe and leave a review. Now onto the show.

David Yakobovitch

Welcome back to the HumAIn podcast. Today, I have a very special guest. This is Professor Christoph Lutge¹. Professor Christoph Lutge and I did not get to meet in person, but we met through a shared network in New York City. He’s been part of a fantastic forum called the Future Forum on demystifying artificial intelligence.

That took place in late 2019, and today he is the director of the TUM Institute for Ethics in Artificial Intelligence², out in Germany. Christoph, thanks for being with us. 

Christoph Lutge

Nice to meet you, David. Hi.

David Yakobovitch

It’s great to meet, and I’d love to start off by hearing about the Future Forum that you ran with a lot of researchers in Brooklyn back in November 2019. What were some of the takeaways or aha moments that you learned from that forum?

Christoph Lutge

It was a great mix of speakers, both from Germany and Europe in general, as well as from the US. It was very interesting and a kind of unique experience for me to see this interaction on the topics, and also to see how much common ground we actually have on those issues. So that was really a nice conference, and hopefully we’ll see more of this.

David Yakobovitch

Excellent. And we can say that both in New York and in Germany and the EU, there are so many conferences and events around AI today. Every topic, whether it’s bias or ethics, or inclusion, or responsibility, or human-centered AI, so much is emerging.

There have never been so many conferences, but they don’t always lead to actions and policy and regulations. Prior to recording this episode, we talked about you having some next steps from the Future Forum, some ideas that emerged. So what are those looking like in 2020?

Christoph Lutge

Yes. At the Future Forum we developed the idea of forming a kind of global network of centers for AI ethics. And at the end of the forum, we launched not just an idea but a concrete project, the Global AI Consortium, which we are now taking forward in order to form a kind of global alliance of centers working in this field. 

So we are in talks, for example, with Carnegie Mellon University, with a university in Singapore, as well as with others in Asia, Australia, and elsewhere. We will hopefully develop this further during this year and the ones to come, since these are issues that we can only deal with globally.

David Yakobovitch

The emergence, or the reemergence, of AI in the last few years has sparked a lot of conversation, and a lot of your work is around universities and research and policy. What we’re starting to see, especially in the United States, is public-private partnerships, where it’s not just research with universities, like NYU and Columbia in New York, or with the government, like the Small Business Administration. We’re seeing cities jumping in with companies. 

So for example, New York recently launched a responsible AI center. What they’re doing sounds similar to the work that you’re mentioning: they’re partnering with big companies, with researchers and institutes, to say, not just for New York but for everyone, we want to do responsible AI. So I wanted to hear your take: what are some of the best practices for setting up centers in major cities like Munich and New York?

Christoph Lutge

This is important to see. We opened our Institute, which was officially launched just last October. But when we developed it during the last year, the most important point was that it’s not just an academic thing. It’s not just a traditional research institute where you do research behind closed doors. You have to work with industry, with civil society, and with politics, and that’s the only way to take these issues forward. 

So this is quite an important takeaway for other centers on these topics that are being set up around the globe as well. For example, Oxford University is just now launching a new Institute for Ethics in AI, which has the same name and will actually be modeled after our center in Munich; in that case, it will be financed by the Schwarzman grant. So it is important to keep this collaboration and these new forms of partnership in mind, because, ultimately, it will be the big companies who will be implementing these rules and whatever norms and standards emerge.

David Yakobovitch

It’s interesting to think that, especially in the United States, when we look back at technology, a lot of the breakthrough success we saw around the internet and around different devices started with our government. It started with DARPA. It started with these military initiatives going back decades. 

And now, just as you mentioned, Christoph, it’s gone to the private sector. It’s the big companies that are inspiring innovation, and in the US, DARPA now invests in the big companies. They invest in the startups because they’ve realized the pace of innovation has picked up so much. For someone like myself, who’s a data scientist and AI practitioner, it’s almost impossible to keep up with all the new packages and all the new use cases coming out. 

I know that just at the end of 2019 there was the NeurIPS conference in Vancouver, Canada, around transfer learning and NLP and new techniques for automation and better recognition of certain patterns. It’s incredible to see that over 10,000 people attended the conference and that they received thousands of paper submissions. And for someone like yourself, who has been in research and academia since well before 2012 and the revival of research with DeepMind and Google, what are you seeing as trends in industry today around this revival, this resurgence? Have we hit an inflection point?

Christoph Lutge

Yes, this is quite interesting to talk about. When I discuss this with colleagues here in Munich, especially those from computer science, who have been professors of computer science or something AI-related for decades, they say, well, we have been working in this field for so many years, but no one was really interested in it.

Why is this suddenly such a big fuss? It’s hard to say. Certainly, more of these systems are visible to the public, and that’s why there is also this discussion about AI and its ethical as well as governance aspects. Certainly the trend now, and it has been for years already, is the machine learning and deep learning side of AI, which some of the more conservative colleagues still refuse to call real AI, which does not really make much sense. 

For a long time, the idea has been that there would be more robot-like systems out there in the world doing certain things. At some point, of course, those will exist, but in the immediate future that will not be the major thing. Rather, it will be systems that are built into something: on your computer, on your device, in your clothes, in your glasses, wherever. Things that are basically invisible in this sense.

But this is the major trend. And of course, the implementation in vehicles, and probably also in the field of health. I would say these are the most important trends for the near future.

David Yakobovitch 

One of the comments you just made is so interesting, about glasses and AI. We’re both glasses users, and one thing that is only natural for a human is that we try to avoid surgery unless we need to. So I’m one of those proponents who says I will never do LASIK or Contoura or eye surgery unless I really need to, but I’ve done some research in that space. And it’s so interesting to see that these new eye surgery techniques are now using smart tracking of the pupil and mapping of the cornea, and are becoming AI-powered.

So we’re seeing, as you mentioned, the emergence of AI in healthcare, and that’s a space that could bring a lot of benefit to all of humanity. In your experience, what are you seeing in the healthcare arena? Why do you think AI is a great opportunity in healthcare? 

Christoph Lutge

AI systems can both speed up a lot of processes and create entirely new ones, or let’s say connect data. You might say that big data applications are not really AI, but I don’t care about that. They will provide a lot of new input for doctors. And it’s not just a robot or some small system performing remote surgery, which we will also have; we will have many more applications.

To some extent, we already have assistance systems which inform doctors about patient data and allow them to get a much clearer picture of a patient. One example that I always like to pick up is early recognition of cancer in scans. Analyzing pictures of cancer cells is something that AI can now officially do better than any human doctor.

Maybe even one or two years ago, it was still common sense that AI was good, but experienced doctors were better. That has now actually been reversed. So we are, and will more and more be, at a point where we can say it is no longer responsible not to use AI.

David Yakobovitch

That was such a powerful statement: we will be at the point where it will no longer be responsible not to use AI. As you just said, in healthcare, by getting higher-accuracy predictions and recall rates with AI systems, doctors can focus on better patient care. 

And of course, there can be processes and checks and balances in place where humans audit the scans and the results the AI produces, to make sure type I and type II errors are minimized. But we’re getting to the point where it will no longer be responsible not to use AI. 

And that’s not just in healthcare, that’s across industries. And I know one of the big industries you’ve been doing work around is with autonomous vehicle ethics.

And we’ve seen a lot of progress in the last few years, particularly in Silicon Valley and parts of the US, and also in Europe. Mostly, for now, it’s these little mini self-driving cars that deliver a can of Pepsi or a sandwich, but now there are autonomous buses, autonomous trolleys, different systems coming into place. What do you see happening in the autonomous vehicle, or AV, space?

Christoph Lutge

Yes, there are a number of processes underway. What you just mentioned is mostly these smaller shuttle vehicles that move at a relatively slow speed, to some extent completely remotely operated. On the other hand, we have the different levels of autonomous driving: assisted driving, highly automated driving, and fully automated driving. What we are witnessing now is a progression through these levels. 

We already have a lot of assisted driving systems built into a number of cars; we’ve had those for years now, to some extent. Now the crucial point will be to get to the next level. Some say it will only be interesting once we reach the level where a car can perform an entire journey on its own without a driver interfering. It would definitely be interesting to get there, and we will get there, but there are a number of steps on the way.

The level below that will be one where the driver does not always have to monitor the system, officially; not the current situation where the company says, well, it’s your responsibility, and we are not responsible as a company. We need to get to the level where the company is actually liable during the time that the car, and not the driver, was in control.

This shift in liability, which will be quite important, will first be seen on parts of roads, in Germany for example on parts of the Autobahn, and there we will make progress. At some point we will get to the level where the car can perform the entire journey on its own. On the other hand, progress will probably also be made in the next few years in the way that cars can drive in sometimes quite unstructured environments. 

I don’t mean paying no attention to traffic rules or traffic signs, of course; that would not make sense. I mean very complicated environments, where the car has to work out how it can continue its journey and move across very carefully. But the general point is an ethical one, because, in general, these cars would be able to save a lot of lives and avoid a lot of accidents. That would be an ethical goal in itself. But we can only reach it if we program certain rules into these cars. 

David Yakobovitch

You hit everything spot on. Two things are particularly interesting. One is unstructured environments; not a lot of attention goes into that space. Over the past couple of years, we’ve seen a lot of demos, particularly with mobile robots, like Boston Dynamics’ robots that can jump. And the one that gets hit with a baseball bat can get back up and get on task to collect the box, or some other task.

So that should be really interesting with cars. We can just imagine that there’s a hailstorm outside and the hail is hitting the windows of the car. Can the car respond by sending you into a parking garage, or predict that?

Christoph Lutge

We need to have robust cars. We need to have robust software which must be able to drive in difficult, maybe not the most extreme, conditions; wanting to drive under absolutely any conditions would be difficult. But of course, the car must be able to deal with, let’s say, rain, with hail, with snow, at least light snow, maybe. And that can pose a number of difficulties, different ones around the globe.

For example, I’ve heard that when these cars were tested in a city like Boston, a huge problem was low-flying seagulls. They were apparently not always correctly detected, so the cars stopped. 

David Yakobovitch

Sure. And it sounds like that might even be a data problem: there have not been enough scenarios of low-flying seagulls running amok around cars.

But the recurring theme, as you’ve shared, is about avoiding accidents and improving lives. When can we get to the point of parity where it’s not responsible, or it’s too risky, not to use AI, where you need to use AI? And although we’re just talking at a high level about autonomous vehicles, you’ve done some research in this space, particularly around AI ethics and autonomous vehicles.

And I know that you’re on a commission that is similar to the High-Level Expert Group on Artificial Intelligence from the EU Commission, and you do work with the EU Parliament. Why don’t you tell us more about that? 

Christoph Lutge

Yes. This is an initiative which was formed in 2018, developed by the Atomium Institute in Brussels. We have presented our work in cooperation with the European Parliament several times. It’s a group with two levels, actually: there’s an expert committee, the scientific committee, which I’m part of, and which comprises around 15 experts. 

And there is a forum where companies can engage, and a lot of major companies in this field of technology are involved. We presented our first guidelines for the ethics of AI in late 2018 in the European Parliament, and we came up with five ethical principles for AI: beneficence, non-maleficence, justice, and autonomy. While these four are quite standard for ethics, the fifth one is quite interesting: the explainability criterion.

So AI should be able to explain results, at least in principle. This was our first step. Then we presented another paper on AI governance issues just recently, last November; this was about how companies and states can interact in deriving rules and governance for these systems. In the future, we will be working in different thematic subsections, and I will be chairing the committee on autonomous cars. 

David Yakobovitch

That’s very exciting, to see how conversations and research papers from 2018 are already, in the span of a couple of years, materializing in both the public and private sectors, and going from research papers to practices that could very well become policies or standards.

And in my experience in the data science industry, you need standards, and they help all researchers, all scientists, and the general public, so that best practices are put forward. So what standards are you imagining or working on around autonomous vehicles?

Christoph Lutge

Just recently I was invited to be part of a new group with the ITU, the International Telecommunication Union, one of the oldest standard-setting bodies in the world; for example, they developed the country codes for telephones. We met in January 2020 for the first time in London, and we will be working over the next years to develop standards for autonomous driving.

It’s mostly people from the engineering side, working in major companies, in research institutions, and also on the government side. There are only a few people who have expertise in ethics, actually, so I’m one of the few in there. It will be quite interesting to see how this process works out because, ultimately, we will need to develop international standards for these AVs as well. It just wouldn’t be workable if every country came up with its own standards, and the companies, the car manufacturers, will actually demand these standards.

David Yakobovitch

It’s amazing to think about how progress happens around standards. I keep coming back to public-private partnership, but also to framing the conversation. When we look at the work that has been done in the US and in Europe, a lot of the conversation has been around ethics. But it’s fascinating how, in the past couple of years, the ethics conversation has shifted gear to the responsibility conversation. 

It’s almost as if ethics is being grouped into responsibility. I don’t know if that’s true or not, but it seems they’re going hand in hand, and now we’re seeing responsible AI conferences and forums. In my experience working with clients and companies, ethics is often not that tangible; it’s very tough to grasp. Whereas when you think of responsibility, you say, oh yes, I see, you’re serving the client, or you’re creating more risk. What’s your take on the ethics-versus-responsibility dilemma?

Christoph Lutge

Actually, we have had the same experience here. Ethics is quite a fuzzy term. It has lots of connotations, and for some people it’s about personal morality, and that’s not really what we mean. We are aiming at standards or guidelines, rules which are not always legal ones, though they might become so. So we found it better to use the term responsible AI.

We have also launched a new conference, the Responsible AI Forum, which will be held in early November in Munich. This will not be just the typical academic research conference, but one where we plan to interact with other stakeholders from industry, from civil society, and from politics as well. The dates are easy to find on the internet, and there is a call for papers out there, as well as a call for proposals. So we invite everyone to take part in that. 

David Yakobovitch

So it’s responsibleaiforum.com, if you’re a researcher interested in submitting a paper or getting involved in this responsible AI topic, especially globally. What’s the submission date to get one of these papers peer reviewed or submitted for consideration for the conference?

Christoph Lutge

The submission deadline is the 1st of May, 2020. We invite abstracts on many areas of AI and ethics in a general sense; visit our webpage to find a lot of potential topics, whether it’s AI in the healthcare sector, AI and the SDGs, AI policy, AI and diversity, AI and education, and many others.

David Yakobovitch

That’s excellent. I’m definitely going to check it out, and who knows, maybe I’ll submit a paper. It will take a lot of work, but if I feel passionate enough, I might work on one as well. So, doing a segue of topics, there are a few other exciting things I know that you’re working on. 

Being involved at TUM, there’s a lot of new leadership there now, and there’s new work around human-centered AI. Particularly in the US, we’ve been seeing that growth; Stanford launched their HAI institute, and universities in the US are following suit. And you’re actually coining a different term: you’re calling human-centered AI human-centered engineering. Can you tell us more about that?

Christoph Lutge

Yes. This is a vision that TUM will be implementing within the next years. That’s right, we have a new president here who took office just a couple of months ago, and he subscribes to the idea that the engineering curriculum should be enriched with elements from the humanities and social sciences, not least of which is ethics.

This is something that, actually, when I speak about it and when I talk to colleagues, always finds very positive resonance. It’s something that has surfaced a couple of times in the past, but never really picked up this much momentum. 

But now, with a focus on AI, it becomes clearer that it will not be enough to look at AI from a purely technical point of view. It needs to generate the necessary trust; otherwise, people will just not use these systems. And this is something that engineers should be familiar with: engineers and computer scientists and people from technology. 

So implementing and enriching the curriculum with some of these elements, to raise awareness, will be an important issue. And, I should say, we have already established a partnership with the IEEE, the international association of engineers, which is working on this curriculum of ethics for engineers.

David Yakobovitch

So how do I see everything in the industry coming together? I see a cross-section between the responsible AI conversation, the human-centered engineering conversation, and the explainable AI conversation. We’ve touched on the first two so far today, responsible and human-centered. But if we jump back to the use case earlier in our conversation, on healthcare and cancer detection and how we’re reaching human-level parity through AI, the big question also becomes not just what is ethical and responsible, not just what is human-in-the-loop and human-centered, but how explainable the system is.

Do you think explainable systems are the missing piece that is going to be solved in the next couple of years? Or will new packages come about, or will a standard be set for how you should build explainable AI systems, so you can gain traction around policy, product, and people?

Christoph Lutge

Yes. One of the key challenges will be how we manage, to some extent, to standardize explainability. From a traditional point of view, people like to say, well, this is a black box, we don’t know what it does. On the other hand, when I talk to lawyers, for example, they sometimes say, well, every step within the system must be transparent and clear, you must be able to track it down. If you are familiar with the technology, you know there is no way to do that. So we need to find some kind of middle way. There is this research field of explainable AI in computer science, and the challenge will be to implement such systems. 

For example, interfaces for different types of users, which do not explain every step and every result (we will probably never be able to do that), but which, at least in principle, explain some of the results and some of the factors that influenced certain results of an AI process. And then it will be interesting to see what explainable means, and for whom: for the end user? 

Some systems will probably have to be explainable for the end user, but for others it might be, let’s say, for a doctor who uses the AI system for collecting data and arriving at some general picture of the patient, and so on.

So there will be a huge number of steps here to take in the next few years and it will be quite interesting. 
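[Editor's note: to make the idea of surfacing "some of the factors that influenced certain results" concrete, here is a minimal, hypothetical sketch, not taken from the conversation, of a doctor-facing explanation built on permutation feature importance in Python. The model, dataset, and phrasing are illustrative assumptions, not a description of any real clinical system.]

```python
# Hypothetical sketch: a clinician-facing "top factors" explanation.
# Model, dataset, and wording are illustrative assumptions only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global view: which input factors most influence the model's results overall.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]}: importance {imp.importances_mean[i]:.3f}")

# Per-patient view: the prediction plus the same top factors, phrased for a
# clinician, rather than a step-by-step trace of the whole model.
patient = X_test[0:1]
pred = model.predict(patient)[0]
prob = model.predict_proba(patient)[0, pred]
print(f"\nPredicted class: {data.target_names[pred]} (confidence {prob:.2f})")
print("Factors the model weighs most heavily:",
      ", ".join(data.feature_names[i] for i in top))
```

The design choice matches the interface idea described above: a ranked list of influential factors, tailored to a particular kind of user, rather than an attempt to expose every internal step of the system.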

David Yakobovitch

Well, it sounds like, between the topics we’ve covered today, from responsible AI to human-centered engineering to explainable AI, there’s a lot of work to be done, but a lot of the conversations are now being had. And perhaps that’s a result of the tech industry being so eager to deploy systems very quickly without seeing the impact. But as we’ve learned in tech over many years, if you don’t launch, if you don’t try to break things, you don’t get to move forward. 

And sometimes the policy makers, the regulators, those in governance are not happy about that, but we do make progress more quickly. Sometimes we take some steps back, as we’ve seen around facial recognition implementations both in Europe and the US. Lately, there have been some steps back, and that’s fair; that’s good for building the right systems, making sure there’s data privacy, that it’s ethical, that it’s responsible, all these parts. So time will tell, as systems mature and get rolled out.

I’m very excited about the work you’re doing with autonomous vehicles and that space. These committees sound phenomenal. I’m looking forward to these new forums, like the Responsible AI Forum. And Christoph, I really appreciate you taking the time to join the audience today.

Christoph Lutge

Yes, it was my pleasure. Thank you, David. 

David Yakobovitch

Thank you.

David Yakobovitch

Thank you for listening to this episode of the HumAIn podcast. What do you think? Did the show measure up to your thoughts on artificial intelligence, data science, future of work and developer education? 

Listeners, I want to hear from you so that I can offer you the most relevant, transcending, and educational content on the market.

You can reach me directly at humainpodcast.com/contact. Remember to share this episode with a friend, subscribe and leave a review on your preferred podcasting app, and tune in to more episodes of HumAIn.

Works Cited

¹Christoph Lutge  

Companies Cited

²TUM Institute for Ethics in Artificial Intelligence