Today’s guest speaker is paving the way for digital rights and AI protocols with the World Economic Forum. He has championed international protection and open access through the Yale Information Society Project, European Digital Rights, and the Electronic Frontier Foundation. Listen in as Eddan Katz and I discuss his initiatives at the Centre for the Fourth Industrial Revolution at the World Economic Forum, including Generation AI, unlocking public sector AI, and how teaching AI ethics will create a more balanced ecosystem. This is HumAIn.

Welcome to HumAIn. My name is David Yakobovitch, and I will be your host throughout this series. Together, we will explore AI through fireside conversations with industry experts, from business executives and AI researchers to leaders who advance AI for all. HumAIn is the channel to release new AI products, to learn about industry trends, and to bridge the gap between humans and machines in the fourth industrial revolution. If you like this episode, remember to subscribe and leave a review.

David Yakobovitch

Welcome back everyone to HumAIn. My name’s David Yakobovitch, and today our guest speaker is Eddan Katz, who previously served as international affairs director at the Electronic Frontier Foundation, where he worked on advocacy initiatives at international multi-stakeholder decision-making bodies on cybercrime, data privacy, intellectual property, and freedom of expression. He was also the first executive director of the Information Society Project at Yale Law School, where he taught cyber law, and he founded the Access to Knowledge initiative. Eddan has a JD from UC Berkeley and a BA in philosophy from Yale, and today he’s working at the World Economic Forum on artificial intelligence and machine learning. Thanks for being with us, Eddan.

Eddan Katz

Thanks for having me. 

David Yakobovitch

Absolutely. Cyber law is super important to someone who lives in New York City — we’re actually launching Cyber NYC very soon, which is an initiative partnering with NYU and the CUNY school system, including Facebook, Palantir, and our local government. You’ve done a lot on cyberspace. Why do you think it’s so important that we care about security with our data?

Eddan Katz

We’re now entering a stage where we have a lot of different types of relationships, and the decentralized nature of our communications makes it so that things are connected together in a way that they haven’t been before. There is a complexity between the physical environment and the digital environment in regards to the internet of things, and the way in which information is processed means that more and more of our daily lives are impacted by the structure and rules around how digital content and the digital networked environment are governed.

David Yakobovitch

Now, data is about more than just governing and making sure it’s managed properly. If we think at a more micro level about each of our own rights — our digital rights — you’ve done a lot of foundational work here as early as 2008, way before GDPR became active. Why is international protection of our digital rights so important as well?

Eddan Katz

The international aspect of it reflects the fact that our communications, our trade, and our products and services don’t obey the same physical borders as we’re used to in other contexts. So getting the rules to have some sort of harmonization across borders is the way that we can have more reliable and resilient systems across the board, so that there can be some stability to the environment.

David Yakobovitch

Now, thinking about across the board: we’ve had our own version of legislation that’s been passing in the US recently — CCPA, the California Consumer Privacy Act, was passed. A lot of the lawmakers and tech leaders in California, New York, and Washington, DC are thinking about whether a version of GDPR is going to be taking over the United States, and if so, what the future of data looks like. Any thoughts on how you see that moving in the US?

Eddan Katz

There is more of a possibility for establishing some privacy norms in the US than there has been since I started working on these issues. Clearly there has to be some bipartisan support for legislation. But having done comparative law in privacy, and understanding the differences between the data protection regime in Europe and privacy laws and regulations in the US on the federal and state levels, there is a difference between how we tackle these issues. In the US, some of the privacy protections — for example, in the banking and finance arena and in health care, such as the HIPAA law — are much more fine-tuned and advanced than in other places.

So while there isn’t a data protection regime that covers all types of data and information, US law is oriented towards strong privacy protection in different arenas. As for data privacy as a whole, the fact that so much of the information covered by it ends up being reduced to ones and zeros and bits makes it so that this industry-specific kind of legislation bleeds over — in actuality, the data leaks from one context to another.

And so it does make more sense now to treat things more holistically. But in terms of the work that we’re doing here, we’re focused on identifying particular aspects and subsections of the problem. So one of the projects I’m working on, Generation AI, is focused on how privacy relates to how children interact with and are exposed to artificial intelligence.

David Yakobovitch

That’s so interesting. Actually, a few episodes ago on HumAIn, I had Tara Chklovski, who is the founder of Iridescent, and she does a lot of work on the AI Family Challenge, which just concluded in San Francisco, where families from all over the world get to start discovering AI with their children who are in elementary school. She also runs a project called the Technovation Challenge, which is more for girls who code in middle school and high school. And we talked about countries such as Bolivia, Pakistan, and Cameroon, where some of that work’s going on. Why don’t you share with me more about how Generation AI is also bridging that gap for our children of the future?

Eddan Katz

So, the Generation AI project is run by the World Economic Forum and the Centre for the Fourth Industrial Revolution, in partnership with UNICEF, which as an entity works in many different countries and has been focused on implementation of child protection rules on a domestic level all over the world.

And we’re working in combination with them, and also with CIFAR, a research agency, to supplement some of the thinking in regards to policy development — with developmental education and science, and researchers who are working on the latest research in regards to how children can actually benefit from algorithmic and precision education, as it’s begun to be called.

And so we are trying to bring together two main entry points to advance norm-setting in this context. One is focused more on corporate governance — that is, what companies should do in regards to how they design, how they store data, and how relationships with users are processed. But in addition, we want to make that an aspirational commitment to some of the promises that AI can have, especially in regards to education for people with learning disabilities — there are a lot of opportunities in that kind of customization of the educational experience. That’s very promising.

And so we want to promote — we encourage companies to make those commitments to the public, but also to make sure of things such as transparency in the way that the products and services work, the privacy of the child’s information, and that children’s agency is respected along the way. In addition, we’re working on public policy agreements through UNICEF, as I mentioned. One area of particular focus is the way that schools integrate AI, both in the classroom but also in the way that rules are set up for use of computers and social media, etcetera. And through these different avenues of norm-setting, we’re hoping to advance ethical use of AI, particularly in this context in regards to children.

David Yakobovitch  

When you think about children, they’re the future in what they’re learning in technology, but there’s also the movement of children. I travel and fly a lot, and now when I fly through airports, especially internationally through Monterrey and Mexico City, I can’t help but see the signage where airlines like Delta ask: if you’re seeing children, are they actually part of these families, or is it part of trafficking? And this is an interesting problem — a big problem that perhaps AI and machine learning can help solve. For one, all the airlines are moving to facial recognition, and they’re using that to verify whether you should be on the flight and whether you are who you say you are, and there are a lot of issues going on there that I’m really concerned with. You mentioned you’re looking to have a positive impact on children with AI and ML — are there some projects that maybe you or your colleagues are working on for that as well?

Eddan Katz

Thank you. We’re also working on a facial recognition project, as the Centre for the Fourth Industrial Revolution is set up here. We are working with different governments, including France, trying to implement some rules regarding the implementation and deployment of facial recognition in public contexts. So for example, in France, over some of the public utilities like the transportation networks, in the way that facial recognition is being used, there are opportunities, as you mentioned, to improve the safety of the individuals you called out.

There are specific aspects of authentication and after-the-fact crime-fighting that may be relevant. But it’s important when laying these out at the beginning to carve out what the rules, parameters, and boundaries are for the use, and how that data is processed, and to make sure that it is used solely for the purposes of either authentication or whatever other transaction is necessary to enable good processing and more efficient use of the public services. Before they get deployed, we need to lay out all of the scaffolding for how they will be used and how the data will be integrated into other contexts further down the line — making sure that there isn’t commercial exploitation without permission, and other kinds of parameters that need to be kept in mind.

David Yakobovitch  

Earlier in May 2019, I attended an AI conference in New York City hosted by Lazard Asset Management, and J. Jacob, one of their managing directors, posed a question to the audience, which actually included Virginia Nicholson from Google. It was about health care — thinking about health care for children and early detection, and whether companies like Memorial Sloan Kettering, one of the major hospital providers here in the US, would be okay with sharing their data with companies like Google, especially with HIPAA compliance. And I thought the question was so interesting because you just mentioned a really fascinating point, Eddan, which is that authentication is the key.

If you can authenticate your data and make sure the rest is secure and private, then this is something that should be possible — organizations can share their algorithms and their data with each other, and you get to maintain corporate governance.

Eddan Katz

That’s true. And there also just needs to be as much transparency as possible into how the process is being designed and where the data goes. As part of the conversation around corporate governance, there are also ways in which the reporting structure within a company should be consulted in regards to making decisions. And that gets to another project that one of my colleagues is working on, Empowering AI Leadership. This is a corporate governance tool for boards at different companies, to allow better decisions based on the corporate structures and internal processes — decisions in regards to crisis management and public engagement — and enabling it that way.

David Yakobovitch  

Now, thinking more of corporate governance: AI is such a buzzword that we’re seeing everywhere. Some of my colleagues say actuarial science became predictive analytics, which became data science, which became AI. So for leadership — your colleagues and yourself, thinking to empower others — how do we even start with just educating people about AI and what it is?

Eddan Katz

That’s a great point. People need to understand what it is and what is different about aggregated data, artificial intelligence, machine learning, and deep learning. There’s an opportunity, when explaining how individual products and services work, for people in that context to begin to understand the dynamics. So how much of it is just storing data and bridging it together, and how much of it has to do with combining insights, bringing together different types of information, finding patterns, and more sophisticated ways of identifying possible routes and solutions in how an algorithm is designed?

And the more that people begin to understand the way that intersects with their expectations of daily life, the more it will allow everyone to make better decisions. Because, as you say, it’s hyped both in its promise but also in the fears, and the fears can sometimes get exaggerated when there’s a lack of realistic understanding.

David Yakobovitch  

And those fears have most recently become even more exaggerated. Just earlier in May 2019, San Francisco became the first city in the US to ban facial recognition. And it’s so fascinating because as soon as that ban went in, the next day it was on the legislative ballot in New York City, in Albany, and in Syracuse. And I took a step back and said, what is going on? Even Amazon’s board of advisors said no, you should not be in facial recognition. What’s your take on how that information should be used by the public and the private sector?

Eddan Katz

In addition to San Francisco, Oakland, where I live, is also considering such legislation. We’re looking at it closely. The point is that we should understand how these systems work, acknowledge their shortcomings at this point, and understand the implications of false positives and other ways in which there are errors in their application before they get deployed publicly. Before this gets introduced, embedded, and ubiquitous in our public lives, we should make sure it works right.

And so this is why our project focused on facial recognition is about establishing those guidelines and expectations — rules laid out beforehand that enable the public to understand how it works, instill some transparency in the process, and enable some accountability for how the information is used and how bias is prevalent in the decision-making process and in the input into the information.

Once those things become more clear, and once there is greater understanding amongst the broad range of stakeholders, we can more safely and usefully implement facial recognition.

David Yakobovitch (Ad)  

Humans, are you accumulating lots of listening minutes for your podcasts but not being rewarded for your listening time? There’s a new app available now called Podcoin, where you can listen to podcasts and donate your listening time to charity. Podcoin gives you the opportunity to be rewarded for listening to your favorite podcasts, whether you’re listening to Joe Rogan, Tim Ferriss, Terry Gross, or even HumAIn. Podcoin is the new app for you to give back with your listening minutes. Check it out on the app store.

Eddan Katz

And so we do need to have those structures set up. And that’s what the reaction is in regards to facial recognition on the city and national level.

David Yakobovitch  

I’m wondering whether those legislative structures in the US are going to arrive on an entire-country level or just city by city — San Francisco passes, Oakland moves forward, then Sacramento and LA make a change, and all of these suddenly start bubbling up to the national level.

But as a researcher myself, I wonder: is this causing harm for AI research in the US compared to the global rise in AI for countries in Asia?

Eddan Katz

So that’s always a consideration in the international context, which is why earlier I was mentioning the importance of trying to think globally and harmonize norms on a greater level. There can be regulatory loopholes created, as some jurisdictions lack restrictions on how AI is used, as opposed to others.

And it can create havens where certain types of activity can take place and not in others, and that’s not good for the long-term stability of innovation.

There are aspects of the experimentation behind policymaking that may actually be good, where we learn, in contexts where different rules are set up in different jurisdictions, what may work better, both in establishing public trust and in enabling efficiency in how public services are served. But harmonization of the rules is definitely a goal.

The more consensus there is across stakeholders from industry to government, to the public and civil society, the better the stable environment for innovation. 

David Yakobovitch  

That makes a lot of sense. Another missing piece in innovation is making sure that we’re being very ethical. Are there projects that you’re involved with, or your colleagues, that are tackling AI ethics? That’s been a very big, hot topic in the past year.

Eddan Katz 

Absolutely. We see AI ethics principles and guidelines being proposed on the government and intergovernmental levels, such as the OECD principles that were released in May, as well as by companies that are releasing their own AI ethics guides and establishing councils or external advisory boards for some of their decision-making.

One of the projects that I’m working on, called Unlocking Public Sector Adoption of AI through Government Procurement, sees the opportunity, when governments act as market actors in their purchasing decisions, in the rules and requirements that they can attach to those purchases. So we’re developing guidelines for government procurement officials for the ethical and efficient purchasing of AI systems and algorithms, which are being piloted this summer in the UK. And we’re in advanced conversations with several other countries about implementing these guidelines in one part of their procurement process.

And the way that our center works in establishing policy norms to address some of the governance gaps in the fourth industrial revolution is to pilot them, and then to reintegrate and take those learnings from the individual contexts where they’re being tried out to arrive at more uniform sets of norms, rules, and standards that can apply everywhere. And hopefully some of these guidelines that we’re developing will be taken up by many governments beyond the several that we are piloting with in the next several months.

David Yakobovitch  

And why might that be important for organizations with procurement — to have standards on purchasing goods, particularly around AI systems?

Eddan Katz

Before we’re able to make decisions about what the goal is, we need to establish a process for what is being reported and considered in the invitations to tender and how the requests for proposals are answered. The more we can lay out the concerns that need to be taken into account when a vendor is trying to sell to the government — explaining things like data quality and the possibility of bias in the system and how to address it, laying out a multidisciplinary team that looks at the issues from several different perspectives, and establishing, throughout the life cycle of the use of the product or service, how it will be used in practice — the better. Establishing that process, and making sure that this is being recorded and documented, is the first step towards being able to make those kinds of ethical decisions.

David Yakobovitch  

Part of that, as you mentioned before, Eddan, is the OECD and how Europe has been focused on these new AI ethics principles. There’s been a lot of new progress on that in May, and I’ve looked through the more-than-70-page report and all these principles. Are there any that stood out for you as very important — a principle that people should stand by and remember as the key to being ethical with AI? What would you share with the audience?

Eddan Katz

The issues surrounding bias, in regards to diversity as a fundamental aspect of ethics in this context, are an important point to consider. Some of the problems that we encounter with the way that systems work over time can be traced back to the initial ways in which the system learned, and making sure that we really think through the representation of interests, perspectives, and experiences in building that knowledge is key to do now. Further down the line, some of these systems will be set and may sadly replicate some of the disparate impacts that we have in our systems, from the criminal justice system to, as I mentioned, how children are protected. So thinking through and understanding that diversity is a key principle of the ethical design of AI systems is an important point to carry.

David Yakobovitch  

And what would diversity look like for these ethical AI systems? Does that mean let’s have more women on the boards of those processes? What else could diversity look like when teams are thinking about how to develop their own AI?

Eddan Katz

In addition to making sure that there’s representation among the people making the decisions, where the information is collected from — the people who are inputting into and teaching the systems with their particular patterns — making sure that a diversified group of people has been engaged is crucial. Especially since, as we’ve mentioned in the conversation thus far, these systems are global, and they will be deployed in many different places where there are different cultural norms, different expectations, and different understandings of what words mean. And so the greater the breadth of input into the system that contemplates and takes into account a diverse range, the better for the long-term sustainability of that solution.

David Yakobovitch  

That completely makes sense, especially with facial recognition. Earlier you mentioned how CIFAR is doing a lot in this space, but for those who don’t know, CIFAR has some classic datasets in AI and machine learning — CIFAR-10 and CIFAR-100 — which originally came out of Geoffrey Hinton’s group. And Geoffrey Hinton is one of the fathers of modern-day machine learning. It’s amazing to see how the organizations doing this work didn’t just start in 2019 — the research in AI has been going back, for some, even decades.

Eddan Katz

That’s right. And many of these issues, as you’ve pointed out, are in some ways new, but in some ways extensions or modifications — variations of questions that we’ve had for a long time. AI, in fact, raises some fundamental questions in regards to self-learning systems. It does not take long to get to questions of what it means to be human, what it means to create, what personhood is, and how responsibility is assigned to automated systems. Some of these questions go back to the Bible and Greek philosophy, being raised in regards to how we negotiate our world.

David Yakobovitch  

So I know one of the main projects you’re working on, as you mentioned earlier, is Generation AI, and you just mentioned ethics and personhood — and that’s very similar to agency with children. How do we determine agency in the world of AI and machine learning? Google is coming out with Google Glass 2.0, where we’re going to have holograms that appear next to each other and show your social media feeds. So where does agency stop being purely human or purely machine — or is it perhaps a hybrid of both?

Eddan Katz

These are great questions that there aren’t currently easy answers for, but it’s important to note that it does often depend on the context. When the agency question has to do with whether or not a child is able to consent to some sort of transaction that enables behavioral advertising, there’s a certain level of agency that the law already deals with — what is the age at which you have the agency to consent, and how much of that is the responsibility of the child? So those are certain aspects of agency. We also need to think about the problem differently in regards to the liability of systems.

When we get to complicated systems that implement AI, up to lethal autonomous weapons, it’s crucial for us to maintain the space where responsibility can still be assigned. When it is divorced from human judgment and interaction, it becomes a unique problem, particularly when we’re using these systems in contexts with consequences of grave physical harm.

David Yakobovitch  

Using as a guideline the robots that Boston Dynamics works on: I saw a video earlier in May 2019 showing that they now have ten robots that together are able to move an 18-wheeler, as if they were Iron Man pulling the 18-wheeler — strongest champions, if you will. So we’re seeing those breakthroughs in robots so quickly from companies like Boston Dynamics, in less than 10 years, when previously the robots could barely even move. This begs the question of Skynet and Terminator and all these fun doomsday stories. But you mentioned lethal autonomous weapons — how close are we to seeing some of that militarized by the police or even by our armies?

Eddan Katz

It’s already the case with our systems — you can actually trace this back to landmines being an automated weapon, and there have been efforts internationally to try to regulate and control the use of landmines as having an inhumane effect. When it’s divorced from the actual intent and the context of conflict, it is particularly dangerous. We’re seeing more and more of that, and we do see an acceleration of innovation, the more these systems are being developed, the more data is being shared, and the more there’s greater awareness. I don’t think we are far off from very complicated decision-making being further and further enabled by machine learning.

David Yakobovitch  

Let’s turn our attention to data. As you mentioned, all these AI and ML systems are based on data, and the better the data you have, and the better it’s designed, ideally the better solutions you have. There’s so much new technology happening today: in Nepal, there are drones delivering precision medicine. In fact, Amazon is already doing drone delivery in Blacksburg, Virginia. So what are your thoughts on drones, precision medicine, and other cutting-edge fields with AI?

Eddan Katz

We actually have a project — my colleagues work on the drones and tomorrow’s airspace portfolio here at the Centre for the Fourth Industrial Revolution. A big part of that is using drones to deliver medicine, in particular blood, in Rwanda. And some of that, as I’ve been mentioning throughout, has to do with getting some consensus around the rules. In order to be able to do that, there had to be aviation regulations that were complemented and updated to enable the kind of unmanned travel that is part of drone delivery. Figuring out those rules, getting consensus about them, bringing the different stakeholders to the table, and making sure that the companies developing those technologies are able to communicate with the regulators in charge of governing them enabled these kinds of uses to be backed by public trust and a thoughtful regulatory system.

David Yakobovitch  

Now, as someone here in the US — and probably like many of our listeners — I have never been to Rwanda, and that’s a country that traditionally has been plagued by a lot of disease and violence. I’ve heard that it’s gone through a huge transformation just in the last 10 years, becoming one of the technology leaders in Africa, with a lot of new systems in place. And in particular, you mentioned the work with precision medicine and blood. I’d love to hear a little bit more about what you can share there.

Eddan Katz

We see that in developing countries there are often opportunities for what’s often called leapfrogging, where the infrastructure that is in place for incumbent companies and workers in the so-called developed world is less of an obstacle to introducing new technologies. There is an opportunity to move towards new and innovative uses of emerging technology without having to contend with the structures of what’s already in place. And so there are lots of opportunities and very innovative solutions that happen sometimes, as in the case of Rwanda, as you mentioned — places that can catch up in regards to big infrastructure and other projects.

David Yakobovitch  

That’s really exciting — that AI and machine learning is no longer a story of simply the United States, Europe, and China, but the whole world is getting in on it. And it starts with our children; it starts with education, and it sounds like now’s the right time. I do a lot of training and teaching, as do you. So with digital literacy and education moving online, how do you think that can help bridge the gap in AI and machine learning, especially in developing and frontier nations?

Eddan Katz

Thanks for that. Another project that one of my colleagues is working on is called Teaching AI Ethics. We’re trying to bring together some of the curricula that think about responsible use of AI and social and economic considerations, and to integrate that into engineering and computer science graduate programs, so that the people who are actually building the technology of the future start thinking about these issues.

As part of their learning of design and development — of technical instruction and creation — these considerations should be part of the thought process from the beginning.

And so we’re getting those curricula from people already teaching these courses and finding a way to package them in such a manner that they can be added to programs elsewhere. That kind of sharing of information, oriented towards advancing the public good, means that knowledge can be taken from places where there’s cutting-edge research and shared in places where it would be difficult to have those people go lecture in particular schools and classes. Being able to put together those kinds of curricula and learning materials more broadly enables a greater audience to have access to them.

David Yakobovitch  

And what’s the best way, do you think, to impart that information? Would that be in case competitions, hackathons, or accelerators? Graduate-level students generally come in with some experience — are they then ready to implement the results in startups and other companies?

Eddan Katz

That’s right. I also am oriented, as the center here is, towards how it works in practice. Previous to working here — you mentioned my time at EFF and at Yale — I also helped start a hackerspace in downtown Oakland called Sudo Room. And in that kind of experimental learning, which is less structured and also quite innovative and open-ended, there are opportunities for these kinds of teaching, discussions, and conversations to be embedded everywhere that technology is being developed.

And the earlier these conversations start, the better. It shouldn’t be something that is saved till later, when you’re a larger company and get a legal and policy team — this should be something that is embedded into the initial design. We’ve had a lot of talk about privacy by design, and we’re now beginning to talk more about ethics by design — that kind of thinking, where the way that AI is structured from the beginning should contain these elements as well.

David Yakobovitch  

One final question that’s been on my mind for a while, thinking about AI by design: the term “fourth industrial revolution” — where did that spawn from, and where did that creativity land for the World Economic Forum to create the team that you’re part of, the Centre for the Fourth Industrial Revolution? I’d love to hear a little bit about the design behind that as well.

Eddan Katz

Sure. So the notion comes from the World Economic Forum’s Klaus Schwab, who is the chairman and also a professor. He wrote a book several years back on the fourth industrial revolution, thinking through the fact that we’ve had different stages of technology development, and the one that we’re currently in integrates the physical, the biological, and the computational in a way where the convergence is creating all sorts of exciting opportunities, but also social and economic challenges that need to be addressed. That’s the idea behind the center that was created.

And we have portfolio teams focused on everything from AI and machine learning — which is mine — and data policy, which I used to work in, to precision medicine and drones. There’s one on autonomous vehicles and urban mobility. We’re trying to understand where the governance gaps are, because governments are having a hard time keeping up with these developments, and companies are looking for guidance on how best to make decisions.

And so to fill in that governance gap, the C4IR — the Centre for the Fourth Industrial Revolution — has focused on implementing actual policy proposals and pilots with both companies and governments, and making sure that they work in practice, so that we can have models that can then be exported and used in other contexts.

David Yakobovitch  

I’m excited to see where these models go and in what contexts they’ll be used. I know we’re just kicking off some exciting trends in 2019, and AI is continuing to integrate. But Eddan, all the initiatives you’re sharing with the World Economic Forum, along with the papers and policies coming out from the OECD, are going to be instrumental for researchers like myself and individuals who are just trying to grapple with what AI is, to be better servants and leaders in this space. So I applaud all the work you’re doing, and thanks for helping us humans be further part of this conversation.

Eddan Katz

Thank you so much for the opportunity and appreciate your podcast and thanks to your listeners for paying attention. 

David Yakobovitch  

Fantastic. Thanks so much.

Eddan Katz

Thank you.

David Yakobovitch  

Humans, thanks for listening to this episode of HumAIn. My name is David Yakobovitch, and if you like HumAIn, remember to click subscribe on Apple Podcasts, Spotify, or Luminary. Thanks for tuning in, and join us for our next episode. New releases are every Tuesday.