You are listening to the HumAIn Podcast. HumAIn is your first look at the startups and industry titans that are leading and disrupting artificial intelligence, data science, future of work and developer education. I am your host David Yakobovitch and you are listening to HumAIn. If you like this episode, remember to subscribe and leave a review. Now onto the show.
Welcome back listeners to the HumAIn Podcast. Today our guest is Babak Hodjat. Babak is the VP of Evolutionary AI at Cognizant. Cognizant is known for building and scaling systems around the globe, and Babak has done some fantastic work over the years in AI and the entire industry.
He’s been involved with projects like Siri and many other really interesting ventures in the finance and tech space. So I’m looking forward to a dynamic conversation today, maybe an evolutionary conversation. Babak, thanks for joining us.
Pleasure. Thank you, David.
Both of us are in enterprise businesses, building solutions to see how teams can go from just an insight to a decision. This is something you and I have spoken about offline: often companies don’t know how to work with their data. What are some strategies you think they can use to be more effective with their data?
So currently, AI is primarily being used, machine learning and AI-based algorithms, to get a sense of what is happening in an organization, abstracting out patterns, and then to actually forecast and make predictions into the future. So it’s more about insights that humans then look at to make decisions. We need to move beyond that, and we need to have our AI systems help us with the decision-making itself. In order to do that, we need to start from decision-making.
We need to define what the constituents of a decision are. We kind of know that already, because we live our lives going through these decision loops, where we’re faced with a context, and within that context we have a number of choices. Let’s call them actions. We pick some of them. We don’t pick them randomly.
We pick them in a manner by which we are optimizing outcomes: survival, happiness, joy, whatever. In the context of a company, that might be things like revenue and margins and sales, and in the context, for example, of COVID-19, that might be containment and minimization of cost. And often those goals are not in alignment with one another, which makes the decision-making quite difficult. Also, you have to make the decisions in the face of incomplete data. So a machine, an intelligent system, has a lot to offer to help us augment and improve that decision-making.
I run these executive data science workshops at Galvanize, and we talk about intelligent systems. That’s the starting point where all these fields of AI come together: the supervised, the unsupervised, the optimization, the statistical inference, all these different components. And what you just shared was so fascinating: that AI should help with this decision-making. It’s not that AI should do the decision-making, or let’s debate that a little bit here. Am I hearing right that AI should help augment the human in that decision-making?
Humans are really good at general intelligence. We know a lot of things about a lot of things. Even the state of the art in AI cannot capture things like common sense; we have a hard time actually building that into our systems right now. So where possible, where the decision-making, let’s call it the decision loop, the frequency of making decisions, is slow enough, we can have a human in the loop.
Today it does still make sense to have a human in the loop. And of course we have to facilitate that, which means the AI needs to not only give the human a choice of what balance of the various outcomes is interesting to the human decision-maker, but also tell the human decision-maker: here’s what you need to be doing, and if you do it, here’s what I think the outcome is going to be. So a prediction of the outcome, and: here’s how confident I am in this advice. That is something that’s often neglected in our AI systems, or done poorly, but our systems have to be able to tell us how much we can rely on them.
If we have all of that, as well as the ability for humans to modify some of these, make their own choices, and run those through the predictor to get a sense of what will happen, then we have a system that’s kind of complete in augmenting decision-making. Going back to your original question, though, sometimes that’s not even possible.
We do actually defer decision-making to our machines already, even machines that do not have artificial intelligence. When you are driving a car with automatic transmission, the changing of the gears is not something that you do; it’s deciding for you. In the stock market, if you’re trading at a very high frequency, you just can’t be there and make every single decision. So there are cases where we have to rely on AI systems to make autonomous decisions for us.
This reminds me a lot of the new season of Westworld. Recently season three came out, and we were seeing that they have this new super AI, this AI machine called Rehoboam, if I pronounced that right, or however the AI would want me to pronounce it. It was so fascinating that in the season, the humans were relying on the machine.
The machine was saying: with 99.99% confidence, we believe this decision will result in a better outcome. And firsthand I’ve seen with a lot of our students and a lot of different companies that you’re right, that piece is missing. It’s like: this is the solution, default or no default as a result of COVID. But how confident are we of that? How reliable is it, really?
It’s very true. If you have a friend that you use for advice, let’s say an accountant that gives you financial advice and so on and so forth, you do rely on them. It could be implied in facial expressions and so forth, or in their own words, as to how much you should take their word for it, how confident you should be. And what’s important here is to note that there are two distinct aspects of certainty. One is whether the decision is being made within a context that is familiar.
So you would ask for advice from your financial advisor regarding, for example, a home equity loan, but you’d probably not ask them for advice regarding a family matter. Or if you do, or if they give you advice, you would take it with a bit more of a grain of salt, because that’s an unfamiliar context. The same applies for the advice itself.
So if your financial advisor has always been saying, look, you need to be careful about your money, or you need to be opening a CD account and so forth, and tomorrow they come in and say, look, you need to put all your money into Tesla, this is something that they’ve never advised you to do in the past.
It’s brand-new advice that you’re not used to getting from them. You will take that with a grain of salt. So we can build models that are specialized in assessing certainty in our AI systems, and the way they do that is based on, you can think of it as, familiarity on the input side, the context side, and familiarity on the output side.
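Babak’s two-sided notion of familiarity can be sketched in a few lines of Python. Everything below, the toy advice history, the distance-based input familiarity, and the way the two sides are combined, is an illustrative assumption, not his actual certainty models:

```python
import math

# Hypothetical advice history: (context, action) pairs seen before.
history = [((0.2, 0.1), "save"), ((0.3, 0.2), "save"), ((0.25, 0.15), "open_cd")]

def input_familiarity(context, history):
    """Closeness of a new context to anything seen before (1.0 = identical)."""
    d = min(math.dist(context, c) for c, _ in history)
    return 1.0 / (1.0 + d)

def output_familiarity(action, history):
    """How often this advice has been given before."""
    return sum(1 for _, a in history if a == action) / len(history)

def certainty(context, action, history):
    # Combine both sides: either an unfamiliar context or unfamiliar
    # advice drags overall certainty down.
    return input_familiarity(context, history) * output_familiarity(action, history)
```

A familiar context paired with familiar advice scores high, while advice never given before (put all your money into Tesla) scores zero on the output side no matter how familiar the context is.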
As we continue this conversation around finance and the whole industry, whether we’re looking at defaults on loans from the Paycheck Protection Program with COVID, or investments, should you invest in Tesla or not, I’ve seen so many of the financial platforms, just in the last few months, even prior to COVID, announce fractional shares.
And I thought this is so fascinating: say you don’t have a thousand dollars to buy a share of Tesla at the time of this recording; you could potentially buy a fraction of a share for a hundred dollars. That means the risk is being spread, but then can the risk be predicted as well? It’s such a fascinating concept. I’ve been trying to wrap my head around whether it’s good or not for the system. Can it provide more access and equity for people who traditionally could not get into such big deals?
But at the same time, would it now diversify all these people in this graph network in a way that makes it more challenging for the machine to predict when patterns are going to occur, whether it’s something unprecedented like COVID or even just withdrawals of large sums of money? What’s your take on this new fractional-share paradigm we’re seeing emerge?
It sounds like a nightmare for the accounting piece of it. But I don’t think it poses a special problem to our machine learning and AI systems, because the quantum of a stock is sort of arbitrary as far as they’re concerned. If you can build a mechanism to break it into multiple pieces and spread the risk, as you said, then that does work.
I don’t think it has a particular impact there. But as you said, there are two sides to this risk: one is confidence in what the AI system is telling you to do, and then there is risk in projecting that confidence out. And we should always keep in mind that the further out we’re trying to project, the less confident we are necessarily going to be. Past performance is no indication of future returns. So have that in mind.
That’s right. And we definitely know, as a result of COVID, that if you didn’t have data to train on, and there had not been a global pandemic since 1918, there was no data from the stock market to say: this is what might happen, let’s prepare for this with contract law and insurance law.
But what we have seen in the AI space, at least in the last few months, is that there has been a shift, a change from pure AI research toward enterprise AI solutions and practical AI solutions. What are some of the things you’ve seen recently, and throughout your career in this space?
I’m busier than ever. We’re doing quite well at Cognizant AI, and to my surprise, companies and enterprises have definitely reprioritized things. They have maintained, or in some cases even increased, their investments in AI enablement, which says a lot about the value that people ascribe to AI-based systems. It is a natural extension and a natural next step to digitizing your business.
And I think of businesses as being on this trajectory, this roadmap, of consolidating their data. You can think of the data that is being modernized and put in a data lake and so forth as a sense of what’s actually happening in my business. And it’s very important: the more up-to-date it is, and the easier it is to actually interrogate and query it, the better.
And that’s the fundamental first step that needs to be taken. On top of that, you do a lot of statistical analysis and reporting and so forth to slice and dice. In some cases you might be interested in finding out causal relationships between the variables in the data that you’re collecting, segmenting your dataset, segmenting your users, for example, and finding out which users are more likely to, for example, use your system, or why, or what categories there are and how you can move users between segments.
Then the next layer up, as we move through this roadmap, is: how can we actually project this out into the future? How can we take the data that we’ve collected and the reports that we’ve generated, make abstractions, and find patterns that will allow us to see into the future and forecast how we’re going to do? That is where this AI journey starts.
And it’s very important. Companies are investing a lot in data science teams and AI-based tools and trying to identify areas where it makes sense to do this. But as we discussed earlier, the tip of this triangle, the tip of the spear, the ultimate goal, is that you want to do all of this to be able to make good decisions in your business. So why not use AI at the top to make use of all of what you’ve done, in order to help you augment, improve, and adapt to change when it comes to your decision-making?
And thinking about this decision-making: I know we’ve featured a few other leaders from Cognizant in the past on the HumAIn Podcast. We’ve had Matt O’Kane, the European market head of AI and analytics, who talked a little bit about evolutionary AI, and he actually referred me over to you, which is what has inspired a lot of our conversation today.
And even Bret Greenstein, where we talked about the practical nature of these systems. Today at Cognizant you’re building the groundwork of evolutionary AI, but let’s break it down: what does that mean for companies, and what does that mean for people who are interested in this field?
Evolutionary AI is a set of tools that we use to build AI systems and AI-enable companies. The reason why it’s called evolutionary AI, beyond the fact that we think it’s an evolution in the way people should think about AI, is that a very strong core component of it is evolutionary computation. As is often the case with AI systems, we do pull from other AI disciplines as well, such as deep learning and neural networks and so forth, but the essence, the main differentiation here, is the fact that we have an element of what I call creativity that is missing in a lot of AI systems.
The reason why it’s creative is that, by virtue of evolution, we’re able to search for solutions much more efficiently than we can with your typical machine learning based systems. That speed and efficiency allows us to be much more creative and find solutions that are either very difficult or impossible to arrive at using other methods. It also gives us a number of very interesting capabilities.
Evolutionary computation is very good at multi-objective decision-making. What I mean by that is, as we discussed earlier, oftentimes your decision-making is about achieving a balance of outcomes. When we talk about sales versus margins, you can increase your sales through the roof if you set your price to zero, but then your margin is negative.
So you can’t do that; there’s a balance to be found between the two. Evolutionary computation allows us to find solutions that are the best at some combination, some balance, of the multiple objectives. Another characteristic of evolutionary computation is the fact that it’s adaptive just by nature.
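The sales-versus-margin trade-off Babak describes can be sketched as a Pareto filter over candidate prices. The demand model here is a made-up assumption purely for illustration; the point is that no single price wins on both objectives at once:

```python
# A toy multi-objective trade-off: price pushes sales volume and margin
# in opposite directions. The demand curve is a made-up assumption.
def outcomes(price, cost=5.0):
    sales = max(0.0, 100.0 - 8.0 * price)   # units sold falls as price rises
    margin = (price - cost) * sales          # total profit
    return sales, margin

def dominates(a, b):
    """a Pareto-dominates b if it is at least as good on both objectives
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

candidates = [0.0, 2.0, 5.0, 7.0, 9.0, 12.0]
scored = {p: outcomes(p) for p in candidates}
pareto = [p for p in candidates
          if not any(dominates(scored[q], scored[p]) for q in candidates if q != p)]
```

Price zero lands on the front (maximum sales, negative margin), a high price that collapses sales gets dominated, and the decision-maker picks a point on the front that matches the balance they care about.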
Unlike the state of the art in AI, where you build a system kind of offline on your dataset, and then it’s fixed and you run it going forward, relying on the abstractions that it’s made, evolutionary AI based systems can be updated quite regularly and adapt to a changing world.
We talked about COVID-19. It would have been impossible to predict COVID-19 and its implications right off the bat. However, we could have been faster in adapting to it, faster in making the correct decisions as it unfolded. That’s one of the things that being adaptive, being able to update your AI models continuously, would help with.
The fact of the matter is that COVID-19 spread unevenly. It started in China and then went to different countries at different times, and different countries dealt with it differently. All of that is something that we can use in our AI models as we make decisions about how to deal with the spread of the pandemic in our specific locality.
And then as we observe other changes, other responses in other countries, we can quickly adapt our decision-making, our decision strategy regarding how we deal with the pandemic. So that’s one really good example of how we could have done better. We couldn’t avoid it, but we could have done better.
And it’s so incredible to think that we didn’t have the data. But when we look at different industries, like the medical field, there’s always synthetic data being generated. Even in the self-driving field, there are synthetic cars moving in simulated environments to help systems perform better. So my question, thinking about these use cases or those you work with: is synthetic data part of the evolutionary AI process?
Yes. We have multiple choices. We could, for example, create a simulation. Again using the COVID-19 example, there are many epidemiological models; SEIR is one, for example, where you put in a number of assumptions regarding the virus itself and it will give you a pretty good indication of the number of cases and deaths and so forth. The problem is the assumptions. The problem is that it is a priori; we really don’t know how the virus works.
So you could take a simulator and train your system against that simulator. That simulator would be kind of like what you were mentioning, augmenting your data, but it’s still based on some assumptions about what your data looks like, and you cannot do this statically. If we still used the SEIR models that were in use in the first month or two of COVID-19 today, our decisions would be very different and probably very wrong.
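The assumption-driven SEIR model Babak is referring to can be sketched as a simple discrete-time simulation. All the rate parameters below are illustrative placeholders, which is exactly his point: the forecast is only as good as those a priori assumptions:

```python
def seir(beta, sigma, gamma, days, n=1_000_000, e0=100):
    """Discrete-time SEIR compartment model. The assumed rates
    (beta: transmission, sigma: incubation-to-infectious, gamma: recovery)
    drive the whole forecast."""
    s, e, i, r = n - e0, float(e0), 0.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_e = beta * s * i / n          # susceptible -> exposed
        new_i = sigma * e                 # exposed -> infectious
        new_r = gamma * i                 # infectious -> recovered
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        peak = max(peak, i)
    return r, peak  # cumulative recovered (a cases proxy) and peak infectious
```

Two different assumed transmission rates produce wildly different epidemic curves from the same starting point, which is why a static simulator built on early-pandemic assumptions goes stale so quickly.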
So we need to update it as we go forward; we can’t use a static simulator. What evolutionary AI allows us to do is use machine learning to create what we call a surrogate for the real world. That surrogate is learned off of the data that we’ve seen up until now, so it minimizes the number of assumptions that we have to make. It’s the best guess that we can have, based on what we know about the world.
And then, using evolutionary AI, we actually evolve the strategies for decision-making against the surrogate, because it’s very expensive to do so against the real world. You can’t just go out and say, let’s open all the bars and restaurants and see what happens; it’s just too risky. You’d rather try that out against the surrogate and see how it does.
So that’s the principle of what we call evolutionary surrogate-assisted prescriptions, where you have a predictor, which is the surrogate for the real world, and you have a prescriptor that you evolve, which gives you a decision strategy. And often you pair that with a certainty model. When the three of these come together, you have all the elements of a good decision augmentation system, where a human decision-maker, let’s say a policy maker, would ask the AI: how can I achieve this balance of cost and containment, for example?
And the AI says: here’s the best you can do today or into the future, and if you do so, here’s what it’s going to look like, here’s what the trajectory or the forecast for the number of cases and deaths is going to look like. And by the way, here’s how confident I am in what I’m telling you, so here’s a confidence bound around my forecast.
Now the human decision-maker can take a look at that and either go with it or modify it. Even if they modify it, they can still run that through the predictor and get a sense of what the modified decision might look like. And then the whole thing keeps being updated. You have what we call an outer loop, where with every day that goes by and new data becomes available, we can bring that in and improve our predictive model.
And by virtue of that, improve our prescriptor and our certainty models, and go through the loop over and over again. The analogy for that is how we operate as humans: we try various different things in our head, in our imagination, based on what we know of the world (that’s our surrogate for the real world) before we actually try them out in the real world.
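The surrogate-assisted loop Babak just walked through (train a predictor on observed data, evolve a prescriptor against it, deploy, collect new data, repeat) can be sketched on a toy problem. Every component below is an illustrative stand-in, not Cognizant's implementation:

```python
import random

# Toy evolutionary surrogate-assisted prescription loop. The hidden "real
# world" rewards actions near 0.7 * context; the predictor is a crude
# surrogate fit to observed data; the prescriptor is a one-parameter
# policy (action = k * context) evolved against the surrogate only.
def real_world(context, action):
    return -(action - 0.7 * context) ** 2

def train_predictor(data):
    """Fit the surrogate: remember the best observed action-to-context
    ratio and assume outcomes fall off around it."""
    best_context, best_action, _ = max(data, key=lambda d: d[2])
    ratio = best_action / best_context
    return lambda ctx, act: -(act - ratio * ctx) ** 2

def evolve_prescriptor(predictor, generations=30, pop=20):
    """Evolve policy parameter k against the surrogate, never the real world."""
    population = [random.uniform(0, 2) for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=lambda k: sum(predictor(c, k * c) for c in (1.0, 2.0, 3.0)),
                        reverse=True)
        parents = population[: pop // 2]
        population = parents + [k + random.gauss(0, 0.05) for k in parents]
    return population[0]

random.seed(0)
data = [(c, a, real_world(c, a))
        for c in (1.0, 2.0) for a in (random.uniform(0, 2) for _ in range(5))]
for _ in range(3):  # the outer loop: retrain, evolve, deploy, collect new data
    predictor = train_predictor(data)
    k = evolve_prescriptor(predictor)
    data += [(c, k * c, real_world(c, k * c)) for c in (1.0, 2.0, 3.0)]
```

Each pass retrains the surrogate on everything observed so far, so the evolved policy drifts toward the best-performing strategy while only a handful of decisions are ever tried in the real world.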
Now these surrogates are so fascinating, because what I’m hearing you share is very similar to a lot of the trends I’ve been hearing in 2020: that we’re beginning to move into a world of digital twins. Digital twins may be similar or different in some capacity to these surrogates, but it’s where you could have these AI avatars living on Snapchat or TikTok, where we’re seeing how real humans integrate with these digital interactions, or how this world simulates and evolves over time. It’s so fascinating that you can basically run through these simulations in the AI system and then see whether the confidence level is good enough to make those decisions.
These digital twins are often created, again, as simulations or simulators, or by feeding them a lot of information, for example semantic, contextual, and common-sense information and so forth. They do entail downsides, again: there are a lot of assumptions that need to go into engineering a digital twin that an AI-based surrogate, a machine learning based surrogate, might not have.
The ultimate would be to have a combination of the two, sort of a hybrid system that has a learned element as well as engineered, digital-twin kind of elements. And in AI systems, we know how to build these sorts of ensemble hybrid models. So that’s definitely an area of interest for me.
Fantastic. From everything we’ve been hearing, talking about COVID and this other scenario, it sounds like the big area of algorithm research that evolutionary AI is focusing on is optimization and optimization problems, whether we’re looking at supply chains or contracts or just differences in policies. Am I reading that right? Do you think optimization is where it’s at, or is there actually a lot more to it than just that?
Absolutely, that is where it’s at. In my opinion, adaptive, multi-objective optimization is where we need to be, and that is what decision-making is about. We are constantly optimizing and trying to improve on goals and outcomes. So for sure, that makes a ton of sense. And there are a variety of different approaches there.
Evolutionary based systems are very interesting because the substrate that is being evolved is not important to evolution. So you can evolve, for example, a neural network, but you can also evolve, for example, a rule set. In some cases, the optimization that you make and the decision-based system that you’re generating needs to be explainable.
So it’s important for you to use a different substrate there, but the actual core optimization system, which is evolutionary computation, does not change. It’s still evolving against the different outcomes that you care about and producing the outputs that you care about.
It’s just that what it’s evolving moves from being a black-box, neural network based substrate to a rule-based substrate, as an example. So these are some of the characteristics of evolutionary AI based systems that make it very interesting for this particular use case.
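The substrate-neutrality point can be made concrete with a small sketch: here the genome being evolved is an explainable rule set rather than network weights, and the surrounding select-and-mutate loop would be unchanged either way. The task and fitness function are made-up assumptions:

```python
import random

# A decision strategy as an explainable rule set: (threshold, action) pairs.
# The toy task and fitness are illustrative; the evolutionary loop itself
# would be identical if the genome were neural-network weights instead.
def make_rule():
    return (random.uniform(0, 1), random.choice(["open", "close"]))

def decide(rules, signal):
    for threshold, action in sorted(rules):
        if signal <= threshold:
            return action
    return "close"

def fitness(rules):
    # Reward strategies that say "open" for low signals and "close" for high.
    cases = [(0.1, "open"), (0.2, "open"), (0.8, "close"), (0.9, "close")]
    return sum(decide(rules, s) == want for s, want in cases)

def evolve(pop_size=30, generations=40):
    population = [[make_rule() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [[(t + random.gauss(0, 0.1), a) for t, a in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

random.seed(1)
best = evolve()
```

The winning individual is a short, human-readable list of threshold rules, which is the explainability trade Babak describes: same optimizer, different substrate.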
It’s so interesting, because humans like you and I here on the show today care about the outcomes of COVID-19 and where the world goes, but evolutionary AI doesn’t really care what happens to humans. It cares about the rules, and it cares about that system. But the rules can be very complex. The rules aren’t necessarily y = mx + b. There could be, what would you say, even millions or billions of parameters at work here?
They could be, or in some cases they might be simpler. We don’t know. And that’s the thing: because, as I said, evolutionary computation has this element of creativity, it does have this element of surprise. Every time you actually create these systems, they have a capacity to delight you with solutions that you weren’t expecting. So yes, it might be explainable, but not understandable, for sure. But that is a reflection of the complexity of the problems that we’re tackling.
And I think it’s incredible to see in the tech space: we’ve seen a lot of research over the last couple of years from OpenAI, with GPT-1, GPT-2, and now GPT-3, all about generative text. To me this feels very much in line with the evolutionary AI space. We’re exploring transfer learning, we’re exploring new problems of how to generate content, but you don’t necessarily know what’s going to happen.
And interestingly enough, people were very pleasantly surprised with GPT-2. People were able to make video games with choose-your-own-adventure stories. They were able to make news articles that synthesized information in ways humans wouldn’t naturally produce. I thought that was so fascinating. And recently GPT-3 came out to really enable that transfer learning. Do you think the work that you’re doing in evolutionary AI intersects here, in the text or natural language space as well?
Could it be used for natural language based applications, and has it been used in natural language based applications? Yes. For generative based systems we’re starting to use it a little bit, which is also interesting. As I said, evolutionary AI is really neutral to the substrate, so you can actually take such a system, for example, and evolve it, or evolve aspects of it. So for sure, I think there’s an interesting avenue of research and applications there. Absolutely.
I know I’m a big fan of GANs. I’ve downloaded many of them online, from videos and a lot of research conferences. And I know this year StyleGAN2 came out, which is the successor of StyleGAN. It’s a generative system where you have human faces that were completely made by the AI.
In StyleGAN, there were some clear mistakes that you and I could spot: an ear lobe looks a little low, or there’s a big wrinkle on the neck of someone who’s 20 years old, and I don’t think they’re getting wrinkles just yet. But in StyleGAN2 all this was repaired. I’m so surprised this speed of technology continues to accelerate. Where are you seeing some of the new emerging trends in the evolutionary AI space?
Yes. So there is of course the track where there’s a lot of work around neural architecture search: evolving, basically, the design and hyperparameters of any kind of deep learning based system. That’s a particularly interesting area, because it’s so difficult to design these systems. It takes, as you see, months to get to the next version, with a lot of AI experts playing around with it. So what if you could actually automate that?
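Automating that search can be sketched as an evolutionary loop over hyperparameters. The "validation score" below is a made-up stand-in peaking at learning rate 0.01 and width 64; in a real architecture search that function would be an actual train-and-evaluate run:

```python
import math
import random

# Evolving hyperparameters against a stand-in validation score. The score
# function, mutation scales, and search ranges are illustrative assumptions.
def validation_score(lr, width):
    return -(math.log10(lr) + 2.0) ** 2 - ((width - 64) / 64.0) ** 2

def mutate(cfg):
    lr, width = cfg
    # Perturb the learning rate on a log scale and the width multiplicatively.
    return (lr * 10 ** random.gauss(0, 0.2),
            max(1, int(width * random.uniform(0.8, 1.25))))

def evolve_hyperparams(generations=25, pop_size=16):
    population = [(10 ** random.uniform(-4, 0), random.randint(4, 256))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda cfg: validation_score(*cfg), reverse=True)
        elite = population[: pop_size // 4]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return population[0]

random.seed(7)
best_lr, best_width = evolve_hyperparams()
```

The same select-and-mutate structure extends naturally to discrete design choices, which is how the search can reach beyond scalar hyperparameters to things like layer types.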
And recently we’re seeing that it can go beyond just evolving the architecture and hyperparameters; you can even start looking at things like loss functions and activation functions and so forth. So that’s one very interesting area. You talked about transfer learning; there is a very interesting related area called multitask learning that allows us to train against, let’s say, a large public dataset.
And then, at the same time, tune a second head off of that deep learner on a smaller dataset, so that we can tune it to a particular domain for which we might not have sufficient data. To me that’s also quite interesting, and the scientific side of it is interesting: exploring where the limits are, what the commonalities are, what it is that allows these systems to operate well across domains, and how we can avoid starting from scratch every time we train for a new domain. These are all very interesting areas of research. Another area of research on the evolutionary AI side is what I mentioned, evolutionary surrogate-assisted prescriptions. We just published a paper on applying that to reinforcement learning based problems.
In this case it has some very interesting characteristics. Not only does it solve the problem itself, for example in game playing, playing the game quite well after learning it, but it also does so by minimizing regret, which means minimizing the number of times it’s actually making mistakes, or dying, for example, as it’s learning.
It also minimizes the number of times it has to actually play the game to learn it. When you use things like deep reinforcement learning, you have to play the game millions of times, even for simple reinforcement learning games, but these evolutionary AI based systems allow you to train on a fraction of the actual game playing. The reason why that’s important is because, as I mentioned, it’s all well and good, if you’re trying to teach your system to play Go, to have it play Go a billion times. But if you want to teach your system how to deal with the COVID-19 situation, and you’re talking about the life and death of humans, you can’t afford to try things out in the real world a million times. You have to significantly reduce the number of times you actually interrogate the real world.
Now, especially, we’ve been living in this remote world that’s becoming a remote utopia, if you will, where we’re looking at games and life and everything together: games have gone remote, Formula One has gone Formula E, chess has gone online with Magnus Carlsen and these online competitions.
We’ve even seen games like League of Legends and Dota 2, which have become famous for these OpenAI systems playing against humans, become even bigger. But that’s not all we’ve seen in gameplay. We’ve started seeing people like Magnus Carlsen training against AI in chess, so he can discover these hidden, secret patterns that humans don’t think about, to then outsmart other humans, or have a human with an AI versus another human with an AI. I wanted to hear your take on the future of gameplay and where you might see that going.
I’m not an expert in that area, but absolutely, I hear a lot about how this augmentation happens. In chess it’s already happened: a lot of the training happens against AI-based chess players, and that is definitely the case in a lot of games. I think the AI in, let’s say, first-person shooter games and so forth is also improving significantly, which I think is quite interesting. AI in the virtual world has an easier time, dealing with more limited contexts.
So it’s a good place to try these systems out. But again, let’s not solely rely on that in developing our AI systems. Even recently there was some talk about AI systems in medicine, where Google put something out where they were getting very high accuracy and so forth.
It was only after they actually pushed it out to real-world practitioners that they realized it was really not being used the way they expected, or the input wasn’t quite what they expected, and the general accuracy they were getting wasn’t quite there. So let’s not lose sight of the fact that often the ultimate value of these systems is in the real world.
And in the real world, the re-imagining of the world is going on very much right now, around computer vision, natural language processing, voice, and all these systems. I know in one of your past lives you had the chance to work on products that got involved with Apple’s Siri, and I’d also like to hear your take on the future of voice, as we’ve become a world where we’re no longer just texting each other. I have been on more phone calls and more video calls than I can remember, which I think is one of the few good outcomes we’ve seen of the COVID world.
Yes. These systems have made huge improvements, especially on the speech recognition side, and in the scale at which they’re able to respond to voice commands, for example Alexa and Siri and so forth.
We still have some fundamental breakthroughs that need to happen in that area, having to do with dialoguing and common sense, holding several threads of dialogue at the same time, emotion detection, and so forth. So there are a number of areas of research that need maybe a little bit of different thinking than what’s gone into them today.
There’s also the fact that as humans, we don’t quite make full use of these systems just yet. So there’s a usability aspect: if it’s just a cylinder in the corner of the house, it will not be treated as a system with which you can actually hold a dialogue.
Whereas if the same thing were moving around and had something resembling eyes, you might. In my opinion, robotics, home robotics and so forth, might be a very interesting area where these systems could go. There’s definitely a cost-related consideration, but that’s one area that is interesting.
And now, tying it all together back to your area of passion around evolutionary AI: if we’re thinking about trends, looking out this year and into the future, what do you see for the future of evolutionary AI?
More and more companies are going to adopt this technology for decision-making, and it will start with areas where the decision-making has been captured, where the data around the decision-making is already there. But it will not stay there. I hope that it will get to areas where we think decision-making is the sole domain of humans, but where there is room for improvement in that decision-making. As that happens, organizations will start having hierarchies of decision-makers, not necessarily autonomous, but at least augmentative.
And when that happens, we will get benefits that are kind of multiplicative. You’re looking at efficiencies, you’re looking at accuracy of decision-making, you’re improving on the outcomes generally. I think the world really does need that. Data-driven decision-making is lacking, and we’re probably over-reliant today on human judgment. Together we can make a better world by increasing the role of AI-based augmentative decision-making systems.
And Babak, what call to action would you like to leave with our listeners and the audience today?
If you are in an organization or enterprise where there’s critical decision-making happening, work back from there. Most businesses that I see are still saying: we’re far from that, it’s too far in the future for us, we’re still struggling with digital, we’re still struggling with getting our data in order. Sure, but your competitors might be ahead of you on that.
And remember that you have to have a vision of AI enablement in order to even get the data and digital part of what you do right. So I would say the first order of business is: look at where you want to be in two, three, four years of AI enablement, and work back from there to see what kind of data you’re missing, leaving on the table, not even collecting. You need to be able to identify that and spec it out.
And of course, yes, build your data infrastructure and modernize it, build reporting on top of that, build your machine learning and forecasting and predictions on top of that. But do so with the aim that ultimately you want to improve on your decision-making, because that will hit your bottom line directly; that is exactly the outcome that you are giving your AI to optimize.
Babak Hodjat, VP of Evolutionary AI at Cognizant, thank you for joining us on the HumAIn Podcast.
Thank you for listening to this episode of the HumAIn podcast. What do you think? Did the show measure up to your thoughts on artificial intelligence, data science, future of work and developer education? Listeners, I want to hear from you so that I can offer you the most relevant trend setting and educational content on the market.
You can reach me directly by email at firstname.lastname@example.org. Remember to share this episode with a friend, subscribe and leave a review on your preferred podcasting app and tune in to more episodes of HumAIn.