Welcome to our newest season of HumAIn podcast in 2021. HumAIn is your first look at the startups and industry titans that are leading and disrupting ML and AI, data science, developer tools, and technical education. I am your host David Yakobovitch, and this is HumAIn. If you like this episode, remember to subscribe and leave a review. Now on to our show.
Welcome back, listeners, to the HumAIn podcast. Today, we are talking with an expert in enterprise AI and data science.
Our guest speaker is Veljko Krunic, who’s the author of the new Manning book “Succeeding with AI”. He’s also a consultant who focuses on big tech and how you can bridge the gap with AI and data science.
Veljko, thank you so much for joining us on the show.
Thank you very much, David. It is a pleasure to be here.
So as we continue to unravel 2020, one thing that has been so fascinating is that VC investment in AI has not stopped. There is so much around AI that continues to grow and accelerate, whether it’s big tech, whether it’s open source, whether it’s consulting, AI is still a big part of what companies are looking to do.
But the big challenge is how they can be successful with AI, which I get a sense, you’ve written a lot about and have learned a lot about through case studies with your new book. So why don’t you start us off about, if we’re looking as a company to get involved in AI, how should we think about this? What should we do?
You’re right. There is still plenty of investment in AI, and that will continue. What will change, however, is the length of the runway that the typical project has to show success. Will it still be possible to have multi-year projects that are not delivering any concrete business results? Yes, it will. Is it going to be as easy to do so as it was in 2018? Probably not.
What is different today, compared to 2018, is the amount of patience that people will have before you can show them business results. That means that when running an AI project, the most important question that the leader of an AI project can ask is: what are you going to do with the results that the AI project will deliver? How will you react to the answers that AI gives you?
Speaking of which, have you ever seen an example of the answers that AI could provide? Did you actually say to your team: give me an example of the answer that I can get from AI, and show me how exactly it would look? Ideally, at the start of the AI project, you should ask to get exact examples of how answers from AI would look, for all the types of answers that an AI project could give you once finished.
Then you should take those example answers and ask: what business actions should I take if I get such an answer? If you didn’t perform such an analysis at the very beginning of the project, that means that you have effectively started with AI algorithms and technology, but have left the questions of what you will do, and what business actions you should take, as an exercise to be completed once you already have the project results.
The most important thing that you could do with AI, in general, is understand how the response that you get from AI ties to the business action that you are going to take. A very underappreciated aspect of using data science is that the number of business actions that you could take is usually severely limited on a data science project.
The algorithm produces the result, and then humans have to act: to take some business action or make some decision based on the answer from AI. And what we forget is that people work at the speed of people. You have to work at people’s speed to align people with each other.
People have to come together, they need to unify around some goal, and execute that goal.
They need to find the funding, et cetera. That is why I strongly advise to always start your project not by asking what cool technical things we can do, but what business actions we can take based on the results from the AI.
I love how you’ve put this: that building AI, as much as it is technical code and product and infrastructure with use cases, has to align with the business. And typically we’ve seen that alignment could have a long runway. It could be 2 to 5 to 10 years out, and then there’ll be some payoff: efficiency, automation, or new breakthroughs in research and tech.
And that cycle seems to be shortening, and the whole AI generation has been, since the 1950s, an ebb and flow of investment and research and tech. But as we fast forwarded to the last 10 years, only now has it started to become practical.
And what I see as a consultant and educator in the industry, is there is such a thirst for knowledge, and a thirst for results today by companies, but they’re not thinking how you’re explaining it. I was with a major company that we were talking about, their analytics and their solutions.
And what we discovered was that they have this library, if you will, of artifacts. And these artifacts were dashboards and metrics and streams, and very nice predictions and graphs and insights. And when the company took a look at how many views they had, how much engagement was occurring with these artifacts: null, zero. Somewhere from the top, there was the request: let’s create this fantastic work. And it’s not being used. So it sounds like, and I’d love for you to explore this, that perhaps there’s a disconnect on why we’re creating things today.
There is definitely a disconnection.
Working with AI is a very exciting and rewarding job to have, and it is easy to get lost in how much we enjoy just working with AI technologies.
I get lost myself. Many other data scientists are in the same boat, but that’s not what the client is engaging me for. And frankly, that’s not what the world needs us to do. One part of the disconnection comes from the fact that it is so easy to enjoy technology that business problems could look much less interesting.
And consequently, the business problem could easily become an afterthought for the engineer. But it is not merely a problem of the engineering. It is a problem of siloing people.
The companies often silo and classify people as engineers or business people. And the logical result of that is that there is not enough communication between business and technology.
That’s not because both teams are not trying their best, but because AI is a much more complicated area to tie business and technology together than other areas of the technology.
We have been doing databases, for example, for much longer than we have been doing AI. You said that we are talking about hundreds of different dashboards in your question.
So let me guess. You’re probably talking about some Fortune 1000 type of environment. It’s very common that in such a company, you have people who are excellent in technology, but don’t necessarily have significant business expertise. Or if they have business expertise, they might lack visibility into top-level business operations and access to them.
And that problem is mirrored by the people who have, in a business team, excellent business skills, but very limited understanding of the AI. It’s not easy for these two groups of people to talk openly, because there is also a social component that gets in our way.
Data scientists would often use terminology from their domain, and they would expect business people to ask if they don’t understand the terms used. Ideally business people should ask.
Unfortunately, that’s not how people work. People generally don’t like when they look like they don’t know something they think they’re supposed to know. It’s perfectly human to be reluctant to admit that you do not know something that you think other people expect you to already know.
Let’s talk about business people and why they might be reluctant to speak when they do not understand what some technical terms mean.
How did they become business people? They maybe got an MBA, and maybe they figured out early on that they’re better at talking with people, and that they’re more interested in talking with people than they are in the minuscule details of statistics. They might not have been paying that much attention to the statistics class that they had in business school.
And that class might have been 30 years ago. So put yourself in their position. Now they’re listening to some technical terms that look like statistics, and they are very often assuming that they are supposed to know what they mean. They could be reluctant to speak. Maybe I would be reluctant too, if I’m put in such a situation.
Here is how to break out of that cycle. If you’re an executive, forget that statistics class you had many years ago. You’re not supposed to still remember what those formulas and statistical terms mean. If you knew them, you would have been a data scientist, not an executive. What you care about is not what those statistical terms mean,
but what those statistical terms mean when they’re translated into dollars. Now, a small digression before we talk too much about money. I just want to be clear that when I talk about money, I’m talking about dollars when you are working in a for-profit environment, and about other quantifiable metrics if you’re working in a non-profit environment.
If you’re a nonprofit that is helping people: how many people have you helped? How exactly did you help them? How can you measure that? So from now on, if I happen to say money, consider that not just dollars and cents, but a catch-all term for the quantifiable outcome of your efforts, of your business, whether in a nonprofit or in the for-profit world.
So, with that out of the way, here is my advice to the executives. If you’re listening to a presentation in which the team is using technical metrics, and you’re trying to figure out in real time what those metrics mean on the business side, you’re doing something you shouldn’t be doing. That is a job that should have been completed before the presentation was given; it should have already been on the slides.
What does this mean in business terms? As an example, you’re not supposed to know what the RMSE metric means. You’re certainly not supposed to be wondering during the meeting: is an RMSE of 0.023 good or bad for the business?
What you’re supposed to hear is not RMSE, but dollars. You’re comfortable making decisions when you know the value of that decision in dollars. So you should insist on using dollars as the unit in which the results of your AI projects are measured. It becomes a question of what the conversion rate is between RMSE and dollars. And that conversion should be known before the meeting starts.
If you’re a business-oriented person or a high-level executive listening to a presentation full of technical terms, your most important responsibility is not to try to understand and learn those technical terms in real time. Your most important responsibility is to raise your hand and say: let’s take a break, stop. I don’t understand what you are telling me. How does this translate into business terms? What business metric does this correspond to? How does that business metric translate to dollars?
Once you ask that question, you will be in one of two situations: either it is obvious how to translate it, or it is not. An example where it is obvious would be when every 0.01 improvement in RMSE is going to be worth a hundred grand. Great.
If it is that simple to translate, then let’s translate it immediately. Let’s put it on the slides and let’s talk dollars, not RMSE. It’s even worse when it is not simple to translate. If you need some very complicated link function to tell you that this value of the technical metric is worth this many dollars, it’s completely unrealistic to expect people to translate in real time between the technical metrics and the dollars.
I cannot do that myself, especially if I’m supposed to be thinking about something else during the meeting, for example, how we implement this result on the business side. So I’m very skeptical that even brilliant executives could easily translate complicated technical metrics into business decisions in dollars, regardless of whether it is simple or complicated to translate the technical metric into the business metric.
What you should do is insist that it be done before the meeting. If you are a high-level executive who is presented with some AI technical metric that you do not understand, the meeting is not the time to learn what it means, or, even worse, to guess what it may mean.
What you need to do is find a polite way to coach the team members that they should translate the technical metrics into business metrics before the meeting, and of course, give them any support they need to do so. If they need additional help, or if you need to assign additional people who understand the business side of the problem better, assign them.
But make decisions based on the business metrics, not based on the technical metrics. It’s a much easier question to answer “Do you want to earn a million dollars?” than “Is an RMSE of 0.021 enough or not?” This is not rocket science. Unfortunately, that is often not what is happening today.
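To make the idea of a link function concrete, here is a minimal sketch in Python. The baseline error, the dollars-per-point figure, and the linear form of the function are all illustrative assumptions, not numbers from any real project; in practice the link function must come from your own business analysis, done before the meeting.

```python
# Illustrative sketch: translate a technical metric (RMSE) into dollars
# before the meeting, so the slides can show money instead of error values.
# All numbers below are hypothetical.

def dollars_per_year(rmse: float) -> float:
    """Hypothetical link function: estimated annual business value of a
    model as a function of its RMSE. Lower error means more value."""
    baseline_rmse = 0.050      # assumed error of the process used today
    value_per_point = 100_000  # assumed dollars per 0.01 RMSE improvement
    improvement = max(baseline_rmse - rmse, 0.0)
    return improvement / 0.01 * value_per_point

# The slide should show dollars, not RMSE:
for rmse in (0.023, 0.021):
    print(f"RMSE {rmse:.3f} -> about ${dollars_per_year(rmse):,.0f} per year")
```

The point is that this conversion is computed once, ahead of time, so the executive in the room never has to wonder whether an RMSE of 0.023 is good or bad.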
The result is that we have a simple disconnect between business and technology. The bigger the company, the easier it is for such a disconnect to happen. However, the root cause of that disconnect is that we do not understand that convergence between AI and business cannot happen on its own.
It’s something that you have to work on. Something you have to earn. You have to create the environment in which business and technical people could safely admit when they don’t understand the domain of the other team, and then teams need to work together to bridge that gap.
That answer invites me to think more about what I’ve seen with business, and it’s that we don’t know what we don’t know.
This is true at all levels of an organization: executives, managers, technical engineers. And generally, the technical engineers are the ones asking the questions, whether they’re searching through peers or through reference material to discover the answer.
And somewhere along the way, as people rose through the ranks, they forgot. As you say, you were inspired to learn statistics at 23 years old, but now, as the executive at 43 or 53 or 63, you’re focused on relationships, which is very important for business.
But it needs to be quantified business, not only qualified. And somewhere it’s shifted and changed. And perhaps we’ve hit an inflection point this year. And that inflection point is the pandemic, the COVID, that businesses are saying, we love technology, let’s invest in AI, but let’s get practical. Show me the money, show me the results. I don’t have 10 years of a runway anymore.
And it’s interesting that it had to take COVID for this shift to occur. Or were we moving towards this transformation from AI research to practical AI?
That is an excellent question. And obviously, the real answer is that we will never know, because we do not get the opportunity to replay those events twice, with COVID in one timeline and no COVID in another. Plus, on top of that, given the choice, I believe people would choose to live in the timeline in which there is no COVID, anyway.
Having said that, I could give my biased answer. I’ve written my book because I feel that many of the AI projects that are in progress would fail. And without COVID, we would have been in a similar position. The problem is not COVID. The problem is the disconnect between the business action you can take and the technical work you’re doing. You do not make money
when you get results from your data science project. You make the money when you act on those results. And if you don’t know how to act on those results, then you do not make money. As simple as that.
If you lead with technology and leave action as an afterthought, you have spent a lot of money developing technical solutions that could be ossified and difficult to change, but that may require pivoting at the eleventh hour.
What happens if you spend a lot of time on the project and only then figure out that it’s not clear to you how to take appropriate business actions based on the results? Now you have to pivot. It’s pretty easy to see that many projects would not have a successful pivot; they would have to hope that some relatively easy pivot, figured out late in the game, could produce something usable.
Unfortunately, many of the projects are not going to succeed. They’re not going to find that pivot. I would also like, with your permission, to open another topic that often causes the failure of projects. It is the topic of imitation.
Very often, we like to do things in data science, or in technology in general, because the companies that are prestigious are already doing them. One of the things that you are going to see is that, very often, small companies have what you could call Google envy.
“We do the same thing that Google does.” And they go and start working on some project because they know that big companies are using the same technology. Here’s the problem: you’re imitating somebody who has very different circumstances than you do. Put yourself in the position of a high-level executive in some large technology company, and imagine that there is some chance of this thing called AI being a big deal.
And imagine that you have a large budget, a budget in which putting $5 to $10 million in some idea is not that expensive, in the grand scheme of things. If you’re in that position, and you’re that executive, you could think about this almost as an insurance policy.
Suppose that there is only a 50% chance of the project working, but it could be a big deal if it works. Would you rather save $5 to $10 million, take the chance that your competitor would make an important discovery, and then have to stand in front of your boss or your board of directors and say: I wanted to save $5 to $10 million, so I did not take a swing at this big chance, and I effectively cost us that big research breakthrough that could have been very important?
Now, what if you’re a much smaller company? What if those $5 to $10 million are all the money you have, and if the project does not succeed, that’s it for the company? Do you take that chance, or do you say: I need to spend this money on projects that have a significantly higher chance of succeeding?
That is why, when you put together this disconnect between technology and business and, unfortunately, the little bit of imitation that is happening, we have sown the seeds of many projects in progress that should not have been started at all. And they will now go where they have to go.
I am afraid that there is a significant number of AI projects that would not have succeeded, COVID or no COVID.
It’s as if, even as technology continues to evolve and we’re in this fourth Industrial Revolution, we’re still heading down the same pathway.
We’re still in this imitation game, if you will, where startups are playing catch-up, or scale-ups get acquired by big tech, like you’re saying. AI is different at the “G-MAFIA”, at these big tech companies, than at the consulting shops, than at the startups. Because budgets mean resources and business-driven results.
I can hear what you’re saying that other technology today is still very much on the fringe of possibility like blockchain, like quantum, they totally can be the next big thing. And so why not?
Facebook invests in blockchain, IBM invests in quantum. It makes complete sense because it’s an insurance policy for their enterprises to scale new business units, to surface new opportunities.
And please do not take the phrase “insurance policy” to mean anything negative. It is smart money to place if you’re an executive in a big company.
But as I pointed out, a small startup is in a different position. In my book, I use the following analogy: we are all descendants of people who, in prehistoric times, figured out how to hunt. That’s why they were hanging around in the gene pool long enough for that gene pool eventually to lead to us, David. Good hunters.
If you are a hunter, you don’t starve because you haven’t caught the mammoth. If you’re anything like me, the chances are that you would run away if you see a mammoth. And if you’re unlucky enough to finally corner him and he turns around, you’re excused if you decide that you’re a vegetarian for the rest of your life, for I have no idea how to bring down a mammoth with sticks and stones.
But failing to catch the mammoth is not why you starved. You starved because, while chasing that mammoth, you passed by hundreds of perfectly good rabbits that you could have caught instead. You were ignoring the rabbits and following that mammoth until it was only you and him, far off the beaten path. And there is nothing that you can do now but catch him, for there are no rabbits anymore.
That’s what happens when you chase big, difficult AI projects as a first step. Don’t put yourself in that position. If you’re starting the AI efforts in the division of a big company, start by hunting the rabbits: small, predictable, and tangible successes.
That is what your first AI project should be in a small company, or even within the division of a big company. Catch the rabbit, do easy projects, show results first, then go from there. Show that you are good at catching rabbits. And after a while, you will get better at hunting. You will get a reputation for being a good hunter.
You will have more executive support. Other departments will want to work with you; smart people will want to join your team. Now you have become a good hunter. Now it is time to hunt that mammoth. Always start with projects that lead to predictable successes, instead of copying difficult projects from organizations that are much bigger and better staffed than yours. Start with the rabbits instead of the mammoths. Build a history of success. That is how you choose the projects that you should take.
Now, let’s talk about one overarching topic that you are hearing in this conversation: I put an onus on you as a project leader. Agency is important, and we often have more agency and influence than we recognize. And yet, too often, we run projects without exercising that agency: we leave steps unanalyzed, and we do not steer the systems ourselves; we let them run around us. We like things to just happen that we should have been controlling all the way. My message to you is that you must do better than that.
It is your job as a project leader to drive the project. It is your responsibility. Analyze the system as a whole, because AI can’t do that for you. You and your team have to do so. And then steer that system toward the goal. Exercise your agency. AI needs your help. You cannot delegate your agency and your job to AI.
It’s incredible to think about these goals, because selecting the right project is exactly what you’ve described throughout this narrative, Veljko. You can build this team and suddenly have 40 or 50 engineers and five projects, and you’re running full steam ahead to build the next great thing. But if you didn’t consider the business, it can all be for naught.
And we’ve seen in the tech industry, as it’s gone through its ebbs and flows, many of these startups have just gone bust after having raised tens of millions of dollars, going after a problem without ever discovering the business. And you can scan the pages of TechCrunch and The Verge and see: this company could not scale the business.
This company could not get a paying customer. And it’s the same issue over and over. You can have the most world-class team, the best PhDs, the best AI researchers, the best engineers and data scientists. But it’s really about, as you write in your book, selecting those projects, discovering which projects will have an impact on the business.
So that leads us to, if we’re a leader listening to the show today, how should we determine, how should we start considering what projects to get involved with?
You should first start by answering the question: what is the business value of this project if it is successfully executed? Now you’re working on a problem that is worth solving.
The next step is that you need to actually lead, and the first part of leadership is to lead yourself. We were talking about how important it is to have open communication between business and data science. You should know, as the leader of the project, how to create an atmosphere in which you feel that it is safe to say that you do not know something. Then you need to extend that atmosphere so that every single team member feels safe saying that they don’t know something.
And then you need to get business and data science to work on bridging the gap between them. Data science is a team sport, and what you’re building is a system. The best system is the best system as a whole; it is not a collection of the best parts. So you need to actually build a good system.
Your next step is to set up the project in such a way that if you fail, you will fail fast. And you need to be focused here. Today, we are not limited in our AI projects so much by budgets, or even by technology, as we are limited by the time of the data science team. That means that if the data science team spends 10 months working on a project that is not going to succeed,
you just took them out for a full 10 months. Don’t do that. Don’t start a 10-month-long mammoth chase. Remember, if you are the first team in your company that is applying artificial intelligence in some area of your business, you’re lucky. 80% of companies do not have any AI in production. You are on rich hunting ground. You are privileged to be one of the people that gets to look, with powerful AI tools, into data that nobody before you has looked at.
With tools of such power, there has to be something, somewhere in all of those problems that you are looking at, that would be easy to implement and worth a lot of money.
The trick is to not keep chasing problems that are too difficult to execute, or problems that you thought would be easy to solve but turn out to be much more difficult than expected. Don’t chase rabbits that turn out to be mammoths in disguise. Let them go. You do not have to catch everything you start to chase. Instead, let that mammoth in disguise go, find a rabbit, and catch him. Build a history of success, even if you’re a big boss yourself.
Even if you’re lucky enough to work in a great organization with a ton of smart people and you have an unlimited budget, you still need to motivate the team that works for you. Chase predictable successes, and choose projects so that your team sees five, six small successes quickly. That is much better for morale than chasing some nice big story that drags on for years while you still have little to show for it.
The goal of all of this is that you run predictable projects. And once you learn how to do this type of analysis, you’ll get the chance to practice, first on small and cheap projects. Then you eventually put yourself in a position where you’re delivering on larger projects. And regardless of whether you’re working on the smaller or the larger projects, you always run them in a way that ensures that you do not make a big investment into problems that turn out not to be solvable, and that you do not make big investments in machine learning pipelines that would not work.
If you do that, you cannot fail. Or even if you fail, you are going to fail in a cheap way. What would you rather say to your boss? Scenario one: Boss, this can never work. I have just proven it, and I spent 5% of the timeline and 7% of the budget. Or scenario two: Boss, I’m sorry. This will not work. And by the way, we have spent 150% of the timeline and we are at 200% of the budget. Just a small piece of career advice on how to avoid moves that limit your career potential: if you fail, fail fast, fail cheap.
That definitely sounds like career advice from proven experience navigating these landmines in the industry. And one additional area I’d love us to explore today is for leaders who are building these data and AI projects right now.
So you’ve already been commissioned to do that project. And you have your resources, your budget, your technology. Where do you focus or spend that time? Is that around collecting and aggregating many sources of data? Is that around reading all the top research papers to get that state of the art algorithm? How do you determine how to be successful with that AI project?
The most important thing to understand when you are the leader of an AI project is that the machine learning pipeline will ossify after a while. You could postpone the moment when the ML pipeline ossifies a little, but eventually, by the time your organization is large enough and you’ve worked on that machine learning pipeline for long enough, it all ossifies. It’s the nature of the beast.
While you cannot prevent ossification, you can make sure that you’re using the right pipeline. It’s not a problem that the pipeline is ossified if you do not need to change it. So make sure that you have the right one. And as a result, when you’re looking at the candidates for that machine learning pipeline, make sure that the candidate in which you are choosing to invest a lot of money could support your business for the foreseeable future.
On the topic of how to start the project: there are always a lot of different opinions and a lot of different possibilities when you’re starting a project. Should you be gathering more data? Should you be starting by prototyping some algorithms? Should you be reading the latest research papers in that area?
You should first understand how all of the pieces of your project would fit together. What is your machine learning pipeline going to look like? You typically do not need to know all the latest developments in computer vision research to figure out that you would need one of the boxes in your machine learning pipeline to say something like “computer vision” in it. And once you have a basic idea of what that machine learning pipeline would look like, and what pieces you need in it, you can start engineering the system around it. Be aware that people often spend too much time trying to understand how something works,
for example, how some algorithm works, and not enough time, early enough, asking the question: what could those algorithms do for you?
What is the result of putting an algorithm that has 99.99% accuracy in some stage of your pipeline? Does your machine learning pipeline start performing like a champ, if you could only find such an algorithm? Or are you going to spend a ton of time implementing that algorithm only to find that, when you plug it in, your machine learning pipeline is still sluggish, even with that great algorithm?
Just like you cannot build a great computer by opening some random computer and replacing a random part of it with a better one, you cannot take a random stage in your machine learning pipeline and think: if only that stage were better, the whole system would be perfect.
That would be the equivalent of putting the very best cooling fan known to humankind into a random hundred-dollar laptop you got on eBay. Instead, you need to analyze the system and ask: in this particular machine learning pipeline, what part should be improved first? To which part is it the most sensitive? Look at the sensitivity analysis of the whole system.
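The sensitivity analysis described above can be sketched as a toy calculation: pretend each pipeline stage were perfect, one at a time, and see which substitution moves the end-to-end number the most. The stage names, error rates, and the simple multiplicative error model below are all illustrative assumptions, not part of any real pipeline.

```python
# Illustrative sensitivity analysis over a machine learning pipeline.
# Toy model: each stage's errors compound multiplicatively into the
# end-to-end accuracy. All stage names and numbers are hypothetical.

def pipeline_accuracy(stage_error_rates):
    """End-to-end accuracy if each stage independently loses some fraction."""
    acc = 1.0
    for err in stage_error_rates.values():
        acc *= (1.0 - err)
    return acc

stages = {
    "data_cleaning": 0.10,    # assumed per-stage error rates
    "computer_vision": 0.02,
    "business_rules": 0.15,
}

baseline = pipeline_accuracy(stages)

# What would perfecting each stage, in isolation, buy us end to end?
sensitivity = {}
for name in stages:
    perfect = dict(stages, **{name: 0.0})  # this one stage made flawless
    sensitivity[name] = pipeline_accuracy(perfect) - baseline

# Improve the stage the whole system is most sensitive to, not a random one.
best = max(sensitivity, key=sensitivity.get)
```

In this toy setup, perfecting business_rules moves the overall accuracy the most, so that is the stage to invest in first; pouring effort into the already-strong computer_vision stage would be the cooling fan in the hundred-dollar laptop.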
As an executive tasked with bringing AI to the organization, your job is to build an organization that will grow with time, and you should plan for its growth. As the organization grows, it will become much more capable than it is today. So you want to sequence the challenges and the projects so that they keep pace with the growth of the organization.
And you want to tackle problems at the time the organization is capable of solving them. If you do all that we talked about today, you’re highly unlikely to fail. And the chances are that even if you run into trouble, it will be some small piece of the machine learning pipeline experiencing trouble. But we know that the pipeline as a whole is strong, and we know that that particular piece can be improved enough to get a solution to our problem, because we already checked that.
Even if we do not know how to fix that small piece ourselves, that’s great. You could hire people to solve this problem for you. You could hire experts, you could get experts in that particular small piece of the machine learning pipeline, because now you happen to have a budget. You’ll have the budget because you have a good track record.
You have sequenced the projects in a way that maximizes the cash flow you get from solving them, and you have also maximized the predictability of the projects, so you delivered them. Above all, exercise your agency and lead. Lead the team so that they look at the engineered systems as a whole, analyze those systems, and apply custom-tailored solutions, so that you guide your systems toward your business goals.
This is not rocket science. This is fairly simple and well within the capabilities of established executives. I wish you good luck and a lot of success.
Veljko Krunic, the author of “Succeeding with AI”, a Manning book, and a consultant on AI and data for big tech companies.
Thank you for joining us today on the HumAIn podcast.
Thank you, David.
Thank you for listening to this episode of the HumAIn podcast. Did the episode measure up to your thoughts on ML and AI, data science, developer tools and technical education? Share your thoughts with me at humainpodcast.com/contact.
Remember to share this episode with a friend, subscribe and leave a review. And listen for more episodes of HumAIn.