When you’re making the decision about the trade-off between how understandable my model is going to be and how accurate it’s going to be, a lot of folks are unaware that they’re not going to get that same increase in improvement. Fraud detection is a good example.

David Yakobovitch

This is HumAIn, a weekly podcast focused on bridging the gap between humans and machines in this age of acceleration. My name is David Yakobovitch, and on this podcast I interview experts in sociology, psychology, and artificial intelligence, as well as researchers at consumer-facing products and consumer-facing companies, to help audiences better understand AI and its many capabilities. If you like the show, remember to subscribe and leave a review. Now onto the show.

For our viewers tuning in today, I’m interviewing on the HumAIn Podcast T. Scott Clendaniel, who has a rich history in machine learning, involved with instruction and teaching pedagogy, as well as working in AI labs, and today doing a lot of work at Legg Mason on projects that, you could say, have been in the works all the way since the 1980s. You just shared some of your background, and I’d love to hear: how is it different today working on these projects compared to the eighties and nineties?

T. Scott

Sure. Of course, back then I used to drive my dinosaur into work and pull up my stone tablets. No, I’m sorry, not quite that old. But what’s been most interesting to me is that the general nature of the projects has remained very similar, and a lot of the skill sets used to solve those problems have remained the same.

What’s really changed is that the pace of advancement has gotten a great deal faster. I can safely say that the industry has changed probably more in the past seven years than it did in the 23 years prior to that. And what I mean by that is there is a much larger focus these days on technology and software development, as opposed to what used to be much more of an approach of problem solving and statistical techniques.

It’s absolutely fantastic that the field has opened up so much that we’re able to bring in so many folks who have a much stronger computer science and software development background. That’s been great. The potential downside is that the nature of the problem solving has shifted much more towards a software development approach than a true problem solving approach. And that’s a double-edged sword.

David Yakobovitch

Because that problem solving would traditionally be the analytics, the dashboards, the visualizations, the predictions with actuaries, a lot of those areas where you’re very statistics- and data-minded. But if I’m hearing you right, now it’s building infrastructure and applications, end products that include machine learning or predictions.

T. Scott

Absolutely. The approach that I’ve always taken has been: I want to truly understand what lessons are to be learned in the data that can be applied to more problems than just the model I’m building. One of the concerns I have, as people have moved more and more towards deep learning solutions, is that people are unaware of the fact that in many cases a deep learning approach is not going to provide you with a more accurate solution to your problem.

And so when you’re making the decision about the trade-off between how understandable my model is going to be and how accurate it’s going to be, a lot of folks are unaware that they’re not going to get that same increase in improvement. Fraud detection is a good example.

David Yakobovitch

So I’m a consumer, and I’m concerned with fraud detection because my Chase credit card is used all the time when I travel, and I don’t really pay attention to the back end. For me it’s: OK, is it a linear regression model predicting that my transaction is not in New York, or is it a deep learning model? Why should I care?

T. Scott

I don’t think you should. The only reason you’re going to care is: why did my transaction get approved or not? Or what can I do to keep important transactions from being turned down?

David Yakobovitch

We don’t get much of that from the banks. They don’t say: David, you flagged five transactions outside of the state of New York; next time you travel, you should let us know. That doesn’t happen yet.

T. Scott

It doesn’t happen a lot. It’s a little bit different for fraud detection versus being turned down for a loan, for example. If you’re turned down for a loan, they’re going to tell you exactly what’s driving your credit score and what’s limiting that score from going up. It’s actually required by the regulators to have a very simple, succinct answer to those questions: what are the factors that kept my loan from going through?
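[Editor’s note: the simple, succinct answers Scott describes are often called reason codes, and with an interpretable model they fall straight out of the math. Below is a minimal sketch assuming scikit-learn; the feature names and numbers are made up for illustration, not drawn from any real scorecard.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical credit features; a real scorecard uses many more.
features = ["debt_outstanding", "recent_inquiries", "credit_history_years"]
X = np.array([
    [0.9, 0.8, 0.1],  # high debt, many inquiries, short history
    [0.2, 0.1, 0.9],
    [0.7, 0.6, 0.3],
    [0.1, 0.2, 0.8],
])
y = np.array([0, 1, 0, 1])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

def reason_codes(applicant, top_n=2):
    """Name the features pushing hardest toward a decline."""
    contributions = model.coef_[0] * applicant  # per-feature effect on log-odds
    order = np.argsort(contributions)           # most negative first
    return [features[i] for i in order[:top_n]]

print(reason_codes(np.array([0.8, 0.7, 0.2])))
# e.g. ['debt_outstanding', 'recent_inquiries']
```

With a very deep network, no such two-line explanation exists, which is exactly the regulatory point.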

David Yakobovitch

So the factors are: I had too much debt outstanding, too many inquiries on my credit report, new credit, and other factors. So it’s, you could say, transparent to the consumer, not a black box like some of these algorithms.

T. Scott

And that’s one of the advantages of having started out in financial services so long ago: there was never permission to use a black box. So when you start from the perspective that your model must be fully interpretable, it really builds a lot of discipline in how you go about solving a problem.

One of the challenges I raise for more advanced problems, where those regulations are not in place, is: if you don’t understand it when it’s working, how are you going to fix it when it breaks? I don’t think there’s been a heck of a lot of focus on that particular issue, and that concerns me with a lot of the more convoluted approaches to model building that happen today.

David Yakobovitch

And we’re seeing in the news, particularly around the ethics of using deep learning models, one pioneer in e-commerce that consumers know well: Amazon, for their hiring practices. They were using AI models to screen candidates and determine who’s a good candidate for a role. But what came out in the news was that the model was excluding all women from being interviewed.

T. Scott

Absolutely,

David Yakobovitch

For some reason that was occurring. And if a consumer thinks: I’m ready to apply for a tech role, but what if this algorithm’s going to exclude me from interviewing, should I not even apply? What I’m thinking of is: how do consumers protect themselves, or arm themselves, against how AI is disrupting our lives?

T. Scott

That’s going to be a question that has to be raised with consumer groups. They need to be oriented not only to protecting the rights of consumers, but to increasing the requirements on the transparency of the model. Because in most cases it isn’t at all a sinister act from organizations, trying to do something that will have a negative impact on folks; many times they themselves don’t understand what’s happening with their own algorithms. If we allow organizations to have an impact on consumers without understanding they’re even having that impact, that’s a big issue.

David Yakobovitch

The algorithm Amazon’s working on is so complex, and no one in their right mind would ever exclude women from the interview process. So it’s not like someone said: I don’t want women here; Me Too movement, let me stir this up. It’s that the algorithm just started training and changing over time. And the question, as you just brought up, Scott, is what we as consumers do. Do we create ethics groups? Do we create courts?

Do we stand up to corporations? Is it humans versus robots? 

T. Scott

I don’t think it needs to be that much of a fight. It needs to be oriented more along these lines: someone is going to be reviewing policies for what counts as fair hiring standards, and there needs to be what they use in financial services, which is called an adverse impact test.

And what that means is: regardless of why the algorithm was accidentally excluding women, it doesn’t matter; it needs to be fixed. So regardless of the intent of the algorithm or the model, if it has an adverse impact on a group, it’s not allowed to move forward. By making that small requirement for anything that’s going to have a broad-based impact on society, such as hiring, accepting job applicants, or approving someone for a loan, anything that’s going to have a societal impact needs to be able to pass that adverse impact test. If it can’t, it should not be allowed to go forward.
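[Editor’s note: one common way to operationalize the adverse impact test Scott mentions is the four-fifths rule from US employment guidelines: no group’s selection rate should fall below 80% of the most-favored group’s rate. A minimal sketch with made-up counts:]

```python
# Four-fifths rule: flag adverse impact when any group's selection
# rate falls below 80% of the highest group's rate. Counts are made up.
selected = {"women": 30, "men": 90}
applicants = {"women": 100, "men": 150}

rates = {group: selected[group] / applicants[group] for group in applicants}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    verdict = "ok" if ratio >= 0.8 else "ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```

Note that the test looks only at outcomes; as Scott says, the intent behind the model never enters into it.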

David Yakobovitch

And then the question being begged here is not just: here’s this litmus test, this adverse impact test to pass. It’s: who is that regulator? Who is that board to say the standard is rigorous enough? Is it going to be a Six Sigma standard? How would that be assessed?

T. Scott

It depends on the industry and the application. In financial services, a lot of those regulatory boards are already in place and a lot of regulation is already in place. There’s a long history of that going back to the sixties, and in many ways it has served us well.

In terms of hiring, I don’t have the level of expertise in the legal arena to determine where it should be passed or how it should be passed. It’s more along the lines of making folks aware that it’s an issue that needs to be addressed.

David Yakobovitch

That’s right. In the financial space, I actually got started in the actuarial field.

I did a lot of work with Aflac on loss experience monitoring. And it was actually around women, women who were working to adjust their insurance premiums based on health factors and risk factors. That work was traditionally manual in the actuarial world. But there was, and I recall there still is, regulation there that if we’re changing these rates, the rates have to apply across the board; they can’t discriminate against a certain class.

And so what I’m hearing is that this exists today in finance and this exists today in education, but not with the AI element. And because technology has changed so fast since 2012, what can we do for these regulations to keep up?

T. Scott

In some ways we as an industry, those of us who work in AI, are putting ourselves on the horns of a false dilemma. By that I mean there are many folks engaged in building models who stubbornly believe: either I have a completely black box and an accurate model, or I have a transparent model that’s going to be terrible. My experience has been that there are very few applications where you’re going to see orders of magnitude of improvement from a black box model.

So for years and years, when I started in the industry, I was terrified that I was going to use the wrong algorithm to solve a given problem. So out of that fear came a healthy respect for testing every algorithm I could get my hands on.

And what I found is that in many cases, the law of the multiplicity of good models proves out: if you have your data prep done correctly and you have your problem defined, there are all kinds of algorithms that can solve a given problem with very similar rates of performance. A lot of those are a lot easier to understand and explain to others than some of the more black box approaches, like a 400-layer deep learning network. If you need to be able to identify kittens in YouTube videos, you’re probably going to need something like that. If you’re trying to decide whether someone might be qualified for a job or a loan, you’re going to find that in most cases those techniques are a bad fit for the problem.

It’s not that one algorithm is good or another algorithm is bad, or that you have a binary choice between explainable and working. It’s trying to figure out which tool is applicable to the job. If I go to build a house and I have the world’s greatest power drill, that’s fantastic whenever I need to drill a hole. But if I need to hammer something into a wall, the power drill is not going to help me very much. Regardless of whether it’s a great power drill or a lousy power drill, it’s the wrong tool for the job. And we have a responsibility to make sure that if we’re using something that is particularly opaque in terms of interpretability, we have a really good reason for using it.
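[Editor’s note: the multiplicity of good models is easy to check for yourself. A minimal sketch using scikit-learn and its bundled breast cancer dataset; with the same data prep and problem definition, an interpretable model and two ensemble methods typically land within a point or two of each other:]

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logistic regression (interpretable)": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# Same data prep, same problem definition, several algorithms.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC {auc:.3f}")
```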

David Yakobovitch

That makes sense. And that’s why, with a lot of the applications we’ve been using as consumers, one could notice these errors: Amazon Alexa, OK Google, Facebook Portal. You’ll notice with these commands that they don’t always work perfectly, whether it’s a result of an accent, a phrase, or a colloquialism. But now, as new models have been experimented with, and recently there have been some breakthroughs from Google and Stanford, it’s getting better. For it to bear fruit, we want to be willing and open to trying different models.

T. Scott

Absolutely. I’m trying to remember who had the great quote that statisticians, like artists, have the bad habit of falling in love with their models. Whatever their particular approach is tends to be the approach they want to use for everything, because it’s more convenient, and you become very comfortable with an algorithm and a methodology. But you really have to enforce discipline on yourself to make sure that really is the right algorithm or model for the problem you’re trying to address. And in many cases, there’s something very interpretable that will get you to the same level or better than something completely uninterpretable.

David Yakobovitch

Now, Scott, although we’re getting quite technical here, I want our audience to know that both of us are also involved in the education space, and part of this is teaching students how to effectively interpret models and which one to use for projects. So the question I’m going to ask you is this: if I’m someone working in advertising or retail and I’m concerned that AI is coming in and jobs are being eliminated, how should I best re-skill or up-skill myself? What does the workforce of tomorrow look like?

T. Scott

Let’s start off with the small questions here, shall we?

David Yakobovitch

It’s a big question. That’s right. 

T. Scott

Being aware that models are out there, and that models are being applied to different situations, is the first step. There are certain fields that are going to be very highly impacted by AI and are going to see a reduced number of available positions. Right at the moment, if somebody is just starting school, I would not recommend radiology as their best first choice, because so much of computer vision is taking over certain aspects of that field…

David Yakobovitch

I spoke with a doctor this year, and they were talking about how brain scans are now almost a hundred percent done by the robot. The robot and the algorithm do all these scans and determine if a sample of tissue is malignant. There are still a couple of humans who will look at these scans and assess them to confirm that accuracy. But whole divisions have already started down this path.

T. Scott

That’s true. And there are certain fields where that’s going to be a big issue. But remember, while there’s been a lot more press and attention from the media on AI recently, the use of decision systems goes back to the 1940s, and the term artificial intelligence was coined not long after World War II. So it’s not like all of this is new; the applications are just much easier to build now. But all of the folks who are afraid that there are going to be no jobs left 10 years from now are overestimating what AI is going to do. It’s going to help make better decisions, but what’s more likely to come up in the future is augmented intelligence, where people are able to use computers better as tools to solve existing problems, as opposed to computers replacing them.

David Yakobovitch 

And augmented intelligence, this is a phrase that’s only been talked about in the past couple of years. It’s bridging the gap: humans and machines working together, solving problems together. In the radiologist example, there are still a few of them, but they’re using the power of the algorithm, which can scan through thousands of these images in seconds, versus humans, who would take days. So that is the essence of getting quicker results.

T. Scott

The joke I tell amongst my friends is that when it comes to artificial intelligence, I’ve seen a lot more of the artificial than I have of the actual intelligence. I don’t regard information systems as smart so much as fast.

What I mean by that is they’re able to make a number of calculations and decisions in a very short period of time, much more effectively than someone sitting there with a piece of paper and a pencil. That’s very different from waking up to have Rosie the Robot do all the housework for the day. There are different levels of applications of AI in different aspects of life that have been happening for quite a while and are already part of a lot of everyday activities, whether you’re using Google Maps or Amazon Echo or any of those types of tools. At the same time, those who fear that everyone’s job is going to be replaced in 10 years? I just don’t see it.

David Yakobovitch  

So then where do you see the hype, or the overblown reaction to AI, where people have a reaction that isn’t necessarily true, maybe as a result of the media? Are there a couple of examples you could share?

T. Scott

Sure. Back around 2010 there was one of the big breakthroughs in research, with Google engineers suddenly able to take on very complex tasks such as computer vision: being able to identify a face in a photo. It was a great example. It’s very hard to do that with traditional methods.

And so when you see the advances in technology and deep learning, which is basically just very complicated neural networks: you absolutely needed that, there are great solutions for that problem, and that’s fantastic. But what happened is people then assumed that all types of problems could immediately be solved by artificial intelligence.

And if you follow the trade publications where Google publishes, even since October of 2018, they are still very invested in artificial intelligence as a technology, but they are very much backing away from the theory that artificial intelligence is going to be the solution for all information problems.

David Yakobovitch 

I see. And is there a reason, perhaps we’re conjecturing here, that organizations like Google and Amazon and Facebook might be backing away from AI being the end-all, be-all? Is it perhaps that we were a little too optimistic about how quickly these solutions could be realized?

T. Scott

It’s very easy for anyone to tell a story that is an extreme: AI is going to take over everything, or AI is completely useless; AI is going to replace all our jobs, or AI has no applications. Those stories are much easier to tell than it is to say that AI has some great applications in very specific industries for very specific problems.

And it’s doing a wonderful job there, and it’s also not doing terribly well in other types of applications. It’s a much more nuanced message, and if one is only given two minutes on a nightly news program to explain an entire field, that’s hard. So what tends to happen is people walk away from one of those discussions with a much more simplistic understanding of the situation, one that can be easily conveyed in two minutes. Which goes back to why education is so important.

I don’t think people need to run around knowing how to calculate an activation function for a neural network in their heads. That’s way overkill.

What people need to understand is that models are out there, that they help make decisions based on conditions, here are some of the things they look at, and that you have to be careful with the outcomes. That’s much easier to get across to people, and that’s where a lot of the focus in education should be, so that people understand the good and the bad.

David Yakobovitch  

And switching gears to education, since we’re both involved in that space: for listeners here today, Scott has taught in the AI and data science realm with Johns Hopkins, the University of Maryland, Harvard University, and other organizations. So you’ve seen a lot of students: different students, different communities, different backgrounds, and different programs trying to teach these very dense topics. What have you seen work? Or what hasn’t worked?

T. Scott 

One of my favorite stories is how I used to teach people to build a model versus how I teach today. Back in the beginning, I would try to tell people all the theory and the different types of algorithms and what they did. And I found that a lot of the folks who came to the class had been assigned by their manager; they were actually required as part of their job to come. And they’d come into the class either terrified that it was going to be impossible to understand, or afraid that if they pushed the enter button on the computer, things were going to start to melt down, or some type of horrible reaction was going to happen.

And I wasn’t getting very far. So the way I teach it today is I go in and say: here’s a program, here’s a data set. Push this button and this button, look at that, and then press enter. Congratulations, everyone: in the course of four minutes, you’ve just built your first model. What? What do you mean? I don’t understand. Once you’re able to get people past that wall of fear over the ability to create a basic model, it’s much easier to get people further down the path faster.
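[Editor’s note: the first-model-in-four-minutes exercise really can be this short. A minimal sketch, assuming scikit-learn and its bundled iris dataset:]

```python
# A complete first model: load data, split it, fit, score.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
print(f"Accuracy on held-out data: {model.score(X_test, y_test):.2f}")
# Congratulations -- you've just built your first model.
```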

People let their own concerns and fears and lack of self-confidence really get in the way of understanding what’s important. By the time I finished, I had regular assignments where I would take folks, primarily in marketing departments, and within a day and a half have them go from having never seen a predictive modeling program before to building their own fully functional models. A day and a half.

A lot of it has to do with focusing on what’s truly important: what are the areas that can really cause problems, and how to avoid them, rather than trying to sit people down and have them memorize a bunch of formulas. That’s a much taller mountain to climb.

David Yakobovitch 

I see. So it’s taking that visual outcome from the beginning: this is really a chatbot, or this is how you determine cat versus dog, or this is how the price of a house loan was calculated. Then that student has an aha moment, and that curiosity is sparked to dive deeper into what makes it work.

T. Scott 

I drive to work, about 20 minutes each way, every day, and that seems to work out pretty well for me. But if you asked me to explain the compression ratios of an internal combustion gasoline engine, we’d all be in trouble, because I have no idea.

What’s really important for me to know is what stop lights and stop signs mean, where the brake is, what the steering wheel does, how fast I should be going, and how to respect other drivers on the road. Those things are much more important for me to understand. And it’s a similar path to trying to understand the world of predictive analytics.

There are a couple of components that you really need to have a firm grip on. There are some horrible mistakes that you can make, but many of them are very easy to avoid. Quite frankly, some folks in our field, in the very beginning, really seemed to enjoy the fact that they understood a hidden language that no one else understood.

And there developed a sort of intellectual arrogance: I know how to do this and you don’t, and I’m very happy about that because it makes me feel good. We need a lot fewer of those people. We need a lot more people in the field who are trying to share the wealth of knowledge.

David Yakobovitch  

It is all about sharing the wealth, as you just mentioned, and that wealth is something consumers are going to see as their lives are potentially eased over the next few years with more AI applications. A lot of your work today is with the Strategic Artificial Intelligence Lab, SAIL, that you set up at Legg Mason. With that work, I’m curious, from what you can disclose: are there any research projects you’re working on in the lab that you think consumers will see over the next few years, and that will start to benefit them?

T. Scott

A lot of the work we do now involves trying to do the type of education that you and I were just discussing, and open-sourcing it to the community as a whole, so that anyone can start to learn the basics of how to build their own models, or how to use models, or how those models are going to impact them. That’s one area.

The other area is that many folks are unaware that they’re already being touched by artificial intelligence quite a bit today. So what are the applications? What are they already interacting with that they may not even realize is artificial intelligence? The last area is somewhat concerning, because folks chase the stuff that’s supposed to be truly, God forbid I use the word, sexy in the field: they want to learn about the new algorithm, and they want to learn about the new technology, and they want to learn about the latest type of GPU system they can run their model on.

Those are all well and good, and those are important things, but I have found that the best increase in results usually has to do with setting the data up correctly and defining the problem correctly. I tell people that, and they sort of look at me like: that sounds really boring. I don’t find it boring, but I understand why people do. That’s where a lot of the power is.

And we need to shift the pendulum back a little bit from the super advanced technologies and algorithms no one has ever heard of before, and whether the activation function should use ReLU or a sigmoid, and go back to: what’s the problem we’re trying to solve here? What’s the framework we have to apply this in? How much time do we have to get this out? How much is this going to cost?

And is there an easier way to do it? We need to come a little bit closer to that side and move away a little bit from all the gee-whiz-bang new algorithms that come out every day.

David Yakobovitch 

This is so interesting. We have platforms like arXiv; there are hundreds of machine learning research papers coming out all the time, and researchers claim: I want to use this new algorithm, I’m going to get that 1% boost and have a huge result. But what I’m hearing from you, Scott, is potentially a contrarian viewpoint: let’s let it sit in the wild, let’s see how this algorithm really performs. Perhaps there are more important things, such as translating business solutions today, rather than optimizing an algorithm to the point that it’s a waste of time.

T. Scott 

Absolutely. And there is a point of diminishing returns in modeling that arrives very early in the process, and people don’t understand that. I don’t think people understand the vast quantities of data and time that are required to use an advanced algorithm like deep learning.

If you’re going to have a 600-layer neural network, you’d better be prepared to have millions and millions, if not billions, of records, and months and months of training time, to get that set up correctly. For a particular type of problem you may need that. But for most of the problems I’ve dealt with across 30 years, a lot of that is bright, shiny objects that are more likely to waste time and effort versus getting the job done with something simpler.
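[Editor’s note: a back-of-the-envelope count shows why. The sketch below tallies the weights in a hypothetical 600-layer dense network and applies a common rule of thumb of roughly ten training examples per parameter; the widths and the multiplier are illustrative assumptions, not a law.]

```python
# Rough parameter count for a fully connected network:
# each layer contributes (inputs x outputs) weights plus biases.
def count_params(widths):
    return sum(w_in * w_out + w_out for w_in, w_out in zip(widths, widths[1:]))

# Hypothetical: 100 inputs, 600 hidden layers of 512 units, 1 output.
widths = [100] + [512] * 600 + [1]
params = count_params(widths)

print(f"{params:,} parameters")                              # ~157 million
print(f"~{10 * params:,} examples by a 10x rule of thumb")   # over a billion
```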

David Yakobovitch  

So going on this theme of bright, shiny objects: are there viewpoints you hold, as an artificial intelligence researcher and educator, that might be contrarian to what most people hold, that you’d like to share?

T. Scott  

Definitely. And I want to be clear that algorithms are extremely important, and developments in algorithms are extremely important. Understanding that there needs to be a fit between the problem and the solution is the piece that is missing. There are some problems in the world that need something extremely sophisticated and extremely advanced, and new research is incredibly important in that area.

Just be sure that you are matching that solution to a problem that really requires it. I don’t recommend that anyone try to balance their checkbook with a neural network; it’s not a good use of your time. And people need to have that awareness before they jump into something that’s much more complicated than what they need.

David Yakobovitch 

You put it very well: start simple, and then work on complexity over time. And we’re seeing the AI industry continuing to grow, with many startups and organizations, a lot of applications, a lot of fragmentation. Are there any trends you see occurring in the industry today that would be particularly noteworthy for a consumer to pay attention to?

T. Scott 

It has to do with going back to something even more basic, which is: understand what data you’re sharing with folks and how it might be used. I am not one of those people who says you should never do a transaction over the internet or that you should never join a social network. But remember that whatever a model or artificial intelligence is applied to, it’s going to be based on data.

How much of that data is coming from you? And are you comfortable with that level of data being provided to someone else? That’s the starting point. It’s not about understanding the most sophisticated artificial intelligence approach. Start with: what do you feel comfortable with? What level of control do you have, before you start giving away all of your information for free to everybody? Because it can be applied well, and it can be misapplied. Understand that first.

David Yakobovitch 

One way it could be applied well: you’re on the Facebook platform and you get a recommendation to add someone as a friend, a friend of your friends whom you’ve never met. Then you either add them and see them at a future event, or you simply see them at the event.

And you’ll say: this is that name that’s been constantly recommended to me. That must be the AI that’s been finding, without getting too technical, these nearest neighbors, these relationships over time. So that could be a benefit we don’t even realize; it’s just part of our everyday use of Facebook.
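[Editor’s note: the friends-of-friends recommendation David gestures at can be sketched in a few lines. A toy illustration over a made-up friendship graph; production systems layer far more signal on top:]

```python
from collections import Counter

# A tiny, made-up friendship graph.
friends = {
    "david": {"alice", "bob"},
    "alice": {"david", "carol", "dan"},
    "bob": {"david", "carol"},
    "carol": {"alice", "bob"},
    "dan": {"alice"},
}

def suggest_friends(user, top_n=3):
    """Rank non-friends by how many mutual friends they share with user."""
    counts = Counter()
    for friend in friends[user]:
        for candidate in friends[friend]:
            if candidate != user and candidate not in friends[user]:
                counts[candidate] += 1
    return counts.most_common(top_n)

print(suggest_friends("david"))  # [('carol', 2), ('dan', 1)]
```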

T. Scott 

Data in general, and artificial intelligence specifically, just is. It isn’t that it’s good; it isn’t that it’s bad. It can be used in positive, good ways, and it can be used in negative or detrimental ways. So trying to develop a philosophy that all of it is good or all of it is bad is the wrong way to think about it. Again, it comes back to: what makes the most sense for me and my needs? What are the potential downsides, and am I comfortable with them? Just make the decision that’s right for you.

David Yakobovitch 

AI just is. Are there any talking points or things that we missed that you also wanted to include?

T. Scott 

The biggest reason why artificial intelligence seems so mysterious is that we do a much better job of researching artificial intelligence; that’s where we put our emphasis and our focus. And we do a pretty lousy job of explaining and interpreting artificial intelligence for folks. If we would start putting more of our effort into making it understandable and transparent, a lot of other problems would go away and a lot more comfort would develop quickly. Stop trying to turn everything into a bright, shiny object or making it seem more complicated than it needs to be.

David Yakobovitch 

It could be as simple as a company like Facebook, which we’ve talked about today, saying: AI is used in Facebook to make your lives more connected. Whether that’s a friend request that you see, a recommendation of someone to message on Messenger, or a photo to tag: here is how we connect your lives, and that’s how we use AI.

T. Scott 

Absolutely. And it helps if we stop, take a step back, and use a different term. For example, if we talk about a model: people use models all the time and have no idea that they’re doing it. Say you’re trying to give a friend directions, and you’re sitting across the table from them over lunch in the cafeteria, and you draw a really quick map: you go here, then take a right on this street, then make a left on that street, and so on.

You draw that quick map in a couple of seconds and hand it over to them.

You’ve just given them a model. It’s a representation of information that’s used for a specific purpose. That’s a model. We insist on trying to make everything look like it’s going to come out of the next version of Blade Runner, and that’s very intimidating to people. It’s not that this topic is completely unfamiliar to people; we just do a really poor job of explaining how people are connected to it and use it already.

David Yakobovitch 

Our life is a model. Us talking today is a model. Every experience is that. Some wise words being shared.

T. Scott 

Language is a model of thoughts and ideas, so if you speak or listen, that’s part of a model. If you hear a song, that’s a model represented through music. There are all types of models that we use all the time. The problem is, when people are in such a hurry to impress one another with the complexity of the model, they miss the basic point of what a model is, what it does, and why it’s important to folks.

David Yakobovitch 

So a takeaway for consumers listening to today’s podcast: the next time you’re engaging in conversation or working with technology, think about what you are doing that is repeatable, that has a process or framework to it, and whether a model could be fitted to it to standardize or improve that process. Perhaps that’s what the AI we’re all working so hard to bring into the world is set up to do.

T. Scott 

And in the famous words of Eric King: stop trying to balance your checkbook with a neural network.

David Yakobovitch 

Hire an accountant or bookkeeper. 

T. Scott 

Thanks

David Yakobovitch 

Scott, thanks so much for being with us today on the podcast. Appreciate your time and appreciate your support as we’re working to bridge the gap between humans and machines in this age of acceleration.

T. Scott 

Thank you so much for having me. I really enjoyed it. 

David Yakobovitch 

Awesome. That’s it for this episode of HumAIn. I’m David Yakobovitch, and if you enjoyed the show, don’t forget to click subscribe on Apple Podcasts or wherever you’re listening to this. Thanks so much for listening, and I’ll talk to you in the next one.