Today’s guest speaker led the development of the Unreal Engine at Epic Games and the growth of the Gears of War franchise, and inspired development on the Fortnite blockbuster. He has advised Unity and the Game Developers Conference, and has a new explainable AI startup called Diveplane.

Listen in as Dr. Mike Capps and I discuss open and closed AI systems, why the k-nearest neighbors (k-NN) algorithm could be better than neural networks, and our take on security as it will impact understanding AI in the future.

David Yakobovitch

Welcome to HumAIn. My name is David Yakobovitch, and I will be your host throughout this series. Together, we will explore AI through fireside conversations with industry experts, from business executives and AI researchers to leaders who advance AI for all. HumAIn is the channel to release new AI products, to learn about industry trends, and to bridge the gap between humans and machines in the fourth industrial revolution. If you like this episode, remember to subscribe and leave a review.

Welcome back everyone to the HumAIn Podcast. My name’s David Yakobovitch, and today on our show, I have our guest speaker, Dr. Mike Capps, who is with Diveplane. Mike has been involved in the gaming industry and the AI industry for many years. Prior to his current new venture, he had a legendary career in the video game industry as president of Epic Games, the maker of blockbusters such as Fortnite and Gears of War. He has also served on the advisory councils for Unity and the Game Developers Conference, and currently does a lot of work both in explainable AI and in taking video games to the next level.

Mike, thanks for being with us.        

Mike Capps

Hey, it’s a pleasure. Can’t wait. 

David Yakobovitch

This is so cool for you to be joining us here in May 2019. I got back from San Francisco a few weeks ago, where I was at the Game Developers Conference, and you’ve had the opportunity to do a lot of the work in the industry. What are some of your takeaways from what you saw this year and where the industry is going?

Mike Capps

I wanted to ask you what you thought of our industry. It’s a hobby industry that has turned into a massive industry, and that’s exciting. It also means it’s still close to its roots, so nerds with blue hair who just want to make cool games are mixed in with multi-billion-dollar companies. It’s a really interesting space, and they’re dealing with the rapid onset of AI, just like every other technology business.

David Yakobovitch

That’s super exciting. One other interesting thing: after GDC, I got to see demos of Amazon Web Services’ new DeepRacer product, which, depending on when this episode goes live, may or may not be public yet, so I won’t talk too much about it. But it’s super cool how they’re trying to democratize AI by having a racing car that you can program and control and see go around the track. The reason that’s interesting is that one of your big missions is having explainable AI and understanding AI, and for consumers this is often quite challenging.

Mike Capps

I’m excited about anything that drives AI earlier in our educational process. The notion that you need at least a master’s from Carnegie Mellon in order to be successful in tuning neural nets, that’s just not going to work. And you look at the education systems in China or Finland, where they’re putting in a base-level understanding of what AI is, what it can do, and what it can’t do; it’s huge. So I’m excited about DeepRacer, because anything I can put in front of my little boy that’s fun and is going to help him be successful in the future, how can you not love that?

David Yakobovitch

I agree with that as well. I’m involved in the education space, and some of that’s boot camps, some of that’s accelerated certificates. What do you see as the future of education, if it’s not four-year degrees?

Mike Capps

What do I see it as? Gosh. Well, the video game industry is a weird one in that respect. I have a PhD in computer science, but I so rarely hired PhDs, because they were the least likely to be focused on just making stuff fast.

It doesn’t mean that the academic pursuit isn’t important. And then on the art side, I’d say of the artists I hired, hundreds of artists, half of them didn’t have a degree at all, because it came down to a mix of nurtured talent and a set of skills that you could get at home. And that shifted into world design, where we at Epic, with the Unreal Engine, or Unity, would put professional-grade tools out for free for kids to use at home.

You don’t need an architecture degree to be able to design cool levels for Fortnite. You just need a computer, or even an iPad now, and you’re using the same tools they’re using at the best and brightest architecture firms in the world.

I love the discipline that comes from a well-rounded degree system, but you just see so many creative and technical fields where it doesn’t make any sense anymore. So, that’s not quite an answer about the future. In the U.S., everything moves so slowly in the educational system, because it’s so widely distributed. You see technology companies give up over and over again in the education space, because they just can’t get enough users quickly enough to make it. You should be making Snapchat, not an educational tool that kids love, because no one will pay for it. So I’m depressed a bit about our distributed system for education in the United States. I guess there you go.

David Yakobovitch

And the education system has consolidated a lot, but what I find really interesting is how it’s gone very much into augmented reality and virtual reality, even including these different systems that adapt to your learning. One of my colleagues actually went through Facebook’s Oculus Rift boot camp in New York, and now they’ve built an education tech startup, among many other companies in this space that are using Unity. They’re actually programming in C# and using all these assets. It’s super interesting, because you’ve had the experience of being involved with both Unreal at Epic Games and then advising Unity, and it seems that these two organizations are leading the pack, very much different, but very much the same. How would you break it down? What’s the future of development with those systems?

Mike Capps

Just as software development has been trending for 20 years or more, it’s about standing on the shoulders of the people before you. What you can accomplish right now with two or three smart kids and a Unity engine is something that a hundred of us couldn’t do 10 years ago. In terms of freely available art assets, they can just plug and play, and all the tools you need are built in; there’s AI already built into the Unity engine that you can use to drive avatars, an AR toolkit.

I built VR software back in the late nineties, and the amount of time it would take just to present an image twice, side by side, in a stereo headset: there was a lot of work, we were doing math to solve that problem. And now it’s just free. You can buy a headset for a hundred bucks that’s really good. So I love that democratization of capability. It has led to three folks being able to build a product that a hundred thousand people want, and you don’t have to be a blockbuster in order to be successful in the games world anymore.

I love that, because it means you can build something reasonably in the education space. You don’t have to sell 3 million of them at $60 a piece in order to feel like it was worth your time and investment. To answer the question about how it came about, it’s an interesting story. The first project that Epic Games built, when Tim Sweeney was the founder, was a word processing tool. Essentially he was writing useful software, quote unquote, and started mucking around with it as a little maze tool, like one of the old adventure games, and ended up building an engine called ZZT, which was a game editing tool. And then he built a game in it, Town of ZZT. It wasn’t particularly good; he’s not a game designer, he’s a technologist. But the tool set came with it. So the very first product we built in ’91 was a hobbyist toolkit, and we put it out, and then people made better adventures with it. So it’s kind of in the DNA of the company that we would take our toolset, ship it along with the games we made with those tools, and we found talent that way.

We found licensing businesses that way, and suddenly EA had 16 games, at a million dollars a pop, that they were building on our technology, and it grew into this amazing business. But it all came from the beginning of: we might not be the best game designers.

Let’s put the tools out and see what other people can do with them. And so many people I hired started that way, messing with the tools that we put out for free. So there’s the story.

David Yakobovitch

And it’s incredible, because, not to focus too much on Epic Games, but when you were involved there, you got to see Unreal completely evolve as a platform, and you got to take one of the first big billion-dollar franchises, Gears of War, all the way to life and see that whole fruition.

Now at Epic Games, Fortnite is the new thing, in its new era. But what were some of the learning experiences from Gears of War that you and the team worked through that may have inspired the team behind the new Fortnite franchise?

Mike Capps

We worked on Gears of War for a decade, and it’s still going; it’s been sold off to another team. So what lessons did we learn? Sleep is really important, and you don’t do your best work if you’re not sleeping well. That was a game that we really built to be a big franchise; we were thinking about it from the start. There are certain things you can do if you’re planning on the book being a trilogy: you write it differently. So we were very focused on building a world that was going to be big enough to support a huge launch.

I’d say one mistake we learned from was: throw away the first pancake. We built Gears of War for a year or two and it just wasn’t coming together, and we built this new engine, Unreal Engine 3, that was so much better. We ended up throwing away all of the game we had built, entirely; actually gave it away for people to use in our toolset, and built Gears of War again on top of Unreal Engine 3. And amusingly enough, that exact same thing happened with Fortnite.

We started working on Fortnite in Unreal Engine 3. The cartoon style came in part because we wanted to be able to move quickly on our old tech, and this was never meant to be a sexy game. Unreal Engine 4 was in the works, and that was going to be the sexy, hot game engine that Nvidia would use to sell their next technology. And at some point we realized we were spending all this time keeping our old engine together.

Shouldn’t we just port this over to the new engine? And we did, and Fortnite still has that cartoon style, which came purely from: I don’t want anybody to think this is as good as we can do. But where that landed is, it’s a game that’s very accessible to both genders, or I should say all genders, and you’ve got this great 50-50 mix in the player base. A lot of younger folks like it because it’s cartoony; adults like it because it’s fun. It was a brilliant business decision that came entirely from technology decisions.

David Yakobovitch

What’s fun is that Fortnite today has also become very educational. Whether you love it, or you’re in the school of teachers who sometimes get stressed out that a lot of their students are playing Fortnite, getting all the skins, and having all these epic battles, it has become educational. The platform has evolved into creating your own designs, and it’s become very interactive, if you will. Do you think in the future Fortnite could be a platform where, yes, there could be games, and those games would be very educational, such as: let’s put in different items to discover that are custom sprites, and very involved, maybe even for elementary and secondary school students?

Mike Capps

The hard problem Epic has is they’re shoveling diamonds into big trucks as fast as they possibly can, so it’s a little hard to think about doing anything other than precisely what they’re doing. But with the engine, we worked really hard. I’m an ex-academic.

We worked really hard to open up the technology to academia, to use for nonprofit projects. So the focus at Epic isn’t on education, but it’s always been: how do we facilitate? So I can see, as you see more user-created content... maybe a good way to say it is: I went to the App Store the other day on my phone, and the top item in the App Store is five free levels you can download for Fortnite, above all the content that they’re willing to sell you and take the 30% cut for and all that. It’s: download free stuff in Fortnite, because they know you’re going to buy a hat, and it’s going to be 20 bucks, and six goes to Apple. So as that community gets bigger and bigger, of course there’s going to be more serious content coming.

David Yakobovitch

I’m looking forward to that, and perhaps there will even be other games, whether on the new Unreal Engine that you talk about, or Unity, or even new engines that will pop up.

There have been other ones in this space that are still in their early phase. I’d like us now to dive into where you’re at today. So, I love Epic Games. Actually, something my audience doesn’t know, for fun: whenever I get some free time, I love to hop on Twitch and watch the Fortnite streamers like Ninja and some others, and just see what’s going on, because the new generation is so culturally relevant, and they have so many trends that can empower learning, empower education, and empower the future of startups.

Mike Capps

I was going to say, education comes from engagement, and if you don’t understand what engages people, you can’t educate as well. So I love that you try to follow that and be, I hate to say relevant, but it’s essentially that, right? It’s understanding what it is that is going to connect with that audience.

David Yakobovitch

I probably will not be a streamer like Ninja, because I’d have to dye my hair a certain color for that to happen. Let’s turn our gears toward Diveplane, which is your new venture. Why don’t you let us know what’s going on there? You’ve been involved for a couple of years, and you’re scaling this in North Carolina, where you’ve built a lot of your career.

Mike Capps

I retired for a few years, had kids, and it was a great time to be able to get out. I mean, the games industry is wonderful, but it’s brutal. It’s a lot of work trying to bring passion projects together to an exact date, with the marketing campaigns and the commercials and the product, everything landing at once. So I stepped out of that for a few years, and there were a couple of problems that I care a lot about. One is large-scale virtual reality, and the trend that comes with it: if it’s Oculus and it’s Facebook, that sounds great, but what happens if you’re working and playing in a virtual reality which is run by Facebook? Suddenly the economy of our lives is hidden inside of a social network. So trying to build an open framework for interchange in VR is pretty important; I worked on that with some of the big players in the space, and I’m proud of that. But the other issue that I cared a lot about was AI and superintelligence, and that notion makes me nervous.

I have no idea if superintelligence is a thing. I’m sure you’ve debated it yourself, but I feel like if we’re going in that direction, starting with black-box AI and then making it more and more powerful and more and more a part of everyday life is the wrong way to go. And right in the middle of that, my brilliant friend Chris Hazard finally told me what he’d been working on.

I’d been mentoring him for years on this software company he had that was working in the defense and intelligence space, but I didn’t really know what he was doing, because he was in the defense and intelligence space. So he finally told me about the software he was working on: he’d built an explainable machine learning platform that began in the military, where they would want to run an exercise against a red force, the opposing force, run by AI. But if the red force wins, you need to be able to figure out what it did and why, so you can react to it and learn. Imagine that the military’s job was to win at Go, and the folks at DeepMind said: here you go.

Here’s AlphaGo Zero. It’ll beat the pants off you again, but we can’t tell you why, and you’re never going to get any better. Well, that doesn’t do them any good. So Chris’s tech is specifically designed to say: here’s why I took these steps, here’s why it worked, and then to help you figure out how to beat it. And I was like, that is really important, and we need to take it to the commercial sector and put it in front of everyone we can. And that’s when Diveplane got formed. It was about a year and a half ago that we moved into an office. It’s about 20 of us now, a couple of PhDs and some really smart folks, who are productizing in healthcare and regulated businesses. They’re the ones that first see the need for an explanation of why this surgery authorization was turned down, or why this loan application was turned down; they know they need that if they’re going to use AI. But we’re still active in the military space and some others. So there’s the overview.

David Yakobovitch

And it sounds like keeping the humanity in AI, which is your motto, is what you are doing each and every day with your PhDs and with your business team, who are trying to actually explain AI. Coming from the finance industry, one of those classic examples is: does a loan get turned down or approved? And often you have no idea. You apply for one with Wells Fargo or Bank of America, and it’s either approved or denied, and that’s the limit. But what if you could empower consumers, and also empower enterprises, to know what steps you have to take to get better, to actually uncover that?

Mike Capps

The first step is just having a discriminator, or a predictor, or whatever you want to call it, that would say: you get this loan, yes or no, and why was it? Oh, your FICO score was too low, or something. I hate to talk about it, but it’s true: if you took 60 years of residential loan data from North Carolina and fed it into a great neural network, on the other side you’d have a highly racially biased system. I’d love to say it’s not true, but of course it would be. It’s going to be looking for proxies of race, like zip code or, what have you, education, whatever it might be that it could catch. And we wouldn’t know; we would have built it based on the best data we had, with no intention of bias, but built a biased system anyway.

What I love about our tech is that first it’ll say: here were the most important features, not just overall for a data set (I can do that), but also for you, for your decision: here were the five most important factors for your personal decision. And that lets you catch the sort of, "biases" is exactly the right word technically, it catches the bias that’s happening systematically inside of a system. And then we can provide nearest contrapositives: what would you have to do to flip this answer?

Like: all you need is another 10K a year on your income statement, and we would have said yes. In whatever direction that is, whatever dimension of the data it would take, it finds the nearest dimensional direction to turn this answer the other way. And that’s what we can take back to a consumer and say: here’s all you need to do to get this to be a yes. And that’s super empowering. It’s not just "raise your FICO," which is a trick. It’s more than that.
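To make that contrapositive idea concrete, here is a minimal sketch of a nearest-counterfactual search. This is an illustration, not Diveplane’s implementation: the feature names, the toy data, and the brute-force scan are all assumptions made up for the example.

```python
import numpy as np

# Hypothetical loan data: [income_k, fico, debt_ratio] -> 1 = approved, 0 = denied.
X = np.array([
    [40.0, 620, 0.45],
    [55.0, 680, 0.30],
    [65.0, 700, 0.25],
    [80.0, 740, 0.20],
])
y = np.array([0, 0, 1, 1])
features = ["income_k", "fico", "debt_ratio"]

def nearest_counterfactual(x, X, y, desired=1):
    """Find the closest case with the desired outcome, and report which
    feature changes would turn this applicant's answer the other way."""
    scale = X.std(axis=0)                      # so FICO points don't dominate
    candidates = X[y == desired]
    dists = np.linalg.norm((candidates - x) / scale, axis=1)
    nearest = candidates[np.argmin(dists)]
    return {f: (have, need) for f, have, need in zip(features, x, nearest)
            if have != need}

applicant = np.array([50.0, 660, 0.35])        # denied in this toy world
print(nearest_counterfactual(applicant, X, y))
# e.g. {'income_k': (50.0, 65.0), 'fico': (660.0, 700.0), 'debt_ratio': (0.35, 0.25)}
```

A production system would flip the model’s actual prediction rather than just match a historical yes, but the shape of the answer ("another 10K a year and we would have said yes") is the same.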

David Yakobovitch

It’s interesting to think about what you have to do to get better. And we talk about black-box and white-box AI, where a black box, traditionally, is hidden behind the scenes, but not a lot of people know about white-box AI. I’m not even sure I like the terms, because we’re talking about race again, and we’re trying to bridge the gap away from those.

I’d love for you to dive into that, but first, the reason I bring it up is I just had the opportunity to watch the movie The Hate U Give, which is a really powerful movie about Black Lives Matter and about police officer shootings and body cams, and how do you know if certain communities are being discriminated against or not? And the reason I bring up this whole example is that San Francisco, California became, last week, the first major city in the United States to actually ban public use of AI.

They actually started banning police officers and public buses and all these systems from being able to take photos and videos, from body cams and all these images, to gather data for AI. And I wanted to get your take: first, what do you think about the overreach of AI? And then let’s get back into explainability.

Mike Capps

This isn’t, oh, I almost said black and white again, there we go; what an overused set of terms. This is not a simple problem. I completely understand why San Francisco would do this. Facial recognition systems are not perfect, and they tend to have clustered error; that is, they’re much more likely to make a false positive for someone whose skin is darker, or maybe for females, or whatever group the data set didn’t have as much training data for. So I completely agree that the technology isn’t ready for making important decisions. At the same time, from a defense or intelligence perspective, you look at Beijing, which has 3 million cameras right now.

There’s no outdoor square footage in Beijing that isn’t covered by cameras, and they’re starting to do it inside as well. And of course, that’s a society in which you don’t get a choice; if they want to put cameras inside, they can. So there’s no privacy in that town.

Washington, DC has probably fewer than 20,000 cameras. And if you’re thinking about it from a security perspective, would you rather be Beijing or Washington, DC when you’re trying to secure your nation’s secrets or the principals of your government? It’s a significant problem. So saying that you can’t have any cameras in San Francisco...

I get it from a personal freedoms perspective, but I have to wonder: are we being competitive, or are we making a mistake in the name of freedom against security? It’s absolutely a trade-off, and it’s a hard one to make. I’m not sure they’re thinking about it from all directions.

David Yakobovitch

That’s a great take, because I live in New York City, and here we’ve also recently experienced a lot of public outreach and outcry on: should we have these cameras on the public buses?

And ultimately it got approved, and it should be interesting. But then another use case that was interesting on this closed AI theme: certain residential buildings changed the locks to go completely digital, with Ring doorbells and all these systems, so you can only have a digital key, without getting permission from the residents. And it’s almost like, what is the recourse that you have?

Because the concern is we’re capturing these images and videos, and yes, on the face of it, they could be used for the security of the building and the landlord and everyone’s peace of mind, but then you always wonder what that closed AI looks like. And I bring it up because, instead of calling it black-box versus white-box AI, and without jumping into what OpenAI is doing, let’s call it closed AI versus open AI. I haven’t heard that phrasing much, but I really like it.

Mike Capps

I get your point, and these are data-gathering systems, and data can be used for good or ill all the time. The landlords could be thinking: how do I improve security for my tenants, because I’m not happy with what we’ve been doing in the past? Or they could be thinking: how do I track how often our tenants come and go, whether they have guests they shouldn’t have, whether they’re behaving appropriately? Or: can I feed this information to our government, or to another government? That’s significant, and I don’t have an answer for it. I don’t have a problem with data. I do have a problem with not knowing how it’s going to be used, and not being able to control how it’s being used. And so far, we haven’t figured that one out: being able to control your data and make sure it’s used in the right way.

I used to have a top secret clearance, and all of the data from that top secret clearance got stolen from the government’s offices, right from the OPM, when they had that big hack years ago. Every bit of my information is out there somewhere. Or, of course, the major credit agencies, same thing. I didn’t allow them to collect the data, and then that data was not handled well and was stolen. So even if they try, even if they want to be responsible, they may not be able to be responsible with it.

David Yakobovitch

What I know we can do to be responsible is to start setting up guidelines and pillars to explain AI. Just a couple of months ago, the European Commission came out with its Ethics Guidelines for Trustworthy AI, with different pillars you can follow, and I know at Diveplane you also have your own goals and pillars for explaining AI and understanding AI. Why don’t you share a little bit more about that from your perspective?

Mike Capps

The terminology hasn’t really been settled yet, so it’s a little tricky: explainable versus interpretable versus understandable. People use them differently; some use them interchangeably. Some use explainable to mean traceable, that there’s some way to trace how a decision was made, but that’s not the same as being able to get a human intuition and understanding of the algorithms involved, and that’s where it becomes about understandable systems to me. A decision tree with a hundred thousand nodes is explainable.

You can say exactly why a decision was made, but you can’t have a human understand why it does what it does, like you could with code, with an algorithm, for example. So our focus is on understanding. We do have traceable auditability in our technology, but our focus is on human understandability. That is: why did you turn left, car, and crash into that barrier? Here are the things that were most influential in my decision.

Here are the training cases that most affected this decision that I’ve made. Here’s why I didn’t go straight, here’s why I didn’t turn right, here’s why I didn’t hit the brakes. And that’s the first step to me: being able to have all that given back to you in the form in which you trained it. So if you’re a doctor and you train a supervised machine learning system, you want that machine learning system to spit back to you the whys in medical-record form, not in some algorithm, not in a graph, not in a UI (we’ve got UI, of course).

But it comes down to: here are the five patients that Mike was most like. Mike shouldn’t get this bariatric intervention surgery, because here are five people just like Mike who didn’t get it. And as a doctor, I can look at that and say: I kind of understand. And then when you give the contrapositives (hey, if my blood pressure had been 10 points higher, or hey, if you were 20 pounds heavier, then we would have given a different decision), again, that’s phrased in a way the human who trained the system can understand, and that’s crucial. And that’s only step one, because step two is fixing it. It doesn’t do us any good to build the machine learning system, find it’s biased, and say: let’s start again from scratch.

I want to be able to go into that car that turned left and say: why did you turn left? And have it answer: remember when Mike was drunk in June and he crashed into the barrier? That’s why I did it. And I want to be able to say: yank out Mike’s data, he’s terrible, he got his license revoked years ago. Yank that piece of data out of the training set. Now, what would you do? And then ask it, and it can tell you. And if it gives the right answer: hey, we made a fix; we can push that out to every other car and be done with it. As opposed to, post-training, having lost the connection from training data to decision, like most neural nets do. So there’s the explainability, that trace; then the understanding of: okay, I get it now.

I understand why and what went wrong. And then editability: being able to fix it. That’s the key to building responsible machine learning systems, open AI systems. So that’s what we’re doing, and we’re slowly scaling our way up. We can’t tackle awesome computer vision problems yet, but we can tackle a whole lot of executive function problems, and we’ll keep growing what we can do.

David Yakobovitch

Now diving into computer vision, even though that’s a work in progress: on one of my most recent episodes of HumAIn, I featured Alberto Todeschini, who is the faculty director for AI and ML systems at the University of California, Berkeley, and we talked for an extensive period of time about GANs, these generative adversarial networks. I bring that up because you talked about the car-crashing scenario, which is this new classic case. People have been talking about stop signs where you put stickers on them and suddenly they actually mean "go 45 miles per hour," and then suddenly a crash happens. So there’s a lot of hacking happening here, which is undoing some of that understanding of AI.

What are your thoughts on this field of AI known as generative adversarial networks? Whether that might be good or bad for the industry, whether it might be just this little toy that people are having fun with, or whether it’s a whole branch of AI that might become big over time.

Mike Capps

I would say that GANs are just one example of an adversarial approach to machine learning, and they’re a powerful one; they’re an unexplainable one, but they’re a powerful one. There are a couple of issues here. The first is that machine learning systems will often train shallowly. The reason that a Tesla can be run off the road with three dots is an absolute lack of the transfer learning that would help it understand not to drive off the road when it sees three dots that lead into another lane. No human would ever make that mistake; I get why their system does. But one of the key problems is that neural nets are some of the most brittle systems against attack. I’m trying to remember, I actually have it right over here: Ian Goodfellow wrote the classic paper on adversarial data.

When you’re trying to launch your autonomous vehicle as fast as you can, you’re not thinking about hacking very much. If you’re in the defense space, you think about it a lot, because these are systems where really any smart kid with three dots and a couple of days of time can figure out how to hack your Tesla. What happens if that is a remotely watched missile, or an autonomous vehicle that has troops in the back? It’s way cheaper for an insurrectionist group to just figure out how to hack it than to develop their own autonomous vehicle technology that’s just as good.

Our tech is actually the most resilient to adversarial attack; we’ve got some other fun stuff in there, but at its core, that’s what’s going on, and that’s part of the reason it’s been successful in the defense space and cybersecurity space and others: it is resilient. It’s just so easy to throw a GAN at something and find the holes, and I don’t think that’s going away. It’s really interesting to me, actually, that kid hackers can figure out how to hack your Jeep remotely and stop it on the highway, and that’s not enough to stop us from developing this technology, I guess because it’s just not happening enough. I don’t know how to get people more afraid of what can happen, whether it’s 15-year-old hackers who are just messing around or people very purposely trying to shut down AI-based systems. It’s really frightening.
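The Goodfellow paper Mike is reaching for is presumably "Explaining and Harnessing Adversarial Examples" (Goodfellow et al., 2015), which introduced the fast gradient sign method. Here is a minimal NumPy sketch of that attack against a plain logistic-regression scorer; the random weights and input are stand-ins, purely to show why many tiny coordinated nudges flip a confident prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)             # trained weights over e.g. a 28x28 image
x = rng.uniform(0, 1, size=784)      # an input whose true label is 0
b = -(w @ x) - 2.0                   # calibrate the bias so the clean input
                                     # scores confidently as class 0 (illustration)

def predict(x):
    """Sigmoid probability of class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Fast gradient sign method: for cross-entropy loss with true label 0, the
# gradient of the loss w.r.t. the input is p * w, so nudge every pixel a
# tiny step eps in the direction of that gradient's sign.
eps = 0.05
x_adv = np.clip(x + eps * np.sign(predict(x) * w), 0.0, 1.0)

print(f"clean prediction:       {predict(x):.3f}")      # ~0.12, class 0
print(f"adversarial prediction: {predict(x_adv):.3f}")  # pushed toward class 1
```

Each individual pixel moves imperceptibly, but hundreds of tiny moves all aligned with the weights add up through the dot product, which is the brittleness Mike describes; a k-NN vote over stored cases has no such gradient to ride.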

David Yakobovitch

I agree with all that, and what’s most frightening is this: I was recently giving a keynote on explaining AI to non-technical audiences at a conference for a major CAD and 3D manufacturing organization, and after the keynote, one of the principal engineers came up to me and said: I’m so excited.

There are now these TPU chips coming out, and they’re on USB drives, and David, what are your thoughts on me buying a USB drive to do the computer processing for machine learning? And I said: that’s really fascinating, but it’s very easy to hack. You can override the algorithm on the processing power on the USB drive, and before you know it, you have access to all the data; the encryption fails if you can out-process what’s going on there. And that’s why it’s really interesting that you mentioned that the technology behind what Diveplane is using for understandable AI is k-nearest neighbors. I love it, because I also teach a lot, and that’s one of the classic algorithms.

We start out with students thinking: if you’re someone who enjoys food and you’re in the Charlotte area, what are the common restaurants that your other friends and students like, so that we can find a recommendation for you? I just want to hear more: why do you think k-NNs are maybe less hackable than certain neural nets?
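(For listeners who haven’t met the algorithm, here is the classroom version of that restaurant example in a few lines of Python; the taste features, friends, and restaurants are invented for illustration. k-NN simply stores the cases and lets the k closest ones vote.)

```python
from collections import Counter
import math

# Each friend is described by taste features, e.g. [spice, price, formality],
# and labeled with the restaurant they'd recommend. All values are made up.
friends = [
    ([0.9, 0.3, 0.2], "taqueria"),
    ([0.8, 0.4, 0.1], "taqueria"),
    ([0.2, 0.8, 0.9], "steakhouse"),
    ([0.1, 0.9, 0.8], "steakhouse"),
    ([0.5, 0.5, 0.4], "noodle bar"),
]

def recommend(taste, k=3):
    """Vote among the k friends whose tastes are closest (Euclidean)."""
    by_distance = sorted(friends, key=lambda f: math.dist(taste, f[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(recommend([0.7, 0.4, 0.3]))   # -> "taqueria"
```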

Mike Capps

I guess it’s been shown empirically, so that’s part of it, and Goodfellow’s paper is a great reference for that. At its core, it’s really hard to perturb k-NN because it doesn’t have any notion of shallowness; it comes down to what’s the k, the number of neighbors you’re talking about. You say it’s taught, and it is taught in the first machine learning course, and then abandoned, because we couldn’t really do anything serious with it: there just haven’t been any fast approaches to making that distance calculation in a high enough dimensional space.

So that’s a big part of what we’ve done: just make it fast enough to apply to an infinite number of cases with a couple of hundred features. I’d love for that couple hundred to be millions of features, that would be great, but right now that’s what we’re able to do. Part of what we’ve done as well (and if folks are interested, we’ve got a tech paper about this that we published on arXiv) is to combine an information-theoretic framework with k-NN. So it’s less about distance and more about probability, and it lets us do some really cool stuff, since you can compute surprisal, the notion of entropy and surprise and information density that you get from information theory. If this isn’t your field, think of the number of entropy bits in a password.

That’s a measure of how much information is in that password. If we can apply that notion of amount of information to k-nearest neighbors, then we can start figuring out the most important points in this hyper-dimensional space that should be influencing your decision. And this is exactly what we were talking about before: what are the records, the patient records, that are most like you? Which are the most important ones, which are the most informational ones, which are the most interesting ones? And as you might imagine, as soon as you have a notion of interesting...

That means you have a notion of creativity. You have a computational notion of what is creative, and man, we can do really cool stuff with that. I can start generating creative examples of patients; you want a crazy-looking patient, I can do that now. And that’s exactly the notion from reinforcement learning of exploitation versus exploration, when you can really generate creative ideas mathematically. It’s a dramatically beautiful framework. I can’t wait to share more about it.
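Surprisal, for readers outside the field, is just the negative log of a probability: an outcome with probability p carries -log2(p) bits, the same "bits of entropy" used to grade passwords. Below is a toy sketch of one way surprisal could attach to k-NN; the distance-to-probability squashing here is an illustrative assumption, not the construction in Diveplane’s paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))            # 200 stored cases, 5 features each

def surprisal_bits(point, X, k=10):
    """Score how surprising a point is, in bits, relative to stored cases.

    Illustrative conversion: mean distance to the k nearest neighbors is
    treated as inverse local density, so points in sparse regions of the
    space get a low pseudo-probability and therefore a high surprisal.
    """
    dists = np.sort(np.linalg.norm(X - point, axis=1))[:k]
    density = 1.0 / (dists.mean() + 1e-9)
    p = density / (density + 1.0)        # squash to a pseudo-probability
    return -np.log2(p)

typical = X.mean(axis=0)                 # near the center of the data
weird = typical + 8.0                    # far out in the hyperspace
print(f"typical case: {surprisal_bits(typical, X):.2f} bits")
print(f"weird case:   {surprisal_bits(weird, X):.2f} bits")   # much higher
```

Ranking cases by a score like this is one way to pick out the "most interesting" records Mike describes, and dialing the acceptable surprisal up or down maps onto exploration versus exploitation.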

David Yakobovitch

We’re still in the early stages of exploitation versus exploration, because the creativity part, which is exciting, is that you get to create profiles of who your ideal customer is, or who the ideal patient you can serve is, and then see: this is what we thought was possible, and does the data also confirm it? Is it verifiable? And that’s great; that’s a lot of the research that your team at Diveplane is doing today. Are there any other cool things in the works at the moment?

Mike Capps

One of the things we realized we could do with this was to synthesize entirely new data sets. So give us a whole bunch of cases, we’ll train up a machine learning model, and then generate, within that space, new data points that could theoretically be within it. And what that lets us do is clean private data out of a data set. So imagine you give me a million patient records; I’ll generate a million new ones from that model, and none of them were in the originals (you do a quick check that you didn’t randomly create someone who was already there). And with the ability to tune surprisal...

We can generate a whole bunch of vanilla-looking patients, or we can generate a whole bunch of really crazy-looking patients that fit mathematically, or statistically, I should say, within that hyperspace. So we can generate you a data set with which you can now go train a neural net, or hand it to your FP&A team to go do insights on your customer base or whatever, but you’re not actually providing the private data. It turns out this is a neat way to generate a new data set that has all the statistical shape of the old one. In banking and healthcare, as you can imagine, they’re also locked down: they’ve got data in China that they can’t move to California, can’t move to Europe, can’t move outside of those places, so they can’t figure out what their customers are doing, because they’re not allowed to share it. That’s a very practical problem, and this will help improve privacy for people. I’m excited about it.
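As a sketch of the synthesis idea, here is a minimal stand-in using a Gaussian kernel density estimate: fit the joint statistical shape of the real records, sample brand-new rows from it, and check that none collide with an original. The three-feature "patient records" are invented, and KDE is an assumed substitute for Diveplane’s actual technique, which the conversation doesn’t detail.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Stand-in for real patient records: [age, systolic_bp, bmi]. Invented data.
real = np.column_stack([
    rng.normal(55, 12, 1000),    # age
    rng.normal(125, 15, 1000),   # systolic blood pressure
    rng.normal(27, 4, 1000),     # BMI
])

# Fit the joint statistical shape, then sample entirely new rows from it.
model = gaussian_kde(real.T)     # gaussian_kde expects shape (features, cases)
synthetic = model.resample(1000).T

# The quick privacy check from the conversation: make sure no synthetic row
# accidentally reproduces a real record (within a small tolerance).
def collides(row, real, tol=1e-3):
    return bool(np.any(np.linalg.norm(real - row, axis=1) < tol))

print("collisions with real records:", sum(collides(r, real) for r in synthetic))
print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))   # similar shape
```

Tuning surprisal would correspond here to widening or narrowing the kernel bandwidth: tighter sampling yields the vanilla-looking patients, looser sampling the crazy-looking ones.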

David Yakobovitch

The future of AI, as you mentioned, is today a combination of distance versus probability problems. We look at the classic algorithms, like regressions, which people also, as you mentioned, often underrate; and the same goes for linear regression: if you were to have millions and millions of data points, a linear regression could actually perform as well as a neural network. It just depends on what the use case is. So part of that exploration is, again, opening up the difference between closed AI and open AI, better knowing: how can we trace our systems? How can we understand our algorithms? And the future is evolving quite quickly. I give a take on a lot of my episodes (even with John Spence, we talked about how AI is becoming autonomous very quickly around the world), and you mentioned Finland and China and the U.S. What’s your take on what the future looks like?

Mike Capps

I’m not an optimist, so you’ll get that view from me. Maybe it’s the defense background, but I think we’re so rapidly grabbing onto technology solutions that seem to work, and as soon as something works well enough, it’s thrown into production. And that means for UNICEF food distribution decisions, not just for identifying what product you’re buying at an Amazon retail store; really important decisions. The thing about neural nets is they’re like poems: they’re never done, you just stop making them better. At some point you think it’s good enough and the author walks away, but you never know for sure: have I trained this shallowly, or is this actually solving the problem I want to solve?

A neural net will tell you with confidence that it has an answer, but you don’t really know if it’s a good answer, and you don’t know how it’s getting to that answer. Gosh, are we training them enough? Are we just solving the problem minimally and going with it? And then there’s the brittleness of the systems people are building, how vulnerable they are to adversarial attack.

It doesn’t feel like a good direction to be going, that we’re just stacking tech on top of tech, uncomfortable tech that is all too vulnerable. I guess that’s the easiest way to say it: regardless of whether or not it was built responsibly, or with responsibility in mind, it doesn’t matter if it’s all vulnerable.

David Yakobovitch

I want to fix this vulnerability challenge with AI. A lot of my colleagues consider me an eternal optimist, but I usually try to describe myself as a realist, someone in the middle between the pessimists and the optimists. AI definitely is the future, but, as you mentioned, there are so many risks here, and I’m sure you have a lot of knowledge from your days in the classified field that you perhaps cannot share with our audience today. But it is informative to know that there’s a lot of exploitation occurring, and what that means is that one of the big industries a lot of people don’t talk about enough is cybersecurity, and cybersecurity could be the saving grace for AI if we build enough teams and enough systems to fully penetration-test the algorithms and the systems, to make sure that they’re secure from attacks.

What’s your take on the future of the security industry as well?

Mike Capps

I like your optimism, which you referred to as realism, so put me in that camp. To date, I would say that the world of cybersecurity over the past 30 years has always been one of catching up, of trying to put your finger in all the holes in the dike. It’s difficult for me to think that that paradigm will shift to AI-powered defense being better than AI-powered attack. That would be fantastic, but the reality is a world in which the best nation-states...

State-level actors can penetrate almost anything, and a dude who’s dropping USB drives outside the headquarters of a major company can probably also do it eventually. Spearphishing will work eventually; some person is going to put a USB drive that they found into their machine, and it’s going to happen that way too. It’s hard for me to believe that AI is not going to be better at attack than at defense for cybersecurity, because you only have to fail once. So then what you’re left with is a mutually assured destruction scenario. You only have to fail once for a nuclear war to have lost Washington, DC, but the trick is you can’t do that, because they would do it to you too, and so nobody shoots. The problem we’ve experienced in cybersecurity and cyber warfare is nobody knows when you’re being attacked or not. Was Ukraine’s infrastructure attacked by a state-level actor or not? Was Baltimore hit by a state-level actor last week or not? And since we don’t know, there’s no uproar, there’s no response, and we’re left in this:

what looks like a détente publicly. Is it actually a détente? I refuse to speculate about that, but I can say that there have been times when it hasn’t been a détente. Even in our current issues with Huawei: it came out that the NSA was causing U.S. companies to put back doors into routers and chips, and it does sound awful that Huawei might be doing that, but we did it first and got caught. So there you go; there’s my lack of optimism about cybersecurity. What do you think?

David Yakobovitch

I think it’s essential to always see both perspectives, because that informs you to have the most knowledge possible. The reason this is interesting is that I’ve worked with some clients recently all over the world, and I see Huawei, the big Chinese cell phone manufacturer: they have a lot of advertising, a lot of marketing, a lot of deep pockets in those locations. But you bring up some very valid points that it’s not just the brand; it’s about how secure those systems are. In fact, in May 2019 in Forbes, there was an article about how deep learning will be the end of end-to-end encryption, and this goes back to the point I shared earlier: if Google’s coming out with these AI chips on USB drives, you can force the AI system to be hacked and access the data.

The same thing will be happening soon with phones. All the data on phones that Apple says is secure and encrypted: what if you just plug in an adapter and force the encryption open? I’m sure, with my limited knowledge of the security industry, that this probably already exists in many capacities, but it will become more commonplace and continue to be a concern. But hopefully, with the work that Diveplane is doing, and the work that we do as professors, sharing AI ethics with our students, we can inform them to at least begin having these conversations.

Mike Capps

No, I think you’re absolutely right. Maybe a way to look at this is over the last thousand years, what’s happened with the notion of privacy, which just didn’t exist at the tribal level, didn’t exist at the village level.

There was some notion, if you had a large trove of secrets and secret wealth and keys to that secret wealth, but the expectation of privacy that we have developed over time just wasn’t there. I wonder if, as a society, we can go back to that. If quantum computing actually does crack everything, or deep learning, whichever of these magic tools can crack everything, and any 12-year-old can download an app and crack all of my passwords, well, it would change the way I operate quite a bit, but it doesn’t mean we’re not human anymore. Probably the one scary thing is that, because we expect privacy...

We develop these troves of data that we expect to be private, and at some point, maybe we just lose that. But then the natural next step is: does that fuel autocracy? In a Chinese state where there’s no democracy, no notion of control of your own data or your own life, does that become a permanent dictatorship, because they just have so much information, and so much compute about that information, in a way that we’ve never had before in a feudal state, in the medieval ages? It might be a major step for how humanity exists, but it might be that privacy needs to go. If you’re right about end-to-end encryption being gone, then maybe I shouldn’t expect privacy, and I should operate appropriately.

David Yakobovitch

And then that depends on capitalism: are we facing the death of capitalism? Is that the new world order we’re moving to? I know some of these topics are very abstract, but one of the common takeaways is that there’s so much new technology on the horizon, from AI and deep learning systems to quantum computing, to potentially even the blockchain, that these technologies could facilitate stronger privacy, stronger relationships, and a new computational age. But it’s important to always be human, to be humane, and to partner together. Whether that is in game design, with games like Gears of War and Fortnite, which are allowing us to continue having that connectivity in a digital world, or in person, meeting with companies and individuals; and when you’re sitting at the table getting a loan, to actually have a loan advisor walk you through what you can do to be better, how you can make sure you’re doing the right thing to pass the game, to pass the Turing test. Then we’re moving into a world that’s being more human.

Mike Capps

I love it. If that wasn’t your closing remark, it should be, because that was dead on. I totally agree.

David Yakobovitch

Well, thanks so much, Mike, for being with us here today. I really appreciate you being a guest on HumAIn, and I look forward to catching up with you again.

Mike Capps

Sure. I enjoyed it. Thanks a lot.

David Yakobovitch

Hey humans, thanks for listening to this episode of HumAIn. My name is David Yakobovitch, and if you liked HumAIn, remember to click subscribe on Apple Podcasts, Spotify, or Luminary. Thanks for tuning in, and join us for our next episode. New releases come out all the time.