When you decide to work for a company or build a business, how do you know if you have picked the right industry? On today’s episode of the HumAIn Podcast, our guest speaker, Travis Dirks, shares how hacking your mental models can level up your decisions, how artificial intelligence has spawned the sister field known as amplified intelligence, and why moonshot companies could be the next big thing. Listen today on HumAIn.

This is HumAIn, a weekly podcast focused on bridging the gap between humans and machines in this age of acceleration. My name is David Yakobovitch, and on this podcast I interview experts in sociology, psychology, and artificial intelligence, and researchers on consumer-facing products and companies, to help audiences understand AI and its many capabilities.

If you like the show, remember to subscribe and leave a review.

David Yakobovitch

Everything’s really big this week and this month, and one thing that’s really big in your world is moonshots. That’s something I think our audience would be really interested in. We have a startup, Uber, that was just valued at a record 90-plus billion dollars. Is Uber a moonshot, or what is this? Let’s debunk this for the audience.

Travis Dirks

I wouldn’t call it a moonshot. Our definition of a moonshot at XLabs is that which changes your perception of what’s possible. It should push out the boundaries of possibility. Uber is one in an almost scary long line of startups that are more like, basically, guys trying to get girls and building businesses around it.

David Yakobovitch

And so that’s the startup we’re seeing in the market, and it’s evolved. Now there’s a new CEO and a lot of new change management, so Uber’s restructured as a company that’s finally gone public, which is great. But more about moonshots.

When I studied in college, some of my favorite things were case competitions and hackathons, and asking what’s the next big idea. Occasionally, when I hear different keynotes throughout the year, I’ll hear them from people like Jeff Bezos with Blue Origin, and even Virgin Galactic. A lot of these are what you would think of as moonshot companies, because they’re literally talking about getting to the moon, actually getting into the stratosphere. But what are some examples of that at XLabs?

Travis Dirks

We develop moonshots along three different themes. Our goal, over the course of the next 10 to 15 years, through a series of startups in each theme, is to change those aspects of the world. The first of those we call amplifying genetics. It’s really a panamax type approach: can we start moving from sick care to healthcare, and then take a real look at what’s after that? What does it mean to be more healthy than a normal human being?

The second is creative destruction of the internet, where we’re looking at how to create companies with a more direct value proposition between the supplier and the person consuming, so that they’re happy to pay for it, and we can get rid of these strange three-way incentives that we have with the clickbait ecosystem.

And the last one is culture as a technology. This is a two-sided coin. It’s really about taking the technology that is in our cell phones, which has gotten so good at drawing us into it, and trying to turn that manipulation tech on its head: teach it to help better connect us, or, on the other side of the coin, teach it to help us drive where we want to go.

David Yakobovitch

And so we started with this example of Uber, which has been the big story: almost 10 years for them to go public. And here in May 2019, they’d also been in talks with Nuro, a major company in Silicon Valley, about self-driving grocery delivery for those who might not otherwise have the opportunity. So, how can we get better connected? Is self-driving part of the solution?

Travis Dirks

I suppose it could be part of the solution, in the sense of freeing up that travel time to have actual human conversations between people. But ultimately it’s about finding some way to force ourselves to do things that are a little uncomfortable, things that the technology that exists today has allowed us to shy away from and get by without doing: actually learning to interact as people.

And that’s the hard part. I wish I could credit the name; he did a great job of making the distinction. But essentially it’s the distinction between happiness and pleasure-seeking type feelings, a long-term versus short-term difference. And a lot of our products are so geared towards that short-term pleasure seeking: this makes me happy right now.

We’re really trying to take a hard look at how you build products that are going to be billion-dollar companies, but are also for your long-term good, helping you do the things that you wish you would do, the sort of things that you’re glad you did when you look back from your deathbed.

David Yakobovitch

So one of those ideas on happiness versus pleasure seeking that I recently read more about is actually around Alexa, the Amazon product for voice recognition. When it was becoming a minimum viable product, the team of engineers met with Jeff Bezos and said: we’re so excited, we can get the audio latency down to two and a half seconds. We think this is incredible; people are going to love this. And Jeff responded: if you can’t get this down to one second, don’t even bother. You have to have some discomfort and some engineering to make it worth it.

And so that’s one interesting example, as Alexa went from a novelty idea to something that’s being mass adopted in consumer homes all across the world. That’s a technology that is getting us better connected, if you will, in some ways more than others. I’d love to hear more about what’s happening at XLabs. Are there any technologies you’re most excited about that your companies are working on, or that you’ve seen as a good trend right now?

Travis Dirks

We’re not much for trends. We tend to be internally focused. We’re working on a couple of exciting things. One is we’re building the core infrastructure, called X-core. It’s basically a meta-learning AI engine, a pipeline that learns to build other AIs. It’s core infrastructure that allows our data scientists to step up a level, if you will, and think at that meta-level: how can I improve the system so that I can attack all these problems at once, even while I’m attacking this one problem?

The things that we learn are automatically propagated down the line to everything else we’ve ever cared about, so that the engine can alert us to improvements. There’s a lot of art in this process of building AI right now, and we’re trying to remove the part of the art that comes from needing the right human, with the right past experience, to have their eyes on the problem. So that’s one thing. And in the process of doing that, we’re working on some genetics data that has to do with predicting cancer. I’m not free to talk about those results yet, but they are quite exciting.

David Yakobovitch

And what’s exciting is that, even though that’s still research in process, you’re talking about amplifying genetics, creative destruction of the internet, and culture as a technology. X-core sounds like the beginning of that for data scientists, the new “sexiest job of the century.” Everyone keeps talking about data scientists, but the truth is a lot of products are helping them automate processes. So you’re looking to create more coherent, better-structured data science and AI. One term you’ve spoken about on your podcast and in your work is Mental Model Arbitrage. Would that have any relationship here? Can you let our listeners know more about it?

Travis Dirks

Absolutely. Mental Model Arbitrage is a very core concept for us. It’s essentially the idea of going to other fields, intentionally seeking out their best ideas, their best models, their best tools, intentionally generalizing them to see where else you can apply them. So if you look at most major breakthroughs in history, it’s not this kind of straight line. 

No one would think to build the vacuum tube, for example, if their goal was a computer. You had to go through the stage where somebody wanted to study electrons: what are these electron things? And who’s that guy? I always forget this guy; his last name was Penny. But if this random guy working near a magnetron hadn’t had the sandwich in his pocket, we wouldn’t have the microwave. So Mental Model Arbitrage is the process of trying to make that intentional.

And it happens at a lot of different levels. It happens with me as a human, just exploring different topics and trying to come at a problem from both sides, but it can also happen at the AI level. This is where the meta part of the meta-learning engine comes in. We’re trying to build AIs that are able to take in characteristics about a particular target and about the data that’s available for it, and start to make educated guesses, based on what they’ve seen in the past, about what sort of a pipeline it should have: what sort of preprocessing, how should I transform this data, what type of algorithm is likely to be useful for this type of target, and start to make those decisions for us.
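A rough sketch of that meta-level idea in code may help. Everything here is hypothetical and illustrative (the `TaskProfile` fields, the pipeline names, and the similarity measure are invented for the example, not the actual X-core design): a meta-learner looks at characteristics of a new target and recommends a pipeline based on which past problems it most resembles.

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    """Characteristics of a prediction target (hypothetical feature set)."""
    n_rows: int
    n_features: int
    target_type: str      # "binary", "multiclass", or "regression"
    positive_rate: float  # class balance; 1.0 for regression

# Past experience: (profile, pipeline that worked) pairs the engine has seen.
EXPERIENCE = [
    (TaskProfile(1_000, 20, "binary", 0.07), "resample+gradient_boosting"),
    (TaskProfile(500_000, 300, "binary", 0.45), "standardize+linear"),
    (TaskProfile(2_000, 50, "regression", 1.0), "standardize+random_forest"),
]

def similarity(a: TaskProfile, b: TaskProfile) -> float:
    """Crude similarity between task profiles (higher is more similar)."""
    if a.target_type != b.target_type:
        return 0.0
    size = min(a.n_rows, b.n_rows) / max(a.n_rows, b.n_rows)
    balance = 1.0 - abs(a.positive_rate - b.positive_rate)
    return size + balance

def recommend(profile: TaskProfile) -> str:
    """Nearest neighbour over past runs: an 'educated guess' at a pipeline."""
    return max(EXPERIENCE, key=lambda e: similarity(profile, e[0]))[1]

# A new rare-event binary target looks most like the first past run,
# so the meta-learner suggests the resampling pipeline.
print(recommend(TaskProfile(3_000, 40, "binary", 0.05)))
```

A real meta-learning engine would learn the similarity function and the recommendations themselves from accumulated runs rather than hard-coding them; this just shows the shape of the "educated guess" step described above.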

David Yakobovitch

What’s so fascinating about X-core and this meta AI engine you’re building at XLabs is exactly what you just mentioned about Mental Model Arbitrage. Data science and AI is an industry where, although there’s a lot of hype, there is a lot of reality, and it can be applied across all industries. So whether it’s genetics, the internet, or culture as a technology, there’s going to be a lot of, may I use the word, disruption that occurs. And it is, as you also mentioned, Travis, about the long run: a 10-to-15-year-plus goal. Something like 5G has been in development for many years, and only now is it emerging from its nascency. So a lot of that is interesting.

Travis Dirks 

The biggest thing that’s holding AI back right now: we all feel that we’re on the verge of something, it’s sort of happening, and we know it’s going to change the world, but it’s not happening as fast as it feels like it ought to. And the major reason for that, whether you’re working with machine learning or with AI, if you’ve actually gotten in there, trained some algorithms, and seen how they work, is what I see as a scientist. I’m a physicist by background. This is very clearly a process built by someone who was trying to show that the algorithm could do what they thought it would do, as opposed to being built with the assumption that it will do its job and asking: what can we learn about the world?

And so that’s a lot of the work we’re trying to do: how do you start to think about it from that other aspect? These algorithms work. We’re not trying to prove that we can train them a thousand times and they’ll behave similarly. We’re trying to really dig out what we can learn about the world.

David Yakobovitch

And so learning about the world could be considered an intelligence. Again, the phrase AI is very much artificial intelligence, and products are changing over time. We’ve seen in the past couple of years how, with the classic board games everyone’s been hearing about with AI, the systems are now able to win at chess and checkers and Go and Dota 2.

So you’re seeing a learning type phenomenon, but is that the learning you’re talking about or something else? 

Travis Dirks

That’s a good question. It’s definitely something else. The question is whether it’s the same sort of algorithm that will be able to do this something else. But I’m very much interested in applying these algorithms out in the real world.

And in particular, I’m interested in something that we call, internally, Amplified Intelligence. I’ve had a lot of experience trying to explain to people who are not in the field: what is AI? And if they go to 10 people, they’ll get 10 different answers.

But if you stand back and look at what everyone’s talking about, AI is essentially the attempt to duplicate the things that come easy to human beings. Can you see, can you hear, can you walk, can you navigate the world, can you hold a conversation? That’s the field of AI. We’re really more interested in the space of things that are not easy for humans and also not easy for computers, and that’s what we call Amplified Intelligence.

And right now, we’ve imagined two ways to get there. One is this intentional process of Mental Model Arbitrage: yanking in the mathematical knowledge that we have from the physical sciences and augmenting existing AI with it. That’s what we did with our previous company, for example; we could predict black-swan-type events.

The other is untried, but I have a hypothesis that an interesting way to go about it might be to work on a sort of embodied AI. Once we’re able to actually develop an intelligence that has different senses than we do, it will provide a different intelligence than we have. And just to go back to what you were observing about the games: that habit we have, it’s a very human habit, isn’t it, moving the goalposts. If an AI can play chess, it will be intelligent. If it can play Go, it will be intelligent.

And it’s kind of fun to think about what the limit of that argument is: what happens when we can pass every Turing-type test we can imagine, but we know how the thing works and we’re still not quite comfortable calling it intelligent? What does that do to our philosophy about ourselves and our own consciousness?

David Yakobovitch

And a lot of that goes back to the physical sciences: if you can explore the world around us and what is evidence-based in science, perhaps some of that can be proven. Travis, you mentioned black swans, and a few years back I was out in Berkeley and actually met with Radhika. We spoke about a lot of those events, and about the work she was doing out in Hawaii in the physical sciences as well. So it’s great to see that work; you’ve had some outcomes there. For what you can speak about, what were some of those outcomes that evolved into XLabs, your new venture, a new kind of lab, as you could say?

Travis Dirks 

So X-core is really an evolution, a rebuilding in a more general way of what we built at Seldn. It was that engine that allowed us to make some of those stunning predictions. I believe there were 77 targets that we could predict by our own criteria, which was that we needed at least two-thirds precision, and these are events that only happened about 7% of the time. So quite rare events that we were trying to predict.

Things like: will there be a labor strike in this country this month? Will commodity prices make a large jump in one direction or another? Will a certain type of violence break out in the country? That sort of thing. What was new about the approach we took was that we had to admit that we did not know if we would be able to predict any of these things, and we did not know what sort of data would be useful in predicting them. So that required us to build something that could take a look at all of it, and then come back and tell us: what are we able to predict?

All of that was hard, but the next hard piece is to not fool yourself. So we really had to take another look at validation on targets that are rare and obviously very difficult; you can get a lot of randomness fooling you. That was a lot of the breakthrough that allowed us to do this.
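To make that "don’t fool yourself" point concrete, here is a small sketch with purely illustrative numbers (the month counts, alert counts, and simulation are invented for the example, not the actual validation method): when events fire only about 7% of the time and you screen many candidate targets, a predictor with no skill at all will occasionally clear a two-thirds precision bar by luck.

```python
import random

BASE_RATE = 0.07  # events fire about 7% of the time
random.seed(0)

def lucky_pass(n_alerts=3, n_months=120):
    """Does a skill-free random predictor clear 2/3 precision on one target?"""
    labels = [random.random() < BASE_RATE for _ in range(n_months)]
    fired = random.sample(range(n_months), n_alerts)  # 3 random alerts
    hits = sum(labels[i] for i in fired)
    return hits / n_alerts >= 2 / 3

# Screen many candidate targets, as in the 77-target setup: estimate how
# often pure chance produces an apparently 'predictable' target.
trials = 10_000
per_target = sum(lucky_pass() for _ in range(trials)) / trials
expected_false_finds = per_target * 77
print(f"chance one target passes by luck: {per_target:.3f}")
print(f"expected lucky 'predictable' targets out of 77: {expected_false_finds:.1f}")
```

With these toy numbers, chance alone clears the bar on roughly 1% of targets, which across 77 candidates means you should expect about one spurious "success"; that is exactly the kind of randomness the stricter validation has to rule out.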

Now we’re coming back and we’re saying: there, we made a very intentional choice. We sort of knew that the knowledge inside this field called complexity physics would be useful.

Now we’re saying there’s probably a lot of knowledge in a lot of human fields that will be useful for all kinds of things. How do we build X-core such that, as we’re working on particular projects, we can grab the knowledge we think will be useful, build it into the engine, and then have it propagate across everything we’re interested in to see where it’s useful? A sort of mining of human knowledge, if you will.

David Yakobovitch

As you’re talking about hard problems and hard things to solve, it reminds me of Ben Horowitz’s book, The Hard Thing About Hard Things. So whether it’s on the business side, the tech side, or the problem side, it sounds like what you’re doing with X-core is diving into problems that could make lasting positive change for our world. Look at what happened in Venezuela in the past month.

There was an attempted coup, if you will, like these labor strikes you talk about. Predicting these very rare, almost black swan events is a bit like, for those who’ve watched the Netflix movie Bandersnatch, a choose-your-own-adventure: rare events can take you down different pathways based on your previous decisions, and those decisions impact future outcomes.

What’s interesting is that you’re now working on all these new opportunities in genetics, in disrupting the internet, in culture and technology, and those are very hard problems to solve, let alone to define. And one of the big issues there, which we’ve previously connected on, is that investment in those fields is not as high as in other hot topics like AI, if you will.

Travis Dirks 

We as a species drastically underinvest in these sorts of things, in moonshots in general. We’d probably all be a lot better off if, as a society, we were investing a lot more in them. And there are a lot of reasons for that. One of them goes to the way that you pursue these sorts of problems, which makes them very difficult to manage. When it’s tough to define exactly where you’re headed, it’s really tough to define a key metric that a manager can use to decide: is this the group I should be giving money to, or should it go to another group?

And so we see a lot less investment in these things, less because there’s a lack of broad understanding that we should invest in them, a lot of people understand that, and more because there’s a lot of weight put on the guy in the middle who has to decide where the investment goes and has to make it appear that he’s made a rational decision. Unfortunately, these aren’t things that you can rationalize in a spreadsheet. It’s the nature of a hard problem. Why do we call things hard problems? We call them hard problems because a lot of really smart people have been trying to tackle them for a long time.

And the thing that we can take from that as fact is that whatever the obvious thing to try is, it’s not the thing that’s going to solve the problem. We can know that, but it doesn’t help the guy in the middle who has to decide where the dollars get doled out.

David Yakobovitch

And speaking about where the dollars get doled out, I can’t help hopping back to the Uber example. It’s not a moonshot example in the classic sense. The hard thing about work is putting in the time; it takes many years to see the results of an effort. And a lot of data scientists today, and a lot of people in the new tech economy, are always thinking: what company can I join that will be the next moonshot?

And the classic example was Ryan Graves at Uber. He actually discovered Uber and joined the company based on a tweet that Travis Kalanick had posted about who wants to work for Uber, and it just so happened that Ryan committed about 10 years of his life, got 2.6% equity, and made a lot of money. So if you’re someone who’s looking to work on moonshots, perhaps be the CEO of a company coming out of XLabs, or be a researcher who’s going to take one of those to market, how would you get started there?

Travis Dirks 

I personally think it’s got to be something that you’re really motivated to do intrinsically. It sounds cliche, but you’ve got to be driven by the journey, because there’s a good chance that 10 years from now, the journey might be all you have. So make sure that journey is worth it to you. That’s what I would say. The other thing is, in terms of people that we work with, generally we’re looking for people who have some very deep expertise in something.

Preferably something relevant. If they’re going to be working at XLabs proper, something relevant to AI, machine learning, cloud computing, some kind of deep expertise there. If you’re looking to be CEO of a company that we’re spinning out, then we’re looking for someone who has deep vertical expertise in one of the markets that fits within the three themes I described.

And then we’re looking for evidence that a person has the ability to take their expertise outside of where it lives, so that you also have this breadth of interests. When you’re not talking about the particular thing that you’re an expert in, your eyes are still lit up and you’re still in the conversation; you don’t just shut down because this isn’t the thing you care about. That ability, this Mental Model Arbitrage piece, is very important for these types of problems. So we’re really looking for that surprisingly rare skill.

David Yakobovitch 

And if you’re someone who has that expertise: being in the startup scene, I’ve spoken with founders like yourself who have similar initiatives in New York City as well. And the conversation always gets down to not just whether you have the expertise and the interest, but what you’re really interested in: are you interested in creating or producing? Perhaps that’s somewhat about the future of work. You’ve spoken about that before on your podcast; I wanted to hear a little bit about that, Travis.

Travis Dirks 

I guess one thing that you just triggered me on is something I realized about myself personally in the last company that I worked on. In terms of intrinsic motivation versus extrinsic motivation, there are sort of two types of entrepreneurs. Watch out, false dichotomy.

You’ve got this sort of showman. I call this person the showman because their concern is with the customer, which is properly where your concern should be when you’re an entrepreneur. But then there’s the artist, and their concern tends to be with the product. It’s less about what the human in front of me wants, and more about what this thing ultimately should be when the world wakes up to what it ought to want.

I realized at the last company that I’m much more of the artist; I’m much more of this-is-where-the-technology-needs-to-go. And that’s one reason we’ve built XLabs the way we’ve built it: ultimately, when a company is going to market, you want the showman in charge of it. That’s why we have this structure, where the people who have the ability to make these deep technical breakthroughs tend to be more of the artist type. So we’ve intentionally tried to build a structure that allows these two types of people to work together in the strongest way possible.

David Yakobovitch 

Do you think they’re exclusive? Say we go on this journey, 10 years at a company, hoping it’s a moonshot. Are we going to be exclusively one of these types, or is it possible it’s a blended journey, that ambitious people can work on ambitious projects in both forms?

Travis Dirks 

No matter what side you think you’re on, it’s going to be blended. The difficulty is when you don’t know which one you are, and when you’re not driving your life accordingly. Warren Buffett has a saying; he talks about his circle of competence when he’s investing. He says it doesn’t matter how big my circle is; it just matters where the line is, and that I stay inside of it. It’s kind of similar here: when you ultimately want to do artist things, but you’re driving your life like a showman, you run into problems making it through those 10 years. Does that make sense?

David Yakobovitch 

That’s definitely the right balance that you hear about all the time, universally across work and the future of work. As we’re moving into this Work 2.0 society, digitally transformed, with remote culture and super jobs and all these new opportunities, it’s creating a lot of anxiety in people as they go through that journey: do you want to be the showman, but you’re actually working as the artist? Or do you want to work as the artist, but you’ve been positioned in a showman role?

Travis Dirks 

So one of the hobbies that I have to keep myself sane is Vaudeville strongman type things: training to bend steel and put nails through boards and things like that. And one of the deep truths that I ran into in speaking with one of my mentors was about the priority list. Are you focusing on your program? On your sleep? On what you’re eating? On what supplements you’re taking? And he said: ultimately, what you’re focusing on is, will you still be doing this five years from now? Because, top level, what matters is you continuing to do that kind of deep practice for long periods of time. That turns you into the sort of human who can do impressive things.

And that gave me a really interesting lens to look at the things that I pursue in a deep way, the things through which I plan on making a major impact on the world, and to lay that layer on top of them: am I doing this in a way that’s sustainable for me?

David Yakobovitch 

Sustainable, whether for yourself, Travis, for your team, across culture as a technology, genetic discovery, disrupting the internet, those industries, or elsewhere, even for listeners of the HumAIn show. Seeking long-term happiness is what I’m hearing as the outcome here, instead of the immediate, instant-gratification, pleasure-seeking culture that the world has moved towards.

Travis Dirks 

Absolutely. Kahneman talks about the two types of thinking, System 1 and System 2. I very much lean towards that sort of thinking, that reflective, future-and-past brain. Am I happy with myself on reflection? That makes sense, because that you is the only one that’s a little bit stable. The you in the moment that’s happy with the chocolate milk you’re drinking is not going to be the same five minutes from now.

David Yakobovitch 

For a future episode of HumAIn, we could perhaps even talk more about genetics and our gut biomes, because a lot of cutting-edge research in 2018 and 2019 has been showing that even our personalities, how we behave and how we act, are in large part a result of what’s going on down there with these little critters that we’re feeding.

Travis Dirks 

Absolutely.

David Yakobovitch 

That’ll give us another fun topic. Travis, thanks so much for being with us here today on the HumAIn show. I can’t wait to hear more about what’s being built at XLabs.

Travis Dirks 

Hopefully we don’t get to the point where we’re recommending fecal transplants from our mentors.

David Yakobovitch 

That’s it for this episode of HumAIn. I’m David Yakobovitch. And if you enjoy the show, don’t forget to click subscribe on Apple Podcasts or wherever you are listening to this. Thanks so much for listening and I’ll talk to you in the next one.