Welcome to our newest season of HumAIn podcast in 2021. HumAIn is your first look at the startups and industry titans that are leading and disrupting ML and AI data science, developer tools and technical education. I am your host, David Yakobovitch, and this is HumAIn. If you liked this episode, remember to subscribe and leave a review, now onto our show.
Listeners of HumAIn, today on our show we’re bringing to you Steve Shillingford, who is the founder and CEO at Deepsee.ai. As we know in the last few years, the AI industry has been continuing to evolve with different information, context, knowledge, and most of it’s been a manual process. Steve’s company is working to automate that process with knowledge process automation.
We’re going to dive deep into this topic among others on today’s show of HumAIn. Steve, thanks so much for joining us on the show.
Thanks very much, David, for having me.
Well, I’d love to start by sharing with our listeners a little bit about your background. I know you’ve been in the venture space with multiple startups and the venture ecosystem. What in your background led you to found this new venture in the last few years, with this opportunity?
Thanks for that question. I’m embarrassed to say I’ve been around for a long time, considering my startup past, and I kind of cut my teeth in large enterprise software companies like Oracle, where I was spending time implementing very complex and very customized software to automate what was then financial or sales oriented processes.
Then I sort of got tired of being in a large company and had an opportunity to move into the startup world and create and grow companies about 10 or 11 years ago. In the course of doing a couple of different startups prior to Deepsee, I’ve kept seeing these patterns over and over again in technology. Gartner calls the pattern the hype cycle, but it’s not just Gartner’s hype cycle.
It’s just kind of the nature of innovation, where you have sort of this spring of interesting ideas and technology development. It’s sometimes fueled by venture backed dollars, but you see these companies, which are really feature companies, they sort of spring up and they offer all these different pieces of the puzzle to solving the automation problem to large companies and they say: Buy this block or this erector set and you can build anything you want. And it will be magical, it’ll solve all your problems, and companies scoop it up.
They buy billions of dollars worth of this software and then, at some point, sometimes in a downturn or a pandemic, they go: What are we doing? They’re not getting the value based on the investments they’ve made. The folks who have been hired to put those erector set pieces together are struggling through, no fault of their own, but just all the organizational inertia that we have all come to experience.
And at the peak of that cycle you see this kind of consolidation moving away from interesting technology, or, as I like to say, the shine of the flux capacitor has kind of worn off and now people just want to drive really fast. So it becomes a consolidation opportunity in the marketplace.
I feel like AI is approaching that, and in some ways maybe has even peaked in its innovation cycle. What I mean by that is simply that instead of innovating on features, people are now looking for innovation around processes. Most importantly, when I talk about data science, I think what it has to offer us, both in a consumer and an enterprise way, is really important.
But it’s more about data outcomes: we want those outcomes to be relevant at the time we want them, and we want them to be interesting and insightful. We don’t really care as much about the tech that serves them up; we care about the notice and the context in which they’re presented to us.
So that’s kind of what inspired me to think about this market slightly differently than maybe it’s traditionally known. I look at things like robotic process automation, and Gartner likes to use terms like composite AI, or hyperautomation, or explainable AI, and I appreciate why they label them that way.
Those are terrible terms from a human perspective. What people feel is that we’re swimming in information. We have access to more information today than we’ve ever had in the history of the world, and then some. But all of us feel a little bit like we’re drowning: the signal-to-noise ratio, being able to understand what that information means. Is it relevant to me now? Will it be relevant to me tomorrow? Or can I just ignore it? That’s as true for consumers as it is in the enterprise, so we really wanted to move away from this. There’s an old saying: information is the enemy of knowledge. It’s really this idea that knowledge is about the wisdom behind that information, not just the sheer quantity and bulk. So that’s why we labeled our effort and our category knowledge process automation.
We know that we’re in a knowledge economy, and throughout the pandemic this knowledge economy has been digital only. In fact, the data deluge has only grown, with enterprise employees spending sometimes up to 16 hours a day glued to digital devices, processing all this data and information but often missing the insights. We do this very manually across different meetings and projects, and that creates a lot of fatigue and missed insights. So the question that leads me to, Steve, is: from what you’ve seen, it sounds like AI is reaching a newer maturity state. Where are AI and automation today in what they deliver to businesses?
There are a couple of different trends that are very interesting. One is that we’ve had the good fortune at Deepsee to work with some of the largest financial institutions in the world, enterprises with arguably unlimited amounts of investment to solve these problems.
They are struggling tremendously around this notion of: Well, I know that my machine learning model or my natural language processing model will be great when I can just run thousands or hundreds of thousands of documents or data sets through it, cause it’ll just learn. But they struggle and they struggle through no fault of their own, in some ways.
It’s regulatory requirements, privacy requirements, and internal requirements that prevent them from just submitting this kind of information to what I would call standard data science models. So they have a big data problem, but they only have small data sets.
The second thing is it’s not just about machine learning models or saying: Hey, I’m going to apply a natural language processing model, like a BERT or GPT-3, and all of a sudden I’m going to be smarter. It’s what you do with that insight and how you scope, or if you will refine that insight in a way that allows you to be smarter about your business.
So many of the things that go on, and we all lived through this when we were staying home in 2020: we were sitting on Zoom eight hours a day, we had email notifications, text notifications, Slack going incessantly, and whatever else. That’s an overwhelming, biologically detrimental environment to be in over the long haul, and you just can’t be productive.
People need interaction, they need thoughtful dialogue. They need networking opportunities in real time. So what we felt AI should do, and what the innovation should be providing to people, is doing all the work of sifting through that noise, finding the signal, and surfacing the information that is most useful: the wisdom about a particular process, or the essence of a particular insight that the machine learning model actually produced.
What we wanted to do, technically, is solve the big data-small data problem. In other words, I want big data insights, but I only have small data to submit to the models, and we’ve done that with some of the tech that we’re providing with Deepsee. Then secondly, we wanted to build what I’d call a rig, a software rig that you can think of as an analog to a crypto mining rig.
The notion is: I want to eliminate the need for line-of-business executives to have to worry about how to mine all of their unstructured data, which is really where the real value is in an organization. I wanted to just provide a configurable tool for them and their tech teams.
Then secondly, I wanted to have this environment or workbench that could refine that mined material, and it could be models that we provide. So we have a version of BERT, called FILBERT, which is a model that’s been trained on billions of finance, insurance, and legal type data sets, so that it is biased toward that kind of dialect.
But we work with, as I mentioned, large enterprises that also have their own models. So we built that refinery that can support their models or the models that we provide. Then most importantly, once that refining process has completed we have a pipeline that will allow you to ship those insights to people, either in dashboards that are configured around their jobs, not around the tech or to downstream systems, whether it’s internal to their enterprise or external to their business partners.
We do all the under-the-covers tech work, so that a subject matter expert, a line-of-business analyst, or a line-of-business executive doesn’t have to worry about how they got to that information or knowledge, but can really take an action. We want to focus them on a bias toward action, because the system is producing the knowledge that is most important to them at the time that they need it.
I love what you brought up, Steve, about using a lot of open source technology and customizing it for the use cases that you’re working with, for example in capital markets, insurance, or the public sector, in this case with FILBERT. Many of our listening data scientists know that in natural language processing, BERT and GPT-3 have been the breakthrough models of the last few years.
They’re often like stem cells in the human body: they don’t have a lot of context until you feed them information to enable transfer learning for your clients and these lines of business. So thinking about that, as you get very nuanced and focused with this line of sight, you’re building fast analytics. You’re building the capacity for companies to transform. In the classic model, if we go back to Gartner: do you make a decision with a hundred percent of the information? No. Most executives only have what, 60, 70% of the information. But the information isn’t the insights, and it sounds like that’s the gap you’re bridging for these executives and enterprise leaders.
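The stem-cell analogy can be made concrete with a toy sketch of transfer learning (purely illustrative, not Deepsee’s actual stack): a stand-in “pretrained” encoder is frozen, and only a small task-specific head is trained on a handful of labeled domain examples. This is the mechanism that lets a small domain data set benefit from big-data pretraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained encoder: a frozen embedding table mapping
# token ids to dense vectors (in a real system this would be BERT's layers).
VOCAB, DIM = 50, 8
pretrained_emb = rng.normal(size=(VOCAB, DIM))  # frozen: never updated

def encode(token_ids):
    """Mean-pool the frozen embeddings -- the 'transferred' representation."""
    return pretrained_emb[token_ids].mean(axis=0)

# Tiny labeled domain corpus: 20 toy "documents" of 6 token ids each.
docs = [rng.integers(0, VOCAB, size=6) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

# Train only a small logistic-regression head on top of the frozen encoder.
X = np.stack([encode(d) for d in docs])
w, b = np.zeros(DIM), 0.0
for _ in range(500):  # plain gradient descent on the head's parameters
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - labels
    w -= 0.1 * (X.T @ grad) / len(docs)
    b -= 0.1 * grad.mean()

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (preds == labels).mean()
print(f"train accuracy with frozen encoder: {accuracy:.2f}")
```

In practice the same shape of solution uses a real pretrained transformer and fine-tunes (or freezes) its layers, but the division of labor is identical: big-data knowledge lives in the frozen part, small-data specialization in the head.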
A hundred percent. Part of the excitement and interest in forming Deepsee was that my CTO, Brian Sparks, and I were huge fans of the transformer innovation. It was very clear to us that NLP specifically was going to evolve at a rapid rate, much like the early days of Linux, where tremendous innovation made that operating system one of the backbones of the internet today.
When we look at that same type of innovation, we don’t want our position to be that we’ve got the best model. We want to be the provider of the rig that can use all the best models. I mean, Gartner’s got it right in the sense that what composite AI is designed to say, even though I’m not a fan of the name, is that we shouldn’t be chained to one model or one approach. Just as you have different tools for different types of tradecraft, you should have different tools for different types of business problems inside the enterprise. And we think the innovation in NLP is going to support that. The thing that we think we can bring to the table as a value add is a way to deploy that faster, better, and cheaper than highly bespoke, customized frameworks that either get built internally or get cobbled together from different pieces of technology from different companies.
We’re seeing that trend embraced even by the robotic process folks, who are trying to elevate the discussion from robotic, which is rigid and can be fragile as the needs of the business evolve, to more cognitive or intelligent. That’s just indicative of them seeing the same problem sets inside their customer base.
So we’re very excited about that movement. We think we have something to offer there that’s tremendously valuable. As a CEO, and I’ve been a CEO a couple of times, we usually have 5% of the information, and I would love to have 50% of the information to make a better decision; we all feel that way in our jobs. So that’s part of the value add that we’re bringing to bear.
Thinking about where we are today, Steve: as a world, we’re beginning to emerge from digital only, living in our apartments and houses 24/7, back to a hybrid world. This summer many organizations are starting to bring their enterprise employees and team members in one day a month, then two to three days a week, and eventually back to the new normal, which will still be hybrid, but with a lot of time in the office interacting with knowledge workers. What do you see the remainder of 2021 looking like for businesses as they prep for this return to the new normal?
I’ve come full circle on this, and I’m not alone. For the first part of 2020 I was a bit shell-shocked, because it was hard to find your bearings and see how the world was going to unfold. Then I had a laugh the other day when I read about the CEO of Zoom saying that even he was tired of Zooming, and I thought: okay, now I know we’ve reached peak hundred-percent remote. But I do think what we’re going to come out of the pandemic with is a better feel for, and frankly more trust toward, employees having a balance. I’ll call it a hybrid approach. People need interaction; I’m a big believer in that, I’ve always believed in that, and I struggled a lot with it in 2020. What we’re finding now is folks do need to interact, and obviously we want to do it safely and with all the proper hygiene.
But we balance that with periods of what I like to call deep work. Deep work is something Cal Newport wrote a great book about: we’re all in meeting hell sometimes. You can go eight hours straight in back-to-back meetings, but did you actually get anything done? Maybe, maybe not.
Sometimes people just need time to think, to process, to contemplate, read, research. And this remote trend is going to allow us to build in more of that deep work, which is as productive, if not more so, than just showing up and attending meetings.
And that will ultimately lead to really enhanced, accelerated productivity. I read a lot during our time at home about the Spanish flu back in 1918, and what came out of that, as we all know, were the Roaring Twenties. I would not be surprised to see the same kind of spurt of growth, energy, excitement, and interest in embracing new ways of working carry us into this twenties decade; I don’t even know what the name for the twenties is now. But I’m more optimistic than I probably was six or nine months ago. I feel like we’re still figuring it out, and it’s still going to be messy; creation is messy, especially in the information economy, which is a big part of our current world. We’re going to stand to benefit from that in the long term, and that’s pretty exciting to me.
We’ve been, as we all know, Steve, in the longest bull market since the 2008 financial crisis. You’re onto something; I think that notion is right that we’re going to see an acceleration of technology in these hybrid experiences.
I do not think technology will be slowing down, it’s going to continue to grow. It’s going to continue to be part of all of our experiences. The challenge is, as we move into this hybrid world, where will solutions, like Deepsee, come in to help with the automation and AI as there’s all this knowledge around us that we’re going to need to build more resilience for a crisis filled future.
So, one of the things we saw, and I wish I could say I was smart enough to have had a hand in it: when the world was sent home in March of 2020, we were in an initial deployment where the prior process we were automating was really what’s called swivel chair.
You had three monitors, and you tried to pair up information across those three monitors. It was intensely manual, very mundane, and error prone, and candidly, that’s why we got invited in. But large enterprises are very conservative and take forever to adopt new processes and technologies, so they were testing us out. Then the teams that were doing those manual processes literally had to go home, and they didn’t have fiber to their homes. They were certainly not sitting in front of three monitors. So all of a sudden that low-cost labor arbitrage went to zero, and they had to flip the switch on the Deepsee platform. What they found was an immediate up-leveling: not just in their processing time and being able to offload a lot of that non-human-optimized work to the machine, but they were able to do more, faster, and they were able to actually take market share from some of their competitors.
That showed up in their quarterly announcement. And what we saw there was: okay, not only do they benefit from this automation, but once those people come back online, whether that’s coming back into the office or plugging into a cloud-based application, they were able to up-level their skill set and their work set, such that simple things like job satisfaction and retention got better.
We’re always sensitive to the fact that technology, automation, and efficiency can displace certain sets of workers, and we always look at how that work shifts and grows in other areas. In this case, it was very much the case that someone who had been staring at a process that took them an hour to complete was instead in control of the process and could do it in one to two minutes.
That’s where AI, or, as we casually say internally, robots, come in; but the humans should be directing the robots. They should be working on behalf of the humans and not the other way around. That’s really core to our value proposition: we want humans in control of the machines.
We want the process directed by, as I like to say, carbon-based lifeforms. And while we’re huge fans of the singularity, and I love all the interesting potential advancements with Neuralink and everything else going on in the AI world, for the time being the humans should continue to be in charge. We just want to equip them with superpowers, and those superpowers are enabled by the AI innovation that’s going on.
One of my favorite frameworks is from Carnegie Mellon; it’s actually from their Software Engineering Institute, from a paper titled “Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Development.” And this framework is basically exactly what you just said, Steve. It’s a checklist ensuring the human is in control: do they have the right to override the machine, or to ensure the machine is behaving properly? Putting the human as augmented in their experience is what’s leading us into this new economy. It’s no longer just a knowledge economy; it’s a knowledge process economy, and that’s where the knowledge process automation your team is scaling at Deepsee comes in. Now, Deepsee is leading the charge in this new category. Can you share with us more about how you envision knowledge process automation?
So, we think it’s a natural extension of what I’ll call tier-one or level-one automation inside the enterprise. The background here is that every C-level exec inside every Fortune 500 company has got a line item saying they need to do more digital transformation. Then they’ll sit in a meeting and say: I need to get me some of that AI, because I need all the benefits, because I’ve got this mandate. But what does that really mean? Well, what it means is that we need to apply these technology solutions; in our case, as I said, we want to operationalize data science and the innovation that comes from AI.
But we want to operationalize it around outcomes. We think of it as mining information to achieve knowledge, and that knowledge could be reducing your cost, it could be mitigating your risk, or it could be improving your customer satisfaction so that you retain or obtain more customers.
What we think is important about that is that there’s always a state or a step in the flow where you want a human who’s got just sort of an innate set of experiences and background, the subject matter expert, as we like to say. You want them in control, and you want them directing the system and not the system directing them.
That’s where we get in trouble: the system says, well, this is what we should do, but there’s no common sense module; we haven’t developed that yet. So we were developing this in a kind of beta in 2018 and deployed it in 2019. We were very fortunate, I guess we announced it earlier this year, I’m losing track of years, to partner with a like-minded venture capital group, Forgepoint Capital. They got it, and they saw the broad applicability of this type of approach, not just in capital markets, not just in insurance, but across a number of industries.
So they were ready to lead our Series A round, which we just closed for a little over $23 million. We’re going to use that to help accelerate not just what we’re doing in our current customer base, but to really broaden the applicability across different verticals.
We’re very excited about where we are, not just as a company and how we were able to innovate during a really crazy time, but in how our message is starting to resonate with our target customer base.
Let’s talk about some of those core verticals. It’s exciting to hear that you’ve raised this Series A to expand and enable enterprises to turn data into assets. Some of the initial verticals you’ve been focusing on are capital markets and insurance. What led you to home in on those to start?
So, truth be told, we kind of fell into the capital markets use case, we didn’t pick it as a starting point, but through connections and folks who sort of understood our early value proposition, we were invited in.
Thankfully they supported us as we, sort of, learned and developed a deep understanding of that particular market and all the nuances, not just around the competitive positioning among very large and very, sort of competitive banks; but also, I’ll call it regulatory and compliance burdens that they shoulder not just in the US, but across the world.
Every geo has a different set of rules, and these financial institutions have to spend a lot and invest a lot to make sure that they are following those rules and procedures. Their processes, in part because the industry is heavily regulated, all look the same to us; they might just speak a slightly different dialect. Think about the United States and English: it sounds a little different in Alabama than it does in Alaska. And the processes for things like reconciling trades, or looking for market-defining trends and creating proprietary trading algorithms against them, all kind of look the same.
So the key for us was to create a platform that had the flows that match those business processes. You can think about Salesforce. The sales automation and marketing automation flows are roughly the same for any organization. The trade reconciliation or the trade execution flows inside capital markets are roughly the same, although they might use different terms and they might have some unique bespoke steps in between, they’re roughly the same.
So we create these generic modules that can be configured on a bank-by-bank basis, and we’ve gotten a lot of interest and a lot of excitement about our ability to do that. And, I should say, the raw material, the unstructured data, looks very similar to what you see in the insurance markets and the reinsurance markets. Different targets, different business models, but the same processes.
They deal with unstructured data, they pass paper, there are lots of checkpoints. These are all things that should be automated by a machine. And we’ve seen a lot of interest in applying that same digital transformation approach using the Deepsee platform in that vertical, as we have in capital markets. So we find those two areas ripe for optimization around this approach, and we’re going to spend a lot of time continuing to work with those partners and develop more of those flows.
Of course, as we were discussing earlier in the conversation about transfer learning and transformers, the technology that you’re working with can be applied to many other business sectors and verticals. So I’m sure there’s going to be great opportunity to expand beyond capital markets and insurance. What are some of the next verticals you’re excited to see knowledge process automation serve?
We’ve all witnessed, in the last year, some tremendous innovation in the development and launch of new therapeutics, vaccines, medical supplements, and medications, in a very directed, hyper-focused way, to solve a very challenging problem that the world has been united against. If you look back at how that all happened, it was really the streamlining of steps in a process that otherwise takes much longer. If you went back 10 years, getting a new medication approved might take you 5 or 10 years.
A lot of that wasn’t the longitudinal studies on the efficacy of the drug; it was the paperwork. It was working through the bureaucratic process of any given government. So one of the opportunities we have in approvals for new pharmaceuticals and therapeutics is to streamline the paper process that is slowing down what should otherwise be great advancements, advancements that have a real life-and-death impact on people. When you think about the information problem in that kind of market, you’ve got massive amounts of research and massive amounts of animal and human testing that you have to process, organize, and sift through.
If we can help researchers, reviewers, and approvers do that faster, then I think that’s a good problem to be solving, and that’s an area where we feel we have a lot to offer. The other area where we have tremendous long-term value, and this applies to any business where there’s a contract between two parties, is this: everybody’s followed the cryptocurrency trend.
We think the underlying technology that supports the blockchain is going to be something that we can enable. We have a tool that’s sort of an on-ramp to cementing this notion of a golden copy between parties without having to have an arbiter. In financial institutions, they want to trade very aggressively at high velocity, but there’s always this settlement process.
And it’s always: okay, who’s got the right contract, who’s got the golden copy? Well, if that golden copy lives on a ledger, a blockchain, we can enable agreements to be initiated, then recorded, then processed, with the golden copy backstopped on the blockchain. We think we can create more transparency and more security in the overall financial process and the overall financial markets. We’re excited to bring some of those innovations to market in the coming year.
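The golden-copy idea can be sketched in a few lines (a hypothetical toy, not Deepsee’s product): each recorded agreement commits to a hash of the contract text and to the previous ledger entry, so either party can later check whether a purported copy matches the recorded golden copy without needing an arbiter.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class GoldenCopyLedger:
    """Toy append-only ledger: each entry commits to the contract text
    and to the previous entry's hash, so later tampering with a
    'golden copy' is detectable by both parties."""

    def __init__(self):
        self.entries = []

    def record(self, contract_text: str) -> str:
        """Append a new agreement and return its entry hash (the receipt)."""
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"contract_hash": sha256(contract_text.encode()), "prev": prev}
        entry_hash = sha256(json.dumps(body, sort_keys=True).encode())
        self.entries.append({**body, "entry_hash": entry_hash})
        return entry_hash

    def verify(self, index: int, contract_text: str) -> bool:
        """Check whether a purported copy matches the recorded golden copy."""
        return self.entries[index]["contract_hash"] == sha256(contract_text.encode())

ledger = GoldenCopyLedger()
receipt = ledger.record("Party A sells 100 units to Party B at $5.")
print(ledger.verify(0, "Party A sells 100 units to Party B at $5."))   # True
print(ledger.verify(0, "Party A sells 100 units to Party B at $50."))  # False
```

A real blockchain adds distributed consensus on top, but the core settlement benefit is exactly this: both sides can agree on which contract is the golden copy by comparing hashes rather than paperwork.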
It is very exciting to see knowledge process automation all across the markets. Where does that take you next, beyond these markets? What do you see in the roadmap for your product, and what would you like to share with our listeners?
I’ll go back to the Gartner terms. I talked about composite AI, and our Deepsee platform supports this notion of using the best model for the task at hand. By definition, we’re letting people apply this hyperautomation capability, because we’re streamlining the process of not just document digitization but document and data analysis. The third element, though, is really interesting. There’s this notion of: I’m a little reluctant to apply automation, or let AI make decisions for me, because sometimes we’ve reverted to: well, it’s a black box. It does what it does. I have no idea how it got to the answer, but I can just trust it.
Well, in certain markets, especially in financial markets, but in other markets too, you need a trust-but-verify capability. You need to make sure models aren’t biased, or certainly not biased in ways you hadn’t accounted for. You want to make sure models aren’t drifting, or at least not drifting in ways you hadn’t anticipated.
So we’ve created this technology that can ride along with the model’s dispositions and preserve, if you will, data provenance across all of the actions that happen for that model. So if you need to go back and interrogate, or you need to go back and, quote, “prove that the model was doing the right thing,” or conversely that it was not making biased decisions on the wrong side of the line, you can.
We have this sort of transparency capability in that model. We call it D3O, it’s the Deepsee data definition object, and it’s really just this ride along ledger that allows folks to take comfort in the fact that there’s not going to be this kind of reversion to: Well, I don’t know. I don’t know how the model got there.
I just know the model says this. We want more transparency and we want to preserve that transparency over time and we think that those two innovations are going to be very, very critical to wide-scale adoption.
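As a rough illustration of the ride-along provenance idea (the field names here are hypothetical, not the actual D3O format), each model disposition can be logged with a hash of its inputs and a hash of the whole record, so an auditor can later verify what the model saw and decided without the audit trail itself holding the raw data:

```python
import hashlib
import json
import time

def audit_record(model_name, model_version, features, prediction):
    """Toy 'ride-along' provenance record: hash the inputs, then hash the
    whole record, so a disposition can later be re-checked (trust, but
    verify) without storing the raw feature data in the audit trail."""
    payload = {
        "model": model_name,
        "version": model_version,
        # Commitment to the inputs the model actually saw.
        "features_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
        "ts": time.time(),
    }
    # Seal the record itself so any later edit is detectable.
    payload["record_hash"] = hashlib.sha256(
        json.dumps({k: payload[k] for k in sorted(payload)}).encode()
    ).hexdigest()
    return payload

rec = audit_record("credit_risk", "1.3.0", {"income": 55000, "age": 41}, "approve")
print(rec["record_hash"][:12])
```

Chained together over time, records like this let you answer “how did the model get there?” with evidence: re-hash the original inputs, compare against `features_hash`, and confirm the recorded prediction and model version.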
Well, I’m excited to see the continued growth of knowledge process automation. It’s the early days; as we continue moving into our hybrid workplace, it’s going to be great to see more of our knowledge become easier to digest all across the value chain. This has been today’s show with Steve Shillingford, the founder and CEO at Deepsee.ai. Steve, thanks so much for joining us on the show.
David, thanks for the time.
Thank you for listening to this episode of the HumAIn podcast. Did the episode measure up to your thoughts on ML and AI, data science, developer tools, and technical education? Share your thoughts with me at humainpodcast.com/contact. Remember to share this episode with a friend, subscribe and leave a review, and listen for more episodes of HumAIn.