David Yakobovitch

Today's guest speaker leads Product Strategy at Ultimate Software. Listen as Armen Berjikly and I discuss how AI and NLP can create empathy in the workplace, why leaders must consider the ethics of AI, and what diverse teams are doing to be successful with data and algorithms. This is HumAIn.

Welcome to HumAIn. My name is David Yakobovitch and I will be your host throughout this series. Together, we will explore AI through fireside conversations with industry experts, from business executives and AI researchers to leaders who advance AI for all. HumAIn is the channel to learn about new AI products and industry trends, and to bridge the gap between humans and machines in the fourth industrial revolution. If you like this episode, remember to subscribe and leave a review.

David Yakobovitch

Welcome back to the HumAIn Podcast, everyone. My name is David Yakobovitch, and today we have a special guest joining us from San Francisco. Armen Berjikly is an entrepreneur who has dedicated his career to pushing the boundaries of artificial intelligence, with a special focus on emotion and empathy, so that technology can work with people as they are. He founded the company Kanjoya, which was acquired by Ultimate Software around three years ago. Today he leads Product Strategy for Ultimate Software in San Francisco. Thanks for joining us, Armen.

Armen Berjikly

It’s my pleasure, David. Thanks for having me.

David Yakobovitch

AI is evolving so rapidly, and we shared before kicking off the show that when I used to live in Florida, at one point I almost took a job with Ultimate Software. So I love the company. I've played on the ping pong tables in Weston, Florida. Ultimate has an amazing culture. How has Ultimate fostered a culture that keeps humans at the center while working on technology?

Armen Berjikly

I have to agree with you. Our roads have crossed here; it's a small world in some ways. Ultimate is a pretty special place, and it defies some of the characteristics that people ascribe to companies as they get to a certain size. It has 5,000 employees and was recently taken private in one of the largest deals in history, for almost $11 billion, so that gives you a sense of the financial success of the company. But the core has always remained this mantra of people first: internal people, the employees, and then the customers, in that order. It comes from the CEO on down. Scott Scherr has been CEO and founder for over 20 years, and he's led the ship.

Every time I've had the opportunity to spend time with Scott or see him speak, whether internally or externally, whether he's talking to investors or doing an employee event, he always makes clear what sits at the very base of his pyramid: the most important thing to him is the people building the organization, the employees. His philosophy has been, "If you take care of your people, they will take care of you, and they'll take care of the customers."

That level of trust, authenticity, and value placed on people, as we're going to be talking about today, is infectious; it has become the core of the culture and its defining characteristic. We now have the privilege of serving many thousands of customers, and it's not an insignificant number of them that come to Ultimate and say, "I love your products, but I also want a piece of your culture, and I want to replicate it here." So it's a fun topic. It's something we're proud of, and it's pretty unique. The more we can help bring it out to the rest of the world, the better.

David Yakobovitch

Traditionally, I know Ultimate's products have been about software eating the world: helping to support people solutions for companies that may not have dedicated human resources teams to handle all the processes.

So traditionally, Ultimate got started with UltiPro and all these great products. I used to do some research in that space as well, and I worked for ADP, so it was fun to see how the company has evolved and gone private again. Now, as a private company, it's more important than ever to think about the direction of technology. It's been 20 years, and Ultimate keeps on evolving. What's some of the new technology your team is working on?

Armen Berjikly

That's a great question, and I'd push it even one step further and ask what challenges this new technology brings, not just in the traditional rollout, adoption, and education of the market, but in the arena of why we're doing what we're doing, for a fundamental reason: the ethical challenges we need to be thoughtful about. Where I focus in my role is where we're going a couple of years out. The way I describe my job to my teammates is, "You need me to be successful so that our kids can have the opportunity to work at Ultimate Software."

That may sound a little glib, but history is littered with companies that were doing great, failed to see where the market was going and where the needs were, and came to that realization too late. We like what we're doing and want to keep doing it. So my role emphasizes reading the tea leaves, but also bringing my own understanding of the intersection between the capabilities of new technology and the unmet challenges in the human resources space, and where the solutions are. A significant amount of what I do is looking at exactly that. We can all agree, as participants in this space, that there are some things we do pretty well.

There are a lot of things that are still unmet needs, frustrations, gaps. What you do is come upon new technologies like artificial intelligence, start to map them, and ask, "Will this new technology help me solve one of these longstanding problems?" Implicit in that question is that technology alone should not be deified, worshiped, and put on a pedestal, as we often do here in Silicon Valley. It is not a solution in and of itself.

It's an avenue, among many others, that can help you tackle things you've always wanted to do and couldn't, or didn't even have the imagination to believe you could solve. We're reaching a point now, with where technology is and the advancements that have happened, where you can credibly say that pretty much anything we can dream of could happen, right? Science fiction is going to become science fact. Regardless of your position on that, there's been undeniable progress in the underlying hardware capabilities.

So I'm very excited to look not only at existing product offerings, like performance management, and ask what we can do differently now that we have some of these new tools, but also to dream and imagine: where is it that companies today just cannot solve a problem, or can solve it only by a lucky strike, or have to hire a big consulting company to come in, do a long study, and spend a lot of money? Those are areas where software can make a difference. That's the arena where I ply my trade and get to apply some of my background, and my projection of where the industry is going.

David Yakobovitch

The industry is going in such a diverse direction. We can even look at content: Black Mirror is a show that thinks very far forward, and there was the movie Bandersnatch this year, which is all about choosing your own adventure. Humans are able to be heard throughout that experience, and you feel like you're part of the process.

It was so interesting that Bandersnatch won the Nebula Award for Game Writing because of how unique it was. Translating that back to companies: the challenge with large organizations is that employees often feel like they're just another number, right? They're a cog in the process, but that can be changed. It's possible to bring empathy into the workplace, and AI and NLP could do that. Perhaps you could share more about what Ultimate is doing around that.

Armen Berjikly

You touched upon some really great things there with Black Mirror. It's a great show, and it underscores the point I'm making: the traditional approach to technology has been to let it lead, whatever happens, happens, and we'll fix the problems afterwards, retroactively. Leading companies like Facebook and Google have been demonstrators of that sort of faith. I don't think it's malicious. It's an optimism: wow, what an opportunity.

Let's put it out in the market, and if there's a problem, we'll deal with it later. In the world of Ultimate, where people first is step one, but also more expansively in the world of human capital, the responsibility is too great, right? We're not biotechnology companies; we're not directly building medications that save lives. But take it one step further: the work we do, from payroll on up to talent management, helps people pay the bills. We help them put food on the table. We help them make sure their kids can get taken care of at a doctor. So you take this steady, actually geometric, growth in the capability of technology, which makes pretty much anything possible,

and you put it into a world where, really, I have to quote Spider-Man and say "with great power comes great responsibility." We can't afford, and we won't, culturally, take an approach of putting it out there and figuring out the problems later. We've very much been of the mindset of assuming the future is now, assuming the capabilities are going to happen, and asking:

What is it that we really want out of these technologies, and what is it that we don't want, right? What box do we put it in? From that point, it's actually a position of strength to say: here are the ethical considerations for some of these new capabilities; within that boundary box, with that philosophy, how do we pursue the goals of building better products and solving customer problems?

I'm pretty excited about how that approach is now starting to become contagious in the overall community. I see lots of companies starting to say, "Yeah, we need to talk about our ethics code, both internally and externally." I'm happy to go into that today and explain our directionality on it. But conceptually, it's important to be proactive: this is a set of disruptive capabilities coming our way, and putting our heads in the sand and saying, "Oh, we'll deal with the outcomes later, the good and the bad," is the wrong way to approach it.

David Yakobovitch

So you mentioned the big juggernauts, like Google and Facebook, and there's so much going on in tech today. Earlier this year, Google launched an AI ethics advisory board, and in less than one week it was completely disbanded. What do you think companies can do better to proactively support ethics in AI and build technology from within?

Armen Berjikly

I applaud Google for what they tried to do, which was to get a bunch of diverse inputs from outside the company and set their ethical code. It ran into trouble because some of the voices on that committee were viewed as potentially flawed. The broader challenge: if you look at what happened to Facebook last year, when they got hauled in front of Congress, it typifies the technology-first approach. Hey, we put stuff out there, people used it, and there were some problems, right? Governments were overthrown, people were objectified, children were harmed. And I don't fault them for that happening.

But the reaction when Zuckerberg went to Congress was telling. He was in front of the House; he was in front of the Senate. He was posed these hard questions: "Facebook has done X, Y, or Z thing that's bad. What are you going to do about it?" And it was super interesting to me that, 14 times in those sessions, he essentially answered "AI will fix it" whenever one of those almost unanswerable questions was asked. I don't view that dismissively, like he was trying to dodge the question; I viewed it as philosophical.

Technology may have created the problem, and technology will be the solution to the problem. But that's a little backwards; sometimes you need to be more thoughtful about the problems you're going to create before you create them. I can draw an example from Google I/O, their yearly conference, last year. They got a lot of buzz because they unleashed their voice assistant in a demo during I/O where, basically, the assistant called on behalf of a human and made an appointment with a hairstylist. It was really cool to see the computer say, "Hello, I'd like to make an appointment."

The hairstylist was giving some time ranges, and the computer was figuring out how to say, "Yes, your 10:30 works." But what threw everyone off was that the computer had started to mimic human behaviors, particularly saying things like "um" and "yeah," those tiny signals that you and I have in a conversation because we're human. The fact that they had replicated that in a machine set off a whole ethical quandary, because the person on the other end of a phone call from a Google assistant could not reasonably tell: am I talking to a bot, or am I talking to a person?

That creates some very interesting challenges for our society. Again, it's an example of "the technology is capable, of course it can essentially replicate a person, so why not?" without proactively asking whether we should be doing things like that. That's my fundamental criticism of my industry overall: we tend to do this backwards, and it's worked for the most part. We all benefit from the fruits of technology right now. But as these new boundaries get knocked down, because, again, the hardware has advanced so much (we have something like 240,000 times more power in a central processing unit than we did 40 years ago, and that's only going up exponentially), I don't think the old approach works. Particularly in my industry and at my company, Ultimate Software, with a culture of people first, we've said: let's not do it that way. We have too much responsibility.

We really are enablers of people's lives and livelihoods and wellness, of their being able to take care of their families. So what's the solution? First, as a company, you have to embrace the boundaries and direction of your artificial intelligence approach. That's what we've done at Ultimate: we set up an internal group and created a shortlist of ethical considerations, something like six items. Then, as we build software and design the future, we know which box we're operating in.

The second part was: let's take it to our customers and our prospects and let them know what their vendor is doing. One, for feedback, of course, to make sure they understand; but two, to say this isn't something you want to sweep under the rug or hope your employees don't ask about. Audiences are getting savvier and savvier, whether they're consumers, in a world like Facebook's, or internal employees, in Ultimate's case…

David Yakobovitch (AD)

Hey humans, are you accumulating lots of listening minutes for your podcasts but not being rewarded for your listening time? There's a new app available now called Podcoin, where you can listen to podcasts and donate your listening time to charity. Podcoin gives you the opportunity to be rewarded for listening to your favorite podcasts, whether you're listening to Joe Rogan, Tim Ferriss, Terry Gross, or even HumAIn. Podcoin is the new app that lets you give back with your listening minutes; check it out on the App Store.

Armen Berjikly

People want to know what you're doing all this for, right? I'm not just going to take you at face value that all these investments in new technologies, new software, and new capabilities are for my good and the greater good. So you'd better tell me upfront, so I can get a sense of the directionality, the limits, the boundaries, the approaches. In that world, where your team is aware and you have transparency, you have a healthier environment to actually benefit from some of these capabilities, versus a different approach: sweeping things under the rug, hoping no one notices, hoping no one asks questions, just letting technology run wild and figuring it out later. That cavalier approach is a no-go for me. It leads to outcomes like you described: if you try to retrofit ethics, morality, and responsibility onto your advanced technology portfolio, it's a little too late.

David Yakobovitch

Even from a hiring perspective. A couple of times on the show we've talked about the case last year where Amazon designed a recruiting tool that went through resumes and determined whether candidates were a good fit for employment; those candidates would then be pushed on to the hiring manager for an interview. It was soon uncovered that women weren't getting interviews, and that's because it was designed retroactively, not proactively.

Armen Berjikly

That was a really interesting case. People often ask me, "Hey, what's the greatest risk of AI in recruiting?" And I'll tell you: the greatest risk is that AI actually takes no risks. It's a little counterintuitive to think that way, but AI is really a bunch of formulas, a bunch of pattern recognition, a bunch of math, and it is only as smart as the data it has seen before and what it can derive from that data. The mistake Amazon made, and I give them credit because they were very open about what happened and thankfully never actually unleashed their recruiting bot into the world, is that they trained it on themselves and only themselves. When you work at a company as large as Amazon, you may get the sense that, wow, we're super diverse, there are lots of different people in the office, so we're a wonderful training set. But the reality in the world of artificial intelligence is that even a company the size of Amazon is a drop in the bucket in terms of examples, right?

How many hiring decisions have been made at Amazon up until now? Maybe a million at best. And they tend to be wildly biased, because who you hire is a by-product of the people you have in place right now. People tend to hire people who talk like them, act like them, look like them, people they're comfortable around. That becomes a reinforcing loop. That is exactly what happened with their algorithm: they had no idea that they had tended not to hire from colleges and universities focused on women.

But the machine just looked at the data and said: wow, it looks like we never ended up hiring someone who went to a women's college; therefore I will make that a rule, and I will enforce it ruthlessly, because I'm being a good machine and I'm just trying to be the best possible version of the people who trained me.

So it brings up a really important point: as you build these essentially pattern-matching systems and math machines, the diversity of the data you train them on is wildly critical to the requirement, a hope for some companies but a requirement for us, that the outcomes carry as little bias as possible. All algorithms have bias; that's probably for another talk, since it's such a rich topic. But if you want to curb some of our challenges as people: we have unconscious bias, while machines have the ability to have their bias identified, measured, and hopefully over time ameliorated or potentially even eradicated. You can only get there if you have extremely diverse training inputs.
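[Editor's note: as a concrete illustration of the mechanism Armen describes, here is a minimal, hypothetical Python sketch, not Amazon's actual system. A classifier trained only on a company's own biased hiring history learns "never hired from a women's college" as a rule, even though gender never appears as an explicit feature; the data and feature names are invented for illustration.]

```python
# Hypothetical illustration of bias learned from a skewed hiring history.
# Gender is never an explicit feature, yet "attended a women's college"
# acts as a proxy, and the model enforces the historical pattern.
from sklearn.tree import DecisionTreeClassifier

# Features per applicant: [years_of_experience, attended_womens_college]
X = [
    [5, 0], [3, 0], [8, 0], [2, 0],  # applicants from the historical pool
    [6, 1], [9, 1], [4, 1],          # women's-college applicants
]
# Biased labels: no women's-college applicant was ever hired
y = [1, 1, 1, 0, 0, 0, 0]            # 1 = hired, 0 = rejected

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A highly experienced women's-college candidate is still rejected,
# because the machine "ruthlessly" enforces the rule it learned.
print(model.predict([[10, 1]]))  # -> [0]
print(model.predict([[10, 0]]))  # -> [1]
```

The fix, as Armen argues, is not a cleverer model but broader, more diverse training inputs, so that no single company's history becomes the rule.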

So a piece of advice I have for a company that's thinking about rolling their own algorithms is: probably don't, because you don't have access to anywhere near enough examples of different people, different voices, and different situations, despite your own belief that you probably do. Hey, Amazon couldn't do it.

That's a really big company that has been around a long time and has been wonderfully successful by most measures, and yet they ran into this problem of hidden bias because their data was simply too limited. So you've got to be as expansive as possible and go to vendors that have access to lots and lots of diverse inputs; we're one of them, because we work with thousands of companies, but there are others out there.

David Yakobovitch

Even if that was in the data, right? If it says you're a woman, you're a man, you're white, you're African-American, how would that ever be reliable? There's such diversity within every subset of every ethnic group and every demographic group that it just wouldn't make sense. So when I saw that story, I thought: how is this even possible? Wouldn't you anonymize out male and female from the start?

Armen Berjikly

If you pulled aside any Amazon executive and said, "Hey, did you know you have this bias in your organization?" they wouldn't have known, right? It's not obvious, because it's not a conscious choice they were making. I often tell people: don't assign maliciousness when ignorance will probably suffice. Companies are built from the ground up: they start with a founder, that person hires a couple of people, those people hire people they enjoyed interviewing, and already unconscious bias is at work, right?

If you have ten hires to make by the end of the month and a hundred applicants, it's only human nature. You could create this incredible, blinded, fair Excel spreadsheet and never look at who you're hiring or at their name, but what happens in reality? We have too little time, too few resources, too much pressure. So we start to lean on the cheat codes we've evolved as people to deal with those resource constraints, which in this case is unconscious bias: that was a really fun conversation with a few of those people, we came from the same hometown, we like the same teams.

It just felt right, so I'll pick those people, right? And in the end, you go home at night having probably not picked the exact best people, and oftentimes that plays out. So when I look at the broader question of where artificial intelligence and advanced technologies play into our world: it's a hard rule of ours, and of our ethical code, that we do not want to replace people or even replicate people. That's not a worthy achievement, it's not a necessary one, and it doesn't lead to a better world. I instead look at it from the perspective of people having to make decisions all day long. Whether you're a frontline employee or an intern, all the way up to the CEO, your day is defined by the decisions you make.

Those decisions are almost always made with a dearth or absence of the information you need, and even when you have the information, it tends to be skewed or biased, like in the Amazon case we've been talking about. So if the promise of some of these technologies comes to bear, when you go to make a decision you should have a buddy in the software, one that serves you, because you are the important piece of the equation, and brings more evidence to you.

It brings you some confidence that you're doing the right thing, some perspectives you may not have considered, and maybe even curbs your inclination to lean on shortcuts like unconscious bias. So decision-making support is the worthy goal of artificial intelligence. How does it get there? Well, you have to enable it to work with us and understand our problems. And that gets into the boundaries we're starting to push with new technology.

David Yakobovitch

Even in that recruiting case, perhaps we could mask voices, right? You could have an interview with no idea whether the person you're talking to is male or female, or what their voice sounds like. But then you've reduced your bias while creating the risk of more social arbitrage and social hacking, because the person actually interviewing may be someone else, and how long in the process can we keep that bias reduced until, "Oh wow, that's the person I've been interviewing"? It gets interesting. So bias is part of it, and the success of algorithms is just as much about the data. Look, you've been around for 20 years now, both with your venture and now at Ultimate, and you've seen a lot.

What are some of the types of data you’re using for new products and what’s on the horizon for Ultimate?

Armen Berjikly

Sure, great question. Step one, there's the idea that we have to suddenly pioneer new ways of gathering data and break into previously off-limits types of data, like emails or Slack chats. Going back to our ethical code, that's entirely uninteresting to us, and we would never do it. You only want to look at data that is intended to be looked at. You don't want to start feeding the beast, saying "any data is good data" and "hey, they're my employees, so they signed up for this." That's unacceptable. It's not the right thing to do morally.

It's also no way to build trust with your employee pool. So we take a very strong position that the only data we'll look at is data that was intended to be looked at: things like surveys you submit to your company, or performance reviews you submit, say a peer 360, where you're saying, "Hey, I want someone else to look at this." I'm not writing it for myself, and I'm not writing it privately to another individual.

So for the most part, the data we look at already exists. What we're really bringing to the game is a sense that the problems that are unsolved or under-solved in human capital are people problems, and people opportunities too. Take any CEO aside, and I've been a CEO: it's not a numerical challenge, right? You're not going to wake up in the morning and say, "If only I did 5% more of this one thing, suddenly all my problems would be solved." If that were the case, this entire industry would be a bunch of high fives and hugs. There wouldn't be conference after conference about things like retention and regrettable attrition, all these things, right?

So the unsolved problems are in the qualitative, in the human domain, in the "I'm doing everything right, I'm doing everything like I used to, and it's just not working anymore." The performance isn't there, the people aren't as motivated, they're not engaged, or people aren't challenging themselves.

So then I look and ask: all right, how can technology help, if technology always ignores the humanity in us? When I talk about the humanity in us, it draws on our psychological background, right? We as people feel as much as, oftentimes more than, we think. And yet software has almost always operated on the assumption that people are purely rational beings. In the human resources space, people are treated as just a combination of their pay, how long they've worked for that manager, what location they're in, and their job title. All of those demographic details are important, but we all know, from having coffee with a peer, that how you feel really dominates that entire equation.

I always use the example of people looking at their phone and saying, "I hate this phone. It doesn't work. It never works." That's totally not true; it's an emotion you have. It's frustration. Then logic starts to fill in: "Oh, I'm still in contract, so I'd better deal with it," or "My contract is up soon; I can think about going somewhere else." But that order of events is exactly how we're built: we feel, we decide based on feeling, and then we throw in some rationality and logic to make ourselves feel in control, like we're not just recklessly going around. So the point I make is this: if technology is trying to solve really big, interesting problems or help us make big decisions, and yet is not aware, sensitive, and thoughtful about the fact that our emotions matter,

that they're not just some cute thing to throw in as a marketing bullet point but actually the foundation of our decision-making, then you cannot look someone in the eye and say, "I'm really going to advance the state of the art in this space." So the work we've done at Ultimate, and in my personal career and research background, has been in building technologies that have empathy, or demonstrate it, by being sensitive in a lot of ways. When we look at a piece of data, it's not just how many words were said, a word count and a word cloud, which is where things stop today. We push forward and ask: how is this person feeling?

What are they feeling that way about? That also throws away some of the previous restrictions around what language they're communicating in. Are they an introvert or an extrovert? Male or female? Older or younger? All these things affect the language we use, and the work we've done over the last 15, nearly 20, years now has been to equip computers with the capability to say: I'll do the number crunching, I'll do the pattern recognition, and however this person is most comfortable communicating, I'll figure out what they're talking about and how they feel about it, because that is the origin story of what I can do to help. I have to understand the problem. You can't fix what you can't measure.

So to me, we've been counting lots of things that are easy to count, but we haven't been counting what actually counts. That's the work we do at Ultimate in particular, with a technology toolkit we call Xander that helps us understand the qualitative aspects of data as well as, if not better than, the pure statistical and demographic data. That opens up a whole world of opportunity in helping folks with their day-to-day jobs and decision-making.
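[Editor's note: here is a minimal sketch of the idea of going beyond word counts toward how a person feels, in the spirit of what Armen describes, not Ultimate's Xander itself. It assumes the open-source Hugging Face transformers library and a publicly available emotion-classification model; the sample comment is invented.]

```python
# Classify the emotion in a piece of employee feedback rather than just
# counting its words; assumes `pip install transformers torch`.
from transformers import pipeline

# A public emotion model (labels such as anger, joy, sadness, surprise)
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return a score for every emotion label
)

comment = "I keep getting passed over for projects and no one tells me why."
for result in classifier([comment])[0]:
    print(f"{result['label']}: {result['score']:.2f}")

# A word cloud of this comment would surface "projects"; the classifier
# surfaces the underlying negative emotion, which is what a manager can act on.
```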

David Yakobovitch

What do you think the future of work looks like, Armen, in a world where we'll have these Google Glass-style devices on us, seeing each other's social feeds appear around us and swiping actions throughout the day? Give me a picture of how you and Ultimate think about the future of work.

Armen Berjikly

Sure. The immersion in technology is a runaway train. I use the example that we used to have email and phone calls, then we added text, then we added Slack; did we subtract pretty much anything? Maybe phone calls got subtracted a little, but it tends to be additive and cumulative. I've never met a company that has too little data or too little to do. I've never met an HR team that has enough people. So this world of being swamped by inputs and not having enough time, money, and people to understand them is endemic.

It's our current situation, and it's only going to get worse, so it calls for companies like ours to say: "All right, we want to be there. We want to help make your organization better. What can we do in this space?" A lot of the future of work comes down to this fundamental thing: people are still the critical piece. We have zero interest in, and are philosophically opposed to, the idea of machines running companies and replacing people. So people are still at the center of the universe. And what is the situation of people? Not enough time, not enough energy, not enough resources or privilege to continuously learn.

We're affected by unconscious bias. These are situations where machines can come in and say, "All right, let me do some of the heavy lifting for you; let me find some things out for you." So when you go to make that hire, build that team, or write a review for someone, let's do what we actually promised to do: help you develop that person, or help you find a person who is really going to be a resource at your organization culturally and develop within the family of the company, not just be a fit from a skills perspective.

That softer side, that qualitative side, that emotional side impacts our day-to-day existence like nothing else. Let's be savvy about it. Let's build technology that works for us and change the situation we've been subject to, where we build the technology and then end up subjugated by it, talking the way it wants us to talk and operating the way it wants us to operate. It's time to break down those barriers, and some of the work we're doing at Ultimate, in artificial intelligence particularly, will make that a reality.

David Yakobovitch

Well, I love everything you're doing. It sounds like it's part of a human movement, and that's what we're here about: being humane, taking AI, and bridging the gap between humans and machines. I agree AI should not replace people but instead augment us. Thanks so much for being on the show today.

Armen Berjikly

It's my pleasure. Thanks for the opportunity to spend time with you and your audience.

David Yakobovitch

Hey humans, thanks for listening to this episode of HumAIn. My name is David Yakobovitch, and if you like HumAIn, remember to click subscribe on Apple Podcasts, Spotify, or Luminary. Thanks for tuning in, and join us for our next episode. New releases are every Tuesday.