Today’s guest speaker was an early employee on the Amazon Alexa team, has contributed to the conversational AI ecosystem globally, and is currently scaling Microsoft’s enterprise AI platforms. Listen in as Noelle LaCharite and I discuss results from Microsoft Build 2019, how conversational AI is coming of age, how you can be participatory with AI to make a more inclusive world, and why the Met Museum is adopting immersive technology.
Welcome to HumAIn. My name is David Yakobovitch, and I will be your host throughout this series. Together we will explore AI through fireside conversations with industry experts. From business executives and AI researchers, to leaders who advance AI for all. HumAIn is the channel to release new AI products, to learn about industry trends and to bridge the gap between humans and machines in the fourth Industrial Revolution.
If you like this episode, remember to subscribe and leave a review. Welcome back to the HumAIn podcast, everyone. I’m David Yakobovitch, your host. And today we have a tech evangelist who I’ve met at hackathons and meetups in New York City.
Noelle LaCharite now lives in Seattle and works as an advocate for Women In Technology. She is very passionate about mindful leadership, work-life harmony, and helping people achieve their dreams. She also specializes in helping brands develop skills for Amazon Alexa, and in fact today works at Microsoft on the Alexa and Cortana partnership. She delivers workshops globally. We’re happy to have you here today, Noelle. How are you doing?
Wonderful. Thanks so much for having me. It’s a long time coming, so I’m so excited to be here.
Oh, it’s so fun. I loved getting to build Alexa skills at some of these hackathons in New York. It’s amazing to see how far the platform has come, and how it’s not just an Amazon game anymore; multiple companies are getting in on it as well.
That’s right. One of the things I’m really interested in, and it’s actually been somewhat of a soapbox for me, is that I don’t want to have to build a separate thing for all of these different platforms. So I’ve been constantly saying we have to have one way to build a single thing, whether it’s a bot or a skill or whatever it is, and then all of these platforms, like Google and Cortana and Alexa, are simply channels for that single thing. And we’re finally at a point where we can do that.
So I was excited to join the new Microsoft Assistant team and really help make this core bot, which we call the Bot Framework, the single bot to rule them all, allowing us to deploy literally within a couple of minutes. I could write my bot once and then have it on Alexa, Cortana and Google within just a few minutes. If you’re a technologist, it’s relatively easy. If you’re a business user, it’s slightly harder, but there are even companies specializing in this today. There’s a company, Zammo.ai, whose whole focus has been making it easy for business users to create voice skills.
And similarly, they want you to build one bot and then just deploy it everywhere. So it will be a very interesting couple of years. Enterprise voice is really exciting, but, like you said, we’ve come so far in just five years. It’s crazy that five years ago, we weren’t even thinking about this.
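To make the "write once, deploy to every channel" idea concrete, here is a toy sketch in plain Python. The names and structure are purely illustrative (they are not the actual Bot Framework API): the point is that the conversation logic is written once, and each voice platform is just a thin channel adapter around it.

```python
def handle_message(text: str) -> str:
    """Channel-agnostic conversation logic, written once."""
    if "meditate" in text.lower():
        return "Starting a five-minute meditation."
    return "Sorry, I didn't catch that."

class ChannelAdapter:
    """Wraps the shared logic in one platform's request/response shape.

    In a real deployment, each channel (Alexa, Cortana, Google) would
    translate its own payload format to and from this shared handler.
    """
    def __init__(self, channel_name: str):
        self.channel_name = channel_name

    def respond(self, user_utterance: str) -> dict:
        return {
            "channel": self.channel_name,
            "speech": handle_message(user_utterance),
        }

# The same bot, surfaced on three channels:
channels = [ChannelAdapter(c) for c in ("alexa", "cortana", "google")]
replies = [ch.respond("I want to meditate for five minutes") for ch in channels]
```

The design choice this illustrates is the one Noelle describes: the platforms become interchangeable channels, so adding a fourth assistant means writing one more adapter, not rebuilding the bot.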
I had the opportunity earlier this year to be on a webinar with LogMeIn and Bold360 talking about conversational AI, and about companies like Nike and Starbucks and H&M, who are doing bots but also using voice. Why do you think it is that we switched from text to voice so quickly?
If we could have, we would have gone to voice directly. Text is a bit more mature from a natural language perspective, so technically it’s easier. To give you an example: there are about 60 languages you can translate in text using a web service, Cognitive Services, at Microsoft. Sixty languages with a single call.
That’s pretty incredible, until you then try to do the same thing, with all its nuance, in voice. Right now we support nine languages, and we’re very quickly adding more, but it’s definitely a subset because it’s much harder. Text is pretty finite. You type in text and it is what it is. But when people speak, all sorts of different things can happen.
They could use slightly different words that mean the same thing. They could have an accent, or a speech impediment. There’s so much that can come into play that makes the development of speech interfaces much more difficult, which is why having companies come in and support business users to do this work is so important, because it is a very complicated process. But at the same time, it is what people, customers, you and I, want. We’d much rather say it than type it if we had the choice. So it’s just a matter of trying to get that technology to meet us there.
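For the curious, the single-call text translation Noelle mentions is exposed as a REST endpoint (the Translator Text API, v3.0). The sketch below only builds the request rather than sending it; the endpoint, header, and body shapes follow Microsoft's documented API, while the key is a placeholder you would replace with your own subscription key.

```python
import json
from urllib.parse import urlencode

def build_translate_request(text, to_langs, key="YOUR-KEY"):
    """Assemble a Translator Text v3.0 request for one string of text.

    Multiple `to` parameters translate to several languages in one call.
    """
    base = "https://api.cognitive.microsofttranslator.com/translate"
    params = urlencode({"api-version": "3.0"}) + "".join(
        f"&to={t}" for t in to_langs
    )
    return {
        "url": f"{base}?{params}",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,  # placeholder, not a real key
            "Content-Type": "application/json",
        },
        "body": json.dumps([{"Text": text}]),
    }

# One request, three target languages:
req = build_translate_request("Hello, world", ["es", "fr", "zh-Hans"])
```

Sending `req` with any HTTP client would return one JSON response containing all three translations, which is the "60 languages with a single call" point made above.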
Now there are nine languages. I imagine they’re probably the Romance languages and English: Spanish, French, Italian. Why has it been so troublesome to pick up some of these other languages like Chinese, Korean, Japanese, and others?
As a matter of fact, at Microsoft, we focused on Asian languages first. Chinese is one of our languages because we were looking to achieve human parity in actual translation, real-time translation. There are a couple of videos, maybe I’ll send you a link to add to the show notes, from our different developer conferences, where we even do it live on stage. It’s not perfect, but I always ride on the edge of danger doing live demos on stage.
You never know if they’re going to go perfectly. They almost never do. But the sheer availability of the tech to even attempt it, and more importantly, the fact that it actually does a decent job, is phenomenal.
So Chinese, for example, is hard because it takes domain expertise in that language. Linguists are harder and more expensive to come by. You also have to have a pretty decent natural language model, not just for voice, but also for a graphical language. Ultimately, if you look at our speech services really broadly, text translation sits in the middle: you take speech, turn it into text, translate that text, and then turn it back into speech.
So oftentimes, companies like Microsoft ask: do we really need that middle part? How much work do we need to spend on that text translation piece? When you realize you actually have to do quite a bit there, that’s when companies like Microsoft dove deep and said, oh my gosh, we have to spend a lot of time on these graphical languages, like Chinese and Japanese, and really get machine learning to a point where it can successfully do that translation in text; then adding speech is relatively simple on top of that. So you’ll see more of that moving forward.
I can’t wait for a time when I can make my podcasts and simply translate into 30 languages or write a research report and it’s translated into 60 languages. How soon do you think, or how close are we to making that possible?
We’re there now. Today you could pass in a report and have it translated. Now, the nuance is that you still need a developer or a user experience person, someone to make it look pretty, someone who might do some initial editing. Because oftentimes, if you write a white paper and you just want to translate the white paper, you’ve already done that editing.
So you really just need a human to look at the output of the model and go, ‘yep, that’s about right’, or, ‘oh no, when we use these words together, it doesn’t mean the same thing as it does in English, so we have to be careful’. That’s what used to take a human a very long time to do in a manual translation.
Now we can amplify the ingenuity of that individual and let them do way more, because the lion’s share of the translation is done for them. Now they’re just reviewing it, editing it, iterating on it, as opposed to building that translation from scratch. So that technology is available now. Really, the burden is on us as thought leaders, as engineers, as business owners to take advantage of the technology and use it to solve our customers’ problems.
I can’t wait to try it out. One other thing I think about is: should I even learn new languages anymore? I speak English, I speak some Spanish, and I’ve always wanted to learn Mandarin, but everyone keeps telling me no, it’s too hard. Are we going to get to the point in the next few years where I should just quit and have an earpiece in all the time?
Well, it depends on your motive for learning the language. Many times people are drawn to a language because they’re moving to or visiting a country and want to be able to navigate around, and it’s very temporary. They’re not looking to actually understand the language or the culture or anything like that. They just need to be able to find the bathroom and get a beer. That’s very transactional, and for those purposes, I do think of these on-the-go services.
There’s actually an app for your phone called Microsoft Translator, and I’ve used it at conferences. For example, soon I’ll be speaking at a conference in Poland, and many of the talks will be in Polish. Mine, of course, will be in English. So what I do is turn on Translator, and it’s push-to-talk: you hold down the mic, it collects all the words being said, and then it translates live for you. So I can tell what they’re saying, I can tell the context of the conversation, and I don’t need to learn Polish to do that.
But apps like Duolingo tell you that people are actually drawn to learn languages for much more than just the transactional benefit of being able to find a bathroom or get a meal.
People actually love the language. I know many of my friends that have learned Japanese, have learned Mandarin, have learned even Spanish or Irish, do it for a love of the language.
And for that neurological benefit you get when you try to do something that’s hard. And of course, it gives you an excuse: once you’ve done the work of learning the language, you get to go visit and practice it with actual people who speak it. There’s a totally different kind of human benefit to language learning than just that transactional thing that AI can probably assist with today.
And so much of this work has been going on for the past few years. We look at companies like Nuance, and even Microsoft and Apple and Google and many others, Amazon included, who’ve done all that research. What’s so fascinating is that now there’s a collective. You mentioned that Amazon and Microsoft are partnering to create a consistent API and a consistent experience. What spawned that?
I mean, at least for me, I started at Alexa in the very beginning when it was a very small team. I remember actually being in a room where they showed us the very first customer complaint, or really it was feedback. And the feedback was: why do I ever need to know what skill to call, or what Alexa knows or has learned? Why do I ever need to know the details of how to make a request? Why can’t I just ask for stuff? If Alexa knows, it knows. If it doesn’t, it figures it out, knows pretty quickly, and lets me know it’s figured it out. How does it become more contextual?
And the only way that would be possible is by moving beyond the command-and-control structure we have today, where you have to invoke an action on Google a certain way, or invoke a skill a certain way: you have an invocation name and then you have intents that you’re calling. It’s more natural than we’ve ever had before, but it’s still not natural. I can’t just say, ‘Hey Alexa, give me a meditation’ or ‘I want to meditate for five minutes’. That would actually push you into this command structure: ‘Would you like to call this skill?’ And the user’s like, whatever, okay, if I have to.
They don’t really want to know that. So we are now drawn to this world where we want to be device-independent. This is one of the things we announced at Build earlier this year, where we were talking about how to make voice integration ubiquitous. It doesn’t matter where you are. I want to be able to start a conversation in my office, continue it in my car, and then go home and find out how it went or get a status update. I want it anywhere, all the time, with the context carried across these devices. And that’s really where that one-bot idea came from: customers want that.
They want to not care. I don’t know if you know a lot of people with voice assistants, but many of them have more than one. I use Siri on my phone all the time. I also have Alexa on my phone, Cortana in my office, Cortana on my laptop, Alexa at home, and sometimes Alexa in my car. So I don’t want to have to remember the nuances of each platform. They should just be channels for me, as a customer, to communicate and get what I want done.
That’s incredible. I have Alexa everywhere, but I also have Cortana, because I have a Mac computer, a Windows computer, and a Linux computer. It’s amazing how every machine has different tools you get to work with, and fascinating how they were built as separate utilities. But instead, the framework is now shared: it’s one bot, one shared experience.
And so, as you mentioned earlier this year, Microsoft had its Build Conference, a lot of new developments coming out, especially around conversation, especially around AI. What are some of the changes that were exciting that you saw coming out this year?
Oh, there were so many. It was great to see, of course. I’ve been watching this space a long time because of my experience at Amazon watching Alexa. At my first re:Invent after Alexa launched, we had to fight to get a few sessions. Then, over the course of a few years, at my last re:Invent before I joined Microsoft, we had something like 75 sessions.
So watching this velocity is so exciting. Similarly, at my dev conference last year, Build, I spoke on conversational AI, but there were just a few mentions of it. We launched unified Speech Services, the combination of speech-to-text, text-to-speech, and speaker authorization and authentication; all these different services were unified into a single model. That was super exciting. But this year, voice was really pervasive. Throughout every talk, someone mentioned natural language, which I thought was really exciting. One of the big ones was this concept of Conversation Learner.
So imagine, for example, you’re in a meeting, where many of us maybe spend more time than we’d care to, and you now have a device in that meeting, whether it’s your phone or an actual speaker. We have a dev kit now with a really interesting speaker that allows this technology to work best.
But what happens is that as people are talking, Conversation Learner, which is a model provided by Cognitive Services and applied AI, is listening and is able to identify each individual speaker. It is then also able to identify what that speaker is saying and attribute it, like in a transcript, where it’ll say ‘Noelle’ and what Noelle said, and then ‘David’ and what David said, with a profile picture and the words. You get this full-on back-and-forth in text in real time. And one of the most important things, really one of the reasons I joined Microsoft, is that it doesn’t limit you to what a typical natural language model can do. You actually get to train that model.
So let’s say you’re in healthcare, or in digital marketing or finance or real estate, and there are certain terms, maybe hundreds of terms, that are specific to your industry, to the niche that you’re in. How do you make sure your language model doesn’t fail because it hears a word it doesn’t know or an acronym it doesn’t recognize? Well, now you can train this Conversation Learner.
So, as you’re rattling off this dictation, it’s no longer command and control. You’re not sending it a command and waiting for it to come back. Now you’re just talking, and it’s able to capture what you said and identify keywords you’ve trained it on for your specific industry or company. Then you get to use things like text analytics to analyze the sentiment of the meeting and to identify the keywords and entities that are important, which provides the ability to not just take really detailed notes of that meeting, but maybe even help populate the action items that came out of it.
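The post-meeting analysis described above can be sketched in a few lines of plain Python. This is only a toy illustration of the idea: the domain terms, action cues, and transcript here are all made up, and the real Conversation Learner and Text Analytics services use trained models rather than keyword lists.

```python
DOMAIN_TERMS = {"EHR", "HIPAA", "telehealth"}        # e.g. a healthcare niche
ACTION_CUES = ("will send", "follow up", "by friday")  # naive action-item hints

def analyze_transcript(turns):
    """turns: list of (speaker, utterance) pairs, as speaker attribution yields.

    Returns the domain keywords that were mentioned and any utterances
    that look like action items.
    """
    keywords, actions = set(), []
    for speaker, text in turns:
        lowered = text.lower()
        keywords |= {t for t in DOMAIN_TERMS if t.lower() in lowered}
        if any(cue in lowered for cue in ACTION_CUES):
            actions.append(f"{speaker}: {text}")
    return {"keywords": sorted(keywords), "action_items": actions}

meeting = [
    ("Noelle", "The telehealth rollout must stay HIPAA compliant."),
    ("David", "I will send the EHR integration notes by Friday."),
]
summary = analyze_transcript(meeting)
```

The takeaway matches the point in the conversation: once the transcript is speaker-attributed text, extracting trained keywords and candidate action items becomes ordinary data processing.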
So this is really great stuff, things that were very difficult before. We always had people asking, at least I have as a solutions architect: ‘I really want to dictate notes. I really want to talk for a long time and have the machine figure it out’. And that’s been really hard up until now. Conversation Learner is actually available; you can go take a look at it online. You do have to register to get into the private beta, but it’ll soon be released publicly.
So how exciting! We’ve been able to do this in AI for quite some time; it’s not a new technology. But the fact that we’ve democratized it and made it accessible as a web service, which means you don’t have to be a data scientist to take advantage of this tech, that indeed is quite new.
It’s so incredible for several reasons. In a future episode of HumAIn, I’ll be featuring Mark Sears from CloudFactory, talking about how you can use data and annotate it for success with teams. But for me particularly, back in 2008, my first job while I was paying my way through undergrad was as a financial transcriptionist, listening to lawyers and doctors and typing with all the shortcuts. So it’s crazy to see where we’ve come in literally about 10 years.
And I was actually at the ARIA last year at a health hackathon, and one of the winning teams was using the Microsoft conversational platform while it was in its alpha stage. Even at that amazingly early point, you could see how, in healthcare, if there was a dialogue between a doctor and a patient, you could instantly have that translation and get recommendations. It’s going to be so powerful for enterprise use cases coming up very soon. You could say conversational AI is coming of age.
Exactly. And it’s just so interesting, the velocity at which it’s moving. I’ve been in tech for a very long time. I saw web services becoming a thing, virtualization becoming a thing, then cloud and even mobile. But nothing quite like this. AI is really increasing the velocity of our technology and what we’re able to do with it, which is great. We do have to be careful, but it’s great.
There’s so much out there in the space of AI; a lot of conferences and summits have popped up where you get to learn and network. Are there any AI conferences you recommend for people wanting to get more into conversational AI?
Yes. Specifically around voice, which is a bit of my passion, there are some really great conferences. This year brought the first collection of conferences of its kind, with real specialization. Earlier this year, there was a Voice Of The Car Summit. There’s also a Voice Of Hospitality and a Voice Of Banking Summit, where people with a very specific interest can get together. I love the concept of these conferences because it’s like a mastermind, if you’ve ever read “Think and Grow Rich” by Napoleon Hill: you bring together people who are already working in a space with people who want to work in that space, and it’s a tide that raises all boats.
Everyone gets smarter in these environments, because people are extremely collaborative and sharing, and everyone’s talking about the same industry, which makes it easier to dive deeper and solve the tougher problems that industry is facing. So I really like those niche ones. Voice First, the same organization that runs the Alexa Conference, has a series they’re doing throughout the year. They might be under a new name by the time you’re listening to this, but they have the Voice First Conference, and they’ve been around a long time.
I’m a friend of theirs. I’m like, ‘Hey, if you guys ever want me to come and talk, I’m happy to do it’. They’ve got a lot of really great organizations assisting them, so you can always be sure you’ve got the best breed of speakers. And the first conference I ever went to that was dedicated to voice was the Voice Summit.
And that is happening this summer. The benefit of Voice Summit is that they’ve got a ton of different tracks in each industry, and there are around 5,000 people going to be there. It’s probably the largest voice summit of its kind, so it’s really exciting. The likes of Amazon, of course, with Alexa; Microsoft is a platinum sponsor; Prudential is a sponsor. All of these very big names are coming together, and they’re the people you want to learn from. They always say pioneers are the ones you see when you’re going toward your new goal, face down in the dirt with arrows in their back. You never necessarily want to be out in front; bleeding edge is called that for a reason.
You don’t want to be Jon Snow.
Exactly. But these larger companies, we can do some of that work. So, it’s nice because here you get to learn from those pioneers, learn from their mistakes and really build on top of the shoulders of giants, which, especially in AI, it’s never been easier to do that.
Yes, this is super cool. Like I was just saying, you never want to be the Jon Snow with the arrows coming into your back, unless you get that second lifeblood of coming back to life. Actually, speaking of which, I’ll be joining you in a few weeks at the Voice Summit.
I’m on one of the AI panels with a Google ethics speaker, talking about ethics and voice. So it’s a fun topic, and I look forward to seeing you in New Jersey. And for all those listening to the show, it’ll be at the New Jersey Institute of Technology from July 22nd to 25th.
Absolutely. So, so great. So fun.
And it’s also important to think about how you and everyone can be participatory in AI and in voice. One of my favorite startups that lets everyone experience this is one in Canada called Lyrebird. You basically record a voice sample a few times, and before you know it, you have a robotic or almost human-sounding voice of yourself. Are there any other startups or products at Microsoft you can call out for people to get participatory here?
Oh my gosh. There’s so many. So we have a whole collection of AI models that are accessible and you can even test them out right inside of the browser. So you don’t have to download anything or do anything, but we have a collection of these models.
We call them prebuilt AI, because all the hard work of developing the model and training it on data has been done, and you just get the benefit of the feature set. Those features would be things like facial recognition. As developers, we often think, ‘Oh, I want to implement facial recognition’, and then, ‘Wow, that’s hard work; we’re probably not going to do that’. That’s not necessarily the case anymore.
For example, maybe I’m building an application that’s going to be used by call center people who are triaging incoming requests. I could run sentiment analysis right on those requests. I could identify entities and pre-populate a bunch of them in the form of questions that might be asked. I can empower the individual person trying to help the customer. As a typical application developer, I can actually use AI through a web service: just call the web service and use AI models to bubble this up. So if you’re interested and want to dive deep, there are actually a couple of different ways you can do this.
One is, you might not be a technologist; you may not be super technical. If you’re not, we’ve developed what’s called the AI Business School. You can probably just Google it, but it’s Microsoft’s AI Business School, and it goes through all of the technical content from a business perspective. So if you’re more into product marketing, product management, program management, or you’re an entrepreneur or the leader of an organization, these classes will help you see business value and see and even build AI through the lens of business growth and strategic growth.
But for those technologists listening, we also have AI School, a bunch of curated courses we’ve developed on how to build an intelligent app, how to build a virtual assistant, how to create a device. I was one of the people who helped work on the original Alexa Pi project, where we took a Raspberry Pi, fixed it up, and deployed Alexa on it to teach people.
If you’re in the maker community: how do you take Alexa and put it on a Pi? And now we’ve gone quite a bit further here at Microsoft. We were like, okay, that’s cool, but what about computer vision? What if I could add a sensor or a camera to, let’s say, a drone? Could I do image analysis from that drone? Could I fly it around a bridge and detect debris or algae or cracks in the foundation? And all of this is accessible, again, without having to be a data scientist.
You simply have to build an application, which is potentially an easier skill to learn than getting a PhD in mathematics or statistics and learning how to develop algorithms from scratch, and certainly more accessible. So AI School and AI Business School are both really great. And of course, my Microsoft Learn people would want me to mention Microsoft Learn, which is a learning platform covering all of AI, but also Azure. There’s a lot that goes into artificial intelligence, and it’s now a bit of a buzzword, so people are not super clear on what it means, and what it means to them might be different from what it means to someone else.
So realize that when you move into something like artificial intelligence, you’re going to be leveraging web services, so you should know what those are. You’re going to be building models, so you should know the context of what that is. All of this is going to run on cloud computing, so you should understand those concepts. Microsoft Learn provides detailed curriculum, the classic web-based, MOOC-style environment for learning that.
But all of these are accessible. Literally today, or just yesterday, I built a bot in a couple of minutes, right in front of the customer, and they were like, ‘Wait, what did you just do? How was that possible?’ It was a conversational bot with a natural language engine. I used something called QnA Maker: within just a couple of minutes, I had ingested a bunch of content from their website and turned it into a conversational application. And then, by the end of my one hour, I had added voice. So it has never been easier to get started with some of this technology.
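The core idea behind QnA Maker, ingest question/answer pairs (which the service can extract from a website or FAQ page) and answer new questions by picking the best-matching stored question, can be sketched as a toy in plain Python. The FAQ entries here are invented, and word overlap stands in for the trained ranker the real service uses.

```python
# Invented FAQ content, standing in for pairs QnA Maker would extract
# from a website.
FAQ = {
    "what are your opening hours": "We're open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
}

def answer(question: str) -> str:
    """Return the answer whose stored question shares the most words."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for stored_q, stored_a in FAQ.items():
        score = len(q_words & set(stored_q.split()))
        if score > best_score:
            best, best_score = stored_a, score
    return best or "Sorry, I don't know that one yet."

reply = answer("How do I reset my password")
```

The real service adds language understanding on top of this retrieval idea, which is why Noelle could ingest a site and have a working conversational app within minutes.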
The biggest breakthroughs, especially with the use case you just provided, are going to be for small and medium businesses. Traditionally, they’re non-technical and they think, wow, I’m going to have to spend hundreds of thousands of dollars just to get a bot on my website. And you can show with a POC that no, it’s actually within your reach, and you can do it today. The blistering pace of this technology is incredible.
That’s right. And when everything is cloud-based, as we know from AWS’s growth, and now Microsoft doing a really great job in enterprise cloud, look how affordable it becomes. You’re not looking at a huge capital expenditure to build infrastructure to support this thing. Now you just have much smaller operational expenses: every month we pay 10 bucks for QnA Maker, every month we pay for 25,000 transactions to support this bot, or whatever it is.
But you’re paying in small increments that can be turned on and off at your discretion, which makes a lot of this very difficult, usually high-performance-computing technology accessible to anyone, including the small business. And I do think that’s part of Microsoft’s mission; we have it on the back of our badges. Our mission statement is to empower every person and organization on the planet to achieve more.
What that means is that we want to democratize things that used to be reserved for just 1% of the 1% tech companies or Silicon Valley, or those very large companies in the Fortune 500. We want this technology accessible at every level. Whether you’re just starting up, whether you’re building a mobile app, no matter what you’re doing, you should have access to this technology and we’re doing our best to make that possible.
And I applaud what you’re doing. Being here in New York, I see a lot of the changes in voice and text, and I had the opportunity to go to the Met Museum and see some really cool technology that Microsoft has been working on with MIT, for those who haven’t been to the Met yet. Noelle, why don’t you share with us what you’ve been working on?
Absolutely. A few months ago we got together; I was asked to come in to help lead and facilitate a hackathon. And what made this hackathon unique is that we’ve actually done a couple of this kind. A few months prior, we were at Abbey Road doing the first ever music-based AI hackathon, in Studio One. For those of you music lovers out there, you know how amazing an opportunity that is. And that was the first time we thought: we really can’t just have technologists build AI. We need the domain expertise of those who are passionate about and love music, for example, to build good AI solutions for the music industry.
And so, fast forward a couple months, we get a similar opportunity with the Met and we realize again that we need that domain expertise. So we worked with MIT. We actually got a collection of data scientists from MIT to create this wonderful collaboration by inviting a bunch of curators, about 10 curators from the Metropolitan Museum of Art.
And it was like that awkward middle school dance, where you have the data scientists on one side and the curators on the other, all shuffling around, looking at their feet, trying to figure out how they’re going to work together. The MIT side being like, well, I’m definitely strong in my technical skills, I can help you do whatever you want; and the curator side being like, well, I really understand art, but I’m not technical at all, I’m not sure how this is going to work. So there was a lot of skepticism and concern. But within the first day, everyone identified their area of expertise, their area of passion, and all these really cool projects started to come about.
So, earlier this year, in February, we did a big reveal of those seven projects on the floor of the Metropolitan Museum of Art. We were then given the opportunity to share that story on NBC Nightly News, if any of you caught it, which is still available on NBCnews.com. There was all this excitement being drummed up around how it’s not just about AI solving business problems or tech problems; now AI can help with everyday things.
AI can help music and AI can help art and AI can help fashion. And so this was a really great example of how important it is, critical actually, to make sure that you have domain expertise massively involved in the development of AI to make a meaningful solution for people, for the customer, or in this case, for the patron going to the museum. It would not have been the same if those data scientists had worked in a vacuum and just used the open data. That was the reason we did it, actually, because the Metropolitan Museum of Art released their open dataset on all of their artwork.
And it was great. But if they worked in a vacuum, they wouldn’t have been able to convey that passion and love for art that the curators provided. So it really honed in that message and became something that now we look actively for in these projects. And my claim to fame was that I was a part of the Alexa app that we built.
So we have a company, VoiceSkillsInc, that was asked to come in and really help take the hack idea to the next level. And the way that worked was, we wanted to be able to ask Alexa for the art of the day, but we didn’t want it to be random art. We wanted to leverage Microsoft AI Services: when you asked for the art of the day, the services would find out what day it was, run text analytics on that day, and identify relevant events from it.
Then they would go to Wikipedia and look for relevant art information. There’s tons of stuff on Wikipedia if you look up what happened today, but the idea was to find the art-related things and then correlate that art information from Wikipedia with the open dataset from the Met. And that’s actually available. We’re in beta now; who knows, maybe by the time you’re listening to this it will be available in the store, but be sure to look for it. It’s called Art of the Day or Today in Art, and it’s provided and sponsored by the Met. So pretty cool stuff, just to see how we’re bringing technology into the real world.
It’s so inspiring to see how all industries are being augmented by AI, whether it’s conversational or it’s text. And it’s even more inspiring to see how we’re beginning to bridge the gap on diversity with everyone getting involved in AI and coding. I know for a long time you’ve been involved with Girls Who Code and Women In Tech and AI for All and many different initiatives. And now yourself, you’ve brought that to the next level with your own podcast, Lady Coders, and doing mentoring Mondays live on LinkedIn. I’d love to hear what made you get involved in giving back to the community as well?
I had the luxury, I will say, of being a female in technology. I’ve always been in tech and it’s been a very tough road, and I’ve always been very vocal about how tough it’s been. However, not so many people were interested in my story when I was 20 or 25 or 30. But now that I’ve become a leader, I’m helping people who are going through those pains of being a 25-year-old ambitious, excited, passionate technologist and hitting some organizational resistance to that. So with Lady Coders, originally, we’ve got a nice manifesto. You can find it anywhere podcasts are played; check it out on iTunes or Spotify, for example. And in our manifesto, we really identify that it’s tough. Being a woman in tech is tough.
It can often bring us to a place of complaint, of it shouldn’t be this way, why is it this way? But I invite, and I’ve actually collected, a group of women who don’t really think that way. Yes, it is tough, and it’s unfortunate, but how can we change the world? There are such great initiatives like Women In Tech. I run the Women In Tech Seattle meetup group. There are so many opportunities for us to pull together and do good.
As a matter of fact, I recently uncovered some research done by a team at Google, and Harvard Business Review just published a similar study, an article on the impact of having females in machine learning development cycles and why that’s important. It also identified that it’s not just women: how important it is to have the introverted view of the world represented, and how important it is to have someone who maybe is not technical involved in technical discussions to provide insight.
There’s such an importance, not just for creating a diverse workforce because we all have heard that’s a good thing. Of course, we should do that, but really, it’s more about how do I include their voices in the decisions that we make. And so, Lady Coders and my mentoring Mondays call on LinkedIn. I also do a mastermind, which is a group where we get together in person three times a year and talk about these issues and less about identifying the problem, because we’re all good at that, and more about how do we move the needle? How do we make a change? I’ve learned you can’t do it by yourself. It really does take a bit of, at least, a collection, if not a bit of a movement, to get people thinking a different way. And so Lady Coders has been that for me.
And so far, it’s been my vehicle for helping other women find people just like them and not really taking that negative stance of, Oh, woe is me, this is hard, but more, how do I use my skills for good? How do I use my female perspective to help organizations? And the better part about this, or the shining light at the end of that story, is that organizations are beginning to see that value too.
I don’t know if you know this, but California recently announced that they are now requiring a female representative on all company boards. So you have to have a woman on every board if you’re in California. And that’s fantastic. It’s sad that it had to become a requirement for people to do it, but hey, I’ll take it. The reason it’s required is because companies with female leadership on their boards do better, and maybe companies can’t get there organically, so states like California are mandating it. But they do better. So how can I help women in technology realize that they actually do provide quite a bit of value?
And what’s the best way to surface that value, whether in your own company that you started or in a company you work for? So I’m very passionate about it. I love it. I love helping people. It turns out, though, that Lady Coders is not all women. The problems that we face, many people face them, all marginalized people. I’m a Latina woman as well. So I’m a woman in tech, and I’m also Puerto Rican, so I have a unique set of challenges that go along with that. I believe right now only 1% of women are in technical leadership roles, and only 1% of those are Hispanic, which is a sad thing that I hope to change, but it certainly motivates me to do things like this and have conversations like this to help encourage other women.
I never want to say, you should go into tech, whether you like it or not. You should go wherever your passion takes you. But if it’s tech, I want to make sure that you are equipped with all the knowledge I have, anyway, to be successful. So thanks for letting me share that. That’s one of my big passion projects.
Noelle, thanks for helping us move the needle to make a more inclusive world for humans where each and everyone can share our voices powered by AI. It’s such a pleasure to have you on the HumAIn podcast.
It’s been my pleasure. Thank you so much for having me and thanks to all the listeners. It’s been great.
Hey humans. Thanks for listening to this episode of HumAIn. My name is David Yakobovitch, and if you like HumAIn, remember to click subscribe on Apple Podcasts, Spotify or Luminary. Thanks for tuning in and join us for our next episode. New releases are every Tuesday.