How AI Can Create Positive Social Outcomes in the United States with Jake Porway of DataKind


Jake Porway is a machine learning and technology enthusiast. He is the founder and executive director of DataKind, an organization that brings together leading data scientists and high-impact social organizations to better collect, analyze, and visualize data in the service of humanity. Jake was most recently a data scientist in the New York Times R&D Lab, and he remains an active member of the data science community, bringing his technical experience from past work with groups like NASA, DARPA, Google, and Bell Labs to bear on the social sector.

Jake’s work has been featured in leading academic journals and conferences (PAMI, ICCV), the Guardian, and the Stanford Social Innovation Review. He has been honored as a 2011 PopTech Social Innovation Fellow and a 2012 National Geographic Emerging Explorer. He holds a B.S. in Computer Science from Columbia University and an M.S. and Ph.D. in Statistics from UCLA.

Episode Links:  

Jake Porway’s LinkedIn: https://www.linkedin.com/in/jakeporway/ 

Jake Porway’s Twitter:  https://twitter.com/jakeporway 

Jake Porway’s Website: http://www.jakeporway.com 

Podcast Details: 

Podcast website: https://www.humainpodcast.com

Apple Podcasts:  https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009

Spotify:  https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS

RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9

YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag

YouTube Clips:  https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos

Support and Social Media:  

– Check out the sponsors above; it’s the best way to support this podcast

– Support on Patreon: https://www.patreon.com/humain/creators  

– Twitter:  https://twitter.com/dyakobovitch

– Instagram: https://www.instagram.com/humainpodcast/

– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/

– Facebook: https://www.facebook.com/HumainPodcast/

– HumAIn Website Articles: https://www.humainpodcast.com/blog/

Outline: 

Here are the timestamps for the episode:

(00:00) – Introduction

(04:27) – DataKind is a nonprofit dedicated to using data science and AI explicitly in the service of humanity. There are huge opportunities for these algorithms, not just for businesses looking to increase profits or efficiency, but also for social change organizations.

(09:21) – Their goal is to empower people on both sides who might not otherwise work together: social change organizations that could be boosted by technology, and the many compassionate technologists who have realized that their skills, whether coding, analytics, or machine learning, could be applied to these problems.

(10:47) – It’s all about people working together who share a vision of a better world and of technology having a role in it.

(11:41) – The ethical use of AI in our society needs more guardrails and possibly regulation. To build ethical AI, you need to make sure that community members and social activists are involved in the process, from design all the way through to oversight of the system.

(19:06) – Whether AI is unethical depends on the system deploying it. Different systems are designed to do different things, and they will use AI for the goals they have. Companies are designed to grow and get big to make profits, and some of that growth comes at the cost of other social elements we’ve come to rely on; hence the tension.

(22:31) – AI is an accelerant, and there are some systems and working social elements that AI could help with. The trick is finding and really promoting them, as opposed to assuming AI is naturally ethical just because you’re doing it for a “good cause,” or that it can solve all of our social and human challenges.

(24:07) – We are struggling to set standards for humane or ethical AI. There has been a large push for ethical AI standards for computer scientists, AI engineers, and machine learning folks to adhere to, and that is a very natural step toward standardizing our practices.

(25:02) – Everyone seems to have wanted to create their own standard, but more than that, standards are only as good as your ability to enforce them. There is one school of thought that if engineers were trained in ethics or had more ethical frameworks, maybe we wouldn’t have some of the outcomes we have in companies today.

(27:33) – We’re in a bit of a frontier land with any of these standards or ethical codes on how AI should or shouldn’t be used, including proper labeling of data sets so that outcomes are racially and gender equitable.

(30:49) – When labels are used for predicting recidivism and in criminal sentencing, there are so many horror stories with real implications for people’s lives. By contrast, AI and machine learning have worked pretty well for medical diagnosis from scans, reverse image search, and audio search.

(37:00) – One thing we are really committed to seeing is a world without cases of things like gender bias in these technologies, which might be achievable if more of the folks affected by a technology were involved in its design and oversight.

(37:50) – We want to create a space where communities can actually build the AI technologies they want for the social outcomes they need. We’re really transforming DataKind, moving from doing individual projects to taking on significant social challenges.

(42:56) – We’re moving into a world where everything is being defined by data. Social good, meaning these predictive, positive social outcomes, is what we have to focus on. Then ethical AI just becomes part of our workflow.