How Platforms Leverage The Extended AI Community To Address Misinformation with Claire Leibowicz


Subscribe: Google Podcasts | Spotify | Stitcher | TuneIn | RSS

Claire Leibowicz currently leads the AI and Media Integrity program at the Partnership on AI. She holds a BA in Psychology and Computer Science from Harvard College, and a master’s degree in the Social Science of the Internet from Balliol College, University of Oxford, where she studied as a Clarendon Scholar.

Episode Links:  

Claire Leibowicz’s LinkedIn: https://www.linkedin.com/in/claire-leibowicz-17156a65/ 

Claire Leibowicz’s Twitter:  https://twitter.com/CLeibowicz 

Partnership on AI, “Manipulated Media Detection Requires More Than Tools: Community Insights on What’s Needed”: https://www.partnershiponai.org/manipulated-media-detection-requires-more-than-tools-community-insights-on-whats-needed/

Partnership on AI (Medium), “A Field Guide to Making AI Art Responsibly”: https://medium.com/partnership-on-ai/a-field-guide-to-making-ai-art-responsibly-f7f4a5066ee

Research paper (arXiv): https://arxiv.org/abs/2011.12758

“It Matters How Platforms Label Manipulated Media. Here Are 12 Principles Designers Should Follow”: https://medium.com/swlh/it-matters-how-platforms-label-manipulated-media-here-are-12-principles-designers-should-follow-438b76546078

Podcast Details: 

Podcast website: https://www.humainpodcast.com

Apple Podcasts:  https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009

Spotify:  https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS

RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9

YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag

YouTube Clips:  https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos

Support and Social Media:  

– Check out the sponsors above; it’s the best way to support this podcast

– Support on Patreon: https://www.patreon.com/humain/creators  

– Twitter:  https://twitter.com/dyakobovitch

– Instagram: https://www.instagram.com/humainpodcast/

– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/

– Facebook: https://www.facebook.com/HumainPodcast/

– HumAIn Website Articles: https://www.humainpodcast.com/blog/

Outline: 

Here are the timestamps for the episode:

(00:00) – Introduction

(01:36) – Creating good, responsible, ethical AI should involve not only tech companies but also civil society organizations, academia, other parts of industry, and especially the media.

(02:24) – The AI and Media Integrity program pursues a deceptively simple goal: using AI systems to keep the information people encounter online good, healthy, and beneficial.

(02:47) – Not everyone agrees on what type of content should be allowed online. Even humans don’t agree about what misinformation is or what content should be shown to people through technology.

(03:48) – The tactics of misinformation, how people create it and how they spread content, apply broadly across social media.

(06:33) – AI and Media Integrity seeks a public that can distinguish credible information from misleading information. The idea is to find a middle ground where platforms give users control and autonomy, letting them judge for themselves what’s credible.

(08:54) – Some people are really skeptical of platforms. Labels might deepen the division between users who think they help people be healthy consumers of content and users who find them biased, partisan, and error-prone. Automating label deployment is really complicated.

(10:37) – With the de-platforming of Donald Trump, we are living in a new era in which we give platforms the freedom to decide what content we can see, in the name of serving users’ best interests, without acknowledging whether users really want that.

(24:00) – Platforms have been emboldened, which carries the connotation that they will become the arbiters of truth. Those who value free speech might frown upon that, since the internet was founded as a venue for democratizing speech and allowing people to speak.

(12:25) – Platform labels alone are insufficient to address what people trust and why. There is a general distrust in the ability of platforms to self-regulate, and of fact-checkers and media companies to offer non-politicized ratings.

(13:18) – We need to design better interventions that don’t repress people but respect their intelligence and autonomy, building on the raised awareness around checking sources and media literacy.

(15:12) – A lot of platforms’ policies about speech have to do with the way that speech can cause real-world harm.

(18:20) – When we talk about manipulated media, it’s really important to underscore what makes it misleading or problematic. Many people have advocated for AI-based solutions to deal with manipulated media.

(21:37) – It’s not just how an artifact has been manipulated that matters; it’s partially the intent, why it’s been manipulated, and what it conveys. Just because something has been manipulated doesn’t mean it’s inherently misleading or automatically misinformation; what matters is the effect of that manipulation. And that’s a really hard thing for machines to gauge, let alone people.