How Platforms Leverage The Extended AI Community To Address Misinformation with Claire Leibowicz


Podcast: Play in new window | Download

Subscribe: Google Podcasts | Spotify | Stitcher | TuneIn | RSS

Claire Leibowicz currently leads the AI and Media Integrity program at the Partnership on AI. She holds a BA in Psychology and Computer Science from Harvard College, and a master’s degree in the Social Science of the Internet from Balliol College, University of Oxford, where she studied as a Clarendon Scholar.

Episode Links:  

Claire Leibowicz’s LinkedIn: 

Claire Leibowicz’s Twitter: 

Claire Leibowicz’s Website:   

Podcast Details: 

Podcast website:

Apple Podcasts:



YouTube Full Episodes:

YouTube Clips:

Support and Social Media:  

– Check out the sponsors above; it’s the best way to support this podcast

– Support on Patreon:  

– Twitter:

– Instagram:

– LinkedIn:

– Facebook:

– HumAIn Website Articles:


Here are the timestamps for the episode: 

(00:00) – Introduction

(01:36) – Not only should tech companies be involved in creating good, responsible, ethical AI, but also civil society organizations, academic venues, other parts of industry, and especially the media.

(02:24) – AI and Media Integrity proposes a simple aim: using AI systems to promote good, healthy, beneficial information online. 

(02:47) – Not everyone agrees on what type of content should be allowed online. Even humans don’t agree about what misinformation is or what content should be shown to people through technology.

(03:48) – The tactics of misinformation, how people create it and how they spread content, are generally applicable across social media.

(06:33) – AI and Media Integrity seeks to build a public that can distinguish credible information from misleading information. The idea is to find a middle ground where platforms give users control and autonomy, letting them judge for themselves what’s credible. 

(08:54) – Some people are really skeptical of platforms. Labels might deepen a major division in user attitudes between those who think labels help people be healthy consumers of content and those who find them biased, partisan, and error-prone. Automating label deployment is really complicated. 

(10:37) – With the de-platforming of Donald Trump, we’re living in a new society where we grant platforms the freedom to gate content in the name of users’ best interests, without asking whether users actually want that.

(12:25) – Platform labels alone are insufficient to address the question of what people trust, and why there is general distrust in the ability of platforms to self-regulate and of fact-checkers and media companies to offer non-politicized ratings. 

(13:18) – We need to better design interventions that don’t repress people but really respect their intelligence and autonomy, building on the raised awareness around checking sources and media literacy. 

(15:12) – A lot of the policies platforms have about speech concern the ways in which that speech causes real-world harm. 

(18:20) – When we talk about manipulated media, it’s really important to underscore what makes it misleading or problematic. That’s why a lot of people have advocated for AI-based solutions to deal with manipulated media. 

(21:37) – It’s not just how an artifact has been manipulated that matters; it’s partially the intent, why it’s been manipulated, and what it conveys. Just because something has been manipulated doesn’t mean it’s inherently misleading or automatically misinformation; what matters is the effect of that manipulation. And that’s a really hard thing for machines to gauge, let alone people.

(24:00) – The platforms have been emboldened, and that carries the connotation that they’re becoming the arbiters of truth, something those who value free-speech principles might frown upon, since the internet was founded as a venue for democratizing speech and allowing people to speak. 