38 min

How Platforms Leverage The Extended AI Community To Address Misinformation with Claire Leibowicz
HumAIn Podcast - Artificial Intelligence, Data Science, Developer Tools, and Technical Education

    • Technology

Claire Leibowicz currently leads the AI and Media Integrity program at the Partnership on AI. She holds a BA in Psychology and Computer Science from Harvard College and a master's degree in the Social Science of the Internet from Balliol College, University of Oxford, where she studied as a Clarendon Scholar.

It is not only tech companies that should be involved in creating good, responsible, ethical AI, but also civil society organizations, academic venues, other parts of industry, and especially the media.

The AI and Media Integrity program pursues a deceptively simple goal: using AI systems to keep online information good, healthy, and beneficial. But not everyone agrees on what content should be allowed online. Even humans don't agree on what misinformation is or what content should be shown to people through technology. Some tech companies now feel empowered to take content off their platforms entirely: not just to add a label or extra context, but to remove a public figure from a platform altogether. That is a real emboldening of platform agency in deciding who is allowed to speak and who is not.

The tactics of misinformation, how people create it and how they spread it, apply broadly across social media. Misinformation flows through WhatsApp groups, text messages, and many other venues. There is also a real movement toward misinformation that is not an outright misrepresentation of an event or fact, but a slant, a leaning, or a caption that gives a post a different connotation than it would have if someone else had written it.

AI and Media Integrity seeks a public that can distinguish credible information from misleading information. Labeling is an interesting in-between option: it doesn't limit speech, forbid sharing a post, or hide someone's information; it gives the reader more context.

The idea is to find a middle ground where platforms give users the control and autonomy to judge for themselves what is credible. Some people are deeply skeptical of platforms, and labels may deepen the division between users who see them as helping people become healthy consumers of content and users who find them biased, partisan, and error-prone. Automating label deployment is genuinely complicated, and we don't yet know which intervention best bolsters credible content consumption.

With the de-platforming of Donald Trump, we are living in a new era in which we grant platforms the right to say, "we can remove content in our users' best interest," without acknowledging whether users actually want that. The platforms have been emboldened, and that carries the connotation that they will become arbiters of truth, something those who value free-speech principles might frown upon, since the internet was founded as a venue for democratizing speech and allowing people to speak.

There are other steps platforms can take to change how content is shown beyond labeling. Labels alone cannot address what people trust and why there is such general distrust in platforms' ability to self-regulate and in fact-checkers' and media companies' ability to offer non-politicized ratings. We need to design interventions that don't patronize people but respect their intelligence and autonomy, raising awareness of checking sources and of media literacy. Holistic, educational digital-literacy interventions could support community-centric moderation, in which members of the community, rather than the platform itself, do the moderating, which might increase trust in how speech is labeled and ultimately decided upon. A lot of the

