PARTNERSHIP ON AI TO BENEFIT PEOPLE AND SOCIETY
Mission Statement
Bringing diverse voices together across global sectors, disciplines, and demographics so developments in AI advance positive outcomes for people and society. The Partnership on AI (PAI) is an independent, nonprofit 501(c)(3) organization. It was originally established by a coalition of representatives from technology companies, civil society organizations, and academic institutions, supported by multi-year grants from Apple, Amazon, Facebook, Google/DeepMind, IBM, and Microsoft.
About This Cause
Today, artificial intelligence is changing everything from the way we work to the way we see the world itself. Used responsibly, we believe AI can create a future that is more just, prosperous, and equitable for all. Guiding this technology so that it best serves everyone, however, is a task too great for anyone to accomplish alone. By bringing together civil society organizations, technology companies, academic institutions, and many more members from across the globe, the Partnership on AI connects the people building these systems with those impacted by them or studying their effects. Our Partners’ diverse perspectives also inform our original research, contributing to the rigorous development of best practices for AI. This research primarily falls under five Issue Areas: ABOUT ML; AI, Labor, and the Economy; AI and Media Integrity; Fairness, Transparency, and Accountability; and Safety-Critical AI.

ABOUT ML: ABOUT ML seeks to establish an industry-wide norm for the documentation of machine learning systems, one that supports the goals of transparency, responsibility, and accountability. Additionally, our Methods for Inclusion project is investigating the barriers to communication between AI developers and the diverse communities their work impacts. In March 2020, the National Security Commission on Artificial Intelligence cited ABOUT ML in its recommendations to Congress, holding it up as a model for responsible AI documentation that government agencies should emulate. And in April, PAI researchers contributed to a multistakeholder report on how to ensure claims about AI systems are verifiable.

AI, Labor, and the Economy: The AI, Labor, and the Economy Issue Area’s Shared Prosperity Initiative dares to imagine a world where innovation enhances humanity’s industriousness and creativity rather than supplanting it. Whether AI makes the poor poorer or all of us richer is a choice for us to make.
In addition, the Responsible Sourcing initiative examines labor conditions within AI development itself, focusing specifically on the professionals who clean and label training data or otherwise contribute human judgment to AI systems. To chart a course where AI’s economic benefits don’t enrich the few at the expense of the many, 23 notable thinkers from around the globe were brought together virtually last fall to identify major topics of study for this emerging discipline of Responsible AI.

AI and Media Integrity: While AI has ushered in an unprecedented era of knowledge-sharing online, it has also led to new categories of harmful digital content and extended their potential reach. PAI’s AI and Media Integrity Issue Area directly addresses these critical challenges to the quality of public discourse and information. This includes ongoing work on the detection and labeling of manipulated media as well as upcoming research identifying potential threats, testing interventions, and exploring responsible content-ranking principles. In March 2020, this work resulted in a report offering six specific recommendations drawn from the Deepfake Detection Challenge. In June, PAI published a set of 12 principles designers should follow when labeling manipulated media online, the result of an ongoing collaboration with partner organization First Draft.

Fairness, Transparency, and Accountability: The Fairness, Transparency, and Accountability Issue Area encompasses PAI’s extensive research concerning the intersections of AI with equity and social justice. This includes new and continuing initiatives examining algorithmic fairness, the criminal justice system, and diversity and inclusion as they relate to both the application of AI systems and the AI community itself.
In 2020, our Fairness, Transparency, and Accountability work resulted in an issue brief explaining why the algorithmic PATTERN tool must not dictate federal prisoner transfers during the COVID-19 pandemic, a paper offering an alternative legal framework for mitigating algorithmic bias, and a new fellowship studying barriers to diversity and inclusion in AI.

Safety-Critical AI: As our lives are increasingly saturated with artificial intelligence systems, the safety of these systems becomes a vital consideration. The Safety-Critical AI Issue Area seeks to establish norms and technical foundations that will support the safe development and deployment of AI. At NeurIPS 2020, PAI co-hosted a workshop addressing open questions concerning responsible oversight of novel AI research. And in October, PAI announced a new competitive benchmark for training non-destructive agents in an AI learning environment.