DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – “The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary General’s High-level Panel on Digital Cooperation.”
The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.
“Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present,” the paper reads. “This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress.”
The paper offers a range of suggestions, such as analyzing data colonialism and the decolonization of data relationships, and employing the critical technical practice Philip Agre proposed in 1997.
The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that “an indifferent field serves the powerful.” VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.
Some of DeepMind’s machine learning fairness research:
- A Causal Bayesian Networks Viewpoint on Fairness
- Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics
- Extending the Machine Learning Abstraction Boundary: A Complex Systems Approach to Incorporate Societal Context
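The causal Bayesian network viewpoint in the first paper treats fairness as a question about causal paths: does a sensitive attribute influence an outcome, and through which routes? As a toy illustration only (the graph, variable names, and probabilities below are invented for this sketch, not taken from the paper), one can simulate a tiny network where a sensitive attribute A affects a mediator M, which in turn affects an outcome Y, and compare the interventional distributions P(Y | do(A=1)) versus P(Y | do(A=0)):

```python
import random

random.seed(0)

def sample(intervene_a=None):
    """Draw one sample from a toy causal network A -> M -> Y.

    Passing intervene_a simulates Pearl's do-operator: A is set
    by fiat rather than drawn from its own distribution.
    """
    # Sensitive attribute A (e.g., group membership), fair coin by default
    a = intervene_a if intervene_a is not None else (random.random() < 0.5)
    # Mediator M depends on A -- if this path is illegitimate,
    # it transmits unfairness into the outcome
    m = random.random() < (0.8 if a else 0.3)
    # Outcome Y depends only on M
    y = random.random() < (0.9 if m else 0.2)
    return y

n = 100_000
# Monte Carlo estimates of the two interventional distributions
p1 = sum(sample(True) for _ in range(n)) / n   # P(Y=1 | do(A=1))
p0 = sum(sample(False) for _ in range(n)) / n  # P(Y=1 | do(A=0))
print(f"P(Y=1 | do(A=1)) ~= {p1:.3f}")
print(f"P(Y=1 | do(A=0)) ~= {p0:.3f}")
print(f"Estimated causal effect of A on Y: {p1 - p0:.3f}")
```

A nonzero gap between the two interventional probabilities (here roughly 0.76 vs. 0.41) flags a causal influence of the sensitive attribute on the outcome via the mediator, which is the kind of path-level analysis the causal viewpoint makes explicit.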
Softlaw: “law that is software coded before it is passed.” (A very direct and literal take on @lessig’s “code is law”)[1,2]
posted by kliuless (38 comments total)