Ethics in AI

DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – “The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary-General’s High-level Panel on Digital Cooperation.”

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

“Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present,” the paper reads. “This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress.”

The paper incorporates a range of suggestions, such as analyzing data colonialism and the decolonization of data relationships, and employing the critical technical practice approach to AI development that Philip Agre proposed in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that “an indifferent field serves the powerful.” VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.

some of DeepMind’s machine learning fairness research…

also btw…

Softlaw: “law that is software coded before it is passed.” (A very direct and literal take on @lessig’s “code is law”)[1,2]

posted by kliuless (38 comments total)

45 users marked this as a favorite

It’s political correctness gone mad!

Harper’s Letter on Justice and Open Debate A motley crew of intellectuals, writers, and journalists, from Noam Chomsky to J.K. Rowling, Orlando Patterson to Margaret Atwood, Zephyr Teachout to Salman Rushdie, have signed an open letter published by Harper’s Magazine decrying the “stifling atmosphere” of contemporary public discourse, where “the free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted”. It’s being taken as a rebuke to “cancel culture”, and it’s going over like a lead balloon with its intended targets. More from In These Times.

posted by dis_integration (12 comments total)


This post was deleted for the following reason: We deleted this once already earlier; the Harper’s piece sucks, like these pieces always suck, and while I get the instinct to want to highlight and note how sucky the sucky stuff is I think we’d be better off just not giving it extra attention. — cortex