
Political Influence Through Social Media is Growing, But Slowly

Guest post by Jacob N. Shapiro, Diego Martin, and Julia Ilhardt

Last week, FBI Director Christopher Wray testified that Russia is using social media, state-run media outlets, and fake online journals to damage former Vice President Biden’s chances in the upcoming presidential election. The week before, the US Treasury sanctioned Russian and Ukrainian nationals for interfering in American politics. And earlier this month we learned that the Internet Research Agency was trying to recruit writers for fake news sites intended to influence American politics.

None of this should be surprising. Back in August, the Director of the National Counterintelligence and Security Center announced that “Russia is using a range of measures to primarily denigrate former Vice President Biden and what it sees as an anti-Russia ‘establishment’”. Two days before that, the State Department’s Global Engagement Center detailed Russia’s online disinformation infrastructure. On July 20, Democratic legislators warned of a potential campaign targeting Congress; on July 14, the social media analytics firm Graphika reported on Russia’s “Secondary Infektion” campaign, which targeted multiple countries over the previous six years; and early in the COVID-19 crisis, EU officials found evidence that Russia was using disinformation to worsen the impact of the pandemic.

As disturbing as the current situation is, the United States is far from alone in being a target of state-sponsored disinformation. During the 2019 pro-democracy protests in Hong Kong, fake accounts linked to the Chinese government tried to muddle online discourse. And during the Libyan National Army’s effort to capture the capital city of Tripoli in 2019, Twitter was flooded with pro-insurgency hashtags originating in Gulf countries and Egypt.

So how widespread is this problem? Which states are employing these techniques, and who are they targeting?

To find out, we spent the last two years collecting data on 76 state-backed foreign influence efforts from 2011 through 2019, as well as 20 domestic operations—i.e., cases in which governments used social media manipulation against their own populations. Unlike traditional propaganda, these campaigns include the creation of content designed to appear as though it is produced by normal users in the target states. We released the first report on our data in July 2019, documenting 53 foreign influence efforts, and just released an updated report. Over the past year, we’ve identified a number of significant trends in the use and organization of influence efforts. Here are the key takeaways.

Influence Efforts for Hire

There are dozens of marketing groups specializing in online content, and in recent years some have begun executing political influence efforts.

Take Archimedes Group, a marketing firm based in Israel. The firm’s specialty is “winning campaigns worldwide,” and when Facebook removed a network of accounts linked to the company in May 2019, the political targets ranged across Africa, Latin America, and Southeast Asia. In contrast to the St. Petersburg-based Internet Research Agency, whose only documented customer is the government of Russia, Archimedes Group produces content to support a wide range of political goals while obscuring the involvement of state actors.

A similar political marketing firm is Smaat, a Saudi Arabian company operating out of downtown Riyadh. In addition to marketing for clients like Coca Cola and Toyota, Smaat also works for the Saudi Arabian government. In a now-removed Twitter network, Smaat’s fake social media accounts interspersed commercial content with pro-Saudi political messaging.

Not only does the use of these firms make it hard to identify the actors behind social media manipulation, but it also allows states to engage in political interference without having to develop digital infrastructure.

Further obfuscating government involvement in disinformation campaigns is the trend towards hiring local content creators. Networks linked to Russian oligarch Yevgeny Prigozhin—who was indicted during the Mueller investigation—paid people in Madagascar and Mozambique to manage election-related Facebook pages. Such tactics make it challenging to distinguish foreign interference from genuine local discourse.

Common Content for Local Audiences

In 2019, residents of 13 countries in and around Central Asia who followed Facebook pages for Latvian travel or the President of Tajikistan may have unknowingly consumed content from government-linked Russian media outlets. By distributing stories from sources like Sputnik and TOK on pages that omitted or obscured the link to Russia, this campaign spread narratives sympathetic to the Kremlin’s foreign policy initiatives.

The Russian network targeting Central Asia was part of a wider move towards efforts that push common content to specific locations. In the first version of our report, only two out of 53 foreign influence efforts targeted multiple countries at once; nine out of the 23 additional campaigns in our 2020 report did so. Like Russia, Saudi Arabia and the United Arab Emirates have mounted multi-year efforts to promote sweeping, nationalistic content adapted to resemble domestic discourse in multiple countries. And growing evidence suggests that since 2017 China has been running a broad social media campaign targeting the Chinese diaspora in multiple countries.

Cases involving widespread distribution of common content are, in some sense, an updated form of propaganda. Disinformation or biased stories adopt an air of local authenticity, and the attacking country need not invest as much effort in creating content as it would for a series of country-specific campaigns.

State-Backed Disinformation on the Domestic Front

Even in democracies, governments sometimes employ media consultancies to justify their policies and damage opposition politicians. In Mexico, for example, branches of the government have paid fake media outlets to amplify stories in favor of the Institutional Revolutionary Party (PRI). During the presidency of Enrique Peña Nieto from 2012-2018, pro-PRI Twitter accounts were so common that they came to be dubbed “Peñabots.”

While foreign influence efforts have been dominated by six countries, particularly Russia and Iran, we found 20 domestic influence efforts spread across 18 countries going back to 2011. In almost all cases, domestic interference has sought to suppress political dissent. Some countries were overt about this goal: Vietnam created a cyber military unit called Task Force 47, operating under the Vietnam People’s Army, that sought to discredit opposition narratives. Others worked covertly: government officials in Malta directed social media trolling and hate speech via secret Facebook groups.

Our inclusion criteria required influence campaigns to be directly connected to a government or ruling party. Although political parties in some democracies engage in social media manipulation, these parties are not necessarily representative of the state. For instance, in India, both Prime Minister Narendra Modi’s Bharatiya Janata Party and the Indian National Congress have long made use of influence operations. Similarly, disinformation originating with firms like Cambridge Analytica does not constitute an influence operation in our study unless explicitly linked with governments.

What’s Next?

Online influence efforts are becoming an increasingly widespread tool for both domestic politics and foreign interference. The commercialization of these campaigns could make them easier to access and, in some cases, harder to identify. But the problem of state-backed influence efforts is not yet pervasive.

In fact, we found two positive trends in our report. First, only Russia initiated new influence efforts in 2019; second, it initiated only three, compared to eight in 2018. Given the widespread capacity for executing influence efforts and the number of countries that would like to shape US politics, this is promising: it suggests something is holding countries back from using fake online activity to interfere in their rivals’ politics.

But the global norm against such interference needs to be reinforced. At the moment, there is little international collaboration on monitoring social media platforms and no multilateral push to create strong prohibitions on cross-border influence campaigns. And with the US presidential election less than two months away, the threat of foreign interference is being brought squarely to the fore.

Jacob N. Shapiro is professor of politics and international affairs at Princeton University, Diego Martin is a PhD Student in economics at Purdue University, and Julia Ilhardt is a senior in the School of Public and International Affairs at Princeton University.

Google Introduces 6-Month Career Certificates, Threatening to Disrupt Higher Education with “the Equivalent of a Four-Year Degree”

I used to make a point of asking every college-applying teenager I encountered why they wanted to go to college in the first place. Few had a ready answer; most, after a deer-in-the-headlights moment, said they wanted to be able to get a job — and in a tone implying it was too obvious to require articulation. But if one’s goal is simply employment, doesn’t it seem a bit excessive to move across the state, country, or world, spend four years taking tests and writing papers on a grab-bag of subjects, and spend (or borrow) a large and ever-inflating amount of money to do so? This, in any case, is one idea behind Google’s Career Certificates, all of which can be completed from home in about six months.

Any such remote educational process looks more viable than ever at the moment due to the ongoing coronavirus pandemic, a condition that also has today’s college-applying teenagers wondering whether they’ll ever see a campus at all. Nor is the broader economic harm lost on Google, whose Senior Vice President for Global Affairs Kent Walker frames their Career Certificates as part of a “digital jobs program to help America’s economic recovery.” He writes that “people need good jobs, and the broader economy needs their energy and skills to support our future growth.” At the same time, “college degrees are out of reach for many Americans, and you shouldn’t need a college diploma to have economic security.”

Hence Google’s new Career Certificates in “the high-paying, high-growth career fields of Data Analytics, Project Management, and User Experience (UX) Design,” which join its existing IT Support and IT Automation in Python Certificates. Hosted on the online education platform Coursera, these programs (which run about $300–$400) are developed in-house, taught by Google employees, and require no previous experience. To help cover their cost, Google will also fund 100,000 “need-based scholarships” and offer students “hundreds of apprenticeship opportunities” at the company “to provide real on-the-job training.” None of this guarantees any given student a job at Google, of course, but as Walker emphasizes, “we will consider our new career certificates as the equivalent of a four-year degree.”

Biggest tech & higher ed story of the last 2 weeks — #Google entering higher ed, offering BA-equivalent degrees@YahooFinance pic.twitter.com/bsGgwHsnRn

— Scott Galloway (@profgalloway) August 30, 2020

Technology-and-education pundit Scott Galloway calls that bachelor’s-degree equivalence the biggest story in his field in recent weeks. It is perhaps the beginning of a trend in which tech companies disrupt higher education, creating affordable and scalable educational programs that train the workforce for 21st-century jobs. This could conceivably mean that universities lose their monopoly on the training and vetting of students, or at least find that they increasingly share that responsibility with big tech.

This past spring Galloway gave an interview to New York magazine predicting that “ultimately, universities are going to partner with companies to help them expand.” He adds: “I think that partnership will look something like MIT and Google partnering. Microsoft and Berkeley. Big-tech companies are about to enter education and health care in a big way, not because they want to but because they have to.” Whether such university partnerships will emerge as falling enrollments strain certain segments of the university system remains to be seen, but so far Google seems confident about going it alone. And where Google goes, as we’ve all seen before, other institutions often follow.

Note: You can hear Galloway elaborate on how Google may lead to the unbundling of higher ed in the episode “State of Play: The Sharing Economy” of his Prof G podcast.

Related Content:

Free Online Computer Science Courses

Free Online Engineering Courses

Google Launches a Free Course on Artificial Intelligence: Sign Up for Its New “Machine Learning Crash Course”

Google Launches Free Course on Deep Learning: The Science of Teaching Computers How to Teach Themselves

Malcolm Gladwell Asks Hard Questions about Money & Meritocracy in American Higher Education: Stream 3 Episodes of His New Podcast

Nietzsche Lays Out His Philosophy of Education and a Still-Timely Critique of the Modern University (1872)

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.

Ethics in AI

DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – “The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary General’s High-level Panel on Digital Cooperation.”

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

“Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present,” the paper reads. “This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress.”

The paper incorporates a range of suggestions, such as analyzing data colonialism and decolonization of data relationships and employing the critical technical approach to AI development Philip Agre proposed in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that “an indifferent field serves the powerful.” VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.

some of DeepMind’s machine learning fairness research…

also btw…

Softlaw: “law that is software coded before it is passed.” (A very direct and literal take on @lessig’s “code is law”)
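
To make that idea concrete, here is a minimal, purely hypothetical Python sketch of what “softlaw” might look like in practice: a statutory benefit rule drafted directly as executable code, so the legislation and the program that administers it are the same artifact. Every name, threshold, and amount below is invented for illustration.

```python
# Purely hypothetical sketch of the "softlaw" idea: a statute written as
# executable code before it is passed. All figures are invented.

from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float

def benefit_amount(a: Applicant) -> float:
    """Hypothetical statute: persons 65 and over receive $1,200 per year if
    income is under $20,000, tapering linearly to $0 at $30,000."""
    if a.age < 65 or a.annual_income >= 30_000:
        return 0.0
    if a.annual_income < 20_000:
        return 1200.0
    return 1200.0 * (30_000 - a.annual_income) / 10_000  # linear taper

print(benefit_amount(Applicant(age=70, annual_income=25_000)))  # -> 600.0
```

One appeal of drafting rules this way is that ambiguity surfaces immediately: the taper between the two thresholds must be specified exactly, where ordinary statutory language might leave it to regulators or courts.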

posted by kliuless

This Is What The Matrix Looks Like Without CGI: A Special Effects Breakdown

in Film, Sci Fi, Technology | May 29th, 2020


Those of us who saw The Matrix in the theater felt we were witness to the beginning of a new era of cinematically and philosophically ambitious action movies. Whether that era delivered on its promise — and indeed, whether The Matrix’s own sequels delivered on the franchise’s promise — remains a matter of debate. More than twenty years later, the film’s black-leather-and-sunglasses aesthetic may date it, but its visual effects somehow don’t. The Fame Focus video above takes a close look at two examples of how the creators of The Matrix combined traditional, “practical” techniques with then-state-of-the-art digital technology in a way that kept the results from going as stale as “state-of-the-art digital technology” usually guarantees in the movies.

By now we’ve all seen how “bullet time” was achieved: the effect astonished The Matrix’s early audiences by seeming nearly to freeze time for dramatic camera movements (and by making visible the eponymous projectiles, of which the film included a great many). The filmmakers lined up a long row of still cameras along a predetermined path, then had each camera fire in sequence, one by one, within the span of a split second.
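
The timing arithmetic behind that rig is easy to sketch. Below is a minimal Python illustration of the staggered-trigger idea; the camera count, sweep duration, and frame rate are assumptions chosen for illustration, not the production’s actual figures.

```python
# Minimal, illustrative sketch of "bullet time" trigger timing: many still
# cameras along a path fire one after another within a fraction of a second,
# so the assembled frames sweep the viewpoint while the scene barely moves.
# All numbers below are assumptions, not the film's actual rig parameters.

def trigger_times(num_cameras: int, sweep_duration_s: float) -> list[float]:
    """Stagger each camera's shutter evenly across the sweep window."""
    if num_cameras < 2:
        return [0.0]
    step = sweep_duration_s / (num_cameras - 1)
    return [i * step for i in range(num_cameras)]

times = trigger_times(120, 0.1)  # assumed: 120 cameras, 0.1-second sweep
print(f"gap between adjacent cameras: {times[1] - times[0]:.6f} s")
# Played back at 24 fps, 120 frames fill 5 seconds of screen time,
# stretching 0.1 s of real time by a factor of 50.
print(f"apparent slow-down: {(120 / 24) / 0.1:.0f}x")
```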

But as we see in the video, getting convincing results out of such a groundbreaking process — which required smoothing out the unsteady “footage” captured by the individual cameras and perfectly aligning it with a computer-generated background modeled on a real-life setting, among other tasks — must have been even more difficult than inventing the process itself. The manual labor that went into The Matrix series’ high-tech veneer comes across even more in the behind-the-scenes video below:

In the third installment, 2003’s The Matrix Revolutions, Keanu Reeves’ Neo and Hugo Weaving’s Agent Smith duke it out in the pouring rain as what seem like hundreds of clones of Smith look on. Viewers today may assume Weaving was filmed and then copy-pasted over and over again, but in fact these shots involve no digital effects to speak of. The team actually built 150 realistic dummies of Weaving as Smith, all operated by 80 human extras themselves wearing intricately detailed silicone-rubber Smith masks. The logistics of such a one-off endeavor sound painfully complex, but the physicality of the sequence speaks for itself. With the next Matrix film, the first since Revolutions, due out next year, fans must be hoping the ideas of the Platonically techno-dystopian story the Wachowskis started telling in 1999 will be properly continued, and in a way that makes full use of recent advances in digital effects. But those of us who appreciate the enduring power of traditional effects should hope the film’s makers are also getting their hands dirty.

Related Content:

The Philosophy of The Matrix: From Plato and Descartes, to Eastern Philosophy

The Matrix: What Went Into The Mix

Philip K. Dick Theorizes The Matrix in 1977, Declares That We Live in “A Computer-Programmed Reality”

Daniel Dennett and Cornel West Decode the Philosophy of The Matrix

Why 1999 Was the Year of Dystopian Office Movies: What The Matrix, Fight Club, American Beauty, Office Space & Being John Malkovich Shared in Common

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.