Ethics in AI

DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – “The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary General’s High-level Panel on Digital Cooperation.”

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

“Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present,” the paper reads. “This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress.”

The paper incorporates a range of suggestions, such as analyzing data colonialism and the decolonization of data relationships, and employing the critical technical practice Philip Agre proposed in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that “an indifferent field serves the powerful.” VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.

some of DeepMind’s machine learning fairness research…

also btw…

Softlaw: “law that is software coded before it is passed.” (A very direct and literal take on @lessig’s “code is law”)[1,2]

posted by kliuless (38 comments total)


Karl Marx in the AI Age

If a free society cannot help the many who are poor, it cannot save the few who are rich.—John F. Kennedy

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.—Stephen Hawking

We are on the verge of one of the most momentous developments in human history. Artificial intelligence algorithms and robotics are already revolutionizing almost every sphere of human endeavor. But will robots and algorithms replace all our jobs?

So far, Karl Marx’s prophecy has been proven wrong. The communist revolution has not occurred, the dictatorship of the proletariat has not been established and capitalism has not collapsed in advanced economies. But the advent of novel technologies may prove Marx right in the twenty-first century.

Marx’s Prophecy

Marx’s prophecy was based on several premises. The first was the formation of a permanently unemployed class, a “reserve army of labor,” due to the introduction of labor-saving technology by capitalists, who are compelled by competition to decrease their production costs by either cutting wages or implementing novel technologies. The presence of an industrial reserve army exerts downward pressure on the wages of the employed, thereby lowering workers’ quality of life. As a result of technological progress, the size of the permanently unemployed class then increases, as do misery and exploitation, according to Marx.

The second assumption of Marxist analysis was a rise in market concentration, as weaker firms are replaced by stronger ones.

For Marx, monopolization and the rise in the number of the “technologically unemployed” result in rapid increases in inequality and in the immiseration of people, who then start a revolution and overthrow the government. (This is a rough overview of Marx’s prophecy: I won’t be looking at the flaws of the theory itself here.)

The validity of Marx’s forecast thus rests on the impact of technology on the job market. This, in turn, is contingent on fundamental aspects of human nature.

We can’t predict the exact timing, speed and scope of upcoming changes; however, because these outcomes depend on humans, we can outline potential end results: either new technologies will completely replace humans or some jobs will be left for us to perform. In the latter case, appropriate measures could prevent significant social upheaval and help us transition to a new kind of society. In the former scenario, humans will gradually lose their value and the advent of super-intelligent algorithms will mark a turning point in the history of evolution.

For thousands of years, questions about the nature of reality, while of philosophical importance, were of little to no use in the real world. But now the future of our species is dependent on the answers.

The Nature of Reality

If the universe is monist, computers can be conscious and capable of completely replacing humans on the job market. If it is dualist, some jobs will be left to humans because we are of a different nature from artificial machines, notes Byron Reese in The Fourth Age.

Monism (also known as materialism or physicalism) posits that there is no difference between matter and mind, physical and mental: there is only one ultimate substance. Monists believe that we are just groups of atoms interacting in a variety of ways. We do not possess free will. In other words, humans are machines: there is no difference between inanimate and animate substances, for we are all composed of atoms. Monists deny the existence of souls. Francis Crick, co-discoverer of the structure of DNA, sums up the monist viewpoint in his remark that, “you, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.”

By contrast, dualism holds that something differentiates humans from inanimate objects, humans from animals, physical from mental stuff. We are living objects, as opposed to dead machines: we are not just a bunch of atoms. There is something as yet incomprehensible and indecipherable in us, something we might call soul or consciousness.

One of the most famous proponents of dualism is philosopher Frank Cameron Jackson, who devised the Mary’s room thought experiment. Mary is a brilliant scientist who knows everything conceivable about color, including the nature of a photon, how it triggers the brain’s perception of the world, which wavelengths stimulate particular reactions in the retina, etc. However, she spends her whole life in a black and white room, learning about color from a grey screen. If she comes out of the room and sees the real world, will she learn something new? If so, experiencing something is different from knowing about it and there is something in this world that science cannot (yet) explain—and this something proves that the world is dualist.

The Monist Outcome

The monist perspective is shared by Stephen Hawking, who believed that, “there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence—and exceed it.”

If the universe is monist, sooner or later robots and algorithms will completely replace humans on the job market. For, if humans are machines without soul or consciousness, as technology advances algorithms will gradually come to surpass us and perform all the tasks currently performed by humans: from working in factories and serving food to creating works of arts and writing books.

As Yuval Harari has written in 21 Lessons for the 21st Century, “in the long run, no job will remain absolutely safe from automation”, including those which involve emotions, such as art, because “emotions are not some mystical phenomenon—they are the result of a biochemical process”.

According to the monist line of reasoning, technology has already surpassed our physical abilities and replaced many simple mechanical jobs. Now it is beginning to encroach on the sphere of cognitive capabilities: algorithms, for example, can already detect cancer better than humans. Hence, as the pace of technological change accelerates, more and more people will become unemployed. If we fail to act, the impossible communist revolution may actually come true.

In the absence of government intervention, inequality will undoubtedly rise dramatically, and the standard of living of the vast majority of the population will drop. There will also be fewer industrial competitors, thanks to AI industry’s tendency towards monopolization (leading companies obtain more data from their users, with which they can improve their algorithms, which makes their products better, attracting even more users, etc.). This is in line with Marx’s prediction that the introduction of new technologies leads to the formation of a permanently unemployed class and higher inequality.

Further, the human population is expected to begin to decline within the next several decades; in the very long run—in, say, several centuries—the human species may become extinct, paving the way for a world order ruled by algorithms.

The Dualist Outcome

This is the most pessimistic scenario. But, if we assume that the universe is dualist, then algorithms will take most of the jobs, but the ones requiring complex physical capabilities or social skills will remain for us.

In this case, there will probably be a substantial number of unemployed people, an industrial reserve army, but only some jobs will be automated, not all, as in the monist paradigm.

Machines do not possess empathy, creativity or critical thinking, dualists claim. Dualists may also recognize that, even though science debunks the existence of the soul, free will, etc., which puts humans on the same level as machines, there are limits to what science or the human mind can comprehend and study using conventional scientific methods.

Friedrich Nietzsche argued that, “what is not intelligible … is not necessarily unintelligent,” for there might be “a realm of wisdom from which the logician is exiled.” We humans fail to grasp the world in its entirety and complexity, which is why we resort to the use of theoretical constructs, the most important of which are language, mathematics, logic, reason and science. These are all specific methods for understanding reality.

But could there be something that theoretical constructs cannot capture, something that differentiates humans from other objects in the universe? After all, we are, first and foremost, a reflective species; the fact that we contemplate these fundamental questions of our existence may itself be evidence that we are not machines. As one old joke puts it, if we are just groups of atoms, then scientists studying atoms are just heaps of atoms studying other atoms. But perhaps humans are not just groups of atoms: perhaps there is something that makes us distinctive—whether it be creativity, soul or consciousness—that science cannot yet grasp, and which a “bunch of atoms”, due to its simplicity, can never comprehend. But if the design of our brains were simple enough for us to understand it, our brains would be too simple to be able to decipher our inner workings, to paraphrase E. M. Pugh. Thus, we can never be sure that we understand ourselves completely.

For, as Nassim Taleb says, “absence of evidence is not evidence of absence”: there are limits to what we can understand, and, if we explain life in terms of biochemical processes, this does not necessarily imply that our being is determined purely by biochemistry. If we do not know something, this does not necessarily imply that that something (soul or consciousness) is nonexistent.

Will Free Markets Save Us?

So, dualism doesn’t offer as gloomy a picture of the future as monism does. According to the dualist view, absent external forces, we will face unemployment in the technological sectors, but not such massive unemployment as to leave 99.99% of us devoid of resources.

In the dualist scenario, there are two possibilities: the forces of the free markets will counter the negative effects of automation and only minimal government intervention will be needed (for example, to reeducate some workers or offer temporary social benefits as they transition to new jobs); or substantial government intervention (some sort of universal basic income) will become necessary.

There are reasons to think that free markets could smooth out the negative effects of the technological revolution with minimal government intervention.

Automation is not a zero-sum game. Even if we lose many jobs, there is no limit to the number of new jobs we can devise, thanks to our versatility and creativity. Throughout history, new technologies have been introduced and were expected to result in considerable unemployment, but, in the long run, employment rates have remained fairly stable. We have never run out of jobs because people always come up with new things to do. As John F. Kennedy put it, “if men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work.”

A study by the World Economic Forum predicts that, by 2022, AI will create around 133 million jobs and eliminate about 75 million, resulting in a net gain of 58 million (the study does not address long-term consequences).

It is far from certain that we will face unemployment on a massive scale, for, by lowering costs for consumers, new technologies will empower citizens to spend more money on other goods and services, thereby driving up overall demand and encouraging producers to expand production and hence create more jobs.

The effects of the new technologies will be more evenly spread and everyone will have the opportunity to move to the top of the social ladder relatively easily.

We will not face the dreaded combination of a shortage of high-skilled workers and a large number of low-skilled unemployed, argues Byron Reese. If, say, the demand for AI engineers rises, but cashiers are replaced by robots, we will not have to retrain cashiers as AI engineers. Instead, college professors could become AI engineers; PhD candidates could fill the professors’ jobs; the work of the PhD students could be performed by high school teachers; a middle or elementary school teacher could take the job of a high school teacher; and so on. Finally, our cashier, with some effort, could teach programming at elementary school, leaving the robot to perform the monotonous, boring and inhumane job of cashier. Thanks to technology, then, everybody could move up the social order and see a rise in income. This is the invisible hand in action.

What is more, due to Moravec’s paradox, new technologies will not solely impact blue-collar workers.

Moravec’s paradox holds that, for evolutionary reasons, the pace of the robotics revolution will be slower than that of the AI revolution: it is hard for us to give robots abilities, such as sensorimotor skills, that evolution has honed over millions of years, while the abilities we developed comparatively recently, such as abstract and mathematical reasoning, are much easier to recreate. That is why algorithms are already beating chess champions and detecting cancer better than humans, while robots still cannot play football at the human level. As Kai-Fu Lee puts it in AI Superpowers, “AI is great at thinking, but robots are bad at moving their fingers.” As Hans Moravec writes: “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”

Blue-collar jobs require physical work, making it hard for robots to replace them, whereas many low-skill white-collar workers (clerks, office support workers, stockbrokers, etc.) are already being displaced by algorithms. Therefore, at least until the robotics revolution gains momentum, we should not expect the irrelevant class (if it forms) to consist solely of low-wage workers.

McKinsey’s study confirms Moravec’s paradox. According to McKinsey, the demand for “office support” jobs (financial and IT workers, administrative assistants) will drop by 20% by 2030, whereas the demand for “unpredictable physical work” (machinery installation, repair and agricultural field work) is expected to grow by 6%.

Since the robotics revolution will be slower than the AI revolution, the working class will not be hit as severely as one might expect. The key challenge will not be job scarcity but the need to retrain and reeducate people.

It is, however, debatable whether the free market will be able to counteract the negative effects of automation on its own. Sooner or later, technology will come to surpass us in most skills and replace most jobs. The free market will probably help us in the short run, but, in the long run, fewer and fewer jobs will be left and government action will be needed to offset the negative impact.

Let me roughly summarize two potential future scenarios:

  1. Monism: all jobs will be automated. Either (a) algorithms will create a new post-human world order or (b) humans will use AI and biotechnology to upgrade themselves and survive in the era of algorithmic domination.
  2. Dualism: the vast majority of jobs will be automated, but some will be left for us. Either (a) significant government action will be needed or (b) government intervention will be limited or even unnecessary, thanks to the free markets.

Free Market Capitalism and Its Enemies

Neo-Marxists may invoke the advent of novel technology and the resulting massive technological unemployment as justification for concentrating the means of production in the hands of the state. For example, Yanis Varoufakis suggests that we must “tear away at the old notion of privately owned means of production and force a metamorphosis, which must involve the social ownership of machinery, land and resources.”

But, even if we face massive unemployment, there is no reason why negative income tax for those living below the poverty threshold or universal basic income could not solve that problem. New technologies will be so productive that even modestly progressive taxes would help finance programs for those negatively affected by them. Nationalization of the means of production is not essential to solving the problems posed by technology.

Marx’s forecast will not come true within the twenty-first century. Karl Marx believed in the “impotence of politics” and the inability of governments to change the system. But, in the nineteenth and twentieth centuries, the governments of developed countries adopted the best parts of the socialist program, thereby making communists irrelevant (German chancellor Otto von Bismarck, for example, introduced the first state-sponsored social insurance programs in the West).

The revolution is not inevitable, so long as we address the system’s flaws. Even though Marx’s forecast will come the closest to fulfillment in our day, due to the AI industry’s tendency towards monopolization and the resulting technological unemployment, if we take the right steps, Marx’s “inexorable law of historical destiny” will not materialize, just as it did not in the past.

Meanwhile, authoritarians believe that implementing AI in economic planning would eliminate the need for free markets. The forces of supply and demand are great at determining the intrinsic value of a product, they say, but by analyzing data about the economy, and people’s demands and preferences, algorithms can be used to mimic the price mechanism. Friedrich Hayek argued that, because knowledge about the world is dispersed among many participants, central planners, who cannot acquire such knowledge, will make wrong decisions about the pricing and distribution of goods and services. But central planners could become capable of obtaining and using such information thanks to AI. As Alibaba founder Jack Ma puts it, “Big Data will make the market smarter and make it possible to plan and predict market forces so as to allow us to finally achieve a planned economy.”

The problem, however, is that, while AI-powered centralized planning and totalitarian control of citizens may be good at determining consumers’ desires and thereby the true prices of products, it will still lack the innovation that characterizes the capitalist system, as this article points out. For how could consumers inform AI-powered planners of their desire for a product that does not yet exist? As Steve Jobs famously said, “consumers don’t know what they want until we’ve shown them.”

It is impossible to break with past paradigms and create something radically new on the basis of old knowledge: progress occurs thanks to the breaking of old habits, by people who are bold enough to go against conventional wisdom. If AI-powered central planners base their economic decisions on past data, the resulting economy will not be innovative: innovation comes mostly (though not always) from independent inventors—people like Thomas Edison, James Watt, the Wright brothers, Alexander Bell, Bill Gates, Steve Jobs, Elon Musk and Mark Zuckerberg—rather than from government-sponsored projects.

Centralized planning would also eliminate the advantages offered by competition between many market players. State control of the economy usually results in stagnation and lack of innovation. Freedom drives creativity and unconventional thinking and encourages great minds to pursue the unknown and expand the boundaries of our understanding. Free markets are not perfect, and should, in some cases, be supplemented by limited government intervention—but, even in the age of AI, they will remain the best way to continue our technological advancement. We should not fall prey to the claims of authoritarian apologists that algorithms will make free markets obsolete.

In Search of Meaning

Many jobs are so dehumanizing and dangerous that we should be willing to hand them over to robots and algorithms for humane reasons. As Bill Gates puts it, “technology is unlocking the innate compassion we have for our fellow human beings.” Technology will help us discover who we are and enable us to engage in activities that are genuinely suited to humans, thereby facilitating our discovery of our true nature. We will finally stop obsessing over the constant optimization and satisfaction of our material needs after AI takes over most tasks and empowers us to pursue what makes us truly human.

For Karl Marx, the ultimate state of society would be defined by the principle, “From each according to his ability, to each according to his need.” Novel technologies could provide all our personal needs, while letting us engage in the endeavors we choose.

Marx believed that communism would be possible at a certain stage of society’s development, when new technologies would be productive enough to satisfy everyone’s needs. We should not deny the possibility that AI and robotics could provide such technologies. However, as paradoxical as it may seem, Marx was in a certain sense a libertarian: he loathed the state and imagined a stateless society as our ultimate destination, with enough resources finally to allow humans to gain what Marx saw as true freedom—freedom from material necessity and from state patronage and oppression, as well as the ability to engage freely in non-material pursuits.

Preparing for the Future

We should not oppose limited and reasonable state intervention, if AI and robotics begin to pose fundamental challenges to our existing societal system. To take a dogmatic laissez-faire attitude would be both delusional and dangerous. As Friedrich Hayek writes in The Road to Serfdom, “the liberal argument is in favor of making the best possible use of the forces of competition as a means of coordinating human efforts, not as an argument for leaving things just as they are.”

In liberal democracies, political institutions exist to ensure that the weak and minorities are protected from the tyranny of the majority. Unlimited freedom defeats its own purpose, since a lack of restrictions on how one exercises one’s own liberties often results in the violation of other people’s rights.

Therefore, in economics, as in politics, absolutely unrestricted and unfettered laissez-faire policies are undesirable. Just as the state puts bounds on the exercise of social and political freedom to ensure that freedom does not become self-defeating, in order to ensure fairness and defend justice, the state should regulate the economy and place certain limits on people’s economic freedoms that preserve other people’s rights and freedoms, for “if a free society cannot help the many who are poor, it cannot save the few who are rich.” Economic interventionism hence does not restrict, but safeguards people’s freedom.

The era of AI will bring about the most all-encompassing development in history, and we should not succumb to the perennial appeal of the ideas of the enemies of the open society. The AI era will place considerable strain on our civilization, but we should not buckle under the pressure; we should always remember that, as Daniel Deudney and G. John Ikenberry put it, “the remedy for the problems of liberal democracy is more liberal democracy; liberalism contains the seeds of its own salvation.” Recognition of one’s fallibility, willingness to engage in rational discourse with others in an attempt to get closer to the truth, prioritizing truth over opinion, solving social ills via a bipartisan consensus that acknowledges the interests of all groups—these are the tools that will safeguard our society.

We have created this world: we must not outsource our responsibility to the laws of historical destiny. Even if such laws exist, they are the product of our own thinking: if we want to prove Marx’s gloomy forecast wrong once again, we must be ready to take the necessary steps.

History does not move in a particular direction, but people do, and we can change its course ourselves. As Karl Popper puts it, “Although history has no ends, we can impose these ends of ours upon it; and although history has no meaning, we can give it a meaning.”


From Viruses to Algorithms, We Are Always Under Threat

Joseph Nechvatal, “rear windOw curiOsites” (2012) 2x2m computer-robotic assisted painting (image courtesy the author)

Today your perception of viruses likely has to do with the way you consider the coronavirus pandemic as a threatening invasion of ruthlessly efficient viral code into your body (its host). It may be an indelicate question at this point, but what can you, and culture writ large, learn from the exponential unleashing of viral codes, as they circulate and duplicate beneath the surface of your cultural and physical world?

To arrive at something of an answer, you must probe into obscurity — for the virus needs to shroud. Visually undetectable, its algorithmic exponential pulse is, however, felt, lurking in the shadows, stalking you. In this sense, viruses parallel the ubiquitous surveillance you associate with networked electronic information, and the flickering of its translucent forms. Indeed, the principles of algorithmic viruses — semi-autonomous, machine-vampiric pieces of digital code — are an essential trait of techno-cultural logic. Like these digital algorithmic viruses, actual deadly viruses can transform narratives precipitously — hence their beguiling, almost magical, powers.

Idealistically, contemporary art is associated with such transformative originality, but, as you have undoubtedly heard, on January 9th something genuinely new emerged that is now transforming contemporary art and rendering it nearly lifeless (for now). That day the World Health Organization announced the discovery of the novel coronavirus called SARS-CoV-2, responsible for the infectious respiratory disease COVID-19. Certainly its unprecedented viral uncertainties have already decimated many exhibition plans (including my own, which was to open this week), while rattling aesthetic assumptions about form and identity.

In his 1991 essay “Viruses of the Mind,” Richard Dawkins established his theory of religion as a meme — a contagious idea — that responds to two characteristic environmental conditions in order to exist and multiply. The first is the ability of a system to copy information accurately and, in case of errors, to copy errors accurately. The second is the system’s unconditional readiness to execute all instructions codified in the copied information within a host.

The chaotic invisibility of viruses is often supposed to be innately irrational, like religious beliefs, but it is not. Viruses — neither alive nor dead — function with a zombie algorithmic perfection, occasionally with deadly results. The infection and death numbers in France rise as I write (500 dead today). The Mulhouse-Colmar region has been especially heavily hit. A February megachurch meeting of around 2,000 charismatic La Porte Ouverte Chrétienne evangelical Christians in Mulhouse, with many infected attendees, had the effect of a coronavirus bomb, first on Alsace, and then on all of France, as the participants spread throughout the country.

Matthias Grünewald, Isenheim Altarpiece, (ca. 1515) Unterlinden Museum, Colmar (detail of the third panel) (all images of the altarpiece courtesy Web Gallery of Art via Wikimedia)

That Alsace region is where Matthias Grünewald’s masterpiece oil-on-wood polyptych painting “Isenheim Altarpiece” (circa 1516) is located, at the Musée d’Unterlinden in Colmar. The painting is pertinent to the coronavirus era because it teems with high-pitched, electrifying emotion that blends ecstasy with agony. A transcendent, visionary impulse comingles here with dark, macabre fatality, grounded in plague time, since the painting contains references to the Saint Anthony’s fire epidemic (aka ergotism). Grünewald created it for the St. Anthony monks (who treated the epidemic with plant-extracted, tranquilizing balms), to be placed in their hospital monastery in Isenheim; it was intended for those praying to Saint Anthony the Hermit to help them avoid contracting the disease, which is caused by excessive intake of ergot fungi. Besides the suspended, crucified, pockmarked body of Christ in the first panel, which is visually reminiscent of the AIDS viral epidemic — on the surface of the body advanced AIDS can manifest as herpes simplex, herpes zoster (shingles), skin rashes, warts and ringworm — there is a fallen victim of the ergot epidemic horrifically depicted in the third panel.

Matthias Grünewald, Isenheim Altarpiece, (ca. 1515) Unterlinden Museum, Colmar (detail within the third panel)

Like the ergot epidemic that shaped 16th-century French culture, algorithmic viruses — self-replicating computer programs that spread by inserting copies of themselves into other executable code — shape ours. They shape our global, networked media ecology, which includes digital art that uses coded technology as part of its creative or presentational process. An online viral code’s explosive destructiveness may be programmed and activated from anywhere. Would you be shocked to learn that the Paris hospital authority, AP-HP, was the target of a cyber-attack on March 22, according to France’s cybersecurity agency ANSSI?

Such a virtual virus attack strikes deep like a biological virus attack — which spreads by inserting itself into living cells — because virtualization is the algorithmic double that accompanies everything cultural. This may account for why your feelings tilt towards anxiety over art appreciation (and market prices) being algorithmically influenced. Does not top-down culture give you the sneaking suspicion that you have been taken control of from within, with Circean willfulness, by the ignobility of ratcheted-up algorithms? Perhaps your mind has become the penetrated materiality of viral memes asserting themselves? Do you not bristle at your identity, taste, cultural habits and relationships being used as hosts in opaque, algorithmic processes of evaluation?

Indeed, digital viruses are the culmination of postmodernism, as they are, by definition, merger machines based on parasitism and acculturation. So it is not only their symbolic or metaphoric power that places them firmly in a wider perspective of cultural importance; it is their formal structure. As Jean Baudrillard said in Cool Memories, a virus is an ultra-modern form of communication that does not distinguish between information and its carrier.

You may wonder whether actual viruses are living organisms or not — since a virus has to hijack another organism’s biological machinery to replicate, which it does by inserting its genetic material into a host cell. You may call the virtual viral force artificial intelligence, or machine learning, or neural networks that run on if/then programmed scripts. But the psychic machinery of both kinds of virus, like terrorism, is your unseen enemy, churning invisibly, absolutely, always potentially present.

Anonymous “Memento Mori” (19th century) print, collection of the École des Beaux-Arts de Paris (photo by Thierry Ollivier, Beaux-Arts de Paris)

Curiously, the if/then program of the coronavirus cuts across social categories and is the great equalizer of the day. For SARS-CoV-2, you are, at long last, just the right color, the right shape, the right sex, and have the right intelligence and personality (though it apparently particularly ravages the aged). SARS-CoV-2 procures its actuality from your encircling environment of human hosts, to which it is receptively coupled. Like a digital virus, it is both medium and message. As such, the viral algorithm, now the central cultural trope of our world, may also be read as a meditation on your eventual, humiliating death — inclusive of its cruel and nasty comedy. You say you have projects and plans?

Locked down at home, hiding, you are under ever-increasing pressure to conform, to survey, and to be surveyed. Probably you are not against this temporary necessity of surveillance and conformity, but these are the perfect conditions in which totalitarianism flourishes. It is ruinous for the creation of daring new art, and it shrinks the places that exhibit nonconformist acts of imaginative spontaneity. You may pour your aesthetic energies into your stay-at-home work, but algorithmic cultural calculus is an obstacle you must overcome to realize your aesthetic freedom. Pathetically, algorithm-driven popular culture that uses optimization-driven, actor-critic neural networks for deep learning emotion analysis (built with frameworks such as Apache MXNet, the deep learning framework backed by Amazon) puts your cultural choices to work even in your imposed quarantined space of leisure. Probably you have little access to art with which to inoculate yourself and think unpredictably. You dwell in a viral copy culture of increasing homogenization as Google tracks and guides your tastes.

But, ideally, contemporary art stands against the copy format, which is the classical art thing. In the face of contemporary art’s challenging stimulus, you enter into yourself and re-emerge with expanded capacities you never knew were there, resisting algorithms that accurately predict what you would like to see and buy, or to whom you are sexually attracted. Actor-critic neural network prompts make your desires feel paltry and your fate predestined — not in a supernatural way, but in an inflexible, machine-like, sad, and sinister way.

So epidemic-contextualized art, like Grünewald’s, may provide you the chance to do the counter-fearful thing: to look lovingly at what you dread so that you will be released from the airy irrationality of this dread and permit your unconscious life to do as it pleases. To do so, you must fabricate a beautiful if complicated forensic fairy tale out of your recent artistic and social interactions. I have been doing just that with Alonso Cedillo, the post-internet artist in Mexico City, exchanging emails about what we both see as the new viral epoch (in multiple senses). His stained 3D-printed sculpture “Oil Venus” (2018) first suggested a re-contextualized Neoclassical period to me, but its degraded romanticism also provoked added admiration for my replica of the ancient “Vénus de Lespugue” figurine (circa 27,000 BCE). This kind of time-distance slippage enlarged and deepened our viral conversation considerably as our minds connected over a great distance. The viral sound work of James Hoff is relevant here also, and connects with my Viral Venture (2009) animation and viral symphOny (2008) sound work.

Alonso Cedillo, “Oil Venus” (2018) 3D print, 15 x 12 x 19 cm (image courtesy the artist)

You may now assume that both types of viruses — the virtual and the actual — point you towards distasteful death, that inconceivable, incurable, and deeply ridiculous affliction. But consider this: Although actual viruses were originally discovered and characterized on the basis of the disease and death they caused, many viruses are helpful to life in that they rapidly transfer genetic information from one bacterium to another, helping their hosts survive in hostile environments. It once was taken as scripture that you are living during the Anthropocene, with the human race heading towards extinction. But with you staying home, the rate of human pollution is down.

So by now, cultural considerations of viral algorithmic code should remind you of memento mori — the Latin phrase meaning “remember that you must die” — and of how such considerations can stimulate (your) life juices. You may, in light of memento mori, appreciate all of life more, given its inevitable doom. The question is: how do you feed your pleasure in art in the face of the mounting coronavirus mortality rate? This is something beyond the powers of artistic narration.