The Great White Robot God

Artificial General Intelligence and White Supremacy

David Golumbia
18 min read · Jan 21, 2019

It may seem improbable at first glance to think that there might be connections between the pursuit of artificial general intelligence (AGI) and white supremacy. Yet the more you examine the question the clearer and more disturbing the links get.

Inspired in part by conversations with Chris Gilliard, a Twitter thread by Ash, ongoing work by Dale Carrico, and some other recent research mentioned below, I decided to try to see where the threads might lead.

This is a brief think piece intended to stimulate additional reflections. It is not meant as a personal indictment of those who pursue AGI (although it is not meant to exonerate them either), but instead a structural analysis that starts from an acknowledgment of the ways that race and whiteness work in our society, and how they connect to other phenomena that may seem distant from them. In the case of AGI, there is an odd persistence of discourse that seems far in excess of what science allows, and those most captivated by that excess are often the same people captivated by excesses about race. Part of this is visible through the unusual amount of overlap between AGI promoters and those who believe in a strong correlation between what they call “race” and what they call “IQ.” I suspect it would be possible to read through a lot of the media and texts about AGI and find many marks of a commitment to white supremacy that promoters do not recognize in themselves. That may be a project worth pursuing another time. For now I just want to list a few of the ways in which troubling convergences between AGI, the race and IQ discourse, and white supremacy can be found:

  • Some of the strongest AGI promoters are people who have otherwise been accused of significant racial prejudice, and whose commitment to AGI seems unmotivated by whatever scientific work they do (Elon Musk, Sam Harris);
  • There is at least some overlap between the intellectual apparatus of race-and-IQ discourse and that of AGI;
  • The vast majority of “researchers” into AGI are white; almost none are black (in fact, even when Mia Dand put together a great list of 100 women in AI, it was striking how white the group was);
  • The communities most committed to AGI are in more than a few cases hotbeds of far-right politics that frequently blur into various forms of extremism;
  • Believers in AGI and believers in race and IQ respond in very similar ways to critiques and to requests that they carefully define the concepts they are employing, no matter how level-headed those critiques and requests are;
  • The semantic games played by AGI believers persist no matter how precisely critics point them out (demonstrating a commitment far in excess of what the evidence supports), especially over the key issue of how much “intelligence” has to do with “consciousness” or “mind”;
  • The messianic/Christological structure of AGI belief, especially when promoted by members of the Radical Atheist community, which itself has significant overlap with the alt-right;
  • The role of AGI in the “effective altruism” community has a very strong suggestion of the DARVO/right reaction character of much far-right discourse: the real problems aren’t racism and white supremacy, the real problem is the AI god that is coming, so I am helping the world by trying to stop it (even as I’m trying to conjure it with my other hand).

In the remainder of this post I’ll elaborate on some of the conceptual and empirical reasons that this line of research seems important to pursue.

Words Matter: AI vs AGI

There is already a fair amount of good discussion of the question of “bias” in artificial intelligence (hereafter AI), bias that frequently has to do with race. In my opinion the best of these so far are Kate Crawford’s 2016 New York Times article, “Artificial Intelligence’s White Guy Problem,” and Jessie Daniels’s recent “‘Colorblind’ Artificial Intelligence Just Reproduces Racism”; also see the cogent analysis of some of these concerns from late 2018 by Julia Powles and Helen Nissenbaum, “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence.”

This topic is related to the topic I am raising, and that relationship deserves separate exploration, but it is not quite the one I want to try to bring into focus. In my subtitle and throughout this essay I specifically refer to “artificial general intelligence.” The word “general” is critical. “AI” by itself refers to a wide range of technologies and approaches (Wikipedia gives a good overview). Today, the best-known of these is machine learning; others include deep learning and natural language processing. These technologies are used in many different parts of the digital infrastructure, and, speaking very generally, rely on methods that are thought to be somewhat like the way humans and animals learn, albeit as seen from a heavily behaviorist orientation (one reliant on feedback and training mechanisms) and implemented in machines. These technologies and others like them make up the bulk of industry and academic research into what we now call AI. The question of why they are referred to by the term AI, given the public understanding of what AI means, is interesting and important, but also deserves separate treatment.

AGI, on the other hand, refers to something much more nebulous, in part because nobody has realized it. In fact, nobody can even define it very precisely: this inability to define its target is part of what is so troubling about it. Yet it’s arguable that AGI is what the early proponents of AI were after: the creation of consciousness or mind in a machine. And confusingly, AGI is what we see referred to as AI in movies, television and science fiction novels. AGI is closely related to what transhumanists refer to as the “singularity” or “superintelligence.” Although some might argue that machine learning and deep learning are in some sense parts of AGI, using “AI” to refer to both kinds of things is in many ways distinctly unhelpful. Ordinary people hear about the real successes of machine learning and imagine it means we are well on our way to AGI. It might or it might not mean that, in part because we don’t really know how to define AGI, because we don’t really have any kind of thoroughgoing formal definition of mind or consciousness, which would be predicates for determining whether AGI is possible or even coherent (the predominant position among philosophers and cognitive scientists, and for that matter computing professionals, I respect most is that it is incoherent).

Words are deeply important to this topic. When most advocates talk about AGI, they are manipulating words in at least two serious ways that obscure the core arguments. One is implying that all the non-AGI parts of AI have some kind of direct, obvious connection to AGI. The other is blurring the concept of general intelligence with the concepts of mind or consciousness.

(It’s worth noting that AGI is a new name for what used to be called “Strong AI.” That used to be contrasted with “Weak AI.” Today, the familiar technologies referred to above, which resemble what was once Weak AI, tend to get called “AI” full stop, and what was “Strong AI” has become “AGI.” It’s also important to say that in the days of “Strong AI,” the project had far more academic and less way-out crank support than it does today. How these terminological shifts happened is interesting, but largely beside the point here, since it’s the phrase “general intelligence” that’s at issue, and AGI proponents could have chosen anything at all for their project. “General intelligence” as a concept predates AI technology and rhetoric altogether: it can never have been a mere accident or purely technical justification [for which there really is none] that led to its incorporation in the new name.)

“General Intelligence”: From Artificial to Inherited

This second terminological slippage is not merely self-promotional on the part of AGI advocates. It has a history. It is part of our culture. The idea that consciousness just is the same thing as intelligence is precisely one of the pillars on which contemporary race science has been built, since the earliest incarnations of “intelligence testing.” Further, today, the idea that there is a discrete, identifiable, usefully precise human quality called “intelligence” — and not just “intelligence,” but what those invested in it call, precisely, “general intelligence” — is one of the central pillars of contemporary race science. General intelligence is what IQ tests are supposed to measure. It is no accident that right-wing ideologues like Charles Murray and Sam Harris obsess so much about IQ and insist that it is related to the unscientific concept they call “race.” In their fact-resistant worldview, a narrow and largely anti-science notion of genetics determines differences in IQ, and determines membership in “races,” and those differences directly shape how society gets structured, and are signally important not just for “success,” but for something like the right to be accepted as a full member of the human community.

General intelligence or the “g-factor” is a concept developed by the British psychologist Charles Spearman in the early 20th century. It has been hugely controversial since Spearman developed it. One of the most famous followers of Spearman was Arthur Jensen, another proponent of the view that IQ is highly correlated with race. This whole line of thinking is one of the main targets of Stephen Jay Gould’s 1981 book The Mismeasure of Man, which the right would like us to believe has been discredited. A careful reading of the subsequent scholarship (see the Wikipedia page on the book for many links) suggests quite the opposite. Here is a good point for an off-ramp for some readers: if you are one of those who thinks Gould’s arguments are bad and race is genetically real and is a strong predictor of IQ and that IQ is of extreme importance for being a successful or even good human being, this essay is not really meant for you. If you aren’t — if you take Gould’s claims and others like them seriously — please keep reading.

The racist right needs to find this mechanical, unchangeable, built-in quality called IQ in order to justify its deepest commitment: that white people (or some other racially superior group, since these racists often have particular views about the supposedly superior IQs of groups like “Jews” and “Asians”) are inherently more deserving of power and of rights than anyone else.

You don’t need to wade deeply into the “race and IQ” debates to see this. Most of us who aren’t tempted by these far-right ideas won’t have any trouble seeing this. Of course the racist right will resist it wholeheartedly, in a way most of the rest of us will see as a characteristic embodiment of white supremacy. And this is how I am using the term white supremacy here, following the typical definition of the term: that is how those of us who study fascism understand and have long understood race science. As Wikipedia puts it, white supremacy “has roots in scientific racism, and it often relies on pseudoscientific arguments”; it is a “political ideology that perpetuates and maintains the social, political, historical, or institutional domination by white people.” Scientific racism or race science insists on a hierarchy of what it understands to be genetically-determined “races,” in which whites always figure very high; as some people who believe in these theories will be quick to tell you, groups like “Asians” and “Jews” are sometimes ranked higher than “whites.” Never mind that to the vast majority of geneticists, social scientists, and humanists who have examined these questions, races simply don’t exist at the genetic level in the way racists insist, and that genes don’t work that way to begin with.

There is another relevant terminological slippage that haunts all of these discussions: the nature of the “general intelligence” that is supposed to be measured by IQ tests and that equates with consciousness in AGI discourse. IQ and the “g-factor” measure a variety of factors — indeed, since they can’t really directly measure the means by which human beings come to conclusions, they focus much more on whether you can get the right answers to test questions than whether you are reasoning correctly to get to those answers. Whatever “IQ” measures, it is usually well understood even by ideologues to be only a part of consciousness (although as I’ve suggested, at least some ideologues appear to believe that it’s the only important part of consciousness): indeed, one almost never reads allegations that people of lower IQ are “less conscious” and those of higher IQ “more conscious” than those in the median range. All of this takes for granted that IQ is really an important, measurable quantity, one that might be unchangeable in the individual and in some sense independent of education and environment; many critical scholars from across disciplines reject this set of views altogether, even if they grant that IQ exists, something that even as right-adjacent but statistically fluent a figure as Nassim Nicholas Taleb seriously doubts.

These are doubts you will not hear expressed, or for the most part even tolerated, by the most ardent proponents of AGI.

Bodies, and the Racial Politics of Theories of Mind

One way the AGI problem connects to the race & IQ controversy has exactly to do with the nature of mind. To many who support AGI and believe in IQ as a strongly determinative force in society, mind and brain are nearly identical. The brain, this story goes, is a computer (a view that in philosophy is called the computational theory of mind, or computationalism, which is the subject of my 2009 book, The Cultural Logic of Computation), or at least very similar to a computer; what happens in the mind is what happens in the brain, and what happens in the brain is largely either “information processing” or “pattern recognition” or both (note that I am not specifying the meaning of these terms with anything like the formal precision that would be necessary to interrogate these claims; neither, I am suggesting, do AGI proponents).

This view, taken at the most abstract, is a serious position that is endorsed by a segment of researchers in cognitive science, philosophy of mind, and allied fields (when we get down to specifics, it is rarer to find these researchers endorsing the theory of mind found in AGI discourse). For the sake of argument, let’s call that one theory of mind. It might be right, but so might others.

Another theory of mind, one I happen to find far more cogent in both philosophical and empirical terms, goes by the umbrella term embodied cognition. Speaking very broadly, this view suggests that there are all kinds of other things going on in the body in addition to the brain that are inevitably entailed when we use words like “think” and “feel” and “know,” and that it is a mistake to discount the rest of these fundamentally embodied aspects of cognition in any thorough account of what it is to think. Everything that we understand to be conscious has a physical, living, indeed animal body; whether it is possible for something to be conscious without a body is an open question that we may never be able to answer until and unless we encounter something that we can all agree is conscious but does not have an animal body. This is why intuitively it is not incorrect to ask whether beings with very different bodies from ours — most familiarly, dolphins, but of course all sorts of science fiction examples apply here as well — would be conscious in the same way that we are. Some philosophers even wonder, reasonably, whether other forms of animal consciousness are sufficiently the same as human consciousness for us to infer, despite our intuition that of course they are conscious, that “consciousness” means the same thing for a human and a bat.

If we believe that our bodies are intimately and deeply tied to the very idea of consciousness, we may turn out to want to reflect deeply on not just the nature but also the cultures in which our bodies are formed. Indeed, one common observation among writers on racism is that being a minority, and in particular being black in America, is to be made constantly aware of one’s body as intimately part of one’s self in a way that the majority perpetuates without necessarily understanding: this is why someone like Robin DiAngelo can write that when she gives audience members one minute to answer the question “How has your life been shaped by race?,” “most white participants are unable to answer. … this inability is not benign, and it certainly is not innocent. Suggesting that whiteness has no meaning creates an alienating — even hostile — climate for people of color working and living in predominantly white environments.” Black people cannot escape being reminded of their embodiment; white people can indulge the fantasy that they are all mind.

Again, this is not to suggest necessarily that all forms of computationalism are inherently racist (though as I argue in my book, I think they align with the right far more than with the left, and the number of scholars of color who buy into computationalism is remarkably small), but it does suggest that views that the body is unimportant to the mind are views that might be especially attractive to people who believe their bodies make no difference in how they function in the world, despite being very willing to judge that other peoples’ bodies disqualify them from participating in the same way.

AGI and the Alt-Right

It is not merely that the structure of AGI belief overlaps with racist beliefs about IQ. There is also a clear sociological overlap between belief in AGI and the various groups who are loosely gathered under the term “alt-right.” In “The Darkness at the End of the Tunnel: Artificial Intelligence and Neoreaction,” Shuja Haider goes into this in some detail, as do some of the other writers mentioned below. Here I try to explore some of the reasons that overlap exists.

Look at this major panel discussion that mostly features people who think that a superintelligent (that is, conscious) AI is in the offing: the “Superintelligence: Fact or Fiction?” panel at the Beneficial AI 2017 conference sponsored by the transhumanist Future of Life Institute, and featuring white guys Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn with moderator Max Tegmark. At least two of the participants (Musk and Harris) have hard-right politics and have been repeatedly accused of racism (Musk, Harris). In Harris’s case, at least, that goes further into direct engagement with IQ-based race science. The panel really does feature some of the world’s leading voices of those who believe that a superintelligent god is going to emerge soon from our computers; to say the panel “looks white” would be a massive understatement. And virtually nobody on the panel ever expresses anything very far away from the computationalist view, with the possible exception of Chalmers, who is a well-known philosopher of mind. All of them start from the basic assumption that mind is intelligence is general intelligence — even Hassabis, one of the world’s leading commercial AI/deep learning executives & CEO of DeepMind, which was acquired by Google/Alphabet in 2014, thus to some extent giving the lie to the wisdom prevailing in certain quarters that serious AI researchers are not pursuing the Great Robot God. There is virtually no discussion of alternative accounts of consciousness on the panel, which is typical in inside-the-beltway reflections on AI, despite their surface appearance of being scientific (in actual science, competing theories that fit the available facts are considered until one or more is proven false).

Another of the major figures promoting the superintelligent AGI god is Eliezer Yudkowsky, with whom those of us who read about the alt-right may well be familiar. Yudkowsky probably isn’t a member of the white supremacist alt-right, but he constantly butts right up against it. He sees his main role as being an AI researcher, by which he means an AGI researcher, though “Yudkowsky is almost entirely unpublished outside of his own foundation and blogs and never finished high school, much less did any actual AI research. No samples of his AI coding have been made public” (RationalWiki). His notoriety is due particularly to his promotion (in a backhanded way, since he promoted it by trying to prohibit people from talking about it) of a famous “thought problem” (in scare quotes because it is not a problem one reads many actual AI researchers or philosophers taking seriously) called “Roko’s Basilisk,” and to his founding of the Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI). To those of us outside of it, MIRI does not look like a scientific or charitable organization: it looks like a hothouse for the development of crackpot pseudo-religious theories, one staffed almost exclusively by white men.

Yudkowsky is another one of those obsessed with “rationality” (by which he means exclusively what he considers “logic”) as the only constitutive aspect of mind. This view is at the root of not just AGI, and not just race science, but the entire edifice known as the “alt-right.” While Yudkowsky is able to skirt direct membership in that group, it is no accident that Elizabeth Sandifer’s magnificent 2018 volume whose subtitle is On and Around the Alt Right has the main title Neoreaction a Basilisk, combining the fictional thought problem associated with Yudkowsky with one of the most poisonous parts of the alt-right, Neoreaction. As Sandifer notes, Mencius Moldbug, aka the computer programmer Curtis Yarvin, partly got his start as a commentator on Overcoming Bias, the blog where Yudkowsky first whetted his uber-rationalist appetites, before Yudkowsky went on to form the better-known rationalist site LessWrong. As Sandifer writes, Yudkowsky “is not on the alt-right but has a variety of interesting links to the topic” (3); “in some ways the most basic similarity between him and Moldbug” is that

they are both animated by an entirely sympathetic anger that people with power are making obvious and elementary errors. But what’s really important is how this sheds light on what exactly Yudkowsky is fleeing from, and in turn on why the Basilisk is the monster lurking at the heart of his intellectual labyrinth. Yudkowsky isn’t just running from error; he’s running from the idea of authority. The real horror of the Basilisk is that the AI at the end of the universe is just another third grade teacher who doesn’t care if you understand the material, just if you apply the rote method being taught. (Sandifer 2018, 60)

This push-me pull-you attitude toward authority is itself a hallmark of right-wing authoritarianism; a great deal of the writing surrounding Roko’s Basilisk and the Great Robot God is redolent of Wilhelm Reich’s account of “the individual who is adjusted to the authoritarian order and who will submit to it in spite of all misery and degradation” that he puts at the root of fascism in his classic 1934 Mass Psychology of Fascism. So too is the mysticism of the Roko’s Basilisk story.

Yudkowsky and Moldbug, along with the overtly fascist writer Nick Land, form a kind of unholy trinity for the place where the alt-right intersects with Silicon Valley culture; that’s why they are the focus of Sandifer’s book, and of many other pieces on the alt-right, itself in many ways a product of the “computer revolution.” “The Silicon Ideology,” a paper quietly posted in May 2016 to archive.org by the pseudonymous “Josephine Armistead,” makes this connection as well:

LessWrong served as a convenient “incubation centre” so to speak for neo-reactionary ideas to develop and spread for many years, and the goals of LessWrong: a friendly super-intelligent AI ruling humanity for its own good, was fundamentally compatible with existing neo-reactionary ideology, which had already begun developing a futurist orientation in its infancy due, in part, to its historical and cultural influences.

AGI and White Supremacy in a Broader Frame

To someone writing from my position, it is absolutely true that nearly everything in our society is connected to white supremacy. At this level it is trivially true that AI in general is connected to white supremacy. Many who don’t share my background assumptions will balk right here. It’s tempting to write that “AGI is white supremacy,” but this suggests a clear identity that is too broad. Many parts of digital culture are closely tied to right-wing politics, and so are many other parts of culture. Even given this general truism, specific segments of that culture evidence right-wing politics in specific ways. The culture of bitcoin is permeated with far-right economic “theories” that don’t show up in such direct form in, say, GamerGate.

In AGI, we see a particular overvaluation of “general intelligence” as not merely the mark of human being, but of human value: everything that is worth anything in being human is captured by “rationality” or “logic,” and soon enough, a quasi-religious revelation will occur that will make that undeniably — transcendentally — true. In other words, God will appear and tell us that white people have been right all along: the thing that they claim they have more of than anyone else will turn out to be the thing that matters more than anything else, the thing according to which we should ultimately be evaluated, the thing that will save our souls.

Thanks to David Gerard, Tom James, some anonymous friends, and a few others for minor corrections made to earlier versions of this piece.
