“Making God” by Emily Gorcenski

The central problem with Singularity theory is that it is really only attractive to nerds. Vibing with all of humanity across the universe would mean entangling your consciousness with that of every other creep, and if you’re selling that vision and don’t see that as an issue, then it probably means that you’re the creep. Kurzweil’s The Singularity is Near is paternalistic and at times downright lecherous; paradise for me would mean being almost anywhere he’s not. The metaverse has two problems with its sales pitch: the first is that it’s useless; the second is that absolutely nobody wants Facebook to represent their version of forever.

Of course, it’s not like Meta (Facebook’s rebranded parent company) is coming right out and saying, “hey, we’re building digital heaven!” Techno-utopianism is (only a little bit) more subtle. They don’t come right out and say they’re saving souls. Instead they say they’re benefiting all of humanity. Facebook wants to connect the world. Google wants to put all knowledge of humanity at your fingertips. Ignore their profit motives, they’re being altruistic!

In recent years, a bizarre philosophy has gained traction among Silicon Valley’s most fervent insiders: effective altruism. The basic gist is that giving is good (holy) and in order to give more one must first earn more. Therefore, obscene profit, even that which is obtained through fraud, is justifiable because it can lead to immense charity. Plenty of capitalists have made similar arguments through the years. Andrew Carnegie built libraries around the country out of a belief in a bizarre form of social Darwinism: that men who emerge from deep poverty will evolve the skills to drive industrialism forward. There’s a tendency for the rich to mistake their luck for skill.

But it was the canon of Singularity theory that brought this prosaic philosophy to a new state of perversion: longtermism. If humanity survives, vastly more humans will live in the future than live today or have ever lived in the past. Therefore, it is our obligation to do everything we can to ensure their future prosperity. All inequalities and offenses in the present pale in comparison to the benefit we can achieve at scale for the humans yet to exist. It is for their benefit that we must drive steadfastly toward the Singularity. We develop technology not for us but for them. We are the benediction of all of the rest of mankind.

Longtermism’s biggest advocates were, unsurprisingly, the most zealous evangelists of web3. They proselytized with these arguments for years and the numbers of their acolytes grew. And the rest of us saw the naked truth, dumbfounded watching, staring into our black mirrors, darkly.

Longtermists offered a mind-blowing riposte: who cares about racism today when you’re trying to save billions of lives in the future?

Humanity’s demise is a scarier idea than, say, labor displacement. It’s not a coincidence that AI advocates are keeping extinction risk as the preëminent “AI safety” topic in regulators’ minds. It’s something they can easily agree to avoid with negligible impact on the day-to-day operations of their business: we are not close to the creation of an Artificial General Intelligence (AGI), despite the breathless claims of the Singularity disciples working on the tech. This allows them to distract from and marginalize the real concerns about AI safety: mass unemployment, educational impairment, encoded social injustice, misinformation, and so forth. Singularity theorists get to have it both ways: they can keep moving towards their promised land without interference from those equipped to stop them.

Effective altruism, longtermism, techno-optimism, fascism, neoreaction, etc. are all just variations on a savior mythology. Each of them says, “there is a threat and we are the victim. But we are also the savior. And we alone can defeat the threat.” (Longtermism at least pays lip service to democracy but refuses to engage with the reality that voters will always choose the issues that affect them now.) Every savior myth must also create an event that proves salvation has arrived. We shouldn’t be surprised that they’ve simply reinvented Revelation. Silicon Valley hasn’t produced a truly new idea in decades.

Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence so we pivoted to artificial life. We failed to make artificial life so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not a shred of common sense in this analysis.

[Via]

“Confessions of a Viral AI Writer”

BUT WHAT IF I, the writer, don’t matter? I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn’t like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether. I can also imagine a world—and maybe we’re already in it—in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

When I told Sims about this mother, he mentioned Roland Barthes’ influential essay “The Death of the Author.” In it, Barthes lays out an argument for favoring readers’ interpretations of a piece of writing over whatever meaning the author might have intended. Sims proposed a sort of supercharged version of Barthes’ argument in which a reader, able to produce not only a text’s meaning but the text itself, takes on an even more powerful cultural role.

Sims thought AI would let any literature lover generate the narrative they want—specifying the plot, the characters, even the writing style—instead of hoping someone else will.

Sims’ prediction made sense to me on an intellectual level, but I wondered how many people would actually want to cocreate their own literature. Then, a week later, I opened WhatsApp and saw a message from my dad, who grows mangoes in his yard in the coastal Florida town of Merritt Island. It was a picture he’d taken of his computer screen, with these words:

Sweet golden mango,
Merritt Island’s delight,
Juice drips, pure delight.

Next to this was ChatGPT’s logo and, underneath, a note: “My Haiku poem!”

The poem belonged to my dad in two senses: He had brought it into existence and was in possession of it. I stared at it for a while, trying to assess whether it was a good haiku—whether the doubling of the word “delight” was ungainly or subversive. I couldn’t decide. But then, my opinion didn’t matter. The literary relationship was a closed loop between my dad and himself.

In the days after the Sudowrite pile-on, those who had been helping to test its novel generator—hobbyists, fan fiction writers, and a handful of published genre authors—huddled on the Sudowrite Slack, feeling attacked. The outrage by published authors struck them as classist and exclusionary, maybe even ableist. Elizabeth Ann West, an author on Sudowrite’s payroll at the time who also makes a living writing Pride and Prejudice spinoffs, wrote, “Well I am PROUD to be a criminal against the arts if it means now everyone, of all abilities, can write the book they’ve always dreamed of writing.”

It reminded me of something Sims had told me. “Storytelling is really important,” he’d said. “This is an opportunity for us all to become storytellers.” The words had stuck with me. They suggested a democratization of creative freedom. There was something genuinely exciting about that prospect. But this line of reasoning obscured something fundamental about AI’s creation.

As much as technologists might be driven by an intellectual and creative curiosity similar to that of writers—and I don’t doubt this of Sims and others—the difference between them and us is that their work is expensive. The existence of language-generating AI depends on huge amounts of computational power and special hardware that only the world’s wealthiest people and institutions can afford. Whatever the creative goals of technologists, their research depends on that funding.

The language of empowerment, in that context, starts to sound familiar. It’s not unlike Facebook’s mission to “give people the power to build community and bring the world closer together,” or Google’s vision of making the world’s information “universally accessible and useful.” If AI constitutes a dramatic technical leap—and I believe it does—then, judging from history, it will also constitute a dramatic leap in corporate capture of human existence. Big Tech has already transmuted some of the most ancient pillars of human relationships—friendship, community, influence—for its own profit. Now it’s coming after language itself.

A thought experiment occurred to me at some point, a way to disentangle AI’s creative potential from its commercial potential: What if a band of diverse, anti-capitalist writers and developers got together and created their own language model, trained only on words provided with the explicit consent of the authors for the sole purpose of using the model as a creative tool?

That is, what if you could build an AI model that elegantly sidestepped all the ethical problems that seem inherent to AI: the lack of consent in training, the reinforcement of bias, the poorly paid gig workforce supporting it, the cheapening of artists’ labor? I imagined how rich and beautiful a model like this could be. I fantasized about the emergence of new forms of communal creative expression through human interaction with this model.

[Via]

Clickbait: Terry Gross made fun of someone who cannot hear

In response to: Flawed chatbot or threat to society? Both? We explore the risks and benefits of AI

Theistic conceptions of artificial intelligence


Other scholars recognise elements of theism in the discourse around AI and its potential impact on our future. Robert Geraci suggests in his 2010 book, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, that AI can fulfil the same role in apocalyptic imaginings as a singular theistic god. Bearing in mind that the biblical apocalypse is an optimistic cosmic transformation, he also draws out parallels with the aims of AI, which often describe hopeful aspirations for a world-yet-to-come, an AI eschatology. Early in this work, Geraci draws on Rudolf Otto’s 1917 description of god as mysterium tremendum et fascinans (Otto 1917), using it to identify a type of awe-inspiring and fearsome being that at different times in our history can be a god, or, in our contemporary world, AI. Elsewhere, Geraci’s work has engaged with virtual worlds, drawing attention to the role of transhumanists, including Giulio Prisco, discussed below, in claiming new potential spaces to practice and evolve religion towards transhumanist ends. In such spaces, including Second Life and World of Warcraft (a massively multiplayer online role-playing game, or MMORPG), Geraci argues that a step closer to the fulfilment of transhumanist salvation is being made: “a heavenly realm to inhabit” (Geraci 2014, 177). Twitter is another virtual space, but one dominated by discourse rather than by the aesthetics and virtual embodiment of Second Life and World of Warcraft. However, this article proposes that the expressions of religious metaphor, parody, and tropes on Twitter, as in the BBtA tweets, represent continuities of theism, continuities enabled by new technological spaces as well as by uncertainties about the nature and the volition of ‘the algorithm’.

However, the ‘AI fits into the god-space’ argument risks supporting a rather strict version of the Secularisation Thesis, an idea whose historical veracity has been debated by anthropologists and sociologists of religion (see Ward and Hoelzl 2008). This article, and connected research, seeks to add to this debate, first, by drawing attention to continuities of religiosity and enchantment in super-agential concepts of AI and AI NRMs. Second, the god-space argument can suggest that religion is spurred on by ‘need’ alone, a pathology interpretation of religion that ignores other elements of religious inspiration and innovation such as desire, culture, aesthetics, and, often in the online environment, affective virality.

Theistic interpretations of AI undeniably owe a lot to older cultural conceptions of a singular god. Randall Reed pares this kind of god down to three theological characteristics (with long historical and philosophical roots) that often map easily onto our conceptions of AI superintelligences: omnipotence, omniscience, and omnipresence (Reed 2018, 7). Reed also raises the question of omnibenevolence. He notes that AI philosophers such as Nick Bostrom of the Future of Humanity Institute have focussed on the issue of malevolence through “perverse instantiation”, a failure of value alignment leading to unforeseen damage from a superintelligent AI, as in Bostrom’s famous Paperclip Maximiser thought experiment (Bostrom 2003). Bostrom’s Orthogonality Thesis, from his 2012 paper ‘The Superintelligent Will’, is also relevant: the argument that intelligence is not intrinsically linked to ‘goodness’, and that an AI could have any combination of degrees of the two characteristics (Bostrom 2012).

– “Blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse by Beth Singler


GABBLER RECOMMENDS: How Wikipedia Got Ex Machina (2014) Wrong