“Making God” by Emily Gorcenski

The central problem with Singularity theory is that it is really only attractive to nerds. Vibing with all of humanity across the universe would mean entangling your consciousness with that of every other creep, and if you’re selling that vision and don’t see that as an issue, then it probably means that you’re the creep. Kurzweil’s The Singularity is Near is paternalistic and at times downright lecherous; paradise for me would mean being almost anywhere he’s not. The metaverse has two problems with its sales pitch: the first is that it’s useless; the second is that absolutely nobody wants Facebook to represent their version of forever.

Of course, it’s not like Meta (Facebook’s rebranded parent company) is coming right out and saying, “hey we’re building digital heaven!” Techno-utopianism is (only a little bit) more subtle. They don’t come right out and say they’re saving souls. Instead they say they’re benefitting all of humanity. Facebook wants to connect the world. Google wants to put all knowledge of humanity at your fingertips. Ignore their profit motives, they’re being altruistic!

In recent years, a bizarre philosophy has gained traction among Silicon Valley’s most fervent insiders: effective altruism. The basic gist is that giving is good (holy) and in order to give more one must first earn more. Therefore, obscene profit, even that which is obtained through fraud, is justifiable because it can lead to immense charity. Plenty of capitalists have made similar arguments through the years. Andrew Carnegie built libraries around the country out of a belief in a bizarre form of social Darwinism, that men who emerge from deep poverty will evolve the skills to drive industrialism forward. There’s a tendency for the rich to mistake their luck for skill.

But it was the canon of Singularity theory that brought this prosaic philosophy to a new state of perversion: longtermism. If humanity survives, vastly more humans will live in the future than live today or have ever lived in the past. Therefore, it is our obligation to do everything we can to ensure their future prosperity. All inequalities and offenses in the present pale in comparison to the benefit we can achieve at scale to the humans yet to exist. It is for their benefit that we must drive steadfast to the Singularity. We develop technology not for us but for them. We are the benediction of all of the rest of mankind.

Longtermism’s biggest advocates were, unsurprisingly, the most zealous evangelists of web3. They proselytized with these arguments for years and the number of their acolytes grew. And the rest of us saw the naked truth, dumbfounded watching, staring into our black mirrors, darkly.

Longtermists offered a mind-blowing riposte: who cares about racism today when you’re trying to save billions of lives in the future?

Humanity’s demise is a scarier idea than, say, labor displacement. It’s not a coincidence that AI advocates are keeping extinction risk as the preëminent “AI safety” topic in regulators’ minds. It’s something they can easily agree to avoid with no appreciable impact on the day-to-day operations of their business: we are not close to the creation of an Artificial General Intelligence (AGI), despite the breathless claims of the Singularity disciples working on the tech. This allows them to distract from and marginalize the real concerns about AI safety: mass unemployment, educational impairment, encoded social injustice, misinformation, and so forth. Singularity theorists get to have it both ways: they can keep moving towards their promised land without interference from those equipped to stop them.

Effective altruism, longtermism, techno-optimism, fascism, neoreaction, etc. are all just variations on a savior mythology. Each of them says, “there is a threat and we are the victim. But we are also the savior. And we alone can defeat the threat.” (Longtermism at least pays lip service to democracy but refuses to engage with the reality that voters will always choose the issues that affect them now.) Every savior myth also must create an event that proves that salvation has arrived. We shouldn’t be surprised that they’ve simply reinvented Revelation. Silicon Valley hasn’t produced a truly new idea in decades.

Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence so we pivoted to artificial life. We failed to make artificial life so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.

[Via]

Theistic conceptions of artificial intelligence

 

Other scholars recognise elements of theism in the discourse around AI and its potential impact on our future. Robert Geraci suggests in his 2010 book, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, that AI can fulfil the same role in apocalyptic imaginings as a singular theistic god. Bearing in mind that the biblical apocalypse is an optimistic cosmic transformation, he also draws out parallels with the aims of AI, which often describe hopeful aspirations for a world-yet-to-come, an AI eschatology. In an early part of this particular work, Geraci draws on Rudolf Otto’s 1917 description of god as mysterium tremendum et fascinans (Otto 1917), using it to identify a type of awe-inspiring and fearsome being that at different times in our history can be a god, or in our contemporary modern world, AI. Elsewhere, Geraci’s work has engaged with virtual worlds, drawing attention to the role of transhumanists, including Giulio Prisco, discussed below, in claiming new potential spaces to practice and evolve religion towards transhumanist ends. In such spaces, including Second Life and World of Warcraft (an MMORPG: a massively multiplayer online role-playing game), Geraci argues a step closer to the fulfilment of transhumanist salvation is being made: “a heavenly realm to inhabit” (Geraci 2014, 177). Twitter is another virtual space, but one dominated by discourse rather than aesthetics and virtual embodiment like Second Life and World of Warcraft. However, this article proposes that the expressions of religious metaphor, parody, and tropes on Twitter as in the BBtA tweets represent continuities of theism, continuities enabled by new technological spaces as well as uncertainties about the nature and the volition of ‘the algorithm’.

However, the ‘AI fits into the god-space’ argument can be in danger of supporting a rather strict version of the Secularisation Thesis, and this idea’s historical veracity has been debated by anthropologists and sociologists of religion (see Ward and Hoelzl 2008). This article, and connected research, seeks to add to this debate, first, by drawing attention to continuities of religiosity and enchantment in super-agential concepts of AI and AI NRMs. Second, this god-space argument can suggest that religion is spurred on by ‘need’ only, a pathology interpretation of religion that ignores other elements of religious inspiration and innovation such as desire, culture, aesthetics, and, often in the online environment, affective virality.

Theistic interpretations of AI do undeniably owe a lot to older cultural conceptions of a singular god. Randall Reed pares this kind of god down to three theological characteristics (with long historical and philosophical roots) that often map easily onto our conceptions of AI superintelligences. These are omnipotence, omniscience, and omnipresence (Reed 2018, 7). Reed also raises the question of omnibenevolence. He notes that AI philosophers such as Nick Bostrom of the Future of Humanity Institute have focussed on the issues of malevolence through “perverse instantiation”, a failure of value alignment leading to unforeseen damage from a superintelligent AI, such as in Bostrom’s famous Paperclip Maximiser thought experiment (Bostrom 2003). Bostrom’s Orthogonality Thesis from his 2012 paper ‘The Superintelligent Will’ is also relevant; the argument that intelligence is not intrinsically linked to ‘goodness’, and that an AI could have any combination of degrees of both characteristics (Bostrom 2012).

– “Blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse by Beth Singler

 

Gabbler Recommends: Ancient Historian Reads PERCY JACKSON (+ Book Haul)

I particularly think it’s weird how Medusa still has a head in Percy Jackson. – Gabbler

Keep the Gay in Yuletide

❄️🦌❄️ Trapped with your family? Here’s a way to avoid conversation. Free ebook of Vol. 1 here at our website and/or on the Internet Archive. Forever and always. ❄️🦌❄️



Both volume 1 & 2 of the Circo Del Herrero Series will be F R E E

Download Volume 1 THE AUTOMATION | Volume 2 THE PRE-PROGRAMMING