GABBLER RECOMMENDS: Taylor Lorenz | Why People Are Roleplaying Robot Racism

Gabbler Recommends: Everyone’s Cheating At Chess (Allegedly) by Sarah Z

GABBLER RECOMMENDS: ‘AI Signals The Death Of The Author’ | Noēma

If the author as the principal figure of literary authority and accountability came into existence at a particular time and place, there could conceivably also be a point at which it ceased to fulfill this role. That is what Barthes signaled in his now-famous essay. The “death of the author” does not mean the end of the life of any particular individual or even the end of human writing, but the termination and closure of the author as the authorizing agent of what is said in and by writing. Though Barthes never experienced an LLM, his essay nevertheless accurately anticipated our current situation. LLMs produce written content without a living voice to animate and authorize their words. Text produced by LLMs is literally unauthorized — a point emphasized by the U.S. Court of Appeals, which recently upheld a decision denying authorship to AI.

Criticism of tools like ChatGPT tends to follow on from this. They have been described as “stochastic parrots” for the way they simply mimic human speech or repeat word patterns without understanding meaning. The ways in which they more generally disrupt the standard understanding of authorship, authority and the means and meaning of writing have clearly disturbed a great many people. But the story of how “the author” came into being shows us that the critics miss a key point: The authority for writing has always been a socially constructed artifice. The author is not a natural phenomenon. It was an idea that we invented to help us make sense of writing.

After the “death of the author,” therefore, everything gets turned around. Specifically, the meaning of a piece of writing is not something that can be guaranteed a priori by the authentic character or voice of the person who is said to have written it. Instead, meaning transpires in and from the experience of reading. It is through that process that readers discover (or better, “fabricate”) what they assume the author had wanted to say.

This flipping of the script on literary theory alters the location of meaning-making in ways that overturn our standard operating presumptions. Previously, it had lain with the author who, it was assumed, had “something to say”; now, it is with the reader. When we read “Hamlet,” we are not able to access Shakespeare’s true intentions for writing it, so we find meaning by interpreting it (and then we project our interpretations back onto Shakespeare). In the process of our doing so, the authority that had been vested in the author is not just questioned, but overthrown. “Text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation,” wrote Barthes, “but there is one place where this multiplicity is focused and that place is the reader. … A text’s unity lies not in its origin but in its destination.” The death of the author, in other words, is the birth of the critical reader.

All this throws up something that has been missed in the frenzy over the technological significance of LLMs: They are philosophically significant. What we now have are things that write without speaking, a proliferation of texts that do not have, nor are beholden to, the authoritative voice of an author, and statements whose truth cannot be anchored in and assured by a prior intention to say something.

From one perspective — a perspective that remains bound to the usual ways of thinking — this can only be seen as a threat and crisis, for it challenges our very understanding of what writing is, the state of literature and the meaning of truth or the means of speaking the truth. But from another, it is an opportunity to think beyond the limitations of Western metaphysics and its hegemony.


…Instead of being (mis)understood as signs of the apocalypse or the end of writing, LLMs reveal the terminal limits of the author function, participate in a deconstruction of its organizing principles, and open the opportunity to think and write differently.

…

The LLM form of artificial intelligence is disturbing and disruptive, but not because it is a deviation or exception to that condition; instead, it exposes how it was always a fiction.

[Via]

GABBLER RECOMMENDS: Broey Deschanel’s ‘Severance, Mickey 17, and the “Digital Double”‘

The doubles mentioned here are statements on capitalism and are argued to be philosophical acceptances of AI (or of The Other in general?). But may I suggest a version of the double that wasn’t forced onto someone by capitalism, and is instead a statement on transhumanism?

“Making God” by Emily Gorcenski

The central problem with Singularity theory is that it is really only attractive to nerds. Vibing with all of humanity across the universe would mean entangling your consciousness with that of every other creep, and if you’re selling that vision and don’t see that as an issue, then it probably means that you’re the creep. Kurzweil’s The Singularity is Near is paternalistic and at times downright lecherous; paradise for me would mean being almost anywhere he’s not. The metaverse has two problems with its sales pitch: the first is that it’s useless; the second is that absolutely nobody wants Facebook to represent their version of forever.

Of course, it’s not like Meta (Facebook’s rebranded parent company) is coming right out and saying, “hey we’re building digital heaven!” Techno-utopianism is (only a little bit) more subtle. They don’t come right out and say they’re saving souls. Instead they say they’re benefitting all of humanity. Facebook wants to connect the world. Google wants to put all knowledge of humanity at your fingertips. Ignore their profit motives, they’re being altruistic!

In recent years, a bizarre philosophy has gained traction among Silicon Valley’s most fervent insiders: effective altruism. The basic gist is that giving is good (holy), and in order to give more one must first earn more. Therefore, obscene profit, even that which is obtained through fraud, is justifiable because it can lead to immense charity. Plenty of capitalists have made similar arguments through the years. Andrew Carnegie built libraries around the country out of a belief in a bizarre form of social Darwinism: that men who emerge from deep poverty will evolve the skills to drive industrialism forward. There’s a tendency for the rich to mistake their luck for skill.

But it was the canon of Singularity theory that brought this prosaic philosophy to a new state of perversion: longtermism. If humanity survives, vastly more humans will live in the future than live today or have ever lived in the past. Therefore, it is our obligation to do everything we can to ensure their future prosperity. All inequalities and offenses in the present pale in comparison to the benefit we can achieve at scale to the humans yet to exist. It is for their benefit that we must drive steadfast to the Singularity. We develop technology not for us but for them. We are the benediction of all of the rest of mankind.

Longtermism’s biggest advocates were, unsurprisingly, the most zealous evangelists of web3. They proselytized with these arguments for years and the numbers of their acolytes grew. And the rest of us saw the naked truth, dumbfounded watching, staring into our black mirrors, darkly.

Longtermists offered a mind-blowing riposte: who cares about racism today when you’re trying to save billions of lives in the future?

Humanity’s demise is a scarier idea than, say, labor displacement. It’s not a coincidence that AI advocates are keeping extinction risk as the preëminent “AI safety” topic in regulators’ minds. It’s something they can easily agree to avoid with negligible impact on the day-to-day operations of their business: we are not close to the creation of an Artificial General Intelligence (AGI), despite the breathless claims of the Singularity disciples working on the tech. This allows them to distract from and marginalize the real concerns about AI safety: mass unemployment, educational impairment, encoded social injustice, misinformation, and so forth. Singularity theorists get to have it both ways: they can keep moving toward their promised land without interference from those equipped to stop them.

Effective altruism, longtermism, techno-optimism, fascism, neoreactionaryism, etc are all just variations on a savior mythology. Each of them says, “there is a threat and we are the victim. But we are also the savior. And we alone can defeat the threat.” (Longtermism at least pays lip service to democracy but refuses to engage with the reality that voters will always choose the issues that affect them now.) Every savior myth also must create an event that proves that salvation has arrived. We shouldn’t be surprised that they’ve simply reinvented Revelations. Silicon Valley hasn’t produced a truly new idea in decades.

Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.

[Via]