GABBLER RECOMMENDS: ‘AI Signals The Death Of The Author’ | Noēma

If the author as the principal figure of literary authority and accountability came into existence at a particular time and place, there could conceivably also be a point at which it ceased to fulfill this role. That is what Barthes signaled in his now-famous essay. The “death of the author” does not mean the end of the life of any particular individual or even the end of human writing, but the termination and closure of the author as the authorizing agent of what is said in and by writing. Though Barthes never experienced an LLM, his essay nevertheless accurately anticipated our current situation. LLMs produce written content without a living voice to animate and authorize their words. Text produced by LLMs is literally unauthorized — a point emphasized by the U.S. Court of Appeals, which recently upheld a decision denying authorship to AI.

Criticism of tools like ChatGPT tends to follow on from this. They have been described as “stochastic parrots” for the way they simply mimic human speech or repeat word patterns without understanding meaning. The ways in which they more generally disrupt the standard understanding of authorship, authority and the means and meaning of writing have clearly disturbed a great many people. But the story of how “the author” came into being shows us that the critics miss a key point: The authority for writing has always been a socially constructed artifice. The author is not a natural phenomenon. It was an idea that we invented to help us make sense of writing.

After the “death of the author,” therefore, everything gets turned around. Specifically, the meaning of a piece of writing is not something that can be guaranteed a priori by the authentic character or voice of the person who is said to have written it. Instead, meaning transpires in and from the experience of reading. It is through that process that readers discover (or better, “fabricate”) what they assume the author had wanted to say.

This flipping of the script on literary theory alters the location of meaning-making in ways that overturn our standard operating presumptions. Previously, it had lain with the author who, it was assumed, had “something to say”; now, it is with the reader. When we read “Hamlet,” we are not able to access Shakespeare’s true intentions for writing it, so we find meaning by interpreting it (and then we project our interpretations back onto Shakespeare). In the process of our doing so, the authority that had been vested in the author is not just questioned, but overthrown. “Text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation,” wrote Barthes, “but there is one place where this multiplicity is focused and that place is the reader. … A text’s unity lies not in its origin but in its destination.” The death of the author, in other words, is the birth of the critical reader.

All this throws up something that has been missed in the frenzy over the technological significance of LLMs: They are philosophically significant. What we now have are things that write without speaking, a proliferation of texts that do not have, nor are beholden to, the authoritative voice of an author, and statements whose truth cannot be anchored in and assured by a prior intention to say something.

From one perspective — a perspective that remains bound to the usual ways of thinking — this can only be seen as a threat and crisis, for it challenges our very understanding of what writing is, the state of literature and the meaning of truth or the means of speaking the truth. But from another, it is an opportunity to think beyond the limitations of Western metaphysics and its hegemony.


…Instead of being (mis)understood as signs of the apocalypse or the end of writing, LLMs reveal the terminal limits of the author function, participate in a deconstruction of its organizing principles, and open the opportunity to think and write differently.

…

The LLM form of artificial intelligence is disturbing and disruptive, but not because it is a deviation or exception to that condition; instead, it exposes how it was always a fiction.

[Via]

“Making God” by Emily Gorcenski

The central problem with Singularity theory is that it is really only attractive to nerds. Vibing with all of humanity across the universe would mean entangling your consciousness with that of every other creep, and if you’re selling that vision and don’t see that as an issue, then it probably means that you’re the creep. Kurzweil’s The Singularity is Near is paternalistic and at times downright lecherous; paradise for me would mean being almost anywhere he’s not. The metaverse has two problems with its sales pitch: the first is that it’s useless; the second is that absolutely nobody wants Facebook to represent their version of forever.

Of course, it’s not like Meta (Facebook’s rebranded parent company) is coming right out and saying, “hey we’re building digital heaven!” Techno-utopianism is (only a little bit) more subtle. They don’t come right out and say they’re saving souls. Instead they say they’re benefitting all of humanity. Facebook wants to connect the world. Google wants to put all knowledge of humanity at your fingertips. Ignore their profit motives, they’re being altruistic!

In recent years, a bizarre philosophy has gained traction among Silicon Valley’s most fervent insiders: effective altruism. The basic gist is that giving is good (holy) and in order to give more one must first earn more. Therefore, obscene profit, even that which is obtained through fraud, is justifiable because it can lead to immense charity. Plenty of capitalists have made similar arguments through the years. Andrew Carnegie built libraries around the country out of a belief in a bizarre form of social Darwinism, that men who emerge from deep poverty will evolve the skills to drive industrialism forward. There’s a tendency for the rich to mistake their luck for skill.

But it was the canon of Singularity theory that brought this prosaic philosophy to a new state of perversion: longtermism. If humanity survives, vastly more humans will live in the future than live today or have ever lived in the past. Therefore, it is our obligation to do everything we can to ensure their future prosperity. All inequalities and offenses in the present pale in comparison to the benefit we can achieve at scale to the humans yet to exist. It is for their benefit that we must drive steadfast to the Singularity. We develop technology not for us but for them. We are the benediction of all of the rest of mankind.

Longtermism’s biggest advocates were, unsurprisingly, the most zealous evangelists of web3. They proselytized with these arguments for years and the number of their acolytes grew. And the rest of us saw the naked truth, dumbfounded watching, staring into our black mirrors, darkly.

Longtermists offered a mind-blowing riposte: who cares about racism today when you’re trying to save billions of lives in the future?

Humanity’s demise is a scarier idea than, say, labor displacement. It’s not a coincidence that AI advocates are keeping extinction risk as the preëminent “AI safety” topic in regulators’ minds. It’s something they can easily agree to avoid with negligible impact on the day-to-day operations of their business: we are not close to the creation of an Artificial General Intelligence (AGI), despite the breathless claims of the Singularity disciples working on the tech. This allows them to distract from and marginalize the real concerns about AI safety: mass unemployment, educational impairment, encoded social injustice, misinformation, and so forth. Singularity theorists get to have it both ways: they can keep moving toward their promised land without interference from those equipped to stop them.

Effective altruism, longtermism, techno-optimism, fascism, neoreaction, etc. are all just variations on a savior mythology. Each of them says, “there is a threat and we are the victim. But we are also the savior. And we alone can defeat the threat.” (Longtermism at least pays lip service to democracy but refuses to engage with the reality that voters will always choose the issues that affect them now.) Every savior myth also must create an event that proves that salvation has arrived. We shouldn’t be surprised that they’ve simply reinvented Revelation. Silicon Valley hasn’t produced a truly new idea in decades.

Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.

[Via]

“Confessions of a Viral AI Writer”

BUT WHAT IF I, the writer, don’t matter? I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn’t like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether. I can also imagine a world—and maybe we’re already in it—in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

When I told Sims about this mother, he mentioned Roland Barthes’ influential essay “The Death of the Author.” In it, Barthes lays out an argument for favoring readers’ interpretations of a piece of writing over whatever meaning the author might have intended. Sims proposed a sort of supercharged version of Barthes’ argument in which a reader, able to produce not only a text’s meaning but the text itself, takes on an even more powerful cultural role.

Sims thought AI would let any literature lover generate the narrative they want—specifying the plot, the characters, even the writing style—instead of hoping someone else will.

Sims’ prediction made sense to me on an intellectual level, but I wondered how many people would actually want to cocreate their own literature. Then, a week later, I opened WhatsApp and saw a message from my dad, who grows mangoes in his yard in the coastal Florida town of Merritt Island. It was a picture he’d taken of his computer screen, with these words:

Sweet golden mango,
Merritt Island’s delight,
Juice drips, pure delight.

Next to this was ChatGPT’s logo and, underneath, a note: “My Haiku poem!”

The poem belonged to my dad in two senses: He had brought it into existence and was in possession of it. I stared at it for a while, trying to assess whether it was a good haiku—whether the doubling of the word “delight” was ungainly or subversive. I couldn’t decide. But then, my opinion didn’t matter. The literary relationship was a closed loop between my dad and himself.

In the days after the Sudowrite pile-on, those who had been helping to test its novel generator—hobbyists, fan fiction writers, and a handful of published genre authors—huddled on the Sudowrite Slack, feeling attacked. The outrage by published authors struck them as classist and exclusionary, maybe even ableist. Elizabeth Ann West, an author on Sudowrite’s payroll at the time who also makes a living writing Pride and Prejudice spinoffs, wrote, “Well I am PROUD to be a criminal against the arts if it means now everyone, of all abilities, can write the book they’ve always dreamed of writing.”

It reminded me of something Sims had told me. “Storytelling is really important,” he’d said. “This is an opportunity for us all to become storytellers.” The words had stuck with me. They suggested a democratization of creative freedom. There was something genuinely exciting about that prospect. But this line of reasoning obscured something fundamental about AI’s creation.

As much as technologists might be driven by an intellectual and creative curiosity similar to that of writers—and I don’t doubt this of Sims and others—the difference between them and us is that their work is expensive. The existence of language-generating AI depends on huge amounts of computational power and special hardware that only the world’s wealthiest people and institutions can afford. Whatever the creative goals of technologists, their research depends on that funding.

The language of empowerment, in that context, starts to sound familiar. It’s not unlike Facebook’s mission to “give people the power to build community and bring the world closer together,” or Google’s vision of making the world’s information “universally accessible and useful.” If AI constitutes a dramatic technical leap—and I believe it does—then, judging from history, it will also constitute a dramatic leap in corporate capture of human existence. Big Tech has already transmuted some of the most ancient pillars of human relationships—friendship, community, influence—for its own profit. Now it’s coming after language itself.

A thought experiment occurred to me at some point, a way to disentangle AI’s creative potential from its commercial potential: What if a band of diverse, anti-capitalist writers and developers got together and created their own language model, trained only on words provided with the explicit consent of the authors for the sole purpose of using the model as a creative tool?

That is, what if you could build an AI model that elegantly sidestepped all the ethical problems that seem inherent to AI: the lack of consent in training, the reinforcement of bias, the poorly paid gig workforce supporting it, the cheapening of artists’ labor? I imagined how rich and beautiful a model like this could be. I fantasized about the emergence of new forms of communal creative expression through human interaction with this model.

[Via]

Clickbait: Terry Gross made fun of someone who cannot hear

In response to: Flawed chatbot or threat to society? Both? We explore the risks and benefits of AI