GABBLER RECOMMENDS: ‘AI Signals The Death Of The Author’ | Noēma

If the author as the principal figure of literary authority and accountability came into existence at a particular time and place, there could conceivably also be a point at which it ceased to fulfill this role. That is what Barthes signaled in his now-famous essay. The “death of the author” does not mean the end of the life of any particular individual or even the end of human writing, but the termination and closure of the author as the authorizing agent of what is said in and by writing. Though Barthes never experienced an LLM, his essay nevertheless accurately anticipated our current situation. LLMs produce written content without a living voice to animate and authorize their words. Text produced by LLMs is literally unauthorized — a point emphasized by the U.S. Court of Appeals, which recently upheld a decision denying authorship to AI.

Criticism of tools like ChatGPT tends to follow on from this. They have been described as “stochastic parrots” for the way they simply mimic human speech or repeat word patterns without understanding meaning. The ways in which they more generally disrupt the standard understanding of authorship, authority and the means and meaning of writing have clearly disturbed a great many people. But the story of how “the author” came into being shows us that the critics miss a key point: The authority for writing has always been a socially constructed artifice. The author is not a natural phenomenon. It was an idea that we invented to help us make sense of writing.

After the “death of the author,” therefore, everything gets turned around. Specifically, the meaning of a piece of writing is not something that can be guaranteed a priori by the authentic character or voice of the person who is said to have written it. Instead, meaning transpires in and from the experience of reading. It is through that process that readers discover (or better, “fabricate”) what they assume the author had wanted to say.

This flipping of the script on literary theory alters the location of meaning-making in ways that overturn our standard operating presumptions. Previously, it had lain with the author who, it was assumed, had “something to say”; now, it is with the reader. When we read “Hamlet,” we are not able to access Shakespeare’s true intentions for writing it, so we find meaning by interpreting it (and then we project our interpretations back onto Shakespeare). In the process of our doing so, the authority that had been vested in the author is not just questioned, but overthrown. “Text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation,” wrote Barthes, “but there is one place where this multiplicity is focused and that place is the reader. … A text’s unity lies not in its origin but in its destination.” The death of the author, in other words, is the birth of the critical reader.

All this throws up something that has been missed in the frenzy over the technological significance of LLMs: They are philosophically significant. What we now have are things that write without speaking, a proliferation of texts that do not have, nor are beholden to, the authoritative voice of an author, and statements whose truth cannot be anchored in and assured by a prior intention to say something.

From one perspective — a perspective that remains bound to the usual ways of thinking — this can only be seen as a threat and crisis, for it challenges our very understanding of what writing is, the state of literature and the meaning of truth or the means of speaking the truth. But from another, it is an opportunity to think beyond the limitations of Western metaphysics and its hegemony.


…Instead of being (mis)understood as signs of the apocalypse or the end of writing, LLMs reveal the terminal limits of the author function, participate in a deconstruction of its organizing principles, and open the opportunity to think and write differently.

…

The LLM form of artificial intelligence is disturbing and disruptive, but not because it is a deviation or exception to that condition; instead, it exposes how it was always a fiction.

[Via]

“Confessions of a Viral AI Writer”

BUT WHAT IF I, the writer, don’t matter? I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn’t like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether. I can also imagine a world—and maybe we’re already in it—in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

When I told Sims about this mother, he mentioned Roland Barthes’ influential essay “The Death of the Author.” In it, Barthes lays out an argument for favoring readers’ interpretations of a piece of writing over whatever meaning the author might have intended. Sims proposed a sort of supercharged version of Barthes’ argument in which a reader, able to produce not only a text’s meaning but the text itself, takes on an even more powerful cultural role.

Sims thought AI would let any literature lover generate the narrative they want—specifying the plot, the characters, even the writing style—instead of hoping someone else will.

Sims’ prediction made sense to me on an intellectual level, but I wondered how many people would actually want to cocreate their own literature. Then, a week later, I opened WhatsApp and saw a message from my dad, who grows mangoes in his yard in the coastal Florida town of Merritt Island. It was a picture he’d taken of his computer screen, with these words:

Sweet golden mango,
Merritt Island’s delight,
Juice drips, pure delight.

Next to this was ChatGPT’s logo and, underneath, a note: “My Haiku poem!”

The poem belonged to my dad in two senses: He had brought it into existence and was in possession of it. I stared at it for a while, trying to assess whether it was a good haiku—whether the doubling of the word “delight” was ungainly or subversive. I couldn’t decide. But then, my opinion didn’t matter. The literary relationship was a closed loop between my dad and himself.

In the days after the Sudowrite pile-on, those who had been helping to test its novel generator—hobbyists, fan fiction writers, and a handful of published genre authors—huddled on the Sudowrite Slack, feeling attacked. The outrage by published authors struck them as classist and exclusionary, maybe even ableist. Elizabeth Ann West, an author on Sudowrite’s payroll at the time who also makes a living writing Pride and Prejudice spinoffs, wrote, “Well I am PROUD to be a criminal against the arts if it means now everyone, of all abilities, can write the book they’ve always dreamed of writing.”

It reminded me of something Sims had told me. “Storytelling is really important,” he’d said. “This is an opportunity for us all to become storytellers.” The words had stuck with me. They suggested a democratization of creative freedom. There was something genuinely exciting about that prospect. But this line of reasoning obscured something fundamental about AI’s creation.

As much as technologists might be driven by an intellectual and creative curiosity similar to that of writers—and I don’t doubt this of Sims and others—the difference between them and us is that their work is expensive. The existence of language-generating AI depends on huge amounts of computational power and special hardware that only the world’s wealthiest people and institutions can afford. Whatever the creative goals of technologists, their research depends on that funding.

The language of empowerment, in that context, starts to sound familiar. It’s not unlike Facebook’s mission to “give people the power to build community and bring the world closer together,” or Google’s vision of making the world’s information “universally accessible and useful.” If AI constitutes a dramatic technical leap—and I believe it does—then, judging from history, it will also constitute a dramatic leap in corporate capture of human existence. Big Tech has already transmuted some of the most ancient pillars of human relationships—friendship, community, influence—for its own profit. Now it’s coming after language itself.

A thought experiment occurred to me at some point, a way to disentangle AI’s creative potential from its commercial potential: What if a band of diverse, anti-capitalist writers and developers got together and created their own language model, trained only on words provided with the explicit consent of the authors for the sole purpose of using the model as a creative tool?

That is, what if you could build an AI model that elegantly sidestepped all the ethical problems that seem inherent to AI: the lack of consent in training, the reinforcement of bias, the poorly paid gig workforce supporting it, the cheapening of artists’ labor? I imagined how rich and beautiful a model like this could be. I fantasized about the emergence of new forms of communal creative expression through human interaction with this model.

[Via]

Clickbait: Terry Gross made fun of someone who cannot hear

In response to: Flawed chatbot or threat to society? Both? We explore the risks and benefits of AI

Theistic conceptions of artificial intelligence


Other scholars recognise elements of theism in the discourse around AI and its potential impact on our future. Robert Geraci suggests in his 2010 book, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, that AI can fulfil the same role in apocalyptic imaginings as a singular theistic god. Bearing in mind that the biblical apocalypse is an optimistic cosmic transformation, he also draws out parallels with the aims of AI, which often describe hopeful aspirations for a world-yet-to-come, an AI eschatology. In an early part of this work, Geraci draws on Rudolf Otto’s 1917 description of god as mysterium tremendum et fascinans (Otto 1917), using it to identify a type of awe-inspiring and fearsome being that at different times in our history can be a god, or, in our contemporary world, AI. Elsewhere, Geraci’s work has engaged with virtual worlds, drawing attention to the role of transhumanists, including Giulio Prisco, discussed below, in claiming new potential spaces to practise and evolve religion towards transhumanist ends. In such spaces, including Second Life and World of Warcraft (an MMORPG, or massively multiplayer online role-playing game), Geraci argues, a step closer to the fulfilment of transhumanist salvation is being made: “a heavenly realm to inhabit” (Geraci 2014, 177). Twitter is another virtual space, but one dominated by discourse rather than by the aesthetics and virtual embodiment of Second Life and World of Warcraft. However, this article proposes that the expressions of religious metaphor, parody, and tropes on Twitter, as in the BBtA tweets, represent continuities of theism, continuities enabled by new technological spaces as well as by uncertainties about the nature and the volition of ‘the algorithm’.

However, the ‘AI fits into the god-space’ argument can be in danger of supporting a rather strict version of the Secularisation Thesis, an idea whose historical veracity has been debated by anthropologists and sociologists of religion (see Ward and Hoelzl 2008). This article, and connected research, seeks to add to this debate by drawing attention to continuities of religiosity and enchantment in super-agential concepts of AI and AI NRMs. Second, this god-space argument can suggest that religion is spurred on by ‘need’ only, a pathology interpretation of religion that ignores other elements of religious inspiration and innovation such as desire, culture, aesthetics, and, often in the online environment, affective virality.

Theistic interpretations of AI do undeniably owe a lot to older cultural conceptions of a singular god. Randall Reed pares this kind of god down to three theological characteristics (with long historical and philosophical roots) that often map easily onto our conceptions of AI superintelligences. These are omnipotence, omniscience, and omnipresence (Reed 2018, 7). Reed also raises the question of omnibenevolence. He notes that AI philosophers such as Nick Bostrom of the Future of Humanity Institute have focussed on the issues of malevolence through “perverse instantiation”, a failure of value alignment leading to unforeseen damage from a superintelligent AI, such as in Bostrom’s famous Paperclip Maximiser thought experiment (Bostrom 2003). Bostrom’s Orthogonality Thesis from his 2012 paper ‘The Superintelligent Will’ is also relevant: the argument that intelligence is not intrinsically linked to ‘goodness’, and that an AI could have any combination of degrees of the two characteristics (Bostrom 2012).

– “Blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse by Beth Singler