February 21, 2024

Based on a large language model (LLM), OpenAI's Chat Generative Pre-trained Transformer (ChatGPT) was released in November 2022. Ever since, it has disrupted several industries, particularly publishing, drawing a range of responses: from writers anxious about the security of their jobs to a public at large unable to fathom the fundamental ways in which human-machine interaction is reshaping their everyday lives.

Moneycontrol asked a few writers from diverse backgrounds whether they use LLM tools like ChatGPT or other GenAI software in their writing, and whether they are troubled by the hullabaloo that their skills will soon be rendered useless.

Founder of The Bombay Literary Magazine (TBLM) and author of the Sahitya Akademi Yuva Puraskar-winning Diwali in Muzaffarnagar (HarperCollins, 2018), Tanuj Solanki uses OpenAI's tool Dall-E, which produces digital images from text prompts, “to create images corresponding to character descriptions.”

“[I wanted to see a description's] correspondence with how it might be visualised,” he says. “It hasn’t been very helpful. [But] I’ve also used ChatGPT to give me the structure of a three-day short-story-writing workshop. That was [well] done. I feel alright about doing these things, and I’m not particularly scared of what the machine’s efforts, or successes therein, mean for our future,” submits the author of the JCB Prize for Fiction-longlisted Machine Is Learning (Pan Macmillan, 2020).

Pune-based writer and poet Shruti Buddhavarapu, however, hasn’t used any GenAI tools for writing, though she feels “it helps, of course, that I haven’t written in a long time. But even if inspiration strikes, GenAI is not for me. I primarily write to process the world and my emotions, so what spiritual leverage or heft remains if I put my prompts in someone else’s ‘brain’ and ask it to tell me a story instead? It might be a great MFA class exercise to use and analyse AI stories, and it certainly might be of value too to teach/learn crucial elements of storytelling — but AI is myopic. It only tells us permutations and combinations of what is already known (and made available to it). Humans speak the unknown into existence.”

The buzz around the launch of ChatGPT made Bengaluru-based writer AM Gautam give the tool a try. Though it “was good enough to get the job done, [it worked] only when the job required a bunch of words strung together in grammatically correct sentences,” he says. The writer, whose debut essay collection is forthcoming from Aleph, continues, “So, for instance, it could write a poem about corporate appraisals in the ‘style of (Charles) Bukowski’. The poem made me and my friends laugh for a few seconds, but no one could’ve mistaken its output for something actually written by Bukowski. When it tried to write like (Rainer Maria) Rilke or (Sylvia) Plath, the output was not even funny, just pathetic.”

This reminds me that one of my friends uses ChatGPT to tell her jokes. It’s amusing that Toby Walsh, author of the recently published book Faking It: Artificial Intelligence in a Human World (Speaking Tiger, 2023), also notes this usage in his research findings. Creativity, it seems, cannot be manufactured through predictive intelligence. As Gautam also notes, “My impression is that there is an inherent insincerity in all fiction or poetry that AI programs like ChatGPT try to produce. It may be because a major part of the training data they ingest comes from the Internet, and we know how sincere everyone is on the Internet.”

Furthermore, Buddhavarapu, the author of Weight of a Cherry Blossom (Rupa, 2019), says, “Art is painful and pitiful in terms of compensation and livelihood for most of us. The digital appropriation of author-generated takeaways, without their consent, makes this adjacent to cultural appropriation in my eyes. The end-user of this AI is eventually going to be someone who profits off it — off the back and labour of [others]. And what happens when only a few do give consent? Imagine that sample set that somehow will influence, generate, and dictate writing styles for entire generations to come. I always mourn for the art that never made it to the public, and I can’t imagine AI will help that already-occurring phenomenon.”


Solanki, on the other hand, says, “If my writing is going to be available for [machines], then I demand that all the textbooks and all scientific research, which is so guarded and expensive to access, be fed to it, too. Why are there guardrails for advanced science being fed to LLMs but none for art or journalism?”

Regarding the oft-repeated threat that bots will replace writers, Solanki is unbothered. “I don’t really care for the redundancy of my skill sets,” he says. “As a fiction writer, my skill is rooted in my imagination. Imagination does work somewhat like an LLM. All your experience becomes the generative bouncing board. Now, if a machine can ape those processes on demand — in other words, imagine on demand — then the product is likely to have some nuisance value. But I think I’ll still be able to do better. Because I have experience. Because I actually have an imagination. Because I have peculiarities and defects and, therefore, style.”

Gautam cites Noam Chomsky, who said that ChatGPT is a “little thing [that is] getting too much attention and money.” Agreeing with the great philosopher’s comment, he continues, “Once you look under the surface of all the sci-fi-saturated Muskesque discourse on the subject, LLMs like GPT are nothing more than autofill tools on steroids, courtesy of more computing power. More computing power is merely dependent on the amount of money one has, and it’s hardly anything impressive. In their current form, to quote Chomsky again (and people often forget that he is a trained linguist, not just a political commentator), ‘such programs are stuck in a prehuman or nonhuman phase of cognitive evolution.’”

Radhika Iyengar, author of the bestselling 2023 nonfiction book Fire on the Ganges: Life among the Dead in Banaras (HarperCollins), also considers ChatGPT a powerful tool, but feels “that GenAI tools lack the ability to be creative beyond a certain point. In addition, these tools are unable to tap into personally lived human experiences, which many authors draw from. It’s what makes each author’s work different. That is, the uniqueness of their narrative voice and their written craft.”

She adds, “I ran a random search asking ChatGPT, ‘Tell me about the Dom community in Varanasi’, and it provided basic information. There were no news reports as references, no rich anecdotes or in-depth insights to accompany the answer to make it an interesting and informed read. So, from that perspective, I think the job of a journalist will remain, because a journalist (or even an author) will, of course, read books and other materials, which is essential for background research, but they will also weave their own reporting insights and voice (one that has been polished over years) into their original piece of writing. I believe that is something AI cannot replicate. It cannot go from one region to another, interacting with people, building a rapport, learning about their lives and writing about them. You need a human connection for this.”