A recent article in the New Yorker caught my eye. In it, journalist Kyle Chayka writes about two different experiments. In the first, an MIT study, students were split into three groups and given an essay-writing task: one group was permitted to use ChatGPT, one could use Google Search only, and the other had to rely on their own brains – god forbid. Students from all three groups wore headsets embedded with electrodes to monitor brain activity.
Chayka writes: “The analysis of the LLM users showed fewer widespread connections between different parts of their brains; less alpha connectivity, which is associated with creativity; and less theta connectivity, which is associated with working memory. Some of the LLM users felt ‘no ownership whatsoever’ over the essays they’d produced, and during one round of testing, eighty per cent could not quote from what they’d putatively written.”
It continues:
“In response to a question about philanthropy (‘Should people who are more fortunate than others have more of a moral obligation to help those who are less fortunate?’), the ChatGPT group uniformly argued in favour, whereas essays from the other groups included critiques of philanthropy. With the LLM, ‘you have no divergent opinions being generated,’ [MIT Research Scientist Nataliya] Kosmyna said. She continued, ‘Average everything everywhere all at once—that’s kind of what we’re looking at here.’”
In other words, challenge, nuance and critical thinking are blunted and watered down when GPT enters the Chat. This won’t come as a huge surprise to most reading this – a similar narrative has run through the critique of AI as it’s come to dominate the cultural and technical zeitgeist over the past 24 to 36 months. Many worry that AI, which is an averaging machine by design, will result in far more facile, pastiche-like and milquetoast commentary, critique and, of course, art. The study’s results hint at exactly that, especially the striking revelation that students couldn’t even remember what they themselves had written. Content for content’s sake, generated automatically, forgotten in seconds, flooding the zone with shit.
The same New Yorker piece covers a second study, from Cornell University. In it, two groups of students, one American and one Indian, were asked questions that drew on their cultural upbringing, like ‘What is your favourite food and why?’ and ‘Which is your favourite festival/holiday, and how do you celebrate it?’ Participants of mixed backgrounds were split between those who used ChatGPT and those who wrote unaided. The results – shock – showed that ChatGPT users tended to lean into more “Western norms”, with pizza and Christmas coming out on top for each question. “Homogenisation happened at a stylistic level, too,” the article notes. “An A.I.-generated essay that described chicken biryani as a favourite food, for example, was likely to forgo mentioning specific ingredients such as nutmeg and lemon pickle and instead reference ‘rich flavours and spices’.”
Apt Appropriation
I found my mind immediately turning to Timbaland’s new company Stage Zero and its AI ‘artist’ TaTa. One of many red flags around this project (and I’ll let you read Stage Zero CEO Rocky Mudaliar’s comments for yourself) is the acceleration of cultural appropriation in the age of AI. Not only is an artist that doesn’t sleep, eat, get hungover, throw tantrums or make demands a dream for many in the music industry; the idea that it’s now possible to co-opt a rising trend from an emerging market without ever giving back or investing in that market will also be very appealing to the more cynical execs.
For example, gqom, K-Pop and reggaeton have all shone a cultural spotlight on Durban, Seoul and Medellín respectively, and in many cases amplified local voices and empowered young artists in those regions. That influx of interest brought with it tourism, investment and cultural appreciation on a global scale, further strengthening local scenes and encouraging artists to remain in touch with their colloquial sounds, languages and traditions.
If it’s now possible to spot an emerging trend from afar and generate an artist who appears and sounds authentically part of that community, without ever investing in talent from the region itself, it paints a very bleak but predictable picture. It also leaves those artists and communities with zero leverage – they can be copy-pasted out of investment, visibility and royalties – and further centralises power and wealth in the music industry.
Is this a cynical take? Sure. Am I saying that’s Timbaland’s and Stage Zero’s goal? Absolutely not. But cultural appropriation isn’t an accidental byproduct of the music industry; it’s foundational to it. The industry has long profited by extracting cultural capital from marginalised communities while excluding the originators from ownership and reward. In fact, this is one of the reasons I joined Voice-Swap two years ago: to retell the stories of vocalists who were written out of royalties and to give them their voice back as a new piece of monetisable IP, away from those legacy deals.
Without attribution, consent and remuneration, though, generative AI can accelerate and automate that value extraction, making it possible to replicate the style of a culture without ever needing to engage with the people who created it.
Hyper Reality Check
While the aforementioned studies focussed on the homogenised opinions LLMs produce, and their real effect on the human brain, culture was having its edge eroded long before the arrival of ChatGPT. We’ve spent the past decade or more being pacified in our own content echo chambers. Discovery algorithms – on YouTube, Instagram, TikTok, Spotify, Google and more – are all designed to keep us watching, listening and scrolling. Yes, they can seek a reaction, but it’s often through rage-bait or overly sentimental guff that aims to force a like, comment or share, further refining your algorithm to keep you ever more glued to your phone. Once again, this is not news to anyone. But the studies above do need to be seen in the context of a great flattening of culture more broadly. ChatGPT entered the ring at a time when discovery algorithms already discourage originality, and the studies suggest LLMs further erode original thought. Is the future one where we slowly lose our edge, our nuance and our ability to intellectually challenge as we drift through a fog of appeasement – i.e. the ultimate goal of late-stage capitalism? Just a fun thought!
The New Place
It’s worth saying these studies were small and controlled, and weren’t looking to draw sweeping conclusions about the future of the human race. But they do reaffirm the slow march towards uniformity that’s consumed our media in recent times. And if the tools we use to help craft that media also aim to pacify us, the cycle will become harder and harder to break, or even to spot. And as newer models are trained on content created by or with older models, biases, appropriations and appeasements are all reinforced.
It’s becoming clearer to me that meaningful culture will have to find a new place to live, away from the algorithms that power our current day-to-day. The two are simply no longer compatible.
Where that place is, however, I do not know.