
I have plenty of reasons to despise Generative Artificial Intelligence. Chief among them, though perhaps unintentional, is its implicit encouragement of slipshod attitudes towards getting any work done. On ACX, the marketplace platform where authors post works for narrators to pick up, honest and dedicated audio readers are constantly plagued by submissions that reek of all the hallmarks (sometimes quite literally the watermarks) of ChatGPT. It is a phenomenon that can only be regarded as an infestation in what was once a sacred marketplace.
In the creative world, Adobe remains in the crosshairs of several boycott calls, largely due to its near-wholesale disregard of copyright matters following the deployment of its image-generation tool, Adobe Firefly. Tech journalist Kyle Marcelino asserts that its built-in content-review mechanism inevitably retrieves “all private projects and even works made under a non-disclosure agreement.”¹
It shouldn’t be a surprise, then, that the UK’s Society of Authors felt compelled to write to tech companies on behalf of more than twelve thousand writers, demanding at least the courtesy of consent. As the organisation expressly made clear:
“Today (Wednesday 21 August) the Society of Authors (SoA) has written to tech companies on behalf of its 12,500+ members to assert that ‘they do not authorise or otherwise grant permission for the use of any of their copyright-protected works’ in relation to the ‘training’, development and operation of generative artificial intelligence (AI) systems. Among those contacted are Microsoft, Google, OpenAI, Apple and Meta.”²
To make matters worse, Amazon, through its Audiobook Creation Exchange (ACX), recently announced a ‘voice replica’ program to both narrators and writers. The rising use of ElevenLabs has clearly steered Amazon in this direction: narrators are now offered the option of creating replicas of their own voices. While writers get the higher end of the royalty share, narrators are still (understandably) expected to ensure that the quality of any book delivered using their AI-generated replicas is up to par.
And here’s something that needs to be said: we’re not even calling this technology what it really is. UNESCO’s 2019 report “Steering AI and Advanced ICTs for Knowledge Societies” makes a crucial distinction that the tech industry conveniently ignores. What we’re actually dealing with is ANI – Artificial Narrow Intelligence – systems that perform specific, limited tasks within “a narrow field of human capabilities.”³ Yet the term “AI” has been weaponized into this catch-all buzzword that implies far more capability and consciousness than these pattern-matching algorithms actually possess.
Listen. I have my pragmatic views about this technology. Truly. I know that in some ways, we can glance at it from an equitable vantage and say that it has empowered those who aren’t proficient with language or those who do not have the time. That it’s indeed the responsibility of its human instructors to ensure that AI output is governed by human finesse and polish.
Let’s even expand this pragmatism further to say that writing or narrating is an entrepreneurial profession by nature, and that professionals and aspirants alike should learn to distinguish themselves rather than mourn what have now become the old ways of doing things.
To the first point: when has power ever come in tandem with humans delivering on their expected responsibilities? To the second: the danger isn’t just entrepreneurial…
Something needs to be said about what Amazon and its ilk are attempting to achieve. They are going to kill the single most important tradition in the history of our species: storytelling. Humans don’t simply blurt out stories; they improvise in the moment. They add practiced but free-form intonations…sometimes completely out of the blue, as the story is being told.
This isn’t mere semantics. By allowing companies to brand their tools as “AI” rather than the more accurate “ANI,” we’ve given them license to mystify and oversell what these tools can do. It’s this very mystification that enables Amazon to present voice replication as some magical solution rather than what it really is: a sophisticated but soulless mimicry that threatens to automate away the human elements that make storytelling meaningful.
Though Amazon never states it explicitly, to offer AI as a ‘cost-effective’ workaround is not just short-sighted; it is destructive and irresponsible.
¹ Itechpost, “Adobe Faces Boycott Threats After New Spyware-Like ‘Content Moderation’ Rules”
² Society of Authors, “The Society of Authors writes to tech companies asserting members’ rights around uses of their works by generative AI”
³ UNESCO, “Steering AI and Advanced ICTs for Knowledge Societies,” 2019