A weird little story in the news:
Amazon has removed half a dozen AI-generated books published under a living author’s name without her consent following a social media backlash…
[Author Jane] Friedman told Gizmodo she learned about the AI-written imposter works after one of her readers stumbled across them on Amazon and reached out to her directly. At first, the reader thought the bizarre language may have been Friedman’s attempt to experiment with a new writing style. Eventually, the reader realized the differences were likely more than mere stylistic departures. They emailed Friedman expressing concern the works were written by someone else entirely.
I’m not familiar with Friedman’s work; apparently she writes about how to write and get published and so forth, and her work is well enough known that she was not simply dismissed as a crank when she complained to Amazon about this (I guess we could call it) inverse plagiarism.
And when I say “not simply dismissed,” the accent is on “simply” because she was indeed dismissed, at first — listen to this utterly, sadly believable preliminary response to her complaint by the world’s largest bookseller:
Friedman says she filed an infringement report through Amazon’s official form as soon as she discovered the books and received an automated response. After informing Amazon that someone else was trading on her name, Friedman says the company asked her for trademark registration numbers, something she said most authors simply do not have. Amazon initially closed the case and told her the content would not be removed since she did not own the copyright to the AI works, only for an Amazon representative to reach out to Friedman hours later to say they were reviewing the case further.
In a story full of key phrases, “would not be removed since she did not own the copyright to the AI works” may be the keymost. It hits that perfect dystopian tone and holds the form of its anti-logic parabola perfectly.
Amazon hasn’t said why they reversed themselves; obviously the shitstorm made them do it. But we can’t expect Amazon to extend the same courtesy to anyone who can’t command a shitstorm as swiftly as Friedman did. (And in the future maybe even that won’t be enough to get them to fix the problem.) From her own post on the matter:
When I complained about this on Twitter/X, an author responded that she had to report 29 illegitimate books in just the last week alone. 29!
I’m sure lots of other authors are getting scammed like this, and a whole lot more will be thus scammed soon enough.
Friedman righteously and rightfully calls for “guardrails on this landslide of misattribution and misinformation… a way to verify authorship, or for authors to easily block fraudulent books credited to them.” It’s easy to see this as a civil and legal issue for professional writers and, in fact, artists of any kind: Not only are the scammers taking money on the false pretense that the work is by who they say it’s by, they’re injuring the artist’s reputation, because the misattributed product will certainly be worse, or in any case more carelessly assembled, than what the customer has a right to expect.
But I couldn’t help but wonder: What if the fraudulent product were better than the faux Friedmans were? Of course you or I or any normal person would agree that the moral issue would be unchanged. Selling a really good knockoff of a Chanel bag is no less a theft than selling a shitty one.
But often when I hear even intelligent observers talking about the moral issues attendant on the rise and spread of AI tech, they seem a lot less outraged or even worried than I am. In a consideration of AI in the June issue of the Atlantic, Adrienne LaFrance starts off talking about how, in the wake of the technological advances of the 19th Century, “the national mood was a mix of exuberance, anxiety, and dread” — which seems ominously like “see? And that all worked out!”
LaFrance is not so thoughtless, though. She is aware that, at present, “self-teaching AI models are being designed to become better at what they do with every single interaction. But they also sometimes hallucinate, and manipulate, and fabricate.” She knows corporations don’t give a shit, and indeed are “already memorizing the platitudes necessary to wave away the critics.” And she sees that whatever advanced AI’s benefits may be, in the short term at least there’ll be disasters, e.g. “widespread unemployment and the loss of professional confidence as a more competent AI looks over our shoulder.”
Good. But LaFrance’s solution is “a human renaissance in the age of intelligent machines” — and now it’s her turn to dispense necessary platitudes: for example, prescriptions like “people ought to disclose whenever an artificial intelligence is present or has been used in communication,” and slogans like “tapping a ‘Like’ button is not friendship.” (Bonus points off for “cultural norms.”)
I don’t think LaFrance is dealing in bad faith, or that she’s unperceptive (she also anticipates the Friedman incident: “Artists, writers, and musicians should anticipate widespread impostor efforts and fight against them”). I think, though, that she is missing the moral dimension. Her “human renaissance” seems like a bunch of wishful thinking in the face of what increasingly looks like a huge criminal conspiracy against the human spirit.
Also, we already had a renaissance some centuries back; many of us still remember its precepts, and are also aware that, after a long period in which it seemed society was down with the program, we’re now having to reeducate a depressingly large chunk of the population on why the murk of the Middle Ages was not preferable.
Because that, and not the technology, is really the problem. It’s like when people fret over what to do about Trump, and I keep screaming teach basic humanities — and, while we’re at it, media literacy — like you mean it. Then you won’t have to explain to people that there’s a difference between con men and statesmen, and why the nice-looking, well-spoken AI chatbot isn’t necessarily going to be good for them.
I wrote back in June about how I was getting obvious AI pitches in my journalism work. Yesterday, as I mentioned in a Substack note, I got an email “invitation” from a Substack previously unknown to me to be “added… as an author to a post” and “fill out your profile so readers can find out more about you.” The Substack has one “coming soon” post in English and another in a language I can’t identify.
I’m not sure what tech is being employed there — in fact I can’t swear that the AI pitches I wrote about are, technically, AI and not some other kind of word manipulation engine. But I can still recognize the smell of bullshit.
The paradise unrestrained capitalism offers gets ever closer... For, pardon the expression, the parasites. For us, hahaha, no.
Sidebar: apparently there’s also an issue of travel books on Amazon written by AI, again using actual authors’ names. Of course, using AI to fake Dr. Chuck Tingle works would be going much too far...
My dotard theory is that, at least for the near- to mid-term future, AI won’t and can’t be better than the people developing it. OTOH, that day will come.
A second dotard theory: all the major tech developments have happened and now the VCs are desperate to push anything, no matter how crappy, to try and score again. So crap gets hyped and here we are.
I read about this author’s complaints to Amazon and I’m glad this time the issue was resolved in her favor. But it seems obvious this will happen again to others, especially ebook writers who flourish on Amazon. I can also see complaints that a book is fraudulent being used as a means of harassment, causing an author’s legitimate product to be flagged as possible AI.
Also, I can’t help recalling a simpler time when we were merely worried automation would replace physical labor and displace those workers, as opposed to this endeavor to render human thought and creativity itself redundant.