On Monday morning, numerous writers woke up to learn that their books had been scanned into a massive dataset without their consent. A project of cloud word processor Shaxpir, Prosecraft compiled over 27,000 books, comparing, ranking and analyzing them based on the “vividness” of their language. Many authors — including Young Adult powerhouse Maureen Johnson and “Little Fires Everywhere” author Celeste Ng — spoke out against Prosecraft for using their books without consent. Even books published less than a month ago had already been uploaded.
After a day full of righteous online backlash, Prosecraft creator Benji Smith took down the website, which had existed since 2017.
“I’ve spent thousands of hours working on this project, cleaning up and annotating text, organizing and tweaking things,” Smith wrote. “But in the meantime, ‘AI’ became a thing. And the arrival of AI on the scene has been tainted by early use-cases that allow anyone to create zero-effort impersonations of artists, cutting those creators out of their own creative process.”
Smith’s Prosecraft was not a generative AI tool, but authors worried it could become one, since Smith had amassed a dataset of a quarter of a billion words drawn from published books he found by crawling the internet.
Prosecraft would show two paragraphs from each book: the one it deemed “most passive” and the one it deemed “most vivid.” It then placed books into percentile rankings based on how vivid, how long or how passive they were.
“If you’re a writer as a career it’s maddening, in part because style is not the same as writing a fucking whitepaper for a business that needs to be in active voice or whatever,” author Ilana Masad said. “Style is style!”
Smith did not respond to multiple requests for comment, but he elaborated on his intentions in his blog post.
“Since I was only publishing summary statistics, and small snippets from the text of those books, I believed I was honoring the spirit of the Fair Use doctrine, which doesn’t require the consent of the original author,” Smith wrote. Some authors noted that the excerpts of their books on Prosecraft included major spoilers, causing further frustration.
Though Smith apologized, authors remain exasperated. For artists and writers, the recent proliferation of AI tools has created a deeply frustrating game of whack-a-mole. As soon as they opt out of one database, they find that their work has been used to train another AI model, and so on.
“It’s pretty much the norm, from what I can tell, for these sites and projects to do whatever they’re doing first and then hope that no one notices and then disappear or get defensive when they inevitably do,” Masad said.
Generative AI and the technology behind self-publishing have created a perfect storm for scammy activity. Amazon has been flooded with low-quality, AI-generated travel guides and even AI-generated children’s books. But tools like ChatGPT are trained on essentially the sum total of the internet, meaning that real travel writers and children’s book authors could be inadvertently plagiarized.
Author Jane Friedman wrote in a recent blog post — titled “I’d Rather See My Books Get Pirated Than This” — that she is being impersonated on Amazon, where someone is selling books under her name that appear to have been written with AI.
Though Friedman succeeded in getting these fake books removed from her Goodreads page, she says Amazon won’t remove the books from sale unless she holds a trademark for her name.
Amazon did not provide a comment before publication.
“I don’t think any writer is seriously convinced that AI is going to ruin books because like, well, that’s not how literature works, and everything I’ve seen ChatGPT write as a ‘story’ is just really fucking boring with no voice or real craft or style,” Masad said.
But she worries that publishers will be convinced otherwise, and possibly replace marketing and publicity teams with AI-generated promotional content.
“It feels really bad,” she said.