In Part 1, we talked about how artificial intelligence is already becoming sophisticated enough to make predictions and write stories, paving the way to a future where computers further augment, and perhaps eventually compete with, human fiction writers.
So far, we’ve covered writing. What about editing?
Writers are already heavily augmented with numerous tools that can help us analyze word frequency, repetition, adverbs, dialogue tag overuse, sentence length, and more. We can analyze readability and identify the words and sentences affecting it. We can analyze character agency and a story’s emotional pitch. And, more recently, we can even benchmark against genre averages and all bestselling fiction. Examples of editing software range from the cheap and simple, like Hemingway App, to the more sophisticated and robust, like AutoCrit.
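Many of these checks are simple to prototype yourself. Here’s a minimal Python sketch of two of them: word frequency and a crude adverb flag. (The “-ly” suffix test is a rough heuristic I’m using for illustration; commercial tools use part-of-speech tagging instead.)

```python
import re
from collections import Counter

def analyze(text):
    """Return the five most frequent words and a crude list of adverbs."""
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    # Crude adverb flag: '-ly' suffix; a real tool would POS-tag instead.
    adverbs = [w for w in words if w.endswith("ly") and len(w) > 4]
    return freq.most_common(5), adverbs

common, adverbs = analyze("He walked slowly and quietly. He walked away quickly.")
print(adverbs)  # → ['slowly', 'quietly', 'quickly']
```

Even this toy version surfaces repetition (“walked” appears twice in two sentences) and adverb clusters, which is essentially what the frequency reports in AutoCrit-style tools do at scale.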
You can even do some interesting analysis at home without fancy software, just using Microsoft Word’s readability statistics available in the grammar and spelling checker. Simply highlight a section such as a scene or chapter, run the checker, and you’ll be told the word count and average word, sentence, and paragraph length. You’ll also see the percentage of sentences in passive voice, along with the Flesch Reading Ease and Flesch-Kincaid Grade Level. Graph the Flesch Reading Ease by scene or chapter in Excel, and you’ve got a cardiogram of your novel’s pulse, where it speeds up or slows down.
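The Flesch Reading Ease score Word reports comes from a published formula, so you can approximate it yourself. This Python sketch uses the standard formula with a naive vowel-group syllable counter; it’s a rough stand-in, not Word’s exact algorithm, so expect small discrepancies.

```python
import re

def count_syllables(word):
    # Naive heuristic: count vowel groups; dictionary-based tools are more accurate.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # crude silent-e adjustment
    return max(n, 1)

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Published Flesch formula: higher score = easier to read.
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat. It was warm."), 1))  # → 117.7
```

Run this per scene or chapter, dump the scores to a spreadsheet, and you have the same “cardiogram” described above without leaving Python.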
A very interesting application of software to editing is analysis of a story’s emotional arc. Fiction might be described as bad things happening to flawed people. What happens to them, and how they react to it, defines the emotional arc of a story, which can be studied. Author Kurt Vonnegut did just that. After serving in World War II, he attended the University of Chicago, where he presented a master’s thesis in anthropology with a very simple premise: “Stories have shapes which can be drawn on graph paper, and … the shape of a given society’s stories is at least as interesting as the shape of its pots or spearheads.”
Vonnegut’s thesis was rejected, but he was right. Years later, researchers at the University of Vermont and University of Adelaide hypothesized that certain story arcs are more meaningful, and analyzed more than 1,300 works in the Project Gutenberg fiction collection. Their algorithm assigned emotional ratings to words like “death” and “love” and “laugh” to plot each story’s emotional rhythm. They identified six basic story shapes and ranked their popularity by number of downloads from the collection. These include rags to riches (emotional arc rises over the course of the story), riches to rags (falls), man in a hole (falls then rises), Icarus (rises then falls), Cinderella (rises, falls, then rises again), and Oedipus (falls, rises, then falls again). Riches to rags and man in a hole stories are the most popular.
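The Vermont approach can be sketched in miniature: score each word with an emotional valence from a rated lexicon, then slide an averaging window across the text to trace the arc. The tiny lexicon and window size below are illustrative stand-ins; the actual study used the labMT list of roughly 10,000 human-rated words and much larger windows.

```python
# Toy valence lexicon (1 = very negative, 9 = very positive); purely illustrative.
VALENCE = {"love": 8.4, "laugh": 8.2, "friend": 7.7,
           "death": 1.5, "grief": 2.0, "war": 1.8}

def emotional_arc(words, window=5):
    """Average valence of known words in each sliding window across the text."""
    arc = []
    for i in range(max(len(words) - window, 0) + 1):
        scores = [VALENCE[w] for w in words[i:i + window] if w in VALENCE]
        arc.append(sum(scores) / len(scores) if scores else 5.0)  # 5 = neutral
    return arc

story = "war death grief then a friend brought laugh and love".split()
print(emotional_arc(story))  # arc climbs from low valence to high: a rising shape
```

Plot that list and the story shape appears: this toy example traces the tail end of a man-in-a-hole arc, climbing out of the low point.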
For a fiction author, having access to this type of analysis could be a powerful asset. Remember: if you’re creating empathetic characters, the reader goes on the same emotional journey. Is my story mostly positive or negative in overall mood? How does my emotional arc relate to my plot architecture and character arcs? Am I telling the emotional story I want to get across, with well-timed upset and catharsis?
In 2016, Jodie Archer and Matthew Jockers came out with The Bestseller Code. The authors created an algorithm that analyzed 5,000 novels and rated them for likelihood to be bestsellers, claiming an 80-90 percent predictive accuracy. The results confirmed what many writers already know: Colloquial style, active verbs, strong protagonist agency, fast-moving and rhythmic plot, and topical focus (ideally with two major contrasting topics, such as crime and domesticity in Gone Girl) are all critical to commercial fiction. What was revolutionary was their method: they actually quantified these qualities with an algorithm.
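Their actual model and training data are proprietary, but the general shape of such a predictor is familiar from machine learning: extract stylistic features from a manuscript, then feed them to a trained classifier that outputs a probability. Everything in this sketch, including the feature names and weights, is invented for illustration and has no connection to the real Bestseller Code model.

```python
import math

# Hypothetical features and weights, for illustration only.
def bestseller_score(features, weights, bias=-2.0):
    """Logistic model: weighted feature sum squashed to a 0-1 probability."""
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1 / (1 + math.exp(-z))

weights = {"active_verb_ratio": 3.0, "dialogue_ratio": 1.5, "topic_contrast": 2.0}
manuscript = {"active_verb_ratio": 0.6, "dialogue_ratio": 0.4, "topic_contrast": 0.8}
print(round(bestseller_score(manuscript, weights), 2))  # → 0.88
```

In a real system the weights would be learned from thousands of labeled novels rather than hand-set, but the pipeline — features in, probability out — is the same.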
Archer and Jockers now sell consulting services, part of which involves running a manuscript through their algorithm to produce a report. They don’t produce a numeric score (for legal reasons) but instead use a star-rating system. The report also includes interesting character profiles showing key exhibited traits based on keywords. My own experience with them was positive, and their analysis paired well with the excellent guidance from my editor, who went beyond what the algorithm could do.
As more powerful algorithms and tools become available that are at least partly successful in predicting bestsellers, we may see some big changes in book publishing. This is an industry with slim profit margins and where maybe one out of 10 titles released each year is a bestseller. If a tool comes along that can reliably increase that just to one in five, it would be an explosive change. What might that look like, and how would that impact writers?
What if genre-specific algorithms replaced slush pile readers? You upload a manuscript at a publisher’s website and get an instant score and notification whether a human acquisitions editor will read it based on their screening criteria (e.g., strong midlist up to bestseller). If you don’t qualify past this baseline, you get a complete report and notes on where to improve.
What if algorithms surpassed agents as one of the “gatekeepers” in the industry? In such a future, agents might still screen manuscripts to capture great books to sell, but with a direct route to an acquisitions editor available, some authors may instead rely on agents for contract consulting and deal negotiation.
What if editors used these algorithms as part of their editing? Editing back and forth between authors and editors might become more focused if editors have quantitative tools on which to base some of their guidance.
What if these same algorithms became standardized, whether strictly or loosely, and available as plug-ins for editing software? Some authors may then feel confident they have a strong seller and decide to publish it themselves.
The result would help authors tune their work and make better decisions about where to place it, while more or less standardizing big-publisher fiction around a quality baseline defined by quantitative craft metrics. It likely wouldn’t replace editors, as a novel that is quantitatively solid on craft may still not be a good story, or at least not one a publisher wants to publish. Instead, it might supplement and to an extent streamline the vetting process. It could, however, further separate commercial fiction from other fiction in the market, particularly literary fiction that intentionally breaks the rules of craft for artistic effect. Arguably, in this scenario, good editors would be even more vital, as they would remain the gatekeepers for ultimate art and quality in a partly automated process.
As with computers and fiction writing, we are entering an era where algorithms may significantly augment human capability, in this case editing. The results could range from more empowered writers to new ways publishers vet their slush pile, with various risks and rewards.
What do you think?