My application for membership in the Science Fiction & Fantasy Writers of America has been approved. It’s fantastic to be a part of an organization of such great writers!
I’m very happy to announce that my next major release is coming from Orbit in August 2019. In this novel, a brother and sister become child soldiers fighting on opposite sides of a second American civil war. The novel is about their struggle and the lives of the people they come into contact with. It’s prescient and powerful. I hope you’ll check it out!
Here’s a description:
After his impeachment, the president of the United States refuses to leave office, and the country erupts into a fractured and violent war. Orphaned by the fighting and looking for a home, 10-year-old Hannah Miller joins a citizen militia in a besieged Indianapolis.
In the Free Women militia, Hannah finds a makeshift family. They’ll teach her how to survive. They’ll give her hope. And they’ll show her how to use a gun.
Hannah’s older brother, Alex, is a soldier too. But he’s loyal to the other side and has found his place in a militant group of fighters who see themselves as the last bastion of their America. By following their orders, Alex will soon make the ultimate decision behind the trigger.
On the battlefields of America, Hannah and Alex will risk everything for their country, but in the end they’ll fight for the only cause that truly matters — each other.
In 1951, Alan Turing said, “At some stage…we should have to expect the machines to take control.”
If you’re not familiar with Turing, he was highly influential in formalizing the concepts of computation, algorithms, and artificial intelligence (AI). He helped crack the Enigma code, which arguably shortened World War II. In 1936, he described the Turing machine, considered a model of a general-purpose computer.
Today, his prediction of machines replacing humans appears sharper than ever. According to a 2013 Oxford Martin School study, 47 percent of U.S. jobs are highly vulnerable to automation.
Turing also reportedly said, “We may hope that machines will eventually compete with men in all purely intellectual fields.”
Would that include fiction writers?
For most people, automation means one of three things. The machine augments or supports human activity, making us better. The machine replaces a human in an activity. Or the machines go on a rampage and end our pointless human existence.
For me, the second one is the most frightening scenario.
We’re already seeing technology augmenting writers. Digitization and the Internet have democratized publishing, resulting in amazing new opportunities for authors to research, write, edit, and publish.
But could machines actually compete with us?
In some writing fields, it’s already happening.
Natural language generation (NLG) machines are algorithms that compile words to build sentences in a logical order. The simplest ones write pieces like form letters and horoscopes. More sophisticated NLGs are writing sports articles, earnings reports, and financial reports.
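The simplest form of NLG described here, filling slots in a template, can be sketched in a few lines of Python. The template and word lists below are invented for illustration, not taken from any real horoscope generator:

```python
import random

# Template-based NLG: the machine "writes" by filling slots with
# words drawn from curated lists.
TEMPLATE = "Dear {name}, your {sign} forecast: {mood} energy surrounds your {area} today."

MOODS = ["restless", "harmonious", "bold"]
AREAS = ["career", "relationships", "finances"]

def write_horoscope(name, sign, seed=None):
    rng = random.Random(seed)  # seed makes output repeatable
    return TEMPLATE.format(
        name=name,
        sign=sign,
        mood=rng.choice(MOODS),
        area=rng.choice(AREAS),
    )

print(write_horoscope("Reader", "Aries", seed=1))
```

Swap in templates for earnings reports or box scores and feed in real data, and you have the skeleton of the more sophisticated systems mentioned above.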
Meet Philip Parker, possibly the world’s most prolific author. He’s produced some 1 million books using NLG, mostly covering narrow nonfiction topics, with some 100,000 listed on Amazon at one time or another.
These books appear to pass both the soft and hard versions of Turing’s test as it applies to AI writing. Not only may these books convince readers they were written by a human, but readers are also willing to pay for that work. Still, it’s really compiling, not writing. Surely, AIs can’t imitate human creativity?
A team at the University of London wanted to see if a computer could be programmed to imagine, resulting in the What-If Machine. The machine produced numerous what-if scenarios for five fiction genres, from Kafkaesque to Disney. While the ones I read aren’t likely to become Hollywood films anytime soon, a few were fun, like in the Disney section, “What if there was a little atom that lost its neutral charge?”
What about thinking figuratively? Could a computer be programmed to do that? A researcher at University College Dublin created Metaphor Magnet, an algorithm that culls the Internet for stereotypes and then inverts or contrasts them to create metaphor and irony. The results are often absurd and surprisingly funny. Other absurdist AI writers are currently writing cynical fortune cookies or, like Inspirobot.me, random inspirational quotes.
The application of NLG to fiction has yielded other playful results, as evidenced by National Novel Generation Month (NaNoGenMo), which gave us works like Twide and Twejudice (Pride and Prejudice with all dialogue replaced by Twitter posts) and 60,000 Meows (Moby-Dick rewritten in a lexicon of meows). These experiments amount to a virtually new art form, often producing fun and surprising results.
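The substitution trick behind a project like 60,000 Meows can be sketched in a few lines. The exact rule that project used is an assumption on my part; here each word becomes a “meow” stretched to roughly the original word’s length:

```python
import re

def meowify(text):
    # Replace each word with a "meow" whose middle "o" run is
    # stretched so the meow roughly matches the word's length.
    def to_meow(match):
        word = match.group(0)
        stretch = max(1, len(word) - 3)  # "me" + o*stretch + "w"
        meow = "me" + "o" * stretch + "w"
        return meow.capitalize() if word[0].isupper() else meow
    return re.sub(r"[A-Za-z]+", to_meow, text)

print(meowify("Call me Ishmael."))  # → Meow meow Meoooow.
```

Punctuation and capitalization survive, which is much of what gives these remixes their odd readability.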
But NLG isn’t real fiction writing. Is that possible for a computer?
Meet Scott French, a self-taught programmer who in the 1980s spent $50,000 and eight years developing Hal, a Mac-based AI with which he co-wrote a novel in the style of Jacqueline Susann, bestselling author of novels like Valley of the Dolls. Their novel, Just This Once, was published in 1993 and sold 70,000 copies, igniting a lawsuit over whether one could write and sell a book imitating another author’s style. French and the Susann family ended up settling out of court, splitting the profits. By French’s account, Hal contributed all of the plot, theme, and style; of the prose itself, French wrote about 10 percent, Hal 25 percent, and the rest was collaboration.
In 2008, a Russian team at Astrel-SPb did something similar, creating a program that rewrote Leo Tolstoy’s Anna Karenina in the style of Japanese author Haruki Murakami. The program wrote the novel in three days based on dossiers of key characteristics: character appearance, vocabulary, psychology, and others.
Then there’s this: If you’re tired of waiting for George R.R. Martin to write the sixth book in the A Song of Ice and Fire series (the basis for Game of Thrones), you can read what an AI came up with. A software engineer fed 5,000 pages of Martin’s series into an AI to write the first few chapters. The AI used a recurrent neural network, which learns patterns from past sequences of text and uses them to make predictions.
Among the AI’s predictions: Jaime kills Cersei, Jon rides a dragon, Varys poisons Daenerys, and Sansa turns out to be a Baratheon. Some of the predictions are a bit off, however, such as Ned Stark being alive. The actual writing is crude, but what’s interesting here is the AI making predictions based on past events.
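A full recurrent neural network is beyond a blog post, but the core idea, predicting the next word from the words that came before, can be sketched with a much simpler Markov chain. This is a stand-in for the engineer’s actual method, using a made-up training string:

```python
import random
from collections import defaultdict

def train(text):
    # Map each word to the list of words observed to follow it.
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=None):
    # Repeatedly predict a next word from the observed followers.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

sample = "winter is coming and winter is long and dark"
model = train(sample)
print(generate(model, "winter", seed=2))
```

Feed in 5,000 pages instead of one sentence, and even this crude predictor starts echoing an author’s habits; an RNN does the same thing with far more context and nuance.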
In 2016, a novel partly written by a computer made it past the first round of screening for the Hoshi Shinichi Literary Award. Written with an AI created by Hitoshi Matsubara, a professor at Future University Hakodate, The Day a Computer Writes a Novel was one of 11 submissions, out of 1,450 total, at least partially written by a computer program. An AI getting past the first round of a major literary prize? Clearly, computers are getting better at this game.
These are just some of the efforts being made to get machines to think creatively and tell stories. Others include reactive stories, where a machine responds to your state of arousal and adjusts the story automatically for you; poetry, where AI is producing interesting work; Scheherazade, the Georgia Institute of Technology’s algorithm that learns narrative intelligence from online crowds to generate stories; Quixote, which teaches “value alignment” and an ethical system to machines by training them to read stories, learn acceptable sequences of events, and thereby understand how to behave in human society by seeing themselves as the protagonist; and MIT Media Lab’s AlterEgo headset, which translates thoughts into electrical signals and might one day allow writers to literally put thoughts to paper.
The bottom line is scientists consider a literary AI a major challenge for artificial intelligence, and they’re exploring it. The result may be two things. Tools we can use to augment ourselves as writers. Or new products that compete against us.
Let’s turn on our own what-if machine…
What if AI tools generated plot outlines, character bios, worlds, processes, metaphors, what-ifs, etc.? Imagine an AI tool suggesting a rewrite of all work-related dialogue by a doctor in your story to be medically accurate. Imagine a tool designing a planet and alien civilization from a few inputs. The list goes on.
What if an AI wrote a bare-bones first draft of a novel based on an outline, character arcs, characters, and other author input? Or an initial outline?
What if an AI analyzed a series and continued writing novels for it? Similarly, what if an AI analyzed an author and wrote novels based on that author’s work?
Would all this be the death of art? Or the birth of something new?
Big technological changes are often disruptive. The good news is AI literary competition is still a ways off, and until then, writers may benefit from some terrific new tools and other forms of augmentation.
In the long term, however, competition may arrive, perhaps starting with more formulaic fiction markets. This likely wouldn’t replace humans so much as change their role, perhaps to brand managers who develop a platform around ideas and work with an AI to write books, providing a likeable human face for the brand.
At some point, ethical considerations arise. Would a highly augmented writer actually be writing, or would they be the equivalent of a doped Olympic athlete? Or would this type of augmentation become normalized after a stormy transition, the way digital art was resisted at first as not being “real art” but then accepted?
As a fiction writer, I regard all this as both a source of more than a little anxiety but also hope for opportunity. Change doesn’t happen overnight, and it can be beneficial. Long before AI offers viable literary competition, it may gift us with tools that can help us produce better fiction.
Scary or exciting? Threat or opportunity? What do you think?
In Part 2, I’ll discuss how computerization may impact fiction editing.
In Part 1, we talked about how artificial intelligence is already becoming sophisticated enough to make predictions and write stories, paving the way to a future where computers further augment and possibly eventually compete with human fiction writers.
So far, we’ve covered writing. What about editing?
Writers are already heavily augmented with numerous tools that can help us analyze word frequency, repetition, adverbs, dialogue tag overuse, sentence length, and more. We can analyze readability and identify the words and sentences affecting it. We can analyze character agency and a story’s emotional pitch. And, more recently, we can even benchmark against genre averages and all bestselling fiction. Examples of editing software range from the cheap and simple, like Hemingway App, to the more sophisticated and robust, like AutoCrit.
You can even do some interesting analysis at home without fancy software, just using Microsoft Word’s readability statistics available in the grammar and spelling checker. Simply highlight a section such as a scene or chapter, run the checker, and you’ll be told word count and average word, sentence, and paragraph length. You’ll also be told the percentage of sentences in passive voice, along with the Flesch Reading Ease and Flesch-Kincaid Grade Level. Graph the Flesch Reading Ease by scene or chapter in Excel, and you’ve got a cardiogram of your novel’s pulse, showing where it speeds up or slows down.
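You don’t even need Word for the score itself; the Flesch Reading Ease formula is public: 206.835 − 1.015 × (words ÷ sentences) − 84.6 × (syllables ÷ words). Here’s a sketch using a crude vowel-group heuristic for syllables (real tools use pronunciation dictionaries, so expect small differences from Word’s numbers):

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: higher scores mean easier reading.
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(round(flesch_reading_ease("The ship sailed at dawn. The sea was calm."), 1))
```

Run it per scene or chapter and you can build the same “cardiogram” without leaving your text editor.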
A very interesting application of software to editing is analysis of a story’s emotional arc. Fiction might be described as bad things happening to flawed people. What happens to them, and how they react to it, defines the emotional arc of a story, which can be studied. Author Kurt Vonnegut did just that. After serving in World War II, he attended the University of Chicago, where he presented his master’s thesis in anthropology with a very simple premise: “Stories have shapes which can be drawn on graph paper, and … the shape of a given society’s stories is at least as interesting as the shape of its pots or spearheads.”
Vonnegut’s thesis was rejected, but he was right. Years later, researchers at the University of Vermont and University of Adelaide hypothesized that certain story arcs are more meaningful, and analyzed more than 1,300 works in the Project Gutenberg fiction collection. Their algorithm assigned emotional ratings to words like “death” and “love” and “laugh” to plot each story’s emotional rhythm. They identified six basic story shapes as most popular based on number of downloads from the collection. These include rags to riches (emotional arc rises over the course of the story), riches to rags (falls), man in a hole (falls then rises), Icarus (rises then falls), Cinderella (rises, falls, then rises again), and Oedipus (falls, rises, then falls again). Riches to rags and man in a hole stories are the most popular.
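The Vermont and Adelaide approach can be sketched in miniature: score each word with a valence dictionary, then average over a sliding window to smooth the arc. The tiny dictionary below is invented; the actual study used a large crowd-rated lexicon (labMT) and much longer windows:

```python
# Invented toy valence dictionary; real studies rate thousands of words.
VALENCE = {"love": 3.0, "laugh": 2.5, "hope": 2.0,
           "death": -3.0, "war": -2.5, "loss": -2.0}

def emotional_arc(text, window=5):
    words = text.lower().split()
    scores = [VALENCE.get(w, 0.0) for w in words]
    # Average valence over a sliding window to smooth the arc.
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

story = "war and death brought loss but then hope and love and laugh"
arc = emotional_arc(story, window=5)
print(arc[0], arc[-1])  # starts negative, ends positive: man in a hole
```

Plot the resulting list and the six shapes above (rises, falls, and their combinations) become visible as curves on a graph, exactly as Vonnegut proposed.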
For a fiction author, having access to this type of analysis could be a powerful asset. Remember, if you’re creating empathetic characters, the reader goes on the same emotional journey. Is my story mostly positive or negative in overall mood? How does my emotional arc relate to my plot architecture and character arcs? Am I telling the emotional story I want to get across, with well-timed upset and catharsis?
In 2016, Jodie Archer and Matthew Jockers came out with The Bestseller Code. The authors created an algorithm that analyzed 5,000 novels and rated them for likelihood to be bestsellers, claiming an 80-90 percent predictive accuracy. The results confirmed what many writers already know: Colloquial style, active verbs, strong protagonist agency, fast-moving and rhythmic plot, and topical focus (ideally with two major contrasting topics, such as crime and domesticity in Gone Girl) are all critical to commercial fiction. However, their method was revolutionary in that they actually quantified it using an algorithm.
Archer and Jockers now sell consulting services, part of which involves running a manuscript through their algorithm to produce a report. They don’t produce a number score (for legal reasons) but instead use a star system. The report also includes interesting character profiles showing key exhibited traits based on keywords. My own experience with them was positive, and their analysis paired well with the excellent guidance from my editor, who went beyond what the algorithm could do.
As more powerful algorithms and tools become available that are at least partly successful in predicting bestsellers, we may see some big changes in book publishing. This is an industry with slim profit margins and where maybe one out of 10 titles released each year is a bestseller. If a tool comes along that can reliably increase that just to one in five, it would be an explosive change. What might that look like, and how would that impact writers?
What if genre-specific algorithms replaced slush pile readers? You upload a manuscript at a publisher’s website and get an instant score and notification whether a human acquisitions editor will read it based on their screening criteria (e.g., strong midlist up to bestseller). If you don’t qualify past this baseline, you get a complete report and notes on where to improve.
What if algorithms surpassed agents as one of the “gatekeepers” of the industry? In such a future scenario, agents might still screen manuscripts to capture great books to sell, but with a direct route to an acquisitions editor, some authors might instead rely on agents for contract consulting and deal negotiation.
What if editors used these algorithms as part of their editing? Editing back and forth between authors and editors might become more focused if editors have quantitative tools on which to base some of their guidance.
What if these same algorithms became more or less standardized and available as plug-ins for editing software? Some authors may feel confident they have a strong seller and decide to publish it themselves.
The result would help authors tune their manuscripts and make better decisions about where to place their work, while more or less standardizing big-publisher fiction around a quality baseline defined by quantitative craft metrics. It likely wouldn’t replace editors, as a novel that is quantitatively solid on craft may still not be a good story, or at least not one a publisher wants to publish. Instead, it might supplement and to an extent streamline the vetting process. It could, however, further separate commercial fiction from other fiction in the market, particularly literary fiction that intentionally breaks the rules of craft for artistic effect. Arguably, in this scenario, good editors would be even more vital, as they would remain the gatekeepers of art and quality in a process that would be partly automated.
As with computers and fiction writing, we are entering an era where algorithms may significantly augment human capability, in this case editing. The results could range from more empowered writers to new ways publishers vet their slush pile, with various risks and rewards.
What do you think?
I’m speaking at When Words Collide, a Calgary-based convention for fiction writers, Saturday, August 11 at 2PM. If you’re in Calgary, I hope you’ll come out and see my presentation, which talks about how computers are increasingly augmenting and may eventually compete with writers themselves.
It’s a thought-provoking presentation, which you can download here.
Copies of my novel ONE OF US will be available in the dealer’s room and at the author signing at 8PM Saturday night.
I was happy to be invited to contribute to author John Scalzi’s Big Ideas blog, in which authors describe the big idea behind their works. For ONE OF US, I was able to share the nonfiction idea behind the novel, which started as a misunderstood monster story and became a much more ambitious examination of prejudice.
I hope you’ll check it out here.