For Californians like me, earthquakes are an ever-present but quiet threat that looms somewhere in the deep recesses of our minds. We grew up bringing earthquake kits to school and sheltering under desks and in doorways during drills. Then, at the school year’s end, we scarfed down those months-old granola bars for snack time, not survival. Year after year, well into my own adulthood, we’ve been told The Big One could strike at any time.
For the past couple of decades, whenever I’ve been awoken by something shaking, I’ve immediately pulled up the U.S. Geological Survey website to see if our nation’s finest seismologists have logged anything. These maps are clunky and sometimes difficult to understand. And frankly, it’s just a readout of numbers, like this recent entry: “M 2.6 – 17 km NW of Petrolia, CA.” (Wait, where’s Petrolia again?)
Since 2012, the Los Angeles Times’s Quakebot, programmed to focus only on quakes above a certain magnitude in California, has translated these automated, largely numerical readouts into journalistic English understandable to humans: “A magnitude 4.4 earthquake was reported at 2:44 a.m. Monday, 54 miles from Avalon, according to the National Oceanic and Atmospheric Administration’s Tsunami Warning System.”
The news briefs continue with perfunctory information: where the tremor happened, whether similar quakes have happened nearby and how many quakes typically occur in the region. This is the model for how artificial intelligence can help journalism: give it a specific task, let the audience know what’s going on and continue the reporting process, draft by draft and iteration by iteration.
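For the technically curious, the mechanics behind a bot like this are simple enough to sketch. What follows is a minimal illustration in Python, my own guess at the pattern rather than the Times’s actual code: it reads the USGS’s public GeoJSON feed, applies a magnitude-and-location filter (the threshold here is invented), fills in a sentence template and leaves publication to a human editor.

```python
import json
import urllib.request
from datetime import datetime, timezone

# Public USGS GeoJSON feed of all earthquakes detected in the past hour.
FEED_URL = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"
MIN_MAGNITUDE = 3.0  # illustrative cutoff; the Times's real threshold may differ

def fetch_quakes():
    """Download the latest quake list from the USGS feed."""
    with urllib.request.urlopen(FEED_URL) as resp:
        return json.load(resp)["features"]

def draft_brief(quake):
    """Turn one GeoJSON feature into a plain-English draft sentence."""
    props = quake["properties"]
    when = datetime.fromtimestamp(props["time"] / 1000, tz=timezone.utc)
    return (
        f"A magnitude {props['mag']:.1f} earthquake was reported at "
        f"{when.strftime('%H:%M UTC on %A')}, {props['place']}, "
        "according to the U.S. Geological Survey."
    )

for quake in fetch_quakes():
    props = quake["properties"]
    # Mirror Quakebot's parameters: California only, above a magnitude floor.
    if props.get("mag") and props["mag"] >= MIN_MAGNITUDE and "CA" in (props.get("place") or ""):
        # Crucially, the draft is queued for a human editor; nothing auto-publishes.
        print("DRAFT FOR EDITOR REVIEW:", draft_brief(quake))
```

The filter and the template are the whole trick: the software never decides what matters, it only converts a structured record into readable prose and hands it off.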
This little piece of software debuted over a decade ago, long before both the utopian fantasies and the dystopian fever dreams about AI’s effect on journalism began infecting the discourse of our profession. As an early form of artificial intelligence-fueled journalism, Quakebot has become a fast and functional tool that provides clear and concise information about potential disasters. It is the quintessence of public service journalism, a literal lifesaver.
Each formulaic piece from Quakebot closes with the same disclosure: “This story was automatically generated by Quakebot, a computer application that monitors the latest earthquakes detected by the USGS. A Times editor reviewed the post before it was published.”
And this polite kicker is what really stands out: A Times editor reviewed the post before it was published.
In other words, a human is being kept in the loop. That editor can apply their professional training while the machine handles what can be the nuts-and-bolts drudgery of reporting on breaking news. A generation ago, this is where cub reporters might have started. But now, a machine can give a newsroom a head start by automating some of the very basics: who, what, when, where, why.
This kind of disclosure, even one for a simple piece of software that began running over a decade ago, provides a critical reassurance to the public that this news can be trusted. After all, so much of the web these days is filled with SEO-fueled nonsense. We now live in a world where even the voice of the president of the United States can be convincingly faked.
It is critical that newsrooms teach their reporters how to use AI and other automated tools responsibly and for specific purposes. Media outlets should likewise be clear-eyed about where and under what circumstances they will use such tools. As at the Los Angeles Times, using AI or other types of automation should serve a specific purpose, whether that is getting straightforward news of an earthquake out quickly or turning around the latest corporate earnings report.
It’s likely that very soon, if journalists as a whole act wisely, with neither undue fear of the technology nor overreliance on it, using AI in journalism will become no more controversial than writing on a computer instead of with pen and paper.
We’re already starting to see some media outlets move in this direction, and some have already formalized declarations describing what they will and won’t use the technology for.
Take the San Francisco Chronicle and other Hearst publications, which now have a statement telling their audience that they “will, at times, use Generative AI” to help writers and editors with the “assembling of content before it is published,” including drafting SEO terms, keywords and related copy.
Hearst says such technologies might also be used to help research a topic, surface results from the newspaper’s archives or create alternate versions of a story, a summary of an investigative piece, for example.
“However, no content from our newsrooms will be published without an editor’s review,” Hearst notes.
Wired, a vanguard of tech journalism, makes its perspective crystal clear: “We do not publish stories with text generated by AI, except when the fact that it’s AI-generated is the whole point of the story.”
The venerable publication also makes explicit something all of us in the industry have no doubt intuited by now: “In addition, we think someone who writes for a living needs to constantly be thinking about the best way to express complex ideas in their own words.”
Soon, news organizations will be able to hand even more of journalism’s drudgery to automated systems.
Jeremy Gilbert, the head of the Knight Lab at Northwestern University, is already working on a system that uses AI to analyze city council meetings, identifying the primary speakers and the main topics of conversation.
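Gilbert’s system isn’t public, but the shape of the task can be gestured at with a toy sketch. The snippet below is my own illustration, not the Knight Lab’s approach: it assumes a transcript already labeled by speaker and uses simple counting, where a real pipeline would rely on speech-to-text, speaker diarization and genuine topic modeling.

```python
from collections import Counter

# Toy transcript as (speaker, utterance) pairs. A real pipeline would produce
# this from meeting audio via speech-to-text and speaker diarization.
transcript = [
    ("Councilmember Lee", "I move we fund the pothole repair program on Main Street."),
    ("Mayor Ortiz", "The budget committee flagged the pothole repair costs."),
    ("Councilmember Lee", "Residents filed dozens of pothole complaints this year."),
    ("Public commenter", "Please also consider bike lane safety near the school."),
]

STOPWORDS = {"i", "we", "the", "on", "a", "of", "this", "also", "near", "please"}

# Primary speakers: who held the floor most often.
speakers = Counter(speaker for speaker, _ in transcript)

# Main topics: crude keyword counts stand in for real topic modeling.
topics = Counter(
    word.strip(".,").lower()
    for _, utterance in transcript
    for word in utterance.split()
    if word.strip(".,").lower() not in STOPWORDS
)

print("Primary speakers:", speakers.most_common(2))
print("Main topics:", [word for word, _ in topics.most_common(5)])
```

Even this crude tally surfaces “pothole” as the meeting’s dominant subject and Councilmember Lee as its most frequent voice, the kind of starting point a reporter could then chase down.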
Like Quakebot, such tools could help reporters add contextual information about a specific hearing or gain some insight into where its heart might be. In other words, they can provide a shortcut to a starting point (when and where did an earthquake happen? what was said during a four-hour city council meeting?), but that alone isn’t an act of journalism.
Because, after all, a bot won’t know when The Big One hits, or what will need reporting afterward.
Cyrus Farivar is an Oakland, California-based senior writer at Forbes, and has previously reported for NBC News, Ars Technica, The Economist, The New York Times, Slate, NPR, CBC, and many others. He is always happy to nerd out about cargo bikes, beer, Star Trek, and public records requests.