Google is developing AI technology, code-named Genesis, that can take in information (details of current events, for example) and generate news content. It has approached publications such as The New York Times, The Washington Post and News Corp, which owns The Wall Street Journal. It's unclear whether Google is pitching the tool for newsgathering or proposing a collaboration on its development.
On July 20, a Google spokesperson told CNET that "these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles," yet the Times' description of the tool suggests otherwise. According to the Times, publishing executives who saw Google's pitch for Genesis found it "unsettling." Per the spokesperson, the program can apparently automate some tasks by offering "options for headlines or different writing styles."
A little over seven months ago, I wrote that ChatGPT would not be coming for journalism jobs (at least not anytime soon) because it simply cannot do what a journalist does. OpenAI's flagship AI tool is a word organizer, not a truth gatherer or creative storyteller.
It can't report from a crime scene or interview a doctor, a teacher or anyone else. It also isn't trained on up-to-date data. Though I focused on ChatGPT's journalistic shortcomings, the argument applied to large language models and generative AI broadly at the end of 2022. Their flaws were too numerous, and their hallucinations too common, for them to pose a serious threat, I reasoned.
Genesis has me rethinking that assessment. Not because I believe ChatGPT, large language models or generative AI have acquired those qualities and can competently do the work of a journalist; they can't. But that doesn't seem to matter. The tech giants have built the tools anyway. They're not intended to replace journalists, but (based on the limited information we have) their capabilities suggest they could.
Maybe it’s because I’m tired of fitting AI-sized pegs into human-sized holes, or because I just finished watching Oppenheimer (definitely the latter), but the long-term consequences of developing a tool like Genesis feel bigger than anything we’ve seen from generative AI thus far, with resounding ramifications for the people who will be most affected by the misuse or abuse of that tool: you — the reader.
The Oppenheimer analogy feels disturbingly apt here. When scientists discovered how to split the atom, it became clear the reaction could power a devastatingly destructive bomb. Trinity, the first test of such a weapon, took place in the New Mexico desert on July 16, 1945. The bomb went off; it worked.
Less than a month later, two atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki. I don't want to minimize the atrocities of the A-bomb, nor do I want to equate the capabilities of generative AI with those catastrophes. I merely want to underscore how rapidly we can move from theory to practice with little regard for the long-term effects. It's concerning.
Google thinks Genesis can help journalists
Genesis, as we currently understand it, is incapable of producing news. It takes one aspect of journalism, the writing, and makes it look like the whole dang show. It's not. To even suggest as much is a disservice to journalists of all stripes, and it should worry readers who recognize that important stories are more than words in a sequence. Journalism entails sourcing, verifying and fact-checking, as well as spending hours on the phone and years in documents.
Yes, Google thinks Genesis can help journalists, but the early description of the tool suggests it behaves like an aggregation tool. It can swiftly assemble something that resembles a news article. But if it can only remix past reporting, wouldn't we be better served by an AI that simply pointed us to the original stories, with all of the sourcing, verifying and fact-checking already done? Isn't this just replicating and amplifying the same problems that human-produced aggregation already has?
This isn't a sanctimonious rant about journalists being the arbiters of truth, up in our standing-desk castles, infallible geniuses with all-knowing wisdom. We are not, and we cannot be. We're only human. And the internet is already awash in aggregation. It serves as the foundation for entire websites. Every minor detail, gathered for content. For example, nearly every remark Cillian Murphy made on the Oppenheimer press tour has made its way onto the internet in some form or another. Sites take one piece of news, remix and republish it, and compete for eyeballs and Google rankings.
As a result, a slew of similar-sounding articles floods the web, TV and social media. Twitter, uh, "X" users will drop "5 key takeaways" lifted from someone else's tweet, and TikTok creators will share videos they didn't make about topics they didn't research, without citing where that material came from or even verifying its accuracy. This slop is, at least in part, responsible for blurring the definition of journalism. We mistake slop for substance because we see so much of it.
Rise of AI in newsrooms unstoppable
Add generative AI to the mix and we could increase the slop. More worryingly, we introduce danger. Timeliness matters. Accuracy matters. Readers deserve and expect both. AI may deliver the former, but what about the latter? It feels like we're standing at the edge of the Trinity test site, watching the bomb go off and hoping it's never used in a potentially catastrophic way.
The rise of AI in newsrooms is unstoppable. Organizations are exploring how generative AI might be used wisely and effectively as part of their toolkit, alongside human writers and editors. CNET has developed guidelines, based on direct experience, for when generative AI is a viable tool for journalists to use and when it isn't. You can find our complete AI policy here.
Our experience has been echoed elsewhere. A few weeks ago, Gizmodo's io9 published an AI-generated, error-ridden rundown of Star Wars films and TV shows. These incidents serve as cautionary tales about the rush to make AI a news provider.
Defend yourself against Google Genesis AI
Google says it's testing Genesis AI-enabled tools "in partnership" with publishers, which means there's still time for publishers to push back. If the pushback is as forceful as the unease one anonymous executive expressed to the Times, perhaps Genesis won't get used at all. Fortunately, unlike the Trinity test, these AI trials aren't happening in a desolate stretch of New Mexico desert.
It's not too late. The ground rules can be set right now: AI should not be used to generate news articles about current events. It shouldn't have that capability at all.
The only reason we know anything about Genesis at all is because journalists confirmed its existence by talking to insiders with knowledge of the confidential product. They prodded Google into going on the record and acknowledging it exists. They provided context by interviewing experts on the potential benefits and drawbacks of such a technology. And then they wrote it all down.
Humans broke the story of Genesis. Genesis never could have.