Two journalists talk to the bots — who talk back — about the pros and pitfalls of AI

"Once we accept that AI is flawed, we can use it responsibly, even relish in what it has to offer." ~ journalism professor Casey Frechette

Nieman Storyboard contributor Chip Scanlan and Casey Frechette worked together for more than a decade at The Poynter Institute, where they created online courses in reporting and writing. Casey is an associate professor and chair of the journalism department at the University of South Florida.

Chip Scanlan: Hi Casey, I’ve had a great time with you on Google Meet over the last several weeks playing and working with AI and talking about its implications for journalists and journalism. Why does the topic interest you?

Casey Frechette: Hi Chip. It’s been great exploring this with you. I got interested in AI last spring, when I had what I’d call a “whoa” moment. I started playing with ChatGPT in earnest, feeding it different journalistic things. I gave it a bunch of news stories and asked it to suggest headlines. I pasted an article from The Atlantic and one from The New Yorker and asked it to tell me which was which. I had it summarize stories in two sentences. And I asked it to review several pieces of breaking news for signs of bias, both obvious and subtle. I was blown away by how quickly it performed these tasks and by how useful the results were. The experience felt a bit magical, akin to going online for the first time and feeling awestruck by the possibilities (minus the crunchy modem-connection noises). But I’m curious: What’s drawn you to this topic?

Chip: I guess I’ve always been an early adopter. As soon as news broke on Nov. 30, 2022, that OpenAI had launched ChatGPT, I signed on, first as a free user and then with a $20-a-month subscription so I could have access to GPT-4, its most advanced system. I haven’t used it as broadly as you have, or focused it on journalism as I’d expect an inspiring J-prof like yourself to do. I’m writing a historical novel set during World War II and have used it primarily for research. Its performance has been impressive, although I must concede that a Google search can answer many of my questions. What sets ChatGPT apart is the chat part: a dialogue between you and a super-fast computer, one trained by scraping large swaths of the internet for content. Follow-up questions are the lifeblood of the journalist, and I ask the chatbot a lot of them. The most important one is “What are your sources?”
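For the technically curious, here’s what that question-and-follow-up rhythm looks like when scripted rather than typed into a chat window. This is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the research question, the model name and the prompts are illustrative, not a transcript of my actual sessions.

```python
# A minimal sketch of a chatbot Q&A with a journalistic follow-up.
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Start with a research question (illustrative, not from a real session).
history = [{"role": "user", "content": "What was daily life like in London during the Blitz?"}]
reply = client.chat.completions.create(model="gpt-4", messages=history)
answer = reply.choices[0].message.content
print(answer)

# The journalist's reflex: append the answer to the running conversation,
# then press the follow-up question that matters most.
history.append({"role": "assistant", "content": answer})
history.append({"role": "user", "content": "What are your sources for those claims?"})
followup = client.chat.completions.create(model="gpt-4", messages=history)
print(followup.choices[0].message.content)
```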

In its staff guide to AI, Wired magazine describes AI’s value to journalists: “AI software can’t call sources and wheedle information out of them, but it can produce half-decent transcripts of those calls, and new generative AI tools can condense hundreds of pages of those transcripts into a summary.”

But you and I have used it together to see how it operates around the discipline of journalism. Want to take it from there?

Casey: Isn’t it amazing how the cost of technology plummets over time? I recall learning that early digital watches sold for upwards of $1,000 when they first hit shelves; a nightlight and a stopwatch were the extent of their “smart” features. I guess we could call it the “early adopter tax,” but, like you, I find the allure of exploring the frontiers of innovation too enticing to pass up.

Once we accept that AI is flawed, we can use it responsibly, even relish what it has to offer.

Casey Frechette

Using a chatbot as a kind of research assistant for your novel is a great idea. The interactive style you described certainly translates well to the kind of critical engagement that makes journalists so good at finding, organizing and presenting information. Applied to this realm, the possibilities are really quite staggering.

There’s a lot of focus, understandably, on the “generative” side of AI. And we’ve all heard the horror stories of news organizations that have dabbled with “robot-assisted” stories that sound cartoonish or make embarrassing errors.

What’s more interesting to me, though, are all the other ways AI could change news production and consumption, from analyzing vast, complex datasets, as The New York Times recently did in a report on Israeli bombings in Gaza, to generating different story variations for each reader, each calibrated to exacting personal preferences.

I think there are a few lessons here, chief among them: Both pitfalls and possibilities abound.

Chip: We’ve spent a lot of time testing the boundaries. I’m thinking of the time I took notes from a newspaper story of mine, one I had reverse-engineered to teach writing to journalists. First, I pasted the text into the chatbot with a request to answer questions about the story, including my favorite: What’s the story really about, in one word? In seconds, it replied, “Empathy,” which totally nailed the theme. I was impressed.

But as you’ll recall from the other end of our Google Meet, when we asked it to start turning the notes into narrative, the problems began.

“Writing stories is another matter,” a staff guide for Wired magazine concluded. “It turns out current AI tools are very good at churning out convincing (if formulaic) copy riddled with falsehoods. A few publications have tried — sometimes with disastrous results.”

I’ve read that AI scientists call that “hallucinating” or “confabulation.” My profane translation as a journalist: It began to make shit up. It made up dialogue. It inserted facts that weren’t in the notes.

When you and I challenged it on these points, it apologized “for the oversight.” It said the dialogue was “a creative interpretation based on the situation outlined in your notes, as there was no direct transcript of the conversation.”

Not ready for journalism primetime, we agreed. In a longer version of our e-conversation, you told me that the current iteration of AI tools is “not built for journalism” but, with the right cautions, can still be useful. Those cautions include the warning that we can’t take anything for granted, and must question the sourcing of everything, especially in high-stakes stories.

Your take, as a journalism professor and technologist, holds a lot of weight with me. So let me ask: What do you see as other pitfalls of AI when it comes to reporting and writing stories? How might it be of use to writers and editors? And what would someone have to build so journalists can use this technology in the most intelligent and ethical way possible? (Hint: We might call it “Defoe,” after the early 18th-century writer whom ChatGPT identified as one of the world’s first journalists; historians might disagree.)

Casey: You’ve hit on what I see as the key concern, and that’s the fallibility of the technology. It makes stuff up. It gets stuff wrong. It leaves out important caveats and context. And it does all of this with the confidence of, to paraphrase a popular saying, a person putting on lipstick without a mirror.

In a similar vein, we all need to be concerned with the biases lurking in the datasets that AI models are trained on. There’s important work ongoing in the realm of responsible AI to combat this, but reporters and editors need to be aware that many AI technologies have amplified, not diminished, the unfairness and misrepresentation that ail society and undermine the journalistic mission.

Another pitfall is an extension of the conventional wisdom that phones made reporters lazy because they didn’t have to go see people in person anymore. And then email made them lazy because they didn’t have to pick up the phone. And social media made them lazy because they didn’t have to send email. And now, I suppose, AI might make them lazy because they don’t have to hop on their DMs. I tend to think this concern, and its many permutations over the years, grossly underestimates journalists’ innate resourcefulness.

I do worry, though, that the “smarter” technologies become, the more tempting it is to outsource our cognitive capacities to them.

I’ll confess that if you asked me to drive to a location I’m unfamiliar with, without the assistance of Waze, I’d be hard-pressed to get there without some struggle and a good deal of cursing. The advent of calculators made long division a rare skill. The internet wreaked havoc on our attention spans, as Nicholas Carr explored in his seminal book “The Shallows.” What will happen with AI? Will our creative and expressive muscles atrophy?

But let’s look at it from another angle. Every reporter must now ask themselves: Did AI write what I’m reading? Draw what I’m seeing? Speak what I’m hearing?

I anticipate we’re entering a messy phase where parts of things are AI-generated, mixed with human-generated content.

Casey Frechette

We might be comfortable answering these questions now, at ease with our ability to notice a stilted voice or a telltale sentence construction. But whatever signs we think make fake content easy to spot at the moment are sure to become outdated as the technology improves. And here’s a further complication: Much of the discourse about deepfakes has assumed what I see as an all-or-nothing framing. But I anticipate we’re entering a messy phase where parts of things are AI-generated, mixed with human-generated content.

Here’s the good news: Once we accept that AI is flawed, we can use it responsibly, even relish what it has to offer.

For writers and editors, I see potential in AI as a research tool; a brainstorming partner; a transcription service; an assistant able to sort, summarize and label; a writing coach who offers feedback and troubleshoots problems with copy; an editor who can review early drafts for clarity, accuracy, AP style and grammar; a data analyzer; a reader who can give feedback as either a neophyte or an expert on the topic at hand, depending on the role we want it to play.

I’d hasten to add that there’s a certain irony in considering how journalists can benefit from AI, given how much AI has benefited from journalists.

We now know that many chatbots have been trained, in no small part, on stories that have been scraped from news websites. Without the deeply considered, reported and edited content that’s produced almost exclusively by news organizations, chatbots like ChatGPT wouldn’t be nearly as good. At least, that’s the argument lawyers for The New York Times have made in their action against OpenAI and Microsoft alleging massive copyright infringement.

If we’re envisioning an AI tool truly suited for journalists, I’d imagine it would boast a few critical features:

It would be private and secure. It would acknowledge the limits of its understanding and the ways it might be presenting biased or distorted perspectives. It would cite its sources. And it would never make shit up.

What’s interesting is that the current generation of tools isn’t intrinsically incapable of these things. Yet, as we’ve seen from our experiments, it takes tremendous coaxing to get a bot to work only with supplied facts.
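To make that coaxing concrete, here’s a minimal sketch of the sort of guardrail prompt it takes, assuming the OpenAI Python SDK and an API key in the environment; the wording is illustrative, not the exact instructions from our experiments. And even a prompt this blunt is no guarantee: Models can and do ignore such instructions, which is why verification stays a human job.

```python
# A minimal sketch of constraining a chatbot to supplied facts only.
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

notes = """(a reporter's raw notes would be pasted here)"""

messages = [
    {
        "role": "system",
        # The guardrail: work only from the notes, never invent details.
        "content": (
            "You are helping a journalist draft copy. Work ONLY from the "
            "notes supplied by the user. Do not invent quotes, dialogue, "
            "names, dates or other details. If the notes lack something, "
            "write [NOT IN NOTES] rather than guessing."
        ),
    },
    {
        "role": "user",
        "content": f"Notes:\n{notes}\n\nDraft a three-paragraph news brief from these notes only.",
    },
]

draft = client.chat.completions.create(model="gpt-4", messages=messages)
print(draft.choices[0].message.content)
```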

As someone who has coached legions of writers, I wonder what features you might add to this wishlist.

Chip: I’m not sure what I can add to your comprehensive list. I’ve already been impressed by how it can summarize and analyze texts with a detail and accuracy that would make a coach or editor proud. This strength would help writers, too, especially before they submit a story or work without the safety net an editor provides. It could also guide revision, along with other features on your wish list.

The aforementioned risk remains: ChatGPT will happily make the revisions if you ask, and therein lie the dangers of hallucination, not to mention the ethical lapse that could, if not should, cost a journalist their job.

Still, it can’t provide the emotional support I’ve found some writers need to get past writer’s block, overcome fears about approaching strangers with sometimes uncomfortable questions, and meet deadlines — problems that afflict rookies and newsroom veterans alike. It may put me out of business as a coach down the road. (I don’t know how far down.)

I’m convinced it can make us better writers by raising questions and pointing out flaws. The question for the future is: How far should we let it go? Highlight passages in news stories that were created with AI? Share a byline with ChatGPT? What will it do to a writer’s voice, the element that draws in readers and keeps them engaged, if everything is regurgitated by a large language model and copied from other writers? That’s a big question, and one that anyone who works with words has to address.

But first, in the spirit of this exchange, I asked ChatGPT to answer your question about my wishlist as a coach. Its response (in italics so we’re certain of the source):

ChatGPT: Source Verification: An AI tool that can assist journalists in verifying the credibility of sources and the authenticity of information, especially in an era of deepfakes and misinformation.

Ethical Guidance: An AI that incorporates ethical guidelines specific to journalism, helping journalists navigate complex issues like privacy, consent, and the public interest.

Collaborative Editing: Enhanced features for collaborative editing that allow multiple journalists to work together seamlessly, with AI providing real-time suggestions for improvement based on best practices in journalism.

Enhanced Fact-Checking: An AI tool that not only helps in fact-checking but also provides context and background information to help journalists understand the broader implications of their stories.

Between your ideas, mine and ChatGPT’s, we could definitely give a coder something to work with. And now that we’re back to human-to-human conversation, can AI replace what you do as a teacher?

Casey: Replace me? No way!

Wait, let me step back. My knee-jerk reaction to your question is probably driven more by fear of obsolescence than a reasoned view on where we’re heading. Much as I’d like to believe I can offer something of unique value, my sense is that AI can already outperform me on several key tasks and will soon be a worthy rival on many more, from designing learning experiences, to producing lessons, to providing feedback.

I’m not quite ready to let go of the notion that, without emotion, a computer will never be able to provide the kind of connection that instills a passion to learn.

Casey Frechette

In her book “Machines Who Think,” Pamela McCorduck recounts philosopher and mathematician Bertrand Russell’s reaction when the inventors of “The Logic Theorist,” often regarded as the first AI program, presented him with an AI-generated mathematical proof superior to one of his own. Russell’s reaction, McCorduck wrote, was one of delight. I hope I can approach AI’s expanding capacities with a similar open-mindedness and humility.

Still, I’m not quite ready to let go of the notion that, without emotion, a computer will never be able to provide the kind of connection that instills a passion to learn. And yet, I can’t help but think that this is the next chapter in our relationship with technology. I hope you don’t mind if we bring our friend Claude into the conversation.

Claude, what are your thoughts on AI’s potential to provide humans with emotional connection?

Claude: I don’t actually have personal thoughts or feelings. I’m an AI assistant created by Anthropic to be helpful, harmless, and honest.

Casey: Allow me to rephrase the question. Could AI one day provide humans with meaningful emotional connection?

Claude: I think the potential for AI to provide emotional connection to humans is an interesting area of research and development. Some key considerations around this topic:

  • Emotional connection involves mutual understanding, empathy and being “present” for another – abilities that current AI systems lack. However, research is ongoing into developing emotional intelligence and social skills in AI.
  • Humans connect through shared experiences and vulnerabilities. AI has no innate experiences or ability to be “vulnerable” in the human sense. Programming those qualities convincingly is technically challenging.
  • There are risks if AI appears to be sentient when it is not actually conscious. People could anthropomorphize and over-rely on AI that lacks true understanding or care for human wellbeing.
  • On the other hand, some level of emotional rapport could be beneficial, e.g. AI pets or companions for the elderly. This raises ethical issues around deception and manipulation of vulnerable persons.
  • More research is needed into human emotional needs and the appropriate role of technology in healthy relationships. Emotional skills in AI may develop gradually and benefit from ethical oversight. But overall this remains a complex issue with many open questions.

In summary, while AI could potentially emulate emotional connection, truly experiencing and providing it like a human may be beyond current technical capabilities and ethical norms. The path forward merits careful consideration of human psychological needs and AI’s limitations.

Casey: Well, that’s some food for thought. My takeaway is we might miss out on a lot of good, without a full exploration of the capacity for emotions in machines — as practically and philosophically challenging as it might be to do so.

But I’d like to get back to your question: How far should we let it go?

So long as technology outpaces complementary legal, ethical and professional frameworks, individual journalists and news organizations alike will need to make important decisions about whether and how to incorporate AI into their work. And they’ll have to decide how to express those choices to the public.

So long as technology outpaces complementary legal, ethical and professional frameworks, individual journalists and news organizations alike will need to make important decisions about whether and how to incorporate AI into their work.

Casey Frechette

What’s your advice for a reporter or editor who may be grappling with these questions?

Chip: Decide AI’s value to you by using it enough to become familiar with its pitfalls and promises. If you have never used a chatbot before, you need to. AI already has serious implications for you and your future. Understand it. Follow the news about it. If you’re already a user of ChatGPT or Claude (others include Google’s Gemini and the playful chatbot Pi), experiment. Ask it questions, especially about things you know, to determine how accurate it is. Start by asking for your bio or that of someone you know well. (My first AI bio erroneously gave me a Pulitzer and jobs with The New York Times and The Washington Post.) Follow Casey’s lead and have it brainstorm questions for an interview.

As with any source of information, verify. Request citations for the answer. ChatGPT and Claude have given me sources, like Wikipedia footnotes. Sometimes, however, the chatbot will insist, in effect, that it already knows these things and needn’t provide attribution. Sounds like hubris to me.

Casey: That’s a great list of tips. I especially appreciate your emphasis on reflective experimentation. It strikes me that it’s important to stake out a middle ground between proactive trial and error, and cautious curiosity, particularly in the absence of the policies, laws and guidelines that will likely solidify in the months and years ahead.

Taking Over

We pasted this entire conversation into ChatGPT and Claude and asked them to respond to Casey’s question: What’s your advice for a reporter or editor who may be grappling with these questions?

ChatGPT: 
Familiarize Yourself with AI: Engage with AI technology to understand its capabilities and limitations. Experiment with AI tools like ChatGPT, Claude, and others to see how they can aid in journalistic tasks and to become aware of the accuracy and reliability of the information they provide.

Evaluate AI’s Role and Value: Consider how AI can enhance or complement your journalistic work. Look at AI as a tool for research, summarization, brainstorming, and even preliminary editing, but be mindful of its current limitations, especially in generating content without errors or biases.

Verify Everything: Like with any source, verification is crucial. Always question the accuracy of the information provided by AI and seek out original sources or citations whenever possible. Be skeptical of AI’s ability to provide information without proper attribution and verify facts independently.

Be Aware of Biases: Recognize that AI models can reflect biases present in their training data. Approach AI-generated content with a critical eye and consider the impact of potential biases on your reporting.

Understand the Legal and Ethical Implications: Stay informed about the legal, ethical, and professional frameworks surrounding the use of AI in journalism. Consider the implications of copyright infringement, privacy, consent, and the public interest in your use of AI technologies.

Educate Yourself and Your Audience: Keep up with developments in AI technology and its implications for journalism. Educate your audience about the role of AI in your reporting process, including how it’s used and its limitations.

Use AI Responsibly: Utilize AI as a supplement to, not a replacement for, traditional journalistic methods. It should aid in enhancing the quality and efficiency of your work without compromising journalistic integrity or ethical standards.

Embrace a Critical Mindset: Maintain a healthy skepticism towards AI-generated content. Question its origins, accuracy, and the potential for “hallucination” or generating false information.

Explore AI’s Potential Ethically: Investigate the ways AI can contribute to journalism without undermining trust or quality. This includes leveraging AI for data analysis, generating story ideas, or improving the accuracy of reporting, while always adhering to the highest journalistic standards.

Prepare for the Future: As AI technology continues to evolve, stay ahead by continually assessing how these tools can be integrated into journalistic practices in ethical, responsible, and innovative ways.

By considering these guidelines, reporters and editors can navigate the complexities of incorporating AI into journalism with a balanced approach that maximizes benefits while minimizing risks.

Claude: 
The key tension seems to be between leveraging AI’s potential benefits – such as automation of routine tasks, surfacing patterns in data, or generating story ideas – while also maintaining journalistic principles. Reporters and editors need to thoughtfully evaluate AI tools to make sure they enhance rather than hinder core journalistic values like accuracy, transparency and human judgment.

Rather than seeing AI as a plug-and-play solution, newsrooms should carefully assess if and how these technologies can augment human capacities rather than replace them. Some constructive ways forward could be keeping humans in the loop on fact-checking AI content, disclosing the use of AI-generated text, or having AI focus more on supplemental analysis rather than primary story generation.

The path to integrating AI into news likely requires an openness to its capabilities but also a mindfulness regarding its limitations. Prioritizing timeless journalistic values around truth and understanding while leveraging cutting-edge technology.

Chip: I think we humans should get the last word. Casey?

Casey: I’m impressed by ChatGPT’s and Claude’s synthesis and advice. But our conversation brings to mind another advantage humans still have over machines: the ability to gain new appreciations and insights through dialogue with each other.

I’ve enjoyed our chat, as always, Chip. Looking forward to continuing our explorations.

* * *

Chip Scanlan is an award-winning writer who taught at the Poynter Institute and now coaches writers around the world. He is the author of several books on writing and the newsletter Chip’s Writing Lessons.

Casey Frechette is an associate professor of journalism at the University of South Florida’s St. Petersburg campus and former interactive learning producer at the Poynter Institute.