It’s time for public media to lead in using generative AI

Illustration: Chad Davis using DALL-E 3

Nothing compares to the rise and potential impact of generative AI (gAI). While it is on a par with the advent of personal computers in the ’80s and the rise of the internet in the ’00s, no technology in history has changed our culture and our work this quickly.

Think about it: ChatGPT — the poster child for all the other gAI tools available to stations today — only launched Nov. 30, 2022. Some of us can remember a time before the personal computer, and some can remember a time before the World Wide Web. But if you want to remember a time before gAI, all you have to do is think back to the Thanksgiving before last.

When a technology advances this quickly, it’s natural to want to step back and wait to see how it all plays out. After all, it’s been said that if you are one step ahead, you’re a leader, and if you are two steps ahead, you’re a martyr.

After a year of experimentation across many departments within Nebraska Public Media, I can tell you: It’s time to lead.

What is generative AI, and why should my station use it?

Before we delve into implications and practical applications for stations, let’s level-set on what exactly gAI is. I’m on deadline with this, so instead of trying to synthesize a year’s worth of reading and thinking about this topic, I’m going to make the whole process a bit more efficient by asking ChatGPT Plus to explain it for you. Here’s the prompt I gave it: “Maintaining the same writing style you have been using thus far [spoiler: I had it help me create the first draft], please define generative AI in one or two paragraphs.”

Generative AI, or gAI, represents a frontier in artificial intelligence that focuses on creating new content — be it text, images, sounds, or even videos — rather than simply analyzing existing data. It leverages advanced algorithms, often based on neural networks, to generate outputs that are novel, yet coherent and contextually relevant. Think of gAI as a digital artist or writer, not just mimicking human creativity but also contributing its own original ideas. This technology draws from a vast pool of data sources to craft outputs that can range from a written article, a piece of music, to realistic visual artworks.

The beauty of generative AI lies in its ability to learn from and respond to the nuances of human input. When you interact with a tool like ChatGPT, you’re not just receiving pre-programmed responses. Instead, the AI is dynamically generating replies based on patterns and information it has learned during its training. This capability makes gAI incredibly versatile, adaptable, and valuable across various sectors — from aiding in creative processes to generating predictive models in science and finance. However, this power also necessitates careful consideration of ethical implications, especially regarding authenticity, bias, and the impact on traditional creative industries.

I quibble a bit with “contributing its own original ideas,” but that’s a different article. I do like “to generate outputs that are novel, yet coherent and contextually relevant.” That’s the “magic” of generative AI. And I’ll stress that I didn’t edit the above text. All of the predictive text algorithms under the hood of ChatGPT Plus assembled the contextually relevant line about ethics, authenticity and bias.
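
For the technically curious: you don’t need the chat window to run a prompt like that. Here’s a minimal sketch of sending the same request through OpenAI’s Python SDK (v1.x), assuming an API key in your environment; the model name and system message are illustrative choices, not a recommendation.

```python
# Minimal sketch: sending a prompt to OpenAI's chat API (Python SDK v1.x).
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4",  # the model behind ChatGPT Plus as of late 2023
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": (
            "Maintaining the same writing style you have been using thus far, "
            "please define generative AI in one or two paragraphs."
        )},
    ],
)

print(response.choices[0].message.content)
```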

Colloquially, the best way to think about generative AI is as a creative partner or, depending on the task, an intern. The value-add to the creative process can be thought of as “cognitive offloading.” And while none of these tools is trustworthy enough to automate the creative process (e.g., their output must be fact-checked the way you would any unreliable source), they can save time on brainstorming, information synthesis, and creating first drafts of words, art and sound, especially for content not governed by journalistic standards. For example, the current state of the art allows for a much more media-rich presentation of vision for any content pitch. The barriers to entry are so low that, if you are a gatekeeper, you should be insisting producers come to you with concept art, sound and script, regardless of medium.

Where I see the most opportunity for these tools now, though, is in corporate communications: press releases, grant applications, donor letters, social media posts. These tools can be trained to replicate your station’s voice and look, and in a matter of minutes be ready for refinement by an editor or artist.
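
What does “training” a tool on your station’s voice look like in practice? Short of true fine-tuning, the quickest route is a reusable system prompt. Here’s a hedged sketch, again with OpenAI’s Python SDK; the voice description, function name, and sample facts are all hypothetical.

```python
# Sketch: encoding a station "voice" as a reusable system prompt.
# (True fine-tuning is another option; a good system prompt is the
# fastest way to start.) All names and details here are hypothetical.
from openai import OpenAI

client = OpenAI()

STATION_VOICE = (
    "You draft corporate communications for a public media station. "
    "Voice: warm, plainspoken, community-focused. Avoid hype and jargon. "
    "Every draft will be reviewed by a human editor before publication."
)

def first_draft(kind: str, facts: str) -> str:
    """Return a first draft (press release, donor letter, etc.) for human editing."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": STATION_VOICE},
            {"role": "user", "content": f"Draft a {kind} based on these facts:\n{facts}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage:
print(first_draft("press release", "Spring membership drive exceeded its goal."))
```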

To lead, we need to embrace experimentation with gAI and encourage both intra- and interstation networking for the innovators leading that charge.

How gAI can enhance public media without compromising our values

Public media stands on pillars of trust, localism, and universal service. gAI offers us an opportunity to strengthen these pillars by doing even more in pursuit of our mission with the same investment in talent and creativity. 

To do this responsibly, I advocate defining some values-based guardrails for station experimentation. Whether you know it or not, and whether you like it or not, your staff has dabbled with these tools. In Nebraska, we developed consensus around the following guiding principles, expressed in a series of “We will…” statements in our official Framework for Experimentation (excerpted below):

  • People First: We will prioritize gAI applications that augment human capabilities (especially applications offered by already-trusted providers, like Microsoft and Adobe) over automation that could displace jobs. Consult with your manager and discuss any ideas for using generative AI tools in your work, and understand that our priority is to magnify the good work we are already doing.
  • People Last: We will prioritize human oversight as essential to ensuring that the outputs from gAI align with our values, quality standards, and ethical considerations, and we will quickly address any issues that arise from these experiments. gAI tools are impactful, but not infallible.
  • Transparency and Accountability: We will ensure that our audience understands when gAI has influenced editorial content. Transparency is a cornerstone of public trust and serves as evidence of our adherence to the highest editorial principles.
  • Truth and Authenticity: We will not stray from our editorial principles in experimentation with gAI, whether the content is journalistic, educational, or entertainment-focused. The use of gAI should never compromise the authenticity and truthfulness expected of us by our audiences.
  • Bias Mitigation: We will ensure that content generated by AI does not inadvertently propagate or perpetuate stereotypes or prejudice. Early studies have highlighted that AI tools can reflect and amplify societal biases present in their training data. Special care must be taken to proactively identify and mitigate biases, preventing marginalization or harm.
  • Continual Education: We will seek out or create opportunities to learn about the latest advancements, ethical challenges, and best practices in the realm of gAI, not least so that we may be aware of and understand the potential risks associated with its use. As gAI is a rapidly evolving technology, regular training and updating of knowledge are essential.
  • Collaboration and Shared Learning: We will actively seek collaboration opportunities, both internally and across public media. Lessons learned and best practices developed from gAI experimentation should be documented and shared, fostering a culture of shared learning.
  • Accessibility and Inclusivity: We will ensure that gAI tools and the content or solutions derived from them are accessible to all, regardless of abilities or disabilities. gAI development and application should consider diverse perspectives to prevent biases and promote inclusivity.

These guiding principles advocate for human oversight, reinforced transparency, and constant evaluation of bias and source integrity. Used in alignment with these guiding principles, gAI tools can solve specific problems and address key pain points in day-to-day work. If necessity is the mother of invention, gAI offers everyone the opportunity to invent. Using gAI with human oversight and in line with this framework can lead to:

  • Enhanced Creativity: Engaging with gAI tools during ideation, brainstorming and review phases can lead to fresh perspectives and inspirations.
  • Improved Productivity: gAI tools offer potential cost savings for the organization, whether as dollars saved outright or as increased productivity that improves the ROI on dollars already invested (especially in human resources).
  • Operational Efficiency: Leveraging gAI for cognitive offloading can help reduce human error, accelerate time-consuming tasks, and optimize resource allocation, ensuring that Nebraska Public Media remains agile, competitive, and a responsible steward of resources invested in our mission.

For clarity, gAI tools differ from AI-infused software. Generative AI tools require active engagement with a chat interface to create specific outputs and include tools like ChatGPT and Midjourney, while AI-infused software helps automate tasks and includes media enhancements via Adobe Creative Suite, auto-captioning software, and the auto-complete features on your mobile device.

As an industry whose core mission revolves around creation using words, images and sounds, our primary objective is to utilize gAI in ways that enhance our service, reflect our commitment to the highest ethical standards, and protect the trust and relationship we have nurtured with our audience over the years. As such, having a framework for experimentation or policy isn’t just advisable; it’s essential. This policy should cover how gAI aligns with your station’s mission, the extent to which it’s used in content creation, and how to handle ethical dilemmas arising from biases that are baked into gAI training data. The policy should also address the training of staff to use these tools effectively and responsibly. And remember, this type of policy is not a set-and-forget document, but a living entity that evolves with the technology and its societal implications.

To lead, we need to develop policies (or proto-policy frameworks) that carve out a safe space in our corporate culture to reward experimentation, knowledge-sharing and a leave-no-staff-behind approach to adopting gAI technology.

Which gAI tools should you be trying (in late 2023)?

As we approach the end of 2023, several gAI tools are worth attention. While the lines between types of media are blurring, for now these tools can be divided into words, art and sounds. We’ll touch on the first two here.

Words

Although generative art tools hit the scene about five months before chatbots, it’s the chatbots that captured the public’s imagination. Here are a few tools worth researching:

  • ChatGPT: ChatGPT comes in a free version and a Plus version ($20/month). The free version is as good as any other chatbot on the market right now, but the paid version is better. ChatGPT Plus has a more advanced model (GPT-4) under the hood and also allows photos to be uploaded for analysis or context. Plus also gets you integrated access to generative art tool DALL-E 3, as well as to a nascent but growing marketplace of “GPTs.” GPTs are preset roles (aka “agents”) that serve as a shortcut to certain types of expertise or information. Think of them as the apps of the gAI world. There are already thousands, and they have the potential to make the tool ever more user-friendly. These models are also the basis for Microsoft’s Copilot tools, available in M365, the Edge Browser and the Bing search engine.
  • Claude: Thanks to a governing constitutional filter between what it “thinks” and what it “says,” Claude has a kinder, gentler screen-side manner (bordering on fawning at times). But in our experience, its constitution also results in responses that aren’t always as on point as ChatGPT Plus’. Claude also doesn’t have the same multimodal integration (yet) that ChatGPT Plus does. But one advantage Claude does have is that it can take in much more information: roughly 75,000 words, compared with ChatGPT and ChatGPT Plus context limits of 16,000 and 32,000 tokens (roughly 12,000 and 24,000 words), respectively. And an enterprise version of Claude now boasts a cap of 150,000 words (~500 pages). (For a quick way to estimate whether a document will fit, see the sketch after this list.)
  • Bard (with Gemini): No sooner had this piece dropped than Google released an updated version of its Bard gAI tool with its new Gemini Pro model under the hood. Before Gemini, Bard was not worthy of mention here (we actually edited it out), but my early tests were impressive enough that I asked Current to let me write this fast update. With the Gemini model in place, Bard is much better than even ChatGPT Plus at image recognition, and just as sharp as ChatGPT Plus with basic language tasks. Is this a ChatGPT killer? Google sure hopes so, but regardless, if you are a devotee of Google’s suite of productivity tools, you should try this one.
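
A practical note on those context limits: models enforce them in tokens, not words, so it pays to measure before you paste a grant application or transcript into a chatbot. Below is a minimal sketch using OpenAI’s open-source tiktoken tokenizer; Claude tokenizes differently, so treat the counts as estimates, and the file name here is hypothetical.

```python
# Sketch: estimating whether a document fits a model's context window.
# Uses OpenAI's open-source tokenizer (pip install tiktoken). Claude uses
# a different tokenizer, so treat these counts as rough estimates.
import tiktoken

def token_count(text: str, model: str = "gpt-4") -> int:
    """Count the tokens `text` occupies for the given OpenAI model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

with open("grant_application.txt") as f:  # hypothetical document
    doc = f.read()

n = token_count(doc)
print(f"{n:,} tokens (~{int(n * 0.75):,} words)")
if n > 32_000:  # GPT-4-32k's limit as of late 2023
    print("Too long for ChatGPT Plus in one go; chunk it or try Claude.")
```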

Art

Everyone thinks the gAI wave began with ChatGPT, but actually, ChatGPT was just a shot of nitrous into a hype cycle that had started spinning up with the launch of Midjourney on July 12, 2022.

  • Adobe Firefly: Firefly’s quality is close to that of sector leaders, but because its model was trained on copyright-cleared imagery, this tool is a safer choice for those worried about IP issues related to gAI. In fact, Adobe actually indemnifies creators who use its Firefly tool against certain legal actions.
  • DALL-E 3: DALL-E 3 comes from the same company as ChatGPT, and its integration with ChatGPT Plus makes it my go-to generative art tool right now.
  • Midjourney: Whether you know it or not, you almost certainly have seen Midjourney art on the web. Midjourney’s photorealistic images of “people” have a distinctive look that can seem like actual photos at times.
  • Stable Diffusion: The main reason you should keep track of this tool is that parent company Stability AI’s experiments in generative imagery, video and music hint at a more comprehensive, integrated set of tools for multimedia creators in the future. But, caveat emptor, there are significant business and legal issues behind the scenes.
  • Runway 2: Text-to-video is the next frontier for generative art. By the end of this decade, I expect generative video to be a significant tool in production workflows, and Runway 2 is at the head of that pack right now. (Also, keep an eye on Pika.AI, which launched with a lot of buzz just before this piece was published.)

In both these categories, I’ve been selective in what I mention here. There are countless gAI tools on the market now (mostly dross) and more coming online each week. So, if you’ve heard of a tool that isn’t mentioned (Elon’s “Grok,” for example), it means I can’t comment on it due to lack of testing or because it’s simply not up to public media’s standards (Elon’s “Grok,” for example). And while I focused on words and art here, don’t forget sounds and shapes. Keep an eye on the emergent generative voice, generative music, and generative 3D modeling sectors of gAI as well.

To lead, we should track the communities of practice developing around these tools, as well as the developers behind the tools, to better understand their development context and ethical considerations.

When the bubble bursts (or, the near-term future of generative AI)

A decade ago, we used to say that one year in human time was like five years in tech. With AI tech, one year probably equals 20 now, so looking into the future gets murky fast. But we can expect these tools to only improve at augmenting human creative work. We can also expect more gAI integration into various platforms (especially Microsoft and Google products), which will make these tools more accessible to a broader range of users.

But the hype around gAI has been in overdrive for more than a year now, and even the long-term promises of these tools can’t sustain that level of hype in the short term. Learning how to best use these tools requires work. As people try these tools, some will reject them, and some will realize that they can’t magically solve all creative problems.

As realities prove to be more modest than expectations, the hype bubble will burst. The best-case scenario then is that there will be a backlash of indifference to these tools. You may be tempted to write this tech off at that point, but that is the exact time that you need to double down and work harder toward mastery.

Ultimately, generative AI is not just another tech trend. It’s a transformative force that public media can harness to enhance storytelling, engage communities, and streamline operations. We can do more to fulfill our mission with the same investment in human resources.

To lead, we must navigate this new landscape with foresight, responsibility, and a commitment to our core values. People first, people last. As we mark one year since ChatGPT’s launch, it’s time for public media leaders — technologists, content creators, and C-suite executives alike — to seriously engage with gAI as a tool that can help us level up our ability to build the future of public media in the U.S. Failing to embrace it, develop best practices and share lessons learned flies in the face of our core responsibility to wisely steward the resources — human, financial, and cultural — with which we have been entrusted.

Chad Davis is the Chief Innovation Officer at Nebraska Public Media, where he leads Nebraska Public Media Labs, an R&D division focused on emerging media. Chad is also the inaugural Chair of the Public Media Innovators peer learning community at NETA and writes a weekly newsletter about the intersection of public and emerging media. To join the PLC, subscribe to the newsletter or talk to Chad about gAI and emerging media, you can email him at [email protected].

This article was updated Dec. 7 to include Google’s Bard with Gemini Pro.

5 thoughts on “It’s time for public media to lead in using generative AI”

  1. Here’s something I didn’t see addressed, at all, in this article: DEI.

    We already know that AI in general, and gAI in particular, has enormous problems with race- and gender-bias. And we also know that, to date, the more we try to correct for those biases, the more inaccurate or error-prone the gAI outputs tend to be.

    We also know that public radio has had a really big problem with a white-male bias throughout its entire ecosystem. It’s gotten better in some ways, but it hasn’t gotten any better at all in others. (https://transom.org/2020/stephanie-foo-2020/)

    And finally, we know that the author of this piece is a white male. I don’t exactly hold that against him. But it means that, without some explicit mention of the problems of racial/gender/etc. bias in AI, we cannot, by objective measure, take the analysis that gAI is a “good thing” at face value.

    Of course, I am a white male myself, which means my own suggestions/analysis are just as suspect. And to my shame, I don’t really have any answers to provide at this moment to the problems I am raising. But I think it should be self-evident that to run pell-mell into using gAI without very strong guardrails to correct for various racial, gender, etc. biases is a terrible idea. Remember: the point of a bias is that simply knowing you have a bias does nothing to correct for the bias; you have to institute objective systems to actually correct for the bias. So far AI has mostly done the OPPOSITE of that (https://www.npr.org/2023/02/23/1159084476/know-it-all-ai-and-police-surveillance).

    • Hi Aaron, Chad does mention in the piece that one part of NPM’s “Framework for Experimentation” is “Bias Mitigation: We will ensure that content generated by AI does not inadvertently propagate or perpetuate stereotypes or prejudice. Early studies have highlighted that AI tools can reflect and amplify societal biases present in their training data. Special care must be taken to proactively identify and mitigate biases, preventing marginalization or harm.”

      He also says as advice to stations creating their own policies that the policies cover “how to handle ethical dilemmas arising from biases that are baked into gAI training data.”

      So I would disagree that the piece does not include “explicit mention of the problems of racial/gender/etc bias in AI.”

      Thanks for reading!

  2. Or—and I know this is a wacky idea—maybe the best way to be authentic and truthful is to be a human being telling your own story with your own words. Perhaps I have become Grandpa Simpson shaking my fist at clouds, but I don’t see how one upholds the claimed values of public media by building atop an illusory foundation. Because that’s what generative AI is: a mimicry of human involvement.

    I’ve noodled with ChatGPT, and sure, I’ve seen that it can output content that at first glance looks like something a person would write. But it’s not *good*. And it certainly isn’t authentic.

    I retired from public media several years ago, but I recognize the pattern here. Someone excitedly says, “We can do more with less!” and pretty soon whatever was unique and high-quality is replaced by a pale imitation. Customer service is reduced to Clippy. (“It looks as if you are expressing an opinion…”) Because whatever ideals one claims to defend, every time work is offloaded to a shared master control, a programming service or a chatbot, public media becomes less responsive, less authentic, less worthy of support. And let’s not kid ourselves, “efficiency” and “cost savings” are always, *always* about not employing people, whether by reducing staff or not hiring them in the first place.

    But, hey, what do I know? I’ve been out of the game for a while. Maybe today’s audiences want an anodyne, anonymous content mill. Certainly worked for Sports Illustrated.

    • Thanks David. There’s a lot to unpack there so let’s start with where we agree. Generative AI (gAI) chatbots are absolutely a mimicry of human involvement. That’s what makes them such useful tools, especially when it comes to assisting in certain aspects of the creative process (they really are like “First Draft Machines”). I would also generally agree with you that even the best chatbots don’t generate “good” content on their own. But a chatbot combined with a creative person who is trained and experienced in the use of these tools is a difficult combo to beat.

      Your comment seems to imply that I think we should just automate and outsource content creation to the bots (a shared master control or a programming service). But, as the op-ed demonstrates, I actually think quite the opposite. Let me illustrate why with an example of how I used gAI on a major project. When I developed Nebraska Public Media’s framework for generative AI research, I started with a draft from ChatGPT Plus generated from a prompt I engineered (a creative, skill-driven process). I then ran ChatGPT’s draft through Claude 2 for suggested editing and feedback on content and tone. Then I incorporated those suggestions into the edits I made to the draft to align with the corporate culture in Nebraska. From there it went through editing rounds with leadership at the station (humans all). And before I published it I asked PBS Standards and Practices to give me final comments. You can see the results excerpted in this op-ed.

      How many human hours did that take to generate? About 40 (totaling the estimated time invested by all the people at Nebraska Public Media who participated in the editing and approval). And half of those hours were me editing the draft from ChatGPT Plus and stewarding it through consensus. That’s one work week of human effort to generate a significant policy for the station, from ideation to publishing. And that’s just one example of gAI’s power to speed up the early phases of content creation so that creators can focus energy on the later stages that matter most.

      Efficiency and cost savings do *not* have to be about reductions in force. Rather, I’d suggest that if we don’t assess and develop strategies around the use of these tools, then we’re abdicating our responsibility to be good financial stewards of the resources with which we are entrusted. With the changes that are coming to the media landscape, it will be even more difficult to hire all the staff we need to compete. We need to empower our teams to work smarter.

      Ultimately, these are simply tools. And people who are trained to use these tools ‘for good’ can also do more good with them than without them. That’s why I’m encouraging the innovators and change agents in the system to create space in their stations for all staff to take time, play with these tools, and figure out how they can augment creativity to enhance the pursuit of our mission.
