How PBS is using AI to improve processes and systems for stations, staff and viewers


Nick Licitria, a principal engineer at PBS, prompts AI during the creation of a training video at PBS headquarters.

Technologies such as generative AI and spatial computing are advancing quickly, with headlines about new developments surfacing on a weekly if not daily basis. But which trends will really matter for public media? And who is putting effort towards them? In “Thinking Beyond Screens,” a blog series on the PBS Hub, innovators in public media are showcasing their work with emerging technology to inform and inspire their colleagues. The following article has been adapted from a post in the series with permission from PBS.

The PBS Innovation Team was established in 2020 to advance PBS’ product experimentation. The team’s role is to leverage cutting-edge technologies to differentiate public media. We build AI, machine learning, augmented reality and virtual reality prototypes aimed at honing a technological edge for improving content delivery, complementing PBS’ primary strength in content differentiation and finding new ways to serve our viewers, stations and each other. 

This innovation team at PBS is driven by a couple of key values. First, we love to experiment: asking questions and testing hypotheses inspires us to be creative. Second, from prototyping to product deployment, we believe in constant, iterative improvement. We optimize until we know something works and meets the interests of viewers, delivering on the mission of being a trusted window to the world from any device in America. For us, success is measured by the degree to which we can help PBS and its member stations use emerging technologies to make our media more accessible to more people. And one of the newest tools in our toolbox is AI, which supports our mission of universal service. 

How is PBS using AI today?

We view ML, AI and generative AI as tools that can help us engage and retain viewers, so our digital product teams are venturing further into exciting new innovations in personalization and discovery. Recently, we have been exploring how to integrate AI models’ embeddings into our data lake to help generate content metadata for powering search and recommendation engines. Good results start with high-quality, curated, protected data.
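To illustrate the embedding idea, here is a minimal sketch in Python. The three-dimensional vectors and show titles are made up for illustration; in a real pipeline, the embeddings would come from a hosted model and live alongside show metadata in the data lake. Ranking by cosine similarity is the core move behind embedding-based recommendations:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy embeddings keyed by title; real ones would come from a model.
catalog = {
    "Nova": [0.9, 0.1, 0.0],
    "Nature": [0.8, 0.2, 0.1],
    "Antiques Roadshow": [0.1, 0.9, 0.2],
}

def recommend(query_vec, k=2):
    """Return the k catalog titles closest to the query embedding."""
    ranked = sorted(catalog, key=lambda t: cosine(query_vec, catalog[t]),
                    reverse=True)
    return ranked[:k]

# A science-leaning query vector surfaces the science/nature titles first.
print(recommend([0.85, 0.15, 0.05]))
```

The same similarity ranking works whether the query vector represents a viewer's watch history or another show ("more like this").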

The PBS product teams are testing use cases for how we can leverage automation and prediction systems to drive greater engagement, increase conversions, mitigate churn and sustain and grow station revenues. There are so many more use cases, from production to creative, marketing to fundraising.

If a public media organization is interested in working with AI for operational workflows in the cloud but is concerned about data privacy, we have found that hosting a model through AWS using Bedrock is the most viable option. We have also been experimenting with AI models that can be hosted on our own servers, allowing us to own the data and be sure that our information stays private. 

A row from PBS’s recommendation engine on the app, which uses AI and was built by the PBS Product Teams.

We use our established Amazon Web Services cloud infrastructure to run AI in our “walled garden.” That means we control how our data is used, who has access to it and where it goes. Many open-source models, such as Llama and those available through the Hugging Face platform, can be accessed via AWS Bedrock. We also experiment with free tools and cutting-edge models, purely for research. 
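For readers curious what calling a model inside that walled garden looks like, here is a hedged sketch using boto3. The model ID, region and prompt are placeholders (the IDs actually available depend on what your AWS account and region have enabled), and the invocation itself is shown in a comment because it requires credentials and model access:

```python
import json

# Placeholder model ID; check the Bedrock console for IDs enabled
# in your own account and region.
MODEL_ID = "meta.llama3-8b-instruct-v1:0"

def build_llama_body(prompt, max_gen_len=256, temperature=0.2):
    """Build the JSON request body Bedrock expects for Meta Llama models."""
    return {
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    }

body = build_llama_body("Summarize this episode description: ...")
print(json.dumps(body))

# With AWS credentials configured, the call looks roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
#   print(json.loads(resp["body"].read())["generation"])
```

Because the request never leaves your AWS account, the data-handling guarantees are the ones you configure there, which is the point of the walled garden.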

Here are some ways PBS is experimenting with AI:

  • Recommendation engine(s)
  • Automations in the cloud 
  • PBS Kids research 
  • Writing in the brand voice: headlines, fundraising appeals, emails
  • Metadata extraction and creation
  • RAG chatbots: Support and Hub Chatbot
  • Image alt tag generation
  • Quiz generators
  • Dynamic video recaps
  • Face swapping tech
  • Search enhancements
  • C2PA
  • Video screening 
  • Personalization automation
  • Fundraising modeling
  • Cybersecurity triage
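To make one of the items above concrete: a RAG (retrieval-augmented generation) chatbot boils down to "retrieve relevant documents, then generate from them." The toy sketch below uses hypothetical support snippets and simple word-overlap retrieval as a stand-in for real vector search, showing the retrieval-and-prompt-assembly half:

```python
import re

# Hypothetical support snippets; a real system would index many more.
docs = {
    "passport": "Activate PBS Passport by signing in with the email on your donation.",
    "captions": "Turn on closed captions from the CC button in the video player.",
}

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question):
    """Return the doc whose words overlap the question most."""
    q = tokens(question)
    return max(docs.values(), key=lambda d: len(q & tokens(d)))

def build_prompt(question):
    """Stuff the retrieved context into the prompt sent to the model."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I activate Passport?"))
```

Grounding the model's answer in retrieved documents is what keeps a support chatbot on-script instead of improvising.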

One example use case of AI at PBS could help surface content from our library and share it with users in a more personalized manner. We have a lot of travel shows, but they’re currently housed under the “Culture” genre, so they’re harder to find within our system. If AI searched the library, identified travel content, created a new tag and classification, then compiled the output into a row on our apps such as “10 travel shows on PBS you have to watch,” that would provide a lot of value, both for SEO (discovery) and for current viewers (retention).
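As a toy stand-in for that model-driven classification, the sketch below retags a hypothetical "Culture" catalog with a "Travel" label using keyword cues, then assembles the resulting app row. A production version would classify with an AI model rather than keywords, but the tag-then-build-row shape is the same:

```python
# Keyword cues standing in for a real classifier.
TRAVEL_CUES = {"travel", "journey", "road trip", "tour", "destination"}

# Hypothetical catalog entries filed under "Culture."
shows = [
    {"title": "Rick Steves' Europe", "genre": "Culture",
     "description": "A travel tour of Europe's best destinations."},
    {"title": "Great Performances", "genre": "Culture",
     "description": "World-class music and theater on stage."},
]

def retag_travel(catalog):
    """Add a 'Travel' tag to any show whose description matches a cue."""
    for show in catalog:
        text = show["description"].lower()
        if any(cue in text for cue in TRAVEL_CUES):
            tags = show.setdefault("tags", [])
            if "Travel" not in tags:
                tags.append("Travel")
    return catalog

# Build the app row from the newly tagged shows.
row = [s["title"] for s in retag_travel(shows) if "Travel" in s.get("tags", [])]
print("Travel shows on PBS you have to watch:", row)
```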

Generative AI can be leveraged to understand viewers’ and donors’ preferences, offering the ability to present a 360-degree view of all interactions, which could trigger more targeted, contextual and personalized content. 

What are the guidelines from PBS?

[Chart: a "Task Expertise-AI Performance Matrix." The vertical axis runs from "Low Expertise" to "High Expertise"; the horizontal axis from "Moderate Performance" to "Excellent Performance." Yellow squares mark data points, and a red box highlights the middle-left region where both expertise and AI performance are moderate, with the suggestion: "Look for use cases where moderate AI performance can create value."]
Chart describing generative AI’s sweet spot. 

Crunching a lot of data does not translate into knowledge. And making complex calculations with human interfaces does not equal autonomy. For machines to be truly autonomous, we need to develop a new scientific approach. But perhaps that is a different blog post. In short, if you are using AI, be aware that it isn’t perfect yet. And here are some guidelines to follow:

  1. If you use free tools, never upload any proprietary, sensitive or confidential information, or PII of viewers or donors.
  2. If you use generative AI to create new content in production or editorial (or if you hire an external party, such as a production company, that might), be aware that AI-generated content is not copyrightable. 
  3. Please follow the PBS editorial standards. 
[Screenshot: the "PBS Standards" website, with navigation links to "Editorial Standards," "Funding Standards," "Co-Production Policy," "Articles," "Case Studies" and "Guidance Memos." A "Guidance on Generative AI" section states, "The PBS Editorial Standards & Practices provide guidance applicable to the use of emerging generative AI tools," with a prompt to "Read the guidance."]
Visit the PBS Standards website to find helpful resources on AI.

Here is a quick summary as they apply to specific AI tools:

  • Text or image generation for inspiration is OK, but do not copy/paste
  • Image generation for backgrounds and b-roll is a maybe:
    • Hire experts to review and put a lower third or labels for the audience that explains the use of AI.
  • Voice AI or face manipulation is a no, unless it is used for anonymizing interviewees
    • Hire experts to review and put a lower third or labels for the audience that explains the use of AI.
  • Voice AI for interview edits or narration is a no
  • Generative fill is a no
  • Using AI stock libraries is a no
  • Video and music generation is a no
  • Questions about usage? Email [email protected]

What about marketing?

  • Text or image generation for inspiration is OK, but do not copy/paste
  • Text for summarization/description/tags/headline ideas is OK, but fact check
  • Image generation is maybe
    • Hire experts to review and reference the use of AI in the alt tag for the image
  • Questions about usage? Email [email protected]

So does that mean the AI tools of today are useless? No. It means we have to understand the tools’ limitations and the opportunity to use them for what they are good at. If their accuracy is limited, what can they still help us with? How can we use them to enhance, not replace, our work? 

Will AI take our jobs away?

Empty cubicles in the PBS office. Blame COVID, not AI.

AI won’t displace jobs. Today’s AI tools cannot replace unskilled or highly skilled labor: a robot cannot clear the dinner table, and an algorithm cannot be a physician. Humans possess the ability to understand the nuanced context and cultural relevance of a three-dimensional world in ways that AI currently cannot match. This is crucial for creativity and editorial judgment. Some tasks require human expertise that AI lacks:

  • Making editorial decisions on how to use generated content in new productions. 
  • Making decisions about sensitive or controversial content. 
  • Ensuring that content is used appropriately and ethically. 
  • Balancing business interests with journalistic integrity and public trust.
  • Contextualizing content within broader historical or cultural narratives. 
  • Ensuring diverse perspectives are represented in content.

Humans also bring interpersonal communication and collaboration to the workplace, like maintaining relationships with content creators and producers, negotiating rights and permissions, and coordinating between different departments (legal, production, marketing).

So what about journalism?

The reality is AI can help, not hurt, newsrooms by transforming and reshaping the way reporters, editors, marketers, product professionals and managers do the work. AI can positively impact both the editorial and business sides, as well as the supply chain of newsgathering, production, distribution and investigative journalism.

Here is an example. A friend at the Wall Street Journal told me they are using ML and AI now to do things like scanning and summarizing documents, analyzing social media conversations, translating into other languages with context, transcribing audio from person-on-the-street interviews, making short quizzes and more. But what really impressed me is how they are using AI as a tool to aid investigative journalism. Using image recognition, they analyzed data from Google Maps to identify electrical wires that may have lead casing.

The journalism industry must keep pace by identifying effective use cases for AI tools, developing best practices for integration and encouraging collaboration. Newsrooms can use AI today for things like: 

  • Reporting and news production
    • Searching knowledge bases
    • Summarizing internal docs
    • Fact-checking
    • Transcribing interviews and footage
    • Investigative journalism 
  • Product and distribution
    • Translating reporting into many languages
    • Generating news quizzes
    • SEO and alt text image generators
    • Personalized content and news updates
    • Comment moderation
    • Chatbots for support and search

But just remember that all of this requires human oversight. We need to provide critical thinking. And we need to ensure the principles of journalism are applied to this work, with rigorous vetting, a reader-first approach, and commitment to truth and transparency. 

What’s next? What should we be doing with AI? 

How can AI and immersive technologies transform storytelling in public media in 2025? We are now two years into generative AI tools. Is anyone using them for more than just answering questions and having a dialogue? 

Disclaimer: This image was generated with Google Gemini AI Aug. 30. AI outputs may sometimes be offensive, inaccurate or contain bias.

The reality is that most people are not yet empowered, let alone trained, to understand either the limitations or the capabilities of these tools. But if we can embrace that and see machines as collaborative tools, we can go in a lot of directions. When I think about the future we could create with AI, five things come to mind.

  1. AI-Powered Personalized Narrative Experiences. Could it be possible to provide adaptive storytelling algorithms that adjust content based on viewer engagement and preferences? Obviously, there are ethical considerations, and we as public media need to maintain editorial integrity. But personalizing video content will soon become the new normal. My son sometimes gets really turned off by dramatic music in PBS shows and doesn’t love violent scenes, even if it’s an educational documentary about animals. What if I could request an uplifting music track matched with optimistic imagery? That future will be coming soon on other platforms. Should public media explore it, too?
  2. Immersive Journalism and Investigative AI Reports. AI is really great at pattern recognition and summarization. And immersion tells a more robust story. There are already examples of how news organizations are using AI for investigative reports, such as for finding patterns in satellite imagery or public data, or analyzing complex datasets and creating compelling interactive visualizations. Could we use AR to transport viewers into news events and documentary settings with 360-degree video and spatial audio in local reporting to literally transport someone to the event? Or help students see a 3-D model of an object with educational overlay in their living room instead of in a planetarium?
  3. AI-Enhanced Interactives and Games. Games take time to produce, but AI can help with the heavy lifting. Or it can be used to craft parts of the experience. Or be the moderator. Gamification elements in educational content, powered by AI and immersive tech, will substantially change how we produce and release interactive experiences. 
  4. Live Events and Community Engagement. AI-powered, real-time audience interaction during live streams and virtual events, with augmented reality overlays for in-person community events and educational workshops, will be a game changer for the public media industry. Town halls, events and screenings can be hybrid and interactive with the help of new technologies. 
  5. Accessibility and Inclusion Through Technology. AI can be used to improve closed captioning and audio descriptions and to make translations in real-time. Imagine watching TV and hitting a button to switch the audio to another language, but not just with overdubs — the lips and facial features would adapt, too. Maybe even words in the background!

The potential for AI and immersive tech to amplify public media’s mission and impact should be a call to action for public media organizations to embrace and shape these technologies responsibly.

Do you have great ideas for AI? I want to hear from you!

I would love to know how you are using AI and what tools you find useful. Or if you think PBS should be looking into building its own AI platform for something special, tell me! Please reach out and let us know what’s on your mind. We are here to help!

Here are a few educational videos about AI: 

  • Prompt Engineering Tips
  • What is RAG?
  • Reducing Hallucinations in AI
  • C2PA Overview
