Gray Media VP, Assistant General Counsel Discusses Updated Gen AI Policy
Claire Ferguson, who leads Gray’s AI efforts, talks about the emerging tech and its proper role
ATLANTA—Gray Media this week revised its group-wide policy regarding the role of generative AI to acknowledge the technology’s potential but underscore the broadcaster’s commitment to using Gen AI responsibly.
“Gray recognizes and values the significant potential for generative AI and other emerging technologies” in news, creative and business processes, the policy says.
However, the group is equally aware that AI-generated content “can distort and misrepresent content,” something antithetical to Gray’s stated “north star”: earning consumer confidence.
I spoke with Claire Ferguson, vice president and assistant general counsel at Gray, who is leading the station group’s internal work on AI, about the updated policy, how generative AI can assist journalists in news creation, possible pitfalls and how Gray is preparing its employees for generative AI.
(An edited transcript.)
TV Technology: Gray Media this week updated its AI policy from June 2023. Why?
Claire Ferguson: When we first generated our AI policy over 18 months ago, it was focused on safe use, ensuring that our folks were getting the right mindset about AI and what AI was in June 2023.
The difference in what AI can do from then to now is remarkable. Even three months ago, ChatGPT didn’t know how many r’s are in the word strawberry. That made it pretty easy to say, “This is something we are going to have a hard time trusting.”
Obviously, we still have swaths of questions and guardrails around what we can trust from ChatGPT and how we can use these kinds of generative AI platforms, but there have been massive improvements.
To have stuck with a policy from June 2023 that started with “Gray is committed to safety” felt like a seatbelt that wasn’t necessary.
So, we decided we wanted the policy to lead with our dedication to innovation. Safety is still 10,000% baked in, but based on what Gray can do and wants to do, and based on how AI can help it do that, it felt like it was time for a refresh.
TVT: Let’s talk about the use of AI around Gray. How is Gray using generative AI in the newsroom?
CF: We are still very much working through all those processes, as I suspect are our peers at the moment.
We trust it for very basic tasks. It's fine to use it as a spellcheck. It's fine to ask it for suggestions.
We are bullish on AI's capacity to help us meet viewers where they are in a way we're not positioned to right now.
We're not there yet, but imagine you've got a Category 5 hurricane coming to your market. If you don't speak English, we could serve those storm warnings to you in your native tongue. Wouldn't that be a powerful, lifesaving use of this technology? That's just the start.
The important thing about the changes to our AI policy is that the tenets never changed. We are still incredibly centered on our communities and our employees before we would ever prioritize AI.
We call out in the new policy that anything we do has to cue to our north star, which is maintaining consumer confidence.
We would never deploy AI in a way that could potentially jeopardize that because a consumer of our news is not going to care if the error was algorithmic or human. That erosion of trust is something we cannot afford.
TVT: The policy spells out that there remains an emphasis on human-created news and the role humans play in the editorial process. So, how else do you see generative AI being used in the newsroom to enhance journalism and not replace journalists?
CF: Our AI policy committee is an interdisciplinary group with folks from technology, legal, marketing, general managers, sales and journalists.
A number of “capital” J journalists are doing some fascinating things with AI and have been pushing us to maybe take some bigger jumps, which I thought was interesting and promising.
The entire group, however, believes humans have to create news—full stop. But if you can use generative AI to spend only 10 minutes poring over 150,000 pages delivered to you from an FOIA request, you have freed up your investigative reporter in a hugely meaningful way.
Again, people in the newsroom were saying, “We’ve got to get a little bit more latitude here.”
I was a huge proponent of giving our employees the red line of the policy to show them exactly what we've changed, and it was really important that we not touch a comma in that section [regarding humans creating news].
We did add a sentence after that saying, if there is a way that generative AI can better serve our communities, then we will prioritize transparency about that, but it will be humans creating the news.
TVT: Understood, but if AI is scanning 150,000 pages of FOIA released documents to reveal what’s relevant, isn’t that relinquishing a bit of editorial control to AI on a fundamental level?
CF: There are tools that allow you to interview your data set. Upload the 150,000 pages the sheriff's office released. Good luck finding the trends, right?
There are ways to use AI to look for and show trends. How many DUIs did this officer end up pulling last year? What was the issue with that kind of thing?
We are by no means training our journalists to trust the AI tool’s answer implicitly. But if it finds that it looks like Detective Ferguson was pulling way more people over than everybody else, you can go back into the records and see for yourself that it sure does look like the case.
And you can ask AI, “What are you seeing that implies those kinds of data points?”
TVT: I was wondering if you could discuss how the policy addresses other potential AI use cases. One involves real-time or near real-time translation of English-language local newscasts into various languages to serve specific non-English speaking communities in local markets. Another is using AI to identify areas of interest for cropping purposes to enable reformatting for vertical video consumption on smartphones.
CF: That’s a lot to unpack. From a legal standpoint, my No. 1 concern, when we jump ahead to when we are ready to do [language translation], is mistranslation.
If the [AI] service mistranslates “sexual assault” into “rape,” we could be looking at a substantial defamation situation.
Our policy says humans will create the news. A translation is a little bit of a gray area, as a human created the original version of that news.
Our argument would be, we would deploy that technology if we felt it better served our communities than not delivering it in that language.
We would be up front [with the audience] saying this [story] was produced in English and translated by AI. We hope you understand. If you run into any issues with our translations, please let us know.
Regarding a versioning tool to go from 16x9 to vertical, which is all the rage, there is a danger as well.
A news story about lines at a polling place, once you reorient and reformat it, may seem to indicate there are a lot more people at the polling place, and you’re discouraging people from getting out to the polls. We are hugely sensitive to that.
It all comes back to having a human in the loop. If we were to deploy that kind of technology, it would have to be on a basis that ensures the edit is totally agnostic to the underlying piece of content. A human in the loop will have to be the one to approve that.
Again, that is why that line stayed in our policy. Humans will create all Gray news because of exactly this type of issue.
TVT: The policy also says Gray will train employees on the use of AI. How is that being handled?
CF: Training was mandatory with the June 2023 policy and continues to be an ongoing onboarding process.
Anybody who joins Gray fills out their paperwork, sets up their direct deposit and then sits down for 30 minutes to watch an AI training video that is tied to an internal site that houses a number of resources, including every AI platform our team has reviewed.
If I’m looking at using BlurkAI, let me go see if it’s been approved already so I don’t have to go through the process. Whether it has been approved or not, we explain why.
George Winslow is the senior content producer for TV Tech. He has written about the television, media and technology industries for nearly 30 years for such publications as Broadcasting & Cable, Multichannel News and TV Tech. Over the years, he has edited a number of magazines, including Multichannel News International and World Screen, and moderated panels at such major industry events as NAB and MIP TV. He has published two books and dozens of encyclopedia articles on such subjects as the media, New York City history and economics.