NAB Tells FCC Its Plan for Political AI Disclosure Can’t Stand

The National Association of Broadcasters says the FCC will face legal obstacles if it goes ahead with its plan to address political deepfakes.

The association continues to question the timing of the push to regulate the use of artificial intelligence in political advertising. In freshly filed comments, it again said the commission should let Congress take the lead on this issue, given the “substantial hurdles and complexity” involved.

If adopted, the FCC’s proposed rules would require broadcasters to identify political ads that include AI-generated content. The commission also proposed requiring broadcast licensees to include a notice in their online political files about political ads that include AI-generated content.

The NAB says the FCC lacks statutory authority to implement these regulations and that the rules would violate the First Amendment and the Administrative Procedure Act.

The group criticized previously filed comments from supporters: “Most comments defending the FCC’s authority to adopt its proposals can be summarized as follows: Because the proposals will serve the public interest, then the commission has authority to adopt them. That is error,” the NAB wrote.

It said that no matter how important an issue, an agency’s power to regulate in the public interest must be grounded in a valid grant of authority from Congress.

“Other attempts to justify the proposed rules under the Constitution are inapposite or nonsensical,” NAB said. “Mushing erroneous statutory-related arguments together does not create a convincing or even coherent First Amendment argument.”

The NAB said that many commenters supporting the proposal disagree with the FCC’s definition of the AI-generated content that would be subject to the rules, and that they disagree among themselves about how such content should be identified. It also said the FCC lacks any authority over political advertisers or ad creators.

It said the commission should simply close the proceeding without action. “Only Congress can address AI-generated political deepfakes across platforms and reach the advertisers creating political ads and thus should be the entity to take the lead in considering any needed regulatory action,” it wrote.

State broadcast associations, in jointly filed comments, supported that position. They too said the FCC’s definition of AI is too broad to be useful and that the administrative burdens on broadcasters would be “simply overwhelming” due to the extra education, inquiry, investigation and disclosure requirements proposed in the NPRM.

The FCC also proposes to require that broadcasters inform buyers of political ads about the station’s obligation to disclose the use of AI, adding to the complexity of political ad sales.

Broadcasters also would need to inquire about the use of AI in the creation of each spot received, including swap-out spots. They’d need to add a disclosure to any spot that uses AI, make airtime scheduling adjustments to accommodate disclaimers, and add disclosures to the station’s political file.

“The potential for irreparable damage to political speakers, broadcasters and the media-consuming public from well-intentioned but misdirected efforts to limit the impact of negative political AI use is vast,” the state associations wrote.

Comments on the FCC’s NPRM can be viewed via the commission’s online Electronic Comment Filing System; refer to proceeding 24-211.

This article originally appeared on TV Tech sister brand Radio World.

Randy J. Stine has spent the past 40 years working in audio production and broadcast radio news. He joined Radio World in 1997 and covers new technology and regulatory issues. He has a B.A. in journalism from Michigan State University.