ANNAPOLIS, Md. — A proposed rule from the Federal Communications Commission could cripple funding for local community media outlets. The outlets, also known as public, educational and government-access television, make up thousands of channels across the country and broadcast a variety of content, such as distance education courses and candidate forums.
Cable companies currently fund them in exchange for use of public lands to hook up consumers to their services. Under the proposal, local municipalities would have to foot the bill. Martin Jones, president and CEO at MetroEast Community Media, said this could seriously hamper people's access to information.
"It could destroy the ability to deliver transparency in government to everyday citizens by our government channels, when we broadcast council meetings that most people can't leave work to go to," Jones said. "The school board meetings, the water commission meetings, the planning commission meetings."
These channels have historically been considered "in-kind" resources. Under the FCC proposal, a monetary value would be assigned to them and included in what's known as a franchise fee paid by cable companies and capped at 5 percent of their gross revenue. The agency has argued this fee and in-kind contributions impede innovation and are a barrier to the market for cable companies.
The FCC did not respond to a request for comment.
Mike Wassenaar, president and CEO of the Alliance for Community Media, called the proposal a backdoor attempt to drain the coffers of local government on behalf of the cable industry.
"It forces local governments into a position to say either they can take the money to be able to pave roads or pay for fire department or other services, or they could have to give up the channels that they use to communicate with their citizens,” Wassenaar said. “And we think that's not a fair bargain."
Wassenaar said local governments in some places are still struggling to recover from the recession of a decade ago, which means community media there is hurting too. But he said community media stations in other places are actually thriving amid increased interest in localism.
"At a time when major media outlets are turning their back on local communities," he said, "we're actually seeing a renewed interest in people thinking about their place and thinking about information about their place and the information needs of people in their neighborhoods."
The public can comment on this proposal through November 15 on the FCC's website.
Missouri lawmakers want to protect people from the risks posed by increasingly accessible AI-generated images and videos.
The Innovation and Technology Committee is planning to vote on the Taylor Swift Act, a bill aiming to make it illegal to publish or threaten to publish AI-generated sexually explicit images of people.
Rep. Adam Schwadron, R-St. Charles, authored the bill and said it is important to be proactive in protecting ordinary citizens.
"They were able to take it down for her," Schwadron acknowledged. "However, common Missourians would not have the same protections afforded to her. Not everyone is Taylor Swift."
The bill would allow victims of fake-image attacks to sue the creator in civil court and seek removal of the offending images. Rep. Bridgette Walsh, D-St. Louis, also supports the bill and said it is necessary in this day and age, given how easy videos and images are to access and create.
Schwadron noted while they will need to learn how to track items originating from the dark web, he is optimistic the legislation will cover most common offenses.
"The cases that we are seeing across the country of classmates that are being attacked by other classmates of theirs that is creating these images and it's affecting young girls and even boys and those are a lot easier to track when they're being shared from phone to phone," Schwadron explained.
Schwadron added the name "Taylor Swift Act" was fitting due to her ties with the state of Missouri and her recent ordeal with explicit deepfakes.
Experts say social media algorithms are radicalizing users and fueling extremism in Arizona and around the country ahead of the 2024 presidential election.
Michael Chertoff, a member of the National Council on Election Integrity, said better protecting data privacy could make the algorithms less destructive, without infringing on free speech.
"I do think we could regulate access to data, uses of data and the application of algorithms to that data without offending the First Amendment," Chertoff contended.
Chertoff pointed out data is one of the most critical ingredients in building algorithms using artificial intelligence. He argued data collection by Big Tech companies should be better regulated, as it is used to send specifically targeted and polarizing messages to consumers. A large majority of Americans said they have little to no trust in companies to use AI responsibly, according to the Pew Research Center.
Farah Pandith, senior adviser for the Anti-Defamation League, noted extremism and radicalization are not decreasing. She suggested the first step toward improving the situation is for government and nongovernment stakeholders to acknowledge its severity and to start having more conversations, which she contended are not happening at the scale needed.
"You're not seeing the commitment as a priority area," Pandith stressed. "That shows up in the funding that is required for NGOs that are doing the first responses to all of this that come up with the creative ways. It doesn't show up in the way in which we can scale solutions that we know."
Pandith added solutions are available and called on social media platforms to assume more responsibility for the inciteful content they display.
A contentious congressional hearing on Wednesday saw a unanimous push for regulations on social media specifically related to children.
U.S. Sen. Josh Hawley, R-Mo., pushed Meta CEO Mark Zuckerberg to apologize to families whose children suffered exploitation, harm and death linked to social media.
The CEOs of Meta, X (formerly Twitter), TikTok, Discord and Snap testified at the hearing. Zuckerberg and Snap CEO Evan Spiegel apologized for the first time after Hawley put them on the spot.
"Would you like to do so now? Well, they're here, you're on national television," said Hawley. "Would you like now to apologize to the victims who have been harmed by your products? Show them the pictures. Would you like to apologize for what you've done to these good people?"
Zuckerberg stood, turned to face the audience and said, "I'm sorry for everything you have all been through. No one should go through the things that your families have suffered, and this is why we invest so much, and we are going to continue doing industrywide efforts to make sure no one has to go through the things your families have had to suffer."
Some victims' families said that although the apologies came as a surprise, they did not sound sincere.
Members of Congress said they hoped to find common ground in an effort to create laws that would make the internet a safer place. Senators including Sen. Jon Ossoff, D-Ga., repeatedly asked the social media executives to consider the victims and recognize the risks of being online.
"We want to work in a productive, open, honest and collaborative way to pass legislation that will protect American children above all," said Ossoff. "If we don't start with an open, honest, candid, realistic assessment of the issues, we can't do that if you're not willing to acknowledge the internet is a dangerous place for children."
Earlier this week, explicit AI-generated deepfake images of pop icon Taylor Swift were also circulated on X.
White House Press Secretary Karine Jean-Pierre said legislation would be the obvious way to remedy this type of offense.