Researchers Into ‘Deepfakes’ Highlight Failings of New California Legislation – East Bay Express

When a video depicting real people doing or saying things they never did or said goes viral, the consequences can be disastrous. Such malign scenes can destroy reputations, place elections in peril, put national or international security at risk, and threaten the very concept of truth.

Deceptive visual and audio media manipulated with artificial intelligence have long proliferated in online pornography, and are increasingly seen in political commentary or advertising. The Amsterdam cybersecurity company Deeptrace found that, during seven months in 2019, such so-called "deepfakes" increased 84 percent. Reddit holds the dubious honor of being the first known site for deepfakes, but YouTube, Facebook, Twitter and other social media platforms are not far behind.

Even "cheapfakes" that don't use artificial intelligence can be harmful. Consider the May 2019 video that slowed the speech of House Speaker Nancy Pelosi. The simple, low-tech distortion made Pelosi appear inebriated or otherwise impaired. Despite rapid detection of the doctored video, the damage was done.

As November's U.S. presidential election approaches, making social media platforms more accountable for how their online content can be used to mislead the public has become a critical issue. In California, Gov. Gavin Newsom last year signed AB 730, known as the "Anti-Deepfake Bill," into law. Aimed at stemming the tide of deepfakes featuring politicians, the new law is viewed as admirable but imperfect by people concerned about the issue.

The law, which is only in effect until the end of 2022, prohibits the distribution with actual malice of "materially deceptive audio or video media" involving a candidate for public office in the 60 days prior to an election. The law permits candidates to go to court to suppress such media and sue for damages, and exempts content that "constitutes satire or parody," as well as news organizations that broadcast such materials if they acknowledge questions about the material's authenticity.

Observers say the law's flaws include its expiration date, the 60-day window of applicability, the provision allowing deepfakes to remain active until malice is proven, and inadequate enforcement procedures to correct debunked video or audio content and to alert the public regarding deepfakes already posted or gone viral. Most notably, it exempts online platforms from responsibility due to Section 230 of the federal Communications Decency Act, which prevents such platforms from being sued for allowing users to post content. This final limitation could only be corrected by Congress.

Fortunately, the situation has attracted the attention of smart people dedicated to promoting the ethical use of new technologies. Among those leading the charge are Brandie Nonnecke and Camille Crittenden of UC Berkeley's CITRIS and Banatao Institute. The two organizations merged under one name in 2016, and for 20 years have enabled leading researchers at UC Berkeley, UC Davis, UC Merced, and UC Santa Cruz to develop information technology and apply it for the public good.

"My concern is that people may be hesitant to believe anything they see," said Nonnecke, CITRIS Policy Lab co-founder; a fellow at the World Economic Forum; and an expert in communication technology, Internet governance, and civic participation. "That's good because they're careful, but bad because they may lose the perception that something is true by jumping too quickly to thinking everything is a lie." Furthermore, she added, hesitancy and disbelief can be exacerbated by actors yelling "deepfake" about any video ... and then there's no drive-back-the-lie vehicle for detecting and broadcasting truth.

"Who would you then trust?" she said. "Who can determine if it's true or not? The platforms aren't positioned to do that. Most often now it's a lawsuit or a third party verifying a video. But can we trust them? The biggest danger is a video that's actually true, showing a politician doing something illegal, and now no one thinks it's true. The truth won't come out."

Crittenden, the executive director of CITRIS, a co-founder of the Policy Lab, and former executive director of the Human Rights Center at Berkeley Law, said that by casting doubt on authentic media content, deepfakes threaten democracy. Quick to acknowledge that social media platforms have placed restrictions around sex trafficking and actively support developing tools to detect manipulated media, she nonetheless said it's not enough.

"With Facebook, they've said they're not taking political ads down," she said. "Twitter has said they will not accept political advertising. But if they won't curb or flag blatantly false information, perhaps they shouldn't be in the business of political advertising at all."

Just this Monday, after Crittenden was interviewed, Facebook announced that it is banning deepfakes from its platform. But the ban stops short of covering clips edited to change the order of a speaker's words, as well as parody and satire, two categories of content protected by the First Amendment. The policy also appeared not to cover content such as the Pelosi video. A spokesman for Pelosi released a statement to The Washington Post saying the company "wants you to think the problem is video-editing technology, but the real problem is Facebook's refusal to stop the spread of disinformation."

Content such as the Pelosi video, and another infamous clip in which footage of CNN White House correspondent Jim Acosta was sped up to make his actions seem abrupt and violent, is especially pernicious. "The content isn't as sophisticated and doesn't take a programmer or AI software for facial swapping," Crittenden said. "It's easier to accomplish and to proliferate, perhaps even harder to detect, which means there could be more of it. It could spread quickly and easily." Because such a video may be copied or shared many times over, even taking down the original post leaves copies to keep spreading. "The creator isn't going to pull it back and the platforms aren't liable, so there's no obligation to even alert users there's a false video. Unless there's violence, of course."

Nonnecke said social media platforms are providing CITRIS with data and actual deepfakes that will allow researchers to identify model features. Errors in videos already offer clues. The speaker in a deepfake will appear strange: for example, mouth patterns and timing may not match words. Another simple detection approach is algorithms designed to flag videos that suddenly go viral. "Why did that happen?" Nonnecke asked. "Shocking content? Look at those first and decide if it's true or not."
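To make that last idea concrete, here is a minimal sketch, in Python, of a "flag videos that suddenly go viral" heuristic of the kind Nonnecke describes: clips whose share counts spike far above their recent baseline are surfaced for human review first. The data structure, field names, and thresholds are hypothetical assumptions for illustration, not any platform's actual system.

    from dataclasses import dataclass
    from typing import List

    # Hypothetical sketch of the "flag videos that suddenly go viral" heuristic.
    # Field names and thresholds are illustrative assumptions, not any
    # platform's actual API.

    @dataclass
    class VideoStats:
        video_id: str
        hourly_shares: List[int]  # share counts for each hour since upload

    def flag_sudden_virality(stats: VideoStats,
                             spike_ratio: float = 10.0,
                             min_shares: int = 1000) -> bool:
        """Return True if the latest hour's shares dwarf the earlier average."""
        if len(stats.hourly_shares) < 2:
            return False
        *history, latest = stats.hourly_shares
        baseline = max(sum(history) / len(history), 1.0)  # avoid divide-by-zero
        return latest >= min_shares and latest / baseline >= spike_ratio

    def triage(videos: List[VideoStats]) -> List[str]:
        """IDs of videos a human reviewer should examine first."""
        return [v.video_id for v in videos if flag_sudden_virality(v)]

    if __name__ == "__main__":
        clip = VideoStats("clip-123", hourly_shares=[40, 55, 60, 4200])
        print(triage([clip]))  # -> ['clip-123']: a sudden spike worth a closer look

A check like this does not determine whether a clip is manipulated; it only prioritizes which suddenly popular videos get a closer look, which is the triage role Nonnecke describes.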

She called California's new law admirable, but only "a baby step." As detection technology improves, it chases a moving target. Voice- and face-matching technology are developing rapidly. "If I get a call or Skype sounding or looking like my mother asking for my Social Security number, I might give it to her and then find out it wasn't my mother," Nonnecke said. People with limited resources are especially vulnerable. "People who don't have the means to have a software plugin to put on a device to detect deepfakes; yes, they might not be able to afford protection."

Crittenden believes that CITRIS can have the greatest impact by asking researchers and stakeholders to focus on ethical practices, offering control and detection recommendations to social media companies and the public, disseminating materials to politicians and legislative entities, and teaching digital literacy to students and consumers. Healthy skepticism and critical-thinking skills when reading online are low-tech techniques anyone can apply immediately.

Nonnecke believes citizens must weigh in on the debate by discussing artificial intelligence with state and federal legislators and supporting digital ethics training for future programmers and engineers. People opposed to regulating deepfakes out of concern for free speech, a legitimate concern, must better understand the difference between hateful or deceptive content and satire or parody, she believes. "Overall, there should be more tech ethics training, because all fields require a foundation of ethical practice in technology, not just STEM."

Tightening up the loopholes in AB 730, including its timing restrictions, misplaced responsibility, and inadequate remedies, is the legislative next step, she said.

But until government adequately responds to the crisis, she said, it helps to understand and anticipate deeply biological human behavior. "We're poised to believe things we can see, like videos," she said. "If you see something shocking, you have a propensity to pay attention to it. Biologically, we're primed to pay attention to scary things that might harm us. And there's heuristics: the more frequently we see something, the more you think it's true."
