US Attorneys General Call on X to Address Sexualized Deep Fakes

Looks like more legal trouble is coming for X, with various U.S. states taking regulatory action against the Elon Musk-owned app over the generation and dissemination of sexualized images.

This stems from X’s Grok chatbot stripping down photos of anyone, from well-known actors to random children, via its image generation capability, which became a trend on X early in the New Year. Indeed, at one stage, data indicated that Grok was generating over 6,000 sexualized images per day, all of which were publicly accessible in the app.

That prompted a major backlash in several regions, and even bans on both Grok and X in some areas. X initially stood firm in the face of the criticism, with Musk himself claiming that the backlash was not really about X, but about broader censorship, and an effort to stop the app from revealing bigger truths.

But, really, there’s no way to justify the generation of nude and sexualized images, and no need for this functionality to exist, regardless of any politically charged messaging you might want to attach to it. And as a result, amid the threat of further bans and restrictions, X eventually backed down, restricting Grok’s image generation capability to paying users only, while also implementing measures to stop the generation of these types of images.

But that may have come too late. Yesterday, the European Commission announced an investigation into Grok and xAI’s safeguards against misuse of its tools.

And now, a group of more than 37 U.S. attorneys general is also looking to take action against xAI.

As reported by Wired:

“On Friday, a bipartisan group of attorneys general published an open letter to xAI demanding it ‘immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of [non-consensual intimate images].’”

In the letter, the group raises serious concerns about “artificial intelligence produced deepfake non-consensual intimate images (NCII) of real people, including children.”

And while X has now taken action, the group is calling for more responsibility from Musk and his team.

“We recognize that xAI has implemented measures intended to prevent Grok from creating NCII and appreciate your recent meeting with several undersigned attorneys general to discuss these efforts […] Further, you claim to have implemented technical measures to prevent the @Grok account ‘from allowing the editing of images of real people in revealing clothing such as bikinis.’ But we are concerned that these efforts may not have completely solved the issues.”

Indeed, the Attorneys General further suggest that X’s AI tools were actually designed for this purpose, with built-in features that facilitate harmful usage.

“Grok was not only enabling these harms at an enormous scale but seemed to be actually encouraging this behavior by design. xAI purposefully developed its text models to engage in explicit exchanges and designed image models to include a ‘spicy mode’ that generated explicit content, resulting in content that sexualizes people without their consent.”

As a result, the group is calling for Elon and X to take more definitive measures to prohibit such use, including the removal of all avenues for generating such images, the removal of all such content that’s already been created, and the suspension of users who misuse Grok for these purposes.

The Attorneys General also want X to give users control over whether their content can be edited by Grok, “including at a minimum the ability to easily prohibit the @Grok account from responding to their posts or editing their images when prompted by another user.”

Which means more challenges for X in improving transparency, as well as expanded efforts to implement safeguards and restrictions on Grok usage.

Which, again, Elon Musk is not a fan of. It may take a bigger legal fight to make this happen, and Musk will no doubt use that fight as an opportunity to present himself as the face of free speech as government regulators look to crack down.

Elon’s main refrain in this instance has been that other apps offer the same capabilities, and that regulators aren’t going after other nudification and AI generation apps with the same vigor.

But the Attorneys General also address this:

“While other companies are also responsible for allowing NCII creation, xAI’s size and market share make it a market leader in artificial intelligence. Unique among the major AI labs, you are connecting these tools directly to a social media platform with hundreds of millions of users. So your actions are of utmost importance. The steps you take to prevent and remove NCII will establish industry benchmarks to protect adults and children against harmful deepfake non-consensual intimate images.”

It’s interesting to consider this push in light of Elon’s own very public, very loud stance against CSAM, with Musk announcing, shortly after taking over Twitter, that combating CSAM was “Priority #1” in his time at the app.

Musk had criticized Twitter’s former leadership for failing to address child sexual exploitation in the app, and he’s since claimed several major advances in addressing it on X.

Yet, in this instance, Musk wants to fight back, which seems to run counter to these claims.

I mean, clearly, the broader political angling around CSAM content has changed, given that it was once the primary focus of right-wing voters, many of whom would now prefer to overlook the Epstein files.

Maybe that’s altered Elon’s own position on this, though on the face of it, this should still be a major concern for this group.

Either way, X is now set to come under more scrutiny, in more regions, and with the effects potentially extending to xAI and Musk’s broader AI projects, this could have a big impact on his plans.

We’ll see how Musk responds, and whether further action will be sought on this front.  


What Are Sexualized Deep Fakes?

Sexualized deep fakes are AI-generated or AI-manipulated videos and images that impose a sexual character or context on individuals without their consent. These fabricated visuals exploit advanced AI technology to create realistic portrayals of people in sexualized scenarios, often for inappropriate or malicious purposes. Digitally sexualizing someone through deep fake technology raises serious ethical, legal, and psychological concerns.

The term sexualize means to attribute or impose a sexual character or context on someone or something, usually where it is irrelevant or unwanted [1], [2].

The Legal and Social Impact of Sexualized Deep Fakes

Sexualized deep fakes threaten personal privacy, reputations, and mental health, often resulting in harassment, defamation, or even blackmail. Victims struggle to protect themselves due to the convincing nature of these images and the speed at which they spread across social media platforms.

  • Reputation Damage: Victims face public scrutiny or career setbacks from fabricated sexual content.
  • Psychological Effects: Anxiety, depression, and trauma are common responses.
  • Legal Challenges: Existing laws may lag behind in addressing AI-generated sexualized content comprehensively.

Why US Attorneys General Are Pressuring X

Given the scale of the problem, US attorneys general have publicly called on X (formerly Twitter) to implement stronger policies and technologies to combat the proliferation of sexualized deep fakes on its platform. The calls include demands for:

  • Robust AI detection and moderation tools specifically targeting sexualized deep fake content.
  • Clear community guidelines explicitly prohibiting the unauthorized sexualized manipulation of individuals.
  • Swift removal and takedown procedures for identified sexualized deep fakes.
  • Enhanced user reporting features tailored towards sexualized media abuse (a minimal sketch of such a reporting flow follows this list).
  • Collaboration with law enforcement agencies and policymakers to safeguard victims.
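
As a rough illustration of that reporting demand, the sketch below shows how a platform might triage incoming reports so that NCII cases are reviewed ahead of routine ones. It is a minimal sketch using only the Python standard library; the severity table, Report record, and ModerationQueue class are hypothetical names, not a reflection of X’s actual systems.

```python
import heapq
import itertools
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical severity ranking: lower numbers are handled first, so
# NCII and CSAM reports jump ahead of routine categories.
SEVERITY = {"ncii": 0, "csam": 0, "harassment": 1, "spam": 2}

@dataclass(order=True)
class Report:
    priority: int
    seq: int  # tie-breaker so equal-priority reports stay first-in, first-out
    post_id: str = field(compare=False)
    category: str = field(compare=False)
    received_at: datetime = field(compare=False)

class ModerationQueue:
    """Priority queue that surfaces NCII reports ahead of routine ones."""

    def __init__(self) -> None:
        self._heap: list[Report] = []
        self._seq = itertools.count()

    def submit(self, post_id: str, category: str) -> None:
        priority = SEVERITY.get(category, 3)  # unknown categories go last
        report = Report(priority, next(self._seq), post_id, category,
                        datetime.now(timezone.utc))
        heapq.heappush(self._heap, report)

    def next_case(self) -> Report | None:
        return heapq.heappop(self._heap) if self._heap else None

queue = ModerationQueue()
queue.submit("post/123", "spam")
queue.submit("post/456", "ncii")
print(queue.next_case().category)  # -> "ncii", despite arriving second
```

The key design choice is simply that severity, not arrival order, determines review priority, with a sequence counter preserving first-in, first-out order within each severity level.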

Key Statements from Attorneys General

Several Attorneys General emphasized that sexualized deep fakes are a form of digital abuse that violates multiple privacy and harassment laws. They underscored X’s responsibility due to the platform’s reach and influence in disseminating content rapidly.

Challenges in Addressing Sexualized Deep Fakes on X

  • Detection Difficulty: Advanced AI can produce highly realistic and hard-to-identify manipulations.
  • Free Speech Concerns: Balancing content moderation with freedom of expression rights adds complexity.
  • Resource Limitations: Monitoring millions of posts daily requires significant technical and human resources.
  • Jurisdictional Issues: Cross-border content dissemination complicates enforcement of laws.

Technological Tools to Combat Sexualized Deep Fakes

To counter sexualized deep fakes effectively, X is encouraged to adopt and improve advanced detection mechanisms, including:

  • AI-powered pattern recognition to identify manipulated facial features and synthetic voices.
  • Blockchain-based content verification systems that trace original media authenticity (a minimal perceptual-hashing sketch of the underlying matching idea follows this list).
  • User behavior analytics to flag coordinated malicious campaigns distributing sexualized deep fakes.
  • Partnerships with third-party fact-checkers specialized in digital media verification.
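
One concrete building block behind both detection and takedown is perceptual hashing, the matching approach used by hash-sharing initiatives such as StopNCII: a fingerprint of a confirmed abusive image lets a platform catch re-uploads even after resizing or recompression. Below is a minimal sketch using the Pillow and imagehash Python libraries; the hash list and threshold are hypothetical placeholders, not values from any production system.

```python
# pip install Pillow imagehash
from PIL import Image
import imagehash

# Hypothetical list of perceptual hashes of already-confirmed NCII images.
# A real system would hold these in a shared, privacy-preserving database.
KNOWN_NCII_HASHES = [imagehash.hex_to_hash("f0e4d2c1b3a59687")]

MATCH_THRESHOLD = 8  # max Hamming distance to treat two images as the same

def matches_known_ncii(image_path: str) -> bool:
    """Return True if an upload is a near-duplicate of a known NCII image."""
    upload_hash = imagehash.phash(Image.open(image_path))
    # Subtracting two ImageHash objects gives their Hamming distance,
    # which stays small under resizing, recompression, and minor edits.
    return any(upload_hash - known <= MATCH_THRESHOLD
               for known in KNOWN_NCII_HASHES)

if matches_known_ncii("upload.jpg"):
    print("Block the upload and queue it for human review")
```

The point of hashing rather than storing images is that the platform never needs to retain the abusive content itself, only its fingerprint.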

Benefits of Addressing Sexualized Deep Fakes Promptly

  • Enhanced User Safety: Reduces harassment and abuse risk, promoting healthier online interactions.
  • Preservation of Reputation: Protects individuals from false sexualized portrayals that harm public image.
  • Legal Compliance: Ensures platform adherence to anti-harassment and privacy laws.
  • Trust & Credibility: Builds user confidence in the platform’s commitment to a safe community.

Practical Tips for Users to Protect Themselves

While platforms like X implement stronger controls, users can take proactive steps to defend against sexualized deep fakes:

  • Regularly Monitor Online Presence: Set up alerts for your name or images to catch unauthorized reposts early (a minimal alert-polling sketch follows this list).
  • Report Suspicious Content: Use X’s reporting tools promptly when encountering sexualized deep fake media.
  • Enhance Privacy Settings: Restrict who can view and share your images and personal information.
  • Educate Yourself: Understand what sexualized deep fakes are and how to identify them to avoid being misled.
  • Seek Legal Assistance: If you fall victim to sexualized deep fakes, consult professionals for advice on legal recourse.
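
For the monitoring tip, one low-effort option is to create a Google Alert for your name and poll the RSS feed it provides. A minimal sketch, assuming the feedparser library and an alert feed of your own (the URL below is a placeholder):

```python
# pip install feedparser
import time
import feedparser

# Placeholder: substitute the RSS feed URL Google Alerts generates when you
# create an alert for your name and choose "RSS feed" as the delivery method.
ALERT_FEED_URL = "https://www.google.com/alerts/feeds/YOUR_ID/YOUR_ALERT_ID"

def watch(feed_url: str, interval_seconds: int = 3600) -> None:
    """Poll an alert feed and print any result not seen before."""
    seen_links: set[str] = set()
    while True:
        for entry in feedparser.parse(feed_url).entries:
            if entry.link not in seen_links:
                seen_links.add(entry.link)
                print(f"New mention: {entry.title} -> {entry.link}")
        time.sleep(interval_seconds)

watch(ALERT_FEED_URL)
```

Note that alerts of this kind mostly catch text mentions of your name; image reposts are harder to track yourself, which is why reporting tools and platform-side hash matching (as sketched earlier) matter.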

Case Studies: Sexualized Deep Fakes and Platform Responses

Several high-profile cases have highlighted the danger and impact of sexualized deep fakes, pushing platforms toward more decisive action:

  • Celebrity Deep Fake Scandals: Multiple celebrities have been victims of AI-manipulated videos sexualizing their likeness, triggering public outcry and policy shifts on content moderation.
  • Political Fake Videos: Sexualized deep fakes have been weaponized to discredit public figures, leading to calls for stronger regulation.
  • Everyday Users Abused: Numerous ordinary individuals have reported deep fakes used in harassment and revenge porn, spotlighting the need for better user protection.

Future Outlook: Legislative and Platform Actions

Beyond urging X, US Attorneys General are exploring comprehensive legislative frameworks to outlaw and penalize the creation and distribution of sexualized deep fakes, including:

  • Enacting clear definitions of sexualized digital content abuse in law.
  • Mandating platforms to maintain transparency reports on sexualized content moderation (a minimal report-aggregation sketch follows this list).
  • Supporting funding for AI research focused on deep fake detection and countermeasures.
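
Mechanically, a transparency report is little more than an aggregation of moderation actions over time. A minimal sketch, assuming a hypothetical log format rather than any mandated schema:

```python
from collections import Counter
from datetime import date

# Hypothetical moderation log: (date of action, content category, action taken).
moderation_log = [
    (date(2026, 1, 12), "ncii", "removed"),
    (date(2026, 1, 30), "ncii", "removed"),
    (date(2026, 2, 3), "ncii", "account_suspended"),
    (date(2026, 2, 17), "spam", "removed"),
]

def quarter(d: date) -> str:
    """Map a date to a reporting quarter, e.g. 2026-Q1."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Count actions per (quarter, category, action) bucket.
report = Counter((quarter(d), category, action)
                 for d, category, action in moderation_log)

for (q, category, action), count in sorted(report.items()):
    print(f"{q}  {category:<6} {action:<18} {count}")
```

A mandated report would standardize the categories and cadence; the aggregation itself is trivial once the underlying log exists.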

It remains critical for users, platforms like X, and lawmakers to collaborate proactively to stem the tide of sexualized deep fakes and protect digital dignity and privacy.
