AI SAFETY REPORT

NEWS: The UK's Department for Science, Innovation and Technology (DSIT), together with the AI Security Institute, recently released the first-ever International AI Safety Report 2025.

WHAT’S IN THE NEWS?

Emerging AI-Related Harms

Existing Harms: Current AI misuse includes scams, online fraud, non-consensual intimate imagery (NCII), child sexual abuse material (CSAM), algorithmic bias, and privacy violations.

Emerging Threats: New dangers are surfacing, including AI-facilitated hacking, AI-enabled biological attacks, misinformation at scale, and potential large-scale displacement of jobs due to AI automation.

Gendered Nature of Deepfake Abuse

Deepfake technology is disproportionately used to target women and girls with non-consensual, pornographic content.

A 2019 study found that 96% of all deepfake videos online were pornographic, and nearly all featured female victims, pointing to a deeply gendered form of abuse.

Fake Content and Exploitation Risks

Malicious actors use AI-generated fake images, videos, and audio to:

Extort victims for money or compliance.

Scam individuals or organisations by impersonation or deception.

Psychologically manipulate victims into harmful actions.

Sabotage reputations by releasing altered or fake compromising content.

AI and Child Sexual Abuse Material (CSAM)

Several open-source datasets used to train AI image-generation tools (like Stable Diffusion) were found to contain CSAM, showing lapses in dataset curation.

In 2024, the Internet Watch Foundation (IWF) reported a 380% increase in confirmed reports of AI-generated CSAM — rising from 51 reports in 2023 to 245 reports in 2024.

Ineffectiveness of Detection Countermeasures

Common solutions like watermarking, warning labels, and metadata tagging have shown limited success in helping people consistently identify AI-generated fake content.

These measures can be easily removed or bypassed by technically skilled offenders, as the sketch below illustrates.
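
To make the fragility concrete, here is a minimal Python sketch (the filenames and the Pillow dependency are illustrative assumptions, not part of the report) showing that provenance metadata embedded in an image does not survive an ordinary re-save:

    # Why metadata-based provenance labels are fragile: a plain re-encode
    # drops EXIF data unless it is explicitly carried over.
    from PIL import Image

    original = Image.open("labelled.jpg")        # hypothetical AI-labelled image
    print(dict(original.getexif()))              # provenance tags visible here

    original.save("reencoded.jpg", quality=90)   # ordinary re-save, no exif= argument
    print(dict(Image.open("reencoded.jpg").getexif()))  # typically empty: {}

Watermarks embedded in the pixels themselves are harder to strip, though, as noted above, even these have shown only limited success.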

Definition and Scope of Digital Child Abuse

Digital child abuse involves the use of technology or digital platforms to exploit, manipulate, or harm children.

This includes:

Cyberbullying and grooming.

Non-consensual sharing of explicit images.

AI-generated CSAM, which is becoming increasingly common.

According to a study in The Lancet, roughly 1 in 12 children globally has experienced online sexual abuse.

Role of AI in Child Exploitation

a. AI-Generated CSAM:

AI tools such as deepfake and synthetic image generators can now create realistic child abuse material without directly involving real children, which makes detection harder.

These tools fuel demand and normalize such content in underground networks.

b. Grooming and Impersonation:

Offenders use AI chatbots and voice-cloning tools to impersonate children or trusted adults, gaining victims' trust before abusing them.

c. Automated Harassment:

AI tools automate mass-scale cyberbullying, deepfake blackmail, and bot-generated threats.

d. Data Exploitation:

Offenders harvest children’s data (e.g., photos, preferences, voices) from social media to train models or create synthetic abuse content.

Real-World Example

In 2024, South Korean authorities uncovered a major case where deepfake images of schoolgirls were generated using AI and circulated in secret online groups, illustrating the scale and ease of such crimes.

Impact of Digital Child Abuse

Emotional Trauma: Victims often suffer from anxiety, depression, or suicidal tendencies.

Loss of Privacy: Digital replicas of children can have long-lasting implications for their future digital identity.

Social Withdrawal: Victims tend to isolate themselves socially due to fear and humiliation.

Distrust in Technology: Parents might become overly restrictive, limiting positive digital learning experiences.

Increased Cybercrime Burden: AI-driven exploitation overwhelms law enforcement with new and complex crimes.

Major Challenges in Prevention

Accessibility of Tools: Generative AI tools such as deepfake generators are freely available on both the open web and the dark web.

Jurisdictional Complexity: Crimes span multiple countries, making cross-border legal enforcement difficult.

Legal Gaps: Most countries lack specific laws criminalising AI-generated CSAM, since such material may not depict a real, identifiable child.

Anonymity Tools: Offenders use VPNs, encrypted apps, and decentralized networks to evade detection.

Public Unawareness: Many parents and educators underestimate online grooming and sextortion risks.

Ethical Dilemmas in Detection

a. Privacy vs Protection:

Using AI to scan private data can prevent abuse but also invades privacy.

Example: A San Francisco father was flagged by Google for sending a medical image of his child to a doctor.

b. Surveillance vs Civil Liberties:

Mass surveillance measures risk violating civil liberties, especially in authoritarian settings.

Example: Scanning encrypted messages for CSAM could be misused to monitor political dissent.

c. False Positives:

AI tools may misclassify innocent content, leading to wrongful accusations and legal harm; the back-of-envelope calculation below shows how quickly false alarms accumulate at scale.

Example: Parents were falsely reported for sending legitimate family photos flagged by detection algorithms.
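
A back-of-envelope Python calculation (every number here is assumed purely for illustration) shows why scanning at scale produces mostly false alarms even when the classifier is accurate:

    # Illustrative base-rate arithmetic: when true cases are rare,
    # most flags raised by large-scale scanning are false alarms.
    prevalence = 1e-5            # assume 1 in 100,000 scanned images is abusive
    sensitivity = 0.99           # assume 99% of true cases are caught
    false_positive_rate = 0.001  # assume 0.1% of benign images are wrongly flagged

    images_scanned = 100_000_000
    true_cases = images_scanned * prevalence                            # 1,000
    true_flags = true_cases * sensitivity                               # 990
    false_flags = (images_scanned - true_cases) * false_positive_rate   # ~99,999

    precision = true_flags / (true_flags + false_flags)
    print(f"Correct flags: {true_flags:.0f}, false alarms: {false_flags:.0f}")
    print(f"Share of flags that are actually abusive: {precision:.1%}")  # ~1.0%

Under these assumed numbers, roughly 99 out of every 100 flagged images are innocent, which is precisely the wrongful-accusation risk described above.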

d. Unchecked Corporate Power:

Tech companies might take arbitrary action without transparency.

Example: Google disabled over 140,000 accounts in a 6-month period, raising concerns about due process.

Supreme Court Observations (India) – 2024

In Just Rights for Children Alliance vs. S. Harish, the Supreme Court held:

Viewing, downloading, or possessing child sexual exploitative and abuse material (CSEAM) is a criminal offence.

This falls under:

Section 15 of POCSO Act (criminalises possession of CSAM).

Section 67B of IT Act, 2000 (criminalises transmission and publication of CSAM).

Constructive possession applies: even deleted material can be punishable if the person had knowledge of and control over it.

Measures to Prevent Digital Exploitation in India

Website Blocking: Government blocks sites hosting extreme CSAM based on INTERPOL’s “Worst-of” list.

Dynamic Removal: ISPs in India dynamically remove CSAM based on real-time alerts from IWF (UK).

International Measures

Lanzarote Convention (Europe): Requires criminalisation of all forms of child sexual exploitation and abuse.

Internet Watch Foundation (UK): Tracks and removes online CSAM.

Google’s Safety API: Uses AI to detect and report abuse material.

UK Law: Makes AI-generated “pseudo-images” of child sexual abuse illegal.

Project Arachnid (Canada): AI-powered tool to detect and remove CSAM globally (a simplified sketch of the hash-matching idea behind such crawlers follows).
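
As background on how such crawlers work, public descriptions indicate that Project Arachnid matches crawled images against databases of previously verified material. The Python sketch below shows the basic hash-list matching idea; the hash entry and filename are hypothetical placeholders, and production systems use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, rather than the cryptographic hash used here:

    # Minimal sketch of hash-list matching, the core idea behind CSAM
    # detection crawlers. SHA-256 is illustrative only.
    import hashlib
    from pathlib import Path

    # Hypothetical hash list; in practice supplied by hotlines and analysts.
    KNOWN_BAD_HASHES = {"e3b0c44298fc1c149afbf4c8996fb924..."}  # placeholder

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def is_known_material(path: Path) -> bool:
        # A match identifies a previously verified file and triggers a
        # removal notice; it is not a judgement about novel content.
        return sha256_of(path) in KNOWN_BAD_HASHES

This design also explains a limitation noted earlier: hash matching only finds known files, so newly generated AI material evades it until it is verified and added to the list.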

Indian Legal Framework

a. POCSO Act, 2012

Section 15: Criminalises storage/possession of CSAM.

Section 43: Directs central/state governments to conduct public awareness campaigns.

b. IT Act, 2000

Section 67B: Criminalises publishing/browsing of child sexual content online.

c. Bharatiya Nyaya Sanhita (BNS)

Section 294: Penalises sale/public display of obscene material.

Section 295: Specifically criminalises the sale or exhibition of such material to children.

Way Forward

a. Legal Reform and Clarity

Amend the POCSO Act to replace "child pornography" with Child Sexual Abuse Material (CSAM).

Define "sexually explicit" under Section 67B of the IT Act for better enforcement.

Expand the definition of "intermediaries" to include VPNs, VPSs, and Cloud Services.

b. Criminalising AI-Generated CSAM

Introduce laws that explicitly criminalise AI-generated abuse material.

Example: The UK is the first country to criminalise AI-generated child sexual abuse images.

c. Global Cooperation

Support the UN Draft Convention on Countering the Use of ICTs for Criminal Purposes to promote international legal standards.

d. National Offender Registry

Create a national database of child abuse offenders to restrict their access to child-related jobs.

e. Education and Awareness

Promote digital literacy in schools through structured programs.

Example: The UK’s Education for a Connected World framework teaches children safe online practices through interactive lessons.

Source: https://www.thehindu.com/opinion/op-ed/digital-child-abuse-the-danger-of-ai-based-exploitation/article69404942.ece