An image uploaded in innocence can take on a life of its own: stripped of consent and context, it resurfaces on platforms like Telegram and Reddit, where strangers reframe and sexualize it. For many women, the violation begins long before they even know it has happened. According to NFHS-5, only 33.3% of women in India have ever used the internet, compared with 57% of men; among rural women, the figure drops to just 25%. This gap in access to technology fuels gender-based violence such as non-consensual image sharing and deepfakes. The National Cybercrime Reporting Portal recorded 76,657 incidents against women in 2025, a sharp 58% rise from 48,335 in 2024, with obscene material comprising 37,743 cases and explicit acts 19,703. The rise reflects how easily social media photos are scraped and morphed into explicit content, then shared across unmoderated Telegram channels and subreddits that evade takedowns.
Revenge Porn and Image Abuse
Revenge porn is more accurately described as non-consensual intimate image sharing: the circulation of private intimate photos and videos without consent, often by former partners, acquaintances or anonymous actors who obtain the images through hacking.
India has witnessed several such cases, including the Bois Locker Room case, which exposed how minors circulated morphed images of girls without consent, normalizing objectification in peer groups. The rise of artificial intelligence further fuels the problem.
Ananya Sharma* recalls discovering her images online. “It was just a normal photo I had posted, but seeing it reappear in an edited way, sexualised and shared by strangers, felt a complete violation,” said Sharma.
The viral Rashmika Mandanna deepfake exposed how easily realistic explicit content can be generated without any original intimate material. Deepfakes use AI to superimpose a person’s face onto another body, fabricating highly realistic images and videos of scenarios that never occurred. Indian actor Alia Bhatt was likewise featured in a deceptive deepfake video, later confirmed as false.
“Someone sent me a link, and that is when I saw it, my face was on a video that wasn’t me, being shared in Telegram groups with more than 500 people. It was terrifying to realise how easily it had spread and strangers were consuming it as if it was real,” said Sharma.
Online Grey Areas
Such images are often circulated anonymously on platforms like Reddit and Telegram, where spread is rapid and difficult to control. On Reddit, images are frequently posted in niche subreddits dedicated to sexualized and voyeuristic content. Many of these communities operate in ‘grey’ areas: when reported, they are removed, only to reappear under a different name. The platform’s upvote system further incentivises circulation, pushing such content to wider visibility in minutes.
Telegram poses a different challenge: encrypted channels and large group capacities make it a preferred space for sharing non-consensual images. Pavel Durov, CEO of Telegram, was indicted in France [his travel ban was lifted in 2025] for allegedly allowing criminal activity on the platform, including distribution of child sexual abuse material, drug trafficking and fraud. In 2022, the Delhi High Court ordered Telegram to disclose the identities of operators behind such channels. A government investigation into potential criminal misuse is, however, still underway.
Legal Limits in India
Unlike some other common law countries, India has no specific, standalone criminal offence for ‘revenge porn’ or non-consensual intimate image sharing. Courts instead rely mainly on Section 66E and Section 67 of the IT Act, which cover violation of privacy and obscenity, together with penal code provisions on defamation and obscenity, all poorly suited to this specific harm.
Section 66E criminalizes capturing or transmitting images of private areas without consent, but it focuses on the body rather than on non-consensual dissemination of intimate media in general. Section 67, on the other hand, treats all ‘obscene’ online material generically, making it hard to distinguish non-consensual, abusive distribution from adult content shared consensually.
Ali Ibrahim, an advocate at Delhi High Court, argues that the substance of the law is not the main problem. “The rise in cyber complaints does show that there has been an increase in awareness, and women are now not reluctant anymore to file complaints. However, a rise in complaints mostly never translates to a rise in convictions. Delays in prosecution are the norm, and I believe the tactics used by lawyers in courts are one of the causes,” said Ibrahim.
More importantly, he explains that “the high threshold of proof required in criminal prosecutions more often than not leads to the acquittal of offenders”. In criminal law, the burden on the prosecution is stringent: the accused’s guilt must be proven beyond reasonable doubt. In cybercrime cases such as non-consensual image sharing and deepfakes, gathering clear, direct and sound evidence can be challenging.
Any inconsistency or lack of definitive proof is enough for the court to give the benefit of the doubt to the accused. Even when violations have occurred and complaints are filed, many cases fail to meet this strict evidentiary standard, leading to acquittals and reinforcing the gap between reporting and actual justice.
Cyber Safety of Women
For victims, practical steps matter as much as the law. “Take screenshots before deleting anything out of panic. Report as soon as possible to the National Cybercrime Reporting Portal. Evidence is not usually the difficult part in cyber offences, as almost everything is retrievable. However, as a precautionary measure, it is best to keep one’s interactions on anonymous platforms like Reddit and Telegram limited”, said Ibrahim.
On platforms like Reddit, users can engage with and report content through communities like r/BanFemaleHateSubs and other similar subreddits that document and push for bans on women-hating spaces.
On platforms like Instagram and Facebook, features such as restricting who can share, remix or download content, turning off reshares to stories, and limiting who can tag or mention an account add friction to mass circulation.
“Periodic reverse image searches using tools like Google Lens or TinEye can help detect if images are reused elsewhere,” said Ibrahim. Experts broadly recommend keeping social media accounts private wherever possible, limiting the visibility of personal photos, and using reverse image search tools to track misuse. But these measures have limits: a private account does not guarantee protection from harm.
“Reporting abusive subreddits and users, blocking accounts and avoiding engagement with exploitative communities can help limit visibility and circulation,” said Ibrahim.
Cyber Sathi, a digital safety and awareness initiative, helps women navigate online risks, including non-consensual intimate image sharing, deepfakes and cyber-harassment, through workshops, helplines and educational toolkits that train women to recognize abusive content and report misuse on platforms like Reddit and Telegram.
Individual precautions can reduce risk but do not address the larger problem. Platforms must adopt stronger content moderation, quicker removal of abusive content and greater accountability, so that users are not left to manage such risks on their own.
(*Name changed on the request of the individual)
About the author(s)
Aditya Ansh is a freelance reporter based in New Delhi.

