LAHORE – Pakistani social media influencer Alina Amir says she’s reached her limit as AI-generated deepfake nude videos keep circulating online. She has repeatedly denied that the clips are real and asked people to take them down, yet they continue to resurface.
The 22-year-old creator, known for upbeat dance reels and lively short videos on TikTok and Instagram, says the fake content targets her name and career. “This is not just fake content, it’s harassment,” Amir said in a recent message shared across her social channels.
Who Is Alina Amir?
Alina Amir (born May 10, 2003, in Lahore, Pakistan) built a large following as a TikTok star and Instagram influencer. She’s known for lip-syncs, dance challenges, fashion posts, and everyday humor.
She has over 2 million followers on TikTok (@alinaamir1) and 3 million on Instagram (@alinaamiirr). That reach helped her grow quickly, but it also put her in the spotlight for the wrong reasons.
- Early life and rise online: Amir grew up in Lahore and liked modeling and acting from a young age. With support at home, she posted often and kept building momentum.
- Social media presence: She has a strong following across platforms, including:
  - TikTok: 2.4 million+ followers and hundreds of millions of likes.
  - Instagram: 3 million followers, with content managed by agencies like Primecast.
  - Snapchat and other platforms, where she chats with fans more directly.
- Public appearances and work: Amir has attended bridal couture weeks and movie premieres, and she has also worked with brands. Some of her viral reels, including Bollywood dialogue recreations, spread beyond Pakistan and gained traction in India and elsewhere.
Her public image has often been seen as upbeat and family-friendly. Still, that same visibility has made her an easy target for AI manipulation.
When the Alina Amir Nude Videos Started Appearing
Reports about the videos grew in January 2026. Several outlets, including Gulf News and Bollywood Life, covered Amir’s response to what she described as fake, leaked obscene videos.
- Key moments in the timeline:
  - January 2026: Clips that claimed to show Amir began spreading across social media, Telegram channels, and other spaces. Some posts described long videos (such as “5-minute” or “7-minute-11-second” clips) and framed them as private leaks.
  - Late January 2026: Amir spoke publicly and said the clips were AI-made deepfakes. She said she had seen “at least 100 such videos” and decided to address the issue so it wouldn’t grow further.
  - Ongoing into March 2026: Even after her statements, fresh uploads and re-posts keep appearing. The repeated cycle has added to her frustration. Influencers, including Umer Butt, have also weighed in and pointed to the emotional impact.
Amir said she stayed quiet at first because she didn’t want to give the videos more attention. However, the volume increased, and she felt she had to respond.
How Deepfake Technology Hurts Celebrities and Influencers
Deepfakes are AI-made videos or images that can place a real person’s face onto another body. In many cases, people use them to create non-consensual porn and to damage reputations. Amir’s experience matches a wider pattern seen with public figures around the world.
- How it often happens: People pull photos and videos from public accounts and use them to train AI tools. As a result, they can produce realistic-looking content that’s still completely fake.
- Real harm, even when it’s fake: Victims often deal with stress, shame, and fear for their safety. On top of that, the content can affect work opportunities and invite harassment. For young women with large audiences, the damage can spread fast through comments, rumors, and re-uploads.
Amir’s anger comes from how long this has gone on. “People continue to post these despite knowing they’re fake. It’s damaging not just me but sets a dangerous precedent,” she said.
What Platforms Are Doing About Deepfake Nude Videos
Big social platforms say they’re working to stop non-consensual deepfakes, but enforcement still varies. Removals can take time, and re-uploads often appear quickly.
- Meta (Instagram/Facebook): Meta uses detection systems to spot manipulated media and remove it. Its rules ban non-consensual intimate imagery, and it offers reporting and takedown requests for victims.
- TikTok: TikTok uses a mix of moderation teams and automated tools to flag deepfakes. It has removed accounts that share explicit deepfakes, and it also provides in-app reporting for harassment.
- Industry and legal efforts:
  - Watermarking and detection tools: Companies like Adobe and Microsoft build tools designed to identify AI-generated content.
  - Legal options: In Pakistan and nearby countries, cyber harassment and defamation laws may apply, although deepfake-specific rules are still changing.
  - Wider pressure on platforms: Governments have pushed for faster removals. Separate incidents, including those involving actress Rashmika Mandanna, also prompted renewed attention on how platforms respond.
Even with these steps, Amir and others say platforms need to move faster. They also want stronger checks, quicker AI flagging, and better coordination with law enforcement.
Amir asked supporters to report deepfake videos and not share them. “By not engaging, we starve this harassment of attention,” she said.
As AI tools improve, cases like Alina Amir’s show why stronger safeguards matter. For now, the influencer says she’s focused on protecting her name and continuing to post the content her fans followed her for in the first place.