Artificial Intelligence is revolutionizing the digital world, but with growing popularity comes growing misuse. What began as a creative trend on social media has turned into a cause for concern: the same AI tools behind the Ghibli-style images flooding Facebook, Instagram, X (formerly Twitter), and WhatsApp, namely ChatGPT and similar services, are now being used to generate fake Aadhaar cards, PAN cards, and even counterfeit bills.
The trend of creating Ghibli-style avatars using AI tools became a viral sensation in early 2025, with millions of users crafting and sharing imaginative, anime-inspired versions of themselves online. The launch of native image generation in GPT-4o, the model powering OpenAI's ChatGPT, supercharged the trend by letting users create high-quality visuals in seconds.
However, with great power comes potential for misuse.
In recent weeks, several users on the social media platform X have shared images of fake identity cards, including mock Aadhaar and PAN cards, allegedly generated using the AI image tools built into ChatGPT. One such post even showcased mock ID cards for historical figures such as Aryabhata, complete with ID numbers and QR codes mimicking real documents.
According to OpenAI, since the introduction of image generation capabilities within ChatGPT, users have already created over 700 million images, and the number is rapidly growing. While the majority of this content is harmless and fun, the emergence of deepfakes and fake documentation is a stark reminder of how easily AI can be misused.
Faking an identity is not just unethical; it is illegal. When tools like ChatGPT are used for such purposes, they pose a serious cybersecurity threat. Creating fake Aadhaar and PAN cards, even as a joke or satire, can lead to:
Identity theft
Financial fraud
Legal complications for innocent users
Potential loss of trust in digital services
Cyber experts have expressed concern that if such AI-generated fake documents fall into the wrong hands, they could be used to bypass Know Your Customer (KYC) checks, deceive organizations, or scam individuals.
AI tools like ChatGPT, Midjourney, DALL·E, and others are rapidly becoming part of everyday life. From writing scripts and preparing PPTs to conducting market research and tutoring, AI is helping millions save time and effort.
But as with every tool, intent matters.
On one hand, AI empowers users with creativity, efficiency, and productivity. On the other hand, misuse can lead to misinformation, fraud, and ethical dilemmas.
OpenAI and other AI companies are actively working to implement safeguards. Many platforms already:
Watermark AI-generated images
Restrict certain prompts related to document creation (a simplified illustration follows below)
Monitor unusual usage patterns through automated moderation systems
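To give a rough sense of what a prompt-level restriction might look like at its simplest, here is a hypothetical Python sketch of a keyword-based filter. The blocklist, patterns, and function names are invented for this illustration; production moderation systems, including OpenAI's, rely on trained classifiers and layered policy checks rather than hard-coded word lists.

```python
# Hypothetical sketch of a keyword-based prompt filter, for illustration only.
# Real moderation pipelines use machine-learning classifiers and policy models,
# not a fixed blocklist like this one.

import re

# Terms associated with identity-document generation (hypothetical list)
BLOCKED_PATTERNS = [
    r"\baadhaar\b",
    r"\bpan card\b",
    r"\bpassport\b",
    r"\bdriver'?s licen[cs]e\b",
    r"\bid card\b",
]


def is_document_prompt(prompt: str) -> bool:
    """Return True if the prompt appears to request an identity document."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)


if __name__ == "__main__":
    for prompt in [
        "Turn my photo into a Ghibli-style portrait",
        "Generate a realistic Aadhaar card with this name and photo",
    ]:
        verdict = "blocked" if is_document_prompt(prompt) else "allowed"
        print(f"{verdict}: {prompt}")
```

A filter this crude is trivially bypassed by rephrasing the request, which is one reason such checks are only a first layer alongside image watermarking and usage monitoring.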
Still, experts argue that stricter regulation and user awareness are key to preventing AI misuse.
The rise of fake documents created using AI should be a wake-up call for users, developers, and regulators alike. While it's fun to engage in the Ghibli-style art trend, crossing ethical or legal lines can have serious consequences.
As artificial intelligence continues to evolve, so must our understanding of its boundaries. Innovation should not come at the cost of integrity.