The AI-powered “Nano Banana” photo-editing trend, fueled by Google’s Gemini technology, has taken social media by storm and brought with it a host of unforeseen privacy and security concerns. What started as playful vanity has spiraled into urgent conversations about user data safety, cybercrime risk, and the ethical boundaries of AI.[web:2][web:5]
Gemini’s transformation features, which can turn selfies into cartoon figurines, vintage portraits, or Bollywood-inspired graphics, have already generated over half a billion images globally.[web:2] But a viral Instagram post by user Jhalakbhawani exposed a chilling vulnerability: the AI inexplicably reproduced a distinctive mole on her arm, a detail not visible in the photo she uploaded.[web:2] The incident, viewed millions of times, raised alarming questions about how much our selfies, and the world’s largest AI platforms, actually know about us.
Law enforcement and cybersecurity experts quickly amplified the warning. Indian Police Service officer VC Sajjanar posted a public advisory on X, urging users to think twice before uploading personal images and warning that “with just one click, the money in your bank accounts can end up in the hands of criminals”.[web:2] Jalandhar Rural Police echoed the caution, noting that the platform’s terms and conditions could allow Google to use uploaded photos for further AI training, potentially exposing users to identity theft and fraud.[web:2]
Google asserts that all Gemini-generated images carry SynthID watermarks, designed to distinguish AI-generated content from real photos.[web:2] But cybersecurity experts remain skeptical: the detection tools needed to verify these watermarks are not publicly accessible, and the watermarks themselves can be faked or removed.[web:2] “Watermarking’s real-world applications fail from the onset,” notes Ben Colman, CEO of Reality Defender.[web:2]
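SynthID’s actual design is proprietary and engineered to survive edits, so the skeptics’ point is easiest to see with a deliberately naive scheme. The Python sketch below is a toy least-significant-bit (LSB) watermark, not SynthID or any real product; it only illustrates the general failure mode the experts describe, where a single lossy re-encode erases a fragile mark:

```python
# Toy illustration of watermark fragility. This is NOT SynthID:
# real schemes are designed to survive compression and edits, but
# their detectors are not public, so users can't check for themselves.
import io
import numpy as np
from PIL import Image

def embed_lsb(img: Image.Image, payload: str) -> Image.Image:
    """Hide `payload` in the least significant bit of the red channel."""
    bits = np.array(
        [int(b) for ch in payload.encode() for b in f"{ch:08b}"], dtype=np.uint8
    )
    arr = np.array(img.convert("RGB"))
    red = arr[..., 0].flatten()
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | bits  # overwrite LSBs
    arr[..., 0] = red.reshape(arr.shape[:2])
    return Image.fromarray(arr)

def extract_lsb(img: Image.Image, n_chars: int) -> bytes:
    """Read `n_chars` bytes back out of the red channel's LSBs."""
    lsbs = np.array(img.convert("RGB"))[..., 0].flatten()[: n_chars * 8] & 1
    return bytes(
        int("".join(str(b) for b in lsbs[i : i + 8]), 2)
        for i in range(0, len(lsbs), 8)
    )

marked = embed_lsb(Image.new("RGB", (64, 64), "white"), "AI")
print(extract_lsb(marked, 2))  # b'AI': the mark survives lossless handling

buf = io.BytesIO()
marked.save(buf, format="JPEG", quality=85)  # one lossy re-encode...
buf.seek(0)
print(extract_lsb(Image.open(buf), 2))  # ...and the payload is almost surely gone
```

A robust watermark resists exactly this kind of manipulation, which is why the absence of a public detector matters: without one, users cannot verify whether a given image still carries its mark at all.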
The risks aren’t limited to adults. Common Sense Media recently rated Gemini’s platforms for children and teens as “high risk,” citing evidence that the company’s youth offerings are “essentially adult models with superficial safety features”.[web:2] In practice, young users could inadvertently encounter harmful advice, ranging from drug content to mental health misinformation, despite parents assuming otherwise.
Google’s 2025 updates to Gemini include policies focused on transparency and user control for EU and UK users.[web:5] However, these changes also mean that human reviewers, including third-party contractors, may “read, annotate, and process” user conversations, uploaded files, images, and even screen content from connected apps.[web:5] Google says it disconnects this data from user accounts before review, but under GDPR, pseudonymized data can still count as personal data and remain potentially re-identifiable.[web:5] Retention periods for Gemini app prompts and responses stretch to as long as three years, even after a user deletes them.[web:5]
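That GDPR caveat is concrete: hashing or tokenizing an identifier is pseudonymization, not anonymization, because anyone who can enumerate plausible inputs can re-link the record. A minimal Python sketch, with entirely hypothetical names and values:

```python
# Minimal sketch of why pseudonymized data can still be personal data
# under GDPR. All identifiers here are hypothetical.
import hashlib

def pseudonymize(email: str) -> str:
    """Replace an identifier with its SHA-256 digest. This is
    pseudonymization, not anonymization: the mapping is deterministic,
    so it can be reversed by brute-forcing plausible inputs."""
    return hashlib.sha256(email.lower().encode()).hexdigest()

stored_token = pseudonymize("jane.doe@example.com")  # the "disconnected" record

# Anyone holding a list of candidate emails re-identifies the record
# by recomputing the same digest:
candidates = ["john.roe@example.com", "jane.doe@example.com"]
print([c for c in candidates if pseudonymize(c) == stored_token])
# -> ['jane.doe@example.com']: the "de-identified" token is re-linked
```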
A July 2025 update rolled out Gemini’s ability to access WhatsApp, SMS, and phone call data on Android devices by default, regardless of whether Gemini Apps Activity is enabled.[web:6][web:4] While Google claims it does not read message content, Gemini can now respond to notifications and handle media directly on the device.[web:6][web:4] Users reacted with alarm, some resorting to technical measures such as disabling the Google app entirely to block Gemini’s access.[web:6][web:4]
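For readers who want the same control those users sought, the workaround can be scripted over Android’s adb debug bridge. This is a hedged sketch, not an official procedure: it assumes adb is installed with USB debugging enabled, and that Gemini ships under the commonly reported package name com.google.android.apps.bard, which you should verify on your own device before disabling anything:

```python
# Sketch: audit and (reversibly) disable the Gemini app over adb.
# Assumptions: adb is on PATH, USB debugging is enabled, and the
# package name below matches your device -- confirm it against the
# `pm list packages` output before disabling anything.
import subprocess

GEMINI_PKG = "com.google.android.apps.bard"  # commonly reported; verify locally

def adb(*args: str) -> str:
    """Run an adb command and return its stdout."""
    return subprocess.run(
        ["adb", *args], capture_output=True, text=True, check=True
    ).stdout

# 1. Confirm the package is actually present on the device.
installed = {
    line.removeprefix("package:").strip()
    for line in adb("shell", "pm", "list", "packages").splitlines()
}
if GEMINI_PKG in installed:
    # 2. Disable it for the current user (undo later with `pm enable`).
    print(adb("shell", "pm", "disable-user", "--user", "0", GEMINI_PKG))
```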
The Gemini Nano Banana trend is a microcosm of a much larger debate: the tradeoff between viral convenience and digital privacy.[web:2][web:5] As AI’s capabilities grow, so do the stakes for personal security, especially for children and other vulnerable populations.[web:2] Google’s privacy policies may be evolving, but users must remain vigilant, informed, and cautious.[web:2][web:5][web:6] The most effective safeguard, as always, is awareness, skepticism, and a measured approach to the next big viral craze.
Stay safe, stay skeptical, and think twice before you upload.