Harsh Varshney, a 31-year-old Google employee working on Chrome AI security, has shared critical insights into protecting personal data while using AI tools. With two years of experience on Google’s privacy team building infrastructure to protect user data, Varshney now focuses on securing Google Chrome from malicious threats, including hackers who use AI agents for phishing campaigns.
Varshney emphasizes that AI has become a silent partner in daily life, assisting with deep research, note-taking, coding, and online searches. However, his work has made him acutely aware of privacy concerns associated with AI usage. He outlines four essential habits for protecting data:
1. Treat AI Like a Public Postcard: Users should never share credit card details, Social Security numbers, home addresses, or personal medical history with AI chatbots. Information shared with public AI chatbots can be used to train future models, potentially resulting in “training leakage,” where a model memorizes personal information and regurgitates it in responses to other users. Data breaches at AI providers pose an additional risk. (A rough sketch of what a pre-submission check for this habit could look like appears after this list.)
2. Know Which ‘Room’ You’re In: Distinguishing between public AI tools and enterprise-grade models is crucial. While public AI models may use conversations for training, enterprise models typically don’t train on user conversations, making them safer for discussing work projects. Varshney uses enterprise models even for small tasks such as editing work emails, and avoids public chatbots for anything related to Google projects.
3. Delete Your History Regularly: AI chatbots maintain conversation histories, which should be deleted regularly on both enterprise and public models. Varshney discovered that an enterprise Gemini chatbot had remembered his exact address from an earlier request to refine an email, thanks to its long-term memory features. He recommends using “temporary chat” features (similar to incognito mode), available in ChatGPT and Gemini, when searching for sensitive information.
4. Use Well-Known AI Tools: Established AI tools like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s products are more likely to have clear privacy frameworks and guardrails. Users should review privacy policies and disable “improve the model for everyone” settings to prevent conversations from being used for training.
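For readers who want a concrete guardrail around the “public postcard” habit, the sketch below shows one way a pre-submission check could work: scan a draft prompt for obvious patterns such as card numbers or Social Security numbers before pasting it into a public chatbot. This is not a tool Varshney or Google describes; the regex patterns, the Luhn check, and the function names (`flag_pii`, `luhn_ok`) are illustrative assumptions, and a real screen would need to cover far more data types (names, addresses, medical details).

```python
# Illustrative sketch only: a rough pre-submission screen for obvious PII
# patterns before pasting a prompt into a public AI chatbot. Patterns and
# names here are assumptions for demonstration, not guidance from the source.
import re

# Candidate patterns for two of the data types mentioned in habit 1.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def luhn_ok(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum (card-like)."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return len(digits) >= 13 and checksum % 10 == 0


def flag_pii(prompt: str) -> list[str]:
    """Return human-readable warnings for obvious PII found in a draft prompt."""
    warnings = []
    if SSN_RE.search(prompt):
        warnings.append("possible Social Security number")
    for match in CARD_RE.finditer(prompt):
        # The Luhn check filters out most digit runs that are not card numbers.
        if luhn_ok(match.group()):
            warnings.append("possible credit card number")
    return warnings


if __name__ == "__main__":
    draft = "My card 4111 1111 1111 1111 was charged twice, can you draft a complaint?"
    for warning in flag_pii(draft):
        print(f"Hold on: {warning} detected; consider redacting before sharing.")
```

The point of the sketch is the habit, not the tooling: pause and look for anything you would not write on a postcard before the prompt leaves your machine.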
Key Quotes
“AI has quickly become a silent partner in our daily lives, and I can’t imagine life without AI tools.”
Harsh Varshney, a Google Chrome AI security team member, emphasizes how deeply integrated AI has become in everyday workflows, from research to coding, highlighting why privacy protection is now essential for millions of users.
“I treat AI chatbots like a public postcard. If I wouldn’t write a piece of information on a postcard that could be seen by anyone, I wouldn’t share it with a public AI tool.”
Varshney provides a simple but powerful mental model for users to evaluate what information is safe to share with AI chatbots, addressing the false sense of intimacy that can lead people to overshare sensitive data.
“Once, I was surprised that an enterprise Gemini chatbot was able to tell me my exact address, even though I didn’t remember sharing it.”
This revelation from Varshney demonstrates how AI’s long-term memory features can retain personal information from previous conversations, illustrating why regularly deleting chat history is crucial even when using enterprise-grade tools.
Our Take
What makes Varshney’s advice particularly credible is his dual perspective as both an AI power user and a security professional at one of the world’s leading AI companies. His revelation about being surprised by Gemini’s memory of his address is telling—if a Google AI security expert can be caught off-guard by data retention, average users are likely even more vulnerable. The “public postcard” analogy is brilliant in its simplicity and should become standard guidance for AI literacy programs. Most concerning is the training leakage phenomenon, which represents a systemic privacy risk that individual users cannot fully control. As AI models become more capable and memory-enabled, the gap between user expectations of privacy and actual data handling practices may widen. Organizations need to move faster on establishing clear AI usage policies, and regulators should prioritize transparency requirements around how conversational data is stored and used for model training.
Why This Matters
This insider perspective from a Google AI security expert is particularly significant as AI adoption accelerates across industries and personal use. With millions of users interacting with AI chatbots daily, understanding privacy risks has become essential. The revelation about “training leakage” highlights a critical vulnerability where personal information shared with AI models could inadvertently appear in responses to other users—a risk many consumers may not fully appreciate.
The distinction between public and enterprise AI models has major implications for businesses concerned about intellectual property protection. Reports of employees accidentally leaking company data to ChatGPT underscore the need for clear AI usage policies in workplaces. As AI tools become more sophisticated with long-term memory features, the potential for unintended data retention increases.
Varshney’s recommendations provide actionable guidance at a time when AI privacy regulations are still evolving. His emphasis on using established AI providers with clear privacy frameworks reflects growing concerns about data brokers and cybercriminals exploiting AI vulnerabilities. This story matters because it bridges the gap between AI’s tremendous utility and the practical steps users must take to protect themselves in an increasingly AI-integrated world.
Related Stories
- How to Comply with Evolving AI Regulations
- OpenAI Lost Nearly Half of Its AI Safety Team, Ex-Researcher Says
- Google’s ‘Ask for Me’ AI Phone Tool: A Game-Changer for Time Management
- Reddit Sues AI Company Perplexity Over Industrial-Scale Scraping
Source: https://www.businessinsider.com/google-ai-security-safe-habits-privacy-data-2025-12