6 Ways to Prevent Leaking Private Data Through Public AI Tools

Tanya Wetson-Catt • 16 January 2026

We all agree that public AI tools are fantastic for general tasks such as brainstorming ideas and working with non-sensitive customer data. They help us draft quick emails, write marketing copy, and even summarise complex reports in seconds. However, despite the efficiency gains, these digital assistants pose serious risks to businesses handling customer Personally Identifiable Information (PII).


Most public AI tools use the data you provide to train and improve their models. This means every prompt entered into a tool like ChatGPT or Gemini could become part of their training data. A single mistake by an employee could expose client information, internal strategies, or proprietary code and processes. As a business owner or manager, it’s essential to prevent data leakage before it turns into a serious liability.


Financial and Reputational Protection


Integrating AI into your business workflows is essential for staying competitive, but doing it safely is your top priority. The cost of a data leak resulting from careless AI use far outweighs the cost of preventative measures. The fallout can include devastating financial losses from regulatory fines, loss of competitive advantage, and long-term damage to your company's reputation.


Consider the real-world example of Samsung in 2023. Multiple employees at the company's semiconductor division, in a rush for efficiency, accidentally leaked confidential data by pasting it into ChatGPT. The leaks included source code for new semiconductors and confidential meeting recordings, which were then retained by the public AI model for training. This wasn't a sophisticated cyberattack; it was human error resulting from a lack of clear policy and technical guardrails. As a result, Samsung implemented a company-wide ban on generative AI tools to prevent future breaches.


6 Prevention Strategies


Here are six practical strategies to secure your interactions with AI tools and build a culture of security awareness.


1. Establish a Clear AI Security Policy


When it comes to something this critical, guesswork won’t cut it. Your first line of defence is a formal policy that clearly outlines how public AI tools should be used. This policy must define what counts as confidential information and specify which data should never be entered into a public AI model, such as social security numbers, financial records, merger discussions, or product roadmaps.


Educate your team on this policy during onboarding and reinforce it with quarterly refresher sessions to ensure everyone understands the serious consequences of non-compliance. A clear policy removes ambiguity and establishes firm security standards.
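To make a policy enforceable rather than aspirational, some teams also encode the "never enter" categories in machine-readable form so they can be checked automatically. The sketch below is a minimal, hypothetical Python example; the category names and detection patterns are illustrative assumptions, not a complete classifier, and a real deployment would need far richer detection.

```python
import re

# Hypothetical encoding of an AI use policy: each entry names a data
# category the policy forbids in public AI prompts, with a simple
# pattern that detects it. Patterns here are illustrative only.
POLICY_RULES = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "internal_project": re.compile(r"\bPROJECT-[A-Z0-9]+\b"),
}

def policy_violations(prompt: str) -> list[str]:
    """Return the names of policy categories found in a prompt."""
    return [name for name, pattern in POLICY_RULES.items()
            if pattern.search(prompt)]
```

A check like this can run in a browser extension or an internal proxy before a prompt ever leaves the company, turning the written policy into an active guardrail.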


2. Mandate the Use of Dedicated Business Accounts


Free, public AI tools often include hidden data-handling terms because their primary goal is improving the model. Upgrading to business tiers such as ChatGPT Team or Enterprise, Google Workspace, or Microsoft Copilot for Microsoft 365 is essential. These commercial agreements explicitly state that customer data is not used to train models. By contrast, free or Plus versions of ChatGPT use customer data for model training by default, though users can adjust settings to limit this.


These data privacy guarantees, which ensure your business inputs will not be used to train public models, establish a critical technical and legal barrier between your sensitive information and the open internet. With business-tier agreements, you're not just purchasing features; you're securing contractual AI privacy and compliance assurances from the vendor.


3. Implement Data Loss Prevention Solutions with AI Prompt Protection


Human error and intentional misuse are unavoidable. An employee might accidentally paste confidential information into a public AI chat or attempt to upload a document containing sensitive client PII. You can prevent this by implementing data loss prevention (DLP) solutions that stop data leakage at the source. Tools like Cloudflare DLP and Microsoft Purview offer advanced browser-level context analysis, scanning prompts and file uploads in real time before they ever reach the AI platform.


These DLP solutions automatically block data flagged as sensitive or confidential. For unclassified data, they use contextual analysis to redact information that matches predefined patterns, like credit card numbers, project code names, or internal file paths. Together, these safeguards create a safety net that detects, logs, and reports errors before they escalate into serious data breaches.
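The pattern-matching redaction these tools perform can be pictured with a small sketch. This is not how Cloudflare DLP or Microsoft Purview are implemented; it is a simplified Python illustration of the general idea, and the patterns and `[REDACTED:*]` placeholder format are assumptions for the example.

```python
import re

# Minimal sketch of prompt-level redaction, loosely modelled on what
# DLP tools do before a prompt leaves the browser. Patterns are
# illustrative; real DLP engines use validated, context-aware detectors.
REDACTION_PATTERNS = [
    ("CARD", re.compile(r"\b\d(?:[ -]?\d){12,15}\b")),   # card-like numbers
    ("EMAIL", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")),  # email addresses
]

def redact(prompt: str) -> tuple[str, int]:
    """Replace sensitive matches; return (clean_prompt, hit_count)."""
    hits = 0
    for label, pattern in REDACTION_PATTERNS:
        prompt, n = pattern.subn(f"[REDACTED:{label}]", prompt)
        hits += n
    return prompt, hits
```

The hit count is what feeds the detect-and-log safety net: each redaction event can be logged and reported so security teams see near-misses before they become breaches.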


4. Conduct Continuous Employee Training


Even the most airtight AI use policy is useless if all it does is sit in a shared folder. Security is a living practice that evolves as the threats advance, and memos or basic compliance lectures are never enough.


Conduct interactive workshops where employees practice crafting safe and effective prompts using real-world scenarios from their daily tasks. This hands-on training teaches them to de-identify sensitive data before analysis, turning staff into active participants in data security while still leveraging AI for efficiency.
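The de-identification habit those workshops teach can be demonstrated with a simple exercise: swap real identifiers for placeholders before a prompt goes to an AI tool, and keep a local mapping so the answer can be re-identified afterwards. The Python sketch below is a teaching aid under that assumption; the client name is invented, and real de-identification must also handle nicknames, addresses, and indirect identifiers.

```python
def deidentify(text: str, identifiers: list[str]) -> tuple[str, dict]:
    """Replace each identifier with a stable placeholder; keep a mapping."""
    mapping = {}
    for i, ident in enumerate(identifiers, start=1):
        placeholder = f"<CLIENT_{i}>"
        text = text.replace(ident, placeholder)
        mapping[placeholder] = ident  # stays local, never sent to the AI
    return text, mapping

def reidentify(text: str, mapping: dict) -> str:
    """Restore the original identifiers in the AI tool's answer."""
    for placeholder, ident in mapping.items():
        text = text.replace(placeholder, ident)
    return text
```

The point of the exercise is the workflow, not the code: sensitive names never leave the building, yet the employee still gets the AI's full drafting and analysis value.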


5. Conduct Regular Audits of AI Tool Usage and Logs


Any security program only works if it’s actively monitored. You need clear visibility into how your teams are using public AI tools. Business-grade tiers provide admin dashboards; make it a habit to review these weekly or monthly. Watch for unusual activity, patterns, or alerts that could signal potential policy violations before they become a problem.


Audits are never about assigning blame; they’re about identifying gaps in training or weaknesses in your technology stack. Reviewing logs can reveal which team or department needs extra guidance and highlight loopholes to refine and close.
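A weekly review like this is often just a small aggregation over a dashboard export. The Python sketch below assumes a hypothetical export format with `user`, `team`, and `flagged` fields; real admin dashboards vary by vendor, so treat this as an illustration of the triage step, not any particular product's API.

```python
from collections import Counter

def flag_summary(records: list[dict]) -> dict[str, int]:
    """Count flagged prompts per team to spot where extra training is needed."""
    counts = Counter(r["team"] for r in records if r.get("flagged"))
    # Highest-risk teams first, so the audit starts with the biggest gap.
    return dict(counts.most_common())
```

Run against a month of logs, a summary like this points the next refresher session at the team that actually needs it, which keeps the audit corrective rather than punitive.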


6. Cultivate a Culture of Security Mindfulness


Even the best policies and technical controls can fail without a culture that supports them. Business leaders must lead by example, promoting secure AI practices and encouraging employees to ask questions without fear of reprimand.


This cultural shift turns security into everyone’s responsibility, creating collective vigilance that outperforms any single tool. Your team becomes your strongest line of defence in protecting your data.


Make AI Safety a Core Business Practice


Integrating AI into your business workflows is no longer optional; it’s essential for staying competitive and boosting efficiency. That makes doing it safely and responsibly your top priority. The six strategies we’ve outlined provide a strong foundation to harness AI’s potential while protecting your most valuable data.



Take the next step toward secure AI adoption: contact us today to formalise your approach and safeguard your business.
