Our blog

by Tanya Wetson-Catt 9 March 2026
When you first move your data and computing resources to the cloud, the bills often seem manageable. But as your business grows, a worrying trend can appear: your cloud expenses start climbing faster than your revenue. This is not just normal growth; it is a phenomenon called cloud waste, the hidden drain on your budget lurking in your monthly cloud invoice.

Cloud waste happens when you spend money on resources that do not add value to your business. Examples include underused servers, storage for completed or abandoned projects, and development or testing environments left active over the weekend. It is like keeping every piece of equipment in your factory running all the time, even when it is not needed. The cloud makes it easy to spin up resources on demand, but the same flexibility makes it easy to forget to turn them off. Most providers use a pay-as-you-go model, so the billing meter is always running.

Controlling cloud waste is not just about saving money. Every dollar you save can be reinvested in innovation, stronger security, or your team.

The Hidden Sources of Your Leaking Budget

Cloud waste can be surprisingly easy to overlook. A common example is over-provisioning: you launch a virtual server for a project, thinking you might need a larger instance just to be safe, and then forget to scale it down. That server keeps running and billing you every hour, month after month. Orphaned resources are another common drain, especially in companies with many projects or large teams. When a project ends, do you remember to delete the storage disks, load balancers, or IP addresses that were used? Often, they stay active indefinitely. Idle resources, like databases or containers that are set up but rarely accessed, quietly add up over time.
According to a 2025 report by VMware that drew responses from over 1,800 global IT leaders, about 49% of respondents believe that more than 25% of their public cloud expenditure is wasted, while 31% believe that waste exceeds 50%. Only 6% of respondents believe they are not wasting any cloud spend.

The FinOps Mindset: Your Financial Control Panel

Fixing this level of cloud waste requires more than a one-time audit. It requires a cultural shift known as FinOps: the practice of bringing financial accountability to the variable spend model of the cloud. It is a collaborative effort where finance, technology, and business teams work together to make data-driven spending decisions. A FinOps strategy turns cloud cost from a static IT expense into a dynamic, managed business variable. The goal is not to minimise cost at all costs, but to maximise business value from every cloud dollar spent.

Gaining Visibility: The Non-Negotiable First Step

You can’t manage what you don’t measure, so start with the native tools your cloud provider offers. Explore their cost management consoles and take these steps to create accountability and track what’s driving expenses:

Use tagging consistently to make filtering, organising, and tracking costs easier. Assign every resource to a project, department, and owner.

Consider third-party cloud cost optimisation tools for deeper insights. They can automatically spot waste, recommend right-sizing actions, and consolidate data into a single dashboard if you’re using multiple cloud providers.

Implementing Practical Optimisation Tactics

Once you have visibility, you can act, and the easiest place to start is with the low-hanging fruit. For example:

Automatically schedule non-production environments like development and testing to turn off during nights and weekends.

Implement storage lifecycle policies to move old data to lower-cost archival tiers or delete it after a set period.
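The scheduling tactic above is easy to reason about in code. The sketch below is a minimal, hypothetical illustration; the environment names and office hours are assumptions for the example, not any provider's actual API:

```python
from datetime import datetime

# Hypothetical policy: non-production environments run only
# 07:00-19:00, Monday to Friday.
WORK_START, WORK_END = 7, 19
WEEKDAYS = range(0, 5)  # Monday=0 .. Friday=4

def should_be_running(env: str, now: datetime) -> bool:
    """Decide whether an environment should be up at the given moment."""
    if env == "production":
        return True  # never auto-stop production
    return now.weekday() in WEEKDAYS and WORK_START <= now.hour < WORK_END

# Saturday afternoon: a dev environment should be stopped,
# while production stays untouched.
print(should_be_running("dev", datetime(2026, 3, 7, 14, 0)))
```

A scheduler job would call a check like this periodically and stop or start instances accordingly, which is exactly how "off during nights and weekends" turns from a policy sentence into an automated control.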
Adjust the size of your servers by checking how much they are actually used. If the CPU is used less than 20% of the time, the server is larger than necessary; replace it with a smaller, more affordable option.

Leveraging Commitments for Strategic Savings

Cloud providers offer substantial discounts, like AWS Savings Plans or Azure Reserved Instances, when you commit to using a consistent level of resources for one to three years. For predictable workloads, these commitments are the most effective way to avoid paying full list price. The key is to make these purchases after you have right-sized your environment. Committing to an oversized instance just locks in waste. Optimise first, then commit.

Making Optimisation a Continuous Cycle

Managing cloud costs is not a one-time project; it’s an ongoing cycle of learning, optimising, and operating. Set up regular check-ins, monthly or quarterly, where stakeholders review cloud spending against budgets and business goals. Give your teams access to their own cost data. When developers can see the real-time impact of their architectural decisions, they become strong partners in reducing waste.

Scale Smarter, Not Just Bigger

The cloud offers elastic efficiency, but managing waste ensures you capture that benefit fully. It frees up capital to invest in your real business goals instead of letting it disappear into unnecessary cloud spend. As you plan for growth in 2026, make cost intelligence a core part of your strategy. Use data to guide provisioning decisions and set up automated controls to prevent waste before it starts. Reach out today for a cloud waste assessment, and we’ll help you build a sustainable FinOps practice.

Article FAQ

What is the most common type of cloud waste?
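The right-sizing rule of thumb above can be expressed as a quick check over utilisation samples. This is a minimal sketch under stated assumptions, not a vendor tool: the 20% figure comes from the text, the "more than half the samples" interpretation and the sample data are invented for illustration:

```python
def recommend_downsize(cpu_samples, threshold=20.0):
    """Flag a server as oversized when it is mostly idle.

    cpu_samples: CPU utilisation percentages collected over a billing period.
    Returns True when more than half the samples fall below `threshold`.
    """
    if not cpu_samples:
        return False  # no data, no recommendation
    under = sum(1 for s in cpu_samples if s < threshold)
    return under / len(cpu_samples) > 0.5

# A mostly idle server gets flagged for a smaller instance size.
print(recommend_downsize([5.0, 8.2, 12.1, 55.0, 3.4]))
```

In practice you would feed this from your provider's monitoring metrics rather than a hand-written list, but the decision logic is the same.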
The most common type of cloud waste is idle or underutilised compute resources, such as virtual machines, containers, or databases, that are running but not actively serving a meaningful workload, often left on accidentally or “just in case.”

Can cloud waste really make a big difference to my bottom line?

Absolutely. Industry reports consistently show that enterprises waste an average of 30% of their cloud spend. For a growing small business, reclaiming even 15–20% of your cloud bill can translate to thousands of dollars annually for reinvestment.

Are reserved instances always the right choice to save money?

They are excellent for stable, predictable workloads running 24/7. However, they are not ideal for spiky, experimental, or short-term projects. The key is to analyse your usage patterns for at least a month before making a commitment.

Is automating shutdowns safe for my production systems?

Automation should be applied cautiously to production. Focus initial automation efforts on non-production environments (development, testing, staging). For production, use scaling policies that automatically add or remove capacity based on real-time demand (like auto-scaling groups), which is safer than blanket shutdowns.
by Tanya Wetson-Catt 2 March 2026
AI chatbots can answer questions. But now picture an AI that goes further, updating your CRM, booking appointments, and sending emails automatically. This isn’t some far-off future. It’s where things are headed in 2026 and beyond, as AI shifts from reactive tools to proactive, autonomous agents.

This next wave of AI is called “Agentic AI.” It describes AI that can set a goal, figure out the steps, use the right tools, and get the job done on its own. For a small business, that could mean an AI that takes an invoice from inbox to paid, or one that runs your whole social media presence. The upside is massive efficiency, but it also means you need to be prepared. When AI gets more powerful, having the right controls matters just as much.

What Makes an AI “Agentic”?

Think of the difference between a tool and an employee. A chatbot is a tool you use to help you with tasks while you stay in control. An AI agent, on the other hand, is more like a digital employee you give direction to. It has access to systems, can make decisions within set boundaries, and learns from outcomes. A research article on the evolution and architecture of AI agents explains the big shift like this: AI is moving from tools that wait for instructions to systems that work toward goals on their own. Instead of just helping with tasks, AI starts doing the work, making it possible to hand off whole processes and collaborate with it like a teammate.

The 2026 Opportunity for Your Business

For small businesses, this is about real leverage. Agentic AI can work around the clock, clear out repetitive bottlenecks, and cut down errors in routine processes. That means things like personalising customer experiences at scale or even adjusting supply chains in real time become possible. And this isn’t about replacing your team. It’s about levelling them up. AI takes the busywork so your people can focus on strategy, creativity, tough problems, and relationships, the things humans do best.
Your role shifts too, from doing everything yourself to guiding and supervising your AI.

What You Need Before You Launch Agentic AI

Before you hand over your processes to an AI agent, you need to make sure those processes are rock solid. The reasoning is simple: AI will amplify whatever it touches, order or chaos, with equal efficiency. That’s why preparation is key. Start with this checklist:

1. Clean and Organise Your Data: AI agents make decisions based on the data you give them. Garbage in means not just garbage out; it can lead to major errors. Audit your critical data sources first.

2. Document Workflows Clearly: If a human can’t follow a process step by step, an AI won’t be able to either. Map out each workflow in detail before you automate.

Building Your Governance Framework

Just like with human team members, delegating to an AI agent requires oversight. That means setting up clear guardrails by asking a few key questions:

What decisions can the AI agent make on its own?

When does it need human approval or guidance?

What are its spending limits if it handles finances?

Which data sources is it allowed to access?

Answering these questions lets you build a framework that becomes your company’s rulebook for its “digital employees.” Security is another critical piece. Every AI agent needs strict access controls, following the principle of least privilege. Just as you wouldn’t give an intern full access to the company bank account, you must carefully define which systems and data each agent can touch. Regular audits of agent activity are now a non-negotiable part of good IT hygiene.

Start Preparing Your Business Today

You don’t have to deploy an AI agent immediately, but you can start laying the groundwork today. Start by identifying three to five repetitive, rules-based workflows in your business and document them in detail. Then, clean up and centralise the data those workflows rely on. Try experimenting with existing automation tools as a stepping stone.
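Those guardrail questions translate naturally into policy checks that run before an agent acts. The sketch below is purely illustrative; the policy values, field names, and the three-way outcome are invented for the example, not taken from any agent platform:

```python
# Hypothetical guardrail policy for a finance-handling agent.
POLICY = {
    "spend_limit": 500.00,                   # max spend without human approval
    "allowed_sources": {"crm", "invoices"},  # data the agent may read
}

def review_action(action, policy=POLICY):
    """Screen a proposed agent action.

    Returns 'approve', 'escalate' (route to a human), or 'deny'.
    """
    if action.get("data_source") not in policy["allowed_sources"]:
        return "deny"  # least privilege: unknown data source
    if action.get("amount", 0.0) > policy["spend_limit"]:
        return "escalate"  # over the spending limit -> human approval
    return "approve"

# A routine, in-policy action sails through; a large payment is
# escalated; a request touching an unapproved data source is refused.
print(review_action({"data_source": "crm", "amount": 120.0}))
```

The point is that "what can it decide alone?" and "what are its spending limits?" become explicit, auditable code paths rather than hopes.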
Platforms that connect your apps, like Zapier or Make, let you practice designing triggered, multi-step actions. Thinking this way is the perfect training ground for an agentic AI future.

Embracing the Role of Strategic Supervisor

The businesses that will thrive are the ones that learn to manage a blended workforce of humans and AI agents. Research from Stanford University suggests that key human skills are shifting, from information-processing to organisational and interpersonal abilities. In a world with agentic AI, leadership means setting agent goals, defining ethical boundaries, providing creative direction, and interpreting outcomes. Agentic AI is a true force multiplier, but it depends on clean data and well-defined processes. It rewards careful preparation and punishes the hasty. By focusing on data integrity and process clarity now, you position your business not just to adapt, but to lead. Contact us today for a technology consultation on AI integration. We can help you audit workflows and create a roadmap for reliable, effective adoption.

Article FAQ

What is a simple example of Agentic AI in a small business?

A good example is an AI agent that monitors inventory levels. When stocks run low, it contacts pre-approved suppliers, negotiates prices based on preset limits, and places a purchase order, all autonomously.

Are AI agents expensive to implement for small businesses?

Not necessarily. Most AI agents operate on a subscription model, and there are many open-source solutions that you can self-host and run locally. Typically, the larger cost is not the technology, but investing in preparing your data and workflows for use by the AI agent.

What is the biggest risk of using autonomous AI agents?

The biggest risk is “unchecked autonomy,” which leads to automation chaos.
Basically, implementing an AI agent without clear limits, oversight, and audit logs could lead to financial loss, reputational damage, and security breaches if the agent makes erroneous decisions or is manipulated.
by Tanya Wetson-Catt 25 February 2026
The modern office extends far beyond traditional cubicles or open-plan spaces. Since remote work became popularised in the COVID and post-COVID era, employees now find themselves working from their homes, libraries, bustling coffee shops, and even vacation destinations. These environments, often called “third places,” offer flexibility and convenience but can also introduce risks to company IT systems.

With remote work now a permanent reality, businesses must adapt their security policies accordingly. A coffee shop cannot be treated like a secure office, as its open environment introduces different types of threats. Employees need clear guidance on how to stay safe and protect company data. Neglecting security on public Wi-Fi can have serious consequences, as hackers often target these locations to exploit remote workers. Equip your team with the right knowledge and tools, and enforce a robust external network security policy to keep company data safe.

The Dangers of Open Networks

Free internet access is a major draw for remote workers frequenting cafes, malls, libraries, and coworking spaces. However, these networks rarely have encryption or strong security, and even when they do, they lack the specific controls that would be present in a secure company network. This makes it easy for cybercriminals to intercept network traffic and steal passwords or sensitive emails in a matter of seconds.

Attackers often set up fake networks that look legitimate. They might give them names such as “Free Wi-Fi” or a name resembling a nearby business, such as a coffee shop or café, to trick users. Once connected, the hacker who controls the network sees everything the employee sends. This is a classic “man-in-the-middle” attack. It is critical to advise employees never to rely on open connections. Networks that require a password may still be widely shared, posing significant risks to business data.
Exercise caution at all times when accessing public networks.

Mandating Virtual Private Networks

The most effective tool for remote security is a VPN. A Virtual Private Network encrypts all data leaving the laptop by creating a secure tunnel through the unsecured public internet. This makes the data unreadable to anyone trying to snoop. Providing a VPN is essential for remote work, and employees should be required to use it whenever they are outside the office. Ensure the software is easy to launch and operate, as overly complex tools may be ignored. Whenever possible, configure the VPN to connect automatically on employee devices, eliminating human error and ensuring continuous protection. At the same time, enforce mandatory VPN usage by implementing technical controls that prevent employees from bypassing the connection when accessing company servers.

The Risk of Visual Hacking

Digital threats are not the only concern in public spaces, since someone sitting at the next table can easily glance at a screen. Visual hacking involves stealing information just by looking over a shoulder, which makes it low-tech but highly effective and hard to trace. Employees often forget how visible their screens are to passers-by, and in a crowded room full of prying eyes, sensitive client data, financial spreadsheets, and product designs are at risk of being viewed and even covertly photographed by malicious actors.

To address this physical security gap, issue privacy screens to all employees who work remotely. Privacy screens are filters that make laptop and monitor screens appear black from the side, so only the person sitting directly in front can see the content. Some devices come with built-in hardware privacy screens that obscure content so that it cannot be viewed from an angle.

Physical Security of Devices

Leaving a laptop unattended is a recipe for theft.
In a secure office, you might walk away to get water or even leave the office and expect to find your device in the same place, untouched. In a coffee shop, that same action can cost you a device, since thieves are always scanning for distracted victims and are quick to act. Your remote work policy should stress the importance of physical device security. Employees must keep their laptops with them at all times and never entrust them to strangers. A laptop can be stolen and its data accessed in just seconds. Encourage employees to use cable locks, particularly if they plan to remain in one location for an extended period. While not foolproof, locks serve as a deterrent, especially in coworking spaces where some level of security is expected. The goal is to make theft more difficult, and staying aware of the surroundings helps employees assess potential risks.

Handling Phone Calls and Conversations

Coffee shops can be noisy, but conversations still travel through the air. Discussing confidential business matters in public is risky, as you never know who might be listening. Competitors or malicious actors could easily overhear sensitive information. Employees should avoid discussing sensitive matters in these “third places.” If a call is necessary, they should step outside or move to a private space, such as a car. While headphones prevent others from hearing the other side, the employee’s own voice can still be overheard.

Creating a Clear Remote Work Policy

Employees shouldn’t have to guess the rules. A written policy clarifies expectations, sets standards, and supports training and enforcement. Include dedicated sections on public Wi-Fi and physical security, and explain the reasoning behind each rule so employees understand its importance. Make sure the policy is easily accessible on the company intranet. Most importantly, review this policy annually as technology changes. As new threats emerge, your guidelines must also evolve to counter them.
Make routine updates to the policy, and reissue the revised versions to keep the conversation about security alive and ongoing.

Empower Your Remote Teams

While working from a “third place” offers flexibility and a morale boost, it also requires a higher level of vigilance. This makes prioritising public Wi-Fi security and physical awareness non-negotiable, and you must equip your team to work safely from anywhere. With the right tools and policies, you can manage the risks while enjoying the benefits of remote work. Success comes from balancing freedom with responsibility, and well-informed employees serve as your strongest line of defence. Protect your data, no matter where your team works.

Is your team working remotely without a safety net? We help businesses implement secure remote access solutions and policies, ensuring your data stays private, even on public networks. Call us today to fortify your remote workforce.
by Tanya Wetson-Catt 20 February 2026
Time moves fast in the world of technology, and operating systems that once felt cutting-edge are becoming obsolete. With Microsoft having set the Windows Server 2016 End of Support deadline for January 12, 2027, the clock is ticking for businesses that use this operating system. Once support ends, Microsoft will no longer provide security updates or patches, leaving your business systems vulnerable. It’s not just about missing new features; continuing to use unsupported software significantly increases the risk of cyberattacks.

If your systems are still on Windows Server 2016, now is the time to plan your upgrade. With about a year until support ends, waiting until the last minute can lead to rushed decisions and higher costs.

Understanding the Security Implications

When support ends, the protection provided by security updates and patches disappears, as Microsoft will no longer fix bugs or vulnerabilities. Hackers often target unsupported systems, knowing any new exploits will go unpatched and open the door to attacks. Legacy systems put IT administrators in a tough spot. Without vendor support, defending against threats becomes nearly impossible, compliance with industry regulations is compromised, and running unsupported software can lead to failed audits. Additionally, customer data on servers running this operating system is vulnerable to theft and ransomware.

The cost of a breach far outweighs the cost of upgrading. Using unsupported systems is like driving a faulty, uninsured car: failure is inevitable. The question isn’t if it will happen, but when.

The Case for Cloud Migration

With the end-of-support deadline approaching, businesses face a choice: purchase new physical servers that run the latest Windows Server editions, or migrate their infrastructure to the cloud.
Investing in new hardware and software comes with substantial upfront costs and locks you into that capacity for five years, the typical span of mainstream support for Windows Server, plus an additional five years for Long-Term Servicing Channel (LTSC) releases. A cloud migration strategy, on the other hand, offers a more flexible alternative. Platforms such as Microsoft Azure or Amazon’s AWS allow you to select virtualised computing resources such as servers and storage, which can scale as needed. On these platforms, you only pay for what you use, transforming your IT spending from capital expenditure to operating expense.

The cloud provides greater reliability and disaster recovery, eliminating concerns about hard drive failures in your server rack. Cloud providers handle the management and upgrades of the physical infrastructure, freeing your IT team to focus on driving business growth.

Analyse Your Current Workloads

Before moving to the cloud, it’s essential to know what you’re working with. Take inventory of all applications running on your Windows Server 2016 machines. While some are cloud-ready, others may need updates or reconfiguration. Identify which workloads are critical to your daily operations and prioritise them in your migration plan. You may also discover applications you no longer need, making this an ideal time to streamline and clean up your environment. When in doubt, consult your software vendors to confirm compatibility, as they might have specific requirements for newer operating systems. Gathering this information early helps you avoid surprises during the actual migration.

Create a Phased Migration Plan

When transitioning to a new system, moving everything at once is risky; ‘big bang’ migrations often cause downtime and confusion. The best approach is a phased migration to manage risk effectively.
Begin with low-impact workloads to test the process, then proceed to medium and high-impact workloads once you’re confident everything runs smoothly. Set a realistic timeline that beats the end-of-support date by a significant margin, and work backward from that date. This approach allows plenty of buffer time for testing and troubleshooting, since rushed migrations often result in mistakes and security gaps. Communicate the schedule to your staff clearly; they need to know when maintenance windows will occur so that they can manage their workflows effectively. Managing expectations is just as important as managing servers, and you don’t want to get in your own way. A smooth transition requires everyone to be informed and on the same page.

Test and Validate

Once you migrate a workload, it’s essential to verify that it functions as expected. Key questions to ask include: Does the application launch correctly? Can users access their data without permission errors? Testing is the most critical phase of any migration. After migration, run extensive performance benchmarks to compare the new system with the old one. The cloud should offer equal or better speed, and if things are slow, you might need to adjust resources. Optimisation will be a normal part of the migration process until you find the balance that works for you.

The summarised steps for a successful migration include:

Audit all current hardware and software assets

Choose between an on-premise upgrade or a cloud migration

Back up all data securely before making changes

Test applications thoroughly in the new environment

Do not declare victory until users confirm everything is working.

The Cost of Doing Nothing

Ignoring the end-of-support deadline is not a viable strategy. Some businesses hope to delay until the last minute and then rush a migration, but this is extremely risky.
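The phased approach described above amounts to ordering your workload inventory by impact tier. A minimal sketch of grouping workloads into migration waves; the tier labels and workload names are hypothetical examples, not a recommendation for any specific tooling:

```python
# Migrate low-impact workloads first to test the process,
# then medium, then high-impact once confidence is established.
WAVE_ORDER = {"low": 1, "medium": 2, "high": 3}

def plan_waves(workloads):
    """Group workloads (name -> impact tier) into ordered migration waves."""
    waves = {1: [], 2: [], 3: []}
    for name, impact in sorted(workloads.items()):
        waves[WAVE_ORDER[impact]].append(name)
    return [waves[1], waves[2], waves[3]]

# Example inventory (hypothetical names): the low-impact file share and
# test environment go first, the critical CRM database goes last.
print(plan_waves({"file-share": "low", "crm-db": "high",
                  "test-env": "low", "intranet": "medium"}))
```

Even a simple ordering like this forces the inventory conversation early, which is where most migration surprises are caught.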
Cybercriminals constantly target outdated, vulnerable systems, often using automated bots to scan for weaknesses. If you continue using Windows Server 2016 past the extended support date, you may need to purchase Extended Security Updates. While Microsoft offers this service, it is extremely costly, and the price rises each year, making it more a penalty for delay than a sustainable long-term solution.

Act Now to Modernise Your Infrastructure

If your business still relies on Windows Server 2016, the end of support marks a pivotal moment for your IT strategy; upgrading your technology stack is no longer optional. Whether you choose new hardware or a cloud solution, decisive action is required. Take this opportunity to enhance your legacy system’s security and efficiency, ensuring your modern business runs on a modern infrastructure. Don’t let time compromise your data’s safety; plan your migration today and safeguard your future.

Concerned about the approaching Windows Server 2016 end-of-support deadline? We specialise in smooth migrations to the cloud and modern server environments. Let us take care of the technical heavy lifting; contact us today to begin your upgrade plan.
by Tanya Wetson-Catt 16 February 2026
For years, enabling Multi-Factor Authentication (MFA) has been a cornerstone of account and device security. While MFA remains essential, the threat landscape has evolved, making some older methods less effective. The most common form of MFA, four- or six-digit codes sent via SMS, is convenient and familiar, and it’s certainly better than relying on passwords alone. However, SMS is an outdated technology, and cybercriminals have developed reliable ways to bypass it. For organisations handling sensitive data, SMS-based MFA is no longer sufficient. It’s time to adopt the next generation of phishing-resistant MFA to stay ahead of today’s attackers.

SMS was never intended to serve as a secure authentication channel. Its reliance on cellular networks exposes it to security flaws, particularly in telecommunication protocols such as Signaling System No. 7 (SS7), used for communication between networks. Attackers know that many businesses still use SMS for MFA, which makes those businesses appealing targets. For instance, hackers can exploit SS7 vulnerabilities to intercept text messages without touching your phone. Techniques such as eavesdropping, message redirection, and message injection can be carried out within the carrier network or during over-the-air transmission. SMS codes are also vulnerable to phishing. If a user enters their username, password, and SMS code on a fake login page, attackers can capture all three in real time and immediately gain access to the legitimate account.

Understanding SIM Swapping Attacks

One of the most dangerous threats to SMS-based security is the SIM swap. In SIM swapping attacks, a criminal contacts your mobile carrier pretending to be you and claims to have lost your phone. They then ask the support staff to port your number to a new blank SIM card in their possession. If they succeed, your phone goes offline, and they receive all calls and SMS messages, including MFA codes for banking and email.
Without knowing your password, they can quickly reset credentials and gain full access to your accounts. This attack doesn’t depend on advanced hacking skills; instead, it exploits social engineering tactics against mobile carrier support staff, making it a low-tech method with high-impact consequences.

Why Phishing-Resistant MFA Is the New Gold Standard

To prevent these attacks, it’s essential to remove the human element from authentication by using phishing-resistant MFA. This approach relies on secure cryptographic protocols that tie login attempts to specific domains. One of the more prominent standards for such authentication is the Fast Identity Online 2 (FIDO2) open standard, which uses passkeys built on public-key cryptography to bind a credential to a specific device and domain. Even if a user is tricked into clicking a phishing link, their authenticator will not release the credentials because the domain does not match the registered record. The technology is also passwordless, which removes the threat of phishing attacks that capture credentials and one-time passwords (OTPs). Hackers are forced to target the endpoint device itself, which is far more difficult than deceiving users.

Implementing Hardware Security Keys

Perhaps the strongest phishing-resistant authentication solution involves hardware security keys: physical devices resembling a USB drive that can be plugged into a computer or tapped against a mobile device. To log in, you simply insert the key or touch a button, and the key performs a cryptographic handshake with the service. This method is highly secure since there are no codes to type, and attackers can’t steal your key over the internet. Unless they physically steal the key from you, they cannot access your account.
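The domain binding that defeats phishing can be illustrated in a few lines. This is a conceptual sketch of the origin check, not the actual FIDO2/WebAuthn protocol (which involves signed challenges and attestation), and the credential ID and domains are made up:

```python
# A passkey credential is registered for exactly one origin (domain).
# During login, the authenticator compares the requesting origin against
# the registered one and refuses to respond for anything else.
registered = {"credential-123": "accounts.example.com"}

def will_release_credential(credential_id, requesting_origin):
    """Answer the login challenge only when the origins match exactly."""
    return registered.get(credential_id) == requesting_origin

# A look-alike phishing domain gets nothing, even from a fooled user:
print(will_release_credential("credential-123", "accounts-example.com"))
```

Because the check is done by software against an exact string, not by a human eyeballing a URL, the "user clicks a convincing fake link" attack simply stops working.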
Mobile Authentication Apps and Push Notifications

If physical keys are not feasible for your business, mobile authenticator apps such as Microsoft Authenticator or Google Authenticator are a step up from SMS MFA. These apps generate codes locally on the device, eliminating the risk of SIM swapping or SMS interception since the codes are never sent over a cellular network.

Simple push notifications still carry risks, however. For example, attackers may flood a user’s phone with repeated login approval requests, causing “MFA fatigue,” where a frustrated or confused user taps “approve” just to stop the notifications. Modern authenticator apps address this with “number matching,” requiring the user to enter a number shown on their login screen into the app. This ensures the person approving the login is physically present at their computer.

Passkeys: The Future of Authentication

With passwords being routinely compromised, modern systems are embracing passkeys: digital credentials stored on a device and protected by biometrics such as a fingerprint or Face ID. Passkeys are phishing-resistant and can be synchronised across your ecosystem, such as iCloud Keychain or Google Password Manager. They offer the security of a hardware key with the convenience of a device you already carry. Passkeys also reduce the workload for IT support, as there are no passwords to store, reset, or manage. They simplify the user experience while strengthening security.

Balancing Security With User Experience

Moving away from SMS-based MFA requires a cultural shift. Since users are accustomed to the universality and convenience of text messages, the introduction of physical keys and authenticator apps can trigger resistance. It’s important to explain the reasoning behind the change, highlighting the realities of SIM-swapping attacks and the value of the protected information. When users understand the risks, they are more likely to embrace the new measures.
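The local code generation that authenticator apps rely on is the open TOTP standard (RFC 6238): a shared secret plus the current time produces the six-digit code, so no network delivery is involved. A minimal sketch using only the Python standard library; the secret shown is the RFC's published test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Generate an RFC 6238 time-based one-time password, entirely offline."""
    key = base64.b32decode(secret_b32)
    # Count 30-second intervals since the Unix epoch.
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890" (base32 below) at T=59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))
```

Because both sides derive the code independently from the shared secret and the clock, there is nothing for a SIM swapper or SS7 eavesdropper to intercept in transit.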
While a phased rollout can help ease the transition for the general user base, phishing-resistant MFA should be mandatory for privileged accounts. Administrators and executives must not rely on SMS-based MFA. The Costs of Inaction Sticking with legacy MFA techniques is a ticking time bomb that gives a false sense of security. While it may satisfy compliance requirements, it leaves systems vulnerable to attacks and breaches, which can be both costly and embarrassing. Upgrading your authentication methods offers one of the highest returns on investment in cybersecurity. The cost of hardware keys or management software is minimal compared to the expense of incident response and data recovery. Is your business ready to move beyond passwords and text codes? We specialise in deploying modern identity solutions that keep your data safe without frustrating your team. Reach out, and we’ll help you implement a secure and user-friendly authentication strategy.
by Tanya Wetson-Catt 9 February 2026
The phone rings, and it’s your boss. The voice is unmistakable, with the same flow and tone you’ve come to expect. They’re asking for a favour: an urgent bank transfer to lock in a new vendor contract, or sensitive client information that’s strictly confidential. Everything about the call feels normal, and your trust kicks in immediately. It’s hard to say no to your boss, and so you begin to act. What if this isn’t really your boss on the other end? What if every inflection, every word you think you recognise has been perfectly mimicked by a cybercriminal? In seconds, a routine call could turn into a costly mistake: money gone, data compromised, and consequences that ripple far beyond the office. What was once the stuff of science fiction is now a real threat for businesses. Cybercriminals have moved beyond poorly written phishing emails to sophisticated AI voice cloning scams, signalling a new and alarming evolution in corporate fraud. How AI Voice Cloning Scams Are Changing the Threat Landscape We have spent years learning how to spot suspicious emails by looking for misspelled domains, odd grammar, and unsolicited attachments. Yet we haven’t trained our ears to question the voices of people we know, and that’s exactly what AI voice cloning scams exploit. Attackers only need a few seconds of audio to replicate a person’s voice, and they can easily acquire this from press releases, news interviews, presentations, and social media posts. Once they obtain the voice samples, attackers use widely available AI tools to create models capable of saying anything they type. The barrier to entry for these attacks is surprisingly low. AI tools have proliferated in recent years, covering applications from text and audio to video creation and coding. A scammer doesn’t need to be a programming expert to impersonate your CEO; they only need a recording and a script.
The Evolution of Business Email Compromise Traditionally, business email compromise (BEC) involved compromising a legitimate email account through techniques like phishing and spoofing a domain to trick employees into sending money or confidential information. BEC scams relied heavily on text-based deception, which could be easily countered using email and spam filters. While these attacks are still prevalent, they are becoming harder to pull off as email filters improve. Voice cloning, however, lowers your guard by adding a touch of urgency and trust that emails cannot match. While you can sit back and check email headers and a sender’s IP address before responding, when your boss is on the phone sounding stressed, your immediate instinct is to help. “Vishing” (voice phishing) uses AI voice cloning to bypass the various technical safeguards built around email and even voice-based verification systems. Attackers target the human element directly by creating high-pressure situations where the victim feels they must act fast to save the day. Why Does It Work? Voice cloning scams succeed because they manipulate organisational hierarchies and social norms. Most employees are conditioned to say “yes” to leadership, and few feel they can challenge a direct request from a senior executive. Attackers take advantage of this, often making calls right before weekends or holidays to increase pressure and reduce the victim’s ability to verify the request. More importantly, the technology can convincingly replicate emotional cues such as anger, desperation, or fatigue. It is this emotional manipulation that disrupts logical thinking. Challenges in Audio Deepfake Detection Detecting a fake voice is far more difficult than spotting a fraudulent email. Few tools currently exist for real-time audio deepfake detection, and human ears are unreliable, as the brain often fills in gaps to make sense of what we hear. 
That said, there are some common tell-tale signs, such as the voice sounding slightly robotic or producing digital artefacts on complex words. Other subtle signs you can listen for include unnatural breathing patterns, unusual background noise, or personal cues such as how a particular person greets you. Relying on human detection is unreliable, however, as technological improvements will eventually eliminate these detectable flaws. Instead, procedural checks should be implemented to verify authenticity. Why Cybersecurity Awareness Training Must Evolve Many corporate training programs remain outdated, focusing primarily on password hygiene and link checking. Modern cybersecurity awareness must also address emerging threats like AI. Employees need to understand how easily caller IDs can be spoofed and that a familiar voice is no longer a guarantee of identity. Modern IT security training should include policies and simulations for vishing attacks to test how staff respond under pressure. This training should be mandatory for all employees with access to sensitive data, including finance teams, IT administrators, HR professionals, and executive assistants. Establishing Verification Protocols The best defence against voice cloning is a strict verification protocol. Establish a “zero trust” policy for voice-based requests involving money or data. If a request comes in by phone, it must be verified through a secondary channel. For example, if the CEO calls requesting a bank transfer, the employee should hang up and call the CEO back on their internal line, or send a confirmation message over a trusted internal channel such as Teams or Slack. Some companies are also implementing challenge-response phrases and “safe words” known only to specific personnel. If the caller cannot provide or respond to the phrase, the request is immediately declined. The Future of Identity Verification We are entering an era where digital identity is fluid.
As AI voice cloning scams evolve, we may see a renewed emphasis on in-person verification for high-value transactions and the adoption of cryptographic signatures for voice communications. Until technology catches up, a strong verification process is your best defence. Slow down transaction approvals, as scammers rely on speed and panic. Introducing deliberate pauses and verification steps disrupts their workflow. Securing Your Organisation Against Synthetic Threats The threat of deepfakes extends beyond financial loss. It can lead to reputational damage, stock price volatility, and legal liability. A recording of a CEO making offensive comments could go viral before the company can prove it is a fake. Organisations need a crisis communication plan that specifically addresses deepfakes since voice phishing is just the beginning. As AI tools become multimodal, we will likely see real-time video deepfakes joining these voice scams, and you will need to know how to prove that a recording is false to the press and public. Waiting until an incident occurs means you will already be too late. Does your organisation have the right protocols to stop a deepfake attack? We help businesses assess their vulnerabilities and build resilient verification processes that protect their assets without slowing down operations. Contact us today to secure your communications against the next generation of fraud.
by Tanya Wetson-Catt 5 February 2026
Moving to the cloud offers incredible flexibility and speed, but it also introduces new responsibilities for your team. Cloud security is not a “set it and forget it” task; small mistakes can quickly become serious vulnerabilities if ignored. You don’t need to dedicate hours each day to this. In most cases, a consistent, brief review is enough to catch issues before they escalate. Establishing a routine is the most effective way to defend against cyber threats, keeping your environment organised and secure. Think of a daily cloud security check as a morning hygiene routine for your infrastructure. Just fifteen minutes a day can help prevent major disasters. A proactive approach is essential for modern business continuity and should include the following best practices: 1. Review Identity and Access Logs The first step in your routine involves looking at who logged in and verifying that all access attempts are legitimate. Look for logins from unusual locations or at strange times, since these are often the first signs of a compromised account. Pay attention to failed login attempts as well, since a spike in failures might indicate a brute-force or dictionary attack. Investigate these anomalies immediately, as swift action stops intruders from gaining a foothold. Finally, effective cloud access management depends on careful oversight of user identities. Make sure former employees no longer have active accounts by promptly removing access for anyone who has left. Maintaining a clean user list is a core security practice. 2. Check for Storage Permissions Data leaks often happen because someone accidentally exposes a folder or file. Weak file-sharing permissions make it easy to click the wrong button and make a file public. Review the permission settings on your storage buckets daily, and ensure that your private data remains private. Look for any storage containers that have “public” access enabled. If a file does not need to be public, lock it down.
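A daily permission review like this can be partly scripted. The sketch below runs over an exported storage inventory rather than a live cloud API; the field names and the allowlist of intentionally public buckets are hypothetical, so adapt them to whatever your provider's export actually contains.

```python
# Hypothetical storage inventory, e.g. exported from your provider's console.
buckets = [
    {"name": "customer-invoices", "access": "private"},
    {"name": "marketing-assets",  "access": "public"},
    {"name": "db-backups",        "access": "public"},  # almost certainly a mistake
]

# Buckets that are intentionally public (static websites, CDN origins, ...).
ALLOWED_PUBLIC = {"marketing-assets"}


def find_exposed(inventory):
    """Flag every public bucket that is not on the explicit allowlist."""
    return [b["name"] for b in inventory
            if b["access"] == "public" and b["name"] not in ALLOWED_PUBLIC]


print(find_exposed(buckets))  # prints "['db-backups']"
```

Keeping the allowlist explicit is the important design choice: anything public that nobody has consciously approved shows up in the daily report instead of hiding in plain sight.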
This simple scan prevents sensitive customer information from leaking and protects both your reputation and legal standing. Misconfigured cloud settings remain a top cause of data breaches. While vendors offer tools to automatically scan for open permissions, an extra manual review by skilled cloud administrators is advisable to stay fully aware of your data environment. 3. Monitor for Unusual Resource Spikes Sudden changes in usage can indicate a security issue. A compromised server might be used for cryptocurrency mining or as part of a botnet attacking other cloud or internet systems. One common warning sign is CPU usage hitting 100%, often followed by unexpected spikes in your cloud bill. Check your cloud dashboard for any unexpected spikes in computing power and compare each day’s metrics with your average baseline. If something looks off, investigate the specific instance or container and track down the root cause, since it could point to bigger problems. Resource spikes can also indicate a distributed denial-of-service (DDoS) attack. Identifying a DDoS attack early allows you to mitigate the traffic and keep your services online for your customers. 4. Examine Security Alerts and Notifications Your cloud provider likely sends security notifications, but many administrators ignore them or let them end up in spam. Make it a point to review these alerts daily, as they often contain critical information about vulnerabilities. These alerts can notify you about outdated operating systems or unencrypted databases. Addressing them promptly helps prevent data leaks, as ignoring them leaves vulnerabilities open to attackers. Make the following maintenance and security checks part of your daily routine:
- Review high-priority alerts in your cloud security centre
- Check for any new compliance violations
- Verify that all backup jobs have completed successfully
- Confirm that antivirus definitions are up to date on servers
Addressing these notifications not only strengthens your security posture but also shows due diligence in safeguarding company assets. 5. Verify Backup Integrity Backups are your safety net when things go wrong, but they’re only useful if they’re complete and intact. Check the status of your overnight backup jobs every morning. A green checkmark gives peace of mind, but if a job fails, restart it immediately rather than waiting for the next scheduled run. Losing a day of data can be costly, so maintaining consistent backups is key to business resilience. Periodically test a backup restoration to confirm that it completes and restores data as expected, and check the logs daily. Knowing your data is safe allows you to focus on other tasks, since it eliminates the fear of ransomware and other malware disrupting your business. 6. Keep Software Patched and Updated Cloud servers require updates just like physical ones, so your daily check should include a review of patch management status. Make sure automated patching schedules are running correctly, as unpatched servers are prime targets for attackers. Since new vulnerabilities are discovered daily by both researchers and attackers, minimising the window of opportunity is critical. Applying security updates is essential to keeping your infrastructure secure. When a critical patch is released, address it immediately rather than waiting for the standard maintenance window; being agile with patching can prevent serious problems down the line. Build a Habit for Safety Security does not require heroic efforts every single day. It requires consistency, attention to detail, and a solid routine. The daily 15-minute cloud security check is a small investment with a massive return, since it keeps your data safe and your systems running smoothly. Spending just fifteen minutes a day shifts your approach from reactive to proactive, significantly reducing risk.
This not only strengthens confidence in your IT operations but also simplifies cloud maintenance. Need help establishing a strong cloud security routine? Our managed cloud services handle the heavy lifting, monitoring your systems 24/7 so you don’t have to. Contact us today to protect your cloud infrastructure.
by Tanya Wetson-Catt 2 February 2026
Artificial Intelligence (AI) has taken the business world by storm, pushing organisations of all sizes to adopt new tools that boost efficiency and sharpen their competitive edge. Among these tools, Microsoft 365 Copilot rises to the top, offering powerful productivity support through its seamless integration with the familiar Office 365 environment. In the push to adopt new technologies and boost productivity, many businesses buy licenses for every employee without much consideration. That enthusiasm often leads to “shelfware”: AI tools and software that go unused while the company continues to pay for them. Given the high cost of these solutions, it’s essential to invest in a way that actually delivers a return on investment. Because you can’t improve what you don’t measure, a Microsoft 365 Copilot audit is essential for assessing and quantifying your adoption rates. A thorough review shows who is truly benefiting from and actively using the technology. It also guides smarter licensing decisions that reduce costs and improve overall efficiency. The Reality of AI Licensing Waste At first, buying licenses in bulk may seem like a convenient strategy since it simplifies the procurement process for your IT department. However, this collective approach often ignores actual user behaviour, since not every role needs the advanced features offered by Copilot. AI licensing waste occurs when tools sit unused on employee dashboards. For example, a receptionist may have no need for advanced data-analysis capabilities, while a field technician might never open the desktop application at all. Paying for unused licenses drains your budget, so identifying and closing these gaps is essential to protecting your bottom line. The savings can then be redirected to higher-value initiatives where they’ll make the greatest impact. Analysing User Activity Reports Fortunately, Microsoft includes built-in tools that make it easy to view your AI usage data. The Microsoft 365 admin centre is the best place to start. From there, you can generate reports that track active usage over specific time periods and give you a clear view of engagement. From this dashboard, you can track various metrics such as enabled users, active users, adoption rates, and trends.
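Usage reports can typically be exported for offline analysis as well. The sketch below assumes an illustrative CSV layout (the real export's column names may well differ) and flags licensed users with no recorded activity in the last 90 days as candidates for license reclamation.

```python
import csv
import io
from datetime import datetime, timedelta

# Illustrative export; real report column names may differ.
report = io.StringIO("""user,has_copilot_license,last_activity_date
alice@example.com,True,2026-01-28
bob@example.com,True,2025-10-02
carol@example.com,False,2026-01-30
""")


def reclaim_candidates(csv_file, today, inactive_days=90):
    """Licensed users with no recorded activity in the last `inactive_days`."""
    cutoff = today - timedelta(days=inactive_days)
    candidates = []
    for row in csv.DictReader(csv_file):
        if row["has_copilot_license"] != "True":
            continue  # unlicensed users cost nothing, skip them
        last_seen = datetime.fromisoformat(row["last_activity_date"])
        if last_seen < cutoff:
            candidates.append(row["user"])
    return candidates


today = datetime.fromisoformat("2026-02-02")
print(reclaim_candidates(report, today))  # prints "['bob@example.com']"
```

Treat the output as a conversation starter rather than an automatic revocation list; the point of the audit is to find out why a license is idle before deciding what to do with it.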
This makes it easy to identify employees who have never used AI features, or those whose limited usage may not justify the licensing cost. This kind of software usage tracking allows you to make data-driven decisions and distinguish between power users and those who ignore the tool. This clarity not only enables efficient license purchases but also sets the stage for conversations with department heads about why certain teams do not engage with AI tools. Strategies for IT Budget Optimisation Once you identify the waste, the next step is taking action. Start by reclaiming licenses from inactive users and reallocating them to employees who actually need them. This simple shift, making sure licenses go to those who use them, can significantly reduce your subscription costs. Establish a formal request process for Copilot licenses. This ensures employees must justify their need for the tool, granting access only to those who truly require it and adding accountability to your spending. IT budget optimisation isn’t a one-time task; it’s an ongoing process that requires continuous refinement. Regularly reviewing these metrics, whether monthly or quarterly, helps keep your software spending efficient and under control. Boosting Adoption Through Training Low AI tool usage isn’t always about lack of interest. Sometimes, employees simply don’t need the tool; other times they avoid it because they don’t know how to use it, and insufficient training leads to frustration and poor adoption. This means that cutting licenses alone isn’t enough; investing in user training is equally important. The most effective approach is to survey staff and assess their comfort level with Copilot. For employees who find it confusing, provide self-paced tutorials or conduct training workshops that demonstrate practical use cases relevant to their daily tasks. When employees see clear value and convenience, they are much more likely to adopt the tool.
Consider the following steps to improve adoption:
- Host lunch-and-learn sessions to demonstrate key features
- Share success stories from power users within the company
- Create a library of quick-tip videos for common tasks
- Appoint “Copilot Champions” in each department to help others
Investing in training often transforms low usage into high value, turning what was once a wasted expense into a productivity-enhancing asset. Establishing a Governance Policy Another way to minimise Copilot license waste involves setting rules for how your company handles AI tools. A governance policy effectively brings order to your software management by outlining who qualifies for a license and setting expectations for usage and review cycles. The policy should also define criteria based on job roles and responsibilities. For instance, content creators and data analysts get automatic access, while other roles might require manager approval, thus preventing the “free-for-all” mentality that leads to waste. The policy should be clearly communicated to all employees to ensure transparency regarding how decisions are made. This way, a culture of responsibility regarding company resources is established. Preparing for Renewal Season The worst time to check your Copilot AI usage is the day before renewal. Instead, schedule audits at least 90 days in advance to allow ample time to adjust your contract and license counts. This also gives you leverage during negotiations with vendors. By presenting data showing your actual needs, you put yourself in a strong position to right-size your contract and avoid getting locked into another year of paying for shelfware. Smart Management Matters Managing modern software costs demands both vigilance and data, particularly as most vendors move to subscription-based models for AI and software tools. With recurring expenses, letting subscriptions run unchecked is no longer an option.
Regular Microsoft 365 Copilot audits safeguard your budget and ensure efficiency by aligning technology purchases with actual usage. Take control of your licensing strategy today. Look at the numbers, ask the hard questions, and ensure every dollar you spend contributes to your business’ growth. Smart management leads to a leaner and more productive organisation. Are you ready to get a handle on your AI tool spending? Reach out to our team for help with comprehensive Microsoft 365 Copilot audits, and eliminate waste from your IT budget. Contact us today to schedule your consultation.
by Tanya Wetson-Catt 30 January 2026
Your business runs on a SaaS (software-as-a-service) application stack, and you learn about a new SaaS tool that promises to boost productivity and streamline one of your most tedious processes. The temptation is to sign up for the service, click “install,” and figure out the rest later. This approach sounds convenient, but it also exposes you to significant risk. Each new integration acts as a bridge between different systems, or between your data and third-party systems. This bridging raises data security and privacy concerns, meaning you need to learn how to vet new SaaS integrations with the seriousness they require. Protecting Your Business from Third-Party Risk A weak link can lead to compliance failures or, even worse, catastrophic data breaches. Adopting a rigorous, repeatable vetting process turns a potential liability into a managed risk. If you’re not convinced, just look at the T-Mobile data breach of 2023. While the initial vector was a zero-day vulnerability in their environment, a key challenge in the fallout was the sheer number of third-party vendors and systems T-Mobile relied upon. In highly interconnected systems, a vulnerability in one area can be exploited to gain access to other systems, including those managed by third parties. The incident highlighted how a sprawling digital ecosystem multiplies the attack surface. By contrast, a structured vetting process, which maps the tool’s data flow, enforces the principle of least privilege, and ensures vendors provide a SOC 2 Type II report, drastically minimises this attack surface. A proactive vetting strategy ensures you are not just securing your systems but also fulfilling your legal and regulatory obligations, thereby safeguarding your company’s reputation and financial health. 5 Steps for Vetting Your SaaS Integrations To prevent these weak links, let’s look at a smart, systematic vendor and product evaluation process that protects your business from third-party risk.
1. Scrutinise the SaaS Vendor’s Security Posture However enticing the product features may be, it is important to investigate the people behind the service. A nice interface means nothing without a solid security foundation. Your first steps should be examining the vendor’s certifications and, in particular, asking them for their SOC 2 Type II report. This is an independent audit report that verifies the effectiveness of a SaaS vendor’s controls over the security, availability, processing integrity, confidentiality, and privacy of their systems. Additionally, do a background check on the founders, the vendor’s breach history, how long they have been around, and their transparency policies. A reputable company will be open about its security practices and will also reveal how it handles vulnerability or breach disclosures. This initial background check is the most important step in your vetting, since it separates serious vendors from risky ones. 2. Chart the Tool’s Data Access and Flow You need to understand exactly what data the SaaS integration will touch, and you can achieve this by asking a simple, direct question: What access permissions does this app require? Be wary of any tool that requests global “read and write” access to your entire environment. Use the principle of least privilege: grant applications only the access necessary to complete their tasks, and nothing more. Have your IT team chart the information flow in a diagram to track where your data goes, where it is stored, and how it is transmitted. You must know its journey from start to finish. A reputable vendor will encrypt data both at rest and in transit and provide transparency on where your data is stored, including the geographical location. This exercise in third-party risk management reveals the full scope of the SaaS integration’s reach into your systems. 3.
Examine Their Compliance and Legal Agreements If your company must comply with regulations such as GDPR, then your vendors must also be compliant. Carefully review their terms of service and privacy policies for language that specifies their role as a data processor versus a data controller, and confirm that they will sign a Data Processing Addendum (DPA) if required. Pay particular attention to where your vendor stores your data at rest, i.e., the location of their data centres, since your data may be subject to data sovereignty regulations that you are unaware of. Ensure that your vendor does not store your data in countries or regions with lax privacy laws. While reviewing legal fine print may seem tedious, it is critical, as it determines liability and responsibility if something goes wrong. 4. Analyse the SaaS Integration’s Authentication Techniques How the service connects with your system is also a key factor. Choose integrations that use modern and secure authentication protocols such as OAuth 2.0, which allow services to connect without directly sharing usernames and passwords. The provider should also offer administrator dashboards that enable IT teams to grant or revoke access instantly. Avoid services that require you to share login credentials, and instead prioritise strong, standards-based authentication. 5. Plan for the End of the Partnership Every technology integration follows a lifecycle and will eventually be deprecated, upgraded, or replaced. Before installing, know how to uninstall it cleanly by asking questions such as:
- What is the data export process after the contract ends?
- Will the data be available in a standard format for future use?
- How does the vendor ensure permanent deletion of all your information from their servers?
A responsible vendor will have clear, well-documented offboarding procedures. This forward-thinking strategy prevents your data from being orphaned, ensuring you retain control over it long after the partnership ends.
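One simple way to keep an exit plan honest is to track it as an explicit checklist rather than an intention. The sketch below is illustrative, not exhaustive; the step names are hypothetical and should mirror whatever your own contract and DPA actually require.

```python
# Hypothetical offboarding checklist for a SaaS vendor exit.
OFFBOARDING_STEPS = [
    "export all data in a standard format (CSV/JSON)",
    "verify the export restores into the replacement system",
    "obtain written confirmation of permanent deletion",
    "revoke OAuth grants and API keys",
    "remove the vendor from SSO and firewall allowlists",
]


def offboarding_report(completed):
    """Return the steps still outstanding before the contract can be closed."""
    return [step for step in OFFBOARDING_STEPS if step not in completed]


done = {
    "export all data in a standard format (CSV/JSON)",
    "revoke OAuth grants and API keys",
}
for step in offboarding_report(done):
    print("outstanding:", step)
```

Run against your real checklist, an empty report is your signal that the partnership can be closed without leaving orphaned data or lingering access behind.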
Planning for the exit demonstrates strategic IT management and a mature vendor assessment process. Build a Fortified Digital Ecosystem Modern businesses run on complex webs of interconnected services, where data moves from in-house systems, through the Internet, into third-party systems and servers for processing, and back again. Since you cannot operate in isolation, vetting is essential to avoid connecting blindly. Your best bet for safe integration and a minimal attack surface is to develop a rigorous, repeatable process for vetting SaaS integrations. The five steps above provide a solid baseline, turning potential liability into managed risk. Protect your business and gain confidence in every SaaS integration. Contact us today to secure your technology stack.