5 Cybersecurity Predictions for 2024 You Should Plan For

Tanya Wetson-Catt • 2 April 2024

Cybersecurity is a constantly evolving field, with new threats, technologies, and opportunities emerging every year. As we enter 2024, organizations of all sizes and sectors need to be aware of current and emerging cyber threats and plan accordingly.


Staying ahead of the curve is paramount to safeguarding digital assets. Significant changes are coming to the cybersecurity landscape, driven by emerging technologies, evolving threats, and shifting global dynamics.


Next, we'll explore key cybersecurity predictions for 2024 that you should consider.


1. AI Will Be a Double-edged Sword


Artificial intelligence (AI) has been a game-changer for cybersecurity. It has enabled faster and more accurate threat detection, response, and prevention. But AI also poses new risks such as adversarial AI, exploited vulnerabilities, and misinformation.

For example, malicious actors use chatbots and other large language models to generate:


  • Convincing phishing emails
  • Fake news articles
  • Deepfake videos


This malicious content can deceive or manipulate users. To mitigate these risks and harness the power of AI for a more secure future, organizations will need robust security protocols, including a human-in-the-loop approach and regular tracking and review of their AI systems.
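To make the human-in-the-loop idea concrete, here is a minimal sketch of how AI-flagged email might be triaged. The scoring function is a toy stand-in for a real detection model, and the thresholds are illustrative assumptions; the point is that mid-confidence cases get routed to a person rather than auto-decided.

```python
# Sketch: human-in-the-loop triage for AI-flagged email.
# score_email() is a toy stand-in for a real phishing-detection model.

def score_email(text: str) -> float:
    """Toy phishing score: fraction of suspicious keywords present."""
    keywords = ["urgent", "verify your account", "password", "wire transfer"]
    hits = sum(1 for k in keywords if k in text.lower())
    return hits / len(keywords)

def triage(text: str, block_above: float = 0.75, review_above: float = 0.25) -> str:
    """Auto-block high scores, queue mid scores for a human, deliver the rest."""
    score = score_email(text)
    if score >= block_above:
        return "block"
    if score >= review_above:
        return "human_review"  # a person makes the final call
    return "deliver"
```

In this sketch, an email hitting all four keywords is blocked outright, one hitting a single keyword lands in the human review queue, and a benign message is delivered. A production system would use a trained classifier and calibrated thresholds, but the routing logic is the same.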


2. Quantum Computing Will Become a Looming Threat


Quantum computing is still a few years away from reaching its full potential, but it already poses a serious threat to the security of current encryption standards.


Quantum computers can potentially break the asymmetric encryption algorithms, such as RSA and elliptic-curve cryptography, that are widely used to protect data in transit and at rest. This means that quantum-enabled hackers could compromise sensitive data, like financial transactions.


Organizations will need to start preparing for this scenario now: first by assessing their exposure, then by adopting quantum-resistant technologies and deploying quantum-safe architectures.
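That first step, assessing exposure, can start as a simple inventory exercise. The sketch below flags systems that rely on quantum-vulnerable asymmetric algorithms; the example inventory and system names are illustrative assumptions, not real scan output.

```python
# Sketch: flag systems whose cryptography is quantum-vulnerable.
# The inventory below is an illustrative assumption, not real scan data.

# Asymmetric schemes that a large quantum computer could break
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH"}

def assess(inventory: dict) -> list:
    """Return the names of systems using at least one quantum-vulnerable algorithm."""
    return [name for name, algos in inventory.items()
            if QUANTUM_VULNERABLE & set(algos)]

systems = {
    "vpn-gateway": ["RSA", "AES-256"],
    "backup-store": ["AES-256"],      # symmetric-only: larger keys generally suffice
    "web-frontend": ["ECDH", "ChaCha20"],
}
```

Running `assess(systems)` on this toy inventory would flag the VPN gateway and web frontend for migration planning, while the symmetric-only backup store is a lower priority.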


3. Hacktivism Will Rise in Prominence


Hacktivism is the use of hacking techniques to promote a political or social cause, such as exposing corruption, protesting injustice, or supporting a movement.


Hacktivism has been around for decades, but it's expected to increase in 2024, particularly around major global events such as the Paris Olympics, the U.S. Presidential Election, and ongoing geopolitical conflicts.


Hacktivists may target organizations that they perceive as adversaries or opponents, including governments, corporations, or media outlets. These attacks can disrupt operations, leak data, or deface websites.


Organizations will need to be vigilant against potential hacktivist attacks. This includes being proactive in defending their networks, systems, and reputation.


4. Ransomware Will Remain a Persistent Threat


Ransomware is a type of malware that encrypts the victim's data. The attacker then demands a ransom for its decryption. Ransomware has been one of the most damaging types of cyberattacks in recent years.


In 2023, ransomware attacks increased by more than 95% over the prior year.


Ransomware attacks are likely to keep increasing in 2024 as new variants, tactics, and targets emerge. For example, attackers may leverage AI to enhance their encryption algorithms, evade detection, and customize their ransom demands.


Hackers may also target cloud services, IoT devices, or industrial control systems, causing even more disruption and damage. Organizations will need to put in place comprehensive ransomware prevention and response strategies, including:


  • Backing up their data regularly
  • Patching their systems promptly
  • Using reliable email and DNS filtering solutions
  • Educating their users on how to avoid phishing emails
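The first item on that list, regular backups, only helps if the backups can be verified and restored. Here is a minimal sketch of a timestamped backup step that also records a checksum for later integrity checks. The paths are assumptions for illustration; a real strategy would also include offsite or immutable copies and periodic restore testing.

```python
# Sketch: create a timestamped, checksummed backup archive.
# Paths are illustrative; real backups also need offsite copies and restore tests.
import hashlib
import tarfile
import time
from pathlib import Path

def backup(source: str, dest_dir: str) -> Path:
    """Archive `source` into a timestamped .tar.gz and record its SHA-256."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)
    # Store a checksum so a later restore can detect tampering or corruption
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.with_suffix(".sha256").write_text(f"{digest}  {archive.name}\n")
    return archive
```

Recomputing the SHA-256 before a restore, and comparing it to the stored value, catches silent corruption and some tampering, which matters because ransomware operators increasingly try to destroy or encrypt backups too.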


5. Cyber Insurance Will Become More Influential


Cyber insurance covers the losses and liabilities resulting from cyberattacks. It has become more popular and important in recent years as attacks have become more frequent and costly.


Cyber insurance can help organizations recover from cyber incidents faster and more effectively. It provides financial compensation, legal help, or technical support.


But cyber insurance can also shape the security practices of organizations. More insurers are imposing requirements or standards on their customers, such as implementing specific security controls or frameworks. Organizations will need to balance the benefits and costs of cyber insurance and ensure they comply with their insurers' expectations.


Be Proactive About Cybersecurity – Schedule an Assessment


It's clear that the cybersecurity landscape will continue to evolve rapidly. Organizations and individuals must proactively prepare for emerging threats by adopting advanced technologies, prioritizing workforce development, and staying abreast of regulatory changes.


Put in place a comprehensive cybersecurity strategy that accounts for these predictions. It will help you navigate the digital frontier with resilience and vigilance.


Need help ensuring a secure and trustworthy digital environment for years to come?



Contact us today to schedule a cybersecurity assessment.
