How to Create Insightful Dashboards in Microsoft Power BI

Tanya Wetson-Catt • 19 June 2023

Data visualization is a powerful tool for communicating complex data, presenting it in a simple, easily understandable format. But it is not enough to create a graph or chart and call it a day. To truly make use of information, you need insightful reports: reports that effectively communicate the story behind the data.


Insightful reports help decision-makers understand key trends and patterns, identify areas of opportunity, and make informed decisions. If graphs and bar charts tell only part of the story, they can lead people to the wrong conclusions.


Creating holistic and insightful reports requires the use of several data points. One tool that enables this is Microsoft Power BI.


What Is Microsoft Power BI?


Microsoft Power BI is a business intelligence tool. It allows you to connect many data sources to one dashboard. Using Power BI, you can easily model and visualize data holistically.


The platform has over 500 different data connectors. These connectors can tap into sources such as Salesforce, Excel, Azure, and more. Users can leverage pre-built report templates to save time in creating data-rich reporting. Teams can also collaborate and share dashboards virtually.
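Power BI can also use a Python script as a data source, with any pandas DataFrames the script produces appearing as tables in the model. As a minimal sketch, the hypothetical `sales` and `targets` DataFrames below stand in for data you would normally pull through connectors such as Excel or Salesforce, combined into one reporting table:

```python
import pandas as pd

# Hypothetical stand-ins for two connector sources (e.g., an Excel
# export and a CRM extract); real scripts would load these instead.
sales = pd.DataFrame({
    "region": ["North", "South", "North"],
    "revenue": [1200, 800, 450],
})
targets = pd.DataFrame({
    "region": ["North", "South"],
    "target": [1500, 1000],
})

# Roll up revenue by region, then join on the targets from the second source.
combined = (
    sales.groupby("region", as_index=False)["revenue"].sum()
         .merge(targets, on="region")
)
print(combined)
```

Pasted into Power BI's "Get Data" Python script option, a script like this would surface `combined` as a single table ready for visuals.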

Tips for Designing Great Data Visualization Reports


Getting started in Microsoft Power BI entails:


  • Signing up for the software
  • Connecting your data sources
  • Using its tools to create report visualizations


But creating great reports goes beyond that. Below, we’ll go through several tips and best practices for getting the most out of your Power BI output.


Consider Your Audience


You should design reporting dashboards with the end user in mind. What does this audience want to see? Are they looking for bottom-line sales numbers? Or do they want to uncover insights that can help close productivity gaps?


Clear, concise language and effective visualizations are important. These help to highlight the key takeaways from the data. Customize reports to the audience’s level of technical expertise and business goals.


Don’t Overcomplicate Things


Many times, less is more. If you find that your dashboard looks crowded, you may be adding too many reports. The more you add, the more difficult it is to read the takeaways from the data.


Remove all but the most essential reports. Look for ways to include different data sets in a single report, such as using stacked bar charts. Dashboards should show important data at a glance, so do your best to avoid the need to scroll.
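Combining data sets often comes down to reshaping them so one visual can carry several series. As an illustrative sketch (the quarterly figures here are invented), pivoting long-format data to wide form produces exactly the shape a stacked bar chart needs:

```python
import pandas as pd

# Hypothetical long-format data: one row per region/quarter combination.
data = pd.DataFrame({
    "quarter": ["Q1", "Q1", "Q2", "Q2"],
    "region":  ["North", "South", "North", "South"],
    "revenue": [1200, 800, 1400, 900],
})

# Pivot to wide form: one column per region, one stacked bar per quarter.
wide = data.pivot(index="quarter", columns="region", values="revenue")

# In a Power BI Python visual this would render the stacked chart:
# wide.plot(kind="bar", stacked=True)
print(wide)
```

Two data sets that previously needed two charts now read at a glance from one.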


Try Out Different Chart Types


Experiment with presenting your data in different ways. Flip between bar, pie, and other types of charts to find the one that tells the story the best. When building a new dashboard for your organization, get some input. Ask those who will review the reports which chart type works best for them.


Get to Know Power Query


Power Query is a data preparation engine. It can save you a lot of time in developing insightful reports. This engine is used in Microsoft tools like Power BI and Excel. 

Take time to learn how to leverage this tool for help with:


  • Connecting a wide range of data sources to the dashboard
  • Previewing data queries
  • Building intuitive queries over many data sources
  • Defining data size, variety, and velocity
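Power Query builds transformations as a recorded list of steps (remove rows, rename columns, change types) rather than code you write by hand. To make the idea concrete, here is a pandas sketch of the same shaping steps, using a hypothetical raw export; the comments name the roughly equivalent Power Query ribbon actions:

```python
import pandas as pd

# Hypothetical raw export, as Power Query might receive it from a connector.
raw = pd.DataFrame({
    "Cust Name": ["Acme", "Beta", None],
    "Amt": ["100", "250", "75"],
})

# Typical shaping steps (Power Query records the same ideas as a step list):
clean = (
    raw.dropna(subset=["Cust Name"])              # Remove Rows > Remove Blank Rows
       .rename(columns={"Cust Name": "Customer",  # Rename Columns
                        "Amt": "Amount"})
       .astype({"Amount": int})                   # Change Type
)
print(clean)
```

The payoff is the same in both tools: the cleaning logic is captured once and reapplies automatically whenever the source refreshes.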


Build Maps with Hints to Bing


Bing and Power BI integrate, allowing you to leverage default map coordinates. Use best practices to leverage the mapping power of Bing to improve your geo-coding.


For example, if you want to plot cities on a map, name your columns after the geographic designation. This helps Bing identify exactly what you’re looking for.
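Exports rarely arrive with map-friendly headers, so the fix is usually a simple rename before the data reaches the map visual. This sketch assumes a hypothetical `locations` table whose columns are renamed to the geographic designations Bing's geo-coding recognises:

```python
import pandas as pd

# Hypothetical export whose headers don't match geographic designations.
locations = pd.DataFrame({
    "town":    ["Leeds", "York"],
    "ctry":    ["United Kingdom", "United Kingdom"],
    "revenue": [5400, 3100],
})

# Rename columns so Power BI's map visual can place each row unambiguously.
locations = locations.rename(columns={"town": "City", "ctry": "Country"})
print(locations.columns.tolist())
```

In Power BI itself you would normally do this rename inside Power Query, but the principle is identical: the column name is the hint Bing reads.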


Tell People What They Are Looking At


A comment often heard when presenting executives with a new report is, “What am I looking at?” Tell your audience what the data means by using features like tooltips and text boxes to add context.


Just one or two sentences can save someone 5-10 minutes of trying to figure out why you gave them this report. That context can get them to a decision faster. It also helps avoid any confusion or misunderstandings about the data.


Use Emphasis Tricks


People usually read left to right and top to bottom. So put your most important chart in the top-left corner, and follow with the next most important reports.


If you have specific numbers that need to stand out, increase the font size or bold the text. This ensures that your audience understands the key takeaways.


You can also use colors to emphasize levels such as high, mid, and low. For example, a low level of accidents could be colored green, a mid-level yellow, and a high level red. This provides more visual context to the data.
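In Power BI this kind of traffic-light coloring is set up through conditional formatting rules rather than code, but the underlying logic is just a pair of thresholds. A minimal sketch, with hypothetical cut-off values:

```python
# Hypothetical thresholds for a High/Mid/Low accident indicator.
def accident_colour(count, low=5, high=15):
    """Map a monthly accident count to a traffic-light colour."""
    if count < low:
        return "green"   # low: on track
    if count < high:
        return "yellow"  # mid: watch closely
    return "red"         # high: needs attention

print(accident_colour(3), accident_colour(9), accident_colour(20))
```

Whatever tool applies them, agreeing on the thresholds up front keeps the colors meaningful across every report that uses them.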


Need Help with Power BI or Other Microsoft Products?


We can help you get started or improve your use of Microsoft 365, Power BI, and more. Give us a call today to schedule a chat about leveraging this powerful platform.
