How to Build a Customer-Safe Power BI Layer Without Rebuilding Your Stack

Brian DeLuca
Brian DeLuca is a co-founder and CEO of The Reporting Hub. As a seasoned expert in data, analytics, and business intelligence, Brian brings over 20 years of experience driving innovation and organizat...

Picture a BI team that has spent two years building a strong Power BI setup. The semantic models are clean, the workspace structure makes sense, and RLS works properly. The business now trusts the certified datasets, and the investment is finally paying off.

Then the product team asks, "Can customers get access to this too?" That simple question creates a problem that the internal setup was never built to handle. This guide explains how to build a safe external layer without having to rebuild everything from scratch.


Why Your Internal Power BI Environment Isn't Ready for Customers (Yet)

Let's be clear about something before we get into architecture: this isn't about your BI team doing anything wrong. Internal analytics environments are built for internal decision-making. The problem is when you try to repurpose them for external delivery without changing the fundamental architecture.

Here's what actually breaks when teams try to expose their internal Power BI environment directly to customers.

The Workspace Boundary Problem

In Power BI Service, workspace roles split into exactly two behaviors where RLS is concerned, which matters a lot for external access. Users assigned the Admin, Member, or Contributor role bypass RLS entirely - they see the full underlying semantic model. Only Viewer-role assignments respect RLS filters.

Critical RLS Limitation

RLS does not apply to workspace members with Admin, Member, or Contributor permissions. If a user has Build permissions on the semantic model, they can also bypass RLS through Analyze in Excel. For external delivery, assigning anyone above the Viewer role is a data exposure risk.

That sounds manageable until you think about what 'internal workspace' means in practice. Your internal analysts have Contributor access, and data engineers have Member access. If a customer-facing role shares the same workspace, a misconfiguration can produce a data breach. The failure mode is silent and potentially catastrophic.

The Semantic Model Exposure Problem

Most internal semantic models contain measures, columns, and calculated tables that were never designed to be visible to external audiences. Business logic that's fine for internal context can't easily be hidden at the surface layer. You'd need Object Level Security (OLS) configured per table and per column, across every model you want to expose.

The Branding and Surface Problem

External customers aren't supposed to see the Microsoft Power BI interface or the report URLs that include your organization's tenant domain. The moment a customer lands on a native Power BI Service URL, the internal architecture is exposed. That's a branding problem, a security signal problem, and, for some organizations, a commercial positioning problem.


The Thing Most Architecture Guides Get Wrong

The counterintuitive lesson is that the answer is not simply more RLS. Most guides start with row-level security - dynamic RLS, USERPRINCIPALNAME(), security mapping tables, and Viewer roles - and while that advice is valid, it is incomplete. Treating RLS as the foundation for external delivery is like installing a deadbolt on a building with no exterior walls.

RLS filters rows within a semantic model. It doesn't:

  • Isolate workspace permissions between your internal team and external customers
  • Prevent high-privilege internal users from bypassing filters
  • Control what happens when the Direct Lake engine falls back to DirectQuery and query performance degrades
  • Give you any governance over AI-generated content that sits alongside your filtered data
  • Create the surface separation that tells a customer their data is safe from your other customers

Microsoft's own guidance - buried in the RLS documentation rather than highlighted - acknowledges this directly. If you have only a few simplistic RLS rules, consider publishing multiple semantic models instead, with one workspace per audience. Workspace-based isolation and RLS serve different purposes. You often need both, in the right architecture.

The Right Framing

RLS is a data filter. External delivery architecture is an access boundary, a surface boundary, and a governance boundary. You need all three, and they require different design decisions.

With that framing established, here's the actual architecture approach.


The Architecture for Customer-Safe Delivery

1. Establish Workspace Separation - Internal vs. External

This is the foundational decision, and it needs to be made before anything else. Your internal analytics workspaces and your external delivery workspaces must be separate Power BI Service workspaces. The architecture looks like this:

  • Internal Workspaces - Your analysts, data engineers, and report developers. Contributor and Member access. Full semantic model access. No external users.
  • Certified / Promoted Dataset Workspaces - Your endorsed semantic models. Tightly controlled. These are the models external delivery will reference - not copy, reference.
  • External Delivery Workspaces - Viewer-only access for any external-facing service principal or embed-token scenario. These contain only the reports and models curated for customer consumption.

The critical rule: never add internal users with elevated permissions to external delivery workspaces. If an analyst needs to update a report in the external workspace, they do it in the internal environment and publish. Not in the customer-facing workspace directly.
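The "never above Viewer" rule lends itself to automated checking. The sketch below is illustrative, not a Reporting Hub feature: it assumes you have already fetched the role assignments for an external delivery workspace (the input shape mirrors the `value` array returned by the Power BI REST API's `GET /v1.0/myorg/groups/{groupId}/users` endpoint) and flags anything that would bypass RLS.

```typescript
// Flag role assignments in an external delivery workspace that bypass RLS.
// The input shape mirrors the `value` array returned by the Power BI REST
// API's GET /v1.0/myorg/groups/{groupId}/users endpoint.
interface WorkspaceUser {
  identifier: string;                // UPN or service principal object ID
  principalType: "User" | "App" | "Group";
  groupUserAccessRight: "Admin" | "Member" | "Contributor" | "Viewer";
}

// Anything above Viewer in an external workspace is a data exposure risk,
// because Admin/Member/Contributor assignments bypass RLS entirely.
function findRlsBypassingAssignments(users: WorkspaceUser[]): WorkspaceUser[] {
  return users.filter((u) => u.groupUserAccessRight !== "Viewer");
}

// Hypothetical audit of an external workspace's role assignments:
const externalWorkspaceUsers: WorkspaceUser[] = [
  { identifier: "embed-sp-object-id", principalType: "App", groupUserAccessRight: "Viewer" },
  { identifier: "analyst@contoso.com", principalType: "User", groupUserAccessRight: "Contributor" },
];

const risky = findRlsBypassingAssignments(externalWorkspaceUsers);
// The analyst's Contributor role is flagged: it does not belong here.
```

Running a check like this on a schedule turns the silent failure mode described above into a loud one.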

2. Build the Semantic Model Separation Layer

Once workspace boundaries are in place, the next decision is what your external semantic models actually contain. You have two options, and the choice matters for long-term maintainability:

Option A - Live connection to certified internal models: Your external reports connect via live connection to certified, promoted semantic models in a dedicated dataset workspace. You apply OLS at the model level to hide columns or tables not appropriate for external consumption. RLS handles row-level customer filtering. The internal model is the single source of truth - you're not duplicating.

Option B - Dedicated external models per customer segment: Separate semantic models built specifically for external delivery, sourced from the same underlying data but curated for external consumption from the ground up. More models to maintain, but cleaner isolation and no risk of internal measures bleeding through OLS gaps.

Microsoft's guidance here is direct: if you have distinct customer segments with meaningfully different data permissions, workspace-based isolation with dedicated models outperforms a single shared model with complex RLS.

3. Implement the External Access and Identity Layer

The third layer is how external users authenticate and how your application generates the right filtered view for each customer. This is where the implementation gets technical fast.

Service Principal architecture: For any production external delivery, you should be using an app registration (service principal) with the Power BI Embedded "app owns data" model - not "user owns data". This means your application authenticates to Power BI on behalf of customers, generates embed tokens with the customer's effective identity, and passes that identity through to RLS at the semantic model level.

The embed token generation in your application looks something like:

EffectiveIdentity(username: customerTenantId, roles: ["CustomerRole"], datasets: [datasetId])

Where customerTenantId is the identifier you use to filter data in your security table, and CustomerRole is a dynamic RLS role in your semantic model that filters via USERPRINCIPALNAME() against a customer mapping table.

What this gives you: Each customer's embed token is scoped to their data only. They can't see other customers' data because the effective identity controls what the RLS filter evaluates to. They can't navigate to other reports because the embed token scopes access to specific report IDs. And if a token expires, the session ends cleanly.
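Concretely, the flow above can be sketched against the Power BI REST API's `GenerateToken` endpoint. This is a minimal illustration, not production code: the role name `CustomerRole` matches the pseudocode above, but the IDs are placeholders, the AAD token acquisition step is not shown, and error handling is omitted.

```typescript
// Sketch of the "app owns data" embed-token flow. The effective identity
// carries the customer's tenant identifier into USERPRINCIPALNAME(), which
// the dynamic RLS role resolves against the customer mapping table.
interface EffectiveIdentity {
  username: string;   // the customerTenantId used by the RLS mapping table
  roles: string[];    // dynamic RLS role(s) defined in the semantic model
  datasets: string[]; // datasets this identity applies to
}

// Build the request body for POST https://api.powerbi.com/v1.0/myorg/GenerateToken
function buildGenerateTokenBody(customerTenantId: string, datasetId: string, reportId: string) {
  const identity: EffectiveIdentity = {
    username: customerTenantId,
    roles: ["CustomerRole"],
    datasets: [datasetId],
  };
  return {
    datasets: [{ id: datasetId }],
    reports: [{ id: reportId }], // the token is scoped to this report only
    identities: [identity],
  };
}

// In the application layer, the body is POSTed with the service principal's
// Azure AD access token (acquisition not shown here):
async function generateEmbedToken(aadToken: string, body: object): Promise<string> {
  const resp = await (globalThis as any).fetch("https://api.powerbi.com/v1.0/myorg/GenerateToken", {
    method: "POST",
    headers: { Authorization: `Bearer ${aadToken}`, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const json = await resp.json();
  return json.token; // short-lived embed token for the customer session
}
```

Because the report ID and dataset ID are baked into the token request, the customer session cannot wander beyond the assets you explicitly scoped.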

4. Add the AI Governance and Consistency Layer

This is the step most architecture guides skip entirely because, until about 18 months ago, it wasn't really a problem. It is now.

If your external analytics product includes or plans to include AI-generated content - narrative summaries, anomaly explanations, trend commentary, conversational data queries - you need governance infrastructure for that layer before it reaches customers. Not after.

The specific risks:

  • AI outputs are non-deterministic. The same underlying data can produce different narrative outputs at different times.
  • There's no native approval workflow in any current Power BI AI feature that gates output before it reaches an embedded customer surface.
  • In regulated industries - financial services, healthcare, insurance - AI-generated content reaching customers may require audit trails. Power BI doesn't produce these natively for embedded scenarios.

The architecture answer is a governed AI delivery layer that sits between your semantic models and your customer surface: approval workflows for AI-generated content, per-tenant AI context configuration, audit logging of what AI told which customer and when, and source attribution so customers can trace an AI insight back to its underlying data.
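To make the shape of such a layer concrete, here is a hypothetical sketch of an approval gate with an audit log. Every name in it is illustrative - it is not an actual Reporting Hub or Power BI API - but it shows the invariant the layer enforces: unapproved AI content never reaches a customer surface, and every delivery leaves an audit trail with source attribution.

```typescript
// Hypothetical governed AI delivery gate: content is held until approved,
// and every release to a customer tenant is written to an audit log.
type Status = "pending" | "approved" | "rejected";

interface AiContent {
  id: string;
  tenantId: string;        // per-tenant context: content is scoped to one customer
  body: string;
  sourceDatasetId: string; // source attribution back to the underlying data
  status: Status;
  approvedBy?: string;
}

interface AuditEntry {
  contentId: string;
  tenantId: string;
  deliveredAt: string;
  approvedBy: string;
}

class GovernedAiGate {
  private auditLog: AuditEntry[] = [];

  approve(content: AiContent, reviewer: string): AiContent {
    return { ...content, status: "approved", approvedBy: reviewer };
  }

  // Only approved content reaches the customer surface; every delivery is logged.
  deliver(content: AiContent): AuditEntry {
    if (content.status !== "approved" || !content.approvedBy) {
      throw new Error(`content ${content.id} has not passed the approval workflow`);
    }
    const entry: AuditEntry = {
      contentId: content.id,
      tenantId: content.tenantId,
      deliveredAt: new Date().toISOString(),
      approvedBy: content.approvedBy,
    };
    this.auditLog.push(entry);
    return entry;
  }

  // Answers "what did AI tell which customer, and when?" for one tenant.
  history(tenantId: string): AuditEntry[] {
    return this.auditLog.filter((e) => e.tenantId === tenantId);
  }
}
```

The point is not the code itself but the contract: the delivery path and the audit path are the same path, so nothing can reach a customer without leaving a record.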

This is exactly what BI Genius - Reporting Hub's native AI engine - provides. It's not a replacement for Power BI's internal AI capabilities. It's the governance and delivery infrastructure for those capabilities when they reach the external audience.


What This Architecture Looks Like vs. the Common Alternatives

Workspace structure
  ⚠ Common shortcut: Single shared workspace, Viewer access for customers
  ✓ Governed architecture: Dedicated external workspace, separate from the internal environment

Semantic model
  ⚠ Common shortcut: Same internal model with RLS applied, internal measures exposed
  ✓ Governed architecture: Curated external model, or OLS-controlled live connection from a certified dataset

Customer identity
  ⚠ Common shortcut: Guest accounts added to the tenant via Azure AD B2B; customers see the Power BI interface
  ✓ Governed architecture: Service principal + embed token; customers authenticate to your app, not to Power BI

Data filtering
  ⚠ Common shortcut: RLS only; complex multi-customer DAX with maintenance overhead
  ✓ Governed architecture: Effective identity in the embed token + dynamic RLS, or workspace isolation per customer

AI content governance
  ⚠ Common shortcut: None; Copilot output reaches customers without review or audit trail
  ✓ Governed architecture: Governed AI delivery layer with approval workflow, version control, per-tenant config, and audit trail

What You Don't Have to Rebuild

This is the part that often surprises people: the architecture sounds heavier than it is. For teams with an existing Power BI environment that already works, the goal is not to rebuild it. The goal is to add a cleaner external delivery layer around it.

The existing data models stay in place. Semantic models, measures, DAX, and refresh schedules continue working as they do today, while the external delivery layer references certified datasets instead of replacing them. The internal analytics experience also stays intact, with analysts continuing to work in the internal workspace and internal reports remaining in Power BI Service.

The data pipelines do not need to be rebuilt either. Dataflows, Azure Data Factory pipelines, direct database connections, Fabric OneLake, and other upstream infrastructure can continue feeding the same Power BI semantic models.

What gets added is the surface layer: external workspaces; curated semantic models or OLS configuration; service principal and embed token infrastructure; and the governed delivery layer that sits between models and customers.

The full rebuild usually occurs when teams skip the architecture phase and try to bolt external delivery onto an internal environment not designed for it. Four months later, they're rebuilding from scratch anyway - but now under pressure with a live customer commitment.

Where to Start

If the question is whether customers can access existing Power BI assets, start with the structure before permissions. The goal is to confirm whether the current environment can safely support external delivery. That means auditing what already exists, defining what customers should see, and building the delivery layer in the right order.

  • Audit every workspace, including user roles, semantic models, certified assets, promoted datasets, and any external delivery gaps.
  • Identify certified models first because external delivery should reference trusted assets rather than drafts or development workspaces.
  • Define the customer data contract in plain English before writing DAX filters or configuring security rules.
  • Build from the outside in: external workspace, curated model, service principal identity, embed token flow, governance layer.
  • Treat existing Power BI investments as the foundation, then add a safe external delivery layer around them.

See the External Delivery Architecture in Action

Reporting Hub is the orchestration layer that sits on top of your existing Power BI investment - workspace separation, governed AI delivery, and multi-tenant external access, all in 30 days.


Reporting Hub is the AI-native Intelligence Orchestration Platform built on Power BI, powered by BI Genius. Architecture guidance reflects Power BI Service capabilities as of Q1 2026. RLS and OLS implementation details refer to Microsoft's official documentation.