
Back in September I published an article reviewing options for sharing lakehouse-format data amongst Fabric, Snowflake, and Databricks; the article can be found by clicking here. This article expands on connectivity options between Snowflake and Fabric / Power BI, covering both traditional SQL endpoint connectivity and newer lakehouse integration options. Interoperability between Snowflake and Fabric / Power BI is a hot topic these days; Vice Presidents from both companies even co-authored an article on the subject at this link.
Each of the seven options in Figure 2.0 above will be explained along with potential pros and cons for different use cases. After taking a deep dive into these options while preparing this article, my opinion is that all of them have scenarios where they make sense, although some should be reserved for niche architectural use cases. Many factors influence the connectivity choices for your projects: project personas (IT vs. Business), data latency requirements, report and AI agent rendering time expectations, cost management priorities, and cloud egress costs.
Please note that the views expressed in this article are my own and do not necessarily reflect the views of my employer. I am not comparing Fabric / Power BI versus Snowflake, but rather discussing these options as ways to make them work better together. Also, I plan to write a similar article for Fabric / Power BI integration options with Databricks in the near future.
For each of the seven options for Fabric / Power BI connectivity and integration in Figure 2.0 above, I have added a section below with a diagram and some examples of how the option might benefit a project:
1. Fabric / Power BI read Snowflake SQL endpoint

Power BI reading the Snowflake SQL endpoint has been generally available and supported for many years. Based on what I see in my daily work, most solutions built with Power BI and Snowflake use this method of connectivity. Fabric tools and Power BI Semantic Models can query Snowflake in much the same manner as SQL Server, Oracle, Teradata, Redshift, etc. I break this connectivity method into three modes: Import Mode, Direct Query Mode, and Composite Mode. Import Mode also resembles the way Fabric tools such as Dataflows, Dataflows Gen 2, and Pipelines query the Snowflake SQL endpoint in iterative batches.
1.1 Import Mode
Power BI Semantic Model Import Mode queries the Snowflake SQL endpoint on a schedule or if triggered by an API. The results of the query are cached in a compressed columnar database of a Fabric Semantic Model that is optimized for a reduced data storage footprint, high query complexity, and high query concurrency. When architected correctly, I’ve seen Import Mode Semantic Models perform well with hundreds of millions and sometimes billions of rows in a table.
Fabric tools such as Dataflows, Dataflows Gen 2, and Pipelines also query the Snowflake SQL endpoint in a similar manner, so I have included them in this mode.
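As a quick illustration of the "triggered by an API" path mentioned above, the sketch below calls the Power BI REST API from Python to queue an on-demand refresh of an Import Mode semantic model. The workspace ID, dataset ID, and the way the Azure AD token is obtained are placeholders for illustration; authentication and licensing details depend on your tenant.

```python
import requests

# Hypothetical IDs - replace with your workspace (group) and semantic model (dataset) IDs.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
DATASET_ID = "11111111-1111-1111-1111-111111111111"
TOKEN = "<azure-ad-access-token>"  # e.g. acquired via MSAL for the Power BI API scope

# Power BI REST API: trigger an on-demand refresh of an Import Mode semantic model.
url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/refreshes"
)
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"notifyOption": "NoNotification"},
)
resp.raise_for_status()  # 202 Accepted means the refresh was queued
```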
When is an Import Mode Semantic Model a good choice?
Snowflake administrators can serve up data to users via the SQL endpoint. There’s no need to grant end users access to build anything in Snowflake, and the Snowflake team doesn’t need to do anything in Fabric / Power BI. Fabric / Power BI users can still perform transformations with Power Query or Dataflows / Dataflows Gen 2. Using Fabric Pipelines to ingest tables, views, or custom queries from Snowflake is another option for SQL-query-based ingestion into Fabric.
What are the risks of an Import Mode Semantic Model?
Data latency can be an issue if data from Snowflake needs to be less than thirty minutes old. Large or complex queries can also take longer to run on Snowflake and can increase costs in pay-per-query scenarios with Snowflake. If many semantic models are hitting the same Snowflake tables, duplicate queries could compound the problem and increase total costs. Also, Import Mode Semantic Models can reach performance limits with a combination of extreme query complexity and extremely large volumes of data.
1.2 Direct Query Mode
Direct Query Mode will not create a cached columnar database in the Fabric Semantic Model. The Fabric Semantic Model will convert queries to SQL and send them directly to Snowflake for processing.
When is a Direct Query Mode Semantic Model a good choice?
Direct Query Mode to Snowflake should be a niche option reserved for either 1) use cases requiring extremely low data latency, or 2) extremely large tables (too large for Import Mode) with simple query logic.
What are the risks of a Direct Query Mode Semantic Model?
Every visual on a report, and every query from an AI data agent, sends a SQL query back to Snowflake. If there are 15 visuals on the screen, clicking a filter value sends 15 queries. If 10 people use the report concurrently, there can be 150 (15 visuals x 10 people) potentially complex queries running concurrently on Snowflake. Costs will usually be higher on the Snowflake side of the architecture, and reports may not render as quickly as in Import Mode.
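To make the fan-out effect concrete, here is a small back-of-the-envelope calculation. The default Snowflake MAX_CONCURRENCY_LEVEL of 8 queries per warehouse cluster is the only product-specific number used; everything else is the same arithmetic as the example above.

```python
# Rough estimate of Direct Query fan-out against a Snowflake warehouse.
visuals_per_report = 15          # each visual issues its own SQL query on interaction
concurrent_users = 10
peak_queries = visuals_per_report * concurrent_users   # 150 concurrent queries

# Snowflake's default MAX_CONCURRENCY_LEVEL is 8 queries per warehouse cluster;
# queries beyond that queue (or trigger scale-out on a multi-cluster warehouse).
max_concurrency_per_cluster = 8
clusters_needed = -(-peak_queries // max_concurrency_per_cluster)  # ceiling division

print(f"Peak concurrent queries: {peak_queries}")
print(f"Clusters needed to avoid queuing (roughly): {clusters_needed}")
```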
1.3 Composite Mode
Composite Mode uses both Import Mode and Direct Query Mode in the same Fabric / Power BI Semantic Model. Composite Mode Semantic Models offer the best of both worlds, and they are also ideal candidates for advanced capabilities such as Aggregations, which improve query performance on large and complex data models. I wrote an article about Composite Mode Semantic Models and Aggregations a few years ago at this link. Composite Mode also supports Direct Lake Mode Semantic Models as sources, which are covered as part of options 2-4 below.
When is a Composite Mode Semantic Model a good choice?
Composite Mode Semantic Models are a good option when a complex data model is not a good fit for either Import Mode or Direct Query Mode alone. Extremely large fact tables mixed with smaller fact tables and dimensions are usually a good fit for Composite Mode.
What are the risks of a Composite Mode Semantic Model?
Individual tables within a Composite Mode Semantic Model have the same inherent risks as Import and Direct Query Mode. Also, the complexity of the design requires strong knowledge of data modeling best practices.
2. Fabric Mirroring of Snowflake

With a large Power BI environment or numerous AI Data Agents, queries between Snowflake and Fabric / Power BI can be complex with extremely high concurrency. Even for Import Mode Semantic Models using the Snowflake SQL endpoint, numerous semantic models refreshing frequently can result in high compute costs and duplicate data moving across networks. Fabric OneLake has a capability called Mirroring which can be used to optimize costs and query performance when connecting to Snowflake. Mirroring connects to a Snowflake change data capture (CDC) mechanism to pull table updates into OneLake as incrementally updated delta parquet tables. Fabric / Power BI tools and reports can then query the copy of the table, which moves over the network from Snowflake once, as opposed to each query crossing the network separately. The Fabric compute and storage costs for Mirroring are also free up to the limits described at this link.
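Once a Snowflake table is mirrored, downstream Fabric compute reads the local Delta copy instead of reaching back to Snowflake. A minimal Fabric notebook (PySpark) sketch is below; the lakehouse and table names are hypothetical and assume the mirrored table has been surfaced to the notebook (for example through a lakehouse shortcut).

```python
# Fabric notebook cell (PySpark): query the mirrored copy of a Snowflake table in OneLake.
# "sales_lakehouse" and "orders" are placeholder names for illustration.
df = spark.sql("""
    SELECT order_date, SUM(order_amount) AS total_amount
    FROM sales_lakehouse.orders
    GROUP BY order_date
""")
df.show(10)
# No query is sent to Snowflake here - the Delta table already lives in OneLake.
```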
When is Mirroring a good choice?
- Numerous Fabric / Power BI items are querying the same table in Snowflake
- The Snowflake table supports CDC (is not a view)
- The Snowflake table is updated periodically with small changes or in small batches (not a Type 1 table with full refreshes)
- Cost savings for both Snowflake and Fabric compute are a goal
- Direct Lake Semantic Models are used for Power BI or AI Fabric Data Agents
What are the risks of Mirroring?
- Snowflake views and custom queries are not supported, only tables.
- Type 1 (full refresh) Snowflake tables, or tables that have bulk daily updates, are not good candidates for Mirroring
- If Mirroring breaks or needs to be reset, there may be a delay as the historical data re-populates
- Security must be reconfigured on the new mirrored tables in Fabric / Power BI
3. Snowflake write Iceberg to Fabric

Tools in Fabric / Power BI are optimized to run on delta parquet or Iceberg tables in OneLake. Snowflake natively supports the Iceberg format and can write Iceberg tables to external storage. Writing those Iceberg tables to OneLake enables Fabric / Power BI tools to query them directly without crossing between the platforms every time a query runs. The Snowflake team can write the table to Fabric and keep it updated, and then all of the compute and traffic from Fabric / Power BI can be contained within Fabric.
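The sketch below shows the general shape of this setup from the Snowflake side, executed here through the Python connector: an external volume pointing at a OneLake path, then a Snowflake-managed Iceberg table created on that volume. The OneLake URL, tenant ID, and object names are placeholders; the exact external volume parameters for OneLake are documented at the Snowflake link in the summary table below.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>", user="<user>", password="<password>",
    warehouse="<warehouse>", role="<role>",
)
cur = conn.cursor()

# 1) External volume pointing at a OneLake location (placeholder URL and tenant).
cur.execute("""
    CREATE OR REPLACE EXTERNAL VOLUME onelake_exvol
      STORAGE_LOCATIONS = ((
        NAME = 'onelake'
        STORAGE_PROVIDER = 'AZURE'
        STORAGE_BASE_URL = 'azure://<onelake_endpoint>/<workspace>/<lakehouse>.Lakehouse/Files/'
        AZURE_TENANT_ID = '<tenant-guid>'
      ))
""")

# 2) Snowflake-managed Iceberg table written to that volume (CTAS from an existing table).
cur.execute("""
    CREATE OR REPLACE ICEBERG TABLE analytics.public.orders_iceberg
      CATALOG = 'SNOWFLAKE'
      EXTERNAL_VOLUME = 'onelake_exvol'
      BASE_LOCATION = 'orders_iceberg'
      AS SELECT * FROM analytics.public.orders
""")
```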
When is Snowflake writing Iceberg to Fabric a good choice?
- Numerous Fabric / Power BI items are querying the same table in Snowflake
- Snowflake data can be written from a table, a view, or a custom query
- The Snowflake environment can be configured to update the table in Fabric as needed. Less frequent updates will reduce total compute usage
- Data is sent from Snowflake to Fabric by the Snowflake team, so end users do not need permission to query Snowflake
- Cost savings for both Snowflake and Fabric compute are a goal
- Direct Lake Semantic Models are used for Power BI or AI Fabric Data Agents
What are the risks of Snowflake writing Iceberg to Fabric?
- Refreshes are triggered from Snowflake, not by end users in Fabric / Power BI
- Security must be reconfigured on the new Iceberg tables in Fabric / Power BI
4. Fabric Shortcut to Snowflake Iceberg

In addition to Snowflake writing Iceberg tables to Fabric OneLake, Fabric OneLake can also shortcut to Iceberg tables in the Snowflake environment. This scenario is similar to Snowflake writing Iceberg to Fabric (previous option), but the storage will be outside of Fabric and accessed by OneLake as needed.
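Shortcuts are normally created through the OneLake UI, but for illustration the sketch below calls the Fabric REST API's Create Shortcut endpoint from Python. The payload field names, connection ID, and paths are my assumptions based on the ADLS Gen2-style shortcut shape; treat this as a rough outline and confirm the exact schema against the OneLake shortcut documentation linked in the summary table.

```python
import requests

WORKSPACE_ID = "<fabric-workspace-id>"
LAKEHOUSE_ID = "<lakehouse-item-id>"
TOKEN = "<azure-ad-access-token-for-fabric-api>"

# Illustrative payload: a shortcut under the lakehouse "Tables" folder pointing at
# the storage location where Snowflake maintains the Iceberg table (field names assumed).
payload = {
    "path": "Tables",
    "name": "orders_iceberg",
    "target": {
        "adlsGen2": {
            "connectionId": "<connection-id>",
            "location": "https://<storage-account>.dfs.core.windows.net",
            "subpath": "/<container>/<path-to-iceberg-table>",
        }
    },
}

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{LAKEHOUSE_ID}/shortcuts"
)
resp = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"}, json=payload)
resp.raise_for_status()
```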
When is Fabric Shortcut to Snowflake Iceberg a good choice?
- Numerous Fabric / Power BI items are querying the same table in Snowflake
- Snowflake data can be written from a table, a view, or a custom query
- The Snowflake environment can be configured to update the Iceberg table as needed. Less frequent updates will reduce total compute usage
- Only the Fabric team setting up shortcuts will need access to Snowflake
- Cost savings for both Snowflake and Fabric compute are a goal
- Direct Lake Semantic Models are used for Power BI or AI Fabric Data Agents
What are the risks of Fabric Shortcut to Snowflake Iceberg?
- Refreshes to the Iceberg tables are configured in Snowflake, not by end users in Fabric / Power BI
- Security must be reconfigured on the new Iceberg tables in Fabric / Power BI
- Fabric OneLake shortcuts will still pull the data across the network from Snowflake when the data is needed for a query or Direct Lake Semantic Model.
5. Snowflake Read Fabric SQL endpoint

What if the Snowflake team needs a secure and supported means to query data from Fabric OneLake into Snowflake? For example, business users may load data into OneLake from Excel, flat files, or Power Platform tools. Snowflake can query the Fabric SQL endpoint to pull that data into Snowflake using SQL queries. The Fabric SQL endpoint should be a niche option for Snowflake users, and the next option on this list will likely be a better choice (6. Snowflake read table from Fabric as Iceberg).
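For context, the Fabric SQL analytics endpoint speaks the same TDS protocol as SQL Server, so any ingestion process can read it with a standard SQL driver. Below is a minimal Python sketch using pyodbc with managed identity authentication (in line with the managed identity requirement noted in the risks); the endpoint, database, and table names are placeholders, and whether this runs inside Snowflake or in an intermediate ingestion job depends on your setup.

```python
import pyodbc

# Placeholder endpoint and database names; managed identity auth assumes this runs
# on an Azure resource that has been granted access to the Fabric SQL endpoint.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<lakehouse-or-warehouse>;"
    "Authentication=ActiveDirectoryMsi;"
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    rows = conn.execute(
        "SELECT TOP 10 * FROM dbo.business_uploads"  # hypothetical table of user-uploaded data
    ).fetchall()
    for row in rows:
        print(row)
```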
When is Snowflake reading the Fabric SQL endpoint a good choice?
- Business users have uploaded curated data to Fabric, and the data is needed for projects in Snowflake
- Projects need a secure platform for business users to upload data (Fabric OneLake) as a source for Snowflake
- Data from sources that natively integrate with Fabric OneLake is needed for Snowflake projects
What are the risks of Snowflake reading the Fabric SQL endpoint?
- Connecting to OneLake and reading tables as Iceberg is probably a better option for most use cases (next option below)
- Authentication to Fabric with this option requires using managed identity
6. Snowflake read table from Fabric as Iceberg

Just as Fabric can shortcut to Snowflake and read Iceberg tables, Snowflake can also connect to Fabric OneLake and read tables as Iceberg. In most scenarios, Snowflake reading tables from Fabric OneLake as Iceberg should be the best method to get data from Fabric into Snowflake from the perspective of cost and scalability.
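On the Snowflake side, this typically looks like an external volume plus a catalog integration over the OneLake storage, and then an externally managed Iceberg table that points at the Iceberg metadata OneLake exposes for the Delta table. The sketch below (via the Python connector) shows the general pattern; the volume, integration, and metadata path are placeholders, and the authoritative walkthrough is the OneLake-as-Iceberg link in the summary table.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>", user="<user>", password="<password>",
    warehouse="<warehouse>", role="<role>",
)
cur = conn.cursor()

# Catalog integration for Iceberg tables whose metadata lives in object storage (OneLake).
cur.execute("""
    CREATE OR REPLACE CATALOG INTEGRATION onelake_object_catalog
      CATALOG_SOURCE = OBJECT_STORE
      TABLE_FORMAT = ICEBERG
      ENABLED = TRUE
""")

# Externally managed Iceberg table pointing at the metadata file for a OneLake table.
# 'onelake_exvol' is an external volume targeting the OneLake path (see option 3);
# the METADATA_FILE_PATH value is a placeholder for the path OneLake publishes.
cur.execute("""
    CREATE OR REPLACE ICEBERG TABLE fabric_sales
      EXTERNAL_VOLUME = 'onelake_exvol'
      CATALOG = 'onelake_object_catalog'
      METADATA_FILE_PATH = '<path/to/metadata/vNN.metadata.json>'
""")

cur.execute("SELECT COUNT(*) FROM fabric_sales")
print(cur.fetchone())
```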
When is Snowflake read table from Fabric as Iceberg a good choice?
- Business users have uploaded curated data to Fabric, and the data is needed for projects in Snowflake
- Projects need a secure platform for business users to upload data (Fabric OneLake) as a source for Snowflake
- Data from other sources that natively integrate with Fabric OneLake is needed for Snowflake projects
- The Snowflake team will use Snowflake compute to work with data from Fabric
What are the risks of Snowflake read table from Fabric as Iceberg?
Compared to other options for getting Fabric data into Snowflake, I cannot find any relative risks.
7. Snowflake ingest Fabric Real-Time data as Iceberg

Fabric Real-Time Intelligence is an area of great potential growth for analytics and AI. Data can be pushed into Fabric (rather than pulled) and used for alerting and agentic operations. Real-Time Intelligence is often referred to as the hot path of a lambda architecture. For the cold path of a lambda architecture, the streaming data is stored for deep analytics and machine learning purposes. Many data science and analytics teams use Snowflake for storing historical data. Snowflake can ingest Fabric Eventhouse (Real-Time) data via OneLake as Iceberg for cold path storage.
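Fabric Eventstream's custom endpoint source exposes an Event Hubs-compatible connection string (at least in the setups I have seen), so pushing data into the hot path can be as simple as the sketch below; the connection string, hub name, and event payloads are placeholders, and the cold path hand-off to Snowflake then follows the Iceberg pattern from option 6.

```python
import json
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details copied from the Eventstream custom endpoint source.
CONN_STR = "<event-hubs-compatible-connection-string>"
EVENT_HUB_NAME = "<eventstream-hub-name>"

producer = EventHubProducerClient.from_connection_string(CONN_STR, eventhub_name=EVENT_HUB_NAME)

# Push a small batch of telemetry events into the Fabric hot path.
batch = producer.create_batch()
for reading in [{"sensor": "pump-1", "temp_c": 71.3}, {"sensor": "pump-2", "temp_c": 68.9}]:
    batch.add(EventData(json.dumps(reading)))
producer.send_batch(batch)
producer.close()
```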
When is Snowflake ingest Fabric Real-Time data as Iceberg a good choice?
- Fabric Real-Time Intelligence tools are used for lambda hot path AI, alerting, and analytics, but cold path data is stored and used in Snowflake
- Snowflake teams need access to data that is easily streamed into Fabric from Fabric-friendly sources
What are the risks of Snowflake ingest Fabric Real-Time data as Iceberg?
Compared to other options for getting Fabric Real-Time data into Snowflake, I cannot find any relative risks.
Summary of Options

| Option | Reference URL |
| --- | --- |
| Fabric read Snowflake SQL endpoint | https://learn.microsoft.com/en-us/fabric/data-factory/connector-snowflake-overview https://learn.microsoft.com/en-us/power-bi/connect-data/service-connect-snowflake |
| Fabric mirroring of Snowflake DB (copies metadata & data) | https://learn.microsoft.com/en-us/fabric/mirroring/snowflake |
| Snowflake write Iceberg to Fabric | https://docs.snowflake.com/en/sql-reference/sql/create-external-volume#label-create-external-volume-onelake |
| Fabric shortcut to Snowflake Iceberg | https://learn.microsoft.com/en-us/fabric/onelake/onelake-iceberg-tables |
| Snowflake read Fabric SQL endpoint | https://learn.microsoft.com/en-us/fabric/data-warehouse/query-warehouse |
| Snowflake read table from Fabric as Iceberg | https://blog.fabric.microsoft.com/en-us/blog/new-in-onelake-access-your-delta-lake-tables-as-iceberg-automatically?ft=All |
| Snowflake ingest Fabric Real-Time data as Iceberg | Query Fabric OneLake Delta tables from Snowflake – Microsoft Learn |
Closing Thoughts
All seven of these options for connectivity between Snowflake and Fabric will likely have real-world use cases. In my opinion, the following options will be the most popular due to performance, cost, and administrative considerations:
- 3. Snowflake write Iceberg to Fabric – High query complexity and user concurrency are the primary drivers of unnecessary costs when using Power BI with Snowflake. This option will allow Snowflake teams to write data (tables, views, custom queries) to OneLake on a schedule or when triggered, and then hundreds of queries can hit OneLake for AI and reporting use cases. In theory, this method should minimize cross-platform traffic and reduce total costs while offering fast query response times to end users.
- 1.3 Composite Mode Semantic Models – Leverage both Import Mode and Direct Lake Mode for the best of both worlds, and can be used in combination with options 2 and 3 above.
- 6. Snowflake read table from Fabric as Iceberg – When Snowflake users need data from OneLake, this option should offer the best cost and performance.
- 7. Snowflake ingest Fabric Real-Time data as Iceberg – Similar to #6, but in the context of pulling data from the Fabric Eventhouse for cold path storage and analytics in Snowflake.
Feedback and suggestions are welcome via my LinkedIn page or Twitter.