Microsoft Power BI Performance Best Practices PDF

Document Details


2024

Thomas LeBlanc, Bhavik Merchant

Tags

Power BI performance, business intelligence, data visualization

Summary

This guide provides practical techniques for building high-speed Power BI solutions. It discusses performance targets, Power BI architecture, and optimization strategies for different data storage modes. It also includes details about external tools, like Tabular Editor, and third-party utilities used for performance tuning.

Full Transcript

Microsoft Power BI Performance Best Practices Learn practical techniques for building high-speed Power BI solutions Thomas LeBlanc | Bhavik Merchant Microsoft Power BI Performance Best Practices Copyright © 2024 Packt Publishing All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information. Group Product Manager: Apeksha Shetty Publishing Product Manager: Nilesh Kowadkar Book Project Manager: Aparna Nair Senior Editor: Sushma Reddy Technical Editor: Kavyashree K S Copy Editor: Safis Editing Proofreader: Sushma Reddy Indexer: Manju Arasan Production Designer: Prashant Ghare DevRel Marketing Coordinator: Nivedita Singh First published: April 2022 Second edition: July 2024 Production reference: 1260724 Published by Packt Publishing Ltd. Grosvenor House 11 St Paul’s Square Birmingham B3 1RB, UK ISBN 978-1-83508-225-6 www.packtpub.com To my family, Janet, Tyler, and Kaylyn – I love y’all! Thomas LeBlanc Foreword I have known Thomas for close to 15 years. From the very beginning, his desire to pass on his knowledge to those around him was obvious. Today, it is my pleasure to serve with him as a leader in the Baton Rouge User Groups. Thomas co-leads our SQL Server and.NET User Group meetings. He also plays a key role in the organization of our annual SQL Saturday and Analytics events. He understands the value of community and he works hard to bring more people into our community and make them feel welcome. Thomas is not only a great leader but also a great friend. He is always supportive, generous, and kind to everyone he meets. He has taught me a lot about SQL Server, BI, and community building. I am honored to serve the community with him and to be able to call him a friend. Kenneth Neal, President, Baton Rouge User Groups Contributors About the author Thomas LeBlanc is a seasoned Business Intelligence Architect at Data on the Geaux, where he applies his extensive skillset in dimensional modeling, data visualization, and analytical modeling to deliver robust solutions. With a Bachelor of Science in Management Information Systems from Louisiana State University, Thomas has amassed over 30 years of experience in Information Technology, transitioning from roles as a software developer and database administrator to his current expertise in business intelligence and data warehouse architecture and management. Throughout his career, Thomas has spearheaded numerous impactful projects, including consulting for various companies on Power BI implementation, serving as lead database administrator for a major home health care company, and overseeing the implementation of Power BI and Analysis Service for a large bank. 
He has also contributed his insights as an author to the Power BI MVP book. Thomas is recognized as a Microsoft Data Platform MVP and is actively engaged in the tech community through his social media presence, notably as TheSmilinDBA on Twitter and ThePowerBIDude on Bluesky and Mastodon. With a passion for solving real-world business challenges with technology, Thomas continues to drive innovation in the field of business intelligence. I want to thank all the people who have helped me with the updates to an already great book. Bhavik Merchant has nearly 18 years of deep experience in Business Intelligence. He is currently the Director of Product Analytics at Salesforce. Prior to that, he was at Microsoft, first as a Cloud Solution Architect and then as a Product Manager in the Power BI Engineering team. At Power BI, he led the customer-facing insights program, being responsible for the strategy and technical framework to deliver system-wide usage and performance insights to customers. Before Microsoft, Bhavik spent years managing high-caliber consulting teams delivering enterprise-scale BI projects. He has provided extensive technical and theoretical BI training over the years, including expert Power BI performance training he developed for top Microsoft Partners globally. About the reviewer Devanshu Tayal is a highly accomplished data scientist with a master’s degree from BITS, Pilani, India. His extensive expertise in data science is evidenced by his contributions to a wide range of industries. Devanshu is deeply committed to mentoring and guiding aspiring data scientists and is an avid researcher of emerging technologies in the field. He is a strong advocate for diversity and inclusion and has shared his insights through various publications. Devanshu is frequently invited to deliver guest lectures at universities throughout India, and his contributions as a technical reviewer have been acknowledged in multiple books. His comprehensive knowledge and experience in the field make him an asset to any team or project. 
Table of Contents Preface xv Part 1: Architecture, Bottlenecks, and Performance Targets 1 Setting Targets and Identifying Problem Areas 3 Defining good performance 4 Connecting data sources 8 Reporting performance goals 4 Import mode 8 Setting realistic performance targets 6 DirectQuery mode 8 Live connection mode 9 Considering areas that could slow DirectLake mode 9 you down 7 Summary 9 2 Exploring Power BI Architecture and Configuration 11 Understanding data connectivity and Deploying Power BI gateways 21 storage modes 12 How gateways work 22 Choosing between Import, DirectQuery, and Good practices for gateway performance 23 Direct Lake mode 14 Sizing gateways 24 Import mode 15 General architectural guidance 30 DirectQuery mode 17 Direct Lake mode 18 Capacities 30 Live connection 20 Planning data and cache refresh schedules 33 Summary 34 viii Table of Contents 3 Learning the Tools for Performance Tuning 35 Technical requirements 36 Using the Optimize ribbon 49 Overview of data engine architecture 36 Pause and Refresh visuals 49 Import mode 37 Optimization presets 51 Executing a query 38 The Apply all slicers button 51 Term definitions 39 Adapting external tools 52 Learning about the performance DAX Studio 52 analyzer40 Query Diagnostics 53 Actions and metrics in the Tabular Editor 54 performance analyzer 41 Other tools 54 Determining user actions 43 Summary55 Exporting and analyzing performance data 44 Part 2: Performance Analysis, Improvement, and Management 4 Analyzing Logs and Metrics 59 Power BI usage metrics 59 Analysis Services server traces with the Customizing the usage metrics report 63 XMLA endpoint 76 Integration with Azure Log Analytics 77 Power BI logs and engine traces 73 Activity logs and unified audit logs 73 Monitoring Azure Analysis Services (AAS) and PBIE 77 Import from activity logs 74 Azure metrics for AAS 77 Summary79 Table of Contents ix 5 Optimization for Storage Modes 81 DirectQuery and relationships 82 Direct Lake semantic models 93 Optimizing DirectQuery relationships 86 Using Delta tables in Fabric 93 On-demand loading 94 General DirectQuery guidance 88 Power BI Desktop settings 88 Summary95 Optimizing external data sources 91 6 Third-Party Utilities 97 Technical requirements 98 Working with Tabular Editor 102 Exploring Power BI Helper 98 Using Tabular Editor’s Best Practice Analyzer 102 Identifying large column dictionaries 98 Tuning with DAX Studio and Identifying unused columns 99 VertiPaq Analyser 106 Identifying bidirectional and Analyzing model size with VertiPaq Analyzer 106 inactive relationships 100 Performance tuning the data model and DAX 108 Identifying measure dependencies 101 Summary115 7 Performance Governance Framework 117 Establishing a repeatable Fabric Capacity Metrics 128 improvement process 118 Calling REST APIs for The performance management cycle 118 monitoring data 133 Knowledge sharing and awareness 121 Custom connectors 134 Helping self-service users 121 Storing REST API data 136 Leveraging professional developers 122 Other resources 137 Applying steps to different usage scenarios 122 Summary138 Using performance metrics reports 126 Usage metrics report 126 x Table of Contents Part 3: Fetching, Transforming, and Visualizing Data 8 Loading, Transforming, and Refreshing Data 141 Technical requirements 142 Refreshing incrementally 156 General data Using Query Diagnostics 160 transformation guidance 142 Collecting Power Query diagnostics 161 Data refresh, parallelism, and resource usage 142 Analyzing the Power Query logs 162 Improving the 
development experience 145 Optimizing dataflows 164 Folding and joining queries 151 Gen2 destinations 168 Query folding 151 Summary168 Joining queries 156 9 Report and Dashboard Design 171 Technical requirements 171 Using Top N to limit data 182 Optimizing report layout 172 Moving slicers to the filter pane 183 Too many elements in a report 172 Optimization for dashboard and Reduce a busy report 175 paginated reports 184 Reducing queries to the semantic model 177 Following best practices for dashboards 184 Using the small multiples option 178 Optimizing paginated reports 186 Interaction optimization for slicing Summary187 and dicing 180 Selecting a value for a slicer 180 Disabling interaction when necessary 181 Table of Contents xi Part 4: Data Models, Calculations, and Large Semantic Models 10 Dimensional Modeling and Row Level Security 191 Technical requirements 192 Avoiding pitfalls with row-level Building efficient models 192 security205 The Kimball dimensional model theory 192 General guidance for RLS configuration 205 Designing a basic star schema 194 Optimize relationships 206 Guidance that applies to dynamic RLS 206 Building a single source of truth 196 Reducing dataset size 198 Summary210 Considering many-to-many relationships and bi-directional filtering201 Using bi-directional relationships carefully 204 11 Improving DAX 211 Technical requirements 212 Improving filter context for a measure 217 Understanding row and filter context 212 Understanding DAX pitfalls and Calculated column 212 optimizations219 Measure213 Tuning DAX 219 Dissecting row context 214 DAX guidance 220 Discovering filter context 214 Improving the performance of Summary229 a calculated column 216 xii Table of Contents 12 High Scale Patterns 231 Technical requirements 232 Leveraging aggregations 242 Scaling with capacities and Azure Improving performance with Analysis Services 232 Synapse and Fabric 247 Leveraging Fabric for data scale 232 The modern data warehouse Throttling and smoothing in Fabric capacity 234 architecture (Synapse) 248 Leveraging AAS for data and user scale 234 ADLS249 Using QSO to achieve higher Azure Synapse Analytics and Fabric 249 user concurrency 235 Using partitions in the fact table 238 Summary251 Further reading 251 Scaling with aggregations and composite models 240 Leveraging composite models 240 Part 5: Optimizing Capacities in Power BI Enterprises 13 Working with Capacities 255 How a noisy neighbor impacts Capacity planning, monitoring, shared capacity 256 and optimization 269 Controlling capacity workloads Determining the initial capacity size 269 and settings 258 Validating capacity size with load testing 271 Capacity settings 259 Alert notifications 273 Monitoring capacities 274 How capacities manage resources 262 Understanding the compute report page 275 Managing capacity overload and Autoscale 264 Handling peak loads in Premium capacity Summary284 with Autoscale 267 Table of Contents xiii 14 Performance Needs for Fabric Artifacts 287 Fabric artifacts 288 Tips for enhancements 297 Delta tables 291 Load balancing 298 Warehouse or lakehouse 292 Dataflow copy fast 298 The Spark engine 293 On-demand loading 298 Loading data in large chunks 299 Using Direct Lake for data sources 294 Vacuum and Delta table structure 299 Monitoring Fabric resource consumption 295 Summary 299 Measuring the hotness of data 296 15 Embedding in Web Apps 301 Improving embedded performance 302 Summary 308 Measuring embedded performance 306 Index 309 Other Books You May Enjoy 320 Preface It is very easy to 
start building analytical solutions in Power BI. Insightful content can come from many sources imported into a semantic model. The popularity of the model can bring many requests from various people in the organization for additional slicers and filters. If you do not plan for scale appropriately, performance issues will arise at all angles. This book can help by covering performance issues with optimizations for every layer of Power BI, from the report pane to data modeling as well as transformation, storage, and architecture. Developers and architects working with Power BI will be able to put their knowledge to work with this practical guide to designing and implementing solutions at every step of the development process ending with a deployment. This book is not only a collection of best practices but it also provides a structured process with a hands-on approach to identifying and preventing common issues with using Power BI. Complete with explanations of essential concepts and practical examples, you’ll learn about common design choices that affect the performance and consumption of resources. You’ll grasp common architectural patterns and settings that affect most deployments to the service. As you progress through the book, each level will show a typical example at that stage. This will help with the scale and usability of the semantic model and any reports associated with that model. The layers include the report pane, a semantic model, capacities for deployment, and gateway optimizations. By the end of this book, you will know where to go to get the latest optimization techniques for each layer of design and deployment of Power BI models and reports. Who this book is for This book is for a range of users. These can be data analysts who use Power BI for analytical reporting. They could be architects who are deploying a shared model to a host of report developers. As far as Power BI administrators are concerned, there are chapters for the gateway and capacities assisting with configurations. Other titles include business intelligence developers, report writers, application developers, and Power Platform administrators. These titles do not include beginner levels because an intermediate level of business intelligence implementation should be already part of their experience. xvi Preface What this book covers Chapter 1, Setting Targets and Identifying Problem Areas, describes a Power BI solution as a stream of data from multiple sources reaching consumers in a consolidated fashion. We look at how data can be stored in Power BI and the different paths it can take before reaching a user. Many of the initial architectural design choices made in the early stages of the solution are very difficult and costly to switch later. That’s why it’s important to have a solid grasp of the implications of those choices and how to decide what’s best at the start. Chapter 2, Exploring Power BI Architecture and Configuration, looks at data storage modes in Power BI and how the data reaches the data model while giving some general guidance to improve throughput and latency. The storage mode chosen can limit size and data freshness. It also covers how to best deploy Power BI gateways, which are commonly used to connect to external data sources. This is important because users often demand up-to-date data, historical data, and aggregated data. 
Chapter 3, Learning the Tools for Performance Tuning, explores how the easiest way to see where time is being spent in reports is to use the desktop Performance Analyzer to get detailed breakdowns for every user action, on a per-visual basis. Queries from this tool can be run in DAX Studio for server timing breakdown and better analysis. In addition, Tabular Editor can be used to examine measures for properties and syntax for performance tuning. Chapter 4, Analyzing Logs and Metrics, describes how performance can only be improved if it can be measured objectively. Therefore, this chapter covers all the sources of performance data and how to make sense of the information provided to identify the parts of the solution that are bottlenecks. This includes useful native and third-party utilities. We also provide guidelines to help monitor and manage performance continuously. Chapter 5, Optimization for Storage Models, describes how, with the proliferation of data lakes, more options are available for performance improvements with DirectQuery or DirectLake. Synapse has brought Massively Parallel Processing (MPP) from big data to analytical databases. DirectQuery can use the column store type tables in Synapse and other MPPs in the cloud. The use of aggregations with DirectQuery external data sources has become a common choice for large fact tables. There are optimizations that can be made in both Power BI and external sources to avoid hitting limits too quickly. Chapter 6, Third-Party Utilities, covers a few popular third-party utilities that are effective in performance investigation and tuning and walks through typical use cases around connecting them to Power BI, collecting metrics, and what to look for when diagnosing performance problems. Chapter 7, Performance Governance and Framework, talks about how the metrics and tools covered in earlier chapters are essential building blocks for performance management. However, success is more likely with a structured and repeatable approach to build performance-related thinking into the entire Power BI solution lifecycle. This chapter provides guidelines to set up data-driven processes to avoid sudden scale issues for new content and prevent degradations for existing content. Preface xvii Chapter 8, Loading, Transforming, and Refreshing Data, explains how loading data periodically is a critical part of any analytical system, and in Power BI, this applies to Import mode semantic models. Data refresh operations in Import mode are CPU- and memory-intensive, which can lead to long delays or failures, especially with large semantic models. This can leave users with stale data or slow down development significantly, which is why it should be designed with performance in mind. Chapter 9, Report and Dashboard Design, covers reports and dashboards, which are the “tip of the iceberg” in a Power BI solution since they are what consumers interact with regularly. This chapter covers important considerations and practices to apply regarding visual layout, configuration, and slicing/filtering. It also looks at paginated reports, which behave differently from interactive reports and have special performance considerations. Chapter 10, Dimensional Modeling and Row-level Security, describes how the Power BI semantic model is where data lands after being shaped, and where data is retrieved for analysis. Hence, it is arguably the most critical piece, at the core of a Power BI solution. 
Power BI’s feature richness and modeling flexibility provide alternatives when modeling data. Some choices can make development easier at the expense of query performance and/or semantic model size. This chapter provides guidance on model design, size reduction, and faster relationships. Chapter 11, Improving DAX, covers DAX formulas, which allow BI developers to add a diverse range of additional functionality into the model. The same correct result can be achieved by writing different DAX formulas without realizing that one version may be significantly slower in certain query or visual configurations. This chapter highlights common DAX issues and recommended practices to get calculations performing at their best. It will also contain the definitions and examples for computed columns and measures with a dive into the filter context. Chapter 12, High-Scale Patterns, explains how the amount of data organizations collect and process is increasing all the time. Even with Power BI’s data compression technology, it isn’t always possible to load and store massive amounts of data in an Import mode model in a reasonable amount of time. This problem is worse when you must support hundreds or thousands of users in parallel. This chapter covers the options available to deal with such issues by leveraging Azure technologies and Power BI aggregations and composite models. In addition, Fabric and Synapse will be utilized for speed improvements in data sources including the lakehouse. Chapter 13, Working with Capacities, covers working with and monitoring capacity. Power BI offers dedicated capacity, higher limits, and many additional capabilities such as paginated reports and AI. This does, however, require diligent capacity management to prevent resource exhaustion. This chapter covers each of the available workload settings in detail. We then look at ideal to extreme usage/load scenarios and how the capacity manages its memory in each case. We also look at the Microsoft- provided template apps to monitor capacities. xviii Preface Chapter 14, Performance Needs for Fabric Artifacts, talks about how Fabric options bring new artifacts into the capacity and some that are updated. Performance of the capacity will be affected by pipelines, Lakehouse/warehouse structures, as well as a destination added for Dataflow Gen2. These all have resource requirements and many people will be guided toward a different capacity for using Fabric features. Chapter 15, Embedding in Web Apps, teaches how embedding Power BI content in a custom web app is a great way to expose data analytics within a completely customized UI experience, along with other no-Power BI-related content. This pattern does introduce additional considerations since the Power BI application is hosted externally via API calls. This chapter looks at how to do this efficiently and then measure performance. To get the most out of this book Some chapters in this book come with sample files that you can open in Power BI Desktop to explore the concepts and enhancements we provide. The examples largely show designs before and after performance improvements have been implemented. Therefore, it is not mandatory to review these examples, but they do provide useful context and can help teach new concepts through hands-on experience. 
Software/hardware covered in the book Operating system requirements Power BI Desktop Windows DAX Studio 3.0.11 Tabular Editor 2.x Power BI Pro license Premium Capacity or Fabric (can be trial version) We always recommend having the latest version of Power BI Desktop available due to the monthly release cycle. If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code. Download the example code files You can download the example code files for this book from GitHub at https://github.com/ PacktPublishing/Microsoft-Power-BI-Performance-Best-Practices-Second- Edition. If there’s an update to the code, it will be updated in the GitHub repository. Preface xix The example files included with the book are of various types. The PBIX files are Power BI desktop examples used in each chapter. The other file types are as follows: 1..bak - SQL Server database backup file, to be used when data is needed. 2..json - Exports from monitoring tools or configuration file examples. 3..xlsx - Example Excel files. 4..ps1 - PowerShell scripts for monitoring. 5..txt - Tabular Editor rules for best practices. 6..csv - Example data. 7..gitattributes - GitHub configuration file. Each file has a use for following a specific example in the related chapter. We also have other code bundles from our rich catalog of books and videos available at https:// github.com/PacktPublishing/. Check them out! Conventions used There are a number of text conventions used throughout this book. Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “We exported this data and then worked on it in the sample Analyzing Desktop Performance Logs.pbix file.” A block of code is set as follows: {    "version":"1.1.0",    "events":[      {        "name":"User Action",        "component":"Report Canvas",        "start":"2021-09-03T03:53:22.139Z",        "id":"a702542d7cbbcd9b37a0",        "metrics":{          "sourceLabel":"UserAction_StartedMonitoring"         }     }, xx Preface Any command-line input or output is written as follows: $json = Get-PowerBIActivityEvent -StartDateTime $StartDate-EndDateTime $EndDate | ConvertFrom-Json Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Initial architectural choices can help with the cost, but the Software as a Service (Saas) structure of Power BI enables many easy and fast ways to adjust the service behind reports or dashboards” Tips or important notes Appear like this. Get in touch Feedback from our readers is always welcome. General feedback: If you have questions about any aspect of this book, email us at customercare@ packtpub.com and mention the book title in the subject of your message. Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form. Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. 
Please contact us at [email protected] with a link to the material. If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com. Preface xxi Share Your Thoughts Once you’ve read Microsoft Power BI Performance Best Practices, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback. Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content. xxii Preface Download a free PDF copy of this book Thanks for purchasing this book! Do you like to read on the go but are unable to carry your print books everywhere? Is your eBook purchase not compatible with the device of your choice? Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost. Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application. The perks don’t stop there, you can get exclusive access to discounts, newsletters, and great free content in your inbox daily Follow these simple steps to get the benefits: 8. Scan the QR code or visit the link below https://packt.link/free-ebook/978-1-83508-225-6 9. Submit your proof of purchase 10. That’s it! We’ll send your free PDF and other benefits to your email directly Part 1: Architecture, Bottlenecks, and Performance Targets In this part, we will review high-level Power BI architecture and identify areas where performance can be affected by design choices. We will explain how to define realistic performance targets. This part has the following chapters: Chapter 1, Setting Targets and Identifying Problem Areas Chapter 2, Exploring Power BI Architecture and Configuration Chapter 3, Learning the Tools for Performance Tuning 1 Setting Targets and Identifying Problem Areas Many people would consider report performance as the most critical area to focus on when trying to improve the speed of an analytics solution. This is largely true, because it is the most visible part of the system used by pretty much every class of user, from administrators to business executives. However, you will learn that there are other areas of the solution that should be considered if performance is to be managed comprehensively. For example, achieving good performance in the reporting layer might be of no consequence if the underlying dataset that powers the report takes a long time to be refreshed or is susceptible to failures due to resource limits or system limits being reached. In this case, users may have great-looking, fast reports that do not provide value because the data is stale. The authors of this book have experienced the effects of poor report performance firsthand but have also enjoyed the fruits of improving performance or a great dimensional model. In multiple projects, large companies have tried to migrate from one reporting system to another without much planning. They just wanted to duplicate the existing system in a new reporting application. Copying the old reporting system was not functionally equivalent. This led to poor design choices and resulted in some slow reports. Some end users were reluctant to adopt the new system after lots of money was spent on licensing and consulting. 
While this example is on the extreme side, it does demonstrate the potential ramifications when you do not use a good design for great performance. This chapter begins the journey to achieving good and consistent performance in Microsoft Power BI. To introduce the full scope of performance management, we will describe a Power BI solution as a stream of data from multiple sources. These streams will be consolidated and presented to a data analyst. Data will be looked at to examine and understand the storage process and how it gets to the report viewer. Initial architectural choices can help with the cost, but the Software as a Service (Saas) structure of Power BI enables many easy and fast ways to adjust the service behind reports or dashboards. Hence, it is important to have a solid grasp of the implications of the path to implementation and see a data-driven approach to help us decide what is the best start. 4 Setting Targets and Identifying Problem Areas An area of performance management that is easily overlooked is that of setting performance targets. How do you know whether the experience you are delivering is great, merely acceptable, or poor? We will begin by exploring this theoretical area first to define our goal before diving into technical concepts. The chapter is broken into the following sections: Defining good performance Considering areas that could slow you down Choices that affect performance Defining good performance With the advent of ever-faster computers and the massive scale of processing available today by way of cloud computing, business users expect and demand analytical solutions that perform well. This is essential for competitive business decision-making. Business Intelligence (BI) software vendors echo this need and tend to promise quick results in their sales and marketing materials. Their expectations mean that it is uncommon to find users getting excited about how fast reports are or how fresh data is because it is something implicit to them having a positive experience. Conversely, when users must wait a long time for a report to load, they are quite vocal and tend to escalate such issues via multiple channels. When these problems are widespread, they can damage the reputation of both software platforms and the teams involved in building and maintaining those solutions. In the worst possible case, users may refuse to adopt these solutions and management may begin looking for alternative platforms. It’s important to think about performance from the onset because it is often very costly and time-consuming to fix performance after a solution has reached production, potentially affecting thousands of users. Reporting performance goals Today, most BI solutions are consumed via a web interface. A typical report consumption experience involves not just opening a report, but also interacting with it. In Power BI terms, this translates to opening a report and then interacting with filters, slicers, and report visuals, plus navigating to other pages explicitly or via a bookmark and drill through. With each report interaction, the user generally has a specific intention, and the goal is to not interrupt their flow. A term commonly used in the industry is analysis at the speed of thought. This experience and the related expectations are very similar to navigating regular web pages or interacting with a web-based software system. 
Figure 1.1 – Power BI portal (app.powerbi.com) Therefore, defining good performance for a BI solution can take some cues from the many studies on web and user interface performance that have been performed over the past two or three decades, so this is not a question we have to answer from scratch. Nah, F. (2004) conducted a study focusing on tolerable wait time (TWT) for web users. TWT was defined as how long users are willing to wait before abandoning the download of a web page. Nah reviewed many previous studies that explored the thresholds at which users’ behavioral intentions get lost and when their attitudes begin to become negative. From this research, we can derive that a well-performing Power BI report should completely load a page or the result of an interaction, ideally, in less than 4 seconds and in most cases not more than 12 seconds. We should always measure report performance from the user’s perspective, which means we measure from the time they request the report (for example, click a report link on the Power BI portal) until the time the last report visual finishes drawing its results on the screen. Setting realistic performance targets Now that we have research-based guidance to set targets, we need to apply it to real-world scenarios. A common mistake is to set a single performance target for every report in the organization and to expect it to be met every single time a user interacts with a report. For example, very large dataset sizes (tens of GBs) combined with complex nested DAX calculations that are displayed on multiple hierarchical levels of granularity in a table visual will naturally need significant time to be processed and displayed. This would generally not be the case with a report working on a small data model (tens of MBs) containing a row of simple sum totals, each displayed within a Card visual. Figure 1.2 – Card and table visuals Due to the variability of the solution complexity and other factors beyond the developer’s control (such as the speed of a user’s computer or which web browser they use), it is recommended that you think of performance targets in terms of typical user experience and acknowledge that there may be exceptions and outliers. Therefore, the performance target metric should consider what most users experience. We recommend report performance metrics that use the 90th percentile of the report load or interaction duration, often referred to as P90. Applying the research guidance on how long a user can wait before becoming frustrated, a reasonable performance target would be a P90 report load duration of 10 seconds or less. This means 90% of report loads would occur in under 10 seconds. However, a single target such as P90 is still not sufficient, and we will introduce further ideas about this in Chapter 7, Performance Governance Framework. For now, we should consider that there may be different levels of complexity, so it is recommended to set up a range of targets that reflect the complexity of solutions and the tolerance levels of users and management alike. As an example of targets that could be adopted in an organization, the P90 duration target might be under 5 seconds for a typical report and under 15 seconds for a complex report (Figure 1.3 – Example report performance targets). Next, we will look at Power BI from a high level to get a broad understanding of the areas that need to be considered for performance improvement.
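Before doing that, here is a minimal sketch of how such a P90 figure can be calculated once you have collected report load durations. This is only an illustration; the durations.csv file and its DurationSeconds column are hypothetical placeholders for whatever measurements you capture (for example, from the Performance Analyzer export or usage metrics covered in later chapters).

# Minimal sketch: calculate the P90 report load duration from captured samples.
# Assumes a hypothetical durations.csv with one DurationSeconds value per report load.
$samples = Import-Csv -Path '.\durations.csv' |
    ForEach-Object { [double]$_.DurationSeconds } |
    Sort-Object

# Nearest-rank 90th percentile: the value below which 90% of report loads fall.
$index = [math]::Ceiling(0.9 * $samples.Count) - 1
$p90 = $samples[$index]

'P90 report load duration: {0:N1} seconds' -f $p90
if ($p90 -gt 10) { 'Above the 10-second target - review the slowest reports first.' }

In practice, you would capture these durations with the tools described in Chapter 3, Learning the Tools for Performance Tuning, rather than assembling them by hand.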
Considering areas that could slow you down The next step in our performance management journey is to understand where time is spent. A Power BI solution is ultimately about exposing data to a user and can be thought of as a flow of data from source systems to data stores. Through these various components, eventually, the visual will reach a user through a browser or mobile device. A simplified view of a Power BI solution is presented in the following figure. Figure 1.4 – Power BI solution overview Here, we will briefly focus on the different paths of a typical solution to explain why each piece has important considerations for users and the effect poor performance can have on each process. Some of these areas will be covered in more detail in Chapter 2, Exploring Power BI Architecture and Configuration. 8 Setting Targets and Identifying Problem Areas Connecting data sources The following diagram highlights the areas of the solution that are affected when data sources and connectivity methods do not perform well: Figure 1.5 – Areas affected by data source and connectivity issues Import mode When using Import mode datasets, developers can experience sluggish user interface responsiveness when working with Power Query or M on the desktop. In extreme cases, this can extend data transformation development from hours to days. Once the solution is deployed, problems in this area can cause refresh times to time out or fail. The Power BI service has a refresh limit of 2 hours, while Premium and Fabric capacities can extend this to 5 hours. Any refresh hitting this limit will be canceled by the system. DirectQuery mode DirectQuery mode leaves the data at the source and needs to fetch data and process it for measures when users interact with a report page. Issues with this part of the configuration most often cause slow report responses. Visuals will take a longer time to load because the source data must be accessed and transformed. Users will get frustrated and, in turn, interrupt the current page, and interact with another view that has a similar response time. This itself will issue more queries to the data source and, ironically, slow down the report even further by placing an additional load on the external data system. Summary 9 Live connection mode Live connection mode originally referred exclusively to connections to external Analysis Services deployments. These could be cloud-native (Azure Analysis Services) or on-premises (SQL Server Analysis Services). More recently, this mode was extended to more use cases with the introduction of shared datasets deployed to the Power BI service. Now, the desktop can connect to a published dataset and build a report separate from the data or have a composite mode with a dataset connection and import mode for more data sources. The connected dataset can be Import, DirectQuery, or DirectLake (see the next subsection), so performance can vary. DirectLake mode DirectLake is a Fabric (or Premium) capacity data source that uses the Delta Lake structure from a Fabric warehouse or lakehouse. The performance is somewhere between an Import and DirectQuery mode connection. The structure of the Parquet files underlying the Delta tables plus the processing of a big data system can help increase performance with a single source data. This is called OneLake in the Fabric documentation. 
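Whichever storage mode is in play, refresh duration is one of the first things worth watching, particularly for Import mode models approaching the 2-hour or 5-hour limits mentioned above. As an illustrative sketch, the following PowerShell uses the MicrosoftPowerBIMgmt module and the documented refresh history REST endpoint to list recent refresh durations; the workspace and dataset IDs are placeholders you would replace with your own.

# Minimal sketch: list recent refresh durations for one semantic model (dataset).
# Requires the MicrosoftPowerBIMgmt module; the IDs below are placeholders.
Connect-PowerBIServiceAccount | Out-Null

$workspaceId = '00000000-0000-0000-0000-000000000000'
$datasetId = '11111111-1111-1111-1111-111111111111'

$url = "groups/$workspaceId/datasets/$datasetId/refreshes?`$top=10"
$history = Invoke-PowerBIRestMethod -Url $url -Method Get | ConvertFrom-Json

# Completed refreshes only; in-progress entries have no endTime yet.
$history.value | Where-Object { $_.endTime } | ForEach-Object {
    $duration = (Get-Date $_.endTime) - (Get-Date $_.startTime)
    '{0}  {1}  {2:N0} minutes' -f $_.startTime, $_.status, $duration.TotalMinutes
}

Refreshes that regularly run close to the limit are a sign that the model or its transformations need the optimizations covered in Chapter 8, Loading, Transforming, and Refreshing Data.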
Summary As we have seen in this chapter, interacting with analytical reports is very similar to other web applications, so the user’s level of engagement and satisfaction can be measured in similar ways. Studies of user interfaces and web browsing suggest that a report that is generated in less than 5 seconds is ideal. They also suggest that reports taking 10 seconds or longer should be examined carefully, as this is the point at which users become frustrated. You should set performance targets and be prepared for outliers by measuring against baselines. Success may still require setting the right expectations by having different targets if you have highly complex reports. It is important to remember that each component of Power BI, along with system resources, can contribute to performance issues. Therefore, performance issues cannot be solved by looking at the report in isolation. In the next chapter, we will focus on data connectivity with the various storage modes of the semantic model. We will also look at gateway optimization and general architectural advice to make sure the environment does not become a bottleneck. 2 Exploring Power BI Architecture and Configuration In the previous chapter, we established guidelines for setting reasonable performance targets and gained an understanding of the major solution areas and Power BI components that should be considered for holistic performance management. In this chapter, we will dig deeper into specific architectural choices, learning how and why these decisions affect your solution’s performance. You will learn to consider broad requirements and make informed decisions to design a solution that meets the needs of different stakeholders. Ultimately, this chapter will help you choose the best components to host your data within Power BI. We will focus mainly on the efficient movement of data from the source system to end users by improving data throughput and minimizing latency. We will begin by looking at data storage modes for the tabular engine and how data reaches the Power BI dataset. We will cover how to best deploy Power BI gateways, which are commonly used to control external data sources. These aspects are important because users often demand up-to-date data, or historical data, and there can be thousands of parallel users in very large deployments. This chapter is broken down into the following sections: Understanding data connectivity and storage modes Deploying Power BI data gateways General architectural guidance Understanding data connectivity and storage modes Choosing a data connectivity and storage mode is usually the first major decision that must be made when setting up a brand-new solution in Power BI. Today, this decision is based on what is available in Power BI or Fabric, plus the underlying data sources. This means choosing between Import, DirectQuery, and Direct Lake, which we introduced in the previous chapter. Within Power BI Desktop or the Power BI service, you need to make this decision as soon as you connect to a data source and before you can see a preview of the data to begin modeling. A few notes here: DirectQuery mode does not support all data sources. Most only offer Import mode. Be aware of this for data freshness and when combining different data sources. Direct Lake supports the Delta Table format in Fabric.
An extract, transform, and load method would need to be implemented in Fabric or another Lakehouse tool to have the Delta Table support needed for this storage mode. Direct Lake is only supported by developing in the Power BI service or a third-party tool such as Tabular Editor. It is not supported by the Power BI Desktop application. Figure 2.1 shows a SQL Server data connector in Power BI Desktop showing both Import and DirectQuery modes: Figure 2.1 – Data connectivity options for a SQL Server source Understanding data connectivity and storage modes 13 Excel workbooks can only be configured with Import mode. Figure 2.2 demonstrates this, where we can only see a Load button without any choices for data connectivity mode. This implies that it is Import mode. Figure 2.2 – Data connection for Excel showing no Import or DirectQuery choice The Direct Lake mode is created in a Lakehouse or Warehouse in Fabric. Figure 2.3 shows the creation of a semantic model in a Lakehouse. Figure 2.3 – Data connection to Direct Lake Delta Table(s) in Fabric Lakehouse 14 Exploring Power BI Architecture and Configuration The Warehouse model is shown in Figure 2.4 as the Model tab. There is no DirectQuery or Import mode in this area of the Power BI service. The storage mode is like DirectQuery to a data source except there is only one type of data source: Delta Tables. Fabric workspaces can be created in a Premium capacity. Figure 2.4 – Data connection to Direct Lake Delta Table(s) for Fabric Warehouse The following section will dig deeper into the storage modes of data for a tabular model. The tabular engine from SQL Server Analysis Services is the underlying storage for Power BI semantic models. This engine uses xVelocity technology to create a compressed, column storage structure that is fast for analytical reporting. It also places the data in-memory, which is faster than disk storage. Choosing between Import, DirectQuery, and Direct Lake mode When using Power BI Desktop, import data connectivity mode is the default choice because it is faster than DirectQuery, sometimes by orders of magnitude. Import mode tables store data in a tabular database, which is effectively an in-memory cache. With the advent of Fabric, Power BI datasets will now be called semantic models. The use of the term dataset and semantic model refers to the same object. Understanding data connectivity and storage modes 15 The different modes offer different speeds and capabilities, listed as follows: Fastest – Import mode: Data is imported into an in-memory cache Acceptable – Direct Lake mode: DirectQuery to Parquet supported with Delta Table structure Slowest – DirectQuery mode: Connection to a data source with queries retrieving the data Import mode From a purely performance-oriented standpoint, the recommendation is Import mode to take advantage of the tabular engine. The column-store compressed structure will help satisfy the analytical reporting needed from Power BI. The downside is data must be refreshed to be current. Composite models with aggregate tables can help. The other reason why Import models are much faster is that they use Microsoft’s proprietary xVelocity storage known as VertiPak. xVelocity is a column-based storage engine, as opposed to row-based storage found in relational databases. Column-based storage came about to deal with how badly row-based transactional databases handle queries from reporting applications. 
They do many aggregations, potentially over large volumes of data while also offering detailed data exploration capability. Figure 2.5 shows the imported semantic models (datasets) in a data warehouse workspace with their imported data sizes. Figure 2.5 – Import mode models and sizes 16 Exploring Power BI Architecture and Configuration Row-based data storage engines physically store information in groups of rows. This works well when used by transaction systems because they frequently read and write individual or small groups of rows. They end up using most or all columns in the underlying table and were traditionally optimized to save and retrieve whole rows of data. Consider a sales management system where a new order is entered into a system – this would require writing a few complete rows in the database. Now consider the same system being used to view an invoice onscreen – this would read a few rows from various tables and likely use most of the columns in the underlying tables. Now, let’s consider typical reporting and analytical queries for the same sales management system. Business staff would most often be looking at aggregate data such as sales and revenue figures by month, broken down into various categories or being filtered by them. These queries need to look at large volumes of data to work out the aggregates, and they often ignore many columns available in the underlying tables. This access pattern led to column-based storage engines, which store columns physically instead of rows. They are optimized to perform aggregates and filtering on a column of data without having to retrieve entire rows with many redundant columns that do not need to be displayed or filtered. The following diagram shows a simplified view of a table stored in a row-based relational database versus column-based storage of analytical databases. Figure 2.6 – Comparison of row and column storage They also recognize that there is often significant repetition within a column of data; that is, the same values can be found many times. This fact can be leveraged to apply compression to the columns by not storing the same physical values many times. The xVelocity engine does exactly this – it applies different compression algorithms to columns depending on their data type and the number of unique values. This concept of reducing repetition to reduce data size is not new and is the same technique you end up using when you compress or zip files on a computer to make them smaller. In summary, xVelocity’s column-based compression technology gives you the best speed by bringing the data close to reports and squeezes that data down to significantly less than the original size. In Chapter 10, Data Modeling and Row-Level Security, you will learn how to optimize import models. Understanding data connectivity and storage modes 17 Keeping import models as small as possible will help you avoid hitting system limits such as the per-workspace storage limit, which varies depending on shared capacity (1 GB) versus dedicated capacity (10 to 100s of GB). Important note A good rule of thumb is that Import mode tables using xVelocity are about 5 to 10 times smaller. For example, 1 GB of raw source data could fit into a 100 to 200 MB semantic model. It is often possible to get even higher compression depending on the data’s cardinality (uniqueness of values in a column). Next, we will talk about when DirectQuery mode is more appropriate than Import mode. 
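Before doing so, the following PowerShell sketch makes the compression idea concrete by dictionary-encoding a small, repetitive column. It is only an illustration of the principle; the real VertiPaq engine chooses among several encodings, and the byte estimates here are deliberately rough.

# Illustration only: dictionary encoding of a repetitive column.
# This is not the actual VertiPaq algorithm, just the underlying idea.
$column = @('Bike','Bike','Helmet','Bike','Gloves','Helmet','Bike','Bike')

# Store each distinct value once and keep only a small integer id per row.
$dictionary = $column | Select-Object -Unique
$encoded = $column | ForEach-Object { $dictionary.IndexOf($_) }

$rawChars = ($column | Measure-Object -Property Length -Sum).Sum          # characters stored row by row
$encodedChars = ($dictionary | Measure-Object -Property Length -Sum).Sum + $encoded.Count  # dictionary plus one id per row

'Distinct values: ' + ($dictionary -join ', ')
'Encoded ids: ' + ($encoded -join ', ')
'Roughly {0} characters stored raw versus {1} with a dictionary' -f $rawChars, $encodedChars

Real columns with millions of rows and only a handful of distinct values benefit far more, which is where the 5 to 10 times reduction mentioned in the note above comes from.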
DirectQuery mode From a near real-time reporting perspective, DirectQuery would be the go-to for this requirement. DirectQuery does not import data into the VertiPaq engine. The DAX measure sends a structured query to the data source engine to gather data for visuals. Each interaction with the visuals will send a new query to the data source for execution. The downside is slower results with administrative requirements for the source data. If this were a relational database as a source, then a database administrator would be needed for any performance tuning at the source. The advantage of DirectQuery over Import mode is the query gets current data from the data source whereas the Import mode would have to perform a refresh to get current data. While Import mode offers great benefits in terms of model size and query speed, there are some good reasons to choose DirectQuery instead. The main point is that it gets current data from a data source. Sometimes, you will not have a choice and requirements will dictate the use of DirectQuery. The model is smaller because only the metadata about tables, columns, relationships, and measures is stored. Important note Import versus DirectQuery is a trade-off. Import gives you the best query performance while needing data refresh management and potentially not having the latest data available. DirectQuery can get you the latest data and allow you to have data sizes beyond Power BI’s model size limits. DirectQuery sacrifices some queries and can add optimization work to the source system. Direct Lake offers current data but relies on the storage and compute engines of OneLake for data retrieval. 18 Exploring Power BI Architecture and Configuration Here are the main reasons why you would use DirectQuery mode: Large data volumes: A model published to a workspace in a capacity can vary in size limit according to the available memory capacity. So, a P1 is 25 GB while a P5 is 400 GB. The 10 GB limit is for publishing a pbix file to the service. If you have more data than this, it may be impractical or simply impossible to move it into the service and have a refresh run successfully. DirectQuery does have a 1 million row limit per query. The row limit is because Power BI is an analytical tool for aggregation, not row-level detail reporting. Near real-time access to source data: If business requirements stipulate near real-time results from the data source, then DirectQuery is the right choice between the Import and DirectQuery models. Direct Lake is also a possibility for near real-time queries. Existing data platform investment: Some organizations may already have significant investments in a data warehouse or data mart(s) that stores data in a centralized database. These already contain clean data, modeled in a form that is directly consumable by analysts and business users, and act as a single source of truth. These data sources are likely to be accessed by different reporting tools and a consistent, up-to-date view is expected across these tools. You may want to use DirectQuery here to fit into this central source of truth model and not have older copies in a Power BI model. Regulatory or compliance requirements: Laws or company policies that restrict where data can be stored and processed may require source data to remain within a specific geographical or political boundary. This is often referred to as data sovereignty. If you cannot move the data into Power BI because it would break compliance, you may be forced to use DirectQuery mode. 
Frequently changed data: If the source of data changes frequently, such as in a matter of seconds or minutes, the visuals might change aggregations after applying a filter and then returning to the non-filtered data. The user might see different results from the previous non-filtered total and believe the reports are working correctly and then not want to return to the report again. Single sign-on (SSO): This is the only security used for connecting DirectQuery to SQL Server data sources. There is no alternative in DirectQuery mode. More details about using DirectQuery mode are discussed in Chapter 5, Optimization for Storage Modes. Direct Lake mode Direct Lake is the latest capability for analyzing large data sources. Direct Lake performance is somewhere between DirectQuery and Import mode levels but leans more toward the performance of Import mode. All data is stored in Delta Table (Parquet file format) collectively in OneLake. Since the data is not imported into the in-memory structure, you achieve some of the near real-time reporting. In addition to that, the Parquet file format is a column store compressed big data structure. Just like Databricks and Synapse, the compute performance is achieved by the Spark cluster(s) engine. Here, like other Lakehouse structures, the storage is separated from the compute engine, so scaling can be increased for speed and decreased for cost savings. Understanding data connectivity and storage modes 19 The Direct Lake engine also has some caching options for storing some query results in memory for quicker access when needed. The data is based on column segments and has a temperature (hot to cold) that eventually determines what gets paged out of memory. The column segment data is compressed data held in the Vertipaq engine. The downside of Direct Lake is the semantic model must be designed in the service in a Fabric-enabled workspace as well as the data structure in Delta Table(s). A plus with this is that OneLake can share these Delta Tables with other workspaces through Shortcuts. Figure 2.7 does a comparison between Import, DirectQuery, and Direct Lake modes. The OneLake model eliminates an intermediate step that is involved with the Import and DirectQuery modes. Figure 2.7 – Comparing Import, DirectQuery, and Direct Lake If the caching of the Direct Lake query results in too much memory being requested, beyond the capacity, the DAX will not fail. The engine will default to using DirectQuery to the SQL endpoint in the lakehouse structure. Important note Unlike Import mode, which requires memory to be available in the right capacity for a full refresh, the Direct Lake structure does not require this. Since the whole semantic model for Direct Lake does not have this memory requirement, the memory limit capacity does not need to be as high as in the Import mode. Now that we have discussed the three primary storage mode options and understand the trade-offs, we recommend bearing the following considerations in mind when choosing between them: How much source data do you have and at what rate will it grow? How compressible is your source data? 20 Exploring Power BI Architecture and Configuration Is there a capacity option that allows larger Import models to be hosted? Can you move or create the data warehouse in a lakehouse in Fabric? Will a blended architecture suffice? See the following Composite models section. From purely a performance standpoint, the recommendation is Import mode. 
Composite models
Power BI does not limit you to using only a single storage mode per dataset or .pbix file. It is possible to combine one or more Import mode tables with one or more DirectQuery tables in a composite model. In a composite model, the Import and DirectQuery tables are optimized the same way you would optimize them in a strictly Import-only or strictly DirectQuery-only model. However, combined with the Aggregations feature, composite models allow you to strike a balance between report performance, data freshness, dataset size, and dataset refresh time. You will learn how to leverage aggregations in Chapter 10, Data Modeling and Row-level Security.
Live connection
Live connection is not a storage mode. With a live connection, a report connects to an already deployed data model instead of storing data itself. This could be an Analysis Services tabular model or a Power BI semantic model, and the connected model can itself use DirectQuery, Import, or Direct Lake storage mode. The idea behind this method is to create one semantic model and have multiple reports get their data from the same place, commonly called a single source of truth. Connecting this way avoids multiple .pbix files importing the same or similar data, which can cause issues with size or capacity limits. The Power BI report issues native DAX queries to the external dataset.
A live connection is used in the following scenarios:
- Creating reports from a dataset available in a Power BI workspace, from Power BI Desktop or Power BI on the web.
- Your organization has invested in Azure Analysis Services or SQL Server Analysis Services, and this is the primary central data source for Power BI reports. The top reasons for choosing this are as follows:
  - You need a high level of control around partitions, data refresh timings, scale-out, and query/refresh workload splitting
  - Integration with CI/CD or similar automation pipelines
  - Granular Analysis Services auditing and diagnostics are required
  - The initial size of the dataset cannot fit into Premium capacity
  - Previous investment in SQL Server Analysis Services and developer training for it
Figure 2.8 highlights the scenarios that use a live connection:
Figure 2.8 – Live connection scenarios
Important note
Connections to Analysis Services or Power BI semantic models also support Import mode, where data is copied and only updated when a data refresh is executed. The external Analysis Services semantic model may itself be in Import mode, so you should consider whether a live connection is indeed a better option for getting the latest data. Import can be a good choice if you are simply building lookup tables for a smaller data mart or temporary analysis (for example, a list of products or customers).
The way a report connects to its data source depends on where the report is being run. A connection from Power BI Desktop in a work office may take a completely different route than a connection initiated from the Power BI service by a person using the Power BI web portal or mobile app. When organizations need a way to secure and control communications from Power BI to their on-premises data sources (data that is not in the cloud), they deploy Power BI gateways. In the next section, we will discuss Power BI gateways, their role in data architecture optimization, and specific tips for getting the most out of gateways.
Deploying Power BI gateways
The on-premises data gateway provides a secure communications channel between on-premises data sources and various Microsoft services in the Power Platform domain. These cloud services include Power BI, Power Apps, Power Automate, Azure Analysis Services, and Azure Logic Apps. Gateways allow organizations to keep sensitive data sources on-premises, within their network boundaries, and then control how Power BI and its users can access them. The gateway is available in Enterprise, Personal, and cloud versions. The remainder of this section focuses on the Enterprise version.
When a gateway is heavily loaded or undersized, this usually means slower report loading and slower interactive experiences for users. Worse, an overloaded gateway may be unable to make more data connections, which results in failed queries and empty report visuals. What can make matters worse is that the user's first reaction is often to refresh the failed report, which adds even more load to the gateway or on-premises data source.
How gateways work
Gateways are sometimes thought of as just a networking component used to channel data. While they are indeed a component of the data pipeline, gateways do more than just allow data movement. The gateway hosts Power BI's Mashup Engine and supports Import, DirectQuery, and live connections. The gateway service must be installed on a physical or virtual server. It is important to know that the gateway executes Power Query (M) as needed, performing the processing locally on the gateway machine. In addition, the gateway compresses and encrypts the data streams it sends to the Power BI service. This design minimizes the amount of data sent to the cloud, reducing refresh and query durations. However, since the gateway supports such broad connectivity and performs potentially expensive processing, it is important to configure and scale gateway machines so they perform well.
Figure 2.9 shows a simplified view of the gateway architecture. The dotted line marks the layers of processing that are hidden from a normal user.
Figure 2.9 – The on-premises gateway performs mashup processing locally
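Because this mashup processing happens on the gateway host itself, it is useful to be able to confirm quickly that the gateway service is healthy and to see what resources the machine has. The following is a minimal PowerShell sketch to run on the gateway server; PBIEgwService is the usual default service name for the on-premises data gateway and may differ in your installation.

# Hedged sketch: confirm the gateway service is running and show the host resources.
# 'PBIEgwService' is the usual default service name - adjust it if yours differs.
Get-Service -Name 'PBIEgwService' -ErrorAction SilentlyContinue |
    Select-Object Name, Status, StartType

# Logical processors and physical memory of the machine hosting the gateway
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
'{0} logical processors, {1:N0} GB RAM' -f $cs.NumberOfLogicalProcessors, ($cs.TotalPhysicalMemory / 1GB)

If the service is stopped, or the machine falls short of the minimum specification discussed later under Sizing gateways, that is the first thing to address before any deeper tuning.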
Good practices for gateway performance
Some general guidelines should be applied whenever gateways are deployed. We will discuss each one in the following list and explain how the design benefits you:
- Place gateways close to data sources: Gateways should be as physically close to the data source as possible. Physical distance adds network latency because of the extra infrastructure and network paths involved; we want to remove as many hops as possible. For data sources on virtual machines in the cloud, try to place them in the same region as your Power BI home region.
- Remove network throttling: Some network firewalls or proxies may be configured to throttle connections to optimize internet connectivity. This can slow down transfers through the gateway, so it is a good idea to check this with your network administrators.
- Avoid running other applications or services on the gateway: This ensures that loads from other applications cannot unpredictably impact queries and users. This rule can be relaxed for development environments.
- Separate DirectQuery and scheduled refresh gateways: Import mode connections are only used during data refresh operations, and these often run after hours, when data refreshes are scheduled. Because they often contain Power Query (M) data transformations, refresh operations consume both CPU and memory, and complex operations on large datasets may require significant amounts of each. For DirectQuery connections, the gateway acts as a pass-through for query results from a data source. DirectQuery connections generally consume much less CPU and memory than Import mode, although significant CPU bursts can occur for datasets with lots of transformations and calculations. By separating DirectQuery and Import datasets onto different gateways, each VM or physical server can be sized with the appropriate resources.
- Use sufficient and fast local storage: The gateway server buffers data on disk before it sends it to the cloud. It is saved to the %LOCALAPPDATA%\Microsoft\On-premises data gateway\Spooler location. If you are refreshing large datasets in parallel, ensure that you have enough local storage to temporarily host those datasets. We highly recommend high-speed, low-latency storage options such as solid-state disks to avoid storage becoming a bottleneck. A short check of this folder appears after the sizing questions later in this section.
- Understand gateway parallelism limits: The gateway automatically configures itself with reasonable default values for parallel operations based on the CPU cores available. We recommend monitoring the gateway. If you are advised to change parallelism settings, consult Microsoft first, because these settings are advanced and depend heavily on the server.
Sizing gateways
Most organizations start with a single gateway server and then scale up and/or out based on their real-world data needs. It is very important to follow the minimum specifications suggested by Microsoft for a production gateway. At the time of writing, Microsoft recommends a machine with at least 8 CPU cores, 8 GB of RAM, and multiple gigabit network adapters. Regular monitoring is recommended to understand what load patterns the gateway experiences and which resources are under pressure. We will cover monitoring later in this chapter.
We have already learned that the gateway supports different connection types. The type and number of connections will largely determine resource usage on the gateway server. Therefore, you should keep the following questions in mind when planning a gateway deployment:
- How many concurrent dataset refreshes will the gateway need to support?
- How much data is going to be transferred during the refresh? Is the refresh performing complex transformations?
- How many users will hit a DirectQuery source in parallel?
- How many visuals are in the most used DirectQuery reports? Each data-driven visual will generate at least one query to the data source.
- How many reports use Automatic Page Refresh and what is the refresh frequency?
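Alongside these questions, it is worth checking how much data the gateway is currently buffering in its spooler folder and how much local disk remains, since slow or full storage shows up as slow refreshes. The following is a minimal PowerShell sketch using the default spooler location mentioned earlier; if the gateway runs under a dedicated service account, the folder will sit under that account's local application data instead, so adjust the path accordingly.

# Hedged sketch: measure the gateway spooler folder and the free space on its drive.
# The path below is the default location; change it if your gateway buffers elsewhere.
$spooler = Join-Path $env:LOCALAPPDATA 'Microsoft\On-premises data gateway\Spooler'

if (Test-Path $spooler) {
    $files  = Get-ChildItem -Path $spooler -Recurse -File -ErrorAction SilentlyContinue
    $sizeGB = ($files | Measure-Object -Property Length -Sum).Sum / 1GB
    $freeGB = (Get-Item $spooler).PSDrive.Free / 1GB
    '{0:N2} GB buffered in the spooler, {1:N2} GB free on the drive' -f $sizeGB, $freeGB
}
else {
    "Spooler folder not found at $spooler - check the path for your installation"
}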
In the next section, we will look at how to monitor a gateway and gather data to inform sizing and scaling to ensure consistent performance.
Configuring gateway performance logging
The on-premises gateway has performance logging enabled by default. Two types of logs are captured – query executions and system counters. The gateway configuration file has many options that can fine-tune an individual gateway. Administrative access is required to look at the raw logs, and they can also be exported from the gateway service application.
Figure 2.10 shows the diagnostic setup and configuration for log files in the gateway.
Figure 2.10 – The gateway application Diagnostics screen to export logs
Behaviors such as turning the logs on and off are controlled in the configuration file by the DisableSystemCounterReport setting. Other settings include ReportFileCount and ReportFileSizeInBytes, as well as QueryExecutionAggregationTimeInMinutes. Because the gateway application is updated monthly, it is better to reference Microsoft's online help than to list the options and values in this book; the defaults, the suggested ranges, and even the types of log files can change frequently.
When logging is enabled, you will start to collect information in four sets of files with the .log extension and numerical suffixes in the filename. The log file groups are described in the following list and explained in more detail in the Microsoft documentation for monitoring and optimizing gateways – https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-performance:
- Query execution report: These logs contain detailed information on every query execution. They tell you whether the query was successful, the data source information, the type of query, how long was spent executing the query and processing data, how long it took to write data to disk, how much data was written, and the average speed of the disk operations. This information can be used to work out where bottlenecks are at a query level.
- Query start report: These are simpler query logs that provide the actual query text, data source information, and when the query started. You can see the exact query that was sent to the data source, which can be useful for performance troubleshooting, especially with DirectQuery data sources.
- Query execution aggregation report: These logs contain aggregated query information in buckets of 5 minutes by default. They provide useful summary information such as the number of queries within the time window, the average/minimum/maximum query execution duration, and the average/minimum/maximum data processing duration.
- System counter aggregation report: This log contains aggregated system resource information from the gateway server. It aggregates average/minimum/maximum CPU and memory usage for the gateway machine, the gateway service, and the mashup engine.
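Once these reports are being written, you will usually want to copy them off the gateway machine on a schedule so they can be analyzed centrally. The following is a minimal PowerShell sketch; both the log folder and the destination share are assumptions and must be pointed at the locations used in your environment.

# Hedged sketch: copy gateway performance log files to a central share for analysis.
# Both paths are assumptions - substitute the report folder configured on your
# gateway and a share your monitoring process can read.
$logFolder   = 'C:\GatewayReports'
$destination = '\\monitoring\gatewaylogs\' + $env:COMPUTERNAME

if (-not (Test-Path $destination)) {
    New-Item -ItemType Directory -Path $destination | Out-Null
}

# Skip files modified in the last hour so we do not copy a report that is still being written
Get-ChildItem -Path $logFolder -Filter '*.log' -ErrorAction SilentlyContinue |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddHours(-1) } |
    Copy-Item -Destination $destination

A script like this can be scheduled with Task Scheduler and pointed at the same folder that the Microsoft report template (covered next) scans.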
Parsing and modeling gateway logs
Microsoft has provided a basic Power BI report template to help you analyze gateway data. The template can be found at https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-performance. It will scan your log folder and process all the files it finds that match the default naming pattern, parsing and expanding complex columns such as JSON.
Figure 2.11 demonstrates one of the default views in the gateway performance template:
Figure 2.11 – Example of gateway performance visualization from the template
The Microsoft-provided template does a reasonable job of giving you visibility of some aggregate and detailed operations on the gateway. However, to extract further value from it, you will likely need to make some changes to the transformations, data model, and calculations. This could take some work to perfect, so it may be worth considering whether a pre-built option is feasible. If your organization uses Microsoft Premier or Unified support, you may have access to Power BI performance assessments. These are run by experienced customer engineers who have enhanced templates for analyzing gateway logs. Another option is to engage consultants who have a professional solution on the market.
If you choose to build on the Microsoft template yourself, do consider the following improvements:
- Examine the Power Query transformations in the template to see how the log files are parsed.
- Automate the retrieval and storage of logs from the gateway server, for example, with PowerShell scripts such as the sketch shown earlier.
- Build separate date and time dimensions and connect them to all the log tables so that you can build reports that look at time-correlated activity across every log.
- Build common dimension tables for Query Status, Query Type, and Data Source from the log files and connect them to each log table. This will allow you to slice report pages using the same filter across different logs.
- Add a dimension table containing details of all your gateways, such as the environment, gateway ID, name, memory size, and CPU core count. Use the gateway ID to connect it to the log fact tables.
- Build report views that focus on trends and aggregates to highlight spikes in CPU or memory while still being able to distinguish between DirectQuery and refresh queries. Further details are provided in the next section.
Next, we'll look at analyzing gateway logs.
Analyzing gateway logs
We suggest that the initial views you build on gateway logs should help you answer high-level questions and spot problem areas quickly. Here are some important questions you should be able to answer:
- Are there any spikes in overall gateway resource usage, and do the spikes recur regularly?
- When I reach high or maximum resource usage, what is the workload pattern?
- Which datasets, dataflows, or reports consume the most gateway resources?
- What is the gateway throughput in terms of queries per second and bytes processed per second?
- When I see throughput drops, what operations were running in that time slice, and which contributed most from a resource perspective?
- Is the gateway performing many refresh and DirectQuery operations in parallel? This is likely to create pressure on CPU and memory at the same time, so consider dedicated DirectQuery and refresh gateways, spreading out refresh operations, and scaling.
- What is the average query duration over time, and what contributes to increases – gateway resource limits or growing data volume and query complexity?
- What are the slowest queries? Are they consistently slow, or does their performance vary greatly? The former may suggest query or model design issues, or that optimization is needed at the data source or even the network. Varying performance of the same queries suggests unpredictable loads on the gateway or data source are the issue.
Next, we will look at when you should consider scaling and how to do so.
Scaling up gateways
It is possible to manage a gateway well but still begin to reach resource limits due to data and usage growth. Scaling up simply means adding more resources or replacing them with faster components. You know it is time to scale when your analysis shows you are hitting a memory, CPU, or disk limit and have no more room to improve refresh schedules or optimize other layers of the solution. We will cover such optimizations in detail in subsequent chapters. For now, let's assume that the deployed solutions are perfect, yet you are seeing performance degradation and an increase in query failures caused by excessive loads.
The first choice here should be to scale up. You may choose to increase CPU cores and memory independently if your analysis identified only one of them as the problem and you see enough headroom in the other. While CPU and memory are the common candidates for scaling up, keep an eye on disk and network performance too; you may need to scale those up as well, or scale out if that is not an option.
Scaling out with multiple gateways
When you can no longer effectively scale up a single gateway machine, you should consider adding one or more nodes to the cluster. When an enterprise gateway is installed, a cluster is automatically created with that initial node. A cluster allows you to load balance across more than one gateway machine, each referred to as a node. Clusters also provide high availability through redundancy in case one machine goes down for whatever reason.
To create a gateway cluster, you simply run the gateway installer on a different server. At installation time, you are given the option of connecting the gateway to an existing gateway cluster, which acts as the primary instance. This is shown in the following figure.
Figure 2.12 – Adding a gateway to a cluster by selecting the primary instance
All requests are routed to the primary instance of a gateway cluster. A request is routed to another gateway instance in the cluster only if the primary gateway instance is offline.
Tip
If a gateway member server goes down, you should remove it from the cluster using the Remove-OnPremisesDataGateway PowerShell command. If you do not, query requests may still be sent to it, which can reduce performance.
Load balancing on the gateway is random by default. You can change this to balance the load based on CPU or memory thresholds. When a member is at or over the throttling limit, another member within the cluster is selected instead; the request fails only if all members within the cluster are above the limits. A gateway admin must update settings in the config file introduced earlier. The following settings can be adjusted to control load balancing (a short sketch for inspecting them follows the list):
- CPUUtilizationPercentageThreshold: A value between 0 and 100 that sets the throttling limit for CPU. 0 means the setting is disabled.
- MemoryUtilizationPercentageThreshold: A value between 0 and 100 that sets the throttling limit for memory. 0 means the setting is disabled.
- ResourceUtilizationAggregationPeriodInMinutes: The time window, in minutes, over which the CPU and memory system counters of the gateway machine are aggregated. These aggregates are compared against the thresholds defined above. The default value is 5.
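To check how a gateway is currently configured, you can look for these settings directly in the configuration file. The following is a minimal PowerShell sketch; the file path shown is the typical install location and should be verified on your server, and changes to the file typically require a restart of the gateway service to take effect.

# Hedged sketch: show the current load-balancing settings in the gateway config file.
# The path is the typical install location - verify it on your gateway server and
# back the file up before editing anything.
$configPath = 'C:\Program Files\On-premises data gateway\Microsoft.PowerBI.DataMovement.Pipeline.GatewayCore.dll.config'

$settings = 'CPUUtilizationPercentageThreshold',
            'MemoryUtilizationPercentageThreshold',
            'ResourceUtilizationAggregationPeriodInMinutes'

# Print each line of the file that mentions one of the settings, plus one line of
# trailing context so the configured value is visible.
Select-String -Path $configPath -Pattern $settings -SimpleMatch -Context 0,1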
Now that we have a good grasp of storage modes and gateway optimization, we will consider broader factors that come into play and can slow down operations in these areas.
General architectural guidance
This section presents general architectural considerations for using Power BI.
Capacities
There are different capacities for Power BI architecture. They can be grouped under two categories – shared and dedicated:
- Shared capacity: A Power BI Pro license or Premium Per User (PPU) license not in a tenant with a Premium or Fabric capacity
- Dedicated capacity: Premium, Fabric, and Embedded capacities purchased through a subscription
Shared capacities
The Power BI Pro and PPU licenses allow individuals to publish datasets and reports to shared capacity. You share this capacity with other Pro and PPU users, and there is no administration for controlling or adjusting the resources available to your reports and datasets. Limits on dataset import sizes and on the number of refreshes per day (8) are tighter than in dedicated capacity. To ensure everyone on shared capacity plays well together, certain limits and throttling are applied behind the scenes.
Dedicated capacity
Dedicated capacity is a Premium, Embedded, or Fabric subscription. This capacity is reserved for a single organization. In the admin portal, there are tenant settings as well as capacity adjustments to help match resources to the workload's needs. Dedicated capacity offers features such as more refreshes per day and larger refresh sizes than shared capacity. Each capacity node is like an individual virtual machine purchased for that capacity, with a set number of vCPUs and a fixed amount of memory.
Figure 2.13 shows the capacity administration screen and the option to manage autoscale for this capacity.
Figure 2.13 – Capacity administration
Another option for capacities is the ability to split capacity node(s) into separate capacities within the company tenant. For example, a P2 capacity can be purchased and split into development and test capacities, with different security groups managing each one. Additionally, an A1 or higher SKU can be created in your Azure environment and assigned to the Autoscale option. Figure 2.14 shows the Enable Autoscale option.
Figure 2.14 – Autoscale settings
Autoscaling enables an additional set of compute to be turned on and used while a large load is running, to prevent a timeout. The autoscale function turns the additional compute off once the load drops back below the Premium capacity and a configurable amount of time has elapsed. This is configured in the Manage Autoscale settings. The Embedded and Fabric subscriptions can have their capacities paused, which allows pay-as-you-go billing, whereas Premium has a fixed cost per month. Fabric allows a fixed monthly cost as well as the ability to be pay-as-you-go.
Power BI Report Server
The Power BI Report Server license is for a version of SQL Server Reporting Services with Power BI report support that can be installed on a physical or virtual server. Its limits are the resources of that physical or virtual server. Updates to this version are released every three to five months, and some features, such as AutoML, AI functions, and dashboards, are not available. Tuning is the job of an administrator responsible for the infrastructure's CPU, memory, and disk speeds.
Planning data and cache refresh schedules
A sometimes-overlooked consideration is how fresh an Import dataset's sources are. There is no point refreshing a dataset multiple times a day if it relies on an external data mart that is only refreshed nightly. This adds unnecessary load on data sources and the Power BI service.
Look at your environment to see when refresh operations are happening and how long they take. If many are happening in parallel, this could slow down other operations due to intense CPU and memory usage. The effect can be larger with Power BI Premium. Consider working with dataset owners to remove unnecessary refreshes or to change timings so that refreshes do not all occur at the same time but are staggered instead. A data refresh in progress can require as much additional memory as the dataset itself – sometimes more if the transformations are complex or inefficient. A general rule of thumb is that a refreshing dataset consumes twice the memory.
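To see why staggering matters, the following is a small PowerShell sketch that applies this rule of thumb to a handful of example dataset sizes; the dataset names and sizes are illustrative values, not measurements, so substitute your own.

# Hedged sketch: rough estimate of the extra memory needed while refreshes run,
# based on the rule of thumb that a refresh can need as much additional memory as
# the dataset itself. The dataset names and sizes are example values only.
$datasetSizesGB = @{
    'Sales'     = 4.0
    'Inventory' = 2.5
    'Finance'   = 1.5
}

# Additional memory over and above the copies already held in memory
$parallelExtra  = ($datasetSizesGB.Values | Measure-Object -Sum).Sum
$staggeredExtra = ($datasetSizesGB.Values | Measure-Object -Maximum).Maximum

'Extra memory if all refresh in parallel: ~{0} GB' -f $parallelExtra
'Extra memory if refreshed one at a time: ~{0} GB' -f $staggeredExtra

With these example numbers, refreshing all three datasets together needs roughly 8 GB of headroom on top of the hosted models, whereas staggering them keeps the peak closer to 4 GB, which is why spreading refreshes out is often the cheapest optimization available.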
Reducing network latency
In an earlier section, we discussed how reducing the physical distance and the number of hops between data sources helps to reduce network latency. Here are additional considerations:
- Co-locate your data sources, gateways, and other services as much as possible, at least for production. If you rely on Azure, for example, it is recommended to use the same Azure region as your Power BI home tenant region.
- Consider a cloud replica of on-premises data sources. This incurs some cloud costs but can significantly reduce latency for Power BI if the cloud region is far from the on-premises data center.
- If your data is in the cloud, consider performing Power BI development through a remote desktop connection to cloud virtual machines. Those virtual machines should ideally be in the same region as the data sources.
- Use Azure ExpressRoute to have a secure, dedicated, high-speed connection from your on-premises network to the Azure cloud.
Now that you have a good understanding of the architectural choices that affect performance in Power BI, let's summarize what we've learned before we explore the next area of performance in Power BI.
Summary
In this chapter, we saw how the different storage modes in Power BI work. Import mode datasets create an in-memory data cache in the Power BI service. DirectQuery mode datasets pass queries through to the external data sources. Direct Lake allows connections to Delta table structures in a Fabric lakehouse or warehouse. Generally, Import mode is the fastest because the data is local to Power BI, held in memory in a compressed, column-based analytical store. However, DirectQuery mode provides a way to always have the latest data returned from the source and avoids managing refreshes. Fabric's Direct Lake sits somewhere in between. There is a trade-off between all three options. There is also the composite model for when the data is large but you can provide some aggregate tables in Import mode.
You also learned about the role of on-premises gateways in allowing enterprises to connect Power BI securely to on-premises data sources. Gateways host Power BI's mashup engine, where data transformations are performed locally. These can be resource-hungry, especially with hundreds or thousands of users, which can translate into many connections per second. This means gateways need to be sized, monitored, and scaled. Hence, we looked at the high-level questions that should be asked, for example, about simultaneous refreshes or user counts. An introduction to gateway logs led to the Microsoft-provided template for monitoring them, and we reviewed the patterns that indicate whether a scale-up or scale-out approach is needed. The last section looked at other performance factors such as parallel activity, refresh scheduling, and the capacity options available through subscription.
The next chapter will start to review the different free tools to use for performance tuning and management. These tools will be used throughout the book to help with best practices in Power BI performance tuning.
3 Learning the Tools for Performance Tuning
Until now, we have looked at Power BI performance from a relatively high level. You have learned which areas of Power BI performance can be impacted by your design decisions and what to consider when making these choices.
Those decisions were architectural: they were about choosing the right components to move data efficiently while meeting your data volume and freshness requirements. However, this knowledge alone is not sufficient and will not guarantee good performance. With the gateways in the previous chapter, we saw how a single component of the solution can be configured and optimized quite heavily. This applies to most of the other areas of Power BI, so now we will begin to look at performance tools that can help us pinpoint issues.
In Chapter 2, Exploring Power BI Architecture and Configuration, we looked at storage modes for Power BI datasets and learned about Import, DirectQuery (DQ), and Direct Lake (DL). For Power BI reports, we often need to know whether visuals, queries, or combinations thereof are slow. Using these tools can help with slow model calculations, slow reports, or slow model refreshes. Some of this granularity is not available from the Power BI service in production at the time of writing. However, you can get much more granular performance information usi
