
PL-300 Exam – Free Actual Q&As, Page 1 _ ExamTopics_answers 1.pdf



Topic 1 - Question Set 1

Question #1 (Topic 1) - HOTSPOT
You plan to create the Power BI model shown in the exhibit. (Click the Exhibit tab.) The data has the following refresh requirements:
- Customer must be refreshed daily.
- Date must be refreshed once every three years.
- Sales must be refreshed in near real time.
- SalesAggregate must be refreshed once per week.
You need to select the storage modes for the tables. The solution must meet the following requirements:
- Minimize the load times of visuals.
- Ensure that the data is loaded to the model based on the refresh requirements.
Which storage mode should you select for each table? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Discussion:
- _Jay_ (Highly Voted): Technically yes, correct. Dual (composite) mode sits between Import and DirectQuery. It is a hybrid approach: like Import, dual mode caches the data in the table, but it leaves it to Power BI to determine the best way to query the table depending on the query context. 1) Sales must be refreshed in near real time, so DirectQuery. 2) SalesAggregate is refreshed once per week, so Import (performance is also required). 3) Date and Customer both have relationships with Sales and SalesAggregate, so Dual, to support performance for both DirectQuery (Sales) and Import (SalesAggregate). [68 upvotes]
  - GuerreiroJunior: Makes sense, thank you so much Jay. [2]
  - disndat7: Agreed with this approach. [2]
  - Deeku: Makes sense. [1]
  - Dovoto: Makes sense, thanks. [2]
- g0 (Most Recent): I think this is a terrible question; plenty more information needs to be provided. I would always opt for Import for everything except Sales, which should be DirectQuery. [2]
- UserNo1: Is this correct? The tables should be set to Dual since they will interact with a DirectQuery table (Sales), for query-optimization reasons. [1]
- souvikpoddersm: Can anybody share all the questions and answers, please? [1]
- lukelin08: Seems correct. [1]
- YourExams: Hi, how do I review all the questions? I don't have access to review the case studies. [1]
- Raza12: I think the given answer is right. Please check the link and confirm back for everyone else: https://docs.microsoft.com/en-us/power-bi/transform-model/desktop-storage-mode [1]
- Namenick10: Customer: Dual; Date: Dual; Sales: DirectQuery; SalesAggregate: Import. [2]
- Churato: DirectQuery, Dual, Dual, Import. [1]
- Manzy2599: This answer is misleading. Not sure why it's showing Dual; the exact same question on skillcertpro had a different answer. [4]
- zerone72: @Elena3061, you should use Dual instead of DirectQuery because of their relationships with both an Import-mode table and a DirectQuery-mode table. See _Jay_'s post and the "Limited relationships" section of https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-relationships-understand [3]
  - Elena3061: Thank you! [1]
- Krish_456: Can anyone help me get all the questions and answers? I will pay for contributor access; I need help with that process. [1]
  - Fahim_88: Hey, if you want to buy shared contributor access, let me know. [1]
- GGrace: Find more here: https://docs.microsoft.com/en-us/power-bi/transform-model/desktop-storage-mode [2]
- Elena3061: Can someone explain why we should use Dual instead of DirectQuery for the Date and Customer tables? Thank you. [2]
- gabrysr1997: Is the answer correct? [2]

Question #2 (Topic 1)
You have a project management app that is fully hosted in Microsoft Teams. The app was developed by using Microsoft Power Apps. You need to create a Power BI report that connects to the project management app. Which connector should you select?
A. Microsoft Teams Personal Analytics
B. SQL Server database
C. Dataverse
D. Dataflows

Discussion:
- Abasifreke (Highly Voted): You can use the Microsoft Power BI template to import data into Power BI from Project for the web and Project Online. When you're using the template, you're connected to your Microsoft Dataverse instance, where your Microsoft Project web app data is stored. https://support.microsoft.com/en-us/office/use-power-bi-desktop-to-connect-with-your-project-data-df4ccca1-68e9-418c-9d0f-022ac05249a2 [10]
- lukelin08 (Most Recent): Selected answer: C. Dataverse is correct. [3]
- Churato: Dataverse. [1]
  - Churato: https://learn.microsoft.com/en-us/power-apps/teams/overview-data-platform [1]
- Lkra1: Selected answer: C. It is C, 100%. [1]
- Nurgul: Selected answer: C. I think it's Dataverse. [1]
- eckip: Selected answer: C. Correct. [3]

Question #3 (Topic 1)
For the sales department at your company, you publish a Power BI report that imports data from a Microsoft Excel file located in a Microsoft SharePoint folder. The data model contains several measures. You need to create a Power BI report from the existing data. The solution must minimize development effort. Which type of data source should you use?
A. Power BI dataset
B. a SharePoint folder
C. Power BI dataflows
D. an Excel workbook

Discussion:
- Nomios (Highly Voted): It should be dataset, because the case states a report is already published and the data model contains measures. To be able to reuse the measures in the data model, you should connect to the existing dataset (which was created when you published the report) instead of starting from scratch with the files in the SharePoint folder. [28]
  - Hoeishetmogelijk: The question is confusing because it doesn't state clearly that there are two reports. The second report can reuse the dataset of the first one. [3]
    - bbshu0801: Yes, I think so. [1]
  - NatRob: After reading the question multiple times, the biggest takeaway is that it asks directly for data. A SharePoint folder HOLDS data, but it is not data itself. I agree with this and think it's the existing dataset. [1]
- rashjan (Highly Voted): Selected answer: A. Reuse the existing dataset. [11]
- BabaJee (Most Recent): Selected answer: A. Using the existing dataset to minimize effort makes sense. [1]
- Astroid_1994: This question ought to be phrased like this: "You create a Power BI report for your company's sales division that imports data from a Microsoft Excel file housed in a Microsoft SharePoint folder. The data model includes a number of measures. A Power BI report must be made from the current data. The solution must reduce the amount of development work. Which kind of data source ought you to employ?" [1]
- MBA_1990: Selected answer: A. Reuse of existing data. [1]
- VinayKadaya: I guess the first line, "for the sales department", has an implication here. The dataset in the Power BI report published for the sales team has measures specific to them. Since the objective and end users of the second Power BI report are not stated, it would imply we have to obtain data without any filters (which may exist in the first dataset). [2]
- PsgFe: Minimize the effort by using the measures already created. Use the dataset. [1]
- Dr_Do: Selected answer: A. Several measures to be reused, so dataset. [2]
- Hoeishetmogelijk: Selected answer: A. I read this case like this: 1. You publish a Power BI report with its dataset, including measures. 2. You create a second Power BI report using the existing data with minimal effort. So my conclusion would be to reuse the dataset that was published together with the first report. Answer A. [2]
- Shiiton: SharePoint! A Power BI dataset wouldn't allow using some features, and the question doesn't state whether those would be necessary or not. [1]
- chiwilo (translated from Spanish): The correct answer is a dataset. [1]
- albi123: The answer is A. [1]
- Churato: "...you publish a Power BI report..." and "...the solution must minimize development effort." So the solution is A. [1]
- Churato: Selected answer: A. There's a dataset already used once. [1]
- xxfangxx: Selected answer: A. "You need to create a Power BI report from the existing data." The keywords here are "EXISTING DATA". [2]
- lukelin08: Selected answer: A. We can use the existing dataset: it contains the measures, and the requirement was least development effort. It is entirely possible to use an existing dataset to create a new report in the Power BI service. [1]
- Lkra1: Selected answer: B. It's a SharePoint folder; we are specifically told the data is in a SharePoint location. [2]
  - Dovoto: Yes, the Microsoft Excel file is in a SharePoint location. But we are asked to "create a Power BI report from the existing data. The solution must minimize development effort." So we can reuse the existing dataset. [2]

Question #4 (Topic 1)
You import two Microsoft Excel tables named Customer and Address into Power Query. Customer contains the following columns:
- Customer ID
- Customer Name
- Phone
- Email Address
- Address ID
Address contains the following columns:
- Address ID
- Address Line 1
- Address Line 2
- City
- State/Region
- Country
- Postal Code
Each Customer ID represents a unique customer in the Customer table. Each Address ID represents a unique address in the Address table. You need to create a query that has one row per customer. Each row must contain City, State/Region, and Country for each customer. What should you do?
A. Merge the Customer and Address tables.
B. Group the Customer and Address tables by the Address ID column.
C. Transpose the Customer and Address tables.
D. Append the Customer and Address tables.

Discussion:
- LokeshJ: A is correct; merge the tables. [1]
- PsgFe: The Customer table has the Address ID foreign-key field. To create a query that has one row per customer, merge the Customer and Address tables. [1]
- lukelin08: Selected answer: A. A is correct; we merge the tables. [2]
- Pauwels: Merge, because we are adding more columns to the Customer table. [3]
- samad1234: A is correct. [3]
- mannerism: Remember: Merge is JOIN, Append is UNION. [4]
- Nurgul: Selected answer: A. A is correct; we merge the two tables using Address ID. [2]
- ns_guy: Selected answer: A. A is correct; transposing just re-orients the data, and appending would stack the tables rather than create the combined records you need. [3]
- OGESSIUSER: Selected answer: A. Merge the Customer and Address tables. [3]
- eckip: Selected answer: A. A is correct. [1]
- mrspeket: A. Merge the Customer and Address tables: Merge Queries by Address ID, expand, and choose City, State/Region, and Country. [2]

Question #5 (Topic 1) - HOTSPOT
You have two Azure SQL databases that contain the same tables and columns. For each database, you create a query that retrieves data from a table named Customer. You need to combine the Customer tables into a single table. The solution must minimize the size of the data model and support scheduled refresh in powerbi.com. What should you do?
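Questions #4 and #5 both hinge on the distinction mannerism summarizes below as "Merge is JOIN, Append is UNION". Power Query would express these with Merge Queries and Append Queries (M functions such as Table.NestedJoin and Table.Combine); the plain-Python sketch below, using hypothetical customer data, only illustrates the relational semantics behind the two operations:

```python
# Hypothetical data mirroring the column names in Question #4.
customers = [
    {"Customer ID": 1, "Customer Name": "Contoso", "Address ID": 10},
    {"Customer ID": 2, "Customer Name": "Fabrikam", "Address ID": 11},
]
addresses = {
    10: {"City": "Seattle", "State/Region": "WA", "Country": "USA"},
    11: {"City": "Oslo", "State/Region": "Oslo", "Country": "Norway"},
}

# Merge (a join): look up each customer's Address ID and expand the
# City / State/Region / Country columns -- one row per customer, as Q4 asks.
merged = [{**c, **addresses[c["Address ID"]]} for c in customers]

# Append (a union): stack two queries with identical columns, as in Q5.
customers_db2 = [
    {"Customer ID": 3, "Customer Name": "Adventure Works", "Address ID": 12},
]
appended = customers + customers_db2

print(merged[0]["City"])  # Seattle
print(len(appended))      # 3
```

Merge widens the table (adds columns from a related table); Append lengthens it (adds rows from a table with the same shape), which is why Q4 needs a merge and Q5 an append.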
To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Discussion:
- Namenick10 (Highly Voted): Correct: Append Queries as New, then disable loading the query to the data model. [18]
- lukelin08 (Highly Voted): The answer is correct. However, plain Append is also valid; it's just that, given the two answer boxes and the need for an answer in each, the first box must be "Append Queries as New". https://community.powerbi.com/t5/Power-Query/Append-vs-Append-as-new-for-performance/td-p/1822710 [9]
  - PinkZebra: Agreed. [1]
- Debs23 (Most Recent): Since the combined query depends on the original two queries, won't "disable loading the query to the data model" also prevent the combined query from being refreshed with new data? Experimenting with one of my dashboards, I saw that disabling load on a table prevents that table, and tables using it as a source, from getting new data. Please clarify. [1]
- iccent2: What happens if we decide to delete the original two queries? [1]
  - Xikta: The data cannot refresh. [1]
- md_sultan: If you select plain Append, then for the second part you would need to disable loading on both tables, so the model would contain no table at all. For this reason the first answer has to be Append as New: you then have three queries overall, and the initial two tables can be deleted. [2]
- Patrick666: Append Queries as New; disable loading the query to the data model. [1]
- Hoeishetmogelijk: I agree with "disable loading the query to the data model", but I am sure it should be "Append Queries" rather than "Append Queries as New". Reading the page below, "Append Queries" can be used for two tables, and "Append Queries as New" must be used for three or more tables. https://learn.microsoft.com/en-us/power-query/append-queries [3]
  - lukelin08: I suggest testing it. I have, and you can "Append Queries as New" with only two tables. This is necessary given the two-part question, where you have to disable loading to the data model on the two original queries. [4]
    - Hoeishetmogelijk: Ah, I see what you mean. The catch is the phrase "action to perform on the original TWO queries" in front of the second answer box. When both original queries are not loaded into the data model, one must be created with "Append Queries as NEW" that will be loaded into the model. [1]
- Pocu: For this case I would say merging the two tables is actually better, because there might be many duplicated customer records. Appending doesn't really make sense because it would cause data duplication, though the comparison is not easy because the customer ID might differ for the same customer in the two databases. Correct me if wrong. [1]
  - Hoeishetmogelijk: I'm afraid you are thinking of a SQL merge here. The naming differences are indeed confusing: Power BI Append = SQL UNION ALL; Power BI Merge = SQL JOIN. https://community.powerbi.com/t5/Data-Stories-Gallery/Merge-Vs-Append-Concepts-in-Power-BI-Power-Query/m-p/1729808 [2]
- samad1234: 1. Append Queries as New. 2. Disable loading the query to the data model. [5]
- MauDV: I'd say plain Append Queries for the first box; appending as new would increase the size of the model, I believe. [4]
- Nurgul: The given answer is correct. 1. Append Queries as New. 2. Disable loading the query to the data model. [2]
- Adhi_Adhi: I think we need to use Merge Queries to combine. [1]
- bmaaaata: Can someone explain why Append as New instead of just Append? Appending as new creates an additional table, which takes space. [5]
  - INDEAVR: I think the second part of the answer is the reason. When we disable loading into the model, we cannot use the original queries anymore, so that is why we append into a new query. [7]

Question #6 (Topic 1) - DRAG DROP
In Power Query Editor, you have three queries named ProductCategory, ProductSubCategory, and Product. Every Product has a ProductSubCategory. Not every ProductSubCategory has a parent ProductCategory. You need to merge the three queries into a single query. The solution must ensure the best performance in Power Query. How should you merge the tables? To answer, drag the appropriate merge types to the correct queries. Each merge type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:

Discussion:
- learnazureportal (Highly Voted): The answer is correct. [11]
- gtc108 (Highly Voted): The answer is correct: 1. Inner join. 2. Left outer join, because you want to keep everything on the left (ProductSubCategory). [7]
- Artefa8 (Most Recent): 1. Inner join. 2. Left outer join. [1]
- lukelin08: The answer seems correct: 1. Inner join. 2. Left outer join. [1]
- samad1234: 1. Inner join. 2. Left outer join. [3]
- fred92: The answer is correct: 1. Inner join. 2. Left outer join. If each row in table A has a matching row in table B, always use an inner join, because it has the best performance. [6]
  - Booster21: What does "best performance" mean here? [1]
- NevilleV: For part 1, you might have, say, 10 products each with a parent category but only 3 subcategories: products 1-3 are subcategory socks, 4-6 are shoes, and 7-9 are shirts. Sure, every product has a subcategory, but they aren't duplicates. I think the answer to part 1 is left outer. Part 2 is also left outer. [3]
  - fred92: When you inner-join tables, you get all rows from T1 and all rows from T2 that meet the join and where conditions; it is not relevant whether the cardinality is 1 or many on either side. In your example the result would be: product 1 - socks, product 2 - socks, product 3 - socks, product 4 - shoes, product 5 - shoes, and so on. [1]

Question #7 (Topic 1)
You are building a Power BI report that uses data from an Azure SQL database named erp1. You import the following tables. You need to perform the following analyses:
- Orders sold over time, including a measure of the total order value
- Orders by attributes of products sold
The solution must minimize update times when interacting with visuals in the report. What should you do first?
A. From Power Query, merge the Order Line Items query and the Products query.
B. Create a calculated column that adds a list of product categories to the Orders table by using a DAX function.
C. Calculate the count of orders per product by using a DAX function.
D. From Power Query, merge the Orders query and the Order Line Items query.
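The inner/left-outer reasoning in Question #6's accepted answer can be checked with a small plain-Python sketch (hypothetical product data, not Power Query M): an inner join is safe for Product-to-ProductSubCategory because every product matches, while the step to ProductCategory must be a left outer join so that subcategories without a parent category are not dropped.

```python
# Hypothetical data for Question #6's scenario.
products = [
    {"product": "Socks", "sub_id": 1},
    {"product": "Shoes", "sub_id": 2},
]
subcategories = [
    {"sub_id": 1, "sub": "Hosiery", "cat_id": 100},
    {"sub_id": 2, "sub": "Footwear", "cat_id": None},  # no parent category
]
categories = {100: "Apparel"}

# Inner join Product -> ProductSubCategory: every product matches,
# so no rows are lost and the cheaper join kind is fine.
subs = {s["sub_id"]: s for s in subcategories}
step1 = [{**p, **subs[p["sub_id"]]} for p in products]

# Left outer join -> ProductCategory: keep rows whose cat_id has no match,
# filling the missing category with None instead of discarding the row.
step2 = [{**row, "category": categories.get(row["cat_id"])} for row in step1]

assert len(step2) == len(products)   # nothing was dropped
assert step2[1]["category"] is None  # orphan subcategory survives
```

Had the second step been an inner join, the "Footwear" row would have vanished, which is exactly why the question's "not every ProductSubCategory has a parent" clause forces a left outer join there.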
  PinkZebra Highly Voted  3 months, 1 week ago Selected Answer: D I'm very sure it's D. It's the Header/Detail Schema, and the most optimal way is to flatten the header into the detail table. Source: https://www.sqlbi.com/articles/header-detail-vs-star-schema-models-in-tabular-and-power-bi/ upvoted 23 times   NevilleV 3 months ago D. doesn't have a common field. The answer has to be A upvoted 3 times   PinkZebra 3 months ago I agree that it's not clearly stated in the question that Order and Order Line tables have common field (for example: order ID) If there is no common fields, there is no way to implement the requirements (calculating order value from Order line). upvoted 6 times   David_Zed Highly Voted  4 months ago Selected Answer: A Should be A, because we need to get " Orders sold over time that include a measure of the total order value Orders by attributes of products sold" Order line detail for quantities ordered, and product for product's attribute upvoted 21 times   golden_retriever 1 month, 1 week ago Price is also an attribute to the product, which is present in Order line detail. The key word here is a product sold. The sold items are present only in the Order line detail. So A is INCORRECT upvoted 2 times   WZ17 1 month, 2 weeks ago I think you're forgetting about the "over time" part of the objective. You cannot show a distribution of sales over time without having a date column which does not seem to be present in Products or Order Line Items. upvoted 3 times   Legato 1 month, 1 week ago Exactly upvoted 1 times   sbilal Most Recent  4 days, 14 hours ago Selected Answer: D D seems to make more sense to me. upvoted 1 times   iccent2 2 weeks ago Let us put all assumption to rest and work with what we have. I do not see any pairing field such as Order ID just as we have Product ID. 
I think the examiners know what they are doing and i will go with option A upvoted 1 times   PsgFe 3 weeks, 1 day ago I understand that denormalizing this model a little and merging Orders and Order Line Items. This fact table relates to the Product dimension ( Star Schema.) D. In Power Query, merge the Orders query and the Order Line query upvoted 1 times   Luxtra 3 weeks, 1 day ago It seems, only the “Order” Table has Date Information, while only “Products” Table has “attributes of products sold”. Quantity of Products sold is needed from “Order Line Items”. Personally, I would merge Orders and Order Line to only retrieve the Dates, disable loading “Orders” and create a relationship between “Order Line Items” and “Products”. (Reduces data volume, as writing the entire Product Catalogue Data in every order line will create huge amounts of duplicate data). So Answer D. upvoted 1 times   AlexYang_ 4 weeks ago Selected Answer: D D is what we do first, and we can do A in the following step. upvoted 1 times   KarthikKumarK 1 month, 1 week ago Selected Answer: D The answer is D based on the question! "What should you do first?" 1st we should merge the Order & Order Lines items. Next, merge the Order line items & Product. If you feel you have clear understanding after reading above solution, Please give a upvote. Thanks, Karthik upvoted 8 times   SkullCrusher 1 month, 2 weeks ago Selected Answer: D D sounds about right, assuming the tables have a common column Order ID. Order Line items doesn't seem to have the time details. If it does, it would be self sufficient to fulfil the requirement.. upvoted 2 times   Ayush_Tiwari 1 month, 2 weeks ago D is the right answer i think because it is clearly mentioned orders sold over time so orders quantity and orders price is mentioned in order line but date will be in order table so we need to merge order and order line to get the result. 
upvoted 3 times   Hoeishetmogelijk 1 month, 2 weeks ago Selected Answer: A I think the answer is A For these two requirements: - Orders sold over time that include a measure of the total order value - Orders by attributes of products sold are only the Products and the Order Line Items tables needed. As the Order Line Items table has the column ProductID, these tables can be merged (joined) together. upvoted 3 times   Lucky_me 1 month, 3 weeks ago Selected Answer: D It doesn't ask for product information in the result and the objective is to minimize the read, (less tables), price is in order details and time is in order, D is correct upvoted 2 times   Hoeishetmogelijk 1 month, 2 weeks ago It DOES ask for product information in the result. Please read this requirement: "Orders by attributes of products sold" upvoted 1 times   andregrahamnz 1 month, 4 weeks ago This one is fairly controversial and comes down to reading skills really. As other commentators have stated there are three key concepts required; date information, order value (price*quantity), and 'attributes of products sold'. We know the date and order value are present in Order Line Items. The only question is where are we most likely to find 'attributes of products sold'. The obvious answer is already the Products table, but this is further reinforced by the fact that the answer also indicates there is a matching key between Order Details and Product. There is no such explicitly stated matching key between orders and order line items. The only correct answer to this is A. Anybody saying D has not properly absorbed and thought about the information available. upvoted 6 times   Raza12 1 month, 4 weeks ago Selected Answer: D it seems D is correct, the given explanation is also point on "D" upvoted 1 times   Raza12 2 months ago Its "D" , make sense, and already explained upvoted 1 times   Raza12 2 months ago Seems D is Correct, as Product Query have only Product information. 
upvoted 1 times   evipap 2 months ago After a lot of thought and after reading all the comments here I believe the right answer is A. The reason is that the table named Orders has only HIGH LEVEL INFORMATION. We need total order value that is available only from Order Line items Table. Thus Order table is useless. We also need attributes of products sold so Products table is necessary too. So answer is A for sure. upvoted 2 times Question #8 Topic 1 You have a Microsoft SharePoint Online site that contains several document libraries. One of the document libraries contains manufacturing reports saved as Microsoft Excel files. All the manufacturing reports have the same data structure. You need to use Power BI Desktop to load only the manufacturing reports to a table for analysis. What should you do? A. Get data from a SharePoint folder and enter the site URL Select Transform, then filter by the folder path to the manufacturing reports library. B. Get data from a SharePoint list and enter the site URL. Select Combine & Transform, then filter by the folder path to the manufacturing reports library. C. Get data from a SharePoint folder, enter the site URL, and then select Combine & Load. D. Get data from a SharePoint list, enter the site URL, and then select Combine & Load.   lukelin08 Highly Voted  3 months, 1 week ago Selected Answer: A Video explains it all https://youtu.be/XuLnSYjmsJo upvoted 15 times   lukelin08 1 month, 3 weeks ago A is correct upvoted 1 times   NevilleV 3 months ago Good tutorial! upvoted 1 times   fred92 Highly Voted  3 months, 1 week ago Selected Answer: A We have to import Excel files from SharePoint, so we need the connector SharePoint folder which is used to get access to the files stored in the library. SharePoint list is a collection of content that has rows and columns (like a table) and is used for task lists, calendars, etc. 
Since we have to filter only on manufacturing reports, we have to select Transform and then filter by the corresponding folder path. upvoted 9 times   svg10gh Most Recent  1 week ago A is correct answer. upvoted 1 times   MBA_1990 2 weeks, 2 days ago Selected Answer: A A is correct upvoted 1 times   viethoa 3 weeks, 2 days ago Selected Answer: A Answer is Get data from a SharePoint Online folder and enter the site URL. Select Combine & Transform, then filter by the folder path to the manufacturing reports library. Reference: https://www.c-sharpcorner.com/article/combine-and-transform-data-of-multiple-files-located-in-a-folder-in-power-bi/ upvoted 2 times   AlexYang_ 4 weeks ago Selected Answer: A A is correct! upvoted 1 times   Hoeishetmogelijk 1 month ago Selected Answer: C C. Get data from a SharePoint folder, enter the site URL, and then select Combine & Load. upvoted 1 times   Hoeishetmogelijk 1 month, 2 weeks ago Selected Answer: D The answer is D Once the site URL is entered, the user selects "Combine & Transform Data" or "Combine & Load". But not just "Transform". Also there is no need to filter by the folder path, because the folder path is already in the URL. See: https://learn.microsoft.com/en-us/power-query/connectors/sharepointfolder upvoted 1 times   Hoeishetmogelijk 1 month ago I mean: the answer is: C. Get data from a SharePoint folder, enter the site URL, and then select Combine & Load. Of course it is about a SharePoint FOLDER upvoted 1 times   JukMar 2 months, 1 week ago correct, A should be the correct answer upvoted 1 times   TimO_215 3 months ago Selected Answer: B I think that the answer is B. The question says that you are using Power Query Desktop. According to the documentation, you would click "Transform" if you are using Power Query Online, but you click "Combine & Transform" if you are using Power Query Desktop. 
https://learn.microsoft.com/en-us/power-query/connectors/sharepointfolder upvoted 1 times   TimO_215 3 months ago I spoke too soon, I am in agreement with A. I would delete this comment, but I can't find a way to do it. upvoted 4 times   fdsdfgxcvbdsfhshfg 3 months, 3 weeks ago Selected Answer: A We need to access the subfolders; we have to filter using Folder Path column upvoted 4 times   Manikom 3 months, 4 weeks ago Selected Answer: A I think A is correct. SharepointFolder 'combine&load' should load all files and not only Manufacturing ones so this should exclude answer C. Sharepointlist doesn't have 'Combine&Tranform', 'Combine&Load' options so this excludes answers B and D (https://docs.microsoft.com/en-us/power-query/connectors/sharepointlist) upvoted 3 times   Hoeishetmogelijk 1 month, 2 weeks ago These line clearly states that the document library only contains Manufacturing reports of the same structure: "One of the document libraries contains manufacturing reports saved as Microsoft Excel files. All the manufacturing reports have the same data structure." upvoted 1 times   alosh 4 months ago Selected Answer: A A is correct upvoted 3 times   Nomios 4 months ago See for more information: https://docs.microsoft.com/nl-nl/power-query/connectors/sharepointonlinelist upvoted 1 times   Nomios 4 months ago Answer: None are correct! From Get Data in PowerBI desktop the only options are: - SharePoint folder - SharePoint Online List - SharePoint List So this excludes answers A & C From the navigator screen you only have the options 'Load' or 'Transform Data'. 'Combine' is not an available option. So this excludes answers B, C & D upvoted 1 times   Pushliang 4 months ago A IS RIGHT. upvoted 3 times   eckip 4 months ago Selected Answer: A I tried it out. For me all answers are not completely correct. C & D: Are wrong because you have to filter somehow, otherwise you will get all files also from other document libraries. 
A: Sounds correct until you also have to combine the files after filtering. This is missing here. Otherwise it works exactly like this. B: SharePoint List connector needs many more steps. After entering the site URL, you have to do many transformation steps and I did not manage to come until the combine step. So the answer as it is here seems to be wrong for me. I would go with A. upvoted 2 times Question #9 Topic 1 DRAG DROP - You have a Microsoft Excel workbook that contains two sheets named Sheet1 and Sheet2. Sheet1 contains the following table named Table1. Sheet2 contains the following table named Table2. You need to use Power Query Editor to combine the products from Table1 and Table2 into the following table that has one column containing no duplicate values. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:   Muffinshow Highly Voted  4 months, 1 week ago Import From Excel Append Table 2 to Table 1 Remove Duplicates upvoted 117 times   cygwin 2 months ago agreed upvoted 1 times   Djibsonx7 2 months, 4 weeks ago Correct upvoted 1 times   juanceee 3 months ago Agreed, that's the correct upvoted 1 times   Sjefen 4 months ago Agreed upvoted 3 times   emmanuelkech Highly Voted  4 months, 1 week ago Import From Excel since it has not been loaded to Powerbi initially Append Table 2 to Table 1 Remove Duplicates from the table appended to (Table1) upvoted 22 times   Astroid_1994 Most Recent  2 days, 14 hours ago The two tables (1 and 2) are assumed to have been loaded into the power query editor. Considering how the question was framed. 1. From power query editor, append table 1&2 2 From power query editor, remove error (because some of the data maybe entered manually and not properly formated) 3. From power query editor, select table 1, and then select remove duplicate. 
My view of this question. upvoted 1 times
  sbilal 4 days, 14 hours ago Import from Excel, Append Table 2 to Table 1, Remove Duplicates upvoted 1 times
  svg10gh 1 week ago Import From Excel (since it has not been loaded to Power BI initially, it needs to be loaded), Append Table 2 to Table 1, then Remove Duplicates; this should be the sequence. upvoted 1 times
  GuerreiroJunior 2 weeks, 2 days ago 1st: Import from Excel Workbook, 2nd: Append Table 2 to Table 1, 3rd: Remove Duplicates. Note: We don't see any errors in the data in either of these tables. upvoted 1 times
  asaad79 3 weeks, 2 days ago Import From Excel, Append Table 2 to Table 1, Remove Duplicates upvoted 1 times
  Motivator 3 weeks, 3 days ago Import From Excel, Merge Table 2 into Table 1, Remove Duplicates. Note: Since the abc column is common to both tables, it should be a merge and not an append. I believe this is clear. upvoted 2 times
  AlexYang_ 4 weeks ago Import From Excel, Append Table 2 to Table 1, Remove Duplicates upvoted 1 times
  Patrick666 1 month ago Import From Excel (since it has not been loaded to Power BI initially), Append Table 2 to Table 1, Remove Duplicates from the table appended to (Table1) upvoted 1 times
  Pauwels 1 month, 2 weeks ago Import From Excel, Append Table 2 to Table 1, Remove Duplicates upvoted 4 times
  Analysis 1 month, 2 weeks ago I have tested the scenario: Import from Excel, Append Table2 to Table1, Remove Duplicates upvoted 2 times
  velvarga 2 months, 1 week ago Why do we have to remove errors? upvoted 3 times
  Hoeishetmogelijk 1 month, 2 weeks ago No, that is a wrong answer. upvoted 2 times
  Clodia 2 months, 3 weeks ago I need a clarification here: shouldn't we append Table1 to Table2? I know it's not an option, but the values from Table2 seem to be shown first in the final table, which means that Table1 is actually added at the end of Table2 (appended), and the Remove Duplicates action should be applied to Table2.
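For reference, the consensus sequence for this question (import from Excel, append Table2 to Table1, then remove duplicates) corresponds roughly to the following M. This is a sketch that assumes Table1 and Table2 are already loaded as queries with a single product column.

```m
// Sketch: assumes Table1 and Table2 already exist as queries
let
    // Append Table2 to the end of Table1
    Appended = Table.Combine({Table1, Table2}),
    // Remove duplicate rows, leaving one column with no duplicate values
    Deduped = Table.Distinct(Appended)
in
    Deduped
```

In the UI this is exactly Home > Append Queries followed by Remove Duplicates on the resulting column; no Remove Errors step is generated.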
upvoted 1 times   rehoboth2165 2 months ago You are very correct but the option is not in the answers, so it's best to go with append table 2 into 1. upvoted 1 times   lukelin08 3 months, 1 week ago In addition to the given answer being incorrect. When appending two tables with duplicate data (as tested) no errors are shown in the table created. So there is no need to remove errors in table (as none are presented) therefore correct answer is 1.Import data from Excel. 2.Append Table2 to Table1. 3.Select Table1 and remove duplicates Simple one to test and try out using PowerBI Desktop yourself upvoted 4 times   Nurgul 3 months, 2 weeks ago 1.Import data from Excel. 2.Append Table2 to Table1. 3.Select Table1 and remove duplicates upvoted 6 times Question #10 Topic 1 You have a CSV file that contains user complaints. The file contains a column named Logged. Logged contains the date and time each complaint occurred. The data in Logged is in the following format: 2018-12-31 at 08:59. You need to be able to analyze the complaints by the logged date and use a built-in date hierarchy. What should you do? A. Apply a transformation to extract the last 11 characters of the Logged column and set the data type of the new column to Date. B. Change the data type of the Logged column to Date. C. Split the Logged column by using at as the delimiter. D. Apply a transformation to extract the first 11 characters of the Logged column.   _Jay_ Highly Voted  4 months ago Selected Answer: C Answer C is best approach Split the Logged column by using "at" as the delimiter. upvoted 26 times   GuerreiroJunior 2 weeks, 2 days ago Agreed with you Jay upvoted 1 times   Jay_98_11 1 month, 3 weeks ago agreed upvoted 1 times   Sjefen 4 months ago Correct! 
upvoted 2 times
  hassan2383 Most Recent  2 days, 22 hours ago C: it didn't mention before or after the delimiter, so D is correct. upvoted 1 times
  kiwi69 1 week, 2 days ago Selected Answer: C Correct answer is C upvoted 1 times
  Meebler 2 weeks, 1 day ago C. You should split the Logged column by using "at" as the delimiter. This will allow you to separate the date and time into separate columns, which will enable you to analyze the complaints by date and use a built-in date hierarchy. Alternatively, you could also use a transformation to extract the date and time from the Logged column and set the data types of the new columns to Date and Time, respectively. Option A is incorrect because it only extracts the last 11 characters of the Logged column, which would not include the date. Option B is incorrect because the data in the Logged column is in a non-standard date format and cannot be directly converted to the Date data type. Option D is incorrect because it only extracts the first 11 characters of the Logged column, which would not include the time. upvoted 3 times
  PsgFe 3 weeks, 1 day ago CSV (comma-separated values) files separate values with commas and have no data types, so answer C makes a lot of sense. C. Split the Logged column by using at as the delimiter. upvoted 1 times
  LucianaFS 3 weeks, 4 days ago Selected Answer: C The answer is C, I've tested it. Splitting the column with at as the delimiter is automatically recognized as a date column and an hour column. upvoted 2 times
  prad_raj1 1 month ago Answer D is wrong. It should be A or C upvoted 2 times
  ThomasDB 1 month ago Selected Answer: C After testing this in Power BI: only splitting with a delimiter automatically transforms the column into a date column. By extracting the first 11 characters (or 10, if you don't want the " " at the end), the column type does not automatically change.
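The split-by-delimiter step that the commenters describe can be sketched in M as follows. Column names and the preceding Source step are assumptions; splitting on the literal " at " and then typing the first column as Date is what makes the built-in date hierarchy available.

```m
// Sketch: "Source" is the query step holding the imported CSV
let
    // Split "2018-12-31 at 08:59" into a date part and a time part
    Split = Table.SplitColumn(
        Source,
        "Logged",
        Splitter.SplitTextByDelimiter(" at "),
        {"Logged Date", "Logged Time"}
    ),
    // Type the new columns so Power BI can build a date hierarchy on Logged Date
    Typed = Table.TransformColumnTypes(
        Split,
        {{"Logged Date", type date}, {"Logged Time", type time}}
    )
in
    Typed
```

When you use Split Column by Delimiter in the editor, the Changed Type step is added for you automatically, which is the behavior ThomasDB observed.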
upvoted 3 times   KarthikKumarK 1 month, 1 week ago Selected Answer: C If we take 10 characters from left, Then D also correct. Thanks Karthik upvoted 1 times   golden_retriever 1 month, 1 week ago I'm new to this site. Why does the answer has not yet corrected? It should be C upvoted 3 times   rajkoma 1 month, 1 week ago Both C and D, still makes the split value as Text.In that case,A should be the answer. upvoted 1 times   Pauwels 1 month, 2 weeks ago It is C because if it was D then the best Answer should have been A. Cause we must change the time to Date. So it is C. upvoted 2 times   Lewiasskick 1 month, 2 weeks ago A, is correct, the essential step is to change the type to date. upvoted 2 times   Analysis 1 month, 2 weeks ago I have tested the scenario, Created the column name as logged complains and put the sample data as 2018-12-31 at 08:59 and save the file with CSV. Imported the file with get data it opens a window and shows the table with three column headers. Second column consist of Delimiter. Put space as delimiter you will get the date as 12/31/2018 and when you transform the data in power query it shows data type as Date. Therefore, answer is C. upvoted 3 times   Hoeishetmogelijk 1 month, 3 weeks ago Selected Answer: C I think that the catch is that extracting the first 11 characters gives you the date plus a space character (the actual date is 10 characters). So you get "2018-12-31 ". And that leaves answer C as the best answer. upvoted 3 times   Pauwels 1 month, 3 weeks ago Selected Answer: C upvoted 1 times   Hoeishetmogelijk 1 month, 3 weeks ago Also the split option automatically transforms the first column to date format, where extracting the first 11 or 10 characters doesn't. upvoted 2 times Question #11 Topic 1 You have a Microsoft Excel file in a Microsoft OneDrive folder. The file must be imported to a Power BI dataset. You need to ensure that the dataset can be refreshed in powerbi.com. 
Which two connectors can you use to connect to the file? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Excel Workbook B. Text/CSV C. Folder D. SharePoint folder E. Web   Fer079 Highly Voted  3 months, 2 weeks ago Selected Answer: DE We can import an excel file from multiple connectors (excel workbook, folder, web, sharepoint) but if we must refresh the data from the service with no gateways then We must use web and sharepoint connectors upvoted 22 times   NevilleV 3 months ago Try it. D and E won't work. Its looking for a URL upvoted 1 times   Fer079 2 months, 4 weeks ago I tried both and they work perfectly, and of course, you need the path (in this case the URL of the excel file on One Drive) of the file, so I don ´t see the problem you say... upvoted 10 times   KobeData 2 months, 1 week ago Works just fine, this is how you do it :) https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-use-onedrive-business-links upvoted 7 times   GuerreiroJunior 2 weeks, 2 days ago Agreed KobeData upvoted 2 times   Hoeishetmogelijk 1 month, 2 weeks ago This page explains both the Web and the SharePoint option: https://learn.microsoft.com/en-us/power-query/sharepoint-onedrive-files upvoted 1 times   fred92 Highly Voted  3 months, 1 week ago Selected Answer: DE A, B, C: wrong! Would work technically, but the connection will be only to the local copy of the file, no refresh from the online version stored on OneDrive D: correct, but more complicated than option E E: correct, this is the best option to import from OneDrive upvoted 11 times   Divspl300 Most Recent  3 days, 18 hours ago Can anyone please confirm if we should rely on the answers given? HAs anyone tested them? 
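For option E (the Web connector), the usual pattern is to pass the file's direct OneDrive URL to Web.Contents inside Excel.Workbook. This is a sketch with a hypothetical OneDrive for Business URL; the exact URL format is the part you must adapt.

```m
// Sketch only: the URL below is hypothetical
let
    // Direct file URL from OneDrive for Business (strip any "?web=1" suffix)
    Url = "https://contoso-my.sharepoint.com/personal/user_contoso_com/Documents/Sales.xlsx",
    // Read the workbook over HTTPS so the Power BI service can refresh it without a gateway
    Source = Excel.Workbook(Web.Contents(Url), null, true)
in
    Source
```

Because the file is reached over HTTPS rather than a local path, the published dataset can refresh in powerbi.com with cloud credentials, which is why the Web and SharePoint folder connectors are the two that satisfy the requirement.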
upvoted 1 times
  JainiFleischer 1 week, 3 days ago A and C upvoted 1 times
  Nikeferrr 2 weeks, 2 days ago I made a test with my personal OneDrive space from Office 365, synced on my Windows 11 machine. I can get the Excel file using the Excel Workbook and Folder connectors. It works, but only for personal OneDrive; on OneDrive for Business the answer may be different. upvoted 1 times
  LucianaFS 3 weeks, 4 days ago This is a very tricky question. Options A and E are correct IF Microsoft OneDrive has been synchronized with a folder on the personal computer... Otherwise the answer is D and E. upvoted 1 times
  marcionlinerj 1 month ago Selected Answer: E I agree. E is a better solution. upvoted 1 times
  KarthikKumarK 1 month, 1 week ago Selected Answer: DE If the data should be refreshed, then the refresh will work when the connectors are Web or SharePoint folder. That means the data should be available at all times (online). If we use a local path, it requires a gateway. Thanks, Karthik upvoted 1 times
  golden_retriever 1 month, 1 week ago The question is tricky. It stated "OneDrive Folder", not "OneDrive for Business Folder", which maps to SharePoint and has a path. Plain OneDrive has no path at all. upvoted 2 times
  Hoeishetmogelijk 1 month, 2 weeks ago Selected Answer: DE D & E. See: https://learn.microsoft.com/en-us/power-query/sharepoint-onedrive-files upvoted 2 times
  Pauwels 1 month, 2 weeks ago Selected Answer: DE I have tried them. A, B, C: totally impossible. E works. With D I got some errors; surely I missed something. upvoted 2 times
  lukelin08 1 month, 3 weeks ago Selected Answer: DE D & E seem to be the consensus upvoted 2 times
  Glubbs 2 months, 1 week ago Selected Answer: DE To keep the dataset on powerbi.com updated, the source should be online.
(I believe also must be considered the right OneDrive version - https://www.microsoft.com/en-ca/microsoft-365/onedrive/onedrive-for-business) upvoted 1 times   Namenick10 2 months, 1 week ago Selected Answer: DE D & E correct upvoted 2 times   Churato 2 months, 2 weeks ago Selected Answer: DE D and E. that's the way that I actually do on my job upvoted 4 times   Djibsonx7 2 months, 4 weeks ago It's DE i think referred from https://learn.microsoft.com/en-us/power-bi/connect-data/refresh-excel-file-onedrive upvoted 3 times   PinkZebra 3 months, 1 week ago Selected Answer: DE Two options: - Copy and edit Path of the Excel file then use "Web" Connector: Option E - Copy and edit Path of the OneDrive folder then use "Sharepoint Folder" connector: Option D Source: https://www.youtube.com/watch?v=GGHbbg6yi-A upvoted 4 times Question #12 Topic 1 HOTSPOT - You are profiling data by using Power Query Editor. You have a table named Reports that contains a column named State. The distribution and quality data metrics for the data in State is shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area:   olajor Highly Voted  3 months, 3 weeks ago 69 is always the right choice! ;) upvoted 30 times   learnazureportal Highly Voted  4 months ago Answer is correct upvoted 14 times   opek Most Recent  2 weeks ago 69 nice 4 upvoted 2 times   synru 1 month ago null value is counted in distinct and unique values upvoted 1 times   MawadaRaafat 2 weeks, 3 days ago it will not be counted in case of unique because it occurred 4% I think it happened more than one time upvoted 1 times   lukelin08 1 month, 3 weeks ago 69 & 4. 
Answer is correct upvoted 6 times
  lukelin08 1 month, 3 weeks ago https://community.powerbi.com/t5/Desktop/Difference-between-Distinct-and-Unique-when-using-column/td-p/2736921?lightbox-message-images-2854526=808117iD9D42C5DB8B8558A upvoted 1 times
  andregrahamnz 1 month, 4 weeks ago 69/4, 100% upvoted 2 times
  Churato 2 months, 2 weeks ago Unique represents values that appear just once in this column. If there is more than one null row, null counts only toward Distinct and will NOT change the Unique value. PS: In the case of just one null row, it WILL increase Unique (just +1). So, there are 69 different values (including null) in this column, AND it's not possible to determine how many rows it has (so far, so good; that is not required here), AND, since null occurs more than once (judging by the percentage), we conclude that there are 4 unique non-null values that occurred only once in State. The answer is: 69 for the first and 4 for the last one. upvoted 3 times
  Churato 2 months, 2 weeks ago Please disregard the "PS". upvoted 1 times
  Nurgul 3 months, 1 week ago The given answer is correct. There are 69 different values in State including nulls. There are 4 non-null values that occur only once in State. upvoted 5 times
  div4lyfe 3 months, 4 weeks ago answer is correct upvoted 4 times
Question #13 Topic 1 HOTSPOT - You have two CSV files named Products and Categories. The Products file contains the following columns:
✑ ProductID
✑ ProductName
✑ SupplierID
✑ CategoryID
The Categories file contains the following columns:
✑ CategoryID
✑ CategoryName
✑ CategoryDescription
From Power BI Desktop, you import the files into Power Query Editor. You need to create a Power BI dataset that will contain a single table named Product.
The Product table will include the following columns:
✑ ProductID
✑ ProductName
✑ SupplierID
✑ CategoryID
✑ CategoryName
✑ CategoryDescription
How should you combine the queries, and what should you do on the Categories query? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
  GPerez73 Highly Voted  4 months, 1 week ago Ok for me upvoted 21 times
  svg10gh Most Recent  1 week ago This is correct: Merge, and disable the query load. upvoted 2 times
  GuerreiroJunior 2 weeks, 2 days ago I totally agree with the answer: merge and disable the Categories query. upvoted 2 times
  PsgFe 3 weeks, 1 day ago Correct: Merge, and disable the query load. upvoted 3 times
  SSN_18 3 weeks, 6 days ago correct answer upvoted 1 times
  GSKop 3 weeks, 6 days ago Correct upvoted 1 times
  AlexYang_ 4 weeks ago Merge; disable load. upvoted 1 times
  reyn007 1 month, 2 weeks ago I understand the merge and the disable-query concepts, but why don't you delete the Categories table after the merge? upvoted 1 times
  Hoeishetmogelijk 1 month, 2 weeks ago Usually the import is not a one-time exercise, and you will want to be able to refresh the data model with updated sources. Then you will need the Categories QUERY again. The first option is about deleting the Categories QUERY, not the Categories TABLE. upvoted 4 times
  lukelin08 1 month, 3 weeks ago Answer is correct for me upvoted 2 times
  psychosystema 1 month, 3 weeks ago Answer is correct; disabling the query load for Categories will exclude it from appearing as a table. upvoted 3 times
  JohnHail 2 months, 1 week ago ok, makes sense upvoted 2 times
  ClassMistress 2 months, 1 week ago Correct answer upvoted 2 times
  Zainah22 2 months, 2 weeks ago Right answer upvoted 2 times
  Dovoto 3 months ago Correct answer upvoted 1 times
  Nurgul 3 months, 1 week ago The given answer is correct. Combine the queries by performing a: Merge.
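The Merge half of the answer looks roughly like this in M. This is a sketch that assumes the two queries are named Products and Categories, matching the file names in the question.

```m
// Sketch: assumes queries named Products and Categories
let
    // Left outer join keeps every product, matching on CategoryID
    Merged = Table.NestedJoin(
        Products, {"CategoryID"},
        Categories, {"CategoryID"},
        "Categories", JoinKind.LeftOuter
    ),
    // Expand only the columns needed in the final Product table
    Product = Table.ExpandTableColumn(
        Merged, "Categories",
        {"CategoryName", "CategoryDescription"}
    )
in
    Product
```

With load disabled on the Categories query, only the merged Product table lands in the model, which satisfies the single-table requirement.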
On the Categories query: Disable the query load. upvoted 4 times   adizzz54 3 months, 2 weeks ago Right Ans upvoted 2 times   val38 3 months, 3 weeks ago OK for me upvoted 3 times Question #14 Topic 1 You have an Azure SQL database that contains sales transactions. The database is updated frequently. You need to generate reports from the data to detect fraudulent transactions. The data must be visible within five minutes of an update. How should you configure the data connection? A. Add a SQL statement. B. Set the Command timeout in minutes setting. C. Set Data Connectivity mode to Import. D. Set Data Connectivity mode to DirectQuery.   lukelin08 Highly Voted  1 month, 3 weeks ago Selected Answer: D D is correct for me upvoted 9 times   ClassMistress Most Recent  5 days, 10 hours ago D is the correct answer upvoted 1 times   Nuli 3 weeks, 3 days ago D is correct because the database is updated frequently. upvoted 1 times   scotchtapebunny 1 month, 4 weeks ago Yup! D seems most appropriate. upvoted 4 times   ClassMistress 2 months, 1 week ago D. Set Data Connectivity mode to DirectQuery because the data is accessed frequently. upvoted 3 times   CHT1988 2 months, 3 weeks ago Selected Answer: D D. Set Data Connectivity mode to DirectQuery. upvoted 3 times   samad1234 3 months, 1 week ago DirectQuery upvoted 2 times   adizzz54 3 months, 2 weeks ago Selected Answer: D Direct query upvoted 3 times   OGESSIUSER 4 months ago Selected Answer: D D. Set Data Connectivity mode to DirectQuery. upvoted 3 times   MilouSluijter 4 months, 1 week ago D This question also occurs in examtopics DA-100: topic 1, question 11 upvoted 3 times Question #15 Topic 1 DRAG DROP - You have a folder that contains 100 CSV files. You need to make the file metadata available as a single dataset by using Power BI. The solution must NOT store the data of the CSV files. Which three actions should you perform in sequence. 
To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:
  emmanuelkech Highly Voted  4 months, 1 week ago I think the correct flow is: Get data, then select Folder; Remove the Content column; Expand the Attributes column. upvoted 63 times
  GabryPL 1 week, 2 days ago What about: 1) get data from folder 2) expand attributes 3) remove content column? Why should this order be wrong? upvoted 2 times
  pnb11 4 months ago This is the right answer: 1. Get data, then select Folder. 2. Remove the Attributes column (because this column contains information about the files, which is not needed). 3. Combine the Content column (which contains the actual data that we need). upvoted 16 times
  Shakilpatil 3 weeks, 1 day ago The question says not to store the data of the files. upvoted 3 times
  Hoeishetmogelijk 1 month, 2 weeks ago See the requirement "The solution must NOT store the data of the CSV files." So the Content column must be removed. upvoted 6 times
  Tata11 3 months, 3 weeks ago Hello dear, metadata means information about files. That's why we remove Content. upvoted 12 times
  NevilleV 3 months ago I agree that this is the requirement. The thing that bothers me is WHY. Why would you want to create a dataset with only the metadata? upvoted 4 times
  cnmc 2 weeks, 2 days ago Audit purposes. Not everything is about the business results; for big corps you'd care about how it's run too. upvoted 2 times
  Guru1337 Highly Voted  4 months, 1 week ago It should be remove Content, not combine, since the file data is NOT to be stored. upvoted 36 times
  Churato 2 months, 2 weeks ago Tested here and it works. Thank you! upvoted 2 times
  GPerez73 4 months ago I agree upvoted 6 times
  RooneySmith Most Recent  2 days, 23 hours ago Seeing how this answer doesn't make any sense at all, like many other questions' answers, I wonder: can this correction be trusted?
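Setting the debate aside, the metadata-only flow most commenters land on (Get Data > Folder, remove the Content column, expand the Attributes column) can be sketched in M. The folder path is hypothetical, and the expanded attribute field names are assumptions (available fields vary by system).

```m
// Sketch only: folder path and attribute field names are assumptions
let
    // Connect to the folder; each row represents one CSV file
    Source = Folder.Files("C:\Data\Complaints"),
    // Drop the binary Content column so no file data is stored in the model
    NoContent = Table.RemoveColumns(Source, {"Content"}),
    // Expand file metadata from the Attributes record column
    Meta = Table.ExpandRecordColumn(
        NoContent, "Attributes",
        {"Size", "Kind", "Hidden"}
    )
in
    Meta
```

Since the Content column is removed before load, the dataset holds only file metadata (names, extensions, dates, attributes), which is what the question asks for.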
And if tomorrow I want to pass the exam no matter what, should I answer the way it's answered here, or should I follow what I believe is correct? upvoted 1 times
  vero1971_ 3 weeks, 3 days ago Why is there a wrong answer on some questions? upvoted 2 times
  Patrick666 1 month ago Get data, then select Folder; remove the Content column; expand the Attributes column. upvoted 3 times
  lukelin08 3 months, 1 week ago I agree that it should be remove Content. However, it is another ambiguous possible answer from Microsoft, because after getting the data as the first step, the last two steps (remove the Content column, and expand the Attributes column) can be done in any order. The order doesn't matter for the last two steps; it would work either way. So again it's annoying if Microsoft doesn't allow both answers to be correct because of the order. upvoted 9 times
  cldrmn 1 month, 2 weeks ago Agreed. upvoted 2 times
  NevilleV 3 months ago Agreed. The order of the last 2 doesn't matter. upvoted 4 times
  Nurgul 3 months, 1 week ago Actions: From Power BI Desktop, select Get Data, and then select Folder. From Power Query Editor, remove the Content column. From Power Query Editor, expand the Attributes column. upvoted 8 times
  RichardOgoma 3 months, 3 weeks ago 1. Get data and select Folder 2. Remove the Content column 3. Expand the Attributes column. You'll have only the metadata of the files remaining. upvoted 9 times
  Tata11 3 months, 3 weeks ago "You need to make the file metadata (metadata = information about files) available", so: get data, remove Content, expand Attributes. upvoted 7 times
Question #16 Topic 1 A business intelligence (BI) developer creates a dataflow in Power BI that uses DirectQuery to access tables from an on-premises Microsoft SQL server. The Enhanced Dataflows Compute Engine is turned on for the dataflow. You need to use the dataflow in a report. The solution must meet the following requirements:
✑ Minimize online processing operations.
✑ Minimize calculation times and render times for visuals.
✑ Include data from the current year, up to and including the previous day.
What should you do?
A. Create a dataflows connection that has DirectQuery mode selected.
B. Create a dataflows connection that has DirectQuery mode selected and configure a gateway connection for the dataset.
C. Create a dataflows connection that has Import mode selected and schedule a daily refresh.
D. Create a dataflows connection that has Import mode selected and create a Microsoft Power Automate solution to refresh the data hourly.
  IxIsa Highly Voted  3 months, 2 weeks ago C, because one of the requirements is 'Minimize online processing operations'. Although the dataflow uses DirectQuery, the dataset can be refreshed with Import. https://learn.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-directquery upvoted 11 times
  thanhtran7 1 month, 1 week ago "Although the dataflow uses DirectQuery, the dataset can be refreshed with Import." -> I don't understand this point. Can you help explain it in more detail? upvoted 1 times
  Sunny_Liya 3 months, 1 week ago Need a gateway upvoted 2 times
  Dovoto 3 months ago The BI developer has already created the dataflow, so the gateway must be present. Import and a daily scheduled refresh should do the trick. upvoted 5 times
  oakey66 Most Recent  1 week ago This doesn't seem correct. Based on this link, you should use DirectQuery: https://learn.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-develop-solutions "Avoid separate refresh schedules: DirectQuery connects directly to a dataflow, which removes the need to create an imported dataset. As such, using DirectQuery with your dataflows means you no longer need separate refresh schedules for the dataflow and the dataset to ensure your data is synchronized." This explicitly calls out that you should not need refresh schedules. Am I missing something?
upvoted 1 times
  lukelin08 1 month, 3 weeks ago Selected Answer: C C is correct upvoted 2 times
  PCCCCCC 1 month, 3 weeks ago Why can't it be A? They have the compute engine turned ON, so we can use DirectQuery directly from the dataflow. upvoted 2 times
  Churato 2 months, 3 weeks ago Selected Answer: C Dovoto, yes, the BI developer already created the dataflow. upvoted 3 times
  Dovoto 3 months ago The BI developer has already created the dataflow, so the gateway must be present. Import and a daily scheduled refresh should do the trick. upvoted 4 times
  Manzy2599 3 months, 2 weeks ago Is it B or C? upvoted 1 times
  Snow_28 3 months, 3 weeks ago B, because it uses DirectQuery to access the tables over a connection to an on-premises SQL Server, which would require a gateway for the connection. upvoted 3 times
  saurinkhamar 3 months, 3 weeks ago B could be the answer. An on-premises SQL Server needs to be connected, which would require a gateway. upvoted 4 times
  GPerez73 3 months, 2 weeks ago I also think so upvoted 2 times
  fdsdfgxcvbdsfhshfg 3 months, 3 weeks ago Selected Answer: C C is legit upvoted 4 times
Question #17 Topic 1 DRAG DROP - You publish a dataset that contains data from an on-premises Microsoft SQL Server database. The dataset must be refreshed daily. You need to ensure that the Power BI service can connect to the database and refresh the dataset. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
  svg10gh 2 days, 5 hours ago Current sequence looks good upvoted 1 times
  JoaoTrade 3 days, 16 hours ago Correct. upvoted 1 times
  jsking 3 days, 16 hours ago Configure an on-premises data gateway. Add the dataset owner to the data source. Add a data source. Configure a scheduled refresh. upvoted 1 times
  jsking 3 days, 16 hours ago I changed my mind.
The answer provided is correct, because the owner needs a data source to own in the first place, so "add a data source" should be second. upvoted 2 times
  Hansen_G 2 days, 5 hours ago Agree. upvoted 1 times
Question #18 Topic 1 You attempt to connect Power BI Desktop to a Cassandra database. From the Get Data connector list, you discover that there is no specific connector for the Cassandra database. You need to select an alternate data connector that will connect to the database. Which type of connector should you choose?
A. Microsoft SQL Server database
B. ODBC
C. OLE DB
D. OData
  GuerreiroJunior 2 days, 19 hours ago Selected Answer: B B is correct because it allows you to connect to data sources that aren't identified in the Get Data lists. The ODBC connector lets you import data from any third-party ODBC driver simply by specifying a Data Source Name (DSN) or a connection string. As an option, you can also specify a SQL statement to execute against the ODBC driver. The linked page details a few examples of data sources to which Power BI Desktop can connect by using the generic ODBC interface: https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-connect-using-generic-interfaces upvoted 1 times
  mtvl123 3 days, 12 hours ago I would choose the OData connector based on this documentation: https://www.cdata.com/kb/tech/cassandra-odata-power-query.rst upvoted 1 times
  jsking 3 days, 16 hours ago Selected Answer: B Answer is correct. https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-connect-using-generic-interfaces upvoted 1 times
  JoaoTrade 3 days, 16 hours ago Selected Answer: B B is correct upvoted 1 times
Question #19 Topic 1 DRAG DROP - You receive annual sales data that must be included in Power BI reports. From Power Query Editor, you connect to the Microsoft Excel source shown in the following exhibit.
You need to create a report that meets the following requirements:
Visualizes the Sales value over a period of years and months
Adds a slicer for the month
Adds a slicer for the year
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
  GuerreiroJunior 2 days, 19 hours ago Correct Answer! upvoted 2 times
  mtvl123 3 days, 12 hours ago It's correct! upvoted 1 times
  jsking 3 days, 16 hours ago Provided answer is correct. upvoted 1 times
  JoaoTrade 3 days, 16 hours ago Correct. A, B and C upvoted 1 times
Question #20 Topic 1 HOTSPOT - You are using Power BI Desktop to connect to an Azure SQL database. The connection is configured as shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct solution is worth one point.
  GuerreiroJunior Highly Voted  2 days, 19 hours ago The default timeout is 10 minutes, but if a query takes longer than that you can enter another value in minutes to keep the connection open longer. 1. 10 minutes 2. All the tables Reference: https://learn.microsoft.com/en-us/power-query/connectors/azuresqldatabase upvoted 6 times
  Hansen_G 2 days, 1 hour ago Navigate using full hierarchy is unchecked. Only tables with data will be displayed. upvoted 2 times
  Danylessoucis 2 days, 12 hours ago Right: 10 min and all the tables, because "Navigate using full hierarchy: If checked, the navigator displays the complete hierarchy of tables in the database you're connecting to. If cleared, the navigator displays only the tables whose columns and rows contain data". upvoted 1 times
  Vikash14 Most Recent  15 hours, 8 minutes ago 10 mins. All the tables. If checked, the navigator displays the complete hierarchy of tables in the database you're connecting to.
If cleared, the navigator displays only the tables whose columns and rows contain data. Reference: https://learn.microsoft.com/en-us/power-query/connectors/azuresqldatabase upvoted 1 times   Vikash14 15 hours, 4 minutes ago Sorry, my bad — as Navigate is not checked, it would show tables with data upvoted 1 times   Sushvij 2 days, 18 hours ago 10 min Only tables with data If Navigate using full hierarchy is unchecked you can see only tables (rows and columns) with data. Otherwise you can see all tables upvoted 2 times   NICOx 2 days, 21 hours ago It's correct, 10 min and all tables, because no query was specified upvoted 1 times   JoaoTrade 3 days, 16 hours ago I agree with 10 min, but not sure on the only tables that contain data.. I would say all the tables upvoted 2 times   Kai_don 3 days, 18 hours ago It should be 10 mins and all the tables. upvoted 1 times Topic 2 - Question Set 2 Question #1 Topic 2 You are creating a report in Power BI Desktop. You load a data extract that includes a free text field named col1. You need to analyze the frequency distribution of the string lengths in col1. The solution must not affect the size of the model. What should you do? A. In the report, add a DAX calculated column that calculates the length of col1 B. In the report, add a DAX function that calculates the average length of col1 C. From Power Query Editor, add a column that calculates the length of col1 D. From Power Query Editor, change the distribution for the Column profile to group by length for col1   Muffinshow Highly Voted  4 months, 1 week ago Selected Answer: D Wrong answer, A will affect the size of the model as would C. B doesn't give you enough information about the distribution (just the average) D is the right answer. upvoted 43 times   Hoeishetmogelijk 1 month, 3 weeks ago I agree completely! upvoted 2 times   Jonagan 2 months ago Why do you think that aggregating in Power Query will not influence the size of the data model?
It's getting smaller, isn't it? Measures are the only solution that does not influence the data model. They require CPU but do not store additional data or reduce the data in the model upvoted 5 times   GabryPL 1 week, 2 days ago Option B is also correct for me; it's the only one that will not affect the size of the model upvoted 1 times   Kai_don 2 weeks, 3 days ago Option A is saying to use a calculated column, which increases the size of the model. So D is correct. upvoted 2 times   GPerez73 4 months ago I agree upvoted 2 times   lukelin08 Highly Voted  3 months, 1 week ago Selected Answer: D It's D, this can easily be tested by going to Power Query Editor > View > Column Profile > distribution graph, click the three little dots and select group by text length. This will allow you to view the distribution of text length within the column upvoted 19 times   dnpr Most Recent  1 week ago Generally, measures are more useful, but the trade-offs are the performance hit (report runtime vs. pre-processed), storage space, and the type of expressions you can use. For example, calculated columns are often used when you want to filter on the result rather than just use it as a calculated result - Here "A" is correct. upvoted 1 times   dnpr 1 week ago My bad - calculated columns will store the result and increase the data model size. upvoted 2 times   oakey66 1 week ago Selected Answer: A Creating a new column in Power Query adds to the data model size because the column is calculated prior to being loaded into the data model. Whereas creating a calculated column, that is not part of the data model. I believe the answer is A. upvoted 1 times   Analysis 1 week, 1 day ago Tested the scenario. Answer is D upvoted 1 times   uniquing 1 week, 2 days ago I would say A is the answer. The top priority is not to affect the model size. "A" would not affect the model size but only increase memory use. But "D" changes the model size.
upvoted 1 times   PracticeAnalytics 1 week, 1 day ago Did you do the test? upvoted 1 times   MBA_1990 2 weeks, 1 day ago Right answer is D Power Query Editor > View > Column Profile > By clicking on the three little dots -> Group by text length upvoted 1 times   AhmedMRagab 2 weeks, 2 days ago I believe D is the right one since the rest will increase the data size upvoted 1 times   AlexYang_ 1 month ago Selected Answer: A using DAX doesn't increase model size upvoted 2 times   Kai_don 2 weeks, 3 days ago Option A is saying to use a calculated column, which increases the size of the model. So D is correct. upvoted 1 times   Mati_123 1 month ago Have a look at the question again; it says "You need to analyze the frequency distribution of the string lengths in col1", so with option D, will we be able to analyze the frequency distribution of the strings? If not, then we should go for "A". upvoted 1 times   lsperes2982 1 month, 1 week ago Selected Answer: A using DAX you don't increase model size upvoted 2 times   Yaldaa 1 month, 2 weeks ago Selected Answer: A You need to analyze the data, so go with A; DAX calculations do not increase model size. The problem with D is that you can only see the distribution, but no further analysis is possible. upvoted 1 times   Hoeishetmogelijk 1 month, 3 weeks ago Selected Answer: D As Muffinshow already worded perfectly: A will affect the size of the model as would C. B doesn't give you enough information about the distribution (just the average) D is the right answer. See for distribution on text length this page at the bottom: https://learn.microsoft.com/en-us/power-query/data-profiling-tools upvoted 1 times   andregrahamnz 1 month, 4 weeks ago Selected Answer: D Answer can ONLY be D. A and C both affect the size of the data model. B, if you read it carefully, is a measure to calculate the AVERAGE length, which will not provide the required 'analysis'.
While it's a bit rich to refer to D as an 'analysis', it's the only answer that provides a grouped breakdown of string length AND doesn't increase the size of the data model. The answer is D by process of elimination. upvoted 4 times   RooneySmith 2 days, 22 hours ago What confuses me is that if tomorrow I am going to take the test, which answer should I choose: the one that makes sense to me or the one stated here? Sometimes I feel these answers are not trustworthy even though I find the same answers on other platforms! upvoted 1 times   Jonagan 2 months ago Selected Answer: A D is definitely wrong, because you will reduce the size of the data model. Using measures is the only way to not influence the size of the data model. It requires a little bit more CPU to calculate, but it does not influence the size. Hence, A is the correct answer. Muffinshow, why do you think answer A will affect the size of the model and D not? upvoted 2 times   Churato 2 months, 2 weeks ago Selected Answer: D Tested D, it works as expected. upvoted 4 times   samad1234 3 months ago Correct Answer: D upvoted 3 times Question #2 Topic 2 You have a collection of reports for the HR department of your company. The datasets use row-level security (RLS). The company has multiple sales regions. Each sales region has an HR manager. You need to ensure that the HR managers can interact with the data from their region only. The HR managers must be prevented from changing the layout of the reports. How should you provision access to the reports for the HR managers? A. Publish the reports in an app and grant the HR managers access permission. B. Create a new workspace, copy the datasets and reports, and add the HR managers as members of the workspace. C. Publish the reports to a different workspace other than the one hosting the datasets. D. Add the HR managers as members of the existing workspace that hosts the reports and the datasets.
  GPerez73 Highly Voted  4 months ago I would say it is correct since an app would prevent changing the layout upvoted 14 times   lukelin08 Highly Voted  3 months, 1 week ago Selected Answer: A A is correct. upvoted 6 times   svg10gh Most Recent  1 week ago The correct answer looks like A because in the Power BI service, members of a workspace have access to datasets in the workspace. RLS doesn't restrict this data access. RLS is used to restrict access to data, not to the layout of the report. Members are allowed to change the report layout. upvoted 2 times   MBA_1990 2 weeks, 1 day ago Selected Answer: A RLS is not applied to a member of a Workspace upvoted 1 times   AlexYang_ 4 weeks ago Selected Answer: A A is correct. upvoted 1 times   csillag 1 month ago A is correct. In Workspace > Access you can add a user as Viewer. upvoted 1 times   DOUMI 3 months, 1 week ago A is correct upvoted 2 times   Snow_28 3 months, 3 weeks ago A would be the answer because in the Power BI service, members of a workspace have access to datasets in the workspace. RLS doesn't restrict this data access. upvoted 4 times   MilouSluijter 4 months, 1 week ago I think it's B. https://docs.microsoft.com/en-us/power-bi/collaborate-share/service-create-distribute-apps RLS is used to restrict access to data, not to the layout of the report. Members are allowed to change the report layout. upvoted 1 times   MilouSluijter 4 months, 1 week ago Oops, should have said A upvoted 2 times Question #3 Topic 2 You need to provide a user with the ability to add members to a workspace. The solution must use the principle of least privilege. Which role should you assign to the user? A. Viewer B. Admin C. Contributor D.
Member   GPerez73 Highly Voted  4 months ago Correct upvoted 10 times   lukelin08 Highly Voted  3 months, 1 week ago Selected Answer: D D is correct as per the example picture and the principle of least privilege required upvoted 5 times   svg10gh Most Recent  1 week ago The correct answer is D, which uses the least privilege. upvoted 1 times   PsgFe 3 weeks, 1 day ago The question says: use the principle of least privilege. D. Member (correct) upvoted 1 times   Snow_28 3 months, 3 weeks ago B or D can both be the answers because they both have the permissions to add members in the workspaces. upvoted 1 times   Luffy561 3 months, 3 weeks ago The answer is D; it must use least privilege upvoted 4 times Question #4 Topic 2 You have a Power BI query named Sales that imports the columns shown in the following table. Users only use the date part of the Sales_Date field. Only rows with a Status of Finished are used in analysis. You need to reduce the load times of the query without affecting the analysis. Which two actions achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Remove the rows in which Sales[Status] has a value of Canceled. B. Remove Sales[Sales_Date]. C. Change the data type of Sales[Delivery_Time] to Integer. D. Split Sales[Sales_Date] into separate date and time columns. E. Remove Sales[Canceled Date].   bjornopjemic Highly Voted  3 months, 3 weeks ago A, only records with status Finished are used D, personally I would transform the column to a date format and not split it since only the date part is used Not E, all the cancelled rows are already deleted with A, and when an order is not cancelled it will contain a null value upvoted 26 times   Fer079 2 months, 4 weeks ago The option A is clear.
Regarding D or E, I understand your point of view and it makes sense; however, if we split the date column into two columns then we will have new data types for these columns and maybe it will affect the model and the analysis that we have currently, and one of the requirements is "You need to reduce the load times of the query without affecting the analysis.", so we should discard option D and go ahead with option E upvoted 8 times   Mizaan 2 months, 3 weeks ago Agree. Splitting the column as per D does not reduce the model size. Removing the column does. Since we don't need the cancelled date (because we only filter by Finished), the cancelled date is not useful for anything. upvoted 2 times   cnmc 2 weeks, 2 days ago Splitting the column without deleting one of them isn't going to do anything for performance. And you're right that if step A is done then the cancelled_date column will only contain null values. But reducing the number of columns is going to improve the performance - even if that column is all null. upvoted 3 times   evipap Highly Voted  2 months ago Selected Answer: AE It says: You need to reduce the LOAD times of the query without affecting the analysis. Only answers A and E can reduce the load times. D may reduce only the time needed to process the data. Someone said that E is not the answer because: "All the cancelled rows are already deleted with A and when an order is not cancelled it will contain a null value". You must read the description again, because it says "Each answer presents a COMPLETE solution", not part of a solution. upvoted 8 times   nmosq Most Recent  1 day, 6 hours ago Selected Answer: AD I would normally go with AE as most people do, but from the sample I'm seeing, even with the status being "Finished", Canceled Date has a value. upvoted 1 times   svg10gh 2 days, 5 hours ago Selected Answer: AD AD are the correct answers.
upvoted 1 times   dnpr 1 week ago Selected Answer: AD Without the date split you can't do the analysis, so AD upvoted 1 times   svg10gh 1 week ago A, D is the correct answer. upvoted 1 times   Xikta 2 weeks, 1 day ago Selected Answer: AE "Each correct answer presents a complete solution." Therefore, don't think that A already deletes the cancelled rows so we don't need E anymore. That's wrong, because each answer is separate, not a part of another. upvoted 2 times   MBA_1990 2 weeks, 1 day ago Selected Answer: AE A is clear. D will make the compression of the dataset more efficient but will not reduce the load time. E is the right answer (importing fewer columns will reduce the load time) upvoted 1 times   PsgFe 3 weeks, 1 day ago You need to reduce loading times without affecting the analysis. A) Records with the status "Canceled" are not suitable for analysis and must be filtered. D) Splitting date and time is good practice, and working with the date field gets better. A and D upvoted 1 times   AlexYang_ 4 weeks ago Selected Answer: AD AD clearly upvoted 1 times   KarthikKumarK 1 month, 1 week ago Selected Answer: AE A - 1st thing, you cannot remove rows (but you can filter). E - If the "Status" is "Finished", the "Canceled_Date" should be "null", and even so this column is not required to load. But not "D", because if you split Sales_Date, Power Query again needs to calculate and load 2 column values.
Thanks Karthik upvoted 4 times   jboiret 1 month, 1 week ago Selected Answer: AD Splitting date and time is a strong recommendation upvoted 1 times   Hoeishetmogelijk 1 month, 2 weeks ago Selected Answer: AD If you read the text carefully, the only answers can be A & D. upvoted 1 times   KobeData 2 months, 1 week ago A and D is correct. The Power BI Desktop data model only supports date/time, but they can be formatted as dates or times independently. Date/Time – Represents both a date and time value. Underneath the covers, the Date/Time value is stored as a Decimal Number type. Since there's a T in the dates column before the split, it's saved as a source text value. Splitting converts it to a numeric value. This reduces the size. upvoted 3 times   Tiz88 2 months, 1 week ago This is the perfect answer, and makes total sense. Thanks upvoted 1 times   xxfangxx 2 months, 3 weeks ago Selected Answer: AD AD are the correct answers. upvoted 1 times   Mizaan 2 months, 3 weeks ago Selected Answer: AE A and E. A is easy. Splitting the column as per D does not reduce the model size. Removing the column does. Since we don't need the cancelled date (because we only filter by Finished), the cancelled date is not useful for anything. upvoted 4 times   Churato 2 months, 2 weeks ago Agree, https://learn.microsoft.com/en-us/power-bi/guidance/import-modeling-data-reduction upvoted 2 times Question #5 Topic 2 You build a report to analyze customer transactions from a database that contains the tables shown in the following table. You import the tables. Which relationship should you use to link the tables? A. one-to-many from Transaction to Customer B. one-to-one between Customer and Transaction C. many-to-many between Customer and Transaction D.
one-to-many from Customer to Transaction   GPerez73 Highly Voted  4 months ago It is correct for me upvoted 5 times   jsking Most Recent  2 weeks, 5 days ago Selected Answer: D It's an obvious one. The relationship always flows downstream from the one side (primary key, here Customer) to the many side (foreign key, here Transaction) upvoted 1 times   psychosystema 1 month, 3 weeks ago Selected Answer: D One customer can have many transactions, so D. upvoted 3 times   srikanth923 2 months, 1 week ago Selected Answer: D D is the answer upvoted 3 times   samad1234 3 months, 1 week ago D IS CORRECT upvoted 2 times   Nurgul 3 months, 1 week ago Selected Answer: D D is correct upvoted 2 times   lukelin08 3 months, 1 week ago Selected Answer: D D is correct upvoted 2 times   Ron22Ron 3 months, 2 weeks ago Selected Answer: D One customer, many transactions. Answer is D upvoted 2 times   Pushliang 4 months ago D IS RIGHT upvoted 4 times Question #6 Topic 2 You have a custom connector that returns ID, From, To, Subject, Body, and Has Attachments for every email sent during the past year. More than 10 million records are returned. You build a report analyzing the internal networks of employees based on whom they send emails to. You need to prevent report recipients from reading the analyzed emails. The solution must minimize the model size. What should you do? A. From Model view, set the Subject and Body columns to Hidden. B. Remove the Subject and Body columns during the import. C. Implement row-level security (RLS) so that the report recipients can only see results based on the emails they sent.
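For reference, option B corresponds to a column-removal step in Power Query before the data is loaded. The sketch below assumes a hypothetical `Email.Contents()` source function standing in for the custom connector (which the question does not show); `Table.RemoveColumns` is the standard Power Query M function for dropping columns:

```
let
    // Hypothetical stand-in for the custom email connector
    Source = Email.Contents(),
    // Dropping the columns at import keeps them out of the model entirely,
    // unlike Hidden columns (option A), which are still stored in the file
    Result = Table.RemoveColumns(Source, {"Subject", "Body"})
in
    Result
```

Because the removed columns never reach the model, this both prevents recipients from reading the email text and minimizes the model size.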
  RickyAnd Highly Voted  4 months ago Selected Answer: B correct, "prevent report recipients from reading the analyzed emails" upvoted 7 times   aloulouder Highly Voted  4 months ago correct upvoted 5 times   louisaok Most Recent  1 month, 2 weeks ago Remove the sensitive info at the very beginning upvoted 1 times   lukelin08 1 month, 3 weeks ago Selected Answer: B B is correct for me upvoted 1 times   CHT1988 2 months, 1 week ago Selected Answer: B B is correct upvoted 3 times   samad1234 3 months, 1 week ago B is correct upvoted 2 times   Nurgul 3 months, 1 week ago Selected Answer: B B is correct, it minimizes the model size. upvoted 3 times Question #7 Topic 2 HOTSPOT - You create a Power BI dataset that contains the table shown in the following exhibit. You need to make the table available as an organizational data type in Microsoft Excel. How should you configure the properties of the table? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:   Namenick10 Highly Voted  3 months, 3 weeks ago Row label: Name Key column: ID Is featured table: Yes upvoted 33 times   Churato 2 months, 2 weeks ago The Row label field value is used in Excel so users can easily identify the row. It appears as the cell value for a linked cell, in the Data Selector pane, and in the Information card. The Key column field value provides the unique ID for the row. This value enables Excel to link a cell to a specific row in the table. Source: https://learn.microsoft.com/en-us/power-bi/collaborate-share/service-create-excel-featured-tables upvoted 1 times   HamzaMeziane 3 months, 2 weeks ago Why do you say that? upvoted 1 times   Alexeyvykhodtsev Highly Voted  3 months, 3 weeks ago Maybe the Row label should be Name.
upvoted 15 times   fdsdfgxcvbdsfhshfg 3 months, 3 weeks ago Yeah, Name of the Business Unit should be the Row Label upvoted 7 times   Patrick666 Most Recent  1 month ago Row label: Name Key column: ID Is featured table: Yes upvoted 4 times   Hoeishetmogelijk 1 month, 2 weeks ago Row label: Name Key column: ID Is featured table: Yes See: https://www.myonlinetraininghub.com/power-bi-organizational-data-types-in-excel upvoted 1 times   lukelin08 1 month, 3 weeks ago My choices are Row label: Name Key column: ID Is featured table: Yes upvoted 3 times   saciduni 1 month, 4 weeks ago Cost center as row label is not unique enough to identify a row in a table (or at least that's what I assume); Name should be the correct answer for row label because it's more precise upvoted 5 times   Nurgul 3 months, 1 week ago My answer would be: Row label: Name Key column: ID Is featured table: Yes upvoted 5 times Question #8 Topic 2 You have the Power BI model shown in the following exhibit. A manager can represent only a single country. You need to use row-level security (RLS) to meet the following requirements: ✑ The managers must only see the data of their respective country. ✑ The number of RLS roles must be minimized. Which two actions should you perform? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Create a single role that filters Country[Manager_Email] by using the USERNAME DAX function. B. Create a single role that filters Country[Manager_Email] by using the USEROBJECTID DAX function. C. For the relationship between Purchase Detail and Purchase, select Apply security filter in both directions. D. Create one role for each country. E. For the relationship between Purchase and Purchase Detail, change the Cross filter direction to Single.
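For context, option A's single dynamic role amounts to one DAX row-filter expression on the Country table, as in the sketch below (a sketch, assuming Manager_Email stores the managers' sign-in addresses; in the Power BI service, USERNAME() returns the user principal name):

```
-- Row filter defined on the Country table in Modeling > Manage roles
[Manager_Email] = USERNAME()
```

Option C is then needed because, by default, row-level security does not propagate through a bidirectional cross-filter unless "Apply security filter in both directions" is enabled on the relationship.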
  Nurgul Highly Voted  3 months, 1 week ago Selected Answer: AC The given answer is correct. A. Create a single role that filters Country[Manager_Email] by using the USERNAME DAX function. C. For the relationship between Purchase Detail and Purchase, select Apply security filter in both directions. upvoted 9 times   kiwi69 Most Recent  1 week, 2 days ago Selected Answer: AD I think the answer C does not represent a complete solution at all. You can't apply RLS without roles and answer C does not create roles. Answer D is not optimal but would work. upvoted 1 times   csillag 1 month ago Correct answer is AC. https://asankap.wordpress.com/2018/05/28/how-does-row-level-security-works-when-there-is-a-bi-directional-filter-in-power-bi-tabular-model/ upvoted 1 times   sharmila29 1 month, 2 weeks ago In my opinion first you have to create a role for each
