Learn Power BI and Build Your Own Coronavirus Report – Update including Doubling Times

Update to the blog post Learn Power BI and Build Your Own Coronavirus Report

Dataset Updated

The Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE), supported by the ESRI Living Atlas Team and the Johns Hopkins University Applied Physics Lab (JHU APL), has updated the dataset files: the files have new names, and there are new US files that break the data down by state. The US as a whole is still included in the global datasets.

The good news is you can follow the original blog post but just use the new files for the global data and/or the US data.

JHU CSSE GitHub Repo/Folder with COVID-19 Data in Time Series Format – https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series

In the GitHub folder you will find three CSV (Comma-Separated Values) files for the global data (see image below) — one for Confirmed Cases, one for Recovered Cases and one for Deaths — plus new files for the US.

URLs for the 3 Updated files for Global Dataset:

Confirmed Cases – https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv
Recovered Cases –
https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv
Deaths –
https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv

Dates for Non-US Users

The dates in the files are stored in US format. For users outside of the US, please follow Jeff Meddemmen's comment from my original post:

“For countries who use a different date format that the source data (i.e in UK we use DD-MM-YYYY) during the Power Query steps when we are changing the Change Type of the columns if you change it just to ‘Date’ as directed then it results in errors. You need to first select Change Type > Locale and then select ‘Date’ and ‘English (United States)’. Then do this again to convert to another format. i.e. again select Change Type > Locale, select ‘Date’ and ‘English (United Kingdom)’ when it prompts if you want to replace the existing Change or add a new Step, choose to Add a new step. You need to repeat that for all the tables.”
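To see why the locale step matters, here is a small Python sketch (illustrative only, not part of the tutorial steps) showing how the same date string parses to two different dates depending on the assumed order:

```python
from datetime import datetime

# "3/4/20" in the JHU files means March 4, 2020 (US month-first order),
# but a day-first locale would read it as April 3, 2020.
us_reading = datetime.strptime("3/4/20", "%m/%d/%y")  # US: month first
uk_reading = datetime.strptime("3/4/20", "%d/%m/%y")  # UK: day first

assert us_reading == datetime(2020, 3, 4)
assert uk_reading == datetime(2020, 4, 3)
```

Converting with the US locale first, then re-converting to your own locale, avoids exactly this ambiguity.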

New Power BI File with more DAX including Doubling Times!

I have a more detailed Power BI file (it produced the image above) with DAX for normalizing the dates to the first confirmed case by country and normalizing the dates to the date of the first 100 cases.

You can find the detailed Power BI file in my GitHub here:

https://github.com/STATCowboy/Power-BI-Coronavirus-Report-Tutorial/blob/master/Covid-19%20via%20Power%20BI.pbix

Thanks

I hope you enjoyed learning how to use Power BI and how to build your own report for the Novel Coronavirus (COVID-19). If you want to learn more about Power BI, check out the official Power BI documentation – https://docs.microsoft.com/en-us/power-bi/. Also, follow Adam and Patrick on the Guy in a Cube YouTube channel – https://www.youtube.com/channel/UCFp1vaKzpfvoGai0vE5VJ0w. It is the ultimate source of Power BI knowledge.

I have posted a Power BI file that should look exactly like the one you just built if you followed my instructions! You can follow me on Twitter @STATCowboy or connect with me on LinkedIn at https://www.linkedin.com/in/jameyj/.

Finally, please be safe during this global pandemic. Wash your hands, keep a safe social distance, and limit your interactions with groups until the curve is flat! (You can see in the report you just built that it isn't flat for the rest of the world!)

Guidance from CDC – https://www.cdc.gov/coronavirus/2019-ncov/index.html

Code

GitHub Repo for Power BI Coronavirus Report Tutorial – https://github.com/STATCowboy/Power-BI-Coronavirus-Report-Tutorial

Learn Power BI and Build Your Own Coronavirus Report

 

Introduction

The Novel Coronavirus (COVID-19) emerged in the city of Wuhan, China, and has been declared a global pandemic by the World Health Organization (WHO). There have been many charts, infographics, dashboards and reports on the web showing the numbers of confirmed and recovered cases, as well as deaths, around the world.

I thought, “How could I build my own report/dashboard in Power BI”?

In this blog post I will show you how to use Power BI to build your own COVID-19 report!

 

Dataset

Well, that would first require me to find the dataset. Luckily the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE), supported by ESRI Living Atlas Team and the Johns Hopkins University Applied Physics Lab (JHU APL), has a GitHub Repository with curated daily datasets by Country/Region and Province/State for confirmed cases, recovered cases and deaths. The dataset is updated daily with the latest data.

JHU CSSE GitHub Repo/Folder with COVID-19 Data in Time Series Format – https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series

In the GitHub folder you will find three CSV (Comma Separated Values) files (see image below); one for Confirmed Cases, one for Recovered Cases and one for Deaths.


If you click on one of the files you will see the data. Also, you will see a button called “Raw”. If you click the “Raw” button you will get the URL you will need to import that dataset into Power BI.

URLs for the 3 files:

Confirmed Cases – https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv
Recovered Cases –
https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Recovered.csv
Deaths –
https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Deaths.csv

Each file has the data reported with the Province/State, Country/Region, Lat and Long (Province/State) and pivoted column with cumulative/running totals (notice the values increase by day) of cases/deaths for each day since reporting has started. We will import these three files into Power BI and wrangle the data into a format useful for reporting in Power BI.
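Because the daily columns are running totals, daily new cases are the differences between consecutive columns, not the column values themselves. A tiny Python sketch (illustrative sample data, not the real file) makes the shape concrete:

```python
import pandas as pd

# Tiny sample in the shape of the JHU time-series files: one row per
# location, one column per day, values are CUMULATIVE case counts.
df = pd.DataFrame({
    "Country/Region": ["Italy"],
    "1/22/20": [0], "1/23/20": [2], "1/24/20": [5],
})

# Daily NEW cases are the differences between consecutive day columns.
daily_new = df.loc[0, ["1/22/20", "1/23/20", "1/24/20"]].diff().fillna(0)

assert daily_new.tolist() == [0.0, 2.0, 3.0]
```

Keeping this cumulative shape in mind matters later when we pick an aggregation method for the visuals.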


 

Power BI Desktop

Now that we know where to get the data, we need to get the right tool to report on the data, Power BI! You can go to https://powerbi.microsoft.com/en-us/ and click the “Start free >” button to get Power BI Desktop (it’s free!) and start building amazing reports.

Then click “Download free >” on the next page to launch Microsoft Store to install the Power BI Desktop application

When the prompt appears notifying about opening the Microsoft Store click the “Open” button.

In the Microsoft Store, click "Install" on the Power BI Desktop page to install Power BI Desktop.

 

Power BI – Get and Transform Data with Power Query

 

After Power BI installs, find "Power BI Desktop" in the Start Menu and launch it. When prompted, sign up for a free account.

After you sign up, click "Get Data" on the splash screen. We will start by importing the Confirmed Cases file into Power BI.

NOTE: If the splash screen is no longer there and you are in the Power BI Desktop app, you can click the "Get Data" button in the Home ribbon instead.

In the Get Data dialog box type “web” in the search box at the top left, select “Web” and then click “Connect”.

In the “From Web” dialog paste in the URL (https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv) for the Confirmed Cases into the URL box and click “OK”.

In the next screen click “Transform Data” to open Power Query so we can wrangle some data!

The next screen to pop up is the Power Query Editor. Yes, like the one in Excel (a little better but mostly the same)! Here we can use the M language to transform and load our data into Power BI Desktop. We will walk through several steps to get the data into the format we want for reporting in Power BI.

If you look to the far right, you will see a section called “Applied Steps”. You can see Power BI has done two steps, “Source” and “Changed Type”. We are going to remove the “Changed Type” step it automatically added on the import of the CSV file. Click the “Red X” next to “Changed Type” to remove this step.

Now, we will promote the first row of the CSV file to be our column names. If you look at the first row of the data in Power Query you will see the column names. To promote the first row to be Headers of the table click the “Transform” menu, then click the “Use First Row as Headers” button in the ribbon. This will add two steps in the “Applied Steps” section: Promoted Headers and Changed Type.

When we promoted the headers Power Query automatically added a step to detect the data types. Click the “Red X” next to Changed Type to remove this step. We will do this later after we unpivot the data.

Should look like below now.

Now we are going to edit the raw M code to remove an option that Power Query put in the Source step (first one) so that as data is added (new columns) each day to the dataset we will pick up the new columns. To do this, in the “Home” menu click the “Advanced Editor” button.

In the “Advanced Editor” window scroll to the right and in the “Source = Csv.Document …” line remove the “Columns=XX,” part in the line (XX will be a number representing the number of columns in the file, in my example below it was 60) and click “Done”. We want to remove the hard-coded column count because as new data points are added as columns to the file each day, we will want to pick up those columns. If we leave the hard-coded “Columns=XX” we will not pick up these new days/columns.

It should look like the image below, between "Delimiter" and "Encoding", after you remove the section (be sure to remove the comma as well). When you click "Done" you will not notice any visible changes.
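The same pitfall exists outside of M. As a rough Python analogue (pandas standing in for Power Query here), pinning the columns at import time silently drops any day added later:

```python
import io
import pandas as pd

# A later version of the file with a new day's column appended.
csv_today = "Country,1/22/20,1/23/20\nItaly,0,2\n"

# Hard-coding the column list (like Columns=XX in the generated M code)
# silently drops the newly added day:
fixed = pd.read_csv(io.StringIO(csv_today), usecols=["Country", "1/22/20"])
assert "1/23/20" not in fixed.columns

# Reading without a fixed column list picks up new columns automatically:
flexible = pd.read_csv(io.StringIO(csv_today))
assert list(flexible.columns) == ["Country", "1/22/20", "1/23/20"]
```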

Let’s rename the table to Confirmed. To do this, in the Name box to the right change time_series_19-covid-Confirmed to Confirmed.

 

Next, we need to duplicate the table before we perform other steps so that we can create a dimension table to use later for better data modeling and report building in Power BI. To do this, right-click on the Confirmed query on the left and choose Duplicate.

Right click on the new Query it created called Confirmed (2) and choose Rename and name the query Header.

Should look like this.

Let's clean up our Header query to be our dimension table.

Hold down the shift key and select the four columns in Header: Province/State, Country/Region, Lat and Long. You should see the column headers for the four columns become yellow. Right-click on one of the four columns and choose Remove Other Columns. With the same four columns selected, right-click again and choose Remove Duplicates.


We are good with the Header table/query now.

 

Let’s go back to editing our Confirmed query/table. We can now remove the Lat and Long columns of our Confirmed table as we will use them from the new Header table we created. To do this, hold down the shift key and select the Lat and Long columns in the Confirmed table and right-click and choose Remove Columns.


Our Applied Steps should now look like this.

Now comes the fun part, the part that Power Query in Power BI is amazing at doing: un-pivoting data! We are going to un-pivot around the Province/State and Country/Region columns so the daily data goes from one column per day to one row per day, with a column for the date and a column for the value. To do this, hold down the shift key, select the Province/State and Country/Region columns, then right-click on one of the two selected columns and choose Unpivot Other Columns.


Now look at our Confirmed table and steps! We now have the data in a better format for reporting … but we have a few more things to do.
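If you have used pandas, "Unpivot Other Columns" is the same reshape as `melt`. A small sketch with made-up sample values:

```python
import pandas as pd

# Tiny sample in the pre-unpivot shape: one column per day.
wide = pd.DataFrame({
    "Province/State": ["Hubei"],
    "Country/Region": ["China"],
    "1/22/20": [444],
    "1/23/20": [444],
})

# "Unpivot Other Columns" keeps the selected columns as identifiers and
# turns every other column into Attribute/Value rows.
long = wide.melt(id_vars=["Province/State", "Country/Region"],
                 var_name="Attribute", value_name="Value")

assert list(long.columns) == ["Province/State", "Country/Region", "Attribute", "Value"]
assert len(long) == 2  # one row per day instead of one column per day
```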

So, let’s rename our new Attribute column Date. Right-click on Attribute column and click Rename… and rename the column Date.

Looks like this now.

Finally, let's change the data types of the Date and Value columns to Date and Whole Number, respectively. Right-click on the Date column and click Change Type -> Date. Right-click on the Value column and click Change Type -> Whole Number.

Our table should look like now!

 

Repeat these steps in Power Query (MINUS the steps to create the Header table!!) for the other two files – Recovered Cases and Deaths. Call the tables/queries Recovered and Deaths and be sure to use the correct URLs for each table.

To bring in the two new files you can click New Source -> Web at the top and follow the steps above (MINUS the steps to create the Header table!!).

 

Once done it should look like this!

Now let’s Close & Apply our work and get into the Power BI Desktop. Click the Close & Apply button at the top left to close Power Query and return to the Power BI Desktop.

You will see it bringing the tables into Power BI Desktop.

 

Click the Save button at the top left and save your work so far! You can call the file Power BI Coronavirus Report.

NOTE: Name your Power BI Report files carefully, i.e., give them names that are friendly, as they become the report name in the Power BI Service when you publish.

 

Power BI Desktop – DAX Data Wrangling

We now have our data in Power BI Desktop, but before we build a report we need to create a few more columns and tables using DAX. DAX is the Data Analysis Expressions language used in Power BI to create calculated columns, measures and tables.

We need to add a few columns as keys in our tables, hide a few columns from the reports and add a date table. Click the Data button at the top left side to start transforming data with DAX in Power BI Desktop. Notice the four tables we created in Power Query on the right.


We are going to add a column called CountryState that will be a combination of the Province/State and Country/Region columns. This will serve as the key from our data tables to the Header table. To add the column/key, right-click on the table and click New column.

In the formula bar at the top, enter the formulas below for each table, respectively, to create the key.

Should look like this for the Confirmed table. Click the Checkmark to apply and create the column.

Repeat for each table using the following DAX formulas:

 

Confirmed: CountryState = COMBINEVALUES("|", 'Confirmed'[Country/Region], 'Confirmed'[Province/State])

Deaths: CountryState = COMBINEVALUES("|", 'Deaths'[Country/Region], 'Deaths'[Province/State])

Header: CountryState = COMBINEVALUES("|", 'Header'[Country/Region], 'Header'[Province/State])

Recovered: CountryState = COMBINEVALUES("|", 'Recovered'[Country/Region], 'Recovered'[Province/State])
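If it helps to see the key outside of DAX, a rough pandas equivalent of COMBINEVALUES is plain delimited concatenation (illustrative only; DAX's version also handles blank inputs in a way that guarantees uniqueness):

```python
import pandas as pd

df = pd.DataFrame({
    "Country/Region": ["Australia", "Canada"],
    "Province/State": ["New South Wales", "Ontario"],
})

# COMBINEVALUES("|", a, b) joins the two columns with a "|" delimiter,
# producing one unambiguous key per Country/Region + Province/State pair.
df["CountryState"] = df["Country/Region"] + "|" + df["Province/State"]

assert df["CountryState"].tolist() == ["Australia|New South Wales", "Canada|Ontario"]
```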

 

We can now hide these four key columns, as we do not need them in the reports themselves; they exist only to create the relationships between the 3 data tables and the Header table.

In each table right-click on the CountryState column we just created and choose Hide in report view. Repeat for each table.

You will now see that the CountryState column is greyed-out now.

 

Repeat (Hide) for the Date columns in the 3 data tables: Confirmed, Deaths and Recovered.


 

Now we will use DAX to create a date/calendar table to use as our Date dimension table. To do this click the Table tools menu and then click the New table button.

In the formula bar enter the DAX code below from Adam Saxton to create a Calendar table to use as a Date Table dimension.

 

DAX for Date Table (from Adam Saxton, Twitter: @awsaxton, Web: https://guyinacube.com/)

Calendar =
VAR _calendar =
    CALENDAR ( MIN ( 'Confirmed'[Date] ), MAX ( 'Confirmed'[Date] ) )
RETURN
    ADDCOLUMNS (
        _calendar,
        "Year", YEAR ( [Date] ),
        "MonthNumber", MONTH ( [Date] ),
        "Month", FORMAT ( [Date], "mmmm" ),
        "Quarter", "QTR " & FORMAT ( [Date], "Q" ),
        "QuarterNumber", FORMAT ( [Date], "Q" ),
        "MonthYearNumber", FORMAT ( [Date], "yy mm" ),
        "Month Year", FORMAT ( [Date], "mmm yyyy" )
    )
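If you are more comfortable in Python, the same date table can be sketched with pandas (a short illustrative range; the DAX version spans MIN to MAX of the Confirmed dates):

```python
import pandas as pd

# Build the same columns the DAX Calendar table adds, over a short range.
dates = pd.date_range("2020-01-22", "2020-01-24", freq="D")
calendar = pd.DataFrame({
    "Date": dates,
    "Year": dates.year,
    "MonthNumber": dates.month,
    "Month": dates.strftime("%B"),
    "Quarter": ["QTR " + str(q) for q in dates.quarter],
    "Month Year": dates.strftime("%b %Y"),
})

assert calendar.loc[0, "Month"] == "January"
assert calendar.loc[0, "Quarter"] == "QTR 1"
assert calendar.loc[0, "Month Year"] == "Jan 2020"
```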

 

Should look like this for the Calendar table. Click the Checkmark to apply and create the table.

We now have a Calendar table!

Finally, we need to mark the Calendar table as a "date table". Select the Calendar table on the right, then in the Table tools menu select Mark as date table.

 

Then in the pop-up window select the Date column for the Date column drop-down and click Ok.

 

Power BI Desktop – Modeling

Now we need to setup the relationships for our tables. To do this we use the Model section of Power BI Desktop. Click the Model button at the top left side to start modeling our table relationships.

Our tables will show up in a dark grey area where we can drag them around to re-arrange them and create relationships. Drag the Header and Calendar tables into the center and place the Confirmed, Deaths and Recovered tables around them like below. NOTE: You will have to scroll around a little to move and re-arrange them.

Now that we have them arranged, we can build the relationships using our keys, Date and CountryState. Drag the Date field from the Calendar table to the Date field in each of the 3 data tables: Confirmed, Deaths and Recovered. Repeat with the CountryState column, dragging from the Header table to the 3 data tables. Below is an example of how to drag the columns.
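Conceptually, the Header-to-data-table relationship behaves like a one-to-many join on the key. A small pandas sketch (made-up sample values) of what Power BI does for us behind the scenes:

```python
import pandas as pd

header = pd.DataFrame({
    "CountryState": ["China|Hubei"],
    "Lat": [30.97], "Long": [112.27],
})
confirmed = pd.DataFrame({
    "CountryState": ["China|Hubei", "China|Hubei"],
    "Date": ["1/22/20", "1/23/20"],
    "Value": [444, 444],
})

# One Header row relates to many Confirmed rows: every Confirmed row can
# look up its Lat/Long through the CountryState key.
joined = confirmed.merge(header, on="CountryState", how="left")

assert len(joined) == 2
assert joined["Lat"].tolist() == [30.97, 30.97]
```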


When done, our model should look like below.

 

Let’s click the Save button to save our work again!

 

Power BI Desktop – Reports

Now we are ready to build a report. Click the Report button at the top left side to start building our report!

In the Report window let’s first setup our Theme. Click the View menu at top and then you will see all the available pre-built themes you can choose from (like PowerPoint!). Click the light grey arrow to the right and then click the Storm theme (see below).

Now let’s add a title to our report. Click the Insert menu and then click Text box.

Position it as below and give it a title, COVID-19 Global Report, and set the font to 24.

Next, add a map to the report so we can show the occurrences of coronavirus by country. Click the Map visual icon in the Visualizations pane to add it to the report (make sure nothing is selected on the report before clicking the Map visual).

Position it at the bottom left.

Drag the Country/Region column in the Header table to the Location box and the Legend box in the Fields area.

Add the Value column from the Confirmed table to the Size box.

Your map should look like this. There is one problem, though! Since our data contains running totals (see the Dataset section above), we get incorrect counts for the number of confirmed cases. Notice when we hover over China it shows over 3 million cases, which is not correct. The reason is that the default aggregation method in Power BI is SUM.

To fix this we need to change the aggregation method for the Value column in the Size field to be Maximum. To do this we click the little down arrow to the right of Value in the Size field.


Then we choose Maximum.



Now when we hover over China, we get the correct value.
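A quick Python sketch (made-up numbers) of why SUM is wrong and MAX is right for cumulative data:

```python
import pandas as pd

# Cumulative confirmed cases for one country over three days (long format).
confirmed = pd.DataFrame({
    "Country/Region": ["China", "China", "China"],
    "Value": [100, 150, 200],  # running totals, not daily counts
})

# SUM (Power BI's default aggregation) adds the running totals together
# and wildly overstates the case count:
assert confirmed["Value"].sum() == 450   # wrong "total cases"

# MAX returns the latest cumulative figure, the number we actually want:
assert confirmed["Value"].max() == 200   # correct
```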

OK, now we will add a line chart to show the spread of the coronavirus by country by day. Make sure none of the items in the report are selected and choose the Line Chart visualization.

Position it as shown below. Notice the red guide/snap lines that appear to help align it.

Drag the Date field from the Calendar table to Axis. Drag the Country/Region field from the Header table to Legend, and drag the Value field from the Confirmed table to Values.

Notice the Date field automatically has a date hierarchy created and used in the visual.

In this instance we do NOT want a date hierarchy. So, to remove it, click the grey down arrow next to Date in the Axis field and choose Date.

Bazinga! We have a nice line graph showing the spread of the virus by country!

Now add two more Line Charts below this one and position as below.

In the left one, add the same fields as the top Line Chart but replace the Value field in Values with the Value field from Recovered.
In the right one, add the same fields as the top Line Chart but replace the Value field in Values with the Value field from Deaths.

Your report should look like this below now.

 

Now add some Slicers to the report to slice and dice the data by country or date. Make sure nothing is selected and click the Slicer visualization twice to add two Slicers to the report.

Position the slicers as such.

Select the left Slicer and drag the Country/Region field into the Field box.

Your Slicer will now be populated with the countries and regions. Let's change this to a dropdown. Hover your mouse over the top right of the Slicer and click the very small light grey down arrow.

In the menu that displays select Dropdown.

Now in the Format section (the Paint Roller icon!) of the Slicer let’s change a few options.

In the Selection controls section, enable the Show "Select all" option to make it easier to select countries in our slicer.

Now select the right slicer and let's set up a Date slicer. Drag the Date field from Calendar into the Field box.

You now have a date slicer to play with the dates in your report. Slide the date slider around and see what happens!

Finally, let's add another Text box to give credit for the data source. Click the Insert menu, then click Text box; position it below the title, add the following text, and format as desired.

Data Source:
CSSE at Johns Hopkins University
https://github.com/CSSEGISandData/COVID-19

 

You can use the Paint Roller icon on the visuals to change colors, titles, etc.!

For example, let's change the Title text of the Line chart for Confirmed cases. Select the top Line chart, then select the Format (Paint Roller) icon and scroll to Title. Change the Title text to Confirmed Cases by Country.

Repeat for the other two Line charts using the titles, Recovered Cases by Country and Deaths by Country.

Last thing, there is a Page name at the bottom left. Change this to COVID-19 by Country.

Click the Save icon at the top left to save your report!

Your final report should look like this now!!! Congratulations!!! You did it!

 

 

Power BI Service

If you have a Power BI Pro license, you can use the Publish button at the top right to publish to the Power BI Service and share your report with your co-workers, your friends and even the world! You can also set up an automatic refresh of the data in the service, and a whole lot more. More info about the Power BI Service is here – https://docs.microsoft.com/en-us/power-bi/service-get-started.

 

Conclusion

I hope you enjoyed learning how to use Power BI and how to build your own report for the Novel Coronavirus (COVID-19). If you want to learn more about Power BI, check out the official Power BI documentation – https://docs.microsoft.com/en-us/power-bi/. Also, follow Adam and Patrick on the Guy in a Cube YouTube channel – https://www.youtube.com/channel/UCFp1vaKzpfvoGai0vE5VJ0w. It is the ultimate source of Power BI knowledge.

I have posted a Power BI file that should look exactly like the one you just built if you followed my instructions! You can follow me on Twitter @STATCowboy or connect with me on LinkedIn at https://www.linkedin.com/in/jameyj/.

Finally, please be safe during this global pandemic. Wash your hands, keep a safe social distance, and limit your interactions with groups until the curve is flat! (You can see in the report you just built that it isn't flat for the rest of the world!)

Guidance from CDC – https://www.cdc.gov/coronavirus/2019-ncov/index.html

 

Code

GitHub Repo for Power BI Coronavirus Report Tutorial – https://github.com/STATCowboy/Power-BI-Coronavirus-Report-Tutorial

Power BI dataflows – Enabling BI Professionals and Data Scientists

by Jamey Johnston, Sr. Data Scientist/Engineer, @STATCowboy


Why Power BI dataflows

Recently, I was reading an excellent article titled "Agile Digital Transformation for Data-Driven CEOs with Power BI" by Jen Stirrup (t|b) on why Power BI is a valuable tool for data-driven CEOs to change the culture of their organizations. One of my favorite features to come to Power BI is dataflows, and I believe it will improve data management and usage in organizations. This article will discuss how dataflows can be used by BI professionals, data engineers and data scientists to improve their data management and build valuable insights for their companies.

I regularly work with data scientists who spend a lot of time cobbling together data from different data sources to build models in Python using packages like scikit-learn. These data scientists will often dump the data into CSV files and read them in as pandas DataFrames or NumPy arrays to start the data science algorithm magic. This is messy! The data scientists often ask for new dumps of the data or lose the data dumps they have. I believe Power BI dataflows can really help make data management better for BI professionals, data scientists and data engineers.

What is Power BI dataflows

Power BI dataflows enable organizations to wrangle data spread across multiple data sources, in the cloud or on-premises, into Azure Data Lake Storage gen 2 in the Common Data Model. Dataflows are created and managed in workspaces in the Power BI service via an online version of the well-known Power Query interface. Data in Power BI dataflows can be leveraged by data scientists and data engineers in Azure services like Machine Learning (ML), Databricks and Azure Synapse Analytics because the data is stored in the Common Data Model in Azure Data Lake Storage gen 2. Citizen data scientists can take advantage of Microsoft Cognitive Services or Azure Automated ML in dataflows. Microsoft also has Python packages, like the Azure Data Lake service client library for Python, that connect to Azure Data Lake Storage gen 2 and can be used to access Power BI dataflows data and build models with packages like scikit-learn. Power BI reports can also connect to Power BI dataflows to build reports and dashboards.

Power BI dataflows Demo

Let's look at Power BI dataflows. I have uploaded a CSV file of the Ames Housing Data from Kaggle to a repository on my GitHub, https://github.com/STATCowboy/pbidataflowstalk, for Power BI dataflows.

To use Power BI dataflows you will need a Pro license of Power BI, and to use some of the advanced features, like Cognitive Services or Automated ML, you will need Premium capacity. Your Power BI tenant admin and Azure admin will have to enable dataflows and connect them to an Azure Data Lake Storage gen 2 account to use Power BI dataflows with the Azure Data Lake integration.

Once Power BI dataflows is set up, connect to your Power BI Service, create a workspace called "Ames Housing" and enable dataflows (under Advanced at the bottom) for the workspace.
Note that the Power BI service will automatically manage the ACLs (permissions) in Azure Data Lake storage using the users defined in the workspace.


Now let’s import the CSV file of the Ames Housing data using dataflows. In your Ames Housing workspace click the “+Create” at the top and choose “Dataflow”.



Click the “Add new entities” button on the left panel on the page to open Power Query online to add the data.



In the Power Query Online screen click “Text/CSV” button to add the Ames Housing data into dataflows.

In the File path or URL textbox enter https://raw.githubusercontent.com/STATCowboy/pbidataflowstalk/master/AmesHousing.csv and then click the yellow "Next" button at the bottom right. Notice that you can connect to your on-premises data gateway if you have on-premises data.


In the Power Query Online editor remove all the columns except, “Gr Living Area” and “SalesPrice”, name the table “AmesHousingData” and click “Save and close”.


The next screen that pops up is where we name our dataflow. Name your dataflow "Housing Data" and click "Save". When it finishes saving, click the "Refresh Now" button to populate your dataflow with data. Note that you can set up scheduled refreshes in dataflows, including incremental refreshes.



You should have data in your dataflow stored in your Azure data lake now!

Power BI Desktop

Now that we have data in our dataflow let’s connect to it using Power BI desktop.

Open Power BI Desktop and click the “Get Data” button at the top or on the startup screen.

Select “Power Platform” on the left, then “Power BI dataflows” on the right and click “Connect”.

If this is the first time connecting to dataflows you will need to click the “Sign In” button and login to your Power BI Service. Once you login click “Connect”.


Once you are connected, navigate to the "AmesHousingData" table, select it, and click "Transform Data" (or "Load" if you prefer). You will now have your Power BI dataflows data in Power BI Desktop to work with!


CDM Folders and Azure Data Lake Storage gen 2 (ADLS gen 2)

As mentioned earlier you can connect your Power BI dataflows to your own Azure Data Lake Storage and use it for data science or data engineering. The great thing that Microsoft did with Power BI dataflows is deploy the data into the Common Data Model in folders in Azure Data Lake Storage gen 2.

CDM Folders conform to “specific, well-defined, and standardized metadata structures and self-describing data. These folders facilitate metadata discovery and interoperability between data producers and data consumers”. (https://docs.microsoft.com/en-us/common-data-model/data-lake)

These CDM folders have a metadata file, model.json, which follows the Common Data Model metadata format. The file contains information about the entities (tables) and attributes, and links to the underlying data files in the CDM folder. The data files, currently CSV (support for other formats is in the works), are stored in folders for the entities.
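To make the entity-to-data-file link concrete, here is a heavily trimmed, hypothetical model.json in the CDM metadata shape (real files carry more fields, such as attributes and annotations, and the storage URL below is a made-up placeholder) and a few lines of Python that walk it:

```python
import json

# Hypothetical, trimmed-down model.json: entity names plus partition
# locations pointing at the underlying CSV files in the data lake.
model_json = json.loads("""
{
  "name": "Housing Data",
  "entities": [
    {"name": "AmesHousingData",
     "partitions": [{"location": "https://contoso.dfs.core.windows.net/powerbi/AmesHousingData/part00000.csv"}]}
  ]
}
""")

# A consumer discovers an entity's underlying data files via its partitions:
locations = {entity["name"]: [p["location"] for p in entity.get("partitions", [])]
             for entity in model_json["entities"]}

assert list(locations) == ["AmesHousingData"]
assert locations["AmesHousingData"][0].endswith(".csv")
```

This is what makes the folders self-describing: a data consumer only needs model.json to find both the schema and the data.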

If you look at the diagram below it shows the layout of the “Ames Housing” workspace and “Housing Data” CDM folder after it was written to the ADLS gen 2 storage. On the left is the view from Storage Explorer and on the right is the diagram of a CDM folder from the Microsoft Power BI dataflows documentation.

Storage Explorer
https://azure.microsoft.com/en-us/features/storage-explorer/

Python Demo to read CDM Folder for Data Science

Now that you have your data in ADLS gen 2, you can begin to do the data magic! The azure-storage-file-datalake package was recently released to access hierarchical-namespace-enabled (HNS) ADLS gen 2 accounts. The Python demo referenced below uses this new package to access the CDM folders written by Power BI dataflows. It also uses the CDM helper file located here.

To run the demo, follow the instructions here to set up a Conda environment using the YAML file referenced below. Edit the Credentials.py file and update the accountName and credential variables with your values. Then run the AMES CDM Access Demo – Azure ML Python.py file to pull the data for the AmesHousingData entity from the Ames Housing/Housing Data dataflow and run a simple OLS linear regression to predict SalesPrice from Gr Liv Area. You can run this in Visual Studio Code, Azure ML or any tool that can run Python, like Jupyter Notebooks.
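The regression at the end of the demo is just ordinary least squares with one predictor. A self-contained sketch on toy stand-in data (the real script reads the dataflow entity from the lake instead) looks like:

```python
import numpy as np

# Toy stand-in for the AmesHousingData entity: living area (sq ft) vs price.
area = np.array([1000.0, 1500.0, 2000.0, 2500.0])
price = np.array([100000.0, 150000.0, 200000.0, 250000.0])

# Simple OLS fit: price = slope * area + intercept.
slope, intercept = np.polyfit(area, price, deg=1)
predicted = slope * 1800.0 + intercept

assert abs(slope - 100.0) < 1e-6
assert abs(predicted - 180000.0) < 1e-2
```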

YAML file for Python Env
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file


Python Demo
https://github.com/STATCowboy/pbidataflowstalk/tree/master/Azure%20ML%20Python%20Example


CDM Helper
https://github.com/Azure-Samples/cdm-azure-data-services-integration/tree/master/AzureMachineLearning

Databricks Demo to read CDM Folder for Data Science

You can also access the data in powerful tools like Azure Databricks. Follow the instructions in this GitHub repo by Ted Malone (t|b) to configure your Databricks workspace to access your Power BI dataflows in CDM folders.

Databricks Setup
https://github.com/temalo/spark-cdm/blob/master/docs/Getting%20Started.md

Sample code to connect to the Ames Housing Data you loaded in your Power BI dataflows in the demo above is in my GitHub.

Sample Code
https://github.com/STATCowboy/pbidataflowstalk/tree/master/Databricks%20Example

Conclusion

I hope this article helped to introduce and showcase some of the great features of Power BI dataflows. To learn more about Power BI dataflows check out the official Microsoft documentation here. Also, I recommend following Matthew Roche (t|b) who works for Microsoft and evangelizes Power BI dataflows. Finally, follow me on Twitter @STATCowboy, connect up with me on LinkedIn at https://www.linkedin.com/in/jameyj/ or check out my blog, https://www.statcowboy.com to keep in touch.

GitHub Repo of Code
https://github.com/STATCowboy/pbidataflowstalk

Row Level Security in Power BI using Parent/Child Hierarchies

 


Jamey Johnston (@STATCowboy)

 

Overview

I wrote a blog post on using Parent/Child Hierarchies to enforce Row Level Security (RLS) in SQL Server 2016+. The demo in that article used a parent/child hierarchy for an organization combined with a flattened asset hierarchy to enforce strict row level security on a fictional Oil & Gas dataset for a fictional Oil & Gas organization. I recommend reading the “Overview of the Demo” and the “Asset and Organization Hierarchies and RLS” sections in that blog post before proceeding with this blog post. It will give you a good understanding of what hierarchies entail and how they are used in this demo/blog post.

In this blog post we will focus only on the organizational hierarchy and how it can be used to enforce RLS in Power BI. Think about creating an HR report that automatically filters to the person logged into the report and all direct reports under that person. In a follow-up blog post we will bring in the asset hierarchy and combine the organizational and asset hierarchies to enforce a more complex RLS scheme, as used in the MSSQL demo linked above.

GitHub repo of the demo code and Power BI file.

Scripts and Explanation

There are 2 scripts to run to install the entire demo (just run them in the order they are numbered, 1 – 2). They are located in the oilgaspbidemo-SQL-Code folder in the GitHub repo. If you have used the scripts from the RLS demo in SQL Server, these are similar but create a different database (oilgaspbidemo) and use just a subset of the tables:

1 – Oil&Gas PBI Demo – BuildTables.sql

The script will create the database, oilgaspbidemo, and the tables needed for the demo.

The Tables are as such:

  • ASSET_HIERARCHY – Table contains the asset hierarchy and is used to build the demo data for the asset hierarchy in the Well_Master table.
  • DATES – Generic table to hold a date dimension for use later in an Excel Power Pivot model.
  • SEC_ASSET_MAP – Table contains the entries mapping the organization units to the asset hierarchy levels for access to the data. The table would be managed by the security team for the application. Users and their subordinates are denied access to data via an entry of ou, ‘NONE’, ‘NONE’ or granted all access via an entry of ou, ‘ALL’, ‘ALL’.
  • SEC_ORG_USER_BASE – Table contains the employees, including the employee-to-manager parent/child relationship used to build an organization hierarchy, and the organizational unit id for mapping to asset hierarchy levels for security. This table would likely be fed from an HR system. It also demonstrates Always Encrypted.
  • SEC_USER_EXCEPTIONS – Table contains entries to allow for overrides of the organization hierarchy based model. Any employee entered here will use the permissions defined in this table instead of those inherited from the organizational hierarchy.
  • WELL_DAILY_PROD – Contains COMPLETELY made-up and randomly generated daily well production data for Oil, Gas and NGL. Primary key is WELL_ID and RLS is achieved by using the asset hierarchy in the WELL_MASTER table to filter the WELL_IDs. This is a Clustered ColumnStore Indexed table.
  • WELL_MASTER – Contains COMPLETELY made-up and randomly generated master well data including the made up asset hierarchy. This is the main business table used for RLS for ALL well tables.
  • WELL_REASON_CODE – Contains COMPLETELY made-up and randomly generated daily well downtime data for Oil, Gas and NGL. Primary key is WELL_ID and RLS is achieved by using the asset hierarchy in the WELL_MASTER table to filter the WELL_IDs.

 

2 – Oil&Gas PBI Demo – LoadTables.sql

 

This script loads and generates the demo data, including the user and security tables, the hierarchy, and the well data. Two parameters near the top control the amount of data to load: @wellcount specifies the number of wells to generate, and @drillyearstart specifies the first possible year to use for a well. The start date for each well is randomly selected between @drillyearstart and the year the script is run.
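A rough Python re-implementation of how a well start date gets picked (a sketch of the idea, not the T-SQL itself): choose a random day between January 1 of @drillyearstart and today.

```python
import random
from datetime import date, timedelta

def random_well_start(drillyearstart, seed=None):
    """Pick a random well start date between Jan 1 of drillyearstart
    and today, mirroring the @drillyearstart parameter in the script."""
    rng = random.Random(seed)
    start = date(drillyearstart, 1, 1)
    span = (date.today() - start).days
    return start + timedelta(days=rng.randrange(span + 1))

d = random_well_start(2005, seed=42)
print(d)
```

Running the load script with a larger @wellcount simply repeats this kind of draw once per generated well.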

There is also a Power BI file in the GitHub, Oil and Gas Power BI RLS Demo.pbix, to see the setup of the roles, DAX, tables and demo the hierarchies.

 

RLS in Power BI

So how do we accomplish Row Level Security in Power BI using a parent/child organizational hierarchy? First, we need to know the basics of RLS in Power BI. Let’s start with capturing the username of the currently connected user.

USERNAME() and USERPRINCIPALNAME() in DAX

The two DAX functions, USERNAME() and USERPRINCIPALNAME(), are the key to RLS based on organizational hierarchies. They are how we determine who is logged into the PowerBI.com service so we can use that information (the login name) to apply row level security. The USERNAME() function returns the domain name and username for the currently connected user as DOMAIN\username, whereas the USERPRINCIPALNAME() function returns it as username@domain.com.
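As a rough illustration (plain Python, not DAX), here are the two return formats and how a bare login name can be pulled out of each; the domain and user below are just the demo's example values:

```python
def bare_login(name):
    """Extract the bare login from either DOMAIN\\username
    (USERNAME-style) or username@domain.com (USERPRINCIPALNAME-style)."""
    if "\\" in name:
        return name.split("\\", 1)[1]
    return name.split("@", 1)[0]

print(bare_login("POWEROILGAS\\mcox"))        # USERNAME() style -> mcox
print(bare_login("mcox@poweroilandgas.com"))  # USERPRINCIPALNAME() style -> mcox
```

We will use exactly this kind of string surgery, written in DAX, in the role filter later in this post.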

A good article from Adam Saxton, Guy in a Cube (GH|B|T|YT), on USERNAME() and RLS is https://powerbi.microsoft.com/en-us/blog/using-username-in-dax-with-row-level-security/. It might be worth a read before continuing with this article.

 

RLS Roles

Next, we need to understand Roles in Power BI with RLS. Take a look at this article from Adam Saxton – https://docs.microsoft.com/en-us/power-bi/service-admin-rls – and then head back to this article …

Now that we understand the RLS basics in Power BI, let's see how to implement RLS using a parent/child relationship in an organizational hierarchy.

 

Organizational Hierarchy

So, look at the following organizational hierarchy from the demo code described above. Let's look at Joachim Hapitash in the organizational hierarchy. Under him is Roger Wagner, and under Roger are Tracey and Sean. Now let's say that when Roger Wagner logs into the Power BI HR report, we want to use the hierarchy to show Roger only his record and the employees under him in the hierarchy. How do you accomplish this? Keep reading and I will show you!

 

 

Parent/Child Hierarchy in SQL Server

Now let's examine the data in the SEC_ORG_USER_BASE table created in the demo. The table uses a unary relationship to relate each employee to his/her manager. Look below and you will see that the CEO, Dave Campos, has NULL for MGRID because he has no manager. Joachim Hapitash, however, has 1001 for MGRID because Dave Campos (EMPLID = 1001) is his manager.
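The same unary (employee-to-manager) relationship can be sketched in a few lines of Python; the IDs and the last two names below are hypothetical stand-ins for rows of SEC_ORG_USER_BASE:

```python
# EMPLID -> (NAME, MGRID); MGRID None marks the CEO, as in the table
employees = {
    1001: ("Dave Campos", None),
    1002: ("Joachim Hapitash", 1001),
    1003: ("Roger Wagner", 1002),
    1004: ("Tracey", 1003),
    1005: ("Sean", 1003),
}

def reports_under(emplid):
    """Return emplid plus every employee below it in the hierarchy."""
    found = [emplid]
    for child, (_, mgrid) in employees.items():
        if mgrid == emplid:
            found.extend(reports_under(child))
    return found

names = sorted(employees[e][0] for e in reports_under(1003))
print(names)  # -> ['Roger Wagner', 'Sean', 'Tracey']
```

This walk "down" the tree is exactly what the recursive CTE below does in T-SQL, one level per recursion step.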

 

 

Now let’s use a recursive CTE to see how to relate the employee and managers together. If you run the following query against the SEC_ORG_USER_BASE table, you will get Roger and all the employees under Roger in the hierarchy.

oilgaspbidemo-SQL-Code\userhierarchyquery.sql

WITH DirectReports
(MGRID, EMPLID, NAME, MGR, ORG_UNIT_ID, ORG_UNIT_NAME, Level, ORG_LINEAGE, ORG_NAME_LINEAGE, NAME_LINEAGE)
AS
(
SELECT e.MGRID, e.EMPLID, e.NAME, cast(null as varchar(50)) as MGR,
       e.ORG_UNIT_ID, e.ORG_UNIT_NAME,
       0 AS Level,
       cast(e.ORG_UNIT_ID as varchar(max)) as ORG_LINEAGE,
       cast(e.ORG_UNIT_NAME as varchar(max)) as ORG_NAME_LINEAGE,
       cast(e.NAME as varchar(max)) as NAME_LINEAGE
FROM dbo.SEC_ORG_USER_BASE AS e
WHERE MGRID IS NULL
UNION ALL
SELECT e.MGRID, e.EMPLID, e.NAME, d.NAME, e.ORG_UNIT_ID, e.ORG_UNIT_NAME,
       Level + 1,
       case when e.ORG_UNIT_ID <> d.ORG_UNIT_ID
            then cast(d.ORG_LINEAGE + '|' + cast(e.ORG_UNIT_ID as varchar(max)) as varchar(max))
            else cast(d.ORG_LINEAGE as varchar(max)) end,
       case when e.ORG_UNIT_NAME <> d.ORG_UNIT_NAME
            then cast(d.ORG_NAME_LINEAGE + '|' + e.ORG_UNIT_NAME as varchar(max))
            else cast(d.ORG_NAME_LINEAGE as varchar(max)) end,
       case when e.NAME <> d.NAME
            then cast(d.NAME_LINEAGE + '|' + e.NAME as varchar(max))
            else cast(d.NAME_LINEAGE as varchar(max)) end
FROM dbo.SEC_ORG_USER_BASE AS e
INNER JOIN DirectReports AS d
ON e.MGRID = d.EMPLID
)
SELECT MGRID, EMPLID, NAME, MGR, ORG_UNIT_ID, ORG_UNIT_NAME, Level, ORG_LINEAGE, ORG_NAME_LINEAGE, NAME_LINEAGE
FROM DirectReports
WHERE NAME_LINEAGE like '%Roger Wagner%';
GO

The results are as such:

 

Power BI, RLS and Organizational Hierarchies

Next, we add the SEC_ORG_USER_BASE table into Power BI and add some calculated columns (Path, Level 1 through Level 6, Level 1 Name through Level 6 Name, and HierarchyLevel) and one measure (MaxNumLevels). In this demo we will only look at the SEC_ORG_USER_BASE table. In the next blog article, we will look at all the tables in the demo!

 

 

Below is the DAX for the calculated columns and measure:

  • Path = PATH(SEC_ORG_USER_BASE[EMPLID],SEC_ORG_USER_BASE[MGRID])
  • Level 1 = PATHITEM(SEC_ORG_USER_BASE[Path],1)
  • Level 2 = PATHITEM(SEC_ORG_USER_BASE[Path],2)
  • Level 3 = PATHITEM(SEC_ORG_USER_BASE[Path],3)
  • Level 4 = PATHITEM(SEC_ORG_USER_BASE[Path],4)
  • Level 5 = PATHITEM(SEC_ORG_USER_BASE[Path],5)
  • Level 6 = PATHITEM(SEC_ORG_USER_BASE[Path],6)
  • Level 1 Name = LOOKUPVALUE(SEC_ORG_USER_BASE[NAME], SEC_ORG_USER_BASE[EMPLID], VALUE(PATHITEM(SEC_ORG_USER_BASE[Path],1)))
  • Level 2 Name = LOOKUPVALUE(SEC_ORG_USER_BASE[NAME], SEC_ORG_USER_BASE[EMPLID], VALUE(PATHITEM(SEC_ORG_USER_BASE[Path],2)))
  • Level 3 Name = LOOKUPVALUE(SEC_ORG_USER_BASE[NAME], SEC_ORG_USER_BASE[EMPLID], VALUE(PATHITEM(SEC_ORG_USER_BASE[Path],3)))
  • Level 4 Name = LOOKUPVALUE(SEC_ORG_USER_BASE[NAME], SEC_ORG_USER_BASE[EMPLID], VALUE(PATHITEM(SEC_ORG_USER_BASE[Path],4)))
  • Level 5 Name = LOOKUPVALUE(SEC_ORG_USER_BASE[NAME], SEC_ORG_USER_BASE[EMPLID], VALUE(PATHITEM(SEC_ORG_USER_BASE[Path],5)))
  • Level 6 Name = LOOKUPVALUE(SEC_ORG_USER_BASE[NAME], SEC_ORG_USER_BASE[EMPLID], VALUE(PATHITEM(SEC_ORG_USER_BASE[Path],6)))
  • HierarchyLevel = PATHLENGTH(PATH(SEC_ORG_USER_BASE[EMPLID], SEC_ORG_USER_BASE[MGRID]))
  • MaxNumLevels = MAX(SEC_ORG_USER_BASE[HierarchyLevel])

The Path column gives a pipe (“|”) separated list of the hierarchy at that node.
The Level X columns give the EMPLID of the employee at level X in the hierarchy. You will use these columns to build a hierarchy for the report.
The Level X Name columns give the NAME of the employee at level X in the hierarchy. You will use these columns to build a hierarchy for the report.
The HierarchyLevel column gives the depth of the employee's organizational level.
The MaxNumLevels measure calculates the maximum number of levels in the hierarchy and is used to determine how many Level X columns to create. With no employee filters applied, it will show “6” in this demo if you add it to a table visual.

These DAX statements use the Parent and Child DAX functions described here – https://msdn.microsoft.com/en-us/library/mt150102.aspx.
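A quick way to see what the PATH / PATHITEM / PATHLENGTH trio produces is to emulate them in Python (hypothetical rows, same parent/child shape as SEC_ORG_USER_BASE):

```python
managers = {1001: None, 1002: 1001, 1003: 1002, 1004: 1003}  # EMPLID -> MGRID

def path(emplid):
    """Emulates DAX PATH: pipe-separated lineage from the root down."""
    chain = []
    while emplid is not None:
        chain.append(str(emplid))
        emplid = managers[emplid]
    return "|".join(reversed(chain))

p = path(1004)
print(p)                  # the Path column for EMPLID 1004 -> 1001|1002|1003|1004
print(p.split("|")[1])    # PATHITEM(..., 2) -> 1002
print(len(p.split("|")))  # PATHLENGTH, i.e. HierarchyLevel -> 4
```

The Level X columns are just PATHITEM applied at positions 1 through 6, and the Level X Name columns look each of those EMPLIDs back up to a NAME with LOOKUPVALUE.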

 

Now let's set up a hierarchy in the Power BI model so we can navigate the organizational hierarchy in visuals. This is accomplished by right-clicking on “Level 1” in the Fields section on the far right and choosing “New hierarchy”.

 

This will create a new hierarchy called “Level 1 Hierarchy”. Let’s rename it to “Org Hierarchy” by right-clicking (or clicking the ellipses – three dots – to the right of the name) and choosing “Rename”.


 

To add the remaining levels to the hierarchy, starting with Level 2 and ending with Level 6, right-click each level and select “Add to Hierarchy -> Org Hierarchy” until they are all in the hierarchy.

 


 

We can hide “Level X” columns now that we have a hierarchy defined. Right-click on each “Level X” field and choose “Hide”.

 

We will repeat the above for the “Level X Name” columns and call the hierarchy “Org Hierarchy Names”.

 

Now let’s setup the Row Level Security. To do this we will setup a role called “Org Security”. Under the “Modeling” menu choose “Manage Roles”.

 

When the Manage roles dialog pops up, create a new role called “Org Security” for the table “SEC_ORG_USER_BASE”, use the following DAX expression as the Table filter, and click “Save”.

PATHCONTAINS(PATH(SEC_ORG_USER_BASE[EMPLID], SEC_ORG_USER_BASE[MGRID]),
LOOKUPVALUE(SEC_ORG_USER_BASE[EMPLID], SEC_ORG_USER_BASE[USERID],
PATHITEM(SUBSTITUTE(USERPRINCIPALNAME(), "@", "|"), 1)))

The DAX expression is doing the following:

  • Uses the PATHCONTAINS DAX function to search the Parent/Child hierarchy (by using the PATH DAX function) to find the employee record and all employee records under that employee.
  • Uses the LOOKUPVALUE DAX function to convert the logged in USERID to the EMPLID.
  • Uses the PATHITEM(SUBSTITUTE(USERPRINCIPALNAME(), "@", "|"), 1) DAX expression to convert the USERPRINCIPALNAME() (e.g. mcox@poweroilandgas.com) to the USERID (e.g. mcox) and feed the LOOKUPVALUE DAX function to get the EMPLID to feed the PATHCONTAINS DAX function.
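Putting the three steps together, here is a plain-Python emulation of what the role filter does (the table contents are hypothetical; in Power BI the DAX filter is evaluated row by row):

```python
# Hypothetical SEC_ORG_USER_BASE rows: (EMPLID, USERID, Path)
rows = [
    (1001, "dcampos", "1001"),
    (1003, "mcox",    "1001|1002|1003"),
    (1004, "twhite",  "1001|1002|1003|1004"),
]

def visible_rows(user_principal_name):
    # PATHITEM(SUBSTITUTE(UPN, "@", "|"), 1): take the part before the @
    userid = user_principal_name.split("@", 1)[0]
    # LOOKUPVALUE: map the USERID to an EMPLID
    emplid = next(e for e, u, _ in rows if u == userid)
    # PATHCONTAINS: keep rows whose lineage includes that EMPLID
    return [r for r in rows if str(emplid) in r[2].split("|")]

vis = visible_rows("mcox@poweroilandgas.com")
print([r[0] for r in vis])  # -> [1003, 1004]
```

Logged in as mcox (EMPLID 1003), only 1003's row and the rows beneath it survive the filter; logged in as the CEO, every row survives.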

 

Now let’s test the security! There is a “Matrix” visual added to the PBIX file in the downloads. It has the “Org Hierarchy Names” added to it as well as a column with counts by “EMPLID”. This will show us the employee counts at each level of the hierarchy in the organization.

 

Now let’s use the “View as Roles” to test! Click the “View as Roles” button under the Modeling menu.

 

 

Then enter mcox@poweroil-n-gas.com for “Other user” and check the “Other user” box. Also, check the “Org Security” box to enable the “Org Security” role.

 

 

Now check the Matrix visual! It should be filtered to only “mcox” and employees under “mcox”. Notice how we only see 119 employees in the dataset now and NOT 157!

Now we need to add an HR role to allow HR personnel to see all employees, not just those in their organizational units. This is easy to do: just add a role called “HR Security” and leave the filter blank.

Now when you publish to the PowerBI.com service, you would assign the HR personnel to the “HR Security” role and all other personnel to the “Org Security” role.

Take a look again at the article from Adam Saxton – https://docs.microsoft.com/en-us/power-bi/service-admin-rls#manage-security-on-your-model – to see how to assign the users/groups to the Roles in the Power BI service and to test it in the Power BI service.

Download Code!