Learn Power BI and Build Your Own Coronavirus Report

Introduction

The Novel Coronavirus (COVID-19) emerged in the city of Wuhan, China, and has been declared a global pandemic by the World Health Organization (WHO). There have been many charts, infographics, dashboards and reports on the web showing the numbers of confirmed and recovered cases as well as deaths around the world.

I thought, “How could I build my own report/dashboard in Power BI?”

In this blog post I will show you how to build your own COVID-19 report in Power BI!

 

Dataset

Well, that would first require me to find the dataset. Luckily the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE), supported by ESRI Living Atlas Team and the Johns Hopkins University Applied Physics Lab (JHU APL), has a GitHub Repository with curated daily datasets by Country/Region and Province/State for confirmed cases, recovered cases and deaths. The dataset is updated daily with the latest data.

JHU CSSE GitHub Repo/Folder with COVID-19 Data in Time Series Format – https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series

In the GitHub folder you will find three CSV (Comma Separated Values) files (see image below): one for Confirmed Cases, one for Recovered Cases and one for Deaths.


If you click on one of the files you will see the data. You will also see a button called “Raw”. If you click the “Raw” button you will get the URL you need to import that dataset into Power BI.

URLs for the 3 files:

Confirmed Cases – https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv
Recovered Cases –
https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Recovered.csv
Deaths –
https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Deaths.csv

Each file has the data reported with the Province/State, Country/Region, Lat and Long (of the Province/State) and a pivoted column per day holding the cumulative/running total (notice the values increase by day) of cases/deaths since reporting started. We will import these three files into Power BI and wrangle the data into a format useful for reporting in Power BI.
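To give you an idea of the layout, the first few columns and rows of the Confirmed Cases file look roughly like this (the numbers here are only illustrative — the real file has one column per day and grows daily):

```
Province/State,Country/Region,Lat,Long,1/22/20,1/23/20,1/24/20,...
Anhui,Mainland China,31.8257,117.2264,1,9,15,...
Beijing,Mainland China,40.1824,116.4142,14,22,36,...
```

Notice each day's number includes all prior days — that cumulative shape is why we will need to unpivot and aggregate carefully later.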


 

Power BI Desktop

Now that we know where to get the data, we need to get the right tool to report on the data, Power BI! You can go to https://powerbi.microsoft.com/en-us/ and click the “Start free >” button to get Power BI Desktop (it’s free!) and start building amazing reports.

Then click “Download free >” on the next page to launch the Microsoft Store and install the Power BI Desktop application.

When the prompt appears notifying about opening the Microsoft Store click the “Open” button.

In the Microsoft Store, click “Install” on the Power BI Desktop page to install Power BI Desktop.

 

Power BI – Get and Transform Data with Power Query

 

After Power BI installs, find “Power BI Desktop” in the Start Menu and launch it. When prompted, sign up for a free account.

After you sign up, click “Get Data” on the splash screen. We will start by importing the Confirmed Cases file into Power BI.

NOTE: If the splash screen is no longer showing and you are in the Power BI Desktop app, click “Get Data” on the Home ribbon instead.

In the Get Data dialog box type “web” in the search box at the top left, select “Web” and then click “Connect”.

In the “From Web” dialog paste in the URL (https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv) for the Confirmed Cases into the URL box and click “OK”.

In the next screen click “Transform Data” to open Power Query so we can wrangle some data!

The next screen to pop up is the Power Query Editor. Yes, like the one in Excel (a little better but mostly the same)! Here we can use the M language to transform and load our data into Power BI Desktop. We will perform a few steps to get the data into the format we want to report on in Power BI.
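If you have not seen M before, every query is a let expression: a list of named steps, with the step named after in returned as the result. A minimal sketch (the URL here is just a placeholder, not one of our files):

```
let
    // Each step is named and usually builds on the previous one
    Source = Csv.Document(Web.Contents("https://example.com/data.csv"), [Delimiter=","]),
    #"Promoted Headers" = Table.PromoteHeaders(Source)
in
    #"Promoted Headers"
```

Every button you click in the Power Query ribbon just appends another step like these to the let expression.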

If you look to the far right, you will see a section called “Applied Steps”. You can see Power BI has done two steps, “Source” and “Changed Type”. We are going to remove the “Changed Type” step it automatically added on the import of the CSV file. Click the “Red X” next to “Changed Type” to remove this step.

Now, we will promote the first row of the CSV file to be our column names. If you look at the first row of the data in Power Query you will see the column names. To promote the first row to be Headers of the table click the “Transform” menu, then click the “Use First Row as Headers” button in the ribbon. This will add two steps in the “Applied Steps” section: Promoted Headers and Changed Type.

When we promoted the headers Power Query automatically added a step to detect the data types. Click the “Red X” next to Changed Type to remove this step. We will do this later after we unpivot the data.

Should look like below now.

Now we are going to edit the raw M code to remove an option Power Query put in the Source step (the first one) so that, as new columns of data are added to the dataset each day, we will pick them up. To do this, in the “Home” menu click the “Advanced Editor” button.

In the “Advanced Editor” window scroll to the right and in the “Source = Csv.Document …” line remove the “Columns=XX,” part in the line (XX will be a number representing the number of columns in the file, in my example below it was 60) and click “Done”. We want to remove the hard-coded column count because as new data points are added as columns to the file each day, we will want to pick up those columns. If we leave the hard-coded “Columns=XX” we will not pick up these new days/columns.

Should look like this below between “Delimiter” and “Encoding” after you remove the section (be sure to get the comma). When you click “Done” you will not notice any changes.
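For reference, here is a sketch of the Source line before and after the edit (your generated options record may differ slightly; the hard-coded count was 60 in my file):

```
// Before – the hard-coded column count locks the query to 60 columns:
Source = Csv.Document(
    Web.Contents("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv"),
    [Delimiter=",", Columns=60, Encoding=65001, QuoteStyle=QuoteStyle.None]),

// After – with "Columns=60," removed, new daily columns are picked up automatically:
Source = Csv.Document(
    Web.Contents("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv"),
    [Delimiter=",", Encoding=65001, QuoteStyle=QuoteStyle.None]),
```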

Let’s rename the table to Confirmed. To do this, in the Name box to the right change time_series_19-covid-Confirmed to Confirmed.

 

Next, we need to duplicate the table before we perform other steps so that we can create a dimension table to use later for better data modeling and report building in Power BI. To do this, right-click on the Confirmed query on the left and choose Duplicate.

Right click on the new Query it created called Confirmed (2) and choose Rename and name the query Header.

Should look like this.

Let’s clean up our Header query to be our dimension.

Hold down the shift key and select the four columns in Header called: Province/State, Country/Region, Lat and Long. You should see the column headers for the four columns become yellow. Right-click on one of the four columns and choose Remove Other Columns.


We are good with the Header table/query now.
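If you open the Advanced Editor on the Header query, the full M should be close to this sketch (step names may vary slightly from what Power Query generated for you):

```
let
    Source = Csv.Document(
        Web.Contents("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv"),
        [Delimiter=",", Encoding=65001, QuoteStyle=QuoteStyle.None]),
    #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
    // Keep only the dimension columns
    #"Removed Other Columns" = Table.SelectColumns(#"Promoted Headers",
        {"Province/State", "Country/Region", "Lat", "Long"})
in
    #"Removed Other Columns"
```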

 

Let’s go back to editing our Confirmed query/table. We can now remove the Lat and Long columns of our Confirmed table as we will use them from the new Header table we created. To do this, hold down the shift key and select the Lat and Long columns in the Confirmed table and right-click and choose Remove Columns.


Our Applied Steps should now look like this.

Now comes the fun part, the part that Power Query in Power BI is amazing at doing – un-pivoting data! We are going to un-pivot around the Province/State and Country/Region columns so the daily data goes from being a column per day to a row per day, with one column for the date and one for the value. To do this, hold down the shift key, select the Province/State and Country/Region columns, then right-click on one of the two selected columns and choose Unpivot Other Columns.


Now look at our Confirmed table and steps! We now have the data in a better format for reporting … but we have a few more things to do.

So, let’s rename our new Attribute column Date. Right-click on Attribute column and click Rename… and rename the column Date.

Looks like this now.

Now finally let’s change the datatypes of the Date and Value columns to Date and Whole Number, respectively. Right-click on the Date column and click Change Type ->
Date. Right-click on the Value column and click Change Type -> Whole Number.

Our table should look like this now!
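At this point the full M for the Confirmed query should look roughly like this sketch (step names may differ slightly from what Power Query generated):

```
let
    Source = Csv.Document(
        Web.Contents("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv"),
        [Delimiter=",", Encoding=65001, QuoteStyle=QuoteStyle.None]),
    #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
    #"Removed Columns" = Table.RemoveColumns(#"Promoted Headers", {"Lat", "Long"}),
    // One row per Province/State, Country/Region and day instead of a column per day
    #"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Removed Columns",
        {"Province/State", "Country/Region"}, "Attribute", "Value"),
    #"Renamed Columns" = Table.RenameColumns(#"Unpivoted Other Columns", {{"Attribute", "Date"}}),
    #"Changed Type" = Table.TransformColumnTypes(#"Renamed Columns",
        {{"Date", type date}, {"Value", Int64.Type}})
in
    #"Changed Type"
```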

 

Repeat these steps in Power Query (MINUS the steps to create the Header table!!) for the other two files – Recovered Cases and Deaths. Call the tables/queries Recovered and Deaths and be sure to use the correct URLs for each table.

To bring in the two new files you can click New Source -> Web at the top and follow the steps above (MINUS the steps to create the Header table!!).

 

Once done it should look like this!

Now let’s Close & Apply our work and get into the Power BI Desktop. Click the Close & Apply button at the top left to close Power Query and return to the Power BI Desktop.

You will see it bringing the tables into Power BI Desktop.

 

Click the Save button at the top left and save your work so far! You can call the file Power BI Coronavirus Report.

NOTE: Name your Power BI Report files carefully, i.e., give them names that are friendly, as they become the Report name in the Power BI Service when you publish.

 

Power BI Desktop – DAX Data Wrangling

We now have our data in Power BI Desktop, but before we build a report we need to create a few more tables and columns using DAX. DAX is the Data Analysis Expressions language used in Power BI to create calculated columns, measures and tables.

We need to add a few columns as keys in our tables, hide a few columns from the reports and add a date table. Click the Data button at the top left side to start transforming data with DAX in Power BI Desktop. Notice the four tables we created in Power Query on the right.


We are going to add a column called CountryState that will be a combination of the Province/State and Country/Region columns. This will be a key from our data tables to the Header table. To add the column/key, right-click on the table and click New column.

In the dialog at top enter the formulas below for each table, respectively, to create the key.

Should look like this for the Confirmed table. Click the Checkmark to apply and create the column.

Repeat for each table using the following DAX formulas:

 

Confirmed: CountryState = COMBINEVALUES("|", 'Confirmed'[Country/Region], 'Confirmed'[Province/State])

 

 

Deaths: CountryState = COMBINEVALUES("|", 'Deaths'[Country/Region], 'Deaths'[Province/State])

 

 

Header: CountryState = COMBINEVALUES("|", 'Header'[Country/Region], 'Header'[Province/State])

 

 

Recovered: CountryState = COMBINEVALUES("|", 'Recovered'[Country/Region], 'Recovered'[Province/State])

 

We can now hide the CountryState key in each of the four tables, as we do not need it in the reports themselves, only to create the relationships between the 3 data tables and the Header table.

In each table right-click on the CountryState column we just created and choose Hide in report view. Repeat for each table.

You will now see that the CountryState column is greyed out.

 

Repeat (Hide) for the Date columns in the 3 data tables, Confirmed, Deaths and Recovered.


 

Now we will use DAX to create a date/calendar table to use as our Date dimension table. To do this click the Table tools menu and then click the New table button.

In the formula bar enter the DAX code below from Adam Saxton to create a Calendar table to use as a Date Table dimension.

 

DAX for Date Table (from Adam Saxton, Twitter: @awsaxton, Web: https://guyinacube.com/)

Calendar =
VAR _calendar =
    CALENDAR ( MIN ( 'Confirmed'[Date] ), MAX ( 'Confirmed'[Date] ) )
RETURN
    ADDCOLUMNS (
        _calendar,
        "Year", YEAR ( [Date] ),
        "MonthNumber", MONTH ( [Date] ),
        "Month", FORMAT ( [Date], "mmmm" ),
        "Quarter", "QTR " & FORMAT ( [Date], "Q" ),
        "QuarterNumber", FORMAT ( [Date], "Q" ),
        "MonthYearNumber", FORMAT ( [Date], "yy mm" ),
        "Month Year", FORMAT ( [Date], "mmm yyyy" )
    )

 

Should look like this for the Calendar table. Click the Checkmark to apply and create the table.

We now have a Calendar table!

Finally, we need to mark the Calendar table as a “date table”. Select the Calendar table on the right. Then in the Table tools menu select Mark as date table.

 

Then in the pop-up window select the Date column for the Date column drop-down and click Ok.

 

Power BI Desktop – Modeling

Now we need to setup the relationships for our tables. To do this we use the Model section of Power BI Desktop. Click the Model button at the top left side to start modeling our table relationships.

Our tables will show up in a dark grey area where we can drag the tables around to re-arrange them and create relationships. Drag the Header and Calendar tables into the center and place the Confirmed, Deaths and Recovered tables around them like below. NOTE: You will have to scroll around a little to move and re-arrange them.

Now that we have them arranged we can build the relationships. To do this we will use our keys, Date and CountryState: drag the Date field from Calendar to the Date field in the 3 data tables Confirmed, Deaths and Recovered. Repeat for the CountryState column, dragging from the Header table to each of the 3 data tables. Below is an example of how to drag the columns.


When done our model should look like below.

 

Let’s click the Save button to save our work again!

 

Power BI Desktop – Reports

Now we are ready to build a report. Click the Report button at the top left side to start building our report!

In the Report window let’s first setup our Theme. Click the View menu at top and then you will see all the available pre-built themes you can choose from (like PowerPoint!). Click the light grey arrow to the right and then click the Storm theme (see below).

Now let’s add a title to our report. Click the Insert menu and then click Text box.

Position it as below and give it a title, COVID-19 Global Report, and set the font to 24.

Next, add a map to the report so we can show the occurrences of coronavirus by country. Click the Map visual icon in the Visualizations pane to add it to the report (make sure nothing is selected on the report before clicking the Map visual).

Position it at the bottom left.

Drag the Country/Region column in the Header table to the Location box and the Legend box in the Fields area.

Add the Value column from the Confirmed table to the Size box.

Your map should look like this. There is one problem though! Since our data holds cumulative/running totals (see the Dataset section above), we get incorrect counts for the number of confirmed cases. Notice when we hover over China it shows over 3 million cases, which is not correct. The reason is that the default aggregation method in Power BI is SUM.

To fix this we need to change the aggregation method for the Value column in the Size field to be Maximum. To do this we click the little down arrow to the right of Value in the Size field.


Then we choose Maximum.



Now when we hover over China, we get the correct value.
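As an aside, instead of switching the aggregation on every visual, you could create an explicit DAX measure that always takes the maximum of the running total (the measure name here is my own suggestion, not part of the steps above):

```
Confirmed Cases = MAX ( 'Confirmed'[Value] )
```

You could then drag this measure into Size instead of the raw Value column, and any visual using it would aggregate correctly by default.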

OK, now we will add a line chart to show the spread of the coronavirus by country by day. Make sure none of the items in the report are selected and choose the Line Chart visualization.

Position it as shown below. Notice the red guide/snap lines that show up to help align it.

Drag the Date field from the Calendar table to Axis. Drag the Country/Region field from the Header table to Legend and drag the Value field from the Confirmed table to Values.

Notice the Date field automatically has a date hierarchy created and used in the visual.

In this instance we do NOT want a date hierarchy. So, to remove it, click the grey down arrow next to Date in the Axis field and choose Date.

Bazinga! We have a nice line graph showing the spread of the virus by country!

Now add two more Line Charts below this one and position as below.

In the left one, add the same fields as the top Line Chart but use the Value field from Recovered in Values.
In the right one, add the same fields as the top Line Chart but use the Value field from Deaths in Values.

Your report should look like this below now.

 

Now add some Slicers to the report to slice and dice the data by country or date. Make sure nothing is selected and click the Slicer visualization twice to add two Slicers to the report.

Position the slicers as such.

Select the left Slicer and drag the Country/Region field into the Field box.

Your Slicer will now be populated with the countries and regions. Let’s change this to be a dropdown. In the top right of the Slicer hover your mouse and click the very little light grey down arrow.

In the menu that displays select Dropdown.

Now in the Format section (the Paint Roller icon!) of the Slicer let’s change a few options.

In the Selection controls section let’s enable Show “Select all” option so we can make it easier to select countries in our slicer.

Now select the right slicer and let’s set up a Date slicer. Drag the Date field from Calendar into the Field box.

You now have a date slicer to play with the dates in your report. Slide the date slider around and see what happens!

Finally, let’s add another Text box to show credit for the data source. Click the Insert menu and then click Text box and position below the title and add the following text and format as desired.

Data Source:
CSSE at Johns Hopkins University
https://github.com/CSSEGISandData/COVID-19

 

You can use the Paint Roller icon on the visuals to change colors, titles, etc.!

For example, let’s change the Title text of the Line chart for Confirmed cases. Select the top Line chart. Then select the Format (Paint Roller) icon and scroll to Title. Change the Title text to Confirmed Cases by Country.

Repeat for the other two Line charts using the titles, Recovered Cases by Country and Deaths by Country.

Last thing, there is a Page name at the bottom left. Change this to COVID-19 by Country.

Click the Save icon at the top left to save your report!

Your final report should look like this now!!! Congratulations!!! You did it!

 

 

Power BI Service

If you have a Power BI Pro license you can use the Publish button at the top right to publish to the Power BI Service and share your report with your co-workers, your friends and even the world! You can also set up an automatic refresh of the data in the service and a whole lot more. More info about the Power BI service is here – https://docs.microsoft.com/en-us/power-bi/service-get-started.

 

Conclusion

I hope you enjoyed learning how to use Power BI and how to build your own report for the Novel Coronavirus (COVID-19). If you want to learn more about Power BI check out the official Power BI documentation – https://docs.microsoft.com/en-us/power-bi/. Also, follow Adam and Patrick on the Guy in a Cube YouTube Channel – https://www.youtube.com/channel/UCFp1vaKzpfvoGai0vE5VJ0w. It is the ultimate source of Power BI knowledge.

I have posted a Power BI file that should look exactly like the one you just built if you followed my instructions! You can follow me on Twitter @STATCowboy or connect with me on LinkedIn at https://www.linkedin.com/in/jameyj/.

Finally, please be safe during this global pandemic. Wash your hands. Keep a safe social distance and limit your interactions with groups until the curve is flat! (You can see in the report you just built that it isn’t flat for the rest of the world!)

Guidance from CDC – https://www.cdc.gov/coronavirus/2019-ncov/index.html

 

Code

GitHub Repo for Power BI Coronavirus Report Tutorial – https://github.com/STATCowboy/Power-BI-Coronavirus-Report-Tutorial

 

 

My Journey to Data Science – Part 3 of 3

My Career Journey Over the Last 20+ Years

October 28, 2016

 

Jamey Johnston (@STATCowboy)

 

This is Part 3 (the final post!) of a 3-part Blog series on my Career Journey over the Last 20+ years into my current role as a Data Scientist. Part 1 can be found here and Part 2 can be found here!

BI

 

BI is fun! I loved it and soon was not missing the late-night pages of down databases or late nights doing database upgrades. (Although, one night I upgraded 11 Oracle production databases to Oracle 11g at the same time which I must admit was fun!) I loved the insights I gained from building BI solutions using MS products like SSAS, SSRS, MSSQL and SharePoint. I also loved the new-found freedom of exploring data and making it all work together in tools like Spotfire and Power BI! I was building large analytic datasets with billions of rows and special analytic calculations. I was having fun but I was missing something to truly fill the gap as a data scientist, statistics!

 

TAMU MS of Analytics

 

I was driving into the office late one night on I-45 in Houston, which was not my normal route, and I saw a billboard for the Texas A&M Master of Science in Analytics! I had been looking for a program to learn statistics but I wanted it local (my ADHD doesn’t do well with online classes!). This was March of 2014. I was accepted into the program in June 2014 and started classes at the end of August 2014! To say the least, the next two years of my life were crazy! I had started to learn R in the last year or so, and in the program I learned about SAS and SAS JMP and a ton of statistics, from regression, time series, spatial statistics and multivariate analysis to machine learning! I was having a blast learning statistics, and all my friends and family thought I was plumb mad to start grad school in my 40s and in, of all things, Statistics!

I can’t emphasize enough the importance of a good education in statistics and how valuable the MS in Analytics from TAMU was in that education. I graduated in May 2016 with my Masters and could not be happier with my experience at TAMU. I am proud to be a member of the Fightin’ Texas Aggie Class of 2016! Yes, I wore Mickey Ears with my Twitter handle (@STATCowboy) on the back at graduation!

 


 

Data Scientist

 

Armed with my learning of R, SAS and more importantly, Statistics, I am fully ready for whatever data is thrown at me. I get to build some really cool statistical and machine learning models these days and I can’t believe at 42 years of age I have a new career I basically started at 40! I guess if you are going to have a midlife crisis a career change to The Sexiest Job of the 21st Century according to the Harvard Business Review is a good crisis! Who knows what career I will have at 50?!

I’ll finish with my definition of a Data Scientist – someone who loves finding insight from data, big or small! In some ways, we are all data scientists, and I would encourage anyone who would like to take their data knowledge to the next level to learn Statistics. Statistics is the first leg of a three-legged stool that makes up data science. The second leg is the data guru – someone who can store, move and manage data and present it in an easy-to-use fashion that can be consumed quickly. The final and third leg of the stool is the Subject Matter Expert (SME), who knows the business and the data around the business. One person is never an expert in all of these, but a Data Scientist is typically an expert in one and very proficient in the others. However, I find the best Data Scientist is really a team of folks who love data and each bring expertise in one of the three areas – statistics, data or subject matter – to the table to build models that make the business better!

To get you started in your career as a Data Scientist check out my presentation from SQL Summit 2016 here. Be sure to check out the resources section of the PPT to see some good locations to learn things like R! Also, feel free to reach out to me on Twitter if you have questions along your journey. I am happy to help!

I hope you enjoyed reading about my journey. I know I have enjoyed living it!

Code Like a Pirate … Intro to R and Data Science Tools in MS

So, you want to code like a Pirate! – R … R … R!

SQL Summit 2016 Version!
October 28, 2016

 

Jamey Johnston (@STATCowboy)

 

Thanks to all who attended my session at SQL PASS Summit 2016. The link to the scripts, documents and presentation materials from the session is below. If you have any questions or comments, feel free to hit me up on Twitter (@STATCowboy). I really appreciate all of you who attended, and sorry if I got too statistical on you!

 

Download from OneDrive!

 

My Journey to Data Science – Part 2 of 3

My Career Journey Over the Last 20+ Years

October 26, 2016

 

Jamey Johnston (@STATCowboy)

 

 

This is Part 2 of a 3-part Blog series on my Career Journey over the Last 20+ years into my current role as a Data Scientist. Part 1 can be found here!

DBA

 

I had been working for LDEQ for three years now and was one year removed from college when a good friend called. He was working as a consultant for Oracle Corporation in Houston. Oracle had signed a big contract with a client in Houston and needed DBAs on the project. He told me to send my resume so I could join him which I did! Two weeks later I was a Senior Consultant with Oracle Corporation working and living in Houston. It was a very exciting time of my life those three years. I had one of the best DBA mentors you could have, Clement C., teaching me about Oracle and technology and was getting to travel to some cool places in the US. Most importantly, though, I found the love of my life soon after moving to Houston, my wife, Melanie!

Oracle was a great 3 years for me and I learned so much that truly set the stage for the next 10 years or so of my life as a DBA. I enjoyed Oracle but after 3 years of travelling it was time to stop traveling and get a local job in Houston so I could spend more time with my wife and dogs! My wife will tell you I only stopped traveling for the dogs (not true though)!

This little-known company at the time called Enron (Ha Ha!) was hiring and I got hired on there! It was June of 2001 and Enron couldn’t be flying any higher (at least in their make-believe world!). We were one of the 10 largest companies (at least on invisible paper) in the world and the talent coming to work there was second to none. Over the next year and a half I would learn a great deal about business as well as management, as I was soon promoted to manage one of the core DBA teams at Enron. Enron went bankrupt and I moved to work for UBS when they bought the North American trading division of Enron.

After a brief stint at UBS, they decided to get out of Houston and I went to work for a small consulting firm, Methods Technology Solutions, out of Baton Rouge. I did some travelling again and really had a lot of freedom to learn some new things and work on a variety of database projects, from very small ones to bigger ones. I worked for Methods for 4 years, the last year and a half consulting at a large independent Oil and Gas company. The Oil and Gas company offered me full-time employment and I have been there now for over 12 years.

The first 7 years at the O&G company I was a DBA team member working on admin, upgrades and a lot of tuning. It was fun! I had some time to write a little code, too! I created an end-user web dashboard in C# ASP.NET for the app owners to see what was happening in the DB with their systems. I created a multi-threaded application in C# to rapidly put archived emails together into a bundle and create audit trails in XML, CSV, TXT and DB tables. However, I mostly did the typical enterprise DBA tasks.

 

Tuning SQL to Enhancing Business Processes and Relationships

 

I really enjoyed tuning database systems. I was a master of 10046 traces in Oracle and SQL Profiler and Extended Events in SQL Server and was able to tune a great deal of our applications. The funny thing about tuning databases when you are a DBA is you tend to look for that magic parameter, trace flag, bad SQL, etc. and often forget about the application and business processes behind what you are trying to optimize. More importantly, you often forget about the business people using those applications and how that data is used to add business value.

Anyone who knows me knows I am not shy! I love people and I love engaging with people. This led me to have a different approach to tuning. When I would get a call that something was slow, I would find the business person who was having the issue, schedule an hour of their time in their office, and have them reproduce the issue in the system while I watched both what they were doing in the application and what was happening in the database. It was funny to see their face when the “DBA” showed up in their office. It was like an endangered species had left its native habitat and ended up in their office. Many would even tell me they had never seen a DBA before! This approach created special relationships that allowed me to start understanding the business processes behind my company and to do more than just “tune database systems”.

I was put on a special project to implement a new business system and was actually placed for a year within the business team to assist with implementing the application. The application is a critical one for O&G companies and so I was dedicated to just working on that project and application. For the first time in almost 15 years I wasn’t a DBA! I wasn’t on-call (I didn’t know what to do with my new-found freedom). I didn’t have to do database upgrades. It was so weird to actually be a “customer” of the DBA team! It certainly gave me insight into how it felt to be on the other side of the table. I absolutely loved the project. I spent so much time understanding all the various data pieces and how they would flow into the new system (we had several different DBs that would need to integrate into this new system) and more importantly how that data truly worked for this particular part of the business. I really loved data! I realized then that I didn’t really want to be a DBA full-time (still like the part-time idea just to keep current but not very practical) but I wanted to work with data. I wanted to build datasets and analyze them. I wanted to find ways to better connect them together. I wasn’t quite on the statistics track yet but definitely the data and business side of Data Science.

 

DBA to BI

 

As I began to understand the applications and business processes behind using the databases I supported, I also became aware of how so many of the systems were connected to one another either by ETL, database links, linked servers or virtually through various business processes. The connections of all the systems and more importantly, for me, the data led me on my next journey into BI.

I had become exposed to a tool called Spotfire while I was on the special project and learned how to use the tool and how people at work were using the tool. So, I began to work on connecting Spotfire directly and securely to the data sources and soon I was moved onto the BI team working on BI projects in Microsoft and Spotfire. I was a BI guy now!

 

Stay tuned for Part 3 tomorrow – BI to Data Scientist!

 

My Journey to Data Science – Part 1 of 3

My Career Journey Over the Last 20+ Years

October 25, 2016

 

Jamey Johnston (@STATCowboy)

 

So I am at the EMP in Seattle attending the SQL Summit 2015 Appreciation Event and I am introduced to Denise McInerny (t) by my good friend Wendy Pastrick (t | b). Of course, the common pleasantries are exchanged, “My name is …”, “I work at XYZ company …” and “I do ABC job …”, which in my case is “My name is Jamey Johnston, I work at an O&G company and I am a Data Scientist”. Denise’s response was not quite what most people’s response is when I tell them I am a Data Scientist. Usually I get a general look of trepidation or the occasional, “Oh, you are a unicorn!” (true story, several times), but in Denise’s case she says “You should write a blog about your journey to become a Data Scientist” (or something along those lines). I thought that might be a fun blog to write and said “Sure!”. So here is the story of my journey to becoming a Data Scientist.

 

10 PRINT “My Name is Jamey”

 

So, I am an eight-year-old living in the suburbs of New Orleans and I want a computer, a Commodore Vic-20, and I get it! I plug it in and connect it to the color console TV in my living room and turn it on. Within 10 minutes I have written my first program:

 

10 PRINT "My Name is Jamey"
20 GOTO 10
RUN
 

The next thing I see is the greatest thing I have ever seen – “My Name is Jamey” scrolling across my parent’s TV screen as fast as I can read it! I sit there and watch it go across the screen over and over and over again finally asking myself – “How do I stop it!?”. I scan through the manual that came with the Vic-20 looking for the correct key combination to stop it! No Luck and no Internet in 1982 so I do the only thing I can think of to stop it – “Unplug!”. I loved that computer and it was what fueled my desire to learn more about computers!

 

GIS and College

 

Fast forward to college and I am a 19-year-old who just finished his first year of college as a Psych major and I get a job with the Civil Engineering department at school working with GIS and Remote Sensing technologies. My father had started in GIS and Remote Sensing back in the 70’s for Wetlands research working for the Department of Interior so I was familiar with GIS and Remote Sensing and I was excited to work with big workstations, lots of data and getting to work in the same field as my dad! So these big workstations were SGI and DG boxes running from 25Mhz to 100Mhz processors! Your phone is probably 100 times faster!

Two years later I finish my junior year and I am still working on GIS and Remote Sensing projects, one of which was to run clusters against tiles of Thematic Mapper satellite data of the entire state of Louisiana and then use those clusters to classify them into land use and land cover categories (i.e. is this cluster water or agriculture or coniferous forest, etc.). I was working with ESRI and Erdas Imagine software and learning UNIX which was really fun and beneficial to my career.

I loved being a GIS and Remote Sensing technologist but I was still a Psych major! I wanted to change majors but the problem was my University only had ONE geography class! So there was no way I could stay at the University of Southwestern Louisiana and get a GIS degree. I transferred to LSU to start in the Fall of 1995 which had a great GIS program through the Department of Geography and Anthropology to get a BS in Geography with an emphasis in GIS and Remote Sensing. I also went to work for the GIS Center at the Louisiana Department of Environmental Quality (LDEQ) continuing my career in GIS and Remote Sensing. It took me two more years to finish school as I had to take 36 hours of Geography to graduate. My first semester was 12 hours of Geography with a Geography 1001 class up to a Graduate level class of Geography 4998 and a music class I believe. It was a fun semester to say the least.

I graduated in Spring of 1997 from LSU and continued working for the GIS Center in the field of GIS and Remote Sensing. I also started working with another cool technology, Relational Database Management System (RDBMS). ESRI had created the first version of ArcSDE which was a way to store your spatial data in a database. In my case it was Oracle. We had Oracle v6 and v7 databases at LDEQ and I was starting to learn them to support ArcSDE along with some other technologies that required RDBMS.

 

Stay tuned for Part 2 tomorrow – DBA to BI!