## Are the dice in Mario Party fair?

#### Introduction
Over the holidays I was playing a lot of games with friends and family as one does, and one of those games was Super Mario Party for Nintendo Switch.

Now, what’s interesting about this game is that, in addition to requiring dice rolls like any other board game, it lets you roll character-specific dice, which are unique and have different values than a standard die. Which dice you can use depends on your character (or on the various ‘allies’ you can acquire: when you team up with other playable characters, you get the option to use their dice in addition to a bonus).

Super Mario Party, with Mario holding his custom dice

So, being the guy that I am, this got me wondering: are all the different dice for the different characters ‘fair’? If your goal is to traverse the maximum number of spaces (as it often is), are any of the dice better to use on average than the others?
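The actual face values vary by character, but the comparison itself is simple arithmetic: expected value tells you which die is better on average, and variance tells you how risky it is. Here’s a minimal sketch in R with made-up face values (the real Super Mario Party dice differ):

```r
# Compare dice by expected value and variance.
# NOTE: these face values are invented for illustration - the actual
# character dice in Super Mario Party have different faces.
dice <- list(
  standard = c(1, 2, 3, 4, 5, 6),
  risky    = c(0, 0, 0, 8, 8, 8),   # hypothetical all-or-nothing die
  steady   = c(3, 3, 3, 3, 4, 4)    # hypothetical low-variance die
)

summary_table <- data.frame(
  die      = names(dice),
  expected = sapply(dice, mean),
  variance = sapply(dice, function(x) mean((x - mean(x))^2))
)
summary_table
```

Two dice can have similar expected values while differing wildly in spread, which is really what “fair” hinges on here.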

## Are We Solving The Wrong Problems With Machine Learning?

Corn and how it gets from growing in fields onto your table.

Below is a video of a corn harvesting machine:

And here is a video of people gathering corn:

So, I hear you say, what does all this have to do with machine learning?

A lot, as it so happens.

## Training an RNN on the Archer Scripts

#### Introduction

So all the hype these days is around “AI”, as opposed to “machine learning” (though I’ve yet to hear an exact distinction between the two), and one of the tools that seems to get talked about most is Google’s TensorFlow.
I wanted to spend a little time playing around with TensorFlow and RNNs, since they’re not the type of machine learning I’m most familiar with, and see what kind of outputs I could come up with for a low investment of time.

#### Background

A little digging and I came across this tutorial, which is a pretty good brief intro to RNNs; it uses Keras and computes things character-wise.
This in turn led me to word-rnn-tensorflow, which, expanding on the work of others, uses a word-based model (instead of a character-based one).
I wasn’t about to spend my whole weekend rebuilding RNNs from scratch – no sense reinventing the wheel – so I just thought it’d be interesting to play around a little with this one, and perhaps give it a more interesting dataset. Shakespeare is ok, but why not something a little more culturally relevant… like, I dunno, say the scripts from a certain cartoon featuring a dysfunctional foul-mouthed spy agency?

## When to Use Sequential and Diverging Palettes

#### Introduction

I wanted to take some time to talk about an important rule for the use of colour in data visualization.
The more I’ve worked in visualization, the more I have come to feel that one of the most overlooked and under-discussed facets (especially for novices) is the use of colour. A major pet peeve of mine, and a mistake I see all too often, is the use of a diverging palette instead of a sequential one or vice-versa.
So what is the difference between a sequential and diverging palette, and when is it to correct to use each? The answer is one that arises very often in visualization: it all depends on the data, and what you’re trying to show.

#### Sequential vs. Diverging Palettes

First of all, let’s define what we are discussing here.
**Sequential Palettes**
A sequential palette ranges between two colours (typically having one “main” colour), going from white or a lighter shade to a darker one by varying one or more of the parameters in the HSV/HSL colour space (usually only saturation or value/luminosity, or both).
For me, at least, varying hue means going between two very distinct colours, and this is usually not good practice if your data vary linearly, as it is much closer to a diverging palette, which we will discuss next. There are other reasons why this is bad visualization practice, and, of course, exceptions to this rule, which we will discuss later in the post.
 A sequential palette (generated in R)
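As a quick illustration, a sequential palette like the one above can be generated in base R with `colorRampPalette` (the endpoint colours here are my own arbitrary choices):

```r
# A simple sequential palette: white to a dark blue, keeping one
# "main" hue and varying lightness/saturation
seq_pal <- colorRampPalette(c("white", "darkblue"))(9)
seq_pal   # nine hex codes from white to dark blue

# Preview the palette as a row of swatches
barplot(rep(1, 9), col = seq_pal, border = NA, axes = FALSE)
```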
**Diverging Palettes**
In contrast to a sequential palette, a diverging palette ranges between three or more colours, with the different colours being quite distinct (usually having different hues).
While technically a diverging palette could have as many colours as you’d like (such as in the rainbow palette, which is the default in some tools like MATLAB), diverging palettes usually range between two contrasting colours at either end, with a neutral colour or white in the middle separating the two.
 A diverging palette (generated in R)
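The same base R function can produce a diverging palette simply by putting a neutral colour in the middle (again, the specific colours here are my own picks for illustration):

```r
# A simple diverging palette: red through white to blue, with the
# neutral colour sitting at the midpoint of the scale
div_pal <- colorRampPalette(c("firebrick", "white", "steelblue"))(9)
barplot(rep(1, 9), col = div_pal, border = NA, axes = FALSE)
```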

#### When to Use Which

So now that we’ve defined the two different palette types of interest, when is it appropriate and inappropriate to use them?

The rule for the use of diverging palettes is very simple: they should only be used when there is a value of importance around which the data are to be compared.

This central value is typically zero, with negative values corresponding to one hue and positive the other, though this could also be done for any other value, for example, comparing numbers around a measure of central tendency or reference value.

**A Simple Example**
For example, looking at the Superstore dataset in Tableau, a visualizer might be tempted to make a map such as the one below, with colour encoding the number of sales in each city:

Here the points on the map correspond to cities, sized by total number of sales and coloured by total sales in dollars. Looks good, right? The cities with the highest sales clearly stick out in green against the dark red.

Well, yes, but do you see a problem? Look at the generated palette:

The scale ranges from the minimum sales in dollars (\$4.21) to max (~\$155K), so we cover the whole range of the data. But what about the midpoint? It’s just the dead center point between the two, which doesn’t correspond to anything meaningful in the data – so why would the hue change from red to green at that point?

This is a case better suited to a sequential palette, since all the values are positive and we’re not highlighting a meaningful value around which the range of the data falls. A better choice would be a sequential palette, as below:

Here, the range is fully covered and there is no midpoint; the palette ranges from light green to dark. The extreme values still stand out in dark green, but there is no well-defined centre where the hue arbitrarily changes, so this is a better choice.

There are other ways we could improve this visualization’s encoding of quantity as colour, for one, by using endpoints that would be more meaningful to business users instead of just the range of the data (say, \$0 to \$150K+), and another which we will discuss later.

Taking a look at the two palettes together, it’s clearer which is a better choice for encoding the always positive value of the metric sales dollars across its range:

**Going Further**
Okay, so when would we want to use a diverging palette? As per the rule: when there is a meaningful midpoint or other important value you want to contrast the data around.

For example, in our Superstore data, sales dollars are always positive, but profit can be positive or negative, so it is appropriate to use a diverging palette in this case, with one hue corresponding to negative values and another to positive, and the neutral colour in the middle occurring at zero:

Here it is very clear which values fall at the extremes of the range, but also which are closer to the meaningful midpoint (zero): that one city in Montana is in the negative, and the others don’t seem to be very profitable either; we can tell they are close to zero by how washed out their colours are.

Tableau is smart enough to set the midpoint at zero for our diverging palette. Again, you could tinker with the range to make the endpoints more meaningful (e.g. round values), as well as varying the range: sometimes a symmetrical range for a diverging palette is easier to interpret from a numerical standpoint, though of course you have to keep in mind how this is going to perceptually impact the salience of the colour values for the corresponding data.

So could we use a diverging palette for the always positive sales data? Sure. There just needs to be a point around which we are comparing the values. For example, I happen to know that the median sales per city over the time period in question is \$495.82 – this would be a meaningful value to use for the midpoint of a diverging palette, and we can redo our original sales map as such:

Now we have a better version of our original sales map, where the cities coloured in red are below the median value per city, and those coloured in green are above. Much better!

But now something strange seems to be going on with the palette – what’s that all about?

So what is going on with the palette in the last map from our example above? And what of my promise to discuss other ways the palette scaling can be improved, and of exceptions to the rule of not using differing hues in a continuous scale?

Well, the reason the map looks good above but the scale looks wrong has to do with how the data are distributed: the distribution of sales by city is not normal, but follows a power law, with most of the data falling in the low end. As a result, when the colours are scaled linearly with the data, most cities get crammed into one end of the palette:

One way to fix this is to transform the data by taking the log, and seeing that the resulting palette looks more like we’d expect:

Though of course now the range is between transformed values. It’s interesting to note that in this case the midpoint comes out nearly correct automatically (2.907 vs. log(495.82) ≈ 2.695).
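Since the Superstore data live in Tableau, here’s a hedged sketch of the same idea in R, using made-up right-skewed “sales” values (generated from a log-normal, purely for illustration):

```r
# Hypothetical right-skewed "sales per city" values, invented for
# illustration - real retail sales data often look similar
set.seed(42)
sales <- rlnorm(500, meanlog = 6, sdlog = 1.5)

# Linearly, most cities sit at the bottom of the range...
hist(sales, breaks = 50, main = "Raw sales: heavily right-skewed")

# ...while a log transform spreads them out
log_sales <- log10(sales)
hist(log_sales, breaks = 50, main = "log10(sales): roughly symmetric")

# The midpoint for a diverging palette, on the transformed scale
midpoint <- log10(median(sales))
midpoint
```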

Further complicating all this is the fact that human perception of colour is not linear, but follows something like the Weber–Fechner Law, depending on the property being varied. Robert Simmon writes on this in his excellent series of posts from his time at NASA, which is definitely worth a read (and multiple re-reads).

There he also notes an exception to my statement that you shouldn’t use continuous palettes with different hues: sometimes even that can be appropriate, as in the section on figure-ground when talking about earth surface temperature.

#### Conclusion

So there you have it. Once again: use diverging palettes only when there is a meaningful point around which you want to contrast the other values in your data.

Remember, it all depends on the data. What is the ideal palette for a given data set, and how should you choose it? That’s not an easy question to answer; it is always left up to the visualization practitioner, and the answer only comes with knowledge of proper visualization technique and the theoretical foundations that form it.

There are no right or wrong answers, only better or worse choices. It’s all about the details.

#### References and Resources

Subtleties of Colour (by Robert Simmon)
Understanding Sequential and Diverging Palettes in Tableau
How to Choose Colours for Maps and Heatmaps

## How Often Does Friday the 13th Happen?

#### Background

So yesterday was Friday the 13th.

I hadn’t even thought anything of it until someone mentioned it to me. They also pointed out that there are two Friday the 13ths this year: the one that occurred yesterday, and there will be another one in October.

This got me to thinking: how often does Friday the 13th usually occur? What’s the most number of times it can occur in a year?

Sounds like questions for a nice little piece of everyday analytics.

#### Analysis

A simple Google search revealed a list of all the Friday the 13ths from August 2010 up until the end of 2050 over at timeanddate.com. It was a simple matter to plunk that into Excel and throw together some simple graphs.
So to answer the first question, how often does Friday the 13th usually occur?
It looks like the maximum number of times it can occur per year is 3 (those are the years Jason must have a heyday and things are really bad at Camp Crystal Lake) and the minimum is 1. So my hypothesis is:
a. it’s not possible to have a year where a Friday the 13th doesn’t occur, and
b. Friday the 13th can’t occur more than 3 times in a year, due to the way the Gregorian calendar works.
Of course, this is not proof, just evidence, as we are only looking at a small slice of data.
So what is the distribution of the number of unlucky days per year?
The majority of years in the period have only one (18, or ~44%), but not by much, as nearly as many have two (17, or ~42%). Far fewer have three, only 6 (~15%). Again, this could just be an artifact of the interval of time chosen, but it gives a good idea of what to expect overall.
Are certain months favoured at all, though? Does Jason’s favourite day occur more frequently in certain months?
Actually it doesn’t really appear so – they look to be spread pretty evenly across the months and we will see why this is the case below.
So, what if we want even more detail? When we ask how frequently Friday the 13th occurs, what if what we really mean is: how long is it between each occurrence? Well, that’s something we can plot over the 41-year period just by doing a simple subtraction and plotting the result.
Clearly, there is periodicity and some kind of cycle to the occurrence of Friday the 13th, as we see repeated peaks at what looks like 420 days and also at around 30 days on the low end. This is not surprising, if you think about how the calendar works, leap years, etc.
If we pivot on the number of days and plot the result, we see the distribution isn’t spread out evenly at all; there are only 7 distinct intervals between Friday the 13ths during the period examined:
So basically, depending on the year, the shortest time between successive Friday the 13ths will be 28 days, and the greatest will be 427 (about a year and two months), but usually it is somewhere in-between at around either three, six, or eight months. It’s also worth noting that every interval is divisible by seven; this should not be surprising at all either, for obvious reasons.
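Rather than copying the list from timeanddate.com into Excel, the same numbers can be reproduced in a few lines of base R (a sketch over the same August 2010 to end-of-2050 range):

```r
# Find every Friday the 13th from August 2010 through the end of 2050
days <- seq(as.Date("2010-08-01"), as.Date("2050-12-31"), by = "day")
f13  <- days[format(days, "%d") == "13" & as.POSIXlt(days)$wday == 5]

# How many occur in each year? (Should range between 1 and 3)
per_year <- table(format(f13, "%Y"))
range(per_year)

# Distinct gaps (in days) between successive occurrences -
# every one a multiple of seven, for obvious reasons
sort(unique(as.numeric(diff(f13))))
```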

#### Conclusion

Overall, a neat little bit of simple analysis. Of course, this is just how I typically think about things, by looking at data first. I know that in this case, the occurrence of things like Friday the 13th (or say, holidays that fall on a certain day of week or the like) is related to the properties of the Gregorian calendar and follows a pattern that you could write specific rules around if you took the time to sit down and work it all out (which is exactly what some Wikipedians have done in the article on Friday the 13th).
I’m not superstitious, but now I know when those unlucky days are coming up, and so do you… and when it’s time to have a movie marathon with everyone’s favourite horror villain who wears a hockey mask.

## Top 100 CEOs in Canada by Salary 2008-2015, Visualized

I thought it’d been a while since I’d done some good visualization work with Tableau, and noticed that this report from the Canadian Centre for Policy Alternatives was garnering a lot of attention in the news.

However, most of the articles about the report did not have any graphs and simply restated data from it in narrative to put it in context, and I found the visualizations within the report itself to be a little lacking in detail. It wasn’t a huge amount of work to extract the data from the report and quickly throw it into Tableau, and put together a cohesive picture using the Stories feature (best viewed on Desktop at 1024×768 and above).

See below for the details – it’s pretty staggering, even for some of the bottom earners. To put things in context, the top earner made \$183M a year all-in, which, if you work 45 hours a week and take only two weeks of vacation per year, translates to about \$81,000 an hour.

Geez, looks like I need to get into a new line of work.

## Mapping the TTC Lines with R and Leaflet

It’s been quite a while since I’ve written a post, but as of late I’ve become really interested in mapping and so have been checking out different tools for doing this, one of which is Leaflet. This is an example of a case where, because of a well-written package for R, it’s easy for the user to create interactive web maps directly from R, without even knowing any JavaScript!

I had three requirements for myself:

1. Write code that created an interactive web map using Leaflet
2. Use Shapefile data about the City of Toronto
3. Allow anyone to run it on their machine, without having to download or extract data

I decided to use shapefile data on the TTC, available from Toronto’s Open Data portal. Point #3 required a little research, as the shapefile itself was buried within a zip, but it’s fairly straightforward to write R code to download and unpack zip files into a temporary directory.

The code is below, followed by the result. Not a bad result for only 10 or 15 lines!

```r
# MAPPING THE TORONTO SUBWAY LINES USING R & Leaflet
# --------------------------------------------------
# Myles M. Harrison
# https://www.everydayanalytics.ca

# install.packages('leaflet')
# install.packages('maptools')
library(leaflet)
library(htmlwidgets)
library(maptools)

# Data from Toronto's Open Data portal: http://www.toronto.ca/open
# Download the file and read in the data
data_url <- "http://opendata.toronto.ca/gcc/TTC_subway%20lines_wgs84.zip"
cur_dir <- getwd()
temp_dir <- tempdir()
setwd(temp_dir)
download.file(data_url, "subway_wgs84.zip")
unzip("subway_wgs84.zip")
sh <- readShapeLines("subway_wgs84.shp")
unlink(dir(temp_dir))
setwd(cur_dir)

# Create a categorical colouring function
linecolor <- colorFactor(rainbow(16), sh@data$SBWAY_NAME)

# Plot using leaflet
m <- leaflet(sh) %>%
  addTiles() %>%
  addPolylines(popup = paste0(as.character(sh@data$SBWAY_NAME)),
               color = linecolor(sh@data$SBWAY_NAME)) %>%
  addLegend(colors = linecolor(sh@data$SBWAY_NAME), labels = sh@data$SBWAY_NAME)
m

# Save the output
saveWidget(m, file = "TTC_leaflet_map.html")
```

## Plotting Choropleths from Shapefiles in R with ggmap – Toronto Neighbourhoods by Population

#### Introduction

So, I’m not really a geographer. But any good analyst worth their salt will eventually have to do some kind of mapping or spatial visualization. Mapping is not really a forte of mine, though I have played around with it some in the past.
I was working with some shapefile data a while ago and thought about how it’s funny that so much of spatial data is dominated by a format that is basically proprietary. I looked around for some good tutorials on using shapefile data in R, and even so it took me a while to figure it out – longer than I would have thought.
So I thought I’d put together a simple example of making nice choropleths using R and ggmap. Let’s do it using some nice shapefile data of my favourite city in the world courtesy of the good folks at Toronto’s Open Data initiative.

#### Background

We’re going to plot the shapefile data of Toronto’s neighbourhoods boundaries in R and mash it up with demographic data per neighbourhood from Wellbeing Toronto.
We’ll need a few spatial plotting packages in R (ggmap, rgeos, maptools).
Also, the shapefile originally threw some kind of weird error when I tried to load it into R, but it was nothing that loading it into QGIS once and resaving wouldn’t fix. The working version is available on the GitHub page for this post.

#### Analysis

First let’s just load in the shapefile and plot the raw boundary data using maptools. What do we get?
```r
# Read the neighbourhood shapefile data and plot
shpfile <- "NEIGHBORHOODS_WGS84_2.shp"
sh <- readShapePoly(shpfile)
plot(sh)
```
This just yields the raw polygons themselves. Any good Torontonian would recognize these shapes. There are maps like these, with words squished into the polygons, hanging in lots of print shops on Queen Street. Also, as someone pointed out to me, most T-dotters think of the grid of downtown streets as running directly North-South and East-West, but it actually sits on an angle.

Okay, that’s a good start. Now we’re going to include the neighbourhood population from the demographic data file by attaching it to the dataframe within the shapefile object. We do this using the merge function – basically this is like an SQL join. Also, I need to convert the neighbourhood number to an integer first so things work, because R is treating it as a string.

```r
# Add demographic data
# The neighbourhood ID is a string - change it to an integer
sh@data$AREA_S_CD <- as.numeric(sh@data$AREA_S_CD)

# Read in the demographic data and merge on Neighbourhood Id
demo <- read.csv(file = "WB-Demographics.csv", header = T)
sh2 <- merge(sh, demo, by.x = 'AREA_S_CD', by.y = 'Neighbourhood.Id')
```
Next we’ll create a nice white to red colour palette using the colorRampPalette function, and then we have to scale the population data so it ranges from 1 to the max palette value and store that in a variable. Here I’ve arbitrarily chosen 128. Finally we call plot and pass that vector of colours into the col parameter:
```r
# Set the palette
p <- colorRampPalette(c("white", "red"))(128)
palette(p)

# Scale the total population to the palette
pop <- sh2@data$Total.Population
cols <- (pop - min(pop)) / diff(range(pop)) * 127 + 1
plot(sh, col = cols)
```
And here’s the glorious result!

Cool. You can see that the population is greater for some of the larger neighbourhoods, notably on the east end and in The Waterfront Communities (i.e. condoland).

I’m not crazy about this white-red palette, so let’s use RColorBrewer’s Spectral, which is one of my faves:

```r
# RColorBrewer, Spectral
library(RColorBrewer)
p <- colorRampPalette(brewer.pal(11, 'Spectral'))(128)
palette(rev(p))
plot(sh2, col = cols)
```

There, that’s better. The dark red neighborhood is Woburn. But we still don’t have a legend so this choropleth isn’t really telling us anything particularly helpful. And it’d be nice to have the polygons overplotted onto map tiles. So let’s use ggmap!

#### ggmap

In order to use ggmap we have to decompose the shapefile of polygons into something ggmap can understand (a dataframe). We do this using the fortify command. Then we use ggmap’s very handy qmap function, which we can pass a search term to just like we would in Google Maps; it fetches the tiles for us automatically, and then we overplot the data using standard calls to geom_polygon, just like in other visualizations using ggplot.

The first polygon call is for the filled shapes and the second is to plot the black borders.

```r
# GGPLOT
points <- fortify(sh, region = 'AREA_S_CD')

# Plot the neighbourhoods
toronto <- qmap("Toronto, Ontario", zoom = 10)
toronto +
  geom_polygon(aes(x = long, y = lat, group = group, alpha = 0.25),
               data = points, fill = 'white') +
  geom_polygon(aes(x = long, y = lat, group = group),
               data = points, color = 'black', fill = NA)
```
Voila!

Now we merge the demographic data just like we did before, and ggplot takes care of the scaling and legends for us. It’s also super easy to use different palettes by using scale_fill_gradient and scale_fill_distiller for ramp palettes and RColorBrewer palettes respectively.

```r
# Merge the shapefile data with the demographic data, using the neighbourhood ID
points2 <- merge(points, demo, by.x = 'id', by.y = 'Neighbourhood.Id', all.x = TRUE)

# Plot
toronto + geom_polygon(aes(x = long, y = lat, group = group, fill = Total.Population),
                       data = points2, color = 'black') +
  scale_fill_gradient(low = 'white', high = 'red')

# Spectral plot
toronto + geom_polygon(aes(x = long, y = lat, group = group, fill = Total.Population),
                       data = points2, color = 'black') +
  scale_fill_distiller(palette = 'Spectral') + scale_alpha(range = c(0.5, 0.5))
```

So there you have it! Hopefully this will be useful for other R users wishing to make nice maps in R using shapefiles, or those who would like to explore using ggmap.

#### References & Resources

Neighbourhood boundaries at Toronto Open Data:
Demographic data from Well-being Toronto:

## Toronto Data Science Group – One or the Other: An Overview of Binary Classification Methods

So Chris was kind enough to invite me to speak at the Toronto Data Science Group again this past Thursday. I spoke on binary classification, and made an effort to cover a fair bit of ground and some technical detail, while still making it accessible. I wanted to give an overview for an audience that was more interested in the ‘how’, and the practical realities of using classification to solve problems within an organization.

As before, I’ll keep my observations more about presenting and less about the content.

The meetup is a lot different now, having presentations at venues like MaRS or the conference room at Thompson Hotel with large audiences, as opposed to the early days when it was much smaller.

Speaking to a larger group is challenging; both in that it’s more nerve-racking, and I also noticed it was harder to make eye contact and include the whole audience than I am used to with smaller groups. The temptation is to just look out straight ahead in front of you. Speaking in front of a podium has its disadvantages this way, but it does keep you anchored and give you something on which to rest your hands and remain centered. Looking back toward the screen is usually a bad idea when presenting regardless of audience size, unless you are pointing something out, and is doubly so when that screen is very large and above you.

Some folks were kind enough to take some photos of me during the talk for social media and the like. In retrospect, while I do try to have a very visual style (and inject some humour with it) I think it can come across as overly simplistic and flippant in certain contexts, such as with this larger group. There’s a balance to be struck there, I’m sure. Also, as always, you need to be mindful of how large you are making things on your slides (especially text), given the size of the screen with respect to the venue.

The point I made about the explainability of different classification methods to the non-technical audience or end consumer (i.e. client) receiving the results of their application was less controversial than I would have thought. Chris commented on this as well.

As always I was overly ambitious and got through a lot less material in the timeframe than I originally thought I would.

I was asked some very insightful and detailed questions, some of which I wasn’t totally prepared to answer. Talking about something is fairly easy, I think, because you can put together exactly what you want to say and rehearse; it’s in the answering of the questions that people decide whether you really know the subject, or are just putting pretty pictures up on the screen and painting in broad verbal strokes. Many people in the audience seemed to have assumed that because I was speaking on the topic of binary classification I was a complete expert on it – there’s a danger here too, I think, when you see anyone give a presentation.

All in all, I think the talk was very well received. As always I learned a lot putting it together, and even more afterward, discussing with Toronto’s data scientists and knowledgeable analysts with insightful points of view.

Looking forward to the next one.

## I’m Dreaming of a White Christmas

I’m heading home for the holidays soon.

It’s been unseasonably warm this winter, at least here in Ontario, so much so that squirrels in Ottawa are getting fat. I wanted to put together a really cool post predicting the chance of a white Christmas using lots of historical climate data, but it turns out Environment Canada has already put together something like that by crunching some numbers. We can just slam this into Google Fusion tables and get some nice visualizations of simple data.

#### Map

It seems everywhere above a certain latitude has a much higher chance of a white Christmas in recent times than places closer to the American border and on the coasts, which I’m going to guess is likely due to how cold it gets in those areas on average during the winter. Sadly, Toronto has less than a coin flip’s chance of a white Christmas in recent times, with only a 40% chance of snow on the ground come the holiday.

#### Chart

But just because there’s snow on the ground doesn’t necessarily mean your yuletide weather is worthy of a Christmas storybook or holiday movie. Environment Canada also has a definition for what they call a “Perfect Christmas”: 2 cm or more of snow on the ground and snowfall at some point during the day. Which Canadian cities had the most of these beautiful Christmases in the past?

Interestingly Ontario, Quebec and Atlantic Canada are better represented here, which I imagine has something to do with how much precipitation they get due to proximity to bodies of water, but hey, I’m not a meteorologist.
A white Christmas would be great this year, but I’m not holding my breath. Either way it will be good to sit by the fire with an eggnog and not think about data for a while. Happy Holidays!