## How Does SPF Work?

So I went on vacation recently, which was nice.

One of the conversations that came up, which I'm sure does for many folks on vacation, was around the application of sunscreen. How often should you re-apply? How long will SPF 50 last vs. SPF 15? And then as we were talking, an even more fundamental question arose - what the hell is SPF anyway, and how does it work?

I'd always assumed in the past, as I imagine many other people do, that it was a linear scale - so SPF 60 was 4x 'as good' as SPF 15. Someone in our group also said that the number was supposed to be a measure of duration of sun exposure: with SPF 60 you could stay in direct sunlight for an hour longer than you normally could without burning, whereas with SPF 15 it'd only be a quarter of that.

As it turns out, neither of these things is true.

## Are the dice in Mario Party fair?

#### Introduction

Over the holidays I was playing a lot of games with friends and family, as one does, and one of those games was Super Mario Party for the Nintendo Switch.

Now, what's interesting about this game is that, in addition to requiring dice rolls like any other board game, you can choose to use character-specific dice, which are unique and have different face values than a standard die. Which dice you can use depends on your character, as well as on the various 'allies' you can acquire when you team up with other playable characters and get the option to use their dice in addition to a bonus.

*Super Mario Party, with Mario holding his custom dice*

So, being the guy that I am, this got me wondering: are all the different dice for the different characters 'fair'? If your goal is to traverse the maximum number of spaces (as it often is), are any of the dice better to use on average than the others?
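One quick way to frame 'better on average' is expected value - just the mean of a die's faces. Here's a minimal sketch in R, where the character die's face values are made-up placeholders rather than the actual in-game dice:

```r
# A die's long-run average movement is the mean of its faces
standard_die <- 1:6
mean(standard_die)    # 3.5

# Hypothetical character die - placeholder faces, not the real in-game values
character_die <- c(0, 0, 3, 3, 3, 8)
mean(character_die)   # ~2.83: worse on average than a standard die
sd(character_die)     # ...but with a very different spread
```

Of course, the mean isn't the whole story: two dice with the same expected value can have very different variances, which matters if you need to land on an exact space.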

## Are We Solving The Wrong Problems With Machine Learning?

Let's talk about corn, and how it gets from growing in fields onto your table.

Below is a video of a corn harvesting machine:

And here is a video of people gathering corn:

So, I hear you say, what does all this have to do with machine learning?

A lot, as it so happens.

## Training an RNN on the Archer Scripts

#### Introduction

So all the hype these days is around "AI", as opposed to "machine learning" (though I've yet to hear an exact distinction between the two), and one of the tools that seems to get talked about most is Google's TensorFlow.
I wanted to play around with TensorFlow and RNNs a little, since they're not the type of machine learning I'm most familiar with, with a low investment of time, to see what kind of outputs I could come up with.

#### Background

A little digging and I came across this tutorial, which is a pretty good brief intro to RNNs; it uses Keras and works character-wise.
This in turn led me to word-rnn-tensorflow, which, expanding on the work of others, uses a word-based model (instead of a character-based one).
I wasn't about to spend my whole weekend rebuilding RNNs from scratch - no sense reinventing the wheel - so I thought it'd be interesting to play around a little with this one, and perhaps give it a more interesting dataset. Shakespeare is okay, but why not something a little more culturally relevant... like, I dunno, the scripts from a certain cartoon featuring a dysfunctional, foul-mouthed spy agency?
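For flavour, here's roughly what the skeleton of a word-level RNN language model looks like using the keras package in R - a hedged sketch with placeholder sizes (vocab_size, seq_length), not the actual word-rnn-tensorflow implementation:

```r
library(keras)

vocab_size <- 5000  # placeholder: number of distinct words in the corpus
seq_length <- 25    # placeholder: words of context per training example

# Predict the next word from the preceding seq_length words
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = vocab_size, output_dim = 128,
                  input_length = seq_length) %>%
  layer_simple_rnn(units = 256) %>%
  layer_dense(units = vocab_size, activation = "softmax")

model %>% compile(
  loss = "sparse_categorical_crossentropy",
  optimizer = "adam"
)
```

The word-based version is the same idea as the character-based one, just with a (much larger) vocabulary of words instead of characters as the prediction targets.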

## When to Use Sequential and Diverging Palettes

#### Introduction

I wanted to take some time to talk about an important rule for the use of colour in data visualization.
The more I've worked in visualization, the more I have come to feel that one of the most overlooked and under-discussed facets (especially for novices) is the use of colour. A major pet peeve of mine, and a mistake I see all too often, is the use of a diverging palette instead of a sequential one, or vice-versa.
So what is the difference between a sequential and a diverging palette, and when is it correct to use each? The answer is one that arises very often in visualization: it all depends on the data, and what you're trying to show.

#### Sequential vs. Diverging Palettes

First of all, let's define what we are discussing here.
**Sequential Palettes**

A sequential palette ranges between two colours (typically built around one "main" colour), going from white or a lighter shade to a darker one by varying one or more parameters in the HSV/HSL colour space (usually saturation, value/luminosity, or both).
For me, at least, varying hue means going between two very distinct colours, and that is usually not good practice if your data vary linearly, as the result is much closer to a diverging palette, which we'll discuss next. There are other reasons why this is bad visualization practice, and, of course, exceptions to this rule, which we will discuss later in the post.
*A sequential palette (generated in R)*
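For example, a sequential palette like the one above can be generated in R with colorRampPalette; the endpoint colours here are just an illustrative choice:

```r
# A sequential palette: white through progressively darker blues
seq_pal <- colorRampPalette(c("white", "darkblue"))(9)

# Preview the palette as a strip of swatches
barplot(rep(1, length(seq_pal)), col = seq_pal, border = NA, axes = FALSE)
```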
**Diverging Palettes**

In contrast to a sequential palette, a diverging palette ranges between three or more colours, with the different colours being quite distinct (usually having different hues).
While technically a diverging palette could have as many colours as you'd like (such as the rainbow palette, which is the default in some software, like MATLAB), diverging palettes usually range between two contrasting colours at either end, with a neutral colour or white in the middle separating the two.
*A diverging palette (generated in R)*
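A diverging palette can be sketched the same way, with two contrasting hues meeting at a neutral white midpoint (again, the specific colours are just for illustration):

```r
# A diverging palette: dark red through white to dark blue
div_pal <- colorRampPalette(c("darkred", "white", "darkblue"))(9)
barplot(rep(1, length(div_pal)), col = div_pal, border = NA, axes = FALSE)
```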

#### When to Use Which

So now that we've defined the two different palette types of interest, when is it appropriate and inappropriate to use them?

The rule for the use of diverging palettes is very simple: they should only be used when there is a value of importance around which the data are to be compared.

This central value is typically zero, with negative values corresponding to one hue and positive the other, though this could also be done for any other value, for example, comparing numbers around a measure of central tendency or reference value.

**A Simple Example**

For example, looking at the Superstore dataset in Tableau, a visualizer might be tempted to make a map such as the one below, with colour encoding the sales in each city:

Here points on the map correspond to cities, sized by the total number of sales and coloured by total sales in dollars. Looks good, right? The cities with the highest sales clearly stick out in green against the dark red.

Well, yes, but do you see a problem? Look at the generated palette:

The scale ranges from the minimum sales in dollars ($4.21) to the max (~$155K), so we cover the whole range of the data. But what about the midpoint? It's just the dead centre between the two (around $77.5K), which doesn't correspond to anything meaningful in the data - so why should the hue change from red to green at that point?

This case is better suited to a sequential palette, since all the values are positive and we're not highlighting a meaningful value around which the range of data falls. A better choice is below:

Here, the full range is covered and there is no midpoint; the palette simply ranges from light green to dark. The extreme values still stand out in dark green, but there is no well-defined centre where the hue arbitrarily changes, so this is a better choice.

There are other ways we could improve this visualization's encoding of quantity as colour: for one, by using endpoints that would be more meaningful to business users instead of just the range of the data (say, $0 to $150K+), and another which we will discuss later.

Taking a look at the two palettes together, it's clear which is the better choice for encoding the always-positive sales dollars metric across its range:

**Going Further**

Okay, so when would we want to use a diverging palette? As per the rule: when there is a meaningful midpoint or other important value you want to contrast the data around.

For example, in our Superstore data, sales dollars are always positive, but profit can be positive or negative, so it is appropriate to use a diverging palette in this case, with one hue corresponding to negative values and another to positive, and the neutral colour in the middle occurring at zero:

Here it is very clear which values fall at the extremes of the range, but also which are closer to the meaningful midpoint (zero): that one city in Montana is in the negative, and the others don't seem to be very profitable either; we can tell they are close to zero by how washed out their colours are.

Tableau is smart enough to set the midpoint at zero for our diverging palette. Again, you could tinker with the range to make the endpoints more meaningful (e.g. round values), as well as vary the range itself: sometimes a symmetrical range for a diverging palette is easier to interpret numerically, though of course you have to keep in mind how this is going to impact the perceptual salience of the colours for the corresponding data.
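The same idea carries over outside of Tableau. For example, here's a minimal sketch in R with ggplot2, where scale_fill_gradient2 pins the neutral colour to a chosen midpoint (zero here); the profit figures are made up purely for illustration:

```r
library(ggplot2)

# Toy data: profit can be negative or positive, so zero is a meaningful midpoint
df <- data.frame(
  city   = LETTERS[1:8],
  profit = c(-60, -25, -5, 5, 20, 45, 80, 120)
)

ggplot(df, aes(x = city, y = 1, fill = profit)) +
  geom_tile() +
  # Diverging scale: one hue below zero, another above, white at zero
  scale_fill_gradient2(low = "darkred", mid = "white", high = "darkblue",
                       midpoint = 0)
```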

## Plotting Choropleths from Shapefiles in R with ggmap – Toronto Neighbourhoods by Population

#### Introduction

So, I'm not really a geographer. But any analyst worth their salt will eventually have to do some kind of mapping or spatial visualization. Mapping is not really a forte of mine, though I have played around with it some in the past.
I was working with some shapefile data a while ago and thought about how it's funny that so much spatial data is dominated by a format that is basically proprietary. I looked around for some good tutorials on using shapefile data in R, and even so it took me longer to figure out than I would have thought.
So I thought I'd put together a simple example of making nice choropleths using R and ggmap. Let's do it using some nice shapefile data of my favourite city in the world courtesy of the good folks at Toronto's Open Data initiative.

#### Background

We're going to plot the shapefile data of Toronto's neighbourhood boundaries in R and mash it up with demographic data per neighbourhood from Wellbeing Toronto.
We'll need a few spatial plotting packages in R (ggmap, rgeos, maptools).
Also, the shapefile originally threw some kind of weird error when I first tried to load it into R, but it was nothing that loading it into QGIS once and re-saving it wouldn't fix. The working version is available on the GitHub page for this post.

#### Analysis

First let's just load in the shapefile and plot the raw boundary data using maptools. What do we get?
```r
# Read the neighbourhood shapefile data and plot
library(maptools)

shpfile <- "NEIGHBORHOODS_WGS84_2.shp"
sh <- readShapePoly(shpfile)
plot(sh)
```
This just yields the raw polygons themselves. Any good Torontonian would recognize these shapes: there are maps like these, with words squished into the polygons, hanging in lots of print shops on Queen Street. Also, as someone pointed out to me, most T-dotters think of the grid of downtown streets as running directly north-south and east-west, but it actually sits on an angle.

Okay, that's a good start. Now we're going to include the neighbourhood population from the demographic data file by attaching it to the dataframe within the shapefile object. We do this using the merge function; basically this is like an SQL join. I also need to convert the neighbourhood number to an integer first so things work, because R is treating it as a string.

```r
# Add demographic data
# The neighbourhood ID is a string - change it to an integer
sh@data$AREA_S_CD <- as.numeric(sh@data$AREA_S_CD)

# Read in the demographic data and merge on Neighbourhood Id
demo <- read.csv(file="WB-Demographics.csv", header=T)
sh2 <- merge(sh, demo, by.x='AREA_S_CD', by.y='Neighbourhood.Id')
```
Next we'll create a nice white-to-red colour palette using the colorRampPalette function. Then we have to scale the population data so it ranges from 1 to the max palette value, and store that in a variable; here I've arbitrarily chosen a palette of 128 colours. Finally we call plot and pass that vector of colours into the col parameter:
```r
# Set the palette
p <- colorRampPalette(c("white", "red"))(128)
palette(p)

# Scale the total population to the palette
pop <- sh2@data$Total.Population
cols <- (pop - min(pop)) / diff(range(pop)) * 127 + 1
plot(sh, col=cols)
```
And here's the glorious result!

Cool. You can see that the population is greater in some of the larger neighbourhoods, notably on the east end and in the Waterfront Communities (i.e. condoland).

I'm not crazy about this white-red palette, so let's use RColorBrewer's Spectral, which is one of my faves:

```r
# RColorBrewer, Spectral
library(RColorBrewer)

p <- colorRampPalette(brewer.pal(11, 'Spectral'))(128)
palette(rev(p))
plot(sh2, col=cols)
```

There, that's better. The dark red neighbourhood is Woburn. But we still don't have a legend, so this choropleth isn't really telling us anything particularly helpful. And it'd be nice to have the polygons overplotted onto map tiles. So let's use ggmap!

#### ggmap

In order to use ggmap, we have to decompose the shapefile of polygons into something ggmap can understand: a dataframe. We do this using the fortify command. Then we use ggmap's very handy qmap function, to which we can pass a search term just like we would in Google Maps, and it fetches the map tiles for us automatically. Finally, we overplot the data using standard calls to geom_polygon, just as in any other ggplot visualization.

The first geom_polygon call is for the filled shapes, and the second plots the black borders.

```r
# GGPLOT
library(ggmap)

points <- fortify(sh, region='AREA_S_CD')

# Plot the neighbourhoods over map tiles of Toronto
toronto <- qmap("Toronto, Ontario", zoom=10)
toronto +
  geom_polygon(aes(x=long, y=lat, group=group, alpha=0.25), data=points, fill='white') +
  geom_polygon(aes(x=long, y=lat, group=group), data=points, color='black', fill=NA)
```
Voila!

Now we merge the demographic data just like we did before, and ggplot takes care of the scaling and legends for us. It's also super easy to use different palettes: scale_fill_gradient for ramp palettes, and scale_fill_distiller for RColorBrewer palettes.

```r
# Merge the shapefile data with the demographic data, using the neighbourhood ID
points2 <- merge(points, demo, by.x='id', by.y='Neighbourhood.Id', all.x=TRUE)

# Plot with a white-to-red gradient
toronto +
  geom_polygon(aes(x=long, y=lat, group=group, fill=Total.Population), data=points2, color='black') +
  scale_fill_gradient(low='white', high='red')

# Spectral plot
toronto +
  geom_polygon(aes(x=long, y=lat, group=group, fill=Total.Population), data=points2, color='black') +
  scale_fill_distiller(palette='Spectral') +
  scale_alpha(range=c(0.5, 0.5))
```

So there you have it! Hopefully this will be useful for other R users wishing to make nice maps from shapefiles, or anyone who'd like to explore ggmap.

#### References & Resources

Neighbourhood boundaries at Toronto Open Data:
Demographic data from Well-being Toronto:

## Toronto Data Science Group – One or the Other: An Overview of Binary Classification Methods

So Chris was kind enough to invite me to speak at the Toronto Data Science Group again this past Thursday. I spoke on binary classification, and made an effort to cover a fair bit of ground and some technical detail, while still making it accessible. I wanted to give an overview for an audience that was more interested in the 'how', and the practical realities of using classification to solve problems within an organization.

As before, I'll keep my observations more about presenting and less about the content.

The meetup is a lot different now, with presentations at venues like MaRS or the conference room at the Thompson Hotel, in front of large audiences, as opposed to the early days when it was much smaller.

Speaking to a larger group is challenging: it's more nerve-racking, and I noticed it was harder to make eye contact and include the whole audience than I'm used to with smaller groups. The temptation is to just look straight out ahead of you. Speaking from a podium has its disadvantages this way, but it does keep you anchored and gives you something on which to rest your hands and remain centered. Looking back toward the screen is usually a bad idea when presenting, regardless of audience size, unless you are pointing something out - and doubly so when that screen is very large and above you.

Some folks were kind enough to take photos of me during the talk for social media and the like. In retrospect, while I do try to have a very visual style (and inject some humour with it), I think it can come across as overly simplistic and flippant in certain contexts, such as with this larger group. There's a balance to be struck there, I'm sure. Also, as always, you need to be mindful of how large you are making things on your slides (especially text), given the size of the screen relative to the venue.

The point I made about the explainability of different classification methods to a non-technical audience or end consumer (i.e. the client receiving the results) was less controversial than I would have thought. Chris commented on this as well.

As always, I was overly ambitious and got through a lot less material in the timeframe than I had originally thought.

I was asked some very insightful and detailed questions, some of which I wasn't totally prepared to answer. Talking about something is fairly easy, I think, because you can put together exactly what you want to say and rehearse; it's in answering the questions that people decide whether you really know the subject, or are just putting pretty pictures up on the screen and painting in broad verbal strokes. Many people in the audience seemed to assume that because I was speaking on the topic of binary classification, I was a complete expert on it - there's a danger here too, I think, whenever you see anyone give a presentation.

All in all, I think the talk was very well received. As always, I learned a lot putting it together, and even more afterward, discussing it with Toronto's data scientists and knowledgeable analysts and hearing their insightful points of view.

Looking forward to the next one.