Generating a cartogram of access to pain medication around the world.

Okay! Now that I have my data in a shiny new shapefile, it's time to make some cartograms using ScapeToad.

The Data:  Morphine per Death as reported by the Global Access to Pain Relief Initiative (GAPRI)

I am working on this project with Kim Ducharme for WGBH, The World, for a series on cancer and access to pain medication, which highlights vast disparities in access between the developed and developing world. Below is a snippet of the data obtained from GAPRI, showing the top 15 countries by amount of morphine available/used per death from cancer and HIV, and the bottom 15 countries for which there were data.

                                   Country   mg Morphine/ Death
                            --------------   -------------------
                             United States   348591.575146
                                    Canada   332039.707309
                               Switzerland   203241.090828
                                   Austria   180917.511535
                                 Australia   177114.495731
                                   Denmark   160465.864664
                Iran (Islamic Republic Of)   149739.130818
                                   Germany   144303.589803
                                   Ireland   140837.280443
                                 Mauritius   121212.701934
                            United Kingdom   118557.183885
                                     Spain   116480.684253
                               New Zealand   112898.731957
                                   Belgium   108881.848319
                                    Norway   106706.195632

And the 15 countries with the least amount of morphine access:

                                   Country   mg Morphine/ Death
                            --------------   -------------------
                                   Burundi       38.261986
                                  Zimbabwe       34.508702
                                     Niger       31.359717
                                    Angola       30.485112
                                   Lesotho       25.998371
                                  Ethiopia       25.323131
                                      Mali       24.713729
                                    Rwanda       23.269946
                                  Cameroon       15.162560
                                      Chad       10.866740
                             Côte D'Ivoire        9.723552
                                  Botswana        9.352994
                                   Nigeria        8.780894
                              Sierra Leone        8.546830
                              Burkina Faso        7.885819

Traditional Cartogram

Based on these morphine/death numbers, in a basic cartogram where each country's area becomes proportional to the metric, Switzerland would be about 60% of the size of the US. But wait… this wasn't what I expected. Gosh, that's ugly and hard to read… And so begins the cartogram study and tweaking experiment. Is there a perfect solution?
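As a quick sanity check of the relative areas a "correct" cartogram should produce, the ratios fall straight out of the values in the tables above. A back-of-the-envelope sketch in Python:

    # In a contiguous cartogram the target area is proportional to the metric,
    # so the area ratio of two countries is just the ratio of their
    # mg-morphine/death values (taken from the tables above).
    us = 348591.575146           # United States
    switzerland = 203241.090828  # Switzerland
    burkina_faso = 7.885819      # Burkina Faso

    print(switzerland / us)   # ~0.58, i.e. Switzerland at roughly 60% of the US area
    print(burkina_faso / us)  # ~0.00002, Burkina Faso all but vanishes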

Morphine per Death (as Mass)

Note the countries of Europe are too constrained to get to their desired sizes, so there is always some error in these images. Regardless, there are two issues: 1) Europe, Africa and Asia are so badly distorted as to become nearly unreadable, and the emphasis shifts to a fish-eye view of Europe with weirdly shaped France and Switzerland. 2) This seems to make the whole story about Europe, de-emphasizing the US and Canada, which have higher usage than any of the European countries, and also taking the focus away from shrunken Africa, Asia and South America.

This seems to be the best that the diffusion-based contiguous cartogram is going to be able to do for this data set. ScapeToad has some options for mesh size and algorithm iterations, none of which seem to significantly affect the output image in this case. The other option is whether to apply your metric to each shape as a "Mass" (as above) or as a "Density". ScapeToad's documentation explains the Mass/Density distinction pretty well.

In our case morphine/death is a "Mass/Mass" ratio, which is also a "Mass". However, for kicks I ran the "Density" option, which is technically wrong (it scales the area of each country based on the metric instead of making the area proportional to the metric, as a traditional cartogram should). Lo and behold, the density image is certainly more satisfying and seems to tell a better story, although it over-emphasizes the role of the US, Canada and Australia, which all dwarf Europe:

Morphine per Death (as Density)

Well, this is a quandary: the "correct" image is too confusing to be useful and takes the focus away from the story about the developing world and into what-the-?-is-this-distorted-picture land. But the "density" image is not "correct".

From here I spent some time trying to generate a less distorted mass-based cartogram. By running the cartogram generation on each continent separately I generated much less distorted images of Europe and Africa (Asia still needs some work). Shown here are the raw outputs for these regions in green, purple and pale blue respectively.

Morphine per Death (as Mass) by region, unscaled

To piece the cartogram back together, the continents needed to be scaled and translated to the correct locations. Here is how far I got in that process. Europe is much easier to read and Africa is a huge improvement. Asia and the Middle East are still quite confusing; there is potential for improvement by breaking this into more chunks, but it was becoming a more and more manual process and the output image still isn't "satisfying".

Morphine per Death (as Mass) each region calculated separately, then scaled appropriately to maintain more recognizable shapes
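For reference, the scale-and-translate step for each region's shapefile can be scripted with pyshp. A rough sketch (the scale factor and offsets are made-up placeholders that had to be tuned by hand):

    import shapefile  # pyshp

    def scale_and_translate(in_name, out_name, scale, dx, dy):
        '''Scale a polygon shapefile about the origin and shift it by (dx, dy).'''
        r = shapefile.Reader(in_name)
        w = shapefile.Writer(r.shapeType)
        w.fields = list(r.fields[1:])      # skip the Reader's 'DeletionFlag' pseudo-field
        w.records.extend(r.records())
        for shp in r.shapes():
            # preserve the multi-part structure (islands etc.) of each shape
            starts = list(shp.parts) + [len(shp.points)]
            parts = [[(x * scale + dx, y * scale + dy)
                      for (x, y) in shp.points[starts[i]:starts[i + 1]]]
                     for i in range(len(shp.parts))]
            w.poly(parts=parts)
        w.save(out_name)

    # e.g. shrink the separately generated Europe cartogram and nudge it into place
    scale_and_translate('europe_cartogram', 'europe_scaled', 0.4, 10.0, 0.0)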

Does this cartogram tell the story we want? Does it really make sense to honor country borders and make small countries as large as big countries that have the same morphine/death value? For example, all things remaining equal, if the German- and French-speaking parts of Switzerland split into two new countries, each with the same morphine/death number, should each of the two halves have a cartogram area equal to the previous Switzerland, effectively doubling the size because of a political change? That doesn't make much sense, but it would be considered a technically "correct" cartogram measure. It seems to me that in some ways scaling the area, as in the Morphine/Death as Density image, is more correct, since it doesn't exaggerate small countries with smaller populations…

In the end it is possible to generate a lot of different cartogram images, some of which are suggestive of the story you want to tell, none of which are easily deciphered to recover actual data values. Keeping in mind that a cartogram isn't a tool for communicating precise data measures, I think you should pick the one that makes sense to you vis-à-vis your data and the story you want to tell, not overstate the accuracy of the image, and provide other means to get at the actual numbers. For example, I created an alternate view in this choropleth map of the same data.

UPDATE 2012/12/03:

The final image is now published as part of PRI's The World series on Cancer's New Battleground — The Developing World.

Access to Pain Medication around the World

Adding custom data to a shapefile using pyshp 1.1.4

As part of a cartogram generating project I need to get data from a .xls file into a shapefile for use with ScapeToad. Reading the Excel file is easy with xlrd. Shapefiles? That is new territory.
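The xlrd side looks roughly like this (a sketch; the file name and column layout are placeholders for illustration):

    import xlrd

    book = xlrd.open_workbook('morphine_data.xls')   # placeholder file name
    sheet = book.sheet_by_index(0)

    # build a country-name -> mg-morphine/death dict, skipping the header row
    data = {}
    for row in range(1, sheet.nrows):
        country = sheet.cell_value(row, 0)        # column 0: country name (assumed)
        data[country] = sheet.cell_value(row, 1)  # column 1: mg morphine / death (assumed)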

Shapefile resources:

ScapeToad requires a high quality shapefile as input. I tested two that worked fine; both include population data suitable for testing:

  1. Thematic Mapping: http://thematicmapping.org/downloads/world_borders.php TM_WORLD_BORDERS_SIMPL-0.3.zip, which is licensed under the Creative Commons Attribution-Share Alike License. This license is not appropriate for commercial use, and the author didn't respond to my question regarding other licensing options.
  2. Natural Earth: On a tip from Andy Woodruff, I switched to using the Natural Earth shapefiles, which are in the public domain and suitable for anything you want! Note, I discovered that the most zoomed-out Natural Earth file, "ne_110m", didn't have shapes for some of the smaller island countries in my data set, so I switched to using the "ne_50m" versions, which included everything I needed.

Next step, getting custom data into the shapefile.

Using pyshp to add data to a shapefile

Since I do most of my data processing in Python these days, I was happy to find a Python module for reading/writing/editing shapefiles. Unfortunately, pyshp is not the best maintained module. I used pyshp version 1.1.4 because it was packaged in Ubuntu. After discovering a number of bugs I realized they had already been reported, but nothing significant seems to have been fixed in 1.1.6, so I will just document the workarounds I used here.

1st pyshp 1.1.4 workaround: Rename the shapefiles to remove any dots in the file name (the 0.3 in the case of the Thematic Mapping shapefiles), because pyshp can't handle extra dots in the file name.

This is kind of a nuisance since there are 4 files in a "shapefile". This command will rename the files with the extensions "dbf", "prj", "shp" and "shx" all at once:

 for a in dbf prj shp shx;do mv TM_WORLD_BORDERS-0.3.$a TM_WORLD_BORDERS_dot_free.$a;done

2nd pyshp 1.1.4 workaround: Massage numeric data you are adding to a record to have the correct precision.

My whole reason for using pyshp is to add data from Excel into the shapefile. This means adding fields to identify the data and then adding the data to the record for each shape. The format of the new attributes (a.k.a. fields) is well described here. In my case I want to add numbers, for example: sf.field('MY_DATA', 'N', 6, 3). The numeric args are width and precision, where width is the total number of characters used to represent the number and precision is the number of characters after the decimal point. The above (6, 3) can encode, for example, -1.234 and 98.765.

Note, pyshp will error (AssertionError: assert len(value) == size) if you put data into the record with greater precision than specified (it will not truncate for you). I used the simple hack below to force a precision of 3 for my data:

    def precision3(n):
        ''' Force argument to precision of 3'''
        return float('%0.3f'%n)
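For example (a tiny sketch; 'MY_DATA' and the value are just illustrative):

    sf.field('MY_DATA', 'N', 6, 3)   # width 6, precision 3, as described above
    value = precision3(98.7654321)   # -> 98.765, which fits the declared width/precision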

3rd pyshp 1.1.4 workaround: When adding a new data field, pad all the records with default data for the new attribute.

pyshp assumes when saving the file that the data is perfectly formatted, but doesn't help much when adding or deleting data. Records are stored in Python as a list of lists; when the shapefile is written, pyshp assumes that the record lengths equal the number of fields (as they should). But it is your job to make sure this is true (if not, the records will wrap around and become nonsense). Q-GIS is useful for inspecting shapefile attribute tables to discover issues and verify that your new shapefile works in an independent reader.

In my case data wasn’t available for all countries, so I padded with a default value (appended to the end of all records when adding the field) and then looped through and put the correct data in the records for which data was available.

Here is an example adding a new field for numeric data and default data to all records. All my data is non-negative, so the magic number "1" just accounts for the decimal point.

    def addField(name, widthMinusPrecision, precision = 3, default = 0): 
        sf.field(name, 'N', widthMinusPrecision+precision+1, precision)
        # add default data all the way down to keep the shape file "in shape"
        for r in sf.records:
            r.append(default)
        return
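With the field padded, the real values can then be filled in for the countries that do have data. A rough sketch, assuming `data` is a country-name-to-value dict (e.g. from the xlrd step) and that the names have already been reconciled with the shapefile's:

    NAME_INDEX = 0   # position of the country-name field in each record (assumption)
    for r in sf.records:
        country = r[NAME_INDEX]
        if country in data:
            r[-1] = precision3(data[country])   # overwrite the padded default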

4th pyshp 1.1.4 workaround: delete() method doesn’t work, don’t use it.

Each shape is described by two pieces of data, linked together by their index. When deleting a shape, both the record (with the metadata) and the shape (with coordinates etc.) must be removed. If only one is deleted, pyshp will add a dummy entry at the end and many of your records and shapes won't line up anymore. The delete() method doesn't handle this; don't use it, do it yourself:

    def deleteShape(n):
        del sf._shapes[n]
        del sf.records[n]
        return

5th pyshp 1.1.4 workaround: Handle non-ascii character encodings yourself

pyshp doesn't declare a character encoding when reading files, so strings default to "ascii" byte strings. If you are using the Natural Earth shapefiles, they have non-ascii characters and are encoded in Windows-1252. (See the previous post for more info about the Natural Earth encoding.) I worked around this by looping over the records and decoding all strings to unicode:

    for r in sf.records:
        for i,f in enumerate(r):
            if isinstance(f, str):
                r[i] = unicode(f,'cp1252')

And then reversed this before saving the file via:

    for r in sf.records:
        for i,f in enumerate(r):
            if isinstance(f, unicode):
                r[i] = f.encode('cp1252')

6th pyshp 1.1.4 workaround: When looking at sf.fields adjust the index by one to ignore ‘DeletionFlag’

pyshp adds an imaginary field for internal state tracking to the beginning of the fields list. If you are looking up field names in this list to find indexes, you should correct your indexes accordingly; there is not actually a field called 'DeletionFlag' in the records.
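For example, a small helper for turning a field name into a record index (a sketch against the fields list as pyshp 1.1.4 presents it; 'NAME' is just an illustrative field name):

    def recordIndex(fieldName):
        '''Index into each record for the given field name, skipping the
           phantom 'DeletionFlag' entry that pyshp puts at fields[0].'''
        names = [f[0] for f in sf.fields]   # each field is (name, type, size, decimal)
        return names.index(fieldName) - 1   # shift by one for 'DeletionFlag'

    name_index = recordIndex('NAME')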

Conclusion:

After working around these bugs and massaging my country names to map from the .xls to the names in the shapefile (17 cases like "Bolivia (Plurinational State Of)" == "Bolivia"), I was able to use pyshp to generate a new shapefile with my data in it! Next up, cartogram-orama.

Unicode with HTML and javascript

OMG, really, character encoding problems again??! The adventure continues now in the browser.

First problem: I tried to use the d3 projection.js module, but including it gives me the error "Uncaught SyntaxError: Unexpected token =" at projection.js line 3. Looking at the file, I was initially confused:

(function() {
  var ε = 1e-6,
      π = Math.PI,
      sqrtπ = Math.sqrt(π);

Until I noticed this module does a lot of fancy math with variables named ζ, μ, λ, π and φ. Aha! Perhaps this is my problem. Lo and behold, my lazy HTML didn't declare a character encoding. The error was resolved by adding the following in the head of index.html:

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

Fast forward some time later… Oh no! My old friend Côte d'Ivoire isn't looking right:

Looks like my shapefile data, now converted to GeoJSON, is still in a non-UTF-8 encoding. Switching the HTML encoding to <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" /> results in the tooltip rendering as expected, but clearly I am not willing to give up on using 3rd-party code that is encoded in UTF-8, am I?

Fortunately there is another easy fix. Simply specify the encoding on the data file when importing it:

<script type="text/javascript" charset="ISO-8859-1" src="non-UTF-8_data_file.js"></script>

And all is well again:

Whew! Previous battles with character encodings came in handy.

Cartogram Basics

I am working on a cartogram of the world with my friend Kim Ducharme. We are looking for something dramatic to show the disparity between affluent countries and developing countries, and a cartogram seems a great way to hit the message home. This is my first foray into the world of cartograms, so here is some useful background.

A cartogram is a map where the areas of regions have been adjusted to represent some other metric of interest. They are intentionally distorted “maps” and yes there is controversy over them :).

Cartograms come in a few different flavors (see indiemaps summary):

  1. Non-contiguous Cartograms: Each object (state/country/etc) grows or shrinks independently of its neighbors. The result is perfectly accurate, but the original map becomes filled with white space. Excellent history of non-contiguous cartograms here.
  2. Dorling Cartograms: Replace the regions with circles or squares. I won't discuss these further.
  3. Contiguous Cartograms: Attempt to keep boundaries connected and distort the shapes (often grossly) to scale the country areas according to some metric. The canonical approach seems to be the Gastner/Newman diffusion method.

Our preliminary investigation showed that the non-contiguous cartogram was not a very satisfying image. Too much white space, and it just doesn't have the punch of a wildly distorted contiguous cartogram.

Here is an example of a diffusion method image generated by Mark Newman; this one represents total spending on healthcare. Check out Mark Newman's site for more in this family:

Having decided to pursue generating a contiguous area cartogram, the first step was to find out how.

Selecting a Cartogram program:

There are a few options I found for generating a contiguous area cartogram:

  1. Download the source for Gastner/Newman's cart program, compile and run it. Looks doable, but I would need to massage my data into some grid format. Maybe someone else has made it easier…
  2. Apparently ArcGIS has a Cartogram Geoprocessing Tool based on the Gastner/Newman method too. But I don’t have a budget for fancy commercial software.
  3. ScapeToad: At last, a ready to go cartogram generating program also based on the Gastner/Newman method. This one has a GUI and is released under the GPL. Perfect!

Using ScapeToad to generate a cartogram:

Using ScapeToad is easy! It has simple instructions and I only ran into a few issues, all easy to work around. It uses ESRI shapefiles and will output the updated image as an ESRI shapefile (with error data added in). I also found the ScapeToad documentation pretty helpful. The only annoying issue on my system was that the recently-used-files selection silently didn't work, so when opening a shapefile (via "Add Layer…") I always had to browse to the correct location. I will discuss the "Mass" and "Density" options in another post. Otherwise there is not much to say; I will let ScapeToad speak for itself.

ScapeToad requires the shapefile to have "perfect contiguity", so find a suitable shapefile and test it before moving on. As discussed in a previous post, the Natural Earth shapefiles are now my go-to. These files conveniently include some population data you can use to test that ScapeToad is working.

More on the real challenges, adding your own data to a shapefile, coming up…

Reprojecting maps with QGIS

I figured out how to reproject maps with Q-GIS and would like to celebrate with this lovely image, a US Atlas Equal Area projection, which I thought was the most fun of the built-in projections in Q-GIS 1.8.0:

If the Q-GIS documentation site is working, it isn't too hard to figure out how to reproject. Quick summary:

  • File >> Project Properties: You can set the project Coordinate Reference System (CRS); if you "Enable 'on the fly' CRS transformation" you can play around with the different projection options.
  • To set the CRS on a shapefile layer, right-click on the layer name and select "Set Layer CRS". This is particularly useful if you don't have 'on the fly' transformation enabled: layers in a different CRS than the current project's will not display, and you can change them here so they will.

Note, for practical equal area projections I found the Mollweide options seemed to work well, but I don't yet understand the difference between the world and sphere options. Anyone?

Funny, it wasn't until I started looking at these equal area projections that I realized how big Russia is!

My introduction to unicode, pyshp.py and Natural Earth ESRI shapefiles

Working on a project for a friend, I am adding data for different countries into a shapefile of the world. On a tip from Andy Woodruff, a cartographer I met at the Hubway Challenge Awards Ceremony, I switched to using the Natural Earth maps, which are completely in the public domain. Great maps, with the added bonus of country names with non-ascii characters!

The data I started with is in Excel, with ALL CAPS country names. I needed to create a mapping from the .xls names to the names in the existing shapefile, then add new fields for my data and add it to the corresponding country shapes. It turns out that in the Excel file CÔTE D'IVOIRE is the only name with any fancy characters. Note, I am new to unicode, so I hope that renders as a capital O with a circumflex in your browser too.

The Python csv module correctly reads the Excel data as utf-8 encoded, so in Python this name is represented by the string cote_str = u"C\xd4TE D'IVOIRE". The 'u' prefix indicates it is a unicode string. When printed to my console using "print cote_str" it is rendered using my terminal's default encoding of utf-8 and displays as desired: CÔTE D'IVOIRE. Using the repr() method I can get at the details (u"C\xd4TE D'IVOIRE") and see that the unicode code point for this character is 0xd4. However, if I encode the string into a utf-8 byte string, I can see the utf-8 encoding for this character (c3 94) as it would be stored in a utf-8 encoded file; see this unicode/utf-8 table for reference:

>>> cote_str.encode('utf-8')
"C\xc3\x94TE D'IVOIRE"

Had I thought to look, I would have seen it clearly documented that "Natural Earth Vector comes in ESRI shapefile format, the de facto standard for vector geodata. Character encoding is Windows-1252." Windows-1252 is a superset of ISO-8859-1 (a.k.a. "latin1"). However, it didn't occur to me to check, and I ran into some unexpected problems, since several country names in this shapefile have non-ascii characters.

The pyshp module doesn't specify an encoding, so Python's default of "ascii" is used. So, for example, I end up with byte strings containing non-ascii characters: "C\xf4te d'Ivoire". When printed, the terminal interprets the bytes as utf-8, but since 0xf4 is not valid utf-8 it renders as: "C�te d'Ivoire". More problematically, other operations won't work; for instance, I need to compare this country name to the ones in the .xls file. Note, I found it confusing at first that unicode code points and latin-1 share values 0-255, but utf-8 encodes the values above 127 differently (because utf-8 uses a variable number of bytes, the upper part of latin1 is not valid utf-8 at all; Wikipedia's description chart shows it well).

The raw byte string:

raw_c = "C\xf4te d'Ivoire"

can be converted to unicode with the proper encoding:

u_c = unicode(raw_c, 'cp1252')

which is now a unicode string (u"C\xf4te d'Ivoire") and will print correctly to the console (because print is converting it to the correct encoding for the console).

Just playing about some more.

raw_utf8 = u_c.encode('utf-8')

raw_utf8 now stores "C\xc3\xb4te d'Ivoire"; note that utf-8 needs two bytes to store the correct ô. This will print correctly on my Linux console because the console uses utf-8.

However, on Windows I again get something weird looking, because the Windows command line uses code page 437 as the console encoding. Using u_c.encode('cp437') gives me a byte string that prints correctly in this case: "C\x93te d'Ivoire". Having fun yet?

Moral of the story, debugging unicode can be confusing at first. Using unicode strings is clearer.

Tired of typing in '\xf4' etc.? You can tell Python the encoding of your source file (instead of the default ascii) by adding a special comment in the first or second line of the file:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# This encoding allows you to use unicode in your source code
st = u'São Tomé and Principe'

Here is a good reference on typing unicode characters in emacs.

Now I am less confused and have all the tools I need to work with these shapefiles in the imperfect but still pretty functional pyshp module.

  1. Convert latin1 binary strings to unicode using unicode(s, 'latin1')
  2. Add the needed custom mapping entries by typing them in unicode.
  3. Convert the unicode strings back to latin1 before saving the shapefile.
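Putting those three steps together, a rough sketch (the mapping dict and its entries are illustrative placeholders):

    # Hand-built .xls-name -> shapefile-name mapping (illustrative entries only)
    XLS_TO_SHAPEFILE_NAME = {
        u"C\xd4TE D'IVOIRE": u"C\xf4te d'Ivoire",
        # ... other custom entries typed in as unicode ...
    }

    # 1. decode the latin1 byte strings coming out of pyshp
    for r in sf.records:
        for i, f in enumerate(r):
            if isinstance(f, str):
                r[i] = unicode(f, 'latin1')

    # 2. ... match names and insert data while everything is unicode ...

    # 3. re-encode before saving so the .dbf keeps its original encoding
    for r in sf.records:
        for i, f in enumerate(r):
            if isinstance(f, unicode):
                r[i] = f.encode('latin1')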

Hacky, but it works.

A better simple map

I am learning to use pyshp (a pretty buggy but functional Python module for reading and writing ESRI shapefiles) and Quantum GIS. As a quick demonstration I replotted the data from an earlier map. Q-GIS makes it pretty easy to adjust the appearance. The world map shapefile is from Natural Earth.

This time I generated a shapefile from Python directly, which was super easy (I will highlight problems with pyshp in a future post, but creating this simple file worked fine, although it doesn't seem to define a CRS; I am pretty sure it is WGS 84). Here is the code:

import shapefile

# stations is a list of station objects with .name, .lon and .lat
# (from the earlier Hubway analysis)
w = shapefile.Writer(shapefile.POINT)
max_len = max([len(s.name) for s in stations])
w.field('NAME', 'C', '%i' % max_len)
for s in stations:
    w.record(s.name)
    w.point(s.lon, s.lat)

w.save('stations')

Picture viewing web page

I wanted to share pictures taken at our Antony and Cleopatra reading this week and decided to create a page with thumbnails and links to the full pictures. Sure, there are programs out there that do it and host it for you, but it is so hard to get at the full-res images and I guess they just annoy me, so I wanted to do it old school. Also, I figure it is a good opportunity to practice some of the skills I have been exploring lately.

1) Write a shell script to shrink the image files and create thumbnails. Actually, putting it in a shell script is overkill, but it's the first shell script I have created, and it's pretty simple:

  • create a file with a .sh extension and give it executable permissions with "chmod +x [name].sh"
  • add "#!/bin/sh" at the top
  • add the commands that you want to execute from the command line below

2) Shrink all the files. I used ImageMagick (convert) to do this; the -thumbnail option is designed for creating thumbnails, and the x300 makes the final image 300 pixels tall while maintaining the aspect ratio. The command I ended up using was:

ls *.JPG | xargs -I {} convert {} -thumbnail x300 thumbs/thumb_{}

3) I used a little JavaScript to generate the web page:

  • first created a list of all the files using underscore.js, then
  • used d3.js to add a link for each image, displaying the thumbnail and linking to the full-size image

The resulting picture page is not fancy, but functional.

Bikes In / Bikes Out — Hubway Data Viz group submission

Winner of Best Analysis!

Together with Kim Ducharme, Kenn Knowles and Verena Tiefenbeck, I created an interactive visualization of bike movements throughout the city on a typical weekday. For the Hubway bike sharing system to work, there must be a bike available when desired and an empty dock available when returning. At peak hours at some stations, the imbalance introduced by commuters is enough that Hubway has to resupply and remove bikes during rush hour. The visualization we created allows you to see the bikes in and out of a station throughout the day, and using the map you can see the imbalance throughout the day. It is interesting that while the flow is high at lunch time, for example, it is pretty evenly balanced across the city. During prime commuting hours, however, the commuting patterns are visible, and there are different patterns around entertainment spots like Harvard Square in the evenings.

Check out the interactive Bikes In / Bikes Out visualization and the other entries in the Hubway Challenge.


Many thanks to the challenge organizers for hosting a Hackathon where I met my team for the first time, and to the team members for being awesome to work with and for making this happen in such a short time.

Kim — Thanks for laying out the concept, immediately giving us something to work towards, and for your keen eye for visual details and the color scheme. Sorry we couldn't implement all the nice-to-haves! Your enthusiasm and encouragement were great as well.

Kenn — Wow, working with you was a great introduction to so many technologies on my "to learn" list and several I hadn't even heard of yet. It's almost embarrassing to list them, but here goes: this was the first time I used CSS, Compass/Sass, git/GitHub, and the JavaScript libraries d3.js, underscore.js, jquery.js and knockout.js. Thanks for putting the infrastructure and architecture together; there is no way I could have put this together on my own, but I feel well positioned for it next time.

Verena — Thanks for your enthusiasm and insights on data trends, especially casual versus registered usage patterns. I'm sorry we didn't get to highlight that more!

Hubway Data Visualization Submission!

Click on the image below to see the full-resolution image of my submission to the Hubway Data Challenge (which is huge and should be made into a poster; it's the only reasonable way to view it, sorry!). The explanation of the graphic is included below. Do you like it? Check out this and the other visualizations. This entry was 16th in the popular vote of 67 entries. Not bad for a day's work!

Hubway Station Connectivity Matrix

This graphic provides a breakdown of all station to station Hubway traffic, by month and by hour for the first 15 months of Hubway operation.

At the macro level, looking from top to bottom, you can see the growth of Hubway traffic from when the system started in July 2011 (top row) to August 2012 on the bottom. Seasonal decline is visible near the winter months when Hubway does not operate (central black band). Notice the top-bottom symmetry around the winter, with fewer evening riders in November and March.

Variation throughout the day is visible from left to right. Quiet activity in the night swells into commuter/work-day traffic. If you look very carefully you may see commuter bands at 8am and a more general swell between 4-6pm, which is explained by higher casual use of the system in the afternoons.

At the pixel level you can see the inter-connectivity of the 95 Hubway stations represented in the data set. Each hour/month combination creates a 95×95 pixel matrix. Each pixel in this grid is colored to represent the number of trips that originated at the station given by the pixel's row and ended at the station given by its column. The stations are organized roughly by neighborhood, and the station list with pixel indexes is provided at the right.

The first thing to notice in these small matrices is the diagonal band running from upper left to lower right. It represents the 7% of total trips that started and ended at the same Hubway station. Clusters around the diagonal indicate traffic within neighborhoods. Boston, with the commuting hot spots of North Station and South Station, is unsurprisingly the brightest region of the matrix. Interestingly, in the evenings the city traffic doesn't really stand out.

Looking at the top-to-bottom trend you can see the addition of stations throughout Hubway's operation, most notably several new stations in July and August 2012, which fill in the empty spaces from before those stations were installed. It is interesting how interconnected the stations are, with even the longest station-to-station distances having been traveled by some intrepid rider. It is also interesting to note that approximately half of the colored pixels represent only one trip made between those stations during that month. Most of the traffic is the expected commuters and tourists.

How I made it

This visualization is based on the station connectivity matrix images I had generated earlier when looking at the Hubway data. Several people thought they were interesting, but I struggled with how to capture the trends. Should I do an animation? Something with slider bars where you could move forward by month or by hour? There are so many stations that it is really hard to label them on a plot, and it looks all lop-sided and imbalanced. Also, the matrix really flattens the relationship between the station locations, but trying to put it on a map is just a mess as well (although some people have done some great things in the Hubway Visualization Challenge; you should check it out).

Hubway extended their deadline, and since our team submission was nearly complete, I decided to see what I could do with the data I had gathered in the 24 hours before the deadline. First I made a web page and was planning to throw in some slider bars and tooltips/mouse-over information to provide the data and highlight station names. That was going okay. One of the problems is that this is really a large amount of data; each plot has nearly 10,000 data points and I wanted to show the breakdown by hour within each month. Decent-looking .pngs were too big. I spent some time thinking about how to store and transmit the data.

Then I took a mini break and finished re-reading Tufte's data visualization book, where I was reminded of the idea of small multiples as a way of presenting data. Whew! Could I do that?

Okay, I created a web page with small multiples (back to my novice web-layout nightmares of flowing images). The PNGs at 1″×1″ were looking pretty bad.

Next I tried to generate an .svg with the data from one matrix, to then see if I could composite them together. Just one crashed Inkscape. It's a lot of data and a lot of little vector circles! Although I liked the number of trips being represented by circles, the sheer magnitude of the data pretty much ruled that out.

So a pixel map it is. Then I started getting super excited: I could align the matrices so that a new one starts every 100 pixels, and then the pixel coordinates themselves would serve as the legend! In the end I didn't like the visual padding between the matrices, so they are at 95-pixel intervals, corresponding to the 95 stations.

I ended up generating the main image using the Python Imaging Library; since my data was all in Python already, it was extremely easy! I then added the text and legend with GIMP. I spent an inordinate amount of time trying out different color schemes and wasn't really happy with any of them. The data distribution is very non-linear: half of the colored pixels represent a single trip. This is definitely an area worth more study.
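For the curious, the core of the PIL step looks roughly like this (a sketch; `counts` and the color ramp are simplified stand-ins for the real data and palette):

    from PIL import Image

    N = 95  # number of Hubway stations in the data set

    def color(trips):
        '''Map a (very non-linear) trip count to an RGB color; simplified ramp.'''
        if trips == 0:
            return (0, 0, 0)        # black background
        if trips == 1:
            return (0, 60, 120)     # roughly half the lit pixels are single trips
        if trips < 10:
            return (0, 140, 200)
        return (255, 60, 60)        # very high traffic

    def matrix_image(counts):
        '''Render one 95x95 hour/month matrix; counts[origin][destination] = trips.'''
        img = Image.new('RGB', (N, N))
        for row in range(N):            # origin station
            for col in range(N):        # destination station
                img.putpixel((col, row), color(counts[row][col]))
        return img

    # the per-hour/month tiles are then pasted into one big image at
    # 95-pixel intervals with Image.paste()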

I am super excited about how many of the trends we know about were visible in this graphic. I am also excited about the amount of real underlying data I was able to include. Unfortunately the image is huge. But maybe I will have a poster made to celebrate my first visualization entry of all time.

Errata

The pixel count stated on the graphic is too high: I removed the data from October 2012, which was incomplete (including it was visually confusing and misleading in terms of the trends), but forgot to update the numbers.

The legend doesn't include all of the colors used. I struggled to find a color scheme that was pleasing and also conveyed the hot spots; I conclude it is hard to illustrate hot spots with only one pixel. Magenta and purple pixels all count into the red category of very high traffic. A handful of pixels represent over 100 trips in the specified month/hour.

ToDo’s

I would like to try grouping stations by popularity within neighborhoods. I suspect with this change some neighborhoods would be easier to identify and provide a better “legend” of sorts.

Learn a vector drawing package and render the text in a vector form.

Correct issues above, perhaps tweak the color scheme some more or try grayscale.

Rotate the month-axis markers to be horizontal, adjust the paragraph width of the description, and generally perfect the layout.