My introduction to unicode, pyshp.py and Natural Earth ESRI shapefiles

Working on a project for a friend, I am adding data for different countries into a shapefile of the world. On a tip from Andy Woodruff, a cartographer I met at the Hubway Challenge Awards Ceremony, I switched to using the Natural Earth maps, which are completely in the public domain. Great maps, with the added bonus of country names with non-ascii characters!

The data I started with is in Excel, with ALL CAPS country names. I needed to create a mapping from the .xls names to the names in the existing shapefile, then add new fields for my data and attach them to the corresponding country shapes. It turns out that in the Excel file CÔTE D'IVOIRE is the only name with any fancy characters. (Note, I am new to unicode, so I hope that renders as a capital O with a circumflex in your browser too.) The python csv module reads the utf-8 encoded file as byte strings, and after decoding, this name is represented in python as the unicode string cote_str = u"C\xd4TE D'IVOIRE"; the 'u' prefix indicates it is a unicode string. When printed to my console using "print cote_str" it is rendered in my terminal's default encoding, utf-8, and displays as desired: CÔTE D'IVOIRE. Using repr() I can get at the details, u"C\xd4TE D'IVOIRE", and see that the unicode code point for this character is 0xd4. However, if I encode the string into a utf-8 byte string, I can see the utf-8 encoding for this character (c3 94) as it would be stored in a utf-8 encoded file; see this unicode/utf-8 table for reference:

>>> cote_str.encode('utf-8')
"C\xc3\x94TE D'IVOIRE"
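For reference, here is roughly how that name comes out of the csv file in the first place. This is a minimal python 2 sketch; the filename and column index are assumptions:

import csv

# Hypothetical filename and column layout for the Excel data saved as
# a utf-8 encoded csv; csv gives back byte strings, which we decode.
f = open('country_data.csv', 'rb')
for row in csv.reader(f):
    name = row[0].decode('utf-8')   # byte string -> unicode string
    print repr(name)                # e.g. u"C\xd4TE D'IVOIRE"
f.close()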

Had I thought to look I would have seen it clearly documented that "Natural Earth Vector comes in ESRI shapefile format, the de facto standard for vector geodata. Character encoding is Windows-1252." Windows-1252 is a superset of ISO-8859-1 (a.k.a. "latin1"). However, it didn't occur to me to check and I ran into some unexpected problems, since several country names in this shapefile had non-ascii characters.

The pyshp module doesn't specify an encoding, so python's default, "ascii", gets used. So, for example, I end up with byte strings containing non-ascii characters: "C\xf4te d'Ivoire". When printed to the terminal the bytes are interpreted as utf-8, but since 0xf4 doesn't start a valid utf-8 sequence here it renders as: "C�te d'Ivoire". More problematically, other operations won't work either; for instance, I need to compare this country name to the ones in the .xls file. I found it confusing at first that unicode and latin-1 share code point values for 0-255, while utf-8 encodes everything above 127 differently (because of how utf-8 uses a variable number of bytes, the upper part of latin1 is not valid utf-8 at all; wikipedia's description chart shows it well).
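You can see the mismatch directly in the interpreter: the same byte string decodes fine as latin1 but blows up as utf-8 (the exact error message may vary by python version):

>>> "C\xf4te d'Ivoire".decode('latin1')
u"C\xf4te d'Ivoire"
>>> "C\xf4te d'Ivoire".decode('utf-8')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf8' codec can't decode byte 0xf4 in position 1: invalid continuation byte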

The raw byte string:

raw_c = "C\xf4te d'Ivoire"

can be converted to unicode with the proper encoding:

u_c = unicode(raw_c, 'cp1252')

which is now a unicode string (u"C\xf4te d'Ivoire") and will print correctly to the console (because print converts it to the correct encoding for the console).

Just playing around some more:

raw_utf8 = u_c.encode('utf-8')

raw_utf8 now stores "C\xc3\xb4te d'Ivoire"; note that utf-8 needs two bytes to store the accented o. This will print correctly on my linux console because the console uses utf-8.

However, on windows I again get something weird looking, because the windows command line uses code page 437 as the console encoding. Using u_c.encode('cp437') gives me a byte string that prints correctly in that case: "C\x93te d'Ivoire". Having fun yet?
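If you are not sure what your console expects, python will tell you. A quick check, assuming an interactive session with u_c defined as above:

>>> import sys
>>> sys.stdout.encoding        # 'UTF-8' on my linux box, 'cp437' on windows
'UTF-8'
>>> print u_c.encode(sys.stdout.encoding)
Côte d'Ivoire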

Moral of the story: debugging unicode can be confusing at first, and working with unicode strings (rather than raw byte strings) is clearer.

Tired of typing '\xf4' and friends? You can tell python that your source file uses an encoding other than the default ascii by adding a special comment in the first or second line of the file:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# This encoding allows you to use unicode in your source code
st = u'São Tomé and Principe'

Here is a good reference on typing unicode characters in emacs.

Now I am less confused and have all the tools I need to work with these shapefiles in the imperfect but still pretty functional pyshp module.

  1. Convert latin1 byte strings to unicode using unicode(s, 'latin1')
  2. Add the needed custom mapping entries by typing them in unicode.
  3. Convert any unicode strings back to latin1 before saving the shapefile.

Hacky, but it works.
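For reference, here is a rough sketch of that workflow with pyshp. The shapefile names, the index of the name field, and the data mapping are all hypothetical:

import shapefile

r = shapefile.Reader('ne_110m_admin_0_countries')
w = shapefile.Writer(r.shapeType)
# copy the field definitions, skipping the dbf deletion flag
w.fields = [f for f in r.fields if f[0] != 'DeletionFlag']
w.field('MY_DATA', 'N', 10, 0)     # new numeric field for my data

# step 2: upper-cased .xls names -> values, typed in as unicode
xls_data = {u"C\xd4TE D'IVOIRE": 42}

for sr in r.shapeRecords():
    # step 1: decode the latin1 byte string into unicode for comparison
    name = unicode(sr.record[4], 'latin1')
    value = xls_data.get(name.upper(), 0)
    # step 3: the existing fields stay latin1 byte strings when written back
    w.records.append(list(sr.record) + [value])
    w._shapes.append(sr.shape)     # _shapes is internal: hacky, as promised

w.save('world_with_my_data')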

Picture viewing web page

I wanted to share pictures taken at our Antony and Cleopatra reading this week, and decided to create a page with thumbnails and links to the full pictures. Sure, there are programs out there that do it and host for you, but it is so hard to get at the full-res images, and I guess they just annoy me, so I wanted to do it old school. Also, I figure it is a good opportunity to practice some of the skills I have been exploring lately.

1) Write a shell script to shrink the image files and create thumbnails. Actually, putting it in a script at all is overkill, but it's the first shell script I have created. Pretty simple:

  • create a file with a .sh extension and give it executable permissions with "chmod +x [name].sh"
  • add "#!/bin/sh" at the top
  • add the commands that you want to execute from the command line below

2) Shrink all the files. I used imagemagick (convert) to do this; the -thumbnail option is designed for creating thumbnails, and x300 makes the final image 300 pixels tall while maintaining the aspect ratio. The command I ended up using was:

ls *.JPG | xargs -I {} convert {} -thumbnail x300 thumbs/thumb_{}

3) I used a little JavaScript to generate the web page.

  • first created a list of all the files using underscore.js, then
  • used d3.js to populate the appropriate number of links to display the thumbnails as links to the full size images

The resulting picture page is not fancy, but functional.

Reading HTML with Python

Today I read some HTML data from a table using Python. The tools I ended up using to get the data were:

import urllib
# url holds the address of the page with the table, as a string
f = urllib.urlopen(url)
# Read from the object, storing the page's contents in 's'.
s = f.read()
f.close()

Then, to parse the tables in the HTML, I used lxml, which can read HTML and provides an ElementTree-like API.

from lxml import etree
html = etree.HTML(s)

I was then able to do everything I needed by calling list() on an element to get its children and using the .text and .tag attributes to pull the text out of tables and links as desired.
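For example, here is a small sketch of walking a table this way; the page structure is an assumption, so adjust to taste:

table = html.find('.//table')      # the first table in the page
for row in table:                  # the <tr> elements
    for cell in row:               # the <td>/<th> elements
        print cell.tag, cell.text
        link = cell.find('a')      # the first link in the cell, if any
        if link is not None:
            print link.text, link.get('href')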

Putting data on a map

Here is my first plot of some data on a map. This happens to be the locations of seismic monitoring stations around the world.

I used an equirectangular projection of the world map from wikipedia, which means that lat/long coordinates map linearly to pixel coordinates. I also used the python csv library to read the data and the python imaging library to modify the image with red dots at the monitoring station locations.
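The projection math is just linear scaling. Here is a sketch of the whole thing; the filenames and the csv column order are assumptions:

import csv
import Image    # the python imaging library (PIL)

im = Image.open('world_equirectangular.png')
width, height = im.size

f = open('stations.csv', 'rb')
for row in csv.reader(f):
    lat, lon = float(row[0]), float(row[1])
    # equirectangular: longitude and latitude map linearly to pixels
    x = int((lon + 180.0) / 360.0 * (width - 1))
    y = int((90.0 - lat) / 180.0 * (height - 1))
    im.putpixel((x, y), (255, 0, 0))    # a single red pixel
f.close()

im.save('stations_map.png')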

Python CSV library and matplotlib

I reworked the previous data using python's CSV library, which was embarrassingly easy and saved much time over dealing with escaped quotes and commas by hand. :) I also used matplotlib for the first time. For the most part I like it, since I can leverage my knowledge of MATLAB plotting. It is also really nice to use list comprehensions to generate labels.

It only took a moment to find the documentation for custom labels (if you already know 'xticks' it is easy to find!). Here b holds the starting value of each range, and the list comprehension labels the binned data in the bar graph:

pyplot.xticks(b, [str(bi) + '-' + str(bi + 10) for bi in b], rotation=30)
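Put together with some placeholder bin counts (this is made-up data, not the HBR numbers):

from matplotlib import pyplot

b = range(0, 100, 10)       # the starting value of each bin
counts = [5, 12, 20, 18, 25, 30, 28, 35, 40, 22]    # placeholder data

pyplot.bar(b, counts, width=8)
pyplot.xticks(b, [str(bi) + '-' + str(bi + 10) for bi in b], rotation=30)
pyplot.show()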

So… The reveal: my first plot of data for the Kaggle Harvard Business Review Competition.

I am not going to try to put into words how embarrassing it is, but I think that's the point. I've massaged the data provided by the Harvard Business Review on articles they have published in the last 90 years, ignoring the interesting data like the titles and abstract text.

I used python to process this data and then plotted it in OpenOffice. Clearly the *beginning* of something.