11.15.13

Cartographic “suicide caucus” map

Posted in Art, Maps at 10:13 am by ducky

Ryan Lizza posted an article and map in The New Yorker which showed the locations of the US Congressional Districts whose Representatives backed the US federal government’s shutdown in an attempt to defund Obamacare.  Here is a version of the map which I made, with yellow for the “suicide caucus”:

shutdownSigner-congressionalDistrict-2011-2013

The article and map were good.  I liked them.  But there’s a real danger when looking at a map that you will — consciously or unconsciously — mentally equate the relative number of pixels on a map with the relative number of people.

Unfortunately, the geographical distribution of people is wildly, wildly uneven: from 0.04 people per square mile in the Yukon-Koyukuk Borough to more than 70,000 people per square mile in Manhattan.  Yes, Manhattan has 1.75 MILLION times as many people per square mile as rural Alaska.

The map above makes it look like a higher percentage of congresspeople supported the shutdown than actually did.  If you look at the shutdown districts on a cartogram — a map where the area of a congressional district is distorted to be proportional to the population of that district — instead, it becomes even clearer just how few representatives were involved.

shutdownSigner-congressionalDistrictPopCart-2011-2013

I have made a web page where you can explore congressional districts yourself.

In addition to seeing the maps above, you can also see thematic maps (both cartogram and regular) of

  • percent without insurance
  • percent white
  • median family income
  • median gross rent
  • median home value
  • percent living in poverty
  • percent of children living in poverty
  • percent of elderly living in poverty
  • median age
  • congressional election results from 2012

Additionally, if you click on a congressional district, you can see who represents that district, plus all of the above values for that district.  If you click on the map away from a congressional district, you can see a table comparing the shutdown districts with the non-shutdown districts.

You can also look at maps for the presidential 2012 election results and seasonally-adjusted unemployment, but because those are county-based figures, you can’t do a strict comparison between shutdown/non-shutdown districts, so they aren’t in the comparison table or the per-district summaries.

Implementation notes

I used ScapeToad to generate the cartograms.  It was a lot of trial and error to figure out settings which gave good results.  (For those of you who want to make cartograms themselves: I defined cartogram parameters manually, and used 6000 points for the first grid, 1024 for the second, and 3 iterations.)

I used QGIS and GRASS to clean it up afterward (to remove slivers and little tiny holes left between the districts sometimes) and to merge congressional districts to make cartogram shapes.

NB: I use the state boundaries which I created by merging the cartogramified congressional districts, even for the maps which are based on counties (e.g. unemployment and the presidential results).  It is pretty impressive how well the merged congressional district state boundaries match the county cartogram state borders.  It wasn’t at all obvious to me that would happen.  You could imagine that ScapeToad might have been more sensitive to the shapes of the counties, but somehow it all worked.  Kudos to ScapeToad!

At some zoom levels, not all the district boundaries get drawn.  That’s because I don’t want the map to be all boundary when way zoomed out, so I check the size before drawing boundaries.  If the jurisdiction is too small, I don’t draw the boundary.
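The rule is simple enough to sketch in a few lines of Python (the threshold and function name here are invented for illustration, not my actual code):

```python
MIN_BOUNDARY_PIXELS = 4  # hypothetical cutoff; tune to taste

def should_draw_boundary(bbox_width_px, bbox_height_px):
    """Skip outlines for jurisdictions only a few pixels across at this
    zoom level, so a zoomed-out map isn't all boundary."""
    return max(bbox_width_px, bbox_height_px) >= MIN_BOUNDARY_PIXELS
```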

As a starting point, I used the Congressional District shapefiles from the US Census Bureau. For the population used for generating the cartogram, I used the Census Bureau American Community Survey 2011 values.  For the other map attributes, I specify the source right under the “Color areas by”.

I made the map tiles myself, using custom PHP code.  You can read more about it in Optimizing Map Tile Generation.  I came up with my own algorithm for showing city labels.

10.06.13

Google Glasses app to help autistic people?

Posted in Random thoughts, Technology trends at 2:11 pm by ducky

I have heard that looking at faces is difficult for people with autism. I don’t understand it, but the impression I have gotten from reading descriptions by high-functioning autistic adults is that the facial recognition hardware has a bug which causes some sort of feedback loop that is uncomfortable.

What if there were a Google Glasses application which put ovals in front of people’s faces? Blue ovals if they are not looking at you, pink ovals if they are. Maybe a line to show where the center line of their face is.

Maybe that would make it more comfortable to be around collections of people.

04.18.13

Variably-sized points on maps

Posted in Hacking, Maps at 1:20 am by ducky

The Huffington Post made a very nice interactive map of homicides and accidental gun deaths since the shooting at Sandy Hook.  It’s a very nice map, but it has the (very common) problem that it mostly shows where the population density is high: of course you will have more shootings if there are more people.

I wanted to tease out geographical/political effects from population density effects, so I plotted the gun deaths on a population-based cartogram.  Here was my first try.  (Click on it to get a bigger image.)

Unfortunately, the Huffington Post data gives the same latitude/longitude for every shooting in the same city.  This makes it seem like there are fewer deaths in populated areas than there really are.  So for my next pass, I did a relatively simple map where the radius of the dots was proportional to the square root of the number of gun deaths (so that the area of the dot would be proportional to the number of gun deaths).

 

 

This also isn’t great.  Some of the dots are so big that they obscure other dots, and you can’t tell if all the deaths were in one square block or spread out evenly across an entire county.

For the above map, for New York City, I dug through news articles to find the street address of each shooting and geocoded it (i.e. determined the lat/long of that specific address). You can see that the points in New York City (which is the sort of blobby part of New York State at the south) seem more evenly distributed than for e.g. Baltimore.  Had I not done that, there would have been one big red dot centered on Manhattan.

(Side note: It was hugely depressing to read article after article about people’s — usually young men’s — lives getting cut short, usually for stupid, stupid reasons.)

I went through and geocoded many of the cities.  I still wasn’t satisfied with how it looked: the size balance between the 1-death and the multiple-death circles looked wrong.  It turns out that it is really hard — maybe impossible — to get area equivalence for small dots.  The basic problem is that radii must be whole numbers of pixels.  In order to get the area proportional to gun deaths, you would want the radius to be proportional to the square root of the number of gun deaths, or {1, 1.414, 1.732, 2.0, 2.236, 2.449, 2.645, 2.828, 3.000}, but the rounded numbers will be {1, 1, 2, 2, 2, 2, 3, 3, 3}; instead of areas of {pi, 2*pi, 3*pi, 4*pi, …}, you get {pi, pi, 4*pi, 4*pi, 4*pi, 4*pi, 9*pi, 9*pi, 9*pi}.
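The arithmetic is easy to see in a quick Python sketch:

```python
import math

deaths = range(1, 10)
ideal_radii = [math.sqrt(n) for n in deaths]   # 1, 1.414, ..., 3.0
pixel_radii = [round(r) for r in ideal_radii]  # forced to whole pixels
areas = [math.pi * r * r for r in pixel_radii]
```

Nine distinct death counts collapse into only three dot sizes.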

Okay, fine.  We can use a trick like anti-aliasing, but for circles: if the square root of the number of gun deaths is between two integer values (e.g. 2.236 is between 2 and 3), draw a solid circle with a radius of the just-smaller integer (for 2.236, use 2), then draw a transparent circle with a radius of the just-higher integer (for 2.236, use 3), with the opacity higher the closer the square root is to the higher number. Sounds great in theory.
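As a Python sketch (the function name is mine):

```python
import math

def circle_layers(deaths):
    """Return (solid_radius, outer_radius, outer_opacity) for a dot whose
    area should be proportional to the death count: a solid circle at the
    just-smaller radius plus a translucent circle at the just-larger one."""
    r = math.sqrt(deaths)
    inner = math.floor(r)   # solid circle
    outer = math.ceil(r)    # translucent circle drawn on top
    opacity = r - inner     # closer to the bigger radius -> more opaque
    return inner, outer, opacity
```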

In practice, however, things still didn’t look perfect.  It turns out that for very small dot sizes, the dots’ approximation of a circle is pretty poor.  If you actually zoom in and count the pixels, the area in pixels is {5, 13, 37, 57, 89, 118, 165, …} instead of what pi*R^2 would give you, namely {3.1, 12.6, 28.3, 50.3, 78.5, 113.1, 153.9, …}.

 

But wait, it’s worse: combine the radius-rounding problem with the circle-approximation problem, and the area in pixels will be {5, 5, 13, 13, 13, 13, 37, 37, 37, …}, for errors of {59.2%, -60.2%, -54.0%, -74.1%, -83.4%, -67.3%, -76.0%, …}.  In other words, the 1-death dot will be too big and the other dots will be too small.  Urk.

Using squares is better.  You still have the problem of rounding the radius, but you don’t have the circle approximation problem.  So you get areas in pixels of {1, 1, 4, 4, 4, 4, 9, 9, 9, …} instead of {1, 2, 3, 4, 5, 6, 7, 8, 9, …}, for errors of {0.0%, -50.0%, 33.3%, 0.0%, -20.0%, -33.3%, 28.6%, …}, which is better, but still not great.

Thus: rectangles:

Geocoding provided by GeoCoder.ca
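One plausible way to pick the rectangle dimensions — this is a sketch, not necessarily what the map above does — is to take the most nearly square factor pair, so the pixel area is exactly proportional to the count:

```python
def rectangle_for(deaths):
    """Most nearly square w x h with w * h == deaths.  Hypothetical scheme:
    prime counts still come out skinny, so a real implementation might
    allow the area to be off by a pixel or two for a nicer shape."""
    best = (1, deaths)
    for w in range(1, int(deaths ** 0.5) + 1):
        if deaths % w == 0:
            best = (w, deaths // w)
    return best
```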

 

03.14.13

My Dream Job

Posted in Maps at 12:11 am by ducky

I imagine some epidemiologist somewhere, who has statistics on something like the measles rate by postal code, who wants to see if there is a geographic trend, like whether warmer places have more measles. She has a spreadsheet with the postal codes and the number of cases in each postal code, and wants to turn that into a map where each postal code’s colour represents the number of cases per capita in that postal code.

She should not need to know what a shapefile is, should not need to know that the name of the map type she wants is “choropleth”, and should not have to dig up the population of that postal code. The boundaries of the jurisdictions she cares about (postal codes, in this case) and the population are well-understood and don’t change often; the technology to make such a map ought to be invisible to her. She should be able to upload a spreadsheet and get her map.

I find it almost morally wrong that it is so hard to make a map.

Making that possible would be my dream job. It is a small enough job that I could do it all by myself, but it is a large enough job that it would effectively prevent me from doing other paying work for probably about a year, and I can’t see a way to effectively monetize it.

The challenges are not in creating a map that is displayed onscreen — that’s the easy part. To develop this service would require (in order of difficulty):

  • code and resources to enable users to store their data and map configurations securely;
  • code to pick out jurisdiction names and data columns from spreadsheets, and/or a good UI to walk the user through picking the columns;
  • fuzzy matching code which understands that e.g. “PEI” is really “Prince Edward Island”, a province in Canada; that “St John, LA” is actually “Saint John the Baptist Parish”, a county-equivalent in Louisiana; that there are two St. Louis counties in Missouri; that Nunavut didn’t exist before 1999;
  • code to allow users to share their data if they so choose;
  • UI (and underlying code) to make the shared data discoverable, usable, and combinable;
  • code (and perhaps UI) to keep spammers from abusing the system;
  • code to generate hardcopy of a user’s map (e.g. PNG or PDF);
  • code for a user account mechanism and UI for signing in.
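The fuzzy-matching piece could start as an alias table backed by approximate string matching. Here is a Python sketch using only the examples above (a real service would need a vastly larger table, plus date awareness for cases like Nunavut):

```python
import difflib

# Hand-maintained aliases for names no string metric will catch.
ALIASES = {
    "PEI": "Prince Edward Island",
    "St John, LA": "Saint John the Baptist Parish",
}

# Canonical jurisdiction names (a real service would load thousands).
CANONICAL = [
    "Prince Edward Island",
    "Saint John the Baptist Parish",
    "St. Louis County, Missouri",
    "St. Louis city, Missouri",
]

def resolve(name):
    """Exact alias lookup first, then closest fuzzy match, else None."""
    if name in ALIASES:
        return ALIASES[name]
    match = difflib.get_close_matches(name, CANONICAL, n=1, cutoff=0.6)
    return match[0] if match else None
```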

This service would give value to many people: sales managers trying to figure out how to allocate sales districts, teachers developing lesson plans about migration of ethnic minorities, public health officials trying to understand risk factors, politicians targeting niche voters, urban planning activists trying to understand land use factors, etc.

Unfortunately, for the people to whom this really matters: if they have money, they have already ponied up for an ESRI mapping solution, and if they don’t have money, they won’t pay for this service.

GeoCommons tries to do this. GeoCommons makes maps from users’ data, and it stores and shares users’ data, but its map making is so slow it is basically unusable, and it is not easy to combine data from multiple sources into one map.

One of the “big data” organizations, e.g. Google or Amazon, might provide this as an enticement for getting people to use their other services. Google, for example, has a limited ability to do this kind of thing with its Fusion Tables (although if you want to do jurisdictions other than countries, then you have to provide a shapefile). Amazon provides a lot of data for use with Amazon Web Services.

However, it would be almost as difficult for Google or Amazon to monetize this service as it would for me.  Google could advertise and Amazon could restrict it to users of its AWS service, but it isn’t clear to me how much money that could bring in.

If anybody does figure out a way to monetize it, or wants to take a gamble on it being possible, please hire me!

03.12.13

Optimizing Map Tile Generation

Posted in Hacking, Maps at 11:54 am by ducky

In the past, when people asked me how I managed to make map tiles so quickly on my World Wide Webfoot Maps site, I just smiled and said, “Cleverness.” I have decided to publish how I optimized my map tile generation in hopes that others can use these insights to make snappier map services. I give a little background of the problem immediately below; mapping people can skip to the technical details.

Background

Choropleth maps colour jurisdictions based on some attribute of the jurisdiction, like population. They are almost always implemented by overlaying tiles (256×256 pixel PNG images) on some mapping framework (like Google Maps).

Map tile from a choropleth map (showing 2012 US Presidential voting results by county)

Most web sites with choropleth maps limit the user: users can’t change the colours, and frequently are restricted to a small set of zoom levels. This is because the tiles are so slow to render that the site developers must render the tile images ahead of time and store them.  My mapping framework is so fast that I do not need to pre-render all the tiles for each attribute. I can allow the users to interact with the map, going to arbitrary zoom levels and changing the colour mapping.

Similarly, when people draw points on Google Maps, 100 is considered a lot. People have gone to significant lengths to develop several different techniques for clustering markers. By contrast, my code can draw thousands very quickly.

There are 32,038 ZIP codes in my database, and my framework can show a point for each with ease. For example, these tiles were generated at the time this web page loaded them.

32,038 zip codes at zoom level 0 (entire world)
Zip codes of the Southeast US at zoom level 4

(If the images appeared too fast for you to notice, you can watch the generation here and here. If you get excited, you can change size or colour in the URL to convince yourself that the maps framework renders the tile on the fly.)

Technical Details

The quick summary of what I did to optimize the speed of the map tile generation was to pre-calculate the pixel coordinates, pre-render the geometry and add the colours later, and optimize the database. In more detail:

Note that I do NOT use parallelization or fancy hardware. I don’t do this in the cloud with seventy gajillion servers. When I first wrote my code, I was using a shared server on Dreamhost, with a 32-bit processor and operating system. Dreamhost has since upgraded to 64-bits, but I am still using a shared server.

Calculating pixel locations is expensive and frequent

For most mapping applications, buried in the midst of the most commonly-used loop to render points is a very expensive operation: translating from latitude/longitude to pixel coordinates, which almost always means translating to Mercator projection.

While converting from longitude to the x-coordinate in Mercator is computationally inexpensive, converting from latitude to the y-coordinate is quite expensive, especially for something which gets executed over and over and over again.

A spherical Mercator translation (which is much simpler than the actual projection which Google uses) uses one logarithmic function, one trigonometric function, two multiplications, one addition, and some constants which will probably get optimized away by the compiler:

function lat2y(a) { return 180/Math.PI * Math.log(Math.tan(Math.PI/4+a*(Math.PI/180)/2)); }

(From the Open Street Maps wiki page on the Mercator projection)

Using Lists of instruction latencies, throughputs and micro-operation breakdowns for Intel, AMD and VIA CPUs by Agner Fog, a tangent can take between 11 and 190 cycles, and a logarithm can take between 10 and 175 cycles on post-Pentium processors. Adds and multiplies are one cycle each, so converting from latitude to y will take between 24 and 368 cycles (not counting latency). The average of those is almost 200 cycles.

And note that you have to do this every single time you do something with a point. Every. Single. Time.

If you use elliptical Mercator instead of spherical Mercator, it is much worse.

Memory is cheap

I avoid this cost by pre-calculating all of the points’ locations in what I call the Vast Coordinate System (VCS for short). The VCS is essentially a pixel space at Google zoom level 23. (At zoom level 23, there are 2^23 tiles across the world, and each tile is 256 or 2^8 pixels across, so there are 2^31 pixels around the equator. The circumference of the Earth is about 40,075,000 meters, so the pixel resolution of this coordinate system is approximately 1.9 cm at the equator, which should be adequate for most mapping applications.)

Because the common mapping frameworks work in powers of two, to get the pixel coordinate (either x or y) at a given zoom level from a VCS coordinate only requires one right-shift (cost: 1 cycle) to adjust for zoom level and one bitwise AND (cost: 1 cycle) to pick off the lowest eight bits. The astute reader will remember that calculating the Mercator latitude takes, for the average post-Pentium processor, around 100 times as many cycles.
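A Python sketch of the scheme (spherical Mercator; the function names are mine):

```python
import math

ZOOM_MAX = 23                       # VCS is pixel space at this zoom level
TILE_BITS = 8                       # 256-pixel tiles
WORLD_BITS = ZOOM_MAX + TILE_BITS   # 2^31 pixels around the world

def to_vcs(lat, lon):
    """Expensive, done ONCE per point at load time: lat/lon -> zoom-23
    pixel coordinates.  (Latitudes at the poles would need clamping.)"""
    n = 1 << WORLD_BITS
    x = int((lon + 180.0) / 360.0 * n)
    siny = math.sin(math.radians(lat))
    y = int((0.5 - math.log((1.0 + siny) / (1.0 - siny)) / (4.0 * math.pi)) * n)
    return x, y

def tile_pixel(vcs_coord, zoom):
    """Cheap, done on every draw: one shift to adjust for zoom level,
    one mask to pick off the pixel within its 256-pixel tile."""
    return (vcs_coord >> (ZOOM_MAX - zoom)) & ((1 << TILE_BITS) - 1)
```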

Designing my framework around VCS and the Mercator does make it harder to change the projection, but Mercator won: it is what Google uses, what Yahoo uses, what Bing uses, and even what the open-source Leaflet uses. If you want to make map tiles to use with the most common services, you use Mercator.

Furthermore, should I decide that I absolutely have to use a different projection, I would only have to add two columns to my points database table and do a bunch of one-time calculations.

DISTINCT: Draw only one ambiguous point

If you have two points which are only 10 kilometers apart, then when you are zoomed way in, you might see two different pixels for those two points, but when you zoom out, at some point, the two points will merge and you will only see one pixel. Setting up my drawing routine to only draw the pixel once when drawing points is a big optimization in some circumstances.

Because converting from a VCS coordinate to a pixel coordinate is so lightweight, it can be done easily by MySQL, and the DISTINCT keyword can be used to only return one record for each distinct pixel coordinate.

The DISTINCT keyword is not a big win when drawing polygons, but it is a HUGE, enormous win when drawing points. Drawing points is FAST when I use the DISTINCT keyword, as shown above.

For polygons, you don’t actually want to remove all but one of a given point (as the DISTINCT keyword would do); you want to not draw two successive points that are the same. Doing so is a medium win (shaving about 25% of the time off) for polygon drawing when way zoomed out, but not much of a win when way zoomed in.
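Assuming a points table with precomputed VCS columns (the table and column names here are illustrative, not my actual schema), the point query looks something like this:

```python
def distinct_pixel_query(zoom, zoom_max=23):
    """Collapse all points that share a pixel at this zoom level into one
    row, so each pixel is fetched, and drawn, only once."""
    shift = zoom_max - zoom
    return ("SELECT DISTINCT (vcs_x >> {s}) AS px, (vcs_y >> {s}) AS py "
            "FROM points").format(s=shift)
```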

Skeletons: Changing the colour palette

While the VCS speed improvement means that I could render most tiles in real time, I still could not render tiles fast enough for a good user experience when the tiles had very large numbers of points. For example, the 2000 Census has 65,322 census tracts; at zoom level 0, that was too many to render fast enough.

Instead, I rendered and saved the geometry into “skeletons”, with one set of skeletons for each jurisdiction type (e.g. census tract, state/province, country, county). Instead of the final colour, I filled the polygons in the skeleton with an ID for the particular jurisdiction corresponding to that polygon. When someone asked for a map showing a particular attribute (e.g. population) and colour mapping, the code would retrieve (or generate) the skeleton, look up each element in the colour palette, decode the jurisdictionId, look up the attribute value for that jurisdictionId (e.g. what is the population for Illinois?), use the colour mapping to get the correct RGBA colour, and write that back to the colour palette. When all the colour palette entries had been updated, I gave it to the requesting browser as a PNG.
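Stripped to its essentials, the recolouring step is a loop over the palette, not over the pixels (a Python sketch with invented names):

```python
def recolour(skeleton_palette, attribute_by_id, colour_for_value):
    """skeleton_palette: palette entries that encode jurisdiction IDs;
    attribute_by_id: e.g. population, keyed by jurisdiction ID;
    colour_for_value: the user's colour mapping, returning an RGBA tuple.
    Rewriting the (tiny) palette recolours every pixel that references it."""
    return [colour_for_value(attribute_by_id[jid]) for jid in skeleton_palette]
```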

While I came up with the idea of fiddling the colour palette independently, it is not unique to me; a friend of mine also came up with it independently. What I did was take it a bit farther: I modified the gd libraries so they had a 16-bit colour palette in the skeletons which I wrote to disk. When writing out to PNG, however, my framework uses the standard format. I then created a custom version of PHP which statically linked my custom GD libraries.

(Some people have asked why I didn’t contribute my changes back to gd. It’s because the pieces I changed were of almost zero value to anyone else, while very far-reaching. I knew from testing that my changes didn’t break anything that I needed, but GD has many many features, and I couldn’t be sure that I could make changes in such a fundamental piece of the architecture without causing bugs in far-flung places without way more effort than I was willing to invest.)

More than 64K jurisdictions

16 bits of palette works fine if you have fewer than 64K jurisdictions on a tile (which the 2000 US Census Tract count just barely slid under), but not if you have more than 64K jurisdictions. (At least not with the gd libraries, which don’t reuse a colour palette entry if that colour gets completely overwritten in the image.)

You can instead walk through all the pixels in a skeleton, decode the jurisdiction ID from the pixel colour and rewrite that pixel instead of walking the colour palette. (You need to use a true-colour image if you do that, obviously.) For large numbers of colours, changing the colour palette is no faster than walking the skeleton; it is only a win for small numbers of colours. If you are starting from scratch, it is probably not worth the headache of modifying the graphics library and statically linking in a custom version of GD into PHP to walk the colour palette instead of walking the true-colour pixels.
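For the true-colour case, the jurisdiction ID can be packed straight into the pixel’s RGB channels, which comfortably covers far more than 64K jurisdictions. One possible encoding (illustrative; not necessarily the exact bit layout):

```python
def encode_id(jid):
    """Pack a jurisdiction ID into an RGB pixel: room for up to 2^24 IDs."""
    assert 0 <= jid < (1 << 24)
    return ((jid >> 16) & 0xFF, (jid >> 8) & 0xFF, jid & 0xFF)

def decode_id(rgb):
    """Recover the jurisdiction ID while walking the skeleton's pixels."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b
```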

(I had to modify GD anyway due to a bug I fixed in GD which didn’t get incorporated into the GD release for a very long time.  My patch finally got rolled in, so you don’t need to do that yourself.)

My framework now checks to see how many jurisdictions are in the area of interest; if there are more than 64K, it creates a true-colour image, otherwise a paletted image. If the skeleton is true-colour, it walks pixels; otherwise it walks the palette.

Credits: My husband implemented the pixel-walking code.

On-demand skeleton rendering

While I did pre-render about 10-40 tiles per jurisdiction type, I did not render skeletons for the vast majority of tiles. Instead, I render and save a skeleton only when someone asks for it. I saw no sense in rendering ahead of time a high-magnification tile of a rural area. Note that I could only do this on-demand skeleton generation because the VCS speedup made it so fast.

I will also admit that I did generate final tiles (with the colour properly filled in, not a skeleton) to zoom level 8 for some of my most commonly viewed census tract attributes (e.g. population by census tract) with the default value for the colour mapping. I had noticed that people almost never change the colour mapping. I did not need to do this; the performance was acceptable without doing so. It did make things slightly snappier, but mostly it just seemed to me like a waste to repeatedly generate the same tiles. I only did this for US and Australian census jurisdictions.

MySQL vs. PostGIS

One happy sort-of accident is that my ISP, Dreamhost, provides MySQL but does not allow PostGIS. I could have found myself a different ISP, but I was happy with Dreamhost, Dreamhost was cheap, and I didn’t particularly want to change ISPs. This meant that I had to roll my own tools instead of relying upon the more fully-featured PostGIS.

MySQL is kind of crummy for GIS. Its union and intersection operators, for example, use bounding boxes instead of the full polygon. However, if I worked around that, I found that for what I needed to do, MySQL was actually faster (perhaps because it wasn’t paying the overhead of GIS functions that I didn’t need).

PostGIS’ geometries are apparently stored as serialized binary objects, which means that you have to pay the cost of deserializing the object every time you want to look at it or one of its constituent elements. I have a table of points, a table of polygons, and a table of jurisdictionIds; I just ask for the points, no deserialization needed. Furthermore, at the time I developed my framework, there weren’t good PHP libraries for deserializing WKB objects, so I would have had to write my own.

Note: not having to deserialize is only a minor win. For most people, the convenience of the PostGIS functions should be worth the small speed penalty.

Database optimization

One thing that I did that was entirely non-sexy was optimizing the MySQL database. Basically, I figured out where there should be indices and put them there. This sped up the code significantly, but it did not take cleverness, just doggedness. There are many other sites which discuss MySQL optimization, so I won’t go into that here.

Future work: Feature-based maps

My framework is optimized for making polygons, but it should be possible to render features very quickly as well. It should be possible to, say, decide to show roads in green, not show mountain elevation, show cities in yellow, and not show city names.

To do this, make a true-colour skeleton where each bit of the pixel’s colour corresponds to the display of a feature. For example, set the least significant bit to 1 if a road is in that pixel. Set the next bit to 1 if there is a city there. Set the next bit to 1 if a city name is displayed there. Set the next bit to 1 if the elevation is 0-500m. Set the next bit to 1 if the elevation is 500m-1000m. Etc. You then have 32 feature elements which you can turn on and off by adjusting your colour mapping function.
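A sketch of such a colour mapping function (the bit assignments, priorities, and colours are all illustrative):

```python
# One bit per feature element in the skeleton pixel's 32-bit value.
ROAD      = 1 << 0
CITY      = 1 << 1
CITY_NAME = 1 << 2
ELEV_LOW  = 1 << 3   # 0-500m
ELEV_MID  = 1 << 4   # 500m-1000m

def colour_for(pixel_mask, enabled):
    """'enabled' is the set of feature bits the user has switched on;
    only features present in the pixel AND enabled by the user show up."""
    visible = pixel_mask & enabled
    if visible & ROAD:
        return "green"
    if visible & CITY:
        return "yellow"
    return "transparent"
```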

If you need more than 32 feature elements, then you could use two skeletons and merge the images after you do the colour substitutions.

You could also, if you chose, store elevation (or depth) in the pixel, and adjust the colouring of the elevation with your colour mapping function.

Addendum: I meant to mention, but forgot, UTFGrid.  UTFGrid has an array backing the tile for lookup of features, so it is almost there.  However, it is just used to make pop-up windows, not (as near as I can tell) to colour polygons.

02.01.13

Unsolicited comments on Frogbox

Posted in Consumer advice at 2:24 pm by ducky

I recently moved, and (because I injured my shoulder and because we are slowly facing up to the fact that we are not 25 any more) we hired packers and movers.  We had a lot of boxes, but not enough, and the packers expressed a strong desire to use Frogbox boxes.

I had heard of Frogbox before, but hadn’t really found their service compelling.  The boxes looked really big and heavy, in addition to being expensive compared to scrounging boxes from here and there.

What I didn’t understand is that movers and packers absolutely love the boxes.  Because all the boxes are a standard size, loading the truck becomes less like a cross between Tetris and Operation and more like stacking Mac&Cheese boxes on grocery store shelves.

Because the boxes are very sturdy, they minimize risk, especially for the movers.  The bottom isn’t going to fall out of one of the boxes; one box in a stack isn’t going to collapse asymmetrically and tip over the whole stack.

They are big and heavy enough that desk jockeys who are ferrying boxes in their car and then carrying them up stairs aren’t going to like them, but for muscular movers with the right trucks, dollies, and lift gates, they aren’t a real problem (especially if there are elevators instead of stairs).

If you filled them up entirely with books, they would be too heavy for the packers to move easily, but a) I don’t think the packers would do that and b) the packers generally didn’t move the boxes.  A packer would set an empty Frogbox in one spot, fill it, close the lids, put an empty Frogbox on top of the first, and proceed to load the second.  We ended up with short towers of Frogboxes scattered around our apartment.

The packers did not need to spend time converting flattened boxes into 3D boxes or to tape the boxes shut.  This, in turn, meant that they spent no time looking for their tape pistols (or, on the other side, box cutters).

It didn’t seem to me like the lids closed really securely, but it turns out that doesn’t matter: the weight of the box above holds it down, and the lids are heavy enough that unless you are moving in hurricane-force winds, they aren’t going to open by themselves.  (And if you are trying to move in a hurricane, you’ve got bigger problems.)  The boxes are also shaped to be wider at the top than bottom, which would rather discourage anyone from trying to load them in any manner besides flap-side-up.

I believe there are cheaper ways to get boxes — scavenge from liquor stores, get the ones from your mother’s basement or your company’s loading dock.  However, the overall cost might end up being lower with Frogbox because the movers and packers will work a little more quickly and you will have slightly less risk of damage to the contents.

I think that Frogbox is going to do very well as a company.  The only thing I can think of that would get in their way is bedbugs.  If it turns out that Frogboxes are a vector for bedbugs, then they would need to hose down the boxes after every use, which would increase costs.  Yes, there might be bugs in the boxes you get from the liquor store or even from your mother’s garage, but cardboard boxes probably have fewer users.


03.26.12

Maps of US Religions, take 2

Posted in Maps at 10:07 pm by ducky

I added a few more religious denominations to my elections/demographics site, again from Churches and Church Membership in the United States, 1990.

Note that these denominations have fewer adherents than the denominations I featured in my previous post, so these have full white corresponding to 0%, while full blue is 70% (vs. 100% in the previous post).

Here are the adherents to the United Methodist Church:

% United Methodist Church Adherents 1990

I hadn’t realized that Methodists were concentrated in the center band of the country like that.

Here are the adherents to the Presbyterian Church (USA):

% Presbyterian Church USA Adherents 1990

I was surprised at how diffuse the Presbyterians are.

Here are the African Methodist Episcopal Zion adherents:

% African Methodist Episcopal Adherents 1990

I was surprised at how concentrated the AMEZ church was — in North Carolina and Alabama.

03.25.12

Maps of religion

Posted in Maps at 12:42 am by ducky

I recently added some data from Churches and Church Membership in the United States, 1990 to my election/demographics map. The data were collected by the Association of Statisticians of American Religious Bodies (ASARB) and distributed by the Association of Religion Data Archives.

There is data on about 130 denominations, with number of houses of worship, number of adherents, and number of members for (almost) every one, by county.  Houses of worship were surveyed, not individuals.  “Adherents” is a somewhat looser criterion than “Members”, but the survey allowed the houses of worship to interpret the question as they chose.  The combination of self-reporting and self-interpretation means that you probably shouldn’t pay too much attention to the raw numbers.  In particular, the respondents might well be over-estimating: Joe’s Church might be counting people who went to Joe’s Church only once.  However, I think the relative values across the country are interesting.

Here is the percentage of the population in the Continental US that adheres to any denomination (remember, as measured by the houses of worship).  The more blue, the more adherents.

Adherents as a % of population

 

I was a little surprised at how non-churchgoing the West Coast, Florida, and Maine were.

Here is the % of the population which adheres to the Church of Jesus Christ of Latter-Day Saints (also known as “the Mormons”):

% LDS Adherents - 1990

 

It isn’t surprising that the concentration of LDS adherents is centered in Utah, but I was surprised at how clearly you can see the Utah state borders.

Here is the percent of the population which adheres to any of the twelve denominations with the word “Lutheran” in the name:

% Lutheran Adherents 1990

 

I was surprised at how concentrated the Lutherans were in the upper center of the country.  I had sort of thought that a group which had a “Missouri Synod” would have significant adherents in, you know, Missouri.

Here is a map of the percentage of the population which adheres to the Southern Baptist Convention.  Note that there are 25 different denominations with the word “Baptist” in their names; this is just the “big one”, the Southern Baptist Convention:

% Southern Baptist Adherents - 1990

I was really surprised at how clear the state boundaries were, especially for Missouri and Kansas.  I guess I kind of knew that the Southern Baptist Convention was sort of the religion of slavery, but I hadn’t realized just how long the geographical connection would persist.  (The Southern Baptist Convention split off from the northern branch in 1845, specifically over slavery.  They did apologize in 1995.)

Here is a map of the percentage of the population which adheres to Roman Catholicism:

% Catholic Adherents 1990

I was amazed at how few Catholics there were in the Deep South.  Aside from the Latino influence in southern Texas and, to a lesser extent, in Florida, plus the French influence in Louisiana, there are practically no Catholics in the South.  (At least, there weren’t in 1990.)  I grew up a few hours south of Chicago, so I rather had the impression that Catholics were ubiquitous.

I have a lot more data, but I’m not really sure what groupings make sense.  For example, do I group “Holy Apostolic Catholic Assyrian Church of the East” in with “Greek Orthodox”?  I have no idea if they have similar doctrines, if they hate each others’ guts, or both.  Similarly, I think it would be useful to group together evangelical churches, but I’m not sure how to tell which churches are properly called “evangelical”.  Stay tuned.

03.12.12

Incremental code coverage in EclEmma

Posted in Eclipse, programmer productivity at 9:57 pm by ducky

Several years ago, I found that differential code coverage was extremely powerful as a debugging tool and made a prototype using EclEmma.  I also had some communications with the EclEmma team, and put in a feature request for differential code coverage.  Actually, I thought that incremental code coverage would be easier for people to understand and use.

Well, EclEmma 2.1 just came out, and it has incremental code coverage in it!  I am really excited by this, and wrote a blog post at my workplace.  I have given motivation on this blog before, but I give some more in the Tasktop blog post, as well as some instructions for which buttons to push to effectively use EclEmma to do incremental code coverage.

05.22.11

Novice paragliding

Posted in Canadian life, Family at 7:36 pm by ducky

Note: Dion, my instructor, read this and was concerned that it painted an overly negative, overly scary picture of the sport.  I toned my language down slightly, but my main objective was to tell my family and friends how I felt, not to evangelize for how fun (or safe) the sport is!  Paragliding is actually quite safe when done right; my next post will be on paragliding safety.

My husband Jim flies powered aircraft; I find flying in small planes dull, noisy, cold, and a waste of fossil fuel.  Jim sings; I don’t.  Jim runs; I have bad knees.  I do artwork; Jim doesn’t.  I like to skate; Jim doesn’t.  I like to read and write, which are inherently solitary activities.  From time to time, one of us will try to do something that the other likes: I sang in one opera; Jim has gone skating a few times; Jim and I took a sketching class together; I have flown in small planes with Jim a few times.  Unfortunately, those efforts have not worked out really well.  (For example, I threw up on one of my small plane rides with Jim.)  Despite really liking each other, we don’t do much together.

When we were in Turkey with the nephews, I got a chance to take a ride in a tandem paraglider.  Despite throwing up twice due to motion sickness (I am sensitive, and didn’t take meds in time), it was one of the high points of the trip for me.  (Figuratively as well as literally.)  So I seized on paragliding as something we could maybe enjoy together.

We signed up for and are now mostly done with the P1 introductory class at iParaglide.  In the rest of this post, I’m going to talk about our experiences.

We started out with two theory classes.  We learned intellectually what we were supposed to do on launch and landing, about the gear, a little on the aerodynamics of the wings, how the controls affected the wing, how wind strength and weight affected ground speed and sink rate (which are the components of the glide ratio), and a bit about weather in BC.
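
The ground speed / sink rate relationship mentioned above can be sketched with a toy calculation.  The numbers here are illustrative only, not figures from the class:

```python
def glide_distance(altitude_m: float, ground_speed_kmh: float,
                   sink_rate_ms: float) -> float:
    """Horizontal distance covered from a given altitude.

    The glide ratio over the ground is ground speed divided by
    sink rate (in matching units), and the distance you cover is
    your altitude times that ratio.
    """
    ground_speed_ms = ground_speed_kmh / 3.6  # km/h -> m/s
    return altitude_m * (ground_speed_ms / sink_rate_ms)

# E.g. from 100 m up, moving 30 km/h over the ground and
# sinking 1.2 m/s, you cover roughly 694 m before touchdown.
print(round(glide_distance(100, 30, 1.2)))
```

This is also why wind matters so much: a headwind cuts your ground speed (and thus distance covered), while a tailwind stretches it, even though your airspeed is unchanged.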

Jim preparing to do a reverse launch kite; our apartment tower is in the background.

We next had a gratis session of kiting practice.  (This wasn’t on the class agenda, but Dion Vuk, our instructor, said that the weather was great for it and it would make us better fliers.)  Kiting is done on a flat piece of ground and the exercise is to get the glider (aka “the wing”) aloft over our heads for as long as we could.

It was difficult, and hard work for my out-of-shape 47-year-old body.  I was exhausted at the end of it.  I said to myself that my enjoyment of this sport would be lower if I didn’t get myself into better shape, so I started carrying water in my backpack on my walk to work.  Four or five days per week, I would walk 3 km to work carrying five litres of water in my backpack; two or three days per week I would also walk home with it.  If there had been an earthquake, I would have been prepared!

The day after our kiting session, the class of about seven students practiced what is called “slope soaring”.  We got up at 5 AM to go out to a city park about an hour away which has about a 30′ hill.  That hill is just steep enough that you can launch off of it, but not steep enough that you can get very high above it.

Jim catching air at slope soaring

The wing needs a relative airspeed of 20 km/hr (12.5 mph, or 4:48 minutes per mile).  This would be really, really hard on a flat surface with no headwind if your name isn’t Usain Bolt, but running downhill with a little bit of a headwind (which drops the groundspeed you have to achieve) makes it more possible.  It is still a little bit of a challenge: when the wing isn’t fully up, it’s like you are pulling a parachute — because you are!  As soon as the wing gets up, it is easier, but if you don’t haul posterior, in the worst case the wing’s momentum can bring it over and in front of you, and you run into the wing.  Oops.  More likely, you run run run and just don’t get enough speed to lift off the ground, which is unsatisfying.
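
For the curious, that unit conversion can be checked in a couple of lines (the mph figure above is rounded up slightly; the exact conversion comes out to about 12.4 mph and roughly a 4:50 pace):

```python
KM_PER_MILE = 1.609344  # international mile, in km

def pace_for_airspeed(kmh: float) -> tuple[float, float]:
    """Convert a required airspeed in km/h to mph and to the
    equivalent running pace in minutes per mile."""
    mph = kmh / KM_PER_MILE
    return mph, 60.0 / mph

mph, pace = pace_for_airspeed(20.0)
minutes, seconds = int(pace), round((pace % 1) * 60)
print(f"{mph:.1f} mph, {minutes}:{seconds:02d} per mile")
```

Either way, the conclusion holds: that is a sprint for most of us on flat ground, which is why the downhill and the headwind matter.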

There is no jumping: if you jump up, that reduces your airspeed and you just come right back down.  (Hubby Jim also points out that it reduces tension in the lines, which is counterproductive: the tension in the lines is part of what gives it the shape you need.)

The weather that day was suboptimal: the winds were coming from the west instead of the east, which meant we needed to launch from a less-optimal hill, and it was a bit gusty, so it was hard to keep the wing from rolling off to one side.  Because the weather was worse than forecast, Dion decided not to take us up to the mountain that same day, but to give us another morning of slope soaring.  I was glad, because I was wiped out.  (See above about being 47 years old.)

Thus the next day, we got up at 5 AM to get to the park at 6 AM, for another three hours or so of slope soaring.  It was much easier due to much better winds, and we had fun running up and down the hill in a friendly competition to see which pair of people could get the most flights-with-air in a specified time.  (Note: it is really helpful to have a “buddy” when learning.  Once you are clipped in to your harness, you are connected to your wing, so you can’t do a good job of laying the wing out by hand if the wind moves it.  We were taught how to better adjust our wing on the ground using our lines, but it is helpful to have an extra pair of hands.  We were paired with a buddy in kiting and slope soaring.)

We then drove 2 hrs up to the mountain site, walked around the Landing Zone (LZ), and then went to lunch.  The weather in the Lower Mainland of BC is such that almost always, the winds will pick up significantly at mid-day, too much for novices to handle.  We pretty much can’t fly between 1300h and 1700h, so lunch tends to be from 1400h-1600h or so (with the rest of the time spent packing or unpacking and getting up from or down to the restaurant).

After lunch, Dion (slowly and deliberately) launched the students, one by one.  The winds died down as the day progressed, so Dion launched the students in reverse order of weight, which put me in the penultimate spot.

A note: I ♥ Dion.  Dion is extremely safety conscious, attentive, and supportive.  Not only did he give Jim and me kiting practice plus two days of slope soaring practice before the Big Launch, he spent a long time with each student on launch to make sure that they had a good launch: checking the lines (the cables that attach to the glider), checking our harness (the thing that attaches us to the lines), laying out the glider so that it would be maximally easy to launch, reassuring us, etc.  All of Dion’s students had perfect launches the first time that day; this was not true of all the student pilots with other teachers.

I had a totally unremarkable launch and then… I was in the air!  “Was it cool?  Were you excited?” I hear you ask.  Well, yes and no.  Mostly I was focusing on not killing myself; following Dion’s instructions on the radios (we each had two radios clipped to our harnesses, for redundancy) to sit back in the harness, do a left turn, a right turn, a 180, etc. as he made sure that I had some modicum of competency.  Next, I was focusing on aiming at my target: three tall trees at the far, upwind side of the LZ.  I was distracted for a minute by some bumps in the ride: I apparently had gone through a thermal: one bump for going in, one bump for coming out.

The landing sequence goes like this: go to above the far, upwind corner of the LZ (a rectangular grassy field bounded by tall trees).  Do one or more figure-8s along the short side (cross wind) to burn altitude; then turn and go downwind along the long axis of the field on the far (i.e. farther from the launch area) side of the field.  At the other end of the field (“the base”), optionally burn some more altitude with one or more figure-8s; turn into the wind (to help slow the groundspeed); at the last minute, flare (i.e. stall the wing) to give a slight bit of lift and a lot of decrease in ground speed; run like hell to keep up with the glider as it comes down.

We were told to always always always turn towards the instructor, never ever away (which meant left turns for this landing spot); to “put our landing gear down” (i.e. stand up in the harness instead of sitting in it) halfway down the downwind leg; and to never ever ever make sharp turns close to the ground.  I blew all three of those.  The LZ instructor (who was new) had the practice of calling for legs down much closer to touchdown, so he hadn’t told me yet, and I didn’t remember to do it on my own.  On the downwind leg, I misunderstood the LZ instructor telling me to ease up on my right brake as a request for a right turn, which confused me long enough that I didn’t turn left when I should have.  That meant that I was closer to the trees in front of me than I liked, so I made a sharp turn (oops!) to the left.

"my" wing in the briars

Well.  If you do sharp turns, you lose altitude fast, and suddenly I was on the ground.  Also, because I had not turned in time to hit the nice part of the field, I landed in a bunch of briars.  I didn’t really panic because I didn’t have time.  One minute I was heading for the trees, the next I knew the ground was really close, the next I was on the ground on my side in the midst of briars.

I thought to myself, “Am I damaged?  Nope: successful landing!”  And I really wasn’t: not a cut, not a scratch.  Later I found what might have been two tiny little puncture wounds, each about the size of a small zit, but I might have easily gotten those during slope soaring.

 

My heroes!

My stomach felt awful, however.  My stomach is already acid-sensitive, and it turns out that adrenaline dumps a lot of acid into the stomach.  I didn’t know that, however, so I thought I had gotten motion sick.  “This sucks!”, I thought to myself.  I really wanted this to be something Jim and I could do together, and if I was so sensitive that I got this motion sick on my first fifteen-minute flight, that wasn’t good.

Given how concerned everyone else was about my well-being after my “crash landing” in the briars, and how bad I felt, they let me lie around groaning while they untangled my wing from the briars for me.  Thanks, peeps!

Interestingly, this landing did not make me more scared of flying, it made me less scared.  I am an out-of-shape, not particularly coordinated 47-year-old who did three things that I had been explicitly warned not to do, had an uncontrolled landing into briars, and still was unscratched.  There is more room for error in this sport than I had realized.

Jim and I debriefed, went home, and collapsed into bed.  I was wiped out.

The next two weekends had nasty weather, so we didn’t fly.

Finally, a break in the weather.  On Thursday, Dion offered another evening kiting session that we jumped on.  (The kiting sessions are surprisingly fun.)  While I had trouble getting the wing up, I was not completely exhausted.  Let’s hear it for carrying five litres of water 3 km to work and back for three weeks!

We were scheduled to have class on Sat/Sun, but Thursday evening, after the kiting session, Dion looked at the weather and didn’t like what he saw.  He called around to see who could come to a session on Friday, and managed to get a quorum.  So we got up at 5 AM on a Friday morning and drove out to the mountain.

One really big advantage of flying on Friday is that we had almost no company at the top of the mountain.  I think there were only five other people there the whole day.

At the launch site, holding the bag the wing is stored in; you can see Mike setting up behind me.

Me starting to get up from a faceplant; LZ instructor coming to check on my health.

On my first flight of the day, I was a little bit more relaxed, and actually got to look around a little bit.  However, I came in a little bit low and wasn’t able to make my turn onto final.  Instead, I landed crosswind on the downwind short side of the field.  This meant that I didn’t get any help from a headwind to slow me down.  I couldn’t run fast enough, so I stumbled and fell face-first.

The great news was that I was again completely unhurt (again, not even a scratch, bruise, or scrape); the good news is that my stomach wasn’t nearly as upset as it had been after my close encounter with the briars; the bad news is that my stomach was still unhappy; the worse news is that I got motion sick on the bumpy drive up the rutted logging road.  (The great news is that I did not get vomit all over the inside of Jeff’s vehicle!)

I was still feeling queasy when the time came for my next flight.  Dion asked how I was feeling, I shook my head “no”, and he immediately scrubbed my flight with no recriminations of any kind.  The man is extremely supportive.

(Jim and the other students flew, however, and had great fun.  Dion had them ride thermals a little bit to get them used to soaring.  One of the other students, Jeff, is really good at this, in part from experience kiteboarding, and he was aloft forrrreevvvvvverrrrr!)

Then we debriefed, had lunch, and went back up.  We started flying again around 1700h, I believe.  As I said before, the winds are high mid-day and get weaker, so Dion again sorted by decreasing weight, putting me at the end of the line.  When it was just me and Dion at the top, the wind started being a bit erratic and I started getting nervous about the launch.  I was also aware that everyone else was waiting for me at the bottom.  Dion soothed me and calmed me down about the launch.

I was also a bit nervous about the landing.  The stated objective of the last flight was to get us to land on our own.  At this point, I have one landing in the briars and one near-face-plant… and that was with help.  Zero fully correct landings (unlike the others, who have had two or three apiece by this time).  Now I’m supposed to do it on my own?

That tiny speck in the center is me.

But Dion was right, the winds did die down.  I tried to launch — and wasn’t going fast enough.  I tried to abort, fell on my butt, and slid into my wing.  Pick up, try again, wait… wait… wait… and finally, it was a go!  I ran like hell down the slope and was airborne!

This time, everything felt smoother.  I looked around more, and got to go “wow, I am way up in the air and can see all kinds of stuff!”.  Eventually I got over the landing field and started my way down.  First mistake: I did triangles instead of figure-8s to kill altitude on the upwind short side.  No real harm done, but it meant that I ended up way over (inside) the field instead of sticking to the boundary.  Next mistake: I forgot to put my landing gear down halfway through the downwind leg.  Then, when I got to the base (downwind, short-side), I was a bit high.

On the base (downwind, short-side) leg

I did one loop of a figure 8 to kill some height, and started back towards the far side of the field.  I dithered and dithered for an eternity (like, two seconds) about whether I should turn upwind or do another loop.  I wasn’t sure if I had enough height to do another loop of the figure 8 or not, but felt like I was higher than I was supposed to be to land.  I worried that if I was too low, then things would happen so quickly that the instructor (monitoring and on the radio to provide corrections if I got into trouble) might not be able to help me before I got in the trees, and that would be bad.  The landing field is very long, so I figured that the instructor would have enough time to tell me how to recover if I was too high.  I thus decided to go for “final” instead of another figure 8 loop.

I was too high to land in the first third of the field like they like us to.  The orange spot in the lower left of the next photo is the target we were supposed to try to hit, and you can see I am way too high to hit that:

on final approach

As I came in, I flew over the heads of the other students, who gaped up at me.  It became clear that I was in fact going to land well before the trees, which was a relief.  The LZ instructor came on the radio and told me to put my landing gear down when I was about 30 feet off the ground, oops!  Fortunately, it was easy to pop into a “starfish” stance from my sitting position.

But where should I flare?  If you flare too high, then you fall from a height.  If you flare too low, then you don’t lose enough speed and might get pulled by your glider (i.e. face plant).  Fortunately, the instructor came on and told me when to flare.  I flared hard and got ready to run like the dickens:

getting ready to sprint

I touched down very gently, took two, maybe three dainty little steps, and was, much to my surprise, stopped!  Zero forward velocity, and my wing was just sitting above my head.  It took 15 or 30 seconds for it to float to the ground as I just stood there with my mouth hanging open.  In the following picture, I am not only stopped dead, I have started to turn around and look behind me at everybody else.  You can see, if you look very closely, that the lines to the right of me are slack and starting to collapse.

cold stop

~1(?) sec later

It was amazing. I literally could not have imagined — it was beyond my imagination — that it was possible for me to have a landing like that.

My stomach was not upset immediately, but it got unhappy a minute or two later.  (Not as bad as the other times.)   I immediately chewed two Tums, and that seemed to make it better.

 

about 10(?) seconds after landing

We debriefed and went home.  The next day, I was really lethargic and spent basically the whole day in bed.  Note, however, that I was merely lethargic after one day of flying plus one evening of kiting (and two days of getting up way way early) instead of being totally wiped out after one day of either.  Now if you will excuse me, I need to go carry six litres of water down to the beach!
