Severe Weather’s “Second Season”

A rash of severe thunderstorms rolled across my region Sunday night, giving me an opportunity to watch radar broadcasts instead of football and field quizzical messages from friends. The synoptic setup closely resembled a springtime system, with a strong south wind supplying Gulf moisture and a cold front pushing in from the northwest. These storms turned destructive, especially after sundown, producing an EF3 tornado in Dallas, a long-track EF2 in a populated area of NW Arkansas, and a few brief squall-line tornadoes in Oklahoma and Missouri.

Tornadoes aren’t “typical” this time of year, but Sunday’s severe weather event certainly wasn’t beyond expectation. Tornadoes can strike in any month, particularly in southern locations like Dallas that have even experienced strong tornadoes in winter. Since severe thunderstorms require warm surface-level air, cold air aloft, and wind shear to kick things off, it’s no wonder they mostly form when the jet stream is unstable during the seasonal transition of spring…or fall. Yes, there is a statistically significant uptick in tornado occurrence during November and the latter half of October, as an active jet stream and strengthening cold fronts collide with lingering surface-level summer air.

Violent tornadoes (EF4 or stronger) are far more likely in the spring, but there’s a blip in November. (Source: ustornadoes.com)

But why are tornadoes more likely in the spring than in the fall? Meteorologists think in probabilities, so they might answer that the unstable atmospheric conditions required to generate tornadic storms are present most often in the spring. Undeniably true, but not a satisfying explanation. As an engineer, I think in representative averages: what about the mean conditions of fall makes tornadoes possible but less likely than in spring? At this time of year, cold fronts are stronger and more numerous than warm fronts, whereas the reverse is true in spring. Fall also brings about a third less precipitation, often limiting surface-level moisture, and shorter days allow for less surface heating. Even if the jet stream and wind shear profiles are similar, the thermal instability is, at least on average, markedly lower, which is why most fall storms stay below the severe threshold.

Fall storms can reach severe levels, however, as they did on Sunday. Fortunately, the NWS was all over it, issuing a tornado watch for an area of ‘enhanced’ convective risk. These storms were atypical in that they became severe after sundown. According to the 7pm sounding near DFW, the high CAPE of 2900 J/kg was maximized at the land surface, but the LFC (level of free convection) was comparatively high. Thus, convection ahead of the front was incited by surface-level heat and moisture, whose uplift was likely strengthened by a delayed nocturnal inversion. While I focus my research on modeling surface-level heat and moisture, I had not seriously considered the nocturnal inversion as a driving influence. I’ll work on incorporating that time dependence into my algorithms.

Zooming out, it’s kind of a miracle that Dallas sustained no loss of life. Just a few injuries and an estimated $2 billion of property damage from an offseason EF3 tornado after dark in an urban area…a major success in warning communication and emergency management. The majority of deaths from this storm system were not tornado-related, which perhaps suggests that the public heeds tornado warnings but disregards lesser alerts. At any rate, I’m sure this outbreak will be studied extensively by atmospheric scientists. For those affected, I hope for a smooth and swift recovery, ideally before winter settles in. For everyone else, I hope this reinforced the notion that a tornado can happen at any time of year.

Aerial view of Dallas tornado track, the day after (Josh Crowder)

Projecting onto the World

Sometimes I wish the world were flat. Not because I’m some kind of conspiracy theorist, just because I’m practical. Like your prototypical engineer, I love applying coordinate systems to, well, anything. A flat earth would be easy: model it with a 2D Cartesian coordinate space. We could even transform it into polar coordinates, say, if we wanted to superimpose a tornado above the origin. I thought that was a legitimate approach when I started conceptualizing the tornado prediction model a few years ago, but it really isn’t that simple. In fact, I found some of the intricacies interesting. This post breaks down my foray into GIS (geographic information systems) as my cartographic view evolved from “Let’s just call the earth flat” to “The ideal projection should minimize shape and size distortion to promote a balanced, unified worldview, but alas, such a projection is only possible in three dimensions.”

The earth’s geometry is complex, even if you ignore its movement through space and its relation to other celestial bodies. First off, it isn’t truly a sphere: it’s an oblate spheroid. Because the centrifugal effect of planetary rotation bulges the equator, Earth’s diameter at the equator is about 43 kilometers greater than its diameter at the poles. Moreover, Earth’s axis of rotation differs from its magnetic axis, complicated by the fact that the magnetic poles are continuously on the move. This squished globe is our home, however, so cartographers and others have worked for centuries to develop accurate, catch-all representations of its irregular surface for navigation and more.
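That 43-kilometer figure falls straight out of the standard reference ellipsoid. As a quick sanity check, here is the arithmetic using the defined WGS84 parameters (semi-major axis and flattening):

```python
# WGS84 reference ellipsoid: two defined constants determine the shape.
a = 6378137.0          # semi-major axis (equatorial radius), meters
f = 1 / 298.257223563  # flattening
b = a * (1 - f)        # semi-minor axis (polar radius), meters

# Diameter difference between equator and poles, in kilometers.
difference_km = 2 * (a - b) / 1000.0
print(f"Polar radius: {b:.1f} m")
print(f"Equatorial minus polar diameter: {difference_km:.1f} km")  # about 42.8 km
```

Rounded to the nearest whole number, that is the ~43 km equatorial bulge.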

Earth’s irregular form is governed by many forces. Source: ASU

Before projecting a coordinate system with accuracy, the three-dimensional form of Earth must be established as a geodetic reference, or datum. Traditionally, this was done with simple ellipsoid geometry, and the spheroid parameters were periodically updated throughout the 19th and early 20th centuries with new geographic observations and theoretical advances. By the 1970s, the advent of GPS demanded a precision that motivated worldwide cooperation on a standard geodetic reference. So naturally, we came up with two of them: WGS84 (the World Geodetic System 1984, used by most of the world) and NAD83 (the North American Datum of 1983, used by the United States). The two agree to within inches over North America, and remote sensing data can take either as its basis. It’s best practice to transform all of your spatial layers into the same geodetic reference to minimize any offset in your projections.
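In Python, that datum conversion is a one-liner with pyproj’s Transformer API, assuming pyproj is installed (the coordinates below are illustrative, not from my actual layers):

```python
from pyproj import Transformer

# Transform coordinates from NAD83 (EPSG:4269) to WGS84 (EPSG:4326).
# always_xy=True keeps the argument order as (longitude, latitude).
to_wgs84 = Transformer.from_crs("EPSG:4269", "EPSG:4326", always_xy=True)

lon_nad83, lat_nad83 = -96.80, 32.78  # roughly downtown Dallas
lon_wgs84, lat_wgs84 = to_wgs84.transform(lon_nad83, lat_nad83)

# The two datums agree to within inches over North America, so the shift
# is tiny -- but consistency matters when your grid cells are 30 meters.
print(lon_wgs84, lat_wgs84)
```

Running every input layer through the same transformer keeps the stack internally consistent even when the shift itself is negligible.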

While an error in geodetic reference may be minor, your choice of projection can have hugely important consequences. It is, of course, impossible to portray the surface of a sphere in a 2D medium without distortion, so large-scale map projections sacrifice accuracy in size, shape, or both. Depending on your cartographic needs, there are hundreds of projections developed over the years, many of them readily usable through open-source GIS functions. For my application, I need the spatial information in a square grid by latitude and longitude so that I can perform math on the grid cells. Despite the plethora of options at my disposal, I’ve found my wind modeling calculations easiest to visualize with a Mercator-type projection. Controversial history aside, I appreciate that the lat-lon gridlines are perpendicular and the relative shapes of surface features are preserved. Satellite remote sensing data is often released in geographic (lat-lon) coordinates, and I can use the USGS elevation product (a 1/3 arc-second resolution DEM) as a base layer without any transformation.
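One caveat of a square lat-lon grid is that the physical footprint of a cell shrinks with latitude: a degree of longitude spans less ground the farther you move from the equator. A back-of-envelope sketch, using a spherical-Earth approximation and a hypothetical helper, shows how much the east-west width of a 1/3 arc-second cell varies across the latitudes I care about:

```python
import math

R = 6371000.0  # mean Earth radius in meters (spherical approximation)

def cell_width_m(lat_deg, arcsec=1 / 3):
    """Ground distance spanned by `arcsec` seconds of longitude at a latitude."""
    dlam = math.radians(arcsec / 3600.0)
    return R * math.cos(math.radians(lat_deg)) * dlam

for lat in (29.5, 37.5, 45.5):
    print(f"{lat:>4.1f}N: {cell_width_m(lat):.2f} m per 1/3 arc-second")
```

Roughly 10.3 m at the equator shrinks to about 9 m at 29.5° N and 7 m at 45.5° N, which is part of why gridded math on lat-lon data needs care.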

The sailing hasn’t been smooth, however, mainly because the MRLC datasets – land cover, tree canopy, and ground imperviousness – that I use to describe the terrain come in an Albers equal-area projection. To maintain 30-meter square grid cells, that grid aligns with the compass rose along only one line: in this case the 96th meridian west, between latitudes 29.5° N and 45.5° N. Fortunately this line runs directly through Tulsa and Omaha, limiting distortion in Tornado Alley. To accurately serve areas significantly east and west of the meridian, I have been experimenting with open-source algorithms to transform the conic raster data into the preferred lat-lon coordinates. The transformation is not trivial, converting squares into slightly convex trapezoids of differing size and centroid position.

While I’m sure most of these algorithms work, the challenge lies in implementing a transformation that can crop the input data (so I never need to transform all 20 GB of the United States at once), executes with reasonable computational efficiency, and coexists with any user’s Python software environment. After a few rounds of trial and debugging, I think I have found a suitable, if inelegant, solution in three steps: read a larger rectangular domain in raster coordinates, transform that raster to WGS84 coordinates, then crop to the requested lat-lon domain. The computational overhead ranges between 0 and 30% due to this location-sensitive domain mismatch, but I’m willing to live with that tradeoff rather than code an efficient data management algorithm myself.
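The conic geometry behind those trapezoids can be sketched with the spherical form of the Albers equal-area forward projection (Snyder’s formulas), using the standard parallels and central meridian from above. Note the hedges: the real MRLC grid is defined on the GRS80 ellipsoid, and my latitude-of-origin value here is a conventional CONUS choice, not pulled from the dataset’s metadata. Points on the central meridian map to x = 0, so the projected grid points north there; away from it, the meridians converge relative to the square grid, which is what deforms the cells on the way back to lat-lon:

```python
import math

# Spherical Albers equal-area conic (after Snyder 1987). A geometric sketch
# only: the actual MRLC raster is defined on the GRS80 ellipsoid.
R = 6371000.0            # mean Earth radius, meters (spherical assumption)
LAT1, LAT2 = 29.5, 45.5  # standard parallels
LON0 = -96.0             # central meridian
LAT0 = 23.0              # latitude of origin (assumed conventional value)

def albers_forward(lat_deg, lon_deg):
    """Project geographic coordinates to Albers (x, y) in meters."""
    phi = math.radians(lat_deg)
    phi1, phi2, phi0 = (math.radians(v) for v in (LAT1, LAT2, LAT0))
    lam, lam0 = math.radians(lon_deg), math.radians(LON0)

    n = (math.sin(phi1) + math.sin(phi2)) / 2.0          # cone constant
    C = math.cos(phi1) ** 2 + 2.0 * n * math.sin(phi1)
    rho = (R / n) * math.sqrt(C - 2.0 * n * math.sin(phi))
    rho0 = (R / n) * math.sqrt(C - 2.0 * n * math.sin(phi0))
    theta = n * (lam - lam0)
    return rho * math.sin(theta), rho0 - rho * math.cos(theta)

# On the central meridian the projected grid aligns with true north (x = 0)...
print(albers_forward(36.15, -96.0))  # Tulsa's latitude
# ...but away from it, constant-latitude rows bow upward in projected space.
print(albers_forward(36.15, -90.0))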

I realize that the level of information went from basic to deep very quickly, but that’s exactly how it happened for me. There’s such a wealth of GIS routines available for Python data processing, yet the information is so decentralized that building my code has felt like a virtual treasure hunt. I’m still deep in the GIS developer’s rabbit hole, but at least I’m enjoying it down here. If you’d like to know more about the basics of projection, I love this video from Vox. And to play around with the shape distortions that accompany world map projections, I highly recommend this cool web application by Jason Davies. I’ll be back soon with pretty graphics to share!