An Engineered Economy

While studying for the FE Exam, I recently dusted off a subject that I learned a decade ago as a college senior: engineering economics. It didn’t fully make sense at the time: a compendium of formulas that manipulate capital and operating costs with annualized cash flow estimates, interest rates, inflation adjustments, depreciation from planned obsolescence, and more. I understand the rationale behind teaching this to engineering students, as these analysis tools become useful when engineers are elevated to project management roles that deal directly with financial resources. However, in my experience, the business-savvy decision-makers often do not understand these financial metrics, instead embracing a less mathematical, all-or-nothing mode of thinking. The disconnect between engineering economics and business practice was striking to me, and it goes some way toward explaining the current volatility in the economy.

To establish a basis, I’ll start by summarizing the “engineering school edition” of project economics, which mirrors financial tools used by other fields. We learn accounting principles, including how to create a balance sheet and apply terms like book value, gross margin, ROI, leverage, liquidity and solvency. We learn several ways to calculate asset depreciation (most notably MACRS) to justify equipment purchases considering tax implications. We learn how to weigh benefits and costs in many different scenarios, quantifying societal benefits (in dollars) for municipal projects and projecting cash flows for industrial investments. Sensitivity analysis – including risk assessment, which is a key aspect of engineering design – adds rigor to these cost-benefit evaluations. As a preliminary design step, we learn to use discount factors to assess the profitability of a project against a nominal interest rate (from investing in a bond or long-term index fund, for example), adjusting for inflation and converting capital and operating costs onto a conventional yearly basis. Many of the discount factor calculations, spanning a wide range of cash flow patterns and interest compounding strategies, are tabulated in the document below; incidentally, it is an excerpt of the reference handbook for the FE Exam, and I find it a comprehensive yet succinct resource for these value-fudging formulas.
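To make the textbook mechanics concrete, here is a minimal sketch of a present-worth comparison against a minimum attractive rate of return – the same arithmetic the FE handbook tabulates as discount factors. All figures are made up for illustration.

```python
# Minimal sketch of textbook present-worth analysis (illustrative numbers only).
# Single-payment P/F factor: (1 + i)^-n; net present worth = discounted cash flows - capital.

def present_worth(cash_flows, rate):
    """Discount a list of yearly cash flows (years 1..n) back to year 0."""
    return sum(cf / (1 + rate) ** (year + 1) for year, cf in enumerate(cash_flows))

capital_cost = 500_000                  # upfront investment at year 0
annual_net_cash_flow = [120_000] * 7    # uniform net revenue estimate, years 1-7
marr = 0.08                             # minimum attractive rate of return (e.g. a bond benchmark)

npw = present_worth(annual_net_cash_flow, marr) - capital_cost
print(f"Net present worth at {marr:.0%}: ${npw:,.0f}")  # positive means it beats the benchmark
```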

In industry, at least in my experience with small-to-medium-sized companies, all of these formulas go out the window. I’ve sat in many meetings with owners and investors, featuring rolling leather chairs and unbuttoned blazers and ego-stroking small talk and, if we’re lucky, perhaps a few PowerPoint slides. These guys invariably want to skip straight to the bottom line: what is my money doing for me? Investors are not impressed by uniform theoretical cash flows or future worth regressions; rather, they want to know the minimum amount of money they can put in for an acceptable return. As the engineer overseeing technical aspects of the project, I would be responsible for coming up with an itemized list of expected capital and labor costs (treated as upfront expenditures, never annualized) and an implementation timeline. Exact figures were generally expected, necessitating detailed equipment quotes instead of cost index estimates or other shorthands. The finance team, led by investors or owners, would generate their own revenue projections based on idealized profit margin and market share estimates. The resulting negotiation between finance and engineering was usually one-sided, as they would try to make line-by-line cuts, especially to labor, in the hopes of saving as much money as possible while keeping their glistening business projections intact.

One might argue that this is the normal give-and-take of business, but it often led to what I would consider ill-informed and dangerous decision-making. The financial pressure to make cuts to critical safety infrastructure and ignore expensive code requirements was strong and ever-present. When implementing a design for a hand sanitizer blending process, I had to fight hard to get critical safety features like automated fill/level control for the stainless steel blend tanks and a fire alarm-sprinkler system for the building. I was forced to sacrifice other requisite items like FDA-compliant ancillary equipment for material handling, EPA-compliant pollutant testing equipment, and IFC-compliant safety ventilation. The owners had made the calculation that saving several hundred thousand dollars up front was worth the possibility of a fine, as any regulatory enforcement would take months or years to take effect and penalties are generally capped by law (e.g. USDOT can assess a maximum fine of $250,000 for a shipping violation, TCEQ can assess no more than $25,000/day, etc.). They would sooner stop production or disband the company in the event of a regulatory crackdown, as long as they had made it to the bank first – an attitude that I encountered with a more extreme twist in 2020, when numerous lawsuits and unpaid wage claims never caught up with the people behind a fly-by-night manufacturing operation in rural west Texas.

When investors see their peers garner massive returns in startup companies with often-nefarious business practices, they want massive returns for themselves. One investment group declined to invest in a Felix Tech project because they “will not invest in anything that doesn’t guarantee at least a 500% return within 5 years.” We didn’t even qualify for a meeting with another group, which required that investment candidates already have an operating revenue of at least $2 million at a margin of at least 30% (i.e. an established, highly profitable business…and these guys had the audacity to call themselves ‘angel’ investors). When this expectation of high returns is coupled with an aversion to risk, the pressure to make unrealistic promises is immense. Wishful thinking prevails, making the interest rate formulas for comparing returns with the bond market seem dull and irrelevant. But there is a sobering reality that every project is at risk of losing its funding to another profitable venture, an extraordinarily difficult challenge in the increasingly financialized climate of the last five years, when massive gains in the stock market, cryptocurrency, and real estate were commonplace. When a financier decides to pull $500,000 out of a $2 million build to invest in cryptocurrency instead, it has a debilitating effect on the project and sends shockwaves that impact everyone connected to the business.
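For perspective, a quick conversion shows how far that demand sits above any bond benchmark (assuming “500% return” means the stake grows to six times its original value – if they meant five times, the answer is still around 38% per year):

```python
# Back-of-the-envelope: what annual rate compounds to a 500% gain over 5 years?
total_multiple = 6.0   # principal plus a 500% gain (assumed interpretation)
years = 5
annual_rate = total_multiple ** (1 / years) - 1
print(f"Required annualized return: {annual_rate:.1%}")   # roughly 43% per year, every year
```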

The broader point here is that when I watch the recent bank runs at Silicon Valley Bank, First Republic Bank, and others, the overall behavior feels familiar. Greedy investments were made into tech companies that were pressured into promising the moon. The banks entrusted with safeguarding these investments also got in on the action, seeking to maximize returns for their own shareholders. When the first dominoes fell, the parties with the riskiest positions had to race to pull their money from these banks. Meanwhile, the banks have a backstop from the federal government, which has committed to reimbursing account holders up to $250,000 or likely more through the FDIC. The American people will pay for this twofold, as taxpayer money is used to subsidize financial impropriety and as the larger economy suffers from the speculative instability of major banks collapsing. Maybe the business attitude will undergo a much-needed adjustment as time goes on – especially as the explosive growth of financialized assets cools and investors must consider investing again in longer-term projects that generate solid, lasting returns. Or maybe the financial system needs to borrow from engineering logic, appropriately assessing its risks and recalibrating toward a realistic rate of stable growth for the future.

Hurricane Ian: a Category 4 Direct Hit

Hurricanes are, in my opinion, the most terrifying type of weather phenomenon, and Hurricane Ian is a textbook reason why. While the most monstrous tornadoes are a mile or two wide and churn along for about an hour, Ian’s severe bands spanned a diameter of nearly 300 miles and exposed much of its path to sustained winds equivalent to an EF2-EF3 tornado. Damage was exacerbated by flooding; some areas were deluged by over a foot of rain in a single day, and coastal areas below the storm surge height (up to 10 feet) were inundated by seawater. It’s a situation of unavoidable devastation – even though we knew that Ian would hit the Gulf Coast nearly a week in advance, how do you prepare a million people or more for a day of 100+ mph sustained winds and over a foot of rain?

I write about hurricanes often, but I don’t want to understate the gravity of this disaster. This hurricane is among the worst to hit the U.S. in my lifetime, joining Maria and Katrina in the total destruction category. Ian was comparable to the worst-case scenario that forecasters were expecting for Hurricane Irma in 2017, landfalling as a Category 4 near Fort Myers and leaving a broad swath of destruction across the state. Ian already compares to the worst storms in Florida’s history, according to the infographic below. A 500-year wind event for Florida’s southwest coast, a 1000-year rain event for the Orlando area (though I generally believe rain risk estimates need updating). Thousands of homes flooded, over 2 million residents without power, a death toll in the double digits and climbing… a tragic and devastating event.

But hurricanes are a fact of life in Florida. Catastrophic storms will continue to hit Florida, likely with increasing frequency as the planet warms. And people will continue to live there; with the exception of a few days during hurricane season, the Sunshine State has some of the best weather in all of America, not to mention the beautiful beaches and endless tourist attractions. Florida will surely build back, hopefully with an eye on resilience. On that note, what can actually be done to prepare for a disaster scenario like this?

  • Assessing risk for flooding: there are numerous resources available, including the National Flood Hazard Layer from FEMA, for mapping floodplains and storm surge areas. Moreover, versions of these maps are requestable public records from every municipality, and informed residents can know roughly what volume of rain or height of storm surge will overtop any flood barriers and cause damage to their homes or yards.
  • Preparing for an evacuation: Piggybacking on the flood hazard data, emergency managers have compiled an excellent map interface that gives neighborhood-specific evacuation information. Officials knocked on doors and issued mandatory evacuations for the roughly 300,000 people at highest risk, but some people unfortunately resist these mandates.
  • Strengthening building codes: Hurricane clips and other wind-resistant architecture have been required in Florida since Andrew in 1992, but Houston meteorologist Eric Berger said this would have been a “nightmare storm” for less-regulated coastal Texas. It is tragic to see mobile home parks getting swept away, and I wish a regulatory approach could move those developments to safer ground away from storm surge areas.
  • Investing in public infrastructure: Florida’s power outage could have been a lot worse if not for a concerted effort since 2004 to re-route electrical lines underground. But more flood control infrastructure like seawalls and salt barriers could better protect developments on both beachfronts (like Fort Myers Beach and Captiva Island) and intracoastal waterways (like the entirety of Cape Coral).
  • Mobilizing disaster response: Finally, when the hurricane inevitably hits, first responders and debris-moving equipment need to be ready to move. Shelters opened to accept thousands, serving hot meals and providing basic necessities and temporary lodging. Florida appears to have done an exceptional job with this, with a detailed plan disseminated from the state level and supported by numerous public and private organizations. If only all counties had followed it.
  • Rebuilding with all hands on deck: Floridians will be rebuilding for a long time, and the initial estimate that only ~20% of those affected had flood insurance might make rebuilding efforts even messier. I hope that FEMA has the breadth and staying power to assist those in need and has learned from disasters like the Joplin tornado (where numerous nefarious contractors swarmed to grift victims dependent on federal aid). And I hope that communities can come together to rebuild in a way that steels Florida for the next one.

ASHRAE Part 2: Optimizing Indoor Spaces in the COVID Era

Hysteria around the COVID-19 pandemic provided the perfect cover for many enterprising companies to peddle all sorts of products of varying antiviral utility. While I may have the most experience with the opportunistic hand sanitizer industry, the markets for masks, sanitizing chemicals, COVID tests, infrared thermometers, plexiglass barriers, air purifiers, UV disinfection lamps, and bug-out bags have all exploded since early 2020. A steady hand through all of this was ASHRAE – though I knew them mainly for their useful weather datasets, the HVAC experts that make up this trade organization have been putting out clear, concise, scientifically-motivated resources since the pandemic began.

For a respiratory virus like SARS-CoV-2, the defining challenge in pinpointing transmission risk was determining the extent of airborne/aerosol transmission versus droplet/surface transmission. ASHRAE released its first guidance in April 2020 to clarify this question early, acknowledging that the primary risk was from droplets but that aerosol spread couldn’t be ruled out. This codified the 6-foot social distancing rule for indoor spaces, allowing businesses like restaurants to reopen on an interim, wait-and-see basis. From there, ASHRAE set an ambitious research schedule, leading to a series of detailed and informative white papers to guide HVAC professionals…and that’s about it. A downside to leaving such an important task to a trade organization is that the adoption of guidelines feels optional, completely unenforceable in all but a few jurisdictions.

To me, it’s logical that indoor air management should be the primary focus to prevent the spread of a respiratory virus, especially with the emergence of more highly contagious variants like Omicron. This goes beyond a simple air purifier, a common scam item that ASHRAE played an active role in screening/warning against – strategies include directing airflow patterns downward and toward local exhaust registers, modulating pressure to isolate high-risk rooms, adding UV treatment and HEPA filtration to recycled air, and boosting outside air intake and exhaust flow rates. New requirements for hospital air filtration/ventilation are in the works, with the goal of mitigating the spread from contagious patients. Likewise, the next edition of the International Mechanical Code will possibly include airflow design requirements specifically aimed at curbing viral spread in high-occupancy indoor spaces.
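To illustrate why the outside-air and exhaust rates matter so much, here is a rough, well-mixed dilution estimate – not from the ASHRAE papers, just a sketch with assumed room dimensions – of how supply rate translates into air changes per hour and contaminant clearance time.

```python
# Rough well-mixed dilution sketch: outside-air rate -> air changes per hour (ACH) and the
# approximate time to flush ~95% of airborne contaminants. Room dimensions are assumed for
# illustration and are not taken from any ASHRAE guidance.
room_area_sqft = 1_000
ceiling_height_ft = 10
room_volume_cuft = room_area_sqft * ceiling_height_ft

for outside_air_cfm in (100, 300, 600):
    ach = outside_air_cfm * 60 / room_volume_cuft    # air changes per hour
    minutes_to_95pct = 3 * 60 / ach                  # ~3 air changes removes ~95% of contaminants
    print(f"{outside_air_cfm:>4} cfm -> {ach:.1f} ACH, ~{minutes_to_95pct:.0f} min to 95% clearance")
```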

The science is settled: we have the expertise to greatly reduce airborne transmission within new construction for very little added cost. While it’s unlikely that existing buildings will be forced to upgrade their HVAC anytime soon, I do hope that the appropriate regulatory pathways (building codes, OSHA, etc.) begin to require a consideration of viral spread in ventilation design. Our society as a whole may not be prepared for the next pandemic, but thanks to subject matter experts at ASHRAE working in conjunction with public health professionals, at least we may be able to breathe a little easier going forward.

ASHRAE Part 1: Weather at Work

Over the course of renovating an industrial facility to meet code, I have encountered several aspects where meteorological/climatological factors play a role in guiding design. Resistance to natural disasters might be the first aspect that comes to mind, as many coastal locations mandate hurricane clips and other structural reinforcement techniques in new construction. Rainfall management is also important, as the outdoor surfaces must be engineered to ensure adequate runoff and spill containments must be sized to accommodate a maximum daily rainfall (11 inches in our south Texas location). Monthly temperature and humidity averages designate the climatic zone for building code purposes, which determines the required R-value of insulation and whether a vapor barrier should be placed on the inside or outside of walls. Other design elements, like high-albedo coatings or shades to protect against insolation, may not stem from a specific code requirement but can be critical to the functionality of a facility throughout the seasons.
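As one example of how a weather statistic lands directly in a design calculation, here is a toy version of that rainfall allowance in containment sizing. The tank size and berm footprint are made-up numbers, and the sizing basis (largest vessel plus design rainfall) is a common convention rather than a quote from any specific code.

```python
# Toy containment-sizing check: largest vessel volume plus the design maximum daily rainfall
# over the bermed area. Tank volume and footprint are made up; the 11-inch daily rainfall is
# the design value for our south Texas location mentioned above.
GALLONS_PER_CUFT = 7.48

largest_tank_gal = 10_000
containment_area_sqft = 2_500
max_daily_rain_in = 11.0

rain_volume_gal = containment_area_sqft * (max_daily_rain_in / 12) * GALLONS_PER_CUFT
required_gal = largest_tank_gal + rain_volume_gal
wall_height_ft = required_gal / GALLONS_PER_CUFT / containment_area_sqft

print(f"Rainfall allowance: {rain_volume_gal:,.0f} gal")
print(f"Required containment: {required_gal:,.0f} gal -> wall height ~{wall_height_ft:.1f} ft")
```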

Lately, I have been tasked with redesigning the heating and cooling systems for our facility. To meet the requirements of the International Energy Conservation Code (IECC), HVAC systems must be sized with consideration to the location’s expected weather conditions throughout the year. The American Society of Heating, Refrigerating and Air-Conditioning Engineers, or ASHRAE, has developed a compendium of resources to estimate HVAC loads based on climatic averages and assess risk for extreme weather events. It’s meteorology data processed in a way that is most useful for understanding how your building will interact with the elements. I included a sample of the data for nearby Victoria, Texas, embedded below for your perusal; it’s a lot of information but it’s pretty digestible, even for non-experts:

From this extensive tabulation of data, I was mainly concerned with the extreme temperatures (dry-bulb for winter and wet-bulb for summer) to establish edge cases for facility HVAC sizing. The summer extreme was straightforward: a design temperature/humidity difference is easy to pinpoint when the dew point maximum is a fixed value and there are only a few degrees of difference between the mean annual extreme dry-bulb temperature and the 50-year extreme. The winter extreme had some nuance, however: with a standard deviation twice as large for the yearly dry-bulb minima, some winters barely freeze whereas last winter brought an extended blast of single-digit temperatures. While code requires that the roof deck of our building be heated to 40 °F to prevent wet-pipe sprinklers from freezing, there is some room for engineering judgment in deciding what minimum temperature becomes central to the design. I considered the 10-year value a reasonable design target, back-calculating to ensure that even last year’s all-time extreme would keep the roof deck above 32 °F.
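The n-year values come straight out of the ASHRAE tables, but they can also be approximated from the mean and standard deviation of the annual extremes with a standard Gumbel-type factor, which is handy as a cross-check. A minimal sketch, using placeholder statistics rather than the actual Victoria numbers:

```python
# Sketch: estimate n-year return-period extreme minimum temperatures from the mean and
# standard deviation of annual extremes using a Gumbel-type factor. The mean/std below are
# placeholders, not the actual Victoria, TX tabulation.
import math

def gumbel_factor(n_years):
    """Multiplier on the standard deviation for an n-year return period."""
    return -(math.sqrt(6) / math.pi) * (0.5772 + math.log(math.log(n_years / (n_years - 1))))

mean_annual_min_F = 25.0   # placeholder mean of the annual extreme minima
std_annual_min_F = 6.0     # placeholder standard deviation of the annual extreme minima

for n in (5, 10, 20, 50):
    t_n = mean_annual_min_F - gumbel_factor(n) * std_annual_min_F   # colder than the mean
    print(f"{n:>2}-year extreme minimum: {t_n:.1f} °F")
```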

To put a ribbon on this meteorology-aided design, I incorporated some analytical techniques from previous work on local heat island modeling. I recycled some Delaunay triangulation code to establish two other data vertices at San Marcos and College Station, then performed a weighted-average calculation to interpolate the necessary data values for Hallettsville. Despite other stations being closer as the crow flies, this triangulation technique filtered out noisy results and provided a directionally balanced estimate of the climatic conditions at the exact location of our facility. A relatively simple way to add specificity and cross-check the data – I firmly believe that more local-level data is always a good thing.
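Here is a minimal sketch of that interpolation step: barycentric weights for Hallettsville inside the Victoria / San Marcos / College Station triangle, applied as a weighted average. The coordinates are approximate and the station values are placeholders, not the actual ASHRAE numbers.

```python
# Sketch of the triangle-weighted interpolation: compute barycentric weights for Hallettsville
# within the Victoria / San Marcos / College Station triangle, then take a weighted average of
# each station's value. Coordinates are approximate (lon, lat); station values are placeholders.
import numpy as np

stations = {
    "Victoria":        (-96.92, 28.81),
    "San Marcos":      (-97.94, 29.88),
    "College Station": (-96.36, 30.59),
}
target = (-96.94, 29.44)   # Hallettsville, approximately

def barycentric_weights(tri_pts, p):
    """Weights of point p with respect to the triangle's three vertices (sum to 1)."""
    (x1, y1), (x2, y2), (x3, y3) = tri_pts
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    return np.array([w1, w2, 1.0 - w1 - w2])

weights = barycentric_weights(list(stations.values()), target)
winter_design_temp_F = np.array([28.9, 27.1, 26.3])   # placeholder values per station

print(dict(zip(stations, np.round(weights, 3))))
print(f"Interpolated value for Hallettsville: {weights @ winter_design_temp_F:.1f} °F")
```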

With ASHRAE weather data in my toolbox, I feel better prepared to model generalized conditions for normal and extreme weather across the United States. It was fun to explore the intersection between engineering design and meteorology, even though the owners may decide not to build any of this according to my design – as it stands, we have an unprotected sprinkler system but no budgetary approval for further safety or weatherization installations. Meanwhile, I will continue using my fascination with weather to analyze 10-day forecasts in the hopes that our pipes don’t freeze and our building doesn’t flood.

Hurricane Ida: handled it a little better

Yesterday, New Orleans and the surrounding marshland parishes of SE Louisiana were directly hit by Hurricane Ida. We all knew that this was going to be bad. Before the rain showers had coalesced into a tropical storm off the coast of Jamaica, the spaghetti models were in the closest agreement I’ve ever seen for any storm (see below, it’s impressive): this was going to be a Category 4 or 5. The next name in line was Ida, which just sounds like the name of a notorious hurricane, certainly in comparison to more docile choices like Ingrid, Indica, or Ivanka. And it was due to strike on the 16-year anniversary of Hurricane Katrina, one of the deadliest and costliest natural disasters in U.S. history, slated to inflict its most intense wrath in many of the same places most affected by Katrina. All we could do was collectively hold our breath as Louisianans went through the seasonal ritual of boarding up windows and deciding whether or not to evacuate to higher ground.

Model runs from the GFS and ECMWF from 4 days prior to landfall are remarkably consistent in path and strength.

After Ida made landfall as a Category 4 on Sunday morning, we had to keep holding our breath as the extent of damage would not be known until the next day. Of the more than 2,000 residents of Grand Isle, one of Louisiana’s few recreational beach towns, only 40 resisted mandatory evacuation orders and were left stranded with 8 firefighters to fend for themselves. The electricity quickly went out all over the region as transmission lines of all sizes were downed in the 100+ mph sustained winds, wiping out power to 1.8 million people. But the news that managed to get out was mainly positive: the levees and pumping systems around New Orleans held the floodwaters back completely. Only one storm-related fatality was reported by Monday morning in the New Orleans area, and the Cajun Navy is out in force to assist with rescue operations. While the physical destruction of Ida was devastating to the numerous small fishing communities on the bayous in her path, New Orleans fared comparatively well thanks to the $16 billion flood control investment made since Katrina.

However, the prolonged power outage remains a serious issue. As of Monday night, Entergy Corporation, which has a monopoly on electricity over New Orleans and southern Louisiana, has no update as to when power will be restored, except that it “could take up to 4 days to assess the damage” before grid-scale restoration efforts even begin. The story is startling: all 8 connections to the grid were knocked out of service, leaving the New Orleans area on an island with no electricity generation. Worse yet, the company shuttered its 60-year-old natural gas plant just across the river in 2016, lobbied extensively for a subsidized replacement, then pivoted before completing construction. It’s a grave situation that has many similarities to the Texas grid failure in February, a disaster of corporate mismanagement and lax government oversight. Hopefully the 15,000+ line workers pouring in from other states can rectify the crisis in Louisiana as soon as possible.

Red is bad….all of SE Louisiana is out of power. Map engine from Entergy’s website

After seeing viral imagery of Category 4 damage and widespread power outages, I can already sense the keyboard warriors winding up their “I’d’a handled it better” grandstanding: that all power distribution should be underground everywhere, that victims shouldn’t get relief money because it was their choice to live in hurricane-prone areas, and that New Orleans should be abandoned before it sinks into the sea due to repeated storm surge flooding. I would argue, rather, that Hurricane Ida is a strong example that New Orleans can continue to survive and prosper with proper, government-initiated preparations. The levee system, much maligned after Katrina, survived a similar threat from Ida with no detected breaches. Wind-resistant construction has been a key requirement of the building code since the 1990s, so structures that were built back after Katrina stood up valiantly to the 100+ mph winds of Ida. The federal flood insurance program is tightening to reward flood-resilient architecture, no longer just a bailout for badly-built beach houses. The electrical grid could be remade to withstand catastrophes like this with some thoughtful civic design, as any combination of maintaining backup generation capacity or over-engineering a main grid connection could have greatly lessened this disaster. A little extra investment in rebuilding stronger can preserve the beautiful bayous of southern Louisiana along with the vibrant cultural epicenter that is New Orleans, one of my favorite cities in the entire world.

A Different Kind of Code

This segment of the blog is usually dedicated to the various aspects of my tornado modeling project: researching physical property relations, sifting through radar and sounding data, learning geographical analysis techniques, and eventually writing Python code. Since moving to Texas, however, that project has moved to the back burner while I’ve been more concerned with a different kind of code. As the engineer overseeing the construction of a blending facility for hand sanitizer and other sanitation products, I’ve had to ensure that the facility complies with all federal and local codes. This is a large undertaking, especially since I had very little background in this aspect of engineering design. Luckily, we hired a few consultants to guide me through the process of reviewing the myriad regulations, applying for permits, and directing the site renovation, a process that, nearly a year after my hiring, is still ongoing.

To transform the old metal fabrication warehouse into its new hazards occupancy, we had to first submit a permit application to city hall. Within the city limits, all new development must follow the 2009 IBC (International Building Code), 2009 IFC (International Fire Code), 2008 NEC (National Electrical Code), 2009 IMC (International Mechanical Code), 2009 IPC (International Plumbing Code), 2009 IECC (International Energy Conservation Code), and all relevant OSHA and ADA/TDLR employee accessibility standards. The City outsources permit reviews to a third-party certification company, Bureau Veritas, since it’s hard to come by the requisite code experts in a town of 2,500 people. Our submittal took about 6 months to compile, including full-scale blueprints for architectural, fire protection, mechanical, electrical, plumbing, and demolition.

The code-required improvements were diverse and extensive. The building was surveyed, drawn up in CAD, tested for asbestos, and zoned based on each area’s projected use. Our flammable liquid hazards occupancy (H3) is limited to about 15,000 square feet, so a 1-hour rated firewall will be built between the processing area and my office (I’m happy about this requirement!). Fire code also outlines how much of each classification of flammable, combustible, corrosive, or explosive material can be stored in each room, on a racking system with a precisely calculated flow of sprinkler water available through individual sprinkler heads aimed at each pallet/tote on the racks. Five UL793-listed trapdoors will be installed in the roof to automatically open during a fire and exhaust smoke and heat. A 4-inch tall containment barrier will be constructed around the perimeter of the H3 occupancy to prevent chemical releases and contain the sprinkler volume. Smoke alarms, sprinklers, and the overhead door traversing the firewall will tie into the building controls system. Safety ventilation of 1 cfm/sqft is required throughout the hazards occupancy to prevent the accumulation of flammable vapors. The electrical code specifies distances from process equipment where classes of non-sparking and/or explosion-proof electrical devices are required to prevent the ignition of such vapors. Even before any process equipment is configured, our preparation for any kind of incident is explicit and exhaustive.

But that’s only the safety component of the facility design… there’s a lot more. The FDA good manufacturing practice (GMP) regulations specify materials for storage and process implements, down to the grade of polish on stainless steel and the cleanliness of air in the processing/packaging areas. Our bottling line will be in an ISO class 9 clean room to limit contamination – a freestanding room with wipeable, vapor-barrier walls and climate-controlled, HEPA-filtered air intake. All HVAC systems must be installed/balanced to follow mechanical code, and any climate-controlled space must be insulated per energy code. New eyewash stations and safety showers must be plumbed into the service water and sewer systems per plumbing code. Finally, we had to check our egresses per the building code, cutting several new doorways in the exterior wall so that every point within the facility is located within 75 feet of an emergency exit. We also have to stripe the driveway with a fire lane and a parking lot meeting minimum parking requirements, replace the front door and stepped landing to make the offices handicap accessible, and install new lighting and signage to illuminate our driveway and indicate hazards.

Is this complete overhaul excessive? Possibly. My company’s board of directors certainly thinks so: after all, Texas is supposed to be a pro-business utopia, unimpeded by regulations and government interference. However, a number of high-profile accidents have occurred in Texas chemical plants, in part due to weak enforcement by the state and local governments. The fertilizer plant explosion in West, Texas that killed 15 in 2013 is a particularly harrowing example of how devastating the lack of safety oversight can be in my industry. That plant was cited by OSHA, TCEQ (Texas’s EPA), and the Department of Transportation for storage violations, yet no changes were implemented after fines were levied. A similar story followed the deadly fire at TPC near Beaumont in 2019, as compliance recommendations were not followed once inspectors left. Without the fear of code enforcement, our board was prepared to store highly flammable alcohol in a single-walled fiberglass tank, a gross safety violation that they would’ve gotten away with if not for the city’s permitting process. Instead, we will be storing alcohol in steel tanks following API 12F and UL 142 standards (I hardly mentioned the role of trade organizations in steering regulatory policy, but in the absence of a unified code they are instrumental in standardizing safe practices).

To see the value of building codes generally, we need look no further than the tragic aftermath of the condo collapse in Miami (a building whose owners, incidentally, also ignored the recommendations of a 2018 inspection). That particular building was constructed prior to Hurricane Andrew, which brought a wave of code improvements for hurricane resilience. Likewise, towns like Moore and Joplin that have been affected by devastating tornadoes have implemented codes that mandate hurricane clips and other wind-rated construction strategies. I never thought that I would become intimately familiar with this area of the law, but I am grateful to have learned the codes through the broad scope of safety systems design. And I have a new appreciation and hope for the continued improvement of these codes that work in the background to preserve our lives and property.

Inside the Hand Sanitizer Industry

Since the onset of the pandemic, there has been a huge boom in new hand sanitizer products, from companies large and small, new and old, honest and dishonest, competent and incompetent. The onus has fallen to the consumer to differentiate between reputable and disreputable labels, a tough challenge for anyone, even for me, a quality control engineer for a company that manufactures hand sanitizer. I intended to write this post months ago, as it only took a few weeks in the industry for me to recognize several sources of skullduggery. From toxic ingredients to tequila smell to yellow discoloration, I’ve seen it all, which is why this long-overdue breakdown of hazards is still relevant and necessary.

You may be asking, “Where is the FDA’s leadership? Isn’t it their responsibility to ensure safety and quality in this industry?” While hand sanitizer is classed as an over-the-counter topical pharmaceutical under the FDA’s regulatory umbrella, the FDA does not have the bandwidth to rigorously monitor products from all 2,500+ firms that have registered to manufacture hand sanitizer since March 2020. The agency released guidances (many of which were nonbinding) for new manufacturers to follow the recommendations of the WHO and CDC to produce a liquid sanitizer with limits on certain toxic impurities. Then it took a reactive approach to regulation, responding only to the most serious problem – 17 fatal methanol poisonings – with a ban on imports from Mexico.

As you might expect, this hands-off approach did not create a culture of self-enforcement among firms desperate to claim a share of the competitive new market. First, an overwhelming majority of products contain thickening agents, disregarding the FDA’s instructions for liquid sanitizers. This isn’t a problem in and of itself, as most thickening agents are safe and FDA-approved. However, the addition of these artificial ingredients makes many labels that claim a “100% natural” product misleading. Worse yet, the carbomer-type binders (which I prefer in my formulas as a low-residue, odorless ingredient) often introduce side reactions that form a yellow tint in the presence of aldehyde impurities or an unpleasant amine smell when paired with certain neutralizers and excessive heat or sunlight exposure.

This leads to a second, more harmful problem. The shortage of high-grade ethanol and isopropanol led many manufacturers to use lower grades of ethanol, which the FDA authorized provided that certain purity standards were met. A certificate of analysis should accompany all alcohol deliveries, but these analyses often leave out critical lab tests for one or more of what I’d consider the “big five” contaminants: methanol, benzene, 1-propanol, acetals/aldehydes, and inorganic residues. Even with lab analyses that exceeded limits in one or more categories, numerous alcohol suppliers misrepresented their product as “food grade” or “USP spec” to unknowing customers, who parroted the false purity claims down the supply chain. Intake quality control has been a significant challenge for me, as it can be difficult to obtain certificates of analysis that meet all standards and actually reflect the exact batch that is delivered.
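The screening itself is simple once the limits are pinned down – the hard part is getting a certificate you can trust. A sketch of the kind of intake check I have in mind, with placeholder limit values standing in for the FDA’s interim impurity limits (the 2 ppm benzene figure is discussed below):

```python
# Sketch of an intake-QC screen for an incoming alcohol certificate of analysis (CoA).
# The limits below are PLACEHOLDERS standing in for the FDA interim impurity limits --
# pull the real numbers from the current FDA temporary policy before relying on this.
IMPURITY_LIMITS_PPM = {
    "methanol": 630.0,       # placeholder
    "benzene": 2.0,          # interim allowance discussed later in the post
    "1-propanol": 1000.0,    # placeholder
    "acetaldehyde": 50.0,    # placeholder
}

def screen_coa(coa_results_ppm):
    """Return (impurity, measured, limit) tuples for anything missing or over its limit."""
    failures = []
    for impurity, limit in IMPURITY_LIMITS_PPM.items():
        measured = coa_results_ppm.get(impurity)
        if measured is None:
            failures.append((impurity, "not reported", limit))
        elif measured > limit:
            failures.append((impurity, measured, limit))
    return failures

incoming_batch = {"methanol": 120.0, "benzene": 3.1, "1-propanol": 400.0}  # example CoA values
for impurity, measured, limit in screen_coa(incoming_batch):
    print(f"FAIL {impurity}: {measured} (limit {limit} ppm)")
```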

The FDA’s silence on contaminants other than methanol has been deafening. Filling the void, Valisure, an independent lab, published a study analyzing hand sanitizers for a range of unwanted impurities. The lab identified violations of FDA recommendations across the board, a strong example of the lack of regulatory accountability. Most alarmingly, 23% of the brands studied exceeded allowable limits of benzene, one of the earliest known carcinogens. The worst offender, ArtNaturals, has shown no intention of voluntarily recalling any of its millions of units sold, even after the lab revealed a staggering 16 ppm of benzene (compared to the emergency use allowance of 2 ppm and OSHA maximum exposure limit of 0.1 ppm). Until the FDA acts decisively to demand recalls or shut down production facilities within the U.S., I expect companies to continue to pump substandard sanitizer products into a market that is already littered with known hazards.

It’s not all doom and gloom – there are many high-quality hand sanitizers out there (perhaps up to 77% of the market, if the Valisure study is representative). They just require some discernment to find. Here are my consumer recommendations if you still wish to purchase hand sanitizer:

  • Check the active ingredient. The FDA only allows three active ingredients: ethyl alcohol or ethanol, isopropyl alcohol or isopropanol, and benzalkonium chloride (see footnote). I recommend a 70% alcohol-by-volume product to insure against dosing errors or vapor losses during packaging. No other ingredients are recognized as having any effect against pathogens.
  • Look for clear, untinted sanitizers. Yellow discoloration likely indicates elevated concentrations of ketones/aldehydes, which have an unpleasant odor similar to paint thinner. Fun-colored dyes may be present to mask an unwanted discoloration, and they end up as residue on your skin once the alcohol and water evaporate.
  • Bigger brands are generally safer. Established companies are more familiar with the rules surrounding OTC pharmaceutical production, and they were more likely to have supply contracts with reputable sources before last year’s demand spike and supply disruptions.
  • Be careful with gels and fragrances. To minimize sticky or foul-smelling residue from gels, I would only trust thickeners listed as carbomer, carbopol, or acryl acrylate crosspolymer on the drug facts label. Fragrances also leave residue on your hands, and industrial fragrances may not be produced to the same rigorous sanitary standards.
  • If a label has typos, weird capitalization or formatting issues, avoid it like the plague! Chances are that the same level of care went into sourcing ingredients to the requisite purity standards and blending them properly.

An example of poor quality control from a third-party hand sanitizer manufacturer

Note: Benzalkonium chloride is a high-strength mutagenic disinfectant. When improperly dosed, it can be toxic to human cells. Widely used as an industrial disinfecting agent, benzalkonium chloride has also sparked debate about antibiotic resistance, with some calling for a decrease in its use. While undeniably effective in the right quantities, I cannot recommend it over alcohol products.

The Engineering Power Struggle

Over the last two weeks, Texas has endured the coldest type of hell, where a once-in-a-decade winter storm reduced the entire state to frozen anarchy. The state was less than 5 minutes away from catastrophic destruction of the power grid, and rolling blackouts (that for the most part did not roll as advertised) left millions of households unprotected from the cold. Pipes froze, homes flooded, many elderly residents lost access to oxygen and other critical care, any grocery stores that weren’t closed were picked completely clean, and several people died of hypothermia. I was largely spared from suffering in my outpost across from the town square in quaint Hallettsville, but it pained me to watch what I consider a complete infrastructural failure from top to bottom. While extreme weather events inevitably cause some suffering, I can’t blame the weather for the chaos and tragedy that occurred. In the richest country in the world, with the advanced technology and knowledge of the 21st century, the disaster stemmed from our collective lack of preparedness.

As an engineer, I was trained to identify failure modes and design systems to mitigate these failures. The first failure was in warning communication: the forecasts of the arctic blast were accurate nearly two weeks in advance, but a combination of doubt and denial prevented most Texans from taking any preparatory actions until the day before. Atmospheric scientists and operational meteorologists have discussed this issue at length in publications and conferences, but even as short-range forecasts approach the asymptotic limit of “perfection,” there is a significant obstacle to communicating warnings to the public. Responsibility for emergency preparedness falls to local governments, who receive the same warnings as the public (unless they’re large enough to employ someone with a meteorology background, like Oklahoma City). Disaster preparation is entirely up to them; here in Hallettsville, bridges and intersections were covered with copious amounts of pea gravel for traction, and that’s about it. Dallas had a skeleton crew of snowplows clear one lane on major thoroughfares, but this did not extend into the jurisdictions of some distant suburbs. Houston appeared to have nothing of the sort, as all 26 lanes of Katy Freeway were blanketed by ice and snow. Even when the forecasts are spot-on, how do you communicate to people that they will likely be stuck in their homes for at least 3-5 days? Especially in a society that is resistant to shutting down businesses (or otherwise modifying behavior at the request of some authority), communicating warnings is a challenge that goes beyond engineering.

An even greater challenge was exposed when the power grid failed to meet the elevated demand as temperatures dropped statewide. After a similar deep-freeze crippled north and west Texas in 2011, federal regulators composed a 357-page report detailing what went wrong and providing winterization recommendations. These cold weather precautions are required by law in the rest of the continental U.S. electric grid, but Texas has managed to dodge regulations by maintaining its own interconnect and independent oversight (by the now-infamous ERCOT). Utilities in Texas have been increasingly privatized since the 90s, a shift intended to drive prices down that has done the opposite, prioritizing profits over maintenance and system upgrades. The engineering solutions are well-known, but there needs to be a regulatory or market-driven impetus to implement any of these solutions. Moreover, the post-disaster talk has been reactionary, with enraged entities threatening lawsuits against anyone on the generation side. As a forward-thinking engineer, I strongly believe that we would be better served by focusing on investing in updated infrastructure, from more robust primary generation to a smarter distribution system. A smart grid that could isolate nonessential commercial loads and customers with backup generators would have lessened the severity of rolling blackouts, maintaining a base level of service for all Texans to prevent any of the lasting damages to homes and municipal water systems.

As a freshman in college a decade ago, I wanted to become an engineer so I could help solve problems in the energy sector, addressing grid reliability as the industry shifts toward renewable alternatives. The technical expertise exists to transform the grid into a smart, futuristic network that optimizes the use of natural resources. However, economic and political factors suppress innovation in favor of cost savings, especially in a monopolistic utility market where there is no economic incentive to gain a technological advantage. This profit-obsessed cost-cutting extends beyond the energy sector, and I have encountered it throughout my career: in academic research, in the private sector of weather forecasting, and especially in manufacturing. It’s a frustrating situation for engineers, having to do R&D on a shoestring budget and often becoming expendable once our algorithms/processes are implemented. I believe this trend will only be reversed when more technical people occupy decision-making roles in business and government, when the value of innovation and societal benefit is considered instead of just dollars and cents. Natural disasters will continue to occur, but this was mainly a manmade disaster, and we can mitigate those with engineering – that is, if we want to.

The Curse of Continuity

We made it through the first month of the COVID-19 pandemic, adapting to the social disruption while ‘essential’ workers chugged courageously along. About 20% of the workforce has been laid off or furloughed and is stuck at home, which means 80% of jobs have been deemed ‘essential.’ Whether out of a sense of duty or necessity, most people are still working, often under more stressful and dangerous conditions than usual, often at a discount to their employer. They are truly essential, from healthcare professionals to food service employees to sanitation workers to package handlers and more, because our economy runs as a complex machine. And I recently realized that the reason our machine is breaking down – the reason tensions are building and protests are mounting – is a fundamental idea of engineering design: that continuous processes are always optimal.

In college I majored in chemical engineering, a discipline that evolved from industrial process engineering. The overarching theme of the curriculum involved solving different permutations of mass and energy balance calculations for batch, semi-batch, and continuous flow processes. For mathematical and infrastructural reasons, continuous processes have the highest efficiency and are strongly preferred in industrial design. My team’s senior design project was to design a mobile biomass-to-biofuel refinery, optimizing energy consumption versus throughput for a continuous process. Despite the frequent on-and-off switching inherent to a mobile operation, our professor urged us to focus on the continuous flow specifications and completely ignore the much more rigorous calculations for startup.

I can imagine that the oil industry operates under the same principles; after all, much of our curriculum was tailored to groom us for the fossil fuel and petrochemical industries. Oil futures are traded months or years in advance, and wells and pipelines are designed to serve this demand on a continuous basis. As consumption plummeted in March, our oil and gas infrastructure continued pumping at the same rate well into April: there were orders to fulfill and significant costs associated with shutting down and subsequently restarting the pumps. Since the cost of storage infrastructure is also significant, the limited storage capacity filled up quickly, causing the crude futures market to crash. “Why not keep the oil in the ground instead of paying customers to take it?” was never a valid question before, since revenues had always been positive.

Similar choke points have been exposed in other supply chains. When a large pork processing plant in South Dakota closed due to a COVID-19 outbreak among its staff, it triggered an absolutely dreadful domino effect. Farmers throughout the region scrambled to find slaughterhouses for their juvenile hogs, whose value diminishes significantly as they reach adulthood. These farmers are short on barn space, even with the implementation of a high-fiber diet to slow hog growth. Tens of thousands of hogs that were already queued for slaughter have been euthanized to be composted; since up to 80% of hogs contract pneumonia from close-packed conditions before slaughter, the animals can’t feasibly be sent to other meat-packing facilities or back to the farm. To fulfill domestic demand and profit from the new export deal with China, the president invoked the Defense Production Act to keep meat processors open, shielding corporations from liability while putting workers at risk of contracting the virus. All this mayhem for an 8% drop in pork supply, which you’d think we could absorb with an 8% reduction in pork consumption. But that’s not how our consumer markets operate, as we’ve seen with the hoarding of toilet paper and hand sanitizer.

I had expressed optimism that we could reshape the American economy, support essential workers relative to the value they provide, and strengthen our communities in a way that better prepares us for the next economic disruption or global pandemic. I was wrong: the opposite is happening. Corporate behemoths like Amazon and Walmart are profiting immensely while rebuffing their employees’ demands for hazard pay and workplace PPE. The Paycheck Protection Program designed to loan money to keep small businesses afloat ran out of funds within days, with much of its money scooped up by larger companies. Meanwhile, a larger share of government stimulus money has gone to prop up Fortune 500 companies, and I completely understand the interest in keeping the economic machine moving as usual. But our obsession with growth has come at the expense of resilience. And I don’t expect anything to change.

Projecting onto the World

Sometimes I wish the world were flat. Not because I’m some kind of conspiracy wonk, just because I’m practical. Like your prototypical engineer, I love applying coordinate systems to, well, anything. A flat earth, easy, model it with a 2D Cartesian coordinate space. We could even transform it into polar coordinates, say, if we wanted to superimpose a tornado above the origin. I thought that was a legitimate approach when I started conceptualizing the tornado prediction model a few years ago, but it really isn’t that simple. In fact, I found some of the intricacies interesting. This post aims to break down my foray into GIS (geographic information systems) as my cartographic view evolved from “Let’s just call the earth flat” to “The ideal projection should minimize shape and size distortion to promote a balanced, unified worldview, but alas, such a projection is only possible in three dimensions.”

The earth’s geometry is complex, even if you ignore its movement through space and its relation to other celestial bodies. First off, it isn’t truly a sphere: it’s an oblate spheroid. With the centrifugal effect of planetary rotation bulging it outward, Earth’s diameter at the equator is about 43 kilometers greater than its diameter at the poles. Moreover, Earth’s axis of rotation differs from its magnetic axis, complicated by the fact that the magnetic poles are continuously on the move. This squished globe is our home, however, so cartographers and others have worked for centuries to develop accurate, catch-all representations of its irregular surface for navigational purposes and more.

Earth’s irregular form is governed by many forces. Source: ASU

Before projecting a coordinate system with accuracy, a reference model for the three-dimensional form of Earth, known as a geodetic datum, must be established. Traditionally, this was done based on simple ellipsoid geometry, as the spheroid parameters were periodically updated throughout the 19th and early 20th centuries with new geographic observations and theoretical advances. By the 1970s, the advent of GPS required an exactitude that motivated worldwide cooperation to come up with a standard geodetic reference. So naturally, we came up with two of them: WGS84 (the World Geodetic System, used by most of the world) and NAD83 (the North American Datum, used by the United States). Both are accurate on a scale of inches over North America, and remote sensing data can take either reference as its basis. It’s best practice to run a conversion algorithm to transform all of your spatial layers into the same geodetic reference to minimize any offset with your projections.
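With pyproj (one common tool for this, not necessarily what any given pipeline uses), that datum conversion takes only a couple of lines – here with an approximate Tulsa coordinate as a stand-in:

```python
# Example datum conversion with pyproj: reproject a NAD83 (EPSG:4269) coordinate into
# WGS84 (EPSG:4326). The Tulsa coordinate is approximate and used only for illustration.
from pyproj import Transformer

nad83_to_wgs84 = Transformer.from_crs("EPSG:4269", "EPSG:4326", always_xy=True)

lon_nad83, lat_nad83 = -95.99, 36.15   # roughly Tulsa, OK
lon_wgs84, lat_wgs84 = nad83_to_wgs84.transform(lon_nad83, lat_nad83)

print(f"NAD83: ({lon_nad83:.6f}, {lat_nad83:.6f})")
print(f"WGS84: ({lon_wgs84:.6f}, {lat_wgs84:.6f})")   # the shift is a meter or two at most
```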

While an error in geodetic reference may be minor, your choice of projection can have hugely important consequences. It is, of course, difficult to visualize the surface of a sphere in a 2D medium, so projections covering large areas sacrifice accuracy in the portrayal of size, shape, or both. Depending on your cartographic needs, there are hundreds of projections that people have developed over the years, many of them easily usable with open-source GIS functions. For my application, I require the spatial information in a square grid by latitude and longitude so that I can perform math on the grid cells. Despite the plethora of options at my disposal, I’ve found my wind modeling calculations easiest to visualize with a Mercator-type projection. Controversial history aside, I appreciate that the lat-lon coordinates are perpendicular and the relative shapes of surface features are preserved. Satellite remote sensing data is often released in spherical coordinates, and I use the USGS elevation product (a 1/3 arc-second resolution DEM) as a base layer without requiring a transformation.

The sailing hasn’t been smooth, however, mainly because the MRLC datasets – land cover, tree canopy, and ground imperviousness – that I use to describe the terrain come in an Albers equal area projection. To maintain 30-meter square grid cells, the grid only aligns with the compass rose on one finite line, in my case the 96th west meridian between the latitudes of 29.5° N and 45.5° N. Fortunately this line runs directly through Tulsa and Omaha, limiting distortion in Tornado Alley. To accurately serve areas significantly east and west of the meridian, I have been experimenting with open-source algorithms to transform the conical raster data into the preferred lat-lon coordinates. The transformation is not trivial, converting squares into slightly convex trapezoids of differing size and centroid position. While I’m sure most of these algorithms work, the challenge lies in implementing a transformation that includes the ability to crop the input data (so I never need to transform all 20GB of the United States at once), executes with optimal computational efficiency, and coexists with any user’s Python software environment. After a few rounds of trying and debugging, I think I have found a suitable solution, though not an elegant one. This solution involves three steps: reading a larger rectangular domain in raster coordinates, transforming the raster to WGS84 coordinates, then cropping to the requested lat-lon domain. The computational inefficiency ranges between 0 and 30% due to this location-sensitive domain mismatch, but I’m willing to live with that tradeoff rather than trying to code an efficient data management algorithm myself.
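For reference, here is a minimal sketch of that three-step read/warp/crop approach using rasterio as one possible toolset – the file path, padding, and bounds are placeholders, not my production code.

```python
# Minimal sketch of the three-step approach described above, using rasterio as one possible
# toolset: (1) read a padded rectangular window of the Albers raster, (2) warp it to WGS84
# lat-lon, (3) crop to the requested domain. Path, padding, and bounds are placeholders.
import numpy as np
import rasterio
from rasterio.warp import calculate_default_transform, reproject, transform_bounds, Resampling
from rasterio.windows import from_bounds

def extract_latlon_tile(src_path, lon_min, lat_min, lon_max, lat_max, pad_deg=0.25):
    with rasterio.open(src_path) as src:
        # Step 1: padded request bounds, expressed in the source (Albers) CRS
        padded = (lon_min - pad_deg, lat_min - pad_deg, lon_max + pad_deg, lat_max + pad_deg)
        src_bounds = transform_bounds("EPSG:4326", src.crs, *padded)
        window = from_bounds(*src_bounds, transform=src.transform)
        data = src.read(1, window=window)
        win_transform = src.window_transform(window)

        # Step 2: warp the windowed data to WGS84
        dst_transform, width, height = calculate_default_transform(
            src.crs, "EPSG:4326", data.shape[1], data.shape[0], *src_bounds)
        warped = np.empty((height, width), dtype=data.dtype)
        reproject(data, warped,
                  src_transform=win_transform, src_crs=src.crs,
                  dst_transform=dst_transform, dst_crs="EPSG:4326",
                  resampling=Resampling.nearest)

    # Step 3: crop the warped array to the exact lat-lon request
    col0, row0 = ~dst_transform * (lon_min, lat_max)   # upper-left corner
    col1, row1 = ~dst_transform * (lon_max, lat_min)   # lower-right corner
    return warped[int(row0):int(row1), int(col0):int(col1)]

# tile = extract_latlon_tile("nlcd_landcover_albers.tif", -97.5, 29.0, -96.5, 30.0)  # placeholder path
```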

I realize that the level of information went from basic to deep very quickly – that’s exactly how it happened for me. There’s such a wealth of GIS routines available for Python data processing, but the information is so decentralized that building my code has felt like a virtual treasure hunt. I’m still deep in the GIS developer’s rabbit hole, but at least I’m enjoying it down here. If you’d like to know more about the basics of projection, I love this video from Vox. And to play around with the shape distortions that accompany world map projections, I highly recommend this cool web application by Jason Davies. I’ll be back soon with pretty graphics to share!