Thursday, October 3, 2013

Why Do Joints Ache When It Rains?

Last year, 31 years of treating my body like a rental car finally caught up to me.  Unexpectedly though, it was one of my hip joints and not my liver that crapped out.  From what I understand, years of bike riding and a malformed femur bone had gradually mashed the cartilage in my hip joint into a very painful paste.  After trying a bunch of stuff that didn't work for six months, I eventually had to have arthroscopic surgery to do something about the bits of dangling flesh that were getting pinched whenever my hip joint moved.  One painful operation, three months of physical therapy, and an assload of painkillers later I'm mostly fine again, with one exception: the joint aches like hell when it's raining or about to rain.

If I understood the nice doctor correctly, the labrum (Latin for "horrible pain flesh") had to be sewn or trimmed back so the femur would stop squishing the edge of it.  Don't get old, kids.  

Everyone's heard old people complain about this, but old people love to complain about all kinds of things so I never took it that seriously.  Thinking about it a little bit though, it's very weird-- common sense would dictate that it's either the barometric pressure drop or spike in relative humidity you get with a rainstorm that causes the joint pain, but I'm at a loss to explain why either of those would do it.  There also seems to be a time-delay effect at work-- right now I'm sitting two stories underground, in a clean room with its temperature, pressure, and humidity regulated to the second decimal place, and somehow my hip still seems to know it's raining out.  

It doesn't make sense that humidity would have any effect on a joint-- your body is mostly water and your joint is buried deep in your body and behind your somewhat-impermeable-to-water skin, so it's hard to imagine a slight increase in the partial pressure of water vapor in the air being something a joint could "feel." Barometric pressure is a little bit more believable-- a change in the pressure differential between the inside and outside of your body could cause slight decompression or compression of the joint.  Not much, but since joint tissue is apparently made entirely of angry nerves, it might be enough to feel.

The first article in my Google search results for this question was on WebMD.  Given that, I expected it to blame weather-related joint pain on HIV, since that seems to be their explanation for most other things that can go wrong with your body, but the article was actually pretty decent.  Apparently, the jury is still out on exactly what causes weather-based joint pain; it's one of those things that has loads of anecdotal evidence but not enough incentive to actually bother studying in detail (kind of like baby colic, the subject of a future post).

The article pretty much agrees with me that it's got to be a pressure thing.  Barometric pressure is a pretty good leading indicator of weather; you can often detect a front coming through by a drop in pressure long before any physical indicators of it, like rain clouds, become obvious.  As mentioned above, a drop in ambient pressure is going to change the pressure differential between the inside of your body and the environment, which means your body will expand outward like a balloon very slightly (the extreme case of this would be explosive decompression in space, or that experiment where you stick a knotted balloon in a vacuum chamber).  In this case, the expansion is very, very minor.  Like I said though, damaged joints are basically tiny globs of connective tissue and hate; even a slight change that causes a tiny pressure to be put on the tissue is enough to cause noticeable discomfort.

One thing I've learned as the owner and operator of a bona fide Bad Hip is that once something makes it hurt, it's going to keep hurting for awhile.  Anything that causes joint pain is also going to cause inflammation of the tissue, which makes it come into more contact with the surrounding bone, which makes it hurt worse, which makes it get inflamed more, etc.  That's probably the explanation for why, even down here in my little climate-controlled hole in the ground, my leg still knows it's raining-- it had 12 hours at "rainstorm" pressure to get good and inflamed, and it's damned well not going to stop doing that just because the original cause is no longer present.  Thanks, leg!

WebMD has listed a number of helpful "solutions" to this problem, including "Apply a heating pad to the joint" and "Try to improve your mood".  While I'm sure sitting at work with a heating pad jammed down the front of my pants would help the pain, the resulting sexual harassment lawsuit would probably drop my mood down a couple of notches and I'd be back at square one, so that's not that useful.  I've found anti-inflammatories like aspirin help a little, but mostly the cure seems to be just waiting it out (luckily it doesn't seem to rain more than two or three times a year in Minnesota now).  In conclusion, being old sucks and if any of you have a bionic leg you're looking to sell, drop me a line.

Tuesday, July 16, 2013

Why Do Cats Like Fish?

It's common knowledge to anyone who gets most of their information about animals from cartoons that cats love eating fish.  Being the owner/butler of two of the ungrateful food vacuums myself, I can empirically verify that this is true at least half the time (the other cat is a special case, as he considers things like bread and rubber bands to be perfectly reasonable sources of food, so we're going to throw him out as a data point).  The non-stupid cat, while pretty enthusiastic about the normal spectrum of cat food in general, will abandon all subtlety and inhibition when there's any kind of fish or fish-derived food within smelling range.  Even though he's 15, arthritic, and generally moves the bare minimum necessary to prevent bedsores, he's more than happy to jump on the dining room table and run off with a piece of salmon bigger than his head if there's a three-second window of opportunity to do it.  He usually can't actually eat it, since he's trying to chew while involuntarily purring, but if you try to reclaim your dinner he'll act like you're tearing his legs off.  The point is that my cats are monsters, but the other point is that cats (at least some cats) are batshit for a food they have no evolutionary reason at all to be interested in.

Image of cat enjoying fish, taken from a popular series of nature documentaries.  Apparently mice like fish too, but that's beyond the scope of this post.  

Your basic Felis catus (and its closest wild relatives, bobcats and lynx and such) is built for hunting birds and small mammals on the ground and up trees in more wooded areas.  That's pretty standard across the whole cat family; with one odd exception (the fishing cat, one of those niche-evolved oddities that does exactly what its name advertises) cats are pretty exclusively terrestrial predators.  So domestic cats' similar enthusiasm for things like chicken makes some sense, since "poultry" to "unfortunate songbirds" isn't really a huge leap.  I imagine if we ate a lot of rabbit in our house the cats would be into that too (full disclosure: as a vegetarian, I mostly just get to point and laugh as the cats steal my wife's stuff).
A fishing cat doing what it does.  How it has the patience to fish without a six pack is not clear. 

Fish smell and taste quite a bit different from any normal cat prey though.  Moreover, most cats don't even like getting their paws damp, which tends to preclude any kind of major underwater hunting activity (there are exceptions, because cats are all special snowflakes, but they're not common).  So where did they pick up a taste for fish? 

The answer to this one is obvious when you think about domestic cats a little bit.  They're what wildlife experts like to call "opportunistic feeders" (a fancy way of saying "they'll try to eat any goddamn thing," a philosophy my dumber cat takes to its logical extreme) and they've also had thousands of years to get good at hanging around humans and scavenging their leftovers.  So in a really half-assed way, this is a natural-selection/co-evolution thing: the cats who figured out that all the weird-smelling meat the humans down by the docks were throwing away was both tasty and good for their coats had a basically limitless, zero-effort food source available and could reproduce to the point of ridiculousness.  There's a story I ran across a few times in my extensive research for this post about how this all started back in ancient Egypt, when the first crazy cat people domesticated the first cats by luring them into their houses with bits of fish.  That may be apocryphal (I couldn't verify it at even the Wikipedia level of rigor I usually use for these things) but it does make some sense; the cats who ended up domesticated were the descendants of the cats willing to be suckered into someone's hut by a chunk of some kind of meat they'd never seen before.  

It's worth mentioning that, since cats' digestive systems haven't really caught up with their relatively recent discovery that fish tastes awesome, fish really isn't that good for them.  Even ignoring all the mercury/lead issues that come with seafood in general, too much fish in the diet can leave cats susceptible to UTIs and other issues.  Speaking from experience, the combination of a cat, a UTI, and any furniture you aren't willing to burn is one of the worst things there is, so make sure your cats only get fish in moderation, no matter how much they try to convince you otherwise.  

It's easy to forget that humans have had both intentional and unintentional effects on the evolution of all kinds of other species, from corn all the way to house cats.  Cats liking to eat fish makes no sense if you think about it in terms of what they did before humans were around (hunt small land critters and stay the hell away from water) but all the sense in the world if you consider it in terms of how they've lived with us for the past several thousand years (as unapologetic, indiscriminate scavengers).  And speaking of "unapologetic, indiscriminate scavengers," I'm totally going to bust my cats' chops about how my species evolved their species when I get home later.  I'm sure that will make me feel much better when I'm sleeping in the living room because they successfully crowded me out of the bed again.  

Monday, April 22, 2013

What Is the Pressure in Outer Space?

For those of you getting ready to hit "Post" on a comment reading "It's a vacuum, dipshit" or something similar, give me at least a paragraph to explain this one a little.  One of the rarely-discussed fringe benefits of doing semiconductor engineering is that you end up learning a lot about vacuum systems.  For various reasons (most of which have to do with contamination) almost every process associated with making and looking at things on the micro/nano scale has to take place in vacuum.  That means all the machines you use for said processes are basically vacuum chambers.  Eventually, one of the pumps or gauges or other expensive things connected to a vacuum chamber will break, at which point you'll have to figure out how to fix or replace it.  As a result, spending any time at all working in nanofabrication or related fields will get you well-versed in the finer points of the care and feeding of vacuum systems in a hurry.

These two things are not, in fact, equivalent even though we use the same word for both of them (vacuum swiped from www.bubblews.com, space-scape from www.outerspaceuniverse.org).

One of the first things you learn about vacuum environments is that "vacuum," at least in the real world, is a catch-all term that encompasses many, many orders of magnitude of sub-atmospheric pressure, some of which are much harder to maintain than others.  The vacuum created by your vacuum cleaner ("low vacuum") is about 500-600 torr, which is only a bit lower than the standard atmospheric pressure of 760 torr (my guess is that my monster Dyson Animal does a bit better than that.  Damn thing can suck the fibers out of a carpet if you don't watch it).  On the other end of the scale, the vacuum you need to keep a high-voltage electron gun from arcing is on the order of 10^-9 to 10^-11 torr ("ultra-high vacuum", or UHV), pressures so low that it honestly starts to make more sense to count them in particles per volume or something.   Most vacuum equipment isn't quite that extreme; typical base pressures on semiconductor gear tend to be in the 10^-5 to 10^-7 torr ("high vacuum") range, which is still pretty low but achievable without weird things like ion getter pumps.
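Counting in particles per volume is easy enough to sketch with the ideal gas law, actually.  Here's a rough Python back-of-the-envelope (room temperature assumed; the regime labels and example pressures are my own picks, not official definitions):

```python
# Rough number densities for various vacuum regimes, via the ideal
# gas law: n/V = P / (k_B * T).  Assumes room temperature (293 K).
K_B = 1.380649e-23    # Boltzmann constant, J/K
TORR_TO_PA = 133.322  # 1 torr in pascals

def molecules_per_cm3(torr, temp_k=293.0):
    """Number density (molecules per cubic centimeter) at a given pressure."""
    pascals = torr * TORR_TO_PA
    per_m3 = pascals / (K_B * temp_k)
    return per_m3 / 1e6  # convert per-m^3 to per-cm^3

for label, p in [("atmosphere", 760), ("vacuum cleaner", 550),
                 ("high vacuum", 1e-6), ("UHV (electron gun)", 1e-10)]:
    print(f"{label:>20}: {p:g} torr ~ {molecules_per_cm3(p):.1e} molecules/cm^3")
```

The punchline is that even at UHV pressures you've still got millions of gas molecules bouncing around in every cubic centimeter, which is why "vacuum" is always a matter of degree.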

Vacuum system status screen for my electron beam lithography system at work.  Pressures range from an astronomical 10^-2 torr in the pre-vacuum area (G6), to about 10^-7 torr (G4) in the main chamber where the magic actually happens, all the way down to 10^-9 torr (G5) near the 100 keV electron gun.


As we know, space isn't empty, although it's pretty close.  Earth's atmosphere actually extends a good fraction of the way to the moon, if you define "atmosphere" as "higher pressure than interplanetary space" rather than "where I can breathe without dying," and by the same logic the sun's "atmosphere" is present throughout most of the solar system.  So space in earth orbit, on the moon, out in the solar system, and deep in interstellar/intergalactic space are all going to be "vacuum," but very different degrees of vacuum.

Let's start with low-earth orbit.  Low-earth orbit is generally defined as "high enough to not fall out of the sky that fast," or between 200-2000 km above Earth's surface.  You're generally considered to be outside the atmosphere in the conventional sense at this point.  From a vacuum-systems standpoint though, the pressure here isn't particularly low-- about 10^-7 torr, or on the outside range of what's achievable with a single good vacuum pump.  The technical term for this part of the sky is the "exosphere"--not empty space, but nowhere near enough pressure to even begin to call it the atmosphere.

The exosphere extends almost an entire earth-diameter from the surface, apparently (image swiped from www.universetoday.com)

Head out to the moon and you'll unsurprisingly see the pressure drop quite a bit.  The moon being about 400,000 km from Earth (sidenote: holy shit that's far. We really went there? That's bananas), it's pretty much completely free of Earth's atmosphere; surface pressure is going to range from 10^-10 (night) to 10^-11 (day) torr, or right about at the practical limit of artificial "ultra-high vacuum" conditions.  That's about standard for interplanetary space in the solar system too, although it can vary a bit with solar wind flux.

That's pretty low, but even between the planets we're still sitting in an almost insubstantial cloud of "solar atmosphere."  We know this because as soon as you get well outside of the solar system, pressure drops again, to about 10^-15 torr in the interstellar voids of the Milky Way galaxy.  That's substantially lower than anything we've been able to artificially create; even the best ultra-high vacuum systems top out at about 10^-12 mbar.

Even the "empty space" of the Milky Way has some density to it though.  Calling it an atmosphere is probably pushing it a little bit, but the pressure out in the intergalactic void, light-years away from pretty much anything that could cause matter to gravitationally accrete, is even lower, about 10^-17 torr.

So the "vacuum of space" can refer to about ten different orders of magnitude of pressure, depending on what you're calling "space."  It's important to note at this point that for anyone but a vacuum-systems engineer and/or pedant, there's no practical difference between 10^-7 and 10^-17 torr; each one will cause you to die of freezing/explosive decompression at exactly the same speed if you're exposed to it without protection.  Given that, it seems pretty reasonable to just call it all "vacuum," but it's interesting that stars and galaxies have atmospheres just like we do, provided you're extremely loose with the definition of "atmosphere."

Wikipedia has a neat table of pressure orders of magnitude, which is where I cribbed a lot of this stuff from.  You can figure out how astronomers were able to calculate, say, the pressure in intergalactic space by following the reference links.

Friday, February 1, 2013

What's the NFL Quarterback Passer Rating?

(NOTE: This post was written in early December 2012, but only recently edited to be publishable.  Most of the current-season NFL stats herein are therefore pretty nonsensical now.  Apologies in particular to Alex Smith, who I feel kind of bad for these days.)

I don't really like watching most sports.  Baseball, especially in a post-Moneyball world, holds all the excitement of watching a weighted random number generator update every few seconds for me.  Basketball at least moves quickly, but it's hard to get excited about a game where only the last few minutes seem to matter very much.  I don't have a snarky reason for not liking hockey (in principle it's fast, violent, and scoring is infrequent enough to matter) but I never quite got into it, despite basically learning to curse from watching my parents watch Flyers-Islanders games as a kid.

Football (note to non-American readers, if any: I'm not talking about that thing you play with the round ball and awesomely faked injuries.  Never got into that either, sorry) is a different beast.  It's violent as hell, deceptively strategic, and to a large extent totally unpredictable.  Anyone attempting to apply Moneyball-style "Sabermetrics" to the NFL runs into basically the same problem I did when I was trying to optimize my fantasy football team using the powers of math: there are so many variables, most of them coupled together, as well as totally random factors like injuries (which are much, much more frequent than in other sports), that we haven't yet built a supercomputer that can do anything useful with NFL stats analysis.  And quite honestly that's kind of awesome.

One downside of the fact that the NFL is completely impervious to rigorous statistical analysis is that it still uses a lot of the weird, probably anachronistic metrics that baseball has largely phased out.  For example, a key stat for receivers is number of catches, which is fairly meaningless without knowing how many times they were thrown catchable passes, which is going to depend in large part on the quarterback, who is relying on the offensive line to give him time to accurately throw the ball, etc etc.  Like I said, it's really hard to pull useful individual numbers out of a sport where everything is so interconnected, so we mostly just don't bother and count up what we can, whether it makes much sense or not, and hope averaging over a long enough period (usually the whole season) will get rid of most of the noise.  Which brings us to the weirdest artificial stat of all: the QB passer rating.

The QB passer rating was apparently designed in 1973 by Don Smith of the Pro Football Hall of Fame, who was trying to find a useful way to quantify QB performance holistically, since looking at a single statistic (completion ratio or points scored, for example) is a highly misleading way to evaluate all the things that make a good quarterback.   It's supposed to be an aggregate measure of quarterback performance, distilling completed passes, points scored, times sacked, interceptions, and other stuff into one convenient number that even ESPN reporters can understand.  If the number is higher, the quarterback is better.  It usually falls between about 50 and 150 and seems to correlate reasonably well with observed quarterback performance (Peyton Manning, for example, has a season rating of 108 as of this writing, while perennial object of ridicule in my household Jay Cutler is rocking about an 81), but sometimes seems to be way off (or maybe Alex Smith really is the third best quarterback in the NFL right now?  Who knows).  What you never see or hear about, probably because most sportscasters have no idea, is how this magic number is actually calculated.

I crammed the entire NFL passer rating calculation into one equation to give you an idea of just how complicated it is:

rating = 100 * (a + b + c + d) / 6, where:
a = 5 * (completions/attempts - 0.3)
b = 0.25 * (yards/attempts - 3)
c = 20 * (touchdowns/attempts)
d = 2.375 - 25 * (interceptions/attempts)

All four of those terms in the numerator are bounded, meaning they get replaced with zero if they go negative and aren't allowed to be higher than 2.375, just to add confusion.

Ignoring all the arbitrary weighting, the four relevant metrics here are completion percentage (comp/att), average yardage (yards/att), touchdown frequency (TD/att), and interception frequency (INT/att).  The first two metrics (completion percentage and average yardage) are pretty hard to argue with.  You obviously want a quarterback to complete most of his passes, and ideally you'd like those passes to be for as much yardage as possible.  Similarly, interception percentage is pretty important, since throwing lots of interceptions (hi Michael Vick!) is somewhat less than ideal.  TD frequency is the most problematic of the bunch.  Offenses with good running options are going to drive down the field a lot, get close to the goal line, and then run the ball into the end zone, meaning the QB doesn't get a touchdown credit no matter what he did on the drive (even if the QB is the one who runs the ball in, oddly enough).  So TD frequency is going to be biased pretty heavily towards QBs leading pass-centric offenses, although good quarterbacks are still going to generally throw more touchdown passes than bad ones.
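For the curious, the whole calculation fits in a few lines of Python, clamping included (a sketch; the stat names are mine):

```python
def passer_rating(comp, att, yards, td, ints):
    """Standard NFL passer rating: four per-attempt terms, each clamped
    to the range [0, 2.375], then averaged and scaled so the maximum
    possible rating works out to 158.3."""
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp((comp / att - 0.3) * 5)     # completion percentage
    b = clamp((yards / att - 3) * 0.25)   # average yardage
    c = clamp((td / att) * 20)            # touchdown frequency
    d = clamp(2.375 - (ints / att) * 25)  # interception frequency
    return (a + b + c + d) / 6 * 100

# A "perfect" game: 77.5% completions, 12.5 yds/att, TDs on ~11.9% of
# throws, zero picks.  Every term hits its 2.375 cap -> 158.3.
print(round(passer_rating(comp=31, att=40, yards=500, td=5, ints=0), 1))
```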

Sum it all up and you get a number between zero and 158.3, which is a perfect passer rating.  Interestingly, while you'd think that would mean perfect passing (all completions, no interceptions, lots of yards and touchdowns) it's actually almost real-world attainable, thanks to the way each stat is weighted; a perfect rating corresponds to a lower bound of 77.5% completion percentage, 12.5 yards per attempt, and touchdowns on 11.875% of passes (and zero interceptions, obviously).  That's by design, since "perfect" is completely unachievable in some categories (100% of your passes are for TDs?) and just nonsensical in others (an average passing yardage of...INFINITY?); it was set up so the numbers would be low enough to be theoretically achievable, but still high enough that it's unlikely that a quarterback could hit the ceiling on any of them, let alone all four at once.  Unsurprisingly, that's been proven wrong; there are lots of QBs who have exceeded the "perfect" threshold on one or more of these stats over one game, and perfect single-game ratings have been achieved 41 times at last count, most recently by Robert Griffin III against my own perennially useless Philadelphia Eagles in November of 2012.
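Those "lower bound" numbers aren't arbitrary, by the way; they fall straight out of the clamping.  Set each term equal to its 2.375 cap and solve (trivial arithmetic, sketched here in Python):

```python
CAP = 2.375  # the ceiling on each term of the passer rating formula

# Invert each term at its cap to get the minimum stat line that still
# produces a perfect 158.3 rating:
comp_pct = CAP / 5 + 0.3   # from: 5 * (x - 0.3)  = 2.375
yds_att = CAP / 0.25 + 3   # from: 0.25 * (x - 3) = 2.375
td_pct = CAP / 20          # from: 20 * x         = 2.375

print(f"{comp_pct:.1%} completions, {yds_att} yds/att, {td_pct:.3%} TDs")
```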

There are some pretty obvious problems with the passer rating formulation, as you'd probably guess.  Conspicuously left out of the numbers are sacks, fumbles, and any kind of rushing contribution, meaning "running" QBs like Cam Newton and RGIII are always going to get undervalued in the passer rating (it should be noted at this point, to be fair, that it is called a "passer rating" and not a "quarterback rating," so only taking passing-related stuff into account is technically kosher).  Similarly, getting sacked a lot or fumbling the ball won't hurt your rating (since there was no pass attempt), even though a QB that's constantly getting sacked and/or fumbling the ball (hi Michael Vick!) is not particularly useful.  From a more numerical standpoint, making it possible to "max out" the passer-rating contribution of any of the four statistics it measures is problematic.  Take, for example, two QBs with identical stats, except that QB #1 has an average of 15 yards per attempt and QB #2 has an average of 12.5.  Since the yardage term has to max out at 12.5 yards per attempt, they'd both have the same passer rating; that extra 2.5 yards per attempt that QB #1 has effectively doesn't count.  Luckily it's next to impossible to string out numbers like that over a whole season; while single-game perfect passer ratings have been achieved fairly often, the highest season-length passer rating ever was 122.5 (Aaron Rodgers 2011, if you were wondering).  The last (and presumably only) time anyone has maxed out even a single term in the passer rating equation was in 1943, when Sid Luckman posted a 13.9 yards/completion ratio.  So taken over a whole season, the "ceiling" values for all four terms are apparently high enough to not matter, even though they can underrate really good single-game performances.  Why the "perfect" number was left at 158.3, instead of just being normalized to 100, is completely beyond me; it doesn't really matter, it's just weird is all.

So the passer rating is kind of an imperfect statistic.  In the only way it matters at the end of the day (correlating to whether or not QBs can win football games) though, it's pretty good; in 2010, 80% of the NFL games played were won by the team with the higher-rated quarterback.  There have been some attempts to tweak the formula, most famously with ESPN's largely-ignored total quarterback rating, an attempt to take all the situational factors that could possibly affect a QB's stats into account.  It was insanely complex, drew on lots of weird hard-to-access stats like pass travel distance and something called a "clutch factor", was proprietary (ESPN never revealed exactly how it was calculated), and wasn't appreciably better than passer rating at predicting wins.  In principle it wouldn't be too hard to just tweak the passer rating formula to, say, count rushes as pass attempts, rushing yards as pass yards and rushing touchdowns as touchdown passes (that's not perfect but it's a starting point), but to my knowledge nobody's done it, or if they have it hasn't caught on.

The entire clusterfuck that is the passer rating calculation is an almost perfect illustration of the mind-bending difficulty of applying statistical analysis to football.  It's an oddly-calculated, arbitrarily-aggregated statistic that has some pretty major omissions and issues, particularly when used to calculate single-game performance.  It also predicts quarterback performance fairly well when integrated over a good number of games (a season, say) and it's simple enough to calculate that anyone with a pen and calculator (and access to Wikipedia for the formula, probably) can do it.  That makes it a pretty good metric, especially compared to previous ways quarterback performance was measured, and the fact that ESPN's attempt to take every single possible contributing factor of quarterback performance into account didn't produce appreciably better results speaks volumes for its efficacy.  That's the beauty of statistics; you don't need to know everything about the system you're analyzing if you can average over enough data and come up with a way to measure the results that helps you predict future behavior.  You don't need to know the thumb position, angular velocity, and air turbulence during 100 coin flips (basically the equivalent of what ESPN's system was trying to do) to know that about 50 of them will end up heads, to use a more concrete example.

Friday, September 28, 2012

What's the Difference Between Regular and Diesel Engines?

I've never really owned a car.  I guess I technically own half of my wife's temperamental 2002 Nissan Sentra now, since I'm pretty sure that's what marriage means, but I've never actually gone out of my way to own and operate a motor vehicle.  As a result, I have an almost childishly simplistic understanding of what the hell makes cars go; I know there's an "engine" inside, which burns "fuel" to turn the "wheels," but saying I'm hazy on the specifics is probably being unfair to haze.  Unsurprisingly then, the question of what makes a diesel engine different from a regular old engine managed to not occur to me for over 32 years, for the same reason that you've probably never wondered what makes a mountain gorilla different from a lowland gorilla (apologies to gorilla owners and/or gorillas among my readership, obv.).  It wasn't until a week ago while I was absently staring at a gas station sign (note to self future possible blog entry: why the 9/10 of a cent thing?) that it occurred to me that some cars (and, apparently, all big trucks) need a special kind of fuel called "diesel," and that why that is exactly was almost certainly a Thing I Should Probably Know.  To the wikipedias!

What secrets lurk in his comically oversized head?
A happy side effect of figuring out why diesel engines are different from regular engines was that I had to finally figure out how regular engines work in the process.  A happy side effect of that side effect was that I learned that the basic spark-ignition internal combustion engine found in most cars is called an "Otto-cycle engine," which made me laugh because I am basically a seven year old with an advanced degree. Anyway, Otto-cycle engines work pretty much how I'd always guessed they do.  You fill a combustion chamber above a piston with some mixture of fuel and air, then apply a high voltage across a spark plug to cause a small electrical arc, which makes the fuel/air mix explode and pushes the piston down.  If you have a bunch of pistons in a row and you time the explosions right, you can make them do useful work, like spin a drive shaft.    I built a potato gun in college that worked on basically the same principle, although the "useful work" in question was "firing potatoes at the grad student dorms until they called the cops on us."

The Diesel engine cycle (coincidentally invented by Rudolf Diesel in 1892) is actually quite a bit more clever, at least from an engineer's standpoint.  It's the same basic process as a normal engine cycle (ignite some fuel, push a piston with the explosion, move some shit), but instead of relying on a spark to do the igniting you just compress the air in the combustion chamber and use the resulting heat to light the fuel.

That's a crap explanation, so here's a more detailed one.  Basically, what we're doing here is exploiting the ideal gas law, which you probably encountered in high school chemistry at some point (PV=nRT, remember?), plus the fact that compressing a gas does work on it.  Squeeze air into a much smaller volume fast enough that the heat has nowhere to go, and both its pressure and its temperature shoot up.  So with the piston fully extended from the combustion chamber, you're going to fill the chamber with air.  When the piston moves back up into the chamber (via the motion of the engine cycle itself), it's going to effectively make the chamber much smaller, compressing the air inside.  The compression heats the air, until the temperature eventually (at pressures of about 600 psi) exceeds the auto-ignition temperature of the vaporized fuel.  Once you've reached that threshold (which if you've designed the engine properly is also the top of the piston's cycle), injecting some vaporized fuel into the chamber will cause it to spontaneously ignite, pushing the piston out of the chamber with no external ignition system needed.  You've replaced spark plugs with the laws of physics essentially; like I said, it's clever.
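To put rough numbers on it, here's the textbook adiabatic-compression calculation in Python.  Fair warning that this is a sketch under idealized assumptions: ideal diatomic gas, zero heat loss, and a 15:1 compression ratio I picked as typical-ish for a diesel.

```python
# Back-of-the-envelope diesel compression numbers.  Assumes an ideal
# diatomic gas (gamma = 1.4), no heat loss during the stroke, 20 C
# intake air, and a 15:1 compression ratio (a representative guess).
GAMMA = 1.4    # heat capacity ratio for air
T1_K = 293.0   # intake air temperature, kelvin (20 C)
P1_PSI = 14.7  # atmospheric pressure
RATIO = 15.0   # compression ratio

# Adiabatic compression relations: T2 = T1 * r^(gamma - 1),
# P2 = P1 * r^gamma
t2 = T1_K * RATIO ** (GAMMA - 1)
p2 = P1_PSI * RATIO ** GAMMA

print(f"compressed air: about {t2 - 273:.0f} C at {p2:.0f} psi")
# Diesel fuel auto-ignites at roughly 210 C, so the compressed air is
# far more than hot enough to light the injected fuel on its own.
```

The pressure comes out in the same ballpark as the ~600 psi figure above, and the air temperature lands several hundred degrees past what the fuel needs to ignite.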

I'm lazy today so I'm just cold swiping images (this is from automobilehitech.com).  It's a pretty good cross-sectional illustration of the combustion chamber and piston at each point in the Diesel cycle. 
Obviously there's a bit of a chicken/egg problem here: if the ignition cycle relies on the engine already running with enough power to compress the air to its critical pressure, how do we actually start the engine?  If you know anything about cars right now you're probably saying "duh," but as I discussed at length earlier this is all fairly new to me.  Anyway, you start the engine the same way you start a regular engine: by using an electric starter motor to spin the engine until it's moving fast enough for the ignition cycle to take over.  A disadvantage of using heated air vs. spark plugs as your fuel ignition system is that cold weather, which results in both a lower starting air temperature and a cold engine block around the combustion chamber, can make it much harder to get the engine started.  This is usually worked around with electric heaters-- glow plugs inside the combustion chamber or engine block heaters-- although you can get cute and do stuff like use a fuel with a lower auto-ignition temperature than diesel fuel (ether is popular if you can get it) to get the engine running until it's up to operating temperature.

So in the Diesel engine, we've got a clever but functionally identical analog to the Otto-cycle design, which raises the question: why use one over the other?  The big advantage of Diesel engines is efficiency; because you have to compress the combustion chamber quite a bit more in order to get the fuel-air mixture to spontaneously ignite, Diesel engines have much higher compression ratios than standard engines.  Higher compression ratio means more piston travel distance, which means more work done per combustion cycle; the end result is that Diesel engines can be up to 50% more fuel-efficient than their Otto-cycle counterparts.  The downside is that all that extra piston travel makes the engine block much bigger and heavier, which can entirely negate the extra engine efficiency in smaller vehicles where the engine is a large fraction of the total weight.  As a result, the places you'll usually see Diesel engines are in applications where engine weight isn't a huge deal-- big trucks, ships, tanks, and stationary stuff like generators.  They do show up in cars (particularly in Europe), but maintaining an efficient engine weight in smaller vehicles generally means sacrificing some power, although the fact that you don't have an explosive fuel-air mix in the chamber until just before combustion time means you can play all kinds of games with turbo-charging the compression cycle via increased pressure without worrying about the engine blowing up on you.
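To see how compression ratio drives efficiency, here's a sketch using the ideal Otto-cycle efficiency formula.  (The full Diesel-cycle expression has an extra cutoff-ratio term that knocks the number down a bit, and real engines fall well short of either ideal, but the compression-ratio trend is the same; the ratios below are just typical assumed values.)

```python
# Ideal-cycle thermal efficiency as a function of compression ratio:
#   eta = 1 - r**(1 - gamma)
# Real engines are far less efficient than this ideal, but the trend
# holds: more compression, more of the fuel's energy turned into work.

GAMMA = 1.4  # heat capacity ratio of air

def ideal_efficiency(compression_ratio):
    return 1 - compression_ratio ** (1 - GAMMA)

# Typical compression ratios: ~10:1 for a gasoline engine, ~18:1 for a diesel.
for r in (10, 18):
    print(f"r = {r}:1 -> ideal efficiency {ideal_efficiency(r):.0%}")
```

The gap between the two ideal numbers is where the Diesel engine's real-world fuel economy advantage comes from.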

The other huge advantage of Diesel engines is the fuel itself.  It's based on petroleum, like gasoline, but unlike gasoline it's pretty much just distilled petroleum; you don't have to do a bunch of stuff to it afterward to turn it into a usable fuel, which means it's much cheaper and cleaner to make.  Petroleum distillate also has a way lower vapor pressure than gasoline, meaning that if you spill some you don't have that whole rapid-outgassing-of-explosive-fumes issue you do with regular gasoline.  Unless you're committing arson (note to self: if you ever need to burn down a building, don't buy diesel fuel even if it's cheaper) that's generally a good thing.  It's also relatively easy to make a diesel fuel substitute ("biodiesel") from all kinds of organic material, and unlike gasoline substitutes like ethanol you can pretty much just run pure biodiesel in a stock diesel engine without issues.  As people come up with increasingly clever and efficient ways of synthesizing biodiesel on useful scales, that one's going to become a big deal.

All of my gearhead/engineer friends are always gushing about Diesel engines, and now I understand why.  Being able to yank out the entire spark ignition system and replace it with a clever application of the laws of physics is the kind of elegant solution to a problem that engineers spend their whole careers trying (and often failing) to come up with; the fact that it actually results in a more efficient engine with a more flexible fuel system is just gravy, honestly.  And since said gearhead friends are probably reading this, I'm well aware that I glossed over a lot of subtleties of engine design and optimization here, so feel free to point them out in the comments.

Tuesday, May 1, 2012

How Does Radiocarbon Dating Work?

Like most things in this world, I never gave radiocarbon dating (the method archaeologists use to determine the approximate age of stuff they dig out of the ground) a whole lot of thought.  It seemed obvious enough how it worked; some small percentage of the carbon in everything is apparently the unstable carbon-14 isotope.  That isotope has a finite, known half-life; over time, it'll decay away (back into nitrogen, it turns out), leaving the stable carbon-12 and carbon-13 isotopes behind.  The result of all this is that you should be able to estimate the age of things by the amount of carbon-14 left in them.  Assuming you can avoid sample contamination by "modern" carbon-bearing materials (which I would guess is really hard), radiocarbon dating is apparently pretty accurate for measuring the ages of things.  Or that's what I assumed; like I said, since it doesn't really have anything to do with my field, I didn't give it a whole lot of thought.

After writing that post a few weeks ago on why the earth's crust is mostly silicon though, I realized something: basically all the carbon in the universe was formed in supernovas during the first couple of billion years after the Big Bang.  That means that the carbon in atmospheric CO2, animal remains, and anything else you might find on Earth should be 1) pretty much all the same age, give or take a billion years, and 2) significantly older than the Earth itself.  How can you use atoms from the primordial universe to determine the relative ages of things from the (fairly short, on the cosmic scale) history of planet Earth?

Per usual with this kind of thing, I started out with a couple of entirely wrong assumptions.  The first was that all the carbon on earth, in all its various isotopes, was Big Bang detritus.  In fact, it turns out that carbon-14 (C-14 from here on out, because typing is hard) is continuously produced in the atmosphere via interaction between atmospheric nitrogen and the high-energy cosmic rays that are constantly bombarding the earth.  The cosmic rays produce neutrons in the upper atmosphere, which can replace a proton in one of the nitrogen atoms floating around up there.  The result is an atom with the same atomic weight as nitrogen (because protons and neutrons weigh about the same) but an atomic number (proton count) reduced by 1, which if you check the periodic table corresponds to our unstable C-14 atom.  Said C-14 atom will eventually combine with oxygen to form atmospheric CO2, which is then taken up by plants, etc etc, until basically every living thing has some small quantity of the stuff in it. 

The half-life of C-14 is surprisingly short, about 5,730 years.  That means that the stuff would be long gone from the Earth if the supply of it wasn't being constantly re-upped by cosmic ray bombardment, which solves the "all carbon is as old as the universe" problem.  It does make things tricky though, because now you've got two questions to answer when you want to carbon-date something:

1) How much of the C-14 in the thing you're dating has decayed?

2) How much C-14 was there to begin with, and how old was it?

You need at least a reasonable approximation of both of those numbers to accurately determine the age of something. 

The critical concept here is that a living organism is constantly refreshing its internal supply of C-14.  Whether it's a plant taking up CO2 as part of its respiration cycle, an animal eating that plant, or a bigger animal eating that animal, atmospheric CO2 is constantly making its way into the internals of every living thing on Earth.  As a result, the ratio of C-14 to C-12 in any living thing is going to be approximately identical to the ratio in the atmosphere, which we can treat as constant over time.  When an organism dies, it stops doing all the things that would normally refresh its supply of C-14 (eating and breathing, for example), making death a handy "t=0" point for carbon dating.  Basically, if you know the half-life of C-14 (which we do), you can approximate the time of death of any once-living thing (or anything that was originally a part of a living thing, like fabrics) by comparing the ratio of C-14 to C-12 atoms in a sample of it to the atmospheric ratio.  Since half-life is a relative measurement (it tells you the ratio of the current amount of isotope to the initial amount at a given time), the age of the C-14 already in the body doesn't matter.  It's worth mentioning that carbon dating only works on discrete ex-lifeforms; trying to carbon-date something like soil or peat will just give you a mess, since there's bits of things that all died at totally different times in there.
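The arithmetic behind that comparison is just exponential decay; here's a minimal sketch (the function name is mine, and it assumes a constant atmospheric ratio, per the above):

```python
import math

# Exponential decay gives N(t)/N0 = 0.5 ** (t / half_life); solving for t:
#   t = -half_life * log2(N / N0)
# where N/N0 is the sample's C-14/C-12 ratio divided by the atmospheric
# C-14/C-12 ratio (treated as constant over time, per the text).

C14_HALF_LIFE_YEARS = 5730

def age_from_ratio(remaining_fraction):
    """Years since death, given the fraction of the original C-14 left."""
    return -C14_HALF_LIFE_YEARS * math.log2(remaining_fraction)

print(age_from_ratio(0.5))    # one half-life: 5730 years
print(age_from_ratio(0.25))   # two half-lives: 11460 years
```

Note that only the *ratio* appears in the formula, which is exactly why the age of the C-14 atoms themselves drops out of the problem.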

The most interesting consequence of C-14's much-shorter-than-I-thought half-life is that it sets a pretty firm limit on the maximum carbon-dateable age of stuff.  The older something is, the more of the C-14 in it has already decayed, and the more difficult accurately measuring the remaining concentration of it will be.  In practice, you can carbon-date things back to about 10 C-14 half-lives, or ~60,000 years. At that point, the C-14 ratio has decayed to less than 1/1000th of its initial value; as you approach that limit, measurements get much more difficult and susceptible to contamination too.  So really, radiocarbon dating is only useful for figuring out the ages of once-living things that fall roughly within the blip of time that humans have been around.  Whether because of pop-cultural portrayal or just the fact that I'm naturally incurious, I'd always just assumed carbon dating would give you the age of anything you wanted, regardless of composition or oldness.
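As a quick sanity check on those numbers (just the half-life arithmetic, nothing fancy):

```python
# The ~60,000-year practical limit: after 10 half-lives, the remaining
# C-14 fraction is 2**-10, i.e. under a thousandth of the original.

C14_HALF_LIFE_YEARS = 5730
halflives = 10

print(f"Practical age limit: ~{halflives * C14_HALF_LIFE_YEARS} years")
print(f"C-14 fraction remaining: {0.5 ** halflives:.6f}")
```

Ten half-lives of 5,730 years works out to 57,300 years, which is where the ~60,000-year figure comes from.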

(Wikipedia, as usual)

Friday, April 20, 2012

Will the Chevrolet Volt Save You Money?

EDIT 4/23/12: I made a pretty huge mistake here in not taking the relative efficiencies of gas vs. electric car motors into account and just assuming they were roughly equal.  In reality electric motors are about a factor of four better.  That completely changes the conclusion of the post, which originally had the Volt costing about as much as a regular car to drive around.  I've edited heavily to take all that into account.  

The Chevrolet Volt, and other plug-in hybrid vehicles like it, are marvels of engineering that should have all self-respecting nerds salivating for them to come down to normal-people prices.  Their genius is in the way they get around what's always been the main issue with electric cars: the energy density (the amount of energy you can store in a given weight or volume) of batteries, while it sucks less than it used to, has never been able to match the energy density of gasoline.  As a result, electric cars either need to carry several times the mass of a full gas tank in storage batteries (almost always impractical) or have their range drastically compromised.  The Volt and its ilk work around this by adding a gasoline engine to basically act as a battery charger on the road; it doesn't do much more than burn (high-energy-density) gasoline to keep the (low-energy-density) battery topped up, while the battery powers the drivetrain via electric motor.  This is different from parallel-hybrids like the Prius, which freely switch between using their gas and electric motors to power the drivetrain, and in principle should be able to seriously increase gas mileage to well above what even current hybrids are capable of.  It does in fact succeed in doing this, as this photo from James Fallows' blog shows:

For reference, that's approximately Philadelphia to Denver on a single tank of gas.

That's bananas, right?  Unfortunately it's not that simple.  The Volt ain't magic; it still takes the same amount of energy to push it that 1389.4 miles as it does a regular car of the same size.  The difference is that a lot of that energy is coming from electricity (via charge-ups between drives) rather than gasoline.  Since electricity isn't generated from nowhere and definitely isn't free, miles per gallon of gasoline is an extremely deceiving metric to use when evaluating the efficiency of plug-in hybrids; we need to think of a way to take that generated electricity into account too.  I decided to figure out a way to do this and, in the process, find out (roughly) how much money you'd actually save by driving a Volt vs. a regular, decently-efficient car.

I made a couple of simplifying assumptions in figuring all this out:
  • I assumed the car usage summarized in the photo above was typical.  From the numbers I've heard thrown around for plug-in hybrids, it's probably close.
  • My "normal" car was assumed to have an average gas mileage of 30 mpg over the same 1389.4 miles that Volt drove.  That's a reasonable assumption for a well-built traditional car of the Volt's size.  I also assumed that it had roughly the same size, weight, drag coefficient, etc. as the Volt, so the energy used to move both of them would be close to identical.
  • I assumed that the efficiency of charging the Volt's battery with a gasoline motor is the same as the efficiency of moving the regular car with a gasoline motor. The Volt is probably slightly more efficient for various reasons. 
  • I assumed that all the electric power used to push the Volt came from charging it off a residential power grid.  In real life some of it will come from regenerative braking, but probably not enough to throw off our calculations by more than a couple of percentage points.  
We need to know a few numbers before we can get started:
  • The energy you get from burning gasoline is, according to Wikipedia, about 34 MJ/liter.  In American units, that converts to about 35.75 kW-hr/gallon.
  • The average US price of a gallon of gas, at the time of this writing, was about $3.88.  We can calculate the energy cost of burning gasoline to be about 10.9 cents per kW-hr, using the previous number. 
  • Likewise, the average national cost of 1 kW-hr of electricity in 2010 (the most recent data I could find with a quick Google search) was about 11.5 cents.
  • The energy efficiency of a good internal combustion engine is about 20%; electric motors run closer to 80%.  In other words, you can get an electric motor to do the same amount of work as a gasoline motor for 1/4 the input energy.  

First we need to calculate the total energy used to move the car over our representative 1389.4 miles.  If we assume it's the same for both our Volt and normal car, and that our normal car can average 30 mpg efficiency, then we know the normal car will need 46.313 gallons of gas to go that distance.  Since we know how much energy is contained in a gallon of gas now, we can work out that the gas-powered car will use about 1656.16 kW-hrs of energy during the drive.  Since the engine is only 20% efficient, only about 331 kW-hrs of that were actually needed to move the car; the rest gets lost as heat, noise, etc.

All of that energy came from burning gasoline for the traditional car, so we can pretty easily calculate the total cost of the drive by using the price of gas and the energy density of gas: about $180.52.

For the Volt, things are slightly more complicated.  The electric motor powering the drivetrain is about 80% efficient, so dividing 331 kW-hrs by that gives us 414 kW-hrs, the input energy needed to move the car.  Since we know we burned 10.4 gal of gasoline during the trip, we can calculate that about 372 kW-hrs was used by the (20% efficient) gasoline engine.  If we assume all of that was used to charge the battery (in reality some would have been used to power the drivetrain, but we'll ignore that for simplicity) that's about 75 kW-hrs of battery charge from burning gasoline.  The rest of the battery's charge would have come from the power grid; we can calculate that by subtracting the gas engine's contribution to charging the battery from the total 414 kW-hr charge.  The total energy cost can then be calculated as follows, by adding the battery-charging contributions of the gasoline engine and the power grid:

Cost = (372 kW-hrs)*($0.109/kW-hr) + (414-75 kW-hrs)*($0.115/kW-hr)

The total cost comes out to about $79.50, more than a factor of two less than the conventional car.
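Here's the whole calculation in one place as a Python sketch.  All the numbers come from the assumptions above; the last digit or two wobbles a bit depending on where you round, but the conclusion doesn't change:

```python
# Reproducing the cost comparison above, using the post's numbers.
MILES = 1389.4            # trip length from the Fallows photo
MPG_NORMAL = 30           # assumed mileage for the conventional car
GAS_KWH_PER_GAL = 35.75   # energy content of gasoline
GAS_PRICE = 3.88          # $/gallon, national average at time of writing
GRID_PRICE = 0.115        # $/kW-hr, residential electricity
GAS_ENGINE_EFF = 0.20     # internal combustion engine efficiency
MOTOR_EFF = 0.80          # electric motor efficiency
VOLT_GAS_USED = 10.4      # gallons burned by the Volt over the trip

# Conventional car: all of the energy comes from gasoline.
gas_burned = MILES / MPG_NORMAL                    # ~46.3 gallons
cost_normal = gas_burned * GAS_PRICE               # ~$180

# Work actually needed to move either car (same by assumption):
work_kwh = gas_burned * GAS_KWH_PER_GAL * GAS_ENGINE_EFF   # ~331 kW-hr

# Volt: the 80%-efficient motor needs work/0.8 kW-hr of battery charge...
battery_kwh = work_kwh / MOTOR_EFF                 # ~414 kW-hr
# ...some from the onboard gas engine, the rest from the grid.
from_engine_kwh = VOLT_GAS_USED * GAS_KWH_PER_GAL * GAS_ENGINE_EFF  # ~75
from_grid_kwh = battery_kwh - from_engine_kwh      # ~340 kW-hr
cost_volt = VOLT_GAS_USED * GAS_PRICE + from_grid_kwh * GRID_PRICE

print(f"Conventional: ${cost_normal:.2f} ({cost_normal / MILES * 100:.1f} cents/mile)")
print(f"Volt:         ${cost_volt:.2f} ({cost_volt / MILES * 100:.1f} cents/mile)")
```

(The gasoline contribution here is priced directly by the gallon rather than per kW-hr, which is why the totals land a few dimes off the figures in the text.)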

So that's about 13 cents/mile to drive the conventional car, vs. 5.7 cents/mile for the Volt.  That's a pretty big difference; even with the money you're paying to charge the car, it's still costing you about half as much to drive your Volt around as it would a similarly-sized conventional car.  To put that in perspective, if you drive 20,000 miles in a year, the Volt will save you almost $1500 annually.  That's a long way from "paying for itself," at least at current prices (a Volt will run you almost twice as much as a similar conventional car), but if plug-in hybrids like the Volt are anything like the current generation of hybrids they should fall in price pretty quickly over the next few years. 

So yes, the Volt will save you quite a bit of gas money, even though it's far from the free lunch that the mpg numbers being thrown around make it look like.  From a cost standpoint, you could assume it's equivalent to a hypothetical gas car that got around 70 mpg; that's not exactly Philly to Denver on a tank of gas, but it's pretty good.   The cost savings will probably only get bigger, as gas prices continue to rise faster than electricity prices, and in places like Europe where gas isn't artificially cheap the Volt is already close to paying for itself in a couple of years.  
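That equivalent-mpg figure is just the trip cost converted back into gallons at the pump price; a quick sketch, using the numbers from above:

```python
# Cost-equivalent mileage: what mpg a pure-gas car would need to match
# the Volt's total energy cost for the trip.
TRIP_COST = 79.50   # Volt's total energy cost from the calculation above
GAS_PRICE = 3.88    # $/gallon
MILES = 1389.4

equivalent_gallons = TRIP_COST / GAS_PRICE   # ~20.5 gallons
equivalent_mpg = MILES / equivalent_gallons
print(f"Cost-equivalent mileage: {equivalent_mpg:.0f} mpg")
```

It comes out just shy of 70 mpg, which is where the "around 70" figure comes from.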

It's worth mentioning that this is only looking at raw fuel cost; there are a lot of other advantages (less pollution and reduced dependence on foreign oil are two big ones) to using centrally-generated electric power vs. burning gasoline to power our cars.  It's far from a perfect solution to the transportation problem the US is going to have to deal with when our current era of cheap gas ends, but it's one of the best things anybody's come up with so far.