Friday, September 28, 2012

What's the Difference Between Regular and Diesel Engines?

I've never really owned a car.  I guess I technically own half of my wife's temperamental 2002 Nissan Sentra now, since I'm pretty sure that's what marriage means, but I've never actually gone out of my way to own and operate a motor vehicle.  As a result, I have an almost childishly simplistic understanding of what the hell makes cars go; I know there's an "engine" inside, which burns "fuel" to turn the "wheels," but saying I'm hazy on the specifics is probably being unfair to haze.  Unsurprisingly then, the question of what makes a diesel engine different from a regular old engine managed to not occur to me for over 32 years, for the same reason that you've probably never wondered what makes a mountain gorilla different from a lowland gorilla (apologies to gorilla owners and/or gorillas among my readership, obv.).  It wasn't until a week ago while I was absently staring at a gas station sign (note to self, future possible blog entry: why the 9/10 of a cent thing?) that it occurred to me that some cars (and, apparently, all big trucks) need a special kind of fuel called "diesel," and that why that is exactly was almost certainly a Thing I Should Probably Know.  To the wikipedias!

What secrets lurk in his comically oversized head?
A happy side effect of figuring out why diesel engines are different from regular engines was that I had to finally figure out how regular engines work in the process.  A happy side effect of that side effect was that I learned that the basic spark-ignition internal combustion engine found in most cars is called an "Otto-cycle engine," which made me laugh because I am basically a seven-year-old with an advanced degree.  Anyway, Otto-cycle engines work pretty much how I'd always guessed they do.  You fill a combustion chamber above a piston with some mixture of fuel and air, then apply a high voltage across a spark plug to cause a small electrical arc, which makes the fuel/air mix explode and pushes the piston down.  If you have a bunch of pistons in a row and you time the explosions right, you can make them do useful work, like spin a drive shaft.  I built a potato gun in college that worked on basically the same principle, although the "useful work" in question was "firing potatoes at the grad student dorms until they called the cops on us."

The Diesel engine cycle (patented, coincidentally enough, by a guy actually named Rudolf Diesel in 1892) is actually quite a bit more clever, at least from an engineer's standpoint.  It's the same basic process as a normal engine cycle (ignite some fuel, push a piston with the explosion, move some shit), but instead of relying on a spark to do the igniting you just compress the air in the combustion chamber and use the resulting heat to light the fuel.

That's a crap explanation, so here's a more detailed one.  Basically, what we're doing here is exploiting the way gases heat up when you compress them, which falls right out of the ideal gas law you probably encountered in high school chemistry at some point (PV=nRT, remember?).  Squeeze a gas into a smaller volume fast enough that the heat has nowhere to go, and both its pressure and its temperature shoot up.  So with the piston fully extended from the combustion chamber, you're going to fill the chamber with air.  When the piston moves back up into the chamber (via the motion of the engine cycle itself), it's going to effectively make the chamber much smaller, compressing the air inside.  The compression heats the air, eventually (at pressures of about 600 psi) past the auto-ignition temperature of the vaporized fuel.  Once you've reached that threshold (which if you've designed the engine properly is also the top of the piston's cycle), injecting some vaporized fuel into the chamber will cause it to spontaneously ignite, pushing the piston out of the chamber with no external ignition system needed.  You've essentially replaced spark plugs with the laws of physics; like I said, it's clever.
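If you want to convince yourself the numbers work out, here's a back-of-the-envelope sketch in Python.  The compression ratio, intake temperature, and auto-ignition temperature are ballpark figures I'm assuming for illustration, not specs for any real engine:

```python
T1_K = 300.0   # intake air temperature, ~27 C (assumed)
r = 15.0       # compression ratio, plausible for a Diesel (assumed)
gamma = 1.4    # heat capacity ratio of air

# Fast compression is roughly adiabatic: T2 = T1 * r^(gamma - 1)
T2_C = T1_K * r ** (gamma - 1) - 273.15   # ~613 C
P2_psi = 14.7 * r ** gamma                # ~651 psi, starting from ~1 atm

AUTOIGNITION_C = 210.0  # rough auto-ignition temp of diesel fuel (assumed)
print(round(T2_C), round(P2_psi))
print(T2_C > AUTOIGNITION_C)  # True: plenty hot enough to light the fuel
```

So at a compression ratio of 15 or so you end up in the neighborhood of the ~600 psi figure above, with the air way hotter than the fuel needs.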

I'm lazy today so I'm just cold swiping images (this is from automobilehitech.com).  It's a pretty good cross-sectional illustration of the combustion chamber and piston at each point in the Diesel cycle. 
Obviously there's a bit of a chicken/egg problem here: if the ignition cycle relies on the engine already running with enough power to compress the air to its critical pressure, how do we actually start the engine?  If you know anything about cars right now you're probably saying "duh," but as I discussed at length earlier this is all fairly new to me.  Anyway, you start the engine the same way you start a regular engine: by using an electric starter motor to spin the engine until it's moving fast enough for the ignition cycle to take over.  A disadvantage of using heated air vs. spark plugs as your fuel ignition system is that cold weather, which results in both a lower starting air temperature and a cold engine block around the combustion chamber, can make it much harder to get the engine started.  This is usually worked around by using electric engine block heaters, although you can get cute and do stuff like use a fuel with a lower auto-ignition temperature than diesel fuel (ether is popular if you can get it) to get the engine running until it's up to operating temperature.

So in the Diesel engine, we've got a clever but functionally identical analog to the Otto-cycle design, which raises the question: why use one over the other?  The big advantage of Diesel engines is efficiency; because you have to compress the combustion chamber quite a bit more in order to get the fuel-air mixture to spontaneously ignite, Diesel engines have much higher compression ratios than standard engines.  Higher compression ratio means more piston travel distance, which means more work done per combustion cycle; the end result is that Diesel engines can be up to 50% more fuel-efficient than their Otto-cycle counterparts.  The downside is that all that extra piston travel makes the engine block much bigger and heavier, which can entirely negate the extra engine efficiency in smaller vehicles where the engine is a large fraction of the total weight.  As a result, the places you'll usually see Diesel engines are in applications where engine weight isn't a huge deal-- big trucks, ships, tanks, and stationary stuff like generators.  They do show up in cars (particularly in Europe), but keeping the engine weight reasonable in smaller vehicles generally means sacrificing some power.  On the other hand, the fact that you don't have an explosive fuel-air mix in the chamber until just before combustion time means you can play all kinds of games with turbo-charging the intake air without worrying about the engine blowing up on you.
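The compression-ratio-vs-efficiency relationship can be made concrete with the textbook ideal air-standard cycle formula.  This is a sketch: it uses the ideal Otto-cycle efficiency for both ratios (ignoring the Diesel cycle's cutoff-ratio correction, which knocks the real number down a bit), and the compression ratios are typical-ish values I'm assuming:

```python
GAMMA = 1.4  # heat capacity ratio of air

def ideal_otto_efficiency(r):
    """Ideal air-standard Otto-cycle efficiency at compression ratio r."""
    return 1 - r ** (1 - GAMMA)

print(round(ideal_otto_efficiency(10), 2))  # 0.6  (gasoline-engine-ish r of 10)
print(round(ideal_otto_efficiency(20), 2))  # 0.7  (Diesel-ish r of 20)
```

Real engines fall well short of these ideal numbers, but the gap between the two ratios is the point: more compression, more of the fuel's energy turned into work.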

The other huge advantage of Diesel engines is the fuel itself.  It's based on petroleum, like gasoline, but unlike gasoline it's pretty much just distilled petroleum; you don't have to do a bunch of stuff to it afterward to turn it into a usable fuel, which means it's much cheaper and cleaner to make.  Petroleum distillate also has a way lower vapor pressure than gasoline, meaning that if you spill some you don't have that whole rapid-outgassing-of-explosive-fumes issue you do with regular gasoline.  Unless you're committing arson (note to self: if you ever need to burn down a building, don't buy diesel fuel even if it's cheaper) that's generally a good thing.  It's also relatively easy to make a diesel fuel substitute ("biodiesel") from all kinds of organic material, and unlike gasoline substitutes like ethanol you can pretty much just run pure biodiesel in a stock diesel engine without issues.  As people come up with increasingly clever and efficient ways of synthesizing biodiesel on useful scales, that one's going to become a big deal.

All of my gearhead/engineer friends are always gushing about Diesel engines, and now I understand why.  Being able to yank out the entire spark ignition system and replace it with a clever application of the laws of physics is the kind of elegant solution to a problem that engineers spend their whole careers trying (and often failing) to come up with; the fact that it actually results in a more efficient engine with a more flexible fuel system is just gravy, honestly.  And since said gearhead friends are probably reading this, I'm well aware that I glossed over a lot of subtleties of engine design and optimization here, so feel free to point them out in the comments.

Tuesday, May 1, 2012

How Does Radiocarbon Dating Work?

Like most things in this world, I never gave radiocarbon dating (the method archaeologists use to determine the approximate age of stuff they dig out of the ground) a whole lot of thought.  It seemed obvious enough how it worked; some small percentage of the carbon in everything is apparently the unstable carbon-14 isotope.  That isotope has a finite, known half-life; over time, it decays away (back into nitrogen, it turns out, via beta decay), while the stable carbon-12 and carbon-13 isotopes stick around.  The result of all this is that you should be able to estimate the age of things by the amount of carbon-14 left in them.  Assuming you can avoid sample contamination by "modern" carbon-bearing materials (which I would guess is really hard), radiocarbon dating is apparently pretty accurate for measuring the ages of things.  Or that's what I assumed; like I said, since it doesn't really have anything to do with my field, I didn't give it a whole lot of thought.

After writing that post a few weeks ago on why the earth's crust is mostly silicon though, I realized something: basically all the carbon in the universe was formed in supernovas during the first couple of billion years after the Big Bang.  That means that the carbon in atmospheric CO2, animal remains, and anything else you might find on Earth should be 1) pretty much all the same age, give or take a billion years, and 2) significantly older than the Earth itself.  How can you use atoms from the primordial universe to determine the relative ages of things from the (fairly short, on the cosmic scale) history of planet Earth?

Per usual with this kind of thing, I started out with a couple of entirely wrong assumptions.  The first was that all the carbon on earth, in all its various isotopes, was Big Bang detritus.  In fact, it turns out that carbon-14 (C-14 from here on out, because typing is hard) is continuously produced in the atmosphere via interaction between atmospheric nitrogen and the high-energy cosmic rays that are constantly bombarding the earth.  The cosmic rays produce neutrons in the upper atmosphere, which can replace a proton in one of the nitrogen atoms floating around out there.  The result is an atom with the same atomic weight as nitrogen (because protons and neutrons weigh the same) but an atomic number (proton count) reduced by 1, which if you check the periodic table corresponds to our unstable C-14 atom.  Said C-14 atom will eventually combine with oxygen to form atmospheric CO2, which is then taken up by plants, etc etc, until basically every living thing has some small quantity of the stuff in it.
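Written out as reactions, the whole production-and-decay loop looks like this (standard nuclear bookkeeping, nothing I derived myself):

```
production (upper atmosphere):   n + N-14  →  C-14 + p
decay (half-life ~5,730 years):  C-14  →  N-14 + e⁻ + antineutrino
```

Note the pleasing symmetry: nitrogen in, nitrogen out, with a few thousand years of carbon in between.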

The half-life of C-14 is surprisingly short, about 5,730 years.  That means that the stuff would be long gone from the Earth if the supply of it wasn't being constantly re-upped by cosmic ray bombardment, which solves the "all carbon is as old as the universe" problem.  It does make things tricky though, because now you've got two questions to answer when you want to carbon-date something:

1) How much of the C-14 in the thing you're dating has decayed?

2) How much C-14 was there to begin with, and how old was it?

You need at least a reasonable approximation of both of those numbers to accurately determine the age of something. 

The critical concept here is that a living organism is constantly refreshing its internal supply of C-14.  Whether it's a plant taking up CO2 as part of its respiration cycle, an animal eating that plant, or a bigger animal eating that animal, atmospheric CO2 is constantly making its way into the internals of every living thing on Earth.  As a result, the ratio of C-14 to C-12 in any living thing is going to be approximately identical to the ratio in the atmosphere, which we can treat as constant over time.  When an organism dies, it stops doing all the things that would normally refresh its supply of C-14 (eating and breathing, for example), making death a handy "t=0" point for carbon dating.  Basically, if you know the half-life of C-14 (which we do), you can approximate the time of death of any once-living thing (or anything that was originally a part of a living thing, like fabrics) by comparing the ratio of C-14 to C-12 atoms in a sample of it to the atmospheric ratio.  Since half-life is a relative measurement (it tells you the ratio of the current amount of isotope to the initial amount at a given time), the age of the C-14 already in the body doesn't matter.  It's worth mentioning that carbon dating only works on discrete ex-lifeforms; trying to carbon-date something like soil or peat will just give you a mess, since there's bits of things that all died at totally different times in there.
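The actual arithmetic, once you have the ratio, is a one-liner.  Here's a quick sketch (the measured ratios are made-up numbers for illustration):

```python
import math

T_HALF = 5730.0  # C-14 half-life, in years

def c14_age(ratio_vs_atmosphere):
    """Years since death, given (C-14/C-12 in sample) / (C-14/C-12 in air)."""
    return T_HALF * math.log2(1.0 / ratio_vs_atmosphere)

print(round(c14_age(0.5)))    # 5730: one half-life gone
print(round(c14_age(0.25)))   # 11460: two half-lives gone
print(round(c14_age(0.001)))  # ~57,000 years: near the practical limit
```

Note that only the ratio shows up in the formula; the absolute amount of C-14 (and how old those particular atoms are) drops out entirely, which is the point of the previous paragraph.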

The most interesting consequence of C-14's much-shorter-than-I-thought half-life is that it sets a pretty firm limit on the maximum carbon-dateable age of stuff.  The older something is, the more of the C-14 in it has already decayed, and the harder accurately measuring the remaining concentration becomes.  In practice, you can carbon-date things back to about 10 C-14 half-lives, or ~60,000 years.  At that point, the C-14 ratio has decayed to less than 1/1000th of its initial value; as you approach that limit, measurements get much more difficult and more susceptible to contamination.  So radiocarbon dating is really only useful for dating once-living things from roughly the blip of time that humans have been around.  Whether because of pop-cultural portrayal or just the fact that I'm naturally incurious, I'd always just assumed carbon dating would give you the age of anything you wanted, regardless of composition or oldness.

(Wikipedia, as usual)

Friday, April 20, 2012

Will the Chevrolet Volt Save You Money?

EDIT 4/23/12: I made a pretty huge mistake here in not taking the relative efficiencies of gas vs. electric car motors into account and just assuming they were roughly equal.  In reality electric motors are about a factor of four better.  That completely changes the conclusion of the post, which originally had the Volt costing about as much as a regular car to drive around.  I've edited heavily to take all that into account.  

The Chevrolet Volt, and other plug-in hybrid vehicles like it, are marvels of engineering that should have all self-respecting nerds salivating for them to come down to normal-people prices.  Their genius is in the way they get around what's always been the main issue with electric cars: the energy density (the amount of energy you can store in a given weight or volume) of batteries, while it sucks less than it used to, has never been able to match the energy density of gasoline.  As a result, electric cars either need to carry several times the mass of a full gas tank in storage batteries (almost always impractical) or have their range drastically compromised.  The Volt and its ilk work around this by adding a gasoline engine to basically act as a battery charger on the road; it doesn't do much more than burn (high-energy-density) gasoline to keep the (low-energy-density) battery topped up, while the battery powers the drivetrain via electric motor.  This is different from parallel-hybrids like the Prius, which freely switch between using their gas and electric motors to power the drivetrain, and in principle should be able to seriously increase gas mileage to well above what even current hybrids are capable of.  It does in fact succeed in doing this, as this photo from James Fallows' blog shows:

For reference, that's approximately Philadelphia to Denver on a single tank of gas.

That's bananas, right?  Unfortunately it's not that simple.  The Volt ain't magic; it still takes the same amount of energy to push it that 1389.4 miles as it does a regular car of the same size.  The difference is that a lot of that energy is coming from electricity (via charge-ups between drives) rather than gasoline.  Since electricity isn't generated from nowhere and definitely isn't free, miles per gallon of gasoline is an extremely deceiving metric to use when evaluating the efficiency of plug-in hybrids; we need to think of a way to take that generated electricity into account too.  I decided to figure out a way to do this and, in the process, find out (roughly) how much money you'd actually save by driving a Volt vs. a regular, decently-efficient car.

I made a couple of simplifying assumptions in figuring all this out:
  •  I assumed the car usage summarized in the photo above was typical.  From the numbers I've heard thrown around for plug-in hybrids, it's probably close.
  • My "normal" car was assumed to have an average gas mileage of 30 mpg over the same 1389.4 miles that Volt drove.  That's a reasonable assumption for a well-built traditional car of the Volt's size.  I also assumed that it had roughly the same size, weight, drag coefficient, etc. as the Volt, so the energy used to move both of them would be close to identical.
  • I assumed that the efficiency of charging the Volt's battery with a gasoline motor is the same as the efficiency of moving the regular car with a gasoline motor. The Volt is probably slightly more efficient for various reasons. 
  • I assumed that all the electric power used to push the Volt came from charging it off a residential power grid.  In real life some of it will come from regenerative braking, but probably not enough to throw off our calculations by more than a couple of percentage points.  
We need to know a few numbers before we can get started:
  • The energy you get from burning gasoline is, according to Wikipedia, about 34 MJ/liter.  In American units, that converts to about 35.75 kW-hr/gallon
  • The average US price of a gallon of gas, at the time of this writing, was about $3.88.  We can calculate the energy cost of burning gasoline to be about 10.9 cents per kW-hr, using the previous number. 
  • Likewise, the average national cost of 1 kW-hr of electricity in 2010 (the most recent data I could find with a quick Google search) was about 11.5 cents.
  • The energy efficiency of a good internal combustion engine is about 20%; electric motors run closer to 80%.  In other words, you can get an electric motor to do the same amount of work as a gasoline motor for 1/4 the input energy.  

First we need to calculate the total energy used to move the car over our representative 1389.4 miles.  If we assume it's the same for both our Volt and normal car, and that our normal car can average 30 mpg efficiency, then we know the normal car will need 46.313 gallons of gas to go that distance.  Since we know how much energy is contained in a gallon of gas now, we can work out that the gas-powered car will use about 1656.16 kW-hrs of energy during the drive.  Since the engine is only 20% efficient, only about 331 kW-hrs of that were actually needed to move the car; the rest gets lost as heat, noise, etc.

All of that energy came from burning gasoline for the traditional car, so we can pretty easily calculate the total cost of the drive by using the price of gas and the energy density of gas: about $180.52.

For the Volt, things are slightly more complicated.  The electric motor powering the drivetrain is about 80% efficient, so dividing 331 kW-hrs by that gives us 414 kW-hrs, the input energy needed to move the car.  Since we know we burned 10.4 gal of gasoline during the trip, we can calculate that about 372 kW-hrs was used by the (20% efficient) gasoline engine.  If we assume all of that was used to charge the battery (in reality some would have been used to power the drivetrain, but we'll ignore that for simplicity) that's about 75 kW-hrs of battery charge from burning gasoline.  The rest of the battery's charge would have come from the power grid; we can calculate that by subtracting the gas engine's contribution to charging the battery from the total 414 kW-hr charge.  The total energy cost can then be calculated as follows, by adding the battery-charging contributions of the gasoline engine and the power grid:

Cost = (372 kW-hrs)*($0.109/kW-hr) + (414-75 kW-hrs)*($0.115/kW-hr)

The total cost comes out to about $79.50, more than a factor of two less than the conventional car.
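Since I'll inevitably want to redo this when gas prices change, here's the whole calculation collected into a short Python script.  All the inputs are the same assumed numbers as above, so it inherits all the same caveats:

```python
MILES = 1389.4           # trip length from the photo
MPG_NORMAL = 30.0        # assumed conventional-car mileage
KWH_PER_GAL = 35.75      # energy content of gasoline, kW-hr/gallon
GAS_KWH_PRICE = 0.109    # $/kW-hr of gasoline energy ($3.88/gal, rounded)
GRID_PRICE = 0.115       # $/kW-hr of grid electricity (2010 US average)
ETA_GAS, ETA_ELECTRIC = 0.20, 0.80
VOLT_GALLONS = 10.4      # gas the Volt actually burned on the trip

# Conventional car: all energy comes from gasoline
gas_energy = (MILES / MPG_NORMAL) * KWH_PER_GAL   # ~1656 kW-hrs burned
useful_energy = gas_energy * ETA_GAS              # ~331 kW-hrs at the wheels
cost_normal = gas_energy * GAS_KWH_PRICE          # ~$180

# Volt: same useful energy, delivered through an 80%-efficient motor
volt_input = useful_energy / ETA_ELECTRIC         # ~414 kW-hrs into the motor
volt_gas_energy = VOLT_GALLONS * KWH_PER_GAL      # ~372 kW-hrs of gas burned
battery_from_gas = volt_gas_energy * ETA_GAS      # ~75 kW-hrs into the battery
from_grid = volt_input - battery_from_gas         # the rest comes from the wall
cost_volt = volt_gas_energy * GAS_KWH_PRICE + from_grid * GRID_PRICE

print(round(cost_normal, 2), round(cost_volt, 2))  # roughly 180.5 and 79.6
```

Swap in your local gas and electricity prices and you get your own version of the comparison.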

So that's about 13 cents/mile to drive the conventional car, vs. 5.7 cents/mile for the Volt.  That's a pretty big difference; even with the money you're paying to charge the car, it's still costing you about half as much to drive your Volt around as it would a similarly-sized conventional car.  To put that in perspective, if you drive 20,000 miles in a year, the Volt will save you almost $1500 annually.  That's a long way from "paying for itself," at least at current prices (a Volt will run you almost twice as much as a similar conventional car), but if plug-in hybrids like the Volt are anything like the current generation of hybrids they should fall in price pretty quickly over the next few years.

So yes, the Volt will save you quite a bit of gas money, even though it's far from the free lunch that the mpg numbers being thrown around make it look like.  From a cost standpoint, you could assume it's equivalent to a hypothetical gas car that got around 70 mpg; that's not exactly Philly to Denver on a tank of gas, but it's pretty good.   The cost savings will probably only get bigger, as gas prices continue to rise faster than electricity prices, and in places like Europe where gas isn't artificially cheap the Volt is already close to paying for itself in a couple of years.  

It's worth mentioning that this is only looking at raw fuel cost; there are a lot of other advantages (less pollution and reduced dependence on foreign oil are two big ones) to using centrally-generated electric power vs. burning gasoline to power our cars.  It's far from a perfect solution to the transportation problem the US is going to have to deal with when our current era of cheap gas ends, but it's one of the best things anybody's come up with so far.

Tuesday, April 10, 2012

Why Is The Earth's Crust Mostly Silicon?

A couple posts ago, I mused that it was a pretty goddamn convenient coincidence that most of the crust of the planet we live on was made of the one element that's absolutely essential to all modern technology.  Being a generally lazy person, I was ready to just shrug and say "eh god did it" until I remembered that I'm, at best, an agnostic and not supposed to be doing that.  So I went with my backup plan-- shrugging and saying "eh, astronomy/geology did it."  It wasn't really germane to the earlier post's topic anyway, but the whole point of this blog is to actually try to find out the answers to all the things in life I usually just shrug and accept.  Plus thinking about it got me curious about two things I know next to nothing about: how heavy elements are formed and how the earth was formed.

The answer to this one goes all the way back to the beginning of the universe, when all the matter in existence (your desk, my computer, Andy Reid, etc) was created in the aftermath of the Big Bang-- the nuclei within the first few minutes, the neutral atoms a few hundred thousand years later.  Problematically, that matter at the time consisted almost entirely of hydrogen and helium, since a rapidly cooling quark-gluon-lepton plasma (the mess left by the Big Bang) is going to relax into the least energetic state possible.  In this case, that means lots of individual or double protons that were eventually able to capture an equivalent number of electrons as the universe continued to cool.

Hydrogen is great for making water and explosions and everyone loves balloons, but as you've probably noticed almost everything solid in the universe is made up of heavier elements like carbon, silicon, and iron.  So how did we go from "shit-tons of hydrogen, helium, and not much else" to the clusterfuck of 100+ elements that makes up the periodic table?

Short answer: explosions, and lots of 'em.  All those clouds of hydrogen and helium in the early universe would eventually (within a few hundred million years) coalesce into discrete masses, aided by gravity.  Eventually these masses got dense enough that the hydrogen and helium at the cores was under enough pressure to undergo fusion.  The result was lots and lots of gigantic primordial stars.
 
These primordial stars, being much purer hydrogen-helium blobs than most of our current crop, were able to burn a lot hotter and, as a result, could get quite a bit bigger.  More mass means more core pressure means way more fusion than we see in most "modern" stable stars, which mostly just make helium; large numbers of protons could be fused into heavy elements. Every element from carbon through iron is/was formed via extreme stellar fusion this way.

Conveniently, the stellar mass that's necessary for this kind of higher-order fusion to occur also tends to make a giant star (superstar?) extremely unstable, so after creating heavier elements in its core for awhile it generally goes boom in a supernova/hypernova event, spreading those elements out through the universe.  As a result, the universe's supply of heavy elements consists overwhelmingly of the stuff between carbon and iron on the periodic table.  The elements heavier than iron, created from less-common non-fusion processes in large stars (physical limits on stellar mass mean iron is about the heaviest thing you can make with pure stellar fusion), are quite a bit rarer and get even more so as their atomic number goes up.

Relative abundance of the elements in our solar system (and, by extension, the galaxy/universe).  The weird sawtooth pattern is due to the fact that elements with even atomic numbers have a higher binding energy than odd-numbered ones.  Note that the y-axis is log scale, so differences are bigger than they look.  (thx Wikipedia)

So a lot of the early history of the universe was just giant stars forming and exploding, making lots of heavy elements in the process (it's worth mentioning that this is still going on, although less frequently).  At the same time this supernova-fest was happening, more reasonably-sized stars that didn't explode all the damn time were also getting formed and coalesced into clusters, galaxies, etc, eventually giving us approximately the universe we know and love today.  Once there were stable stars, the whole process of gravitational capture of heavy elements and planetary accretion started creating solar systems, including ours. 

So at the end of the day (or couple billion years or whatever), the top ten most common heavy elements in the galaxy (in order of abundance) are oxygen, carbon, neon, iron, nitrogen, silicon, magnesium, sulfur, argon, and calcium.  It's a pretty safe bet that most of these are going to have a lot to do with Earth's composition.  We can rule neon and argon out almost immediately though; they're noble gases and aren't going to form anything solid without lots of coercion.  Of the others, oxygen has a tremendous advantage: it can form stable, solid compounds with everything else on the list except carbon and nitrogen, and lots of other elements too.  More importantly, it's the only top-ten element that's capable of doing this.  So it's pretty much a given that the crust is going to be made up of mostly "rock-like" (solid at planetary temperatures) oxides of abundant elements.  Oxygen, ergo, is pretty much a lock for most common crustal element, and indeed wins by more than a factor of two over the first runner-up.

So now we're battling for second place, and the question becomes "which oxides?"  You can roughly work this out by looking at all the rock-like oxides, rating them by the galactic abundance (or lack thereof) of the other element involved, and then accounting for each oxide's density.  Density matters because Earth was basically a liquid during its formation; denser elements/compounds had a tendency to sink down toward the core, while the lighter ones floated around in what would become the crust.  So while you'd expect iron oxide to be the most common compound in the crust, its relatively high density causes it to lag well behind the silicon and aluminum oxides.  Same deal with magnesium, to a lesser extent; the less common aluminum and calcium oxides end up beating it out even though aluminum isn't even in the top ten of galactic abundance.
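Here's the sink-or-float logic with some numbers plugged in.  The densities are rough handbook values I'm quoting from memory (g/cm³, for the pure crystalline oxides), so treat them as ballpark:

```python
# Approximate densities (g/cm^3) of common rock-forming oxides (assumed values)
oxide_density = {
    "SiO2 (silica)": 2.65,
    "CaO (lime)": 3.3,
    "MgO (magnesia)": 3.6,
    "Al2O3 (alumina)": 3.95,
    "FeO (iron oxide)": 5.7,
}

# In a molten proto-Earth, the dense stuff sinks; print lightest-first
for name, rho in sorted(oxide_density.items(), key=lambda kv: kv[1]):
    print(f"{rho:4.2f}  {name}")
# Silica floats to the top of the list and iron oxide sinks to the bottom,
# which is the gist of why the crust is silica-rich while iron heads coreward.
```

This obviously glosses over a lot of real geochemistry (most of Earth's iron sank as metal, not oxide), but it gets the basic sorting principle across.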

Relative abundance of elements in the Earth's crust.  Note that the green blob (elements that form rocky oxides) is kicking everything else's ass. (thx Wikipedia)
Silicon, though, is the best of all worlds: not only is it the second most common rock-like-oxide forming element in the galaxy (after iron), but the oxide it forms is also pretty light as these things go. Result: lots of silicon oxide in the overall composition of the earth, and nearly all of it floating at the top in what would eventually cool down and become the crust.  The only other oxide that even comes close is aluminum, and even it still lags more than a factor of three behind silicon oxide in crustal abundance.

So as usual, there's a perfectly reasonable, if somewhat long and complicated, explanation for why the most common element in the crust of our home planet is also one of the most useful.  Yes, as far as I can tell it's a complete coincidence that silicon also happens to be a semiconductor, but at least now we know why there's so much of it around.  Still, if silicon didn't semiconduct we'd be pretty SOL; the next most common Si-like elemental semiconductor is germanium, which is about six orders of magnitude less abundant than silicon.  (Slight caveat for the pedantic: carbon, in diamond form, will semiconduct, but not in ways that are very conducive to the low-power digital electronics we like so much.  Still, we might've made it work if we had to; we're clever like that.)

An interesting, largely unrelated fact I learned while looking all this up is that the galaxy (and by extension probably the universe) is, even now, still more than 99% composed of hydrogen and helium.  All the rest of the other elements put together barely comprise enough matter to even rate as a contaminant.  Even weirder, that contaminated field of hydrogen-helium only comprises about 5% of the universe; the rest is apparently dark matter and dark energy.  And that's where I'll stop, because I really don't want to have to go there.  

Thanks to Wikipedia for most of the basics of this one, and the blog's astrophysicist pal for some fact-checking of the parts with stars in them.

Wednesday, April 4, 2012

Why Are Manhole Covers Round?

One of the things they (or I guess "we" at this point, since I routinely fail to mention this at science-outreach events) never tell you about science is that most of it is waiting around.  A good percentage of my work time is spent waiting for processes to finish, waiting for vacuum systems to pump down, and waiting for machines to stabilize.  To make matters even more boring, most of that waiting happens in a clean room, where I'm wearing a bunny suit and forbidden from bringing in outside objects; it's not like I'm going to spend that time catching up on paperwork or reading a book.  The point I was gradually getting to: the other morning I was waiting for something science-y or other to happen and  killing time by repeatedly clicking the "random" button on theoatmeal.com.  While doing so, I ran across this:


(this is where you take a break and go read the hard-to-embed comic)


I realize that this is sort of missing the point, but the manholes question bugged the crap out of me for a solid day afterward.  The best answer I could come up with was "because the pipes they cap off are also round," but that's the worst kind of kick-the-can answer and just raises the question "well then why are the pipes round, genius?"  So that's less than helpful.  All I accomplished from trying to come up with a better answer was getting the Teenage Mutant Ninja Turtles cartoon theme stuck in my head, so I decided to go to the Wikipedia.

Apparently this is an actual, somewhat famous question Microsoft likes to ask at interviews.  So for probably the 20th time today (and it's only a bit after noon), I'm extremely grateful to have a job despite quite clearly not deserving one.  Anyway, the reason Microsoft likes it so much is that there are a lot of good answers, and apparently which one you pick gives some unique insight into your personality (all my answer tells you is "MIT's Ph.D program has really gone downhill," so SUCK IT MICROSOFT).

Japanese manhole cover, which looks exactly how I'd expect it to.

The most important reason for having round sewer caps involves spatial geometry, so as someone who once got tagged "possibly learning disabled" in high school math I really never had a prayer here.  Basically, there's no orientation of a round sewer cap that can make it fall through a round hole of the same size.  A square/rectangular sewer cap, on the other hand, can easily be dropped down its corresponding hole if the angle is right. As an abortive childhood adventure inspired by either Goonies or Ninja Turtles taught me, sewer caps weigh like a quarter-ton; I can respect that dudes working underground, waist-deep in human excrement, probably have enough problems without the threat of one falling on their head.
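If you'd rather trust arithmetic than a quarter-ton of iron, here's a quick sanity check of that geometry in Python (all dimensions are made up for illustration):

```python
import math

# Illustrative geometry check (arbitrary units, not real manhole specs).
# A square hole's widest opening is its diagonal, which is longer than
# the cover's side -- so a square cover tipped on edge fits through.
side = 1.0
hole_diagonal = side * math.sqrt(2)   # ~1.414
square_can_fall_in = hole_diagonal > side
print(square_can_fall_in)  # True

# A circle is a constant-width shape: the cover measures one diameter
# across in every direction, and no straight-line opening of a round
# hole is longer than that same diameter.  No orientation works.
diameter = 1.0
longest_opening = diameter
round_can_fall_in = longest_opening > diameter
print(round_can_fall_in)  # False
```

The key property is constant width; the circle is just the simplest shape that has it.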

The fact that sewer caps are so heavy also leads to a few other good reasons for them being round: they'll fit on the sewer in any orientation (no need to rotate them to the correct angle), and they can be easily moved around by rolling.  Similarly, if there's a sewer cap in the road that's not correctly seated, a rounded edge sticking out is a lot less likely to shred your car tires than a sharply-cornered one.


Perhaps somewhat more practically, circular tunnels are both easier to dig and more stable than any other shape, so if you don't want your sewer system to collapse they're an excellent choice.  And finally, there's simple economics to consider: there's only a couple of companies that make sewer caps, and they all make exclusively round ones for all of the above reasons.  If you want any other shape, it's going to cost significantly more to have it custom-made.

You do occasionally see square sewer/utility hatches, usually in places that are a) not in a road, and b) leading down to electrical conduits, storm drains, or other things that don't run very deep.  The rugged individualists of Nashua, NH apparently use triangular sewer caps because LIVE FREE OR DIE or something.  I really hope their libertarian utopia is willing to make an exception and give sewer workers some decent health insurance. (addendum: a cap shaped like a Reuleaux triangle, the rounded-off constant-width version, can't fall through its hole either; a flat-sided triangle technically could, if you tipped it edge-first.  See earlier comments on spatial geometry)

FREEDOM!!!!
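For the pedantic, the triangle case is worth a quick sanity check: a flat-sided equilateral triangle isn't a constant-width shape the way a circle is, because its height comes out shorter than its side; the version that genuinely can't fall through is the Reuleaux triangle. Two lines of Python (unit side length, purely illustrative):

```python
import math

# An equilateral triangle of side 1 is only ~0.866 units tall, so
# tipped edge-first it's narrower than its own hole in one direction.
side = 1.0
height = side * math.sqrt(3) / 2   # ~0.866

print(height < side)  # True: not a constant-width shape
```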


So that's about six different, very good reasons for making sewer caps round, none of which I was able to come up with on my own.  That's disappointing, but I'm heartened by the fact that judging by such bestselling products as the Zune and Windows Phone 7, the question doesn't seem to be working out all that well for talent-screening at Microsoft.  Still, now you can go get a job there if you want, assuming they're dense enough to not change their interview questions after they turn up on both Wikipedia and random webcomics.  They'll probably ask you something like "why do we have a 120V/60Hz AC power grid" now, in which case you're welcome and you can repay me by doing something about the idiotic User Account Control in Windows.

Monday, February 27, 2012

Why Is The Symbol for Current "I"?

As I've mentioned once or twice before in this very blog, everybody who's ever done anything with circuits knows Ohm's law.  It's a simple little equation that states that the voltage across a linear circuit element is equal to the current passing through it times the element's resistance, or V=IR.  The symbol for voltage, for reasons that I really hope are obvious, is V.  The symbol for resistance is R.  The symbol for current is I.  One of these things is not like the others.
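Just to make the cast of characters concrete, here's Ohm's law as a one-liner in Python (the component values are made up for illustration):

```python
def voltage(current_amps: float, resistance_ohms: float) -> float:
    """Ohm's law: the voltage across a linear element is V = I * R."""
    return current_amps * resistance_ohms

# 2 A of current through a 6 ohm resistor drops 12 V across it:
print(voltage(2.0, 6.0))  # 12.0
```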

When you're a first- or second-year EE student you've got enough other crap to think about that it usually doesn't occur to you to wonder why all the symbols for things are what they are.  I was usually too busy frantically trying to solve for I to wonder why, in fact, they called it I and not C or something more logical.  The only time I remember it ever even coming up was when we did things with complex numbers.  Normally when you write out a complex number (if you've successfully put high school algebra out of your mind, a complex number is one with both real and imaginary components) you do it as the sum of the real and imaginary parts, with "i" being added to the imaginary so you can tell the difference (e.g. 5 + 3i).  For various reasons, complex numbers turn up a lot in circuit theory, so EEs use "j" in place of "i" to represent the imaginary component of a complex number.  You would think this would've sparked me to think "wait, isn't using I for current actually kind of confusing and stupid?" but I was probably thinking something more along the lines of "jesus christ math is hard and what's this crap with the letter j now?  Fine, I'll call imaginary numbers whatever you want if you pass me."
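Amusingly, Python sides with the EEs here and spells the imaginary unit `j` in its complex literals. A quick sketch of why complex numbers keep turning up in circuit theory, computing the impedance of a series RC circuit (component values and frequency are made up for illustration):

```python
import math

R = 1000.0   # resistance in ohms (illustrative value)
C = 1e-6     # capacitance in farads (illustrative value)
f = 159.0    # signal frequency in Hz (illustrative value)

omega = 2 * math.pi * f
# Series RC impedance: Z = R + 1/(jwC).  Note the 1j literal -- not 1i.
Z = R + 1 / (1j * omega * C)

print(Z)       # roughly (1000-1001j): real part from R, imaginary from C
print(abs(Z))  # magnitude of the impedance, in ohms
```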

Much later, I've learned what every engineer eventually learns (all that hard math they made you do as an undergrad can be done pretty trivially with a computer) and have managed to avoid doing all but the most basic math for years now.  That and the general thrust of this blog is probably why I only got to wondering about the symbol for current over a decade after I learned it. 

As with most things in the world, this one is all France's fault.  One of the pioneering scientists who figured out the basics of electromagnetism back in the 19th century was a guy called André-Marie Ampère.  His contributions to the field were so important that he achieved the rare science twofer of getting both an equation (Ampère's circuital law, which Maxwell later folded into his famous equations) and a unit of measurement (the ampere, usually shortened to the amp) named after him (eat that, Einstein!).  Said law basically relates the charge flowing through a conductor to the magnetic field wrapped around it (this was pre-Maxwell, nobody knew electricity and magnetism were pretty much the same thing yet).  Since the charge is "flowing" through the conductor kind of like a liquid flowing through a pipe, Ampère decided to call the rate of charge flow the "current," with the designation for the amount of current being the "current intensity," or intensité de courant in the original French.  The canny reader will note that the first letter in that phrase is our culprit.

André-Marie Ampère (bottom) and his force law(s). 

Science is pretty much first-come-first-served with this stuff; Ampère figured it out first, so the naming conventions he used for the new units he derived became standard throughout the world pretty quickly (he didn't name the unit after himself, as that's widely considered to be a major Science Dick Move; someone else did that later).  There was a bit of pushback in England, with some people persisting in using C for current as late as 1896, but in general everybody stuck with I and was pretty okay with it, probably because C was starting to be widely used for capacitance around the same time.  Older textbooks will apparently still refer to current as "current intensity," although they quit it with that before I went to college, hence the confusion.

Now that the US is the greatest empire in the history of the world and English is pretty much the universal language of science, it's easy to forget that that wasn't always the case.  Lots of Western work in physics, chemistry, and biology was done in French, German, and even occasionally Italian well into the 20th century, once everybody gave up on Latin as a universal academic language.

Wednesday, February 22, 2012

How Does Capillary Action Work?

There's a good chance this is one of those "everyone knows but me" questions, but since I write this blog purely for my own amusement you're just going to have to live with that.

Anyway, I got to wondering about capillary action via the usual route; I left the end of a towel or something hanging into the sink, which still had a bit of water in it, the other week.  About eight hours later, the towel was completely saturated with water in clear defiance of gravity.  This phenomenon is called "capillary action" and is familiar to basically anyone who has encountered water and fabric in some combination in their lifetimes, but when I got to thinking about it I realized my poorly-thought-out mental justification for it ("it's just like a really slow siphon!") made no goddamn sense on a bunch of different levels.  To the internet!

You know what you never, ever run into in electrical engineering?  Fluid dynamics.  I'd argue that this is one of the best reasons to go into electrical engineering, but the fact that we make up for it with horrible things like electromagnetics and quantum mechanics kind of undercuts that.  Anyway, the point is that beyond the basic physics it shares with other things (e.g. diffusion) I know next to nothing about why liquids do whatever it is they do, including weird shit like climbing up fabric.

Apparently capillary action isn't exclusive to fabrics; it's something that will happen in lots of porous materials, including things like bricks and cinder blocks.  The key to the whole thing is narrow channels through the material.  Stick some water in a narrow, vertical-ish tube (the most idealized demonstration of this is a thin glass tube partly immersed in water; see below) and you've suddenly got two pretty big forces to think about.  Surface tension is the first one, which will cause the surface of our narrow water-channel to form something called a "concave meniscus", which is a fancy term for a liquid surface where the level at the edges is higher than the level at the center (the smaller the channel, the greater the meniscus curvature and the stronger the resulting upward pull; the surface tension itself is a fixed property of the liquid). The formation of a concave meniscus is dependent on there being an attractive force between the liquid and the channel material; assuming our liquid is water, we'd need to make the channel out of something hydrophilic, like glass.

Idealized capillary force demonstration.  A narrow glass tube is placed in a bath of water; surface tension and liquid cohesion "pull" the water up the tube to some height where their force is exactly balanced by gravity.


Next you've got the interaction between the capillary medium and the water to consider.  A concave meniscus of water in a channel of hydrophilic material is going to have its edges drawn upward by adhesion forces between the water and the sides of the channel.  As the edges of the water meniscus move up the channel, surface tension and general cohesion of the water molecules ensures that the rest of the surface, as well as the water behind it, is also drawn upward to preserve the shape of the meniscus.

So you've got a combination of adhesion and surface tension creating an upward force that's stronger than the downward force of gravity on the mass of the water.  Obviously that can't last forever; the capillary force is constant with height (assuming a constant-width channel) while the force of gravity is going to increase as more mass (water) is drawn up into the channel.  At some point they'll balance and water will stop "flowing" upward.  This balance point depends largely on the channel diameter (smaller diameter = more capillary pull relative to the weight of the water column, remember?), although the channel wall material (specifically, the hydrophilic-ness thereof) is obviously going to play a role. There's a really simple equation (Jurin's law) for the critical height of the water level in a capillary tube, so if that's of interest to you definitely look it up!
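For the curious, here's a back-of-the-envelope with Jurin's law, h = 2γ·cos(θ) / (ρ·g·r), in Python, using standard room-temperature values for water in clean glass; the 1 mm tube radius is an arbitrary example:

```python
import math

# Jurin's law for capillary rise: h = 2*gamma*cos(theta) / (rho * g * r)
# Standard values for water in a clean glass tube at room temperature:
gamma = 0.0728   # surface tension of water, N/m
theta = 0.0      # contact angle of water on clean glass, radians (~0)
rho = 1000.0     # density of water, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2
r = 0.001        # tube radius, m (1 mm, arbitrary example)

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"{h * 1000:.1f} mm")  # about 15 mm of rise in a 1 mm tube
```

Note the 1/r dependence: shrink the tube to a micron and the same arithmetic gives you rise heights measured in meters, which is why tiny pores are so effective.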

Now extrapolate that out to a porous medium like fabric or a paper towel.  The pores are basically thousands of very tiny channels, and we know from the fact that you can get them wet that paper towels and most cellulose-based fabrics are pretty hydrophilic.  Add in the fact that most of our pores/channels aren't even going to be perfectly vertical (reducing gravity's counter-force) and you've got a really good situation for capillary action to do its work.  In most cases, such as the towel in the sink that inspired this, capillary action can easily lift a decent volume of water up a distance of several inches if you give it enough time to work. 

Water climbs up a brick, which is basically an assload of microscopic glass channels from a fluid-dynamic standpoint



Besides making a mess of my counter that one time, capillary action has some legitimate applications.  It's basically the mechanism by which things like paper towels absorb water, which as most people know is just all kinds of handy.  It's also the mechanism by which certain fabrics "wick" sweat away from the body.  It actually encroaches on my field a little bit too; people building microfluidic and nanofluidic devices (small devices designed to move tiny amounts of liquid around strategically, in order to sort DNA and detect chemicals and things) rely heavily on capillary forces to do their work, since extremely narrow channels mean extremely strong capillary forces.

Per usual, thanks to Wikipedia for making me feel dumb.

Friday, February 17, 2012

Where Does Silicon Come From?

If I ever got it into my head to start believing that there had to be a god, the existence of silicon would probably be one of my main arguments in favor.  It's just one of the several elements sitting on the edge of the metal/nonmetal divide on the periodic table, but as noted white supremacist/physicist William Shockley and friends demonstrated when they built the first transistor back in 1947 (out of germanium, as it happens; silicon took over soon afterward), it's a semiconductor, meaning that whether or not it will conduct electricity is something we can exert control over pretty easily via some clever applications of quantum mechanics and materials science.

It wouldn't be an exaggeration to say that that single discovery probably transformed society more than anything has since the printing press; without it we've got no solid-state electronics, no computers, no internet, no solar cells, and basically none of the cool technology we all take for granted these days (I also wouldn't have a job).  On its own, being a semiconductor isn't that special; there are loads of materials that semiconduct, both elements and alloys.  Silicon has two other big things going for it though.  One is that it oxidizes spontaneously, meaning that if you want to make part of it into a nonconductive oxide it's almost stupidly easy to do so (most other semiconductors don't share this trait).  When you're making complex three-dimensional circuits out of silicon you're going to want to insulate lots of parts from other parts; the fact that it forms oxides so easily makes manufacturing Si-based devices vastly simpler (ergo cheaper) than other semiconductors.

The second, even more important advantage of silicon is that it's the second most common element in the earth's crust.  This miracle material that is the foundation of all modern technology is literally just sitting around all over the place in things like "sand" and "rocks"; unlike other useful things we dig out of the earth (oil, gold, helium, etc), we're never going to run out of silicon no matter how many cell phones each of us decides we need to own.  Even my shrivelled, agnostic engineer's soul is moved by the staggering serendipity of that one, although I'm sure there's a good reason for it that I just don't know (readers who know anything about planetary formation and earth science, feel free to chime in here).


Come on, it's pretty bananas right?


Unfortunately it's not as simple as just scooping up some sand and turning it into a bunch of Core i7 processors, which you'd probably already guessed from the fact that Core i7's cost significantly more than sand.  Remember when I said that silicon will oxidize spontaneously?  That's incredibly convenient in manufacturing, but it also means that basically none of the silicon found in nature is "pure;" it's all been oxidized into "silica," or silicon dioxide (SiO2) if you like chemistry.  The flipside of the fact that silicon oxidizes so easily is that it's extremely goddamn hard to get it to un-oxidize back into elemental form; when you're making semiconductor devices, the only way to cut through a layer of SiO2 is with directed plasma or hydrofluoric acid (HF), a chemical whose lethality has become the stuff of legend among people in this line of work.

So that brings us to the question: how do you turn silica from the ground into semiconductor-grade Si (which is 99.9999999% pure.  I might have missed a 9 in there) on the scale required to feed the insatiable monster that is the commercial semiconductor industry?  I assume they're not doing it with warehouse-sized plasma fields or gigantic vats of HF.  I hope not, anyway.

Did you know there's a whole field devoted to turning stuff into other stuff on industrial scales?  It's called metallurgy, and aside from that I know next to nothing about it.  Unsurprisingly, metallurgists solved this problem a long time ago (probably about when people were realizing that large quantities of very pure silicon would be a handy thing to have around) and I just never knew about it because it never occurred to me to question where the ultrapure, monocrystalline silicon wafers I work with actually come from. 

Any workable process to turn silica into metallic Si is going to need:

  1. Input energy, because the oxygen isn't just going to un-bond from the Si if you ask it nicely

  2. Something else for the free oxygen to bond to and get it out of the way once you've separated it from the Si

  3. A way to get the now-free Si out and separated from any other byproduct you may have created while doing the first two things

It would also be nice if the process scaled, so you could make a ton of silicon as easily as a gram just by building a bigger reactor.


The most scalable, widely-used process by which SiO2 is turned into reasonably pure silicon is called carbothermic reduction, and for the most part it's pretty much what it sounds like.  The first thing you need is heat, a lot of heat (furnace temperatures run north of 2000C), in order to provide the necessary energy to split SiO2 into its component atoms.  This is achieved by running the whole mess inside something called an arc furnace, which produces heat via (yup) several sustained electrical arcs inside the furnace (it's not like you're going to get to 2000C by just lighting a fire under the thing or whatever).  Into this hellish environment you'll then dump both silica and carbon feedstock.  It helps if both are reasonably pure, both for the purity of the final product and the minimization of nasty byproducts.  Coal is generally used for the carbon, while mined crystalline quartz is used for the silica.

There are a few steps involved in the actual reaction, but basically what happens is that the extreme heat forces the silicon and oxygen atoms in SiO2 to separate.  Both atoms go through several intermediate reactions with the carbon we dumped in there, but eventually they'll settle into a combination of metallic silicon and carbon monoxide or carbon dioxide.  Metallic silicon is liquid at the furnace temperatures, so it'll drip down to the bottom where it can be collected.  CO and CO2, as we all know, are gases, so they'll leave via the furnace exhaust and do what they can to contribute to the global warming problem.

A Si-refining furnace, swiped from the survey paper I used as the main source for this thing.
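Skipping the intermediate reactions, the net result is SiO2 + 2C → Si + 2CO.  Here's a rough mass-balance sketch in Python using standard molar masses (this glosses over feedstock impurities and the CO-vs-CO2 split, so treat the numbers as ballpark):

```python
# Overall carbothermic reduction: SiO2 + 2C -> Si + 2CO
# Standard molar masses, g/mol:
M_Si, M_O, M_C = 28.09, 16.00, 12.01
M_SiO2 = M_Si + 2 * M_O        # 60.09
M_CO = M_C + M_O               # 28.01

# Per kilogram of silicon out, how much goes in and comes out?
kg_Si = 1.0
mol_Si = kg_Si * 1000 / M_Si           # ~35.6 mol of Si

kg_quartz = mol_Si * M_SiO2 / 1000     # SiO2 in
kg_carbon = mol_Si * 2 * M_C / 1000    # carbon in
kg_CO = mol_Si * 2 * M_CO / 1000       # CO out the exhaust

print(f"{kg_quartz:.2f} kg quartz + {kg_carbon:.2f} kg carbon "
      f"-> 1 kg Si + {kg_CO:.2f} kg CO")
# roughly: 2.14 kg quartz + 0.86 kg carbon -> 1 kg Si + 1.99 kg CO
```

So every kilogram of metallurgical silicon sends about two kilograms of carbon oxides up the stack, which is the climate-change contribution mentioned above.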


The silicon you get out of this process is ~99% pure, or metallurgical-grade.  That's pretty good, but a far cry from the "nine 9's", or 99.9999999% purity, we need for semiconductor-grade material.  There are a variety of ways to increase the purity from here, both by adding things to the mix to precipitate out certain impurities and by good old-fashioned repeated distillation.  Even a few stray atoms of the wrong type can play absolute hell with semiconductor operation, so you really have to do a good job with this part.  That said, the details of it are extensive and boring, so I'm just going to stop here.  We've figured out the main question (how to get the Si out of SiO2 on a large scale) anyway; the rest is just gravy.
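To put nine 9's in perspective, a hedged back-of-the-envelope (crystalline silicon packs roughly 5×10^22 atoms into a cubic centimeter):

```python
# At 99.9999999% purity, the impurity fraction is 1e-9.
si_atoms_per_cm3 = 5e22        # approximate atomic density of crystalline Si
impurity_fraction = 1e-9       # "nine 9's" purity

impurities_per_cm3 = si_atoms_per_cm3 * impurity_fraction
print(f"{impurities_per_cm3:.0e} stray atoms per cubic centimeter")
```

That still works out to tens of trillions of stray atoms per cubic centimeter, which is within shouting distance of the deliberate doping concentrations that make transistors work; any dirtier and your carefully engineered devices stop behaving.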

So we've learned how to make very pure metallic silicon out of most of the earth's crust and found a brand-new way to contribute to climate change, all in one shot!  If you're interested in all the gory details of this process, I yanked a lot of them from a survey paper that goes into much more detail on every step of getting silicon from the quartz mine to the inside of your iPhone.