Thursday, December 8, 2011

What Does Defrost Mode On My Microwave Do?

From a science standpoint, there's nothing incredibly complicated about a microwave oven.  A thing called a magnetron generates high-power radio waves (the frequency used is about 2.45 GHz, corresponding to a wavelength on the order of centimeters), which are then directed by a waveguide into a box with dimensions set to more or less create a stable standing wave.  Contrary to popular belief, that frequency isn't a special resonance of the water molecule; it's just a convenient band in which polar molecules like water absorb microwave energy efficiently (the mechanism is called dielectric heating: the alternating field yanks the molecules' electric dipoles back and forth, and all that molecular jostling is heat).  Ideally the water molecules in question are a constituent of some kind of food you're attempting to heat up because you're too lazy to cook real dinner, so in principle this is a quick, efficient way of evenly heating (because the standing wave penetrates your entire TV dinner more or less equally) stuff that contains water.

Most microwaves don't have much in the way of controls; the main thing you can set is the cook time, although there's usually a seldom-used option to vary the microwave power too.  Without exception though, they've also got a mysterious setting called "Defrost."  Based on extensive observational research (I stood in front of the microwave for three whole minutes while my soup defrosted the other night) I've been able to determine that:

1) The oven was switching the microwave power on and off, or at least varying it, in cycles that got faster the longer it ran (you can hear the magnetron vibrating slightly when the power is on)

2) My soup-iceberg melted a lot more evenly than it generally does when I don't use defrost mode

Irritatingly, we actually discussed why this happens in some detail in my undergrad electromagnetics class.  I hate E&M with the passion of one thousand suns though, so I either wasn't paying attention the first time or just erased that factoid in the process of purging that whole semester from memory.  Either way, I had to go look it up and then felt stupid when I did.

Way back when I wrote about windchill, I talked about heat always wanting to diffuse into areas where there's less heat.  The same is true of water; if there's a blob of hot water molecules in a body of much cooler water, the hot water will disperse until the whole volume of water is the same temperature (this usually happens pretty quickly but, as anyone who's ever swum through a mysterious warm region in their community pool knows, not instantaneously).  The point here is that water is really good at dispersing heat.

So to tie that back to our microwave, my frozen soup is almost entirely made of ice.  Obviously ice and water are chemically identical, but in ice the molecules are locked in place whereas in water they're free to go wherever.  So when you start blasting away at a chunk of ice with microwave energy, you'll still heat up the water molecules but they won't be able to move around and spread the heat out evenly; ice heats up much slower than water in a microwave for this reason.

That's problematic if you're trying to melt ice evenly with microwaves.  Because the power in a microwave is slightly (or sometimes drastically) different at different points in the box, some of your ice is going to get heated up much more quickly than the rest of it, and that heat is going to stay pretty much where it is.  The problem becomes even worse when you actually manage to melt some of the ice: now you've got a blob of water which, as we discussed, heats up much faster than ice.  So while you're trying to melt the rest of that ice, the water is getting hotter and hotter, and also melting the ice adjacent to it to create an enlarging pocket of "runaway heating."  The end result is that, when all the ice is finally melted, some regions of your food have been cooking for several minutes, while some just defrosted seconds ago.  Since it's all liquid now the temperature will even out pretty fast, but there's going to be a huge variation in how much "bonus cooking" different bits of the soup received.  Nobody wants that.

Defrost mode is an attempt to get around this problem by cycling the microwave power on and off.  While it's on, the ice is absorbing heat unevenly, but during the off cycle that heat has a chance to diffuse and reduce those temperature variations (heat will conduct through ice, just not nearly as fast as it conducts through liquid water).  Initially the "off" cycles are much longer than the "on" cycles, but as the food gets heated up closer to melting (which raises its thermal conductivity) you can get away with much longer "on" and much shorter "off" cycles without things getting too nonuniform. 
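Here's a toy numerical sketch of why the cycling helps.  The model and every number in it are completely made up for illustration (this is not a real thermal simulation): melted cells absorb energy much faster than frozen ones, and heat diffuses between neighbors whether the power is on or not.

```python
def defrost_sim(total_on_steps, duty_period):
    """Toy 1-D defrost model: one row of soup cells under an uneven
    microwave field, with heat diffusing between neighboring cells."""
    n = 11
    temp = [-10.0] * n                 # everything starts frozen
    pattern = [0.3] * n
    pattern[n // 2] = 1.0              # a standing-wave hot spot in the middle
    k = 0.2                            # diffusion rate per time step
    step, on_steps = 0, 0
    while on_steps < total_on_steps:
        if step % duty_period == 0:    # magnetron on this step?
            for i in range(n):
                absorb = 5.0 if temp[i] > 0 else 1.0   # water >> ice
                temp[i] += 0.8 * pattern[i] * absorb
            on_steps += 1
        # heat diffuses whether the power is on or not
        new = temp[:]
        for i in range(n):
            left = temp[max(i - 1, 0)]
            right = temp[min(i + 1, n - 1)]
            new[i] = temp[i] + k * (left + right - 2 * temp[i])
        temp = new
        step += 1
    return max(temp) - min(temp)       # final temperature spread

spread_always_on = defrost_sim(100, 1)   # power never shuts off
spread_defrost = defrost_sim(100, 4)     # on 1 step in 4, same total energy
print(spread_always_on, spread_defrost)
```

The exact numbers don't matter; the point is that interleaving diffusion time between bursts of the same total heating energy shrinks the hot-spot runaway, so the pulsed run ends up with a much smaller temperature spread.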

It's far from a perfect solution, the main problem being that most microwaves calibrate the length of the on/off cycles by asking you to input the total weight of the food you want to defrost.  So unless you have a scale in your kitchen specifically for weighing blocks of frozen food (do you?  we don't) you're just going to end up guessing, and if you're off by much the calibration will be wrong and the whole thing won't work nearly as well.  Even if you're only in the ballpark with the weight though, you're probably going to get a much more uniform defrost than you would by just running the microwave in always-on mode.

You can extrapolate from all this the reason that microwaves cook things so notoriously unevenly: most food is going to be made up of several different ingredients, all of which will contain varying amounts of water and have different thermal conductivities.  Since heating in a microwave is directly proportional to the amount of water that's there to absorb microwave power, that means all your ingredients will heat at different rates even though they're (in principle) exposed to the same amount of microwave power.  That's why even with modern microwaves, which have the little rotating plate at the bottom to make sure no part of your food stays in a dead spot, hardly anything ever tastes quite right after heating.  As we've already established my basic hatred and ignorance when it comes to all things electromagnetic-y, I have no idea how to solve this problem, besides saying "use a real oven, lazy-ass." 

Monday, November 21, 2011

Why Is the Sky (insert color here)?

Depending on the time of day, weather conditions, and where you're looking, the sky above us can appear:

  • Blue
  • Yellow
  • Kind of reddish-pink
  • White
(I'm not counting black because it's not technically a color, and also because I'm not a complete idiot)

Anyway, since all of those colors are coming from the same broadband light source (the sun, which you may have encountered at some point), I always just figured there was a lot of light filtering and absorption in the atmosphere that cut out certain colors and let others pass.  The problem is that the atmosphere is always pretty much the same, but apparently it'll let totally different frequencies of light hit your eyes depending on the circumstances (blue in the daytime, red at sunset, which pretty much covers the whole visible spectrum).  This is probably something I learned in elementary school, but apparently somewhere in two-odd decades of cramming my brain with largely useless facts about electricity, as well as any unguarded alcohol that happened to be lying around, that knowledge got lost.  Luckily we have an internet now.

As I said above, I'm not a complete idiot (usually).  I know as far as we're concerned here on earth, the sun has a radiation spectrum similar to any other black body, with its peak somewhere around the middle of the visible spectrum, meaning it contains various amounts of all visible wavelengths.  In non-science words, that means it emits what's basically a whitish-yellow light.  You can independently verify this by looking directly at the sun, which I recommend doing on a clear, bright day to maximize the effect.

So then why, during a clear day, does the part of the sky not currently being occupied by the sun look a whitish blue?  My initial, typically wrong guess would have been "because the atmosphere is absorbing all the photons but the blue ones."  The problem with that is that 1) We already know photons from most of the visible spectrum can get through the atmosphere just fine, from that looking-at-the-sun experiment we did earlier (which you probably did wrong if you can still read this, incidentally), and 2) higher-frequency light, like blue and UV, tends to be absorbed by things a lot more easily than lower-frequency light like green, red, and infrared (there are exceptions to this rule of thumb, but they're uncommon enough that various people have gotten rich discovering them).  So if absorption was the culprit, we'd expect the sky to look kind of reddish-yellow during the day instead of blue, which it clearly does not.

So what's the deal?  It turns out I was close, kind of.  Low-frequency (reddish to greenish) photons pass through our atmosphere pretty much unimpeded, meaning any that hit your eyes if you're looking up at the sky probably came on a more or less straight path from the sun.  Higher-frequency (blue and UV) photons, on the other hand, are going to interact with the nitrogen, oxygen, and argon in the atmosphere quite a bit, but instead of being absorbed and turned into heat they'll get scattered.

Like most things in science, if you ignore the horrific mathematical and physical models that govern the specifics of it, scattering is conceptually very simple: photon hits atom, photon bounces off atom at some other angle, photon hits yet another atom, wash rinse repeat.  So any blue photon that makes it to your eyes (or UV photon that makes it to the back of my neck) is going to have been rattling around in the atmosphere for awhile, and as a result could be coming from pretty much anywhere in the sky.  So when you look directly at the sun, you're seeing the photons that are low-energy enough to pass through the atmosphere without scattering, while if you look elsewhere in the sky you're seeing the ones that scattered all over the place when they hit, instead of going straight from the sun to your eye.  Incidentally, this is why you can still get sunburned without direct sunlight; the UV rays that burn follow the same rules as blue light, and as a result can come from anywhere in the atmosphere.
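For what it's worth, the scattering in question (Rayleigh scattering, which applies to particles much smaller than the wavelength) gets stronger as 1/wavelength^4, which is why blue wins so decisively.  Here's the one-liner, using ballpark wavelengths I picked for "typical" red and blue light:

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so shorter
# (bluer) wavelengths scatter far more strongly than longer (redder) ones.
red_nm, blue_nm = 650.0, 450.0          # ballpark visible wavelengths
ratio = (red_nm / blue_nm) ** 4         # how much more blue scatters
print(round(ratio, 1))                  # ~4.4x more scattering for blue
```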

This guy's looking right at the sun, and look how happy he is!  Come on, try it!

So alright, that mystery's solved.  What about sunset though?  At sunset we see both the sun going from a yellow to reddish color and the sky going from a blue to red-pinkish color, suggesting that the spectrum of photons getting to us is changing dramatically and inspiring countless terrible poems over the centuries.

The answer is my least favorite thing in the whole world: trigonometry. When the sun is directly overhead, its light only has to pass through one atmosphere's worth of air to make it to your eyes.  When the sun is setting near the horizon, on the other hand, the small angle between the sun and where you're standing means the light has to slice through the atmosphere at a shallow slant, covering the equivalent of many atmospheres' worth of air (near the horizon the path is a few dozen times longer than the straight-down one) (see diagram).  This extra atmospheric travel distance means that most of those scattering blue photons aren't going to make it to where you're standing; they'll end up hitting some other lucky person who's standing under where the sun is right now.  So what's left is basically just the non-scattered reddish sunlight, which is the only thing illuminating the sky and therefore not washed out by scattered blues the way it would be during the day.  This also explains why the sun itself looks redder; the atmospheric path is so long that even some of the midrange (green) frequencies are going to get scattered away, meaning the only photons still making it directly from the sun to your eye are mostly red.

Distances somewhat exaggerated, both for emphasis and because I'm bad at MSPaint
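If you want rough numbers for that slant path, the simplest approximation (treating the atmosphere as a flat slab and ignoring the Earth's curvature) just divides by the sine of the sun's elevation angle.  It blows up unrealistically right at the horizon, where a proper spherical model tops out at roughly 38 atmospheres, but it gets the idea across:

```python
import math

def air_mass(elevation_deg):
    """Flat-slab approximation: atmospheric path length, in units of
    'one atmosphere' (sun directly overhead).  Diverges at the horizon,
    where a spherical-shell model gives roughly 38 instead."""
    return 1.0 / math.sin(math.radians(elevation_deg))

print(air_mass(90))   # sun overhead: exactly 1 atmosphere
print(air_mass(10))   # low sun: ~5.8 atmospheres of air to get through
```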

So last quick question: why do clouds appear white?  You can probably work this one out for yourself by applying what we've already learned, but since I had to look it up maybe it isn't as obvious as it seems in hindsight.  Anyway, clouds are made of tiny droplets of liquid water (water vapor itself is invisible; a visible cloud is vapor that's condensed out).  Those droplets are way bigger than the individual molecules floating around in the atmosphere, and also clumped together pretty closely if there are enough of them to make a visible cloud.  Long story short, droplets that size scatter all visible photons hitting them more or less equally, which means that what gets to your eyes when you look at a cloud is roughly the same amount of everything, even though that's not what the solar spectrum looks like.  Combine equal amounts of all frequencies of visible light and you get...white!  Darker clouds look darker because they're taller, which means a lot of the light hitting them never gets to you at all, just scattered out the top or sides where you'll never get to see it.

We could have come at this from a different direction: if my initial hypothesis (atmospheric absorption) had been correct, most or all of the substantial amount of energy in the light the sun is continuously dumping on the earth would get absorbed as heat by the atmosphere, meaning the place would get really goddamn hot (like melt-lead hot) really fast.  To put it in some perspective, the estimated amount of "greenhouse gases" like carbon dioxide and methane (which DO absorb heat from the sun) that are in the atmosphere right now is something like a few hundredths of a percent of the total atmosphere, and that's already causing us some problems at the moment.

Thanks to Wikipedia and usually terrible sci-fi site io9 for this one!

Thursday, November 17, 2011

What Purpose Do Washers Serve?

Get the "ha ha engineer-boy is dirty and wears dirty clothes and doesn't know what a washer is for" jokes out of your system now, please.

OK, done?  Good.  If you hadn't already guessed, I didn't mean the thing that occasionally cleans my clothes when I remember to do things like that; I was actually referring to the tiny metal discs with the hole in them that you put between the head of a screw and the thing the screw is attached to (fig. 1).  It's sort of common knowledge that you use washers when you want to be extra-sure a screw will stay in place, but I'll be damned if I can tell you why that is exactly. 

Fig. 1: A bunch of washers somebody inexplicably decided to take a picture of (thx Wikipedia)

Unsurprisingly, washers are not rocket science (I mean, except for the fact that they're probably used in rocket construction).  Basically, your standard flat washers (not the split-ring lock washers, which I'll come to) perform two basic functions: maintaining friction and redistributing load.  The friction-maintaining feature is particularly useful if the surface you're screwing into isn't particularly flat; sticking a washer between the screw and the surface means the underside of the screw head will have a nice flat metal surface to bear down onto, so the full clamping force (and the friction that comes with it) gets applied to the joint instead of to a few high spots, making the screw much less likely to loosen. See the diagram below for an illustration of this.
Screws attached to an uneven surface without (A) and with (B) a washer to give the screw head as much contact area as possible. 

The second function (load redistribution) will be familiar to anyone who's ever tried to tightly screw a bolt into some kind of soft material like wood; after a certain point, the pressure exerted on the wood by the screw head exceeds the strength of the wood and further tightening just causes the screw head to get buried in the wood.  So basically there's a "critical screw tightness" that you can't exceed for any given material, and for materials that are softer than metal it often isn't tight enough for things that need to be load-bearing. 

Remember when I said "pressure" up there?  That was the giveaway; pressure equals force per area.  In this case, it's the force applied by the screw head on the material during tightening, divided by the area of the screw head.  So to keep from going over that critical pressure, you can either decrease the force (not tighten the screw as much, which is as previously discussed not helpful) or increase the area of the screw head.  By now you've probably put together that sticking a nice, fat washer between the screw and the wood increases the effective area of the screw head by quite a bit, and will let you get screws and bolts into soft materials much more tightly than you normally could. See below:
Screws attached to soft material, ruining the hole by overtightening (A) or using a washer to spread the force out and keep that from happening (B)
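To put some numbers on the pressure = force / area idea (all the dimensions below are made-up but plausible, not from any fastener standard): same clamping force, two different bearing areas.

```python
import math

FORCE_N = 5000.0   # hypothetical 5 kN of clamping force

def bearing_pressure(outer_mm, hole_mm):
    """Pressure (Pa) under an annular bearing surface: the clamping
    force divided by the ring of contact area around the shank hole."""
    area_m2 = math.pi * ((outer_mm / 2) ** 2 - (hole_mm / 2) ** 2) * 1e-6
    return FORCE_N / area_m2

head = bearing_pressure(9, 5)     # bare 9 mm screw head, 5 mm shank hole
washer = bearing_pressure(20, 5)  # 20 mm flat washer, same hole
print(head / washer)              # the washer cuts the pressure ~6.7x
```

Same screw, same tightness; the wood under the washer just sees a fraction of the pressure.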

So that's what regular old washers do, and if I'd given it much thought I probably could've worked that out for myself.  Split-ring "lock" washers, though, are a whole other story and much more complicated/interesting.

You'll see lock washers used between screws that absolutely need to stay in place no matter what; holding bits of your car together, for example.  That's because lots of things, including but not limited to vibration, thread damage, thermal expansion/contraction, and corrosion can cause a screw to slowly loosen over time.  The lock washer counteracts this, by applying a continuous spring force pushing the screw head and the material apart.  That way, if the screw manages to loosen a little bit for whatever reason, it'll just be shoved outward a tiny little bit by the lock washer but stay "tight," rather than being able to move around freely and possibly damage/loosen itself some more. 

There are other kinds of washers that are mostly application-specific (plumbing washers, for example, are supposed to be leak-tight) but those are the two main ones you'll see a lot in everyday life.  And now we know why.

Thanks to the wonderful, which has been doing basically the same thing as this blog for much longer and better, for the explanation on this one.

Monday, November 14, 2011

What Is the Mass of the Universe?

I don't feel bad about knowing this one because in some sense astronomy is basically the opposite of nanotechnology (my numbers are really small, their numbers are really big, right?).  Still, like any good nerd I read most of the pop-science books and articles I run across about space and the various weird things floating around in it.  Most of those articles concern dark matter and dark energy, the parts of the universe that we know have to be there but can't figure out a way to directly observe.  The question of dark matter and dark energy appears, to this vaguely informed outsider, to be a huge clusterfuck of an issue and not something anybody has a good answer for at this point (this has been confirmed by at least one actual astronomer person that I know, right down to the word "clusterfuck"), so I'm not even going to step in that one.

My question is related, but much simpler: we know "dark matter" exists because the mass of the "visible" universe is much lower than you'd expect it to be from gravity observations.  Ergo, there must be more matter out there than we can directly detect (secret science hint: aforementioned astronomer friend says it's probably just lots of neutrinos, and not "invisible space monsters" as I'd originally guessed).  Implicit in that explanation is something pretty staggering: astronomers have apparently figured out how to estimate the mass of the entire goddamn observable universe to a reasonable level of precision (at least a couple of orders of magnitude).  How the hell did they manage that?

This seemed kind of impractical (image of universe from Wikipedia, image of scale from random google image search)

The answer is kind of an anticlimax, unfortunately; there were no ridiculous calculations or impossibly precise measurements of gravity involved, just some basic observations and lots of extrapolation.

The key is that space, as far as we can tell, is pretty much the same all across the universe: stars of the same type have roughly the same densities, galaxies have roughly the same stellar densities, chunks of space have roughly the same galactic densities, etc etc etc.  There are centuries of astronomical data to back this one up, and the fact that even I know it should say something about its complete lack of controversy. 

So armed with that convenient fact, you can vastly simplify the question: if you can calculate the mass of one more or less representative chunk of space, you should be able to extrapolate that number out ad infinitum.  That's still a lot of universe to deal with, but you can keep drilling down until you hit numbers that can actually be measured.

One of the most current (according to Wikipedia, which may not be the leading authority on this matter) estimates of the mass of the universe uses the Hubble volume (a sphere as big as the whole observable universe; what's important here is that it has a volume of about 4 x 10^30 cubic light years) as its representative chunk of space.  It combines this with observations (by the Hubble telescope, natch) of stellar and galactic volume and density to estimate the number of galaxies, and by extension the number of stars, in the gigantic bubble, mostly by assuming that the parts we've been able to observe are the same as all the other parts density-wise.  For what it's worth, there are about 5 x 10^21 stars in there.

So we're almost there, we just need to decide what to use as the mass of a star.  This is where it gets a little heliocentric and sketchy; the mass of the sun (2 x 10^30 kg) gets used as the mean stellar mass, ostensibly because it's about average-sized (there are lots of bigger stars, but also lots of much smaller ones) but also because we know its mass to a conveniently high degree of precision, what with it being right next door and all.  Still, people who know a lot more than me about this stuff seem to think it's a reasonable assumption.

So the mass calculation becomes a simple equation: mass = number of stars * mean stellar mass.  Even I can do that one.  Apparently the mass of the observable universe is approximately 3 x 10^52 kg, which is a lot of kg. There's another estimation that's even simpler, based on the fact that the universe appears to be at near-critical density, but it gives you essentially the same number. 

Interestingly, if you go back and use that number to calculate the density of observable space, you get 1.766 x 10^-26 kg/m^3 as the mean density of the universe.  To put that in nano-perspective, that's about ten hydrogen atoms per cubic meter.  The universe is pretty empty when you think about it.
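For the skeptical, here's the arithmetic, using the figures quoted above plus the mass of a hydrogen atom (a constant I'm pulling in myself, not from the original estimate):

```python
# Back-of-envelope version of the estimate described above.
n_stars = 5e21           # stars in the Hubble volume (figure from the text)
m_sun = 2e30             # kg, used as the mean stellar mass
mass = n_stars * m_sun   # ~1e52 kg, same order as the quoted 3e52

# Sanity check on the quoted mean density:
density = 1.766e-26      # kg/m^3, mean density of the universe
m_hydrogen = 1.67e-27    # kg, mass of one hydrogen atom
print(mass, density / m_hydrogen)   # ~10 hydrogen atoms per cubic meter
```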

Why Can't I See in Infrared?

The mosquito entry from a few days ago got me thinking about infrared vision in general.  The ability to see heat signatures in the dark seems like it would be a tremendously useful evolutionary advantage, particularly to mostly-nocturnal hunters and prey.  However, with a couple of exceptions (there are a few types of snakes that have very crude IR-sensing organs) you don't see a lot of infrared vision in nature; most animals that need to see in low light have extra adaptations (like the reflectors behind cats' eyes that make them glow in the dark) that help them pick up light in the visible spectrum more efficiently instead.

This thing is more highly evolved than me.

The problem with my reasoning here is that when I think "infrared vision" I think of the night-stages in the Modern Warfare games, where my lovely IR goggles turn the dudes I need to shoot in the face into bright orange blobs against a dark background.  While this is a more or less realistic portrayal of how modern infrared imaging gear works, it ignores the fact that IR-imaging gear is specifically designed and tuned to detect a very narrow range of infrared radiation; in the Call of Duty case that would be whatever wavelength corresponds to the ~37C body temperature of the guy trying to kill me.  We've gotten extremely good at building wavelength-specific sensors as long as we know what wavelength we want to look for, so something like this isn't all that difficult from an electronics standpoint.
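You can actually compute "whatever wavelength corresponds to ~37C" with Wien's displacement law, which relates a black body's temperature to its peak emission wavelength:

```python
# Wien's displacement law: peak wavelength = b / T for a black body.
WIEN_B = 2.898e-3            # m*K, Wien's displacement constant
t_kelvin = 310.0             # ~37 C human body temperature
peak_um = WIEN_B / t_kelvin * 1e6
print(round(peak_um, 1))     # ~9.3 micrometers: long-wave infrared
```

That ~9 um band is exactly where thermal-imaging gear gets tuned.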

As I mentioned in an earlier post though, monochromatic sensors are pretty much nonexistent in nature, for various evolutionary reasons I'll discuss in a minute.  That's problematic, because in the grand scheme of things there's a ton of infrared radiation around and a lot of it is close to the same temperature/wavelength.  Basically anything warm is radiating at some infrared frequency, including water, the ground, and the air.  If you're looking for plants or cold-blooded animals to eat and/or not get eaten by, an IR sense is going to be entirely useless, since they're the same temperature/wavelength as their surroundings.  You might have slightly better luck with endotherms, but they're still not THAT much hotter than their surroundings, and (contra the Discovery Institute) our designer apparently wasn't intelligent enough to give us infrared eyes with frequency-specific lock-in amplifiers on them.  Essentially, unless you have extremely good spectral resolution, seeing in infrared would just look like the equivalent of a whiteout most of the time.  Since the evolutionary advantages of seeing a constant whiteout are dubious, it's not too surprising that no random mutations with an infrared sense stuck around long enough to evolve the resolution they'd need to go all Splinter Cell on woodland creatures.

Practically speaking, visible light is a lot more useful.  Assuming you're out in the woods looking for food, there's really only one source of visible-range light (the sun) to deal with; everything else is just reflecting varying amounts of that source (this is another advantage, as IR light tends to be absorbed by things rather than reflected).  At night, if there's any kind of moon, you'll get the same spectrum from the reflected sunlight.  So if you've only got a broadband detector (hello eye!) it makes a lot more sense, if you need good night vision, to evolve in the direction of making your visible-light detector work better (see cat eyes) than developing an IR sense that's going to be evolutionarily useless unless it's tuned to the precision of a military-spec lock-in amplifier.  The aforementioned IR sense that some snakes have is more a backup than anything else; it's low-resolution and mostly just helps them figure out the general direction of prey until their senses of visible-light sight and smell can do the fine-location work.  It's one of those mildly-useful-but-they-could-probably-live-without-it quirks of natural selection.

So as usual, video games have made me stupider.  Possible next entry: why doesn't jumping on turtles give me extra lives?

Thanks to for the majority of the information here.  No thanks to useless Wikipedia this time.

Why Does AC Electricity Operate at 60 Hz?

As I briefly touched on the other day in the post about switching transformers, pretty much all AC power "alternates" at a rate of 60 cycles per second, or 60 Hz (in some parts of the world it's 50 Hz, but that's not much of a practical difference).  There are a number of reasons why this is a total pain in the ass and/or deadly:
  • 60 cycles per second is about at the limit of what the human eye can detect, which means things like fluorescent lights that are driven by a 60 Hz supply and have fast on/off times will have a barely perceptible flicker (strictly speaking the light pulses at twice the line frequency, since the power peaks twice per cycle, but that's still close enough to the eye's limits to matter; regular light bulbs, which take more than 1/60th of a second to stop glowing when you cut the power, don't have this problem nearly as much).  Depending on how good your eyes are, this may be unnoticeable, subconsciously noticeable (do fluorescent lights give you headaches?  This is why), or incredibly goddamned annoying.
  • The general rule of screening out electromagnetic interference is that high-frequency noise, since it doesn't travel through conductors very well, is easy to shield, while low-frequency noise is notoriously difficult.  60 Hz is, in a world where our computers clock almost a billion times faster than that, about as low-frequency as you're going to get without being DC.  As a result, keeping sensitive electronics (like the audio and science gear I spend most of my time around) from picking up a 60 Hz "hum" is a highly frustrating black art that's not for the migraine-prone. 
  • As we learned when discussing switched-mode transformers, stepping voltages up and down and converting AC to DC can be done much more efficiently with much smaller components at high frequencies.
  • Possibly most problematically, alternating current in the 50-60 Hz range happens to be especially good at disrupting the rhythm set by your sinoatrial node (the thing that tells your heart when to beat).  People will tell you it's the current, not the voltage, that you want to worry about when you receive an electric shock, and that's generally true, but even low-current 60 Hz shocks have a decent chance of throwing your heart into fibrillation and stopping it if they travel through you right.
Whether you're an easily-annoyed scientist with exceptionally good eyes or just a fan of not being electrocuted, that seems like a pretty damning case against using 60 Hz as our power transmission frequency.  So why do we do it?

A bit of history first of all: in one of science's most awesome/ridiculous moments, Thomas Edison and Nikola Tesla got into a big thing at the end of the 19th century over whether America's new and growing electrical infrastructure should use Edison's DC power standard or Tesla's AC design.  Edison, who was more of a dick than history books usually give him credit for being, insisted that DC power posed significantly less risk of electrocution than AC power, and illustrated his point by using AC power to electrocute a goddamn elephant and film it.  When that inexplicably failed to win him the argument, he also secretly funded the construction of the first (AC-powered obviously) electric chair to further prove his point, even though he was technically against the death penalty.  Like I said, he was a bit of a dick.

Thomas Edison, the Michael Vick of 19th-century science.

Despite the fact that Edison obviously wanted it more, Tesla's AC system ended up winning out for the simple fact that AC power is much, much more efficient to transmit over long distances than DC.  The hell of it is that Edison was technically correct; it is much harder to kill yourself with DC current, but this was Gilded Age America and "safer" never stood a chance against "cheaper."  Anyway, 60 Hz eventually got settled on as the alternating frequency, both because it was convenient for the large industrial motors, electric trains, and incandescent light bulbs that were the main things using the power grid at the time and because electricity travels over a transmission line more efficiently at lower frequencies than high ones.

As every engineer knows, standards are like a candiru fish; once they're in place for awhile, there usually isn't any getting rid of them.  So in spite of the fact that it wants to murder you and is very inconvenient for most modern electronics, the fact that 60 Hz power has been around for awhile, coupled with its hard-to-beat long-distance transmission efficiency, is why we're stuck with it for the foreseeable future.  Apparently driving innocent engineers insane with flickering lights and impossible-to-get-rid-of interference is a small price to pay for saving some cash on delivery costs.

There are a couple of random exceptions to this.  The "third rail" on most subways is actually a high-voltage (~750V) DC supply, which even though it's DC is still way more than enough to kill you.  Apparently Alcatraz Prison, back when it operated, got all its electricity from an onsite DC generator too.

UPDATE 11/16/11: The wonderful Matt Taibbi, in a totally unrelated article, reminded me that Edison only fried that elephant after electrocuting a bunch of random dogs and cats didn't attract enough attention.  That Michael Vick comment seems even more appropriate now.

How Do Switched-Mode Power Supplies Work?

This is a little bit esoteric if you're not an electronics geek, but since I am I feel like there's really no excuse for not understanding it.  A little bit of background for those of you with better things to do than think about how electricity gets from the power plant into your Xbox is probably in order:

Basically all consumer electronics these days run on direct current (DC) power.  Unfortunately, for various reasons (actually just one, that it can be efficiently transmitted over long-distance power lines) all the wall outlets in your house put out alternating (AC) current.  While DC power basically just sits at a constant voltage, AC power "alternates" between the positive and negative value of its rated voltage in a sinewave pattern at a frequency of either 50 or 60 Hz, depending on where you live.  As you'd probably guess, the first step to plugging DC electronics into an AC wall outlet is to convert the AC current to DC.
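One wrinkle worth a side note: that "rated voltage" is the RMS (root-mean-square) value, so the sinewave actually swings about 1.4 times higher at its peaks.  A quick back-of-envelope in Python (US numbers assumed):

```python
import math

# US wall power is rated 120 V, but that's the RMS (root-mean-square) value;
# the actual sinewave swings between +/- the peak value, sqrt(2) higher.
V_RMS = 120.0   # volts (US nominal)
FREQ = 60.0     # Hz

v_peak = V_RMS * math.sqrt(2)  # ~170 V

def wall_voltage(t):
    """Instantaneous wall voltage (volts) at time t (seconds)."""
    return v_peak * math.sin(2 * math.pi * FREQ * t)

print(f"peak: {v_peak:.1f} V")                           # ~169.7 V
print(f"quarter-cycle in: {wall_voltage(1/240):.1f} V")  # right at the peak
```

So "120 volts" from the wall is really a wave sloshing between about +170 V and -170 V, 60 times a second.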

In the olden days, this was accomplished with a simple circuit consisting of a transformer, something called a bridge rectifier, and one or two large filter capacitors.  Wire those up the right way and you'll get something that pretty much looks like a DC voltage on the output side, with the final value depending on the transformer you used.  This tried-and-true circuit was generally crammed into a big, fat power brick or "wall wart" that plugged into AC power outlets, as shown below:
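If you want a feel for why those filter capacitors had to be so big, here's a toy simulation of the rectify-and-filter step.  All the component values are invented for illustration (an idealized 12 V transformer secondary, no diode drops), but the ripple mechanism is the real one:

```python
import math

# Toy simulation of the classic transformer + bridge rectifier + filter cap
# supply.  Component values here are made up purely for illustration.
V_PEAK = 12.0 * math.sqrt(2)  # secondary peak after an idealized 120V->12V transformer
FREQ = 60.0                   # Hz
R_LOAD = 100.0                # ohms
C_FILTER = 1000e-6            # farads -- note how big this has to be

dt = 1e-5
v_cap = 0.0
samples = []
for i in range(int(0.1 / dt)):  # simulate 100 ms
    t = i * dt
    v_rect = abs(V_PEAK * math.sin(2 * math.pi * FREQ * t))  # full-wave rectified
    if v_rect > v_cap:
        v_cap = v_rect  # diodes conduct, cap charges (idealized: no diode drop)
    else:
        v_cap -= v_cap / (R_LOAD * C_FILTER) * dt  # cap discharges into the load
    if t > 0.05:  # let things settle before measuring
        samples.append(v_cap)

ripple = max(samples) - min(samples)
print(f"output: ~{sum(samples) / len(samples):.1f} V DC, ripple {ripple:.2f} V")
```

Even with a hefty 1000 uF capacitor, the output still sags more than a volt between peaks; shrink the cap and the ripple grows in proportion.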

Olde tyme "wall wart" power supply.  Can you spot all the fire hazards in this picture?

The problems with this approach are obvious and legion.  The biggest one is that transformers are large, heavy chunks of iron wrapped in copper wire.  Additionally, if you want your DC output to be as clean as possible you'll need to use the biggest filter capacitors you can.  The result is that AC-DC (ha) power supplies tended to be really damn big, especially in situations where a lot of power was needed, as anyone who's ever tried to jam one into a power strip without covering up five other outlets knows.  A second, less obvious but more dangerous issue is that the circuit is not particularly efficient-- a lot of power gets lost as heat during the conversion process.  That means two things: that you need to make the brick even bigger to account for the lost power, and that in certain situations the brick is going to get hot.  Catch-on-fire hot, occasionally.  Add that to the fact that their ridiculous weight would often cause them to hang partway out of an outlet, exposing the 120V prongs, and it's honestly amazing that we were ever even allowed to have these things in our houses.

Enter the switched-mode power supply.  As you may or may not have noticed, over the last 5-10 years gigantic "wall warts" have been replaced by much smaller, lighter converters, or occasionally no converter at all when the conversion circuitry can be crammed into the thing you're powering itself (see previously-mentioned Xbox, etc).  We have the switched-mode power supply to thank for that.

Switched-mode power supply for my phone.  It's about the size of a fat person's thumb.

I know exactly two things about switched-mode supplies:
  • They are much, much, much more efficient than old-timey wall bricks, and also way smaller
  • They do something mysterious at high frequencies, judging by the amount of high-frequency noise they're constantly throwing out.
Noticeably absent from that list is "how they actually work."  So let's go to the almighty Wikipedia...

A common theme I'm discovering while writing this blog is "transistors are goddamn magic."  You'd think that would have registered after 11 years of being an EE student, but I'm consistently amazed at the clever things people figure out for them to do.  This is a perfect example of that.

The first step of switched-mode conversion isn't too different from a regular AC-DC converter; the AC input voltage is run through a rectifier and a filter capacitor to turn it into something vaguely DC-ish.  The two differences here are that the voltage isn't stepped down first (meaning no big fat transformer) and, because later stages will deal with it, the output voltage doesn't have to be perfectly constant, meaning you can get away with a much smaller filter capacitor than usual.

Here's where it gets weird: having just converted AC to DC, we're now going to turn around and convert it back to AC with something called a "chopper circuit," which isn't as cool as it sounds but is still pretty clever.  The chopper rapidly switches the voltage on and off to convert the DC signal back into AC, but with a way higher frequency than before.  Typical switching frequencies here are in the 10-100 kHz (1 kHz = 1000 Hz) neighborhood, usually high enough to not make audible noise but not, unfortunately, always high enough to not mess with high-end audio recording equipment.  Whatever the output frequency ends up being, it's definitely thousands of times higher than the 50/60 Hz input frequency.

So now we've got a high-frequency AC voltage that's still at the same level as the voltage from the wall.  What's that buy us?  I'm glad you asked!  Here's where we'll throw in our transformer to step down to the voltage we eventually want.  The advantage of doing it now is that we're operating at a much higher frequency, which for various physics-y reasons means we can get away with making the transformer a lot smaller and more efficient.  Same deal with the rectifier and filter circuitry used to convert back to DC; the high frequency means you can get away with using much smaller-value components than you would when converting at 60 Hz.
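Those "various physics-y reasons" mostly boil down to the transformer EMF equation, V_rms = 4.44 * f * N * A * B, which says that for a fixed voltage, turns count, and core flux density, the required core cross-section shrinks in proportion to frequency.  The numbers below are toy values (same turns count at both frequencies, a made-up flux density) chosen just to show the scaling, not to design a real transformer:

```python
# Transformer EMF equation: V_rms = 4.44 * f * N * A * B_max.  Rearranged for
# the core cross-section A, everything else held fixed, A scales as 1/f.
def core_area(v_rms, freq, turns, b_max):
    """Required core cross-section (m^2) from the transformer EMF equation."""
    return v_rms / (4.44 * freq * turns * b_max)

# Toy numbers: same voltage, turns, and flux density at both frequencies,
# purely to isolate the frequency effect.  Real designs juggle all of these.
V, N, B = 120.0, 100, 0.3  # volts, turns, tesla
a_60hz = core_area(V, 60.0, N, B)
a_50khz = core_area(V, 50e3, N, B)
print(f"60 Hz core:  {a_60hz * 1e4:.1f} cm^2")
print(f"50 kHz core: {a_50khz * 1e4:.3f} cm^2 ({a_60hz / a_50khz:.0f}x smaller)")
```

Crank the frequency from 60 Hz to 50 kHz and the core area drops by a factor of about 800, which is why the phone-charger transformer is the size of a sugar cube instead of a brick.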

You can get even cuter with it if you want; really dirt-cheap switched-mode supplies don't even bother with a transformer, they just vary the duty cycle of the high-frequency switching and then run it through a circuit that will give a DC output that varies with the duty cycle of the AC input.
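What's described there behaves, in the idealized continuous-conduction case, like a textbook buck converter: the average output voltage is just the input voltage times the switching duty cycle.  A minimal sketch (the input voltage is a made-up rectified-line ballpark figure):

```python
# Ideal buck-converter relation: average output = duty cycle * input.  This
# ignores diode drops, switching losses, and ripple -- it's just the concept.
def buck_vout(v_in, duty):
    """Ideal buck converter output: Vout = D * Vin (0 <= D <= 1)."""
    assert 0.0 <= duty <= 1.0
    return v_in * duty

v_in = 170.0  # roughly the rectified US line peak, for illustration
for duty in (0.03, 0.07, 0.5):
    print(f"D = {duty:4.2f} -> {buck_vout(v_in, duty):6.1f} V out")
```

So by nudging the duty cycle up and down, the control loop can hold the output at 5 V or 12 V or whatever, with no transformer anywhere in sight (the downside being no galvanic isolation from the line, which is why these only show up in really cheap gear).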

So the guiding principle here is that, if you can crank up the frequency of an AC voltage, you can use a lot less hardware to get it converted down to DC, and do it a lot more efficiently to boot.  The big downside of switched-mode power supplies, aside from the aforementioned noise, is that because you've messed with the signal so much the DC voltage output is usually less "clean" than what you'd get from an old-school linear supply and will have some ripples in it.  You can usually get around this pretty easily by running the output through a voltage regulator (another transistor-based miracle device), but it's significant enough that you'll still see old-timey power supplies on things where a constant supply voltage is critical, like audio equipment and science gear.

Interesting historical note: one of the first instances of a switched-mode power supply being used in consumer electronics was the Apple II way back in 1977.  Oddly, they didn't start popping up in large numbers until well over a decade later.  And for those of you currently waving your "#1 Steve Jobs" foam fingers, it was actually a dude called Rod Holt who designed it, presumably in between porn shoots.

How Do Mosquitos Decide Who To Bite?

For whatever reason, I haven't been bitten by a mosquito in the continental US since I was about 15 years old.  My wife, on the other hand, gets treated like a mobile buffet by mosquitos and related bloodsuckers any time she leaves the house after dark.  The bite-affinity thing seems at least a little bit hereditary-- one of my grandfathers had the same no-mosquito-bites deal going while he was alive, and various other members of my family seem to have varying degrees of it (mine seems to work the best, a rare bright spot in the scorched and time-bomb-strewn wreckage of my genome). One evening while I was enjoying a lakeside sunset and my wife was enjoying having most of the blood removed from her body by carnivorous insects though, I got to wondering 1) why they bite some people and not others, and 2) as a corollary, how they locate prey.

I'm actually insecure enough to be slightly jealous that they like her so much better.

Because I'm an electrical engineer and have an embarrassingly simplistic understanding of nature and biology, my first thought was that they probably locate prey via infrared vision.  Mosquitos hunt at night, so visible light isn't going to be that useful, and their prey (any endothermic animal, such as my wife) light up like light bulbs if you're seeing them in the right range of infrared.  The hypothesis made some superficial sense-- I've got a lowish standard body temperature of about 97.9, while my wife is closer to the "normal" 98.6.  It's also at least anecdotally true that mosquitos like to bite hot, sweaty people more.  Giving the IR hypothesis a little more thought though, differentiating between infrared sources with a temperature variation of less than one degree Fahrenheit would be quite a feat, definitely way beyond the capability of any sensor we've ever built that isn't way bigger than a mosquito.  That's not necessarily a deal-breaker; nature is annoyingly full of stuff that science can't replicate (yet), but it did make the whole thing questionable.
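For what it's worth, here's the back-of-envelope that killed the idea for me: treating skin as a blackbody radiator (a rough but standard approximation), total radiated infrared power goes as T^4 in absolute temperature, and our 0.7F difference barely registers:

```python
# How big does a 0.7 F body-temperature difference look in infrared?
# Treating skin as a blackbody (rough approximation), total radiated power
# scales as T^4 (Stefan-Boltzmann law), with T in kelvin.
def f_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

t_me = f_to_kelvin(97.9)    # ~309.8 K
t_wife = f_to_kelvin(98.6)  # ~310.2 K
power_ratio = (t_wife / t_me) ** 4
print(f"she radiates only {100 * (power_ratio - 1):.2f}% more IR power than me")
```

A half-percent difference in radiated power is what a mosquito-sized IR sensor would have to resolve, which is why the infrared hypothesis smelled fishy even before I looked anything up.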

As usual, my "infrared hypothesis" was absolutely dead wrong.  Like most bugs and other small critters, it turns out that mosquitos do most of their sensing and hunting via sophisticated chemical detection, also occasionally known as "smell."  As anyone who's ever used a public restroom knows, the olfactory (smell) sense is about as good as it gets sensor-wise in terms of bang for the buck-- a single olfactory receptor can be triggered by a single molecule of some chemicals, even in humans (who have kind of low-end olfactory systems compared to most other things out there).  Mosquitos' senses of smell are tuned to detect very specific things; the important ones for our purposes are carbon dioxide (exhaled by everything with lungs, as well as secreted through the skin to varying degrees) and various organic molecules that show up as trace chemicals in human sweat.  So people who sweat more are going to get bitten more, not because they're showing up better in infrared but because they're secreting way more of the things that mosquitos use to locate prey.  In general, I don't sweat a whole lot compared to my wife, which probably accounts for our difference in popularity among the bloodsucking set.  In addition to demonstrating (yet again) my general ignorance of the world around me, my little odyssey of discovery here is an excellent example of how easy it is to come to completely the wrong conclusion about something using only a limited set of data and a few bad assumptions.

Interestingly, some more reading appears to indicate that blood type also plays a role in mosquito affinity.  That seems counterintuitive on its face, since the mosquito obviously doesn't know what kind of blood you have til it's already biting you, but apparently different blood types secrete different amounts of the chemicals mosquitos go looking for.  Again, this roughly correlates with my anecdotal observations-- I'm type O-something, which means I've got the lowest possible amount of extra stuff in my blood.  My wife is either A or B, I'm pretty sure; theoretically, she'd be in even worse shape if she was AB (I have no idea if the positive/negative factor matters).  Another, more disturbing metric is blood-alcohol level-- an elevated one apparently increases your likelihood of getting bitten by quite a bit, either because alcohol in your blood slightly alters its chemical composition (and probably as a result your sweat-signature) or possibly just because drunk people tend to sweat more.  Because science is important, I'll test this hypothesis by sitting out in the backyard with a 12-pack as soon as it's warm enough for mosquitos and other living things to be outside where I live.

How is the "feels like" temperature calculated? (part 2 - windchill)

So the first part of this series explained heat index, a semi-useful way of quantifying the effect of humidity on the temperature your body perceives.  Unfortunately heat index really only has any relevance when it's hot enough for you to be sweating, so to perform the same type of correction in cold weather we have to go to a different method, in this case the dreaded Wind Chill Factor.

The idea behind wind chill is simple but thermodynamically kind of interesting.  Basically, heat likes to diffuse into places where there's less heat; the bigger the difference in heat between the two places, the faster this diffusion occurs.  Your body is continually dumping heat into the (much cooler) atmosphere around you via various mechanisms, which heats the air immediately around you.  If you're standing in a totally wind-free area, that heated air around you will pretty much stay where it is, which means you're surrounded by air that's warmer than usual because of the heat you've been dumping into it.  The warmer that air gets, the smaller the temperature difference between your body and the air gets, and as a result the rate at which your body can dump heat into the surrounding air slows.  This can be quantified with a bunch of partial differential equations but I don't care and you don't care, so we'll stick with "conceptual explanation" for now.
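For the morbidly curious, the conceptual explanation boils down to one line of math, Newton's law of cooling: heat-loss rate is proportional to the temperature difference.  Here's a crude sketch with completely invented constants, just to show the warmed-boundary-layer effect:

```python
# Crude sketch: your body warms a thin air layer around it; wind keeps swapping
# that layer out for fresh cold air.  All constants are invented for illustration.
def heat_lost(windy, minutes=30):
    t_skin, t_air_far = 93.0, 10.0   # skin and ambient temps, degrees F
    t_layer = t_air_far              # the boundary air layer starts cold
    lost = 0.0
    for _ in range(minutes):
        q = 0.2 * (t_skin - t_layer)  # Newton's law of cooling, made-up constant
        lost += q
        t_layer += 0.5 * q            # the layer warms from the heat you dumped
        if windy:
            t_layer = t_air_far       # wind replaces the layer with cold air
    return lost

calm, windy = heat_lost(False), heat_lost(True)
print(f"calm: {calm:.0f} units lost, windy: {windy:.0f} units lost")
```

With no wind, the boundary layer warms up toward skin temperature and heat loss nearly stalls; with wind, you're losing heat at full blast the entire time.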

There's a pretty easy way around this that you've probably already figured out: blow the hot air away.  Anyone who's ever stood in front of a fan or blown on something to cool it off knows that stuff can dump heat faster when the air around it is being continually replaced by cool air before it can heat up too much.  This can be all kinds of useful on hot days (see aforementioned fan), but when it's freezing cold and your body is desperately trying to hold onto all the heat it possibly can, waves of fresh, subzero air blowing over you will make it substantially harder than usual to keep warm (see handy illustration).

Enter the wind chill factor, yet another attempt to quantify the perceived effect of other factors (in this case the wind stealing all your heat) on the temperature you actually perceive.  Like the heat index, the wind chill factor was created with the best of intentions (making sure school kids don't freeze to death at the bus stop and such), and also like the heat index it's now mainly used to make weather reports more dramatic-sounding.

There are actually several dueling schools of thought on how windchill should be calculated, because apparently people have time on their hands these days.  The first experiment to measure the windchill was developed, probably out of necessity, by two dudes in Antarctica around the start of WWII.  Basically they hung a bottle of water outside their tent or igloo or whatever next to a wind speed gauge, timed how long it took the bottle to freeze, and correlated that with how fast the wind was blowing.  They used this data to calculate something called the Wind Chill Index, basically just a number that tells you how likely you are to freeze to death if you go outside.  It's a pretty simple equation that I'm guessing is just a curve-fit to their data.

Because Americans are superstitious and distrustful of science and its fancy "units" and "equations," they were generally confused by the Wind Chill Index.  Someone got it into their head to "solve" this problem by defining a wind chill equivalent temperature, which is what we use today.  The first models used to calculate this were pretty dumb, and actually seem to have pissed off the dudes who invented the original Wind Chill Index quite a bit, but our current method (based on the heat loss of bare skin facing into wind of a given velocity) is slightly more sophisticated.  It's still semi-empirical, only valid in certain temperature/wind speed ranges (in this case below 50F and above 3 mph), and operates under a lot of assumptions about clothing (hilariously, there is still some debate among wind chill experts as to whether the calculation should assume a naked dude or a dude wearing "appropriate clothing" in an open field.  As in most cases, I am totally on the naked side here) and activity level and such, but it has familiar-looking units and gives at least a reasonable approximation of what you're walking into when you leave the house in January, so we'll take it, I guess.
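In case you want to double-check the local weather-drama machine, the current NWS formula (temperature in degrees F, wind speed in mph) is simple enough to compute yourself:

```python
# The current NWS wind chill formula (T in deg F, wind speed V in mph).
# Only defined for T <= 50 F and V >= 3 mph.
def wind_chill(t_f, v_mph):
    assert t_f <= 50 and v_mph >= 3, "outside the formula's valid range"
    v16 = v_mph ** 0.16
    return 35.74 + 0.6215 * t_f - 35.75 * v16 + 0.4275 * t_f * v16

print(f"{wind_chill(30, 20):.0f} F")  # 30 F with a 20 mph wind feels like ~17 F
print(f"{wind_chill(0, 35):.0f} F")   # 0 F at 35 mph feels like ~-27 F
```

Note the weird fractional exponent on the wind speed-- a dead giveaway that this whole thing is a curve-fit to experimental data rather than something derived from first principles.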

If you like math and hate free time, Wikipedia has all the equations used to calculate this stuff, as well as most of the information I've cleverly paraphrased here.

How is the "feels like" temperature calculated? (part 1 - heat index)

This question has recently become of great interest to me, having moved from the lovely, temperate Bay Area to a part of the world where the weather is actively trying to murder me for nine months a year (fun Minnesota fact: the hot/muggy summers are actually worse than the Road-esque winters in many ways!)  Even so, I never really put much thought into what the "feels like" temperature field in ForecastFox signified, besides an excuse to bitch about the weather even more than I already do.

There are two major methods of estimating what temperature it "feels like" outside, based on the actual temperature, relative humidity, wind speed, and various other things.  When it's hot, your go-to metric is the Heat Index; in the cold, it's the Wind Chill Factor.  I'll cover heat index today, and probably get around to wind chill tomorrow or sometime later this week.

The heat index was originally known by the inexplicably-awesome term "humiture," since it mostly bases its calculation on temperature and humidity.  However, since there's no way to say the word "humiture" without sounding like the language center in your brain just broke, the National Weather Service started calling it the Heat Index when they adopted it as a standard around 1980 (Canadians, who always kind of sound like their language center is broken anyway, use a similar-but-not-identical metric called the "humidex" to this day).

The basic heat-index calculation is both conceptually and mathematically simple.  The basic deal is that your body has a built-in, somewhat gross air-conditioning system known as "sweating", where it basically shoves a bunch of salt water out of your pores and lets it evaporate, cooling your skin via evaporative cooling.  Since evaporation rate is inversely proportional to the amount of ambient water vapor already in the atmosphere, this otherwise pretty clever setup isn't going to work as well on a humid day.  So as a result of your body's reduced ability to cool itself, your perceived body temperature is going to be quite a bit higher on a humid day vs. a dry one, all other things equal.  I've illustrated this in my chosen medium of MSPaint below.

The heat index, before it became a way for bored weather forecasters to scare people, was designed to take this into account for not-dropping-dead reasons.  Basically, the heat index is the air temperature scaled by the ratio between the partial pressure of water vapor in the air (humidity/dewpoint, essentially) and an arbitrarily-selected baseline vapor pressure of 1.6 kPa, or about 0.016 atmosphere.  So in essence, if the humidity is above that threshold value, the heat index will be higher than the actual air temperature; below it and it ends up lower.

Meteorology is where linearity goes to die though, so there are a whole bunch of caveats and exceptions to this seemingly-simple relationship depending on what the heat/humidity conditions are like.  For all intents and purposes though, the heat index relationship is considered to be useful at temperatures above 80F and relative humidities above 40%.  The effect of wind is ignored entirely, so a rigorous (ha ha) "feels like" temperature measurement will take both heat index and wind chill into account (which you can't do because they're valid in totally different temperature ranges -- see tomorrow's post for details).

There are a whole bunch of assumptions baked into the heat index relationship, mostly about human-body stuff like mass, height, and blood thickness, as well as some sketchier assumptions like "amount of clothes worn" and "level of physical activity."  Since anyone who has met more than one other human being knows that there is quite a bit of variation in all of these things among the population, you've probably gathered by now that the whole "calculation" isn't a whole lot better than a random guess unless you're a statistically-average adult human from 1980.  Wikipedia, in its inimitably understated style, says that "significant deviations from these [average parameters] will result in heat index values which do not accurately reflect the perceived temperature."  Still, it was created with the best of intentions (the aforementioned keeping-people-from-dropping-dead-of-heatstroke) and can still be more useful than the raw temperature when you're trying to decide whether or not today is a good day to run a random marathon.
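If you want to grind through the numbers yourself, the relationship the NWS actually computes these days isn't the simple pressure ratio; it's a big ugly regression (usually credited to Rothfusz) fitted to Steadman's original tables.  A sketch in Python:

```python
# The Rothfusz regression the NWS uses for the heat index.  T is air
# temperature in deg F, R is relative humidity in percent; only meaningful
# for roughly T > 80 F and R > 40%.
def heat_index(t, r):
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

print(f"{heat_index(90, 70):.0f} F")  # 90 F at 70% humidity feels like ~106 F
```

Nine terms of pure curve-fit, zero physical insight-- meteorology really is where linearity goes to die.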

FUN READER EXERCISE: The heat index was developed shortly before America became a nation of disgustingly overweight slug people.  Try recalculating the heat index using the difference between median adult weights in 1980 vs. now and find out what it "feels like" to ride a Rascal down the street to Burger King on a hot summer day!  Or just get fat and try it yourself, I guess, if you don't like math.

Why Do Electrical Plugs Have Holes In Them?

This is one of those questions that's always low-level bothered me, but never enough for me to actually expend effort finding out what the answer was (in cases like this I'm usually content to just assume the answer is "some dumb legacy thing" and get on with my life, but I started this blog partially so I'd stop doing things like that).  At least here in the US, 99% of consumer-grade electrical plugs have ~2mm wide holes bored into the hot and neutral prongs.  See the technical diagram below if you don't know what I'm talking about:

They don't serve any purpose that I can think of, but since it must cost more to manufacture plugs with them than without them they must have at least a mildly compelling reason to exist, right?  And that reason probably isn't "to make it easy to attach wires to them for incredibly unsafe DIY electrical work" either, although they are handy for that.

You would think that being a person who gets paid to do things involving electronics I'd have taken apart at least one electrical outlet in my life, or at least smashed it open to see what was inside (my standard method for doing science from ages 8-31).  Apparently not though, because if I had I'd know there were little leaf-spring-loaded knobs inside modern electrical sockets that lightly latch into the plug-holes when you plug something in.  They don't really "lock" as much as "add a little extra force holding the plug in," so you can do things like plug into a ceiling socket without gravity unplugging you.  The contacts inside an electrical socket are also tapered so they'll "grab" the plug blades a little bit, but that will eventually wear out.  If you've ever tried to keep a hole-less plug plugged into an older/crappier outlet, or even a holed plug plugged into an outlet so old it doesn't have the knob assembly, you'll probably notice that they fall out much more easily (I just tested this a second ago; handily, most AC USB chargers don't seem to be made with holes in the plugs these days, probably because they're light and have no hardwired cord).

Amazingly, even Wikipedia didn't know this one.  I had to go to some random techie message board to figure it out, unfortunately depriving me of the opportunity to smash open an outlet and empirically determine the answer.  I might do it anyway though, just to be sure.

How Do White LEDs Work?

This one is really pretty embarrassing.  LEDs, being very simple devices made of semiconductors, are about as in my wheelhouse as anything you can buy at Wal-Mart these days.  Even so, until I got around to looking it up a few minutes ago I had no idea how white (polychromatic) LEDs made light in a whole bunch of wavelengths at once.

First off, a quick primer on how LEDs (Light Emitting Diodes) work.  I've simplified a lot here for lucky folks who haven't been subjected to solid-state physics, but you'll get the gist. Use the handy diagram below to follow along.

Simplified energy band diagram of a semiconductor.  If you've never seen one of these before I sincerely envy you.

If you're willing to ignore some quantum mechanics (being an engineer, I am always willing to ignore some quantum mechanics) your basic single-color LED is a pretty simple thing.  All semiconductors have something called a bandgap, which is kind of a condemned energy region where electrons aren't allowed to be (see previously-referenced handy diagram).  Most of the time, electrons hang out in an energy regime below the bandgap called the valence band (1), because like everything else in nature they are lazy (you can work out a staggering amount of physics from the fact that basically everything in the universe, from electrons to stars to your author, just wants to get to the lowest possible energy state and stay there).  Giving the electrons a metaphorical kick in the ass (in the form of some injected energy, like electrical current) will knock a few of them up across the bandgap and into the region above it (2), which for various reasons is called the conduction band.

Since it took energy to get there, our slacker electrons aren't going to want to stay in the conduction band any longer than they have to.  A lot of them will eventually drop back down across the bandgap into the valence band (3).  Doing this causes them to lose energy, in this case an amount of energy exactly equal to the bandgap (this entire process is not unfamiliar to anyone who's ever tried to drag me out of bed in the morning, minus the part where I emit light).  If you're using the right kind of semiconductor, that energy will be released as a photon (light), with a wavelength corresponding to whatever the bandgap energy was (4).  So now you've got (and I'm vastly oversimplifying the explanation here, caveat emptor) a semiconducting device that emits light equal to the semiconductor's bandgap energy when you run enough current through it.  Neat, right?

So what color would you like your LED to be?  You can set that by your choice of semiconductor material: if you want a device that emits low-frequency (red, infrared) light, pick one with a low bandgap energy; for a blue or UV LED, go with a high-bandgap semiconductor.  Through the magic of alloyed compound semiconductors, you can design materials with pretty much any bandgap you want (within reason) these days, so have fun.
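If you want to play along at home, the bandgap-to-color conversion is just lambda = hc/E, which conveniently works out to about 1240 nm divided by the bandgap in electron-volts.  Plugging in textbook bandgap values for a few common LED materials:

```python
# Photon wavelength from bandgap energy: lambda = h*c / E.  With E in
# electron-volts and lambda in nanometers, h*c is about 1240 eV*nm.
def bandgap_to_nm(e_gap_ev):
    return 1239.84 / e_gap_ev

# Standard textbook bandgap values (room temperature, give or take).
for name, eg in [("GaAs", 1.42), ("GaP", 2.26), ("GaN", 3.4)]:
    print(f"{name}: {eg} eV -> {bandgap_to_nm(eg):.0f} nm")
```

GaAs lands around 870 nm (near-infrared, your TV remote), GaP around 550 nm (green), and GaN down near 365 nm (UV/deep blue)-- which is exactly why GaN mattered so much for the white LEDs we're about to discuss.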

You may have already caught the problem with white LEDs at this point.  If not, the clue is in the phrase "bandgap energy," singular.  As in one specific energy.  You can make the bandgap be one of many different energies, and therefore get lots of different possible colors, but at the end of the day you have to pick one energy, which means picking one color.  As a result, LEDs tend to be very monochromatic, which is usually a pretty useful property.  Unfortunately white light, by definition, is a combination of pretty much every energy/wavelength/color in the visible spectrum, so to make a white LED you'd need to somehow design a semiconductor with many bandgap energies at once.  For various complicated reasons, this is impossible.  So how do they work?

The answer, naturally, turns out to be "by cheating DUH."  "White LED" is a bit of a misnomer, as the diode itself is not emitting white light.  Instead, you've got a device consisting of a diode tuned to emit blue or ultraviolet (high-energy) light, coated in a material called a phosphor.  You might remember phosphors from the old-timey pre-flatscreen TV you finally just got around to throwing out; that thing worked by zapping a phosphor-coated piece of glass with an electron beam to make the phosphors light up.  The canny reader will have guessed by now that phosphors will take energy thrown at them and re-radiate it as light.  Again, like LEDs, you can tune what that light looks like by the composition of the phosphor, but the big difference here is that you're not limited to a single color; phosphorescence can be broadband (i.e. white) if you want it to be.  One catch: phosphors re-emit at lower energies than they absorb (the technical term is the Stokes shift), which is why you need a diode that emits high-energy (blue or UV) photons if you want the re-radiated light to cover the whole visible spectrum.

So put it all together and you've got a blue or UV LED coated in a phosphor material tuned to emit light at a whole shit-ton of visible wavelengths.  Blue/UV light injects energy into phosphor, phosphor responds by re-radiating the energy as white light, QE-fucking-D.  For added fun, you can tune the composition of the phosphor to vary the spectrum, in case you want more of a natural yellow than a true white, for example.

So to summarize: white LEDs are actually phosphor-coated blue LEDs, a handy workaround for the fact that you can't generally get LEDs to emit more than one wavelength of light.  I actually don't feel as bad for not getting this now, since it turned out to be basically a hack rather than a gross misunderstanding of physics on my part.

As always thanks to Wikipedia for educating my overeducated ass.

How Does Soap Work?

I realized this morning while taking a rare shower that, despite being able to at least pass for a materials scientist sometimes, I have no damn clue why and how soap cleans such a wide variety of stuff.

The answer (thanks Wikipedia!) turned out to actually be kind of complicated, which made me feel better.

Soap, basically, is made by reacting a base (usually an alkali hydroxide like lye) and some kind of fat, in a process awesomely called "saponification".  The type of base used in the manufacturing determines the consistency of the soap (NaOH will give you a bar of soap, while KOH gives you liquid soap).

The fatty part is the thing that does the actual cleaning work, although by the time it's soap it isn't really fat anymore.  Saponification turns the fat into salts of fatty acids: long molecules with a hydrophilic (attracted to water) head and a lipophilic (attracted to various organic molecules) tail.  When you mix soap and water, these molecules cluster into little pockets called micelles, with the hydrophilic heads facing outward into the water and the lipophilic tails pointing inward.  Oil, grease, fats, and other stuff that ordinarily wouldn't wash away in water due to hydrophobicity (oil and water don't mix, remember?) will get trapped inside the micelles, which the water then easily washes away.  So basically soap is a clever workaround to the fact that lots of lipid-based stuff doesn't mix with water.  Somewhat counterintuitively (to me), the base part mostly gets used up during saponification; the leftover sodium or potassium just tags along as the counterion that makes the fatty-acid salt a salt.

This is obviously a pretty oversimplified picture and probably wrong on some details, but you get the idea.