Tuesday, January 31, 2012

Why Do Crows Flock In the Winter?

I'm an amateur birdwatcher, which in lazy-person-speak means "sometimes there are birds outside, which I look at if I don't have to turn my head that much."  Despite a complete lack of commitment to what is evidently a fairly serious hobby, and the fact that I spend most of my time in biodiversity-free metropolitan areas, I still occasionally get to see some weird bird-related stuff, like the red-tailed hawk I ran across back when I was an undergrad that couldn't take off because its foot was stuck in a pigeon carcass.  The latest is something I've observed for two straight winters in our not-quite-urban Minneapolis neighborhood.  Every night for the past couple of months, thousands of crows descend on the skeletal trees on our block around twilight.  They spend about an hour blackening the trees, yelling at each other, driving my cat insane, and generally adding to the postapocalyptic vibe that comes with being in the dead of a Minneapolis winter.  Once they've accomplished whatever it is they came to do (attempt to poop on every car, house, and unlucky human or animal caught outside on the block?  Discuss who to endorse in the Republican primary?  Plot the downfall of the human race?) they all fly off en masse, which is sort of terrifying in itself.  Like I said, the whole process takes about an hour, and only seems to happen in the winter.  What are they up to?  Is it as simple as trading tips on where the day's most accessible garbage cans are located or something much more sinister?  I feel like it's in my best interest to figure this one out.

This is every tree in our neighborhood right now
The first thing I learned while researching crow behavior is that a group of crows is called a "murder," which had the understandable effect of making the project seem even more urgent.  You don't generally call something a "murder" when it has only your continued happiness in mind, right?  Luckily that turned out to be a false alarm; groups of crows only got that name because they have a habit of occasionally swarming and killing what I can only imagine are the more irritating members of their group.  While that's a bit weird and unsettling, I'm not a crow so it didn't really seem like my problem.

The second thing I learned about crows is that they're incredibly intelligent, social birds that do things like construct their own tools and may even have a rudimentary language.  Whether that's a good thing or a bad thing at this point is still up for debate (you know who else was incredibly intelligent, social, and constructed his own tools?  Hannibal Lecter), but it means their social behavior is somewhat more complex than, say, your basic stupid pigeon.

While crows tend to be mostly solitary, territorial birds in the summer, winter will find them congregating in "roosts" of a few hundred to a thousand or more birds at night.  The reasons for this are not entirely understood, but common sense would dictate that it's some combination of predator protection (in the winter they don't really have the option of sleeping camouflaged in a leafy tree), warmth, and congregation near a reliable food source in order to get a guaranteed breakfast-- in other words, all the usual reasons animals hang out together.  There may also be a social/sexytime component to the behavior; winter isn't really mating season, but a roost of thousands of crows wouldn't be a bad place to find potential mates if you're a single crow (interestingly, crows appear to mate more or less for life). 

So OK, fine, crows are roosting in my neighborhood.  Except that they're not-- like I said, they usually hang out for an hour or so jabbering about whatever, then all head off en masse.   This is apparently also normal, well-observed crow behavior-- crows from all over the city will come to a designated "meet-up" spot, hang out for a bit until everyone gets there, then all fly to the final roosting site, which may consist of dozens of these meet-up groups coming together in a pretty spectacular display of sky-darkening feathers.  None of the crow-related sites I found in 15 minutes of Googling had an explanation for this, although it's likely a safety-in-numbers thing again; crows in a given area make the sometimes-longish flight to the roost in a group, rather than by themselves where they'd be easy prey for owls and hawks.  It's also entirely possible that they just like socializing in the evening before they go to sleep, since they apparently get settled in and fairly quiet not too long after reaching the roost proper.

So it would appear that the crows' nightly invasion of our neighborhood is just typical crow behavior and not indicative of some larger, more insidious agenda.  As crow invasions go, it's also pretty minor (like I said, an eyeballed count puts their number in the thousands) since it only represents a fraction of the final roost population; there are anecdotal reports all over the internet of American Crow roosts containing hundreds of thousands, or even millions, of individuals.  Interestingly, crow roosts are showing up more and more in urban and semi-urban areas like mine, possibly because the lack of natural camouflage and large amount of artificial light help them spot predators more easily, or possibly just because if, like crows, you're willing to eat pretty much any damn thing, a garbage-strewn American city is basically a gigantic buffet. 

Even though this all gets filed under normal crow behavior, it does seem worth noting that crows are:
  1. Congregating in groups of thousands to millions, via a surprisingly orderly and organized process, on a daily basis, for reasons science is currently unclear on
  2. Making a lot of noise during said congregating, which may or may not constitute language
  3. Increasingly doing both of these things in places with lots of humans
I'm not saying they're planning to kill us all, but I'm also not saying they aren't.

I'm sure they have only your best interests at heart
Thanks to Wikipedia, crows.net, Cornell University and various other sites for the information.

Here's a handy map of major known crow roosts, just in case. 

Thursday, January 26, 2012

How Hard Is It To Die Of An Electric Shock?

Being both an electrical engineer and a person with questionable judgement, the occasional high-voltage zap is something I've learned to accept as an unavoidable occupational hazard.  I've suffered dozens of 120-volt AC (wall-outlet-level) shocks, occasionally even directly through the chest cavity (traveling from one hand to the other), as well as more low-voltage and DC shocks than I could easily count, with no noticeable ill effects.  The highest voltage I've been bitten by, so far, was a 9000VAC neon-sign transformer, which zapped me (again, through the chest cavity) during an unfortunate attempt to build high-voltage capacitors out of empty malt liquor bottles and salt water in college (note to budding scientists: if science requires empty malt liquor bottles, that doesn't necessarily mean you have to drink all the malt liquor yourself before getting started).  Aside from firing every muscle in my upper body at once and leaving me sore for days, that one didn't hurt me either.  From these experiences, I can draw two possible conclusions:

  1. The human body is more resistant to electric shocks than I'd previously been led to believe.
  2. Unlike you puny humans, I cannot be killed by electricity.
In the interest of figuring out which of those it is (smart money's on #2, obviously), I decided to look into what exactly it takes to deliver a lethal electric shock to an average-sized adult human. 

Conventional wisdom about electric shocks is that "it's the current, not the voltage" that hurts you.  That's true, at least in the strictest physical sense; voltage is just a measure of potential energy (per unit of charge), while current is a measure of the actual number of electrons per second using your body as a transmission line.  A reasonable analogy would be getting a rock dropped on your head; while the rock had to be up above you (giving it gravitational potential energy) for it to happen, what actually made it hurt is that the rock got dropped from up there and hit you in the goddamn head.  Same deal with electricity; all the voltage in the world doesn't mean a thing if it isn't driving any current, just like the mere act of someone holding a rock above your head isn't going to give you a concussion.  So yeah, you generally need to run some current through your body to do damage, which raises the question "how much?"  And that's where things get complicated.

This otherwise well-intentioned public service announcement from Electric Six critically misunderstands the voltage/current distinction.

Every EE undergrad knows Ohm's law, which states that V=IR, where V is voltage (in volts), I is current (in amps) for some reason, and R is the resistance of your circuit to having current passed through it, in units called ohms.  So to figure out how much current is passing through something, you just take the voltage across it and divide by the resistance.   It's a simple, handy equation that will tell you what's going on in nearly any electrical circuit (until you start throwing semiconductors into the mix, at which point it all goes to hell, but that's neither here nor there), providing you actually know the value of R.  When you're working with basic circuits (generally containing resistors that have their values printed on the side) that's a no-brainer, but it gets weird when you start trying to add more complex elements like "the sack of meat and water that is a human body" to the mix.
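
In code form, because that's apparently how my brain works these days (nothing here but the equation above, rearranged to solve for current; the 9V-battery example is just for illustration):

    # Ohm's law, rearranged: I = V / R
    def current_through(voltage, resistance):
        """Current (amps) driven by 'voltage' (volts) across 'resistance' (ohms)."""
        return voltage / resistance

    print(current_through(9.0, 1000.0))   # a 9V battery across 1000 ohms -> 0.009 amps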

The most basic possible electrical circuit.  A voltage V is applied to a resistance R, the value of which decides how much current (I) will flow through the circuit.  Nobody knows why the symbol for current is I so don't ask. 

The resistance "seen" by a high voltage trying to pass current through your body can vary wildly, depending on the path it takes, how wet your skin is, damage to the skin already caused by current, and lots of other factors.  The inside of your body, being mostly water and electrolytes, has a pretty low resistance (<500 ohms), so the resistance of the entire "meat-resistor" is going to depend in large part on the skin resistance.  While a totally dry-skinned person should in principle have a resistance in the 100,000 ohm neighborhood (meaning it would take 100,000 volts to drive an amp through you), natural moisture on the skin usually brings that down to something in the neighborhood of 1,000-5,000 ohms in most situations (it varies by as much as 2-3X by person), and fully wet skin can cause it to be even lower.  So a reasonable, low-end estimate for the resistance of a human body is probably about 1000 ohms.  Keep that in mind during the next part.

As Edison so ably demonstrated back at the turn of the last century, low-frequency alternating current (AC, the kind that comes out of our walls) is by far the most likely type of current to electrocute you.  That's because the frequency of the wall current (either 50 or 60 Hz, depending on where you live) is close enough to your heart's own rhythm to scramble your sinoatrial node (the heart's pacemaker) without much trouble, throwing the heart into a useless quivering state known as fibrillation.  Once your heart is in fibrillation it's not going to get back into a working rhythm without a defibrillator or some very good CPR; basically, you're pretty screwed.  As a result, you really don't need much of this stuff to die; the "death current" is generally considered to be in the range of 0.1 to 0.2 amps.  To put that in perspective, that's 100-200V if we use our 1000-ohm estimate of the resistance of a body.  The standard home line voltage in the US is 120V, so that's more than capable of delivering a lethal shock under the right conditions.  Oddly enough, AC current above 0.2 amps usually isn't lethal, since that much current will cause all your heart muscles to "clamp," preventing the heart from going into fibrillation by just plain stopping it until the shock ends.  Obviously stopping your heart is bad and will eventually kill you, but if the shock is short-duration your heart will probably restart normally afterward without the need for CPR or anything.
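
To make that concrete, here's the same Ohm's-law arithmetic for a 120V wall shock using the rough skin-resistance estimates from earlier (ballpark numbers for illustration only, not a safety calculator):

    # Rough shock-current estimates at US wall voltage, using the resistance
    # guesses from a couple of paragraphs back
    WALL_VOLTAGE = 120.0                 # volts, US household
    FIB_LOW, FIB_HIGH = 0.1, 0.2         # amps; the fibrillation "death current" range

    for label, body_resistance in [("bone dry skin", 100_000),
                                   ("typical skin", 1_000),
                                   ("soaking wet", 500)]:
        current = WALL_VOLTAGE / body_resistance   # Ohm's law again
        if current < FIB_LOW:
            verdict = "below the fibrillation range"
        elif current <= FIB_HIGH:
            verdict = "squarely in the fibrillation range"
        else:
            verdict = "above it (clamping territory)"
        print(f"{label}: {current * 1000:.0f} mA -- {verdict}")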

DC current, which is "always on" and has no frequency associated with it, will have a much harder time killing you but can still get the job done.  The end result is similar; enough current = stopped heart.  In this case the clamping (seizing of all the heart muscles) effect we also see at high AC currents is the culprit; too much DC current will clamp your heart and stop it, which as most people know will eventually make you die.  Fortunately, it takes way more DC current to cause clamping than it does AC current to induce fibrillation; typical fatal DC currents are usually a factor of two or more higher than AC (so 0.2-0.5 amps), meaning the associated voltage has to be about twice as high assuming identical skin resistance.  Clamping, as I previously mentioned, is also a reversible condition; if you can get yourself off the circuit before your heart is off long enough to do damage, it'll probably restart and you'll be fine. Even so, a modest DC voltage (a good example is the 48V power applied to certain types of microphones) is perfectly capable of murdering you under the right conditions.

The Neumann U87 patiently awaits its prey

So what are "the right conditions"?  Obviously the electricity is going to need to pass through the heart to stop it, so you need to put your body in the circuit in a way that that's possible (having the current run from one arm to the other is a good way to do it).  Contact area is important; if you're touching a live wire with a fingertip, the total resistance of the circuit formed by your body is going to be much higher than if you grab it with the entire palm of your hand, meaning the shock is much less likely to be lethal.  Similarly, since most of your body's resistance comes via the skin, being wet or damp will bring your total resistance down considerably, reducing the "kill voltage" needed to produce a fatal current.  Conversely, if there's some other source of resistance in the circuit besides your body (a thick pair of rubber gloves, for example) that's going to increase your electrocution-voltage threshold by a lot.  Finally, there's duration to consider; as we know, clamping will take a little while to kill you, and even fibrillation takes a few seconds to take effect, so if you can pull yourself off the source of the shock within a second or two you'll probably be OK.  Interestingly, this is where low voltages can be more lethal than higher ones; voltages (both DC and AC) between 50-500V will usually cause your muscles to contract when shocked, which can result in you involuntarily gripping something that's slowly electrocuting you.  Higher voltages, as I've empirically discovered, tend to just cause all the muscles near the contact site to fire uncontrollably, which usually has the useful effect of throwing you out of the circuit almost instantly. 

So that's the easiest way for electricity to kill you.  Beyond just stopping your heart though, it can also cause nasty internal burns at even sub-lethal currents.  The reason for that is that as current passes through a resistor (or meat-resistor in this case), it loses electrical potential energy, or voltage. This energy gets lost in the form of heat, which means whatever the current is passing through will heat up.  Even currents of less than 50% of the "death threshold" can dump enough heat under the right conditions to cause internal burns.  Interestingly the shorter the path taken through your body the more severe the burn, since the same amount of energy is being distributed as heat over a smaller distance.  When you go to really high currents (like the electric chair or subway third rail) you're going to die of organ failure caused by being literally cooked from the inside out long before your heart ever gets a chance to go into fibrillation or clamping.  In extreme cases, like lightning strikes, it's possible to be completely vaporized from the resistive heat alone.
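
The heating is easy to put numbers on too: the power dumped into the meat-resistor is P = I^2 * R (equivalently V^2 / R).  Here's the arithmetic with the same rough body-resistance estimates as before; the 600V third-rail figure is a typical value and varies by transit system:

    # Heat dissipated in the body: P = I^2 * R
    body_resistance = 1000.0                               # ohms, our rough estimate

    # Sub-lethal but sustained: half the low end of the "death current"
    small_current = 0.05                                   # amps
    print(small_current ** 2 * body_resistance, "W")       # 2.5 W, concentrated at the contact points

    # Subway-third-rail territory (~600 V DC) with soaking wet skin
    big_current = 600.0 / 500.0                            # amps, Ohm's law with the 500-ohm wet estimate
    print(big_current ** 2 * 500.0, "W")                   # 720 W -- roughly a microwave oven's worth of heat, inside you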

So to bring this back around to the personal: why didn't my 9000V shock kill or injure me?  My meat-resistance was probably extremely low due to both hands being covered in highly conductive salt water, so even the short time I was connected to the power supply should have caused some really nasty internal burns even if it didn't get a chance to stop my heart.  The thing that saved my life was that the power supply was current-limited; it was built (neon-sign transformers do this with a magnetic shunt in the core, which acts like a big internal resistor) so that the total output current, even in a full short-circuit condition, could never exceed 0.03 amps.  That's not enough current to do much more than tickle a little even in the most idiotic of circumstances (like this one), so I really wasn't in any danger.  So the question of whether or not I can be killed by electricity will, unfortunately, remain unanswered until the next time I do a dumb thing involving malt liquor and high voltages.
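
For the curious, here's the back-of-the-envelope version of why the current limiting mattered, using the transformer's rated numbers (the effective internal impedance of any particular neon-sign transformer will vary, but the idea is the same):

    # Effective internal impedance needed to limit a 9000 V supply to 0.03 A
    open_circuit_voltage = 9000.0      # volts
    max_output_current = 0.03          # amps (short-circuit limit)

    internal_impedance = open_circuit_voltage / max_output_current
    print(internal_impedance)          # 300000.0 ohms -- it dwarfs any meat-resistor

    # So even with my soggy 500-ohm body in the circuit, the current barely changes:
    print(open_circuit_voltage / (internal_impedance + 500.0))   # ~0.0299 amps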

The majority of the specifics of this one came from here, although I did a lot of googling around to figure out the exact mechanism by which DC current kills.

Friday, January 20, 2012

Where Did DNA Come From?

Crankiness that comes with chronic sleep deprivation and resulting need for weapons-grade stimulants aside, I'm a pretty easygoing dude.  Still, mentioning that you don't believe in evolution (or worse, that you "think there are some holes in the theory") within earshot of me is a great way to get a quick crash course in all the swear words I know, as well as the myriad ways to connect them together that I've come up with over the years.  Part of it is a general irritation with people who decide science isn't true because they don't understand it (I have similar reactions to global-warming deniers, people who don't vaccinate their children, and everything Deepak Chopra has ever said), but it goes a bit beyond that because, as tools for understanding the world around us go, you really aren't going to do a whole lot better than the theory of evolution.

Somebody (I think it was Einstein, he was always saying smart stuff when he wasn't denying that quantum mechanics was true) said that the value of a scientific theory can be quantified by how good a predictive model it is divided by how much effort it takes to get said predictions out of it.  By that metric, evolution beats everything out there: the essentials of it can be summed up in a paragraph at most, and armed with just that paragraph (no calculus! yay!) and basic logic you can work out everything from why night predators don't have infrared vision to how the platypus came about.  Anyone who has ever tried to talk someone who hasn't taken grad-level physics through quantum mechanics, general relativity, or (godforbid) the standard model can appreciate that.

Unfortunately one of the major consequences of evolutionary theory is that people, with all their brainpower, personalities, nice racks, etc, aren't much more than highly-optimized delivery mechanisms for their genetic information.  Walking, talking seed packets, if you will.  Quite understandably, this is upsetting to a lot of people, some of whom respond by pretending evolutionary theory is "controversial" and forcing dumb shit like "intelligent design" to be taught to Texas schoolkids who don't know any better.  In reality, evolution is about as controversial as gravity; if you don't believe me, go contract one of those lovely MRSA strains that seem to have worked out how to defeat every single one of our antibiotics.  Not only is it utterly noncontroversial, it's also absolutely fundamental to any understanding of modern biology (and, interestingly, some computer science); trying to do, for example, immunology without a solid understanding of evolutionary theory would be like trying to read Shakespeare without knowing the alphabet.

Rick Santorum, inadvertently making the case against intelligent design

People who don't particularly like evolution usually resort to attacking it by pointing to things in nature with "irreducible complexity;" that is, complex things like the eye that don't seem like they'd confer any evolutionary advantage until all their parts were in place, as proof that there's more than the blind statistics of natural selection at work in creating biodiversity.  Aside from being a logical fallacy masquerading as an argument (not understanding something isn't the same thing as it not being true), it's usually pretty easy to come up with stepwise models for how even complex organs like the eye could have evolved from, say, a single cell with a mutation that made it slightly light-sensitive.  The one place I've always gotten hung up, though, is in trying to explain where DNA, which is one of the main ingredients needed for natural selection to work, came from.

In order to get natural selection to happen, you need three things: a way to transfer information between subsequent generations (that'd be DNA), selection pressure (an environment that ensures that some individuals will be more likely to reproduce than others), and a randomization factor so new things keep getting tried out (that'd be genetic mutation).  Start with those three things and you can go from a single cell to an elephant, although it'll take awhile (one thing evolution is not is efficient).  The crux of the problem here is that without all three of these things, you've got no natural selection, leading to a bit of a chicken/egg problem.  Selection pressure and randomization are easy (any environment is going to exert some degree of selection pressure, and pure statistics will give you "bad copies" every once in awhile if you're working at the molecular scale), but how did we get a molecule that can hold an assload of information, plus the associated, unbelievably complex cellular machinery needed both to translate that information into proteins (which are themselves so complicated that we need supercomputers just to predict their folded shapes) and to copy the molecule itself for the next generation?  There's a lot of infrastructure there, and contrary to what the God-botherers will tell you it didn't all just pop into existence at once.  Or did it?

The answer is no, it didn't, and if I knew more about basic chemistry and biology (disclosure: I'm way outside my science comfort zone here, so the following explanation is going to be a bit sketchy.  I'm doing my best though) I'd have been able to reason this one out for myself the way I usually can with any complex evolutionary result.  The secret is that the complicated DNA/RNA/protein system we use as the basis for our genetics itself evolved from a series of similar, but much simpler and less effective, information-transfer systems.  The prevailing theory for how it all came about is called the "RNA World Hypothesis," and it's been backed up by a lot of really rigorous experimental and theoretical work; it isn't the only idea out there, but it's about as close to a consensus as origin-of-life research gets. 

So let's go back to early planet Earth.  It wasn't a very exciting place, mostly just rocks, lava, a corrosive and mostly poisonous (to humans) atmosphere, and a primordial soup of miscellaneous organic molecules.  Miller's famous experiment from the 50s showed that said primordial soup could originate from a combination of methane, hydrogen, ammonia, and water, driven by an electric discharge standing in for lightning, so the idea that there were these pools of "prebiotic soup" lying around isn't really that far-fetched.  This soup mostly sat around reacting with itself for awhile, producing useful stuff like fats and amino acids.  Eventually, via one of several possible reaction paths, a handy little molecule called ribonucleic acid (RNA) turned up.

RNA is kind of the beta-test version of DNA (and, like most beta code, bits of it can still be found floating around in the current version of cellular mechanics).  Like DNA, it's a long-chain molecule with a bunch of nucleotide bases attached in a specific order.  Unlike DNA, it lacks a double-helix structure, instead opting for a much less chemically stable single-strand morphology.  That's useful in this case though, because it means RNA can catalyze reactions as well as store information-- something DNA can't do without a lot of supporting apparatus.  Since RNA is reactive, it's going to interact with the other stuff in the prebiotic soup a lot, including via something called a "replication reaction" where the dangling nucleotide bases of the RNA chain essentially help it assemble a mirror-image copy of itself.  The copy efficiency and fidelity would have sucked, because the RNA was sitting in a soup full of organics and any number of other reactions (including reactions with other RNA molecules) could screw the whole thing up, but that isn't too important-- we've now got a large molecule that can replicate itself, at least under ideal conditions.  Step one.

Like I said, there was a lot of other crap floating around in the soup besides the sugars, phosphates, and bases that make up RNA.  Fatty acids, for example, probably would've also been present.  Fatty acids like to clump together into hollow pockets called vesicles, and it's not hard to imagine one of these vesicles eventually forming around one of the many RNA molecules floating around.  So now some of the RNA chains are isolated from the environment by a membrane.  Usefully, a fatty-acid membrane would have been impermeable to big RNA molecules (which keeps them from reacting with one another and essentially making a mess of their stored information) but easily permeable to the smaller nucleotides needed for the RNA-replication reaction.  So by surrounding some RNA molecules with a fatty membrane, we've created a situation where some RNA can copy itself much more successfully than the membrane-less stuff.  So in other words, some RNA is now much better at transferring its information to its "children" than others, giving us a very primitive form of selection pressure.  Step two.  As a bonus, we've also created the first, very primitive, cell-like structure, with a membrane surrounding and protecting information-bearing macromolecules. 

The ancestor of all life on earth relaxes at home in the prebiotic slime.  The facial structure is somewhat conjectural.

Step three (possibility for occasional copying errors) pretty much takes care of itself, thanks to the inherent instability of RNA.  While the cell membrane protects the RNA well enough to ensure that most of its information survives the replication reaction, the whole thing is pretty much a self-catalyzed crapshoot; at some point the wrong nucleotide is going to get stuck onto the chain in the wrong place, or something.  What's crucial is that most of the information can still be transferred successfully, thanks to the membrane, but the system is still imperfect enough to spit out the odd error.

Mutation capability was a necessary component of the first protocells. 

Now that we've got everything we need for natural selection to take its course, we're off to the races.  The fact that the natural selection is occurring on molecules as opposed to anything that could even loosely be called "alive" is irrelevant; selection pressure is a purely statistical process and as such doesn't care whether it's screwing around with giraffe necks or the ordering of base pairs in an RNA molecule.  Natural selection in this case would favor moving toward better and better replication mechanisms, because more accurate replication would allow the RNA to make more accurate "children," so it's not too hard to picture the first simple, DNA-based prokaryotic cells eventually turning up after a few million years or so of refinement. 
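
If evolution acting on bare molecules seems too abstract, here's a deliberately dumb little Python simulation of the three ingredients at work.  Everything in it is invented for illustration (the sequences, the "fitness" function, the numbers); real prebiotic chemistry obviously doesn't optimize toward a predefined target, but the point is that heredity + mutation + selection is enough to climb a hill:

    # A cartoon of natural selection on self-copying strings -- not chemistry,
    # just the three ingredients: heredity, mutation, and selection pressure.
    import random

    BASES = "ACGU"                       # RNA-style alphabet
    TARGET = "AUGGCCAUUGUAAUGGGCCGCUGA"  # stand-in for "replicates really well"
    MUTATION_RATE = 0.02                 # chance of a copying error per base
    POP_SIZE = 100

    def fitness(seq):
        """Crude proxy for replication efficiency: bases matching the target."""
        return sum(a == b for a, b in zip(seq, TARGET))

    def copy_with_errors(seq):
        """Imperfect replication: each base occasionally gets miscopied."""
        return "".join(random.choice(BASES) if random.random() < MUTATION_RATE else b
                       for b in seq)

    # Start with a population of random sequences in the "soup"
    population = ["".join(random.choice(BASES) for _ in TARGET) for _ in range(POP_SIZE)]

    for generation in range(200):
        # Selection pressure: the better replicators get to make the next batch of copies
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 5]
        population = [copy_with_errors(random.choice(parents)) for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print(fitness(best), "of", len(TARGET), "bases match the 'good replicator' target")

Run it and the population drifts from random garbage to near-perfect copies in a couple hundred generations, with nobody steering.
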

While it's widely accepted that RNA was the original information-transferring molecule, it's entirely possible that it was preceded by some other, even more primitive nucleic acid.  We can't do a whole lot beyond educated guesses in that department, but people have done lots of interesting experimental work attempting to replicate prebiotic-soup conditions and actually demonstrate a lot of this stuff.

So not only are we descended from monkeys, if you go back far enough we're probably descended from an unstable polymer sitting in a bubble of fat.  Can't imagine that's going to make the Jesus types very happy, but at least now I've got a good answer for the inevitable smug "well then where did DNA come from?" question whenever I'm dumb enough to actually try to argue with these people. 

Following my First Rule of Doing Science (it's way easier to find someone who knows a thing than it is to learn the thing yourself) I called in a ringer on this one.  He explained the gist of it to me and then pointed me to this article on the origin of life, which is an incredibly detailed (and well-cited) expansion of everything in this post.  It's entirely worth a read, and even having an at-best-passing knowledge of genetics and biology I was able to mostly understand it without having to type too many words into Wikipedia.

Tuesday, January 17, 2012

How Can Magnetic Monopoles Exist (and what good are they)?

This is kind of a weird one, I'll admit, but it's something that's confused the hell out of me ever since I read somewhere that magnetic monopoles were theoretically possible.  Both the occasional-scientist (the "how do they exist?" part of the question) and full-time engineer (the "what good are they?" bit) parts of me are pretty intrigued by the whole idea, which seems to violate everything I know about electromagnetics. 

For people with lives, the concept of a magnetic monopole is pretty simple.  Any magnet, from those giant ones on cranes at junkyards to the tiny little bits on your hard drive, has both a north and a south pole.  As all kids who grew up in a house with a refrigerator (if you didn't grow up with a refrigerator, my apologies and that totally sucks by the way) know, two magnets will push each other apart when two of their like poles get close to each other, and stick together if two opposite poles come into close contact.

A magnetic monopole is basically a chunk of material with only one magnetic pole, which means all sides of it would exert the same type of force on another magnet regardless of what part of the monopole the other magnet was close to.  A good analogy would be two styrofoam balls with a positive electrical charge; they're going to do the same thing (repel each other) no matter which way you bring them together.

The big problem with magnetic monopoles is that, according to classical physics, they can't exist.  Gauss' Law of Magnetism (sort of the Rodney Dangerfield of Maxwell's equations) says (among other things) that the net magnetic "charge" in an object is always zero, meaning that if part of the object is positively magnetized, some other part has to be equally and oppositely magnetized.  In other words, you can't have a magnetic north pole without a magnetic south pole.  This is easy to test if you don't mind breaking stuff: just take a magnet and crack it in half (see figure below).  You'd think what that would do is make a pair of opposite monopoles (one magnetic north and one magnetic south), but what actually happens is that you just get two smaller magnets with a north and south pole each-- the magnetism comes from zillions of atomic-scale dipoles (basically electron spins) that each carry both poles, so no amount of brute force will separate the two.

Wikipedia already had a good picture of what happens when you break a magnet, saving me countless minutes of MSPaint work.

As classical physics goes, you're not going to get much more useful than Maxwell's equations (see below).  Depending on how you solve them, they can tell you everything from how circuits work to how a cell phone signal will propagate from a tower.  In other words, they basically predict everything that has to do with electricity and/or magnetism, and for the most part they do it really accurately.  So when they say something like a magnetic monopole is a physical impossibility, I'm generally inclined to buy it.

Maxwell's equations.  Like the DMV, they are a total pain in the ass but it's almost impossible to avoid dealing with them at some point, at least if you're doing science.
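
Written out in the usual differential form (SI units), they look like this-- the second one, which says the divergence of B is always zero, is the Gauss' Law of Magnetism from a couple of paragraphs back:

    \nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0}, \qquad
    \nabla \cdot \mathbf{B} = 0, \qquad
    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}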


You know who was way less lazy than me (and also probably smarter)?  Paul Dirac.  He was one of the handful of geniuses who made everybody's lives more difficult by developing quantum mechanics in the early 20th century.  Quantum mechanics is horrendously complicated but easy to sum up: when shit gets small, shit gets weird.  In other words, when you start thinking at the scale of atoms and molecules, normal physics goes out the window and you get things like electrons passing through solid barriers and particles randomly popping in and out of existence happening as a matter of routine.  On the macroscopic scale this doesn't affect us a whole lot (which is why it took so long for anyone to even realize quantum mechanics was a thing), but it's turned out to be pretty important anyway (transistors, for example, shouldn't work according to strict classical physics, but they definitely do).

Dirac worked out that, at the quantum level (where things like matter and energy are quantized into discrete, indivisible chunks, hence the name), the rules for magnetic fields might be a bit different.  The math on the Wikipedia page is currently making my head hurt and I'm not even going to pretend I understand or can verify it, but the upshot of Dirac's result is that quantum mechanics has no problem at all with nonzero magnetic charge (magnetic monopoles, in other words) existing, and that if even a single monopole exists anywhere, then electric charge has to be quantized.  Since we know that electric charge is quantized (you can't ever have less than one electron's worth of charge) and nothing else really explains why, the implication is that, contrary to all common sense, magnetic monopoles may well be a thing, at least at the subatomic scale (at the macro scale, the electron's charge is small enough that you don't really need to treat charge as quantized, like how you don't think of a glass of water as being made up of a shitload of individual hydrogen and oxygen atoms).  Subsequent work in even more advanced physics that I'm even less qualified to comment on appears to also support the existence of monopoles, generally in the form of subatomic particles with a discrete magnetic charge that would be almost exactly analogous to electrons. 
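
For the brave, the actual condition Dirac derived (usually written in Gaussian units) ties the electron's charge e to the magnetic charge g of any hypothetical monopole:

    e \, g = \frac{n \hbar c}{2}, \quad n = \pm 1, \pm 2, \ldots
    \quad \Longrightarrow \quad
    g_{\mathrm{min}} = \frac{\hbar c}{2e} = \frac{e}{2\alpha} \approx 68.5 \, e

In other words, if monopoles exist at all, their magnetic charge comes in chunks roughly 68.5 times the size of the electron's electric charge, which is part of why physicists figured one would be hard to miss if it ever wandered through a detector.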

So (very tiny) magnetic monopoles CAN exist.  That's a long way from saying that they DO exist, let alone that they exist outside a cryogenic vacuum chamber or supercollider or something.  So far, pretty much all attempts to detect a subatomic-sized magnetic monopole, using everything from SQUIDs (the most sensitive magnetic-field detectors ever built) to the Large Hadron Collider that's currently getting so much publicity for all that Higgs boson business, have been failures.  There have been a couple of blips, but nothing anyone's been able to reproduce.  This has led to the general consensus that, if they do exist, they're extremely rare and/or hard to detect even under weird, artificial conditions like the inside of a giant particle accelerator or the near-absolute-zero temperatures a SQUID operates at. 

This was all pretty disappointing to an electrical engineer like myself, whose general ignorance of advanced physics led to thoughts of macro-scale magnetic monopoles existing somewhere just waiting to be mined, and more thoughts of the weird gadgets and motors and things you could build with them (to be fair, I think I got most of that idea from a Larry Niven short story).  Unfortunately the fact that, if they're around at all, monopoles only exist as nearly undetectable subatomic particles kind of limits their usefulness for basically everything, except proving this or that formulation of string theory is a little more plausible, if they're ever detected.  When someone figures out how to generate a focused beam of magnetically-charged particles I'll get excited and/or think of something cool to do with it; until then the eggheads can keep them.

Wednesday, January 11, 2012

Why Do We Use Such Stupid Time Units?

Here in the US of A, we like our units of measurement obsolete and ridiculous.  Unlike basically the rest of the civilized world (no, England and Australia, you don't count as part of the civilized world), we continue to use anachronistic and unwieldy units of measurement like the foot, the mile, the pound, and the fluid ounce in our everyday lives.  Aside from unnecessarily confusing elementary school kids though, this is a pretty harmless indulgence: in science, engineering, and most other areas where precise measurement is important, we pretty much just suck it up and use the metric system.

In very general terms, the metric system uses a single (usually arbitrary, but whatever) base unit for each type of measurement, e.g. the meter for distance, the gram for mass, and the liter for liquid volume.  You measure everything in terms of multiples of the base unit, often with fun prefixes grafted on as shorthand for orders of magnitude (e.g. one gigameter = 1 billion meters).  It makes converting between units easy and you don't have to remember things like that there's exactly 5280 feet in a mile or that water boils at 212 degrees Fahrenheit.  There's a metric base unit for almost every possible thing you could want to measure, from simple things like distance (meters) to somewhat more complex units like electrical potential (volts).

The big exception, if you hadn't already guessed it from the post title, is time.  Our basic unit of time is the second, which we pretend has been carefully defined to be how long it takes a photon to travel about 3 x 10^8 meters in vacuum but is really just an arbitrary thing that got settled on millennia before anyone even knew that light had a finite velocity.  Like I said though, pretty much all the base metric units are arbitrary so how we came by it is really not important.  It only gets weird when you start scaling up: 60 seconds is one minute, 60 minutes is one hour, 24 hours is one day, and 365.25 days is a year (I'm not even going to touch months here).  The latter two have solid physical groundings; it takes the earth one day (86400 seconds) to complete a full rotation, and 365.25 days (31557600 seconds) to complete a full orbit around the sun.  So we'll keep those, thank you.
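
If you don't feel like trusting my arithmetic, the sanity check is three lines of Python:

    # Seconds per day and per year, using the 60/60/24/365.25 scale above
    seconds_per_day = 60 * 60 * 24
    seconds_per_year = seconds_per_day * 365.25
    print(seconds_per_day, seconds_per_year)   # 86400 31557600.0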

What I've never understood is the minutes and hours part of the scale: why 60 seconds to a minute, why 60 minutes to an hour, and why 24 hours to a day?  Ordinarily I wouldn't care, but unlike most of the stupider units still floating around today I actually have to deal with minutes and hours quite a bit in my job as a science-person, and it's a total pain in the ass.  So where did these three oddball units come from, and why have they stuck around even in metric-system-dominated fields?

The answer to how the day got divided into 24 chunks is kind of obvious if you think about ancient civilizations and how little of the universe they could actually observe and measure.  The key to everything is the number 12, which is (roughly) the number of lunar cycles in a year.  The moon was one of the very few extraterrestrial things that could be easily observed by ancient civilizations, so this was apparently considered a big enough deal that the Sumerians, Indians, and Chinese all (apparently independently?) decided to ignore the number of fingers on their hands and split the time between sunrise and sunset into 12 chunks instead of the slightly more logical 10.  For the sake of symmetry, the Greeks and Romans eventually got around to splitting the night up similarly, giving a total of 24 "hours" per full day.

An early issue with the "between sunrise and sunset" reckoning of hours was the fact that the time between sunrise and sunset will vary quite a bit depending on the time of year.  This was generally worked around by just letting the length of the hour change with the seasons.  Since it isn't like pre-technological civilizations had any real need for precise time synchronization, this probably mattered less than my horrified engineer's brain seems to think.  Even pendulum-based clocks could be made to account for the varying length of the hour by just adjusting their swing period every once in awhile.  Eventually (around 100BC) the Hellenistic-period Greeks realized this was dumb and came up with what's more or less our current system, where hours are always the same length and sunrise/sunset just occur at different times throughout the year.

So that's why there's 24 hours in a day, but why are there 60 minutes per hour (and, since the answer turned out to be the same, 60 seconds per minute)?  Blame the Babylonians, who were very clever astronomers but not so good with things like fractions (to be fair this was a long time ago and they were pretty much inventing math as they went along).  They decided to go with base-60 math for their astronomical calculations, since you can divide 60 by 2, 3, 4, 5, 6, and 10 without having to figure out what to do with a remainder.  Because this was handy, it stuck and has been used as the sub- and sub-sub-division of the hour since about the time that the Greeks standardized the hour itself, about 100BC.


That's all great, but it brings us to the real question: why are we still using these odd time units that are based on things that long since became irrelevant?  Like most questions like this, the answer is sort of a squishy mix of convenience and convention.  Basically every civilization on this planet has been keeping time with the 24/60/60 scale for over 2000 years, which is a lot longer than any other unit of measurement has survived.  So in spite of its general unwieldiness we're pretty used to it and it basically does what it needs to do (exhibit A here: even the godless socialists in Europe gave up on decimal time almost as soon as they tried it-- revolutionary France took a crack at 10-hour days in the 1790s, and it lasted barely two years).  Even the International System of Units (SI-- the metric system, basically) attempts to square the circle on this one; the only official time unit in SI is the second, which you can multiply up or down with prefixes the way you would with any other basic unit of measurement.  However, both the minute and hour are considered what amounts to honorary SI units; they aren't in the club, but enough people use them anyway that someone thought they should at least be defined in relation to the second and have official unit symbols and stuff. I'm sure someone somewhere is measuring some time-based process in kiloseconds or megaseconds, but quite honestly even most scientists would think that was pretty weird and just grab a calculator to convert to hours, days, or years.

Tuesday, January 3, 2012

How Does Flash Memory Work?

Flash memory is basically the awesomest thing ever invented: it's easy to fabricate (and the process scales well, meaning we can cram more and more memory into the same area every day), reasonably fast to read and write to, can be read as many times as you like and rewritten tens to hundreds of thousands of times, and (most importantly) is non-volatile, meaning it holds its data even when you cut the power to it.  We've had lots of different types of computer memory for awhile that could do each one of those things, and sometimes even more than one of those things, but having one transistor-based memory technology that could do 'em all seemed like an insane fever dream until surprisingly recently, and even after flash was invented (at Toshiba in the early 80s, with the first commercial chips showing up around 1988) it took awhile to learn how to make it usefully small and fast.

For an example of how far and fast flash memory has come in the last decade, take a look at the Hitachi MicroDrive, which was the cutting edge of high-capacity digital camera storage as recently as 2005: it was considered easier to miniaturize an entire goddamn hard drive, magnetic platter, read-write head, control circuitry and all, than to cram more than a gigabyte or two of flash memory into a reasonable space.  These days you can spend about 50 bucks and get a 32GB microSD flash memory card that's about the size of a fingernail.  I don't care how jaded you are about technology; that's bananas. 

Hitachi MicroDrive, shown with hamster for scale because why not.  These were so popular when they came out that people would buy certain digital cameras just to take them apart and get the drive out.  Now they just look quaint.
 
Now that I've nerdgasmed over flash a bit, we come to the awkward part: I have a solid background in solid-state physics, particularly semiconductor physics, and yet I have no idea how flash memory works.  I could make some hand-wavey guesses about things like "electron traps" and "confinement," but at the end of the day I'd pretty much be talking out of my ass.  Luckily Wikipedia is way smarter than I am.

So it turns out flash memory cells are basically field-effect transistors (FETs) with a couple of added features.  Assuming you have no good reason to know what a FET is beyond "a thing that helps the pornography get from the internet to my eyeballs," a quick walkthrough is probably in order (see diagram and follow along):

The field effect transistor (FET) in both its "off" and "on" states.  More description of what the hell's going on in the paragraphs below.
 
So basically you start with a chunk of silicon.  Silicon in general isn't all that conductive, but two regions of our chunk in question have had enough other things added to them to make them conduct almost as well as metal (this is one of many neat things you can do with silicon).  Those are the "source" and "drain" electrodes.  In between them is a large expanse of regular old not-that-conductive silicon, so even if you put a pretty big voltage on one side you're not going to get much current flow between the two. 

Here's where it gets cute: sitting above the region between the source and drain is a piece of insulator with a metal contact on top.  This is called the "gate," because applying a positive voltage to it will pull lots of electrons in the bulk Si up toward the surface.  There's very little actual current flowing, because of that insulator, but the area under the gate is now very rich in electrons.  Unsurprisingly, lots of electrons = more conductivity, so now a voltage applied across the source and drain will cause electrons to flow from the source to the drain (which is how the two got their names; the "current," meanwhile, officially flows from drain to source, because someone screwed up way back when and made the electron charge negative when they were defining all this stuff, so current goes in the opposite direction of electron flow, confusing generations of EE undergrads).   So basically you've got an "electron valve," where the flow of electrons between the source and drain can be switched on or off by changing the gate voltage.  FETs aren't the only type of transistor, but since they're extremely scalable and really goddamn easy to make in huge numbers, they're the only kind you're going to find in digital electronics.  The computer I'm typing this on has a couple billion of them sitting on its not-particularly-modern CPU, for example.
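
If it helps, here's the "electron valve" idea reduced to a cartoon (the threshold number is invented; real FETs get theirs from device physics and a datasheet):

    def fet_conducts(gate_voltage, threshold_voltage=0.7):
        """Cartoon FET: the channel conducts once the gate pulls enough
        electrons to the surface, i.e. once V_gate exceeds the threshold."""
        return gate_voltage > threshold_voltage

    print(fet_conducts(0.0))   # False -- valve closed, no source-drain current
    print(fet_conducts(3.3))   # True  -- valve open, current flows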

Flash memory bits are essentially FETs with an extra piece of conductive material between the gate and the electron channel.  The official name for it is the "floating gate," but let's call it the "electron jail" for reasons that will become apparent.   When the flash bit is in its off (zero) state, the electron jail is just kind of sitting there doing nothing, so the flash cell works like a normal transistor would if you crammed another piece of metal between the gate and the channel.

Basic flash memory cell.  Readers with an attention span longer than a goldfish may recognize it.  Readers without an attention span longer than a goldfish are encouraged to scroll up a little.

Where it gets weird is when you decide to switch the cell from the off (zero) to on (one) state.  This is accomplished by putting a large voltage on the gate electrode (to turn the transistor part on) and then running an assload of current (relatively speaking) from the drain to the source.  Said assload of current will contain electrons at many different energy levels, including a predictable number of them that are energetic enough to jump clean over the insulating barrier between the channel and the electron jail (a few more sneak through via quantum tunneling, which is probably a whole other blog post).  This phenomenon is known, without the slightest trace of irony, as "hot electron injection" in solid-state physics.  Per usual, the math is really awful but we don't care about it; all that matters is that after we've run our assload of current through the flash cell for awhile, there are a bunch of electrons stuck in the electron jail. 

Like Alcatraz, there's no escaping from the electron jail.  It's surrounded on all sides by insulating material, so the electrons that are in there are staying there for awhile.  Quantum mechanics, because it is awful, says that they have a slight probability of being able to tunnel through the insulating barrier and out of jail, but like escaping from Alcatraz it ain't too likely to happen, or at least not to enough electrons to matter.  In general, once you've charged up a flash cell this way it'll stay charged for something like tens of years (or "20 to life," as we say in the hard-bitten world of semiconductor engineering).

So now we've got a negatively-charged electron jail sitting between the gate and channel of our flash cell.  The fact that there's negative charge there means it takes more of a positive voltage on the gate to switch the transistor on.  This shift in the threshold voltage (the technical term for how much you have to juice the gate to get the transistor to switch on) is what gives us our "one" state: flash cells with a low threshold voltage are zeros and flash cells with a high threshold voltage are ones.

A flash cell in its "one" state.  Also probably a metaphor for our overcrowded prison system or something.

That's great, but what if we want to erase it (set it back to zero)?  We've already established that our dystopian electron prison is very, very secure, so how do we spring the prisoners?  Here's where my arch-nemesis quantum mechanics comes to the rescue.  Remember when I said that there was a slight probability that electrons would be able to escape by tunneling out of prison?  We can increase that probability by putting a large negative voltage on the gate.  Electrons hate other electrons, so the presence  of a large negative voltage just above our prison will suddenly make our trapped electrons much more keen on busting out of jail than they were previously, vastly increasing the probability that they'll tunnel back out into the silicon substrate to commit more crimes.  It should be noted at this point that the prison analogy is just convenient; there isn't any evidence that electrons are more prone to crime than other elementary particles. 

So that's the basic idea behind flash memory. Obviously it's not that simple; you have to wire up billions of these things in a way that makes them convenient to read and write to if you want to do anything practical, but the fundamental thing is the shift in threshold voltage caused by filling the electron jail with electrons. 
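
For the terminally code-brained, here's the whole program/erase/read cycle as a toy model.  All the voltage numbers are invented for illustration, and real cells get programmed with carefully tuned pulses rather than a method call, but the threshold-shift logic is the whole trick:

    class FlashCell:
        """Cartoon floating-gate flash cell: program/erase shift the threshold
        voltage, and reading just checks which side of a reference the cell is on."""
        BASE_VTH = 1.0      # threshold voltage with an empty electron jail (volts)
        SHIFT    = 2.0      # extra threshold voltage once the jail is charged
        READ_V   = 2.0      # gate voltage applied when reading

        def __init__(self):
            self.jail_charged = False          # erased cell = empty jail = "0"

        def program(self):
            # Hot electron injection: cram electrons into the floating gate
            self.jail_charged = True

        def erase(self):
            # Big negative gate voltage: electrons tunnel back out to the substrate
            self.jail_charged = False

        def read(self):
            vth = self.BASE_VTH + (self.SHIFT if self.jail_charged else 0.0)
            conducts = self.READ_V > vth       # cell conducts only if its threshold is still low
            return 0 if conducts else 1        # high threshold = programmed = "1"

    cell = FlashCell()
    print(cell.read())    # 0
    cell.program()
    print(cell.read())    # 1 -- and it stays 1 even with the power off
    cell.erase()
    print(cell.read())    # 0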

Oddly enough I had more or less guessed right on this one; I assumed there was some coercion of electrons to tunnel somewhere and be confined going on at some point.  Where I was surprised though is how much like a regular old transistor the basic flash cell is.  It explains a lot when you think about it though; we've gotten incredibly good at making tons of small, dense transistors in the last 50-odd years, so the fact that the structure of flash memory is so similar is probably one of the main things that has made it so amazingly scalable. 

As always, props to Wikipedia.