One of the most interesting things about LLVM comes from a quote at the bottom of page 1 of the article:
Apple also uses LLVM in the OpenGL stack in Leopard, leveraging its virtual machine concept of common IR to emulate OpenGL hardware features on Macs that lack the actual silicon to interpret that code. Code is instead interpreted or JIT on the CPU.
This approach makes it very likely that developers will use the hardware-optimized instructions. Most other approaches impose significant costs on developers, e.g., the need to write additional code to cover every possible hardware configuration. With LLVM there is no coding penalty, so using the optimized routines becomes a no-brainer, resulting in faster code for people with beefier hardware (who also tend to be the most worried about performance) and usable code for everyone else.
As background, the article points to a presentation by Chris Lattner, but I prefer his paper with Vikram Adve, "LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation," because it talks in terms I can understand (like "Static Single Assignment").
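Since the paper leans on Static Single Assignment, here is a toy sketch of what SSA renaming does for straight-line code. This is purely illustrative Python, not LLVM's actual implementation; the function name and data layout are my own invention.

```python
def to_ssa(stmts):
    """Toy SSA renaming for straight-line code (illustrative only).

    stmts is a list of (target, operands) pairs, where operands are
    variable names or literal strings. In SSA form every variable is
    assigned exactly once, so each redefinition mints a fresh numbered
    name, and later uses are rewritten to the latest version.
    """
    version = {}  # variable -> current version number
    out = []
    for target, operands in stmts:
        # Rewrite uses to refer to the most recent SSA name.
        renamed = [f"{v}{version[v]}" if v in version else v
                   for v in operands]
        # Each assignment creates a new version of the target.
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed))
    return out

# x = 1; x = x + 2; y = x * 3
# becomes
# x1 = 1; x2 = x1 + 2; y1 = x2 * 3
program = [("x", ["1"]), ("x", ["x", "2"]), ("y", ["x", "3"])]
```

Because every value has exactly one definition, analyses like constant propagation become simple lookups -- which is a big part of why LLVM's IR is so amenable to optimization. (Real SSA also needs φ-nodes at control-flow joins; this sketch handles straight-line code only.)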
So here's what's cool: LLVM eats a code representation that is very amenable to optimization and analysis. It optimizes this input and outputs machine code (potentially tuned for the actual hardware which will run the code) decorated to allow low-overhead runtime profiling.
This approach permits repeated optimization based on recent run-time data rather than generalized heuristics -- it is reminiscent of Java's HotSpot, with larger scope but less immediacy.
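The profile-then-reoptimize loop can be modeled with a toy sketch. This is my own illustration, not LLVM's API: the class, threshold, and counter scheme are hypothetical stand-ins for the low-overhead profiling hooks described above.

```python
HOT_THRESHOLD = 3  # hypothetical cutoff, chosen small for the sketch

class ProfilingRuntime:
    """Toy model of profile-driven reoptimization (not LLVM's actual API).

    Emitted code carries cheap per-function counters; once a function
    crosses a hotness threshold, the runtime swaps in a version
    reoptimized for the observed profile instead of relying on
    generic compile-time heuristics.
    """
    def __init__(self):
        self.counts = {}      # function name -> call count
        self.optimized = set()  # functions already reoptimized

    def call(self, name):
        self.counts[name] = self.counts.get(name, 0) + 1
        if self.counts[name] >= HOT_THRESHOLD and name not in self.optimized:
            # Stand-in for recompiling the function with profile data.
            self.optimized.add(name)
        return name in self.optimized  # True once the hot version runs
```

The key design point the sketch captures: the first few calls run the cheap baseline version, and only demonstrably hot code pays the recompilation cost -- the runtime data, not a heuristic, decides what gets the expensive treatment.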
I'd be remiss not to mention the strategic implications: LLVM allows Apple to radically shift hardware configurations while restricting the software impact to a relatively small chunk of code -- cf. the iPad.
Update 21 Aug 2010: Just noticed LLVM got a SIGPLAN award -- well deserved!