Upthere.com has posted an interesting problem:
When mentioning this to a (non-computational) friend, the question came up “what's a core?” -- certainly a legitimate question.
I'm posting my answer because it is short, and apparently comprehensible.
Programming one core is sort of like programming a ten-year-old personal computer (from before PCs went multi-core). To a first approximation, it is one processor doing one thing at a time.
The next level of complexity (still single-core) adds multiple different kinds of computational units to the "core". A good example is a floating-point unit. That means if
A & B are integers and
C & D are floating-point numbers, you can do
"A+B" in the integer unit at the same time you are doing
"C+D" in the floating-point unit.
The biggest level of complexity has to do with the thousand-fold (and larger) speed difference between the "core" and external memory (RAM + disk). In the worst case, this would have the core doing nothing 99.9% of the time.
This would be bad, so a lot of tricks are used to keep the core busy while it waits for other stuff to happen. Think of it as waiting for a particular result from a database in order to fill in a form -- if you're clever (& lucky), you can build the whole rest of the form while you wait.
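The form analogy can be sketched in a few lines. This is an assumption-laden illustration, not how a CPU literally does it: `slow_database_lookup` is a made-up function standing in for anything thousands of times slower than the core, and we overlap the wait with useful work instead of idling.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_database_lookup():
    # Hypothetical stand-in for RAM/disk/database latency.
    time.sleep(0.2)
    return "Alice"

def build_rest_of_form():
    # Independent work that does not need the lookup result.
    return {"address": "", "phone": ""}

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_database_lookup)  # start waiting in the background
    form = build_rest_of_form()                 # do the rest of the form meanwhile
    form["name"] = future.result()              # fill in the result when it arrives

print(form)  # -> {'address': '', 'phone': '', 'name': 'Alice'}
```

Hardware does the analogous thing with caches, prefetching, and out-of-order execution: start the slow fetch early, and keep executing whatever doesn't depend on it.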