Computers, as many people know, work at root with numbers in the binary (base 2) system. This is because the on-off nature of digital circuits maps very easily onto a series of 1s and 0s.
How, then, do we get letters (and other symbols), and fractions, and very large numbers, out of the thing? We need ways of encoding them in binary.
For example, we might all agree that 01000001 represents 'A'. It wasn't too hard back in the days when we only let Americans use computers (I kid, I kid) and so had a relatively small number of letters, numerals, and punctuation marks to account for; but as we started to deal with accented characters, and typographical symbols like ©, and then Chinese and Japanese and Arabic and...well, it got complicated, and it's something that programmers often get wrong. Gobbledygook all too often shows up in web pages and other documents.
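To make that concrete, here's a tiny sketch in Python (my choice of language for illustration; the sample string is just a made-up example). It shows that 'A' really is 01000001, and how the same bytes turn into gobbledygook when read with the wrong character set:

```python
# 'A' is 65 in ASCII/Unicode, i.e. 01000001 in binary
print(format(ord("A"), "08b"))   # 01000001

# The same bytes, decoded two different ways
text = "naïve ©"
data = text.encode("utf-8")      # encode the string as UTF-8 bytes

print(data.decode("utf-8"))      # naïve ©   -- decoded with the right encoding
print(data.decode("latin-1"))    # naÃ¯ve Â© -- decoded with the wrong one: gobbledygook
```

That last line is exactly the kind of mangling you've probably seen on web pages: the bytes were fine, but somebody guessed the encoding wrong.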
And on the math side, rounding errors continue to be a problem in many applications.
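Again, a quick Python sketch of the classic surprise (any language with standard binary floating point behaves the same way):

```python
# 0.1 and 0.2 can't be represented exactly in binary floating point,
# so tiny errors creep in and comparisons misfire.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False

# One common remedy when exactness matters (e.g. money): decimal arithmetic.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))   # 0.3
```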
So here are two useful guides that everyone who writes software ought to read:
- Joel on Software's "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)", dealing with the binary-numbers-to-letters (and other symbols) problem; and
- floating-point-gui.de's "What Every Programmer Should Know About Floating-Point Arithmetic", which explains how we get fractions and very large numbers out of strings of 1s and 0s.