@krellin,
First, I don't want to continue the audio discussion _too_ far (though obviously feel free to reply), since as I already said, _I_ don't have a strong opinion as to the difference between the two, and I actually agree that it could well be that certain imperfections are pleasing to some people.
I do want to correct one thing you said though. You said
"Here's the discussion: Per science and math, I can recreate anything I want *perfectly* if I sample it at the correct frequency. In digital, if we sample at twice the highest frequency we want (i.e. the highest frequency the human ear hears) then I can duplicate music perfectly."
This is not true. Specifically, we can reconstruct a signal _perfectly_ from its samples only if the original signal is band-limited, i.e., there is a cutoff frequency B beyond which it contains no components (no frequency higher than B appears in the signal), and we sample at a rate greater than 2B. The relevant theorem is the Nyquist-Shannon sampling theorem:
http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem#Shannon.27s_original_proof
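To see the theorem in action, here is a small sketch (with made-up frequencies and rates, chosen just for illustration) of the Whittaker-Shannon interpolation formula: a signal containing only frequencies below fs/2, sampled at rate fs, can be rebuilt at any point in time from its samples alone.

```python
# Sketch of perfect reconstruction for a band-limited signal.
# All numbers (tone frequencies, sample rate, window size) are
# illustrative choices, not taken from the discussion above.
import numpy as np

fs = 100.0                    # sampling rate in Hz
n = np.arange(-200, 201)      # sample indices (the true formula's sum is infinite)
t_samp = n / fs               # sample times

def signal(t):
    # Band-limited test signal: components at 3 Hz and 17 Hz, both below fs/2 = 50 Hz.
    return np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 17 * t)

samples = signal(t_samp)

def reconstruct(t):
    # Whittaker-Shannon formula: a sum of shifted sinc kernels, one per sample.
    return np.sum(samples * np.sinc(fs * t - n))

# Compare the reconstruction to the true signal at a time between samples.
t0 = 0.1234
err = abs(reconstruct(t0) - signal(t0))
print(err)  # tiny; the residue comes only from truncating the infinite sum
```

The only error here comes from cutting the infinite sum off at a finite window; with the full sum, the match is exact.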
A signal without a frequency cutoff cannot be perfectly reconstructed.
A sound signal does _not_ have a frequency cutoff, and thus cannot be perfectly reconstructed in this way. (Even if it could in theory be perfectly reconstructed, the "finitely many" numbers we would need are real numbers, of which there are uncountably many, so digitizing them would still introduce _small_ rounding imperfections.) Instead, engineers choose to treat the sound as though it contains no frequencies above about 22 kHz, the highest a human ear can hear, and then apply the theorem as if that were the case.
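The parenthetical point about real numbers is easy to illustrate. Here is a sketch (with illustrative numbers; 16 bits is the CD standard) of the small error introduced by storing each real-valued sample in finitely many bits:

```python
# Quantization sketch: even a perfectly band-limited signal's samples are
# real numbers, so rounding them to 16-bit values changes each one slightly.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 440 * t)      # a 440 Hz tone with amplitude 1

levels = 2 ** 15                     # 16-bit signed audio: 2^15 steps per polarity
x_q = np.round(x * levels) / levels  # round to the nearest representable level

max_err = np.max(np.abs(x - x_q))
print(max_err)  # at most half a quantization step, 1/(2 * 2**15), about 1.5e-5
```

The error per sample is tiny (half a step at worst), which is why it is fair to call these imperfections _small_, but it is never exactly zero.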
Hence, the digitally created signal is only a near-perfect recreation of the "truncated" sound wave, i.e., the wave with its higher-frequency components removed.
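This removal is not optional, by the way: a frequency above half the sampling rate produces exactly the same samples as some lower frequency, so it would masquerade as that lower "alias" if it were not filtered out before sampling. A quick sketch with made-up numbers:

```python
# Aliasing sketch: a 70 Hz tone sampled at 100 Hz is indistinguishable,
# sample for sample, from a 30 Hz tone (100 - 70 = 30). Numbers are
# illustrative, not from the discussion above.
import numpy as np

fs = 100.0
n = np.arange(0, 50)
t = n / fs

high = np.cos(2 * np.pi * 70 * t)  # above Nyquist (fs/2 = 50 Hz)
low = np.cos(2 * np.pi * 30 * t)   # its alias below Nyquist

print(np.allclose(high, low))      # True: the two sets of samples are identical
```

Since the samples are identical, no reconstruction scheme could ever tell the two tones apart; this is why recorders filter out everything above the cutoff first.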
Note that this works in the linear approximation, where different frequencies evolve independently. The physics of sound, however, is only approximately linear, so in real life the higher frequencies have some small but _possibly_ noticeable effect on the lower ones, which means that making this cutoff choice could conceivably lead to audible differences.
To cut to the chase: no, it is not possible to take an arbitrary signal and represent it perfectly with finitely many numbers.
Continuing to ebooks,
"I just can't buy the same argument in reading. As long as the method of reading is not somehow combersome or distracting....say the book weighs 50 pounds or the e-reader has an awful glare (i.e. not e-ink) then, providing you enjoy the book, the particular media upon which you are (and this is critical...) **looking at words** should be irrelevent."
If that were really true, then you would not be arguing for the superiority of EITHER method. You would view them as functionally equivalent. That you are in fact arguing that one (ereaders) is better shows that you don't actually believe this, so now we just disagree about which actually IS better.
Of course, as we have both now said, this probably comes down to taste, specific uses, etc. If you prefer your ereader and its convenience, dictionary, and ability to carry books around with you easily, then I'm glad the technology exists for you. Similarly, since I prefer (physical) books, I'm glad they exist for me. I didn't mean to make a big deal out of it, just to point out there are certain _technological_ advantages of books over ereaders, so it is a tradeoff, not a clear-cut decision.