I remember the old monitors we used had 256 colors as a setting option. But now, even the cheap monitors they give us here at work have three color settings: 16, 24, and 32 bit. Shouldn't that mean that a 14-bit photograph should look better than a 12-bit one even on the lowest setting?
I apologize in advance for how confusing this explanation will be.
In the old days, monitors took analog signals and could display however many colors your computer was capable of producing (I'm ignoring the accuracy of the analog circuitry, but that's not important). It was the computer that was limited, not the monitor. In those days, many computers had 4-color, 16-color, and 256-color modes. Those weren't 4, 16, and 256 levels of brightness for each color; they were literally that many different colors that could be displayed at one time.
Here's roughly how it worked for 256-color modes. You started with a very large box of crayons (262,144 crayons on standard VGA hardware; later cards offered 16 million). You had to choose 256 of those crayons and that became your "palette." You could draw a picture only with those crayons. You could swap out a crayon for a different one, but when you did, everything that you drew with that crayon changed to the color of the new crayon. So you could display only 256 colors at one time, but you could pick those colors from a much larger set.
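If it helps, here's a tiny sketch of the idea in plain Python (the pixel values and palette entries are made up for illustration): each pixel stores only a palette index, so changing one palette entry instantly recolors every pixel drawn with it.

```
# Indexed ("palette") color: each pixel is just an index into a small
# table of RGB values, not an RGB value itself.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 0, 255),      # index 2: blue
    # ... up to 256 entries in an 8-bit mode
]

# A tiny 2x3 "image": every pixel is one byte, a palette index.
image = [
    [1, 1, 2],
    [0, 1, 2],
]

def rendered(image, palette):
    """Turn palette indices into actual RGB triples for display."""
    return [[palette[idx] for idx in row] for row in image]

print(rendered(image, palette))   # three of the pixels come out red

# Swap the crayon: change palette entry 1 and *everything* drawn with
# index 1 changes color, without touching the image data at all.
palette[1] = (0, 255, 0)          # red becomes green
print(rendered(image, palette))   # the same three pixels are now green
```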
After that, computers started to switch to 16-bit, 24-bit, and even 32-bit color modes. With these modes, you didn't pick a set of crayons to draw with. Instead, you could use any color from the crayon box for any pixel on the screen. With 16-bit color, you had 5 bits for red, 5 or 6 bits for green, and 5 bits for blue. With 5 bits, you have 32 different numbers, so you could have 32 brightness levels for red, for green, and for blue. That gave you about 32 thousand different colors (65 thousand if green got the sixth bit).
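As an illustration (a sketch in plain Python, not tied to any particular graphics API), here's how an ordinary 8-bit-per-channel color gets squeezed into a 16-bit RGB565 pixel, and how much precision gets thrown away:

```
def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B values into one 16-bit word:
    5 bits red, 6 bits green, 5 bits blue."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(pixel):
    """Expand the 16-bit word back to approximate 8-bit values."""
    r = (pixel >> 11) & 0x1F
    g = (pixel >> 5) & 0x3F
    b = pixel & 0x1F
    # Scale back up to the 0-255 range.
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

original = (200, 120, 77)
packed = pack_rgb565(*original)
print(hex(packed))             # the whole color fits in 16 bits
print(unpack_rgb565(packed))   # close to (200, 120, 77) but not exact:
                               # only 32 (or 64) distinct levels survive
```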
With 24-bit color, you used 8 bits per color, which works out to 256 brightness levels per color. That's what JPG files use. It gives you a choice of just over 16 million different colors for every pixel. That's probably more colors than you can see, so there isn't much point in displaying more.
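The arithmetic, in case you want to check it yourself:

```
levels_per_channel = 2 ** 8             # 8 bits per color -> 256 levels
total_colors = levels_per_channel ** 3  # red x green x blue combinations
print(levels_per_channel)               # 256
print(total_colors)                     # 16777216 -- "just over 16 million"
```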
32-bit exists for two reasons: marketing hype, and the fact that it's easy for computers to work with. Computers like working in chunks of 16 and 32 bits, so it's convenient for them. In most 32-bit modes you still get 8 bits per color, with the remaining 8 bits left unused or reserved for transparency; some hardware instead packs 10 bits per color (1,024 different brightness levels) into those 32 bits. That sounds better, but the DAC (the part that converts the numbers into an analog color signal) in just about every computer can really only handle 8 bits per color, so the extra bits are pretty much wasted. In fact, many cheaper LCD displays can only deal with 6 bits.
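Here's a sketch (plain Python again, assuming the common layout where 8 of the 32 bits are unused or used for transparency, rather than the 10-bit-per-color variant) of what actually lives inside one 32-bit pixel:

```
def unpack_xrgb8888(pixel):
    """Split a 32-bit pixel into its pieces: 8 unused/alpha bits,
    then 8 bits each of red, green, and blue."""
    unused = (pixel >> 24) & 0xFF    # carries no color information
    r = (pixel >> 16) & 0xFF
    g = (pixel >> 8) & 0xFF
    b = pixel & 0xFF
    return unused, r, g, b

pixel = 0x00C87850                   # one 32-bit value as stored in memory
print(unpack_xrgb8888(pixel))        # (0, 200, 120, 80): only 24 of the
                                     # 32 bits describe the color
```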
Clearer? Probably not, but it's not important. The important bits to remember are that a JPG has only 256 brightness levels for each color, which is all you need when displaying a picture. Having more brightness levels in a RAW file is useful if you need to manipulate the image. But you can't keep adding more levels of brightness forever. Like your eye, your sensor has a limit to how many colors it can realistically distinguish. For the 14-bit RAW cameras on the market today, the writer of the article concluded that the extra 2 bits slice the colors more finely than the sensor can really see, so they just waste space and processor time without adding value.
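To make that last point concrete, here's one more plain-Python sketch (nothing camera-specific about it) showing what happens when high-bit-depth values finally get displayed at 8 bits: whole runs of RAW levels collapse onto a single output level, which is why the extra precision only helps while you're still editing.

```
def to_8bit(value, source_bits):
    """Map a value with source_bits of precision down to 0-255."""
    return value >> (source_bits - 8)

# 64 consecutive 14-bit levels...
raw_levels_14bit = range(8192, 8192 + 64)
# ...all land on the same 8-bit value once displayed or saved as a JPG.
print({to_8bit(v, 14) for v in raw_levels_14bit})   # {128}

# A 12-bit file collapses 16 levels per output value instead of 64,
# but the displayed result is identical.
print(to_8bit(2048, 12), to_8bit(8192, 14))         # 128 128
```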