I already understand how the red, green, and blue photosites collect the light and send it to be recorded in a digital format. My understanding was that ISO settings work by changing the signal gain on the sensor, not because of how the camera processes the data. Though I can see how a refinement in processing could yield a cleaner image and expand the ISO range, since we all know that RAW isn't really straight from the sensor but rather minimally processed. I still question whether it could give you a whole stop though.
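To show what I mean by gain on the sensor versus processing, here's a toy sketch (Python, with made-up voltage and noise numbers, not any real camera's figures). The idea: amplifying the pixel voltage before the ADC lifts the signal above the converter's noise floor, while multiplying afterwards amplifies that noise right along with the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

FULL_SCALE_V = 3.0                # assumed sensor output range
BITS = 12                         # assumed ADC depth
STEP_V = FULL_SCALE_V / 2**BITS   # volts per digital code

def quantize(volts):
    """Round analog voltages to the nearest ADC code."""
    return np.clip(np.round(volts / STEP_V), 0, 2**BITS - 1)

GAIN = 8.0                            # pretend 3 stops of ISO boost
signal = np.full(100_000, 0.010)      # a dim scene: 10 mV of "light"
read_noise = rng.normal(0, 0.0005, signal.size)  # on-pixel noise, made up
adc_noise = rng.normal(0, 0.002, signal.size)    # converter noise, made up

# Analog gain: amplify the pixel voltage before the ADC sees it.
analog = quantize(GAIN * (signal + read_noise) + adc_noise)

# Digital gain: convert first, then just multiply the numbers.
digital = GAIN * quantize(signal + read_noise + adc_noise)

for name, out in [("analog gain", analog), ("digital gain", digital)]:
    print(f"{name}: mean={out.mean():.0f} codes, noise={out.std():.1f} codes")
```

On this toy model the analog path comes out noticeably cleaner, which matches my intuition that real ISO changes happen in the gain stage rather than in processing.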
It's interesting, because since I first asked the question I've been reading up, and I've found something like five explanations that contradict each other at least a little bit, all from "reputable" sources. That makes it hard to find accurate information. Then again, in the end an extra stop is an extra stop, so does it matter how it happens?
Most sources focus on what the camera does, not so much on how it does it, and the 'how' of this gets really technical. Here's my attempt... Be warned: I don't know anything specific to image sensors, but I design circuits using Analogue-to-Digital Converters (ADCs) for a number of other applications.
Okay, so (generally speaking) your RAW file is a direct record of the charge state on each pixel of the image sensor, yes? (or close enough for what we are doing here?)
That sensor outputs its information as an analog signal: a specific voltage within a range of maybe 0 to 3 volts. This voltage is resolved at very fine resolution, so for each color channel (RGB or whatever) a pixel reporting 1.117 V is reporting a very different value than one reporting 1.11659 V.
Your RAW image file is a mostly unprocessed recording of each of these values for each pixel. But you can't store analog values on digital media, so most image sensors have an ADC as a component.
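To make that concrete, here's a minimal sketch of the conversion step, assuming a 0-3 V range and a 14-bit ADC (a common RAW bit depth; the real figures vary by camera):

```python
FULL_SCALE_V = 3.0
BITS = 14
LEVELS = 2**BITS                  # 16384 distinct codes
STEP_V = FULL_SCALE_V / LEVELS    # ~183 microvolts per code

def adc(voltage):
    """Map an analog voltage to the nearest digital code."""
    code = round(voltage / STEP_V)
    return max(0, min(LEVELS - 1, code))

# The two example voltages from above really do land on different codes:
print(adc(1.117))    # 6100
print(adc(1.11659))  # 6098
```

At 14 bits, any two voltages more than about 183 microvolts apart get distinct values in the file, which is the kind of fine resolution I'm talking about.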
Now SkaGoat says Canon moved the ADC off the image sensor and onto the DIGIC. I can see this speeding up the data pipeline if the DIGIC can handle a more capable ADC. I'm guessing the sensitivity gain comes from clearing the state of the sensor faster and increasing the sample rate. The sensitivity comes from how much light the sensor can record within that shutter speed: each of those photon receptors must be cleared in order to record more light, and they cannot be cleared until their data is converted to a digital value and recorded to the buffer.
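To put some completely made-up numbers on that guess (neither figure comes from any real Canon spec, it's just to show the scaling):

```python
# Back-of-the-envelope readout time: pixel count divided by ADC sample rate.
PIXELS = 20_000_000  # a hypothetical 20 MP sensor

for label, samples_per_sec in [("slower ADC", 100e6), ("faster ADC", 400e6)]:
    readout_ms = PIXELS / samples_per_sec * 1000
    print(f"{label}: full-sensor readout in {readout_ms:.0f} ms")
```

A faster converter means each photosite spends less time waiting to be cleared, which is at least consistent with the idea that a more capable ADC could buy you some usable sensitivity.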
Maybe I've made this worse. Probably not the best simplification of this. Get what ya pay for is what I say.