[sane-devel] Canon LIDE35 color and motor issues

Pierre Willenbrock pierre at pirsoft.dnsalias.org
Tue Mar 7 20:10:12 UTC 2006


Hi Luke,

Luke Campagnola schrieb:
> First I want to thank everybody for their great work on the genesys backend, 
> we really appreciate your hard work! 
> Now the complaining bit. I've been comparing the sane driver to the windows 
> driver and I've come up with a few notes:
>  - The windows driver seems to have much smarter control of the stepper motor. 
> This is just a minor issue, but at any scan speed, the head is constantly 
> starting, stopping, reversing, and starting again under sane.

I guess it looks at the type of the USB connection and increases the
time taken per line if the connection is not a high-speed connection.
That lowers the data rate, so the scanner does not need to backtrack.
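
To illustrate with back-of-the-envelope numbers (a sketch, not backend
code; the usable-throughput factor and the line size are assumptions):

  /* Minimum line period so the USB link can drain one scan line before
   * the next one is produced; otherwise the buffer fills up and the
   * head has to stop and backtrack. */
  #include <stdio.h>

  int main(void)
  {
      const double full_speed_Bps = 12e6 / 8 * 0.8;  /* USB 1.1, assumed 80% usable */
      const double high_speed_Bps = 480e6 / 8 * 0.8; /* USB 2.0, assumed 80% usable */
      const double bytes_per_line = 2552.0 * 3;      /* example 8-bit color line    */

      printf("min line period, full speed: %.2f ms\n",
             bytes_per_line / full_speed_Bps * 1000.0);
      printf("min line period, high speed: %.2f ms\n",
             bytes_per_line / high_speed_Bps * 1000.0);
      return 0;
  }

At full speed a line can only leave the scanner every few milliseconds,
so the head must either move slowly or backtrack; at high speed the
link is not the limit.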

>  - Color reproduction is better with the sane driver than the windows driver, 
> which tends to boost the reds way too much (yaay)
>  - Brightness reproduction is quite poor with the sane driver. It seems that 
> darker colors just drop off to black very quickly. I've posted a sample scan 
> on my website here: http://luke.no-ip.org/tmp/color_comparison.png
> There are four images--the top row is SANE, bottom row is windows. The left 
> column shows the original scans and the right column has the contrast cranked 
> up. You can see pretty clearly in the top-right image that a lot of the photo 
> has been lost under SANE. 

A better tool for determining whether the bright and dark colors are
correctly reproduced is a histogram. There you can clearly see whether
there is still room at the bottom and top of the histogram (not too
much, as you still want some usable color values ;-)).
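
A minimal sketch of such a check, assuming an 8-bit grayscale buffer
(any image tool's histogram view shows the same thing):

  /* Build the histogram and report how much room is left at the dark
   * and bright ends, and how many pixels are piled up at 0 and 255. */
  #include <stdio.h>
  #include <stddef.h>

  void check_headroom(const unsigned char *pixels, size_t count)
  {
      unsigned long hist[256] = {0};
      size_t i;
      int lo = 0, hi = 255;

      for (i = 0; i < count; i++)
          hist[pixels[i]]++;

      while (lo < 255 && hist[lo] == 0) lo++;  /* darkest value used   */
      while (hi > 0   && hist[hi] == 0) hi--;  /* brightest value used */

      printf("used range: %d..%d\n", lo, hi);
      printf("pixels at 0: %lu, at 255: %lu\n", hist[0], hist[255]);
  }

A big pile at 0 means dark colors got cut off; a big pile at 255 means
bright colors got cut off.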

This reveals that dark colors are cut off for the sane driver, but it
also shows that the windows driver cuts off bright colors.

Anyway, it points to an incorrect calibration on the sane side.

> If I have time, I'll tinker with some of the exposure registers (EXPR, EXPG, 
> EXPB) and see if that helps.. any ideas for other places I might look?

These are calibrated to get good values from the CCD while not
overexposing it.

I will now go into detail on what happens during processing the image.

The overall flow of color information is this:
First, light with a specific intensity falls on your original. The
light reflects from the original and hits (through some optics) a CCD
chip. The CCD chip accumulates the incoming light and produces a
certain voltage. The voltage is largely proportional to the intensity,
but if the intensity is too high the CCD saturates and the voltage is
much lower than proportionality would suggest.

After some time the voltages of the CCD are stored away into some
buffers (sample and hold, I guess). The scanning logic then goes
sequentially through all pixels, requesting the voltage. Each voltage
is fed into an A/D converter, which can adjust the analog voltage by
adding an offset and multiplying by a gain. Then it converts the
voltage into a digital value.
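
In other words, something like this (a sketch only; the clamping and
the exact order of offset and gain are my assumptions, not the chip's
documented behaviour):

  #include <stdint.h>

  /* Add an offset to the analog voltage, multiply by a gain, clamp to
   * the converter's range, then quantize to a 16-bit code. */
  uint16_t ad_convert(double voltage, double offset, double gain, double v_ref)
  {
      double v = (voltage + offset) * gain;    /* analog conditioning */
      if (v < 0.0)   v = 0.0;
      if (v > v_ref) v = v_ref;
      return (uint16_t)(v / v_ref * 65535.0);  /* digital value */
  }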

This value is then adjusted digitally for each pixel to remove
irregularities in the scanning optics, which would otherwise lead to a
regular pattern of brighter and darker vertical lines. This is called
shading correction. The final step is a gamma correction.


In the case of the LiDE 35, the light source is a colored LED, which
blinks for a fraction of a second. The duration the LED is lit is our
exposure time, giving us the ability to calibrate the intensity. The
output voltage is then moved to the A/D converter, but only one channel
is connected, so the same offset and gain are used for all colors. This
gives 16 bits worth of intensity information. The shading correction,
on the other hand, is per color and pixel. It is very similar to what
the A/D does, adding an offset and multiplying by a gain.
Gamma is adjusted with one table per color, mapping a range of
intensities to an 8-bit value.
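
Put as a sketch (the data layout and the use of the top 8 bits to pick
the gamma range are my assumptions, not the backend's actual
representation):

  #include <stdint.h>

  /* Per-color/per-pixel shading correction (offset + gain, like the A/D
   * stage but digital), then a per-color gamma table with 256 ranges
   * mapping the corrected intensity to an 8-bit value. */
  uint8_t correct_pixel(uint16_t raw,
                        double shading_offset,     /* per color and pixel */
                        double shading_gain,       /* per color and pixel */
                        const uint8_t gamma[256])  /* one table per color */
  {
      double v = (raw - shading_offset) * shading_gain;
      if (v < 0.0)     v = 0.0;
      if (v > 65535.0) v = 65535.0;
      return gamma[(uint16_t)v >> 8];  /* top 8 bits pick one of 256 ranges */
  }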


This leads to the following variables that can be tuned:
* exposure for each color
* analog offset for all colors
* analog gain for all colors
* digital offset for each color and pixel
* digital gain for each color and pixel
* gamma table for 256 ranges

The exposure needs to be as high as possible without saturating the
CCD, in order to get the best possible voltage range out of it.

At the same time, all three colors should lead to nearly the same
maximum voltage, as there is only one channel through the A/D
converter whose offset and gain can be changed.

Analog offset and gain should map the voltage to nearly the full range
of 16-bit values. Again, it is better to leave the highest/lowest
values unused than to lose voltage values by clipping.
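
The arithmetic behind that, as a sketch (plain math with example
numbers, not the chip's register encoding):

  #include <stdio.h>

  /* From the codes read for the black and the white calibration area,
   * compute the offset/gain that map them to nearly the full 16-bit
   * range, leaving a little room at both ends.  All numbers are
   * examples, not measured LiDE 35 values. */
  int main(void)
  {
      double black_code = 1200.0, white_code = 43000.0; /* example readings */
      double target_lo  = 256.0,  target_hi  = 65279.0; /* leave some room  */

      double gain   = (target_hi - target_lo) / (white_code - black_code);
      double offset = target_lo - black_code * gain;

      printf("gain %.3f, offset %.1f\n", gain, offset);
      printf("black -> %.0f, white -> %.0f\n",
             black_code * gain + offset, white_code * gain + offset);
      return 0;
  }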

The shading mapping works as described above. Just before the shading
mapping is done, the averaging takes place: a set of pixels is averaged
to reduce the resolution for transport. For each set of averaged pixels
only one shading data entry is used; the other entries for this set are
ignored.
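
Roughly like this (a sketch; which entry of a set is used is my
assumption):

  #include <stdint.h>
  #include <stddef.h>

  /* Each set of 'factor' input pixels is averaged into one output
   * pixel.  Only one shading entry per set is used; here the entry of
   * the first pixel of the set (an assumption). */
  void average_line(const uint16_t *in, uint16_t *out,
                    size_t in_pixels, unsigned factor)
  {
      size_t set;
      for (set = 0; set < in_pixels / factor; set++) {
          uint32_t sum = 0;
          unsigned i;
          for (i = 0; i < factor; i++)
              sum += in[set * factor + i];
          out[set] = (uint16_t)(sum / factor);
          /* shading then uses entry [set * factor]; the entries for the
           * other pixels of this set are ignored */
      }
  }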

So much for the differences from the general model; now on to the calibration.

The exposure is calibrated by repeatedly scanning a white calibration
area, adjusting each color's exposure to get nearly equal intensities.
Oversaturation is taken care of by an upper limit on the exposure.
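
As a sketch (the target level, the step rule and the helper
scan_white_average are placeholders/assumptions, not backend code):

  #define EXPOSURE_MAX 65535u  /* assumed upper limit */

  /* hypothetical helper: scans the white strip with the given exposures
   * and returns the average intensity per color */
  void scan_white_average(const unsigned exposure[3], double avg[3]);

  /* Scan the white strip, nudge each color's exposure toward a common
   * target intensity, clamp at the upper limit so the CCD never gets
   * oversaturated. */
  void calibrate_exposure(unsigned exposure[3])
  {
      const double target = 50000.0;  /* assumed common target level */
      double avg[3];
      int pass, c;

      for (pass = 0; pass < 5; pass++) {
          scan_white_average(exposure, avg);
          for (c = 0; c < 3; c++) {
              double e = exposure[c] * target / avg[c];  /* proportional step */
              if (e > EXPOSURE_MAX) e = EXPOSURE_MAX;
              exposure[c] = (unsigned)e;
          }
      }
  }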

The analog offset/gain is calibrated by first setting the gain to a
reasonably large value, then repeatedly scanning a black calibration
area. This yields the analog offset. Then a white calibration area is
scanned to determine the analog gain.

| Note that calibrating the exposure and the analog offset/gain is kind
| of a chicken-and-egg problem: when calibrating the exposure, we need a
| working analog offset/gain, and when calibrating the analog
| offset/gain, we need a working exposure time.

| The current solution is to first set the exposure to a reasonable
| value and calibrate the analog offset/gain. Then calibrate the
| exposure times to give nearly equal values for each color, then
| calibrate the analog offset/gain again to get useful value ranges.
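
Or, as a sketch of that ordering (all function names are hypothetical
placeholders, not the backend's real entry points):

  void set_default_exposure(void);          /* hypothetical */
  void calibrate_analog_offset_gain(void);  /* hypothetical */
  void calibrate_exposure_times(void);      /* hypothetical */

  void calibrate_frontend(void)
  {
      set_default_exposure();          /* 1. reasonable starting exposure   */
      calibrate_analog_offset_gain();  /* 2. offset/gain with that exposure */
      calibrate_exposure_times();      /* 3. equalize the three colors      */
      calibrate_analog_offset_gain();  /* 4. redo with the final exposure
                                             to get useful value ranges     */
  }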

The shading calibration is done by scanning the whole calibration area
and averaging the black and the white parts for each vertical line,
giving one black average and one white average per vertical line. These
two averages are then used to calculate the digital offset/gain for
that line.
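
A sketch of that computation (white_target is an assumed full-scale
value; the fixed-point form the chip actually expects is not shown):

  #include <stddef.h>

  /* Per vertical line (and per color): derive the digital offset/gain
   * from the black and white averages, so that black maps near 0 and
   * white near the chosen full-scale value. */
  void compute_shading(const double *black_avg, const double *white_avg,
                       double *offset, double *gain,
                       size_t columns, double white_target)
  {
      size_t x;
      for (x = 0; x < columns; x++) {
          offset[x] = black_avg[x];                                 /* subtracted    */
          gain[x]   = white_target / (white_avg[x] - black_avg[x]); /* multiplied by */
      }
  }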

To get an unshaded image, you can disable shading correction in
gl841_init_regs_for_scan by changing the flags (last parameter to
gl841_init_scan_regs, currently 0) to include SCAN_FLAG_DISABLE_SHADING.

If the image then still shows dark colors cut off, the offset/exposure
calibration is to blame. Otherwise the shading calibration calculates
incorrect data (which I suspect).

I should put this lengthy description online somewhere..

Regards,
  Pierre


