[sane-devel] artec e+pro- vertical lines- some samples

kilgota at banach.math.auburn.edu
Sun Jan 13 22:35:55 UTC 2008



On Sun, 13 Jan 2008, m. allan noah wrote:

> On 1/12/08, kilgota at banach.math.auburn.edu wrote:
>> What I see in these images is, something like what I suspected could be a
>> possibility, or it could be something else.
>
> well, that about covers all the possibilities :)
>

Yep. We should never neglect any possibility, because the problem always 
turns out to come from the one we neglected. :)

>> It is difficult to tell
>> without the actual raw data. I assume (without actually knowing) that
>> there is a SANE option to capture the raw data and dump to a file, with no
>> processing at all.
>
> You very well might BE looking at the raw data.

Ah, so.

> Since scanners use a
> 1-D instead of a 2-D array, they don't have any need to conserve
> internal storage space, and their CPUs are generally quite slow, they
> don't typically use much compression.

Perhaps I should change my line of work, then. Compression is the big 
bugbear for camera support. It can sometimes take months or years to 
figure out, and the manufacturers of the chips assuredly do not 
cooperate, either. If you rarely have to deal with that, then you people 
are lucky.

> Some machines do have JPEG
> compression, but that requires a bit more horsepower. So, all in all,
> I think you are pretty far off-base.
>

OK. From your description of the problem and judging from the results, 
you are probably right. Here is what happened on this end yesterday:

I tried to work with the images anyway, as I got them. I wrote a little 
program which extracts one color at each pixel location according to a 
standard Bayer pattern, thus artificially constructing a "raw" file, and 
then tries to put that file back together using the libgphoto2 Bayer 
interpolation routine.
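The experiment can be sketched roughly as follows. This is only an illustration of the idea, assuming an RGGB tiling and a simple bilinear interpolation in place of the actual libgphoto2 routine, whose behavior may well differ:

```python
# Hypothetical sketch of the deconstruct/reconstruct experiment: keep one
# Bayer-sampled color per pixel (RGGB tiling assumed), then rebuild the
# image with a crude bilinear interpolation. This stands in for the real
# libgphoto2 routine; it is not the same code.
import numpy as np

def to_bayer(rgb):
    """Keep only the Bayer-sampled channel at each pixel (RGGB tiling)."""
    h, w, _ = rgb.shape
    raw = np.empty((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R on even rows, even cols
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G on even rows, odd cols
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G on odd rows, even cols
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B on odd rows, odd cols
    return raw

def demosaic(raw):
    """Crude reconstruction: average each channel's known 3x3 neighbors."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True
    masks[0::2, 1::2, 1] = True
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True
    for c in range(3):
        chan = np.where(masks[:, :, c], raw, 0).astype(float)
        cnt = masks[:, :, c].astype(float)
        # 3x3 box sums via zero-padding and shifted slices
        pc, pn = np.pad(chan, 1), np.pad(cnt, 1)
        s = sum(pc[i:i+h, j:j+w] for i in range(3) for j in range(3))
        n = sum(pn[i:i+h, j:j+w] for i in range(3) for j in range(3))
        out[:, :, c] = s / np.maximum(n, 1)
    return out
```

By permuting which channel is assigned to which position in `to_bayer`, one can try every plausible tiling, which is essentially what I described above.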

No matter what Bayer tiling was assumed to be present in the "true" raw 
data, and no matter whether it was assumed that there might be some 
column-switching going on or not, the results of the 
deconstruction-reconstruction always came out looking worse than the 
original. Perhaps interestingly, many of the attempts resulted in a 
diagonal pattern, or in a lattice pattern with diagonals going in both 
directions (so that there is an X every place that two diagonals cross).

One of the amazing things, too, is that the vertical lines in the images 
were not visible in hexdumps of the images. Or perhaps I did not try 
hard enough; I could, for example, have rearranged the hexdump output 
into the same rows and columns as in the original image. One would think 
that effects so dramatic in an image ought to be visible in the data 
readings.
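The rearrangement idea can be sketched like this: stripes that are invisible in a flat hexdump often become obvious once the bytes are reshaped into their original rows and each column is averaged. The line width and the threshold here are illustrative assumptions, not values from the actual scans:

```python
# Hypothetical sketch: reshape a raw grayscale dump into rows (width must
# be known or guessed) and compute the mean of each column. A column whose
# mean departs sharply from its neighbors marks a vertical stripe.
import numpy as np

def column_profile(raw_bytes, width):
    """Reshape a raw dump into rows and return the mean of each column."""
    rows = len(raw_bytes) // width
    img = np.frombuffer(raw_bytes[:rows * width], dtype=np.uint8)
    return img.reshape(rows, width).mean(axis=0)

def stripe_columns(profile, threshold=10.0):
    """Columns whose mean differs from the neighbor average by > threshold."""
    neighbors = (np.roll(profile, 1) + np.roll(profile, -1)) / 2
    return np.flatnonzero(np.abs(profile - neighbors) > threshold)
```

A fixed set of flagged columns across several different scans would point at the sensor rather than at the scanned material.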

> As a consequence of their low cost/power, many machines cannot
> self-calibrate.

OK, so we are working with a one-dimensional sensor and not with a lens 
which takes an image of a large area all at once. Moreover, the way this 
_could_ work is that there is a bar across the scanner which gets moved 
"vertically" from one row to the next, and on the bar there is a sliding 
sensor which takes a sample at each column location across the image, 
sampling R, G, and B at the same time and place. Just out of 
intellectual curiosity, what precisely would be involved in this 
"calibration" you mention? Am I thinking about this correctly now, and 
is the problem with this scanner that the horizontal "jump" from one 
pixel to the next is not set correctly and thus needs to be adjusted?
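As I understand it, such calibration typically means correcting for the fact that each sensor element has its own offset and gain: the scanner reads a dark frame and a known white strip, then normalizes every column against them. A minimal sketch of that idea, with illustrative names and reference images (not anything from the actual backend):

```python
# Hedged sketch of per-element (per-column) calibration: scan a dark frame
# and a known white strip, then correct each column's offset and gain so
# that every sensor element reports the same value for the same brightness.
import numpy as np

def calibrate(scan, dark, white, white_level=255.0):
    """Apply per-column offset/gain correction to a grayscale scan."""
    dark_col = dark.mean(axis=0)              # per-column black offset
    white_col = white.mean(axis=0)            # per-column response to white
    gain = white_level / np.maximum(white_col - dark_col, 1e-6)
    return np.clip((scan - dark_col) * gain, 0, white_level)
```

Uncorrected per-column gain differences would produce exactly the kind of fixed vertical stripes under discussion, which may be why calibration is the suspect here.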

> That is my guess for the culprit here.
>
> Philip- what are the three 'colored' scans of? I ask because the blue
> has some sort of diagonal pattern the others don't have, almost like a
> denim material.

Theodore Kilgore


