[sane-devel] artec e+pro- vertical lines- some samples
m. allan noah
kitno455 at gmail.com
Mon Jan 14 00:34:45 UTC 2008
On 1/13/08, kilgota at banach.math.auburn.edu
<kilgota at banach.math.auburn.edu> wrote:
> On Sun, 13 Jan 2008, m. allan noah wrote:
> > On 1/12/08, kilgota at banach.math.auburn.edu wrote:
> >> What I see in these images is something like what I suspected could
> >> be a possibility, or it could be something else.
> > well, that about covers all the possibilities :)
> Yep. We should never neglect to do that, because the problem will always
> be coming from the one we neglected. :)
> >> It is difficult to tell
> >> without the actual raw data. I assume (without actually knowing) that
> >> there is a SANE option to capture the raw data and dump to a file, with no
> >> processing at all.
> > You very well might BE looking at the raw data.
> Ah, so.
> > Since scanners use a
> > 1-D instead of a 2-D array, they don't have any need to conserve
> > internal storage space, and since their CPUs are generally quite slow,
> > they don't typically use much compression.
> Perhaps I should change my line of work then. Compression is the big
> bugbear for camera support. It can take months or years to figure out,
> sometimes, and assuredly the manufacturers of the chips do not cooperate,
> either. If it is very unusual that you have to deal with this, then you
> people are lucky.
ahh- but we have many more parameters, transparency units,
filmadapters, sheet feeders, motor slope tables, and calibration, all
of which must often be reverse engineered, taking months or years to
figure out.
> One of the amazing things, too, is that the vertical lines in the images
> were not visible in hexdumps of the images. Or, perhaps I did not try hard
> enough by doing something like actually rearranging the resulting hexdumps
> according to rows and columns as they were in the original. One would
> think that results which are so dramatic in an image ought to be visible
> in the data readings.
the human eye is very sensitive to these sorts of changes. (look at
the jpeg compression algo...) it could have been a very slight
shift in the values.
> > As a consequence of their low cost/power, many machines cannot
> > self-calibrate.
> OK, so we are working with a one-dimensional scanner reader and not with a
> lens which takes an image of a large area. Not only this, but also the way
> this _could_ work is that there is a bar across the scanner, which gets
> moved "vertically" from one row to the next, and on the bar there is a
> sliding sensor, which takes a sample at each column location across the
> image, and the sample is of R and G and B all at the same time and same
> place.
no, i said 1-D, not a point. they are always a linear array, with
thousands of cells, often in little groups, and the break between the
groups is often shaded differently.
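To illustrate the point above, here is a minimal sketch (with made-up cell
counts and gains, not taken from any real scanner) of why a linear array
whose cell groups respond slightly differently produces vertical stripes:
every image column is read by the same physical cell on every pass, so a
per-group error repeats down the whole page as a vertical band.

```python
# Hypothetical linear CCD: 12 cells in groups of 4, each group with a
# slightly different uncalibrated gain. All numbers are invented.
CELLS, GROUP = 12, 4
group_gain = [1.00, 0.95, 1.05]   # per-group sensitivity error
target = 128.0                    # uniform gray target being scanned

# what the sensor actually reports for a perfectly uniform target
raw = [target * group_gain[i // GROUP] for i in range(CELLS)]

# cells on either side of a group boundary disagree, even though the
# scanned target is uniform -> a visible stripe at every boundary
print(raw[3], raw[4])   # e.g. 128.0 vs ~121.6
```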
> Just out of intellectual curiosity, precisely what would be
> involved in this "calibration" that you are mentioning? Am I thinking
> correctly now, and the problem with this scanner is that the horizontal
> "jump" from one pixel to the next is not set correctly and thus needs to
> be adjusted?
no, depending on the scanner type, you have offset and gain params for
the analog front-end chip, then additional offset and gain on a
per-cell (or even per-cell-per-channel) basis, plus you might have
lamp timing and exposure settings too, and who knows what else.
these are usually determined by doing a scan of some hidden white
stripe inside the machine, lamp on and off, or separate white and
black stripes, or, or, or...
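As a rough sketch of the stripe-based scheme described above (the numbers
and the simple linear model are assumptions for illustration, not any
particular backend's method): scan the black reference with the lamp off
and the white reference with it on, then derive an offset and gain for
every cell so that each cell maps its own dark and white readings to the
same output levels.

```python
# Invented per-cell readings from the two reference scans.
dark  = [10.0, 12.0, 9.0]      # lamp-off reading per cell
white = [200.0, 180.0, 210.0]  # white-stripe reading per cell
TARGET = 255.0                 # desired output for white

# per-cell calibration: subtract the dark level, scale white to TARGET
offset = dark[:]
gain = [TARGET / (w - d) for w, d in zip(white, dark)]

def correct(raw, cell):
    """Apply the per-cell offset/gain correction to one raw sample."""
    return (raw - offset[cell]) * gain[cell]

# after correction, every cell maps its own white level to TARGET,
# so the per-cell differences (the vertical stripes) cancel out
print([round(correct(white[i], i), 1) for i in range(3)])
```

A real machine does this per channel as well, and folds in exposure and
lamp settings, but the offset/gain step is the core of it.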
don't quit your day job. :)
"The truth is an offense, but not a sin"