[sane-devel] Scan quality enhancements/processing (vs Windows with Fujitsu ScanSnap S1500)
matthew.garman at gmail.com
Wed Jul 5 17:50:44 UTC 2017
Hi Roger, thank you for your thoughts! More comments below...
On Mon, Jul 3, 2017 at 8:19 PM, Roger <rogerx.oss at gmail.com> wrote:
> In my experience, the Windows or proprietary solutions usually utilize
> significant software processing after the original data is acquired from the
> scanner.
I had a hunch that was/is the case.
> Most experienced users (e.g. photographers, ...) tend to desire vanilla scanned
> results or data, for either realistic/exact results or for legal reasons. I
> personally just cannot stand to see a scan of negative media being processed
> with overly saturated colors, rather than seeing the original colors of a
> negative! I personally prefer an original vanilla scanned image for archiving
> purposes. I then, if needed, augment the image later.
> Most people in the Linux/Unix world desire one tool for one specific task.
> Sane/XSane gets the data in the computer. Other separate tools are utilized
> later for improving an image.
This now seems obvious, but it wasn't until your comment. I scan all
these documents almost entirely for archival purposes. 99% of the
time, the documents are scanned and never revisited. Sometimes I
might take a quick look at the occasional document (e.g., when was
that invoice dated?), and as long as it's readable, that's good
enough. I almost never need something that is perfectly cleaned or
print-quality, though it does come up.
So eventually I will need to figure out how to get a document looking
its best, but for now I can simply focus on getting good-quality
scans and saving them in a losslessly-compressed format.
> The command line sane tool is pretty basic in my brief experience. The XSane
> interface seems to perform a better job than some of those command line
> switches, unless somebody else wants to pipe in here. It may be that XSane
> uses the sane program libraries (e.g. routines/functions) a little better to
> get better results than the sane command line tool.
Anyone have any comment on this? I assumed the CLI and GUI versions
did exactly the same thing, but perhaps that's a bad assumption.
> VueScan likely does all you're probably wanting as well, but I think you're
> doing fine using open source. The main reason I use VueScan is for old
> ancestry/genealogy negatives, and VueScan is well proven for photography uses.
> I don't get much time here, and the media is very time sensitive.
I might give that a try, just to see. Looks like they have a free trial.
> Personally, I prefer command line tools; as it's far easier to pipe tasks to
> other utilities. For document processing, you'll probably have a far easier
> time creating a script.
At any rate, now that I'm changing my focus to just getting a good scan
and worrying about post-processing later, I'm finding that getting a
good initial scan isn't so easy.
In particular, now I see that most of my scans have the top few mm cut
off. I set the --page-width and --page-height arguments to be padded
with an extra inch, so my scans have about an inch of "pure
white", but the top is still cut off by a few mm.
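Roughly, here is a sketch of the kind of invocation I mean. Assumptions: the units are mm, --page-width/--page-height are the sane-fujitsu backend options, -x/-y are the standard scanimage scan-area options, and the one-inch pad is just an illustrative value:

```shell
# Sketch only: US Letter is 215.9 x 279.4 mm; pad the height by an
# extra inch (25.4 mm) so the leading edge isn't clipped.
WIDTH=215.9
HEIGHT=$(awk 'BEGIN { printf "%.1f", 279.4 + 25.4 }')
# --page-width/--page-height are sane-fujitsu backend options;
# -x/-y set the scan area (standard scanimage geometry options).
echo "scanimage --page-width $WIDTH --page-height $HEIGHT -x $WIDTH -y $HEIGHT --resolution 300 --format=tiff"
```

(Shown with echo rather than run, since the exact option set depends on the backend; TIFF keeps the archive lossless.)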
I haven't yet messed with the scan area params (t, l, x, y), because
that leads me to a more general question: is it necessary to always
specify the paper size and scan area for every single document? While
I agree most processing can be a separate task, I feel like the
hardware and/or software ought to be able to auto-detect the document
size and scan area.
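For what it's worth, my understanding (worth verifying) is that some backends can do this detection in software. A sketch, assuming the sane-fujitsu --swcrop/--swdeskew options are present on this backend/version; `scanimage -A` lists whatever options the backend actually reports:

```shell
# List all options the connected backend exposes (names vary by backend):
#   scanimage -A
# If software crop/deskew options are listed (sane-fujitsu reports
# --swcrop and --swdeskew on recent versions), let the backend find the
# paper edges instead of hard-coding the scan area:
ARGS="--swcrop=yes --swdeskew=yes --resolution 300 --format=tiff"
echo "scanimage $ARGS"
```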