[Pkg-privacy-commits] [nautilus-wipe] 75/224: Fix typos and slight reformulation

Ulrike Uhlig u-guest at moszumanska.debian.org
Thu Jul 7 19:45:36 UTC 2016


This is an automated email from the git hooks/post-receive script.

u-guest pushed a commit to branch master
in repository nautilus-wipe.

commit 7bc0b8d0ff905ad7bd5b60cfdea8cd3f96733f33
Author: Colomban Wendling <ban at herbesfolles.org>
Date:   Tue Mar 30 16:53:35 2010 +0200

    Fix typos and slight reformulation
---
 help/C/nautilus-srm.txt | 54 ++++++++++++++++++++++++-------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/help/C/nautilus-srm.txt b/help/C/nautilus-srm.txt
index f415929..47f0093 100644
--- a/help/C/nautilus-srm.txt
+++ b/help/C/nautilus-srm.txt
@@ -24,18 +24,17 @@ THC[1].
 
 ## Deleting doesn't affect data
 
-When you delete a file, even when bypassing or emptying the trash, you
-only tell your [computer/OS/???] that you don't care anymore for the
-file. The file's entry is removed from the list of existing files. The
-content of the file ??? the actual data ??? remains on the storage
-medium. The data will remain there until the operating system reuses
-the space for new data.
+When you delete a file, even when bypassing or emptying the trash, you only
+tell your computer that you no longer care about the file. The file's
+entry is removed from the list of existing files but the content of the file
+remains on the storage medium. The data will remain there until the
+operating system reuses the space for new data.
 
 It could take weeks, months or years before this space is reused for
 new data, finally overwriting the content of the deleted file. Until
 then, it's possible to recover the old content by reading the data
 directly on the storage media. That's quite a simple operation, automated
-by number of softwares.
+by numerous programs.
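To make this concrete, here is a minimal Python sketch of such a recovery: it scans a raw filesystem image for a string that belonged to a deleted file. The image path `disk.img` and the marker string are assumed examples.

```python
# Minimal sketch: look for the content of a "deleted" file on a raw
# filesystem image. disk.img and MARKER are assumed examples.
MARKER = b"TOP-SECRET-MARKER"

with open("disk.img", "rb") as img:
    offset = 0
    tail = b""
    while True:
        chunk = img.read(1 << 20)          # scan 1 MiB at a time
        if not chunk:
            print("marker not found")
            break
        data = tail + chunk                # keep an overlap so a marker
        pos = data.find(MARKER)            # split across chunks is seen
        if pos != -1:
            print("deleted content still on disk at byte",
                  offset - len(tail) + pos)
            break
        tail = data[-(len(MARKER) - 1):]
        offset += len(chunk)
```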
 
 
 ## An answer: overwriting data several times
@@ -44,10 +43,10 @@ If you want to make the content of a file really hard to recover, you
 have to overwrite it with other data. But that's not enough. On a
 magnetic hard disk, it's known[2] that the content can still be
 recovered by doing magnetic analysis of the hard disk surface. To
-adress this issue, it's possible to overwrite several times the
-content do be deleted. That process is called "wiping".
+address this issue, it's possible to overwrite the content to be deleted
+several times. That process is called "wiping".
 
-If some sensible files have been already deleted whitout paying
+If some sensitive files have already been deleted without paying
 attention to this issue, some of their data probably remains on the
 storage media. It's thus also useful to wipe all the available free
 space of a storage medium.
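As an illustration, wiping free space can be approximated by filling the filesystem with junk until it is full and then deleting the filler file; the sketch below does a single random pass, where real tools such as `sfill` from the secure-delete suite do several. The mount point `/mnt/usb` is an assumed example.

```python
import os

# Single-pass free-space wipe, minimal sketch. /mnt/usb is an assumed
# mount point of the medium whose free space should be overwritten.
filler = "/mnt/usb/wipe.tmp"
with open(filler, "wb") as f:
    try:
        while True:
            f.write(os.urandom(1 << 20))   # 1 MiB of random junk per write
            f.flush()
            os.fsync(f.fileno())           # push it onto the medium
    except OSError:
        pass   # disk full: every free block has been overwritten once
os.remove(filler)                          # hand the space back
```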
@@ -60,7 +59,7 @@ space of a storage media.
 ## Limitations
 
 This section is quite technical. In a nutshell, there are many limitations, so
-using this tool whithout setting up a complete security policy will probably be
+using this tool without setting up a complete security policy will probably be
 useless.
 
 - Temporary files and disks: lots of programs write temporary and backup files.
@@ -70,20 +69,20 @@ useless.
   part of the hard disk called swap space. Your sensitive data could exist
   there;
 - storage media features: modern storage media often reorganize their content,
-  e.g.  to spread the writings over the media or to hide defectuous[??] places
-  to the [computer/os/???]. Consequently, you can't be sure that the actual place
+  e.g. to spread the writes over the media or to hide defective places from
+  the operating system. Consequently, you can't be sure that the actual place
   occupied by your sensitive data was wiped;
-- journalised file systems: modern file systems log modifications of the files to
-  ease recovering after a crash. This could make wiping inefficient. The same
+- journalized file systems: modern file systems log modifications of the files
+  to ease recovering after a crash. This could make wiping inefficient. The same
   kind of problem exists with redundant file systems (e.g. RAID), file systems
-  that make snapshots or that cache data (e.g. NFS). However, only the names of
+  that make snapshots or that cache data (e.g. NFS). However, only the names of
   the files are logged if you use the default parameters of the standard Linux
   file system (ext3/ext4);
-- old algorythms: the wipe algorythms are old, and they are not guarranteed to
+- old algorithms: the wipe algorithms are old, and they are not guaranteed to
 work as expected on new storage media.
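Some of these limitations can at least be detected before wiping. A small Linux-only check along these lines (it only reads `/proc`, and the list of risky filesystem types is a rough assumption) flags active swap and filesystems on which a wipe may not reach the real data:

```python
# Minimal sketch: warn about active swap and about filesystem types
# (copy-on-write, network) where wiping may be ineffective.
with open("/proc/swaps") as f:
    swaps = f.read().splitlines()[1:]      # first line is a header
if swaps:
    print("active swap, sensitive data may have been paged out:")
    for line in swaps:
        print("  " + line.split()[0])

with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype = line.split()[:3]
        if fstype in ("btrfs", "zfs", "nfs", "nfs4"):
            print(mountpoint, "is", fstype + ": wiping may be ineffective")
```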
 
 
-## The wipe algorythm
+## The wipe algorithm
 
 nautilus-secure-delete enables you to wipe files and free disk space
 from `nautilus` using the `secure-delete` program written by van Hauser
@@ -94,7 +93,7 @@ from `nautilus` using the `secure-delete` program written by van Hauser
      1. The overwriting procedure (in the secure mode) overwrites the data
         38 times. After each pass, the disk cache is flushed.
      2. truncating the file, so that an attacker doesn't know which
-        diskblocks belonged to the file.
+        disk blocks belonged to the file.
      3. renaming the file, so that an attacker can't draw any conclusion
         from the filename about the contents of the deleted file.
      4. finally deleting the file (unlink).
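For illustration only, the four steps translate roughly into the Python sketch below. It does a handful of random passes instead of the full 38-pass pattern, so it is a toy next to the real `secure-delete` tools:

```python
import os

def wipe(path, passes=3):
    """Toy version of the steps above; not a substitute for srm."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))      # 1. overwrite with random data,
            f.flush()
            os.fsync(f.fileno())           #    flushing the disk cache
        f.seek(0)
        f.truncate()                       # 2. truncate: hide the old size
    nameless = os.path.join(os.path.dirname(path) or ".",
                            "0" * len(os.path.basename(path)))
    os.rename(path, nameless)              # 3. rename: hide the old name
    os.remove(nameless)                    # 4. finally delete it (unlink)
```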
@@ -139,7 +138,7 @@ wiping can take hours.
 
 It's possible, but discouraged, to cancel the wipe process. This would kill the
 underlying `secure-delete` program and could lead to strange things like files
-partly overwriten but not deleted or big junk files.
+partially overwritten but not deleted or big junk files.
 
 When the wipe is finished, a dialog should inform you of the success of the
 deletion.
@@ -155,9 +154,10 @@ introduction to data deletion]].
 
 ## Number of passes
 
-You can configure the number of times that the data to be wiped is overwritten by new data.
+You can configure the number of times that the data to be wiped is overwritten
+by new data (see the sketch after this list).
 
-38: Overwriting the data 38 times should prevent data recovery throught
+38: Overwriting the data 38 times should prevent data recovery through
     magnetic analysis of the hard drive surface. This is achieved by the
     following procedure:
 
@@ -174,20 +174,20 @@ You can configure the number of times that the data to be wiped is overwritten b
     values. [FIXME: implications]
 
 1:  Only one random pass is written. Overwriting the data only once should
-    prevent from data recovery by analysing raw data written on the storage
-    media, but is unuseful against magnetic analysis of the hard drive surface.
-    In this mode, only one random pass is written.
+    prevent data recovery by analyzing raw data written on the storage
+    media, but is useless against magnetic analysis of the hard drive surface.
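For reference, these three settings correspond to how the `srm` command from secure-delete is commonly invoked: 38 passes is the default, and, as far as its flags go, `-l` selects the reduced mode and `-l -l` the single random pass. A hypothetical wrapper:

```python
import subprocess

def secure_remove(path, passes=38):
    """Hypothetical wrapper mapping the pass counts above onto srm."""
    cmd = ["srm", "-v"]
    if passes == 2:
        cmd.append("-l")         # lessened security: the two-pass mode
    elif passes == 1:
        cmd += ["-l", "-l"]      # given twice: one random pass only
    cmd.append(path)
    subprocess.run(cmd, check=True)
```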
 
 
 ## Fast and insecure mode (no /dev/urandom, no sync)
 
 Speeds up the wipe at the expense of security:
 
-- use a more predictible but faster pseudo-random number generator;
+- use a more predictable but faster pseudo-random number generator;
 - do not ensure that the overwriting data is actually written to the storage media (see the sketch below).
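A minimal sketch of both shortcuts, using nothing beyond the Python standard library: the fast variant draws junk from the seedable Mersenne Twister instead of `/dev/urandom` and skips the final sync, so the overwrite may never leave the page cache:

```python
import os
import random

def overwrite_once(path, fast=False):
    size = os.path.getsize(path)
    # fast mode: predictable PRNG; normal mode: /dev/urandom
    junk = random.randbytes(size) if fast else os.urandom(size)
    with open(path, "r+b") as f:
        f.write(junk)
        f.flush()
        if not fast:
            os.fsync(f.fileno())   # make sure the junk reaches the medium
        # fast mode skips the sync: the old data may still be on disk
```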
 
 
 ## Last pass with zeros instead of random data
 
-Use zeros for the last overwrite, which is the data that will be actually easy to read. The default is to use pseudo random data.
+Use zeros for the last overwrite, which is the data that will actually be easy
+to read. The default is to use pseudo-random data.
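Continuing the sketch above, a zeroed last pass simply writes null bytes instead of random ones, so what stays readable on the disk looks blank rather than conspicuously scrambled (this appears to match the `-z` flag of `srm`):

```python
import os

def last_pass_zeros(path):
    """Final overwrite with zeros instead of random data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)    # zeros, not random junk
        f.flush()
        os.fsync(f.fileno())       # make the blank pass hit the medium
```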
 

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-privacy/packages/nautilus-wipe.git


