[Pkg-javascript-commits] [node-leveldown] 290/492: add advanced options
Andrew Kelley
andrewrk-guest at moszumanska.debian.org
Sun Jul 6 17:14:10 UTC 2014
This is an automated email from the git hooks/post-receive script.
andrewrk-guest pushed a commit to annotated tag rocksdb-0.10.1
in repository node-leveldown.
commit e39c92b520c9ea7453457a14c9dbf5dd4f557de2
Author: Rod Vagg <rod at vagg.org>
Date: Sat Mar 30 06:12:36 2013 +1100
add advanced options
---
README.md | 18 ++++++++++++++++--
src/database.cc | 10 +++++++++-
src/database.h | 4 ++++
src/database_async.cc | 20 ++++++++++++++------
src/database_async.h | 4 ++++
5 files changed, 47 insertions(+), 9 deletions(-)
diff --git a/README.md b/README.md
index 797f801..c2d5886 100644
--- a/README.md
+++ b/README.md
@@ -54,6 +54,8 @@ Tested & supported platforms
### leveldown#open([options, ]callback)
<code>open()</code> is an instance method on an existing database object.
+The `callback` function will be called with no arguments when the database has been successfully opened, or with a single `error` argument if the open operation failed for any reason.
+
#### `options`
The optional `options` argument may contain:
@@ -64,9 +66,21 @@ The optional `options` argument may contain:
* `'compression'` *(boolean, default: `true`)*: If `true`, all *compressible* data will be run through the Snappy compression algorithm before being stored. Snappy is very fast, and you are unlikely to gain much speed by disabling it, so leave this on unless you have a good reason to turn it off.
-* `'cacheSize'` *(number, default: `8 * 1024 * 1024`)*: The size (in bytes) of the in-memory [LRU](http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) cache with frequently used uncompressed block contents.
+* `'cacheSize'` *(number, default: `8 * 1024 * 1024` = 8MB)*: The size (in bytes) of the in-memory [LRU](http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) cache with frequently used uncompressed block contents.
-The `callback` function will be called with no arguments when the database has been successfully opened, or with a single `error` argument if the open operation failed for any reason.
+**Advanced options**
+
+The following options are for advanced performance tuning. Modify them only if you can prove actual benefit for your particular application.
+
+* `'writeBufferSize'` *(number, default: `4 * 1024 * 1024` = 4MB)*: The maximum size (in bytes) of the log (in memory and stored in the .log file on disk). Beyond this size, LevelDB will convert the log data to the first level of sorted table files. From the LevelDB documentation:
+
+> Larger values increase performance, especially during bulk loads. Up to two write buffers may be held in memory at the same time, so you may wish to adjust this parameter to control memory usage. Also, a larger write buffer will result in a longer recovery time the next time the database is opened.
+
+* `'blockSize'` *(number, default: `4096` = 4K)*: The *approximate* size of the blocks that make up the table files. The size relates to uncompressed data (hence "approximate"). Blocks are indexed in the table file and entry lookups involve reading an entire block and parsing it to discover the required entry.
+
+* `'maxOpenFiles'` *(number, default: `1000`)*: The maximum number of files that LevelDB is allowed to have open at a time. If your data store is likely to have a large working set, you may increase this value to prevent file descriptor churn. To calculate the number of files required for your working set, divide your total data size by 2MB, as each table file is a maximum of 2MB.
+
+* `'blockRestartInterval'` *(number, default: `16`)*: The number of entries before restarting the "delta encoding" of keys within blocks. Each "restart" point stores the full key for its entry; between restarts, the common prefix of the keys for those entries is omitted. Restarts are similar to the concept of keyframes in video encoding and are used to minimise the amount of space required to store keys. This is particularly helpful when using deep namespacing / prefixing in your keys.
--------------------------------------------------------
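For reference, a minimal usage sketch of the options documented above, assuming a database handle created with `leveldown(location)` as described earlier in the README; the location, the tuned values and the working-set arithmetic are illustrative assumptions, not recommendations:

```js
var leveldown = require('leveldown')

// hypothetical location, for illustration only
var db = leveldown('/tmp/advanced-options.db')

db.open({
    createIfMissing      : true
  , compression          : true
  , cacheSize            : 8 * 1024 * 1024   // 8MB LRU block cache (default)
  // advanced performance-tuning options added in this commit
  , writeBufferSize      : 8 * 1024 * 1024   // larger log buffer, e.g. for a bulk load
  , blockSize            : 4096              // ~4K of uncompressed data per block
  , maxOpenFiles         : 2048              // e.g. ~4GB working set / 2MB per table file
  , blockRestartInterval : 16                // default delta-encoding restart interval
}, function (err) {
  if (err)
    throw err
  // called with no arguments once the database is open
})
```

As the README notes, the advanced values should only be changed where a measurable benefit can be demonstrated for the particular application.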
diff --git a/src/database.cc b/src/database.cc
index 2614643..e824ee9 100644
--- a/src/database.cc
+++ b/src/database.cc
@@ -199,7 +199,11 @@ v8::Handle<v8::Value> Database::Open (const v8::Arguments& args) {
LD_BOOLEAN_OPTION_VALUE_DEFTRUE(optionsObj, createIfMissing)
LD_BOOLEAN_OPTION_VALUE(optionsObj, errorIfExists)
LD_BOOLEAN_OPTION_VALUE(optionsObj, compression)
- LD_UINT32_OPTION_VALUE(optionsObj, cacheSize, 8 << 20)
+ LD_UINT32_OPTION_VALUE(optionsObj, cacheSize , 8 << 20 )
+ LD_UINT32_OPTION_VALUE(optionsObj, writeBufferSize , 4 << 20 )
+ LD_UINT32_OPTION_VALUE(optionsObj, blockSize , 4096 )
+ LD_UINT32_OPTION_VALUE(optionsObj, maxOpenFiles , 1000 )
+ LD_UINT32_OPTION_VALUE(optionsObj, blockRestartInterval , 16 )
OpenWorker* worker = new OpenWorker(
database
@@ -208,6 +212,10 @@ v8::Handle<v8::Value> Database::Open (const v8::Arguments& args) {
, errorIfExists
, compression
, cacheSize
+ , writeBufferSize
+ , blockSize
+ , maxOpenFiles
+ , blockRestartInterval
);
AsyncQueueWorker(worker);
diff --git a/src/database.h b/src/database.h
index bf5027c..b2f2377 100644
--- a/src/database.h
+++ b/src/database.h
@@ -19,6 +19,10 @@ LD_SYMBOL ( option_createIfMissing , createIfMissing ); // for open()
LD_SYMBOL ( option_errorIfExists , errorIfExists ); // for open()
LD_SYMBOL ( option_compression , compression ); // for open()
LD_SYMBOL ( option_cacheSize , cacheSize ); // for open()
+LD_SYMBOL ( option_writeBufferSize , writeBufferSize ); // for open()
+LD_SYMBOL ( option_blockSize , blockSize ); // for open()
+LD_SYMBOL ( option_maxOpenFiles , maxOpenFiles ); // for open()
+LD_SYMBOL ( option_blockRestartInterval , blockRestartInterval ); // for open()
LD_SYMBOL ( option_sync , sync ); // for put() and delete()
LD_SYMBOL ( option_asBuffer , asBuffer ); // for get()
LD_SYMBOL ( option_fillCache , fillcache ); // for get() and readStream()
diff --git a/src/database_async.cc b/src/database_async.cc
index 644ef35..2644f7c 100644
--- a/src/database_async.cc
+++ b/src/database_async.cc
@@ -22,15 +22,23 @@ OpenWorker::OpenWorker (
, bool errorIfExists
, bool compression
, uint32_t cacheSize
+ , uint32_t writeBufferSize
+ , uint32_t blockSize
+ , uint32_t maxOpenFiles
+ , uint32_t blockRestartInterval
) : AsyncWorker(database, callback)
{
options = new leveldb::Options();
- options->create_if_missing = createIfMissing;
- options->error_if_exists = errorIfExists;
- options->compression = compression
- ? leveldb::kSnappyCompression
- : leveldb::kNoCompression;
- options->block_cache = leveldb::NewLRUCache(cacheSize);
+ options->create_if_missing = createIfMissing;
+ options->error_if_exists = errorIfExists;
+ options->compression = compression
+ ? leveldb::kSnappyCompression
+ : leveldb::kNoCompression;
+ options->block_cache = leveldb::NewLRUCache(cacheSize);
+ options->write_buffer_size = writeBufferSize;
+ options->block_size = blockSize;
+ options->max_open_files = maxOpenFiles;
+ options->block_restart_interval = blockRestartInterval;
};
OpenWorker::~OpenWorker () {
diff --git a/src/database_async.h b/src/database_async.h
index 25df5d9..8ca9fdd 100644
--- a/src/database_async.h
+++ b/src/database_async.h
@@ -24,6 +24,10 @@ public:
, bool errorIfExists
, bool compression
, uint32_t cacheSize
+ , uint32_t writeBufferSize
+ , uint32_t blockSize
+ , uint32_t maxOpenFiles
+ , uint32_t blockRestartInterval
);
virtual ~OpenWorker ();
--
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-javascript/node-leveldown.git