[med-svn] [cnrun] 24/25: WIP (packaged)
andrei zavada
hmmr-guest at moszumanska.debian.org
Thu Nov 6 22:08:32 UTC 2014
This is an automated email from the git hooks/post-receive script.
hmmr-guest pushed a commit to branch WIP
in repository cnrun.
commit ad302856736c25a2a7c5a2216b04a0d9cd8f176b
Author: Andrei Zavada <johnhommer at gmail.com>
Date: Mon Oct 27 01:46:14 2014 +0200
WIP (packaged)
---
debian/README.Debian | 10 +-
debian/changelog | 6 +
debian/cnrun-tools.install | 2 +
debian/cnrun-tools.manpages | 2 +
debian/cnrun.manpages | 1 -
debian/control | 49 +-
debian/copyright | 2 +-
debian/libcnrun-lua.install | 2 +
debian/libcnrun-lua.postinst | 21 +
debian/libcnrun-lua.postrm | 17 +
debian/libcnrun.install | 2 +
debian/rules | 7 +-
debian/{upstream => upstream/metadata} | 0
debian/watch | 2 +-
upstream/COPYING | 676 +------------
upstream/ChangeLog | 6 +
upstream/INSTALL | 255 +----
upstream/Makefile.am | 14 +-
upstream/README | 2 +-
upstream/configure.ac | 31 +-
upstream/data/Makefile.am | 7 -
upstream/data/lua/cnrun.lua | 197 ----
upstream/doc/Makefile.am | 12 +-
upstream/doc/README | 80 +-
upstream/doc/examples/example1.lua | 172 +++-
.../ratiocoding/{ORNa.x1000.in => ORNa.in} | 0
upstream/doc/examples/ratiocoding/ORNb.x1000.in | 112 ---
upstream/doc/examples/ratiocoding/PN.0.sxf.target | 10 -
upstream/doc/examples/ratiocoding/batch | 34 -
.../ratiocoding/rational-plot-sdf-interactive | 31 -
.../examples/ratiocoding/rational-plot-sdf-static | 29 -
.../doc/examples/ratiocoding/rational-plot-var | 34 -
upstream/doc/examples/ratiocoding/script | 58 --
upstream/man/cnrun.1.in | 245 -----
upstream/man/varfold.1.in | 129 ---
upstream/src/Common.mk | 8 +-
upstream/src/Makefile.am | 5 +-
upstream/src/cnrun/commands.cc | 1013 -------------------
upstream/src/cnrun/completions.cc | 456 ---------
upstream/src/cnrun/interpreter.cc | 270 -----
upstream/src/cnrun/main.cc | 160 ---
upstream/src/cnrun/print_version.cc | 26 -
upstream/src/libcnlua/Makefile.am | 36 -
upstream/src/libcnlua/lua-iface.cc | 163 ----
upstream/src/{libcnlua => libcnrun-lua}/.gitignore | 0
upstream/src/libcnrun-lua/Makefile.am | 15 +
upstream/src/{libcnlua => libcnrun-lua}/cnhost.hh | 84 +-
upstream/src/libcnrun-lua/commands.cc | 1027 ++++++++++++++++++++
upstream/src/{libcn => libcnrun}/Makefile.am | 22 +-
upstream/src/{libcn => libcnrun}/base-neuron.hh | 2 +-
upstream/src/{libcn => libcnrun}/base-synapse.hh | 2 +-
upstream/src/{libcn => libcnrun}/base-unit.cc | 2 +-
upstream/src/{libcn => libcnrun}/base-unit.hh | 2 +-
upstream/src/{libcn => libcnrun}/forward-decls.hh | 2 +-
upstream/src/{libcn => libcnrun}/hosted-attr.hh | 2 +-
upstream/src/{libcn => libcnrun}/hosted-neurons.cc | 2 +-
upstream/src/{libcn => libcnrun}/hosted-neurons.hh | 2 +-
.../src/{libcn => libcnrun}/hosted-synapses.cc | 2 +-
.../src/{libcn => libcnrun}/hosted-synapses.hh | 2 +-
upstream/src/{libcn => libcnrun}/integrate-base.hh | 2 +-
upstream/src/{libcn => libcnrun}/integrate-rk65.hh | 2 +-
upstream/src/{libcn => libcnrun}/model-cycle.cc | 2 +-
upstream/src/{libcn => libcnrun}/model-nmlio.cc | 6 +-
upstream/src/{libcn => libcnrun}/model-struct.cc | 20 +-
upstream/src/{libcn => libcnrun}/model-tags.cc | 2 +-
upstream/src/{libcn => libcnrun}/model.hh | 3 +-
upstream/src/{libcn => libcnrun}/mx-attr.hh | 2 +-
upstream/src/{libcn => libcnrun}/sources.cc | 2 +-
upstream/src/{libcn => libcnrun}/sources.hh | 2 +-
.../src/{libcn => libcnrun}/standalone-attr.hh | 2 +-
.../src/{libcn => libcnrun}/standalone-neurons.cc | 2 +-
.../src/{libcn => libcnrun}/standalone-neurons.hh | 2 +-
.../src/{libcn => libcnrun}/standalone-synapses.cc | 2 +-
.../src/{libcn => libcnrun}/standalone-synapses.hh | 2 +-
upstream/src/{libcn => libcnrun}/types.cc | 2 +-
upstream/src/{libcn => libcnrun}/types.hh | 2 +-
upstream/src/libstilton/Makefile.am | 15 +-
upstream/src/libstilton/lang.hh | 1 -
upstream/src/tools/.gitignore | 1 -
upstream/src/tools/Makefile.am | 11 +-
upstream/src/tools/hh-latency-estimator.cc | 548 ++++++-----
upstream/src/tools/spike2sdf.cc | 146 +--
upstream/src/tools/varfold.cc | 718 --------------
83 files changed, 1812 insertions(+), 5247 deletions(-)
diff --git a/debian/README.Debian b/debian/README.Debian
index 86f41ea..c8e090b 100644
--- a/debian/README.Debian
+++ b/debian/README.Debian
@@ -1,15 +1,7 @@
-For Debian, cnrun is configured --with-tools=no. These tools are:
-
- - varfold, a matrix convolution utility;
+For Debian, cnrun is configured with --enable-tools=yes. These tools are:
- hh-latency-estimator, for measurement of first-spike latency
in response to stimulation, and
- spike2sdf, for estimation of a spike density function from a record of
spike times.
-
-These utilities were used in the original experiment
-(http://johnhommer.com/academic/code/cnrun/ratiocoding), and hence not
-deemed of general purpose enough to be included in the Debian package.
-If you believe you have a particular need for any of them, feel free
-to build cnrun from source.
diff --git a/debian/changelog b/debian/changelog
index d33fa45..2666ff9 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cnrun (2.0.0-1) unstable; urgency=low
+
+ * New upstream version.
+
+ -- Andrei Zavada <johnhommer at gmail.com> Mon, 27 Oct 2014 00:46:00 +0300
+
cnrun (1.1.14-1) unstable; urgency=low
* New upstream version (Closes: #722760).
diff --git a/debian/cnrun-tools.install b/debian/cnrun-tools.install
new file mode 100644
index 0000000..efe9cf7
--- /dev/null
+++ b/debian/cnrun-tools.install
@@ -0,0 +1,2 @@
+usr/bin/hh-latency-estimator
+usr/bin/spike2sdf
diff --git a/debian/cnrun-tools.manpages b/debian/cnrun-tools.manpages
new file mode 100644
index 0000000..283dc89
--- /dev/null
+++ b/debian/cnrun-tools.manpages
@@ -0,0 +1,2 @@
+man/hh-latency-estimator.1
+man/spike2sdf.1
diff --git a/debian/cnrun.manpages b/debian/cnrun.manpages
deleted file mode 100644
index c3330e1..0000000
--- a/debian/cnrun.manpages
+++ /dev/null
@@ -1 +0,0 @@
-man/cnrun.1
diff --git a/debian/control b/debian/control
index 758c177..34e38bc 100644
--- a/debian/control
+++ b/debian/control
@@ -2,24 +2,49 @@ Source: cnrun
Section: science
Priority: optional
Maintainer: Andrei Zavada <johnhommer at gmail.com>
-Build-Depends: debhelper (>= 9), dh-autoreconf, autoconf-archive, g++,
- libgomp1, libreadline6-dev, pkg-config, libgsl0-dev, libxml2-dev,
+Build-Depends: debhelper (>= 9), dh-autoreconf, autoconf-archive,
+ libgomp1, pkg-config, libgsl0-dev, libxml2-dev,
liblua5.2-dev
Standards-Version: 3.9.6
Homepage: http://johnhommer.com/academic/code/cnrun
Vcs-Git: git://git.debian.org/git/debian-med/cnrun.git
Vcs-Browser: http://anonscm.debian.org/gitweb/?p=debian-med/cnrun.git;a=summary
-Package: cnrun
+Package: libcnrun
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
+Description: NeuroML-capable neuronal network simulator (shared lib)
+ CNrun is a neuronal network simulator implemented as a Lua package.
+ This package contains shared libraries.
+ .
+ For an extended description of CNrun, see the libcnrun-lua package.
+
+Package: cnrun-tools
+Architecture: any
+Depends: ${shlibs:Depends}, ${misc:Depends}
+Description: NeuroML-capable neuronal network simulator (tools)
+ CNrun is a neuronal network simulator implemented as a Lua package.
+ This package contains two standalone tools (hh-latency-estimator and
+ spike2sdf) that may be of interest to CNrun users.
+ .
+ For an extended description of CNrun, see the libcnrun-lua package.
+
+Package: libcnrun-lua
+Architecture: any
+Depends: libcnrun, ${shlibs:Depends}, ${misc:Depends}
Suggests: gnuplot
-Description: NeuroML-capable neuronal network simulator
- CNrun is a neuronal network model simulator, similar in purpose to
- NEURON except that individual neurons are not compartmentalised. It
- can read NeuroML files (e.g., as generated by neuroConstruct);
- provides a Hodgkin-Huxley neuron (plus some varieties), a Rall and
- Alpha-Beta synapses, Poisson, Van der Pol, Colpitts oscillators and
- regular pulse generator; external inputs and logging state variables.
- Uses a 6-5 Runge-Kutta integration method. Basic scripting and (if
- run interactively) context-aware autocompletion.
+Recommends: lua
+Description: NeuroML-capable neuronal network simulator (Lua package)
+ CNrun is a neuronal network simulator, with these features:
+ * conductance- and rate-based Hodgkin-Huxley neurons, plus Rall and
+ Alpha-Beta synapses;
+ * a 6-5 Runge-Kutta integration method: slow but precise, adjustable;
+ * Poisson, Van der Pol and Colpitts oscillators, and an interface for
+ external stimulation sources;
+ * NeuroML network topology import/export;
+ * logging of state variables and spikes;
+ * implemented as a Lua module, for scripting model behaviour (e.g.,
+ to enable plastic processes regulated by model state);
+ * interaction (topology push/pull, async connections) with other
+ cnrun models running elsewhere on a network (planned, not yet
+ implemented).
diff --git a/debian/copyright b/debian/copyright
index a67a602..0b640c7 100644
--- a/debian/copyright
+++ b/debian/copyright
@@ -7,7 +7,7 @@ Files: *
Copyright: 2008-2012 Andrei Zavada <johnhommer at gmail.com>
License: GPL-2+
-Files: src/libcn/*.cc
+Files: src/libcnrun/*.cc
Copyright: 2008 Thomas Nowotny <t.nowotny at sussex.ac.uk>
2008-2012 Andrei Zavada <johnhommer at gmail.com>
License: GPL-2+
diff --git a/debian/libcnrun-lua.install b/debian/libcnrun-lua.install
new file mode 100644
index 0000000..1561dea
--- /dev/null
+++ b/debian/libcnrun-lua.install
@@ -0,0 +1,2 @@
+usr/lib/*/cnrun/libcnrun-lua.so
+usr/share/doc/cnrun/examples/example1.lua
diff --git a/debian/libcnrun-lua.postinst b/debian/libcnrun-lua.postinst
new file mode 100644
index 0000000..c92a7eb
--- /dev/null
+++ b/debian/libcnrun-lua.postinst
@@ -0,0 +1,21 @@
+#!/bin/sh
+
+#DEBHELPER#
+
+set -e
+
+case "$1" in
+ configure|upgrade)
+ ln -s /usr/lib/cnrun/libcnrun-lua.so /usr/lib/lua/5.2/cnrun.so
+ ;;
+
+ abort-upgrade|abort-remove|abort-deconfigure|failed-upgrade)
+ rm -f /usr/lib/lua/5.2/cnrun.so
+ ;;
+ *)
+ echo "postinst called with unknown argument \`$1'" >&2
+ exit 1
+ ;;
+esac
+
+exit 0
diff --git a/debian/libcnrun-lua.postrm b/debian/libcnrun-lua.postrm
new file mode 100644
index 0000000..ba0cfc8
--- /dev/null
+++ b/debian/libcnrun-lua.postrm
@@ -0,0 +1,17 @@
+#!/bin/sh
+
+#DEBHELPER#
+
+set -e
+
+case "$1" in
+ purge|remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
+ rm -f /usr/lib/lua/5.2/cnrun.so
+ ;;
+ *)
+ echo "postrm called with unknown argument \`$1'" >&2
+ exit 1
+ ;;
+esac
+
+exit 0
diff --git a/debian/libcnrun.install b/debian/libcnrun.install
new file mode 100644
index 0000000..199cec1
--- /dev/null
+++ b/debian/libcnrun.install
@@ -0,0 +1,2 @@
+usr/lib/*/cnrun/libstilton.so
+usr/lib/*/cnrun/libcnrun.so
diff --git a/debian/rules b/debian/rules
index e69cdcc..649cc66 100755
--- a/debian/rules
+++ b/debian/rules
@@ -6,11 +6,12 @@
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.
-# Uncomment this to turn on verbose mode.
-#export DH_VERBOSE=1
-
DPKG_EXPORT_BUILDFLAGS = 1
include /usr/share/dpkg/buildflags.mk
%:
dh $@ --with autoreconf
+
+override_dh_clean:
+ rm -f config.log
+ dh_clean
diff --git a/debian/upstream b/debian/upstream/metadata
similarity index 100%
rename from debian/upstream
rename to debian/upstream/metadata
diff --git a/debian/watch b/debian/watch
index 90dcd11..ed1e141 100644
--- a/debian/watch
+++ b/debian/watch
@@ -1,2 +1,2 @@
version=3
-http://johnhommer.com/academic/code/cnrun/source/cnrun-([\d\.]+).tar.bz2
+http://johnhommer.com/academic/code/cnrun/source/cnrun-([\d\.]+).tar.xz
diff --git a/upstream/COPYING b/upstream/COPYING
index 94a9ed0..0698566 100644
--- a/upstream/COPYING
+++ b/upstream/COPYING
@@ -1,674 +1,2 @@
- GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
- Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
- For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
- Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
- Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Use with the GNU Affero General Public License.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
- <one line to give the program's name and a brief idea of what it does.>
- Copyright (C) <year> <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program. If not, see <http://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
- <program> Copyright (C) <year> <name of author>
- This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<http://www.gnu.org/licenses/>.
-
- The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<http://www.gnu.org/philosophy/why-not-lgpl.html>.
+CNRun is licensed under GPL-2+. The full text of the latest version
+of the GPL can be found in /usr/share/common-licenses/GPL.
diff --git a/upstream/ChangeLog b/upstream/ChangeLog
index 2ccfcc2..801e7e9 100644
--- a/upstream/ChangeLog
+++ b/upstream/ChangeLog
@@ -1,3 +1,9 @@
+2014-10-26 andrei zavada <johnhommer at gmail.com>
+ * The cnrun executable is gone, replaced by a Lua module.
+ * Sweeping refactoring effort, incomplete in places, towards
+ higher coding standards and discipline.
+ * Drop varfold (too specific to the ratiocoding experiment setup).
+
2013-09-22 andrei zavada <johnhommer at gmail.com>
* donotwant boost.
* Proper use of installed libexec/*.so.
diff --git a/upstream/INSTALL b/upstream/INSTALL
index 9ccf15b..b223b14 100644
--- a/upstream/INSTALL
+++ b/upstream/INSTALL
@@ -1,249 +1,16 @@
-Installation Instructions
-*************************
-
-Cnrun is fully autotools compliant, and normally installable by
+CNRun uses standard GNU autotools and can normally be installed with
+./configure && make install.
-Dependencies include: libxml2.
-
-The ./configure option --enable-tools will build these three
-executables in addition to cnrun: varfold, spike2sdf, and
-hh-latency-estimator.
-
-The standard GNU autotools install instructions follow.
-
-
-Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005,
-2006, 2007 Free Software Foundation, Inc.
-
-This file is free documentation; the Free Software Foundation gives
-unlimited permission to copy, distribute and modify it.
-
-Basic Installation
-==================
-
-Briefly, the shell commands `./configure; make; make install' should
-configure, build, and install this package. The following
-more-detailed instructions are generic; see the `README' file for
-instructions specific to this package.
-
- The `configure' shell script attempts to guess correct values for
-various system-dependent variables used during compilation. It uses
-those values to create a `Makefile' in each directory of the package.
-It may also create one or more `.h' files containing system-dependent
-definitions. Finally, it creates a shell script `config.status' that
-you can run in the future to recreate the current configuration, and a
-file `config.log' containing compiler output (useful mainly for
-debugging `configure').
-
- It can also use an optional file (typically called `config.cache'
-and enabled with `--cache-file=config.cache' or simply `-C') that saves
-the results of its tests to speed up reconfiguring. Caching is
-disabled by default to prevent problems with accidental use of stale
-cache files.
-
- If you need to do unusual things to compile the package, please try
-to figure out how `configure' could check whether to do them, and mail
-diffs or instructions to the address given in the `README' so they can
-be considered for the next release. If you are using the cache, and at
-some point `config.cache' contains results you don't want to keep, you
-may remove or edit it.
-
- The file `configure.ac' (or `configure.in') is used to create
-`configure' by a program called `autoconf'. You need `configure.ac' if
-you want to change it or regenerate `configure' using a newer version
-of `autoconf'.
-
-The simplest way to compile this package is:
-
- 1. `cd' to the directory containing the package's source code and type
- `./configure' to configure the package for your system.
-
- Running `configure' might take a while. While running, it prints
- some messages telling which features it is checking for.
-
- 2. Type `make' to compile the package.
-
- 3. Optionally, type `make check' to run any self-tests that come with
- the package.
-
- 4. Type `make install' to install the programs and any data files and
- documentation.
-
- 5. You can remove the program binaries and object files from the
- source code directory by typing `make clean'. To also remove the
- files that `configure' created (so you can compile the package for
- a different kind of computer), type `make distclean'. There is
- also a `make maintainer-clean' target, but that is intended mainly
- for the package's developers. If you use it, you may have to get
- all sorts of other programs in order to regenerate files that came
- with the distribution.
-
- 6. Often, you can also type `make uninstall' to remove the installed
- files again.
-
-Compilers and Options
-=====================
-
-Some systems require unusual options for compilation or linking that the
-`configure' script does not know about. Run `./configure --help' for
-details on some of the pertinent environment variables.
-
- You can give `configure' initial values for configuration parameters
-by setting variables in the command line or in the environment. Here
-is an example:
-
- ./configure CC=c99 CFLAGS=-g LIBS=-lposix
-
- *Note Defining Variables::, for more details.
-
-Compiling For Multiple Architectures
-====================================
-
-You can compile the package for more than one kind of computer at the
-same time, by placing the object files for each architecture in their
-own directory. To do this, you can use GNU `make'. `cd' to the
-directory where you want the object files and executables to go and run
-the `configure' script. `configure' automatically checks for the
-source code in the directory that `configure' is in and in `..'.
-
- With a non-GNU `make', it is safer to compile the package for one
-architecture at a time in the source code directory. After you have
-installed the package for one architecture, use `make distclean' before
-reconfiguring for another architecture.
-
-Installation Names
-==================
-
-By default, `make install' installs the package's commands under
-`/usr/local/bin', include files under `/usr/local/include', etc. You
-can specify an installation prefix other than `/usr/local' by giving
-`configure' the option `--prefix=PREFIX'.
-
- You can specify separate installation prefixes for
-architecture-specific files and architecture-independent files. If you
-pass the option `--exec-prefix=PREFIX' to `configure', the package uses
-PREFIX as the prefix for installing programs and libraries.
-Documentation and other data files still use the regular prefix.
-
- In addition, if you use an unusual directory layout you can give
-options like `--bindir=DIR' to specify different values for particular
-kinds of files. Run `configure --help' for a list of the directories
-you can set and what kinds of files go in them.
-
- If the package supports it, you can cause programs to be installed
-with an extra prefix or suffix on their names by giving `configure' the
-option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.
-
-Optional Features
-=================
-
-Some packages pay attention to `--enable-FEATURE' options to
-`configure', where FEATURE indicates an optional part of the package.
-They may also pay attention to `--with-PACKAGE' options, where PACKAGE
-is something like `gnu-as' or `x' (for the X Window System). The
-`README' should mention any `--enable-' and `--with-' options that the
-package recognizes.
-
- For packages that use the X Window System, `configure' can usually
-find the X include and library files automatically, but if it doesn't,
-you can use the `configure' options `--x-includes=DIR' and
-`--x-libraries=DIR' to specify their locations.
-
-Specifying the System Type
-==========================
-
-There may be some features `configure' cannot figure out automatically,
-but needs to determine by the type of machine the package will run on.
-Usually, assuming the package is built to be run on the _same_
-architectures, `configure' can figure that out, but if it prints a
-message saying it cannot guess the machine type, give it the
-`--build=TYPE' option. TYPE can either be a short name for the system
-type, such as `sun4', or a canonical name which has the form:
-
- CPU-COMPANY-SYSTEM
-
-where SYSTEM can have one of these forms:
-
- OS KERNEL-OS
-
- See the file `config.sub' for the possible values of each field. If
-`config.sub' isn't included in this package, then this package doesn't
-need to know the machine type.
-
- If you are _building_ compiler tools for cross-compiling, you should
-use the option `--target=TYPE' to select the type of system they will
-produce code for.
-
- If you want to _use_ a cross compiler, that generates code for a
-platform different from the build platform, you should specify the
-"host" platform (i.e., that on which the generated programs will
-eventually be run) with `--host=TYPE'.
-
-Sharing Defaults
-================
-
-If you want to set default values for `configure' scripts to share, you
-can create a site shell script called `config.site' that gives default
-values for variables like `CC', `cache_file', and `prefix'.
-`configure' looks for `PREFIX/share/config.site' if it exists, then
-`PREFIX/etc/config.site' if it exists. Or, you can set the
-`CONFIG_SITE' environment variable to the location of the site script.
-A warning: not all `configure' scripts look for a site script.
-
-Defining Variables
-==================
-
-Variables not defined in a site shell script can be set in the
-environment passed to `configure'. However, some packages may run
-configure again during the build, and the customized values of these
-variables may be lost. In order to avoid this problem, you should set
-them in the `configure' command line, using `VAR=value'. For example:
-
- ./configure CC=/usr/local2/bin/gcc
-
-causes the specified `gcc' to be used as the C compiler (unless it is
-overridden in the site shell script).
-
-Unfortunately, this technique does not work for `CONFIG_SHELL' due to
-an Autoconf bug. Until the bug is fixed you can use this workaround:
-
- CONFIG_SHELL=/bin/bash /bin/bash ./configure CONFIG_SHELL=/bin/bash
-
-`configure' Invocation
-======================
-
-`configure' recognizes the following options to control how it operates.
-
-`--help'
-`-h'
- Print a summary of the options to `configure', and exit.
-
-`--version'
-`-V'
- Print the version of Autoconf used to generate the `configure'
- script, and exit.
-
-`--cache-file=FILE'
- Enable the cache: use and save the results of the tests in FILE,
- traditionally `config.cache'. FILE defaults to `/dev/null' to
- disable caching.
-
-`--config-cache'
-`-C'
- Alias for `--cache-file=config.cache'.
-
-`--quiet'
-`--silent'
-`-q'
- Do not print messages saying which checks are being made. To
- suppress all normal output, redirect it to `/dev/null' (any error
- messages will still be shown).
+Dependencies include libxml2, GSL, and the Lua libraries.
-`--srcdir=DIR'
- Look for the package's source code in directory DIR. Usually
- `configure' can determine that directory automatically.
+With the --enable-tools option, configure will build a couple of
+standalone tools, spike2sdf and hh-latency-estimator, which some may
+find useful.
-`configure' also accepts some other, not widely useful, options. Run
-`configure --help' for more details.
+To use the Lua module, you can, for example, symlink
+/usr/lib/cnrun/libcnrun-lua.so to /usr/lib/lua/$LUA_VERSION/cnrun.so.
+You should then be able to write 'require("cnrun")'; for further
+instructions by example, see doc/cnrun/examples/example1.lua.
+For the standard GNU autotools install instructions, please consult
+the original INSTALL file (commonly /usr/share/autoconf/INSTALL).
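
A minimal sketch of such a script (the cn_* calls mirror those in
doc/examples/example1.lua; the model name "demo" is invented):

    local M = require("cnrun")
    local res, ctx = M.cn_get_context ()      -- on error, res is nil and ctx holds the message
    if res == nil then error (ctx) end
    local res2, model = M.cn_new_model (ctx, "demo")
    if res2 == nil then error (model) end
    print ("created model demo")
    M.cn_delete_model (ctx, "demo")
    M.cn_drop_context (ctx)
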
diff --git a/upstream/Makefile.am b/upstream/Makefile.am
index be32637..c9fb9c9 100644
--- a/upstream/Makefile.am
+++ b/upstream/Makefile.am
@@ -1,19 +1,13 @@
-ACLOCAL_AMFLAGS := -I m4
+ACLOCAL_AMFLAGS = -I m4
-SUBDIRS := src doc data
+SUBDIRS := src doc
EXTRA_DIST = \
ChangeLog \
- autogen.sh \
- acinclude.m4 \
- make_version
-
-man_MANS = \
- man/cnrun.1
+ autogen.sh
if DO_TOOLS
-man_MANS += \
- man/varfold.1 \
+man_MANS = \
man/spike2sdf.1 \
man/hh-latency-estimator.1
endif
diff --git a/upstream/README b/upstream/README
index 378fbe6..2064512 100644
--- a/upstream/README
+++ b/upstream/README
@@ -1 +1 @@
-(refer to doc/README)
+Refer to doc/README, or visit http://johnhommer.com/academic/code/cnrun.
diff --git a/upstream/configure.ac b/upstream/configure.ac
index bbc859d..98a6a7a 100644
--- a/upstream/configure.ac
+++ b/upstream/configure.ac
@@ -1,7 +1,7 @@
AC_COPYRIGHT([Copyright (c) 2008-14 Andrei Zavada <johnhommer at gmail.com>])
AC_INIT([cnrun], [2.0.0], [johnhommer at gmail.com])
-AC_CONFIG_SRCDIR([src/cnrun/main.cc])
+AC_CONFIG_SRCDIR([src/libcnrun/model.hh])
AC_CONFIG_MACRO_DIR([m4])
AC_PREREQ(2.61)
@@ -55,12 +55,6 @@ cxx_version=`$CXX --version | head -n1`
AC_OPENMP()
-AX_LIB_READLINE
-if test x"$ax_cv_lib_readline" = x"no"; then
- echo "Required library readline not found"
- AC_MSG_ERROR( [Missing readline], 2)
-fi
-
PKG_CHECK_MODULES([LIBCN], [gsl libxml-2.0])
AX_PROG_LUA([5.1], [5.3],)
@@ -76,36 +70,20 @@ fi
AC_ARG_ENABLE(
[tools],
AS_HELP_STRING( [--enable-tools], [build spike2sdf, varfold & hh-latency-estimator (default = no)]),
- [do_tools=$enableval], [do_tools=no])
+ [do_tools=$enableval], [do_tools=yes])
AM_CONDITIONAL(DO_TOOLS, [test x"$do_tools" = xyes])
if test x"$do_tools" != xyes; then
do_tools=no
fi
-AC_ARG_ENABLE(
- [pch],
- [AS_HELP_STRING( [--enable-pch], [precompile headers (default = no)])],
- [do_pch=$enable_pch],
- [do_pch=no])
-dnl defaulting to no to enable make dist-check
-AM_CONDITIONAL(DO_PCH, test x$do_pch = xyes)
-
-
-AC_SUBST(user, [`whoami`@`hostname`])
-AC_SUBST(docdir, [${prefix}/share/doc/${PACKAGE_TARNAME}])
-
-
AC_OUTPUT([
Makefile
src/Makefile
src/libstilton/Makefile
- src/libcn/Makefile
- src/libcnlua/Makefile
- data/Makefile
+ src/libcnrun/Makefile
+ src/libcnrun-lua/Makefile
doc/Makefile
- man/cnrun.1
man/spike2sdf.1
- man/varfold.1
man/hh-latency-estimator.1
src/tools/Makefile])
@@ -120,5 +98,4 @@ AC_MSG_RESULT([
build tools: ${do_tools}
- precompile headers: $do_pch
])
diff --git a/upstream/data/Makefile.am b/upstream/data/Makefile.am
deleted file mode 100644
index 5345ca9..0000000
--- a/upstream/data/Makefile.am
+++ /dev/null
@@ -1,7 +0,0 @@
-luaincdir := $(datadir)/${PACKAGE}/lua
-
-luainc_DATA := \
- lua/cnrun.lua
-
-EXTRA_DIST := \
- $(luainc_DATA)
diff --git a/upstream/data/lua/cnrun.lua b/upstream/data/lua/cnrun.lua
deleted file mode 100644
index 963608b..0000000
--- a/upstream/data/lua/cnrun.lua
+++ /dev/null
@@ -1,197 +0,0 @@
--- ; -*- mode: Lua -*-
--- First, you collect necessary pieces
-local A, F = ...
-
--- (1) A is an opaque structure representing our interpreter host side;
--- all you need to do with it is pass it as the first arg to all calls
--- of Fi and Fo (see below).
-
--- (2) F, a CFunction, is the proxy to get any data across.
--- It has the signature (think syscall):
--- F( A, opcode, arg1, arg2, ...)
--- where opcode is a string id of a host function.
-
--- Here are wrappers of the CNrun interpreter API:
-
--- common notes:
--- (a) on error, F returns {false, error_string}; else, results as described below;
--- (b) all page parameters are 1-based.
-
-function new_model (mname)
- -- returns:
- return F (A, "new_model", mname)
-end
-
-function delete_model (mname)
- -- returns:
- return F (A, "delete_model", mname)
-end
-
-function import_nml (mname, fname)
- -- returns:
- return F (A, "import_nml", mname, fname)
-end
-
-function export_nml (mname, fname)
- -- returns:
- return F (A, "export_nml", mname, fname)
-end
-
-function reset_model (mname)
- -- returns:
- return F (A, "reset_model", mname)
-end
-
-function cull_deaf_synapses (mname)
- -- returns:
- return F (A, "cull_deaf_synapses", mname)
-end
-
-function describe_model (mname)
- -- returns:
- return F (A, "describe_model", mname)
-end
-
-function get_model_parameter (mname, pname)
- -- returns:
- return F (A, "get_model_parameter", mname, pname)
-end
-
-function set_model_parameter (mname, pname, value)
- -- returns:
- return F (A, "set_model_parameter", mname, pname, value)
-end
-
-function advance (mname, time_to_go)
- -- returns:
- return F (A, "advance", mname, time_to_go)
-end
-
-function advance_until (mname, end_time)
- -- returns:
- return F (A, "advance_until", end_time)
-end
-
-
-function new_neuron (mname, ntype, label)
- -- returns:
- return F (A, "new_neuron", mname, ntype, label)
-end
-
-function new_synapse (mname, ytype, source, dest, g)
- -- returns:
- return F (A, "new_synapse", mname, ytype, source, dest, g)
-end
-
-function get_unit_properties (mname, label)
- -- returns:
- return F (A, "get_unit_properties", mname, label)
-end
-
-function get_unit_parameter (mname, label, pname)
- -- returns:
- return F (A, "get_unit_parameter", mname, label, pname)
-end
-
-function set_unit_parameter (mname, label, pname, value)
- -- returns:
- return F (A, "set_unit_parameter", mname, label, pname, value)
-end
-
-function get_unit_vars (mname, label, vname)
- -- returns:
- return F (A, "get_unit_vars", mname, label, vname)
-end
-
-function reset_unit (mname, label)
- -- returns:
- return F (A, "reset_unit", mname, label)
-end
-
-
-function get_units_matching (mname, pattern)
- -- returns:
- return F (A, "get_units_matching", mname, pattern)
-end
-
-function get_units_of_type (mname, tname)
- -- returns:
- return F (A, "get_units_of_type", mname, tname)
-end
-
-function set_matching_neuron_parameter (mname, pattern, parameter, value)
- -- returns:
- return F (A, "set_matching_neuron_parameter", mname, pattern, parameter, value)
-end
-
-function set_matching_synapse_parameter (mname, source_pattern, dest_pattern, parameter, value)
- -- returns:
- return F (A, "set_matching_synapse_parameter", mname, source_pattern, dest_pattern, parameter, value)
-end
-
-function revert_matching_unit_parameters (mname, pattern)
- -- returns:
- return F (A, "revert_matching_unit_parameters", mname, pattern)
-end
-
-function decimate (mname, pattern, fraction)
- -- returns:
- return F (A, "decimate", mname, pattern, fraction)
-end
-
-function putout (mname, pattern)
- -- returns:
- return F (A, "putout", mname, pattern)
-end
-
-
-function new_tape_source (mname, sname, fname, is_looping)
- -- returns:
- return F (A, "new_tape_source", mname, sname, fname, is_looping)
-end
-
-function new_periodic_source (mname, sname, fname, is_looping, period)
- -- returns:
- return F (A, "new_periodic_source", mname, sname, fname, is_looping, period)
-end
-
-function new_noise_source (mname, sname, min, max, sigma, distribution)
- -- returns:
- return F (A, "new_noise_source", mname, sname, min, max, sigma, distribution)
-end
-
-function get_sources (mname)
- -- returns:
- return F (A, "get_sources", mname)
-end
-
-function connect_source (mname, label, parameter, sname)
- -- returns:
- return F (A, "connect_source", mname, label, parameter, sname)
-end
-
-function disconnect_source (mname, label, parameter, sname)
- -- returns:
- return F (A, "disconnect_source", mname, label, parameter, sname)
-end
-
-
-function start_listen (mname, pattern)
- -- returns:
- return F (A, "start_listen", mname, pattern)
-end
-
-function stop_listen (mname, pattern)
- -- returns:
- return F (A, "stop_listen", mname, pattern)
-end
-
-function start_log_spikes (mname, pattern)
- -- returns:
- return F (A, "start_log_spikes", mname, pattern)
-end
-
-function stop_log_spikes (mname, pattern)
- -- returns:
- return F (A, "stop_log_spikes", mname, pattern)
-end
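
For reference, the opcode-proxy wrappers deleted above correspond to plain
functions of the new cnrun Lua module; a rough, non-exhaustive sketch of the
mapping, based on the calls used in doc/examples/example1.lua (C is the
context handle returned by cn_get_context):

    -- old data/lua/cnrun.lua wrapper         new cnrun module call
    -- new_model (mname)                  ->  M.cn_new_model (C, mname)
    -- advance (mname, time_to_go)        ->  M.cn_advance (C, mname, time_to_go)
    -- decimate (mname, pattern, frac)    ->  M.cn_decimate (C, mname, pattern, frac)
    -- putout (mname, pattern)            ->  M.cn_putout (C, mname, pattern)
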
diff --git a/upstream/doc/Makefile.am b/upstream/doc/Makefile.am
index a8393f7..5f080fa 100644
--- a/upstream/doc/Makefile.am
+++ b/upstream/doc/Makefile.am
@@ -3,19 +3,11 @@ doc_DATA = \
examples_DATA = \
examples/example1.lua \
- examples/ratiocoding/ORNa.x1000.in \
- examples/ratiocoding/ORNb.x1000.in \
- examples/ratiocoding/PN.0.sxf.target \
- examples/ratiocoding/batch \
- examples/ratiocoding/m.nml \
- examples/ratiocoding/script
+ examples/ratiocoding/ORNa.in \
+ examples/ratiocoding/m.nml
examplesdir = $(docdir)/examples
-install-data-hook:
- $(mkinstalldirs) $(DESTDIR)/$(examplesdir)
-
-
EXTRA_DIST = \
$(examples_DATA) \
$(doc_DATA)
diff --git a/upstream/doc/README b/upstream/doc/README
index e1594fb..878da73 100644
--- a/upstream/doc/README
+++ b/upstream/doc/README
@@ -1,74 +1,22 @@
-CNrun
------
+CNrun is a neuronal network simulator, with the following features:
-1. Overview
-2. Usage
-3. Repeatability, rng-dependent behaviour
+* conductance- and rate-based Hodgkin-Huxley neurons, Rall and
+ Alpha-Beta synapses;
+* a 6-5 order Runge-Kutta integration method: slow but precise, with an adjustable dt;
-1. Overview
+* Poisson, Van der Pol, and Colpitts oscillators, and an interface for
+ external stimulation sources;
-This is a library (libcn) and a CLI (cnrun) tool to simulate neuronal
-networks, similar to NEURON and GENESIS except that neurons are
-non-compartmentalised, and there is no scripting language. It is
-written by Andrei Zavada <johnhommer at gmail.com> building on the
-original work by Thomas Nowotny <tnowotny at sussex.ac.uk>.
+* NeuroML network topology import/export;
-CNrun reads network topology description from a NeuroML file (as, for
-example, generated by neuroConstruct), where the `cell_type' attribute
-determines the unit class.
+* logging of state variables and spikes, for visualization with e.g. gnuplot;
-The following neuron and synapse classes are provided by libcn:
+* implemented as a Lua module, for scripting model behaviour (e.g.,
+ to enable plastic processes regulated by model state);
- - HH : Hodgkin-Huxley by Traub and Miles (1991)
- - HHRate : Rate-based model of the Hodgkin-Huxley neuron
- - HH2 : Hodgkin-Huxley by Traub & Miles w/ K leakage
- - DotPoisson : Duration-less spike Poisson oscillator
- - Poisson : Poisson oscillator
- - DotPulse : Dot Pulse generator
- - NMap : Map neuron
- - LV : Lotka-Volterra oscillator
- - Colpitts : Colpitts oscillator
- - VdPol : Van der Pol oscillator
-
- - AB : Alpha-Beta synapse (Destexhe, Mainen, Sejnowsky, 1994)
- - ABMinus : Alpha-Beta synapse w/out (1-S) term
- - Rall : Rall synapse (Rall, 1967)
- - Map : Map synapse
-
-Scripting support in CNrun includes commands for creating and
-populating a model, setting parameters for single units or groups
-selected based on regex matching. Variables (‘a = 1; b = a + 2’) and
-arithmetic expressions (‘-’, ‘+’, ‘*’, ‘/’, ‘<’, ‘<=’, ‘>’, ‘>=’,
-‘==’, ‘()’) are supported as in C.
-
-
-2. Installation and prerequisites
-
-As a reasonably complex C++ piece of code, CNRun has many a loop with
-iterators. Since gcc 4.4.4, the keyword auto has come as a great
-relief in this regard; versions of gcc prior to 4.4.4, therefore, will
-not compile CNRun.
-
-Cnrun depends on libreadline, libgsl, libxml2, whichever
-version is current at the time of release.
-
-
-3. Repeatability, rng-based behaviour
-
-Using rng facilities of the GNU Scientific Library, cnrun has the
-ability to specify the gsl rng type and set the seed via the
-environment variables GSL_RNG_TYPE and GSL_RNG_SEED, in which case
-reproducibility of CNrun results is guaranteed (as per gsl's statement
-that the generated series will).
-
-If you don't bother setting those env vars, seeding will be done with
-the current time (specifically, field .tv_usec of a struct timeval
-after a call to gettimeofday()).
-
-
-
-If you are interested in using libcn for your own projects, look at
-doc/example.cc, and perhaps at src/hh-latency-estimator.cc
-(all the code is there, and it's yours :).
+* interaction (topology push/pull, async connections) with other
+ cnrun models running elsewhere on a network (planned).
+
+See example1.lua in the examples dir for a primer.
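
As a sketch of the scripting idea, here is a toy plasticity-style loop; the
cn_* functions, the unit label LNz.0 and parameter gNa are taken from
examples/example1.lua, while the threshold and scaling factor are invented:

    -- advance in 1-sec steps; if the first state variable of LNz.0 looks
    -- active, scale its gNa down a little (a stand-in for a plastic process)
    for step = 1, 10 do
       M.cn_advance (C, mname, 1000)
       local res, E = M.cn_get_unit_vars (C, mname, "LNz.0")
       if res ~= nil and type(E) == "number" and E > -20 then
          local _, g = M.cn_get_unit_parameter (C, mname, "LNz.0", "gNa")
          M.cn_set_unit_parameter (C, mname, "LNz.0", "gNa", g * 0.95)
       end
    end
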
diff --git a/upstream/doc/examples/example1.lua b/upstream/doc/examples/example1.lua
index 8b28bce..258ddeb 100644
--- a/upstream/doc/examples/example1.lua
+++ b/upstream/doc/examples/example1.lua
@@ -1,2 +1,172 @@
-libcn = require("libcn")
+-- with Lua 5.1, do s/table.unpack/unpack/g.
+local M = require("cnrun")
+
+local res, ult, result
+local C, model
+
+res, ult = M.cn_get_context ()
+if res == nil then
+ print (ult)
+ return
+end
+C = ult
+
+local mname = "FAFA"
+res, ult = M.cn_new_model (C, mname)
+if res == nil then
+ print (ult)
+ return
+end
+model = ult
+print ("Created model")
+
+print ("Setting verbosely to 4")
+M.cn_set_model_parameter (C, mname, "verbosely", 4)
+
+result = {M.cn_list_models (C)}
+res, ult = result[1], {table.unpack(result, 2)}
+if res == nil then
+ print (ult)
+ return
+end
+print ()
+
+print ("Model(s):")
+local model_list = ult
+print (table.concat(model_list))
+print ()
+
+
+res, ult = M.cn_import_nml (C, mname, "ratiocoding/m.nml")
+if res == nil then
+ print (ult)
+ -- return
+end
+print ()
+
+
+print ("Host parameters:")
+local parameters = {
+ "verbosely", "integration_dt_min",
+ "integration_dt_max", "integration_dt_cap",
+ "listen_dt", "listen_mode",
+ "sxf_start_delay", "sxf_period", "sdf_sigma"
+}
+local fmt = " %22s: %-q"
+for i,p in ipairs(parameters) do
+ res, ult = M.cn_get_model_parameter (C, mname, p)
+ print (string.format (fmt, p, ult))
+end
+print ()
+
+res, ult = M.cn_delete_model (C, "fafa moo")
+if res == nil then
+ print (ult .. " (ignored)")
+ -- return
+end
+
+
+result = {M.cn_get_units_matching(C, mname, "L.*")}
+res, ult = result[1], {table.unpack(result, 2)}
+if res == nil then
+ print (ult)
+ return
+end
+print ()
+print ("There are " .. #ult .. " unit(s) matching L.*:")
+local unit_list = ult
+local fmt = " %-10s %-16s %-16s %-12s %-16s %-6s"
+print (string.format(
+ fmt,
+ "label", "class", "family", "species", "has_sources", "is_altered"))
+print (string.rep('-', 87))
+for _, u in ipairs(unit_list) do
+ result = {M.cn_get_unit_properties (C, mname, u)}
+ res, ult = result[1], {table.unpack(result, 2)}
+ local b = function (x) if x then return "yes" else return "no" end end
+ print (string.format(
+ fmt,
+ ult[1], ult[2], ult[3], ult[4], b(ult[5]), b(ult[6])))
+end
+print()
+
+
+print ("Advancing 10 sec:")
+res, ult = M.cn_advance (C, mname, 10000)
+
+
+print ("Modify parameter:")
+local u, p, v0, v9, vr = "LNz.0", "gNa"
+_, ult = M.cn_get_unit_parameter (C, mname, u, p)
+v0 = ult
+_, ult = M.cn_set_unit_parameter (C, mname, u, p, v0 * 2)
+_, ult = M.cn_get_unit_parameter (C, mname, u, p)
+v9 = ult
+-- with a revert
+res, ult = M.cn_revert_matching_unit_parameters (C, mname, u)
+if res == nil then
+ print (ult)
+ return
+end
+local count_reset = ult
+_, ult = M.cn_get_unit_parameter (C, mname, u, p)
+vr = ult
+print (string.format(
+ ".. changed %s of %s from %g to %g, then reset (%d affected) to %g\n",
+ p, u, v0, v9, count_reset, vr))
+
+res, ult = M.cn_describe_model (C, mname)
+
+
+print ("State variables:")
+for i = 1, 6, 1 do
+ M.cn_advance (C, mname, 1000)
+ result = {M.cn_get_unit_vars (C, mname, "LNz.0")}
+ res, ult = result[1], {table.unpack(result, 2)}
+ print (table.concat(ult, '; '))
+end
+print()
+
+
+local affected, remaining
+print ("Putout:")
+-- there is unit_list already:
+math.randomseed(os.time())
+local deleting = unit_list[math.random(1, #unit_list)]
+-- deleting, _ = string.gsub(deleting, ".", "\\.")
+res, ult = M.cn_putout (C, mname, deleting)
+if res == nil then
+ print (ult)
+ return
+end
+print (string.format(".. deleted unit %s", deleting))
+print()
+
+print ("Decimate:")
+res, ult = M.cn_decimate (C, mname, "L.*", 0.3)
+if res == nil then
+ print (ult)
+ return
+end
+affected, remaining = ult
+remaining = #{M.cn_get_units_matching (C, mname, ".*")} - 1
+print (string.format(
+ ".. %d units gone, %d remaining",
+ affected, remaining))
+print()
+
+
+res, ult = M.cn_delete_model (C, mname)
+if res == nil then
+ print ("Error: Failed to delete model: ", ult)
+ return
+end
+print ("Model ".. ult .. " deleted")
+
+res, ult = M.cn_drop_context (C)
+if res == nil then
+ print ("Error: Failed to drop context: ", ult)
+ return
+end
+print ("Context dropped: " .. ult)
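
A note on the idiom used throughout the example: the cn_* functions return a
status value first, followed by a variable number of results, hence the calls
wrapped in a table constructor before unpacking (table.unpack in Lua 5.2+,
unpack in 5.1):

    result = {M.cn_list_models (C)}                     -- capture every return value
    res, ult = result[1], {table.unpack(result, 2)}     -- status, then the rest as a list
    -- on failure, res is nil and the error message is the next return value
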
diff --git a/upstream/doc/examples/ratiocoding/ORNa.x1000.in b/upstream/doc/examples/ratiocoding/ORNa.in
similarity index 100%
rename from upstream/doc/examples/ratiocoding/ORNa.x1000.in
rename to upstream/doc/examples/ratiocoding/ORNa.in
diff --git a/upstream/doc/examples/ratiocoding/ORNb.x1000.in b/upstream/doc/examples/ratiocoding/ORNb.x1000.in
deleted file mode 100644
index f1282e2..0000000
--- a/upstream/doc/examples/ratiocoding/ORNb.x1000.in
+++ /dev/null
@@ -1,112 +0,0 @@
-125
-0
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
-0 0 10 10
-0 0 13 13
-0 0 16.9 16.9
-0 0 21.97 21.97
-0 0 28.561 28.561
-0 0 37.1293 37.1293
-0 0 48.2681 48.2681
-0 0 62.7485 62.7485
-0 0 81.5731 81.5731
-0 0 106.045 106.045
-
diff --git a/upstream/doc/examples/ratiocoding/PN.0.sxf.target b/upstream/doc/examples/ratiocoding/PN.0.sxf.target
deleted file mode 100644
index 3d86f48..0000000
--- a/upstream/doc/examples/ratiocoding/PN.0.sxf.target
+++ /dev/null
@@ -1,10 +0,0 @@
- 1.68000000e+01 1.62910659e+00 -6.76042467e+00 -7.19703816e+00 -7.19999730e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00
- 1.62910659e+00 1.68000000e+01 1.62910659e+00 -6.76042467e+00 -7.19703816e+00 -7.19999730e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00
- -6.76042467e+00 1.62910659e+00 1.68000000e+01 1.62910659e+00 -6.76042467e+00 -7.19703816e+00 -7.19999730e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00
- -7.19703816e+00 -6.76042467e+00 1.62910659e+00 1.68000000e+01 1.62910659e+00 -6.76042467e+00 -7.19703816e+00 -7.19999730e+00 -7.20000000e+00 -7.20000000e+00
- -7.19999730e+00 -7.19703816e+00 -6.76042467e+00 1.62910659e+00 1.68000000e+01 1.62910659e+00 -6.76042467e+00 -7.19703816e+00 -7.19999730e+00 -7.20000000e+00
- -7.20000000e+00 -7.19999730e+00 -7.19703816e+00 -6.76042467e+00 1.62910659e+00 1.68000000e+01 1.62910659e+00 -6.76042467e+00 -7.19703816e+00 -7.19999730e+00
- -7.20000000e+00 -7.20000000e+00 -7.19999730e+00 -7.19703816e+00 -6.76042467e+00 1.62910659e+00 1.68000000e+01 1.62910659e+00 -6.76042467e+00 -7.19703816e+00
- -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.19999730e+00 -7.19703816e+00 -6.76042467e+00 1.62910659e+00 1.68000000e+01 1.62910659e+00 -6.76042467e+00
- -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.19999730e+00 -7.19703816e+00 -6.76042467e+00 1.62910659e+00 1.68000000e+01 1.62910659e+00
- -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.20000000e+00 -7.19999730e+00 -7.19703816e+00 -6.76042467e+00 1.62910659e+00 1.68000000e+01
diff --git a/upstream/doc/examples/ratiocoding/batch b/upstream/doc/examples/ratiocoding/batch
deleted file mode 100755
index 29b669d..0000000
--- a/upstream/doc/examples/ratiocoding/batch
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/bin/bash
-
-totalruns=$1
-m=$2
-
-function run_in_dir()
-{
- mkdir -p $1 && cd "$1"
- echo
- echo "----------------- running in $1"
- rm -f CFs *sxf*
- local i
- for ((i = 0; i < totalruns; ++i)); do
- echo "run $((i+1)) of $totalruns"
- cnrun -Dfi=$2 -Dft=$3 -Dfo=$4 -e ../script -v1 -tT.05 && \
- varfold -x10 -x10 -G.. -Vweight -om PN.0 && \
- mv PN.0.sxf.mx $i.PN.0.sxf.mx && \
- mv PN.0.sxf $i.PN.0.sxf && \
- cat <PN.0.CF >>CFs
- done
-
- varfold -x10 -x10 -om -zavg -t- -UAVERAGE *.PN.0.sxf.mx
- cp AVERAGE.mx ../AVERAGE.$1.mx
-
- cd ..
-}
-
-run_in_dir "___" 1 1 1
-run_in_dir "__o" 1 1 $m
-run_in_dir "_o_" 1 $m 1
-run_in_dir "_oo" 1 $m $m
-run_in_dir "o__" $m 1 1
-run_in_dir "o_o" $m 1 $m
-run_in_dir "oo_" $m $m 1
diff --git a/upstream/doc/examples/ratiocoding/rational-plot-sdf-interactive b/upstream/doc/examples/ratiocoding/rational-plot-sdf-interactive
deleted file mode 100755
index b1652ed..0000000
--- a/upstream/doc/examples/ratiocoding/rational-plot-sdf-interactive
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-
-D=`dirname $1`
-T=`basename $1`
-
-cd "$D"
-CF=$(<CF)
-
-DESC=${D##*/}
-CASE=${DESC%%_*}
-PARAMS="Params: "${DESC#$CASE"_"}
-
-gnuplot -persist <<EOF
-
-set title "$CASE"
-set key off
-
-set samples 40
-set isosample 20
-
-set cbrange [0:10]
-
-set hidden3d
-set pm3d
-
-set label "$PARAMS" at character 1,3
-set label "CF = $CF" at character 1,1
-
-splot "$T" matrix with dots
-
-EOF
diff --git a/upstream/doc/examples/ratiocoding/rational-plot-sdf-static b/upstream/doc/examples/ratiocoding/rational-plot-sdf-static
deleted file mode 100755
index 4574a85..0000000
--- a/upstream/doc/examples/ratiocoding/rational-plot-sdf-static
+++ /dev/null
@@ -1,29 +0,0 @@
-#!/bin/bash
-
-D=`dirname $1`
-T=`basename $1`
-
-cd "$D"
-#CF=$(<CF)
-
-DESC=${D##*/}
-CASE=$T
-#CASE=${DESC%%_*}
-PARAMS="Params: "${DESC#$CASE"_"}
-
-gnuplot -persist <<EOF
-
-set title "$CASE"
-set key off
-
-set cbrange [0:12]
-
-#set label "$PARAMS" at character 1,3
-#set label "CF = $CF" at character 1,1
-
-unset xtics
-unset ytics
-
-plot "$T" matrix with image
-
-EOF
diff --git a/upstream/doc/examples/ratiocoding/rational-plot-var b/upstream/doc/examples/ratiocoding/rational-plot-var
deleted file mode 100755
index 825bb5e..0000000
--- a/upstream/doc/examples/ratiocoding/rational-plot-var
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/bin/bash
-
-CSIZE=500
-
-D=`dirname $1`
-T=`basename $1`
-
-GNUPLOTARGS=
-for F in $*; do
- FTITLE=`basename $F`
- FTITLE=${FTITLE%.var}
- FTITLE=${FTITLE%.varx}
- if file "$1" | grep data &>/dev/null; then
- FSPEC="binary format=\"%lf%lf\""
- else
- FSPEC="using 1:2"
- fi
- GNUPLOTARGS+="\"$F\" $FSPEC title \"$FTITLE\" with lines,"
-done
-GNUPLOTARGS=${GNUPLOTARGS%,}
-echo $GNUPLOTARGS
-
-DESC=${D##*/}
-
-gnuplot -persist <<EOF
-
-set title "$DESC"
-
-set xrange [0:5000]
-set xtics 0,$((CSIZE*10))
-set mxtics 10
-plot $GNUPLOTARGS
-
-EOF
diff --git a/upstream/doc/examples/ratiocoding/script b/upstream/doc/examples/ratiocoding/script
deleted file mode 100644
index a7ec47a..0000000
--- a/upstream/doc/examples/ratiocoding/script
+++ /dev/null
@@ -1,58 +0,0 @@
-load_nml ../m.nml
-
-new_source Periodic aabb ../ORNa.x1000.in
-new_source Periodic abab ../ORNb.x1000.in
-connect_source aabb ORNa\.0 lambda
-connect_source abab ORNb\.0 lambda
-
-# .0103 and .0074 come from annealing
-set_parm_synapse ORN.* LN[12]\..* gsyn 0.110/1000
-set_parm_synapse ORN.* LNz\..* gsyn 0.073/1000
-
-set_parm_synapse ORN.* LN.* alpha .4 # .27785
-set_parm_synapse ORN.* LN.* beta .3
-set_parm_synapse ORN.* LN.* trel 3
-
-set_parm_synapse LN.* LN.* Esyn -80
-
-gi = 0.499/1
-gt = 0.389/1
-go = 0.386/1
-#gi = 0.18
-#gt = 0.18
-#go = 0.18
-
-# inbound
-set_parm_synapse LN[12]\..* LNz\..* gsyn gi * fi
-# transverse
-set_parm_synapse LN[12]\..* LN[21]\..* gsyn gt * ft
-# outbound
-set_parm_synapse LNz\..* LN[12]\..* gsyn go * fo
-
-set_parm_synapse LN.* LN.* alpha .1
-set_parm_synapse LN.* LN.* beta .05
-set_parm_synapse LN.* LN.* trel 5
-
-set_parm_synapse LNz\..* PNi\..* Esyn -80
-set_parm_synapse LNz\..* PNi\..* gsyn 0.0060
-set_parm_synapse LNz\..* PNi\..* alpha .2
-set_parm_synapse LNz\..* PNi\..* beta .05
-set_parm_synapse LNz\..* PNi\..* trel 25
-
-set_parm_synapse PNi\..* PN\..* Esyn -80
-set_parm_synapse PNi\..* PN\..* gsyn 0.02
-set_parm_synapse PNi\..* PN\..* alpha .2
-set_parm_synapse PNi\..* PN\..* beta .05
-
-# set up oscillations
-set_parm_neuron PNi?\..* Idc .1
-
-
-sxf_params 625:500:400
-start_log_spikes PN\.0
-
-listen_mode b+
-start_listen LNz\..
-start_listen PN\..
-
-advance 50000+250
diff --git a/upstream/man/cnrun.1.in b/upstream/man/cnrun.1.in
deleted file mode 100644
index 0359ac3..0000000
--- a/upstream/man/cnrun.1.in
+++ /dev/null
@@ -1,245 +0,0 @@
-.TH CNrun 1 "@build_date@" @VERSION@ "CNrun"
-.SH NAME
- CNrun -- a neuronal network simulator
-.SH SYNOPSIS
- cnrun \fB\-h\fR | \fB\-U\fR | \fIscript\fR [\fBOPTION\fR ...]
-.B
-.PP
-
-.SH DESCRIPTION
-.PP
-\fBCNrun\fR is a conductance- and rate-based neuronal network
-simulator with a capability for scripting plastic processes (in Lua)
-and NeuroML support.
-
-Available neuron types, by the corresponding \(oqcell_type\(cq string,
-include:
-.IP \(bu
-\fIHH\fR and \fIHHRate\fR, conductance\- and rate\-based Hodgkin\-Huxley
-neurons (Traub & Miles, 1991);
-.IP \(bu
-A simplified but fast, fixed\-dt \fIMap\fR neurons mimicking the HH
-model;
-.IP \(bu
-\fIPoisson\fR, Van der Pol (\fIVdP\fR) and
-simple \fIPulse\fR oscillators;
-.IP \(bu
-synapses as described in Rall et al, 1967 (\fIRall\fR) and Destexhe et
-al, 1994 (\fIAB\fR).
-
-.PP
-A 6\-5\-order Runge\-Kutta integration method is used to compute state
-variables. These (membrane potential E or instantaneous firing rate R
-for neurons, neurotransmitter release S for synapses) as well as spike
-times can be logged for further visualisation.
-
-Scripting in CNrun is implemented in Lua. Model simulator functions
-available for use in scripts include those for reading state variables
-and setting parameters for specific (regex-matched groups of) units;
-creating and connecting new units; modifying input sources feeding
-into units.
-
-.SH OPTIONS
-\fB\-C\fR \fIdir\fR
-chdir to \fIdir\fR before running.
-.TP
-\fB\-v \fIint\fR
-Set verbosity level (default 1; values up to 7 are meaningful).
-Use a negative value to show the progress percentage only,
-indented on the line at \-8 x this value.
-.TP
-\fB\-U\fR
-List all available units.
-.TP
-\fB\-h\fR
-Print the overview of command\-line options.
-
-.SH SCRIPTING
-In your Lua scripts, include the boilerplate from @datadir@/cnrun.lua.
-This makes available the following CNrun functions:
-.TP
-\fBnew_model\fR NAME
-Create a new model called NAME. Existing model is deleted.
-.TP
-\fBuse_nml\fR NML_FILE
-Load network topology from NML_FILE, creating
-a model if necessary, or replacing an existing model\(rq topology.
-.TP
-\fBmerge_nml\fR NML_FILE
-Merge in the topology from NML_FILE.
-.TP
-\fBadd_neuron\fR TYPE LABEL
-Add a new newron of type TYPE with label LABEL.
-.TP
-\fBadd_synapse\fR TYPE SOURCE TARGET G
-Connect the neuron labelled SOURCE to one labelled TARGET with a
-synapse of type TYPE, with gsyn G.
-.TP
-\fBcull_deaf_synapses\fR
-Remove synapses with zero weight.
-.TP
-\fBset_parm_neuron\fR LABEL PARM VALUE
-Set parameter PARM for a specified group of neurons labelled matching LABEL.
-.TP
-\fBset_parm_synapse\fR SRC TGT PARM VALUE
-Set parameter PARM for synapses between neurons labelled matching SRC and TGT.
-The synaptic weight, itself not being a synapse parameter, can also be set with
-this command: to do this, use \(oqgsyn\(cq as PARM.
-.TP
-\fBreset\fR
-Reset the model. Model time is rewound to 0 and all units have their
-state variables reset to stock defaults. Any previously assigned unit
-parameters and attached data sources are preserved.
-.TP
-\fBreset_revert_params\fR
-Reset the model. Model time is rewound to 0, all units have their
-state variables and parameters reset to stock defaults.
-.TP
-\fBreset_state_units\fR REGEX
-Reset the units\(cq as above, keeping current model time.
-.TP
-\fBadvance_until\fR TIME
-Advance until TIME msec.
-.TP
-\fBadvance\fR TIME
-Advance TIME msec.
-.TP
-\fBputout\fR REGEX
-Delete units matching REGEX by label.
-.TP
-\fBdecimate\fR REGEX FRAC
-Randomly delete FRAC units of a population of units selected by REGEX.
-.TP
-\fBstart_listen\fR REGEX
-Make matching units listen.
-.TP
-\fBstop_listen\fR
-Make matching units stop listening.
-.TP
-\fBlisten_dt\fR [VALUE]
-Set listening interval to VALUE, or show current value if VALUE not given.
-.TP
-\fBlisten_mode\fR [SPEC]
-Print (if argument is omitted) the current listening mode (one var only, deferred write,
-and/or binary); otherwise, enable the corresponding mode if \(oq1\(cq, \(oqd\(cq or \(oqb\(cq
-occurs in SPEC, or disable it if it does and is immediately followed by a \(oq\-\(cq.
-Note that those units already listening will be unaffected; to change the mode for them, issue
-\fBstart_listen\fR for them after the new mode has been set.
-.TP
-\fBstart_log_spikes\fR REGEX
-Make neurons matching REGEX log spikes.
-.TP
-\fBstop_log_spikes\fR REGEX
-Make neurons matching REGEX stop log spikes.
-.TP
-\fBsxf_params\fR DELAY:PERIOD:SIGMA
-Set spike density function initial delay, sampling period and sigma as specified.
-.TP
-\fBdescribe_model\fR
-Print a summary of model topology and unit types.
-.TP
-\fBshow_units\fR REGEX
-Print parameters and state of units matching REGEX.
-.TP
-\fBnew_source\fR TYPE ID ARG ...
-Create a new source of type and with an id as indicated. Sources can be connected to unit
-parameters as a means to set up a dynamically changing behaviour. See \fBDYNAMIC SOURCES\fR below.
-.TP
-\fBconnect_source\fR SOURCE_ID LABEL PARM
-Connect this source to matching units\(cq parameter.
-.TP
-\fBshow_sources\fR
-Show the currently active sources (both connected and idle).
-.TP
-\fBexec\fR [SCRIPT]
-Execute a script. If SCRIPT not specified, start an interactive interpreter.
-.TP
-\fBverbosity\fR [LEVEL]
-Set/show verbosity level.
-.TP
-\fBshow_vars\fR [REGEX]
-Print variables matching REGEX, or all variables if REGEX omitted.
-.TP
-\fBclear_vars\fR [REGEX]
-Clear variables matching REGEX, or all if REGEX omitted.
-.TP
-\fBpause\fR [DELAY]
-Pause for DELAY sec if specified, or until user presses Enter otherwise.
-.TP
-\fBquit\fR
-Exit current interpreter if called by \fBexec\fR; exit the program otherwise.
-
-.RE
-When you use the interpreter interactively, TAB will list completions
-approproiately, depending on the context.
-
-
-.SH DYNAMIC SOURCES
-In addition to static unit parameter/variable assignment with
-\fBset_parm_{neuron,synapse}\fR, units can have a data source attached
-to any of their parameters or variable (even though variables will get
-overwritten in the next cycle).
-
-Data sources are of three types (a fourth one is available for
-developers, an arbitrary user function of time, but not exposed as an
-interpreter command). Where data for a source are read from a file,
-values are read using a \(oq>>\(cq operator (from <ifstream>) into a
-double variable. The corresponding \fBnew_source\fR arguments are:
-
-.TP
-\fBTape\fR FILE
-Read \(lqtime value\(rq pairs from FILE and set the parameter\(cqs value accordingly.
-.TP
-\fBPeriodic\fR FILE
-FILE is expected to contain, as the first number value read
-by scanf("%lg"), a time period at which the following values are
-sequentially assigned to the parameter. Values are assigned at the
-beginning of each integration cycle.
-.TP
-\fBNoise\fR MIN:MAX
-Generate (irrespective of time) a uniformly distributed random number within MIN:MAX.
-
-.RE
-Similarly to the parameters, state variables can also be set in this
-manner; in this case, the values read, will override whatever the
-inner workings of the unit assign to it. Where a Tape has a gap
-between assignment times larger than current dt, assignments are still
-made; this, however, does not apply to Periodic sources (chiefly for
-performance reasons).
-
-.SH SYNAPSE COALESCING
-Coalesced synapses are those having identical parameters and having
-the same source. Coalescing reduces, per divergence rate, the number
-of times the S variable is recomputed with identical parameters per
-cycle; additionally for hosted synapses, the integration vector is
-shrunk to fit towards further performance gain.
-
-Coalescing happens automatically between two synapses from same source
-when, after all parameter assignments, they are found to be identical
-(disregarding synaptic weights). Conversely, when the user changes a
-parameter to one coalesced synapses that is different from that
-parameter\(cqs value in the others, that synapse becomes independent.
-
-Note that a synapse units\(cqs label is dynamically formed of the
-label of the source with a semicolon and the current number of
-targets. Another consequence of coalescing is that there can be more
-than one synapse units labelled identically (hence, uniquely to
-identify a synapse, you need to specify its source and target).
-
-The command\-line option \fB\-nc\fR can be used to disable coalescing.
-
-.SH EXAMPLE
-In @docdir@/ratiocoding, there is a working example of cnrun
-setup which reproduces some of the results presented in Zavada et al
-(2011) PLoS paper.
-
-.SH BUGS
-The oscillator units other than Poisson, have not been tested.
-
-.SH SEE ALSO
-spike2sdf(1), varfold(1).
-
-.SH AUTHOR
-CNRun and the underlying library libcn is written by Andrei Zavada
-<johnhommer at gmail.com>, building on the original code by Thomas
-Nowotny, while at Sussex University in 2008\-10.
diff --git a/upstream/man/varfold.1.in b/upstream/man/varfold.1.in
deleted file mode 100644
index a0a804c..0000000
--- a/upstream/man/varfold.1.in
+++ /dev/null
@@ -1,129 +0,0 @@
-.TH varfold 1 "@build_date@" @VERSION@ "CNrun"
-.SH NAME
- varfold -- a simple numerical matrix convolution tool
-.SH SYNOPSIS
- varfold \fB\-h\fR | [\fBOPTION\fR ...] \fBfilename_or_unitlabel\fR ...
-.B
-.PP
-
-.SH DESCRIPTION
-.PP
-Varfold is a simple tool to obtain a measure of fitting of a matrix to
-another, reference matrix, by means of \(oqconvoluting\(cq the former
-against the latter to produce a scalar value (Cost Function).
-
-The data are expected to be timestamped (time in the first column
-followed by data in a record). Use \fB\-t\-\fR if your data are not.
-
-Varfold can also extract data by sampling the trace at specified
-intervals from a .var output from a CNrun simulation.
-
-In a typical usage with CNrun, you have spiking data of a trial saved
-as a vector of SDF (spike density function) values, along with SHF
-(spike heterogeneity function, which is
-SDF/stdev(\fIintervals_between_spikes\fR)), and the spike count per
-sampling window from a simulation where you have enabled some
-spikeloggers. Those data are available, per unit, in .sxf files. You
-then create a similar, reference vector of the same size. Varfold
-will read these vectors and, for each, apply an element\-by\-element
-operation (currently, sum of differences squared or weighting, see
-option \fB\-V\fR) to it vs the reference vector, thus yielding
-individual scalar CF values.
-
-Individual CF value(s) will be saved in files ending in \(oq.CF\(cq.
-If data from more then one unit are combined (option \fB\-z\fR), a
-grand CF will be taken by convoluting the individual matrix sum,
-average or product against the reference matrix read from a file
-specified with option \fB\-T\fR, and written into a file named
-\(oqAVERAGE.CF\(cq, \(oqSUM.CF\(cq or \(oqPRODUCT.CF\(cq.
-
-If no convolution is desired, varfold can be useful to \(oqfold\(cq
-the data vectors into matrices (only for two\-dim matrices, though).
-See option \fB\-o\fR.
-
-NaN and inf data are allowed in input vectors.
-
-.SH OPTIONS
-\fB\-C\fR \fIdir\fR
-chdir to \fIdir\fR before running.
-.TP
-\fB\-G\fR \fIdir\fR
-Look for reference files in \fIdir\fR rather than in the current
-directory. These should match what will eventually be determined as a
-unit\(cqs data vector file name, plus a \(oq.target\(cq suffix.
-.TP
-\fB\-x\fR \fIn\fR
-A dimension size. If your vector is a serialised (multidimensional) matrix,
-repeat this option as needed. Only useful in conjunction with option \fB\-o\fR
-(which see).
-.TP
-\fB\-V\fRsqdiff|weight
-Operation to apply on trial vs reference vectors to produce the cost
-function: a sum of squared differences, or a sum of trial vector
-elements multiplied by corresponding elements from the reference
-vector.
-.TP
-\fB\-z\fRsum|avg|prod
-Operation applied on individual matrices to produce an overall matrix.
-.TP
-\fB\-T\fR \fIfname\fR
-Read the overall reference matrix from this file.
-.TP
-\fB\-R\fR
-Sample trial vector data from a .var file rather than .sxf. With this option,
-you can use option \fB\-f\fR and must, option \fB\-d\fR.
-.TP
-\fB\-f \fIn\fR:\fIm\fR
-Extract \fIn\fRth field of \fIm\fR consecuive fields per datum; sample
-every datapoint if omitted.
-.TP
-\fB\-d \fIf\fR:\fIp\fR:\fIw\fR
-Sample from time \fIf\fR at period \fIp\fR with window size \fIw\fR.
-.TP
-\fB\-F \fIn\fR
-Read vector data from position \fIn\fR (does not apply to the .var
-case, where you would specify \fB\-d\fIf\fR other than 0).
-.TP
-\fB\-H\fR
-Multiply SDF value (in the second column in a .sxf file) by SHF (the third column).
-.TP
-\fB\-H\-\fR
-Assume there is no SHF column in your files. Use this options with files generated by spike2sdf
-(in this case, files will have an .sdf suffix, not .sxf).
-.TP
-\fB\-t\-\fR
-Assume there is no timestamp in the data vector (does not apply to
-data sampled from .var files). Implies \fB\-H\-\fR.
-.TP
-\fB\-N\fR
-Normalise input matrix before convolution.
-.TP
-\fB\-o\fR [mc]
-Fold vectors and output data as a matrix (m) and/or a list of <x y value> records (c);
-only valid for two\-dimensional data.
-.TP
-\fB\-O\fR
-Write Octave\-compatible designations of nan and inf data (i.e.,
-\(oqNaN\(cq and \(oqInf\(cq).
-.TP
-\fB\-h\fR
-Print the overview of command\-line options.
-.TP
-\fIunit_label\fR or \fIfilename\fR
-
-Non\-option arguments are treated each as a single data vector file
-name, or label of a unit in the trial (with file names
-\(oq\fIunit_label\fR.s{x,d}f\(cq). If a convolution is requested
-(with option \fB\-V\fR), reference vector is read from a file with the
-name as found for the data vector suffixed with \(oq.target\(cq in the
-directory specified with \fB\-G\fR.
-
-Files named \fIlabel\fR are tried first, failing which varfold will
-try \fIlabel\fR.sxf, and eventually \fIlabel\fR.sdf as if with option
-\fB\-H\-\fR enabled.
-
-.SH SEE ALSO
-cnrun(1), spike2sdf(1).
-
-.SH AUTHOR
-Andrei Zavada (johnhommer at gmail.com).
diff --git a/upstream/src/Common.mk b/upstream/src/Common.mk
index e0287d8..05741fc 100644
--- a/upstream/src/Common.mk
+++ b/upstream/src/Common.mk
@@ -1,10 +1,4 @@
-%.hh.gch: %.hh
-# for some reason $(CXXCOMPILE) is just... "c", whereas when seen in
-# any sub/Makefile.am, it does the trick alright, so spell it out in full
- $(CXX) $(AM_CXXFLAGS) -c $<
-
AM_CXXFLAGS := -Wall -std=c++0x -fno-rtti \
-I$(top_srcdir) -I$(top_srcdir)/src \
$(LIBCN_CFLAGS) $(OPENMP_CXXFLAGS) \
- -DHAVE_CONFIG_H \
- -DBUILT_BY=\"@user@\"
+ -DHAVE_CONFIG_H
diff --git a/upstream/src/Makefile.am b/upstream/src/Makefile.am
index 092a045..43b80d4 100644
--- a/upstream/src/Makefile.am
+++ b/upstream/src/Makefile.am
@@ -1,6 +1,9 @@
include $(top_srcdir)/src/Common.mk
-SUBDIRS = libstilton libcn libcnlua
+SUBDIRS = libstilton libcnrun libcnrun-lua
if DO_TOOLS
SUBDIRS += tools
endif
+
+install-exec-hook:
+ rm -f $(DESTDIR)/$(pkglibdir)/lib*/*.la
diff --git a/upstream/src/cnrun/commands.cc b/upstream/src/cnrun/commands.cc
deleted file mode 100644
index ae6bb85..0000000
--- a/upstream/src/cnrun/commands.cc
+++ /dev/null
@@ -1,1013 +0,0 @@
-/*
- * File name: cnrun/commands.cc
- * Project: cnrun
- * Author: Andrei Zavada <johnhommer at gmail.com>
- * Initial version: 2014-09-21
- *
- * Purpose: interpreter commands (as CInterpreterShell methods)
- *
- * License: GPL
- */
-
-#if HAVE_CONFIG_H && !defined(VERSION)
-# include "config.h"
-#endif
-
-#include <sys/stat.h>
-#include <stdio.h>
-#include <unistd.h>
-#include <regex.h>
-#include <list>
-
-#include "libstilton/string.hh"
-#include "libcn/integrate-rk65.hh"
-#include "libcn/base-unit.hh"
-#include "libcn/hosted-neurons.hh" // for TIncludeOption
-#include "cnrun.hh"
-
-using namespace std;
-using namespace cnrun;
-
-namespace {
-inline const char* es(int x) { return (x == 1) ? "" : "s"; }
-}
-
-#define CMD_PROLOG(N, F) \
- CInterpreterShell::SCmdResult R; \
- if ( aa.size() != N ) { \
- vp( 0, F"() takes %d arg%s, called with %zu\n", N, es(N), aa.size()); \
- return R.result = TCmdResult::bad_arity, move(R); \
- } \
- const string& model = aa[0].vs; \
- if ( models.find(model) == models.end() ) { \
- vp( 0, F"(): no such model: \"%s\"\n", model.c_str()); \
- return R.result = TCmdResult::logic_error, move(R); \
- } \
- auto& M = *models.at(model);
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_new_model( const TArgs& aa)
-{
- CInterpreterShell::SCmdResult R;
- if ( aa.size() != 1) {
- vp( 0, stderr, "new_model() takes 1 parameter, called with %zu\n", aa.size());
- return R.result = TCmdResult::bad_arity, move(R);
- }
- const string& model_name = aa[0].vs;
-
- auto M = new CModel(
- model_name,
- new CIntegrateRK65(
- options.integration_dt_min,
- options.integration_dt_max,
- options.integration_dt_cap),
- options);
- if ( !M ) {
- vp( 0, stderr, "Failed to create model\n");
- return R.result = TCmdResult::system_error, move(R);
- }
- models[model_name] = M;
-
- vp( 3,
- "generator type: %s\n"
- " seed: %lu\n"
- " first value: %lu\n",
- gsl_rng_name( M->rng()),
- gsl_rng_default_seed,
- gsl_rng_get( M->rng()));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_delete_model( const TArgs& aa)
-{
- CMD_PROLOG (1, "delete_model");
-
- delete &M;
-
- models.erase( model);
-
- return move(R);
-}
-
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_import_nml( const TArgs& aa)
-{
- CMD_PROLOG (1, "import_nml")
-
- const string
- &fname = aa[1].vs;
- string fname2 = stilton::str::tilda2homedir(fname);
- if ( M.import_NetworkML( fname2, CModel::TNMLImportOption::merge) < 0 ) {
- return R.result = TCmdResult::system_error, move(R);
- }
-
- M.cull_blind_synapses();
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_export_nml( const TArgs& aa)
-{
- CMD_PROLOG (1, "export_nml")
-
- const string
- &fname = aa[1].vs;
- string fname2 = stilton::str::tilda2homedir(fname);
- if ( M.export_NetworkML( fname2) < 0 ) {
- return R.result = TCmdResult::system_error, move(R);
- }
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_reset_model( const TArgs& aa)
-{
- CMD_PROLOG (1, "reset_model")
-
- M.reset( CModel::TResetOption::no_params); // for with_params, there is revert_unit_parameters()
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_cull_deaf_synapses( const TArgs& aa)
-{
- CMD_PROLOG (1, "cull_deaf_synapses")
-
- M.cull_deaf_synapses();
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_describe_model( const TArgs& aa)
-{
- CMD_PROLOG (1, "describe_model");
-
- M.dump_metrics();
- M.dump_units();
- M.dump_state();
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_get_model_parameter( const TArgs& aa)
-{
- CInterpreterShell::SCmdResult R;
- if ( aa.size() != 2 ) {
- vp( 0, "get_model_parameter() takes 2 args, called with %zu\n", aa.size());
- return R.result = TCmdResult::bad_arity, move(R);
- }
- const string& model_name = aa[0].vs;
- CModel *M = nullptr;
- if ( model_name.size() != 0 ) {
- auto Mi = models.find(model_name);
- if ( Mi == models.end() ) {
- vp( 0, "get_model_parameter(): no such model: \"%s\"\n", model_name.c_str());
- return R.result = TCmdResult::logic_error, move(R);
- } else
- M = Mi->second;
- }
-
- const string
- ¶meter = aa[1].vs;
-
- if ( parameter == "verbosely" ) {
- R.values.push_back( SArg (M ? M->options.verbosely : options.verbosely));
-
- } else if ( parameter == "integration_dt_min" ) {
- R.values.push_back( SArg (M ? M->dt_min() : options.integration_dt_min));
-
- } else if ( parameter == "integration_dt_max" ) {
- R.values.push_back( SArg (M ? M->dt_min() : options.integration_dt_max));
-
- } else if ( parameter == "integration_dt_cap" ) {
- R.values.push_back( SArg (M ? M->dt_min() : options.integration_dt_cap));
-
- } else if ( parameter == "listen_dt" ) {
- R.values.push_back( SArg (M ? M->options.listen_dt : options.listen_dt));
-
- } else if ( parameter == "listen_mode" ) {
- auto F = [] (bool v) -> char { return v ? '+' : '-'; };
- R.values.push_back(
- SArg (M
- ? stilton::str::sasprintf(
- "1%cd%cb%c",
- F(M->options.listen_1varonly),
- F(M->options.listen_deferwrite),
- F(M->options.listen_binary))
- : stilton::str::sasprintf(
- "1%cd%cb%c",
- F(options.listen_1varonly),
- F(options.listen_deferwrite),
- F(options.listen_binary))));
-
- } else if ( parameter == "sxf_start_delay" ) {
- R.values.push_back( SArg (M ? M->options.sxf_start_delay : options.sxf_start_delay));
-
- } else if ( parameter == "sxf_period" ) {
- R.values.push_back( SArg (M ? M->options.sxf_period : options.sxf_period));
-
- } else if ( parameter == "sdf_sigma" ) {
- R.values.push_back( SArg (M ? M->options.sdf_sigma : options.sdf_sigma));
-
- }
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_set_model_parameter( const TArgs& aa)
-{
- CInterpreterShell::SCmdResult R;
- if ( aa.size() != 3 ) {
- vp( 0, "set_model_parameter() takes 3 args, called with %zu\n", aa.size());
- return R.result = TCmdResult::bad_arity, move(R);
- }
- const string& model = aa[0].vs;
- CModel *M = nullptr;
- if ( model.size() != 0 ) {
- auto Mi = models.find(model);
- if ( Mi == models.end() ) {
- vp( 0, "set_model_parameter(): no such model: \"%s\"\n", model.c_str());
- return R.result = TCmdResult::logic_error, move(R);
- } else
- M = Mi->second;
- }
-
- const string
- ¶meter = aa[1].vs,
- &value_s = aa[2].vs; // unconverted
-
- if ( parameter == "verbosely") {
- int v;
- if ( 1 != sscanf( value_s.c_str(), "%d", &v) ) {
- vp( 0, stderr, "set_model_parameter(): bad value for parameter `verbosely'\n");
- return R.result = TCmdResult::bad_value, move(R);
- }
- options.verbosely = v;
- if ( M )
- M->options.verbosely = v;
-
- } else if ( parameter == "integration_dt_min" ) {
- double v;
- if ( 1 != sscanf( value_s.c_str(), "%lg", &v) ) {
- vp( 0, stderr, "set_model_parameter(): bad value for parameter `integration_dt_min'\n");
- return R.result = TCmdResult::bad_value, move(R);
- }
- options.integration_dt_min = v;
- if ( M )
- M->set_dt_min( v);
-
- } else if ( parameter == "integration_dt_max" ) {
- double v;
- if ( 1 != sscanf( value_s.c_str(), "%lg", &v) ) {
- vp( 0, stderr, "set_model_parameter(): bad value for parameter `integration_dt_max'\n");
- return R.result = TCmdResult::bad_value, move(R);
- }
- options.integration_dt_max = v;
- if ( M )
- M->set_dt_max( v);
-
- } else if ( parameter == "integration_dt_cap" ) {
- double v;
- if ( 1 != sscanf( value_s.c_str(), "%lg", &v) ) {
- vp( 0, stderr, "set_model_parameter(): bad value for parameter `integration_dt_cap'\n");
- return R.result = TCmdResult::bad_value, move(R);
- }
- options.integration_dt_cap = v;
- if ( M )
- M->set_dt_cap( v);
-
- } else if ( parameter == "listen_dt" ) {
- double v;
- if ( 1 != sscanf( value_s.c_str(), "%lg", &v) ) {
- vp( 0, stderr, "set_model_parameter(): bad value for parameter `listen_dt'\n");
- return R.result = TCmdResult::bad_value, move(R);
- }
- options.listen_dt = v;
- if ( M )
- M->options.listen_dt = v;
-
- } else if ( parameter == "listen_mode" ) {
- size_t p;
- if ( (p = value_s.find('1')) != string::npos ) options.listen_1varonly = (value_s[p+1] != '-');
- if ( (p = value_s.find('d')) != string::npos ) options.listen_deferwrite = (value_s[p+1] != '-');
- if ( (p = value_s.find('b')) != string::npos ) options.listen_binary = (value_s[p+1] != '-');
- if ( M ) {
- M->options.listen_1varonly = options.listen_1varonly;
- M->options.listen_deferwrite = options.listen_deferwrite;
- M->options.listen_binary = options.listen_binary;
- }
- // better spell out these parameters, ffs
-
- } else if ( parameter == "sxf_start_delay" ) {
- double v;
- if ( 1 != sscanf( value_s.c_str(), "%lg", &v) ) {
- vp( 0, stderr, "set_model_parameter(): bad value for parameter `sxf_start_delay'\n");
- return R.result = TCmdResult::bad_value, move(R);
- }
- options.sxf_start_delay = v;
- if ( M )
- M->options.sxf_start_delay = v;
-
- } else if ( parameter == "sxf_period" ) {
- double v;
- if ( 1 != sscanf( value_s.c_str(), "%lg", &v) ) {
- vp( 0, stderr, "set_model_parameter(): bad value for parameter `sxf_period'\n");
- return R.result = TCmdResult::bad_value, move(R);
- }
- options.sxf_period = v;
- if ( M )
- M->options.sxf_period = v;
-
- } else if ( parameter == "sdf_sigma" ) {
- double v;
- if ( 1 != sscanf( value_s.c_str(), "%lg", &v) ) {
- vp( 0, stderr, "set_model_parameter(): bad value for parameter `sdf_sigma'\n");
- return R.result = TCmdResult::bad_value, move(R);
- }
- options.sdf_sigma = v;
- if ( M )
- M->options.sdf_sigma = v;
- }
-
- return move(R);
-}
-
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_advance( const TArgs& aa)
-{
- CMD_PROLOG (2, "advance")
-
- const double& time_to_go = aa[1].vg;
- const double end_time = M.model_time() + time_to_go;
- if ( M.model_time() > end_time ) {
- vp( 0, stderr, "advance(%g): Cannot go back in time (model is now at %g sec)\n", end_time, M.model_time());
- return R.result = TCmdResult::bad_value, move(R);
- }
- if ( !M.advance( end_time) ) {
- return R.result = TCmdResult::logic_error, move(R);
- }
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_advance_until( const TArgs& aa)
-{
- CMD_PROLOG (2, "advance_until")
-
- const double end_time = aa[1].vg;
- if ( M.model_time() > end_time ) {
- vp( 0, stderr, "advance_until(%g): Cannot go back in time (model is now at %g sec)\n", end_time, M.model_time());
- return R.result = TCmdResult::bad_value, move(R);
- }
- if ( !M.advance( end_time) ) {
- return R.result = TCmdResult::logic_error, move(R);
- }
-
- return move(R);
-}
-
-
-// ----------------------------------------
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_new_neuron( const TArgs& aa)
-{
- // arity has been checked already, in host_fun(), but it may still make sense to
- CMD_PROLOG (3, "new_neuron")
-
- const string
- &type = aa[1].vs,
- &label = aa[2].vs;
-
- if ( !M.add_neuron_species(
- type, label,
- TIncludeOption::is_last) ) {
- // vp( "`add_neuron' failed"); // we trust sufficient diagnostics has been reported
- return R.result = TCmdResult::logic_error, move(R);
- }
-
- return /* R.result = TCmdResult::ok, */ move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_new_synapse( const TArgs& aa)
-{
- CMD_PROLOG (5, "new_synapse")
-
- const string
- &type = aa[1].vs,
- &src = aa[2].vs,
- &tgt = aa[3].vs;
- const double
- &g = aa[4].vg;
-
- if ( !M.add_synapse_species(
- type, src, tgt, g,
- CModel::TSynapseCloningOption::yes,
- TIncludeOption::is_last) ) {
- return R.result = TCmdResult::logic_error, move(R);
- }
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_get_unit_properties( const TArgs& aa)
-{
- CMD_PROLOG (2, "get_unit_properties")
-
- const string
- &label = aa[1].vs;
- auto Up = M.unit_by_label(label);
- if ( Up ) {
- R.values.push_back( SArg (Up->label()));
- R.values.push_back( SArg (Up->class_name()));
- R.values.push_back( SArg (Up->family()));
- R.values.push_back( SArg (Up->species()));
- R.values.push_back( SArg ((int)Up->has_sources()));
- R.values.push_back( SArg ((int)Up->is_not_altered()));
- } else {
- vp( 0, stderr, "get_unit_properties(\"%s\"): No such unit\n", label.c_str());
- return R.result = TCmdResult::bad_id, move(R);
- }
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_get_unit_parameter( const TArgs& aa)
-{
- CMD_PROLOG (3, "get_unit_parameter")
-
- const string
- &label = aa[1].vs,
- ¶m = aa[2].vs;
- auto Up = M.unit_by_label(label);
- if ( Up )
- try {
- R.values.push_back(
- SArg (Up->get_param_value( param)));
- } catch (exception& ex) {
- return R.result = TCmdResult::bad_param, move(R);
- }
- else {
- vp( 0, stderr, "get_unit_parameter(\"%s\"): No such unit\n", label.c_str());
- return R.result = TCmdResult::bad_id, move(R);
- }
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_set_unit_parameter( const TArgs& aa)
-{
- CMD_PROLOG (4, "set_unit_parameter")
-
- const string
- &label = aa[1].vs,
- ¶m = aa[2].vs;
- const double
- &value = aa[3].vg;
-
- auto Up = M.unit_by_label(label);
- if ( Up )
- try {
- Up->param_value( param) = value;
- } catch (exception& ex) {
- return R.result = TCmdResult::bad_param, move(R);
- }
- else {
- vp( 0, stderr, "set_unit_parameter(\"%s\"): No such unit\n", label.c_str());
- return R.result = TCmdResult::bad_id, move(R);
- }
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_get_unit_vars( const TArgs& aa)
-{
- CMD_PROLOG (3, "get_unit_vars")
-
- const string
- &label = aa[1].vs,
- ¶m = aa[2].vs;
- auto Up = M.unit_by_label(label);
- if ( Up )
- try {
- R.values.push_back(
- SArg (Up->get_param_value( param)));
- } catch (exception& ex) {
- return R.result = TCmdResult::bad_param, move(R);
- }
- else {
- vp( 0, stderr, "get_unit_parameter(\"%s\"): No such unit\n", label.c_str());
- return R.result = TCmdResult::bad_id, move(R);
- }
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_reset_unit( const TArgs& aa)
-{
- CMD_PROLOG (3, "reset_unit")
-
- const string
- &label = aa[1].vs;
- auto Up = M.unit_by_label(label);
- if ( Up )
- Up -> reset_state();
- else {
- vp( 0, stderr, "reset_unit(\"%s\"): No such unit\n", label.c_str());
- return R.result = TCmdResult::bad_id, move(R);
- }
-
- return move(R);
-}
-
-
-// ----------------------------------------
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_get_units_matching( const TArgs& aa)
-{
- CMD_PROLOG (2, "get_units_matching")
-
- const string &pattern = aa[1].vs;
- auto L = M.list_units( pattern);
- for ( auto& U : L )
- R.values.emplace_back( SArg (U->label()));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_get_units_of_type( const TArgs& aa)
-{
- CMD_PROLOG (2, "get_units_of_type")
-
- const string &type = aa[1].vs;
- auto L = M.list_units();
- for ( auto& U : L )
- if ( type == U->species() )
- R.values.emplace_back( SArg (U->label()));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_set_matching_neuron_parameter( const TArgs& aa)
-{
- CMD_PROLOG (4, "set_matching_neuron_parameter")
-
- const string
- &pattern = aa[1].vs,
-                &param = aa[2].vs;
- const double
- &value = aa[3].vg;
-
- list<CModel::STagGroupNeuronParmSet> tags {CModel::STagGroupNeuronParmSet (pattern, param, value)};
- R.values.push_back(
- SArg ((int)M.process_paramset_static_tags(
- tags)));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_set_matching_synapse_parameter( const TArgs& aa)
-{
- CMD_PROLOG (5, "set_matching_synapse_parameter")
-
- const string
- &src = aa[1].vs,
- &tgt = aa[2].vs,
-                &param = aa[3].vs;
- const double
- &value = aa[4].vg;
-
- list<CModel::STagGroupSynapseParmSet> tags {CModel::STagGroupSynapseParmSet (src, tgt, param, value)};
- R.values.push_back(
- SArg ((int)M.process_paramset_static_tags(
- tags)));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_revert_matching_unit_parameters( const TArgs& aa)
-{
- CMD_PROLOG (4, "revert_matching_unit_parameters")
-
- const string
- &pattern = aa[1].vs;
-
- auto L = M.list_units( pattern);
- size_t count = 0;
- for ( auto& U : L ) {
- U->reset_params();
- ++count;
- }
-
- R.values.push_back( SArg ((int)count));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_decimate( const TArgs& aa)
-{
- CMD_PROLOG (3, "decimate")
-
- const string &pattern = aa[1].vs;
- const double& frac = aa[2].vg;
- if ( frac < 0. || frac > 1. ) {
- vp( 0, stderr, "decimate(%g): Decimation fraction outside [0..1]\n", frac);
- return R.result = TCmdResult::bad_value, move(R);
- }
-
- list<CModel::STagGroupDecimate> tags {{pattern, frac}};
- R.values.push_back(
- SArg ((int)M.process_decimate_tags(
- tags)));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_putout( const TArgs& aa)
-{
- CMD_PROLOG (2, "putout")
-
- const string &pattern = aa[1].vs;
-
- list<CModel::STagGroup> tags {{pattern, CModel::STagGroup::TInvertOption::no}};
- R.values.push_back(
- SArg ((int)M.process_putout_tags(
- tags)));
-
- return move(R);
-}
-
-
-// ----------------------------------------
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_new_tape_source( const TArgs& aa)
-{
- CMD_PROLOG (4, "new_tape_source")
-
- const string
- &name = aa[1].vs,
- &fname = aa[2].vs;
- const double
- &looping = aa[3].vd;
-
- if ( M.source_by_id( name) ) {
- vp( 0, stderr, "new_tape_source(): A source named \"%s\" already exists\n", name.c_str());
- return R.result = TCmdResult::logic_error, move(R);
- }
-
- try {
- auto source = new CSourceTape(
- name, fname,
- looping ? TSourceLoopingOption::yes : TSourceLoopingOption::no);
- if ( source )
- M.add_source( source);
- else {
- vp( 0, stderr, "new_tape_source(\"%s\", \"%s\"): Failed impossibly\n",
- name.c_str(), fname.c_str());
- return R.result = TCmdResult::system_error, move(R);
- }
- } catch (exception& ex) {
- vp( 0, stderr, "new_tape_source(\"%s\", \"%s\"): %s\n",
- name.c_str(), fname.c_str(), ex.what());
- return R.result = TCmdResult::system_error, move(R);
- }
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_new_periodic_source( const TArgs& aa)
-{
- CMD_PROLOG (5, "new_periodic_source")
-
- const string
- &name = aa[1].vs,
- &fname = aa[2].vs;
- const int
- &looping = aa[3].vd;
- const double
- &period = aa[4].vg;
-
- if ( M.source_by_id( name) ) {
- vp( 0, stderr, "new_periodic_source(): A source named \"%s\" already exists\n", name.c_str());
- return R.result = TCmdResult::logic_error, move(R);
- }
-
- try {
- auto source = new CSourcePeriodic(
- name, fname,
- looping ? TSourceLoopingOption::yes : TSourceLoopingOption::no,
- period);
- if ( source )
- M.add_source( source);
- else {
- vp( 0, stderr, "new_periodic_source(\"%s\", \"%s\"): Failed impossibly\n",
- name.c_str(), fname.c_str());
- return R.result = TCmdResult::system_error, move(R);
- }
- } catch (exception& ex) {
- vp( 0, stderr, "new_periodic_source(\"%s\", \"%s\"): %s\n",
- name.c_str(), fname.c_str(), ex.what());
- return R.result = TCmdResult::system_error, move(R);
- }
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_new_noise_source( const TArgs& aa)
-{
- CMD_PROLOG (6, "new_noise_source")
-
- const string
- &name = aa[1].vs;
- const double
- &min = aa[2].vg,
- &max = aa[3].vg,
- &sigma = aa[4].vg;
- const string
- &distribution = aa[5].vs;
-
- if ( M.source_by_id( name) ) {
- vp( 0, stderr, "new_noise_source(): A source named \"%s\" already exists\n", name.c_str());
- return R.result = TCmdResult::logic_error, move(R);
- }
-
- try {
- auto source = new CSourceNoise(
- name, min, max, sigma, CSourceNoise::distribution_by_name(distribution));
- if ( source )
- M.add_source( source);
- else {
- vp( 0, stderr, "new_noise_source(\"%s\"): Failed impossibly\n",
- name.c_str());
- return R.result = TCmdResult::system_error, move(R);
- }
- } catch (exception& ex) {
- vp( 0, stderr, "new_noise_source(\"%s\"): %s\n",
- name.c_str(), ex.what());
- return R.result = TCmdResult::system_error, move(R);
- }
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_get_sources( const TArgs& aa)
-{
- CMD_PROLOG (1, "get_sources")
-
- for ( auto& S : M.sources() )
- R.values.push_back( SArg (S->name()));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_connect_source( const TArgs& aa)
-{
- CMD_PROLOG (4, "connect_source")
-
- const string
- &label = aa[1].vs,
- &parm = aa[2].vs,
- &source = aa[3].vs;
- C_BaseSource *S = M.source_by_id( source);
- if ( !S ) {
- vp( 0, stderr, "connect_source(): Unknown source: %s\n", source.c_str());
- return R.result = TCmdResult::bad_id, move(R);
- }
- // cannot check whether units matching label indeed have a parameter so named
- list<CModel::STagGroupSource> tags {{label, parm, S, CModel::STagGroup::TInvertOption::no}};
- R.values.push_back(
- SArg ((int)M.process_paramset_source_tags(
- tags)));
-
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_disconnect_source( const TArgs& aa)
-{
- CMD_PROLOG (4, "disconnect_source")
-
- const string
- &label = aa[1].vs,
- &parm = aa[2].vs,
- &source = aa[3].vs;
- C_BaseSource *S = M.source_by_id( source);
- if ( !S ) {
- vp( 0, stderr, "disconnect_source(): Unknown source: %s\n", source.c_str());
- return R.result = TCmdResult::bad_id, move(R);
- }
- list<CModel::STagGroupSource> tags {{label, parm, S, CModel::STagGroup::TInvertOption::yes}};
- R.values.push_back(
- SArg ((int)M.process_paramset_source_tags(
- tags)));
-
- return move(R);
-}
-
-
-// ----------------------------------------
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_start_listen( const TArgs& aa)
-{
- CMD_PROLOG (2, "start_listen")
-
- const string
- &pattern = aa[1].vs;
- list<CModel::STagGroupListener> tags {CModel::STagGroupListener (
- pattern, (0
- | (M.options.listen_1varonly ? CN_ULISTENING_1VARONLY : 0)
- | (M.options.listen_deferwrite ? CN_ULISTENING_DEFERWRITE : 0)
- | (M.options.listen_binary ? CN_ULISTENING_BINARY : CN_ULISTENING_DISK)),
- CModel::STagGroup::TInvertOption::no)};
- R.values.push_back(
- SArg ((int)M.process_listener_tags(
- tags)));
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_stop_listen( const TArgs& aa)
-{
- CMD_PROLOG (2, "stop_listen")
-
- const string
- &pattern = aa[1].vs;
- list<CModel::STagGroupListener> tags {{
- pattern, (0
- | (M.options.listen_1varonly ? CN_ULISTENING_1VARONLY : 0)
- | (M.options.listen_deferwrite ? CN_ULISTENING_DEFERWRITE : 0)
- | (M.options.listen_binary ? CN_ULISTENING_BINARY : CN_ULISTENING_DISK)),
- CModel::STagGroup::TInvertOption::yes}};
- R.values.push_back(
- SArg ((int)M.process_listener_tags(
- tags)));
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_start_log_spikes( const TArgs& aa)
-{
- CMD_PROLOG (2, "start_log_spikes")
-
- if ( M.options.sxf_period <= 0. || M.options.sdf_sigma <= 0. )
- vp( 1, "SDF parameters not set up, will only log spike times\n");
-
- const string
- &pattern = aa[1].vs;
- list<CModel::STagGroupSpikelogger> tags {{
- pattern,
- M.options.sxf_period, M.options.sdf_sigma, M.options.sxf_start_delay,
- CModel::STagGroup::TInvertOption::no}};
- R.values.push_back(
- SArg ((int)M.process_spikelogger_tags(
- tags)));
- return move(R);
-}
-
-
-
-CInterpreterShell::SCmdResult
-cnrun::CInterpreterShell::
-cmd_stop_log_spikes( const TArgs& aa)
-{
-        CMD_PROLOG (2, "stop_log_spikes")
-
- const string
- &pattern = aa[1].vs;
- list<CModel::STagGroupSpikelogger> tags {{
- pattern,
- M.options.sxf_period, M.options.sdf_sigma, M.options.sxf_start_delay,
- CModel::STagGroup::TInvertOption::yes}};
- R.values.push_back(
- SArg ((int)M.process_spikelogger_tags(
- tags)));
- return move(R);
-}
-
-
-// Local Variables:
-// Mode: c++
-// indent-tabs-mode: nil
-// tab-width: 8
-// c-basic-offset: 8
-// End:
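
For reference, a minimal sketch of how a driving script would have exercised the source/listener commands deleted above, using the two values (shell handle and host callback) that the interpreter in cnrun/interpreter.cc (also deleted in this commit, below) supplies to every script chunk. Model name "m1", unit labels and the parameter "Idc" are illustrative only:

    local shell, cn = ...   -- the two values the interpreter passes to the chunk
    cn(shell, "new_tape_source", "m1", "orn_drive", "ORNa.in", 1)
    cn(shell, "connect_source", "m1", "PN.0", "Idc", "orn_drive")
    cn(shell, "start_listen", "m1", "PN.*")
    cn(shell, "advance", "m1", 1000)
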
diff --git a/upstream/src/cnrun/completions.cc b/upstream/src/cnrun/completions.cc
deleted file mode 100644
index 6ba8cdd..0000000
--- a/upstream/src/cnrun/completions.cc
+++ /dev/null
@@ -1,456 +0,0 @@
-/*
- * File name: cnrun/completions.cc
- * Project: cnrun
- * Author: Andrei Zavada <johnhommer at gmail.com>
- * Initial version: 2010-02-12
- *
- * Purpose: interpreter readline completions
- *
- * License: GPL
- */
-
-#if HAVE_CONFIG_H && !defined(VERSION)
-# include "config.h"
-#endif
-
-#include <cstdio>
-
-#ifdef HAVE_LIBREADLINE
-# if defined(HAVE_READLINE_READLINE_H)
-# include <readline/readline.h>
-# elif defined(HAVE_READLINE_H)
-# include <readline.h>
-# endif
-#endif
-
-#ifdef HAVE_READLINE_HISTORY
-# if defined(HAVE_READLINE_HISTORY_H)
-# include <readline/history.h>
-# elif defined(HAVE_HISTORY_H)
-# include <history.h>
-# endif
-#endif
-
-#include "libcn/model.hh"
-#include "cnrun.hh"
-
-using namespace std;
-using namespace cnrun;
-
-
-
-static char*
-cnrun_null_generator( const char* text, int state)
-{
- return nullptr;
-}
-
-
-static char*
-cnrun_cmd_generator( const char* text, int state)
-{
- static int list_index, len;
- const char *name;
-
- if ( !state ) {
- list_index = 0;
- len = strlen( text);
- }
-
- while ( (name = cnrun_cmd[list_index]) ) {
- list_index++;
- if ( strncmp( name, text, len) == 0 )
- return strdup( name);
- }
- return nullptr;
-}
-
-static char*
-cnrun_source_types_generator( const char* text, int state)
-{
- static int list_index, len;
- const char *name;
-
- if ( !state ) {
- list_index = 0;
- len = strlen( text);
- }
-
- while ( (name = __SourceTypes[list_index]) ) {
- list_index++;
- if ( strncmp( name, text, len) == 0 )
- return strdup( name);
- }
- return nullptr;
-}
-
-
-
-
-
-
-
-static char*
-cnrun_neu_type_generator( const char *text, int state)
-{
- static const char** neuron_types = nullptr;
- if ( !neuron_types ) {
- if ( !(neuron_types = (const char**)malloc( (NT_LAST - NT_FIRST+1+1)*sizeof(char*))) )
- abort();
- size_t n;
- for ( n = 0; n <= NT_LAST - NT_FIRST; n++ )
- neuron_types[n] = strdup( __CNUDT[NT_FIRST+n].species); // family would do just as well
- neuron_types[n] = nullptr;
- }
-
- static int list_index, len;
- const char *name;
- if ( !state ) {
- list_index = 0;
- len = strlen( text);
- }
- while ( (name = neuron_types[list_index]) ) {
- list_index++;
- if ( strncmp( name, text, len) == 0 )
- return strdup( name);
- }
- return nullptr;
-}
-
-
-
-static char*
-cnrun_syn_type_generator( const char *text, int state)
-{
- static const char** synapse_types = nullptr;
- if ( !synapse_types ) {
- if ( !(synapse_types = (const char**)malloc( (YT_LAST - YT_FIRST+1+1)*sizeof(char*))) )
- abort();
- size_t n, i;
- for ( n = i = 0; n <= YT_LAST - YT_FIRST; n++ )
- synapse_types[i++] = strdup( __CNUDT[YT_FIRST+n].family);
- // there are fewer families than species, so we are wasting some tens of bytes here. oh well.
- synapse_types[i] = nullptr;
- }
-
- static int list_index, len;
- const char *name;
- if ( !state ) {
- list_index = 0;
- len = strlen( text);
- }
- while ( (name = synapse_types[list_index]) ) {
- list_index++;
- if ( strncmp( name, text, len) == 0 )
- return strdup( name);
- }
- return nullptr;
-}
-
-
-
-
-
-bool cnrun::regenerate_unit_labels = true;
-
-#define GENERATE_NEURONS 1
-#define GENERATE_SYNAPSES 2
-static int restrict_generated_set = 0;
-
-static char*
-cnrun_unit_label_generator( const char *text, int state)
-{
- static int list_index, len;
- const char *name;
-
- static char** unit_labels = nullptr;
-
- if ( regenerate_unit_labels ) {
- regenerate_unit_labels = false;
-
- if ( !Model ) {
- free( unit_labels);
- unit_labels = nullptr;
- return nullptr;
- }
-
- if ( !(unit_labels = (char**)realloc( unit_labels, (Model->units()+1) * sizeof(char*))) )
- abort();
- size_t n = 0;
- for_model_units (Model, U)
- if ( ((restrict_generated_set & GENERATE_NEURONS) && (*U)->is_neuron()) ||
- ((restrict_generated_set & GENERATE_SYNAPSES) && (*U)->is_synapse()) )
- unit_labels[n++] = strdup( (*U) -> label());
- unit_labels[n] = nullptr;
- }
-
- if ( !unit_labels )
- return nullptr;
-
- if ( !state ) {
- list_index = 0;
- len = strlen( text);
- }
- while ( (name = unit_labels[list_index]) ) {
- list_index++;
- if ( strncmp( name, text, len) == 0 )
- return strdup( name);
- }
- return nullptr;
-}
-
-
-
-bool cnrun::regenerate_var_names = true;
-
-static char*
-cnrun_var_names_generator( const char *text, int state)
-{
- static int list_index, len;
- const char *name;
-
- static char** var_names = nullptr;
-
- if ( regenerate_var_names ) {
- regenerate_var_names = false;
-
- if ( current_shell_variables->size() == 0 )
- return nullptr;
-
- if ( !(var_names = (char**)realloc( var_names, (current_shell_variables->size()+1) * sizeof(char*))) )
- abort();
- size_t n = 0;
- for ( auto &v : *current_shell_variables )
- var_names[n++] = strdup( v.name);
- var_names[n] = nullptr;
- }
-
- if ( !var_names )
- return nullptr;
-
- if ( !state ) {
- list_index = 0;
- len = strlen( text);
- }
- while ( (name = var_names[list_index]) ) {
- list_index++;
- if ( strncmp( name, text, len) == 0 )
- return strdup( name);
- }
- return nullptr;
-}
-
-
-
-
-
-bool cnrun::regenerate_source_ids = true;
-
-static char*
-cnrun_source_id_generator( const char *text, int state)
-{
- static int list_index, len;
- const char *name;
-
- static char** source_ids = nullptr;
-
- if ( regenerate_source_ids ) {
- regenerate_source_ids = false;
-
- if ( !Model || Model->Sources.size() == 0 )
- return nullptr;
-
- if ( !(source_ids = (char**)realloc( source_ids, (Model->Sources.size()+1) * sizeof(char*))) )
- abort();
- size_t n = 0;
- for ( auto &v : Model->Sources )
- source_ids[n++] = strdup( v->name.c_str());
- source_ids[n] = nullptr;
- }
-
- if ( !source_ids )
- return nullptr;
-
- if ( !state ) {
- list_index = 0;
- len = strlen( text);
- }
- while ( (name = source_ids[list_index]) ) {
- list_index++;
- if ( strncmp( name, text, len) == 0 )
- return strdup( name);
- }
- return nullptr;
-}
-
-
-
-
-static char **parm_names = nullptr;
-static char *unit_label_completing_for = nullptr;
-static char *synapse_target_label_completing_for = nullptr;
-
-static char*
-cnrun_parm_names_generator( const char *text, int state)
-{
- static int list_index, len;
- const char *name;
-
- if ( !Model )
- return nullptr;
- C_BaseSynapse *y;
- TUnitType t;
- C_BaseUnit *u1, *u2;
- if ( synapse_target_label_completing_for )
- if ( (u1 = Model->unit_by_label( unit_label_completing_for)) && u1->is_neuron() &&
- (u2 = Model->unit_by_label( synapse_target_label_completing_for)) && u2->is_neuron() &&
- (y = (static_cast<C_BaseNeuron*>(u1)) -> connects_via( *static_cast<C_BaseNeuron*>(u2))) )
- t = y->type();
- else
- return nullptr;
- else
- t = Model -> unit_by_label( unit_label_completing_for) -> type();
- if ( t == NT_VOID )
- return nullptr;
-
- if ( !(parm_names = (char**)realloc( parm_names, (__CNUDT[t].pno+1) * sizeof(char*))) )
- abort();
- size_t n, p;
- for ( n = p = 0; p < __CNUDT[t].pno; p++ )
- if ( __cn_verbosely > 5 || __CNUDT[t].stock_param_syms[p][0] != '.' )
- parm_names[n++] = strdup( __CNUDT[t].stock_param_syms[p]);
- parm_names[n] = nullptr;
-
- if ( !parm_names )
- return nullptr;
-
- if ( !state ) {
- list_index = 0;
- len = strlen( text);
- }
- while ( (name = parm_names[list_index]) ) {
- list_index++;
- if ( strncmp( name, text, len) == 0 )
- return strdup( name);
- }
- return nullptr;
-}
-
-
-
-
-static int
-rl_point_at_word() __attribute__ ((pure));
-
-static int
-rl_point_at_word()
-{
- int p = 0, delims = 0;
- while ( p < rl_point ) {
- if ( isspace(rl_line_buffer[p]) ) {
- delims++;
- do p++;
- while ( p < rl_point && isspace(rl_line_buffer[p]) );
- }
- p++;
- }
- return delims;
-}
-
-
-
-char**
-cnrun::
-cnrun_completion( const char *text, int start, int end)
-{
- if ( start == 0 )
- return rl_completion_matches( text, &cnrun_cmd_generator);
-
- char *line_buffer = strdupa( rl_line_buffer),
- *cmd = strtok( line_buffer, " \t");
-
- if ( strcmp( cmd, cnrun_cmd[CNCMD_add_neuron]) == 0 ) {
- switch ( rl_point_at_word() ) {
- case 1: return rl_completion_matches( text, &cnrun_neu_type_generator);
- default: return rl_completion_matches( text, &cnrun_null_generator);
- }
-
- } else if ( strcmp( cmd, cnrun_cmd[CNCMD_add_synapse]) == 0 ) {
- switch ( rl_point_at_word() ) {
- case 1: return rl_completion_matches( text, &cnrun_syn_type_generator);
- case 2:
- case 3: return (restrict_generated_set = 0|GENERATE_NEURONS,
- rl_completion_matches( text, &cnrun_unit_label_generator));
- default: return rl_completion_matches( text, &cnrun_null_generator);
- }
-
- } else if ( strcmp( cmd, cnrun_cmd[CNCMD_load_nml]) == 0 ) {
- return nullptr; // use built-in filename completion
-
- } else if ( strcmp( cmd, cnrun_cmd[CNCMD_show_units]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_decimate]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_start_listen]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_stop_listen]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_start_log_spikes]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_stop_log_spikes]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_putout]) == 0 ) {
- return (rl_point_at_word() == 1) ? (restrict_generated_set = 0|GENERATE_NEURONS|GENERATE_SYNAPSES,
- rl_completion_matches( text, &cnrun_unit_label_generator)) : nullptr;
-
- } else if ( strcmp( cmd, cnrun_cmd[CNCMD_show_vars]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_clear_vars]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_listen_dt]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_integration_dt_min]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_integration_dt_max]) == 0 ||
- strcmp( cmd, cnrun_cmd[CNCMD_integration_dt_cap]) == 0 ) {
- return (rl_point_at_word() == 1) ? rl_completion_matches( text, cnrun_var_names_generator) : nullptr;
-
- } else if ( strcmp( cmd, cnrun_cmd[CNCMD_set_parm_neuron]) == 0 ) {
- switch ( rl_point_at_word() ) {
- case 1: restrict_generated_set = 0|GENERATE_NEURONS;
- return rl_completion_matches( text, cnrun_unit_label_generator);
- case 2: unit_label_completing_for = strtok( nullptr, " ");
- synapse_target_label_completing_for = nullptr;
- return rl_completion_matches( text, cnrun_parm_names_generator);
- default: return rl_completion_matches( text, cnrun_var_names_generator);
- }
-
- } else if ( strcmp( cmd, cnrun_cmd[CNCMD_set_parm_synapse]) == 0 ) {
- switch ( rl_point_at_word() ) {
- case 1:
- case 2: restrict_generated_set = 0|GENERATE_NEURONS;
- return rl_completion_matches( text, cnrun_unit_label_generator);
- case 3: unit_label_completing_for = strtok( nullptr, " ");
- synapse_target_label_completing_for = strtok( nullptr, " ");
- return rl_completion_matches( text, cnrun_parm_names_generator);
- default: return rl_completion_matches( text, cnrun_var_names_generator);
- }
-
- } else if ( strcmp( cmd, cnrun_cmd[CNCMD_connect_source]) == 0 ) {
- switch ( rl_point_at_word() ) {
- case 1: return rl_completion_matches( text, &cnrun_source_id_generator);
- case 2: restrict_generated_set = 0|GENERATE_NEURONS|GENERATE_SYNAPSES;
- return rl_completion_matches( text, &cnrun_unit_label_generator);
- case 3: unit_label_completing_for = (strtok( nullptr, " "), strtok( nullptr, " "));
- synapse_target_label_completing_for = nullptr;
- return rl_completion_matches( text, cnrun_parm_names_generator);
- default: return rl_completion_matches( text, cnrun_null_generator);
- }
-
- } else if ( strcmp( cmd, cnrun_cmd[CNCMD_new_source]) == 0 ) {
- switch ( rl_point_at_word() ) {
- case 1: return rl_completion_matches( text, cnrun_source_types_generator);
- default: return rl_completion_matches( text, cnrun_null_generator);
- }
-
- } else {
- return nullptr;
- }
-}
-
-// Local Variables:
-// Mode: c++
-// indent-tabs-mode: nil
-// tab-width: 8
-// c-basic-offset: 8
-// End:
diff --git a/upstream/src/cnrun/interpreter.cc b/upstream/src/cnrun/interpreter.cc
deleted file mode 100644
index 0e623b2..0000000
--- a/upstream/src/cnrun/interpreter.cc
+++ /dev/null
@@ -1,270 +0,0 @@
-/*
- * File name: cnrun/interpreter.cc
- * Project: cnrun
- * Author: Andrei Zavada <johnhommer at gmail.com>
- * building on original work by Thomas Nowotny <tnowotny at ucsd.edu>
- * Initial version: 2010-02-12
- *
- * Purpose: CModel runner, using Lua.
- *
- * License: GPL
- */
-
-#if HAVE_CONFIG_H && !defined(VERSION)
-# include "config.h"
-#endif
-
-#include <list>
-
-extern "C" {
-#include <lua.h>
-#include <lualib.h>
-#include <lauxlib.h>
-}
-
-#include "cnrun.hh"
-
-using namespace std;
-using namespace cnrun;
-using stilton::str::sasprintf;
-
-cnrun::CInterpreterShell::
-CInterpreterShell (const SInterpOptions& options_)
- : options (options_)
-{
- lua_state = luaL_newstate();
- luaL_openlibs( lua_state);
-}
-
-cnrun::CInterpreterShell::
-~CInterpreterShell ()
-{
- for ( auto& M : models )
- delete M.second;
- lua_close( lua_state);
-}
-
-namespace {
-
-struct SCmdDesc {
- const char* id;
- //enum class TAType { aint, afloat, astr, };
- //vector<TAType> arguments;
- const char* arg_sig;
- //CInterpreterShell::TCmdResult (CInterpreterShell::* fun)(CInterpreterShell::TArgs&);
- decltype(&CInterpreterShell::cmd_new_model) fun;
-};
-
-const SCmdDesc Commands[] = {
- { "new_model", "s", &CInterpreterShell::cmd_new_model },
- { "delete_model", "s", &CInterpreterShell::cmd_delete_model },
- { "import_nml", "ss", &CInterpreterShell::cmd_import_nml },
- { "export_nml", "ss", &CInterpreterShell::cmd_export_nml },
- { "reset_model", "s", &CInterpreterShell::cmd_reset_model },
- { "cull_deaf_synapses", "s", &CInterpreterShell::cmd_cull_deaf_synapses },
- { "describe_model", "s", &CInterpreterShell::cmd_describe_model },
- { "get_model_parameter", "ss", &CInterpreterShell::cmd_get_model_parameter },
- { "set_model_parameter", "sss", &CInterpreterShell::cmd_set_model_parameter },
- { "advance", "sg", &CInterpreterShell::cmd_advance },
- { "advance_until", "sg", &CInterpreterShell::cmd_advance_until },
-
- { "new_neuron", "sss", &CInterpreterShell::cmd_new_neuron },
- { "new_synapse", "ssssg", &CInterpreterShell::cmd_new_synapse },
- { "get_unit_properties", "ss", &CInterpreterShell::cmd_get_unit_properties },
- { "get_unit_parameter", "sss", &CInterpreterShell::cmd_get_unit_parameter },
- { "set_unit_parameter", "sssg", &CInterpreterShell::cmd_set_unit_parameter },
- { "get_unit_vars", "ss", &CInterpreterShell::cmd_get_unit_vars },
- { "reset_unit", "ss", &CInterpreterShell::cmd_reset_unit },
-
- { "get_units_matching", "ss", &CInterpreterShell::cmd_get_units_matching },
- { "get_units_of_type", "ss", &CInterpreterShell::cmd_get_units_of_type },
- { "set_matching_neuron_parameter", "sssg", &CInterpreterShell::cmd_set_matching_neuron_parameter },
- { "set_matching_synapse_parameter", "ssssg", &CInterpreterShell::cmd_set_matching_synapse_parameter },
- { "revert_matching_unit_parameters", "ss", &CInterpreterShell::cmd_revert_matching_unit_parameters },
- { "decimate", "ssg", &CInterpreterShell::cmd_decimate },
- { "putout", "ss", &CInterpreterShell::cmd_putout },
-
- { "new_tape_source", "sssb", &CInterpreterShell::cmd_new_tape_source },
- { "new_periodic_source", "sssbg", &CInterpreterShell::cmd_new_periodic_source },
- { "new_noise_source", "ssgggs", &CInterpreterShell::cmd_new_noise_source },
- { "get_sources", "s", &CInterpreterShell::cmd_get_sources },
- { "connect_source", "ssss", &CInterpreterShell::cmd_connect_source },
- { "disconnect_source", "ssss", &CInterpreterShell::cmd_disconnect_source },
-
- { "start_listen", "ss", &CInterpreterShell::cmd_start_listen },
- { "stop_listen", "ss", &CInterpreterShell::cmd_stop_listen },
- { "start_log_spikes", "ss", &CInterpreterShell::cmd_start_log_spikes },
- { "stop_log_spikes", "ss", &CInterpreterShell::cmd_stop_log_spikes },
-};
-
-}
-
-
-list<string>
-cnrun::CInterpreterShell::
-list_commands()
-{
- list<string> ret;
- for ( auto& cs : Commands )
- ret.push_back( {cs.id});
- return move(ret);
-}
-
-
-namespace {
-extern "C"
-int
-host_fun( lua_State* L) // -> nargsout
-{
-        size_t nargsin = lua_gettop(L) - 2; // the first two being a CInterpreterShell* and the opcode
-
- auto this_p = (CInterpreterShell*)lua_touserdata( L, 1);
-
- auto reperr = [&] (const char* str)
- {
- lua_pushboolean( L, false);
- lua_pushfstring( L, str);
- };
-
- if ( !this_p ) {
- reperr( "Opaque shell blob object is NULL");
- return 2;
- }
-
- const char* opcode = lua_tostring( L, 2);
-
- for ( auto& C : Commands ) {
- if ( strcmp( opcode, C.id) != 0 )
- continue;
- if ( nargsin != strlen(C.arg_sig) ) {
- reperr( sasprintf( "Bad arity in call to %s (expecting %zu arg(s), got %zu",
- opcode, strlen(C.arg_sig), nargsin).c_str());
- return 2;
- }
-
- // we don't accept arrays from lua yet
- CInterpreterShell::TArgs args;
- size_t argth = 0;
- while ( ++argth <= nargsin ) {
- CInterpreterShell::SArg A (0);
- A.type = C.arg_sig[argth-1];
- switch ( A.type ) {
- case 's': A.vs = lua_tostring( L, 2 + argth); break;
- case 'd': A.vd = lua_tointeger( L, 2 + argth); break;
- case 'b': A.vd = lua_tointeger( L, 2 + argth); break;
- case 'g': A.vg = lua_tonumber( L, 2 + argth); break;
- default:
- throw "Fix type literals in SCmdDesc?";
- }
- args.push_back(A);
- }
-
-                // return: ok result code, # of values pushed, value0, value1, ...; or
-                //         non-ok result code, error string
- this_p->vp( 5, "fun %s/%zu\n", C.id, args.size());
- auto R = (this_p ->* C.fun)( args);
- lua_settop( L, 0);
- lua_pushboolean( L, true);
- lua_pushinteger( L, R.result);
- if ( R.result == CInterpreterShell::TCmdResult::ok ) {
- lua_pushinteger( L, R.values.size());
- for ( auto& V : R.values )
- switch (V.type) {
- case 's': lua_pushstring( L, V.vs.c_str()); break;
- case 'd': lua_pushinteger( L, V.vd); break;
- case 'g': lua_pushnumber( L, V.vg); break;
- default:
- throw "Fix type literals in SCmdDesc?";
- }
- return 1 + 1 + 1 + R.values.size();
- } else {
- lua_pushstring( L, R.error_message.c_str());
- return 1 + 1 + 1;
- }
- }
- reperr( sasprintf( "Unrecognized function \"%s\"/%zu",
- opcode, nargsin - 2).c_str());
- return 2;
-}
-}
-
-cnrun::CInterpreterShell::TScriptExecResult
-cnrun::CInterpreterShell::
-exec_script( const string& script_fname)
-{
- // 0. load script
- string script_contents;
- {
- ifstream oleg (script_fname);
- char b[8888];
- while ( oleg.good() ) {
- size_t n = oleg.readsome( b, 8888);
- if ( n == 0 )
- break;
- b[n] = 0;
- script_contents += b;
- }
- if ( script_contents.size() == 0 ) {
- vp( 0, "%s: empty file\n", script_fname.c_str());
- return TScriptExecResult::file_error;
- }
- }
-
- // 1a. prepare lua side
- lua_settop( lua_state, 0);
-
- // 1b. compile
- int ret1 = luaL_loadbuffer(
- lua_state,
- script_contents.c_str(),
- script_contents.size(),
- script_fname.c_str());
- if ( ret1 ) {
- const char* errmsg = lua_tostring( lua_state, -1);
- vp( 0, "%s: compilation failed: %s (%d)\n", script_fname.c_str(), errmsg, ret1);
- return TScriptExecResult::compile_error;
- }
-
- // 1c. put host_fun on stack
- if ( !lua_checkstack( lua_state, 2) ) {
- vp( 0, "failed to grow stack for 2 elements\n");
- return TScriptExecResult::stack_error;
- }
-
- lua_pushlightuserdata( lua_state, this);
- lua_pushcfunction( lua_state, host_fun);
-
- // 1d. exec script
- int call_result = lua_pcall(
- lua_state,
- 2, // nargsin
- 1, // nargsout
- 0);
- if ( call_result ) {
- vp( 0, "%s: script call failed (%d): %s\n", script_fname.c_str(), call_result, lua_tostring( lua_state, 1));
- return TScriptExecResult::call_error;
- }
-
- return TScriptExecResult::ok;
-}
-
-
-
-int
-cnrun::CInterpreterShell::
-run()
-{
- for ( const auto& S : options.scripts ) {
- vp( 1, "Exec %s:\n", S.c_str());
- if ( exec_script(S) != TScriptExecResult::ok )
- return 1;
- }
- return 0;
-}
-
-// Local Variables:
-// Mode: c++
-// indent-tabs-mode: nil
-// tab-width: 8
-// c-basic-offset: 8
-// End:
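
To illustrate the convention host_fun (above) implements: each call first returns a boolean dispatch status; on success it is followed by the command result code, a value count and the values themselves, on failure by an error string. A minimal sketch, with hypothetical model/unit/parameter names:

    local shell, cn = ...
    local ok, res, n, v = cn(shell, "get_unit_parameter", "m1", "PN.0", "gsyn")
    if not ok then
        error(res)   -- res is an error string (bad arity, unknown opcode, ...)
    end
    -- otherwise res is the TCmdResult code; when it is ok, n values follow
    -- (here, the single parameter value v)
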
diff --git a/upstream/src/cnrun/main.cc b/upstream/src/cnrun/main.cc
deleted file mode 100644
index 502c810..0000000
--- a/upstream/src/cnrun/main.cc
+++ /dev/null
@@ -1,160 +0,0 @@
-/*
- * File name: cnrun/main.cc
- * Project: cnrun
- * Author: Andrei Zavada <johnhommer at gmail.com>
- * Initial version: 2008-09-02
- *
- * Purpose: function main
- *
- * License: GPL
- */
-
-
-#if HAVE_CONFIG_H && !defined(VERSION)
-# include "config.h"
-#endif
-
-#include <cstdarg>
-#include <cstdlib>
-#include <list>
-#include <string>
-#include <unistd.h>
-
-#include "cnrun.hh"
-
-// needs to go after <algorithm>
-#include <argp.h>
-
-using namespace std;
-using namespace cnrun;
-
-
-// argparse
-
-const char
- *argp_program_version = PACKAGE_STRING,
- *argp_program_bug_address = PACKAGE_BUGREPORT;
-
-static char doc[] =
- PACKAGE " -- a neuronal network model runner";
-
-namespace opt {
-enum TOptChar {
- list_units = 'U',
- chdir = 'C',
- verbosely = 'v',
-};
-} // namespace opt, strictly to enclose enum TOptChar
-
-#pragma GCC diagnostic ignored "-Wmissing-field-initializers"
-#pragma GCC diagnostic push
-static struct argp_option options[] = {
- {"list-units", opt::list_units, NULL, 0,
- "List all available units and exit." },
-
- {"chdir", opt::chdir, "DIR", 0,
- "Change to DIR before executing script."},
-
- {"verbose", opt::verbosely, "LEVEL", 0,
- "Verbosity level (default 1; values up to 7 are meaningful). Use a"
- " negative value to show the progress percentage only,"
- " indented on the line at -8 x this value."},
-
- { 0 }
-};
-#pragma GCC diagnostic pop
-
-static char args_doc[] = "SCRIPTFILE(S)";
-
-static error_t parse_opt( int, char*, struct argp_state*);
-
-static struct argp argp = {
- options,
- parse_opt,
- args_doc,
- doc
-};
-
-
-static error_t
-parse_opt( int key, char *arg, struct argp_state *state)
-{
- auto& Q = *(cnrun::SInterpOptions*)state->input;
-
- char *endp = nullptr;
- switch ( key ) {
- case opt::list_units:
- Q.list_units = true;
- break;
-
- case opt::chdir:
- Q.working_dir = arg;
- break;
-
- case opt::verbosely:
- Q.verbosely = strtol( arg, &endp, 10);
- break;
-
- case ARGP_KEY_ARG:
- Q.scripts.emplace_back(arg);
- break;
-
- case ARGP_KEY_END:
- if ( Q.scripts.empty() && !Q.list_units )
- argp_usage (state);
- break;
-
- default:
- return (error_t)ARGP_ERR_UNKNOWN;
- }
-
- return (error_t)0;
-}
-
-
-
-
-// #include "print-version.hh"
-void print_version( const char* appname);
-
-int
-main( int argc, char *argv[])
-{
- print_version( "cnrun");
-
- cnrun::SInterpOptions Options;
- argp_parse( &argp, argc, argv, 0, NULL, (void*)&Options);
-
- // purely informational, requires no model
- if ( Options.list_units ) {
- cnmodel_dump_available_units();
- return 0;
- }
-
- // cd as requested
- char *pwd = nullptr;
- if ( Options.working_dir.size() ) {
- pwd = getcwd( nullptr, 0);
- if ( chdir( Options.working_dir.c_str()) ) {
- fprintf( stderr, "Failed to cd to \"%s\"", Options.working_dir.c_str());
- return 2;
- }
- }
-
- cnrun::global::verbosely = Options.verbosely;
-
- cnrun::CInterpreterShell Interp (Options);
- int ret = Interp.run();
-
- if ( pwd )
- if ( chdir( pwd) )
- fprintf( stderr, "Failed to cd back to \"%s\"", pwd);
-
- return ret;
-}
-
-// Local Variables:
-// indent-tabs-mode: nil
-// tab-width: 8
-// c-basic-offset: 8
-// End:
diff --git a/upstream/src/cnrun/print_version.cc b/upstream/src/cnrun/print_version.cc
deleted file mode 100644
index f7a7467..0000000
--- a/upstream/src/cnrun/print_version.cc
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * File name: print_version.cc
- * Project: cnrun
- * Author: Andrei Zavada <johnhommer at gmail.com>
- * Initial version: 2014-03-22
- *
- * Purpose: print version (separate file for every make to always touch it)
- *
- * License: GPL
- */
-
-#include <cstdio>
-#include "config.h"
-
-void
-print_version( const char* this_program)
-{
- printf( "%s %s built " __DATE__ " " __TIME__ " by %s\n", this_program, GIT_DESCRIBE_TAGS, BUILT_BY);
-}
-
-// Local Variables:
-// Mode: c++
-// indent-tabs-mode: nil
-// tab-width: 8
-// c-basic-offset: 8
-// End:
diff --git a/upstream/src/libcnlua/Makefile.am b/upstream/src/libcnlua/Makefile.am
deleted file mode 100644
index 2996281..0000000
--- a/upstream/src/libcnlua/Makefile.am
+++ /dev/null
@@ -1,36 +0,0 @@
-include $(top_srcdir)/src/Common.mk
-AM_CXXFLAGS += $(LUA_INCLUDE)
-
-if DO_PCH
-BUILT_SOURCES := \
- cnhost.hh.gch
-
-CLEANFILES := $(BUILT_SOURCES)
-endif
-
-
-lib_LTLIBRARIES = \
- libcnlua.la
-libcnlua_la_SOURCES = \
- lua-iface.cc cnhost.hh
-libcnlua_la_LIBADD := \
- ../libcn/libcn.la \
- ../libstilton/libstilton.la \
- $(LIBCN_LIBS) \
- $(LUA_LIB)
-libcnlua_la_LDFLAGS := \
- -shared -version-info $(subst .,:,$(PACKAGE_VERSION))
-
-
-#print_version.o: CXXFLAGS = $(AM_CXXFLAGS) -DGIT_DESCRIBE_TAGS=\"$(shell ../../make_version)\"
-#print_version.o: FORCE
-#FORCE:
-
-lua_libdir := $(DESTDIR)/$(libdir)/lua/$(LUA_VERSION)
-install-exec-hook:
- rm -f "$(DESTDIR)/$(pkglibdir)/*.la"
- $(MKDIR_P) "$(lua_libdir)"
- $(LN_S) -f "$(DESTDIR)/$(libdir)/libcnlua.so.$(PACKAGE_VERSION)" \
- "$(lua_libdir)/libcn.so"
-uninstall-hook:
- rm "$(lua_libdir)/libcn.so"
diff --git a/upstream/src/libcnlua/lua-iface.cc b/upstream/src/libcnlua/lua-iface.cc
deleted file mode 100644
index 8bba8b3..0000000
--- a/upstream/src/libcnlua/lua-iface.cc
+++ /dev/null
@@ -1,163 +0,0 @@
-/*
- * File name: cnrun/lua-iface.cc
- * Project: cnrun
- * Author: Andrei Zavada <johnhommer at gmail.com>
- * building on original work by Thomas Nowotny <tnowotny at ucsd.edu>
- * Initial version: 2014-10-09
- *
- * Purpose: libcn and some state, exported for use in your lua code.
- *
- * License: GPL
- */
-
-#if HAVE_CONFIG_H && !defined(VERSION)
-# include "config.h"
-#endif
-
-#include <list>
-
-extern "C" {
-#include <lua.h>
-#include <lualib.h>
-#include <lauxlib.h>
-}
-
-#include "libstilton/string.hh"
-#include "cnhost.hh"
-
-using namespace std;
-using namespace cnrun;
-
-namespace {
-
-int check_signature( lua_State* L, const char* fun, const char* sig)
-{
- using cnrun::stilton::str::sasprintf;
-
- size_t siglen = strlen(sig),
- nargsin = lua_gettop( L);
- if ( nargsin != siglen ) {
- lua_pushnil(L);
- lua_pushstring(L, sasprintf("%s: Expected %zu arg(s), got %zu", fun, siglen, nargsin).c_str());
- return -1;
- }
-
- for ( size_t i = 0; i < siglen; ++i )
- switch ( sig[i] ) {
- case 's':
- if ( !lua_isstring( L, i) ) {
- lua_pushnil(L);
- lua_pushstring( L, sasprintf( "%s(\"%s\"): Expected a string arg at position %zu", fun, sig, i).c_str());
- return i + 1;
- }
- case 'g':
- case 'd':
- if ( !lua_isnumber( L, i) ) {
- lua_pushnil(L);
- lua_pushstring( L, sasprintf( "%s(\"%s\"): Expected a numeric arg at position %zu", fun, sig, i).c_str());
- return i + 1;
- }
- case 'p':
- if ( !lua_islightuserdata( L, i) ) {
- lua_pushnil(L);
- lua_pushstring( L, sasprintf( "%s(\"%s\"): Expected a light user data arg at position %zu", fun, sig, i).c_str());
- return i + 1;
- }
- }
- return 0;
-}
-
-const int TWO_ARGS_FOR_ERROR = 2;
-
-int cn_get_context( lua_State *L)
-{
- if ( check_signature( L, "cn_get_context", "") )
- return TWO_ARGS_FOR_ERROR;
-
- auto ctx = new CHost (SHostOptions ());
- lua_pushinteger( L, 1);
- lua_pushlightuserdata( L, ctx);
- return 2;
-}
-
-int cn_new_model( lua_State *L)
-{
- if ( check_signature( L, "cn_new_model", "ps") )
- return TWO_ARGS_FOR_ERROR;
-
- auto& C = *(CHost*)lua_topointer( L, 1);
- const char* model_name = lua_tostring( L, 2);
-
- if ( C.have_model( model_name) )
- return lua_pushnil(L),
- lua_pushstring(L, sasprintf( "cn_new_model(): Model named %s already exists", model_name).c_str()),
- TWO_ARGS_FOR_ERROR;
-
- auto M = new CModel(
- model_name,
- new CIntegrateRK65(
- C.options.integration_dt_min,
- C.options.integration_dt_max,
- C.options.integration_dt_cap),
- C.options);
- if ( !M )
- return lua_pushnil(L),
- lua_pushstring(L, sasprintf( "cn_new_model(): Failed to create a model (%s)", model_name).c_str()),
- TWO_ARGS_FOR_ERROR;
-
- C.new_model(*M);
-
- return lua_pushinteger( L, 1),
- lua_pushlightuserdata( L, M),
- 2;
-}
-
-
-int cn_list_models( lua_State *L)
-{
- if ( check_signature( L, "cn_list_models", "p") )
- return TWO_ARGS_FOR_ERROR;
-
- auto& C = *(CHost*)lua_topointer( L, 1);
-
- lua_pushinteger( L, 1);
- auto MM = C.list_models();
- for ( auto& M : MM )
- lua_pushstring( L, M);
- return MM.size() + 1;
-}
-
-
-const struct luaL_Reg cnlib [] = {
- {"cn_get_context", cn_get_context},
- {"cn_new_model", cn_new_model},
- {"cn_list_models", cn_list_models},
- {NULL, NULL}
-};
-
-}
-
-
-extern "C" {
-
-int luaopen_libcn( lua_State *L)
-{
-#ifdef HAVE_LUA_51
- printf( "register cnlib\n");
- luaL_register(L, "cnlib", cnlib_funtable, 0);
-#else // this must be 5.2
- printf( "newlib cnlib\n");
- luaL_newlib(L, cnlib);
-#endif
- return 1;
-}
-
-}
-
-
-// Local Variables:
-// Mode: c++
-// indent-tabs-mode: nil
-// tab-width: 8
-// c-basic-offset: 8
-// End:
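
For context, the module removed here was loadable directly from Lua: the install hook in the deleted libcnlua Makefile.am symlinks the library as libcn.so under the Lua C-module directory, so (assuming that directory is on package.cpath) a script could do, minimally, something like the following; the model name "m1" is illustrative:

    local cn = require("libcn")             -- resolves to luaopen_libcn above
    local rc, ctx = cn.cn_get_context()     -- rc == 1, ctx is the opaque CHost handle
    local rc2, mdl = cn.cn_new_model(ctx, "m1")
    print(cn.cn_list_models(ctx))           -- status code followed by the model names
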
diff --git a/upstream/src/libcnlua/.gitignore b/upstream/src/libcnrun-lua/.gitignore
similarity index 100%
rename from upstream/src/libcnlua/.gitignore
rename to upstream/src/libcnrun-lua/.gitignore
diff --git a/upstream/src/libcnrun-lua/Makefile.am b/upstream/src/libcnrun-lua/Makefile.am
new file mode 100644
index 0000000..ad671c4
--- /dev/null
+++ b/upstream/src/libcnrun-lua/Makefile.am
@@ -0,0 +1,15 @@
+include $(top_srcdir)/src/Common.mk
+AM_CXXFLAGS += $(LUA_INCLUDE)
+
+lib_LTLIBRARIES := \
+ libcnrun-lua.la
+libcnrun_lua_la_SOURCES := \
+ commands.cc cnhost.hh
+libcnrun_lua_la_LIBADD := \
+ ../libcnrun/libcnrun.la \
+ ../libstilton/libstilton.la \
+ $(LIBCN_LIBS) \
+ $(LUA_LIB)
+libcnrun_lua_la_LDFLAGS := \
+        -module -shared -avoid-version \
+        -rpath $(libdir)/$(PACKAGE)  # was: -version-info $(subst .,:,$(PACKAGE_VERSION))
diff --git a/upstream/src/libcnlua/cnhost.hh b/upstream/src/libcnrun-lua/cnhost.hh
similarity index 57%
rename from upstream/src/libcnlua/cnhost.hh
rename to upstream/src/libcnrun-lua/cnhost.hh
index 7c21eff..7ac327b 100644
--- a/upstream/src/libcnlua/cnhost.hh
+++ b/upstream/src/libcnrun-lua/cnhost.hh
@@ -6,7 +6,7 @@
*
* Purpose: C host side for cn lua library
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_CNRUN_CNHOST_H_
@@ -22,7 +22,7 @@ extern "C" {
#include <lua.h>
}
-#include "libcn/model.hh"
+#include "libcnrun/model.hh"
namespace cnrun {
@@ -49,7 +49,7 @@ class CHost
CHost (const SHostOptions& rv)
: options (rv)
{}
- ~CHost ()
+ virtual ~CHost ()
{
for ( auto& m : models )
delete m.second;
@@ -87,45 +87,45 @@ class CHost
delete models[name];
models.erase( name);
}
- // SCmdResult cmd_new_model( const TArgs&);
- // SCmdResult cmd_delete_model( const TArgs&);
- // SCmdResult cmd_import_nml( const TArgs&);
- // SCmdResult cmd_export_nml( const TArgs&);
- // SCmdResult cmd_reset_model( const TArgs&);
- // SCmdResult cmd_cull_deaf_synapses( const TArgs&);
- // SCmdResult cmd_describe_model( const TArgs&);
- // SCmdResult cmd_get_model_parameter( const TArgs&);
- // SCmdResult cmd_set_model_parameter( const TArgs&);
- // SCmdResult cmd_advance( const TArgs&);
- // SCmdResult cmd_advance_until( const TArgs&);
-
- // SCmdResult cmd_new_neuron( const TArgs&);
- // SCmdResult cmd_new_synapse( const TArgs&);
- // SCmdResult cmd_get_unit_properties( const TArgs&);
- // SCmdResult cmd_get_unit_parameter( const TArgs&);
- // SCmdResult cmd_set_unit_parameter( const TArgs&);
- // SCmdResult cmd_get_unit_vars( const TArgs&);
- // SCmdResult cmd_reset_unit( const TArgs&);
-
- // SCmdResult cmd_get_units_matching( const TArgs&);
- // SCmdResult cmd_get_units_of_type( const TArgs&);
- // SCmdResult cmd_set_matching_neuron_parameter( const TArgs&);
- // SCmdResult cmd_set_matching_synapse_parameter( const TArgs&);
- // SCmdResult cmd_revert_matching_unit_parameters( const TArgs&);
- // SCmdResult cmd_decimate( const TArgs&);
- // SCmdResult cmd_putout( const TArgs&);
-
- // SCmdResult cmd_new_tape_source( const TArgs&);
- // SCmdResult cmd_new_periodic_source( const TArgs&);
- // SCmdResult cmd_new_noise_source( const TArgs&);
- // SCmdResult cmd_get_sources( const TArgs&);
- // SCmdResult cmd_connect_source( const TArgs&);
- // SCmdResult cmd_disconnect_source( const TArgs&);
-
- // SCmdResult cmd_start_listen( const TArgs&);
- // SCmdResult cmd_stop_listen( const TArgs&);
- // SCmdResult cmd_start_log_spikes( const TArgs&);
- // SCmdResult cmd_stop_log_spikes( const TArgs&);
+ // cmd_new_model( const TArgs&);
+ // cmd_delete_model( const TArgs&);
+ // cmd_import_nml( const TArgs&);
+ // cmd_export_nml( const TArgs&);
+ // cmd_reset_model( const TArgs&);
+ // cmd_cull_deaf_synapses( const TArgs&);
+ // cmd_describe_model( const TArgs&);
+ // cmd_get_model_parameter( const TArgs&);
+ // cmd_set_model_parameter( const TArgs&);
+ // cmd_advance( const TArgs&);
+ // cmd_advance_until( const TArgs&);
+
+ // cmd_new_neuron( const TArgs&);
+ // cmd_new_synapse( const TArgs&);
+ // cmd_get_unit_properties( const TArgs&);
+ // cmd_get_unit_parameter( const TArgs&);
+ // cmd_set_unit_parameter( const TArgs&);
+ // cmd_get_unit_vars( const TArgs&);
+ // cmd_reset_unit( const TArgs&);
+
+ // cmd_get_units_matching( const TArgs&);
+ // cmd_get_units_of_type( const TArgs&);
+ // cmd_set_matching_neuron_parameter( const TArgs&);
+ // cmd_set_matching_synapse_parameter( const TArgs&);
+ // cmd_revert_matching_unit_parameters( const TArgs&);
+ // cmd_decimate( const TArgs&);
+ // cmd_putout( const TArgs&);
+
+ // cmd_new_tape_source( const TArgs&);
+ // cmd_new_periodic_source( const TArgs&);
+ // cmd_new_noise_source( const TArgs&);
+ // cmd_get_sources( const TArgs&);
+ // cmd_connect_source( const TArgs&);
+ // cmd_disconnect_source( const TArgs&);
+
+ // cmd_start_listen( const TArgs&);
+ // cmd_stop_listen( const TArgs&);
+ // cmd_start_log_spikes( const TArgs&);
+ // cmd_stop_log_spikes( const TArgs&);
// vp
int verbose_threshold() const
diff --git a/upstream/src/libcnrun-lua/commands.cc b/upstream/src/libcnrun-lua/commands.cc
new file mode 100644
index 0000000..a03f5fa
--- /dev/null
+++ b/upstream/src/libcnrun-lua/commands.cc
@@ -0,0 +1,1027 @@
+/*
+ * File name: libcnrun-lua/commands.cc
+ * Project: cnrun
+ * Author: Andrei Zavada <johnhommer at gmail.com>
+ * building on original work by Thomas Nowotny <tnowotny at ucsd.edu>
+ * Initial version: 2014-10-09
+ *
+ * Purpose: libcn and some host-side state, exported for use in your lua code.
+ *
+ * License: GPL-2+
+ */
+
+#if HAVE_CONFIG_H && !defined(VERSION)
+# include "config.h"
+#endif
+
+extern "C" {
+#include <lua.h>
+#include <lualib.h>
+#include <lauxlib.h>
+}
+
+#include "libstilton/string.hh"
+#include "cnhost.hh"
+
+using namespace std;
+using namespace cnrun;
+
+namespace {
+
+// supporting functions:
+
+const int TWO_ARGS_FOR_ERROR = 2;
+
+int make_error( lua_State *L, const char *fmt, ...) __attribute__ ((format (printf, 2, 3)));
+int make_error( lua_State *L, const char *fmt, ...)
+{
+ va_list ap;
+ va_start (ap, fmt);
+ auto s = stilton::str::svasprintf( fmt, ap);
+ va_end (ap);
+
+ return lua_pushnil(L),
+ lua_pushstring(L, s.c_str()),
+ TWO_ARGS_FOR_ERROR;
+}
+
+int check_signature( lua_State* L, const char* fun, const char* sig)
+{
+ size_t siglen = strlen(sig),
+ nargsin = lua_gettop( L);
+ if ( nargsin != siglen )
+ return make_error(
+ L, "%s: Expected %zu arg(s), got %zu",
+ fun, siglen, nargsin);
+
+ for ( size_t i = 1; i <= siglen; ++i )
+ switch ( sig[i-1] ) {
+ case 's':
+ if ( !lua_isstring( L, i) )
+ return make_error(
+ L, "%s(\"%s\"): Expected a string arg at position %zu",
+ fun, sig, i);
+ break;
+ case 'g':
+ case 'd':
+ if ( !lua_isnumber( L, i) )
+ return make_error(
+ L, "%s(\"%s\"): Expected a numeric arg at position %zu",
+ fun, sig, i);
+ break;
+ case 'p':
+ if ( !lua_islightuserdata( L, i) )
+ return make_error(
+ L, "%s(\"%s\"): Expected a light user data arg at position %zu",
+ fun, sig, i);
+ break;
+ case 'b':
+ if ( !lua_isboolean( L, i) )
+ return make_error(
+ L, "%s(\"%s\"): Expected a boolean arg at position %zu",
+ fun, sig, i);
+ break;
+ }
+ return 0;
+}
+
+}
+
+
+// here be the commands:
+namespace {
+
+#define INTRO_CHECK_SIG(sig) \
+ if ( check_signature( L, __FUNCTION__, sig) ) \
+ return TWO_ARGS_FOR_ERROR;
+
+int cn_get_context( lua_State *L)
+{
+ INTRO_CHECK_SIG("");
+
+ auto Cp = new CHost (SHostOptions ());
+
+ return lua_pushinteger( L, 1),
+ lua_pushlightuserdata( L, Cp),
+ 2;
+}
+
+#define INTRO_WITH_CONTEXT(sig) \
+ INTRO_CHECK_SIG(sig) \
+ auto& C = *(CHost*)lua_topointer( L, 1);
+
+int cn_drop_context( lua_State *L)
+{
+ INTRO_WITH_CONTEXT("p");
+
+ delete &C; // come what may
+
+ return lua_pushinteger( L, 1),
+ lua_pushstring( L, "fafa"),
+ 2;
+}
+
+
+#define INTRO_WITH_MODEL_NAME(sig) \
+ INTRO_WITH_CONTEXT(sig) \
+ const char* model_name = lua_tostring( L, 2);
+
+#define VOID_RETURN \
+ return lua_pushinteger( L, 1), \
+ lua_pushstring( L, model_name), \
+ 2;
+
+#define NUMVAL_RETURN(v) \
+ return lua_pushinteger( L, 1), \
+ lua_pushnumber( L, v), \
+ 2;
+
+int cn_new_model( lua_State *L)
+{
+ INTRO_WITH_MODEL_NAME("ps");
+
+ if ( C.have_model( model_name) )
+ return make_error(
+ L, "%s(): Model named %s already exists",
+ __FUNCTION__, model_name);
+
+ auto M = new CModel(
+ model_name,
+ new CIntegrateRK65(
+ C.options.integration_dt_min,
+ C.options.integration_dt_max,
+ C.options.integration_dt_cap),
+ C.options);
+ if ( !M )
+ return make_error(
+ L, "%s(): Failed to create a model (%s)",
+ __FUNCTION__, model_name);
+
+ C.new_model(*M);
+
+ return lua_pushinteger( L, 1),
+ lua_pushlightuserdata( L, M),
+ 2;
+}
+
+
+int cn_delete_model( lua_State *L)
+{
+ INTRO_WITH_MODEL_NAME("ps");
+
+ C.del_model( model_name);
+
+ VOID_RETURN;
+}
+
+
+int cn_list_models( lua_State *L)
+{
+ INTRO_WITH_CONTEXT("p");
+
+ lua_pushinteger( L, 1);
+ auto MM = C.list_models();
+ for ( auto& M : MM )
+ lua_pushstring( L, M);
+ return lua_pushinteger( L, MM.size() + 1),
+ 2;
+}
+
+
+#define INTRO_WITH_MODEL(sig) \
+ INTRO_WITH_MODEL_NAME(sig) \
+ if ( not C.have_model( model_name) ) \
+ return make_error( \
+ L, "%s(): No model named %s", \
+ __FUNCTION__, model_name); \
+ auto& M = *C.get_model(model_name);
+
+int cn_import_nml( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char* fname = lua_tostring( L, 3);
+ string fname2 = stilton::str::tilda2homedir(fname);
+
+ int ret = M.import_NetworkML( fname2, CModel::TNMLImportOption::merge);
+ if ( ret < 0 )
+ return make_error(
+ L, "%s(%s): NML import failed from \"%s\" (%d)",
+ __FUNCTION__, model_name, fname, ret);
+
+ M.cull_blind_synapses();
+
+ VOID_RETURN;
+}
+
+
+int cn_export_nml( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char* fname = lua_tostring( L, 3);
+ string fname2 = stilton::str::tilda2homedir(fname);
+
+ if ( M.export_NetworkML( fname2) )
+ return make_error(
+ L, "%s(%s): NML export failed to \"%s\"",
+ __FUNCTION__, model_name, fname);
+
+ VOID_RETURN;
+}
+
+
+int cn_reset_model( lua_State *L)
+{
+ INTRO_WITH_MODEL("ps");
+
+ M.reset( CModel::TResetOption::no_params);
+ // for with_params, there is revert_unit_parameters()
+
+ VOID_RETURN;
+}
+
+
+int cn_cull_deaf_synapses( lua_State *L)
+{
+ INTRO_WITH_MODEL("ps");
+
+ M.cull_deaf_synapses();
+
+ VOID_RETURN;
+}
+
+
+int cn_describe_model( lua_State *L)
+{
+ INTRO_WITH_MODEL("ps");
+
+ M.dump_metrics();
+ M.dump_units();
+ M.dump_state();
+
+ VOID_RETURN;
+}
+
+
+int cn_get_model_parameter( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const string
+ P (lua_tostring( L, 3));
+
+ double g = NAN;
+ string s;
+ if ( P == "verbosely" ) {
+ g = M.options.verbosely;
+ } else if ( P == "integration_dt_min" ) {
+ g = M.dt_min();
+ } else if ( P == "integration_dt_max" ) {
+ g = M.dt_min();
+ } else if ( P == "integration_dt_cap" ) {
+ g = M.dt_min();
+ } else if ( P == "listen_dt" ) {
+ g = M.options.listen_dt;
+ } else if ( P == "listen_mode" ) {
+ auto F = [] (bool v) -> char { return v ? '+' : '-'; };
+ s = stilton::str::sasprintf(
+ "1%cd%cb%c",
+ F(M.options.listen_1varonly),
+ F(M.options.listen_deferwrite),
+ F(M.options.listen_binary));
+ } else if ( P == "sxf_start_delay" ) {
+ g = M.options.sxf_start_delay;
+ } else if ( P == "sxf_period" ) {
+ g = M.options.sxf_period;
+ } else if ( P == "sdf_sigma" ) {
+ g = M.options.sdf_sigma;
+ } else
+ return make_error(
+ L, "%s(%s): Unrecognized parameter: %s",
+ __FUNCTION__, model_name, P.c_str());
+
+ return lua_pushinteger( L, 1),
+ s.empty() ? lua_pushnumber( L, g) : (void)lua_pushstring( L, s.c_str()),
+ 2;
+}
+
+
+int cn_set_model_parameter( lua_State *L)
+{
+ INTRO_WITH_MODEL("psss");
+
+ const char
+ *P = lua_tostring( L, 3),
+ *V = lua_tostring( L, 4);
+
+#define ERR_RETURN \
+ return make_error( \
+ L, "%s(%s): bad value for parameter `%s'", \
+ __FUNCTION__, model_name, P)
+
+ if ( 0 == strcmp( P, "verbosely") ) {
+ int v;
+ if ( 1 != sscanf( V, "%d", &v) )
+ ERR_RETURN;
+ C.options.verbosely = M.options.verbosely = v;
+
+ } else if ( 0 == strcmp( P, "integration_dt_min") ) {
+ double v;
+ if ( 1 != sscanf( V, "%lg", &v) )
+ ERR_RETURN;
+ M.set_dt_min( C.options.integration_dt_min = v);
+
+ } else if ( 0 == strcmp( P, "integration_dt_max" ) ) {
+ double v;
+ if ( 1 != sscanf( V, "%lg", &v) )
+ ERR_RETURN;
+ M.set_dt_max( C.options.integration_dt_max = v);
+
+ } else if ( 0 == strcmp( P, "integration_dt_cap" ) ) {
+ double v;
+ if ( 1 != sscanf( V, "%lg", &v) )
+ ERR_RETURN;
+ M.set_dt_cap( C.options.integration_dt_cap = v);
+
+ } else if ( 0 == strcmp( P, "listen_dt") ) {
+ double v;
+ if ( 1 != sscanf( V, "%lg", &v) )
+ ERR_RETURN;
+ C.options.listen_dt = M.options.listen_dt = v;
+
+ } else if ( 0 == strcmp( P, "listen_mode" ) ) {
+ const char *p;
+ if ( (p = strchr( V, '1')) )
+ M.options.listen_1varonly = C.options.listen_1varonly = (*(p+1) != '-');
+ if ( (p = strchr( V, 'd')) )
+ M.options.listen_deferwrite = C.options.listen_deferwrite = (*(p+1) != '-');
+ if ( (p = strchr( V, 'b')) )
+ M.options.listen_binary = C.options.listen_binary = (*(p+1) != '-');
+ // better spell out these parameters, ffs
+
+ } else if ( 0 == strcmp( P, "sxf_start_delay" ) ) {
+ double v;
+ if ( 1 != sscanf( V, "%lg", &v) )
+ ERR_RETURN;
+ C.options.sxf_start_delay = M.options.sxf_start_delay = v;
+
+ } else if ( 0 == strcmp( P, "sxf_period" ) ) {
+ double v;
+ if ( 1 != sscanf( V, "%lg", &v) )
+ ERR_RETURN;
+ C.options.sxf_period = M.options.sxf_period = v;
+
+ } else if ( 0 == strcmp( P, "sdf_sigma" ) ) {
+ double v;
+ if ( 1 != sscanf( V, "%lg", &v) )
+ ERR_RETURN;
+ C.options.sdf_sigma = M.options.sdf_sigma = v;
+ }
+#undef ERR_RETURN
+
+ VOID_RETURN;
+}
+
+
+int cn_advance( lua_State *L)
+{
+ INTRO_WITH_MODEL("psg");
+
+ const double time_to_go = lua_tonumber( L, 3);
+ const double end_time = M.model_time() + time_to_go;
+ if ( M.model_time() > end_time )
+ return make_error(
+ L, "%s(%s): Cannot go back in time (model is now at %g sec)",
+ __FUNCTION__, model_name, M.model_time());
+ if ( !M.advance( time_to_go) )
+ return make_error(
+ L, "%s(%s): Failed to advance",
+ __FUNCTION__, model_name);
+
+ VOID_RETURN;
+}
+
+
+int cn_advance_until( lua_State *L)
+{
+        INTRO_WITH_MODEL("psg");
+
+ const double end_time = lua_tonumber( L, 3);
+ if ( M.model_time() > end_time )
+ return make_error(
+ L, "%s(%s): Cannot go back in time (model is now at %g sec)",
+ __FUNCTION__, model_name, M.model_time());
+ if ( !M.advance( end_time) )
+ return make_error(
+ L, "%s(%s): Failed to advance",
+ __FUNCTION__, model_name);
+
+ VOID_RETURN;
+}
+
+
+// ----------------------------------------
+
+int cn_new_neuron( lua_State *L)
+{
+ INTRO_WITH_MODEL("psss");
+
+ const char
+ *type = lua_tostring( L, 3),
+ *label = lua_tostring( L, 4);
+
+ if ( !M.add_neuron_species(
+ type, label,
+ TIncludeOption::is_last) )
+ return make_error(
+ L, "%s(%s): error", __FUNCTION__, model_name);
+
+ VOID_RETURN;
+}
+
+
+int cn_new_synapse( lua_State *L)
+{
+ INTRO_WITH_MODEL("pssssg");
+
+ const char
+ *type = lua_tostring( L, 3),
+ *src = lua_tostring( L, 4),
+ *tgt = lua_tostring( L, 5);
+ const double
+ g = lua_tonumber( L, 6);
+
+ if ( !M.add_synapse_species(
+ type, src, tgt, g,
+ CModel::TSynapseCloningOption::yes,
+ TIncludeOption::is_last) )
+ return make_error(
+ L, "%s(%s): error", __FUNCTION__, model_name);
+
+ VOID_RETURN;
+}
+
+
+int cn_get_unit_properties( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *label = lua_tostring( L, 3);
+ auto Up = M.unit_by_label(label);
+ if ( Up )
+ return lua_pushnumber( L, 1),
+ lua_pushstring( L, Up->label()),
+ lua_pushstring( L, Up->class_name()),
+ lua_pushstring( L, Up->family()),
+ lua_pushstring( L, Up->species()),
+ lua_pushboolean( L, Up->has_sources()),
+ lua_pushboolean( L, Up->is_not_altered()),
+ 7;
+ else
+ return make_error(
+ L, "%s(%s): No such unit: %s",
+ __FUNCTION__, model_name, label);
+}
+
+
+int cn_get_unit_parameter( lua_State *L)
+{
+ INTRO_WITH_MODEL("psss");
+
+ const char
+ *label = lua_tostring( L, 3),
+ *param = lua_tostring( L, 4);
+ auto Up = M.unit_by_label(label);
+ if ( !Up )
+ return make_error(
+ L, "%s(%s): No such unit: %s",
+ __FUNCTION__, model_name, label);
+ try {
+ return lua_pushinteger( L, 1),
+ lua_pushnumber( L, Up->get_param_value( param)),
+ 2;
+ } catch (exception& ex) {
+ return make_error(
+ L, "%s(%s): Unit %s (type %s) has no parameter named %s",
+ __FUNCTION__, model_name, label, Up->class_name(), param);
+ }
+}
+
+
+int cn_set_unit_parameter( lua_State *L)
+{
+ INTRO_WITH_MODEL("psssg");
+
+ const char
+ *label = lua_tostring( L, 3),
+ *param = lua_tostring( L, 4);
+ const double
+ value = lua_tonumber( L, 5);
+ auto Up = M.unit_by_label(label);
+ if ( !Up )
+ return make_error(
+ L, "%s(%s): No such unit: %s",
+ __FUNCTION__, model_name, label);
+ try {
+ Up->param_value( param) = value;
+ } catch (exception& ex) {
+ return make_error(
+ L, "%s(%s): Unit %s (type %s) has no parameter named %s",
+ __FUNCTION__, model_name, label, Up->class_name(), param);
+ }
+
+ VOID_RETURN;
+}
+
+
+int cn_get_unit_vars( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *label = lua_tostring( L, 3);
+ auto Up = M.unit_by_label(label);
+ if ( !Up )
+ return make_error(
+ L, "%s(%s): No such unit: %s",
+ __FUNCTION__, model_name, label);
+
+ lua_pushnumber( L, 1);
+ for ( size_t i = 0; i < Up->v_no(); ++i )
+ lua_pushnumber( L, Up->get_var_value(i));
+ return Up->v_no() + 1;
+}
+
+
+int cn_reset_unit( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *label = lua_tostring( L, 3);
+ auto Up = M.unit_by_label(label);
+ if ( !Up )
+ return make_error(
+ L, "%s(%s): No such unit: %s",
+ __FUNCTION__, model_name, label);
+
+ Up -> reset_state();
+
+ VOID_RETURN;
+}
+
+
+// ----------------------------------------
+
+int cn_get_units_matching( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *pattern = lua_tostring( L, 3);
+ auto UU = M.list_units( pattern);
+ lua_pushinteger( L, 1);
+ for ( auto& U : UU )
+ lua_pushstring( L, U->label());
+ return UU.size() + 1;
+}
+
+
+int cn_get_units_of_type( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *type = lua_tostring( L, 3);
+ auto UU = M.list_units();
+ lua_pushinteger( L, 1);
+ for ( auto& U : UU )
+ if ( strcmp( U->species(), type) == 0 )
+ lua_pushstring( L, U->label());
+ return UU.size() + 1;
+}
+
+
+int cn_set_matching_neuron_parameter( lua_State *L)
+{
+ INTRO_WITH_MODEL("psssg");
+
+ const char
+ *pattern = lua_tostring( L, 3),
+ *param = lua_tostring( L, 4);
+ const double
+ value = lua_tonumber( L, 5);
+ list<CModel::STagGroupNeuronParmSet> tags {
+ CModel::STagGroupNeuronParmSet (pattern, param, value)};
+ int count_set = M.process_paramset_static_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, count_set),
+ 2;
+}
+
+
+int cn_set_matching_synapse_parameter( lua_State *L)
+{
+ INTRO_WITH_MODEL("pssssg");
+
+ const char
+ *pat_src = lua_tostring( L, 3),
+ *pat_tgt = lua_tostring( L, 4),
+ *param = lua_tostring( L, 5);
+ const double
+ value = lua_tonumber( L, 6);
+
+ list<CModel::STagGroupSynapseParmSet> tags {
+ CModel::STagGroupSynapseParmSet (pat_src, pat_tgt, param, value)};
+ int count_set = M.process_paramset_static_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, count_set),
+ 2;
+}
+
+
+int cn_revert_matching_unit_parameters( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *pattern = lua_tostring( L, 3);
+
+ auto UU = M.list_units( pattern);
+ for ( auto& U : UU )
+ U->reset_params();
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, UU.size()),
+ 2;
+}
+
+
+int cn_decimate( lua_State *L)
+{
+ INTRO_WITH_MODEL("pssg");
+
+ const char
+ *pattern = lua_tostring( L, 3);
+ const double
+ frac = lua_tonumber( L, 4);
+ if ( frac < 0. || frac > 1. )
+ return make_error(
+ L, "%s(%s): Decimation fraction (%g) outside [0..1]\n",
+ __FUNCTION__, model_name, frac);
+
+ list<CModel::STagGroupDecimate> tags {{pattern, frac}};
+ int affected = M.process_decimate_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, affected),
+ 2;
+}
+
+
+int cn_putout( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *pattern = lua_tostring( L, 3);
+
+ list<CModel::STagGroup> tags {{pattern, CModel::STagGroup::TInvertOption::no}};
+ int affected = M.process_putout_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, affected),
+ 2;
+}
+
+
+// ----------------------------------------
+
+int cn_new_tape_source( lua_State *L)
+{
+ INTRO_WITH_MODEL("psssb");
+
+ const char
+ *name = lua_tostring( L, 3),
+ *fname = lua_tostring( L, 4);
+ const bool
+ looping = lua_toboolean( L, 5);
+
+ if ( M.source_by_id( name) )
+ return make_error(
+ L, "%s(%s): Tape source \"%s\" already exists",
+ __FUNCTION__, model_name, name);
+
+ try {
+ auto source = new CSourceTape(
+ name, fname,
+ looping ? TSourceLoopingOption::yes : TSourceLoopingOption::no);
+ if ( source )
+ M.add_source( source);
+ else
+ return make_error(
+ L, "%s(%s): reading from %s, bad data",
+ __FUNCTION__, model_name, fname);
+ } catch (exception& ex) {
+ return make_error(
+ L, "%s(%s): %s, %s: %s",
+ __FUNCTION__, model_name, name, fname, ex.what());
+ }
+
+ VOID_RETURN;
+}
+
+
+int cn_new_periodic_source( lua_State *L)
+{
+ INTRO_WITH_MODEL("psssbg");
+
+ const char
+ *name = lua_tostring( L, 3),
+ *fname = lua_tostring( L, 4);
+ const bool
+ looping = lua_toboolean( L, 5);
+ const double
+ period = lua_tonumber( L, 6);
+
+ if ( M.source_by_id( name) )
+ return make_error(
+ L, "%s(%s): Periodic source \"%s\" already exists",
+ __FUNCTION__, model_name, name);
+
+ try {
+ auto source = new CSourcePeriodic(
+ name, fname,
+ looping ? TSourceLoopingOption::yes : TSourceLoopingOption::no,
+ period);
+ if ( source )
+ M.add_source( source);
+ else
+ return make_error(
+ L, "%s(%s): reading from %s, bad data",
+ __FUNCTION__, model_name, fname);
+ } catch (exception& ex) {
+ return make_error(
+ L, "%s(%s): %s, %s: %s",
+ __FUNCTION__, model_name, name, fname, ex.what());
+ }
+
+ VOID_RETURN;
+}
+
+
+int cn_new_noise_source( lua_State *L)
+{
+ INTRO_WITH_MODEL("pssgggs");
+
+ const char
+ *name = lua_tostring( L, 3);
+ const double
+ &min = lua_tonumber( L, 4),
+ &max = lua_tonumber( L, 5),
+ &sigma = lua_tonumber( L, 6);
+ const string
+ &distribution = lua_tostring( L, 7);
+
+ if ( M.source_by_id( name) )
+ return make_error(
+ L, "%s(%s): Noise source \"%s\" already exists",
+ __FUNCTION__, model_name, name);
+
+ try {
+ auto source = new CSourceNoise(
+ name, min, max, sigma, CSourceNoise::distribution_by_name(distribution));
+ if ( source )
+ M.add_source( source);
+ else
+ return make_error(
+ L, "%s(%s): bad data",
+ __FUNCTION__, model_name);
+ } catch (exception& ex) {
+ return make_error(
+ L, "%s(%s): %s: %s",
+ __FUNCTION__, model_name, name, ex.what());
+ }
+
+ VOID_RETURN;
+}
+
+
+int cn_get_sources( lua_State *L)
+{
+ INTRO_WITH_MODEL("ps");
+
+ lua_pushinteger( L, 1);
+ for ( auto& S : M.sources() )
+ lua_pushstring( L, S->name());
+ return M.sources().size() + 1;
+}
+
+
+int cn_connect_source( lua_State *L)
+{
+ INTRO_WITH_MODEL("pssss");
+
+ const char
+ *label = lua_tostring( L, 3),
+ *parm = lua_tostring( L, 4),
+ *source = lua_tostring( L, 5);
+ C_BaseSource *S = M.source_by_id( source);
+ if ( !S )
+ return make_error(
+ L, "%s(%s): No such stimulation source: %s",
+ __FUNCTION__, model_name, source);
+ // cannot check whether units matching label indeed have a parameter so named
+ list<CModel::STagGroupSource> tags {
+ {label, parm, S, CModel::STagGroup::TInvertOption::no}};
+ int affected = M.process_paramset_source_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, affected),
+ 2;
+}
+
+
+int cn_disconnect_source( lua_State *L)
+{
+ INTRO_WITH_MODEL("pssss");
+
+ const char
+ *label = lua_tostring( L, 3),
+ *parm = lua_tostring( L, 4),
+ *source = lua_tostring( L, 5);
+ C_BaseSource *S = M.source_by_id( source);
+ if ( !S )
+ return make_error(
+ L, "%s(%s): No such stimulation source: %s",
+ __FUNCTION__, model_name, source);
+ // cannot check whether units matching label indeed have a parameter so named
+ list<CModel::STagGroupSource> tags {
+ {label, parm, S, CModel::STagGroup::TInvertOption::yes}};
+ int affected = M.process_paramset_source_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, affected),
+ 2;
+}
+
+
+// ----------------------------------------
+
+int cn_start_listen( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *pattern = lua_tostring( L, 3);
+ list<CModel::STagGroupListener> tags {
+ CModel::STagGroupListener (
+ pattern, (0
+ | (M.options.listen_1varonly ? CN_ULISTENING_1VARONLY : 0)
+ | (M.options.listen_deferwrite ? CN_ULISTENING_DEFERWRITE : 0)
+ | (M.options.listen_binary ? CN_ULISTENING_BINARY : CN_ULISTENING_DISK)),
+ CModel::STagGroup::TInvertOption::no)};
+ int affected = M.process_listener_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, affected),
+ 2;
+}
+
+
+int cn_stop_listen( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *pattern = lua_tostring( L, 3);
+ list<CModel::STagGroupListener> tags {
+ CModel::STagGroupListener (
+ pattern, (0
+ | (M.options.listen_1varonly ? CN_ULISTENING_1VARONLY : 0)
+ | (M.options.listen_deferwrite ? CN_ULISTENING_DEFERWRITE : 0)
+ | (M.options.listen_binary ? CN_ULISTENING_BINARY : CN_ULISTENING_DISK)),
+ CModel::STagGroup::TInvertOption::yes)};
+ int affected = M.process_listener_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, affected),
+ 2;
+}
+
+
+int cn_start_log_spikes( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *pattern = lua_tostring( L, 3);
+ list<CModel::STagGroupSpikelogger> tags {{
+ pattern,
+ M.options.sxf_period, M.options.sdf_sigma, M.options.sxf_start_delay,
+ CModel::STagGroup::TInvertOption::no}};
+ int affected = M.process_spikelogger_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, affected),
+ 2;
+}
+
+
+int cn_stop_log_spikes( lua_State *L)
+{
+ INTRO_WITH_MODEL("pss");
+
+ const char
+ *pattern = lua_tostring( L, 3);
+ list<CModel::STagGroupSpikelogger> tags {{
+ pattern,
+ M.options.sxf_period, M.options.sdf_sigma, M.options.sxf_start_delay,
+ CModel::STagGroup::TInvertOption::yes}};
+ int affected = M.process_spikelogger_tags( tags);
+
+ return lua_pushinteger( L, 1),
+ lua_pushinteger( L, affected),
+ 2;
+}
+
+
+// all together now:
+const struct luaL_Reg cnlib [] = {
+#define BLOOP(X) {#X, X}
+ BLOOP(cn_get_context),
+ BLOOP(cn_drop_context),
+ BLOOP(cn_new_model),
+ BLOOP(cn_delete_model),
+ BLOOP(cn_list_models),
+ BLOOP(cn_import_nml),
+ BLOOP(cn_export_nml),
+ BLOOP(cn_reset_model),
+ BLOOP(cn_cull_deaf_synapses),
+ BLOOP(cn_describe_model),
+ BLOOP(cn_get_model_parameter),
+ BLOOP(cn_set_model_parameter),
+ BLOOP(cn_advance),
+ BLOOP(cn_advance_until),
+
+ BLOOP(cn_new_neuron),
+ BLOOP(cn_new_synapse),
+ BLOOP(cn_get_unit_properties),
+ BLOOP(cn_get_unit_parameter),
+ BLOOP(cn_set_unit_parameter),
+ BLOOP(cn_get_unit_vars),
+ BLOOP(cn_reset_unit),
+
+ BLOOP(cn_get_units_matching),
+ BLOOP(cn_get_units_of_type),
+ BLOOP(cn_set_matching_neuron_parameter),
+ BLOOP(cn_set_matching_synapse_parameter),
+ BLOOP(cn_revert_matching_unit_parameters),
+ BLOOP(cn_decimate),
+ BLOOP(cn_putout),
+
+ BLOOP(cn_new_tape_source),
+ BLOOP(cn_new_periodic_source),
+ BLOOP(cn_new_noise_source),
+ BLOOP(cn_get_sources),
+ BLOOP(cn_connect_source),
+ BLOOP(cn_disconnect_source),
+
+ BLOOP(cn_start_listen),
+ BLOOP(cn_stop_listen),
+ BLOOP(cn_start_log_spikes),
+ BLOOP(cn_stop_log_spikes),
+#undef BLOOP
+ {NULL, NULL}
+};
+
+}
+
+
+extern "C" {
+
+int luaopen_libcn( lua_State *L)
+{
+#ifdef HAVE_LUA_51
+ printf( "register cnlib\n");
+ luaL_register(L, "cnlib", cnlib);
+#else // this must be 5.2
+ printf( "newlib cnlib\n");
+ luaL_newlib(L, cnlib);
+#endif
+ return 1;
+}
+
+}
+
+
+// Local Variables:
+// Mode: c++
+// indent-tabs-mode: nil
+// tab-width: 8
+// c-basic-offset: 8
+// End:
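A note on the stack discipline visible in this tail of commands.cc: the value-returning cn_* bindings push a leading 1 followed by their payload and return the total count, errors are routed through make_error(), and the rest end in VOID_RETURN. Below is a minimal sketch of the Lua side, assuming the table is reachable as "cnlib" (the Lua 5.1 branch of luaopen_libcn above registers it under that name); the context handle ctx, the model name "m", and the unit/parameter/distribution names are purely hypothetical:

    -- hypothetical script; argument order follows the "p s s ..." checks above
    local ok, label, klass, family, species, has_sources, unaltered =
            cnlib.cn_get_unit_properties( ctx, "m", "PN.0")
    local _, gsyn = cnlib.cn_get_unit_parameter( ctx, "m", "PN.0", "gsyn")
    cnlib.cn_set_matching_neuron_parameter( ctx, "m", "PN.*", "Idc", 0.3)
    cnlib.cn_new_noise_source( ctx, "m", "noise", 0., 1., 0.1, "gaussian")
    cnlib.cn_connect_source( ctx, "m", "PN.*", "Idc", "noise")
    cnlib.cn_start_listen( ctx, "m", "PN.*")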
diff --git a/upstream/src/libcn/Makefile.am b/upstream/src/libcnrun/Makefile.am
similarity index 55%
rename from upstream/src/libcn/Makefile.am
rename to upstream/src/libcnrun/Makefile.am
index 15e8899..fe05c9a 100644
--- a/upstream/src/libcn/Makefile.am
+++ b/upstream/src/libcnrun/Makefile.am
@@ -1,8 +1,8 @@
include $(top_srcdir)/src/Common.mk
-pkglib_LTLIBRARIES = libcn.la
+pkglib_LTLIBRARIES = libcnrun.la
-libcn_la_SOURCES = \
+libcnrun_la_SOURCES = \
forward-decls.hh \
sources.cc \
types.cc \
@@ -24,23 +24,7 @@ libcn_la_SOURCES = \
model.hh \
integrate-base.hh integrate-rk65.hh
-libcn_la_LDFLAGS = \
+libcnrun_la_LDFLAGS = \
-avoid-version \
-rpath $(libdir)/$(PACKAGE) \
-shared -module
-
-
-if DO_PCH
-BUILT_SOURCES = \
- forward-decls.hh.gch \
- sources.hh.gch \
- types.hh.gch \
- mx-attr.hh.gch \
- base-unit.hh.gch standalone-attr.hh.gch hosted-attr.hh.gch \
- base-synapse.hh.gch standalone-neurons.hh.gch hosted-neurons.hh.gch \
- base-neuron.hh.gch standalone-synapses.hh.gch hosted-synapses.hh.gch \
- model.hh.gch \
- integrate-base.hh.gch integrate-rk65.hh.gch
-
-CLEANFILES = $(BUILT_SOURCES)
-endif
diff --git a/upstream/src/libcn/base-neuron.hh b/upstream/src/libcnrun/base-neuron.hh
similarity index 99%
rename from upstream/src/libcn/base-neuron.hh
rename to upstream/src/libcnrun/base-neuron.hh
index ffc69ad..a13c634 100644
--- a/upstream/src/libcn/base-neuron.hh
+++ b/upstream/src/libcnrun/base-neuron.hh
@@ -7,7 +7,7 @@
*
* Purpose: neuron base class
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_BASENEURON_H_
diff --git a/upstream/src/libcn/base-synapse.hh b/upstream/src/libcnrun/base-synapse.hh
similarity index 98%
rename from upstream/src/libcn/base-synapse.hh
rename to upstream/src/libcnrun/base-synapse.hh
index ac98f99..864148e 100644
--- a/upstream/src/libcn/base-synapse.hh
+++ b/upstream/src/libcnrun/base-synapse.hh
@@ -7,7 +7,7 @@
*
* Purpose: synapse base class
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_BASESYNAPSE_H_
diff --git a/upstream/src/libcn/base-unit.cc b/upstream/src/libcnrun/base-unit.cc
similarity index 99%
rename from upstream/src/libcn/base-unit.cc
rename to upstream/src/libcnrun/base-unit.cc
index 03fdd67..9984dfa 100644
--- a/upstream/src/libcn/base-unit.cc
+++ b/upstream/src/libcnrun/base-unit.cc
@@ -7,7 +7,7 @@
*
* Purpose: unit base class
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/base-unit.hh b/upstream/src/libcnrun/base-unit.hh
similarity index 99%
rename from upstream/src/libcn/base-unit.hh
rename to upstream/src/libcnrun/base-unit.hh
index a60f03f..21bfbd4 100644
--- a/upstream/src/libcn/base-unit.hh
+++ b/upstream/src/libcnrun/base-unit.hh
@@ -7,7 +7,7 @@
*
* Purpose: unit base class
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_BASEUNIT_H_
diff --git a/upstream/src/libcn/forward-decls.hh b/upstream/src/libcnrun/forward-decls.hh
similarity index 96%
rename from upstream/src/libcn/forward-decls.hh
rename to upstream/src/libcnrun/forward-decls.hh
index 7c038a7..07ae859 100644
--- a/upstream/src/libcn/forward-decls.hh
+++ b/upstream/src/libcnrun/forward-decls.hh
@@ -6,7 +6,7 @@
*
* Purpose: forward declarations
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_FORWARDDECLS_H_
diff --git a/upstream/src/libcn/hosted-attr.hh b/upstream/src/libcnrun/hosted-attr.hh
similarity index 97%
rename from upstream/src/libcn/hosted-attr.hh
rename to upstream/src/libcnrun/hosted-attr.hh
index 5eca9c0..84cc6f5 100644
--- a/upstream/src/libcn/hosted-attr.hh
+++ b/upstream/src/libcnrun/hosted-attr.hh
@@ -7,7 +7,7 @@
*
* Purpose: Interface class containing hosted unit attributes.
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_HOSTEDATTR_H_
diff --git a/upstream/src/libcn/hosted-neurons.cc b/upstream/src/libcnrun/hosted-neurons.cc
similarity index 99%
rename from upstream/src/libcn/hosted-neurons.cc
rename to upstream/src/libcnrun/hosted-neurons.cc
index 7c862bf..ed082d2 100644
--- a/upstream/src/libcn/hosted-neurons.cc
+++ b/upstream/src/libcnrun/hosted-neurons.cc
@@ -8,7 +8,7 @@
* Purpose: hosted neuron classes (those having their
* state vars on parent model's integration vectors)
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/hosted-neurons.hh b/upstream/src/libcnrun/hosted-neurons.hh
similarity index 99%
rename from upstream/src/libcn/hosted-neurons.hh
rename to upstream/src/libcnrun/hosted-neurons.hh
index af57395..d77e2b5 100644
--- a/upstream/src/libcn/hosted-neurons.hh
+++ b/upstream/src/libcnrun/hosted-neurons.hh
@@ -8,7 +8,7 @@
* Purpose: hosted neuron classes (those having their
* state vars on parent model's integration vectors)
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_HOSTEDNEURONS_H_
diff --git a/upstream/src/libcn/hosted-synapses.cc b/upstream/src/libcnrun/hosted-synapses.cc
similarity index 99%
rename from upstream/src/libcn/hosted-synapses.cc
rename to upstream/src/libcnrun/hosted-synapses.cc
index 98b88ac..6aed7c8 100644
--- a/upstream/src/libcn/hosted-synapses.cc
+++ b/upstream/src/libcnrun/hosted-synapses.cc
@@ -8,7 +8,7 @@
* Purpose: hosted synapse classes (those having their
* state vars on parent model's integration vectors)
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/hosted-synapses.hh b/upstream/src/libcnrun/hosted-synapses.hh
similarity index 99%
rename from upstream/src/libcn/hosted-synapses.hh
rename to upstream/src/libcnrun/hosted-synapses.hh
index b86eb6c..db2cc59 100644
--- a/upstream/src/libcn/hosted-synapses.hh
+++ b/upstream/src/libcnrun/hosted-synapses.hh
@@ -8,7 +8,7 @@
* Purpose: hosted synapse classes (those having their
* state vars on parent model's integration vectors)
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_HOSTEDSYNAPSES_H_
diff --git a/upstream/src/libcn/integrate-base.hh b/upstream/src/libcnrun/integrate-base.hh
similarity index 98%
rename from upstream/src/libcn/integrate-base.hh
rename to upstream/src/libcnrun/integrate-base.hh
index 2481a27..e4d6399 100644
--- a/upstream/src/libcn/integrate-base.hh
+++ b/upstream/src/libcnrun/integrate-base.hh
@@ -7,7 +7,7 @@
*
* Purpose: base class for integrators, to be plugged into CModel.
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_INTEGRATE_BASE_H_
diff --git a/upstream/src/libcn/integrate-rk65.hh b/upstream/src/libcnrun/integrate-rk65.hh
similarity index 97%
rename from upstream/src/libcn/integrate-rk65.hh
rename to upstream/src/libcnrun/integrate-rk65.hh
index 874b757..ae1d033 100644
--- a/upstream/src/libcn/integrate-rk65.hh
+++ b/upstream/src/libcnrun/integrate-rk65.hh
@@ -7,7 +7,7 @@
*
* Purpose: A Runge-Kutta 6-5 integrator.
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_INTEGRATERK65_H_
diff --git a/upstream/src/libcn/model-cycle.cc b/upstream/src/libcnrun/model-cycle.cc
similarity index 99%
rename from upstream/src/libcn/model-cycle.cc
rename to upstream/src/libcnrun/model-cycle.cc
index e75265e..5610f48 100644
--- a/upstream/src/libcn/model-cycle.cc
+++ b/upstream/src/libcnrun/model-cycle.cc
@@ -6,7 +6,7 @@
*
* Purpose: CModel top cycle
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/model-nmlio.cc b/upstream/src/libcnrun/model-nmlio.cc
similarity index 99%
rename from upstream/src/libcn/model-nmlio.cc
rename to upstream/src/libcnrun/model-nmlio.cc
index 9fef2bb..2286c7b 100644
--- a/upstream/src/libcn/model-nmlio.cc
+++ b/upstream/src/libcnrun/model-nmlio.cc
@@ -6,7 +6,7 @@
*
* Purpose: NeuroML import/export.
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
@@ -164,7 +164,7 @@ _process_populations( xmlNode *n)
try {
for ( ; n; n = n->next ) { // if is nullptr (parent had no children), we won't do a single loop
- if ( n->type != XML_ELEMENT_NODE || xmlStrEqual( n->name, BAD_CAST "population") )
+ if ( n->type != XML_ELEMENT_NODE || !xmlStrEqual( n->name, BAD_CAST "population") )
continue;
group_id_s = xmlGetProp( n, BAD_CAST "name");
@@ -435,7 +435,7 @@ _process_projection_connections(
snprintf( tgt_s, C_BaseUnit::max_label_size-1, "%s.%s", tgt_grp_prefix, tgt_cell_id_s);
if ( !weight_s ) {
- vp( 1, stderr, "Assuming 0 for a synapse of \"%s.%s\" to \"%s%s\" without a \"weight\" attribute near line %u\n",
+ vp( 3, stderr, "Assuming 0 for a synapse of \"%s.%s\" to \"%s%s\" without a \"weight\" attribute near line %u\n",
src_grp_prefix, src_cell_id_s, tgt_grp_prefix, tgt_cell_id_s, n->line);
weight = 0.;
}
diff --git a/upstream/src/libcn/model-struct.cc b/upstream/src/libcnrun/model-struct.cc
similarity index 98%
rename from upstream/src/libcn/model-struct.cc
rename to upstream/src/libcnrun/model-struct.cc
index 90c4fac..e74e8c2 100644
--- a/upstream/src/libcn/model-struct.cc
+++ b/upstream/src/libcnrun/model-struct.cc
@@ -7,7 +7,7 @@
*
* Purpose: CModel household.
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
@@ -850,23 +850,27 @@ void
cnrun::CModel::
cull_blind_synapses()
{
- auto Yi = hosted_synapses.rbegin();
+ auto Yi = hosted_synapses.begin();
// units remove themselves from all lists, including the one
// iterated here
- while ( Yi != hosted_synapses.rend() ) {
+ while ( Yi != hosted_synapses.end() ) {
auto& Y = **Yi;
if ( !Y._source && !Y.has_sources() ) {
vp( 3, " (deleting synapse with NULL source: \"%s\")\n", Y._label);
- delete &Y;
- }
+ delete &Y; // units are smart, self-erase
+ // themselves from the list we are
+ // iterating over here
+ } else
+ ++Yi;
}
- auto Zi = standalone_synapses.rbegin();
- while ( Zi != standalone_synapses.rend() ) {
+ auto Zi = standalone_synapses.begin();
+ while ( Zi != standalone_synapses.end() ) {
auto& Y = **Zi;
if ( !Y._source && !Y.has_sources() ) {
vp( 3, " (deleting synapse with NULL source: \"%s\")\n", Y._label);
delete &Y;
- }
+ } else
+ ++Zi;
}
}
diff --git a/upstream/src/libcn/model-tags.cc b/upstream/src/libcnrun/model-tags.cc
similarity index 99%
rename from upstream/src/libcn/model-tags.cc
rename to upstream/src/libcnrun/model-tags.cc
index eab56f2..847529d 100644
--- a/upstream/src/libcn/model-tags.cc
+++ b/upstream/src/libcnrun/model-tags.cc
@@ -7,7 +7,7 @@
*
* Purpose: CModel household (process_*_tags(), and other methods using regexes).
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/model.hh b/upstream/src/libcnrun/model.hh
similarity index 99%
rename from upstream/src/libcn/model.hh
rename to upstream/src/libcnrun/model.hh
index 44ef9d1..a0468ea 100644
--- a/upstream/src/libcn/model.hh
+++ b/upstream/src/libcnrun/model.hh
@@ -6,7 +6,7 @@
*
* Purpose: Main model class.
*
- * License: GPL
+ * License: GPL-2+
*/
/*--------------------------------------------------------------------------
@@ -309,6 +309,7 @@ class CModel : public cnrun::stilton::C_verprintf {
double dt_min() const { return _integrator->_dt_min; }
double dt_max() const { return _integrator->_dt_max; }
double dt_cap() const { return _integrator->_dt_cap; }
+ void set_dt(double v) { _integrator->dt = v; }
void set_dt_min(double v) { _integrator->_dt_min = v; }
void set_dt_max(double v) { _integrator->_dt_max = v; }
void set_dt_cap(double v) { _integrator->_dt_cap = v; }
diff --git a/upstream/src/libcn/mx-attr.hh b/upstream/src/libcnrun/mx-attr.hh
similarity index 97%
rename from upstream/src/libcn/mx-attr.hh
rename to upstream/src/libcnrun/mx-attr.hh
index e91f10e..e38dd93 100644
--- a/upstream/src/libcn/mx-attr.hh
+++ b/upstream/src/libcnrun/mx-attr.hh
@@ -7,7 +7,7 @@
*
* Purpose: Interface class for mltiplexing units.
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_MXATTR_H_
diff --git a/upstream/src/libcn/sources.cc b/upstream/src/libcnrun/sources.cc
similarity index 99%
rename from upstream/src/libcn/sources.cc
rename to upstream/src/libcnrun/sources.cc
index 4800fad..1dcd47d 100644
--- a/upstream/src/libcn/sources.cc
+++ b/upstream/src/libcnrun/sources.cc
@@ -7,7 +7,7 @@
*
* Purpose: External stimulation sources (periodic, tape, noise).
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/sources.hh b/upstream/src/libcnrun/sources.hh
similarity index 99%
rename from upstream/src/libcn/sources.hh
rename to upstream/src/libcnrun/sources.hh
index 88533b8..7caa63c 100644
--- a/upstream/src/libcn/sources.hh
+++ b/upstream/src/libcnrun/sources.hh
@@ -7,7 +7,7 @@
*
* Purpose: External stimulation sources (periodic, tape, noise).
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_SOURCES_H_
diff --git a/upstream/src/libcn/standalone-attr.hh b/upstream/src/libcnrun/standalone-attr.hh
similarity index 97%
rename from upstream/src/libcn/standalone-attr.hh
rename to upstream/src/libcnrun/standalone-attr.hh
index a86eefd..29ccb1d 100644
--- a/upstream/src/libcn/standalone-attr.hh
+++ b/upstream/src/libcnrun/standalone-attr.hh
@@ -7,7 +7,7 @@
*
* Purpose: Interface class for standalone units.
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_STANDALONEATTR_H_
diff --git a/upstream/src/libcn/standalone-neurons.cc b/upstream/src/libcnrun/standalone-neurons.cc
similarity index 99%
rename from upstream/src/libcn/standalone-neurons.cc
rename to upstream/src/libcnrun/standalone-neurons.cc
index ba45d45..332419a 100644
--- a/upstream/src/libcn/standalone-neurons.cc
+++ b/upstream/src/libcnrun/standalone-neurons.cc
@@ -8,7 +8,7 @@
* Purpose: standalone neurons (those not having state vars
* on model's integration vector)
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/standalone-neurons.hh b/upstream/src/libcnrun/standalone-neurons.hh
similarity index 99%
rename from upstream/src/libcn/standalone-neurons.hh
rename to upstream/src/libcnrun/standalone-neurons.hh
index 83b7f49..e9168d5 100644
--- a/upstream/src/libcn/standalone-neurons.hh
+++ b/upstream/src/libcnrun/standalone-neurons.hh
@@ -8,7 +8,7 @@
* Purpose: standalone neurons (those not having state vars
* on model's integration vector)
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_STANDALONENEURONS_H_
diff --git a/upstream/src/libcn/standalone-synapses.cc b/upstream/src/libcnrun/standalone-synapses.cc
similarity index 98%
rename from upstream/src/libcn/standalone-synapses.cc
rename to upstream/src/libcnrun/standalone-synapses.cc
index e6385c0..6370805 100644
--- a/upstream/src/libcn/standalone-synapses.cc
+++ b/upstream/src/libcnrun/standalone-synapses.cc
@@ -7,7 +7,7 @@
*
* Purpose: standalone synapses.
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/standalone-synapses.hh b/upstream/src/libcnrun/standalone-synapses.hh
similarity index 99%
rename from upstream/src/libcn/standalone-synapses.hh
rename to upstream/src/libcnrun/standalone-synapses.hh
index 8a0057f..fa82c92 100644
--- a/upstream/src/libcn/standalone-synapses.hh
+++ b/upstream/src/libcnrun/standalone-synapses.hh
@@ -8,7 +8,7 @@
* Purpose: standalone synapses (those not having state vars
* on model's integration vector)
*
- * License: GPL
+ * License: GPL-2+
*/
#ifndef CNRUN_LIBCN_STANDALONESYNAPSES_H_
diff --git a/upstream/src/libcn/types.cc b/upstream/src/libcnrun/types.cc
similarity index 99%
rename from upstream/src/libcn/types.cc
rename to upstream/src/libcnrun/types.cc
index b804873..5e5372e 100644
--- a/upstream/src/libcn/types.cc
+++ b/upstream/src/libcnrun/types.cc
@@ -7,7 +7,7 @@
*
* Purpose: CN global unit descriptors
*
- * License: GPL
+ * License: GPL-2+
*/
#if HAVE_CONFIG_H && !defined(VERSION)
diff --git a/upstream/src/libcn/types.hh b/upstream/src/libcnrun/types.hh
similarity index 99%
rename from upstream/src/libcn/types.hh
rename to upstream/src/libcnrun/types.hh
index eab0c93..479d59e 100644
--- a/upstream/src/libcn/types.hh
+++ b/upstream/src/libcnrun/types.hh
@@ -7,7 +7,7 @@
*
* Purpose: Enumerated type for unit ids, and a structure describing a unit type.
*
- * License: GPL
+ * License: GPL-2+
*/
//#define CN_WANT_MORE_NEURONS
diff --git a/upstream/src/libstilton/Makefile.am b/upstream/src/libstilton/Makefile.am
index fc167c1..5fa22b1 100644
--- a/upstream/src/libstilton/Makefile.am
+++ b/upstream/src/libstilton/Makefile.am
@@ -7,20 +7,11 @@ libstilton_la_SOURCES = \
alg.hh \
containers.hh \
lang.hh \
- libstilton.cc \
- misc.hh
+ misc.hh \
+ string.hh \
+ libstilton.cc
libstilton_la_LDFLAGS = \
-avoid-version \
-rpath $(libdir)/$(PACKAGE) \
-shared -module
-
-if DO_PCH
-BUILT_SOURCES = \
- alg.hh.gch \
- containers.hh.gch \
- lang.hh.gch \
- misc.hh.gch
-
-CLEANFILES = $(BUILT_SOURCES)
-endif
diff --git a/upstream/src/libstilton/lang.hh b/upstream/src/libstilton/lang.hh
index 4e5ddbb..da77173 100644
--- a/upstream/src/libstilton/lang.hh
+++ b/upstream/src/libstilton/lang.hh
@@ -73,7 +73,6 @@ inline int dbl_cmp( double x, double y) // optional precision maybe?
#define unlikely(x) __builtin_expect (!!(x), 0)
-#define FABUF printf( __FILE__ ":%d (%s): %s\n", __LINE__, __FUNCTION__, __buf__);
#define FAFA printf( __FILE__ ":%d (%s): fafa\n", __LINE__, __FUNCTION__);
}
diff --git a/upstream/src/tools/.gitignore b/upstream/src/tools/.gitignore
index a3ee24b..3a6c993 100644
--- a/upstream/src/tools/.gitignore
+++ b/upstream/src/tools/.gitignore
@@ -1,3 +1,2 @@
-varfold
hh-latency-estimator
spike2sdf
diff --git a/upstream/src/tools/Makefile.am b/upstream/src/tools/Makefile.am
index d866f51..c95da44 100644
--- a/upstream/src/tools/Makefile.am
+++ b/upstream/src/tools/Makefile.am
@@ -4,22 +4,15 @@ AM_CXXFLAGS += \
$(LIBCN_CFLAGS)
bin_PROGRAMS = \
- spike2sdf varfold hh-latency-estimator
+ spike2sdf hh-latency-estimator
spike2sdf_SOURCES = \
spike2sdf.cc
-varfold_SOURCES = \
- varfold.cc
-varfold_LDFLAAGS = \
- -shared
-varfold_LDADD = \
- $(LIBCN_LIBS)
-
hh_latency_estimator_SOURCES = \
hh-latency-estimator.cc
hh_latency_estimator_LDADD = \
- ../libcn/libcn.la \
+ ../libcnrun/libcnrun.la \
../libstilton/libstilton.la \
$(LIBCN_LIBS)
hh_latency_estimator_LDFLAAGS = \
diff --git a/upstream/src/tools/hh-latency-estimator.cc b/upstream/src/tools/hh-latency-estimator.cc
index deaea57..b5653b2 100644
--- a/upstream/src/tools/hh-latency-estimator.cc
+++ b/upstream/src/tools/hh-latency-estimator.cc
@@ -1,23 +1,27 @@
/*
- * Author: Andrei Zavada <johnhommer at gmail.com>
+ * File name: tools/hh-latency-estimator.cc
+ * Project: cnrun
+ * Author: Andrei Zavada <johnhommer at gmail.com>
+ * Initial version: 2009-09-12
*
- * License: GPL-2+
+ * Purpose: convenience to estimate latency to first spike of an HH neuron
+ * in response to continuous Poisson stimulation
*
- * Initial version: 2009-09-12
+ * License: GPL-2+
*/
-#include <unistd.h>
-
-#include "libcn/hosted-neurons.hh"
-#include "libcn/standalone-synapses.hh"
-
-#include "libcn/model.hh"
-#include "libcn/types.hh"
-
#if HAVE_CONFIG_H && !defined(VERSION)
# include "config.h"
#endif
+#include <unistd.h>
+
+#include "libcnrun/hosted-neurons.hh"
+#include "libcnrun/standalone-synapses.hh"
+
+#include "libcnrun/model.hh"
+#include "libcnrun/types.hh"
+
using namespace cnrun;
CModel *Model;
@@ -26,37 +30,37 @@ enum TOscillatorType { S_POISSON = 0, S_PULSE = 1 };
enum TIncrOpType { INCR_ADD, INCR_MULT };
struct SOptions {
- double pulse_f_min,
- pulse_f_max,
- pulse_df,
- syn_g,
- syn_beta,
- syn_trel;
- bool enable_listening;
-
- size_t n_repeats;
- const char
- *irreg_mag_fname,
- *irreg_cnt_fname;
-
- TOscillatorType
- src_type;
- TIncrOpType
- incr_op;
-
- SOptions()
- : pulse_f_min (-INFINITY),
- pulse_f_max (-INFINITY),
- pulse_df (-INFINITY),
- syn_g (-INFINITY),
- syn_beta (-INFINITY),
- syn_trel (-INFINITY),
- enable_listening (false),
- n_repeats (1),
- irreg_mag_fname (nullptr),
- irreg_cnt_fname (nullptr),
- src_type (S_POISSON)
- {}
+ double pulse_f_min,
+ pulse_f_max,
+ pulse_df,
+ syn_g,
+ syn_beta,
+ syn_trel;
+ bool enable_listening;
+
+ size_t n_repeats;
+ const char
+ *irreg_mag_fname,
+ *irreg_cnt_fname;
+
+ TOscillatorType
+ src_type;
+ TIncrOpType
+ incr_op;
+
+ SOptions()
+ : pulse_f_min (-INFINITY),
+ pulse_f_max (-INFINITY),
+ pulse_df (-INFINITY),
+ syn_g (-INFINITY),
+ syn_beta (-INFINITY),
+ syn_trel (-INFINITY),
+ enable_listening (false),
+ n_repeats (1),
+ irreg_mag_fname (nullptr),
+ irreg_cnt_fname (nullptr),
+ src_type (S_POISSON)
+ {}
};
SOptions Options;
@@ -66,162 +70,167 @@ const char* const pulse_parm_sel[] = { "lambda", "f" };
static int parse_options( int argc, char **argv);
-#define CNRUN_CLPARSE_HELP_REQUEST -1
-#define CNRUN_CLPARSE_ERROR -2
+#define CNRUN_CLPARSE_HELP_REQUEST -1
+#define CNRUN_CLPARSE_ERROR -2
static void usage( const char *argv0);
-#define CNRUN_EARGS -1
-#define CNRUN_ESETUP -2
-#define CNRUN_ETRIALFAIL -3
+#define CNRUN_EARGS -1
+#define CNRUN_ESETUP -2
+#define CNRUN_ETRIALFAIL -3
int
main( int argc, char *argv[])
{
-// cout << "\nHH latency estimator compiled " << __TIME__ << " " << __DATE__ << endl;
+// cout << "\nHH latency estimator compiled " << __TIME__ << " " << __DATE__ << endl;
- if ( argc == 1 ) {
- usage( argv[0]);
- return 0;
- }
+ if ( argc == 1 ) {
+ usage( argv[0]);
+ return 0;
+ }
- int retval = 0;
+ int retval = 0;
- switch ( parse_options( argc, argv) ) {
- case CNRUN_CLPARSE_ERROR:
- cerr << "Problem parsing command line or sanitising values\n"
- "Use -h for help\n";
- return CNRUN_EARGS;
- case CNRUN_CLPARSE_HELP_REQUEST:
- usage( argv[0]);
- return 0;
- }
+ switch ( parse_options( argc, argv) ) {
+ case CNRUN_CLPARSE_ERROR:
+ cerr << "Problem parsing command line or sanitising values\n"
+ "Use -h for help\n";
+ return CNRUN_EARGS;
+ case CNRUN_CLPARSE_HELP_REQUEST:
+ usage( argv[0]);
+ return 0;
+ }
// create and set up the model
- if ( !(Model = new CModel( "hh-latency", new CIntegrateRK65(), 0)) ) {
- cerr << "Failed to create a model\n";
- return CNRUN_ESETUP;
- }
-
- Model->verbosely = 0;
+ Model = new CModel(
+ "hh-latency",
+ new CIntegrateRK65(
+ 1e-6, .5, 5, 1e-8, 1e-12, 1e-6, true),
+ SModelOptions ());
+ if ( !Model ) {
+ cerr << "Failed to create a model\n";
+ return CNRUN_ESETUP;
+ }
+
+ Model->options.verbosely = 0;
// add our three units
- CNeuronHH_d *hh = new CNeuronHH_d( "HH", 0.2, 0.1, 0.3, Model, CN_UOWNED);
- C_BaseNeuron *pulse = (Options.src_type == S_PULSE)
- ? static_cast<C_BaseNeuron*>(new CNeuronDotPulse( "Pulse", 0.1, 0.2, 0.3, Model, CN_UOWNED))
- : static_cast<C_BaseNeuron*>(new COscillatorDotPoisson( "Pulse", 0.1, 0.2, 0.3, Model, CN_UOWNED));
- CSynapseMxAB_dd *synapse = new CSynapseMxAB_dd( pulse, hh, Options.syn_g, Model, CN_UOWNED);
+ CNeuronHH_d *hh = new CNeuronHH_d( "HH", 0.2, 0.1, 0.3, Model, CN_UOWNED);
+ C_BaseNeuron *pulse = (Options.src_type == S_PULSE)
+ ? static_cast<C_BaseNeuron*>(new CNeuronDotPulse( "Pulse", 0.1, 0.2, 0.3, Model, CN_UOWNED))
+ : static_cast<C_BaseNeuron*>(new COscillatorDotPoisson( "Pulse", 0.1, 0.2, 0.3, Model, CN_UOWNED));
+ CSynapseMxAB_dd *synapse = new CSynapseMxAB_dd( pulse, hh, Options.syn_g, Model, CN_UOWNED);
// enable_spikelogging_service
- hh -> enable_spikelogging_service();
+ hh -> enable_spikelogging_service();
- if ( Options.enable_listening ) {
- hh -> start_listening( CN_ULISTENING_DISK | CN_ULISTENING_1VARONLY);
- pulse -> start_listening( CN_ULISTENING_DISK);
- synapse -> start_listening( CN_ULISTENING_DISK);
- Model->listen_dt = 0.;
- }
+ if ( Options.enable_listening ) {
+ hh -> start_listening( CN_ULISTENING_DISK | CN_ULISTENING_1VARONLY);
+ pulse -> start_listening( CN_ULISTENING_DISK);
+ synapse -> start_listening( CN_ULISTENING_DISK);
+ Model->options.listen_dt = 0.;
+ }
// assign user-supplied values to parameters: invariant ones first
- if ( Options.syn_beta != -INFINITY )
- synapse->param_value("beta") = Options.syn_beta;
- if ( Options.syn_trel != -INFINITY )
- synapse->param_value("trel") = Options.syn_trel;
+ if ( Options.syn_beta != -INFINITY )
+ synapse->param_value("beta") = Options.syn_beta;
+ if ( Options.syn_trel != -INFINITY )
+ synapse->param_value("trel") = Options.syn_trel;
// do trials
- size_t n_spikes;
- double warmup_time = 30;
-
- size_t i;
-
- size_t n_steps = 1 + ((Options.incr_op == INCR_ADD)
- ? (Options.pulse_f_max - Options.pulse_f_min) / Options.pulse_df
- : log(Options.pulse_f_max / Options.pulse_f_min) / log(Options.pulse_df));
-
- double frequencies[n_steps];
- for ( i = 0; i < n_steps; i++ )
- frequencies[i] = (Options.incr_op == INCR_ADD)
- ? Options.pulse_f_min + i*Options.pulse_df
- : Options.pulse_f_min * pow( Options.pulse_df, (double)i);
- vector<double>
- irreg_mags[n_steps];
- size_t irreg_counts[n_steps];
- memset( irreg_counts, 0, n_steps*sizeof(size_t));
-
- double latencies[n_steps];
-
- for ( size_t trial = 0; trial < Options.n_repeats; trial++ ) {
- memset( latencies, 0, n_steps*sizeof(double));
-
- for ( i = 0; i < n_steps; i++ ) {
-
- if ( Options.enable_listening ) {
- char label[CN_MAX_LABEL_SIZE];
- snprintf( label, CN_MAX_LABEL_SIZE, "pulse-%06g", frequencies[i]);
- pulse->set_label( label);
- snprintf( label, CN_MAX_LABEL_SIZE, "hh-%06g", frequencies[i]);
- hh->set_label( label);
- snprintf( label, CN_MAX_LABEL_SIZE, "synapse-%06g", frequencies[i]);
- synapse->set_label( label);
- }
- Model->reset(); // will reset model_time, preserve params, and is a generally good thing
-
- pulse->param_value( pulse_parm_sel[Options.src_type]) = 0;
-
- // warmup
- Model->advance( warmup_time);
- if ( hh->spikelogger_agent()->spike_history.size() )
- printf( "What? %zd spikes already?\n", hh->spikelogger_agent()->spike_history.size());
- // calm down the integrator
- Model->dt() = Model->dt_min();
- // assign trial values
- pulse->param_value(pulse_parm_sel[Options.src_type]) = frequencies[i];
- // go
- Model->advance( 100);
-
- // collect latency: that is, the time of the first spike
- latencies[i] = (( n_spikes = hh->spikelogger_agent()->spike_history.size() )
- ? *(hh->spikelogger_agent()->spike_history.begin()) - warmup_time
- : 999);
-
- printf( "%g\t%g\t%zu\n", frequencies[i], latencies[i], n_spikes);
- }
-
- printf( "\n");
- for ( i = 1; i < n_steps; i++ )
- if ( latencies[i] > latencies[i-1] ) {
- irreg_mags[i].push_back( (latencies[i] - latencies[i-1]) / latencies[i-1]);
- irreg_counts[i]++;
- }
- }
-
-
- {
- ostream *irrmag_strm = Options.irreg_mag_fname ? new ofstream( Options.irreg_mag_fname) : &cout;
-
- (*irrmag_strm) << "#<at>\t<irreg_mag>\n";
- for ( i = 0; i < n_steps; i++ )
- if ( irreg_mags[i].size() )
- for ( size_t j = 0; j < irreg_mags[i].size(); j++ )
- (*irrmag_strm) << frequencies[i] << '\t' << irreg_mags[i][j] << endl;
-
- if ( Options.irreg_mag_fname )
- delete irrmag_strm;
- }
- {
- ostream *irrcnt_strm = Options.irreg_cnt_fname ? new ofstream( Options.irreg_cnt_fname) : &cout;
-
- (*irrcnt_strm) << "#<at>\t<cnt>\n";
- for ( i = 0; i < n_steps; i++ )
- (*irrcnt_strm) << frequencies[i] << '\t' << irreg_counts[i] << endl;
-
- if ( Options.irreg_cnt_fname )
- delete irrcnt_strm;
- }
- delete Model;
-
- return retval;
+ size_t n_spikes;
+ double warmup_time = 30;
+
+ size_t i;
+
+ size_t n_steps = 1 + ((Options.incr_op == INCR_ADD)
+ ? (Options.pulse_f_max - Options.pulse_f_min) / Options.pulse_df
+ : log(Options.pulse_f_max / Options.pulse_f_min) / log(Options.pulse_df));
+
+ double frequencies[n_steps];
+ for ( i = 0; i < n_steps; i++ )
+ frequencies[i] = (Options.incr_op == INCR_ADD)
+ ? Options.pulse_f_min + i*Options.pulse_df
+ : Options.pulse_f_min * pow( Options.pulse_df, (double)i);
+ vector<double>
+ irreg_mags[n_steps];
+ size_t irreg_counts[n_steps];
+ memset( irreg_counts, 0, n_steps*sizeof(size_t));
+
+ double latencies[n_steps];
+
+ for ( size_t trial = 0; trial < Options.n_repeats; trial++ ) {
+ memset( latencies, 0, n_steps*sizeof(double));
+
+ for ( i = 0; i < n_steps; i++ ) {
+
+ if ( Options.enable_listening ) {
+ char label[C_BaseUnit::max_label_size];
+ snprintf( label, C_BaseUnit::max_label_size, "pulse-%06g", frequencies[i]);
+ pulse->set_label( label);
+ snprintf( label, C_BaseUnit::max_label_size, "hh-%06g", frequencies[i]);
+ hh->set_label( label);
+ snprintf( label, C_BaseUnit::max_label_size, "synapse-%06g", frequencies[i]);
+ synapse->set_label( label);
+ }
+ Model->reset(); // will reset model_time, preserve params, and is a generally good thing
+
+ pulse->param_value( pulse_parm_sel[Options.src_type]) = 0;
+
+ // warmup
+ Model->advance( warmup_time);
+ if ( hh->spikelogger_agent()->spike_history.size() )
+ printf( "What? %zd spikes already?\n", hh->spikelogger_agent()->spike_history.size());
+ // calm down the integrator
+ Model->set_dt( Model->dt_min());
+ // assign trial values
+ pulse->param_value(pulse_parm_sel[Options.src_type]) = frequencies[i];
+ // go
+ Model->advance( 100);
+
+ // collect latency: that is, the time of the first spike
+ latencies[i] = (( n_spikes = hh->spikelogger_agent()->spike_history.size() )
+ ? *(hh->spikelogger_agent()->spike_history.begin()) - warmup_time
+ : 999);
+
+ printf( "%g\t%g\t%zu\n", frequencies[i], latencies[i], n_spikes);
+ }
+
+ printf( "\n");
+ for ( i = 1; i < n_steps; i++ )
+ if ( latencies[i] > latencies[i-1] ) {
+ irreg_mags[i].push_back( (latencies[i] - latencies[i-1]) / latencies[i-1]);
+ irreg_counts[i]++;
+ }
+ }
+
+
+ {
+ ostream *irrmag_strm = Options.irreg_mag_fname ? new ofstream( Options.irreg_mag_fname) : &cout;
+
+ (*irrmag_strm) << "#<at>\t<irreg_mag>\n";
+ for ( i = 0; i < n_steps; i++ )
+ if ( irreg_mags[i].size() )
+ for ( size_t j = 0; j < irreg_mags[i].size(); j++ )
+ (*irrmag_strm) << frequencies[i] << '\t' << irreg_mags[i][j] << endl;
+
+ if ( Options.irreg_mag_fname )
+ delete irrmag_strm;
+ }
+ {
+ ostream *irrcnt_strm = Options.irreg_cnt_fname ? new ofstream( Options.irreg_cnt_fname) : &cout;
+
+ (*irrcnt_strm) << "#<at>\t<cnt>\n";
+ for ( i = 0; i < n_steps; i++ )
+ (*irrcnt_strm) << frequencies[i] << '\t' << irreg_counts[i] << endl;
+
+ if ( Options.irreg_cnt_fname )
+ delete irrcnt_strm;
+ }
+ delete Model;
+
+ return retval;
}
@@ -233,24 +242,24 @@ main( int argc, char *argv[])
static void
usage( const char *argv0)
{
- cout << "Usage: " << argv0 << "-f...|-l... [-y...]\n" <<
- "Stimulation intensity to estimate the response latency for:\n"
- " -f <double f_min>:<double f_incr>:<double f_max>\n"
- "\t\t\tUse a DotPulse oscillator, with these values for f, or\n"
- " -l <double f_min>:<double f_incr>:<double f_max>\n"
- "\t\t\tUse a DotPoisson oscillator, with these values for lambda\n"
- "Synapse parameters:\n"
- " -yg <double>\tgsyn (required)\n"
- " -yb <double>\tbeta\n"
- " -yr <double>\ttrel\n"
- "\n"
- " -o\t\t\tWrite unit variables\n"
- " -c <uint>\t\tRepeat this many times\n"
- " -T <fname>\tCollect stats on irreg_cnt to fname\n"
- " -S <fname>\tCollect stats on irreg_mags to fname\n"
- "\n"
- " -h \t\tDisplay this help\n"
- "\n";
+ cout << "Usage: " << argv0 << "-f...|-l... [-y...]\n" <<
+ "Stimulation intensity to estimate the response latency for:\n"
+ " -f <double f_min>:<double f_incr>:<double f_max>\n"
+ "\t\t\tUse a DotPulse oscillator, with these values for f, or\n"
+ " -l <double f_min>:<double f_incr>:<double f_max>\n"
+ "\t\t\tUse a DotPoisson oscillator, with these values for lambda\n"
+ "Synapse parameters:\n"
+ " -yg <double>\tgsyn (required)\n"
+ " -yb <double>\tbeta\n"
+ " -yr <double>\ttrel\n"
+ "\n"
+ " -o\t\t\tWrite unit variables\n"
+ " -c <uint>\t\tRepeat this many times\n"
+ " -T <fname>\tCollect stats on irreg_cnt to fname\n"
+ " -S <fname>\tCollect stats on irreg_mags to fname\n"
+ "\n"
+ " -h \t\tDisplay this help\n"
+ "\n";
}
@@ -261,72 +270,89 @@ usage( const char *argv0)
static int
parse_options( int argc, char **argv)
{
- int c;
-
- while ( (c = getopt( argc, argv, "f:l:y:oc:S:T:h")) != -1 )
- switch ( c ) {
- case 'y':
- switch ( optarg[0] ) {
- case 'g': if ( sscanf( optarg+1, "%lg", &Options.syn_g) != 1 ) {
- cerr << "-yg takes a double\n";
- return CNRUN_CLPARSE_ERROR;
- } break;
- case 'b': if ( sscanf( optarg+1, "%lg", &Options.syn_beta) != 1 ) {
- cerr << "-yb takes a double\n";
- return CNRUN_CLPARSE_ERROR;
- } break;
- case 'r': if ( sscanf( optarg+1, "%lg", &Options.syn_trel) != 1 ) {
- cerr << "-yr takes a double\n";
- return CNRUN_CLPARSE_ERROR;
- } break;
- default: cerr << "Unrecognised option modifier for -y\n";
- return CNRUN_CLPARSE_ERROR;
- }
- break;
-
- case 'f':
- case 'l':
- if ( (Options.incr_op = INCR_ADD,
- (sscanf( optarg, "%lg:%lg:%lg",
- &Options.pulse_f_min, &Options.pulse_df, &Options.pulse_f_max) == 3))
- ||
- (Options.incr_op = INCR_MULT,
- (sscanf( optarg, "%lg*%lg:%lg",
- &Options.pulse_f_min, &Options.pulse_df, &Options.pulse_f_max) == 3)) ) {
-
- Options.src_type = (c == 'f') ? S_PULSE : S_POISSON;
-
- } else {
- cerr << "Expecting all three parameter with -{f,l} min{:,*}incr:max\n";
- return CNRUN_CLPARSE_ERROR;
- }
- break;
-
- case 'o': Options.enable_listening = true; break;
-
- case 'c': Options.n_repeats = strtoul( optarg, nullptr, 10); break;
-
- case 'S': Options.irreg_mag_fname = optarg; break;
- case 'T': Options.irreg_cnt_fname = optarg; break;
-
- case 'h':
- return CNRUN_CLPARSE_HELP_REQUEST;
- case '?':
- default:
- return CNRUN_CLPARSE_ERROR;
- }
-
- if ( Options.pulse_f_min == -INFINITY ||
- Options.pulse_f_max == -INFINITY ||
- Options.pulse_df == -INFINITY ) {
- cerr << "Oscillator type (with -f or -l) not specified\n";
- return CNRUN_EARGS;
- }
-
- return 0;
+ int c;
+
+ while ( (c = getopt( argc, argv, "f:l:y:oc:S:T:h")) != -1 )
+ switch ( c ) {
+ case 'y':
+ switch ( optarg[0] ) {
+ case 'g':
+ if ( sscanf( optarg+1, "%lg", &Options.syn_g) != 1 ) {
+ cerr << "-yg takes a double\n";
+ return CNRUN_CLPARSE_ERROR;
+ }
+ break;
+ case 'b':
+ if ( sscanf( optarg+1, "%lg", &Options.syn_beta) != 1 ) {
+ cerr << "-yb takes a double\n";
+ return CNRUN_CLPARSE_ERROR;
+ }
+ break;
+ case 'r':
+ if ( sscanf( optarg+1, "%lg", &Options.syn_trel) != 1 ) {
+ cerr << "-yr takes a double\n";
+ return CNRUN_CLPARSE_ERROR;
+ }
+ break;
+ default:
+ cerr << "Unrecognised option modifier for -y\n";
+ return CNRUN_CLPARSE_ERROR;
+ }
+ break;
+
+ case 'f':
+ case 'l':
+ if ( (Options.incr_op = INCR_ADD,
+ (sscanf( optarg, "%lg:%lg:%lg",
+ &Options.pulse_f_min, &Options.pulse_df, &Options.pulse_f_max) == 3))
+ ||
+ (Options.incr_op = INCR_MULT,
+ (sscanf( optarg, "%lg*%lg:%lg",
+ &Options.pulse_f_min, &Options.pulse_df, &Options.pulse_f_max) == 3)) ) {
+
+ Options.src_type = (c == 'f') ? S_PULSE : S_POISSON;
+
+ } else {
+ cerr << "Expecting all three parameter with -{f,l} min{:,*}incr:max\n";
+ return CNRUN_CLPARSE_ERROR;
+ }
+ break;
+
+ case 'o':
+ Options.enable_listening = true;
+ break;
+
+ case 'c':
+ Options.n_repeats = strtoul( optarg, nullptr, 10);
+ break;
+
+ case 'S':
+ Options.irreg_mag_fname = optarg;
+ break;
+ case 'T':
+ Options.irreg_cnt_fname = optarg;
+ break;
+
+ case 'h':
+ return CNRUN_CLPARSE_HELP_REQUEST;
+ case '?':
+ default:
+ return CNRUN_CLPARSE_ERROR;
+ }
+
+ if ( Options.pulse_f_min == -INFINITY ||
+ Options.pulse_f_max == -INFINITY ||
+ Options.pulse_df == -INFINITY ) {
+ cerr << "Oscillator type (with -f or -l) not specified\n";
+ return CNRUN_EARGS;
+ }
+
+ return 0;
}
-
-
-
-// EOF
+// Local Variables:
+// Mode: c++
+// indent-tabs-mode: nil
+// tab-width: 8
+// c-basic-offset: 8
+// End:
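To make the frequency sweep in main() above concrete, a worked example with hypothetical option values: an additive range -f 5:5:50 gives n_steps = 1 + (50 - 5)/5 = 10 trial values 5, 10, ..., 50 for the DotPulse parameter "f", while a multiplicative range -l 5*2:80 gives n_steps = 1 + log(80/5)/log(2) = 5 values 5, 10, 20, 40, 80 for the DotPoisson parameter "lambda". Each trial resets the model, advances warmup_time (30) with the parameter at 0, then advances 100 more time units at the trial value; the reported latency is the first spike time minus warmup_time, or 999 if no spike occurred.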
diff --git a/upstream/src/tools/spike2sdf.cc b/upstream/src/tools/spike2sdf.cc
index 91de7f7..a1dafa6 100644
--- a/upstream/src/tools/spike2sdf.cc
+++ b/upstream/src/tools/spike2sdf.cc
@@ -1,13 +1,17 @@
/*
- * Author: Andrei Zavada <johnhommer at gmail.com>
+ * File name: tools/spike2sdf.cc
+ * Project: cnrun
+ * Author: Andrei Zavada <johnhommer at gmail.com>
+ * Initial version: 2008-11-11
*
- * License: GPL-2+
+ * Purpose: A remedy against forgetting to pass -d to cnrun
*
- * Initial version: 2008-11-11
- *
- * A remedy against forgetting to pass -d to cnrun
+ * License: GPL-2+
*/
+#if HAVE_CONFIG_H && !defined(VERSION)
+# include "config.h"
+#endif
#include <iostream>
#include <fstream>
@@ -17,75 +21,77 @@
#include <cstring>
#include <cmath>
-#if HAVE_CONFIG_H && !defined(VERSION)
-# include "config.h"
-#endif
-
using namespace std;
int
main( int argc, char *argv[])
{
- if ( argc != 5 ) {
- cerr << "Expecting <fname> <period> <sigma> <restrict_window_size\n";
- return -1;
- }
-
- string fname( argv[1]);
-
- double sxf_sample = strtod( argv[2], nullptr),
- sdf_sigma = strtod( argv[3], nullptr),
- restrict_window = strtod( argv[4], nullptr);
-
- ifstream is( fname.c_str());
- if ( !is.good() ) {
- cerr << "Can't read from file " << fname << endl;
- return -1;
- }
- is.ignore( numeric_limits<streamsize>::max(), '\n');
-
- if ( fname.rfind( ".spikes") == fname.size() - 7 )
- fname.erase( fname.size() - 7, fname.size());
- fname += ".sdf";
-
- ofstream os( fname.c_str());
- if ( !os.good() ) {
- cerr << "Can't open " << fname << " for writing\n";
- return -1;
- }
- os << "#<t>\t<sdf>\t<nspikes>\n";
-
-
- vector<double> _spike_history;
- while ( true ) {
- double datum;
- is >> datum;
- if ( is.eof() )
- break;
- _spike_history.push_back( datum);
- }
-
- double at, len = _spike_history.back(), dt,
- sdf_var = sdf_sigma * sdf_sigma;
- cout << fname << ": " << _spike_history.size() << " spikes (last at " << _spike_history.back() << ")\n";
- for ( at = sxf_sample; at < len; at += sxf_sample ) {
- double result = 0.;
- unsigned nspikes = 0;
- for ( auto &T : _spike_history ) {
- dt = T - at;
- if ( restrict_window > 0 && dt < -restrict_window/2 )
- continue;
- if ( restrict_window > 0 && dt > restrict_window/2 )
- break;
-
- nspikes++;
- result += exp( -dt*dt/sdf_var);
-
- }
- os << at << "\t" << result << "\t" << nspikes << endl;
- }
-
- return 0;
+ if ( argc != 5 ) {
+ cerr << "Expecting <fname> <period> <sigma> <restrict_window_size\n";
+ return -1;
+ }
+
+ string fname( argv[1]);
+
+ double sxf_sample = strtod( argv[2], nullptr),
+ sdf_sigma = strtod( argv[3], nullptr),
+ restrict_window = strtod( argv[4], nullptr);
+
+ ifstream is( fname.c_str());
+ if ( !is.good() ) {
+ cerr << "Can't read from file " << fname << endl;
+ return -1;
+ }
+ is.ignore( numeric_limits<streamsize>::max(), '\n');
+
+ if ( fname.rfind( ".spikes") == fname.size() - 7 )
+ fname.erase( fname.size() - 7, fname.size());
+ fname += ".sdf";
+
+ ofstream os( fname.c_str());
+ if ( !os.good() ) {
+ cerr << "Can't open " << fname << " for writing\n";
+ return -1;
+ }
+ os << "#<t>\t<sdf>\t<nspikes>\n";
+
+
+ vector<double> _spike_history;
+ while ( true ) {
+ double datum;
+ is >> datum;
+ if ( is.eof() )
+ break;
+ _spike_history.push_back( datum);
+ }
+
+ double at, len = _spike_history.back(), dt,
+ sdf_var = sdf_sigma * sdf_sigma;
+ cout << fname << ": " << _spike_history.size() << " spikes (last at " << _spike_history.back() << ")\n";
+ for ( at = sxf_sample; at < len; at += sxf_sample ) {
+ double result = 0.;
+ unsigned nspikes = 0;
+ for ( auto &T : _spike_history ) {
+ dt = T - at;
+ if ( restrict_window > 0 && dt < -restrict_window/2 )
+ continue;
+ if ( restrict_window > 0 && dt > restrict_window/2 )
+ break;
+
+ nspikes++;
+ result += exp( -dt*dt/sdf_var);
+
+ }
+ os << at << "\t" << result << "\t" << nspikes << endl;
+ }
+
+ return 0;
}
-// EOF
+
+// Local Variables:
+// Mode: c++
+// indent-tabs-mode: nil
+// tab-width: 8
+// c-basic-offset: 8
+// End:
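For reference, the quantity spike2sdf writes at each sampling point t (one per <period>) is, as implemented above,

    SDF(t) = \sum_i \exp( -(t_i - t)^2 / \sigma^2 ),

with the sum optionally restricted to spikes within <restrict_window_size>/2 of t, and nspikes counting the spikes that entered the sum. Note the divisor is \sigma^2 rather than 2\sigma^2, so <sigma> is the offset at which a spike's contribution falls to 1/e, not a standard deviation in the usual Gaussian sense.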
diff --git a/upstream/src/tools/varfold.cc b/upstream/src/tools/varfold.cc
deleted file mode 100644
index 7a2bdf1..0000000
--- a/upstream/src/tools/varfold.cc
+++ /dev/null
@@ -1,718 +0,0 @@
-/*
- * Author: Andrei Zavada <johnhommer at gmail.com>
- *
- * License: GPL-2+
- *
- * Initial version: 2008-11-11
- *
- */
-
-
-
-#include <unistd.h>
-#include <cmath>
-#include <cstdlib>
-#include <cstring>
-#include <iostream>
-#include <fstream>
-#include <sstream>
-#include <limits>
-#include <stdexcept>
-#include <vector>
-#include <valarray>
-#include <numeric>
-
-#if HAVE_CONFIG_H && !defined(VERSION)
-# include "config.h"
-#endif
-
-using namespace std;
-
-
-typedef vector<double>::iterator vd_i;
-typedef vector<unsigned>::iterator vu_i;
-
-
-enum TConvType {
- SDFF_CMP_NONE,
- SDFF_CMP_SQDIFF,
- SDFF_CMP_WEIGHT
-};
-
-enum TCFOpType {
- SDFF_CFOP_AVG,
- SDFF_CFOP_PROD,
- SDFF_CFOP_SUM
-};
-
-
-struct SOptions {
- const char
- *working_dir,
- *target_profiles_dir,
- *grand_target_fname,
- *grand_result_fname;
- vector<string>
- units;
- TCFOpType
- cf_op_type;
- vector<unsigned>
- dims;
- bool go_sdf:1,
- use_shf:1,
- do_normalise:1,
- do_matrix_output:1,
- do_column_output:1,
- assume_no_shf_value:1,
- assume_generic_data:1,
- assume_no_timepoint:1,
- octave_compat:1,
- verbosely:1;
- double sample_from,
- sample_period,
- sample_window;
- unsigned
- field_n,
- of_fields,
- skipped_first_lines;
- TConvType
- conv_type;
-
- SOptions()
- : working_dir ("."),
- target_profiles_dir ("."),
- grand_target_fname ("overall.target"),
- grand_result_fname (nullptr),
- cf_op_type (SDFF_CFOP_AVG),
- go_sdf (true),
- use_shf (false),
- do_normalise (false),
- do_matrix_output (true),
- do_column_output (false),
- assume_no_shf_value (false),
- assume_generic_data (true),
- assume_no_timepoint (false),
- octave_compat (false),
- verbosely (true),
- sample_from (0),
- sample_period (0),
- sample_window (0),
- field_n (1),
- of_fields (1),
- skipped_first_lines (0),
- conv_type (SDFF_CMP_NONE)
- {}
-};
-
-static SOptions Options;
-
-//static size_t dim_prod;
-
-static int get_unit_cf( const char *unit_label, valarray<double> &Mi, double *result);
-
-static int parse_cmdline( int argc, char *argv[]);
-static void usage( const char *argv0);
-
-#define SDFCAT_EARGS -1
-#define SDFCAT_EHELPREQUEST -2
-#define SDFCAT_EFILES -3
-#define SDFCAT_ERANGES -4
-
-
-static int read_matrices_from_sxf( const char* fname, valarray<double> &M, valarray<double> &H, double *sdf_max_p = nullptr);
-static int construct_matrix_from_var( const char* fname, valarray<double> &M);
-static int read_matrix( const char*, valarray<double>&);
-static int write_matrix( const char*, const valarray<double>&);
-static double convolute_matrix_against_target( const valarray<double>&, const valarray<double>&);
-
-
-
-int
-main( int argc, char *argv[])
-{
- int retval = 0;
-
- if ( argc == 1 ) {
- usage( argv[0]);
- return SDFCAT_EARGS;
- }
-
- {
- int parse_retval = parse_cmdline( argc, argv);
- if ( parse_retval ) {
- if ( parse_retval == SDFCAT_EHELPREQUEST )
- usage( argv[0]);
- return -1;
- }
-
- if ( Options.assume_no_shf_value && Options.use_shf ) {
- cerr << "Conflicting options (-H and -H-)\n";
- return -1;
- }
- }
-
- // cd as requested
- char *pwd = nullptr;
- if ( Options.working_dir ) {
- pwd = getcwd( nullptr, 0);
- if ( chdir( Options.working_dir) ) {
- fprintf( stderr, "Failed to cd to \"%s\"\n", Options.working_dir);
- return -2;
- }
- }
-
-
-// vector<double> unit_CFs;
-
- size_t dim_prod = accumulate( Options.dims.begin(), Options.dims.end(), 1., multiplies<double>());
- valarray<double>
- Mi (dim_prod), Mi_valid_cases (dim_prod),
- G (dim_prod), G_valid_cases (dim_prod);
-
- for ( vector<string>::iterator uI = Options.units.begin(); uI != Options.units.end(); uI++ ) {
- double CFi;
- if ( get_unit_cf( uI->c_str(), Mi, &CFi) ) // does its own convolution
- return -4;
-
- for ( size_t i = 0; i < dim_prod; i++ )
- if ( !isfinite( Mi[i]) )
- Mi[i] = (Options.cf_op_type == SDFF_CFOP_PROD) ? 1. : 0.;
- else
- G_valid_cases[i]++;
-
- switch ( Options.cf_op_type ) {
- case SDFF_CFOP_SUM:
- case SDFF_CFOP_AVG:
- G += Mi;
- break;
- case SDFF_CFOP_PROD:
- G *= Mi;
- break;
- }
-
- if ( Options.conv_type != SDFF_CMP_NONE ) {
- ofstream o( (*uI)+".CF");
- o << CFi << endl;
- }
- }
-
- // for ( size_t i = 0; i < dim_prod; i++ )
- // if ( G_valid_cases[i] == 0. )
- // G_valid_cases[i] = 1;
-
- if ( Options.cf_op_type == SDFF_CFOP_AVG )
- G /= G_valid_cases; // Options.units.size();
-
- if ( Options.units.size() > 1 || Options.grand_result_fname ) {
-
- string grand_total_bname (Options.grand_result_fname ? Options.grand_result_fname
- : (Options.cf_op_type == SDFF_CFOP_AVG)
- ? "AVERAGE"
- : (Options.cf_op_type == SDFF_CFOP_SUM)
- ? "SUM" : "PRODUCT");
- write_matrix( grand_total_bname.c_str(), G);
-
- if ( Options.conv_type != SDFF_CMP_NONE ) {
- valarray<double> T (dim_prod);
- if ( read_matrix( (string(Options.target_profiles_dir) + '/' + Options.grand_target_fname).c_str(), T) )
- return -4;
- double grandCF = convolute_matrix_against_target( G, T);
-
- ofstream grand_CF_strm ((grand_total_bname + ".CF").c_str());
- grand_CF_strm << grandCF << endl;
- }
- }
-
- if ( pwd )
- if ( chdir( pwd) )
- ;
-
- return retval;
-}
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-static int
-get_unit_cf( const char* ulabel, valarray<double> &M, double *result_p)
-{
- valarray<double> H (M.size()), T (M.size());
-
- string eventual_fname;
- if ( Options.go_sdf ) {
- if ( (Options.assume_generic_data = true,
- read_matrices_from_sxf( (eventual_fname = ulabel).c_str(), M, H)) &&
-
- (Options.assume_generic_data = false,
- read_matrices_from_sxf( (eventual_fname = string(ulabel) + ".sxf").c_str(), M, H)) &&
-
- (Options.assume_no_shf_value = true, Options.use_shf = false,
- read_matrices_from_sxf( (eventual_fname = string(ulabel) + ".sdf").c_str(), M, H)) ) {
-
- fprintf( stderr, "Failed to read data from\"%s\" or \"%s.s{x,d}f\"\n", ulabel, ulabel);
- return -2;
- }
- } else // go var
- if ( construct_matrix_from_var( (eventual_fname = ulabel).c_str(), M) &&
- construct_matrix_from_var( (eventual_fname = string(ulabel) + ".var").c_str(), M) ) {
-
- fprintf( stderr, "Failed to read \"%s.var\"\n", ulabel);
- return -2;
- }
-
- if ( (Options.do_matrix_output || Options.do_column_output)
- && Options.dims.size() == 2 ) { // only applicable to 2-dim matrices
-
- write_matrix( eventual_fname.c_str(), M);
- if ( Options.use_shf )
- write_matrix( (string(ulabel) + "(shf)").c_str(), H);
- }
-
- if ( Options.conv_type != SDFF_CMP_NONE ) {
- if ( read_matrix( (string(Options.target_profiles_dir) + '/' + eventual_fname + ".target").c_str(), T) ) {
- if ( !Options.do_matrix_output && !Options.do_column_output ) {
- fprintf( stderr, "Failed to read target profile for \"%s\", and no matrix folding output specified\n",
- eventual_fname.c_str());
- return -2;
- }
- } else
- if ( result_p )
- *result_p = convolute_matrix_against_target( M, T);
- }
-
- return 0;
-}
-
-
-
-int
-read_datum( ifstream &ifs, double& v) throw (invalid_argument)
-{
- static string _s;
- ifs >> _s;
- if ( !ifs.good() )
- return -1;
- double _v = NAN;
- try { _v = stod( _s); }
- catch ( invalid_argument ex) {
- if ( strcasecmp( _s.c_str(), "NaN") == 0 )
- v = NAN;
- else if ( strcasecmp( _s.c_str(), "inf") == 0 || strcasecmp( _s.c_str(), "infinity") == 0 )
- v = INFINITY;
- else {
- throw (ex); // rethrow
- return -2;
- }
- }
- v = _v;
- return 0;
-}
-
-
-
-// ------------------------- matrix io ------
-
-static int
-read_matrices_from_sxf( const char *fname, valarray<double> &M, valarray<double> &H, double *sdf_max_p)
-{
- if ( Options.verbosely )
- printf( "Trying \"%s\" ... ", fname);
-
- ifstream ins( fname);
- if ( !ins.good() ) {
- if ( Options.verbosely )
- printf( "not found\n");
- return -1;
- } else
- if ( Options.verbosely )
- printf( "found\n");
-
-// size_t ignored_lines = 0;
-
- double sdf_max = -INFINITY,
- _;
- size_t idx, row;
- for ( idx = row = 0; idx < M.size(); idx += (++row > Options.skipped_first_lines)) {
- while ( ins.peek() == '#' ) {
-// ignored_lines++;
- ins.ignore( numeric_limits<streamsize>::max(), '\n');
- }
-
- if ( ins.eof() ) {
- fprintf( stderr, "Short read from \"%s\" at element %zu\n", fname, idx);
- return -2;
- }
- if ( !Options.assume_no_timepoint )
- ins >> _; // time
-
- try {
- read_datum( ins, M[idx]);
- if ( !Options.assume_generic_data ) {
- if ( !Options.assume_no_shf_value )
- read_datum( ins, H[idx]); // shf
- read_datum( ins, _); // nspikes
- }
-		} catch ( const invalid_argument& ) {
- fprintf( stderr, "Bad value read from \"%s\" at element %zu\n", fname, idx);
- return -2;
- }
-
- if ( M[idx] > sdf_max )
- sdf_max = M[idx];
- }
-
- if ( Options.use_shf )
- M *= H;
-
-	if ( Options.do_normalise ) {
-		M /= sdf_max;  // normalise the whole matrix, not a single (out-of-range) element
-		//H /= sdf_max;
-	}
-
- if ( sdf_max_p )
- *sdf_max_p = sdf_max;
-
- return 0;
-}
-
-
-
-
-
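-// Fill M from a .var file (-R mode): element k is the mean of field
-// Options.field_n (of Options.of_fields per record) over all records whose
-// timepoint falls within sample_window centred at sample_from + k*sample_period.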
-static int
-construct_matrix_from_var( const char *fname, valarray<double> &M)
-{
- ifstream ins( fname);
- if ( !ins.good() ) {
-// cerr << "No results in " << fname << endl;
- return -1;
- }
-
- double at, _, var;
- vector<double> sample;
- size_t idx;
-
- string line;
- try {
- for ( idx = 0; idx < M.size(); ++idx ) {
- M[idx] = 0.;
-
- while ( ins.peek() == '#' )
- ins.ignore( numeric_limits<streamsize>::max(), '\n');
-
- sample.clear();
- do {
- getline( ins, line, '\n');
- if ( ins.eof() ) {
- if ( idx == M.size()-1 )
- break;
- else
- throw "bork";
- }
- stringstream fields (line);
- fields >> at;
- for ( size_t f = 1; f <= Options.of_fields; ++f )
- if ( f == Options.field_n )
- fields >> var;
- else
- fields >> _;
-
- if ( at < Options.sample_from + Options.sample_period * idx - Options.sample_window/2 )
- continue;
-
- sample.push_back( var);
-
- } while ( at <= Options.sample_from + Options.sample_period * idx + Options.sample_window/2 );
-
- M[idx] = accumulate( sample.begin(), sample.end(), 0.) / sample.size();
- }
- } catch (...) {
- fprintf( stderr, "Short read, bad data or some other IO error in %s at record %zd\n", fname, idx);
- return -2;
- }
-
- // if ( Options.do_normalise ) {
- // for ( idx = 0; idx < dim_prod; idx++ )
- // M[idx] /= sdf_max;
- // // if ( H )
- // // for ( idx = 0; idx < dim_prod; idx++ )
- // // H[idx] /= sdf_max;
- // }
-
- return 0;
-}
-
-
-
-
-
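-// Read M.size() whitespace-separated values into M, skipping a leading
-// block of '#' header lines (used for target profiles and grand targets).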
-static int
-read_matrix( const char *fname, valarray<double> &M)
-{
- ifstream ins( fname);
- if ( !ins.good() ) {
- cerr << "No results in " << fname << endl;
- return -1;
- }
-
- while ( ins.peek() == '#' ) {
- ins.ignore( numeric_limits<streamsize>::max(), '\n'); // skip header
- }
-
- size_t idx;
- for ( idx = 0; idx < M.size(); idx++ )
- if ( ins.eof() ) {
- fprintf( stderr, "Short read from \"%s\" at element %zu\n", fname, idx);
- return -1;
- } else
- ins >> M[idx];
- return 0;
-}
-
-
-
-
-
-
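-// Write X as "<fname>.mx" (dims[0] rows of dims[1] tab-separated values,
-// with -O substituting octave-friendly "NaN"/"Inf") and/or as "<fname>.col"
-// (one "col row value" triple per line), as selected by -o.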
-static int
-write_matrix( const char *fname, const valarray<double> &X)
-{
- if ( Options.do_matrix_output ) {
- ofstream outs( (string(fname) + ".mx").c_str());
- if ( Options.verbosely )
- printf( "Writing \"%s.mx\"\n", fname);
- for ( size_t k = 0; k < Options.dims[0]; k++ )
- for ( size_t l = 0; l < Options.dims[1]; l++ ) {
- if ( l > 0 ) outs << "\t";
-				const double &datum = X[k*Options.dims[1] + l];  // row-major: the stride is the column count dims[1]
- if ( Options.octave_compat && !std::isfinite(datum) )
- outs << (std::isinf(datum) ? "Inf" : "NaN");
- else
- outs << datum;
- if ( l == Options.dims[1]-1 ) outs << endl;
- }
- if ( !outs.good() )
- return -1;
- }
-
- if ( Options.do_column_output ) {
- ofstream outs( (string(fname) + ".col").c_str());
- if ( Options.verbosely )
-			printf( "Writing \"%s.col\"\n", fname);
- for ( size_t k = 0; k < Options.dims[0]; k++ )
- for ( size_t l = 0; l < Options.dims[1]; l++ )
-				outs << l << "\t" << k << "\t" << X[k*Options.dims[1] + l] << endl;
- if ( !outs.good() )
- return -1;
- }
-
- return 0;
-}
-
-
-
-
-
-
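-// Collapse a profile M against its target T into a single CF value:
-//   -V weight:  CF = sum_i( M[i] * T[i] )
-//   -V sqdiff:  CF = sqrt( sum_i( (M[i] - T[i])^2 ) )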
-static double
-convolute_matrix_against_target( const valarray<double> &M, const valarray<double> &T)
-{
- double CF = 0.;
- size_t idx;
-
- switch ( Options.conv_type ) {
- case SDFF_CMP_WEIGHT:
- for ( idx = 0; idx < M.size(); idx++ )
- CF += M[idx] * T[idx];
- break;
- case SDFF_CMP_SQDIFF:
- for ( idx = 0; idx < M.size(); idx++ )
- CF += pow( M[idx] - T[idx], 2);
- CF = sqrt( CF);
- break;
- case SDFF_CMP_NONE:
- return NAN;
- }
-
- return CF;
-}
-
-
-
-
-
-
-
-
-
-
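-// Parse command-line options into Options.  Non-option arguments are
-// appended to Options.units, so unit labels may be given with -u or
-// positionally; at least one unit and one -x dimension are required.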
-static int
-parse_cmdline( int argc, char *argv[])
-{
-	int c;
-	while ( (c = getopt( argc, argv, "OC:Rd:f:G:H::u:t:Nx:T:U:V:z:o:F:qh")) != -1 ) {
- switch ( c ) {
- case 'C': Options.working_dir = optarg; break;
-
- case 'R': Options.go_sdf = false; break;
-
- case 'T': Options.grand_target_fname = optarg; break;
- case 'U': Options.grand_result_fname = optarg; break;
-
- case 'd': if ( sscanf( optarg, "%lg:%lg:%lg",
- &Options.sample_from, &Options.sample_period, &Options.sample_window) < 2 ) {
-				cerr << "Expecting two or three numbers with -d (from:period[:window])\n";
- return SDFCAT_EARGS;
- }
- if ( Options.sample_window == 0. )
- Options.sample_window = Options.sample_period; break;
-
- case 'f': if ( sscanf( optarg, "%d:%d",
- &Options.field_n, &Options.of_fields) < 1 ) {
-				cerr << "Expecting one or two numbers with -f (field[:fields])\n";
- return SDFCAT_EARGS;
- } break;
-
- case 'G': Options.target_profiles_dir = optarg; break;
-
- case 'u': Options.units.push_back( string(optarg)); break;
-
- case 'H': if ( optarg )
- if ( strcmp( optarg, "-") == 0 )
- Options.assume_no_shf_value = true, Options.use_shf = false;
- else {
-					cerr << "Unrecognised option to -H: `" << optarg << "'\n";
- return SDFCAT_EARGS;
- }
- else
- Options.use_shf = true; break;
-
- case 't': if ( optarg ) {
- if ( strcmp( optarg, "-") == 0 )
- Options.assume_no_timepoint = Options.assume_generic_data = true,
- Options.use_shf = false;
- else {
- cerr << "Option -t can only be -t-\n";
- return SDFCAT_EARGS;
- }
- } break;
-
- case 'N': Options.do_normalise = true; break;
-
- case 'V': if ( strcmp( optarg, "sqdiff" ) == 0 )
- Options.conv_type = SDFF_CMP_SQDIFF;
- else if ( strcmp( optarg, "weight") == 0 )
- Options.conv_type = SDFF_CMP_WEIGHT;
- else {
- cerr << "-V takes `sqdiff' or `weight'\n";
- return SDFCAT_EARGS;
- }
- break;
- case 'z': if ( strcmp( optarg, "sum" ) == 0 )
- Options.cf_op_type = SDFF_CFOP_SUM;
- else if ( strcmp( optarg, "avg") == 0 )
- Options.cf_op_type = SDFF_CFOP_AVG;
- else if ( strcmp( optarg, "prod") == 0 )
- Options.cf_op_type = SDFF_CFOP_PROD;
- else {
-					cerr << "-z can be `sum', `avg' or `prod'\n";
- return SDFCAT_EARGS;
- }
- break;
- case 'o': Options.do_matrix_output = (strchr( optarg, 'm') != nullptr);
- Options.do_column_output = (strchr( optarg, 'c') != nullptr);
- break;
-
- case 'x':
- {
- unsigned d;
-			if ( sscanf( optarg, "%u", &d) < 1 ) {
- cerr << "-x takes an unsigned\n";
- return SDFCAT_EARGS;
- }
- Options.dims.push_back( d);
- } break;
-
- case 'F':
-			if ( sscanf( optarg, "%u", &Options.skipped_first_lines) < 1 ) {
- cerr << "-F takes an unsigned\n";
- return SDFCAT_EARGS;
- }
- break;
-
- case 'O': Options.octave_compat = true; break;
-
- case 'q': Options.verbosely = false; break;
-
- case 'h':
- return SDFCAT_EHELPREQUEST;
- default:
- return SDFCAT_EARGS;
- }
- }
-
- for ( int i = optind; i < argc; i++ )
- Options.units.push_back( string(argv[i]));
-
- if ( Options.units.empty() ) {
- cerr << "No units (-u) specified\n";
- return SDFCAT_EARGS;
- }
- if ( Options.dims.empty() ) {
- cerr << "No dimensions (-x) specified\n";
- return SDFCAT_EARGS;
- }
-
- return 0;
-}
-
-
-
-
-static void
-usage( const char *argv0)
-{
-	cout << "Usage: " << argv0 << " [options] [unitname_or_filename] ...\n"
- "Options are\n"
- " -C <dir>\t\tcd into dir before working\n"
- " -G <dir>\t\tSearch for target profiles in dir (default " << Options.target_profiles_dir << ")\n"
- " -x <dim>\t\tDimensions for the target and data matrices (repeat as necessary)\n"
- " -V[sqdiff|weight]\tObtain resulting profile by this convolution method:\n"
- "\t\t\t sum of squared differences between source and target profiles,\n"
- "\t\t\t sum of source profile values weighted by those in the target profile\n"
- " -z[sum|avg|prod]\tOperation applied to individual CFs, to produce a grand total\n"
-		" -T <fname>\tRead reference profile from this file (default \"" << Options.grand_target_fname << "\")\n"
- " -U <fname>\tWrite the total result to this file (default is {SUM,AVERAGE,PRODUCT}.mx, per option -z)\n"
- "\n"
- " -R\t\t\tCollect .var data rather than .sxf\n"
- "With -R, use\n"
- " -f <unsigned n1>:<unsigned n2>\n"
- "\t\t\tExtract n1th field of n2 consec. fields per record\n"
- "\t\t\t (default " << Options.field_n << " of " << Options.of_fields << ")\n"
- " -d <double f>:<double p>:<double ws>\tSample from time f at period p with window size ws\n"
- "otherwise:\n"
- " -F <unsigned>\t\tRead sxf data from that position, not from 0\n"
- " -H \t\t\tMultiply sdf by shf\n"
- " -H-\t\t\tAssume there is no shf field in .sxf file\n"
- " -t-\t\t\tAssume no timestamp in data file; implies -H-\n"
- "\n"
- " -o[mc]\t\t\tWrite <unit>.[m]atrix and/or .[c]ol profiles\n"
- " -O\t\t\tWrite nan and inf as \"NaN\" and \"Inf\" to please octave\n"
- " -q\t\t\tSuppress normal messages\n"
- " -h\t\t\tDisplay this help\n"
- "\n"
- " unitname_or_filename\tData vector (e.g., PN.0; multiple entries as necessary;\n"
-		"\t\t\t will try the name as given, then <name>.sxf, then <name>.sdf)\n";
-}
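-// A hypothetical example invocation (assuming the binary is installed as
-// `varfold'): fold two units' 11x11 SDF profiles under ./results, compare
-// each against targets in ./targets by squared differences, and write the
-// per-unit .mx matrices plus the summed grand total:
-//
-//   varfold -C results -G targets -x 11 -x 11 -V sqdiff -z sum -om PN.0 PN.1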
-
-// EOF
--
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/cnrun.git