[med-svn] [pycorrfit] 01/05: Imported Upstream version 0.9.3+dfsg

Alex Mestiashvili malex-guest at moszumanska.debian.org
Fri Dec 4 08:34:57 UTC 2015


This is an automated email from the git hooks/post-receive script.

malex-guest pushed a commit to branch master
in repository pycorrfit.

commit e8dd2d8d71fa0e927c7626f730b9ca2179adf713
Author: Alexandre Mestiashvili <alex at biotec.tu-dresden.de>
Date:   Thu Dec 3 15:58:48 2015 +0100

    Imported Upstream version 0.9.3+dfsg
---
 ChangeLog.txt                                     |  21 ++
 doc/PyCorrFit_doc_content.tex                     |  23 +-
 doc/PyCorrFit_doc_models.tex                      |  12 +
 pycorrfit/doc.py                                  |   2 +
 pycorrfit/fcs_data_set.py                         | 432 +++++++++++++++++-----
 pycorrfit/frontend.py                             |   5 +-
 pycorrfit/models/MODEL_TIRF_1C.py                 |   4 +-
 pycorrfit/models/MODEL_TIRF_2D2D.py               |  12 +-
 pycorrfit/models/MODEL_TIRF_3D2D.py               |   6 +-
 pycorrfit/models/MODEL_TIRF_3D2Dkin_Ries.py       |   5 +-
 pycorrfit/models/MODEL_TIRF_3D3D.py               |  11 +-
 pycorrfit/models/MODEL_TIRF_gaussian_1C.py        |  16 +-
 pycorrfit/models/MODEL_TIRF_gaussian_3D2D.py      |  43 +--
 pycorrfit/models/MODEL_TIRF_gaussian_3D3D.py      |  54 +--
 pycorrfit/models/MODEL_classic_gaussian_2D.py     |  72 ++--
 pycorrfit/models/MODEL_classic_gaussian_3D.py     |  62 ++--
 pycorrfit/models/MODEL_classic_gaussian_3D2D.py   |  50 +--
 pycorrfit/models/MODEL_classic_gaussian_TT3D3D.py | 178 +++++++++
 pycorrfit/models/__init__.py                      |  49 ++-
 pycorrfit/openfile.py                             |   6 +-
 pycorrfit/page.py                                 | 179 +--------
 pycorrfit/readfiles/read_SIN_correlator_com.py    |  10 +
 pycorrfit/tools/info.py                           |   3 +-
 pycorrfit/tools/parmrange.py                      |  24 +-
 pycorrfit/wxutils.py                              | 179 +++++++++
 setup.py                                          |   1 +
 tests/test_fit_model_gaussian.py                  | 231 ++++++++++++
 tests/test_fit_models.py                          |   1 -
 tests/test_global_fit.py                          |   1 +
 29 files changed, 1182 insertions(+), 510 deletions(-)

diff --git a/ChangeLog.txt b/ChangeLog.txt
index 1f71d1f..013cd16 100644
--- a/ChangeLog.txt
+++ b/ChangeLog.txt
@@ -1,3 +1,24 @@
+0.9.3
+- Fitting: migrate to lmfit
+  - This introduces a new dependency for building PyCorrFit.
+    (e.g. in Debian, the package "python-lmfit" is required)
+  - Improved fitting behavior at parameter boundaries
+  - Removed "Polak-Ribiere" fitting algorithm
+- Added "Sequential Least Squares Programming" algorithm
+- Heuristic fit (#109):
+  - Detect parameters that are stuck during fitting
+  - Fit each curve at most five times and check
+    whether the fit converges.
+  - If two diffusion time parameters exist in a model, always
+    make sure that one parameter is the larger one. This
+    feature currently cannot be disabled (#110).
+- Allow infinity ("inf" and "-inf") parameters for models
+  and boundaries.
+- New model: confocal T+T+3D+3D
+- Bugfixes:
+  - Sessions saved with 64bit Windows could not be opened (#136)
+  - Old sessions and "KeyError: 'chi2'"
+  - Old session file extension was not recognized (#106)
 0.9.2
 - Bugfixes:
   - "Slider Simulation"/"Parm Range" broken (#133)
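The "one diffusion time is always the larger one" heuristic from the ChangeLog entry can be sketched as a standalone helper. This is a minimal illustration only; the function name and interface are hypothetical and not PyCorrFit's actual code:

```python
def order_diffusion_times(parms, idx_first, idx_second):
    """Ensure the diffusion time at idx_first is the larger one.

    Hypothetical sketch of the swap heuristic described in the
    ChangeLog; the real PyCorrFit implementation may differ.
    """
    parms = list(parms)
    if parms[idx_first] < parms[idx_second]:
        # Swap so that idx_first always holds the larger time.
        parms[idx_first], parms[idx_second] = (parms[idx_second],
                                               parms[idx_first])
    return parms
```

Applying the helper twice is idempotent, which is what makes such a post-fit normalization safe to run on every iteration.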
diff --git a/doc/PyCorrFit_doc_content.tex b/doc/PyCorrFit_doc_content.tex
index a7b071d..f5c0d75 100755
--- a/doc/PyCorrFit_doc_content.tex
+++ b/doc/PyCorrFit_doc_content.tex
@@ -57,17 +57,7 @@ PyCorrFit is available from the Debian repositories and can be installed via the
 \item\textbf{PyPI.} To run \textit{PyCorrFit} on any other operating system, the installation of Python v.2.7 is required. \textit{PyCorrFit} is included in the Python package index (PyPI, \url{http://pypi.python.org/pypi/pip}) and can be installed via\footnote{See also the wiki article at \url{https://github.com/paulmueller/PyCorrFit/wiki/Installation_pip}}
 \texttt{pip~install~pycorrfit$\!$[GUI]}.
 \item \textbf{Sources.}
-You can also directly download the source code at any developmental stage\footnote{See also the wiki article at \url{https://github.com/paulmueller/PyCorrFit/wiki/Running-from-source}}. \textit{PyCorrFit} depends on the following python modules:
-\texttt{
-\begin{itemize}
-\item matplotlib ($\geq$ 1.0.1)
-\item NumPy ($\geq$ 1.5.1)
-\item PyYAML
-\item SciPy ($\geq$ 0.8.0)
-\item sympy ($\geq$ 0.7.2)
-\item wxPython3
-\end{itemize}
-}
+You can also directly download the source code at any developmental stage\footnote{See also the wiki article at \url{https://github.com/paulmueller/PyCorrFit/wiki/Running-from-source}}.
 \end{itemize}
 
 
@@ -80,7 +70,7 @@ You can also directly download the source code at any developmental stage\footno
 
 The following chapter introduces the general idea of how to start and accomplish a fitting project. FCS experiments produce different sets of experimental correlation functions which must be interpreted with appropriate physical models (\hyref{Chapter}{sec:theor}). Each correlation function refers to a single contiguous signal trace or ``run''. In \textit{PyCorrFit}, the user must assign a mathematical model function to each correlation function during the loading procedure. The assignme [...]
 
-Let's briefly discuss a typical example: To determine the diffusion coefficient of a fluorescently labeled protein in free solution, one has to deal with two sets of autocorrelation data: measurements of a diffusion standard (e.g. free dye for which a diffusion coefficient has been published) to calibrate the detection volume and measurements of the protein sample. The protein sample may contain small amounts of slowly diffusing aggregates. While the calibration measurements can be fitte [...]
+Let's briefly discuss a typical example: To determine the diffusion coefficient of a fluorescently labeled protein in free solution, one has to deal with two sets of autocorrelation data: measurements of a diffusion standard (e.g. free dye for which a diffusion coefficient has been published) to calibrate the detection volume and measurements of the protein sample. The protein sample may contain small amounts of slowly diffusing aggregates. While the calibration measurements can be fitte [...]
 
 
 \begin{enumerate}
@@ -306,9 +296,9 @@ When choosing a model from the \textit{Models} menu, a new page opens and the mo
 \rule{0pt}{3ex} - Confocal (Gaussian): T+3D+3D & Triplet with two diffusive components \\
 %Confocal (Gaussian): T+3D+3D+3D & [Triplet with three diffusive components]
 %Confocal (Gaussian): 2D &  2D diffusion, e.g. in membranes \\
-\rule{0pt}{3ex} - Confocal (Gaussian): T-2D &  Triplet blinking and 2D diffusion \\
-\rule{0pt}{3ex} - Confocal (Gaussian): T-2D-2D & Triplet with two diffusive components \\
-\rule{0pt}{3ex} - Confocal (Gaussian): T-3D-2D &  Triplet with mixed 3D and 2D diffusion \\
+\rule{0pt}{3ex} - Confocal (Gaussian): T+2D &  Triplet blinking and 2D diffusion \\
+\rule{0pt}{3ex} - Confocal (Gaussian): T+2D+2D & Triplet with two diffusive components \\
+\rule{0pt}{3ex} - Confocal (Gaussian): T+3D+2D &  Triplet with mixed 3D and 2D diffusion \\
 \rule{0pt}{3ex}
 \end{tabular}
 
@@ -734,9 +724,8 @@ PyCorrFit can utilize several algorithms to perform this minimization. The descr
 \item The \textbf{BFGS} method uses the quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) \cite{Nocedal2006} (pp. 136). It uses the first derivatives only. BFGS has proven good performance even for non-smooth optimizations.
 \item The \textbf{Levenberg-Marquardt} algorithm \cite{Levenberg1944} uses the first derivatives and combines the Gauss–Newton algorithm with a trust region approach. It is very robust compared to other algorithms and it is very popular in curve-fitting. \textit{PyCorrFit} uses this algorithm by default. If this algorithm is used, \textit{PyCorrFit} can estimate an error of the fit parameters using the covariance matrix.
 \item The \textbf{Nelder-Mead} method uses the Simplex algorithm \cite{Nelder1965,Wright1996}. This algorithm has been successful in many applications, but algorithms that use first and/or second derivative information may be preferred for their generally better performance and robustness.
-\item The method \textbf{Polak-Ribiere} uses a nonlinear conjugate gradient algorithm by Polak and Ribiere, a variant of the Fletcher-Reeves method described in \cite{Nocedal2006} pp.
-120-122. Only the first derivatives are used.
 \item The method \textbf{Powell} is a modification of Powell's method \cite{Powell1964, Press} which is a conjugate direction method. It performs sequential one-dimensional minimizations along each vector of the directions set, which is updated at each iteration of the main minimization loop. The function need not be differentiable, and no derivatives are taken.
+\item \textbf{Sequential Least Squares Programming} inherently accepts boundaries and thus might behave better than other algorithms for problems with bounded parameters.
 \end{itemize}
 
 \input{PyCorrFit_doc_models}
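The bounded behavior of SLSQP described in the documentation hunk above can be tried directly with SciPy's optimizer (which lmfit builds on). A minimal sketch, not taken from PyCorrFit itself:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize (x - 3)^2 subject to the bound x <= 2.  SLSQP honors the
# boundary during the search instead of penalizing or clipping after
# the fact.
res = minimize(lambda x: (x[0] - 3.0) ** 2,
               x0=np.array([0.0]),
               method="SLSQP",
               bounds=[(None, 2.0)])
```

The unconstrained optimum is x = 3, so the bounded solution settles on the boundary at x = 2.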
diff --git a/doc/PyCorrFit_doc_models.tex b/doc/PyCorrFit_doc_models.tex
index 27b4c8c..bbf4f1a 100644
--- a/doc/PyCorrFit_doc_models.tex
+++ b/doc/PyCorrFit_doc_models.tex
@@ -167,6 +167,18 @@ $\tau_\mathrm{trip}$ &  Characteristic residence time in triplet state \\
 \end{center}
 \vspace{2em}
 
+
+% 3D+3D diffusion + T+T
+\noindent \begin{tabular}{lp{.7\textwidth}}
+Name & \textbf{Confocal (Gaussian) T+T+3D+3D} \\ 
+ID & \textbf{6043} \\ 
+Descr. &  Two-component three-dimensional free diffusion with a Gaussian laser profile, including two triplet components. 
+The correlation function is a superposition of three-dimensional model functions of the type \textbf{T+3D} (6011).
+\end{tabular}
+\vspace{2em}
+
+
+
 \subsection{TIR-FCS}
 \label{sec:imple.tirfc}
 The model functions make use of the Faddeeva function (complex error function)\footnote{In user-defined model functions (\hyref{Section}{sec:hacke.extmod}), the Faddeeva function is accessible through \texttt{wofz()}. For convenience, the function \texttt{wixi()} can be used which only takes $\xi$ as an argument and the imaginary $i$ can be omitted.}:
diff --git a/pycorrfit/doc.py b/pycorrfit/doc.py
index 93ce575..00e09fc 100755
--- a/pycorrfit/doc.py
+++ b/pycorrfit/doc.py
@@ -26,6 +26,7 @@ import warnings
 with warnings.catch_warnings():
     warnings.simplefilter("ignore")
     matplotlib.use('WXAgg') # Tells matplotlib to use WxWidgets for dialogs
+import lmfit
 import numpy
 import os
 import platform
@@ -148,6 +149,7 @@ def SoftwareUsed():
     text = "Python "+sys.version+\
            "\n\nModules:"+\
            "\n - cython "+\
+           "\n - lmfit "+lmfit.__version__+\
            "\n - matplotlib "+matplotlib.__version__+\
            "\n - NumPy "+numpy.__version__+\
            "\n - PyYAML "+yaml.__version__ +\
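The version report assembled by SoftwareUsed() in the hunk above follows a simple string-building pattern. A hypothetical standalone sketch (the dictionary interface is invented for illustration; the real function reads each module's __version__ directly):

```python
import sys

def software_used(module_versions):
    """Build a version report similar in spirit to doc.py's
    SoftwareUsed().

    `module_versions` maps module names to version strings; this
    mapping interface is an assumption made for the sketch.
    """
    lines = ["Python " + sys.version.split()[0], "", "Modules:"]
    for name in sorted(module_versions):
        lines.append(" - {} {}".format(name, module_versions[name]))
    return "\n".join(lines)
```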
diff --git a/pycorrfit/fcs_data_set.py b/pycorrfit/fcs_data_set.py
index 4c301b3..cde3b93 100644
--- a/pycorrfit/fcs_data_set.py
+++ b/pycorrfit/fcs_data_set.py
@@ -5,15 +5,15 @@ Classes for FCS data evaluation.
 """
 from __future__ import print_function, division
 
+import copy
 import hashlib
+import lmfit
 import numpy as np
 import scipy.integrate as spintg
 import scipy.interpolate as spintp
-import scipy.optimize as spopt
 import warnings
 
 from . import models as mdls
-from . import plotting
 
 class Trace(object):
     """ unifies trace handling
@@ -237,7 +237,7 @@ class Correlation(object):
         Set the backgrounds. The value can either be a list of traces or
         instances of traces or a single trace in an array.
         """
-        backgrounds = list()
+        backgrounds = []
         if not isinstance(value, list):
             value = [value]
         assert len(value) in [0,1,2], "Backgrounds must be list with up to two elements."
@@ -299,13 +299,16 @@ class Correlation(object):
             boundaries self.fit_parameters_range for each parameter.
         """
         p = 1.*np.array(parms)
-        p = self.fit_model.func_verification(p)
         r = self.fit_parameters_range
-        # TODO:
-        # - add potentials such that parameters don't stick to boundaries
         for i in range(len(p)):
             if r[i][0] == r[i][1]:
                 pass
+            elif r[i][0] is None:
+                if p[i] > r[i][1]:
+                    p[i] = r[i][1]
+            elif r[i][1] is None:
+                if p[i] < r[i][0]:
+                    p[i] = r[i][0]
             elif p[i] < r[i][0]:
                 p[i] = r[i][0]
             elif p[i] > r[i][1]:
@@ -341,6 +344,7 @@ class Correlation(object):
             corr[:,1] *= self.bg_correction_factor
             # perform parameter normalization
             return corr[self.fit_ival[0]:self.fit_ival[1],:]
+        
     
     @property
     def correlation_plot(self):
@@ -354,8 +358,7 @@ class Correlation(object):
             # perform parameter normalization
             corr[:,1] *= self.normalize_factor
             return corr
-    
-    
+
     @property
     def is_ac(self):
         """True if instance contains autocorrelation"""
@@ -464,7 +467,25 @@ class Correlation(object):
     @property
     def fit_parameters_range(self):
         """valid fitting ranges for fit parameters"""
-        return self._fit_parameters_range
+        model = self.fit_model.boundaries
+        mine = self._fit_parameters_range
+        new = []
+        for a, b in zip(model, mine):
+            c = [-np.inf, np.inf]
+            if a[0] != a[1]:
+                c[0] = a[0]
+                c[1] = a[1]
+            # user overrides model
+            if b[0] != b[1]:
+                c[0] = b[0]
+                c[1] = b[1]
+            if c[0] is not None and np.isnan(c[0]):
+                c[0] = -np.inf
+            if c[1] is not None and np.isnan(c[1]):
+                c[1] = np.inf
+
+            new.append(c)         
+        return np.array(new)
 
     @fit_parameters_range.setter
     def fit_parameters_range(self, value):
@@ -490,7 +511,7 @@ class Correlation(object):
     def lag_time(self):
         """logarithmic lag time axis"""
         if self.correlation is not None:
-            return self._correlation[:,0]
+            return self._correlation[:,0].copy()
         elif self._lag_time is not None:
             return self._lag_time
         else:
@@ -530,6 +551,6 @@
         """fitted data values, same shape as self.correlation_fit"""
         toplot = self.modeled_fit
         toplot[:,1] *= self.normalize_factor
         return toplot
 
     @property
@@ -596,7 +620,7 @@ class Correlation(object):
         Set the traces. The value can either be a list of traces or
         instances of traces or a single trace in an array.
         """
-        traces = list()
+        traces = []
         if not isinstance(value, list):
             value = [value]
         assert len(value) in [0,1,2], "Traces must be list with up to two elements."
@@ -683,14 +707,19 @@ class Fit(object):
                 # fit_bool: True for variable
                 self.fit_bool = corr.fit_parameters_variable.copy()
                 self.fit_parm = corr.fit_parameters.copy()
+                self.fit_bound = copy.copy(corr.fit_parameters_range)
                 self.is_weighted_fit = corr.is_weighted_fit
                 self.fit_weights = Fit.compute_weights(corr,
                                                    verbose=verbose,
                                                    uselatex=uselatex)
+                self.fit_parm_names = corr.fit_model.parameters[0]
                 self.func = corr.fit_model.function
                 self.check_parms = corr.check_parms
+                self.constraints = corr.fit_model.constraints
                 # Directly perform the fit and set the "fit" attribute
                 self.minimize()
+                # Run a second time:
+                self.minimize()
                 # update correlation model parameters
                 corr.fit_parameters = self.fit_parm
                 # save fit instance in correlation class
@@ -701,15 +730,16 @@ class Fit(object):
             #   i.e. fitting "n" separately for two models
             # Initiate all arrays
             self.fit_algorithm = self.correlations[0].fit_algorithm
-            xtemp = list()      # x
-            ytemp = list()      # y
-            weights = list()    # weights
+            xtemp = []      # x
+            ytemp = []      # y
+            weights = []    # weights
             ids = [0]           # ids in big fitting array
-            cmodels = list()    # correlation model info
-            initpar = list()    # initial parameters
-            varin = list()      # names of variable fitting parameters
-            variv = list()      # values of variable fitting parameters
-            varmap = list()     # list of indices of fitted parameters
+            cmodels = []    # correlation model info
+            initpar = []    # initial parameters
+            varin = []      # names of variable fitting parameters
+            variv = []      # values of variable fitting parameters
+            varmap = []     # list of indices of fitted parameters
+            varbound = []   # list of fitting boundaries
             self.is_weighted_fit = None
             for corr in self.correlations:
                 xtemp.append(corr.correlation_fit[:,0])
@@ -719,12 +749,13 @@ class Fit(object):
                 cmodels.append(corr.fit_model)
                 initpar.append(corr.fit_parameters)
                 # Create list of variable parameters
-                varthis = list()
+                varthis = []
                 for ipm, par in enumerate(corr.fit_model.parameters[0]):
                     if corr.fit_parameters_variable[ipm]:
                         varthis.append(ipm)
                         varin.append(par)
                         variv.append(corr.fit_parameters[ipm])
+                        varbound.append(corr.fit_parameters_range[ipm])
                 varmap.append(varthis)
 
             # These are the variable fitting parameters
@@ -739,6 +770,9 @@ class Fit(object):
             self.fit_parm = variv
             self.fit_weights = np.concatenate(weights)
             self.fit_parm_names = varin
+            self.fit_bound = varbound
+            self.constraints = []
+            warnings.warn("Constraints are not supported yet for global fitting.")
             
             
             def parameters_global_to_local(parameters, iicorr, varin=varin,
@@ -775,7 +809,7 @@ class Fit(object):
             # Create function for fitting using ids
             def global_func(parameters, tau,
                             glob2loc=parameters_global_to_local):
-                out = list()
+                out = []
                 # ids start at 0
                 for ii, mod in enumerate(cmodels):
                     # Update parameters
@@ -852,7 +886,7 @@ class Fit(object):
         # Calculate degrees of freedom
         dof = len(self.x) - np.sum(self.fit_bool) - 1
         # This is exactly what is minimized by the scalar minimizers
-        chi2 = self.fit_function_scalar(self.fit_parm[self.fit_bool], self.x)
+        
         if self.chi_squared_type == "reduced expected sum of squares":
             fitted = self.func(self.fit_parm, self.x)
             chi2 = np.sum((self.y-fitted)**2/np.abs(fitted)) / dof
@@ -864,6 +898,10 @@ class Fit(object):
             fitted = self.func(self.fit_parm, self.x)
             variance = self.fit_weights**2
             chi2 = np.sum((self.y-fitted)**2/variance) / dof
+        else:
+            chi2 = self.fit_function_scalar(self.fit_parm, self.x, self.y, self.fit_weights)
+        
+            
         return chi2
 
 
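The weighted chi-squared branch in the hunk above can be written out compactly. A simplified numpy sketch, assuming the per-point weights are standard deviations (so their squares are variances):

```python
import numpy as np

def reduced_chi_squared(y, fitted, weights, n_varied):
    """Weighted reduced chi-squared as sketched in the hunk above.

    dof = N - n_varied - 1; `weights` are per-point standard
    deviations, so `weights**2` is the variance.  Illustrative only.
    """
    dof = len(y) - n_varied - 1
    variance = weights ** 2
    return np.sum((y - fitted) ** 2 / variance) / dof
```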
@@ -961,22 +999,20 @@ class Fit(object):
                                      "{} knots.".format(knotnumber))
                 return
             if verbose > 0:
+                ## If plotting module is available:
+                #name = "spline fit: "+str(knotnumber)+" knots"
+                #plotting.savePlotSingle(name, 1*x, 1*y, 1*ys,
+                #                         dirname=".",
+                #                         uselatex=uselatex)
+                # use matplotlib.pylab
                 try:
-                    # If plotting module is available:
-                    name = "spline fit: "+str(knotnumber)+" knots"
-                    plotting.savePlotSingle(name, 1*x, 1*y, 1*ys,
-                                             dirname=".",
-                                             uselatex=uselatex)
-                except:
-                    # use matplotlib.pylab
-                    try:
-                        from matplotlib import pylab as plt
-                        plt.xscale("log")
-                        plt.plot(x, ys, x, y)
-                        plt.show()
-                    except ImportError:
-                        # Tell the user to install matplotlib
-                        print("Couldn't import pylab! - not Plotting")
+                    from matplotlib import pylab as plt
+                    plt.xscale("log")
+                    plt.plot(x, ys, x, y)
+                    plt.show()
+                except ImportError:
+                    # Tell the user to install matplotlib
+                    print("Couldn't import pylab! - not Plotting")
 
             ## Calculation of variance
             # In some cases, the actual cropping interval from ival[0]
@@ -1090,78 +1126,269 @@ class Fit(object):
         return dataweights
         
 
-    def fit_function(self, parms, x):
-        """ Create the function to be minimized. The old function
-            `function` has more parameters than we need for the fitting.
-            So we use this function to set only the necessary 
-            parameters. Returns what `function` would have done.
+    def fit_function(self, params, x, y, weights=1):
+        """ 
+        objective function that returns the residual (difference between
+        model and data) to be minimized in a least squares sense.
         """
-        # We reorder the needed variables to only use these that are
-        # not fixed for minimization
-        index = 0
-        for i in np.arange(len(self.fit_parm)):
-            if self.fit_bool[i]:
-                self.fit_parm[i] = parms[index]
-                index += 1
-        # Only allow physically correct parameters
-        self.fit_parm = self.check_parms(self.fit_parm)
-        tominimize = (self.func(self.fit_parm, x) - self.y)
+        parms = Fit.lmfitparm2array(params)
+        tominimize = (self.func(parms, x) - y)
         # Check dataweights for zeros and don't use these
         # values for the least squares method.
         with np.errstate(divide='ignore'):
-            tominimize = np.where(self.fit_weights!=0, 
-                                  tominimize/self.fit_weights, 0)
+            tominimize = np.where(weights!=0, 
+                                  tominimize/weights, 0)
         ## There might be NaN values because of zero weights:
         #tominimize = tominimize[~np.isinf(tominimize)]
         return tominimize
 
-    def fit_function_scalar(self, parms, x):
+    def fit_function_scalar(self, parms, x, y, weights=1):
         """
-            Wrapper of `fit_function` for scalar minimization methods.
-            Returns the sum of squares of the input data.
-            (Methods that are not "Lev-Mar")
+        Wrapper of `fit_function`.
+        Returns the sum of squares of the input data.
         """
-        e = self.fit_function(parms, x)
+        e = self.fit_function(parms, x, y, weights)
         return np.sum(e*e)
 
+    def get_lmfitparm(self):
+        """
+        Generates an lmfit parameter class from the present data set.
+
+        The following parameters are used:
+        self.x : 1d ndarray length N
+        self.y : 1d ndarray length N
+        self.fit_weights : 1d ndarray length N
+        
+        self.fit_bool : 1d ndarray length P, bool
+        self.fit_parm : 1d ndarray length P, float
+        """
+        params = lmfit.Parameters()
+
+        # First, add all fixed parameters
+        for pp in range(len(self.fit_parm)):
+            if not self.fit_bool[pp]:
+                if self.fit_bound[pp][0] == self.fit_bound[pp][1]:
+                    self.fit_bound[pp]=[-np.inf, np.inf]
+                params.add(lmfit.Parameter(name="parm{:04d}".format(pp),
+                                           value=self.fit_parm[pp],
+                                           vary=self.fit_bool[pp],
+                                           min=self.fit_bound[pp][0],
+                                           max=self.fit_bound[pp][1],
+                                            )
+                                           )
+        
+        # Second, summarize the constraints in a dictionary, where
+        # keys are the parameter indexes of varied parameters.
+        # The dictionary cstrnew only allows integer keys that are
+        # representing parameter indices. The fact that we are effectively
+        # reducing the number of valid constraints to one per parameter
+        # is a design problem that cannot be resolved here. The constraints
+        # must be defined in such a way that a parameter with a larger
+        # index number is dependent on only one parameter with a lower
+        # index number, e.g. parm1>parm0, parm3<parm1, etc..
+        cstrnew = {}
+        bound = np.array(self.fit_bound).copy()
+        for cc in self.constraints:
+            if self.fit_bool[cc[0]] and self.fit_bool[cc[2]]:
+                # Both cc[0] and cc[2] are varied.
+                # Everything will work fine, independent of
+                # whether cc[2] is varied or not.
+                cstrnew[cc[0]] = [cc[1], cc[2]]
+            elif self.fit_bool[cc[0]]:
+                # Only cc[0] is varied, create boundary
+                if cc[1] == "<":
+                    # maximum
+                    bnd = [-np.inf, self.fit_parm[cc[2]]]
+                elif cc[1] == ">":
+                    # minimum
+                    bnd = [self.fit_parm[cc[2]], np.inf]
+                # update boundaries if necessary
+                bound[cc[0]] = [max(bound[cc[0]][0], bnd[0]),
+                                min(bound[cc[0]][1], bnd[1])]
+            elif self.fit_bool[cc[2]]:
+                # Only cc[2] is varied, create boundary
+                if cc[1] == "<":
+                    # minimum boundary
+                    bnd = [self.fit_parm[cc[0]], np.inf]
+                elif cc[1] == ">":
+                    # maximum boundary
+                    bnd = [-np.inf, self.fit_parm[cc[0]]]
+                # update boundaries if necessary
+                bound[cc[2]] = [max(bound[cc[2]][0], bnd[0]),
+                                min(bound[cc[2]][1], bnd[1])]
+            else:
+                # Neither cc[0] nor cc[2] are varied.
+                # Do nothing.
+                pass
+
+        # Third, setup all variable parameters with the necessary constraints.
+        for pp in range(len(self.fit_parm)):
+            if self.fit_bool[pp]:
+                # analyze constraints using lmfit:
+                if pp in cstrnew:
+                    # constrained parameters
+                    ppref = cstrnew[pp][1]
+                    rel = cstrnew[pp][0]
+                    #TODO:
+                    # - combine the two following cases for better readability
+                    if rel == "<":
+                        #p2 < p1
+                        #-> p2 = p1 - d21
+                        #-> d21 = p1 - p2
+                        #-> d21 > 0
+                        deltaname="delta_{}_{}".format(pp, ppref)
+                        params.add(lmfit.Parameter(name=deltaname,
+                                               value=self.fit_parm[ppref]-self.fit_parm[pp],
+                                               vary=self.fit_bool[pp],
+                                               min=0,
+                                               max=np.inf,
+                                                ))
+                        ppcomp = "parm{:04d}-{}".format(ppref, deltaname)
+                        params.add(lmfit.Parameter(name="parm{:04d}".format(pp),
+                                                         # this condition deals with negative numbers
+                                                   expr="{MIN} if {COMP} < {MIN} else {MAX} if {COMP} > {MAX} else {COMP}".format(
+                                                        COMP=ppcomp,
+                                                        MIN=bound[pp][0],
+                                                        MAX=bound[pp][1])
+                                                ))
+                    elif rel == ">":
+                        # The opposite of the above case
+                        #p2 > p1
+                        #-> p2 = p1 + d21
+                        #-> d21 = p2 - p1
+                        #-> d21 > 0
+                        deltaname="delta_{}_{}".format(pp, ppref)
+                        params.add(lmfit.Parameter(name=deltaname,
+                                               value=self.fit_parm[pp]-self.fit_parm[ppref],
+                                               vary=self.fit_bool[pp],
+                                               min=0,
+                                               max=np.inf,
+                                                ))
+                        ppcomp = "parm{:04d}+{}".format(ppref, deltaname)
+                        params.add(lmfit.Parameter(name="parm{:04d}".format(pp),
+                                                         # this condition deals with negative numbers
+                                                   expr="{MIN} if {COMP} < {MIN} else {MAX} if {COMP} > {MAX} else {COMP}".format(
+                                                        COMP=ppcomp,
+                                                        MIN=bound[pp][0],
+                                                        MAX=bound[pp][1])
+                                                ))
+                    else:
+                        raise NotImplementedError("Only '<' and '>' are allowed constraints!")
+                
+                else:
+                    ## normal parameter
+                    params.add(lmfit.Parameter(name="parm{:04d}".format(pp),
+                                               value=self.fit_parm[pp],
+                                               vary=self.fit_bool[pp],
+                                               min=bound[pp][0],
+                                               max=bound[pp][1],
+                                                )
+                                               )
+        return params
+
+    @staticmethod
+    def lmfitparm2array(parms, parmid="parm", attribute="value"):
+        """
+        Convert lmfit parameters to a numpy array.
+        Parameters are identified by name `parmid` which should
+        be at the beginning of a parameters.
+        
+        This method is necessary to separate artificial constraint parameters
+        from the actual parameters.
+        
+        Parameters
+        ----------
+        parms : lmfit.parameter.Parameters or ndarray
+            The input parameters.
+        parmid : str
+            The identifier for parameters. By default this is
+            "parm", i.e. parameters are named like this:
+            "parm0001", "parm0002", etc.
+        attribute : str
+            The attribute to return, e.g.
+            - "value" : return the current value of the parameter
+            - "vary" : return if the parameter is varied during fitting
+        
+        Returns
+        -------
+        parr : ndarray
+            The extracted attribute values. If the input already is
+            an ndarray, it is returned unchanged.
+        """
+        if isinstance(parms, lmfit.parameter.Parameters):
+            # sort a copy; dict.items() is not sortable in Python 3
+            items = sorted(parms.items(), key=lambda x: x[0])
+            parr = [getattr(p[1], attribute) for p in items
+                    if p[0].startswith(parmid)]
+        else:
+            parr = parms
+
+        return np.array(parr)
+
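+ [editor's note] The prefix filtering in `lmfitparm2array` can be illustrated without lmfit; this sketch uses a tiny stand-in object for `lmfit.Parameter` (the class `P` and the helper-parameter name `delta_0001` are hypothetical):

```python
import numpy as np

class P(object):
    # minimal stand-in for lmfit.Parameter (only .value and .vary)
    def __init__(self, value, vary=True):
        self.value = value
        self.vary = vary

def parm2array(parms, parmid="parm", attribute="value"):
    # same idea as lmfitparm2array: sort by name, keep the prefix matches
    items = sorted(parms.items(), key=lambda x: x[0])
    return np.array([getattr(p, attribute) for name, p in items
                     if name.startswith(parmid)])

parms = {"parm0002": P(2.0),
         "parm0001": P(1.0, vary=False),
         "delta_0001": P(0.5)}           # artificial constraint helper, dropped
print(parm2array(parms))                 # sorted values, helper excluded
print(parm2array(parms, attribute="vary"))
```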
     def minimize(self):
         """ This will run the minimization process
+
         """
         assert (np.sum(self.fit_bool) != 0), "No parameter selected for fitting."
+        
+        # get all parameters for minimization
+        params = self.get_lmfitparm()
+        
         # Get algorithm
-        algorithm = Algorithms[self.fit_algorithm][0]
+        method = Algorithms[self.fit_algorithm][0]
+        methodkwargs = Algorithms[self.fit_algorithm][2]
 
         # Begin fitting
-        
-        if self.fit_algorithm == "Lev-Mar":
-            res = algorithm(self.fit_function, self.fit_parm[self.fit_bool],
-                            args=(self.x,), full_output=1)
-        else:
-            disp = self.verbose > 0 # print convergence message
-            res = algorithm(self.fit_function_scalar, self.fit_parm[self.fit_bool],
-                            args=(self.x,), full_output=1, disp=disp)
+        # Fit several times and stop early once the change in the
+        # residuals is small enough (heuristic approach).
+        nfits = 5
+        diff = np.inf
+        parmsinit = Fit.lmfitparm2array(params)
+        for ii in range(nfits):
+            res0 = self.fit_function(params, self.x, self.y)
+            result = lmfit.minimize(fcn=self.fit_function,
+                                    params=params,
+                                    method=method,
+                                    kws={"x":self.x,
+                                            "y":self.y,
+                                            "weights":self.fit_weights},
+                                    **methodkwargs
+                                    )
+            params = result.params
+            res1 = self.fit_function(params, self.x, self.y)
+            diff = np.average(np.abs(res0-res1))
+
+            if hasattr(result, "ier") and not result.errorbars and ii+1 < nfits:
+                # This case applies to the Levenberg-Marquardt algorithm
+                multby = .5
+                # Try to vary stuck fitting parameters
+                # the result from the previous fit
+                parmsres = Fit.lmfitparm2array(params)
+                # the parameters that are varied during fitting
+                parmsbool = Fit.lmfitparm2array(params, attribute="vary")
+                # The parameters that are stuck
+                parmstuck = parmsbool * (parmsinit==parmsres)
+                parmsres[parmstuck] *= multby
+                # write changes
+                self.fit_parm = parmsres
+                params = self.get_lmfitparm()
+                warnings.warn(u"PyCorrFit detected stuck parameters "+\
+                              u"during fitting, multiplied them "+\
+                              u"by {}, and fitted again. ".format(multby)+\
+                              u"The stuck parameters are: {}".format(
+                                np.array(self.fit_parm_names)[parmstuck]))
+            elif diff < 1e-8:
+                # Experience tells us this is good enough.
+                break
 
-        # The optimal parameters
-        parmoptim = res[0]
         # Now write the optimal parameters to our values:
-        index = 0
-        for i in range(len(self.fit_parm)):
-            if self.fit_bool[i]:
-                self.fit_parm[i] = parmoptim[index]
-                index = index + 1
+        self.fit_parm = Fit.lmfitparm2array(params)
         # Only allow physically correct parameters
         self.fit_parm = self.check_parms(self.fit_parm)
-        # Write optimal parameters back to this class.
-        # Must be called after `self.fitparm = ...`
-        chi = self.chi_squared
         # Compute error estimates for fit (Only "Lev-Mar")
-        if self.fit_algorithm == "Lev-Mar":
+        if self.fit_algorithm == "Lev-Mar" and result.success:
             # This is the standard way to minimize the data. Therefore,
             # we are a little bit more verbose.
-            if res[4] not in [1,2,3,4]:
-                warnings.warn("Optimal parameters not found: " + res[3])
+            self.covar = result.covar
             try:
-                self.covar = res[1] * chi # The covariance matrix
+                self.parmoptim_error = np.diag(self.covar)
             except:
                 warnings.warn("PyCorrFit Warning: Error estimate not "+\
                               "possible, because we could not "+\
@@ -1169,22 +1396,19 @@ class Fit(object):
                               "try reducing the number of fitting "+\
                               "parameters.")
                 self.parmoptim_error = None
-            else:
-                # Error estimation of fitted parameters
-                if self.covar is not None:
-                    self.parmoptim_error = np.diag(self.covar)
         else:
             self.parmoptim_error = None
 
 
+
 def GetAlgorithmStringList():
     """
         Get supported fitting algorithms as strings.
         Returns two key-sorted lists: the keys and their display strings.
     """
     A = Algorithms
-    out1 = list()
-    out2 = list()
+    out1 = []
+    out2 = []
     a = list(A.keys())
     a.sort()
     for key in a:
@@ -1201,21 +1425,33 @@ def GetAlgorithmStringList():
 Algorithms = dict()
 
 # the original one is the least squares fit "leastsq"
-Algorithms["Lev-Mar"] = [spopt.leastsq, 
-           "Levenberg-Marquardt"]
+Algorithms["Lev-Mar"] = ["leastsq", 
+                         "Levenberg-Marquardt",
+                         {"ftol" : 1.49012e-08,
+                          "xtol" : 1.49012e-08,
+                          }
+                        ]
 
 # simplex 
-Algorithms["Nelder-Mead"] = [spopt.fmin,
-           "Nelder-Mead (downhill simplex)"]
+Algorithms["Nelder-Mead"] = ["nelder",
+                             "Nelder-Mead (downhill simplex)",
+                             {}
+                             ]
 
 # quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno
-Algorithms["BFGS"] = [spopt.fmin_bfgs,
-           "BFGS (quasi-Newton)"]
+Algorithms["BFGS"] = ["lbfgsb",
+                      "BFGS (quasi-Newton)",
+                      {}
+                      ]
 
 # modified Powell-method
-Algorithms["Powell"] = [spopt.fmin_powell,
-           "modified Powell (conjugate direction)"]
+Algorithms["Powell"] = ["powell",
+                        "modified Powell (conjugate direction)",
+                        {}
+                        ]
 
 # nonlinear conjugate gradient method by Polak and Ribiere
-Algorithms["Polak-Ribiere"] = [spopt.fmin_cg,
-           "Polak-Ribiere (nonlinear conjugate gradient)"]
+Algorithms["SLSQP"] = ["slsqp",
+                       "Sequential Least Squares Programming",
+                       {}
+                       ]
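[editor's note] The restructured `Algorithms` registry maps each display key to a `[method, label, kwargs]` triple that `minimize` unpacks for `lmfit.minimize`. A small sketch of the lookup pattern (registry contents copied from the entries above, no lmfit needed; `get_algorithm_string_list` mirrors `GetAlgorithmStringList`):

```python
Algorithms = {
    "Lev-Mar": ["leastsq", "Levenberg-Marquardt",
                {"ftol": 1.49012e-08, "xtol": 1.49012e-08}],
    "Nelder-Mead": ["nelder", "Nelder-Mead (downhill simplex)", {}],
    "SLSQP": ["slsqp", "Sequential Least Squares Programming", {}],
}

def get_algorithm_string_list(algorithms):
    # key-sorted keys and their human-readable labels
    keys = sorted(algorithms.keys())
    return keys, [algorithms[k][1] for k in keys]

# unpack one entry the way Fit.minimize does
method, label, methodkwargs = Algorithms["Lev-Mar"]
print(method, methodkwargs["ftol"])
```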
diff --git a/pycorrfit/frontend.py b/pycorrfit/frontend.py
index b57c58e..4318059 100644
--- a/pycorrfit/frontend.py
+++ b/pycorrfit/frontend.py
@@ -1258,9 +1258,8 @@ class MyFrame(wx.Frame):
             return "abort"
         
         ## Create user dialog
-        wc = opf.session_wildcards
-        wcstring = "PyCorrFit session (*.pcfs)|*{};*{}".format(
-                                                           wc[0], wc[1])
+        wc = [ "*{}".format(w) for w in opf.session_wildcards ]
+        wcstring = "PyCorrFit session (*.pcfs)|{}".format(";".join(wc))
         if sessionfile is None:
             dlg = wx.FileDialog(self, "Open session file",
                                 self.dirname, "", wcstring, wx.FD_OPEN)
diff --git a/pycorrfit/models/MODEL_TIRF_1C.py b/pycorrfit/models/MODEL_TIRF_1C.py
index 52832cd..e11ee3b 100755
--- a/pycorrfit/models/MODEL_TIRF_1C.py
+++ b/pycorrfit/models/MODEL_TIRF_1C.py
@@ -239,13 +239,13 @@ model1 = dict()
 model1["Parameters"] = parms_6000
 model1["Definitions"] = m_twodsq6000
 model1["Supplements"] = MoreInfo_6000
-model1["Verification"] = lambda parms: np.abs(parms)
+model1["Boundaries"] = [[0, None]]*len(values_6000)
 
 model2 = dict()
 model2["Parameters"] = parms_6010
 model2["Definitions"] = m_3dtirsq6010
 model2["Supplements"] = MoreInfo_6010
-model2["Verification"] = lambda parms: np.abs(parms)
+model2["Boundaries"] = [[0, None]]*len(values_6010)
 
 
 Modelarray = [model1, model2]
diff --git a/pycorrfit/models/MODEL_TIRF_2D2D.py b/pycorrfit/models/MODEL_TIRF_2D2D.py
index 3219031..696b5d7 100755
--- a/pycorrfit/models/MODEL_TIRF_2D2D.py
+++ b/pycorrfit/models/MODEL_TIRF_2D2D.py
@@ -21,7 +21,7 @@ def wixi(x):
     
     return np.real_if_close(wixi)
 
-
+# model 6022
 # 2D + 2D no binding TIRF
 def CF_Gxy_TIR_square_2d2d(parms, tau, wixi=wixi):
     u""" Two-component two-dimensional diffusion with a square-shaped
@@ -87,8 +87,8 @@ labels_6022 = [ u"D"+u"\u2081"+u" [10 µm²/s]",
                 u"\u03b1"+" (q"+u"\u2082"+"/q"+u"\u2081"+")"
                 ]
 values_6022 = [
-                0.90,     # D_2D₁ [10 µm²/s]
-                0.01,    # D_2D₂ [10 µm²/s]
+                0.01,    # D_2D₁ [10 µm²/s]
+                0.90,    # D_2D₂ [10 µm²/s]
                 2.3,     # σ [100 nm]
                 7.50,    # a [100 nm]
                 0.01,    # conc.2D₁ [100 /µm²]
@@ -120,9 +120,13 @@ parms_6022 = [labels_6022, values_6022, valuestofit_6022,
               labels_human_readable_6022, values_factor_human_readable_6022]
 
 
+boundaries = [ [0, np.inf] ] * len(values_6022)
+
+
 model1 = dict()
 model1["Parameters"] = parms_6022
 model1["Definitions"] = m_tir_2d_2d_mix_6022
-model1["Verification"] = lambda parms: np.abs(parms)
+model1["Boundaries"] = boundaries
+model1["Constraints"] = [[1, ">", 0]]
 
 Modelarray = [model1]
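[editor's note] The `[[0, np.inf]] * len(values)` idiom used for `model1["Boundaries"]` repeats references to a single two-element list. A quick sketch of why that is safe here (the `get_boundaries_*` helpers rebind entries, never mutate them in place); `values` is a shortened hypothetical list, not the real `values_6022`:

```python
import numpy as np

values = [0.01, 0.90, 2.3]              # hypothetical stand-in for values_6022
boundaries = [[0, np.inf]] * len(values)

# Rebinding one slot (as the boundary helpers above do) is safe:
boundaries[1] = [0, .9999999999999]
print(boundaries[0])        # still [0, inf]

# In-place mutation would NOT be: the remaining slots share one list.
boundaries[0][0] = -1
print(boundaries[2][0])     # changed as well
```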
diff --git a/pycorrfit/models/MODEL_TIRF_3D2D.py b/pycorrfit/models/MODEL_TIRF_3D2D.py
index 5055d75..bbf1d93 100755
--- a/pycorrfit/models/MODEL_TIRF_3D2D.py
+++ b/pycorrfit/models/MODEL_TIRF_3D2D.py
@@ -23,6 +23,7 @@ def wixi(x):
 
 
 # 3D + 2D no binding TIRF
+# model 6020
 def CF_Gxyz_TIR_square_3d2d(parms, tau, wixi=wixi):
     u""" Two-component two- and three-dimensional diffusion
         with a square-shaped lateral detection area taking into account
@@ -96,7 +97,7 @@ labels_6020 = [u"D_3D [10 µm²/s]",
                u"\u03b1"+" (q3D/q2D)"
                 ]
 values_6020 = [
-                50.0,     # D_3D [10 µm²/s]
+                50.0,    # D_3D [10 µm²/s]
                 0.81,    # D_2D [10 µm²/s]
                 2.3,     # σ [100 nm]
                 7.50,    # a [100 nm]
@@ -135,6 +136,7 @@ parms_6020 = [labels_6020, values_6020, valuestofit_6020,
 model1 = dict()
 model1["Parameters"] = parms_6020
 model1["Definitions"] = m_tir_3d_2d_mix_6020
-model1["Verification"] = lambda parms: np.abs(parms)
+model1["Boundaries"] = [[0, np.inf]]*len(values_6020)
+model1["Constraints"] = [[1, "<", 0]]
 
 Modelarray = [model1]
diff --git a/pycorrfit/models/MODEL_TIRF_3D2Dkin_Ries.py b/pycorrfit/models/MODEL_TIRF_3D2Dkin_Ries.py
index 9ea2697..56350e9 100755
--- a/pycorrfit/models/MODEL_TIRF_3D2Dkin_Ries.py
+++ b/pycorrfit/models/MODEL_TIRF_3D2Dkin_Ries.py
@@ -392,11 +392,12 @@ values_factor_human_readable_6021 = [10, # "D_3D [µm²/s]",
 parms_6021 = [labels_6021, values_6021, valuestofit_6021,
               labels_human_readable_6021, values_factor_human_readable_6021]
 
+boundaries = [ [0, np.inf] ] * len(values_6021)
 
 model1 = dict()
 model1["Parameters"] = parms_6021
 model1["Definitions"] = m_tir_3d_2d_ubib6021
-model1["Verification"] = lambda parms: np.abs(parms)
-
+model1["Boundaries"] = boundaries
+model1["Constraints"] = [[1, "<", 0]]
 
 Modelarray = [model1]
diff --git a/pycorrfit/models/MODEL_TIRF_3D3D.py b/pycorrfit/models/MODEL_TIRF_3D3D.py
index 2a9fb58..0f9fda0 100755
--- a/pycorrfit/models/MODEL_TIRF_3D3D.py
+++ b/pycorrfit/models/MODEL_TIRF_3D3D.py
@@ -23,6 +23,7 @@ def wixi(x):
 
 
 # 3D + 3D no binding TIRF
+# model 6023
 def CF_Gxyz_TIR_square_3d3d(parms, tau, wixi=wixi):
     u""" Two-component three-dimensional free diffusion
         with a square-shaped lateral detection area taking into account
@@ -102,14 +103,14 @@ labels_6023 = [u"D"+u"\u2081"+u" [10 µm²/s]",
                u"\u03b1"+" (q"+u"\u2082"+"/q"+u"\u2081"+")"
                 ]
 values_6023 = [
-                9.0,     # D_3D₁ [10 µm²/s]
-                0.5,    # D_3D₂ [10 µm²/s]
+                0.9,     # D_3D₁ [10 µm²/s]
+                9.0,     # D_3D₂ [10 µm²/s]
                 2.3,     # σ [100 nm]
                 7.50,    # a [100 nm]
                 1.0,     # d_eva [100 nm]
                 0.01,    # conc.3D₁ [1000 /µm³]
                 0.03,    # conc.3D₂ [1000 /µm³]
-                1       # alpha
+                1        # alpha
                 ]        
 # For user comfort we add values that are human readable.
 # These will be used for output that only humans can read.
@@ -140,7 +141,7 @@ parms_6023 = [labels_6023, values_6023, valuestofit_6023,
 model1 = dict()
 model1["Parameters"] = parms_6023
 model1["Definitions"] = m_tir_3d_3d_mix_6023
-model1["Verification"] = lambda parms: np.abs(parms)
-
+model1["Boundaries"] = [[0, np.inf]] * len(values_6023)
+model1["Constraints"] = [[1, ">", 0]]
 
 Modelarray = [model1]
diff --git a/pycorrfit/models/MODEL_TIRF_gaussian_1C.py b/pycorrfit/models/MODEL_TIRF_gaussian_1C.py
index 7f30460..0195190 100755
--- a/pycorrfit/models/MODEL_TIRF_gaussian_1C.py
+++ b/pycorrfit/models/MODEL_TIRF_gaussian_1C.py
@@ -202,6 +202,18 @@ def MoreInfo_6014(parms, countrate=None):
     return Info
 
 
+def get_boundaries_6014(parms):
+    # strictly positive
+    boundaries = [[0, None]]*len(parms)
+    boundaries[5] = [0, 1]
+    return boundaries
+
+def get_boundaries_6013(parms):
+    # strictly positive
+    boundaries = [[0, None]]*len(parms)
+    return boundaries
+
+
 # 3D Model TIR gaussian
 m_3dtirsq6013 = [6013, "3D","Simple 3D diffusion w/ TIR",
                  CF_Gxyz_TIR_gauss]
@@ -232,7 +244,7 @@ model1 = dict()
 model1["Parameters"] = parms_6013
 model1["Definitions"] = m_3dtirsq6013
 model1["Supplements"] = MoreInfo_6013
-model1["Verification"] = lambda parms: np.abs(parms)
+model1["Boundaries"] = get_boundaries_6013(values_6013)
 
 
 # 3D Model TIR gaussian + triplet
@@ -271,7 +283,7 @@ model2 = dict()
 model2["Parameters"] = parms_6014
 model2["Definitions"] = m_3dtirsq6014
 model2["Supplements"] = MoreInfo_6014
-model2["Verification"] = lambda parms: np.abs(parms)
+model2["Boundaries"] = get_boundaries_6014(values_6014)
 
 
 Modelarray = [model1, model2]
diff --git a/pycorrfit/models/MODEL_TIRF_gaussian_3D2D.py b/pycorrfit/models/MODEL_TIRF_gaussian_3D2D.py
index e1694b2..c8d6288 100755
--- a/pycorrfit/models/MODEL_TIRF_gaussian_3D2D.py
+++ b/pycorrfit/models/MODEL_TIRF_gaussian_3D2D.py
@@ -24,6 +24,7 @@ def wixi(x):
     return np.real_if_close(wixi)
 
 # 3D + 2D + T
+# model 6033
 def CF_Gxyz_3d2dT_gauss(parms, tau):
     u""" Two-component, two- and three-dimensional diffusion
         with a Gaussian lateral detection profile and
@@ -106,34 +107,15 @@ def CF_Gxyz_3d2dT_gauss(parms, tau):
     return G + off
 
 
-def Checkme(parms):
-    parms[0] = np.abs(parms[0])
-    parms[1] = np.abs(parms[1]) # = D2D
-    parms[2] = np.abs(parms[2]) # = D3D
-    F=parms[3]
-    parms[4] = np.abs(parms[4]) # = r0
-    parms[5]=np.abs(parms[5])
-    parms[6]=np.abs(parms[6])
-    tautrip=np.abs(parms[7])
-    T=parms[8]
-    #off=parms[9]
-
-    #taud2D = r0**2/(4*D2D)
-    #taud3D = r0**2/(4*D3D)
-    # We are not doing this anymore (Issue #2).
-    ## Force triplet component to be smaller than diffusion times
-    ## tautrip = min(tautrip,taud2D*0.9, taud3D*0.9)
-    
-    # Triplet fraction is between 0 and one. T may not be one!
-    T = (0.<=T<1.)*T + .99999999999999*(T>=1)
-    # Fraction of molecules may also be one
-    F = (0.<=F<=1.)*F + 1.*(F>1)
-
-    parms[3] = F
-    parms[7] = tautrip
-    parms[8] = T
-
-    return parms
+def get_boundaries(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    # F
+    boundaries[3] = [0, .9999999999999]
+    # T
+    boundaries[8] = [0, .9999999999999]
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
 
 
 def MoreInfo(parms, countrate=None):
@@ -217,7 +199,7 @@ labelshr  = [u"n",
              u"F_3D", 
              u"r₀ [nm]",
              u"d_eva [nm]",
-             u"\u03b1"+" (q_3D/q_2D)", 
+             u"\u03b1"+u" (q_3D/q_2D)", 
              u"τ_trip [µs]",
              u"T",
              u"offset"
@@ -242,7 +224,8 @@ parms = [labels, values, valuestofit, labelshr, valueshr]
 model1 = dict()
 model1["Parameters"] = parms
 model1["Definitions"] = m_gauss_3d_2d_t
-model1["Verification"] = Checkme
+model1["Boundaries"] = get_boundaries(values)
 model1["Supplements"] = MoreInfo
+model1["Constraints"] = [[2, ">", 1]]
 
 Modelarray = [model1]
diff --git a/pycorrfit/models/MODEL_TIRF_gaussian_3D3D.py b/pycorrfit/models/MODEL_TIRF_gaussian_3D3D.py
index 2a5684a..a6768ac 100755
--- a/pycorrfit/models/MODEL_TIRF_gaussian_3D3D.py
+++ b/pycorrfit/models/MODEL_TIRF_gaussian_3D3D.py
@@ -24,6 +24,7 @@ def wixi(x):
     return np.real_if_close(wixi)  
 
 # 3D + 3D + T
+# model 6034
 def CF_Gxyz_3D3DT_gauss(parms, tau):
     u""" Two-component three-dimensional diffusion with a Gaussian
         lateral detection profile and an exponentially decaying profile
@@ -123,34 +124,16 @@ def CF_Gxyz_3D3DT_gauss(parms, tau):
     return G + off
 
 
-def Checkme(parms):
-    parms[0] = np.abs(parms[0])
-    parms[1] = np.abs(parms[1]) # = D1
-    parms[2] = np.abs(parms[2]) # = D2
-    F=parms[3]
-    parms[4] = np.abs(parms[4]) # = r0
-    parms[5]=np.abs(parms[5])
-    parms[6]=np.abs(parms[6])
-    tautrip=np.abs(parms[7])
-    T=parms[8]
-    #off=parms[9]
-
-    # REMOVED (issue #2)
-    ## Force triplet component to be smaller than diffusion times
-    #tauD2 = r0**2/(4*D2)
-    #tauD1 = r0**2/(4*D1)
-    #tautrip = min(tautrip,tauD2*0.9, tauD1*0.9)
-
-    # Triplet fraction is between 0 and one. T may not be one!
-    T = (0.<=T<1.)*T + .99999999999999*(T>=1)
-    # Fraction of molecules may also be one
-    F = (0.<=F<=1.)*F + 1.*(F>1)
-
-    parms[3] = F
-    parms[7] = tautrip
-    parms[8] = T
-
-    return parms
+def get_boundaries(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    # F
+    boundaries[3] = [0, .9999999999999]
+    # T
+    boundaries[8] = [0, .9999999999999]
+    # offset
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
 
 
 def MoreInfo(parms, countrate=None):
@@ -217,14 +200,14 @@ labels  = [u"n",
                 ]
 values = [ 
                 25,      # n
-                25.,       # D1
-                0.9,    # D2
+                0.9,     # D1
+                25.0,    # D2
                 0.5,     # F1
-                9.44,       # r0
-                1.0,       # deva
+                9.44,    # r0
+                1.0,     # deva
                 1.0,     # alpha
-                0.001,       # tautrip
-                0.01,       # T
+                0.001,   # tautrip
+                0.01,    # T
                 0.0      # offset
                 ]    
 # Human readable stuff
@@ -259,7 +242,8 @@ parms = [labels, values, valuestofit, labelshr, valueshr]
 model1 = dict()
 model1["Parameters"] = parms
 model1["Definitions"] = m_gauss_3d_3d_t
-model1["Verification"] = Checkme
+model1["Boundaries"] = get_boundaries(values)
 model1["Supplements"] = MoreInfo
+model1["Constraints"] = [[2, ">", 1]]
 
 Modelarray = [model1]
diff --git a/pycorrfit/models/MODEL_classic_gaussian_2D.py b/pycorrfit/models/MODEL_classic_gaussian_2D.py
index 58b6a47..04c9b37 100755
--- a/pycorrfit/models/MODEL_classic_gaussian_2D.py
+++ b/pycorrfit/models/MODEL_classic_gaussian_2D.py
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-This file contains some simple 2D models for confocal microscopy
+This file contains simple 2D models for confocal FCS.
 """
 from __future__ import division
 
@@ -33,11 +33,12 @@ def CF_Gxy_gauss(parms, tau):
     G = dc + 1/n * BB
     return G
 
-def Check_xy_gauss(parms):
-    parms[0] = np.abs(parms[0])
-    parms[1] = np.abs(parms[1])
 
-    return parms
+def get_boundaries_xy_gauss(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
 
 
 # 2D simple gauss
@@ -82,21 +83,13 @@ def CF_Gxy_T_gauss(parms, tau):
     return G
 
 
-
-def Check_xy_T_gauss(parms):
-    parms[0] = np.abs(parms[0])
-    parms[1] = np.abs(parms[1]) # = taudiff
-    tautrip = np.abs(parms[2])
-    T=parms[3]
-
-
-    # Triplet fraction is between 0 and one. T may not be one!
-    T = (0.<=T<1.)*T + .99999999999999*(T>=1)
-
-    parms[2] = tautrip
-    parms[3] = T
-    return parms
-
+def get_boundaries_xy_T_gauss(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    # F
+    boundaries[3] = [0,.9999999999999]
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
 
 
 # 2D + 2D + Triplet Gauß
@@ -150,27 +143,15 @@ def CF_Gxyz_gauss_2D2DT(parms, tau):
     G = 1/n*(particle1 + particle2)*triplet/norm + off
     return G
 
-def Check_6031(parms):
-    parms[0] = np.abs(parms[0])
-    parms[1] = np.abs(parms[1]) # = taud1
-    parms[2] = np.abs(parms[2]) # = taud2
-    F=parms[3]
-    parms[4] = np.abs(parms[4])
-    tautrip = np.abs(parms[5])
-    T=parms[6]
-    #off=parms[7]
-    
-     
-    # Triplet fraction is between 0 and one. T may not be one!
-    T = (0.<=T<1.)*T + .99999999999999*(T>=1)
-    # Fraction of molecules may also be one
-    F = (0.<=F<=1.)*F + 1.*(F>1)
-
-    parms[3] = F
-    parms[5] = tautrip
-    parms[6] = T
-
-    return parms
+def get_boundaries_6031(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    # F
+    boundaries[3] = [0,.9999999999999]
+    # T
+    boundaries[6] = [0,.9999999999999]
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
 
 
 def MoreInfo_6001(parms, countrate=None):
@@ -288,19 +269,20 @@ model1 = dict()
 model1["Parameters"] = parms_6001
 model1["Definitions"] = m_twodga6001
 model1["Supplements"] = MoreInfo_6001
-model1["Verification"] = Check_xy_gauss
+model1["Boundaries"] = get_boundaries_xy_gauss(values_6001)
 
 model2 = dict()
 model2["Parameters"] = parms_6002
 model2["Definitions"] = m_twodga6002
 model2["Supplements"] = MoreInfo_6001
-model2["Verification"] = Check_xy_T_gauss
+model2["Boundaries"] = get_boundaries_xy_T_gauss(values_6002)
+model2["Constraints"] = [[2, "<", 1]]   # triplet time < diffusion time
 
 model3 = dict()
 model3["Parameters"] = parms_6031
 model3["Definitions"] = m_gauss_2d_2d_t_mix_6031
 model3["Supplements"] = MoreInfo_6031
-model3["Verification"] = Check_6031
-
+model3["Boundaries"] = get_boundaries_6031(values_6031)
+model3["Constraints"] = [[2, ">", 1], [5, "<", 1]]   # triplet time < diffusion time
 
 Modelarray = [model1, model2, model3]
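[editor's note] The constraint triples added above follow the convention `[i, op, j]`, meaning `parms[i] op parms[j]` must hold, e.g. `[2, ">", 1]` keeps the second diffusion time larger than the first. A hypothetical checker (not PyCorrFit code) that makes the convention explicit:

```python
def satisfies(parms, constraint):
    # constraint = [i, op, j] meaning: parms[i] op parms[j]
    i, op, j = constraint
    if op == ">":
        return parms[i] > parms[j]
    elif op == "<":
        return parms[i] < parms[j]
    raise NotImplementedError("Only '<' and '>' are allowed constraints!")

parms = [25, 0.1, 1.0, 0.5]  # hypothetical values: [n, taud1, taud2, F]
print(satisfies(parms, [2, ">", 1]))  # taud2 > taud1
print(satisfies(parms, [2, "<", 1]))
```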
diff --git a/pycorrfit/models/MODEL_classic_gaussian_3D.py b/pycorrfit/models/MODEL_classic_gaussian_3D.py
index 163ee2e..c66d9d2 100755
--- a/pycorrfit/models/MODEL_classic_gaussian_3D.py
+++ b/pycorrfit/models/MODEL_classic_gaussian_3D.py
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-This file contains TIR one component models + triplet
+This file contains 3D models for confocal FCS.
 """
 from __future__ import division
 
@@ -88,21 +88,13 @@ def CF_Gxyz_blink(parms, tau):
     return G
 
 
-def Check_6011(parms):
-    parms[0] = np.abs(parms[0])
-    T = parms[1]
-    tautrip = np.abs(parms[2])
-    parms[3] = np.abs(parms[3])# = taudiff
-    parms[4] = np.abs(parms[4])
-    #off = parms[5]
-    
-    # Triplet fraction is between 0 and one.
-    T = (0.<=T<1.)*T + .999999999*(T>=1)
-
-    parms[1] = T
-    parms[2] = tautrip
-
-    return parms
+def get_boundaries_6011(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    # T
+    boundaries[1] = [0,.9999999999999]
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
 
 
 # 3D + 3D + Triplet Gauß
@@ -160,28 +152,15 @@ def CF_Gxyz_gauss_3D3DT(parms, tau):
     G = 1/n*(particle1 + particle2)*triplet/norm + off
     return G
 
-def Check_3D3DT(parms):
-    parms[0] = np.abs(parms[0])
-    parms[1] = np.abs(parms[1]) # = taud1
-    parms[2] = np.abs(parms[2]) # = taud2
-    F=parms[3]
-    parms[4]=np.abs(parms[4])
-    parms[5]=np.abs(parms[5])
-    tautrip=np.abs(parms[6])
-    T=parms[7]
-    #off=parms[8]
-    
-    
-    # Triplet fraction is between 0 and one. T may not be one!
-    T = (0.<=T<1.)*T + .99999999999999*(T>=1)
-    # Fraction of molecules may also be one
-    F = (0.<=F<=1.)*F + 1.*(F>1)
-
-    parms[3] = F
-    parms[6] = tautrip
-    parms[7] = T
-
-    return parms
+def get_boundaries_3D3DT(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    # F
+    boundaries[3] = [0,.9999999999999]
+    # T
+    boundaries[7] = [0,.9999999999999]
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
 
 
 def MoreInfo_1C(parms, countrate=None):
@@ -305,7 +284,8 @@ model1 = dict()
 model1["Parameters"] = parms_6011
 model1["Definitions"] = m_3dblink6011
 model1["Supplements"] = MoreInfo_1C
-model1["Verification"] = Check_6011
+model1["Boundaries"] = get_boundaries_6011(values_6011)
+model1["Constraints"] = [[3, ">", 2]]   # triplet time < diffusion time
 
 model2 = dict()
 model2["Parameters"] = parms_6012
@@ -316,7 +296,7 @@ model3 = dict()
 model3["Parameters"] = parms_6030
 model3["Definitions"] = m_gauss_3d_3d_t_mix_6030
 model3["Supplements"] = MoreInfo_6030
-model3["Verification"] = Check_3D3DT
-
+model3["Boundaries"] = get_boundaries_3D3DT(values_6030)
+model3["Constraints"] = [[2, ">", 1], [6, "<", 1]]
 
 Modelarray = [model1, model2, model3]
diff --git a/pycorrfit/models/MODEL_classic_gaussian_3D2D.py b/pycorrfit/models/MODEL_classic_gaussian_3D2D.py
index 742d5f6..4dda648 100755
--- a/pycorrfit/models/MODEL_classic_gaussian_3D2D.py
+++ b/pycorrfit/models/MODEL_classic_gaussian_3D2D.py
@@ -1,22 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-    PyCorrFit
-    This file contains a 3D+2D+T confocal FCS model.
-
-    Copyright (C) 2011-2012  Paul Müller
-
-    This program is free software; you can redistribute it and/or modify
-    it under the terms of the GNU General Public License as published by
-    the Free Software Foundation; either version 2 of the License, or
-    (at your option) any later version.
-
-    This program is distributed in the hope that it will be useful,
-    but WITHOUT ANY WARRANTY; without even the implied warranty of
-    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-    GNU General Public License for more details.
-
-    You should have received a copy of the GNU General Public License 
-    along with this program. If not, see <http://www.gnu.org/licenses/>.
+This file contains a T+3D+2D model for confocal FCS.
 """
 from __future__ import division
 
@@ -76,26 +60,15 @@ def CF_Gxyz_3d2dT_gauss(parms, tau):
 
     return G + off
 
-def Checkme(parms):
-    parms[0] = np.abs(parms[0])
-    parms[1] = np.abs(parms[1]) #= taud2D
-    parms[2] = np.abs(parms[2]) #= taud3D
-    F=parms[3]
-    parms[4]=np.abs(parms[4])
-    parms[5]=np.abs(parms[5])
-    tautrip=np.abs(parms[6])
-    T=parms[7]
-    #off=parms[8]
-    # Triplet fraction is between 0 and one. T may not be one!
-    T = (0.<=T<1.)*T + .99999999999999*(T>=1)
-    # Fraction of molecules may also be one
-    F = (0.<=F<=1.)*F + 1.*(F>1)
-
-    parms[3] = F
-    parms[6] = tautrip
-    parms[7] = T
-
-    return parms
+def get_boundaries(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    # F
+    boundaries[3] = [0,.9999999999999]
+    # T
+    boundaries[7] = [0,.9999999999999]
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
 
 
 def MoreInfo(parms, countrate=None):
@@ -176,7 +149,8 @@ parms = [labels, values, valuestofit,
 model1 = dict()
 model1["Parameters"] = parms
 model1["Definitions"] = m_gauss_3d_2d_t
-model1["Verification"] = Checkme
+model1["Boundaries"] = get_boundaries(values)
 model1["Supplements"] = MoreInfo
+model1["Constraints"] = [[2, "<", 1], [6, "<", 2]]
 
 Modelarray = [model1]
diff --git a/pycorrfit/models/MODEL_classic_gaussian_TT3D3D.py b/pycorrfit/models/MODEL_classic_gaussian_TT3D3D.py
new file mode 100644
index 0000000..3211c19
--- /dev/null
+++ b/pycorrfit/models/MODEL_classic_gaussian_TT3D3D.py
@@ -0,0 +1,178 @@
+# -*- coding: utf-8 -*-
+"""
+This file contains a T+T+3D+3D model for confocal FCS.
+"""
+from __future__ import division
+
+import numpy as np
+
+# 3D + 3D + Triplet Gauß
+# Model 6043
+def CF_Gxyz_gauss_3D3DTT(parms, tau):
+    u""" Two-component three-dimensional free diffusion
+        with a Gaussian laser profile, including two triplet components.
+        Each triplet factor takes a blinking term into account.
+        Set *T* or *τ_trip* to 0 if no triplet component is wanted.
+
+        particle1 = F₁/( (1+τ/τ₁) * sqrt(1+τ/(τ₁*SP²)))
+        particle2 = α²*(1-F₁)/( (1+τ/τ₂) * sqrt(1+τ/(τ₂*SP²)))
+        triplet1 = 1 + T₁/(1-T₁)*exp(-τ/τ_trip₁)
+        triplet2 = 1 + T₂/(1-T₂)*exp(-τ/τ_trip₂)
+        norm = (F₁ + α*(1-F₁))²
+        G = 1/n*(particle1 + particle2)*triplet1*triplet2/norm + offset
+
+        *parms* - a list of parameters.
+        Parameters (parms[i]):
+        [0]  n        Effective number of particles in confocal volume
+                      (n = n₁+n₂)
+        [1]  τ₁       Diffusion time of particle species 1
+        [2]  τ₂       Diffusion time of particle species 2
+        [3]  F₁       Fraction of molecules of species 1 (n₁ = n*F₁)
+                      0 <= F₁ <= 1
+        [4]  SP       SP=z₀/r₀, Structural parameter,
+                      describes elongation of the confocal volume
+        [5]  α        Relative molecular brightness of particle
+                      2 compared to particle 1 (α = q₂/q₁)
+        [6]  τ_trip₁  Characteristic residence time in triplet state
+        [7]  T₁       Fraction of particles in triplet (non-fluorescent) state
+                      0 <= T < 1
+        [8]  τ_trip₂  Characteristic residence time in triplet state
+        [9]  T₂       Fraction of particles in triplet (non-fluorescent) state
+                      0 <= T < 1
+        [10] offset
+        *tau* - lag time
+    """
+    n=parms[0]
+    taud1=parms[1]
+    taud2=parms[2]
+    F=parms[3]
+    SP=parms[4]
+    alpha=parms[5]
+    tautrip1=parms[6]
+    T1=parms[7]
+    tautrip2=parms[8]
+    T2=parms[9]
+    off=parms[10]
+
+    particle1 = F/( (1+tau/taud1) * np.sqrt(1+tau/(taud1*SP**2)))
+    particle2 = alpha**2*(1-F)/( (1+tau/taud2) * np.sqrt(1+tau/(taud2*SP**2)))
+    # If the fraction of dark molecules is zero.
+    if tautrip1 == 0 or T1 == 0:
+        triplet1 = 1
+    else:
+        triplet1 = 1 + T1/(1-T1) * np.exp(-tau/tautrip1)
+    if tautrip2 == 0 or T2 == 0:
+        triplet2 = 1
+    else:
+        triplet2 = 1 + T2/(1-T2) * np.exp(-tau/tautrip2)
+    # For alpha == 1, *norm* becomes one
+    norm = (F + alpha*(1-F))**2
+
+    G = 1/n*(particle1 + particle2)*triplet1*triplet2/norm + off
+    return G
+
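+ [editor's note] As a sanity check: with F₁ = 0.5, α = 1 and offset 0, the model above reduces at τ → 0 to G(0) = 1/(n·(1−T₁)·(1−T₂)), since norm = 1 and each triplet factor becomes 1/(1−T). A stand-alone sketch re-implementing the same formula (duplicated here so it runs without the pycorrfit package, with `values_6043` defaults):

```python
import numpy as np

def gauss_3d3d_tt(parms, tau):
    # same math as CF_Gxyz_gauss_3D3DTT above
    n, taud1, taud2, F, SP, alpha, tt1, T1, tt2, T2, off = parms
    p1 = F / ((1 + tau/taud1) * np.sqrt(1 + tau/(taud1*SP**2)))
    p2 = alpha**2 * (1-F) / ((1 + tau/taud2) * np.sqrt(1 + tau/(taud2*SP**2)))
    trip1 = 1 if (tt1 == 0 or T1 == 0) else 1 + T1/(1-T1)*np.exp(-tau/tt1)
    trip2 = 1 if (tt2 == 0 or T2 == 0) else 1 + T2/(1-T2)*np.exp(-tau/tt2)
    norm = (F + alpha*(1-F))**2
    return (p1 + p2) * trip1 * trip2 / (n * norm) + off

parms = [25, 5, 1000, 0.5, 5, 1.0, 0.002, 0.01, 0.001, 0.01, 0.0]
g0 = gauss_3d3d_tt(parms, 0.0)
print(g0)  # approx. 1/(25 * 0.99 * 0.99)
```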
+def get_boundaries_3D3DTT(parms):
+    # strictly positive
+    boundaries = [[0, np.inf]]*len(parms)
+    # F
+    boundaries[3] = [0, 0.9999999999999]
+    # T
+    boundaries[7] = [0, 0.9999999999999]
+    boundaries[9] = [0, 0.9999999999999]
+    boundaries[-1] = [-np.inf, np.inf]
+    return boundaries
+
+
+m_gauss_3d_3d_t_t_mix_6043 = [6043, "T+T+3D+3D",
+                            u"Separate 3D diffusion + two triplet, Gauß",
+                            CF_Gxyz_gauss_3D3DTT]
+
+labels_6043  = [u"n",
+                u"τ"+u"\u2081"+" [ms]",
+                u"τ"+u"\u2082"+" [ms]",
+                u"F"+u"\u2081", 
+                u"SP",
+                u"\u03b1"+" (q"+u"\u2082"+"/q"+u"\u2081"+")", 
+                u"τ_trip₁ [ms]",
+                u"T₁",
+                u"τ_trip₂ [ms]",
+                u"T₂",
+                u"offset"
+                ]
+
+labels_human_readable_6043  = [
+                        u"n",
+                        u"τ₁ [ms]",
+                        u"τ₂ [ms]",
+                        u"F₁", 
+                        u"SP",
+                        u"\u03b1"+u" (q₂/q₁)", 
+                        u"τ_trip₁ [µs]",
+                        u"T₁",
+                        u"τ_trip₂ [µs]",
+                        u"T₂",
+                        u"offset"
+                            ]
+
+values_6043 = [ 
+                25,      # n
+                5,       # taud1
+                1000,    # taud2
+                0.5,     # F
+                5,       # SP
+                1.0,     # alpha
+                0.002,   # tautrip1
+                0.01,    # T1
+                0.001,   # tautrip2
+                0.01,    # T2
+                0.0      # offset
+                ]   
+
+values_factor_human_readable_6043 = [
+                        1.,     # n
+                        1.,     # taud1
+                        1.,     # taud2
+                        1.,     # F
+                        1.,     # SP
+                        1.,     # alpha
+                        1000.,  # tautrip1 [µs]
+                        1.,     # T1
+                        1000.,  # tautrip2 [µs]
+                        1.,     # T2
+                        1.      # offset
+                ]
+
+
+valuestofit_6043 = [True, True, True, True, False, False, False, False, False, False, False]
+parms_6043 = [labels_6043, values_6043, valuestofit_6043,
+              labels_human_readable_6043, values_factor_human_readable_6043]
+
+
+def MoreInfo_6043(parms, countrate=None):
+    u"""Supplementary parameters:
+        [11] n₁ = n*F₁     Particle number of species 1
+        [12] n₂ = n*(1-F₁) Particle number of species 2
+    """
+    # Compute the particle numbers of the individual species
+    n = parms[0]
+    F1 = parms[3]
+    Info = list()
+    # The enumeration of these parameters is very important for
+    # plotting the normalized curve. Countrate must come out last!
+    Info.append([u"n\u2081", n*F1])
+    Info.append([u"n\u2082", n*(1.-F1)])
+    if countrate is not None:
+        # CPP
+        cpp = countrate/n
+        Info.append(["cpp [kHz]", cpp])
+    return Info
+
+
+model = dict()
+model["Parameters"] = parms_6043
+model["Definitions"] = m_gauss_3d_3d_t_t_mix_6043
+model["Supplements"] = MoreInfo_6043
+model["Boundaries"] = get_boundaries_3D3DTT(values_6043)
+model["Constraints"] = [[2, ">", 1], [6, "<", 1], [8, "<", 6]]
+
+Modelarray = [model]
\ No newline at end of file
diff --git a/pycorrfit/models/__init__.py b/pycorrfit/models/__init__.py
index 45ef025..916f52d 100644
--- a/pycorrfit/models/__init__.py
+++ b/pycorrfit/models/__init__.py
@@ -23,14 +23,16 @@ unit of inv. volume : 1000 /µm³
 # This file is necessary for this folder to become a module that can be 
 # imported from within Python/PyCorrFit.
 
-import numpy as np                  # NumPy
+import copy
+import numpy as np
 import sys
-
+import warnings
 
 ## Models
 from . import MODEL_classic_gaussian_2D
 from . import MODEL_classic_gaussian_3D
 from . import MODEL_classic_gaussian_3D2D
+from . import MODEL_classic_gaussian_TT3D3D
 from . import MODEL_TIRF_gaussian_1C
 from . import MODEL_TIRF_gaussian_3D2D
 from . import MODEL_TIRF_gaussian_3D3D
@@ -53,11 +55,26 @@ class Model(object):
         else:
             self._supplements = lambda x, y: []
 
-        if "Verification" in list(datadict.keys()):
-            self._verification = datadict["Verification"]
+        if "Boundaries" in list(datadict.keys()):
+            self._boundaries = datadict["Boundaries"]
         else:
             # dummy verification function
-            self._verification = lambda parms: parms
+            self._boundaries = [[None,None]]*len(self._parameters[1])
+        
+        if "Constraints" in list(datadict.keys()):
+            # sort constraints such that the first parameter index
+            # is always larger than the second.
+            newcc = []
+            for cc in datadict["Constraints"]:
+                if cc[0] < cc[2]:
+                    if cc[1] == ">":
+                        cc = [cc[2], "<", cc[0]]
+                    elif cc[1] == "<":
+                        cc = [cc[2], ">", cc[0]]
+                newcc.append(cc)
+            self._constraints = newcc
+        else:
+            self._constraints = []
 
     def __call__(self, parameters, tau):
         return self.function(parameters, tau)
@@ -80,6 +97,11 @@ class Model(object):
         return self.function(parameters, tau)
 
     @property
+    def constraints(self):
+        """ fitting constraints """
+        return copy.copy(self._constraints)
+
+    @property
     def components(self):
         """how many components does this model have"""
         return self._definitions[1]
@@ -118,7 +140,8 @@ class Model(object):
 
     @property
     def func_verification(self):
-        return self._verification
+        warnings.warn("`func_verification` is deprecated: please do not use it!")
+        return lambda x: x
     
     def get_supplementary_parameters(self, values, countrate=None):
         """
@@ -158,6 +181,10 @@ class Model(object):
     def parameters(self):
         return self._parameters
 
+    @property
+    def boundaries(self):
+        return self._boundaries
+
 
 def AppendNewModel(Modelarray):
     """ Append a new model from a modelarray. *Modelarray* has to be a list
@@ -171,7 +198,7 @@ def AppendNewModel(Modelarray):
     global models
     global modeldict
     global supplement
-    global verification
+    global boundaries
 
     for datadict in Modelarray:
         # We can have many models in one model array
@@ -187,7 +214,7 @@ def AppendNewModel(Modelarray):
         supplement[amod.id] = amod.func_supplements
 
         # Check functions - check for correct values
-        verification[amod.id] = amod.func_verification
+        boundaries[amod.id] = amod.boundaries
 
 
 def GetHumanReadableParms(model, parameters):
@@ -419,8 +446,8 @@ models = list()
 modeldict = dict()
 # A dictionary for supplementary data:
 supplement = dict()
-# A dictionary for checking for correct variables
-verification = dict()
+# A dictionary containing model boundaries
+boundaries = dict()
 
 
 # Load all models from the imported "MODEL_*" submodules
@@ -435,7 +462,7 @@ modeltypes = dict()
 #modeltypes[u"TIR (Gaussian/Exp.)"] = [6013, 6033, 6034]
 #modeltypes[u"TIR (□xσ/Exp.)"] = [6000, 6010, 6022, 6020, 6023, 6021]
 
-modeltypes[u"Confocal (Gaussian)"] = [6011, 6030, 6002, 6031, 6032]
+modeltypes[u"Confocal (Gaussian)"] = [6011, 6030, 6002, 6031, 6032, 6043]
 modeltypes[u"TIR (Gaussian/Exp.)"] = [6014, 6034, 6033]
 modeltypes[u"TIR (□xσ/Exp.)"] = [6010, 6023, 6000, 6022, 6020, 6021]
 modeltypes[u"User"] = list()
diff --git a/pycorrfit/openfile.py b/pycorrfit/openfile.py
index 170fa8d..9b797f1 100644
--- a/pycorrfit/openfile.py
+++ b/pycorrfit/openfile.py
@@ -311,7 +311,7 @@ def SaveSessionData(sessionfile, Infodict):
         # Range of fitting parameters
         Parms[idparm][9] = np.array(Parms[idparm][9],dtype="float").tolist()
         Parmlist.append(Parms[idparm])
-    yaml.dump(Parmlist, open(parmsfilename, "wb"))
+    yaml.safe_dump(Parmlist, open(parmsfilename, "wb"))
     Arc.write(parmsfilename)
     os.remove(os.path.join(tempdir, parmsfilename))
     # Supplementary data (errors of fit)
@@ -325,7 +325,7 @@ def SaveSessionData(sessionfile, Infodict):
         chi2 = Sups[idsup]["Chi sq"]
         globalshare = Sups[idsup]["Global Share"]
         Suplist.append([idsup, error, chi2, globalshare])
-    yaml.dump(Suplist, open(errsfilename, "wb"))
+    yaml.safe_dump(Suplist, open(errsfilename, "wb"))
     Arc.write(errsfilename)
     os.remove(os.path.join(tempdir, errsfilename))
     # Save external functions
@@ -637,7 +637,7 @@ def ExportCorrelation(exportfile, Page, info, savetrace=True):
         openedfile.close()
 
 
-session_wildcards = [".pcfs", ".pycorrfit-session.zip"]
+session_wildcards = [".pcfs", ".pycorrfit-session.zip", ".fcsfit-session.zip"]
 
 
 ReadmeCSV = """# This file was created using PyCorrFit version {}.
diff --git a/pycorrfit/page.py b/pycorrfit/page.py
index 94fcc9b..868a059 100644
--- a/pycorrfit/page.py
+++ b/pycorrfit/page.py
@@ -20,165 +20,7 @@ from . import models as mdls
 from . import tools
 from . import fcs_data_set as pcfbase
 from .fcs_data_set import Correlation, Fit
-
-
-def float2string_nsf(fval, n=7):
-    """
-    Truncate a float to n significant figures and return nice string.
-    Arguments:
-      q : a float
-      n : desired number of significant figures
-    Returns:
-    String with only n s.f. and trailing zeros.
-    """
-    #sgn=np.sign(fval)
-    if fval == 0:
-        npoint=n
-    else:
-        q=abs(fval)
-        k=int(np.ceil(np.log10(q/n)))
-        npoint = n-k
-    string="{:.{}f}".format(fval, npoint)
-    return string
-
-def nice_string(string):
-    """
-    Convert a string of a float created by `float2string_nsf`
-    to something nicer.
-    
-    i.e.
-    - 1.000000 -> 1
-    - 1.010000 -> 1.010
-    """
-    if len(string.split(".")[1].replace("0", "")) == 0:
-        return "{:d}".format(int(float(string)))
-    else:
-        olen = len(string)
-        newstring = string.rstrip("0")
-        if olen > len(newstring):
-            string=newstring+"0"
-        return string
-
-class PCFFloatValidator(wx.PyValidator):
-    def __init__(self, flag=None, pyVar=None):
-        wx.PyValidator.__init__(self)
-        self.flag = flag
-        self.Bind(wx.EVT_CHAR, self.OnChar)
-
-    def Clone(self):
-        return PCFFloatValidator(self.flag)
-
-    def Validate(self, win):
-        tc = self.GetWindow()
-        val = tc.GetValue()
-        
-        for x in val:
-            if x not in string.digits:
-                return False
-
-        return True
-
-    def OnChar(self, event):
-        """
-        Filter the characters that are put in the control.
-        
-        TODO:
-        - check for strings that do not make sense
-          - 2e-4.4
-          - 2e--3
-          - 3-1+5
-        """
-        key = event.GetKeyCode()
-        ctrl = event.GetEventObject()
-        # Get the actual string from the object
-        curval = wx.TextCtrl.GetValue(ctrl)
-
-        if key < wx.WXK_SPACE or key == wx.WXK_DELETE or key > 255:
-            event.Skip()
-            return
-
-        char = chr(key)
-        char = char.replace(",", ".")
-        
-        onlyonce = [".", "e"]
-        if char in onlyonce and curval.count(char):
-            # not allowed
-            return
-
-        if ( char in string.digits or
-             char in ["+", "-", ".", "e"]):
-            event.Skip()
-            return
-
-        if not wx.Validator_IsSilent():
-            wx.Bell()
-
-        # Returning without calling event.Skip eats the event before it
-        # gets to the text control
-        return
-
-
-class PCFFloatTextCtrl(wx.TextCtrl):
-    def __init__(self, *args, **kwargs):
-        wx.TextCtrl.__init__(self, *args, validator=PCFFloatValidator(), size=(110,-1),
-                             style=wx.TE_PROCESS_ENTER, **kwargs)
-        self.Bind(wx.EVT_ENTER_WINDOW, self.OnMouseEnter)
-        self.Bind(wx.EVT_LEAVE_WINDOW, self.OnMouseLeave)
-        self._PCFvalue = 0.0
-
-    def OnMouseEnter(self, e):
-        self.SetFocus()
-        self.SetSelection(-1,0)
-
-    def OnMouseLeave(self, e):
-        self.SetSelection(0,0)
-        self.SetInsertionPoint(0)
-
-    def SetValue(self, value):
-        self._PCFvalue = value
-        string = PCFFloatTextCtrl.float2string(value)
-        wx.TextCtrl.SetValue(self, string)
-    
-    def GetValue(self):
-        string = wx.TextCtrl.GetValue(self)
-        if string == PCFFloatTextCtrl.float2string(self._PCFvalue):
-            # use internal value: more accurate
-            #print("internal", self._PCFvalue)
-            return self._PCFvalue
-        else:
-            # new value
-            #print("external", string)
-            return PCFFloatTextCtrl.string2float(string)
-        
-    @staticmethod
-    def float2string(value):
-        """
-        inverse of string2float with some tweaks
-        """
-        value = float2string_nsf(value)
-        value = nice_string(value)
-        return value
-        
-    @staticmethod
-    def string2float(string):
-        """
-        Remove any characters that are not in
-        [+-{0-9},.] and show a decent float
-        value.
-        """
-        # allow comma
-        string = string.replace(",", ".")
-        # allow only one decimal point
-        string = string[::-1].replace(".", "", string.count(".")-1)[::-1]
-        try:
-            string = "{:.12f}".format(float(string))
-        except:
-            pass
-        # remove letters
-        string = re.sub(r'[^\d.-]+', '', string)
-        if len(string) == 0:
-            string = "0"
-        return float(string)
+from . import wxutils
 
 
 
@@ -497,7 +339,13 @@ class FittingPanel(wx.Panel):
         # Update spin-control values
         self.apply_parameters_reverse()
         # Plot everthing
-        self.PlotAll(trigger=trigger)
+        try:
+            self.PlotAll(trigger=trigger)
+        except OverflowError:
+            # Sometimes parameters are just bad and
+            # we still want the user to use the
+            # program.
+            warnings.warn("Could not plot canvas.") 
         # update displayed chi2
         self.updateChi2()
         # Return cursor to normal
@@ -554,7 +402,7 @@ class FittingPanel(wx.Panel):
             sizerh = wx.BoxSizer(wx.HORIZONTAL)
             checkbox = wx.CheckBox(self.panelsettings, label=label)
             # We needed to "from wx.lib.agw import floatspin" to get this:
-            spinctrl = PCFFloatTextCtrl(self.panelsettings)
+            spinctrl = wxutils.PCFFloatTextCtrl(self.panelsettings)
             sizerh.Add(spinctrl)
             sizerh.Add(checkbox)
             sizer.Add(sizerh)
@@ -825,6 +673,7 @@ class FittingPanel(wx.Panel):
             lines.append(linecorr)
             PlotCorr = plot.PlotGraphics(lines, 
                                 xLabel=u'lag time τ [ms]', yLabel=u'G(τ)')
+
             self.canvascorr.Draw(PlotCorr)
             ## Calculate residuals
             resid_norm = self.corr.residuals_plot
@@ -915,7 +764,7 @@ class FittingPanel(wx.Panel):
                     label="Background"))
         sizeint.Add(wx.StaticText(self.panelsettings, label="Ch1"))
         intlabel1 = wx.TextCtrl(self.panelsettings)
-        bgspin1 = PCFFloatTextCtrl(self.panelsettings)
+        bgspin1 = wxutils.PCFFloatTextCtrl(self.panelsettings)
         bgspin1.Bind(wx.EVT_KILL_FOCUS, self.OnBGSpinChanged)
         bgspin1.Bind(wx.EVT_TEXT_ENTER, self.OnBGSpinChanged)
         sizeint.Add(intlabel1)
@@ -925,7 +774,7 @@ class FittingPanel(wx.Panel):
         sizeint.Add(chtext2)
         intlabel2 = wx.TextCtrl(self.panelsettings)
         intlabel2.SetEditable(False)
-        bgspin2 = PCFFloatTextCtrl(self.panelsettings)
+        bgspin2 = wxutils.PCFFloatTextCtrl(self.panelsettings)
         bgspin2.Bind(wx.EVT_KILL_FOCUS, self.OnBGSpinChanged)
         sizeint.Add(intlabel2)
         sizeint.Add(bgspin2)
@@ -1015,8 +864,8 @@ class FittingPanel(wx.Panel):
         if hasattr(self.corr, "fit_results"):
             if "chi2" in self.corr.fit_results:
                 chi2 = self.corr.fit_results["chi2"]
-                chi2str = float2string_nsf(chi2, n=3)
-                chi2str = nice_string(chi2str)
+                chi2str = wxutils.float2string_nsf(chi2, n=3)
+                chi2str = wxutils.nice_string(chi2str)
                 label = u"  χ²={}".format(chi2str)
         # This does not work with wxPython 2.8.12:
         #self.WXTextChi2.SetLabelMarkup(u"<b>{}</b>".format(label))
diff --git a/pycorrfit/readfiles/read_SIN_correlator_com.py b/pycorrfit/readfiles/read_SIN_correlator_com.py
index ac62950..92ee37a 100644
--- a/pycorrfit/readfiles/read_SIN_correlator_com.py
+++ b/pycorrfit/readfiles/read_SIN_correlator_com.py
@@ -232,6 +232,16 @@ def openSIN(dirname, filename):
         traces.append(np.array(trace2))
         traces.append([np.array(trace1), np.array(trace2)])
         traces.append([np.array(trace1), np.array(trace2)])
+    else:
+        # We assume that we have a mode like this:
+        # Mode= 2 3 3 2 0 1 1 0
+        raise NotImplementedError("This format is not yet implemented (issue #135).")
+        # TODO:
+        # - write test function for .sin correlator files
+        # - load correlation and traces from sin files in 2D arrays
+        # -> use this if, elif, elif, else loop to only assign curves
+        #    (to improve code quality)
+        
     dictionary = dict()
     dictionary["Correlation"] = correlations
     dictionary["Trace"] = traces
diff --git a/pycorrfit/tools/info.py b/pycorrfit/tools/info.py
index 119ccc5..86068fb 100644
--- a/pycorrfit/tools/info.py
+++ b/pycorrfit/tools/info.py
@@ -158,7 +158,8 @@ class InfoClass(object):
                     Title.append(["Type AC/CC", "Cross-correlation" ]) 
                 else:
                     Title.append(["Type AC/CC", "Autocorrelation" ]) 
-                Fitting.append([ u"χ²", corr.fit_results["chi2"]])
+                if "chi2" in corr.fit_results:
+                    Fitting.append([ u"χ²", corr.fit_results["chi2"]])
                 if weightedfit:
                     try:
                         Fitting.append(["Weighted fit", corr.fit_results["weighted fit type"]])
diff --git a/pycorrfit/tools/parmrange.py b/pycorrfit/tools/parmrange.py
index d0df4f1..d4037ff 100644
--- a/pycorrfit/tools/parmrange.py
+++ b/pycorrfit/tools/parmrange.py
@@ -14,6 +14,8 @@ import numpy as np
 
 from .. import edclasses  # edited floatspin
 from .. import models as mdls
+from .. import wxutils
+
 
 
 class RangeSelector(wx.Frame):
@@ -46,19 +48,31 @@ class RangeSelector(wx.Frame):
         """
         corr = self.Page.corr
         self.parameter_range = np.zeros_like(corr.fit_parameters_range)
+        leftbound = []
+        for item in corr.fit_parameters_range[:,0]:
+            if item is None:
+                leftbound.append(-np.inf)
+            else:
+                leftbound.append(item)
         labels, parmleft = mdls.GetHumanReadableParms(corr.fit_model.id,  # @UnusedVariable
-                                                 corr.fit_parameters_range[:,0])
+                                                      leftbound)
+        rightbound = []
+        for item in corr.fit_parameters_range[:,1]:
+            if item is None:
+                rightbound.append(np.inf)
+            else:
+                rightbound.append(item)
+        
         labels, parmright = mdls.GetHumanReadableParms(corr.fit_model.id,
-                                                 corr.fit_parameters_range[:,1])
+                                                       rightbound)
         self.parameter_range[:,0] = np.array(parmleft)
         self.parameter_range[:,1] = np.array(parmright)
         # create line
         
-        # = wx.BoxSizer(wx.VERTICAL)
         self.WXboxsizer = wx.FlexGridSizer(rows=len(labels), cols=4, vgap=5, hgap=5)
         for i in range(len(labels)):
-            left = floatspin.FloatSpin(self.panel, digits=7, increment=.1)
-            right = floatspin.FloatSpin(self.panel, digits=7, increment=.1)
+            left = wxutils.PCFFloatTextCtrl(self.panel)
+            right = wxutils.PCFFloatTextCtrl(self.panel)
             left.SetValue(self.parameter_range[i][0])
             right.SetValue(self.parameter_range[i][1])
             left.Bind(wx.EVT_SPINCTRL, self.OnSetParmRange)
diff --git a/pycorrfit/wxutils.py b/pycorrfit/wxutils.py
new file mode 100644
index 0000000..7e55c8c
--- /dev/null
+++ b/pycorrfit/wxutils.py
@@ -0,0 +1,179 @@
+# -*- coding: utf-8 -*-
+"""
+PyCorrFit
+
+Module wxutils
+"""
+import numpy as np                      # NumPy
+import re
+import string
+import wx                               # GUI interface wxPython
+
+
+def float2string_nsf(fval, n=7):
+    """
+    Truncate a float to n significant figures and return nice string.
+    Arguments:
+      fval : a float
+      n : desired number of significant figures
+    Returns:
+    String with only n s.f. and trailing zeros.
+    """
+    #sgn=np.sign(fval)
+    try:
+        if fval == 0:
+            npoint=n
+        else:
+            q=abs(fval)
+            # prevent k from having negative values
+            k=max(0,int(np.ceil(np.log10(q/n))))
+            npoint = n-k
+        string="{:.{}f}".format(fval, npoint)
+    except:
+        string="{}".format(fval)
+    return string
+
+def nice_string(string):
+    """
+    Convert a string of a float created by `float2string_nsf`
+    to something nicer.
+    
+    i.e.
+    - 1.000000 -> 1
+    - 1.010000 -> 1.010
+    """
+    if string.count(".") and len(string.split(".")[1].replace("0", "")) == 0:
+        return "{:d}".format(int(float(string)))
+    else:
+        olen = len(string)
+        newstring = string.rstrip("0")
+        if olen > len(newstring):
+            string=newstring+"0"
+        return string
+
+
+class PCFFloatValidator(wx.PyValidator):
+    def __init__(self, flag=None, pyVar=None):
+        wx.PyValidator.__init__(self)
+        self.flag = flag
+        self.Bind(wx.EVT_CHAR, self.OnChar)
+
+    def Clone(self):
+        return PCFFloatValidator(self.flag)
+
+    def Validate(self, win):
+        tc = self.GetWindow()
+        val = tc.GetValue()
+        
+        for x in val:
+            if x not in string.digits:
+                return False
+
+        return True
+
+    def OnChar(self, event):
+        """
+        Filter the characters that are put in the control.
+        
+        TODO:
+        - check for strings that do not make sense
+          - 2e-4.4
+          - 2e--3
+          - 3-1+5
+        """
+        key = event.GetKeyCode()
+        ctrl = event.GetEventObject()
+        # Get the actual string from the object
+        curval = wx.TextCtrl.GetValue(ctrl)
+
+        if key < wx.WXK_SPACE or key == wx.WXK_DELETE or key > 255:
+            event.Skip()
+            return
+
+        char = chr(key)
+        char = char.replace(",", ".")
+        
+        onlyonce = [".", "e", "i", "n", "f"]
+        if char in onlyonce and curval.count(char):
+            # not allowed
+            return
+
+        if ( char in string.digits or
+             char in ["+", "-"]+onlyonce):
+            event.Skip()
+            return
+
+        if not wx.Validator_IsSilent():
+            wx.Bell()
+
+        # Returning without calling event.Skip eats the event before it
+        # gets to the text control
+        return
+
+
+class PCFFloatTextCtrl(wx.TextCtrl):
+    def __init__(self, *args, **kwargs):
+        wx.TextCtrl.__init__(self, *args, validator=PCFFloatValidator(), size=(110,-1),
+                             style=wx.TE_PROCESS_ENTER, **kwargs)
+        self.Bind(wx.EVT_ENTER_WINDOW, self.OnMouseEnter)
+        self.Bind(wx.EVT_LEAVE_WINDOW, self.OnMouseLeave)
+        self._PCFvalue = 0.0
+
+    def OnMouseEnter(self, e):
+        self.SetFocus()
+        self.SetSelection(-1,0)
+
+    def OnMouseLeave(self, e):
+        self.SetSelection(0,0)
+        self.SetInsertionPoint(0)
+
+    def SetValue(self, value):
+        self._PCFvalue = value
+        string = PCFFloatTextCtrl.float2string(value)
+        wx.TextCtrl.SetValue(self, string)
+    
+    def GetValue(self):
+        string = wx.TextCtrl.GetValue(self)
+        if string == PCFFloatTextCtrl.float2string(self._PCFvalue):
+            # use internal value: more accurate
+            #print("internal", self._PCFvalue)
+            return self._PCFvalue
+        else:
+            # new value
+            #print("external", string)
+            return PCFFloatTextCtrl.string2float(string)
+        
+    @staticmethod
+    def float2string(value):
+        """
+        inverse of string2float with some tweaks
+        """
+        value = float2string_nsf(value)
+        value = nice_string(value)
+        return value
+        
+    @staticmethod
+    def string2float(string):
+        """
+        Remove any characters that are not in
+        [+-{0-9},.] and show a decent float
+        value.
+        """
+        if string.count("inf"):
+            if string[0]=="-":
+                return -np.inf
+            else:
+                return np.inf
+        # allow comma
+        string = string.replace(",", ".")
+        # allow only one decimal point
+        string = string[::-1].replace(".", "", string.count(".")-1)[::-1]
+        try:
+            string = "{:.12f}".format(float(string))
+        except:
+            pass
+        # remove letters
+        string = re.sub(r'[^\d.-]+', '', string)
+        if len(string) == 0:
+            string = "0"
+        return float(string)
diff --git a/setup.py b/setup.py
index 9afb8d8..7c45b74 100644
--- a/setup.py
+++ b/setup.py
@@ -131,6 +131,7 @@ setup(
         "NumPy >= 1.5.1",
         "SciPy >= 0.8.0",
         "PyYAML >= 3.09",
+        "lmfit",
         ],
     setup_requires=["cython"],
     # scripts
diff --git a/tests/test_fit_model_gaussian.py b/tests/test_fit_model_gaussian.py
new file mode 100644
index 0000000..303c6f9
--- /dev/null
+++ b/tests/test_fit_model_gaussian.py
@@ -0,0 +1,231 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+Check known model parameters and cross-check across models. 
+"""
+from __future__ import division, print_function
+import sys
+from os.path import abspath, dirname, split
+
+import numpy as np
+
+# Add parent directory to beginning of path variable
+sys.path.insert(0, dirname(dirname(abspath(__file__))))
+
+import pycorrfit
+from pycorrfit import models as mdls
+
+# GLOBAL PARAMETERS FOR THIS TEST:
+TAU = 1.468e-6
+
+
+def test_6001():
+    #2D
+    model = mdls.modeldict[6001]
+    parms = [4.874, 0.2476, 0.015]
+    assert abs(model(parms, tau=TAU) - 0.22016907491127263) < 1e-14
+
+
+def test_6002():
+    #T+2D
+    model = mdls.modeldict[6002]
+    #         n     τ_diff τ_trip     T     offset
+    parms = [4.891, 0.853, 0.00141, 0.0121, 0.034]
+    assert abs(model(parms, tau=TAU) - 0.24095843709396209) < 1e-14
+    
+    model2 = mdls.modeldict[6001]
+    parms2 = [4.891, 0.853, 0.034]
+    parms1 = [4.891, 0.853, 0.0, 0.0, 0.034]
+    assert abs(model(parms1, tau=TAU) - model2(parms2, tau=TAU)) < 1e-14
+    
+
+def test_6031():
+    #T+2D+2D
+    model = mdls.modeldict[6031]
+    parms = [ 
+                2.487,        # n
+                3.4325,      # taud1
+                2534,        # taud2
+                0.153,       # F
+                .879,        # alpha
+                0.00123,     # tautrip
+                0.0314,      # T
+                0.00021      # offset
+                ]
+    assert abs(model(parms, tau=TAU) - 0.41629799102222742) < 1e-14
+    
+    model2 = mdls.modeldict[6002]
+    parms2 = [4.891, 0.853, 0.0012, 0.108, 0.034]
+    parms1 = [ 
+                4.891,      # n
+                0.853,      # taud1
+                1.0,        # taud2
+                1.0,        # F
+                1.0,        # alpha
+                0.0012,     # tautrip
+                0.108,      # T
+                0.034       # offset
+                ]
+    assert abs(model(parms1, tau=TAU) - model2(parms2, tau=TAU)) < 1e-14
+
+
+def test_6011():
+    #T+3D
+    model = mdls.modeldict[6011]
+    #            n       T   τ_trip τ_diff    SP  offset
+    parms = [2.168, 0.1682, 0.00028, 0.54, 5.864, 0.0053]
+    assert abs(model(parms, tau=TAU) - 0.55933660640533278) < 1e-14
+
+    model2 = mdls.modeldict[6012]
+    parms2 = [2.168, 0.54, 5.864, 0.0053]
+    parms1 = [2.168, 0, 1.0, 0.54, 5.864, 0.0053]
+    assert abs(model(parms1, tau=TAU) - model2(parms2, tau=TAU)) < 1e-14
+
+
+def test_6012():
+    #3D
+    model = mdls.modeldict[6012]
+    parms = [2.168, 0.54, 5.864, 0.0053]
+    assert abs(model(parms, tau=TAU) - 0.46655334038750634) < 1e-14
+
+
+def test_6030():
+    #T+3D+3D
+    model = mdls.modeldict[6030]
+    parms = [ 
+                2.153,      # n
+                5.54,       # taud1
+                1532,       # taud2
+                0.4321,     # F
+                4.4387,     # SP
+                0.9234,     # alpha
+                0.002648,   # tautrip
+                0.1151,     # T
+                0.008       # offset
+                ]
+    assert abs(model(parms, tau=TAU) - 0.53367456244118261) < 1e-14
+    
+    model2 = mdls.modeldict[6011]
+    #             n       T   τ_trip τ_diff    SP  offset
+    parms2 = [2.168, 0.1682, 0.00028, 0.54, 5.864, 0.0053]
+    parms1 = [ 
+                2.168,       # n
+                0.54,        # taud1
+                1.0,         # taud2
+                1.0,         # F
+                5.864,       # SP
+                0.9234,      # alpha
+                0.00028,     # tautrip
+                0.1682,      # T
+                0.0053       # offset
+                ]
+    assert abs(model(parms1, tau=TAU) - model2(parms2, tau=TAU)) < 1e-14
+    
+
+def test_6032():
+    #T+3D+2D
+    model = mdls.modeldict[6032]
+    parms = [ 
+                1.58,      # n
+                3548,      # taud2D
+                0.351,     # taud3D
+                0.345,     # F3D
+                4.984,     # SP
+                0.879,     # alpha
+                0.0014,    # tautrip
+                0.108,     # T
+                0.008      # offset
+                ]
+    assert abs(model(parms, tau=TAU) - 0.72001694812574801) < 1e-14
+    
+    #->T+3D
+    model2 = mdls.modeldict[6011]
+    #             n       T   τ_trip τ_diff    SP  offset
+    parms2 = [2.168, 0.1682, 0.0028, 0.54, 5.864, 0.0053]
+    parms1a = [ 
+                2.168,      # n
+                1.0,        # taud2D
+                0.54,       # taud3D
+                1.0,        # F3D
+                5.864,      # SP
+                0.879,      # alpha
+                0.0028,     # tautrip
+                0.1682,     # T
+                0.0053      # offset
+                ]
+    assert abs(model(parms1a, tau=TAU) - model2(parms2, tau=TAU)) < 1e-14
+
+    #->T+2D
+    model3 = mdls.modeldict[6002]
+    #         n     τ_diff τ_trip     T     offset
+    parms3 = [4.891, 0.853, 0.00141, 0.0121, 0.034]
+    parms1b = [ 
+                4.891,    # n
+                0.853,    # taud2D
+                1.0,      # taud3D
+                0.0,      # F3D
+                1.0,      # SP
+                0.879,    # alpha
+                0.00141,  # tautrip
+                0.0121,   # T
+                0.034     # offset
+                ]
+    assert abs(model(parms1b, tau=TAU) - model3(parms3, tau=TAU)) < 1e-14
+
+
+def test_6043():
+    model = mdls.modeldict[6043]
+    parms = [ 
+                1.452,       # n
+                4.48,        # taud1
+                8438,        # taud2
+                0.425,       # F
+                5.43,        # SP
+                0.876,       # alpha
+                0.0012,      # tautrip1
+                0.0101,      # T1
+                0.0021,      # tautrip2
+                0.0102,      # T2
+                0.00004      # offset
+                ] 
+    assert abs(model(parms, tau=TAU) - 0.70599013426715551) < 1e-14
+
+    #->T+3D+3D
+    model2 = mdls.modeldict[6030]
+    parms2 = [ 
+                2.153,      # n
+                5.54,       # taud1
+                1532,       # taud2
+                0.4321,     # F
+                4.4387,     # SP
+                0.9234,     # alpha
+                0.002648,   # tautrip
+                0.1151,     # T
+                0.008       # offset
+                ]
+    parms1 = [
+                2.153,      # n
+                5.54,       # taud1
+                1532,       # taud2
+                0.4321,     # F
+                4.4387,     # SP
+                0.9234,     # alpha
+                0.002648,   # tautrip1
+                0.1151,     # T1
+                0.0021,     # tautrip2
+                0.0,        # T2
+                0.008       # offset
+                ] 
+
+    assert abs(model(parms1, tau=TAU) - model2(parms2, tau=TAU)) < 1e-14
+
+
+if __name__ == "__main__":
+    # Run all tests
+    loc = locals()
+    for key in list(loc.keys()):
+        if key.startswith("test_") and callable(loc[key]):
+            loc[key]()
+
+    
+    
\ No newline at end of file
diff --git a/tests/test_fit_models.py b/tests/test_fit_models.py
index 6acb059..de78490 100644
--- a/tests/test_fit_models.py
+++ b/tests/test_fit_models.py
@@ -8,7 +8,6 @@ from __future__ import division, print_function
 import sys
 from os.path import abspath, dirname, split
 
-import matplotlib.pylab as plt
 import numpy as np
 
 # Add parent directory to beginning of path variable
diff --git a/tests/test_global_fit.py b/tests/test_global_fit.py
index bde5b81..18c6d7a 100644
--- a/tests/test_global_fit.py
+++ b/tests/test_global_fit.py
@@ -55,6 +55,7 @@ def test_globalfit():
     corrs, initparms = create_corr()
     # commence global fit
     globalfit = Fit(correlations=corrs, global_fit=True)
+
     assert np.allclose(globalfit.fit_parm, initparms), "Global fit failed"
     
 
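The model-equivalence checks in the hunks above compare floating-point outputs against a 1e-14 tolerance; a common pitfall there is omitting `abs()`, which makes `a - b < tol` pass whenever the difference happens to be negative. A minimal sketch of a symmetric tolerance helper (hypothetical, not part of this commit; the name `close` is an illustration, and in practice `numpy.allclose` or `math.isclose` serve the same purpose):

```python
import numpy as np

def close(a, b, atol=1e-14):
    """Symmetric absolute-tolerance comparison for scalars or arrays.

    A bare ``a - b < atol`` is asymmetric: it passes for any a < b,
    no matter how large the gap. Taking the absolute value of the
    difference (as np.allclose does internally) avoids that trap.
    """
    return bool(np.all(np.abs(np.asarray(a) - np.asarray(b)) < atol))

# The asymmetric form silently accepts a large negative difference:
#   (0.5 - 0.7) < 1e-14  is True, even though the values differ by 0.2.
# The symmetric helper rejects it:
#   close(0.5, 0.7)      is False.
```
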

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/pycorrfit.git


