[med-svn] [pycorrfit] 02/06: Imported Upstream version 0.8.6
Alex Mestiashvili
malex-guest at moszumanska.debian.org
Wed Mar 18 17:33:53 UTC 2015
This is an automated email from the git hooks/post-receive script.
malex-guest pushed a commit to branch master
in repository pycorrfit.
commit 9d1050e2de9b36efb06bef2e2d7f504919e627c7
Author: Alexandre Mestiashvili <alex at biotec.tu-dresden.de>
Date: Wed Mar 18 18:11:07 2015 +0100
Imported Upstream version 0.8.6
---
ChangeLog.txt | 20 +
MANIFEST.in | 11 +-
PyCorrFit_doc.pdf | Bin 581216 -> 0 bytes
README.md | 2 +-
Readme.txt | 3 +-
bin/pycorrfit | 20 +-
{doc-src => doc}/Bibliography.bib | 0
.../Images/PyCorrFit_Screenshot_CSFCS.png | Bin
.../Images/PyCorrFit_Screenshot_Main.png | Bin
{doc-src => doc}/Images/PyCorrFit_icon.png | Bin
{doc-src => doc}/Images/PyCorrFit_icon.svg | 0
{doc-src => doc}/Images/PyCorrFit_icon_dark.svg | 0
{doc-src => doc}/Images/PyCorrFit_logo.svg | 0
{doc-src => doc}/Images/PyCorrFit_logo_dark.pdf | Bin
{doc-src => doc}/Images/PyCorrFit_logo_dark.png | Bin
{doc-src => doc}/Images/PyCorrFit_logo_dark.svg | 0
{doc-src => doc}/PyCorrFit_doc.tex | 0
{doc-src => doc}/PyCorrFit_doc_content.tex | 14 +-
{doc-src => doc}/PyCorrFit_doc_models.tex | 0
{doc-src => doc}/README.md | 2 +-
.../ExampleFunc_CS_2D+2D+S+T.txt | 0
.../ExampleFunc_CS_3D+S+T.txt | 0
.../ExampleFunc_Exp_correlated_noise.txt | 0
.../ExampleFunc_SFCS_1C_2D_Autocorrelation.txt | 0
.../ExampleFunc_SFCS_1C_2D_Cross-correlation.txt | 0
.../ExampleFunc_TIRF_zOnly.txt | 0
.../Model_AC_3D+T_confocal.txt | 0
.../Model_Flow_AC_3D_confocal.txt | 0
.../Model_Flow_CC_Backward_3D_confocal.txt | 0
.../Model_Flow_CC_Forward_3D_confocal.txt | 0
.../sample_sessions}/CSFCS_DiO-in-DOPC.pcfs | Bin
.../ConfocalFCS_Alexa488_xcorr.pcfs | Bin
pycorrfit/PyCorrFit.py | 10 +
{src => pycorrfit}/__init__.py | 9 +-
pycorrfit/__main__.py | 34 +
{src => pycorrfit}/doc.py | 103 ++-
{src => pycorrfit}/edclasses.py | 2 +-
pycorrfit/fcs_data_set.py | 739 ++++++++++++++++++
{src => pycorrfit}/fitting.py | 9 +-
{src => pycorrfit}/frontend.py | 713 +++++++++++-------
{src => pycorrfit}/icon.py | 0
src/PyCorrFit.py => pycorrfit/main.py | 122 +--
{src => pycorrfit}/misc.py | 4 +-
{src => pycorrfit}/models/MODEL_TIRF_1C.py | 0
{src => pycorrfit}/models/MODEL_TIRF_2D2D.py | 0
{src => pycorrfit}/models/MODEL_TIRF_3D2D.py | 0
.../models/MODEL_TIRF_3D2Dkin_Ries.py | 0
{src => pycorrfit}/models/MODEL_TIRF_3D3D.py | 0
.../models/MODEL_TIRF_gaussian_1C.py | 0
.../models/MODEL_TIRF_gaussian_3D2D.py | 0
.../models/MODEL_TIRF_gaussian_3D3D.py | 0
.../models/MODEL_classic_gaussian_2D.py | 0
.../models/MODEL_classic_gaussian_3D.py | 0
.../models/MODEL_classic_gaussian_3D2D.py | 0
{src => pycorrfit}/models/__init__.py | 33 +-
pycorrfit/openfile.py | 766 +++++++++++++++++++
{src => pycorrfit}/page.py | 8 +-
{src => pycorrfit}/plotting.py | 4 +-
{src => pycorrfit}/readfiles/__init__.py | 56 +-
{src => pycorrfit}/readfiles/read_ASC_ALV.py | 0
{src => pycorrfit}/readfiles/read_CSV_PyCorrFit.py | 0
{src => pycorrfit}/readfiles/read_FCS_Confocor3.py | 191 +++--
.../readfiles/read_SIN_correlator_com.py | 0
{src => pycorrfit}/readfiles/read_mat_ries.py | 0
pycorrfit/readfiles/read_pt3_PicoQuant.py | 95 +++
pycorrfit/readfiles/read_pt3_scripts/LICENSE | 340 +++++++++
pycorrfit/readfiles/read_pt3_scripts/__init__.py | 0
.../read_pt3_scripts/correlation_methods.py | 134 ++++
.../read_pt3_scripts/correlation_objects.py | 527 +++++++++++++
pycorrfit/readfiles/read_pt3_scripts/fib4.pyx | 60 ++
.../readfiles/read_pt3_scripts/import_methods.py | 190 +++++
{src => pycorrfit}/tools/__init__.py | 28 +-
{src => pycorrfit}/tools/average.py | 4 +-
{src => pycorrfit}/tools/background.py | 8 +-
{src => pycorrfit}/tools/batchcontrol.py | 45 +-
{src => pycorrfit}/tools/chooseimport.py | 6 +-
{src => pycorrfit}/tools/comment.py | 0
{src => pycorrfit}/tools/datarange.py | 0
{src => pycorrfit}/tools/example.py | 0
{src => pycorrfit}/tools/globalfit.py | 4 +-
{src => pycorrfit}/tools/info.py | 4 +-
{src => pycorrfit}/tools/overlaycurves.py | 28 +-
{src => pycorrfit}/tools/parmrange.py | 4 +-
{src => pycorrfit}/tools/plotexport.py | 0
{src => pycorrfit}/tools/simulation.py | 4 +-
{src => pycorrfit}/tools/statistics.py | 4 +-
{src => pycorrfit}/tools/trace.py | 0
{src => pycorrfit}/usermodel.py | 7 +-
setup.py | 51 +-
src/openfile.py | 838 ---------------------
90 files changed, 3833 insertions(+), 1423 deletions(-)
diff --git a/ChangeLog.txt b/ChangeLog.txt
index 1e1ed03..1fb05be 100644
--- a/ChangeLog.txt
+++ b/ChangeLog.txt
@@ -1,3 +1,23 @@
+0.8.6
+- Bugfix: Opening .fcs files with only one AC curve works now
+- Zip files with measurements may now contain subfolders
+- Improved pt3-file support from
+ https://github.com/dwaithe/FCS_point_correlator (#89)
+0.8.5
+- Fixed bug that made it impossible to load data (#88)
+- Exceptions are now handled by wxPython
+- Under the hood:
+ - pythonic repository structure
+ - Relative imports
+ - Windows build machine is now Windows 7
+ - Removed strict dependency on matplotlib
+0.8.4
+- Support for PicoQuant data file format
+ Many thanks to Dominic Waithe (@dwaithe)
+- Improved compatibility with Zeiss .fcs file format
+- PyCorrFit is now dependent on Cython
+- The module 'openfile' is now available from within Python
+- Installer for Windows
0.8.3
- New .pcfs (PyCorrFit Session) file format (#60)
- Additional fitting algorithms: Nelder-Mead, BFGS, Powell, Polak-Ribiere (#71)
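The 0.8.4 entry "The module 'openfile' is now available from within Python"
can be tried with a quick sketch (assuming an installed pycorrfit; the exact
helper names depend on the installed version):

    import pycorrfit
    # list the file/session I/O helpers exposed by the module
    print(dir(pycorrfit.openfile))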
diff --git a/MANIFEST.in b/MANIFEST.in
index 924013e..f887a12 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,7 +1,8 @@
-include doc-src/*.tex
-include doc-src/*.bib
-include doc-src/Images/*
-include external_model_functions/*
+include doc/*.tex
+include doc/*.bib
+include doc/*.pdf
+include doc/Images/*
+include examples/external_model_functions/*
include Readme.txt
include ChangeLog.txt
-include PyCorrFit_doc.pdf
+
diff --git a/PyCorrFit_doc.pdf b/PyCorrFit_doc.pdf
deleted file mode 100644
index 6d3afe5..0000000
Binary files a/PyCorrFit_doc.pdf and /dev/null differ
diff --git a/README.md b/README.md
index 1c314a2..c3e9ba9 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ information, visit the official homepage at http://pycorrfit.craban.de.
- [Download the latest version](https://github.com/paulmueller/PyCorrFit/releases)
-- [Documentation](https://github.com/paulmueller/PyCorrFit/raw/master/PyCorrFit_doc.pdf)
+- [Documentation](https://github.com/paulmueller/PyCorrFit/wiki/PyCorrFit_doc.pdf)
- [Run PyCorrFit from source](https://github.com/paulmueller/PyCorrFit/wiki/Running-from-source)
- [Write your own model functions](https://github.com/paulmueller/PyCorrFit/wiki/Writing-model-functions)
- [Need help?](https://github.com/paulmueller/PyCorrFit/wiki/Creating-a-new-issue)
diff --git a/Readme.txt b/Readme.txt
index 77ad18f..3548526 100644
--- a/Readme.txt
+++ b/Readme.txt
@@ -21,7 +21,8 @@ latest version of PyCorrFit using pip can be found there:
https://github.com/paulmueller/PyCorrFit/wiki/Installation_pip
Further reading:
+
- Latest downloads: https://github.com/paulmueller/PyCorrFit/releases
-- Documentation: https://github.com/paulmueller/PyCorrFit/raw/master/PyCorrFit_doc.pdf
+- Documentation: https://github.com/paulmueller/PyCorrFit/wiki/PyCorrFit_doc.pdf
- Write model functions: https://github.com/paulmueller/PyCorrFit/wiki/Writing-model-functions
- Need help? https://github.com/paulmueller/PyCorrFit/wiki/Creating-a-new-issue
diff --git a/bin/pycorrfit b/bin/pycorrfit
index 0ee17a9..28bfcc5 100644
--- a/bin/pycorrfit
+++ b/bin/pycorrfit
@@ -1,16 +1,4 @@
-#!/bin/sh
-# debian
-if [ -f "/usr/share/pyshared/pycorrfit/PyCorrFit.py" ]
-then
- python /usr/share/pyshared/pycorrfit/PyCorrFit.py
-elif [ -f "/usr/local/lib/python2.7/dist-packages/pycorrfit/PyCorrFit.py" ]
-# pip
-then
- python /usr/local/lib/python2.7/dist-packages/pycorrfit/PyCorrFit.py
-# pip and virtualenv
-elif [ -f "../lib/python2.7/site-packages/pycorrfit/PyCorrFit.py" ]
-then
- python ../lib/python2.7/site-packages/pycorrfit/PyCorrFit.py
-else
- echo "Could not find PyCorrFit.py. Please notify the author."
-fi
+#!/bin/bash
+# go to this directory to prevent execution of a git checkout
+cd "$(dirname "$0")"
+python -m pycorrfit
diff --git a/doc-src/Bibliography.bib b/doc/Bibliography.bib
similarity index 100%
rename from doc-src/Bibliography.bib
rename to doc/Bibliography.bib
diff --git a/doc-src/Images/PyCorrFit_Screenshot_CSFCS.png b/doc/Images/PyCorrFit_Screenshot_CSFCS.png
similarity index 100%
rename from doc-src/Images/PyCorrFit_Screenshot_CSFCS.png
rename to doc/Images/PyCorrFit_Screenshot_CSFCS.png
diff --git a/doc-src/Images/PyCorrFit_Screenshot_Main.png b/doc/Images/PyCorrFit_Screenshot_Main.png
similarity index 100%
rename from doc-src/Images/PyCorrFit_Screenshot_Main.png
rename to doc/Images/PyCorrFit_Screenshot_Main.png
diff --git a/doc-src/Images/PyCorrFit_icon.png b/doc/Images/PyCorrFit_icon.png
similarity index 100%
rename from doc-src/Images/PyCorrFit_icon.png
rename to doc/Images/PyCorrFit_icon.png
diff --git a/doc-src/Images/PyCorrFit_icon.svg b/doc/Images/PyCorrFit_icon.svg
similarity index 100%
rename from doc-src/Images/PyCorrFit_icon.svg
rename to doc/Images/PyCorrFit_icon.svg
diff --git a/doc-src/Images/PyCorrFit_icon_dark.svg b/doc/Images/PyCorrFit_icon_dark.svg
similarity index 100%
rename from doc-src/Images/PyCorrFit_icon_dark.svg
rename to doc/Images/PyCorrFit_icon_dark.svg
diff --git a/doc-src/Images/PyCorrFit_logo.svg b/doc/Images/PyCorrFit_logo.svg
similarity index 100%
rename from doc-src/Images/PyCorrFit_logo.svg
rename to doc/Images/PyCorrFit_logo.svg
diff --git a/doc-src/Images/PyCorrFit_logo_dark.pdf b/doc/Images/PyCorrFit_logo_dark.pdf
similarity index 100%
rename from doc-src/Images/PyCorrFit_logo_dark.pdf
rename to doc/Images/PyCorrFit_logo_dark.pdf
diff --git a/doc-src/Images/PyCorrFit_logo_dark.png b/doc/Images/PyCorrFit_logo_dark.png
similarity index 100%
rename from doc-src/Images/PyCorrFit_logo_dark.png
rename to doc/Images/PyCorrFit_logo_dark.png
diff --git a/doc-src/Images/PyCorrFit_logo_dark.svg b/doc/Images/PyCorrFit_logo_dark.svg
similarity index 100%
rename from doc-src/Images/PyCorrFit_logo_dark.svg
rename to doc/Images/PyCorrFit_logo_dark.svg
diff --git a/doc-src/PyCorrFit_doc.tex b/doc/PyCorrFit_doc.tex
similarity index 100%
rename from doc-src/PyCorrFit_doc.tex
rename to doc/PyCorrFit_doc.tex
diff --git a/doc-src/PyCorrFit_doc_content.tex b/doc/PyCorrFit_doc_content.tex
similarity index 99%
rename from doc-src/PyCorrFit_doc_content.tex
rename to doc/PyCorrFit_doc_content.tex
index c0e6ae0..e75bda5 100755
--- a/doc-src/PyCorrFit_doc_content.tex
+++ b/doc/PyCorrFit_doc_content.tex
@@ -153,21 +153,23 @@ Some examples can be found at GitHub in the \textit{PyCorrFit} repository, e.g.
\rule{0pt}{3ex} (2) ALV (*.ASC) & ALV Laser GmbH, Langen, Germany \\
\rule{0pt}{3ex} (3) Correlator.com (*.SIN) & www.correlator.com, USA \\
+
+  \rule{0pt}{3ex} (4) PicoQuant (*.pt3) & PicoQuant GmbH, Berlin, Germany \\
- \rule{0pt}{3ex} (4) Zeiss ConfoCor3 (*.fcs) & AIM 4.2, ZEN 2010, Zeiss, Germany \\
+ \rule{0pt}{3ex} (5) Zeiss ConfoCor3 (*.fcs) & AIM 4.2, ZEN 2010, Zeiss, Germany \\
- \rule{0pt}{3ex} (5) Matlab ‘Ries (*.mat) & EMBL Heidelberg, Germany \\
+ \rule{0pt}{3ex} (6) Matlab ‘Ries (*.mat) & EMBL Heidelberg, Germany \\
- \rule{0pt}{3ex} (6) PyCorrFit (*.csv) & Paul Müller, TU Dresden, Germany \\
+ \rule{0pt}{3ex} (7) PyCorrFit (*.csv) & Paul Müller, TU Dresden, Germany \\
- \rule{0pt}{3ex} (7) PyCorrFit session (*.pcfs) & Paul Müller, TU Dresden, Germany \\
+ \rule{0pt}{3ex} (8) PyCorrFit session (*.pcfs) & Paul Müller, TU Dresden, Germany \\
- \rule{0pt}{3ex} (8) Zip file (*.zip) & Paul Müller, TU Dresden, Germany \\
+ \rule{0pt}{3ex} (9) Zip file (*.zip) & Paul Müller, TU Dresden, Germany \\
\end{tabular}
\vspace{3ex}
\newline
-While (2)-(4) are file formats associated with commercial hardware, (5) refers to a MATLAB based FCS evaluation software developed by Jonas Ries in the Schwille lab at TU Dresden, (6) is a text file containing comma-separated values (csv) generated by PyCorrFit via the command \textit{Current Page / Save data}. Zip-files are automatically decompressed and can be imported when matching one of the above mentioned formats. In particular loading of *.pcfs files (which are actually zip files) [...]
+While (2)-(5) are file formats associated with commercial hardware, (6) refers to a MATLAB based FCS evaluation software developed by Jonas Ries in the Schwille lab at TU Dresden, (7) is a text file containing comma-separated values (csv) generated by PyCorrFit via the command \textit{Current Page / Save data}. Zip-files are automatically decompressed and can be imported when matching one of the above mentioned formats. In particular loading of *.pcfs files (which are actually zip files) [...]
During loading, the user is prompted to assign fit models in the \textit{Choose Models} dialogue window. There, curves are sorted according to channel (for example AC1, AC2, CC12, and CC21, as a typical outcome of a dual-color cross-correlation experiment). For each channel a fit model must be selected from the list (see \hyref{Section}{sec:menub.model}):
diff --git a/doc-src/PyCorrFit_doc_models.tex b/doc/PyCorrFit_doc_models.tex
similarity index 100%
rename from doc-src/PyCorrFit_doc_models.tex
rename to doc/PyCorrFit_doc_models.tex
diff --git a/doc-src/README.md b/doc/README.md
similarity index 95%
rename from doc-src/README.md
rename to doc/README.md
index 9df4f4c..0afe7cf 100644
--- a/doc-src/README.md
+++ b/doc/README.md
@@ -1,5 +1,5 @@
This folder contains the TeX-source of the
-[PyCorrFit documentation](https://github.com/paulmueller/PyCorrFit/raw/master/PyCorrFit_doc.pdf).
+[PyCorrFit documentation](https://github.com/paulmueller/PyCorrFit/wiki/PyCorrFit_doc.pdf).
If, for some reason, you wish to compile it yourself, you will need a
working LaTeX distribution.
diff --git a/external_model_functions/ExampleFunc_CS_2D+2D+S+T.txt b/examples/external_model_functions/ExampleFunc_CS_2D+2D+S+T.txt
similarity index 100%
rename from external_model_functions/ExampleFunc_CS_2D+2D+S+T.txt
rename to examples/external_model_functions/ExampleFunc_CS_2D+2D+S+T.txt
diff --git a/external_model_functions/ExampleFunc_CS_3D+S+T.txt b/examples/external_model_functions/ExampleFunc_CS_3D+S+T.txt
similarity index 100%
rename from external_model_functions/ExampleFunc_CS_3D+S+T.txt
rename to examples/external_model_functions/ExampleFunc_CS_3D+S+T.txt
diff --git a/external_model_functions/ExampleFunc_Exp_correlated_noise.txt b/examples/external_model_functions/ExampleFunc_Exp_correlated_noise.txt
similarity index 100%
rename from external_model_functions/ExampleFunc_Exp_correlated_noise.txt
rename to examples/external_model_functions/ExampleFunc_Exp_correlated_noise.txt
diff --git a/external_model_functions/ExampleFunc_SFCS_1C_2D_Autocorrelation.txt b/examples/external_model_functions/ExampleFunc_SFCS_1C_2D_Autocorrelation.txt
similarity index 100%
rename from external_model_functions/ExampleFunc_SFCS_1C_2D_Autocorrelation.txt
rename to examples/external_model_functions/ExampleFunc_SFCS_1C_2D_Autocorrelation.txt
diff --git a/external_model_functions/ExampleFunc_SFCS_1C_2D_Cross-correlation.txt b/examples/external_model_functions/ExampleFunc_SFCS_1C_2D_Cross-correlation.txt
similarity index 100%
rename from external_model_functions/ExampleFunc_SFCS_1C_2D_Cross-correlation.txt
rename to examples/external_model_functions/ExampleFunc_SFCS_1C_2D_Cross-correlation.txt
diff --git a/external_model_functions/ExampleFunc_TIRF_zOnly.txt b/examples/external_model_functions/ExampleFunc_TIRF_zOnly.txt
similarity index 100%
rename from external_model_functions/ExampleFunc_TIRF_zOnly.txt
rename to examples/external_model_functions/ExampleFunc_TIRF_zOnly.txt
diff --git a/external_model_functions/Model_AC_3D+T_confocal.txt b/examples/external_model_functions/Model_AC_3D+T_confocal.txt
similarity index 100%
rename from external_model_functions/Model_AC_3D+T_confocal.txt
rename to examples/external_model_functions/Model_AC_3D+T_confocal.txt
diff --git a/external_model_functions/Model_Flow_AC_3D_confocal.txt b/examples/external_model_functions/Model_Flow_AC_3D_confocal.txt
similarity index 100%
rename from external_model_functions/Model_Flow_AC_3D_confocal.txt
rename to examples/external_model_functions/Model_Flow_AC_3D_confocal.txt
diff --git a/external_model_functions/Model_Flow_CC_Backward_3D_confocal.txt b/examples/external_model_functions/Model_Flow_CC_Backward_3D_confocal.txt
similarity index 100%
rename from external_model_functions/Model_Flow_CC_Backward_3D_confocal.txt
rename to examples/external_model_functions/Model_Flow_CC_Backward_3D_confocal.txt
diff --git a/external_model_functions/Model_Flow_CC_Forward_3D_confocal.txt b/examples/external_model_functions/Model_Flow_CC_Forward_3D_confocal.txt
similarity index 100%
rename from external_model_functions/Model_Flow_CC_Forward_3D_confocal.txt
rename to examples/external_model_functions/Model_Flow_CC_Forward_3D_confocal.txt
diff --git a/sample_sessions/CSFCS_DiO-in-DOPC.pcfs b/examples/sample_sessions/CSFCS_DiO-in-DOPC.pcfs
similarity index 100%
rename from sample_sessions/CSFCS_DiO-in-DOPC.pcfs
rename to examples/sample_sessions/CSFCS_DiO-in-DOPC.pcfs
diff --git a/sample_sessions/ConfocalFCS_Alexa488_xcorr.pcfs b/examples/sample_sessions/ConfocalFCS_Alexa488_xcorr.pcfs
similarity index 100%
rename from sample_sessions/ConfocalFCS_Alexa488_xcorr.pcfs
rename to examples/sample_sessions/ConfocalFCS_Alexa488_xcorr.pcfs
diff --git a/pycorrfit/PyCorrFit.py b/pycorrfit/PyCorrFit.py
new file mode 100644
index 0000000..106f462
--- /dev/null
+++ b/pycorrfit/PyCorrFit.py
@@ -0,0 +1,10 @@
+# -*- coding: utf-8 -*-
+""" PyScanFCS loader
+"""
+from os.path import dirname, abspath, split
+
+import sys
+sys.path = [split(abspath(dirname(__file__)))[0]] + sys.path
+
+import pycorrfit
+pycorrfit.Main()
diff --git a/src/__init__.py b/pycorrfit/__init__.py
similarity index 93%
rename from src/__init__.py
rename to pycorrfit/__init__.py
index e743160..5692da0 100644
--- a/src/__init__.py
+++ b/pycorrfit/__init__.py
@@ -27,9 +27,12 @@
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
-import doc
-import models
-import readfiles
+from . import doc
+from . import models
+from . import openfile
+from . import readfiles
+
+from .main import Main
__version__ = doc.__version__
__author__ = "Paul Mueller"
diff --git a/pycorrfit/__main__.py b/pycorrfit/__main__.py
new file mode 100644
index 0000000..6727f15
--- /dev/null
+++ b/pycorrfit/__main__.py
@@ -0,0 +1,34 @@
+# -*- coding: utf-8 -*-
+"""
+    PyCorrFit is a versatile tool for fitting and analyzing
+    fluorescence correlation spectroscopy (FCS) correlation curves.
+    Experimental data from various correlators can be loaded,
+    assigned a fit model, and evaluated.
+
+ Copyright (C) 2011-2012 Paul Müller
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from . import doc
+from . import main
+
+## VERSION
+version = doc.__version__
+__version__ = version
+
+main.Main()
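Both the reworked bin/pycorrfit launcher and `python -m pycorrfit` end up in
this module; a sketch of the equivalent direct invocation (assuming the
pycorrfit package is importable):

    # what `python -m pycorrfit` effectively executes
    from pycorrfit import main
    main.Main()   # starts the wxPython GUI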
diff --git a/src/doc.py b/pycorrfit/doc.py
similarity index 65%
rename from src/doc.py
rename to pycorrfit/doc.py
index b45e687..263e9d9 100755
--- a/src/doc.py
+++ b/pycorrfit/doc.py
@@ -1,31 +1,30 @@
# -*- coding: utf-8 -*-
-""" PyCorrFit
+""" Documentation and program specific information
- Module doc
- *doc* is the documentation. Functions for various text output point here.
+PyCorrFit
- Dimensionless representation:
- unit of time : 1 ms
- unit of inverse time: 10³ /s
- unit of distance : 100 nm
- unit of Diff.coeff : 10 µm²/s
- unit of inverse area: 100 /µm²
- unit of inv. volume : 1000 /µm³
+Dimensionless representation:
+unit of time : 1 ms
+unit of inverse time: 10³ /s
+unit of distance : 100 nm
+unit of Diff.coeff : 10 µm²/s
+unit of inverse area: 100 /µm²
+unit of inv. volume : 1000 /µm³
- Copyright (C) 2011-2012 Paul Müller
+Copyright (C) 2011-2012 Paul Müller
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2 of the License, or
- (at your option) any later version.
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with this program. If not, see <http://www.gnu.org/licenses/>.
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
@@ -68,22 +67,20 @@ import yaml
import readfiles
-def GetLocationOfChangeLog(filename = "ChangeLog.txt"):
- locations = list()
- fname1 = os.path.realpath(__file__)
- # Try one directory up
- dir1 = os.path.dirname(fname1)+"/../"
- locations.append(os.path.realpath(dir1))
- # In case of distribution with .egg files (pip, easy_install)
- dir2 = os.path.dirname(fname1)+"/../pycorrfit_doc/"
- locations.append(os.path.realpath(dir2))
+def GetLocationOfFile(filename):
+ dirname = os.path.dirname(os.path.abspath(__file__))
+ locations = [
+ dirname+"/../",
+ dirname+"/../pycorrfit_doc/",
+ dirname+"/../doc/",
+ ]
## freezed binaries:
if hasattr(sys, 'frozen'):
try:
- dir2 = sys._MEIPASS + "/doc/"
+ adir = sys._MEIPASS + "/doc/"
except:
- dir2 = "./"
- locations.append(os.path.realpath(dir2))
+ adir = "./"
+ locations.append(os.path.realpath(adir))
for loc in locations:
thechl = os.path.join(loc,filename)
if os.path.exists(thechl):
@@ -93,31 +90,13 @@ def GetLocationOfChangeLog(filename = "ChangeLog.txt"):
return None
+def GetLocationOfChangeLog(filename = "ChangeLog.txt"):
+ return GetLocationOfFile(filename)
+
+
def GetLocationOfDocumentation(filename = "PyCorrFit_doc.pdf"):
""" Returns the location of the documentation if there is any."""
- ## running from source
- locations = list()
- fname1 = os.path.realpath(__file__)
- # Documentation is usually one directory up
- dir1 = os.path.dirname(fname1)+"/../"
- locations.append(os.path.realpath(dir1))
- # In case of distribution with .egg files (pip, easy_install)
- dir2 = os.path.dirname(fname1)+"/../pycorrfit_doc/"
- locations.append(os.path.realpath(dir2))
- ## freezed binaries:
- if hasattr(sys, 'frozen'):
- try:
- dir2 = sys._MEIPASS + "/doc/"
- except:
- dir2 = "./"
- locations.append(os.path.realpath(dir2))
- for loc in locations:
- thedoc = os.path.join(loc,filename)
- if os.path.exists(thedoc):
- return thedoc
- break
- # if this does not work:
- return None
+ return GetLocationOfFile(filename)
def info(version):
@@ -135,7 +114,7 @@ def info(version):
unit of Diff.coeff : 10 µm²/s
unit of inverse area: 100 /µm²
unit of inv. volume : 1000 /µm^3 """
- textlin = """
+ textlin = u"""
© 2011-2012 Paul Müller, Biotec - TU Dresden
A versatile tool for fitting and analyzing correlation curves.
@@ -184,15 +163,23 @@ def SoftwareUsed():
""" Return some Information about the software used for this program """
text = "Python "+sys.version+\
"\n\nModules:"+\
+ "\n - cython "+\
"\n - matplotlib "+matplotlib.__version__+\
"\n - NumPy "+numpy.__version__+\
"\n - PyYAML "+yaml.__version__ +\
"\n - SciPy "+scipy.__version__+\
"\n - sympy "+sympy.__version__ +\
"\n - wxPython "+wx.__version__
+ # Other software
+ text += "\n\nOther software:"+\
+ "\n - FCS_point_correlator (9311a5c15e)" +\
+ "\n PicoQuant file format for Python by Dominic Waithe"
if hasattr(sys, 'frozen'):
pyinst = "\n\nThis executable has been created using PyInstaller."
- text = text+pyinst
+ text += pyinst
+ if 'conda' in sys.version:
+ conda = "\n\nPowered by Anaconda"
+ text += conda
return text
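The consolidated lookup can be exercised directly; a small sketch (the
returned paths depend on how PyCorrFit was installed, and `None` means the
file was not found):

    from pycorrfit import doc
    print(doc.GetLocationOfChangeLog())       # .../ChangeLog.txt or None
    print(doc.GetLocationOfDocumentation())   # .../PyCorrFit_doc.pdf or None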
diff --git a/src/edclasses.py b/pycorrfit/edclasses.py
similarity index 99%
rename from src/edclasses.py
rename to pycorrfit/edclasses.py
index 34b7bdc..fd515eb 100644
--- a/src/edclasses.py
+++ b/pycorrfit/edclasses.py
@@ -240,5 +240,5 @@ class MyYesNoAbortDialog(wx.Dialog):
try:
# Add the save_figure function to the standard class for wx widgets.
matplotlib.backends.backend_wx.NavigationToolbar2Wx.save = save_figure
-except NameError:
+except (NameError, AttributeError):
pass
diff --git a/pycorrfit/fcs_data_set.py b/pycorrfit/fcs_data_set.py
new file mode 100644
index 0000000..df86242
--- /dev/null
+++ b/pycorrfit/fcs_data_set.py
@@ -0,0 +1,739 @@
+# -*- coding: utf-8 -*-
+""" PyCorrFit data set
+
+Classes for FCS data evaluation.
+"""
+from __future__ import print_function, division
+
+import hashlib
+import warnings
+
+import numpy as np
+from scipy import interpolate as spintp
+from scipy import optimize as spopt
+
+# plotting is optional (matplotlib may not be available)
+try:
+    from . import plotting
+except ImportError:
+    pass
+
+from . import models as mdls
+
+class Background(object):
+ """ A class to unify background handling
+ """
+    def __init__(self, countrate=None, duration_s=None, trace=None,
+                 identifier=None, name=None):
+ """ Initiate a background.
+
+ Parameters
+ ----------
+        countrate : float
+            Average countrate [Hz].
+        duration_s : float
+            Duration of the measurement in seconds.
+        trace : 2d `numpy.ndarray` of shape (N,2)
+            The trace (time [s], countrate [Hz]).
+            Overwrites `countrate` and `duration_s`.
+ name : str
+ The name of the measurement.
+ identifier : str
+ A unique identifier. If not given, a sha256 hash will be
+ created.
+
+ """
+        self.countrate = countrate
+        self.duration_s = duration_s
+        self.identifier = identifier
+        self.name = name
+        self.trace = trace
+
+        if trace is not None:
+            self.countrate = np.average(trace[:,1])
+            self.duration_s = trace[-1,0] - trace[0,0]
+
+ ## Make sure all parameters have sensible values
+ if self.duration_s is None:
+ self.duration_s = 0
+
+ if self.countrate is None:
+ self.countrate = 0
+
+ if self.trace is None:
+ self.trace = np.zeros((2,2))
+ self.trace[:,1] = self.countrate
+
+        if name is None:
+            self.name = "{:.2f} kHz, {} s".format(self.countrate/1000,
+                                                  self.duration_s)
+ if identifier is None:
+ hasher = hashlib.sha256()
+ hasher.update(str(self.trace))
+ hasher.update(self.name)
+ self.identifier = hasher.hexdigest()
+
+
+
+class FCSDataSet(object):
+ """ The class has all methods necessary for an FCS measurement.
+
+ """
+ def __init__(self, ac=None, trace=None, trace2=None,
+ ac2=None, cc12=None, cc21=None,
+ background1=None, background2=None):
+ """ Initializes the FCS data set.
+
+ All parms are 2d ndarrays of shape (N,2).
+
+ """
+ self.filename = None
+ self.trace1 = trace
+ self.trace2 = trace2
+ self.ac1 = ac
+ self.ac2 = ac2
+ self.cc12 = cc12
+ self.cc21 = cc21
+
+        self.background1 = None
+        self.background2 = None
+        if background1 is not None:
+            self.background1 = Background(trace=background1)
+        if background2 is not None:
+            self.background2 = Background(trace=background2)
+
+        self.UseBackgroundCorrection = True
+        self.InitComputeDerived()
+        self.DoBackgroundCorrection()
+
+
+ def EnableBackgroundCorrection(self, enable=True):
+ """ Set to false to disable background correction.
+ """
+ self.UseBackgroundCorrection = enable
+
+
+ def InitComputeDerived(self):
+ """ Computes all parameters that can be derived from the
+ correlation data and traces themselves.
+ """
+        # presence of correlation data determines auto- or cross-correlation
+ if self.cc12 is not None or self.cc21 is not None:
+ self.IsCrossCorrelation = True
+ else:
+ self.IsCrossCorrelation = False
+        if self.ac1 is not None or self.ac2 is not None:
+            self.IsAutoCorrelation = True
+        else:
+            self.IsAutoCorrelation = False
+
+ if self.trace1 is not None:
+ self.duration1_s = self.trace1[-1,0] - self.trace1[0,0]
+ self.countrate1 = np.average(self.trace1[:,1])
+ else:
+ self.countrate1 = None
+ self.duration1_s = None
+
+ if self.trace2 is not None:
+ self.duration2_s = self.trace2[-1,0] - self.trace2[0,0]
+ self.countrate2 = np.average(self.trace2[:,1])
+ else:
+ self.duration2_s = None
+ self.countrate2 = None
+
+
+        # Initial fitting range is the entire data set
+        self.fit_range = {}
+        names = ["ac1", "ac2", "cc12", "cc21"]
+        for name in names:
+            data = getattr(self, name, None)
+            if data is not None:
+                self.fit_range[name] = (data[0,0], data[-1,0])
+
+
+
+    def DoBackgroundCorrection(self):
+ """ Performs background correction.
+
+ Notes
+ -----
+ Thompson, N. Lakowicz, J.;
+ Geddes, C. D. & Lakowicz, J. R. (ed.)
+ Fluorescence Correlation Spectroscopy
+ Topics in Fluorescence Spectroscopy,
+ Springer US, 2002, 1, 337-378
+ """
+ # Autocorrelation
+ if ( not self.countrate1 in [0, None] and
+ self.background1 is not None and
+ self.ac1 is not None):
+ S = self.countrate1
+ B = self.background1.countrate
+            # Calculate correction factor
+            bgfactor = (S/(S-B))**2
+            # set plotting data (scale only the correlation amplitude)
+            self.plot_ac1 = self.ac1.copy()
+            self.plot_ac1[:,1] *= bgfactor
+ if ( not self.countrate2 in [0, None] and
+ self.background2 is not None and
+ self.ac2 is not None):
+ S = self.countrate2
+ B = self.background2.countrate
+            # Calculate correction factor
+            bgfactor = (S/(S-B))**2
+            # set plotting data (scale only the correlation amplitude)
+            self.plot_ac2 = self.ac2.copy()
+            self.plot_ac2[:,1] *= bgfactor
+
+        # Cross-correlation
+        if ( not self.countrate1 in [0, None] and
+             not self.countrate2 in [0, None] and
+             self.background1 is not None and
+             self.background2 is not None and
+             self.IsCrossCorrelation
+             ):
+            S = self.countrate1
+            S2 = self.countrate2
+            B = self.background1.countrate
+            B2 = self.background2.countrate
+            bgfactor = (S/(S-B)) * (S2/(S2-B2))
+            if self.cc12 is not None:
+                self.plot_cc12 = self.cc12.copy()
+                self.plot_cc12[:,1] *= bgfactor
+            if self.cc21 is not None:
+                self.plot_cc21 = self.cc21.copy()
+                self.plot_cc21[:,1] *= bgfactor
+
+
+ def GetBackground(self):
+ """ Returns the backgrounds of the data set.
+ """
+ return self.background1, self.background2
+
+
+ def GetPlotCorrelation(self):
+ """ Returns a dictionary with correlation curves.
+
+ Keys may include "ac1", "ac2", "cc12", "cc21" as well as
+ "ac1_fit", "ac2_fit", "cc12_fit", "cc21_fit".
+ """
+ if self.UseBackgroundCorrection:
+ self.DoBackgroundCorrection()
+
+ result = dict()
+
+ names = ["ac1", "ac2", "cc12", "cc21"]
+
+ for name in names:
+ if not hasattr(self, "plot_"+name):
+ rawdata = getattr(self, name)
+ if rawdata is not None:
+ result[name] = rawdata.copy()
+            else:
+                plotdata = getattr(self, "plot_"+name)
+                result[name] = plotdata.copy()
+ if hasattr(self, "plot_"+name+"_fit"):
+ fitted = getattr(self, "plot_"+name+"_fit")
+ result[name+"_fit"] = fitted.copy()
+
+ return result
+
+
+ def SetBackground(self, bgac1=None, bgac2=None):
+ """ Set the background of the measurement.
+
+ `bg*` is an instance of `pycorrfit.Background`.
+ """
+ self.background1 = bgac1
+ self.background2 = bgac2
+
+
+ def SetFitRange(self, start, end, components=None):
+ """ Set the range for fitting a correlation curve.
+
+ The unit is seconds.
+
+        Examples
+        --------
+        SetFitRange(.2, 1)
+        SetFitRange(.4, .7, ["ac2", "cc12"])
+        """
+        if components is None:
+            self.fit_range = {"ac1" : (start,end),
+                              "ac2" : (start,end),
+                              "cc12": (start,end),
+                              "cc21": (start,end) }
+        else:
+            for cc in components:
+                self.fit_range[cc] = (start,end)
+
+
+
+class Fit(object):
+ """ Used for fitting FCS data to models.
+ """
+ def __init__(self, raw_data, model_id, model_parms=None,
+ fit_bool=None, fit_ival=None, fit_ival_is_index=False,
+ weight_type="none", weight_spread=0, weights=None,
+ fit_algorithm="Lev-Mar",
+ verbose=False, uselatex=False):
+ """ Using an FCS model, fit the data of shape (N,2).
+
+
+ Parameters
+ ----------
+        raw_data : 2d `numpy.ndarray` of shape (N,2)
+            The data to be fitted. The first column contains
+            the x data (time in s). The second column contains
+            the y values (correlation data).
+        model_id : int
+            Model ID as in `pycorrfit.models.modeldict.keys()`.
+ model_parms : array-type of length P
+ The initial parameters for the specific model.
+ fit_bool : bool array of length P
+            Defines the model parameters that are variable (True) or
+            fixed (False) during fitting.
+ fit_ival : tuple
+ Interval of x values for fitting given in seconds or by
+ indices (see `fit_ival_is_index`). If the discrete array
+ does not match the interval, then the index closer towards
+ the center of the interval is used.
+ fit_ival_is_index : bool
+ Set to `True` if `fit_ival` is given in indices instead of
+ seconds.
+ weight_type : str
+ Type of weights. Should be one of
+
+ - 'none' (standard) : no weights.
+            - 'splineX' : fit an Xth order spline and calculate the
+              standard deviation from the difference
+            - 'model function' : calculate std. dev. from the
+              difference between fit function and data
+ - 'other' - use `weights`
+
+ weight_spread : int
+ Number of values left and right from a data point to include
+ to weight a data point. The total number of points included
+ is 2*`weight_spread` + 1.
+ weights : 1d `numpy.ndarray` of length (N)
+ Weights to use when `weight_type` is set to 'other'.
+ fit_algorithm : str
+ The fitting algorithm to be used for minimization. Have a
+ look at the PyCorrFit documentation for more information.
+ Should be one of
+
+ - 'Lev-Mar' : Least squares minimization
+ - 'Nelder-Mead' : Simplex
+ - 'BFGS' : quasi-Newton method of Broyden,
+ Fletcher, Goldfarb and Shanno
+ - 'Powell'
+ - 'Polak-Ribiere'
+ verbose : int
+ Increase verbosity by incrementing this number.
+ uselatex : bool
+ If verbose > 0, plotting will be performed with LaTeX.
+ """
+ self.y_full = raw_data[:,1].copy()
+ self.x_full = raw_data[:,0] * 1000 # convert to ms
+
+ # model function
+ self.func = mdls.GetModelFunctionFromId(model_id)
+
+ # fit parameters
+ if model_parms is None:
+ model_parms = mdls.GetModelParametersFromId(model_id)
+ self.model_parms = model_parms
+ self.model_parms_initial = 1*model_parms
+
+ # values to fit
+ if fit_bool is None:
+ fit_bool = mdls.GetModelFitBoolFromId(model_id)
+ assert len(fit_bool) == len(model_parms)
+ self.fit_bool = fit_bool
+
+        # fitting interval
+        if fit_ival is None:
+            fit_ival = (self.x_full[0], self.x_full[-1])
+ assert fit_ival[0] < fit_ival[1]
+ self.fit_ival = fit_ival
+
+ self.fit_ival_is_index = fit_ival_is_index
+
+
+ # weight type
+ assert weight_type.strip("1234567890") in ["none", "spline",
+ "model function", "other"]
+ self.weight_type = weight_type
+
+ # weight spread
+ assert int(weight_spread) >= 0
+ self.weight_spread = int(weight_spread)
+
+ # weights
+ if weight_type == "other":
+ assert isinstance(weights, np.ndarray)
+ self.weights = weights
+
+ self.fit_algorithm = fit_algorithm
+ self.verbose = verbose
+ self.uselatex = uselatex
+
+        self.ComputeXYArrays()
+ self.ComputeWeights()
+
+
+ def ComputeXYArrays(self):
+ """ Determine the fitting interval and set `self.x` and `self.y`
+
+ Sets:
+ self.x
+ self.y
+ self.fit_ival_index
+ """
+        if not self.fit_ival_is_index:
+            # we need to compute the indices that are inside the
+            # fitting interval.
+            #
+            # interval in x units:
+            #    self.fit_ival
+            # x values:
+            #    self.x_full
+            #
+            start = max(0, np.sum(self.x_full <= self.fit_ival[0]) - 1)
+            end = self.x_full.shape[0] - np.sum(self.x_full >= self.fit_ival[1])
+            self.fit_ival_index = (start, end)
+        else:
+            start, end = self.fit_ival
+            self.fit_ival_index = (start, end)
+ # We now have two values. Both values will be included in the
+ # cropped arrays.
+ self.x = self.x_full[start:end+1]
+ self.y = self.y_full[start:end+1]
+
+
+ def ComputeWeights(self):
+ """ Determines if we have weights and computes them.
+
+ sets
+ - self.fit_weights
+ - self.is_weighted_fit
+ """
+ ival = self.fit_ival_index
+ weight_spread = self.weight_spread
+ weight_type = self.weight_type
+
+ # some frequently used lengths
+ datalen = self.x.shape[0]
+ datalenfull = self.x_full.shape[0]
+ # Calculated dataweights
+ dataweights = np.zeros(datalen)
+
+        self.is_weighted_fit = True # will be set to False if no weights are used
+
+ if weight_type[:6] == "spline":
+ # Number of knots to use for spline
+ try:
+ knotnumber = int(weight_type[6:])
+ except:
+ if self.verbose > 1:
+ print("Could not get knot number. Setting it to 5.")
+ knotnumber = 5
+
+            # Compute borders for spline fit.
+            if ival[0] < weight_spread:
+                # non-optimal case: not enough points to the left,
+                # so we need to cut pmin
+                pmin = ival[0]
+            else:
+                # optimal case
+                pmin = weight_spread
+            if datalenfull - ival[1] < weight_spread:
+                # non-optimal case: not enough points to the right,
+                # so we need to cut pmax
+                pmax = datalenfull - ival[1]
+            else:
+                # optimal case
+                pmax = weight_spread
+ x = self.x_full[ival[0]-pmin:ival[1]+pmax]
+ y = self.y_full[ival[0]-pmin:ival[1]+pmax]
+ # we are fitting knots on a base 10 logarithmic scale.
+ xs = np.log10(x)
+ knots = np.linspace(xs[1], xs[-1], knotnumber+2)[1:-1]
+ try:
+ tck = spintp.splrep(xs, y, s=0, k=3, t=knots, task=-1)
+ ys = spintp.splev(xs, tck, der=0)
+ except:
+ if self.verbose > 0:
+ raise ValueError("Could not find spline fit with "+\
+ "{} knots.".format(knotnumber))
+ return
+ if self.verbose > 0:
+ try:
+ # If plotting module is available:
+ name = "Spline fit: "+str(knotnumber)+" knots"
+ plotting.savePlotSingle(name, 1*x, 1*y, 1*ys,
+ dirname=".",
+ uselatex=self.uselatex)
+ except:
+ # use matplotlib.pylab
+ try:
+ from matplotlib import pylab as plt
+ plt.xscale("log")
+ plt.plot(x, ys, x, y)
+ plt.show()
+ except ImportError:
+ # Tell the user to install matplotlib
+ print("Couldn't import pylab! - not Plotting")
+
+ ## Calculation of variance
+ # In some cases, the actual cropping interval from ival[0]
+            # to ival[1] is chosen such that the dataweights must be
+            # calculated from unknown datapoints
+            # (e.g. points+endcrop > len(dataexpfull)).
+ # We deal with this by multiplying dataweights with a factor
+ # corresponding to the missed points.
+ for i in range(datalen):
+ # Define start and end positions of the sections from
+ # where we wish to calculate the dataweights.
+ # Offset at beginning:
+ if i + ival[0] < weight_spread:
+ # The offset that occurs
+ offsetstart = weight_spread - i - ival[0]
+ offsetcrop = 0
+ elif ival[0] > weight_spread:
+ offsetstart = 0
+ offsetcrop = ival[0] - weight_spread
+ else:
+ offsetstart = 0
+ offsetcrop = 0
+ # i: counter on dataexp array
+ # start: counter on y array
+ start = i - weight_spread + offsetstart + ival[0] - offsetcrop
+ end = start + 2*weight_spread + 1 - offsetstart
+ dataweights[i] = (y[start:end] - ys[start:end]).std()
+ # The standard deviation at the end and the start of the
+ # array are multiplied by a factor corresponding to the
+ # number of bins that were not used for calculation of the
+ # standard deviation.
+ if offsetstart != 0:
+ reference = 2*weight_spread + 1
+ dividor = reference - offsetstart
+ dataweights[i] *= reference/dividor
+ # Do not substitute len(y[start:end]) with end-start!
+ # It is not the same!
+ backset = 2*weight_spread + 1 - len(y[start:end]) - offsetstart
+ if backset != 0:
+ reference = 2*weight_spread + 1
+ dividor = reference - backset
+ dataweights[i] *= reference/dividor
+ elif weight_type == "model function":
+ # Number of neighbouring (left and right) points to include
+ if ival[0] < weight_spread:
+ pmin = ival[0]
+ else:
+ pmin = weight_spread
+ if datalenfull - ival[1] < weight_spread:
+                pmax = datalenfull - ival[1]
+ else:
+ pmax = weight_spread
+ x = self.x_full[ival[0]-pmin:ival[1]+pmax]
+ y = self.y_full[ival[0]-pmin:ival[1]+pmax]
+ # Calculated dataweights
+ for i in np.arange(datalen):
+ # Define start and end positions of the sections from
+ # where we wish to calculate the dataweights.
+ # Offset at beginning:
+ if i + ival[0] < weight_spread:
+ # The offset that occurs
+ offsetstart = weight_spread - i - ival[0]
+ offsetcrop = 0
+ elif ival[0] > weight_spread:
+ offsetstart = 0
+ offsetcrop = ival[0] - weight_spread
+ else:
+ offsetstart = 0
+ offsetcrop = 0
+ # i: counter on dataexp array
+ # start: counter on dataexpfull array
+ start = i - weight_spread + offsetstart + ival[0] - offsetcrop
+ end = start + 2*weight_spread + 1 - offsetstart
+ #start = ival[0] - weight_spread + i
+ #end = ival[0] + weight_spread + i + 1
+ diff = y - self.func(self.model_parms, x)
+ dataweights[i] = diff[start:end].std()
+ # The standard deviation at the end and the start of the
+ # array are multiplied by a factor corresponding to the
+ # number of bins that were not used for calculation of the
+ # standard deviation.
+ if offsetstart != 0:
+ reference = 2*weight_spread + 1
+ dividor = reference - offsetstart
+ dataweights[i] *= reference/dividor
+ # Do not substitute len(diff[start:end]) with end-start!
+ # It is not the same!
+ backset = 2*weight_spread + 1 - len(diff[start:end]) - offsetstart
+ if backset != 0:
+ reference = 2*weight_spread + 1
+ dividor = reference - backset
+ dataweights[i] *= reference/dividor
+ elif self.fittype == "other":
+ # This means that the user knows the dataweights and already
+ # gave it to us.
+ assert self.weights is not None
+
+ # Check if these other weights have length of the cropped
+ # or the full array.
+ if len(self.weights) == datalen:
+ dataweights = self.weights
+ elif len(self.weights) == datalenfull:
+ dataweights = self.weights[ival[0], ival[1]+1]
+ else:
+ raise ValueError, \
+ "`weights` must have length of full or cropped array."
+ else:
+ # The fit.Fit() class will divide the function to minimize
+ # by the dataweights only if we have weights
+ self.is_weighted_fit = False
+
+ self.fit_weights = dataweights
+
+
+    def fit_function(self, parms, x):
+        """ The function that is minimized. The model function has more
+            parameters than are varied during fitting, so this wrapper
+            sets only the variable parameters and returns the
+            (optionally weighted) residuals.
+        """
+ # We reorder the needed variables to only use these that are
+ # not fixed for minimization
+ index = 0
+ for i in np.arange(len(self.model_parms)):
+ if self.fit_bool[i]:
+ self.model_parms[i] = parms[index]
+ index = index + 1
+ # Only allow physically correct parameters
+ self.model_parms = self.check_parms(self.model_parms)
+ tominimize = (self.func(self.model_parms, x) - self.y)
+ # Check if we have a weighted fit
+ if self.is_weighted_fit:
+ # Check dataweights for zeros and don't use these
+ # values for the least squares method.
+ with np.errstate(divide='ignore'):
+ tominimize = np.where(self.fit_weights!=0,
+ tominimize/self.fit_weights, 0)
+ ## There might be NaN values because of zero weights:
+ #tominimize = tominimize[~np.isinf(tominimize)]
+ return tominimize
+
+
+ def fit_function_scalar(self, parms, x):
+ """
+ Wrapper of `fit_function` for scalar minimization methods.
+ Returns the sum of squares of the input data.
+ (Methods that are not "Lev-Mar")
+ """
+        e = self.fit_function(parms, x)
+ return np.sum(e*e)
+
+
+    def get_chi_squared(self):
+        """
+        Calculate the reduced Chi² for the current parameters.
+        """
+        # Calculate degrees of freedom
+        dof = len(self.x) - len(self.model_parms) - 1
+        # This is exactly what is minimized by the scalar minimizers;
+        # pass only the parameters that are varied.
+        varied = [p for p, fit in zip(self.model_parms, self.fit_bool) if fit]
+        chi2 = self.fit_function_scalar(varied, self.x)
+        return chi2 / dof
+
+
+ def minimize(self):
+ """ This will minimize *self.fit_function()* using least squares.
+ *self.values*: The values with which the function is called.
+ *valuestofit*: A list with bool values that indicate which values
+ should be used for fitting.
+ Function *self.fit_function()* takes two parameters:
+ self.fit_function(parms, x) where *x* are x-values of *dataexp*.
+ """
+        assert np.sum(self.fit_bool) != 0, "No parameter selected for fitting."
+
+        # Get algorithm
+        algorithm = Algorithms[self.fit_algorithm][0]
+
+        # Start values: only the parameters flagged in `fit_bool` are varied
+        fitparms = np.array([p for p, fit in
+                             zip(self.model_parms, self.fit_bool) if fit])
+
+        # Begin fitting
+        if self.fit_algorithm == "Lev-Mar":
+            res = algorithm(self.fit_function, fitparms[:],
+                            args=(self.x,), full_output=1)
+        else:
+            res = algorithm(self.fit_function_scalar, fitparms[:],
+                            args=(self.x,), full_output=1)
+
+ # The optimal parameters
+ parmoptim = res[0]
+
+ # Now write the optimal parameters to our values:
+ index = 0
+ for i in range(len(self.model_parms)):
+            if self.fit_bool[i]:
+ self.model_parms[i] = parmoptim[index]
+ index = index + 1
+ # Only allow physically correct parameters
+ self.model_parms = self.check_parms(self.model_parms)
+ # Write optimal parameters back to this class.
+
+ self.chi = self.get_chi_squared()
+
+ # Compute error estimates for fit (Only "Lev-Mar")
+ if self.fit_algorithm == "Lev-Mar":
+ # This is the standard way to minimize the data. Therefore,
+ # we are a little bit more verbose.
+ if res[4] not in [1,2,3,4]:
+ warnings.warn("Optimal parameters not found: " + res[3])
+ try:
+ self.covar = res[1] * self.chi # The covariance matrix
+ except:
+ warnings.warn("PyCorrFit Warning: Error estimate not "+\
+ "possible, because we could not "+\
+ "calculate covariance matrix. Please "+\
+ "try reducing the number of fitting "+\
+ "parameters.")
+ self.parmoptim_error = None
+ else:
+ # Error estimation of fitted parameters
+ if self.covar is not None:
+ self.parmoptim_error = np.diag(self.covar)
+ else:
+ self.parmoptim_error = None
+
+
+
+def GetAlgorithmStringList():
+ """
+ Get supported fitting algorithms as strings.
+ Returns two lists (that are key-sorted) for key and string.
+ """
+ A = Algorithms
+ out1 = list()
+ out2 = list()
+ a = list(A.keys())
+ a.sort()
+ for key in a:
+ out1.append(key)
+ out2.append(A[key][1])
+ return out1, out2
+
+
+# As of version 0.8.3, we support several minimization methods for
+# fitting data to experimental curves.
+# These functions must be callable like scipy.optimize.leastsq. e.g.
+#     res = spopt.leastsq(self.fit_function, fitparms[:],
+#                         args=(self.x,), full_output=1)
+Algorithms = dict()
+
+# the original one is the least squares fit "leastsq"
+Algorithms["Lev-Mar"] = [spopt.leastsq,
+ "Levenberg-Marquardt"]
+
+# simplex
+Algorithms["Nelder-Mead"] = [spopt.fmin,
+ "Nelder-Mead (downhill simplex)"]
+
+# quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno
+Algorithms["BFGS"] = [spopt.fmin_bfgs,
+ "BFGS (quasi-Newton)"]
+
+# modified Powell-method
+Algorithms["Powell"] = [spopt.fmin_powell,
+ "modified Powell (conjugate direction)"]
+
+# nonlinear conjugate gradient method by Polak and Ribiere
+Algorithms["Polak-Ribiere"] = [spopt.fmin_cg,
+ "Polak-Ribiere (nonlinear conjugate gradient)"]
+
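A minimal sketch of how the new classes fit together, using synthetic data
(all numbers are purely illustrative; the API follows the constructors above):

    import numpy as np
    from pycorrfit.fcs_data_set import FCSDataSet

    # synthetic 30 s intensity trace: (time [s], countrate [Hz])
    t = np.linspace(0, 30, 300)
    trace = np.column_stack((t, 1e4 + 100.0*np.random.randn(t.size)))
    # synthetic autocorrelation curve: (lag time, G(tau))
    tau = np.logspace(-6, 1, 200)
    ac = np.column_stack((tau, 1.0/(1.0 + tau/1e-3)))
    # background trace with a constant 500 Hz countrate
    bg = np.column_stack((t, 500.0*np.ones(t.size)))

    ds = FCSDataSet(ac=ac, trace=trace, background1=bg)
    curves = ds.GetPlotCorrelation()   # {"ac1": background-corrected curve}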
diff --git a/src/fitting.py b/pycorrfit/fitting.py
similarity index 99%
rename from src/fitting.py
rename to pycorrfit/fitting.py
index 96a0e17..411ead9 100644
--- a/src/fitting.py
+++ b/pycorrfit/fitting.py
@@ -21,8 +21,11 @@
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
-
-import matplotlib.pyplot as plt
+try:
+ import matplotlib.pyplot as plt
+except:
+ pass
+
import numpy as np
from scipy import interpolate as spintp
from scipy import optimize as spopt
@@ -30,7 +33,7 @@ from scipy import optimize as spopt
# If we use this module with PyCorrFit, we can plot things with latex using
# our own special thing.
try:
- import plotting
+ from . import plotting
except:
pass
diff --git a/src/frontend.py b/pycorrfit/frontend.py
similarity index 77%
rename from src/frontend.py
rename to pycorrfit/frontend.py
index 83052be..11d2e31 100644
--- a/src/frontend.py
+++ b/pycorrfit/frontend.py
@@ -1,35 +1,34 @@
# -*- coding: utf-8 -*-
-""" PyCorrFit
-
- Module frontend
- The frontend displays the GUI (Graphic User Interface). All necessary
- functions and modules are called from here.
-
- Dimensionless representation:
- unit of time : 1 ms
- unit of inverse time: 10³ /s
- unit of distance : 100 nm
- unit of Diff.coeff : 10 µm²/s
- unit of inverse area: 100 /µm²
- unit of inv. volume : 1000 /µm³
-
- Copyright (C) 2011-2012 Paul Müller
-
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program. If not, see <http://www.gnu.org/licenses/>.
-"""
+u""" PyCorrFit - Module frontend
+
+The frontend displays the GUI (Graphic User Interface). All necessary
+functions and modules are called from here.
+
+Dimensionless representation:
+unit of time : 1 ms
+unit of inverse time: 10³ /s
+unit of distance : 100 nm
+unit of Diff.coeff : 10 µm²/s
+unit of inverse area: 100 /µm²
+unit of inv. volume : 1000 /µm³
+
+Copyright (C) 2011-2012 Paul Müller
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+"""
+from distutils.version import LooseVersion # For version checking
import os
import webbrowser
import wx # GUI interface wxPython
@@ -40,6 +39,7 @@ import numpy as np # NumPy
import platform
import sys # System stuff
import traceback # for Error handling
+import warnings
try:
# contains e.g. update and icon, but no vital things.
@@ -49,16 +49,20 @@ except ImportError:
print " Update function will not work."
# PyCorrFit modules
-import doc # Documentation/some texts
-import edclasses
+from . import doc # Documentation/some texts
+from . import edclasses
-import models as mdls
-import openfile as opf # How to treat an opened file
-import page
-import plotting
-import readfiles
-import tools # Some tools
-import usermodel
+from . import models as mdls
+from . import openfile as opf # How to treat an opened file
+from . import page
+try:
+ from . import plotting
+except ImportError:
+ warnings.warn("Submodule `pycorrfit.plotting` will not be "+\
+ "available. Reason: {}.".format(sys.exc_info()[1].message))
+from . import readfiles
+from . import tools # Some tools
+from . import usermodel
## On Windows XP I had problems with the unicode Characters.
@@ -71,7 +75,16 @@ if platform.system() == 'Windows':
# ~paulmueller
-###########################################################
+########################################################################
+class ExceptionDialog(wx.MessageDialog):
+ """"""
+ def __init__(self, msg):
+ """Constructor"""
+ wx.MessageDialog.__init__(self, None, msg, "Error",
+ wx.OK|wx.ICON_ERROR)
+
+
+########################################################################
class FlatNotebookDemo(fnb.FlatNotebook):
"""
Flatnotebook class
@@ -89,9 +102,25 @@ class FlatNotebookDemo(fnb.FlatNotebook):
agwStyle=style)
+class MyApp(wx.App):
+ def MacOpenFile(self,filename):
+ """
+ """
+ if filename.endswith(".pcfs"):
+ stri = self.frame.OnClearSession()
+ if stri == "clear":
+ self.frame.OnOpenSession(sessionfile=filename)
+ elif filename.endswith(".txt"):
+ self.frame.OnAddModel(modfile=filename)
+ else:
+ self.frame.OnLoadBatch(dataname=filename)
+
+
###########################################################
class MyFrame(wx.Frame):
def __init__(self, parent, id, version):
+
+ sys.excepthook = MyExceptionHook
## Set initial variables that make sense
tau = 10**np.linspace(-6,8,1001)
@@ -173,6 +202,7 @@ class MyFrame(wx.Frame):
panel.SetSizer(sizer)
self.Layout()
+ self.Centre()
self.Show()
# Notebook Handler
@@ -440,93 +470,100 @@ class MyFrame(wx.Frame):
info.SetName('PyCorrFit')
info.SetVersion(self.version)
info.SetDescription(description)
- info.SetCopyright('(C) 2011 - 2012 Paul Müller')
+ info.SetCopyright(u'(C) 2011 - 2012 Paul Müller')
info.SetWebSite(doc.HomePage)
info.SetLicence(licence)
info.SetIcon(misc.getMainIcon(pxlength=64))
- info.AddDeveloper('Paul Müller')
- info.AddDocWriter('Thomas Weidemann, Paul Müller')
+ info.AddDeveloper(u'Paul Müller')
+ info.AddDocWriter(u'Thomas Weidemann, Paul Müller')
wx.AboutBox(info)
- def OnAddModel(self, event=None):
+ def OnAddModel(self, event=None, modfile=None):
""" Import a model from an external .txt file. See example model
functions available on the web.
"""
# Add a model using the dialog.
filters = "text file (*.txt)|*.txt"
- dlg = wx.FileDialog(self, "Open model file",
+ if modfile is None:
+ dlg = wx.FileDialog(self, "Open model file",
self.dirname, "", filters, wx.OPEN)
- if dlg.ShowModal() == wx.ID_OK:
- NewModel = usermodel.UserModel(self)
- # Workaround since 0.7.5
- (dirname, filename) = os.path.split(dlg.GetPath())
- #filename = dlg.GetFilename()
- #dirname = dlg.GetDirectory()
- self.dirname = dirname
- # Try to import a selected .txt file
- try:
- NewModel.GetCode( os.path.join(dirname, filename) )
- except NameError:
- # sympy is probably not installed
- # Warn the user
- text = ("SymPy not found.\n"+
- "In order to import user defined model\n"+
- "functions, please install Sympy\n"+
- "version 0.7.2 or higher.\nhttp://sympy.org/")
- if platform.system().lower() == 'linux':
- text += ("\nSymPy is included in the package:\n"+
- " 'python-sympy'")
- dlg = wx.MessageDialog(None, text, 'SymPy not found',
- wx.OK | wx.ICON_EXCLAMATION)
- dlg.ShowModal()
- return
- except:
- # The file does not seem to be what it seems to be.
- info = sys.exc_info()
- errstr = "Unknown file format:\n"
- errstr += str(filename)+"\n\n"
- errstr += str(info[0])+"\n"
- errstr += str(info[1])+"\n"
- for tb_item in traceback.format_tb(info[2]):
- errstr += tb_item
- dlg = wx.MessageDialog(self, errstr, "Error",
- style=wx.ICON_ERROR|wx.OK|wx.STAY_ON_TOP)
- dlg.ShowModal()
- del NewModel
+ if dlg.ShowModal() == wx.ID_OK:
+ # Workaround since 0.7.5
+ (dirname, filename) = os.path.split(dlg.GetPath())
+ #filename = dlg.GetFilename()
+ #dirname = dlg.GetDirectory()
+ self.dirname = dirname
+ # Try to import a selected .txt file
+ else:
+ self.dirname = dlg.GetDirectory()
+ dlg.Destroy()
return
- # Test the code for sympy compatibility.
- # If you write your own parser, this might be easier.
- try:
- NewModel.TestFunction()
+ else:
+ dirname, filename = os.path.split(modfile)
+ self.dirname = dirname
- except:
- # This means that the imported model file could be
- # contaminated. Ask the user how to proceed.
- text = "The model parsing check raised an Error.\n"+\
- "This could be the result of a wrong Syntax\n"+\
- "or an error of the parser.\n"+\
- "This might be dangerous. Procceed\n"+\
- "only, if you trust the source of the file.\n"+\
- "Try and import offensive file: "+filename+"?"
- dlg2 = wx.MessageDialog(self, text, "Unsafe Operation",
- style=wx.ICON_EXCLAMATION|wx.YES_NO|wx.STAY_ON_TOP)
- if dlg2.ShowModal() == wx.ID_YES:
- NewModel.ImportModel()
- else:
- del NewModel
- return
- else:
- # The model was loaded correctly
+ NewModel = usermodel.UserModel(self)
+ try:
+ NewModel.GetCode( os.path.join(dirname, filename) )
+ except NameError:
+ # sympy is probably not installed
+ # Warn the user
+ text = ("SymPy not found.\n"+
+ "In order to import user defined model\n"+
+ "functions, please install Sympy\n"+
+ "version 0.7.2 or higher.\nhttp://sympy.org/")
+ if platform.system().lower() == 'linux':
+ text += ("\nSymPy is included in the package:\n"+
+ " 'python-sympy'")
+ dlg = wx.MessageDialog(None, text, 'SymPy not found',
+ wx.OK | wx.ICON_EXCLAMATION)
+ dlg.ShowModal()
+ return
+ except:
+ # The file does not seem to be what it seems to be.
+ info = sys.exc_info()
+ errstr = "Unknown file format:\n"
+ errstr += str(filename)+"\n\n"
+ errstr += str(info[0])+"\n"
+ errstr += str(info[1])+"\n"
+ for tb_item in traceback.format_tb(info[2]):
+ errstr += tb_item
+ dlg = wx.MessageDialog(self, errstr, "Error",
+ style=wx.ICON_ERROR|wx.OK|wx.STAY_ON_TOP)
+ dlg.ShowModal()
+ del NewModel
+ return
+ # Test the code for sympy compatibility.
+ # If you write your own parser, this might be easier.
+ try:
+ NewModel.TestFunction()
+
+ except:
+ # This means that the imported model file could be
+ # contaminated. Ask the user how to proceed.
+            text = "The model parsing check raised an Error.\n"+\
+                   "This could be the result of wrong syntax\n"+\
+                   "or an error of the parser.\n"+\
+                   "This might be dangerous. Proceed\n"+\
+                   "only if you trust the source of the file.\n"+\
+                   "Try to import the offending file: "+filename+"?"
+ dlg2 = wx.MessageDialog(self, text, "Unsafe Operation",
+ style=wx.ICON_EXCLAMATION|wx.YES_NO|wx.STAY_ON_TOP)
+ if dlg2.ShowModal() == wx.ID_YES:
NewModel.ImportModel()
-
+ else:
+ del NewModel
+ return
else:
- dirname = dlg.GetDirectory()
- dlg.Destroy()
+ # The model was loaded correctly
+ NewModel.ImportModel()
+
+
self.dirname = dirname
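The import flow above (GetCode, then TestFunction, then ImportModel) leans on SymPy to parse a user-defined model before it is registered. A minimal sketch of such a parsing check, assuming only that the model is given as an expression string; the helper below is illustrative, not PyCorrFit's API:

    import sympy
    from sympy import SympifyError

    def check_model_expression(expr_str, parameters):
        # Map the declared parameter names (plus the lag time 'tau')
        # to SymPy symbols, then try to parse the expression.
        symbols = dict((name, sympy.Symbol(name)) for name in parameters)
        symbols["tau"] = sympy.Symbol("tau")
        try:
            expr = sympy.sympify(expr_str, locals=symbols)
        except SympifyError:
            return False
        # Reject expressions that reference undeclared symbols.
        return expr.free_symbols <= set(symbols.values())

    # check_model_expression("1/n * 1/(1 + tau/tauD)", ["n", "tauD"]) --> True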
- def OnClearSession(self,e=None,clearmodels=False):
+ def OnClearSession(self, e=None, clearmodels=False):
"""
Clear the entire session
@@ -683,7 +720,7 @@ class MyFrame(wx.Frame):
# Get the Page
if Page is None:
Page = self.notebook.GetCurrentPage()
- keys = self.ToolsOpen.keys()
+ keys = list(self.ToolsOpen.keys())
for key in keys:
# Update the information
self.ToolsOpen[key].OnPageChanged(Page, trigger=trigger)
@@ -707,6 +744,7 @@ class MyFrame(wx.Frame):
        """Import experimental data from all filetypes specified in
*opf.Filetypes*.
Is called by the curmenu and applies to currently opened model.
+ Calls self.ImportData.
"""
# Open a data file
# Get Data
@@ -957,38 +995,50 @@ class MyFrame(wx.Frame):
dlg.ShowModal()
- def OnLoadBatch(self, e):
+    def OnLoadBatch(self, e=None, dataname=None):
        """ Open multiple data files and apply a single model to them.
We will create a new window where the user may decide which
model to use.
"""
- ## Browse the file system
- SupFiletypes = opf.Filetypes.keys()
- # Sort them so we have "All suported filetypes" up front
- SupFiletypes.sort()
- filters = ""
- for i in np.arange(len(SupFiletypes)):
- # Add to the filetype filter
- filters = filters+SupFiletypes[i]
- if i+1 != len(SupFiletypes):
- # Add a separator if item is not last item
- filters = filters+"|"
- dlg = wx.FileDialog(self, "Open data files",
- self.dirname, "", filters, wx.OPEN|wx.FD_MULTIPLE)
- if dlg.ShowModal() == wx.ID_OK:
- Datafiles = dlg.GetFilenames()
- # We rely on sorted filenames
- Datafiles.sort()
- # Workaround since 0.7.5
- paths = dlg.GetPaths()
- if len(paths) != 0:
- self.dirname = os.path.split(paths[0])[0]
+ if dataname is None:
+ ## Browse the file system
+ SupFiletypes = opf.Filetypes.keys()
+            # Sort them so we have "All supported filetypes" up front
+ SupFiletypes.sort()
+ filters = ""
+ for i in np.arange(len(SupFiletypes)):
+ # Add to the filetype filter
+ filters = filters+SupFiletypes[i]
+ if i+1 != len(SupFiletypes):
+ # Add a separator if item is not last item
+ filters = filters+"|"
+ dlg = wx.FileDialog(self, "Open data files",
+ self.dirname, "", filters, wx.OPEN|wx.FD_MULTIPLE)
+ if dlg.ShowModal() == wx.ID_OK:
+ Datafiles = dlg.GetFilenames()
+ # We rely on sorted filenames
+ Datafiles.sort()
+ # Workaround since 0.7.5
+ paths = dlg.GetPaths()
+ if len(paths) != 0:
+ self.dirname = os.path.split(paths[0])[0]
+ else:
+ self.dirname = dlg.GetDirectory()
+ dlg.Destroy()
else:
- self.dirname = dlg.GetDirectory()
- dlg.Destroy()
+ dlg.Destroy()
+ return
else:
- dlg.Destroy()
- return
+ Datafiles = list()
+ if isinstance(dataname, list):
+ for item in dataname:
+ Datafiles.append(os.path.split(item)[1])
+                self.dirname, filename = os.path.split(dataname[0])
+ else:
+ Datafiles.append(os.path.split(dataname)[1])
+ self.dirname, filename = os.path.split(dataname)
+ Datafiles.sort()
+
## Get information from the data files and let the user choose
## which type of curves to load and the corresponding model.
# List of filenames that could not be opened
@@ -1000,12 +1050,27 @@ class MyFrame(wx.Frame):
Filename = list() # there might be zipfiles with additional name info
#Run = list() # Run number connecting AC1 AC2 CC12 CC21
Curveid = list() # Curve ID of each curve in a file
- for afile in Datafiles:
+
+ # Display a progress dialog for file import
+ N = len(Datafiles)
+ style = wx.PD_REMAINING_TIME|wx.PD_SMOOTH|wx.PD_AUTO_HIDE|\
+ wx.PD_CAN_ABORT
+ dlgi = wx.ProgressDialog("Import", "Loading data...",
+ maximum = N, parent=self, style=style)
+ for j in np.arange(N):
+ afile=Datafiles[j]
+ # Let the user abort, if he wants to:
+ if dlgi.Update(j, "Loading data: "+afile)[0] == False:
+ dlgi.Destroy()
+ return
+ #Stuff = readfiles.openAny(self.dirname, afile)
try:
Stuff = readfiles.openAny(self.dirname, afile)
except:
                # The file is not what it appears to be.
BadFiles.append(afile)
+ warnings.warn("Problem processing a file."+\
+ " Reason: {}.".format(sys.exc_info()[1].message))
else:
for i in np.arange(len(Stuff["Type"])):
Correlation.append(Stuff["Correlation"][i])
@@ -1013,6 +1078,8 @@ class MyFrame(wx.Frame):
Type.append(Stuff["Type"][i])
Filename.append(Stuff["Filename"][i])
#Curveid.append(str(i+1))
+ dlgi.Destroy()
+
# Add number of the curve within a file.
nameold = None
counter = 1
@@ -1175,10 +1242,16 @@ class MyFrame(wx.Frame):
dlg.Destroy()
- def OnOpenSession(self,e=None,sessionfile=None):
- """Open a previously saved session.
- Optional parameter sessionfile defines the file that shall be
- automatically loaded (without a dialog)
+ def OnOpenSession(self, e=None, sessionfile=None):
+ """ Displays a dialog for opening PyCorrFit sessions
+
+ Optional parameter sessionfile defines the file that shall be
+ automatically loaded (without a dialog).
+
+
+ See Also
+ --------
+ `pycorrfit.openfile.LoadSessionData`
"""
# We need to clear the session before opening one.
# This will also ask, if user wants to save the current session.
@@ -1187,142 +1260,204 @@ class MyFrame(wx.Frame):
# User pressed abort when he was asked if he wants to save
# the session. Therefore, we cannot open a new session.
return "abort"
- Infodict, self.dirname, filename = \
- opf.OpenSession(self, self.dirname, sessionfile=sessionfile)
- # Check, if a file has been opened
- if filename is not None:
- self.filename = filename
- self.SetTitleFCS(self.filename)
- ## Background traces
- try:
- self.Background = Infodict["Backgrounds"]
- except:
- pass
- ## Preferences
- ## if Preferences is Not None:
- ## add them!
- # External functions
- for key in Infodict["External Functions"].keys():
- NewModel = usermodel.UserModel(self)
- # NewModel.AddModel(self, code)
- # code is a list with strings
- # each string is one line
- NewModel.AddModel(
- Infodict["External Functions"][key].splitlines())
- NewModel.ImportModel()
- # Internal functions:
- N = len(Infodict["Parameters"])
- # Reset tabcounter
- self.tabcounter = 1
- # Show a nice progress dialog:
- style = wx.PD_REMAINING_TIME|wx.PD_SMOOTH|wx.PD_AUTO_HIDE|\
- wx.PD_CAN_ABORT
- dlg = wx.ProgressDialog("Import", "Loading pages..."
- , maximum = N, parent=self, style=style)
- for i in np.arange(N):
- # Let the user abort, if he wants to:
- if dlg.Update(i+1, "Loading pages...")[0] == False:
+
+ ## Create user dialog
+ wc = opf.session_wildcards
+ wcstring = "PyCorrFit session (*.pcfs)|*{};*{}".format(
+ wc[0], wc[1])
+ if sessionfile is None:
+ dlg = wx.FileDialog(self, "Open session file",
+ self.dirname, "", wcstring, wx.OPEN)
+ # user cannot do anything until he clicks "OK"
+ if dlg.ShowModal() == wx.ID_OK:
+ sessionfile = dlg.GetPath()
+ (self.dirname, self.filename) = os.path.split(
+ sessionfile)
+            else:
+                # User did not press OK; stop this function
+                self.dirname = dlg.GetDirectory()
+                dlg.Destroy()
+                return "abort"
+ dlg.Destroy()
+ Infodict = opf.LoadSessionData(sessionfile)
+
+ ## Check for correct version
+ try:
+ arcv = LooseVersion(Infodict["Version"])
+ thisv = LooseVersion(self.version.strip())
+ if arcv > thisv:
+                errstring = "Your version of PyCorrFit ("+str(thisv)+\
+                            ") is too old to open this session ("+\
+                            str(arcv).strip()+").\n"+\
+                            "Please download the latest version of"+\
+                            " PyCorrFit from \n"+doc.HomePage+".\n"+\
+ "Continue opening this session?"
+ dlg = edclasses.MyOKAbortDialog(self, errstring, "Warning")
+ returns = dlg.ShowModal()
+ if returns == wx.ID_OK:
dlg.Destroy()
- return
- # Add a new page to the notebook. This page is created with
- # variables from models.py. We will write our data to
- # the page later.
- counter = Infodict["Parameters"][i][0]
- modelid = Infodict["Parameters"][i][1]
- Newtab = self.add_fitting_tab(modelid=modelid,
- counter=counter)
- # Add experimental Data
- # Import dataexp:
- number = counter.strip().strip(":").strip("#")
- pageid = int(number)
- [tau, dataexp] = Infodict["Correlations"][pageid]
- if dataexp is not None:
- # Write experimental data
- Newtab.dataexpfull = dataexp
- Newtab.dataexp = True # not None
- # As of 0.7.3: Add external weights to page
- try:
- Newtab.external_std_weights = \
- Infodict["External Weights"][pageid]
- except KeyError:
- # No data
- pass
else:
- # Add external weights to fitbox
- WeightKinds = Newtab.Fitbox[1].GetItems()
- wkeys = Newtab.external_std_weights.keys()
- wkeys.sort()
- for wkey in wkeys:
- WeightKinds += [wkey]
- Newtab.Fitbox[1].SetItems(WeightKinds)
- self.UnpackParameters(Infodict["Parameters"][i], Newtab,
- init=True)
- # Supplementary data
- try:
- Sups = Infodict["Supplements"][pageid]
- except KeyError:
- pass
- else:
- errdict = dict()
- for errInfo in Sups["FitErr"]:
- for ierr in np.arange(len(errInfo)):
- errkey = mdls.valuedict[modelid][0][int(errInfo[0])]
- errval = float(errInfo[1])
- errdict[errkey] = errval
- Newtab.parmoptim_error = errdict
- try:
- Newtab.GlobalParameterShare = Sups["Global Share"]
- except:
- pass
- try:
- Newtab.chi2 = Sups["Chi sq"]
- except:
- pass
- # Set Title of the Page
+ dlg.Destroy()
+ return "abort"
+ except:
+ pass
+
+ self.SetTitleFCS(self.filename)
+ ## Background traces
+ try:
+ self.Background = Infodict["Backgrounds"]
+ except:
+ pass
+ ## Preferences
+ ## if Preferences is Not None:
+ ## add them!
+ # External functions
+ for key in Infodict["External Functions"].keys():
+ NewModel = usermodel.UserModel(self)
+ # NewModel.AddModel(self, code)
+ # code is a list with strings
+ # each string is one line
+ NewModel.AddModel(
+ Infodict["External Functions"][key].splitlines())
+ NewModel.ImportModel()
+ # Internal functions:
+ N = len(Infodict["Parameters"])
+ # Reset tabcounter
+ self.tabcounter = 1
+ # Show a nice progress dialog:
+ style = wx.PD_REMAINING_TIME|wx.PD_SMOOTH|wx.PD_AUTO_HIDE|\
+ wx.PD_CAN_ABORT
+ dlg = wx.ProgressDialog("Import", "Loading pages...",
+ maximum = N, parent=self, style=style)
+ for i in np.arange(N):
+ # Let the user abort, if he wants to:
+ if dlg.Update(i+1, "Loading pages...")[0] == False:
+ dlg.Destroy()
+ return
+ # Add a new page to the notebook. This page is created with
+ # variables from models.py. We will write our data to
+ # the page later.
+ counter = Infodict["Parameters"][i][0]
+ modelid = Infodict["Parameters"][i][1]
+ Newtab = self.add_fitting_tab(modelid=modelid,
+ counter=counter)
+ # Add experimental Data
+ # Import dataexp:
+ number = counter.strip().strip(":").strip("#")
+ pageid = int(number)
+ [tau, dataexp] = Infodict["Correlations"][pageid]
+ if dataexp is not None:
+ # Write experimental data
+ Newtab.dataexpfull = dataexp
+ Newtab.dataexp = True # not None
+ # As of 0.7.3: Add external weights to page
+ try:
+ Newtab.external_std_weights = \
+ Infodict["External Weights"][pageid]
+ except KeyError:
+ # No data
+ pass
+ else:
+ # Add external weights to fitbox
+ WeightKinds = Newtab.Fitbox[1].GetItems()
+ wkeys = Newtab.external_std_weights.keys()
+ wkeys.sort()
+ for wkey in wkeys:
+ WeightKinds += [wkey]
+ Newtab.Fitbox[1].SetItems(WeightKinds)
+ self.UnpackParameters(Infodict["Parameters"][i], Newtab,
+ init=True)
+ # Supplementary data
+ try:
+ Sups = Infodict["Supplements"][pageid]
+ except KeyError:
+ pass
+ else:
+ errdict = dict()
+ for errInfo in Sups["FitErr"]:
+ for ierr in np.arange(len(errInfo)):
+ errkey = mdls.valuedict[modelid][0][int(errInfo[0])]
+ errval = float(errInfo[1])
+ errdict[errkey] = errval
+ Newtab.parmoptim_error = errdict
try:
- Newtab.tabtitle.SetValue(Infodict["Comments"][pageid])
+ Newtab.GlobalParameterShare = Sups["Global Share"]
except:
- pass # no page title
- # Import the intensity trace
+ pass
try:
- trace = Infodict["Traces"][pageid]
+ Newtab.chi2 = Sups["Chi sq"]
except:
- trace = None
- if trace is not None:
- if Newtab.IsCrossCorrelation is False:
- Newtab.trace = trace[0]
- Newtab.traceavg = trace[0][:,1].mean()
- else:
- Newtab.tracecc = trace
- # Plot everything
- Newtab.PlotAll(trigger="page_add_batch")
- # Set Session Comment
+ pass
+ # Set Title of the Page
try:
- self.SessionComment = Infodict["Comments"]["Session"]
+ Newtab.tabtitle.SetValue(Infodict["Comments"][pageid])
except:
- pass
+ pass # no page title
+ # Import the intensity trace
try:
- Infodict["Preferences"] # not used yet
+ trace = Infodict["Traces"][pageid]
except:
- pass
- if self.notebook.GetPageCount() > 0:
- # Enable the "Current" Menu
- self.EnableToolCurrent(True)
- self.OnFNBPageChanged(trigger="page_add_finalize")
- else:
- # There are no pages in the session.
- # Disable some menus and close some dialogs
- self.EnableToolCurrent(False)
+ trace = None
+ if trace is not None:
+ if Newtab.IsCrossCorrelation is False:
+ Newtab.trace = trace[0]
+ Newtab.traceavg = trace[0][:,1].mean()
+ else:
+ Newtab.tracecc = trace
+ # Plot everything
+ Newtab.PlotAll(trigger="page_add_batch")
+ # Set Session Comment
+ dlg.Destroy()
+ try:
+ self.SessionComment = Infodict["Comments"]["Session"]
+ except:
+ pass
+ try:
+ Infodict["Preferences"] # not used yet
+ except:
+ pass
+ if self.notebook.GetPageCount() > 0:
+ # Enable the "Current" Menu
+ self.EnableToolCurrent(True)
+ self.OnFNBPageChanged(trigger="page_add_finalize")
+ else:
+ # There are no pages in the session.
+ # Disable some menus and close some dialogs
+ self.EnableToolCurrent(False)
def OnSaveData(self,e=None):
- # Save the Data
- """ Save calculated Data including optional fitted exp. data. """
+ """ Opens a dialog for saving correlation data of a Page
+
+ Also saves the parameters that are accessible in the Info
+ dialog and the trace(s).
+ """
# What Data do we wish to save?
Page = self.notebook.GetCurrentPage()
- # Export CSV
- # If no file has been selected, self.filename will be set to 'None'.
- self.dirname, self.filename = opf.saveCSV(self, self.dirname, Page)
+ # Export CSV data
+ filename = Page.tabtitle.GetValue().strip()+Page.counter[:2]+".csv"
+ dlg = wx.FileDialog(self, "Save curve", self.dirname, filename,
+ "Correlation with trace (*.csv)|*.csv;*.CSV"+\
+ "|Correlation only (*.csv)|*.csv;*.CSV",
+ wx.SAVE|wx.FD_OVERWRITE_PROMPT)
+ # user cannot do anything until he clicks "OK"
+ if dlg.ShowModal() == wx.ID_OK:
+ path = dlg.GetPath() # Workaround since 0.7.5
+ if not path.lower().endswith(".csv"):
+ path += ".csv"
+ (self.dirname, self.filename) = os.path.split(path)
+
+
+ if dlg.GetFilterIndex() == 0:
+ savetrace = True
+ else:
+ savetrace = False
+ opf.ExportCorrelation(path, Page, tools.info,
+ savetrace=savetrace)
+        else:
+            self.dirname = dlg.GetDirectory()
+
+ dlg.Destroy()
def OnSavePlotCorr(self, e=None):
@@ -1346,12 +1481,18 @@ class MyFrame(wx.Frame):
def OnSaveSession(self,e=None):
- """
- Save a session to a session file
+ """ Displays a dialog for saving PyCorrFit sessions
- Returns:
- - the filename of the session if it was saved
- - None, if the user canceled the action
+
+ Returns
+ -------
+ - the filename of the session if it was saved
+ - None, if the user canceled the action
+
+
+ See Also
+ --------
+ `pycorrfit.openfile.SaveSessionData`
"""
# Parameters are all in one dictionary:
Infodict = dict()
@@ -1415,11 +1556,23 @@ class MyFrame(wx.Frame):
Infodict["External Weights"][counter] = Page.external_std_weights
# Append Session Comment:
Infodict["Comments"]["Session"] = self.SessionComment
- # Save everything
- # If no file has been selected, self.filename will be set to 'None'.
- self.dirname, self.filename = opf.SaveSession(self, self.dirname,
- Infodict)
- # Set title of our window
+ # File dialog
+ dlg = wx.FileDialog(self, "Save session file", self.dirname, "",
+ "PyCorrFit session (*.pcfs)|*.pcfs",
+ wx.SAVE|wx.FD_OVERWRITE_PROMPT)
+ if dlg.ShowModal() == wx.ID_OK:
+ # Save everything
+ path = dlg.GetPath() # Workaround since 0.7.5
+ (self.dirname, self.filename) = os.path.split(path)
+ opf.SaveSessionData(path, Infodict)
+ else:
+ self.dirname = dlg.GetDirectory()
+ self.filename = None
+ # Set title of our window
+ if (self.filename is not None and
+ not self.filename.endswith(".pcfs")):
+ self.filename += ".pcfs"
+ dlg.Destroy()
self.SetTitleFCS(self.filename)
return self.filename
@@ -1635,3 +1788,23 @@ class MyFrame(wx.Frame):
self.SetTitle('PyCorrFit ' + self.version + title)
else:
self.SetTitle('PyCorrFit ' + self.version)
+
+
+def MyExceptionHook(etype, value, trace):
+ """
+ Handler for all unhandled exceptions.
+
+    :param `etype`: the exception type (`SyntaxError`, `ZeroDivisionError`, etc.);
+    :type `etype`: `Exception`
+    :param string `value`: the exception error message;
+    :param string `trace`: the traceback header, if any (otherwise, it prints the
+     standard Python header: ``Traceback (most recent call last)``).
+ """
+ frame = wx.GetApp().GetTopWindow()
+ tmp = traceback.format_exception(etype, value, trace)
+ exception = "".join(tmp)
+
+ dlg = ExceptionDialog(exception)
+ dlg.ShowModal()
+ dlg.Destroy()
+ wx.EndBusyCursor()
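MyExceptionHook only takes effect once it is registered as the process-wide handler. The registration is not shown in this excerpt, but the usual wxPython pattern is a one-liner before the main loop starts (a sketch, not necessarily how this commit wires it up):

    import sys

    # Route uncaught exceptions from wx event handlers into the
    # ExceptionDialog instead of a console traceback.
    sys.excepthook = MyExceptionHook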
diff --git a/src/icon.py b/pycorrfit/icon.py
similarity index 100%
rename from src/icon.py
rename to pycorrfit/icon.py
diff --git a/src/PyCorrFit.py b/pycorrfit/main.py
similarity index 54%
rename from src/PyCorrFit.py
rename to pycorrfit/main.py
index 9e951d7..3adeb6d 100755
--- a/src/PyCorrFit.py
+++ b/pycorrfit/main.py
@@ -1,31 +1,31 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# -*- coding: utf-8 -*-
""" PyCorrFit
- A flexible tool for fitting and analyzing correlation curves.
+A flexible tool for fitting and analyzing correlation curves.
- Dimensionless representation:
- unit of time : 1 ms
- unit of inverse time: 1000 /s
- unit of distance : 100 nm
- unit of Diff.coeff : 10 um^2/s
- unit of inverse area: 100 /um^2
- unit of inv. volume : 1000 /um^3
+Dimensionless representation:
+unit of time : 1 ms
+unit of inverse time: 1000 /s
+unit of distance : 100 nm
+unit of Diff.coeff : 10 um^2/s
+unit of inverse area: 100 /um^2
+unit of inv. volume : 1000 /um^3
- Copyright (C) 2011-2012 Paul Müller
+Copyright (C) 2011-2012 Paul Müller
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2 of the License, or
- (at your option) any later version.
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with this program. If not, see <http://www.gnu.org/licenses/>.
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
from distutils.version import LooseVersion
@@ -44,7 +44,7 @@ class Fake(object):
# http://stackoverflow.com/questions/5419/python-unicode-and-the-windows-console
# and it helped (needs to be done before import of matplotlib):
import platform
-if platform.system() == 'Windows':
+if platform.system().lower() in ['windows', 'darwin']:
reload(sys)
sys.setdefaultencoding('utf-8')
@@ -87,9 +87,10 @@ except ImportError:
import yaml
## Continue with the import:
-import doc
-import frontend as gui # The actual program
+sys.path.append(os.path.abspath(os.path.dirname(__file__)))
+from . import doc
+from . import frontend as gui # The actual program
@@ -111,46 +112,55 @@ def CheckVersion(given, required, name):
print " OK: "+name+" v. "+given+" | "+required+" required"
-## VERSION
-version = doc.__version__
-__version__ = version
+## Start gui
+def Main():
-print gui.doc.info(version)
+ ## VERSION
+ version = doc.__version__
+ __version__ = version
-## Check important module versions
-print "\n\nChecking module versions..."
-CheckVersion(matplotlib.__version__, "1.0.0", "matplotlib")
-CheckVersion(np.__version__, "1.5.1", "NumPy")
-CheckVersion(yaml.__version__, "3.09", "PyYAML")
-CheckVersion(scipy.__version__, "0.8.0", "SciPy")
-CheckVersion(sympy.__version__, "0.7.2", "sympy")
-CheckVersion(gui.wx.__version__, "2.8.10.1", "wxPython")
+ print gui.doc.info(version)
+ ## Check important module versions
+ print "\n\nChecking module versions..."
+ CheckVersion(matplotlib.__version__, "1.0.0", "matplotlib")
+ CheckVersion(np.__version__, "1.5.1", "NumPy")
+ CheckVersion(yaml.__version__, "3.09", "PyYAML")
+ CheckVersion(scipy.__version__, "0.8.0", "SciPy")
+ CheckVersion(sympy.__version__, "0.7.2", "sympy")
+ CheckVersion(gui.wx.__version__, "2.8.10.1", "wxPython")
-## Start gui
-app = gui.wx.App(False)
-frame = gui.MyFrame(None, -1, version)
-# Before starting the main loop, check for possible session files
-# in the arguments.
-sysarg = sys.argv
-for arg in sysarg:
- if len(arg) > 4:
- if arg[-4:] == "pcfs":
+
+ ## Start gui
+ app = gui.MyApp(False)
+
+ frame = gui.MyFrame(None, -1, version)
+ app.frame = frame
+
+ # Before starting the main loop, check for possible session files
+ # in the arguments.
+ sysarg = sys.argv
+ for arg in sysarg:
+ if arg.endswith(".pcfs"):
print "\nLoading Session "+arg
frame.OnOpenSession(sessionfile=arg)
break
- elif len(arg) > 18:
- if arg[-18:] == "fcsfit-session.zip":
+ if arg.endswith(".fcsfit-session.zip"):
print "\nLoading Session "+arg
frame.OnOpenSession(sessionfile=arg)
break
- elif arg[:6] == "python":
- pass
- elif arg[-12:] == "PyCorrFit.py":
- pass
- elif arg[-11:] == "__main__.py":
- pass
- else:
- print "I do not know what to do with this argument: "+arg
-
-app.MainLoop()
+ elif arg[:6] == "python":
+ pass
+ elif arg[-12:] == "PyCorrFit.py":
+ pass
+ elif arg[-11:] == "__main__.py":
+ pass
+ else:
+ print "Ignoring command line parameter: "+arg
+
+
+ app.MainLoop()
+
+
+if __name__ == "__main__":
+ Main()
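Wrapping the startup code in Main() pairs with the pycorrfit/__main__.py added by this commit, so the GUI can now be started with "python -m pycorrfit". That file is not shown in this excerpt; a plausible minimal version for this layout would be:

    # -*- coding: utf-8 -*-
    # Entry point for "python -m pycorrfit".
    from . import main

    if __name__ == "__main__":
        main.Main()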
diff --git a/src/misc.py b/pycorrfit/misc.py
similarity index 98%
rename from src/misc.py
rename to pycorrfit/misc.py
index 432927a..42dc526 100644
--- a/src/misc.py
+++ b/pycorrfit/misc.py
@@ -31,10 +31,10 @@ import wx # GUI interface wxPython
import wx.html
import wx.lib.delayedresult as delayedresult
-import doc # Documentation/some texts
+from . import doc # Documentation/some texts
# The icon file was created with
# img2py -i -n Main PyCorrFit_icon.png icon.py
-import icon # Contains the program icon
+from . import icon # Contains the program icon
class UpdateDlg(wx.Frame):
diff --git a/src/models/MODEL_TIRF_1C.py b/pycorrfit/models/MODEL_TIRF_1C.py
similarity index 100%
rename from src/models/MODEL_TIRF_1C.py
rename to pycorrfit/models/MODEL_TIRF_1C.py
diff --git a/src/models/MODEL_TIRF_2D2D.py b/pycorrfit/models/MODEL_TIRF_2D2D.py
similarity index 100%
rename from src/models/MODEL_TIRF_2D2D.py
rename to pycorrfit/models/MODEL_TIRF_2D2D.py
diff --git a/src/models/MODEL_TIRF_3D2D.py b/pycorrfit/models/MODEL_TIRF_3D2D.py
similarity index 100%
rename from src/models/MODEL_TIRF_3D2D.py
rename to pycorrfit/models/MODEL_TIRF_3D2D.py
diff --git a/src/models/MODEL_TIRF_3D2Dkin_Ries.py b/pycorrfit/models/MODEL_TIRF_3D2Dkin_Ries.py
similarity index 100%
rename from src/models/MODEL_TIRF_3D2Dkin_Ries.py
rename to pycorrfit/models/MODEL_TIRF_3D2Dkin_Ries.py
diff --git a/src/models/MODEL_TIRF_3D3D.py b/pycorrfit/models/MODEL_TIRF_3D3D.py
similarity index 100%
rename from src/models/MODEL_TIRF_3D3D.py
rename to pycorrfit/models/MODEL_TIRF_3D3D.py
diff --git a/src/models/MODEL_TIRF_gaussian_1C.py b/pycorrfit/models/MODEL_TIRF_gaussian_1C.py
similarity index 100%
rename from src/models/MODEL_TIRF_gaussian_1C.py
rename to pycorrfit/models/MODEL_TIRF_gaussian_1C.py
diff --git a/src/models/MODEL_TIRF_gaussian_3D2D.py b/pycorrfit/models/MODEL_TIRF_gaussian_3D2D.py
similarity index 100%
rename from src/models/MODEL_TIRF_gaussian_3D2D.py
rename to pycorrfit/models/MODEL_TIRF_gaussian_3D2D.py
diff --git a/src/models/MODEL_TIRF_gaussian_3D3D.py b/pycorrfit/models/MODEL_TIRF_gaussian_3D3D.py
similarity index 100%
rename from src/models/MODEL_TIRF_gaussian_3D3D.py
rename to pycorrfit/models/MODEL_TIRF_gaussian_3D3D.py
diff --git a/src/models/MODEL_classic_gaussian_2D.py b/pycorrfit/models/MODEL_classic_gaussian_2D.py
similarity index 100%
rename from src/models/MODEL_classic_gaussian_2D.py
rename to pycorrfit/models/MODEL_classic_gaussian_2D.py
diff --git a/src/models/MODEL_classic_gaussian_3D.py b/pycorrfit/models/MODEL_classic_gaussian_3D.py
similarity index 100%
rename from src/models/MODEL_classic_gaussian_3D.py
rename to pycorrfit/models/MODEL_classic_gaussian_3D.py
diff --git a/src/models/MODEL_classic_gaussian_3D2D.py b/pycorrfit/models/MODEL_classic_gaussian_3D2D.py
similarity index 100%
rename from src/models/MODEL_classic_gaussian_3D2D.py
rename to pycorrfit/models/MODEL_classic_gaussian_3D2D.py
diff --git a/src/models/__init__.py b/pycorrfit/models/__init__.py
similarity index 95%
rename from src/models/__init__.py
rename to pycorrfit/models/__init__.py
index 5be89bf..ce65552 100644
--- a/src/models/__init__.py
+++ b/pycorrfit/models/__init__.py
@@ -54,17 +54,17 @@ sys.setdefaultencoding('utf-8')
## Models
-import MODEL_classic_gaussian_2D
-import MODEL_classic_gaussian_3D
-import MODEL_classic_gaussian_3D2D
-import MODEL_TIRF_gaussian_1C
-import MODEL_TIRF_gaussian_3D2D
-import MODEL_TIRF_gaussian_3D3D
-import MODEL_TIRF_1C
-import MODEL_TIRF_2D2D
-import MODEL_TIRF_3D2D
-import MODEL_TIRF_3D3D
-import MODEL_TIRF_3D2Dkin_Ries
+from . import MODEL_classic_gaussian_2D
+from . import MODEL_classic_gaussian_3D
+from . import MODEL_classic_gaussian_3D2D
+from . import MODEL_TIRF_gaussian_1C
+from . import MODEL_TIRF_gaussian_3D2D
+from . import MODEL_TIRF_gaussian_3D3D
+from . import MODEL_TIRF_1C
+from . import MODEL_TIRF_2D2D
+from . import MODEL_TIRF_3D2D
+from . import MODEL_TIRF_3D3D
+from . import MODEL_TIRF_3D2Dkin_Ries
def AppendNewModel(Modelarray):
""" Append a new model from a modelarray. *Modelarray* has to be a list
@@ -218,6 +218,17 @@ def GetModelType(modelid):
except:
return ""
+def GetModelFunctionFromId(modelid):
+    return modeldict[modelid][3]
+
+
+def GetModelParametersFromId(modelid):
+    return valuedict[modelid][1]
+
+
+def GetModelFitBoolFromId(modelid):
+    return valuedict[modelid][2]
+
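These getters hide the positional indices of modeldict and valuedict from callers such as the new fcs_data_set.py. A usage sketch, assuming modelid is any registered model ID:

    from pycorrfit import models as mdls

    func = mdls.GetModelFunctionFromId(modelid)     # the model function
    parms = mdls.GetModelParametersFromId(modelid)  # default parameter values
    fitbool = mdls.GetModelFitBoolFromId(modelid)   # which parameters are fitted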
def GetMoreInfo(modelid, Page):
    """ This function is called by someone who has already calculated
diff --git a/pycorrfit/openfile.py b/pycorrfit/openfile.py
new file mode 100644
index 0000000..4ef2f3f
--- /dev/null
+++ b/pycorrfit/openfile.py
@@ -0,0 +1,766 @@
+# -*- coding: utf-8 -*-
+""" PyCorrFit - Module openfile
+
+This file contains definitions for opening PyCorrFit sessions and
+saving PyCorrFit correlation curves.
+
+Dimensionless representation:
+unit of time : 1 ms
+unit of inverse time: 10³ /s
+unit of distance : 100 nm
+unit of Diff.coeff : 10 µm²/s
+unit of inverse area: 100 /µm²
+unit of inv. volume : 1000 /µm³
+
+Copyright (C) 2011-2012 Paul Müller
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+"""
+
+
+import csv
+from distutils.version import LooseVersion # For version checking
+import numpy as np
+import os
+import shutil
+import tempfile
+import yaml
+import zipfile
+import warnings
+
+from . import doc
+from . import edclasses
+
+# These imports are required for loading data
+from .readfiles import Filetypes
+from .readfiles import BGFiletypes
+
+
+
+def LoadSessionData(sessionfile, parameters_only=False):
+ """ Load PyCorrFit session data from a zip file (.pcfs)
+
+
+ Parameters
+ ----------
+ sessionfile : str
+ File from which data will be loaded
+ parameters_only : bool
+ Only load the parameters from the YAML file
+
+
+ Returns
+ -------
+ Infodict : dict
+ Infodict may contain the following keys:
+ "Backgrounds", list: contains the backgrounds
+ "Comments", dict: "Session" comment and int keys to Page titles
+ "Correlations", dict: page numbers, all correlation curves
+ "External Functions", dict: modelids to external model functions
+ "External Weights", dict: page numbers, external weights for fitting
+ "Parameters", dict: page numbers, all parameters of the pages
+ "Preferences", dict: not used yet
+ "Traces", dict: page numbers, all traces of the pages
+ "Version", str: the PyCorrFit version of the session
+ """
+ Infodict = dict()
+ # Get the version
+ Arc = zipfile.ZipFile(sessionfile, mode='r')
+ readmefile = Arc.open("Readme.txt")
+ # e.g. "This file was created using PyCorrFit version 0.7.6"
+ Infodict["Version"] = readmefile.readline()[46:].strip()
+ readmefile.close()
+ # Get the yaml parms dump:
+ yamlfile = Arc.open("Parameters.yaml")
+ # Parameters: Fitting and drawing parameters of correlation curve
+ # The *yamlfile* is responsible for the order of the Pages #i.
+ Infodict["Parameters"] = yaml.safe_load(yamlfile)
+ yamlfile.close()
+ if parameters_only:
+ Arc.close()
+ return Infodict
+ # Supplementary data (errors of fit)
+ supname = "Supplements.yaml"
+ try:
+ Arc.getinfo(supname)
+    except KeyError:
+ pass
+ else:
+ supfile = Arc.open(supname)
+ supdata = yaml.safe_load(supfile)
+ Infodict["Supplements"] = dict()
+ for idp in supdata:
+ Infodict["Supplements"][idp[0]] = dict()
+ Infodict["Supplements"][idp[0]]["FitErr"] = idp[1]
+ if len(idp) > 2:
+            # As of version 0.7.4 we save chi2 and shared pages (global fit)
+ Infodict["Supplements"][idp[0]]["Chi sq"] = idp[2]
+ Infodict["Supplements"][idp[0]]["Global Share"] = idp[3]
+ ## Preferences: Reserved for a future version of PyCorrFit :)
+ prefname = "Preferences.yaml"
+ try:
+ Arc.getinfo(prefname)
+ except KeyError:
+ pass
+ else:
+ yamlpref = Arc.open(prefname)
+ Infodict["Preferences"] = yaml.safe_load(yamlpref)
+ yamlpref.close()
+ # Get external functions
+ Infodict["External Functions"] = dict()
+ key = 7001
+ while key <= 7999:
+ # (There should not be more than 1000 functions)
+ funcfilename = "model_"+str(key)+".txt"
+ try:
+ Arc.getinfo(funcfilename)
+ except KeyError:
+ # No more functions to import
+ key = 8000
+ else:
+ funcfile = Arc.open(funcfilename)
+ Infodict["External Functions"][key] = funcfile.read()
+ funcfile.close()
+ key=key+1
+ # Get the correlation arrays
+ Infodict["Correlations"] = dict()
+ for i in np.arange(len(Infodict["Parameters"])):
+ # The *number* is used to identify the correct file
+ number = str(Infodict["Parameters"][i][0]).strip().strip(":").strip("#")
+ pageid = int(number)
+ expfilename = "data"+number+".csv"
+ expfile = Arc.open(expfilename, 'r')
+ readdata = csv.reader(expfile, delimiter=',')
+ dataexp = list()
+ tau = list()
+ if str(readdata.next()[0]) == "# tau only":
+ for row in readdata:
+ # Exclude commentaries
+ if (str(row[0])[0:1] != '#'):
+ tau.append(float(row[0]))
+ tau = np.array(tau)
+ dataexp = None
+ else:
+ for row in readdata:
+ # Exclude commentaries
+ if (str(row[0])[0:1] != '#'):
+ dataexp.append((float(row[0]), float(row[1])))
+ dataexp = np.array(dataexp)
+ tau = dataexp[:,0]
+ Infodict["Correlations"][pageid] = [tau, dataexp]
+ del readdata
+ expfile.close()
+ # Get the Traces
+ Infodict["Traces"] = dict()
+ for i in np.arange(len(Infodict["Parameters"])):
+ # The *number* is used to identify the correct file
+ number = str(Infodict["Parameters"][i][0]).strip().strip(":").strip("#")
+ pageid = int(number)
+ # Find out, if we have a cross correlation data type
+ IsCross = False
+ try:
+ IsCross = Infodict["Parameters"][i][7]
+ except IndexError:
+ # No Cross correlation
+ pass
+ if IsCross is False:
+ tracefilenames = ["trace"+number+".csv"]
+ else:
+ # Cross correlation uses two traces
+ tracefilenames = ["trace"+number+"A.csv",
+ "trace"+number+"B.csv" ]
+ thistrace = list()
+ for tracefilename in tracefilenames:
+ try:
+ Arc.getinfo(tracefilename)
+ except KeyError:
+ pass
+ else:
+ tracefile = Arc.open(tracefilename, 'r')
+ traceread = csv.reader(tracefile, delimiter=',')
+ singletrace = list()
+ for row in traceread:
+ # Exclude commentaries
+ if (str(row[0])[0:1] != '#'):
+ singletrace.append((float(row[0]), float(row[1])))
+ singletrace = np.array(singletrace)
+ thistrace.append(singletrace)
+ del traceread
+ del singletrace
+ tracefile.close()
+ if len(thistrace) != 0:
+ Infodict["Traces"][pageid] = thistrace
+ else:
+ Infodict["Traces"][pageid] = None
+ # Get the comments, if they exist
+ commentfilename = "comments.txt"
+ try:
+ # Raises KeyError, if file is not present:
+ Arc.getinfo(commentfilename)
+ except KeyError:
+ pass
+ else:
+ # Open the file
+ commentfile = Arc.open(commentfilename, 'r')
+ Infodict["Comments"] = dict()
+ for i in np.arange(len(Infodict["Parameters"])):
+ number = str(Infodict["Parameters"][i][0]).strip().strip(":").strip("#")
+ pageid = int(number)
+ # Strip line ending characters for all the Pages.
+ Infodict["Comments"][pageid] = commentfile.readline().strip()
+ # Now Add the Session Comment (the rest of the file).
+ ComList = commentfile.readlines()
+ Infodict["Comments"]["Session"] = ''
+ for line in ComList:
+ Infodict["Comments"]["Session"] += line
+ commentfile.close()
+ # Get the Backgroundtraces and data if they exist
+ bgfilename = "backgrounds.csv"
+ try:
+ # Raises KeyError, if file is not present:
+ Arc.getinfo(bgfilename)
+ except KeyError:
+ pass
+ else:
+ # Open the file
+ Infodict["Backgrounds"] = list()
+ bgfile = Arc.open(bgfilename, 'r')
+ bgread = csv.reader(bgfile, delimiter='\t')
+ i = 0
+ for bgrow in bgread:
+ bgtracefilename = "bg_trace"+str(i)+".csv"
+ bgtracefile = Arc.open(bgtracefilename, 'r')
+ bgtraceread = csv.reader(bgtracefile, delimiter=',')
+ bgtrace = list()
+ for row in bgtraceread:
+ # Exclude commentaries
+ if (str(row[0])[0:1] != '#'):
+ bgtrace.append((np.float(row[0]), np.float(row[1])))
+ bgtrace = np.array(bgtrace)
+ Infodict["Backgrounds"].append([np.float(bgrow[0]), str(bgrow[1]), bgtrace])
+ i = i + 1
+ bgfile.close()
+ # Get external weights if they exist
+ WeightsFilename = "externalweights.txt"
+ try:
+ # Raises KeyError, if file is not present:
+ Arc.getinfo(WeightsFilename)
+    except KeyError:
+ pass
+ else:
+ Wfile = Arc.open(WeightsFilename, 'r')
+ Wread = csv.reader(Wfile, delimiter='\t')
+ Weightsdict = dict()
+ for wrow in Wread:
+ Pkey = wrow[0] # Page of weights
+ pageid = int(Pkey)
+ # Do not overwrite anything
+ try:
+ Weightsdict[pageid]
+ except:
+ Weightsdict[pageid] = dict()
+ Nkey = wrow[1] # Name of weights
+ Wdatafilename = "externalweights_data"+Pkey+"_"+Nkey+".csv"
+ Wdatafile = Arc.open(Wdatafilename, 'r')
+ Wdatareader = csv.reader(Wdatafile)
+ Wdata = list()
+ for row in Wdatareader:
+ # Exclude commentaries
+ if (str(row[0])[0:1] != '#'):
+ Wdata.append(np.float(row[0]))
+ Weightsdict[pageid][Nkey] = np.array(Wdata)
+ Infodict["External Weights"] = Weightsdict
+ Arc.close()
+ return Infodict
+
+
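As a rough usage sketch outside the GUI, assuming "mysession.pcfs" was saved by PyCorrFit, the archive can be inspected like this:

    from pycorrfit.openfile import LoadSessionData

    info = LoadSessionData("mysession.pcfs", parameters_only=True)
    print info["Version"]            # PyCorrFit version that wrote the session
    for page in info["Parameters"]:  # one entry per page, in page order
        print page[0], page[1]       # page counter and internal model ID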
+def SaveSessionData(sessionfile, Infodict):
+    """ Save PyCorrFit session data to a file.
+
+
+ Parameters
+ ----------
+ sessionfile : str
+ The suffix ".pcfs" is automatically appended.
+    Infodict : dict
+        Infodict may contain the following keys:
+        "Backgrounds", list: contains the backgrounds
+        "Comments", dict: "Session" comment and int keys to Page titles
+        "Correlations", dict: page numbers, all correlation curves
+        "External Functions", dict: modelids to external model functions
+ "External Weights", dict: page numbers, external weights for fitting
+ "Parameters", dict: page numbers, all parameters of the pages
+ "Preferences", dict: not used yet
+ "Traces", dict: page numbers, all traces of the pages
+
+
+ The version of PyCorrFit is written to Readme.txt
+ """
+ (dirname, filename) = os.path.split(sessionfile)
+ # Sometimes you have multiple endings...
+    if not filename.endswith(".pcfs"):
+ filename += ".pcfs"
+ # Change working directory
+ returnWD = os.getcwd()
+ tempdir = tempfile.mkdtemp()
+ os.chdir(tempdir)
+ # Create zip file
+ Arc = zipfile.ZipFile(filename, mode='w')
+ # Only do the Yaml thing for safe operations.
+ # Make the yaml dump
+ parmsfilename = "Parameters.yaml"
+ # Parameters have to be floats in lists
+ # in order for yaml.safe_load to work.
+ Parms = Infodict["Parameters"]
+ ParmsKeys = Parms.keys()
+ ParmsKeys.sort()
+ Parmlist = list()
+ for idparm in ParmsKeys:
+        # Make sure we do not accidentally save arrays.
+ # This would not work correctly with yaml.
+ Parms[idparm][2] = np.array(Parms[idparm][2],dtype="float").tolist()
+ Parms[idparm][3] = np.array(Parms[idparm][3],dtype="bool").tolist()
+ # Range of fitting parameters
+ Parms[idparm][9] = np.array(Parms[idparm][9],dtype="float").tolist()
+ Parmlist.append(Parms[idparm])
+ yaml.dump(Parmlist, open(parmsfilename, "wb"))
+ Arc.write(parmsfilename)
+ os.remove(os.path.join(tempdir, parmsfilename))
+ # Supplementary data (errors of fit)
+ errsfilename = "Supplements.yaml"
+ Sups = Infodict["Supplements"]
+ SupKeys = Sups.keys()
+ SupKeys.sort()
+ Suplist = list()
+ for idsup in SupKeys:
+ error = Sups[idsup]["FitErr"]
+ chi2 = Sups[idsup]["Chi sq"]
+ globalshare = Sups[idsup]["Global Share"]
+ Suplist.append([idsup, error, chi2, globalshare])
+ yaml.dump(Suplist, open(errsfilename, "wb"))
+ Arc.write(errsfilename)
+ os.remove(os.path.join(tempdir, errsfilename))
+ # Save external functions
+ for key in Infodict["External Functions"].keys():
+ funcfilename = "model_"+str(key)+".txt"
+ funcfile = open(funcfilename, 'wb')
+ funcfile.write(Infodict["External Functions"][key])
+ funcfile.close()
+ Arc.write(funcfilename)
+ os.remove(os.path.join(tempdir, funcfilename))
+ # Save (dataexp and tau)s into separate csv files.
+ for pageid in Infodict["Correlations"].keys():
+ # Since *Array* and *Parms* are in the same order (the page order),
+ # we will identify the filename by the Page title number.
+ number = str(pageid)
+ expfilename = "data"+number+".csv"
+ expfile = open(expfilename, 'wb')
+ tau = Infodict["Correlations"][pageid][0]
+ exp = Infodict["Correlations"][pageid][1]
+ dataWriter = csv.writer(expfile, delimiter=',')
+ if exp is not None:
+ # Names of Columns
+ dataWriter.writerow(['# tau', 'experimental data'])
+ # Actual Data
+            # Do not use len(tau) instead of len(exp[:,0]) !
+            # Otherwise, the experimental data will not be saved entirely
+            # if it has been cropped, because tau might be shorter than
+            # exp[:,0] --> tau = exp[startcrop:endcrop,0]
+ for j in np.arange(len(exp[:,0])):
+ dataWriter.writerow(["%.20e" % exp[j,0],
+ "%.20e" % exp[j,1]])
+ else:
+ # Only write tau
+ dataWriter.writerow(['# tau'+' only'])
+ for j in np.arange(len(tau)):
+ dataWriter.writerow(["%.20e" % tau[j]])
+ expfile.close()
+ # Add to archive
+ Arc.write(expfilename)
+ os.remove(os.path.join(tempdir, expfilename))
+ # Save traces into separate csv files.
+ for pageid in Infodict["Traces"].keys():
+ number = str(pageid)
+ # Since *Trace* and *Parms* are in the same order, which is the
+ # Page order, we will identify the filename by the Page title
+ # number.
+ if Infodict["Traces"][pageid] is not None:
+ if Parms[pageid][7] is True:
+ # We have cross correlation: save two traces
+ ## A
+ tracefilenamea = "trace"+number+"A.csv"
+ tracefile = open(tracefilenamea, 'wb')
+ traceWriter = csv.writer(tracefile, delimiter=',')
+ time = Infodict["Traces"][pageid][0][:,0]
+ rate = Infodict["Traces"][pageid][0][:,1]
+ # Names of Columns
+ traceWriter.writerow(['# time', 'count rate'])
+ # Actual Data
+ for j in np.arange(len(time)):
+ traceWriter.writerow(["%.20e" % time[j],
+ "%.20e" % rate[j]])
+ tracefile.close()
+ # Add to archive
+ Arc.write(tracefilenamea)
+ os.remove(os.path.join(tempdir, tracefilenamea))
+ ## B
+ tracefilenameb = "trace"+number+"B.csv"
+ tracefile = open(tracefilenameb, 'wb')
+ traceWriter = csv.writer(tracefile, delimiter=',')
+ time = Infodict["Traces"][pageid][1][:,0]
+ rate = Infodict["Traces"][pageid][1][:,1]
+ # Names of Columns
+ traceWriter.writerow(['# time', 'count rate'])
+ # Actual Data
+ for j in np.arange(len(time)):
+ traceWriter.writerow(["%.20e" % time[j],
+ "%.20e" % rate[j]])
+ tracefile.close()
+ # Add to archive
+ Arc.write(tracefilenameb)
+ os.remove(os.path.join(tempdir, tracefilenameb))
+ else:
+ # Save one single trace
+ tracefilename = "trace"+number+".csv"
+ tracefile = open(tracefilename, 'wb')
+ traceWriter = csv.writer(tracefile, delimiter=',')
+ time = Infodict["Traces"][pageid][:,0]
+ rate = Infodict["Traces"][pageid][:,1]
+ # Names of Columns
+ traceWriter.writerow(['# time', 'count rate'])
+ # Actual Data
+ for j in np.arange(len(time)):
+ traceWriter.writerow(["%.20e" % time[j],
+ "%.20e" % rate[j]])
+ tracefile.close()
+ # Add to archive
+ Arc.write(tracefilename)
+ os.remove(os.path.join(tempdir, tracefilename))
+ # Save comments into txt file
+ commentfilename = "comments.txt"
+ commentfile = open(commentfilename, 'wb')
+ # Comments[-1] is comment on whole Session
+ Ckeys = Infodict["Comments"].keys()
+ Ckeys.sort()
+ for key in Ckeys:
+ if key != "Session":
+ commentfile.write(Infodict["Comments"][key]+"\r\n")
+ commentfile.write(Infodict["Comments"]["Session"])
+ commentfile.close()
+ Arc.write(commentfilename)
+ os.remove(os.path.join(tempdir, commentfilename))
+ ## Save Background information:
+ Background = Infodict["Backgrounds"]
+ if len(Background) > 0:
+ # We do not use a comma separated, but a tab separated file,
+ # because a comma might be in the name of a bg.
+ bgfilename = "backgrounds.csv"
+ bgfile = open(bgfilename, 'wb')
+ bgwriter = csv.writer(bgfile, delimiter='\t')
+ for i in np.arange(len(Background)):
+ bgwriter.writerow([str(Background[i][0]), Background[i][1]])
+ # Traces
+ bgtracefilename = "bg_trace"+str(i)+".csv"
+ bgtracefile = open(bgtracefilename, 'wb')
+ bgtraceWriter = csv.writer(bgtracefile, delimiter=',')
+ bgtraceWriter.writerow(['# time', 'count rate'])
+ # Actual Data
+ time = Background[i][2][:,0]
+ rate = Background[i][2][:,1]
+ for j in np.arange(len(time)):
+ bgtraceWriter.writerow(["%.20e" % time[j],
+ "%.20e" % rate[j]])
+ bgtracefile.close()
+ # Add to archive
+ Arc.write(bgtracefilename)
+ os.remove(os.path.join(tempdir, bgtracefilename))
+ bgfile.close()
+ Arc.write(bgfilename)
+ os.remove(os.path.join(tempdir, bgfilename))
+ ## Save External Weights information
+ WeightedPageID = Infodict["External Weights"].keys()
+ WeightedPageID.sort()
+ WeightFilename = "externalweights.txt"
+ WeightFile = open(WeightFilename, 'wb')
+ WeightWriter = csv.writer(WeightFile, delimiter='\t')
+ for pageid in WeightedPageID:
+ number = str(pageid)
+ NestWeights = Infodict["External Weights"][pageid].keys()
+ # The order of the types does not matter, since they are
+ # sorted in the frontend and upon import. We sort them here, anyhow.
+ NestWeights.sort()
+ for Nkey in NestWeights:
+ WeightWriter.writerow([number, str(Nkey).strip()])
+ # Add data to a File
+ WeightDataFilename = "externalweights_data"+number+\
+ "_"+str(Nkey).strip()+".csv"
+ WeightDataFile = open(WeightDataFilename, 'wb')
+ WeightDataWriter = csv.writer(WeightDataFile)
+ wdata = Infodict["External Weights"][pageid][Nkey]
+ for jw in np.arange(len(wdata)):
+ WeightDataWriter.writerow([str(wdata[jw])])
+ WeightDataFile.close()
+ Arc.write(WeightDataFilename)
+ os.remove(os.path.join(tempdir, WeightDataFilename))
+ WeightFile.close()
+ Arc.write(WeightFilename)
+ os.remove(os.path.join(tempdir, WeightFilename))
+ ## Readme
+ rmfilename = "Readme.txt"
+ rmfile = open(rmfilename, 'wb')
+ rmfile.write(ReadmeSession)
+ rmfile.close()
+ Arc.write(rmfilename)
+ os.remove(os.path.join(tempdir, rmfilename))
+ # Close the archive
+ Arc.close()
+ # Move archive to destination directory
+ shutil.move(os.path.join(tempdir, filename),
+ os.path.join(dirname, filename) )
+    # Return to the original working directory
+ os.chdir(returnWD)
+ os.rmdir(tempdir)
+
+
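SaveSessionData is the writing counterpart of LoadSessionData above: every file it places in the archive (Parameters.yaml, data*.csv, trace*.csv, ...) is read back on load. A round-trip sketch, assuming Infodict carries all the keys named in the docstring:

    from pycorrfit import openfile as opf

    opf.SaveSessionData("/tmp/roundtrip.pcfs", Infodict)
    restored = opf.LoadSessionData("/tmp/roundtrip.pcfs")
    # "Parameters" is a dict keyed by page before saving
    # and a page-ordered list after loading.
    assert len(restored["Parameters"]) == len(Infodict["Parameters"])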
+def ExportCorrelation(exportfile, Page, info, savetrace=True):
+ """ Write correlation data to a file
+
+
+ Parameters
+ ----------
+ exportfile : str
+ Absolute filename to save data
+ Page : PyCorrFit Page object
+ Contains all correlation data
+ info : module
+ The `info` tool module. This is a workaround until Page has
+ its own class to create info data.
+ savetrace : bool
+ Append the trace to the file
+ """
+
+ openedfile = open(exportfile, 'wb')
+ ## First, some doc text
+ openedfile.write(ReadmeCSV.replace('\n', '\r\n'))
+ # The infos
+ InfoMan = info.InfoClass(CurPage=Page)
+ PageInfo = InfoMan.GetCurFancyInfo()
+ for line in PageInfo.splitlines():
+ openedfile.write("# "+line+"\r\n")
+ openedfile.write("#\r\n#\r\n")
+ # Get all the data we need from the Page
+ # Modeled data
+ # Since 0.7.8 the user may normalize the curves. The normalization
+ # factor is set in *Page.normfactor*.
+ corr = Page.datacorr[:,1]*Page.normfactor
+ if Page.dataexp is not None:
+ # Experimental data
+ tau = Page.dataexp[:,0]
+ exp = Page.dataexp[:,1]*Page.normfactor
+ res = Page.resid[:,1]*Page.normfactor
+        # Weights from plotting, because we only export the plotted area.
+ weight = Page.weights_used_for_plotting
+ if weight is None:
+ pass
+ elif len(weight) != len(exp):
+ text = "Weights have not been calculated for the "+\
+ "area you want to export. Pressing 'Fit' "+\
+ "again should solve this issue. Weights will "+\
+ "not be saved."
+ warnings.warn(text)
+ weight = None
+ else:
+ tau = Page.datacorr[:,0]
+ exp = None
+ res = None
+ # Include weights in data saving:
+ # PyCorrFit thinks in [ms], but we will save as [s]
+ timefactor = 0.001
+ tau = timefactor * tau
+ ## Now we want to write all that data into the file
+ # This is for csv writing:
+ ## Correlation curve
+ dataWriter = csv.writer(openedfile, delimiter='\t')
+ if exp is not None:
+ header = '# Channel (tau [s])'+"\t"+ \
+ 'Experimental correlation'+"\t"+ \
+ 'Fitted correlation'+ "\t"+ \
+ 'Residuals'+"\r\n"
+ data = [tau, exp, corr, res]
+ if Page.weighted_fit_was_performed is True \
+ and weight is not None:
+ header = header.strip() + "\t"+'Weights (fit)'+"\r\n"
+ data.append(weight)
+ else:
+ header = '# Channel (tau [s])'+"\t"+ \
+ 'Correlation function'+"\r\n"
+ data = [tau, corr]
+ # Write header
+ openedfile.write(header)
+ # Write data
+ for i in np.arange(len(data[0])):
+ # row-wise, data may have more than two elements per row
+ datarow = list()
+ for j in np.arange(len(data)):
+ rowcoli = str("%.10e") % data[j][i]
+ datarow.append(rowcoli)
+ dataWriter.writerow(datarow)
+ ## Trace
+ # Only save the trace if user wants us to:
+ if savetrace:
+ # We will also save the trace in [s]
+ # Intensity trace in kHz may stay the same
+ if Page.trace is not None:
+ # Mark beginning of Trace
+ openedfile.write('#\r\n#\r\n# BEGIN TRACE\r\n#\r\n')
+ # Columns
+ time = Page.trace[:,0]*timefactor
+ intensity = Page.trace[:,1]
+ # Write
+ openedfile.write('# Time [s]'+"\t"
+ 'Intensity trace [kHz]'+" \r\n")
+ for i in np.arange(len(time)):
+ dataWriter.writerow([str("%.10e") % time[i],
+ str("%.10e") % intensity[i]])
+ elif Page.tracecc is not None:
+ # We have some cross-correlation here:
+ # Mark beginning of Trace A
+ openedfile.write('#\r\n#\r\n# BEGIN TRACE\r\n#\r\n')
+ # Columns
+ time = Page.tracecc[0][:,0]*timefactor
+ intensity = Page.tracecc[0][:,1]
+ # Write
+ openedfile.write('# Time [s]'+"\t"
+ 'Intensity trace [kHz]'+" \r\n")
+ for i in np.arange(len(time)):
+ dataWriter.writerow([str("%.10e") % time[i],
+ str("%.10e") % intensity[i]])
+ # Mark beginning of Trace B
+ openedfile.write('#\r\n#\r\n# BEGIN SECOND TRACE\r\n#\r\n')
+ # Columns
+ time = Page.tracecc[1][:,0]*timefactor
+ intensity = Page.tracecc[1][:,1]
+ # Write
+ openedfile.write('# Time [s]'+"\t"
+ 'Intensity trace [kHz]'+" \r\n")
+ for i in np.arange(len(time)):
+ dataWriter.writerow([str("%.10e") % time[i],
+ str("%.10e") % intensity[i]])
+
+ openedfile.close()
+
+
+session_wildcards = [".pcfs", ".pycorrfit-session.zip"]
+
+
+ReadmeCSV = """# This file was created using PyCorrFit version {}.
+#
+# Lines starting with a '#' are treated as comments.
+# The data is stored as CSV below this comment section.
+# Data usually consists of lag times (channels) and
+# the corresponding correlation function - experimental
+# and fitted values plus resulting residuals.
+# If this file is opened by PyCorrFit, only the first two
+# columns will be imported as experimental data.
+#
+""".format(doc.__version__)
+
+
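Since the export is tab-separated with '#' comment lines (see ReadmeCSV above), the curve can be read back outside PyCorrFit. A sketch, assuming the file was exported with savetrace=False so every data row has the same number of columns:

    import numpy as np

    data = np.loadtxt("exported_curve.csv", comments="#", delimiter="\t")
    tau = data[:, 0]    # lag time in [s]
    corr = data[:, 1]   # experimental correlation; the first two columns
                        # are what PyCorrFit itself would re-import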
+ReadmeSession = """This file was created using PyCorrFit version {}.
+The .zip archive you are looking at is a stored session of PyCorrFit.
+If you are interested in how the data is stored, you will find
+out here. Most important are the dimensions of units:
+Dimensionless representation:
+ unit of time : 1 ms
+ unit of inverse time: 10³ /s
+ unit of distance : 100 nm
+ unit of Diff.coeff : 10 µm²/s
+ unit of inverse area: 100 /µm²
+ unit of inv. volume : 1000 /µm³
+From there, the dimension of any parameter may be
+calculated.
+
+There are a number of files within this archive,
+depending on what was done during the session.
+
+backgrounds.csv
+ - Contains the list of backgrounds used and
+ - Averaged intensities in [kHz]
+
+bg_trace*.csv (where * is an integer)
+ - The trace of the background corresponding
+ to the line number in backgrounds.csv
+ - Time in [ms], Trace in [kHz]
+
+comments.txt
+ - Contains page titles and session comment
+ - First n lines are titles, rest is session
+ comment (where n is total number of pages)
+
+data*.csv (where * is (Number of page))
+ - Contains lag times [ms]
+ - Contains experimental data, if available
+
+externalweights.txt
+ - Contains names (types) of external weights other than from
+ Model function or spline fit
+ - Linewise: 1st element is page number, 2nd is name
+ - According to this data, the following files are present in the archive
+
+externalweights_data*PageID*_*Type*.csv
+ - Contains weighting information of Page *PageID* of type *Type*
+
+model_*ModelID*.txt
+ - An external (user-defined) model file with internal ID *ModelID*
+
+Parameters.yaml
+ - Contains all Parameters for each page
+ Block format:
+ - - '#(Number of page): '
+ - (Internal model ID)
+ - (List of parameters)
+ - (List of checked parameters (for fitting))
+ - [(Min channel selected), (Max channel selected)]
+ - [(Weighted fit method (0=None, 1=Spline, 2=Model function)),
+ (No. of bins from left and right),
+ (No. of knots (of e.g. spline)),
+      (Type of fitting algorithm (e.g. "Lev-Mar", "Nelder-Mead"))]
+ - [B1,B2] Background to use (line in backgrounds.csv)
+ B2 is always *null* for autocorrelation curves
+ - Data type is Cross-correlation?
+ - Parameter id (int) used for normalization in plotting.
+ This number first enumerates the model parameters and then
+ the supplemental parameters (e.g. "n1").
+ - - [min, max] fitting parameter range of 1st parameter
+ - [min, max] fitting parameter range of 2nd parameter
+ - etc.
+ - Order in Parameters.yaml defines order of pages in a session
+ - Order in Parameters.yaml defines order in comments.txt
+
+Readme.txt (this file)
+
+Supplements.yaml
+ - Contains errors of fitting
+ Format:
+ -- Page number
+ -- [parameter id, error value]
+ - [parameter id, error value]
+ - Chi squared
+ - [pages that share parameters] (from global fitting)
+
+trace*.csv (where * is (Number of page) | appendix "A" or "B" point to
+ the respective channels (only in cross-correlation mode))
+ - Contains times [ms]
+ - Contains countrates [kHz]
+""".format(doc.__version__)
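Because a session is a plain zip archive with exactly the layout described in this Readme, it can also be examined without PyCorrFit at all; for example:

    import zipfile

    arc = zipfile.ZipFile("mysession.pcfs", mode="r")
    print arc.namelist()                     # Readme.txt, Parameters.yaml, data*.csv, ...
    print arc.open("Readme.txt").readline()  # "This file was created using PyCorrFit ..."
    arc.close()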
diff --git a/src/page.py b/pycorrfit/page.py
similarity index 99%
rename from src/page.py
rename to pycorrfit/page.py
index afc215b..c9705ba 100644
--- a/src/page.py
+++ b/pycorrfit/page.py
@@ -35,10 +35,10 @@ import wx.lib.scrolledpanel as scrolled
import numpy as np # NumPy
import sys # System stuff
-import edclasses # Cool stuf like better floatspin
-import fitting as fit # For fitting
-import models as mdls
-import tools
+from . import edclasses                  # Cool stuff like better floatspin
+from . import fitting as fit # For fitting
+from . import models as mdls
+from . import tools
## On Windows XP I had problems with the unicode Characters.
diff --git a/src/plotting.py b/pycorrfit/plotting.py
similarity index 99%
rename from src/plotting.py
rename to pycorrfit/plotting.py
index 79da229..a59d55f 100644
--- a/src/plotting.py
+++ b/pycorrfit/plotting.py
@@ -50,8 +50,8 @@ from matplotlib.backends.backend_wx import NavigationToolbar2Wx #We hack this
import unicodedata
# For finding latex tools
-from misc import findprogram
-import models as mdls
+from .misc import findprogram
+from . import models as mdls
def greek2tex(char):
diff --git a/src/readfiles/__init__.py b/pycorrfit/readfiles/__init__.py
similarity index 86%
rename from src/readfiles/__init__.py
rename to pycorrfit/readfiles/__init__.py
index 218ee6f..b19d390 100644
--- a/src/readfiles/__init__.py
+++ b/pycorrfit/readfiles/__init__.py
@@ -26,17 +26,21 @@
import csv
import numpy as np
import os
+import sys
import tempfile
import yaml
+import warnings
import zipfile
# To add a filetype add it here and in the
# dictionaries at the end of this file.
-from read_ASC_ALV import openASC
-from read_CSV_PyCorrFit import openCSV
-from read_SIN_correlator_com import openSIN
-from read_FCS_Confocor3 import openFCS
-from read_mat_ries import openMAT
+from .read_ASC_ALV import openASC
+from .read_CSV_PyCorrFit import openCSV
+from .read_SIN_correlator_com import openSIN
+from .read_FCS_Confocor3 import openFCS
+from .read_mat_ries import openMAT
+from .read_pt3_PicoQuant import openPT3
+
def AddAllWildcard(Dictionary):
@@ -108,8 +112,8 @@ def openZIP(dirname, filename):
Trace = list() # Corresponding traces
## First test, if we are opening a session file
sessionwc = [".fcsfit-session.zip", ".pcfs"]
- if ( (len(filename)>19 and filename[-19:] == sessionwc[0]) or
- (len(filename)> 5 and filename[-5:] == sessionwc[1]) ):
+ if ( filename.endswith(sessionwc[0]) or
+ filename.endswith(sessionwc[1]) ):
# Get the yaml parms dump:
yamlfile = Arc.open("Parameters.yaml")
# Parms: Fitting and drawing parameters of the correlation curve
@@ -196,22 +200,33 @@ def openZIP(dirname, filename):
allfiles = Arc.namelist()
# Extract data to temporary folder
tempdir = tempfile.mkdtemp()
+ rmdirs = list()
for afile in allfiles:
- Arc.extract(afile, path=tempdir)
+ apath = Arc.extract(afile, path=tempdir)
+ if os.path.isdir(apath):
+ rmdirs.append(apath)
+ continue
ReturnValue = openAny(tempdir, afile)
if ReturnValue is not None:
- cs = ReturnValue["Correlation"]
- ts = ReturnValue["Trace"]
- ls = ReturnValue["Type"]
- fs = ReturnValue["Filename"]
- for i in np.arange(len(cs)):
- Correlations.append(cs[i])
- Trace.append(ts[i])
- Curvelist.append(ls[i])
- Filelist.append(filename+"/"+fs[i])
+ Correlations += ReturnValue["Correlation"]
+ Trace += ReturnValue["Trace"]
+ Curvelist += ReturnValue["Type"]
+ fnames = ReturnValue["Filename"]
+ Filelist += [ filename+"/"+fs for fs in fnames ]
            # Delete file
- os.remove(os.path.join(tempdir,afile))
- os.removedirs(tempdir)
+ try:
+ os.remove(os.path.join(tempdir, afile))
+ except:
+ warnings.warn("{}".format(sys.exc_info()[1]))
+ for rmd in rmdirs:
+ try:
+ os.removedirs(rmd)
+ except:
+ warnings.warn("{}".format(sys.exc_info()[1]))
+ try:
+ os.removedirs(tempdir)
+ except:
+ warnings.warn("{}".format(sys.exc_info()[1]))
Arc.close()
dictionary = dict()
dictionary["Correlation"] = Correlations
@@ -231,6 +246,7 @@ Filetypes = { "Correlator.com (*.SIN)|*.SIN;*.sin" : openSIN,
"ALV (*.ASC)|*.ASC" : openASC,
"PyCorrFit (*.csv)|*.csv" : openCSV,
"Matlab 'Ries (*.mat)|*.mat" : openMAT,
+ "PicoQuant (*.pt3)|*.pt3" : openPT3,
"Zeiss ConfoCor3 (*.fcs)|*.fcs" : openFCS,
"Zip file (*.zip)|*.zip" : openZIP,
"PyCorrFit session (*.pcfs)|*.pcfs" : openZIP
@@ -243,8 +259,10 @@ Filetypes = AddAllWildcard(Filetypes)
BGFiletypes = { "Correlator.com (*.SIN)|*.SIN;*.sin" : openSIN,
"ALV (*.ASC)|*.ASC" : openASC,
"PyCorrFit (*.csv)|*.csv" : openCSV,
+ "PicoQuant (*.pt3)|*.pt3" : openPT3,
"Zeiss ConfoCor3 (*.fcs)|*.fcs" : openFCS,
"Zip file (*.zip)|*.zip" : openZIP,
"PyCorrFit session (*.pcfs)|*.pcfs" : openZIP
}
BGFiletypes = AddAllWildcard(BGFiletypes)
+
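
The refactored session check in openZIP works because str.endswith also accepts a tuple of suffixes; a minimal standalone sketch (the helper name is illustrative, not PyCorrFit API):

    sessionwc = (".fcsfit-session.zip", ".pcfs")

    def is_session_file(filename):
        # True for PyCorrFit session archives with either suffix
        return filename.endswith(sessionwc)

    assert is_session_file("2015-03-18.pcfs")
    assert not is_session_file("curve.csv")
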
diff --git a/src/readfiles/read_ASC_ALV.py b/pycorrfit/readfiles/read_ASC_ALV.py
old mode 100755
new mode 100644
similarity index 100%
rename from src/readfiles/read_ASC_ALV.py
rename to pycorrfit/readfiles/read_ASC_ALV.py
diff --git a/src/readfiles/read_CSV_PyCorrFit.py b/pycorrfit/readfiles/read_CSV_PyCorrFit.py
similarity index 100%
rename from src/readfiles/read_CSV_PyCorrFit.py
rename to pycorrfit/readfiles/read_CSV_PyCorrFit.py
diff --git a/src/readfiles/read_FCS_Confocor3.py b/pycorrfit/readfiles/read_FCS_Confocor3.py
similarity index 75%
rename from src/readfiles/read_FCS_Confocor3.py
rename to pycorrfit/readfiles/read_FCS_Confocor3.py
index e4fc8b9..2723248 100644
--- a/src/readfiles/read_FCS_Confocor3.py
+++ b/pycorrfit/readfiles/read_FCS_Confocor3.py
@@ -254,65 +254,140 @@ def openFCS_Multiple(dirname, filename):
tracelist = list()
corrlist = list()
- ### TODO:
- # match curves with their timestamp
+ # match up curves with their timestamps
# (actimelist and cctimelist)
-
- for i in np.arange(len(ac_correlations)):
- # Filter curves without correlation (ignore them)
- if ac_correlations[i] is not None:
- curvelist.append(aclist[i])
- tracelist.append(1*traces[i])
- corrlist.append(ac_correlations[i])
- else:
- if traces[i] is not None:
- warnings.warn("File {} curve {} does not contain AC data.".format(filename, i))
- # Overwrite traces. This way we have equal number of ac correlations
- # and traces.
- traces = tracelist
- ## The CC traces are more tricky:
- # Add traces to CC-correlation functions.
- # It seems reasonable, that if number of AC1,AC2 and CC are equal,
- # CC gets the traces accordingly.
- # We take the number of ac curves from curvelist instead of aclist,
- # because aclist may contain curves without ac data (see above).
- # In that case, the cc traces do most-likely belong to the acs.
- n_ac1 = curvelist.count("AC1")
- n_ac2 = curvelist.count("AC2")
- n_cc12 = cclist.count("CC12")
- n_cc21 = cclist.count("CC21")
- if n_ac1==n_ac2==n_cc12==n_cc21>0:
- CCTraces = True
- else:
- CCTraces = False
- # Commence swapping, if necessary
- # We want to have CC12 first and the corresponding trace to AC1 as well.
- if len(cc_correlations) != 0:
- if cclist[0] == "CC12":
- if aclist[0] == "AC2":
- for i in np.arange(len(traces)/2):
- traces[2*i], traces[2*i+1] = traces[2*i+1], traces[2*i]
- # Everything is OK
- elif cclist[0] == "CC21":
- # Switch the order of CC correlations
- a = cc_correlations
- for i in np.arange(len(a)/2):
- a[2*i], a[2*i+1] = a[2*i+1], a[2*i]
- cclist[2*i], cclist[2*i+1] = cclist[2*i+1], cclist[2*i]
- if aclist[2*i] == "AC2":
- traces[2*i], traces[2*i+1] = traces[2*i+1], traces[2*i]
- # Add cc-curves with (if CCTraces) trace.
- for i in np.arange(len(cc_correlations)):
- if cc_correlations[i] is not None:
- curvelist.append(cclist[i])
- corrlist.append(cc_correlations[i])
- if CCTraces == True:
- if cclist[i] == "CC12":
- tracelist.append([traces[i], traces[i+1]])
- elif cclist[i] == "CC21":
- tracelist.append([traces[i-1], traces[i]])
- else:
- tracelist.append(None)
+ knowntimes = list()
+ for tid in actimelist:
+ if tid not in knowntimes:
+ knowntimes.append(tid)
+ n = actimelist.count(tid)
+ actids = np.where(np.array(actimelist) == tid)[0]
+ cctids = np.where(np.array(cctimelist) == tid)[0]
+
+ if len(actids) == 0:
+ warnings.warn("File {} timepoint {} has no AC data.".
+ format(filename, tid))
+ elif len(actids) == 1:
+ # single AC curve
+ if ac_correlations[actids[0]] is not None:
+ curvelist.append(aclist[actids[0]])
+ tracelist.append(1*traces[actids[0]])
+ corrlist.append(ac_correlations[actids[0]])
+ else:
+ if traces[actids[0]] is not None:
+ warnings.warn("File {} curve {} does not contain AC data.".format(filename, tid))
+ elif len(actids) == 2:
+ # Get AC data
+ if aclist[actids[0]] == "AC1":
+ acdat1 = ac_correlations[actids[0]]
+ trace1 = traces[actids[0]]
+ acdat2 = ac_correlations[actids[1]]
+ trace2 = traces[actids[1]]
+ elif aclist[actids[0]] == "AC2":
+ acdat1 = ac_correlations[actids[1]]
+ trace1 = traces[actids[1]]
+ acdat2 = ac_correlations[actids[0]]
+ trace2 = traces[actids[0]]
+ else:
+ warnings.warn("File {} curve {}: unknown AC data.".format(filename, tid))
+ continue
+
+ if acdat1 is not None:
+ #AC1
+ curvelist.append("AC1")
+ tracelist.append(trace1)
+ corrlist.append(acdat1)
+ if acdat2 is not None:
+ #AC2
+ curvelist.append("AC2")
+ tracelist.append(trace2)
+ corrlist.append(acdat2)
+
+ if len(cctids) == 2:
+ # Get CC data
+ if cclist[cctids[0]] == "CC12":
+ ccdat12 = cc_correlations[cctids[0]]
+ ccdat21 = cc_correlations[cctids[1]]
+ elif cclist[cctids[0]] == "CC21":
+ ccdat12 = cc_correlations[cctids[1]]
+ ccdat21 = cc_correlations[cctids[0]]
+ else:
+ warnings.warn("File {} curve {}: unknown CC data.".format(filename, tid))
+ continue
+
+ tracecc = [trace1, trace2]
+ if ccdat12 is not None:
+ #CC12
+ curvelist.append("CC12")
+ tracelist.append(tracecc)
+ corrlist.append(ccdat12)
+ if ccdat21 is not None:
+ #CC21
+ curvelist.append("CC21")
+ tracelist.append(tracecc)
+ corrlist.append(ccdat21)
+
dictionary = dict()
dictionary["Correlation"] = corrlist
dictionary["Trace"] = tracelist
diff --git a/src/readfiles/read_SIN_correlator_com.py b/pycorrfit/readfiles/read_SIN_correlator_com.py
similarity index 100%
rename from src/readfiles/read_SIN_correlator_com.py
rename to pycorrfit/readfiles/read_SIN_correlator_com.py
diff --git a/src/readfiles/read_mat_ries.py b/pycorrfit/readfiles/read_mat_ries.py
similarity index 100%
rename from src/readfiles/read_mat_ries.py
rename to pycorrfit/readfiles/read_mat_ries.py
diff --git a/pycorrfit/readfiles/read_pt3_PicoQuant.py b/pycorrfit/readfiles/read_pt3_PicoQuant.py
new file mode 100644
index 0000000..f8c3926
--- /dev/null
+++ b/pycorrfit/readfiles/read_pt3_PicoQuant.py
@@ -0,0 +1,95 @@
+# -*- coding: utf-8 -*-
+""" Wrapper for Loading PicoQuant .pt3 data files
+
+Wraps around FCS_point_correlator by Dominic Waithe
+https://github.com/dwaithe/FCS_point_correlator
+"""
+import numpy as np
+import os
+from .read_pt3_scripts.correlation_objects import picoObject
+
+
+class ParameterClass():
+ """Stores parameters for correlation """
+ def __init__(self):
+
+ #Where the data is stored.
+ self.data = []
+ self.objectRef =[]
+ self.subObjectRef =[]
+ self.colors = ['blue','green','red','cyan','magenta','yellow','black']
+ self.numOfLoaded = 0
+ self.NcascStart = 0
+ self.NcascEnd = 25
+ self.Nsub = 6
+ self.winInt = 10
+ self.photonCountBin = 25
+
+
+def openPT3(dirname, filename):
+ """ Retreive correlation curves from PicoQuant data files
+
+ This function is a wrapper around the PicoQuant capability of
+ FCS_Viewer by Dominic Waithe.
+ """
+ par_obj = ParameterClass()
+
+ pt3file = picoObject(os.path.join(dirname, filename), par_obj, None)
+
+ po = pt3file
+
+ auto = po.autoNorm
+ # lag time [ms]
+ autotime = po.autotime.reshape(-1)
+
+ corrlist = list()
+ tracelist = list()
+ typelist = list()
+
+ # Entries with autotime == 0 are bins skipped below NcascStart; drop them
+ id1 = np.where(autotime!=0)
+
+
+ # AC0 - autocorrelation CH0
+ corrac0 = auto[:,0,0]
+ if np.sum(np.abs(corrac0[id1])) != 0:
+ typelist.append("AC0")
+ # autotime,auto[:,0,0]
+ corrlist.append(np.hstack( (autotime[id1].reshape(-1,1),
+ corrac0[id1].reshape(-1,1)) ))
+
+ # AC1 - autocorrelation CH1
+ corrac1 = auto[:,1,1]
+ if np.sum(np.abs(corrac1[id1])) != 0:
+ typelist.append("AC1")
+ # autotime,auto[:,1,1]
+ corrlist.append(np.hstack( (autotime[id1].reshape(-1,1),
+ corrac1[id1].reshape(-1,1)) ))
+
+ # CC01 - Cross-Correlation CH0-CH1
+ corrcc01 = auto[:,0,1]
+ if np.sum(np.abs(corrcc01[id1])) != 0:
+ typelist.append("CC01")
+ # autotime,auto[:,0,1]
+ corrlist.append(np.hstack( (autotime[id1].reshape(-1,1),
+ corrcc01[id1].reshape(-1,1)) ))
+
+ # CC10 - Cross-Correlation CH1-CH0
+ corrcc10 = auto[:,1,0]
+ if np.sum(np.abs(corrcc10[id1])) != 0:
+ typelist.append("CC10")
+ # autotime,auto[:,1,0]
+ corrlist.append(np.hstack( (autotime[id1].reshape(-1,1),
+ corrcc10[id1].reshape(-1,1)) ))
+
+ dictionary = dict()
+ dictionary["Correlation"] = corrlist
+ dictionary["Type"] = typelist
+ # The pt3 wrapper yields no intensity traces; store None placeholders.
+ filelist = list()
+ for i in typelist:
+ filelist.append(filename)
+ tracelist.append(None)
+ dictionary["Trace"] = tracelist
+ dictionary["Filename"] = filelist
+
+ return dictionary
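
Hypothetical usage of the new reader (the file name is made up): each entry of "Correlation" is an (N, 2) array of lag time in ms versus correlation amplitude, keyed in parallel by "Type" and "Filename".

    from pycorrfit.readfiles.read_pt3_PicoQuant import openPT3

    data = openPT3("/tmp", "measurement.pt3")
    for kind, curve in zip(data["Type"], data["Correlation"]):
        print(kind + ": " + str(curve.shape))   # e.g. "AC0: (200, 2)"
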
diff --git a/pycorrfit/readfiles/read_pt3_scripts/LICENSE b/pycorrfit/readfiles/read_pt3_scripts/LICENSE
new file mode 100644
index 0000000..d6a9326
--- /dev/null
+++ b/pycorrfit/readfiles/read_pt3_scripts/LICENSE
@@ -0,0 +1,340 @@
+GNU GENERAL PUBLIC LICENSE
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc., <http://fsf.org/>
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ GNU GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License. The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language. (Hereinafter, translation is included without limitation in
+the term "modification".) Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+ 1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+ 2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any
+ part thereof, to be licensed as a whole at no charge to all third
+ parties under the terms of this License.
+
+ c) If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display an
+ announcement including an appropriate copyright notice and a
+ notice that there is no warranty (or else, saying that you provide
+ a warranty) and that users may redistribute the program under
+ these conditions, and telling the user how to view a copy of this
+ License. (Exception: if the Program itself is interactive but
+ does not normally print such an announcement, your work based on
+ the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+ a) Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of Sections
+ 1 and 2 above on a medium customarily used for software interchange; or,
+
+ b) Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a medium
+ customarily used for software interchange; or,
+
+ c) Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with such
+ an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it. For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable. However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License. Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+ 5. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Program or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+ 6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+ 7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all. For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded. In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+ 9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation. If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+ 10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission. For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this. Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ {description}
+ Copyright (C) {year} {fullname}
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) year name of author
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+ `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+ {signature of Ty Coon}, 1 April 1989
+ Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
+
diff --git a/pycorrfit/readfiles/read_pt3_scripts/__init__.py b/pycorrfit/readfiles/read_pt3_scripts/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/pycorrfit/readfiles/read_pt3_scripts/correlation_methods.py b/pycorrfit/readfiles/read_pt3_scripts/correlation_methods.py
new file mode 100644
index 0000000..7ede9d1
--- /dev/null
+++ b/pycorrfit/readfiles/read_pt3_scripts/correlation_methods.py
@@ -0,0 +1,134 @@
+import numpy as np
+from . import fib4
+
+
+"""FCS Bulk Correlation Software
+
+ Copyright (C) 2015 Dominic Waithe
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+"""
+
+
+def tttr2xfcs (y,num,NcascStart,NcascEnd, Nsub):
+ """autocorr, autotime = tttr2xfcs(y,num,10,20)
+ Translation into python of:
+ Fast calculation of fluorescence correlation data with asynchronous time-correlated single-photon counting.
+ Michael Wahl, Ingo Gregor, Matthias Patting, Jorg Enderlein
+ """
+
+ dt = np.max(y)-np.min(y)
+ y = np.round(y[:],0)
+ numshape = num.shape[0]
+
+ autotime = np.zeros(((NcascEnd+1)*(Nsub+1),1));
+ auto = np.zeros(((NcascEnd+1)*(Nsub+1), num.shape[1], num.shape[1])).astype(np.float64)
+ shift = float(0)
+ delta = float(1)
+
+
+
+ for j in range(0,NcascEnd):
+
+ #Finds the unique photon times and their indices. Halving 'y' each cascade merges times, so duplicates become more likely.
+
+ y,k1 = np.unique(y,1)
+ k1shape = k1.shape[0]
+
+ #Sums up the photon times in each bin.
+ cs =np.cumsum(num,0).T
+
+ #Prepares difference array so starts with zero.
+ diffArr1 = np.zeros(( k1shape+1));
+ diffArr2 = np.zeros(( k1shape+1));
+
+ #Takes the cumulative sum of the unique photon arrivals
+ diffArr1[1:] = cs[0,k1].reshape(-1)
+ diffArr2[1:] = cs[1,k1].reshape(-1)
+
+ #del k1
+ #del cs
+ num =np.zeros((k1shape,2))
+
+
+
+ #Finds the total photons in each bin. and represents as count.
+ #This is achieved because we have the indices of each unique time photon and cumulative total at each point.
+ num[:,0] = np.diff(diffArr1)
+ num[:,1] = np.diff(diffArr2)
+ #diffArr1 = [];
+ #diffArr2 = [];
+
+ for k in range(0,Nsub):
+ shift = shift + delta
+ lag = np.round(shift/delta,0)
+
+
+ #Allows the script to be sped up.
+ if j >= NcascStart:
+
+
+ #Old method
+ #i1= np.in1d(y,y+lag,assume_unique=True)
+ #i2= np.in1d(y+lag,y,assume_unique=True)
+
+ #New method, cython
+ i1,i2 = fib4.dividAndConquer(y, y+lag,y.shape[0])
+ i1 = i1.astype(np.bool);
+ i2 = i2.astype(np.bool);
+ #Faster dot product method, faster than converting to matrix.
+ auto[(k+(j)*Nsub),:,:] = np.dot((num[i1,:]).T,num[i2,:])/delta
+
+ autotime[k+(j)*Nsub] =shift;
+
+ #Equivalent to MATLAB round when the value ends in exactly .5
+ y = np.ceil(np.array(0.5*y))
+ delta = 2*delta
+
+ for j in range(0, auto.shape[0]):
+ auto[j,:,:] = auto[j,:,:]*dt/(dt-autotime[j])
+ autotime = autotime/1000000
+ return auto, autotime
+
+
+def delayTime2bin(dTimeArr, chanArr, chanNum, winInt):
+
+ decayTime = np.array(dTimeArr)
+ #This is the point and which each channel is identified.
+ decayTimeCh =decayTime[chanArr == chanNum]
+
+ #Find the first and last entry
+ firstDecayTime = 0;#np.min(decayTimeCh).astype(np.int32)
+ tempLastDecayTime = np.max(decayTimeCh).astype(np.int32)
+
+ #We floor this as the last bin is always incomplete and so we discard photons.
+ numBins = np.floor((tempLastDecayTime-firstDecayTime)/winInt)
+ lastDecayTime = numBins*winInt
+
+
+ bins = np.linspace(firstDecayTime,lastDecayTime, int(numBins)+1)
+
+
+ photonsInBin, jnk = np.histogram(decayTimeCh, bins)
+
+ #Each bin is represented by its centre (left edge plus half the bin width).
+ decayScale = bins[:-1]+(winInt/2)
+
+ #decayScale = np.arange(0,decayTimeCh.shape[0])
+
+
+
+
+ return list(photonsInBin), list(decayScale)
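
tttr2xfcs implements a multiple-tau scheme: Nsub linearly spaced lags per cascade, with the bin width (delta) doubling after each cascade. A sketch of the resulting lag grid, mirroring the shift/delta loop above (the helper name is illustrative):

    import numpy as np

    def multiple_tau_lags(ncasc, nsub):
        # nsub linear lags per cascade; lag spacing doubles each cascade
        lags, shift, delta = [], 0.0, 1.0
        for _ in range(ncasc):
            for _ in range(nsub):
                shift += delta
                lags.append(shift)
            delta *= 2.0
        return np.array(lags)

    print(multiple_tau_lags(3, 4))
    # -> 1, 2, 3, 4,  6, 8, 10, 12,  16, 20, 24, 28
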
diff --git a/pycorrfit/readfiles/read_pt3_scripts/correlation_objects.py b/pycorrfit/readfiles/read_pt3_scripts/correlation_objects.py
new file mode 100644
index 0000000..1d6ed35
--- /dev/null
+++ b/pycorrfit/readfiles/read_pt3_scripts/correlation_objects.py
@@ -0,0 +1,527 @@
+import numpy as np
+import os, sys
+#from correlation_methods import *
+#from import_methods import *
+import time
+#from fitting_methods import equation_
+#from lmfit import minimize, Parameters,report_fit,report_errors, fit_report
+
+from .correlation_methods import *
+from .import_methods import *
+
+
+"""FCS Bulk Correlation Software
+
+ Copyright (C) 2015 Dominic Waithe
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+"""
+
+class picoObject():
+ #This is the class which holds the .pt3 data and parameters
+ def __init__(self,filepath, par_obj,fit_obj):
+
+ #Parameter object and fit object; fit_obj may be None.
+ self.par_obj = par_obj
+ self.fit_obj = fit_obj
+ self.type = 'mainObject'
+
+ #self.PIE = 0
+ self.filepath = str(filepath)
+ self.nameAndExt = os.path.basename(self.filepath).split('.')
+ self.name = self.nameAndExt[0]
+ self.par_obj.data.append(filepath);
+ self.par_obj.objectRef.append(self)
+
+ #Imports pt3 file format to object.
+ self.unqID = self.par_obj.numOfLoaded
+
+ #For fitting.
+ self.objId1 = None
+ self.objId2 = None
+ self.objId3 = None
+ self.objId4 = None
+ self.processData();
+
+ self.plotOn = True;
+
+
+ def processData(self):
+
+ self.NcascStart = self.par_obj.NcascStart
+ self.NcascEnd = self.par_obj.NcascEnd
+ self.Nsub = self.par_obj.Nsub
+ self.winInt = self.par_obj.winInt
+ self.photonCountBin = self.par_obj.photonCountBin
+
+ #File import
+ self.subChanArr, self.trueTimeArr, self.dTimeArr,self.resolution = pt3import(self.filepath)
+
+ #Colour assigned to file.
+ self.color = self.par_obj.colors[self.unqID % len(self.par_obj.colors)]
+
+ #How many channels there are in the files.
+ self.numOfCH = np.unique(np.array(self.subChanArr)).__len__()-1 #Minus 1 because not interested in channel 15.
+ #TODO Generates the interleaved excitation channel if required.
+ #if (self.aug == 'PIE'):
+ #self.pulsedInterleavedExcitation()
+
+ #Finds the numbers which address the channels.
+ self.ch_present = np.unique(np.array(self.subChanArr[0:100]))
+
+ #Calculates decay function for both channels.
+ self.photonDecayCh1,self.decayScale1 = delayTime2bin(np.array(self.dTimeArr),np.array(self.subChanArr),self.ch_present[0],self.winInt)
+
+ if self.numOfCH == 2:
+ self.photonDecayCh2,self.decayScale2 = delayTime2bin(np.array(self.dTimeArr),np.array(self.subChanArr),self.ch_present[1],self.winInt)
+
+ #Time series of photon counts. For visualisation.
+ self.timeSeries1,self.timeSeriesScale1 = delayTime2bin(np.array(self.trueTimeArr)/1000000,np.array(self.subChanArr),self.ch_present[0],self.photonCountBin)
+ if self.numOfCH == 2:
+ self.timeSeries2,self.timeSeriesScale2 = delayTime2bin(np.array(self.trueTimeArr)/1000000,np.array(self.subChanArr),self.ch_present[1],self.photonCountBin)
+
+
+ #Calculates the Auto and Cross-correlation functions.
+ self.crossAndAuto(np.array(self.trueTimeArr),np.array(self.subChanArr))
+
+
+
+
+
+ if self.fit_obj != None:
+ #If fit object provided then creates fit objects.
+ if self.objId1 == None:
+ corrObj= corrObject(self.filepath,self.fit_obj);
+ self.objId1 = corrObj.objId
+ self.fit_obj.objIdArr.append(corrObj.objId)
+ self.objId1.name = self.name+'_CH0_Auto_Corr'
+ self.objId1.ch_type = 0 #channel 0 Auto
+ self.objId1.prepare_for_fit()
+ self.objId1.autoNorm = np.array(self.autoNorm[:,0,0]).reshape(-1)
+ self.objId1.autotime = np.array(self.autotime).reshape(-1)
+ self.objId1.param = self.fit_obj.def_param
+
+
+ if self.numOfCH == 2:
+ if self.objId3 == None:
+ corrObj= corrObject(self.filepath,self.fit_obj);
+ self.objId3 = corrObj.objId
+ self.fit_obj.objIdArr.append(corrObj.objId)
+ self.objId3.name = self.name+'_CH1_Auto_Corr'
+ self.objId3.ch_type = 1 #channel 1 Auto
+ self.objId3.prepare_for_fit()
+ self.objId3.autoNorm = np.array(self.autoNorm[:,1,1]).reshape(-1)
+ self.objId3.autotime = np.array(self.autotime).reshape(-1)
+ self.objId3.param = self.fit_obj.def_param
+
+ if self.objId2 == None:
+ corrObj= corrObject(self.filepath,self.fit_obj);
+ self.objId2 = corrObj.objId
+ self.fit_obj.objIdArr.append(corrObj.objId)
+ self.objId2.name = self.name+'_CH01_Cross_Corr'
+ self.objId2.ch_type = 2 #01cross
+ self.objId2.prepare_for_fit()
+ self.objId2.autoNorm = np.array(self.autoNorm[:,0,1]).reshape(-1)
+ self.objId2.autotime = np.array(self.autotime).reshape(-1)
+ self.objId2.param = self.fit_obj.def_param
+
+
+ if self.objId4 == None:
+ corrObj= corrObject(self.filepath,self.fit_obj);
+ self.objId4 = corrObj.objId
+ self.fit_obj.objIdArr.append(corrObj.objId)
+ self.objId4.name = self.name+'_CH10_Cross_Corr'
+ self.objId4.ch_type = 3 #10cross
+ self.objId4.prepare_for_fit()
+ self.objId4.autoNorm = np.array(self.autoNorm[:,1,0]).reshape(-1)
+ self.objId4.autotime = np.array(self.autotime).reshape(-1)
+ self.objId4.param = self.fit_obj.def_param
+
+ self.fit_obj.fill_series_list()
+ self.dTimeMin = 0
+ self.dTimeMax = np.max(self.dTimeArr)
+ self.subDTimeMin = self.dTimeMin
+ self.subDTimeMax = self.dTimeMax
+ del self.subChanArr
+ del self.trueTimeArr
+ del self.dTimeArr
+ def crossAndAuto(self,trueTimeArr,subChanArr):
+ #For each channel we loop through and find only those in the correct time gate.
+ #We only want photons in channel 1 or 2.
+ y = trueTimeArr[subChanArr < 3]
+ validPhotons = subChanArr[subChanArr < 3 ]
+
+
+ #Creates boolean for photon events in either channel.
+ num = np.zeros((validPhotons.shape[0],2))
+ num[:,0] = (np.array([np.array(validPhotons) ==self.ch_present[0]])).astype(np.int32)
+ if self.numOfCH ==2:
+ num[:,1] = (np.array([np.array(validPhotons) ==self.ch_present[1]])).astype(np.int32)
+
+
+ self.count0 = np.sum(num[:,0])
+ self.count1 = np.sum(num[:,1])
+
+ t1 = time.time()
+ auto, self.autotime = tttr2xfcs(y,num,self.NcascStart,self.NcascEnd, self.Nsub)
+ t2 = time.time()
+ print 'timing',t2-t1
+
+
+ #Normalisation of the TCSPC data:
+ maxY = np.ceil(max(self.trueTimeArr))
+ self.autoNorm = np.zeros((auto.shape))
+ self.autoNorm[:,0,0] = ((auto[:,0,0]*maxY)/(self.count0*self.count0))-1
+
+ if self.numOfCH == 2:
+ self.autoNorm[:,1,1] = ((auto[:,1,1]*maxY)/(self.count1*self.count1))-1
+ self.autoNorm[:,1,0] = ((auto[:,1,0]*maxY)/(self.count1*self.count0))-1
+ self.autoNorm[:,0,1] = ((auto[:,0,1]*maxY)/(self.count0*self.count1))-1
+
+
+ #Normalisation of the decay functions.
+ self.photonDecayCh1Min = self.photonDecayCh1-np.min(self.photonDecayCh1)
+ self.photonDecayCh1Norm = self.photonDecayCh1Min/np.max(self.photonDecayCh1Min)
+
+
+ if self.numOfCH == 2:
+ self.photonDecayCh2Min = self.photonDecayCh2-np.min(self.photonDecayCh2)
+ self.photonDecayCh2Norm = self.photonDecayCh2Min/np.max(self.photonDecayCh2Min)
+
+ return
+
+
+
+
+class subPicoObject():
+ def __init__(self,parentId,xmin,xmax,TGid,par_obj):
+ #Binning window for decay function
+ self.TGid = TGid
+ #Parameters for auto-correlation and cross-correlation.
+ self.parentId = parentId
+ self.par_obj = par_obj
+ self.NcascStart = self.parentId.NcascStart
+ self.NcascEnd = self.parentId.NcascEnd
+ self.Nsub = self.parentId.Nsub
+ self.fit_obj = self.parentId.fit_obj
+
+ self.type = 'subObject'
+ #Appends the object to the subObject register.
+ self.par_obj.subObjectRef.append(self)
+ self.unqID = self.par_obj.subNum
+ self.parentUnqID = self.parentId.unqID
+ #self.chanArr = parentObj.chanArr
+ #self.trueTimeArr = self.parentId.trueTimeArr
+ #self.dTimeArr = self.parentId.dTimeArr
+ self.color = self.parentId.color
+ self.numOfCH = self.parentId.numOfCH
+ self.ch_present = self.parentId.ch_present
+
+ self.filepath = str(self.parentId.filepath)
+ self.xmin = xmin
+ self.xmax = xmax
+
+ self.nameAndExt = os.path.basename(self.filepath).split('.')
+ self.name = 'TG-'+str(self.unqID)+'-xmin_'+str(round(xmin,0))+'-xmax_'+str(round(xmax,0))+'-'+self.nameAndExt[0]
+
+ self.objId1 = None
+ self.objId2 = None
+ self.objId3 = None
+ self.objId4 = None
+ self.processData();
+ self.plotOn = True
+
+
+ def processData(self):
+ self.NcascStart= self.par_obj.NcascStart
+ self.NcascEnd= self.par_obj.NcascEnd
+ self.Nsub = self.par_obj.Nsub
+ self.winInt = self.par_obj.winInt
+
+
+ self.subChanArr, self.trueTimeArr, self.dTimeArr,self.resolution = pt3import(self.filepath)
+
+
+
+ self.subArrayGeneration(self.xmin,self.xmax,np.array(self.subChanArr))
+
+
+
+
+ self.dTimeMin = self.parentId.dTimeMin
+ self.dTimeMax = self.parentId.dTimeMax
+ self.subDTimeMin = self.dTimeMin
+ self.subDTimeMax = self.dTimeMax
+
+
+
+ #Adds names to the fit function for later fitting.
+ if self.objId1 == None:
+ corrObj= corrObject(self.filepath,self.fit_obj);
+ self.objId1 = corrObj.objId
+ self.fit_obj.objIdArr.append(corrObj.objId)
+ self.objId1.name = self.name+'_CH0_Auto_Corr'
+ self.objId1.ch_type = 0 #channel 0 Auto
+ self.objId1.prepare_for_fit()
+ self.objId1.autoNorm = np.array(self.autoNorm[:,0,0]).reshape(-1)
+ self.objId1.autotime = np.array(self.autotime).reshape(-1)
+ self.objId1.param = self.fit_obj.def_param
+
+
+ if self.numOfCH ==2:
+ if self.objId3 == None:
+ corrObj= corrObject(self.filepath,self.fit_obj);
+ self.objId3 = corrObj.objId
+ self.fit_obj.objIdArr.append(corrObj.objId)
+ self.objId3.name = self.name+'_CH1_Auto_Corr'
+ self.objId3.ch_type = 1 #channel 1 Auto
+ self.objId3.prepare_for_fit()
+ self.objId3.autoNorm = np.array(self.autoNorm[:,1,1]).reshape(-1)
+ self.objId3.autotime = np.array(self.autotime).reshape(-1)
+ self.objId3.param = self.fit_obj.def_param
+ if self.objId2 == None:
+ corrObj= corrObject(self.filepath,self.fit_obj);
+ self.objId2 = corrObj.objId
+ self.fit_obj.objIdArr.append(corrObj.objId)
+ self.objId2.name = self.name+'_CH01_Cross_Corr'
+ self.objId2.ch_type = 2 #channel 01 Cross
+ self.objId2.prepare_for_fit()
+ self.objId2.autoNorm = np.array(self.autoNorm[:,0,1]).reshape(-1)
+ self.objId2.autotime = np.array(self.autotime).reshape(-1)
+ self.objId2.param = self.fit_obj.def_param
+ if self.objId4 == None:
+ corrObj= corrObject(self.filepath,self.fit_obj);
+ self.objId4 = corrObj.objId
+ self.fit_obj.objIdArr.append(corrObj.objId)
+ self.objId4.name = self.name+'_CH10_Cross_Corr'
+ self.objId4.ch_type = 3 #channel 10 Cross
+ self.objId4.prepare_for_fit()
+ self.objId4.autoNorm = np.array(self.autoNorm[:,1,0]).reshape(-1)
+ self.objId4.autotime = np.array(self.autotime).reshape(-1)
+ self.objId4.param = self.fit_obj.def_param
+
+
+ self.fit_obj.fill_series_list()
+ del self.subChanArr
+ del self.trueTimeArr
+ del self.dTimeArr
+
+
+
+ def subArrayGeneration(self,xmin,xmax,subChanArr):
+ if(xmax<xmin):
+ xmin1 = xmin
+ xmin = xmax
+ xmax = xmin1
+ #self.subChanArr = np.array(self.chanArr)
+ #Finds those photons which arrive above certain time or below certain time.
+ photonInd = np.logical_and(self.dTimeArr>=xmin, self.dTimeArr<=xmax).astype(np.bool)
+
+ subChanArr[np.invert(photonInd).astype(np.bool)] = 16
+
+ self.crossAndAuto(subChanArr)
+
+ return
+ def crossAndAuto(self,subChanArr):
+ #We only want photons in channel 1 or 2.
+ validPhotons = subChanArr[subChanArr < 3]
+ y = self.trueTimeArr[subChanArr < 3]
+ #Creates boolean for photon events in either channel.
+ num = np.zeros((validPhotons.shape[0],2))
+ num[:,0] = (np.array([np.array(validPhotons) ==self.ch_present[0]])).astype(np.int)
+ if self.numOfCH == 2:
+ num[:,1] = (np.array([np.array(validPhotons) ==self.ch_present[1]])).astype(np.int)
+
+ self.count0 = np.sum(num[:,0])
+ self.count1 = np.sum(num[:,1])
+ #Function which calculates auto-correlation and cross-correlation.
+
+
+
+ auto, self.autotime = tttr2xfcs(y,num,self.NcascStart,self.NcascEnd, self.Nsub)
+
+ maxY = np.ceil(max(self.trueTimeArr))
+ self.autoNorm = np.zeros((auto.shape))
+ self.autoNorm[:,0,0] = ((auto[:,0,0]*maxY)/(self.count0*self.count0))-1
+ if self.numOfCH ==2:
+ self.autoNorm[:,1,1] = ((auto[:,1,1]*maxY)/(self.count1*self.count1))-1
+ self.autoNorm[:,1,0] = ((auto[:,1,0]*maxY)/(self.count1*self.count0))-1
+ self.autoNorm[:,0,1] = ((auto[:,0,1]*maxY)/(self.count0*self.count1))-1
+
+ return
+
+class corrObject():
+ def __init__(self,filepath,parentFn):
+ #the container for the object.
+ self.parentFn = parentFn
+ self.type = 'corrObject'
+ self.filepath = str(filepath)
+ self.nameAndExt = os.path.basename(self.filepath).split('.')
+ self.name = self.nameAndExt[0]
+ self.ext = self.nameAndExt[-1]
+ self.autoNorm=[]
+ self.autotime=[]
+ self.model_autoNorm =[]
+ self.model_autotime = []
+ self.datalen= []
+ self.objId = self;
+ self.param = []
+ self.goodFit = True
+ self.fitted = False
+ self.checked = False
+ self.toFit = False
+
+ #main.data.append(filepath);
+ #The master data object reference
+ #main.corrObjectRef.append(self)
+ #The id in terms of how many things are loaded.
+ #self.unqID = main.label.numOfLoaded;
+ #main.label.numOfLoaded = main.label.numOfLoaded+1
+ def prepare_for_fit(self):
+ if self.parentFn.ch_check_ch0.isChecked() == True and self.ch_type == 0:
+ self.toFit = True
+ if self.parentFn.ch_check_ch1.isChecked() == True and self.ch_type == 1:
+ self.toFit = True
+
+ if self.parentFn.ch_check_ch01.isChecked() == True and self.ch_type == 2:
+ self.toFit = True
+ if self.parentFn.ch_check_ch10.isChecked() == True and self.ch_type == 3:
+ self.toFit = True
+ #self.parentFn.modelFitSel.clear()
+ #for objId in self.parentFn.objIdArr:
+ # if objId.toFit == True:
+ # self.parentFn.modelFitSel.addItem(objId.name)
+ self.parentFn.updateFitList()
+ def residual(self, param, x, data,options):
+
+ A = equation_(param, x,options)
+ residuals = data-A
+ return residuals
+ def fitToParameters(self):
+ self.parentFn.updateParamFirst()
+ self.parentFn.updateTableFirst()
+ self.parentFn.updateParamFirst()
+
+
+ #convert line coordinate
+
+ #Find the index of the nearest point in the scale.
+
+ data = np.array(self.autoNorm).astype(np.float64).reshape(-1)
+ scale = np.array(self.autotime).astype(np.float64).reshape(-1)
+ indx_L = int(np.argmin(np.abs(scale - self.parentFn.dr.xpos)))
+ indx_R = int(np.argmin(np.abs(scale - self.parentFn.dr1.xpos)))
+
+
+ res = minimize(self.residual, self.param, args=(scale[indx_L:indx_R+1],data[indx_L:indx_R+1], self.parentFn.def_options))
+ self.residualVar = res.residual
+ output = fit_report(self.param)
+ print 'residual',res.chisqr
+ if(res.chisqr>0.05):
+ print 'CAUTION DATA DID NOT FIT WELL CHI^2 >0.05',res.chisqr
+ self.goodFit = False
+ else:
+ self.goodFit = True
+ self.fitted = True
+ self.chisqr = res.chisqr
+ rowArray =[];
+ localTime = time.asctime( time.localtime(time.time()) )
+ rowArray.append(str(self.name))
+ rowArray.append(str(localTime))
+ rowArray.append(str(self.parentFn.diffModEqSel.currentText()))
+ rowArray.append(str(self.parentFn.def_options['Diff_species']))
+ rowArray.append(str(self.parentFn.tripModEqSel.currentText()))
+ rowArray.append(str(self.parentFn.def_options['Triplet_species']))
+ rowArray.append(str(self.parentFn.dimenModSel.currentText()))
+ rowArray.append(str(scale[indx_L]))
+ rowArray.append(str(scale[indx_R]))
+
+ for key, value in self.param.iteritems() :
+ rowArray.append(str(value.value))
+ rowArray.append(str(value.stderr))
+ if key =='GN0':
+ try:
+ rowArray.append(str(1/value.value))
+ except:
+ rowArray.append(str(0))
+
+ self.rowText = rowArray
+
+ self.parentFn.updateTableFirst();
+ self.model_autoNorm = equation_(self.param, scale[indx_L:indx_R+1],self.parentFn.def_options)
+ self.model_autotime = scale[indx_L:indx_R+1]
+ self.parentFn.on_show()
+
+ #self.parentFn.axes.plot(model_autotime,model_autoNorm, 'o-')
+ #self.parentFn.canvas.draw();
+
+ def load_from_file(self,channel):
+ tscale = [];
+ tdata = [];
+ if self.ext == 'SIN':
+ self.parentFn.objIdArr.append(self.objId)
+ proceed = False
+
+ for line in csv.reader(open(self.filepath, 'rb'),delimiter='\t'):
+
+ if proceed ==True:
+ if line ==[]:
+ break;
+
+
+ tscale.append(float(line[0]))
+ tdata.append(float(line[channel+1]))
+ else:
+
+ if (str(line) == "[\'[CorrelationFunction]\']"):
+ proceed = True;
+
+
+ self.autoNorm= np.array(tdata).astype(np.float64).reshape(-1)
+ self.autotime= np.array(tscale).astype(np.float64).reshape(-1)*1000
+ self.name = self.name+'-CH'+str(channel)
+ self.ch_type = channel;
+ self.prepare_for_fit()
+
+
+ self.param = self.parentFn.def_param
+ self.parentFn.fill_series_list()
+
+
+ #Where we add the names.
+
+
+ if self.ext == 'csv':
+
+ self.parentFn.objIdArr.append(self)
+
+ c = 0
+
+ for line in csv.reader(open(self.filepath, 'rb')):
+ if (c >0):
+ tscale.append(line[0])
+ tdata.append(line[1])
+ c +=1;
+
+ self.autoNorm= np.array(tdata).astype(np.float64).reshape(-1)
+ self.autotime= np.array(tscale).astype(np.float64).reshape(-1)
+ self.ch_type = 0
+ self.datalen= len(tdata)
+ self.objId.prepare_for_fit()
+
+
+
+
+
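
The normalisation in crossAndAuto divides the raw coincidence counts by the product of the per-channel photon counts over the measurement time T and subtracts the uncorrelated baseline, i.e. G = raw*T/(count_i*count_j) - 1. A numeric sketch (the function name is illustrative):

    import numpy as np

    def normalize_correlation(raw, T, count_i, count_j):
        # raw coincidence histogram -> normalised correlation amplitude
        return (raw * T) / (count_i * count_j) - 1.0

    raw = np.array([50.0, 25.0, 12.5])
    print(normalize_correlation(raw, T=1000.0, count_i=100.0, count_j=100.0))
    # -> [4.0, 1.5, 0.25]
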
diff --git a/pycorrfit/readfiles/read_pt3_scripts/fib4.pyx b/pycorrfit/readfiles/read_pt3_scripts/fib4.pyx
new file mode 100644
index 0000000..874d52c
--- /dev/null
+++ b/pycorrfit/readfiles/read_pt3_scripts/fib4.pyx
@@ -0,0 +1,60 @@
+# -*- coding: utf-8 -*-
+""" PicoQuant functionalities from FCS_viewer
+
+This file contains a fast implementation of an algorithm that is
+very important (yes I have no clue about the structure of pt3 files)
+for importing *.pt3 files: `dividAndConquer`.
+
+The code was written by
+Dr. Dominic Waithe
+Wolfson Imaging Centre.
+Weatherall Institute of Molecular Medicine.
+University of Oxford
+
+https://github.com/dwaithe/FCS_viewer
+
+See Also:
+ The wrapper: `read_pt3_PicoQuant.py`
+ The wrapped file: `read_pt3_PicoQuant_original_FCSViewer.py`.
+"""
+
+import cython
+cimport cython
+
+import numpy as np
+cimport numpy as np
+
+DTYPE = np.float64
+ctypedef np.float64_t DTYPE_t
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.nonecheck(False)
+def dividAndConquer(arr1b,arr2b,arrLength):
+ """divide and conquer fast intersection algorithm. Waithe D 2014"""
+
+ cdef np.ndarray[DTYPE_t, ndim=1] arr1bool = np.zeros((arrLength-1))
+ cdef np.ndarray[DTYPE_t, ndim=1] arr2bool = np.zeros((arrLength-1))
+ cdef np.ndarray[DTYPE_t, ndim=1] arr1 = arr1b
+ cdef np.ndarray[DTYPE_t, ndim=1] arr2 = arr2b
+
+ cdef int arrLen
+ arrLen = arrLength;
+ cdef int i
+ i = 0;
+ cdef int j
+ j = 0;
+
+ while(i <arrLen-1 and j< arrLen-1):
+
+ if(arr1[i] < arr2[j]):
+ i+=1;
+ elif(arr2[j] < arr1[i]):
+ j+=1;
+ elif (arr1[i] == arr2[j]):
+
+ arr1bool[i] = 1;
+ arr2bool[j] = 1;
+ i+=1;
+
+ return arr1bool,arr2bool
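
dividAndConquer is a merge-style intersection of two sorted photon-time arrays; the boolean outputs mark which entries of y have a partner in y+lag and vice versa. The pure-numpy equivalent it speeds up is noted as the "old method" in correlation_methods.py:

    import numpy as np

    y = np.array([1.0, 3.0, 4.0, 7.0])
    lag = 2.0
    i1 = np.in1d(y, y + lag, assume_unique=True)  # photon has a partner lag earlier
    i2 = np.in1d(y + lag, y, assume_unique=True)  # photon has a partner lag later
    # i1 -> [False, True, False, False]; i2 -> [True, False, False, False]
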
diff --git a/pycorrfit/readfiles/read_pt3_scripts/import_methods.py b/pycorrfit/readfiles/read_pt3_scripts/import_methods.py
new file mode 100644
index 0000000..71b1abb
--- /dev/null
+++ b/pycorrfit/readfiles/read_pt3_scripts/import_methods.py
@@ -0,0 +1,190 @@
+import struct
+import numpy as np
+
+
+"""FCS Bulk Correlation Software
+
+ Copyright (C) 2015 Dominic Waithe
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+"""
+
+def pt3import(filepath):
+ """The file import for the .pt3 file"""
+ f = open(filepath, 'rb')
+ Ident = f.read(16)
+ FormatVersion = f.read(6)
+ CreatorName = f.read(18)
+ CreatorVersion = f.read(12)
+ FileTime = f.read(18)
+ CRLF = f.read(2)
+ CommentField = f.read(256)
+ Curves = struct.unpack('i', f.read(4))[0]
+ BitsPerRecord = struct.unpack('i', f.read(4))[0]
+ RoutingChannels = struct.unpack('i', f.read(4))[0]
+ NumberOfBoards = struct.unpack('i', f.read(4))[0]
+ ActiveCurve = struct.unpack('i', f.read(4))[0]
+ MeasurementMode = struct.unpack('i', f.read(4))[0]
+ SubMode = struct.unpack('i', f.read(4))[0]
+ RangeNo = struct.unpack('i', f.read(4))[0]
+ Offset = struct.unpack('i', f.read(4))[0]
+ AcquisitionTime = struct.unpack('i', f.read(4))[0]
+ StopAt = struct.unpack('i', f.read(4))[0]
+ StopOnOvfl = struct.unpack('i', f.read(4))[0]
+ Restart = struct.unpack('i', f.read(4))[0]
+ DispLinLog = struct.unpack('i', f.read(4))[0]
+ DispTimeFrom = struct.unpack('i', f.read(4))[0]
+ DispTimeTo = struct.unpack('i', f.read(4))[0]
+ DispCountFrom = struct.unpack('i', f.read(4))[0]
+ DispCountTo = struct.unpack('i', f.read(4))[0]
+ DispCurveMapTo = [];
+ DispCurveShow =[];
+ for i in range(0,8):
+ DispCurveMapTo.append(struct.unpack('i', f.read(4))[0]);
+ DispCurveShow.append(struct.unpack('i', f.read(4))[0]);
+ ParamStart =[];
+ ParamStep =[];
+ ParamEnd =[];
+ for i in range(0,3):
+ ParamStart.append(struct.unpack('i', f.read(4))[0]);
+ ParamStep.append(struct.unpack('i', f.read(4))[0]);
+ ParamEnd.append(struct.unpack('i', f.read(4))[0]);
+
+ RepeatMode = struct.unpack('i', f.read(4))[0]
+ RepeatsPerCurve = struct.unpack('i', f.read(4))[0]
+ RepeatTime = struct.unpack('i', f.read(4))[0]
+ RepeatWait = struct.unpack('i', f.read(4))[0]
+ ScriptName = f.read(20)
+
+ #The next is a board specific header
+
+ HardwareIdent = f.read(16)
+ HardwareVersion = f.read(8)
+ HardwareSerial = struct.unpack('i', f.read(4))[0]
+ SyncDivider = struct.unpack('i', f.read(4))[0]
+
+ CFDZeroCross0 = struct.unpack('i', f.read(4))[0]
+ CFDLevel0 = struct.unpack('i', f.read(4))[0]
+ CFDZeroCross1 = struct.unpack('i', f.read(4))[0]
+ CFDLevel1 = struct.unpack('i', f.read(4))[0]
+
+ Resolution = struct.unpack('f', f.read(4))[0]
+
+ #below is new in format version 2.0
+
+ RouterModelCode = struct.unpack('i', f.read(4))[0]
+ RouterEnabled = struct.unpack('i', f.read(4))[0]
+
+ #Router Ch1
+ RtChan1_InputType = struct.unpack('i', f.read(4))[0]
+ RtChan1_InputLevel = struct.unpack('i', f.read(4))[0]
+ RtChan1_InputEdge = struct.unpack('i', f.read(4))[0]
+ RtChan1_CFDPresent = struct.unpack('i', f.read(4))[0]
+ RtChan1_CFDLevel = struct.unpack('i', f.read(4))[0]
+ RtChan1_CFDZeroCross = struct.unpack('i', f.read(4))[0]
+ #Router Ch2
+ RtChan2_InputType = struct.unpack('i', f.read(4))[0]
+ RtChan2_InputLevel = struct.unpack('i', f.read(4))[0]
+ RtChan2_InputEdge = struct.unpack('i', f.read(4))[0]
+ RtChan2_CFDPresent = struct.unpack('i', f.read(4))[0]
+ RtChan2_CFDLevel = struct.unpack('i', f.read(4))[0]
+ RtChan2_CFDZeroCross = struct.unpack('i', f.read(4))[0]
+ #Router Ch3
+ RtChan3_InputType = struct.unpack('i', f.read(4))[0]
+ RtChan3_InputLevel = struct.unpack('i', f.read(4))[0]
+ RtChan3_InputEdge = struct.unpack('i', f.read(4))[0]
+ RtChan3_CFDPresent = struct.unpack('i', f.read(4))[0]
+ RtChan3_CFDLevel = struct.unpack('i', f.read(4))[0]
+ RtChan3_CFDZeroCross = struct.unpack('i', f.read(4))[0]
+ #Router Ch4
+ RtChan4_InputType = struct.unpack('i', f.read(4))[0]
+ RtChan4_InputLevel = struct.unpack('i', f.read(4))[0]
+ RtChan4_InputEdge = struct.unpack('i', f.read(4))[0]
+ RtChan4_CFDPresent = struct.unpack('i', f.read(4))[0]
+ RtChan4_CFDLevel = struct.unpack('i', f.read(4))[0]
+ RtChan4_CFDZeroCross = struct.unpack('i', f.read(4))[0]
+
+    # The next part is a T3-mode-specific header.
+ ExtDevices = struct.unpack('i', f.read(4))[0]
+
+ Reserved1 = struct.unpack('i', f.read(4))[0]
+ Reserved2 = struct.unpack('i', f.read(4))[0]
+ CntRate0 = struct.unpack('i', f.read(4))[0]
+ CntRate1 = struct.unpack('i', f.read(4))[0]
+
+ StopAfter = struct.unpack('i', f.read(4))[0]
+ StopReason = struct.unpack('i', f.read(4))[0]
+ Records = struct.unpack('i', f.read(4))[0]
+    ImgHdrSize = struct.unpack('i', f.read(4))[0]
+
+    # Special header for imaging; ImgHdrSize is the number of 32 bit records.
+    if ImgHdrSize > 0:
+        ImgHdr = struct.unpack(str(ImgHdrSize)+'i', f.read(4*ImgHdrSize))
+    ofltime = 0
+
+    # Counters for the four routing channels, overflows, markers and errors.
+    cnt_1 = cnt_2 = cnt_3 = cnt_4 = cnt_Ofl = cnt_M = cnt_Err = 0
+    WRAPAROUND = 65536  # the 16 bit nsync counter wraps at 2**16
+
+    # Sync period in ns; CntRate0 is the sync count rate in Hz.
+    syncperiod = 1e9/CntRate0
+
+    # Preallocate output arrays, one entry per record.
+    chanArr = [0]*Records
+    trueTimeArr = [0]*Records
+    dTimeArr = [0]*Records
+    for b in range(Records):
+        T3Record = struct.unpack('I', f.read(4))[0]
+        # Record layout: bits 0-15 nsync, bits 16-27 dtime,
+        # bits 28-31 routing channel.
+        nsync = T3Record & 65535
+        chan = (T3Record >> 28) & 15
+        chanArr[b] = chan
+        dtime = 0
+
+        if chan == 1:
+            cnt_1 += 1
+            dtime = (T3Record >> 16) & 4095
+        elif chan == 2:
+            cnt_2 += 1
+            dtime = (T3Record >> 16) & 4095
+        elif chan == 3:
+            cnt_3 += 1
+            dtime = (T3Record >> 16) & 4095
+        elif chan == 4:
+            cnt_4 += 1
+            dtime = (T3Record >> 16) & 4095
+        elif chan == 15:
+            # Special record: marker bits of zero signal an overflow
+            # of the 16 bit nsync counter.
+            markers = (T3Record >> 16) & 15
+            if markers == 0:
+                ofltime += WRAPAROUND
+                cnt_Ofl += 1
+            else:
+                cnt_M += 1
+
+        # Absolute macro time in ns.
+        truensync = ofltime + nsync
+        truetime = (truensync * syncperiod) + (dtime*Resolution)
+        trueTimeArr[b] = truetime
+        dTimeArr[b] = dtime
+    f.close()
+
+    return np.array(chanArr), np.array(trueTimeArr), np.array(dTimeArr), Resolution
\ No newline at end of file
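
For reference, the bit layout decoded in the loop above can be checked in isolation. A minimal sketch, assuming the T3 layout used by the reader (16 bit nsync, 12 bit dtime, 4 bit channel); the record value itself is made up:

    import struct

    # Pack a hypothetical T3 record: channel 2, dtime 100, nsync 1234.
    raw = struct.pack('I', (2 << 28) | (100 << 16) | 1234)
    record = struct.unpack('I', raw)[0]

    nsync = record & 65535           # lower 16 bits
    dtime = (record >> 16) & 4095    # middle 12 bits
    chan = (record >> 28) & 15       # upper 4 bits
    assert (chan, dtime, nsync) == (2, 100, 1234)
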
diff --git a/src/tools/__init__.py b/pycorrfit/tools/__init__.py
similarity index 91%
rename from src/tools/__init__.py
rename to pycorrfit/tools/__init__.py
index 4b73de7..429fa41 100644
--- a/src/tools/__init__.py
+++ b/pycorrfit/tools/__init__.py
@@ -61,17 +61,17 @@ import sys
reload(sys)
sys.setdefaultencoding('utf-8')
-import datarange
-import background
-import overlaycurves
-import batchcontrol
-import globalfit
-import average
-import simulation
-
-import info
-import statistics
-import trace
+from . import datarange
+from . import background
+from . import overlaycurves
+from . import batchcontrol
+from . import globalfit
+from . import average
+from . import simulation
+
+from . import info
+from . import statistics
+from . import trace
# Load all of the classes
# This also defines the order of the tools in the menu
ImpA = [
@@ -106,9 +106,9 @@ for i in np.arange(len(ImpB)):
#ToolsPassive.append(getattr(ModulePassive[i], ImpB[i][1]))
# This is in the file menu and not needed in the dictionaries below.
-from chooseimport import ChooseImportTypes
-from chooseimport import ChooseImportTypesModel
-from comment import EditComment
+from .chooseimport import ChooseImportTypes
+from .chooseimport import ChooseImportTypesModel
+from .comment import EditComment
# the "special" tool RangeSelector
from parmrange import RangeSelector
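
The changes above turn Python 2 implicit relative imports into explicit ones, which is what lets these modules keep working after the move from src/ into the pycorrfit package. A self-contained sketch of the explicit form, using a throwaway package (the name toolpkg is hypothetical):

    import os
    import sys
    import tempfile

    # Create a tiny package on disk with one submodule.
    pkg = os.path.join(tempfile.mkdtemp(), "toolpkg")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("from . import misc\n")  # explicit: always toolpkg.misc
    with open(os.path.join(pkg, "misc.py"), "w") as f:
        f.write("VALUE = 42\n")

    sys.path.insert(0, os.path.dirname(pkg))
    import toolpkg
    print(toolpkg.misc.VALUE)  # -> 42
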
diff --git a/src/tools/average.py b/pycorrfit/tools/average.py
similarity index 99%
rename from src/tools/average.py
rename to pycorrfit/tools/average.py
index a87e859..bc3e0fb 100644
--- a/src/tools/average.py
+++ b/pycorrfit/tools/average.py
@@ -32,8 +32,8 @@
import numpy as np
import wx
-import misc
-import models as mdls
+from .. import misc
+from .. import models as mdls
# Menu entry name
MENUINFO = ["&Average data", "Create an average curve from whole session."]
diff --git a/src/tools/background.py b/pycorrfit/tools/background.py
similarity index 99%
rename from src/tools/background.py
rename to pycorrfit/tools/background.py
index 907af52..68c9444 100644
--- a/src/tools/background.py
+++ b/pycorrfit/tools/background.py
@@ -37,10 +37,10 @@ import wx
from wx.lib.agw import floatspin # Float numbers in spin fields
import wx.lib.plot as plot
-import doc
-import misc
-import openfile as opf # How to treat an opened file
-import readfiles
+from .. import doc
+from .. import misc
+from .. import openfile as opf # How to treat an opened file
+from .. import readfiles
# Menu entry name
MENUINFO = ["&Background correction", "Open a file for background correction."]
diff --git a/src/tools/batchcontrol.py b/pycorrfit/tools/batchcontrol.py
similarity index 85%
rename from src/tools/batchcontrol.py
rename to pycorrfit/tools/batchcontrol.py
index de940a7..91846d0 100644
--- a/src/tools/batchcontrol.py
+++ b/pycorrfit/tools/batchcontrol.py
@@ -30,10 +30,12 @@
import numpy as np
+import os
import wx
-import openfile as opf # How to treat an opened file
-import models as mdls
+from .. import openfile as opf # How to treat an opened file
+from .. import models as mdls
+
# Menu entry name
MENUINFO = ["B&atch control", "Batch fitting."]
@@ -186,18 +188,29 @@ class BatchCtrl(wx.Frame):
def OnRadioThere(self, event=None):
# If user clicks on pages in main program, we do not want the list
# to be changed.
- self.YamlParms, dirname, filename = \
- opf.ImportParametersYaml(self.parent, self.parent.dirname)
- if filename == None:
-            # User did not select any session file
- self.rbtnhere.SetValue(True)
+ wc = opf.session_wildcards
+ wcstring = "PyCorrFit session (*.pcfs)|*{};*{}".format(
+ wc[0], wc[1])
+ dlg = wx.FileDialog(self.parent, "Open session file",
+ self.parent.dirname, "", wcstring, wx.OPEN)
+        # The dialog is modal: nothing proceeds until the user clicks "OK".
+ if dlg.ShowModal() == wx.ID_OK:
+ sessionfile = dlg.GetPath()
+ self.dirname = os.path.split(sessionfile)[0]
else:
- DDlist = list()
- for i in np.arange(len(self.YamlParms)):
- # Rebuild the list
- modelid = self.YamlParms[i][1]
- modelname = mdls.modeldict[modelid][1]
- DDlist.append(self.YamlParms[i][0]+modelname)
- self.dropdown.SetItems(DDlist)
- # Set selection text to first item
- self.dropdown.SetSelection(0)
+            self.parent.dirname = dlg.GetDirectory()
+ self.rbtnhere.SetValue(True)
+ return
+
+ Infodict = opf.LoadSessionData(sessionfile,
+ parameters_only=True)
+ self.YamlParms = Infodict["Parameters"]
+ DDlist = list()
+ for i in np.arange(len(self.YamlParms)):
+ # Rebuild the list
+ modelid = self.YamlParms[i][1]
+ modelname = mdls.modeldict[modelid][1]
+ DDlist.append(self.YamlParms[i][0]+modelname)
+ self.dropdown.SetItems(DDlist)
+ # Set selection text to first item
+ self.dropdown.SetSelection(0)
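
The rewritten handler goes through the new openfile API instead of the removed ImportParametersYaml helper. A rough sketch of the same calls outside the GUI (the session path is hypothetical; LoadSessionData, parameters_only and the "Parameters" key are taken from the hunk above):

    from pycorrfit import openfile as opf
    from pycorrfit import models as mdls

    # Load only the fit parameters of a stored session (path hypothetical).
    Infodict = opf.LoadSessionData("/tmp/example.pcfs", parameters_only=True)
    for parms in Infodict["Parameters"]:
        # parms[0] is the page title, parms[1] the internal model ID.
        print(parms[0] + mdls.modeldict[parms[1]][1])
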
diff --git a/src/tools/chooseimport.py b/pycorrfit/tools/chooseimport.py
similarity index 99%
rename from src/tools/chooseimport.py
rename to pycorrfit/tools/chooseimport.py
index 0a75b3d..700c78b 100644
--- a/src/tools/chooseimport.py
+++ b/pycorrfit/tools/chooseimport.py
@@ -33,9 +33,9 @@
import numpy as np
import wx
-import models as mdls
-import doc
-import overlaycurves
+from .. import models as mdls
+from .. import doc
+from . import overlaycurves
class ChooseImportTypes(wx.Dialog):
diff --git a/src/tools/comment.py b/pycorrfit/tools/comment.py
similarity index 100%
rename from src/tools/comment.py
rename to pycorrfit/tools/comment.py
diff --git a/src/tools/datarange.py b/pycorrfit/tools/datarange.py
similarity index 100%
rename from src/tools/datarange.py
rename to pycorrfit/tools/datarange.py
diff --git a/src/tools/example.py b/pycorrfit/tools/example.py
similarity index 100%
rename from src/tools/example.py
rename to pycorrfit/tools/example.py
diff --git a/src/tools/globalfit.py b/pycorrfit/tools/globalfit.py
similarity index 99%
rename from src/tools/globalfit.py
rename to pycorrfit/tools/globalfit.py
index 3cd5840..e48126a 100644
--- a/src/tools/globalfit.py
+++ b/pycorrfit/tools/globalfit.py
@@ -33,8 +33,8 @@ import wx
import numpy as np
from scipy import optimize as spopt
-import misc
-import models as mdls
+from .. import misc
+from .. import models as mdls
# Menu entry name
MENUINFO = ["&Global fitting",
diff --git a/src/tools/info.py b/pycorrfit/tools/info.py
similarity index 99%
rename from src/tools/info.py
rename to pycorrfit/tools/info.py
index 55983bf..671b44c 100644
--- a/src/tools/info.py
+++ b/pycorrfit/tools/info.py
@@ -32,8 +32,8 @@
import wx
import numpy as np
-import fitting
-import models as mdls
+from .. import fitting
+from .. import models as mdls
# Menu entry name
MENUINFO = ["Page &info",
diff --git a/src/tools/overlaycurves.py b/pycorrfit/tools/overlaycurves.py
similarity index 96%
rename from src/tools/overlaycurves.py
rename to pycorrfit/tools/overlaycurves.py
index 999bf5e..f03ad4d 100644
--- a/src/tools/overlaycurves.py
+++ b/pycorrfit/tools/overlaycurves.py
@@ -30,14 +30,20 @@
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
-from matplotlib import cm
+try:
+    from matplotlib import cm
+except ImportError:
+    mpl_available = False
+else:
+    mpl_available = True
+
import numpy as np
import platform
import wx
import wx.lib.plot as plot # Plotting in wxPython
-import edclasses
-import misc
+from .. import edclasses
+from .. import misc
# Menu entry name
MENUINFO = ["&Overlay curves", "Select experimental curves."]
@@ -139,7 +145,7 @@ class Wrapper_Tools(object):
`tools`.
"""
if trigger in ["parm_batch", "fit_batch", "page_add_batch",
- "tab_init"]:
+ "tab_init", "tab_browse"]:
return
# When parent changes
# This is a necessary function for PyCorrFit.
@@ -394,16 +400,20 @@ class UserSelectCurves(wx.Frame):
curves.append(self.curvedict[self.curvekeys[i]])
legends.append(self.curvekeys[i])
# Set color map
- cmap = cm.get_cmap("gist_rainbow")
+ if mpl_available:
+ cmap = cm.get_cmap("gist_rainbow")
# Clear Plot
self.canvas.Clear()
# Draw Plot
lines = list()
for i in np.arange(len(curves)):
- color = cmap(1.*i/(len(curves)), bytes=True)
- color = wx.Colour(color[0], color[1], color[2])
- line = plot.PolyLine(curves[i], legend=legends[i], colour=color,
- width=1)
+ if mpl_available:
+ color = cmap(1.*i/(len(curves)), bytes=True)
+ color = wx.Colour(color[0], color[1], color[2])
+ else:
+ color = "black"
+ line = plot.PolyLine(curves[i], legend=legends[i],
+ colour=color, width=1)
lines.append(line)
self.canvas.SetEnableLegend(True)
if len(curves) != 0:
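
With this hunk, matplotlib becomes an optional dependency of the overlay tool: curves get colormap colors when matplotlib imports, and a single fallback color otherwise. The same pattern in isolation, as a minimal sketch (the helper name is hypothetical):

    try:
        from matplotlib import cm
    except ImportError:
        cm = None

    def curve_color(i, n):
        """RGB tuple for curve i of n; black without matplotlib."""
        if cm is not None:
            r, g, b, _a = cm.get_cmap("gist_rainbow")(1.*i/n, bytes=True)
            return (r, g, b)
        return (0, 0, 0)

    print(curve_color(0, 10))
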
diff --git a/src/tools/parmrange.py b/pycorrfit/tools/parmrange.py
similarity index 98%
rename from src/tools/parmrange.py
rename to pycorrfit/tools/parmrange.py
index 153e3f1..5535432 100644
--- a/src/tools/parmrange.py
+++ b/pycorrfit/tools/parmrange.py
@@ -34,8 +34,8 @@
import wx
import numpy as np
-import edclasses # edited floatspin
-import models as mdls
+from .. import edclasses # edited floatspin
+from .. import models as mdls
class RangeSelector(wx.Frame):
diff --git a/src/tools/plotexport.py b/pycorrfit/tools/plotexport.py
similarity index 100%
rename from src/tools/plotexport.py
rename to pycorrfit/tools/plotexport.py
diff --git a/src/tools/simulation.py b/pycorrfit/tools/simulation.py
similarity index 99%
rename from src/tools/simulation.py
rename to pycorrfit/tools/simulation.py
index 776e427..362a05a 100644
--- a/src/tools/simulation.py
+++ b/pycorrfit/tools/simulation.py
@@ -33,8 +33,8 @@
import wx
import numpy as np
-import edclasses # edited floatspin
-import models as mdls
+from .. import edclasses # edited floatspin
+from .. import models as mdls
# Menu entry name
MENUINFO = ["S&lider simulation",
diff --git a/src/tools/statistics.py b/pycorrfit/tools/statistics.py
similarity index 99%
rename from src/tools/statistics.py
rename to pycorrfit/tools/statistics.py
index 21157d4..aeda084 100644
--- a/src/tools/statistics.py
+++ b/pycorrfit/tools/statistics.py
@@ -35,8 +35,8 @@ import wx
import wx.lib.plot as plot # Plotting in wxPython
import numpy as np
-from info import InfoClass
-import misc
+from .info import InfoClass
+from .. import misc
# Menu entry name
MENUINFO = ["&Statistics view", "Show some session statistics."]
diff --git a/src/tools/trace.py b/pycorrfit/tools/trace.py
similarity index 100%
rename from src/tools/trace.py
rename to pycorrfit/tools/trace.py
diff --git a/src/usermodel.py b/pycorrfit/usermodel.py
similarity index 98%
rename from src/usermodel.py
rename to pycorrfit/usermodel.py
index 04b8f3d..32272c8 100644
--- a/src/usermodel.py
+++ b/pycorrfit/usermodel.py
@@ -35,6 +35,8 @@
import numpy as np
import scipy.special as sps
+import sys
+import warnings
try:
import sympy
from sympy.core.function import Function
@@ -42,13 +44,14 @@ try:
from sympy import sympify, I
from sympy.functions import im
except ImportError:
- print " Warning: module sympy not found!"
+ warnings.warn("Importing sympy failed."+\
+ " Reason: {}.".format(sys.exc_info()[1].message))
# Define Function, so PyCorrFit will start, even if sympy is not there.
# wixi needs Function.
Function = object
import wx
-import models as mdls
+from . import models as mdls
class CorrFunc(object):
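
The usermodel hunk above replaces a bare print with warnings.warn so a missing sympy is reported without preventing startup. A minimal sketch of the same pattern; str() of the exception is used here instead of the Python-2-only .message attribute:

    import sys
    import warnings

    try:
        import sympy
        from sympy.core.function import Function
    except ImportError:
        warnings.warn("Importing sympy failed. Reason: {}.".format(
            sys.exc_info()[1]))
        # Define Function so dependent code can still be declared.
        Function = object
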
diff --git a/setup.py b/setup.py
index 843b7ac..f98fd62 100644
--- a/setup.py
+++ b/setup.py
@@ -1,8 +1,13 @@
#!/usr/bin/env python
+# To just compile the cython part in-place:
+# python setup.py build_ext --inplace
# To create a distribution package for pip or easy-install:
# python setup.py sdist
-from setuptools import setup, find_packages
-from os.path import join, dirname, realpath
+from setuptools import setup, find_packages, Extension
+from Cython.Distutils import build_ext
+import numpy as np
+
+from os.path import join, dirname, realpath, exists
from warnings import warn
# The next three lines are necessary for setup.py install to include
@@ -12,6 +17,16 @@ for scheme in INSTALL_SCHEMES.values():
scheme['data'] = scheme['purelib']
+# Download documentation if it was not compiled
+Documentation = join(dirname(realpath(__file__)), "doc/PyCorrFit_doc.pdf")
+webdoc = "https://github.com/paulmueller/PyCorrFit/wiki/PyCorrFit_doc.pdf"
+if not exists(Documentation):
+ print "Downloading {} from {}".format(Documentation, webdoc)
+ import urllib
+ #testfile = urllib.URLopener()
+ urllib.urlretrieve(webdoc, Documentation)
+
+
# Get the version of PyCorrFit from the Changelog.txt
StaticChangeLog = join(dirname(realpath(__file__)), "ChangeLog.txt")
try:
@@ -22,6 +37,14 @@ except:
warn("Could not find 'ChangeLog.txt'. PyCorrFit version is unknown.")
version = "0.0.0-unknown"
+
+EXTENSIONS = [Extension("pycorrfit.readfiles.read_pt3_scripts.fib4",
+ ["pycorrfit/readfiles/read_pt3_scripts/fib4.pyx"],
+ libraries=[],
+ include_dirs=[np.get_include()]
+ )
+ ]
+
setup(
name='pycorrfit',
author='Paul Mueller',
@@ -32,23 +55,35 @@ setup(
'pycorrfit.models',
'pycorrfit.readfiles',
'pycorrfit.tools'],
- package_dir={'pycorrfit': 'src',
- 'pycorrfit.models': 'src/models',
- 'pycorrfit.readfiles': 'src/readfiles',
- 'pycorrfit.tools': 'src/tools'},
- data_files=[('pycorrfit_doc', ['ChangeLog.txt', 'PyCorrFit_doc.pdf'])],
+ package_dir={'pycorrfit': 'pycorrfit',
+ 'pycorrfit.models': 'pycorrfit/models',
+ 'pycorrfit.readfiles': 'pycorrfit/readfiles',
+ 'pycorrfit.tools': 'pycorrfit/tools'},
+ data_files=[('pycorrfit_doc', ['ChangeLog.txt', 'doc/PyCorrFit_doc.pdf'])],
license="GPL v2",
description='Scientific tool for fitting correlation curves on a logarithmic plot.',
long_description=open(join(dirname(__file__), 'Readme.txt')).read(),
scripts=['bin/pycorrfit'],
include_package_data=True,
+ cmdclass={"build_ext": build_ext},
+ ext_modules=EXTENSIONS,
install_requires=[
+ "cython",
"NumPy >= 1.5.1",
"SciPy >= 0.8.0",
"sympy >= 0.7.2",
"PyYAML >= 3.09",
"wxPython >= 2.8.10.1",
- "matplotlib >= 1.1.0"]
+ "matplotlib >= 1.1.0"],
+ keywords=["fcs", "fluorescence", "correlation", "spectroscopy",
+ "tir", "fitting"],
+    classifiers=[
+        'Operating System :: OS Independent',
+        'Programming Language :: Python :: 2.7',
+        'Topic :: Scientific/Engineering :: Visualization',
+        'Intended Audience :: Science/Research',
+    ],
+ platforms=['ALL']
)
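
setup.py now builds the pt3 reader's Cython extension. A quick check that an in-place build produced a usable module (fib4's functions are not part of this diff, so only the import is shown):

    # Run after: python setup.py build_ext --inplace
    from pycorrfit.readfiles.read_pt3_scripts import fib4
    print(fib4.__file__)  # should point at the compiled extension, not the .pyx
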
diff --git a/src/openfile.py b/src/openfile.py
deleted file mode 100644
index 2de0000..0000000
--- a/src/openfile.py
+++ /dev/null
@@ -1,838 +0,0 @@
-# -*- coding: utf-8 -*-
-""" PyCorrFit
-
- Module openfile
- This file is used to define operations on how to open some files.
-
- Dimensionless representation:
- unit of time : 1 ms
- unit of inverse time: 10³ /s
- unit of distance : 100 nm
- unit of Diff.coeff : 10 µm²/s
- unit of inverse area: 100 /µm²
- unit of inv. volume : 1000 /µm³
-
- Copyright (C) 2011-2012 Paul Müller
-
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program. If not, see <http://www.gnu.org/licenses/>.
-"""
-
-
-import csv
-from distutils.version import LooseVersion # For version checking
-import numpy as np
-import os
-import shutil
-import tempfile
-import wx
-import yaml
-import zipfile
-
-import doc
-import edclasses
-from tools import info
-
-# These imports are required for loading data
-from readfiles import Filetypes
-from readfiles import BGFiletypes
-
-def ImportParametersYaml(parent, dirname):
- """ Import the parameters from a parameters.yaml file
- from an PyCorrFit session.
- """
- wc = [".pcfs", ".fcsfit-session.zip"]
- wcstring = "PyCorrFit session (*.pcfs)|*{};*{}".format(wc[0], wc[1])
- dlg = wx.FileDialog(parent, "Open session file", dirname, "",
- wcstring, wx.OPEN)
- # user cannot do anything until he clicks "OK"
- if dlg.ShowModal() == wx.ID_OK:
- path = dlg.GetPath() # Workaround since 0.7.5
- (dirname, filename) = os.path.split(path)
- #filename = dlg.GetFilename()
- #dirname = dlg.GetDirectory()
- dlg.Destroy()
- Arc = zipfile.ZipFile(os.path.join(dirname, filename), mode='r')
- # Get the yaml parms dump:
- yamlfile = Arc.open("Parameters.yaml")
- # Parms: Fitting and drawing parameters of correlation curve
- # The *yamlfile* is responsible for the order of the Pages #i.
- Parms = yaml.safe_load(yamlfile)
- yamlfile.close()
- Arc.close()
- return Parms, dirname, filename
- else:
- dirname=dlg.GetDirectory()
- return None, dirname, None
-
-
-def OpenSession(parent, dirname, sessionfile=None):
- """ Load a whole session that has previously been saved
- by PyCorrFit.
- Infodict may contain the following keys:
- "Backgrounds", list: contains the backgrounds
- "Comments", dict: "Session" comment and int keys to Page titles
- "Correlations", dict: page numbers, all correlation curves
- "External Functions", dict: modelids to external model functions
- "External Weights", dict: page numbers, external weights for fitting
- "Parameters", dict: page numbers, all parameters of the pages
- "Preferences", dict: not used yet
- "Traces", dict: page numbers, all traces of the pages
- """
- Infodict = dict()
- wc = [".pcfs", ".fcsfit-session.zip"]
- wcstring = "PyCorrFit session (*.pcfs)|*{};*{}".format(wc[0], wc[1])
- if sessionfile is None:
- dlg = wx.FileDialog(parent, "Open session file", dirname, "",
- wcstring, wx.OPEN)
- # user cannot do anything until he clicks "OK"
- if dlg.ShowModal() == wx.ID_OK:
- path = dlg.GetPath() # Workaround since 0.7.5
- (dirname, filename) = os.path.split(path)
- #filename = dlg.GetFilename()
- #dirname = dlg.GetDirectory()
- dlg.Destroy()
- else:
- # User did not press OK
- # stop this function
- dirname = dlg.GetDirectory()
- dlg.Destroy()
- return None, dirname, None
- else:
- (dirname, filename) = os.path.split(sessionfile)
- path = sessionfile # Workaround since 0.7.5
-        if (filename[-len(wc[0]):] != wc[0] and
-                filename[-len(wc[1]):] != wc[1]):
- # User specified wrong file
- print "Unknown file extension: "+filename
-            # stop this function
-            return None, dirname, None
- Arc = zipfile.ZipFile(path, mode='r')
- try:
- ## Check PyCorrFit version:
- readmefile = Arc.open("Readme.txt")
- # e.g. "This file was created using PyCorrFit version 0.7.6"
- identifier = readmefile.readline()
- arcv = LooseVersion(identifier[46:].strip())
- thisv = LooseVersion(parent.version.strip())
- if arcv > thisv:
- errstring = "Your version of Pycorrfit ("+str(thisv)+")"+\
- " is too old to open this session ("+\
- str(arcv).strip()+").\n"+\
- "Please download the lates version of "+\
- " PyCorrFit from \n"+doc.HomePage+".\n"+\
- "Continue opening this session?"
- dlg = edclasses.MyOKAbortDialog(parent, errstring, "Warning")
- returns = dlg.ShowModal()
- if returns == wx.ID_OK:
- dlg.Destroy()
- else:
- dlg.Destroy()
- return None, dirname, None
- except:
- pass
- # Get the yaml parms dump:
- yamlfile = Arc.open("Parameters.yaml")
- # Parameters: Fitting and drawing parameters of correlation curve
- # The *yamlfile* is responsible for the order of the Pages #i.
- Infodict["Parameters"] = yaml.safe_load(yamlfile)
- yamlfile.close()
- # Supplementary data (errors of fit)
- supname = "Supplements.yaml"
- try:
- Arc.getinfo(supname)
- except:
- pass
- else:
- supfile = Arc.open(supname)
- supdata = yaml.safe_load(supfile)
- Infodict["Supplements"] = dict()
- for idp in supdata:
- Infodict["Supplements"][idp[0]] = dict()
- Infodict["Supplements"][idp[0]]["FitErr"] = idp[1]
- if len(idp) > 2:
- # As of version 0.7.4 we save chi2 and shared pages -global fit
- Infodict["Supplements"][idp[0]]["Chi sq"] = idp[2]
- Infodict["Supplements"][idp[0]]["Global Share"] = idp[3]
- ## Preferences: Reserved for a future version of PyCorrFit :)
- prefname = "Preferences.yaml"
- try:
- Arc.getinfo(prefname)
- except KeyError:
- pass
- else:
- yamlpref = Arc.open(prefname)
- Infodict["Preferences"] = yaml.safe_load(yamlpref)
- yamlpref.close()
- # Get external functions
- Infodict["External Functions"] = dict()
- key = 7001
- while key <= 7999:
- # (There should not be more than 1000 functions)
- funcfilename = "model_"+str(key)+".txt"
- try:
- Arc.getinfo(funcfilename)
- except KeyError:
- # No more functions to import
- key = 8000
- else:
- funcfile = Arc.open(funcfilename)
- Infodict["External Functions"][key] = funcfile.read()
- funcfile.close()
- key=key+1
- # Get the correlation arrays
- Infodict["Correlations"] = dict()
- for i in np.arange(len(Infodict["Parameters"])):
- # The *number* is used to identify the correct file
- number = str(Infodict["Parameters"][i][0]).strip().strip(":").strip("#")
- pageid = int(number)
- expfilename = "data"+number+".csv"
- expfile = Arc.open(expfilename, 'r')
- readdata = csv.reader(expfile, delimiter=',')
- dataexp = list()
- tau = list()
- if str(readdata.next()[0]) == "# tau only":
- for row in readdata:
- # Exclude commentaries
- if (str(row[0])[0:1] != '#'):
- tau.append(float(row[0]))
- tau = np.array(tau)
- dataexp = None
- else:
- for row in readdata:
- # Exclude commentaries
- if (str(row[0])[0:1] != '#'):
- dataexp.append((float(row[0]), float(row[1])))
- dataexp = np.array(dataexp)
- tau = dataexp[:,0]
- Infodict["Correlations"][pageid] = [tau, dataexp]
- del readdata
- expfile.close()
- # Get the Traces
- Infodict["Traces"] = dict()
- for i in np.arange(len(Infodict["Parameters"])):
- # The *number* is used to identify the correct file
- number = str(Infodict["Parameters"][i][0]).strip().strip(":").strip("#")
- pageid = int(number)
- # Find out, if we have a cross correlation data type
- IsCross = False
- try:
- IsCross = Infodict["Parameters"][i][7]
- except IndexError:
- # No Cross correlation
- pass
- if IsCross is False:
- tracefilenames = ["trace"+number+".csv"]
- else:
- # Cross correlation uses two traces
- tracefilenames = ["trace"+number+"A.csv",
- "trace"+number+"B.csv" ]
- thistrace = list()
- for tracefilename in tracefilenames:
- try:
- Arc.getinfo(tracefilename)
- except KeyError:
- pass
- else:
- tracefile = Arc.open(tracefilename, 'r')
- traceread = csv.reader(tracefile, delimiter=',')
- singletrace = list()
- for row in traceread:
- # Exclude commentaries
- if (str(row[0])[0:1] != '#'):
- singletrace.append((float(row[0]), float(row[1])))
- singletrace = np.array(singletrace)
- thistrace.append(singletrace)
- del traceread
- del singletrace
- tracefile.close()
- if len(thistrace) != 0:
- Infodict["Traces"][pageid] = thistrace
- else:
- Infodict["Traces"][pageid] = None
- # Get the comments, if they exist
- commentfilename = "comments.txt"
- try:
- # Raises KeyError, if file is not present:
- Arc.getinfo(commentfilename)
- except KeyError:
- pass
- else:
- # Open the file
- commentfile = Arc.open(commentfilename, 'r')
- Infodict["Comments"] = dict()
- for i in np.arange(len(Infodict["Parameters"])):
- number = str(Infodict["Parameters"][i][0]).strip().strip(":").strip("#")
- pageid = int(number)
- # Strip line ending characters for all the Pages.
- Infodict["Comments"][pageid] = commentfile.readline().strip()
- # Now Add the Session Comment (the rest of the file).
- ComList = commentfile.readlines()
- Infodict["Comments"]["Session"] = ''
- for line in ComList:
- Infodict["Comments"]["Session"] += line
- commentfile.close()
- # Get the Backgroundtraces and data if they exist
- bgfilename = "backgrounds.csv"
- try:
- # Raises KeyError, if file is not present:
- Arc.getinfo(bgfilename)
- except KeyError:
- pass
- else:
- # Open the file
- Infodict["Backgrounds"] = list()
- bgfile = Arc.open(bgfilename, 'r')
- bgread = csv.reader(bgfile, delimiter='\t')
- i = 0
- for bgrow in bgread:
- bgtracefilename = "bg_trace"+str(i)+".csv"
- bgtracefile = Arc.open(bgtracefilename, 'r')
- bgtraceread = csv.reader(bgtracefile, delimiter=',')
- bgtrace = list()
- for row in bgtraceread:
- # Exclude commentaries
- if (str(row[0])[0:1] != '#'):
- bgtrace.append((np.float(row[0]), np.float(row[1])))
- bgtrace = np.array(bgtrace)
- Infodict["Backgrounds"].append([np.float(bgrow[0]), str(bgrow[1]), bgtrace])
- i = i + 1
- bgfile.close()
- # Get external weights if they exist
- WeightsFilename = "externalweights.txt"
- try:
- # Raises KeyError, if file is not present:
- Arc.getinfo(WeightsFilename)
- except:
- pass
- else:
- Wfile = Arc.open(WeightsFilename, 'r')
- Wread = csv.reader(Wfile, delimiter='\t')
- Weightsdict = dict()
- for wrow in Wread:
- Pkey = wrow[0] # Page of weights
- pageid = int(Pkey)
- # Do not overwrite anything
- try:
- Weightsdict[pageid]
- except:
- Weightsdict[pageid] = dict()
- Nkey = wrow[1] # Name of weights
- Wdatafilename = "externalweights_data"+Pkey+"_"+Nkey+".csv"
- Wdatafile = Arc.open(Wdatafilename, 'r')
- Wdatareader = csv.reader(Wdatafile)
- Wdata = list()
- for row in Wdatareader:
- # Exclude commentaries
- if (str(row[0])[0:1] != '#'):
- Wdata.append(np.float(row[0]))
- Weightsdict[pageid][Nkey] = np.array(Wdata)
- Infodict["External Weights"] = Weightsdict
- Arc.close()
- return Infodict, dirname, filename
-
-
-def SaveSession(parent, dirname, Infodict):
- """ Write whole Session into a zip file.
- Infodict may contain the following keys:
- "Backgrounds", list: contains the backgrounds
- "Comments", dict: "Session" comment and int keys to Page titles
- "Correlations", dict: page numbers, all correlation curves
- "External Functions, dict": modelids to external model functions
- "External Weights", dict: page numbers, external weights for fitting
- "Parameters", dict: page numbers, all parameters of the pages
- "Preferences", dict: not used yet
- "Traces", dict: page numbers, all traces of the pages
- We will also write a Readme.txt
- """
- dlg = wx.FileDialog(parent, "Save session file", dirname, "",
- "PyCorrFit session (*.pcfs)|*.pcfs",
- wx.SAVE|wx.FD_OVERWRITE_PROMPT)
- if dlg.ShowModal() == wx.ID_OK:
- path = dlg.GetPath() # Workaround since 0.7.5
- (dirname, filename) = os.path.split(path)
- #filename = dlg.GetFilename()
- #dirname = dlg.GetDirectory()
- # Sometimes you have multiple endings...
- if filename.endswith(".pcfs") is not True:
- filename = filename+".pcfs"
- dlg.Destroy()
- # Change working directory
- returnWD = os.getcwd()
- tempdir = tempfile.mkdtemp()
- os.chdir(tempdir)
- # Create zip file
- Arc = zipfile.ZipFile(filename, mode='w')
- # Only do the Yaml thing for safe operations.
- # Make the yaml dump
- parmsfilename = "Parameters.yaml"
- # Parameters have to be floats in lists
- # in order for yaml.safe_load to work.
- Parms = Infodict["Parameters"]
- ParmsKeys = Parms.keys()
- ParmsKeys.sort()
- Parmlist = list()
- for idparm in ParmsKeys:
-            # Make sure we do not accidentally save arrays.
- # This would not work correctly with yaml.
- Parms[idparm][2] = np.array(Parms[idparm][2],dtype="float").tolist()
- Parms[idparm][3] = np.array(Parms[idparm][3],dtype="bool").tolist()
- # Range of fitting parameters
- Parms[idparm][9] = np.array(Parms[idparm][9],dtype="float").tolist()
- Parmlist.append(Parms[idparm])
- yaml.dump(Parmlist, open(parmsfilename, "wb"))
- Arc.write(parmsfilename)
- os.remove(os.path.join(tempdir, parmsfilename))
- # Supplementary data (errors of fit)
- errsfilename = "Supplements.yaml"
- Sups = Infodict["Supplements"]
- SupKeys = Sups.keys()
- SupKeys.sort()
- Suplist = list()
- for idsup in SupKeys:
- error = Sups[idsup]["FitErr"]
- chi2 = Sups[idsup]["Chi sq"]
- globalshare = Sups[idsup]["Global Share"]
- Suplist.append([idsup, error, chi2, globalshare])
- yaml.dump(Suplist, open(errsfilename, "wb"))
- Arc.write(errsfilename)
- os.remove(os.path.join(tempdir, errsfilename))
- # Save external functions
- for key in Infodict["External Functions"].keys():
- funcfilename = "model_"+str(key)+".txt"
- funcfile = open(funcfilename, 'wb')
- funcfile.write(Infodict["External Functions"][key])
- funcfile.close()
- Arc.write(funcfilename)
- os.remove(os.path.join(tempdir, funcfilename))
- # Save (dataexp and tau)s into separate csv files.
- for pageid in Infodict["Correlations"].keys():
- # Since *Array* and *Parms* are in the same order (the page order),
- # we will identify the filename by the Page title number.
- number = str(pageid)
- expfilename = "data"+number+".csv"
- expfile = open(expfilename, 'wb')
- tau = Infodict["Correlations"][pageid][0]
- exp = Infodict["Correlations"][pageid][1]
- dataWriter = csv.writer(expfile, delimiter=',')
- if exp is not None:
- # Names of Columns
- dataWriter.writerow(['# tau', 'experimental data'])
- # Actual Data
-            # Do not use len(tau) instead of len(exp[:,0])!
-            # Otherwise the experimental data will not be saved entirely
-            # if it has been cropped, because tau might be smaller than
-            # exp[:,0] --> tau = exp[startcrop:endcrop,0]
- for j in np.arange(len(exp[:,0])):
- dataWriter.writerow(["%.20e" % exp[j,0],
- "%.20e" % exp[j,1]])
- else:
- # Only write tau
- dataWriter.writerow(['# tau'+' only'])
- for j in np.arange(len(tau)):
- dataWriter.writerow(["%.20e" % tau[j]])
- expfile.close()
- # Add to archive
- Arc.write(expfilename)
- os.remove(os.path.join(tempdir, expfilename))
- # Save traces into separate csv files.
- for pageid in Infodict["Traces"].keys():
- number = str(pageid)
- # Since *Trace* and *Parms* are in the same order, which is the
- # Page order, we will identify the filename by the Page title
- # number.
- if Infodict["Traces"][pageid] is not None:
- if Parms[pageid][7] is True:
- # We have cross correlation: save two traces
- ## A
- tracefilenamea = "trace"+number+"A.csv"
- tracefile = open(tracefilenamea, 'wb')
- traceWriter = csv.writer(tracefile, delimiter=',')
- time = Infodict["Traces"][pageid][0][:,0]
- rate = Infodict["Traces"][pageid][0][:,1]
- # Names of Columns
- traceWriter.writerow(['# time', 'count rate'])
- # Actual Data
- for j in np.arange(len(time)):
- traceWriter.writerow(["%.20e" % time[j],
- "%.20e" % rate[j]])
- tracefile.close()
- # Add to archive
- Arc.write(tracefilenamea)
- os.remove(os.path.join(tempdir, tracefilenamea))
- ## B
- tracefilenameb = "trace"+number+"B.csv"
- tracefile = open(tracefilenameb, 'wb')
- traceWriter = csv.writer(tracefile, delimiter=',')
- time = Infodict["Traces"][pageid][1][:,0]
- rate = Infodict["Traces"][pageid][1][:,1]
- # Names of Columns
- traceWriter.writerow(['# time', 'count rate'])
- # Actual Data
- for j in np.arange(len(time)):
- traceWriter.writerow(["%.20e" % time[j],
- "%.20e" % rate[j]])
- tracefile.close()
- # Add to archive
- Arc.write(tracefilenameb)
- os.remove(os.path.join(tempdir, tracefilenameb))
- else:
- # Save one single trace
- tracefilename = "trace"+number+".csv"
- tracefile = open(tracefilename, 'wb')
- traceWriter = csv.writer(tracefile, delimiter=',')
- time = Infodict["Traces"][pageid][:,0]
- rate = Infodict["Traces"][pageid][:,1]
- # Names of Columns
- traceWriter.writerow(['# time', 'count rate'])
- # Actual Data
- for j in np.arange(len(time)):
- traceWriter.writerow(["%.20e" % time[j],
- "%.20e" % rate[j]])
- tracefile.close()
- # Add to archive
- Arc.write(tracefilename)
- os.remove(os.path.join(tempdir, tracefilename))
- # Save comments into txt file
- commentfilename = "comments.txt"
- commentfile = open(commentfilename, 'wb')
- # Comments[-1] is comment on whole Session
- Ckeys = Infodict["Comments"].keys()
- Ckeys.sort()
- for key in Ckeys:
- if key != "Session":
- commentfile.write(Infodict["Comments"][key]+"\r\n")
- commentfile.write(Infodict["Comments"]["Session"])
- commentfile.close()
- Arc.write(commentfilename)
- os.remove(os.path.join(tempdir, commentfilename))
- ## Save Background information:
- Background = Infodict["Backgrounds"]
- if len(Background) > 0:
- # We do not use a comma separated, but a tab separated file,
- # because a comma might be in the name of a bg.
- bgfilename = "backgrounds.csv"
- bgfile = open(bgfilename, 'wb')
- bgwriter = csv.writer(bgfile, delimiter='\t')
- for i in np.arange(len(Background)):
- bgwriter.writerow([str(Background[i][0]), Background[i][1]])
- # Traces
- bgtracefilename = "bg_trace"+str(i)+".csv"
- bgtracefile = open(bgtracefilename, 'wb')
- bgtraceWriter = csv.writer(bgtracefile, delimiter=',')
- bgtraceWriter.writerow(['# time', 'count rate'])
- # Actual Data
- time = Background[i][2][:,0]
- rate = Background[i][2][:,1]
- for j in np.arange(len(time)):
- bgtraceWriter.writerow(["%.20e" % time[j],
- "%.20e" % rate[j]])
- bgtracefile.close()
- # Add to archive
- Arc.write(bgtracefilename)
- os.remove(os.path.join(tempdir, bgtracefilename))
- bgfile.close()
- Arc.write(bgfilename)
- os.remove(os.path.join(tempdir, bgfilename))
- ## Save External Weights information
- WeightedPageID = Infodict["External Weights"].keys()
- WeightedPageID.sort()
- WeightFilename = "externalweights.txt"
- WeightFile = open(WeightFilename, 'wb')
- WeightWriter = csv.writer(WeightFile, delimiter='\t')
- for pageid in WeightedPageID:
- number = str(pageid)
- NestWeights = Infodict["External Weights"][pageid].keys()
- # The order of the types does not matter, since they are
- # sorted in the frontend and upon import. We sort them here, anyhow.
- NestWeights.sort()
- for Nkey in NestWeights:
- WeightWriter.writerow([number, str(Nkey).strip()])
- # Add data to a File
- WeightDataFilename = "externalweights_data"+number+\
- "_"+str(Nkey).strip()+".csv"
- WeightDataFile = open(WeightDataFilename, 'wb')
- WeightDataWriter = csv.writer(WeightDataFile)
- wdata = Infodict["External Weights"][pageid][Nkey]
- for jw in np.arange(len(wdata)):
- WeightDataWriter.writerow([str(wdata[jw])])
- WeightDataFile.close()
- Arc.write(WeightDataFilename)
- os.remove(os.path.join(tempdir, WeightDataFilename))
- WeightFile.close()
- Arc.write(WeightFilename)
- os.remove(os.path.join(tempdir, WeightFilename))
- ## Readme
- rmfilename = "Readme.txt"
- rmfile = open(rmfilename, 'wb')
- rmfile.write(ReadmeSession)
- rmfile.close()
- Arc.write(rmfilename)
- os.remove(os.path.join(tempdir, rmfilename))
- # Close the archive
- Arc.close()
- # Move archive to destination directory
- shutil.move(os.path.join(tempdir, filename),
- os.path.join(dirname, filename) )
- # Go to destination directory
- os.chdir(returnWD)
- os.rmdir(tempdir)
- return dirname, filename
- else:
- dirname = dlg.GetDirectory()
- dlg.Destroy()
- return dirname, None
-
-
-
-
-def saveCSV(parent, dirname, Page):
- """ Write relevant data into a comma separated list.
-
- Parameters:
- *parent* the parent window
- *dirname* directory to set on saving
- *Page* Page containing all necessary variables
- """
- filename = Page.tabtitle.GetValue().strip()+Page.counter[:2]
- dlg = wx.FileDialog(parent, "Save curve", dirname, filename,
- "Correlation with trace (*.csv)|*.csv;*.CSV"+\
- "|Correlation only (*.csv)|*.csv;*.CSV",
- wx.SAVE|wx.FD_OVERWRITE_PROMPT)
- # user cannot do anything until he clicks "OK"
- if dlg.ShowModal() == wx.ID_OK:
- path = dlg.GetPath() # Workaround since 0.7.5
- (dirname, filename) = os.path.split(path)
- #filename = dlg.GetFilename()
- #dirname = dlg.GetDirectory()
- if filename.lower().endswith(".csv") is not True:
- filename = filename+".csv"
- openedfile = open(os.path.join(dirname, filename), 'wb')
- ## First, some doc text
- openedfile.write(ReadmeCSV.replace('\n', '\r\n'))
- # The infos
- InfoMan = info.InfoClass(CurPage=Page)
- PageInfo = InfoMan.GetCurFancyInfo()
- for line in PageInfo.splitlines():
- openedfile.write("# "+line+"\r\n")
- openedfile.write("#\r\n#\r\n")
- # Get all the data we need from the Page
- # Modeled data
- # Since 0.7.8 the user may normalize the curves. The normalization
- # factor is set in *Page.normfactor*.
- corr = Page.datacorr[:,1]*Page.normfactor
- if Page.dataexp is not None:
- # Experimental data
- tau = Page.dataexp[:,0]
- exp = Page.dataexp[:,1]*Page.normfactor
- res = Page.resid[:,1]*Page.normfactor
- # Plotting! Because we only export plotted area.
- weight = Page.weights_used_for_plotting
- if weight is None:
- pass
- elif len(weight) != len(exp):
- text = "Weights have not been calculated for the "+\
- "area you want to export. Pressing 'Fit' "+\
- "again should solve this issue. Data will "+\
- "not be saved."
- wx.MessageDialog(parent, text, "Error",
- style=wx.ICON_ERROR|wx.OK|wx.STAY_ON_TOP)
- return dirname, None
- else:
- tau = Page.datacorr[:,0]
- exp = None
- res = None
- # Include weights in data saving:
- # PyCorrFit thinks in [ms], but we will save as [s]
- timefactor = 0.001
- tau = timefactor * tau
- ## Now we want to write all that data into the file
- # This is for csv writing:
- ## Correlation curve
- dataWriter = csv.writer(openedfile, delimiter='\t')
- if exp is not None:
- header = '# Channel (tau [s])'+"\t"+ \
- 'Experimental correlation'+"\t"+ \
- 'Fitted correlation'+ "\t"+ \
- 'Residuals'+"\r\n"
- data = [tau, exp, corr, res]
- if Page.weighted_fit_was_performed is True \
- and weight is not None:
- header = header.strip() + "\t"+'Weights (fit)'+"\r\n"
- data.append(weight)
- else:
- header = '# Channel (tau [s])'+"\t"+ \
- 'Correlation function'+"\r\n"
- data = [tau, corr]
- # Write header
- openedfile.write(header)
- # Write data
- for i in np.arange(len(data[0])):
- # row-wise, data may have more than two elements per row
- datarow = list()
- for j in np.arange(len(data)):
- rowcoli = str("%.10e") % data[j][i]
- datarow.append(rowcoli)
- dataWriter.writerow(datarow)
- ## Trace
- # Only save the trace if user wants us to:
- if dlg.GetFilterIndex() == 0:
- # We will also save the trace in [s]
- # Intensity trace in kHz may stay the same
- if Page.trace is not None:
- # Mark beginning of Trace
- openedfile.write('#\r\n#\r\n# BEGIN TRACE\r\n#\r\n')
- # Columns
- time = Page.trace[:,0]*timefactor
- intensity = Page.trace[:,1]
- # Write
- openedfile.write('# Time [s]'+"\t"
- 'Intensity trace [kHz]'+" \r\n")
- for i in np.arange(len(time)):
- dataWriter.writerow([str("%.10e") % time[i],
- str("%.10e") % intensity[i]])
- elif Page.tracecc is not None:
- # We have some cross-correlation here:
- # Mark beginning of Trace A
- openedfile.write('#\r\n#\r\n# BEGIN TRACE\r\n#\r\n')
- # Columns
- time = Page.tracecc[0][:,0]*timefactor
- intensity = Page.tracecc[0][:,1]
- # Write
- openedfile.write('# Time [s]'+"\t"
- 'Intensity trace [kHz]'+" \r\n")
- for i in np.arange(len(time)):
- dataWriter.writerow([str("%.10e") % time[i],
- str("%.10e") % intensity[i]])
- # Mark beginning of Trace B
- openedfile.write('#\r\n#\r\n# BEGIN SECOND TRACE\r\n#\r\n')
- # Columns
- time = Page.tracecc[1][:,0]*timefactor
- intensity = Page.tracecc[1][:,1]
- # Write
- openedfile.write('# Time [s]'+"\t"
- 'Intensity trace [kHz]'+" \r\n")
- for i in np.arange(len(time)):
- dataWriter.writerow([str("%.10e") % time[i],
- str("%.10e") % intensity[i]])
- dlg.Destroy()
- openedfile.close()
- return dirname, filename
- else:
- dirname = dlg.GetDirectory()
- dlg.Destroy()
- return dirname, None
-
-
-ReadmeCSV = """# This file was created using PyCorrFit version {}.
-#
-# Lines starting with a '#' are treated as comments.
-# The data is stored as CSV below this comment section.
-# Data usually consists of lag times (channels) and
-# the corresponding correlation function - experimental
-# and fitted values plus resulting residuals.
-# If this file is opened by PyCorrFit, only the first two
-# columns will be imported as experimental data.
-#
-""".format(doc.__version__)
-
-
-ReadmeSession = """This file was created using PyCorrFit version {}.
-The .zip archive you are looking at is a stored session of PyCorrFit.
-If you are interested in how the data is stored, you will find
-out here. Most important are the dimensions of units:
-Dimensionless representation:
- unit of time : 1 ms
- unit of inverse time: 10³ /s
- unit of distance : 100 nm
- unit of Diff.coeff : 10 µm²/s
- unit of inverse area: 100 /µm²
- unit of inv. volume : 1000 /µm³
-From there, the dimension of any parameter may be
-calculated.
-
-There are a number of files within this archive,
-depending on what was done during the session.
-
-backgrounds.csv
- - Contains the list of backgrounds used and
- - Averaged intensities in [kHz]
-
-bg_trace*.csv (where * is an integer)
- - The trace of the background corresponding
- to the line number in backgrounds.csv
- - Time in [ms], Trace in [kHz]
-
-comments.txt
- - Contains page titles and session comment
- - First n lines are titles, rest is session
- comment (where n is total number of pages)
-
-data*.csv (where * is (Number of page))
- - Contains lag times [ms]
- - Contains experimental data, if available
-
-externalweights.txt
- - Contains names (types) of external weights other than from
- Model function or spline fit
- - Linewise: 1st element is page number, 2nd is name
- - According to this data, the following files are present in the archive
-
-externalweights_data_*PageID*_*Type*.csv
- - Contains weighting information of Page *PageID* of type *Type*
-
-model_*ModelID*.txt
- - An external (user-defined) model file with internal ID *ModelID*
-
-Parameters.yaml
- - Contains all Parameters for each page
- Block format:
- - - '#(Number of page): '
- - (Internal model ID)
- - (List of parameters)
- - (List of checked parameters (for fitting))
- - [(Min channel selected), (Max channel selected)]
- - [(Weighted fit method (0=None, 1=Spline, 2=Model function)),
- (No. of bins from left and right),
- (No. of knots (of e.g. spline)),
-        (Type of fitting algorithm (e.g. "Lev-Mar", "Nelder-Mead"))]
- - [B1,B2] Background to use (line in backgrounds.csv)
- B2 is always *null* for autocorrelation curves
- - Data type is Cross-correlation?
- - Parameter id (int) used for normalization in plotting.
- This number first enumerates the model parameters and then
- the supplemental parameters (e.g. "n1").
- - - [min, max] fitting parameter range of 1st parameter
- - [min, max] fitting parameter range of 2nd parameter
- - etc.
- - Order in Parameters.yaml defines order of pages in a session
- - Order in Parameters.yaml defines order in comments.txt
-
-Readme.txt (this file)
-
-Supplements.yaml
- - Contains errors of fitting
- Format:
- -- Page number
- -- [parameter id, error value]
- - [parameter id, error value]
- - Chi squared
- - [pages that share parameters] (from global fitting)
-
-trace*.csv (where * is (Number of page); appendix "A" or "B" points to
- the respective channel (only in cross-correlation mode))
- - Contains times [ms]
- - Contains countrates [kHz]
-""".format(doc.__version__)
--
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/pycorrfit.git