[med-svn] [python-mne] 01/04: Imported Upstream version 0.11+dfsg

Yaroslav Halchenko debian at onerussian.com
Tue Jan 5 02:16:12 UTC 2016


This is an automated email from the git hooks/post-receive script.

yoh pushed a commit to branch master
in repository python-mne.

commit 88d4b22c84015b6aba58543d8f34b570831a2533
Author: Yaroslav Halchenko <debian at onerussian.com>
Date:   Mon Jan 4 20:51:39 2016 -0500

    Imported Upstream version 0.11+dfsg
---
 .travis.yml                                        |    3 +-
 Makefile                                           |    4 +
 README.rst                                         |   10 +-
 doc/advanced_setup.rst                             |   43 +-
 doc/conf.py                                        |   11 +-
 doc/contributing.rst                               |   22 +-
 doc/getting_started.rst                            |    6 +-
 doc/manual/datasets_index.rst                      |  125 ++
 doc/manual/decoding.rst                            |  168 ++
 doc/manual/index.rst                               |   32 +-
 doc/manual/io.rst                                  |   31 +-
 doc/manual/memory.rst                              |   45 +
 doc/manual/pitfalls.rst                            |   27 +
 doc/manual/preprocessing/overview.rst              |    8 +-
 doc/manual/preprocessing/ssp.rst                   |  144 +-
 doc/manual/{datasets.rst => sample_dataset.rst}    |    0
 doc/python_reference.rst                           |   21 +-
 doc/this_project.inc                               |    2 +-
 doc/tutorials/mne-report.png                       |  Bin 0 -> 172544 bytes
 doc/tutorials/report.rst                           |    5 +
 doc/whats_new.rst                                  |  113 +-
 examples/datasets/plot_brainstorm_data.py          |    3 +-
 .../decoding/plot_decoding_time_generalization.py  |    2 +-
 examples/preprocessing/plot_maxwell_filter.py      |   45 +
 examples/visualization/plot_evoked_erf_erp.py      |    6 +-
 mne/__init__.py                                    |   16 +-
 mne/bem.py                                         |  103 +-
 mne/channels/channels.py                           |  108 +-
 mne/channels/interpolation.py                      |    8 +-
 mne/channels/layout.py                             |   18 +-
 mne/channels/montage.py                            |  116 +-
 mne/channels/tests/test_channels.py                |    6 +-
 mne/channels/tests/test_layout.py                  |    7 +-
 mne/channels/tests/test_montage.py                 |   17 +-
 mne/chpi.py                                        |   55 +-
 mne/commands/mne_flash_bem_model.py                |  145 --
 mne/commands/mne_show_fiff.py                      |   27 +
 mne/commands/tests/test_commands.py                |   17 +-
 mne/connectivity/tests/test_utils.py               |    3 +-
 mne/coreg.py                                       |    4 +-
 mne/cov.py                                         |   27 +-
 mne/data/coil_def_Elekta.dat                       |   10 +-
 mne/datasets/utils.py                              |   19 +-
 mne/decoding/__init__.py                           |    2 +-
 mne/decoding/base.py                               |   20 +-
 mne/decoding/csp.py                                |  132 +-
 mne/decoding/mixin.py                              |   37 +
 mne/decoding/tests/test_csp.py                     |   23 +
 mne/decoding/tests/test_time_gen.py                |    8 +-
 mne/decoding/time_gen.py                           |  153 +-
 mne/decoding/transformer.py                        |    8 +-
 mne/dipole.py                                      |    8 +-
 mne/epochs.py                                      |  428 +++--
 mne/evoked.py                                      |   56 +-
 mne/forward/_compute_forward.py                    |   59 +-
 mne/forward/_field_interpolation.py                |   99 +-
 mne/forward/_lead_dots.py                          |  129 +-
 mne/forward/_make_forward.py                       |  178 ++-
 mne/forward/forward.py                             |   43 +-
 mne/forward/tests/test_field_interpolation.py      |   14 +-
 mne/forward/tests/test_make_forward.py             |    2 +-
 mne/gui/__init__.py                                |    7 +-
 mne/gui/_coreg_gui.py                              |    2 +-
 mne/gui/_fiducials_gui.py                          |    2 +-
 mne/gui/_file_traits.py                            |    2 +-
 mne/gui/_help.py                                   |   16 +
 mne/gui/_kit2fiff_gui.py                           |  151 +-
 mne/gui/_marker_gui.py                             |    2 +-
 mne/gui/_viewer.py                                 |    2 +-
 mne/gui/help/kit2fiff.json                         |    7 +
 mne/gui/tests/test_kit2fiff_gui.py                 |   22 +-
 mne/io/__init__.py                                 |   55 +-
 mne/io/array/array.py                              |    4 +-
 mne/io/array/tests/test_array.py                   |   20 +-
 mne/io/base.py                                     |  183 ++-
 mne/io/brainvision/brainvision.py                  |  218 +--
 mne/io/brainvision/tests/data/test.vmrk            |    1 +
 mne/io/brainvision/tests/data/testv2.vhdr          |  107 ++
 .../tests/data/{test.vmrk => testv2.vmrk}          |   12 +-
 mne/io/brainvision/tests/test_brainvision.py       |   78 +-
 mne/io/bti/bti.py                                  |  165 +-
 mne/io/bti/tests/test_bti.py                       |   66 +-
 mne/io/constants.py                                |   32 +-
 mne/io/ctf.py                                      |  256 ---
 mne/io/ctf/__init__.py                             |    7 +
 mne/io/ctf/constants.py                            |   38 +
 mne/io/ctf/ctf.py                                  |  218 +++
 mne/io/ctf/eeg.py                                  |   51 +
 mne/io/ctf/hc.py                                   |   85 +
 mne/io/ctf/info.py                                 |  401 +++++
 mne/io/ctf/res4.py                                 |  212 +++
 mne/{ => io/ctf}/tests/__init__.py                 |    0
 mne/io/ctf/tests/test_ctf.py                       |  171 ++
 mne/io/ctf/trans.py                                |  170 ++
 mne/io/ctf_comp.py                                 |  159 ++
 mne/io/edf/edf.py                                  |  231 +--
 mne/io/edf/tests/test_edf.py                       |  139 +-
 mne/io/eeglab/__init__.py                          |    5 +
 mne/io/eeglab/eeglab.py                            |  447 ++++++
 mne/{ => io/eeglab}/tests/__init__.py              |    0
 mne/io/eeglab/tests/test_eeglab.py                 |   85 +
 mne/io/egi/egi.py                                  |  227 ++-
 mne/io/egi/tests/data/test_egi.txt                 |  257 +++
 mne/io/egi/tests/test_egi.py                       |   58 +-
 mne/io/fiff/raw.py                                 |   23 +-
 .../fiff/tests/{test_raw.py => test_raw_fiff.py}   |   14 +-
 mne/io/kit/constants.py                            |   30 +-
 mne/io/kit/kit.py                                  |  108 +-
 mne/io/kit/tests/test_kit.py                       |  113 +-
 mne/io/meas_info.py                                |   77 +-
 mne/io/nicolet/__init__.py                         |    7 +
 mne/io/nicolet/nicolet.py                          |  206 +++
 mne/{ => io/nicolet}/tests/__init__.py             |    0
 mne/io/nicolet/tests/data/test_nicolet_raw.data    |  Bin 0 -> 29696 bytes
 mne/io/nicolet/tests/data/test_nicolet_raw.head    |   11 +
 mne/io/nicolet/tests/test_nicolet.py               |   20 +
 mne/io/open.py                                     |   10 +-
 mne/io/pick.py                                     |   71 +-
 mne/io/proc_history.py                             |   48 +-
 mne/io/proj.py                                     |   30 +-
 mne/io/reference.py                                |    8 +-
 mne/io/tag.py                                      |    2 -
 mne/io/tests/test_apply_function.py                |   18 +-
 mne/io/tests/test_pick.py                          |   77 +-
 mne/io/tests/test_raw.py                           |   82 +-
 mne/io/tests/test_reference.py                     |   31 +-
 mne/io/utils.py                                    |  165 ++
 mne/io/write.py                                    |   28 +-
 mne/minimum_norm/inverse.py                        |    6 +-
 mne/preprocessing/__init__.py                      |    2 +-
 mne/preprocessing/ecg.py                           |   55 +-
 mne/preprocessing/ica.py                           |  100 +-
 mne/preprocessing/maxwell.py                       | 1631 ++++++++++++++++----
 mne/preprocessing/stim.py                          |    4 +-
 mne/preprocessing/tests/test_ecg.py                |   32 +-
 mne/preprocessing/tests/test_ica.py                |   44 +-
 mne/preprocessing/tests/test_maxwell.py            |  734 +++++++--
 mne/preprocessing/tests/test_ssp.py                |   22 +
 mne/proj.py                                        |   12 +-
 mne/realtime/fieldtrip_client.py                   |  693 +++++----
 mne/simulation/__init__.py                         |    6 +-
 mne/simulation/evoked.py                           |   70 +-
 mne/simulation/raw.py                              |    4 +-
 mne/simulation/source.py                           |  111 +-
 mne/source_estimate.py                             |   47 +-
 mne/source_space.py                                |   55 +-
 mne/stats/__init__.py                              |    3 +-
 mne/stats/parametric.py                            |   22 -
 mne/stats/regression.py                            |   22 +-
 mne/stats/tests/test_cluster_level.py              |   27 +-
 mne/stats/tests/test_parametric.py                 |    7 +-
 mne/stats/tests/test_regression.py                 |   15 +
 mne/tests/__init__.py                              |    1 +
 mne/tests/common.py                                |   74 +
 mne/tests/test_bem.py                              |   41 +-
 mne/tests/test_chpi.py                             |   33 +-
 mne/tests/test_coreg.py                            |    2 +-
 mne/tests/test_dipole.py                           |   10 +-
 mne/tests/test_docstring_parameters.py             |    1 -
 mne/tests/test_epochs.py                           |  223 ++-
 mne/tests/test_evoked.py                           |   24 +-
 mne/tests/test_filter.py                           |   83 +-
 mne/tests/test_fixes.py                            |    6 +-
 mne/tests/test_label.py                            |    2 +-
 mne/tests/test_line_endings.py                     |   68 +
 mne/tests/test_proj.py                             |    3 +
 mne/tests/test_source_estimate.py                  |   60 +-
 mne/tests/test_source_space.py                     |    9 +-
 mne/tests/test_surface.py                          |   17 +-
 mne/tests/test_transforms.py                       |   49 +-
 mne/tests/test_utils.py                            |   12 +-
 mne/time_frequency/psd.py                          |    2 +-
 mne/time_frequency/tests/test_tfr.py               |   31 +-
 mne/time_frequency/tfr.py                          |   72 +-
 mne/transforms.py                                  |  153 +-
 mne/utils.py                                       |  321 ++--
 mne/viz/_3d.py                                     |   13 +-
 mne/viz/__init__.py                                |    9 +-
 mne/viz/circle.py                                  |    4 +-
 mne/viz/decoding.py                                |    8 +-
 mne/viz/epochs.py                                  |  178 +--
 mne/viz/evoked.py                                  |  163 +-
 mne/viz/ica.py                                     |  155 +-
 mne/viz/misc.py                                    |   20 +-
 mne/viz/montage.py                                 |    6 +-
 mne/viz/raw.py                                     |  124 +-
 mne/viz/tests/test_3d.py                           |    4 +-
 mne/viz/tests/test_circle.py                       |    2 +-
 mne/viz/tests/test_epochs.py                       |    5 +-
 mne/viz/tests/test_evoked.py                       |    9 +-
 mne/viz/tests/test_ica.py                          |   14 +
 mne/viz/tests/test_raw.py                          |    6 +-
 mne/viz/tests/test_topo.py                         |   10 +-
 mne/viz/tests/test_topomap.py                      |    6 +-
 mne/viz/tests/test_utils.py                        |    7 +-
 mne/viz/topo.py                                    |   86 +-
 mne/viz/topomap.py                                 |   76 +-
 mne/viz/utils.py                                   |   26 +-
 setup.py                                           |    4 +
 tutorials/plot_ica_from_raw.py                     |    4 +-
 tutorials/plot_introduction.py                     |   23 +-
 .../plot_spatio_temporal_cluster_stats_sensor.py   |    4 +-
 202 files changed, 10116 insertions(+), 4402 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index e13e593..2ef2c7b 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -23,7 +23,7 @@ env:
     # Must force libpng version to avoid silly libpng.so.15 error (MPL 1.1 needs it)
     #
     # Conda currently has packaging bug with mayavi/traits/numpy where 1.10 can't be used
-    # but breaks scipy; hopefully eventually the NUMPY=1.9 on 2.7 full can be removed
+    # but breaks sklearn on install; hopefully eventually the NUMPY=1.9 on 2.7 full can be removed
     - PYTHON=2.7 DEPS=full TEST_LOCATION=src NUMPY="=1.9" SCIPY="=0.16"
     - PYTHON=2.7 DEPS=nodata TEST_LOCATION=src MNE_DONTWRITE_HOME=true MNE_FORCE_SERIAL=true MNE_SKIP_NETWORK_TEST=1  # also runs flake8
     - PYTHON=3.5 DEPS=full TEST_LOCATION=install MNE_STIM_CHANNEL=STI101
@@ -116,6 +116,7 @@ install:
         ln -s ${SRC_DIR}/mne/io/kit/tests/data ${MNE_DIR}/io/kit/tests/data;
         ln -s ${SRC_DIR}/mne/io/brainvision/tests/data ${MNE_DIR}/io/brainvision/tests/data;
         ln -s ${SRC_DIR}/mne/io/egi/tests/data ${MNE_DIR}/io/egi/tests/data;
+        ln -s ${SRC_DIR}/mne/io/nicolet/tests/data ${MNE_DIR}/io/nicolet/tests/data;
         ln -s ${SRC_DIR}/mne/preprocessing/tests/data ${MNE_DIR}/preprocessing/tests/data;
         ln -s ${SRC_DIR}/setup.cfg ${MNE_DIR}/../setup.cfg;
         ln -s ${SRC_DIR}/.coveragerc ${MNE_DIR}/../.coveragerc;
diff --git a/Makefile b/Makefile
index c766d51..b71fe32 100755
--- a/Makefile
+++ b/Makefile
@@ -40,6 +40,10 @@ test: in
 	rm -f .coverage
 	$(NOSETESTS) -a '!ultra_slow_test' mne
 
+test-fast: in
+	rm -f .coverage
+	$(NOSETESTS) -a '!slow_test' mne
+
 test-full: in
 	rm -f .coverage
 	$(NOSETESTS) mne
diff --git a/README.rst b/README.rst
index 4579fa9..e584a82 100644
--- a/README.rst
+++ b/README.rst
@@ -15,7 +15,7 @@
 .. |Zenodo| image:: https://zenodo.org/badge/5822/mne-tools/mne-python.svg
 .. _Zenodo: https://zenodo.org/badge/latestdoi/5822/mne-tools/mne-python
 
-`mne-python <http://martinos.org/mne/mne-python.html>`_
+`mne-python <http://mne-tools.github.io/>`_
 =======================================================
 
 This package is designed for sensor- and source-space analysis of M-EEG
@@ -31,10 +31,10 @@ This page only contains bare-bones instructions for installing mne-python.
 
 If you're familiar with MNE and you're looking for information on using
 mne-python specifically, jump right to the `mne-python homepage
-<http://martinos.org/mne/mne-python.html>`_. This website includes a
-`tutorial <http://martinos.org/mne/python_tutorial.html>`_,
-helpful `examples <http://martinos.org/mne/auto_examples/index.html>`_, and
-a handy `function reference <http://martinos.org/mne/python_reference.html>`_,
+<http://mne-tools.github.io/stable/python_reference.html>`_. This website includes
+`tutorials <http://mne-tools.github.io/stable/tutorials.html>`_,
+helpful `examples <http://mne-tools.github.io/stable/auto_examples/index.html>`_, and
+a handy `function reference <http://mne-tools.github.io/stable/python_reference.html>`_,
 among other things.
 
 If you're unfamiliar with MNE, you can visit the
diff --git a/doc/advanced_setup.rst b/doc/advanced_setup.rst
index 903ef69..3cf5093 100644
--- a/doc/advanced_setup.rst
+++ b/doc/advanced_setup.rst
@@ -65,7 +65,7 @@ initialized on startup, you can do:
 
 You can test if MNE CUDA support is working by running the associated test:
 
-    nosetests mne/tests/test_filter.py
+    $ nosetests mne/tests/test_filter.py
 
 If all tests pass with none skipped, then mne-python CUDA support works.
 
@@ -78,13 +78,13 @@ Canopy and the Anaconda distributions ship with tested MKL-compiled
 numpy / scipy versions. Depending on the use case and your system
 this may speed up operations by a factor greater than 10.
 
-pylab
-^^^^^
+matplotlib
+^^^^^^^^^^
 
 For the setups listed above we would strongly recommend using the Qt
 matplotlib backend for fast and correct rendering::
 
-    ipython --pylab qt
+    $ ipython --matplotlib=qt
 
 On Linux, for example, QT is the only matplotlib backend for which 3D rendering
 will work correctly. On Mac OS X for other backends certain matplotlib
@@ -103,6 +103,29 @@ runtime accordingly.
 If you use another Python setup and you encounter some difficulties please
 report them on the MNE mailing list or on github to get assistance.
 
+Installing Mayavi
+^^^^^^^^^^^^^^^^^
+
+Mayavi is only available for Python 2.7. If you have Anaconda installed (recommended), the easiest way to install `mayavi` is to do::
+
+    $ conda install mayavi
+
+On Ubuntu, it is also possible to install using::
+
+    $ easy_install "Mayavi[app]"
+
+If you use this method, be sure to install the dependencies first: `python-vtk` and `python-configobj`::
+
+    $ sudo apt-get install python-vtk python-configobj
+
+Make sure the `TraitsBackendQt`_ has been installed as well. For other methods of installation, please consult
+the `Mayavi documentation`_.
+
+Configuring PySurfer
+^^^^^^^^^^^^^^^^^^^^
+
+Some users may need to configure PySurfer before they can make full use of our visualization
+capabilities. Please refer to the `PySurfer installation page`_ for up to date information.
 
 .. _inside_martinos:
 
@@ -113,15 +136,15 @@ For people within the MGH/MIT/HMS Martinos Center mne is available on the networ
 
 In a terminal do::
 
-    setenv PATH /usr/pubsw/packages/python/anaconda/bin:${PATH}
+    $ setenv PATH /usr/pubsw/packages/python/anaconda/bin:${PATH}
 
 If you use Bash replace the previous instruction with::
 
-    export PATH=/usr/pubsw/packages/python/anaconda/bin:${PATH}
+    $ export PATH=/usr/pubsw/packages/python/anaconda/bin:${PATH}
 
 Then start the python interpreter with:
 
-    ipython
+    $ ipython
 
 Then type::
 
@@ -132,3 +155,9 @@ If you get a new prompt with no error messages, you should be good to go.
 We encourage all Martinos center Python users to subscribe to the Martinos Python mailing list:
 
 https://mail.nmr.mgh.harvard.edu/mailman/listinfo/martinos-python
+
+.. _PySurfer installation page: https://pysurfer.github.io/install.html
+
+.. _TraitsBackendQt: http://pypi.python.org/pypi/TraitsBackendQt
+
+.. _Mayavi documentation: http://docs.enthought.com/mayavi/mayavi/installation.html
diff --git a/doc/conf.py b/doc/conf.py
index 5668f85..112e2ce 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -17,7 +17,12 @@ import os
 import os.path as op
 from datetime import date
 
-import sphinxgallery
+try:
+    import sphinx_gallery as sg
+    sg_extension = 'sphinx_gallery.gen_gallery'
+except ImportError:
+    import sphinxgallery as sg
+    sg_extension = 'sphinxgallery.gen_gallery'
 import sphinx_bootstrap_theme
 
 # If extensions (or modules to document with autodoc) are in another directory,
@@ -41,7 +46,7 @@ extensions = ['sphinx.ext.autodoc',
               'numpy_ext.numpydoc',
             #   'sphinx.ext.intersphinx',
               # 'flow_diagram',
-              'sphinxgallery.gen_gallery']
+              sg_extension]
 
 autosummary_generate = True
 
@@ -168,7 +173,7 @@ html_favicon = "favicon.ico"
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static', '_images', sphinxgallery.glr_path_static()]
+html_static_path = ['_static', '_images', sg.glr_path_static()]
 
 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
 # using the given strftime format.
diff --git a/doc/contributing.rst b/doc/contributing.rst
index d41c999..e4b756b 100644
--- a/doc/contributing.rst
+++ b/doc/contributing.rst
@@ -493,6 +493,8 @@ When you are ready to ask for someone to review your code and consider a merge:
 
 #. For the code to be mergeable, please rebase w.r.t master branch.
 
+#. Once you are ready, prefix ``MRG:`` to the title of the pull request to indicate that it is ready to be merged.
+
 
 If you are uncertain about what would or would not be appropriate to contribute
 to mne-python, don't hesitate to either send a pull request, or open an issue
@@ -783,8 +785,20 @@ As an example, to pull the realtime pull request which has a url
 If you want to fetch a pull request to your own fork, replace
 ``upstream`` with ``origin``. That's it!
 
-Adding example to example gallery
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Skipping a build
+^^^^^^^^^^^^^^^^
+
+Builds can be safely skipped while the pull request is in `WIP` state. The important thing is to ensure that the builds pass when the PR is ready to be merged. To skip a Travis build, add ``[ci skip]`` to the commit message::
+
+  FIX: some changes [ci skip]
+
+This will help prevent clogging up Travis and Appveyor and also save the environment.
+
+Documentation
+-------------
+
+Adding an example to example gallery
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Add the example to the correct subfolder in the ``examples/`` directory and
 prefix the file with ``plot_``. To make sure that the example renders correctly,
@@ -793,7 +807,7 @@ run ``make html`` in the ``doc/`` folder
 Editing \*.rst files
 ^^^^^^^^^^^^^^^^^^^^
 
-These are reStructuredText files. Consult the Sphinx documentation to learn
+These are reStructuredText files. Consult the `Sphinx documentation`_ to learn
 more about editing them.
 
 .. _troubleshooting:
@@ -826,3 +840,5 @@ handler doing an exit()``, try backing up or removing .ICEauthority::
     mv ~/.ICEauthority ~/.ICEauthority.bak
 
 .. include:: links.inc
+
+.. _Sphinx documentation: http://sphinx-doc.org/rest.html
\ No newline at end of file
diff --git a/doc/getting_started.rst b/doc/getting_started.rst
index 9bcbe34..5fedf75 100644
--- a/doc/getting_started.rst
+++ b/doc/getting_started.rst
@@ -276,7 +276,7 @@ Anaconda is free for academic purposes.
 
 To test that everything works properly, open up IPython::
 
-    ipython --pylab qt
+    $ ipython --matplotlib=qt
 
 Now that you have a working Python environment you can install MNE-Python.
 
@@ -286,12 +286,12 @@ mne-python installation
 Most users should start with the "stable" version of mne-python, which can
 be installed this way:
 
-    pip install mne --upgrade
+    $ pip install mne --upgrade
 
 For the newest features (and potentially more bugs), you can instead install
 the development version by:
 
-    pip install -e git+https://github.com/mne-tools/mne-python#egg=mne-dev
+    $ pip install -e git+https://github.com/mne-tools/mne-python#egg=mne-dev
 
 If you plan to contribute to the project, please follow the git instructions: 
 :ref:`contributing`.
diff --git a/doc/manual/datasets_index.rst b/doc/manual/datasets_index.rst
new file mode 100644
index 0000000..be4c59f
--- /dev/null
+++ b/doc/manual/datasets_index.rst
@@ -0,0 +1,125 @@
+.. _datasets:
+
+.. contents:: Contents
+   :local:
+   :depth: 2
+
+Datasets
+########
+
+All the dataset fetchers are available in :mod:`mne.datasets`. To download any of the datasets,
+use the ``data_path`` (fetches the full dataset) or the ``load_data`` (fetches the dataset partially) functions.
+
+Sample
+======
+:ref:`ch_sample_data` was recorded using a 306-channel Neuromag Vectorview system.
+
+In this experiment, checkerboard patterns were presented to the subject
+in the left and right visual fields, interspersed by tones to the
+left or right ear. The interval between the stimuli was 750 ms. Occasionally
+a smiley face was presented at the center of the visual field.
+The subject was asked to press a key with the right index finger
+as soon as possible after the appearance of the face. To fetch this dataset, do::
+
+    from mne.datasets import sample
+    data_path = sample.data_path()  # returns the folder in which the data is locally stored.
+
+Once the ``data_path`` is known, its contents can be examined using :ref:`IO functions <ch_convert>`.
+
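+For example, the raw file of the sample dataset can be inspected as follows (a
+minimal sketch, assuming the standard layout of the downloaded dataset)::
+
+    from mne import io
+    raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
+    raw = io.Raw(raw_fname)
+    print(raw.info)
+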
+Brainstorm
+==========
+Dataset fetchers for three Brainstorm tutorials are available. Users must agree to the
+license terms of these datasets before downloading them. These files are recorded in a CTF 275 system.
+The data is converted to `fif` format before being made available to MNE users. However, MNE-Python now supports
+IO for the `ctf` format directly, in addition to the C converter utilities. Please consult the :ref:`IO section <ch_convert>` for details.
+
+Auditory
+^^^^^^^^
+To access the data, use the following Python commands::
+    
+    from mne.datasets.brainstorm import bst_raw
+    data_path = bst_raw.data_path()
+
+Further details about the data can be found at the `auditory dataset tutorial`_ on the Brainstorm website.
+
+.. topic:: Examples
+
+    * :ref:`Brainstorm auditory dataset tutorial<sphx_glr_auto_examples_datasets_plot_brainstorm_data.py>`: Partially replicates the original Brainstorm tutorial.
+
+Resting state
+^^^^^^^^^^^^^
+To access the data, use the Python command::
+
+    from mne.datasets.brainstorm import bst_resting
+    data_path = bst_resting.data_path()
+
+Further details can be found at the `resting state dataset tutorial`_ on the Brainstorm website.
+
+Median nerve
+^^^^^^^^^^^^
+To access the data, use the Python command::
+
+    from mne.datasets.brainstorm import bst_raw
+    data_path = bst_raw.data_path()
+
+Further details can be found at the `median nerve dataset tutorial`_ on the Brainstorm website.
+
+MEGSIM
+======
+This dataset contains experimental and simulated MEG data. To load data from this dataset, do::
+
+    from mne.io import Raw
+    from mne.datasets.megsim import load_data
+    raw_fnames = load_data(condition='visual', data_format='raw', data_type='experimental', verbose=True)
+    raw = Raw(raw_fnames[0])
+
+Detailed description of the dataset can be found in the related publication [1]_.
+
+.. topic:: Examples
+
+    * :ref:`sphx_glr_auto_examples_datasets_plot_megsim_data.py`
+
+SPM faces
+=========
+The `SPM faces dataset`_ contains EEG, MEG and fMRI recordings on face perception. To access this dataset, do::
+
+    from mne.datasets import spm_face
+    data_path = spm_face.data_path()
+
+.. topic:: Examples
+
+    * :ref:`sphx_glr_auto_examples_datasets_plot_spm_faces_dataset.py` Full pipeline including artifact removal, epochs averaging, forward model computation and source reconstruction using dSPM on the contrast: "faces - scrambled".
+
+EEGBCI motor imagery
+====================
+
+The EEGBCI dataset is documented in [2]_. The data set is available at PhysioNet [3]_.
+The dataset contains 64-channel EEG recordings from 109 subjects, with 14 runs per subject, in EDF+ format.
+The recordings were made using the BCI2000 system. To load a subject, do::
+
+    from mne.io import concatenate_raws, read_raw_edf
+    from mne.datasets import eegbci
+    subject = 1          # illustrative subject number (1-109)
+    runs = [6, 10, 14]   # illustrative run numbers
+    raw_fnames = eegbci.load_data(subject, runs)
+    raws = [read_raw_edf(f, preload=True) for f in raw_fnames]
+    raw = concatenate_raws(raws)
+
+.. topic:: Examples
+
+    * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_csp_eeg.py`
+
+Do not hesitate to contact MNE-Python developers on the `MNE mailing list`_ to discuss the possibility of adding more publicly available datasets.
+
+.. _auditory dataset tutorial: http://neuroimage.usc.edu/brainstorm/DatasetAuditory
+.. _resting state dataset tutorial: http://neuroimage.usc.edu/brainstorm/DatasetResting
+.. _median nerve dataset tutorial: http://neuroimage.usc.edu/brainstorm/DatasetMedianNerveCtf
+.. _SPM faces dataset: http://www.fil.ion.ucl.ac.uk/spm/data/mmfaces/
+.. _MNE mailing list: http://mail.nmr.mgh.harvard.edu/mailman/listinfo/mne_analysis
+
+References
+==========
+
+.. [1] Aine CJ, Sanfratello L, Ranken D, Best E, MacArthur JA, Wallace T, Gilliam K, Donahue CH, Montano R, Bryant JE, Scott A, Stephen JM (2012) MEG-SIM: A Web Portal for Testing MEG Analysis Methods using Realistic Simulated and Empirical Data. Neuroinform 10:141-158
+
+.. [2] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N., Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer Interface (BCI) System. IEEE TBME 51(6):1034-1043
+
+.. [3] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000) PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 101(23):e215-e220
\ No newline at end of file
diff --git a/doc/manual/decoding.rst b/doc/manual/decoding.rst
new file mode 100644
index 0000000..2e0ab48
--- /dev/null
+++ b/doc/manual/decoding.rst
@@ -0,0 +1,168 @@
+.. _decoding:
+
+.. contents:: Contents
+   :local:
+   :depth: 3
+
+Decoding
+########
+
+For maximal compatibility with the Scikit-learn package, we follow the same API. Each estimator implements a ``fit``, a ``transform`` and a ``fit_transform`` method. In some cases, they also implement an ``inverse_transform`` method. For more details, visit the Scikit-learn page.
+
+For ease of comprehension, we will denote instances of a class using the same name as the class, but in lowercase instead of CamelCase.
+
+Basic Estimators
+================
+
+Scaler
+^^^^^^
+This will standardize data across channels. Each channel type (mag, grad or eeg) is treated separately. During training time, the mean (`ch_mean_`) and standard deviation (`std_`) are computed in the ``fit`` method and stored as attributes of the object. The ``transform`` method is called to transform the training set. To perform both the ``fit`` and ``transform`` operations in a single call, the ``fit_transform`` method may be used. During test time, the stored mean and standard deviation are reused by the ``transform`` method to scale the test data.
+
+.. note:: This is different from the ``StandardScaler`` estimator offered by Scikit-Learn. The ``StandardScaler`` standardizes each feature, whereas the ``Scaler`` object standardizes by channel type.
+
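+A minimal usage sketch (assuming ``epochs`` is an existing ``mne.Epochs`` object whose last events column holds the class labels)::
+
+    from mne.decoding import Scaler
+    scaler = Scaler(epochs.info)   # standardizes by channel type
+    X = epochs.get_data()          # shape (n_epochs, n_channels, n_times)
+    X_scaled = scaler.fit_transform(X, epochs.events[:, -1])
+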
+EpochsVectorizer
+^^^^^^^^^^^^^^^^
+The Scikit-learn API requires that data arrays be 2D. A common strategy for sensor-space decoding is to tile the sensors into a single vector. This can be achieved using the function :func:`mne.decoding.EpochsVectorizer.transform`.
+
+To recover the original 3D data, an ``inverse_transform`` can be used. The ``epochs_vectorizer`` is particularly useful when constructing a pipeline object (used mainly for parameter search and cross validation). Used as the first estimator in the pipeline, it enables the downstream estimators to be the more advanced ones implemented in Scikit-learn, as sketched below.
+
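+A hedged sketch of such a pipeline (assuming ``epochs`` exists and Scikit-learn is available)::
+
+    from sklearn.linear_model import LogisticRegression
+    from sklearn.pipeline import Pipeline
+    from mne.decoding import EpochsVectorizer
+
+    # vectorize the 3D epochs, then feed a standard Scikit-learn classifier
+    clf = Pipeline([('vectorizer', EpochsVectorizer()),
+                    ('classifier', LogisticRegression())])
+    clf.fit(epochs.get_data(), epochs.events[:, -1])
+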
+PSDEstimator
+^^^^^^^^^^^^
+This estimator computes the power spectral density (PSD) using the multitaper method. It takes a 3D array as input, reshapes it into 2D and computes the PSD.
+
+FilterEstimator
+^^^^^^^^^^^^^^^
+This estimator filters the 3D epochs data.
+
+.. warning:: This is meant for use in conjunction with ``RtEpochs``. It is not recommended in a normal processing pipeline as it may result in edge artifacts.
+
+Spatial filters
+===============
+
+Just like temporal filters, spatial filters provide weights to modify the data along the sensor dimension. They are popular in the BCI community because of their simplicity and ability to distinguish spatially-separated neural activity.
+
+Common Spatial Pattern
+^^^^^^^^^^^^^^^^^^^^^^
+
+This is a technique to analyze multichannel data based on recordings from two classes. Let :math:`X \in R^{C\times T}` be a segment of data with :math:`C` channels and :math:`T` time points. The data at a single time point is denoted by :math:`x(t)` such that :math:`X=[x(t), x(t+1), ..., x(t+T-1)]`. Common Spatial Pattern (CSP) finds a decomposition that projects the signal in the original sensor space to CSP space using the following transformation:
+
+.. math::       x_{CSP}(t) = W^{T}x(t)
+   :label: csp
+
+where each column of :math:`W \in R^{C\times C}` is a spatial filter and each row of :math:`x_{CSP}` is a CSP component. The matrix :math:`W` is also called the de-mixing matrix in other contexts. Let :math:`\Sigma^{+} \in R^{C\times C}` and :math:`\Sigma^{-} \in R^{C\times C}` be the estimates of the covariance matrices of the two conditions. 
+CSP analysis is given by the simultaneous diagonalization of the two covariance matrices
+
+.. math::       W^{T}\Sigma^{+}W = \lambda^{+}
+   :label: diagonalize_p
+.. math::       W^{T}\Sigma^{-}W = \lambda^{-}
+   :label: diagonalize_n
+
+where :math:`\lambda^{\pm}` are diagonal matrices whose entries are the eigenvalues of the following generalized eigenvalue problem
+
+.. math::      \Sigma^{+}w = \lambda \Sigma^{-}w
+   :label: eigen_problem
+
+Large entries in the diagonal matrix correspond to spatial filters that give high variance in one class but low variance in the other. Such filters thus facilitate discrimination between the two classes.
+
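+The core of the computation can be sketched with SciPy's generalized eigenvalue solver (illustrative only; ``cov_p`` and ``cov_n`` are hypothetical estimates of :math:`\Sigma^{+}` and :math:`\Sigma^{-}`)::
+
+    import numpy as np
+    from scipy import linalg
+
+    # solves  cov_p w = lambda cov_n w  (Equation above)
+    evals, evecs = linalg.eigh(cov_p, cov_n)
+    order = np.argsort(evals)[::-1]   # large lambda: high variance in the "+" class
+    W = evecs[:, order]               # columns are spatial filters
+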
+.. topic:: Examples:
+
+    * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_csp_eeg.py`
+    * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_csp_space.py`
+
+.. topic:: Spotlight:
+
+    The winning entry of the Grasp-and-lift EEG competition in Kaggle uses the CSP implementation in MNE. It was featured as a `script of the week`_.
+
+xDAWN
+^^^^^
+Xdawn is a spatial filtering method designed to improve the signal-to-signal-plus-noise ratio (SSNR) of ERP responses. Xdawn was originally designed for the P300 evoked potential, enhancing the target response with respect to the non-target response. The implementation in MNE-Python is a generalization to any type of ERP.
+
+.. topic:: Examples:
+
+    * :ref:`sphx_glr_auto_examples_preprocessing_plot_xdawn_denoising.py`
+    * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_xdawn_eeg.py`
+
+Effect-matched spatial filtering
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Effect-matched spatial filtering (EMS) yields a spatial filter at each time point and a corresponding time course. Intuitively, the time course gives the similarity between the filter at each time point and the data vector (sensors) at that time point.
+
+.. topic:: Examples
+
+    * :ref:`sphx_glr_auto_examples_decoding_plot_ems_filtering.py`
+
+Patterns vs. filters
+^^^^^^^^^^^^^^^^^^^^
+
+When interpreting the components of the CSP, it is often more intuitive to think about how :math:`x(t)` is composed of the different CSP components :math:`x_{CSP}(t)`. In other words, we can rewrite Equation :eq:`csp` as follows:
+
+.. math::       x(t) = (W^{-1})^{T}x_{CSP}(t)
+   :label: patterns
+
+The columns of the matrix :math:`(W^{-1})^T` are called spatial patterns. This is also called the mixing matrix. The example :ref:`sphx_glr_auto_examples_decoding_plot_linear_model_patterns.py` demonstrates the difference between patterns and filters.
+
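+In code, the patterns can be recovered from the filters with a one-liner (a sketch; ``W`` is the hypothetical de-mixing matrix from the CSP section)::
+
+    import numpy as np
+    patterns = np.linalg.inv(W).T   # columns of (W^-1)^T are the spatial patterns
+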
+Plotting a pattern is as simple as doing::
+
+    >>> info = epochs.info
+    >>> model.plot_patterns(info)  # model is an instantiation of an estimator described in this section
+
+.. image:: ../../_images/sphx_glr_plot_linear_model_patterns_001.png
+   :align: center
+   :height: 100 px
+
+To plot the corresponding filter, you can do::
+
+    >>> model.plot_filters(info)
+
+.. image:: ../../_images/sphx_glr_plot_linear_model_patterns_002.png
+   :align: center
+   :height: 100 px
+
+Sensor-space decoding
+=====================
+
+Generalization Across Time
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Generalization Across Time (GAT) is a modern strategy to infer neuroscientific conclusions from decoding analysis of sensor-space data. An accuracy matrix is constructed where each point represents the performance of the model trained on one time window and tested on another.
+
+.. image:: ../../_images/sphx_glr_plot_decoding_time_generalization_001.png
+   :align: center
+   :width: 400px
+
+To use this functionality, simply do::
+
+    >>> from mne.decoding import GeneralizationAcrossTime
+    >>> gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1)
+    >>> gat.fit(epochs)
+    >>> gat.score(epochs)
+    >>> gat.plot(vmin=0.1, vmax=0.9, title="Generalization Across Time (faces vs. scrambled)")
+
+.. topic:: Examples:
+
+    * :ref:`sphx_glr_auto_examples_decoding_plot_ems_filtering.py`
+    * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_time_generalization_conditions.py`
+
+Time Decoding
+^^^^^^^^^^^^^
+In this strategy, a model trained on one time window is tested on the same time window. A moving time window will thus yield an accuracy curve similar to an ERP, but is considered more sensitive to effects in some situations. It is related to searchlight-based approaches in fMRI. This is also the diagonal of the GAT matrix.
+
+.. image:: ../../_images/sphx_glr_plot_decoding_sensors_001.png
+   :align: center
+   :width: 400px
+
+To generate this plot, you need to initialize a GAT object and then use the method ``plot_diagonal``::
+
+    >>> gat.plot_diagonal()
+
+.. topic:: Examples:
+
+    * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_time_generalization.py`
+
+Source-space decoding
+=====================
+
+Source space decoding is also possible, but because the number of features can be much larger than in the sensor space, univariate feature selection using an ANOVA F-test (or some other metric) can be used to reduce the feature dimension, as sketched below. Interpreting decoding results might be easier in source space than in sensor space.
+
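+Such a feature selection step can be sketched with Scikit-learn (illustrative; ``X`` is a hypothetical ``(n_epochs, n_features)`` array of source-space data and ``y`` the class labels)::
+
+    from sklearn.feature_selection import SelectKBest, f_classif
+
+    selection = SelectKBest(f_classif, k=500)   # keep the 500 most significant features
+    X_reduced = selection.fit_transform(X, y)
+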
+.. topic:: Examples:
+
+    * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_spatio_temporal_source.py`
+
+.. _script of the week: http://blog.kaggle.com/2015/08/12/july-2015-scripts-of-the-week/
diff --git a/doc/manual/index.rst b/doc/manual/index.rst
index 4217a0c..6498115 100644
--- a/doc/manual/index.rst
+++ b/doc/manual/index.rst
@@ -30,9 +30,10 @@ Reading your data
 How to get your raw data loaded in MNE.
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    io
+   memory
 
 Preprocessing
 -------------
@@ -40,11 +41,8 @@ Preprocessing
 Dealing with artifacts and noise sources in data.
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
-   preprocessing/overview
-   preprocessing/bads
-   preprocessing/filter
    preprocessing/ica
    preprocessing/ssp
 
@@ -82,25 +80,31 @@ Using parametric and non-parametric tests with M/EEG data.
 
    statistics
 
-Visualization
--------------
-
-Various tools and techniques for getting a handle on your data.
+Decoding
+--------
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 3
 
-   visualization
+   decoding
 
 Datasets
 --------
 
-Some of the datasets made available to MNE users.
+To enable reproducibility of results, MNE-Python includes several dataset fetchers.
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
+
+   datasets_index
+
+Pitfalls
+--------
+
+.. toctree::
+   :maxdepth: 2
 
-   datasets
+   pitfalls
 
 C tools
 -------
diff --git a/doc/manual/io.rst b/doc/manual/io.rst
index b6b805a..72ac893 100644
--- a/doc/manual/io.rst
+++ b/doc/manual/io.rst
@@ -1,4 +1,3 @@
-
 .. _ch_convert:
 
 .. contents:: Contents
@@ -6,7 +5,24 @@
    :depth: 2
 
 Here we describe the data reading and conversion utilities included
-with the MNE software.
+with the MNE software. The cheatsheet below summarizes the different
+file formats supported by MNE software.
+
+===================   ========================   =========  =================================================================
+Datatype              File format                Extension  MNE-Python function
+===================   ========================   =========  =================================================================
+MEG                   Elekta Neuromag            .fif       :func:`mne.io.read_raw_fif`
+MEG                   4-D Neuroimaging / BTI      dir       :func:`mne.io.read_raw_bti`
+MEG                   CTF                         dir       :func:`mne.io.read_raw_ctf`
+MEG                   KIT                         .sqd      :func:`mne.io.read_raw_kit` and :func:`mne.read_epochs_kit`
+EEG                   Brainvision                .vhdr      :func:`mne.io.read_raw_brainvision`
+EEG                   European data format       .edf       :func:`mne.io.read_raw_edf`
+EEG                   Biosemi data format        .bdf       :func:`mne.io.read_raw_edf`
+EEG                   EGI simple binary          .egi       :func:`mne.io.read_raw_egi`
+EEG                   EEGLAB                     .set       :func:`mne.io.read_raw_eeglab` and :func:`mne.read_epochs_eeglab`
+Electrode locations   elc, txt, csd, sfp, htps   Misc       :func:`mne.channels.read_montage`
+Electrode locations   EEGLAB loc, locs, eloc     Misc       :func:`mne.channels.read_montage`
+===================   ========================   =========  =================================================================
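+
+For example, an EEGLAB set file could be read with (a minimal sketch; the filename is hypothetical)::
+
+    from mne.io import read_raw_eeglab
+    raw = read_raw_eeglab('subject1.set')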
 
 .. note::
     All IO functions in MNE-Python performing reading/conversion of MEG and
@@ -85,10 +101,7 @@ See :ref:`mne_create_comp_data` for command-line options.
 Importing CTF data
 ==================
 
-The C command line tools include a utility :ref:`mne_ctf2fiff`,
-based on the BrainStorm Matlab code by Richard Leahy, John Mosher,
-and Sylvain Baillet, to convert data in CTF ds directory to fif
-format.
+In MNE-Python, :func:`mne.io.read_raw_ctf` can be used to read CTF data.
 
 
 Importing CTF Polhemus data
@@ -284,6 +297,12 @@ The EGI raw files are simple binary files with a header and can be exported
 using the EGI Netstation acquisition software.
 
 
+EEGLAB set files (.set)
+=======================
+
+EEGLAB .set files can be read in using :func:`mne.io.read_raw_eeglab`
+and :func:`mne.read_epochs_eeglab`.
+
 Importing EEG data saved in the Tufts University format
 =======================================================
 
diff --git a/doc/manual/memory.rst b/doc/manual/memory.rst
new file mode 100644
index 0000000..95f1d8c
--- /dev/null
+++ b/doc/manual/memory.rst
@@ -0,0 +1,45 @@
+.. _memory:
+
+.. contents:: Contents
+   :local:
+   :depth: 3
+
+Memory-efficient IO
+###################
+
+Preloading
+==========
+
+Raw
+^^^
+MNE-Python can read data on-demand using the ``preload`` option provided in :ref:`IO functions <ch_convert>`. For example::
+
+    from mne import io
+    from mne.datasets import sample
+    data_path = sample.data_path()
+    raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
+    raw = io.Raw(raw_fname, preload=False)
+
+.. note:: Filtering does not work with ``preload=False``.
+
+Epochs
+^^^^^^
+Similarly, epochs can also be read from disk on demand. For example::
+
+    import mne
+    events = mne.find_events(raw)
+    event_id, tmin, tmax = 1, -0.2, 0.5
+    picks = mne.pick_types(raw.info, meg=True, eeg=True, stim=False, eog=True)
+    epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(eeg=80e-6, eog=150e-6),
+                        preload=False)
+
+When ``preload=False``, the epochs data is loaded from disk on demand. Note that ``preload=False`` for epochs works even if the ``raw`` object
+has been loaded with ``preload=True``. Preloading is also supported for :func:`mne.read_epochs`.
+
+.. warning:: This comes with a caveat. When ``preload=False``, data rejection based on peak-to-peak thresholds is executed when the data is loaded from disk, *not* when the ``Epochs`` object is created.
+
+To explicitly reject artifacts with ``preload=False``, use the function :func:`mne.Epochs.drop_bad_epochs`.
+
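+For example, with the ``epochs`` object defined above::
+
+    epochs.drop_bad_epochs()  # peak-to-peak rejection is executed at this point
+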
+Loading data explicitly
+=======================
+To load the data if ``preload=False`` was initially selected, use the functions :func:`mne.Raw.load_data` and :func:`mne.Epochs.load_data`.
\ No newline at end of file
diff --git a/doc/manual/pitfalls.rst b/doc/manual/pitfalls.rst
new file mode 100644
index 0000000..35c06d8
--- /dev/null
+++ b/doc/manual/pitfalls.rst
@@ -0,0 +1,27 @@
+.. _pitfalls:
+
+.. contents:: Contents
+   :local:
+   :depth: 2
+
+Pitfalls
+########
+
+Evoked Arithmetic
+=================
+
+Two evoked objects can be contrasted using::
+
+    >>> evoked = evoked_cond1 - evoked_cond2
+
+Note, however, that the numbers of trials used to obtain the averages for ``evoked_cond1`` and ``evoked_cond2`` are taken into account when computing ``evoked``. That is, what you get is a weighted average, not a simple element-by-element subtraction. To do a uniform (not weighted) average, use the function :func:`mne.combine_evoked`, as sketched below.
+
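+For example (a hedged sketch; the ``weights='equal'`` option is assumed from recent releases, check your version's signature)::
+
+    >>> evoked_avg = mne.combine_evoked([evoked_cond1, evoked_cond2], weights='equal')
+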
+Float64 vs float32
+==================
+
+MNE-Python performs all computation in memory using the double-precision 64-bit floating point format. This means that the data is typecast into `float64` format as soon as it is read into memory. The reason for this is that operations such as filtering and preprocessing are more accurate when using the double-precision format. However, for backward compatibility, it writes the `fif` files in a 32-bit format by default. This is advantageous when saving data to disk as it consumes less space.
+
+However, if users save intermediate results to disk, they should be aware that this may lead to a loss in precision. The reason is that writing to disk is 32-bit by default, and typecasting back to 64-bit does not recover the lost precision. If you would like to retain the 64-bit accuracy, there are two possibilities:
+
+* Chain the operations in memory and do not save intermediate results
+* Save intermediate results but change the ``dtype`` used for saving, as sketched below. However, this may render the files unreadable in other software packages
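+
+For instance, a raw file could be written in double precision as follows (a hedged sketch; the ``fmt`` argument of ``Raw.save`` is assumed)::
+
+    >>> raw.save('sample_64bit-raw.fif', fmt='double')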
diff --git a/doc/manual/preprocessing/overview.rst b/doc/manual/preprocessing/overview.rst
index 99ea2d6..7a3c5c0 100644
--- a/doc/manual/preprocessing/overview.rst
+++ b/doc/manual/preprocessing/overview.rst
@@ -1,3 +1,5 @@
-========
-Overview
-========
+=========
+Filtering
+=========
+
+
diff --git a/doc/manual/preprocessing/ssp.rst b/doc/manual/preprocessing/ssp.rst
index 06dace9..e331501 100644
--- a/doc/manual/preprocessing/ssp.rst
+++ b/doc/manual/preprocessing/ssp.rst
@@ -1,41 +1,18 @@
 .. _ssp:
 
-The Signal-Space Projection (SSP) method
-########################################
-
-The Signal-Space Projection (SSP) is one approach to rejection
-of external disturbances in software. This part presents some
-relevant details of this method.
-
-In MNE-Python SSS projection vectors can be computed using general
-purpose functions :func:` mne.compute_proj_epochs`,
-:func:`mne.compute_proj_evoked`, and :func:`mne.compute_proj_raw`.
-The general assumption these functions make is that the data passed contains
-raw, epochs or averages of the artifact. Typically this involves continues raw
-data of empty room recordings or averaged ECG or EOG artefacts.
+.. contents:: Contents
+   :local:
+   :depth: 3
 
-A second set of highlevel convenience functions is provided to compute
-projection vector for typical usecases. This includes
-:func:`mne.preprocessing.compute_proj_ecg` and
-:func:`mne.preprocessing.compute_proj_eog` for computing the ECG and EOG
-related artifact components, respectively.
+Projections
+###########
 
-The underlying implementation can be found in :mod:`mne.preprocessing.ssp`.
-
-The following examples demonstrate how to use the SSP code:
-In :ref:`example_visualization_plot_evoked_delayed_ssp.py` and  :ref:`example_visualization_plot_evoked_topomap_delayed_ssp.py`
-SSPs are illustrated by toggling them in realtime.
-In :ref:`example_visualization_plot_ssp_projs_topomaps.py` and :ref:`example_visualization_plot_ssp_projs_sensitivity_map.py`
-the SSP sensitivities are visualized in sensor and source space, respectively.
-
-Background
-==========
-
-Concepts
---------
+The Signal-Space Projection (SSP) method
+========================================
 
-Unlike many other noise-cancellation approaches, SSP does
-not require additional reference sensors to record the disturbance
+The Signal-Space Projection (SSP) is one approach to rejecting
+external disturbances in software. Unlike many other noise-cancellation
+approaches, SSP does not require additional reference sensors to record the disturbance
 fields. Instead, SSP relies on the fact that the magnetic field
 distributions generated by the sources in the brain have spatial
 distributions sufficiently different from those generated by external
@@ -43,17 +20,22 @@ noise sources. Furthermore, it is implicitly assumed that the linear
 space spanned by the significant external noise patterns has a low
 dimension.
 
+What is SSP?
+------------
+
 Without loss of generality we can always decompose any :math:`n`-channel
 measurement :math:`b(t)` into its signal and
 noise components as
 
 .. math::    b(t) = b_s(t) + b_n(t)
+   :label: additive_model
 
 Further, if we know that :math:`b_n(t)` is
 well characterized by a few field patterns :math:`b_1 \dotso b_m`,
 we can express the disturbance as
 
 .. math::    b_n(t) = Uc_n(t) + e(t)\ ,
+   :label: pca
 
 where the columns of :math:`U` constitute
 an orthonormal basis for :math:`b_1 \dotso b_m`, :math:`c_n(t)` is
@@ -67,14 +49,20 @@ conditions described above are satisfied. We can now construct the
 orthogonal complement operator
 
 .. math::    P_{\perp} = I - UU^T
+   :label: projector
 
-and apply it to :math:`b(t)` yielding
+and apply it to :math:`b(t)` in Equation :eq:`additive_model` yielding
 
-.. math::    b(t) = P_{\perp}b_s(t)\ ,
+.. math::    b_{s}(t) \approx P_{\perp}b(t)\ ,
+   :label: result
 
-since :math:`P_{\perp}b_n(t) = P_{\perp}Uc_n(t) \approx 0`. The projection operator :math:`P_{\perp}` is
-called the signal-space projection operator and generally provides
-considerable rejection of noise, suppressing external disturbances
+since :math:`P_{\perp}b_n(t) = P_{\perp}(Uc_n(t) + e(t)) \approx 0` and :math:`P_{\perp}b_{s}(t) \approx b_{s}(t)`. The projection operator :math:`P_{\perp}` is
+called the **signal-space projection operator**.
+
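+In NumPy terms the operator is straightforward to construct (a sketch; ``U`` is a hypothetical ``(n_channels, m)`` orthonormal basis of the noise subspace and ``b`` a measurement vector)::
+
+    import numpy as np
+
+    P = np.eye(U.shape[0]) - np.dot(U, U.T)   # orthogonal-complement projector
+    b_clean = np.dot(P, b)                    # apply to a measurement
+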
+Why SSP?
+--------
+
+It provides considerable rejection of noise, suppressing external disturbances
 by a factor of 10 or more. The effectiveness of SSP depends on two
 factors:
 
@@ -108,6 +96,7 @@ please consult the references listed in :ref:`CEGIEEBB`.
 
 .. figure:: ../pics/proj-off-on.png
     :alt: example of the effect of SSP
+    :align: center
 
     An example of the effect of SSP
 
@@ -126,3 +115,82 @@ decomposition, and employ the eigenvectors corresponding to the
 highest eigenvalues as basis for the noise subspace. It is also
 customary to use a separate set of vectors for magnetometers and
 gradiometers in the Vectorview system.
+
+Average reference
+-----------------
+
+The EEG average reference is the mean signal over all the sensors. It is typical in EEG analysis to subtract the average reference from all the sensor signals :math:`b^{1}(t), ..., b^{n}(t)`. That is:
+
+.. math::	{b}^{j}_{s}(t) = b^{j}(t) - \frac{1}{n}\sum_{k}{b^k(t)}
+   :label: eeg_proj
+
+where the noise term :math:`b_{n}^{j}(t)` is given by
+
+.. math:: 	b_{n}^{j}(t) = \frac{1}{n}\sum_{k}{b^k(t)}
+   :label: noise_term
+
+Thus, the noise subspace is spanned by the single normalized vector :math:`u = \frac{1}{\sqrt{n}}[1, 1, ..., 1]^{T}`, and the corresponding projection operator is :math:`P_{\perp} = I - uu^{T}`.
+
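+For instance, with :math:`n = 4` channels the average-reference projector can be built as follows (an illustrative sketch)::
+
+    import numpy as np
+
+    u = np.ones((4, 1)) / np.sqrt(4)   # normalized noise-subspace vector
+    P = np.eye(4) - np.dot(u, u.T)     # subtracts the channel mean from each channel
+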
+.. Warning:: When applying SSP, the signal of interest can also be sometimes removed. Therefore, it's always a good idea to check how much the effect of interest is reduced by applying SSP. SSP might remove *both* the artifact and signal of interest.
+
+The API
+=======
+
+Once a projector is applied to the data, it is said to be `active`.
+
+The proj attribute
+------------------
+
+It is available in all the basic data containers: ``Raw``, ``Epochs`` and ``Evoked``. It is ``True`` if at least one projector is present and all of them are `active`. 
+
+Computing projectors
+--------------------
+
+In MNE-Python SSP vectors can be computed using general
+purpose functions :func:`mne.compute_proj_epochs`,
+:func:`mne.compute_proj_evoked`, and :func:`mne.compute_proj_raw`.
+The general assumption these functions make is that the data passed contains
+raw data, epochs or averages of the artifact. Typically this involves continuous raw
+data of empty-room recordings or averaged ECG or EOG artifacts.
+
+A second set of high-level convenience functions is provided to compute projection vectors for typical use cases. This includes :func:`mne.preprocessing.compute_proj_ecg` and :func:`mne.preprocessing.compute_proj_eog` for computing the ECG- and EOG-related artifact components, respectively. For computing the EEG average reference signal, the function :func:`mne.preprocessing.ssp.make_eeg_average_ref_proj` can be used. The underlying implementation can be found in :mod:`mne.preprocessing.ssp`.
+
+.. _remove_projector:
+
+Adding/removing projectors
+--------------------------
+
+To explicitly add a ``proj``, use ``add_proj``. For example::
+
+    >>> projs = mne.read_proj('proj_a.fif')
+    >>> evoked.add_proj(projs)
+
+If projectors are already present in the raw `fif` file, they will be added to the ``info`` dictionary automatically. To remove existing projectors, you can do::
+
+    >>> evoked.add_proj([], remove_existing=True)
+
+Applying projectors
+-------------------
+
+Projectors can be applied at any stage of the pipeline. When the ``raw`` data is read in, the projectors are not applied by default, but they can be applied on load via the ``proj`` argument. However, at the ``epochs`` stage, the projectors are applied by default.
+
+To explicitly apply projectors at any stage of the pipeline, use ``apply_proj``. For example::
+
+    >>> evoked.apply_proj()
+
+The projectors might not be applied if data are not :ref:`preloaded <memory>`. In this case, it's the ``_projector`` attribute that indicates if a projector will be applied when the data is loaded in memory. If the data is already in memory, then the projectors applied to it are the ones marked as `active`. As soon as you've applied the projectors, they will stay active for the rest of the pipeline.
+
+.. Warning:: Once a projection operator is applied, it cannot be reversed.
+.. Warning:: Projections present in the info are applied during inverse computation whether or not they are `active`. Therefore, if a certain projection should not be applied, remove it from the info as described in Section :ref:`remove_projector`
+
+Delayed projectors
+------------------
+
+The suggested pipeline is ``proj=True`` in epochs (it's computationally cheaper than for raw). When you use delayed SSP in ``Epochs``, projectors are applied when you call the :func:`mne.Epochs.get_data` method. They are not applied to the ``evoked`` data unless you call ``apply_proj()``. The reason is that epoch rejection should be performed on data with the projectors applied, even though the projectors themselves are not permanently applied in this mode.
+
+.. topic:: Examples:
+
+    * :ref:`example_visualization_plot_evoked_delayed_ssp.py`: Interactive SSP
+    * :ref:`example_visualization_plot_evoked_topomap_delayed_ssp.py`: Interactive SSP
+    * :ref:`example_visualization_plot_ssp_projs_topomaps.py`: SSP sensitivities in sensor space
+    * :ref:`example_visualization_plot_ssp_projs_sensitivity_map.py`: SSP sensitivities in source space
diff --git a/doc/manual/datasets.rst b/doc/manual/sample_dataset.rst
similarity index 100%
rename from doc/manual/datasets.rst
rename to doc/manual/sample_dataset.rst
diff --git a/doc/python_reference.rst b/doc/python_reference.rst
index eda6534..95fc4fe 100644
--- a/doc/python_reference.rst
+++ b/doc/python_reference.rst
@@ -36,6 +36,8 @@ Classes
    Label
    BiHemiLabel
    Transform
+   io.Info
+   io.Projection
    preprocessing.ICA
    decoding.CSP
    decoding.Scaler
@@ -102,8 +104,11 @@ Functions:
   :template: function.rst
 
   read_raw_bti
+  read_raw_ctf
   read_raw_edf
   read_raw_kit
+  read_raw_nicolet
+  read_raw_eeglab
   read_raw_brainvision
   read_raw_egi
   read_raw_fif
@@ -141,6 +146,7 @@ Functions:
    read_dipole
    read_epochs
    read_epochs_kit
+   read_epochs_eeglab
    read_events
    read_evokeds
    read_forward_solution
@@ -353,6 +359,14 @@ Projections:
    read_proj
    write_proj
 
+.. currentmodule:: mne.preprocessing.ssp
+
+.. autosummary::
+   :toctree: generated/
+   :template: function.rst
+
+   make_eeg_average_ref_proj
+
 Manipulate channels and set sensors locations for processing and plotting:
 
 .. currentmodule:: mne.channels
@@ -405,6 +419,7 @@ Functions:
    find_eog_events
    ica_find_ecg_events
    ica_find_eog_events
+   maxwell_filter
    read_ica
    run_ica
 
@@ -471,10 +486,11 @@ Events
    :toctree: generated/
    :template: function.rst
 
-   combine_event_ids
-   equalize_epoch_counts
    add_channels_epochs
+   average_movements
+   combine_event_ids
    concatenate_epochs
+   equalize_epoch_counts
 
 Sensor Space Data
 =================
@@ -857,6 +873,7 @@ Functions to compute connectivity (adjacency) matrices for cluster-level statist
    spatial_dist_connectivity
    spatial_src_connectivity
    spatial_tris_connectivity
+   spatial_inter_hemi_connectivity
    spatio_temporal_src_connectivity
    spatio_temporal_tris_connectivity
    spatio_temporal_dist_connectivity
diff --git a/doc/this_project.inc b/doc/this_project.inc
index 32595b4..95c5c61 100644
--- a/doc/this_project.inc
+++ b/doc/this_project.inc
@@ -1,4 +1,4 @@
 .. mne-python
 .. _mne-python: http://mne-tools.github.io/mne-python-intro
 .. _`mne-python GitHub`: https://github.com/mne-tools/mne-python
-.. _`mne-python sample dataset`: ftp://surfer.nmr.mgh.harvard.edu/pub/data/MNE-sample-data-processed.tar.gz
+.. _`mne-python sample dataset`: https://s3.amazonaws.com/mne-python/datasets/MNE-sample-data-processed.tar.gz
diff --git a/doc/tutorials/mne-report.png b/doc/tutorials/mne-report.png
new file mode 100644
index 0000000..0278ae9
Binary files /dev/null and b/doc/tutorials/mne-report.png differ
diff --git a/doc/tutorials/report.rst b/doc/tutorials/report.rst
index 2925663..5ea0a55 100644
--- a/doc/tutorials/report.rst
+++ b/doc/tutorials/report.rst
@@ -56,6 +56,11 @@ To generate the report in parallel::
     mne report --path MNE-sample-data/ --info MNE-sample-data/MEG/sample/sample_audvis-ave.fif \ 
         --subject sample --subjects-dir MNE-sample-data/subjects --verbose --jobs 6
 
+The report rendered on the sample data is shown below:
+
+    .. image:: mne-report.png
+       :align: center
+
 For help on all the available options, do::
 
     mne report --help
diff --git a/doc/whats_new.rst b/doc/whats_new.rst
index 612dc72..643a4e6 100644
--- a/doc/whats_new.rst
+++ b/doc/whats_new.rst
@@ -4,6 +4,97 @@ What's new
     Note, we are now using links to highlight new functions and classes.
     Please be sure to follow the examples below like :func:`mne.stats.f_mway_rm`, so the whats_new page will have a link to the function/class documentation.
 
+.. _changes_0_11:
+
+Version 0.11
+------------
+
+Changelog
+~~~~~~~~~
+
+    - Maxwell filtering (SSS) implemented in :func:`mne.preprocessing.maxwell_filter` by `Mark Wronkiewicz`_ as part of Google Summer of Code, with help from `Samu Taulu`_, `Jukka Nenonen`_, and `Jussi Nurminen`_. Our implementation includes support for:
+
+        - Fine calibration
+
+        - Cross-talk correction
+
+        - Temporal SSS (tSSS)
+
+        - Head position translation
+
+        - Internal component regularization
+
+    - Compensation for movements using Maxwell filtering on epoched data in :func:`mne.epochs.average_movements` by `Eric Larson`_ and `Samu Taulu`_
+
+    - Add reader for Nicolet files in :func:`mne.io.read_raw_nicolet` by `Jaakko Leppakangas`_
+
+    - Add FIFF persistence for ICA labels by `Denis Engemann`_
+
+    - Display ICA labels in :func:`mne.viz.plot_ica_scores` and :func:`mne.viz.plot_ica_sources` (for evoked objects) by `Denis Engemann`_
+
+    - Plot spatially color coded lines in :func:`mne.Evoked.plot` by `Jona Sassenhagen`_ and `Jaakko Leppakangas`_
+
+    - Add reader for CTF data in :func:`mne.io.read_raw_ctf` by `Eric Larson`_
+
+    - Add support for Brainvision v2 in :func:`mne.io.read_raw_brainvision` by `Teon Brooks`_
+
+    - Improve the speed of :class:`mne.decoding.GeneralizationAcrossTime` decoding by up to a factor of seven by `Jean-Remi King`_, `Federico Raimondo`_ and `Denis Engemann`_.
+
+    - Add an ``explained_var`` key, giving the explained variance for each principal component, to :class:`mne.io.Projection` by `Teon Brooks`_
+
+    - Added methods :func:`mne.Epochs.add_eeg_average_proj`, :func:`mne.io.Raw.add_eeg_average_proj`, and :func:`mne.Evoked.add_eeg_average_proj` to add an average EEG reference.
+
+    - Add reader for EEGLAB data in :func:`mne.io.read_raw_eeglab` and :func:`mne.read_epochs_eeglab` by `Mainak Jas`_
+
+BUG
+~~~
+
+    - Fix bug that prevented homogeneous BEM surfaces from being displayed in HTML reports by `Denis Engemann`_
+
+    - Added safeguards against ``None`` and negative values in the ``reject`` and ``flat`` parameters in :class:`mne.Epochs` by `Eric Larson`_
+
+    - Fix train and test time window-length in :class:`mne.decoding.GeneralizationAcrossTime` by `Jean-Remi King`_
+
+    - Added a lower bound in :func:`mne.stats.linear_regression` on p-values ``p_val`` (and resulting ``mlog10_p_val``) using double floating point arithmetic limits by `Eric Larson`_
+
+    - Fix channel name picking in the :func:`mne.Evoked.get_peak` method by `Alex Gramfort`_
+
+    - Fix drop percentages to take into account ``ignore`` option in :func:`mne.viz.plot_drop_log` and :func:`mne.Epochs.plot_drop_log` by `Eric Larson`_.
+
+    - :class:`mne.EpochsArray` no longer has an average EEG reference silently added (but not applied to the data) by default. Use :func:`mne.EpochsArray.add_eeg_ref` to properly add one.
+
+API
+~~~
+
+    - :func:`mne.io.read_raw_brainvision` now has an ``event_id`` argument to assign non-standard trigger events to a trigger value by `Teon Brooks`_
+
+    - :func:`mne.read_epochs` now has ``add_eeg_ref=False`` by default, since average EEG reference can be added before writing or after reading using the method :func:`mne.Epochs.add_eeg_ref`.
+
+    - :class:`mne.EpochsArray` no longer has an average EEG reference silently added (but not applied to the data) by default. Use :func:`mne.EpochsArray.add_eeg_average_proj` to properly add one.
+
+Authors
+~~~~~~~
+
+The committer list for this release is the following (preceded by number of commits):
+
+   171  Eric Larson
+   117  Jaakko Leppakangas
+    58  Jona Sassenhagen
+    52  Mainak Jas
+    46  Alexandre Gramfort
+    33  Denis A. Engemann
+    28  Teon Brooks
+    24  Clemens Brunner
+    23  Christian Brodbeck
+    15  Mark Wronkiewicz
+    10  Jean-Remi King
+     5  Marijn van Vliet
+     3  Fede Raimondo
+     2  Alexander Rudiuk
+     2  emilyps14
+     2  lennyvarghese
+     1  Marian Dovgialo
+
 .. _changes_0_10:
 
 Version 0.10
@@ -119,7 +210,7 @@ API
     - ``RawBrainVision`` objects now always have event channel ``'STI 014'``, and recordings with no events will have this channel set to zero by `Eric Larson`_
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number of commits):
 
@@ -353,7 +444,7 @@ API
    - Add ``montage`` parameter to the ``create_info`` function to create the info using montages by `Teon Brooks`_
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number of commits):
 
@@ -553,7 +644,7 @@ API
    - As default, for ICA the maximum number of PCA components equals the number of channels passed. The number of PCA components used to reconstruct the sensor space signals now defaults to the maximum number of PCA components estimated.
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number of commits):
 
@@ -699,7 +790,7 @@ API
 
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number
 of commits):
@@ -862,7 +953,7 @@ API
    - Remove artifacts module. Artifacts- and preprocessing related functions can now be found in mne.preprocessing.
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number
 of commits):
@@ -996,7 +1087,7 @@ API
    - Epochs objects now also take dicts as values for the event_id argument. They now can represent multiple conditions.
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number
 of commits):
@@ -1052,7 +1143,7 @@ Changelog
    - Add method to eliminate stimulation artifacts from raw data by linear interpolation or windowing by `Daniel Strohmeier`_.
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number
 of commits):
@@ -1097,7 +1188,7 @@ Changelog
    - New tutorial in the documentation and new classes and functions reference page by `Alex Gramfort`_.
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number
 of commits):
@@ -1134,7 +1225,7 @@ version 0.1:
   - New return values for the function find_ecg_events
 
 Authors
-~~~~~~~~~
+~~~~~~~
 
 The committer list for this release is the following (preceded by number
 of commits):
@@ -1234,4 +1325,8 @@ of commits):
 
 .. _Lorenzo Desantis: https://github.com/lorenzo-desantis/
 
+.. _Jukka Nenonen: https://www.linkedin.com/pub/jukka-nenonen/28/b5a/684
+
+.. _Jussi Nurminen: https://scholar.google.fi/citations?user=R6CQz5wAAAAJ&hl=en
+
 .. _Clemens Brunner: https://github.com/cle1109
diff --git a/examples/datasets/plot_brainstorm_data.py b/examples/datasets/plot_brainstorm_data.py
index eca7453..08f0bcd 100644
--- a/examples/datasets/plot_brainstorm_data.py
+++ b/examples/datasets/plot_brainstorm_data.py
@@ -34,11 +34,12 @@ data_path = bst_raw.data_path()
 
 raw_fname = data_path + '/MEG/bst_raw/' + \
                         'subj001_somatosensory_20111109_01_AUX-f_raw.fif'
-raw = Raw(raw_fname, preload=True)
+raw = Raw(raw_fname, preload=True, add_eeg_ref=False)
 raw.plot()
 
 # set EOG channel
 raw.set_channel_types({'EEG058': 'eog'})
+raw.add_eeg_average_proj()
 
 # show power line interference and remove it
 raw.plot_psd()
diff --git a/examples/decoding/plot_decoding_time_generalization.py b/examples/decoding/plot_decoding_time_generalization.py
index f9495b0..f04906e 100644
--- a/examples/decoding/plot_decoding_time_generalization.py
+++ b/examples/decoding/plot_decoding_time_generalization.py
@@ -45,7 +45,7 @@ epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
                     reject=dict(mag=1.5e-12), decim=decim, verbose=False)
 
 # Define decoder. The decision function is employed to use cross-validation
-gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=2)
+gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1)
 
 # fit and score
 gat.fit(epochs)
diff --git a/examples/preprocessing/plot_maxwell_filter.py b/examples/preprocessing/plot_maxwell_filter.py
new file mode 100644
index 0000000..100b1b4
--- /dev/null
+++ b/examples/preprocessing/plot_maxwell_filter.py
@@ -0,0 +1,45 @@
+"""
+=======================
+Maxwell filter raw data
+=======================
+
+This example shows how to process M/EEG data with Maxwell filtering
+in mne-python.
+"""
+# Authors: Eric Larson <larson.eric.d at gmail.com>
+#          Alexandre Gramfort <alexandre.gramfort at telecom-paristech.fr>
+#          Mark Wronkiewicz <wronk.mark at gmail.com>
+#
+# License: BSD (3-clause)
+
+import mne
+from mne.preprocessing import maxwell_filter
+
+print(__doc__)
+
+data_path = mne.datasets.sample.data_path()
+
+###############################################################################
+# Set parameters
+raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
+ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif'
+fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat'
+
+# Preprocess with Maxwell filtering
+raw = mne.io.Raw(raw_fname)
+raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313']  # set bads
+# We don't use tSSS here (set st_duration to enable it); MGH data is very clean
+raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname)
+
+# Select events to extract epochs from, pick M/EEG channels, and plot evoked
+tmin, tmax = -0.2, 0.5
+event_id = {'Auditory/Left': 1}
+events = mne.find_events(raw, 'STI 014')
+picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
+                       include=[], exclude='bads')
+for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')):
+    epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks,
+                        baseline=(None, 0), reject=dict(eog=150e-6),
+                        preload=False)
+    evoked = epochs.average()
+    evoked.plot(window_title=kind)
diff --git a/examples/visualization/plot_evoked_erf_erp.py b/examples/visualization/plot_evoked_erf_erp.py
index ed0e86d..c2020b0 100644
--- a/examples/visualization/plot_evoked_erf_erp.py
+++ b/examples/visualization/plot_evoked_erf_erp.py
@@ -23,16 +23,16 @@ fname = path + '/MEG/sample/sample_audvis-ave.fif'
 condition = 'Left Auditory'
 evoked = read_evokeds(fname, condition=condition, baseline=(None, 0))
 
+# Plot the evoked response with spatially color coded lines.
 # Note: You can paint the area with left mouse button to show the topographic
 # map of the N100.
-
-evoked.plot()
+evoked.plot(spatial_colors=True)
 
 ###############################################################################
 # Or plot manually after extracting peak latency
 
 evoked = evoked.pick_types(meg=False, eeg=True)
-times = 1e3 * evoked.times  # time in miliseconds
+times = 1e3 * evoked.times  # time in milliseconds
 
 ch_max_name, latency = evoked.get_peak(mode='neg')
 
diff --git a/mne/__init__.py b/mne/__init__.py
index 6f8b44a..98b02dd 100644
--- a/mne/__init__.py
+++ b/mne/__init__.py
@@ -17,12 +17,12 @@
 # Dev branch marker is: 'X.Y.devN' where N is an integer.
 #
 
-__version__ = '0.10.0'
+__version__ = '0.11.dev0'
 
 # have to import verbose first since it's needed by many things
 from .utils import (set_log_level, set_log_file, verbose, set_config,
                     get_config, get_config_path, set_cache_dir,
-                    set_memmap_min_size)
+                    set_memmap_min_size, grand_average)
 from .io.pick import (pick_types, pick_channels,
                       pick_channels_regexp, pick_channels_forward,
                       pick_types_forward, pick_channels_cov,
@@ -31,12 +31,12 @@ from .io.base import concatenate_raws
 from .chpi import get_chpi_positions
 from .io.meas_info import create_info
 from .io.kit import read_epochs_kit
+from .io.eeglab import read_epochs_eeglab
 from .bem import (make_sphere_model, make_bem_model, make_bem_solution,
-                  read_bem_surfaces, write_bem_surface, write_bem_surfaces,
+                  read_bem_surfaces, write_bem_surfaces,
                   read_bem_solution, write_bem_solution)
-from .cov import (read_cov, write_cov, Covariance,
-                  compute_covariance, compute_raw_data_covariance,
-                  compute_raw_covariance, whiten_evoked, make_ad_hoc_cov)
+from .cov import (read_cov, write_cov, Covariance, compute_raw_covariance,
+                  compute_covariance, whiten_evoked, make_ad_hoc_cov)
 from .event import (read_events, write_events, find_events, merge_events,
                     pick_events, make_fixed_length_events, concatenate_events,
                     find_stim_steps)
@@ -51,6 +51,7 @@ from .source_estimate import (read_source_estimate, MixedSourceEstimate,
                               spatial_src_connectivity,
                               spatial_tris_connectivity,
                               spatial_dist_connectivity,
+                              spatial_inter_hemi_connectivity,
                               spatio_temporal_src_connectivity,
                               spatio_temporal_tris_connectivity,
                               spatio_temporal_dist_connectivity,
@@ -63,8 +64,7 @@ from .source_space import (read_source_spaces, vertex_to_mni,
                            add_source_space_distances, morph_source_spaces,
                            get_volume_labels_from_aseg)
 from .epochs import Epochs, EpochsArray, read_epochs
-from .evoked import (Evoked, EvokedArray, read_evokeds, write_evokeds,
-                     grand_average, combine_evoked)
+from .evoked import Evoked, EvokedArray, read_evokeds, write_evokeds, combine_evoked
 from .label import (read_label, label_sign_flip,
                     write_label, stc_to_label, grow_labels, Label, split_label,
                     BiHemiLabel, read_labels_from_annot, write_labels_to_annot)
diff --git a/mne/bem.py b/mne/bem.py
index 2e83e22..c403ed0 100644
--- a/mne/bem.py
+++ b/mne/bem.py
@@ -10,12 +10,12 @@ import os
 import os.path as op
 import shutil
 import glob
+
 import numpy as np
 from scipy import linalg
 
 from .fixes import partial
-from .utils import (verbose, logger, run_subprocess, deprecated,
-                    get_subjects_dir)
+from .utils import verbose, logger, run_subprocess, get_subjects_dir
 from .transforms import _ensure_trans, apply_trans
 from .io.constants import FIFF
 from .io.write import (start_file, start_block, write_float, write_int,
@@ -821,6 +821,11 @@ def fit_sphere_to_headshape(info, dig_kinds=(FIFF.FIFFV_POINT_EXTRA,),
         Head center in head coordinates (mm).
     origin_device: ndarray, shape (3,)
         Head center in device coordinates (mm).
+
+    Notes
+    -----
+    This function excludes any points that are low and frontal
+    (``z < 0 and y > 0``) to improve the fit.
     """
     # get head digization points of the specified kind
     hsp = [p['r'] for p in info['dig'] if p['kind'] in dig_kinds]
@@ -844,6 +849,16 @@ def fit_sphere_to_headshape(info, dig_kinds=(FIFF.FIFFV_POINT_EXTRA,),
     origin_device *= 1e3
 
     logger.info('Fitted sphere radius:'.ljust(30) + '%0.1f mm' % radius)
+    # 99th percentile on Wikipedia for glabella to back of head is 21.7cm,
+    # i.e. 108mm "radius", so let's go with 110mm
+    # en.wikipedia.org/wiki/Human_head#/media/File:HeadAnthropometry.JPG
+    if radius > 110.:
+        logger.warning('Estimated head size (%0.1f mm) exceeded 99th '
+                       'percentile for adult head size' % (radius,))
+    # > 2 cm away from head center in X or Y is strange
+    if np.sqrt(np.sum(origin_head[:2] ** 2)) > 20:
+        logger.warning('(X, Y) fit (%0.1f, %0.1f) more than 20 mm from '
+                       'head frame origin' % tuple(origin_head[:2]))
     logger.info('Origin head coordinates:'.ljust(30) +
                 '%0.1f %0.1f %0.1f mm' % tuple(origin_head))
     logger.info('Origin device coordinates:'.ljust(30) +
@@ -881,6 +896,30 @@ def _fit_sphere(points, disp='auto'):
     return radius, origin
 
 
+def _check_origin(origin, info, coord_frame='head', disp=False):
+    """Helper to check or auto-determine the origin"""
+    if isinstance(origin, string_types):
+        if origin != 'auto':
+            raise ValueError('origin must be a numerical array, or "auto", '
+                             'not %s' % (origin,))
+        if coord_frame == 'head':
+            R, origin = fit_sphere_to_headshape(info, verbose=False)[:2]
+            origin /= 1000.
+            logger.info('    Automatic origin fit: head of radius %0.1f mm'
+                        % R)
+            del R
+        else:
+            origin = (0., 0., 0.)
+    origin = np.array(origin, float)
+    if origin.shape != (3,):
+        raise ValueError('origin must be a 3-element array')
+    if disp:
+        origin_str = ', '.join(['%0.1f' % (o * 1000) for o in origin])
+        logger.info('    Using origin %s mm in the %s frame'
+                    % (origin_str, coord_frame))
+    return origin
+
+
 # ############################################################################
 # Create BEM surfaces
 
@@ -914,15 +953,9 @@ def make_watershed_bem(subject, subjects_dir=None, overwrite=False,
     .. versionadded:: 0.10
     """
     from .surface import read_surface
-    env = os.environ.copy()
-
-    if not os.environ.get('FREESURFER_HOME'):
-        raise RuntimeError('FREESURFER_HOME environment variable not set')
-
-    env['SUBJECT'] = subject
-
-    subjects_dir = get_subjects_dir(subjects_dir, raise_error=True)
-    env['SUBJECTS_DIR'] = subjects_dir
+    env, mri_dir = _prepare_env(subject, subjects_dir,
+                                requires_freesurfer=True,
+                                requires_mne=True)[:2]
 
     subject_dir = op.join(subjects_dir, subject)
     mri_dir = op.join(subject_dir, 'mri')
@@ -1274,25 +1307,6 @@ def _bem_explain_surface(id_):
 # ############################################################################
 # Write
 
- at deprecated('write_bem_surface is deprecated and will be removed in 0.11, '
-            'use write_bem_surfaces instead')
-def write_bem_surface(fname, surf):
-    """Write one bem surface
-
-    Parameters
-    ----------
-    fname : string
-        File to write
-    surf : dict
-        A surface structured as obtained with read_bem_surfaces
-
-    See Also
-    --------
-    read_bem_surfaces
-    """
-    write_bem_surfaces(fname, surf)
-
-
 def write_bem_surfaces(fname, surfs):
     """Write BEM surfaces to a fiff file
 
@@ -1366,11 +1380,21 @@ def write_bem_solution(fname, bem):
 # #############################################################################
 # Create 3-Layers BEM model from Flash MRI images
 
-def _prepare_env(subject, subjects_dir):
+def _prepare_env(subject, subjects_dir, requires_freesurfer, requires_mne):
     """Helper to prepare an env object for subprocess calls"""
     env = os.environ.copy()
+    if requires_freesurfer and not os.environ.get('FREESURFER_HOME'):
+        raise RuntimeError('I cannot find freesurfer. The FREESURFER_HOME '
+                           'environment variable is not set.')
+    if requires_mne and not os.environ.get('MNE_ROOT'):
+        raise RuntimeError('I cannot find the MNE command line tools. The '
+                           'MNE_ROOT environment variable is not set.')
+
     if not isinstance(subject, string_types):
         raise TypeError('The subject argument must be set')
+
+    subjects_dir = get_subjects_dir(subjects_dir, raise_error=True)
+
     env['SUBJECT'] = subject
     env['SUBJECTS_DIR'] = subjects_dir
     mri_dir = op.join(subjects_dir, subject, 'mri')
@@ -1423,7 +1447,10 @@ def convert_flash_mris(subject, flash30=True, convert=True, unwarp=False,
     has been completed. In particular, the T1.mgz and brain.mgz MRI volumes
     should be, as usual, in the subject's mri directory.
     """
-    env, mri_dir = _prepare_env(subject, subjects_dir)[:2]
+    env, mri_dir = _prepare_env(subject, subjects_dir,
+                                requires_freesurfer=True,
+                                requires_mne=False)[:2]
+    curdir = os.getcwd()
     # Step 1a : Data conversion to mgz format
     if not op.exists(op.join(mri_dir, 'flash', 'parameter_maps')):
         os.makedirs(op.join(mri_dir, 'flash', 'parameter_maps'))
@@ -1513,6 +1540,9 @@ def convert_flash_mris(subject, flash30=True, convert=True, unwarp=False,
         if op.exists('flash5_reg.mgz'):
             os.remove('flash5_reg.mgz')
 
+    # Go back to initial directory
+    os.chdir(curdir)
+
 
 @verbose
 def make_flash_bem(subject, overwrite=False, show=True, subjects_dir=None,
@@ -1549,7 +1579,11 @@ def make_flash_bem(subject, overwrite=False, show=True, subjects_dir=None,
     convert_flash_mris
     """
     from .viz.misc import plot_bem
-    env, mri_dir, bem_dir = _prepare_env(subject, subjects_dir)
+    env, mri_dir, bem_dir = _prepare_env(subject, subjects_dir,
+                                         requires_freesurfer=True,
+                                         requires_mne=True)
+
+    curdir = os.getcwd()
 
     logger.info('\nProcessing the flash MRI data to produce BEM meshes with '
                 'the following parameters:\n'
@@ -1658,3 +1692,6 @@ def make_flash_bem(subject, overwrite=False, show=True, subjects_dir=None,
     if show:
         plot_bem(subject=subject, subjects_dir=subjects_dir,
                  orientation='coronal', slices=None, show=True)
+
+    # Go back to initial directory
+    os.chdir(curdir)
diff --git a/mne/channels/channels.py b/mne/channels/channels.py
index 514930d..0c70bb7 100644
--- a/mne/channels/channels.py
+++ b/mne/channels/channels.py
@@ -56,7 +56,7 @@ def _contains_ch_type(info, ch_type):
 
     Parameters
     ---------
-    info : instance of mne.io.meas_info.Info
+    info : instance of mne.io.Info
         The measurement information.
     ch_type : str
         the channel type to be checked for
@@ -106,7 +106,7 @@ def equalize_channels(candidates, verbose=None):
     Parameters
     ----------
     candidates : list
-        list Raw | Epochs | Evoked.
+        list Raw | Epochs | Evoked | AverageTFR
     verbose : None | bool
         whether to be verbose or not.
 
@@ -155,6 +155,44 @@ class ContainsMixin(object):
         return has_ch_type
 
 
+_human2fiff = {'ecg': FIFF.FIFFV_ECG_CH,
+               'eeg': FIFF.FIFFV_EEG_CH,
+               'emg': FIFF.FIFFV_EMG_CH,
+               'eog': FIFF.FIFFV_EOG_CH,
+               'exci': FIFF.FIFFV_EXCI_CH,
+               'ias': FIFF.FIFFV_IAS_CH,
+               'misc': FIFF.FIFFV_MISC_CH,
+               'resp': FIFF.FIFFV_RESP_CH,
+               'seeg': FIFF.FIFFV_SEEG_CH,
+               'stim': FIFF.FIFFV_STIM_CH,
+               'syst': FIFF.FIFFV_SYST_CH}
+_human2unit = {'ecg': FIFF.FIFF_UNIT_V,
+               'eeg': FIFF.FIFF_UNIT_V,
+               'emg': FIFF.FIFF_UNIT_V,
+               'eog': FIFF.FIFF_UNIT_V,
+               'exci': FIFF.FIFF_UNIT_NONE,
+               'ias': FIFF.FIFF_UNIT_NONE,
+               'misc': FIFF.FIFF_UNIT_V,
+               'resp': FIFF.FIFF_UNIT_NONE,
+               'seeg': FIFF.FIFF_UNIT_V,
+               'stim': FIFF.FIFF_UNIT_NONE,
+               'syst': FIFF.FIFF_UNIT_NONE}
+_unit2human = {FIFF.FIFF_UNIT_V: 'V',
+               FIFF.FIFF_UNIT_NONE: 'NA'}
+
+
+def _check_set(ch, projs, ch_type):
+    """Helper to make sure type change is compatible with projectors"""
+    new_kind = _human2fiff[ch_type]
+    if ch['kind'] != new_kind:
+        for proj in projs:
+            if ch['ch_name'] in proj['data']['col_names']:
+                raise RuntimeError('Cannot change channel type for channel %s '
+                                   'in projector "%s"'
+                                   % (ch['ch_name'], proj['desc']))
+    ch['kind'] = new_kind
+
+
 class SetChannelsMixin(object):
     """Mixin class for Raw, Evoked, Epochs
     """
@@ -228,32 +266,6 @@ class SetChannelsMixin(object):
         -----
         .. versionadded:: 0.9.0
         """
-        human2fiff = {'ecg': FIFF.FIFFV_ECG_CH,
-                      'eeg': FIFF.FIFFV_EEG_CH,
-                      'emg': FIFF.FIFFV_EMG_CH,
-                      'eog': FIFF.FIFFV_EOG_CH,
-                      'exci': FIFF.FIFFV_EXCI_CH,
-                      'ias': FIFF.FIFFV_IAS_CH,
-                      'misc': FIFF.FIFFV_MISC_CH,
-                      'resp': FIFF.FIFFV_RESP_CH,
-                      'seeg': FIFF.FIFFV_SEEG_CH,
-                      'stim': FIFF.FIFFV_STIM_CH,
-                      'syst': FIFF.FIFFV_SYST_CH}
-
-        human2unit = {'ecg': FIFF.FIFF_UNIT_V,
-                      'eeg': FIFF.FIFF_UNIT_V,
-                      'emg': FIFF.FIFF_UNIT_V,
-                      'eog': FIFF.FIFF_UNIT_V,
-                      'exci': FIFF.FIFF_UNIT_NONE,
-                      'ias': FIFF.FIFF_UNIT_NONE,
-                      'misc': FIFF.FIFF_UNIT_V,
-                      'resp': FIFF.FIFF_UNIT_NONE,
-                      'seeg': FIFF.FIFF_UNIT_V,
-                      'stim': FIFF.FIFF_UNIT_NONE,
-                      'syst': FIFF.FIFF_UNIT_NONE}
-
-        unit2human = {FIFF.FIFF_UNIT_V: 'V',
-                      FIFF.FIFF_UNIT_NONE: 'NA'}
         ch_names = self.info['ch_names']
 
         # first check and assemble clean mappings of index and name
@@ -263,21 +275,22 @@ class SetChannelsMixin(object):
                                  "info." % ch_name)
 
             c_ind = ch_names.index(ch_name)
-            if ch_type not in human2fiff:
+            if ch_type not in _human2fiff:
                 raise ValueError('This function cannot change to this '
                                  'channel type: %s. Accepted channel types '
-                                 'are %s.' % (ch_type,
-                                              ", ".join(human2unit.keys())))
+                                 'are %s.'
+                                 % (ch_type,
+                                    ", ".join(sorted(_human2unit.keys()))))
             # Set sensor type
-            self.info['chs'][c_ind]['kind'] = human2fiff[ch_type]
+            _check_set(self.info['chs'][c_ind], self.info['projs'], ch_type)
             unit_old = self.info['chs'][c_ind]['unit']
-            unit_new = human2unit[ch_type]
-            if unit_old != human2unit[ch_type]:
+            unit_new = _human2unit[ch_type]
+            if unit_old != _human2unit[ch_type]:
                 warnings.warn("The unit for Channel %s has changed "
                               "from %s to %s." % (ch_name,
-                                                  unit2human[unit_old],
-                                                  unit2human[unit_new]))
-            self.info['chs'][c_ind]['unit'] = human2unit[ch_type]
+                                                  _unit2human[unit_old],
+                                                  _unit2human[unit_new]))
+            self.info['chs'][c_ind]['unit'] = _human2unit[ch_type]
             if ch_type in ['eeg', 'seeg']:
                 self.info['chs'][c_ind]['coil_type'] = FIFF.FIFFV_COIL_EEG
             else:
@@ -494,9 +507,8 @@ class UpdateChannelsMixin(object):
             object if copy==False)
         """
         # avoid circular imports
-        from ..io.base import _BaseRaw
+        from ..io import _BaseRaw, _merge_info
         from ..epochs import _BaseEpochs
-        from ..io.meas_info import _merge_info
 
         if not isinstance(add_list, (list, tuple)):
             raise AssertionError('Input must be a list or tuple of objs')
@@ -747,7 +759,7 @@ def _ch_neighbor_connectivity(ch_names, neighbors):
 
 
 def fix_mag_coil_types(info):
-    """Fix Elekta magnetometer coil types
+    """Fix magnetometer coil types
 
     Parameters
     ----------
@@ -774,10 +786,22 @@ def fix_mag_coil_types(info):
               current estimates computed by the MNE software is very small.
               Therefore the use of mne_fix_mag_coil_types is not mandatory.
     """
+    old_mag_inds = _get_T1T2_mag_inds(info)
+
+    for ii in old_mag_inds:
+        info['chs'][ii]['coil_type'] = FIFF.FIFFV_COIL_VV_MAG_T3
+    logger.info('%d of %d T1/T2 magnetometer types replaced with T3.' %
+                (len(old_mag_inds), len(pick_types(info, meg='mag'))))
+    info._check_consistency()
+
+
+def _get_T1T2_mag_inds(info):
+    """Helper to find T1/T2 magnetometer coil types"""
     picks = pick_types(info, meg='mag')
+    old_mag_inds = []
     for ii in picks:
         ch = info['chs'][ii]
         if ch['coil_type'] in (FIFF.FIFFV_COIL_VV_MAG_T1,
                                FIFF.FIFFV_COIL_VV_MAG_T2):
-            ch['coil_type'] = FIFF.FIFFV_COIL_VV_MAG_T3
-    info._check_consistency()
+            old_mag_inds.append(ii)
+    return old_mag_inds
diff --git a/mne/channels/interpolation.py b/mne/channels/interpolation.py
index 0b355a4..d9544fd 100644
--- a/mne/channels/interpolation.py
+++ b/mne/channels/interpolation.py
@@ -7,7 +7,7 @@ from numpy.polynomial.legendre import legval
 from scipy import linalg
 
 from ..utils import logger
-from ..io.pick import pick_types, pick_channels
+from ..io.pick import pick_types, pick_channels, pick_info
 from ..surface import _normalize_vectors
 from ..bem import _fit_sphere
 from ..forward import _map_meg_channels
@@ -201,7 +201,7 @@ def _interpolate_bads_meg(inst, mode='accurate', verbose=None):
     # return without doing anything if there are no meg channels
     if len(picks_meg) == 0 or len(picks_bad) == 0:
         return
-
-    mapping = _map_meg_channels(inst, picks_good, picks_bad, mode=mode)
-
+    info_from = pick_info(inst.info, picks_good, copy=True)
+    info_to = pick_info(inst.info, picks_bad, copy=True)
+    mapping = _map_meg_channels(info_from, info_to, mode=mode)
     _do_interp_dots(inst, mapping, picks_good, picks_bad)
diff --git a/mne/channels/layout.py b/mne/channels/layout.py
index fb21ac8..318bd95 100644
--- a/mne/channels/layout.py
+++ b/mne/channels/layout.py
@@ -350,8 +350,8 @@ def find_layout(info, ch_type=None, exclude='bads'):
         VectorView type layout. Use `meg` to force using the full layout
         in situations where the info does only contain one sensor type.
     exclude : list of string | str
-        List of channels to exclude. If empty do not exclude any (default).
-        If 'bads', exclude channels in info['bads'].
+        List of channels to exclude. If empty do not exclude any.
+        If 'bads', exclude channels in info['bads'] (default).
 
     Returns
     -------
@@ -573,7 +573,7 @@ def _auto_topomap_coords(info, picks):
     locs : array, shape = (n_sensors, 2)
         An array of positions of the 2 dimensional map.
     """
-    from scipy.spatial.distance import pdist
+    from scipy.spatial.distance import pdist, squareform
 
     chs = [info['chs'][i] for i in picks]
 
@@ -631,8 +631,16 @@ def _auto_topomap_coords(info, picks):
         locs3d = np.array([eeg_ch_locs[ch['ch_name']] for ch in chs])
 
     # Duplicate points cause all kinds of trouble during visualization
-    if np.min(pdist(locs3d)) < 1e-10:
-        raise ValueError('Electrode positions must be unique.')
+    dist = pdist(locs3d)
+    if np.min(dist) < 1e-10:
+        problematic_electrodes = [
+            info['ch_names'][elec_i]
+            for elec_i in squareform(dist < 1e-10).any(axis=0).nonzero()[0]
+        ]
+
+        raise ValueError('The following electrodes have overlapping positions:'
+                         '\n    ' + str(problematic_electrodes) + '\nThis '
+                         'causes problems during visualization.')
 
     x, y, z = locs3d.T
     az, el, r = _cartesian_to_sphere(x, y, z)
diff --git a/mne/channels/montage.py b/mne/channels/montage.py
index b3ac08d..67bed71 100644
--- a/mne/channels/montage.py
+++ b/mne/channels/montage.py
@@ -11,14 +11,17 @@
 
 import os
 import os.path as op
+import warnings
 
 import numpy as np
 
 from ..viz import plot_montage
 from .channels import _contains_ch_type
 from ..transforms import (_sphere_to_cartesian, apply_trans,
-                          get_ras_to_neuromag_trans)
+                          get_ras_to_neuromag_trans, _topo_to_sphere)
 from ..io.meas_info import _make_dig_points, _read_dig_points
+from ..io.pick import pick_types
+from ..io.constants import FIFF
 from ..externals.six import string_types
 from ..externals.six.moves import map
 
@@ -51,8 +54,8 @@ class Montage(object):
         self.selection = selection
 
     def __repr__(self):
-        s = '<Montage | %s - %d Channels: %s ...>'
-        s %= self.kind, len(self.ch_names), ', '.join(self.ch_names[:3])
+        s = ('<Montage | %s - %d channels: %s ...>'
+             % (self.kind, len(self.ch_names), ', '.join(self.ch_names[:3])))
         return s
 
     def plot(self, scale_factor=1.5, show_names=False):
@@ -127,7 +130,7 @@ def read_montage(kind, ch_names=None, path=None, unit='m', transform=False):
     kind : str
         The name of the montage file (e.g. kind='easycap-M10' for
         'easycap-M10.txt'). Files with extensions '.elc', '.txt', '.csd',
-        '.elp', '.hpts' or '.sfp' are supported.
+        '.elp', '.hpts', '.sfp' or '.loc' ('.locs' and '.eloc') are supported.
     ch_names : list of str | None
         If not all electrodes defined in the montage are present in the EEG
         data, use this parameter to select subset of electrode positions to
@@ -153,13 +156,17 @@ def read_montage(kind, ch_names=None, path=None, unit='m', transform=False):
     -----
     Built-in montages are not scaled or transformed by default.
 
+    Montages can contain fiducial points in addition to electrode
+    locations, e.g. ``biosemi-64`` contains 67 total channels.
+
     .. versionadded:: 0.9.0
     """
 
     if path is None:
         path = op.join(op.dirname(__file__), 'data', 'montages')
     if not op.isabs(kind):
-        supported = ('.elc', '.txt', '.csd', '.sfp', '.elp', '.hpts')
+        supported = ('.elc', '.txt', '.csd', '.sfp', '.elp', '.hpts', '.loc',
+                     '.locs', '.eloc')
         montages = [op.splitext(f) for f in os.listdir(path)]
         montages = [m for m in montages if m[1] in supported and kind == m[0]]
         if len(montages) != 1:
@@ -254,6 +261,20 @@ def read_montage(kind, ch_names=None, path=None, unit='m', transform=False):
         data = np.loadtxt(fname, dtype=dtype)
         pos = np.vstack((data['x'], data['y'], data['z'])).T
         ch_names_ = data['name'].astype(np.str)
+    elif ext in ('.loc', '.locs', '.eloc'):
+        ch_names_ = np.loadtxt(fname, dtype='S4', usecols=[3]).tolist()
+        dtype = {'names': ('angle', 'radius'), 'formats': ('f4', 'f4')}
+        angle, radius = np.loadtxt(fname, dtype=dtype, usecols=[1, 2],
+                                   unpack=True)
+
+        sph_phi, sph_theta = _topo_to_sphere(angle, radius)
+
+        azimuth = sph_theta / 180.0 * np.pi
+        elevation = sph_phi / 180.0 * np.pi
+        r = np.ones((len(ch_names_), ))
+
+        x, y, z = _sphere_to_cartesian(azimuth, elevation, r)
+        pos = np.c_[-y, x, z]
     else:
         raise ValueError('Currently the "%s" template is not supported.' %
                          kind)
@@ -307,23 +328,22 @@ class DigMontage(object):
     Parameters
     ----------
     hsp : array, shape (n_points, 3)
-        The positions of the channels in 3d.
+        The positions of the headshape points in 3d.
+        These points are in the native digitizer space.
     hpi : array, shape (n_hpi, 3)
         The positions of the head-position indicator coils in 3d.
         These points are in the MEG device space.
     elp : array, shape (n_hpi, 3)
         The positions of the head-position indicator coils in 3d.
-        This is typically in the acquisition digitizer space.
+        This is typically in the native digitizer space.
     point_names : list, shape (n_elp)
         The names of the digitized points for hpi and elp.
     nasion : array, shape (1, 3)
-        The position of the nasion fidicual point in the RAS head space.
+        The position of the nasion fiducial point.
     lpa : array, shape (1, 3)
-        The position of the left periauricular fidicual point in
-        the RAS head space.
+        The position of the left periauricular fiducial point.
     rpa : array, shape (1, 3)
-        The position of the right periauricular fidicual point in
-        the RAS head space.
+        The position of the right periauricular fiducial point.
     dev_head_t : array, shape (4, 4)
         A Device-to-Head transformation matrix.
 
@@ -374,7 +394,7 @@ class DigMontage(object):
 
 def read_dig_montage(hsp=None, hpi=None, elp=None, point_names=None,
                      unit='mm', transform=True, dev_head_t=False):
-    """Read montage from a file
+    """Read digitization data from a file and generate a DigMontage
 
     Parameters
     ----------
@@ -382,17 +402,19 @@ def read_dig_montage(hsp=None, hpi=None, elp=None, point_names=None,
         If str, this corresponds to the filename of the headshape points.
         This is typically used with the Polhemus FastSCAN system.
         If numpy.array, this corresponds to an array of positions of the
-        channels in 3d.
+        headshape points in 3d. These points are in the native
+        digitizer space.
     hpi : None | str | array, shape (n_hpi, 3)
-        If str, this corresponds to the filename of hpi points. If numpy.array,
-        this corresponds to an array hpi points. These points are in
-        device space.
+        If str, this corresponds to the filename of Head Position Indicator
+        (HPI) points. If numpy.array, this corresponds to an array
+        of HPI points. These points are in device space.
     elp : None | str | array, shape (n_fids + n_hpi, 3)
-        If str, this corresponds to the filename of hpi points.
-        This is typically used with the Polhemus FastSCAN system.
-        If numpy.array, this corresponds to an array hpi points. These points
-        are in head space. Fiducials should be listed first, then the points
-        corresponding to the hpi.
+        If str, this corresponds to the filename of electrode position
+        points. This is typically used with the Polhemus FastSCAN system.
+        Fiducials should be listed first: nasion, left periauricular point,
+        right periauricular point, then the points corresponding to the HPI.
+        These points are in the native digitizer space.
+        If numpy.array, this corresponds to an array of fids + HPI points.
     point_names : None | list
         If list, this corresponds to a list of point names. This must be
         specified if elp is defined.
@@ -468,6 +490,13 @@ def read_dig_montage(hsp=None, hpi=None, elp=None, point_names=None,
         nasion = elp[names_lower.index('nasion')]
         lpa = elp[names_lower.index('lpa')]
         rpa = elp[names_lower.index('rpa')]
+
+        # remove fiducials from elp
+        mask = np.ones(len(names_lower), dtype=bool)
+        for fid in ['nasion', 'lpa', 'rpa']:
+            mask[names_lower.index(fid)] = False
+        elp = elp[mask]
+
         neuromag_trans = get_ras_to_neuromag_trans(nasion, lpa, rpa)
 
         fids = np.array([nasion, lpa, rpa])
@@ -478,7 +507,7 @@ def read_dig_montage(hsp=None, hpi=None, elp=None, point_names=None,
         fids = [None] * 3
     if dev_head_t:
         from ..coreg import fit_matched_points
-        trans = fit_matched_points(tgt_pts=elp[3:], src_pts=hpi, out='trans')
+        trans = fit_matched_points(tgt_pts=elp, src_pts=hpi, out='trans')
     else:
         trans = np.identity(4)
 
@@ -486,7 +515,7 @@ def read_dig_montage(hsp=None, hpi=None, elp=None, point_names=None,
                       trans)
 
 
-def _set_montage(info, montage):
+def _set_montage(info, montage, update_ch_names=False):
     """Apply montage to data.
 
     With a Montage, this function will replace the EEG channel names and
@@ -495,20 +524,38 @@ def _set_montage(info, montage):
     With a DigMontage, this function will replace the digitizer info with
     the values specified for the particular montage.
 
-    Note: This function will change the info variable in place.
+    Usually, a montage is expected to contain the positions of all EEG
+    electrodes and a warning is raised when this is not the case.
 
     Parameters
     ----------
     info : instance of Info
         The measurement info to update.
-    montage : instance of Montage
+    montage : instance of Montage | instance of DigMontage
         The montage to apply.
+    update_ch_names : bool
+        If True, overwrite the info channel names with the ones from montage.
+
+    Notes
+    -----
+    This function will change the info variable in place.
     """
     if isinstance(montage, Montage):
+        if update_ch_names:
+            info['ch_names'] = montage.ch_names
+            info['chs'] = list()
+            for ii, ch_name in enumerate(montage.ch_names):
+                ch_info = {'cal': 1., 'logno': ii + 1, 'scanno': ii + 1,
+                           'range': 1.0, 'unit_mul': 0, 'ch_name': ch_name,
+                           'unit': FIFF.FIFF_UNIT_V, 'kind': FIFF.FIFFV_EEG_CH,
+                           'coord_frame': FIFF.FIFFV_COORD_HEAD,
+                           'coil_type': FIFF.FIFFV_COIL_EEG}
+                info['chs'].append(ch_info)
+
         if not _contains_ch_type(info, 'eeg'):
             raise ValueError('No EEG channels found.')
 
-        sensors_found = False
+        sensors_found = []
         for pos, ch_name in zip(montage.pos, montage.ch_names):
             if ch_name not in info['ch_names']:
                 continue
@@ -516,12 +563,23 @@ def _set_montage(info, montage):
             ch_idx = info['ch_names'].index(ch_name)
             info['ch_names'][ch_idx] = ch_name
             info['chs'][ch_idx]['loc'] = np.r_[pos, [0.] * 9]
-            sensors_found = True
+            sensors_found.append(ch_idx)
 
-        if not sensors_found:
+        if len(sensors_found) == 0:
             raise ValueError('None of the sensors defined in the montage were '
                              'found in the info structure. Check the channel '
                              'names.')
+
+        eeg_sensors = pick_types(info, meg=False, ref_meg=False, eeg=True,
+                                 exclude=[])
+        not_found = np.setdiff1d(eeg_sensors, sensors_found)
+        if len(not_found) > 0:
+            not_found_names = [info['ch_names'][ch] for ch in not_found]
+            warnings.warn('The following EEG sensors did not have a position '
+                          'specified in the selected montage: ' +
+                          str(not_found_names) + '. Their position has been '
+                          'left untouched.')
+
     elif isinstance(montage, DigMontage):
         dig = _make_dig_points(nasion=montage.nasion, lpa=montage.lpa,
                                rpa=montage.rpa, hpi=montage.hpi,
diff --git a/mne/channels/tests/test_channels.py b/mne/channels/tests/test_channels.py
index 3a37858..a02ebc9 100644
--- a/mne/channels/tests/test_channels.py
+++ b/mne/channels/tests/test_channels.py
@@ -71,10 +71,12 @@ def test_set_channel_types():
     # Test change to illegal channel type
     mapping = {'EOG 061': 'xxx'}
     assert_raises(ValueError, raw.set_channel_types, mapping)
+    # Test changing type if in proj (avg eeg ref here)
+    mapping = {'EEG 060': 'eog', 'EEG 059': 'ecg', 'EOG 061': 'seeg'}
+    assert_raises(RuntimeError, raw.set_channel_types, mapping)
     # Test type change
-    raw2 = Raw(raw_fname)
+    raw2 = Raw(raw_fname, add_eeg_ref=False)
     raw2.info['bads'] = ['EEG 059', 'EEG 060', 'EOG 061']
-    mapping = {'EEG 060': 'eog', 'EEG 059': 'ecg', 'EOG 061': 'seeg'}
     raw2.set_channel_types(mapping)
     info = raw2.info
     assert_true(info['chs'][374]['ch_name'] == 'EEG 060')
diff --git a/mne/channels/tests/test_layout.py b/mne/channels/tests/test_layout.py
index ccc388d..5166325 100644
--- a/mne/channels/tests/test_layout.py
+++ b/mne/channels/tests/test_layout.py
@@ -20,8 +20,7 @@ from mne.channels.layout import (_box_size, _auto_topomap_coords,
                                  generate_2d_layout)
 from mne.utils import run_tests_if_main
 from mne import pick_types, pick_info
-from mne.io import Raw, read_raw_kit
-from mne.io.meas_info import _empty_info
+from mne.io import Raw, read_raw_kit, _empty_info
 from mne.io.constants import FIFF
 from mne.preprocessing.maxfilter import fit_sphere_to_headshape
 from mne.utils import _TempDir
@@ -43,7 +42,7 @@ fname_ctf_raw = op.join(op.dirname(__file__), '..', '..', 'io', 'tests',
 fname_kit_157 = op.join(op.dirname(__file__), '..', '..',  'io', 'kit',
                         'tests', 'data', 'test.sqd')
 
-test_info = _empty_info()
+test_info = _empty_info(1000)
 test_info.update({
     'ch_names': ['ICA 001', 'ICA 002', 'EOG 061'],
     'chs': [{'cal': 1,
@@ -353,7 +352,7 @@ def test_generate_2d_layout():
     snobg = 10
     sbg = 15
     side = range(snobg)
-    bg_image = np.random.randn(sbg, sbg)
+    bg_image = np.random.RandomState(42).randn(sbg, sbg)
     w, h = [.2, .5]
 
     # Generate fake data
diff --git a/mne/channels/tests/test_montage.py b/mne/channels/tests/test_montage.py
index 23da88f..1b27bfc 100644
--- a/mne/channels/tests/test_montage.py
+++ b/mne/channels/tests/test_montage.py
@@ -3,8 +3,9 @@
 # License: BSD (3-clause)
 
 import os.path as op
+import warnings
 
-from nose.tools import assert_equal
+from nose.tools import assert_equal, assert_true
 
 import numpy as np
 from numpy.testing import (assert_array_equal, assert_almost_equal,
@@ -138,6 +139,13 @@ def test_montage():
     assert_array_equal(pos3, montage.pos)
     assert_equal(montage.ch_names, evoked.info['ch_names'])
 
+    # Warning should be raised when some EEG are not specified in the montage
+    with warnings.catch_warnings(record=True) as w:
+        info = create_info(montage.ch_names + ['foo', 'bar'], 1e3,
+                           ['eeg'] * (len(montage.ch_names) + 2))
+        _set_montage(info, montage)
+        assert_true(len(w) == 1)
+
 
 def test_read_dig_montage():
     """Test read_dig_montage"""
@@ -155,16 +163,13 @@ def test_read_dig_montage():
                                transform=True, dev_head_t=True)
     # check coordinate transformation
     # nasion
-    assert_almost_equal(montage.elp[0, 0], 0)
-    assert_almost_equal(montage.nasion[0], 0)
-    assert_almost_equal(montage.elp[0, 2], 0)
     assert_almost_equal(montage.nasion[0], 0)
+    assert_almost_equal(montage.nasion[2], 0)
     # lpa and rpa
-    assert_allclose(montage.elp[1:3, 1:], 0, atol=1e-16)
     assert_allclose(montage.lpa[1:], 0, atol=1e-16)
     assert_allclose(montage.rpa[1:], 0, atol=1e-16)
     # device head transform
-    dev_head_t = fit_matched_points(tgt_pts=montage.elp[3:],
+    dev_head_t = fit_matched_points(tgt_pts=montage.elp,
                                     src_pts=montage.hpi, out='trans')
     assert_array_equal(montage.dev_head_t, dev_head_t)
 
diff --git a/mne/chpi.py b/mne/chpi.py
index 13e4bf3..481bc79 100644
--- a/mne/chpi.py
+++ b/mne/chpi.py
@@ -22,7 +22,7 @@ from .externals.six import string_types
 # Reading from text or FIF file
 
 @verbose
-def get_chpi_positions(raw, t_step=None, verbose=None):
+def get_chpi_positions(raw, t_step=None, return_quat=False, verbose=None):
     """Extract head positions
 
     Note that the raw instance must have CHPI channels recorded.
@@ -38,6 +38,11 @@ def get_chpi_positions(raw, t_step=None, verbose=None):
         1 second is used if processing a raw data. If processing a
         Maxfilter log file, this must be None because the log file
         itself will determine the sampling interval.
+    return_quat : bool
+        If True, also return the quaternions.
+
+        .. versionadded:: 0.11
+
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -49,6 +54,8 @@ def get_chpi_positions(raw, t_step=None, verbose=None):
         Rotations at each time point.
     t : ndarray, shape (N,)
         The time points.
+    quat : ndarray, shape (N, 3)
+        The quaternions. Only returned if ``return_quat`` is True.
 
     Notes
     -----
@@ -82,7 +89,10 @@ def get_chpi_positions(raw, t_step=None, verbose=None):
         if t_step is not None:
             raise ValueError('t_step must be None if processing a log')
         data = np.loadtxt(raw, skiprows=1)  # first line is header, skip it
-    return _quats_to_trans_rot_t(data)
+    out = _quats_to_trans_rot_t(data)
+    if return_quat:
+        out = out + (data[:, 1:4],)
+    return out
 
 
 def _quats_to_trans_rot_t(quats):
@@ -135,13 +145,42 @@ def _quat_to_rot(q):
     return rotation
 
 
+def _one_rot_to_quat(rot):
+    """Convert a rotation matrix to quaternions"""
+    # see e.g. http://www.euclideanspace.com/maths/geometry/rotations/
+    #                 conversions/matrixToQuaternion/
+    t = 1. + rot[0] + rot[4] + rot[8]
+    if t > np.finfo(rot.dtype).eps:
+        s = np.sqrt(t) * 2.
+        qx = (rot[7] - rot[5]) / s
+        qy = (rot[2] - rot[6]) / s
+        qz = (rot[3] - rot[1]) / s
+        # qw = 0.25 * s
+    elif rot[0] > rot[4] and rot[0] > rot[8]:
+        s = np.sqrt(1. + rot[0] - rot[4] - rot[8]) * 2.
+        qx = 0.25 * s
+        qy = (rot[1] + rot[3]) / s
+        qz = (rot[2] + rot[6]) / s
+        # qw = (rot[7] - rot[5]) / s
+    elif rot[4] > rot[8]:
+        s = np.sqrt(1. - rot[0] + rot[4] - rot[8]) * 2
+        qx = (rot[1] + rot[3]) / s
+        qy = 0.25 * s
+        qz = (rot[5] + rot[7]) / s
+        # qw = (rot[2] - rot[6]) / s
+    else:
+        s = np.sqrt(1. - rot[0] - rot[4] + rot[8]) * 2.
+        qx = (rot[2] + rot[6]) / s
+        qy = (rot[5] + rot[7]) / s
+        qz = 0.25 * s
+        # qw = (rot[3] - rot[1]) / s
+    return qx, qy, qz
+
+
 def _rot_to_quat(rot):
-    """Here we derive qw from qx, qy, qz"""
-    qw_4 = np.sqrt(1 + rot[..., 0, 0] + rot[..., 1, 1] + rot[..., 2, 2]) * 2
-    qx = (rot[..., 2, 1] - rot[..., 1, 2]) / qw_4
-    qy = (rot[..., 0, 2] - rot[..., 2, 0]) / qw_4
-    qz = (rot[..., 1, 0] - rot[..., 0, 1]) / qw_4
-    return np.rollaxis(np.array((qx, qy, qz)), 0, rot.ndim - 1)
+    """Convert a set of rotations to quaternions"""
+    rot = rot.reshape(rot.shape[:-2] + (9,))
+    return np.apply_along_axis(_one_rot_to_quat, -1, rot)
 
 
 # ############################################################################
diff --git a/mne/commands/mne_flash_bem_model.py b/mne/commands/mne_flash_bem_model.py
deleted file mode 100755
index 2cd6580..0000000
--- a/mne/commands/mne_flash_bem_model.py
+++ /dev/null
@@ -1,145 +0,0 @@
-#!/usr/bin/env python
-"""Create 3-Layers BEM model from Flash MRI images
-
-This function extracts the BEM surfaces (outer skull, inner skull, and
-outer skin) from multiecho FLASH MRI data with spin angles of 5 and 30
-degrees. The multiecho FLASH data are inputted in NIFTI format.
-It was developed to work for Phillips MRI data, but could probably be
-used for data from other scanners that have been converted to NIFTI format
-(e.g., using MRIcron's dcm2nii). However,it has been tested only for
-data from the Achieva scanner). This function assumes that the Freesurfer
-segmentation of the subject has been completed. In particular, the T1.mgz
-and brain.mgz MRI volumes should be, as usual, in the subject's mri
-directory.
-
-"""
-from __future__ import print_function
-
-# Authors:  Rey Rene Ramirez, Ph.D.   e-mail: rrramir at uw.edu
-#           Alexandre Gramfort, Ph.D.
-
-import sys
-import math
-import os
-
-import mne
-from mne.utils import deprecated
-
-
- at deprecated("This function is deprecated, use mne_flash_bem instead")
-def make_flash_bem(subject, subjects_dir, flash05, flash30, show=False):
-    """Create 3-Layers BEM model from Flash MRI images
-
-    Parameters
-    ----------
-    subject : string
-        Subject name
-    subjects_dir : string
-        Directory containing subjects data (Freesurfer SUBJECTS_DIR)
-    flash05 : string
-        Full path of the NIFTI file for the
-        FLASH sequence with a spin angle of 5 degrees
-    flash30 : string
-        Full path of the NIFTI file for the
-        FLASH sequence with a spin angle of 30 degrees
-    show : bool
-        Show surfaces in 3D to visually inspect all three BEM
-        surfaces (recommended)
-
-    Notes
-    -----
-    This program assumes that both Freesurfer/FSL, and MNE,
-    including MNE's Matlab Toolbox, are installed properly.
-    For reference please read the MNE manual and wiki, and Freesurfer's wiki:
-    http://www.nmr.mgh.harvard.edu/meg/manuals/
-    http://www.nmr.mgh.harvard.edu/martinos/userInfo/data/sofMNE.php
-    http://www.nmr.mgh.harvard.edu/martinos/userInfo/data/MNE_register/index.php
-    http://surfer.nmr.mgh.harvard.edu/
-    http://surfer.nmr.mgh.harvard.edu/fswiki
-
-    References:
-    B. Fischl, D. H. Salat, A. J. van der Kouwe, N. Makris, F. Segonne,
-    B. T. Quinn, and A. M. Dale, "Sequence-independent segmentation of magnetic
-    resonance images," Neuroimage, vol. 23 Suppl 1, pp. S69-84, 2004.
-    J. Jovicich, S. Czanner, D. Greve, E. Haley, A. van der Kouwe, R. Gollub,
-    D. Kennedy, F. Schmitt, G. Brown, J. Macfall, B. Fischl, and A. Dale,
-    "Reliability in multi-site structural MRI studies: effects of gradient
-    non-linearity correction on phantom and human data," Neuroimage,
-    vol. 30, Epp. 436-43, 2006.
-    """
-    os.environ['SUBJECT'] = subject
-    os.chdir(os.path.join(subjects_dir, subject, "mri"))
-    if not os.path.exists('flash'):
-        os.mkdir("flash")
-    os.chdir("flash")
-    # flash_dir = os.getcwd()
-    if not os.path.exists('parameter_maps'):
-        os.mkdir("parameter_maps")
-    print("--- Converting Flash 5")
-    os.system('mri_convert -flip_angle %s -tr 25 %s mef05.mgz' %
-              (5 * math.pi / 180, flash05))
-    print("--- Converting Flash 30")
-    os.system('mri_convert -flip_angle %s -tr 25 %s mef30.mgz' %
-              (30 * math.pi / 180, flash30))
-    print("--- Running mne_flash_bem")
-    os.system('mne_flash_bem --noconvert')
-    os.chdir(os.path.join(subjects_dir, subject, 'bem'))
-    if not os.path.exists('flash'):
-        os.mkdir("flash")
-    os.chdir("flash")
-    print("[done]")
-
-    if show:
-        fnames = ['outer_skin.surf', 'outer_skull.surf', 'inner_skull.surf']
-        head_col = (0.95, 0.83, 0.83)  # light pink
-        skull_col = (0.91, 0.89, 0.67)
-        brain_col = (0.67, 0.89, 0.91)  # light blue
-        colors = [head_col, skull_col, brain_col]
-        from mayavi import mlab
-        mlab.clf()
-        for fname, c in zip(fnames, colors):
-            points, faces = mne.read_surface(fname)
-            mlab.triangular_mesh(points[:, 0], points[:, 1], points[:, 2],
-                                 faces, color=c, opacity=0.3)
-        mlab.show()
-
-
-def run():
-    from mne.commands.utils import get_optparser
-
-    parser = get_optparser(__file__)
-
-    subject = os.environ.get('SUBJECT')
-    subjects_dir = os.environ.get('SUBJECTS_DIR')
-
-    parser.add_option("-s", "--subject", dest="subject",
-                      help="Subject name", default=subject)
-    parser.add_option("-d", "--subjects-dir", dest="subjects_dir",
-                      help="Subjects directory", default=subjects_dir)
-    parser.add_option("-5", "--flash05", dest="flash05",
-                      help=("Path to FLASH sequence with a spin angle of 5 "
-                            "degrees in Nifti format"), metavar="FILE")
-    parser.add_option("-3", "--flash30", dest="flash30",
-                      help=("Path to FLASH sequence with a spin angle of 30 "
-                            "degrees in Nifti format"), metavar="FILE")
-    parser.add_option("-v", "--view", dest="show", action="store_true",
-                      help="Show BEM model in 3D for visual inspection",
-                      default=False)
-
-    options, args = parser.parse_args()
-
-    if options.flash05 is None or options.flash30 is None:
-        parser.print_help()
-        sys.exit(1)
-
-    subject = options.subject
-    subjects_dir = options.subjects_dir
-    flash05 = os.path.abspath(options.flash05)
-    flash30 = os.path.abspath(options.flash30)
-    show = options.show
-
-    make_flash_bem(subject, subjects_dir, flash05, flash30, show=show)
-
-is_main = (__name__ == '__main__')
-if is_main:
-    run()
diff --git a/mne/commands/mne_show_fiff.py b/mne/commands/mne_show_fiff.py
new file mode 100644
index 0000000..cb4fb4c
--- /dev/null
+++ b/mne/commands/mne_show_fiff.py
@@ -0,0 +1,27 @@
+#!/usr/bin/env python
+"""Show the contents of a FIFF file
+
+For example:
+
+$ mne show_fiff test_raw.fif
+"""
+
+# Authors: Eric Larson, PhD
+
+import sys
+import mne
+
+
+def run():
+    parser = mne.commands.utils.get_optparser(
+        __file__, usage='mne show_fiff <file>')
+    options, args = parser.parse_args()
+    if len(args) != 1:
+        parser.print_help()
+        sys.exit(1)
+    print(mne.io.show_fiff(args[0]))
+
+
+is_main = (__name__ == '__main__')
+if is_main:
+    run()
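
For completeness: the new command is a thin wrapper around mne.io.show_fiff,
which the script above calls directly, so the same output is available from
Python. A minimal sketch (the file name is just a placeholder):

    import mne
    # any FIFF file (raw, epochs, evoked, ...) should work here
    print(mne.io.show_fiff('test_raw.fif'))
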
diff --git a/mne/commands/tests/test_commands.py b/mne/commands/tests/test_commands.py
index 89574e1..9b12c2b 100644
--- a/mne/commands/tests/test_commands.py
+++ b/mne/commands/tests/test_commands.py
@@ -8,10 +8,10 @@ from nose.tools import assert_true, assert_raises
 
 from mne.commands import (mne_browse_raw, mne_bti2fiff, mne_clean_eog_ecg,
                           mne_compute_proj_ecg, mne_compute_proj_eog,
-                          mne_coreg, mne_flash_bem_model, mne_kit2fiff,
+                          mne_coreg, mne_kit2fiff,
                           mne_make_scalp_surfaces, mne_maxfilter,
                           mne_report, mne_surf2bem, mne_watershed_bem,
-                          mne_compare_fiff, mne_flash_bem)
+                          mne_compare_fiff, mne_flash_bem, mne_show_fiff)
 from mne.utils import (run_tests_if_main, _TempDir, requires_mne, requires_PIL,
                        requires_mayavi, requires_tvtk, requires_freesurfer,
                        ArgvSetter, slow_test, ultra_slow_test)
@@ -54,6 +54,13 @@ def test_compare_fiff():
     check_usage(mne_compare_fiff)
 
 
+def test_show_fiff():
+    """Test mne compare_fiff"""
+    check_usage(mne_show_fiff)
+    with ArgvSetter((raw_fname,)):
+        mne_show_fiff.run()
+
+
 @requires_mne
 def test_clean_eog_ecg():
     """Test mne clean_eog_ecg"""
@@ -96,12 +103,6 @@ def test_coreg():
     assert_true(hasattr(mne_coreg, 'run'))
 
 
-def test_flash_bem_model():
-    """Test mne flash_bem_model"""
-    assert_true(hasattr(mne_flash_bem_model, 'run'))
-    check_usage(mne_flash_bem_model)
-
-
 def test_kit2fiff():
     """Test mne kit2fiff"""
     # Can't check
diff --git a/mne/connectivity/tests/test_utils.py b/mne/connectivity/tests/test_utils.py
index 2736b1f..38fa521 100644
--- a/mne/connectivity/tests/test_utils.py
+++ b/mne/connectivity/tests/test_utils.py
@@ -8,9 +8,10 @@ def test_indices():
     """Test connectivity indexing methods"""
     n_seeds_test = [1, 3, 4]
     n_targets_test = [2, 3, 200]
+    rng = np.random.RandomState(42)
     for n_seeds in n_seeds_test:
         for n_targets in n_targets_test:
-            idx = np.random.permutation(np.arange(n_seeds + n_targets))
+            idx = rng.permutation(np.arange(n_seeds + n_targets))
             seeds = idx[:n_seeds]
             targets = idx[n_seeds:]
             indices = seed_target_indices(seeds, targets)
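
Seeding a local RandomState (rather than drawing from numpy's global RNG)
makes the permutations, and hence the test, reproducible. A minimal sketch of
the guarantee this buys:

    import numpy as np

    a = np.random.RandomState(42).permutation(10)
    b = np.random.RandomState(42).permutation(10)
    assert (a == b).all()  # same seed -> same permutation, on every run
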
diff --git a/mne/coreg.py b/mne/coreg.py
index d3df150..b97cfac 100644
--- a/mne/coreg.py
+++ b/mne/coreg.py
@@ -13,11 +13,12 @@ import sys
 import re
 import shutil
 from warnings import warn
+from functools import reduce
 
 import numpy as np
 from numpy import dot
 
-from .io.meas_info import read_fiducials, write_fiducials
+from .io import read_fiducials, write_fiducials
 from .label import read_label, Label
 from .source_space import (add_source_space_distances, read_source_spaces,
                            write_source_spaces)
@@ -25,7 +26,6 @@ from .surface import read_surface, write_surface
 from .bem import read_bem_surfaces, write_bem_surfaces
 from .transforms import rotation, rotation3d, scaling, translation
 from .utils import get_config, get_subjects_dir, logger, pformat
-from functools import reduce
 from .externals.six.moves import zip
 
 
diff --git a/mne/cov.py b/mne/cov.py
index a28209d..b5a71b2 100644
--- a/mne/cov.py
+++ b/mne/cov.py
@@ -9,9 +9,7 @@ import os
 from math import floor, ceil, log
 import itertools as itt
 import warnings
-
 from copy import deepcopy
-
 from distutils.version import LooseVersion
 
 import numpy as np
@@ -19,7 +17,7 @@ from scipy import linalg
 
 from .io.write import start_file, end_file
 from .io.proj import (make_projector, _proj_equal, activate_proj,
-                      _has_eeg_average_ref_proj)
+                      _needs_eeg_average_ref_proj)
 from .io import fiff_open
 from .io.pick import (pick_types, channel_indices_by_type, pick_channels_cov,
                       pick_channels, pick_info, _picks_by_type)
@@ -35,7 +33,6 @@ from .defaults import _handle_default
 from .epochs import _is_good
 from .utils import (check_fname, logger, verbose, estimate_rank,
                     _compute_row_norms, check_version, _time_mask)
-from .utils import deprecated
 
 from .externals.six.moves import zip
 from .externals.six import string_types
@@ -351,16 +348,6 @@ def _check_n_samples(n_samples, n_chan):
         logger.warning(text)
 
 
-@deprecated('"compute_raw_data_covariance" is deprecated and will be '
-            'removed in MNE-0.11. Please use compute_raw_covariance instead')
-@verbose
-def compute_raw_data_covariance(raw, tmin=None, tmax=None, tstep=0.2,
-                                reject=None, flat=None, picks=None,
-                                verbose=None):
-    return compute_raw_covariance(raw, tmin, tmax, tstep,
-                                  reject, flat, picks, verbose)
-
-
 @verbose
 def compute_raw_covariance(raw, tmin=None, tmax=None, tstep=0.2,
                            reject=None, flat=None, picks=None,
@@ -437,7 +424,7 @@ def compute_raw_covariance(raw, tmin=None, tmax=None, tstep=0.2,
     info = pick_info(raw.info, picks)
     idx_by_type = channel_indices_by_type(info)
 
-    # Read data in chuncks
+    # Read data in chunks
     for first in range(start, stop, step):
         last = first + step
         if last >= stop:
@@ -1236,11 +1223,11 @@ def prepare_noise_cov(noise_cov, info, ch_names, rank=None,
             rank_eeg = _estimate_rank_meeg_cov(C_eeg, this_info, scalings)
         C_eeg_eig, C_eeg_eigvec = _get_ch_whitener(C_eeg, False, 'EEG',
                                                    rank_eeg)
-        if not _has_eeg_average_ref_proj(info['projs']):
-            warnings.warn('No average EEG reference present in info["projs"], '
-                          'covariance may be adversely affected. Consider '
-                          'recomputing covariance using a raw file with an '
-                          'average eeg reference projector added.')
+    if _needs_eeg_average_ref_proj(info):
+        warnings.warn('No average EEG reference present in info["projs"], '
+                      'covariance may be adversely affected. Consider '
+                      'recomputing covariance using a raw file with an '
+                      'average eeg reference projector added.')
 
     n_chan = len(ch_names)
     eigvec = np.zeros((n_chan, n_chan), dtype=np.float)
diff --git a/mne/data/coil_def_Elekta.dat b/mne/data/coil_def_Elekta.dat
index a15e3db..88b0201 100644
--- a/mne/data/coil_def_Elekta.dat
+++ b/mne/data/coil_def_Elekta.dat
@@ -8,12 +8,10 @@
 #	These coil definitions were used by Samu Taulu in the Spherical Space
 #	Separation work, which was subsequently used by Elekta in Maxfilter. The only 
 #	difference is that the local z-coordinate was set to zero in Taulu's original
-#	formulation.
+#	formulation. The small z-coordinate offset (0.0003 m) is due to a manufacturing bug.
 #
 #	Issues left to be sorted out.
 #	1) Discrepancy between gradiometer base size. 16.69 in Elekta, 16.80 in MNE
-#	2) Source of small z-coordinate offset (0.0003m). Not use in original SSS work,
-#	   but is present in Elekta's and MNE's coil definitions.
 #
 #	<class>	<id> <accuracy> <np> <size> <baseline> "<description>"
 #
@@ -39,6 +37,12 @@
 #			2	accurate
 #
 #
+1   2000    2   1  0.000e+00  0.000e+00	"Point magnetometer, z-normal"
+  1.0000000000e+00  0.0000000000e+00  0.0000000000e+00  0.0000000000e+00  0.0000000000e+00  0.0000000000e+00  1.0000000000e+00
+1   2002    2   1  0.000e+00  0.000e+00	"Point magnetometer, x-normal"
+  1.0000000000e+00  0.0000000000e+00  0.0000000000e+00  0.0000000000e+00  1.0000000000e+00  0.0000000000e+00  0.0000000000e+00
+1   2003    2   1  0.000e+00  0.000e+00	"Point magnetometer, y-normal"
+  1.0000000000e+00  0.0000000000e+00  0.0000000000e+00  0.0000000000e+00  0.0000000000e+00  1.0000000000e+00  0.0000000000e+00
 3   3012    2   8  2.639e-02  1.669e-02	"Vectorview planar gradiometer T1 size = 26.39  mm base = 16.69  mm"
 1.4979029359e+01  1.0800000000e-02  6.7100000000e-03  3.0000000000e-04  0.0000000000e+00  0.0000000000e+00  1.0000000000e+00
 1.4979029359e+01  5.8900000000e-03  6.7100000000e-03  3.0000000000e-04  0.0000000000e+00  0.0000000000e+00  1.0000000000e+00
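
The entries above follow the row format documented in the file header: a
header line <class> <id> <accuracy> <np> <size> <baseline> "<description>",
followed by <np> integration-point lines holding a weight, a position
(x, y, z) and a normal (nx, ny, nz). A rough parsing sketch, illustrative
only and not MNE's actual reader:

    header = ('1   2000    2   1  0.000e+00  0.000e+00	'
              '"Point magnetometer, z-normal"')
    fields = header.split('"')[0].split()
    coil_class, coil_id, accuracy, n_pts = (int(f) for f in fields[:4])
    size, baseline = (float(f) for f in fields[4:6])
    point = [float(v) for v in
             '1.0  0.0  0.0  0.0  0.0  0.0  1.0'.split()]
    weight, pos, normal = point[0], point[1:4], point[4:7]  # normal = +z
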
diff --git a/mne/datasets/utils.py b/mne/datasets/utils.py
index b333b58..ab6fa88 100644
--- a/mne/datasets/utils.py
+++ b/mne/datasets/utils.py
@@ -160,11 +160,18 @@ def _data_path(path=None, force_update=False, update_path=True, download=True,
            }[name]
 
     path = _get_path(path, key, name)
+    # To update the testing dataset, push commits, then make a new release
+    # on GitHub. Then update the "testing_release" variable:
+    testing_release = '0.11'
+    # And also update the "hashes['testing']" variable below.
+
+    # To update any other dataset, update the data archive itself (upload
+    # an updated version) and update the hash.
     archive_names = dict(
         sample='MNE-sample-data-processed.tar.gz',
         spm='MNE-spm-face.tar.bz2',
         somato='MNE-somato-data.tar.gz',
-        testing='mne-testing-data-master.tar.gz',
+        testing='mne-testing-data-%s.tar.gz' % testing_release,
         fake='foo.tgz',
     )
     if archive_name is not None:
@@ -182,21 +189,21 @@ def _data_path(path=None, force_update=False, update_path=True, download=True,
         spm='https://s3.amazonaws.com/mne-python/datasets/%s',
         somato='https://s3.amazonaws.com/mne-python/datasets/%s',
         brainstorm='https://copy.com/ZTHXXFcuIZycvRoA/brainstorm/%s',
-        testing='https://github.com/mne-tools/mne-testing-data/archive/'
-                'master.tar.gz',
+        testing='https://codeload.github.com/mne-tools/mne-testing-data/'
+                'tar.gz/%s' % testing_release,
         fake='https://github.com/mne-tools/mne-testing-data/raw/master/'
              'datasets/%s',
     )
     hashes = dict(
-        sample='f73186795af820428e5e8e779ce5bfcf',
+        sample='ccf5cbc41a3727ed02821330a07abb13',
         spm='3e9e83c642136e5b720e2ecc5dcc3244',
         somato='f3e3a8441477bb5bacae1d0c6e0964fb',
         brainstorm=None,
-        testing=None,
+        testing='d1753ce154e0e6af12f1b82b21e975ce',
         fake='3194e9f7b46039bb050a74f3e1ae9908',
     )
     folder_origs = dict(  # not listed means None
-        testing='mne-testing-data-master',
+        testing='mne-testing-data-%s' % testing_release,
     )
     folder_name = folder_names[name]
     archive_name = archive_names[name]
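
The testing dataset is now pinned to a tagged release rather than the moving
master branch, so a given MNE version always fetches the same archive. The
strings above combine as follows:

    testing_release = '0.11'
    archive = 'mne-testing-data-%s.tar.gz' % testing_release
    url = ('https://codeload.github.com/mne-tools/mne-testing-data/'
           'tar.gz/%s' % testing_release)
    # the downloaded file is then verified against hashes['testing']
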
diff --git a/mne/decoding/__init__.py b/mne/decoding/__init__.py
index d0f4e47..9a431a4 100644
--- a/mne/decoding/__init__.py
+++ b/mne/decoding/__init__.py
@@ -1,5 +1,5 @@
 from .transformer import Scaler, FilterEstimator
-from .transformer import PSDEstimator, EpochsVectorizer, ConcatenateChannels
+from .transformer import PSDEstimator, EpochsVectorizer
 from .mixin import TransformerMixin
 from .base import BaseEstimator, LinearModel
 from .csp import CSP
diff --git a/mne/decoding/base.py b/mne/decoding/base.py
index db33a38..1c9d4df 100644
--- a/mne/decoding/base.py
+++ b/mne/decoding/base.py
@@ -135,7 +135,7 @@ def _pprint(params, offset=0, printer=repr):
     params: dict
         The dictionary to pretty print
     offset: int
-        The offset in characters to add at the begin of each line.
+        The offset in characters to add at the beginning of each line.
     printer:
         The function to convert entries to strings, typically
         the builtin str or repr
@@ -178,19 +178,19 @@ def _pprint(params, offset=0, printer=repr):
 class LinearModel(BaseEstimator):
     """
     This object clones a Linear Model from scikit-learn
-    and updates the attribute for each fit. The linear model coefficient
+    and updates the attributes for each fit. The linear model coefficients
     (filters) are used to extract discriminant neural sources from
-    the measured data. This class implement the computation of patterns
+    the measured data. This class implements the computation of patterns
     which provides neurophysiologically interpretable information [1],
     in the sense that significant nonzero weights are only observed at channels
-    the activity of which is related to discriminant neural sources.
+    where activity is related to discriminant neural sources.
 
     Parameters
     ----------
     model : object | None
         A linear model from scikit-learn with a fit method
         that updates a coef_ attribute.
-        If None the model will be a LogisticRegression
+        If None the model will be LogisticRegression
 
     Attributes
     ----------
@@ -226,8 +226,8 @@ class LinearModel(BaseEstimator):
         self.filters_ = None
 
     def fit(self, X, y):
-        """Estimate the coefficient of the linear model.
-        Save the coefficient in the attribute filters_ and
+        """Estimate the coefficients of the linear model.
+        Save the coefficients in the attribute filters_ and
         compute the attribute patterns_ using [1].
 
         Parameters
@@ -274,7 +274,7 @@ class LinearModel(BaseEstimator):
         return self.model.transform(X)
 
     def fit_transform(self, X, y):
-        """fit the data and transform it using the linear model.
+        """Fit the data and transform it using the linear model.
 
         Parameters
         ----------
@@ -292,12 +292,12 @@ class LinearModel(BaseEstimator):
         return self.fit(X, y).transform(X)
 
     def predict(self, X):
-        """Computes prediction of X.
+        """Computes predictions of y from X.
 
         Parameters
         ----------
         X : array, shape (n_epochs, n_features)
-            The data used to compute prediction.
+            The data used to compute the predictions.
 
         Returns
         -------
diff --git a/mne/decoding/csp.py b/mne/decoding/csp.py
index 007f39a..c34c67a 100644
--- a/mne/decoding/csp.py
+++ b/mne/decoding/csp.py
@@ -1,25 +1,27 @@
+# -*- coding: utf-8 -*-
 # Authors: Romain Trachel <trachelr at gmail.com>
 #          Alexandre Gramfort <alexandre.gramfort at telecom-paristech.fr>
 #          Alexandre Barachant <alexandre.barachant at gmail.com>
+#          Clemens Brunner <clemens.brunner at gmail.com>
 #
 # License: BSD (3-clause)
 
 import copy as cp
-import warnings
 
 import numpy as np
 from scipy import linalg
 
-from .mixin import TransformerMixin
+from .mixin import TransformerMixin, EstimatorMixin
 from ..cov import _regularized_covariance
 
 
-class CSP(TransformerMixin):
+class CSP(TransformerMixin, EstimatorMixin):
     """M/EEG signal decomposition using the Common Spatial Patterns (CSP).
 
     This object can be used as a supervised decomposition to estimate
     spatial filters for feature extraction in a 2 class decoding problem.
-    See [1].
+    CSP in the context of EEG was first described in [1]; a comprehensive
+    tutorial on CSP can be found in [2].
 
     Parameters
     ----------
@@ -34,6 +36,11 @@ class CSP(TransformerMixin):
     log : bool (default True)
         If true, apply log to standardize the features.
         If false, features are just z-scored.
+    cov_est : str (default 'concat')
+        If 'concat', covariance matrices are estimated on concatenated epochs
+        for each class.
+        If 'epoch', covariance matrices are estimated on each epoch separately
+        and then averaged over each class.
 
     Attributes
     ----------
@@ -48,26 +55,40 @@ class CSP(TransformerMixin):
 
     References
     ----------
-    [1] Zoltan J. Koles. The quantitative extraction and topographic mapping
-    of the abnormal components in the clinical EEG. Electroencephalography
-    and Clinical Neurophysiology, 79(6):440--447, December 1991.
+    [1] Zoltan J. Koles, Michael S. Lazar, Steven Z. Zhou. Spatial Patterns
+        Underlying Population Differences in the Background EEG. Brain
+        Topography 2(4), 275-284, 1990.
+    [2] Benjamin Blankertz, Ryota Tomioka, Steven Lemm, Motoaki Kawanabe,
+        Klaus-Robert Müller. Optimizing Spatial Filters for Robust EEG
+        Single-Trial Analysis. IEEE Signal Processing Magazine 25(1), 41-56,
+        2008.
     """
 
-    def __init__(self, n_components=4, reg=None, log=True):
+    def __init__(self, n_components=4, reg=None, log=True, cov_est="concat"):
         """Init of CSP."""
         self.n_components = n_components
-        if reg == 'lws':
-            warnings.warn('`lws` has been deprecated for the `reg`'
-                          ' argument. It will be removed in 0.11.'
-                          ' Use `ledoit_wolf` instead.', DeprecationWarning)
-            reg = 'ledoit_wolf'
         self.reg = reg
         self.log = log
+        self.cov_est = cov_est
         self.filters_ = None
         self.patterns_ = None
         self.mean_ = None
         self.std_ = None
 
+    def get_params(self, deep=True):
+        """Return all parameters (mimics sklearn API).
+
+        Parameters
+        ----------
+        deep : boolean, optional
+            If True, will return the parameters for this estimator and
+            contained subobjects that are estimators.
+        """
+        params = {"n_components": self.n_components,
+                  "reg": self.reg,
+                  "log": self.log}
+        return params
+
     def fit(self, epochs_data, y):
         """Estimate the CSP decomposition on epochs.
 
@@ -87,26 +108,51 @@ class CSP(TransformerMixin):
             raise ValueError("epochs_data should be of type ndarray (got %s)."
                              % type(epochs_data))
         epochs_data = np.atleast_3d(epochs_data)
+        e, c, t = epochs_data.shape
         # check number of epochs
-        if epochs_data.shape[0] != len(y):
+        if e != len(y):
             raise ValueError("n_epochs must be the same for epochs_data and y")
         classes = np.unique(y)
         if len(classes) != 2:
             raise ValueError("More than two different classes in the data.")
-        # concatenate epochs
-        class_1 = np.transpose(epochs_data[y == classes[0]],
-                               [1, 0, 2]).reshape(epochs_data.shape[1], -1)
-        class_2 = np.transpose(epochs_data[y == classes[1]],
-                               [1, 0, 2]).reshape(epochs_data.shape[1], -1)
-
-        cov_1 = _regularized_covariance(class_1, reg=self.reg)
-        cov_2 = _regularized_covariance(class_2, reg=self.reg)
-
-        # then fit on covariance
-        self._fit(cov_1, cov_2)
+        if self.cov_est not in ("concat", "epoch"):
+            raise ValueError("cov_est must be 'concat' or 'epoch'")
+
+        if self.cov_est == "concat":  # concatenate epochs
+            class_1 = np.transpose(epochs_data[y == classes[0]],
+                                   [1, 0, 2]).reshape(c, -1)
+            class_2 = np.transpose(epochs_data[y == classes[1]],
+                                   [1, 0, 2]).reshape(c, -1)
+            cov_1 = _regularized_covariance(class_1, reg=self.reg)
+            cov_2 = _regularized_covariance(class_2, reg=self.reg)
+        elif self.cov_est == "epoch":
+            class_1 = epochs_data[y == classes[0]]
+            class_2 = epochs_data[y == classes[1]]
+            cov_1 = np.zeros((c, c))
+            for t in class_1:
+                cov_1 += _regularized_covariance(t, reg=self.reg)
+            cov_1 /= class_1.shape[0]
+            cov_2 = np.zeros((c, c))
+            for t in class_2:
+                cov_2 += _regularized_covariance(t, reg=self.reg)
+            cov_2 /= class_2.shape[0]
+
+        # normalize by trace
+        cov_1 /= np.trace(cov_1)
+        cov_2 /= np.trace(cov_2)
+
+        e, w = linalg.eigh(cov_1, cov_1 + cov_2)
+        n_vals = len(e)
+        # Rearrange vectors
+        ind = np.empty(n_vals, dtype=int)
+        ind[::2] = np.arange(n_vals - 1, n_vals // 2 - 1, -1)
+        ind[1::2] = np.arange(0, n_vals // 2)
+        w = w[:, ind]  # first, last, second, second last, third, ...
+        self.filters_ = w.T
+        self.patterns_ = linalg.pinv(w)
 
         pick_filters = self.filters_[:self.n_components]
-        X = np.asarray([np.dot(pick_filters, e) for e in epochs_data])
+        X = np.asarray([np.dot(pick_filters, epoch) for epoch in epochs_data])
 
         # compute features (mean band power)
         X = (X ** 2).mean(axis=-1)
@@ -117,38 +163,6 @@ class CSP(TransformerMixin):
 
         return self
 
-    def _fit(self, cov_a, cov_b):
-        """Aux Function (modifies cov_a and cov_b in-place)."""
-        cov_a /= np.trace(cov_a)
-        cov_b /= np.trace(cov_b)
-        # computes the eigen values
-        lambda_, u = linalg.eigh(cov_a + cov_b)
-        # sort them
-        ind = np.argsort(lambda_)[::-1]
-        lambda2_ = lambda_[ind]
-
-        u = u[:, ind]
-        p = np.dot(np.sqrt(linalg.pinv(np.diag(lambda2_))), u.T)
-
-        # Compute the generalized eigen value problem
-        w_a = np.dot(np.dot(p, cov_a), p.T)
-        w_b = np.dot(np.dot(p, cov_b), p.T)
-        # and solve it
-        vals, vecs = linalg.eigh(w_a, w_b)
-        # sort vectors by discriminative power using eigenvalues
-        ind = np.argsort(vals)[::-1]
-        vecs = vecs[:, ind]
-        # re-order (first, last, second, second last, third, ...)
-        n_vals = len(ind)
-        ind[::2] = np.arange(0, int(np.ceil(n_vals / 2.0)))
-        ind[1::2] = np.arange(n_vals - 1, int(np.ceil(n_vals / 2.0)) - 1, -1)
-        vecs = vecs[:, ind]
-        # and project
-        w = np.dot(vecs.T, p)
-
-        self.filters_ = w
-        self.patterns_ = linalg.pinv(w).T
-
     def transform(self, epochs_data, y=None):
         """Estimate epochs sources given the CSP filters.
 
@@ -172,7 +186,7 @@ class CSP(TransformerMixin):
                                'decomposition.')
 
         pick_filters = self.filters_[:self.n_components]
-        X = np.asarray([np.dot(pick_filters, e) for e in epochs_data])
+        X = np.asarray([np.dot(pick_filters, epoch) for epoch in epochs_data])
 
         # compute features (mean band power)
         X = (X ** 2).mean(axis=-1)
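
Two pieces of the rewritten fit() are worth unpacking: the new cov_est modes
and the eigenvector reordering. A minimal NumPy sketch with toy data, where
np.cov stands in for _regularized_covariance (assuming its unregularized
behaviour amounts to an empirical covariance):

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(10, 4, 50)  # (n_epochs, n_channels, n_times), one class

    # cov_est='concat': concatenate epochs along time, one covariance
    cov_concat = np.cov(np.transpose(X, [1, 0, 2]).reshape(4, -1))
    # cov_est='epoch': one covariance per epoch, then average
    cov_epoch = np.mean([np.cov(e) for e in X], axis=0)

    # Reordering of eigh's ascending eigenvectors: alternate the extremes
    # (largest, smallest, 2nd largest, 2nd smallest, ...), i.e. most
    # discriminative components first
    n_vals = 6
    ind = np.empty(n_vals, dtype=int)
    ind[::2] = np.arange(n_vals - 1, n_vals // 2 - 1, -1)
    ind[1::2] = np.arange(0, n_vals // 2)
    print(ind)  # [5 0 4 1 3 2]
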
diff --git a/mne/decoding/mixin.py b/mne/decoding/mixin.py
index 2f16db8..645110e 100644
--- a/mne/decoding/mixin.py
+++ b/mne/decoding/mixin.py
@@ -1,3 +1,6 @@
+from ..externals import six
+
+
 class TransformerMixin(object):
     """Mixin class for all transformers in scikit-learn"""
 
@@ -28,3 +31,37 @@ class TransformerMixin(object):
         else:
             # fit method of arity 2 (supervised transformation)
             return self.fit(X, y, **fit_params).transform(X)
+
+
+class EstimatorMixin(object):
+    """Mixin class for estimators."""
+
+    def get_params(self, deep=True):
+        pass
+
+    def set_params(self, **params):
+        """Set parameters (mimics sklearn API)."""
+        if not params:
+            return self
+        valid_params = self.get_params(deep=True)
+        for key, value in six.iteritems(params):
+            split = key.split('__', 1)
+            if len(split) > 1:
+                # nested objects case
+                name, sub_name = split
+                if name not in valid_params:
+                    raise ValueError('Invalid parameter %s for estimator %s. '
+                                     'Check the list of available parameters '
+                                     'with `estimator.get_params().keys()`.' %
+                                     (name, self))
+                sub_object = valid_params[name]
+                sub_object.set_params(**{sub_name: value})
+            else:
+                # simple objects case
+                if key not in valid_params:
+                    raise ValueError('Invalid parameter %s for estimator %s. '
+                                     'Check the list of available parameters '
+                                     'with `estimator.get_params().keys()`.' %
+                                     (key, self.__class__.__name__))
+                setattr(self, key, value)
+        return self
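
This mirrors scikit-learn's parameter API closely enough for MNE estimators
to sit inside sklearn pipelines: plain keys set attributes directly, while
'name__param' keys are routed to nested estimators through their own
set_params. A small sketch with CSP, whose get_params backs the validation:

    from mne.decoding import CSP

    csp = CSP(n_components=4)
    csp.set_params(n_components=2)  # plain key: sets the attribute
    assert csp.get_params()['n_components'] == 2
    # unknown keys raise ValueError; the nested 'CSP__reg' form is
    # exercised by the pipeline test added below
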
diff --git a/mne/decoding/tests/test_csp.py b/mne/decoding/tests/test_csp.py
index 6478567..cbb84e1 100644
--- a/mne/decoding/tests/test_csp.py
+++ b/mne/decoding/tests/test_csp.py
@@ -70,6 +70,16 @@ def test_csp():
     csp.plot_filters(epochs.info, components=components, res=12,
                      show=False)
 
+    # test covariance estimation methods (results should be roughly equal)
+    csp_epochs = CSP(cov_est="epoch")
+    csp_epochs.fit(epochs_data, y)
+    assert_array_almost_equal(csp.filters_, csp_epochs.filters_, -1)
+    assert_array_almost_equal(csp.patterns_, csp_epochs.patterns_, -1)
+
+    # make sure error is raised for undefined estimation method
+    csp_fail = CSP(cov_est="undefined")
+    assert_raises(ValueError, csp_fail.fit, epochs_data, y)
+
 
 @requires_sklearn
 def test_regularized_csp():
@@ -106,3 +116,16 @@ def test_regularized_csp():
         csp.n_components = n_components
         sources = csp.transform(epochs_data)
         assert_true(sources.shape[1] == n_components)
+
+
+@requires_sklearn
+def test_csp_pipeline():
+    """Test if CSP works in a pipeline
+    """
+    from sklearn.svm import SVC
+    from sklearn.pipeline import Pipeline
+    csp = CSP(reg=1)
+    svc = SVC()
+    pipe = Pipeline([("CSP", csp), ("SVC", svc)])
+    pipe.set_params(CSP__reg=0.2)
+    assert_true(pipe.get_params()["CSP__reg"] == 0.2)
diff --git a/mne/decoding/tests/test_time_gen.py b/mne/decoding/tests/test_time_gen.py
index 4fe1b0c..07a6286 100644
--- a/mne/decoding/tests/test_time_gen.py
+++ b/mne/decoding/tests/test_time_gen.py
@@ -94,10 +94,10 @@ def test_generalization_across_time():
 
     # check _DecodingTime class
     assert_equal("<DecodingTime | start: -0.200 (s), stop: 0.499 (s), step: "
-                 "0.047 (s), length: 0.047 (s), n_time_windows: 15>",
+                 "0.050 (s), length: 0.050 (s), n_time_windows: 15>",
                  "%s" % gat.train_times_)
     assert_equal("<DecodingTime | start: -0.200 (s), stop: 0.499 (s), step: "
-                 "0.047 (s), length: 0.047 (s), n_time_windows: 15 x 15>",
+                 "0.050 (s), length: 0.050 (s), n_time_windows: 15 x 15>",
                  "%s" % gat.test_times_)
 
     # the y-check
@@ -237,7 +237,9 @@ def test_generalization_across_time():
         gat.fit(epochs)
 
     gat.predict(epochs)
-    assert_raises(ValueError, gat.predict, epochs[:10])
+    assert_raises(IndexError, gat.predict, epochs[:10])
+
+    # TODO JRK: test GAT with non-exhaustive CV (e.g., train on 80%, test on 10%)
 
     # Check that still works with classifier that output y_pred with
     # shape = (n_trials, 1) instead of (n_trials,)
diff --git a/mne/decoding/time_gen.py b/mne/decoding/time_gen.py
index 5431653..6f8cbca 100644
--- a/mne/decoding/time_gen.py
+++ b/mne/decoding/time_gen.py
@@ -11,6 +11,7 @@ import copy
 from ..io.pick import pick_types
 from ..viz.decoding import plot_gat_matrix, plot_gat_times
 from ..parallel import parallel_func, check_n_jobs
+from ..utils import logger
 
 
 class _DecodingTime(dict):
@@ -148,7 +149,8 @@ class _GeneralizationAcrossTime(object):
         # defined in __init__
         self.train_times_ = copy.deepcopy(self.train_times)
         if 'slices' not in self.train_times_:
-            self.train_times_ = _sliding_window(epochs.times, self.train_times)
+            self.train_times_ = _sliding_window(epochs.times, self.train_times,
+                                                epochs.info['sfreq'])
 
         # Parallel across training time
         # TODO: JRK: Chunking time points needs to be simplified
@@ -223,7 +225,8 @@ class _GeneralizationAcrossTime(object):
             slices_list = list()
             times_list = list()
             for t in range(0, len(self.train_times_['slices'])):
-                test_times_ = _sliding_window(epochs.times, test_times)
+                test_times_ = _sliding_window(epochs.times, test_times,
+                                              epochs.info['sfreq'])
                 times_list += [test_times_['times']]
                 slices_list += [test_times_['slices']]
             test_times = test_times_
@@ -236,7 +239,7 @@ class _GeneralizationAcrossTime(object):
         # Prepare parallel predictions across time points
         # FIXME Note that this means that TimeDecoding.predict isn't parallel
         parallel, p_time_gen, n_jobs = parallel_func(_predict_slices, n_jobs)
-        n_test_slice = max([len(sl) for sl in self.train_times_['slices']])
+        n_test_slice = max(len(sl) for sl in self.train_times_['slices'])
         # Loop across estimators (i.e. training times)
         n_chunks = min(n_test_slice, n_jobs)
         splits = [np.array_split(slices, n_chunks)
@@ -261,6 +264,8 @@ class _GeneralizationAcrossTime(object):
         # np.concatenate as this would need new memory allocations
         self.y_pred_ = [[test for chunk in train for test in chunk]
                         for train in map(list, zip(*y_pred))]
+
+        _warn_once.clear()  # reset self-baked warning tracker
         return self.y_pred_
 
     def score(self, epochs=None, y=None):
@@ -360,6 +365,9 @@ def _predict_slices(X, estimators, cv, slices, predict_mode):
     return out
 
 
+_warn_once = dict()
+
+
 def _predict_time_loop(X, estimators, cv, slices, predict_mode):
     """Aux function of GeneralizationAcrossTime
 
@@ -385,42 +393,82 @@ def _predict_time_loop(X, estimators, cv, slices, predict_mode):
                 these predictions into a single estimate per sample.
         Default: 'cross-validation'
     """
-    n_epochs = len(X)
-    # Loop across testing slices
-    y_pred = [list() for _ in range(len(slices))]
 
-    # XXX EHN: This loop should be parallelized in a similar way to fit()
+    # Check inputs
+    n_orig_epochs, _, n_times = X.shape
+    if predict_mode == 'cross-validation':
+        # Subselect to-be-predicted epochs so as to manipulate a contiguous
+        # array X by using slices rather than indices.
+        all_test = np.concatenate(list(zip(*cv))[-1])
+        test_epochs_slices = []
+        start = 0
+        for _, test in cv:
+            n_test_epochs = len(test)
+            stop = start + n_test_epochs
+            test_epochs_slices.append(slice(start, stop, 1))
+            start += n_test_epochs
+        X = X[all_test]  # XXX JRK: Still 12 % of cpu time.
+
+        # Check that training cv and predicting cv match
+        if (len(estimators) != len(cv)) or (cv.n != len(X)):
+            raise ValueError(
+                'When `predict_mode = "cross-validation"`, the training '
+                'and predicting cv schemes must be identical.')
+    elif predict_mode != 'mean-prediction':
+        raise ValueError('`predict_mode` must be a str, "mean-prediction" '
+                         'or "cross-validation"')
+    n_epochs, _, n_times = X.shape
+
+    # Check whether the GAT is based on contiguous windows of length 1
+    # time sample each, spanning the entire time range. In that case the
+    # testing time samples can be vectorized.
+    expected_start = np.arange(n_times)
+    is_single_time_sample = np.array_equal([ii for sl in slices for ii in sl],
+                                           expected_start)
+    if is_single_time_sample:
+        # In simple mode, we avoid iterating over time slices.
+        slices = [slice(expected_start[0], expected_start[-1] + 1, 1)]
+    elif _warn_once.get('vectorization', True):
+        logger.warning('not vectorizing predictions across testing times: '
+                       'a time window with length > 1 is used')
+        _warn_once['vectorization'] = False
+    # Iterate over testing times. If is_single_time_sample, then 1 iteration.
+    y_pred = list()
     for t, indices in enumerate(slices):
-        # Flatten features in case of multiple time samples
-        Xtrain = X[:, :, indices].reshape(
-            n_epochs, np.prod(X[:, :, indices].shape[1:]))
-
-        # Single trial predictions
-        if predict_mode == 'cross-validation':
-            # If predict within cross validation, only predict with
-            # corresponding classifier, else predict with each fold's
-            # classifier and average prediction.
-
-            # Check that training cv and predicting cv match
-            if (len(estimators) != len(cv)) or (cv.n != Xtrain.shape[0]):
-                raise ValueError(
-                    'When `predict_mode = "cross-validation"`, the training '
-                    'and predicting cv schemes must be identical.')
+        # Vectorize channel-by-time features when multiple time samples
+        # are given to the estimators.
+        if not is_single_time_sample:
+            X_pred = X[:, :, indices].reshape(n_epochs, -1)
+        else:
+            X_pred = X
+
+        if predict_mode == 'mean-prediction':
+            # Predict with each fold's estimator and average predictions.
+            y_pred.append(_predict(X_pred, estimators,
+                          is_single_time_sample=is_single_time_sample))
+        elif predict_mode == 'cross-validation':
+            # Predict with the estimator trained on the separate training set.
             for k, (train, test) in enumerate(cv):
+                # Single trial predictions
+                X_pred_t = X_pred[test_epochs_slices[k]]
+                # If is_single_time_sample, each time sample is predicted
+                # as if it were a separate epoch (vectorization)
+                y_pred_ = _predict(X_pred_t, estimators[k:k + 1],
+                                   is_single_time_sample=is_single_time_sample)
                 # XXX I didn't manage to initialize this array correctly,
                 # as its size depends on the type of predictor and the
                 # number of classes.
                 if k == 0:
-                    y_pred_ = _predict(Xtrain[test, :], estimators[k:k + 1])
-                    y_pred[t] = np.empty((n_epochs, y_pred_.shape[1]))
-                    y_pred[t][test, :] = y_pred_
-                y_pred[t][test, :] = _predict(Xtrain[test, :],
-                                              estimators[k:k + 1])
-        elif predict_mode == 'mean-prediction':
-            y_pred[t] = _predict(Xtrain, estimators)
-        else:
-            raise ValueError('`predict_mode` must be a str, "mean-prediction"'
-                             ' or "cross-validation"')
+                    # /!\ The CV may not be exhaustive. Thus, we need to
+                    # store the predictions in an array whose length matches
+                    # X before it was made contiguous.
+                    this_ypred = np.empty((n_orig_epochs,) + y_pred_.shape[1:])
+                    y_pred.append(this_ypred)
+                y_pred[-1][test, ...] = y_pred_
+
+    if is_single_time_sample:
+        y_pred = list(y_pred[0].transpose([1, 0, 2]))
+
     return y_pred
 
 
@@ -540,7 +588,7 @@ def _fit_slices(clf, x_chunk, y, slices, cv):
     return estimators
 
 
-def _sliding_window(times, window_params):
+def _sliding_window(times, window_params, sfreq):
     """Aux function of GeneralizationAcrossTime
 
     Define the slices on which to train each classifier.
@@ -561,9 +609,6 @@ def _sliding_window(times, window_params):
 
     window_params = _DecodingTime(window_params)
 
-    # Sampling frequency as int
-    freq = (times[-1] - times[0]) / len(times)
-
     # Default values
     if ('slices' in window_params and
             all(k in window_params for k in
@@ -575,9 +620,9 @@ def _sliding_window(times, window_params):
         if 'stop' not in window_params:
             window_params['stop'] = times[-1]
         if 'step' not in window_params:
-            window_params['step'] = freq
+            window_params['step'] = 1. / sfreq
         if 'length' not in window_params:
-            window_params['length'] = freq
+            window_params['length'] = 1. / sfreq
 
         if (window_params['start'] < times[0] or
                 window_params['start'] > times[-1]):
@@ -589,9 +634,9 @@ def _sliding_window(times, window_params):
             raise ValueError(
                 '`stop` (%.2f s) outside time range [%.2f, %.2f].' % (
                     window_params['stop'], times[0], times[-1]))
-        if window_params['step'] < freq:
+        if window_params['step'] < 1. / sfreq:
             raise ValueError('`step` must be >= 1 / sampling_frequency')
-        if window_params['length'] < freq:
+        if window_params['length'] < 1. / sfreq:
             raise ValueError('`length` must be >= 1 / sampling_frequency')
         if window_params['length'] > np.ptp(times):
             raise ValueError('`length` must be <= time range')
@@ -603,8 +648,8 @@ def _sliding_window(times, window_params):
 
         start = find_time_idx(window_params['start'])
         stop = find_time_idx(window_params['stop'])
-        step = int(round(window_params['step'] / freq))
-        length = int(round(window_params['length'] / freq))
+        step = int(round(window_params['step'] * sfreq))
+        length = int(round(window_params['length'] * sfreq))
 
         # For each training slice, give time samples to be included
         time_pick = [range(start, start + length)]
@@ -620,7 +665,7 @@ def _sliding_window(times, window_params):
     return window_params
 
 
-def _predict(X, estimators):
+def _predict(X, estimators, is_single_time_sample):
     """Aux function of GeneralizationAcrossTime
 
     Predict each classifier. If multiple classifiers are passed, average
@@ -641,9 +686,19 @@ def _predict(X, estimators):
     from scipy import stats
     from sklearn.base import is_classifier
     # Initialize results:
-    n_epochs = X.shape[0]
+
+    orig_shape = X.shape
+    n_epochs = orig_shape[0]
+    n_times = orig_shape[-1]
+
     n_clf = len(estimators)
 
+    # In the simple case, each time sample is predicted as if it were
+    # a separate epoch
+    if is_single_time_sample:  # treat times as trials for optimization
+        X = np.hstack(X).T  # XXX JRK: still 17% of cpu time
+    n_epochs_tmp = len(X)
+
     # Compute prediction for each sub-estimator (i.e. per fold)
     # if independent, estimators = all folds
     for fold, clf in enumerate(estimators):
@@ -654,7 +709,7 @@ def _predict(X, estimators):
         # initialize predict_results array
         if fold == 0:
             predict_size = _y_pred.shape[1]
-            y_pred = np.ones((n_epochs, predict_size, n_clf))
+            y_pred = np.ones((n_epochs_tmp, predict_size, n_clf))
         y_pred[:, :, fold] = _y_pred
 
     # Collapse y_pred across folds if necessary (i.e. if independent)
@@ -666,14 +721,18 @@ def _predict(X, estimators):
             y_pred = np.mean(y_pred, axis=2)
 
     # Format shape
-    y_pred = y_pred.reshape((n_epochs, predict_size))
+    y_pred = y_pred.reshape((n_epochs_tmp, predict_size))
+    if is_single_time_sample:
+        y_pred = np.reshape(y_pred,
+                            [n_epochs, n_times, y_pred.shape[-1]])
+
     return y_pred
 
 
 class GeneralizationAcrossTime(_GeneralizationAcrossTime):
     """Generalize across time and conditions
 
-    Creates and estimator object used to 1) fit a series of classifiers on
+    Creates an estimator object used to 1) fit a series of classifiers on
     multidimensional time-resolved data, and 2) test the ability of each
     classifier to generalize across other time samples.
 
diff --git a/mne/decoding/transformer.py b/mne/decoding/transformer.py
index 27950cd..55a28f8 100644
--- a/mne/decoding/transformer.py
+++ b/mne/decoding/transformer.py
@@ -13,7 +13,7 @@ from ..filter import (low_pass_filter, high_pass_filter, band_pass_filter,
                       band_stop_filter)
 from ..time_frequency import multitaper_psd
 from ..externals import six
-from ..utils import _check_type_picks, deprecated
+from ..utils import _check_type_picks
 
 
 class Scaler(TransformerMixin):
@@ -245,12 +245,6 @@ class EpochsVectorizer(TransformerMixin):
         return X.reshape(-1, self.n_channels, self.n_times)
 
 
- at deprecated("Class 'ConcatenateChannels' has been renamed to "
-            "'EpochsVectorizer' and will be removed in release 0.11.")
-class ConcatenateChannels(EpochsVectorizer):
-    pass
-
-
 class PSDEstimator(TransformerMixin):
     """Compute power spectrum density (PSD) using a multi-taper method
 
diff --git a/mne/dipole.py b/mne/dipole.py
index 64a313f..d1c71e6 100644
--- a/mne/dipole.py
+++ b/mne/dipole.py
@@ -10,12 +10,12 @@ import re
 
 from .cov import read_cov, _get_whitener_data
 from .io.pick import pick_types, channel_type
-from .io.proj import make_projector, _has_eeg_average_ref_proj
+from .io.proj import make_projector, _needs_eeg_average_ref_proj
 from .bem import _fit_sphere
 from .transforms import (_print_coord_trans, _coord_frame_name,
                          apply_trans, invert_transform, Transform)
 
-from .forward._make_forward import (_get_mri_head_t, _setup_bem,
+from .forward._make_forward import (_get_trans, _setup_bem,
                                     _prep_meg_channels, _prep_eeg_channels)
 from .forward._compute_forward import (_compute_forwards_meeg,
                                        _prep_field_computation)
@@ -575,7 +575,7 @@ def fit_dipole(evoked, cov, bem, trans=None, min_dist=5., n_jobs=1,
     evoked = evoked.copy()
 
     # Determine if a list of projectors has an average EEG ref
-    if "eeg" in evoked and not _has_eeg_average_ref_proj(evoked.info['projs']):
+    if _needs_eeg_average_ref_proj(evoked.info):
         raise ValueError('EEG average reference is mandatory for dipole '
                          'fitting.')
 
@@ -597,7 +597,7 @@ def fit_dipole(evoked, cov, bem, trans=None, min_dist=5., n_jobs=1,
         logger.info('BEM              : %s' % bem)
     if trans is not None:
         logger.info('MRI transform    : %s' % trans)
-        mri_head_t, trans = _get_mri_head_t(trans)
+        mri_head_t, trans = _get_trans(trans)
     else:
         mri_head_t = Transform('head', 'mri', np.eye(4))
     bem = _setup_bem(bem, bem, neeg, mri_head_t)
diff --git a/mne/epochs.py b/mne/epochs.py
index 48394f0..1633734 100644
--- a/mne/epochs.py
+++ b/mne/epochs.py
@@ -1,3 +1,5 @@
+# -*- coding: utf-8 -*-
+
 """Tools for working with epoched data"""
 
 # Authors: Alexandre Gramfort <alexandre.gramfort at telecom-paristech.fr>
@@ -26,9 +28,10 @@ from .io.tree import dir_tree_find
 from .io.tag import read_tag, read_tag_info
 from .io.constants import FIFF
 from .io.pick import (pick_types, channel_indices_by_type, channel_type,
-                      pick_channels, pick_info)
+                      pick_channels, pick_info, _pick_data_channels)
 from .io.proj import setup_proj, ProjMixin, _proj_equal
 from .io.base import _BaseRaw, ToDataFrameMixin
+from .bem import _check_origin
 from .evoked import EvokedArray, _aspect_rev
 from .baseline import rescale
 from .channels.channels import (ContainsMixin, UpdateChannelsMixin,
@@ -36,8 +39,7 @@ from .channels.channels import (ContainsMixin, UpdateChannelsMixin,
 from .filter import resample, detrend, FilterMixin
 from .event import _read_events_fif
 from .fixes import in1d, _get_args
-from .viz import (plot_epochs, _drop_log_stats,
-                  plot_epochs_psd, plot_epochs_psd_topomap)
+from .viz import plot_epochs, plot_epochs_psd, plot_epochs_psd_topomap
 from .utils import (check_fname, logger, verbose, _check_type_picks,
                     _time_mask, check_random_state, object_hash)
 from .externals.six import iteritems, string_types
@@ -75,7 +77,7 @@ def _save_split(epochs, fname, part_idx, n_parts):
 
     # One or more evoked data sets
     start_block(fid, FIFF.FIFFB_PROCESSED_DATA)
-    start_block(fid, FIFF.FIFFB_EPOCHS)
+    start_block(fid, FIFF.FIFFB_MNE_EPOCHS)
 
     # write events out after getting data to ensure bad events are dropped
     data = epochs.get_data()
@@ -87,7 +89,7 @@ def _save_split(epochs, fname, part_idx, n_parts):
     end_block(fid, FIFF.FIFFB_MNE_EVENTS)
 
     # First and last sample
-    first = int(epochs.times[0] * info['sfreq'])
+    first = int(round(epochs.tmin * info['sfreq']))  # round just to be safe
     last = first + len(epochs.times) - 1
     write_int(fid, FIFF.FIFF_FIRST_SAMPLE, first)
     write_int(fid, FIFF.FIFF_LAST_SAMPLE, last)
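
The first-sample computation now rounds instead of truncating; with floating
point time values, the old int() could land one sample off. A toy
illustration:

    sfreq, tmin = 100.0, 0.29
    print(tmin * sfreq)              # slightly below 29.0 (float error)
    print(int(tmin * sfreq))         # 28: truncation drops a sample
    print(int(round(tmin * sfreq)))  # 29: the intended first sample
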
@@ -129,7 +131,7 @@ def _save_split(epochs, fname, part_idx, n_parts):
         write_int(fid, FIFF.FIFF_REF_FILE_NUM, next_idx)
         end_block(fid, FIFF.FIFFB_REF)
 
-    end_block(fid, FIFF.FIFFB_EPOCHS)
+    end_block(fid, FIFF.FIFFB_MNE_EPOCHS)
     end_block(fid, FIFF.FIFFB_PROCESSED_DATA)
     end_block(fid, FIFF.FIFFB_MEAS)
     end_file(fid)
@@ -254,8 +256,6 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         if tmin > tmax:
             raise ValueError('tmin has to be less than or equal to tmax')
 
-        self.tmin = tmin
-        self.tmax = tmax
         self.baseline = baseline
         self.reject_tmin = reject_tmin
         self.reject_tmax = reject_tmax
@@ -286,11 +286,11 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
         # Handle times
         sfreq = float(self.info['sfreq'])
-        start_idx = int(round(self.tmin * sfreq))
+        start_idx = int(round(tmin * sfreq))
         self._raw_times = np.arange(start_idx,
-                                    int(round(self.tmax * sfreq)) + 1) / sfreq
+                                    int(round(tmax * sfreq)) + 1) / sfreq
+        self.times = self._raw_times.copy()
         self._decim = 1
-        # this method sets the self.times property
         self.decimate(decim)
 
         # setup epoch rejection
@@ -316,7 +316,14 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         if preload_at_end:
             assert self._data is None
             assert self.preload is False
-            self.load_data()
+            self.load_data()  # this will do the projection
+        elif proj is True and self._projector is not None and data is not None:
+            # let's make sure we project if data was provided and proj
+            # requested
+            # we could do this with np.einsum, but iteration should be
+            # more memory safe in most instances
+            for ii, epoch in enumerate(self._data):
+                self._data[ii] = np.dot(self._projector, epoch)
 
     def load_data(self):
         """Load the data if not already preloaded
@@ -435,7 +442,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
         data = self._data
         picks = pick_types(self.info, meg=True, eeg=True, stim=False,
-                           ref_meg=True, eog=True, ecg=True,
+                           ref_meg=True, eog=True, ecg=True, seeg=True,
                            emg=True, exclude=[])
         data[:, picks, :] = rescale(data[:, picks, :], self.times, baseline,
                                     'mean', copy=False)
@@ -444,49 +451,57 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
     def _reject_setup(self, reject, flat):
         """Sets self._reject_time and self._channel_type_idx"""
         idx = channel_indices_by_type(self.info)
+        reject = deepcopy(reject) if reject is not None else dict()
+        flat = deepcopy(flat) if flat is not None else dict()
         for rej, kind in zip((reject, flat), ('reject', 'flat')):
-            if not isinstance(rej, (type(None), dict)):
+            if not isinstance(rej, dict):
                 raise TypeError('reject and flat must be dict or None, not %s'
                                 % type(rej))
-            if isinstance(rej, dict):
-                bads = set(rej.keys()) - set(idx.keys())
-                if len(bads) > 0:
-                    raise KeyError('Unknown channel types found in %s: %s'
-                                   % (kind, bads))
+            bads = set(rej.keys()) - set(idx.keys())
+            if len(bads) > 0:
+                raise KeyError('Unknown channel types found in %s: %s'
+                               % (kind, bads))
 
         for key in idx.keys():
-            if (reject is not None and key in reject) \
-                    or (flat is not None and key in flat):
-                if len(idx[key]) == 0:
-                    raise ValueError("No %s channel found. Cannot reject based"
-                                     " on %s." % (key.upper(), key.upper()))
-            # now check to see if our rejection and flat are getting more
-            # restrictive
-            old_reject = self.reject if self.reject is not None else dict()
-            new_reject = reject if reject is not None else dict()
-            old_flat = self.flat if self.flat is not None else dict()
-            new_flat = flat if flat is not None else dict()
-            bad_msg = ('{kind}["{key}"] == {new} {op} {old} (old value), new '
-                       '{kind} values must be at least as stringent as '
-                       'previous ones')
-            for key in set(new_reject.keys()).union(old_reject.keys()):
-                old = old_reject.get(key, np.inf)
-                new = new_reject.get(key, np.inf)
-                if new > old:
-                    raise ValueError(bad_msg.format(kind='reject', key=key,
-                                                    new=new, old=old, op='>'))
-            for key in set(new_flat.keys()).union(old_flat.keys()):
-                old = old_flat.get(key, -np.inf)
-                new = new_flat.get(key, -np.inf)
-                if new < old:
-                    raise ValueError(bad_msg.format(kind='flat', key=key,
-                                                    new=new, old=old, op='<'))
+            if len(idx[key]) == 0 and (key in reject or key in flat):
+                # This is where we could eventually add e.g.
+                # self.allow_missing_reject_keys check to allow users to
+                # provide keys that don't exist in data
+                raise ValueError("No %s channel found. Cannot reject based on "
+                                 "%s." % (key.upper(), key.upper()))
+
+        # check for invalid values
+        for rej, kind in zip((reject, flat), ('Rejection', 'Flat')):
+            for key, val in rej.items():
+                if val is None or val < 0:
+                    raise ValueError('%s value must be a number >= 0, not "%s"'
+                                     % (kind, val))
+
+        # now check to see if our rejection and flat are getting more
+        # restrictive
+        old_reject = self.reject if self.reject is not None else dict()
+        old_flat = self.flat if self.flat is not None else dict()
+        bad_msg = ('{kind}["{key}"] == {new} {op} {old} (old value), new '
+                   '{kind} values must be at least as stringent as '
+                   'previous ones')
+        for key in set(reject.keys()).union(old_reject.keys()):
+            old = old_reject.get(key, np.inf)
+            new = reject.get(key, np.inf)
+            if new > old:
+                raise ValueError(bad_msg.format(kind='reject', key=key,
+                                                new=new, old=old, op='>'))
+        for key in set(flat.keys()).union(old_flat.keys()):
+            old = old_flat.get(key, -np.inf)
+            new = flat.get(key, -np.inf)
+            if new < old:
+                raise ValueError(bad_msg.format(kind='flat', key=key,
+                                                new=new, old=old, op='<'))
 
         # after validation, set parameters
         self._bad_dropped = False
         self._channel_type_idx = idx
-        self.reject = reject
-        self.flat = flat
+        self.reject = reject if len(reject) > 0 else None
+        self.flat = flat if len(flat) > 0 else None
 
         if (self.reject_tmin is None) and (self.reject_tmax is None):
             self._reject_time = None
@@ -501,7 +516,6 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
             else:
                 idxs = np.nonzero(self.times <= self.reject_tmax)[0]
                 reject_imax = idxs[-1]
-
             self._reject_time = slice(reject_imin, reject_imax)
 
     @verbose
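
The consolidated validation above enforces that re-running rejection can only
tighten thresholds (and, symmetrically, that flat criteria can only rise).
Distilled to its core, the reject check is:

    old_reject = dict(grad=4000e-13)  # existing threshold
    new_reject = dict(grad=5000e-13)  # more lenient: must be refused
    for key in set(new_reject).union(old_reject):
        new = new_reject.get(key, float('inf'))
        old = old_reject.get(key, float('inf'))
        if new > old:
            raise ValueError('new reject values must be at least as '
                             'stringent as previous ones')
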
@@ -534,14 +548,12 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
         # Detrend
         if self.detrend is not None:
-            picks = pick_types(self.info, meg=True, eeg=True, stim=False,
-                               ref_meg=False, eog=False, ecg=False,
-                               emg=False, exclude=[])
+            picks = _pick_data_channels(self.info, exclude=[])
             epoch[picks] = detrend(epoch[picks], self.detrend, axis=1)
 
         # Baseline correct
         picks = pick_types(self.info, meg=True, eeg=True, stim=False,
-                           ref_meg=True, eog=True, ecg=True,
+                           ref_meg=True, eog=True, ecg=True, seeg=True,
                            emg=True, exclude=[])
         epoch[picks] = rescale(epoch[picks], self._raw_times, self.baseline,
                                'mean', copy=False, verbose=verbose)
@@ -594,7 +606,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         logger.info('Subtracting Evoked from Epochs')
         if evoked is None:
             picks = pick_types(self.info, meg=True, eeg=True,
-                               stim=False, eog=False, ecg=False,
+                               stim=False, eog=False, ecg=False, seeg=True,
                                emg=False, exclude=[])
             evoked = self.average(picks)
 
@@ -608,7 +620,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
             diff_idx = [self.ch_names.index(ch) for ch in diff_ch]
             diff_types = [channel_type(self.info, idx) for idx in diff_idx]
             bad_idx = [diff_types.index(t) for t in diff_types if t in
-                       ['grad', 'mag', 'eeg']]
+                       ['grad', 'mag', 'eeg', 'seeg']]
             if len(bad_idx) > 0:
                 bad_str = ', '.join([diff_ch[ii] for ii in bad_idx])
                 raise ValueError('The following data channels are missing '
@@ -658,7 +670,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         Parameters
         ----------
         picks : array-like of int | None
-            If None only MEG and EEG channels are kept
+            If None, only MEG, EEG and SEEG channels are kept;
             otherwise the channel indices in picks are kept.
 
         Returns
@@ -666,7 +678,6 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         evoked : instance of Evoked
             The averaged epochs.
         """
-
         return self._compute_mean_or_stderr(picks, 'ave')
 
     def standard_error(self, picks=None):
@@ -675,7 +686,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         Parameters
         ----------
         picks : array-like of int | None
-            If None only MEG and EEG channels are kept
+            If None, only MEG, EEG and SEEG channels are kept;
             otherwise the channel indices in picks are kept.
 
         Returns
@@ -724,21 +735,23 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         else:
             _aspect_kind = FIFF.FIFFV_ASPECT_STD_ERR
             data /= np.sqrt(n_events)
-        kind = _aspect_rev.get(str(_aspect_kind), 'Unknown')
-
-        info = deepcopy(self.info)
+        return self._evoked_from_epoch_data(data, self.info, picks, n_events,
+                                            _aspect_kind)
+
+    def _evoked_from_epoch_data(self, data, info, picks, n_events,
+                                aspect_kind):
+        """Helper to create an evoked object from epoch data"""
+        info = deepcopy(info)
+        kind = _aspect_rev.get(str(aspect_kind), 'Unknown')
         evoked = EvokedArray(data, info, tmin=self.times[0],
                              comment=self.name, nave=n_events, kind=kind,
                              verbose=self.verbose)
         # XXX: above constructor doesn't recreate the times object precisely
         evoked.times = self.times.copy()
-        evoked._aspect_kind = _aspect_kind
 
         # pick channels
         if picks is None:
-            picks = pick_types(evoked.info, meg=True, eeg=True, ref_meg=True,
-                               stim=False, eog=False, ecg=False,
-                               emg=False, exclude=[])
+            picks = _pick_data_channels(evoked.info, exclude=[])
 
         ch_names = [evoked.ch_names[p] for p in picks]
         evoked.pick_channels(ch_names)
@@ -804,7 +817,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         be used to navigate between channels and epochs and the scaling can be
         adjusted with - and + (or =) keys, but this depends on the backend
         matplotlib is configured to use (e.g., mpl.use(``TkAgg``) should work).
-        Full screen mode can be to toggled with f11 key. The amount of epochs
+        Full screen mode can be toggled with the f11 key. The number of epochs
         and channels per view can be adjusted with home/end and
         page down/page up keys. Butterfly plot can be toggled with ``b`` key.
         Right mouse click adds a vertical line to the plot.
@@ -996,7 +1009,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         self._reject_setup(reject, flat)
         self._get_data(out=False)
 
-    def drop_log_stats(self, ignore=['IGNORED']):
+    def drop_log_stats(self, ignore=('IGNORED',)):
         """Compute the channel stats based on a drop_log from Epochs.
 
         Parameters
@@ -1016,7 +1029,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         return _drop_log_stats(self.drop_log, ignore)
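
The switch from a list default (``ignore=['IGNORED']``) to a tuple follows
the usual Python guidance: a mutable default is created once at function
definition and shared across calls, so any accidental mutation leaks
between invocations. Nothing here mutates ``ignore``, so the change is
defensive, but the pitfall it guards against looks like this:

    def remember(reason, seen=[]):   # the list is created once, at def time
        seen.append(reason)
        return seen

    remember('EOG')   # -> ['EOG']
    remember('AMP')   # -> ['EOG', 'AMP']  (state leaked from the first call)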
 
     def plot_drop_log(self, threshold=0, n_max_plot=20, subject='Unknown',
-                      color=(0.9, 0.9, 0.9), width=0.8, ignore=['IGNORED'],
+                      color=(0.9, 0.9, 0.9), width=0.8, ignore=('IGNORED',),
                       show=True):
         """Show the channel stats based on a drop_log from Epochs
 
@@ -1282,6 +1295,14 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
+        return (epoch, self.event_id) if return_event_id else epoch
 
+    @property
+    def tmin(self):
+        """First time point."""
+        return self.times[0]
+
+    @property
+    def tmax(self):
+        """Last time point."""
+        return self.times[-1]
+
     def __repr__(self):
         """ Build string representation
         """
@@ -1392,11 +1413,7 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
             tmax = self.tmax
 
         tmask = _time_mask(self.times, tmin, tmax)
-        tidx = np.where(tmask)[0]
-
         this_epochs = self if not copy else self.copy()
-        this_epochs.tmin = this_epochs.times[tidx[0]]
-        this_epochs.tmax = this_epochs.times[tidx[-1]]
         this_epochs.times = this_epochs.times[tmask]
         this_epochs._raw_times = this_epochs._raw_times[tmask]
         this_epochs._data = this_epochs._data[:, :, tmask]
@@ -1447,7 +1464,6 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         inst.info['sfreq'] = sfreq
         inst.times = (np.arange(inst._data.shape[2], dtype=np.float) /
                       sfreq + inst.times[0])
-
         return inst
 
     def copy(self):
@@ -1614,6 +1630,27 @@ class _BaseEpochs(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         return epochs, indices
 
 
+def _drop_log_stats(drop_log, ignore=('IGNORED',)):
+    """
+    Parameters
+    ----------
+    drop_log : list of lists
+        Epoch drop log from Epochs.drop_log.
+    ignore : list
+        The drop reasons to ignore.
+
+    Returns
+    -------
+    perc : float
+        Total percentage of epochs dropped.
+    """
+    if not isinstance(drop_log, list) or not isinstance(drop_log[0], list):
+        raise ValueError('drop_log must be a list of lists')
+    perc = 100 * np.mean([len(d) > 0 for d in drop_log
+                          if not any(r in ignore for r in d)])
+    return perc
+
+
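
As a worked example of the statistic: entries whose reasons all fall in
``ignore`` are excluded from the denominator entirely, and the remaining
entries count as dropped when they are non-empty:

    import numpy as np

    drop_log = [[], ['EOG'], ['IGNORED']]   # kept, dropped, ignored
    counted = [len(d) > 0 for d in drop_log
               if not any(r in ('IGNORED',) for r in d)]
    print(100 * np.mean(counted))           # 50.0: one of two epochs dropped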
 class Epochs(_BaseEpochs):
     """Epochs extracted from a Raw instance
 
@@ -1794,14 +1831,12 @@ class Epochs(_BaseEpochs):
         proj = proj or raw.proj
 
         # call _BaseEpochs constructor
-        super(Epochs, self).__init__(info, None, events, event_id, tmin, tmax,
-                                     baseline=baseline, raw=raw, picks=picks,
-                                     name=name, reject=reject, flat=flat,
-                                     decim=decim, reject_tmin=reject_tmin,
-                                     reject_tmax=reject_tmax, detrend=detrend,
-                                     add_eeg_ref=add_eeg_ref, proj=proj,
-                                     on_missing=on_missing,
-                                     preload_at_end=preload, verbose=verbose)
+        super(Epochs, self).__init__(
+            info, None, events, event_id, tmin, tmax, baseline=baseline,
+            raw=raw, picks=picks, name=name, reject=reject, flat=flat,
+            decim=decim, reject_tmin=reject_tmin, reject_tmax=reject_tmax,
+            detrend=detrend, add_eeg_ref=add_eeg_ref, proj=proj,
+            on_missing=on_missing, preload_at_end=preload, verbose=verbose)
 
     @verbose
     def _get_epoch_from_raw(self, idx, verbose=None):
@@ -1905,7 +1940,8 @@ class EpochsArray(_BaseEpochs):
         super(EpochsArray, self).__init__(info, data, events, event_id, tmin,
                                           tmax, baseline, reject=reject,
                                           flat=flat, reject_tmin=reject_tmin,
-                                          reject_tmax=reject_tmax, decim=1)
+                                          reject_tmax=reject_tmax, decim=1,
+                                          add_eeg_ref=False)
         if len(events) != in1d(self.events[:, 2],
                                list(self.event_id.values())).sum():
             raise ValueError('The events must only contain event numbers from '
@@ -2001,7 +2037,7 @@ def equalize_epoch_counts(epochs_list, method='mintime'):
         list. If 'mintime', timing differences between each event list will be
         minimized.
     """
-    if not all(isinstance(e, Epochs) for e in epochs_list):
+    if not all(isinstance(e, _BaseEpochs) for e in epochs_list):
         raise ValueError('All inputs must be Epochs instances')
 
     # make sure bad epochs are dropped
@@ -2122,7 +2158,7 @@ def _read_one_epoch_file(f, tree, fname, preload):
 
     with f as fid:
         #   Read the measurement info
-        info, meas = read_meas_info(fid, tree)
+        info, meas = read_meas_info(fid, tree, clean_bads=True)
         info['filename'] = fname
 
         events, mappings = _read_events_fif(fid, tree)
@@ -2132,9 +2168,15 @@ def _read_one_epoch_file(f, tree, fname, preload):
         if len(processed) == 0:
             raise ValueError('Could not find processed data')
 
-        epochs_node = dir_tree_find(tree, FIFF.FIFFB_EPOCHS)
+        epochs_node = dir_tree_find(tree, FIFF.FIFFB_MNE_EPOCHS)
         if len(epochs_node) == 0:
-            raise ValueError('Could not find epochs data')
+            # before version 0.11 we errantly saved with this tag instead of
+            # an MNE tag
+            epochs_node = dir_tree_find(tree, FIFF.FIFFB_EPOCHS)
+            if len(epochs_node) == 0:
+                epochs_node = dir_tree_find(tree, 122)  # 122 used before v0.11
+                if len(epochs_node) == 0:
+                    raise ValueError('Could not find epochs data')
 
         my_epochs = epochs_node[0]
 
@@ -2163,10 +2205,12 @@ def _read_one_epoch_file(f, tree, fname, preload):
                 fid.seek(pos, 0)
                 data_tag = read_tag_info(fid)
                 data_tag.pos = pos
-            elif kind == FIFF.FIFF_MNE_BASELINE_MIN:
+            elif kind in [FIFF.FIFF_MNE_BASELINE_MIN, 304]:
+                # Constant 304 was used before v0.11
                 tag = read_tag(fid, pos)
                 bmin = float(tag.data)
-            elif kind == FIFF.FIFF_MNE_BASELINE_MAX:
+            elif kind in [FIFF.FIFF_MNE_BASELINE_MAX, 305]:
+                # Constant 305 was used before v0.11
                 tag = read_tag(fid, pos)
                 bmax = float(tag.data)
             elif kind == FIFF.FIFFB_MNE_EPOCHS_SELECTION:
@@ -2224,7 +2268,7 @@ def _read_one_epoch_file(f, tree, fname, preload):
 
 
 @verbose
-def read_epochs(fname, proj=True, add_eeg_ref=True, preload=True,
+def read_epochs(fname, proj=True, add_eeg_ref=False, preload=True,
                 verbose=None):
     """Read epochs from a fif file
 
@@ -2428,32 +2472,15 @@ def bootstrap(epochs, random_state=None):
 
 def _check_merge_epochs(epochs_list):
     """Aux function"""
-    event_ids = set(tuple(epochs.event_id.items()) for epochs in epochs_list)
-    if len(event_ids) == 1:
-        event_id = dict(event_ids.pop())
-    else:
+    if len(set(tuple(epochs.event_id.items()) for epochs in epochs_list)) != 1:
         raise NotImplementedError("Epochs with unequal values for event_id")
-
-    tmins = set(epochs.tmin for epochs in epochs_list)
-    if len(tmins) == 1:
-        tmin = tmins.pop()
-    else:
+    if len(set(epochs.tmin for epochs in epochs_list)) != 1:
         raise NotImplementedError("Epochs with unequal values for tmin")
-
-    tmaxs = set(epochs.tmax for epochs in epochs_list)
-    if len(tmaxs) == 1:
-        tmax = tmaxs.pop()
-    else:
+    if len(set(epochs.tmax for epochs in epochs_list)) != 1:
         raise NotImplementedError("Epochs with unequal values for tmax")
-
-    baselines = set(epochs.baseline for epochs in epochs_list)
-    if len(baselines) == 1:
-        baseline = baselines.pop()
-    else:
+    if len(set(epochs.baseline for epochs in epochs_list)) != 1:
         raise NotImplementedError("Epochs with unequal values for baseline")
 
-    return event_id, tmin, tmax, baseline
-
 
 @verbose
 def add_channels_epochs(epochs_list, name='Unknown', add_eeg_ref=True,
@@ -2483,8 +2510,7 @@ def add_channels_epochs(epochs_list, name='Unknown', add_eeg_ref=True,
 
     info = _merge_info([epochs.info for epochs in epochs_list])
     data = [epochs.get_data() for epochs in epochs_list]
-    event_id, tmin, tmax, baseline = _check_merge_epochs(epochs_list)
-
+    _check_merge_epochs(epochs_list)
     for d in data:
         if len(d) != len(data[0]):
             raise ValueError('all epochs must be of the same length')
@@ -2508,10 +2534,6 @@ def add_channels_epochs(epochs_list, name='Unknown', add_eeg_ref=True,
 
     epochs = epochs_list[0].copy()
     epochs.info = info
-    epochs.event_id = event_id
-    epochs.tmin = tmin
-    epochs.tmax = tmax
-    epochs.baseline = baseline
     epochs.picks = None
     epochs.name = name
     epochs.verbose = verbose
@@ -2606,3 +2628,179 @@ def concatenate_epochs(epochs_list):
     .. versionadded:: 0.9.0
     """
     return _finish_concat(*_concatenate_epochs(epochs_list))
+
+
+@verbose
+def average_movements(epochs, pos, orig_sfreq=None, picks=None, origin='auto',
+                      weight_all=True, int_order=8, ext_order=3,
+                      ignore_ref=False, return_mapping=False, verbose=None):
+    """Average data using Maxwell filtering, transforming using head positions
+
+    Parameters
+    ----------
+    epochs : instance of Epochs
+        The epochs to operate on.
+    pos : tuple
+        Tuple of position information as ``(trans, rot, t)`` like that
+        returned by `get_chpi_positions`. The positions will be matched
+        based on the last given position before the onset of the epoch.
+    orig_sfreq : float | None
+        The original sample frequency of the data (that matches the
+        event sample numbers in ``epochs.events``). Can be ``None``
+        if data have not been decimated or resampled.
+    picks : array-like of int | None
+        If None, only MEG, EEG and SEEG channels are kept;
+        otherwise the channel indices in picks are kept.
+    origin : array-like, shape (3,) | str
+        Origin of internal and external multipolar moment space in head
+        coords and in meters. The default is ``'auto'``, which means
+        a head-digitization-based origin fit.
+    weight_all : bool
+        If True, all channels are weighted by the SSS basis weights.
+        If False, only MEG channels are weighted, other channels
+        receive uniform weight per epoch.
+    int_order : int
+        Order of internal component of spherical expansion.
+    ext_order : int
+        Order of external component of spherical expansion.
+    ignore_ref : bool
+        If True, do not include reference channels in compensation. This
+        option should be True for KIT files, since Maxwell filtering
+        with reference channels is not currently supported.
+    return_mapping : bool
+        If True, return the mapping matrix.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    evoked : instance of Evoked
+        The averaged epochs.
+
+    See Also
+    --------
+    mne.preprocessing.maxwell_filter
+
+    Notes
+    -----
+    The Maxwell filtering version of this algorithm is described in [1]_,
+    in section V.B "Virtual signals and movement correction", equations
+    40-44. For additional validation, see [2]_.
+
+    Regularization has not been added because in testing it appears to
+    decrease dipole localization accuracy relative to using all components.
+    Fine calibration and cross-talk cancellation, however, could be added
+    to this algorithm based on user demand.
+
+    .. versionadded:: 0.11
+
+    References
+    ----------
+    .. [1] Taulu S. and Kajola M. "Presentation of electromagnetic
+           multichannel data: The signal space separation method,"
+           Journal of Applied Physics, vol. 97, pp. 124905 1-10, 2005.
+
+    .. [2] Wehner DT, Hämäläinen MS, Mody M, Ahlfors SP. "Head movements
+           of children in MEG: Quantification, effects on source
+           estimation, and compensation." NeuroImage 40:541–550, 2008.
+    """
+    from .preprocessing.maxwell import (_info_sss_basis, _reset_meg_bads,
+                                        _check_usable, _col_norm_pinv,
+                                        _get_n_moments, _get_mf_picks)
+    if not isinstance(epochs, _BaseEpochs):
+        raise TypeError('epochs must be an instance of Epochs, not %s'
+                        % (type(epochs),))
+    orig_sfreq = epochs.info['sfreq'] if orig_sfreq is None else orig_sfreq
+    orig_sfreq = float(orig_sfreq)
+    trn, rot, t = pos
+    del pos
+    _check_usable(epochs)
+    origin = _check_origin(origin, epochs.info, 'head')
+
+    logger.info('Aligning and averaging up to %s epochs'
+                % (len(epochs.events)))
+    meg_picks, _, _, good_picks, coil_scale, _ = \
+        _get_mf_picks(epochs.info, int_order, ext_order, ignore_ref)
+    n_channels, n_times = len(epochs.ch_names), len(epochs.times)
+    other_picks = np.setdiff1d(np.arange(n_channels), meg_picks)
+    data = np.zeros((n_channels, n_times))
+    count = 0
+    # keep only MEG w/bad channels marked in "info_from"
+    info_from = pick_info(epochs.info, good_picks, copy=True)
+    # remove MEG bads in "to" info
+    info_to = deepcopy(epochs.info)
+    _reset_meg_bads(info_to)
+    # set up variables
+    w_sum = 0.
+    n_in, n_out = _get_n_moments([int_order, ext_order])
+    S_decomp = 0.  # this will end up being a weighted average
+    last_trans = None
+    decomp_coil_scale = coil_scale[good_picks]
+    for ei, epoch in enumerate(epochs):
+        event_time = epochs.events[epochs._current - 1, 0] / orig_sfreq
+        use_idx = np.where(t <= event_time)[0]
+        if len(use_idx) == 0:
+            raise RuntimeError('Event time %0.3f occurs before first '
+                               'position time %0.3f' % (event_time, t[0]))
+        use_idx = use_idx[-1]
+        trans = np.row_stack([np.column_stack([rot[use_idx],
+                                               trn[[use_idx]].T]),
+                              [[0., 0., 0., 1.]]])
+        loc_str = ', '.join('%0.1f' % tr for tr in (trans[:3, 3] * 1000))
+        if last_trans is None or not np.allclose(last_trans, trans):
+            logger.info('    Processing epoch %s (device location: %s mm)'
+                        % (ei + 1, loc_str))
+            reuse = False
+            last_trans = trans
+        else:
+            logger.info('    Processing epoch %s (device location: same)'
+                        % (ei + 1,))
+            reuse = True
+        epoch = epoch.copy()  # because we operate inplace
+        if not reuse:
+            S = _info_sss_basis(info_from, trans, origin,
+                                int_order, ext_order, True,
+                                coil_scale=decomp_coil_scale)
+            # Get the weight from the un-regularized version
+            weight = np.sqrt(np.sum(S * S))  # Frobenius norm (eq. 44)
+            # XXX Eventually we could do cross-talk and fine-cal here
+            S *= weight
+        S_decomp += S  # eq. 41
+        epoch[slice(None) if weight_all else meg_picks] *= weight
+        data += epoch  # eq. 42
+        w_sum += weight
+        count += 1
+    del info_from
+    mapping = None
+    if count == 0:
+        data.fill(np.nan)
+    else:
+        data[meg_picks] /= w_sum
+        data[other_picks] /= w_sum if weight_all else count
+        # Finalize weighted average decomp matrix
+        S_decomp /= w_sum
+        # Get recon matrix
+        # (We would need to include external here for regularization to work)
+        S_recon = _info_sss_basis(epochs.info, None, origin,
+                                  int_order, 0, True)
+        # We could determine regularization on basis of destination basis
+        # matrix, restricted to good channels, as regularizing individual
+        # matrices within the loop above does not seem to work. But in
+        # testing this seemed to decrease localization quality in most cases,
+        # so we do not provide the option here.
+        S_recon /= coil_scale
+        # Invert
+        pS_ave = _col_norm_pinv(S_decomp)[0][:n_in]
+        pS_ave *= decomp_coil_scale.T
+        # Get mapping matrix
+        mapping = np.dot(S_recon, pS_ave)
+        # Apply mapping
+        data[meg_picks] = np.dot(mapping, data[good_picks])
+    evoked = epochs._evoked_from_epoch_data(
+        data, info_to, picks, count, FIFF.FIFFV_ASPECT_AVERAGE)
+    logger.info('Created Evoked dataset from %s epochs' % (count,))
+    return (evoked, mapping) if return_mapping else evoked
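
A hedged usage sketch of the new function (the file names are placeholders,
the event parameters are illustrative, and ``get_chpi_positions`` is the
position reader named in the docstring above):

    import mne
    from mne.epochs import average_movements

    raw = mne.io.Raw('raw_with_chpi.fif', preload=True)  # placeholder name
    pos = mne.get_chpi_positions('hp.txt')               # (trans, rot, t)
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
                        preload=True)
    evoked = average_movements(epochs, pos, int_order=8, ext_order=3)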
diff --git a/mne/evoked.py b/mne/evoked.py
index fdd9c60..f54ce66 100644
--- a/mne/evoked.py
+++ b/mne/evoked.py
@@ -107,7 +107,7 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
                 raise ValueError(r"'proj' must be 'True' or 'False'")
 
             #   Read the measurement info
-            info, meas = read_meas_info(fid, tree)
+            info, meas = read_meas_info(fid, tree, clean_bads=True)
             info['filename'] = fname
 
             #   Locate the data of interest
@@ -135,9 +135,16 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
                                      'found datasets:\n  %s'
                                      % (condition, kind, t))
                 condition = found_cond[0]
+            elif condition is None:
+                if len(evoked_node) > 1:
+                    _, _, conditions = _get_entries(fid, evoked_node)
+                    raise TypeError("Evoked file has more than one "
+                                    "conditions, the condition parameters "
+                                    "must be specified from:\n%s" % conditions)
+                else:
+                    condition = 0
 
             if condition >= len(evoked_node) or condition < 0:
-                fid.close()
                 raise ValueError('Data set selector out of range')
 
             my_evoked = evoked_node[condition]
@@ -285,6 +292,7 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
     def __repr__(self):
         s = "comment : '%s'" % self.comment
+        s += ', kind : %s' % self.kind
         s += ", time : [%f, %f]" % (self.times[0], self.times[-1])
         s += ", n_epochs : %d" % self.nave
         s += ", n_channels x n_times : %s x %s" % self.data.shape
@@ -345,7 +353,8 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
     def plot(self, picks=None, exclude='bads', unit=True, show=True, ylim=None,
              xlim='tight', proj=False, hline=None, units=None, scalings=None,
-             titles=None, axes=None, gfp=False):
+             titles=None, axes=None, gfp=False, window_title=None,
+             spatial_colors=False):
         """Plot evoked data as butterfly plots
 
         Left click to a line shows the channel name. Selecting an area by
@@ -392,11 +401,20 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         gfp : bool | 'only'
             Plot GFP in green if True or "only". If "only", then the individual
             channel traces will not be shown.
+        window_title : str | None
+            The title to put at the top of the figure window.
+        spatial_colors : bool
+            If True, the lines are color coded by mapping physical sensor
+            coordinates into color values. Spatially similar channels will have
+            similar colors. Bad channels will be dotted. If False, the good
+            channels are plotted black and bad channels red. Defaults to False.
         """
         return plot_evoked(self, picks=picks, exclude=exclude, unit=unit,
                            show=show, ylim=ylim, proj=proj, xlim=xlim,
                            hline=hline, units=units, scalings=scalings,
-                           titles=titles, axes=axes, gfp=gfp)
+                           titles=titles, axes=axes, gfp=gfp,
+                           window_title=window_title,
+                           spatial_colors=spatial_colors)
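
With the new arguments a butterfly plot can be colored by sensor position
and given a window title, e.g. (assuming ``evoked`` is an Evoked instance):

    evoked.plot(spatial_colors=True, gfp=True,
                window_title='Auditory response')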
 
     def plot_image(self, picks=None, exclude='bads', unit=True, show=True,
                    clim=None, xlim='tight', proj=False, units=None,
@@ -710,7 +728,8 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
                                   rank=None, show=show)
 
     def as_type(self, ch_type='grad', mode='fast'):
-        """Compute virtual evoked using interpolated fields in mag/grad channels.
+        """Compute virtual evoked using interpolated fields in mag/grad
+        channels.
 
         .. Warning:: Using virtual evoked to compute inverse can yield
             unexpected results. The virtual channels have `'_virtual'` appended
@@ -772,12 +791,12 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
             Either 0 or 1, the order of the detrending. 0 is a constant
             (DC) detrend, 1 is a linear detrend.
         picks : array-like of int | None
-            If None only MEG and EEG channels are detrended.
+            If None, only MEG, EEG and SEEG channels are detrended.
         """
         if picks is None:
             picks = pick_types(self.info, meg=True, eeg=True, ref_meg=False,
                                stim=False, eog=False, ecg=False, emg=False,
-                               exclude='bads')
+                               seeg=True, exclude='bads')
         self.data[picks] = detrend(self.data[picks], order, axis=-1)
 
     def copy(self):
@@ -817,14 +836,16 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
         Parameters
         ----------
-        ch_type : {'mag', 'grad', 'eeg', 'misc', None}
+        ch_type : {'mag', 'grad', 'eeg', 'seeg', 'misc', None}
             The channel type to use. Defaults to None. If more than one sensor
             type is present in the data, the channel type has to be explicitly
             set.
         tmin : float | None
             The minimum point in time to be considered for peak getting.
+            If None (default), the beginning of the data is used.
         tmax : float | None
             The maximum point in time to be considered for peak getting.
+            If None (default), the end of the data is used.
         mode : {'pos', 'neg', 'abs'}
             How to deal with the sign of the data. If 'pos' only positive
             values will be considered. If 'neg' only negative values will
@@ -841,9 +862,10 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
             The time point of the maximum response, either latency in seconds
             or index.
         """
-        supported = ('mag', 'grad', 'eeg', 'misc', 'None')
+        supported = ('mag', 'grad', 'eeg', 'seeg', 'misc', 'None')
 
-        data_picks = pick_types(self.info, meg=True, eeg=True, ref_meg=False)
+        data_picks = pick_types(self.info, meg=True, eeg=True, seeg=True,
+                                ref_meg=False)
         types_used = set([channel_type(self.info, idx) for idx in data_picks])
 
         if str(ch_type) not in supported:
@@ -861,7 +883,7 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
                                'must not be `None`, pass a sensor type '
                                'value instead')
 
-        meg, eeg, misc, picks = False, False, False, None
+        meg, eeg, misc, seeg, picks = False, False, False, False, None
 
         if ch_type == 'mag':
             meg = ch_type
@@ -871,16 +893,22 @@ class Evoked(ProjMixin, ContainsMixin, UpdateChannelsMixin,
             eeg = True
         elif ch_type == 'misc':
             misc = True
+        elif ch_type == 'seeg':
+            seeg = True
 
         if ch_type is not None:
             picks = pick_types(self.info, meg=meg, eeg=eeg, misc=misc,
-                               ref_meg=False)
+                               seeg=seeg, ref_meg=False)
 
-        data = self.data if picks is None else self.data[picks]
+        data = self.data
+        ch_names = self.ch_names
+        if picks is not None:
+            data = data[picks]
+            ch_names = [ch_names[k] for k in picks]
         ch_idx, time_idx = _get_peak(data, self.times, tmin,
                                      tmax, mode)
 
-        return (self.ch_names[ch_idx],
+        return (ch_names[ch_idx],
                 time_idx if time_as_index else self.times[time_idx])
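
With sEEG among the supported types, the peak channel and latency can now be
requested for it directly; a sketch assuming ``evoked`` contains sEEG
channels (the window bounds are illustrative):

    ch_name, latency = evoked.get_peak(ch_type='seeg', tmin=0.05, tmax=0.2,
                                       mode='abs')

Note that the returned name is now drawn from the picked subset, fixing the
earlier indexing into the full channel list.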
 
 
diff --git a/mne/forward/_compute_forward.py b/mne/forward/_compute_forward.py
index 583f0bb..16b5271 100644
--- a/mne/forward/_compute_forward.py
+++ b/mne/forward/_compute_forward.py
@@ -59,7 +59,7 @@ def _check_coil_frame(coils, coord_frame, bem):
     return coils, coord_frame
 
 
-def _lin_field_coeff(surf, mult, rmags, cosmags, ws, n_int, n_jobs):
+def _lin_field_coeff(surf, mult, rmags, cosmags, ws, bins, n_jobs):
     """Parallel wrapper for _do_lin_field_coeff to compute linear coefficients.
 
     Parameters
@@ -73,10 +73,10 @@ def _lin_field_coeff(surf, mult, rmags, cosmags, ws, n_int, n_jobs):
         3D positions of MEG coil integration points (from coil['rmag'])
     cosmag : ndarray, shape (n_integration_pts, 3)
         Direction of the MEG coil integration points (from coil['cosmag'])
-    ws : ndarray, shape (n_sensor_pts,)
+    ws : ndarray, shape (n_integration_pts,)
         Weights for MEG coil integration points
-    n_int : ndarray, shape (n_MEG_sensors,)
-        Number of integration points for each MEG sensor
+    bins : ndarray, shape (n_integration_points,)
+        The sensor assignments for each rmag/cosmag/w.
     n_jobs : int
         Number of jobs to run in parallel
 
@@ -88,14 +88,14 @@ def _lin_field_coeff(surf, mult, rmags, cosmags, ws, n_int, n_jobs):
     """
     parallel, p_fun, _ = parallel_func(_do_lin_field_coeff, n_jobs)
     nas = np.array_split
-    coeffs = parallel(p_fun(surf['rr'], t, tn, ta, rmags, cosmags, ws, n_int)
+    coeffs = parallel(p_fun(surf['rr'], t, tn, ta, rmags, cosmags, ws, bins)
                       for t, tn, ta in zip(nas(surf['tris'], n_jobs),
                                            nas(surf['tri_nn'], n_jobs),
                                            nas(surf['tri_area'], n_jobs)))
     return mult * np.sum(coeffs, axis=0)
 
 
-def _do_lin_field_coeff(bem_rr, tris, tn, ta, rmags, cosmags, ws, n_int):
+def _do_lin_field_coeff(bem_rr, tris, tn, ta, rmags, cosmags, ws, bins):
     """Compute field coefficients (parallel-friendly).
 
     See section IV of Mosher et al., 1999 (specifically equation 35).
@@ -117,16 +117,15 @@ def _do_lin_field_coeff(bem_rr, tris, tn, ta, rmags, cosmags, ws, n_int):
         Direction of the MEG coil integration points (from coil['cosmag'])
     ws : ndarray, shape (n_sensor_pts,)
         Weights for MEG coil integration points
-    n_int : ndarray, shape (n_MEG_sensors,)
-        Number of integration points for each MEG sensor
+    bins : ndarray, shape (n_sensor_pts,)
+        The sensor assignments for each rmag/cosmag/w.
 
     Returns
     -------
     coeff : ndarray, shape (n_MEG_sensors, n_BEM_vertices)
         Linear coefficients with effect of each BEM vertex on each sensor (?)
     """
-    coeff = np.zeros((len(n_int), len(bem_rr)))
-    bins = np.repeat(np.arange(len(n_int)), n_int)
+    coeff = np.zeros((bins[-1] + 1, len(bem_rr)))
     for tri, tri_nn, tri_area in zip(tris, tn, ta):
         # Accumulate the coefficients for each triangle node and add to the
         # corresponding coefficient matrix
@@ -147,7 +146,7 @@ def _do_lin_field_coeff(bem_rr, tris, tn, ta, rmags, cosmags, ws, n_int):
             c = fast_cross_3d(diff, tri_nn[np.newaxis, :])
             x = tri_area * np.sum(c * cosmags, axis=1) / \
                 (3.0 * dl * np.sqrt(dl))
-            zz += [np.bincount(bins, weights=x * ws, minlength=len(n_int))]
+            zz += [np.bincount(bins, weights=x * ws, minlength=bins[-1] + 1)]
         coeff[:, tri] += np.array(zz).T
     return coeff
 
@@ -158,7 +157,13 @@ def _concatenate_coils(coils):
     cosmags = np.concatenate([coil['cosmag'] for coil in coils])
     ws = np.concatenate([coil['w'] for coil in coils])
     n_int = np.array([len(coil['rmag']) for coil in coils])
-    return rmags, cosmags, ws, n_int
+    if n_int[-1] == 0:
+        # We assume each sensor has at least one integration point,
+        # which should be a safe assumption. But let's check it here, since
+        # our code elsewhere relies on bins[-1] + 1 being the number of sensors
+        raise RuntimeError('coils with no integration points are not '
+                           'supported')
+    bins = np.repeat(np.arange(len(n_int)), n_int)
+    return rmags, cosmags, ws, bins
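
The refactor hoists the per-point sensor assignment (``bins``) out of the
inner loops: previously each helper rebuilt it from the per-sensor point
counts (``n_int``) before calling ``np.bincount``. The two representations
relate as follows (pure NumPy, illustrative values):

    import numpy as np

    n_int = np.array([2, 3, 1])                  # integration pts per coil
    bins = np.repeat(np.arange(len(n_int)), n_int)
    # bins == [0, 0, 1, 1, 1, 2]; bins[-1] + 1 == len(n_int) == 3

    x = np.arange(6, dtype=float)                # one value per point
    sums = np.bincount(bins, weights=x, minlength=bins[-1] + 1)
    # sums == [1., 9., 5.] -- per-coil totals, as used in the field code

The guard on ``n_int[-1]`` exists because ``bins[-1] + 1`` only equals the
number of coils when the last coil has at least one integration point.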
 
 
 def _bem_specify_coils(bem, coils, coord_frame, mults, n_jobs):
@@ -193,15 +198,15 @@ def _bem_specify_coils(bem, coils, coord_frame, mults, n_jobs):
     # potential approximation
 
     # Process each of the surfaces
-    rmags, cosmags, ws, n_int = _concatenate_coils(coils)
+    rmags, cosmags, ws, bins = _concatenate_coils(coils)
     lens = np.cumsum(np.r_[0, [len(s['rr']) for s in bem['surfs']]])
-    coeff = np.empty((len(n_int), lens[-1]))  # shape(n_coils, n_BEM_verts)
+    coeff = np.empty((bins[-1] + 1, lens[-1]))  # shape(n_coils, n_BEM_verts)
 
     # Compute coeffs for each surface, one at a time
     for o1, o2, surf, mult in zip(lens[:-1], lens[1:],
                                   bem['surfs'], bem['field_mult']):
         coeff[:, o1:o2] = _lin_field_coeff(surf, mult, rmags, cosmags, ws,
-                                           n_int, n_jobs)
+                                           bins, n_jobs)
     # put through the bem
     sol = np.dot(coeff, bem['solution'])
     sol *= mults
@@ -514,8 +519,7 @@ def _sphere_field(rrs, coils, sphere):
     The formulas have been manipulated for efficient computation
     by Matti Hamalainen, February 1990
     """
-    rmags, cosmags, ws, n_int = _concatenate_coils(coils)
-    bins = np.repeat(np.arange(len(n_int)), n_int)
+    rmags, cosmags, ws, bins = _concatenate_coils(coils)
 
     # Shift to the sphere model coordinates
     rrs = rrs - sphere['r0']
@@ -546,8 +550,7 @@ def _sphere_field(rrs, coils, sphere):
         v2 = fast_cross_3d(rr[np.newaxis, :], this_poss)
         xx = ((good * ws)[:, np.newaxis] *
               (v1 / F[:, np.newaxis] + v2 * g[:, np.newaxis]))
-        zz = np.array([np.bincount(bins, weights=x,
-                                   minlength=len(n_int)) for x in xx.T])
+        zz = np.array([np.bincount(bins, x, bins[-1] + 1) for x in xx.T])
         B[3 * ri:3 * ri + 3, :] = zz
     B *= _MAG_FACTOR
     return B
@@ -555,8 +558,7 @@ def _sphere_field(rrs, coils, sphere):
 
 def _eeg_spherepot_coil(rrs, coils, sphere):
     """Calculate the EEG in the sphere model."""
-    rmags, cosmags, ws, n_int = _concatenate_coils(coils)
-    bins = np.repeat(np.arange(len(n_int)), n_int)
+    rmags, cosmags, ws, bins = _concatenate_coils(coils)
 
     # Shift to the sphere model coordinates
     rrs = rrs - sphere['r0']
@@ -611,8 +613,7 @@ def _eeg_spherepot_coil(rrs, coils, sphere):
 
             # compute total result
             xx = vval_one * ws[:, np.newaxis]
-            zz = np.array([np.bincount(bins, weights=x,
-                                       minlength=len(n_int)) for x in xx.T])
+            zz = np.array([np.bincount(bins, x, bins[-1] + 1) for x in xx.T])
             B[3 * ri:3 * ri + 3, :] = zz
     # finishing by scaling by 1/(4*M_PI)
     B *= 0.25 / np.pi
@@ -624,7 +625,6 @@ def _eeg_spherepot_coil(rrs, coils, sphere):
 
 def _magnetic_dipole_field_vec(rrs, coils):
     """Compute an MEG forward solution for a set of magnetic dipoles."""
-    fwd = np.empty((3 * len(rrs), len(coils)))
     # The code below is a more efficient version (~30x) of this:
     # for ri, rr in enumerate(rrs):
     #     for k in range(len(coils)):
@@ -641,13 +641,11 @@ def _magnetic_dipole_field_vec(rrs, coils):
     #                 dist2 * this_coil['cosmag']) / dist5
     #         fwd[3*ri:3*ri+3, k] = 1e-7 * np.dot(this_coil['w'], sum_)
     if isinstance(coils, tuple):
-        rmags, cosmags, ws, n_int = coils
+        rmags, cosmags, ws, bins = coils
     else:
-        rmags, cosmags, ws, n_int = _concatenate_coils(coils)
+        rmags, cosmags, ws, bins = _concatenate_coils(coils)
     del coils
-
-    fwd = np.empty((3 * len(rrs), len(n_int)))
-    bins = np.repeat(np.arange(len(n_int)), n_int)
+    fwd = np.empty((3 * len(rrs), bins[-1] + 1))
     for ri, rr in enumerate(rrs):
         diff = rmags - rr
         dist2 = np.sum(diff * diff, axis=1)[:, np.newaxis]
@@ -658,8 +656,7 @@ def _magnetic_dipole_field_vec(rrs, coils):
                                                       axis=1)[:, np.newaxis] -
                                     dist2 * cosmags) / (dist2 * dist2 * dist)
         for ii in range(3):
-            fwd[3 * ri + ii] = np.bincount(bins, weights=sum_[:, ii],
-                                           minlength=len(n_int))
+            fwd[3 * ri + ii] = np.bincount(bins, sum_[:, ii], bins[-1] + 1)
     fwd *= 1e-7
     return fwd
 
diff --git a/mne/forward/_field_interpolation.py b/mne/forward/_field_interpolation.py
index 88d3802..e46f2cf 100644
--- a/mne/forward/_field_interpolation.py
+++ b/mne/forward/_field_interpolation.py
@@ -4,6 +4,7 @@ import numpy as np
 from scipy import linalg
 from copy import deepcopy
 
+from ..bem import _check_origin
 from ..io.constants import FIFF
 from ..io.pick import pick_types, pick_info
 from ..surface import get_head_surf, get_meg_helmet_surf
@@ -40,24 +41,24 @@ def _ad_hoc_noise(coils, ch_type='meg'):
 
 def _setup_dots(mode, coils, ch_type):
     """Setup dot products"""
-    my_origin = np.array([0.0, 0.0, 0.04])
     int_rad = 0.06
     noise = _ad_hoc_noise(coils, ch_type)
     if mode == 'fast':
         # Use 50 coefficients with nearest-neighbor interpolation
-        lut, n_fact = _get_legen_table(ch_type, False, 50)
-        lut_fun = partial(_get_legen_lut_fast, lut=lut)
+        n_coeff = 50
+        lut_fun = _get_legen_lut_fast
     else:  # 'accurate'
         # Use 100 coefficients with linear interpolation
-        lut, n_fact = _get_legen_table(ch_type, False, 100)
-        lut_fun = partial(_get_legen_lut_accurate, lut=lut)
-
-    return my_origin, int_rad, noise, lut_fun, n_fact
+        n_coeff = 100
+        lut_fun = _get_legen_lut_accurate
+    lut, n_fact = _get_legen_table(ch_type, False, n_coeff, verbose=False)
+    lut_fun = partial(lut_fun, lut=lut)
+    return int_rad, noise, lut_fun, n_fact
 
 
 def _compute_mapping_matrix(fmd, info):
     """Do the hairy computations"""
-    logger.info('preparing the mapping matrix...')
+    logger.info('    Preparing the mapping matrix...')
     # assemble a projector and apply it to the data
     ch_names = fmd['ch_names']
     projs = info.get('projs', list())
@@ -80,11 +81,10 @@ def _compute_mapping_matrix(fmd, info):
     sumk = np.cumsum(sing)
     sumk /= sumk[-1]
     fmd['nest'] = np.where(sumk > (1.0 - fmd['miss']))[0][0]
-    logger.info('Truncate at %d missing %g' % (fmd['nest'], fmd['miss']))
+    logger.info('    [Truncate at %d missing %g]' % (fmd['nest'], fmd['miss']))
     sing = 1.0 / sing[:fmd['nest']]
 
     # Put the inverse together
-    logger.info('Put the inverse together...')
     inv = np.dot(uu[:, :fmd['nest']] * sing, vv[:fmd['nest']]).T
 
     # Sandwich with the whitener
@@ -101,22 +101,20 @@ def _compute_mapping_matrix(fmd, info):
     # Optionally apply the average electrode reference to the final field map
     if fmd['kind'] == 'eeg':
         if _has_eeg_average_ref_proj(projs):
-            logger.info('The map will have average electrode reference')
+            logger.info('    The map will have average electrode reference')
             mapping_mat -= np.mean(mapping_mat, axis=0)[np.newaxis, :]
     return mapping_mat
 
 
-def _map_meg_channels(inst, pick_from, pick_to, mode='fast'):
+def _map_meg_channels(info_from, info_to, mode='fast', origin=(0., 0., 0.04)):
     """Find mapping from one set of channels to another.
 
     Parameters
     ----------
-    inst : mne.io.Raw, mne.Epochs or mne.Evoked
-        The data to interpolate. Must be preloaded.
-    pick_from : array-like of int
-        The channels from which to interpolate.
-    pick_to : array-like of int
-        The channels to which to interpolate.
+    info_from : mne.io.MeasInfo
+        The measurement info to interpolate from.
+    info_to : mne.io.MeasInfo
+        The measurement info to interpolate to.
     mode : str
         Either `'accurate'` or `'fast'`, determines the quality of the
         Legendre polynomial expansion used. `'fast'` should be sufficient
@@ -127,43 +125,38 @@ def _map_meg_channels(inst, pick_from, pick_to, mode='fast'):
     mapping : array
         A mapping matrix of shape (n_channels_to, n_channels_from).
     """
-    info_from = pick_info(inst.info, pick_from, copy=True)
-    info_to = pick_info(inst.info, pick_to, copy=True)
-
     # no need to apply trans because both from and to coils are in device
     # coordinates
-    templates = _read_coil_defs()
+    templates = _read_coil_defs(verbose=False)
     coils_from = _create_meg_coils(info_from['chs'], 'normal',
                                    info_from['dev_head_t'], templates)
     coils_to = _create_meg_coils(info_to['chs'], 'normal',
                                  info_to['dev_head_t'], templates)
     miss = 1e-4  # Smoothing criterion for MEG
-
+    origin = _check_origin(origin, info_from)
     #
     # Step 2. Calculate the dot products
     #
-    my_origin, int_rad, noise, lut_fun, n_fact = _setup_dots(mode, coils_from,
-                                                             'meg')
-    logger.info('Computing dot products for %i coils...' % (len(coils_from)))
-    self_dots = _do_self_dots(int_rad, False, coils_from, my_origin, 'meg',
+    int_rad, noise, lut_fun, n_fact = _setup_dots(mode, coils_from, 'meg')
+    logger.info('    Computing dot products for %i coils...'
+                % (len(coils_from)))
+    self_dots = _do_self_dots(int_rad, False, coils_from, origin, 'meg',
                               lut_fun, n_fact, n_jobs=1)
-    logger.info('Computing cross products for coils %i x %i coils...'
+    logger.info('    Computing cross products for coils %i x %i coils...'
                 % (len(coils_from), len(coils_to)))
     cross_dots = _do_cross_dots(int_rad, False, coils_from, coils_to,
-                                my_origin, 'meg', lut_fun, n_fact).T
+                                origin, 'meg', lut_fun, n_fact).T
 
     ch_names = [c['ch_name'] for c in info_from['chs']]
     fmd = dict(kind='meg', ch_names=ch_names,
-               origin=my_origin, noise=noise, self_dots=self_dots,
+               origin=origin, noise=noise, self_dots=self_dots,
                surface_dots=cross_dots, int_rad=int_rad, miss=miss)
-    logger.info('Field mapping data ready')
 
     #
     # Step 3. Compute the mapping matrix
     #
-    fmd['data'] = _compute_mapping_matrix(fmd, info_from)
-
-    return fmd['data']
+    mapping = _compute_mapping_matrix(fmd, info_from)
+    return mapping
 
 
 def _as_meg_type_evoked(evoked, ch_type='grad', mode='fast'):
@@ -203,7 +196,9 @@ def _as_meg_type_evoked(evoked, ch_type='grad', mode='fast'):
                          ' locations of the destination channels will be used'
                          ' for interpolation.')
 
-    mapping = _map_meg_channels(evoked, pick_from, pick_to, mode='fast')
+    info_from = pick_info(evoked.info, pick_from, copy=True)
+    info_to = pick_info(evoked.info, pick_to, copy=True)
+    mapping = _map_meg_channels(info_from, info_to, mode='fast')
 
     # compute evoked data by multiplying by the 'gain matrix' from
     # original sensors to virtual sensors
@@ -223,7 +218,7 @@ def _as_meg_type_evoked(evoked, ch_type='grad', mode='fast'):
 
 @verbose
 def _make_surface_mapping(info, surf, ch_type='meg', trans=None, mode='fast',
-                          n_jobs=1, verbose=None):
+                          n_jobs=1, origin=(0., 0., 0.04), verbose=None):
     """Re-map M/EEG data to a surface
 
     Parameters
@@ -244,6 +239,10 @@ def _make_surface_mapping(info, surf, ch_type='meg', trans=None, mode='fast',
         for most applications.
     n_jobs : int
         Number of permutations to run in parallel (requires joblib package).
+    origin : array-like, shape (3,) | str
+        Origin of internal and external multipolar moment space in head
+        coords and in meters. The default is ``(0., 0., 0.04)``; passing
+        ``'auto'`` uses a head-digitization-based origin fit.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -263,8 +262,8 @@ def _make_surface_mapping(info, surf, ch_type='meg', trans=None, mode='fast',
 
     # deal with coordinate frames here -- always go to "head" (easiest)
     surf = transform_surface_to(deepcopy(surf), 'head', trans)
-
     n_jobs = check_n_jobs(n_jobs)
+    origin = _check_origin(origin, info)
 
     #
     # Step 1. Prepare the coil definitions
@@ -296,16 +295,15 @@ def _make_surface_mapping(info, surf, ch_type='meg', trans=None, mode='fast',
     #
     # Step 2. Calculate the dot products
     #
-    my_origin, int_rad, noise, lut_fun, n_fact = _setup_dots(mode, coils,
-                                                             ch_type)
+    int_rad, noise, lut_fun, n_fact = _setup_dots(mode, coils, ch_type)
     logger.info('Computing dot products for %i %s...' % (len(coils), type_str))
-    self_dots = _do_self_dots(int_rad, False, coils, my_origin, ch_type,
+    self_dots = _do_self_dots(int_rad, False, coils, origin, ch_type,
                               lut_fun, n_fact, n_jobs)
     sel = np.arange(len(surf['rr']))  # eventually we should do sub-selection
     logger.info('Computing dot products for %i surface locations...'
                 % len(sel))
     surface_dots = _do_surface_dots(int_rad, False, coils, surf, sel,
-                                    my_origin, ch_type, lut_fun, n_fact,
+                                    origin, ch_type, lut_fun, n_fact,
                                     n_jobs)
 
     #
@@ -313,7 +311,7 @@ def _make_surface_mapping(info, surf, ch_type='meg', trans=None, mode='fast',
     #
     ch_names = [c['ch_name'] for c in chs]
     fmd = dict(kind=ch_type, surf=surf, ch_names=ch_names, coils=coils,
-               origin=my_origin, noise=noise, self_dots=self_dots,
+               origin=origin, noise=noise, self_dots=self_dots,
                surface_dots=surface_dots, int_rad=int_rad, miss=miss)
     logger.info('Field mapping data ready')
 
@@ -327,9 +325,10 @@ def _make_surface_mapping(info, surf, ch_type='meg', trans=None, mode='fast',
     return fmd
 
 
+@verbose
 def make_field_map(evoked, trans='auto', subject=None, subjects_dir=None,
                    ch_type=None, mode='fast', meg_surf='helmet',
-                   n_jobs=1):
+                   origin=(0., 0., 0.04), n_jobs=1, verbose=None):
     """Compute surface maps used for field display in 3D
 
     Parameters
@@ -357,8 +356,20 @@ def make_field_map(evoked, trans='auto', subject=None, subjects_dir=None,
     meg_surf : str
         Should be ``'helmet'`` or ``'head'`` to specify in which surface
         to compute the MEG field map. The default value is ``'helmet'``
+    origin : array-like, shape (3,) | str
+        Origin of internal and external multipolar moment space in head
+        coords and in meters. The default is ``(0., 0., 0.04)``; passing
+        ``'auto'`` uses a head-digitization-based origin fit.
+
+        .. versionadded:: 0.11
+
     n_jobs : int
         The number of jobs to run in parallel.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+        .. versionadded:: 0.11
+
 
     Returns
     -------
@@ -406,7 +417,7 @@ def make_field_map(evoked, trans='auto', subject=None, subjects_dir=None,
 
     for this_type, this_surf in zip(types, surfs):
         this_map = _make_surface_mapping(evoked.info, this_surf, this_type,
-                                         trans, n_jobs=n_jobs)
+                                         trans, n_jobs=n_jobs, origin=origin)
         this_map['surf'] = this_surf  # XXX : a bit weird...
         surf_maps.append(this_map)
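
A hedged usage sketch of the extended signature (assuming ``evoked`` is an
Evoked instance; the trans file name and time point are placeholders):

    import mne

    maps = mne.make_field_map(evoked, trans='sample-trans.fif',
                              subject='sample', meg_surf='helmet',
                              origin=(0., 0., 0.04))
    evoked.plot_field(maps, time=0.1)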
 
diff --git a/mne/forward/_lead_dots.py b/mne/forward/_lead_dots.py
index f0f4d15..25be329 100644
--- a/mne/forward/_lead_dots.py
+++ b/mne/forward/_lead_dots.py
@@ -11,7 +11,7 @@ import numpy as np
 from numpy.polynomial import legendre
 
 from ..parallel import parallel_func
-from ..utils import logger, _get_extra_data_path
+from ..utils import logger, verbose, _get_extra_data_path
 
 
 ##############################################################################
@@ -48,8 +48,9 @@ def _get_legen_der(xx, n_coeff=100):
     return coeffs
 
 
+@verbose
 def _get_legen_table(ch_type, volume_integral=False, n_coeff=100,
-                     n_interp=20000, force_calc=False):
+                     n_interp=20000, force_calc=False, verbose=None):
     """Return a (generated) LUT of Legendre (derivative) polynomial coeffs"""
     if n_interp % 2 != 0:
         raise RuntimeError('n_interp must be even')
@@ -140,7 +141,8 @@ def _comp_sum_eeg(beta, ctheta, lut_fun, n_fact):
     coeffs = lut_fun(ctheta)
     betans = np.cumprod(np.tile(beta[:, np.newaxis], (1, n_fact.shape[0])),
                         axis=1)
-    s0 = np.dot(coeffs * betans, n_fact)  # == weighted sum across cols
+    coeffs *= betans
+    s0 = np.dot(coeffs, n_fact)  # == weighted sum across cols
     return s0
 
 
@@ -173,20 +175,25 @@ def _comp_sums_meg(beta, ctheta, lut_fun, n_fact, volume_integral):
     #  * sums[:, 2]    n/((2n+1)(n+1)) beta^(n+1) P_n'
     #  * sums[:, 3]    n/((2n+1)(n+1)) beta^(n+1) P_n''
     coeffs = lut_fun(ctheta)
-    beta = (np.cumprod(np.tile(beta[:, np.newaxis], (1, n_fact.shape[0])),
-                       axis=1) * beta[:, np.newaxis])
+    bbeta = np.cumprod(np.tile(beta[np.newaxis], (n_fact.shape[0], 1)),
+                       axis=0)
+    bbeta *= beta
     # This is equivalent, but slower:
-    # sums = np.sum(beta[:, :, np.newaxis] * n_fact * coeffs, axis=1)
+    # sums = np.sum(bbeta.T[:, :, np.newaxis] * n_fact * coeffs, axis=1)
     # sums = np.rollaxis(sums, 2)
-    sums = np.einsum('ij,jk,ijk->ki', beta, n_fact, coeffs)
+    sums = np.einsum('ji,jk,ijk->ki', bbeta, n_fact, coeffs)
     return sums
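
The reshaped ``bbeta`` carries the powers of beta with the coefficient axis
first; the einsum is the same triple contraction as before. A quick NumPy
check of the pattern with random data of illustrative shapes:

    import numpy as np

    n_pts, n_coeff, n_sums = 5, 4, 3
    bbeta = np.random.rand(n_coeff, n_pts)
    n_fact = np.random.rand(n_coeff, n_sums)
    coeffs = np.random.rand(n_pts, n_coeff, n_sums)

    fast = np.einsum('ji,jk,ijk->ki', bbeta, n_fact, coeffs)
    slow = np.sum(bbeta.T[:, :, np.newaxis] * n_fact * coeffs, axis=1).T
    assert np.allclose(fast, slow)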
 
 
 ###############################################################################
 # SPHERE DOTS
 
-def _fast_sphere_dot_r0(r, rr1, rr2, lr1, lr2, cosmags1, cosmags2,
-                        w1, w2, volume_integral, lut, n_fact, ch_type):
+_meg_const = 4e-14 * np.pi  # This is \mu_0^2/4\pi
+_eeg_const = 1.0 / (4.0 * np.pi)
+
+
+def _fast_sphere_dot_r0(r, rr1_orig, rr2s, lr1, lr2s, cosmags1, cosmags2s,
+                        w1, w2s, volume_integral, lut, n_fact, ch_type):
     """Lead field dot product computation for M/EEG in the sphere model.
 
     Parameters
@@ -196,19 +203,19 @@ def _fast_sphere_dot_r0(r, rr1, rr2, lr1, lr2, cosmags1, cosmags2,
         beta = (r * r) / (lr1 * lr2).
     rr1_orig : array, shape (n_points x 3)
         Normalized position vectors of integration points in first sensor.
-    rr2 : array, shape (n_points x 3)
+    rr2s : list
         Normalized position vector of integration points in second sensor.
     lr1 : array, shape (n_points x 1)
         Magnitude of position vector of integration points in first sensor.
-    lr2 : array, shape (n_points x 1)
+    lr2s : list
         Magnitude of position vector of integration points in second sensor.
     cosmags1 : array, shape (n_points x 1)
         Direction of integration points in first sensor.
-    cosmags2 : array, shape (n_points x 1)
+    cosmags2s : list
         Direction of integration points in second sensor.
-    w1 : array, shape (n_points x 1)
+    w1 : array, shape (n_points x 1) | None
         Weights of integration points in the first sensor.
-    w2 : array, shape (n_points x 1)
+    w2s : list
         Weights of integration points in the second sensor.
     volume_integral : bool
         If True, compute volume integral.
@@ -224,10 +231,20 @@ def _fast_sphere_dot_r0(r, rr1, rr2, lr1, lr2, cosmags1, cosmags2,
     result : float
         The integration sum.
     """
-    ct = np.einsum('ik,jk->ij', rr1, rr2)  # outer product, sum over coords
+    if w1 is None:  # operating on surface, treat independently
+        out_shape = (len(rr2s), len(rr1_orig))
+    else:
+        out_shape = (len(rr2s),)
+    out = np.empty(out_shape)
+    rr2 = np.concatenate(rr2s)
+    lr2 = np.concatenate(lr2s)
+    cosmags2 = np.concatenate(cosmags2s)
+
+    # outer product, sum over coords
+    ct = np.einsum('ik,jk->ij', rr1_orig, rr2)
 
     # expand axes
-    rr1 = rr1[:, np.newaxis, :]  # (n_rr1, n_rr2, n_coord) e.g. 4x4x3
+    rr1 = rr1_orig[:, np.newaxis, :]  # (n_rr1, n_rr2, n_coord) e.g. 4x4x3
     rr2 = rr2[np.newaxis, :, :]
     lr1lr2 = lr1[:, np.newaxis] * lr2[np.newaxis, :]
 
@@ -259,25 +276,24 @@ def _fast_sphere_dot_r0(r, rr1, rr2, lr1, lr2, cosmags1, cosmags2,
                   (n1c2 - ct * n1c1) * (n2c1 - ct * n2c2) * sums[3])
 
         # Give it a finishing touch!
-        const = 4e-14 * np.pi  # This is \mu_0^2/4\pi
-        result *= (const / lr1lr2)
+        result *= (_meg_const / lr1lr2)
         if volume_integral:
             result *= r
     else:  # 'eeg'
-        sums = _comp_sum_eeg(beta.flatten(), ct.flatten(), lut, n_fact)
-        sums.shape = beta.shape
-
+        result = _comp_sum_eeg(beta.flatten(), ct.flatten(), lut, n_fact)
+        result.shape = beta.shape
         # Give it a finishing touch!
-        eeg_const = 1.0 / (4.0 * np.pi)
-        result = eeg_const * sums / lr1lr2
-    # new we add them all up with weights
-    if w1 is None:  # operating on surface, treat independently
-        # result = np.sum(w2[np.newaxis, :] * result, axis=1)
-        result = np.dot(result, w2)
-    else:
-        # result = np.sum((w1[:, np.newaxis] * w2[np.newaxis, :]) * result)
-        result = np.einsum('i,j,ij', w1, w2, result)
-    return result
+        result *= _eeg_const
+        result /= lr1lr2
+    # now we add them all up with weights
+    offset = 0
+    result *= np.concatenate(w2s)
+    if w1 is not None:
+        result *= w1[:, np.newaxis]
+    for ii, w2 in enumerate(w2s):
+        block = np.sum(result[:, offset:offset + len(w2)], axis=1)
+        # surface case (w1 is None) keeps one value per first-sensor point;
+        # otherwise collapse to the scalar integration sum
+        out[ii] = block if w1 is None else block.sum()
+        offset += len(w2)
+    return out
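
The rewrite batches all of the second sensor's integration points into one
matrix and splits the per-sensor sums afterwards, which is what removes the
Python-level double loops from the dot-product helpers below. The splitting
pattern in isolation (illustrative sizes):

    import numpy as np

    w2s = [np.ones(2), np.ones(3)]          # weights per second sensor
    result = np.arange(10.).reshape(2, 5)   # (n_pts1, total second pts)
    result *= np.concatenate(w2s)           # weight each column once

    out, offset = np.empty(len(w2s)), 0
    for ii, w2 in enumerate(w2s):
        out[ii] = result[:, offset:offset + len(w2)].sum()
        offset += len(w2)
    # out == [12., 33.] -- one integration total per second sensor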
 
 
 def _do_self_dots(intrad, volume, coils, r0, ch_type, lut, n_fact, n_jobs):
@@ -330,14 +346,13 @@ def _do_self_dots_subset(intrad, rmags, rlens, cosmags, ws, volume, lut,
     # all possible combinations of two magnetometers
     products = np.zeros((len(rmags), len(rmags)))
     for ci1 in idx:
-        for ci2 in range(0, ci1 + 1):
-            res = _fast_sphere_dot_r0(intrad, rmags[ci1], rmags[ci2],
-                                      rlens[ci1], rlens[ci2],
-                                      cosmags[ci1], cosmags[ci2],
-                                      ws[ci1], ws[ci2], volume, lut,
-                                      n_fact, ch_type)
-            products[ci1, ci2] = res
-            products[ci2, ci1] = res
+        ci2 = ci1 + 1
+        res = _fast_sphere_dot_r0(
+            intrad, rmags[ci1], rmags[:ci2], rlens[ci1], rlens[:ci2],
+            cosmags[ci1], cosmags[:ci2], ws[ci1], ws[:ci2], volume, lut,
+            n_fact, ch_type)
+        products[ci1, :ci2] = res
+        products[:ci2, ci1] = res
     return products
 
 
@@ -390,12 +405,10 @@ def _do_cross_dots(intrad, volume, coils1, coils2, r0, ch_type,
 
     products = np.zeros((len(rmags1), len(rmags2)))
     for ci1 in range(len(coils1)):
-        for ci2 in range(len(coils2)):
-            res = _fast_sphere_dot_r0(intrad, rmags1[ci1], rmags2[ci2],
-                                      rlens1[ci1], rlens2[ci2], cosmags1[ci1],
-                                      cosmags2[ci2], ws1[ci1], ws2[ci2],
-                                      volume, lut, n_fact, ch_type)
-            products[ci1, ci2] = res
+        res = _fast_sphere_dot_r0(
+            intrad, rmags1[ci1], rmags2, rlens1[ci1], rlens2, cosmags1[ci1],
+            cosmags2, ws1[ci1], ws2, volume, lut, n_fact, ch_type)
+        products[ci1, :] = res
     return products
 
 
@@ -501,21 +514,13 @@ def _do_surface_dots_subset(intrad, rsurf, rmags, rref, refl, lsurf, rlens,
     products : array, shape (n_coils, n_coils)
         The integration products.
     """
-    products = np.zeros((len(rsurf), len(rmags)))
-    for ci in idx:
-        res = _fast_sphere_dot_r0(intrad, rsurf, rmags[ci],
-                                  lsurf, rlens[ci],
-                                  this_nn, cosmags[ci],
-                                  None, ws[ci], volume, lut,
-                                  n_fact, ch_type)
-        if rref is not None:
-            raise NotImplementedError  # we don't ever use this, isn't tested
-            # vres = _fast_sphere_dot_r0(intrad, rref, rmags[ci],
-            #                            refl, rlens[ci],
-            #                            this_nn, cosmags[ci],
-            #                            None, ws[ci], volume, lut,
-            #                            n_fact, ch_type)
-            # products[:, ci] = res - vres
-        else:
-            products[:, ci] = res
+    products = _fast_sphere_dot_r0(
+        intrad, rsurf, rmags, lsurf, rlens, this_nn, cosmags, None, ws,
+        volume, lut, n_fact, ch_type).T
+    if rref is not None:
+        raise NotImplementedError  # we don't ever use this, isn't tested
+        # vres = _fast_sphere_dot_r0(
+        #     intrad, rref, rmags, refl, rlens, this_nn, cosmags, None, ws,
+        #     volume, lut, n_fact, ch_type)
+        # products -= vres
     return products
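In the self-dots, each iteration now makes one batched call for coil ci1
against coils 0..ci1 and mirrors the row into the column, so only the lower
triangle of the symmetric product matrix is ever computed. A toy sketch of
the fill pattern (dots stands in for the true symmetric integrals):

    import numpy as np

    rng = np.random.RandomState(0)
    n = 4
    dots = rng.rand(n, n)
    dots = (dots + dots.T) / 2.        # symmetric, like the true products

    products = np.zeros((n, n))
    for ci1 in range(n):
        res = dots[ci1, :ci1 + 1]      # one batched call in the real code
        products[ci1, :ci1 + 1] = res
        products[:ci1 + 1, ci1] = res
    assert np.allclose(products, dots)
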
diff --git a/mne/forward/_make_forward.py b/mne/forward/_make_forward.py
index 2d96811..a790bc2 100644
--- a/mne/forward/_make_forward.py
+++ b/mne/forward/_make_forward.py
@@ -9,38 +9,38 @@ import os
 from os import path as op
 import numpy as np
 
-from .. import pick_types, pick_info
-from ..io.pick import _has_kit_refs
-from ..io import read_info, _loc_to_coil_trans, _loc_to_eeg_loc
-from ..io.meas_info import Info
+from ..io import read_info, _loc_to_coil_trans, _loc_to_eeg_loc, Info
+from ..io.pick import _has_kit_refs, pick_types, pick_info
 from ..io.constants import FIFF
-from .forward import Forward, write_forward_solution, _merge_meg_eeg_fwds
-from ._compute_forward import _compute_forwards
 from ..transforms import (_ensure_trans, transform_surface_to, apply_trans,
-                          _get_mri_head_t, _print_coord_trans,
-                          _coord_frame_name, Transform)
+                          _get_trans, _print_coord_trans, _coord_frame_name,
+                          Transform)
 from ..utils import logger, verbose
 from ..source_space import _ensure_src, _filter_source_spaces
 from ..surface import _normalize_vectors
 from ..bem import read_bem_solution, _bem_find_surface, ConductorModel
 from ..externals.six import string_types
 
+from .forward import Forward, write_forward_solution, _merge_meg_eeg_fwds
+from ._compute_forward import _compute_forwards
+
 
 _accuracy_dict = dict(normal=FIFF.FWD_COIL_ACCURACY_NORMAL,
                       accurate=FIFF.FWD_COIL_ACCURACY_ACCURATE)
 
 
 @verbose
-def _read_coil_defs(fname=None, elekta_defs=False, verbose=None):
+def _read_coil_defs(elekta_defs=False, verbose=None):
     """Read a coil definition file.
 
     Parameters
     ----------
-    fname : str
-        The name of the file from which coil definitions are read.
     elekta_defs : bool
-        If true, use Elekta's coil definitions for numerical integration
-        (from Abramowitz and Stegun section 25.4.62).
+        If true, prepend Elekta's coil definitions for numerical
+        integration (from Abramowitz and Stegun section 25.4.62).
+        Note that this creates duplicate coil definitions; because the
+        first matching definition is used, the Elekta versions take
+        precedence and provide the optimal integration parameters.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
         Defaults to raw.verbose.
@@ -54,58 +54,61 @@ def _read_coil_defs(fname=None, elekta_defs=False, verbose=None):
         cosmag contains the direction of the coils and rmag contains the
         position vector.
     """
-    if fname is None:
-        if not elekta_defs:
-            fname = op.join(op.split(__file__)[0], '..', 'data',
-                            'coil_def.dat')
-        else:
-            fname = op.join(op.split(__file__)[0], '..', 'data',
-                            'coil_def_Elekta.dat')
+    coil_dir = op.join(op.split(__file__)[0], '..', 'data')
+    coils = list()
+    if elekta_defs:
+        coils += _read_coil_def_file(op.join(coil_dir, 'coil_def_Elekta.dat'))
+    coils += _read_coil_def_file(op.join(coil_dir, 'coil_def.dat'))
+    return coils
+
+
+def _read_coil_def_file(fname):
+    """Helper to read a coil def file"""
     big_val = 0.5
+    coils = list()
     with open(fname, 'r') as fid:
         lines = fid.readlines()
-        res = dict(coils=list())
-        lines = lines[::-1]
-        while len(lines) > 0:
-            line = lines.pop()
-            if line[0] != '#':
-                vals = np.fromstring(line, sep=' ')
-                assert len(vals) in (6, 7)  # newer numpy can truncate comment
-                start = line.find('"')
-                end = len(line.strip()) - 1
-                assert line.strip()[end] == '"'
-                desc = line[start:end]
-                npts = int(vals[3])
-                coil = dict(coil_type=vals[1], coil_class=vals[0], desc=desc,
-                            accuracy=vals[2], size=vals[4], base=vals[5])
-                # get parameters of each component
-                rmag = list()
-                cosmag = list()
-                w = list()
-                for p in range(npts):
-                    # get next non-comment line
+    lines = lines[::-1]
+    while len(lines) > 0:
+        line = lines.pop()
+        if line[0] != '#':
+            vals = np.fromstring(line, sep=' ')
+            assert len(vals) in (6, 7)  # newer numpy can truncate comment
+            start = line.find('"')
+            end = len(line.strip()) - 1
+            assert line.strip()[end] == '"'
+            desc = line[start:end]
+            npts = int(vals[3])
+            coil = dict(coil_type=vals[1], coil_class=vals[0], desc=desc,
+                        accuracy=vals[2], size=vals[4], base=vals[5])
+            # get parameters of each component
+            rmag = list()
+            cosmag = list()
+            w = list()
+            for p in range(npts):
+                # get next non-comment line
+                line = lines.pop()
+                while(line[0] == '#'):
                     line = lines.pop()
-                    while(line[0] == '#'):
-                        line = lines.pop()
-                    vals = np.fromstring(line, sep=' ')
-                    assert len(vals) == 7
-                    # Read and verify data for each integration point
-                    w.append(vals[0])
-                    rmag.append(vals[[1, 2, 3]])
-                    cosmag.append(vals[[4, 5, 6]])
-                w = np.array(w)
-                rmag = np.array(rmag)
-                cosmag = np.array(cosmag)
-                size = np.sqrt(np.sum(cosmag ** 2, axis=1))
-                if np.any(np.sqrt(np.sum(rmag ** 2, axis=1)) > big_val):
-                    raise RuntimeError('Unreasonable integration point')
-                if np.any(size <= 0):
-                    raise RuntimeError('Unreasonable normal')
-                cosmag /= size[:, np.newaxis]
-                coil.update(dict(w=w, cosmag=cosmag, rmag=rmag))
-                res['coils'].append(coil)
-    logger.info('%d coil definitions read', len(res['coils']))
-    return res
+                vals = np.fromstring(line, sep=' ')
+                assert len(vals) == 7
+                # Read and verify data for each integration point
+                w.append(vals[0])
+                rmag.append(vals[[1, 2, 3]])
+                cosmag.append(vals[[4, 5, 6]])
+            w = np.array(w)
+            rmag = np.array(rmag)
+            cosmag = np.array(cosmag)
+            size = np.sqrt(np.sum(cosmag ** 2, axis=1))
+            if np.any(np.sqrt(np.sum(rmag ** 2, axis=1)) > big_val):
+                raise RuntimeError('Unreasonable integration point')
+            if np.any(size <= 0):
+                raise RuntimeError('Unreasonable normal')
+            cosmag /= size[:, np.newaxis]
+            coil.update(dict(w=w, cosmag=cosmag, rmag=rmag))
+            coils.append(coil)
+    logger.info('%d coil definitions read', len(coils))
+    return coils
 
 
 def _create_meg_coil(coilset, ch, acc, t):
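Since _read_coil_defs now returns a flat list with the Elekta file
prepended, correctness hinges on the linear search in _create_meg_coil
(next hunk) stopping at the first match. A hedged toy of that rule, using
dummy dicts rather than real coil_def entries:

    # dummy definitions; real ones are parsed from coil_def*.dat
    elekta = [dict(coil_type=3012, accuracy=2, desc='planar (Elekta)')]
    generic = [dict(coil_type=3012, accuracy=2, desc='planar (generic)'),
               dict(coil_type=4001, accuracy=2, desc='magnetometer')]
    coils = elekta + generic  # prepending gives the Elekta entries priority

    def first_match(coils, coil_type, accuracy):
        for coil in coils:  # mirrors the search in _create_meg_coil
            if coil['coil_type'] == coil_type and \
                    coil['accuracy'] == accuracy:
                return coil
        raise RuntimeError('No matching coil definition found')

    assert first_match(coils, 3012, 2)['desc'] == 'planar (Elekta)'
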
@@ -118,7 +121,7 @@ def _create_meg_coil(coilset, ch, acc, t):
         raise RuntimeError('%s is not a MEG channel' % ch['ch_name'])
 
     # Simple linear search from the coil definitions
-    for coil in coilset['coils']:
+    for coil in coilset:
         if coil['coil_type'] == (ch['coil_type'] & 0xFFFF) and \
                 coil['accuracy'] == acc:
             break
@@ -135,8 +138,10 @@ def _create_meg_coil(coilset, ch, acc, t):
                type=ch['coil_type'], w=coil['w'], desc=coil['desc'],
                coord_frame=t['to'], rmag=apply_trans(coil_trans, coil['rmag']),
                cosmag=apply_trans(coil_trans, coil['cosmag'], False))
+    r0_exey = (np.dot(coil['rmag'][:, :2], coil_trans[:3, :2].T) +
+               coil_trans[:3, 3])
     res.update(ex=coil_trans[:3, 0], ey=coil_trans[:3, 1],
-               ez=coil_trans[:3, 2], r0=coil_trans[:3, 3])
+               ez=coil_trans[:3, 2], r0=coil_trans[:3, 3], r0_exey=r0_exey)
     return res
 
 
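The new r0_exey field maps the integration points through only the ex/ey
columns of the coil transform, i.e. it drops their local z-offset and
flattens them onto the coil plane in head coordinates. A self-contained
check of that reading, with dummy data:

    import numpy as np

    rng = np.random.RandomState(0)
    coil_trans = np.eye(4)
    coil_trans[:3, 3] = [0., 0., 0.12]   # coil center in head coordinates
    rmag = rng.rand(8, 3) * 0.01         # local integration points

    r0_exey = np.dot(rmag[:, :2], coil_trans[:3, :2].T) + coil_trans[:3, 3]
    flat = rmag.copy()
    flat[:, 2] = 0.                      # zero the local normal offset
    full = np.dot(flat, coil_trans[:3, :3].T) + coil_trans[:3, 3]
    assert np.allclose(r0_exey, full)
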
@@ -169,7 +174,7 @@ def _create_eeg_el(ch, t=None):
 
 
 def _create_meg_coils(chs, acc=None, t=None, coilset=None):
-    """Create a set of MEG or EEG coils in the head coordinate frame"""
+    """Create a set of MEG coils in the head coordinate frame"""
     acc = _accuracy_dict[acc] if isinstance(acc, string_types) else acc
     coilset = _read_coil_defs(verbose=False) if coilset is None else coilset
     coils = [_create_meg_coil(coilset, ch, acc, t) for ch in chs]
@@ -177,7 +182,7 @@ def _create_meg_coils(chs, acc=None, t=None, coilset=None):
 
 
 def _create_eeg_els(chs):
-    """Create a set of MEG or EEG coils in the head coordinate frame"""
+    """Create a set of EEG electrodes in the head coordinate frame"""
     return [_create_eeg_el(ch) for ch in chs]
 
 
@@ -211,7 +216,7 @@ def _setup_bem(bem, bem_extra, neeg, mri_head_t, verbose=None):
 
 @verbose
 def _prep_meg_channels(info, accurate=True, exclude=(), ignore_ref=False,
-                       elekta_defs=False, verbose=None):
+                       elekta_defs=False, head_frame=True, verbose=None):
     """Prepare MEG coil definitions for forward calculation
 
     Parameters
@@ -226,6 +231,11 @@ def _prep_meg_channels(info, accurate=True, exclude=(), ignore_ref=False,
         info['bads']
     ignore_ref : bool
         If true, ignore compensation coils
+    elekta_defs : bool
+        If True, use Elekta's coil definitions, which use different integration
+        point geometry. False by default.
+    head_frame : bool
+        If True (default), use head frame coordinates; otherwise, use the
+        device frame.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
         Defaults to raw.verbose.
@@ -272,32 +282,42 @@ def _prep_meg_channels(info, accurate=True, exclude=(), ignore_ref=False,
                         % (ncomp, info_extra))
             # We need to check to make sure these are NOT KIT refs
             if _has_kit_refs(info, picks):
-                err = ('Cannot create forward solution with KIT reference '
-                       'channels. Consider using "ignore_ref=True" in '
-                       'calculation')
-                raise NotImplementedError(err)
+                raise NotImplementedError(
+                    'Cannot create forward solution with KIT reference '
+                    'channels. Consider using "ignore_ref=True" in '
+                    'calculation')
     else:
         ncomp = 0
 
-    _print_coord_trans(info['dev_head_t'])
-
     # Make info structure to allow making compensator later
     ncomp_data = len(info['comps'])
     ref_meg = True if not ignore_ref else False
     picks = pick_types(info, meg=True, ref_meg=ref_meg, exclude=exclude)
     meg_info = pick_info(info, picks) if nmeg > 0 else None
 
-    # Create coil descriptions with transformation to head or MRI frame
+    # Create coil descriptions with transformation to head or device frame
     templates = _read_coil_defs(elekta_defs=elekta_defs)
 
-    megcoils = _create_meg_coils(megchs, accuracy, info['dev_head_t'],
-                                 templates)
+    if head_frame:
+        _print_coord_trans(info['dev_head_t'])
+        transform = info['dev_head_t']
+    else:
+        transform = None
+
+    megcoils = _create_meg_coils(megchs, accuracy, transform, templates)
+
     if ncomp > 0:
         logger.info('%d compensation data sets in %s' % (ncomp_data,
                                                          info_extra))
-        compcoils = _create_meg_coils(compchs, 'normal', info['dev_head_t'],
-                                      templates)
-    logger.info('Head coordinate MEG coil definitions created.')
+        compcoils = _create_meg_coils(compchs, 'normal', transform, templates)
+
+    # Check that coordinate frame is correct and log it
+    if head_frame:
+        assert megcoils[0]['coord_frame'] == FIFF.FIFFV_COORD_HEAD
+        logger.info('MEG coil definitions created in head coordinates.')
+    else:
+        assert megcoils[0]['coord_frame'] == FIFF.FIFFV_COORD_DEVICE
+        logger.info('MEG coil definitions created in device coordinates.')
 
     return megcoils, compcoils, megnames, meg_info
 
@@ -500,7 +520,7 @@ def make_forward_solution(info, trans, src, bem, fname=None, meg=True,
 
     # read the transformation from MRI to HEAD coordinates
     # (could also be HEAD to MRI)
-    mri_head_t, trans = _get_mri_head_t(trans)
+    mri_head_t, trans = _get_trans(trans)
     bem_extra = 'dict' if isinstance(bem, dict) else bem
     if fname is not None and op.isfile(fname) and not overwrite:
         raise IOError('file "%s" exists, consider using overwrite=True'
diff --git a/mne/forward/forward.py b/mne/forward/forward.py
index c937c5b..773026d 100644
--- a/mne/forward/forward.py
+++ b/mne/forward/forward.py
@@ -19,14 +19,14 @@ from os import path as op
 import tempfile
 
 from ..fixes import sparse_block_diag
-from ..io import RawArray
+from ..io import RawArray, Info
 from ..io.constants import FIFF
 from ..io.open import fiff_open
 from ..io.tree import dir_tree_find
 from ..io.tag import find_tag, read_tag
 from ..io.matrix import (_read_named_matrix, _transpose_named_matrix,
                          write_named_matrix)
-from ..io.meas_info import read_bad_channels, Info
+from ..io.meas_info import read_bad_channels
 from ..io.pick import (pick_channels_forward, pick_info, pick_channels,
                        pick_types)
 from ..io.write import (write_int, start_block, end_block,
@@ -34,7 +34,7 @@ from ..io.write import (write_int, start_block, end_block,
                         write_string, start_file, end_file, write_id)
 from ..io.base import _BaseRaw
 from ..evoked import Evoked, write_evokeds, EvokedArray
-from ..epochs import Epochs
+from ..epochs import Epochs, _BaseEpochs
 from ..source_space import (_read_source_spaces_from_tree,
                             find_source_space_hemi,
                             _write_source_spaces_to_fid)
@@ -352,7 +352,10 @@ def _read_forward_meas_info(tree, fid):
     info['bads'] = [bad for bad in info['bads'] if bad in info['ch_names']]
 
     # Check if a custom reference has been applied
-    tag = find_tag(fid, parent_mri, FIFF.FIFF_CUSTOM_REF)
+    tag = find_tag(fid, parent_mri, FIFF.FIFF_MNE_CUSTOM_REF)
+    if tag is None:
+        tag = find_tag(fid, parent_mri, 236)  # Constant 236 used before v0.11
+
     info['custom_ref_applied'] = bool(tag.data) if tag is not None else False
     info._check_consistency()
     return info
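The read now tries the proper FIFF constant first and falls back to the
bare number 236 that pre-0.11 writers used. A toy of the
lookup-with-fallback (the ids here are placeholders, not the real FIFF
constants):

    def read_custom_ref(tags, new_id, legacy_id=236):
        data = tags.get(new_id, tags.get(legacy_id))
        return bool(data) if data is not None else False

    assert read_custom_ref({236: 1}, new_id=9999) is True    # pre-0.11 file
    assert read_custom_ref({9999: 0}, new_id=9999) is False  # flag unset
    assert read_custom_ref({}, new_id=9999) is False         # tag absent
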
@@ -1110,8 +1113,8 @@ def _apply_forward(fwd, stc, start=None, stop=None, verbose=None):
 
 
 @verbose
-def apply_forward(fwd, stc, info=None, start=None, stop=None,
-                  verbose=None, evoked_template=None):
+def apply_forward(fwd, stc, info, start=None, stop=None,
+                  verbose=None):
     """
     Project source space currents to sensor space using a forward operator.
 
@@ -1139,8 +1142,6 @@ def apply_forward(fwd, stc, info=None, start=None, stop=None,
         Index of first time sample not to include (index not time is seconds).
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
-    evoked_template : Evoked object (deprecated)
-        Evoked object used as template to generate the output argument.
 
     Returns
     -------
@@ -1151,23 +1152,6 @@ def apply_forward(fwd, stc, info=None, start=None, stop=None,
     --------
     apply_forward_raw: Compute sensor space data and return a Raw object.
     """
-    if evoked_template is None and info is None:
-        raise ValueError('You have to provide the info parameter.')
-
-    if evoked_template is not None and not isinstance(evoked_template, Info):
-        warnings.warn('The "evoked_template" parameter is being deprecated '
-                      'and will be removed in MNE-0.11. '
-                      'Please provide info parameter instead',
-                      DeprecationWarning)
-        info = evoked_template.info
-
-    if info is not None and not isinstance(info, Info):
-        warnings.warn('The "evoked_template" parameter is being deprecated '
-                      'and will be removed in MNE-0.11. '
-                      'Please provide info parameter instead',
-                      DeprecationWarning)
-        info = info.info
-
     # make sure info contains all channels in fwd
     for ch_name in fwd['sol']['row_names']:
         if ch_name not in info['ch_names']:
@@ -1228,13 +1212,6 @@ def apply_forward_raw(fwd, stc, info, start=None, stop=None,
     --------
     apply_forward: Compute sensor space data and return an Evoked object.
     """
-    if isinstance(info, _BaseRaw):
-        warnings.warn('The "Raw_template" parameter is being deprecated '
-                      'and will be removed in MNE-0.11. '
-                      'Please provide info parameter instead',
-                      DeprecationWarning)
-        info = info.info
-
     # make sure info contains all channels in fwd
     for ch_name in fwd['sol']['row_names']:
         if ch_name not in info['ch_names']:
@@ -1470,7 +1447,7 @@ def do_forward_solution(subject, meas, fname=None, src=None, spacing=None,
         events = np.array([[0, 0, 1]], dtype=np.int)
         end = 1. / meas.info['sfreq']
         meas_data = Epochs(meas, events, 1, 0, end, proj=False).average()
-    elif isinstance(meas, Epochs):
+    elif isinstance(meas, _BaseEpochs):
         meas_data = meas.average()
     elif isinstance(meas, Evoked):
         meas_data = meas
diff --git a/mne/forward/tests/test_field_interpolation.py b/mne/forward/tests/test_field_interpolation.py
index 43fbc35..724041c 100644
--- a/mne/forward/tests/test_field_interpolation.py
+++ b/mne/forward/tests/test_field_interpolation.py
@@ -33,6 +33,7 @@ subjects_dir = op.join(data_path, 'subjects')
 def test_legendre_val():
     """Test Legendre polynomial (derivative) equivalence
     """
+    rng = np.random.RandomState(0)
     # check table equiv
     xs = np.linspace(-1., 1., 1000)
     n_terms = 100
@@ -50,8 +51,8 @@ def test_legendre_val():
                         rtol=1e-2, atol=5e-3)
 
         # Now let's look at our sums
-        ctheta = np.random.rand(20, 30) * 2.0 - 1.0
-        beta = np.random.rand(20, 30) * 0.8
+        ctheta = rng.rand(20, 30) * 2.0 - 1.0
+        beta = rng.rand(20, 30) * 0.8
         lut_fun = partial(fun, lut=lut)
         c1 = _comp_sum_eeg(beta.flatten(), ctheta.flatten(), lut_fun, n_fact)
         c1.shape = beta.shape
@@ -70,8 +71,8 @@ def test_legendre_val():
         assert_allclose(c1, c2, 1e-2, 1e-3)  # close enough...
 
     # compare fast and slow for MEG
-    ctheta = np.random.rand(20 * 30) * 2.0 - 1.0
-    beta = np.random.rand(20 * 30) * 0.8
+    ctheta = rng.rand(20 * 30) * 2.0 - 1.0
+    beta = rng.rand(20 * 30) * 0.8
     lut, n_fact = _get_legen_table('meg', n_coeff=10, force_calc=True)
     fun = partial(_get_legen_lut_fast, lut=lut)
     coeffs = _comp_sums_meg(beta, ctheta, fun, n_fact, False)
@@ -173,9 +174,8 @@ def test_make_field_map_meg():
 def _setup_args(info):
     """Helper to test_as_meg_type_evoked."""
     coils = _create_meg_coils(info['chs'], 'normal', info['dev_head_t'])
-    my_origin, int_rad, noise, lut_fun, n_fact = _setup_dots('fast',
-                                                             coils,
-                                                             'meg')
+    int_rad, noise, lut_fun, n_fact = _setup_dots('fast', coils, 'meg')
+    my_origin = np.array([0., 0., 0.04])
     args_dict = dict(intrad=int_rad, volume=False, coils1=coils, r0=my_origin,
                      ch_type='meg', lut=lut_fun, n_fact=n_fact)
     return args_dict
diff --git a/mne/forward/tests/test_make_forward.py b/mne/forward/tests/test_make_forward.py
index dba5d58..4c1ca1d 100644
--- a/mne/forward/tests/test_make_forward.py
+++ b/mne/forward/tests/test_make_forward.py
@@ -161,7 +161,7 @@ def test_make_forward_solution_kit():
     fwd = do_forward_solution('sample', fname_bti_raw, src=fname_src_small,
                               bem=fname_bem_meg, mri=trans_path,
                               eeg=False, meg=True, subjects_dir=subjects_dir)
-    raw_py = read_raw_bti(bti_pdf, bti_config, bti_hs)
+    raw_py = read_raw_bti(bti_pdf, bti_config, bti_hs, preload=False)
     fwd_py = make_forward_solution(raw_py.info, src=src, eeg=False, meg=True,
                                    bem=fname_bem_meg, trans=trans_path)
     _compare_forwards(fwd, fwd_py, 248, n_src)
diff --git a/mne/gui/__init__.py b/mne/gui/__init__.py
index f9f66fc..0286d02 100644
--- a/mne/gui/__init__.py
+++ b/mne/gui/__init__.py
@@ -22,7 +22,7 @@ def combine_kit_markers():
 
 
 def coregistration(tabbed=False, split=True, scene_width=0o1, inst=None,
-                   subject=None, subjects_dir=None, raw=None):
+                   subject=None, subjects_dir=None):
     """Coregister an MRI with a subject's head shape
 
     Parameters
@@ -54,11 +54,6 @@ def coregistration(tabbed=False, split=True, scene_width=0o1, inst=None,
     <http://www.slideshare.net/mne-python/mnepython-scale-mri>`_.
     """
     _check_mayavi_version()
-    if raw is not None:
-        raise DeprecationWarning('The `raw` argument has been deprecated for '
-                                 'the `inst` argument. Will be removed '
-                                 'in 0.11. Use `inst` instead.')
-        inst = raw
     from ._coreg_gui import CoregFrame, _make_view
     view = _make_view(tabbed, split, scene_width)
     gui = CoregFrame(inst, subject, subjects_dir)
diff --git a/mne/gui/_coreg_gui.py b/mne/gui/_coreg_gui.py
index 3a9493d..a805c14 100644
--- a/mne/gui/_coreg_gui.py
+++ b/mne/gui/_coreg_gui.py
@@ -27,7 +27,7 @@ try:
                               EnumEditor, Handler, Label, TextEditor)
     from traitsui.menu import Action, UndoButton, CancelButton, NoButtons
     from tvtk.pyface.scene_editor import SceneEditor
-except:
+except Exception:
     from ..utils import trait_wraith
     HasTraits = HasPrivateTraits = Handler = object
     cached_property = on_trait_change = MayaviScene = MlabSceneModel =\
diff --git a/mne/gui/_fiducials_gui.py b/mne/gui/_fiducials_gui.py
index e0a2ff2..4a9973b 100644
--- a/mne/gui/_fiducials_gui.py
+++ b/mne/gui/_fiducials_gui.py
@@ -20,7 +20,7 @@ try:
     from traitsui.api import HGroup, Item, VGroup, View
     from traitsui.menu import NoButtons
     from tvtk.pyface.scene_editor import SceneEditor
-except:
+except Exception:
     from ..utils import trait_wraith
     HasTraits = HasPrivateTraits = object
     cached_property = on_trait_change = MayaviScene = MlabSceneModel = \
diff --git a/mne/gui/_file_traits.py b/mne/gui/_file_traits.py
index fd59d7d..777cd79 100644
--- a/mne/gui/_file_traits.py
+++ b/mne/gui/_file_traits.py
@@ -18,7 +18,7 @@ try:
     from traitsui.api import View, Item, VGroup
     from pyface.api import (DirectoryDialog, OK, ProgressDialog, error,
                             information)
-except:
+except Exception:
     from ..utils import trait_wraith
     HasTraits = HasPrivateTraits = object
     cached_property = on_trait_change = Any = Array = Bool = Button = \
diff --git a/mne/gui/_help.py b/mne/gui/_help.py
new file mode 100644
index 0000000..c888e1a
--- /dev/null
+++ b/mne/gui/_help.py
@@ -0,0 +1,16 @@
+# Author: Christian Brodbeck <christianbrodbeck at nyu.edu>
+#
+# License: BSD (3-clause)
+import json
+import os
+from textwrap import TextWrapper
+
+
+def read_tooltips(gui_name):
+    "Read and format tooltips, return a dict"
+    dirname = os.path.dirname(__file__)
+    help_path = os.path.join(dirname, 'help', gui_name + '.json')
+    with open(help_path) as fid:
+        raw_tooltips = json.load(fid)
+    format_ = TextWrapper(width=60, fix_sentence_endings=True).fill
+    return dict((key, format_(text)) for key, text in raw_tooltips.items())
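What read_tooltips produces, sketched with an inline dict standing in for
the JSON file on disk:

    from textwrap import TextWrapper

    raw_tooltips = {"stim_threshold":
                    "Threshold voltage to detect events in stim channels."}
    format_ = TextWrapper(width=60, fix_sentence_endings=True).fill
    tooltips = dict((key, format_(text))
                    for key, text in raw_tooltips.items())
    assert max(len(line)
               for line in tooltips["stim_threshold"].splitlines()) <= 60
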
diff --git a/mne/gui/_kit2fiff_gui.py b/mne/gui/_kit2fiff_gui.py
index ee07198..3ce49ad 100644
--- a/mne/gui/_kit2fiff_gui.py
+++ b/mne/gui/_kit2fiff_gui.py
@@ -11,6 +11,7 @@ from threading import Thread
 
 from ..externals.six.moves import queue
 from ..io.meas_info import _read_dig_points, _make_dig_points
+from ..utils import logger
 
 
 # allow import without traits
@@ -19,25 +20,26 @@ try:
     from mayavi.tools.mlab_scene_model import MlabSceneModel
     from pyface.api import confirm, error, FileDialog, OK, YES, information
     from traits.api import (HasTraits, HasPrivateTraits, cached_property,
-                            Instance, Property, Bool, Button, Enum, File, Int,
-                            List, Str, Array, DelegatesTo)
-    from traitsui.api import (View, Item, HGroup, VGroup, spring,
+                            Instance, Property, Bool, Button, Enum, File,
+                            Float, Int, List, Str, Array, DelegatesTo)
+    from traitsui.api import (View, Item, HGroup, VGroup, spring, TextEditor,
                               CheckListEditor, EnumEditor, Handler)
     from traitsui.menu import NoButtons
     from tvtk.pyface.scene_editor import SceneEditor
-except:
+except Exception:
     from ..utils import trait_wraith
     HasTraits = HasPrivateTraits = Handler = object
-    cached_property = MayaviScene = MlabSceneModel = Bool = Button = \
+    cached_property = MayaviScene = MlabSceneModel = Bool = Button = Float = \
         DelegatesTo = Enum = File = Instance = Int = List = Property = \
         Str = Array = spring = View = Item = HGroup = VGroup = EnumEditor = \
-        NoButtons = CheckListEditor = SceneEditor = trait_wraith
+        NoButtons = CheckListEditor = SceneEditor = TextEditor = trait_wraith
 
 from ..io.kit.kit import RawKIT, KIT
 from ..transforms import (apply_trans, als_ras_trans, als_ras_trans_mm,
                           get_ras_to_neuromag_trans, Transform)
 from ..coreg import _decimate_points, fit_matched_points
 from ._marker_gui import CombineMarkersPanel, CombineMarkersModel
+from ._help import read_tooltips
 from ._viewer import (HeadViewController, headview_item, PointObject,
                       _testing_mode)
 
@@ -55,6 +57,9 @@ else:
     kit_con_wildcard = ['*.sqd;*.con']
 
 
+tooltips = read_tooltips('kit2fiff')
+
+
 class Kit2FiffModel(HasPrivateTraits):
     """Data Model for Kit2Fiff conversion
 
@@ -70,15 +75,20 @@ class Kit2FiffModel(HasPrivateTraits):
                     "head shape")
     fid_file = File(exists=True, filter=hsp_fid_wildcard, desc="Digitizer "
                     "fiducials")
-    stim_chs = Enum(">", "<", "man")
-    stim_chs_manual = Array(int, (8,), range(168, 176))
+    stim_coding = Enum(">", "<", "channel")
+    stim_chs = Str("")
+    stim_chs_array = Property(depends_on='stim_chs')
+    stim_chs_ok = Property(depends_on='stim_chs_array')
+    stim_chs_comment = Property(depends_on='stim_chs_array')
     stim_slope = Enum("-", "+")
+    stim_threshold = Float(1.)
+
     # Marker Points
     use_mrk = List(list(range(5)), desc="Which marker points to use for the "
                    "device head coregistration.")
 
     # Derived Traits
-    mrk = Property(depends_on=('markers.mrk3.points'))
+    mrk = Property(depends_on='markers.mrk3.points')
 
     # Polhemus Fiducials
     elp_raw = Property(depends_on=['fid_file'])
@@ -98,14 +108,13 @@ class Kit2FiffModel(HasPrivateTraits):
     sqd_fname = Property(Str, depends_on='sqd_file')
     hsp_fname = Property(Str, depends_on='hsp_file')
     fid_fname = Property(Str, depends_on='fid_file')
-    can_save = Property(Bool, depends_on=['sqd_file', 'fid', 'elp', 'hsp',
-                                          'dev_head_trans'])
+    can_save = Property(Bool, depends_on=['stim_chs_ok', 'sqd_file', 'fid',
+                                          'elp', 'hsp', 'dev_head_trans'])
 
     @cached_property
     def _get_can_save(self):
         "Only allow saving when either all or no head shape elements are set."
-        has_sqd = bool(self.sqd_file)
-        if not has_sqd:
+        if not self.stim_chs_ok or not self.sqd_file:
             return False
 
         has_all_hsp = (np.any(self.dev_head_trans) and np.any(self.hsp) and
@@ -242,6 +251,32 @@ class Kit2FiffModel(HasPrivateTraits):
         else:
             return '-'
 
+    @cached_property
+    def _get_stim_chs_array(self):
+        if not self.stim_chs.strip():
+            return True
+        try:
+            out = eval("r_[%s]" % self.stim_chs, vars(np))
+            if out.dtype.kind != 'i':
+                raise TypeError("Need array of int")
+        except Exception:
+            return None
+        else:
+            return out
+
+    @cached_property
+    def _get_stim_chs_comment(self):
+        if self.stim_chs_array is None:
+            return "Invalid!"
+        elif self.stim_chs_array is True:
+            return "Ok: Default channels"
+        else:
+            return "Ok: %i channels" % len(self.stim_chs_array)
+
+    @cached_property
+    def _get_stim_chs_ok(self):
+        return self.stim_chs_array is not None
+
     def clear_all(self):
         """Clear all specified input parameters"""
         self.markers.clear = True
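A standalone sketch of the channel-string parsing in _get_stim_chs_array
above: the text is evaluated as numpy r_ slicing, so single numbers and
ranges mix freely (illustrative wrapper; True / None mirror the sentinel
values the property returns):

    import numpy as np

    def parse_stim_chs(text):
        if not text.strip():
            return True  # fall back to the default channels
        try:
            out = eval("r_[%s]" % text, vars(np))
            if out.dtype.kind != 'i':
                raise TypeError("Need array of int")
        except Exception:
            return None  # shown as "Invalid!" in the GUI
        return out

    assert list(parse_stim_chs("181:184, 186")) == [181, 182, 183, 186]
    assert parse_stim_chs("181:184, bad") is None
    assert parse_stim_chs("") is True
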
@@ -265,16 +300,36 @@ class Kit2FiffModel(HasPrivateTraits):
     def get_raw(self, preload=False):
         """Create a raw object based on the current model settings
         """
-        if not self.sqd_file:
-            raise ValueError("sqd file not set")
-
-        if self.stim_chs == 'man':
-            stim = self.stim_chs_manual
+        if not self.can_save:
+            raise ValueError("Not all necessary parameters are set")
+
+        # stim channels and coding
+        if self.stim_chs_array is True:
+            if self.stim_coding == 'channel':
+                stim_code = 'channel'
+                raise NotImplementedError("Finding default event channels")
+            else:
+                stim = self.stim_coding
+                stim_code = 'binary'
         else:
-            stim = self.stim_chs
-
+            stim = self.stim_chs_array
+            if self.stim_coding == 'channel':
+                stim_code = 'channel'
+            elif self.stim_coding == '<':
+                stim_code = 'binary'
+            elif self.stim_coding == '>':
+                # reverse channel order to obtain little-endian coding
+                stim = stim[::-1]
+                stim_code = 'binary'
+            else:
+                raise RuntimeError("stim_coding=%r" % self.stim_coding)
+
+        logger.info("Creating raw with stim=%r, slope=%r, stim_code=%r, "
+                    "stimthresh=%r", stim, self.stim_slope, stim_code,
+                    self.stim_threshold)
         raw = RawKIT(self.sqd_file, preload=preload, stim=stim,
-                     slope=self.stim_slope)
+                     slope=self.stim_slope, stim_code=stim_code,
+                     stimthresh=self.stim_threshold)
 
         if np.any(self.fid):
             raw.info['dig'] = _make_dig_points(self.fid[0], self.fid[1],
@@ -308,9 +363,12 @@ class Kit2FiffPanel(HasPrivateTraits):
     sqd_file = DelegatesTo('model')
     hsp_file = DelegatesTo('model')
     fid_file = DelegatesTo('model')
+    stim_coding = DelegatesTo('model')
     stim_chs = DelegatesTo('model')
-    stim_chs_manual = DelegatesTo('model')
+    stim_chs_ok = DelegatesTo('model')
+    stim_chs_comment = DelegatesTo('model')
     stim_slope = DelegatesTo('model')
+    stim_threshold = DelegatesTo('model')
 
     # info
     can_save = DelegatesTo('model')
@@ -338,41 +396,36 @@ class Kit2FiffPanel(HasPrivateTraits):
     error = Str('')
 
     view = View(
-        VGroup(VGroup(Item('sqd_file', label="Data"),
-                      Item('sqd_fname', show_label=False,
-                           style='readonly'),
+        VGroup(VGroup(Item('sqd_file', label="Data",
+                           tooltip=tooltips['sqd_file']),
+                      Item('sqd_fname', show_label=False, style='readonly'),
                       Item('hsp_file', label='Dig Head Shape'),
-                      Item('hsp_fname', show_label=False,
-                           style='readonly'),
+                      Item('hsp_fname', show_label=False, style='readonly'),
                       Item('fid_file', label='Dig Points'),
-                      Item('fid_fname', show_label=False,
-                           style='readonly'),
+                      Item('fid_fname', show_label=False, style='readonly'),
                       Item('reset_dig', label='Clear Digitizer Files',
                            show_label=False),
-                      Item('use_mrk', editor=use_editor,
-                           style='custom'),
+                      Item('use_mrk', editor=use_editor, style='custom'),
                       label="Sources", show_border=True),
-               VGroup(Item('stim_slope', label="Event Onset",
-                           style='custom',
+               VGroup(Item('stim_slope', label="Event Onset", style='custom',
+                           tooltip=tooltips['stim_slope'],
                            editor=EnumEditor(
                                values={'+': '2:Peak (0 to 5 V)',
                                        '-': '1:Trough (5 to 0 V)'},
-                               cols=2),
-                           help="Whether events are marked by a decrease "
-                           "(trough) or an increase (peak) in trigger "
-                           "channel values"),
-                      Item('stim_chs', label="Binary Coding",
-                           style='custom',
-                           editor=EnumEditor(values={'>': '1:1 ... 128',
-                                                     '<': '3:128 ... 1',
-                                                     'man': '2:Manual'},
-                                             cols=2),
-                           help="Specifies the bit order in event "
-                           "channels. Assign the first bit (1) to the "
-                           "first or the last trigger channel."),
-                      Item('stim_chs_manual', label='Stim Channels',
-                           style='custom',
-                           visible_when="stim_chs == 'man'"),
+                               cols=2)),
+                      Item('stim_coding', label="Value Coding", style='custom',
+                           editor=EnumEditor(values={'>': '1:little-endian',
+                                                     '<': '2:big-endian',
+                                                     'channel': '3:Channel#'},
+                                             cols=3),
+                           tooltip=tooltips["stim_coding"]),
+                      Item('stim_chs', label='Channels', style='custom',
+                           tooltip=tooltips["stim_chs"],
+                           editor=TextEditor(evaluate_name='stim_chs_ok',
+                                             auto_set=True)),
+                      Item('stim_chs_comment', label='>', style='readonly'),
+                      Item('stim_threshold', label='Threshold',
+                           tooltip=tooltips['stim_threshold']),
                       label='Events', show_border=True),
                HGroup(Item('save_as', enabled_when='can_save'), spring,
                       'clear_all', show_labels=False),
diff --git a/mne/gui/_marker_gui.py b/mne/gui/_marker_gui.py
index 835a206..ebe4436 100644
--- a/mne/gui/_marker_gui.py
+++ b/mne/gui/_marker_gui.py
@@ -19,7 +19,7 @@ try:
     from traitsui.api import View, Item, HGroup, VGroup, CheckListEditor
     from traitsui.menu import NoButtons
     from tvtk.pyface.scene_editor import SceneEditor
-except:
+except Exception:
     from ..utils import trait_wraith
     HasTraits = HasPrivateTraits = object
     cached_property = on_trait_change = MayaviScene = MlabSceneModel = \
diff --git a/mne/gui/_viewer.py b/mne/gui/_viewer.py
index f90a219..4f9bf75 100644
--- a/mne/gui/_viewer.py
+++ b/mne/gui/_viewer.py
@@ -19,7 +19,7 @@ try:
                             cached_property, Instance, Property, Array, Bool,
                             Button, Color, Enum, Float, Int, List, Range, Str)
     from traitsui.api import View, Item, Group, HGroup, VGrid, VGroup
-except:
+except Exception:
     from ..utils import trait_wraith
     HasTraits = HasPrivateTraits = object
     cached_property = on_trait_change = MlabSceneModel = Array = Bool = \
diff --git a/mne/gui/help/kit2fiff.json b/mne/gui/help/kit2fiff.json
new file mode 100644
index 0000000..47cea8b
--- /dev/null
+++ b/mne/gui/help/kit2fiff.json
@@ -0,0 +1,7 @@
+{
+  "stim_chs": "Define the channels that are used to generate events. If the field is empty, the default channels are used (for NYU systems only). Channels can be defined as comma separated channel numbers (1, 2, 3, 4, 5, 6), ranges (1:7) and combinations of the two (1:4, 7, 10:13).",
+  "stim_coding": "Specifies how stim-channel events are translated into trigger values. Little- and big-endian assume binary coding. In little-endian order, the first channel is assigned the smallest value (1) and the last channel is assigned the highest value (with 8 channels this would be 128). Channel# implies a different method of coding in which an event in a given channel is assigned the channel number as value.",
+  "sqd_file": "*.sqd or *.con file containing recorded MEG data",
+  "stim_slope": "How events are marked in stim channels. Trough: normally the signal is high, events are marked by transitory signal decrease. Peak: normally signal is low, events are marked by an increase.",
+  "stim_threshold": "Threshold voltage to detect events in stim channels."
+}
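The two binary codings the stim_coding tooltip describes, as a quick
worked example with 8 stim channels:

    n_chs = 8
    active = [0, 2]          # events on the 1st and 3rd stim channels
    little = sum(2 ** i for i in active)               # 1 + 4
    big = sum(2 ** (n_chs - 1 - i) for i in active)    # 128 + 32
    assert (little, big) == (5, 160)
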
diff --git a/mne/gui/tests/test_kit2fiff_gui.py b/mne/gui/tests/test_kit2fiff_gui.py
index 4e7d90a..f6d5f59 100644
--- a/mne/gui/tests/test_kit2fiff_gui.py
+++ b/mne/gui/tests/test_kit2fiff_gui.py
@@ -39,9 +39,19 @@ def test_kit2fiff_model():
     model.hsp_file = hsp_path
     assert_false(model.can_save)
     model.fid_file = fid_path
+    assert_true(model.can_save)
 
-    # export raw
+    # stim channels
+    model.stim_chs = "181:184, 186"
+    assert_array_equal(model.stim_chs_array, [181, 182, 183, 186])
+    assert_true(model.stim_chs_ok)
+    model.stim_chs = "181:184, bad"
+    assert_false(model.stim_chs_ok)
+    assert_false(model.can_save)
+    model.stim_chs = ""
     assert_true(model.can_save)
+
+    # export raw
     raw_out = model.get_raw()
     raw_out.save(tgt_fname)
     raw = Raw(tgt_fname)
@@ -71,23 +81,23 @@ def test_kit2fiff_model():
     model.stim_slope = '+'
     events_bin = mne.find_events(raw_bin, stim_channel='STI 014')
 
-    model.stim_chs = '<'
+    model.stim_coding = '<'
     raw = model.get_raw()
     events = mne.find_events(raw, stim_channel='STI 014')
     assert_array_equal(events, events_bin)
 
     events_rev = events_bin.copy()
     events_rev[:, 2] = 1
-    model.stim_chs = '>'
+    model.stim_coding = '>'
     raw = model.get_raw()
     events = mne.find_events(raw, stim_channel='STI 014')
     assert_array_equal(events, events_rev)
 
-    model.stim_chs = 'man'
-    model.stim_chs_manual = list(range(167, 159, -1))
+    model.stim_coding = 'channel'
+    model.stim_chs = "160:161"
     raw = model.get_raw()
     events = mne.find_events(raw, stim_channel='STI 014')
-    assert_array_equal(events, events_bin)
+    assert_array_equal(events, events_bin + [0, 0, 32])
 
     # test reset
     model.clear_all()
diff --git a/mne/io/__init__.py b/mne/io/__init__.py
index 38b60f3..565569b 100644
--- a/mne/io/__init__.py
+++ b/mne/io/__init__.py
@@ -7,9 +7,9 @@
 
 from .open import fiff_open, show_fiff, _fiff_get_fid
 from .meas_info import (read_fiducials, write_fiducials, read_info, write_info,
-                        _empty_info)
+                        _empty_info, _merge_info, Info)
 
-from .proj import make_eeg_average_ref_proj
+from .proj import make_eeg_average_ref_proj, Projection
 from .tag import _loc_to_coil_trans, _coil_trans_to_loc, _loc_to_eeg_loc
 from .base import _BaseRaw
 
@@ -17,20 +17,26 @@ from . import array
 from . import base
 from . import brainvision
 from . import bti
+from . import ctf
 from . import constants
 from . import edf
 from . import egi
 from . import fiff
 from . import kit
+from . import nicolet
+from . import eeglab
 from . import pick
 
 from .array import RawArray
 from .brainvision import read_raw_brainvision
 from .bti import read_raw_bti
+from .ctf import read_raw_ctf
 from .edf import read_raw_edf
 from .egi import read_raw_egi
 from .kit import read_raw_kit, read_epochs_kit
 from .fiff import read_raw_fif
+from .nicolet import read_raw_nicolet
+from .eeglab import read_raw_eeglab, read_epochs_eeglab
 
 # for backward compatibility
 from .fiff import RawFIF
@@ -38,48 +44,3 @@ from .fiff import RawFIF as Raw
 from .base import concatenate_raws
 from .reference import (set_eeg_reference, set_bipolar_reference,
                         add_reference_channels)
-from ..utils import deprecated
-
-
- at deprecated('mne.io.get_chpi_positions is deprecated and will be removed in '
-            'v0.11, please use mne.get_chpi_positions')
-def get_chpi_positions(raw, t_step=None, verbose=None):
-    """Extract head positions
-
-    Note that the raw instance must have CHPI channels recorded.
-
-    Parameters
-    ----------
-    raw : instance of Raw | str
-        Raw instance to extract the head positions from. Can also be a
-        path to a Maxfilter log file (str).
-    t_step : float | None
-        Sampling interval to use when converting data. If None, it will
-        be automatically determined. By default, a sampling interval of
-        1 second is used if processing a raw data. If processing a
-        Maxfilter log file, this must be None because the log file
-        itself will determine the sampling interval.
-    verbose : bool, str, int, or None
-        If not None, override default verbose level (see mne.verbose).
-
-    Returns
-    -------
-    translation : ndarray, shape (N, 3)
-        Translations at each time point.
-    rotation : ndarray, shape (N, 3, 3)
-        Rotations at each time point.
-    t : ndarray, shape (N,)
-        The time points.
-
-    Notes
-    -----
-    The digitized HPI head frame y is related to the frame position X as:
-
-        Y = np.dot(rotation, X) + translation
-
-    Note that if a Maxfilter log file is being processed, the start time
-    may not use the same reference point as the rest of mne-python (i.e.,
-    it could be referenced relative to raw.first_samp or something else).
-    """
-    from ..chpi import get_chpi_positions
-    return get_chpi_positions(raw, t_step, verbose)
diff --git a/mne/io/array/array.py b/mne/io/array/array.py
index 8231c61..bc992ec 100644
--- a/mne/io/array/array.py
+++ b/mne/io/array/array.py
@@ -19,7 +19,7 @@ class RawArray(_BaseRaw):
         The channels' time series.
     info : instance of Info
         Info dictionary. Consider using `create_info` to populate
-        this structure.
+        this structure. This may be modified in place by the class.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -42,6 +42,8 @@ class RawArray(_BaseRaw):
         if len(data) != len(info['ch_names']):
             raise ValueError('len(data) does not match len(info["ch_names"])')
         assert len(info['ch_names']) == info['nchan']
+        if info.get('buffer_size_sec', None) is None:
+            info['buffer_size_sec'] = 1.  # reasonable default
         super(RawArray, self).__init__(info, data, verbose=verbose)
         logger.info('    Range : %d ... %d =  %9.3f ... %9.3f secs' % (
                     self.first_samp, self.last_samp,
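A hedged usage sketch of the new default (assumes a working mne install;
the public create_info/RawArray constructors are used, as in the tests
below):

    import numpy as np
    import mne

    info = mne.create_info(ch_names=['misc1', 'misc2'], sfreq=100.,
                           ch_types=['misc', 'misc'])
    raw = mne.io.RawArray(np.zeros((2, 500)), info)
    assert raw.info['buffer_size_sec'] == 1.  # filled in per the hunk above
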
diff --git a/mne/io/array/tests/test_array.py b/mne/io/array/tests/test_array.py
index 3e58b1b..d47e517 100644
--- a/mne/io/array/tests/test_array.py
+++ b/mne/io/array/tests/test_array.py
@@ -10,11 +10,12 @@ import matplotlib
 
 from numpy.testing import assert_array_almost_equal, assert_allclose
 from nose.tools import assert_equal, assert_raises, assert_true
-from mne import find_events, Epochs, pick_types, concatenate_raws
+from mne import find_events, Epochs, pick_types
 from mne.io import Raw
 from mne.io.array import RawArray
+from mne.io.tests.test_raw import _test_raw_reader
 from mne.io.meas_info import create_info, _kind_dict
-from mne.utils import _TempDir, slow_test, requires_version
+from mne.utils import slow_test, requires_version
 
 matplotlib.use('Agg')  # for testing don't use X server
 
@@ -30,7 +31,6 @@ def test_array_raw():
     """Test creating raw from array
     """
     import matplotlib.pyplot as plt
-    tempdir = _TempDir()
     # creating
     raw = Raw(fif_fname).crop(2, 5, copy=False)
     data, times = raw[:, :]
@@ -54,23 +54,13 @@ def test_array_raw():
     assert_equal(info['chs'][0]['kind'], _kind_dict['misc'][0])
     # use real types
     info = create_info(ch_names, sfreq, types)
-    raw2 = RawArray(data, info)
+    raw2 = _test_raw_reader(RawArray, test_preloading=False,
+                            data=data, info=info)
     data2, times2 = raw2[:, :]
     assert_allclose(data, data2)
     assert_allclose(times, times2)
-    # Make sure concatenation works
-    raw_concat = concatenate_raws([raw2.copy(), raw2])
-    assert_equal(raw_concat.n_times, 2 * raw2.n_times)
     assert_true('RawArray' in repr(raw2))
 
-    # saving
-    temp_fname = op.join(tempdir, 'raw.fif')
-    raw2.save(temp_fname)
-    raw3 = Raw(temp_fname)
-    data3, times3 = raw3[:, :]
-    assert_allclose(data, data3)
-    assert_allclose(times, times3)
-
     # filtering
     picks = pick_types(raw2.info, misc=True, exclude='bads')[:4]
     assert_equal(len(picks), 4)
diff --git a/mne/io/base.py b/mne/io/base.py
index ab5e16e..fa498ca 100644
--- a/mne/io/base.py
+++ b/mne/io/base.py
@@ -37,8 +37,8 @@ from ..parallel import parallel_func
 from ..utils import (_check_fname, _check_pandas_installed,
                      _check_pandas_index_arguments,
                      check_fname, _get_stim_channel, object_hash,
-                     logger, verbose, _time_mask, deprecated)
-from ..viz import plot_raw, plot_raw_psd
+                     logger, verbose, _time_mask)
+from ..viz import plot_raw, plot_raw_psd, plot_raw_psd_topo
 from ..defaults import _handle_default
 from ..externals.six import string_types
 from ..event import find_events, concatenate_events
@@ -218,8 +218,7 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
     Subclasses must provide the following methods:
 
-        * _read_segment_file(self, data, idx, offset, fi, start, stop,
-                             cals, mult)
+        * _read_segment_file(self, data, idx, fi, start, stop, cals, mult)
           (only needed for types that support on-demand disk reads)
 
     The `_BaseRaw._raw_extras` list can contain whatever data is necessary for
@@ -260,6 +259,8 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         self._first_samps = np.array(first_samps)
         info._check_consistency()  # make sure subclass did a good job
         self.info = info
+        if info.get('buffer_size_sec', None) is None:
+            raise RuntimeError('Reader error, notify mne-python developers')
         cals = np.empty(info['nchan'])
         for k in range(info['nchan']):
             cals[k] = info['chs'][k]['range'] * info['chs'][k]['cal']
@@ -354,22 +355,16 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
         # set up cals and mult (cals, compensation, and projector)
         cals = self._cals.ravel()[np.newaxis, :]
-        if self.comp is None and projector is None:
-            mult = None
+        if self.comp is not None:
+            if projector is not None:
+                mult = self.comp * cals
+                mult = np.dot(projector[idx], mult)
+            else:
+                mult = self.comp[idx] * cals
+        elif projector is not None:
+            mult = projector[idx] * cals
         else:
-            mult = list()
-            for ri in range(len(self._first_samps)):
-                if self.comp is not None:
-                    if projector is not None:
-                        mul = self.comp * cals
-                        mul = np.dot(projector[idx], mul)
-                    else:
-                        mul = self.comp[idx] * cals
-                elif projector is not None:
-                    mul = projector[idx] * cals
-                else:
-                    mul = np.diag(self._cals.ravel())[idx]
-                mult.append(mul)
+            mult = None
         cals = cals.T[idx]
 
         # read from necessary files
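The rewritten block above collapses calibration, compensation, and
projection into a single matrix per read. A numpy check that the combined
multiplier equals the step-by-step application (dummy matrices; an
average-reference projector stands in for the real one):

    import numpy as np

    rng = np.random.RandomState(0)
    n_ch, n_samp = 5, 10
    data = rng.rand(n_ch, n_samp)
    cals = rng.rand(1, n_ch)                  # per-channel calibrations
    comp = np.eye(n_ch) + 0.01 * rng.rand(n_ch, n_ch)
    projector = np.eye(n_ch) - 1. / n_ch      # average-reference projector

    mult = np.dot(projector, comp * cals)     # as built above
    step_by_step = np.dot(projector, np.dot(comp, cals.T * data))
    assert np.allclose(np.dot(mult, data), step_by_step)
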
@@ -379,23 +374,21 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
             # first iteration (only) could start in the middle somewhere
             if offset == 0:
                 start_file += start - cumul_lens[fi]
-            stop_file = np.min([stop - 1 - cumul_lens[fi] +
-                                self._first_samps[fi], self._last_samps[fi]])
-            if start_file < self._first_samps[fi] or \
-                    stop_file > self._last_samps[fi] or \
-                    stop_file < start_file or start_file > stop_file:
+            stop_file = np.min([stop - cumul_lens[fi] + self._first_samps[fi],
+                                self._last_samps[fi] + 1])
+            if start_file < self._first_samps[fi] or stop_file < start_file:
                 raise ValueError('Bad array indexing, could be a bug')
-
-            self._read_segment_file(data, idx, offset, fi,
-                                    start_file, stop_file, cals, mult)
-            offset += stop_file - start_file + 1
+            n_read = stop_file - start_file
+            this_sl = slice(offset, offset + n_read)
+            self._read_segment_file(data[:, this_sl], idx, fi,
+                                    int(start_file), int(stop_file),
+                                    cals, mult)
+            offset += n_read
 
         logger.info('[done]')
-        times = np.arange(start, stop) / self.info['sfreq']
-        return data, times
+        return data
 
-    def _read_segment_file(self, data, idx, offset, fi, start, stop,
-                           cals, mult):
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
         """Read a segment of data from a file
 
         Only needs to be implemented for readers that support
@@ -403,15 +396,10 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
         Parameters
         ----------
-        data : ndarray, shape (len(idx), n_samp)
+        data : ndarray, shape (len(idx), stop - start)
             The data array. Should be modified inplace.
         idx : ndarray | slice
             The requested channel indices.
-        offset : int
-            Offset. Data should be stored in something like::
-
-                data[:, offset:offset + (start - stop + 1)] = r[idx]
-
         fi : int
             The file index that must be read from.
         start : int
@@ -425,28 +413,6 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         """
         raise NotImplementedError
 
-    @deprecated("This method has been renamed 'load_data' and will be removed "
-                "in v0.11.")
-    def preload_data(self, verbose=None):
-        """Preload raw data
-
-        Parameters
-        ----------
-        verbose : bool, str, int, or None
-            If not None, override default verbose level (see mne.verbose).
-
-        Returns
-        -------
-        raw : instance of Raw
-            The raw object with data.
-
-        Notes
-        -----
-        This function will load raw data if it was not already preloaded.
-        If data were already preloaded, it will do nothing.
-        """
-        return self.load_data(verbose=verbose)
-
     @verbose
     def load_data(self, verbose=None):
         """Load raw data
@@ -475,7 +441,7 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
     def _preload_data(self, preload):
         """This function actually preloads the data"""
         data_buffer = preload if isinstance(preload, string_types) else None
-        self._data = self._read_segment(data_buffer=data_buffer)[0]
+        self._data = self._read_segment(data_buffer=data_buffer)
         assert len(self._data) == self.info['nchan']
         self.preload = True
         self.close()
@@ -575,11 +541,12 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         """getting raw data content with python slicing"""
         sel, start, stop = self._parse_get_set_params(item)
         if self.preload:
-            data, times = self._data[sel, start:stop], self.times[start:stop]
+            data = self._data[sel, start:stop]
         else:
-            data, times = self._read_segment(start=start, stop=stop, sel=sel,
-                                             projector=self._projector,
-                                             verbose=self.verbose)
+            data = self._read_segment(start=start, stop=stop, sel=sel,
+                                      projector=self._projector,
+                                      verbose=self.verbose)
+        times = self.times[start:stop]
         return data, times
 
     def __setitem__(self, item, value):
@@ -1325,7 +1292,7 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         start : float
             Initial time to show (can be changed dynamically once plotted).
         n_channels : int
-            Number of channels to plot at once.
+            Number of channels to plot at once. Defaults to 20.
         bgcolor : color object
             Color of the background.
         color : dict | color object | None
@@ -1453,7 +1420,7 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
         Returns
         -------
         fig : instance of matplotlib figure
-            Figure distributing one image per channel across sensor topography.
+            Figure with frequency spectra of the data channels.
         """
         return plot_raw_psd(self, tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,
                             proj=proj, n_fft=n_fft, picks=picks, ax=ax,
@@ -1461,6 +1428,65 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
                             area_alpha=area_alpha, n_overlap=n_overlap,
                             dB=dB, show=show, n_jobs=n_jobs)
 
+    def plot_psd_topo(self, tmin=0., tmax=None, fmin=0, fmax=100, proj=False,
+                      n_fft=2048, n_overlap=0, layout=None, color='w',
+                      fig_facecolor='k', axis_facecolor='k', dB=True,
+                      show=True, n_jobs=1, verbose=None):
+        """Function for plotting channel-wise frequency spectra as topography.
+
+        Parameters
+        ----------
+        tmin : float
+            Start time for calculations. Defaults to zero.
+        tmax : float | None
+            End time for calculations. If None (default), the end of data is
+            used.
+        fmin : float
+            Start frequency to consider. Defaults to zero.
+        fmax : float
+            End frequency to consider. Defaults to 100.
+        proj : bool
+            Apply projection. Defaults to False.
+        n_fft : int
+            Number of points to use in Welch FFT calculations. Defaults to
+            2048.
+        n_overlap : int
+            The number of points of overlap between blocks. Defaults to 0
+            (no overlap).
+        layout : instance of Layout | None
+            Layout instance specifying sensor positions (does not need to
+            be specified for Neuromag data). If None (default), the correct
+            layout is inferred from the data.
+        color : str | tuple
+            A matplotlib-compatible color to use for the curves. Defaults to
+            white.
+        fig_facecolor : str | tuple
+            A matplotlib-compatible color to use for the figure background.
+            Defaults to black.
+        axis_facecolor : str | tuple
+            A matplotlib-compatible color to use for the axis background.
+            Defaults to black.
+        dB : bool
+            If True, transform data to decibels. Defaults to True.
+        show : bool
+            Show figure if True. Defaults to True.
+        n_jobs : int
+            Number of jobs to run in parallel. Defaults to 1.
+        verbose : bool, str, int, or None
+            If not None, override default verbose level (see mne.verbose).
+
+        Returns
+        -------
+        fig : instance of matplotlib figure
+            Figure distributing one image per channel across sensor topography.
+        """
+        return plot_raw_psd_topo(self, tmin=tmin, tmax=tmax, fmin=fmin,
+                                 fmax=fmax, proj=proj, n_fft=n_fft,
+                                 n_overlap=n_overlap, layout=layout,
+                                 color=color, fig_facecolor=fig_facecolor,
+                                 axis_facecolor=axis_facecolor, dB=dB,
+                                 show=show, n_jobs=n_jobs, verbose=verbose)
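
A hedged usage sketch for the new method, assuming a preloaded raw instance
(file name hypothetical)::

    import mne
    raw = mne.io.Raw('sample_raw.fif', preload=True)  # hypothetical file
    fig = raw.plot_psd_topo(fmin=1., fmax=45., dB=True, show=False)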
+
     def time_as_index(self, times, use_first_samp=False, use_rounding=False):
         """Convert time to indices
 
@@ -1690,7 +1716,7 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
             nsamp = c_ns[-1]
 
             if not self.preload:
-                this_data = self._read_segment()[0]
+                this_data = self._read_segment()
             else:
                 this_data = self._data
 
@@ -1796,6 +1822,7 @@ class _BaseRaw(ProjMixin, ContainsMixin, UpdateChannelsMixin,
 
 
 def _allocate_data(data, data_buffer, data_shape, dtype):
+    """Helper to allocate data in memory or in a memmap for preloading"""
     if data is None:
         # if not already done, allocate array with right type
         if isinstance(data_buffer, string_types):
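
The helper (shown truncated here) dispatches between RAM and a memory-mapped
file; a minimal sketch of the idea, with hypothetical names::

    import numpy as np

    def _allocate_sketch(data_buffer, shape, dtype=np.float64):
        if isinstance(data_buffer, str):
            # a string is taken as the file backing an on-disk memmap
            return np.memmap(data_buffer, mode='w+', dtype=dtype, shape=shape)
        return np.zeros(shape, dtype=dtype)  # otherwise, in-memory array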
@@ -1899,8 +1926,6 @@ def _write_raw(fname, raw, info, picks, fmt, data_type, reset_range, start,
         use_fname = fname
     logger.info('Writing %s' % use_fname)
 
-    meas_id = info['meas_id']
-
     fid, cals = _start_writing_raw(use_fname, info, picks, data_type,
                                    reset_range)
 
@@ -1913,8 +1938,8 @@ def _write_raw(fname, raw, info, picks, fmt, data_type, reset_range, start,
         start_block(fid, FIFF.FIFFB_REF)
         write_int(fid, FIFF.FIFF_REF_ROLE, FIFF.FIFFV_ROLE_PREV_FILE)
         write_string(fid, FIFF.FIFF_REF_FILE_NAME, prev_fname)
-        if meas_id is not None:
-            write_id(fid, FIFF.FIFF_REF_FILE_ID, meas_id)
+        if info['meas_id'] is not None:
+            write_id(fid, FIFF.FIFF_REF_FILE_ID, info['meas_id'])
         write_int(fid, FIFF.FIFF_REF_FILE_NUM, part_idx - 1)
         end_block(fid, FIFF.FIFFB_REF)
 
@@ -1964,8 +1989,8 @@ def _write_raw(fname, raw, info, picks, fmt, data_type, reset_range, start,
             start_block(fid, FIFF.FIFFB_REF)
             write_int(fid, FIFF.FIFF_REF_ROLE, FIFF.FIFFV_ROLE_NEXT_FILE)
             write_string(fid, FIFF.FIFF_REF_FILE_NAME, op.basename(next_fname))
-            if meas_id is not None:
-                write_id(fid, FIFF.FIFF_REF_FILE_ID, meas_id)
+            if info['meas_id'] is not None:
+                write_id(fid, FIFF.FIFF_REF_FILE_ID, info['meas_id'])
             write_int(fid, FIFF.FIFF_REF_FILE_NUM, next_idx)
             end_block(fid, FIFF.FIFFB_REF)
             break
@@ -2189,17 +2214,17 @@ def concatenate_raws(raws, preload=None, events_list=None):
         return raws[0], events
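
Call-site usage is unchanged; a short sketch, assuming two compatible Raw
instances and matching event arrays (names hypothetical)::

    from mne import concatenate_raws
    raw, events = concatenate_raws([raw1, raw2], preload=True,
                                   events_list=[events1, events2])
    # raw1 is modified in place and returned with raw2 appended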
 
 
-def _check_update_montage(info, montage):
+def _check_update_montage(info, montage, path=None, update_ch_names=False):
     """ Helper function for eeg readers to add montage"""
     if montage is not None:
-        if not isinstance(montage, (str, Montage)):
+        if not isinstance(montage, (string_types, Montage)):
             err = ("Montage must be str, None, or instance of Montage. "
                    "%s was provided" % type(montage))
             raise TypeError(err)
         if montage is not None:
-            if isinstance(montage, str):
-                montage = read_montage(montage)
-            _set_montage(info, montage)
+            if isinstance(montage, string_types):
+                montage = read_montage(montage, path=path)
+            _set_montage(info, montage, update_ch_names=update_ch_names)
 
             missing_positions = []
             exclude = (FIFF.FIFFV_EOG_CH, FIFF.FIFFV_MISC_CH,
diff --git a/mne/io/brainvision/brainvision.py b/mne/io/brainvision/brainvision.py
index 72adeb9..d13052a 100644
--- a/mne/io/brainvision/brainvision.py
+++ b/mne/io/brainvision/brainvision.py
@@ -18,7 +18,7 @@ from ...utils import verbose, logger
 from ..constants import FIFF
 from ..meas_info import _empty_info
 from ..base import _BaseRaw, _check_update_montage
-from ..reference import add_reference_channels
+from ..utils import _mult_cal_one
 
 from ...externals.six import StringIO
 from ...externals.six.moves import configparser
@@ -43,12 +43,6 @@ class RawBrainVision(_BaseRaw):
         Names of channels or list of indices that should be designated
         MISC channels. Values should correspond to the electrodes
         in the vhdr file. Default is ``()``.
-    reference : None | str
-        **Deprecated**, use `add_reference_channel` instead.
-        Name of the electrode which served as the reference in the recording.
-        If a name is provided, a corresponding channel is added and its data
-        is set to 0. This is useful for later re-referencing. The name should
-        correspond to a name in elp_names. Data must be preloaded.
     scale : float
         The scaling factor for EEG data. Units are in volts. Default scale
         factor is 1. For microvolts, the scale factor would be 1e-6. This is
@@ -61,6 +55,12 @@ class RawBrainVision(_BaseRaw):
         events (stimulus triggers will be unaffected). If None, response
         triggers will be ignored. Default is 0 for backwards compatibility, but
         typically another value or None will be necessary.
+    event_id : dict | None
+        The id of the event to consider. If None (default),
+        only stimulus events are added to the stimulus channel. If dict,
+        the keys will be mapped to trigger values on the stimulus channel
+        in addition to the stimulus events. Keys are case-sensitive.
+        Example: {'SyncStatus': 1, 'Pulse Artifact': 3}.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -70,40 +70,32 @@ class RawBrainVision(_BaseRaw):
     """
     @verbose
     def __init__(self, vhdr_fname, montage=None,
-                 eog=('HEOGL', 'HEOGR', 'VEOGb'), misc=(), reference=None,
-                 scale=1., preload=False, response_trig_shift=0, verbose=None):
+                 eog=('HEOGL', 'HEOGR', 'VEOGb'), misc=(),
+                 scale=1., preload=False, response_trig_shift=0,
+                 event_id=None, verbose=None):
         # Channel info and events
         logger.info('Extracting parameters from %s...' % vhdr_fname)
         vhdr_fname = os.path.abspath(vhdr_fname)
-        info, fmt, self._order, events = _get_vhdr_info(
-            vhdr_fname, eog, misc, response_trig_shift, scale)
+        info, fmt, self._order, mrk_fname, montage = _get_vhdr_info(
+            vhdr_fname, eog, misc, scale, montage)
+        events = _read_vmrk_events(mrk_fname, event_id, response_trig_shift)
         _check_update_montage(info, montage)
         with open(info['filename'], 'rb') as f:
             f.seek(0, os.SEEK_END)
             n_samples = f.tell()
         dtype_bytes = _fmt_byte_dict[fmt]
         self.preload = False  # so the event-setting works
-        self.set_brainvision_events(events)
         last_samps = [(n_samples // (dtype_bytes * (info['nchan'] - 1))) - 1]
+        self._create_event_ch(events, last_samps[0] + 1)
         super(RawBrainVision, self).__init__(
             info, last_samps=last_samps, filenames=[info['filename']],
             orig_format=fmt, preload=preload, verbose=verbose)
 
-        # add reference
-        if reference is not None:
-            warnings.warn('reference is deprecated and will be removed in '
-                          'v0.11. Use add_reference_channels instead.')
-            if preload is False:
-                raise ValueError("Preload must be set to True if reference is "
-                                 "specified.")
-            add_reference_channels(self, reference, copy=False)
-
-    def _read_segment_file(self, data, idx, offset, fi, start, stop,
-                           cals, mult):
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
         """Read a chunk of raw data"""
         # read data
         n_data_ch = len(self.ch_names) - 1
-        n_times = stop - start + 1
+        n_times = stop - start
         pointer = start * n_data_ch * _fmt_byte_dict[self.orig_format]
         with open(self._filenames[fi], 'rb') as f:
             f.seek(pointer)
@@ -117,10 +109,8 @@ class RawBrainVision(_BaseRaw):
         data_ = np.empty((n_data_ch + 1, n_times), dtype=np.float64)
         data_[:-1] = data_buffer  # cast to float64
         del data_buffer
-        data_[-1] = _synthesize_stim_channel(self._events, start, stop + 1)
-        data_ *= self._cals[:, np.newaxis]
-        data[:, offset:offset + stop - start + 1] = \
-            np.dot(mult, data_) if mult is not None else data_[idx]
+        data_[-1] = self._event_ch[start:stop]
+        _mult_cal_one(data, data_, idx, cals, mult)
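
The seek above relies on multiplexed storage, i.e. sample-major frames of
n_data_ch values; a worked instance of the pointer arithmetic, with
hypothetical numbers::

    n_data_ch, bytes_per_samp, start = 32, 2, 500  # int16 data
    pointer = start * n_data_ch * bytes_per_samp   # = 32000 bytes into file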
 
     def get_brainvision_events(self):
         """Retrieve the events associated with the Brain Vision Raw object
@@ -142,24 +132,34 @@ class RawBrainVision(_BaseRaw):
             Events, each row consisting of an (onset, duration, trigger)
             sequence.
         """
+        self._create_event_ch(events)
+
+    def _create_event_ch(self, events, n_samp=None):
+        """Create the event channel"""
+        if n_samp is None:
+            n_samp = self.last_samp - self.first_samp + 1
         events = np.array(events, int)
         if events.ndim != 2 or events.shape[1] != 3:
             raise ValueError("[n_events x 3] shaped array required")
         # update events
+        self._event_ch = _synthesize_stim_channel(events, n_samp)
         self._events = events
         if self.preload:
-            start = self.first_samp
-            stop = self.last_samp + 1
-            self._data[-1] = _synthesize_stim_channel(events, start, stop)
+            self._data[-1] = self._event_ch
 
 
-def _read_vmrk_events(fname, response_trig_shift=0):
+def _read_vmrk_events(fname, event_id=None, response_trig_shift=0):
     """Read events from a vmrk file
 
     Parameters
     ----------
     fname : str
         vmrk file to be read.
+    event_id : dict | None
+        The id of the event to consider. If dict, the keys will be mapped to
+        trigger values on the stimulus channel. Example:
+        {'SyncStatus': 1, 'Pulse Artifact': 3}. If None or an empty dict
+        (default), only stimulus events are added to the stimulus channel.
     response_trig_shift : int | None
         Integer to shift response triggers by. None ignores response triggers.
 
@@ -169,17 +169,14 @@ def _read_vmrk_events(fname, response_trig_shift=0):
         An array containing the whole recording's events, each row representing
         an event as (onset, duration, trigger) sequence.
     """
+    if event_id is None:
+        event_id = dict()
     # read vmrk file
     with open(fname, 'rb') as fid:
         txt = fid.read().decode('utf-8')
 
     header = txt.split('\n')[0].strip()
-    start_tag = 'Brain Vision Data Exchange Marker File'
-    if not header.startswith(start_tag):
-        raise ValueError("vmrk file should start with %r" % start_tag)
-    end_tag = 'Version 1.0'
-    if not header.endswith(end_tag):
-        raise ValueError("vmrk file should be %r" % end_tag)
+    _check_mrk_version(header)
     if (response_trig_shift is not None and
             not isinstance(response_trig_shift, int)):
         raise TypeError("response_trig_shift must be an integer or None")
@@ -198,22 +195,28 @@ def _read_vmrk_events(fname, response_trig_shift=0):
     events = []
     for info in items:
         mtype, mdesc, onset, duration = info.split(',')[:4]
+        onset = int(onset)
+        duration = (int(duration) if duration.isdigit() else 1)
         try:
             trigger = int(re.findall('[A-Za-z]*\s*?(\d+)', mdesc)[0])
-            if mdesc[0].lower() == 's' or response_trig_shift is not None:
-                if mdesc[0].lower() == 'r':
-                    trigger += response_trig_shift
-                onset = int(onset)
-                duration = int(duration)
-                events.append((onset, duration, trigger))
         except IndexError:
-            pass
+            trigger = None
+
+        if mtype.lower().startswith('response'):
+            if response_trig_shift is not None and trigger is not None:
+                trigger += response_trig_shift
+            else:
+                trigger = None
+        if mdesc in event_id:
+            trigger = event_id[mdesc]
+        if trigger:
+            events.append((onset, duration, trigger))
 
     events = np.array(events).reshape(-1, 3)
     return events
 
 
-def _synthesize_stim_channel(events, start, stop):
+def _synthesize_stim_channel(events, n_samp):
     """Synthesize a stim channel from events read from a vmrk file
 
     Parameters
@@ -221,10 +224,8 @@ def _synthesize_stim_channel(events, start, stop):
     events : array, shape (n_events, 3)
         Each row representing an event as (onset, duration, trigger) sequence
         (the format returned by _read_vmrk_events).
-    start : int
-        First sample to return.
-    stop : int
-        Last sample to return.
+    n_samp : int
+        The number of samples.
 
     Returns
     -------
@@ -233,26 +234,31 @@ def _synthesize_stim_channel(events, start, stop):
     """
     # select events overlapping buffer
     onset = events[:, 0]
-    offset = onset + events[:, 1]
-    idx = np.logical_and(onset < stop, offset > start)
-    if idx.sum() > 0:  # fix for old numpy
-        events = events[idx]
-
-    # make onset relative to buffer
-    events[:, 0] -= start
-
-    # fix onsets before buffer start
-    idx = events[:, 0] < 0
-    events[idx, 0] = 0
-
     # create output buffer
-    stim_channel = np.zeros(stop - start)
+    stim_channel = np.zeros(n_samp, int)
     for onset, duration, trigger in events:
         stim_channel[onset:onset + duration] = trigger
-
     return stim_channel
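
For reference, a worked instance of the synthesis above, with hypothetical
events::

    import numpy as np
    events = np.array([[2, 2, 5], [7, 1, 3]])  # (onset, duration, trigger)
    # _synthesize_stim_channel(events, n_samp=10) would then yield
    # array([0, 0, 5, 5, 0, 0, 0, 3, 0, 0])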
 
 
+def _check_hdr_version(header):
+    tags = ['Brain Vision Data Exchange Header File Version 1.0',
+            'Brain Vision Data Exchange Header File Version 2.0']
+    if header not in tags:
+        raise ValueError("Currently only %r are supported, not %r. "
+                         "Contact the MNE developers for support."
+                         % (str(tags), header))
+
+
+def _check_mrk_version(header):
+    tags = ['Brain Vision Data Exchange Marker File, Version 1.0',
+            'Brain Vision Data Exchange Marker File, Version 2.0']
+    if header not in tags:
+        raise ValueError("Currently only %r are supported, not %r. "
+                         "Contact the MNE developers for support."
+                         % (str(tags), header))
+
+
 _orientation_dict = dict(MULTIPLEXED='F', VECTORIZED='C')
 _fmt_dict = dict(INT_16='short', INT_32='int', IEEE_FLOAT_32='single')
 _fmt_byte_dict = dict(short=2, int=4, single=4)
@@ -260,7 +266,7 @@ _fmt_dtype_dict = dict(short='<i2', int='<i4', single='<f4')
 _unit_dict = {'V': 1., u'µV': 1e-6}
 
 
-def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
+def _get_vhdr_info(vhdr_fname, eog, misc, scale, montage):
     """Extracts all the information from the header file.
 
     Parameters
@@ -273,12 +279,14 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
     misc : list of str
         Names of channels that should be designated MISC channels. Names
         should correspond to the electrodes in the vhdr file.
-    response_trig_shift : int | None
-        Integer to shift response triggers by. None ignores response triggers.
     scale : float
         The scaling factor for EEG data. Units are in volts. Default scale
         factor is 1.. For microvolts, the scale factor would be 1e-6. This is
         used when the header file does not specify the scale factor.
+    montage : str | True | None | instance of Montage
+        Path or instance of montage containing electrode positions.
+        If None, sensor locations are (0,0,0). See the documentation of
+        :func:`mne.channels.read_montage` for more information.
 
     Returns
     -------
@@ -292,7 +300,6 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
         Events from the corresponding vmrk file.
     """
     scale = float(scale)
-    info = _empty_info()
 
     ext = os.path.splitext(vhdr_fname)[-1]
     if ext != '.vhdr':
@@ -300,8 +307,8 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
                       "not the '%s' file." % ext)
     with open(vhdr_fname, 'rb') as f:
         # extract the first section to resemble a cfg
-        l = f.readline().decode('utf-8').strip()
-        assert l == 'Brain Vision Data Exchange Header File Version 1.0'
+        header = f.readline().decode('utf-8').strip()
+        _check_hdr_version(header)
         settings = f.read().decode('utf-8')
 
     if settings.find('[Comment]') != -1:
@@ -316,7 +323,8 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
 
     # get sampling info
     # Sampling interval is given in microsec
-    info['sfreq'] = 1e6 / cfg.getfloat('Common Infos', 'SamplingInterval')
+    sfreq = 1e6 / cfg.getfloat('Common Infos', 'SamplingInterval')
+    info = _empty_info(sfreq)
 
     # check binary format
     assert cfg.get('Common Infos', 'DataFormat') == 'BINARY'
@@ -337,18 +345,40 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
     cals = np.empty(info['nchan'])
     ranges = np.empty(info['nchan'])
     cals.fill(np.nan)
+    ch_dict = dict()
     for chan, props in cfg.items('Channel Infos'):
         n = int(re.findall(r'ch(\d+)', chan)[0]) - 1
         props = props.split(',')
         if len(props) < 4:
             props += ('V',)
         name, _, resolution, unit = props[:4]
+        ch_dict[chan] = name
         ch_names[n] = name
-        if resolution == "":  # For truncated vhdrs (e.g. EEGLAB export)
-            resolution = 0.000001
+        if resolution == "":
+            if not unit:  # For truncated vhdrs (e.g. EEGLAB export)
+                resolution = 0.000001
+            else:
+                resolution = 1.  # for files with units specified, but not res
         unit = unit.replace(u'\xc2', u'')  # Remove unwanted control characters
         cals[n] = float(resolution)
         ranges[n] = _unit_dict.get(unit, unit) * scale
+
+    # create montage
+    if montage is True:
+        from ...transforms import _sphere_to_cartesian
+        from ...channels.montage import Montage
+        montage_pos = list()
+        montage_names = list()
+        for ch in cfg.items('Coordinates'):
+            montage_names.append(ch_dict[ch[0]])
+            radius, theta, phi = map(float, ch[1].split(','))
+            # 1: radius, 2: theta, 3: phi
+            pos = _sphere_to_cartesian(r=radius, theta=theta, phi=phi)
+            montage_pos.append(pos)
+        montage_sel = np.arange(len(montage_pos))
+        montage = Montage(montage_pos, montage_names, 'Brainvision',
+                          montage_sel)
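
The montage branch maps each electrode's spherical (radius, theta, phi)
triple to Cartesian space; assuming the usual elevation/azimuth convention
with angles in radians, the conversion would resemble::

    import numpy as np

    def sphere_to_cartesian_sketch(r, theta, phi):
        # a sketch only; the in-tree helper may use another convention,
        # and note that vhdr files store the angles in degrees
        z = r * np.sin(phi)
        rcos = r * np.cos(phi)
        return rcos * np.cos(theta), rcos * np.sin(theta), z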
+
     ch_names[-1] = 'STI 014'
     cals[-1] = 1.
     ranges[-1] = 1.
@@ -377,10 +407,10 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
             highpass.append(line[5])
             lowpass.append(line[6])
         if len(highpass) == 0:
-            info['highpass'] = None
+            pass
         elif all(highpass):
             if highpass[0] == 'NaN':
-                info['highpass'] = None
+                pass  # Placeholder for future use. Highpass set in _empty_info
             elif highpass[0] == 'DC':
                 info['highpass'] = 0.
             else:
@@ -391,10 +421,10 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
                                   'filters. Highest filter setting will '
                                   'be stored.'))
         if len(lowpass) == 0:
-            info['lowpass'] = None
+            pass
         elif all(lowpass):
             if lowpass[0] == 'NaN':
-                info['lowpass'] = None
+                pass  # Placeholder for future use. Lowpass set in _empty_info
             else:
                 info['lowpass'] = float(lowpass[0])
         else:
@@ -405,19 +435,16 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
         # Post process highpass and lowpass to take into account units
         header = settings[idx].split('  ')
         header = [h for h in header if len(h)]
-        if '[s]' in header[4] and info['highpass'] is not None \
-                and (info['highpass'] > 0):
+        if '[s]' in header[4] and (info['highpass'] > 0):
             info['highpass'] = 1. / info['highpass']
-        if '[s]' in header[5] and info['lowpass'] is not None:
+        if '[s]' in header[5]:
             info['lowpass'] = 1. / info['lowpass']
-    else:
-        info['highpass'] = None
-        info['lowpass'] = None
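
When the header gives filter settings as time constants ('[s]'), the branch
above stores their reciprocal; a worked instance with a hypothetical value::

    tau = 10.            # high-pass given as a 10 s time constant
    highpass = 1. / tau  # stored as 0.1 Hz under this convention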
 
     # locate EEG and marker files
     path = os.path.dirname(vhdr_fname)
     info['filename'] = os.path.join(path, cfg.get('Common Infos', 'DataFile'))
     info['meas_date'] = int(time.time())
+    info['buffer_size_sec'] = 1.  # reasonable default
 
     # Creates a list of dicts of eeg channels for raw.info
     logger.info('Setting channel info structure...')
@@ -447,16 +474,15 @@ def _get_vhdr_info(vhdr_fname, eog, misc, response_trig_shift, scale):
             coord_frame=FIFF.FIFFV_COORD_HEAD))
 
     # for stim channel
-    marker_id = os.path.join(path, cfg.get('Common Infos', 'MarkerFile'))
-    events = _read_vmrk_events(marker_id, response_trig_shift)
+    mrk_fname = os.path.join(path, cfg.get('Common Infos', 'MarkerFile'))
     info._check_consistency()
-    return info, fmt, order, events
+    return info, fmt, order, mrk_fname, montage
 
 
 def read_raw_brainvision(vhdr_fname, montage=None,
                          eog=('HEOGL', 'HEOGR', 'VEOGb'), misc=(),
-                         reference=None, scale=1., preload=False,
-                         response_trig_shift=0, verbose=None):
+                         scale=1., preload=False, response_trig_shift=0,
+                         event_id=None, verbose=None):
     """Reader for Brain Vision EEG file
 
     Parameters
@@ -475,12 +501,6 @@ def read_raw_brainvision(vhdr_fname, montage=None,
         Names of channels or list of indices that should be designated
         MISC channels. Values should correspond to the electrodes
         in the vhdr file. Default is ``()``.
-    reference : None | str
-        **Deprecated**, use `add_reference_channel` instead.
-        Name of the electrode which served as the reference in the recording.
-        If a name is provided, a corresponding channel is added and its data
-        is set to 0. This is useful for later re-referencing. The name should
-        correspond to a name in elp_names. Data must be preloaded.
     scale : float
         The scaling factor for EEG data. Units are in volts. Default scale
         factor is 1. For microvolts, the scale factor would be 1e-6. This is
@@ -493,6 +513,12 @@ def read_raw_brainvision(vhdr_fname, montage=None,
         events (stimulus triggers will be unaffected). If None, response
         triggers will be ignored. Default is 0 for backwards compatibility, but
         typically another value or None will be necessary.
+    event_id : dict | None
+        The id of the event to consider. If None (default),
+        only stimulus events are added to the stimulus channel. If dict,
+        the keys will be mapped to trigger values on the stimulus channel
+        in addition to the stimulus events. Keys are case-sensitive.
+        Example: {'SyncStatus': 1, 'Pulse Artifact': 3}.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -506,7 +532,7 @@ def read_raw_brainvision(vhdr_fname, montage=None,
     mne.io.Raw : Documentation of attribute and methods.
     """
     raw = RawBrainVision(vhdr_fname=vhdr_fname, montage=montage, eog=eog,
-                         misc=misc, reference=reference, scale=scale,
-                         preload=preload, verbose=verbose,
+                         misc=misc, scale=scale,
+                         preload=preload, verbose=verbose, event_id=event_id,
                          response_trig_shift=response_trig_shift)
     return raw
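
An end-to-end sketch of the new reader options, mirroring the test data
below (file name hypothetical)::

    from mne.io import read_raw_brainvision
    raw = read_raw_brainvision('recording.vhdr', preload=True,
                               response_trig_shift=None,
                               event_id={'Sync On': 5})
    events = raw.get_brainvision_events()  # (onset, duration, trigger) rows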
diff --git a/mne/io/brainvision/tests/data/test.vmrk b/mne/io/brainvision/tests/data/test.vmrk
index 16eccb9..a7cfb71 100755
--- a/mne/io/brainvision/tests/data/test.vmrk
+++ b/mne/io/brainvision/tests/data/test.vmrk
@@ -21,3 +21,4 @@ Mk9=Stimulus,S255,4946,1,0
 Mk10=Response,R255,6000,1,0
 Mk11=Stimulus,S254,6620,1,0
 Mk12=Stimulus,S255,6630,1,0
+Mk13=SyncStatus,Sync On,7630,1,0
diff --git a/mne/io/brainvision/tests/data/testv2.vhdr b/mne/io/brainvision/tests/data/testv2.vhdr
new file mode 100644
index 0000000..7773b83
--- /dev/null
+++ b/mne/io/brainvision/tests/data/testv2.vhdr
@@ -0,0 +1,107 @@
+Brain Vision Data Exchange Header File Version 2.0
+; Data created from history path: test/Raw Data
+
+[Common Infos]
+Codepage=UTF-8
+DataFile=test.eeg
+MarkerFile=testv2.vmrk
+DataFormat=BINARY
+; Data orientation: VECTORIZED=ch1,pt1, ch1,pt2..., MULTIPLEXED=ch1,pt1, ch2,pt1 ...
+DataOrientation=MULTIPLEXED
+DataType=TIMEDOMAIN
+NumberOfChannels=32
+DataPoints=7900
+; Sampling interval in microseconds if time domain (convert to Hertz:
+; 1000000 / SamplingInterval) or in Hertz if frequency domain:
+SamplingInterval=1000
+
+[User Infos]
+; Each entry: Prop<Number>=<Type>,<Name>,<Value>,<Value2>,...,<ValueN>
+; Property number must be unique. Types can be int, single, string, bool, byte, double, uint
+; or arrays of those, indicated int-array etc
+; Array types have more than one value, number of values determines size of array.
+; Fields are delimited by commas, commas in strings are written \1
+
+[Binary Infos]
+BinaryFormat=INT_16
+
+[Channel Infos]
+; Each entry: Ch<Channel number>=<Name>,<Reference channel name>,
+; <Resolution in "Unit">,<Unit>, Future extensions..
+; Fields are delimited by commas, some fields might be omitted (empty).
+; Commas in channel names are coded as "\1".
+Ch1=FP1,,0.5,µV
+Ch2=FP2,,0.5,µV
+Ch3=F3,,0.5,µV
+Ch4=F4,,0.5,µV
+Ch5=C3,,0.5,µV
+Ch6=C4,,0.5,µV
+Ch7=P3,,0.5,µV
+Ch8=P4,,0.5,µV
+Ch9=O1,,0.5,µV
+Ch10=O2,,0.5,µV
+Ch11=F7,,0.5,µV
+Ch12=F8,,0.5,µV
+Ch13=P7,,0.5,µV
+Ch14=P8,,0.5,µV
+Ch15=Fz,,0.5,µV
+Ch16=FCz,,0.5,µV
+Ch17=Cz,,0.5,µV
+Ch18=CPz,,0.5,µV
+Ch19=Pz,,0.5,µV
+Ch20=POz,,0.5,µV
+Ch21=FC1,,0.5,µV
+Ch22=FC2,,0.5,µV
+Ch23=CP1,,0.5,µV
+Ch24=CP2,,0.5,µV
+Ch25=FC5,,0.5,µV
+Ch26=FC6,,0.5,µV
+Ch27=CP5,,0.5,µV
+Ch28=CP6,,0.5,µV
+Ch29=HL,,0.5,µV
+Ch30=HR,,0.5,µV
+Ch31=Vb,,0.5,µV
+Ch32=ReRef,,0.5,µV
+
+[Channel User Infos]
+; Each entry: Prop<Number>=Ch<ChannelNumber>,<Type>,<Name>,<Value>,<Value2>,...,<ValueN>
+; Property number must be unique. Types can be int, single, string, bool, byte, double, uint
+; or arrays of those, indicated int-array etc
+; Array types have more than one value, number of values determines size of array.
+; Fields are delimited by commas, commas in strings are written \1
+; Properties are assigned to channels using their channel number.
+
+[Coordinates]
+; Each entry: Ch<Channel number>=<Radius>,<Theta>,<Phi>
+Ch1=1,-90,-72
+Ch2=1,90,72
+Ch3=1,-60,-51
+Ch4=1,60,51
+Ch5=1,-45,0
+Ch6=1,45,0
+Ch7=1,-60,51
+Ch8=1,60,-51
+Ch9=1,-90,72
+Ch10=1,90,-72
+Ch11=1,-90,-36
+Ch12=1,90,36
+Ch13=1,-90,36
+Ch14=1,90,-36
+Ch15=1,45,90
+Ch16=1,22,90
+Ch17=1,0,0
+Ch18=1,22,-90
+Ch19=1,45,-90
+Ch20=1,67,-90
+Ch21=1,-31,-46
+Ch22=1,31,46
+Ch23=1,-31,46
+Ch24=1,31,-46
+Ch25=1,-69,-21
+Ch26=1,69,21
+Ch27=1,-69,21
+Ch28=1,69,-21
+Ch29=0,0,0
+Ch30=0,0,0
+Ch31=0,0,0
+Ch32=0,0,0
diff --git a/mne/io/brainvision/tests/data/test.vmrk b/mne/io/brainvision/tests/data/testv2.vmrk
old mode 100755
new mode 100644
similarity index 52%
copy from mne/io/brainvision/tests/data/test.vmrk
copy to mne/io/brainvision/tests/data/testv2.vmrk
index 16eccb9..cec3a02
--- a/mne/io/brainvision/tests/data/test.vmrk
+++ b/mne/io/brainvision/tests/data/testv2.vmrk
@@ -1,4 +1,6 @@
-Brain Vision Data Exchange Marker File, Version 1.0
+Brain Vision Data Exchange Marker File, Version 2.0
+; Data created from history path: test/Raw Data
+; The channel numbers are related to the channels in the exported file.
 
 [Common Infos]
 Codepage=UTF-8
@@ -21,3 +23,11 @@ Mk9=Stimulus,S255,4946,1,0
 Mk10=Response,R255,6000,1,0
 Mk11=Stimulus,S254,6620,1,0
 Mk12=Stimulus,S255,6630,1,0
+
+[Marker User Infos]
+; Each entry: Prop<Number>=Mk<Marker number>,<Type>,<Name>,<Value>,<Value2>,...,<ValueN>
+; Property number must be unique. Types can be int, single, string, bool, byte, double, uint
+; or arrays of those, indicated int-array etc
+; Array types have more than one value, number of values determines size of array.
+; Fields are delimited by commas, commas in strings are written \1
+; Properties are assigned to markers using their marker number.
diff --git a/mne/io/brainvision/tests/test_brainvision.py b/mne/io/brainvision/tests/test_brainvision.py
index ca338f4..621a1f8 100644
--- a/mne/io/brainvision/tests/test_brainvision.py
+++ b/mne/io/brainvision/tests/test_brainvision.py
@@ -14,14 +14,17 @@ from numpy.testing import (assert_array_almost_equal, assert_array_equal,
                            assert_allclose)
 
 from mne.utils import _TempDir, run_tests_if_main
-from mne import pick_types, concatenate_raws, find_events
+from mne import pick_types, find_events
 from mne.io.constants import FIFF
 from mne.io import Raw, read_raw_brainvision
+from mne.io.tests.test_raw import _test_raw_reader
 
 FILE = inspect.getfile(inspect.currentframe())
 data_dir = op.join(op.dirname(op.abspath(FILE)), 'data')
 vhdr_path = op.join(data_dir, 'test.vhdr')
 vmrk_path = op.join(data_dir, 'test.vmrk')
+vhdr_v2_path = op.join(data_dir, 'testv2.vhdr')
+vmrk_v2_path = op.join(data_dir, 'testv2.vmrk')
 vhdr_highpass_path = op.join(data_dir, 'test_highpass.vhdr')
 montage = op.join(data_dir, 'test.hpts')
 eeg_bin = op.join(data_dir, 'test_bin_raw.fif')
@@ -31,12 +34,12 @@ eog = ['HL', 'HR', 'Vb']
 def test_brainvision_data_filters():
     """Test reading raw Brain Vision files
     """
-    raw = read_raw_brainvision(vhdr_highpass_path, montage, eog=eog,
-                               preload=True)
+    raw = _test_raw_reader(read_raw_brainvision,
+                           vhdr_fname=vhdr_highpass_path, montage=montage,
+                           eog=eog)
+
     assert_equal(raw.info['highpass'], 0.1)
     assert_equal(raw.info['lowpass'], 250.)
-    raw.info["lowpass"] = None
-    raw.filter(1, 30)
 
 
 def test_brainvision_data():
@@ -45,8 +48,9 @@ def test_brainvision_data():
     assert_raises(IOError, read_raw_brainvision, vmrk_path)
     assert_raises(ValueError, read_raw_brainvision, vhdr_path, montage,
                   preload=True, scale="foo")
-    raw_py = read_raw_brainvision(vhdr_path, montage, eog=eog, preload=True)
-    raw_py.load_data()  # currently does nothing
+    raw_py = _test_raw_reader(read_raw_brainvision,
+                              vhdr_fname=vhdr_path, montage=montage, eog=eog)
+
     assert_true('RawBrainVision' in repr(raw_py))
 
     assert_equal(raw_py.info['highpass'], 0.)
@@ -55,9 +59,6 @@ def test_brainvision_data():
     picks = pick_types(raw_py.info, meg=False, eeg=True, exclude='bads')
     data_py, times_py = raw_py[picks]
 
-    print(raw_py)  # to test repr
-    print(raw_py.info)  # to test Info repr
-
     # compare with a file that was generated using MNE-C
     raw_bin = Raw(eeg_bin, preload=True)
     picks = pick_types(raw_py.info, meg=False, eeg=True, exclude='bads')
@@ -67,8 +68,6 @@ def test_brainvision_data():
     assert_array_almost_equal(times_py, times_bin)
 
     # Make sure EOG channels are marked correctly
-    raw_py = read_raw_brainvision(vhdr_path, montage, eog=eog,
-                                  preload=True)
     for ch in raw_py.info['chs']:
         if ch['ch_name'] in eog:
             assert_equal(ch['kind'], FIFF.FIFFV_EOG_CH)
@@ -79,9 +78,9 @@ def test_brainvision_data():
         else:
             raise RuntimeError("Unknown Channel: %s" % ch['ch_name'])
 
-    # Make sure concatenation works
-    raw_concat = concatenate_raws([raw_py.copy(), raw_py])
-    assert_equal(raw_concat.n_times, 2 * raw_py.n_times)
+    # test loading v2
+    read_raw_brainvision(vhdr_v2_path, eog=eog, preload=True,
+                         response_trig_shift=1000)
 
 
 def test_events():
@@ -135,6 +134,23 @@ def test_events():
                                 [4946, 1, 255],
                                 [6620, 1, 254],
                                 [6630, 1, 255]])
+    # check that events are read properly when event_id is specified for
+    # auxiliary events
+    raw = read_raw_brainvision(vhdr_path, eog=eog, preload=True,
+                               response_trig_shift=None,
+                               event_id={'Sync On': 5})
+    events = raw.get_brainvision_events()
+    assert_array_equal(events, [[487, 1, 253],
+                                [497, 1, 255],
+                                [1770, 1, 254],
+                                [1780, 1, 255],
+                                [3253, 1, 254],
+                                [3263, 1, 255],
+                                [4936, 1, 253],
+                                [4946, 1, 255],
+                                [6620, 1, 254],
+                                [6630, 1, 255],
+                                [7630, 1, 5]])
 
     assert_raises(TypeError, read_raw_brainvision, vhdr_path, eog=eog,
                   preload=True, response_trig_shift=0.1)
@@ -172,36 +188,4 @@ def test_events():
     assert_equal(raw.info['chs'][-1]['ch_name'], 'STI 014')
 
 
-def test_read_segment():
-    """Test writing raw eeg files when preload is False
-    """
-    tempdir = _TempDir()
-    raw1 = read_raw_brainvision(vhdr_path, eog=eog, preload=False)
-    raw1_file = op.join(tempdir, 'test1-raw.fif')
-    raw1.save(raw1_file, overwrite=True)
-    raw11 = Raw(raw1_file, preload=True)
-    data1, times1 = raw1[:, :]
-    data11, times11 = raw11[:, :]
-    assert_array_almost_equal(data1, data11, 8)
-    assert_array_almost_equal(times1, times11)
-    assert_equal(sorted(raw1.info.keys()), sorted(raw11.info.keys()))
-
-    raw2 = read_raw_brainvision(vhdr_path, eog=eog, preload=True)
-    raw2_file = op.join(tempdir, 'test2-raw.fif')
-    raw2.save(raw2_file, overwrite=True)
-    data2, times2 = raw2[:, :]
-    assert_array_equal(data1, data2)
-    assert_array_equal(times1, times2)
-
-    raw1 = Raw(raw1_file, preload=True)
-    raw2 = Raw(raw2_file, preload=True)
-    assert_array_equal(raw1._data, raw2._data)
-
-    # save with buffer size smaller than file
-    raw3_file = op.join(tempdir, 'test3-raw.fif')
-    raw3 = read_raw_brainvision(vhdr_path, eog=eog)
-    raw3.save(raw3_file, buffer_size_sec=2)
-    raw3 = Raw(raw3_file, preload=True)
-    assert_array_equal(raw3._data, raw1._data)
-
 run_tests_if_main()
diff --git a/mne/io/bti/bti.py b/mne/io/bti/bti.py
index caa1be4..96c4477 100644
--- a/mne/io/bti/bti.py
+++ b/mne/io/bti/bti.py
@@ -9,6 +9,7 @@
 
 import os.path as op
 from itertools import count
+import warnings
 
 import numpy as np
 
@@ -17,6 +18,7 @@ from ...transforms import (combine_transforms, invert_transform, apply_trans,
                            Transform)
 from ..constants import FIFF
 from .. import _BaseRaw, _coil_trans_to_loc, _loc_to_coil_trans, _empty_info
+from ..utils import _mult_cal_one
 from .constants import BTI
 from .read import (read_int32, read_int16, read_str, read_float, read_double,
                    read_transform, read_char, read_int64, read_uint16,
@@ -893,8 +895,7 @@ def _read_bti_header_pdf(pdf_fname):
 def _read_bti_header(pdf_fname, config_fname, sort_by_ch_name=True):
     """ Read bti PDF header
     """
-    info = _read_bti_header_pdf(pdf_fname) if pdf_fname else dict()
-
+    info = _read_bti_header_pdf(pdf_fname)
     cfg = _read_config(config_fname)
     info['bti_transform'] = cfg['transforms']
 
@@ -926,13 +927,10 @@ def _read_bti_header(pdf_fname, config_fname, sort_by_ch_name=True):
             ch['loc'] = _coil_trans_to_loc(ch_cfg['dev']['transform'])
         else:
             ch['loc'] = None
-        if pdf_fname:
-            if info['data_format'] <= 2:  # see DTYPES, implies integer
-                ch['cal'] = ch['scale'] * ch['upb'] / float(ch['gain'])
-            else:  # float
-                ch['cal'] = ch['scale'] * ch['gain']
-        else:
-            ch['scale'] = 1.0
+        if info['data_format'] <= 2:  # see DTYPES, implies integer
+            ch['cal'] = ch['scale'] * ch['upb'] / float(ch['gain'])
+        else:  # float
+            ch['cal'] = ch['scale'] * ch['gain']
 
     if sort_by_ch_name:
         by_index = [(i, d['index']) for i, d in enumerate(chans)]
@@ -961,57 +959,6 @@ def _read_bti_header(pdf_fname, config_fname, sort_by_ch_name=True):
     return info
 
 
-def _read_data(info, start=None, stop=None):
-    """ Helper function: read Bti processed data file (PDF)
-
-    Parameters
-    ----------
-    info : dict
-        The measurement info.
-    start : int | None
-        The number of the first time slice to read. If None, all data will
-        be read from the beginning.
-    stop : int | None
-        The number of the last time slice to read. If None, all data will
-        be read to the end.
-    dtype : str | dtype object
-        The type the data are casted to.
-
-    Returns
-    -------
-    data : ndarray
-        The measurement data, a channels x time slices array.
-        The data will be cast to np.float64 for compatibility.
-    """
-
-    total_slices = info['total_slices']
-    if start is None:
-        start = 0
-    if stop is None:
-        stop = total_slices
-
-    if any([start < 0, stop > total_slices, start >= stop]):
-        raise RuntimeError('Invalid data range supplied:'
-                           ' %d, %d' % (start, stop))
-    fname = info['pdf_fname']
-    with _bti_open(fname, 'rb') as fid:
-        fid.seek(info['bytes_per_slice'] * start, 0)
-        cnt = (stop - start) * info['total_chans']
-        shape = [stop - start, info['total_chans']]
-
-        if isinstance(fid, six.BytesIO):
-            data = np.fromstring(fid.getvalue(),
-                                 dtype=info['dtype'], count=cnt)
-        else:
-            data = np.fromfile(fid, dtype=info['dtype'], count=cnt)
-        data = data.astype('f4').reshape(shape)
-
-    for ch in info['chs']:
-        data[:, ch['index']] *= ch['cal']
-
-    return data[:, info['order']].T.astype(np.float64)
-
-
 def _correct_trans(t):
     """Helper to convert to a transformation matrix"""
     t = np.array(t, np.float64)
@@ -1051,6 +998,15 @@ class RawBTi(_BaseRaw):
     eog_ch : tuple of str | None
         The 4D names of the EOG channels. If None, the channels will be treated
         as regular EEG channels.
+    preload : bool or str (default True; will change to False in v0.12)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires a
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory).
+
+        .. versionadded:: 0.11
+
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
     """
@@ -1060,40 +1016,57 @@ class RawBTi(_BaseRaw):
                  translation=(0.0, 0.02, 0.11), convert=True,
                  rename_channels=True, sort_by_ch_name=True,
                  ecg_ch='E31', eog_ch=('E63', 'E64'),
-                 verbose=None):
-
+                 preload=None, verbose=None):
+        if preload is None:
+            warnings.warn('preload is True by default but will be changed to '
+                          'False in v0.12. Please explicitly set preload.',
+                          DeprecationWarning)
+            preload = True
         info, bti_info = _get_bti_info(
             pdf_fname=pdf_fname, config_fname=config_fname,
             head_shape_fname=head_shape_fname, rotation_x=rotation_x,
             translation=translation, convert=convert, ecg_ch=ecg_ch,
             rename_channels=rename_channels,
             sort_by_ch_name=sort_by_ch_name, eog_ch=eog_ch)
-        logger.info('Reading raw data from %s...' % pdf_fname)
-        data = _read_data(bti_info)
-        assert len(data) == len(info['ch_names'])
-        self._projector_hashes = [None]
         self.bti_ch_labels = [c['chan_label'] for c in bti_info['chs']]
-
         # make Raw repr work if we have a BytesIO as input
         if isinstance(pdf_fname, six.BytesIO):
             pdf_fname = repr(pdf_fname)
-
         super(RawBTi, self).__init__(
-            info, data, filenames=[pdf_fname], verbose=verbose)
-        logger.info('    Range : %d ... %d =  %9.3f ... %9.3f secs' % (
-                    self.first_samp, self.last_samp,
-                    float(self.first_samp) / info['sfreq'],
-                    float(self.last_samp) / info['sfreq']))
-        logger.info('Ready.')
+            info, preload, filenames=[pdf_fname], raw_extras=[bti_info],
+            last_samps=[bti_info['total_slices'] - 1], verbose=verbose)
+
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
+        """Read a segment of data from a file"""
+        bti_info = self._raw_extras[fi]
+        fname = bti_info['pdf_fname']
+        read_cals = np.empty((bti_info['total_chans'],))
+        for ch in bti_info['chs']:
+            read_cals[ch['index']] = ch['cal']
+        with _bti_open(fname, 'rb') as fid:
+            fid.seek(bti_info['bytes_per_slice'] * start, 0)
+            shape = (stop - start, bti_info['total_chans'])
+            count = np.prod(shape)
+            dtype = bti_info['dtype']
+            if isinstance(fid, six.BytesIO):
+                one_orig = np.fromstring(fid.getvalue(), dtype, count)
+            else:
+                one_orig = np.fromfile(fid, dtype, count)
+            one_orig.shape = shape
+            one = np.empty(shape[::-1])
+            for ii, b_i_o in enumerate(bti_info['order']):
+                one[ii] = one_orig[:, b_i_o] * read_cals[b_i_o]
+        _mult_cal_one(data, one, idx, cals, mult)
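
Conceptually, _mult_cal_one writes a calibrated (and, when a projector is
given, multiplied) copy of one read block into the output view; a minimal
sketch under that assumption::

    import numpy as np

    def mult_cal_one_sketch(data_view, one, idx, cals, mult):
        if mult is not None:  # projector with calibration folded in
            data_view[:] = np.dot(mult, one)
        else:                 # cals broadcast across the selected rows
            data_view[:] = one[idx] * cals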
 
 
 def _get_bti_info(pdf_fname, config_fname, head_shape_fname, rotation_x,
                   translation, convert, ecg_ch, eog_ch, rename_channels=True,
                   sort_by_ch_name=True):
-
-    if pdf_fname is not None and not isinstance(pdf_fname, six.BytesIO):
-        if not op.isabs(pdf_fname):
-            pdf_fname = op.abspath(pdf_fname)
+    """Helper to read BTI info"""
+    if pdf_fname is None:
+        raise ValueError('pdf_fname must be a path, not None')
+    if not isinstance(pdf_fname, six.BytesIO):
+        pdf_fname = op.abspath(pdf_fname)
 
     if not isinstance(config_fname, six.BytesIO):
         if not op.isabs(config_fname):
@@ -1134,20 +1107,18 @@ def _get_bti_info(pdf_fname, config_fname, head_shape_fname, rotation_x,
 
     use_hpi = False  # hard coded, but marked as later option.
     logger.info('Creating Neuromag info structure ...')
-    info = _empty_info()
-    if pdf_fname is not None:
-        date = bti_info['processes'][0]['timestamp']
-        info['meas_date'] = [date, 0]
-        info['sfreq'] = 1e3 / bti_info['sample_period'] * 1e-3
-    else:  # for some use case we just want a partial info with channel geom.
-        info['meas_date'] = None
-        info['sfreq'] = None
-        bti_info['processes'] = list()
+    if 'sample_period' in bti_info:
+        sfreq = 1. / bti_info['sample_period']
+    else:
+        sfreq = None
+    info = _empty_info(sfreq)
+    info['buffer_size_sec'] = 1.  # reasonable default for writing
+    date = bti_info['processes'][0]['timestamp']
+    info['meas_date'] = [date, 0]
     info['nchan'] = len(bti_info['chs'])
 
     # browse processing info for filter specs.
-    # find better default
-    hp, lp = (0.0, info['sfreq'] * 0.4) if pdf_fname else (None, None)
+    hp, lp = info['highpass'], info['lowpass']
     for proc in bti_info['processes']:
         if 'filt' in proc['process_type']:
             for step in proc['processing_steps']:
@@ -1160,8 +1131,6 @@ def _get_bti_info(pdf_fname, config_fname, head_shape_fname, rotation_x,
 
     info['highpass'] = hp
     info['lowpass'] = lp
-    info['acq_pars'] = info['acq_stim'] = info['hpi_subsystem'] = None
-    info['events'], info['hpi_results'], info['hpi_meas'] = [], [], []
     chs = []
 
     bti_ch_names = [ch['name'] for ch in bti_info['chs']]
@@ -1305,7 +1274,8 @@ def read_raw_bti(pdf_fname, config_fname='config',
                  head_shape_fname='hs_file', rotation_x=0.,
                  translation=(0.0, 0.02, 0.11), convert=True,
                  rename_channels=True, sort_by_ch_name=True,
-                 ecg_ch='E31', eog_ch=('E63', 'E64'), verbose=None):
+                 ecg_ch='E31', eog_ch=('E63', 'E64'), preload=None,
+                 verbose=None):
     """ Raw object from 4D Neuroimaging MagnesWH3600 data
 
     .. note::
@@ -1345,6 +1315,15 @@ def read_raw_bti(pdf_fname, config_fname='config',
     eog_ch : tuple of str | None
         The 4D names of the EOG channels. If None, the channels will be treated
         as regular EEG channels.
+    preload : bool or str (default True; will change to False in v0.12)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires a
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory).
+
+        .. versionadded:: 0.11
+
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -1362,4 +1341,4 @@ def read_raw_bti(pdf_fname, config_fname='config',
                   rotation_x=rotation_x, translation=translation,
                   convert=convert, rename_channels=rename_channels,
                   sort_by_ch_name=sort_by_ch_name, ecg_ch=ecg_ch,
-                  eog_ch=eog_ch, verbose=verbose)
+                  eog_ch=eog_ch, preload=preload, verbose=verbose)
diff --git a/mne/io/bti/tests/test_bti.py b/mne/io/bti/tests/test_bti.py
index 5419d6c..5c4f8f4 100644
--- a/mne/io/bti/tests/test_bti.py
+++ b/mne/io/bti/tests/test_bti.py
@@ -6,6 +6,7 @@ from __future__ import print_function
 import os
 import os.path as op
 from functools import reduce
+import warnings
 
 import numpy as np
 from numpy.testing import (assert_array_almost_equal, assert_array_equal,
@@ -14,16 +15,19 @@ from nose.tools import assert_true, assert_raises, assert_equal
 
 from mne.io import Raw, read_raw_bti
 from mne.io.bti.bti import (_read_config, _process_bti_headshape,
-                            _read_data, _read_bti_header, _get_bti_dev_t,
+                            _read_bti_header, _get_bti_dev_t,
                             _correct_trans, _get_bti_info)
+from mne.io.tests.test_raw import _test_raw_reader
+from mne.tests.common import assert_dig_allclose
 from mne.io.pick import pick_info
 from mne.io.constants import FIFF
-from mne import concatenate_raws, pick_types
+from mne import pick_types
 from mne.utils import run_tests_if_main
 from mne.transforms import Transform, combine_transforms, invert_transform
 from mne.externals import six
 from mne.fixes import partial
 
+warnings.simplefilter('always')
 
 base_dir = op.join(op.abspath(op.dirname(__file__)), 'data')
 
@@ -48,19 +52,13 @@ def test_read_config():
                         for block in cfg['user_blocks']))
 
 
-def test_read_pdf():
-    """ Test read bti PDF file """
-    for pdf, config in zip(pdf_fnames, config_fnames):
-        info = _read_bti_header(pdf, config)
-        data = _read_data(info)
-        shape = (info['total_chans'], info['total_slices'])
-        assert_true(data.shape == shape)
-
-
 def test_crop_append():
     """ Test crop and append raw """
-    raw = read_raw_bti(pdf_fnames[0], config_fnames[0], hs_fnames[0])
-    raw.load_data()  # currently does nothing
+    with warnings.catch_warnings(record=True):  # preload warning
+        warnings.simplefilter('always')
+        raw = _test_raw_reader(
+            read_raw_bti, pdf_fname=pdf_fnames[0],
+            config_fname=config_fnames[0], head_shape_fname=hs_fnames[0])
     y, t = raw[:]
     t0, t1 = 0.25 * t[-1], 0.75 * t[-1]
     mask = (t0 <= t) * (t <= t1)
@@ -69,18 +67,13 @@ def test_crop_append():
     assert_true(y_.shape[1] == mask.sum())
     assert_true(y_.shape[0] == y.shape[0])
 
-    raw2 = raw.copy()
-    assert_raises(RuntimeError, raw.append, raw2, preload=False)
-    raw.append(raw2)
-    assert_allclose(np.tile(raw2[:, :][0], (1, 2)), raw[:, :][0])
-
 
 def test_transforms():
     """ Test transformations """
     bti_trans = (0.0, 0.02, 0.11)
     bti_dev_t = Transform('ctf_meg', 'meg', _get_bti_dev_t(0.0, bti_trans))
     for pdf, config, hs, in zip(pdf_fnames, config_fnames, hs_fnames):
-        raw = read_raw_bti(pdf, config, hs)
+        raw = read_raw_bti(pdf, config, hs, preload=False)
         dev_ctf_t = raw.info['dev_ctf_t']
         dev_head_t_old = raw.info['dev_head_t']
         ctf_head_t = raw.info['ctf_head_t']
@@ -102,19 +95,18 @@ def test_raw():
     for pdf, config, hs, exported in zip(pdf_fnames, config_fnames, hs_fnames,
                                          exported_fnames):
         # rx = 2 if 'linux' in pdf else 0
-        assert_raises(ValueError, read_raw_bti, pdf, 'eggs')
-        assert_raises(ValueError, read_raw_bti, pdf, config, 'spam')
+        assert_raises(ValueError, read_raw_bti, pdf, 'eggs', preload=False)
+        assert_raises(ValueError, read_raw_bti, pdf, config, 'spam',
+                      preload=False)
         if op.exists(tmp_raw_fname):
             os.remove(tmp_raw_fname)
         ex = Raw(exported, preload=True)
-        ra = read_raw_bti(pdf, config, hs)
+        ra = read_raw_bti(pdf, config, hs, preload=False)
         assert_true('RawBTi' in repr(ra))
         assert_equal(ex.ch_names[:NCH], ra.ch_names[:NCH])
         assert_array_almost_equal(ex.info['dev_head_t']['trans'],
                                   ra.info['dev_head_t']['trans'], 7)
-        dig1, dig2 = [np.array([d['r'] for d in r_.info['dig']])
-                      for r_ in (ra, ex)]
-        assert_array_almost_equal(dig1, dig2, 18)
+        assert_dig_allclose(ex.info, ra.info)
         coil1, coil2 = [np.concatenate([d['loc'].flatten()
                         for d in r_.info['chs'][:NCH]])
                         for r_ in (ra, ex)]
@@ -125,7 +117,11 @@ def test_raw():
                       for r_ in (ra, ex)]
         assert_allclose(loc1, loc2)
 
-        assert_array_equal(ra._data[:NCH], ex._data[:NCH])
+        assert_allclose(ra[:NCH][0], ex[:NCH][0])
+        assert_array_equal([c['range'] for c in ra.info['chs'][:NCH]],
+                           [c['range'] for c in ex.info['chs'][:NCH]])
+        assert_array_equal([c['cal'] for c in ra.info['chs'][:NCH]],
+                           [c['cal'] for c in ex.info['chs'][:NCH]])
         assert_array_equal(ra._cals[:NCH], ex._cals[:NCH])
 
         # check our transforms
@@ -138,10 +134,6 @@ def test_raw():
                     assert_allclose(ex.info[key][ent],
                                     ra.info[key][ent])
 
-        # Make sure concatenation works
-        raw_concat = concatenate_raws([ra.copy(), ra])
-        assert_equal(raw_concat.n_times, 2 * ra.n_times)
-
         ra.save(tmp_raw_fname)
         re = Raw(tmp_raw_fname)
         print(re)
@@ -175,17 +167,15 @@ def test_no_conversion():
 
     get_info = partial(
         _get_bti_info,
-        pdf_fname=None,  # test skipping no pdf
         rotation_x=0.0, translation=(0.0, 0.02, 0.11), convert=False,
         ecg_ch='E31', eog_ch=('E63', 'E64'),
         rename_channels=False, sort_by_ch_name=False)
 
     for pdf, config, hs in zip(pdf_fnames, config_fnames, hs_fnames):
-        raw_info, _ = get_info(
-            config_fname=config, head_shape_fname=hs, convert=False)
+        raw_info, _ = get_info(pdf, config, hs, convert=False)
         raw_info_con = read_raw_bti(
-            pdf_fname=pdf,
-            config_fname=config, head_shape_fname=hs, convert=True).info
+            pdf_fname=pdf, config_fname=config, head_shape_fname=hs,
+            convert=True, preload=False).info
 
         pick_info(raw_info_con,
                   pick_types(raw_info_con, meg=True, ref_meg=True),
@@ -233,7 +223,7 @@ def test_no_conversion():
 def test_bytes_io():
     """ Test bti bytes-io API """
     for pdf, config, hs in zip(pdf_fnames, config_fnames, hs_fnames):
-        raw = read_raw_bti(pdf, config, hs, convert=True)
+        raw = read_raw_bti(pdf, config, hs, convert=True, preload=False)
 
         with open(pdf, 'rb') as fid:
             pdf = six.BytesIO(fid.read())
@@ -241,9 +231,9 @@ def test_bytes_io():
             config = six.BytesIO(fid.read())
         with open(hs, 'rb') as fid:
             hs = six.BytesIO(fid.read())
-        raw2 = read_raw_bti(pdf, config, hs, convert=True)
+        raw2 = read_raw_bti(pdf, config, hs, convert=True, preload=False)
         repr(raw2)
-        assert_array_equal(raw._data, raw2._data)
+        assert_array_equal(raw[:][0], raw2[:][0])
 
 
 def test_setup_headshape():
diff --git a/mne/io/constants.py b/mne/io/constants.py
index 9db2ae8..f31a5ab 100644
--- a/mne/io/constants.py
+++ b/mne/io/constants.py
@@ -22,6 +22,14 @@ class BunchConst(Bunch):
         super(BunchConst, self).__setattr__(attr, val)
 
 FIFF = BunchConst()
+
+#
+# FIFF version number in use
+#
+FIFF.FIFFC_MAJOR_VERSION = 1
+FIFF.FIFFC_MINOR_VERSION = 3
+FIFF.FIFFC_VERSION = FIFF.FIFFC_MAJOR_VERSION << 16 | FIFF.FIFFC_MINOR_VERSION
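
The packed value keeps the major version in the high 16 bits and the minor
version in the low 16; for version 1.3 this evaluates to::

    >>> (1 << 16) | 3
    65539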
+
 #
 # Blocks
 #
@@ -47,8 +55,6 @@ FIFF.FIFFB_REF                = 118
 FIFF.FIFFB_SMSH_RAW_DATA      = 119
 FIFF.FIFFB_SMSH_ASPECT        = 120
 FIFF.FIFFB_HPI_SUBSYSTEM      = 121
-FIFF.FIFFB_EPOCHS             = 122
-FIFF.FIFFB_ICA                = 123
 
 FIFF.FIFFB_SPHERE             = 300   # Concentric sphere model related
 FIFF.FIFFB_BEM                = 310   # Boundary-element method
@@ -66,6 +72,7 @@ FIFF.FIFFB_MRI_SEG_REGION     = 206     # One MRI segmentation region
 FIFF.FIFFB_PROCESSING_HISTORY = 900
 FIFF.FIFFB_PROCESSING_RECORD  = 901
 
+FIFF.FIFFB_DATA_CORRECTION    = 500
 FIFF.FIFFB_CHANNEL_DECOUPLER  = 501
 FIFF.FIFFB_SSS_INFO           = 502
 FIFF.FIFFB_SSS_CAL            = 503
@@ -136,7 +143,7 @@ FIFF.FIFF_NAME           = 233          # Intended to be a short name.
 FIFF.FIFF_DESCRIPTION    = FIFF.FIFF_COMMENT # (Textual) Description of an object
 FIFF.FIFF_DIG_STRING     = 234          # String of digitized points
 FIFF.FIFF_LINE_FREQ      = 235    # Line frequency
-FIFF.FIFF_CUSTOM_REF     = 236    # Whether a custom reference was applied to the data (NB: overlaps with HPI const #)
+
 #
 # HPI fitting program tags
 #
@@ -167,7 +174,7 @@ FIFF.FIFFV_EMG_CH     = 302
 FIFF.FIFFV_ECG_CH     = 402
 FIFF.FIFFV_MISC_CH    = 502
 FIFF.FIFFV_RESP_CH    = 602  # Respiration monitoring
-FIFF.FIFFV_SEEG_CH    = 702  # stereotactic EEG
+FIFF.FIFFV_SEEG_CH    = 802  # stereotactic EEG
 FIFF.FIFFV_SYST_CH    = 900  # some system status information (on Triux systems only)
 FIFF.FIFFV_IAS_CH     = 910  # Internal Active Shielding data (maybe on Triux only)
 FIFF.FIFFV_EXCI_CH    = 920  # flux excitation channel used to be a stimulus channel
@@ -205,8 +212,7 @@ FIFF.FIFF_DATA_BUFFER    = 300    # Buffer containing measurement data
 FIFF.FIFF_DATA_SKIP      = 301    # Data skip in buffers
 FIFF.FIFF_EPOCH          = 302    # Buffer containing one epoch and channel
 FIFF.FIFF_DATA_SKIP_SAMP = 303    # Data skip in samples
-FIFF.FIFF_MNE_BASELINE_MIN   = 304    # Time of baseline beginning
-FIFF.FIFF_MNE_BASELINE_MAX   = 305    # Time of baseline end
+
 #
 # Info on subject
 #
@@ -358,7 +364,7 @@ FIFF.FIFFV_MRI_PIXEL_BYTE_RGB_COLOR     = 6
 FIFF.FIFFV_MRI_PIXEL_BYTE_RLE_RGB_COLOR = 7
 FIFF.FIFFV_MRI_PIXEL_BIT_RLE            = 8
 #
-#   These are the MNE fiff definitions
+#   These are the MNE fiff definitions (range 350-390 reserved for MNE)
 #
 FIFF.FIFFB_MNE                    = 350
 FIFF.FIFFB_MNE_SOURCE_SPACE       = 351
@@ -382,6 +388,9 @@ FIFF.FIFFB_MNE_SURFACE_MAP_GROUP  = 364
 FIFF.FIFFB_MNE_CTF_COMP           = 370
 FIFF.FIFFB_MNE_CTF_COMP_DATA      = 371
 FIFF.FIFFB_MNE_DERIVATIONS        = 372
+
+FIFF.FIFFB_MNE_EPOCHS             = 373
+FIFF.FIFFB_MNE_ICA                = 374
 #
 # Fiff tags associated with MNE computations (3500...)
 #
@@ -484,6 +493,9 @@ FIFF.FIFF_MNE_DATA_SKIP_NOP          = 3563     # A data skip turned off in the
 FIFF.FIFF_MNE_ORIG_CH_INFO           = 3564     # Channel information before any changes
 FIFF.FIFF_MNE_EVENT_TRIGGER_MASK     = 3565     # Mask applied to the trigger channnel values
 FIFF.FIFF_MNE_EVENT_COMMENTS         = 3566     # Event comments merged into one long string
+FIFF.FIFF_MNE_CUSTOM_REF             = 3567     # Whether a custom reference was applied to the data
+FIFF.FIFF_MNE_BASELINE_MIN           = 3568     # Time of baseline beginning
+FIFF.FIFF_MNE_BASELINE_MAX           = 3569     # Time of baseline end
 #
 # 3570... Morphing maps
 #
@@ -770,8 +782,10 @@ FIFF.FIFFV_COIL_DIPOLE             = 200  # Time-varying dipole definition
 # direction (ex)
 FIFF.FIFFV_COIL_MCG_42             = 1000  # For testing the MCG software
 
-FIFF.FIFFV_COIL_POINT_MAGNETOMETER = 2000  # Simple point magnetometer
-FIFF.FIFFV_COIL_AXIAL_GRAD_5CM     = 2001  # Generic axial gradiometer
+FIFF.FIFFV_COIL_POINT_MAGNETOMETER   = 2000  # Simple point magnetometer
+FIFF.FIFFV_COIL_AXIAL_GRAD_5CM       = 2001  # Generic axial gradiometer
+FIFF.FIFFV_COIL_POINT_MAGNETOMETER_X = 2002  # Simple point magnetometer, x-direction
+FIFF.FIFFV_COIL_POINT_MAGNETOMETER_Y = 2003  # Simple point magnetometer, y-direction
 
 FIFF.FIFFV_COIL_VV_PLANAR_W        = 3011  # VV prototype wirewound planar sensor
 FIFF.FIFFV_COIL_VV_PLANAR_T1       = 3012  # Vectorview SQ20483N planar gradiometer
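
The new FIFFC_VERSION constant packs the major and minor version numbers into
a single integer; the encoding and its inverse work like this (illustrative):

    major, minor = 1, 3
    version = (major << 16) | minor  # == 65539 (0x00010003)
    assert (version >> 16, version & 0xFFFF) == (major, minor)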
diff --git a/mne/io/ctf.py b/mne/io/ctf.py
deleted file mode 100644
index 3bdb8e8..0000000
--- a/mne/io/ctf.py
+++ /dev/null
@@ -1,256 +0,0 @@
-# Authors: Alexandre Gramfort <alexandre.gramfort at telecom-paristech.fr>
-#          Matti Hamalainen <msh at nmr.mgh.harvard.edu>
-#          Denis Engemann <denis.engemann at gmail.com>
-#
-# License: BSD (3-clause)
-
-from copy import deepcopy
-
-import numpy as np
-
-from .constants import FIFF
-from .tag import find_tag, has_tag, read_tag
-from .tree import dir_tree_find
-from .write import start_block, end_block, write_int
-from .matrix import write_named_matrix
-
-from ..utils import logger, verbose
-
-
-def hex2dec(s):
-    return int(s, 16)
-
-
-def _read_named_matrix(fid, node, matkind):
-    """read_named_matrix(fid,node)
-
-    Read named matrix from the given node
-
-    Parameters
-    ----------
-    fid : file
-        The file descriptor
-    node : dict
-        Node
-    matkind : mat kind
-        XXX
-    Returns
-    -------
-    mat : dict
-        The matrix with row and col names.
-    """
-
-    #   Descend one level if necessary
-    if node['block'] != FIFF.FIFFB_MNE_NAMED_MATRIX:
-        for k in range(node['nchild']):
-            if node['children'][k]['block'] == FIFF.FIFFB_MNE_NAMED_MATRIX:
-                if has_tag(node['children'][k], matkind):
-                    node = node['children'][k]
-                    break
-        else:
-            raise ValueError('Desired named matrix (kind = %d) not'
-                             ' available' % matkind)
-
-    else:
-        if not has_tag(node, matkind):
-            raise ValueError('Desired named matrix (kind = %d) not available'
-                             % matkind)
-
-    #   Read everything we need
-    tag = find_tag(fid, node, matkind)
-    if tag is None:
-        raise ValueError('Matrix data missing')
-    else:
-        data = tag.data
-
-    nrow, ncol = data.shape
-    tag = find_tag(fid, node, FIFF.FIFF_MNE_NROW)
-    if tag is not None:
-        if tag.data != nrow:
-            raise ValueError('Number of rows in matrix data and '
-                             'FIFF_MNE_NROW tag do not match')
-
-    tag = find_tag(fid, node, FIFF.FIFF_MNE_NCOL)
-    if tag is not None:
-        if tag.data != ncol:
-            raise ValueError('Number of columns in matrix data and '
-                             'FIFF_MNE_NCOL tag do not match')
-
-    tag = find_tag(fid, node, FIFF.FIFF_MNE_ROW_NAMES)
-    if tag is not None:
-        row_names = tag.data
-    else:
-        row_names = None
-
-    tag = find_tag(fid, node, FIFF.FIFF_MNE_COL_NAMES)
-    if tag is not None:
-        col_names = tag.data
-    else:
-        col_names = None
-
-    #   Put it together
-    mat = dict(nrow=nrow, ncol=ncol)
-    if row_names is not None:
-        mat['row_names'] = row_names.split(':')
-    else:
-        mat['row_names'] = None
-
-    if col_names is not None:
-        mat['col_names'] = col_names.split(':')
-    else:
-        mat['col_names'] = None
-
-    mat['data'] = data.astype(np.float)
-    return mat
-
-
- at verbose
-def read_ctf_comp(fid, node, chs, verbose=None):
-    """Read the CTF software compensation data from the given node
-
-    Parameters
-    ----------
-    fid : file
-        The file descriptor.
-    node : dict
-        The node in the FIF tree.
-    chs : list
-        The list of channels # XXX unclear.
-    verbose : bool, str, int, or None
-        If not None, override default verbose level (see mne.verbose).
-
-    Returns
-    -------
-    compdata : list
-        The compensation data
-    """
-    compdata = []
-    comps = dir_tree_find(node, FIFF.FIFFB_MNE_CTF_COMP_DATA)
-
-    for node in comps:
-        #   Read the data we need
-        mat = _read_named_matrix(fid, node, FIFF.FIFF_MNE_CTF_COMP_DATA)
-        for p in range(node['nent']):
-            kind = node['directory'][p].kind
-            pos = node['directory'][p].pos
-            if kind == FIFF.FIFF_MNE_CTF_COMP_KIND:
-                tag = read_tag(fid, pos)
-                break
-        else:
-            raise Exception('Compensation type not found')
-
-        #   Get the compensation kind and map it to a simple number
-        one = dict(ctfkind=tag.data)
-        del tag
-
-        if one['ctfkind'] == int('47314252', 16):  # hex2dec('47314252'):
-            one['kind'] = 1
-        elif one['ctfkind'] == int('47324252', 16):  # hex2dec('47324252'):
-            one['kind'] = 2
-        elif one['ctfkind'] == int('47334252', 16):  # hex2dec('47334252'):
-            one['kind'] = 3
-        else:
-            one['kind'] = int(one['ctfkind'])
-
-        for p in range(node['nent']):
-            kind = node['directory'][p].kind
-            pos = node['directory'][p].pos
-            if kind == FIFF.FIFF_MNE_CTF_COMP_CALIBRATED:
-                tag = read_tag(fid, pos)
-                calibrated = tag.data
-                break
-        else:
-            calibrated = False
-
-        one['save_calibrated'] = calibrated
-        one['rowcals'] = np.ones(mat['data'].shape[0], dtype=np.float)
-        one['colcals'] = np.ones(mat['data'].shape[1], dtype=np.float)
-
-        row_cals, col_cals = None, None  # initialize cals
-
-        if not calibrated:
-            #
-            #   Calibrate...
-            #
-            #   Do the columns first
-            #
-            ch_names = [c['ch_name'] for c in chs]
-
-            col_cals = np.zeros(mat['data'].shape[1], dtype=np.float)
-            for col in range(mat['data'].shape[1]):
-                p = ch_names.count(mat['col_names'][col])
-                if p == 0:
-                    raise Exception('Channel %s is not available in data'
-                                    % mat['col_names'][col])
-                elif p > 1:
-                    raise Exception('Ambiguous channel %s' %
-                                    mat['col_names'][col])
-                idx = ch_names.index(mat['col_names'][col])
-                col_cals[col] = 1.0 / (chs[idx]['range'] * chs[idx]['cal'])
-
-            #    Then the rows
-            row_cals = np.zeros(mat['data'].shape[0])
-            for row in range(mat['data'].shape[0]):
-                p = ch_names.count(mat['row_names'][row])
-                if p == 0:
-                    raise Exception('Channel %s is not available in data'
-                                    % mat['row_names'][row])
-                elif p > 1:
-                    raise Exception('Ambiguous channel %s' %
-                                    mat['row_names'][row])
-                idx = ch_names.index(mat['row_names'][row])
-                row_cals[row] = chs[idx]['range'] * chs[idx]['cal']
-
-            mat['data'] = row_cals[:, None] * mat['data'] * col_cals[None, :]
-            one['rowcals'] = row_cals
-            one['colcals'] = col_cals
-
-        one['data'] = mat
-        compdata.append(one)
-        if row_cals is not None:
-            del row_cals
-        if col_cals is not None:
-            del col_cals
-
-    if len(compdata) > 0:
-        logger.info('    Read %d compensation matrices' % len(compdata))
-
-    return compdata
-
-
-###############################################################################
-# Writing
-
-def write_ctf_comp(fid, comps):
-    """Write the CTF compensation data into a fif file
-
-    Parameters
-    ----------
-    fid : file
-        The open FIF file descriptor
-
-    comps : list
-        The compensation data to write
-    """
-    if len(comps) <= 0:
-        return
-
-    #  This is very simple in fact
-    start_block(fid, FIFF.FIFFB_MNE_CTF_COMP)
-    for comp in comps:
-        start_block(fid, FIFF.FIFFB_MNE_CTF_COMP_DATA)
-        #    Write the compensation kind
-        write_int(fid, FIFF.FIFF_MNE_CTF_COMP_KIND, comp['ctfkind'])
-        write_int(fid, FIFF.FIFF_MNE_CTF_COMP_CALIBRATED,
-                  comp['save_calibrated'])
-
-        if not comp['save_calibrated']:
-            # Undo calibration
-            comp = deepcopy(comp)
-            data = ((1. / comp['rowcals'][:, None]) * comp['data']['data'] *
-                    (1. / comp['colcals'][None, :]))
-            comp['data']['data'] = data
-        write_named_matrix(fid, FIFF.FIFF_MNE_CTF_COMP_DATA, comp['data'])
-        end_block(fid, FIFF.FIFFB_MNE_CTF_COMP_DATA)
-
-    end_block(fid, FIFF.FIFFB_MNE_CTF_COMP)
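
The row/column calibration the removed reader applied (now handled by
`_calibrate_comp` in `mne/io/ctf_comp.py`, imported by the new info.py below)
is an outer-product scaling; a small numpy sketch of the invariant:

    import numpy as np
    rng = np.random.RandomState(0)
    data = rng.randn(3, 4)
    row_cals, col_cals = rng.rand(3), rng.rand(4)
    calibrated = row_cals[:, None] * data * col_cals[None, :]
    # undoing the calibration, as write_ctf_comp did, restores the data
    restored = (1. / row_cals[:, None]) * calibrated * \
        (1. / col_cals[None, :])
    assert np.allclose(restored, data)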
diff --git a/mne/io/ctf/__init__.py b/mne/io/ctf/__init__.py
new file mode 100644
index 0000000..8250246
--- /dev/null
+++ b/mne/io/ctf/__init__.py
@@ -0,0 +1,7 @@
+"""CTF module for conversion to FIF"""
+
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+from .ctf import read_raw_ctf, RawCTF
diff --git a/mne/io/ctf/constants.py b/mne/io/ctf/constants.py
new file mode 100644
index 0000000..9642d78
--- /dev/null
+++ b/mne/io/ctf/constants.py
@@ -0,0 +1,38 @@
+"""CTF constants"""
+
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+from ..constants import BunchConst
+
+
+CTF = BunchConst()
+
+# ctf_types.h
+CTF.CTFV_MAX_AVERAGE_BINS = 8
+CTF.CTFV_MAX_COILS = 8
+CTF.CTFV_MAX_BALANCING = 50
+CTF.CTFV_SENSOR_LABEL = 31
+
+CTF.CTFV_COIL_LPA = 1
+CTF.CTFV_COIL_RPA = 2
+CTF.CTFV_COIL_NAS = 3
+CTF.CTFV_COIL_SPARE = 4
+
+CTF.CTFV_REF_MAG_CH = 0
+CTF.CTFV_REF_GRAD_CH = 1
+CTF.CTFV_MEG_CH = 5
+CTF.CTFV_EEG_CH = 9
+CTF.CTFV_STIM_CH = 11
+
+CTF.CTFV_FILTER_LOWPASS = 1
+CTF.CTFV_FILTER_HIGHPASS = 2
+
+# read_res4.c
+CTF.FUNNY_POS = 1844
+
+# read_write_data.c
+CTF.HEADER_SIZE = 8
+CTF.BLOCK_SIZE = 2000
+CTF.SYSTEM_CLOCK_CH = 'SCLK01-177'
diff --git a/mne/io/ctf/ctf.py b/mne/io/ctf/ctf.py
new file mode 100644
index 0000000..cc32a2b
--- /dev/null
+++ b/mne/io/ctf/ctf.py
@@ -0,0 +1,218 @@
+"""Conversion tool from CTF to FIF
+"""
+
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+import os
+from os import path as op
+
+import numpy as np
+
+from ...utils import verbose, logger
+from ...externals.six import string_types
+
+from ..base import _BaseRaw
+from ..utils import _mult_cal_one, _blk_read_lims
+
+from .res4 import _read_res4, _make_ctf_name
+from .hc import _read_hc
+from .eeg import _read_eeg
+from .trans import _make_ctf_coord_trans_set
+from .info import _compose_meas_info
+from .constants import CTF
+
+
+def read_raw_ctf(directory, system_clock='truncate', preload=False,
+                 verbose=None):
+    """Raw object from CTF directory
+
+    Parameters
+    ----------
+    directory : str
+        Path to the CTF data (ending in ``'.ds'``).
+    system_clock : str
+        How to treat the system clock. Use "truncate" (default) to truncate
+        the data file when the system clock drops to zero, and use "ignore"
+        to ignore the system clock (e.g., if head positions are measured
+        multiple times during a recording).
+    preload : bool or str (default False)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory).
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    raw : instance of RawCTF
+        The raw data.
+
+    See Also
+    --------
+    mne.io.Raw : Documentation of attributes and methods.
+
+    Notes
+    -----
+    .. versionadded:: 0.11
+    """
+    return RawCTF(directory, system_clock, preload=preload, verbose=verbose)
+
+
+class RawCTF(_BaseRaw):
+    """Raw object from CTF directory
+
+    Parameters
+    ----------
+    directory : str
+        Path to the CTF data (ending in ``'.ds'``).
+    system_clock : str
+        How to treat the system clock. Use "truncate" (default) to truncate
+        the data file when the system clock drops to zero, and use "ignore"
+        to ignore the system clock (e.g., if head positions are measured
+        multiple times during a recording).
+    preload : bool or str (default False)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory).
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    See Also
+    --------
+    mne.io.Raw : Documentation of attributes and methods.
+    """
+    @verbose
+    def __init__(self, directory, system_clock='truncate', preload=False,
+                 verbose=None):
+        # adapted from mne_ctf2fiff.c
+        if not isinstance(directory, string_types) or \
+                not directory.endswith('.ds'):
+            raise TypeError('directory must be a directory ending with ".ds"')
+        if not op.isdir(directory):
+            raise ValueError('directory does not exist: "%s"' % directory)
+        known_types = ['ignore', 'truncate']
+        if not isinstance(system_clock, string_types) or \
+                system_clock not in known_types:
+            raise ValueError('system_clock must be one of %s, not %s'
+                             % (known_types, system_clock))
+        logger.info('ds directory : %s' % directory)
+        res4 = _read_res4(directory)  # Read the magical res4 file
+        coils = _read_hc(directory)  # Read the coil locations
+        eeg = _read_eeg(directory)  # Read the EEG electrode loc info
+        # Investigate the coil location data to get the coordinate trans
+        coord_trans = _make_ctf_coord_trans_set(res4, coils)
+        # Compose a structure which makes fiff writing a piece of cake
+        info = _compose_meas_info(res4, coils, coord_trans, eeg)
+        # Determine how our data is distributed across files
+        fnames = list()
+        last_samps = list()
+        raw_extras = list()
+        while True:
+            suffix = 'meg4' if len(fnames) == 0 else ('%d_meg4' % len(fnames))
+            meg4_name = _make_ctf_name(directory, suffix, raise_error=False)
+            if meg4_name is None:
+                break
+            # check how much data is in the file
+            sample_info = _get_sample_info(meg4_name, res4, system_clock)
+            if sample_info['n_samp'] == 0:
+                break
+            if len(fnames) == 0:
+                info['buffer_size_sec'] = \
+                    sample_info['block_size'] / info['sfreq']
+                info['filename'] = directory
+            fnames.append(meg4_name)
+            last_samps.append(sample_info['n_samp'] - 1)
+            raw_extras.append(sample_info)
+        super(RawCTF, self).__init__(
+            info, preload, last_samps=last_samps, filenames=fnames,
+            raw_extras=raw_extras, orig_format='int', verbose=verbose)
+
+    @verbose
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
+        """Read a chunk of raw data"""
+        si = self._raw_extras[fi]
+        offset = 0
+        trial_start_idx, r_lims, d_lims = _blk_read_lims(start, stop,
+                                                         int(si['block_size']))
+        with open(self._filenames[fi], 'rb') as fid:
+            for bi in range(len(r_lims)):
+                samp_offset = (bi + trial_start_idx) * si['res4_nsamp']
+                n_read = min(si['n_samp'] - samp_offset, si['block_size'])
+                # read the chunk of data
+                pos = CTF.HEADER_SIZE
+                pos += samp_offset * si['n_chan'] * 4
+                fid.seek(pos, 0)
+                this_data = np.fromstring(
+                    fid.read(si['n_chan'] * n_read * 4), '>i4')
+                this_data.shape = (si['n_chan'], n_read)
+                this_data = this_data[:, r_lims[bi, 0]:r_lims[bi, 1]]
+                data_view = data[:, d_lims[bi, 0]:d_lims[bi, 1]]
+                _mult_cal_one(data_view, this_data, idx, cals, mult)
+                offset += n_read
+
+
+def _get_sample_info(fname, res4, system_clock):
+    """Helper to determine the number of valid samples"""
+    logger.info('Finding samples for %s: ' % (fname,))
+    if CTF.SYSTEM_CLOCK_CH in res4['ch_names']:
+        clock_ch = res4['ch_names'].index(CTF.SYSTEM_CLOCK_CH)
+    else:
+        clock_ch = None
+    with open(fname, 'rb') as fid:
+        fid.seek(0, os.SEEK_END)
+        st_size = fid.tell()
+        fid.seek(0, 0)
+        if (st_size - CTF.HEADER_SIZE) % (4 * res4['nsamp'] *
+                                          res4['nchan']) != 0:
+            raise RuntimeError('The number of samples is not an even multiple '
+                               'of the trial size')
+        n_samp_tot = (st_size - CTF.HEADER_SIZE) // (4 * res4['nchan'])
+        n_trial = n_samp_tot // res4['nsamp']
+        n_samp = n_samp_tot
+        if clock_ch is None:
+            logger.info('    System clock channel is not available, assuming '
+                        'all samples to be valid.')
+        elif system_clock == 'ignore':
+            logger.info('    System clock channel is available, but ignored.')
+        else:  # use it
+            logger.info('    System clock channel is available, checking '
+                        'which samples are valid.')
+            for t in range(n_trial):
+                # Skip to the correct trial
+                samp_offset = t * res4['nsamp']
+                offset = CTF.HEADER_SIZE + (samp_offset * res4['nchan'] +
+                                            (clock_ch * res4['nsamp'])) * 4
+                fid.seek(offset, 0)
+                this_data = np.fromstring(fid.read(4 * res4['nsamp']), '>i4')
+                if len(this_data) != res4['nsamp']:
+                    raise RuntimeError('Cannot read data for trial %d'
+                                       % (t + 1))
+                end = np.where(this_data == 0)[0]
+                if len(end) > 0:
+                    n_samp = samp_offset + end[0]
+                    break
+    if n_samp < res4['nsamp']:
+        n_trial = 1
+        logger.info('    %d x %d = %d samples from %d chs'
+                    % (n_trial, n_samp, n_samp, res4['nchan']))
+    else:
+        n_trial = n_samp // res4['nsamp']
+        n_omit = n_samp_tot - n_samp
+        n_samp = n_trial * res4['nsamp']
+        logger.info('    %d x %d = %d samples from %d chs'
+                    % (n_trial, res4['nsamp'], n_samp, res4['nchan']))
+        if n_omit != 0:
+            logger.info('    %d samples omitted at the end' % n_omit)
+    return dict(n_samp=n_samp, n_samp_tot=n_samp_tot, block_size=res4['nsamp'],
+                n_trial=n_trial, res4_nsamp=res4['nsamp'],
+                n_chan=res4['nchan'])
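
Typical use of the new reader (a sketch; ``my_data.ds`` is a placeholder
path):

    import mne
    raw = mne.io.read_raw_ctf('my_data.ds', system_clock='truncate',
                              preload=True)
    print(raw.info['sfreq'], raw.n_times)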
diff --git a/mne/io/ctf/eeg.py b/mne/io/ctf/eeg.py
new file mode 100644
index 0000000..edfde44
--- /dev/null
+++ b/mne/io/ctf/eeg.py
@@ -0,0 +1,51 @@
+"""Read .eeg files
+"""
+
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+import numpy as np
+
+from ...utils import logger
+from ..constants import FIFF
+from .res4 import _make_ctf_name
+
+
+_cardinal_dict = dict(nasion=FIFF.FIFFV_POINT_NASION,
+                      lpa=FIFF.FIFFV_POINT_LPA, left=FIFF.FIFFV_POINT_LPA,
+                      rpa=FIFF.FIFFV_POINT_RPA, right=FIFF.FIFFV_POINT_RPA)
+
+
+def _read_eeg(directory):
+    """Read the .eeg file"""
+    # Missing file is ok
+    fname = _make_ctf_name(directory, 'eeg', raise_error=False)
+    if fname is None:
+        logger.info('    Separate EEG position data file not present.')
+        return
+    eeg = dict(labels=list(), kinds=list(), ids=list(), rr=list(), np=0,
+               assign_to_chs=True, coord_frame=FIFF.FIFFV_MNE_COORD_CTF_HEAD)
+    with open(fname, 'rb') as fid:
+        for line in fid:
+            line = line.strip()
+            if len(line) > 0:
+                parts = line.decode('utf-8').split()
+                if len(parts) != 5:
+                    raise RuntimeError('Illegal data in EEG position file: %s'
+                                       % line)
+                r = np.array([float(p) for p in parts[2:]]) / 100.
+                if (r * r).sum() > 1e-4:
+                    label = parts[1]
+                    eeg['labels'].append(label)
+                    eeg['rr'].append(r)
+                    id_ = _cardinal_dict.get(label.lower(), int(parts[0]))
+                    if label.lower() in _cardinal_dict:
+                        kind = FIFF.FIFFV_POINT_CARDINAL
+                    else:
+                        kind = FIFF.FIFFV_POINT_EXTRA
+                    eeg['ids'].append(id_)
+                    eeg['kinds'].append(kind)
+                    eeg['np'] += 1
+    logger.info('    Separate EEG position data file read.')
+    return eeg
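
`_read_eeg` expects each non-empty line of the ``.eeg`` file to carry five
whitespace-separated fields: an identifier, a label, and x/y/z in cm; an
illustrative (made-up) line and the parsing it receives:

    line = b'1\tFp1\t9.5\t2.1\t10.4\n'  # hypothetical electrode entry
    parts = line.strip().decode('utf-8').split()
    assert len(parts) == 5
    r = [float(p) / 100. for p in parts[2:]]  # cm -> m, as in _read_eeg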
diff --git a/mne/io/ctf/hc.py b/mne/io/ctf/hc.py
new file mode 100644
index 0000000..ddb4b19
--- /dev/null
+++ b/mne/io/ctf/hc.py
@@ -0,0 +1,85 @@
+"""Read .hc files
+"""
+
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+import numpy as np
+
+from ...utils import logger
+from .res4 import _make_ctf_name
+from .constants import CTF
+from ..constants import FIFF
+
+
+_kind_dict = {'nasion': CTF.CTFV_COIL_NAS, 'left ear': CTF.CTFV_COIL_LPA,
+              'right ear': CTF.CTFV_COIL_RPA, 'spare': CTF.CTFV_COIL_SPARE}
+
+_coord_dict = {'relative to dewar': FIFF.FIFFV_MNE_COORD_CTF_DEVICE,
+               'relative to head': FIFF.FIFFV_MNE_COORD_CTF_HEAD}
+
+
+def _read_one_coil_point(fid):
+    """Read coil coordinate information from the hc file"""
+    # Descriptor
+    one = '#'
+    while len(one) > 0 and one[0] == '#':
+        one = fid.readline()
+    if len(one) == 0:
+        return None
+    one = one.strip().decode('utf-8')
+    if 'Unable' in one:
+        raise RuntimeError("HPI information not available")
+
+    # Hopefully this is an unambiguous interpretation
+    p = dict()
+    p['valid'] = ('measured' in one)
+    for key, val in _coord_dict.items():
+        if key in one:
+            p['coord_frame'] = val
+            break
+    else:
+        p['coord_frame'] = -1
+
+    for key, val in _kind_dict.items():
+        if key in one:
+            p['kind'] = val
+            break
+    else:
+        p['kind'] = -1
+
+    # Three coordinates
+    p['r'] = np.empty(3)
+    for ii, coord in enumerate('xyz'):
+        sp = fid.readline().decode('utf-8').strip()
+        if len(sp) == 0:  # blank line
+            continue
+        sp = sp.split(' ')
+        if len(sp) != 3 or sp[0] != coord or sp[1] != '=':
+            raise RuntimeError('Bad line: %s' % one)
+        # We do not deal with centimeters
+        p['r'][ii] = float(sp[2]) / 100.0
+    return p
+
+
+def _read_hc(directory):
+    """Read the hc file to get the HPI info and to prepare for coord transs"""
+    fname = _make_ctf_name(directory, 'hc', raise_error=False)
+    if fname is None:
+        logger.info('    hc data not present')
+        return None
+    s = list()
+    with open(fname, 'rb') as fid:
+        while True:
+            p = _read_one_coil_point(fid)
+            if p is None:
+                # A bad first point indicates that the file is empty
+                if len(s) == 0:
+                    logger.info('hc file empty, no data present')
+                    return None
+                # Returns None if at EOF
+                logger.info('    hc data read.')
+                return s
+            if p['valid']:
+                s.append(p)
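
`_read_one_coil_point` consumes one descriptor line plus three ``x = ...``
coordinate lines (values in cm); a made-up entry of the kind it accepts:

    from io import BytesIO
    entry = BytesIO(b'measured nasion coil position relative to head (cm):\n'
                    b'x = 1.00000\n'
                    b'y = 10.00000\n'
                    b'z = 0.00000\n')
    # _read_one_coil_point(entry) would report valid=True,
    # kind=CTFV_COIL_NAS, coord_frame=FIFFV_MNE_COORD_CTF_HEAD,
    # and r converted to meters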
diff --git a/mne/io/ctf/info.py b/mne/io/ctf/info.py
new file mode 100644
index 0000000..2a58d9c
--- /dev/null
+++ b/mne/io/ctf/info.py
@@ -0,0 +1,401 @@
+"""Populate measurement info
+"""
+
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+from time import strptime
+from calendar import timegm
+
+import numpy as np
+
+from ...utils import logger
+from ...transforms import (apply_trans, _coord_frame_name, invert_transform,
+                           combine_transforms)
+
+from ..meas_info import _empty_info
+from ..write import get_new_file_id
+from ..ctf_comp import _add_kind, _calibrate_comp
+from ..constants import FIFF
+
+from .constants import CTF
+
+
+def _pick_isotrak_and_hpi_coils(res4, coils, t):
+    """Pick the HPI coil locations given in device coordinates"""
+    if coils is None:
+        return list(), list()
+    dig = list()
+    hpi_result = dict(dig_points=list())
+    n_coil_dev = 0
+    n_coil_head = 0
+    for p in coils:
+        if p['valid']:
+            if p['coord_frame'] == FIFF.FIFFV_MNE_COORD_CTF_DEVICE:
+                if t is None or t['t_ctf_dev_dev'] is None:
+                    raise RuntimeError('No coordinate transformation '
+                                       'available for HPI coil locations')
+                d = dict(kind=FIFF.FIFFV_POINT_HPI, ident=p['kind'],
+                         r=apply_trans(t['t_ctf_dev_dev'], p['r']),
+                         coord_frame=FIFF.FIFFV_COORD_UNKNOWN)
+                hpi_result['dig_points'].append(d)
+                n_coil_dev += 1
+            elif p['coord_frame'] == FIFF.FIFFV_MNE_COORD_CTF_HEAD:
+                if t is None or t['t_ctf_head_head'] is None:
+                    raise RuntimeError('No coordinate transformation '
+                                       'available for (virtual) Polhemus data')
+                d = dict(kind=FIFF.FIFFV_POINT_HPI, ident=p['kind'],
+                         r=apply_trans(t['t_ctf_head_head'], p['r']),
+                         coord_frame=FIFF.FIFFV_COORD_HEAD)
+                dig.append(d)
+                n_coil_head += 1
+    if n_coil_head > 0:
+        logger.info('    Polhemus data for %d HPI coils added' % n_coil_head)
+    if n_coil_dev > 0:
+        logger.info('    Device coordinate locations for %d HPI coils added'
+                    % n_coil_dev)
+    return dig, [hpi_result]
+
+
+def _convert_time(date_str, time_str):
+    """Convert date and time strings to float time"""
+    for fmt in ("%d/%m/%Y", "%d-%b-%Y", "%a, %b %d, %Y"):
+        try:
+            date = strptime(date_str, fmt)
+        except ValueError:
+            pass
+        else:
+            break
+    else:
+        raise RuntimeError("Illegal date: %s" % date)
+    for fmt in ('%H:%M:%S', '%H:%M'):
+        try:
+            time = strptime(time_str, fmt)
+        except ValueError:
+            pass
+        else:
+            break
+    else:
+        raise RuntimeError('Illegal time: %s' % time_str)
+    # MNE-C uses mktime, which works in local time, but here we instead
+    # assume that the acquisition was in GMT. This will be wrong for most
+    # sites, but at least the value we obtain here won't depend on the
+    # geographical location where the file was converted.
+    res = timegm((date.tm_year, date.tm_mon, date.tm_mday,
+                  time.tm_hour, time.tm_min, time.tm_sec,
+                  date.tm_wday, date.tm_yday, date.tm_isdst))
+    return res
+
+
+def _get_plane_vectors(ez):
+    """Get two orthogonal vectors orthogonal to ez (ez will be modified)"""
+    assert ez.shape == (3,)
+    ez_len = np.sqrt(np.sum(ez * ez))
+    if ez_len == 0:
+        raise RuntimeError('Zero length normal. Cannot proceed.')
+    if np.abs(ez_len - np.abs(ez[2])) < 1e-5:  # ez already in z-direction
+        ex = np.array([1., 0., 0.])
+    else:
+        ex = np.zeros(3)
+        if ez[1] < ez[2]:
+            ex[0 if ez[0] < ez[1] else 1] = 1.
+        else:
+            ex[0 if ez[0] < ez[2] else 2] = 1.
+    ez /= ez_len
+    ex -= np.dot(ez, ex) * ez
+    ex /= np.sqrt(np.sum(ex * ex))
+    ey = np.cross(ez, ex)
+    return ex, ey
+
+
+def _at_origin(x):
+    """Determine if a vector is at the origin"""
+    return (np.sum(x * x) < 1e-8)
+
+
+def _convert_channel_info(res4, t, use_eeg_pos):
+    """Convert CTF channel information to fif format"""
+    nmeg = neeg = nstim = nmisc = nref = 0
+    chs = list()
+    for k, cch in enumerate(res4['chs']):
+        cal = float(1. / (cch['proper_gain'] * cch['qgain']))
+        ch = dict(scanno=k + 1, range=1., cal=cal, loc=np.zeros(12),
+                  unit_mul=FIFF.FIFF_UNITM_NONE, ch_name=cch['ch_name'][:15],
+                  coil_type=FIFF.FIFFV_COIL_NONE)
+        del k
+        chs.append(ch)
+        # Create the channel position information
+        pos = dict(r0=ch['loc'][:3], ex=ch['loc'][3:6], ey=ch['loc'][6:9],
+                   ez=ch['loc'][9:12])
+        if cch['sensor_type_index'] in (CTF.CTFV_REF_MAG_CH,
+                                        CTF.CTFV_REF_GRAD_CH,
+                                        CTF.CTFV_MEG_CH):
+            ch['unit'] = FIFF.FIFF_UNIT_T
+            # Set up the local coordinate frame
+            pos['r0'][:] = cch['coil']['pos'][0]
+            pos['ez'][:] = cch['coil']['norm'][0]
+            # It turns out that positive proper_gain requires swapping
+            # of the normal direction
+            if cch['proper_gain'] > 0.0:
+                pos['ez'] *= -1
+            # Check how the other vectors should be defined
+            off_diag = False
+            if cch['sensor_type_index'] == CTF.CTFV_REF_GRAD_CH:
+                # We use the same convention for ex as for Neuromag planar
+                # gradiometers: pointing in the positive gradient direction
+                diff = cch['coil']['pos'][0] - cch['coil']['pos'][1]
+                size = np.sqrt(np.sum(diff * diff))
+                if size > 0.:
+                    diff /= size
+                if np.abs(np.dot(diff, pos['ez'])) < 1e-3:
+                    off_diag = True
+                if off_diag:
+                    # The off-diagonal gradiometers are an exception
+                    pos['r0'] -= size * diff / 2.0
+                    pos['ex'][:] = diff
+                    pos['ey'][:] = np.cross(pos['ez'], pos['ex'])
+            else:
+                # ex and ey are arbitrary in the plane normal to ex
+                pos['ex'][:], pos['ey'][:] = _get_plane_vectors(pos['ez'])
+            # Transform into a Neuromag-like coordinate system
+            pos['r0'][:] = apply_trans(t['t_ctf_dev_dev'], pos['r0'])
+            for key in ('ex', 'ey', 'ez'):
+                pos[key][:] = apply_trans(t['t_ctf_dev_dev'], pos[key],
+                                          move=False)
+            # Set the coil type
+            if cch['sensor_type_index'] == CTF.CTFV_REF_MAG_CH:
+                ch['kind'] = FIFF.FIFFV_REF_MEG_CH
+                ch['coil_type'] = FIFF.FIFFV_COIL_CTF_REF_MAG
+                nref += 1
+                ch['logno'] = nref
+            elif cch['sensor_type_index'] == CTF.CTFV_REF_GRAD_CH:
+                ch['kind'] = FIFF.FIFFV_REF_MEG_CH
+                if off_diag:
+                    ch['coil_type'] = FIFF.FIFFV_COIL_CTF_OFFDIAG_REF_GRAD
+                else:
+                    ch['coil_type'] = FIFF.FIFFV_COIL_CTF_REF_GRAD
+                nref += 1
+                ch['logno'] = nref
+            else:
+                ch['kind'] = FIFF.FIFFV_MEG_CH
+                ch['coil_type'] = FIFF.FIFFV_COIL_CTF_GRAD
+                nmeg += 1
+                ch['logno'] = nmeg
+            # Encode the software gradiometer order
+            ch['coil_type'] = ch['coil_type'] | (cch['grad_order_no'] << 16)
+            ch['coord_frame'] = FIFF.FIFFV_COORD_DEVICE
+        elif cch['sensor_type_index'] == CTF.CTFV_EEG_CH:
+            coord_frame = FIFF.FIFFV_COORD_HEAD
+            if use_eeg_pos:
+                # EEG electrode coordinates may be present but in the
+                # CTF head frame
+                pos['r0'][:] = cch['coil']['pos'][0]
+                if not _at_origin(pos['r0']):
+                    if t['t_ctf_head_head'] is None:
+                        logger.warning('EEG electrode (%s) location omitted '
+                                       'because of missing HPI information'
+                                       % (ch['ch_name']))
+                        pos['r0'][:] = np.zeros(3)
+                        coord_frame = FIFF.FIFFV_COORD_CTF_HEAD
+                    else:
+                        pos['r0'][:] = apply_trans(t['t_ctf_head_head'],
+                                                   pos['r0'])
+            neeg += 1
+            ch['logno'] = neeg
+            ch['kind'] = FIFF.FIFFV_EEG_CH
+            ch['unit'] = FIFF.FIFF_UNIT_V
+            ch['coord_frame'] = coord_frame
+        elif cch['sensor_type_index'] == CTF.CTFV_STIM_CH:
+            nstim += 1
+            ch['logno'] = nstim
+            ch['kind'] = FIFF.FIFFV_STIM_CH
+            ch['unit'] = FIFF.FIFF_UNIT_V
+            ch['coord_frame'] = FIFF.FIFFV_COORD_UNKNOWN
+        else:
+            nmisc += 1
+            ch['logno'] = nmisc
+            ch['kind'] = FIFF.FIFFV_MISC_CH
+            ch['unit'] = FIFF.FIFF_UNIT_V
+            ch['coord_frame'] = FIFF.FIFFV_COORD_UNKNOWN
+    return chs
+
+
+def _comp_sort_keys(c):
+    """This is for sorting the compensation data"""
+    return (int(c['coeff_type']), int(c['scanno']))
+
+
+def _check_comp(comp):
+    """Check that conversion to named matrices is, indeed possible"""
+    ref_sens = None
+    kind = -1
+    for k, c_k in enumerate(comp):
+        if c_k['coeff_type'] != kind:
+            c_ref = c_k
+            ref_sens = c_ref['sensors']
+            kind = c_k['coeff_type']
+        elif not c_k['sensors'] == ref_sens:
+            raise RuntimeError('Cannot use an uneven compensation matrix')
+
+
+def _conv_comp(comp, first, last, chs):
+    """Add a new converted compensation data item"""
+    ccomp = dict(ctfkind=np.array([comp[first]['coeff_type']]),
+                 save_calibrated=False)
+    _add_kind(ccomp)
+    n_col = comp[first]['ncoeff']
+    n_row = last - first + 1
+    col_names = comp[first]['sensors'][:n_col]
+    row_names = [comp[p]['sensor_name'] for p in range(first, last + 1)]
+    data = np.empty((n_row, n_col))
+    for ii, coeffs in enumerate(comp[first:last + 1]):
+        # Pick the elements to the matrix
+        data[ii, :] = coeffs['coeffs'][:]
+    ccomp['data'] = dict(row_names=row_names, col_names=col_names,
+                         data=data, nrow=len(row_names), ncol=len(col_names))
+    mk = ('proper_gain', 'qgain')
+    _calibrate_comp(ccomp, chs, row_names, col_names, mult_keys=mk, flip=True)
+    return ccomp
+
+
+def _convert_comp_data(res4):
+    """Convert the compensation data into named matrices"""
+    if res4['ncomp'] == 0:
+        return
+    # Sort the coefficients in our favorite order
+    res4['comp'] = sorted(res4['comp'], key=_comp_sort_keys)
+    # Check that all items for a given compensation type have the correct
+    # number of channels
+    _check_comp(res4['comp'])
+    # Create named matrices
+    first = 0
+    kind = -1
+    comps = list()
+    for k in range(len(res4['comp'])):
+        if res4['comp'][k]['coeff_type'] != kind:
+            if k > 0:
+                comps.append(_conv_comp(res4['comp'], first, k - 1,
+                                        res4['chs']))
+            kind = res4['comp'][k]['coeff_type']
+            first = k
+    comps.append(_conv_comp(res4['comp'], first, k, res4['chs']))
+    return comps
+
+
+def _pick_eeg_pos(c):
+    """Pick EEG positions"""
+    eeg = dict(coord_frame=FIFF.FIFFV_COORD_HEAD, assign_to_chs=False,
+               labels=list(), ids=list(), rr=list(), kinds=list(), np=0)
+    for ch in c['chs']:
+        if ch['kind'] == FIFF.FIFFV_EEG_CH and not _at_origin(ch['loc'][:3]):
+            eeg['labels'].append(ch['ch_name'])
+            eeg['ids'].append(ch['logno'])
+            eeg['rr'].append(ch['loc'][:3])
+            eeg['kinds'].append(FIFF.FIFFV_POINT_EEG)
+            eeg['np'] += 1
+    if eeg['np'] == 0:
+        return None
+    logger.info('Picked positions of %d EEG channels from channel info'
+                % eeg['np'])
+    return eeg
+
+
+def _add_eeg_pos(eeg, t, c):
+    """Pick the (virtual) EEG position data"""
+    if eeg is None:
+        return
+    if t is None or t['t_ctf_head_head'] is None:
+        raise RuntimeError('No coordinate transformation available for EEG '
+                           'position data')
+    eeg_assigned = 0
+    if eeg['assign_to_chs']:
+        for k in range(eeg['np']):
+            # Look for a channel name match
+            for ch in c['chs']:
+                if ch['ch_name'].lower() == eeg['labels'][k].lower():
+                    r0 = ch['loc'][:3]
+                    r0[:] = eeg['rr'][k]
+                    if eeg['coord_frame'] == FIFF.FIFFV_MNE_COORD_CTF_HEAD:
+                        r0[:] = apply_trans(t['t_ctf_head_head'], r0)
+                    elif eeg['coord_frame'] != FIFF.FIFFV_COORD_HEAD:
+                        raise RuntimeError(
+                            'Illegal coordinate frame for EEG electrode '
+                            'positions : %s'
+                            % _coord_frame_name(eeg['coord_frame']))
+                    # Use the logical channel number as an identifier
+                    eeg['ids'][k] = ch['logno']
+                    eeg['kinds'][k] = FIFF.FIFFV_POINT_EEG
+                    eeg_assigned += 1
+                    break
+
+    # Add these to the Polhemus data
+    fid_count = eeg_count = extra_count = 0
+    for k in range(eeg['np']):
+        d = dict(r=eeg['rr'][k].copy(), kind=eeg['kinds'][k],
+                 ident=eeg['ids'][k], coord_frame=FIFF.FIFFV_COORD_HEAD)
+        c['dig'].append(d)
+        if eeg['coord_frame'] == FIFF.FIFFV_MNE_COORD_CTF_HEAD:
+            d['r'] = apply_trans(t['t_ctf_head_head'], d['r'])
+        elif eeg['coord_frame'] != FIFF.FIFFV_COORD_HEAD:
+            raise RuntimeError('Illegal coordinate frame for EEG electrode '
+                               'positions: %s'
+                               % _coord_frame_name(eeg['coord_frame']))
+        if eeg['kinds'][k] == FIFF.FIFFV_POINT_CARDINAL:
+            fid_count += 1
+        elif eeg['kinds'][k] == FIFF.FIFFV_POINT_EEG:
+            eeg_count += 1
+        else:
+            extra_count += 1
+    if eeg_assigned > 0:
+        logger.info('    %d EEG electrode locations assigned to channel info.'
+                    % eeg_assigned)
+    for count, kind in zip((fid_count, eeg_count, extra_count),
+                           ('fiducials', 'EEG locations', 'extra points')):
+        if count > 0:
+            logger.info('    %d %s added to Polhemus data.' % (count, kind))
+
+
+_filt_map = {CTF.CTFV_FILTER_LOWPASS: 'lowpass',
+             CTF.CTFV_FILTER_HIGHPASS: 'highpass'}
+
+
+def _compose_meas_info(res4, coils, trans, eeg):
+    """Create meas info from CTF data"""
+    info = _empty_info(res4['sfreq'])
+
+    # Collect all the necessary data from the structures read
+    info['meas_id'] = get_new_file_id()
+    info['meas_id']['usecs'] = 0
+    info['meas_id']['secs'] = _convert_time(res4['data_date'],
+                                            res4['data_time'])
+    info['experimenter'] = res4['nf_operator']
+    info['subject_info'] = dict(his_id=res4['nf_subject_id'])
+    for filt in res4['filters']:
+        if filt['type'] in _filt_map:
+            info[_filt_map[filt['type']]] = filt['freq']
+    info['dig'], info['hpi_results'] = _pick_isotrak_and_hpi_coils(
+        res4, coils, trans)
+    if trans is not None:
+        if len(info['hpi_results']) > 0:
+            info['hpi_results'][0]['coord_trans'] = trans['t_ctf_head_head']
+        if trans['t_dev_head'] is not None:
+            info['dev_head_t'] = trans['t_dev_head']
+            info['dev_ctf_t'] = combine_transforms(
+                trans['t_dev_head'],
+                invert_transform(trans['t_ctf_head_head']),
+                FIFF.FIFFV_COORD_DEVICE, FIFF.FIFFV_MNE_COORD_CTF_HEAD)
+        if trans['t_ctf_head_head'] is not None:
+            info['ctf_head_t'] = trans['t_ctf_head_head']
+    info['chs'] = _convert_channel_info(res4, trans, eeg is None)
+    info['nchan'] = len(info['chs'])
+    info['comps'] = _convert_comp_data(res4)
+    if eeg is None:
+        # Pick EEG locations from chan info if not read from a separate file
+        eeg = _pick_eeg_pos(info)
+    _add_eeg_pos(eeg, trans, info)
+    info['ch_names'] = [ch['ch_name'] for ch in info['chs']]
+    logger.info('    Measurement info composed.')
+    info._check_consistency()
+    return info
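
`_get_plane_vectors` returns two unit vectors that, together with the
normalized `ez`, form a right-handed orthonormal basis; a quick property
check (sketch, importing the private helper added above):

    import numpy as np
    from mne.io.ctf.info import _get_plane_vectors  # private helper
    ez = np.array([1., 2., 2.])
    ex, ey = _get_plane_vectors(ez)  # note: normalizes ez in place
    for v in (ex, ey, ez):
        assert np.allclose(np.linalg.norm(v), 1.)
    assert np.allclose([np.dot(ex, ey), np.dot(ex, ez), np.dot(ey, ez)], 0.)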
diff --git a/mne/io/ctf/res4.py b/mne/io/ctf/res4.py
new file mode 100644
index 0000000..2c675c6
--- /dev/null
+++ b/mne/io/ctf/res4.py
@@ -0,0 +1,212 @@
+"""Read .res4 files
+"""
+
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+import os.path as op
+
+import numpy as np
+
+from ...utils import logger
+from .constants import CTF
+
+
+def _make_ctf_name(directory, extra, raise_error=True):
+    """Helper to make a CTF name"""
+    fname = op.join(directory, op.basename(directory)[:-3] + '.' + extra)
+    if not op.isfile(fname):
+        if raise_error:
+            raise IOError('Standard file %s not found' % fname)
+        else:
+            return None
+    return fname
+
+
+def _read_double(fid, n=1):
+    """Read a double"""
+    return np.fromfile(fid, '>f8', n)
+
+
+def _read_string(fid, n_bytes, decode=True):
+    """Read string"""
+    s0 = fid.read(n_bytes)
+    s = s0.split(b'\x00')[0]
+    return s.decode('utf-8') if decode else s
+
+
+def _read_ustring(fid, n_bytes):
+    """Read unsigned character string"""
+    return np.fromfile(fid, '>B', n_bytes)
+
+
+def _read_int2(fid):
+    """Read int from short"""
+    return np.fromfile(fid, '>i2', 1)[0]
+
+
+def _read_int(fid):
+    """Read a 32-bit integer"""
+    return np.fromfile(fid, '>i4', 1)[0]
+
+
+def _move_to_next(fid, byte=8):
+    """Move to next byte boundary"""
+    now = fid.tell()
+    if now % byte != 0:
+        now = now - (now % byte) + byte
+        fid.seek(now, 0)
+
+
+def _read_filter(fid):
+    """Read filter information"""
+    f = dict()
+    f['freq'] = _read_double(fid)[0]
+    f['class'] = _read_int(fid)
+    f['type'] = _read_int(fid)
+    f['npar'] = _read_int2(fid)
+    f['pars'] = _read_double(fid, f['npar'])
+    return f
+
+
+def _read_channel(fid):
+    """Read channel information"""
+    ch = dict()
+    ch['sensor_type_index'] = _read_int2(fid)
+    ch['original_run_no'] = _read_int2(fid)
+    ch['coil_type'] = _read_int(fid)
+    ch['proper_gain'] = _read_double(fid)[0]
+    ch['qgain'] = _read_double(fid)[0]
+    ch['io_gain'] = _read_double(fid)[0]
+    ch['io_offset'] = _read_double(fid)[0]
+    ch['num_coils'] = _read_int2(fid)
+    ch['grad_order_no'] = int(_read_int2(fid))
+    _read_int(fid)  # pad
+    ch['coil'] = dict()
+    ch['head_coil'] = dict()
+    for coil in (ch['coil'], ch['head_coil']):
+        coil['pos'] = list()
+        coil['norm'] = list()
+        coil['turns'] = np.empty(CTF.CTFV_MAX_COILS)
+        coil['area'] = np.empty(CTF.CTFV_MAX_COILS)
+        for k in range(CTF.CTFV_MAX_COILS):
+            # It would have been wonderful to use meters in the first place
+            coil['pos'].append(_read_double(fid, 3) / 100.)
+            fid.seek(8, 1)  # dummy double
+            coil['norm'].append(_read_double(fid, 3))
+            fid.seek(8, 1)  # dummy double
+            coil['turns'][k] = _read_int2(fid)
+            _read_int(fid)  # pad
+            _read_int2(fid)  # pad
+            # Looks like this is given in cm^2
+            coil['area'][k] = _read_double(fid)[0] * 1e-4
+    return ch
+
+
+def _read_comp_coeff(fid, d):
+    """Read compensation coefficients"""
+    # Read the coefficients and initialize
+    d['ncomp'] = _read_int2(fid)
+    d['comp'] = list()
+    # Read each record
+    for k in range(d['ncomp']):
+        comp = dict()
+        d['comp'].append(comp)
+        comp['sensor_name'] = _read_string(fid, 32)
+        comp['coeff_type'] = _read_int(fid)
+        _read_int(fid)  # pad
+        comp['ncoeff'] = _read_int2(fid)
+        comp['coeffs'] = np.zeros(comp['ncoeff'])
+        comp['sensors'] = [_read_string(fid, CTF.CTFV_SENSOR_LABEL)
+                           for p in range(comp['ncoeff'])]
+        unused = CTF.CTFV_MAX_BALANCING - comp['ncoeff']
+        comp['sensors'] += [''] * unused
+        fid.seek(unused * CTF.CTFV_SENSOR_LABEL, 1)
+        comp['coeffs'][:comp['ncoeff']] = _read_double(fid, comp['ncoeff'])
+        fid.seek(unused * 8, 1)
+        comp['scanno'] = d['ch_names'].index(comp['sensor_name'])
+
+
+def _read_res4(dsdir):
+    """Read the magical res4 file"""
+    # adapted from read_res4.c
+    name = _make_ctf_name(dsdir, 'res4')
+    res = dict()
+    with open(name, 'rb') as fid:
+        # Read the fields
+        res['head'] = _read_string(fid, 8)
+        res['appname'] = _read_string(fid, 256)
+        res['origin'] = _read_string(fid, 256)
+        res['desc'] = _read_string(fid, 256)
+        res['nave'] = _read_int2(fid)
+        res['data_time'] = _read_string(fid, 255)
+        res['data_date'] = _read_string(fid, 255)
+        # Seems that date and time can be swapped
+        # (are they entered manually?!)
+        if '/' in res['data_time'] and ':' in res['data_date']:
+            data_date = res['data_date']
+            res['data_date'] = res['data_time']
+            res['data_time'] = data_date
+        res['nsamp'] = _read_int(fid)
+        res['nchan'] = _read_int2(fid)
+        _move_to_next(fid, 8)
+        res['sfreq'] = _read_double(fid)[0]
+        res['epoch_time'] = _read_double(fid)[0]
+        res['no_trials'] = _read_int2(fid)
+        _move_to_next(fid, 4)
+        res['pre_trig_pts'] = _read_int(fid)
+        res['no_trials_done'] = _read_int2(fid)
+        res['no_trials_display'] = _read_int2(fid)
+        _move_to_next(fid, 4)
+        res['save_trials'] = _read_int(fid)
+        res['primary_trigger'] = fid.read(1)
+        res['secondary_trigger'] = [fid.read(1)
+                                    for k in range(CTF.CTFV_MAX_AVERAGE_BINS)]
+        res['trigger_polarity_mask'] = fid.read(1)
+        res['trigger_mode'] = _read_int2(fid)
+        _move_to_next(fid, 4)
+        res['accept_reject'] = _read_int(fid)
+        res['run_time_display'] = _read_int2(fid)
+        _move_to_next(fid, 4)
+        res['zero_head'] = _read_int(fid)
+        _move_to_next(fid, 4)
+        res['artifact_mode'] = _read_int(fid)
+        _read_int(fid)  # padding
+        res['nf_run_name'] = _read_string(fid, 32)
+        res['nf_run_title'] = _read_string(fid, 256)
+        res['nf_instruments'] = _read_string(fid, 32)
+        res['nf_collect_descriptor'] = _read_string(fid, 32)
+        res['nf_subject_id'] = _read_string(fid, 32)
+        res['nf_operator'] = _read_string(fid, 32)
+        if len(res['nf_operator']) == 0:
+            res['nf_operator'] = None
+        res['nf_sensor_file_name'] = _read_ustring(fid, 60)
+        _move_to_next(fid, 4)
+        res['rdlen'] = _read_int(fid)
+        fid.seek(CTF.FUNNY_POS, 0)
+
+        if res['rdlen'] > 0:
+            res['run_desc'] = _read_string(fid, res['rdlen'])
+
+        # Filters
+        res['nfilt'] = _read_int2(fid)
+        res['filters'] = list()
+        for k in range(res['nfilt']):
+            res['filters'].append(_read_filter(fid))
+
+        # Channel information
+        res['chs'] = list()
+        res['ch_names'] = list()
+        for k in range(res['nchan']):
+            res['chs'].append(dict())
+            ch_name = _read_string(fid, 32)
+            res['chs'][k]['ch_name'] = ch_name
+            res['ch_names'].append(ch_name)
+        for k in range(res['nchan']):
+            res['chs'][k].update(_read_channel(fid))
+
+        # The compensation coefficients
+        _read_comp_coeff(fid, res)
+    logger.info('    res4 data read.')
+    return res
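
All res4 fields are big-endian; the `_read_*` helpers above are thin wrappers
around `np.fromfile` with explicit dtypes, equivalent to this sketch (using
`np.frombuffer`, since `np.fromfile` needs a real file object):

    import numpy as np
    buf = b'\x00\x00\x04\xb0' + b'\x01\x2c'
    n_samp = np.frombuffer(buf[:4], '>i4')[0]  # 32-bit big-endian -> 1200
    n_chan = np.frombuffer(buf[4:], '>i2')[0]  # 16-bit big-endian -> 300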
diff --git a/mne/tests/__init__.py b/mne/io/ctf/tests/__init__.py
similarity index 100%
copy from mne/tests/__init__.py
copy to mne/io/ctf/tests/__init__.py
diff --git a/mne/io/ctf/tests/test_ctf.py b/mne/io/ctf/tests/test_ctf.py
new file mode 100644
index 0000000..dca48b1
--- /dev/null
+++ b/mne/io/ctf/tests/test_ctf.py
@@ -0,0 +1,171 @@
+# Authors: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+import os
+from os import path as op
+import shutil
+
+import numpy as np
+from nose.tools import assert_raises, assert_true
+from numpy.testing import assert_allclose, assert_array_equal, assert_equal
+
+from mne import pick_types
+from mne.tests.common import assert_dig_allclose
+from mne.transforms import apply_trans
+from mne.io import Raw, read_raw_ctf
+from mne.io.tests.test_raw import _test_raw_reader
+from mne.utils import _TempDir, run_tests_if_main, slow_test
+from mne.datasets import testing
+
+ctf_dir = op.join(testing.data_path(download=False), 'CTF')
+ctf_fname_continuous = 'testdata_ctf.ds'
+ctf_fname_1_trial = 'testdata_ctf_short.ds'
+ctf_fname_2_trials = 'testdata_ctf_pseudocontinuous.ds'
+ctf_fname_discont = 'testdata_ctf_short_discontinuous.ds'
+ctf_fname_somato = 'somMDYO-18av.ds'
+ctf_fname_catch = 'catch-alp-good-f.ds'
+
+block_sizes = {
+    ctf_fname_continuous: 12000,
+    ctf_fname_1_trial: 4801,
+    ctf_fname_2_trials: 12000,
+    ctf_fname_discont: 1201,
+    ctf_fname_somato: 313,
+    ctf_fname_catch: 2500,
+}
+single_trials = (
+    ctf_fname_continuous,
+    ctf_fname_1_trial,
+)
+
+ctf_fnames = tuple(sorted(block_sizes.keys()))
+
+
+ at slow_test
+ at testing.requires_testing_data
+def test_read_ctf():
+    """Test CTF reader"""
+    temp_dir = _TempDir()
+    out_fname = op.join(temp_dir, 'test_py_raw.fif')
+
+    # Create a dummy .eeg file so we can test our reading/application of it
+    os.mkdir(op.join(temp_dir, 'randpos'))
+    ctf_eeg_fname = op.join(temp_dir, 'randpos', ctf_fname_catch)
+    shutil.copytree(op.join(ctf_dir, ctf_fname_catch), ctf_eeg_fname)
+    raw = _test_raw_reader(read_raw_ctf, directory=ctf_eeg_fname)
+    picks = pick_types(raw.info, meg=False, eeg=True)
+    pos = np.random.RandomState(42).randn(len(picks), 3)
+    fake_eeg_fname = op.join(ctf_eeg_fname, 'catch-alp-good-f.eeg')
+    # Create a bad file
+    with open(fake_eeg_fname, 'wb') as fid:
+        fid.write('foo\n'.encode('ascii'))
+    assert_raises(RuntimeError, read_raw_ctf, ctf_eeg_fname)
+    # Create a good file
+    with open(fake_eeg_fname, 'wb') as fid:
+        for ii, ch_num in enumerate(picks):
+            args = (str(ch_num + 1), raw.ch_names[ch_num],) + tuple(
+                '%0.5f' % x for x in 100 * pos[ii])  # convert to cm
+            fid.write(('\t'.join(args) + '\n').encode('ascii'))
+    pos_read_old = np.array([raw.info['chs'][p]['loc'][:3] for p in picks])
+    raw = read_raw_ctf(ctf_eeg_fname)  # read modified data
+    pos_read = np.array([raw.info['chs'][p]['loc'][:3] for p in picks])
+    assert_allclose(apply_trans(raw.info['ctf_head_t'], pos), pos_read,
+                    rtol=1e-5, atol=1e-5)
+    assert_true((pos_read == pos_read_old).mean() < 0.1)
+    shutil.copy(op.join(ctf_dir, 'catch-alp-good-f.ds_randpos_raw.fif'),
+                op.join(temp_dir, 'randpos', 'catch-alp-good-f.ds_raw.fif'))
+
+    # Create a version with no hc, starting out *with* EEG pos (error)
+    os.mkdir(op.join(temp_dir, 'no_hc'))
+    ctf_no_hc_fname = op.join(temp_dir, 'no_hc', ctf_fname_catch)
+    shutil.copytree(ctf_eeg_fname, ctf_no_hc_fname)
+    remove_base = op.join(ctf_no_hc_fname, op.basename(ctf_fname_catch[:-3]))
+    os.remove(remove_base + '.hc')
+    assert_raises(RuntimeError, read_raw_ctf, ctf_no_hc_fname)  # no coord tr
+    os.remove(remove_base + '.eeg')
+    shutil.copy(op.join(ctf_dir, 'catch-alp-good-f.ds_nohc_raw.fif'),
+                op.join(temp_dir, 'no_hc', 'catch-alp-good-f.ds_raw.fif'))
+
+    # All our files
+    use_fnames = [op.join(ctf_dir, c) for c in ctf_fnames]
+    for fname in use_fnames:
+        raw_c = Raw(fname + '_raw.fif', add_eeg_ref=False, preload=True)
+        raw = read_raw_ctf(fname)
+
+        # check info match
+        assert_array_equal(raw.ch_names, raw_c.ch_names)
+        assert_allclose(raw.times, raw_c.times)
+        assert_allclose(raw._cals, raw_c._cals)
+        for key in ('version', 'usecs'):
+            assert_equal(raw.info['meas_id'][key], raw_c.info['meas_id'][key])
+        py_time = raw.info['meas_id']['secs']
+        c_time = raw_c.info['meas_id']['secs']
+        max_offset = 24 * 60 * 60  # probably overkill but covers timezone
+        assert_true(c_time - max_offset <= py_time <= c_time)
+        for t in ('dev_head_t', 'dev_ctf_t', 'ctf_head_t'):
+            assert_allclose(raw.info[t]['trans'], raw_c.info[t]['trans'],
+                            rtol=1e-4, atol=1e-7)
+        for key in ('acq_pars', 'acq_stim', 'bads',
+                    'ch_names', 'custom_ref_applied', 'description',
+                    'events', 'experimenter', 'highpass', 'line_freq',
+                    'lowpass', 'nchan', 'proj_id', 'proj_name',
+                    'projs', 'sfreq', 'subject_info'):
+            assert_equal(raw.info[key], raw_c.info[key], key)
+        if op.basename(fname) not in single_trials:
+            # We don't force buffer size to be smaller like MNE-C
+            assert_equal(raw.info['buffer_size_sec'],
+                         raw_c.info['buffer_size_sec'])
+        assert_equal(len(raw.info['comps']), len(raw_c.info['comps']))
+        for c1, c2 in zip(raw.info['comps'], raw_c.info['comps']):
+            for key in ('colcals', 'rowcals'):
+                assert_allclose(c1[key], c2[key])
+            assert_equal(c1['save_calibrated'], c2['save_calibrated'])
+            for key in ('row_names', 'col_names', 'nrow', 'ncol'):
+                assert_array_equal(c1['data'][key], c2['data'][key])
+            assert_allclose(c1['data']['data'], c2['data']['data'], atol=1e-7,
+                            rtol=1e-5)
+        assert_allclose(raw.info['hpi_results'][0]['coord_trans']['trans'],
+                        raw_c.info['hpi_results'][0]['coord_trans']['trans'],
+                        rtol=1e-5, atol=1e-7)
+        assert_equal(len(raw.info['chs']), len(raw_c.info['chs']))
+        for ii, (c1, c2) in enumerate(zip(raw.info['chs'], raw_c.info['chs'])):
+            for key in ('kind', 'scanno', 'unit', 'ch_name', 'unit_mul',
+                        'range', 'coord_frame', 'coil_type', 'logno'):
+                assert_equal(c1[key], c2[key])
+            for key in ('loc', 'cal'):
+                assert_allclose(c1[key], c2[key], atol=1e-6, rtol=1e-4,
+                                err_msg='raw.info["chs"][%d][%s]' % (ii, key))
+        assert_dig_allclose(raw.info, raw_c.info)
+
+        # check data match
+        raw_c.save(out_fname, overwrite=True, buffer_size_sec=1.)
+        raw_read = Raw(out_fname, add_eeg_ref=False)
+
+        # so let's check tricky cases based on sample boundaries
+        rng = np.random.RandomState(0)
+        pick_ch = rng.permutation(np.arange(len(raw.ch_names)))[:10]
+        bnd = int(round(raw.info['sfreq'] * raw.info['buffer_size_sec']))
+        assert_equal(bnd, raw._raw_extras[0]['block_size'])
+        assert_equal(bnd, block_sizes[op.basename(fname)])
+        slices = (slice(0, bnd), slice(bnd - 1, bnd), slice(3, bnd),
+                  slice(3, 300), slice(None))
+        if len(raw.times) >= 2 * bnd:  # at least two complete blocks
+            slices = slices + (slice(bnd, 2 * bnd), slice(bnd, bnd + 1),
+                               slice(0, bnd + 100))
+        for sl_time in slices:
+            assert_allclose(raw[pick_ch, sl_time][0],
+                            raw_c[pick_ch, sl_time][0])
+            assert_allclose(raw_read[pick_ch, sl_time][0],
+                            raw_c[pick_ch, sl_time][0])
+        # all data / preload
+        raw = read_raw_ctf(fname, preload=True)
+        assert_allclose(raw[:][0], raw_c[:][0])
+    assert_raises(TypeError, read_raw_ctf, 1)
+    assert_raises(ValueError, read_raw_ctf, ctf_fname_continuous + 'foo.ds')
+    # test ignoring of system clock
+    read_raw_ctf(op.join(ctf_dir, ctf_fname_continuous), 'ignore')
+    assert_raises(ValueError, read_raw_ctf,
+                  op.join(ctf_dir, ctf_fname_continuous), 'foo')
+
+run_tests_if_main()
diff --git a/mne/io/ctf/trans.py b/mne/io/ctf/trans.py
new file mode 100644
index 0000000..ed6dbf5
--- /dev/null
+++ b/mne/io/ctf/trans.py
@@ -0,0 +1,170 @@
+"""Create coordinate transforms
+"""
+
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#
+# License: BSD (3-clause)
+
+import numpy as np
+from scipy import linalg
+
+from ...transforms import combine_transforms, invert_transform
+from ...utils import logger
+from ..constants import FIFF
+from .constants import CTF
+
+
+def _make_transform_card(fro, to, r_lpa, r_nasion, r_rpa):
+    """Helper to make a transform from cardinal landmarks"""
+    diff_1 = r_nasion - r_lpa
+    ex = r_rpa - r_lpa
+    alpha = np.dot(diff_1, ex) / np.dot(ex, ex)
+    ex /= np.sqrt(np.sum(ex * ex))
+    trans = np.eye(4)
+    move = (1. - alpha) * r_lpa + alpha * r_rpa
+    trans[:3, 3] = move
+    trans[:3, 0] = ex
+    ey = r_nasion - move
+    ey /= np.sqrt(np.sum(ey * ey))
+    trans[:3, 1] = ey
+    trans[:3, 2] = np.cross(ex, ey)  # ez
+    return {'from': fro, 'to': to, 'trans': trans}
+
+
+def _quaternion_align(from_frame, to_frame, from_pts, to_pts):
+    """Perform an alignment using the unit quaternions (modifies points)"""
+    assert from_pts.shape[1] == to_pts.shape[1] == 3
+
+    # Calculate the centroids and subtract
+    from_c, to_c = from_pts.mean(axis=0), to_pts.mean(axis=0)
+    from_ = from_pts - from_c
+    to_ = to_pts - to_c
+
+    # Compute the dot products
+    S = np.dot(from_.T, to_)
+
+    # Compute the magical N matrix
+    N = np.array([[S[0, 0] + S[1, 1] + S[2, 2], 0., 0., 0.],
+                  [S[1, 2] - S[2, 1], S[0, 0] - S[1, 1] - S[2, 2], 0., 0.],
+                  [S[2, 0] - S[0, 2], S[0, 1] + S[1, 0],
+                   -S[0, 0] + S[1, 1] - S[2, 2], 0.],
+                  [S[0, 1] - S[1, 0], S[2, 0] + S[0, 2],
+                   S[1, 2] + S[2, 1], -S[0, 0] - S[1, 1] + S[2, 2]]])
+
+    # Compute the eigenvalues and eigenvectors
+    # Use the eigenvector corresponding to the largest eigenvalue as the
+    # unit quaternion defining the rotation
+    eig_vals, eig_vecs = linalg.eigh(N, overwrite_a=True)
+    which = np.argmax(eig_vals)
+    if eig_vals[which] < 0:
+        raise RuntimeError('No positive eigenvalues. Cannot do the alignment.')
+    q = eig_vecs[:, which]
+
+    # Write out the rotation
+    trans = np.eye(4)
+    trans[0, 0] = q[0] * q[0] + q[1] * q[1] - q[2] * q[2] - q[3] * q[3]
+    trans[0, 1] = 2.0 * (q[1] * q[2] - q[0] * q[3])
+    trans[0, 2] = 2.0 * (q[1] * q[3] + q[0] * q[2])
+    trans[1, 0] = 2.0 * (q[2] * q[1] + q[0] * q[3])
+    trans[1, 1] = q[0] * q[0] - q[1] * q[1] + q[2] * q[2] - q[3] * q[3]
+    trans[1, 2] = 2.0 * (q[2] * q[3] - q[0] * q[1])
+    trans[2, 0] = 2.0 * (q[3] * q[1] - q[0] * q[2])
+    trans[2, 1] = 2.0 * (q[3] * q[2] + q[0] * q[1])
+    trans[2, 2] = q[0] * q[0] - q[1] * q[1] - q[2] * q[2] + q[3] * q[3]
+
+    # Now we need to generate a transformed translation vector
+    trans[:3, 3] = to_c - np.dot(trans[:3, :3], from_c)
+    del to_c, from_c
+
+    # Test the transformation and print the results
+    logger.info('    Quaternion matching (desired vs. transformed):')
+    for fro, to in zip(from_pts, to_pts):
+        rr = np.dot(trans[:3, :3], fro) + trans[:3, 3]
+        diff = np.sqrt(np.sum((to - rr) ** 2))
+        logger.info('    %7.2f %7.2f %7.2f mm <-> %7.2f %7.2f %7.2f mm '
+                    '(orig : %7.2f %7.2f %7.2f mm) diff = %8.3f mm'
+                    % (tuple(1000 * to) + tuple(1000 * rr) +
+                       tuple(1000 * fro) + (1000 * diff,)))
+        if diff > 1e-4:
+            raise RuntimeError('Something is wrong: quaternion matching did '
+                               'not work (see above)')
+    return {'from': from_frame, 'to': to_frame, 'trans': trans}
+
+
+def _make_ctf_coord_trans_set(res4, coils):
+    """Figure out the necessary coordinate transforms"""
+    # CTF head -> Neuromag head
+    lpa = rpa = nas = T1 = T2 = T3 = T5 = None
+    if coils is not None:
+        for p in coils:
+            if p['valid'] and (p['coord_frame'] ==
+                               FIFF.FIFFV_MNE_COORD_CTF_HEAD):
+                if lpa is None and p['kind'] == CTF.CTFV_COIL_LPA:
+                    lpa = p
+                elif rpa is None and p['kind'] == CTF.CTFV_COIL_RPA:
+                    rpa = p
+                elif nas is None and p['kind'] == CTF.CTFV_COIL_NAS:
+                    nas = p
+        if lpa is None or rpa is None or nas is None:
+            raise RuntimeError('Some of the mandatory HPI head-coordinate '
+                               'info was not there.')
+        t = _make_transform_card(FIFF.FIFFV_COORD_HEAD,
+                                 FIFF.FIFFV_MNE_COORD_CTF_HEAD,
+                                 lpa['r'], nas['r'], rpa['r'])
+        T3 = invert_transform(t)
+
+    # CTF device -> Neuromag device
+    #
+    # Rotate the CTF coordinate frame by 45 degrees and shift by 190 mm
+    # in z direction to get a coordinate system comparable to the Neuromag one
+    #
+    R = np.eye(4)
+    R[:3, 3] = [0., 0., 0.19]
+    val = 0.5 * np.sqrt(2.)
+    R[0, 0] = val
+    R[0, 1] = -val
+    R[1, 0] = val
+    R[1, 1] = val
+    T4 = {'from': FIFF.FIFFV_MNE_COORD_CTF_DEVICE,
+          'to': FIFF.FIFFV_COORD_DEVICE, 'trans': R}
+
+    # CTF device -> CTF head
+    # We need to make the implicit transform explicit!
+    h_pts = dict()
+    d_pts = dict()
+    kinds = (CTF.CTFV_COIL_LPA, CTF.CTFV_COIL_RPA, CTF.CTFV_COIL_NAS,
+             CTF.CTFV_COIL_SPARE)
+    if coils is not None:
+        for p in coils:
+            if p['valid']:
+                if p['coord_frame'] == FIFF.FIFFV_MNE_COORD_CTF_HEAD:
+                    for kind in kinds:
+                        if kind not in h_pts and p['kind'] == kind:
+                            h_pts[kind] = p['r']
+                elif p['coord_frame'] == FIFF.FIFFV_MNE_COORD_CTF_DEVICE:
+                    for kind in kinds:
+                        if kind not in d_pts and p['kind'] == kind:
+                            d_pts[kind] = p['r']
+        if any(kind not in h_pts for kind in kinds[:-1]):
+            raise RuntimeError('Some of the mandatory HPI head-coordinate '
+                               'info was not there.')
+        if any(kind not in d_pts for kind in kinds[:-1]):
+            raise RuntimeError('Some of the mandatory HPI device-coordinate '
+                               'info was not there.')
+        use_kinds = [kind for kind in kinds
+                     if (kind in h_pts and kind in d_pts)]
+        r_head = np.array([h_pts[kind] for kind in use_kinds])
+        r_dev = np.array([d_pts[kind] for kind in use_kinds])
+        T2 = _quaternion_align(FIFF.FIFFV_MNE_COORD_CTF_DEVICE,
+                               FIFF.FIFFV_MNE_COORD_CTF_HEAD, r_dev, r_head)
+
+    # The final missing transform
+    if T3 is not None and T2 is not None:
+        T5 = combine_transforms(T2, T3, FIFF.FIFFV_MNE_COORD_CTF_DEVICE,
+                                FIFF.FIFFV_COORD_HEAD)
+        T1 = combine_transforms(invert_transform(T4), T5,
+                                FIFF.FIFFV_COORD_DEVICE, FIFF.FIFFV_COORD_HEAD)
+    s = dict(t_dev_head=T1, t_ctf_dev_ctf_head=T2, t_ctf_head_head=T3,
+             t_ctf_dev_dev=T4, t_ctf_dev_head=T5)
+    logger.info('    Coordinate transformations established.')
+    return s
diff --git a/mne/io/ctf_comp.py b/mne/io/ctf_comp.py
new file mode 100644
index 0000000..1775236
--- /dev/null
+++ b/mne/io/ctf_comp.py
@@ -0,0 +1,159 @@
+# Authors: Alexandre Gramfort <alexandre.gramfort at telecom-paristech.fr>
+#          Matti Hamalainen <msh at nmr.mgh.harvard.edu>
+#          Denis Engemann <denis.engemann at gmail.com>
+#
+# License: BSD (3-clause)
+
+from copy import deepcopy
+
+import numpy as np
+
+from .constants import FIFF
+from .tag import read_tag
+from .tree import dir_tree_find
+from .write import start_block, end_block, write_int
+from .matrix import write_named_matrix, _read_named_matrix
+
+from ..utils import logger, verbose
+
+
+def _add_kind(one):
+    """Convert CTF kind to MNE kind"""
+    if one['ctfkind'] == int('47314252', 16):
+        one['kind'] = 1
+    elif one['ctfkind'] == int('47324252', 16):
+        one['kind'] = 2
+    elif one['ctfkind'] == int('47334252', 16):
+        one['kind'] = 3
+    else:
+        one['kind'] = int(one['ctfkind'])
+
+
+def _calibrate_comp(comp, chs, row_names, col_names,
+                    mult_keys=('range', 'cal'), flip=False):
+    """Helper to get row and column cals"""
+    ch_names = [c['ch_name'] for c in chs]
+    row_cals = np.zeros(len(row_names))
+    col_cals = np.zeros(len(col_names))
+    for names, cals, inv in zip((row_names, col_names), (row_cals, col_cals),
+                                (False, True)):
+        for ii in range(len(cals)):
+            p = ch_names.count(names[ii])
+            if p != 1:
+                raise RuntimeError('Channel %s does not appear exactly once '
+                                   'in data' % names[ii])
+            idx = ch_names.index(names[ii])
+            val = chs[idx][mult_keys[0]] * chs[idx][mult_keys[1]]
+            val = float(1. / val) if inv else float(val)
+            val = 1. / val if flip else val
+            cals[ii] = val
+    comp['rowcals'] = row_cals
+    comp['colcals'] = col_cals
+    comp['data']['data'] = (row_cals[:, None] *
+                            comp['data']['data'] * col_cals[None, :])
+
+
+@verbose
+def read_ctf_comp(fid, node, chs, verbose=None):
+    """Read the CTF software compensation data from the given node
+
+    Parameters
+    ----------
+    fid : file
+        The file descriptor.
+    node : dict
+        The node in the FIF tree.
+    chs : list
+        The list of channels from info['chs'] to match with
+        compensators that are read.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    compdata : list
+        The compensation data
+    """
+    compdata = []
+    comps = dir_tree_find(node, FIFF.FIFFB_MNE_CTF_COMP_DATA)
+
+    for node in comps:
+        #   Read the data we need
+        mat = _read_named_matrix(fid, node, FIFF.FIFF_MNE_CTF_COMP_DATA)
+        for p in range(node['nent']):
+            kind = node['directory'][p].kind
+            pos = node['directory'][p].pos
+            if kind == FIFF.FIFF_MNE_CTF_COMP_KIND:
+                tag = read_tag(fid, pos)
+                break
+        else:
+            raise Exception('Compensation type not found')
+
+        #   Get the compensation kind and map it to a simple number
+        one = dict(ctfkind=tag.data)
+        del tag
+        _add_kind(one)
+        for p in range(node['nent']):
+            kind = node['directory'][p].kind
+            pos = node['directory'][p].pos
+            if kind == FIFF.FIFF_MNE_CTF_COMP_CALIBRATED:
+                tag = read_tag(fid, pos)
+                calibrated = tag.data
+                break
+        else:
+            calibrated = False
+
+        one['save_calibrated'] = bool(calibrated)
+        one['data'] = mat
+        if not calibrated:
+            #   Calibrate...
+            _calibrate_comp(one, chs, mat['row_names'], mat['col_names'])
+        else:
+            one['rowcals'] = np.ones(mat['data'].shape[0], dtype=np.float)
+            one['colcals'] = np.ones(mat['data'].shape[1], dtype=np.float)
+
+        compdata.append(one)
+
+    if len(compdata) > 0:
+        logger.info('    Read %d compensation matrices' % len(compdata))
+
+    return compdata
+
+
+###############################################################################
+# Writing
+
+def write_ctf_comp(fid, comps):
+    """Write the CTF compensation data into a fif file
+
+    Parameters
+    ----------
+    fid : file
+        The open FIF file descriptor
+
+    comps : list
+        The compensation data to write
+    """
+    if len(comps) <= 0:
+        return
+
+    #  This is very simple in fact
+    start_block(fid, FIFF.FIFFB_MNE_CTF_COMP)
+    for comp in comps:
+        start_block(fid, FIFF.FIFFB_MNE_CTF_COMP_DATA)
+        #    Write the compensation kind
+        write_int(fid, FIFF.FIFF_MNE_CTF_COMP_KIND, comp['ctfkind'])
+        if comp.get('save_calibrated', False):
+            write_int(fid, FIFF.FIFF_MNE_CTF_COMP_CALIBRATED,
+                      comp['save_calibrated'])
+
+        if not comp.get('save_calibrated', True):
+            # Undo calibration
+            comp = deepcopy(comp)
+            data = ((1. / comp['rowcals'][:, None]) * comp['data']['data'] *
+                    (1. / comp['colcals'][None, :]))
+            comp['data']['data'] = data
+        write_named_matrix(fid, FIFF.FIFF_MNE_CTF_COMP_DATA, comp['data'])
+        end_block(fid, FIFF.FIFFB_MNE_CTF_COMP_DATA)
+
+    end_block(fid, FIFF.FIFFB_MNE_CTF_COMP)
diff --git a/mne/io/edf/edf.py b/mne/io/edf/edf.py
index 01509c4..bcf6eb2 100644
--- a/mne/io/edf/edf.py
+++ b/mne/io/edf/edf.py
@@ -12,14 +12,13 @@ import calendar
 import datetime
 import re
 import warnings
-from math import ceil, floor
 
 import numpy as np
 
 from ...utils import verbose, logger
+from ..utils import _blk_read_lims
 from ..base import _BaseRaw, _check_update_montage
 from ..meas_info import _empty_info
-from ..pick import pick_types
 from ..constants import FIFF
 from ...filter import resample
 from ...externals.six.moves import zip
@@ -94,8 +93,7 @@ class RawEDF(_BaseRaw):
         logger.info('Ready.')
 
     @verbose
-    def _read_segment_file(self, data, idx, offset, fi, start, stop,
-                           cals, mult):
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
         """Read a chunk of raw data"""
         from scipy.interpolate import interp1d
         if mult is not None:
@@ -103,13 +101,10 @@ class RawEDF(_BaseRaw):
             # and for efficiency we want to be able to combine mult and cals
             # so proj support will have to wait until this is resolved
             raise NotImplementedError('mult is not supported yet')
-        # RawFIF and RawEDF think of "stop" differently, easiest to increment
-        # here and refactor later
-        stop += 1
         sel = np.arange(self.info['nchan'])[idx]
 
         n_samps = self._raw_extras[fi]['n_samps']
-        buf_len = self._raw_extras[fi]['max_samp']
+        buf_len = int(self._raw_extras[fi]['max_samp'])
         sfreq = self.info['sfreq']
         n_chan = self.info['nchan']
         data_size = self._raw_extras[fi]['data_size']
@@ -120,10 +115,6 @@ class RawEDF(_BaseRaw):
         annotmap = self._raw_extras[fi]['annotmap']
         subtype = self._raw_extras[fi]['subtype']
 
-        # this is used to deal with indexing in the middle of a sampling period
-        blockstart = int(floor(float(start) / buf_len) * buf_len)
-        blockstop = int(ceil(float(stop) / buf_len) * buf_len)
-
         # gain constructor
         physical_range = np.array([ch['range'] for ch in self.info['chs']])
         cal = np.array([ch['cal'] for ch in self.info['chs']])
@@ -139,97 +130,37 @@ class RawEDF(_BaseRaw):
         if tal_channel is not None:
             offsets[tal_channel] = 0
 
-        read_size = blockstop - blockstart
-        this_data = np.empty((len(sel), buf_len))
-        data = data[:, offset:offset + (stop - start)]
-        """
-        Consider this example:
-
-        tmin, tmax = (2, 27)
-        read_size = 30
-        buf_len = 10
-        sfreq = 1.
-
-                        +---------+---------+---------+
-        File structure: |  buf0   |   buf1  |   buf2  |
-                        +---------+---------+---------+
-        File time:      0        10        20        30
-                        +---------+---------+---------+
-        Requested time:   2                       27
-
-                        |                             |
-                    blockstart                    blockstop
-                          |                        |
-                        start                    stop
-
-        We need 27 - 2 = 25 samples (per channel) to store our data, and
-        we need to read from 3 buffers (30 samples) to get all of our data.
-
-        On all reads but the first, the data we read starts at
-        the first sample of the buffer. On all reads but the last,
-        the data we read ends on the last sample of the buffer.
-
-        We call this_data the variable that stores the current buffer's data,
-        and data the variable that stores the total output.
-
-        On the first read, we need to do this::
-
-            >>> data[0:buf_len-2] = this_data[2:buf_len]
-
-        On the second read, we need to do::
-
-            >>> data[1*buf_len-2:2*buf_len-2] = this_data[0:buf_len]
-
-        On the final read, we need to do::
-
-            >>> data[2*buf_len-2:3*buf_len-2-3] = this_data[0:buf_len-3]
-
-        """
+        block_start_idx, r_lims, d_lims = _blk_read_lims(start, stop, buf_len)
+        read_size = len(r_lims) * buf_len
         with open(self._filenames[fi], 'rb', buffering=0) as fid:
             # extract data
-            fid.seek(data_offset + blockstart * n_chan * data_size)
-            n_blk = int(ceil(float(read_size) / buf_len))
-            start_offset = start - blockstart
-            end_offset = blockstop - stop
-            for bi in range(n_blk):
-                # Triage start (sidx) and end (eidx) indices for
-                # data (d) and read (r)
-                if bi == 0:
-                    d_sidx = 0
-                    r_sidx = start_offset
-                else:
-                    d_sidx = bi * buf_len - start_offset
-                    r_sidx = 0
-                if bi == n_blk - 1:
-                    d_eidx = data.shape[1]
-                    r_eidx = buf_len - end_offset
-                else:
-                    d_eidx = (bi + 1) * buf_len - start_offset
-                    r_eidx = buf_len
+            start_offset = (data_offset +
+                            block_start_idx * buf_len * n_chan * data_size)
+            ch_offsets = np.cumsum(np.concatenate([[0], n_samps * data_size]))
+            this_data = np.empty((len(sel), buf_len))
+            for bi in range(len(r_lims)):
+                block_offset = bi * ch_offsets[-1]
+                d_sidx, d_eidx = d_lims[bi]
+                r_sidx, r_eidx = r_lims[bi]
                 n_buf_samp = r_eidx - r_sidx
-                count = 0
-                for j, samp in enumerate(n_samps):
+                for ii, ci in enumerate(sel):
+                    n_samp = n_samps[ci]
                     # bdf data: 24bit data
-                    if j not in sel:
-                        fid.seek(samp * data_size, 1)
-                        continue
-                    if samp == buf_len:
+                    fid.seek(start_offset + block_offset + ch_offsets[ci], 0)
+                    if n_samp == buf_len:
                         # use faster version with skips built in
-                        if r_sidx > 0:
-                            fid.seek(r_sidx * data_size, 1)
+                        fid.seek(r_sidx * data_size, 1)
                         ch_data = _read_ch(fid, subtype, n_buf_samp, data_size)
-                        if r_eidx < buf_len:
-                            fid.seek((buf_len - r_eidx) * data_size, 1)
                     else:
                         # read in all the data and triage appropriately
-                        ch_data = _read_ch(fid, subtype, samp, data_size)
-                        if j == tal_channel:
+                        ch_data = _read_ch(fid, subtype, n_samp, data_size)
+                        if ci == tal_channel:
                             # don't resample tal_channel,
                             # pad with zeros instead.
-                            n_missing = int(buf_len - samp)
+                            n_missing = int(buf_len - n_samp)
                             ch_data = np.hstack([ch_data, [0] * n_missing])
                             ch_data = ch_data[r_sidx:r_eidx]
-                        elif j == stim_channel:
+                        elif ci == stim_channel:
                             if annot and annotmap or \
                                     tal_channel is not None:
                                 # don't bother with resampling the stim ch
@@ -238,17 +169,16 @@ class RawEDF(_BaseRaw):
                             else:
                                 warnings.warn('Interpolating stim channel.'
                                               ' Events may jitter.')
-                                oldrange = np.linspace(0, 1, samp + 1, True)
+                                oldrange = np.linspace(0, 1, n_samp + 1, True)
                                 newrange = np.linspace(0, 1, buf_len, False)
                                 newrange = newrange[r_sidx:r_eidx]
                                 ch_data = interp1d(
                                     oldrange, np.append(ch_data, 0),
                                     kind='zero')(newrange)
                         else:
-                            ch_data = resample(ch_data, buf_len, samp,
+                            ch_data = resample(ch_data, buf_len, n_samp,
                                                npad=0)[r_sidx:r_eidx]
-                    this_data[count, :n_buf_samp] = ch_data
-                    count += 1
+                    this_data[ii, :n_buf_samp] = ch_data
                 data[:, d_sidx:d_eidx] = this_data[:, :n_buf_samp]
         data *= gains.T[sel]
         data += offsets[sel]
@@ -260,7 +190,7 @@ class RawEDF(_BaseRaw):
             if annot and annotmap:
                 evts = _read_annot(annot, annotmap, sfreq,
                                    self._last_samps[fi])
-                data[stim_channel_idx, :] = evts[start:stop]
+                data[stim_channel_idx, :] = evts[start:stop + 1]
             elif tal_channel is not None:
                 tal_channel_idx = np.where(sel == tal_channel)[0][0]
                 evts = _parse_tal_channel(data[tal_channel_idx])
@@ -282,9 +212,9 @@ class RawEDF(_BaseRaw):
                     stim[n_start:n_stop] = evid
                 data[stim_channel_idx, :] = stim[start:stop]
             else:
-                # Allows support for up to 16-bit trigger values (2 ** 16 - 1)
+                # Allows support for up to 17-bit trigger values (2 ** 17 - 1)
                 stim = np.bitwise_and(data[stim_channel_idx].astype(int),
-                                      65535)
+                                      131071)
                 data[stim_channel_idx, :] = stim
 
 
@@ -350,8 +280,6 @@ def _get_edf_info(fname, stim_channel, annot, annotmap, eog, misc, preload):
         eog = []
     if misc is None:
         misc = []
-    info = _empty_info()
-    info['filename'] = fname
 
     edf_info = dict()
     edf_info['annot'] = annot
@@ -369,7 +297,6 @@ def _get_edf_info(fname, stim_channel, annot, annotmap, eog, misc, preload):
         hour, minute, sec = [int(x) for x in re.findall('(\d+)',
                                                         fid.read(8).decode())]
         date = datetime.datetime(year + 2000, month, day, hour, minute, sec)
-        info['meas_date'] = calendar.timegm(date.utctimetuple())
 
         edf_info['data_offset'] = header_nbytes = int(fid.read(8).decode())
         subtype = fid.read(44).strip().decode()[:5]
@@ -387,8 +314,8 @@ def _get_edf_info(fname, stim_channel, annot, annotmap, eog, misc, preload):
                           'Default record length set to 1.')
         else:
             edf_info['record_length'] = record_length
-        info['nchan'] = nchan = int(fid.read(4).decode())
-        channels = list(range(info['nchan']))
+        nchan = int(fid.read(4).decode())
+        channels = list(range(nchan))
         ch_names = [fid.read(16).strip().decode() for ch in channels]
         for ch in channels:
             fid.read(80)  # transducer
@@ -415,46 +342,16 @@ def _get_edf_info(fname, stim_channel, annot, annotmap, eog, misc, preload):
         lowpass = np.ravel([re.findall('LP:\s+(\w+)', filt)
                             for filt in prefiltering])
 
-        high_pass_default = 0.
-        if highpass.size == 0:
-            info['highpass'] = high_pass_default
-        elif all(highpass):
-            if highpass[0] == 'NaN':
-                info['highpass'] = high_pass_default
-            elif highpass[0] == 'DC':
-                info['highpass'] = 0.
-            else:
-                info['highpass'] = float(highpass[0])
-        else:
-            info['highpass'] = float(np.min(highpass))
-            warnings.warn('Channels contain different highpass filters. '
-                          'Highest filter setting will be stored.')
-
-        if lowpass.size == 0:
-            info['lowpass'] = None
-        elif all(lowpass):
-            if lowpass[0] == 'NaN':
-                info['lowpass'] = None
-            else:
-                info['lowpass'] = float(lowpass[0])
-        else:
-            info['lowpass'] = float(np.min(lowpass))
-            warnings.warn('%s' % ('Channels contain different lowpass filters.'
-                                  ' Lowest filter setting will be stored.'))
         # number of samples per record
         n_samps = np.array([int(fid.read(8).decode()) for ch in channels])
         edf_info['n_samps'] = n_samps
 
-        fid.read(32 * info['nchan']).decode()  # reserved
+        fid.read(32 * nchan).decode()  # reserved
         assert fid.tell() == header_nbytes
 
     physical_ranges = physical_max - physical_min
     cals = digital_max - digital_min
 
-    # Some keys to be consistent with FIF measurement info
-    info['description'] = None
-    info['buffer_size_sec'] = 10.
-
     if edf_info['subtype'] in ('24BIT', 'bdf'):
         edf_info['data_size'] = 3  # 24-bit (3 byte) integers
     else:
@@ -462,8 +359,8 @@ def _get_edf_info(fname, stim_channel, annot, annotmap, eog, misc, preload):
 
     # Creates a list of dicts of eeg channels for raw.info
     logger.info('Setting channel info structure...')
-    info['chs'] = []
-    info['ch_names'] = ch_names
+    chs = list()
+
     tal_ch_name = 'EDF Annotations'
     if tal_ch_name in ch_names:
         tal_channel = ch_names.index(tal_ch_name)
@@ -475,7 +372,8 @@ def _get_edf_info(fname, stim_channel, annot, annotmap, eog, misc, preload):
                                    ' parsed completely on loading.'
                                    ' You must set preload parameter to True.'))
     if stim_channel == -1:
-        stim_channel = info['nchan'] - 1
+        stim_channel = nchan - 1
+    pick_mask = np.ones(len(ch_names))
     for idx, ch_info in enumerate(zip(ch_names, physical_ranges, cals)):
         ch_name, physical_range, cal = ch_info
         chan_info = {}
@@ -493,19 +391,22 @@ def _get_edf_info(fname, stim_channel, annot, annotmap, eog, misc, preload):
         if ch_name in eog or idx in eog or idx - nchan in eog:
             chan_info['coil_type'] = FIFF.FIFFV_COIL_NONE
             chan_info['kind'] = FIFF.FIFFV_EOG_CH
+            pick_mask[idx] = False
         if ch_name in misc or idx in misc or idx - nchan in misc:
             chan_info['coil_type'] = FIFF.FIFFV_COIL_NONE
             chan_info['kind'] = FIFF.FIFFV_MISC_CH
+            pick_mask[idx] = False
         check1 = stim_channel == ch_name
         check2 = stim_channel == idx
-        check3 = info['nchan'] > 1
+        check3 = nchan > 1
         stim_check = np.logical_and(np.logical_or(check1, check2), check3)
         if stim_check:
             chan_info['coil_type'] = FIFF.FIFFV_COIL_NONE
             chan_info['unit'] = FIFF.FIFF_UNIT_NONE
             chan_info['kind'] = FIFF.FIFFV_STIM_CH
+            pick_mask[idx] = False
             chan_info['ch_name'] = 'STI 014'
-            info['ch_names'][idx] = chan_info['ch_name']
+            ch_names[idx] = chan_info['ch_name']
             units[idx] = 1
             if isinstance(stim_channel, str):
                 stim_channel = idx
@@ -515,20 +416,54 @@ def _get_edf_info(fname, stim_channel, annot, annotmap, eog, misc, preload):
             chan_info['coil_type'] = FIFF.FIFFV_COIL_NONE
             chan_info['unit'] = FIFF.FIFF_UNIT_NONE
             chan_info['kind'] = FIFF.FIFFV_MISC_CH
-        info['chs'].append(chan_info)
+            pick_mask[idx] = False
+        chs.append(chan_info)
     edf_info['stim_channel'] = stim_channel
 
-    # sfreq defined as the max sampling rate of eeg
-    picks = pick_types(info, meg=False, eeg=True)
-    if len(picks) == 0:
+    if any(pick_mask):
+        picks = [item for item, mask in zip(range(nchan), pick_mask) if mask]
+        edf_info['max_samp'] = max_samp = n_samps[picks].max()
+    else:
         edf_info['max_samp'] = max_samp = n_samps.max()
+    # sfreq defined as the max sampling rate of eeg
+    sfreq = n_samps.max() / record_length
+    info = _empty_info(sfreq)
+    info['filename'] = fname
+    info['meas_date'] = calendar.timegm(date.utctimetuple())
+    info['nchan'] = nchan
+    info['chs'] = chs
+    info['ch_names'] = ch_names
+
+    if highpass.size == 0:
+        pass
+    elif all(highpass):
+        if highpass[0] == 'NaN':
+            pass  # Placeholder for future use. Highpass set in _empty_info.
+        elif highpass[0] == 'DC':
+            info['highpass'] = 0.
+        else:
+            info['highpass'] = float(highpass[0])
     else:
-        edf_info['max_samp'] = max_samp = n_samps[picks].max()
-    info['sfreq'] = max_samp / record_length
-    edf_info['nsamples'] = int(n_records * max_samp)
+        info['highpass'] = float(np.min(highpass))
+        warnings.warn('Channels contain different highpass filters. '
+                      'Highest filter setting will be stored.')
+
+    if lowpass.size == 0:
+        pass
+    elif all(lowpass):
+        if lowpass[0] == 'NaN':
+            pass  # Placeholder for future use. Lowpass set in _empty_info.
+        else:
+            info['lowpass'] = float(lowpass[0])
+    else:
+        info['lowpass'] = float(np.min(lowpass))
+        warnings.warn('%s' % ('Channels contain different lowpass filters.'
+                              ' Lowest filter setting will be stored.'))
 
-    if info['lowpass'] is None:
-        info['lowpass'] = info['sfreq'] / 2.
+    # Some keys to be consistent with FIF measurement info
+    info['description'] = None
+    info['buffer_size_sec'] = 10.
+    edf_info['nsamples'] = int(n_records * max_samp)
 
     return info, edf_info
 
diff --git a/mne/io/edf/tests/test_edf.py b/mne/io/edf/tests/test_edf.py
index 7d68102..42c7abc 100644
--- a/mne/io/edf/tests/test_edf.py
+++ b/mne/io/edf/tests/test_edf.py
@@ -14,15 +14,15 @@ import warnings
 
 from nose.tools import assert_equal, assert_true
 from numpy.testing import (assert_array_almost_equal, assert_array_equal,
-                           assert_raises, assert_allclose)
+                           assert_raises)
 from scipy import io
 import numpy as np
 
-from mne import pick_types, concatenate_raws
+from mne import pick_types
 from mne.externals.six import iterbytes
 from mne.utils import _TempDir, run_tests_if_main, requires_pandas
-from mne.io import Raw, read_raw_edf, RawArray
-from mne.io.tests.test_raw import _test_concat
+from mne.io import read_raw_edf, Raw
+from mne.io.tests.test_raw import _test_raw_reader
 import mne.io.edf.edf as edfmodule
 from mne.event import find_events
 
@@ -45,15 +45,10 @@ eog = ['REOG', 'LEOG', 'IEOG']
 misc = ['EXG1', 'EXG5', 'EXG8', 'M1', 'M2']
 
 
-def test_concat():
-    """Test EDF concatenation"""
-    _test_concat(read_raw_edf, bdf_path)
-
-
 def test_bdf_data():
     """Test reading raw bdf files"""
-    raw_py = read_raw_edf(bdf_path, montage=montage_path, eog=eog,
-                          misc=misc, preload=True)
+    raw_py = _test_raw_reader(read_raw_edf, input_fname=bdf_path,
+                              montage=montage_path, eog=eog, misc=misc)
     assert_true('RawEDF' in repr(raw_py))
     picks = pick_types(raw_py.info, meg=False, eeg=True, exclude='bads')
     data_py, _ = raw_py[picks]
@@ -70,13 +65,44 @@ def test_bdf_data():
     assert_true((raw_py.info['chs'][25]['loc']).any())
     assert_true((raw_py.info['chs'][63]['loc']).any())
 
-    # Make sure concatenation works
-    raw_concat = concatenate_raws([raw_py.copy(), raw_py])
-    assert_equal(raw_concat.n_times, 2 * raw_py.n_times)
-
 
 def test_edf_data():
-    """Test reading raw edf files"""
+    """Test edf files"""
+    _test_raw_reader(read_raw_edf, input_fname=edf_path, stim_channel=None)
+    raw_py = read_raw_edf(edf_path, preload=True)
+    # Test saving and loading when annotations were parsed.
+    tempdir = _TempDir()
+    raw_file = op.join(tempdir, 'test-raw.fif')
+    raw_py.save(raw_file, overwrite=True, buffer_size_sec=1)
+    Raw(raw_file, preload=True)
+
+    edf_events = find_events(raw_py, output='step', shortest_event=0,
+                             stim_channel='STI 014')
+
+    # onset, duration, id
+    events = [[0.1344, 0.2560, 2],
+              [0.3904, 1.0000, 2],
+              [2.0000, 0.0000, 3],
+              [2.5000, 2.5000, 2]]
+    events = np.array(events)
+    events[:, :2] *= 512  # convert time to samples
+    events = np.array(events, dtype=int)
+    events[:, 1] -= 1
+    events[events[:, 1] <= 0, 1] = 1
+    events[:, 1] += events[:, 0]
+
+    onsets = events[:, [0, 2]]
+    offsets = events[:, [1, 2]]
+
+    events = np.zeros((2 * events.shape[0], 3), dtype=int)
+    events[0::2, [0, 2]] = onsets
+    events[1::2, [0, 1]] = offsets
+
+    assert_array_equal(edf_events, events)
+
+
+def test_stim_channel():
+    """Test reading raw edf files with stim channel"""
     raw_py = read_raw_edf(edf_path, misc=range(-4, 0), stim_channel=139,
                           preload=True)
 
@@ -94,10 +120,6 @@ def test_edf_data():
 
     assert_array_almost_equal(data_py, data_eeglab, 10)
 
-    # Make sure concatenation works
-    raw_concat = concatenate_raws([raw_py.copy(), raw_py])
-    assert_equal(raw_concat.n_times, 2 * raw_py.n_times)
-
     # Test uneven sampling
     raw_py = read_raw_edf(edf_uneven_path, stim_channel=None)
     data_py, _ = raw_py[0]
@@ -111,65 +133,7 @@ def test_edf_data():
     data_py = np.repeat(data_py, repeats=upsample)
     assert_array_equal(data_py, data_eeglab)
 
-
-def test_read_segment():
-    """Test writing raw edf files when preload is False"""
-    tempdir = _TempDir()
-    raw1 = read_raw_edf(edf_path, stim_channel=None, preload=False)
-    raw1_file = op.join(tempdir, 'test1-raw.fif')
-    raw1.save(raw1_file, overwrite=True, buffer_size_sec=1)
-    raw11 = Raw(raw1_file, preload=True)
-    data1, times1 = raw1[:139, :]
-    data11, times11 = raw11[:139, :]
-    assert_allclose(data1, data11, rtol=1e-6)
-    assert_array_almost_equal(times1, times11)
-    assert_equal(sorted(raw1.info.keys()), sorted(raw11.info.keys()))
-    data2, times2 = raw1[0, 0:1]
-    assert_array_equal(data2[0], data1[0, 0:1])
-    assert_array_equal(times2, times1[0:1])
-
-    buffer_fname = op.join(tempdir, 'buffer')
-    for preload in (buffer_fname, True, False):  # false here means "delayed"
-        raw2 = read_raw_edf(edf_path, stim_channel=None, preload=preload)
-        if preload is False:
-            raw2.load_data()
-        raw2_file = op.join(tempdir, 'test2-raw.fif')
-        raw2.save(raw2_file, overwrite=True)
-        data2, times2 = raw2[:139, :]
-        assert_allclose(data1, data2, rtol=1e-6)
-        assert_array_equal(times1, times2)
-
-    raw1 = Raw(raw1_file, preload=True)
-    raw2 = Raw(raw2_file, preload=True)
-    assert_array_equal(raw1._data, raw2._data)
-
-    # test the _read_segment function by only loading some of the data
-    raw1 = read_raw_edf(edf_path, stim_channel=None, preload=False)
-    raw2 = read_raw_edf(edf_path, stim_channel=None, preload=True)
-
-    # select some random range of data to compare
-    data1, times1 = raw1[:, 345:417]
-    data2, times2 = raw2[:, 345:417]
-    assert_array_equal(data1, data2)
-    assert_array_equal(times1, times2)
-
-
-def test_append():
-    """Test appending raw edf objects using Raw.append"""
-    for preload in (True, False):
-        raw = read_raw_edf(bdf_path, preload=False)
-        raw0 = raw.copy()
-        raw1 = raw.copy()
-        raw0.append(raw1)
-        assert_true(2 * len(raw) == len(raw0))
-        assert_allclose(np.tile(raw[:, :][0], (1, 2)), raw0[:, :][0])
-
-    # different types can't combine
-    raw = read_raw_edf(bdf_path, preload=True)
-    raw0 = raw.copy()
-    raw1 = raw.copy()
-    raw2 = RawArray(raw[:, :][0], raw.info)
-    assert_raises(ValueError, raw.append, raw2)
+    assert_raises(RuntimeError, read_raw_edf, edf_path, preload=False)
 
 
 def test_parse_annotation():
@@ -224,23 +188,6 @@ def test_edf_annotations():
     assert_array_equal(edf_events, events)
 
 
-def test_write_annotations():
-    """Test writing raw files when annotations were parsed."""
-    tempdir = _TempDir()
-    raw1 = read_raw_edf(edf_path, preload=True)
-    raw1_file = op.join(tempdir, 'test1-raw.fif')
-    raw1.save(raw1_file, overwrite=True, buffer_size_sec=1)
-    raw11 = Raw(raw1_file, preload=True)
-    data1, times1 = raw1[:, :]
-    data11, times11 = raw11[:, :]
-
-    assert_array_almost_equal(data1, data11)
-    assert_array_almost_equal(times1, times11)
-    assert_equal(sorted(raw1.info.keys()), sorted(raw11.info.keys()))
-
-    assert_raises(RuntimeError, read_raw_edf, edf_path, preload=False)
-
-
 def test_edf_stim_channel():
     """Test stim channel for edf file"""
     raw = read_raw_edf(edf_stim_channel_path, preload=True,
diff --git a/mne/io/eeglab/__init__.py b/mne/io/eeglab/__init__.py
new file mode 100644
index 0000000..871142f
--- /dev/null
+++ b/mne/io/eeglab/__init__.py
@@ -0,0 +1,5 @@
+"""EEGLAB module for conversion to FIF"""
+
+# Author: Mainak Jas <mainak.jas at telecom-paristech.fr>
+
+from .eeglab import read_raw_eeglab, read_epochs_eeglab
diff --git a/mne/io/eeglab/eeglab.py b/mne/io/eeglab/eeglab.py
new file mode 100644
index 0000000..72f2906
--- /dev/null
+++ b/mne/io/eeglab/eeglab.py
@@ -0,0 +1,447 @@
+# Author: Mainak Jas <mainak.jas at telecom-paristech.fr>
+#
+# License: BSD (3-clause)
+
+import os.path as op
+import numpy as np
+import warnings
+
+from ..utils import _read_segments_file, _find_channels
+from ..constants import FIFF
+from ..meas_info import _empty_info, create_info
+from ..base import _BaseRaw, _check_update_montage
+from ...utils import logger, verbose, check_version
+from ...channels.montage import Montage
+from ...epochs import _BaseEpochs
+from ...event import read_events
+from ...externals.six import string_types
+
+# just fix the scaling for now, EEGLAB doesn't seem to provide this info
+CAL = 1e-6
+
+
+def _check_fname(fname):
+    """Check if the file extension is valid.
+    """
+    fmt = str(op.splitext(fname)[-1])
+    if fmt == '.dat':
+        raise NotImplementedError(
+            'Old data format .dat detected. Please update your EEGLAB '
+            'version and resave the data in .fdt format')
+    elif fmt != '.fdt':
+        raise IOError('Expected .fdt file format. Found %s format' % fmt)
+
+
+def _check_mat_struct(fname):
+    """Check if the mat struct contains 'EEG'.
+    """
+    if not check_version('scipy', '0.12'):
+        raise RuntimeError('scipy >= 0.12 must be installed for reading EEGLAB'
+                           ' files.')
+    from scipy import io
+    mat = io.whosmat(fname, struct_as_record=False,
+                     squeeze_me=True)
+    if 'ALLEEG' in mat[0]:
+        raise NotImplementedError(
+            'Loading an ALLEEG array is not supported. Please contact '
+            'mne-python developers for more information.')
+    elif 'EEG' not in mat[0]:
+        msg = ('Unknown array in the .set file.')
+        raise ValueError(msg)
+
+
+def _to_loc(ll):
+    """Check if location exists.
+    """
+    if isinstance(ll, (int, float)) or len(ll) > 0:
+        return ll
+    else:
+        return 0.
+
+
+def _get_info(eeg, montage, eog=()):
+    """Get measurement info.
+    """
+    info = _empty_info(sfreq=eeg.srate)
+    info['nchan'] = eeg.nbchan
+
+    # add the ch_names and info['chs'][idx]['loc']
+    path = None
+    if len(eeg.chanlocs) > 0:
+        ch_names, pos = list(), list()
+        kind = 'user_defined'
+        selection = np.arange(len(eeg.chanlocs))
+        locs_available = True
+        for chanloc in eeg.chanlocs:
+            ch_names.append(chanloc.labels)
+            loc_x = _to_loc(chanloc.X)
+            loc_y = _to_loc(chanloc.Y)
+            loc_z = _to_loc(chanloc.Z)
+            locs = np.r_[-loc_y, loc_x, loc_z]
+            if np.unique(locs).size == 1:
+                locs_available = False
+            pos.append(locs)
+        if locs_available:
+            montage = Montage(np.array(pos), ch_names, kind, selection)
+    elif isinstance(montage, string_types):
+        path = op.dirname(montage)
+
+    if montage is None:
+        info = create_info(ch_names, eeg.srate, ch_types='eeg')
+    else:
+        _check_update_montage(info, montage, path=path,
+                              update_ch_names=True)
+
+    info['buffer_size_sec'] = 1.  # reasonable default
+    # update the info dict
+
+    if eog == 'auto':
+        eog = _find_channels(ch_names)
+
+    for idx, ch in enumerate(info['chs']):
+        ch['cal'] = CAL
+        if ch['ch_name'] in eog or idx in eog:
+            ch['coil_type'] = FIFF.FIFFV_COIL_NONE
+            ch['kind'] = FIFF.FIFFV_EOG_CH
+
+    return info
+
+
+def read_raw_eeglab(input_fname, montage=None, preload=False, eog=(),
+                    verbose=None):
+    """Read an EEGLAB .set file
+
+    Parameters
+    ----------
+    input_fname : str
+        Path to the .set file. If the data is stored in a separate .fdt file,
+        it is expected to be in the same folder as the .set file.
+    montage : str | None | instance of montage
+        Path or instance of montage containing electrode positions.
+        If None, sensor locations are (0,0,0). See the documentation of
+        :func:`mne.channels.read_montage` for more information.
+    preload : bool or str (default False)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory). Note that
+        preload=False will be effective only if the data is stored in a
+        separate binary file.
+    eog : list | tuple | 'auto'
+        Names or indices of channels that should be designated
+        EOG channels. If 'auto', the channel names containing
+        ``EOG`` or ``EYE`` are used. Defaults to empty tuple.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    raw : Instance of RawEEGLAB
+        A Raw object containing EEGLAB .set data.
+
+    Notes
+    -----
+    .. versionadded:: 0.11.0
+
+    See Also
+    --------
+    mne.io.Raw : Documentation of attributes and methods.
+    """
+    return RawEEGLAB(input_fname=input_fname, montage=montage, preload=preload,
+                     eog=eog, verbose=verbose)
+
+
+def read_epochs_eeglab(input_fname, events=None, event_id=None, montage=None,
+                       eog=(), verbose=None):
+    """Reader function for EEGLAB epochs files
+
+    Parameters
+    ----------
+    input_fname : str
+        Path to the .set file. If the data is stored in a separate .fdt file,
+        it is expected to be in the same folder as the .set file.
+    events : str | array, shape (n_events, 3) | None
+        Path to events file. If array, it is the events typically returned
+        by the read_events function. If some events don't match the events
+        of interest as specified by event_id, they will be marked as 'IGNORED'
+        in the drop log. If None, it is constructed from the EEGLAB (.set) file
+        with each unique event encoded with a different integer.
+    event_id : int | list of int | dict | None
+        The id of the event to consider. If dict,
+        the keys can later be used to access associated events. Example:
+        dict(auditory=1, visual=3). If int, a dict will be created with
+        the id as string. If a list, all events with the IDs specified
+        in the list are used. If None, the event_id is constructed from the
+        EEGLAB (.set) file with each description copied from `eventtype`.
+    montage : str | None | instance of montage
+        Path or instance of montage containing electrode positions.
+        If None, sensor locations are (0,0,0). See the documentation of
+        :func:`mne.channels.read_montage` for more information.
+    eog : list | tuple | 'auto'
+        Names or indices of channels that should be designated
+        EOG channels. If 'auto', the channel names containing
+        ``EOG`` or ``EYE`` are used. Defaults to empty tuple.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    epochs : instance of Epochs
+        The epochs.
+
+    Notes
+    -----
+    .. versionadded:: 0.11.0
+
+
+    See Also
+    --------
+    mne.Epochs : Documentation of attributes and methods.
+    """
+    epochs = EpochsEEGLAB(input_fname=input_fname, events=events, eog=eog,
+                          event_id=event_id, montage=montage, verbose=verbose)
+    return epochs
+
+
+class RawEEGLAB(_BaseRaw):
+    """Raw object from EEGLAB .set file.
+
+    Parameters
+    ----------
+    input_fname : str
+        Path to the .set file. If the data is stored in a separate .fdt file,
+        it is expected to be in the same folder as the .set file.
+    montage : str | None | instance of montage
+        Path or instance of montage containing electrode positions.
+        If None, sensor locations are (0,0,0). See the documentation of
+        :func:`mne.channels.read_montage` for more information.
+    preload : bool or str (default False)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory).
+    eog : list | tuple | 'auto'
+        Names or indices of channels that should be designated
+        EOG channels. If 'auto', the channel names containing
+        ``EOG`` or ``EYE`` are used. Defaults to empty tuple.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    raw : Instance of RawEEGLAB
+        A Raw object containing EEGLAB .set data.
+
+    Notes
+    -----
+    .. versionadded:: 0.11.0
+
+    See Also
+    --------
+    mne.io.Raw : Documentation of attributes and methods.
+    """
+    @verbose
+    def __init__(self, input_fname, montage, preload=False, eog=(),
+                 verbose=None):
+        """Read EEGLAB .set file.
+        """
+        from scipy import io
+        basedir = op.dirname(input_fname)
+        _check_mat_struct(input_fname)
+        eeg = io.loadmat(input_fname, struct_as_record=False,
+                         squeeze_me=True)['EEG']
+        if eeg.trials != 1:
+            raise TypeError('The number of trials is %d. It must be 1 for raw'
+                            ' files. Please use `mne.io.read_epochs_eeglab` if'
+                            ' the .set file contains epochs.' % eeg.trials)
+
+        last_samps = [eeg.pnts - 1]
+        info = _get_info(eeg, montage, eog=eog)
+        # read the data
+        if isinstance(eeg.data, string_types):
+            data_fname = op.join(basedir, eeg.data)
+            _check_fname(data_fname)
+            logger.info('Reading %s' % data_fname)
+
+            super(RawEEGLAB, self).__init__(
+                info, preload, filenames=[data_fname], last_samps=last_samps,
+                orig_format='double', verbose=verbose)
+        else:
+            if preload is False or isinstance(preload, string_types):
+                warnings.warn('Data will be preloaded. preload=False or a'
+                              ' string preload is not supported when the data'
+                              ' is stored in the .set file')
+            # can't be done in standard way with preload=True because of
+            # different reading path (.set file)
+            data = eeg.data.reshape(eeg.nbchan, -1, order='F')
+            data = data.astype(np.double)
+            data *= CAL
+            super(RawEEGLAB, self).__init__(
+                info, data, last_samps=last_samps, orig_format='double',
+                verbose=verbose)
+
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
+        """Read a chunk of raw data"""
+        _read_segments_file(self, data, idx, fi, start, stop, cals, mult,
+                            dtype=np.float32)
+
+
+class EpochsEEGLAB(_BaseEpochs):
+    """Epochs from EEGLAB .set file
+
+    Parameters
+    ----------
+    input_fname : str
+        Path to the .set file. If the data is stored in a separate .fdt file,
+        it is expected to be in the same folder as the .set file.
+    events : str | array, shape (n_events, 3) | None
+        Path to events file. If array, it is the events typically returned
+        by the read_events function. If some events don't match the events
+        of interest as specified by event_id, they will be marked as 'IGNORED'
+        in the drop log. If None, it is constructed from the EEGLAB (.set) file
+        with each unique event encoded with a different integer.
+    event_id : int | list of int | dict | None
+        The id of the event to consider. If dict,
+        the keys can later be used to access associated events. Example:
+        dict(auditory=1, visual=3). If int, a dict will be created with
+        the id as string. If a list, all events with the IDs specified
+        in the list are used. If None, the event_id is constructed from the
+        EEGLAB (.set) file with each description copied from `eventtype`.
+    tmin : float
+        Start time before event.
+    baseline : None or tuple of length 2 (default (None, 0))
+        The time interval to apply baseline correction.
+        If None do not apply it. If baseline is (a, b)
+        the interval is between "a (s)" and "b (s)".
+        If a is None the beginning of the data is used
+        and if b is None then b is set to the end of the interval.
+        If baseline is equal to (None, None) all the time
+        interval is used.
+        The baseline (a, b) includes both endpoints, i.e. all
+        timepoints t such that a <= t <= b.
+    reject : dict | None
+        Rejection parameters based on peak-to-peak amplitude.
+        Valid keys are 'grad' | 'mag' | 'eeg' | 'eog' | 'ecg'.
+        If reject is None then no rejection is done. Example::
+
+            reject = dict(grad=4000e-13, # T / m (gradiometers)
+                          mag=4e-12, # T (magnetometers)
+                          eeg=40e-6, # uV (EEG channels)
+                          eog=250e-6 # uV (EOG channels)
+                          )
+    flat : dict | None
+        Rejection parameters based on flatness of signal.
+        Valid keys are 'grad' | 'mag' | 'eeg' | 'eog' | 'ecg', and values
+        are floats that set the minimum acceptable peak-to-peak amplitude.
+        If flat is None then no rejection is done.
+    reject_tmin : scalar | None
+        Start of the time window used to reject epochs (with the default None,
+        the window will start with tmin).
+    reject_tmax : scalar | None
+        End of the time window used to reject epochs (with the default None,
+        the window will end with tmax).
+    montage : str | None | instance of montage
+        Path or instance of montage containing electrode positions.
+        If None, sensor locations are (0,0,0). See the documentation of
+        :func:`mne.channels.read_montage` for more information.
+    eog : list | tuple | 'auto'
+        Names or indices of channels that should be designated
+        EOG channels. If 'auto', the channel names containing
+        ``EOG`` or ``EYE`` are used. Defaults to empty tuple.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Notes
+    -----
+    .. versionadded:: 0.11.0
+
+    See Also
+    --------
+    mne.Epochs : Documentation of attributes and methods.
+    """
+    @verbose
+    def __init__(self, input_fname, events=None, event_id=None, tmin=0,
+                 baseline=None, reject=None, flat=None, reject_tmin=None,
+                 reject_tmax=None, montage=None, eog=(), verbose=None):
+        from scipy import io
+        _check_mat_struct(input_fname)
+        eeg = io.loadmat(input_fname, struct_as_record=False,
+                         squeeze_me=True)['EEG']
+
+        if not ((events is None and event_id is None) or
+                (events is not None and event_id is not None)):
+            raise ValueError('`events` and `event_id` must either both be '
+                             'None or both be specified')
+
+        if events is None and eeg.trials > 1:
+            # first extract the events and construct an event_id dict
+            event_name, event_latencies, unique_ev = list(), list(), list()
+            ev_idx = 0
+            for ep in eeg.epoch:
+                if not isinstance(ep.eventtype, string_types):
+                    event_type = '/'.join(ep.eventtype.tolist())
+                    event_name.append(event_type)
+                    # store latency of only first event
+                    event_latencies.append(eeg.event[ev_idx].latency)
+                    ev_idx += len(ep.eventtype)
+                    warnings.warn('An epoch has multiple events. '
+                                  'Only the latency of the first event '
+                                  'will be retained.')
+                else:
+                    event_type = ep.eventtype
+                    event_name.append(ep.eventtype)
+                    event_latencies.append(eeg.event[ev_idx].latency)
+                    ev_idx += 1
+
+                if event_type not in unique_ev:
+                    unique_ev.append(event_type)
+
+                # invent event dict but use id > 0 so you know it's a trigger
+                event_id = dict((ev, idx + 1) for idx, ev
+                                in enumerate(unique_ev))
+            # now fill up the event array
+            events = np.zeros((eeg.trials, 3), dtype=int)
+            for idx in range(0, eeg.trials):
+                if idx == 0:
+                    prev_stim = 0
+                elif (idx > 0 and
+                        event_latencies[idx] - event_latencies[idx - 1] == 1):
+                    prev_stim = event_id[event_name[idx - 1]]
+                events[idx, 0] = event_latencies[idx]
+                events[idx, 1] = prev_stim
+                events[idx, 2] = event_id[event_name[idx]]
+        elif isinstance(events, string_types):
+            events = read_events(events)
+
+        logger.info('Extracting parameters from %s...' % input_fname)
+        input_fname = op.abspath(input_fname)
+        info = _get_info(eeg, montage, eog=eog)
+
+        for key, val in event_id.items():
+            if val not in events[:, 2]:
+                raise ValueError('No matching events found for %s '
+                                 '(event id %i)' % (key, val))
+
+        self._filename = input_fname
+        if isinstance(eeg.data, string_types):
+            basedir = op.dirname(input_fname)
+            data_fname = op.join(basedir, eeg.data)
+            _check_fname(data_fname)
+            with open(data_fname, 'rb') as data_fid:
+                data = np.fromfile(data_fid, dtype=np.float32)
+                data = data.reshape((eeg.nbchan, eeg.pnts, eeg.trials),
+                                    order="F")
+        else:
+            data = eeg.data
+        data = data.transpose((2, 0, 1)).astype('double')
+        data *= CAL
+        assert data.shape == (eeg.trials, eeg.nbchan, eeg.pnts)
+        tmin, tmax = eeg.xmin, eeg.xmax
+
+        super(EpochsEEGLAB, self).__init__(
+            info, data, events, event_id, tmin, tmax, baseline,
+            reject=reject, flat=flat, reject_tmin=reject_tmin,
+            reject_tmax=reject_tmax, add_eeg_ref=False, verbose=verbose)
+        logger.info('Ready.')
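
For orientation, a minimal usage sketch of the events/event_id combinations
this reader accepts; the file names here are hypothetical, and
`read_epochs_eeglab` is the public entry point that constructs this class:

    import mne

    fname = 'subject1_epochs.set'  # hypothetical .set file

    # Let the reader derive events and event_id from the .set file itself;
    # each unique eventtype gets its own integer id > 0.
    epochs_auto = mne.read_epochs_eeglab(fname)

    # Or pass both explicitly; supplying only one of the two raises
    # ValueError, as enforced in __init__ above.
    event_id = dict(auditory=1, visual=3)
    epochs = mne.read_epochs_eeglab(fname, events='subject1-eve.fif',
                                    event_id=event_id)
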
diff --git a/mne/tests/__init__.py b/mne/io/eeglab/tests/__init__.py
similarity index 100%
copy from mne/tests/__init__.py
copy to mne/io/eeglab/tests/__init__.py
diff --git a/mne/io/eeglab/tests/test_eeglab.py b/mne/io/eeglab/tests/test_eeglab.py
new file mode 100644
index 0000000..3e5a65e
--- /dev/null
+++ b/mne/io/eeglab/tests/test_eeglab.py
@@ -0,0 +1,85 @@
+# Author: Mainak Jas <mainak.jas at telecom-paristech.fr>
+#
+# License: BSD (3-clause)
+
+import os.path as op
+import shutil
+
+import warnings
+from nose.tools import assert_raises, assert_equal
+from numpy.testing import assert_array_equal
+
+from mne import write_events, read_epochs_eeglab
+from mne.io import read_raw_eeglab
+from mne.io.tests.test_raw import _test_raw_reader
+from mne.datasets import testing
+from mne.utils import _TempDir, run_tests_if_main, requires_version
+
+base_dir = op.join(testing.data_path(download=False), 'EEGLAB')
+raw_fname = op.join(base_dir, 'test_raw.set')
+raw_fname_onefile = op.join(base_dir, 'test_raw_onefile.set')
+epochs_fname = op.join(base_dir, 'test_epochs.set')
+epochs_fname_onefile = op.join(base_dir, 'test_epochs_onefile.set')
+montage = op.join(base_dir, 'test_chans.locs')
+
+warnings.simplefilter('always')  # enable b/c these tests throw warnings
+
+
+@requires_version('scipy', '0.12')
+@testing.requires_testing_data
+def test_io_set():
+    """Test importing EEGLAB .set files"""
+    from scipy import io
+
+    _test_raw_reader(read_raw_eeglab, input_fname=raw_fname, montage=montage)
+    with warnings.catch_warnings(record=True) as w:
+        warnings.simplefilter('always')
+        _test_raw_reader(read_raw_eeglab, input_fname=raw_fname_onefile,
+                         montage=montage)
+        raw = read_raw_eeglab(input_fname=raw_fname_onefile, montage=montage)
+        raw2 = read_raw_eeglab(input_fname=raw_fname, montage=montage)
+        assert_array_equal(raw[:][0], raw2[:][0])
+    # one warning per preload=False or preload=str read of raw_fname_onefile
+    assert_equal(len(w), 3)
+
+    with warnings.catch_warnings(record=True) as w:
+        warnings.simplefilter('always')
+        epochs = read_epochs_eeglab(epochs_fname)
+        epochs2 = read_epochs_eeglab(epochs_fname_onefile)
+    # 3 warnings for each read_epochs_eeglab because there are 3 epochs
+    # associated with multiple events
+    assert_equal(len(w), 6)
+    assert_array_equal(epochs.get_data(), epochs2.get_data())
+
+    # test different combinations of events and event_ids
+    temp_dir = _TempDir()
+    out_fname = op.join(temp_dir, 'test-eve.fif')
+    write_events(out_fname, epochs.events)
+    event_id = {'S255/S8': 1, 'S8': 2, 'S255/S9': 3}
+
+    epochs = read_epochs_eeglab(epochs_fname, epochs.events, event_id)
+    epochs = read_epochs_eeglab(epochs_fname, out_fname, event_id)
+    assert_raises(ValueError, read_epochs_eeglab, epochs_fname,
+                  None, event_id)
+    assert_raises(ValueError, read_epochs_eeglab, epochs_fname,
+                  epochs.events, None)
+
+    # test that a .dat data file raises NotImplementedError
+    eeg = io.loadmat(epochs_fname, struct_as_record=False,
+                     squeeze_me=True)['EEG']
+    eeg.data = 'epochs_fname.dat'
+    bad_epochs_fname = op.join(temp_dir, 'test_epochs.set')
+    io.savemat(bad_epochs_fname, {'EEG':
+               {'trials': eeg.trials, 'srate': eeg.srate,
+                'nbchan': eeg.nbchan, 'data': eeg.data,
+                'epoch': eeg.epoch, 'event': eeg.event,
+                'chanlocs': eeg.chanlocs}})
+    shutil.copyfile(op.join(base_dir, 'test_epochs.fdt'),
+                    op.join(temp_dir, 'test_epochs.dat'))
+    with warnings.catch_warnings(record=True) as w:
+        warnings.simplefilter('always')
+        assert_raises(NotImplementedError, read_epochs_eeglab,
+                      bad_epochs_fname)
+    assert_equal(len(w), 3)
+
+run_tests_if_main()
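
The warning-counting assertions above use a standard pattern that is worth
seeing in isolation; a minimal, self-contained sketch (the warning text is
illustrative only):

    import warnings

    def noisy():
        warnings.warn('An epoch has multiple events.')

    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter('always')  # record repeated warnings too
        noisy()
        noisy()
    assert len(w) == 2  # one record per emitted warning
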
diff --git a/mne/io/egi/egi.py b/mne/io/egi/egi.py
index 7b38a5b..c2bc13b 100644
--- a/mne/io/egi/egi.py
+++ b/mne/io/egi/egi.py
@@ -10,6 +10,7 @@ import warnings
 import numpy as np
 
 from ..base import _BaseRaw, _check_update_montage
+from ..utils import _mult_cal_one
 from ..meas_info import _empty_info
 from ..constants import FIFF
 from ...utils import verbose, logger
@@ -65,79 +66,63 @@ def _read_header(fid):
         info['event_codes'] = np.array(info['event_codes'])
     else:
         raise NotImplementedError('Only continuous files are supported')
-
-    info.update(dict(precision=precision, unsegmented=unsegmented))
-
+    info['unsegmented'] = unsegmented
+    info['dtype'], info['orig_format'] = {2: ('>i2', 'short'),
+                                          4: ('>f4', 'float'),
+                                          6: ('>f8', 'double')}[precision]
+    info['dtype'] = np.dtype(info['dtype'])
     return info
 
 
 def _read_events(fid, info):
     """Read events"""
-    unpack = [info[k] for k in ['n_events', 'n_segments', 'n_channels']]
-    n_events, n_segments, n_channels = unpack
-    n_samples = 1 if info['unsegmented'] else info['n_samples']
-    events = np.zeros([n_events, n_segments * info['n_samples']])
-    dtype, bytesize = {2: ('>i2', 2), 4: ('>f4', 4),
-                       6: ('>f8', 8)}[info['precision']]
-
-    info.update({'dtype': dtype, 'bytesize': bytesize})
-    beg_dat = fid.tell()
-
-    for ii in range(info['n_events']):
-        fid.seek(beg_dat + (int(n_channels) + ii) * bytesize, 0)
-        events[ii] = np.fromfile(fid, dtype, n_samples)
-        fid.seek(int((n_channels + n_events) * bytesize), 1)
-    return events
-
-
-def _read_data(fid, info):
-    """Aux function"""
-    if not info['unsegmented']:
-        raise NotImplementedError('Only continous files are supported')
-
+    events = np.zeros([info['n_events'],
+                       info['n_segments'] * info['n_samples']])
     fid.seek(36 + info['n_events'] * 4, 0)  # skip header
-    readsize = (info['n_channels'] + info['n_events']) * info['n_samples']
-    final_shape = (info['n_samples'], info['n_channels'] + info['n_events'])
-    data = np.fromfile(fid, info['dtype'], readsize).reshape(final_shape).T
-    return data
+    for si in range(info['n_samples']):
+        # skip data channels
+        fid.seek(info['n_channels'] * info['dtype'].itemsize, 1)
+        # read event channels
+        events[:, si] = np.fromfile(fid, info['dtype'], info['n_events'])
+    return events
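
The loop above walks the file one sample frame at a time. Assuming the
unsegmented layout used here (per sample: n_channels data values followed by
n_events event values), the same extraction could be sketched as a single
vectorized read:

    import numpy as np

    def read_event_rows(fid, info):
        # Sketch only, under the same layout assumptions as _read_events.
        n_chan, n_ev = info['n_channels'], info['n_events']
        n_samp = info['n_samples']
        fid.seek(36 + n_ev * 4, 0)  # skip fixed header + event-code table
        frame = np.fromfile(fid, info['dtype'], (n_chan + n_ev) * n_samp)
        frame = frame.reshape(n_samp, n_chan + n_ev)  # one row per sample
        return frame[:, n_chan:].T  # shape (n_events, n_samples)
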
 
 
 def _combine_triggers(data, remapping=None):
     """Combine binary triggers"""
-    new_trigger = np.zeros(data[0].shape)
-    first = np.nonzero(data[0])[0]
-    for d in data[1:]:
-        if np.intersect1d(d.nonzero()[0], first).any():
-            raise RuntimeError('Events must be mutually exclusive')
-
+    new_trigger = np.zeros(data.shape[1])
+    if data.astype(bool).sum(axis=0).max() > 1:  # ensure no overlaps
+        logger.info('    Found multiple events at the same time '
+                    'sample. Cannot create trigger channel.')
+        return
     if remapping is None:
         remapping = np.arange(len(data)) + 1
-
     for d, event_id in zip(data, remapping):
         idx = d.nonzero()
         if np.any(idx):
             new_trigger[idx] += event_id
-
-    return new_trigger[None]
+    return new_trigger
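
A quick worked example of the combination rule, assuming the mutually
exclusive binary event rows that the overlap check above requires:

    import numpy as np

    data = np.array([[1, 0, 0, 0],   # binary channel for event 'a'
                     [0, 0, 1, 0]])  # binary channel for event 'b'
    remapping = np.arange(len(data)) + 1  # 'a' -> 1, 'b' -> 2

    new_trigger = np.zeros(data.shape[1])
    for d, event_id in zip(data, remapping):
        new_trigger[d.nonzero()] += event_id
    print(new_trigger)  # [ 1.  0.  2.  0.]
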
 
 
 @verbose
 def read_raw_egi(input_fname, montage=None, eog=None, misc=None,
-                 include=None, exclude=None, verbose=None):
+                 include=None, exclude=None, preload=None, verbose=None):
     """Read EGI simple binary as raw object
 
-    Note. The trigger channel names are based on the
-    arbitrary user dependent event codes used. However this
-    function will attempt to generate a synthetic trigger channel
-    named ``STI 014`` in accordance with the general Neuromag / MNE
-    naming pattern.
-    The event_id assignment equals np.arange(n_events - n_excluded) + 1.
-    The resulting `event_id` mapping is stored as attribute to
-    the resulting raw object but will be ignored when saving to a fiff.
-    Note. The trigger channel is artificially constructed based on
-    timestamps received by the Netstation. As a consequence, triggers
-    have only short durations.
-    This step will fail if events are not mutually exclusive.
+    .. note:: The trigger channel names are based on the
+              arbitrary, user-dependent event codes used. However, this
+              function will attempt to generate a synthetic trigger channel
+              named ``STI 014`` in accordance with the general
+              Neuromag / MNE naming pattern.
+
+              The event_id assignment equals
+              ``np.arange(n_events - n_excluded) + 1``. The resulting
+              `event_id` mapping is stored as an attribute of the resulting
+              raw object but will be ignored when saving to a fiff.
+              The trigger channel is artificially constructed based
+              on timestamps received by the Netstation; as a consequence,
+              triggers have only short durations.
+
+              If events are not mutually exclusive, the synthetic trigger
+              channel cannot be created and is omitted.
 
     Parameters
     ----------
@@ -162,6 +147,15 @@ def read_raw_egi(input_fname, montage=None, eog=None, misc=None,
        trigger. Defaults to None. If None, channels that have more than
        one event and the ``sync`` and ``TREV`` channels will be
        ignored.
+    preload : bool or str (default False)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory).
+
+        .. versionadded:: 0.11
+
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -174,7 +168,8 @@ def read_raw_egi(input_fname, montage=None, eog=None, misc=None,
     --------
     mne.io.Raw : Documentation of attribute and methods.
     """
-    return RawEGI(input_fname, montage, eog, misc, include, exclude, verbose)
+    return RawEGI(input_fname, montage, eog, misc, include, exclude, preload,
+                  verbose)
 
 
 class RawEGI(_BaseRaw):
@@ -182,8 +177,12 @@ class RawEGI(_BaseRaw):
     """
     @verbose
     def __init__(self, input_fname, montage=None, eog=None, misc=None,
-                 include=None, exclude=None, verbose=None):
-        """docstring for __init__"""
+                 include=None, exclude=None, preload=None, verbose=None):
+        if preload is None:
+            warnings.warn('preload is True by default but will be changed to '
+                          'False in v0.12. Please explicitly set preload.',
+                          DeprecationWarning)
+            preload = True
         if eog is None:
             eog = []
         if misc is None:
@@ -192,22 +191,16 @@ class RawEGI(_BaseRaw):
             logger.info('Reading EGI header from %s...' % input_fname)
             egi_info = _read_header(fid)
             logger.info('    Reading events ...')
-            _read_events(fid, egi_info)  # update info + jump
-            logger.info('    Reading data ...')
-            # reads events as well
-            data = _read_data(fid, egi_info).astype(np.float64)
+            egi_events = _read_events(fid, egi_info)  # read event channels
             if egi_info['value_range'] != 0 and egi_info['bits'] != 0:
                 cal = egi_info['value_range'] / 2 ** egi_info['bits']
             else:
                 cal = 1e-6
-            data[:egi_info['n_channels']] = data[:egi_info['n_channels']] * cal
 
         logger.info('    Assembling measurement info ...')
 
         if egi_info['n_events'] > 0:
             event_codes = list(egi_info['event_codes'])
-            egi_events = data[-egi_info['n_events']:]
-
             if include is None:
                 exclude_list = ['sync', 'TREV'] if exclude is None else exclude
                 exclude_inds = [i for i, k in enumerate(event_codes) if k in
@@ -242,89 +235,73 @@ class RawEGI(_BaseRaw):
                     raise ValueError('`%s` must be None or of type list' % kk)
 
             event_ids = np.arange(len(include_)) + 1
-            try:
-                logger.info('    Synthesizing trigger channel "STI 014" ...')
-                logger.info('    Excluding events {%s} ...' %
-                            ", ".join([k for i, k in enumerate(event_codes)
-                                       if i not in include_]))
-                new_trigger = _combine_triggers(egi_events[include_],
-                                                remapping=event_ids)
-                data = np.concatenate([data, new_trigger])
-            except RuntimeError:
-                logger.info('    Found multiple events at the same time '
-                            'sample. Could not create trigger channel.')
-                new_trigger = None
-
+            logger.info('    Synthesizing trigger channel "STI 014" ...')
+            logger.info('    Excluding events {%s} ...' %
+                        ", ".join([k for i, k in enumerate(event_codes)
+                                   if i not in include_]))
+            self._new_trigger = _combine_triggers(egi_events[include_],
+                                                  remapping=event_ids)
             self.event_id = dict(zip([e for e in event_codes if e in
                                       include_names], event_ids))
         else:
             # No events
             self.event_id = None
-            new_trigger = None
-        info = _empty_info()
-        info['hpi_subsystem'] = None
-        info['events'], info['hpi_results'], info['hpi_meas'] = [], [], []
-        info['sfreq'] = float(egi_info['samp_rate'])
+            self._new_trigger = None
+        info = _empty_info(egi_info['samp_rate'])
+        info['buffer_size_sec'] = 1.  # reasonable default
         info['filename'] = input_fname
         my_time = datetime.datetime(
-            egi_info['year'],
-            egi_info['month'],
-            egi_info['day'],
-            egi_info['hour'],
-            egi_info['minute'],
-            egi_info['second']
-        )
+            egi_info['year'], egi_info['month'], egi_info['day'],
+            egi_info['hour'], egi_info['minute'], egi_info['second'])
         my_timestamp = time.mktime(my_time.timetuple())
         info['meas_date'] = np.array([my_timestamp], dtype=np.float32)
-        info['projs'] = []
         ch_names = ['EEG %03d' % (i + 1) for i in
                     range(egi_info['n_channels'])]
         ch_names.extend(list(egi_info['event_codes']))
-        if new_trigger is not None:
+        if self._new_trigger is not None:
             ch_names.append('STI 014')  # our new_trigger
-        info['nchan'] = nchan = len(data)
-        info['chs'] = []
+        info['nchan'] = nchan = len(ch_names)
         info['ch_names'] = ch_names
-        info['bads'] = []
-        info['comps'] = []
-        info['custom_ref_applied'] = False
         for ii, ch_name in enumerate(ch_names):
-            ch_info = {'cal': cal,
-                       'logno': ii + 1,
-                       'scanno': ii + 1,
-                       'range': 1.0,
-                       'unit_mul': 0,
-                       'ch_name': ch_name,
-                       'unit': FIFF.FIFF_UNIT_V,
-                       'coord_frame': FIFF.FIFFV_COORD_HEAD,
-                       'coil_type': FIFF.FIFFV_COIL_EEG,
-                       'kind': FIFF.FIFFV_EEG_CH,
-                       'loc': np.array([0, 0, 0, 1] * 3, dtype='f4')}
+            ch_info = {
+                'cal': cal, 'logno': ii + 1, 'scanno': ii + 1, 'range': 1.0,
+                'unit_mul': 0, 'ch_name': ch_name, 'unit': FIFF.FIFF_UNIT_V,
+                'coord_frame': FIFF.FIFFV_COORD_HEAD,
+                'coil_type': FIFF.FIFFV_COIL_EEG, 'kind': FIFF.FIFFV_EEG_CH,
+                'loc': np.array([0, 0, 0, 1] * 3, dtype='f4')}
             if ch_name in eog or ii in eog or ii - nchan in eog:
-                ch_info['coil_type'] = FIFF.FIFFV_COIL_NONE
-                ch_info['kind'] = FIFF.FIFFV_EOG_CH
+                ch_info.update(coil_type=FIFF.FIFFV_COIL_NONE,
+                               kind=FIFF.FIFFV_EOG_CH)
             if ch_name in misc or ii in misc or ii - nchan in misc:
-                ch_info['coil_type'] = FIFF.FIFFV_COIL_NONE
-                ch_info['kind'] = FIFF.FIFFV_MISC_CH
-
+                ch_info.update(coil_type=FIFF.FIFFV_COIL_NONE,
+                               kind=FIFF.FIFFV_MISC_CH)
             if len(ch_name) == 4 or ch_name.startswith('STI'):
-                u = {'unit_mul': 0,
-                     'cal': 1,
+                ch_info.update(
+                    {'unit_mul': 0, 'cal': 1, 'kind': FIFF.FIFFV_STIM_CH,
                      'coil_type': FIFF.FIFFV_COIL_NONE,
-                     'unit': FIFF.FIFF_UNIT_NONE,
-                     'kind': FIFF.FIFFV_STIM_CH}
-                ch_info.update(u)
+                     'unit': FIFF.FIFF_UNIT_NONE})
             info['chs'].append(ch_info)
-
         _check_update_montage(info, montage)
-        orig_format = {'>f2': 'single', '>f4': 'double',
-                       '>i2': 'int'}[egi_info['dtype']]
         super(RawEGI, self).__init__(
-            info, data, filenames=[input_fname], orig_format=orig_format,
-            verbose=verbose)
-        logger.info('    Range : %d ... %d =  %9.3f ... %9.3f secs'
-                    % (self.first_samp, self.last_samp,
-                       float(self.first_samp) / self.info['sfreq'],
-                       float(self.last_samp) / self.info['sfreq']))
-        # use information from egi
-        logger.info('Ready.')
+            info, preload, orig_format=egi_info['orig_format'],
+            filenames=[input_fname], last_samps=[egi_info['n_samples'] - 1],
+            raw_extras=[egi_info], verbose=verbose)
+
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
+        """Read a segment of data from a file"""
+        egi_info = self._raw_extras[fi]
+        n_chan_read = egi_info['n_channels'] + egi_info['n_events']
+        data_start = (36 + egi_info['n_events'] * 4 +
+                      start * n_chan_read * egi_info['dtype'].itemsize)
+        n_chan_out = n_chan_read + (1 if self._new_trigger is not None else 0)
+        one = np.empty((n_chan_out, stop - start))
+        with open(self._filenames[fi], 'rb') as fid:
+            fid.seek(data_start, 0)  # skip header
+            final_shape = (stop - start, n_chan_read)
+            one_ = np.fromfile(fid, egi_info['dtype'], np.prod(final_shape))
+            one_.shape = final_shape
+            one[:n_chan_read] = one_.T
+        # append the synthesized trigger channel, if any
+        if self._new_trigger is not None:
+            one[-1] = self._new_trigger[start:stop]
+        _mult_cal_one(data, one, idx, cals, mult)
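
The offset arithmetic in `_read_segment_file` can be sanity-checked on its
own; a small sketch with illustrative numbers (the channel/event counts and
dtype are assumptions, the layout follows the code above):

    import numpy as np

    n_channels, n_events = 256, 4   # illustrative counts
    dtype = np.dtype('>f4')         # 4-byte big-endian float
    start = 1000                    # first sample of the requested segment

    n_chan_read = n_channels + n_events
    header = 36 + n_events * 4      # fixed header + event-code table
    data_start = header + start * n_chan_read * dtype.itemsize
    # Each sample frame holds n_chan_read values, so sample `start`
    # begins data_start bytes into the file: 36 + 16 + 1000 * 260 * 4.
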
diff --git a/mne/io/egi/tests/data/test_egi.txt b/mne/io/egi/tests/data/test_egi.txt
new file mode 100644
index 0000000..039ce16
--- /dev/null
+++ b/mne/io/egi/tests/data/test_egi.txt
@@ -0,0 +1,257 @@
+0.0000	0.0040	0.0080	0.0120	0.0160	0.0200	0.0240	0.0280	0.0320	0.0360	0.0400	0.0440	0.0480	0.0520	0.0560	0.0600	0.0640	0.0680	0.0720	0.0760	0.0800	0.0840	0.0880	0.0920	0.0960	0.1000	0.1040	0.1080	0.1120	0.1160	0.1200	0.1240	0.1280	0.1320	0.1360	0.1400	0.1440	0.1480	0.1520	0.1560	0.1600	0.1640	0.1680	0.1720	0.1760	0.1800	0.1840	0.1880	0.1920	0.1960	0.2000	0.2040	0.2080	0.2120	0.2160	0.2200	0.2240	0.2280	0.2320	0.2360	0.2400	0.2440	0.2480	0.2520	0.2560	0.2600	0.2640	0.2680	0.2720	0.2760	0. [...]
+-14262.1006	-13993.9355	-14057.4209	-14348.1191	-14499.7773	-14278.8652	-14002.6699	-14074.4443	-14363.5654	-14518.6748	-14299.7109	-14015.6279	-14089.3516	-14393.6992	-14532.9512	-14305.7783	-14018.3018	-14086.9072	-14396.1084	-14558.9863	-14331.1709	-14054.4727	-14119.8574	-14414.1943	-14562.3350	-14329.5059	-14054.7188	-14120.3340	-14424.5088	-14578.3438	-14344.0576	-14052.6904	-14125.6689	-14431.7695	-14585.3408	-14355.4746	-14069.3818	-14139.3613	-14429.4570	-14587.8311	-14350.8848	 [...]
+-13067.8711	-12779.6631	-12812.5664	-13094.8633	-13270.5361	-13081.7578	-12788.0479	-12821.2686	-13096.5176	-13276.9805	-13094.4346	-12794.4014	-12832.1768	-13124.5293	-13291.7129	-13099.1162	-12798.7773	-12831.7051	-13134.4443	-13316.4414	-13125.5771	-12834.8721	-12863.5283	-13149.9717	-13315.2539	-13120.8398	-12833.2939	-12865.4756	-13154.0889	-13333.9336	-13136.1406	-12838.3301	-12873.5645	-13163.5176	-13335.3838	-13143.4805	-12845.8105	-12882.0732	-13163.1016	-13340.7686	-13131.3066	 [...]
+-12043.2041	-11769.7988	-11823.8662	-12101.8506	-12257.5547	-12049.7959	-11777.5830	-11835.7725	-12111.7373	-12264.9219	-12063.1309	-11784.8887	-11843.7207	-12128.7891	-12272.8740	-12062.8301	-11784.7725	-11842.7832	-12134.7637	-12295.6045	-12084.4727	-11809.0400	-11863.9824	-12141.2764	-12288.5176	-12076.6250	-11806.0195	-11859.4014	-12151.2354	-12303.8467	-12086.3340	-11806.9053	-11862.4229	-12147.0664	-12299.6748	-12090.9023	-11814.1299	-11870.7725	-12149.5127	-12305.3896	-12088.9258	 [...]
+-9939.3350	-9693.2236	-9780.7168	-10049.7393	-10170.7197	-9939.8242	-9693.5557	-9786.0107	-10053.3525	-10171.5176	-9946.3779	-9700.0439	-9788.5596	-10063.8584	-10177.1553	-9949.4590	-9701.9570	-9792.2598	-10074.6865	-10195.0898	-9962.2852	-9715.7158	-9802.8926	-10074.0996	-10185.2549	-9955.4160	-9712.9160	-9799.8096	-10082.9453	-10199.5400	-9960.6230	-9715.8477	-9801.6250	-10073.2979	-10192.1885	-9960.4131	-9715.7861	-9807.9150	-10077.2461	-10194.9033	-9961.4590	-9713.4414	-9802.3320	-10 [...]
+-4148.1167	-3918.5317	-3993.2751	-4236.5469	-4350.4795	-4145.9341	-3917.9387	-3996.9487	-4241.5640	-4350.6650	-4152.6440	-3924.1230	-4001.0232	-4246.0879	-4355.0039	-4155.4751	-3925.4902	-4002.1206	-4254.4419	-4366.7725	-4162.8711	-3932.7695	-4009.6487	-4253.3682	-4359.0552	-4156.9146	-3932.2422	-4006.5002	-4262.1274	-4371.9639	-4162.5723	-3937.5320	-4008.3679	-4252.5879	-4364.7529	-4159.8340	-3932.7871	-4012.8411	-4258.6973	-4367.9575	-4160.7632	-3933.5305	-4012.2458	-4255.3916	-4364.95 [...]
+204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	204439.4219	 [...]
+-3354.4636	-3167.1958	-3224.3621	-3416.0168	-3510.2139	-3349.6655	-3164.0708	-3225.5095	-3419.3494	-3506.7886	-3353.6389	-3172.6238	-3228.5823	-3421.9036	-3511.3398	-3357.3901	-3172.1372	-3230.8154	-3427.7749	-3516.9834	-3359.4153	-3172.7908	-3231.6001	-3424.8884	-3510.9319	-3355.9082	-3174.2339	-3231.0054	-3432.3354	-3520.4856	-3360.5615	-3179.0754	-3231.9253	-3422.7815	-3513.3823	-3357.0332	-3171.2302	-3236.5872	-3430.1882	-3516.0959	-3359.6951	-3175.1035	-3238.4822	-3427.5488	-3517.47 [...]
+204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	204478.1250	 [...]
+3116.9377	3073.5315	3090.3916	3158.3435	3186.7112	3118.6072	3074.5015	3091.6033	3156.6599	3187.5100	3116.7000	3071.7705	3089.7810	3157.0525	3185.0471	3116.6013	3072.6995	3089.3604	3156.1040	3184.1499	3115.8608	3072.2002	3090.1821	3156.2632	3184.8735	3113.6946	3070.4241	3088.7996	3154.3752	3183.2129	3112.3235	3070.1809	3087.7769	3155.5366	3182.1433	3111.8623	3067.4102	3085.5227	3151.6572	3180.9246	3110.1091	3065.4790	3084.5229	3152.7542	3181.1489	3111.4854	3067.4553	3083.5222	3151.7395	31 [...]
+-12826.1484	-12560.0840	-12636.3838	-12932.6631	-13076.3193	-12840.0605	-12568.9619	-12658.2158	-12952.1416	-13103.0342	-12867.1074	-12584.2939	-12674.7393	-12985.6094	-13115.0000	-12874.5820	-12593.7754	-12681.2051	-12999.0117	-13153.7139	-12911.0947	-12635.8770	-12721.0791	-13018.8701	-13159.7773	-12911.7119	-12640.3633	-12722.0957	-13032.5830	-13181.0293	-12929.7139	-12643.4121	-12728.6807	-13039.8496	-13186.4980	-12938.0459	-12657.5645	-12743.5303	-13040.7197	-13193.5439	-12937.4355	 [...]
+-16180.3789	-15921.3545	-16008.5029	-16299.4131	-16433.5000	-16186.9453	-15925.9268	-16026.1416	-16310.9727	-16447.4980	-16204.1699	-15936.1230	-16037.1943	-16335.9316	-16453.8477	-16207.1563	-15936.7441	-16035.5762	-16343.5059	-16483.0176	-16233.8076	-15969.2715	-16062.4863	-16351.2031	-16480.4512	-16229.5703	-15967.9102	-16060.6563	-16365.3867	-16498.0137	-16239.2568	-15968.4053	-16063.3047	-16366.1982	-16498.0449	-16245.2256	-15979.9961	-16075.7783	-16367.1963	-16505.3340	-16246.5703	 [...]
+-11567.9170	-11291.7998	-11362.7656	-11651.2246	-11801.8760	-11570.0859	-11298.2783	-11372.1973	-11660.5479	-11805.8652	-11581.7910	-11303.0518	-11378.6816	-11672.4326	-11811.0469	-11584.7764	-11305.9922	-11379.9609	-11685.6445	-11832.5391	-11605.0400	-11328.4434	-11398.7461	-11685.2920	-11824.4922	-11596.6318	-11325.9502	-11390.4521	-11699.0273	-11843.7861	-11603.3623	-11327.9463	-11397.0313	-11688.3936	-11837.7568	-11604.7959	-11328.3594	-11403.4912	-11692.7783	-11836.7998	-11606.1357	 [...]
+-6358.0713	-6091.8379	-6178.3379	-6460.8359	-6599.4702	-6354.8228	-6091.7012	-6181.0679	-6465.3096	-6594.1162	-6365.1885	-6105.0762	-6187.9502	-6471.3569	-6602.0557	-6370.8960	-6105.1777	-6187.8257	-6483.8315	-6616.2778	-6378.3130	-6111.5830	-6199.3354	-6483.5112	-6606.5098	-6373.6802	-6112.4668	-6192.9521	-6496.4282	-6622.8066	-6377.5718	-6121.7661	-6198.9512	-6479.4077	-6615.4185	-6377.6533	-6115.8813	-6207.4893	-6495.2939	-6621.9063	-6386.3789	-6120.6060	-6210.0308	-6491.2822	-6623.38 [...]
+-194.5330	19.8748	-58.8307	-290.1855	-393.6491	-189.8461	22.6635	-61.4236	-295.7708	-391.3524	-195.9999	12.9349	-65.1472	-298.2761	-396.7082	-199.5598	12.2660	-67.9403	-304.7816	-404.5864	-204.3642	10.6782	-71.0473	-304.3150	-397.9807	-200.5101	9.2634	-69.2245	-313.1916	-407.8352	-203.0276	2.9759	-71.7023	-299.9834	-401.3818	-202.8124	12.2722	-79.1371	-308.9559	-404.1696	-205.4163	7.3073	-77.3400	-306.1469	-406.1680	-201.0361	11.0093	-78.3932	-305.3652	-406.0227	-199.4828	14.0004	-73.814 [...]
+-3570.4736	-3341.5474	-3409.9011	-3644.5017	-3760.4011	-3558.7935	-3330.1982	-3403.6921	-3645.0657	-3752.0352	-3559.3770	-3338.1663	-3405.8523	-3643.0337	-3754.7749	-3560.0674	-3331.5896	-3402.5356	-3644.4653	-3753.9080	-3558.7639	-3325.4971	-3396.2664	-3635.6133	-3741.4846	-3551.6409	-3323.9829	-3394.3779	-3645.3381	-3751.3459	-3551.2168	-3329.2188	-3392.2881	-3626.2629	-3738.8525	-3545.7317	-3316.1436	-3398.2715	-3635.7532	-3740.7788	-3544.0618	-3320.8308	-3396.9741	-3631.9031	-3740.22 [...]
+-4765.6416	-4546.9746	-4594.1865	-4809.7573	-4928.7173	-4761.0747	-4540.9663	-4593.5869	-4816.4248	-4924.1914	-4764.8813	-4551.8022	-4598.3618	-4817.3452	-4932.4146	-4769.4473	-4550.1797	-4600.2397	-4820.8521	-4932.4507	-4768.7407	-4548.0864	-4598.7788	-4818.6108	-4929.7764	-4769.8071	-4552.6470	-4602.1509	-4827.7925	-4939.0098	-4771.2920	-4557.6743	-4602.8066	-4815.5869	-4930.3213	-4766.9590	-4547.7109	-4610.2441	-4825.9795	-4932.6548	-4769.5391	-4555.1143	-4610.1958	-4823.9863	-4934.63 [...]
+-4109.2388	-4005.9973	-4024.7649	-4116.5918	-4168.7920	-4106.7417	-4002.1602	-4024.0962	-4119.1743	-4170.4590	-4108.2412	-4008.3198	-4026.0820	-4119.6777	-4171.5498	-4109.4468	-4003.6516	-4024.6887	-4117.6685	-4170.1162	-4108.5454	-4002.8230	-4021.7578	-4115.6426	-4169.0083	-4110.0483	-4005.9744	-4022.9092	-4118.9033	-4170.5195	-4108.4385	-4005.6169	-4024.0415	-4115.4434	-4169.9448	-4108.5972	-4005.9192	-4028.2051	-4119.7183	-4169.7700	-4108.3960	-4008.7454	-4029.7000	-4121.1128	-4169.41 [...]
+-11942.4268	-11676.7813	-11756.2998	-12057.1240	-12198.0244	-11950.1221	-11679.8955	-11776.5723	-12074.7305	-12225.4434	-11978.8145	-11691.8418	-11791.3223	-12105.2461	-12227.4580	-11978.7012	-11691.2119	-11789.9092	-12113.5049	-12265.5859	-12010.9854	-11732.4951	-11821.6973	-12123.0273	-12262.3115	-12006.5967	-11730.1553	-11818.4844	-12137.0049	-12281.1709	-12023.7637	-11736.4082	-11824.7188	-12145.2178	-12287.1221	-12027.2861	-11746.8955	-11838.7803	-12138.7402	-12294.4736	-12026.7148	 [...]
+-12098.0781	-11820.9365	-11897.7734	-12195.1582	-12343.3379	-12093.1133	-11824.2930	-11916.5049	-12210.2197	-12354.1143	-12111.4248	-11833.3750	-11926.8105	-12228.3701	-12353.2031	-12115.7197	-11831.1436	-11920.9443	-12235.6279	-12378.6172	-12138.1631	-11863.0205	-11949.0918	-12237.4795	-12378.1211	-12135.5625	-11857.8857	-11933.1748	-12249.3857	-12390.1230	-12136.6689	-11860.0596	-11943.4971	-12248.2363	-12392.3242	-12142.1670	-11870.5439	-11952.0010	-12245.8848	-12392.2305	-12147.5859	 [...]
+-6266.3066	-5982.4731	-6078.3657	-6367.2354	-6523.0122	-6260.7832	-5992.2959	-6087.8809	-6387.4131	-6513.1479	-6273.2466	-6003.7861	-6086.8013	-6376.3921	-6517.8696	-6282.0630	-6013.7280	-6095.9321	-6409.4912	-6535.9238	-6294.1182	-6017.4038	-6105.7061	-6405.4229	-6525.7271	-6295.4746	-6021.7280	-6089.9219	-6419.2026	-6547.7285	-6285.3438	-6030.4375	-6110.4521	-6398.0269	-6538.4731	-6292.1616	-6021.3408	-6110.4194	-6411.9873	-6538.6294	-6297.1924	-6015.4937	-6113.6802	-6404.5732	-6541.22 [...]
+-5173.8931	-4926.7573	-5012.7461	-5282.6548	-5420.6973	-5170.2124	-4918.3711	-5015.0059	-5305.1284	-5407.5439	-5177.4951	-4935.1909	-5018.6157	-5295.4282	-5418.9150	-5191.7324	-4939.2090	-5024.9424	-5311.2798	-5428.0835	-5189.5566	-4934.0449	-5032.5923	-5310.9033	-5416.9570	-5193.5469	-4939.0654	-5030.0195	-5320.4253	-5428.2261	-5185.8569	-4954.4092	-5034.7783	-5298.5088	-5419.2866	-5190.9863	-4932.6123	-5049.4380	-5313.5947	-5419.5674	-5190.3438	-4941.3799	-5046.4185	-5304.7090	-5427.91 [...]
+-6520.0288	-6264.6387	-6346.4214	-6624.4424	-6762.9663	-6516.5444	-6254.8306	-6348.9922	-6644.2637	-6754.0420	-6522.7798	-6273.9990	-6354.2656	-6637.0532	-6764.3599	-6536.8486	-6273.9238	-6360.3672	-6650.2163	-6769.5352	-6535.8115	-6269.6069	-6362.8877	-6644.4502	-6767.2056	-6536.4941	-6272.5820	-6361.3096	-6655.0654	-6772.6704	-6530.6504	-6284.8203	-6366.7974	-6635.7817	-6765.1143	-6530.9663	-6267.2695	-6379.8101	-6650.9326	-6763.0503	-6532.2944	-6278.0537	-6374.4331	-6646.8936	-6769.58 [...]
+-6726.9829	-6490.1235	-6548.6777	-6791.5889	-6913.6787	-6727.5430	-6478.1250	-6552.0527	-6796.6064	-6921.3198	-6727.3872	-6495.0337	-6555.3687	-6804.4976	-6920.8892	-6728.9019	-6487.1152	-6564.0229	-6802.5454	-6921.9780	-6737.5171	-6483.6069	-6550.5063	-6798.6821	-6918.7451	-6732.5615	-6490.2769	-6553.8999	-6806.4038	-6922.2437	-6730.4673	-6493.0059	-6557.2935	-6795.0107	-6925.8813	-6724.0215	-6487.6738	-6565.6538	-6808.4312	-6917.4551	-6729.2920	-6495.6367	-6567.5176	-6810.3745	-6916.07 [...]
+-8651.2988	-8403.8330	-8430.4277	-8665.9111	-8804.5430	-8642.8018	-8394.0996	-8434.9336	-8672.3926	-8810.9658	-8649.2402	-8403.4365	-8436.5215	-8673.9150	-8810.0781	-8646.8584	-8393.5020	-8435.6328	-8669.9053	-8808.5195	-8647.9014	-8391.6885	-8428.9102	-8664.2881	-8805.7529	-8652.3037	-8397.4893	-8430.2441	-8670.2383	-8805.5488	-8647.7266	-8399.0713	-8434.2021	-8662.6563	-8810.3379	-8644.8545	-8400.2314	-8441.4492	-8675.7051	-8804.2754	-8648.2393	-8403.1260	-8441.8184	-8677.0752	-8805.77 [...]
+-13120.4727	-12881.2178	-12957.3184	-13285.1533	-13386.4834	-13121.9336	-12865.5684	-12994.1689	-13284.6543	-13415.5859	-13154.1807	-12866.2813	-13008.0508	-13310.0225	-13400.2861	-13163.2139	-12854.7754	-12992.4922	-13297.2148	-13432.9834	-13175.0859	-12907.3594	-12996.3984	-13310.1055	-13437.6035	-13179.3525	-12896.9111	-13010.9775	-13324.8555	-13451.0586	-13162.3232	-12895.2314	-12994.9385	-13332.7432	-13448.7109	-13192.8896	-12926.9512	-13017.2324	-13303.3799	-13457.6777	-13186.4492	 [...]
+-9870.7900	-9606.4795	-9711.8623	-9984.0410	-10141.6768	-9865.7754	-9610.3184	-9733.5176	-10017.6631	-10149.1787	-9889.2090	-9621.3389	-9736.4658	-10017.0967	-10155.9541	-9866.6748	-9619.5801	-9715.8447	-10033.9961	-10158.3701	-9910.4414	-9652.6611	-9742.3633	-10033.8984	-10204.1006	-9925.7314	-9653.4922	-9715.2158	-10039.4824	-10179.9199	-9890.6260	-9633.2041	-9740.8750	-10026.9795	-10176.1973	-9902.9883	-9669.8623	-9732.4990	-10028.2422	-10152.5654	-9921.5430	-9646.9121	-9716.1338	-100 [...]
+-9244.3916	-8999.0049	-9085.6758	-9364.3965	-9510.4941	-9239.7695	-8987.6240	-9083.6182	-9394.6250	-9488.2070	-9250.4697	-9005.3184	-9090.0283	-9376.1416	-9503.4785	-9268.9922	-9012.3311	-9096.8789	-9398.1934	-9506.9072	-9266.0459	-9000.4961	-9105.4082	-9392.3867	-9502.8145	-9272.0928	-9007.2324	-9104.8418	-9397.3623	-9512.3086	-9258.3760	-9023.3604	-9112.1035	-9376.8857	-9497.5811	-9263.3096	-9001.5967	-9126.1318	-9392.7217	-9500.6563	-9261.9971	-9012.2021	-9120.9658	-9384.7881	-9508.20 [...]
+-11280.5352	-11030.1621	-11112.2715	-11392.3691	-11526.9033	-11274.3926	-11014.2314	-11123.7021	-11418.2275	-11521.2539	-11280.9697	-11034.1572	-11123.9502	-11412.8535	-11527.2305	-11296.0801	-11026.1504	-11126.9746	-11416.9170	-11557.4473	-11288.3877	-11024.6455	-11125.7256	-11410.1025	-11533.7168	-11292.4365	-11031.2588	-11128.2295	-11421.1787	-11532.7852	-11291.5117	-11048.8135	-11130.0615	-11403.9629	-11550.6797	-11276.7627	-11026.6475	-11153.8818	-11418.3779	-11522.1748	-11289.4355	 [...]
+-12976.5537	-12724.5313	-12803.2637	-13069.8789	-13192.8936	-12977.0332	-12709.5869	-12805.7295	-13074.6895	-13204.9932	-12976.4775	-12729.1289	-12809.2031	-13087.0693	-13200.9189	-12974.8896	-12719.5645	-12824.4922	-13079.7959	-13204.5186	-12989.2754	-12715.5049	-12804.2588	-13075.7852	-13203.3896	-12980.9434	-12723.2402	-12805.6475	-13086.2813	-13202.7139	-12976.9453	-12723.5977	-12810.4746	-13075.5342	-13214.3633	-12968.1094	-12720.3584	-12821.2744	-13089.7607	-13198.4492	-12978.6875	 [...]
+-12066.0635	-11827.4629	-11895.5518	-12145.7617	-12263.4707	-12057.0879	-11813.9219	-11897.2773	-12151.3906	-12272.1260	-12065.2822	-11826.4434	-11894.4521	-12151.7764	-12267.1016	-12059.6533	-11813.9951	-11897.2363	-12146.7451	-12264.6768	-12061.4717	-11811.6338	-11888.3340	-12139.5273	-12265.3467	-12065.9844	-11814.6426	-11885.8633	-12141.8340	-12259.4170	-12055.1152	-11814.4023	-11890.1943	-12137.8125	-12265.3428	-12055.7344	-11818.9629	-11897.2480	-12152.0801	-12261.4512	-12057.9072	 [...]
+-10331.9795	-10106.2939	-10190.7598	-10481.8125	-10596.4248	-10323.1240	-10151.1123	-10221.5840	-10527.4189	-10650.2725	-10359.5107	-10091.9600	-10227.2656	-10544.5430	-10608.2676	-10338.2002	-10077.3311	-10249.2588	-10556.9521	-10681.6572	-10379.7930	-10123.3213	-10263.8936	-10546.6914	-10661.4023	-10358.2061	-10068.8330	-10256.6348	-10554.1592	-10681.6592	-10385.9834	-10127.2500	-10260.3457	-10566.4883	-10681.1064	-10392.5254	-10076.8418	-10248.6035	-10576.0605	-10673.9912	-10360.9766	 [...]
+-13539.3467	-13287.5645	-13391.5332	-13721.0830	-13835.5850	-13581.0801	-13308.1182	-13494.2813	-13744.6670	-13848.7178	-13585.5703	-13312.9971	-13477.8545	-13769.7080	-13823.2031	-13591.0488	-13328.5664	-13445.4180	-13732.7031	-13863.2393	-13612.2246	-13342.9561	-13418.5098	-13757.0322	-13885.1846	-13640.2949	-13350.8174	-13440.3330	-13764.2412	-13883.7227	-13569.9473	-13331.5371	-13423.6689	-13791.5068	-13900.5342	-13648.7559	-13413.7002	-13452.4561	-13737.8848	-13859.5176	-13668.9238	 [...]
+-15973.2900	-15721.7295	-15821.2012	-16105.5000	-16236.5391	-15980.7490	-15717.1035	-15834.1787	-16150.0273	-16243.7080	-15979.5352	-15731.1846	-15826.0557	-16126.6025	-16240.5000	-15983.2412	-15745.3359	-15827.4521	-16141.7021	-16251.2285	-16006.2197	-15741.2617	-15846.4209	-16127.6689	-16257.6631	-16032.6787	-15733.3662	-15831.8916	-16137.0586	-16262.1143	-15982.9736	-15740.0752	-15856.5654	-16127.0928	-16248.0771	-15990.9473	-15753.5342	-15850.9121	-16146.9053	-16252.3623	-15993.7773	 [...]
+-13134.7178	-12886.6699	-12972.3613	-13252.8086	-13381.8779	-13132.1436	-12875.9121	-12985.1113	-13283.6953	-13380.5967	-13141.4805	-12889.1904	-12979.3213	-13267.9775	-13380.0791	-13145.4375	-12885.3770	-12984.4658	-13269.9541	-13402.6123	-13148.7910	-12881.8848	-12986.1855	-13261.9707	-13386.0449	-13156.5098	-12880.5527	-12980.0830	-13268.2910	-13387.9805	-13140.2451	-12895.8105	-12990.8252	-13261.3135	-13395.6064	-13135.8623	-12887.7764	-13002.5371	-13276.5488	-13378.2598	-13143.0137	 [...]
+-12938.2334	-12697.3789	-12775.6689	-13040.3643	-13159.1182	-12931.2119	-12686.6973	-12784.4688	-13053.0713	-13167.0859	-12941.0361	-12700.3066	-12780.8516	-13051.7236	-13162.5107	-12939.4141	-12688.2969	-12787.4580	-13046.8789	-13171.5684	-12942.3418	-12685.9932	-12778.8613	-13042.3057	-13164.3145	-12946.3857	-12689.3203	-12776.0537	-13046.6846	-13160.6182	-12933.8193	-12693.1953	-12782.4023	-13041.0752	-13172.5820	-12931.0576	-12692.8438	-12792.1807	-13055.1855	-13159.4854	-12939.8379	 [...]
+-13668.2402	-13429.4150	-13503.2920	-13759.0645	-13876.2607	-13658.8516	-13417.7500	-13507.7207	-13767.0166	-13884.2539	-13667.9844	-13428.9902	-13502.9160	-13766.4355	-13879.5137	-13663.4512	-13416.8076	-13509.3496	-13759.2715	-13879.3750	-13666.5527	-13414.8486	-13498.4766	-13755.2168	-13878.8584	-13671.1123	-13417.0293	-13496.4746	-13757.6572	-13872.2617	-13659.3018	-13419.0098	-13502.4131	-13754.3828	-13881.9121	-13660.1504	-13424.2764	-13509.3809	-13767.9336	-13874.9590	-13664.4287	 [...]
+-14219.1943	-13966.1680	-14054.4355	-14367.3057	-14488.5215	-14196.8213	-13960.9658	-14092.5586	-14386.7637	-14505.4668	-14244.6299	-13983.5977	-14067.7979	-14356.7676	-14475.4600	-14234.3428	-13966.3828	-14080.4834	-14392.4697	-14534.7598	-14281.5449	-13969.4424	-14073.8506	-14357.4590	-14495.8750	-14256.4316	-13956.3555	-14079.8965	-14409.4766	-14512.0596	-14237.0547	-13975.0264	-14079.0059	-14374.3398	-14511.4063	-14218.9854	-13979.9756	-14102.0498	-14422.3965	-14498.5703	-14235.9189	 [...]
+-15117.5303	-14866.9932	-14954.4736	-15256.9063	-15379.5859	-15111.6357	-14858.9229	-14982.7041	-15279.8135	-15386.6094	-15130.5479	-14872.7715	-14960.4922	-15258.3301	-15373.3760	-15120.8281	-14870.9199	-14968.7021	-15267.7617	-15402.9902	-15150.2607	-14860.9170	-14969.1494	-15253.2285	-15385.3545	-15144.4053	-14852.1934	-14963.9111	-15271.2881	-15390.2773	-15119.0273	-14871.0908	-14973.3994	-15258.1797	-15393.0566	-15117.4502	-14867.5518	-14981.4746	-15290.3691	-15382.0518	-15124.8828	 [...]
+-15132.1211	-14884.6182	-14968.8760	-15250.7275	-15372.9727	-15123.4053	-14874.9443	-14981.8994	-15262.0752	-15376.6348	-15133.4297	-14881.4932	-14971.6543	-15251.6143	-15366.6436	-15125.6758	-14870.8213	-14976.6621	-15246.2256	-15372.3613	-15133.2988	-14871.1318	-14968.1748	-15243.3350	-15372.0820	-15140.7510	-14864.2988	-14958.1250	-15241.2861	-15359.9619	-15117.7432	-14872.4922	-14968.7549	-15240.5371	-15372.7734	-15119.9385	-14871.8896	-14972.5635	-15253.5420	-15363.2061	-15125.8896	 [...]
+-11965.8242	-11728.3916	-11801.5596	-12056.4766	-12173.3994	-11953.0889	-11715.5811	-11803.1113	-12060.9277	-12177.8975	-11962.7656	-11723.7119	-11794.4268	-12057.9658	-12171.9014	-11957.3486	-11712.0742	-11797.9805	-12048.8057	-12170.6035	-11957.1416	-11709.8115	-11792.1895	-12044.7891	-12171.0781	-11966.4023	-11709.1094	-11784.5928	-12042.0781	-12159.6660	-11945.4561	-11708.6084	-11791.3857	-12041.5352	-12170.2451	-11953.6338	-11718.0889	-11794.1992	-12054.5469	-12164.0117	-11955.3125	 [...]
+-10744.4219	-10509.4482	-10575.0674	-10820.4248	-10936.9766	-10733.0332	-10496.5898	-10576.0654	-10823.7285	-10942.8545	-10740.9502	-10504.9551	-10568.7178	-10820.5156	-10935.5283	-10734.9443	-10492.4932	-10570.3281	-10811.8242	-10933.9824	-10734.7188	-10490.9912	-10564.7783	-10807.3877	-10935.6113	-10743.3926	-10490.8369	-10557.2178	-10806.0703	-10923.4785	-10723.7744	-10489.2441	-10563.7891	-10804.9180	-10933.8037	-10731.5225	-10499.5527	-10567.7754	-10818.0078	-10928.0244	-10733.5615	 [...]
+-11180.8027	-10947.6943	-11008.4170	-11247.8984	-11362.6084	-11168.0889	-10935.4238	-11007.6631	-11250.1787	-11368.4912	-11175.5605	-10943.8691	-11002.5391	-11247.7344	-11363.4980	-11170.4287	-10931.0049	-11002.7070	-11239.7754	-11360.1904	-11169.0684	-10929.5264	-10997.1797	-11234.5068	-11362.3193	-11177.8691	-10932.3818	-10993.2080	-11233.4258	-11352.2314	-11161.9023	-10928.3652	-10998.6387	-11233.8535	-11360.1621	-11169.6230	-10940.0986	-11003.4346	-11245.5645	-11357.3623	-11168.9824	 [...]
+-3701.3481	-3503.0161	-3538.2666	-3730.5115	-3836.8921	-3691.9841	-3491.7942	-3539.3872	-3736.5032	-3842.4478	-3698.0674	-3500.4258	-3536.2952	-3733.3760	-3838.3591	-3694.9541	-3488.8438	-3536.7522	-3727.7036	-3836.1079	-3695.4160	-3488.5764	-3531.9092	-3724.5286	-3837.5203	-3700.6770	-3491.4785	-3528.2585	-3723.6687	-3830.7651	-3687.4919	-3489.5728	-3534.3853	-3723.2554	-3838.1226	-3693.0364	-3497.6389	-3538.2881	-3733.8262	-3832.3201	-3695.1089	-3499.8411	-3542.0920	-3734.3542	-3833.82 [...]
+-668.4680	-612.5321	-599.4512	-629.5359	-665.9693	-664.8243	-606.7495	-598.6161	-632.4637	-666.4552	-668.0946	-611.4942	-598.9023	-630.7455	-666.0767	-666.4165	-605.8336	-597.7109	-628.3378	-664.1516	-665.8702	-606.1737	-595.0090	-628.3598	-665.2283	-670.4786	-609.9055	-596.7603	-630.1173	-664.2892	-666.8846	-608.7969	-597.8969	-626.6461	-665.8951	-667.6832	-612.3323	-600.1740	-633.1940	-664.2787	-669.5220	-614.0844	-603.8319	-633.8776	-664.5173	-666.8577	-607.1336	-599.3957	-631.1494	-6 [...]
+-3624.7988	-3829.7332	-3723.2932	-3458.1978	-3361.8884	-3622.2888	-3825.5347	-3722.4097	-3458.3936	-3360.9343	-3623.4377	-3827.5452	-3723.1184	-3457.2144	-3362.8796	-3624.2275	-3825.9360	-3722.3860	-3455.7395	-3363.1013	-3625.6021	-3827.5679	-3720.5469	-3454.6794	-3361.9331	-3628.3679	-3828.4814	-3720.0151	-3455.4763	-3360.1826	-3626.1289	-3825.6008	-3722.0696	-3453.9348	-3364.3218	-3628.2190	-3832.9111	-3722.2822	-3456.2493	-3361.2881	-3627.8135	-3833.2219	-3725.1885	-3456.3662	-3362.34 [...]
+-7568.6440	-7326.3174	-7424.2358	-7728.9170	-7855.2520	-7557.3047	-7308.4990	-7440.7837	-7732.6431	-7840.1514	-7563.0767	-7306.2964	-7403.1885	-7708.0400	-7840.7969	-7566.6206	-7283.9028	-7410.9795	-7708.3301	-7833.6201	-7569.2632	-7286.9351	-7401.8218	-7699.9473	-7837.5752	-7574.1572	-7273.1763	-7391.7397	-7692.0737	-7804.0913	-7532.3564	-7288.7236	-7399.9639	-7691.1504	-7820.0220	-7544.4331	-7290.8438	-7401.9307	-7711.2959	-7809.4707	-7540.2373	-7292.2422	-7418.8848	-7710.8594	-7811.44 [...]
+-17765.6641	-17513.2148	-17603.7305	-17901.4043	-18029.4727	-17750.0234	-17505.2207	-17622.8984	-17903.3984	-18015.1152	-17754.6660	-17497.0059	-17586.7090	-17890.6016	-18008.3496	-17749.9863	-17482.5469	-17601.8105	-17884.4805	-18002.5059	-17747.1250	-17489.1484	-17596.7520	-17879.2754	-18006.0156	-17760.3984	-17472.6914	-17575.2656	-17866.4258	-17973.5664	-17715.6230	-17486.2461	-17591.1465	-17870.3262	-18002.3164	-17738.6445	-17489.9199	-17585.3789	-17879.0430	-17996.2012	-17739.2402	 [...]
+-15713.5791	-15465.1504	-15541.7119	-15808.4180	-15929.6836	-15700.0850	-15449.5361	-15540.3320	-15810.3545	-15932.3564	-15706.8428	-15455.9570	-15528.3711	-15805.0000	-15924.8174	-15701.4678	-15443.6338	-15526.8584	-15794.4785	-15922.4844	-15700.2158	-15442.5908	-15525.1846	-15790.5039	-15924.0947	-15710.7129	-15438.3145	-15514.9033	-15783.0049	-15908.3184	-15682.8125	-15436.0410	-15522.1045	-15784.5293	-15922.4141	-15697.4004	-15449.4688	-15522.9404	-15797.8135	-15915.0752	-15696.7764	 [...]
+-9899.8408	-9641.6270	-9706.1279	-9972.2148	-10102.3438	-9886.6924	-9625.4121	-9703.0439	-9973.8135	-10109.0928	-9893.9326	-9634.7100	-9693.9014	-9970.2988	-10100.6553	-9887.9180	-9618.6240	-9689.4814	-9958.0703	-10096.8604	-9887.5430	-9620.1758	-9688.8789	-9954.1553	-10098.8711	-9897.6240	-9617.4336	-9677.1992	-9947.1563	-10083.0781	-9870.1055	-9612.5625	-9686.2021	-9949.0791	-10097.8281	-9885.6816	-9628.5791	-9688.4014	-9961.9902	-10090.4131	-9885.5010	-9630.2666	-9699.9375	-9965.0977	 [...]
+-9282.7285	-9033.0029	-9093.3594	-9344.2959	-9474.5342	-9269.1240	-9018.6123	-9089.5264	-9347.7354	-9478.0527	-9277.5840	-9028.0479	-9082.3213	-9344.1992	-9470.6670	-9272.2686	-9014.5664	-9082.1992	-9333.9551	-9468.9512	-9272.2930	-9015.4072	-9078.9346	-9330.0127	-9471.6738	-9282.3359	-9013.3740	-9070.2510	-9324.6279	-9456.3955	-9258.0107	-9008.2881	-9076.7363	-9326.8359	-9469.1758	-9270.0635	-9024.9375	-9079.0781	-9339.1279	-9462.8301	-9270.0430	-9024.7949	-9089.6514	-9341.6514	-9465.26 [...]
+-5622.4854	-5394.8394	-5434.4648	-5654.6406	-5777.6040	-5609.1147	-5382.4751	-5431.9282	-5656.7119	-5781.3340	-5618.5991	-5390.4248	-5428.2085	-5655.7910	-5775.8364	-5612.9238	-5378.5117	-5427.7896	-5647.0044	-5773.1396	-5611.3296	-5377.9463	-5424.9727	-5643.0566	-5776.2720	-5620.0342	-5379.6953	-5419.1514	-5641.1978	-5765.2070	-5603.2837	-5374.4282	-5422.7910	-5642.6919	-5773.7842	-5612.6851	-5388.2031	-5427.3247	-5652.2114	-5768.7881	-5612.5972	-5387.4395	-5432.1094	-5653.3550	-5771.95 [...]
+-1780.1487	-1638.1166	-1648.0753	-1771.7561	-1851.7579	-1771.7584	-1629.9924	-1647.6387	-1773.5889	-1853.6255	-1777.6467	-1635.7307	-1645.4404	-1772.0740	-1851.9956	-1773.9613	-1627.2731	-1644.2314	-1767.0518	-1850.1920	-1773.7258	-1628.7635	-1642.8447	-1764.7869	-1852.0688	-1782.2559	-1630.4862	-1639.9730	-1764.5427	-1846.2341	-1770.7683	-1627.0214	-1642.4119	-1765.4697	-1852.0718	-1775.2102	-1635.7791	-1645.6259	-1772.4521	-1848.6860	-1776.9149	-1635.2350	-1648.6825	-1773.0387	-1848.60 [...]
+-908.3983	-1009.2620	-919.6689	-760.3754	-724.0175	-905.1149	-1003.3100	-915.7413	-760.1539	-721.5405	-906.2695	-1006.4770	-917.6286	-758.8837	-724.1296	-907.1485	-1002.5775	-916.3583	-755.8177	-723.1880	-906.5715	-1004.7623	-914.4067	-754.4775	-724.1245	-910.0113	-1004.1572	-914.5853	-755.1796	-720.5849	-904.7685	-1001.4862	-914.0995	-754.2053	-724.6791	-909.2710	-1009.4489	-914.6528	-758.7043	-722.1343	-910.3400	-1010.7351	-920.6573	-757.8268	-722.8627	-908.4315	-1003.3211	-917.0448	-7 [...]
+912.5317	1124.8234	996.3629	748.6647	651.1708	920.8968	1176.3827	1055.0702	771.7217	649.2142	928.8132	1182.7710	1060.1250	754.2014	666.9744	946.8862	1210.1534	1099.0942	801.2765	652.0223	920.6572	1211.9893	1099.1876	802.3464	669.4510	932.6447	1210.4158	1081.8857	814.0992	672.1199	968.6129	1231.6719	1107.3914	823.4833	671.5093	943.9724	1195.6021	1098.4506	794.2468	683.3360	943.4413	1200.0682	1055.6863	772.2293	689.9659	954.1121	1194.9427	1080.1772	796.0782	681.3604	949.5028	1180.2587	1060 [...]
+-11973.7979	-11730.4609	-11827.2949	-12098.4463	-12207.8105	-11960.4951	-11707.8574	-11817.2666	-12095.2139	-12213.0186	-11966.6758	-11712.8301	-11806.4893	-12093.9209	-12202.7500	-11957.3857	-11699.9150	-11795.8311	-12072.3262	-12201.6914	-11959.0195	-11698.5264	-11796.7285	-12070.9072	-12200.8643	-11968.9453	-11694.6367	-11788.7539	-12063.0625	-12183.5703	-11933.4746	-11684.7900	-11791.2324	-12061.4844	-12199.3301	-11956.5947	-11709.4023	-11792.3486	-12079.6846	-12191.5557	-11952.7363	 [...]
+-6795.6294	-6541.5562	-6618.7310	-6884.0752	-7007.9331	-6780.3501	-6522.3525	-6611.3345	-6886.9668	-7014.5254	-6789.9966	-6532.2900	-6602.0996	-6881.2358	-7004.3979	-6783.3379	-6518.2168	-6596.4771	-6866.9038	-7002.5781	-6780.6416	-6520.6099	-6597.1279	-6865.5557	-7006.6187	-6793.9126	-6515.4458	-6587.5449	-6856.0156	-6986.7554	-6761.1338	-6507.5024	-6592.8882	-6858.1475	-7004.0737	-6782.3467	-6528.9990	-6592.7896	-6870.1035	-6993.7217	-6779.4678	-6530.2700	-6606.6108	-6876.2471	-6997.64 [...]
+-3272.4204	-3023.8044	-3091.7761	-3343.3735	-3469.6216	-3260.0330	-3002.1987	-3084.5659	-3344.8831	-3474.6777	-3267.1775	-3017.0276	-3079.4341	-3341.2693	-3466.5525	-3262.3665	-3004.9490	-3074.2554	-3328.5984	-3461.2283	-3254.8909	-3005.9214	-3076.1646	-3333.1123	-3466.8682	-3274.0464	-3006.4587	-3065.1045	-3316.6255	-3448.3696	-3241.1321	-2993.7393	-3071.7427	-3323.3442	-3462.9216	-3261.6414	-3021.7542	-3073.8296	-3334.7915	-3456.6282	-3259.9270	-3016.6699	-3081.7317	-3340.9001	-3462.25 [...]
+-6588.1084	-6344.0161	-6402.7954	-6644.3135	-6770.8511	-6573.0347	-6323.9360	-6396.6304	-6647.7847	-6776.0898	-6583.3174	-6339.3413	-6395.4434	-6643.9233	-6767.8052	-6575.7925	-6324.8823	-6391.0898	-6633.6929	-6763.4067	-6573.5288	-6329.0947	-6390.5425	-6634.1069	-6770.3813	-6590.2017	-6329.1108	-6380.2593	-6622.9292	-6753.4248	-6560.6890	-6319.2017	-6387.0752	-6626.0005	-6764.5386	-6578.4209	-6341.6182	-6390.3101	-6641.6074	-6763.6670	-6578.0708	-6341.9458	-6397.8379	-6645.2061	-6765.13 [...]
+[... 130 rows of tab-separated floating-point values (numeric data file added by this commit); each row is truncated in this excerpt and the full values are not recoverable ...]
+-6573.1450	-6319.5542	-6415.4546	-6726.6318	-6875.4595	-6592.2256	-6325.6011	-6405.6787	-6712.0811	-6850.6484	-6585.0581	-6293.9253	-6417.4971	-6727.7852	-6870.2080	-6576.0254	-6306.9282	-6396.3872	-6713.1816	-6859.7061	-6594.8296	-6332.1265	-6413.6216	-6721.8003	-6863.4146	-6577.8447	-6310.0835	-6408.1104	-6719.6392	-6855.2793	-6582.1089	-6299.3237	-6404.1235	-6729.6660	-6872.7285	-6597.1797	-6320.6763	-6409.5776	-6709.2061	-6860.1040	-6592.5615	-6321.1934	-6416.4043	-6721.4087	-6857.33 [...]
+-3665.6208	-3403.7800	-3492.9504	-3806.3101	-3954.6729	-3678.1685	-3408.1299	-3487.2144	-3792.0198	-3937.0117	-3673.6265	-3382.9688	-3493.3521	-3808.4631	-3952.7695	-3668.0720	-3393.1326	-3477.3596	-3792.6523	-3946.3616	-3682.4451	-3417.0198	-3492.9736	-3801.8398	-3948.3792	-3670.4170	-3399.9106	-3490.5718	-3804.7488	-3944.3047	-3674.4832	-3392.2058	-3488.7988	-3810.9429	-3958.3267	-3687.7798	-3409.6023	-3491.8958	-3794.1853	-3943.7837	-3683.1707	-3414.8943	-3502.7964	-3808.6240	-3944.47 [...]
+-3094.6289	-2833.3955	-2922.8059	-3235.9504	-3383.2791	-3108.3877	-2837.9529	-2921.2141	-3223.4373	-3367.2463	-3103.2354	-2816.1313	-2924.3760	-3237.6172	-3382.2644	-3099.4556	-2824.1926	-2909.3486	-3225.2454	-3378.1765	-3112.6394	-2846.8513	-2923.3691	-3231.6694	-3375.5420	-3098.8696	-2830.3496	-2923.9321	-3235.0259	-3374.4673	-3103.9880	-2822.5449	-2922.1296	-3241.1133	-3385.3843	-3114.2996	-2839.7368	-2921.3713	-3224.9231	-3374.3428	-3111.0210	-2843.9814	-2930.0715	-3236.5371	-3372.88 [...]
+-4971.7559	-4720.3472	-4822.2427	-5127.1304	-5260.3179	-4983.1851	-4726.5601	-4819.4595	-5116.9976	-5245.5566	-4980.1196	-4710.4048	-4822.2983	-5130.2832	-5258.2856	-4976.5200	-4715.9873	-4811.3892	-5119.1567	-5259.5151	-4988.3608	-4733.2900	-4823.1221	-5125.3262	-5255.0562	-4977.1284	-4722.0737	-4826.0786	-5129.2402	-5257.4819	-4983.0020	-4718.0376	-4822.4336	-5134.7075	-5265.9409	-4992.4438	-4731.6885	-4826.1587	-5121.2783	-5256.0039	-4988.8896	-4733.3511	-4832.8413	-5130.7788	-5252.83 [...]
+-3817.9438	-3592.3062	-3692.0786	-3973.9907	-4091.5398	-3830.0051	-3598.7197	-3693.9512	-3968.4607	-4080.0320	-3826.4016	-3583.1062	-3692.3171	-3975.1838	-4087.9307	-3824.5498	-3589.9724	-3686.6870	-3970.8384	-4093.3718	-3837.0588	-3603.0789	-3697.5837	-3975.5422	-4087.9084	-3825.4700	-3595.8540	-3698.7004	-3979.0554	-4092.4165	-3830.0688	-3593.1963	-3696.5647	-3984.2656	-4096.5371	-3839.7134	-3604.4189	-3699.6184	-3973.7517	-4091.5972	-3835.0181	-3604.0544	-3704.4563	-3978.2998	-4083.98 [...]
+-2043.8322	-1870.9240	-1967.5493	-2192.1418	-2271.6628	-2054.6201	-1881.6377	-1973.9972	-2192.1392	-2267.6348	-2049.5613	-1865.6694	-1968.9260	-2193.8589	-2270.6833	-2051.2175	-1874.8663	-1968.7202	-2196.2104	-2280.5771	-2060.8914	-1885.0908	-1976.4366	-2197.2700	-2273.2258	-2050.9590	-1880.1085	-1976.8096	-2200.2275	-2277.4106	-2056.3557	-1877.6799	-1975.3270	-2204.2625	-2281.1365	-2063.5286	-1885.6113	-1978.0769	-2196.5886	-2277.0962	-2058.7896	-1883.0988	-1980.4764	-2199.3950	-2270.01 [...]
+458.8953	534.7487	468.5098	354.9846	326.6594	449.0101	524.1694	460.3350	351.8267	329.0114	452.5768	537.7072	468.7544	354.1245	329.2533	451.4226	528.1031	462.9927	346.9756	317.5902	443.9397	521.9935	458.9525	347.7834	325.4063	452.2538	525.2025	458.2376	346.1890	321.6442	446.3828	524.8998	458.2957	342.5108	319.8481	441.7010	519.9787	457.0737	347.7000	320.9457	446.6408	523.2683	456.5393	346.0348	327.4300	449.6996	520.1016	450.8005	339.3991	328.6627	450.4811	525.5361	461.4768	349.5087	320.41 [...]
+-733.2314	-685.7292	-722.7623	-793.7039	-811.0868	-740.4067	-694.3376	-731.2854	-795.9615	-810.7763	-739.2181	-685.4136	-725.1091	-793.4642	-810.3529	-740.1605	-692.8117	-730.5084	-802.8267	-823.1936	-747.3853	-698.8957	-733.7554	-798.8013	-814.7446	-739.7716	-696.7495	-733.7859	-802.2466	-819.1835	-745.1659	-697.3295	-733.5577	-805.2762	-820.4988	-747.9023	-699.1916	-735.1273	-801.3480	-819.1389	-743.4810	-696.5811	-733.9887	-800.7584	-813.0143	-742.1800	-700.3361	-739.6133	-806.1456	-8 [...]
+1254.4108	1311.1603	1274.0820	1199.1246	1175.0238	1251.1827	1305.9224	1269.3601	1196.1079	1174.7966	1250.5928	1309.7874	1272.3744	1196.3401	1174.1309	1250.3612	1304.9994	1266.8142	1190.3154	1164.6531	1244.2732	1300.6654	1266.4949	1191.2534	1172.3441	1250.4948	1302.2849	1266.3126	1189.5779	1166.7384	1245.3215	1301.1552	1265.1025	1188.7128	1167.2845	1244.6711	1300.4330	1265.0717	1189.5741	1167.1005	1246.5654	1301.1418	1263.7546	1190.2059	1171.3091	1247.6171	1298.4390	1260.0370	1187.0894	11 [...]
+3370.2874	3490.4150	3430.4146	3284.0146	3229.5173	3370.3889	3487.7456	3427.9070	3281.7117	3229.2961	3368.2629	3488.5203	3428.9729	3281.4983	3228.3977	3369.6980	3486.8396	3426.0156	3277.7246	3222.7158	3366.7168	3486.4744	3426.3250	3280.3167	3229.1653	3371.3608	3487.6680	3427.4661	3279.2148	3224.6057	3368.1206	3486.4221	3426.9812	3280.2334	3228.9314	3369.3154	3488.3684	3428.0042	3278.8828	3226.4373	3370.5798	3488.8005	3427.1382	3279.2876	3228.3413	3369.4241	3487.4519	3424.5146	3278.7693	32 [...]
+-17105.6113	-16840.4102	-16953.5957	-17268.0762	-17417.9902	-17120.2930	-16855.3613	-16944.8418	-17256.3438	-17397.8535	-17106.8516	-16820.5586	-16951.6660	-17273.8965	-17405.6074	-17100.1895	-16828.0762	-16925.8340	-17249.6934	-17402.9453	-17126.4316	-16852.3105	-16946.2656	-17267.3555	-17408.8281	-17103.5254	-16830.3477	-16937.6523	-17261.7910	-17395.5352	-17112.4922	-16819.7207	-16936.5293	-17268.4883	-17410.3281	-17124.9297	-16847.3926	-16946.4277	-17251.1680	-17400.7129	-17122.4941	 [...]
+-5586.7578	-5311.2222	-5395.8154	-5711.0337	-5867.8481	-5594.0850	-5319.1680	-5388.8027	-5694.5728	-5857.0313	-5586.0029	-5281.9966	-5390.5342	-5711.9570	-5867.1030	-5581.5205	-5297.8257	-5372.9453	-5690.5957	-5857.0005	-5601.3457	-5326.0513	-5389.5234	-5701.3877	-5863.2515	-5587.1157	-5300.0928	-5381.4624	-5699.3159	-5849.3911	-5594.0190	-5291.8936	-5384.5483	-5711.3286	-5869.0620	-5602.8647	-5314.5059	-5385.1362	-5692.3896	-5849.2031	-5600.7534	-5318.1504	-5400.9468	-5709.1611	-5850.48 [...]
+-7411.0879	-7149.0718	-7242.3496	-7553.4229	-7698.8662	-7420.0273	-7156.1079	-7236.9521	-7540.3013	-7684.4688	-7416.3271	-7128.3169	-7243.4048	-7558.3335	-7699.1611	-7411.7441	-7140.1665	-7225.3945	-7540.2314	-7692.0542	-7426.0591	-7163.4961	-7241.4873	-7549.2593	-7693.5679	-7414.1929	-7145.5229	-7238.6108	-7551.3672	-7688.2456	-7419.6270	-7139.1138	-7237.6196	-7560.0864	-7702.2651	-7431.7500	-7155.5972	-7241.3457	-7540.7817	-7686.9443	-7426.4658	-7162.8047	-7252.4956	-7556.8281	-7689.51 [...]
+-8000.2803	-7741.6899	-7852.0503	-8166.4585	-8299.0146	-8009.6436	-7748.1401	-7851.3169	-8158.1875	-8287.9688	-8008.4697	-7737.1172	-7854.7422	-8172.4272	-8299.2666	-8005.0708	-7736.4102	-7840.3516	-8158.8350	-8301.6709	-8018.3525	-7755.0942	-7856.4058	-8167.5493	-8297.2773	-8006.1240	-7747.0986	-7856.5752	-8171.5947	-8300.1328	-8010.8271	-7742.1157	-7853.7559	-8175.6372	-8305.9531	-8022.4976	-7755.1006	-7859.6177	-8165.4878	-8300.2520	-8016.8521	-7757.4951	-7865.7476	-8174.3931	-8296.75 [...]
+-8542.6055	-8289.2588	-8394.6855	-8702.1016	-8831.9092	-8553.6328	-8299.0908	-8401.0947	-8700.3389	-8826.9443	-8553.0518	-8289.3848	-8400.4902	-8708.9678	-8833.8096	-8551.1357	-8290.2432	-8393.7441	-8704.5352	-8840.7031	-8563.2012	-8307.4229	-8407.0313	-8710.5156	-8835.6553	-8553.8770	-8301.0791	-8407.4648	-8713.5088	-8839.7178	-8557.7959	-8296.6445	-8403.4404	-8717.1260	-8843.5371	-8567.8398	-8307.3779	-8411.0059	-8710.4639	-8841.5938	-8562.3359	-8308.5615	-8415.0293	-8714.5010	-8833.28 [...]
+-4466.0928	-4249.6826	-4351.0171	-4618.0483	-4724.2432	-4478.3062	-4262.1250	-4361.8271	-4621.2368	-4723.4067	-4477.1958	-4251.3271	-4354.7505	-4622.9263	-4723.8384	-4475.3062	-4255.7295	-4355.1094	-4627.9937	-4737.2715	-4489.6040	-4271.6816	-4366.0337	-4628.4897	-4730.2695	-4479.7773	-4265.1655	-4365.7100	-4631.6890	-4735.4634	-4483.4360	-4263.9019	-4365.3462	-4636.0718	-4739.1455	-4491.9121	-4272.4126	-4370.8301	-4632.5093	-4737.8525	-4484.9043	-4269.3877	-4370.3706	-4631.9458	-4727.22 [...]
+-5253.7012	-5081.9575	-5170.9316	-5386.4570	-5467.2705	-5263.6411	-5092.5249	-5180.9824	-5391.2437	-5468.0664	-5262.5645	-5084.3105	-5173.4521	-5390.3774	-5466.4653	-5262.9507	-5090.6816	-5178.1919	-5399.1294	-5481.6450	-5273.0703	-5101.2959	-5185.6445	-5397.0664	-5472.3525	-5264.5850	-5098.1597	-5186.2842	-5402.3594	-5478.9150	-5268.9419	-5098.2324	-5185.7422	-5405.2876	-5480.7891	-5274.4702	-5102.6021	-5190.5737	-5401.2144	-5480.0635	-5268.2280	-5098.6855	-5187.0264	-5400.6929	-5472.59 [...]
+-2292.4617	-2164.5420	-2234.7378	-2399.0913	-2455.2825	-2295.9722	-2172.4026	-2243.4192	-2403.2358	-2457.6201	-2298.9175	-2164.8528	-2237.8074	-2401.1875	-2455.8040	-2298.0364	-2172.4104	-2243.7754	-2410.0823	-2469.6218	-2305.4907	-2178.1414	-2247.7957	-2405.8701	-2459.2319	-2297.1003	-2175.2246	-2246.9026	-2410.4207	-2465.5291	-2304.2322	-2177.5771	-2247.8608	-2411.6621	-2465.1082	-2304.3992	-2177.4346	-2249.3733	-2409.7852	-2464.0635	-2301.1670	-2174.7231	-2247.8220	-2409.4846	-2458.02 [...]
+-105.1208	58.9632	-6.9110	-198.0781	-279.3481	-105.3251	56.8474	-9.8053	-200.5124	-279.6183	-107.8890	57.9664	-10.5539	-202.2167	-280.4695	-107.1126	54.6566	-13.6697	-208.4202	-289.8157	-112.4920	51.4679	-14.3311	-206.7605	-282.3438	-106.9699	52.4977	-14.2224	-212.3852	-290.6869	-113.5409	49.0109	-15.1586	-208.9734	-286.1405	-109.8745	53.0815	-16.1629	-208.6180	-287.2478	-109.3536	53.3367	-15.8096	-209.2825	-285.3831	-108.5867	50.0045	-19.4384	-210.1472	-285.1212	-107.6513	53.6849	-15.14 [...]
+-10731.1074	-10422.7354	-10516.5371	-10824.7236	-10991.0742	-10726.4775	-10452.7637	-10502.8008	-10829.6074	-10984.9404	-10705.4785	-10407.6523	-10508.9746	-10863.7627	-10972.2939	-10712.8408	-10420.6133	-10485.0049	-10805.8115	-10991.1738	-10753.4277	-10433.2148	-10502.0215	-10819.8477	-11007.2295	-10706.7471	-10411.7666	-10493.9980	-10809.3555	-11013.9629	-10728.5889	-10410.1523	-10500.4434	-10818.3164	-10988.9199	-10731.7959	-10434.8799	-10513.0137	-10799.7266	-10991.9707	-10732.8096	 [...]
+-54.6271	228.0549	126.0827	-190.5258	-339.8010	-48.7767	217.7714	127.2144	-187.2063	-334.3264	-44.5955	247.5522	130.0926	-206.1578	-341.7850	-47.8340	237.9487	144.6936	-183.6343	-343.9725	-70.3388	211.9699	125.4487	-195.8828	-349.3808	-54.7509	227.3422	122.5319	-203.4097	-348.9018	-70.8616	221.9522	110.1573	-212.4953	-361.6351	-79.6844	201.0187	109.3361	-200.8075	-348.8540	-84.3225	188.2413	84.0654	-233.1889	-361.4061	-70.8104	199.8051	92.6943	-224.7353	-353.7563	-78.9027	198.2016	98.551 [...]
+-10230.1289	-9970.3467	-10088.0518	-10402.5586	-10533.4707	-10242.5498	-9978.5371	-10089.0049	-10397.4648	-10526.3750	-10238.4629	-9968.0664	-10090.7842	-10413.1406	-10535.1074	-10234.6758	-9965.6699	-10075.6084	-10401.3105	-10543.2393	-10252.2783	-9983.9111	-10095.8506	-10411.6709	-10538.8662	-10238.4697	-9977.5967	-10093.2861	-10412.9102	-10542.8447	-10243.2607	-9972.6309	-10091.3662	-10414.6777	-10545.3770	-10253.6084	-9984.5342	-10099.1680	-10411.4004	-10544.0986	-10249.3125	-9989.50 [...]
+-11545.8467	-11293.3496	-11405.1533	-11713.3223	-11840.8535	-11557.1221	-11301.1719	-11411.2500	-11708.2021	-11835.8574	-11556.0957	-11293.1816	-11409.4883	-11721.1748	-11841.0576	-11553.6982	-11292.0869	-11399.4932	-11715.6396	-11850.0742	-11567.2002	-11311.2607	-11416.8008	-11722.3125	-11846.1104	-11556.6758	-11304.0781	-11415.9609	-11724.2637	-11850.6953	-11561.5400	-11300.3623	-11412.1123	-11728.7686	-11853.6719	-11571.2510	-11311.4570	-11420.5713	-11723.9873	-11852.4355	-11565.6582	 [...]
+-9428.1357	-9191.6260	-9298.1650	-9589.8428	-9707.5000	-9439.9648	-9201.2734	-9308.4355	-9591.9502	-9706.5498	-9440.8662	-9194.8066	-9305.1768	-9599.2969	-9707.7832	-9439.0244	-9196.1631	-9303.4375	-9599.7881	-9725.0303	-9452.7129	-9214.5195	-9318.1064	-9606.0469	-9718.7490	-9445.6182	-9211.5205	-9317.4326	-9609.3643	-9724.9316	-9451.1094	-9208.6943	-9316.9912	-9613.8457	-9728.2959	-9460.3174	-9219.0713	-9326.2725	-9611.5000	-9729.7998	-9454.9346	-9216.4756	-9325.7295	-9610.5518	-9717.80 [...]
+-4754.1714	-4520.5615	-4619.9619	-4904.9067	-5021.9639	-4765.6523	-4531.3457	-4632.1558	-4908.8638	-5023.5405	-4767.1533	-4526.4282	-4628.6563	-4913.2813	-5024.0474	-4766.2090	-4528.0195	-4629.9048	-4919.5088	-5040.8232	-4779.9023	-4546.9380	-4642.1108	-4920.8311	-5031.9917	-4772.4844	-4541.3325	-4642.9517	-4926.9053	-5041.5552	-4778.1616	-4540.3809	-4642.6284	-4929.7852	-5041.9712	-4783.9482	-4548.8286	-4650.5894	-4928.3218	-5042.2705	-4777.7793	-4543.7139	-4647.0327	-4926.1934	-5033.07 [...]
+-7913.7949	-7743.8931	-7830.7769	-8046.1494	-8125.2979	-7920.0938	-7751.1538	-7840.2339	-8050.8022	-8128.0879	-7924.1748	-7747.1577	-7837.9204	-8053.1621	-8127.0181	-7923.3472	-7753.0034	-7841.7070	-8061.3955	-8142.9551	-7932.9419	-7761.9561	-7847.5474	-8058.7119	-8131.2036	-7922.7275	-7759.3643	-7846.8330	-8063.7852	-8140.5552	-7930.9468	-7760.7749	-7848.4058	-8064.5752	-8138.2251	-7933.2148	-7761.3667	-7851.0166	-8062.8975	-8137.6914	-7927.8716	-7759.7485	-7849.8179	-8062.4316	-8132.62 [...]
+-2985.3872	-2804.3240	-2879.6179	-3095.5950	-3184.1729	-2984.1189	-2805.2190	-2885.0256	-3100.1108	-3186.1750	-2989.0344	-2805.5662	-2884.2644	-3100.1912	-3185.4897	-2987.1060	-2808.2974	-2888.7693	-3108.6399	-3199.3547	-2998.0137	-2815.9744	-2894.1038	-3107.3918	-3189.1829	-2987.5500	-2814.8655	-2892.4678	-3112.1914	-3197.8147	-2995.8049	-2816.6541	-2892.4656	-3110.9211	-3194.3059	-2995.4609	-2815.1650	-2896.1672	-3110.7976	-3196.1616	-2994.7678	-2814.4187	-2894.7778	-3110.0376	-3192.56 [...]
+-11987.9424	-11703.4482	-11806.6279	-12114.4102	-12270.0322	-11992.3906	-11720.1748	-11794.1953	-12109.9014	-12258.9395	-11985.4932	-11702.5225	-11797.6895	-12126.2148	-12264.1826	-11976.9092	-11693.8320	-11772.8408	-12102.0957	-12272.0313	-12007.7021	-11706.0596	-11794.3047	-12121.2217	-12272.4961	-11975.7715	-11693.6650	-11794.7988	-12109.4023	-12263.4473	-11989.8730	-11691.2686	-11785.3721	-12107.5859	-12270.6494	-12003.0605	-11715.2148	-11794.7490	-12106.5410	-12266.2842	-11990.7383	 [...]
+-5564.3569	-5290.0073	-5407.2007	-5720.6440	-5866.3179	-5574.5537	-5300.4097	-5400.2646	-5713.3369	-5856.8252	-5570.3291	-5289.2822	-5398.2197	-5732.8804	-5862.0620	-5561.1709	-5279.1528	-5375.9316	-5711.8003	-5866.6216	-5583.2363	-5297.8350	-5400.4106	-5724.3301	-5865.4136	-5562.2837	-5284.2100	-5393.5498	-5716.5996	-5862.7778	-5571.4902	-5282.2383	-5390.4976	-5720.6948	-5869.3486	-5582.1973	-5301.1602	-5401.6875	-5719.2852	-5867.5610	-5575.6377	-5304.0181	-5409.4761	-5733.8516	-5863.56 [...]
+-3306.2659	-3031.0859	-3132.2595	-3444.7495	-3596.4958	-3317.8506	-3039.6282	-3126.3320	-3440.5461	-3587.5178	-3313.6724	-3028.3936	-3128.0608	-3458.7913	-3595.2063	-3305.9385	-3023.2043	-3109.4175	-3441.1072	-3601.0046	-3326.7639	-3043.9326	-3131.6152	-3451.9902	-3600.7939	-3312.2634	-3033.3127	-3127.6855	-3449.4436	-3600.7214	-3319.2639	-3028.1472	-3124.2297	-3453.6604	-3606.1536	-3328.8777	-3043.8850	-3136.7700	-3452.5020	-3605.7114	-3324.1968	-3049.6094	-3143.9534	-3462.6057	-3600.46 [...]
+-13901.7998	-13640.3193	-13755.6299	-14072.4727	-14206.4756	-13915.8486	-13648.3291	-13758.6377	-14068.8398	-14201.9150	-13914.8037	-13641.4707	-13759.7754	-14086.5615	-14209.9111	-13910.3184	-13637.3750	-13746.2041	-14075.8076	-14221.6904	-13929.8848	-13661.5957	-13769.7520	-14088.8887	-14219.2100	-13919.2070	-13654.9004	-13765.7314	-14089.5869	-14224.0762	-13925.8594	-13649.8018	-13764.2461	-14093.3516	-14228.8643	-13937.3018	-13664.5801	-13776.0225	-14091.7061	-14230.2422	-13932.1357	 [...]
+-13153.9775	-12889.0127	-12993.6123	-13312.0225	-13451.2080	-13167.0674	-12894.7471	-12999.2520	-13310.7910	-13451.1221	-13170.5342	-12892.1973	-13000.2021	-13329.0811	-13459.1709	-13168.1445	-12888.4795	-12991.6602	-13324.1611	-13471.8955	-13186.1367	-12914.9600	-13014.0635	-13333.7822	-13469.9932	-13178.7832	-12910.0273	-13012.7227	-13336.6895	-13479.2949	-13184.6230	-12905.0146	-13009.3545	-13341.6025	-13480.6660	-13195.4160	-12918.7490	-13022.2881	-13338.6025	-13484.5742	-13190.7725	 [...]
+-9723.1582	-9457.5322	-9539.6924	-9847.0654	-9992.3125	-9739.3779	-9468.1680	-9550.0352	-9849.1240	-9995.5508	-9741.5615	-9465.3867	-9550.8740	-9863.3008	-10000.2979	-9740.6221	-9465.5254	-9547.9619	-9865.0313	-10018.3047	-9757.3799	-9488.8271	-9566.7559	-9871.3799	-10014.6709	-9751.2451	-9485.6699	-9567.4434	-9874.5098	-10023.4365	-9758.0508	-9480.8291	-9566.6436	-9881.6113	-10028.0352	-9769.4697	-9495.0605	-9578.3076	-9880.8252	-10031.1318	-9764.9688	-9494.5156	-9578.3096	-9878.4756	-1 [...]
+-10332.1943	-10080.5801	-10177.4854	-10474.8877	-10604.2402	-10341.2246	-10092.2510	-10190.9648	-10480.8389	-10607.4746	-10345.6826	-10089.7676	-10190.1152	-10490.7646	-10609.9170	-10346.9326	-10090.8760	-10187.9180	-10495.3135	-10631.4521	-10361.3809	-10112.8994	-10206.0283	-10499.0566	-10622.7490	-10355.7070	-10110.3779	-10207.1006	-10506.2275	-10634.2871	-10362.4492	-10108.1289	-10206.6309	-10509.6123	-10635.0879	-10369.8398	-10115.8018	-10214.4814	-10508.4482	-10637.0928	-10364.7314	 [...]
+-15432.2354	-15157.9248	-15234.9355	-15545.7695	-15698.0195	-15437.9902	-15162.1572	-15245.8037	-15552.1182	-15701.2900	-15445.7373	-15166.0293	-15251.8545	-15563.4063	-15704.8555	-15443.8818	-15165.8848	-15251.4297	-15570.4941	-15728.2744	-15465.3135	-15188.5020	-15265.1631	-15571.8535	-15716.7969	-15454.4385	-15185.5234	-15265.1172	-15579.5801	-15729.3457	-15461.6924	-15181.5283	-15260.8398	-15574.1230	-15724.1426	-15461.6045	-15185.1318	-15269.3154	-15575.1826	-15724.9053	-15459.2070	 [...]
+-13015.6143	-12776.9834	-12867.0254	-13147.3721	-13268.0439	-13013.5322	-12775.5703	-12869.5879	-13150.0459	-13271.9912	-13024.6934	-12782.7861	-12876.7314	-13159.0205	-13275.2988	-13025.2021	-12785.2529	-12879.4395	-13167.0557	-13291.7197	-13037.9688	-12797.9238	-12889.5576	-13167.2891	-13280.5225	-13028.0352	-12797.8047	-12888.2783	-13174.6846	-13294.8477	-13035.3652	-12798.8555	-12887.0547	-13169.2119	-13288.9014	-13036.6787	-12797.4932	-12894.4980	-13171.8457	-13291.5615	-13034.8682	 [...]
+-15251.2363	-14980.0352	-15080.6836	-15398.7490	-15544.9531	-15265.5967	-14987.2139	-15083.8789	-15397.7217	-15544.3301	-15268.2412	-14982.4414	-15085.4170	-15417.0898	-15552.0664	-15262.1885	-14977.2051	-15072.8418	-15406.8516	-15563.9961	-15283.1152	-15004.2031	-15097.9072	-15420.1035	-15563.8242	-15272.3633	-14997.1260	-15095.5498	-15420.6719	-15569.6895	-15279.3174	-14992.3574	-15092.2861	-15426.2100	-15574.6680	-15290.9814	-15006.0947	-15105.3418	-15424.9473	-15576.8027	-15285.6934	 [...]
+-17225.5449	-16954.2402	-17048.1172	-17366.2813	-17516.2676	-17240.8945	-16959.6465	-17056.3105	-17375.0645	-17528.5605	-17254.4551	-16964.0820	-17064.8809	-17402.1680	-17537.7813	-17251.4961	-16962.4512	-17055.1660	-17396.9180	-17559.2402	-17276.6934	-16995.5137	-17087.0410	-17410.8398	-17558.9355	-17269.7480	-16991.3965	-17082.9609	-17415.2402	-17570.6836	-17282.4082	-16986.7754	-17084.3223	-17421.7500	-17575.9121	-17290.4922	-16999.8340	-17096.2480	-17419.8184	-17579.1016	-17284.6523	 [...]
+-7012.3633	-6741.4341	-6838.8599	-7150.1948	-7304.1196	-7029.0742	-6747.3350	-6838.3916	-7150.2300	-7298.4478	-7028.0659	-6741.9048	-6838.8872	-7167.8657	-7306.8726	-7021.0664	-6733.9663	-6822.3936	-7156.5322	-7316.7397	-7041.4297	-6758.2046	-6847.9961	-7169.0850	-7316.4360	-7027.0649	-6749.4897	-6838.5225	-7164.7231	-7317.9629	-7034.4863	-6743.3584	-6837.3066	-7170.7915	-7323.1211	-7045.8242	-6757.7051	-6849.6738	-7169.7222	-7324.3467	-7039.3403	-6761.0186	-6855.4893	-7175.2104	-7314.62 [...]
+-1838.4966	-1562.8992	-1673.4507	-1986.4330	-2137.7415	-1851.2427	-1571.3606	-1665.0458	-1980.8229	-2127.3655	-1845.3229	-1560.0002	-1663.6523	-1997.1984	-2133.9905	-1835.6796	-1550.6472	-1642.9330	-1980.5593	-2140.4697	-1857.8486	-1573.4619	-1669.8114	-1993.2134	-2136.5125	-1838.5391	-1559.5977	-1659.1960	-1981.7970	-2135.9136	-1845.8252	-1554.2683	-1655.7570	-1988.4973	-2144.1462	-1856.6744	-1572.4957	-1667.6443	-1987.3201	-2140.2471	-1848.8584	-1572.3949	-1670.9285	-1994.9985	-2132.39 [...]
+-3356.3298	-3088.3748	-3214.6958	-3526.9492	-3667.5891	-3369.3486	-3098.4922	-3204.2727	-3518.7029	-3658.4451	-3366.2561	-3086.8550	-3202.3328	-3536.6519	-3661.4746	-3350.2598	-3075.0168	-3177.2585	-3516.7173	-3665.9602	-3374.2664	-3092.2051	-3204.0103	-3529.3464	-3662.0156	-3351.2866	-3077.9705	-3192.4492	-3513.5923	-3658.6687	-3357.1213	-3074.1206	-3189.2195	-3521.7556	-3665.5962	-3370.1575	-3093.8625	-3199.6941	-3520.0071	-3664.9231	-3361.1123	-3090.4932	-3203.5002	-3528.8733	-3656.13 [...]
+-12513.0947	-12253.9248	-12374.8447	-12695.6045	-12830.5088	-12526.6641	-12258.3799	-12380.6934	-12705.2861	-12840.9219	-12539.7119	-12260.5918	-12388.9678	-12731.2822	-12851.2637	-12536.0947	-12255.5322	-12376.9863	-12725.3320	-12873.7207	-12563.5439	-12289.9414	-12411.2148	-12742.3760	-12871.8301	-12552.3613	-12284.0723	-12399.9414	-12743.0195	-12881.6162	-12567.2070	-12280.5361	-12403.5762	-12750.0313	-12886.1035	-12571.8086	-12292.2070	-12417.7900	-12748.6191	-12889.8154	-12568.2705	 [...]
+-7603.6021	-7268.0771	-7254.0361	-7565.1494	-7802.5845	-7615.7876	-7272.2437	-7252.9146	-7566.7739	-7797.8516	-7615.7046	-7264.2710	-7252.5674	-7583.9287	-7804.7251	-7607.5259	-7260.9785	-7236.7949	-7572.6626	-7817.3857	-7630.7954	-7284.3984	-7264.0703	-7584.8989	-7817.3804	-7616.5273	-7276.3086	-7253.4053	-7580.8228	-7821.5083	-7625.8198	-7267.8525	-7253.4751	-7587.3115	-7824.4331	-7633.4951	-7280.7988	-7265.4458	-7585.9009	-7826.5913	-7626.7051	-7285.1055	-7267.0503	-7589.6411	-7815.75 [...]
+-4723.2002	-4442.7031	-4507.9492	-4804.1973	-4980.8350	-4738.2051	-4446.0176	-4502.3521	-4803.0366	-4971.9614	-4734.8491	-4441.4771	-4503.3218	-4820.1187	-4978.4580	-4725.4014	-4431.4390	-4482.1875	-4807.1118	-4989.1265	-4748.3042	-4454.9854	-4511.9209	-4820.4614	-4987.4102	-4729.8560	-4445.2788	-4497.2749	-4810.3174	-4986.2876	-4736.6187	-4435.5918	-4493.8027	-4815.5059	-4990.1465	-4744.1299	-4452.2485	-4505.2891	-4816.1504	-4990.3882	-4739.9072	-4451.7437	-4510.0366	-4820.8882	-4982.71 [...]
+-5796.1128	-5507.0278	-5588.4897	-5895.3740	-6069.9248	-5808.1592	-5514.4121	-5577.9873	-5891.0649	-6061.2773	-5805.9058	-5504.7378	-5577.8232	-5908.3755	-6064.4805	-5792.4629	-5494.7271	-5555.5981	-5890.6147	-6070.9204	-5815.4580	-5516.1001	-5582.6729	-5904.2236	-6068.7437	-5794.9287	-5501.9404	-5568.0498	-5888.0249	-6065.2910	-5802.5527	-5496.2871	-5566.6304	-5896.9487	-6072.2295	-5812.8770	-5514.2725	-5578.1143	-5894.0591	-6069.2295	-5802.4819	-5512.1479	-5581.9351	-5901.9136	-6065.98 [...]
+99.6962	373.8423	274.4015	-47.0488	-199.0228	88.0458	374.6539	273.9691	-53.3486	-205.0553	79.3717	376.3204	270.3489	-75.1069	-210.8066	86.5566	385.2838	286.0577	-63.6760	-232.6548	65.3966	355.1806	254.2427	-76.5114	-227.0457	76.7532	362.8314	270.5007	-75.8623	-230.8237	66.5637	371.3092	268.6814	-81.3633	-237.4848	63.1013	359.2781	254.4684	-78.9946	-241.7349	67.2510	356.2067	253.5647	-79.3560	-223.3422	79.0849	358.2546	261.8454	-75.7036	-211.3333	79.7470	364.3486	271.7635	-63.6166	-216.44 [...]
+1023.5961	1291.0488	1182.6500	867.6858	720.3473	1009.7223	1289.2323	1184.4648	866.1928	722.6398	1007.2182	1294.6504	1183.1134	848.3550	717.1947	1018.2546	1303.2689	1203.2712	862.5685	705.1392	998.4647	1279.6831	1173.9669	849.3887	707.0538	1012.6962	1287.6886	1188.5122	857.1715	704.6126	1004.7588	1296.5753	1189.9382	849.8392	700.2349	998.5245	1284.1328	1175.4637	851.1307	700.2017	1004.3207	1284.6488	1174.3668	848.4126	712.4672	1013.8741	1276.9531	1180.9890	853.7103	727.2169	1018.2604	1292 [...]
+-2065.5994	-1790.2935	-1898.6405	-2212.0220	-2367.0374	-2079.5886	-1792.9523	-1893.3727	-2212.2102	-2358.9753	-2076.9456	-1787.1588	-1890.6694	-2226.6450	-2362.6089	-2065.3401	-1777.3805	-1870.2002	-2212.5381	-2372.7363	-2085.7590	-1800.8716	-1898.7546	-2224.9194	-2370.9683	-2067.3276	-1791.5040	-1883.5654	-2213.2417	-2369.2915	-2074.6289	-1779.7966	-1880.1921	-2220.3958	-2373.3828	-2083.6428	-1793.5460	-1895.8054	-2218.5522	-2375.4954	-2076.8499	-1795.5492	-1899.5911	-2227.1245	-2367.26 [...]
+-8366.8555	-8091.9556	-8203.6055	-8515.2168	-8669.9990	-8381.5566	-8097.0010	-8196.3799	-8512.6406	-8661.1270	-8378.7061	-8089.8345	-8194.4297	-8527.5107	-8666.5059	-8365.9600	-8077.5669	-8168.0366	-8511.7129	-8670.7012	-8384.2246	-8098.6450	-8198.5537	-8521.8965	-8666.7539	-8364.5791	-8084.8872	-8178.7549	-8506.7852	-8662.9932	-8369.6064	-8077.2129	-8178.9800	-8516.5430	-8671.4521	-8380.9072	-8094.7544	-8191.1406	-8515.1426	-8671.6074	-8374.3008	-8094.7798	-8197.4766	-8521.2012	-8663.24 [...]
+-4373.7754	-4107.0952	-4221.8955	-4539.4194	-4677.9048	-4375.9570	-4098.3579	-4214.0151	-4536.3813	-4676.7271	-4379.7661	-4090.3567	-4205.3809	-4547.7690	-4674.2100	-4363.0854	-4074.1309	-4186.2827	-4529.6206	-4689.3394	-4377.2930	-4099.5410	-4212.8584	-4539.2939	-4677.8374	-4361.2100	-4086.0840	-4188.9600	-4530.4746	-4673.2012	-4366.2603	-4072.8533	-4187.2983	-4534.1533	-4679.8667	-4367.7173	-4082.6335	-4198.2119	-4528.5103	-4681.6929	-4362.5615	-4088.2893	-4207.2759	-4535.0298	-4668.45 [...]
+-1928.2401	-1659.7429	-1755.4979	-2059.5720	-2215.1345	-1940.8970	-1656.8514	-1744.8210	-2057.7837	-2209.1362	-1937.7727	-1649.1475	-1741.2079	-2071.0083	-2212.6003	-1925.7214	-1636.9044	-1721.3500	-2054.6636	-2221.4565	-1942.2496	-1661.1349	-1749.6833	-2066.8416	-2217.7261	-1926.1359	-1650.0636	-1729.2565	-2055.5542	-2209.3503	-1931.2045	-1637.6799	-1728.4897	-2062.3286	-2219.7498	-1938.9896	-1653.1698	-1744.1272	-2059.0334	-2221.8237	-1931.9827	-1654.2959	-1748.3414	-2064.3784	-2211.47 [...]
+-5609.3428	-5340.2402	-5445.9189	-5754.6201	-5908.1855	-5626.6660	-5342.8667	-5438.5430	-5753.5347	-5898.6660	-5620.7734	-5335.6011	-5435.0361	-5764.0122	-5902.8906	-5608.5000	-5324.2100	-5412.7705	-5748.0254	-5910.0078	-5626.7383	-5346.6343	-5442.7939	-5762.3735	-5906.2764	-5608.1782	-5337.9907	-5423.4395	-5750.3516	-5900.2207	-5614.6367	-5324.4238	-5420.6738	-5755.6763	-5908.0259	-5624.4761	-5339.2188	-5435.2905	-5753.2686	-5910.8198	-5617.1167	-5339.8145	-5440.8315	-5760.5591	-5900.74 [...]
+-5090.6177	-4822.4150	-4921.4932	-5216.4907	-5370.7954	-5086.8613	-4809.6782	-4906.7588	-5223.1997	-5364.3154	-5084.5571	-4800.5532	-4891.6641	-5216.3970	-5375.4526	-5074.4194	-4800.5845	-4875.7661	-5206.4941	-5366.3589	-5078.4414	-4802.5605	-4894.4434	-5205.6372	-5365.9556	-5070.9780	-4790.9038	-4866.7720	-5194.8989	-5336.8867	-5069.1440	-4772.9468	-4875.2549	-5196.3247	-5358.2290	-5077.8218	-4788.8672	-4884.0654	-5199.2627	-5361.2681	-5058.9668	-4793.1890	-4897.2031	-5208.3931	-5352.07 [...]
+-9741.7354	-9466.2041	-9573.3584	-9879.0840	-10021.7578	-9731.4365	-9457.4756	-9565.2568	-9880.2598	-10023.2070	-9734.2188	-9451.6143	-9543.0537	-9875.0234	-10017.8291	-9723.0215	-9439.7246	-9534.1543	-9864.3965	-10023.5898	-9728.0508	-9449.7959	-9549.2510	-9865.2803	-10020.2852	-9720.5547	-9435.8369	-9524.2422	-9852.4404	-9990.8701	-9703.4580	-9435.1865	-9532.9639	-9858.6982	-10016.3809	-9726.3779	-9439.1143	-9534.5029	-9856.7070	-10017.5303	-9712.7119	-9441.0576	-9552.3174	-9874.4697	- [...]
+-10360.9014	-10090.3125	-10209.6709	-10523.5283	-10651.8994	-10348.0117	-10079.9707	-10202.0244	-10517.4580	-10651.9912	-10356.5703	-10073.8174	-10177.4160	-10513.4785	-10648.3379	-10343.4570	-10068.6836	-10173.3135	-10500.0947	-10650.0723	-10344.9990	-10074.4414	-10186.5596	-10504.5830	-10650.8174	-10345.6172	-10065.5527	-10159.6299	-10488.9111	-10619.7148	-10326.6973	-10063.9131	-10169.3301	-10499.0713	-10649.8926	-10350.4844	-10068.1016	-10173.0752	-10495.4395	-10653.3174	-10336.3457	 [...]
+-5463.9131	-5164.2236	-5261.7041	-5560.1270	-5720.9263	-5441.6426	-5164.7632	-5259.1646	-5577.6729	-5721.8877	-5446.1621	-5175.3843	-5236.5098	-5554.3423	-5704.4463	-5419.1919	-5141.1709	-5227.2666	-5553.3521	-5728.4648	-5443.5532	-5137.4761	-5234.9746	-5555.3384	-5714.6553	-5412.1509	-5118.5898	-5209.1982	-5534.0215	-5679.9019	-5396.5044	-5137.5532	-5227.6665	-5533.8999	-5708.0112	-5416.0762	-5121.5049	-5212.8403	-5536.6851	-5703.2139	-5404.6465	-5129.2568	-5237.2148	-5569.4121	-5708.48 [...]
+-4847.2295	-4564.3970	-4666.3325	-4973.9028	-5120.4561	-4831.2041	-4555.2148	-4661.9609	-4980.9121	-5126.0049	-4839.2666	-4555.4727	-4638.7896	-4971.2212	-5110.9375	-4821.4536	-4541.3389	-4630.6699	-4957.7734	-5121.0825	-4833.2026	-4545.8354	-4645.0513	-4961.7271	-5116.6860	-4819.5356	-4528.7852	-4617.9111	-4942.6006	-5084.9395	-4796.3472	-4536.9595	-4633.4766	-4948.6753	-5114.4312	-4827.4556	-4534.5020	-4624.8125	-4944.2949	-5109.3149	-4814.2031	-4543.6914	-4649.0342	-4971.2510	-5105.06 [...]
+-13890.6855	-13615.5723	-13729.6104	-14043.5352	-14176.6855	-13877.8018	-13605.8418	-13722.2627	-14040.7910	-14178.8477	-13882.6172	-13599.3682	-13696.7734	-14034.2549	-14169.4043	-13872.4375	-13593.4014	-13691.1211	-14021.3711	-14173.2803	-13875.6250	-13599.1777	-13702.9492	-14020.7686	-14172.2432	-13870.1846	-13583.3457	-13677.3838	-14003.3848	-14141.3779	-13847.4316	-13585.0166	-13688.8447	-14012.1553	-14168.8027	-13875.0654	-13590.7529	-13686.6377	-14005.0137	-14165.6279	-13860.6934	 [...]
+-12531.1973	-12261.2549	-12375.7334	-12692.9551	-12818.8945	-12525.8438	-12253.0020	-12369.7617	-12683.3516	-12820.1973	-12529.7246	-12245.2676	-12343.7314	-12676.0010	-12811.8750	-12518.2305	-12238.4336	-12340.5410	-12667.0420	-12820.1348	-12525.2588	-12251.6934	-12349.9258	-12668.7246	-12819.0381	-12520.0332	-12234.2617	-12326.6240	-12652.3594	-12794.8662	-12502.6064	-12237.5986	-12338.5352	-12662.2832	-12819.6309	-12524.3457	-12242.9619	-12337.7031	-12654.7607	-12817.2891	-12512.5439	 [...]
+-6565.1040	-6261.7261	-6321.3867	-6619.5708	-6794.1719	-6541.5723	-6255.8960	-6330.6494	-6634.8682	-6799.8008	-6554.2568	-6263.5518	-6294.1128	-6611.2686	-6776.8618	-6520.0361	-6233.9199	-6287.4229	-6600.8198	-6795.4956	-6541.2969	-6234.1216	-6299.1553	-6609.6816	-6781.9946	-6518.2480	-6214.5156	-6273.6675	-6581.4272	-6746.4082	-6490.6172	-6236.2866	-6294.0679	-6587.6665	-6777.5796	-6523.7056	-6223.0215	-6271.4204	-6581.0244	-6765.7363	-6514.5938	-6230.8911	-6302.7324	-6627.0464	-6771.94 [...]
+-6640.0044	-6363.5518	-6460.7896	-6767.8564	-6912.0166	-6624.2124	-6346.7754	-6456.3477	-6776.2305	-6918.3145	-6630.5962	-6345.8281	-6429.6650	-6766.0142	-6901.7329	-6614.7217	-6334.1553	-6422.2969	-6747.0972	-6905.0703	-6622.4600	-6339.4028	-6433.1631	-6752.4360	-6908.4331	-6616.4272	-6321.9717	-6410.0557	-6728.9561	-6874.4087	-6585.4248	-6325.7402	-6423.6626	-6737.9478	-6902.8848	-6618.6445	-6332.0474	-6415.9614	-6731.5405	-6892.3467	-6608.6416	-6339.7344	-6440.5181	-6759.0986	-6886.73 [...]
+-10391.1201	-10113.5752	-10188.2090	-10488.1494	-10639.6025	-10380.7822	-10100.1309	-10180.9941	-10493.1777	-10651.7207	-10389.3457	-10097.3877	-10155.1328	-10480.7012	-10636.4854	-10381.3125	-10086.1904	-10151.5439	-10469.4131	-10640.2998	-10382.9492	-10099.3926	-10163.1025	-10470.8066	-10645.8818	-10385.3359	-10081.6563	-10138.9443	-10449.0752	-10615.5469	-10356.0684	-10082.2949	-10152.0762	-10460.5830	-10641.8389	-10384.9316	-10091.9785	-10146.2012	-10451.0898	-10632.9834	-10373.0957	 [...]
+-7354.9702	-7071.7832	-7160.1147	-7474.0605	-7612.6597	-7342.6660	-7062.8247	-7160.0576	-7478.1440	-7628.2803	-7354.7817	-7059.5425	-7130.3638	-7460.3364	-7613.0898	-7343.0269	-7049.2119	-7126.7617	-7453.5229	-7619.2671	-7348.2524	-7065.2725	-7141.5303	-7459.3013	-7626.9180	-7350.4771	-7043.0752	-7113.2637	-7434.0688	-7596.2524	-7321.3813	-7041.7759	-7126.5337	-7445.4219	-7622.1855	-7349.7280	-7058.7217	-7124.8599	-7439.4873	-7619.7866	-7344.9097	-7073.0986	-7153.0645	-7463.1792	-7601.40 [...]
+-4862.7764	-4613.0269	-4714.5220	-5019.7324	-5151.4697	-4858.6563	-4581.2617	-4701.1880	-5021.5112	-5149.8516	-4849.8447	-4562.7666	-4680.5435	-5022.2261	-5144.4683	-4843.3086	-4561.4766	-4665.8472	-4989.6577	-5133.2485	-4831.3179	-4569.3765	-4681.7666	-5002.5337	-5147.1514	-4844.1353	-4549.8081	-4653.3286	-4969.3652	-5102.2168	-4796.8657	-4540.2324	-4672.4365	-4983.4287	-5129.2373	-4832.2061	-4563.1689	-4656.8071	-4975.6182	-5120.5781	-4834.2217	-4571.9780	-4688.6621	-5000.1538	-5117.19 [...]
+-380.7783	-127.8448	-242.5484	-541.7405	-666.8411	-372.2576	-99.2402	-222.6502	-544.3165	-675.0734	-371.7869	-98.2827	-210.5633	-542.8483	-665.7859	-364.6625	-88.9668	-193.4529	-515.9946	-660.2251	-365.8444	-94.5679	-201.8795	-525.3510	-671.5419	-377.1725	-84.8560	-188.6032	-497.9098	-634.0905	-332.8897	-76.9852	-194.3027	-507.7216	-663.2867	-364.9129	-94.0360	-191.1611	-513.3672	-649.1671	-362.6311	-105.1339	-223.9430	-535.6072	-643.0663	-349.0943	-88.4632	-198.7111	-527.9015	-656.7940	 [...]
+-6240.7769	-5970.2344	-6060.9658	-6364.3101	-6503.2856	-6223.6162	-5946.2114	-6051.5254	-6373.1333	-6518.3882	-6235.7690	-5951.7744	-6028.9404	-6360.1226	-6503.2490	-6228.2690	-5941.3760	-6025.2778	-6346.5820	-6504.7695	-6226.3965	-5947.7764	-6034.0630	-6352.6533	-6518.2124	-6237.3140	-5936.4038	-6016.9302	-6329.6260	-6482.9829	-6203.7788	-5937.5308	-6031.9341	-6342.7085	-6513.2690	-6232.4458	-5948.7690	-6023.8726	-6338.6895	-6500.0361	-6226.0786	-5963.1147	-6051.0640	-6361.0898	-6491.12 [...]
+2682.5195	2941.2305	2819.7190	2510.5088	2395.6262	2696.4570	2962.4712	2823.4385	2496.4849	2368.5659	2676.3276	2952.6399	2851.3582	2517.7622	2390.8289	2686.7073	2967.3701	2854.5005	2529.3882	2387.7314	2689.0015	2955.1602	2841.9646	2519.5649	2370.9211	2674.1777	2970.4900	2867.1841	2550.9324	2410.9409	2713.0254	2969.7795	2853.2537	2535.9856	2378.5830	2679.3799	2953.3799	2856.0322	2542.3506	2393.9910	2689.2168	2940.9067	2828.7637	2517.4790	2402.1323	2705.2529	2962.0393	2861.0640	2527.9150	23 [...]
+-9376.3037	-9106.6963	-9218.8955	-9532.4951	-9653.2656	-9365.8340	-9098.9639	-9228.4385	-9550.9883	-9687.4199	-9386.0889	-9101.1787	-9191.2188	-9517.9854	-9658.3047	-9383.0127	-9095.9023	-9191.3984	-9515.8975	-9665.4326	-9374.2510	-9109.5967	-9209.4531	-9527.8027	-9684.3682	-9394.7256	-9088.4932	-9176.0352	-9491.8857	-9646.2285	-9360.0342	-9086.5547	-9193.2920	-9513.6631	-9680.7275	-9389.9365	-9104.9854	-9185.9512	-9496.7793	-9662.4902	-9390.7783	-9125.4922	-9213.4824	-9525.5713	-9651.72 [...]
diff --git a/mne/io/egi/tests/test_egi.py b/mne/io/egi/tests/test_egi.py
index 73274bd..c038a8a 100644
--- a/mne/io/egi/tests/test_egi.py
+++ b/mne/io/egi/tests/test_egi.py
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 # Authors: Denis A. Engemann  <denis.engemann at gmail.com>
 #          simplified BSD-3 license
 
@@ -6,48 +7,50 @@ import os.path as op
 import warnings
 
 import numpy as np
-from numpy.testing import assert_array_almost_equal, assert_array_equal
+from numpy.testing import assert_array_equal, assert_allclose
 from nose.tools import assert_true, assert_raises, assert_equal
 
-from mne import find_events, pick_types, concatenate_raws
-from mne.io import read_raw_egi, Raw
+from mne import find_events, pick_types
+from mne.io import read_raw_egi
+from mne.io.tests.test_raw import _test_raw_reader
 from mne.io.egi import _combine_triggers
-from mne.utils import _TempDir
+from mne.utils import run_tests_if_main
 
 warnings.simplefilter('always')  # enable b/c these tests throw warnings
 
 base_dir = op.join(op.dirname(op.realpath(__file__)), 'data')
 egi_fname = op.join(base_dir, 'test_egi.raw')
+egi_txt_fname = op.join(base_dir, 'test_egi.txt')
 
 
 def test_io_egi():
     """Test importing EGI simple binary files"""
     # test default
-    tempdir = _TempDir()
+    with open(egi_txt_fname) as fid:
+        data = np.loadtxt(fid)
+    t = data[0]
+    data = data[1:]
+    data *= 1e-6  # data are stored in μV; convert to V
+
     with warnings.catch_warnings(record=True) as w:
-        warnings.simplefilter('always', category=RuntimeWarning)
+        warnings.simplefilter('always')
         raw = read_raw_egi(egi_fname, include=None)
         assert_true('RawEGI' in repr(raw))
-        raw.load_data()  # currently does nothing
-        assert_equal(len(w), 1)
-        assert_true(w[0].category == RuntimeWarning)
+        assert_equal(len(w), 2)
+        assert_true(w[0].category == DeprecationWarning)  # preload=None
+        assert_true(w[1].category == RuntimeWarning)
         msg = 'Did not find any event code with more than one event.'
-        assert_true(msg in '%s' % w[0].message)
+        assert_true(msg in '%s' % w[1].message)
+    data_read, t_read = raw[:256]
+    assert_allclose(t_read, t)
+    assert_allclose(data_read, data, atol=1e-10)
 
     include = ['TRSP', 'XXX1']
-    raw = read_raw_egi(egi_fname, include=include)
-    repr(raw)
-    repr(raw.info)
+    with warnings.catch_warnings(record=True):  # preload=None
+        raw = _test_raw_reader(read_raw_egi, input_fname=egi_fname,
+                               include=include)
 
     assert_equal('eeg' in raw, True)
-    out_fname = op.join(tempdir, 'test_egi_raw.fif')
-    raw.save(out_fname)
-
-    raw2 = Raw(out_fname, preload=True)
-    data1, times1 = raw[:10, :]
-    data2, times2 = raw2[:10, :]
-    assert_array_almost_equal(data1, data2, 9)
-    assert_array_almost_equal(times1, times2)
 
     eeg_chan = [c for c in raw.ch_names if 'EEG' in c]
     assert_equal(len(eeg_chan), 256)
@@ -63,20 +66,17 @@ def test_io_egi():
     triggers = np.array([[0, 1, 1, 0], [0, 0, 1, 0]])
 
     # test trigger functionality
-    assert_raises(RuntimeError, _combine_triggers, triggers, None)
     triggers = np.array([[0, 1, 0, 0], [0, 0, 1, 0]])
     events_ids = [12, 24]
     new_trigger = _combine_triggers(triggers, events_ids)
     assert_array_equal(np.unique(new_trigger), np.unique([0, 12, 24]))
 
-    assert_raises(ValueError, read_raw_egi, egi_fname,
-                  include=['Foo'])
-    assert_raises(ValueError, read_raw_egi, egi_fname,
-                  exclude=['Bar'])
+    assert_raises(ValueError, read_raw_egi, egi_fname, include=['Foo'],
+                  preload=False)
+    assert_raises(ValueError, read_raw_egi, egi_fname, exclude=['Bar'],
+                  preload=False)
     for ii, k in enumerate(include, 1):
         assert_true(k in raw.event_id)
         assert_true(raw.event_id[k] == ii)
 
-    # Make sure concatenation works
-    raw_concat = concatenate_raws([raw.copy(), raw])
-    assert_equal(raw_concat.n_times, 2 * raw.n_times)
+run_tests_if_main()
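
A standalone sketch of the round-trip this test now checks (the file names
are placeholders, and the txt layout -- times in row 0, channels in μV
below -- is taken from the test above; this is an illustration, not part of
the commit):

    import numpy as np
    from mne.io import read_raw_egi

    data = np.loadtxt('test_egi.txt')    # placeholder path to the dump
    t, data = data[0], data[1:] * 1e-6   # μV -> V, as in the test
    raw = read_raw_egi('test_egi.raw', include=None, preload=True)
    data_read, t_read = raw[:256]        # the 256 EEG channels
    np.testing.assert_allclose(t_read, t)
    np.testing.assert_allclose(data_read, data, atol=1e-10)
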
diff --git a/mne/io/fiff/raw.py b/mne/io/fiff/raw.py
index 5d1fc42..78b8374 100644
--- a/mne/io/fiff/raw.py
+++ b/mne/io/fiff/raw.py
@@ -21,6 +21,7 @@ from ..tag import read_tag, read_tag_info
 from ..proj import make_eeg_average_ref_proj, _needs_eeg_average_ref_proj
 from ..compensator import get_current_comp, set_current_comp, make_compensator
 from ..base import _BaseRaw, _RawShell, _check_raw_compatibility
+from ..utils import _mult_cal_one
 
 from ...utils import check_fname, logger, verbose
 
@@ -151,7 +152,7 @@ class RawFIF(_BaseRaw):
         with ff as fid:
             #   Read the measurement info
 
-            info, meas = read_meas_info(fid, tree)
+            info, meas = read_meas_info(fid, tree, clean_bads=True)
 
             #   Locate the data of interest
             raw_node = dir_tree_find(meas, FIFF.FIFFB_RAW_DATA)
@@ -341,9 +342,10 @@ class RawFIF(_BaseRaw):
         self._dtype_ = dtype
         return dtype
 
-    def _read_segment_file(self, data, idx, offset, fi, start, stop,
-                           cals, mult):
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
         """Read a segment of data from a file"""
+        stop -= 1
+        offset = 0
         with _fiff_get_fid(self._filenames[fi]) as fid:
             for this in self._raw_extras[fi]:
                 #  Do we need this buffer
@@ -381,19 +383,8 @@ class RawFIF(_BaseRaw):
                                                   self.info['nchan']),
                                            rlims=(first_pick, last_pick)).data
                             one.shape = (picksamp, self.info['nchan'])
-                            one = one.T.astype(data.dtype)
-                            data_view = data[:, offset:(offset + picksamp)]
-                            if mult is not None:
-                                data_view[:] = np.dot(mult[fi], one)
-                            else:  # cals is not None
-                                if isinstance(idx, slice):
-                                    data_view[:] = one[idx]
-                                else:
-                                    # faster to iterate than doing
-                                    # one = one[idx]
-                                    for ii, ix in enumerate(idx):
-                                        data_view[ii] = one[ix]
-                                data_view *= cals
+                            _mult_cal_one(data[:, offset:(offset + picksamp)],
+                                          one.T, idx, cals, mult)
                         offset += picksamp
 
                 #   Done?
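
The block deleted above is exactly what the shared helper centralizes. A
sketch of the behaviour _mult_cal_one should provide, reconstructed from the
deleted lines (the signature is inferred from the call sites; the real
implementation lives in mne/io/utils.py, per the import added above):

    import numpy as np

    def _mult_cal_one(data_view, one, idx, cals, mult):
        """Pick/project channels of chunk `one` into data_view, in place."""
        one = one.astype(data_view.dtype)
        if mult is not None:
            data_view[:] = np.dot(mult, one)  # projector folds in cals
        else:
            if isinstance(idx, slice):
                data_view[:] = one[idx]
            else:  # iterating beats fancy-indexing `one` here
                for ii, ix in enumerate(idx):
                    data_view[ii] = one[ix]
            if cals is not None:  # assumed guard; KIT passes cals=None
                data_view *= cals
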
diff --git a/mne/io/fiff/tests/test_raw.py b/mne/io/fiff/tests/test_raw_fiff.py
similarity index 99%
rename from mne/io/fiff/tests/test_raw.py
rename to mne/io/fiff/tests/test_raw_fiff.py
index e3f561e..5feafab 100644
--- a/mne/io/fiff/tests/test_raw.py
+++ b/mne/io/fiff/tests/test_raw_fiff.py
@@ -1,5 +1,3 @@
-from __future__ import print_function
-
 # Author: Alexandre Gramfort <alexandre.gramfort at telecom-paristech.fr>
 #         Denis Engemann <denis.engemann at gmail.com>
 #
@@ -20,7 +18,7 @@ from nose.tools import assert_true, assert_raises, assert_not_equal
 from mne.datasets import testing
 from mne.io.constants import FIFF
 from mne.io import Raw, RawArray, concatenate_raws, read_raw_fif
-from mne.io.tests.test_raw import _test_concat
+from mne.io.tests.test_raw import _test_concat, _test_raw_reader
 from mne import (concatenate_events, find_events, equalize_channels,
                  compute_proj_raw, pick_types, pick_channels, create_info)
 from mne.utils import (_TempDir, requires_pandas, slow_test,
@@ -44,6 +42,7 @@ bad_file_works = op.join(base_dir, 'test_bads.txt')
 bad_file_wrong = op.join(base_dir, 'test_wrong_bads.txt')
 hp_fname = op.join(base_dir, 'test_chpi_raw_hp.txt')
 hp_fif_fname = op.join(base_dir, 'test_chpi_raw_sss.fif')
+rng = np.random.RandomState(0)
 
 
 def test_fix_types():
@@ -425,7 +424,7 @@ def test_io_raw():
     raw = Raw(fif_fname).crop(0, 3.5, False)
     raw.load_data()
     # put in some data that we know the values of
-    data = np.random.randn(raw._data.shape[0], raw._data.shape[1])
+    data = rng.randn(raw._data.shape[0], raw._data.shape[1])
     raw._data[:, :] = data
     # save it somewhere
     fname = op.join(tempdir, 'test_copy_raw.fif')
@@ -523,17 +522,18 @@ def test_io_raw():
 def test_io_complex():
     """Test IO with complex data types
     """
+    rng = np.random.RandomState(0)
     tempdir = _TempDir()
     dtypes = [np.complex64, np.complex128]
 
-    raw = Raw(fif_fname, preload=True)
+    raw = _test_raw_reader(Raw, fnames=fif_fname)
     picks = np.arange(5)
     start, stop = raw.time_as_index([0, 5])
 
     data_orig, _ = raw[picks, start:stop]
 
     for di, dtype in enumerate(dtypes):
-        imag_rand = np.array(1j * np.random.randn(data_orig.shape[0],
+        imag_rand = np.array(1j * rng.randn(data_orig.shape[0],
                              data_orig.shape[1]), dtype)
 
         raw_cp = raw.copy()
@@ -658,7 +658,7 @@ def test_preload_modify():
         nsamp = raw.last_samp - raw.first_samp + 1
         picks = pick_types(raw.info, meg='grad', exclude='bads')
 
-        data = np.random.randn(len(picks), nsamp // 2)
+        data = rng.randn(len(picks), nsamp // 2)
 
         try:
             raw[picks, :nsamp // 2] = data
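
The switch from the module-level np.random functions to a seeded RandomState
(rng above) makes test failures reproducible: every run draws the same data.
The pattern in isolation:

    import numpy as np

    rng = np.random.RandomState(0)           # fixed seed
    a = rng.randn(3, 2)
    b = np.random.RandomState(0).randn(3, 2)
    np.testing.assert_array_equal(a, b)      # same seed, same draws
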
diff --git a/mne/io/kit/constants.py b/mne/io/kit/constants.py
index 7941223..3e96ea4 100644
--- a/mne/io/kit/constants.py
+++ b/mne/io/kit/constants.py
@@ -43,14 +43,14 @@ KIT.DIG_POINTS = 10000
 KIT_NY = Bunch(**KIT)
 KIT_AD = Bunch(**KIT)
 
-# NYU-system channel information
+# NY-system channel information
 KIT_NY.NCHAN = 192
 KIT_NY.NMEGCHAN = 157
 KIT_NY.NREFCHAN = 3
 KIT_NY.NMISCCHAN = 32
 KIT_NY.N_SENS = KIT_NY.NMEGCHAN + KIT_NY.NREFCHAN
 # 12-bit A-to-D converter, one bit for signed integer. range +/- 2048
-KIT_NY.DYNAMIC_RANGE = 2 ** 12 / 2
+KIT_NY.DYNAMIC_RANGE = 2 ** 11
 # amplifier information
 KIT_NY.GAIN1_BIT = 11  # stored in Bit 11-12
 KIT_NY.GAIN1_MASK = 2 ** 11 + 2 ** 12
@@ -71,6 +71,13 @@ KIT_NY.HPFS = [0, 1, 3]
 KIT_NY.LPFS = [10, 20, 50, 100, 200, 500, 1000, 2000]
 
 
+# Maryland-system channel information
+# Virtually the same as the NY-system, except for a new ADC circa July 2014
+# 16-bit A-to-D converter, one bit for signed integer. range +/- 32768
+KIT_MD = Bunch(**KIT_NY)
+KIT_MD.DYNAMIC_RANGE = 2 ** 15
+
+
 # AD-system channel information
 KIT_AD.NCHAN = 256
 KIT_AD.NMEGCHAN = 208
@@ -78,7 +85,7 @@ KIT_AD.NREFCHAN = 16
 KIT_AD.NMISCCHAN = 32
 KIT_AD.N_SENS = KIT_AD.NMEGCHAN + KIT_AD.NREFCHAN
 # 16-bit A-to-D converter, one bit for signed integer. range +/- 32768
-KIT_AD.DYNAMIC_RANGE = 2 ** 16 / 2
+KIT_AD.DYNAMIC_RANGE = 2 ** 15
 # amplifier information
 KIT_AD.GAIN1_BIT = 12  # stored in Bit 12-14
 KIT_AD.GAIN1_MASK = 2 ** 12 + 2 ** 13 + 2 ** 14
@@ -97,3 +104,20 @@ KIT_AD.HPFS = [0, 0.03, 0.1, 0.3, 1, 3, 10, 30]
 # LPF options: 0:10Hz, 1:20Hz, 2:50Hz, 3:100Hz, 4:200Hz, 5:500Hz,
 #              6:1,000Hz, 7:10,000Hz
 KIT_AD.LPFS = [10, 20, 50, 100, 200, 500, 1000, 10000]
+
+
+# KIT recording system is encoded in the SQD file as integer:
+KIT_CONSTANTS = {32: KIT_NY,  # NYU-NY, July 7, 2008 -
+                 33: KIT_NY,  # NYU-NY, January 24, 2009 -
+                 34: KIT_NY,  # NYU-NY, January 22, 2010 -
+                 # 440 NYU-AD, initial launch May 20, 2011 -
+                 441: KIT_AD,  # NYU-AD more channels July 11, 2012 -
+                 442: KIT_AD,  # NYU-AD move to NYUAD campus Nov 20, 2014 -
+                 51: KIT_NY,  # UMD
+                 52: KIT_MD,  # UMD update to 16 bit ADC, July 4, 2014 -
+                 53: KIT_MD}  # UMD December 4, 2014 -
+
+SYSNAMES = {33: 'NYU 160ch System since Jan24 2009',
+            34: 'NYU 160ch System since Jan24 2009',
+            441: "New York University Abu Dhabi",
+            442: "New York University Abu Dhabi"}
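
On the DYNAMIC_RANGE edits in this file: 2 ** 12 / 2 is true division, so
under Python 3 the constant silently became the float 2048.0, while 2 ** 11
is the same value as an int and states directly that one of the bits is the
sign bit (the motivation is inferred from the change itself). A quick check:

    assert 2 ** 12 / 2 == 2048.0 and isinstance(2 ** 12 / 2, float)
    assert 2 ** 11 == 2048 and isinstance(2 ** 11, int)
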
diff --git a/mne/io/kit/kit.py b/mne/io/kit/kit.py
index df0eb35..fa46233 100644
--- a/mne/io/kit/kit.py
+++ b/mne/io/kit/kit.py
@@ -11,6 +11,7 @@ RawKIT class is adapted from Denis Engemann et al.'s mne_bti2fiff.py
 from os import SEEK_CUR, path as op
 from struct import unpack
 import time
+from warnings import warn
 
 import numpy as np
 from scipy import linalg
@@ -21,17 +22,18 @@ from ...utils import verbose, logger
 from ...transforms import (apply_trans, als_ras_trans, als_ras_trans_mm,
                            get_ras_to_neuromag_trans, Transform)
 from ..base import _BaseRaw
+from ..utils import _mult_cal_one
 from ...epochs import _BaseEpochs
 from ..constants import FIFF
 from ..meas_info import _empty_info, _read_dig_points, _make_dig_points
-from .constants import KIT, KIT_NY, KIT_AD
+from .constants import KIT, KIT_CONSTANTS, SYSNAMES
 from .coreg import read_mrk
 from ...externals.six import string_types
 from ...event import read_events
 
 
 class RawKIT(_BaseRaw):
-    """Raw object from KIT SQD file adapted from bti/raw.py
+    """Raw object from KIT SQD file
 
     Parameters
     ----------
@@ -68,6 +70,9 @@ class RawKIT(_BaseRaw):
         large amount of memory). If preload is a string, preload is the
         file name of a memory-mapped file which is used to store the data
         on the hard drive (slower, requires less memory).
+    stim_code : 'binary' | 'channel'
+        How to decode trigger values from stim channels. 'binary' reads stim
+        channel events as binary code; 'channel' encodes the channel number.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -84,7 +89,8 @@ class RawKIT(_BaseRaw):
     """
     @verbose
     def __init__(self, input_fname, mrk=None, elp=None, hsp=None, stim='>',
-                 slope='-', stimthresh=1, preload=False, verbose=None):
+                 slope='-', stimthresh=1, preload=False, stim_code='binary',
+                 verbose=None):
         logger.info('Extracting SQD Parameters from %s...' % input_fname)
         input_fname = op.abspath(input_fname)
         self.preload = False
@@ -99,7 +105,7 @@ class RawKIT(_BaseRaw):
 
         last_samps = [kit_info['n_samples'] - 1]
         self._raw_extras = [kit_info]
-        self._set_stimchannels(info, stim)
+        self._set_stimchannels(info, stim, stim_code)
         super(RawKIT, self).__init__(
             info, preload, last_samps=last_samps, filenames=[input_fname],
             raw_extras=self._raw_extras, verbose=verbose)
@@ -108,11 +114,11 @@ class RawKIT(_BaseRaw):
             mrk = [read_mrk(marker) if isinstance(marker, string_types)
                    else marker for marker in mrk]
             mrk = np.mean(mrk, axis=0)
-        if (mrk is not None and elp is not None and hsp is not None):
+        if mrk is not None and elp is not None and hsp is not None:
             dig_points, dev_head_t = _set_dig_kit(mrk, elp, hsp)
             self.info['dig'] = dig_points
             self.info['dev_head_t'] = dev_head_t
-        elif (mrk is not None or elp is not None or hsp is not None):
+        elif mrk is not None or elp is not None or hsp is not None:
             raise ValueError('mrk, elp and hsp need to be provided as a group '
                              '(all or none)')
 
@@ -145,7 +151,7 @@ class RawKIT(_BaseRaw):
 
         return stim_ch
 
-    def _set_stimchannels(self, info, stim='<'):
+    def _set_stimchannels(self, info, stim, stim_code):
         """Specify how the trigger channel is synthesized from analog channels.
 
         Has to be done before loading data. For a RawKIT instance that has been
@@ -164,7 +170,14 @@ class RawKIT(_BaseRaw):
             in sequence.
             '>' means the largest trigger assigned to the last channel
             in sequence.
+        stim_code : 'binary' | 'channel'
+            How to decode trigger values from stim channels. 'binary' reads
+            stim events as binary code; 'channel' encodes the channel number.
         """
+        if stim_code not in ('binary', 'channel'):
+            raise ValueError("stim_code=%r, needs to be 'binary' or 'channel'"
+                             % stim_code)
+
         if stim is not None:
             if isinstance(stim, str):
                 picks = pick_types(info, meg=False, ref_meg=False,
@@ -202,16 +215,11 @@ class RawKIT(_BaseRaw):
             raise NotImplementedError(err)
 
         self._raw_extras[0]['stim'] = stim
+        self._raw_extras[0]['stim_code'] = stim_code
 
     @verbose
-    def _read_segment_file(self, data, idx, offset, fi, start, stop,
-                           cals, mult):
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
         """Read a chunk of raw data"""
-        # cals are all unity, so can be ignored
-
-        # RawFIF and RawEDF think of "stop" differently, easiest to increment
-        # here and refactor later
-        stop += 1
         with open(self._filenames[fi], 'rb', buffering=0) as fid:
             # extract data
             data_offset = KIT.RAW_OFFSET
@@ -245,13 +253,18 @@ class RawKIT(_BaseRaw):
                 trig_chs = trig_chs < self._raw_extras[0]['stimthresh']
             else:
                 raise ValueError("slope needs to be '+' or '-'")
-            trig_vals = np.array(
-                2 ** np.arange(len(self._raw_extras[0]['stim'])), ndmin=2).T
+
+            # trigger value
+            if self._raw_extras[0]['stim_code'] == 'binary':
+                ntrigchan = len(self._raw_extras[0]['stim'])
+                trig_vals = np.array(2 ** np.arange(ntrigchan), ndmin=2).T
+            else:
+                trig_vals = np.reshape(self._raw_extras[0]['stim'], (-1, 1))
             trig_chs = trig_chs * trig_vals
             stim_ch = np.array(trig_chs.sum(axis=0), ndmin=2)
             data_ = np.vstack((data_, stim_ch))
-        data[:, offset:offset + (stop - start)] = \
-            np.dot(mult, data_) if mult is not None else data_[idx]
+        # cals are all unity, so can be ignored
+        _mult_cal_one(data, data_, idx, None, mult)
 
 
 class EpochsKIT(_BaseEpochs):
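
For the stim_code branch added above, the two codings differ only in the
per-channel values multiplied into the binarized triggers before the sum. A
worked example with two stim channels (the channel numbers 161 and 162 are
made up for illustration):

    import numpy as np

    trig_chs = np.array([[0, 1, 0, 1],   # threshold crossings per sample
                         [0, 0, 1, 1]])
    # 'binary': channel i contributes 2 ** i
    vals = np.array(2 ** np.arange(len(trig_chs)), ndmin=2).T
    print((trig_chs * vals).sum(axis=0))  # [0 1 2 3]
    # 'channel': channel i contributes its own channel number
    vals = np.reshape([161, 162], (-1, 1))
    print((trig_chs * vals).sum(axis=0))  # [  0 161 162 323]

'channel' yields readable event values, but simultaneous triggers sum
(161 + 162 = 323) and cannot be told apart, whereas the binary code always
decomposes uniquely.
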
@@ -435,7 +448,7 @@ class EpochsKIT(_BaseEpochs):
         return data
 
 
-def _set_dig_kit(mrk, elp, hsp, auto_decimate=True):
+def _set_dig_kit(mrk, elp, hsp):
     """Add landmark points and head shape data to the KIT instance
 
     Digitizer data (elp and hsp) are represented in [mm] in the Polhemus
@@ -454,9 +467,6 @@ def _set_dig_kit(mrk, elp, hsp, auto_decimate=True):
         Digitizer head shape points, or path to head shape file. If more
         than 10,000 points are in the head shape, they are automatically
         decimated.
-    auto_decimate : bool
-        Decimate hsp points for head shape files with more than 10'000
-        points.
 
     Returns
     -------
@@ -481,14 +491,12 @@ def _set_dig_kit(mrk, elp, hsp, auto_decimate=True):
     if isinstance(elp, string_types):
         elp_points = _read_dig_points(elp)
         if len(elp_points) != 8:
-            err = ("File %r should contain 8 points; got shape "
-                   "%s." % (elp, elp_points.shape))
-            raise ValueError(err)
+            raise ValueError("File %r should contain 8 points; got shape "
+                             "%s." % (elp, elp_points.shape))
         elp = elp_points
-
     elif len(elp) != 8:
-        err = ("ELP should contain 8 points; got shape "
-               "%s." % (elp.shape,))
+        raise ValueError("ELP should contain 8 points; got shape "
+                         "%s." % (elp.shape,))
     if isinstance(mrk, string_types):
         mrk = read_mrk(mrk)
 
@@ -534,21 +542,26 @@ def get_kit_info(rawfile):
         fid.seek(KIT.BASIC_INFO)
         basic_offset = unpack('i', fid.read(KIT.INT))[0]
         fid.seek(basic_offset)
-        # skips version, revision, sysid
-        fid.seek(KIT.INT * 3, SEEK_CUR)
+        # skips version, revision
+        fid.seek(KIT.INT * 2, SEEK_CUR)
+        sysid = unpack('i', fid.read(KIT.INT))[0]
         # basic info
         sysname = unpack('128s', fid.read(KIT.STRING))
         sysname = sysname[0].decode().split('\n')[0]
+        if sysid not in KIT_CONSTANTS:
+            raise NotImplementedError("Data from the KIT system %s (ID %s) "
+                                      "can not currently be read, please "
+                                      "contact the MNE-Python developers."
+                                      % (sysname, sysid))
+        KIT_SYS = KIT_CONSTANTS[sysid]
+        if sysid in SYSNAMES:
+            if sysname != SYSNAMES[sysid]:
+                warn("KIT file %s has system-name %r, expected %r"
+                     % (rawfile, sysname, SYSNAMES[sysid]))
+
+        # channels
         fid.seek(KIT.STRING, SEEK_CUR)  # skips modelname
         sqd['nchan'] = unpack('i', fid.read(KIT.INT))[0]
-
-        if sysname == 'New York University Abu Dhabi':
-            KIT_SYS = KIT_AD
-        elif sysname == 'NYU 160ch System since Jan24 2009':
-            KIT_SYS = KIT_NY
-        else:
-            raise NotImplementedError
-
         # channel locations
         fid.seek(KIT_SYS.CHAN_LOC_OFFSET)
         chan_offset = unpack('i', fid.read(KIT.INT))[0]
@@ -610,10 +623,9 @@ def get_kit_info(rawfile):
         sens_offset = unpack('i', fid.read(KIT_SYS.INT))[0]
         fid.seek(sens_offset)
         sens = np.fromfile(fid, dtype='d', count=sqd['nchan'] * 2)
-        sensitivities = (np.reshape(sens, (sqd['nchan'], 2))
-                         [:KIT_SYS.N_SENS, 1])
+        sens.shape = (sqd['nchan'], 2)
         sqd['sensor_gain'] = np.ones(KIT_SYS.NCHAN)
-        sqd['sensor_gain'][:KIT_SYS.N_SENS] = sensitivities
+        sqd['sensor_gain'][:KIT_SYS.N_SENS] = sens[:KIT_SYS.N_SENS, 1]
 
         fid.seek(KIT_SYS.SAMPLE_INFO)
         acqcond_offset = unpack('i', fid.read(KIT_SYS.INT))[0]
@@ -640,10 +652,10 @@ def get_kit_info(rawfile):
         sqd['acq_type'] = acq_type
 
         # Create raw.info dict for raw fif object with SQD data
-        info = _empty_info()
+        info = _empty_info(float(sqd['sfreq']))
         info.update(meas_date=int(time.time()), lowpass=sqd['lowpass'],
-                    highpass=sqd['highpass'], sfreq=float(sqd['sfreq']),
-                    filename=rawfile, nchan=sqd['nchan'])
+                    highpass=sqd['highpass'], filename=rawfile,
+                    nchan=sqd['nchan'], buffer_size_sec=1.)
 
         # Creates a list of dicts of meg channels for raw.info
         logger.info('Setting channel info structure...')
@@ -726,7 +738,8 @@ def get_kit_info(rawfile):
 
 
 def read_raw_kit(input_fname, mrk=None, elp=None, hsp=None, stim='>',
-                 slope='-', stimthresh=1, preload=False, verbose=None):
+                 slope='-', stimthresh=1, preload=False, stim_code='binary',
+                 verbose=None):
     """Reader function for KIT conversion to FIF
 
     Parameters
@@ -761,6 +774,9 @@ def read_raw_kit(input_fname, mrk=None, elp=None, hsp=None, stim='>',
     preload : bool
         If True, all data are loaded at initialization.
         If False, data are not read until save.
+    stim_code : 'binary' | 'channel'
+        How to decode trigger values from stim channels. 'binary' reads stim
+        channel events as a binary code; 'channel' encodes the channel number.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -775,7 +791,7 @@ def read_raw_kit(input_fname, mrk=None, elp=None, hsp=None, stim='>',
     """
     return RawKIT(input_fname=input_fname, mrk=mrk, elp=elp, hsp=hsp,
                   stim=stim, slope=slope, stimthresh=stimthresh,
-                  preload=preload, verbose=verbose)
+                  preload=preload, stim_code=stim_code, verbose=verbose)
 
 
 def read_epochs_kit(input_fname, events, event_id=None,
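
As a quick illustration of the new option, a minimal sketch of the two decoding
modes (the path ``data.sqd`` is hypothetical):

    import mne

    # 'binary' (default): trigger channel i contributes 2 ** i to the value
    raw_bin = mne.io.read_raw_kit('data.sqd', stim='<', stim_code='binary')

    # 'channel': an active trigger channel contributes its own channel number
    raw_chan = mne.io.read_raw_kit('data.sqd', stim='<', stim_code='channel')
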
diff --git a/mne/io/kit/tests/test_kit.py b/mne/io/kit/tests/test_kit.py
index 72b3028..2e171a5 100644
--- a/mne/io/kit/tests/test_kit.py
+++ b/mne/io/kit/tests/test_kit.py
@@ -9,15 +9,15 @@ import os.path as op
 import inspect
 import numpy as np
 from numpy.testing import assert_array_almost_equal, assert_array_equal
-from nose.tools import assert_equal, assert_raises, assert_true
+from nose.tools import assert_raises, assert_true
 import scipy.io
 
-from mne import pick_types, concatenate_raws, Epochs, read_events
-from mne.utils import _TempDir, run_tests_if_main
-from mne.io import Raw
-from mne.io import read_raw_kit, read_epochs_kit
+from mne import pick_types, Epochs, find_events, read_events
+from mne.tests.common import assert_dig_allclose
+from mne.utils import run_tests_if_main
+from mne.io import Raw, read_raw_kit, read_epochs_kit
 from mne.io.kit.coreg import read_sns
-from mne.io.tests.test_raw import _test_concat
+from mne.io.tests.test_raw import _test_raw_reader
 
 FILE = inspect.getfile(inspect.currentframe())
 parent_dir = op.dirname(op.abspath(FILE))
@@ -32,12 +32,6 @@ elp_path = op.join(data_dir, 'test_elp.txt')
 hsp_path = op.join(data_dir, 'test_hsp.txt')
 
 
-def test_concat():
-    """Test EDF concatenation
-    """
-    _test_concat(read_raw_kit, sqd_path)
-
-
 def test_data():
     """Test reading raw kit files
     """
@@ -49,13 +43,24 @@ def test_data():
     assert_raises(ValueError, read_raw_kit, sqd_path, None, None, None,
                   list(range(167, 159, -1)), '*', 1, True)
     # check functionality
-    _ = read_raw_kit(sqd_path, [mrk2_path, mrk3_path], elp_path,
-                     hsp_path)
-    raw_py = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path,
-                          stim=list(range(167, 159, -1)), slope='+',
-                          stimthresh=1, preload=True)
+    raw_mrk = read_raw_kit(sqd_path, [mrk2_path, mrk3_path], elp_path,
+                           hsp_path)
+    raw_py = _test_raw_reader(read_raw_kit,
+                              input_fname=sqd_path, mrk=mrk_path, elp=elp_path,
+                              hsp=hsp_path, stim=list(range(167, 159, -1)),
+                              slope='+', stimthresh=1)
     assert_true('RawKIT' in repr(raw_py))
 
+    # Test stim channel
+    raw_stim = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path, stim='<',
+                            preload=False)
+    for raw in [raw_py, raw_stim, raw_mrk]:
+        stim_pick = pick_types(raw.info, meg=False, ref_meg=False,
+                               stim=True, exclude='bads')
+        stim1, _ = raw[stim_pick]
+        stim2 = np.array(raw.read_stim_ch(), ndmin=2)
+        assert_array_equal(stim1, stim2)
+
     # Binary file only stores the sensor channels
     py_picks = pick_types(raw_py.info, exclude='bads')
     raw_bin = op.join(data_dir, 'test_bin_raw.fif')
@@ -76,10 +81,6 @@ def test_data():
     data_py, _ = raw_py[py_picks]
     assert_array_almost_equal(data_py, data_bin)
 
-    # Make sure concatenation works
-    raw_concat = concatenate_raws([raw_py.copy(), raw_py])
-    assert_equal(raw_concat.n_times, 2 * raw_py.n_times)
-
 
 def test_epochs():
     raw = read_raw_kit(sqd_path, stim=None)
@@ -91,37 +92,28 @@ def test_epochs():
     assert_array_equal(data1, data11)
 
 
-def test_read_segment():
-    """Test writing raw kit files when preload is False
-    """
-    tempdir = _TempDir()
-    raw1 = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path, stim='<',
-                        preload=False)
-    raw1_file = op.join(tempdir, 'test1-raw.fif')
-    raw1.save(raw1_file, buffer_size_sec=.1, overwrite=True)
-    raw2 = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path, stim='<',
-                        preload=True)
-    raw2_file = op.join(tempdir, 'test2-raw.fif')
-    raw2.save(raw2_file, buffer_size_sec=.1, overwrite=True)
-    data1, times1 = raw1[0, 0:1]
-
-    raw1 = Raw(raw1_file, preload=True)
-    raw2 = Raw(raw2_file, preload=True)
-    assert_array_equal(raw1._data, raw2._data)
-    data2, times2 = raw2[0, 0:1]
-    assert_array_almost_equal(data1, data2)
-    assert_array_almost_equal(times1, times2)
-    raw3 = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path, stim='<',
-                        preload=True)
-    assert_array_almost_equal(raw1._data, raw3._data)
-    raw4 = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path, stim='<',
-                        preload=False)
-    raw4.load_data()
-    buffer_fname = op.join(tempdir, 'buffer')
-    assert_array_almost_equal(raw1._data, raw4._data)
-    raw5 = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path, stim='<',
-                        preload=buffer_fname)
-    assert_array_almost_equal(raw1._data, raw5._data)
+def test_raw_events():
+    def evts(a, b, c, d, e, f=None):
+        out = [[269, a, b], [281, b, c], [1552, c, d], [1564, d, e]]
+        if f is not None:
+            out.append([2000, e, f])
+        return out
+
+    raw = read_raw_kit(sqd_path)
+    assert_array_equal(find_events(raw, output='step', consecutive=True),
+                       evts(255, 254, 255, 254, 255, 0))
+
+    raw = read_raw_kit(sqd_path, slope='+')
+    assert_array_equal(find_events(raw, output='step', consecutive=True),
+                       evts(0, 1, 0, 1, 0))
+
+    raw = read_raw_kit(sqd_path, stim='<', slope='+')
+    assert_array_equal(find_events(raw, output='step', consecutive=True),
+                       evts(0, 128, 0, 128, 0))
+
+    raw = read_raw_kit(sqd_path, stim='<', slope='+', stim_code='channel')
+    assert_array_equal(find_events(raw, output='step', consecutive=True),
+                       evts(0, 160, 0, 160, 0))
 
 
 def test_ch_loc():
@@ -146,18 +138,9 @@ def test_ch_loc():
     # test when more than one marker file provided
     mrks = [mrk_path, mrk2_path, mrk3_path]
     read_raw_kit(sqd_path, mrks, elp_path, hsp_path, preload=False)
-
-
-def test_stim_ch():
-    """Test raw kit stim ch
-    """
-    raw = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path, stim='<',
-                       slope='+', preload=True)
-    stim_pick = pick_types(raw.info, meg=False, ref_meg=False,
-                           stim=True, exclude='bads')
-    stim1, _ = raw[stim_pick]
-    stim2 = np.array(raw.read_stim_ch(), ndmin=2)
-    assert_array_equal(stim1, stim2)
-
+    # this dataset does not have the equivalent set of points :(
+    raw_bin.info['dig'] = raw_bin.info['dig'][:8]
+    raw_py.info['dig'] = raw_py.info['dig'][:8]
+    assert_dig_allclose(raw_py.info, raw_bin.info)
 
 run_tests_if_main()
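
For reference, the step-output event extraction exercised by ``test_raw_events``
can be reproduced on any KIT file like so (path hypothetical):

    import mne

    raw = mne.io.read_raw_kit('data.sqd', stim='<', slope='+',
                              stim_code='channel')
    # each row of `events` is [sample, value before step, value after step]
    events = mne.find_events(raw, output='step', consecutive=True)
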
diff --git a/mne/io/meas_info.py b/mne/io/meas_info.py
index f2e808e..efaf554 100644
--- a/mne/io/meas_info.py
+++ b/mne/io/meas_info.py
@@ -18,7 +18,7 @@ from .open import fiff_open
 from .tree import dir_tree_find
 from .tag import read_tag, find_tag
 from .proj import _read_proj, _write_proj, _uniquify_projs
-from .ctf import read_ctf_comp, write_ctf_comp
+from .ctf_comp import read_ctf_comp, write_ctf_comp
 from .write import (start_file, end_file, start_block, end_block,
                     write_string, write_dig_point, write_float, write_int,
                     write_coord_trans, write_ch_info, write_name_list,
@@ -76,13 +76,13 @@ class Info(dict):
         Event list, usually extracted from the stim channels.
         See: :ref:`faq` for details.
     hpi_results : list of dict
-        Head position indicator (HPI) digitization points.
-        See: :ref:`faq` for details.
+        Head position indicator (HPI) digitization points and fit information
+        (e.g., the resulting transform). See: :ref:`faq` for details.
     meas_date : list of int
         The first element of this list is a POSIX timestamp (seconds since
         1970-01-01 00:00:00) denoting the date and time at which the
-        measurement was taken.
-        TODO: what are the other fields?
+        measurement was taken. The second element is the number of
+        microseconds.
     nchan : int
         Number of channels.
     projs : list of dict
@@ -94,7 +94,7 @@ class Info(dict):
     acq_pars : str | None
         MEG system acquisition parameters.
     acq_stim : str | None
-        TODO: What is this?
+        MEG system stimulus parameters.
     buffer_size_sec : float | None
         Buffer size (in seconds) when reading the raw data in chunks.
     ctf_head_t : dict | None
@@ -123,10 +123,11 @@ class Info(dict):
     highpass : float | None
         Highpass corner frequency in Hertz. Zero indicates a DC recording.
     hpi_meas : list of dict | None
-        HPI measurements.
-        TODO: What is this exactly?
-    hpi_subsystem: | None
-        TODO: What is this?
+        HPI measurements that were taken at the start of the recording
+        (e.g. coil frequencies).
+    hpi_subsystem : dict | None
+        Information about the HPI subsystem that was used (e.g., event
+        channel used for cHPI measurements).
     line_freq : float | None
         Frequency of the power line in Hertz.
     lowpass : float | None
@@ -195,6 +196,8 @@ class Info(dict):
                                             for ch_type, count
                                             in ch_counts.items())
             strs.append('%s : %s%s' % (k, str(type(v))[7:-2], entr))
+            if k in ['sfreq', 'lowpass', 'highpass']:
+                strs[-1] += ' Hz'
         strs_non_empty = sorted(s for s in strs if '|' in s)
         strs_empty = sorted(s for s in strs if '|' not in s)
         st = '\n    '.join(strs_non_empty + strs_empty)
@@ -381,16 +384,6 @@ def _make_dig_points(nasion=None, lpa=None, rpa=None, hpi=None,
         List of digitizer points to be added to the info['dig'].
     """
     dig = []
-    if nasion is not None:
-        nasion = np.asarray(nasion)
-        if nasion.shape == (3,):
-            dig.append({'r': nasion, 'ident': FIFF.FIFFV_POINT_NASION,
-                        'kind': FIFF.FIFFV_POINT_CARDINAL,
-                        'coord_frame':  FIFF.FIFFV_COORD_HEAD})
-        else:
-            msg = ('Nasion should have the shape (3,) instead of %s'
-                   % (nasion.shape,))
-            raise ValueError(msg)
     if lpa is not None:
         lpa = np.asarray(lpa)
         if lpa.shape == (3,):
@@ -401,6 +394,16 @@ def _make_dig_points(nasion=None, lpa=None, rpa=None, hpi=None,
             msg = ('LPA should have the shape (3,) instead of %s'
                    % (lpa.shape,))
             raise ValueError(msg)
+    if nasion is not None:
+        nasion = np.asarray(nasion)
+        if nasion.shape == (3,):
+            dig.append({'r': nasion, 'ident': FIFF.FIFFV_POINT_NASION,
+                        'kind': FIFF.FIFFV_POINT_CARDINAL,
+                        'coord_frame':  FIFF.FIFFV_COORD_HEAD})
+        else:
+            msg = ('Nasion should have the shape (3,) instead of %s'
+                   % (nasion.shape,))
+            raise ValueError(msg)
     if rpa is not None:
         rpa = np.asarray(rpa)
         if rpa.shape == (3,):
@@ -415,7 +418,7 @@ def _make_dig_points(nasion=None, lpa=None, rpa=None, hpi=None,
         hpi = np.asarray(hpi)
         if hpi.shape[1] == 3:
             for idx, point in enumerate(hpi):
-                dig.append({'r': point, 'ident': idx,
+                dig.append({'r': point, 'ident': idx + 1,
                             'kind': FIFF.FIFFV_POINT_HPI,
                             'coord_frame': FIFF.FIFFV_COORD_HEAD})
         else:
@@ -426,7 +429,7 @@ def _make_dig_points(nasion=None, lpa=None, rpa=None, hpi=None,
         dig_points = np.asarray(dig_points)
         if dig_points.shape[1] == 3:
             for idx, point in enumerate(dig_points):
-                dig.append({'r': point, 'ident': idx,
+                dig.append({'r': point, 'ident': idx + 1,
                             'kind': FIFF.FIFFV_POINT_EXTRA,
                             'coord_frame': FIFF.FIFFV_COORD_HEAD})
         else:
@@ -593,7 +596,7 @@ def read_meas_info(fid, tree, clean_bads=False, verbose=None):
         elif kind == FIFF.FIFF_LINE_FREQ:
             tag = read_tag(fid, pos)
             line_freq = float(tag.data)
-        elif kind == FIFF.FIFF_CUSTOM_REF:
+        elif kind in [FIFF.FIFF_MNE_CUSTOM_REF, 236]:  # 236 used before v0.11
             tag = read_tag(fid, pos)
             custom_ref_applied = bool(tag.data)
 
@@ -861,7 +864,7 @@ def read_meas_info(fid, tree, clean_bads=False, verbose=None):
 
     info['nchan'] = nchan
     info['sfreq'] = sfreq
-    info['highpass'] = highpass if highpass is not None else 0
+    info['highpass'] = highpass if highpass is not None else 0.
     info['lowpass'] = lowpass if lowpass is not None else info['sfreq'] / 2.0
     info['line_freq'] = line_freq
 
@@ -1024,9 +1027,6 @@ def write_meas_info(fid, info, data_type=None, reset_range=True):
     #   Projectors
     _write_proj(fid, info['projs'])
 
-    #   CTF compensation info
-    write_ctf_comp(fid, info['comps'])
-
     #   Bad channels
     if len(info['bads']) > 0:
         start_block(fid, FIFF.FIFFB_MNE_BAD_CHANNELS)
@@ -1046,14 +1046,16 @@ def write_meas_info(fid, info, data_type=None, reset_range=True):
         write_int(fid, FIFF.FIFF_MEAS_DATE, info['meas_date'])
     write_int(fid, FIFF.FIFF_NCHAN, info['nchan'])
     write_float(fid, FIFF.FIFF_SFREQ, info['sfreq'])
-    write_float(fid, FIFF.FIFF_LOWPASS, info['lowpass'])
-    write_float(fid, FIFF.FIFF_HIGHPASS, info['highpass'])
+    if info['lowpass'] is not None:
+        write_float(fid, FIFF.FIFF_LOWPASS, info['lowpass'])
+    if info['highpass'] is not None:
+        write_float(fid, FIFF.FIFF_HIGHPASS, info['highpass'])
     if info.get('line_freq') is not None:
         write_float(fid, FIFF.FIFF_LINE_FREQ, info['line_freq'])
     if data_type is not None:
         write_int(fid, FIFF.FIFF_DATA_PACK, data_type)
     if info.get('custom_ref_applied'):
-        write_int(fid, FIFF.FIFF_CUSTOM_REF, info['custom_ref_applied'])
+        write_int(fid, FIFF.FIFF_MNE_CUSTOM_REF, info['custom_ref_applied'])
 
     #  Channel information
     for k, c in enumerate(info['chs']):
@@ -1103,6 +1105,9 @@ def write_meas_info(fid, info, data_type=None, reset_range=True):
                 end_block(fid, FIFF.FIFFB_HPI_COIL)
         end_block(fid, FIFF.FIFFB_HPI_SUBSYSTEM)
 
+    #   CTF compensation info
+    write_ctf_comp(fid, info['comps'])
+
     end_block(fid, FIFF.FIFFB_MEAS_INFO)
 
     #   Processing history
@@ -1339,9 +1344,8 @@ def create_info(ch_names, sfreq, ch_types=None, montage=None):
         ch_types = [ch_types] * nchan
     if len(ch_types) != nchan:
         raise ValueError('ch_types and ch_names must be the same length')
-    info = _empty_info()
+    info = _empty_info(sfreq)
     info['meas_date'] = np.array([0, 0], np.int32)
-    info['sfreq'] = sfreq
     info['ch_names'] = ch_names
     info['nchan'] = nchan
     loc = np.concatenate((np.zeros(3), np.eye(3).ravel())).astype(np.float32)
@@ -1354,7 +1358,7 @@ def create_info(ch_names, sfreq, ch_types=None, montage=None):
             raise KeyError('kind must be one of %s, not %s'
                            % (list(_kind_dict.keys()), kind))
         kind = _kind_dict[kind]
-        chan_info = dict(loc=loc, unit_mul=0, range=1., cal=1.,
+        chan_info = dict(loc=loc.copy(), unit_mul=0, range=1., cal=1.,
                          kind=kind[0], coil_type=kind[1],
                          unit=kind[2], coord_frame=FIFF.FIFFV_COORD_UNKNOWN,
                          ch_name=name, scanno=ci + 1, logno=ci + 1)
@@ -1387,7 +1391,7 @@ RAW_INFO_FIELDS = (
 )
 
 
-def _empty_info():
+def _empty_info(sfreq):
     """Create an empty info dictionary"""
     from ..transforms import Transform
     _none_keys = (
@@ -1407,8 +1411,11 @@ def _empty_info():
     for k in _list_keys:
         info[k] = list()
     info['custom_ref_applied'] = False
-    info['nchan'] = info['sfreq'] = 0
+    info['nchan'] = 0
     info['dev_head_t'] = Transform('meg', 'head', np.eye(4))
+    info['highpass'] = 0.
+    info['sfreq'] = float(sfreq)
+    info['lowpass'] = info['sfreq'] / 2.
     assert set(info.keys()) == set(RAW_INFO_FIELDS)
     info._check_consistency()
     return info
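
With ``_empty_info`` now taking ``sfreq`` as a required argument, sensible
``lowpass``/``highpass`` defaults are set at creation time. A minimal sketch of
the public entry point (channel names and values are arbitrary):

    import numpy as np
    from mne import create_info
    from mne.io import RawArray

    info = create_info(ch_names=['EEG 001', 'EEG 002'], sfreq=256.,
                       ch_types='eeg')
    raw = RawArray(np.zeros((2, 256)), info)  # one second of zeros
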
diff --git a/mne/io/nicolet/__init__.py b/mne/io/nicolet/__init__.py
new file mode 100644
index 0000000..4b05df3
--- /dev/null
+++ b/mne/io/nicolet/__init__.py
@@ -0,0 +1,7 @@
+"""Nicolet module for conversion to FIF"""
+
+# Author: Jaakko Leppakangas <jaeilepp at student.jyu.fi>
+#
+# License: BSD (3-clause)
+
+from .nicolet import read_raw_nicolet
diff --git a/mne/io/nicolet/nicolet.py b/mne/io/nicolet/nicolet.py
new file mode 100644
index 0000000..85954d4
--- /dev/null
+++ b/mne/io/nicolet/nicolet.py
@@ -0,0 +1,206 @@
+# Author: Jaakko Leppakangas <jaeilepp at student.jyu.fi>
+#
+# License: BSD (3-clause)
+
+import numpy as np
+from os import path
+import datetime
+import calendar
+
+from ...utils import logger
+from ..utils import _read_segments_file, _find_channels
+from ..base import _BaseRaw, _check_update_montage
+from ..meas_info import _empty_info
+from ..constants import FIFF
+
+
+def read_raw_nicolet(input_fname, ch_type, montage=None, eog=(), ecg=(),
+                     emg=(), misc=(), preload=False, verbose=None):
+    """Read Nicolet data as raw object
+
+    Note: This reader takes data files with the extension ``.data`` as
+    input. The header file with the same file name stem and the extension
+    ``.head`` is expected to be found in the same directory.
+
+    Parameters
+    ----------
+    input_fname : str
+        Path to the data file.
+    ch_type : str
+        Channel type to assign to the data channels. Supported data types
+        are 'eeg' and 'seeg'.
+    montage : str | None | instance of montage
+        Path or instance of montage containing electrode positions.
+        If None, sensor locations are (0,0,0). See the documentation of
+        :func:`mne.channels.read_montage` for more information.
+    eog : list | tuple | 'auto'
+        Names of channels or list of indices that should be designated
+        EOG channels. If 'auto', the channel names beginning with
+        ``EOG`` are used. Defaults to empty tuple.
+    ecg : list or tuple | 'auto'
+        Names of channels or list of indices that should be designated
+        ECG channels. If 'auto', the channel names beginning with
+        ``ECG`` are used. Defaults to empty tuple.
+    emg : list or tuple | 'auto'
+        Names of channels or list of indices that should be designated
+        EMG channels. If 'auto', the channel names beginning with
+        ``EMG`` are used. Defaults to empty tuple.
+    misc : list or tuple
+        Names of channels or list of indices that should be designated
+        MISC channels. Defaults to empty tuple.
+    preload : bool or str (default False)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory).
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    raw : Instance of Raw
+        A Raw object containing the data.
+
+    See Also
+    --------
+    mne.io.Raw : Documentation of attributes and methods.
+    """
+    return RawNicolet(input_fname, ch_type, montage=montage, eog=eog, ecg=ecg,
+                      emg=emg, misc=misc, preload=preload, verbose=verbose)
+
+
+def _get_nicolet_info(fname, ch_type, eog, ecg, emg, misc):
+    """Function for extracting info from Nicolet header files."""
+    fname = path.splitext(fname)[0]
+    header = fname + '.head'
+
+    logger.info('Reading header...')
+    header_info = dict()
+    with open(header, 'r') as fid:
+        for line in fid:
+            var, value = line.split('=')
+            if var == 'elec_names':
+                value = value[1:-2].split(',')  # strip brackets
+            elif var == 'conversion_factor':
+                value = float(value)
+            elif var != 'start_ts':
+                value = int(value)
+            header_info[var] = value
+
+    ch_names = header_info['elec_names']
+    if eog == 'auto':
+        eog = _find_channels(ch_names, 'EOG')
+    if ecg == 'auto':
+        ecg = _find_channels(ch_names, 'ECG')
+    if emg == 'auto':
+        emg = _find_channels(ch_names, 'EMG')
+
+    date, time = header_info['start_ts'].split()
+    date = date.split('-')
+    time = time.split(':')
+    sec, msec = time[2].split('.')
+    date = datetime.datetime(int(date[0]), int(date[1]), int(date[2]),
+                             int(time[0]), int(time[1]), int(sec), int(msec))
+    info = _empty_info(header_info['sample_freq'])
+    info.update({'filename': fname, 'nchan': header_info['num_channels'],
+                 'meas_date': calendar.timegm(date.utctimetuple()),
+                 'ch_names': ch_names, 'description': None,
+                 'buffer_size_sec': 10.})
+
+    if ch_type == 'eeg':
+        ch_coil = FIFF.FIFFV_COIL_EEG
+        ch_kind = FIFF.FIFFV_EEG_CH
+    elif ch_type == 'seeg':
+        ch_coil = FIFF.FIFFV_COIL_EEG
+        ch_kind = FIFF.FIFFV_SEEG_CH
+    else:
+        raise TypeError("Channel type not recognized. Available types are "
+                        "'eeg' and 'seeg'.")
+    cal = header_info['conversion_factor'] * 1e-6
+    for idx, ch_name in enumerate(ch_names):
+        if ch_name in eog or idx in eog:
+            coil_type = FIFF.FIFFV_COIL_NONE
+            kind = FIFF.FIFFV_EOG_CH
+        elif ch_name in ecg or idx in ecg:
+            coil_type = FIFF.FIFFV_COIL_NONE
+            kind = FIFF.FIFFV_ECG_CH
+        elif ch_name in emg or idx in emg:
+            coil_type = FIFF.FIFFV_COIL_NONE
+            kind = FIFF.FIFFV_EMG_CH
+        elif ch_name in misc or idx in misc:
+            coil_type = FIFF.FIFFV_COIL_NONE
+            kind = FIFF.FIFFV_MISC_CH
+        else:
+            coil_type = ch_coil
+            kind = ch_kind
+        chan_info = {'cal': cal, 'logno': idx + 1, 'scanno': idx + 1,
+                     'range': 1.0, 'unit_mul': 0., 'ch_name': ch_name,
+                     'unit': FIFF.FIFF_UNIT_V,
+                     'coord_frame': FIFF.FIFFV_COORD_HEAD,
+                     'coil_type': coil_type, 'kind': kind, 'loc': np.zeros(12)}
+        info['chs'].append(chan_info)
+
+    info['highpass'] = 0.
+    info['lowpass'] = info['sfreq'] / 2.0
+
+    return info, header_info
+
+
+class RawNicolet(_BaseRaw):
+    """Raw object from Nicolet file.
+
+    Parameters
+    ----------
+    input_fname : str
+        Path to the Nicolet file.
+    ch_type : str
+        Channel type to assign to the data channels. Supported data types
+        are 'eeg' and 'seeg'.
+    montage : str | None | instance of Montage
+        Path or instance of montage containing electrode positions.
+        If None, sensor locations are (0,0,0). See the documentation of
+        :func:`mne.channels.read_montage` for more information.
+    eog : list | tuple | 'auto'
+        Names of channels or list of indices that should be designated
+        EOG channels. If 'auto', the channel names beginning with
+        ``EOG`` are used. Defaults to empty tuple.
+    ecg : list or tuple | 'auto'
+        Names of channels or list of indices that should be designated
+        ECG channels. If 'auto', the channel names beginning with
+        ``ECG`` are used. Defaults to empty tuple.
+    emg : list or tuple | 'auto'
+        Names of channels or list of indices that should be designated
+        EMG channels. If 'auto', the channel names beginning with
+        ``EMG`` are used. Defaults to empty tuple.
+    misc : list or tuple
+        Names of channels or list of indices that should be designated
+        MISC channels. Defaults to empty tuple.
+    preload : bool or str (default False)
+        Preload data into memory for data manipulation and faster indexing.
+        If True, the data will be preloaded into memory (fast, requires
+        large amount of memory). If preload is a string, preload is the
+        file name of a memory-mapped file which is used to store the data
+        on the hard drive (slower, requires less memory).
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    See Also
+    --------
+    mne.io.Raw : Documentation of attributes and methods.
+    """
+    def __init__(self, input_fname, ch_type, montage=None, eog=(), ecg=(),
+                 emg=(), misc=(), preload=False, verbose=None):
+        input_fname = path.abspath(input_fname)
+        info, header_info = _get_nicolet_info(input_fname, ch_type, eog, ecg,
+                                              emg, misc)
+        last_samps = [header_info['num_samples'] - 1]
+        _check_update_montage(info, montage)
+        super(RawNicolet, self).__init__(
+            info, preload, filenames=[input_fname], raw_extras=[header_info],
+            last_samps=last_samps, orig_format='int',
+            verbose=verbose)
+
+    def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
+        """Read a chunk of raw data"""
+        _read_segments_file(self, data, idx, fi, start, stop, cals, mult)
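
A usage sketch for the new reader (the file name is hypothetical; the matching
``.head`` header is picked up automatically from the same directory):

    from mne.io import read_raw_nicolet

    raw = read_raw_nicolet('recording.data', ch_type='eeg', eog='auto',
                           ecg='auto', emg='auto', misc=['PHO'],
                           preload=True)
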
diff --git a/mne/tests/__init__.py b/mne/io/nicolet/tests/__init__.py
similarity index 100%
copy from mne/tests/__init__.py
copy to mne/io/nicolet/tests/__init__.py
diff --git a/mne/io/nicolet/tests/data/test_nicolet_raw.data b/mne/io/nicolet/tests/data/test_nicolet_raw.data
new file mode 100644
index 0000000..2555e78
Binary files /dev/null and b/mne/io/nicolet/tests/data/test_nicolet_raw.data differ
diff --git a/mne/io/nicolet/tests/data/test_nicolet_raw.head b/mne/io/nicolet/tests/data/test_nicolet_raw.head
new file mode 100644
index 0000000..ffb8217
--- /dev/null
+++ b/mne/io/nicolet/tests/data/test_nicolet_raw.head
@@ -0,0 +1,11 @@
+start_ts=2015-01-01 12:00:00.000
+num_samples=512
+sample_freq=256
+conversion_factor=0.179000
+num_channels=29
+elec_names=[FP1,FP2,F3,F4,C3,C4,P3,P4,O1,O2,F7,F8,T3,T4,T5,T6,FZ,CZ,PZ,SP1,SP2,RS,T1,T2,EOG1,EOG2,EMG,ECG,PHO]
+pat_id=102
+adm_id=1102
+rec_id=100102
+duration_in_sec=2
+sample_bytes=2
diff --git a/mne/io/nicolet/tests/test_nicolet.py b/mne/io/nicolet/tests/test_nicolet.py
new file mode 100644
index 0000000..df0274f
--- /dev/null
+++ b/mne/io/nicolet/tests/test_nicolet.py
@@ -0,0 +1,20 @@
+
+# Author: Jaakko Leppakangas <jaeilepp at student.jyu.fi>
+#
+# License: BSD (3-clause)
+
+import os.path as op
+import inspect
+
+from mne.io import read_raw_nicolet
+from mne.io.tests.test_raw import _test_raw_reader
+
+FILE = inspect.getfile(inspect.currentframe())
+base_dir = op.join(op.dirname(op.abspath(FILE)), 'data')
+fname = op.join(base_dir, 'test_nicolet_raw.data')
+
+
+def test_data():
+    """Test reading raw nicolet files."""
+    _test_raw_reader(read_raw_nicolet, input_fname=fname, ch_type='eeg',
+                     ecg='auto', eog='auto', emg='auto', misc=['PHO'])
diff --git a/mne/io/open.py b/mne/io/open.py
index bcc1ce0..c5f0dfd 100644
--- a/mne/io/open.py
+++ b/mne/io/open.py
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 # Authors: Alexandre Gramfort <alexandre.gramfort at telecom-paristech.fr>
 #          Matti Hamalainen <msh at nmr.mgh.harvard.edu>
 #
@@ -191,11 +192,14 @@ def _find_type(value, fmts=['FIFF_'], exclude=['FIFF_UNIT']):
     vals = [k for k, v in six.iteritems(FIFF)
             if v == value and any(fmt in k for fmt in fmts) and
             not any(exc in k for exc in exclude)]
+    if len(vals) == 0:
+        vals = ['???']
     return vals
 
 
 def _show_tree(fid, tree, indent, level, read_limit, max_str):
     """Helper for showing FIFF"""
+    from scipy import sparse
     this_idt = indent * level
     next_idt = indent * (level + 1)
     # print block-level information
@@ -236,12 +240,16 @@ def _show_tree(fid, tree, indent, level, read_limit, max_str):
                         postpend += ' ... str len=' + str(len(tag.data))
                     elif isinstance(tag.data, (list, tuple)):
                         postpend += ' ... list len=' + str(len(tag.data))
+                    elif sparse.issparse(tag.data):
+                        postpend += (' ... sparse (%s) shape=%s'
+                                     % (tag.data.getformat(), tag.data.shape))
                     else:
-                        postpend += ' ... (unknown type)'
+                        postpend += ' ... type=' + str(type(tag.data))
                 postpend = '>' * 20 + 'BAD' if not good else postpend
                 out += [next_idt + prepend + str(k) + ' = ' +
                         '/'.join(this_type) + ' (' + str(size) + ')' +
                         postpend]
+                out[-1] = out[-1].replace('\n', u'¶')
                 counter = 0
                 good = True
 
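These ``_show_tree`` tweaks feed into the textual FIFF dump; assuming the
public ``show_fiff`` wrapper in the same module, a sketch (file name
hypothetical):

    from mne.io import show_fiff

    print(show_fiff('test_raw.fif'))
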
diff --git a/mne/io/pick.py b/mne/io/pick.py
index 027445f..8370e53 100644
--- a/mne/io/pick.py
+++ b/mne/io/pick.py
@@ -59,7 +59,7 @@ def channel_type(info, idx):
         return 'ias'
     elif kind == FIFF.FIFFV_SYST_CH:
         return 'syst'
-    elif kind == FIFF.FIFFV_SEEG_CH:
+    elif kind in [FIFF.FIFFV_SEEG_CH, 702]:  # 702 was used before v0.11
         return 'seeg'
     elif kind in [FIFF.FIFFV_QUAT_0, FIFF.FIFFV_QUAT_1, FIFF.FIFFV_QUAT_2,
                   FIFF.FIFFV_QUAT_3, FIFF.FIFFV_QUAT_4, FIFF.FIFFV_QUAT_5,
@@ -101,9 +101,7 @@ def pick_channels(ch_names, include, exclude=[]):
     for k, name in enumerate(ch_names):
         if (len(include) == 0 or name in include) and name not in exclude:
             sel.append(k)
-    sel = np.unique(sel)
-    np.sort(sel)
-    return sel
+    return np.array(sel, int)
 
 
 def pick_channels_regexp(ch_names, regexp):
@@ -140,6 +138,32 @@ def pick_channels_regexp(ch_names, regexp):
     return [k for k, name in enumerate(ch_names) if r.match(name)]
 
 
+def _triage_meg_pick(ch, meg):
+    """Helper to triage an MEG pick type"""
+    if meg is True:
+        return True
+    elif ch['unit'] == FIFF.FIFF_UNIT_T_M:
+        if meg == 'grad':
+            return True
+        elif meg == 'planar1' and ch['ch_name'].endswith('2'):
+            return True
+        elif meg == 'planar2' and ch['ch_name'].endswith('3'):
+            return True
+    elif (meg == 'mag' and ch['unit'] == FIFF.FIFF_UNIT_T):
+        return True
+    return False
+
+
+def _check_meg_type(meg, allow_auto=False):
+    """Helper to ensure a valid meg type"""
+    if isinstance(meg, string_types):
+        allowed_types = ['grad', 'mag', 'planar1', 'planar2']
+        allowed_types += ['auto'] if allow_auto else []
+        if meg not in allowed_types:
+            raise ValueError('meg value must be one of %s or bool, not %s'
+                             % (allowed_types, meg))
+
+
 def pick_types(info, meg=True, eeg=False, stim=False, eog=False, ecg=False,
                emg=False, ref_meg='auto', misc=False, resp=False, chpi=False,
                exci=False, ias=False, syst=False, seeg=False,
@@ -167,7 +191,8 @@ def pick_types(info, meg=True, eeg=False, stim=False, eog=False, ecg=False,
         If True include EMG channels.
     ref_meg: bool | str
         If True include CTF / 4D reference channels. If 'auto', the reference
-        channels are only included if compensations are present.
+        channels are only included if compensations are present. Can also be
+        one of the string options accepted by `meg`.
     misc : bool
         If True include miscellaneous analog channels.
     resp : bool
@@ -215,28 +240,16 @@ def pick_types(info, meg=True, eeg=False, stim=False, eog=False, ecg=False,
                          ' If only one channel is to be excluded, use '
                          '[ch_name] instead of passing ch_name.')
 
-    if isinstance(ref_meg, string_types):
-        if ref_meg != 'auto':
-            raise ValueError('ref_meg has to be either a bool or \'auto\'')
-
+    _check_meg_type(ref_meg, allow_auto=True)
+    _check_meg_type(meg)
+    if isinstance(ref_meg, string_types) and ref_meg == 'auto':
         ref_meg = ('comps' in info and info['comps'] is not None and
                    len(info['comps']) > 0)
 
     for k in range(nchan):
         kind = info['chs'][k]['kind']
         if kind == FIFF.FIFFV_MEG_CH:
-            if meg is True:
-                pick[k] = True
-            elif info['chs'][k]['unit'] == FIFF.FIFF_UNIT_T_M:
-                if meg == 'grad':
-                    pick[k] = True
-                elif meg == 'planar1' and info['ch_names'][k].endswith('2'):
-                    pick[k] = True
-                elif meg == 'planar2' and info['ch_names'][k].endswith('3'):
-                    pick[k] = True
-            elif (meg == 'mag' and
-                  info['chs'][k]['unit'] == FIFF.FIFF_UNIT_T):
-                pick[k] = True
+            pick[k] = _triage_meg_pick(info['chs'][k], meg)
         elif kind == FIFF.FIFFV_EEG_CH and eeg:
             pick[k] = True
         elif kind == FIFF.FIFFV_STIM_CH and stim:
@@ -250,12 +263,13 @@ def pick_types(info, meg=True, eeg=False, stim=False, eog=False, ecg=False,
         elif kind == FIFF.FIFFV_MISC_CH and misc:
             pick[k] = True
         elif kind == FIFF.FIFFV_REF_MEG_CH and ref_meg:
-            pick[k] = True
+            pick[k] = _triage_meg_pick(info['chs'][k], ref_meg)
         elif kind == FIFF.FIFFV_RESP_CH and resp:
             pick[k] = True
         elif kind == FIFF.FIFFV_SYST_CH and syst:
             pick[k] = True
-        elif kind == FIFF.FIFFV_SEEG_CH and seeg:
+        elif kind in [FIFF.FIFFV_SEEG_CH, 702] and seeg:
+            # Constant 702 was used before v0.11
             pick[k] = True
         elif kind == FIFF.FIFFV_IAS_CH and ias:
             pick[k] = True
@@ -281,7 +295,7 @@ def pick_types(info, meg=True, eeg=False, stim=False, eog=False, ecg=False,
     myinclude += include
 
     if len(myinclude) == 0:
-        sel = []
+        sel = np.array([], int)
     else:
         sel = pick_channels(info['ch_names'], myinclude, exclude)
 
@@ -621,3 +635,12 @@ def _check_excludes_includes(chs, info=None, allow_bads=False):
                 'include/exclude must be list, tuple, ndarray, or "bads". ' +
                 'You provided type {0}'.format(type(chs)))
     return chs
+
+
+def _pick_data_channels(info, exclude='bads'):
+    """Convenience function for picking only data channels."""
+    return pick_types(info, meg=True, eeg=True, stim=False, eog=False,
+                      ecg=False, emg=False, ref_meg=True, misc=False,
+                      resp=False, chpi=False, exci=False, ias=False,
+                      syst=False, seeg=True, include=[], exclude=exclude,
+                      selection=None)
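
With ``_triage_meg_pick`` now applied to reference channels as well, ``ref_meg``
accepts the same string options as ``meg``. A sketch, assuming ``raw`` is an
already-loaded Raw instance from a system with reference sensors (e.g. CTF or
KIT):

    from mne import pick_types

    # reference magnetometers only
    picks_ref_mag = pick_types(raw.info, meg=False, ref_meg='mag')
    # all MEG channels plus all reference channels
    picks_all = pick_types(raw.info, meg=True, ref_meg=True)
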
diff --git a/mne/io/proc_history.py b/mne/io/proc_history.py
index 50d065f..a2df522 100644
--- a/mne/io/proc_history.py
+++ b/mne/io/proc_history.py
@@ -3,17 +3,20 @@
 #          Eric Larson <larson.eric.d at gmail.com>
 # License: Simplified BSD
 
+from os import path as op
+import warnings
+
 import numpy as np
 from scipy.sparse import csc_matrix
-import warnings
 
-from .open import read_tag
+from .open import read_tag, fiff_open
 from .tree import dir_tree_find
 from .write import (start_block, end_block, write_int, write_float,
                     write_string, write_float_matrix, write_int_matrix,
                     write_float_sparse_rcs, write_id)
+from .tag import find_tag
 from .constants import FIFF
-from ..externals.six import text_type
+from ..externals.six import text_type, string_types
 
 
 _proc_keys = ['parent_file_id', 'block_id', 'parent_block_id',
@@ -153,18 +156,13 @@ _max_st_ids = (FIFF.FIFF_SSS_JOB, FIFF.FIFF_SSS_ST_CORR,
 _max_st_writers = (write_int, write_float, write_float)
 _max_st_casters = (int, float, float)
 
-_sss_ctc_keys = ('parent_file_id', 'block_id', 'parent_block_id',
-                 'date', 'creator', 'decoupler')
-_sss_ctc_ids = (FIFF.FIFF_PARENT_FILE_ID,
-                FIFF.FIFF_BLOCK_ID,
-                FIFF.FIFF_PARENT_BLOCK_ID,
+_sss_ctc_keys = ('block_id', 'date', 'creator', 'decoupler')
+_sss_ctc_ids = (FIFF.FIFF_BLOCK_ID,
                 FIFF.FIFF_MEAS_DATE,
                 FIFF.FIFF_CREATOR,
                 FIFF.FIFF_DECOUPLER_MATRIX)
-_sss_ctc_writers = (write_id, write_id, write_id,
-                    write_int, write_string, write_float_sparse_rcs)
-_sss_ctc_casters = (dict, dict, dict,
-                    np.array, text_type, csc_matrix)
+_sss_ctc_writers = (write_id, write_int, write_string, write_float_sparse_rcs)
+_sss_ctc_casters = (dict, np.array, text_type, csc_matrix)
 
 _sss_cal_keys = ('cal_chans', 'cal_corrs')
 _sss_cal_ids = (FIFF.FIFF_SSS_CAL_CHANS, FIFF.FIFF_SSS_CAL_CORRS)
@@ -172,6 +170,25 @@ _sss_cal_writers = (write_int_matrix, write_float_matrix)
 _sss_cal_casters = (np.array, np.array)
 
 
+def _read_ctc(fname):
+    """Read cross-talk correction matrix"""
+    if not isinstance(fname, string_types) or not op.isfile(fname):
+        raise ValueError('fname must be a file that exists, not %s' % fname)
+    f, tree, _ = fiff_open(fname)
+    with f as fid:
+        sss_ctc = _read_maxfilter_record(fid, tree)['sss_ctc']
+        bad_str = 'Invalid cross-talk FIF: %s' % fname
+        if len(sss_ctc) == 0:
+            raise ValueError(bad_str)
+        node = dir_tree_find(tree, FIFF.FIFFB_DATA_CORRECTION)[0]
+        comment = find_tag(fid, node, FIFF.FIFF_COMMENT).data
+        if comment != 'cross-talk compensation matrix':
+            raise ValueError(bad_str)
+        sss_ctc['creator'] = find_tag(fid, node, FIFF.FIFF_CREATOR).data
+        sss_ctc['date'] = find_tag(fid, node, FIFF.FIFF_MEAS_DATE).data
+    return sss_ctc
+
+
 def _read_maxfilter_record(fid, tree):
     """Read maxfilter processing record from file"""
     sss_info_block = dir_tree_find(tree, FIFF.FIFFB_SSS_INFO)  # 502
@@ -218,7 +235,12 @@ def _read_maxfilter_record(fid, tree):
             else:
                 if kind == FIFF.FIFF_PROJ_ITEM_CH_NAME_LIST:
                     tag = read_tag(fid, pos)
-                    sss_ctc['proj_items_chs'] = tag.data.split(':')
+                    chs = tag.data.split(':')
+                    # XXX for some reason this list can have a bunch of junk
+                    # in the last entry, e.g.:
+                    # [..., u'MEG2642', u'MEG2643', u'MEG2641\x00 ... \x00']
+                    chs[-1] = chs[-1].split('\x00')[0]
+                    sss_ctc['proj_items_chs'] = chs
 
     sss_cal_block = dir_tree_find(tree, FIFF.FIFFB_SSS_CAL)  # 503
     sss_cal = dict()
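
``_read_ctc`` parses the cross-talk compensation matrix consumed by Maxwell
filtering; assuming the ``cross_talk`` parameter added to ``maxwell_filter`` in
this release, a sketch (``raw`` and the site-specific file name are
assumptions):

    from mne.preprocessing import maxwell_filter

    raw_sss = maxwell_filter(raw, cross_talk='ct_sparse.fif')
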
diff --git a/mne/io/proj.py b/mne/io/proj.py
index 0ab52e2..c69efe1 100644
--- a/mne/io/proj.py
+++ b/mne/io/proj.py
@@ -100,6 +100,15 @@ class ProjMixin(object):
 
         return self
 
+    def add_eeg_average_proj(self):
+        """Add an average EEG reference projector if one does not exist
+        """
+        if _needs_eeg_average_ref_proj(self.info):
+            # Don't set as active, since we haven't applied it
+            eeg_proj = make_eeg_average_ref_proj(self.info, activate=False)
+            self.add_proj(eeg_proj)
+        return self
+
     def apply_proj(self):
         """Apply the signal space projection (SSP) operators to the data.
 
@@ -152,7 +161,6 @@ class ProjMixin(object):
             logger.info('The projections don\'t apply to these data.'
                         ' Doing nothing.')
             return self
-
         self._projector, self.info = _projector, info
         if isinstance(self, _BaseRaw):
             if self.preload:
@@ -341,6 +349,12 @@ def _read_proj(fid, node, verbose=None):
         else:
             active = False
 
+        tag = find_tag(fid, item, FIFF.FIFF_MNE_ICA_PCA_EXPLAINED_VAR)
+        if tag is not None:
+            explained_var = tag.data
+        else:
+            explained_var = None
+
         # handle the case when data is transposed for some reason
         if data.shape[0] == len(names) and data.shape[1] == nvec:
             data = data.T
@@ -352,7 +366,8 @@ def _read_proj(fid, node, verbose=None):
         #   Use exactly the same fields in data as in a named matrix
         one = Projection(kind=kind, active=active, desc=desc,
                          data=dict(nrow=nvec, ncol=nchan, row_names=None,
-                                   col_names=names, data=data))
+                                   col_names=names, data=data),
+                         explained_var=explained_var)
 
         projs.append(one)
 
@@ -383,6 +398,8 @@ def _write_proj(fid, projs):
     projs : dict
         The projection operator.
     """
+    if len(projs) == 0:
+        return
     start_block(fid, FIFF.FIFFB_PROJ)
 
     for proj in projs:
@@ -399,6 +416,9 @@ def _write_proj(fid, projs):
         write_int(fid, FIFF.FIFF_MNE_PROJ_ITEM_ACTIVE, proj['active'])
         write_float_matrix(fid, FIFF.FIFF_PROJ_ITEM_VECTORS,
                            proj['data']['data'])
+        if proj['explained_var'] is not None:
+            write_float(fid, FIFF.FIFF_MNE_ICA_PCA_EXPLAINED_VAR,
+                        proj['explained_var'])
         end_block(fid, FIFF.FIFFB_PROJ_ITEM)
 
     end_block(fid, FIFF.FIFFB_PROJ)
@@ -625,11 +645,13 @@ def make_eeg_average_ref_proj(info, activate=True, verbose=None):
         raise ValueError('Cannot create EEG average reference projector '
                          '(no EEG data found)')
     vec = np.ones((1, n_eeg)) / n_eeg
+    explained_var = None
     eeg_proj_data = dict(col_names=eeg_names, row_names=None,
                          data=vec, nrow=1, ncol=n_eeg)
     eeg_proj = Projection(active=activate, data=eeg_proj_data,
                           desc='Average EEG reference',
-                          kind=FIFF.FIFFV_MNE_PROJ_ITEM_EEG_AVREF)
+                          kind=FIFF.FIFFV_MNE_PROJ_ITEM_EEG_AVREF,
+                          explained_var=explained_var)
     return eeg_proj
 
 
@@ -680,7 +702,7 @@ def setup_proj(info, add_eeg_ref=True, activate=True,
         The modified measurement info (Warning: info is modified inplace).
     """
     # Add EEG ref reference proj if necessary
-    if _needs_eeg_average_ref_proj(info) and add_eeg_ref:
+    if add_eeg_ref and _needs_eeg_average_ref_proj(info):
         eeg_proj = make_eeg_average_ref_proj(info, activate=activate)
         info['projs'].append(eeg_proj)
 
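The new mixin method makes a common pattern a one-liner; a sketch, assuming
``raw`` contains EEG channels and no average-reference projector yet:

    raw.add_eeg_average_proj()  # projector added inactive, nothing applied yet
    raw.apply_proj()            # activate and apply all projectors to the data
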
diff --git a/mne/io/reference.py b/mne/io/reference.py
index 1fc0455..1585dd4 100644
--- a/mne/io/reference.py
+++ b/mne/io/reference.py
@@ -11,7 +11,7 @@ from .proj import _has_eeg_average_ref_proj, make_eeg_average_ref_proj
 from .pick import pick_types
 from .base import _BaseRaw
 from ..evoked import Evoked
-from ..epochs import Epochs
+from ..epochs import _BaseEpochs
 from ..utils import logger
 
 
@@ -110,7 +110,7 @@ def _apply_reference(inst, ref_from, ref_to=None, copy=True):
     if len(ref_from) > 0:
         ref_data = data[..., ref_from, :].mean(-2)
 
-        if isinstance(inst, Epochs):
+        if isinstance(inst, _BaseEpochs):
             data[:, ref_to, :] -= ref_data[:, np.newaxis, :]
         else:
             data[ref_to] -= ref_data
@@ -174,7 +174,7 @@ def add_reference_channels(inst, ref_channels, copy=True):
         refs = np.zeros((len(ref_channels), data.shape[1]))
         data = np.vstack((data, refs))
         inst._data = data
-    elif isinstance(inst, Epochs):
+    elif isinstance(inst, _BaseEpochs):
         data = inst._data
         x, y, z = data.shape
         refs = np.zeros((x * len(ref_channels), z))
@@ -185,7 +185,7 @@ def add_reference_channels(inst, ref_channels, copy=True):
         raise TypeError("inst should be Raw, Epochs, or Evoked instead of %s."
                         % type(inst))
     nchan = len(inst.info['ch_names'])
-    if ch in ref_channels:
+    for ch in ref_channels:
         chan_info = {'ch_name': ch,
                      'coil_type': FIFF.FIFFV_COIL_EEG,
                      'kind': FIFF.FIFFV_EEG_CH,
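The ``if ch in ref_channels`` line was a bug: only a previously bound ``ch``
could ever match, so multiple reference channels were never all added. With the
``for`` loop, several reference channels can be appended at once; a sketch,
assuming ``raw`` is loaded and ``M1``/``M2`` are the desired reference names:

    from mne.io.reference import add_reference_channels

    raw_ref = add_reference_channels(raw, ['M1', 'M2'], copy=True)
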
diff --git a/mne/io/tag.py b/mne/io/tag.py
index 1f95733..61fed6c 100644
--- a/mne/io/tag.py
+++ b/mne/io/tag.py
@@ -403,8 +403,6 @@ def read_tag(fid, pos=None, shape=None, rlims=None):
                 tag.data = dict()
                 tag.data['version'] = int(np.fromstring(fid.read(4),
                                                         dtype=">i4"))
-                tag.data['version'] = int(np.fromstring(fid.read(4),
-                                                        dtype=">i4"))
                 tag.data['machid'] = np.fromstring(fid.read(8), dtype=">i4")
                 tag.data['secs'] = int(np.fromstring(fid.read(4), dtype=">i4"))
                 tag.data['usecs'] = int(np.fromstring(fid.read(4),
diff --git a/mne/io/tests/test_apply_function.py b/mne/io/tests/test_apply_function.py
index 7adfede..1fb935b 100644
--- a/mne/io/tests/test_apply_function.py
+++ b/mne/io/tests/test_apply_function.py
@@ -3,12 +3,11 @@
 # License: BSD (3-clause)
 
 import numpy as np
-import os.path as op
 from nose.tools import assert_equal, assert_raises
 
 from mne import create_info
 from mne.io import RawArray
-from mne.utils import logger, set_log_file, slow_test, _TempDir
+from mne.utils import logger, catch_logging, slow_test, run_tests_if_main
 
 
 def bad_1(x):
@@ -44,15 +43,10 @@ def test_apply_function_verbose():
                   None, None, 2)
 
     # check our arguments
-    tempdir = _TempDir()
-    test_name = op.join(tempdir, 'test.log')
-    set_log_file(test_name)
-    try:
+    with catch_logging() as sio:
         raw.apply_function(printer, None, None, 1, verbose=False)
-        with open(test_name) as fid:
-            assert_equal(len(fid.readlines()), 0)
+        assert_equal(len(sio.getvalue()), 0)
         raw.apply_function(printer, None, None, 1, verbose=True)
-        with open(test_name) as fid:
-            assert_equal(len(fid.readlines()), n_chan)
-    finally:
-        set_log_file(None)
+        assert_equal(sio.getvalue().count('\n'), n_chan)
+
+run_tests_if_main()
diff --git a/mne/io/tests/test_pick.py b/mne/io/tests/test_pick.py
index 80e2767..2e1f512 100644
--- a/mne/io/tests/test_pick.py
+++ b/mne/io/tests/test_pick.py
@@ -1,22 +1,83 @@
+import os.path as op
+import inspect
+
 from nose.tools import assert_equal, assert_raises
 from numpy.testing import assert_array_equal
 import numpy as np
-import os.path as op
 
 from mne import (pick_channels_regexp, pick_types, Epochs,
                  read_forward_solution, rename_channels,
-                 pick_info, pick_channels, __file__)
-
-from mne.io.meas_info import create_info
-from mne.io.array import RawArray
+                 pick_info, pick_channels, __file__, create_info)
+from mne.io import Raw, RawArray, read_raw_bti, read_raw_kit
 from mne.io.pick import (channel_indices_by_type, channel_type,
                          pick_types_forward, _picks_by_type)
 from mne.io.constants import FIFF
-from mne.io import Raw
 from mne.datasets import testing
-from mne.forward.tests import test_forward
 from mne.utils import run_tests_if_main
 
+io_dir = op.join(op.dirname(inspect.getfile(inspect.currentframe())), '..')
+data_path = testing.data_path(download=False)
+fname_meeg = op.join(data_path, 'MEG', 'sample',
+                     'sample_audvis_trunc-meg-eeg-oct-4-fwd.fif')
+
+
+def test_pick_refs():
+    """Test picking of reference sensors
+    """
+    infos = list()
+    # KIT
+    kit_dir = op.join(io_dir, 'kit', 'tests', 'data')
+    sqd_path = op.join(kit_dir, 'test.sqd')
+    mrk_path = op.join(kit_dir, 'test_mrk.sqd')
+    elp_path = op.join(kit_dir, 'test_elp.txt')
+    hsp_path = op.join(kit_dir, 'test_hsp.txt')
+    raw_kit = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path)
+    infos.append(raw_kit.info)
+    # BTi
+    bti_dir = op.join(io_dir, 'bti', 'tests', 'data')
+    bti_pdf = op.join(bti_dir, 'test_pdf_linux')
+    bti_config = op.join(bti_dir, 'test_config_linux')
+    bti_hs = op.join(bti_dir, 'test_hs_linux')
+    raw_bti = read_raw_bti(bti_pdf, bti_config, bti_hs, preload=False)
+    infos.append(raw_bti.info)
+    # CTF
+    fname_ctf_raw = op.join(io_dir, 'tests', 'data', 'test_ctf_comp_raw.fif')
+    raw_ctf = Raw(fname_ctf_raw, compensation=2)
+    infos.append(raw_ctf.info)
+    for info in infos:
+        info['bads'] = []
+        assert_raises(ValueError, pick_types, info, meg='foo')
+        assert_raises(ValueError, pick_types, info, ref_meg='foo')
+        picks_meg_ref = pick_types(info, meg=True, ref_meg=True)
+        picks_meg = pick_types(info, meg=True, ref_meg=False)
+        picks_ref = pick_types(info, meg=False, ref_meg=True)
+        assert_array_equal(picks_meg_ref,
+                           np.sort(np.concatenate([picks_meg, picks_ref])))
+        picks_grad = pick_types(info, meg='grad', ref_meg=False)
+        picks_ref_grad = pick_types(info, meg=False, ref_meg='grad')
+        picks_meg_ref_grad = pick_types(info, meg='grad', ref_meg='grad')
+        assert_array_equal(picks_meg_ref_grad,
+                           np.sort(np.concatenate([picks_grad,
+                                                   picks_ref_grad])))
+        picks_mag = pick_types(info, meg='mag', ref_meg=False)
+        picks_ref_mag = pick_types(info, meg=False, ref_meg='mag')
+        picks_meg_ref_mag = pick_types(info, meg='mag', ref_meg='mag')
+        assert_array_equal(picks_meg_ref_mag,
+                           np.sort(np.concatenate([picks_mag,
+                                                   picks_ref_mag])))
+        assert_array_equal(picks_meg,
+                           np.sort(np.concatenate([picks_mag, picks_grad])))
+        assert_array_equal(picks_ref,
+                           np.sort(np.concatenate([picks_ref_mag,
+                                                   picks_ref_grad])))
+        assert_array_equal(picks_meg_ref, np.sort(np.concatenate(
+            [picks_grad, picks_mag, picks_ref_grad, picks_ref_mag])))
+        for pick in (picks_meg_ref, picks_meg, picks_ref,
+                     picks_grad, picks_ref_grad, picks_meg_ref_grad,
+                     picks_mag, picks_ref_mag, picks_meg_ref_mag):
+            if len(pick) > 0:
+                pick_info(info, pick)
+
 
 def test_pick_channels_regexp():
     """Test pick with regular expression
@@ -60,7 +121,7 @@ def _check_fwd_n_chan_consistent(fwd, n_expected):
 def test_pick_forward_seeg():
     """Test picking forward with SEEG
     """
-    fwd = read_forward_solution(test_forward.fname_meeg)
+    fwd = read_forward_solution(fname_meeg)
     counts = channel_indices_by_type(fwd['info'])
     for key in counts.keys():
         counts[key] = len(counts[key])
diff --git a/mne/io/tests/test_raw.py b/mne/io/tests/test_raw.py
index 9d79349..378b598 100644
--- a/mne/io/tests/test_raw.py
+++ b/mne/io/tests/test_raw.py
@@ -1,14 +1,93 @@
 # Generic tests that all raw classes should run
 from os import path as op
-from numpy.testing import assert_allclose
+import math
+import numpy as np
+from numpy.testing import assert_allclose, assert_array_almost_equal
 
+from nose.tools import assert_equal, assert_true
+
+from mne import concatenate_raws
 from mne.datasets import testing
 from mne.io import Raw
+from mne.utils import _TempDir
+
+
+def _test_raw_reader(reader, test_preloading=True, **kwargs):
+    """Test reading, writing and slicing of raw classes.
+
+    Parameters
+    ----------
+    reader : function
+        Function to test.
+    test_preloading : bool
+        Whether the reader supports ``preload=False``. If True, preloaded,
+        non-preloaded, and memory-mapped reading are all tested.
+    **kwargs :
+        Arguments for the reader. Note: Do not use preload as kwarg.
+        Use ``test_preloading`` instead.
+
+    Returns
+    -------
+    raw : instance of Raw
+        A preloaded Raw object.
+    """
+    tempdir = _TempDir()
+    rng = np.random.RandomState(0)
+    if test_preloading:
+        raw = reader(preload=True, **kwargs)
+        # don't assume the first is preloaded
+        buffer_fname = op.join(tempdir, 'buffer')
+        picks = rng.permutation(np.arange(len(raw.ch_names)))[:10]
+        bnd = min(int(round(raw.info['buffer_size_sec'] *
+                            raw.info['sfreq'])), raw.n_times)
+        slices = [slice(0, bnd), slice(bnd - 1, bnd), slice(3, bnd),
+                  slice(3, 300), slice(None), slice(1, bnd)]
+        if raw.n_times >= 2 * bnd:  # at least two complete blocks
+            slices += [slice(bnd, 2 * bnd), slice(bnd, bnd + 1),
+                       slice(0, bnd + 100)]
+        other_raws = [reader(preload=buffer_fname, **kwargs),
+                      reader(preload=False, **kwargs)]
+        for sl_time in slices:
+            for other_raw in other_raws:
+                data1, times1 = raw[picks, sl_time]
+                data2, times2 = other_raw[picks, sl_time]
+                assert_allclose(data1, data2)
+                assert_allclose(times1, times2)
+    else:
+        raw = reader(**kwargs)
+
+    full_data = raw._data
+    assert_true(raw.__class__.__name__ in repr(raw))  # to test repr
+    assert_true(raw.info.__class__.__name__ in repr(raw.info))
+
+    # Test saving and reading
+    out_fname = op.join(tempdir, 'test_raw.fif')
+    raw.save(out_fname, tmax=raw.times[-1], overwrite=True, buffer_size_sec=1)
+    raw3 = Raw(out_fname)
+    assert_equal(set(raw.info.keys()), set(raw3.info.keys()))
+    assert_allclose(raw3[0:20][0], full_data[0:20], rtol=1e-6,
+                    atol=1e-20)  # atol is very small but > 0
+    assert_array_almost_equal(raw.times, raw3.times)
+
+    assert_true(not math.isnan(raw3.info['highpass']))
+    assert_true(not math.isnan(raw3.info['lowpass']))
+    assert_true(not math.isnan(raw.info['highpass']))
+    assert_true(not math.isnan(raw.info['lowpass']))
+
+    # Make sure concatenation works
+    first_samp = raw.first_samp
+    last_samp = raw.last_samp
+    concat_raw = concatenate_raws([raw.copy(), raw])
+    assert_equal(concat_raw.n_times, 2 * raw.n_times)
+    assert_equal(concat_raw.first_samp, first_samp)
+    assert_equal(concat_raw.last_samp - last_samp + first_samp, last_samp + 1)
+    return raw
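The indexing contract checked above, as a minimal sketch: slicing a Raw
object with ``raw[picks, time_slice]`` returns a ``(data, times)`` tuple,
and the result must not depend on the preload mode. Assuming a recording
with at least 10 channels and 1000 samples:

    data, times = raw[:10, 0:1000]  # first 10 channels, first 1000 samples
    assert data.shape == (10, 1000) and times.shape == (1000,)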
 
 
 def _test_concat(reader, *args):
     """Test concatenation of raw classes that allow not preloading"""
     data = None
+
     for preload in (True, False):
         raw1 = reader(*args, preload=preload)
         raw2 = reader(*args, preload=preload)
@@ -17,6 +96,7 @@ def _test_concat(reader, *args):
         if data is None:
             data = raw1[:, :][0]
         assert_allclose(data, raw1[:, :][0])
+
     for first_preload in (True, False):
         raw = reader(*args, preload=first_preload)
         data = raw[:, :][0]
diff --git a/mne/io/tests/test_reference.py b/mne/io/tests/test_reference.py
index 7ce82d5..ea01708 100644
--- a/mne/io/tests/test_reference.py
+++ b/mne/io/tests/test_reference.py
@@ -11,7 +11,7 @@ import numpy as np
 from nose.tools import assert_true, assert_equal, assert_raises
 from numpy.testing import assert_array_equal, assert_allclose
 
-from mne import pick_types, Evoked, Epochs, read_events
+from mne import pick_channels, pick_types, Evoked, Epochs, read_events
 from mne.io.constants import FIFF
 from mne.io import (set_eeg_reference, set_bipolar_reference,
                     add_reference_channels)
@@ -201,6 +201,19 @@ def test_set_bipolar_reference():
                   'EEG 001', 'EEG 002', ch_name='EEG 003')
 
 
+def _check_channel_names(inst, ref_names):
+    if isinstance(ref_names, str):
+        ref_names = [ref_names]
+
+    # Test that the names of the reference channels are present in `ch_names`
+    ref_idx = pick_channels(inst.info['ch_names'], ref_names)
+    assert_equal(len(ref_idx), len(ref_names))
+
+    # Test that the names of the reference channels are present in the `chs`
+    # list
+    inst.info._check_consistency()  # Should raise no exceptions
+
+
 @testing.requires_testing_data
 def test_add_reference():
     raw = Raw(fif_fname, preload=True)
@@ -212,26 +225,34 @@ def test_add_reference():
     raw_ref = add_reference_channels(raw, 'Ref', copy=True)
     assert_equal(raw_ref._data.shape[0], raw._data.shape[0] + 1)
     assert_array_equal(raw._data[picks_eeg, :], raw_ref._data[picks_eeg, :])
+    _check_channel_names(raw_ref, 'Ref')
 
     orig_nchan = raw.info['nchan']
     raw = add_reference_channels(raw, 'Ref', copy=False)
     assert_array_equal(raw._data, raw_ref._data)
     assert_equal(raw.info['nchan'], orig_nchan + 1)
+    _check_channel_names(raw, 'Ref')
 
     ref_idx = raw.ch_names.index('Ref')
     ref_data, _ = raw[ref_idx]
     assert_array_equal(ref_data, 0)
 
-    # add two reference channels to Raw
     raw = Raw(fif_fname, preload=True)
     picks_eeg = pick_types(raw.info, meg=False, eeg=True)
+
+    # Test adding an existing channel as reference channel
     assert_raises(ValueError, add_reference_channels, raw,
                   raw.info['ch_names'][0])
+
+    # add two reference channels to Raw
     raw_ref = add_reference_channels(raw, ['M1', 'M2'], copy=True)
+    _check_channel_names(raw_ref, ['M1', 'M2'])
     assert_equal(raw_ref._data.shape[0], raw._data.shape[0] + 2)
     assert_array_equal(raw._data[picks_eeg, :], raw_ref._data[picks_eeg, :])
+    assert_array_equal(raw_ref._data[-2:, :], 0)
 
     raw = add_reference_channels(raw, ['M1', 'M2'], copy=False)
+    _check_channel_names(raw, ['M1', 'M2'])
     ref_idx = raw.ch_names.index('M1')
     ref_idy = raw.ch_names.index('M2')
     ref_data, _ = raw[[ref_idx, ref_idy]]
@@ -245,6 +266,7 @@ def test_add_reference():
                     picks=picks_eeg, preload=True)
     epochs_ref = add_reference_channels(epochs, 'Ref', copy=True)
     assert_equal(epochs_ref._data.shape[1], epochs._data.shape[1] + 1)
+    _check_channel_names(epochs_ref, 'Ref')
     ref_idx = epochs_ref.ch_names.index('Ref')
     ref_data = epochs_ref.get_data()[:, ref_idx, :]
     assert_array_equal(ref_data, 0)
@@ -260,8 +282,11 @@ def test_add_reference():
                     picks=picks_eeg, preload=True)
     epochs_ref = add_reference_channels(epochs, ['M1', 'M2'], copy=True)
     assert_equal(epochs_ref._data.shape[1], epochs._data.shape[1] + 2)
+    _check_channel_names(epochs_ref, ['M1', 'M2'])
     ref_idx = epochs_ref.ch_names.index('M1')
     ref_idy = epochs_ref.ch_names.index('M2')
+    assert_equal(epochs_ref.info['chs'][ref_idx]['ch_name'], 'M1')
+    assert_equal(epochs_ref.info['chs'][ref_idy]['ch_name'], 'M2')
     ref_data = epochs_ref.get_data()[:, [ref_idx, ref_idy], :]
     assert_array_equal(ref_data, 0)
     picks_eeg = pick_types(epochs.info, meg=False, eeg=True)
@@ -277,6 +302,7 @@ def test_add_reference():
     evoked = epochs.average()
     evoked_ref = add_reference_channels(evoked, 'Ref', copy=True)
     assert_equal(evoked_ref.data.shape[0], evoked.data.shape[0] + 1)
+    _check_channel_names(evoked_ref, 'Ref')
     ref_idx = evoked_ref.ch_names.index('Ref')
     ref_data = evoked_ref.data[ref_idx, :]
     assert_array_equal(ref_data, 0)
@@ -293,6 +319,7 @@ def test_add_reference():
     evoked = epochs.average()
     evoked_ref = add_reference_channels(evoked, ['M1', 'M2'], copy=True)
     assert_equal(evoked_ref.data.shape[0], evoked.data.shape[0] + 2)
+    _check_channel_names(evoked_ref, ['M1', 'M2'])
     ref_idx = evoked_ref.ch_names.index('M1')
     ref_idy = evoked_ref.ch_names.index('M2')
     ref_data = evoked_ref.data[[ref_idx, ref_idy], :]
diff --git a/mne/io/utils.py b/mne/io/utils.py
new file mode 100644
index 0000000..0cf45fc
--- /dev/null
+++ b/mne/io/utils.py
@@ -0,0 +1,165 @@
+# Authors: Alexandre Gramfort <alexandre.gramfort at telecom-paristech.fr>
+#          Matti Hamalainen <msh at nmr.mgh.harvard.edu>
+#          Martin Luessi <mluessi at nmr.mgh.harvard.edu>
+#          Denis Engemann <denis.engemann at gmail.com>
+#          Teon Brooks <teon.brooks at gmail.com>
+#          Marijn van Vliet <w.m.vanvliet at gmail.com>
+#          Mainak Jas <mainak.jas at telecom-paristech.fr>
+#
+# License: BSD (3-clause)
+
+import numpy as np
+
+
+def _find_channels(ch_names, ch_type='EOG'):
+    """Helper to find EOG channel.
+    """
+    substrings = (ch_type,)
+    substrings = [s.upper() for s in substrings]
+    if ch_type == 'EOG':
+        substrings = ('EOG', 'EYE')
+    eog_idx = [idx for idx, ch in enumerate(ch_names) if
+               any(substring in ch.upper() for substring in substrings)]
+    return eog_idx
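For example, following the logic above ('EOG' matches the substrings
'EOG' and 'EYE', case-insensitively):

    _find_channels(['Fp1', 'EOG 061', 'heog'], ch_type='EOG')  # -> [1, 2]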
+
+
+def _mult_cal_one(data_view, one, idx, cals, mult):
+    """Take a chunk of raw data, multiply by mult or cals, and store"""
+    one = np.asarray(one, dtype=data_view.dtype)
+    assert data_view.shape[1] == one.shape[1]
+    if mult is not None:
+        data_view[:] = np.dot(mult, one)
+    else:
+        if isinstance(idx, slice):
+            data_view[:] = one[idx]
+        else:
+            # faster than doing one = one[idx]
+            np.take(one, idx, axis=0, out=data_view)
+        if cals is not None:
+            data_view *= cals
+
+
+def _blk_read_lims(start, stop, buf_len):
+    """Helper to deal with indexing in the middle of a data block
+
+    Parameters
+    ----------
+    start : int
+        Starting index.
+    stop : int
+        Ending index (exclusive).
+    buf_len : int
+        Buffer size in samples.
+
+    Returns
+    -------
+    block_start_idx : int
+        The first block to start reading from.
+    r_lims : array, shape (n_blocks, 2)
+        The read limits (start/stop indices within each buffer).
+    d_lims : array, shape (n_blocks, 2)
+        The write limits (start/stop indices within the output array).
+
+    Notes
+    -----
+    Consider this example::
+
+        >>> start, stop, buf_len = 2, 27, 10
+
+                    +---------+---------+---------
+    File structure: |  buf0   |   buf1  |   buf2  |
+                    +---------+---------+---------
+    File time:      0        10        20        30
+                    +---------+---------+---------
+    Requested time:   2                       27
+
+                    |                             |
+                blockstart                    blockstop
+                      |                        |
+                    start                    stop
+
+    We need 27 - 2 = 25 samples (per channel) to store our data, and
+    we need to read from 3 buffers (30 samples) to get all of our data.
+
+    On all reads but the first, the data we read starts at
+    the first sample of the buffer. On all reads but the last,
+    the data we read ends on the last sample of the buffer.
+
+    We call ``this_data`` the variable that stores the current buffer's data,
+    and ``data`` the variable that stores the total output.
+
+    On the first read, we need to do this::
+
+        >>> data[0:buf_len-2] = this_data[2:buf_len]  # doctest: +SKIP
+
+    On the second read, we need to do::
+
+        >>> data[1*buf_len-2:2*buf_len-2] = this_data[0:buf_len]  # doctest: +SKIP
+
+    On the final read, we need to do::
+
+        >>> data[2*buf_len-2:3*buf_len-2-3] = this_data[0:buf_len-3]  # doctest: +SKIP
+
+    This function encapsulates this logic to allow a loop over blocks, where
+    data is stored using the following limits::
+
+        >>> data[d_lims[ii, 0]:d_lims[ii, 1]] = this_data[r_lims[ii, 0]:r_lims[ii, 1]]  # doctest: +SKIP
+
+    """  # noqa
+    # this is used to deal with indexing in the middle of a sampling period
+    assert all(isinstance(x, int) for x in (start, stop, buf_len))
+    block_start_idx = (start // buf_len)
+    block_start = block_start_idx * buf_len
+    last_used_samp = stop - 1
+    block_stop = last_used_samp - last_used_samp % buf_len + buf_len
+    read_size = block_stop - block_start
+    n_blk = read_size // buf_len + (read_size % buf_len != 0)
+    start_offset = start - block_start
+    end_offset = block_stop - stop
+    d_lims = np.empty((n_blk, 2), int)
+    r_lims = np.empty((n_blk, 2), int)
+    for bi in range(n_blk):
+        # Triage start (sidx) and end (eidx) indices for
+        # data (d) and read (r)
+        if bi == 0:
+            d_sidx = 0
+            r_sidx = start_offset
+        else:
+            d_sidx = bi * buf_len - start_offset
+            r_sidx = 0
+        if bi == n_blk - 1:
+            d_eidx = stop - start
+            r_eidx = buf_len - end_offset
+        else:
+            d_eidx = (bi + 1) * buf_len - start_offset
+            r_eidx = buf_len
+        d_lims[bi] = [d_sidx, d_eidx]
+        r_lims[bi] = [r_sidx, r_eidx]
+    return block_start_idx, r_lims, d_lims
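A minimal sketch of the read loop this helper enables, reusing the
docstring's numbers; ``read_buffer`` and ``n_channels`` are hypothetical
stand-ins for a reader's own buffer access:

    block_start_idx, r_lims, d_lims = _blk_read_lims(2, 27, 10)
    data = np.empty((n_channels, 27 - 2))
    for ii in range(len(r_lims)):
        this_data = read_buffer(block_start_idx + ii)  # (n_channels, 10) block
        data[:, d_lims[ii, 0]:d_lims[ii, 1]] = \
            this_data[:, r_lims[ii, 0]:r_lims[ii, 1]]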
+
+
+def _read_segments_file(raw, data, idx, fi, start, stop, cals, mult,
+                        dtype='<i2'):
+    """Read a chunk of raw data"""
+    n_channels = raw.info['nchan']
+    n_bytes = np.dtype(dtype).itemsize
+    # data_offset is in bytes; data_left counts data samples
+    # (channels x time points).
+    data_offset = n_channels * start * n_bytes
+    data_left = (stop - start) * n_channels
+
+    # Read up to 100 MB of data at a time, block_size is in data samples
+    block_size = ((int(100e6) // n_bytes) // n_channels) * n_channels
+    block_size = min(data_left, block_size)
+    with open(raw._filenames[fi], 'rb', buffering=0) as fid:
+        fid.seek(data_offset)
+        # extract data in chunks
+        for sample_start in np.arange(0, data_left, block_size) // n_channels:
+
+            count = min(block_size, data_left - sample_start * n_channels)
+            block = np.fromfile(fid, dtype, count)
+            block = block.reshape(n_channels, -1, order='F')
+            n_samples = block.shape[1]  # = count // n_channels
+            sample_stop = sample_start + n_samples
+            data_view = data[:, sample_start:sample_stop]
+            _mult_cal_one(data_view, block, idx, cals, mult)
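A hedged sketch of how a raw class might delegate to this helper from its
per-file read hook; the hook name ``_read_segment_file`` is an assumption
inferred from the call signature used here, not confirmed by this diff:

    class SomeRaw(_BaseRaw):  # illustrative only
        def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
            # read little-endian int16 samples straight from disk
            _read_segments_file(self, data, idx, fi, start, stop, cals, mult,
                                dtype='<i2')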
diff --git a/mne/io/write.py b/mne/io/write.py
index da090fb..2c602fc 100644
--- a/mne/io/write.py
+++ b/mne/io/write.py
@@ -195,27 +195,29 @@ def get_machid():
     return ids
 
 
+def get_new_file_id():
+    """Helper to create a new file ID tag"""
+    secs, usecs = divmod(time.time(), 1.)
+    secs, usecs = int(secs), int(usecs * 1e6)
+    return {'machid': get_machid(), 'version': FIFF.FIFFC_VERSION,
+            'secs': secs, 'usecs': usecs}
+
+
 def write_id(fid, kind, id_=None):
     """Writes fiff id"""
     id_ = _generate_meas_id() if id_ is None else id_
 
-    FIFFT_ID_STRUCT = 31
-    FIFFV_NEXT_SEQ = 0
-
     data_size = 5 * 4                       # The id comprises five integers
     fid.write(np.array(kind, dtype='>i4').tostring())
-    fid.write(np.array(FIFFT_ID_STRUCT, dtype='>i4').tostring())
+    fid.write(np.array(FIFF.FIFFT_ID_STRUCT, dtype='>i4').tostring())
     fid.write(np.array(data_size, dtype='>i4').tostring())
-    fid.write(np.array(FIFFV_NEXT_SEQ, dtype='>i4').tostring())
+    fid.write(np.array(FIFF.FIFFV_NEXT_SEQ, dtype='>i4').tostring())
 
     # Collect the bits together for one write
-    data = np.empty(5, dtype=np.int32)
-    data[0] = id_['version']
-    data[1] = id_['machid'][0]
-    data[2] = id_['machid'][1]
-    data[3] = id_['secs']
-    data[4] = id_['usecs']
-    fid.write(np.array(data, dtype='>i4').tostring())
+    arr = np.array([id_['version'],
+                    id_['machid'][0], id_['machid'][1],
+                    id_['secs'], id_['usecs']], dtype='>i4')
+    fid.write(arr.tostring())
 
 
 def start_block(fid, kind):
@@ -378,7 +380,7 @@ def write_float_sparse_rcs(fid, kind, mat):
 def _generate_meas_id():
     """Helper to generate a new meas_id dict"""
     id_ = dict()
-    id_['version'] = (1 << 16) | 2
+    id_['version'] = FIFF.FIFFC_VERSION
     id_['machid'] = get_machid()
     id_['secs'], id_['usecs'] = _date_now()
     return id_
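The replaced literal and the named constant encode the same value: the FIF
version packs the major/minor numbers into the upper/lower 16 bits, so
version 1.2 is

    major, minor = 1, 2
    assert (major << 16) | minor == 65538  # the old inline (1 << 16) | 2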
diff --git a/mne/minimum_norm/inverse.py b/mne/minimum_norm/inverse.py
index eca2a24..713de53 100644
--- a/mne/minimum_norm/inverse.py
+++ b/mne/minimum_norm/inverse.py
@@ -16,7 +16,7 @@ from ..io.tag import find_tag
 from ..io.matrix import (_read_named_matrix, _transpose_named_matrix,
                          write_named_matrix)
 from ..io.proj import _read_proj, make_projector, _write_proj
-from ..io.proj import _has_eeg_average_ref_proj
+from ..io.proj import _needs_eeg_average_ref_proj
 from ..io.tree import dir_tree_find
 from ..io.write import (write_int, write_float_matrix, start_file,
                         start_block, end_block, end_file, write_float,
@@ -710,9 +710,9 @@ def _check_ori(pick_ori):
 
 def _check_reference(inst):
     """Aux funcion"""
-    if "eeg" in inst and not _has_eeg_average_ref_proj(inst.info['projs']):
+    if _needs_eeg_average_ref_proj(inst.info):
         raise ValueError('EEG average reference is mandatory for inverse '
-                         'modeling.')
+                         'modeling, use add_eeg_ref method.')
     if inst.info['custom_ref_applied']:
         raise ValueError('Custom EEG reference is not allowed for inverse '
                          'modeling.')
diff --git a/mne/preprocessing/__init__.py b/mne/preprocessing/__init__.py
index e1f6420..4792927 100644
--- a/mne/preprocessing/__init__.py
+++ b/mne/preprocessing/__init__.py
@@ -15,5 +15,5 @@ from .ica import (ICA, ica_find_eog_events, ica_find_ecg_events,
                   get_score_funcs, read_ica, run_ica)
 from .bads import find_outliers
 from .stim import fix_stim_artifact
-from .maxwell import _maxwell_filter
+from .maxwell import maxwell_filter
 from .xdawn import Xdawn
diff --git a/mne/preprocessing/ecg.py b/mne/preprocessing/ecg.py
index 1976318..cd50cec 100644
--- a/mne/preprocessing/ecg.py
+++ b/mne/preprocessing/ecg.py
@@ -14,6 +14,8 @@ from ..filter import band_pass_filter
 from ..epochs import Epochs, _BaseEpochs
 from ..io.base import _BaseRaw
 from ..evoked import Evoked
+from ..io import RawArray
+from .. import create_info
 
 
 def qrs_detector(sfreq, ecg, thresh_value=0.6, levels=2.5, n_thresh=3,
@@ -129,7 +131,7 @@ def qrs_detector(sfreq, ecg, thresh_value=0.6, levels=2.5, n_thresh=3,
 @verbose
 def find_ecg_events(raw, event_id=999, ch_name=None, tstart=0.0,
                     l_freq=5, h_freq=35, qrs_threshold='auto',
-                    filter_length='10s', verbose=None):
+                    filter_length='10s', return_ecg=False, verbose=None):
     """Find ECG peaks
 
     Parameters
@@ -156,6 +158,9 @@ def find_ecg_events(raw, event_id=999, ch_name=None, tstart=0.0,
         number of heartbeats (40-160 beats / min).
     filter_length : str | int | None
         Number of taps to use for filtering.
+    return_ecg : bool
+        Return the ECG channel if it was synthesized. Defaults to False.
+        If True and a real ECG channel exists, this will yield None.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -189,7 +194,10 @@ def find_ecg_events(raw, event_id=999, ch_name=None, tstart=0.0,
     ecg_events = np.array([ecg_events + raw.first_samp,
                            np.zeros(n_events, int),
                            event_id * np.ones(n_events, int)]).T
-    return ecg_events, idx_ecg, average_pulse
+    out = (ecg_events, idx_ecg, average_pulse)
+    if return_ecg:
+        out += (ecg,)
+    return out
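A hedged usage sketch of the new ``return_ecg`` flag; a fourth return
value is appended only when the flag is set:

    events, ch_ecg, avg_pulse, ecg = find_ecg_events(raw, return_ecg=True)
    # `ecg` is the synthesized trace (None if a real ECG channel exists,
    # per the docstring above)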
 
 
 def _get_ecg_channel_index(ch_name, inst):
@@ -219,7 +227,8 @@ def _get_ecg_channel_index(ch_name, inst):
 @verbose
 def create_ecg_epochs(raw, ch_name=None, event_id=999, picks=None,
                       tmin=-0.5, tmax=0.5, l_freq=8, h_freq=16, reject=None,
-                      flat=None, baseline=None, preload=True, verbose=None):
+                      flat=None, baseline=None, preload=True,
+                      keep_ecg=False, verbose=None):
     """Conveniently generate epochs around ECG artifact events
 
 
@@ -270,6 +279,9 @@ def create_ecg_epochs(raw, ch_name=None, event_id=999, picks=None,
         interval is used. If None, no correction is applied.
     preload : bool
         Preload epochs or not.
+    keep_ecg : bool
+        When ECG is synthetically created (after picking),
+        should it be added to the epochs? Defaults to False.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
 
@@ -278,18 +290,45 @@ def create_ecg_epochs(raw, ch_name=None, event_id=999, picks=None,
     ecg_epochs : instance of Epochs
         Data epoched around ECG r-peaks.
     """
+    not_has_ecg = 'ecg' not in raw
+    if not_has_ecg:
+        ecg, times = _make_ecg(raw, None, None, verbose)
 
-    events, _, _ = find_ecg_events(raw, ch_name=ch_name, event_id=event_id,
-                                   l_freq=l_freq, h_freq=h_freq,
-                                   verbose=verbose)
-    if picks is None:
-        picks = pick_types(raw.info, meg=True, eeg=True, ref_meg=False)
+    events, _, _, ecg = find_ecg_events(
+        raw, ch_name=ch_name, event_id=event_id, l_freq=l_freq, h_freq=h_freq,
+        return_ecg=True, verbose=verbose)
+
+    if not_has_ecg:
+        ecg_raw = RawArray(
+            ecg[None],
+            create_info(ch_names=['ECG-SYN'],
+                        sfreq=raw.info['sfreq'], ch_types=['ecg']))
+        ignore = ['ch_names', 'chs', 'nchan', 'bads']
+        for k, v in raw.info.items():
+            if k not in ignore:
+                ecg_raw.info[k] = v
+        raw.add_channels([ecg_raw])
+
+    if picks is None and not keep_ecg:
+        picks = pick_types(raw.info, meg=True, eeg=True, ecg=False,
+                           ref_meg=False)
+    elif picks is None and keep_ecg and not_has_ecg:
+        picks = pick_types(raw.info, meg=True, eeg=True, ecg=True,
+                           ref_meg=False)
+    elif keep_ecg and not_has_ecg:
+        picks_extra = pick_types(raw.info, meg=False, eeg=False, ecg=True,
+                                 ref_meg=False)
+        picks = np.concatenate([picks, picks_extra])
 
     # create epochs around ECG events and baseline (important)
     ecg_epochs = Epochs(raw, events=events, event_id=event_id,
                         tmin=tmin, tmax=tmax, proj=False,
                         picks=picks, reject=reject, baseline=baseline,
                         verbose=verbose, preload=preload)
+    if ecg is not None:
+        raw.drop_channels(['ECG-SYN'])
+
     return ecg_epochs
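A hedged usage sketch of the new ``keep_ecg`` behavior: when no real ECG
channel is present, a synthesized 'ECG-SYN' channel is added for detection
and dropped from ``raw`` again afterwards, while the preloaded epochs
retain it:

    ecg_epochs = create_ecg_epochs(raw, keep_ecg=True)
    assert 'ECG-SYN' not in raw.ch_names  # raw is restored after epoching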
 
 
diff --git a/mne/preprocessing/ica.py b/mne/preprocessing/ica.py
index 57a7f04..b416e42 100644
--- a/mne/preprocessing/ica.py
+++ b/mne/preprocessing/ica.py
@@ -34,7 +34,7 @@ from ..io.base import _BaseRaw
 from ..epochs import _BaseEpochs
 from ..viz import (plot_ica_components, plot_ica_scores,
                    plot_ica_sources, plot_ica_overlay)
-from ..viz.utils import (_prepare_trellis, tight_layout,
+from ..viz.utils import (_prepare_trellis, tight_layout, plt_show,
                          _setup_vmin_vmax)
 from ..viz.topomap import (_prepare_topo_plot, _check_outlines,
                            plot_topomap)
@@ -909,7 +909,7 @@ class ICA(ContainsMixin):
             if verbose is not None:
                 verbose = self.verbose
             ecg, times = _make_ecg(inst, start, stop, verbose)
-            ch_name = 'ECG'
+            ch_name = 'ECG-MAG'
         else:
             ecg = inst.ch_names[idx_ecg]
 
@@ -949,6 +949,7 @@ class ICA(ContainsMixin):
         if not hasattr(self, 'labels_'):
             self.labels_ = dict()
         self.labels_['ecg'] = list(ecg_idx)
+        self.labels_['ecg/%s' % ch_name] = list(ecg_idx)
         return self.labels_['ecg'], scores
 
     @verbose
@@ -1015,13 +1016,19 @@ class ICA(ContainsMixin):
         if inst.ch_names != self.ch_names:
             inst = inst.pick_channels(self.ch_names, copy=True)
 
-        for eog_ch, target in zip(eog_chs, targets):
+        if not hasattr(self, 'labels_'):
+            self.labels_ = dict()
+
+        for ii, (eog_ch, target) in enumerate(zip(eog_chs, targets)):
             scores += [self.score_sources(inst, target=target,
                                           score_func='pearsonr',
                                           start=start, stop=stop,
                                           l_freq=l_freq, h_freq=h_freq,
                                           verbose=verbose)]
-            eog_idx += [find_outliers(scores[-1], threshold=threshold)]
+            # pick last scores
+            this_idx = find_outliers(scores[-1], threshold=threshold)
+            eog_idx += [this_idx]
+            self.labels_[('eog/%i/' % ii) + eog_ch] = list(this_idx)
 
         # remove duplicates but keep order by score, even across multiple
         # EOG channels
@@ -1037,10 +1044,8 @@ class ICA(ContainsMixin):
                 eog_idx_unique.remove(i)
         if len(scores) == 1:
             scores = scores[0]
-
-        if not hasattr(self, 'labels_'):
-            self.labels_ = dict()
         self.labels_['eog'] = list(eog_idx)
+
         return self.labels_['eog'], scores
 
     def apply(self, inst, include=None, exclude=None,
@@ -1421,7 +1426,7 @@ class ICA(ContainsMixin):
                                 title=title, start=start, stop=stop, show=show,
                                 block=block)
 
-    def plot_scores(self, scores, exclude=None, axhline=None,
+    def plot_scores(self, scores, exclude=None, labels=None, axhline=None,
                     title='ICA component scores', figsize=(12, 6),
                     show=True):
         """Plot scores related to detected components.
@@ -1436,6 +1441,12 @@ class ICA(ContainsMixin):
         exclude : array_like of int
             The components marked for exclusion. If None (default), ICA.exclude
             will be used.
+        labels : str | list | 'ecg' | 'eog' | None
+            The labels to consider for the axes titles. Defaults to None.
+            If list, should match the outer shape of `scores`.
+            If 'ecg' or 'eog', the labels_ attributes will be looked up.
+            Note that '/' is used internally for sublabels specifying ECG and
+            EOG channels.
         axhline : float
             Draw horizontal line to e.g. visualize rejection threshold.
         title : str
@@ -1450,9 +1461,9 @@ class ICA(ContainsMixin):
         fig : instance of matplotlib.pyplot.Figure
             The figure object.
         """
-        return plot_ica_scores(ica=self, scores=scores, exclude=exclude,
-                               axhline=axhline, title=title,
-                               figsize=figsize, show=show)
+        return plot_ica_scores(
+            ica=self, scores=scores, exclude=exclude, labels=labels,
+            axhline=axhline, title=title, figsize=figsize, show=show)
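A hedged usage sketch of the new ``labels`` argument, assuming
``find_bads_ecg`` has populated ``labels_['ecg']`` as shown earlier in
this diff:

    ecg_inds, scores = ica.find_bads_ecg(raw)
    ica.plot_scores(scores, exclude=ecg_inds, labels='ecg')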
 
     def plot_overlay(self, inst, exclude=None, picks=None, start=None,
                      stop=None, title=None, show=True):
@@ -1743,16 +1754,24 @@ def _find_sources(sources, target, score_func):
 def _serialize(dict_, outer_sep=';', inner_sep=':'):
     """Aux function"""
     s = []
-    for k, v in dict_.items():
-        if callable(v):
-            v = v.__name__
-        elif isinstance(v, int):
-            v = int(v)
+    for key, value in dict_.items():
+        if callable(value):
+            value = value.__name__
+        elif isinstance(value, int):
+            value = int(value)
+        elif isinstance(value, dict):
+            # py35 json does not support numpy int64
+            for subkey, subvalue in value.items():
+                if isinstance(subvalue, list):
+                    if len(subvalue) > 0:
+                        if isinstance(subvalue[0], (int, np.integer)):
+                            value[subkey] = [int(i) for i in subvalue]
+
         for cls in (np.random.RandomState, Covariance):
-            if isinstance(v, cls):
-                v = cls.__name__
+            if isinstance(value, cls):
+                value = cls.__name__
 
-        s.append(k + inner_sep + json.dumps(v))
+        s.append(key + inner_sep + json.dumps(value))
 
     return outer_sep.join(s)
 
@@ -1761,7 +1780,7 @@ def _deserialize(str_, outer_sep=';', inner_sep=':'):
     """Aux Function"""
     out = {}
     for mapping in str_.split(outer_sep):
-        k, v = mapping.split(inner_sep)
+        k, v = mapping.split(inner_sep, 1)
         vv = json.loads(v)
         out[k] = vv if not isinstance(vv, text_type) else str(vv)
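The ``maxsplit=1`` fix matters because the JSON-encoded value may itself
contain the inner separator; a minimal sketch:

    _deserialize('labels_:{"ecg": [0, 3]}')
    # -> {'labels_': {'ecg': [0, 3]}}; without maxsplit=1 the extra ':'
    # inside the JSON value would break the two-way unpacking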
 
@@ -1794,7 +1813,7 @@ def _write_ica(fid, ica):
         write_meas_info(fid, ica.info)
         end_block(fid, FIFF.FIFFB_MEAS)
 
-    start_block(fid, FIFF.FIFFB_ICA)
+    start_block(fid, FIFF.FIFFB_MNE_ICA)
 
     #   ICA interface params
     write_string(fid, FIFF.FIFF_MNE_ICA_INTERFACE_PARAMS,
@@ -1805,8 +1824,10 @@ def _write_ica(fid, ica):
         write_name_list(fid, FIFF.FIFF_MNE_ROW_NAMES, ica.ch_names)
 
     # samples on fit
-    ica_misc = {'n_samples_': getattr(ica, 'n_samples_', None)}
-    #   ICA init params
+    n_samples = getattr(ica, 'n_samples_', None)
+    ica_misc = {'n_samples_': (None if n_samples is None else int(n_samples)),
+                'labels_': getattr(ica, 'labels_', None)}
+
     write_string(fid, FIFF.FIFF_MNE_ICA_INTERFACE_PARAMS,
                  _serialize(ica_init))
 
@@ -1836,7 +1857,7 @@ def _write_ica(fid, ica):
     write_int(fid, FIFF.FIFF_MNE_ICA_BADS, ica.exclude)
 
     # Done!
-    end_block(fid, FIFF.FIFFB_ICA)
+    end_block(fid, FIFF.FIFFB_MNE_ICA)
 
 
 @verbose
@@ -1870,10 +1891,12 @@ def read_ica(fname):
     else:
         info['filename'] = fname
 
-    ica_data = dir_tree_find(tree, FIFF.FIFFB_ICA)
+    ica_data = dir_tree_find(tree, FIFF.FIFFB_MNE_ICA)
     if len(ica_data) == 0:
-        fid.close()
-        raise ValueError('Could not find ICA data')
+        ica_data = dir_tree_find(tree, 123)  # constant 123 used before v0.11
+        if len(ica_data) == 0:
+            fid.close()
+            raise ValueError('Could not find ICA data')
 
     my_ica_data = ica_data[0]
     for d in my_ica_data['directory']:
@@ -1936,6 +1959,8 @@ def read_ica(fname):
     ica.info = info
     if 'n_samples_' in ica_misc:
         ica.n_samples_ = ica_misc['n_samples_']
+    if 'labels_' in ica_misc:
+        ica.labels_ = ica_misc['labels_']
 
     logger.info('Ready.')
 
@@ -2227,8 +2252,6 @@ def _find_max_corrs(all_maps, target, threshold):
 def _plot_corrmap(data, subjs, indices, ch_type, ica, label, show, outlines,
                   layout, cmap, contours):
     """Customized ica.plot_components for corrmap"""
-    import matplotlib.pyplot as plt
-
     title = 'Detected components'
     if label is not None:
         title += ' of type ' + label
@@ -2275,8 +2298,7 @@ def _plot_corrmap(data, subjs, indices, ch_type, ica, label, show, outlines,
     tight_layout(fig=fig)
     fig.subplots_adjust(top=0.8)
     fig.canvas.draw()
-    if show is True:
-        plt.show()
+    plt_show(show)
     return fig
 
 
@@ -2296,6 +2318,11 @@ def corrmap(icas, template, threshold="auto", label=None,
     analysis is repeated with the mean of the maps identified in the first
     stage.
 
+    Run with `plot` and `show` set to `True` and `label=None` to find
+    good parameters. Then, run with labelling enabled to write the
+    labels into the ICA objects. (Running with both `plot` and `label`
+    disabled does nothing.)
+
     Outputs a list of fitted ICAs with the indices of the marked ICs in a
     specified field.
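A hedged sketch of the two-pass workflow described above, with
``template=(subject_index, component_index)``:

    # pass 1: eyeball match quality, label nothing
    corrmap(icas, template=(0, 0), threshold='auto', plot=True, show=True)
    # pass 2: write the label into each ICA's labels_ dict
    corrmap(icas, template=(0, 0), threshold='auto', label='blink',
            plot=False)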
 
@@ -2350,16 +2377,6 @@ def corrmap(icas, template, threshold="auto", label=None,
         outline. Moreover, a matplotlib patch object can be passed for
         advanced masking options, either directly or as a function that returns
         patches (required for multi-axis plots).
-    layout : None | Layout | list of Layout
-        Layout instance specifying sensor positions (does not need to be
-        specified for Neuromag data). Or a list of Layout if projections
-        are from different sensor types.
-    cmap : matplotlib colormap
-        Colormap.
-    sensors : bool | str
-        Add markers for sensor locations to the plot. Accepts matplotlib plot
-        format string (e.g., 'r+' for red plusses). If True, a circle will be
-        used (via .add_artist). Defaults to True.
     contours : int | False | None
         The number of contour lines to draw. If 0, no contours will be drawn.
     verbose : bool, str, int, or None
@@ -2382,6 +2399,7 @@ def corrmap(icas, template, threshold="auto", label=None,
 
     target = all_maps[template[0]][template[1]]
 
+    template_fig, labelled_ics = None, None
     if plot is True:
         ttl = 'Template from subj. {0}'.format(str(template[0]))
         template_fig = icas[template[0]].plot_components(
diff --git a/mne/preprocessing/maxwell.py b/mne/preprocessing/maxwell.py
index 51d3a4d..8d7df5e 100644
--- a/mne/preprocessing/maxwell.py
+++ b/mne/preprocessing/maxwell.py
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 # Authors: Mark Wronkiewicz <wronk.mark at gmail.com>
 #          Eric Larson <larson.eric.d at gmail.com>
 #          Jussi Nurminen <jnu at iki.fi>
@@ -5,38 +6,73 @@
 
 # License: BSD (3-clause)
 
-from __future__ import division
+from copy import deepcopy
 import numpy as np
 from scipy import linalg
 from math import factorial
-import inspect
+from os import path as op
 
-from .. import pick_types
+from .. import __version__
+from ..bem import _check_origin
+from ..transforms import _str_to_frame, _get_trans
 from ..forward._compute_forward import _concatenate_coils
 from ..forward._make_forward import _prep_meg_channels
+from ..surface import _normalize_vectors
+from ..io.constants import FIFF
+from ..io.proc_history import _read_ctc
 from ..io.write import _generate_meas_id, _date_now
-from ..utils import verbose, logger
+from ..io import _loc_to_coil_trans, _BaseRaw
+from ..io.pick import pick_types, pick_info, pick_channels
+from ..utils import verbose, logger, _clean_names
+from ..fixes import _get_args
+from ..externals.six import string_types
+from ..channels.channels import _get_T1T2_mag_inds
+
+
+# Note: Elekta uses single precision and some algorithms might use
+# truncated versions of constants (e.g., μ0), which could lead to small
+# differences between algorithms
 
 
 @verbose
-def _maxwell_filter(raw, origin=(0, 0, 40), int_order=8, ext_order=3,
-                    st_dur=None, st_corr=0.98, verbose=None):
-    """Apply Maxwell filter to data using spherical harmonics.
+def maxwell_filter(raw, origin='auto', int_order=8, ext_order=3,
+                   calibration=None, cross_talk=None, st_duration=None,
+                   st_correlation=0.98, coord_frame='head', destination=None,
+                   regularize='in', ignore_ref=False, bad_condition='error',
+                   verbose=None):
+    """Apply Maxwell filter to data using multipole moments
+
+    .. warning:: Automatic bad channel detection is not currently implemented.
+                 It is critical to mark bad channels before running Maxwell
+                 filtering, so data should be inspected and marked accordingly
+                 prior to running this algorithm.
+
+    .. warning:: Not all features of Elekta MaxFilter™ are currently
+                 implemented (see Notes). Maxwell filtering in mne-python
+                 is not designed for clinical use.
 
     Parameters
     ----------
     raw : instance of mne.io.Raw
         Data to be filtered
-    origin : array-like, shape (3,)
-        Origin of internal and external multipolar moment space in head coords
-        and in millimeters
+    origin : array-like, shape (3,) | str
+        Origin of internal and external multipolar moment space in meters.
+        The default is ``'auto'``, which means ``(0., 0., 0.)`` for
+        ``coord_frame='meg'``, and a head-digitization-based origin fit
+        for ``coord_frame='head'``.
     int_order : int
-        Order of internal component of spherical expansion
+        Order of internal component of spherical expansion.
     ext_order : int
-        Order of external component of spherical expansion
-    st_dur : float | None
+        Order of external component of spherical expansion.
+    calibration : str | None
+        Path to the ``'.dat'`` file with fine calibration coefficients.
+        File can have 1D or 3D gradiometer imbalance correction.
+        This file is machine/site-specific.
+    cross_talk : str | None
+        Path to the FIF file with cross-talk correction information.
+    st_duration : float | None
         If not None, apply spatiotemporal SSS with specified buffer duration
-        (in seconds). Elekta's default is 10.0 seconds in MaxFilter v2.2.
+        (in seconds). Elekta's default is 10.0 seconds in MaxFilter™ v2.2.
         Spatiotemporal SSS acts implicitly as a high-pass filter with cut-off
         frequency 1/st_duration Hz. For this (and other) reasons, longer
         buffers are generally better as long as your system can handle the
@@ -44,26 +80,103 @@ def _maxwell_filter(raw, origin=(0, 0, 40), int_order=8, ext_order=3,
         identically, choose a buffer length that divides evenly into your data.
         Any data at the trailing edge that doesn't fit evenly into a whole
         buffer window will be lumped into the previous buffer.
-    st_corr : float
+    st_correlation : float
         Correlation limit between inner and outer subspaces used to reject
         overlapping inner/outer signals during spatiotemporal SSS.
+    coord_frame : str
+        The coordinate frame that the ``origin`` is specified in, either
+        ``'meg'`` or ``'head'``. For empty-room recordings that do not have
+        a head<->meg transform ``info['dev_head_t']``, the MEG coordinate
+        frame should be used.
+    destination : str | array-like, shape (3,) | None
+        The destination location for the head. Can be ``None``, which
+        will not change the head position, or a string path to a FIF file
+        containing a MEG device<->head transformation, or a 3-element array
+        giving the coordinates to translate to (with no rotations).
+        For example, ``destination=(0, 0, 0.04)`` would translate the bases
+        as ``--trans default`` would in MaxFilter™ (i.e., to the default
+        head location).
+    regularize : str | None
+        Basis regularization type, must be "in" or None.
+        "in" is the same algorithm as the "-regularize in" option in
+        MaxFilter™.
+    ignore_ref : bool
+        If True, do not include reference channels in compensation. This
+        option should be True for KIT files, since Maxwell filtering
+        with reference channels is not currently supported.
+    bad_condition : str
+        How to deal with ill-conditioned SSS matrices. Can be "error"
+        (default), "warning", or "ignore".
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose)
 
     Returns
     -------
     raw_sss : instance of mne.io.Raw
-        The raw data with Maxwell filtering applied
+        The raw data with Maxwell filtering applied.
+
+    See Also
+    --------
+    mne.epochs.average_movements
 
     Notes
     -----
-    .. versionadded:: 0.10
-
-    Equation numbers refer to Taulu and Kajola, 2005 [1]_ unless otherwise
-    noted.
+    .. versionadded:: 0.11
 
     Some of this code was adapted and relicensed (with BSD form) with
-    permission from Jussi Nurminen.
+    permission from Jussi Nurminen. These algorithms are based on work
+    from [1]_ and [2]_.
+
+    Compared to Elekta's MaxFilter™ software, our Maxwell filtering
+    algorithm currently provides the following features:
+
+        * Bad channel reconstruction
+        * Cross-talk cancellation
+        * Fine calibration correction
+        * tSSS
+        * Coordinate frame translation
+        * Regularization of internal components using information theory
+
+    The following features are not yet implemented:
+
+        * **Not certified for clinical use**
+        * Raw movement compensation
+        * Automatic bad channel detection
+        * cHPI subtraction
+
+    Our algorithm has the following enhancements:
+
+        * Double floating point precision
+        * Handling of 3D (in addition to 1D) fine calibration files
+        * Automated processing of split (-1.fif) and concatenated files
+        * Epoch-based movement compensation as described in [1]_ through
+          :func:`mne.epochs.average_movements`
+        * **Experimental** processing of data from (un-compensated)
+          non-Elekta systems
+
+    Use of Maxwell filtering routines with non-Elekta systems is currently
+    **experimental**. Worse results for non-Elekta systems are expected due
+    to (at least):
+
+        * Missing fine-calibration and cross-talk cancellation data for
+          other systems.
+        * Processing with reference sensors has not been vetted.
+        * Regularization of components may not work well for all systems.
+        * Coil integration has not been optimized using Abramowitz/Stegun
+          definitions.
+
+    .. note:: Various Maxwell filtering algorithm components are covered by
+              patents owned by Elekta Oy, Helsinki, Finland.
+              These patents include, but may not be limited to:
+
+                  - US2006031038 (Signal Space Separation)
+                  - US6876196 (Head position determination)
+                  - WO2005067789 (DC fields)
+                  - WO2005078467 (MaxShield)
+                  - WO2006114473 (Temporal Signal Space Separation)
+
+              These patents likely preclude the use of Maxwell filtering code
+              in commercial applications. Consult a lawyer if necessary.
 
     References
     ----------
@@ -79,7 +192,6 @@ def _maxwell_filter(raw, origin=(0, 0, 40), int_order=8, ext_order=3,
 
            http://lib.tkk.fi/Diss/2008/isbn9789512295654/article3.pdf
     """
-
     # There are an absurd number of different possible notations for spherical
     # coordinates, which confounds the notation for spherical harmonics.  Here,
     # we purposefully stay away from shorthand notation in both and use
@@ -87,122 +199,357 @@ def _maxwell_filter(raw, origin=(0, 0, 40), int_order=8, ext_order=3,
     # See mathworld.wolfram.com/SphericalHarmonic.html for more discussion.
     # Our code follows the same standard that ``scipy`` uses for ``sph_harm``.
 
-    if raw.proj:
-        raise RuntimeError('Projectors cannot be applied to raw data.')
-    if len(raw.info.get('comps', [])) > 0:
-        raise RuntimeError('Maxwell filter cannot handle compensated '
-                           'channels.')
-    st_corr = float(st_corr)
-    if st_corr <= 0. or st_corr > 1.:
-        raise ValueError('Need 0 < st_corr <= 1., got %s' % st_corr)
-    logger.info('Bad channels being reconstructed: ' + str(raw.info['bads']))
-
-    logger.info('Preparing coil definitions')
-    all_coils, _, _, meg_info = _prep_meg_channels(raw.info, accurate=True,
-                                                   elekta_defs=True,
-                                                   verbose=False)
-    raw_sss = raw.copy().load_data()
+    # triage inputs ASAP to avoid late-thrown errors
+
+    _check_raw(raw)
+    _check_usable(raw)
+    _check_regularize(regularize)
+    st_correlation = float(st_correlation)
+    if st_correlation <= 0. or st_correlation > 1.:
+        raise ValueError('Need 0 < st_correlation <= 1., got %s'
+                         % st_correlation)
+    if coord_frame not in ('head', 'meg'):
+        raise ValueError('coord_frame must be either "head" or "meg", not "%s"'
+                         % coord_frame)
+    head_frame = coord_frame == 'head'
+    if destination is not None:
+        if not head_frame:
+            raise RuntimeError('destination can only be set if using the '
+                               'head coordinate frame')
+        if isinstance(destination, string_types):
+            recon_trans = _get_trans(destination, 'meg', 'head')[0]['trans']
+        else:
+            destination = np.array(destination, float)
+            if destination.shape != (3,):
+                raise ValueError('destination must be a 3-element vector, '
+                                 'str, or None')
+            recon_trans = np.eye(4)
+            recon_trans[:3, 3] = destination
+    else:
+        recon_trans = None
+    if st_duration is not None:
+        st_duration = float(st_duration)
+        if not 0. < st_duration <= raw.times[-1]:
+            raise ValueError('st_duration (%0.1fs) must be between 0 and the '
+                             'duration of the data (%0.1fs).'
+                             % (st_duration, raw.times[-1]))
+        st_correlation = float(st_correlation)
+        if not 0. < st_correlation <= 1:
+            raise ValueError('st_correlation must be between 0. and 1.')
+    if not isinstance(bad_condition, string_types) or \
+            bad_condition not in ['error', 'warning', 'ignore']:
+        raise ValueError('bad_condition must be "error", "warning", or '
+                         '"ignore", not %s' % bad_condition)
+
+    # Now we can actually get moving
+
+    logger.info('Maxwell filtering raw data')
+    raw_sss = raw.copy().load_data(verbose=False)
     del raw
-    times = raw_sss.times
+    info, times = raw_sss.info, raw_sss.times
+    meg_picks, mag_picks, grad_picks, good_picks, coil_scale, mag_or_fine = \
+        _get_mf_picks(info, int_order, ext_order, ignore_ref, mag_scale=100.)
+
+    #
+    # Fine calibration processing (load fine cal and overwrite sensor geometry)
+    #
+    if calibration is not None:
+        grad_imbalances, mag_cals, sss_cal = \
+            _update_sensor_geometry(info, calibration)
+    else:
+        sss_cal = dict()
 
-    # Get indices of channels to use in multipolar moment calculation
-    good_chs = pick_types(raw_sss.info, meg=True, exclude='bads')
     # Get indices of MEG channels
-    meg_picks = pick_types(raw_sss.info, meg=True, exclude=[])
-    meg_coils, _, _, meg_info = _prep_meg_channels(raw_sss.info, accurate=True,
-                                                   elekta_defs=True)
-
-    # Magnetometers (with coil_class == 1.0) must be scaled by 100 to improve
-    # numerical stability as they have different scales than gradiometers
-    coil_scale = np.ones((len(meg_coils), 1))
-    coil_scale[np.array([coil['coil_class'] == 1.0
-                         for coil in meg_coils])] = 100.
-
-    # Compute multipolar moment bases
-    origin = np.array(origin) / 1000.  # Convert scale from mm to m
-    # Compute in/out bases and create copies containing only good chs
-    S_in, S_out = _sss_basis(origin, meg_coils, int_order, ext_order)
-    n_in = S_in.shape[1]
-
-    S_in_good, S_out_good = S_in[good_chs, :], S_out[good_chs, :]
-    S_in_good_norm = np.sqrt(np.sum(S_in_good * S_in_good, axis=0))[:,
-                                                                    np.newaxis]
-    S_out_good_norm = \
-        np.sqrt(np.sum(S_out_good * S_out_good, axis=0))[:, np.newaxis]
-    # Pseudo-inverse of total multipolar moment basis set (Part of Eq. 37)
-    S_tot_good = np.c_[S_in_good, S_out_good]
-    S_tot_good /= np.sqrt(np.sum(S_tot_good * S_tot_good, axis=0))[np.newaxis,
-                                                                   :]
-    pS_tot_good = linalg.pinv(S_tot_good, cond=1e-15)
-
-    # Compute multipolar moments of (magnetometer scaled) data (Eq. 37)
-    # XXX eventually we can refactor this to work in chunks
-    data = raw_sss[good_chs][0]
-    mm = np.dot(pS_tot_good, data * coil_scale[good_chs])
-    # Reconstruct data from internal space (Eq. 38)
-    raw_sss._data[meg_picks] = np.dot(S_in, mm[:n_in] / S_in_good_norm)
-    raw_sss._data[meg_picks] /= coil_scale
+    if info['dev_head_t'] is None and coord_frame == 'head':
+        raise RuntimeError('coord_frame cannot be "head" because '
+                           'info["dev_head_t"] is None; if this is an '
+                           'empty room recording, consider using '
+                           'coord_frame="meg"')
+
+    # Determine/check the origin of the expansion
+    origin = _check_origin(origin, raw_sss.info, coord_frame, disp=True)
+    n_in, n_out = _get_n_moments([int_order, ext_order])
+
+    #
+    # Cross-talk processing
+    #
+    if cross_talk is not None:
+        sss_ctc = _read_ctc(cross_talk)
+        ctc_chs = sss_ctc['proj_items_chs']
+        if set(info['ch_names'][p] for p in meg_picks) != set(ctc_chs):
+            raise RuntimeError('ctc channels and raw channels do not match')
+        ctc_picks = pick_channels(ctc_chs,
+                                  [info['ch_names'][c] for c in good_picks])
+        ctc = sss_ctc['decoupler'][ctc_picks][:, ctc_picks]
+        # I have no idea why, but MF transposes this for storage...
+        sss_ctc['decoupler'] = sss_ctc['decoupler'].T.tocsc()
+    else:
+        sss_ctc = dict()
+
+    #
+    # Fine calibration processing (point-like magnetometers and calib. coeffs)
+    #
+    S_decomp = _info_sss_basis(info, None, origin, int_order, ext_order,
+                               head_frame, ignore_ref, coil_scale)
+    if calibration is not None:
+        # Compute point-like mags to incorporate gradiometer imbalance
+        grad_info = pick_info(info, grad_picks)
+        S_fine = _sss_basis_point(origin, grad_info, int_order, ext_order,
+                                  grad_imbalances, ignore_ref, head_frame)
+        # Add point like magnetometer data to bases.
+        S_decomp[grad_picks, :] += S_fine
+        # Scale magnetometers by calibration coefficient
+        S_decomp[mag_picks, :] /= mag_cals
+        mag_or_fine.fill(True)
+        # We need to be careful about KIT gradiometers
+    S_decomp = S_decomp[good_picks]
+
+    #
+    # Translate to destination frame (always use non-fine-cal bases)
+    #
+    S_recon = _info_sss_basis(info, recon_trans, origin, int_order, 0,
+                              head_frame, ignore_ref, coil_scale)
+    if recon_trans is not None:
+        # warn if we have translated too far
+        diff = 1000 * (info['dev_head_t']['trans'][:3, 3] -
+                       recon_trans[:3, 3])
+        dist = np.sqrt(np.sum(_sq(diff)))
+        if dist > 25.:
+            logger.warning('Head position change is over 25 mm (%s) = %0.1f mm'
+                           % (', '.join('%0.1f' % x for x in diff), dist))
+
+    #
+    # Regularization
+    #
+    reg_moments, n_use_in = _regularize(regularize, int_order, ext_order,
+                                        S_decomp, mag_or_fine)
+    if n_use_in != n_in:
+        S_decomp = S_decomp.take(reg_moments, axis=1)
+        S_recon = S_recon.take(reg_moments[:n_use_in], axis=1)
+
+    #
+    # Do the heavy lifting
+    #
 
-    # Reset 'bads' for any MEG channels since they've been reconstructed
-    bad_inds = [raw_sss.info['ch_names'].index(ch)
-                for ch in raw_sss.info['bads']]
-    raw_sss.info['bads'] = [raw_sss.info['ch_names'][bi] for bi in bad_inds
-                            if bi not in meg_picks]
+    # Pseudo-inverse of total multipolar moment basis set (Part of Eq. 37)
+    pS_decomp_good, sing = _col_norm_pinv(S_decomp.copy())
+    cond = sing[0] / sing[-1]
+    logger.debug('    Decomposition matrix condition: %0.1f' % cond)
+    if bad_condition != 'ignore' and cond >= 1000.:
+        msg = 'Matrix is badly conditioned: %0.0f >= 1000' % cond
+        if bad_condition == 'error':
+            raise RuntimeError(msg)
+        else:  # condition == 'warning':
+            logger.warning(msg)
+
+    # Build in our data scaling here
+    pS_decomp_good *= coil_scale[good_picks].T
+
+    # Split into inside and outside versions
+    pS_decomp_in = pS_decomp_good[:n_use_in]
+    pS_decomp_out = pS_decomp_good[n_use_in:]
+    del pS_decomp_good
+
+    # Reconstruct data from internal space only (Eq. 38), first rescale S_recon
+    S_recon /= coil_scale
 
     # Reconstruct raw file object with spatiotemporal processed data
-    if st_dur is not None:
-        if st_dur > times[-1]:
-            raise ValueError('st_dur (%0.1fs) longer than length of signal in '
-                             'raw (%0.1fs).' % (st_dur, times[-1]))
-        logger.info('Processing data using tSSS with st_dur=%s' % st_dur)
-
-        # Generate time points to break up data in to windows
-        lims = raw_sss.time_as_index(np.arange(times[0], times[-1], st_dur))
-        len_last_buf = raw_sss.times[-1] - raw_sss.index_as_time(lims[-1])[0]
-        if len_last_buf == st_dur:
-            lims = np.concatenate([lims, [len(raw_sss.times)]])
-        else:
-            # len_last_buf < st_dur so fold it into the previous buffer
-            lims[-1] = len(raw_sss.times)
-            logger.info('Spatiotemporal window did not fit evenly into raw '
-                        'object. The final %0.2f seconds were lumped onto '
-                        'the previous window.' % len_last_buf)
-
-        # Loop through buffer windows of data
-        for win in zip(lims[:-1], lims[1:]):
-            # Reconstruct data from external space and compute residual
-            resid = data[:, win[0]:win[1]]
-            resid -= raw_sss._data[meg_picks, win[0]:win[1]]
-            resid -= np.dot(S_out, mm[n_in:, win[0]:win[1]] /
-                            S_out_good_norm) / coil_scale
+    max_st = dict()
+    if st_duration is not None:
+        max_st.update(job=10, subspcorr=st_correlation, buflen=st_duration)
+        logger.info('    Processing data using tSSS with st_duration=%s'
+                    % st_duration)
+    else:
+        st_duration = min(raw_sss.times[-1], 10.)  # chunk size
+        st_correlation = None
+
+    # Generate time points to break up data in to windows
+    lims = raw_sss.time_as_index(np.arange(times[0], times[-1],
+                                           st_duration))
+    len_last_buf = raw_sss.times[-1] - raw_sss.index_as_time(lims[-1])[0]
+    if len_last_buf == st_duration:
+        lims = np.concatenate([lims, [len(raw_sss.times)]])
+    else:
+        # len_last_buf < st_dur so fold it into the previous buffer
+        lims[-1] = len(raw_sss.times)
+        if st_correlation is not None:
+            logger.info('    Spatiotemporal window did not fit evenly into '
+                        'raw object. The final %0.2f seconds were lumped '
+                        'onto the previous window.' % len_last_buf)
+
+    S_decomp /= coil_scale[good_picks]
+    logger.info('    Processing data in chunks of %0.1f sec' % st_duration)
+    # Loop through buffer windows of data
+    for start, stop in zip(lims[:-1], lims[1:]):
+        # Compute multipolar moments of (magnetometer scaled) data (Eq. 37)
+        orig_data = raw_sss._data[good_picks, start:stop]
+        if cross_talk is not None:
+            orig_data = ctc.dot(orig_data)
+        mm_in = np.dot(pS_decomp_in, orig_data)
+        in_data = np.dot(S_recon, mm_in)
+
+        if st_correlation is not None:
+            # Reconstruct data using original location from external
+            # and internal spaces and compute residual
+            mm_out = np.dot(pS_decomp_out, orig_data)
+            resid = orig_data  # we will operate inplace but it's safe
+            orig_in_data = np.dot(S_decomp[:, :n_use_in], mm_in)
+            orig_out_data = np.dot(S_decomp[:, n_use_in:], mm_out)
+            resid -= orig_in_data
+            resid -= orig_out_data
             _check_finite(resid)
 
-            # Compute SSP-like projector. Set overlap limit to 0.02
-            this_data = raw_sss._data[meg_picks, win[0]:win[1]]
-            _check_finite(this_data)
-            V = _overlap_projector(this_data, resid, st_corr)
+            # Compute SSP-like projection vectors based on minimal correlation
+            _check_finite(orig_in_data)
+            t_proj = _overlap_projector(orig_in_data, resid, st_correlation)
 
             # Apply projector according to Eq. 12 in [2]_
-            logger.info('    Projecting out %s tSSS components for %s-%s'
-                        % (V.shape[1], win[0] / raw_sss.info['sfreq'],
-                           win[1] / raw_sss.info['sfreq']))
-            this_data -= np.dot(np.dot(this_data, V), V.T)
-            raw_sss._data[meg_picks, win[0]:win[1]] = this_data
+            logger.info('        Projecting %s intersecting tSSS components '
+                        'for %0.3f-%0.3f sec'
+                        % (t_proj.shape[1], start / raw_sss.info['sfreq'],
+                           stop / raw_sss.info['sfreq']))
+            in_data -= np.dot(np.dot(in_data, t_proj), t_proj.T)
+        raw_sss._data[meg_picks, start:stop] = in_data
 
     # Update info
-    raw_sss = _update_sss_info(raw_sss, origin, int_order, ext_order,
-                               len(good_chs))
-
+    _update_sss_info(raw_sss, origin, int_order, ext_order, len(good_picks),
+                     coord_frame, sss_ctc, sss_cal, max_st, reg_moments)
+    logger.info('[done]')
     return raw_sss
 
 
+def _regularize(regularize, int_order, ext_order, S_decomp, mag_or_fine):
+    """Regularize a decomposition matrix"""
+    # ALWAYS regularize the out components according to norm, since
+    # gradiometer-only setups (e.g., KIT) can have zero first-order
+    # components
+    n_in, n_out = _get_n_moments([int_order, ext_order])
+    if regularize is not None:  # regularize='in'
+        logger.info('    Computing regularization')
+        in_removes, out_removes = _regularize_in(
+            int_order, ext_order, S_decomp, mag_or_fine)
+    else:
+        in_removes = []
+        out_removes = _regularize_out(int_order, ext_order, mag_or_fine)
+    reg_in_moments = np.setdiff1d(np.arange(n_in), in_removes)
+    reg_out_moments = np.setdiff1d(np.arange(n_in, n_in + n_out),
+                                   out_removes)
+    n_use_in = len(reg_in_moments)
+    n_use_out = len(reg_out_moments)
+    if regularize is not None or n_use_out != n_out:
+        logger.info('        Using %s/%s inside and %s/%s outside harmonic '
+                    'components' % (n_use_in, n_in, n_use_out, n_out))
+    reg_moments = np.concatenate((reg_in_moments, reg_out_moments))
+    return reg_moments, n_use_in
+
+
+def _get_mf_picks(info, int_order, ext_order, ignore_ref=False,
+                  mag_scale=100.):
+    """Helper to pick types for Maxwell filtering"""
+    # Check for T1/T2 mag types
+    mag_inds_T1T2 = _get_T1T2_mag_inds(info)
+    if len(mag_inds_T1T2) > 0:
+        logger.warning('%d T1/T2 magnetometer channel types found. If using '
+                       'SSS, it is advised to replace coil types using '
+                       '`fix_mag_coil_types`.' % len(mag_inds_T1T2))
+    # Get indices of channels to use in multipolar moment calculation
+    ref = not ignore_ref
+    meg_picks = pick_types(info, meg=True, ref_meg=ref, exclude=[])
+    meg_info = pick_info(info, meg_picks)
+    del info
+    good_picks = pick_types(meg_info, meg=True, ref_meg=ref, exclude='bads')
+    n_bases = _get_n_moments([int_order, ext_order]).sum()
+    if n_bases > len(good_picks):
+        raise ValueError('Number of requested bases (%s) exceeds number of '
+                         'good sensors (%s)' % (str(n_bases), len(good_picks)))
+    recons = [ch for ch in meg_info['bads']]
+    if len(recons) > 0:
+        logger.info('    Bad MEG channels being reconstructed: %s' % recons)
+    else:
+        logger.info('    No bad MEG channels')
+    ref_meg = False if ignore_ref else 'mag'
+    mag_picks = pick_types(meg_info, meg='mag', ref_meg=ref_meg, exclude=[])
+    ref_meg = False if ignore_ref else 'grad'
+    grad_picks = pick_types(meg_info, meg='grad', ref_meg=ref_meg, exclude=[])
+    assert len(mag_picks) + len(grad_picks) == len(meg_info['ch_names'])
+    # Magnetometers are scaled (by default by 100) to improve numerical
+    # stability
+    coil_scale = np.ones((len(meg_picks), 1))
+    coil_scale[mag_picks] = mag_scale
+    # Determine which are magnetometers for external basis purposes
+    mag_or_fine = np.zeros(len(meg_picks), bool)
+    mag_or_fine[mag_picks] = True
+    # KIT gradiometers are marked as having units T, not T/M (argh)
+    # We need a separate variable for this because KIT grads should be
+    # treated mostly like magnetometers (e.g., scaled by 100) for reg
+    mag_or_fine[np.array([ch['coil_type'] == FIFF.FIFFV_COIL_KIT_GRAD
+                          for ch in meg_info['chs']], bool)] = False
+    msg = ('    Processing %s gradiometers and %s magnetometers'
+           % (len(grad_picks), len(mag_picks)))
+    n_kit = len(mag_picks) - mag_or_fine.sum()
+    if n_kit > 0:
+        msg += ' (of which %s are actually KIT gradiometers)' % n_kit
+    logger.info(msg)
+    return (meg_picks, mag_picks, grad_picks, good_picks, coil_scale,
+            mag_or_fine)
+
+
+def _check_regularize(regularize):
+    """Helper to ensure regularize is valid"""
+    if not (regularize is None or (isinstance(regularize, string_types) and
+                                   regularize in ('in',))):
+        raise ValueError('regularize must be None or "in"')
+
+
+def _check_usable(inst):
+    """Helper to ensure our data are clean"""
+    if inst.proj:
+        raise RuntimeError('Projectors cannot be applied to data.')
+    if hasattr(inst, 'comp'):
+        if inst.comp is not None:
+            raise RuntimeError('Maxwell filter cannot be done on compensated '
+                               'channels.')
+    else:
+        if len(inst.info['comps']) > 0:  # more conservative check
+            raise RuntimeError('Maxwell filter cannot be done on data that '
+                               'might have been compensated.')
+
+
+def _col_norm_pinv(x):
+    """Compute the pinv with column-normalization to stabilize calculation
+
+    Note: will modify/overwrite x.
+    """
+    norm = np.sqrt(np.sum(x * x, axis=0))
+    x /= norm
+    u, s, v = linalg.svd(x, full_matrices=False, overwrite_a=True,
+                         **check_disable)
+    v /= norm
+    return np.dot(v.T * 1. / s, u.T), s
+
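+# A minimal check of the intended equivalence (hypothetical values; note
+# that the input is overwritten, and full column rank is assumed):
+#     x = np.random.RandomState(0).randn(6, 3)
+#     p, s = _col_norm_pinv(x.copy())
+#     np.allclose(p, np.linalg.pinv(x))  # True to numerical precision
+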
+
+def _sq(x):
+    """Helper to square"""
+    return x * x
+
+
 def _check_finite(data):
     """Helper to ensure data is finite"""
     if not np.isfinite(data).all():
         raise RuntimeError('data contains non-finite numbers')
 
 
-def _sph_harm(order, degree, az, pol):
+def _sph_harm_norm(order, degree):
+    """Normalization factor for spherical harmonics"""
+    # we could use scipy.special.poch(degree + order + 1, -2 * order)
+    # here, but it's slower for our fairly small degree
+    norm = np.sqrt((2 * degree + 1.) / (4 * np.pi))
+    if order != 0:
+        norm *= np.sqrt(factorial(degree - order) /
+                        float(factorial(degree + order)))
+    return norm
+
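+# For instance, degree=1, order=1 yields
+# sqrt(3 / (4 * pi)) * sqrt(0! / 2!) = sqrt(3 / (8 * pi)) ~= 0.3455,
+# the familiar normalization of Y_1^1.
+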
+
+def _sph_harm(order, degree, az, pol, norm=True):
     """Evaluate point in specified multipolar moment. [1]_ Equation 4.
 
     When using, pay close attention to inputs. Spherical harmonic notation for
@@ -211,45 +558,178 @@ def _sph_harm(order, degree, az, pol):
     more discussion.
 
     Note that scipy has ``scipy.special.sph_harm``, but that function is
-    too slow on old versions (< 0.15) and has a weird bug on newer versions.
-    At some point we should track it down and open a bug report...
+    too slow on old versions (< 0.15) for heavy use.
 
     Parameters
     ----------
     order : int
-        Order of spherical harmonic. (Usually) corresponds to 'm'
+        Order of spherical harmonic. (Usually) corresponds to 'm'.
     degree : int
-        Degree of spherical harmonic. (Usually) corresponds to 'l'
+        Degree of spherical harmonic. (Usually) corresponds to 'l'.
     az : float
         Azimuthal (longitudinal) spherical coordinate [0, 2*pi]. 0 is aligned
         with x-axis.
     pol : float
         Polar (or colatitudinal) spherical coordinate [0, pi]. 0 is aligned
         with z-axis.
+    norm : bool
+        If True, include normalization factor.
 
     Returns
     -------
     base : complex float
-        The spherical harmonic value at the specified azimuth and polar angles
+        The spherical harmonic value.
     """
     from scipy.special import lpmv
 
     # Error checks
     if np.abs(order) > degree:
-        raise ValueError('Absolute value of expansion coefficient must be <= '
-                         'degree')
+        raise ValueError('Absolute value of order must be <= degree')
     # Ensure that polar and azimuth angles are arrays
     az = np.asarray(az)
     pol = np.asarray(pol)
-    if (az < -2 * np.pi).any() or (az > 2 * np.pi).any():
+    if (np.abs(az) > 2 * np.pi).any():
         raise ValueError('Azimuth coords must lie in [-2*pi, 2*pi]')
-    if(pol < 0).any() or (pol > np.pi).any():
+    if (pol < 0).any() or (pol > np.pi).any():
         raise ValueError('Polar coords must lie in [0, pi]')
+    # This is the "seismology" convention on Wikipedia, w/o Condon-Shortley
+    if norm:
+        norm = _sph_harm_norm(order, degree)
+    else:
+        norm = 1.
+    return norm * lpmv(order, degree, np.cos(pol)) * np.exp(1j * order * az)
+
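+# A minimal usage sketch (hypothetical angles): evaluate the degree-2,
+# order-1 harmonic on a small grid:
+#     az, pol = np.meshgrid(np.linspace(0, 2 * np.pi, 4),
+#                           np.linspace(0, np.pi, 3))
+#     vals = _sph_harm(1, 2, az, pol)  # complex, same shape as az/pol
+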
+
+def _concatenate_sph_coils(coils):
+    """Helper to concatenate MEG coil parameters for spherical harmoncs."""
+    rs = np.concatenate([coil['r0_exey'] for coil in coils])
+    wcoils = np.concatenate([coil['w'] for coil in coils])
+    ezs = np.concatenate([np.tile(coil['ez'][np.newaxis, :],
+                                  (len(coil['rmag']), 1))
+                          for coil in coils])
+    bins = np.repeat(np.arange(len(coils)),
+                     [len(coil['rmag']) for coil in coils])
+    return rs, wcoils, ezs, bins
+
+
+_mu_0 = 4e-7 * np.pi  # magnetic permeability of free space
+
+
+def _get_coil_scale(coils, mag_scale=100.):
+    """Helper to get the coil_scale for Maxwell filtering"""
+    coil_scale = np.ones((len(coils), 1))
+    coil_scale[np.array([coil['coil_class'] == FIFF.FWD_COILC_MAG
+                         for coil in coils])] = mag_scale
+    return coil_scale
+
+
+def _sss_basis_basic(origin, coils, int_order, ext_order, mag_scale=100.,
+                     method='standard'):
+    """Compute SSS basis using non-optimized (but more readable) algorithms"""
+    # Compute vector between origin and coil, convert to spherical coords
+    if method == 'standard':
+        # Get position, normal, weights, and number of integration pts.
+        rmags, cosmags, wcoils, bins = _concatenate_coils(coils)
+        rmags -= origin
+        # Convert points to spherical coordinates
+        rad, az, pol = _cart_to_sph(rmags).T
+        cosmags *= wcoils[:, np.newaxis]
+        del rmags, wcoils
+        out_type = np.float64
+    else:  # testing equivalence method
+        rs, wcoils, ezs, bins = _concatenate_sph_coils(coils)
+        rs -= origin
+        rad, az, pol = _cart_to_sph(rs).T
+        ezs *= wcoils[:, np.newaxis]
+        del rs, wcoils
+        out_type = np.complex128
+    del origin
+
+    # Set up output matrices
+    n_in, n_out = _get_n_moments([int_order, ext_order])
+    S_tot = np.empty((len(coils), n_in + n_out), out_type)
+    S_in = S_tot[:, :n_in]
+    S_out = S_tot[:, n_in:]
+    coil_scale = _get_coil_scale(coils)
 
-    base = np.sqrt((2 * degree + 1) / (4 * np.pi) * factorial(degree - order) /
-                   factorial(degree + order)) * \
-        lpmv(order, degree, np.cos(pol)) * np.exp(1j * order * az)
-    return base
+    # Compute internal/external basis vectors (exclude degree 0; L/RHS Eq. 5)
+    for degree in range(1, max(int_order, ext_order) + 1):
+        # Only loop over positive orders, negative orders are handled
+        # for efficiency within
+        for order in range(degree + 1):
+            S_in_out = list()
+            grads_in_out = list()
+            # Same spherical harmonic is used for both internal and external
+            sph = _sph_harm(order, degree, az, pol, norm=False)
+            sph_norm = _sph_harm_norm(order, degree)
+            sph *= sph_norm
+            # Compute complex gradient for all integration points
+            # in spherical coordinates (Eq. 6). The gradient for rad, az, pol
+            # is obtained by taking the partial derivative of Eq. 4 w.r.t. each
+            # coordinate.
+            az_factor = 1j * order * sph / np.sin(np.maximum(pol, 1e-16))
+            pol_factor = (-sph_norm * np.sin(pol) * np.exp(1j * order * az) *
+                          _alegendre_deriv(order, degree, np.cos(pol)))
+            if degree <= int_order:
+                S_in_out.append(S_in)
+                in_norm = _mu_0 * rad ** -(degree + 2)
+                g_rad = in_norm * (-(degree + 1.) * sph)
+                g_az = in_norm * az_factor
+                g_pol = in_norm * pol_factor
+                grads_in_out.append(_sph_to_cart_partials(az, pol,
+                                                          g_rad, g_az, g_pol))
+            if degree <= ext_order:
+                S_in_out.append(S_out)
+                out_norm = _mu_0 * rad ** (degree - 1)
+                g_rad = out_norm * degree * sph
+                g_az = out_norm * az_factor
+                g_pol = out_norm * pol_factor
+                grads_in_out.append(_sph_to_cart_partials(az, pol,
+                                                          g_rad, g_az, g_pol))
+            for spc, grads in zip(S_in_out, grads_in_out):
+                # We could convert to real at the end, but it's more efficient
+                # to do it now
+                if method == 'standard':
+                    grads_pos_neg = [_sh_complex_to_real(grads, order)]
+                    orders_pos_neg = [order]
+                    # Deal with the negative orders
+                    if order > 0:
+                        # it's faster to use the conjugation property for
+                        # our normalized spherical harmonics than recalculate
+                        grads_pos_neg.append(_sh_complex_to_real(
+                            _sh_negate(grads, order), -order))
+                        orders_pos_neg.append(-order)
+                    for gr, oo in zip(grads_pos_neg, orders_pos_neg):
+                        # Gradients dotted w/integration point weighted normals
+                        gr = np.einsum('ij,ij->i', gr, cosmags)
+                        vals = np.bincount(bins, gr, len(coils))
+                        spc[:, _deg_order_idx(degree, oo)] = -vals
+                else:
+                    grads = np.einsum('ij,ij->i', grads, ezs)
+                    v = (np.bincount(bins, grads.real, len(coils)) +
+                         1j * np.bincount(bins, grads.imag, len(coils)))
+                    spc[:, _deg_order_idx(degree, order)] = -v
+                    if order > 0:
+                        spc[:, _deg_order_idx(degree, -order)] = \
+                            -_sh_negate(v, order)
+
+    # Scale magnetometers
+    S_tot *= coil_scale
+    if method != 'standard':
+        # Eventually we could probably refactor this for 2x mem (and maybe CPU)
+        # savings by changing how spc/S_tot is assigned above (real only)
+        S_tot = _bases_complex_to_real(S_tot, int_order, ext_order)
+    return S_tot
+
+
+def _prep_bases(coils, int_order, ext_order):
+    """Helper to prepare for basis computation"""
+    # Get position, normal, weights, and number of integration pts.
+    rmags, cosmags, wcoils, bins = _concatenate_coils(coils)
+    cosmags *= wcoils[:, np.newaxis]
+    n_in, n_out = _get_n_moments([int_order, ext_order])
+    S_tot = np.empty((len(coils), n_in + n_out), np.float64)
+    return rmags, cosmags, bins, len(coils), S_tot, n_in
 
 
 def _sss_basis(origin, coils, int_order, ext_order):
@@ -260,8 +740,10 @@ def _sss_basis(origin, coils, int_order, ext_order):
     origin : ndarray, shape (3,)
-        Origin of the multipolar moment space in millimeters
+        Origin of the multipolar moment space in meters
     coils : list
-        List of MEG coils. Each should contain coil information dict. All
-        position info must be in the same coordinate frame as 'origin'
+        List of MEG coils. Each should contain coil information dict specifying
+        position, normals, weights, number of integration points and channel
+        type. All coil geometry must be in the same coordinate frame
+        as ``origin`` (``head`` or ``meg``).
     int_order : int
         Order of the internal multipolar moment space
     ext_order : int
@@ -269,73 +751,201 @@ def _sss_basis(origin, coils, int_order, ext_order):
 
     Returns
     -------
-    bases: tuple, len (2)
-        Internal and external basis sets ndarrays with shape
-        (n_coils, n_mult_moments)
-    """
-    r_int_pts, ncoils, wcoils, counts = _concatenate_coils(coils)
-    bins = np.repeat(np.arange(len(counts)), counts)
-    n_sens = len(counts)
-    n_bases = get_num_moments(int_order, ext_order)
-    # int_lens = np.insert(np.cumsum(counts), obj=0, values=0)
-
-    S_in = np.empty((n_sens, (int_order + 1) ** 2 - 1))
-    S_out = np.empty((n_sens, (ext_order + 1) ** 2 - 1))
-    S_in.fill(np.nan)
-    S_out.fill(np.nan)
-
-    # Set all magnetometers (with 'coil_type' == 1.0) to be scaled by 100
-    coil_scale = np.ones((len(coils)))
-    coil_scale[np.array([coil['coil_class'] == 1.0 for coil in coils])] = 100.
-
-    if n_bases > n_sens:
-        raise ValueError('Number of requested bases (%s) exceeds number of '
-                         'sensors (%s)' % (str(n_bases), str(n_sens)))
-
-    # Compute position vector between origin and coil integration pts
-    cvec_cart = r_int_pts - origin[np.newaxis, :]
-    # Convert points to spherical coordinates
-    cvec_sph = _cart_to_sph(cvec_cart)
-
-    # Compute internal/external basis vectors (exclude degree 0; L/RHS Eq. 5)
-    for spc, g_func, order in zip([S_in, S_out],
-                                  [_grad_in_components, _grad_out_components],
-                                  [int_order, ext_order]):
-        for deg in range(1, order + 1):
-            for order in range(-deg, deg + 1):
-
-                # Compute gradient for all integration points
-                grads = -1 * g_func(deg, order, cvec_sph[:, 0], cvec_sph[:, 1],
-                                    cvec_sph[:, 2])
-
-                # Gradients dotted with integration point normals and weighted
-                all_grads = wcoils * np.einsum('ij,ij->i', grads, ncoils)
-
-                # For order and degree, sum over each sensor's integration pts
-                # for pt_i in range(0, len(int_lens) - 1):
-                #    int_pts_sum = \
-                #        np.sum(all_grads[int_lens[pt_i]:int_lens[pt_i + 1]])
-                #    spc[pt_i, deg ** 2 + deg + order - 1] = int_pts_sum
-                spc[:, deg ** 2 + deg + order - 1] = \
-                    np.bincount(bins, weights=all_grads, minlength=len(counts))
-
-        # Scale magnetometers
-        spc *= coil_scale[:, np.newaxis]
-
-    return S_in, S_out
+    bases : ndarray, shape (n_coils, n_mult_moments)
+        Internal and external basis sets as a single ndarray.
 
+    Notes
+    -----
+    Does not incorporate magnetometer scaling factor or normalize spaces.
 
-def _alegendre_deriv(degree, order, val):
+    Adapted from code provided by Jukka Nenonen.
+    """
+    rmags, cosmags, bins, n_coils, S_tot, n_in = _prep_bases(
+        coils, int_order, ext_order)
+    rmags = rmags - origin
+    S_in = S_tot[:, :n_in]
+    S_out = S_tot[:, n_in:]
+
+    # do the heavy lifting
+    max_order = max(int_order, ext_order)
+    L = _tabular_legendre(rmags, max_order)
+    phi = np.arctan2(rmags[:, 1], rmags[:, 0])
+    r_n = np.sqrt(np.sum(rmags * rmags, axis=1))
+    r_xy = np.sqrt(rmags[:, 0] * rmags[:, 0] + rmags[:, 1] * rmags[:, 1])
+    cos_pol = rmags[:, 2] / r_n  # cos(theta); theta 0...pi
+    sin_pol = np.sqrt(1. - cos_pol * cos_pol)  # sin(theta)
+    z_only = (r_xy <= 1e-16)
+    r_xy[z_only] = 1.
+    cos_az = rmags[:, 0] / r_xy  # cos(phi)
+    cos_az[z_only] = 1.
+    sin_az = rmags[:, 1] / r_xy  # sin(phi)
+    sin_az[z_only] = 0.
+    del rmags
+    # Appropriate vector spherical harmonics terms
+    #  JNE 2012-02-08: modified alm -> 2*alm, blm -> -2*blm
+    r_nn2 = r_n.copy()
+    r_nn1 = 1.0 / (r_n * r_n)
+    for degree in range(max_order + 1):
+        if degree <= ext_order:
+            r_nn1 *= r_n  # r^(l-1)
+        if degree <= int_order:
+            r_nn2 *= r_n  # r^(l+2)
+
+        # mu_0 * sqrt((2l+1) / (4*pi) * (l-m)!/(l+m)!); the factorial part
+        # is folded in incrementally inside the order loop below
+        mult = 2e-7 * np.sqrt((2 * degree + 1) * np.pi)
+
+        if degree > 0:
+            idx = _deg_order_idx(degree, 0)
+            # alpha
+            if degree <= int_order:
+                b_r = mult * (degree + 1) * L[degree][0] / r_nn2
+                b_pol = -mult * L[degree][1] / r_nn2
+                S_in[:, idx] = _integrate_points(
+                    cos_az, sin_az, cos_pol, sin_pol, b_r, 0., b_pol,
+                    cosmags, bins, n_coils)
+            # beta
+            if degree <= ext_order:
+                b_r = -mult * degree * L[degree][0] * r_nn1
+                b_pol = -mult * L[degree][1] * r_nn1
+                S_out[:, idx] = _integrate_points(
+                    cos_az, sin_az, cos_pol, sin_pol, b_r, 0., b_pol,
+                    cosmags, bins, n_coils)
+        for order in range(1, degree + 1):
+            sin_order = np.sin(order * phi)
+            cos_order = np.cos(order * phi)
+            mult /= np.sqrt((degree - order + 1) * (degree + order))
+            factor = mult * np.sqrt(2)  # equivalence fix (Elekta uses 2.)
+
+            # Real
+            idx = _deg_order_idx(degree, order)
+            r_fact = factor * L[degree][order] * cos_order
+            az_fact = factor * order * sin_order * L[degree][order]
+            pol_fact = -factor * (L[degree][order + 1] -
+                                  (degree + order) * (degree - order + 1) *
+                                  L[degree][order - 1]) * cos_order
+            # alpha
+            if degree <= int_order:
+                b_r = (degree + 1) * r_fact / r_nn2
+                b_az = az_fact / (sin_pol * r_nn2)
+                b_az[z_only] = 0.
+                b_pol = pol_fact / (2 * r_nn2)
+                S_in[:, idx] = _integrate_points(
+                    cos_az, sin_az, cos_pol, sin_pol, b_r, b_az, b_pol,
+                    cosmags, bins, n_coils)
+            # beta
+            if degree <= ext_order:
+                b_r = -degree * r_fact * r_nn1
+                b_az = az_fact * r_nn1 / sin_pol
+                b_az[z_only] = 0.
+                b_pol = pol_fact * r_nn1 / 2.
+                S_out[:, idx] = _integrate_points(
+                    cos_az, sin_az, cos_pol, sin_pol, b_r, b_az, b_pol,
+                    cosmags, bins, n_coils)
+
+            # Imaginary
+            idx = _deg_order_idx(degree, -order)
+            r_fact = factor * L[degree][order] * sin_order
+            az_fact = factor * order * cos_order * L[degree][order]
+            pol_fact = factor * (L[degree][order + 1] -
+                                 (degree + order) * (degree - order + 1) *
+                                 L[degree][order - 1]) * sin_order
+            # alpha
+            if degree <= int_order:
+                b_r = -(degree + 1) * r_fact / r_nn2
+                b_az = az_fact / (sin_pol * r_nn2)
+                b_az[z_only] = 0.
+                b_pol = pol_fact / (2 * r_nn2)
+                S_in[:, idx] = _integrate_points(
+                    cos_az, sin_az, cos_pol, sin_pol, b_r, b_az, b_pol,
+                    cosmags, bins, n_coils)
+            # beta
+            if degree <= ext_order:
+                b_r = degree * r_fact * r_nn1
+                b_az = az_fact * r_nn1 / sin_pol
+                b_az[z_only] = 0.
+                b_pol = pol_fact * r_nn1 / 2.
+                S_out[:, idx] = _integrate_points(
+                    cos_az, sin_az, cos_pol, sin_pol, b_r, b_az, b_pol,
+                    cosmags, bins, n_coils)
+    return S_tot
+
+
+def _integrate_points(cos_az, sin_az, cos_pol, sin_pol, b_r, b_az, b_pol,
+                      cosmags, bins, n_coils):
+    """Helper to integrate points in spherical coords"""
+    grads = _sp_to_cart(cos_az, sin_az, cos_pol, sin_pol, b_r, b_az, b_pol).T
+    grads = np.einsum('ij,ij->i', grads, cosmags)
+    return np.bincount(bins, grads, n_coils)
+
+
+def _tabular_legendre(r, nind):
+    """Helper to compute associated Legendre polynomials"""
+    r_n = np.sqrt(np.sum(r * r, axis=1))
+    x = r[:, 2] / r_n  # cos(theta)
+    L = list()
+    for degree in range(nind + 1):
+        L.append(np.zeros((degree + 2, len(r))))
+    L[0][0] = 1.
+    pnn = 1.
+    fact = 1.
+    sx2 = np.sqrt((1. - x) * (1. + x))
+    for degree in range(nind + 1):
+        L[degree][degree] = pnn
+        pnn *= (-fact * sx2)
+        fact += 2.
+        if degree < nind:
+            L[degree + 1][degree] = x * (2 * degree + 1) * L[degree][degree]
+        if degree >= 2:
+            for order in range(degree - 1):
+                L[degree][order] = (x * (2 * degree - 1) *
+                                    L[degree - 1][order] -
+                                    (degree + order - 1) *
+                                    L[degree - 2][order]) / (degree - order)
+    return L
+
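+# A sketch of the intended equivalence (assuming 0 <= order <= degree):
+# each table entry matches scipy.special.lpmv evaluated at cos(theta), e.g.
+#     r = np.array([[0.3, 0.2, 0.1]])
+#     x = r[0, 2] / np.sqrt(np.sum(r * r))
+#     np.allclose(_tabular_legendre(r, 3)[2][1], lpmv(1, 2, x))  # True
+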
+
+def _sp_to_cart(cos_az, sin_az, cos_pol, sin_pol, b_r, b_az, b_pol):
+    """Helper to convert spherical coords to cartesian"""
+    return np.array([(sin_pol * cos_az * b_r +
+                      cos_pol * cos_az * b_pol - sin_az * b_az),
+                     (sin_pol * sin_az * b_r +
+                      cos_pol * sin_az * b_pol + cos_az * b_az),
+                     cos_pol * b_r - sin_pol * b_pol])
+
+
+def _get_degrees_orders(exp_order):
+    """Helper to get the degrees and orders of our basis functions"""
+    degrees = np.zeros(_get_n_moments(exp_order), int)
+    orders = np.zeros_like(degrees)
+    for degree in range(1, exp_order + 1):
+        # Only loop over positive orders; the negative ones are filled in
+        # alongside them
+        for order in range(degree + 1):
+            ii = _deg_order_idx(degree, order)
+            degrees[ii] = degree
+            orders[ii] = order
+            ii = _deg_order_idx(degree, -order)
+            degrees[ii] = degree
+            orders[ii] = -order
+    return degrees, orders
+
+
+def _deg_order_idx(deg, order):
+    """Helper to get the index into S_in or S_out given a degree and order"""
+    return _sq(deg) + deg + order - 1
+
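+# For example, the degree-1 moments (orders -1, 0, +1) occupy columns
+# 0, 1, and 2, and the first degree-2 moment (order -2) lands in column 3.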
+
+def _alegendre_deriv(order, degree, val):
     """Compute the derivative of the associated Legendre polynomial at a value.
 
     Parameters
     ----------
-    degree : int
-        Degree of spherical harmonic. (Usually) corresponds to 'l'
     order : int
-        Order of spherical harmonic. (Usually) corresponds to 'm'
+        Order of spherical harmonic. (Usually) corresponds to 'm'.
+    degree : int
+        Degree of spherical harmonic. (Usually) corresponds to 'l'.
     val : float
-        Value to evaluate the derivative at
+        Value to evaluate the derivative at.
 
     Returns
     -------
@@ -343,152 +953,134 @@ def _alegendre_deriv(degree, order, val):
         Associated Legendre function derivative
     """
     from scipy.special import lpmv
+    assert order >= 0
+    return (order * val * lpmv(order, degree, val) + (degree + order) *
+            (degree - order + 1.) * np.sqrt(1. - val * val) *
+            lpmv(order - 1, degree, val)) / (1. - val * val)
 
-    C = 1
-    if order < 0:
-        order = abs(order)
-        C = (-1) ** order * factorial(degree - order) / factorial(degree +
-                                                                  order)
-    return C * (order * val * lpmv(order, degree, val) + (degree + order) *
-                (degree - order + 1) * np.sqrt(1 - val ** 2) *
-                lpmv(order - 1, degree, val)) / (1 - val ** 2)
 
+def _sh_negate(sh, order):
+    """Helper to get the negative spherical harmonic from a positive one"""
+    assert order >= 0
+    return sh.conj() * (-1. if order % 2 else 1.)  # == (-1) ** order
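+
+# Combined with _sph_harm, this realizes the conjugation symmetry
+# Y(degree, -order) == (-1) ** order * conj(Y(degree, order)) without a
+# second, slower harmonic evaluation.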
 
-def _grad_in_components(degree, order, rad, az, pol):
-    """Compute gradient of internal component of V(r) spherical expansion.
 
-    Internal component has form: Ylm(pol, az) / (rad ** (degree + 1))
+def _sh_complex_to_real(sh, order):
+    """Helper function to convert complex to real basis functions.
 
     Parameters
     ----------
-    degree : int
-        Degree of spherical harmonic. (Usually) corresponds to 'l'
+    sh : array-like
+        Spherical harmonics. Must be from order >=0 even if negative orders
+        are used.
     order : int
-        Order of spherical harmonic. (Usually) corresponds to 'm'
-    rad : ndarray, shape (n_samples,)
-        Array of radii
-    az : ndarray, shape (n_samples,)
-        Array of azimuthal (longitudinal) spherical coordinates [0, 2*pi]. 0 is
-        aligned with x-axis.
-    pol : ndarray, shape (n_samples,)
-        Array of polar (or colatitudinal) spherical coordinates [0, pi]. 0 is
-        aligned with z-axis.
+        Order (usually 'm') of multipolar moment.
 
     Returns
     -------
-    grads : ndarray, shape (n_samples, 3)
-        Gradient of the spherical harmonic and vector specified in rectangular
-        coordinates
-    """
-    # Compute gradients for all spherical coordinates (Eq. 6)
-    g_rad = (-(degree + 1) / rad ** (degree + 2) *
-             _sph_harm(order, degree, az, pol))
-
-    g_az = (1 / (rad ** (degree + 2) * np.sin(pol)) * 1j * order *
-            _sph_harm(order, degree, az, pol))
-
-    g_pol = (1 / rad ** (degree + 2) *
-             np.sqrt((2 * degree + 1) * factorial(degree - order) /
-                     (4 * np.pi * factorial(degree + order))) *
-             np.sin(-pol) * _alegendre_deriv(degree, order, np.cos(pol)) *
-             np.exp(1j * order * az))
-
-    # Get real component of vectors, convert to cartesian coords, and return
-    real_grads = _get_real_grad(np.c_[g_rad, g_az, g_pol], order)
-    return _sph_to_cart_partials(np.c_[rad, az, pol], real_grads)
-
-
-def _grad_out_components(degree, order, rad, az, pol):
-    """Compute gradient of external component of V(r) spherical expansion.
-
-    External component has form: Ylm(azimuth, polar) * (radius ** degree)
+    real_sh : array-like
+        The real version of the spherical harmonics.
 
-    Parameters
-    ----------
-    degree : int
-        Degree of spherical harmonic. (Usually) corresponds to 'l'
-    order : int
-        Order of spherical harmonic. (Usually) corresponds to 'm'
-    rad : ndarray, shape (n_samples,)
-        Array of radii
-    az : ndarray, shape (n_samples,)
-        Array of azimuthal (longitudinal) spherical coordinates [0, 2*pi]. 0 is
-        aligned with x-axis.
-    pol : ndarray, shape (n_samples,)
-        Array of polar (or colatitudinal) spherical coordinates [0, pi]. 0 is
-        aligned with z-axis.
-
-    Returns
-    -------
-    grads : ndarray, shape (n_samples, 3)
-        Gradient of the spherical harmonic and vector specified in rectangular
-        coordinates
+    Notes
+    -----
+    This does not include the Condon-Shortley phase.
     """
-    # Compute gradients for all spherical coordinates (Eq. 7)
-    g_rad = degree * rad ** (degree - 1) * _sph_harm(order, degree, az, pol)
-
-    g_az = (rad ** (degree - 1) / np.sin(pol) * 1j * order *
-            _sph_harm(order, degree, az, pol))
-
-    g_pol = (rad ** (degree - 1) *
-             np.sqrt((2 * degree + 1) * factorial(degree - order) /
-                     (4 * np.pi * factorial(degree + order))) *
-             np.sin(-pol) * _alegendre_deriv(degree, order, np.cos(pol)) *
-             np.exp(1j * order * az))
 
-    # Get real component of vectors, convert to cartesian coords, and return
-    real_grads = _get_real_grad(np.c_[g_rad, g_az, g_pol], order)
-    return _sph_to_cart_partials(np.c_[rad, az, pol], real_grads)
+    if order == 0:
+        return np.real(sh)
+    else:
+        return np.sqrt(2.) * (np.real if order > 0 else np.imag)(sh)
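+
+# A minimal sketch of the convention (hypothetical angles): both the
+# positive- and negative-order real harmonics come from one complex one:
+#     sh = _sph_harm(1, 2, 0.5, 1.0)                 # order +1
+#     real_pos = _sh_complex_to_real(sh, 1)          # sqrt(2) * real part
+#     real_neg = _sh_complex_to_real(_sh_negate(sh, 1), -1)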
 
 
-def _get_real_grad(grad_vec_raw, order):
-    """Helper function to convert gradient vector to to real basis functions.
+def _sh_real_to_complex(shs, order):
+    """Convert real spherical harmonic pair to complex
 
     Parameters
     ----------
-    grad_vec_raw : ndarray, shape (n_gradients, 3)
-        Gradient array with columns for radius, azimuth, polar points
+    shs : ndarray, shape (2, ...)
+        The real spherical harmonics at ``[order, -order]``.
     order : int
         Order (usually 'm') of multipolar moment.
 
     Returns
     -------
-    grad_vec : ndarray, shape (n_gradients, 3)
-        Gradient vectors with only real componnet
+    sh : array-like, shape (...)
+        The complex version of the spherical harmonics.
     """
-
-    if order > 0:
-        grad_vec = np.sqrt(2) * np.real(grad_vec_raw)
-    elif order < 0:
-        grad_vec = np.sqrt(2) * np.imag(grad_vec_raw)
+    if order == 0:
+        return shs[0]
     else:
-        grad_vec = grad_vec_raw
-
-    return np.real(grad_vec)
-
-
-def get_num_moments(int_order, ext_order):
-    """Compute total number of multipolar moments. Equivalent to [1]_ Eq. 32.
+        return (shs[0] + 1j * np.sign(order) * shs[1]) / np.sqrt(2.)
+
+
+def _bases_complex_to_real(complex_tot, int_order, ext_order):
+    """Convert complex spherical harmonics to real"""
+    n_in, n_out = _get_n_moments([int_order, ext_order])
+    complex_in = complex_tot[:, :n_in]
+    complex_out = complex_tot[:, n_in:]
+    real_tot = np.empty(complex_tot.shape, np.float64)
+    real_in = real_tot[:, :n_in]
+    real_out = real_tot[:, n_in:]
+    for comp, real, exp_order in zip([complex_in, complex_out],
+                                     [real_in, real_out],
+                                     [int_order, ext_order]):
+        for deg in range(1, exp_order + 1):
+            for order in range(deg + 1):
+                idx_pos = _deg_order_idx(deg, order)
+                idx_neg = _deg_order_idx(deg, -order)
+                real[:, idx_pos] = _sh_complex_to_real(comp[:, idx_pos], order)
+                if order != 0:
+                    # This extra mult factor baffles me a bit, but it works
+                    # in round-trip testing, so we'll keep it :(
+                    mult = (-1 if order % 2 == 0 else 1)
+                    real[:, idx_neg] = mult * _sh_complex_to_real(
+                        comp[:, idx_neg], -order)
+    return real_tot
+
+
+def _bases_real_to_complex(real_tot, int_order, ext_order):
+    """Convert real spherical harmonics to complex"""
+    n_in, n_out = _get_n_moments([int_order, ext_order])
+    real_in = real_tot[:, :n_in]
+    real_out = real_tot[:, n_in:]
+    comp_tot = np.empty(real_tot.shape, np.complex128)
+    comp_in = comp_tot[:, :n_in]
+    comp_out = comp_tot[:, n_in:]
+    for real, comp, exp_order in zip([real_in, real_out],
+                                     [comp_in, comp_out],
+                                     [int_order, ext_order]):
+        for deg in range(1, exp_order + 1):
+            # only loop over positive orders, figure out neg from pos
+            for order in range(deg + 1):
+                idx_pos = _deg_order_idx(deg, order)
+                idx_neg = _deg_order_idx(deg, -order)
+                this_comp = _sh_real_to_complex([real[:, idx_pos],
+                                                 real[:, idx_neg]], order)
+                comp[:, idx_pos] = this_comp
+                comp[:, idx_neg] = _sh_negate(this_comp, order)
+    return comp_tot
+
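+# The two conversions are inverses; a round-trip sketch (hypothetical S of
+# shape (n_coils, n_in + n_out)):
+#     S_c = _bases_real_to_complex(S, int_order, ext_order)
+#     np.allclose(_bases_complex_to_real(S_c, int_order, ext_order), S)
+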
+
+def _get_n_moments(order):
+    """Compute the number of multipolar moments.
+
+    Equivalent to [1]_ Eq. 32.
 
     Parameters
     ----------
-    int_order : int
-        Internal expansion order
-    ext_order : int
-        External expansion order
+    order : array-like
+        Expansion orders, often ``[int_order, ext_order]``.
 
     Returns
     -------
-    M : int
-        Total number of multipolar moments
+    M : ndarray
+        Number of moments due to each order.
     """
+    order = np.asarray(order, int)
+    return (order + 2) * order
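+
+# For example, with the default expansion orders int_order=8 and
+# ext_order=3, _get_n_moments([8, 3]) gives array([80, 15]), i.e. 95
+# moments in total.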
 
-    # TODO: Eventually, reuse code in field_interpolation
 
-    return int_order ** 2 + 2 * int_order + ext_order ** 2 + 2 * ext_order
-
-
-def _sph_to_cart_partials(sph_pts, sph_grads):
+def _sph_to_cart_partials(az, pol, g_rad, g_az, g_pol):
     """Convert spherical partial derivatives to cartesian coords.
 
     Note: Because we are dealing with partial derivatives, this calculation is
@@ -500,20 +1092,23 @@ def _sph_to_cart_partials(sph_pts, sph_grads):
 
     Parameters
     ----------
-    sph_pts : ndarray, shape (n_points, 3)
-        Array containing spherical coordinates points (rad, azimuth, polar)
+    az : ndarray, shape (n_points,)
+        Array of azimuthal spherical coordinates for each point.
+    pol : ndarray, shape (n_points,)
+        Array of polar spherical coordinates for each point.
-    sph_grads : ndarray, shape (n_points, 3)
-        Array containing partial derivatives at each spherical coordinate
+    g_rad, g_az, g_pol : ndarray, shape (n_points,)
+        Partial derivatives at each point with respect to the radial,
+        azimuthal, and polar coordinates, respectively.
 
     Returns
     -------
     cart_grads : ndarray, shape (n_points, 3)
         Array containing partial derivatives in Cartesian coordinates (x, y, z)
     """
-
+    sph_grads = np.c_[g_rad, g_az, g_pol]
     cart_grads = np.zeros_like(sph_grads)
-    c_as, s_as = np.cos(sph_pts[:, 1]), np.sin(sph_pts[:, 1])
-    c_ps, s_ps = np.cos(sph_pts[:, 2]), np.sin(sph_pts[:, 2])
+    c_as, s_as = np.cos(az), np.sin(az)
+    c_ps, s_ps = np.cos(pol), np.sin(pol)
     trans = np.array([[c_as * s_ps, -s_as, c_as * c_ps],
                       [s_as * s_ps, c_as, c_ps * s_as],
                       [c_ps, np.zeros_like(c_as), -s_ps]])
@@ -534,16 +1129,29 @@ def _cart_to_sph(cart_pts):
     sph_pts : ndarray, shape (n_points, 3)
         Array containing points in spherical coordinates (rad, azimuth, polar)
     """
-
     rad = np.sqrt(np.sum(cart_pts * cart_pts, axis=1))
     az = np.arctan2(cart_pts[:, 1], cart_pts[:, 0])
     pol = np.arccos(cart_pts[:, 2] / rad)
+    return np.array([rad, az, pol]).T
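+
+# For example, _cart_to_sph(np.array([[0., 0., 2.]])) gives
+# array([[2., 0., 0.]]): radius 2 along +z, so azimuth and polar angle 0.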
 
-    return np.c_[rad, az, pol]
 
+def _check_raw(raw):
+    """Ensure that Maxwell filtering has not been applied yet"""
+    if not isinstance(raw, _BaseRaw):
+        raise TypeError('raw must be Raw, not %s' % type(raw))
+    for ent in raw.info.get('proc_history', []):
+        for msg, key in (('SSS', 'sss_info'),
+                         ('tSSS', 'max_st'),
+                         ('fine calibration', 'sss_cal'),
+                         ('cross-talk cancellation', 'sss_ctc')):
+            if len(ent['max_info'][key]) > 0:
+                raise RuntimeError('Maxwell filtering %s step has already '
+                                   'been applied' % msg)
 
-def _update_sss_info(raw, origin, int_order, ext_order, nsens):
-    """Helper function to update info after Maxwell filtering.
+
+def _update_sss_info(raw, origin, int_order, ext_order, nchan, coord_frame,
+                     sss_ctc, sss_cal, max_st, reg_moments):
+    """Helper function to update info inplace after Maxwell filtering
 
     Parameters
     ----------
@@ -556,43 +1164,46 @@ def _update_sss_info(raw, origin, int_order, ext_order, nsens):
         Order of internal component of spherical expansion
     ext_order : int
         Order of external component of spherical expansion
-    nsens : int
+    nchan : int
         Number of sensors
-
-    Returns
-    -------
-    raw : mne.io.Raw
-        raw file object with raw.info modified
+    sss_ctc : dict
+        The cross talk information.
+    sss_cal : dict
+        The calibration information.
+    max_st : dict
+        The tSSS information.
+    reg_moments : ndarray | slice
+        The moments that were used.
     """
-    from .. import __version__
-    # TODO: Continue to fill out bookkeeping info as additional features
-    # are added (fine calibration, cross-talk calibration, etc.)
-    int_moments = get_num_moments(int_order, 0)
-    ext_moments = get_num_moments(0, ext_order)
-
+    n_in, n_out = _get_n_moments([int_order, ext_order])
     raw.info['maxshield'] = False
+    components = np.zeros(n_in + n_out).astype('int32')
+    components[reg_moments] = 1
     sss_info_dict = dict(in_order=int_order, out_order=ext_order,
-                         nsens=nsens, origin=origin.astype('float32'),
-                         n_int_moments=int_moments,
-                         frame=raw.info['dev_head_t']['to'],
-                         components=np.ones(int_moments +
-                                            ext_moments).astype('int32'))
-
-    max_info_dict = dict(max_st={}, sss_cal={}, sss_ctc={},
-                         sss_info=sss_info_dict)
-
+                         nchan=nchan, origin=origin.astype('float32'),
+                         job=np.array([2]), nfree=np.sum(components[:n_in]),
+                         frame=_str_to_frame[coord_frame],
+                         components=components)
+    max_info_dict = dict(sss_info=sss_info_dict, max_st=max_st,
+                         sss_cal=sss_cal, sss_ctc=sss_ctc)
     block_id = _generate_meas_id()
     proc_block = dict(max_info=max_info_dict, block_id=block_id,
                       creator='mne-python v%s' % __version__,
                       date=_date_now(), experimentor='')
-
-    # Insert information in raw.info['proc_info']
     raw.info['proc_history'] = [proc_block] + raw.info.get('proc_history', [])
-    return raw
+    # Reset 'bads' for any MEG channels since they've been reconstructed
+    _reset_meg_bads(raw.info)
+
+
+def _reset_meg_bads(info):
+    """Helper to reset MEG bads"""
+    meg_picks = pick_types(info, meg=True, exclude=[])
+    info['bads'] = [bad for bad in info['bads']
+                    if info['ch_names'].index(bad) not in meg_picks]
 
 
 check_disable = dict()  # not available on really old versions of SciPy
-if 'check_finite' in inspect.getargspec(linalg.svd)[0]:
+if 'check_finite' in _get_args(linalg.svd):
     check_disable['check_finite'] = False
 
 
@@ -621,9 +1232,11 @@ def _overlap_projector(data_int, data_res, corr):
     # Normalize data, then compute orth to get temporal bases. Matrices
     # must have shape (n_samps x effective_rank) when passed into svd
     # computation
-    Q_int = linalg.qr(_orth_overwrite((data_int / np.linalg.norm(data_int)).T),
+    n = np.sqrt(np.sum(data_int * data_int))
+    Q_int = linalg.qr(_orth_overwrite((data_int / n).T),
                       overwrite_a=True, mode='economic', **check_disable)[0].T
-    Q_res = linalg.qr(_orth_overwrite((data_res / np.linalg.norm(data_res)).T),
+    n = np.sqrt(np.sum(data_res * data_res))
+    Q_res = linalg.qr(_orth_overwrite((data_res / n).T),
                       overwrite_a=True, mode='economic', **check_disable)[0]
     assert data_int.shape[1] > 0
     C_mat = np.dot(Q_int, Q_res)
@@ -642,3 +1255,347 @@ def _overlap_projector(data_int, data_res, corr):
     Vh_intersect = Vh_intersect[intersect_mask].T
     V_principal = np.dot(Q_res, Vh_intersect)
     return V_principal
+
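+# A minimal usage sketch mirroring the tSSS loop above (cf. Eq. 12):
+#     t_proj = _overlap_projector(orig_in_data, resid, st_correlation)
+#     in_data -= np.dot(np.dot(in_data, t_proj), t_proj.T)
+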
+
+def _read_fine_cal(fine_cal):
+    """Read sensor locations and calib. coeffs from fine calibration file."""
+
+    # Read new sensor locations
+    cal_chs = list()
+    cal_ch_numbers = list()
+    with open(fine_cal, 'r') as fid:
+        lines = [line for line in fid if line[0] not in '#\n']
+        for line in lines:
+            # `vals` contains channel number, (x, y, z), x-norm 3-vec, y-norm
+            # 3-vec, z-norm 3-vec, and (1 or 3) imbalance terms
+            vals = np.fromstring(line, sep=' ').astype(np.float64)
+
+            # Check for correct number of items
+            if len(vals) not in [14, 16]:
+                raise RuntimeError('Error reading fine calibration file')
+
+            ch_name = 'MEG' + '%04d' % vals[0]  # Zero-pad names to 4 char
+            cal_ch_numbers.append(vals[0])
+
+            # Get orientation information for coil transformation
+            loc = vals[1:13].copy()  # Get orientation information for 'loc'
+            calib_coeff = vals[13:].copy()  # Get imbalance/calibration coeff
+            cal_chs.append(dict(ch_name=ch_name,
+                                loc=loc, calib_coeff=calib_coeff,
+                                coord_frame=FIFF.FIFFV_COORD_DEVICE))
+    return cal_chs, cal_ch_numbers
+
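+# For example, a hypothetical 14-field (1D) line
+#     113 <x y z> <ex 3-vec> <ey 3-vec> <ez 3-vec> <imbalance>
+# yields ch_name 'MEG0113', a 12-element 'loc', and a single calibration
+# coefficient; 16-field lines carry three imbalance terms instead.
+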
+
+def _skew_symmetric_cross(a):
+    """The skew-symmetric cross product of a vector"""
+    return np.array([[0., -a[2], a[1]], [a[2], 0., -a[0]], [-a[1], a[0], 0.]])
+
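+# This matrix satisfies np.dot(_skew_symmetric_cross(a), b) ==
+# np.cross(a, b) for any 3-vectors a and b.
+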
+
+def _find_vector_rotation(a, b):
+    """Find the rotation matrix that maps unit vector a to b"""
+    # Rodrigues' rotation formula:
+    #   https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula
+    #   http://math.stackexchange.com/a/476311
+    R = np.eye(3)
+    v = np.cross(a, b)
+    if np.allclose(v, 0.):  # identical
+        return R
+    s = np.sqrt(np.sum(v * v))  # sine of the angle between them
+    c = np.sum(a * b)  # cosine of the angle between them
+    vx = _skew_symmetric_cross(v)
+    R += vx + np.dot(vx, vx) * (1 - c) / (s * s)
+    return R
+
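+# A minimal sketch (hypothetical unit vectors): rotating +x onto +z,
+#     R = _find_vector_rotation(np.array([1., 0., 0.]),
+#                               np.array([0., 0., 1.]))
+#     np.allclose(np.dot(R, [1., 0., 0.]), [0., 0., 1.])  # True
+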
+
+def _update_sensor_geometry(info, fine_cal):
+    """Helper to replace sensor geometry information and reorder cal_chs"""
+    logger.info('    Using fine calibration %s' % op.basename(fine_cal))
+    cal_chs, cal_ch_numbers = _read_fine_cal(fine_cal)
+
+    # Check that we ended up with correct channels
+    meg_info = pick_info(info, pick_types(info, meg=True, exclude=[]))
+    clean_meg_names = _clean_names(meg_info['ch_names'],
+                                   remove_whitespace=True)
+    order = pick_channels([c['ch_name'] for c in cal_chs], clean_meg_names)
+    if not (len(cal_chs) == meg_info['nchan'] == len(order)):
+        raise RuntimeError('Number of channels in fine calibration file (%i) '
+                           'does not equal number of channels in info (%i)' %
+                           (len(cal_chs), meg_info['nchan']))
+    # ensure they're ordered like our data
+    cal_chs = [cal_chs[ii] for ii in order]
+
+    # Replace sensor locations (and track differences) for fine calibration
+    ang_shift = np.zeros((len(cal_chs), 3))
+    used = np.zeros(len(info['chs']), bool)
+    cal_corrs = list()
+    coil_types = list()
+    grad_picks = pick_types(meg_info, meg='grad')
+    adjust_logged = False
+    clean_info_names = _clean_names(info['ch_names'], remove_whitespace=True)
+    for ci, cal_ch in enumerate(cal_chs):
+        idx = clean_info_names.index(cal_ch['ch_name'])
+        assert not used[idx]
+        used[idx] = True
+        info_ch = info['chs'][idx]
+        coil_types.append(info_ch['coil_type'])
+
+        # Some .dat files might only rotate EZ, so we must check first that
+        # EX and EY are orthogonal to EZ. If not, we find the rotation between
+        # the original and fine-cal ez, and rotate EX and EY accordingly:
+        ch_coil_rot = _loc_to_coil_trans(info_ch['loc'])[:3, :3]
+        cal_loc = cal_ch['loc'].copy()
+        cal_coil_rot = _loc_to_coil_trans(cal_loc)[:3, :3]
+        if np.max([np.abs(np.dot(cal_coil_rot[:, ii], cal_coil_rot[:, 2]))
+                   for ii in range(2)]) > 1e-6:  # X or Y not orthogonal
+            if not adjust_logged:
+                logger.info('        Adjusting non-orthogonal EX and EY')
+                adjust_logged = True
+            # find the rotation matrix that goes from one to the other
+            this_trans = _find_vector_rotation(ch_coil_rot[:, 2],
+                                               cal_coil_rot[:, 2])
+            cal_loc[3:] = np.dot(this_trans, ch_coil_rot).T.ravel()
+
+        # calculate shift angle
+        v1 = _loc_to_coil_trans(cal_ch['loc'])[:3, :3]
+        _normalize_vectors(v1)
+        v2 = _loc_to_coil_trans(info_ch['loc'])[:3, :3]
+        _normalize_vectors(v2)
+        ang_shift[ci] = np.sum(v1 * v2, axis=0)
+        if idx in grad_picks:
+            extra = [1., cal_ch['calib_coeff'][0]]
+        else:
+            extra = [cal_ch['calib_coeff'][0], 0.]
+        cal_corrs.append(np.concatenate([extra, cal_loc]))
+        # Adjust channel normal orientations with those from fine calibration
+        # Channel positions are not changed
+        info_ch['loc'][3:] = cal_loc[3:]
+        assert (info_ch['coord_frame'] == cal_ch['coord_frame'] ==
+                FIFF.FIFFV_COORD_DEVICE)
+    cal_chans = [[sc, ct] for sc, ct in zip(cal_ch_numbers, coil_types)]
+    sss_cal = dict(cal_corrs=np.array(cal_corrs),
+                   cal_chans=np.array(cal_chans))
+
+    # Deal with numerical precision giving absolute vals slightly more than 1.
+    np.clip(ang_shift, -1., 1., ang_shift)
+    np.rad2deg(np.arccos(ang_shift), ang_shift)  # Convert to degrees
+
+    # Log quantification of sensor changes
+    logger.info('        Adjusted coil positions by (μ ± σ): '
+                '%0.1f° ± %0.1f° (max: %0.1f°)' %
+                (np.mean(ang_shift), np.std(ang_shift),
+                 np.max(np.abs(ang_shift))))
+
+    # Determine gradiometer imbalances and magnetometer calibrations
+    grad_picks = pick_types(info, meg='grad', exclude=[])
+    mag_picks = pick_types(info, meg='mag', exclude=[])
+    grad_imbalances = np.array([cal_chs[ii]['calib_coeff']
+                                for ii in grad_picks]).T
+    mag_cals = np.array([cal_chs[ii]['calib_coeff'] for ii in mag_picks])
+    return grad_imbalances, mag_cals, sss_cal
+
+
+def _sss_basis_point(origin, info, int_order, ext_order, imbalances,
+                     ignore_ref=False, head_frame=True):
+    """Compute multipolar moments for point-like magnetometers (in fine cal)"""
+
+    # Construct 'coils' with r, weights, normal vecs, number of integration
+    # pts, and channel type.
+    if imbalances.shape[0] not in [1, 3]:
+        raise ValueError('Must have 1 (x) or 3 (x, y, z) point-like '
+                         'magnetometers. Currently have %i'
+                         % imbalances.shape[0])
+
+    # Coil_type values for x, y, z point magnetometers
+    # Note: 1D correction files only have x-direction corrections
+    pt_types = [FIFF.FIFFV_COIL_POINT_MAGNETOMETER_X,
+                FIFF.FIFFV_COIL_POINT_MAGNETOMETER_Y,
+                FIFF.FIFFV_COIL_POINT_MAGNETOMETER]
+
+    # Loop over all coordinate directions desired and create point mags
+    S_tot = 0.
+    # These are magnetometers, so use a uniform coil_scale of 100.
+    this_coil_scale = np.array([100.])
+    for imb, pt_type in zip(imbalances, pt_types):
+        temp_info = deepcopy(info)
+        for ch in temp_info['chs']:
+            ch['coil_type'] = pt_type
+        S_add = _info_sss_basis(temp_info, None, origin,
+                                int_order, ext_order, head_frame,
+                                ignore_ref, this_coil_scale)
+        # Scale spaces by gradiometer imbalance
+        S_add *= imb[:, np.newaxis]
+        S_tot += S_add
+
+    # Return point-like mag bases
+    return S_tot
+
+
+def _regularize_out(int_order, ext_order, mag_or_fine):
+    """Helper to regularize out components based on norm"""
+    n_in = _get_n_moments(int_order)
+    out_removes = list(np.arange(0 if mag_or_fine.any() else 3) + n_in)
+    return list(out_removes)
+
+
+def _regularize_in(int_order, ext_order, S_decomp, mag_or_fine):
+    """Regularize basis set using idealized SNR measure"""
+    n_in, n_out = _get_n_moments([int_order, ext_order])
+
+    # The "signal" terms depend only on the inner expansion order
+    # (i.e., not sensor geometry or head position / expansion origin)
+    a_lm_sq, rho_i = _compute_sphere_activation_in(
+        np.arange(int_order + 1))
+    degrees, orders = _get_degrees_orders(int_order)
+    a_lm_sq = a_lm_sq[degrees]
+
+    I_tots = np.empty(n_in)
+    in_keepers = list(range(n_in))
+    out_removes = _regularize_out(int_order, ext_order, mag_or_fine)
+    out_keepers = list(np.setdiff1d(np.arange(n_in, n_in + n_out),
+                                    out_removes))
+    remove_order = []
+    S_decomp = S_decomp.copy()
+    use_norm = np.sqrt(np.sum(S_decomp * S_decomp, axis=0))
+    S_decomp /= use_norm
+    eigs = np.zeros((n_in, 2))
+
+    # plot = False  # for debugging
+    # if plot:
+    #     import matplotlib.pyplot as plt
+    #     fig, axs = plt.subplots(3, figsize=[6, 12])
+    #     plot_ord = np.empty(n_in, int)
+    #     plot_ord.fill(-1)
+    #     count = 0
+    #     # Reorder plot to match MF
+    #     for degree in range(1, int_order + 1):
+    #         for order in range(0, degree + 1):
+    #             assert plot_ord[count] == -1
+    #             plot_ord[count] = _deg_order_idx(degree, order)
+    #             count += 1
+    #             if order > 0:
+    #                 assert plot_ord[count] == -1
+    #                 plot_ord[count] = _deg_order_idx(degree, -order)
+    #                 count += 1
+    #     assert count == n_in
+    #     assert (plot_ord >= 0).all()
+    #     assert len(np.unique(plot_ord)) == n_in
+    noise_lev = 5e-13  # noise level in T/m
+    noise_lev *= noise_lev  # squared; same effect as an earlier multiply
+    for ii in range(n_in):
+        this_S = S_decomp.take(in_keepers + out_keepers, axis=1)
+        u, s, v = linalg.svd(this_S, full_matrices=False, overwrite_a=True,
+                             **check_disable)
+        eigs[ii] = s[[0, -1]]
+        v = v.T[:len(in_keepers)]
+        v /= use_norm[in_keepers][:, np.newaxis]
+        eta_lm_sq = np.dot(v * 1. / s, u.T)
+        del u, s, v
+        eta_lm_sq *= eta_lm_sq
+        eta_lm_sq = eta_lm_sq.sum(axis=1)
+        eta_lm_sq *= noise_lev
+
+        # Mysterious scale factors to match Elekta, likely due to differences
+        # in the basis normalizations...
+        eta_lm_sq[orders[in_keepers] == 0] *= 2
+        eta_lm_sq *= 0.0025
+        snr = a_lm_sq[in_keepers] / eta_lm_sq
+        I_tots[ii] = 0.5 * np.log2(snr + 1.).sum()
+        remove_order.append(in_keepers[np.argmin(snr)])
+        in_keepers.pop(in_keepers.index(remove_order[-1]))
+        # if plot and ii == 0:
+        #     axs[0].semilogy(snr[plot_ord[in_keepers]], color='k')
+    # if plot:
+    #     axs[0].set(ylabel='SNR', ylim=[0.1, 500], xlabel='Component')
+    #     axs[1].plot(I_tots)
+    #     axs[1].set(ylabel='Information', xlabel='Iteration')
+    #     axs[2].plot(eigs[:, 0] / eigs[:, 1])
+    #     axs[2].set(ylabel='Condition', xlabel='Iteration')
+    # Pick the components that give at least 98% of max info
+    # This is done because the curves can be quite flat, and we err on the
+    # side of including rather than excluding components
+    max_info = np.max(I_tots)
+    lim_idx = np.where(I_tots >= 0.98 * max_info)[0][0]
+    in_removes = remove_order[:lim_idx]
+    for ii, ri in enumerate(in_removes):
+        logger.debug('            Condition %0.3f/%0.3f = %03.1f, '
+                     'Removing in component %s: l=%s, m=%+0.0f'
+                     % (tuple(eigs[ii]) + (eigs[ii, 0] / eigs[ii, 1],
+                        ri, degrees[ri], orders[ri])))
+    logger.info('        Resulting information: %0.1f bits/sample '
+                '(%0.1f%% of peak %0.1f)'
+                % (I_tots[lim_idx], 100 * I_tots[lim_idx] / max_info,
+                   max_info))
+    return in_removes, out_removes
+
+
+def _compute_sphere_activation_in(degrees):
+    """Helper to compute the "in" power from random currents in a sphere
+
+    Parameters
+    ----------
+    degrees : ndarray
+        The degrees to evaluate.
+
+    Returns
+    -------
+    a_power : ndarray
+        The a_lm associated for the associated degrees.
+    rho_i : float
+        The current density.
+
+    Notes
+    -----
+    See also:
+
+        A 122-channel whole-cortex SQUID system for measuring the brain’s
+        magnetic fields. Knuutila et al. IEEE Transactions on Magnetics,
+        Vol 29 No 6, Nov 1993.
+    """
+    r_in = 0.080  # radius of the randomly-activated sphere
+
+    # set the observation point r=r_s, az=el=0, so we can just look at m=0 term
+    # compute the resulting current density rho_i
+
+    # This is the "surface" version of the equation:
+    # b_r_in = 100e-15  # fixed radial field amplitude at distance r_s = 100 fT
+    # r_s = 0.13  # 5 cm from the surface
+    # rho_degrees = np.arange(1, 100)
+    # in_sum = (rho_degrees * (rho_degrees + 1.) /
+    #           ((2. * rho_degrees + 1.)) *
+    #           (r_in / r_s) ** (2 * rho_degrees + 2)).sum() * 4. * np.pi
+    # rho_i = b_r_in * 1e7 / np.sqrt(in_sum)
+    # rho_i = 5.21334885574e-07  # value for r_s = 0.125
+    rho_i = 5.91107375632e-07  # deterministic from above, so just store it
+    a_power = _sq(rho_i) * (degrees * r_in ** (2 * degrees + 4) /
+                            (_sq(2. * degrees + 1.) *
+                            (degrees + 1.)))
+    return a_power, rho_i
+
+
+def _info_sss_basis(info, trans, origin, int_order, ext_order, head_frame,
+                    ignore_ref=False, coil_scale=100.):
+    """SSS basis using an info structure and dev<->head trans"""
+    if trans is not None:
+        info = info.copy()
+        info['dev_head_t'] = info['dev_head_t'].copy()
+        info['dev_head_t']['trans'] = trans
+    coils, comp_coils = _prep_meg_channels(
+        info, accurate=True, elekta_defs=True, head_frame=head_frame,
+        ignore_ref=ignore_ref, verbose=False)[:2]
+    if len(comp_coils) > 0:
+        meg_picks = pick_types(info, meg=True, ref_meg=False, exclude=[])
+        ref_picks = pick_types(info, meg=False, ref_meg=True, exclude=[])
+        inserts = np.searchsorted(meg_picks, ref_picks)
+        # len(inserts) == len(comp_coils)
+        for idx, comp_coil in zip(inserts[::-1], comp_coils[::-1]):
+            coils.insert(idx, comp_coil)
+        # Now we have:
+        # [c['chname'] for c in coils] ==
+        # [info['ch_names'][ii]
+        #  for ii in pick_types(info, meg=True, ref_meg=True)]
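+        # Minimal sketch of the pattern above (hypothetical pick values):
+        # with meg_picks = [0, 1, 4, 5] and ref_picks = [2, 3],
+        # np.searchsorted(meg_picks, ref_picks) == [2, 2]; iterating the
+        # inserts in reverse keeps earlier insertion indices valid.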
+    if not isinstance(coil_scale, np.ndarray):
+        # Scale all magnetometers (with `coil_class` == 1.0) by `coil_scale`
+        coil_scale = _get_coil_scale(coils, coil_scale)
+    S_tot = _sss_basis(origin, coils, int_order, ext_order)
+    S_tot *= coil_scale
+    return S_tot
diff --git a/mne/preprocessing/stim.py b/mne/preprocessing/stim.py
index 06fd200..12da024 100644
--- a/mne/preprocessing/stim.py
+++ b/mne/preprocessing/stim.py
@@ -4,7 +4,7 @@
 
 import numpy as np
 from ..evoked import Evoked
-from ..epochs import Epochs
+from ..epochs import _BaseEpochs
 from ..io import Raw
 from ..event import find_events
 
@@ -106,7 +106,7 @@ def fix_stim_artifact(inst, events=None, event_id=None, tmin=0.,
             last_samp = int(event_idx) - inst.first_samp + s_end
             _fix_artifact(data, window, picks, first_samp, last_samp, mode)
 
-    elif isinstance(inst, Epochs):
+    elif isinstance(inst, _BaseEpochs):
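+        # (_BaseEpochs also covers Epochs subclasses such as EpochsArray)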
         _check_preload(inst)
         if inst.reject is not None:
             raise RuntimeError('Reject is already applied. Use reject=None '
diff --git a/mne/preprocessing/tests/test_ecg.py b/mne/preprocessing/tests/test_ecg.py
index e034227..92b6602 100644
--- a/mne/preprocessing/tests/test_ecg.py
+++ b/mne/preprocessing/tests/test_ecg.py
@@ -3,6 +3,7 @@ import os.path as op
 from nose.tools import assert_true, assert_equal
 
 from mne.io import Raw
+from mne import pick_types
 from mne.preprocessing.ecg import find_ecg_events, create_ecg_epochs
 
 data_path = op.join(op.dirname(__file__), '..', '..', 'io', 'tests', 'data')
@@ -14,11 +15,30 @@ proj_fname = op.join(data_path, 'test-proj.fif')
 def test_find_ecg():
     """Test find ECG peaks"""
     raw = Raw(raw_fname)
-    events, ch_ECG, average_pulse = find_ecg_events(raw, event_id=999,
-                                                    ch_name='MEG 1531')
-    n_events = len(events)
-    _, times = raw[0, :]
-    assert_true(55 < average_pulse < 60)
 
-    ecg_epochs = create_ecg_epochs(raw, ch_name='MEG 1531')
+    # once with mag-trick
+    # once with characteristic channel
+    for ch_name in ['MEG 1531', None]:
+        events, ch_ECG, average_pulse, ecg = find_ecg_events(
+            raw, event_id=999, ch_name=ch_name, return_ecg=True)
+        assert_equal(len(raw.times), len(ecg))
+        n_events = len(events)
+        _, times = raw[0, :]
+        assert_true(55 < average_pulse < 60)
+
+    picks = pick_types(
+        raw.info, meg='grad', eeg=False, stim=False,
+        eog=False, ecg=True, emg=False, ref_meg=False,
+        exclude='bads')
+
+    raw.load_data()
+    ecg_epochs = create_ecg_epochs(raw, picks=picks, keep_ecg=True)
     assert_equal(len(ecg_epochs.events), n_events)
+    assert_true('ECG-SYN' not in raw.ch_names)
+    assert_true('ECG-SYN' in ecg_epochs.ch_names)
+
+    picks = pick_types(
+        ecg_epochs.info, meg=False, eeg=False, stim=False,
+        eog=False, ecg=True, emg=False, ref_meg=False,
+        exclude='bads')
+    assert_true(len(picks) == 1)
diff --git a/mne/preprocessing/tests/test_ica.py b/mne/preprocessing/tests/test_ica.py
index c5862ce..cada286 100644
--- a/mne/preprocessing/tests/test_ica.py
+++ b/mne/preprocessing/tests/test_ica.py
@@ -16,13 +16,13 @@ from numpy.testing import (assert_array_almost_equal, assert_array_equal,
 from scipy import stats
 from itertools import product
 
-from mne import io, Epochs, read_events, pick_types
+from mne import Epochs, read_events, pick_types
 from mne.cov import read_cov
 from mne.preprocessing import (ICA, ica_find_ecg_events, ica_find_eog_events,
                                read_ica, run_ica)
 from mne.preprocessing.ica import get_score_funcs, corrmap
-from mne.io.meas_info import Info
-from mne.utils import (set_log_file, _TempDir, requires_sklearn, slow_test,
+from mne.io import Raw, Info
+from mne.utils import (catch_logging, _TempDir, requires_sklearn, slow_test,
                        run_tests_if_main)
 
 # Set our plotters to test mode
@@ -53,7 +53,7 @@ except:
 def test_ica_full_data_recovery():
     """Test recovery of full data when no source is rejected"""
     # Most basic recovery
-    raw = io.Raw(raw_fname).crop(0.5, stop, False)
+    raw = Raw(raw_fname).crop(0.5, stop, False)
     raw.load_data()
     events = read_events(event_name)
     picks = pick_types(raw.info, meg=True, stim=False, ecg=False,
@@ -111,7 +111,7 @@ def test_ica_full_data_recovery():
 def test_ica_rank_reduction():
     """Test recovery of full data when no source is rejected"""
     # Most basic recovery
-    raw = io.Raw(raw_fname).crop(0.5, stop, False)
+    raw = Raw(raw_fname).crop(0.5, stop, False)
     raw.load_data()
     picks = pick_types(raw.info, meg=True, stim=False, ecg=False,
                        eog=False, exclude='bads')[:10]
@@ -139,7 +139,7 @@ def test_ica_rank_reduction():
 @requires_sklearn
 def test_ica_reset():
     """Test ICA resetting"""
-    raw = io.Raw(raw_fname).crop(0.5, stop, False)
+    raw = Raw(raw_fname).crop(0.5, stop, False)
     raw.load_data()
     picks = pick_types(raw.info, meg=True, stim=False, ecg=False,
                        eog=False, exclude='bads')[:10]
@@ -167,7 +167,7 @@ def test_ica_reset():
 @requires_sklearn
 def test_ica_core():
     """Test ICA on raw and epochs"""
-    raw = io.Raw(raw_fname).crop(1.5, stop, False)
+    raw = Raw(raw_fname).crop(1.5, stop, False)
     raw.load_data()
     picks = pick_types(raw.info, meg=True, stim=False, ecg=False,
                        eog=False, exclude='bads')
@@ -272,7 +272,7 @@ def test_ica_additional():
     """Test additional ICA functionality"""
     tempdir = _TempDir()
     stop2 = 500
-    raw = io.Raw(raw_fname).crop(1.5, stop, False)
+    raw = Raw(raw_fname).crop(1.5, stop, False)
     raw.load_data()
     picks = pick_types(raw.info, meg=True, stim=False, ecg=False,
                        eog=False, exclude='bads')
@@ -313,7 +313,7 @@ def test_ica_additional():
     ica2 = ica.copy()
     corrmap([ica, ica2], (0, 0), threshold='auto', label='blinks', plot=True,
             ch_type="mag")
-    corrmap([ica, ica2], (0, 0), threshold=2, plot=False)
+    corrmap([ica, ica2], (0, 0), threshold=2, plot=False, show=False)
     assert_true(ica.labels_["blinks"] == ica2.labels_["blinks"])
     assert_true(0 in ica.labels_["blinks"])
     plt.close('all')
@@ -361,10 +361,11 @@ def test_ica_additional():
 
         for exclude in [[], [0]]:
             ica.exclude = [0]
+            ica.labels_ = {'foo': [0]}
             ica.save(test_ica_fname)
             ica_read = read_ica(test_ica_fname)
             assert_true(ica.exclude == ica_read.exclude)
-
+            assert_equal(ica.labels_, ica_read.labels_)
             ica.exclude = []
             ica.apply(raw, exclude=[1])
             assert_true(ica.exclude == [])
@@ -509,7 +510,7 @@ def test_ica_additional():
     test_ica_fname = op.join(op.abspath(op.curdir), 'test-ica_raw.fif')
     ica.n_components = np.int32(ica.n_components)
     ica_raw.save(test_ica_fname, overwrite=True)
-    ica_raw2 = io.Raw(test_ica_fname, preload=True)
+    ica_raw2 = Raw(test_ica_fname, preload=True)
     assert_allclose(ica_raw._data, ica_raw2._data, rtol=1e-5, atol=1e-4)
     ica_raw2.close()
     os.remove(test_ica_fname)
@@ -534,7 +535,7 @@ def test_ica_additional():
 @requires_sklearn
 def test_run_ica():
     """Test run_ica function"""
-    raw = io.Raw(raw_fname, preload=True).crop(0, stop, False).crop(1.5)
+    raw = Raw(raw_fname, preload=True).crop(0, stop, False).crop(1.5)
     params = []
+    params += [(None, -1, slice(2), [0, 1])]  # variance, kurtosis idx
     params += [(None, 'MEG 1531')]  # ECG / EOG channel params
@@ -549,28 +550,25 @@ def test_run_ica():
 @requires_sklearn
 def test_ica_reject_buffer():
     """Test ICA data raw buffer rejection"""
-    tempdir = _TempDir()
-    raw = io.Raw(raw_fname).crop(1.5, stop, False)
+    raw = Raw(raw_fname).crop(1.5, stop, False)
     raw.load_data()
     picks = pick_types(raw.info, meg=True, stim=False, ecg=False,
                        eog=False, exclude='bads')
     ica = ICA(n_components=3, max_pca_components=4, n_pca_components=4)
     raw._data[2, 1000:1005] = 5e-12
-    drop_log = op.join(op.dirname(tempdir), 'ica_drop.log')
-    set_log_file(drop_log, overwrite=True)
-    with warnings.catch_warnings(record=True):
-        ica.fit(raw, picks[:5], reject=dict(mag=2.5e-12), decim=2,
-                tstep=0.01, verbose=True)
-    assert_true(raw._data[:5, ::2].shape[1] - 4 == ica.n_samples_)
-    with open(drop_log) as fid:
-        log = [l for l in fid if 'detected' in l]
+    with catch_logging() as drop_log:
+        with warnings.catch_warnings(record=True):
+            ica.fit(raw, picks[:5], reject=dict(mag=2.5e-12), decim=2,
+                    tstep=0.01, verbose=True)
+        assert_true(raw._data[:5, ::2].shape[1] - 4 == ica.n_samples_)
+    log = [l for l in drop_log.getvalue().split('\n') if 'detected' in l]
     assert_equal(len(log), 1)
 
 
 @requires_sklearn
 def test_ica_twice():
     """Test running ICA twice"""
-    raw = io.Raw(raw_fname).crop(1.5, stop, False)
+    raw = Raw(raw_fname).crop(1.5, stop, False)
     raw.load_data()
     picks = pick_types(raw.info, meg='grad', exclude='bads')
     n_components = 0.9
diff --git a/mne/preprocessing/tests/test_maxwell.py b/mne/preprocessing/tests/test_maxwell.py
index f2320dc..9a6faca 100644
--- a/mne/preprocessing/tests/test_maxwell.py
+++ b/mne/preprocessing/tests/test_maxwell.py
@@ -5,108 +5,318 @@
 import os.path as op
 import warnings
 import numpy as np
-from numpy.testing import (assert_equal, assert_allclose,
-                           assert_array_almost_equal)
+import sys
+import scipy
+from numpy.testing import assert_equal, assert_allclose
 from nose.tools import assert_true, assert_raises
+from nose.plugins.skip import SkipTest
+from distutils.version import LooseVersion
 
 from mne import compute_raw_covariance, pick_types
+from mne.forward import _prep_meg_channels
 from mne.cov import _estimate_rank_meeg_cov
 from mne.datasets import testing
-from mne.forward._make_forward import _prep_meg_channels
-from mne.io import Raw, proc_history
-from mne.preprocessing.maxwell import (_maxwell_filter as maxwell_filter,
-                                       get_num_moments, _sss_basis)
-from mne.utils import _TempDir, run_tests_if_main, slow_test
+from mne.io import Raw, proc_history, read_info, read_raw_bti, read_raw_kit
+from mne.preprocessing.maxwell import (maxwell_filter, _get_n_moments,
+                                       _sss_basis_basic, _sh_complex_to_real,
+                                       _sh_real_to_complex, _sh_negate,
+                                       _bases_complex_to_real, _sss_basis,
+                                       _bases_real_to_complex, _sph_harm,
+                                       _get_coil_scale)
+from mne.tests.common import assert_meg_snr
+from mne.utils import (_TempDir, run_tests_if_main, slow_test, catch_logging,
+                       requires_version, object_diff)
+from mne.externals.six import PY3
 
 warnings.simplefilter('always')  # Always throw warnings
 
-data_path = op.join(testing.data_path(download=False))
-raw_fname = op.join(data_path, 'SSS', 'test_move_anon_raw.fif')
-sss_std_fname = op.join(data_path, 'SSS',
-                        'test_move_anon_raw_simp_stdOrigin_sss.fif')
-sss_nonstd_fname = op.join(data_path, 'SSS',
-                           'test_move_anon_raw_simp_nonStdOrigin_sss.fif')
-sss_bad_recon_fname = op.join(data_path, 'SSS',
-                              'test_move_anon_raw_bad_recon_sss.fif')
+data_path = testing.data_path(download=False)
+sss_path = op.join(data_path, 'SSS')
+pre = op.join(sss_path, 'test_move_anon_')
+raw_fname = pre + 'raw.fif'
+sss_std_fname = pre + 'stdOrigin_raw_sss.fif'
+sss_nonstd_fname = pre + 'nonStdOrigin_raw_sss.fif'
+sss_bad_recon_fname = pre + 'badRecon_raw_sss.fif'
+sss_reg_in_fname = pre + 'regIn_raw_sss.fif'
+sss_fine_cal_fname = pre + 'fineCal_raw_sss.fif'
+sss_ctc_fname = pre + 'crossTalk_raw_sss.fif'
+sss_trans_default_fname = pre + 'transDefault_raw_sss.fif'
+sss_trans_sample_fname = pre + 'transSample_raw_sss.fif'
+sss_st1FineCalCrossTalkRegIn_fname = \
+    pre + 'st1FineCalCrossTalkRegIn_raw_sss.fif'
+sss_st1FineCalCrossTalkRegInTransSample_fname = \
+    pre + 'st1FineCalCrossTalkRegInTransSample_raw_sss.fif'
+
+erm_fname = pre + 'erm_raw.fif'
+sss_erm_std_fname = pre + 'erm_devOrigin_raw_sss.fif'
+sss_erm_reg_in_fname = pre + 'erm_regIn_raw_sss.fif'
+sss_erm_fine_cal_fname = pre + 'erm_fineCal_raw_sss.fif'
+sss_erm_ctc_fname = pre + 'erm_crossTalk_raw_sss.fif'
+sss_erm_st_fname = pre + 'erm_st1_raw_sss.fif'
+sss_erm_st1FineCalCrossTalk_fname = pre + 'erm_st1FineCalCrossTalk_raw_sss.fif'
+sss_erm_st1FineCalCrossTalkRegIn_fname = \
+    pre + 'erm_st1FineCalCrossTalkRegIn_raw_sss.fif'
+
+sss_samp_reg_in_fname = op.join(data_path, 'SSS',
+                                'sample_audvis_trunc_regIn_raw_sss.fif')
+sss_samp_fname = op.join(data_path, 'SSS', 'sample_audvis_trunc_raw_sss.fif')
+
+bases_fname = op.join(sss_path, 'sss_data.mat')
+fine_cal_fname = op.join(sss_path, 'sss_cal_3053.dat')
+fine_cal_fname_3d = op.join(sss_path, 'sss_cal_3053_3d.dat')
+ctc_fname = op.join(sss_path, 'ct_sparse.fif')
+fine_cal_mgh_fname = op.join(sss_path, 'sss_cal_mgh.dat')
+ctc_mgh_fname = op.join(sss_path, 'ct_sparse_mgh.fif')
+
+sample_fname = op.join(data_path, 'MEG', 'sample',
+                       'sample_audvis_trunc_raw.fif')
+
+int_order, ext_order = 8, 3
+mf_head_origin = (0., 0., 0.04)
+mf_meg_origin = (0., 0.013, -0.006)
+
+# otherwise we can get SVD error
+requires_svd_convergence = requires_version('scipy', '0.12')
+
+# 30 random bad MEG channels (20 grad, 10 mag) that were used in generation
+bads = ['MEG0912', 'MEG1722', 'MEG2213', 'MEG0132', 'MEG1312', 'MEG0432',
+        'MEG2433', 'MEG1022', 'MEG0442', 'MEG2332', 'MEG0633', 'MEG1043',
+        'MEG1713', 'MEG0422', 'MEG0932', 'MEG1622', 'MEG1343', 'MEG0943',
+        'MEG0643', 'MEG0143', 'MEG2142', 'MEG0813', 'MEG2143', 'MEG1323',
+        'MEG0522', 'MEG1123', 'MEG0423', 'MEG2122', 'MEG2532', 'MEG0812']
+
+
+def _assert_n_free(raw_sss, lower, upper=None):
+    """Helper to check the DOF"""
+    upper = lower if upper is None else upper
+    n_free = raw_sss.info['proc_history'][0]['max_info']['sss_info']['nfree']
+    assert_true(lower <= n_free <= upper,
+                'nfree fail: %s <= %s <= %s' % (lower, n_free, upper))
+
+
+@slow_test
+def test_other_systems():
+    """Test Maxwell filtering on KIT, BTI, and CTF files
+    """
+    io_dir = op.join(op.dirname(__file__), '..', '..', 'io')
+
+    # KIT
+    kit_dir = op.join(io_dir, 'kit', 'tests', 'data')
+    sqd_path = op.join(kit_dir, 'test.sqd')
+    mrk_path = op.join(kit_dir, 'test_mrk.sqd')
+    elp_path = op.join(kit_dir, 'test_elp.txt')
+    hsp_path = op.join(kit_dir, 'test_hsp.txt')
+    raw_kit = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path)
+    assert_raises(RuntimeError, maxwell_filter, raw_kit)
+    raw_sss = maxwell_filter(raw_kit, origin=(0., 0., 0.04), ignore_ref=True)
+    _assert_n_free(raw_sss, 65)
+    # XXX this KIT origin fit is terrible! Eventually we should get a
+    # corrected HSP file with proper coverage
+    with catch_logging() as log_file:
+        assert_raises(RuntimeError, maxwell_filter, raw_kit,
+                      ignore_ref=True, regularize=None)  # bad condition
+        raw_sss = maxwell_filter(raw_kit, origin='auto',
+                                 ignore_ref=True, bad_condition='warning',
+                                 verbose='warning')
+    log_file = log_file.getvalue()
+    assert_true('badly conditioned' in log_file)
+    assert_true('more than 20 mm from' in log_file)
+    _assert_n_free(raw_sss, 28, 32)  # bad origin == brutal reg
+    # Let's set the origin
+    with catch_logging() as log_file:
+        raw_sss = maxwell_filter(raw_kit, origin=(0., 0., 0.04),
+                                 ignore_ref=True, bad_condition='warning',
+                                 regularize=None, verbose='warning')
+    log_file = log_file.getvalue()
+    assert_true('badly conditioned' in log_file)
+    _assert_n_free(raw_sss, 80)
+    # Now with reg
+    with catch_logging() as log_file:
+        raw_sss = maxwell_filter(raw_kit, origin=(0., 0., 0.04),
+                                 ignore_ref=True, verbose=True)
+    log_file = log_file.getvalue()
+    assert_true('badly conditioned' not in log_file)
+    _assert_n_free(raw_sss, 65)
+
+    # BTi
+    bti_dir = op.join(io_dir, 'bti', 'tests', 'data')
+    bti_pdf = op.join(bti_dir, 'test_pdf_linux')
+    bti_config = op.join(bti_dir, 'test_config_linux')
+    bti_hs = op.join(bti_dir, 'test_hs_linux')
+    raw_bti = read_raw_bti(bti_pdf, bti_config, bti_hs, preload=False)
+    raw_sss = maxwell_filter(raw_bti)
+    _assert_n_free(raw_sss, 70)
+
+    # CTF
+    fname_ctf_raw = op.join(io_dir, 'tests', 'data', 'test_ctf_comp_raw.fif')
+    raw_ctf = Raw(fname_ctf_raw, compensation=2)
+    assert_raises(RuntimeError, maxwell_filter, raw_ctf)  # compensated
+    raw_ctf = Raw(fname_ctf_raw)
+    assert_raises(ValueError, maxwell_filter, raw_ctf)  # cannot fit headshape
+    raw_sss = maxwell_filter(raw_ctf, origin=(0., 0., 0.04))
+    _assert_n_free(raw_sss, 68)
+    raw_sss = maxwell_filter(raw_ctf, origin=(0., 0., 0.04), ignore_ref=True)
+    _assert_n_free(raw_sss, 70)
+
+
+def test_spherical_harmonics():
+    """Test spherical harmonic functions"""
+    from scipy.special import sph_harm
+    az, pol = np.meshgrid(np.linspace(0, 2 * np.pi, 30),
+                          np.linspace(0, np.pi, 20))
+    # As of Oct 16, 2015, Anaconda has a bug in scipy due to old compilers (?):
+    # https://github.com/ContinuumIO/anaconda-issues/issues/479
+    if (PY3 and
+            LooseVersion(scipy.__version__) >= LooseVersion('0.15') and
+            'Continuum Analytics' in sys.version):
+        raise SkipTest('scipy sph_harm bad in Py3k on Anaconda')
+
+    # Test our basic spherical harmonics
+    for degree in range(1, int_order):
+        for order in range(0, degree + 1):
+            sph = _sph_harm(order, degree, az, pol)
+            sph_scipy = sph_harm(order, degree, az, pol)
+            assert_allclose(sph, sph_scipy, atol=1e-7)
+
+
+def test_spherical_conversions():
+    """Test spherical harmonic conversions"""
+    # Test our real<->complex conversion functions
+    az, pol = np.meshgrid(np.linspace(0, 2 * np.pi, 30),
+                          np.linspace(0, np.pi, 20))
+    for degree in range(1, int_order):
+        for order in range(0, degree + 1):
+            sph = _sph_harm(order, degree, az, pol)
+            # ensure that we satisfy the conjugation property
+            assert_allclose(_sh_negate(sph, order),
+                            _sph_harm(-order, degree, az, pol))
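+            # (the standard identity Y_l^(-m) == (-1)**m * conj(Y_l^m))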
+            # ensure our conversion functions work
+            sph_real_pos = _sh_complex_to_real(sph, order)
+            sph_real_neg = _sh_complex_to_real(sph, -order)
+            sph_2 = _sh_real_to_complex([sph_real_pos, sph_real_neg], order)
+            assert_allclose(sph, sph_2, atol=1e-7)
 
 
 @testing.requires_testing_data
-def test_maxwell_filter():
-    """Test multipolar moment and Maxwell filter"""
+def test_multipolar_bases():
+    """Test multipolar moment basis calculation using sensor information"""
+    from scipy.io import loadmat
+    # Test our basis calculations
+    info = read_info(raw_fname)
+    coils = _prep_meg_channels(info, accurate=True, elekta_defs=True)[0]
+    # Check against a known benchmark
+    sss_data = loadmat(bases_fname)
+    for origin in ((0, 0, 0.04), (0, 0.02, 0.02)):
+        o_str = ''.join('%d' % (1000 * n) for n in origin)
+
+        S_tot = _sss_basis_basic(origin, coils, int_order, ext_order,
+                                 method='alternative')
+        # Test our real<->complex conversion functions
+        S_tot_complex = _bases_real_to_complex(S_tot, int_order, ext_order)
+        S_tot_round = _bases_complex_to_real(S_tot_complex,
+                                             int_order, ext_order)
+        assert_allclose(S_tot, S_tot_round, atol=1e-7)
+
+        S_tot_mat = np.concatenate([sss_data['Sin' + o_str],
+                                    sss_data['Sout' + o_str]], axis=1)
+        S_tot_mat_real = _bases_complex_to_real(S_tot_mat,
+                                                int_order, ext_order)
+        S_tot_mat_round = _bases_real_to_complex(S_tot_mat_real,
+                                                 int_order, ext_order)
+        assert_allclose(S_tot_mat, S_tot_mat_round, atol=1e-7)
+        assert_allclose(S_tot_complex, S_tot_mat, rtol=1e-4, atol=1e-8)
+        assert_allclose(S_tot, S_tot_mat_real, rtol=1e-4, atol=1e-8)
+
+        # Now normalize our columns
+        S_tot /= np.sqrt(np.sum(S_tot * S_tot, axis=0))[np.newaxis]
+        S_tot_complex /= np.sqrt(np.sum(
+            (S_tot_complex * S_tot_complex.conj()).real, axis=0))[np.newaxis]
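+        # (each column is scaled to unit L2 norm; the complex basis uses the
+        #  conjugate product so that the norm is real)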
+        # Check against a known benchmark
+        S_tot_mat = np.concatenate([sss_data['SNin' + o_str],
+                                    sss_data['SNout' + o_str]], axis=1)
+        # Check this roundtrip
+        S_tot_mat_real = _bases_complex_to_real(S_tot_mat,
+                                                int_order, ext_order)
+        S_tot_mat_round = _bases_real_to_complex(S_tot_mat_real,
+                                                 int_order, ext_order)
+        assert_allclose(S_tot_mat, S_tot_mat_round, atol=1e-7)
+        assert_allclose(S_tot_complex, S_tot_mat, rtol=1e-4, atol=1e-8)
+
+        # Now test our optimized version
+        S_tot = _sss_basis_basic(origin, coils, int_order, ext_order)
+        S_tot_fast = _sss_basis(origin, coils, int_order, ext_order)
+        S_tot_fast *= _get_coil_scale(coils)
+        # There are some sign differences for columns (orders/degrees) here,
+        # likely due to the Condon-Shortley phase convention. We use a
+        # magnetometer channel to figure out the flips because the
+        # gradiometer channels have effectively zero values for the first
+        # three external components (i.e., S_tot[grad_picks, 80:83]).
+        flips = (np.sign(S_tot_fast[2]) != np.sign(S_tot[2]))
+        flips = 1 - 2 * flips
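+        # (1 - 2 * flips maps False -> +1 and True -> -1)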
+        assert_allclose(S_tot, S_tot_fast * flips, atol=1e-16)
 
-    # TODO: Future tests integrate with mne/io/tests/test_proc_history
 
+@testing.requires_testing_data
+def test_basic():
+    """Test Maxwell filter basic version"""
     # Load testing data (raw, SSS std origin, SSS non-standard origin)
     with warnings.catch_warnings(record=True):  # maxshield
         raw = Raw(raw_fname, allow_maxshield=True).crop(0., 1., False)
-    raw.load_data()
-    with warnings.catch_warnings(record=True):  # maxshield, naming
-        sss_std = Raw(sss_std_fname, allow_maxshield=True)
-        sss_nonStd = Raw(sss_nonstd_fname, allow_maxshield=True)
-        raw_err = Raw(raw_fname, proj=True,
-                      allow_maxshield=True).crop(0., 0.1, False)
+        raw_err = Raw(raw_fname, proj=True, allow_maxshield=True)
+        raw_erm = Raw(erm_fname, allow_maxshield=True)
     assert_raises(RuntimeError, maxwell_filter, raw_err)
+    assert_raises(TypeError, maxwell_filter, 1.)  # not a raw
+    assert_raises(ValueError, maxwell_filter, raw, int_order=20)  # too many
 
-    # Create coils
-    all_coils, _, _, meg_info = _prep_meg_channels(raw.info, ignore_ref=True,
-                                                   elekta_defs=True)
-    picks = [raw.info['ch_names'].index(ch) for ch in [coil['chname']
-                                                       for coil in all_coils]]
-    coils = [all_coils[ci] for ci in picks]
-    ncoils = len(coils)
-
-    int_order, ext_order = 8, 3
     n_int_bases = int_order ** 2 + 2 * int_order
     n_ext_bases = ext_order ** 2 + 2 * ext_order
     nbases = n_int_bases + n_ext_bases
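+    # (an expansion up to order L has sum(2 * l + 1 for l in 1..L)
+    #  = L ** 2 + 2 * L moments, which is the formula used above)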
 
     # Check number of bases computed correctly
-    assert_equal(get_num_moments(int_order, ext_order), nbases)
-
-    # Check multipolar moment basis set
-    S_in, S_out = _sss_basis(origin=np.array([0, 0, 40]), coils=coils,
-                             int_order=int_order, ext_order=ext_order)
-    assert_equal(S_in.shape, (ncoils, n_int_bases), 'S_in has incorrect shape')
-    assert_equal(S_out.shape, (ncoils, n_ext_bases),
-                 'S_out has incorrect shape')
-
-    # Test sss computation at the standard head origin
-    raw_sss = maxwell_filter(raw, origin=[0., 0., 40.],
-                             int_order=int_order, ext_order=ext_order)
-
-    sss_std_data = sss_std[picks][0]
-    assert_array_almost_equal(raw_sss[picks][0], sss_std_data,
-                              decimal=11, err_msg='Maxwell filtered data at '
-                              'standard origin incorrect.')
-
-    # Confirm SNR is above 100
-    bench_rms = np.sqrt(np.mean(sss_std_data * sss_std_data, axis=1))
-    error = raw_sss[picks][0] - sss_std_data
-    error_rms = np.sqrt(np.mean(error ** 2, axis=1))
-    assert_true(np.mean(bench_rms / error_rms) > 1000, 'SNR < 1000')
-
-    # Test sss computation at non-standard head origin
-    raw_sss = maxwell_filter(raw, origin=[0., 20., 20.],
-                             int_order=int_order, ext_order=ext_order)
-    sss_nonStd_data = sss_nonStd[picks][0]
-    assert_array_almost_equal(raw_sss[picks][0], sss_nonStd_data, decimal=11,
-                              err_msg='Maxwell filtered data at non-std '
-                              'origin incorrect.')
-    # Confirm SNR is above 100
-    bench_rms = np.sqrt(np.mean(sss_nonStd_data * sss_nonStd_data, axis=1))
-    error = raw_sss[picks][0] - sss_nonStd_data
-    error_rms = np.sqrt(np.mean(error ** 2, axis=1))
-    assert_true(np.mean(bench_rms / error_rms) > 1000, 'SNR < 1000')
+    assert_equal(_get_n_moments([int_order, ext_order]).sum(), nbases)
+
+    # Test SSS computation at the standard head origin
+    raw_sss = maxwell_filter(raw, origin=mf_head_origin, regularize=None,
+                             bad_condition='ignore')
+    assert_meg_snr(raw_sss, Raw(sss_std_fname), 200., 1000.)
+    py_cal = raw_sss.info['proc_history'][0]['max_info']['sss_cal']
+    assert_equal(len(py_cal), 0)
+    py_ctc = raw_sss.info['proc_history'][0]['max_info']['sss_ctc']
+    assert_equal(len(py_ctc), 0)
+    py_st = raw_sss.info['proc_history'][0]['max_info']['max_st']
+    assert_equal(len(py_st), 0)
+    assert_raises(RuntimeError, maxwell_filter, raw_sss)
+
+    # Test SSS computation at non-standard head origin
+    raw_sss = maxwell_filter(raw, origin=[0., 0.02, 0.02], regularize=None,
+                             bad_condition='ignore')
+    assert_meg_snr(raw_sss, Raw(sss_nonstd_fname), 250., 700.)
+
+    # Test SSS computation at device origin
+    sss_erm_std = Raw(sss_erm_std_fname)
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg',
+                             origin=mf_meg_origin, regularize=None,
+                             bad_condition='ignore')
+    assert_meg_snr(raw_sss, sss_erm_std, 100., 900.)
+    for key in ('job', 'frame'):
+        vals = [x.info['proc_history'][0]['max_info']['sss_info'][key]
+                for x in [raw_sss, sss_erm_std]]
+        assert_equal(vals[0], vals[1])
 
     # Check against SSS functions from proc_history
     sss_info = raw_sss.info['proc_history'][0]['max_info']
-    assert_equal(get_num_moments(int_order, 0),
+    assert_equal(_get_n_moments(int_order),
                  proc_history._get_sss_rank(sss_info))
 
     # Degenerate cases
     raw_bad = raw.copy()
-    raw_bad.info['comps'] = [0]
+    raw_bad.comp = True
     assert_raises(RuntimeError, maxwell_filter, raw_bad)
+    del raw_bad
+    assert_raises(ValueError, maxwell_filter, raw, coord_frame='foo')
+    assert_raises(ValueError, maxwell_filter, raw, origin='foo')
+    assert_raises(ValueError, maxwell_filter, raw, origin=[0] * 4)
 
 
 @testing.requires_testing_data
@@ -129,15 +339,15 @@ def test_maxwell_filter_additional():
     # Get MEG channels, compute Maxwell filtered data
     raw.load_data()
     raw.pick_types(meg=True, eeg=False)
-    int_order, ext_order = 8, 3
-    raw_sss = maxwell_filter(raw, int_order=int_order, ext_order=ext_order)
+    int_order = 8
+    raw_sss = maxwell_filter(raw, origin=mf_head_origin, regularize=None,
+                             bad_condition='ignore')
 
     # Test io on processed data
     tempdir = _TempDir()
     test_outname = op.join(tempdir, 'test_raw_sss.fif')
     raw_sss.save(test_outname)
-    raw_sss_loaded = Raw(test_outname, preload=True, proj=False,
-                         allow_maxshield=True)
+    raw_sss_loaded = Raw(test_outname, preload=True)
 
     # Some numerical imprecision since save uses 'single' fmt
     assert_allclose(raw_sss_loaded[:][0], raw_sss[:][0],
@@ -153,104 +363,332 @@ def test_maxwell_filter_additional():
                                            scalings)
 
     assert_equal(cov_raw_rank, raw.info['nchan'])
-    assert_equal(cov_sss_rank, get_num_moments(int_order, 0))
+    assert_equal(cov_sss_rank, _get_n_moments(int_order))
 
 
 @slow_test
 @testing.requires_testing_data
 def test_bads_reconstruction():
-    """Test reconstruction of channels marked as bad"""
-
-    with warnings.catch_warnings(record=True):  # maxshield, naming
-        sss_bench = Raw(sss_bad_recon_fname, allow_maxshield=True)
-
-    raw_fname = op.join(data_path, 'SSS', 'test_move_anon_raw.fif')
-
+    """Test Maxwell filter reconstruction of bad channels"""
     with warnings.catch_warnings(record=True):  # maxshield
         raw = Raw(raw_fname, allow_maxshield=True).crop(0., 1., False)
-
-    # Set 30 random bad MEG channels (20 grad, 10 mag)
-    bads = ['MEG0912', 'MEG1722', 'MEG2213', 'MEG0132', 'MEG1312', 'MEG0432',
-            'MEG2433', 'MEG1022', 'MEG0442', 'MEG2332', 'MEG0633', 'MEG1043',
-            'MEG1713', 'MEG0422', 'MEG0932', 'MEG1622', 'MEG1343', 'MEG0943',
-            'MEG0643', 'MEG0143', 'MEG2142', 'MEG0813', 'MEG2143', 'MEG1323',
-            'MEG0522', 'MEG1123', 'MEG0423', 'MEG2122', 'MEG2532', 'MEG0812']
     raw.info['bads'] = bads
-
-    # Compute Maxwell filtered data
-    raw_sss = maxwell_filter(raw)
-    meg_chs = pick_types(raw_sss.info)
-    non_meg_chs = np.setdiff1d(np.arange(len(raw.ch_names)), meg_chs)
-    sss_bench_data = sss_bench[meg_chs][0]
-
-    # Some numerical imprecision since save uses 'single' fmt
-    assert_allclose(raw_sss[meg_chs][0], sss_bench_data,
-                    rtol=1e-12, atol=1e-4, err_msg='Maxwell filtered data '
-                    'with reconstructed bads is incorrect.')
-
-    # Confirm SNR is above 1000
-    bench_rms = np.sqrt(np.mean(raw_sss[meg_chs][0] ** 2, axis=1))
-    error = raw_sss[meg_chs][0] - sss_bench_data
-    error_rms = np.sqrt(np.mean(error ** 2, axis=1))
-    assert_true(np.mean(bench_rms / error_rms) >= 1000,
-                'SNR (%0.1f) < 1000' % np.mean(bench_rms / error_rms))
-    assert_allclose(raw_sss[non_meg_chs][0], raw[non_meg_chs][0])
+    raw_sss = maxwell_filter(raw, origin=mf_head_origin, regularize=None,
+                             bad_condition='ignore')
+    assert_meg_snr(raw_sss, Raw(sss_bad_recon_fname), 300.)
 
 
+@requires_svd_convergence
 @testing.requires_testing_data
 def test_spatiotemporal_maxwell():
-    """Test spatiotemporal (tSSS) processing"""
+    """Test Maxwell filter (tSSS) spatiotemporal processing"""
     # Load raw testing data
     with warnings.catch_warnings(record=True):  # maxshield
         raw = Raw(raw_fname, allow_maxshield=True)
 
-    # Create coils
-    picks = pick_types(raw.info)
-
     # Test that window is less than length of data
-    assert_raises(ValueError, maxwell_filter, raw, st_dur=1000.)
+    assert_raises(ValueError, maxwell_filter, raw, st_duration=1000.)
 
     # Check both 4 and 10 seconds because Elekta handles them differently
     # This is to ensure that std/non-std tSSS windows are correctly handled
-    st_durs = [4., 10.]
-    for st_dur in st_durs:
-        # Load tSSS data depending on st_dur and get data
-        tSSS_fname = op.join(data_path, 'SSS', 'test_move_anon_raw_' +
-                             'spatiotemporal_%0ds_sss.fif' % st_dur)
-
-        with warnings.catch_warnings(record=True):  # maxshield, naming
-            tsss_bench = Raw(tSSS_fname, allow_maxshield=True)
-            # Because Elekta's tSSS sometimes(!) lumps the tail window of data
-            # onto the previous buffer if it's shorter than st_dur, we have to
-            # crop the data here to compensate for Elekta's tSSS behavior.
-            if st_dur == 10.:
-                tsss_bench.crop(0, st_dur, copy=False)
-        tsss_bench_data = tsss_bench[picks, :][0]
-        del tsss_bench
+    st_durations = [4., 10.]
+    tols = [325., 200.]
+    for st_duration, tol in zip(st_durations, tols):
+        # Load tSSS data depending on st_duration and get data
+        tSSS_fname = op.join(sss_path,
+                             'test_move_anon_st%0ds_raw_sss.fif' % st_duration)
+        tsss_bench = Raw(tSSS_fname)
+        # Because Elekta's tSSS sometimes(!) lumps the tail window of data
+        # onto the previous buffer if it's shorter than st_duration, we have to
+        # crop the data here to compensate for Elekta's tSSS behavior.
+        if st_duration == 10.:
+            tsss_bench.crop(0, st_duration, copy=False)
 
         # Test sss computation at the standard head origin. Same cropping issue
         # as mentioned above.
-        if st_dur == 10.:
-            raw_tsss = maxwell_filter(raw.crop(0, st_dur), st_dur=st_dur)
+        if st_duration == 10.:
+            raw_tsss = maxwell_filter(raw.crop(0, st_duration),
+                                      origin=mf_head_origin,
+                                      st_duration=st_duration, regularize=None,
+                                      bad_condition='ignore')
         else:
-            raw_tsss = maxwell_filter(raw, st_dur=st_dur)
-        assert_allclose(raw_tsss[picks][0], tsss_bench_data,
-                        rtol=1e-12, atol=1e-4, err_msg='Spatiotemporal (tSSS) '
-                        'maxwell filtered data at standard origin incorrect.')
-
-        # Confirm SNR is above 500. Single precision is part of discrepancy
-        bench_rms = np.sqrt(np.mean(tsss_bench_data * tsss_bench_data, axis=1))
-        error = raw_tsss[picks][0] - tsss_bench_data
-        error_rms = np.sqrt(np.mean(error * error, axis=1))
-        assert_true(np.mean(bench_rms / error_rms) >= 500,
-                    'SNR (%0.1f) < 500' % np.mean(bench_rms / error_rms))
-
-    # Confirm we didn't modify other channels (like EEG chs)
-    non_picks = np.setdiff1d(np.arange(len(raw.ch_names)), picks)
-    assert_allclose(raw[non_picks, 0:raw_tsss.n_times][0],
-                    raw_tsss[non_picks, 0:raw_tsss.n_times][0])
+            raw_tsss = maxwell_filter(raw, st_duration=st_duration,
+                                      origin=mf_head_origin, regularize=None,
+                                      bad_condition='ignore')
+        assert_meg_snr(raw_tsss, tsss_bench, tol)
+        py_st = raw_tsss.info['proc_history'][0]['max_info']['max_st']
+        assert_true(len(py_st) > 0)
+        assert_equal(py_st['buflen'], st_duration)
+        assert_equal(py_st['subspcorr'], 0.98)
+
+    # Degenerate cases
+    assert_raises(ValueError, maxwell_filter, raw, st_duration=10.,
+                  st_correlation=0.)
+
+
+@testing.requires_testing_data
+def test_maxwell_filter_fine_calibration():
+    """Test Maxwell filter fine calibration"""
+
+    # Load testing data (raw and MaxFilter fine-calibration benchmark)
+    with warnings.catch_warnings(record=True):  # maxshield
+        raw = Raw(raw_fname, allow_maxshield=True).crop(0., 1., False)
+    sss_fine_cal = Raw(sss_fine_cal_fname)
+
+    # Test 1D SSS fine calibration
+    raw_sss = maxwell_filter(raw, calibration=fine_cal_fname,
+                             origin=mf_head_origin, regularize=None,
+                             bad_condition='ignore')
+    assert_meg_snr(raw_sss, sss_fine_cal, 70, 500)
+    py_cal = raw_sss.info['proc_history'][0]['max_info']['sss_cal']
+    assert_true(py_cal is not None)
+    assert_true(len(py_cal) > 0)
+    mf_cal = sss_fine_cal.info['proc_history'][0]['max_info']['sss_cal']
+    # we identify these differently
+    mf_cal['cal_chans'][mf_cal['cal_chans'][:, 1] == 3022, 1] = 3024
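+    # (3022/3024 are VectorView magnetometer coil-type variants; the remap
+    #  just aligns the two conventions so the arrays can be compared)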
+    assert_allclose(py_cal['cal_chans'], mf_cal['cal_chans'])
+    assert_allclose(py_cal['cal_corrs'], mf_cal['cal_corrs'],
+                    rtol=1e-3, atol=1e-3)
+
+    # Test 3D SSS fine calibration (no equivalent func in MaxFilter yet!)
+    # very low SNR as proc differs, eventually we should add a better test
+    raw_sss_3D = maxwell_filter(raw, calibration=fine_cal_fname_3d,
+                                origin=mf_head_origin, regularize=None,
+                                bad_condition='ignore')
+    assert_meg_snr(raw_sss_3D, sss_fine_cal, 1.0, 6.)
+
 
+@slow_test
+@testing.requires_testing_data
+def test_maxwell_filter_regularization():
+    """Test Maxwell filter regularization"""
+    # Tolerances, origins, and file names for the three regularization cases
+    min_tols = (100., 2.6, 1.0)
+    med_tols = (1000., 21.4, 3.7)
+    origins = ((0., 0., 0.04), (0.,) * 3, (0., 0.02, 0.02))
+    coord_frames = ('head', 'meg', 'head')
+    raw_fnames = (raw_fname, erm_fname, sample_fname)
+    sss_fnames = (sss_reg_in_fname, sss_erm_reg_in_fname,
+                  sss_samp_reg_in_fname)
+    comp_tols = [0, 1, 4]
+    for ii, rf in enumerate(raw_fnames):
+        with warnings.catch_warnings(record=True):  # maxshield
+            raw = Raw(rf, allow_maxshield=True).crop(0., 1., False)
+        sss_reg_in = Raw(sss_fnames[ii])
+
+        # Test "in" regularization
+        raw_sss = maxwell_filter(raw, coord_frame=coord_frames[ii],
+                                 origin=origins[ii])
+        assert_meg_snr(raw_sss, sss_reg_in, min_tols[ii], med_tols[ii], msg=rf)
+
+        # check components match
+        py_info = raw_sss.info['proc_history'][0]['max_info']['sss_info']
+        assert_true(py_info is not None)
+        assert_true(len(py_info) > 0)
+        mf_info = sss_reg_in.info['proc_history'][0]['max_info']['sss_info']
+        n_in = None
+        for inf in py_info, mf_info:
+            if n_in is None:
+                n_in = _get_n_moments(inf['in_order'])
+            else:
+                assert_equal(n_in, _get_n_moments(inf['in_order']))
+            assert_equal(inf['components'][:n_in].sum(), inf['nfree'])
+        assert_allclose(py_info['nfree'], mf_info['nfree'],
+                        atol=comp_tols[ii], err_msg=rf)
+
+
+@testing.requires_testing_data
+def test_cross_talk():
+    """Test Maxwell filter cross-talk cancellation"""
+    with warnings.catch_warnings(record=True):  # maxshield
+        raw = Raw(raw_fname, allow_maxshield=True).crop(0., 1., False)
+    raw.info['bads'] = bads
+    sss_ctc = Raw(sss_ctc_fname)
+    raw_sss = maxwell_filter(raw, cross_talk=ctc_fname,
+                             origin=mf_head_origin, regularize=None,
+                             bad_condition='ignore')
+    assert_meg_snr(raw_sss, sss_ctc, 275.)
+    py_ctc = raw_sss.info['proc_history'][0]['max_info']['sss_ctc']
+    assert_true(len(py_ctc) > 0)
+    assert_raises(ValueError, maxwell_filter, raw, cross_talk=raw)
+    assert_raises(ValueError, maxwell_filter, raw, cross_talk=raw_fname)
+    mf_ctc = sss_ctc.info['proc_history'][0]['max_info']['sss_ctc']
+    del mf_ctc['block_id']  # we don't write this
+    assert_equal(object_diff(py_ctc, mf_ctc), '')
+
+
+@testing.requires_testing_data
+def test_head_translation():
+    """Test Maxwell filter head translation"""
+    with warnings.catch_warnings(record=True):  # maxshield
+        raw = Raw(raw_fname, allow_maxshield=True).crop(0., 1., False)
+    # First try with an unchanged destination
+    raw_sss = maxwell_filter(raw, destination=raw_fname,
+                             origin=mf_head_origin, regularize=None,
+                             bad_condition='ignore')
+    assert_meg_snr(raw_sss, Raw(sss_std_fname).crop(0., 1., False), 200.)
+    # Now with default
+    with catch_logging() as log:
+        raw_sss = maxwell_filter(raw, destination=mf_head_origin,
+                                 origin=mf_head_origin, regularize=None,
+                                 bad_condition='ignore', verbose='warning')
+    assert_true('over 25 mm' in log.getvalue())
+    assert_meg_snr(raw_sss, Raw(sss_trans_default_fname), 125.)
+    # Now to sample's head pos
+    with catch_logging() as log:
+        raw_sss = maxwell_filter(raw, destination=sample_fname,
+                                 origin=mf_head_origin, regularize=None,
+                                 bad_condition='ignore', verbose='warning')
+    assert_true('= 25.6 mm' in log.getvalue())
+    assert_meg_snr(raw_sss, Raw(sss_trans_sample_fname), 350.)
     # Degenerate cases
-    assert_raises(ValueError, maxwell_filter, raw, st_dur=10., st_corr=0.)
+    assert_raises(RuntimeError, maxwell_filter, raw,
+                  destination=mf_head_origin, coord_frame='meg')
+    assert_raises(ValueError, maxwell_filter, raw, destination=[0.] * 4)
+
+
+# TODO: Eventually add simulation tests mirroring Taulu's original paper
+# that calculates the localization error:
+# http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1495874
+
+def _assert_shielding(raw_sss, erm_power, shielding_factor, meg='mag'):
+    """Helper to assert a minimum shielding factor using empty-room power"""
+    picks = pick_types(raw_sss.info, meg=meg)
+    sss_power = raw_sss[picks][0].ravel()
+    sss_power = np.sqrt(np.sum(sss_power * sss_power))
+    factor = erm_power / sss_power
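+    # (ratio of total signal norms; with equal channel and sample counts
+    #  this equals the RMS ratio)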
+    assert_true(factor >= shielding_factor,
+                'Shielding factor %0.3f < %0.3f' % (factor, shielding_factor))
+
+
+@slow_test
+@requires_svd_convergence
+@testing.requires_testing_data
+def test_noise_rejection():
+    """Test Maxwell filter shielding factor using empty room"""
+    with warnings.catch_warnings(record=True):  # maxshield
+        raw_erm = Raw(erm_fname, allow_maxshield=True, preload=True)
+    picks = pick_types(raw_erm.info, meg='mag')
+    erm_power = raw_erm[picks][0].ravel()
+    erm_power = np.sqrt(np.sum(erm_power * erm_power))
+
+    # Vanilla SSS (the commented values are for meg=True instead of meg='mag')
+    _assert_shielding(Raw(sss_erm_std_fname), erm_power, 10)  # 1.5)
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg', regularize=None)
+    _assert_shielding(raw_sss, erm_power, 12)  # 1.5)
+
+    # Fine cal
+    _assert_shielding(Raw(sss_erm_fine_cal_fname), erm_power, 12)  # 2.0)
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg', regularize=None,
+                             origin=mf_meg_origin,
+                             calibration=fine_cal_fname)
+    _assert_shielding(raw_sss, erm_power, 12)  # 2.0)
+
+    # Crosstalk
+    _assert_shielding(Raw(sss_erm_ctc_fname), erm_power, 12)  # 2.1)
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg', regularize=None,
+                             origin=mf_meg_origin,
+                             cross_talk=ctc_fname)
+    _assert_shielding(raw_sss, erm_power, 12)  # 2.1)
+
+    # Fine cal + Crosstalk
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg', regularize=None,
+                             calibration=fine_cal_fname,
+                             origin=mf_meg_origin,
+                             cross_talk=ctc_fname)
+    _assert_shielding(raw_sss, erm_power, 13)  # 2.2)
+
+    # tSSS
+    _assert_shielding(Raw(sss_erm_st_fname), erm_power, 37)  # 5.8)
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg', regularize=None,
+                             origin=mf_meg_origin, st_duration=1.)
+    _assert_shielding(raw_sss, erm_power, 37)  # 5.8)
+
+    # Crosstalk + tSSS
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg', regularize=None,
+                             cross_talk=ctc_fname, origin=mf_meg_origin,
+                             st_duration=1.)
+    _assert_shielding(raw_sss, erm_power, 38)  # 5.91)
+
+    # Fine cal + tSSS
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg', regularize=None,
+                             calibration=fine_cal_fname,
+                             origin=mf_meg_origin, st_duration=1.)
+    _assert_shielding(raw_sss, erm_power, 38)  # 5.98)
+
+    # Fine cal + Crosstalk + tSSS
+    _assert_shielding(Raw(sss_erm_st1FineCalCrossTalk_fname),
+                      erm_power, 39)  # 6.07)
+    raw_sss = maxwell_filter(raw_erm, coord_frame='meg', regularize=None,
+                             calibration=fine_cal_fname, origin=mf_meg_origin,
+                             cross_talk=ctc_fname, st_duration=1.)
+    _assert_shielding(raw_sss, erm_power, 39)  # 6.05)
+
+    # Fine cal + Crosstalk + tSSS + Reg-in
+    _assert_shielding(Raw(sss_erm_st1FineCalCrossTalkRegIn_fname), erm_power,
+                      57)  # 6.97)
+    raw_sss = maxwell_filter(raw_erm, calibration=fine_cal_fname,
+                             cross_talk=ctc_fname, st_duration=1.,
+                             origin=mf_meg_origin,
+                             coord_frame='meg', regularize='in')
+    _assert_shielding(raw_sss, erm_power, 53)  # 6.64)
+    raw_sss = maxwell_filter(raw_erm, calibration=fine_cal_fname,
+                             cross_talk=ctc_fname, st_duration=1.,
+                             coord_frame='meg', regularize='in')
+    _assert_shielding(raw_sss, erm_power, 58)  # 7.0)
+    raw_sss = maxwell_filter(raw_erm, calibration=fine_cal_fname_3d,
+                             cross_talk=ctc_fname, st_duration=1.,
+                             coord_frame='meg', regularize='in')
+
+    # Our 3D cal has worse defaults for this ERM than the 1D file
+    _assert_shielding(raw_sss, erm_power, 54)
+    # Show it by rewriting the 3D as 1D and testing it
+    temp_dir = _TempDir()
+    temp_fname = op.join(temp_dir, 'test_cal.dat')
+    with open(fine_cal_fname_3d, 'r') as fid:
+        with open(temp_fname, 'w') as fid_out:
+            for line in fid:
+                fid_out.write(' '.join(line.strip().split(' ')[:14]) + '\n')
+    raw_sss = maxwell_filter(raw_erm, calibration=temp_fname,
+                             cross_talk=ctc_fname, st_duration=1.,
+                             coord_frame='meg', regularize='in')
+    # Even rewritten as 1D, the 3D cal still underperforms the real 1D file
+    _assert_shielding(raw_sss, erm_power, 44)
+
+
+@slow_test
+@requires_svd_convergence
+@testing.requires_testing_data
+def test_all():
+    """Test maxwell filter using all options"""
+    raw_fnames = (raw_fname, raw_fname, erm_fname, sample_fname)
+    sss_fnames = (sss_st1FineCalCrossTalkRegIn_fname,
+                  sss_st1FineCalCrossTalkRegInTransSample_fname,
+                  sss_erm_st1FineCalCrossTalkRegIn_fname,
+                  sss_samp_fname)
+    fine_cals = (fine_cal_fname,
+                 fine_cal_fname,
+                 fine_cal_fname,
+                 fine_cal_mgh_fname)
+    coord_frames = ('head', 'head', 'meg', 'head')
+    ctcs = (ctc_fname, ctc_fname, ctc_fname, ctc_mgh_fname)
+    mins = (3.5, 3.5, 1.2, 0.9)
+    meds = (10.9, 10.4, 3.2, 6.)
+    st_durs = (1., 1., 1., None)
+    destinations = (None, sample_fname, None, None)
+    origins = (mf_head_origin,
+               mf_head_origin,
+               mf_meg_origin,
+               mf_head_origin)
+    for ii, rf in enumerate(raw_fnames):
+        with warnings.catch_warnings(record=True):  # maxshield
+            raw = Raw(rf, allow_maxshield=True).crop(0., 1., copy=False)
+        sss_py = maxwell_filter(raw, calibration=fine_cals[ii],
+                                cross_talk=ctcs[ii], st_duration=st_durs[ii],
+                                coord_frame=coord_frames[ii],
+                                destination=destinations[ii],
+                                origin=origins[ii])
+        sss_mf = Raw(sss_fnames[ii])
+        assert_meg_snr(sss_py, sss_mf, mins[ii], meds[ii], msg=rf)
 
 run_tests_if_main()
diff --git a/mne/preprocessing/tests/test_ssp.py b/mne/preprocessing/tests/test_ssp.py
index 1d5cd0a..4974f01 100644
--- a/mne/preprocessing/tests/test_ssp.py
+++ b/mne/preprocessing/tests/test_ssp.py
@@ -34,6 +34,17 @@ def test_compute_proj_ecg():
         # heart rate at least 0.5 Hz, but less than 3 Hz
         assert_true(events.shape[0] > 0.5 * dur_use and
                     events.shape[0] < 3 * dur_use)
+        ssp_ecg = [proj for proj in projs if proj['desc'].startswith('ECG')]
+        # check that the first principal component has at least a
+        # minimal explained variance
+        ssp_ecg = [proj for proj in ssp_ecg if 'PCA-01' in proj['desc']]
+        thresh_eeg, thresh_axial, thresh_planar = .9, .3, .1
+        for proj in ssp_ecg:
+            if 'planar' in proj['desc']:
+                assert_true(proj['explained_var'] > thresh_planar)
+            elif 'axial' in proj['desc']:
+                assert_true(proj['explained_var'] > thresh_axial)
+            elif 'eeg' in proj['desc']:
+                assert_true(proj['explained_var'] > thresh_eeg)
         # XXX: better tests
 
         # without setting a bad channel, this should throw a warning
@@ -62,6 +73,17 @@ def test_compute_proj_eog():
         assert_true(len(projs) == (7 + n_projs_init))
         assert_true(np.abs(events.shape[0] -
                     np.sum(np.less(eog_times, dur_use))) <= 1)
+        ssp_eog = [proj for proj in projs if proj['desc'].startswith('EOG')]
+        # check that the first principal component has at least a
+        # minimal explained variance
+        ssp_eog = [proj for proj in ssp_eog if 'PCA-01' in proj['desc']]
+        thresh_eeg, thresh_axial, thresh_planar = .9, .3, .1
+        for proj in ssp_eog:
+            if 'planar' in proj['desc']:
+                assert_true(proj['explained_var'] > thresh_planar)
+            elif 'axial' in proj['desc']:
+                assert_true(proj['explained_var'] > thresh_axial)
+            elif 'eeg' in proj['desc']:
+                assert_true(proj['explained_var'] > thresh_eeg)
         # XXX: better tests
 
         # This will throw a warning b/c simplefilter('always')
diff --git a/mne/proj.py b/mne/proj.py
index c146331..0bcbc35 100644
--- a/mne/proj.py
+++ b/mne/proj.py
@@ -97,15 +97,19 @@ def _compute_proj(data, info, n_grad, n_mag, n_eeg, desc_prefix, verbose=None):
         if n == 0:
             continue
         data_ind = data[ind][:, ind]
-        U = linalg.svd(data_ind, full_matrices=False,
-                       overwrite_a=True)[0][:, :n]
-        for k, u in enumerate(U.T):
+        # data is the covariance matrix: U * S**2 * Ut
+        U, Sexp2, _ = linalg.svd(data_ind, full_matrices=False,
+                                 overwrite_a=True)
+        U = U[:, :n]
+        exp_var = Sexp2 / Sexp2.sum()
+        exp_var = exp_var[:n]
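+        # hypothetical example: with Sexp2 == [4., 1., 1.] and n == 1,
+        # the first projector gets explained_var = 4. / 6. ~= 0.67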
+        for k, (u, var) in enumerate(zip(U.T, exp_var)):
             proj_data = dict(col_names=names, row_names=None,
                              data=u[np.newaxis, :], nrow=1, ncol=u.size)
             this_desc = "%s-%s-PCA-%02d" % (desc, desc_prefix, k + 1)
             logger.info("Adding projection: %s" % this_desc)
             proj = Projection(active=False, data=proj_data,
-                              desc=this_desc, kind=1)
+                              desc=this_desc, kind=1, explained_var=var)
             projs.append(proj)
 
     return projs
diff --git a/mne/realtime/fieldtrip_client.py b/mne/realtime/fieldtrip_client.py
index 24820ea..6b35cb0 100644
--- a/mne/realtime/fieldtrip_client.py
+++ b/mne/realtime/fieldtrip_client.py
@@ -1,351 +1,350 @@
-# Author: Mainak Jas
-#
-# License: BSD (3-clause)
-
-import re
-import copy
-import time
-import threading
-import warnings
-import numpy as np
-
-from ..io.constants import FIFF
-from ..io.meas_info import _empty_info
-from ..io.pick import pick_info
-from ..epochs import EpochsArray
-from ..utils import logger
-from ..externals.FieldTrip import Client as FtClient
-
-
-def _buffer_recv_worker(ft_client):
-    """Worker thread that constantly receives buffers."""
-
-    try:
-        for raw_buffer in ft_client.iter_raw_buffers():
-            ft_client._push_raw_buffer(raw_buffer)
-    except RuntimeError as err:
-        # something is wrong, the server stopped (or something)
-        ft_client._recv_thread = None
-        print('Buffer receive thread stopped: %s' % err)
-
-
-class FieldTripClient(object):
-    """ Realtime FieldTrip client
-
-    Parameters
-    ----------
-    info : dict | None
-        The measurement info read in from a file. If None, it is guessed from
-        the Fieldtrip Header object.
-    host : str
-        Hostname (or IP address) of the host where Fieldtrip buffer is running.
-    port : int
-        Port to use for the connection.
-    wait_max : float
-        Maximum time (in seconds) to wait for Fieldtrip buffer to start
-    tmin : float | None
-        Time instant to start receiving buffers. If None, start from the latest
-        samples available.
-    tmax : float
-        Time instant to stop receiving buffers.
-    buffer_size : int
-        Size of each buffer in terms of number of samples.
-    verbose : bool, str, int, or None
-        Log verbosity see mne.verbose.
-    """
-    def __init__(self, info=None, host='localhost', port=1972, wait_max=30,
-                 tmin=None, tmax=np.inf, buffer_size=1000, verbose=None):
-        self.verbose = verbose
-
-        self.info = info
-        self.wait_max = wait_max
-        self.tmin = tmin
-        self.tmax = tmax
-        self.buffer_size = buffer_size
-
-        self.host = host
-        self.port = port
-
-        self._recv_thread = None
-        self._recv_callbacks = list()
-
-    def __enter__(self):
-        # instantiate Fieldtrip client and connect
-        self.ft_client = FtClient()
-
-        # connect to FieldTrip buffer
-        logger.info("FieldTripClient: Waiting for server to start")
-        start_time, current_time = time.time(), time.time()
-        success = False
-        while current_time < (start_time + self.wait_max):
-            try:
-                self.ft_client.connect(self.host, self.port)
-                logger.info("FieldTripClient: Connected")
-                success = True
-                break
-            except:
-                current_time = time.time()
-                time.sleep(0.1)
-
-        if not success:
-            raise RuntimeError('Could not connect to FieldTrip Buffer')
-
-        # retrieve header
-        logger.info("FieldTripClient: Retrieving header")
-        start_time, current_time = time.time(), time.time()
-        while current_time < (start_time + self.wait_max):
-            self.ft_header = self.ft_client.getHeader()
-            if self.ft_header is None:
-                current_time = time.time()
-                time.sleep(0.1)
-            else:
-                break
-
-        if self.ft_header is None:
-            raise RuntimeError('Failed to retrieve Fieldtrip header!')
-        else:
-            logger.info("FieldTripClient: Header retrieved")
-
-        self.info = self._guess_measurement_info()
-        self.ch_names = self.ft_header.labels
-
-        # find start and end samples
-
-        sfreq = self.info['sfreq']
-
-        if self.tmin is None:
-            self.tmin_samp = max(0, self.ft_header.nSamples - 1)
-        else:
-            self.tmin_samp = int(round(sfreq * self.tmin))
-
-        if self.tmax != np.inf:
-            self.tmax_samp = int(round(sfreq * self.tmax))
-        else:
-            self.tmax_samp = np.iinfo(np.uint32).max
-
-        return self
-
-    def __exit__(self, type, value, traceback):
-        self.ft_client.disconnect()
-
-    def _guess_measurement_info(self):
-        """
-        Creates a minimal Info dictionary required for epoching, averaging
-        et al.
-        """
-
-        if self.info is None:
-
-            warnings.warn('Info dictionary not provided. Trying to guess it '
-                          'from FieldTrip Header object')
-
-            info = _empty_info()  # create info dictionary
-
-            # modify info attributes according to the FieldTrip Header object
-            info['nchan'] = self.ft_header.nChannels
-            info['sfreq'] = self.ft_header.fSample
-            info['ch_names'] = self.ft_header.labels
-
-            info['comps'] = list()
-            info['projs'] = list()
-            info['bads'] = list()
-
-            # channel dictionary list
-            info['chs'] = []
-
-            for idx, ch in enumerate(info['ch_names']):
-                this_info = dict()
-
-                this_info['scanno'] = idx
-
-                # extract numerical part of channel name
-                this_info['logno'] = int(re.findall('[^\W\d_]+|\d+', ch)[-1])
-
-                if ch.startswith('EEG'):
-                    this_info['kind'] = FIFF.FIFFV_EEG_CH
-                elif ch.startswith('MEG'):
-                    this_info['kind'] = FIFF.FIFFV_MEG_CH
-                elif ch.startswith('MCG'):
-                    this_info['kind'] = FIFF.FIFFV_MCG_CH
-                elif ch.startswith('EOG'):
-                    this_info['kind'] = FIFF.FIFFV_EOG_CH
-                elif ch.startswith('EMG'):
-                    this_info['kind'] = FIFF.FIFFV_EMG_CH
-                elif ch.startswith('STI'):
-                    this_info['kind'] = FIFF.FIFFV_STIM_CH
-                elif ch.startswith('ECG'):
-                    this_info['kind'] = FIFF.FIFFV_ECG_CH
-                elif ch.startswith('MISC'):
-                    this_info['kind'] = FIFF.FIFFV_MISC_CH
-
-                # Fieldtrip already does calibration
-                this_info['range'] = 1.0
-                this_info['cal'] = 1.0
-
-                this_info['ch_name'] = ch
-                this_info['loc'] = None
-
-                if ch.startswith('EEG'):
-                    this_info['coord_frame'] = FIFF.FIFFV_COORD_HEAD
-                elif ch.startswith('MEG'):
-                    this_info['coord_frame'] = FIFF.FIFFV_COORD_DEVICE
-                else:
-                    this_info['coord_frame'] = FIFF.FIFFV_COORD_UNKNOWN
-
-                if ch.startswith('MEG') and ch.endswith('1'):
-                    this_info['unit'] = FIFF.FIFF_UNIT_T
-                elif ch.startswith('MEG') and (ch.endswith('2') or
-                                               ch.endswith('3')):
-                    this_info['unit'] = FIFF.FIFF_UNIT_T_M
-                else:
-                    this_info['unit'] = FIFF.FIFF_UNIT_V
-
-                this_info['unit_mul'] = 0
-
-                info['chs'].append(this_info)
-
-        else:
-
-            # XXX: the data in real-time mode and offline mode
-            # does not match unless this is done
-            self.info['projs'] = list()
-
-            # FieldTrip buffer already does the calibration
-            for this_info in self.info['chs']:
-                this_info['range'] = 1.0
-                this_info['cal'] = 1.0
-                this_info['unit_mul'] = 0
-
-            info = copy.deepcopy(self.info)
-
-        return info
-
-    def get_measurement_info(self):
-        """Returns the measurement info.
-
-        Returns
-        -------
-        self.info : dict
-            The measurement info.
-        """
-        return self.info
-
-    def get_data_as_epoch(self, n_samples=1024, picks=None):
-        """Returns last n_samples from current time.
-
-        Parameters
-        ----------
-        n_samples : int
-            Number of samples to fetch.
-        picks : array-like of int | None
-            If None all channels are kept
-            otherwise the channels indices in picks are kept.
-
-        Returns
-        -------
-        epoch : instance of Epochs
-            The samples fetched as an Epochs object.
+# Author: Mainak Jas
+#
+# License: BSD (3-clause)
+
+import re
+import copy
+import time
+import threading
+import warnings
+import numpy as np
+
+from ..io import _empty_info
+from ..io.pick import pick_info
+from ..io.constants import FIFF
+from ..epochs import EpochsArray
+from ..utils import logger
+from ..externals.FieldTrip import Client as FtClient
+
+
+def _buffer_recv_worker(ft_client):
+    """Worker thread that constantly receives buffers."""
+
+    try:
+        for raw_buffer in ft_client.iter_raw_buffers():
+            ft_client._push_raw_buffer(raw_buffer)
+    except RuntimeError as err:
+        # something went wrong: the server stopped or the connection dropped
+        ft_client._recv_thread = None
+        print('Buffer receive thread stopped: %s' % err)
+
+
+class FieldTripClient(object):
+    """ Realtime FieldTrip client
+
+    Parameters
+    ----------
+    info : dict | None
+        The measurement info read in from a file. If None, it is guessed from
+        the Fieldtrip Header object.
+    host : str
+        Hostname (or IP address) of the host where Fieldtrip buffer is running.
+    port : int
+        Port to use for the connection.
+    wait_max : float
+        Maximum time (in seconds) to wait for Fieldtrip buffer to start
+    tmin : float | None
+        Time instant to start receiving buffers. If None, start from the latest
+        samples available.
+    tmax : float
+        Time instant to stop receiving buffers.
+    buffer_size : int
+        Size of each buffer in terms of number of samples.
+    verbose : bool, str, int, or None
+        Log verbosity see mne.verbose.
+    """
+    def __init__(self, info=None, host='localhost', port=1972, wait_max=30,
+                 tmin=None, tmax=np.inf, buffer_size=1000, verbose=None):
+        self.verbose = verbose
+
+        self.info = info
+        self.wait_max = wait_max
+        self.tmin = tmin
+        self.tmax = tmax
+        self.buffer_size = buffer_size
+
+        self.host = host
+        self.port = port
+
+        self._recv_thread = None
+        self._recv_callbacks = list()
+
+    def __enter__(self):
+        # instantiate Fieldtrip client and connect
+        self.ft_client = FtClient()
+
+        # connect to FieldTrip buffer
+        logger.info("FieldTripClient: Waiting for server to start")
+        start_time, current_time = time.time(), time.time()
+        success = False
+        while current_time < (start_time + self.wait_max):
+            try:
+                self.ft_client.connect(self.host, self.port)
+                logger.info("FieldTripClient: Connected")
+                success = True
+                break
+            except Exception:  # avoid a bare except; retry until wait_max
+                current_time = time.time()
+                time.sleep(0.1)
+
+        if not success:
+            raise RuntimeError('Could not connect to FieldTrip Buffer')
+
+        # retrieve header
+        logger.info("FieldTripClient: Retrieving header")
+        start_time, current_time = time.time(), time.time()
+        while current_time < (start_time + self.wait_max):
+            self.ft_header = self.ft_client.getHeader()
+            if self.ft_header is None:
+                current_time = time.time()
+                time.sleep(0.1)
+            else:
+                break
+
+        if self.ft_header is None:
+            raise RuntimeError('Failed to retrieve Fieldtrip header!')
+        else:
+            logger.info("FieldTripClient: Header retrieved")
+
+        self.info = self._guess_measurement_info()
+        self.ch_names = self.ft_header.labels
+
+        # find start and end samples
+
+        sfreq = self.info['sfreq']
+
+        if self.tmin is None:
+            self.tmin_samp = max(0, self.ft_header.nSamples - 1)
+        else:
+            self.tmin_samp = int(round(sfreq * self.tmin))
+
+        if self.tmax != np.inf:
+            self.tmax_samp = int(round(sfreq * self.tmax))
+        else:
+            self.tmax_samp = np.iinfo(np.uint32).max
+
+        return self
+
+    def __exit__(self, type, value, traceback):
+        self.ft_client.disconnect()
+
+    def _guess_measurement_info(self):
+        """
+        Creates a minimal Info dictionary required for epoching, averaging
+        et al.
+        """
+
+        if self.info is None:
+
+            warnings.warn('Info dictionary not provided. Trying to guess it '
+                          'from the FieldTrip header object')
+
+            info = _empty_info(self.ft_header.fSample)  # create info
+
+            # modify info attributes according to the FieldTrip Header object
+            info['nchan'] = self.ft_header.nChannels
+            info['ch_names'] = self.ft_header.labels
+
+            info['comps'] = list()
+            info['projs'] = list()
+            info['bads'] = list()
+
+            # channel dictionary list
+            info['chs'] = []
+
+            for idx, ch in enumerate(info['ch_names']):
+                this_info = dict()
+
+                this_info['scanno'] = idx
+
+                # extract numerical part of channel name
+                this_info['logno'] = int(re.findall(r'[^\W\d_]+|\d+', ch)[-1])
+
+                if ch.startswith('EEG'):
+                    this_info['kind'] = FIFF.FIFFV_EEG_CH
+                elif ch.startswith('MEG'):
+                    this_info['kind'] = FIFF.FIFFV_MEG_CH
+                elif ch.startswith('MCG'):
+                    this_info['kind'] = FIFF.FIFFV_MCG_CH
+                elif ch.startswith('EOG'):
+                    this_info['kind'] = FIFF.FIFFV_EOG_CH
+                elif ch.startswith('EMG'):
+                    this_info['kind'] = FIFF.FIFFV_EMG_CH
+                elif ch.startswith('STI'):
+                    this_info['kind'] = FIFF.FIFFV_STIM_CH
+                elif ch.startswith('ECG'):
+                    this_info['kind'] = FIFF.FIFFV_ECG_CH
+                elif ch.startswith('MISC'):
+                    this_info['kind'] = FIFF.FIFFV_MISC_CH
+
+                # Fieldtrip already does calibration
+                this_info['range'] = 1.0
+                this_info['cal'] = 1.0
+
+                this_info['ch_name'] = ch
+                this_info['loc'] = None
+
+                if ch.startswith('EEG'):
+                    this_info['coord_frame'] = FIFF.FIFFV_COORD_HEAD
+                elif ch.startswith('MEG'):
+                    this_info['coord_frame'] = FIFF.FIFFV_COORD_DEVICE
+                else:
+                    this_info['coord_frame'] = FIFF.FIFFV_COORD_UNKNOWN
+
+                if ch.startswith('MEG') and ch.endswith('1'):
+                    this_info['unit'] = FIFF.FIFF_UNIT_T
+                elif ch.startswith('MEG') and (ch.endswith('2') or
+                                               ch.endswith('3')):
+                    this_info['unit'] = FIFF.FIFF_UNIT_T_M
+                else:
+                    this_info['unit'] = FIFF.FIFF_UNIT_V
+
+                this_info['unit_mul'] = 0
+
+                info['chs'].append(this_info)
+
+        else:
+
+            # XXX: the data in real-time mode and offline mode
+            # does not match unless this is done
+            self.info['projs'] = list()
+
+            # FieldTrip buffer already does the calibration
+            for this_info in self.info['chs']:
+                this_info['range'] = 1.0
+                this_info['cal'] = 1.0
+                this_info['unit_mul'] = 0
+
+            info = copy.deepcopy(self.info)
+
+        return info
+
+    def get_measurement_info(self):
+        """Returns the measurement info.
+
+        Returns
+        -------
+        self.info : dict
+            The measurement info.
+        """
+        return self.info
+
+    def get_data_as_epoch(self, n_samples=1024, picks=None):
+        """Returns last n_samples from current time.
+
+        Parameters
+        ----------
+        n_samples : int
+            Number of samples to fetch.
+        picks : array-like of int | None
+            If None all channels are kept
+            otherwise the channels indices in picks are kept.
+
+        Returns
+        -------
+        epoch : instance of Epochs
+            The samples fetched as an Epochs object.
 
         See Also
         --------
         Epochs.iter_evoked
-        """
-        ft_header = self.ft_client.getHeader()
-        last_samp = ft_header.nSamples - 1
-        start = last_samp - n_samples + 1
-        stop = last_samp
-        events = np.expand_dims(np.array([start, 1, 1]), axis=0)
-
-        # get the data
-        data = self.ft_client.getData([start, stop]).transpose()
-
-        # create epoch from data
-        info = self.info
-        if picks is not None:
-            info = pick_info(info, picks, copy=True)
-        epoch = EpochsArray(data[picks][np.newaxis], info, events)
-
-        return epoch
-
-    def register_receive_callback(self, callback):
-        """Register a raw buffer receive callback.
-
-        Parameters
-        ----------
-        callback : callable
-            The callback. The raw buffer is passed as the first parameter
-            to callback.
-        """
-        if callback not in self._recv_callbacks:
-            self._recv_callbacks.append(callback)
-
-    def unregister_receive_callback(self, callback):
-        """Unregister a raw buffer receive callback
-
-        Parameters
-        ----------
-        callback : callable
-            The callback to unregister.
-        """
-        if callback in self._recv_callbacks:
-            self._recv_callbacks.remove(callback)
-
-    def _push_raw_buffer(self, raw_buffer):
-        """Push raw buffer to clients using callbacks."""
-        for callback in self._recv_callbacks:
-            callback(raw_buffer)
-
-    def start_receive_thread(self, nchan):
-        """Start the receive thread.
-
-        If the measurement has not been started, it will also be started.
-
-        Parameters
-        ----------
-        nchan : int
-            The number of channels in the data.
-        """
-
-        if self._recv_thread is None:
-
-            self._recv_thread = threading.Thread(target=_buffer_recv_worker,
-                                                 args=(self, ))
-            self._recv_thread.daemon = True
-            self._recv_thread.start()
-
-    def stop_receive_thread(self, stop_measurement=False):
-        """Stop the receive thread
-
-        Parameters
-        ----------
-        stop_measurement : bool
-            Also stop the measurement.
-        """
-        if self._recv_thread is not None:
-            self._recv_thread.stop()
-            self._recv_thread = None
-
-    def iter_raw_buffers(self):
-        """Return an iterator over raw buffers
-
-        Returns
-        -------
-        raw_buffer : generator
-            Generator for iteration over raw buffers.
-        """
-
-        iter_times = zip(range(self.tmin_samp, self.tmax_samp,
-                               self.buffer_size),
-                         range(self.tmin_samp + self.buffer_size - 1,
-                               self.tmax_samp, self.buffer_size))
-
-        for ii, (start, stop) in enumerate(iter_times):
-
-            # wait for correct number of samples to be available
-            self.ft_client.wait(stop, np.iinfo(np.uint32).max,
-                                np.iinfo(np.uint32).max)
-
-            # get the samples
-            raw_buffer = self.ft_client.getData([start, stop]).transpose()
-
-            yield raw_buffer
+        """
+        ft_header = self.ft_client.getHeader()
+        last_samp = ft_header.nSamples - 1
+        start = last_samp - n_samples + 1
+        stop = last_samp
+        events = np.expand_dims(np.array([start, 1, 1]), axis=0)
+
+        # get the data
+        data = self.ft_client.getData([start, stop]).transpose()
+
+        # create epoch from data
+        info = self.info
+        if picks is not None:
+            info = pick_info(info, picks, copy=True)
+            # index here: with picks=None, data[picks] would add a spurious axis
+            data = data[picks]
+        epoch = EpochsArray(data[np.newaxis], info, events)
+
+        return epoch
+
+    def register_receive_callback(self, callback):
+        """Register a raw buffer receive callback.
+
+        Parameters
+        ----------
+        callback : callable
+            The callback. The raw buffer is passed as the first parameter
+            to callback.
+        """
+        if callback not in self._recv_callbacks:
+            self._recv_callbacks.append(callback)
+
+    def unregister_receive_callback(self, callback):
+        """Unregister a raw buffer receive callback
+
+        Parameters
+        ----------
+        callback : callable
+            The callback to unregister.
+        """
+        if callback in self._recv_callbacks:
+            self._recv_callbacks.remove(callback)
+
+    def _push_raw_buffer(self, raw_buffer):
+        """Push raw buffer to clients using callbacks."""
+        for callback in self._recv_callbacks:
+            callback(raw_buffer)
+
+    def start_receive_thread(self, nchan):
+        """Start the receive thread.
+
+        If the measurement has not been started, it will also be started.
+
+        Parameters
+        ----------
+        nchan : int
+            The number of channels in the data.
+        """
+
+        if self._recv_thread is None:
+
+            self._recv_thread = threading.Thread(target=_buffer_recv_worker,
+                                                 args=(self, ))
+            self._recv_thread.daemon = True
+            self._recv_thread.start()
+
+    def stop_receive_thread(self, stop_measurement=False):
+        """Stop the receive thread
+
+        Parameters
+        ----------
+        stop_measurement : bool
+            Also stop the measurement.
+        """
+        if self._recv_thread is not None:
+            # threading.Thread has no stop() method; drop the reference and
+            # let the daemon worker exit on its own
+            self._recv_thread = None
+
+    def iter_raw_buffers(self):
+        """Return an iterator over raw buffers
+
+        Returns
+        -------
+        raw_buffer : generator
+            Generator for iteration over raw buffers.
+        """
+
+        iter_times = zip(range(self.tmin_samp, self.tmax_samp,
+                               self.buffer_size),
+                         range(self.tmin_samp + self.buffer_size - 1,
+                               self.tmax_samp, self.buffer_size))
+
+        for ii, (start, stop) in enumerate(iter_times):
+
+            # wait for correct number of samples to be available
+            self.ft_client.wait(stop, np.iinfo(np.uint32).max,
+                                np.iinfo(np.uint32).max)
+
+            # get the samples
+            raw_buffer = self.ft_client.getData([start, stop]).transpose()
+
+            yield raw_buffer
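
A minimal usage sketch of the client above, assuming it is exported as
mne.realtime.FieldTripClient and that a FieldTrip buffer is already running
on localhost:1972:

    from mne.realtime import FieldTripClient

    # connect, read the (possibly guessed) measurement info, and pull the
    # latest 512 samples as a single epoch
    with FieldTripClient(info=None, host='localhost', port=1972,
                         wait_max=10.) as rt_client:
        info = rt_client.get_measurement_info()
        epoch = rt_client.get_data_as_epoch(n_samples=512)
        print(epoch.get_data().shape)  # (1, n_channels, 512)
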
diff --git a/mne/simulation/__init__.py b/mne/simulation/__init__.py
index 081654b..7140854 100644
--- a/mne/simulation/__init__.py
+++ b/mne/simulation/__init__.py
@@ -1,9 +1,7 @@
 """Data simulation code
 """
 
-from .evoked import (generate_evoked, generate_noise_evoked, add_noise_evoked,
-                     simulate_evoked, simulate_noise_evoked)
+from .evoked import add_noise_evoked, simulate_evoked, simulate_noise_evoked
 from .raw import simulate_raw
-from .source import (select_source_in_label, generate_sparse_stc, generate_stc,
-                     simulate_sparse_stc)
+from .source import select_source_in_label, simulate_stc, simulate_sparse_stc
 from .metrics import source_estimate_quantification
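
After this cleanup only the simulate_* names and the remaining helpers are
importable; a sketch of the resulting import surface, with names taken from
the __init__ above:

    from mne.simulation import (add_noise_evoked, simulate_evoked,
                                simulate_noise_evoked, simulate_raw,
                                simulate_stc, simulate_sparse_stc,
                                select_source_in_label,
                                source_estimate_quantification)
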
diff --git a/mne/simulation/evoked.py b/mne/simulation/evoked.py
index d349706..bc2d540 100644
--- a/mne/simulation/evoked.py
+++ b/mne/simulation/evoked.py
@@ -10,49 +10,7 @@ import numpy as np
 
 from ..io.pick import pick_channels_cov
 from ..forward import apply_forward
-from ..utils import check_random_state, verbose, _time_mask, deprecated
-
-
- at deprecated('"generate_evoked" is deprecated and will be removed in '
-            'MNE-0.11. Please use simulate_evoked instead')
-def generate_evoked(fwd, stc, evoked, cov, snr=3, tmin=None,
-                    tmax=None, iir_filter=None, random_state=None,
-                    verbose=None):
-    """Generate noisy evoked data
-
-    Parameters
-    ----------
-    fwd : dict
-        a forward solution.
-    stc : SourceEstimate object
-        The source time courses.
-    evoked : None | Evoked object
-        An instance of evoked used as template.
-    cov : Covariance object
-        The noise covariance
-    snr : float
-        signal to noise ratio in dB. It corresponds to
-        10 * log10( var(signal) / var(noise) ).
-    tmin : float | None
-        start of time interval to estimate SNR. If None first time point
-        is used.
-    tmax : float | None
-        start of time interval to estimate SNR. If None last time point
-        is used.
-    iir_filter : None | array
-        IIR filter coefficients (denominator) e.g. [1, -1, 0.2].
-    random_state : None | int | np.random.RandomState
-        To specify the random generator state.
-    verbose : bool, str, int, or None
-        If not None, override default verbose level (see mne.verbose).
-
-    Returns
-    -------
-    evoked : Evoked object
-        The simulated evoked data
-    """
-    return simulate_evoked(fwd, stc, evoked.info, cov, snr, tmin,
-                           tmax, iir_filter, random_state, verbose)
+from ..utils import check_random_state, verbose, _time_mask
 
 
 @verbose
@@ -105,32 +63,6 @@ def simulate_evoked(fwd, stc, info, cov, snr=3., tmin=None, tmax=None,
     return evoked_noise
 
 
- at deprecated('"generate_noise_evoked" is deprecated and will be removed in '
-            'MNE-0.11. Please use simulate_noise_evoked instead')
-def generate_noise_evoked(evoked, cov, iir_filter=None, random_state=None):
-    """Creates noise as a multivariate Gaussian
-
-    The spatial covariance of the noise is given from the cov matrix.
-
-    Parameters
-    ----------
-    evoked : instance of Evoked
-        An instance of evoked used as template.
-    cov : instance of Covariance
-        The noise covariance.
-    iir_filter : None | array
-        IIR filter coefficients (denominator as it is an AR filter).
-    random_state : None | int | np.random.RandomState
-        To specify the random generator state.
-
-    Returns
-    -------
-    noise : evoked object
-        an instance of evoked
-    """
-    return simulate_noise_evoked(evoked, cov, iir_filter, random_state)
-
-
 def simulate_noise_evoked(evoked, cov, iir_filter=None, random_state=None):
     """Creates noise as a multivariate Gaussian
 
diff --git a/mne/simulation/raw.py b/mne/simulation/raw.py
index 39a16c7..39742c5 100644
--- a/mne/simulation/raw.py
+++ b/mne/simulation/raw.py
@@ -22,7 +22,7 @@ from ..forward import (_magnetic_dipole_field_vec, _merge_meg_eeg_fwds,
                        _stc_src_sel, convert_forward_solution,
                        _prepare_for_forward, _prep_meg_channels,
                        _compute_forwards, _to_forward_dict)
-from ..transforms import _get_mri_head_t, transform_surface_to
+from ..transforms import _get_trans, transform_surface_to
 from ..source_space import _ensure_src, _points_outside_surface
 from ..source_estimate import _BaseSourceEstimate
 from ..utils import logger, verbose, check_random_state
@@ -463,7 +463,7 @@ def simulate_raw(raw, stc, trans, src, bem, cov='simple',
 def _iter_forward_solutions(info, trans, src, bem, exg_bem, dev_head_ts,
                             mindist, hpi_rrs, blink_rrs, ecg_rrs, n_jobs):
     """Calculate a forward solution for a subject"""
-    mri_head_t, trans = _get_mri_head_t(trans)
+    mri_head_t, trans = _get_trans(trans)
     logger.info('Setting up forward solutions')
     megcoils, meg_info, compcoils, megnames, eegels, eegnames, rr, info, \
         update_kwargs, bem = _prepare_for_forward(
diff --git a/mne/simulation/source.py b/mne/simulation/source.py
index 45293fe..b4c5c44 100644
--- a/mne/simulation/source.py
+++ b/mne/simulation/source.py
@@ -8,7 +8,7 @@ import numpy as np
 
 from ..source_estimate import SourceEstimate, VolSourceEstimate
 from ..source_space import _ensure_src
-from ..utils import check_random_state, deprecated, logger
+from ..utils import check_random_state, logger
 from ..externals.six.moves import zip
 
 
@@ -48,73 +48,6 @@ def select_source_in_label(src, label, random_state=None):
     return lh_vertno, rh_vertno
 
 
- at deprecated('"generate_sparse_stc" is deprecated and will be removed in'
-            'MNE-0.11. Please use simulate_sparse_stc instead')
-def generate_sparse_stc(src, labels, stc_data, tmin, tstep, random_state=None):
-    """Generate sparse sources time courses from waveforms and labels
-
-    This function randomly selects a single vertex in each label and assigns
-    a waveform from stc_data to it.
-
-    Parameters
-    ----------
-    src : list of dict
-        The source space
-    labels : list of Labels
-        The labels
-    stc_data : array (shape: len(labels) x n_times)
-        The waveforms
-    tmin : float
-        The beginning of the timeseries
-    tstep : float
-        The time step (1 / sampling frequency)
-    random_state : None | int | np.random.RandomState
-        To specify the random generator state.
-
-    Returns
-    -------
-    stc : SourceEstimate
-        The generated source time courses.
-    """
-    if len(labels) != len(stc_data):
-        raise ValueError('labels and stc_data must have the same length')
-
-    rng = check_random_state(random_state)
-    vertno = [[], []]
-    lh_data = list()
-    rh_data = list()
-    for label_data, label in zip(stc_data, labels):
-        lh_vertno, rh_vertno = select_source_in_label(src, label, rng)
-        vertno[0] += lh_vertno
-        vertno[1] += rh_vertno
-        if len(lh_vertno) != 0:
-            lh_data.append(np.atleast_2d(label_data))
-        elif len(rh_vertno) != 0:
-            rh_data.append(np.atleast_2d(label_data))
-        else:
-            raise ValueError('No vertno found.')
-
-    vertno = [np.array(v) for v in vertno]
-
-    # the data is in the order left, right
-    data = list()
-    if len(vertno[0]) != 0:
-        idx = np.argsort(vertno[0])
-        vertno[0] = vertno[0][idx]
-        data.append(np.concatenate(lh_data)[idx])
-
-    if len(vertno[1]) != 0:
-        idx = np.argsort(vertno[1])
-        vertno[1] = vertno[1][idx]
-        data.append(np.concatenate(rh_data)[idx])
-
-    data = np.concatenate(data)
-
-    stc = SourceEstimate(data, vertices=vertno, tmin=tmin, tstep=tstep)
-
-    return stc
-
-
 def simulate_sparse_stc(src, n_dipoles, times,
                         data_fun=lambda t: 1e-7 * np.sin(20 * np.pi * t),
                         labels=None, random_state=None):
@@ -202,48 +135,6 @@ def simulate_sparse_stc(src, n_dipoles, times,
     return stc
 
 
- at deprecated('"generate_stc" is deprecated and will be removed in'
-            'MNE-0.11. Please use simulate_stc instead')
-def generate_stc(src, labels, stc_data, tmin, tstep, value_fun=None):
-    """Generate sources time courses from waveforms and labels
-
-    This function generates a source estimate with extended sources by
-    filling the labels with the waveforms given in stc_data.
-
-    By default, the vertices within a label are assigned the same waveform.
-    The waveforms can be scaled for each vertex by using the label values
-    and value_fun. E.g.,
-
-    # create a source label where the values are the distance from the center
-    labels = circular_source_labels('sample', 0, 10, 0)
-
-    # sources with decaying strength (x will be the distance from the center)
-    fun = lambda x: exp(- x / 10)
-    stc = generate_stc(fwd, labels, stc_data, tmin, tstep, fun)
-
-    Parameters
-    ----------
-    src : list of dict
-        The source space
-    labels : list of Labels
-        The labels
-    stc_data : array (shape: len(labels) x n_times)
-        The waveforms
-    tmin : float
-        The beginning of the timeseries
-    tstep : float
-        The time step (1 / sampling frequency)
-    value_fun : function
-        Function to apply to the label values
-
-    Returns
-    -------
-    stc : SourceEstimate
-        The generated source time courses.
-    """
-    return simulate_stc(src, labels, stc_data, tmin, tstep, value_fun)
-
-
 def simulate_stc(src, labels, stc_data, tmin, tstep, value_fun=None):
     """Simulate sources time courses from waveforms and labels
 
diff --git a/mne/source_estimate.py b/mne/source_estimate.py
index 7c20c71..2d46c77 100644
--- a/mne/source_estimate.py
+++ b/mne/source_estimate.py
@@ -1394,7 +1394,7 @@ class SourceEstimate(_BaseSourceEstimate):
         """
         if self.subject is None:
             raise ValueError('stc.subject must be set')
-        src_orig = _ensure_src(src_orig)
+        src_orig = _ensure_src(src_orig, kind='surf')
         subject_orig = _ensure_src_subject(src_orig, subject_orig)
         data_idx, vertices = _get_morph_src_reordering(
             self.vertices, src_orig, subject_orig, self.subject, subjects_dir)
@@ -1835,10 +1835,7 @@ class MixedSourceEstimate(_BaseSourceEstimate):
         """
 
         # extract surface source spaces
-        src = _ensure_src(src)
-        surf = [s for s in src if s['type'] == 'surf']
-        if len(surf) != 2:
-            raise ValueError('Source space must contain exactly two surfaces.')
+        surf = _ensure_src(src, kind='surf')
 
         # extract surface source estimate
         data = self.data[:surf[0]['nuse'] + surf[1]['nuse']]
@@ -2322,7 +2319,7 @@ def spatio_temporal_src_connectivity(src, n_times, dist=None, verbose=None):
 
     Parameters
     ----------
-    src : source space
+    src : instance of SourceSpaces
         The source space.
     n_times : int
         Number of time instants.
@@ -2442,7 +2439,7 @@ def spatio_temporal_dist_connectivity(src, n_times, dist, verbose=None):
 
     Parameters
     ----------
-    src : source space
+    src : instance of SourceSpaces
         The source space must have distances between vertices computed, such
         that src['dist'] exists and is useful. This can be obtained using MNE
         with a call to mne_add_patch_info with the --dist option.
@@ -2482,7 +2479,7 @@ def spatial_src_connectivity(src, dist=None, verbose=None):
 
     Parameters
     ----------
-    src : source space
+    src : instance of SourceSpaces
         The source space.
     dist : float, or None
         Maximal geodesic distance (in m) between vertices in the
@@ -2526,7 +2523,7 @@ def spatial_dist_connectivity(src, dist, verbose=None):
 
     Parameters
     ----------
-    src : source space
+    src : instance of SourceSpaces
         The source space must have distances between vertices computed, such
         that src['dist'] exists and is useful. This can be obtained using MNE
         with a call to mne_add_patch_info with the --dist option.
@@ -2544,6 +2541,38 @@ def spatial_dist_connectivity(src, dist, verbose=None):
     return spatio_temporal_dist_connectivity(src, 1, dist)
 
 
+def spatial_inter_hemi_connectivity(src, dist, verbose=None):
+    """Get vertices on each hemisphere that are close to the other hemisphere
+
+    Parameters
+    ----------
+    src : instance of SourceSpaces
+        The source space. Must be surface type.
+    dist : float
+        Maximal Euclidean distance (in m) between vertices in one hemisphere
+        compared to the other to consider neighbors.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    connectivity : sparse COO matrix
+        The connectivity matrix describing the spatial graph structure.
+        Typically this should be combined (additively) with another
+        existing intra-hemispheric connectivity matrix, e.g. computed
+        using geodesic distances.
+    """
+    from scipy.spatial.distance import cdist
+    src = _ensure_src(src, kind='surf')
+    conn = cdist(src[0]['rr'][src[0]['vertno']],
+                 src[1]['rr'][src[1]['vertno']])
+    conn = sparse.csr_matrix(conn <= dist, dtype=int)
+    empties = [sparse.csr_matrix((nv, nv), dtype=int) for nv in conn.shape]
+    conn = sparse.vstack([sparse.hstack([empties[0], conn]),
+                          sparse.hstack([conn.T, empties[1]])])
+    return conn
+
+
 @verbose
 def _get_connectivity_from_edges(edges, n_times, verbose=None):
     """Given edges sparse matrix, create connectivity matrix"""
diff --git a/mne/source_space.py b/mne/source_space.py
index 4d99e0e..1f70d57 100644
--- a/mne/source_space.py
+++ b/mne/source_space.py
@@ -30,7 +30,7 @@ from .utils import (get_subjects_dir, run_subprocess, has_freesurfer,
 from .fixes import in1d, partial, gzip_open, meshgrid
 from .parallel import parallel_func, check_n_jobs
 from .transforms import (invert_transform, apply_trans, _print_coord_trans,
-                         combine_transforms, _get_mri_head_t,
+                         combine_transforms, _get_trans,
                          _coord_frame_name, Transform)
 from .externals.six import string_types
 
@@ -227,55 +227,37 @@ class SourceSpaces(list):
                 raise ValueError('Unrecognized source type: %s.' % src['type'])
 
         # Get shape, inuse array and interpolation matrix from volume sources
-        first_vol = True  # mark the first volume source
-        # Loop through the volume sources
-        for vs in src_types['volume']:
+        inuse = 0
+        for ii, vs in enumerate(src_types['volume']):
             # read the lookup table value for segmented volume
             if 'seg_name' not in vs:
                 raise ValueError('Volume sources should be segments, '
                                  'not the entire volume.')
             # find the color value for this volume
-            i = _get_lut_id(lut, vs['seg_name'], use_lut)
+            id_ = _get_lut_id(lut, vs['seg_name'], use_lut)
 
-            if first_vol:
+            if ii == 0:
                 # get the inuse array
                 if mri_resolution:
                     # read the mri file used to generate volumes
-                    aseg = nib.load(vs['mri_file'])
-
+                    aseg_data = nib.load(vs['mri_file']).get_data()
                     # get the voxel space shape
                     shape3d = (vs['mri_height'], vs['mri_depth'],
                                vs['mri_width'])
-
-                    # get the values for this volume
-                    inuse = i * (aseg.get_data() == i).astype(int)
-                    # store as 1D array
-                    inuse = inuse.ravel((2, 1, 0))
-
                 else:
-                    inuse = i * vs['inuse']
-
                     # get the volume source space shape
-                    shape = vs['shape']
-
                     # read the shape in reverse order
                     # (otherwise results are scrambled)
-                    shape3d = (shape[2], shape[1], shape[0])
-
-                first_vol = False
-
+                    shape3d = vs['shape'][2::-1]
+            if mri_resolution:
+                # get the values for this volume
+                use = id_ * (aseg_data == id_).astype(int).ravel('F')
             else:
-                # update the inuse array
-                if mri_resolution:
-
-                    # get the values for this volume
-                    use = i * (aseg.get_data() == i).astype(int)
-                    inuse += use.ravel((2, 1, 0))
-                else:
-                    inuse += i * vs['inuse']
+                use = id_ * vs['inuse']
+            inuse += use
 
         # Raise error if there are no volume source spaces
-        if first_vol:
+        if np.array(inuse).ndim == 0:
             raise ValueError('Source spaces must contain at least one volume.')
 
         # create 3d grid in the MRI_VOXEL coordinate frame
@@ -300,7 +282,7 @@ class SourceSpaces(list):
             if coords == 'head':
 
                 # read mri -> head transformation
-                mri_head_t = _get_mri_head_t(trans)[0]
+                mri_head_t = _get_trans(trans)[0]
 
                 # get the HEAD to MRI transform
                 head_mri_t = invert_transform(mri_head_t)
@@ -2106,7 +2088,7 @@ def _get_solids(tri_rrs, fros):
 
 
 @verbose
-def _ensure_src(src, verbose=None):
+def _ensure_src(src, kind=None, verbose=None):
     """Helper to ensure we have a source space"""
     if isinstance(src, string_types):
         if not op.isfile(src):
@@ -2115,6 +2097,13 @@ def _ensure_src(src, verbose=None):
         src = read_source_spaces(src, verbose=False)
     if not isinstance(src, SourceSpaces):
         raise ValueError('src must be a string or instance of SourceSpaces')
+    if kind is not None:
+        if kind == 'surf':
+            surf = [s for s in src if s['type'] == 'surf']
+            if len(surf) != 2 or len(src) != 2:
+                raise ValueError('Source space must contain exactly two '
+                                 'surfaces.')
+            src = surf
     return src
 
 
diff --git a/mne/stats/__init__.py b/mne/stats/__init__.py
index b45141e..6a94d61 100644
--- a/mne/stats/__init__.py
+++ b/mne/stats/__init__.py
@@ -1,7 +1,6 @@
 """Functions for statistical analysis"""
 
-from .parametric import (
-    f_threshold_twoway_rm, f_threshold_mway_rm, f_twoway_rm, f_mway_rm)
+from .parametric import f_threshold_mway_rm, f_mway_rm
 from .permutations import permutation_t_test
 from .cluster_level import (permutation_cluster_test,
                             permutation_cluster_1samp_test,
diff --git a/mne/stats/parametric.py b/mne/stats/parametric.py
index ed7fbe3..e42db60 100644
--- a/mne/stats/parametric.py
+++ b/mne/stats/parametric.py
@@ -9,7 +9,6 @@ from functools import reduce
 from string import ascii_uppercase
 
 from ..externals.six import string_types
-from ..utils import deprecated
 from ..fixes import matrix_rank
 
 # The following function is a rewriting of scipy.stats.f_oneway
@@ -186,15 +185,6 @@ def _iter_contrasts(n_subjects, factor_levels, effect_picks):
         yield c_, df1, df2
 
 
- at deprecated('"f_threshold_twoway_rm" is deprecated and will be removed in'
-            'MNE-0.11. Please use f_threshold_mway_rm instead')
-def f_threshold_twoway_rm(n_subjects, factor_levels, effects='A*B',
-                          pvalue=0.05):
-    return f_threshold_mway_rm(
-        n_subjects=n_subjects, factor_levels=factor_levels,
-        effects=effects, pvalue=pvalue)
-
-
 def f_threshold_mway_rm(n_subjects, factor_levels, effects='A*B',
                         pvalue=0.05):
     """ Compute f-value thesholds for a two-way ANOVA
@@ -242,18 +232,6 @@ def f_threshold_mway_rm(n_subjects, factor_levels, effects='A*B',
     return f_threshold if len(f_threshold) > 1 else f_threshold[0]
 
 
-# The following functions based on MATLAB code by Rik Henson
-# and Python code from the pvttble toolbox by Roger Lew.
- at deprecated('"f_twoway_rm" is deprecated and will be removed in MNE 0.11."'
-            " Please use f_mway_rm instead")
-def f_twoway_rm(data, factor_levels, effects='A*B', alpha=0.05,
-                correction=False, return_pvals=True):
-    """This function is deprecated, use `f_mway_rm` instead"""
-    return f_mway_rm(data=data, factor_levels=factor_levels, effects=effects,
-                     alpha=alpha, correction=correction,
-                     return_pvals=return_pvals)
-
-
 def f_mway_rm(data, factor_levels, effects='all', alpha=0.05,
               correction=False, return_pvals=True):
     """M-way repeated measures ANOVA for fully balanced designs
diff --git a/mne/stats/regression.py b/mne/stats/regression.py
index b5fb7d7..e314458 100644
--- a/mne/stats/regression.py
+++ b/mne/stats/regression.py
@@ -127,14 +127,28 @@ def _fit_lm(data, design_matrix, names):
     sqrt_noise_var = np.sqrt(resid_sum_squares / df).reshape(data.shape[1:])
     design_invcov = linalg.inv(np.dot(design_matrix.T, design_matrix))
     unscaled_stderrs = np.sqrt(np.diag(design_invcov))
-
+    tiny = np.finfo(np.float64).tiny
     beta, stderr, t_val, p_val, mlog10_p_val = (dict() for _ in range(5))
     for x, unscaled_stderr, predictor in zip(betas, unscaled_stderrs, names):
         beta[predictor] = x.reshape(data.shape[1:])
         stderr[predictor] = sqrt_noise_var * unscaled_stderr
-        t_val[predictor] = beta[predictor] / stderr[predictor]
-        cdf = stats.t.cdf(np.abs(t_val[predictor]), df)
-        p_val[predictor] = (1. - cdf) * 2.
+        p_val[predictor] = np.empty_like(stderr[predictor])
+        t_val[predictor] = np.empty_like(stderr[predictor])
+
+        stderr_pos = (stderr[predictor] > 0)
+        beta_pos = (beta[predictor] > 0)
+        t_val[predictor][stderr_pos] = (beta[predictor][stderr_pos] /
+                                        stderr[predictor][stderr_pos])
+        cdf = stats.t.cdf(np.abs(t_val[predictor][stderr_pos]), df)
+        p_val[predictor][stderr_pos] = np.clip((1. - cdf) * 2., tiny, 1.)
+        # degenerate cases
+        mask = (~stderr_pos & beta_pos)
+        t_val[predictor][mask] = np.inf * np.sign(beta[predictor][mask])
+        p_val[predictor][mask] = tiny
+        # could do NaN here, but hopefully this is safe enough
+        mask = (~stderr_pos & ~beta_pos)
+        t_val[predictor][mask] = 0
+        p_val[predictor][mask] = 1.
         mlog10_p_val[predictor] = -np.log10(p_val[predictor])
 
     return beta, stderr, t_val, p_val, mlog10_p_val
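
The clamp to `tiny` above keeps -log10(p) finite even when a p-value
underflows; a quick check of the bound:

    import numpy as np

    tiny = np.finfo(np.float64).tiny  # smallest positive normal float64
    print(tiny)                       # ~2.2e-308
    print(-np.log10(tiny))            # ~307.65, large but finite, unlike -log10(0)
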
diff --git a/mne/stats/tests/test_cluster_level.py b/mne/stats/tests/test_cluster_level.py
index 3f00cc9..a19b57c 100644
--- a/mne/stats/tests/test_cluster_level.py
+++ b/mne/stats/tests/test_cluster_level.py
@@ -1,5 +1,4 @@
 import os
-import os.path as op
 import numpy as np
 from numpy.testing import (assert_equal, assert_array_equal,
                            assert_array_almost_equal)
@@ -13,7 +12,7 @@ from mne.stats.cluster_level import (permutation_cluster_test,
                                      spatio_temporal_cluster_test,
                                      spatio_temporal_cluster_1samp_test,
                                      ttest_1samp_no_p, summarize_clusters_stc)
-from mne.utils import run_tests_if_main, slow_test, _TempDir, set_log_file
+from mne.utils import run_tests_if_main, slow_test, _TempDir, catch_logging
 
 warnings.simplefilter('always')  # enable b/c these tests throw warnings
 
@@ -52,22 +51,21 @@ def test_cache_dir():
     orig_size = os.getenv('MNE_MEMMAP_MIN_SIZE', None)
     rng = np.random.RandomState(0)
     X = rng.randn(9, 2, 10)
-    log_file = op.join(tempdir, 'log.txt')
     try:
         os.environ['MNE_MEMMAP_MIN_SIZE'] = '1K'
         os.environ['MNE_CACHE_DIR'] = tempdir
         # Fix error for #1507: in-place when memmapping
-        permutation_cluster_1samp_test(
-            X, buffer_size=None, n_jobs=2, n_permutations=1,
-            seed=0, stat_fun=ttest_1samp_no_p, verbose=False)
-        # ensure that non-independence yields warning
-        stat_fun = partial(ttest_1samp_no_p, sigma=1e-3)
-        set_log_file(log_file)
-        permutation_cluster_1samp_test(
-            X, buffer_size=10, n_jobs=2, n_permutations=1,
-            seed=0, stat_fun=stat_fun, verbose=False)
-        with open(log_file, 'r') as fid:
-            assert_true('independently' in ''.join(fid.readlines()))
+        with catch_logging() as log_file:
+            permutation_cluster_1samp_test(
+                X, buffer_size=None, n_jobs=2, n_permutations=1,
+                seed=0, stat_fun=ttest_1samp_no_p, verbose=False)
+            # ensure that non-independence yields warning
+            stat_fun = partial(ttest_1samp_no_p, sigma=1e-3)
+            assert_true('independently' not in log_file.getvalue())
+            permutation_cluster_1samp_test(
+                X, buffer_size=10, n_jobs=2, n_permutations=1,
+                seed=0, stat_fun=stat_fun, verbose=False)
+            assert_true('independently' in log_file.getvalue())
     finally:
         if orig_dir is not None:
             os.environ['MNE_CACHE_DIR'] = orig_dir
@@ -77,7 +75,6 @@ def test_cache_dir():
             os.environ['MNE_MEMMAP_MIN_SIZE'] = orig_size
         else:
             del os.environ['MNE_MEMMAP_MIN_SIZE']
-        set_log_file(None)
 
 
 def test_permutation_step_down_p():
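
The catch_logging context manager used above replaces the set_log_file
bookkeeping throughout these tests; a sketch of the pattern (assumes the
default INFO log level):

    from mne.utils import catch_logging, logger

    with catch_logging() as log:
        logger.info('hello')  # anything logged via mne.utils.logger ...
    print(log.getvalue())     # ... is captured for inspection
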
diff --git a/mne/stats/tests/test_parametric.py b/mne/stats/tests/test_parametric.py
index 57f184d..37110c7 100644
--- a/mne/stats/tests/test_parametric.py
+++ b/mne/stats/tests/test_parametric.py
@@ -58,6 +58,7 @@ def test_map_effects():
 
 def test_f_twoway_rm():
     """ Test 2-way anova """
+    rng = np.random.RandomState(42)
     iter_params = product([4, 10], [2, 15], [4, 6, 8],
                           ['A', 'B', 'A:B'],
                           [False, True])
@@ -68,7 +69,7 @@ def test_f_twoway_rm():
     }
     for params in iter_params:
         n_subj, n_obs, n_levels, effects, correction = params
-        data = np.random.random([n_subj, n_levels, n_obs])
+        data = rng.random_sample([n_subj, n_levels, n_obs])
         fvals, pvals = f_mway_rm(data, _effects[n_levels], effects,
                                  correction=correction)
         assert_true((fvals >= 0).all())
@@ -83,10 +84,10 @@ def test_f_twoway_rm():
         assert_true((fvals_ >= 0).all())
         assert_true(fvals_.size == n_effects)
 
-    data = np.random.random([n_subj, n_levels, 1])
+    data = rng.random_sample([n_subj, n_levels, 1])
     assert_raises(ValueError, f_mway_rm, data, _effects[n_levels],
                   effects='C', correction=correction)
-    data = np.random.random([n_subj, n_levels, n_obs, 3])
+    data = rng.random_sample([n_subj, n_levels, n_obs, 3])
     # check for dimension handling
     f_mway_rm(data, _effects[n_levels], effects, correction=correction)
 
diff --git a/mne/stats/tests/test_regression.py b/mne/stats/tests/test_regression.py
index 0dccf0f..95a2a33 100644
--- a/mne/stats/tests/test_regression.py
+++ b/mne/stats/tests/test_regression.py
@@ -20,6 +20,8 @@ from mne.datasets import testing
 from mne.stats.regression import linear_regression, linear_regression_raw
 from mne.io import RawArray
 
+warnings.simplefilter('always')
+
 data_path = testing.data_path(download=False)
 stc_fname = op.join(data_path, 'MEG', 'sample',
                     'sample_audvis_trunc-meg-lh.stc')
@@ -47,6 +49,7 @@ def test_regression():
     # creates contrast: aud_l=0, aud_r=1
     design_matrix[:, 1] -= 1
     with warnings.catch_warnings(record=True) as w:
+        warnings.simplefilter('always')
         lm = linear_regression(epochs, design_matrix, ['intercept', 'aud'])
         assert_true(w[0].category == UserWarning)
         assert_true('non-data' in '%s' % w[0].message)
@@ -62,8 +65,20 @@ def test_regression():
     stc_list = [stc, stc, stc]
     stc_gen = (s for s in stc_list)
     with warnings.catch_warnings(record=True):  # divide by zero
+        warnings.simplefilter('always')
         lm1 = linear_regression(stc_list, design_matrix[:len(stc_list)])
     lm2 = linear_regression(stc_gen, design_matrix[:len(stc_list)])
+    for val in lm2.values():
+        # all p values are 0 < p <= 1 to start, but get stored in float32
+        # data, so can actually be truncated to 0. Thus the mlog10_p_val
+        # actually maintains better precision for tiny p-values.
+        assert_true(np.isfinite(val.p_val.data).all())
+        assert_true((val.p_val.data <= 1).all())
+        assert_true((val.p_val.data >= 0).all())
+        # all -log10(p) are non-negative
+        assert_true(np.isfinite(val.mlog10_p_val.data).all())
+        assert_true((val.mlog10_p_val.data >= 0).all())
 
     for k in lm1:
         for v1, v2 in zip(lm1[k], lm2[k]):
diff --git a/mne/tests/__init__.py b/mne/tests/__init__.py
index e69de29..e4193cf 100644
--- a/mne/tests/__init__.py
+++ b/mne/tests/__init__.py
@@ -0,0 +1 @@
+from . import common
diff --git a/mne/tests/common.py b/mne/tests/common.py
new file mode 100644
index 0000000..0e9ffc6
--- /dev/null
+++ b/mne/tests/common.py
@@ -0,0 +1,74 @@
+# Authors: Eric Larson <larson.eric.d@gmail.com>
+#
+# License: BSD (3-clause)
+
+import numpy as np
+from numpy.testing import assert_allclose, assert_equal
+
+from .. import pick_types, Evoked
+from ..io import _BaseRaw
+from ..io.constants import FIFF
+from ..bem import fit_sphere_to_headshape
+
+
+def _get_data(x, ch_idx):
+    """Helper to get the (n_ch, n_times) data array"""
+    if isinstance(x, _BaseRaw):
+        return x[ch_idx][0]
+    elif isinstance(x, Evoked):
+        return x.data[ch_idx]
+
+
+def assert_meg_snr(actual, desired, min_tol, med_tol=500., msg=None):
+    """Helper to assert channel SNR of a certain level
+
+    Mostly useful for operations like Maxwell filtering that modify
+    MEG channels while leaving EEG and others intact.
+    """
+    from nose.tools import assert_true
+    picks = pick_types(desired.info, meg=True, exclude=[])
+    others = np.setdiff1d(np.arange(len(actual.ch_names)), picks)
+    if len(others) > 0:  # if non-MEG channels present
+        assert_allclose(_get_data(actual, others),
+                        _get_data(desired, others), atol=1e-11, rtol=1e-5,
+                        err_msg='non-MEG channel mismatch')
+    actual_data = _get_data(actual, picks)
+    desired_data = _get_data(desired, picks)
+    bench_rms = np.sqrt(np.mean(desired_data * desired_data, axis=1))
+    error = actual_data - desired_data
+    error_rms = np.sqrt(np.mean(error * error, axis=1))
+    snrs = bench_rms / error_rms
+    # min tol
+    snr = snrs.min()
+    bad_count = (snrs < min_tol).sum()
+    msg = '' if not msg else ' (%s)' % msg  # avoid appending ' (None)'
+    assert_true(bad_count == 0, 'SNR (worst %0.2f) < %0.2f for %s/%s '
+                'channels%s' % (snr, min_tol, bad_count, len(picks), msg))
+    # median tol
+    snr = np.median(snrs)
+    assert_true(snr >= med_tol, 'SNR median %0.2f < %0.2f%s'
+                % (snr, med_tol, msg))
+
+
+def _dig_sort_key(dig):
+    """Helper for sorting"""
+    return 10000 * dig['kind'] + dig['ident']
+
+
+def assert_dig_allclose(info_py, info_bin):
+    # test dig positions
+    dig_py = sorted(info_py['dig'], key=_dig_sort_key)
+    dig_bin = sorted(info_bin['dig'], key=_dig_sort_key)
+    assert_equal(len(dig_py), len(dig_bin))
+    for ii, (d_py, d_bin) in enumerate(zip(dig_py, dig_bin)):
+        for key in ('ident', 'kind', 'coord_frame'):
+            assert_equal(d_py[key], d_bin[key])
+        assert_allclose(d_py['r'], d_bin['r'], rtol=1e-5, atol=1e-5,
+                        err_msg='Failure on %s:\n%s\n%s'
+                        % (ii, d_py['r'], d_bin['r']))
+    if any(d['kind'] == FIFF.FIFFV_POINT_EXTRA for d in dig_py):
+        R_bin, o_head_bin, o_dev_bin = fit_sphere_to_headshape(info_bin)
+        R_py, o_head_py, o_dev_py = fit_sphere_to_headshape(info_py)
+        assert_allclose(R_py, R_bin)
+        assert_allclose(o_dev_py, o_dev_bin, rtol=1e-5, atol=1e-3)  # mm
+        assert_allclose(o_head_py, o_head_bin, rtol=1e-5, atol=1e-3)  # mm
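
For reference, the per-channel SNR that assert_meg_snr computes above,
demonstrated on toy arrays:

    import numpy as np

    rng = np.random.RandomState(0)
    desired = rng.randn(5, 1000)                  # benchmark channel data
    actual = desired + 1e-3 * rng.randn(5, 1000)  # same data plus small error
    bench_rms = np.sqrt(np.mean(desired ** 2, axis=1))
    error_rms = np.sqrt(np.mean((actual - desired) ** 2, axis=1))
    snrs = bench_rms / error_rms                  # checked against min_tol / med_tol
    print(snrs.min(), np.median(snrs))
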
diff --git a/mne/tests/test_bem.py b/mne/tests/test_bem.py
index dee1b83..c4ff661 100644
--- a/mne/tests/test_bem.py
+++ b/mne/tests/test_bem.py
@@ -3,6 +3,8 @@
 # License: BSD 3 clause
 
 import os.path as op
+from copy import deepcopy
+
 import numpy as np
 from nose.tools import assert_raises, assert_true
 from numpy.testing import assert_equal, assert_allclose
@@ -14,7 +16,7 @@ from mne.preprocessing.maxfilter import fit_sphere_to_headshape
 from mne.io.constants import FIFF
 from mne.transforms import translation
 from mne.datasets import testing
-from mne.utils import run_tests_if_main, _TempDir, slow_test
+from mne.utils import run_tests_if_main, _TempDir, slow_test, catch_logging
 from mne.bem import (_ico_downsample, _get_ico_map, _order_surfaces,
                      _assert_complete_surface, _assert_inside,
                      _check_surface_size, _bem_find_surface)
@@ -171,6 +173,7 @@ def test_fit_sphere_to_headshape():
     """Test fitting a sphere to digitization points"""
     # Create points of various kinds
     rad = 90.  # mm
+    big_rad = 120.
     center = np.array([0.5, -10., 40.])  # mm
     dev_trans = np.array([0., -0.005, -10.])
     dev_center = center - dev_trans
@@ -244,7 +247,8 @@ def test_fit_sphere_to_headshape():
 
     # Test with all points
     dig_kinds = (FIFF.FIFFV_POINT_CARDINAL, FIFF.FIFFV_POINT_EXTRA,
-                 FIFF.FIFFV_POINT_EXTRA)
+                 FIFF.FIFFV_POINT_EEG)
+    kwargs = dict(rtol=1e-3, atol=1.)  # in mm
     r, oh, od = fit_sphere_to_headshape(info, dig_kinds=dig_kinds)
     assert_allclose(r, rad, **kwargs)
     assert_allclose(oh, center, **kwargs)
@@ -258,7 +262,38 @@ def test_fit_sphere_to_headshape():
     assert_allclose(oh, center, **kwargs)
     assert_allclose(od, center, **kwargs)
 
-    dig = [dict(coord_frame=FIFF.FIFFV_COORD_DEVICE, )]
+    # Test big size
+    dig_kinds = (FIFF.FIFFV_POINT_CARDINAL, FIFF.FIFFV_POINT_EXTRA)
+    info_big = deepcopy(info)
+    for d in info_big['dig']:
+        d['r'] -= center / 1000.
+        d['r'] *= big_rad / rad
+        d['r'] += center / 1000.
+    with catch_logging() as log_file:
+        r, oh, od = fit_sphere_to_headshape(info_big, dig_kinds=dig_kinds,
+                                            verbose='warning')
+    log_file = log_file.getvalue().strip()
+    assert_equal(len(log_file.split('\n')), 1)
+    assert_true(log_file.startswith('Estimated head size'))
+    assert_allclose(oh, center, atol=1e-3)
+    assert_allclose(r, big_rad, atol=1e-3)
+    del info_big
+
+    # Test offcenter
+    dig_kinds = (FIFF.FIFFV_POINT_CARDINAL, FIFF.FIFFV_POINT_EXTRA)
+    info_shift = deepcopy(info)
+    shift_center = np.array([0., -30, 0.])
+    for d in info_shift['dig']:
+        d['r'] -= center / 1000.
+        d['r'] += shift_center / 1000.
+    with catch_logging() as log_file:
+        r, oh, od = fit_sphere_to_headshape(info_shift, dig_kinds=dig_kinds,
+                                            verbose='warning')
+    log_file = log_file.getvalue().strip()
+    assert_equal(len(log_file.split('\n')), 1)
+    assert_true('from head frame origin' in log_file)
+    assert_allclose(oh, shift_center, atol=1e-3)
+    assert_allclose(r, rad, atol=1e-3)
 
 
 run_tests_if_main()
diff --git a/mne/tests/test_chpi.py b/mne/tests/test_chpi.py
index 8d837bf..ec73e13 100644
--- a/mne/tests/test_chpi.py
+++ b/mne/tests/test_chpi.py
@@ -12,7 +12,7 @@ from mne.io import read_info, Raw
 from mne.io.constants import FIFF
 from mne.chpi import (_rot_to_quat, _quat_to_rot, get_chpi_positions,
                       _calculate_chpi_positions, _angle_between_quats)
-from mne.utils import (run_tests_if_main, _TempDir, slow_test, set_log_file,
+from mne.utils import (run_tests_if_main, _TempDir, slow_test, catch_logging,
                        requires_version)
 from mne.datasets import testing
 
@@ -36,6 +36,22 @@ def test_quaternions():
     rots = [np.eye(3)]
     for fname in [test_fif_fname, ctf_fname, hp_fif_fname]:
         rots += [read_info(fname)['dev_head_t']['trans'][:3, :3]]
+    # nasty numerical cases
+    rots += [np.array([
+        [-0.99978541, -0.01873462, -0.00898756],
+        [-0.01873462, 0.62565561, 0.77987608],
+        [-0.00898756, 0.77987608, -0.62587152],
+    ])]
+    rots += [np.array([
+        [0.62565561, -0.01873462, 0.77987608],
+        [-0.01873462, -0.99978541, -0.00898756],
+        [0.77987608, -0.00898756, -0.62587152],
+    ])]
+    rots += [np.array([
+        [-0.99978541, -0.00898756, -0.01873462],
+        [-0.00898756, -0.62587152, 0.77987608],
+        [-0.01873462, 0.77987608, 0.62565561],
+    ])]
     for rot in rots:
         assert_allclose(rot, _quat_to_rot(_rot_to_quat(rot)),
                         rtol=1e-5, atol=1e-5)
@@ -57,7 +73,8 @@ def test_quaternions():
 def test_get_chpi():
     """Test CHPI position computation
     """
-    trans0, rot0 = get_chpi_positions(hp_fname)[:2]
+    trans0, rot0, _, quat0 = get_chpi_positions(hp_fname, return_quat=True)
+    assert_allclose(rot0[0], _quat_to_rot(quat0[0]))
     trans0, rot0 = trans0[:-1], rot0[:-1]
     raw = Raw(hp_fif_fname)
     out = get_chpi_positions(raw)
@@ -154,15 +171,9 @@ def test_calculate_chpi_positions():
         if d['kind'] == FIFF.FIFFV_POINT_HPI:
             d['r'] = np.ones(3)
     raw_bad.crop(0, 1., copy=False)
-    tempdir = _TempDir()
-    log_file = op.join(tempdir, 'temp_log.txt')
-    set_log_file(log_file, overwrite=True)
-    try:
+    with catch_logging() as log_file:
         _calculate_chpi_positions(raw_bad)
-    finally:
-        set_log_file()
-    with open(log_file, 'r') as fid:
-        for line in fid:
-            assert_true('0/5 acceptable' in line)
+    for line in log_file.getvalue().split('\n')[:-1]:
+        assert_true('0/5 acceptable' in line)
 
 run_tests_if_main()
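
With the return_quat flag added above, get_chpi_positions can also hand back
the raw quaternions alongside the rotation matrices. A sketch of the
invariant the test checks (fname_pos is a hypothetical head-position file;
_quat_to_rot is a private helper):

    import numpy as np
    from mne.chpi import get_chpi_positions, _quat_to_rot

    trans, rot, t, quat = get_chpi_positions(fname_pos, return_quat=True)
    # each returned quaternion maps back onto its rotation matrix
    assert np.allclose(rot[0], _quat_to_rot(quat[0]))
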
diff --git a/mne/tests/test_coreg.py b/mne/tests/test_coreg.py
index 0735f8e..ab4bf83 100644
--- a/mne/tests/test_coreg.py
+++ b/mne/tests/test_coreg.py
@@ -73,7 +73,7 @@ def test_scale_mri():
 
 def test_fit_matched_points():
     """Test fit_matched_points: fitting two matching sets of points"""
-    tgt_pts = np.random.uniform(size=(6, 3))
+    tgt_pts = np.random.RandomState(42).uniform(size=(6, 3))
 
     # rotation only
     trans = rotation(2, 6, 3)
diff --git a/mne/tests/test_dipole.py b/mne/tests/test_dipole.py
index 4819578..f56655d 100644
--- a/mne/tests/test_dipole.py
+++ b/mne/tests/test_dipole.py
@@ -20,7 +20,7 @@ from mne.io import Raw
 
 from mne.surface import _compute_nearest
 from mne.bem import _bem_find_surface, read_bem_solution
-from mne.transforms import (read_trans, apply_trans, _get_mri_head_t)
+from mne.transforms import apply_trans, _get_trans
 
 warnings.simplefilter('always')
 data_path = testing.data_path(download=False)
@@ -89,9 +89,8 @@ def test_dipole_fitting():
                 for s in fwd['src']]
     nv = sum(len(v) for v in vertices)
     stc = SourceEstimate(amp * np.eye(nv), vertices, 0, 0.001)
-    with warnings.catch_warnings(record=True):  # semi-def cov
-        evoked = simulate_evoked(fwd, stc, evoked, cov, snr=20,
-                                 random_state=rng)
+    evoked = simulate_evoked(fwd, stc, evoked.info, cov, snr=20,
+                             random_state=rng)
     # For speed, let's use a subset of channels (strange but works)
     picks = np.sort(np.concatenate([
         pick_types(evoked.info, meg=True, eeg=False)[::2],
@@ -202,8 +201,7 @@ def test_min_distance_fit_dipole():
 
 def _compute_depth(dip, fname_bem, fname_trans, subject, subjects_dir):
     """Compute dipole depth"""
-    trans = read_trans(fname_trans)
-    trans = _get_mri_head_t(trans)[0]
+    trans = _get_trans(fname_trans)[0]
     bem = read_bem_solution(fname_bem)
     surf = _bem_find_surface(bem, 'inner_skull')
     points = surf['rr']
diff --git a/mne/tests/test_docstring_parameters.py b/mne/tests/test_docstring_parameters.py
index c4d9548..0216ecd 100644
--- a/mne/tests/test_docstring_parameters.py
+++ b/mne/tests/test_docstring_parameters.py
@@ -66,7 +66,6 @@ def get_name(func):
 _docstring_ignores = [
     'mne.io.write',  # always ignore these
     'mne.fixes._in1d',  # fix function
-    'mne.gui.coregistration',  # deprecated single argument w/None
 ]
 
 _tab_ignores = [
diff --git a/mne/tests/test_epochs.py b/mne/tests/test_epochs.py
index 34e76aa..d75adf0 100644
--- a/mne/tests/test_epochs.py
+++ b/mne/tests/test_epochs.py
@@ -17,27 +17,37 @@ import warnings
 from scipy import fftpack
 import matplotlib
 
-from mne import (io, Epochs, read_events, pick_events, read_epochs,
+from mne import (Epochs, read_events, pick_events, read_epochs,
                  equalize_channels, pick_types, pick_channels, read_evokeds,
-                 write_evokeds)
+                 write_evokeds, create_info, make_fixed_length_events,
+                 get_chpi_positions)
+from mne.preprocessing import maxwell_filter
 from mne.epochs import (
     bootstrap, equalize_epoch_counts, combine_event_ids, add_channels_epochs,
-    EpochsArray, concatenate_epochs, _BaseEpochs)
+    EpochsArray, concatenate_epochs, _BaseEpochs, average_movements)
 from mne.utils import (_TempDir, requires_pandas, slow_test,
                        clean_warning_registry, run_tests_if_main,
                        requires_version)
 
-from mne.io.meas_info import create_info
+from mne.io import RawArray, Raw
 from mne.io.proj import _has_eeg_average_ref_proj
 from mne.event import merge_events
 from mne.io.constants import FIFF
 from mne.externals.six import text_type
 from mne.externals.six.moves import zip, cPickle as pickle
+from mne.datasets import testing
+from mne.tests.common import assert_meg_snr
 
 matplotlib.use('Agg')  # for testing don't use X server
 
 warnings.simplefilter('always')  # enable b/c these tests throw warnings
 
+data_path = testing.data_path(download=False)
+fname_raw_move = op.join(data_path, 'SSS', 'test_move_anon_raw.fif')
+fname_raw_movecomp_sss = op.join(
+    data_path, 'SSS', 'test_move_anon_movecomp_raw_sss.fif')
+fname_raw_move_pos = op.join(data_path, 'SSS', 'test_move_anon_raw.pos')
+
 base_dir = op.join(op.dirname(__file__), '..', 'io', 'tests', 'data')
 raw_fname = op.join(base_dir, 'test_raw.fif')
 event_name = op.join(base_dir, 'test-eve.fif')
@@ -45,10 +55,11 @@ evoked_nf_name = op.join(base_dir, 'test-nf-ave.fif')
 
 event_id, tmin, tmax = 1, -0.2, 0.5
 event_id_2 = 2
+rng = np.random.RandomState(42)
 
 
-def _get_data():
-    raw = io.Raw(raw_fname, add_eeg_ref=False, proj=False)
+def _get_data(preload=False):
+    raw = Raw(raw_fname, preload=preload, add_eeg_ref=False, proj=False)
     events = read_events(event_name)
     picks = pick_types(raw.info, meg=True, eeg=True, stim=True,
                        ecg=True, eog=True, include=['STI 014'],
@@ -61,6 +72,88 @@ flat = dict(grad=1e-15, mag=1e-15)
 clean_warning_registry()  # really clean warning stack
 
 
+@slow_test
+@testing.requires_testing_data
+def test_average_movements():
+    """Test movement averaging algorithm
+    """
+    # usable data
+    crop = 0., 10.
+    origin = (0., 0., 0.04)
+    with warnings.catch_warnings(record=True):  # MaxShield
+        raw = Raw(fname_raw_move, allow_maxshield=True)
+    raw.info['bads'] += ['MEG2443']  # mark some bad MEG channel
+    raw.crop(*crop, copy=False).load_data()
+    raw.filter(None, 20, method='iir')
+    events = make_fixed_length_events(raw, event_id)
+    picks = pick_types(raw.info, meg=True, eeg=True, stim=True,
+                       ecg=True, eog=True, exclude=())
+    epochs = Epochs(raw, events, event_id, tmin, tmax, picks=picks, proj=False,
+                    preload=True)
+    epochs_proj = Epochs(raw, events[:1], event_id, tmin, tmax, picks=picks,
+                         proj=True, preload=True)
+    raw_sss_stat = maxwell_filter(raw, origin=origin, regularize=None,
+                                  bad_condition='ignore')
+    del raw
+    epochs_sss_stat = Epochs(raw_sss_stat, events, event_id, tmin, tmax,
+                             picks=picks, proj=False)
+    evoked_sss_stat = epochs_sss_stat.average()
+    del raw_sss_stat, epochs_sss_stat
+    pos = get_chpi_positions(fname_raw_move_pos)
+    ts = pos[2]
+    trans = epochs.info['dev_head_t']['trans']
+    pos_stat = (np.array([trans[:3, 3]]),
+                np.array([trans[:3, :3]]),
+                np.array([0.]))
+
+    # SSS-based
+    evoked_move_non = average_movements(epochs, pos=pos, weight_all=False,
+                                        origin=origin)
+    evoked_move_all = average_movements(epochs, pos=pos, weight_all=True,
+                                        origin=origin)
+    evoked_stat_all = average_movements(epochs, pos=pos_stat, weight_all=True,
+                                        origin=origin)
+    evoked_std = epochs.average()
+    for ev in (evoked_move_non, evoked_move_all, evoked_stat_all):
+        assert_equal(ev.nave, evoked_std.nave)
+        assert_equal(len(ev.info['bads']), 0)
+    # substantial changes to MEG data
+    for ev in (evoked_move_non, evoked_stat_all):
+        assert_meg_snr(ev, evoked_std, 0., 0.1)
+        assert_raises(AssertionError, assert_meg_snr,
+                      ev, evoked_std, 1., 1.)
+    meg_picks = pick_types(evoked_std.info, meg=True, exclude=())
+    assert_allclose(evoked_move_non.data[meg_picks],
+                    evoked_move_all.data[meg_picks])
+    # compare to averaged movecomp version (should be fairly similar)
+    raw_sss = Raw(fname_raw_movecomp_sss)
+    raw_sss.crop(*crop, copy=False).load_data()
+    raw_sss.filter(None, 20, method='iir')
+    picks_sss = pick_types(raw_sss.info, meg=True, eeg=True, stim=True,
+                           ecg=True, eog=True, exclude=())
+    assert_array_equal(picks, picks_sss)
+    epochs_sss = Epochs(raw_sss, events, event_id, tmin, tmax,
+                        picks=picks_sss, proj=False)
+    evoked_sss = epochs_sss.average()
+    assert_equal(evoked_std.nave, evoked_sss.nave)
+    # this should break the non-MEG channels
+    assert_raises(AssertionError, assert_meg_snr,
+                  evoked_sss, evoked_move_all, 0., 0.)
+    assert_meg_snr(evoked_sss, evoked_move_non, 0.02, 2.6)
+    assert_meg_snr(evoked_sss, evoked_stat_all, 0.05, 3.2)
+    # these should be close to numerical precision
+    assert_allclose(evoked_sss_stat.data, evoked_stat_all.data, atol=1e-20)
+
+    # degenerate cases
+    ts += 10.
+    assert_raises(RuntimeError, average_movements, epochs, pos=pos)  # bad pos
+    ts -= 10.
+    assert_raises(TypeError, average_movements, 'foo', pos=pos)
+    assert_raises(RuntimeError, average_movements, epochs_proj, pos=pos)  # prj
+    epochs.info['comps'].append([0])
+    assert_raises(RuntimeError, average_movements, epochs, pos=pos)
+
+
 def test_reject():
     """Test epochs rejection
     """
@@ -75,6 +168,11 @@ def test_reject():
                   picks=picks, preload=False, reject='foo')
     assert_raises(ValueError, Epochs, raw, events, event_id, tmin, tmax,
                   picks=picks_meg, preload=False, reject=dict(eeg=1.))
+    for val in (None, -1):  # protect against older MNE-C types
+        for kwarg in ('reject', 'flat'):
+            assert_raises(ValueError, Epochs, raw, events, event_id,
+                          tmin, tmax, picks=picks_meg, preload=False,
+                          **{kwarg: dict(grad=val)})
     assert_raises(KeyError, Epochs, raw, events, event_id, tmin, tmax,
                   picks=picks, preload=False, reject=dict(foo=1.))
 
@@ -156,7 +254,7 @@ def test_decim():
     decim = dec_1 * dec_2
     sfreq = 1000.
     sfreq_new = sfreq / decim
-    data = np.random.randn(n_epochs, n_channels, n_times)
+    data = rng.randn(n_epochs, n_channels, n_times)
     events = np.array([np.arange(n_epochs), [0] * n_epochs, [1] * n_epochs]).T
     info = create_info(n_channels, sfreq, 'eeg')
     info['lowpass'] = sfreq_new / float(decim)
@@ -294,7 +392,7 @@ def test_event_ordering():
     """Test event order"""
     raw, events = _get_data()[:2]
     events2 = events.copy()
-    np.random.shuffle(events2)
+    rng.shuffle(events2)
     for ii, eve in enumerate([events, events2]):
         with warnings.catch_warnings(record=True) as w:
             warnings.simplefilter('always')
@@ -408,16 +506,13 @@ def test_read_write_epochs():
         # decim with lowpass
         warnings.simplefilter('always')
         epochs_dec = Epochs(raw, events, event_id, tmin, tmax, picks=picks,
-                            baseline=(None, 0), decim=4)
+                            baseline=(None, 0), decim=2)
         assert_equal(len(w), 1)
 
         # decim without lowpass
-        lowpass = raw.info['lowpass']
-        raw.info['lowpass'] = None
-        epochs_dec = Epochs(raw, events, event_id, tmin, tmax, picks=picks,
-                            baseline=(None, 0), decim=4)
+        epochs_dec.info['lowpass'] = None
+        epochs_dec.decimate(2)
         assert_equal(len(w), 2)
-        raw.info['lowpass'] = lowpass
 
     data_dec = epochs_dec.get_data()
     assert_allclose(data[:, :, epochs_dec._decim_slice], data_dec, rtol=1e-7,
@@ -425,7 +520,7 @@ def test_read_write_epochs():
 
     evoked_dec = epochs_dec.average()
     assert_allclose(evoked.data[:, epochs_dec._decim_slice],
-                    evoked_dec.data, rtol=1e-12)
+                    evoked_dec.data, rtol=1e-12, atol=1e-17)
 
     n = evoked.data.shape[1]
     n_dec = evoked_dec.data.shape[1]
@@ -447,8 +542,11 @@ def test_read_write_epochs():
         epochs = Epochs(raw, events, event_ids, tmin, tmax, picks=picks,
                         baseline=(None, 0), proj=proj, reject=reject,
                         add_eeg_ref=True)
+        assert_equal(epochs.proj, proj if proj != 'delayed' else False)
         data1 = epochs.get_data()
-        data2 = epochs.apply_proj().get_data()
+        epochs2 = epochs.copy().apply_proj()
+        assert_equal(epochs2.proj, True)
+        data2 = epochs2.get_data()
         assert_allclose(data1, data2, **tols)
         epochs.save(temp_fname)
         epochs_read = read_epochs(temp_fname, preload=False)
@@ -585,7 +683,7 @@ def test_epochs_proj():
     assert_true(all(p['active'] is True for p in evoked.info['projs']))
     data = epochs.get_data()
 
-    raw_proj = io.Raw(raw_fname, proj=True)
+    raw_proj = Raw(raw_fname, proj=True)
     epochs_no_proj = Epochs(raw_proj, events[:4], event_id, tmin, tmax,
                             picks=this_picks, baseline=(None, 0), proj=False)
 
@@ -629,6 +727,32 @@ def test_epochs_proj():
     data_2 = epochs_read.get_data()  # Let's check the result
     assert_allclose(data, data_2, atol=1e-15, rtol=1e-3)
 
+    # adding EEG ref (GH #2727)
+    raw = Raw(raw_fname)
+    raw.add_proj([], remove_existing=True)
+    raw.info['bads'] = ['MEG 2443', 'EEG 053']
+    picks = pick_types(raw.info, meg=False, eeg=True, stim=True, eog=False,
+                       exclude='bads')
+    epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
+                    baseline=(None, 0), preload=True, add_eeg_ref=False)
+    epochs.pick_channels(['EEG 001', 'EEG 002'])
+    assert_equal(len(epochs), 7)  # sufficient for testing
+    temp_fname = 'test.fif'
+    epochs.save(temp_fname)
+    for preload in (True, False):
+        epochs = read_epochs(temp_fname, add_eeg_ref=True, proj=True,
+                             preload=preload)
+        assert_allclose(epochs.get_data().mean(axis=1), 0, atol=1e-15)
+        epochs = read_epochs(temp_fname, add_eeg_ref=True, proj=False,
+                             preload=preload)
+        assert_raises(AssertionError, assert_allclose,
+                      epochs.get_data().mean(axis=1), 0., atol=1e-15)
+        epochs.add_eeg_average_proj()
+        assert_raises(AssertionError, assert_allclose,
+                      epochs.get_data().mean(axis=1), 0., atol=1e-15)
+        epochs.apply_proj()
+        assert_allclose(epochs.get_data().mean(axis=1), 0, atol=1e-15)
+
 
 def test_evoked_arithmetic():
     """Test arithmetic of evoked data
@@ -842,7 +966,7 @@ def test_indexing_slicing():
         assert_array_equal(data, data_normal[[idx]])
 
         # using indexing with an array
-        idx = np.random.randint(0, data_epochs2_sliced.shape[0], 10)
+        idx = rng.randint(0, data_epochs2_sliced.shape[0], 10)
         data = epochs2[idx].get_data()
         assert_array_equal(data, data_normal[idx])
 
@@ -961,6 +1085,21 @@ def test_resample():
     epochs_resampled = epochs.resample(sfreq_normal * 2, npad=0, copy=False)
     assert_true(epochs_resampled is epochs)
 
+    # test proper setting of times (#2645)
+    n_trial, n_chan, n_time, sfreq = 1, 1, 10, 1000
+    data = np.zeros((n_trial, n_chan, n_time))
+    events = np.zeros((n_trial, 3), int)
+    info = create_info(n_chan, sfreq, 'eeg')
+    epochs1 = EpochsArray(data, deepcopy(info), events)
+    epochs2 = EpochsArray(data, deepcopy(info), events)
+    epochs = concatenate_epochs([epochs1, epochs2])
+    epochs1.resample(epochs1.info['sfreq'] // 2)
+    epochs2.resample(epochs2.info['sfreq'] // 2)
+    epochs = concatenate_epochs([epochs1, epochs2])
+    for e in epochs1, epochs2, epochs:
+        assert_equal(e.times[0], epochs.tmin)
+        assert_equal(e.times[-1], epochs.tmax)
+
 
 def test_detrend():
     """Test detrending of epochs
@@ -1435,18 +1574,31 @@ def test_drop_epochs_mult():
 
 def test_contains():
     """Test membership API"""
-    raw, events = _get_data()[:2]
-
-    tests = [(('mag', False), ('grad', 'eeg')),
-             (('grad', False), ('mag', 'eeg')),
-             ((False, True), ('grad', 'mag'))]
-
-    for (meg, eeg), others in tests:
-        picks_contains = pick_types(raw.info, meg=meg, eeg=eeg)
+    raw, events = _get_data(True)[:2]
+    # Add seeg channel
+    seeg = RawArray(np.zeros((1, len(raw.times))),
+                    create_info(['SEEG 001'], raw.info['sfreq'], 'seeg'))
+    for key in ('dev_head_t', 'buffer_size_sec', 'highpass', 'lowpass',
+                'filename', 'dig', 'description', 'acq_pars', 'experimenter',
+                'proj_name'):
+        seeg.info[key] = raw.info[key]
+    raw.add_channels([seeg])
+    tests = [(('mag', False, False), ('grad', 'eeg', 'seeg')),
+             (('grad', False, False), ('mag', 'eeg', 'seeg')),
+             ((False, True, False), ('grad', 'mag', 'seeg')),
+             ((False, False, True), ('grad', 'mag', 'eeg'))]
+
+    for (meg, eeg, seeg), others in tests:
+        picks_contains = pick_types(raw.info, meg=meg, eeg=eeg, seeg=seeg)
         epochs = Epochs(raw, events, {'a': 1, 'b': 2}, tmin, tmax,
                         picks=picks_contains, reject=None,
                         preload=False)
-        test = 'eeg' if eeg is True else meg
+        if eeg:
+            test = 'eeg'
+        elif seeg:
+            test = 'seeg'
+        else:
+            test = meg
         assert_true(test in epochs)
         assert_true(not any(o in epochs for o in others))
 
@@ -1617,12 +1769,12 @@ def test_add_channels_epochs():
                   [epochs_meg2, epochs_eeg])
 
     epochs_meg2 = epochs_meg.copy()
-    epochs_meg2.tmin += 0.4
+    epochs_meg2.times += 0.4
     assert_raises(NotImplementedError, add_channels_epochs,
                   [epochs_meg2, epochs_eeg])
 
     epochs_meg2 = epochs_meg.copy()
-    epochs_meg2.tmin += 0.5
+    epochs_meg2.times += 0.5
     assert_raises(NotImplementedError, add_channels_epochs,
                   [epochs_meg2, epochs_eeg])
 
@@ -1644,7 +1796,6 @@ def test_array_epochs():
     tempdir = _TempDir()
 
     # creating
-    rng = np.random.RandomState(42)
     data = rng.random_sample((10, 20, 300))
     sfreq = 1e3
     ch_names = ['EEG %03d' % (i + 1) for i in range(20)]
@@ -1790,4 +1941,16 @@ def test_add_channels():
     assert_raises(AssertionError, epoch_meg.add_channels, epoch_badsf)
 
 
+def test_seeg():
+    """Test the compatibility of the Epoch object with SEEG data."""
+    n_epochs, n_channels, n_times, sfreq = 5, 10, 20, 1000.
+    data = np.ones((n_epochs, n_channels, n_times))
+    events = np.array([np.arange(n_epochs), [0] * n_epochs, [1] * n_epochs]).T
+    info = create_info(n_channels, sfreq, 'seeg')
+    epochs = EpochsArray(data, info, events)
+    picks = pick_types(epochs.info, meg=False, eeg=False, stim=False,
+                       eog=False, ecg=False, seeg=True, emg=False, exclude=[])
+    assert_equal(len(picks), n_channels)
+
+
 run_tests_if_main()
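
test_average_movements above exercises the new average_movements function,
which performs SSS-based averaging that compensates for head movement
between epochs. A sketch of the call pattern (file names are hypothetical;
note that the epochs must be created with proj=False, as the test enforces):

    from mne import Epochs, get_chpi_positions, make_fixed_length_events
    from mne.epochs import average_movements
    from mne.io import Raw

    raw = Raw(fname_raw, allow_maxshield=True, preload=True)
    events = make_fixed_length_events(raw, event_id=1)
    epochs = Epochs(raw, events, 1, -0.2, 0.5, proj=False, preload=True)
    pos = get_chpi_positions(fname_pos)  # continuous head positions
    evoked = average_movements(epochs, pos=pos, origin=(0., 0., 0.04))
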
diff --git a/mne/tests/test_evoked.py b/mne/tests/test_evoked.py
index 7918378..5639b75 100644
--- a/mne/tests/test_evoked.py
+++ b/mne/tests/test_evoked.py
@@ -16,13 +16,12 @@ from numpy.testing import (assert_array_almost_equal, assert_equal,
 from nose.tools import assert_true, assert_raises, assert_not_equal
 
 from mne import (equalize_channels, pick_types, read_evokeds, write_evokeds,
-                 grand_average, combine_evoked)
-from mne.evoked import _get_peak, EvokedArray
+                 grand_average, combine_evoked, create_info)
+from mne.evoked import _get_peak, Evoked, EvokedArray
 from mne.epochs import EpochsArray
 
 from mne.utils import _TempDir, requires_pandas, slow_test, requires_version
 
-from mne.io.meas_info import create_info
 from mne.externals.six.moves import cPickle as pickle
 
 warnings.simplefilter('always')
@@ -101,10 +100,9 @@ def test_io_evoked():
     assert_array_almost_equal(ave.data, ave3.data, 19)
 
     # test read_evokeds and write_evokeds
-    types = ['Left Auditory', 'Right Auditory', 'Left visual', 'Right visual']
-    aves1 = read_evokeds(fname)
-    aves2 = read_evokeds(fname, [0, 1, 2, 3])
-    aves3 = read_evokeds(fname, types)
+    aves1 = read_evokeds(fname)[1::2]
+    aves2 = read_evokeds(fname, [1, 3])
+    aves3 = read_evokeds(fname, ['Right Auditory', 'Right visual'])
     write_evokeds(op.join(tempdir, 'evoked-ave.fif'), aves1)
     aves4 = read_evokeds(op.join(tempdir, 'evoked-ave.fif'))
     for aves in [aves2, aves3, aves4]:
@@ -126,6 +124,9 @@ def test_io_evoked():
         read_evokeds(fname2)
     assert_true(len(w) == 2)
 
+    # constructor
+    assert_raises(TypeError, Evoked, fname)
+
 
 def test_shift_time_evoked():
     """ Test for shifting of time scale
@@ -266,13 +267,14 @@ def test_get_peak():
     assert_raises(RuntimeError, evoked.get_peak, ch_type=None, mode='foo')
     assert_raises(ValueError, evoked.get_peak, ch_type='misc', mode='foo')
 
-    ch_idx, time_idx = evoked.get_peak(ch_type='mag')
-    assert_true(ch_idx in evoked.ch_names)
+    ch_name, time_idx = evoked.get_peak(ch_type='mag')
+    assert_true(ch_name in evoked.ch_names)
     assert_true(time_idx in evoked.times)
 
-    ch_idx, time_idx = evoked.get_peak(ch_type='mag',
-                                       time_as_index=True)
+    ch_name, time_idx = evoked.get_peak(ch_type='mag',
+                                        time_as_index=True)
     assert_true(time_idx < len(evoked.times))
+    assert_equal(ch_name, 'MEG 1421')
 
     data = np.array([[0., 1.,  2.],
                      [0., -3.,  0]])
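
Note the ch_idx -> ch_name rename above: Evoked.get_peak returns the peak
channel's name plus either the peak latency in seconds or, with
time_as_index=True, the sample index. A short sketch (fname is a
hypothetical evoked file):

    from mne import read_evokeds

    evoked = read_evokeds(fname, condition=0)
    ch_name, latency = evoked.get_peak(ch_type='mag')
    ch_name, index = evoked.get_peak(ch_type='mag', time_as_index=True)
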
diff --git a/mne/tests/test_filter.py b/mne/tests/test_filter.py
index cd67ab9..216f017 100644
--- a/mne/tests/test_filter.py
+++ b/mne/tests/test_filter.py
@@ -2,7 +2,6 @@ import numpy as np
 from numpy.testing import (assert_array_almost_equal, assert_almost_equal,
                            assert_array_equal, assert_allclose)
 from nose.tools import assert_equal, assert_true, assert_raises
-import os.path as op
 import warnings
 from scipy.signal import resample as sp_resample
 
@@ -11,15 +10,14 @@ from mne.filter import (band_pass_filter, high_pass_filter, low_pass_filter,
                         construct_iir_filter, notch_filter, detrend,
                         _overlap_add_filter, _smart_pad)
 
-from mne import set_log_file
-from mne.utils import _TempDir, sum_squared, run_tests_if_main, slow_test
+from mne.utils import sum_squared, run_tests_if_main, slow_test, catch_logging
 
 warnings.simplefilter('always')  # enable b/c these tests throw warnings
+rng = np.random.RandomState(0)
 
 
 def test_1d_filter():
     """Test our private overlap-add filtering function"""
-    rng = np.random.RandomState(0)
     # make some random signals and filters
     for n_signal in (1, 2, 5, 10, 20, 40, 100, 200, 400, 1000, 2000):
         x = rng.randn(n_signal)
@@ -99,8 +97,6 @@ def test_iir_stability():
 def test_notch_filters():
     """Test notch filters
     """
-    tempdir = _TempDir()
-    log_file = op.join(tempdir, 'temp_log.txt')
     # let's use an ugly, prime sfreq for fun
     sfreq = 487.0
     sig_len_secs = 20
@@ -108,7 +104,6 @@ def test_notch_filters():
     freqs = np.arange(60, 241, 60)
 
     # make a "signal"
-    rng = np.random.RandomState(0)
     a = rng.randn(int(sig_len_secs * sfreq))
     orig_power = np.sqrt(np.mean(a ** 2))
     # make line noise
@@ -122,16 +117,12 @@ def test_notch_filters():
     line_freqs = [None, freqs, freqs, freqs, freqs]
     tols = [2, 1, 1, 1]
     for meth, lf, fl, tol in zip(methods, line_freqs, filter_lengths, tols):
-        if lf is None:
-            set_log_file(log_file, overwrite=True)
-
-        b = notch_filter(a, sfreq, lf, filter_length=fl, method=meth,
-                         verbose='INFO')
+        with catch_logging() as log_file:
+            b = notch_filter(a, sfreq, lf, filter_length=fl, method=meth,
+                             verbose='INFO')
 
         if lf is None:
-            set_log_file()
-            with open(log_file) as fid:
-                out = fid.readlines()
+            out = log_file.getvalue().split('\n')[:-1]
             if len(out) != 2 and len(out) != 3:  # force_serial: len(out) == 3
                 raise ValueError('Detected frequencies not logged properly')
             out = np.fromstring(out[-1], sep=', ')
@@ -142,7 +133,7 @@ def test_notch_filters():
 
 def test_resample():
     """Test resampling"""
-    x = np.random.normal(0, 1, (10, 10, 10))
+    x = rng.normal(0, 1, (10, 10, 10))
     x_rs = resample(x, 1, 2, 10)
     assert_equal(x.shape, (10, 10, 10))
     assert_equal(x_rs.shape, (10, 10, 5))
@@ -194,7 +185,7 @@ def test_filters():
     sfreq = 500
     sig_len_secs = 30
 
-    a = np.random.randn(2, sig_len_secs * sfreq)
+    a = rng.randn(2, sig_len_secs * sfreq)
 
     # let's test our catchers
     for fl in ['blah', [0, 1], 1000.5, '10ss', '10']:
@@ -281,7 +272,7 @@ def test_filters():
     assert_true(iir_params['b'].size - 1 == 4)
 
     # check that picks work for 3d array with one channel and picks=[0]
-    a = np.random.randn(5 * sfreq, 5 * sfreq)
+    a = rng.randn(5 * sfreq, 5 * sfreq)
     b = a[:, None, :]
 
     with warnings.catch_warnings(record=True) as w:
@@ -291,7 +282,7 @@ def test_filters():
     assert_array_equal(a_filt[:, None, :], b_filt)
 
     # check for n-dimensional case
-    a = np.random.randn(2, 2, 2, 2)
+    a = rng.randn(2, 2, 2, 2)
     assert_raises(ValueError, band_pass_filter, a, sfreq, Fp1=4, Fp2=8,
                   picks=np.array([0, 1]))
 
@@ -316,38 +307,34 @@ def test_cuda():
     # some warnings about clean-up failing
     # Also, using `n_jobs='cuda'` on a non-CUDA system should be fine,
     # as it should fall back to using n_jobs=1.
-    tempdir = _TempDir()
-    log_file = op.join(tempdir, 'temp_log.txt')
     sfreq = 500
     sig_len_secs = 20
-    a = np.random.randn(sig_len_secs * sfreq)
-
-    set_log_file(log_file, overwrite=True)
-    for fl in ['10s', None, 2048]:
-        bp = band_pass_filter(a, sfreq, 4, 8, n_jobs=1, filter_length=fl)
-        bs = band_stop_filter(a, sfreq, 4 - 0.5, 8 + 0.5, n_jobs=1,
-                              filter_length=fl)
-        lp = low_pass_filter(a, sfreq, 8, n_jobs=1, filter_length=fl)
-        hp = high_pass_filter(lp, sfreq, 4, n_jobs=1, filter_length=fl)
-
-        bp_c = band_pass_filter(a, sfreq, 4, 8, n_jobs='cuda',
-                                filter_length=fl, verbose='INFO')
-        bs_c = band_stop_filter(a, sfreq, 4 - 0.5, 8 + 0.5, n_jobs='cuda',
-                                filter_length=fl, verbose='INFO')
-        lp_c = low_pass_filter(a, sfreq, 8, n_jobs='cuda', filter_length=fl,
-                               verbose='INFO')
-        hp_c = high_pass_filter(lp, sfreq, 4, n_jobs='cuda', filter_length=fl,
-                                verbose='INFO')
-
-        assert_array_almost_equal(bp, bp_c, 12)
-        assert_array_almost_equal(bs, bs_c, 12)
-        assert_array_almost_equal(lp, lp_c, 12)
-        assert_array_almost_equal(hp, hp_c, 12)
+    a = rng.randn(sig_len_secs * sfreq)
+
+    with catch_logging() as log_file:
+        for fl in ['10s', None, 2048]:
+            bp = band_pass_filter(a, sfreq, 4, 8, n_jobs=1, filter_length=fl)
+            bs = band_stop_filter(a, sfreq, 4 - 0.5, 8 + 0.5, n_jobs=1,
+                                  filter_length=fl)
+            lp = low_pass_filter(a, sfreq, 8, n_jobs=1, filter_length=fl)
+            hp = high_pass_filter(lp, sfreq, 4, n_jobs=1, filter_length=fl)
+
+            bp_c = band_pass_filter(a, sfreq, 4, 8, n_jobs='cuda',
+                                    filter_length=fl, verbose='INFO')
+            bs_c = band_stop_filter(a, sfreq, 4 - 0.5, 8 + 0.5, n_jobs='cuda',
+                                    filter_length=fl, verbose='INFO')
+            lp_c = low_pass_filter(a, sfreq, 8, n_jobs='cuda',
+                                   filter_length=fl, verbose='INFO')
+            hp_c = high_pass_filter(lp, sfreq, 4, n_jobs='cuda',
+                                    filter_length=fl, verbose='INFO')
+
+            assert_array_almost_equal(bp, bp_c, 12)
+            assert_array_almost_equal(bs, bs_c, 12)
+            assert_array_almost_equal(lp, lp_c, 12)
+            assert_array_almost_equal(hp, hp_c, 12)
 
     # check to make sure we actually used CUDA
-    set_log_file()
-    with open(log_file) as fid:
-        out = fid.readlines()
+    out = log_file.getvalue().split('\n')[:-1]
     # triage based on whether or not we actually expected to use CUDA
     from mne.cuda import _cuda_capable  # allow above funs to set it
     tot = 12 if _cuda_capable else 0
@@ -355,7 +342,7 @@ def test_cuda():
                      for o in out]) == tot)
 
     # check resampling
-    a = np.random.RandomState(0).randn(3, sig_len_secs * sfreq)
+    a = rng.randn(3, sig_len_secs * sfreq)
     a1 = resample(a, 1, 2, n_jobs=2, npad=0)
     a2 = resample(a, 1, 2, n_jobs='cuda', npad=0)
     a3 = resample(a, 2, 1, n_jobs=2, npad=0)
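
As the comment in test_cuda notes, n_jobs='cuda' on a machine without CUDA
simply falls back to n_jobs=1, so CPU and 'cuda' results should agree to
high precision either way. A minimal sketch:

    import numpy as np
    from mne.filter import band_pass_filter

    rng = np.random.RandomState(0)
    a = rng.randn(20 * 500)  # 20 s of "signal" at 500 Hz
    bp_cpu = band_pass_filter(a, 500, 4, 8, n_jobs=1)
    bp_gpu = band_pass_filter(a, 500, 4, 8, n_jobs='cuda')
    np.testing.assert_array_almost_equal(bp_cpu, bp_gpu, 12)
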
diff --git a/mne/tests/test_fixes.py b/mne/tests/test_fixes.py
index eaa9fa3..bf647e5 100644
--- a/mne/tests/test_fixes.py
+++ b/mne/tests/test_fixes.py
@@ -18,6 +18,8 @@ from mne.fixes import (_in1d, _tril_indices, _copysign, _unravel_index,
 from mne.fixes import _firwin2 as mne_firwin2
 from mne.fixes import _filtfilt as mne_filtfilt
 
+rng = np.random.RandomState(0)
+
 
 def test_counter():
     """Test Counter replacement"""
@@ -42,7 +44,7 @@ def test_unique():
     # skip test for np version < 1.5
     if LooseVersion(np.__version__) < LooseVersion('1.5'):
         return
-    for arr in [np.array([]), np.random.rand(10), np.ones(10)]:
+    for arr in [np.array([]), rng.rand(10), np.ones(10)]:
         # basic
         assert_array_equal(np.unique(arr), _unique(arr))
         # with return_index=True
@@ -182,7 +184,7 @@ def test_meshgrid():
 def test_isclose():
     """Test isclose replacement
     """
-    a = np.random.RandomState(0).randn(10)
+    a = rng.randn(10)
     b = a.copy()
     assert_true(_isclose(a, b).all())
     a[0] = np.inf
diff --git a/mne/tests/test_label.py b/mne/tests/test_label.py
index 99a5c74..be9761e 100644
--- a/mne/tests/test_label.py
+++ b/mne/tests/test_label.py
@@ -180,7 +180,7 @@ def test_label_subject():
 def test_label_addition():
     """Test label addition
     """
-    pos = np.random.rand(10, 3)
+    pos = np.random.RandomState(0).rand(10, 3)
     values = np.arange(10.) / 10
     idx0 = list(range(7))
     idx1 = list(range(7, 10))  # non-overlapping
diff --git a/mne/tests/test_line_endings.py b/mne/tests/test_line_endings.py
new file mode 100644
index 0000000..a327a01
--- /dev/null
+++ b/mne/tests/test_line_endings.py
@@ -0,0 +1,68 @@
+# Author: Eric Larson <larson.eric.d at gmail.com>
+#         Adapted from vispy
+#
+# License: BSD (3-clause)
+
+import os
+from nose.tools import assert_raises
+from nose.plugins.skip import SkipTest
+from os import path as op
+import sys
+
+from mne.utils import run_tests_if_main, _TempDir, _get_root_dir
+
+
+skip_files = (
+    # known crlf
+    'FreeSurferColorLUT.txt',
+    'test_edf_stim_channel.txt',
+    'FieldTrip.py',
+)
+
+
+def _assert_line_endings(dir_):
+    """Check line endings for a directory"""
+    if sys.platform == 'win32':
+        raise SkipTest('Skipping line endings check on Windows')
+    report = list()
+    good_exts = ('.py', '.dat', '.sel', '.lout', '.css', '.js', '.lay', '.txt',
+                 '.elc', '.csd', '.sfp', '.json', '.hpts', '.vmrk', '.vhdr',
+                 '.head', '.eve', '.ave', '.cov', '.label')
+    for dirpath, dirnames, filenames in os.walk(dir_):
+        for fname in filenames:
+            if op.splitext(fname)[1] not in good_exts or fname in skip_files:
+                continue
+            filename = op.join(dirpath, fname)
+            relfilename = op.relpath(filename, dir_)
+            try:
+                with open(filename, 'rb') as fid:
+                    text = fid.read().decode('utf-8')
+            except UnicodeDecodeError:
+                report.append('In %s found non-decodable bytes' % relfilename)
+            else:
+                crcount = text.count('\r')
+                if crcount:
+                    report.append('In %s found %i/%i CR/LF' %
+                                  (relfilename, crcount, text.count('\n')))
+    if len(report) > 0:
+        raise AssertionError('Found %s files with incorrect endings:\n%s'
+                             % (len(report), '\n'.join(report)))
+
+
+def test_line_endings():
+    """Test line endings of mne-python
+    """
+    tempdir = _TempDir()
+    with open(op.join(tempdir, 'foo'), 'wb') as fid:
+        fid.write('bad\r\ngood\n'.encode('ascii'))
+    _assert_line_endings(tempdir)
+    with open(op.join(tempdir, 'bad.py'), 'wb') as fid:
+        fid.write(b'\x97')
+    assert_raises(AssertionError, _assert_line_endings, tempdir)
+    with open(op.join(tempdir, 'bad.py'), 'wb') as fid:
+        fid.write('bad\r\ngood\n'.encode('ascii'))
+    assert_raises(AssertionError, _assert_line_endings, tempdir)
+    # now check mne
+    _assert_line_endings(_get_root_dir())
+
+run_tests_if_main()
diff --git a/mne/tests/test_proj.py b/mne/tests/test_proj.py
index e9af0ed..3dc4fa8 100644
--- a/mne/tests/test_proj.py
+++ b/mne/tests/test_proj.py
@@ -122,6 +122,9 @@ def test_compute_proj_epochs():
                 p2_data = p2_data[:, mask]
             corr = np.corrcoef(p1_data, p2_data)[0, 1]
             assert_array_almost_equal(corr, 1.0, 5)
+            if p2['explained_var']:
+                assert_array_almost_equal(p1['explained_var'],
+                                          p2['explained_var'])
 
     # test that you can compute the projection matrix
     projs = activate_proj(projs)
diff --git a/mne/tests/test_source_estimate.py b/mne/tests/test_source_estimate.py
index 6fa9fdd..3a91b75 100644
--- a/mne/tests/test_source_estimate.py
+++ b/mne/tests/test_source_estimate.py
@@ -12,11 +12,12 @@ from scipy.fftpack import fft
 
 from mne.datasets import testing
 from mne import (stats, SourceEstimate, VolSourceEstimate, Label,
-                 read_source_spaces, MixedSourceEstimate)
-from mne import read_source_estimate, morph_data, extract_label_time_course
-from mne.source_estimate import (spatio_temporal_tris_connectivity,
-                                 spatio_temporal_src_connectivity,
-                                 compute_morph_matrix, grade_to_vertices,
+                 read_source_spaces, MixedSourceEstimate, read_source_estimate,
+                 morph_data, extract_label_time_course,
+                 spatio_temporal_tris_connectivity,
+                 spatio_temporal_src_connectivity,
+                 spatial_inter_hemi_connectivity)
+from mne.source_estimate import (compute_morph_matrix, grade_to_vertices,
                                  grade_to_tris)
 
 from mne.minimum_norm import read_inverse_operator
@@ -33,6 +34,8 @@ fname_inv = op.join(data_path, 'MEG', 'sample',
 fname_t1 = op.join(data_path, 'subjects', 'sample', 'mri', 'T1.mgz')
 fname_src = op.join(data_path, 'MEG', 'sample',
                     'sample_audvis_trunc-meg-eeg-oct-6-fwd.fif')
+fname_src_3 = op.join(data_path, 'subjects', 'sample', 'bem',
+                      'sample-oct-4-src.fif')
 fname_stc = op.join(data_path, 'MEG', 'sample', 'sample_audvis_trunc-meg')
 fname_smorph = op.join(data_path, 'MEG', 'sample',
                        'sample_audvis_trunc-meg')
@@ -42,6 +45,41 @@ fname_vol = op.join(data_path, 'MEG', 'sample',
                     'sample_audvis_trunc-grad-vol-7-fwd-sensmap-vol.w')
 fname_vsrc = op.join(data_path, 'MEG', 'sample',
                      'sample_audvis_trunc-meg-vol-7-fwd.fif')
+rng = np.random.RandomState(0)
+
+
+@testing.requires_testing_data
+def test_aaspatial_inter_hemi_connectivity():
+    """Test spatial connectivity between hemispheres"""
+    # trivial cases
+    conn = spatial_inter_hemi_connectivity(fname_src_3, 5e-6)
+    assert_equal(conn.data.size, 0)
+    conn = spatial_inter_hemi_connectivity(fname_src_3, 5e6)
+    assert_equal(conn.data.size, np.prod(conn.shape) // 2)
+    # actually interesting case (1cm), should be between 2 and 10% of verts
+    src = read_source_spaces(fname_src_3)
+    conn = spatial_inter_hemi_connectivity(src, 10e-3)
+    conn = conn.tocsr()
+    n_src = conn.shape[0]
+    assert_true(n_src * 0.02 < conn.data.size < n_src * 0.10)
+    assert_equal(conn[:src[0]['nuse'], :src[0]['nuse']].data.size, 0)
+    assert_equal(conn[-src[1]['nuse']:, -src[1]['nuse']:].data.size, 0)
+    c = (conn.T + conn) / 2. - conn
+    c.eliminate_zeros()
+    assert_equal(c.data.size, 0)
+    # check locations
+    upper_right = conn[:src[0]['nuse'], src[0]['nuse']:].toarray()
+    assert_equal(upper_right.sum(), conn.sum() // 2)
+    good_labels = ['S_pericallosal', 'Unknown', 'G_and_S_cingul-Mid-Post',
+                   'G_cuneus']
+    for hi, hemi in enumerate(('lh', 'rh')):
+        has_neighbors = src[hi]['vertno'][np.where(np.any(upper_right,
+                                                          axis=1 - hi))[0]]
+        labels = read_labels_from_annot('sample', 'aparc.a2009s', hemi,
+                                        subjects_dir=subjects_dir)
+        use_labels = [l.name[:-3] for l in labels
+                      if np.in1d(l.vertices, has_neighbors).any()]
+        assert_true(set(use_labels) - set(good_labels) == set())
 
 
 @slow_test
@@ -497,8 +535,8 @@ def test_transform_data():
     """Test applying linear (time) transform to data"""
     # make up some data
     n_sensors, n_vertices, n_times = 10, 20, 4
-    kernel = np.random.randn(n_vertices, n_sensors)
-    sens_data = np.random.randn(n_sensors, n_times)
+    kernel = rng.randn(n_vertices, n_sensors)
+    sens_data = rng.randn(n_sensors, n_times)
 
     vertices = np.arange(n_vertices)
     data = np.dot(kernel, sens_data)
@@ -528,7 +566,7 @@ def test_transform():
     # make up some data
     n_verts_lh, n_verts_rh, n_times = 10, 10, 10
     vertices = [np.arange(n_verts_lh), n_verts_lh + np.arange(n_verts_rh)]
-    data = np.random.randn(n_verts_lh + n_verts_rh, n_times)
+    data = rng.randn(n_verts_lh + n_verts_rh, n_times)
     stc = SourceEstimate(data, vertices=vertices, tmin=-0.1, tstep=0.1)
 
     # data_t.ndim > 2 & copy is True
@@ -629,7 +667,7 @@ def test_to_data_frame():
     """Test stc Pandas exporter"""
     n_vert, n_times = 10, 5
     vertices = [np.arange(n_vert, dtype=np.int), np.empty(0, dtype=np.int)]
-    data = np.random.randn(n_vert, n_times)
+    data = rng.randn(n_vert, n_times)
     stc_surf = SourceEstimate(data, vertices=vertices, tmin=0, tstep=1,
                               subject='sample')
     stc_vol = VolSourceEstimate(data, vertices=vertices[0], tmin=0, tstep=1,
@@ -651,7 +689,7 @@ def test_get_peak():
     """
     n_vert, n_times = 10, 5
     vertices = [np.arange(n_vert, dtype=np.int), np.empty(0, dtype=np.int)]
-    data = np.random.randn(n_vert, n_times)
+    data = rng.randn(n_vert, n_times)
     stc_surf = SourceEstimate(data, vertices=vertices, tmin=0, tstep=1,
                               subject='sample')
 
@@ -682,7 +720,7 @@ def test_mixed_stc():
     T = 2  # number of time points
     S = 3  # number of source spaces
 
-    data = np.random.randn(N, T)
+    data = rng.randn(N, T)
     vertno = S * [np.arange(N // S)]
 
     # make sure error is raised if vertices are not a list of length >= 2
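
The new test above covers spatial_inter_hemi_connectivity, which links
source-space vertices across hemispheres that lie within a given distance of
one another. A sketch (src_fname is a hypothetical '*-src.fif' file; the
result is a sparse matrix with only cross-hemisphere entries set):

    from mne import read_source_spaces, spatial_inter_hemi_connectivity

    src = read_source_spaces(src_fname)
    conn = spatial_inter_hemi_connectivity(src, 10e-3)  # 1 cm, in meters
    conn = conn.tocsr()
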
diff --git a/mne/tests/test_source_space.py b/mne/tests/test_source_space.py
index 8fefdf2..b58ac5e 100644
--- a/mne/tests/test_source_space.py
+++ b/mne/tests/test_source_space.py
@@ -39,6 +39,7 @@ fname_morph = op.join(subjects_dir, 'sample', 'bem',
 
 base_dir = op.join(op.dirname(__file__), '..', 'io', 'tests', 'data')
 fname_small = op.join(base_dir, 'small-src.fif.gz')
+rng = np.random.RandomState(0)
 
 
 @testing.requires_testing_data
@@ -287,7 +288,6 @@ def test_triangle_neighbors():
 def test_accumulate_normals():
     """Test efficient normal accumulation for surfaces"""
     # set up comparison
-    rng = np.random.RandomState(0)
     n_pts = int(1.6e5)  # approx number in sample source space
     n_tris = int(3.2e5)
     # use all positive to make a worst-case for cumulative summation
@@ -434,8 +434,8 @@ def test_vertex_to_mni_fs_nibabel():
     """
     n_check = 1000
     subject = 'sample'
-    vertices = np.random.randint(0, 100000, n_check)
-    hemis = np.random.randint(0, 1, n_check)
+    vertices = rng.randint(0, 100000, n_check)
+    hemis = rng.randint(0, 1, n_check)
     coords = vertex_to_mni(vertices, hemis, subject, subjects_dir,
                            'nibabel')
     coords_2 = vertex_to_mni(vertices, hemis, subject, subjects_dir,
@@ -512,7 +512,7 @@ def test_combine_source_spaces():
                                     mri=aseg_fname, add_interpolator=False)
 
     # setup a discrete source space
-    rr = np.random.randint(0, 20, (100, 3)) * 1e-3
+    rr = rng.randint(0, 20, (100, 3)) * 1e-3
     nn = np.zeros(rr.shape)
     nn[:, -1] = 1
     pos = {'rr': rr, 'nn': nn}
@@ -578,7 +578,6 @@ def test_morph_source_spaces():
 def test_morphed_source_space_return():
     """Test returning a morphed source space to the original subject"""
     # let's create some random data on fsaverage
-    rng = np.random.RandomState(0)
     data = rng.randn(20484, 1)
     tmin, tstep = 0, 1.
     src_fs = read_source_spaces(fname_fs)
diff --git a/mne/tests/test_surface.py b/mne/tests/test_surface.py
index a7e0c1d..f4d9b23 100644
--- a/mne/tests/test_surface.py
+++ b/mne/tests/test_surface.py
@@ -13,9 +13,9 @@ from mne import read_surface, write_surface, decimate_surface
 from mne.surface import (read_morph_map, _compute_nearest,
                          fast_cross_3d, get_head_surf, read_curvature,
                          get_meg_helmet_surf)
-from mne.utils import _TempDir, requires_tvtk, run_tests_if_main, slow_test
+from mne.utils import _TempDir, requires_mayavi, run_tests_if_main, slow_test
 from mne.io import read_info
-from mne.transforms import _get_mri_head_t
+from mne.transforms import _get_trans
 
 data_path = testing.data_path(download=False)
 subjects_dir = op.join(data_path, 'subjects')
@@ -23,6 +23,7 @@ fname = op.join(subjects_dir, 'sample', 'bem',
                 'sample-1280-1280-1280-bem-sol.fif')
 
 warnings.simplefilter('always')
+rng = np.random.RandomState(0)
 
 
 def test_helmet():
@@ -37,7 +38,7 @@ def test_helmet():
     fname_ctf_raw = op.join(base_dir, 'tests', 'data', 'test_ctf_raw.fif')
     fname_trans = op.join(base_dir, 'tests', 'data',
                           'sample-audvis-raw-trans.txt')
-    trans = _get_mri_head_t(fname_trans)[0]
+    trans = _get_trans(fname_trans)[0]
     for fname in [fname_raw, fname_kit_raw, fname_bti_raw, fname_ctf_raw]:
         helmet = get_meg_helmet_surf(read_info(fname), trans)
         assert_equal(len(helmet['rr']), 304)  # they all have 304 verts
@@ -56,8 +57,8 @@ def test_head():
 def test_huge_cross():
     """Test cross product with lots of elements
     """
-    x = np.random.rand(100000, 3)
-    y = np.random.rand(1, 3)
+    x = rng.rand(100000, 3)
+    y = rng.rand(1, 3)
     z = np.cross(x, y)
     zz = fast_cross_3d(x, y)
     assert_array_equal(z, zz)
@@ -65,9 +66,9 @@ def test_huge_cross():
 
 def test_compute_nearest():
     """Test nearest neighbor searches"""
-    x = np.random.randn(500, 3)
+    x = rng.randn(500, 3)
     x /= np.sqrt(np.sum(x ** 2, axis=1))[:, None]
-    nn_true = np.random.permutation(np.arange(500, dtype=np.int))[:20]
+    nn_true = rng.permutation(np.arange(500, dtype=np.int))[:20]
     y = x[nn_true]
 
     nn1 = _compute_nearest(x, y, use_balltree=False)
@@ -145,7 +146,7 @@ def test_read_curv():
     assert_true(np.logical_or(bin_curv == 0, bin_curv == 1).all())
 
 
-@requires_tvtk
+@requires_mayavi
 def test_decimate_surface():
     """Test triangular surface decimation
     """
diff --git a/mne/tests/test_transforms.py b/mne/tests/test_transforms.py
index 605f589..fd9a54f 100644
--- a/mne/tests/test_transforms.py
+++ b/mne/tests/test_transforms.py
@@ -7,14 +7,12 @@ from numpy.testing import (assert_array_equal, assert_equal, assert_allclose,
                            assert_almost_equal, assert_array_almost_equal)
 import warnings
 
-from mne.io.constants import FIFF
 from mne.datasets import testing
 from mne import read_trans, write_trans
 from mne.utils import _TempDir, run_tests_if_main
-from mne.transforms import (invert_transform, _get_mri_head_t,
+from mne.transforms import (invert_transform, _get_trans,
                             rotation, rotation3d, rotation_angles, _find_trans,
-                            combine_transforms, transform_coordinates,
-                            collect_transforms, apply_trans, translation,
+                            combine_transforms, apply_trans, translation,
                             get_ras_to_neuromag_trans, _sphere_to_cartesian,
                             _polar_to_cartesian, _cartesian_to_sphere)
 
@@ -29,11 +27,11 @@ fname_trans = op.join(op.split(__file__)[0], '..', 'io', 'tests',
 
 
 @testing.requires_testing_data
-def test_get_mri_head_t():
+def test_get_trans():
     """Test converting '-trans.txt' to '-trans.fif'"""
     trans = read_trans(fname)
     trans = invert_transform(trans)  # starts out as head->MRI, so invert
-    trans_2 = _get_mri_head_t(fname_trans)[0]
+    trans_2 = _get_trans(fname_trans)[0]
     assert_equal(trans['from'], trans_2['from'])
     assert_equal(trans['to'], trans_2['to'])
     assert_allclose(trans['trans'], trans_2['trans'], rtol=1e-5, atol=1e-5)
@@ -70,15 +68,16 @@ def test_io_trans():
 def test_get_ras_to_neuromag_trans():
     """Test the coordinate transformation from ras to neuromag"""
     # create model points in neuromag-like space
+    rng = np.random.RandomState(0)
     anterior = [0, 1, 0]
     left = [-1, 0, 0]
     right = [.8, 0, 0]
     up = [0, 0, 1]
-    rand_pts = np.random.uniform(-1, 1, (3, 3))
+    rand_pts = rng.uniform(-1, 1, (3, 3))
     pts = np.vstack((anterior, left, right, up, rand_pts))
 
     # change coord system
-    rx, ry, rz, tx, ty, tz = np.random.uniform(-2 * np.pi, 2 * np.pi, 6)
+    rx, ry, rz, tx, ty, tz = rng.uniform(-2 * np.pi, 2 * np.pi, 6)
     trans = np.dot(translation(tx, ty, tz), rotation(rx, ry, rz))
     pts_changed = apply_trans(trans, pts)
 
@@ -161,38 +160,4 @@ def test_combine():
                   trans['from'], trans['to'])
 
 
- at testing.requires_testing_data
-def test_transform_coords():
-    """Test transforming coordinates
-    """
-    # normal trans won't work
-    with warnings.catch_warnings(record=True):  # dep
-        assert_raises(ValueError, transform_coordinates,
-                      fname, np.eye(3), 'meg', 'fs_tal')
-    # needs to have all entries
-    pairs = [[FIFF.FIFFV_COORD_MRI, FIFF.FIFFV_COORD_HEAD],
-             [FIFF.FIFFV_COORD_MRI, FIFF.FIFFV_MNE_COORD_RAS],
-             [FIFF.FIFFV_MNE_COORD_RAS, FIFF.FIFFV_MNE_COORD_MNI_TAL],
-             [FIFF.FIFFV_MNE_COORD_MNI_TAL, FIFF.FIFFV_MNE_COORD_FS_TAL_GTZ],
-             [FIFF.FIFFV_MNE_COORD_MNI_TAL, FIFF.FIFFV_MNE_COORD_FS_TAL_LTZ],
-             ]
-    xforms = []
-    for fro, to in pairs:
-        xforms.append({'to': to, 'from': fro, 'trans': np.eye(4)})
-    tempdir = _TempDir()
-    all_fname = op.join(tempdir, 'all-trans.fif')
-    with warnings.catch_warnings(record=True):  # dep
-        collect_transforms(all_fname, xforms)
-    for fro in ['meg', 'mri']:
-        for to in ['meg', 'mri', 'fs_tal', 'mni_tal']:
-            with warnings.catch_warnings(record=True):  # dep
-                out = transform_coordinates(all_fname, np.eye(3), fro, to)
-                assert_allclose(out, np.eye(3))
-    with warnings.catch_warnings(record=True):  # dep
-        assert_raises(ValueError, transform_coordinates, all_fname, np.eye(4),
-                      'meg', 'meg')
-        assert_raises(ValueError, transform_coordinates, all_fname, np.eye(3),
-                      'fs_tal', 'meg')
-
-
 run_tests_if_main()
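
The _get_mri_head_t -> _get_trans rename tested above generalizes the helper
beyond mri->head: it accepts a '-trans.fif' or '-trans.txt' filename, a
dict, or None, and coerces the result into the requested direction,
inverting if needed. A sketch of the private API (file name hypothetical):

    from mne.transforms import _get_trans

    # defaults to fro='mri', to='head'
    mri_head_t, kind = _get_trans('sample-trans.fif')
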
diff --git a/mne/tests/test_utils.py b/mne/tests/test_utils.py
index 5dcb7e4..6475049 100644
--- a/mne/tests/test_utils.py
+++ b/mne/tests/test_utils.py
@@ -377,19 +377,21 @@ def test_fetch_file():
     """Test file downloading
     """
     tempdir = _TempDir()
-    urls = ['http://martinos.org/mne/',
-            'ftp://surfer.nmr.mgh.harvard.edu/pub/data/bert.recon.md5sum.txt']
+    urls = ['http://google.com',
+            'ftp://ftp.openbsd.org/pub/OpenBSD/README']
     with ArgvSetter(disable_stderr=False):  # to capture stdout
         for url in urls:
             archive_name = op.join(tempdir, "download_test")
-            _fetch_file(url, archive_name, verbose=False)
+            _fetch_file(url, archive_name, timeout=30., verbose=False,
+                        resume=False)
             assert_raises(Exception, _fetch_file, 'NOT_AN_ADDRESS',
                           op.join(tempdir, 'test'), verbose=False)
             resume_name = op.join(tempdir, "download_resume")
             # touch file
             with open(resume_name + '.part', 'w'):
                 os.utime(resume_name + '.part', None)
-            _fetch_file(url, resume_name, resume=True, verbose=False)
+            _fetch_file(url, resume_name, resume=True, timeout=30.,
+                        verbose=False)
             assert_raises(ValueError, _fetch_file, url, archive_name,
                           hash_='a', verbose=False)
             assert_raises(RuntimeError, _fetch_file, url, archive_name,
@@ -399,7 +401,7 @@ def test_fetch_file():
 def test_sum_squared():
     """Test optimized sum of squares
     """
-    X = np.random.randint(0, 50, (3, 3))
+    X = np.random.RandomState(0).randint(0, 50, (3, 3))
     assert_equal(np.sum(X ** 2), sum_squared(X))
 
 
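
The updated test_fetch_file exercises the timeout, resume, and hash_
arguments of the private downloader. A sketch of the call pattern (URL and
target name taken from the test; hash_ verifies the download against a known
MD5 digest and raises ValueError for an invalid hash string):

    from mne.utils import _fetch_file

    _fetch_file('http://google.com', 'download_test', timeout=30.,
                resume=False, verbose=False)
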
diff --git a/mne/time_frequency/psd.py b/mne/time_frequency/psd.py
index c728163..1544cbd 100644
--- a/mne/time_frequency/psd.py
+++ b/mne/time_frequency/psd.py
@@ -166,7 +166,7 @@ def compute_epochs_psd(epochs, picks=None, fmin=0, fmax=np.inf, tmin=None,
     if tmin is not None or tmax is not None:
         time_mask = _time_mask(epochs.times, tmin, tmax)
     else:
-        time_mask = Ellipsis
+        time_mask = slice(None)
 
     data = epochs.get_data()[:, picks][..., time_mask]
     if proj:
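
The slice(None) change above matters because the data are indexed as
data[..., time_mask]: with time_mask = Ellipsis the index would contain two
ellipses, which NumPy rejects, whereas slice(None) selects all time points
as intended. A two-line demonstration:

    import numpy as np

    data = np.zeros((5, 4, 3))
    data[..., slice(None)]  # fine: selects every time point
    # data[..., Ellipsis]   # IndexError: only a single ellipsis allowed
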
diff --git a/mne/time_frequency/tests/test_tfr.py b/mne/time_frequency/tests/test_tfr.py
index ee7a734..e54f60e 100644
--- a/mne/time_frequency/tests/test_tfr.py
+++ b/mne/time_frequency/tests/test_tfr.py
@@ -5,11 +5,13 @@ from nose.tools import assert_true, assert_false, assert_equal, assert_raises
 
 import mne
 from mne import io, Epochs, read_events, pick_types, create_info, EpochsArray
-from mne.utils import _TempDir, run_tests_if_main, slow_test, requires_h5py
+from mne.utils import (_TempDir, run_tests_if_main, slow_test, requires_h5py,
+                       grand_average)
 from mne.time_frequency import single_trial_power
-from mne.time_frequency.tfr import cwt_morlet, morlet, tfr_morlet
-from mne.time_frequency.tfr import _dpss_wavelet, tfr_multitaper
-from mne.time_frequency.tfr import AverageTFR, read_tfrs, write_tfrs
+from mne.time_frequency.tfr import (cwt_morlet, morlet, tfr_morlet,
+                                    _dpss_wavelet, tfr_multitaper,
+                                    AverageTFR, read_tfrs, write_tfrs,
+                                    combine_tfr)
 
 import matplotlib
 matplotlib.use('Agg')  # for testing don't use X server
@@ -99,6 +101,27 @@ def test_time_frequency():
     assert_true(np.sum(itc.data >= 1) == 0)
     assert_true(np.sum(itc.data <= 0) == 0)
 
+    # grand average
+    itc2 = itc.copy()
+    itc2.info['bads'] = [itc2.ch_names[0]]  # test channel drop
+    gave = grand_average([itc2, itc])
+    assert_equal(gave.data.shape, (itc2.data.shape[0] - 1,
+                                   itc2.data.shape[1],
+                                   itc2.data.shape[2]))
+    assert_equal(itc2.ch_names[1:], gave.ch_names)
+    assert_equal(gave.nave, 2)
+    itc2.drop_channels(itc2.info["bads"])
+    assert_array_almost_equal(gave.data, itc2.data)
+    itc2.data = np.ones(itc2.data.shape)
+    itc.data = np.zeros(itc.data.shape)
+    itc2.nave = 2
+    itc.nave = 1
+    itc.drop_channels([itc.ch_names[0]])
+    combined_itc = combine_tfr([itc2, itc])
+    assert_array_almost_equal(combined_itc.data,
+                              np.ones(combined_itc.data.shape) * 2 / 3)
+
+    # more tests
     power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=2, use_fft=False,
                             return_itc=True)
 
diff --git a/mne/time_frequency/tfr.py b/mne/time_frequency/tfr.py
index 4623877..3040f7c 100644
--- a/mne/time_frequency/tfr.py
+++ b/mne/time_frequency/tfr.py
@@ -23,8 +23,9 @@ from ..io.pick import pick_info, pick_types
 from ..io.meas_info import Info
 from ..utils import check_fname
 from .multitaper import dpss_windows
-from ..viz.utils import figure_nobar
+from ..viz.utils import figure_nobar, plt_show
 from ..externals.h5io import write_hdf5, read_hdf5
+from ..externals.six import string_types
 
 
 def _get_data(inst, return_itc):
@@ -724,8 +725,7 @@ class AverageTFR(ContainsMixin, UpdateChannelsMixin):
             if title:
                 fig.suptitle(title)
             colorbar = False  # only one colorbar for multiple axes
-        if show:
-            plt.show()
+        plt_show(show)
         return fig
 
     def _onselect(self, eclick, erelease, baseline, mode, layout):
@@ -842,7 +842,6 @@ class AverageTFR(ContainsMixin, UpdateChannelsMixin):
             The figure containing the topography.
         """
         from ..viz.topo import _imshow_tfr, _plot_topo
-        import matplotlib.pyplot as plt
         times = self.times.copy()
         freqs = self.freqs
         data = self.data
@@ -869,10 +868,7 @@ class AverageTFR(ContainsMixin, UpdateChannelsMixin):
                          title=title, border=border, x_label='Time (ms)',
                          y_label='Frequency (Hz)', fig_facecolor=fig_facecolor,
                          font_color=font_color)
-
-        if show:
-            plt.show()
-
+        plt_show(show)
         return fig
 
     def _check_compat(self, tfr):
@@ -1374,3 +1370,63 @@ def tfr_multitaper(inst, freqs, n_cycles, time_bandwidth=4.0,
         out = (out, AverageTFR(info, itc, times, freqs, nave,
                                method='multitaper-itc'))
     return out
+
+
+def combine_tfr(all_tfr, weights='nave'):
+    """Merge AverageTFR data by weighted addition
+
+    Create a new AverageTFR instance, using a combination of the supplied
+    instances as its data. By default, the mean (weighted by trials) is used.
+    Subtraction can be performed by passing negative weights (e.g., [1, -1]).
+    Data must have the same channels and the same time instants.
+
+    Parameters
+    ----------
+    all_tfr : list of AverageTFR
+        The tfr datasets.
+    weights : list of float | str
+        The weights to apply to the data of each AverageTFR instance.
+        Can also be ``'nave'`` to weight according to tfr.nave,
+        or ``'equal'`` to use equal weighting (each weighted as ``1/N``).
+
+    Returns
+    -------
+    tfr : AverageTFR
+        The new TFR data.
+
+    Notes
+    -----
+    .. versionadded:: 0.11.0
+    """
+    tfr = all_tfr[0].copy()
+    if isinstance(weights, string_types):
+        if weights not in ('nave', 'equal'):
+            raise ValueError('Weights must be a list of float, or "nave" or '
+                             '"equal"')
+        if weights == 'nave':
+            weights = np.array([e.nave for e in all_tfr], float)
+            weights /= weights.sum()
+        else:  # == 'equal'
+            weights = [1. / len(all_tfr)] * len(all_tfr)
+    weights = np.array(weights, float)
+    if weights.ndim != 1 or weights.size != len(all_tfr):
+        raise ValueError('Weights must be the same size as all_tfr')
+
+    ch_names = tfr.ch_names
+    for t_ in all_tfr[1:]:
+        if t_.ch_names != ch_names:
+            raise ValueError("%s and %s do not contain the same channels"
+                             % (tfr, t_))
+        if np.max(np.abs(t_.times - tfr.times)) >= 1e-7:
+            raise ValueError("%s and %s do not contain the same time "
+                             "instants" % (tfr, t_))
+
+    # use union of bad channels
+    bads = list(set(tfr.info['bads']).union(*(t_.info['bads']
+                                              for t_ in all_tfr[1:])))
+    tfr.info['bads'] = bads
+
+    tfr.data = sum(w * t_.data for w, t_ in zip(weights, all_tfr))
+    tfr.nave = max(int(1. / sum(w ** 2 / e.nave
+                                for w, e in zip(weights, all_tfr))), 1)
+    return tfr
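
A minimal usage sketch for the new combine_tfr() above; tfr_a and tfr_b are placeholder AverageTFR instances (e.g., from two conditions) with identical channels and time instants:

    from mne.time_frequency.tfr import combine_tfr

    # trial-weighted mean (the default), equal weighting, and subtraction
    mean_tfr = combine_tfr([tfr_a, tfr_b], weights='nave')
    equal_tfr = combine_tfr([tfr_a, tfr_b], weights='equal')
    diff_tfr = combine_tfr([tfr_a, tfr_b], weights=[1, -1])
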
diff --git a/mne/transforms.py b/mne/transforms.py
index fdc405c..d48d54f 100644
--- a/mne/transforms.py
+++ b/mne/transforms.py
@@ -14,7 +14,7 @@ from .io.constants import FIFF
 from .io.open import fiff_open
 from .io.tag import read_tag
 from .io.write import start_file, end_file, write_coord_trans
-from .utils import check_fname, logger, deprecated
+from .utils import check_fname, logger
 from .externals.six import string_types
 
 
@@ -318,32 +318,33 @@ def _ensure_trans(trans, fro='mri', to='head'):
     return trans
 
 
-def _get_mri_head_t(trans):
+def _get_trans(trans, fro='mri', to='head'):
     """Get mri_head_t (from=mri, to=head) from mri filename"""
     if isinstance(trans, string_types):
         if not op.isfile(trans):
             raise IOError('trans file "%s" not found' % trans)
         if op.splitext(trans)[1] in ['.fif', '.gz']:
-            mri_head_t = read_trans(trans)
+            fro_to_t = read_trans(trans)
         else:
             # convert "-trans.txt" to "-trans.fif" mri-type equivalent
+            # these files usually store the inverse (to->fro) transform
             t = np.genfromtxt(trans)
             if t.ndim != 2 or t.shape != (4, 4):
                 raise RuntimeError('File "%s" did not have 4x4 entries'
                                    % trans)
-            mri_head_t = Transform('head', 'mri', t)
+            fro_to_t = Transform(to, fro, t)
     elif isinstance(trans, dict):
-        mri_head_t = trans
+        fro_to_t = trans
         trans = 'dict'
     elif trans is None:
-        mri_head_t = Transform('head', 'mri', np.eye(4))
+        fro_to_t = Transform(fro, to, np.eye(4))
         trans = 'identity'
     else:
-        raise ValueError('trans type %s not known, must be str, dict, or None'
-                         % type(trans))
+        raise ValueError('transform type %s not known, must be str, dict, '
+                         'or None' % type(trans))
     # it's usually a head->MRI transform, so we probably need to invert it
-    mri_head_t = _ensure_trans(mri_head_t, 'mri', 'head')
-    return mri_head_t, trans
+    fro_to_t = _ensure_trans(fro_to_t, fro, to)
+    return fro_to_t, trans
 
 
 def combine_transforms(t_first, t_second, fro, to):
@@ -487,113 +488,6 @@ def transform_surface_to(surf, dest, trans):
     return surf
 
 
- at deprecated('transform_coordinates is deprecated and will be removed in v0.11')
-def transform_coordinates(filename, pos, orig, dest):
-    """Transform coordinates between various MRI-related coordinate frames
-
-    Parameters
-    ----------
-    filename: string
-        Name of a fif file containing the coordinate transformations
-        This file can be conveniently created with mne_collect_transforms
-        or ``collect_transforms``.
-    pos: array of shape N x 3
-        array of locations to transform (in meters)
-    orig: 'meg' | 'mri'
-        Coordinate frame of the above locations.
-        'meg' is MEG head coordinates
-        'mri' surface RAS coordinates
-    dest: 'meg' | 'mri' | 'fs_tal' | 'mni_tal'
-        Coordinate frame of the result.
-        'mni_tal' is MNI Talairach
-        'fs_tal' is FreeSurfer Talairach
-
-    Returns
-    -------
-    trans_pos: array of shape N x 3
-        The transformed locations
-
-    Examples
-    --------
-    transform_coordinates('all-trans.fif', np.eye(3), 'meg', 'fs_tal')
-    transform_coordinates('all-trans.fif', np.eye(3), 'mri', 'mni_tal')
-    """
-    #   Read the fif file containing all necessary transformations
-    fid, tree, directory = fiff_open(filename)
-
-    coord_names = dict(mri=FIFF.FIFFV_COORD_MRI,
-                       meg=FIFF.FIFFV_COORD_HEAD,
-                       mni_tal=FIFF.FIFFV_MNE_COORD_MNI_TAL,
-                       fs_tal=FIFF.FIFFV_MNE_COORD_FS_TAL)
-
-    orig = coord_names[orig]
-    dest = coord_names[dest]
-
-    T0 = T1 = T2 = T3plus = T3minus = None
-    for d in directory:
-        if d.kind == FIFF.FIFF_COORD_TRANS:
-            tag = read_tag(fid, d.pos)
-            trans = tag.data
-            if (trans['from'] == FIFF.FIFFV_COORD_MRI and
-                    trans['to'] == FIFF.FIFFV_COORD_HEAD):
-                T0 = invert_transform(trans)
-            elif (trans['from'] == FIFF.FIFFV_COORD_MRI and
-                  trans['to'] == FIFF.FIFFV_MNE_COORD_RAS):
-                T1 = trans
-            elif (trans['from'] == FIFF.FIFFV_MNE_COORD_RAS and
-                  trans['to'] == FIFF.FIFFV_MNE_COORD_MNI_TAL):
-                T2 = trans
-            elif trans['from'] == FIFF.FIFFV_MNE_COORD_MNI_TAL:
-                if trans['to'] == FIFF.FIFFV_MNE_COORD_FS_TAL_GTZ:
-                    T3plus = trans
-                elif trans['to'] == FIFF.FIFFV_MNE_COORD_FS_TAL_LTZ:
-                    T3minus = trans
-    fid.close()
-    #
-    #   Check we have everything we need
-    #
-    if ((orig == FIFF.FIFFV_COORD_HEAD and T0 is None) or (T1 is None) or
-            (T2 is None) or (dest == FIFF.FIFFV_MNE_COORD_FS_TAL and
-                             ((T3minus is None) or (T3minus is None)))):
-        raise ValueError('All required coordinate transforms not found')
-
-    #
-    #   Go ahead and transform the data
-    #
-    if pos.shape[1] != 3:
-        raise ValueError('Coordinates must be given in a N x 3 array')
-
-    if dest == orig:
-        trans_pos = pos.copy()
-    else:
-        n_points = pos.shape[0]
-        pos = np.c_[pos, np.ones(n_points)].T
-        if orig == FIFF.FIFFV_COORD_HEAD:
-            pos = np.dot(T0['trans'], pos)
-        elif orig != FIFF.FIFFV_COORD_MRI:
-            raise ValueError('Input data must be in MEG head or surface RAS '
-                             'coordinates')
-
-        if dest == FIFF.FIFFV_COORD_HEAD:
-            pos = np.dot(linalg.inv(T0['trans']), pos)
-        elif dest != FIFF.FIFFV_COORD_MRI:
-            pos = np.dot(np.dot(T2['trans'], T1['trans']), pos)
-            if dest != FIFF.FIFFV_MNE_COORD_MNI_TAL:
-                if dest == FIFF.FIFFV_MNE_COORD_FS_TAL:
-                    for k in range(n_points):
-                        if pos[2, k] > 0:
-                            pos[:, k] = np.dot(T3plus['trans'], pos[:, k])
-                        else:
-                            pos[:, k] = np.dot(T3minus['trans'], pos[:, k])
-                else:
-                    raise ValueError('Illegal choice for the output '
-                                     'coordinates')
-
-        trans_pos = pos[:3, :].T
-
-    return trans_pos
-
-
 def get_ras_to_neuromag_trans(nasion, lpa, rpa):
     """Construct a transformation matrix to the MNE head coordinate system
 
@@ -646,24 +540,6 @@ def get_ras_to_neuromag_trans(nasion, lpa, rpa):
     return trans
 
 
- at deprecated('collect_transforms is deprecated and will be removed in v0.11')
-def collect_transforms(fname, xforms):
-    """Collect a set of transforms in a single FIFF file
-
-    Parameters
-    ----------
-    fname : str
-        Filename to save to.
-    xforms : list of dict
-        List of transformations.
-    """
-    check_fname(fname, 'trans', ('-trans.fif', '-trans.fif.gz'))
-    with start_file(fname) as fid:
-        for xform in xforms:
-            write_coord_trans(fid, xform)
-        end_file(fid)
-
-
 def _sphere_to_cartesian(theta, phi, r):
     """Transform spherical coordinates to cartesian"""
     z = r * np.sin(phi)
@@ -687,3 +563,10 @@ def _cartesian_to_sphere(x, y, z):
     elev = np.arctan2(z, hypotxy)
     az = np.arctan2(y, x)
     return az, elev, r
+
+
+def _topo_to_sphere(theta, radius):
+    """Convert 2D topo coordinates to spherical."""
+    sph_phi = (0.5 - radius) * 180
+    sph_theta = -theta
+    return sph_phi, sph_theta
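
A quick numeric check of the new _topo_to_sphere() mapping (a private helper): a point at radius 0.5 lands on the equator (phi = 0), radius 0 at the vertex, and theta is simply negated.

    from mne.transforms import _topo_to_sphere

    print(_topo_to_sphere(theta=45., radius=0.5))   # (0.0, -45.0)
    print(_topo_to_sphere(theta=30., radius=0.25))  # (45.0, -30.0)
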
diff --git a/mne/utils.py b/mne/utils.py
index a7db386..6ac4101 100644
--- a/mne/utils.py
+++ b/mne/utils.py
@@ -152,24 +152,24 @@ def object_diff(a, b, pre=''):
         k2s = _sort_keys(b)
         m1 = set(k2s) - set(k1s)
         if len(m1):
-            out += pre + ' x1 missing keys %s\n' % (m1)
+            out += pre + ' left missing keys %s\n' % (m1)
         for key in k1s:
             if key not in k2s:
-                out += pre + ' x2 missing key %s\n' % key
+                out += pre + ' right missing key %s\n' % key
             else:
-                out += object_diff(a[key], b[key], pre + 'd1[%s]' % repr(key))
+                out += object_diff(a[key], b[key], pre + '[%s]' % repr(key))
     elif isinstance(a, (list, tuple)):
         if len(a) != len(b):
             out += pre + ' length mismatch (%s, %s)\n' % (len(a), len(b))
         else:
-            for xx1, xx2 in zip(a, b):
-                out += object_diff(xx1, xx2, pre='')
+            for ii, (xx1, xx2) in enumerate(zip(a, b)):
+                out += object_diff(xx1, xx2, pre + '[%s]' % ii)
     elif isinstance(a, (string_types, int, float, bytes)):
         if a != b:
             out += pre + ' value mismatch (%s, %s)\n' % (a, b)
     elif a is None:
         if b is not None:
-            out += pre + ' a is None, b is not (%s)\n' % (b)
+            out += pre + ' left is None, right is not (%s)\n' % (b)
     elif isinstance(a, np.ndarray):
         if not np.array_equal(a, b):
             out += pre + ' array mismatch\n'
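
With the indexing added above, object_diff() now reports where inside nested containers two objects differ; a small demonstration (output inferred from the code above):

    from mne.utils import object_diff

    print(object_diff({'a': [1, 2]}, {'a': [1, 3]}))
    # -> ['a'][1] value mismatch (2, 3)
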
@@ -910,6 +910,26 @@ def set_log_file(fname=None, output_format='%(message)s', overwrite=None):
     logger.addHandler(lh)
 
 
+class catch_logging(object):
+    """Helper to store logging
+
+    This will remove all other logging handlers, and return the handler to
+    stdout when complete.
+    """
+    def __enter__(self):
+        self._data = StringIO()
+        self._lh = logging.StreamHandler(self._data)
+        self._lh.setFormatter(logging.Formatter('%(message)s'))
+        for lh in logger.handlers:
+            logger.removeHandler(lh)
+        logger.addHandler(self._lh)
+        return self._data
+
+    def __exit__(self, *args):
+        logger.removeHandler(self._lh)
+        set_log_file(None)
+
+
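
A minimal usage sketch for the new catch_logging context manager: it captures everything MNE logs while the block runs (assuming the logger level admits INFO messages):

    from mne.utils import catch_logging, logger

    with catch_logging() as log:
        logger.info('hello from MNE')
    print(log.getvalue())  # -> hello from MNE
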
 ###############################################################################
 # CONFIG / PREFS
 
@@ -1213,7 +1233,7 @@ class ProgressBar(object):
     def __init__(self, max_value, initial_value=0, mesg='', max_chars=40,
                  progress_character='.', spinner=False, verbose_bool=True):
         self.cur_value = initial_value
-        self.max_value = float(max_value)
+        self.max_value = max_value
         self.mesg = mesg
         self.max_chars = max_chars
         self.progress_character = progress_character
@@ -1245,6 +1265,9 @@ class ProgressBar(object):
 
         # Update the message
         if mesg is not None:
+            if mesg == 'file_sizes':
+                mesg = '(%s / %s)' % (sizeof_fmt(self.cur_value),
+                                      sizeof_fmt(self.max_value))
             self.mesg = mesg
 
         # The \r tells the cursor to return to the beginning of the line rather
@@ -1283,55 +1306,8 @@ class ProgressBar(object):
         self.update(self.cur_value, mesg)
 
 
-def _chunk_read(response, local_file, initial_size=0, verbose_bool=True):
-    """Download a file chunk by chunk and show advancement
-
-    Can also be used when resuming downloads over http.
-
-    Parameters
-    ----------
-    response: urllib.response.addinfourl
-        Response to the download request in order to get file size.
-    local_file: file
-        Hard disk file where data should be written.
-    initial_size: int, optional
-        If resuming, indicate the initial size of the file.
-
-    Notes
-    -----
-    The chunk size will be automatically adapted based on the connection
-    speed.
-    """
-    # Adapted from NISL:
-    # https://github.com/nisl/tutorial/blob/master/nisl/datasets.py
-
-    # Returns only amount left to download when resuming, not the size of the
-    # entire file
-    total_size = int(response.headers.get('Content-Length', '1').strip())
-    total_size += initial_size
-
-    progress = ProgressBar(total_size, initial_value=initial_size,
-                           max_chars=40, spinner=True, mesg='downloading',
-                           verbose_bool=verbose_bool)
-    chunk_size = 8192  # 2 ** 13
-    while True:
-        t0 = time.time()
-        chunk = response.read(chunk_size)
-        dt = time.time() - t0
-        if dt < 0.001:
-            chunk_size *= 2
-        elif dt > 0.5 and chunk_size > 8192:
-            chunk_size = chunk_size // 2
-        if not chunk:
-            if verbose_bool:
-                sys.stdout.write('\n')
-                sys.stdout.flush()
-            break
-        _chunk_write(chunk, local_file, progress)
-
-
-def _chunk_read_ftp_resume(url, temp_file_name, local_file, verbose_bool=True):
-    """Resume downloading of a file from an FTP server"""
+def _get_ftp(url, temp_file_name, initial_size, file_size, verbose_bool):
+    """Safely (resume a) download to a file from FTP"""
     # Adapted from: https://pypi.python.org/pypi/fileDownloader.py
     # but with changes
 
@@ -1339,7 +1315,6 @@ def _chunk_read_ftp_resume(url, temp_file_name, local_file, verbose_bool=True):
     file_name = os.path.basename(parsed_url.path)
     server_path = parsed_url.path.replace(file_name, "")
     unquoted_server_path = urllib.parse.unquote(server_path)
-    local_file_size = os.path.getsize(temp_file_name)
 
     data = ftplib.FTP()
     if parsed_url.port is not None:
@@ -1350,23 +1325,75 @@ def _chunk_read_ftp_resume(url, temp_file_name, local_file, verbose_bool=True):
     if len(server_path) > 1:
         data.cwd(unquoted_server_path)
     data.sendcmd("TYPE I")
-    data.sendcmd("REST " + str(local_file_size))
+    data.sendcmd("REST " + str(initial_size))
     down_cmd = "RETR " + file_name
-    file_size = data.size(file_name)
-    progress = ProgressBar(file_size, initial_value=local_file_size,
-                           max_chars=40, spinner=True, mesg='downloading',
+    assert file_size == data.size(file_name)
+    progress = ProgressBar(file_size, initial_value=initial_size,
+                           max_chars=40, spinner=True, mesg='file_sizes',
                            verbose_bool=verbose_bool)
 
     # Callback lambda function that will be passed the downloaded data
     # chunk and will write it to file and update the progress bar
-    def chunk_write(chunk):
-        return _chunk_write(chunk, local_file, progress)
-    data.retrbinary(down_cmd, chunk_write)
-    data.close()
+    mode = 'ab' if initial_size > 0 else 'wb'
+    with open(temp_file_name, mode) as local_file:
+        def chunk_write(chunk):
+            return _chunk_write(chunk, local_file, progress)
+        data.retrbinary(down_cmd, chunk_write)
+        data.close()
     sys.stdout.write('\n')
     sys.stdout.flush()
 
 
+def _get_http(url, temp_file_name, initial_size, file_size, verbose_bool):
+    """Safely (resume a) download to a file from http(s)"""
+    # Actually do the reading
+    req = urllib.request.Request(url)
+    if initial_size > 0:
+        req.headers['Range'] = 'bytes=%s-' % (initial_size,)
+    try:
+        response = urllib.request.urlopen(req)
+    except Exception:
+        # There is a problem that may be due to resuming, some
+        # servers may not support the "Range" header. Switch
+        # back to complete download method
+        logger.info('Resuming download failed (server '
+                    'rejected the request). Attempting to '
+                    'restart downloading the entire file.')
+        del req.headers['Range']
+        response = urllib.request.urlopen(req)
+    total_size = int(response.headers.get('Content-Length', '1').strip())
+    if initial_size > 0 and file_size == total_size:
+        logger.info('Resuming download failed (resume file size '
+                    'mismatch). Attempting to restart downloading the '
+                    'entire file.')
+        initial_size = 0
+    total_size += initial_size
+    if total_size != file_size:
+        raise RuntimeError('Remote file size %s does not match the expected '
+                           'size %s' % (total_size, file_size))
+    mode = 'ab' if initial_size > 0 else 'wb'
+    progress = ProgressBar(total_size, initial_value=initial_size,
+                           max_chars=40, spinner=True, mesg='file_sizes',
+                           verbose_bool=verbose_bool)
+    chunk_size = 8192  # 2 ** 13
+    with open(temp_file_name, mode) as local_file:
+        while True:
+            t0 = time.time()
+            chunk = response.read(chunk_size)
+            dt = time.time() - t0
+            if dt < 0.005:
+                chunk_size *= 2
+            elif dt > 0.1 and chunk_size > 8192:
+                chunk_size = chunk_size // 2
+            if not chunk:
+                if verbose_bool:
+                    sys.stdout.write('\n')
+                    sys.stdout.flush()
+                break
+            local_file.write(chunk)
+            progress.update_with_increment_value(len(chunk),
+                                                 mesg='file_sizes')
+
+
 def _chunk_write(chunk, local_file, progress):
     """Write a chunk to file and update the progress bar"""
     local_file.write(chunk)
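
The refactored _get_http() above resumes interrupted downloads via the HTTP Range header. A standalone sketch of the same pattern, independent of MNE (url and fname are placeholders):

    import os
    import urllib.request

    def resume_download(url, fname, chunk_size=8192):
        """Download url to fname, resuming from a partial file if present."""
        initial = os.path.getsize(fname) if os.path.exists(fname) else 0
        req = urllib.request.Request(url)
        if initial > 0:
            req.headers['Range'] = 'bytes=%s-' % initial  # request only the tail
        response = urllib.request.urlopen(req)
        mode = 'ab' if initial > 0 else 'wb'
        with open(fname, mode) as local_file:
            while True:
                chunk = response.read(chunk_size)
                if not chunk:
                    break
                local_file.write(chunk)
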
@@ -1375,7 +1402,7 @@ def _chunk_write(chunk, local_file, progress):
 
 @verbose
 def _fetch_file(url, file_name, print_destination=True, resume=True,
-                hash_=None, verbose=None):
+                hash_=None, timeout=10., verbose=None):
     """Load requested file, downloading it if needed or requested
 
     Parameters
@@ -1392,6 +1419,8 @@ def _fetch_file(url, file_name, print_destination=True, resume=True,
     hash_ : str | None
         The hash of the file to check. If None, no checking is
         performed.
+    timeout : float
+        The URL open timeout.
     verbose : bool, str, int, or None
         If not None, override default verbose level (see mne.verbose).
     """
@@ -1402,12 +1431,14 @@ def _fetch_file(url, file_name, print_destination=True, resume=True,
         raise ValueError('Bad hash value given, should be a 32-character '
                          'string:\n%s' % (hash_,))
     temp_file_name = file_name + ".part"
-    local_file = None
-    initial_size = 0
     verbose_bool = (logger.level <= 20)  # 20 is info
     try:
-        # Checking file size and displaying it alongside the download url
-        u = urllib.request.urlopen(url, timeout=10.)
+        # Check the file size and display it alongside the download URL
+        u = urllib.request.urlopen(url, timeout=timeout)
+        u.close()
+        # this is necessary to follow any redirects
+        url = u.geturl()
+        u = urllib.request.urlopen(url, timeout=timeout)
         try:
             file_size = int(u.headers.get('Content-Length', '1').strip())
         finally:
@@ -1415,46 +1446,28 @@ def _fetch_file(url, file_name, print_destination=True, resume=True,
             del u
         logger.info('Downloading data from %s (%s)\n'
                     % (url, sizeof_fmt(file_size)))
-        # Downloading data
-        if resume and os.path.exists(temp_file_name):
-            local_file = open(temp_file_name, "ab")
-            # Resuming HTTP and FTP downloads requires different procedures
-            scheme = urllib.parse.urlparse(url).scheme
-            if scheme in ('http', 'https'):
-                local_file_size = os.path.getsize(temp_file_name)
-                # If the file exists, then only download the remainder
-                req = urllib.request.Request(url)
-                req.headers["Range"] = "bytes=%s-" % local_file_size
-                try:
-                    data = urllib.request.urlopen(req)
-                except Exception:
-                    # There is a problem that may be due to resuming, some
-                    # servers may not support the "Range" header. Switch back
-                    # to complete download method
-                    logger.info('Resuming download failed. Attempting to '
-                                'restart downloading the entire file.')
-                    local_file.close()
-                    _fetch_file(url, file_name, resume=False)
-                else:
-                    _chunk_read(data, local_file, initial_size=local_file_size,
-                                verbose_bool=verbose_bool)
-                    data.close()
-                    del data  # should auto-close
-            else:
-                _chunk_read_ftp_resume(url, temp_file_name, local_file,
-                                       verbose_bool=verbose_bool)
+
+        # Triage resume
+        if not os.path.exists(temp_file_name):
+            resume = False
+        if resume:
+            with open(temp_file_name, 'rb', buffering=0) as local_file:
+                local_file.seek(0, 2)
+                initial_size = local_file.tell()
+            del local_file
         else:
-            local_file = open(temp_file_name, "wb")
-            data = urllib.request.urlopen(url)
-            try:
-                _chunk_read(data, local_file, initial_size=initial_size,
-                            verbose_bool=verbose_bool)
-            finally:
-                data.close()
-                del data  # should auto-close
-        # temp file must be closed prior to the move
-        if not local_file.closed:
-            local_file.close()
+            initial_size = 0
+        # This should never happen if our functions work properly
+        if initial_size >= file_size:
+            raise RuntimeError('Local file (%s) is at least as large as '
+                               'remote file (%s), cannot resume download'
+                               % (sizeof_fmt(initial_size),
+                                  sizeof_fmt(file_size)))
+
+        scheme = urllib.parse.urlparse(url).scheme
+        fun = _get_http if scheme in ('http', 'https') else _get_ftp
+        fun(url, temp_file_name, initial_size, file_size, verbose_bool)
+
         # check md5sum
         if hash_ is not None:
             logger.info('Verifying download hash.')
@@ -1466,15 +1479,10 @@ def _fetch_file(url, file_name, print_destination=True, resume=True,
         shutil.move(temp_file_name, file_name)
         if print_destination is True:
             logger.info('File saved as %s.\n' % file_name)
-    except Exception as e:
+    except Exception:
         logger.error('Error while fetching file %s.'
                      ' Dataset fetching aborted.' % url)
-        logger.error("Error: %s", e)
         raise
-    finally:
-        if local_file is not None:
-            if not local_file.closed:
-                local_file.close()
 
 
 def sizeof_fmt(num):
@@ -1892,3 +1900,86 @@ def compute_corr(x, y):
     # transpose / broadcasting else Y is correct
     y_sd = Y.std(0, ddof=1)[:, None if X.shape == Y.shape else Ellipsis]
     return (fast_dot(X.T, Y) / float(len(X) - 1)) / (x_sd * y_sd)
+
+
+def grand_average(all_inst, interpolate_bads=True, drop_bads=True):
+    """Make grand average of a list evoked or AverageTFR data
+
+    For evoked data, the function interpolates bad channels based on
+    `interpolate_bads` parameter. If `interpolate_bads` is True, the grand
+    average file will contain good channels and the bad channels interpolated
+    from the good MEG/EEG channels.
+    For AverageTFR data, the function takes the subset of channels not marked
+    as bad in any of the instances.
+
+    The grand_average.nave attribute will be equal to the number
+    of evoked datasets used to calculate the grand average.
+
+    Note: Grand average evoked should not be used for source localization.
+
+    Parameters
+    ----------
+    all_inst : list of Evoked or AverageTFR data
+        The datasets to average.
+    interpolate_bads : bool
+        If True, bad MEG and EEG channels are interpolated. Ignored for
+        AverageTFR.
+    drop_bads : bool
+        If True, drop all channels marked as bad in any data set.
+        If neither interpolate_bads nor drop_bads is True, every channel
+        marked as bad in at least one of the input files will be marked as
+        bad in the output, but no interpolation or dropping will be
+        performed.
+
+    Returns
+    -------
+    grand_average : Evoked | AverageTFR
+        The grand average data. Same type as input.
+
+    Notes
+    -----
+    .. versionadded:: 0.11.0
+    """
+    # check that all elements in the given list are Evoked or AverageTFR
+    from .evoked import Evoked
+    from .time_frequency import AverageTFR
+    from .channels.channels import equalize_channels
+    if not any(all(isinstance(inst, t) for inst in all_inst)
+               for t in (Evoked, AverageTFR)):
+        raise ValueError("Not all input elements are Evoked or AverageTFR")
+
+    # Copy channels to leave the original evoked datasets intact.
+    all_inst = [inst.copy() for inst in all_inst]
+
+    # Interpolate if necessary
+    if isinstance(all_inst[0], Evoked):
+        if interpolate_bads:
+            all_inst = [inst.interpolate_bads() if len(inst.info['bads']) > 0
+                        else inst for inst in all_inst]
+        equalize_channels(all_inst)  # apply equalize_channels
+        from .evoked import combine_evoked as combine
+    elif isinstance(all_inst[0], AverageTFR):
+        from .time_frequency.tfr import combine_tfr as combine
+
+    if drop_bads:
+        bads = list(set((b for inst in all_inst for b in inst.info['bads'])))
+        if bads:
+            for inst in all_inst:
+                inst.drop_channels(bads, copy=False)
+
+    # make grand_average object using combine_[evoked/tfr]
+    grand_average = combine(all_inst, weights='equal')
+    # change the grand_average.nave to the number of Evokeds
+    grand_average.nave = len(all_inst)
+    # change comment field
+    grand_average.comment = "Grand average (n = %d)" % grand_average.nave
+    return grand_average
+
+
+def _get_root_dir():
+    """Helper to get as close to the repo root as possible"""
+    root_dir = op.abspath(op.dirname(__file__))
+    up_dir = op.join(root_dir, '..')
+    if op.isfile(op.join(up_dir, 'setup.py')) and all(
+            op.isdir(op.join(up_dir, x)) for x in ('mne', 'examples', 'doc')):
+        root_dir = op.abspath(up_dir)
+    return root_dir
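
A minimal usage sketch for the new grand_average() helper; evokeds is a placeholder list of mne.Evoked objects (e.g., one per subject), and the top-level import assumes the function is re-exported in mne/__init__.py (otherwise import it from mne.utils):

    from mne import grand_average

    ga = grand_average(evokeds, interpolate_bads=True, drop_bads=True)
    print(ga.nave, ga.comment)  # e.g. 12 "Grand average (n = 12)"
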
diff --git a/mne/viz/_3d.py b/mne/viz/_3d.py
index cf94942..202e976 100644
--- a/mne/viz/_3d.py
+++ b/mne/viz/_3d.py
@@ -26,12 +26,12 @@ from ..io.constants import FIFF
 from ..surface import (get_head_surf, get_meg_helmet_surf, read_surface,
                        transform_surface_to)
 from ..transforms import (read_trans, _find_trans, apply_trans,
-                          combine_transforms, _get_mri_head_t, _ensure_trans,
+                          combine_transforms, _get_trans, _ensure_trans,
                           invert_transform)
 from ..utils import get_subjects_dir, logger, _check_subject, verbose
 from ..fixes import _get_args
 from ..defaults import _handle_default
-from .utils import mne_analyze_colormap, _prepare_trellis, COLORS
+from .utils import mne_analyze_colormap, _prepare_trellis, COLORS, plt_show
 from ..externals.six import BytesIO
 
 
@@ -262,8 +262,7 @@ def _plot_mri_contours(mri_fname, surf_fnames, orientation='coronal',
     if show:
         plt.subplots_adjust(left=0., bottom=0., right=1., top=1., wspace=0.,
                             hspace=0.)
-        plt.show()
-
+    plt_show(show)
     return fig if img_output is None else outs
 
 
@@ -820,9 +819,7 @@ def plot_sparse_source_estimates(src, stcs, colors=None, linewidth=2,
 
     if fig_name is not None:
         plt.title(fig_name)
-
-    if show:
-        plt.show()
+    plt_show(show)
 
     surface.actor.property.backface_culling = True
     surface.actor.property.shading = True
@@ -887,7 +884,7 @@ def plot_dipole_locations(dipoles, trans, subject, subjects_dir=None,
     from matplotlib.colors import ColorConverter
     color_converter = ColorConverter()
 
-    trans = _get_mri_head_t(trans)[0]
+    trans = _get_trans(trans)[0]
     subjects_dir = get_subjects_dir(subjects_dir=subjects_dir,
                                     raise_error=True)
     fname = op.join(subjects_dir, subject, 'bem', 'inner_skull.surf')
diff --git a/mne/viz/__init__.py b/mne/viz/__init__.py
index cc3f0bf..14bafec 100644
--- a/mne/viz/__init__.py
+++ b/mne/viz/__init__.py
@@ -4,8 +4,7 @@
 from .topomap import (plot_evoked_topomap, plot_projs_topomap,
                       plot_ica_components, plot_tfr_topomap, plot_topomap,
                       plot_epochs_psd_topomap)
-from .topo import (plot_topo, plot_topo_image_epochs,
-                   iter_topography)
+from .topo import plot_topo_image_epochs, iter_topography
 from .utils import (tight_layout, mne_analyze_colormap, compare_fiff,
                     ClickableImage, add_background_image)
 from ._3d import (plot_sparse_source_estimates, plot_source_estimates,
@@ -15,9 +14,9 @@ from .misc import (plot_cov, plot_bem, plot_events, plot_source_spectrogram,
 from .evoked import (plot_evoked, plot_evoked_image, plot_evoked_white,
                      plot_snr_estimate, plot_evoked_topo)
 from .circle import plot_connectivity_circle, circular_layout
-from .epochs import (plot_image_epochs, plot_drop_log, plot_epochs,
-                     _drop_log_stats, plot_epochs_psd, plot_epochs_image)
-from .raw import plot_raw, plot_raw_psd
+from .epochs import (plot_drop_log, plot_epochs, plot_epochs_psd,
+                     plot_epochs_image)
+from .raw import plot_raw, plot_raw_psd, plot_raw_psd_topo
 from .ica import plot_ica_scores, plot_ica_sources, plot_ica_overlay
 from .ica import _plot_sources_raw, _plot_sources_epochs
 from .montage import plot_montage
diff --git a/mne/viz/circle.py b/mne/viz/circle.py
index 7662b14..6c21293 100644
--- a/mne/viz/circle.py
+++ b/mne/viz/circle.py
@@ -14,6 +14,7 @@ from functools import partial
 
 import numpy as np
 
+from .utils import plt_show
 from ..externals.six import string_types
 from ..fixes import tril_indices, normalize_colors
 
@@ -409,6 +410,5 @@ def plot_connectivity_circle(con, node_names, indices=None, n_lines=None,
 
         fig.canvas.mpl_connect('button_press_event', callback)
 
-    if show:
-        plt.show()
+    plt_show(show)
     return fig, axes
diff --git a/mne/viz/decoding.py b/mne/viz/decoding.py
index 9d88f15..160252b 100644
--- a/mne/viz/decoding.py
+++ b/mne/viz/decoding.py
@@ -11,6 +11,8 @@ from __future__ import print_function
 import numpy as np
 import warnings
 
+from .utils import plt_show
+
 
 def plot_gat_matrix(gat, title=None, vmin=None, vmax=None, tlim=None,
                     ax=None, cmap='RdBu_r', show=True, colorbar=True,
@@ -82,8 +84,7 @@ def plot_gat_matrix(gat, title=None, vmin=None, vmax=None, tlim=None,
     ax.set_ylim(tlim[2:])
     if colorbar is True:
         plt.colorbar(im, ax=ax)
-    if show is True:
-        plt.show()
+    plt_show(show)
     return fig if ax is None else ax.get_figure()
 
 
@@ -182,8 +183,7 @@ def plot_gat_times(gat, train_time='diagonal', title=None, xmin=None,
                       'AUC' if 'roc' in repr(gat.scorer_) else r'%'))
     if legend is True:
         ax.legend(loc='best')
-    if show is True:
-        plt.show()
+    plt_show(show)
     return fig if ax is None else ax.get_figure()
 
 
diff --git a/mne/viz/epochs.py b/mne/viz/epochs.py
index 4e4e830..ee60cf4 100644
--- a/mne/viz/epochs.py
+++ b/mne/viz/epochs.py
@@ -14,15 +14,14 @@ import copy
 
 import numpy as np
 
-from ..utils import verbose, get_config, set_config, deprecated
-from ..utils import logger
+from ..utils import verbose, get_config, set_config, logger
 from ..io.pick import pick_types, channel_type
 from ..io.proj import setup_proj
 from ..fixes import Counter, _in1d
 from ..time_frequency import compute_epochs_psd
-from .utils import tight_layout, figure_nobar, _toggle_proj
-from .utils import _toggle_options, _layout_figure, _setup_vmin_vmax
-from .utils import _channels_changed, _plot_raw_onscroll, _onclick_help
+from .utils import (tight_layout, figure_nobar, _toggle_proj, _toggle_options,
+                    _layout_figure, _setup_vmin_vmax, _channels_changed,
+                    _plot_raw_onscroll, _onclick_help, plt_show)
 from ..defaults import _handle_default
 
 
@@ -156,52 +155,12 @@ def plot_epochs_image(epochs, picks=None, sigma=0., vmin=None,
             plt.colorbar(im, cax=ax3)
             tight_layout(fig=this_fig)
 
-    if show:
-        plt.show()
-
+    plt_show(show)
     return figs
 
 
- at deprecated('`plot_image_epochs` is deprecated and will be removed in '
-            '"MNE 0.11." Please use plot_epochs_image instead')
-def plot_image_epochs(epochs, picks=None, sigma=0., vmin=None,
-                      vmax=None, colorbar=True, order=None, show=True,
-                      units=None, scalings=None, cmap='RdBu_r', fig=None):
-
-    return plot_epochs_image(epochs=epochs, picks=picks, sigma=sigma,
-                             vmin=vmin, vmax=None, colorbar=True, order=order,
-                             show=show, units=None, scalings=None, cmap=cmap,
-                             fig=fig)
-
-
-def _drop_log_stats(drop_log, ignore=['IGNORED']):
-    """
-    Parameters
-    ----------
-    drop_log : list of lists
-        Epoch drop log from Epochs.drop_log.
-    ignore : list
-        The drop reasons to ignore.
-
-    Returns
-    -------
-    perc : float
-        Total percentage of epochs dropped.
-    """
-    # XXX: This function should be moved to epochs.py after
-    # removal of perc return parameter in plot_drop_log()
-
-    if not isinstance(drop_log, list) or not isinstance(drop_log[0], list):
-        raise ValueError('drop_log must be a list of lists')
-
-    perc = 100 * np.mean([len(d) > 0 for d in drop_log
-                          if not any(r in ignore for r in d)])
-
-    return perc
-
-
 def plot_drop_log(drop_log, threshold=0, n_max_plot=20, subject='Unknown',
-                  color=(0.9, 0.9, 0.9), width=0.8, ignore=['IGNORED'],
+                  color=(0.9, 0.9, 0.9), width=0.8, ignore=('IGNORED',),
                   show=True):
     """Show the channel stats based on a drop_log from Epochs
 
@@ -231,6 +190,7 @@ def plot_drop_log(drop_log, threshold=0, n_max_plot=20, subject='Unknown',
         The figure.
     """
     import matplotlib.pyplot as plt
+    from ..epochs import _drop_log_stats
     perc = _drop_log_stats(drop_log, ignore)
     scores = Counter([ch for d in drop_log for ch in d if ch not in ignore])
     ch_names = np.array(list(scores.keys()))
@@ -238,7 +198,11 @@ def plot_drop_log(drop_log, threshold=0, n_max_plot=20, subject='Unknown',
     if perc < threshold or len(ch_names) == 0:
         plt.text(0, 0, 'No drops')
         return fig
-    counts = 100 * np.array(list(scores.values()), dtype=float) / len(drop_log)
+    n_used = 0
+    for d in drop_log:  # "d" is the list of drop reasons for each epoch
+        if len(d) == 0 or any(ch not in ignore for ch in d):
+            n_used += 1  # number of epochs not ignored
+    counts = 100 * np.array(list(scores.values()), dtype=float) / n_used
     n_plot = min(n_max_plot, len(ch_names))
     order = np.flipud(np.argsort(counts))
     plt.title('%s: %0.1f%%' % (subject, perc))
@@ -250,10 +214,7 @@ def plot_drop_log(drop_log, threshold=0, n_max_plot=20, subject='Unknown',
     plt.ylabel('% of epochs rejected')
     plt.xlim((-width / 2.0, (n_plot - 1) + width * 3 / 2))
     plt.grid(True, axis='y')
-
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
@@ -394,36 +355,36 @@ def plot_epochs(epochs, picks=None, scalings=None, n_epochs=20,
 
     Notes
     -----
-    With trellis set to False, the arrow keys (up/down/left/right) can
-    be used to navigate between channels and epochs and the scaling can be
-    adjusted with - and + (or =) keys, but this depends on the backend
-    matplotlib is configured to use (e.g., mpl.use(``TkAgg``) should work).
-    Full screen mode can be to toggled with f11 key. The amount of epochs and
-    channels per view can be adjusted with home/end and page down/page up keys.
-    Butterfly plot can be toggled with ``b`` key. Right mouse click adds a
-    vertical line to the plot.
+    The arrow keys (up/down/left/right) can be used to navigate between
+    channels and epochs, and the scaling can be adjusted with the - and +
+    (or =) keys, but this depends on the backend matplotlib is configured
+    to use (e.g., mpl.use(``TkAgg``) should work). Full screen mode can be
+    toggled with the f11 key. The number of epochs and channels per view
+    can be adjusted with the home/end and page down/page up keys. The
+    butterfly plot can be toggled with the ``b`` key. A right mouse click
+    adds a vertical line to the plot.
     """
-    import matplotlib.pyplot as plt
+    epochs.drop_bad_epochs()
     scalings = _handle_default('scalings_plot_raw', scalings)
 
     projs = epochs.info['projs']
 
     params = {'epochs': epochs,
-              'orig_data': np.concatenate(epochs.get_data(), axis=1),
               'info': copy.deepcopy(epochs.info),
               'bad_color': (0.8, 0.8, 0.8),
-              't_start': 0}
+              't_start': 0,
+              'histogram': None}
     params['label_click_fun'] = partial(_pick_bad_channels, params=params)
     _prepare_mne_browse_epochs(params, projs, n_channels, n_epochs, scalings,
                                title, picks)
+    _prepare_projectors(params)
+    _layout_figure(params)
 
     callback_close = partial(_close_event, params=params)
     params['fig'].canvas.mpl_connect('close_event', callback_close)
-    if show:
-        try:
-            plt.show(block=block)
-        except TypeError:  # not all versions have this
-            plt.show()
+    try:
+        plt_show(show, block=block)
+    except TypeError:  # not all versions have this
+        plt_show(show)
 
     return params['fig']
 
@@ -481,7 +442,6 @@ def plot_epochs_psd(epochs, fmin=0, fmax=np.inf, tmin=None, tmax=None,
     fig : instance of matplotlib figure
         Figure distributing one image per channel across sensor topography.
     """
-    import matplotlib.pyplot as plt
     from .raw import _set_psd_plot_params
     fig, picks_list, titles_list, ax_list, make_label = _set_psd_plot_params(
         epochs.info, proj, picks, ax, area_mode)
@@ -525,8 +485,7 @@ def plot_epochs_psd(epochs, fmin=0, fmax=np.inf, tmin=None, tmax=None,
             ax.set_xlim(freqs[0], freqs[-1])
     if make_label:
         tight_layout(pad=0.1, h_pad=0.1, w_pad=0.1, fig=fig)
-    if show:
-        plt.show()
+    plt_show(show)
     return fig
 
 
@@ -646,14 +605,14 @@ def _prepare_mne_browse_epochs(params, projs, n_channels, n_epochs, scalings,
         lines.append(lc)
 
     times = epochs.times
-    data = np.zeros((params['info']['nchan'], len(times) * len(epochs.events)))
+    data = np.zeros((params['info']['nchan'], len(times) * n_epochs))
 
     ylim = (25., 0.)  # Hardcoded 25 because butterfly has max 5 rows (5*5=25).
     # make shells for plotting traces
     offset = ylim[0] / n_channels
     offsets = np.arange(n_channels) * offset + (offset / 2.)
 
-    times = np.arange(len(data[0]))
+    times = np.arange(len(times) * len(epochs.events))
     epoch_times = np.arange(0, len(times), n_times)
 
     ax.set_yticks(offsets)
@@ -732,14 +691,6 @@ def _prepare_mne_browse_epochs(params, projs, n_channels, n_epochs, scalings,
 
     params['plot_fun'] = partial(_plot_traces, params=params)
 
-    if len(projs) > 0 and not epochs.proj:
-        ax_button = plt.subplot2grid((10, 15), (9, 14))
-        opt_button = mpl.widgets.Button(ax_button, 'Proj')
-        callback_option = partial(_toggle_options, params=params)
-        opt_button.on_clicked(callback_option)
-        params['opt_button'] = opt_button
-        params['ax_button'] = ax_button
-
     # callbacks
     callback_scroll = partial(_plot_onscroll, params=params)
     fig.canvas.mpl_connect('scroll_event', callback_scroll)
@@ -750,10 +701,26 @@ def _prepare_mne_browse_epochs(params, projs, n_channels, n_epochs, scalings,
     callback_resize = partial(_resize_event, params=params)
     fig.canvas.mpl_connect('resize_event', callback_resize)
     fig.canvas.mpl_connect('pick_event', partial(_onpick, params=params))
+    params['callback_key'] = callback_key
 
     # Draw event lines for the first time.
     _plot_vert_lines(params)
 
+
+def _prepare_projectors(params):
+    """ Helper for setting up the projectors for epochs browser """
+    import matplotlib.pyplot as plt
+    import matplotlib as mpl
+    epochs = params['epochs']
+    projs = params['projs']
+    if len(projs) > 0 and not epochs.proj:
+        ax_button = plt.subplot2grid((10, 15), (9, 14))
+        opt_button = mpl.widgets.Button(ax_button, 'Proj')
+        callback_option = partial(_toggle_options, params=params)
+        opt_button.on_clicked(callback_option)
+        params['opt_button'] = opt_button
+        params['ax_button'] = ax_button
+
     # As here code is shared with plot_evoked, some extra steps:
     # first the actual plot update function
     params['plot_update_proj_callback'] = _plot_update_epochs_proj
@@ -761,10 +728,7 @@ def _prepare_mne_browse_epochs(params, projs, n_channels, n_epochs, scalings,
     callback_proj = partial(_toggle_proj, params=params)
     # store these for use by callbacks in the options figure
     params['callback_proj'] = callback_proj
-    params['callback_key'] = callback_key
-
     callback_proj('none')
-    _layout_figure(params)
 
 
 def _plot_traces(params):
@@ -817,7 +781,7 @@ def _plot_traces(params):
             else:
                 tick_list += [params['ch_names'][ch_idx]]
                 offset = offsets[line_idx]
-            this_data = data[ch_idx][params['t_start']:end]
+            this_data = data[ch_idx]
 
             # subtraction here gets correct orientation for flipped ylim
             ydata = offset - this_data
@@ -904,7 +868,7 @@ def _plot_traces(params):
         params['fig_proj'].canvas.draw()
 
 
-def _plot_update_epochs_proj(params, bools):
+def _plot_update_epochs_proj(params, bools=None):
     """Helper only needs to be called when proj is changed"""
     if bools is not None:
         inds = np.where(bools)[0]
@@ -914,7 +878,10 @@ def _plot_update_epochs_proj(params, bools):
     params['projector'], _ = setup_proj(params['info'], add_eeg_ref=False,
                                         verbose=False)
 
-    data = params['orig_data']
+    start = int(params['t_start'] / len(params['epochs'].times))
+    n_epochs = params['n_epochs']
+    end = start + n_epochs
+    data = np.concatenate(params['epochs'][start:end].get_data(), axis=1)
     if params['projector'] is not None:
         data = np.dot(params['projector'], data)
     types = params['types']
@@ -944,7 +911,7 @@ def _plot_window(value, params):
     if params['t_start'] != value:
         params['t_start'] = value
         params['hsel_patch'].set_x(value)
-        params['plot_fun']()
+        params['plot_update_proj_callback'](params)
 
 
 def _plot_vert_lines(params):
@@ -1018,7 +985,7 @@ def _pick_bad_channels(pos, params):
     if 'ica' in params:
         params['plot_fun']()
     else:
-        params['plot_update_proj_callback'](params, None)
+        params['plot_update_proj_callback'](params)
 
 
 def _plot_onscroll(event, params):
@@ -1184,15 +1151,13 @@ def _plot_onkey(event, params):
         params['n_epochs'] = n_epochs
         params['duration'] -= n_times
         params['hsel_patch'].set_width(params['duration'])
-        params['plot_fun']()
+        params['data'] = params['data'][:, :-n_times]
+        params['plot_update_proj_callback'](params)
     elif event.key == 'end':
         n_epochs = params['n_epochs'] + 1
         n_times = len(params['epochs'].times)
-        if n_times * n_epochs > len(params['data'][0]):
+        if n_times * n_epochs > len(params['times']):
             return
-        if params['t_start'] + n_times * n_epochs > len(params['data'][0]):
-            params['t_start'] -= n_times
-            params['hsel_patch'].set_x(params['t_start'])
         ticks = params['epoch_times'] + 0.5 * n_times
         params['ax2'].set_xticks(ticks[:n_epochs])
         params['n_epochs'] = n_epochs
@@ -1202,11 +1167,12 @@ def _plot_onkey(event, params):
             params['vert_lines'].append(ax.plot(pos, ax.get_ylim(), 'y',
                                                 zorder=3))
         params['duration'] += n_times
-        if params['t_start'] + params['duration'] > len(params['data'][0]):
+        if params['t_start'] + params['duration'] > len(params['times']):
             params['t_start'] -= n_times
             params['hsel_patch'].set_x(params['t_start'])
         params['hsel_patch'].set_width(params['duration'])
-        params['plot_fun']()
+        params['data'] = np.zeros((len(params['data']), params['duration']))
+        params['plot_update_proj_callback'](params)
     elif event.key == 'b':
         if params['fig_options'] is not None:
             plt.close(params['fig_options'])
@@ -1328,8 +1294,8 @@ def _onpick(event, params):
 def _close_event(event, params):
     """Function to drop selected bad epochs. Called on closing of the plot."""
     params['epochs'].drop_epochs(params['bads'])
-    logger.info('Channels marked as bad: %s' % params['epochs'].info['bads'])
     params['epochs'].info['bads'] = params['info']['bads']
+    logger.info('Channels marked as bad: %s' % params['epochs'].info['bads'])
 
 
 def _resize_event(event, params):
@@ -1366,10 +1332,11 @@ def _update_channels_epochs(event, params):
     params['n_epochs'] = n_epochs
     params['duration'] = n_times * n_epochs
     params['hsel_patch'].set_width(params['duration'])
-    if params['t_start'] + n_times * n_epochs > len(params['data'][0]):
-        params['t_start'] = len(params['data'][0]) - n_times * n_epochs
+    params['data'] = np.zeros((len(params['data']), params['duration']))
+    if params['t_start'] + n_times * n_epochs > len(params['times']):
+        params['t_start'] = len(params['times']) - n_times * n_epochs
         params['hsel_patch'].set_x(params['t_start'])
-    _plot_traces(params)
+    params['plot_update_proj_callback'](params)
 
 
 def _toggle_labels(label, params):
@@ -1439,7 +1406,7 @@ def _open_options(params):
     params['fig_options'].canvas.mpl_connect('close_event', close_callback)
     try:
         params['fig_options'].canvas.draw()
-        params['fig_options'].show()
+        params['fig_options'].show(warn=False)
         if params['fig_proj'] is not None:
             params['fig_proj'].canvas.draw()
     except Exception:
@@ -1473,8 +1440,7 @@ def _plot_histogram(params):
                           x in enumerate(params['types']) if x == 'grad'])
         data.append(grads.ravel())
         types.append('grad')
-    fig = plt.figure(len(types))
-    fig.clf()
+    params['histogram'] = plt.figure()
     scalings = _handle_default('scalings')
     units = _handle_default('units')
     titles = _handle_default('titles')
@@ -1495,10 +1461,10 @@ def _plot_histogram(params):
         if rej is not None:
             ax.plot((rej, rej), (0, ax.get_ylim()[1]), color='r')
         plt.title(titles[types[idx]])
-    fig.suptitle('Peak-to-peak histogram', y=0.99)
-    fig.subplots_adjust(hspace=0.6)
+    params['histogram'].suptitle('Peak-to-peak histogram', y=0.99)
+    params['histogram'].subplots_adjust(hspace=0.6)
     try:
-        fig.show()
+        params['histogram'].show(warn=False)
     except:
         pass
     if params['fig_proj'] is not None:
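
The plot_drop_log() change above normalizes per-channel drop counts by the number of epochs actually considered rather than by the full log length; a tiny worked example with a made-up drop log:

    drop_log = [[], ['MEG 0111'], ['IGNORED'], ['MEG 0111', 'MEG 0112']]
    ignore = ('IGNORED',)
    n_used = sum(1 for d in drop_log
                 if len(d) == 0 or any(ch not in ignore for ch in d))
    print(n_used)             # 3: the purely IGNORED epoch is excluded
    print(100 * 2. / n_used)  # MEG 0111 was dropped in ~66.7% of used epochs
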
diff --git a/mne/viz/evoked.py b/mne/viz/evoked.py
index f929fd5..128ba10 100644
--- a/mne/viz/evoked.py
+++ b/mne/viz/evoked.py
@@ -11,19 +11,20 @@ from __future__ import print_function
 #
 # License: Simplified BSD
 
-from itertools import cycle
-
 import numpy as np
 
 from ..io.pick import channel_type, pick_types, _picks_by_type
 from ..externals.six import string_types
 from ..defaults import _handle_default
-from .utils import _draw_proj_checkbox, tight_layout, _check_delayed_ssp
-from ..utils import logger
+from .utils import (_draw_proj_checkbox, tight_layout, _check_delayed_ssp,
+                    plt_show)
+from ..utils import logger, _clean_names
 from ..fixes import partial
 from ..io.pick import pick_info
 from .topo import _plot_evoked_topo
-from .topomap import _prepare_topo_plot, plot_topomap
+from .topomap import (_prepare_topo_plot, plot_topomap, _check_outlines,
+                      _prepare_topomap)
+from ..channels import find_layout
 
 
 def _butterfly_onpick(event, params):
@@ -101,7 +102,7 @@ def _butterfly_onselect(xmin, xmax, ch_types, evoked, text=None):
     fig.suptitle('Average over %.2fs - %.2fs' % (xmin, xmax), fontsize=15,
                  y=0.1)
     tight_layout(pad=2.0, fig=fig)
-    plt.show()
+    plt_show()
     if text is not None:
         text.set_visible(False)
         close_callback = partial(_topo_closed, ax=ax, lines=vert_lines,
@@ -119,10 +120,42 @@ def _topo_closed(events, ax, lines, fill):
     ax.get_figure().canvas.draw()
 
 
+def _rgb(x, y, z):
+    """Helper to transform x, y, z values into RGB colors"""
+    for dim in (x, y, z):
+        dim -= dim.min()
+        dim /= dim.max()
+    return np.asarray([x, y, z]).T
+
+
+def _plot_legend(pos, colors, axis, bads, outlines='skirt'):
+    """Helper function to plot color/channel legends for butterfly plots
+    with spatial colors"""
+    from mpl_toolkits.axes_grid.inset_locator import inset_axes
+    bbox = axis.get_window_extent()  # Determine the correct size.
+    ratio = bbox.width / bbox.height
+    ax = inset_axes(axis, width=str(30 / ratio) + '%', height='30%', loc=2)
+    pos, outlines = _check_outlines(pos, outlines, None)
+    pos_x, pos_y = _prepare_topomap(pos, ax)
+    ax.scatter(pos_x, pos_y, color=colors, s=25, marker='.', zorder=0)
+    for idx in bads:
+        ax.scatter(pos_x[idx], pos_y[idx], s=5, marker='.', color='w',
+                   zorder=1)
+
+    if isinstance(outlines, dict):
+        outlines_ = dict([(k, v) for k, v in outlines.items() if k not in
+                          ['patch', 'autoshrink']])
+        for k, (x, y) in outlines_.items():
+            if 'mask' in k:
+                continue
+            ax.plot(x, y, color='k', linewidth=1)
+
+
 def _plot_evoked(evoked, picks, exclude, unit, show,
                  ylim, proj, xlim, hline, units,
                  scalings, titles, axes, plot_type,
-                 cmap=None, gfp=False):
+                 cmap=None, gfp=False, window_title=None,
+                 spatial_colors=False):
     """Aux function for plot_evoked and plot_evoked_image (cf. docstrings)
 
     Extra param is:
@@ -137,6 +170,7 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
     import matplotlib.pyplot as plt
     from matplotlib import patheffects
     from matplotlib.widgets import SpanSelector
+    info = evoked.info
     if axes is not None and proj == 'interactive':
         raise RuntimeError('Currently only single axis figures are supported'
                            ' for interactive SSP selection.')
@@ -150,16 +184,16 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
     channel_types = ['eeg', 'grad', 'mag', 'seeg']
 
     if picks is None:
-        picks = list(range(evoked.info['nchan']))
+        picks = list(range(info['nchan']))
 
-    bad_ch_idx = [evoked.ch_names.index(ch) for ch in evoked.info['bads']
-                  if ch in evoked.ch_names]
+    bad_ch_idx = [info['ch_names'].index(ch) for ch in info['bads']
+                  if ch in info['ch_names']]
     if len(exclude) > 0:
         if isinstance(exclude, string_types) and exclude == 'bads':
             exclude = bad_ch_idx
         elif (isinstance(exclude, list) and
               all(isinstance(ch, string_types) for ch in exclude)):
-            exclude = [evoked.ch_names.index(ch) for ch in exclude]
+            exclude = [info['ch_names'].index(ch) for ch in exclude]
         else:
             raise ValueError('exclude has to be a list of channel names or '
                              '"bads"')
@@ -167,7 +201,7 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
         picks = list(set(picks).difference(exclude))
     picks = np.array(picks)
 
-    types = np.array([channel_type(evoked.info, idx) for idx in picks])
+    types = np.array([channel_type(info, idx) for idx in picks])
     n_channel_types = 0
     ch_types_used = []
     for t in channel_types:
@@ -188,6 +222,8 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
 
     if axes_init is not None:
         fig = axes[0].get_figure()
+    if window_title is not None:
+        fig.canvas.set_window_title(window_title)
 
     if not len(axes) == n_channel_types:
         raise ValueError('Number of axes (%g) must match number of channel '
@@ -208,6 +244,7 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
     gfp_path_effects = [patheffects.withStroke(linewidth=5, foreground="w",
                                                alpha=0.75)]
     for ax, t in zip(axes, ch_types_used):
+        line_list = list()  # 'line_list' contains the lines for this axes
         ch_unit = units[t]
         this_scaling = scalings[t]
         if unit is False:
@@ -216,20 +253,13 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
         idx = list(picks[types == t])
         idxs.append(idx)
         if len(idx) > 0:
+            # Set amplitude scaling
+            D = this_scaling * evoked.data[idx, :]
             # Parameters for butterfly interactive plots
             if plot_type == 'butterfly':
-                if any(i in bad_ch_idx for i in idx):
-                    colors = ['k'] * len(idx)
-                    for i in bad_ch_idx:
-                        if i in idx:
-                            colors[idx.index(i)] = 'r'
-
-                    ax._get_lines.color_cycle = iter(colors)
-                else:
-                    ax._get_lines.color_cycle = cycle(['k'])
                 text = ax.annotate('Loading...', xy=(0.01, 0.1),
                                    xycoords='axes fraction', fontsize=20,
-                                   color='green')
+                                   color='green', zorder=2)
                 text.set_visible(False)
                 callback_onselect = partial(_butterfly_onselect,
                                             ch_types=ch_types_used,
@@ -240,19 +270,44 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
                                               useblit=blit,
                                               rectprops=dict(alpha=0.5,
                                                              facecolor='red')))
-            # Set amplitude scaling
-            D = this_scaling * evoked.data[idx, :]
-            if plot_type == 'butterfly':
+
                 gfp_only = (isinstance(gfp, string_types) and gfp == 'only')
                 if not gfp_only:
-                    lines.append(ax.plot(times, D.T, picker=3., zorder=0))
-                    for ii, line in zip(idx, lines[-1]):
-                        if ii in bad_ch_idx:
-                            line.set_zorder(1)
+                    if spatial_colors:
+                        chs = [info['chs'][i] for i in idx]
+                        locs3d = np.array([ch['loc'][:3] for ch in chs])
+                        x, y, z = locs3d.T
+                        colors = _rgb(x, y, z)
+                        layout = find_layout(info, ch_type=t, exclude=[])
+                        # drop channels that are not in the data
+                        used_nm = np.array(_clean_names(info['ch_names']))[idx]
+                        names = np.asarray([name for name in layout.names
+                                            if name in used_nm])
+                        name_idx = [layout.names.index(name) for name in names]
+                        if len(name_idx) < len(chs):
+                            logger.warning('Could not find layout for '
+                                           'all the channels. Legend for '
+                                           'spatial colors not drawn.')
+                        else:
+                            # find indices for bads
+                            bads = [np.where(names == bad)[0][0] for bad in
+                                    info['bads'] if bad in names]
+                            pos = layout.pos[name_idx, :2]
+                            _plot_legend(pos, colors, ax, bads=bads)
+                    else:
+                        colors = ['k'] * len(idx)
+                        for i in bad_ch_idx:
+                            if i in idx:
+                                colors[idx.index(i)] = 'r'
+                    for ch_idx in range(len(D)):
+                        line_list.append(ax.plot(times, D[ch_idx], picker=3.,
+                                                 zorder=0,
+                                                 color=colors[ch_idx])[0])
                 if gfp:  # 'only' or boolean True
-                    gfp_color = (0., 1., 0.)
+                    gfp_color = 3 * (0.,) if spatial_colors else (0., 1., 0.)
                     this_gfp = np.sqrt((D * D).mean(axis=0))
-                    this_ylim = ax.get_ylim()
+                    this_ylim = ax.get_ylim() if (ylim is None or t not in
+                                                  ylim.keys()) else ylim[t]
                     if not gfp_only:
                         y_offset = this_ylim[0]
                     else:
@@ -260,15 +315,21 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
                     this_gfp += y_offset
                     ax.fill_between(times, y_offset, this_gfp, color='none',
                                     facecolor=gfp_color, zorder=0, alpha=0.25)
-                    ax.plot(times, this_gfp, color=gfp_color, zorder=2)
+                    line_list.append(ax.plot(times, this_gfp, color=gfp_color,
+                                             zorder=2)[0])
                     ax.text(times[0] + 0.01 * (times[-1] - times[0]),
                             this_gfp[0] + 0.05 * np.diff(ax.get_ylim())[0],
                             'GFP', zorder=3, color=gfp_color,
                             path_effects=gfp_path_effects)
+                for ii, line in zip(idx, line_list):
+                    if ii in bad_ch_idx:
+                        line.set_zorder(1)
+                        if spatial_colors:
+                            line.set_linestyle("--")
                 ax.set_ylabel('data (%s)' % ch_unit)
                 # for old matplotlib, we actually need this to have a bounding
                 # box (!), so we have to put some valid text here, change
-                # alpha and  path effects later
+                # alpha and path effects later
                 texts.append(ax.text(0, 0, 'blank', zorder=2,
                                      verticalalignment='baseline',
                                      horizontalalignment='left',
@@ -298,10 +359,12 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
 
             if (plot_type == 'butterfly') and (hline is not None):
                 for h in hline:
-                    ax.axhline(h, color='r', linestyle='--', linewidth=2)
+                    c = ('r' if not spatial_colors else 'grey')
+                    ax.axhline(h, linestyle='--', linewidth=2, color=c)
+        lines.append(line_list)
     if plot_type == 'butterfly':
         params = dict(axes=axes, texts=texts, lines=lines,
-                      ch_names=evoked.ch_names, idxs=idxs, need_draw=False,
+                      ch_names=info['ch_names'], idxs=idxs, need_draw=False,
                       path_effects=path_effects, selectors=selectors)
         fig.canvas.mpl_connect('pick_event',
                                partial(_butterfly_onpick, params=params))
@@ -314,16 +377,15 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
 
     if proj == 'interactive':
         _check_delayed_ssp(evoked)
-        params = dict(evoked=evoked, fig=fig, projs=evoked.info['projs'],
-                      axes=axes, types=types, units=units, scalings=scalings,
-                      unit=unit, ch_types_used=ch_types_used, picks=picks,
+        params = dict(evoked=evoked, fig=fig, projs=info['projs'], axes=axes,
+                      types=types, units=units, scalings=scalings, unit=unit,
+                      ch_types_used=ch_types_used, picks=picks,
                       plot_update_proj_callback=_plot_update_evoked,
                       plot_type=plot_type)
         _draw_proj_checkbox(None, params)
 
-    if show and plt.get_backend() != 'agg':
-        plt.show()
-        fig.canvas.draw()  # for axes plots update axes.
+    plt_show(show)
+    fig.canvas.draw()  # for axes plots update axes.
     tight_layout(fig=fig)
 
     return fig
@@ -331,7 +393,8 @@ def _plot_evoked(evoked, picks, exclude, unit, show,
 
 def plot_evoked(evoked, picks=None, exclude='bads', unit=True, show=True,
                 ylim=None, xlim='tight', proj=False, hline=None, units=None,
-                scalings=None, titles=None, axes=None, gfp=False):
+                scalings=None, titles=None, axes=None, gfp=False,
+                window_title=None, spatial_colors=False):
     """Plot evoked data
 
     Left click to a line shows the channel name. Selecting an area by clicking
@@ -380,12 +443,20 @@ def plot_evoked(evoked, picks=None, exclude='bads', unit=True, show=True,
     gfp : bool | 'only'
         Plot GFP in green if True or "only". If "only", then the individual
         channel traces will not be shown.
+    window_title : str | None
+        The title to put at the top of the figure.
+    spatial_colors : bool
+        If True, the lines are color coded by mapping physical sensor
+        coordinates into color values. Spatially similar channels will have
+        similar colors. Bad channels will be shown dashed. If False, the good
+        channels are plotted black and bad channels red. Defaults to False.
     """
     return _plot_evoked(evoked=evoked, picks=picks, exclude=exclude, unit=unit,
                         show=show, ylim=ylim, proj=proj, xlim=xlim,
                         hline=hline, units=units, scalings=scalings,
                         titles=titles, axes=axes, plot_type="butterfly",
-                        gfp=gfp)
+                        gfp=gfp, window_title=window_title,
+                        spatial_colors=spatial_colors)
 
 
 def plot_evoked_topo(evoked, layout=None, layout_scale=0.945, color=None,
@@ -760,8 +831,7 @@ def _plot_evoked_white(evoked, noise_cov, scalings=None, rank=None, show=True):
         fig.subplots_adjust(**params)
     fig.canvas.draw()
 
-    if show is True:
-        plt.show()
+    plt_show(show)
     return fig
 
 
@@ -804,6 +874,5 @@ def plot_snr_estimate(evoked, inv, show=True):
     if evoked.comment is not None:
         ax.set_title(evoked.comment)
     plt.draw()
-    if show:
-        plt.show()
+    plt_show(show)
     return fig
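
The spatial_colors option added above maps each sensor's 3D location to an RGB
value, so spatially adjacent channels get similar line colors. The library does
this through the internal _rgb helper; the following standalone sketch shows the
idea (rescale each coordinate axis to [0, 1] and use x, y, z as R, G, B), not
the exact implementation. The test suite exercises the option via
evoked.plot(spatial_colors=True, gfp=True).

    import numpy as np

    def rgb_from_positions(locs3d):
        """Map (n_channels, 3) sensor positions to RGB values in [0, 1]."""
        locs3d = np.asarray(locs3d, dtype=float)
        mins = locs3d.min(axis=0)
        spans = locs3d.max(axis=0) - mins
        spans[spans == 0] = 1.0  # degenerate axis -> constant color channel
        return (locs3d - mins) / spans

    colors = rgb_from_positions([[0.0, 0.0, 0.0],
                                 [0.1, 0.0, 0.0],
                                 [0.0, 0.1, 0.05]])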
diff --git a/mne/viz/ica.py b/mne/viz/ica.py
index 122fd7c..ce3f527 100644
--- a/mne/viz/ica.py
+++ b/mne/viz/ica.py
@@ -12,9 +12,9 @@ from functools import partial
 
 import numpy as np
 
-from .utils import tight_layout, _prepare_trellis, _select_bads
-from .utils import _layout_figure, _plot_raw_onscroll, _mouse_click
-from .utils import _helper_raw_resize, _plot_raw_onkey
+from .utils import (tight_layout, _prepare_trellis, _select_bads,
+                    _layout_figure, _plot_raw_onscroll, _mouse_click,
+                    _helper_raw_resize, _plot_raw_onkey, plt_show)
 from .raw import _prepare_mne_browse_raw, _plot_raw_traces
 from .epochs import _prepare_mne_browse_epochs
 from .evoked import _butterfly_on_button_press, _butterfly_onpick
@@ -23,6 +23,7 @@ from ..utils import logger
 from ..defaults import _handle_default
 from ..io.meas_info import create_info
 from ..io.pick import pick_types
+from ..externals.six import string_types
 
 
 def _ica_plot_sources_onpick_(event, sources=None, ylims=None):
@@ -120,10 +121,9 @@ def plot_ica_sources(ica, inst, picks=None, exclude=None, start=None,
         sources = ica.get_sources(inst)
         if start is not None or stop is not None:
             inst = inst.crop(start, stop, copy=True)
-        fig = _plot_ica_sources_evoked(evoked=sources,
-                                       picks=picks,
-                                       exclude=exclude,
-                                       title=title, show=show)
+        fig = _plot_ica_sources_evoked(
+            evoked=sources, picks=picks, exclude=exclude, title=title,
+            labels=getattr(ica, 'labels_', None), show=show)
     else:
         raise ValueError('Data input must be of Raw or Epochs type')
 
@@ -194,14 +194,11 @@ def _plot_ica_grid(sources, start, stop,
     # register callback
     callback = partial(_ica_plot_sources_onpick_, sources=sources, ylims=ylims)
     fig.canvas.mpl_connect('pick_event', callback)
-
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
-def _plot_ica_sources_evoked(evoked, picks, exclude, title, show):
+def _plot_ica_sources_evoked(evoked, picks, exclude, title, show, labels=None):
     """Plot average over epochs in ICA space
 
     Parameters
@@ -218,6 +215,8 @@ def _plot_ica_sources_evoked(evoked, picks, exclude, title, show):
         The figure title.
     show : bool
         Show figure if True.
+    labels : None | dict
+        The ICA labels attribute.
     """
     import matplotlib.pyplot as plt
     if title is None:
@@ -234,12 +233,48 @@ def _plot_ica_sources_evoked(evoked, picks, exclude, title, show):
     texts = list()
     if picks is None:
         picks = np.arange(evoked.data.shape[0])
+    picks = np.sort(picks)
     idxs = [picks]
+    color = 'r'
+
+    if labels is not None:
+        labels_used = [k for k in labels if '/' not in k]
+
+    exclude_labels = list()
     for ii in picks:
         if ii in exclude:
-            label = 'ICA %03d' % (ii + 1)
+            line_label = 'ICA %03d' % (ii + 1)
+            if labels is not None:
+                annot = list()
+                for this_label in labels_used:
+                    indices = labels[this_label]
+                    if ii in indices:
+                        annot.append(this_label)
+
+                line_label += (' - ' + ', '.join(annot))
+            exclude_labels.append(line_label)
+        else:
+            exclude_labels.append(None)
+
+    if labels is not None:
+        # compute colors only based on label categories
+        unique_labels = set([k.split(' - ')[1] for k in exclude_labels if k])
+        label_colors = plt.cm.rainbow(np.linspace(0, 1, len(unique_labels)))
+        label_colors = dict(zip(unique_labels, label_colors))
+    else:
+        label_colors = dict((k, 'red') for k in exclude_labels)
+
+    for exc_label, ii in zip(exclude_labels, picks):
+        if exc_label is not None:
+            # create look up for color ...
+            if ' - ' in exc_label:
+                key = exc_label.split(' - ')[1]
+            else:
+                key = exc_label
+            color = label_colors[key]
+            # ... but display component number too
             lines.extend(ax.plot(times, evoked.data[ii].T, picker=3.,
-                         zorder=1, color='r', label=label))
+                         zorder=1, color=color, label=exc_label))
         else:
             lines.extend(ax.plot(times, evoked.data[ii].T, picker=3.,
                                  color='k', zorder=0))
@@ -275,13 +310,13 @@ def _plot_ica_sources_evoked(evoked, picks, exclude, title, show):
     fig.canvas.mpl_connect('button_press_event',
                            partial(_butterfly_on_button_press,
                                    params=params))
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
-def plot_ica_scores(ica, scores, exclude=None, axhline=None,
+def plot_ica_scores(ica, scores,
+                    exclude=None, labels=None,
+                    axhline=None,
                     title='ICA component scores',
                     figsize=(12, 6), show=True):
     """Plot scores related to detected components.
@@ -298,6 +333,12 @@ def plot_ica_scores(ica, scores, exclude=None, axhline=None,
     exclude : array_like of int
         The components marked for exclusion. If None (default), ICA.exclude
         will be used.
+    labels : str | list | 'ecg' | 'eog' | None
+        The labels to use for the axes titles. Defaults to None.
+        If list, should match the outer shape of `scores`.
+        If 'ecg' or 'eog', the labels_ attributes will be looked up.
+        Note that '/' is used internally for sublabels specifying ECG and
+        EOG channels.
     axhline : float
         Draw horizontal line to e.g. visualize rejection threshold.
     title : str
@@ -327,7 +368,22 @@ def plot_ica_scores(ica, scores, exclude=None, axhline=None,
     else:
         axes = [axes]
     plt.suptitle(title)
-    for this_scores, ax in zip(scores, axes):
+
+    if labels == 'ecg':
+        labels = [l for l in ica.labels_ if l.startswith('ecg/')]
+    elif labels == 'eog':
+        labels = [l for l in ica.labels_ if l.startswith('eog/')]
+        labels.sort(key=lambda l: l.split('/')[1])  # sort by index
+    elif isinstance(labels, string_types):
+        if len(axes) > 1:
+            raise ValueError('Need as many labels as axes (%i)' % len(axes))
+        labels = [labels]
+    elif isinstance(labels, (tuple, list)):
+        if len(labels) != len(axes):
+            raise ValueError('Need as many labels as axes (%i)' % len(axes))
+    elif labels is None:
+        labels = (None, None)
+    for label, this_scores, ax in zip(labels, scores, axes):
         if len(my_range) != len(this_scores):
             raise ValueError('The length of `scores` must equal the '
                              'number of ICA components.')
@@ -340,15 +396,21 @@ def plot_ica_scores(ica, scores, exclude=None, axhline=None,
             for axl in axhline:
                 ax.axhline(axl, color='r', linestyle='--')
         ax.set_ylabel('score')
+
+        if label is not None:
+            if 'eog/' in label:
+                split = label.split('/')
+                label = ', '.join([split[0], split[2]])
+            elif '/' in label:
+                label = ', '.join(label.split('/'))
+            ax.set_title('(%s)' % label)
         ax.set_xlabel('ICA components')
         ax.set_xlim(0, len(this_scores))
 
     tight_layout(fig=fig)
     if len(axes) > 1:
         plt.subplots_adjust(top=0.9)
-
-    if show:
-        plt.show()
+    plt_show(show)
     return fig
 
 
@@ -473,10 +535,7 @@ def _plot_ica_overlay_raw(data, data_cln, times, title, ch_types_used, show):
 
     fig.subplots_adjust(top=0.90)
     fig.canvas.draw()
-
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
@@ -520,17 +579,13 @@ def _plot_ica_overlay_evoked(evoked, evoked_cln, title, show):
 
     fig.subplots_adjust(top=0.90)
     fig.canvas.draw()
-
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
 def _plot_sources_raw(ica, raw, picks, exclude, start, stop, show, title,
                       block):
     """Function for plotting the ICA components as raw array."""
-    import matplotlib.pyplot as plt
     color = _handle_default('color', (0., 0., 0.))
     orig_data = ica._transform_raw(raw, 0, len(raw.times)) * 0.2
     if picks is None:
@@ -606,12 +661,10 @@ def _plot_sources_raw(ica, raw, picks, exclude, start, stop, show, title,
     params['event_times'] = None
     params['update_fun']()
     params['plot_fun']()
-    if show:
-        try:
-            plt.show(block=block)
-        except TypeError:  # not all versions have this
-            plt.show()
-
+    try:
+        plt_show(show, block=block)
+    except TypeError:  # not all versions have this
+        plt_show(show)
     return params['fig']
 
 
@@ -643,7 +696,6 @@ def _close_event(events, params):
 def _plot_sources_epochs(ica, epochs, picks, exclude, start, stop, show,
                          title, block):
     """Function for plotting the components as epochs."""
-    import matplotlib.pyplot as plt
     data = ica._transform_epochs(epochs, concatenate=True)
     eog_chs = pick_types(epochs.info, meg=False, eog=True, ref_meg=False)
     ecg_chs = pick_types(epochs.info, meg=False, ecg=True, ref_meg=False)
@@ -692,18 +744,30 @@ def _plot_sources_epochs(ica, epochs, picks, exclude, start, stop, show,
                                n_epochs=n_epochs, scalings=scalings,
                                title=title, picks=picks,
                                order=['misc', 'eog', 'ecg'])
+    params['plot_update_proj_callback'] = _update_epoch_data
+    _update_epoch_data(params)
     params['hsel_patch'].set_x(params['t_start'])
     callback_close = partial(_close_epochs_event, params=params)
     params['fig'].canvas.mpl_connect('close_event', callback_close)
-    if show:
-        try:
-            plt.show(block=block)
-        except TypeError:  # not all versions have this
-            plt.show()
-
+    try:
+        plt_show(show, block=block)
+    except TypeError:  # not all versions have this
+        plt_show(show)
     return params['fig']
 
 
+def _update_epoch_data(params):
+    """Function for preparing the data on horizontal shift."""
+    start = params['t_start']
+    n_epochs = params['n_epochs']
+    end = start + n_epochs * len(params['epochs'].times)
+    data = params['orig_data'][:, start:end]
+    types = params['types']
+    for pick, ind in enumerate(params['inds']):
+        params['data'][pick] = data[ind] / params['scalings'][types[pick]]
+    params['plot_fun']()
+
+
 def _close_epochs_event(events, params):
     """Function for excluding the selected components on close."""
     info = params['info']
@@ -757,5 +821,4 @@ def _label_clicked(pos, params):
     tight_layout(fig=fig)
     fig.subplots_adjust(top=0.95)
     fig.canvas.draw()
-
-    plt.show()
+    plt_show(True)
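
The labels handling in _plot_ica_sources_evoked above assigns one color per
label category (the part of each line label after ' - ') using a rainbow
colormap. A minimal sketch of that mapping, with made-up labels:

    import numpy as np
    import matplotlib.pyplot as plt

    exclude_labels = ['ICA 001 - ecg', 'ICA 004 - eog', None, 'ICA 007 - ecg']
    # One color per unique category, unmarked components stay black ('k'):
    unique = sorted(set(lab.split(' - ')[1] for lab in exclude_labels if lab))
    palette = plt.cm.rainbow(np.linspace(0, 1, len(unique)))
    label_colors = dict(zip(unique, palette))
    line_colors = [label_colors[lab.split(' - ')[1]] if lab else 'k'
                   for lab in exclude_labels]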
diff --git a/mne/viz/misc.py b/mne/viz/misc.py
index abcff98..2b7b864 100644
--- a/mne/viz/misc.py
+++ b/mne/viz/misc.py
@@ -24,7 +24,7 @@ from ..surface import read_surface
 from ..io.proj import make_projector
 from ..utils import logger, verbose, get_subjects_dir
 from ..io.pick import pick_types
-from .utils import tight_layout, COLORS, _prepare_trellis
+from .utils import tight_layout, COLORS, _prepare_trellis, plt_show
 
 
 @verbose
@@ -124,9 +124,7 @@ def plot_cov(cov, info, exclude=[], colorbar=True, proj=False, show_svd=True,
             plt.title(name)
             tight_layout(fig=fig_svd)
 
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig_cov, fig_svd
 
 
@@ -236,9 +234,7 @@ def plot_source_spectrogram(stcs, freq_bins, tmin=None, tmax=None,
         plt.barh(lower_bound, time_bounds[-1] - time_bounds[0], upper_bound -
                  lower_bound, time_bounds[0], color='#666666')
 
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
@@ -332,9 +328,7 @@ def _plot_mri_contours(mri_fname, surf_fnames, orientation='coronal',
 
     plt.subplots_adjust(left=0., bottom=0., right=1., top=1., wspace=0.,
                         hspace=0.)
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
@@ -524,9 +518,7 @@ def plot_events(events, sfreq=None, first_samp=0, color=None, event_id=None,
         ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
         ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
         fig.canvas.draw()
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
@@ -576,5 +568,5 @@ def plot_dipole_amplitudes(dipoles, colors=None, show=True):
     ax.set_xlabel('Time (sec)')
     ax.set_ylabel('Amplitude (nAm)')
     if show:
-        fig.show()
+        fig.show(warn=False)
     return fig
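
The warn=False passed to fig.show() above relies on a matplotlib behavior of
this era: with a non-GUI backend, Figure.show(warn=True) emits a "cannot show
the figure" warning, while warn=False makes it a silent no-op. A small sketch:

    import matplotlib
    matplotlib.use('agg')  # non-interactive backend, as on a test runner
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 0])
    fig.show(warn=False)  # no-op on 'agg'; warn=False silences the warning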
diff --git a/mne/viz/montage.py b/mne/viz/montage.py
index 184029a..1bcded2 100644
--- a/mne/viz/montage.py
+++ b/mne/viz/montage.py
@@ -2,6 +2,8 @@
 """
 import numpy as np
 
+from .utils import plt_show
+
 
 def plot_montage(montage, scale_factor=1.5, show_names=False, show=True):
     """Plot a montage
@@ -52,7 +54,5 @@ def plot_montage(montage, scale_factor=1.5, show_names=False, show=True):
     ax.set_ylabel('y')
     ax.set_zlabel('z')
 
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
diff --git a/mne/viz/raw.py b/mne/viz/raw.py
index a5a3934..6bbda89 100644
--- a/mne/viz/raw.py
+++ b/mne/viz/raw.py
@@ -13,14 +13,15 @@ from functools import partial
 import numpy as np
 
 from ..externals.six import string_types
-from ..io.pick import pick_types
+from ..io.pick import pick_types, _pick_data_channels
 from ..io.proj import setup_proj
-from ..utils import verbose, get_config
+from ..utils import verbose, get_config, logger
 from ..time_frequency import compute_raw_psd
-from .utils import _toggle_options, _toggle_proj, tight_layout
-from .utils import _layout_figure, _plot_raw_onkey, figure_nobar
-from .utils import _plot_raw_onscroll, _mouse_click
-from .utils import _helper_raw_resize, _select_bads, _onclick_help
+from .topo import _plot_topo, _plot_timeseries
+from .utils import (_toggle_options, _toggle_proj, tight_layout,
+                    _layout_figure, _plot_raw_onkey, figure_nobar,
+                    _plot_raw_onscroll, _mouse_click, plt_show,
+                    _helper_raw_resize, _select_bads, _onclick_help)
 from ..defaults import _handle_default
 
 
@@ -43,7 +44,7 @@ def _update_raw_data(params):
     start = params['t_start']
     stop = params['raw'].time_as_index(start + params['duration'])[0]
     start = params['raw'].time_as_index(start)[0]
-    data_picks = pick_types(params['raw'].info, meg=True, eeg=True)
+    data_picks = _pick_data_channels(params['raw'].info)
     data, times = params['raw'][:, start:stop]
     if params['projector'] is not None:
         data = np.dot(params['projector'], data)
@@ -76,7 +77,7 @@ def _pick_bad_channels(event, params):
     _plot_update_raw_proj(params, None)
 
 
-def plot_raw(raw, events=None, duration=10.0, start=0.0, n_channels=None,
+def plot_raw(raw, events=None, duration=10.0, start=0.0, n_channels=20,
              bgcolor='w', color=None, bad_color=(0.8, 0.8, 0.8),
              event_color='cyan', scalings=None, remove_dc=True, order='type',
              show_options=False, title=None, show=True, block=False,
@@ -95,7 +96,7 @@ def plot_raw(raw, events=None, duration=10.0, start=0.0, n_channels=None,
     start : float
         Initial time to show (can be changed dynamically once plotted).
     n_channels : int
-        Number of channels to plot at once.
+        Number of channels to plot at once. Defaults to 20.
     bgcolor : color object
         Color of the background.
     color : dict | color object | None
@@ -234,10 +235,21 @@ def plot_raw(raw, events=None, duration=10.0, start=0.0, n_channels=None,
         inds += [pick_types(info, meg=t, ref_meg=False, exclude=[])]
         types += [t] * len(inds[-1])
     pick_kwargs = dict(meg=False, ref_meg=False, exclude=[])
-    for t in ['eeg', 'eog', 'ecg', 'emg', 'ref_meg', 'stim', 'resp',
+    for t in ['eeg', 'seeg', 'eog', 'ecg', 'emg', 'ref_meg', 'stim', 'resp',
               'misc', 'chpi', 'syst', 'ias', 'exci']:
         pick_kwargs[t] = True
         inds += [pick_types(raw.info, **pick_kwargs)]
+        if t == 'seeg' and len(inds[-1]) > 0:
+            # XXX hack to work around fiff mess
+            new_picks = [ind for ind in inds[-1] if
+                         not raw.ch_names[ind].startswith('CHPI')]
+            if len(new_picks) != len(inds[-1]):
+                inds[-1] = new_picks
+            else:
+                logger.warning('Conflicting FIFF constants detected. SEEG '
+                               'FIFF data saved before mne version 0.11 will '
+                               'not work with mne version 0.12! Save the raw '
+                               'files again to fix the FIFF tags!')
         types += [t] * len(inds[-1])
         pick_kwargs[t] = False
     inds = np.concatenate(inds).astype(int)
@@ -331,11 +343,10 @@ def plot_raw(raw, events=None, duration=10.0, start=0.0, n_channels=None,
     if show_options is True:
         _toggle_options(None, params)
 
-    if show:
-        try:
-            plt.show(block=block)
-        except TypeError:  # not all versions have this
-            plt.show()
+    try:
+        plt_show(show, block=block)
+    except TypeError:  # not all versions have this
+        plt_show(show)
 
     return params['fig']
 
@@ -461,9 +472,8 @@ def plot_raw_psd(raw, tmin=0., tmax=np.inf, fmin=0, fmax=np.inf, proj=False,
     Returns
     -------
     fig : instance of matplotlib figure
-        Figure distributing one image per channel across sensor topography.
+        Figure with frequency spectra of the data channels.
     """
-    import matplotlib.pyplot as plt
     fig, picks_list, titles_list, ax_list, make_label = _set_psd_plot_params(
         raw.info, proj, picks, ax, area_mode)
 
@@ -472,7 +482,7 @@ def plot_raw_psd(raw, tmin=0., tmax=np.inf, fmin=0, fmax=np.inf, proj=False,
         psds, freqs = compute_raw_psd(raw, tmin=tmin, tmax=tmax, picks=picks,
                                       fmin=fmin, fmax=fmax, proj=proj,
                                       n_fft=n_fft, n_overlap=n_overlap,
-                                      n_jobs=n_jobs, verbose=None)
+                                      n_jobs=n_jobs, verbose=verbose)
 
         # Convert PSDs to dB
         if dB:
@@ -502,8 +512,7 @@ def plot_raw_psd(raw, tmin=0., tmax=np.inf, fmin=0, fmax=np.inf, proj=False,
             ax.set_xlim(freqs[0], freqs[-1])
     if make_label:
         tight_layout(pad=0.1, h_pad=0.1, w_pad=0.1, fig=fig)
-    if show is True:
-        plt.show()
+    plt_show(show)
     return fig
 
 
@@ -670,3 +679,78 @@ def _plot_raw_traces(params, inds, color, bad_color, event_lines=None,
     # CGContextRef error on the MacOSX backend :(
     if params['fig_proj'] is not None:
         params['fig_proj'].canvas.draw()
+
+
+def plot_raw_psd_topo(raw, tmin=0., tmax=None, fmin=0, fmax=100, proj=False,
+                      n_fft=2048, n_overlap=0, layout=None, color='w',
+                      fig_facecolor='k', axis_facecolor='k', dB=True,
+                      show=True, n_jobs=1, verbose=None):
+    """Function for plotting channel wise frequency spectra as topography.
+
+    Parameters
+    ----------
+    raw : instance of io.Raw
+        The raw instance to use.
+    tmin : float
+        Start time for calculations. Defaults to zero.
+    tmax : float | None
+        End time for calculations. If None (default), the end of data is used.
+    fmin : float
+        Start frequency to consider. Defaults to zero.
+    fmax : float
+        End frequency to consider. Defaults to 100.
+    proj : bool
+        Apply projection. Defaults to False.
+    n_fft : int
+        Number of points to use in Welch FFT calculations. Defaults to 2048.
+    n_overlap : int
+        The number of points of overlap between blocks. Defaults to 0
+        (no overlap).
+    layout : instance of Layout | None
+        Layout instance specifying sensor positions (does not need to be
+        specified for Neuromag data). If None (default), the correct layout is
+        inferred from the data.
+    color : str | tuple
+        A matplotlib-compatible color to use for the curves. Defaults to white.
+    fig_facecolor : str | tuple
+        A matplotlib-compatible color to use for the figure background.
+        Defaults to black.
+    axis_facecolor : str | tuple
+        A matplotlib-compatible color to use for the axis background.
+        Defaults to black.
+    dB : bool
+        If True, transform data to decibels. Defaults to True.
+    show : bool
+        Show figure if True. Defaults to True.
+    n_jobs : int
+        Number of jobs to run in parallel. Defaults to 1.
+    verbose : bool, str, int, or None
+        If not None, override default verbose level (see mne.verbose).
+
+    Returns
+    -------
+    fig : instance of matplotlib figure
+        Figure distributing one image per channel across sensor topography.
+    """
+    if layout is None:
+        from ..channels.layout import find_layout
+        layout = find_layout(raw.info)
+
+    psds, freqs = compute_raw_psd(raw, tmin=tmin, tmax=tmax, fmin=fmin,
+                                  fmax=fmax, proj=proj, n_fft=n_fft,
+                                  n_overlap=n_overlap, n_jobs=n_jobs,
+                                  verbose=verbose)
+    if dB:
+        psds = 10 * np.log10(psds)
+        y_label = 'dB'
+    else:
+        y_label = 'Power'
+    plot_fun = partial(_plot_timeseries, data=[psds], color=color, times=freqs)
+
+    fig = _plot_topo(raw.info, times=freqs, show_func=plot_fun, layout=layout,
+                     axis_facecolor=axis_facecolor,
+                     fig_facecolor=fig_facecolor, x_label='Frequency (Hz)',
+                     y_label=y_label)
+
+    plt_show(show)
+    return fig
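
When dB=True, the new plot_raw_psd_topo converts the Welch power estimates to
decibels before plotting; the test suite drives the full path through
raw.plot_psd_topo(). The conversion itself is just (synthetic values here):

    import numpy as np

    psds = np.abs(np.random.RandomState(42).randn(4, 128)) ** 2  # fake PSDs
    psds_db = 10 * np.log10(psds)  # same step as the dB=True branch above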
diff --git a/mne/viz/tests/test_3d.py b/mne/viz/tests/test_3d.py
index 7baa32a..3e1e7b0 100644
--- a/mne/viz/tests/test_3d.py
+++ b/mne/viz/tests/test_3d.py
@@ -55,7 +55,7 @@ def test_plot_sparse_source_estimates():
     stc_data = np.zeros((n_verts * n_time))
     stc_size = stc_data.size
     stc_data[(np.random.rand(stc_size / 20) * stc_size).astype(int)] = \
-        np.random.rand(stc_data.size / 20)
+        np.random.RandomState(0).rand(stc_data.size / 20)
     stc_data.shape = (n_verts, n_time)
     stc = SourceEstimate(stc_data, vertices, 1, 1)
     colormap = 'mne_analyze'
@@ -118,7 +118,7 @@ def test_limits_to_control_points():
     vertices = [s['vertno'] for s in sample_src]
     n_time = 5
     n_verts = sum(len(v) for v in vertices)
-    stc_data = np.random.rand((n_verts * n_time))
+    stc_data = np.random.RandomState(0).rand((n_verts * n_time))
     stc_data.shape = (n_verts, n_time)
     stc = SourceEstimate(stc_data, vertices, 1, 1, 'sample')
 
diff --git a/mne/viz/tests/test_circle.py b/mne/viz/tests/test_circle.py
index 1999221..0b72130 100644
--- a/mne/viz/tests/test_circle.py
+++ b/mne/viz/tests/test_circle.py
@@ -82,7 +82,7 @@ def test_plot_connectivity_circle():
     group_boundaries = [0, len(label_names) / 2]
     node_angles = circular_layout(label_names, node_order, start_pos=90,
                                   group_boundaries=group_boundaries)
-    con = np.random.randn(68, 68)
+    con = np.random.RandomState(0).randn(68, 68)
     plot_connectivity_circle(con, label_names, n_lines=300,
                              node_angles=node_angles, title='test',
                              )
diff --git a/mne/viz/tests/test_epochs.py b/mne/viz/tests/test_epochs.py
index 6f3a3b4..683ca6b 100644
--- a/mne/viz/tests/test_epochs.py
+++ b/mne/viz/tests/test_epochs.py
@@ -18,7 +18,7 @@ from mne import pick_types
 from mne.utils import run_tests_if_main, requires_version
 from mne.channels import read_layout
 
-from mne.viz import plot_drop_log, plot_epochs_image, plot_image_epochs
+from mne.viz import plot_drop_log, plot_epochs_image
 from mne.viz.utils import _fake_click
 
 # Set our plotters to test mode
@@ -132,9 +132,6 @@ def test_plot_epochs_image():
     epochs = _get_epochs()
     plot_epochs_image(epochs, picks=[1, 2])
     plt.close('all')
-    with warnings.catch_warnings(record=True):
-        plot_image_epochs(epochs, picks=[1, 2])
-        plt.close('all')
 
 
 def test_plot_drop_log():
diff --git a/mne/viz/tests/test_evoked.py b/mne/viz/tests/test_evoked.py
index e2c308e..529616e 100644
--- a/mne/viz/tests/test_evoked.py
+++ b/mne/viz/tests/test_evoked.py
@@ -81,7 +81,7 @@ def test_plot_evoked():
     import matplotlib.pyplot as plt
     evoked = _get_epochs().average()
     with warnings.catch_warnings(record=True):
-        fig = evoked.plot(proj=True, hline=[1], exclude=[])
+        fig = evoked.plot(proj=True, hline=[1], exclude=[], window_title='foo')
         # Test a click
         ax = fig.get_axes()[0]
         line = ax.lines[0]
@@ -89,9 +89,9 @@ def test_plot_evoked():
                     [line.get_xdata()[0], line.get_ydata()[0]], 'data')
         _fake_click(fig, ax,
                     [ax.get_xlim()[0], ax.get_ylim()[1]], 'data')
-        # plot with bad channels excluded
+        # plot with bad channels excluded & spatial_colors
         evoked.plot(exclude='bads')
-        evoked.plot(exclude=evoked.info['bads'])  # does the same thing
+        evoked.plot(exclude=evoked.info['bads'], spatial_colors=True, gfp=True)
 
         # test selective updating of dict keys is working.
         evoked.plot(hline=[1], units=dict(mag='femto foo'))
@@ -107,8 +107,7 @@ def test_plot_evoked():
                       proj='interactive', axes='foo')
         plt.close('all')
 
-        # test GFP plot overlay
-        evoked.plot(gfp=True)
+        # test GFP only
         evoked.plot(gfp='only')
         assert_raises(ValueError, evoked.plot, gfp='foo')
 
diff --git a/mne/viz/tests/test_ica.py b/mne/viz/tests/test_ica.py
index ae0ce93..ff1048f 100644
--- a/mne/viz/tests/test_ica.py
+++ b/mne/viz/tests/test_ica.py
@@ -105,6 +105,9 @@ def test_plot_ica_sources():
         ica.plot_sources(evoked, exclude=[0])
         ica.exclude = [0]
         ica.plot_sources(evoked)  # does the same thing
+        ica.labels_ = dict(eog=[0])
+        ica.labels_['eog/0/crazy-channel'] = [0]
+        ica.plot_sources(evoked)  # now with labels
     assert_raises(ValueError, ica.plot_sources, 'meeow')
     plt.close('all')
 
@@ -139,7 +142,18 @@ def test_plot_ica_scores():
     ica = ICA(noise_cov=read_cov(cov_fname), n_components=2,
               max_pca_components=3, n_pca_components=3)
     ica.fit(raw, picks=picks)
+    ica.labels_ = dict()
+    ica.labels_['eog/0/foo'] = 0
+    ica.labels_['eog'] = 0
+    ica.labels_['ecg'] = 1
     ica.plot_scores([0.3, 0.2], axhline=[0.1, -0.1])
+    ica.plot_scores([0.3, 0.2], axhline=[0.1, -0.1], labels='foo')
+    ica.plot_scores([0.3, 0.2], axhline=[0.1, -0.1], labels='eog')
+    ica.plot_scores([0.3, 0.2], axhline=[0.1, -0.1], labels='ecg')
+    assert_raises(
+        ValueError,
+        ica.plot_scores,
+        [0.3, 0.2], axhline=[0.1, -0.1], labels=['one', 'one-too-many'])
     assert_raises(ValueError, ica.plot_scores, [0.2])
     plt.close('all')
 
diff --git a/mne/viz/tests/test_raw.py b/mne/viz/tests/test_raw.py
index 311215c..71632e8 100644
--- a/mne/viz/tests/test_raw.py
+++ b/mne/viz/tests/test_raw.py
@@ -10,6 +10,7 @@ from numpy.testing import assert_raises
 from mne import io, read_events, pick_types
 from mne.utils import requires_version, run_tests_if_main
 from mne.viz.utils import _fake_click
+from mne.viz import plot_raw
 
 # Set our plotters to test mode
 import matplotlib
@@ -85,7 +86,7 @@ def test_plot_raw():
         # Color setting
         assert_raises(KeyError, raw.plot, event_color={0: 'r'})
         assert_raises(TypeError, raw.plot, event_color={'foo': 'r'})
-        fig = raw.plot(events=events, event_color={-1: 'r', 998: 'b'})
+        fig = plot_raw(raw, events=events, event_color={-1: 'r', 998: 'b'})
         plt.close('all')
 
 
@@ -120,6 +121,9 @@ def test_plot_raw_psd():
     assert_raises(ValueError, raw.plot_psd, ax=ax)
     raw.plot_psd(picks=picks, ax=ax)
     plt.close('all')
+    # topo psd
+    raw.plot_psd_topo()
+    plt.close('all')
 
 
 run_tests_if_main()
diff --git a/mne/viz/tests/test_topo.py b/mne/viz/tests/test_topo.py
index 127c4af..6ae52c0 100644
--- a/mne/viz/tests/test_topo.py
+++ b/mne/viz/tests/test_topo.py
@@ -19,7 +19,7 @@ from mne.channels import read_layout
 from mne.time_frequency.tfr import AverageTFR
 from mne.utils import run_tests_if_main
 
-from mne.viz import (plot_topo, plot_topo_image_epochs, _get_presser,
+from mne.viz import (plot_topo_image_epochs, _get_presser,
                      mne_analyze_colormap, plot_evoked_topo)
 from mne.viz.topo import _plot_update_evoked_topo
 
@@ -85,16 +85,18 @@ def test_plot_topo():
     # test scaling
     with warnings.catch_warnings(record=True):
         for ylim in [dict(mag=[-600, 600]), None]:
-            plot_topo([picked_evoked] * 2, layout, ylim=ylim)
+            plot_evoked_topo([picked_evoked] * 2, layout, ylim=ylim)
 
         for evo in [evoked, [evoked, picked_evoked]]:
-            assert_raises(ValueError, plot_topo, evo, layout, color=['y', 'b'])
+            assert_raises(ValueError, plot_evoked_topo, evo, layout,
+                          color=['y', 'b'])
 
         evoked_delayed_ssp = _get_epochs_delayed_ssp().average()
         ch_names = evoked_delayed_ssp.ch_names[:3]  # make it faster
         picked_evoked_delayed_ssp = pick_channels_evoked(evoked_delayed_ssp,
                                                          ch_names)
-        fig = plot_topo(picked_evoked_delayed_ssp, layout, proj='interactive')
+        fig = plot_evoked_topo(picked_evoked_delayed_ssp, layout,
+                               proj='interactive')
         func = _get_presser(fig)
         event = namedtuple('Event', 'inaxes')
         func(event(inaxes=fig.axes[0]))
diff --git a/mne/viz/tests/test_topomap.py b/mne/viz/tests/test_topomap.py
index 3504bf4..d9dd5d6 100644
--- a/mne/viz/tests/test_topomap.py
+++ b/mne/viz/tests/test_topomap.py
@@ -69,6 +69,7 @@ def test_plot_topomap():
     assert_raises(ValueError, ev_bad.plot_topomap, head_pos=dict(center=0))
     assert_raises(ValueError, ev_bad.plot_topomap, times=[-100])  # bad time
     assert_raises(ValueError, ev_bad.plot_topomap, times=[[0]])  # bad time
+    assert_raises(ValueError, ev_bad.plot_topomap, times=[[0]])  # bad time
 
     evoked.plot_topomap(0.1, layout=layout, scale=dict(mag=0.1))
     plt.close('all')
@@ -194,9 +195,12 @@ def test_plot_topomap():
                   times, ch_type='eeg')
     plt.close('all')
 
-    # Test error messages for invalid pos parameter
+    # Error for missing names
     n_channels = len(pos)
     data = np.ones(n_channels)
+    assert_raises(ValueError, plot_topomap, data, pos, show_names=True)
+
+    # Test error messages for invalid pos parameter
     pos_1d = np.zeros(n_channels)
     pos_3d = np.zeros((n_channels, 2, 2))
     assert_raises(ValueError, plot_topomap, data, pos_1d)
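
The new ValueError makes show_names=True fail fast when no names are given. A
hedged usage sketch with toy positions (a real call would take positions from a
Layout or the channel locations):

    import numpy as np
    from mne.viz import plot_topomap

    pos = np.array([[-0.05, 0.0], [0.05, 0.0], [0.0, 0.05]])  # toy 2D layout
    data = np.array([1.0, -1.0, 0.5])
    im, cont = plot_topomap(data, pos, names=['A1', 'A2', 'A3'],
                            show_names=True, show=False)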
diff --git a/mne/viz/tests/test_utils.py b/mne/viz/tests/test_utils.py
index 7a337ac..3a8b69d 100644
--- a/mne/viz/tests/test_utils.py
+++ b/mne/viz/tests/test_utils.py
@@ -41,7 +41,7 @@ def test_clickable_image():
     """Test the ClickableImage class."""
     # Gen data and create clickable image
     import matplotlib.pyplot as plt
-    im = np.random.randn(100, 100)
+    im = np.random.RandomState(0).randn(100, 100)
     clk = ClickableImage(im)
     clicks = [(12, 8), (46, 48), (10, 24)]
 
@@ -63,9 +63,10 @@ def test_clickable_image():
 def test_add_background_image():
     """Test adding background image to a figure."""
     import matplotlib.pyplot as plt
+    rng = np.random.RandomState(0)
     f, axs = plt.subplots(1, 2)
-    x, y = np.random.randn(2, 10)
-    im = np.random.randn(10, 10)
+    x, y = rng.randn(2, 10)
+    im = rng.randn(10, 10)
     axs[0].scatter(x, y)
     axs[1].scatter(y, x)
     for ax in axs:
diff --git a/mne/viz/topo.py b/mne/viz/topo.py
index bc869a7..72512fd 100644
--- a/mne/viz/topo.py
+++ b/mne/viz/topo.py
@@ -17,11 +17,11 @@ import numpy as np
 
 from ..io.pick import channel_type, pick_types
 from ..fixes import normalize_colors
-from ..utils import _clean_names, deprecated
+from ..utils import _clean_names
 
 from ..defaults import _handle_default
 from .utils import (_check_delayed_ssp, COLORS, _draw_proj_checkbox,
-                    add_background_image)
+                    add_background_image, plt_show)
 
 
 def iter_topography(info, layout=None, on_pick=None, fig=None,
@@ -239,78 +239,6 @@ def _check_vlim(vlim):
     return not np.isscalar(vlim) and vlim is not None
 
 
- at deprecated("It will be removed in version 0.11. "
-            "Please use evoked.plot_topo or viz.evoked.plot_evoked_topo "
-            "for list of evoked instead.")
-def plot_topo(evoked, layout=None, layout_scale=0.945, color=None,
-              border='none', ylim=None, scalings=None, title=None, proj=False,
-              vline=[0.0], fig_facecolor='k', fig_background=None,
-              axis_facecolor='k', font_color='w', show=True):
-    """Plot 2D topography of evoked responses.
-
-    Clicking on the plot of an individual sensor opens a new figure showing
-    the evoked response for the selected sensor.
-
-    Parameters
-    ----------
-    evoked : list of Evoked | Evoked
-        The evoked response to plot.
-    layout : instance of Layout | None
-        Layout instance specifying sensor positions (does not need to
-        be specified for Neuromag data). If possible, the correct layout is
-        inferred from the data.
-    layout_scale: float
-        Scaling factor for adjusting the relative size of the layout
-        on the canvas
-    color : list of color objects | color object | None
-        Everything matplotlib accepts to specify colors. If not list-like,
-        the color specified will be repeated. If None, colors are
-        automatically drawn.
-    border : str
-        matplotlib borders style to be used for each sensor plot.
-    ylim : dict | None
-        ylim for plots. The value determines the upper and lower subplot
-        limits. e.g. ylim = dict(eeg=[-200e-6, 200e6]). Valid keys are eeg,
-        mag, grad, misc. If None, the ylim parameter for each channel is
-        determined by the maximum absolute peak.
-    scalings : dict | None
-        The scalings of the channel types to be applied for plotting. If None,`
-        defaults to `dict(eeg=1e6, grad=1e13, mag=1e15)`.
-    title : str
-        Title of the figure.
-    proj : bool | 'interactive'
-        If true SSP projections are applied before display. If 'interactive',
-        a check box for reversible selection of SSP projection vectors will
-        be shown.
-    vline : list of floats | None
-        The values at which to show a vertical line.
-    fig_facecolor : str | obj
-        The figure face color. Defaults to black.
-    fig_background : None | numpy ndarray
-        A background image for the figure. This must work with a call to
-        plt.imshow. Defaults to None.
-    axis_facecolor : str | obj
-        The face color to be used for each sensor plot. Defaults to black.
-    font_color : str | obj
-        The color of text in the colorbar and title. Defaults to white.
-    show : bool
-        Show figure if True.
-
-    Returns
-    -------
-    fig : Instance of matplotlib.figure.Figure
-        Images of evoked responses at sensor locations
-    """
-    return _plot_evoked_topo(evoked=evoked, layout=layout,
-                             layout_scale=layout_scale, color=color,
-                             border=border, ylim=ylim, scalings=scalings,
-                             title=title, proj=proj, vline=vline,
-                             fig_facecolor=fig_facecolor,
-                             fig_background=fig_background,
-                             axis_facecolor=axis_facecolor,
-                             font_color=font_color, show=show)
-
-
 def _plot_evoked_topo(evoked, layout=None, layout_scale=0.945, color=None,
                       border='none', ylim=None, scalings=None, title=None,
                       proj=False, vline=[0.0], fig_facecolor='k',
@@ -371,8 +299,6 @@ def _plot_evoked_topo(evoked, layout=None, layout_scale=0.945, color=None,
     fig : Instance of matplotlib.figure.Figure
         Images of evoked responses at sensor locations
     """
-    import matplotlib.pyplot as plt
-
     if not type(evoked) in (tuple, list):
         evoked = [evoked]
 
@@ -473,9 +399,7 @@ def _plot_evoked_topo(evoked, layout=None, layout_scale=0.945, color=None,
                       projs=evoked[0].info['projs'], fig=fig)
         _draw_proj_checkbox(None, params)
 
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
@@ -595,7 +519,6 @@ def plot_topo_image_epochs(epochs, layout=None, sigma=0., vmin=None,
     fig : instance of matplotlib figure
         Figure distributing one image per channel across sensor topography.
     """
-    import matplotlib.pyplot as plt
     scalings = _handle_default('scalings', scalings)
     data = epochs.get_data()
     if vmin is None:
@@ -617,6 +540,5 @@ def plot_topo_image_epochs(epochs, layout=None, sigma=0., vmin=None,
                      fig_facecolor=fig_facecolor,
                      font_color=font_color, border=border,
                      x_label='Time (s)', y_label='Epoch')
-    if show:
-        plt.show()
+    plt_show(show)
     return fig
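
With the deprecated plot_topo wrapper removed on schedule, callers move to
plot_evoked_topo (or the evoked.plot_topo method), which takes the same
arguments. A migration sketch, assuming evokeds and layout already exist:

    from mne.viz import plot_evoked_topo
    # old, deprecated since 0.10 and removed above:
    #     fig = plot_topo(evokeds, layout, ylim=ylim)
    # new:
    #     fig = plot_evoked_topo(evokeds, layout, ylim=ylim)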
diff --git a/mne/viz/topomap.py b/mne/viz/topomap.py
index 1be92dc..e035e6d 100644
--- a/mne/viz/topomap.py
+++ b/mne/viz/topomap.py
@@ -21,11 +21,13 @@ from ..io.constants import FIFF
 from ..io.pick import pick_types
 from ..utils import _clean_names, _time_mask, verbose, logger
 from .utils import (tight_layout, _setup_vmin_vmax, _prepare_trellis,
-                    _check_delayed_ssp, _draw_proj_checkbox, figure_nobar)
+                    _check_delayed_ssp, _draw_proj_checkbox, figure_nobar,
+                    plt_show)
 from ..time_frequency import compute_epochs_psd
 from ..defaults import _handle_default
 from ..channels.layout import _find_topomap_coords
 from ..fixes import _get_argrelmax
+from ..externals.six import string_types
 
 
 def _prepare_topo_plot(inst, ch_type, layout):
@@ -221,7 +223,7 @@ def plot_projs_topomap(projs, layout=None, cmap='RdBu_r', sensors=True,
             pos = l.pos[idx]
             if is_vv and grad_pairs:
                 from ..channels.layout import _merge_grad_data
-                shape = (len(idx) / 2, 2, -1)
+                shape = (len(idx) // 2, 2, -1)
                 pos = pos.reshape(shape).mean(axis=1)
                 data = _merge_grad_data(data[grad_pairs]).ravel()
 
@@ -238,9 +240,7 @@ def plot_projs_topomap(projs, layout=None, cmap='RdBu_r', sensors=True,
             raise RuntimeError('Cannot find a proper layout for projection %s'
                                % proj['desc'])
     tight_layout(fig=axes[0].get_figure())
-    if show and plt.get_backend() != 'agg':
-        plt.show()
-
+    plt_show(show)
     return axes[0].get_figure()
 
 
@@ -403,6 +403,7 @@ def plot_topomap(data, pos, vmin=None, vmax=None, cmap='RdBu_r', sensors=True,
         delete the prefix 'MEG ' from all channel names, pass the function
         lambda x: x.replace('MEG ', ''). If `mask` is not None, only
         significant sensors will be shown.
+        If `True`, a list of names must be provided (see `names` keyword).
     mask : ndarray of bool, shape (n_channels, n_times) | None
         The channels to be marked as significant at a given time point.
         Indices set to `True` will be considered. Defaults to None.
@@ -486,16 +487,9 @@ def plot_topomap(data, pos, vmin=None, vmax=None, cmap='RdBu_r', sensors=True,
     vmin, vmax = _setup_vmin_vmax(data, vmin, vmax)
 
     pos, outlines = _check_outlines(pos, outlines, head_pos)
-    pos_x = pos[:, 0]
-    pos_y = pos[:, 1]
 
     ax = axis if axis else plt.gca()
-    ax.set_xticks([])
-    ax.set_yticks([])
-    ax.set_frame_on(False)
-    if any([not pos_y.any(), not pos_x.any()]):
-        raise RuntimeError('No position information found, cannot compute '
-                           'geometries for topomap.')
+    pos_x, pos_y = _prepare_topomap(pos, ax)
     if outlines is None:
         xmin, xmax = pos_x.min(), pos_x.max()
         ymin, ymax = pos_y.min(), pos_y.max()
@@ -587,6 +581,9 @@ def plot_topomap(data, pos, vmin=None, vmax=None, cmap='RdBu_r', sensors=True,
             ax.plot(x, y, color='k', linewidth=linewidth, clip_on=False)
 
     if show_names:
+        if names is None:
+            raise ValueError("To show names, a list of names must be provided"
+                             " (see `names` keyword).")
         if show_names is True:
             def _show_names(x):
                 return x
@@ -604,8 +601,7 @@ def plot_topomap(data, pos, vmin=None, vmax=None, cmap='RdBu_r', sensors=True,
 
     if onselect is not None:
         ax.RS = RectangleSelector(ax, onselect=onselect)
-    if show:
-        plt.show()
+    plt_show(show)
     return im, cont
 
 
@@ -805,9 +801,7 @@ def plot_ica_components(ica, picks=None, ch_type=None, res=64,
     tight_layout(fig=fig)
     fig.subplots_adjust(top=0.95)
     fig.canvas.draw()
-
-    if show is True:
-        plt.show()
+    plt_show(show)
     return fig
 
 
@@ -995,9 +989,7 @@ def plot_tfr_topomap(tfr, tmin=None, tmax=None, fmin=None, fmax=None,
         cbar.ax.tick_params(labelsize=12)
         cbar.ax.set_title('AU')
 
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
@@ -1137,14 +1129,16 @@ def plot_evoked_topomap(evoked, times="auto", ch_type=None, layout=None,
     if isinstance(axes, plt.Axes):
         axes = [axes]
 
-    if times == "peaks":
-        npeaks = 10 if axes is None else len(axes)
-        times = _find_peaks(evoked, npeaks)
-    elif times == "auto":
-        if axes is None:
-            times = np.linspace(evoked.times[0], evoked.times[-1], 10)
-        else:
-            times = np.linspace(evoked.times[0], evoked.times[-1], len(axes))
+    if isinstance(times, string_types):
+        if times == "peaks":
+            npeaks = 10 if axes is None else len(axes)
+            times = _find_peaks(evoked, npeaks)
+        elif times == "auto":
+            if axes is None:
+                times = np.linspace(evoked.times[0], evoked.times[-1], 10)
+            else:
+                times = np.linspace(evoked.times[0], evoked.times[-1],
+                                    len(axes))
     elif np.isscalar(times):
         times = [times]
 
@@ -1263,7 +1257,6 @@ def plot_evoked_topomap(evoked, times="auto", ch_type=None, layout=None,
 
     if title is not None:
         plt.suptitle(title, verticalalignment='top', size='x-large')
-        tight_layout(pad=size, fig=fig)
 
     if colorbar:
         cax = plt.subplot(1, n_times + 1, n_times + 1)
@@ -1289,9 +1282,7 @@ def plot_evoked_topomap(evoked, times="auto", ch_type=None, layout=None,
                       plot_update_proj_callback=_plot_update_evoked_topomap)
         _draw_proj_checkbox(None, params)
 
-    if show:
-        plt.show()
-
+    plt_show(show)
     return fig
 
 
@@ -1532,8 +1523,7 @@ def plot_psds_topomap(
                                  colorbar=True, unit=unit, cbar_fmt=cbar_fmt)
     tight_layout(fig=fig)
     fig.canvas.draw()
-    if show:
-        plt.show()
+    plt_show(show)
     return fig
 
 
@@ -1600,7 +1590,7 @@ def _onselect(eclick, erelease, tfr, pos, ch_type, itmin, itmax, ifmin, ifmax,
         fig[0].get_axes()[1].cbar.on_mappable_changed(mappable=img)
     fig[0].canvas.draw()
     plt.figure(fig[0].number)
-    plt.show()
+    plt_show(True)
 
 
 def _find_peaks(evoked, npeaks):
@@ -1620,3 +1610,17 @@ def _find_peaks(evoked, npeaks):
     if len(times) == 0:
         times = [evoked.times[gfp.argmax()]]
     return times
+
+
+def _prepare_topomap(pos, ax):
+    """Helper for preparing the topomap."""
+    pos_x = pos[:, 0]
+    pos_y = pos[:, 1]
+
+    ax.set_xticks([])
+    ax.set_yticks([])
+    ax.set_frame_on(False)
+    if any([not pos_y.any(), not pos_x.any()]):
+        raise RuntimeError('No position information found, cannot compute '
+                           'geometries for topomap.')
+    return pos_x, pos_y
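
The len(idx) // 2 change above is a Python 3 fix: pairing planar gradiometers
needs integer division. A numpy sketch of the pairing, using an RMS-style
combination that may differ in detail from
mne.channels.layout._merge_grad_data:

    import numpy as np

    rng = np.random.RandomState(0)
    pos = rng.rand(6, 2)     # 6 planar gradiometers = 3 pairs (toy values)
    data = rng.randn(6, 100)
    n_pairs = len(pos) // 2  # floor division, as in the fix above
    pair_pos = pos.reshape((n_pairs, 2, -1)).mean(axis=1)  # midpoint per pair
    pair_rms = np.sqrt((data.reshape(n_pairs, 2, -1) ** 2).mean(axis=1))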
diff --git a/mne/viz/utils.py b/mne/viz/utils.py
index 89796a3..da30cf6 100644
--- a/mne/viz/utils.py
+++ b/mne/viz/utils.py
@@ -49,6 +49,14 @@ def _setup_vmin_vmax(data, vmin, vmax, norm=False):
     return vmin, vmax
 
 
+def plt_show(show=True, **kwargs):
+    """Helper to show a figure while suppressing warnings"""
+    import matplotlib
+    import matplotlib.pyplot as plt
+    if show and matplotlib.get_backend() != 'agg':
+        plt.show(**kwargs)
+
+
 def tight_layout(pad=1.2, h_pad=None, w_pad=None, fig=None):
     """ Adjust subplot parameters to give specified padding.
 
@@ -358,7 +366,7 @@ def _draw_proj_checkbox(event, params, draw_current_state=True):
     # this should work for non-test cases
     try:
         fig_proj.canvas.draw()
-        fig_proj.show()
+        fig_proj.show(warn=False)
     except Exception:
         pass
 
@@ -456,12 +464,12 @@ def compare_fiff(fname_1, fname_2, fname_out=None, show=True, indent='    ',
                        read_limit=read_limit, max_str=max_str)
     diff = difflib.HtmlDiff().make_file(file_1, file_2, fname_1, fname_2)
     if fname_out is not None:
-        f = open(fname_out, 'w')
+        f = open(fname_out, 'wb')
     else:
-        f = tempfile.NamedTemporaryFile('w', delete=False, suffix='.html')
+        f = tempfile.NamedTemporaryFile('wb', delete=False, suffix='.html')
         fname_out = f.name
     with f as fid:
-        fid.write(diff)
+        fid.write(diff.encode('utf-8'))
     if show is True:
         webbrowser.open_new_tab(fname_out)
     return fname_out
@@ -691,7 +699,7 @@ def _onclick_help(event, params):
     # this should work for non-test cases
     try:
         fig_help.canvas.draw()
-        fig_help.show()
+        fig_help.show(warn=False)
     except Exception:
         pass
 
@@ -724,7 +732,7 @@ class ClickableImage(object):
 
     def __init__(self, imdata, **kwargs):
         """Display the image for clicking."""
-        from matplotlib.pyplot import figure, show
+        from matplotlib.pyplot import figure
         self.coords = []
         self.imdata = imdata
         self.fig = figure()
@@ -736,7 +744,7 @@ class ClickableImage(object):
                                  picker=True, **kwargs)
         self.ax.axis('off')
         self.fig.canvas.mpl_connect('pick_event', self.onclick)
-        show()
+        plt_show()
 
     def onclick(self, event):
         """Mouse click handler.
@@ -757,7 +765,7 @@ class ClickableImage(object):
         **kwargs : dict
             Arguments are passed to imshow in displaying the bg image.
         """
-        from matplotlib.pyplot import subplots, show
+        from matplotlib.pyplot import subplots
         f, ax = subplots()
         ax.imshow(self.imdata, extent=(0, self.xmax, 0, self.ymax), **kwargs)
         xlim, ylim = [ax.get_xlim(), ax.get_ylim()]
@@ -768,7 +776,7 @@ class ClickableImage(object):
             ax.annotate(txt, coord, fontsize=20, color='r')
         ax.set_xlim(xlim)
         ax.set_ylim(ylim)
-        show()
+        plt_show()
 
     def to_layout(self, **kwargs):
         """Turn coordinates into an MNE Layout object.
diff --git a/setup.py b/setup.py
index 4428bbc..dec7410 100755
--- a/setup.py
+++ b/setup.py
@@ -81,10 +81,13 @@ if __name__ == "__main__":
                     'mne.io.array', 'mne.io.array.tests',
                     'mne.io.brainvision', 'mne.io.brainvision.tests',
                     'mne.io.bti', 'mne.io.bti.tests',
+                    'mne.io.ctf', 'mne.io.ctf.tests',
                     'mne.io.edf', 'mne.io.edf.tests',
                     'mne.io.egi', 'mne.io.egi.tests',
                     'mne.io.fiff', 'mne.io.fiff.tests',
                     'mne.io.kit', 'mne.io.kit.tests',
+                    'mne.io.nicolet', 'mne.io.nicolet.tests',
+                    'mne.io.eeglab', 'mne.io.eeglab',
                     'mne.forward', 'mne.forward.tests',
                     'mne.viz', 'mne.viz.tests',
                     'mne.gui', 'mne.gui.tests',
@@ -110,6 +113,7 @@ if __name__ == "__main__":
                                 op.join('channels', 'data', 'montages', '*.txt'),
                                 op.join('channels', 'data', 'montages', '*.elc'),
                                 op.join('channels', 'data', 'neighbors', '*.mat'),
+                                op.join('gui', 'help', '*.json'),
                                 op.join('html', '*.js'),
                                 op.join('html', '*.css')]},
           scripts=['bin/mne'])
diff --git a/tutorials/plot_ica_from_raw.py b/tutorials/plot_ica_from_raw.py
index aa0a658..bd8a8fe 100644
--- a/tutorials/plot_ica_from_raw.py
+++ b/tutorials/plot_ica_from_raw.py
@@ -57,7 +57,7 @@ title = 'Sources related to %s artifacts (red)'
 ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5, picks=picks)
 
 ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
-ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg')
+ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg', labels='ecg')
 
 show_picks = np.abs(scores).argsort()[::-1][:5]
 
@@ -70,7 +70,7 @@ ica.exclude += ecg_inds
 # detect EOG by correlation
 
 eog_inds, scores = ica.find_bads_eog(raw)
-ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog')
+ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog', labels='eog')
 
 show_picks = np.abs(scores).argsort()[::-1][:5]
 
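Both tutorial changes pass a labels argument to ica.plot_scores so the
plotted component scores are tagged by artifact type. A condensed sketch of
the surrounding workflow follows; the raw file path is hypothetical and the
fitting parameters are only illustrative:

    # Condensed sketch of the artifact-scoring workflow touched above.
    import mne
    from mne.preprocessing import ICA, create_ecg_epochs

    raw = mne.io.Raw('sample_raw.fif', preload=True)  # hypothetical file
    ica = ICA(n_components=25).fit(raw)

    ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)
    ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
    # labels='ecg' groups the scores by artifact type, per the change above
    ica.plot_scores(scores, exclude=ecg_inds, labels='ecg')

    eog_inds, scores = ica.find_bads_eog(raw)
    ica.plot_scores(scores, exclude=eog_inds, labels='eog')
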
diff --git a/tutorials/plot_introduction.py b/tutorials/plot_introduction.py
index 9f539a0..c7b417d 100644
--- a/tutorials/plot_introduction.py
+++ b/tutorials/plot_introduction.py
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 """
 .. _intro_tutorial:
 
@@ -16,40 +17,44 @@ What you can do with MNE Python
 -------------------------------
 
    - **Raw data visualization** to visualize recordings, can also use
-   *mne_browse_raw* for extended functionality (see :ref:`ch_browse`)
+     *mne_browse_raw* for extended functionality (see :ref:`ch_browse`)
   - **Epoching**: Define epochs, apply baseline correction, handle conditions, etc.
    - **Averaging** to get Evoked data
   - **Compute SSP projectors** to remove ECG and EOG artifacts
    - **Compute ICA** to remove artifacts or select latent sources.
+   - **Maxwell filtering** to remove environmental noise.
    - **Boundary Element Modeling**: single and three-layer BEM model
      creation and solution computation.
    - **Forward modeling**: BEM computation and mesh creation
-   (see :ref:`ch_forward`)
+     (see :ref:`ch_forward`)
    - **Linear inverse solvers** (dSPM, sLORETA, MNE, LCMV, DICS)
    - **Sparse inverse solvers** (L1/L2 mixed norm MxNE, Gamma Map,
-   Time-Frequency MxNE)
+     Time-Frequency MxNE)
    - **Connectivity estimation** in sensor and source space
    - **Visualization of sensor and source space data**
    - **Time-frequency** analysis with Morlet wavelets (induced power,
-   intertrial coherence, phase lock value) also in the source space
+     intertrial coherence, phase lock value) also in the source space
    - **Spectrum estimation** using multi-taper method
    - **Mixed Source Models** combining cortical and subcortical structures
    - **Dipole Fitting**
   - **Decoding** multivariate pattern analysis of M/EEG topographies
    - **Compute contrasts** between conditions, between sensors, across
-   subjects etc.
+     subjects etc.
    - **Non-parametric statistics** in time, space and frequency
-   (including cluster-level)
+     (including cluster-level)
    - **Scripting** (batch and parallel computing)
 
 What you're not supposed to do with MNE Python
 ----------------------------------------------
 
-    - **Brain and head surface segmentation** for use with BEM models -- use Freesurfer.
+    - **Brain and head surface segmentation** for use with BEM
+      models -- use Freesurfer.
+    - **Raw movement compensation** -- use Elekta Maxfilter™
 
 
-.. note:: Package based on the FIF file format from Neuromag. It can read and
-          convert CTF, BTI/4D, KIT and various EEG formats to FIF.
+.. note:: This package is based on the FIF file format from Neuromag. It
+          can read and convert CTF, BTI/4D, KIT and various EEG formats to
+          FIF.
 
 
 Installation of the required materials
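
As a brief illustration of the conversion the note above describes, a
recording in another supported format can be read and then saved as FIF;
the BrainVision header name below is hypothetical:

    # Minimal sketch of the read-and-convert-to-FIF workflow.
    import mne

    raw = mne.io.read_raw_brainvision('recording.vhdr', preload=True)
    raw.save('recording_raw.fif', overwrite=True)
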
diff --git a/tutorials/plot_spatio_temporal_cluster_stats_sensor.py b/tutorials/plot_spatio_temporal_cluster_stats_sensor.py
index c43b514..a938fdf 100644
--- a/tutorials/plot_spatio_temporal_cluster_stats_sensor.py
+++ b/tutorials/plot_spatio_temporal_cluster_stats_sensor.py
@@ -183,11 +183,11 @@ for i_clu, clu_idx in enumerate(good_cluster_inds):
     plt.show()
 
 """
-Excercises
+Exercises
 ----------
 
 - What is the smallest p-value you can obtain, given the finite number of
    permutations?
-- use an F distribution to compute the threshold by tradition significance
+- Use an F distribution to compute the threshold by traditional significance
   levels. Hint: take a look at ``scipy.stats.distributions.f``
 """
