[Python-modules-commits] [colorspacious] 01/05: Import colorspacious_1.1.0.orig.tar.gz

Sandro Tosi morph at moszumanska.debian.org
Sat Dec 10 17:37:07 UTC 2016


This is an automated email from the git hooks/post-receive script.

morph pushed a commit to branch master
in repository colorspacious.

commit c9d8de70cc81131a726747a3320646725c6ac284
Author: Sandro Tosi <morph at debian.org>
Date:   Sat Dec 10 11:47:58 2016 -0500

    Import colorspacious_1.1.0.orig.tar.gz
---
 LICENSE.txt                         |  21 +++++++
 PKG-INFO                            |  33 ++++++-----
 README.rst                          |  31 +++++-----
 colorspacious.egg-info/PKG-INFO     |  33 ++++++-----
 colorspacious.egg-info/SOURCES.txt  |   1 +
 colorspacious.egg-info/requires.txt |   1 +
 colorspacious/ciecam02.py           |   3 +-
 colorspacious/gold_values.py        |   8 +--
 colorspacious/luoetal2006.py        |   4 +-
 colorspacious/version.py            |   2 +-
 doc/bibliography.bib                |  11 ++++
 doc/changes.rst                     |  18 ++++++
 doc/conf.py                         |   2 +-
 doc/index.rst                       |   6 +-
 doc/overview.rst                    |  11 ----
 doc/reference.rst                   | 109 ++++++++++++++++++++++--------------
 doc/tutorial.rst                    |  50 +++++++++--------
 setup.py                            |   3 +-
 18 files changed, 217 insertions(+), 130 deletions(-)

diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100644
index 0000000..8ec21cf
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2014-2015 Colorspacious developers
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/PKG-INFO b/PKG-INFO
index 122015d..748e55f 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: colorspacious
-Version: 1.0.0
+Version: 1.1.0
 Summary: A powerful, accurate, and easy-to-use Python library for doing colorspace conversions
 Home-page: https://github.com/njsmith/colorspacious
 Author: Nathaniel J. Smith
@@ -9,10 +9,17 @@ License: MIT
 Description: colorspacious
         =============
         
-        .. image:: https://travis-ci.org/njsmith/colorspacious.png?branch=master
+        .. image:: https://travis-ci.org/njsmith/colorspacious.svg?branch=master
            :target: https://travis-ci.org/njsmith/colorspacious
-        .. image:: https://coveralls.io/repos/njsmith/colorspacious/badge.png?branch=master
-           :target: https://coveralls.io/r/njsmith/colorspacious?branch=master
+           :alt: Automated test status
+        
+        .. image:: https://codecov.io/gh/njsmith/colorspacious/branch/master/graph/badge.svg
+           :target: https://codecov.io/gh/njsmith/colorspacious
+           :alt: Test coverage
+        
+        .. image:: https://readthedocs.org/projects/colorspacious/badge/?version=latest
+           :target: http://colorspacious.readthedocs.io/en/latest/?badge=latest
+           :alt: Documentation Status
         
         Colorspacious is a powerful, accurate, and easy-to-use library for
         performing colorspace conversions.
@@ -65,16 +72,14 @@ Description: colorspacious
         License:
           MIT, see LICENSE.txt for details.
         
-        References:
-        
-        * Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on
-          CIECAM02 colour appearance model. Color Research & Application, 31(4),
-          320–330. doi:10.1002/col.20227
-        
-        * Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A
-          physiologically-based model for simulation of color vision
-          deficiency. Visualization and Computer Graphics, IEEE Transactions on,
-          15(6), 1291–1298. http://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html
+        References for algorithms we implement:
+          * Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on
+            CIECAM02 colour appearance model. Color Research & Application, 31(4),
+            320–330. doi:10.1002/col.20227
+          * Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A
+            physiologically-based model for simulation of color vision
+            deficiency. Visualization and Computer Graphics, IEEE Transactions on,
+            15(6), 1291–1298. http://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html
         
         Other Python packages with similar functionality that you might want
         to check out as well or instead:
diff --git a/README.rst b/README.rst
index c93daeb..1f92af9 100644
--- a/README.rst
+++ b/README.rst
@@ -1,10 +1,17 @@
 colorspacious
 =============
 
-.. image:: https://travis-ci.org/njsmith/colorspacious.png?branch=master
+.. image:: https://travis-ci.org/njsmith/colorspacious.svg?branch=master
    :target: https://travis-ci.org/njsmith/colorspacious
-.. image:: https://coveralls.io/repos/njsmith/colorspacious/badge.png?branch=master
-   :target: https://coveralls.io/r/njsmith/colorspacious?branch=master
+   :alt: Automated test status
+
+.. image:: https://codecov.io/gh/njsmith/colorspacious/branch/master/graph/badge.svg
+   :target: https://codecov.io/gh/njsmith/colorspacious
+   :alt: Test coverage
+
+.. image:: https://readthedocs.org/projects/colorspacious/badge/?version=latest
+   :target: http://colorspacious.readthedocs.io/en/latest/?badge=latest
+   :alt: Documentation Status
 
 Colorspacious is a powerful, accurate, and easy-to-use library for
 performing colorspace conversions.
@@ -57,16 +64,14 @@ Developer dependencies (only needed for hacking on source):
 License:
   MIT, see LICENSE.txt for details.
 
-References:
-
-* Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on
-  CIECAM02 colour appearance model. Color Research & Application, 31(4),
-  320–330. doi:10.1002/col.20227
-
-* Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A
-  physiologically-based model for simulation of color vision
-  deficiency. Visualization and Computer Graphics, IEEE Transactions on,
-  15(6), 1291–1298. http://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html
+References for algorithms we implement:
+  * Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on
+    CIECAM02 colour appearance model. Color Research & Application, 31(4),
+    320–330. doi:10.1002/col.20227
+  * Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A
+    physiologically-based model for simulation of color vision
+    deficiency. Visualization and Computer Graphics, IEEE Transactions on,
+    15(6), 1291–1298. http://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html
 
 Other Python packages with similar functionality that you might want
 to check out as well or instead:
diff --git a/colorspacious.egg-info/PKG-INFO b/colorspacious.egg-info/PKG-INFO
index 122015d..748e55f 100644
--- a/colorspacious.egg-info/PKG-INFO
+++ b/colorspacious.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: colorspacious
-Version: 1.0.0
+Version: 1.1.0
 Summary: A powerful, accurate, and easy-to-use Python library for doing colorspace conversions
 Home-page: https://github.com/njsmith/colorspacious
 Author: Nathaniel J. Smith
@@ -9,10 +9,17 @@ License: MIT
 Description: colorspacious
         =============
         
-        .. image:: https://travis-ci.org/njsmith/colorspacious.png?branch=master
+        .. image:: https://travis-ci.org/njsmith/colorspacious.svg?branch=master
            :target: https://travis-ci.org/njsmith/colorspacious
-        .. image:: https://coveralls.io/repos/njsmith/colorspacious/badge.png?branch=master
-           :target: https://coveralls.io/r/njsmith/colorspacious?branch=master
+           :alt: Automated test status
+        
+        .. image:: https://codecov.io/gh/njsmith/colorspacious/branch/master/graph/badge.svg
+           :target: https://codecov.io/gh/njsmith/colorspacious
+           :alt: Test coverage
+        
+        .. image:: https://readthedocs.org/projects/colorspacious/badge/?version=latest
+           :target: http://colorspacious.readthedocs.io/en/latest/?badge=latest
+           :alt: Documentation Status
         
         Colorspacious is a powerful, accurate, and easy-to-use library for
         performing colorspace conversions.
@@ -65,16 +72,14 @@ Description: colorspacious
         License:
           MIT, see LICENSE.txt for details.
         
-        References:
-        
-        * Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on
-          CIECAM02 colour appearance model. Color Research & Application, 31(4),
-          320–330. doi:10.1002/col.20227
-        
-        * Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A
-          physiologically-based model for simulation of color vision
-          deficiency. Visualization and Computer Graphics, IEEE Transactions on,
-          15(6), 1291–1298. http://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html
+        References for algorithms we implement:
+          * Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on
+            CIECAM02 colour appearance model. Color Research & Application, 31(4),
+            320–330. doi:10.1002/col.20227
+          * Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A
+            physiologically-based model for simulation of color vision
+            deficiency. Visualization and Computer Graphics, IEEE Transactions on,
+            15(6), 1291–1298. http://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html
         
         Other Python packages with similar functionality that you might want
         to check out as well or instead:
diff --git a/colorspacious.egg-info/SOURCES.txt b/colorspacious.egg-info/SOURCES.txt
index b4e3b2a..62314f5 100644
--- a/colorspacious.egg-info/SOURCES.txt
+++ b/colorspacious.egg-info/SOURCES.txt
@@ -1,3 +1,4 @@
+LICENSE.txt
 MANIFEST.in
 README.rst
 setup.cfg
diff --git a/colorspacious.egg-info/requires.txt b/colorspacious.egg-info/requires.txt
index e69de29..24ce15a 100644
--- a/colorspacious.egg-info/requires.txt
+++ b/colorspacious.egg-info/requires.txt
@@ -0,0 +1 @@
+numpy
diff --git a/colorspacious/ciecam02.py b/colorspacious/ciecam02.py
index fc8988b..fd26d1a 100644
--- a/colorspacious/ciecam02.py
+++ b/colorspacious/ciecam02.py
@@ -451,8 +451,7 @@ def test_gold():
     for t in XYZ100_CIECAM02_gold:
         got = t.vc.XYZ100_to_CIECAM02(t.XYZ100)
         for i in range(len(got)):
-            if t.expected[i] is not None:
-                assert np.allclose(got[i], t.expected[i], atol=1e-05)
+            assert np.allclose(got[i], t.expected[i], atol=1e-05)
         check_roundtrip(t.vc, t.XYZ100)
 
 def test_inverse():
diff --git a/colorspacious/gold_values.py b/colorspacious/gold_values.py
index e653f83..96c70be 100644
--- a/colorspacious/gold_values.py
+++ b/colorspacious/gold_values.py
@@ -196,16 +196,16 @@ JMh_to_CAM02UCS_silver = [
 
 JMh_to_CAM02LCD_silver = [
     ([50, 20, 10],
-     [ 50.77658303,  14.80756375,   2.61097301]),
+     [ 81.77008177,  18.72061994,   3.30095039]),
     ([10, 60, 100],
-     [ 12.81278263,  -5.5311588 ,  31.36876036]),
+     [ 20.63357204,  -9.04659289,  51.30577777]),
     ]
 
 JMh_to_CAM02SCD_silver = [
     ([50, 20, 10],
-     [ 81.77008177,  18.72061994,   3.30095039]),
+     [ 50.77658303,  14.80756375,   2.61097301]),
     ([10, 60, 100],
-     [ 20.63357204,  -9.04659289,  51.30577777]),
+     [ 12.81278263,  -5.5311588 ,  31.36876036]),
     ]
 
 ################################################################
diff --git a/colorspacious/luoetal2006.py b/colorspacious/luoetal2006.py
index e4032d7..8cb9d59 100644
--- a/colorspacious/luoetal2006.py
+++ b/colorspacious/luoetal2006.py
@@ -52,8 +52,8 @@ class LuoEtAl2006UniformSpace(object):
         return stacklast(J, M, h)
 
 CAM02UCS = LuoEtAl2006UniformSpace(1.00, 0.007, 0.0228)
-CAM02LCD = LuoEtAl2006UniformSpace(1.24, 0.007, 0.0363)
-CAM02SCD = LuoEtAl2006UniformSpace(0.77, 0.007, 0.0053)
+CAM02LCD = LuoEtAl2006UniformSpace(0.77, 0.007, 0.0053)
+CAM02SCD = LuoEtAl2006UniformSpace(1.24, 0.007, 0.0363)
 
 def test_repr():
     # smoke test
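The LCD/SCD coefficient swap fixed above is easy to sanity-check against the silver values in gold_values.py. Below is a minimal re-derivation of the Luo et al. (2006) transform from CIECAM02 J, M, h to J'a'b' coordinates (an illustrative sketch from the published formulas, not colorspacious's own code; the function name is invented for this example):

```python
import math

def jmh_to_luo2006(J, M, h_deg, K_L, c1, c2):
    """Luo et al. (2006) transform from CIECAM02 J, M, h to J'a'b'.

    K_L, c1, c2 are the per-variant coefficients; the corrected values
    for CAM02-LCD are (0.77, 0.007, 0.0053) and for CAM02-SCD are
    (1.24, 0.007, 0.0363), matching the assignment in the diff above.
    """
    J_prime = (1 + 100 * c1) * J / (1 + c1 * J) / K_L
    M_prime = math.log(1 + c2 * M) / c2
    h = math.radians(h_deg)
    return (J_prime, M_prime * math.cos(h), M_prime * math.sin(h))

# JMh = [50, 20, 10] with the corrected CAM02-LCD coefficients reproduces
# the first JMh_to_CAM02LCD_silver entry in gold_values.py above, and the
# CAM02-SCD coefficients reproduce the first JMh_to_CAM02SCD_silver entry.
lcd = jmh_to_luo2006(50, 20, 10, K_L=0.77, c1=0.007, c2=0.0053)
scd = jmh_to_luo2006(50, 20, 10, K_L=1.24, c1=0.007, c2=0.0363)
```

With the pre-1.1.0 swapped coefficients, `lcd` and `scd` would trade places, which is exactly the bug this release corrects.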
diff --git a/colorspacious/version.py b/colorspacious/version.py
index 477c1dd..2b8440a 100644
--- a/colorspacious/version.py
+++ b/colorspacious/version.py
@@ -18,4 +18,4 @@
 # want. (Contrast with the special suffix 1.0.0.dev, which sorts *before*
 # 1.0.0.)
 
-__version__ = "1.0.0"
+__version__ = "1.1.0"
diff --git a/doc/bibliography.bib b/doc/bibliography.bib
index 8e6239b..117784c 100644
--- a/doc/bibliography.bib
+++ b/doc/bibliography.bib
@@ -32,3 +32,14 @@
   publisher = {Springer New York},
   doi = {10.1007/978-1-4419-6190-7_2},
 }
+
+@InCollection{Sharpe-CVD,
+  title = {Opsin genes, cone photopigments and color vision},
+  booktitle = {Color vision: {From} genes to perception},
+  author = {Sharpe, Lindsay T. and Stockman, Andrew and Jägle, Herbert and Nathans, Jeremy},
+  year = {2000},
+  pages = {3--51},
+  editor = {Karl R. Gegenfurtner and Lindsay T. Sharpe},
+  publisher = {Cambridge University Press},
+  address = {Cambridge},
+}
diff --git a/doc/changes.rst b/doc/changes.rst
index fea946e..02b6a5e 100644
--- a/doc/changes.rst
+++ b/doc/changes.rst
@@ -1,9 +1,26 @@
 Changes
 =======
 
+v1.1.0
+------
+
+* **BUG AFFECTING CALCULATIONS:** In previous versions, it turns out
+  that the CAM02-LCD and CAM02-SCD spaces were accidentally swapped –
+  so if you asked for CAM02-LCD you got SCD, and vice-versa. This has
+  now been corrected. (Thanks to Github user TFiFiE for catching
+  this!)
+
+* Fixed setup.py to be compatible with both python 2 and python 3.
+
+* Miscellaneous documentation improvements.
+
+
 v1.0.0
 ------
 
+.. image:: https://zenodo.org/badge/doi/10.5281/zenodo.33086.svg
+   :target: http://dx.doi.org/10.5281/zenodo.33086
+
 Notable changes since v0.1.0 include:
 
 * **BUG AFFECTING CALCULATIONS:** the sRGB viewing conditions
@@ -70,6 +87,7 @@ Notable changes since v0.1.0 include:
 
 * Miscellaneous bug fixes.
 
+
 v0.1.0
 ------
 
diff --git a/doc/conf.py b/doc/conf.py
index 79a1dd5..7d3a98d 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -90,7 +90,7 @@ sys.path.insert(0, os.getcwd() + "/..")
 import colorspacious
 version = colorspacious.__version__
 # The full version, including alpha/beta/rc tags.
-release = '0.0.0'
+release = version
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
diff --git a/doc/index.rst b/doc/index.rst
index 344d7f2..d518cda 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -11,11 +11,11 @@ library for performing colorspace conversions.
 
 In addition to the most common standard colorspaces (sRGB, XYZ, xyY,
 CIELab, CIELCh), we also include: color vision deficiency ("color
-blindness") simulations using the approach of Machado et al (2009); a
+blindness") simulations using the approach of :cite:`Machado-CVD`; a
 complete implementation of `CIECAM02
 <https://en.wikipedia.org/wiki/CIECAM02>`_; and the perceptually
-uniform CAM02-UCS / CAM02-LCD / CAM02-SCD spaces proposed by Luo et al
-(2006).
+uniform CAM02-UCS / CAM02-LCD / CAM02-SCD spaces proposed by
+:cite:`CAM02-UCS`.
 
 Contents:
 
diff --git a/doc/overview.rst b/doc/overview.rst
index 6057049..af1a1de 100644
--- a/doc/overview.rst
+++ b/doc/overview.rst
@@ -29,17 +29,6 @@ Developer dependencies (only needed for hacking on source):
 License:
   MIT, see LICENSE.txt for details.
 
-References:
-
-  * Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on
-    CIECAM02 colour appearance model. Color Research & Application, 31(4),
-    320–330. doi:10.1002/col.20227
-
-  * Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A
-    physiologically-based model for simulation of color vision
-    deficiency. Visualization and Computer Graphics, IEEE Transactions on,
-    15(6), 1291–1298. http://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html
-
 Other Python packages with similar functionality that you might want
 to check out as well or instead:
 
diff --git a/doc/reference.rst b/doc/reference.rst
index 2cf17cf..4cbc142 100644
--- a/doc/reference.rst
+++ b/doc/reference.rst
@@ -19,7 +19,7 @@ take additional parameters, and it can convert freely between any of
 them. Here's an image showing all the known spaces, and the conversion
 paths used. (This graph is generated directly from the source code:
 when you request a conversion between two spaces,
-:func:`convert_cspace` automatically traverses this graph to find the
+:func:`cspace_convert` automatically traverses this graph to find the
 best conversion path. This makes it very easy to add support for new
 colorspaces.)
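The graph traversal described in the paragraph above can be illustrated in a few lines. This is a sketch of the idea only, not colorspacious's actual implementation, and the edge set below is a simplified, hypothetical subset of the real conversion graph (the space names are real):

```python
from collections import deque

# Hypothetical, simplified subset of pairwise conversions between spaces.
EDGES = {
    "sRGB1": ["sRGB1-linear"],
    "sRGB1-linear": ["sRGB1", "XYZ100"],
    "XYZ100": ["sRGB1-linear", "CIELab", "CIECAM02"],
    "CIELab": ["XYZ100"],
    "CIECAM02": ["XYZ100", "J'a'b'"],
    "J'a'b'": ["CIECAM02"],
}

def find_path(start, end):
    """Breadth-first search: shortest chain of spaces from start to end."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Because the search is breadth-first, the first path found is a shortest one, so a requested conversion is composed from as few pairwise steps as possible.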
 
@@ -43,7 +43,7 @@ to :func:`cspace_convert`::
    {"name": "CIELab", "XYZ100_w": "D65"}
    {"name": "CIELab", "XYZ100_w": [95.047, 100, 108.883]}
 
-These dictionaries always have a "name" key specifying the
+These dictionaries always have a ``"name"`` key specifying the
 colorspace. Every bold-faced string in the above image is a recognized
 colorspace name. Some spaces take additional parameters beyond the
 name, such as the CIELab whitepoint above. These additional parameters
@@ -83,9 +83,9 @@ out long dicts in most cases. In particular:
   This allows you to directly use common shorthands like ``"JCh"`` or
   ``"JMh"`` as first-class colorspaces.
 
-Any other string ``"foo"``: expands to ``{"name": "foo"}``. So for any
+Any other string ``"foo"`` expands to ``{"name": "foo"}``. So for any
 space that doesn't take parameters, you can simply say ``"sRGB1"`` or
-``"XYZ100"`` or whatever.
+``"XYZ100"`` or whatever and ignore all these complications.
 
 And, as one final trick, any alias can also be used as the ``"name"``
 field in a colorspace dict, in which case its normal expansion is
@@ -116,24 +116,29 @@ Well-known colorspaces
 ......................
 
 **sRGB1**, **sRGB100**: The standard `sRGB colorspace
-<https://en.wikipedia.org/wiki/SRGB>`_. Use ``sRGB1`` if you have or
-want values that are normalized to fall between 0 and 1, and use
-``sRGB255`` if you have or want values that are normalized to fall
-between 0 and 255. This is designed to match the behavior of common
-monitors.
+<https://en.wikipedia.org/wiki/SRGB>`_. If you have generic "RGB"
+values with no further information specified, then usually the right
+thing to do is to assume that they are in the sRGB space; the sRGB
+space was originally designed to match the behavior of common consumer
+monitors, and these days common consumer monitors are designed to
+match sRGB. Use ``sRGB1`` if you have or want values that are
+normalized to fall between 0 and 1, and use ``sRGB255`` if you have or
+want values that are normalized to fall between 0 and 255.
 
 **XYZ100**, **XYZ1**: The standard `CIE 1931 XYZ color space
 <https://en.wikipedia.org/wiki/CIE_1931_color_space>`_. Use ``XYZ100``
 if you have or want values that are normalized to fall between 0 and
-100 (or so -- values greater than 100 are valid in certain cases). Use
-``XYZ1`` if you have or want values that are normalized to fall
-between 0 and 1 (or so). This is a space which is "linear-light",
-i.e. related by a linear transformation to the photon counts in a
-spectral power distribution.
+100 (roughly speaking -- values greater than 100 are valid in certain
+cases). Use ``XYZ1`` if you have or want values that are normalized to
+fall between 0 and 1 (roughly). This is a space which is
+"linear-light", i.e. related by a linear transformation to the photon
+counts in a spectral power distribution. In particular, this means
+that linear interpolation in this space is a valid way to simulate
+physical mixing of lights.
 
 **sRGB1-linear**: A linear-light version of **sRGB1**, i.e., it has
-had gamma correction applied, but retains the standard sRGB
-primaries.
+had gamma correction applied, but is still represented in terms of the
+standard sRGB primaries.
 
 **xyY100**, **xyY1**: The standard `CIE 1931 xyY color space
 <https://en.wikipedia.org/wiki/CIE_1931_color_space#CIE_xy_chromaticity_diagram_and_the_CIE_xyY_color_space>`_. *The
@@ -143,12 +148,12 @@ and use ``xyY1`` if you have or want a Y value that falls between 0
 and 1.
 
 **CIELab**: The standard `CIE 1976 L*a*b* color space
-<https://en.wikipedia.org/wiki/Lab_color_space>`_. L* is scaled to vary
-from 0 to 100; a* and b* are likewise scaled to (very roughly) -50
-to 50. This space takes a parameter, *XYZ100_w*, which is the
-reference point, and may be specified either directly as a tristimulus
-value or as a string naming one of the well-known standard illuminants
-like ``"D65"``.
+<https://en.wikipedia.org/wiki/Lab_color_space>`_. L* is scaled to
+vary from 0 to 100; a* and b* are likewise scaled to roughly the
+range -50 to 50. This space takes a parameter, *XYZ100_w*, which sets
+the reference white point, and may be specified either directly as a
+tristimulus value or as a string naming one of the well-known standard
+illuminants like ``"D65"``.
 
 **CIELCh**: Cylindrical version of **CIELab**. Accepts the same
 parameters. h* is in degrees.
@@ -170,13 +175,23 @@ This is generally done by specifying a colorspace like::
 where ``<type>`` is one of the following strings:
 
 * ``"protanomaly"``: A common form of red-green colorblindness;
-  affects ~2% of white men to some degree (less common among
-  ethnicities, much less common among women).
+  affects ~2% of white men to some degree (less common among other
+  ethnicities, much less common among women, see Tables 1.5 and 1.6 in
+  :cite:`Sharpe-CVD`).
 * ``"deuteranomaly"``: The most common form of red-green
   colorblindness; affects ~6% of white men to some degree (less common
-  among other ethnicities, much less common among women).
-* ``"tritanomaly"``: A very rare form of blue-yellow colorblindness;
-  affects <0.1% of people.
+  among other ethnicities, much less common among women, see Tables
+  1.5 and 1.6 in :cite:`Sharpe-CVD`).
+* ``"tritanomaly"``: A very rare form of colorblindness affecting
+  blue/yellow discrimination -- so rare that its detailed effects and
+  even rate of occurrence are not well understood. Affects <0.1% of
+  people, possibly much less (:cite:`Sharpe-CVD`, page 47). Also, the
+  name we use here is somewhat misleading because only full
+  trit\ **anopia** has been documented, and partial trit\ **anomaly**
+  likely does not exist (:cite:`Sharpe-CVD`, page 45). What this means
+  is that while Colorspacious will happily allow any severity value to
+  be passed, probably only severity = 100 corresponds to any real
+  people.
 
 And ``<severity>`` is any number between 0 (indicating regular vision)
 and 100 (indicating complete dichromacy).
@@ -198,12 +213,12 @@ CIECAM02
 `CIECAM02 <https://en.wikipedia.org/wiki/CIECAM02>`_ is a
 standardized, rather complex, state-of-the-art color appearance model,
 i.e., it's not useful for describing the voltage that should be
-applied to a phosphorescent element in your monitor (like RGB), and
-it's not useful for describing the quantity of photons flying through
-the air (like XYZ), but it is very useful to tell you what a color
-will look like subjectively to a human observer, under a certain set
-of viewing conditions. Unfortunately this makes it rather complicated,
-because human vision is rather complicated.
+applied to a phosphorescent element in your monitor (like RGB was
+originally designed to do), and it's not useful for modelling physical
+properties of light (like XYZ), but it is very useful to tell you what
+a color will look like subjectively to a human observer, under a
+certain set of viewing conditions. Unfortunately this makes it rather
+complicated, because human vision is rather complicated.
 
 If you just want a better replacement for traditional ad hoc spaces
 like "Hue/Saturation/Value", then use the string ``"JCh"`` for your
@@ -226,7 +241,7 @@ can instantiate your own :class:`CIECAM02Space` object:
       specified in the sRGB standard. (The sRGB standard defines two
       things: how a standard monitor should respond to different RGB
       values, and a standard set of viewing conditions in which you
-      are supposed to look at such a monitor, which attempt to
+      are supposed to look at such a monitor, and that attempt to
       approximate the average conditions in which people actually do
       look at such monitors. This object encodes the latter.)
 
@@ -270,7 +285,7 @@ object of class :class:`JChQMsH`:
 
    A namedtuple with a mnemonic name: it has attributes ``J``, ``C``,
    ``h``, ``Q``, ``M``, ``s``, and ``H``, each of which holds a scalar
-   or NumPy array representing the lightness, chroma, hue angle,
+   or NumPy array representing lightness, chroma, hue angle,
    brightness, colorfulness, saturation, and hue composition,
    respectively.
 
@@ -317,11 +332,13 @@ Perceptually uniform colorspaces based on CIECAM02
 The :math:`J'a'b'` spaces proposed by :cite:`CAM02-UCS` are
 high-quality, approximately perceptually uniform spaces based on
 CIECAM02. They propose three variants: CAM02-LCD optimized for "large
-color differences" (e.g., how similar is blue to green), CAM02-SCD
-optimized for "small color differences" (e.g., how similar is lightish
-greenish blue to lightish bluish green), and CAM02-UCS which attempts
-to provide a single "uniform color space" that is less optimized for
-either case but provides acceptable performance in general.
+color differences" (e.g., estimating the similarity between blue and
+green), CAM02-SCD optimized for "small color differences" (e.g.,
+estimating the similarity between light blue with a faint greenish
+cast and light blue with a faint purplish cast), and CAM02-UCS which
+attempts to provide a single "uniform color space" that is less
+optimized for either case but provides acceptable performance in
+general.
 
 Colorspacious represents these spaces as instances of
 :class:`LuoEtAl2006UniformSpace`:
@@ -329,8 +346,8 @@ Colorspacious represents these spaces as instances of
 .. autoclass:: LuoEtAl2006UniformSpace
 
 Because these spaces are defined as transformations from CIECAM02, to
-use them you must also specify some particular CIECAM02 viewing
-conditions, e.g.::
+have a fully specified color space you must also provide some
+particular CIECAM02 viewing conditions, e.g.::
 
   {"name": "J'a'b'",
    "ciecam02_space": CIECAM02.sRGB,
@@ -341,6 +358,14 @@ As usual, you can also pass any instance of
 like the above, or for the three common variants you can pass the
 strings ``"CAM02-UCS"``, ``"CAM02-LCD"``, or ``"CAM02-SCD"``.
 
+.. versionchanged:: 1.1.0
+
+   In v1.0.0 and earlier, colorspacious's definitions of the
+   ``CAM02-LCD`` and ``CAM02-SCD`` spaces were swapped compared to
+   what they should have been based on the :cite:`CAM02-UCS` paper –
+   if you asked for LCD, you got SCD, and vice-versa. (``CAM02-UCS``
+   was correct, though). Starting in 1.1.0, all three spaces are now
+   correct.
 
 Color difference computation
 ----------------------------
diff --git a/doc/tutorial.rst b/doc/tutorial.rst
index 5e53c6a..16c10b8 100644
--- a/doc/tutorial.rst
+++ b/doc/tutorial.rst
@@ -144,43 +144,49 @@ And now we'll use it to look at the desaturated image we computed above:
 
 The original version is on the left, with our modified version on the
 right. Notice how in the version with reduced chroma, the colors are
-more muted, but not entirely gone. Of course we could also reduce the
-chroma all the way to zero, for a highly accurate greyscale
-conversion:
+more muted, but not entirely gone.
 
-.. ipython:: python
-
-   hopper_greyscale_JCh = cspace_convert(hopper_sRGB, "sRGB1", "JCh")
-   hopper_greyscale_JCh[..., 1] = 0
-   hopper_greyscale_sRGB = cspace_convert(hopper_greyscale_JCh, "JCh", "sRGB1")
-   @savefig hopper_greyscale_unclipped.png width=6in
-   compare_hoppers(hopper_greyscale_sRGB)
-
-But notice the small cyan patches on her collar and hat --
-this occurs due to floating point rounding error creating a few points
-with sRGB values that are greater than 1, which causes matplotlib to
-render the points in a strange way:
+Except, there is one oddity -- notice the small cyan patches on her
+collar and hat. This occurs due to floating point rounding error
+creating a few points with sRGB values that are greater than 1, which
+causes matplotlib to render the points in a strange way:
 
 .. ipython:: python
 
-   hopper_greyscale_sRGB[np.any(hopper_greyscale_sRGB > 1, axis=-1), :]
+   hopper_desat_sRGB[np.any(hopper_desat_sRGB > 1, axis=-1), :]
 
 Colorspacious doesn't do anything to clip such values, since they can
 sometimes be useful for further processing -- e.g. when chaining
 multiple conversions together, you don't want to clip between
 intermediate steps, because this might introduce errors. And
-potentially you might want to handle them in some clever way
-(e.g. rescaling your whole image). But in this case, where the values
-are only just barely over 1, then simply clipping them to 1 is
-probably the best approach, and you can easily do this yourself:
+potentially you might want to handle them in some clever way (`there's
+a whole literature on how to solve such problems
+<https://en.wikipedia.org/wiki/Color_management#Gamut_mapping>`_). But
+in this case, where the values are only just barely over 1, then
+simply clipping them to 1 is probably the best approach, and you can
+easily do this yourself. In fact, NumPy provides a standard function
+that we can use:
 
 .. ipython:: python
 
-   @savefig hopper_greyscale_clipped.png width=6in
-   compare_hoppers(np.clip(hopper_greyscale_sRGB, 0, 1))
+   @savefig hopper_desat_clipped.png width=6in
+   compare_hoppers(np.clip(hopper_desat_sRGB, 0, 1))
 
 No more cyan splotches!
 
+Once we know how to represent an image in terms of
+lightness/chroma/hue, then there's all kinds of things we can
+do. Let's try reducing the chroma all the way to zero, for a highly
+accurate greyscale conversion:
+
+.. ipython:: python
+
+   hopper_greyscale_JCh = cspace_convert(hopper_sRGB, "sRGB1", "JCh")
+   hopper_greyscale_JCh[..., 1] = 0
+   hopper_greyscale_sRGB = cspace_convert(hopper_greyscale_JCh, "JCh", "sRGB1")
+   @savefig hopper_greyscale_unclipped.png width=6in
+   compare_hoppers(np.clip(hopper_greyscale_sRGB, 0, 1))
+
 To explore, try applying other transformations. E.g., you could darken
 the image by rescaling the lightness channel "J" by a factor of 2
 (``image_JCh[..., 0] /= 2``), or try replacing each hue by its
diff --git a/setup.py b/setup.py
index 5d8e92f..e3db512 100644
--- a/setup.py
+++ b/setup.py
@@ -8,7 +8,8 @@ import numpy as np
 DESC = ("A powerful, accurate, and easy-to-use Python library for "
         "doing colorspace conversions")
 
-LONG_DESC = open("README.rst").read()
+import codecs
+LONG_DESC = codecs.open("README.rst", encoding="utf-8").read()
 
 # defines __version__
 exec(open("colorspacious/version.py").read())
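The setup.py change above matters because on Python 2 the builtin ``open()`` has no ``encoding`` argument, so reading a README containing non-ASCII text (such as the en dashes in the reference list) could fail, while ``codecs.open()`` accepts ``encoding=`` on both major versions. A minimal sketch of the pattern (the temp-file path here is purely for illustration):

```python
import codecs
import os
import tempfile

# Write a README-like file containing a non-ASCII character (an en dash,
# as appears in the reference list above), then read it back the way the
# patched setup.py does.
path = os.path.join(tempfile.mkdtemp(), "README.rst")
with codecs.open(path, "w", encoding="utf-8") as f:
    f.write(u"colorspacious\n=============\n\npages 320\u2013330\n")

with codecs.open(path, encoding="utf-8") as f:
    long_desc = f.read()
```

On Python 3 plain ``open(path, encoding="utf-8")`` would work too; ``codecs.open()`` is simply the spelling that is valid on both interpreters.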

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/python-modules/packages/colorspacious.git


