Posting because I think many of us could benefit from making our results more readable for all.
The accurate representation of data is essential in science communication. However, colour maps that visually distort data through uneven colour gradients or are unreadable to those with colour-vision deficiency remain prevalent in science. These include, but are not limited to, rainbow-like and red–green colour maps. Here, we present a simple guide for the scientific use of colour. We show how scientifically derived colour maps report true data variations, reduce complexity, and are accessible for people with colour-vision deficiencies. We highlight ways for the scientific community to identify and prevent the misuse of colour in science, and call for a proactive step away from colour misuse among the community, publishers, and the press.
Thanks for posting this! We have been worrying about these issues intermittently, especially the concern about color-vision deficiency. This piece could be the basis of a policy at the journal. Grateful for the pointer.
Not so much science, but I work in a hospital on their IT systems. We use an app called Color Oracle that simulates different types of colour blindness when we are designing things. Quite a nifty tool that can help with decisions.
Having a colour-blind doctor while using a red/green flag to indicate whether a patient is well or really unwell would be awkward.
@Dan_Eastwood you might enjoy this short snippet that can convert any colormap into a perceptually uniform color map:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
from colormath.color_objects import sRGBColor, LabColor
from colormath.color_conversions import convert_color
from colormath import color_diff
def PerceptuallySmoothColorMap(cmap, delta=color_diff.delta_e_cie2000, sample_N=256, out_N=256):
    # Sample the colormap and convert each RGB colour to CIELAB.
    L = np.linspace(0, 1, sample_N)
    lab = [convert_color(sRGBColor(*c[:3]), LabColor) for c in cmap(L)]
    # Perceptual distance between consecutive samples (CIEDE2000 by default).
    diff = [delta(l1, l2) for l1, l2 in zip(lab[:-1], lab[1:])]
    # Cumulative perceptual distance, normalised to [0, 1], with 0 prepended
    # so there is one position per sampled colour.
    cd = np.cumsum(diff)
    cd = [0.0] + list(cd / cd[-1])
    # Re-anchor each colour at its cumulative perceptual position.
    colors = [(c, tuple(rgb)) for c, rgb in zip(cd, cmap(L)[:, :3])]
    return LinearSegmentedColormap.from_list("new", colors, N=out_N)
See how it fixes the infamous “jet” color map (unaltered on left, perceptually uniform on right).
# Any 2-D array works here; a smooth gradient makes the banding in "jet" obvious.
a = np.linspace(0, 1, 100)[None, :] * np.ones((50, 1))
JET = plt.get_cmap('jet')
PU = PerceptuallySmoothColorMap(JET)
s = plt.subplot(1, 2, 1)
s.imshow(a, aspect='auto', cmap=JET, origin="lower")
s = plt.subplot(1, 2, 2)
s.imshow(a, aspect='auto', cmap=PU, origin="lower")
This work depends on some really excellent perception studies from decades ago, codified in the "CIELAB" colour space, in which Euclidean distance approximates perceived colour difference.
With Python libraries like colormath, it becomes easy for anyone with basic programming skills to make use of colour in a whole new way.
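If you'd rather not add a dependency, the underlying conversion is small enough to write directly. Here is a minimal sketch of sRGB-to-CIELAB conversion (standard sRGB matrix and D65 white point) with the simple CIE76 delta-E; the function name is my own, and colormath's CIEDE2000 will give somewhat different distances:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple (values in [0, 1]) to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear light.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the standard sRGB primaries.
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    # Normalise by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # The CIELAB nonlinearity (cube root with a linear toe near zero).
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

# CIE76 delta-E is just Euclidean distance in Lab space.
dE = np.linalg.norm(srgb_to_lab([1, 0, 0]) - srgb_to_lab([0, 1, 0]))
```

Pure red and pure green come out far apart in Lab, which is exactly why the gradient-based resampling above works: it spaces colours by these perceptual distances rather than by RGB values.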
@sfmatheson, it should be part of the review process, in my opinion, that all figures are auto-converted into versions that simulate colour blindness, and spot-checked to ensure they all remain interpretable. Any deviation from this should require explicit justification during peer review.
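Such a check could even be automated. Below is a rough sketch of deuteranopia simulation using a widely circulated linear-RGB approximation matrix (published models such as the one Color Oracle uses differ in their exact coefficients, and a proper simulation would work in linear light); the function name is my own:

```python
import numpy as np

# Rough deuteranopia (red-green) simulation matrix. This is a common
# approximation; coefficients vary between published models.
DEUTER = np.array([[0.625, 0.375, 0.000],
                   [0.700, 0.300, 0.000],
                   [0.000, 0.300, 0.700]])

def simulate_deuteranopia(img):
    """img: float array of shape (..., 3), RGB values in [0, 1]."""
    return np.clip(img @ DEUTER.T, 0.0, 1.0)

# A red/green status flag collapses badly under the simulation:
red, green = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
sim_red, sim_green = simulate_deuteranopia(red), simulate_deuteranopia(green)
```

Running every figure through something like this (for each deficiency type) and eyeballing the results would catch most problem cases before publication.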
I have two sons, both with red–green colour blindness. The fascinating but scary thing is how well they have adapted. They don’t see red or green the way I do, but they can typically differentiate them and tell me which is red or green.
However, once in a while they will completely misinterpret a colour or colour variation that is perfectly apparent to those of us without colour blindness. This usually surfaces in conversation: they aren’t understanding something, and only after the fact do we realize it’s because of their colour blindness. The scary part is that, while this happens only occasionally, they have no idea they are missing something that could be important.