Color Vision Deficiency Simulation: Methods, Accuracy, and Future Directions

I. Introduction

Color Vision Deficiencies (CVD) represent a group of conditions that impact the perception of color. In essence, individuals with CVD experience the world of color differently compared to those with typical color vision. Most of us are familiar with the concept of being "color blind," but that's just a layman's term. The scientific reality is much more nuanced, with variations in color perception ranging from mild to severe, and affecting different colors depending on the type of CVD. This spectrum of conditions is most often caused by genetic factors, affecting the function of specific light-sensitive cells in the eye known as cones, but can also be acquired due to age, disease, or injury to the eye.

Simulating Color Vision Deficiency is crucial for various reasons. For one, it promotes understanding and empathy among those with normal color vision, providing an insight into the world as seen by those with a color vision deficiency. It's a bridge connecting two different perceptual experiences, highlighting the challenges faced by individuals with CVD. Furthermore, in the design world, simulating CVD is a valuable tool. Designers and developers can use simulations to ensure their products are accessible and user-friendly for people with a variety of color vision disabilities. This is particularly important in areas like website design, product design, and the creation of public safety information, where color coding is frequently used.

The purpose of this review is to provide a comprehensive look at the methods and technologies used to simulate CVD. We will delve into the science behind these simulations, explore the various types of CVD they aim to replicate and evaluate their effectiveness and accuracy. By understanding the strengths and limitations of current CVD simulations, we can pave the way for improved tools and strategies in the future, enhancing accessibility and inclusivity in color-based design and communication.

II. Understanding Color Vision Deficiencies

To fully comprehend Color Vision Deficiencies (CVD), it's necessary to delve into the workings of color perception in humans. We all know that light is a mix of various colors. When this light strikes an object, some of it gets absorbed while the rest bounces back. The color we perceive an object to be is determined by this reflected light that reaches our eyes. So, how does our eye decode these colors? The secret is in a part of our eyes known as the retina, where specialized cells known as cones reside.

In the human eye, there are three kinds of cone cells, each attuned to different wavelengths of light. The first type responds mostly to short wavelengths, which we perceive as blue (hence, they're often termed "blue cones"). Another type is most sensitive to medium wavelengths (green light, thus "green cones"), and the last category reacts most to long wavelengths (red light, or "red cones"). When these cones detect their corresponding color, they send signals to the brain. The brain then merges these signals to generate our rich, full-color visual experience. Think of it as a sophisticated, biological version of a color mixer!

Color Vision Deficiencies typically stem from problems with one or more of these cone cells. When one of the three cone types is absent or not functioning, the condition is called dichromacy, and it comes in three main forms: Protanopia, Deuteranopia, and Tritanopia. A separate, usually milder category is Anomalous Trichromacy, in which all three cone types are present but one of them responds abnormally.

In Anomalous Trichromacy, one type of cone cell isn't working as it should, causing colors to be perceived differently. It comes in three forms, depending on which cone cell is malfunctioning: Protanomaly (red cones), Deuteranomaly (green cones), and Tritanomaly (blue cones).

Protanopia, on the other hand, is a condition where the red cones are absent or dysfunctional. This results in individuals with Protanopia struggling to differentiate colors in the green-yellow-red part of the spectrum, often seeing these colors as more grey or brown.

Deuteranopia is akin to Protanopia but involves issues with the green cones. Those with Deuteranopia find it challenging to tell colors apart in the red-green part of the spectrum.

Lastly, Tritanopia impacts the blue cones. This deficiency is less common than Protanopia and Deuteranopia and can cause trouble distinguishing between blue and yellow, as well as between purple and red.

To summarize, cone cells play an integral role in color perception. They are crucial to our capability to perceive and differentiate colors. When these cone cells malfunction, it leads to a color vision deficiency, changing how an individual experiences the world around them. This understanding forms the basis of why simulating these conditions is a vital tool in fostering empathy and inclusivity.

III. The LMS Color Space and CVD Simulations

Recall from the preceding section how the three types of cone cells in our eyes help us see color. These cones are roughly sensitive to red, green, and blue light, and they're named after the wavelengths they respond to most strongly: long (L), medium (M), and short (S), respectively. This is where the LMS color space gets its name. It's like a fancy color map based on how these cone cells respond to light.

This color map is pretty useful when it comes to something called chromatic adaptation. This is just a fancy way of saying how a color might look different under different lights, like how a red apple might look different under sunlight compared to a lamp. And it's also really helpful for studying color vision deficiencies (CVD), which happen when one or more types of these cones don't work as they should.

But here's the tricky part: if we want to tweak colors for different light situations, we can't use this LMS color map directly. Instead, we have to convert these colors into a different map, called the XYZ color space. From there, we use a kind of translator, or transformation matrix, to bring them into the LMS space. This process is super important for many color-changing methods and models.

Sometimes people refer to this transformed color space as ργβ (or even RGB), but it's not the same as the RGB color model we use for things like computer screens. The XYZ-to-LMS translation itself is done with a chromatic adaptation transform (CAT) matrix, such as the Hunt-Pointer-Estevez or Bradford matrix, which estimates the response of each cone type to a given color. This helps us understand how each cone in the eye reacts to different colors. So, while this LMS color space stuff might sound complicated, it's a super cool tool for studying and simulating how we see color! More details can be found in this Wikipedia article (LMS color space, 2023).
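To make the pipeline concrete, here is a minimal Python sketch that takes a linear RGB color (gamma already removed, an assumption of this sketch) through XYZ into LMS, using the standard sRGB-to-XYZ matrix and the Hunt-Pointer-Estevez CAT matrix. Treat it as an illustration, not a production color pipeline:

```python
# Minimal sketch: linear sRGB -> XYZ -> LMS.
# Assumes gamma has already been removed from the RGB values.

# Linear sRGB to XYZ (D65 white point), standard published values.
RGB_TO_XYZ = [
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
]

# XYZ to LMS using the Hunt-Pointer-Estevez matrix.
XYZ_TO_LMS = [
    [ 0.4002, 0.7076, -0.0808],
    [-0.2263, 1.1653,  0.0457],
    [ 0.0,    0.0,     0.9182],
]

def apply(matrix, vec):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

def linear_rgb_to_lms(rgb):
    return apply(XYZ_TO_LMS, apply(RGB_TO_XYZ, rgb))

# White in linear RGB lands close to (1, 1, 1) in LMS, since this
# version of the HPE matrix maps the D65 white point to unity.
print(linear_rgb_to_lms([1.0, 1.0, 1.0]))
```

In a real pipeline you would first linearize the sRGB values (undo the gamma curve) before applying these matrices.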

IV. Review of Key CVD Simulation Models

A. HCIRN Color Blind Simulation Function (Coblis)

The HCIRN Color Blind Simulation function, also known as Coblis, is a tool designed to simulate color blindness, making it easier for those with normal color vision to understand what it's like to be colorblind. This handy tool helps to bridge the understanding gap between colorblind and non-colorblind people. It operates on a user's local machine, so there's no need to upload images to a server, which also means there are no restrictions on the size of the images you can use. However, do note that the "Lens feature" may have some issues on certain browsers like Edge and Internet Explorer, but it works fine on most others (Coblis - Color Blindness Simulator, 2023).

Users can either upload an image or drag and drop it into the simulator. There's also a zoom and move functionality, allowing users to inspect different areas of the image in more detail.

The algorithms of the Coblis simulator can transform any picture into how it would be perceived by red-, green-, blue-, or completely colorblind individuals. The simulator's first version was built using the color blindness matrix provided by Michael from ColorJack. However, the current version is based on the jsColorblindSimulator project developed by MaPePeR, which only uses client resources and has added cool features like pan and zoom. The HCIRN Color Blind Simulation function is now utilized and freely available for non-commercial use. The function is copyrighted by Matthew Wickline and the Human-Computer Interaction Resource Network (HCIRN).

While Coblis is a widely used tool for color blindness simulation, it's worth noting that the full HCIRN simulation function hasn't been formally evaluated, though it's generally considered reasonable in practice. A simplified version of the simulation function, the "ColorMatrix" approximation by ColorJack, is often avoided due to its inaccuracy. Despite this, it's still quite widespread, likely because it's easy to copy and paste (Algorithm to simulate color blindness, 2023).

Overall, while Coblis is a useful tool for getting a general sense of what it's like to have a color vision deficiency, it could be more accurate, so it's important to understand its limitations when using it. The colorblind experience is complex and can vary significantly among individuals, and simulation tools like Coblis can't capture all the subtleties of this experience.

B. Brettel, Viénot, & Mollon, 1997

The paper "Computerized simulation of color appearance for dichromats" by Brettel, Viénot, and Mollon (Brettel, Viénot, & Mollon, 1997), proposes an algorithm that transforms a digitized color image to simulate the appearance of the image for people who have dichromatic forms of color blindness. The algorithm is based on colorimetry and the reports of unilateral dichromats described in the literature.

Dichromats are individuals who possess only two of the three classes of cone photopigment, so their color vision is based on two channels rather than three. This condition arises from the absence of one of the retinal photopigments, which can be of the long-wavelength (L) type in protanopes, the middle-wavelength (M) type in deuteranopes, or the short-wavelength (S) type in tritanopes.

The algorithm represents color stimuli as vectors in a three-dimensional LMS space, and the simulation algorithm is expressed in terms of transformations of this space. The algorithm replaces each stimulus by its projection onto a reduced stimulus surface. This surface is defined by a neutral axis and by the LMS locations of those monochromatic stimuli that are perceived as the same hue by normal trichromats and a given type of dichromat.
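To make the projection step concrete, here is a minimal geometric sketch in Python. It projects an LMS stimulus onto the plane through the origin spanned by the neutral axis and one anchor stimulus, by recomputing the missing cone coordinate. The anchor values here are made-up placeholders (the paper derives them from specific monochromatic stimuli), so this illustrates only the geometry, not the paper's actual numbers:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# Neutral axis: equal-energy white in LMS (an assumption of this sketch).
E = [1.0, 1.0, 1.0]
# Anchor stimulus in LMS: a HYPOTHETICAL placeholder, not the paper's value.
A = [0.2, 0.9, 0.1]

def simulate_protanope(lms):
    """Project an LMS stimulus onto the plane spanned by E and A,
    keeping M and S fixed and recomputing L (protanope case)."""
    n = cross(E, A)                       # normal of the reduced surface
    L, M, S = lms
    L_new = -(n[1] * M + n[2] * S) / n[0] # solve n . (L, M, S) = 0 for L
    return [L_new, M, S]
```

Note that the full algorithm described in the paper actually uses two half-planes, one per anchor wavelength, choosing between them depending on which side of the neutral axis the stimulus falls; a single plane is a simplification.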

The operation of the algorithm was demonstrated with a mosaic of square color patches. A protanope and a deuteranope accepted the match between the original and the appropriate simulated image, confirming that the reduction is colorimetrically accurate. However, the authors caution that while the simulation provides a means of quantifying and illustrating the residual color information available to dichromats in any digitized image, we can never be certain of another's sensations.

The authors also note that the simulation is limited by the gamut of colors on the video monitor. They overcame this limitation by starting the transformation with an image consisting of a subset of the colors obtainable in the RGB space of the monitor.

The authors caution that the genetic basis for unilateral inherited dichromacy is not well understood, and the brain's plasticity may minimize discrepancies between the sensations evoked by the two eyes of the unilaterally color blind. They also note that their implicit assumption that for a particular type of dichromat only one color subsystem is affected may not always hold.

C. Viénot, Brettel, & Mollon, 1999

The paper, "Digital video colourmaps for checking the legibility of displays by dichromats," by Viénot, Brettel, and Mollon (Viénot, Brettel, & Mollon, 1999), focuses on the development and application of digital video color maps. The authors' primary aim is to enhance the legibility of digital displays for dichromats. The research builds upon previous studies and knowledge in the field of color perception and digital display technology. The authors have developed a unique algorithm that simulates dichromatic vision, allowing designers to create more accessible digital displays.

The algorithm developed by the authors has been tested and evaluated for its accuracy in simulating dichromatic vision. The results indicate that the algorithm is effective in creating digital video color maps that improve the legibility of displays for dichromats. However, the study acknowledges certain limitations. The algorithm's effectiveness may vary depending on the specific type of dichromatism (protanopia or deuteranopia). Furthermore, the study is based on the assumption that the spectral sensitivities of dichromats are known and constant, which may not always be the case. Despite these limitations, the research provides a significant contribution to improving the accessibility of digital displays for individuals with color vision deficiencies.
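For protanopes and deuteranopes, the reduction in this approach amounts to replacing the missing cone response with a fixed linear combination of the two remaining ones. Below is a minimal sketch using the replacement coefficients commonly attributed to this paper (reproduced from secondary sources, so verify against the original before relying on them); the RGB-to-LMS conversion that surrounds this step in the full method is omitted:

```python
def vienot_protanope(lms):
    """Replace the L response with a linear combination of M and S.
    Coefficients as commonly attributed to Vienot, Brettel & Mollon (1999);
    they assume that paper's specific LMS space."""
    L, M, S = lms
    return [2.02344 * M - 2.52581 * S, M, S]

def vienot_deuteranope(lms):
    """Replace the M response with a linear combination of L and S."""
    L, M, S = lms
    return [L, 0.494207 * L + 1.24827 * S, S]
```

A useful sanity check for any such reduction is idempotence: simulating an already-simulated color must leave it unchanged, since the replaced channel is recomputed from channels that did not change.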


D. Machado, Oliveira, & Fernandes, 2009

The article titled "A Physiologically-based Model for Simulation of Color Vision Deficiency" by Gustavo M. Machado, Manuel M. Oliveira, and Leandro A. F. Fernandes (Machado, Oliveira, & Fernandes, 2009) presents a model for simulating color perception based on the stage theory of human color vision. The model is the first to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way.

The paper begins by discussing the importance of understanding color vision deficiency (CVD), which affects approximately 200 million people worldwide. It then develops the physiologically-based model itself, deriving it from data reported in electrophysiological studies, and validates it through an experimental evaluation involving groups of color-vision-deficient individuals and individuals with normal color vision.

The authors claim that their model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color-vision-deficient individuals. However, the paper does not provide explicit information about the limitations of the model.
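One practical appeal of this model is that, for a given deficiency type and severity, it collapses to a single 3x3 matrix applied directly to linear RGB. The sketch below uses what is commonly reproduced as the severity-1.0 protanopia matrix from the paper's published tables; the coefficients are quoted from secondary reproductions, so check them against the original before relying on them:

```python
# Machado et al. (2009) simulation matrices act directly on LINEAR RGB.
# Coefficients: commonly reproduced severity-1.0 protanopia matrix
# (verify against the paper's published tables before production use).
PROTANOPIA_10 = [
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
]

def simulate(rgb, matrix=PROTANOPIA_10):
    """Apply a CVD simulation matrix to a linear-RGB triple."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in matrix]

# Each row sums to ~1, so grays are left essentially untouched: the
# neutral axis is preserved, a sanity check worth keeping around.
print(simulate([0.5, 0.5, 0.5]))
```

Per-severity matrices for anomalous trichromacy work the same way, which is what lets the model treat dichromacy and anomalous trichromacy uniformly.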

E. Open Source Color Blindness Simulation

The article (Review of Open Source Color Blindness Simulation, 2021) on DaltonLens provides a thorough review of various open-source programs and methods used to simulate color vision deficiencies (CVD). Here is a summary of its main points.

  1. Overview of the main existing approaches. The article reviews several methods for simulating color blindness, including the HCIRN Color Blind Simulation function, Brettel, Viénot & Mollon 1997, Viénot, Brettel & Mollon 1999, and Machado 2009. Each method has its strengths and weaknesses, and the article provides a detailed analysis of these.

  2. Evaluation of accuracy and limitations. The article provides a critical evaluation of each method, discussing their accuracy and limitations. For instance, the HCIRN Color Blind Simulation function is considered questionable in terms of accuracy, especially in its older version. The Brettel, Viénot & Mollon 1997 method is considered a solid choice for tritanopia, while the Viénot, Brettel & Mollon 1999 method is recommended for protanopia and deuteranopia. The Machado 2009 method is praised for supporting both dichromacy and anomalous trichromacy in a principled way, but it doesn't work well for tritanopia.

  3. Recommendations. The article recommends different methods depending on the type of deficiency. For tritanopia, the Brettel 1997 approach is recommended. For protanopia and deuteranopia, the Viénot 1999 and Machado 2009 methods are recommended. The article strongly advises against using the ColorMatrix (Coblis V1) method.

  4. Code implementation. The author provides several code implementations of the discussed methods, including libDaltonLens, a minimalistic single-file library with a public-domain license, and DaltonLens-Python, which is targeted toward experimentation and research.

  5. Accuracy of simulations. The article concludes by noting that while the discussed models have a solid theoretical background, they remain approximate mathematical models of the complex human perception of colors. Therefore, their accuracy varies, and they should be used with caution. The models are based on average observers and do not account for individual variations or the plasticity of the brain, which can potentially adapt and change color perception at higher levels.
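The recommendations above can be captured in a tiny dispatcher. The function and method names below are hypothetical stand-ins for whichever implementations you actually use, not the API of any of the libraries mentioned:

```python
# Hypothetical dispatcher following the article's recommendations:
# Brettel 1997 for tritanopia; Viénot 1999 (or Machado 2009) for
# protanopia and deuteranopia; never the Coblis V1 ColorMatrix.
RECOMMENDED_METHOD = {
    "protanopia": "vienot1999",    # Machado 2009 is also a good choice
    "deuteranopia": "vienot1999",  # likewise
    "tritanopia": "brettel1997",   # Machado 2009 is weak here
}

def pick_method(deficiency):
    """Return the recommended simulation method for a deficiency type."""
    try:
        return RECOMMENDED_METHOD[deficiency]
    except KeyError:
        raise ValueError(f"unknown deficiency: {deficiency}")
```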


V. Future Directions and Improvements in CVD Simulations

A. Current Limitations and Challenges in CVD Simulations

As we've discussed, CVD simulations have come a long way, but they're not perfect. Understanding and mimicking the intricacies of human color perception is a complex task, especially when you're trying to replicate what happens when things go awry. Even though we have good models and algorithms in place, the truth is that color perception theory is complex. So, some limitations and challenges exist.

One challenge is ensuring that the simulations are accurate and not overly exaggerated. This is a difficult task, and it's not always easy for developers to fully understand the underlying algorithms. Furthermore, it's also challenging to evaluate the accuracy of the simulations, as they can still appear reasonable to an untrained observer, even if they're not entirely accurate (Review of Open Source Color Blindness Simulation, 2021).

B. Potential Areas of Research and Development

Looking ahead, there's a lot of room for improvement in CVD simulations. One promising avenue is the development of models based on actual physiological data. For instance, the work of Machado, Oliveira, and Fernandes (2009) leverages data from electrophysiological studies and provides a unified framework for normal color vision, anomalous trichromacy, and dichromacy. This kind of model offers new opportunities for testing hypotheses about the role of retinal photoreceptors in CVD, which could lead to more accurate and effective simulations in the future.

Another exciting development is the use of optimized color maps. These are designed with consideration for color vision deficiencies, ensuring that the data can be accurately interpreted by as many viewers as possible. A recent example of this is a Python module called cmaputil, which was developed to create CVD-optimized color maps that are perceptually uniform in CVD-safe colorspace while maximizing the brightness range (Nuñez, Anderton, & Renslow, 2018). This kind of development can significantly improve the visualization experiences for individuals with CVD, and there's great potential for further improvements in this area.

VI. Conclusions

So, let's wrap this up. We've taken a journey through the world of color vision deficiencies (CVDs) and their simulation. We dove into the different types of CVDs and discussed the essential role that cone cells play in color perception. We then delved into the science of CVD simulations, highlighting the importance of the LMS color space and discussing some of the challenges involved in creating accurate simulations. Finally, we looked at some of the current limitations and future directions in CVD simulations.

In summary, CVD simulations are an invaluable tool that can help us understand the experiences of people with color vision deficiencies. While they are not without their limitations, ongoing research and development promise to bring about significant improvements. By continuing to refine our models and develop more sophisticated simulations, we can hope to increase the inclusivity of visual content, enabling individuals with CVD to participate more fully in a world that's often heavily reliant on color. Through these simulations, we are one step closer to creating a more inclusive and accessible world for everyone.

Did you know that at Tynge, we are developing Android and iOS apps for color vision deficiency simulation? We will be releasing them soon and we will update this page with details on how to download them.


References

Algorithm to simulate color blindness. (2023, June 13). Retrieved from stackoverflow: https://stackoverflow.com/questions/12168795/algorithm-to-simulate-color-blindness

Brettel, H., Viénot, F., & Mollon, J. D. (1997). Computerized simulation of color appearance for dichromats. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 2647–2655.

Coblis - Color Blindness Simulator. (2023, June 13). Retrieved from Colblindor: https://www.color-blindness.com/coblis-color-blindness-simulator/

LMS color space. (2023, June 13). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/LMS_color_space

Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A physiologically-based model for simulation of color vision deficiency. IEEE Transactions on Visualization and Computer Graphics, 1291–1298.

Nuñez, J. R., Anderton, C. R., & Renslow, R. S. (2018). Optimizing colormaps with consideration for color vision deficiency to enable accurate interpretation of scientific data. PLoS ONE.

Review of Open Source Color Blindness Simulation. (2021, October 19). Retrieved from DaltonLens: https://daltonlens.org/opensource-cvd-simulation/

Viénot, F., Brettel, H., & Mollon, J. D. (1999). Digital video colourmaps for checking the legibility of displays by dichromats. Color Research & Application, 243–252.



