In this exploration of computer graphics, the author implements Atkinson dithering for color images with custom palettes and correct linearisation, going beyond the traditional black-and-white use case. Starting from existing posts on HN and other sources such as amanvir.com and surma.dev, they explain how to extend dithering beyond grayscale by comparing all three RGB channels instead of a single scalar value.
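For context, here is a minimal sketch of the classic monochrome case that the color version builds on. The names and structure are illustrative, not taken from the article or its repository; the key detail is Atkinson's pattern of spreading only 6/8 of the quantisation error to six neighbours:

```python
import numpy as np

# Atkinson's diffusion pattern: six neighbours, each receiving 1/8 of the
# quantisation error (only 6/8 is propagated in total, which gives the
# characteristic high-contrast look).
ATKINSON_OFFSETS = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]

def atkinson_grayscale(img, threshold=0.5):
    """Dither a grayscale image with values in [0, 1] to black and white."""
    img = img.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= threshold else 0.0
            out[y, x] = new
            err = (old - new) / 8.0
            for dy, dx in ATKINSON_OFFSETS:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err
    return out
```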
To dither in color, the author computes distances in 3D color space between each pixel value and the colors of the chosen palette, picking the nearest match. Quantisation errors are then accumulated separately for each channel, mirroring the error-diffusion approach of monochrome dithering, as sketched below. To experiment with different palettes and see results instantly, they recommend the web interface at ditherit.com.
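A rough sketch of how those two ideas could fit together, assuming an (H, W, 3) float image and a palette given as an N×3 array; names are mine, and the author's actual implementation lives in the ditherpy repository linked below:

```python
import numpy as np

# Same six Atkinson offsets as in the grayscale sketch above.
ATKINSON_OFFSETS = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]

def nearest_palette_color(pixel, palette):
    """Return the palette entry closest to `pixel` in 3D (R, G, B) space."""
    dists = np.sum((palette - pixel) ** 2, axis=1)
    return palette[np.argmin(dists)]

def atkinson_color(img, palette):
    """Dither an (H, W, 3) float image to the given palette, diffusing the
    quantisation error of each channel independently."""
    img = img.astype(np.float64).copy()
    palette = np.asarray(palette, dtype=np.float64)
    h, w, _ = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = nearest_palette_color(old, palette)
            out[y, x] = new
            err = (old - new) / 8.0  # one error value per R, G and B channel
            for dy, dx in ATKINSON_OFFSETS:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err
    return out
```

A call like `atkinson_color(img, [[0, 0, 0], [1, 1, 1], [1, 0, 0]])` would, for instance, reduce an image to black, white and red while keeping the overall tonality through the diffused error.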
However, a crucial step that is often overlooked is linearising the input image before processing: dithering sRGB-encoded values directly yields outputs that look too bright. The author stresses converting images into a linear color space before applying any dithering for accurate results. They also take human perception into account by weighting each channel according to how much it contributes to perceived brightness (perceptual luminance): R: 0.2126, G: 0.7152, B: 0.0722.
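The decoding and encoding steps below follow the standard sRGB transfer function; a plausible pipeline is to decode image and palette to linear light, dither there, then re-encode the result for display. How exactly the luminance weights enter the computation is the article's call; the weighted distance shown here is just one reasonable option, not necessarily the author's:

```python
import numpy as np

def srgb_to_linear(c):
    """Decode sRGB values in [0, 1] to linear light (IEC 61966-2-1)."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Encode linear-light values in [0, 1] back to sRGB for display."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

# Rec. 709 / sRGB luminance weights: how strongly each channel contributes
# to perceived brightness.
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def weighted_distance_sq(a, b):
    """Luminance-weighted squared distance between two linear RGB colors,
    one way to fold the perceptual weights into the palette search."""
    d = a - b
    return float(np.sum(LUMA_WEIGHTS * d * d))
```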
For a working implementation, or to learn more about related concepts from other sources such as makew0rld's blog post on linearisation or the author's own GitHub repository ditherpy, follow the links provided throughout the article.
Finally, readers with deeper insight into color theory who notice any errors in this explanation are welcome to reach out for a constructive discussion.
Complete Article after the Jump: Here!