DESCRIPTION
Explore a groundbreaking solution addressing color blindness, a challenge affecting nearly 300 million people worldwide. Learn how Intel is developing innovative technology to enhance digital accessibility through customized color palettes and adaptive interfaces. This session will demonstrate practical approaches to making digital environments more inclusive for individuals with color vision deficiency, from educational materials to professional tools and everyday applications.
Speakers
Arvind Sundaram, Senior Principal Engineer, Intel
Darryl Adams, Accessible Technology Innovator (Moderator)
SESSION TRANSCRIPT
[MUSIC PLAYING]
VOICEOVER: Increasing Access Across Apps, Addressing Colorblindness Barriers. Speaker, Arvind Sundaram, Senior Principal Engineer, Intel. Moderator, Darryl Adams, Accessible Technology Innovator.
DARRYL ADAMS: Thank you, and thank you to Sight Tech Global for producing this conference, bringing this leading-edge technology to the forefront, and shining a spotlight on vision tech. I'm Darryl Adams, and I recently left Intel after 28 years, where I was the Director of Accessibility. I'm now pursuing new ways of defining and driving assistive technologies to help people with disabilities live better lives, and I'm currently chairing the Consumer Technology Association's Accessibility and Age Tech Working Group, where we hope to bring the industry closer together and improve how technology serves people with disabilities and the aging population.

I have retinitis pigmentosa, which means I'm losing my eyesight from the outside in. I think most people on this path, who have some experience with progressive sight loss, are constantly relearning what they can and cannot do and how they can and cannot rely on their eyesight. But it's such an incredible time, because we're also able to get a glimpse into the future of the fantastic set of technologies that are becoming available, and this conference today is a way to highlight all of those capabilities. So I'm really excited to be here and to experience all of the different technologies that will be shared today.

Today we're going to be sharing some of the great work that Intel is doing to improve the digital experience for people who are colorblind, but before we get into that, I want to talk a little bit about the history of Intel and sight tech innovation. Starting back around 2007-2008, Intel created a product called the Intel Reader. This was a handheld device specifically designed to help people who are dyslexic, visually impaired, or blind interact with written documents. It would let you scan a document and then use OCR to read it back to you, or, optionally, if you had sight, it could enhance the text so it would be easier to read. Keep in mind this was right as the first smartphone, the iPhone, came onto the market, so it was fairly quickly evident that over a number of years this capability would move into the phone. But when it launched in 2009, it was a really fantastic product with a lot of utility for the blind and low vision community. The thing that was most interesting to me was that this product was the brainchild of a single employee at Intel who was dyslexic and had a passion for figuring out how to solve for his own needs. He took that vision, made it his mission, and created a product, and it was successful. From that point forward, I realized that as an individual, if you have the right drive and passion, you can make things happen, even at the largest companies.

The next thing I'd like to bring to your attention is the spatial awareness wearable. Back in 2014, 10 years ago, Intel developed a wearable technology based on a RealSense camera, which is a 3D depth-sensing camera, combined with a number of haptic actuators positioned on a person's body. It would basically allow someone to move through space and be aware of their surroundings based on haptic feedback.
No AI involved, but it was all based on the cutting-edge sensing capabilities of the time, and I actually had the opportunity to demonstrate this capability on stage at Intel's keynote at CES in 2015. More recently, Intel has been partnering with GoodMaps to work on improving indoor wayfinding. In particular, we were interested in making more accurate and more efficient use of LiDAR scanning technologies, so that solutions like GoodMaps can scale more efficiently and more quickly, and more people can take advantage of the benefits of really accurate indoor wayfinding maps.

As we move into the future, my vision is that AI is really changing our relationship with technology. We are at a point where the technology is going to be able to improve our communications and ultimately improve human connection and our level of shared understanding. Think about it this way: as the technology becomes more sophisticated, it also blends into the background, so we're going to have less computing interface and more human interaction. Our devices are also learning how we communicate. So for the first time after decades, instead of people needing to learn how to use technology and climb that learning curve, technology is beginning to understand us fundamentally and will work with us. This is going to be a much more natural way of interacting with each other and with technology. In addition, our devices will fundamentally understand our needs and preferences as individuals. Consider the idea that your computer will understand how you hear, or how you don't hear, how you see, or the things that you're unable to see, and will correct for that in real time.

That idea leads into today's topic, where Intel is developing technology to help people who are colorblind get the full benefit of what they're seeing on their own displays. This is done by first understanding what somebody can and cannot see, and then applying a filter at the graphics driver level, so that everything rendered to the display can be seen in a useful way by that individual user. I'd like to now introduce Arvind Sundaram, who is a Senior Principal Engineer in the Client Computing Group at Intel. Arvind and his team have been working on this technology, and he'll now go through the details of how it works. Arvind, over to you!
ARVIND SUNDARAM: Hello everybody, and thank you, Darryl, for the wonderful introduction, and thank you, Sight Tech Global, for providing this wonderful platform and the opportunity to showcase this fantastic technology that we are super excited about. I am here representing Intel. My name is Arvind, I am a middle-aged male of Asian descent, and I'm very happy and excited to be here with you today. I'm going to introduce a technology that could really help individuals with color perception challenges: the tool that we are developing, and how it is meant to help people with those challenges.

With that background, let me quickly run through what we are going to cover in this presentation. I will start off with the problem statement, and we'll look at what color blindness, or a color perception challenge, is and what the different types are. Then I will move into the solution space: what exactly we're trying to solve and how we're going to solve it, with a step-by-step walkthrough. I'll show what kind of test the tool runs and what we derive from it, peek slightly into the algorithm, and then, lastly, show the user interface, when the whole application is going to be available, and what the next steps are.

So, let me bring up the problem statement. Color perception is something that's often taken for granted; however, reduced color perception is more common than expected if you go by the statistics. As much as eight percent of the male population, specifically, has challenges perceiving colors, and if you include contrast deficiency and other defects, it could be as high as 24 percent. That's a mind-boggling figure: almost one-fourth of the male population has some sort of visual perception challenge. So we started looking at how we can help this particular group of individuals. A lot of professions are increasingly dependent on color perception; the current industry relies heavily on slideshows and illustrations, and these use a very wide color spectrum. So we are depriving a significant portion of our population of the ability to perceive, or even appreciate, all of these slideshows and illustrations.

So, are there no solutions? There are some solutions out there: the operating system, for example, and some specific tools in the market that provide color correction. But we found that the process is not very user friendly and can be quite cumbersome. You're basically playing with millions of colors, so it's never possible to get the exact color correction you want.
So color correction is not exactly easy for an end user, and it can even be quite frustrating. There is a pressing need for a simple, low-cost yet effective automated tool that can identify the user's color perception deficiency, apply the corrections at the device level, and verify whether the correction is indeed effective. Darryl talked about PCs becoming more perceptive and more user attentive: a PC that perceives what the user can and cannot see, or can and cannot hear. This is one step towards making the PC smarter, bringing its intelligence to use by identifying what the user can and cannot perceive and then making the appropriate changes so they can appreciate exactly what is being shown on the display. That's our attempt.

Now, let me quickly look at what color blindness, or a color perception challenge, actually is. Humans perceive colors through three basic colors: red, green, and blue. There are certain sensor elements within the eye that we call cones; without going into the specifics, these are the main sensory nerve endings responsible for perceiving colors. A significant portion of the population can perceive a wide spectrum of color; however, a certain group of individuals do have perception challenges. Take, for example, protanopia, a red deficiency. When I say that a person cannot perceive red, it's not that they cannot see red at all; it's that they don't see it with the vibrancy that a person with normal vision perceives, which means they see the other colors, green and blue, as much more pronounced compared to red. That's a very important aspect of color blindness. Similarly, a person can have a green deficiency or a blue deficiency, so there are different colors, and also combinations of these colors, that the end user may not be able to perceive. A user may not necessarily have only a red deficiency or only a green one, but a combination of these as well. That's where the complication comes in when applying a correction factor, and that's where we felt the need for an automated tool that can identify this mix of color deficiencies and determine how much correction needs to be done. Everything happens in the background, and it's a matter of just a few minutes before the display starts correcting itself and the end user gets a display that is calibrated for their eyes. That's the goal.

With that, let's look at how we get to the solution; what's the approach we are taking? The approach is not very different from what a professional or a doctor would use to assess a patient's color perception deficiency. We drew inspiration from the Ishihara test, which is a test for colorblindness, and we have derived charts from it that are very similar to what is used by professionals. We use these charts in an online format, meaning they are presented as slides on the PC or device that the end user is using.
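To make the protanopia description above concrete: a common way to approximate what a red-deficient viewer perceives is to multiply each linear-RGB color by a published simulation matrix. The sketch below is not Intel's algorithm; it uses coefficients widely attributed to Machado et al. (2009), and the function name is illustrative.

```python
import numpy as np

# Approximate protanopia simulation matrix (severity 1.0) in linear RGB,
# coefficients widely attributed to Machado et al. (2009). Illustrative only;
# this is not the matrix used by Intel's tool.
PROTANOPIA_SIM = np.array([
    [ 0.152286, 1.052583, -0.204868],
    [ 0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116, 1.051998],
])

def simulate_protanopia(rgb_linear: np.ndarray) -> np.ndarray:
    """Approximate how a protanope perceives a linear-RGB color."""
    return np.clip(PROTANOPIA_SIM @ rgb_linear, 0.0, 1.0)

# A saturated red loses most of its distinct "redness"...
print(simulate_protanopia(np.array([1.0, 0.0, 0.0])))  # ~[0.15, 0.11, 0.00]
# ...while blue survives largely intact, as Arvind describes.
print(simulate_protanopia(np.array([0.0, 0.0, 1.0])))  # ~[0.00, 0.10, 1.00]
```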
I'll show the demo shortly, but you can see that these are standard charts containing color-coded numbers or images of objects or animals. They are presented to the user, and the user is required to select what they see. For example, if a number is shown, the user is given a multiple choice and picks the option that represents what they are seeing. We have about 85 color charts, which are presented in succession, and based on these inputs, a clear picture of the color perception deficiency is identified and recorded. From this, the algorithm picks the correction that needs to be done, and we will see what is derived and how. Then, based on the correction, the entire display changes, very much to the benefit of the end user: you basically increase or decrease the contrast of certain colors in the spectrum so that it compensates for the end user's reduced color perception capability.

That's the overall, very high-level approach. However, let's go step by step and peek a little bit under the hood to see how it works. As I mentioned, this is a very standard color perception test. We start off with a standard color palette; I'll show a few palettes here and explain what's being shown, so that you can get an idea. It's used by expert professionals to identify the extent of color perception deficiency. I'm going to start an automated test here, where you will see a succession of about 20-plus slides showing the charts. Starting off with this one: what you might be able to see is a ball made up of multiple small balls of different colors, but hidden within it is a number. If you have normal perception, you will see that it's the number three hiding here; however, if you have certain color deficiencies, you might see a different number. That's what the application is trying to hear from you: are you seeing the number three, or are you seeing something else? The multiple choices capture exactly what you are seeing. Similarly, in this next chart, if you have normal color perception, then you will see the number 42. However, some people might see only the four, some people might see only the two, and some people may not see any number at all; they just see a collection of black and white balls. So that's also an option. You will notice that these charts are deliberately positioned at different corners of the display, not in one single area, because we are trying to gauge whether the color perception challenges change across the corners and whether they are different in the middle.
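As a rough illustration of the test flow just described, a succession of charts, each with multiple-choice answers whose selection hints at specific deficiencies, here is a minimal sketch. The Chart and TestSession structures, the evidence mappings, and the example plate are all hypothetical; the actual application uses roughly 85 charts positioned across the display.

```python
from dataclasses import dataclass, field

@dataclass
class Chart:
    """One Ishihara-style plate: what it shows, and what each answer implies."""
    image_path: str
    choices: list[str]                    # e.g. ["42", "4", "2", "nothing"]
    # Hypothetical mapping: chosen answer -> per-channel deficiency evidence
    evidence: dict[str, dict[str, int]]

@dataclass
class TestSession:
    charts: list[Chart]
    responses: list[str] = field(default_factory=list)

    def record(self, answer: str) -> None:
        self.responses.append(answer)

    def deficiency_scores(self) -> dict[str, int]:
        """Accumulate evidence of red/green/blue perception deficiency."""
        totals = {"R": 0, "G": 0, "B": 0}
        for chart, answer in zip(self.charts, self.responses):
            for channel, weight in chart.evidence.get(answer, {}).items():
                totals[channel] += weight
        return totals

# The "42" plate from the talk: seeing only part of the number, or nothing,
# points to different deficiencies. Evidence values here are made up.
chart = Chart(
    image_path="plate_42.png",
    choices=["42", "4", "2", "nothing"],
    evidence={"4": {"G": 4}, "2": {"R": 4}, "nothing": {"R": 6, "G": 6}},
)
session = TestSession(charts=[chart])
session.record("nothing")               # user saw no number on this plate
print(session.deficiency_scores())      # {'R': 6, 'G': 6, 'B': 0}
```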
So we are testing across different corners of the display, trying to see what it is that you're looking at and what you perceive, and based on that, gauging the depth of the color perception deficiency that the user has. Here you see the number five being displayed; however, some people may only see the number three, and some may see only a circle with orange and brown dots. Similarly, I'm going to show a sequence of these and how the numbers appear. Of course, I've used the example of numbers here, but we also have black and white charts, and we use charts with pictures of animals and objects. They are deliberately made in such a way that they can invoke different responses, and we are basically comparing what is shown on each chart against what the user reports seeing.

In other words, let's go into step two, where we look at the algorithm. The algorithm looks at your responses. For example, let me explain with this slide. On the bottom left, you will see the chart I showed earlier: an image with the number six, and multiple choices. Let's assume you chose "I don't see anything," in which case you don't perceive red. This corresponds to what we call a confusion line; the reason we call it a confusion line is that the user cannot see along this line, so we say the user is confused and cannot see the red. Similarly, if it's a green that's missed, we call it a green line; if it's a blue, a blue line. We derive a weightage for each of these three basic colors and try to find the combined picture across all of them.

Let's take an example. If you see nothing in the chart on the bottom-left corner, then we give R a weightage of 10, where 10 is the maximum, meaning you don't see red at all. If you see the six, your vision is normal where red is concerned, and R is given a weightage of one. If you see a three or a four, the weightage falls somewhere in between. And if you see an eight, the R line is three, but the B line is also three, which means we are looking not just at the single basic colors but at combinations of these colors as well. So now we have a succession of these charts, deliberately positioned and very specifically chosen so that we can cover all the colors that are needed, and we derive a weightage for each. Based on this, we derive what we call matrix multipliers and do a matrix transformation. Our approach is not to enhance the deficient color but rather to de-emphasize the other colors. For example, if the person has protanopia, which is a red deficiency, then we de-emphasize the green and blue, so that the user actually sees the entire image with the relative intensity that was originally intended. We do this de-emphasis rather than emphasis because otherwise there is a possibility that the colors might get saturated; we might actually hit the display limit.
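A minimal sketch of the weightage-to-correction step described above, assuming the 1-to-10 weight scale from the talk and simplifying the matrix transformation to a diagonal de-emphasis matrix. The weight-to-gain mapping is made up for illustration; Intel's actual matrix multipliers are not published in this talk.

```python
import numpy as np

def correction_matrix(weights: dict[str, float]) -> np.ndarray:
    """Build a diagonal de-emphasis matrix from per-channel weights.

    weights: 1.0 = normal perception, 10.0 = channel not perceived at all
    (the scale described in the talk). Instead of amplifying the weak
    channel, which could clip at the display limit, the healthier channels
    are attenuated so relative intensities match the original intent.
    The mapping from weight to gain below is illustrative only.
    """
    w = np.array([weights["R"], weights["G"], weights["B"]])
    deficit = (w - 1.0) / 9.0          # 0.0 = normal, 1.0 = total loss
    worst = deficit.max()
    # De-emphasize each channel in proportion to how much *less*
    # deficient it is than the worst channel.
    gains = 1.0 - 0.5 * (worst - deficit)
    return np.diag(gains)

# Example: strong red deficiency (R=7), near-normal green and blue.
M = correction_matrix({"R": 7.0, "G": 1.0, "B": 2.0})
pixel = np.array([0.9, 0.8, 0.7])      # a linear-RGB pixel
print(M @ pixel)                        # green/blue pulled down relative to red
```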
So we are looking at the inverse application, which is the de-emphasis part. While we are working on this, we also take luminance into account, because, as we discussed earlier, a lot of people have problems perceiving not just color but also contrast. So we do luminance adjustments as well. Here is where we actually make use of the compute capability that the PC has to apply what is appropriately needed, and we apply it right at the graphics driver itself. That means there is no hardware change needed; it's just software that uses the input collected from the user, makes a few modifications to the way the colors are generated, and puts the result on the display. Once the correction is applied, we also re-run the test, and the efficacy is checked.

That leads us to the last step, which is the user interface. All of this, presenting the charts, the algorithm, and the application to the driver, is packaged in a single Windows application. The application can be run at will, or once every six months, or whenever the user logs in for the first time; it's completely up to the user. The results are also stored on the device against the user login, which means that every time the user logs in, the corrections are applied automatically. However, if someone else logs in, the app will not apply the correction. There is also built-in periodic retesting, so that age-related degeneration is accounted for. And the user is always given the option to go back to the standard color table whenever they need it, without any issues whatsoever.

So what will the user see post-calibration? Here I'm showing an image, again taking the example of a person who has a red perception challenge, protanopia. They don't see the red colors popping out. The image is deliberately washed out so that a person with normal vision sees it the way a person with protanopia would. With the correction, however, the red colors pop, so they can actually appreciate the contrast and the colors that exist. That's our intent; that's how we see it.

In conclusion, the application software is something we have already developed, and it can run on any PC; we are not limiting it to any CPU or GPU combination. It can be used across the board. We will also be offering it as a standard offering to any Intel PC buyer. We are right now at the stage of conducting field trials, and at the time of recording, feedback from various user trials is already being collected; we are gathering feedback on efficacy as well as user inputs, and it is being implemented. At Intel, we are looking at making this application available to the general public in the second half of 2025, and we'll also be bundling it as part of Intel's next-generation PC offering in Q2 of 2025. That brings me to the end of this presentation. I would really like to thank Sight Tech Global for providing this.
It’s a wonderful platform and opportunity, and I am really excited to see this technology coming to the forefront, and I hope it helps people who have vision perception challenges. Thank you.
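The per-login behavior Arvind described, storing results against the user login, reapplying them automatically at sign-in, retesting periodically, and allowing a revert to the standard color table, might be sketched as follows. The file location, field names, and the apply_color_matrix hook are all hypothetical; the real product applies the correction inside the graphics driver.

```python
import json
import time
from pathlib import Path

PROFILE = Path.home() / ".color_calibration.json"   # hypothetical location
RETEST_SECONDS = 182 * 24 * 3600                    # ~6 months, per the talk

def save_profile(matrix: list[list[float]]) -> None:
    """Store the calibration result against the current user's login."""
    PROFILE.write_text(json.dumps({"matrix": matrix, "tested_at": time.time()}))

def apply_on_login(apply_color_matrix) -> bool:
    """Reapply the stored correction at sign-in; return True if a retest is due."""
    if not PROFILE.exists():
        return True                                  # never calibrated
    data = json.loads(PROFILE.read_text())
    apply_color_matrix(data["matrix"])               # driver hook (hypothetical)
    return time.time() - data["tested_at"] > RETEST_SECONDS

def revert_to_standard(apply_color_matrix) -> None:
    """Restore the identity matrix, i.e. the standard color table."""
    apply_color_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```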
DARRYL ADAMS: Thank you, Arvind. This actually brings to mind that in my previous role, leading accessibility and also leading the Disability Leadership Council at Intel, many times employees and managers would confide in me about their accessibility challenges in the workplace. Most of the time, these were things they were not comfortable speaking about publicly or seeking accommodations for; it was just something they would be dealing with. And colorblindness was a very common component of this. They would say, hey, by the way, you know, I'm colorblind, and I'm always being asked to review these dashboards with red, yellow, green indicators, and it's so challenging. It's such a common thing in the business world to do that with red, green, and yellow. And while there are lots of ways and techniques that we can use for accessibility to help people design things more thoughtfully to include everyone, it's solutions like this, fundamentally designed into our technology, into our computing software and platforms, that can really make this universally useful to everyone. I'm really excited to see how this transpires over the next couple of years. And just to reiterate Arvind's points, Intel does plan to make this capability broadly available in the second half of 2025 on new PC platforms, and the technology is not proprietary, so it's not that you have to buy an Intel system; we want to make sure it is a technology everyone can benefit from. For more details, you can also read the research paper Arvind's team produced, which describes this solution in much more detail. So thank you everyone for tuning in today, and thank you again to Sight Tech Global for shining a spotlight on the latest advancements in vision tech.
[MUSIC PLAYING]