DESCRIPTION
While it's clear that AI-based technologies like natural language processing and computer vision are powerful tools to help with accessibility, there are also areas where AI technologies inject bias against people with disabilities by comparing them against "norms" established in databases. This panel will look at examples of where that is happening – in employment software, benefits determination, or even self-driving cars, for example – and approaches that will help address these issues from the ground up.
Speakers
- Jim Fruchterman (moderator)
- Lydia X. Z. Brown
- Jutta Treviranus
SESSION TRANSCRIPT
[MUSIC PLAYING]
JIM FRUCHTERMAN: Welcome to our session on AI fairness and bias. And we’re really excited to be talking about the issues around AI that aren’t so glorious and exciting as so much of what else is going on at Sight Tech Global. Here, we’re talking about some of the wider impacts that the increased use of AI actually has on people with disabilities.
And so our two panelists have spent a lot of time documenting some of those impacts. So let’s start with Lydia. Can you share a little bit of the research that you’ve been doing and give us some examples of how AI machine learning has had maybe a less than favorable impact on people with disabilities?
LYDIA XZ BROWN: Thanks, Jim, so much for asking that. Disabled people experience such widespread discrimination in society that we have a name for it. We call it ableism, and ableism is what happens when prejudice and bias against people with disabilities meet systems of power that reinforce and perpetuate those biases and prejudices against disabled people.
And those prejudices are built deeply into algorithmic decision making systems. One of the areas in which this shows up is in the context of public benefits. Many disabled people, because of reasons directly related to disability, rely on benefits to move through the world because we face record high levels of unemployment, precarious employment, and underemployment.
Because we are disproportionately likely to experience homelessness, and because we may have very specific physical and mental health care needs that are not readily met through existing services systems, we might rely extensively on access to public benefits in a number of ways to cover housing, to cover necessary services and care, and to cover gaps where employment and lack of supportive and accommodating work environments leave us.
And in public benefits, what we've seen in the United States is that thousands of disabled people have been affected by state governments' increasing adoption of benefits determination systems that are driven by algorithms. And those algorithm-driven determinations have tended to result in cuts to people's benefits across the board. There are different ways that this happens.
And one of the contexts that we've been writing about more recently is in Medicaid, where people who receive Medicaid benefits receive care hours that allow them to stay at home, to live in the community, to keep work if they have a job, and to receive whatever care they need to be able to live a meaningful and supportive life – at least when care is provided correctly by caregivers and support workers who respect you, and when you have adequate funding that subsidizes or outright covers the cost of that care for you.
And what we've seen is that when algorithm-driven benefits determinations cut people's access to those forms of care, someone who previously needed perhaps 56 or 70 hours of attendant care per week to do everything – from eating to taking medication to turning or repositioning themselves so that they don't get bedsores and so that they can engage in different activities and everything else imaginable – is now being told, well, you're now only approved for 30 hours of care a week, or you're only approved for 25 hours of care a week.
That number might seem arbitrary to a non-disabled person who doesn’t understand what it means to rely on such services. But for disabled people, that cut can result in dangers to health, to safety, as well as a severe deterioration of one’s quality of life.
And even worse, if your services have been cut so drastically that you don’t know for sure when you’re going to eat or to be able to use the bathroom or to be able to have support to go out into the world and visit the store, meet with friends in a non-pandemic, of course, or to do other things that people like to do to live life, then you might actually fear that you would have to go into an institution or a congregate care setting to receive the very same care and support that you should have been able to and were able to receive at your home in the community.
And those cuts to care just really [INAUDIBLE] how fundamentally flawed our benefits system is, one that relies on an entitlement system that we don’t fully fund and that you have to be able to prove that you’re [INAUDIBLE] disabled or disabled in the right way to be able to receive the right services and rely on non-disabled people’s judgment of what your needs are, of non-disabled people’s beliefs about your ability to communicate and express your needs.
And lastly, that places people in a deep bind where we’re often forced to choose between accepting some level of services that may be inadequate and unhelpful just to be able to stay free and live in the community or to risk going to an institution, and technically on paper, receive more services but be subjected to an infinitely more abusive and potentially neglectful environment.
And so those cuts, which can affect all people with disabilities that rely on Medicaid type services in that area, will, of course, end up harming disabled people who are low income or who are people of color or who are [INAUDIBLE] the most because we’re the least likely to have access to additional resources financially, supportive family or community members, or even just the ability mentally and cognitively to have energy to be able to do something about it.
JIM FRUCHTERMAN: Wow, well pushing people towards this institutionalization seems to be a very retrograde motion compared to all the disability rights activism [? from ?] the last 30 or 40 years. Jutta, you’ve spent a lot of time working on this issue internationally, as well as here in the US. Can you talk to us more about what your research has found about the impacts of AI, issues like fairness and bias?
JUTTA TREVIRANUS: Yeah, so anything based upon population data – data about people – is going to be biased against people who are different from the average, the majority, or the statistical norm. And this existed before we had artificial intelligence: anyone who would listen to predictions about all women, all men, all teenagers, et cetera, would see some inkling of that pattern.
Because if you have a disability, the only thing you have in common with other people with disabilities is difference from the average or the typical, to the extent that things don't work for you. And so that means that when it's not something like natural language processing – recognizing standard speech or detecting things that are average within the environment – but decisions about you as an individual based upon your behavior or your looks, how you act, what you've done in your life, your history, it's going to be biased against people with disabilities.
There are many things that are currently happening already that show this particular bias. You were earlier asking about one example that was my first experience of great alarm, which was in working with automated vehicles, where we were able to– and this was back in 2015 when automated vehicle learning models were just emerging.
And I had the opportunity to test an unexpected situation, knowing that data-driven systems, like automated vehicles, are depending upon a whole bunch of data about what happens within an intersection, typically to predict whether they should stop, move through the intersection, or change direction. And I introduced a friend of mine that pushes her wheelchair backward through the intersection very erratically, but she’s very efficient.
A lot of people that would encounter her in the intersection would think she had lost control and would try to push her back onto the curb. All the learning models of these automated vehicles chose to drive through the intersection and effectively run her over. That worried me somewhat. They all said, don't worry, we're going to train these. These are immature models. They don't have enough data yet about people in wheelchairs and intersections.
When I, however, came back to retest it, after they had fed these systems with lots of data, lots of images of people in wheelchairs moving through intersections, what happened that shocked me even more was that these learning models chose to run my friend over with greater confidence because the learning models showed that the average person in a wheelchair moves forward.
And that when a car was to encounter my friend in an intersection, the assumption would be they could proceed because she would not be going backwards into their path. That [INAUDIBLE] this alarm about, what are the implications of this behavior in all sorts of things. And since that time– and this was five years ago– there have been more and more of these instances that have popped up.
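To make that failure mode concrete, the following is a minimal sketch – invented numbers, not the actual vehicle software, whose internals are not public – of how a predictor that models pedestrians as the average of its training examples becomes more confident, not more cautious, as more typical wheelchair trajectories are added:

```python
import math
import statistics

def predict_heading(observed_headings):
    """Toy predictor: assume the next pedestrian will move like the average of
    previously observed pedestrians (heading in degrees, 0 = straight ahead)."""
    mean = statistics.mean(observed_headings)
    # Standard error of the mean: shrinks as more data is collected.
    sem = statistics.pstdev(observed_headings) / math.sqrt(len(observed_headings))
    return mean, sem

# Sparse early data: a few wheelchair users, all moving roughly forward.
early = [0, 5, -3]
# After "more training": many more examples, still almost all moving forward.
later = early + [0, 2, -1, 4, 0, 1, -2, 3, 0, 1] * 10

for label, data in [("early data", early), ("more data", later)]:
    mean, sem = predict_heading(data)
    print(f"{label}: expected heading {mean:.1f} deg (uncertainty ±{sem:.2f})")

# More typical data makes the estimate *more* confident, so a pedestrian moving
# backwards (heading ~180 deg) is even further outside what the model expects -
# it concludes, with greater confidence, that she will not be in its path.
```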
One of the things that has been concerning is security systems, where people with disabilities– anything anomalous that is detected in a security system is going to be flagged as a threat. So whether it’s moving through an airport security system and not actually meeting the expectations that someone would have for an average traveler.
Most recently, this year actually, the COVID situation with respect to tests and exams in schools has come up. And so what has been rolled out at many universities – billions of dollars have been spent, and thousands of universities are using proctoring systems – is software that uses artificial intelligence to detect who is cheating.
And the flag of cheating comes up if you do anything unusual. If you gaze and refocus somewhere that isn't at the screen. If you have strange movements with your hands and they're not on the keyboard or the mouse. If anyone comes into the room. If there's a vocalization which could be interpreted as someone speaking to you.
Any of those unusual things flag you as someone who’s cheating. And these types of exams are used to make very, very critical decisions about people’s lives. And so that pattern occurs. I mean, there’s so many other examples of where anything unusual, anything that isn’t average or typical or that doesn’t have to do with the majority is flagged as something that is a threat.
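The flagging logic described here is essentially outlier detection against a profile of the "average test-taker." A small hypothetical sketch – the feature names, baseline numbers, and threshold below are invented for illustration, not taken from any vendor's product – shows why non-normative input patterns get flagged almost by construction:

```python
# Hypothetical proctoring-style anomaly check. Every feature is scored by its
# distance from an "average test-taker" baseline, and anything unusual is
# flagged as suspected cheating. Feature names, baseline values, and the
# threshold are all invented for illustration.
BASELINE = {  # (mean, standard deviation) observed for "typical" test-takers
    "seconds_gaze_off_screen": (10.0, 5.0),
    "keyboard_idle_seconds":   (30.0, 15.0),
    "vocalization_events":     (0.5, 0.5),
}

def suspicion_flags(session, threshold=3.0):
    """Flag any feature more than `threshold` standard deviations from baseline."""
    flags = []
    for feature, value in session.items():
        mean, std = BASELINE[feature]
        z = abs(value - mean) / std
        if z > threshold:
            flags.append((feature, round(z, 1)))
    return flags

# A student who uses eye-tracking input and a sip-and-puff device: gaze leaves
# the screen often and the keyboard sits idle, so the "cheating" flags fire
# even though nothing improper happened.
student = {
    "seconds_gaze_off_screen": 120.0,
    "keyboard_idle_seconds": 600.0,
    "vocalization_events": 0.0,
}
print(suspicion_flags(student))
# [('seconds_gaze_off_screen', 22.0), ('keyboard_idle_seconds', 38.0)]
```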
On the other hand, as well, are all of the optimization techniques. Basically, artificial intelligence amplifies, automates, and accelerates whatever happened before. It’s using data from the past. So what it’s trying to optimize is what was optimal in the past. What was optimal with respect to average performance or normative performance.
And so in a hiring or recruitment situation, if someone like you was never performing that job before, you're never going to be chosen. If you as a student are applying for a highly competitive position in an academic department, and there is no data that a student like you has ever performed well, then you're not going to get an opportunity, et cetera.
It’s biased against anything that isn’t average, anything that isn’t the majority, anything that is unusual. And there’s a silver lining to that, which I don’t want to–
JIM FRUCHTERMAN: Well, I'd like to get back to the silver lining, Jutta. But I think one of the things that– many of the complaints about the use of AI is that it reinforces existing biases in society. So no brown person can get hired for this job because our algorithms were trained on a body of white employees, and so that university never showed up in the data, or whatever it might be.
But I think what you're highlighting is that there are fresh harms that come from this that aren't just existing biases against people with disabilities or ableism. It's that too, but it's also these other things. So Lydia, do you have some examples of where one is just a traditional bias against people with disabilities reinforced, and one is a novel thing, like, oh, they've come up with a new way to disadvantage people with disabilities? You'll need to unmute.
LYDIA XZ BROWN: This is Lydia. Apologies for that. Despite us having a tech conversation, we're inevitably going to do something that's tech foolish. I want to push back on that a little bit. I want to put it out there that it's not so much that algorithmic discrimination creates totally different forms of discrimination, but rather that algorithmic discrimination highlights existing ableism, exacerbates and sharpens existing ableism, and only shows different ways for ableism that already existed to manifest.
So it’s not so much that it’s a new type of ableism, so much as a different manifestation of ableism. So let’s take two of the examples that Jutta was just bringing up.
In the context of algorithmic or virtual proctoring, take the idea that the software might flag you as suspicious because your eye gaze is not directly on the computer, or because your movements of the mouse or the keyboard aren't what the program recognizes as typical movement – perhaps because you use a sip-and-puff input, or perhaps because you use eye-tracking software in order to create input into your program, or because you have spasms as a person with cerebral palsy, or any number of other examples.
Well, that software is designed based on the idea that there is one normal way to learn. There is one normal way for bodies to be configured, for people's bodies to move. There's one normal way that people's bodies look. And if you are abnormal – like Jutta pointed out, the one thing that we all share as disabled people is that we are non-normative in some way, perhaps multiple ways.
And so if that idea is embedded into the algorithm, then that produces the discriminatory effect where disabled students will be more likely to be flagged as suspicious by that algorithm.
Take another example that Jutta alluded to, where you may reinforce an existing bias: if you've trained your hiring algorithm's data set based on existing employees – and your existing employees were majority non-disabled, majority straight, majority cisgender, majority male, and majority white – then yes, it will begin to attach certain factors or characteristics that might be more associated with resumes of people that are not non-disabled straight white men as being less likely to be successful or less worthy of being considered for hiring.
Whether that is because someone's name is Black-coded, whether that is because someone's name is feminine-coded, whether that is because somebody has a longer gap on their resume – which in turn might have been caused by repeated discrimination and inability to get hired because of, perhaps, ableist discrimination – that now is a self-perpetuating and self-fulfilling prophecy that you have never been able to get hired before.
That long and increasingly longer gap on your resume might now be flagged as a reason to automatically screen that person out, if that's what the algorithm has been trained to do. And that reinforces the existing ableism.
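As a hedged illustration of that self-fulfilling loop – synthetic data only, assuming numpy and scikit-learn are available, and not a description of any real hiring product – a model trained on past outcomes in which applicants with long resume gaps were rarely advanced simply learns to screen out gaps, whatever their cause:

```python
# Synthetic illustration only: features, data, and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

gap = rng.exponential(1.0, n)          # years of resume gap
experience = rng.normal(5.0, 2.0, n)   # years of relevant experience

# Historical outcomes: past screeners rarely advanced anyone with a long gap,
# whatever the reason for it - including disability-related reasons.
hired = ((experience > 4.0) & (gap < 1.0)).astype(int)

X = np.column_stack([gap, experience])
model = LogisticRegression().fit(X, hired)

# The model reproduces the old pattern: a strong candidate with a long,
# possibly disability-related, gap scores far lower than an otherwise
# identical candidate without one.
with_gap, without_gap = [4.0, 8.0], [0.0, 8.0]
print(model.predict_proba([with_gap, without_gap])[:, 1])
```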
And just one last example on that point: when we think about predictive policing and algorithmic law enforcement, that will not only reinforce existing racism and classism and other forms of structural oppression that we already know exist within the prison industrial complex and mass criminalization and mass incarceration, but it will do so in ways that might appear to be new. But it's not because the bias or the oppression is new. It is because the tools are new.
So we think about how disabled people are affected in this way. For me, the conversation isn’t about a new kind of ableism. It is about a new set of tools that exacerbate the existing ableist ideas. That someone who is well behaved will have a record that looks a certain way. That someone who is intelligent and able to academically excel will move a certain way and will communicate and express their thoughts a certain way.
Or that somebody who is able to live independently will be able to check a certain number of boxes on a sheet, or if somebody needs a certain type of support, someone else, say a nurse or another professional, will be able to look and decide for themselves what kind of support somebody needs rather than believing a person about themself, about what it means to live in the world and to be able to live a life authentically.
And [? with ?] support that we choose to be able to learn without fear of being surveilled. To be able to learn without fear of being made a suspect. To be able to move out in public without fear of being criminalized or automatically labeled suspicious, which of course, is always going to fall hardest and worst on disabled people of color, and particularly black and brown disabled people.
And when we talk about ableism in that way, it helps us understand that algorithmic discrimination doesn't create something new. It builds on the ableism and other forms of oppression that have already existed throughout society.
JIM FRUCHTERMAN: Well, thank you, Lydia, for explaining that it isn’t necessarily just new, but it’s just a new manifestation of ableism. It blew my mind that there was hiring software that rated whether or not you actually got to talk to a human based on your facial movements. And many people with visual impairments aren’t necessarily trained to move their face the way ableist people expect them to move their face. And they may never get an interview.
Jutta, I know that you work a lot with product designers and people with disabilities. Can you give us some ideas of what we can do about this, both as individuals and as people building products designed to help rather than hurt?
JUTTA TREVIRANUS: Mm-hmm, yeah. So I want to answer your question, but I also want to take the conversation a little bit further because there’s a lot of buzz at the moment about AI ethics and the issues with artificial intelligence, which is so, so necessary.
But one of the worries I have about framing the particular issues that people with disabilities have with artificial intelligence in the same vein as other social justice AI ethics efforts is that many of the ways in which the bias, or the discrimination, is tested or determined or flagged or identified depend upon a certain set of criteria that are not possible when the issue is disability, or discrimination because of disability.
What do I mean by that? When we look at the bias detection systems that test algorithms to see whether there is discrimination happening, what is done is we identify a particular justice-seeking group and their bounded characteristics. And then we compare how the algorithm performs for that group to how it performs with the general population.
And if there is a distinction between those two, then we say, OK, here’s the problem, this is discrimination. There is no bounded data set of characteristics for people with disabilities. And so it’s very difficult to prove discrimination because of the opaqueness of the artificial intelligence systems, et cetera.
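For context, the kind of bias test being described is usually a group-fairness check of roughly this shape – a simplified sketch, not any specific auditing tool – such as the "four-fifths" disparate-impact ratio. The catch is visible in the function signature: the check presumes the protected group can be cleanly bounded, which, as noted here, disability cannot be.

```python
def disparate_impact_ratio(decisions, in_group):
    """Selection rate for a defined group divided by the rate for everyone else.
    Values below roughly 0.8 are commonly treated as evidence of adverse impact.

    decisions: list of 0/1 outcomes (1 = selected)
    in_group:  list of True/False flags - this presumes the protected group is
               a cleanly bounded set, which is exactly what is missing for
               disability, where members share no single characteristic.
    """
    group = [d for d, g in zip(decisions, in_group) if g]
    rest = [d for d, g in zip(decisions, in_group) if not g]
    return (sum(group) / len(group)) / (sum(rest) / len(rest))

decisions = [1, 0, 0, 1, 1, 0, 1, 1, 0, 1]
in_group  = [True, True, True, False, False, False, False, False, True, False]
print(disparate_impact_ratio(decisions, in_group))  # 0.3 -> flagged as adverse impact
```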
So the other area that we talk about is representation. There isn’t adequate representation of people who are black, of people who speak a particular language, of people that have particular cultural norms within the data set. And so we talk about improving the representation, adding additional data.
But even if we have full proportional representation of people with disabilities within the data set, because people with disabilities are tiny minorities or outliers, there is usually not another person that has exactly the same distance from the average that you have that could represent you within a data set. That doesn’t happen or the representation will not address this.
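A small numerical sketch of that point about representation – synthetic data, assuming numpy – shows that even when a minority group is proportionally represented, a highly atypical individual can still have no training example anywhere near them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Majority: tightly clustered "typical" feature vectors.
majority = rng.normal(0.0, 1.0, size=(950, 5))
# Proportionally represented minority: present in the data set, but each member
# differs from the average in their own direction, so they don't cluster.
minority = rng.normal(0.0, 1.0, size=(50, 5)) + rng.normal(0.0, 8.0, size=(50, 5))
data = np.vstack([majority, minority])

def nearest_neighbor_distance(x, data):
    """Distance from x to the closest example in the training data."""
    return float(np.min(np.linalg.norm(data - x, axis=1)))

typical_person = rng.normal(0.0, 1.0, size=5)
outlier_person = rng.normal(0.0, 1.0, size=5) + rng.normal(0.0, 8.0, size=5)

print("typical:", round(nearest_neighbor_distance(typical_person, data), 1))
print("outlier:", round(nearest_neighbor_distance(outlier_person, data), 1))
# The outlier's nearest "similar" example is far away even though the minority
# is proportionally represented - no one else shares their particular distance
# from the average, so a model fit to this data still won't fit them.
```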
And even if we remove all of the human bias, the attitudinal ableist bias from our algorithms, because of course, it enters via the data but it also enters via the people that create the algorithms, we’re still going to have an issue here.
And I was talking earlier about the silver lining. This same issue actually hurts all sorts of things that we’re doing with artificial intelligence, especially the more critical decisions that are being made and the products that companies are developing because it points to a terrible flaw within AI, in that artificial intelligence cannot deal with diversity or complexity or the unexpected very well.
So we are not able to easily– I mean, you can say, OK, something anomalous is happening, but there is no way of interpreting it, because the AI system depends upon big data – large data sets – and there is no such data about what this anomalous thing is that's happening, what this threat at the periphery of our vision is, or the something unexpected that is happening.
We’re now in COVID. COVID came about as a weak signal. And there will be other weak signals, unexpected things that are not based upon data that we have about the past because AI is all about the past. I mean, all data is about the past. Data is something that has already happened, not something that might happen.
And so what does it do for companies to not address these particular flaws within artificial intelligence? It means that companies are not able to develop new innovative approaches, and they are not able to detect flaws very well or in an intelligent way or in a way that they can actually address it. It’s either a threat or it is anomalous and should be eliminated.
Disability is a perfect challenge to artificial intelligence because if you’re living with a disability, then your entire life is much more complex, much more unexpected, and much more entangled. And your experiences are always diverse. You have to be resourceful.
JIM FRUCHTERMAN: So Jutta, we're down to the last couple of minutes. And I want to make sure– this is Jim, by the way. Lydia, you spent a lot of time worrying about public policy and about how it affects people with disabilities. Do you have recommendations on what we should do? Because it sounds pretty challenging – all these different ways that the technology doesn't help the cause of individuals with disabilities and their unique individuality, or of the community of people with disabilities. So if you could unmute, it would be great to hear from you.
LYDIA XZ BROWN: This is Lydia. There are two different major angles to go from. One is, what is it that we, as disabled people, need and want? And the other is, what are users and vendors of AI technologies trying to do, and what are they trying to accomplish?
And the first thing that I recommend to everybody at all times – if you're a policymaker, or if you're in R&D for a company that is creating a new AI tool, or if you are in acquisitions for a private company that wants to start using a hiring algorithm, or you're in acquisitions for a state government agency that wants to start using a benefits algorithm – is listen to, center, and follow the lead of actually disabled people.
So if blind people are telling you, you need not to be creating software that prioritizes eye gaze and eye movement, then listen to blind people when they say that. If we, as autistic people, are telling you, you need to not be using software that is attempting to flag students as dangerous based on social media posts that are not actually about threatening violence, then listen to us when we talk about that.
Listen to me. When I was in high school, I was falsely accused of planning a school shooting. It was horrifying. So listen to us and let our perspectives and our priorities guide and lead those conversations.
But on the other end of that, if you are a vendor or if you are a person who is trying to use or require an algorithmic tool for whatever purpose it might be, it’s incumbent on you to be very deliberate and careful about what it is your tool is actually aiming to accomplish, if your tool complies with appropriate legal guidelines and legal requirements, and where your tool might go astray or run afoul of those guidelines.
And lastly, how your tool can be used in as limited a capacity or application as possible to protect people's rights maximally. And that includes explaining it and making sure people understand it.
JIM FRUCHTERMAN: That sounds great. Lydia, Jutta, in our last minute, do you have a few final thoughts to share?
JUTTA TREVIRANUS: Yeah, disabled people are the best stress testers, and they're the primary people that are going to come up with the resourceful new ideas and the choices and options that we need to get out of this crisis and to do much more inclusive and supportive innovative things.
JIM FRUCHTERMAN: Thank you very much Jutta and Lydia for illuminating this key area of how AI has bigger impacts on people with disabilities. And let’s go forward and listen to the rest of the Sight Tech Global Conference.
[MUSIC PLAYING]