DESCRIPTION
AI decision systems have permeated most of the critical decisions within our society. They shape our views, wants, friendships, hates and fears. They influence who is hired, fired, admitted, prioritized for healthcare, funded, voted for, and seen as a security threat. It is argued that current AI systems are mechanizing eugenics and segregation. They automate, amplify, and accelerate past discrimination more efficiently, accurately, and consistently. What is the impact if you have a disability? AI is also poised to make the next leap, moving beyond statistical modelling based on big data sets. What are the opportunities and risks of emerging forms of AI?
Speakers
- Merve Hickok
- Clayton Lewis
SESSION TRANSCRIPT
[MUSIC PLAYING]
BRIAN: AI decision systems permeate our lives. Now what? Moderator Jutta Treviranus. Speakers Merve Hickok and Clayton Lewis.
JUTTA TREVIRANUS: Welcome, Merve and Clayton, to this session of Sight Tech Global. It’s wonderful to have you both here. And I’m not going to introduce the two of you. If I really did your introductions justice, it would probably take all of our time, and we have so many important things to cover. Suffice it to say, we all have thought long and hard about how we make decisions and about the nature of human and machine intelligence, and we’re worried about the risks of the current course of innovation. And we’ve devoted our careers to finding more inclusive trajectories. I wonder if you want to add something relevant about your particular backgrounds, Merve and Clayton?
MERVE HICKOK: Oh, absolutely. And I’ll be very brief about it. I’m the founder of AIEthicist.org, and my work focuses on bias and social justice in general. I’m also the research director at the Center for AI and Digital Policy, so working on the policy, advocacy, and regulation side of AI and algorithmic systems. And I also work for an organization that develops and uses learning systems for individuals with intellectual and developmental disabilities. So a lot of complementary work. And thank you for having us here.
CLAYTON LEWIS: Yeah, I’ll mention that I was drawn into an interest in current developments in AI in my former role as codirector for technology at the Coleman Institute for Cognitive Disabilities. In pursuing possibilities for using AI to support people with disabilities, I was really led to learn about very recent developments, which are, I think, really transformative. Potentially, we hope for the good. But there are also risks.
JUTTA TREVIRANUS: Thank you, both of you. So for decades, I’ve been worried about the impact of statistical reasoning on people whose lives are different from the statistical average or norm. Now, the same reasoning is made more efficient, accurate, and consistent through pervasively deployed artificial intelligence systems. What are the implications of statistical reasoning applied in current AI systems, and how is this compounded by recommender systems that encourage affinity grouping? Wendy Chun’s contention is that current AI systems mechanize eugenics and segregation. How does this play out? Merve, I wonder if you can comment on this given your role.
MERVE HICKOK: Absolutely. And let me start with this: there is already a gap between our ideal of a just and equitable world and where we actually are, right? No society in the world gets to live in that optimum state. We have our biases. We have our discriminatory histories in our society, as well as structural biases in our institutions. And we’re still trying to resolve those. So all the data that we work with, data about human-to-human interactions, or the interactions of consumers or citizens or humans in general with different institutions, carries that injustice and inequity. And if we’re not taking precautions and safeguards, or maybe even taking the option of not launching an algorithmic or AI system at all, what we do with that data is replicate, deepen, and magnify those injustices of the past. How these biases play out, especially against what are called outliers, or differences from the statistical norm, is that we usually build the systems without much consideration of who is on the tail ends of those statistical distributions and what their experiences are. And in terms of real-life experiences, if a system doesn’t recognize you, so it doesn’t work for you, or doesn’t work as optimally as it does for a lot of other people, you might encounter physical, psychological, and emotional harms, and repeated harms at that. It’s not just once or in one system; it repeats across multiple systems. Or you end up trying to fit into the system’s expectations: changing your accent, your pronunciation, your pitch, your eye contact, your smile, your posture, you name it. You’re shifting yourself to fit those expectations. And in the worst case, if you’re close to being an outlier or an error in the systems, it means you might not get the opportunities, such as jobs or education, or the resources, such as housing, insurance, loans, et cetera, that others in society get to enjoy. And someone is making those decisions for you by virtue of creating the systems and the expectations and norms within them.
JUTTA TREVIRANUS: So a lot of people think AI is something of the future. But can you tell us a little bit about how pervasively it is employed? What decisions is it making for the companies and the governments and the schools that we interact with?
MERVE HICKOK: Absolutely. So AI or algorithmic systems make decisions in housing, such as who gets housing through tenant screening systems, or who gets housing benefits if you’re receiving any government benefits. And to that effect, for all the other government benefits at the federal or state level, or at the national level in different countries, we depend on these algorithmic systems now. In education systems: access to schools, access to job opportunities as we go through hiring processes, performance management processes, or productivity scores. We see this in health decisions, in who gets prioritized for crucial health interventions, or even in health diagnostics for certain use cases. We see it in policing, law enforcement, border control, immigration, asylum, and refugee systems. We see them in credit scoring: who gets credit in the first place, or at what interest rate you get that credit. Similarly, what kind of loans you can get, what kind of insurance you can get, what kind of coverage, at what levels, et cetera. We even interact with them in the pricing of the products we buy, through recommender systems and pricing systems, and in what kind of news we get to see, as well as other opportunities. So these systems, especially the recommender systems, are making decisions about what you get to see, what you get to experience, and how you access these systems and opportunities.
JUTTA TREVIRANUS: And possibly even influencing how you say things, what you send to people, what you share, what you don’t share.
MERVE HICKOK: Absolutely.
JUTTA TREVIRANUS: Yeah. So people with disabilities are often used as the poster child of AI innovation. The ability to recognize images and tell you whether you’re about to take the wrong pills or go into the wrong subway entrance is life changing. But even there, if your world is not the world the system is trained on, what are the implications? Is it likely to recognize and work for you if you are in an unusual context?
MERVE HICKOK: Well, one of the issues, especially since these systems are dependent on the data sets they are built on, as well as on how you train the system and what kinds of use cases you’ve included, is that if you treat disability and the spectrum of abilities as one homogeneous thing, you might be missing a number of disabilities, you might be missing different manifestations of abilities and disabilities, and you might be missing the intersections of how these disabilities manifest themselves and how they are experienced in different contexts, in different cultures, in different locations as well. So there is a whole host of complexities that go into these systems, or that should go into the consideration and development of these systems in the first place. So even if you think that you have a representative data set, there are still a lot of assumptions behind your thinking that your data set is representative. So to your point, Jutta, if you’re interacting with the system in a way that doesn’t match the way the system was trained, the command is different, the light behind you is different if it’s, for example, a computer vision system, or the background sounds or the pitch, et cetera, are different, or even the way that you pronounce things or put a sentence together is different, the system, again, might not work for you. So I think there’s a lot of work to be done on those assumptions, and on not treating our identities as homogeneous to start with. But especially in the case of disability, there are a lot of different manifestations, intersections, and considerations that need to go into that.
JUTTA TREVIRANUS: Yeah, and the effect of the bias is cumulative, isn’t it? I mean, if most of the training happened in a typical middle class neighborhood with typical middle class products and you try to use a recognition system in a, say, rural setting where the products are not typical, the neighborhood is not typical, what’s the likelihood that it will be as accurate as it is in the trained environment? So what happens is there are all of these interacting cumulative effects in terms of who it works for and who it doesn’t. Is that also your perception?
MERVE HICKOK: It is also my perception. And I use the concept of cumulative disadvantage, a term that was coined a while ago; my own introduction to it was through Oscar Gandy. I use it a lot because I try to figure out the long-term implications and harms of any system: what happens if this system is connected to another one? So in this case, I’m really concerned about these systems becoming interconnected with each other. As you interact with, for example, banking, insurance, loans, housing, employment, whatever, these systems become interconnected, and one biased result or one harmful result from one system carries over and becomes a bigger problem in another system. And over things you have no control over, you’re disadvantaged and harmed, and you just go deeper into that disadvantage hole. Whether we conceptualize it that way, or conceptualize the harm as, for example, a snowball, it just keeps building on itself and becomes a vicious cycle. So if we’re not considering these harms in each of these systems, as well as in their interactions, we are creating a very bleak future where we permanently lock people out of resources and opportunities.
JUTTA TREVIRANUS: So you mentioned representation. And there’s now a proliferation of AI ethics companies that will audit and certify your AI as ethical. And one of the things that they attempt to address is data gaps or, in fact, what are called data deserts. Are they effective? Because what’s happening is companies are spending quite a bit of money to measure their AI’s ethics and receive a certification as ethical. Do you think this addresses the issues that we’ve been talking about, the cumulative harms?
MERVE HICKOK: I think it’s definitely a step in a positive direction. But it definitely cannot be the only solution or the only control over these systems. And these audits, these criteria, these controls, checks, and balances also need to include the people who are impacted by the systems, right? A lot of the time, you’re talking about someone coming in and using the criteria or the models that they have built to assess a system without having the experience or knowledge of the different kinds of harms and how they manifest. But setting audits aside, there is also the piece about statistics in general: when we use statistics, we’re dependent on quantified data, right? But not everything is quantifiable, and not everything is easily quantifiable. So it means that as you build these algorithms, as you collect this data, you make decisions about what counts, what counts as a measurable proxy for the behaviors and interactions that you’re interested in. It also means that we may not be able to capture the qualitative elements of what makes us human, the complex stuff: our choices, our behaviors. It means we’re attempting to capture the fluidity and complexity of that experience, but without success, hence creating a whole host of biases. And if I may, I might be throwing in a bit of an additional curveball here, kind of moving away from the [INAUDIBLE]. But for me, there is also the element of a very problematic tendency to treat certain constructed concepts, such as race, as something biological, as something quantifiable, as something objective in itself. We first come up with these political or social concepts, such as race, which in actuality just denote difference between individuals. And we forget about our discriminatory histories, about where these concepts or constructs were used, in what contexts, and for what political purposes. And then you come to statistics, which has a lot of historical baggage in itself, and try to treat these concepts as scientific, unbiased, impartial categories and technologies. So for me, by the very nature of it, there are already a lot of issues that we need to consider when we’re depending on data, on statistics, on models. And add on top of that all the other assumptions and decisions that we make, and audits and certifications come at the very end of that. You already have a huge pile that you need to go through. And how you audit that, how you certify that, and whether we should at all is a whole different question.
JUTTA TREVIRANUS: Yeah. Yeah, I’m somewhat worried about the false assurances and the dismissal of the bias that you’ve been talking about. And certainly in our modeling, even if we get rid of all of the data gaps, even if we have full proportional representation and we remove the human racism, sexism, and ableism that you’re talking about from the algorithms, we still have the issue of statistical reasoning and its bias against outliers and already excluded minorities that are not part of the training set. And that takes us, I think, to your topic, Clayton. There are emerging systems within AI that do not use this kind of statistical reasoning. But as with every new innovation that is going to disrupt our lives, there are risks and opportunities. So, Clayton, what are some of the risks and opportunities of the emerging large language models that don’t employ the same statistical reasoning? Do these systems have the potential to ameliorate some of the discrimination against outliers and differences that Merve and I have been talking about? And what do you advise, or what can you predict?
CLAYTON LEWIS: Yeah, maybe before going directly into that, I want to reflect a little on some of what Merve was saying and step back and note that any system that’s trained on data of past decisions, which most of these systems are, has the property that it kind of ties us to our past. And as people, we should be thinking about our aspirations, not just what we’ve accomplished but what we aspire to accomplish. And I worry that these systems really hold us back from that and say, well, we’re just going to reproduce what we’ve been doing, or try to tune it up a little bit.
JUTTA TREVIRANUS: Yeah, all data is from the past.
CLAYTON LEWIS: Yeah. And there’s a connection there. So these large language models, things like GPT-3 and its many, many successors, have some properties that may offer some real benefit here. These systems have the feature that they can at least appear to do something like reasoning. And they’re actually better at processing assertions and propositions than data in the traditional sense. So they don’t run off tables of cases. They run off large volumes of text that embody things people have said about things. And remarkably, this includes inferences. So that means that these systems, like people, may be able to operate at the level of aspiration and policy in addition to the level of what experience has been. Indeed, as they are now, it’s not really possible to load in a bunch of data about past hiring decisions. But I think the fact that this can’t be done now is temporary. I’m confident of that. There are already a few systems emerging that have the ability to consult data as well as to process propositions, sentences, policy statements, and so on. So we can imagine systems that will be much more like people than current systems are, in that they can operate at both levels. They can respond to historical data, but they can also respond to assertions, ethical values, policy statements, and things like that. So that may sound pretty good, in that they may indeed deal with some of the issues that Merve has laid out. But there are a lot of uncertainties and, therefore, risks that remain. One is that, as these systems are now, they’re quite inconsistent in their judgments. That may not be all bad, actually. But from some points of view, it’s a serious limitation. And for me, there are risks in a couple of areas. One is that we don’t understand how these things work. And so it’s going to be difficult to be confident that they will act according to some ethical aspiration that might be stated, for example. We don’t know how reliable that behavior is going to be.
JUTTA TREVIRANUS: They’re even more inscrutable than–
CLAYTON LEWIS: Sorry?
JUTTA TREVIRANUS: They’re even more inscrutable in some ways. Yes.
CLAYTON LEWIS: Yes, that’s right. We have very little insight into how they work in particular cases. But another feature that they have, one that I think exposes another category of risks, is that they’re clonable, like other digital systems. And that means there’s the potential, and this relates to Merve’s interconnection concern, that instead of having, as today, many different people making decisions about, for example, hiring, we may reach a situation where there’s really a single system making huge numbers of these decisions, with all the risks entailed there. So I’m concerned about the lack of diversity going forward, which, again, has the feature of tying us down. So instead of allowing us to flexibly respond as our aspirations change, as our experiences change, I worry that we’ll be dealing with large systems that operate everywhere, that we don’t understand very well, and that will be difficult to change.
JUTTA TREVIRANUS: Yeah, and especially as they cost so much to create, which means only very large, powerful players are able to create the original models, which can then be transferred. So that may mean we’re encouraging an even greater monoculture. One thing you mentioned was training and the additional opportunity of giving instruction. I’m a professor, and the first course that I teach in my program is called Unlearning and Questioning. This is a graduate course that is necessary to overcome the years of socialization the students have been exposed to. Among the things to unlearn are the notion of average humans, the Social Darwinist notion of survival of the fittest, the sorting, labeling, and ranking of humans, the idea of quick wins by ignoring the difficult things, as in the 80/20 rule, and also the folly of ignoring the entangled complex adaptive system we live in when we plan and predict. All of these are deeply embedded in the data, the training language, or the data sets that the systems are trained on. How do we teach the systems to unlearn this? I mean, if we’re handed these large language models trained on mammoth amounts of scraped data, how do we go about the same unlearning and questioning process?
CLAYTON LEWIS: So I think, as of now, we just don’t know how to do that. You can find efforts in the literature; people are trying to find ways of diminishing false beliefs and so on. But these are dealing with pretty simple things. How we could make a statement of an aspiration, say a policy aspiration of nondiscrimination, to speak in very simplistic terms, actually govern the system, I don’t think we know. We don’t know how to make sure that a policy or an aspirational assertion like that outweighs what might otherwise be happening in the system based on other aspects of the training. To be a little bit optimistic, it’s very early days for this technology. And it may be that we’ll be successful in developing an understanding of all of that and that we can become more comfortable with it. But that seems pretty far off as things are now. And so there’s a worry about the applications of these things outrunning, as usual, our ability to really understand them and their implications.
JUTTA TREVIRANUS: Right. And I mean, we usually encourage scaling by diversification and contextual customization. One of the positive things that has been stated about these systems is their ability to transfer to a changed context. Are you optimistic that that might ameliorate some of the issues with the monocultures or the transfer of this cumulative effect?
CLAYTON LEWIS: Well, I think those things operate at different levels, really. These systems are already showing a remarkable ability to transfer, so to speak, patterns of relationships from one domain to another. It truly is amazing if you’ve played with these systems. But I worry that that won’t prevent a monoculture kind of situation. Whatever the ability of a system is, it may be that that’s the system that large numbers of people end up using. And so it won’t mean that we end up having a diversity of perspectives. I’ll just leave it there.
JUTTA TREVIRANUS: So to both of you, what should people in the accessibility community watch out for and advocate for individually and collectively as these systems emerge?
MERVE HICKOK: Well, let me go first. I want to say one thing about that unlearning and the possibility of teaching it to the systems. We see some unsuccessful attempts by technology companies to put fixes in place, right? You see language models or computer vision or search algorithms, et cetera, trying to fix some of those biased results with certain patch fixes, which means, again, you’re making determinations, you’re making judgments, about what should be fixed and how it should be fixed. And the result is that people talking about their religion, or people with disabilities talking about their disabilities or their identities or their experiences, are marked as toxic or dangerous speech. And they are, again, biased against and discriminated against. So I would be very curious about who gets to decide what should be unlearned, how it should be unlearned, and how to apply that to these systems. But in terms of your question about what we should advocate for individually and collectively, I want to come back to the issue of cumulative disadvantages and interconnected systems. This is not only for the accessibility community. This is for every one of us in different ways, because we all experience different parts of our identities, whether it’s age, ability, sex, gender, religion, you name it, in different ways. As these systems become more prevalent in our lives, they’re going to be yet more impactful and connected. So I think it goes for all of us to individually advocate for acknowledgment of these harms and to ask for legal remedies, safeguards, and control mechanisms, or for some systems not to be launched or used in certain contexts to start with. And for collective advocacy, I want to highlight two things. One is, again, intersectional identities and the power that comes from those intersections. We can advocate and innovate and create better if we understand the combinations and different dimensions that make each of us connected. And second, a lot of the conversation on AI ethics and policy is still very much Western-centric and prioritizes Western values. Similarly, a lot of the major data sets that we use in these systems, from large language models to computer vision to object identification, are also based on Western societies, Western cultures, and Western languages. So I think, collectively, we can advocate for bigger connections, wider communities and insights, and try to go beyond our own circles, acknowledging that if you’re experiencing harms and bias in one context, especially within Western countries’ systems, there’s a higher risk of those harms and damages outside of that circle. And how do we get in front of that before catastrophic issues happen?
JUTTA TREVIRANUS: Right, yeah. Recognizing our differences and valuing our differences. Thank you. And Clayton, I wonder if you have advice?
CLAYTON LEWIS: Yeah, actually, I wanted to link to an argument that Phil Agre makes, more than once, in his paper “Surveillance and Capture: Two Models of Privacy.” He warns against capture, as he calls it, which is the tendency for automated systems, or computerized systems in general, to displace human agency. And that’s what we’re talking about here. In the name of efficiency, we’re taking ourselves as people out of key decisions we make about one another. And we’re, of course, flawed. But we’re flawed in ways that offer hope for progress of a sort that I don’t see in the increasing adoption of these automated systems, where people are abdicating the responsibility to make judgments about other people as humans, handing it to systems whose basis has the kinds of problems that we’re talking about here. So I think we all, as people, really need to push back as much as we can against this idea that somehow decisions shouldn’t be made by people but should be made based on data or automated systems or whatever, because we are marginalizing ourselves and doing a lot of harm in the process.
JUTTA TREVIRANUS: Yeah, people over efficiency, consistency, and automation, right? Yes. Thank you so much. And unfortunately, we’ve run out of time. There’s still so much to say. But hopefully, we can continue this conversation at another time. And thank you for bringing both of your perspectives.
MERVE HICKOK: Thank you for having us.
CLAYTON LEWIS: Thank you.
[MUSIC PLAYING]