DESCRIPTION

AI has the power to transform lives, but only if it's built for everyone. This panel digs into the challenges of AI and the principles used to create accessible solutions for all, with speakers sharing Microsoft's commitment to responsibly design, build, and release accessible AI technologies. A demo of the Ask Microsoft Accessibility bot, "AskMA," showcases how users can find information about the accessibility of Microsoft products and services. The panel's call to action: Be a community of change makers, take the first step in using AI, build your skillset, and share feedback.
Speakers
- Moderator: Jessica Rafuse, Director of Accessibility Strategic Partnerships, Microsoft
- Ioana Tanase, AI and Accessibility Program Manager, Microsoft
- Jeremy Curry, Senior Support Escalation Engineer, Microsoft
SESSION TRANSCRIPT
[MUSIC PLAYING]
VOICEOVER: Responsible AI in Action: Microsoft Is Building a Fair and Inclusive AI Future. Speakers: Ioana Tanase, AI and Accessibility Program Manager, Microsoft. Jeremy Curry, Senior Support Escalation Engineer, Microsoft. Moderator: Jessica Rafuse, Director of Strategic Partnerships and Policy, Microsoft.
JESSICA RAFUSE: Hello, and welcome to an important conversation about responsible AI. We’re gonna talk about standards, fairness, inclusiveness, and we’re also gonna get a little sneak peek into a demo that we believe has the potential of helping many of our customers. I’m Jessica Rafuse, and I am here at Microsoft with my colleagues. You’re going to meet Jeremy, you’re going to meet Ioana, but before we do that, let me give a big shout-out to Sight Tech Global. Thank you so much for including us. We really appreciate you creating this platform where we can dig into these problems and find solutions. And the way that we have done that at Microsoft is just one way. We are here to learn as well. We want to hear feedback on how it is going for all of you as you create the most accessible AI that you can, hopefully with our products, um, a key part of that.
So, let’s get started. We’re gonna start here with some introductions. I’m gonna head over to Jeremy first. Jeremy Curry brings over two decades of experience in assistive technology and accessibility. His lived experience is even deeper than that. He has extensive knowledge of how to use these products, because he uses them himself. From notetakers to digital book players, braille displays, uh, Jeremy knows a whole lot about screen magnification and screen reader software. Today, he’s working on the Disability Answer Desk, where he’s a liaison to our engineering groups. He works with these engineers day in and day out because he knows them so well, coming from the Windows product group. He’s also gonna do that demo I mentioned on, um, AskMA, but more to come on that. Let me welcome you, Jeremy.
JEREMY CURRY: Hey, Jessica. Thanks for the introduction. It’s so great to be here, everyone. I’m Jeremy Curry from the Enterprise Disability Answer Desk, or known as EDAD. Um, as Jessica mentioned, I’m the liaison between engineering and also innovation, so I work heavily in, uh, both sides, especially AI, uh, innovation, which you’re gonna see a lot about, uh, today. Oh, I hope you guys are excited. But before we get there, Ioana.
JESSICA RAFUSE: Ioana Tanase is a dear friend and a colleague, and she is an absolutely brilliant technologist, and she has this cool job. This is like one of those jobs that everybody wants to have. She’s at the intersection of AI and accessibility. She’s working really closely with not only engineers, but also researchers. She wants to make sure that we are creating AI models that are inclusive for everybody. So cutting edge is, should be a part of her… She’s chief cutting edge officer. I think I just made that up… for you, Ioana. But I want you to hear directly from Ioana. Can you share with us, uh, not only the work that you do, but how your lived experience intersects with that work?
IOANA TANASE: Hi, Jessica. Hi, folks. I am so, so happy to be having this conversation. And Jessica, you made me blush way too much. (laughs) Um, my job really as I explain to people is to make sure that AI systems are representative of all of us. And for me, it’s deeply, deeply personal. Um, I was 32 when I discovered that I was dyslexic, and up to that point, I have to admit that my disability acumen was very slim. But in discovering more about myself, I really discover more about the beautiful world of accessibility, and as an extension of that, of AI. So, really excited to talk about responsible AI, which is one of my favorite topics. And the space that all, and I do mean all of us, play as users, as consumers, as technology makers, I want you all to know that you play a part.
JESSICA RAFUSE: Wonderful. I think you raise a really good point. Disability is a, it’s a journey. Uh, some of us were, uh, born with disability, I have muscular dystrophy, and other people acquire disability later in life. Um, I use a wheelchair, power wheelchair, fancy little wheelchair. This one is all black and I call her Wednesday. But I am also an, an attorney and former administrative judge. I’m a mom of three wild boys, and the work that I do here at Microsoft is really just one goal in mind. I want to accelerate accessible technology so that everyone can fully participate in the digital world. I work with partners, incredible partners like, uh, Sight Tech Global and Vista Center, um, on that, uh, mission to make more accessible technology for all. I’m also working with commercial groups. So I want to show the business value of accessibility, and I wanna turn these lived experiences into accessible solutions for everyone.
So, enough about me. Let’s really get into the substance. And I wanna start with you, Ioana. Um, what are some of the considerations of developing AI responsibly? Tell me about those principles and how we have used them here at Microsoft.
IOANA TANASE: Absolutely. And I want to ground us in what good versus bad looks like and how do we even determine that, right? Because one of the questions that I often have or, um, in talking with our partners, with our consumers is, who decides what is good and bad? Who decides what should pass muster and what does not? And really, um, at Microsoft, we ground everything in what we call our Microsoft Responsible AI principles. These are kind of the foundation. Think of a house, and at the bottom of the house, you have your foundation and you have your pillars, and they keep everything together. Without those, you know, the walls are gonna be shaky. The roof is gonna be leaky. You’re not gonna have anything that stands the test of time.
So, we have a series of principles that we believe are core to everything we do. Um, the first is accountability, in the sense that we take responsibility for the purpose of AI systems and that we should have data governance and management. The second principle is transparency. We absolutely hold dear the fact that our users should be informed that they’re interacting, for example, with an AI system and be very clear on the distinction between that and interacting with a human. Um, the third one is privacy and security. Goes without saying, but without privacy and security, we cannot feel that our data is protected, our rights are protected and so forth. The fourth one is reliability and safety. So, um, how is… how are we ensuring that those systems run correctly over time? How are we monitoring? How are we taking in feedback? How are we evaluating that feedback?
And then, not that I have favorites, but I do have favorites. (laughs) My favorite two other principles are inclusiveness. I think this is one… it’s gonna be a crowd favorite with this group. Inclusiveness means that our products should be accessible. We know how important that is. We know it’s, it’s not, uh, nice to have but mandatory. Um, and the, and the sixth one, which is also one of my favorite children, is a principle called fairness. And this is the newer playground, so to say, when it comes to AI, and especially generative AI. When we think of fairness, we think of things like, do our AI systems actually provide the same quality of service to everybody? Does it work as well for me as it does for Jessica, as it does for Jeremy, independent of whatever our identities might be? Um, when we’re thinking of depiction of human identity in output, does it do things like stereotype or provide demeaning content or does it erase identities altogether? These are things that are incredibly, um, important and they really govern everything that happens downstream from there.
JESSICA RAFUSE: All right, do we have time for a, a pun? (laughs) So if we’re talking about the house that technology built, would you say that accessibility is an elevator to all the rooms of the house?
IOANA TANASE: Oh, absolutely. Wah-wah.
JESSICA RAFUSE: Okay. Okay, cheesy joke. But I hope that that helps really set this foundation of what are we talking about when we say responsible AI at Microsoft, because I’m gonna head over to Jeremy now to give us some examples. So in practice, talk a little bit about the motivation behind creating something like Microsoft Ask Accessibility.
JEREMY CURRY: Yeah. So one of the things that our customers were constantly, uh, coming to us and asking us is, “Where can I find information about accessibility on the Microsoft website?” And the answer was, “Everywhere.” The problem was, it’s everywhere, because it’s embedded into so much of what we do inside of Microsoft that, you know, it’s in just so many different places. It’s almost impossible to centralize because, you know, we have thousands of products, and trying to put everything together is, uh, very difficult.
And so, we created, uh, and pioneered really one of the… one of the first, uh, I would say accessible chatbots, uh, called Ask Microsoft Accessibility, or we call it Ask MA for short. We figured we had DAD, eDAD, and now we have Ask MA, just (laughs) seemed like the right way to go. Uh, and so Ask MA is a, a way that you can go and basically ask anything about a Microsoft product or service in regards to accessibility, and it will search the entire microsoft.com domain and only look at that domain and trust its sources from our domain.
So for example, if we have a support article and that support article links to some other webpage, it has the ability to go and look at that webpage as well. But we don’t want it to get all of the information from all of the internet because that could, uh, put data in that we don’t necessarily want to be provided to customers, like giving wrong information. Like, maybe they ask for a hotkey, it gives them the wrong hotkey because somebody on some webpage had put out the wrong hotkey. And so, Ask Microsoft Accessibility is, is a way to be able to help customers find all of the accessibility stuff about Microsoft in one spot.
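The trusted-domain grounding Jeremy describes, answering only from microsoft.com pages and the pages they link to, can be sketched as a simple allow-list filter over retrieved sources. This is purely an illustration of the idea, not Ask MA's actual implementation; the function names are assumptions of mine.

```python
from urllib.parse import urlparse

TRUSTED_DOMAIN = "microsoft.com"


def is_trusted(url: str) -> bool:
    """Accept only pages on the trusted domain, including subdomains
    such as support.microsoft.com, so answers stay grounded in vetted
    content rather than the open web."""
    host = urlparse(url).hostname or ""
    return host == TRUSTED_DOMAIN or host.endswith("." + TRUSTED_DOMAIN)


def filter_sources(urls: list[str]) -> list[str]:
    """Drop any retrieved source outside the trusted domain before it
    reaches the answer-generation step, so a stray third-party page
    (say, one listing a wrong hotkey) never feeds the response."""
    return [u for u in urls if is_trusted(u)]
```

Note that the check is on the hostname, not a substring of the URL, so a page like `https://evil.example/microsoft.com` would still be rejected.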
JESSICA RAFUSE: That’s fantastic. Thank you so much for that, Jeremy. I think it really brings to mind this concept of, of asking the technology to do something to help you. And, uh, we talk a lot about the, the agent-boss, so you’re asking tech to do something to make it easier for you, but it does require a bit of change management. You have to spend the time to learn and you have to change the way in which you work. And people with disabilities are really leading there.
This concept that people with disabilities need to be included in the creation of AI, Ioana, let’s go back to those two pillars again, inclusiveness and fairness. Talk to me more about how they really apply to disability in particular.
IOANA TANASE: Absolutely. And you mentioned at the beginning of this call, Jessica, that we are not perfect, and I’m gonna confess that I’m gonna share a little bit of our learning journey as well. Um, and talking about fairness and disability representation, I remember in the early days of generative AI and as we were discovering the generative AI tools and models, we were trying to understand what does disability representation show up as in those tools? How, how does the system react to somebody, for example, disclosing they have a disability? Or if I’m asking for a photo of somebody who’s blind or low vision, how accurate is that?
And in the early days, um, we have had a lot of, uh, great conversations internally with our engineers as well as externally with our communities and disability communities to understand what does accurate representation mean. And I’ll give you two examples of things that were corrected over time, but definitely didn’t start, uh, in a great place.
So in the early days of, um, of the technology when somebody was asking or saying, “Hey, you know, I’m blind. How can I use a tool like,” let’s say, “GitHub Copilot?” The system reacted in a way that all of us would cringe at. And I have to preface that, but it would say, “I am so sorry to learn that.” That is not the attitude that any of us wants to encounter when we’re using an AI system. I know that my identity is tied to disability and I’m proud of that. I would never react favorably to somebody saying, “I’m so sorry you’re dyslexic.” Um, so one of the first things we had to mitigate is, how does the system react to somebody disclosing a disability? Because the system was, in essence, amplifying a lot of the societal biases that we know and live by every day.
Um, another example, uh, from the text-to-image generation, so if I was asking it to generate a photo of somebody who’s, who is blind, it really was confused about what, what that looks like and it would create photos, for example, of somebody who’s wearing a VR headset or a blindfold, um, or even attributing those elements to the, uh, to the guide dog. So again, a lot of confusion about what correct representation is.
Now, fast forward: with a lot of fantastic feedback from the communities, a lot of conversations about what accurate representation is, um, and work with our engineering teams, we are now able to generate images that are authentic, that are accurate, and that don’t amplify societal biases or create new misconceptions about what that disability identity is.
JESSICA RAFUSE: Ioana, let me ask you a curveball question here. Is that a hallucination when you get an image of a guide dog wearing a blindfold or a guide dog with a white cane?
IOANA TANASE: It is. And the reason for that is we have so little data that is representative of disability that the system, in the absence of examples of what good looks like, created its own version, and it didn’t know what it didn’t know. Um, and that led to hallucinations.
JESSICA RAFUSE: Yes, here’s a guitar for everybody. Fantastic. Jeremy, I’m gonna head over to you and talk about the… we’ve learned about these key principles of responsible AI. Give us some examples of how you followed those principles to design Ask MA. And then we’re gonna head into a demo. So, promise, stick around. We’re gonna have a demo right after this. Jeremy, fill us in.
JEREMY CURRY: Yeah. So yeah, let’s talk about some of those. And, and first of all, just the hallucination about the guide dog. My guide dog’s name is, uh, Diesel. And I often have people say, “Hey, there goes a blind dog.” And I keep thinking, “Well, I hope he’s not blind ’cause then I’ve got a, I’ve got a bigger problem.” (laughs)
JESSICA RAFUSE: So it’s not just a guide. “Blind Diesel” is not helpful, is it? (laughs)
JEREMY CURRY: (laughs) Um, so when we talk about these principles, uh, uh, Ioana and I were, worked very closely at the beginning ’cause we were really trailblazing all this stuff together and trying to figure out, “Well, what does this, what does this mean? What things do we have to do?” And ableism was one of those things, was, that was at the top of the list. For example, Ioana said, “We don’t want the AI to come back and say, ‘Oh, I’m sorry you’re blind.'” Yeah, that’s just, it’s disrespectful. And so we had to think through, how do we deal with that on the underlying AI models and how the responses come back? And that’s gotten better and better and better over time thanks to, uh, work, uh, from folks like Ioana. Uh, and Ask MA was really one of the first to be able to do some of that.
So I’m gonna actually share my screen, um, and we’ll switch over to a demo here.
[Demo begins – Jeremy shares his screen and demonstrates Ask MA with Narrator screen reader]
JEREMY CURRY: When you were talking about Diesel, my dog, who’s in the room with me, let out a big sigh. Like, “These humans.” (laughs)
JESSICA RAFUSE: (laughs) It’s so cute.
JEREMY CURRY: I, I am blind/low vision, kind of depending on the environment, I’m, I, I can be kind of either/or, so I’m gonna ask the sighties in the room, can you guys see my screen?
JESSICA RAFUSE: We can. We can.
JEREMY CURRY: Okay, perfect. So I’m at Ask Microsoft Accessibility. This is public. Anybody can get to this at aka.ms/AskMA. Aka.ms/A-S-K-M-A. And when we think about, uh, design for something like this, I’m a big believer that less is more. Uh, um, for me as a person who’s blind and low vision, if I go into a physical room that’s very complex, it’s much more difficult to navigate than if I know everything is very structured. And so I like to know where things are at. So that was one of the principles that we used in addition to how do we do things like make sure the AI isn’t ableist and things like that.
Uh, so, uh, I’m going to turn on Narrator here. I have the page already loaded. And so once Narrator loads, you’re gonna hear… [Narrator announces the page]. You’re gonna hear that we’re actually in the edit box. This is where focus goes as soon as you go to this page. Notice that it even gives you a hot key if you are not using a screen reader, but you’re a keyboard-only user. If you want to access the edit box, it’s Alt+Q. It tells you that right away.
Um, for my fellow people who are also low vision, this will support light and dark mode. Many of us like to use dark mode. If your theme on your system is dark mode and in your browser, it’s gonna use, utilize dark mode. So even those things that, uh, some websites don’t think about because often we go to websites in their light mode and it’s very difficult for those of us who are light sensitive, we try to take all this stuff into account.
So I’m gonna just ask it a question. I’m gonna say, what are the accessibility features in Windows? [Types the question]. So I’m gonna press enter. [Narrator announces “Stop responding, button, scan” and “Working on it”]. So initially, it’s, there’s a stop responding button, so I could just activate that. And you’ll notice it says working on it. I’m gonna come back to this in a moment. We’re gonna hear this every five seconds, and visually text is actually streaming into the system while we’re hearing working on it. And that text is then being formatted into, uh, in this case, it’s being formatted into bulleted text.
[After the response loads, Narrator begins reading the answer about Windows accessibility features]
All right. I’m gonna press control to silence it, ’cause there’s a lot of information in this particular answer. But one of the considerations we had to think about whenever we were creating this is, how do we present information to the user? Because as I noted, there’s text that’s visually streaming, and, uh, because of some of the way that the, the technology stack works in the background, you can’t have that speak as it comes in, because it repeats a whole bunch of stuff, which wouldn’t be very usable for those of us who are using screen readers, because you just hear, “Chunk one, chunk one, chunk two, chunk two,” of, of text over and over again, and it would be really problematic.
So we actually wait ’til all of the text comes in, and then we read all of it. So you’re hearing, “Working on it,” while you know that, uh, it’s being streamed in. So someone who’s sighted, uh, would see text streaming, and then once it’s all there, then it’s read. So this is one of the things we had to figure out, how do we actually do this? Because, and then also in the background, not only do we have to think about when is it read, but the way that this comes into this website, it’s actually not always structured inside of HTML the way that you would think. So it has to get the information from the AI model. Then it has to actually restructure the information, and it has to put it into, uh, usable, traversable, uh, elements on the web.
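The buffering behavior Jeremy walks through, announcing a periodic "Working on it" placeholder while chunks stream in visually and reading the assembled answer to the screen reader only once it is complete, might look something like the following rough sketch. The placeholder text and the five-second cadence come from the demo; everything else here is an assumption for illustration.

```python
from typing import Iterable, List, Tuple


def collect_announcements(
    chunks: Iterable[Tuple[float, str]],  # (arrival time in seconds, text chunk)
    interval: float = 5.0,
) -> List[str]:
    """Buffer streamed chunks instead of pushing each one to the screen
    reader (which would repeat 'chunk one, chunk one, chunk two...').
    Announce a 'Working on it' placeholder every `interval` seconds
    while text is still arriving, then announce the fully assembled
    answer exactly once."""
    announcements: List[str] = []
    buffer: List[str] = []
    next_ping = interval
    for arrived_at, text in chunks:
        # Emit a progress placeholder for every interval that has elapsed.
        while arrived_at >= next_ping:
            announcements.append("Working on it")
            next_ping += interval
        buffer.append(text)  # visually, this chunk streams in immediately
    announcements.append("".join(buffer))  # read the whole answer once complete
    return announcements
```

In a real page this would feed a polite `aria-live` region, so the placeholder and the final read-out never interrupt what the user is doing mid-sentence.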
[Jeremy demonstrates navigating through the response with keyboard commands and Narrator]
So I can actually have this whole thing read… I can actually up and down arrow between the list of questions and the answers that are here, making it very easy to navigate. And the same is true for somebody who is, um, uh, uh, who is keyboard only, but perhaps not a screen reader user. So they could just shift tab or tab to this, press enter, and then they can up and down arrow through this like a list. Additionally, if you like using your virtual mode, whether that’s, uh, you know, forms mode in JAWS, or browse mode in NVDA, or scan mode in, in Narrator, uh, you can also use that to navigate and traverse the page.
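The arrow-key traversal Jeremy describes, stepping up and down through the questions and answers as a list, can be modeled minimally as a focused index that clamps at both ends. This is a hypothetical sketch of the interaction pattern, not Ask MA's code.

```python
class QAList:
    """Minimal model of arrow-key traversal through a list of
    question/answer turns, as in the Ask MA transcript view.
    Focus moves one turn at a time and never runs off either end."""

    def __init__(self, items: list[str]):
        self.items = items
        self.index = 0  # focus starts on the first turn

    def arrow_down(self) -> str:
        self.index = min(self.index + 1, len(self.items) - 1)  # clamp at end
        return self.items[self.index]

    def arrow_up(self) -> str:
        self.index = max(self.index - 1, 0)  # clamp at start
        return self.items[self.index]
```

On the web, the same pattern is typically implemented with a roving tabindex inside a single tab stop, which is why a keyboard-only user can Tab to the list once and then arrow through it.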
[Jeremy continues demonstrating the feedback feature]
And so we have various things here that, “Is this conversation helpful?” Well, let’s suppose that you said, “Hey, I really wish that there was some more information here.” I can activate this. [A dialogue opens]. So what happens here is, a dialogue opens up and it says, “Hey, what, what was your issue with it?” And you can type in, you know, “Hey, I don’t think this was the right answer,” or, “I was looking for this.” And we, we actually implemented what we call self-recursive learning, or self-learning. So what happens is after you input information, if you say, “Hey, this answer didn’t help,” it goes back into our AI model and our AI model actually improves and self-learns from that information so that it can get better answers the next time that you come through here.
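The feedback loop Jeremy calls self-learning could, in its most minimal form, be a store of rated question/answer pairs where the unhelpful ones become the signal for the next improvement pass. The class and method names below are assumptions of mine, sketched only to make the flow concrete.

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackStore:
    """Collect thumbs-up/thumbs-down feedback on answers so unhelpful
    responses can be reviewed and folded back into model improvement."""

    records: list = field(default_factory=list)

    def submit(self, question: str, answer: str, helpful: bool, comment: str = "") -> None:
        """Record one piece of user feedback, e.g. from the
        'Is this conversation helpful?' dialog."""
        self.records.append(
            {"question": question, "answer": answer, "helpful": helpful, "comment": comment}
        )

    def needs_review(self) -> list:
        """Unhelpful answers are the training signal for the next
        iteration of the model."""
        return [r for r in self.records if not r["helpful"]]
```

The point of the sketch is the shape of the loop: every thumbs-down with a comment is a labeled example of where the system fell short, which is exactly why the speakers keep asking users to press the button.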
Um, I, I don’t have the ability to do this with Zoom, uh, but there’s some other low vision features I think that are very handy that I’m just gonna mention real quick, which is when you actually are searching for a question, [demonstrates] if you’re using magnifier, that actually gets you… if you’re using magnification, it will zoom in on that particular part on the page. We found that was really helpful, because other times you might get lost and not know where that information was.
So, we tried to think about this from all aspects of accessibility. From the usability of the page, the accessibility of the page, ’cause those two things aren’t always one and the same. As well as, what does the AI model actually do on the back end to ensure that you’re getting, one, reliable, accurate information from Microsoft.com and Microsoft trusted resources, and that it also respects all of those principles that Ioana was talking about in terms of, you know, not having ableism and, and, uh, all of those other things. And then most importantly, I think, maybe not most importantly, one of the important things is simplicity. Um, you’re not going to have to find yourself tabbing and shift-tabbing through a bazillion things that are, like, extra prompts and things of that nature. We tried to make it a very simplistic design.
Again, anyone is welcome to try this out at aka.ms/askma. Uh, please use the feedback, the yes or no button to provide us feedback because we would absolutely love to get your feedback as we, we continue to improve this. Um, so this is an example of kind of the, the initial pioneer that we had trailblazing a lot of these concepts that we’ve been talking about.
IOANA TANASE: Jeremy, can I add something to your demo?
JEREMY CURRY: Sure.
IOANA TANASE: Um, I love the fact that you showed the box where it says, “Was this a helpful conversation?” And we have a thumbs up versus a thumbs down button. I told you at the beginning of this conversation that all of us play a part in responsible AI. And this is one of the ways that you can actually take part and be active in this conversation. That’s not just a nice to have box. It’s not, it doesn’t just exist there for the sake of it. By giving feedback, you are actively, um, participating in our responsible AI standards, because we have been very grateful to all the feedback we received from our clients in terms of, is this accessible or, um, is this actually accurate in terms of my representation? And it has been a journey, and the feedback that we receive has helped us be better. So one, thank you, and two, as consumers, please provide feedback. Um, products are not perfect, but your feedback helps improve them, and ultimately that, that helps all of us.
JESSICA RAFUSE: That’s a great point. Great feedback. (laughs) So amazing. Ioana, I’m glad you brought that up. I, I’m wondering if you can talk a little bit about the broader Microsoft approach. Where, how are product groups receiving that feedback in other ways?
IOANA TANASE: Absolutely. So, I was sharing that one of the ways is that all of our Copilot technology has that thumbs up, thumbs down button. Um, so that is a consistent way where folks can, uh, submit their feedback and share everything, from, um, from, “Hey, this answer was not good enough,” or, “It was not accurate enough,” towards, “I’m not really happy about this image, um, that was meant to describe somebody who’s blind with a guide dog.” Um, and Jeremy is the expert here, but we also have our fantastic enterprise disability answer desk where, where folks and clients can submit their feedback concerning disability as well as other, other elements related to, um, identity and disability. So, I encourage everybody to do so because all of that does go back into our products and more so, it helps us identify ways that the community wants to be represented and the community wants to be using our products. And that is absolutely the best thing and the, and the best gift that a community can give us.
JESSICA RAFUSE: Now, we have time for one more question. Out there, we have an audience of brilliant, passionate change-makers, folks that truly care about accessibility. If we had a wish list, if there was something that we wanted to ask this audience next steps, uh, what would be the call to action? Ioana, let’s start with you.
IOANA TANASE: Um, I was smiling because it’s dangerous to give me a wish list. Uh, for me, I want everybody to be part of the AI conversation. I want people to start using these tools. I want them to experiment with them. And my advice, try one tool at a time, uh, and discover what you learn and discover what works for you and then, um, figure out ways that you can share that with others. If there’s one thing I can go back to the accessibility team when we first started using Copilot, we heavily leaned on each other. We were discovering what type of questions we can ask. We were discovering what type of issues we’re encountering. So, being able to share that with each other has been a gift, um, and I want people to be able to do the same thing. So, that’s my homework. Go try these tools out.
JESSICA RAFUSE: You heard from Ioana. It’s time to go play. Jeremy, what would be your call to action?
JEREMY CURRY: I’d say take, take the first step. Um, we are probably further ahead than most people ’cause we’ve been working on this for several years, but there are a lot of people who are very advanced as well. But, um, there’s a lot of sometimes fear portrayed about AI. Don’t be fearful. Just go and try it. Um, if you don’t take that first step, it’s gonna be difficult. Uh, and, and see what it does for you. There are feedback mechanisms like Ioana mentioned, and really, community is the other thing that I would, I would note. Um, I don’t think there’s a day that goes by that Ioana and I aren’t sharing something about AI that we’re learning still to this day. So, community is everything.
JESSICA RAFUSE: Well, thank you both so much for joining me today. This has been a fantastic conversation. Thank you to Sight Tech Global for having us. And to everyone out there, we hope you’ll try Ask MA, provide feedback, and join us on this journey to make AI more fair, more inclusive, and more accessible for everyone. Thank you.
[MUSIC PLAYING]
