DESCRIPTION

Who knew that screen readers, unlike web browsers, are not interoperable? Website developers don’t worry about whether their code will work in Safari, Chrome, or any other browser, but if they take accessibility seriously, they have to test with JAWS, VoiceOver, NVDA, and the rest. That’s about to change, thanks to the W3C ARIA-AT project. (This session will be followed tomorrow by a live breakout session with King and Fairchild, as well as several other members of the W3C ARIA-AT team.)
Speakers
- Caroline Desrosiers, Founder and CEO, Scribely (moderator)
- Mike Shebanek, Head of Accessibility, Meta
- Matt King, Accessibility Technical Program Manager, Meta
- Michael Fairchild, Senior Accessibility Consultant, Deque
SESSION TRANSCRIPT
[MUSIC PLAYING]
SPEAKER 1: But first, a brief video introducing the W3C ARIA-AT project.
SPEAKER 2: On a vintage computer, web pages cycle through. Some work, and some show incompatibility errors.
SPEAKER 3: Back in the 1990s, web browsers were all over the place. A website that worked in Netscape didn’t always work in Internet Explorer, or the other way around. Only big companies with dedicated teams could afford to make websites work in multiple browsers.
SPEAKER 2: On a split screen, several hands type and one uses a mouse.
SPEAKER 3: To fix this, thousands of people from dozens of internet companies worked for over a decade to standardize the web. It was a massive investment of effort that improved the state of browser interoperability, but there were still lots of compatibility bugs.
SPEAKER 2: Back to the monitor, an error message reads, “this page best viewed in Netscape.”
SPEAKER 3: So in 2012, a group of people started the Web Platform Tests project, or WPT for short. Today, you can go to wpt.fyi and see the results of millions of small tests that run every day to make sure websites render consistently across all platforms.
SPEAKER 2: On the WPT website, a cursor moves over a browser interoperability graph.
SPEAKER 3: When you use a website and it works the way you expect, it’s because we have web standards and automated testing technology. The reason we’re here today is because assistive technology users were not included in this push for interoperability.
SPEAKER 2: An animation of a giant sphere pushing clusters of shapes into the far corner of the screen. Louis, a light tan-skinned blind man, smiles at us.
LOUIS: My name is Louis, pronouns he/him/his. I am from Southern California.
SPEAKER 2: Palm trees sway in the wind. A close-up of someone strumming an acoustic guitar.
LOUIS: I play guitar in a musical duo, and I am proud to be an accessibility tester. Behind me is a desk on which is a technology setup — keyboard connected to a desktop, a monitor, and a laptop. The experience for a screen reader user can vary dramatically from one screen reader to the next. In practical terms, this means that I can spend a morning trying out different screen reader and browser combinations to overcome whatever inaccessible content I’m dealing with that day. Or sometimes, depending on how the content’s rendered, I can’t get past that. And I’m somebody with the tools to experiment and figure out these accessibility issues. Now, I’m going to show you how two screen readers voice the web differently.
SPEAKER 4 (SCREEN READER): Delicious pizza that you can get from an accessible site when you need a quick — blank — list of pizza options, region, check box, not checked, pepperoni, checked, not checked, checked.
LOUIS: Now, I’ll be using the screen reader on the laptop.
SPEAKER 5 (SCREEN READER): Pepperoni off. You are currently on a switch. On. Off. On. On.
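For readers following along in text, here is a minimal sketch of the kind of markup that could produce those two readings. It is illustrative only, not the actual demo code, and it assumes the demo used the ARIA switch pattern.

```html
<!-- Illustrative sketch, not the actual demo code. Assuming the demo
     used role="switch", one screen reader maps the role to a checkbox
     and says "check box ... checked/not checked", while another says
     "switch ... on/off", as heard above. -->
<div role="region" aria-label="Pizza options">
  <div role="switch" aria-checked="true" tabindex="0">Pepperoni</div>
  <div role="switch" aria-checked="false" tabindex="0">Mushrooms</div>
</div>
```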
SPEAKER 2: Screen reader logos appear on screen — VoiceOver, TalkBack, JAWS, Narrator, NVDA, VoiceView, and ORCA.
SPEAKER 3: There are many screen readers with hundreds of interpretation differences, like the one Louis just shared. These differences can exclude people.
SPEAKER 2: An animation of shapes dropping onto a conveyor belt and moving right.
SPEAKER 3: Developers who can afford it test across screen reader and browser combinations. But the way screen readers and browsers interpret web code changes all the time.
SPEAKER 2: Shape clusters jump in and out of their place in line.
SPEAKER 3: To make the web truly equitable and inclusive, assistive technologies need to work with web code in predictable ways. This is why we started the Accessible Rich Internet Applications and Assistive Technologies Community Group, ARIA-AT for short. Here we are on the ARIA-AT website. In ARIA-AT, we are bringing together a community of accessibility experts, user advocates, assistive technology vendors, and web developers to create what we call assistive technology interoperability.
SPEAKER 2: Several different shapes intersect with each other to form one big shape. The big shape shifts around, trying out different combinations.
SPEAKER 3: We are testing common ARIA patterns and collaborating with screen reader companies to resolve inconsistencies. In the future, we envision developers will be able to focus on accessible experience design, and assistive technology users will finally enjoy a web platform designed for them.
SPEAKER 2: The shape grows bigger and envelops the screen.
SPEAKER 3: Visit us at aria-at.w3.org to read our interoperability reports, learn more about our automated testing technology, or even join the group.
SPEAKER 2: A person uses their laptop at a dining room table. They click on Join Community Group, and the webcam turns on.
SPEAKER 3: We’d love to have you contribute your expertise.
SPEAKER 2: Big spheres and streams of smaller shapes all flow organically in the same direction.
SPEAKER 3: Let’s make sure that web platform reliability includes assistive technology users. We build better when we build for everyone.
CAROLINE DESROSIERS: Well, hi, everyone. I am so thankful we had that video prepared by Matt’s team, because it really breaks down this interesting topic we’ll be digging into today. I’m Caroline Desrosiers, founder and CEO of Scribely. And today, we’re going to be talking about how screen readers are working right now across browsers and devices, and a W3C project, years in the making, that promises to create a better path forward for assistive technologies.
We’ll learn more about how this project solves some critical inefficiencies with screen reader testing and its potential to open doors for future innovation. So on today’s panel, we have Mike Shebanek, head of accessibility at Meta, Matt King, accessibility technical program manager at Meta, and Michael Fairchild, senior accessibility consultant at Deque. Matt and Michael are the co-chairs of the W3C ARIA and Assistive Technologies Community Group behind this project.
And in a little bit, we’re going to dig into the history behind how this community group started and also find out what happens next for this initiative. But first, question to the panel. Can you talk a little bit more about some of the key points from that intro video? What is assistive technology interoperability exactly? And why should those of us who use or test with assistive technologies care about it?
MATT KING: Yeah. This is Matt, and I would love to talk about that a little bit, especially since I am a screen reader user — have been for decades. And I think this is something that almost every screen reader user is familiar with, even if they don’t recognize it as what we are talking about, as assistive technology interoperability. Basically, interoperability — “inter” meaning between — means that I can go from one screen reader to another screen reader, from one browser to another browser, and things should basically work the same way. Things that I expect to be there and be announced are there, and they’re announced, and I can access them.
And so many of us are familiar with the scenario where, gosh, I was on this site last week and it was working. But what’s going on? This week, it’s not working for me, and it doesn’t seem like the website has changed. And so then you go and try it with a different screen reader. And now, it’s actually working for you. Like, huh, what’s going on here? There are so many different things that can change that can cause that kind of problem. And because there are so many different things that can change, we have to have different ways of testing to ensure that the things that could change, that could break it for the user, don’t.
And that’s what this is all about. We’ve just kind of assumed this is how life is. Sometimes it works with one screen reader and not another. And it doesn’t have to be that way. Even the people who are testing and trying to build websites every day, and make them accessible for anybody using a screen reader, are running into this problem all the time. They thought they could use a specific web component, and they get it working with one screen reader, and then it doesn’t work with the other. And that creates what’s really almost an impossible situation for web developers trying to keep things working well for anybody using a screen reader.
MIKE SHEBANEK: And I hear this all the time. As head of accessibility at multiple different companies in my career, people ask me all the time, why isn’t it better? Why is this so hard? Why can’t companies do this? It seems straightforward. You make a website — I do this every day. I go to a website on my browser. It works. But when I use a screen reader, it’s not like that at all. And so we’re feeling that pain as an industry, we’re feeling that pain as end-users, and we now have a way of addressing that. But really, we want to help people understand, why is it that way? What’s not happened or what has happened that’s led us to this point?
CAROLINE DESROSIERS: Right. So it does sound like there’s a lot of pain points and things standing in the way for web developers right now. Does this mean that it’s currently impossible to make a website that actually works with assistive technologies? Or can you tell us about what it’s like for developers right now working through those issues?
MICHAEL FAIRCHILD: Yeah, this is Michael. I can try to answer that. So it is possible right now to make websites that conform to accessibility standards, but compliance doesn’t require good support in all screen reader and browser combinations. For teams that want to provide great, inclusive experiences beyond that minimal compliance, it can be possible to make some of those experiences work well for a lot of assistive technologies. But doing so requires a lot of manual testing across many different assistive technologies, browsers, and operating systems. It just creates a huge matrix of testing that needs to be done.
And often, it requires a very high level of skill for those teams to test all of those combinations and understand what the expectations really are in each of those individual assistive technologies and what support really means. And this often means that teams end up making non-standard tweaks or hacks to the code to achieve good support. And these hacks are often brittle and break over time as browsers and screen readers change, often leading to worse experiences. And in some cases, it’s even impossible to achieve good support with all assistive technologies.
So this boils down to a lot of overhead in terms of time, cost, and frustrations for teams that want to provide good support and inclusive experiences, and it results in a fragile sort of ecosystem that’s prone to breaking. And most importantly, it means the exclusion of people that use the assistive technologies.
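As a hypothetical illustration of the kind of brittle workaround Michael describes (ours, not the panel’s): a team might duplicate state in visually hidden text because one screen reader fails to announce an ARIA state reliably.

```html
<!-- Hypothetical illustration of a brittle workaround: state is
     duplicated in visually hidden text because one screen reader does
     not reliably announce aria-pressed. When that screen reader later
     fixes its support, the announcement becomes redundant. -->
<style>
  .sr-only {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
    white-space: nowrap;
  }
</style>
<button aria-pressed="true">
  Pepperoni
  <span class="sr-only">(selected)</span>
</button>
```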
CAROLINE DESROSIERS: Right. So it sounds like a lot of developers, as Matt said, have kind of accepted that this is just the way things are, but it really doesn’t have to be. And there are a lot of practical implications at play here, right? So what is this all about? Can one of you explain some of the reasoning behind Meta’s decision to focus energy on this project in particular?
MIKE SHEBANEK: Yeah. You know, we feel this pain from both sides. We have people at our company, like Matt, who use screen readers every day. And it’s amazing to watch him switch from one to the other to the other, and then go, it works here, but it doesn’t work there. And if we make this fix, it breaks the other one — just that frustration of, what choice do we make? And then there’s the other challenge: we only have so much time and so many people. Even with as many resources as we have, how do we cover all the combinations?
As Matt and Michael were saying, the number of combinations of device and operating system and web browser and screen reader grows so fast. And there’s more coming, and there’ll be more in the future. It’s getting out of control for us. And so it’s not a trivial thing for developers who want to do the right thing, like us and others, to sit down and say, well, just test it and it’ll work across all these different combinations. We literally have to test every single one.
So we also hear this from our end-users. We serve over three billion people in the world on Meta products, across the globe, on multiple different platforms — mobile, desktop, and others. And they have the same frustration. Like, I come with the product that I can afford and have access to, which we also know is not always the latest and greatest one, because not everybody has the money to upgrade to the latest thing the moment it comes out. How come it doesn’t work for my version? And so the variations of — not even just the platform, but the version you’re on or the age of the device — these all grow that matrix to become basically overwhelming. And that’s where companies find themselves.
Whether it’s our company or another company, whether you’re a small developer with three people, or a mid-sized company, or an enterprise, you’re facing this same giant matrix. And to guarantee a good accessible experience across all those combinations, as Matt said, is kind of impossible. So we all do the best we can. And what ends up resulting is that people who use screen readers say, I guess this is just the way it is, because no one’s been able to sort of wrangle this and figure this out — until now.
CAROLINE DESROSIERS: So theoretically, the dream behind this project actually sounds great, because with this initiative, we now have what we need to make screen readers actually work more consistently and more intuitively. And I think this is all kind of what we expect from technology these days. However, as you mentioned, Mike, there are so many different kinds of assistive technologies, so many different kinds of screen readers. With all of the options we have right now, is interoperability for assistive technology realistic and achievable? So my question is, do you believe it’s possible? And what will it take to get us there?
MIKE SHEBANEK: I’ll let Matt and Michael address the, what it takes to get us there. But in terms of, is it possible? It absolutely is possible. And I think that if we roll the clock back and think about what the web itself was like in the early 2000s, you had to have a certain web browser to go to certain websites. And if you had the wrong one, you couldn’t load that page. You couldn’t perform that transaction, or send that message, or get information.
And we’ve kind of forgotten that that’s really how the web started, and it took some really dedicated people — and in a lot of ways, the work of the W3C — to propose the idea of interoperability and standards, something that everyone could agree on, so people could have more choice in the browser and device they wanted to use. That has resulted in a plethora of really cool different browsers and really cool different platforms that still work together.
So we have the uniqueness that we want, we have the choice that we want. And it’s allowed companies to be really innovative and do lots of cool features and things without having to spend all of their time just making it work and making it interoperable. And that’s what we’re proposing here with this project, is we can now sort of come to agreement on what the standard should be so we can cover it quickly and then spend all of our time and money and resources and creativity on the really cool, innovative stuff that’s going to push us forward.
CAROLINE DESROSIERS: And Matt or Michael, do you want to talk more about what it will take to get us there?
MATT KING: Yeah. If you think about what it took to get there for browsers — and by the way, we have a lot of the same people who were working on making this happen for browsers helping us make it happen for assistive technologies. And it kind of boils down to getting hundreds, if not thousands, of little agreements in place that, this is what ought to happen in this scenario, and this is what ought to happen in this scenario, and it has to — wait, we have to have consensus across all of the stakeholders.
And then once we have that consensus, we have to have the machinery, the infrastructure in place, to run a process and a program where we can get everything that needs to be tested tested, and track all of that data, highlight the problems, get the problems resolved. And that’s a lot of work to do and a lot of work to track, a lot of data to track. So there’s a big infrastructure that has to be built.
And technologies have to be invented. Right now, there is no way to run automated screen reader tests in a standardized way. You can’t just say, you know, I have a machine that’s going to run a test. Now, plug in any screen reader you want, and let’s see if it works with this screen reader. You can’t do that today. We can do that with browsers. We have technologies for doing that with browsers. We don’t have technologies for doing that with screen readers or any other assistive technology.
MIKE SHEBANEK: I think it amazes most people to learn that even for just the traditional web browser, there are literally millions of performance tests run every single day — in public; you can go find these sites and check them out — to ensure that browsers are working correctly with internet standards like HTML and CSS. That happens all the time in the web space. It doesn’t really happen at all, effectively, in the accessibility/screen reader space. So I think that’s the kind of — Matt, I think that’s the kind of automation you’re referring to, is if we can start to make those robust tools [? and help — ?]
MATT KING: Yeah.
MIKE SHEBANEK: –with agreements, the automation that will help us be so much more consistent and so much quicker, and deliver so much better experiences, will speed things up. It’s a giant change, a giant step forward. We just — the accessibility world missed its moment. The web world had it. We saw it happen. It was there. We need to catch up to that. And so that’s really what this project is about, is bringing this up to speed, in a lot of ways.
MATT KING: Yeah. Essentially, every time a browser changes, or every time a screen reader changes, or every time there’s a change in a standard that affects either one of those, every single test needs to be rerun. So that’s a lot of test-running that has to happen. That’s why we need machinery and technology to help us do that.
CAROLINE DESROSIERS: Got it. Well, this all sounds really positive and encouraging, but I am sure there are skeptics out there at this point. So does this mean that all screen readers actually have to be the same? And what would you say to those who are concerned that this might negatively impact competition and innovation within the assistive technologies industry?
MICHAEL FAIRCHILD: Great question. So the goal is not to have identical output between screen readers. The goal is to have equivalent meaning and robust support of the expectations for the standards. And then once you standardize the fundamentals of a technology, as Matt just alluded to, it’s easier for innovation to blossom and for people to embrace that technology. The web itself is evidence of that, as we just described. Matt, do you have anything else to add about —
MATT KING: Yeah. I mean, I would say that I understand the skepticism. Because if people love the way their screen reader works, they want it to work the way it works. And that’s not what — we’re not trying to change the things that actually work. We’re just trying to figure out, when something doesn’t work, what is the right thing to have happen within the context of that given screen reader?
And that’s why this takes a long time. That’s why — I mean, this is a lot of negotiation that has to go on. But the thing is, in the past, there may have been discussion of this. There may have been negotiation. Like, we think this is what screen readers ought to do. But there wasn’t any way to codify the output of those discussions and then to make sure that, yeah, the thing that we thought should happen should keep on happening as all of the technologies change.
MIKE SHEBANEK: Yeah. So Michael and Matt, you guys have been working in the working groups for the W3C, the standards body, to develop all of this infrastructure and these definitions to actually make this possible now. That’s what’s been going on for the last few years. But now, we’re at a different place, aren’t we?
MATT KING: Yeah. Now, we’re at the place where we have a good chunk of this infrastructure developed. We have what we call a working mode in place, where we know how we want to interface with the different stakeholders. We have agreements with a lot of people about how to work. But now, it’s time for us to really start processing all of this data and running the tests — we have a lot of tests written. I think as of today, we have tests drafted for 30 common web design patterns. That represents something like 4,000 tests, somewhere in that neighborhood. And these are all proposals at this point. We have to begin that long, arduous process of getting industry-wide consensus on them.
CAROLINE DESROSIERS: Right.
MICHAEL FAIRCHILD: And in addition to that — sorry. And in addition to that, we are also making good progress on creating the technology to automate all of those tests. We’ve started to — we have a prototype of a driver to automate this on Windows. And we’re starting to explore it on other platforms, and we’re starting to try to push forward standardization of that automation.
MIKE SHEBANEK: Yeah, automation is really amazing when you stop to think about what that means. Because if you’re a small web developer house, the power of that automation is the same as for a company like Meta. We’re all running the same automation at that point. We can all guarantee this is going to work the way it’s supposed to work. And so suddenly, all boats rise in the tide. Everybody gets to be better. Everybody gets to be more consistent.
And so again, we can focus on making really cool websites to do different things. We can focus on more content. We can focus on all of the stuff we want to focus on, knowing that it will also be equally accessible, which is what we’re trying to achieve. So the promise of automation around this is, I think, maybe one of the most significant opportunities that this presents.
CAROLINE DESROSIERS: Got it. So let’s get down to it, because I’m really curious about the answer to this question. We’re, of course, in 2021. WCAG is more than 20 years old. Screen readers are more than 30 years old. So if this technology standard is so fundamental to the type of work that we’re doing and what we expect from our technology, I’m wondering why hasn’t it happened yet? Why do we have interoperability for everyone who uses a browser but we don’t have it for people who rely on assistive technologies? And maybe we can get into some of the history behind all of this.
MATT KING: Yeah, it’s kind of a long, complicated question. But without going too deeply into it, I think the first thing we have to realize is just the sheer volume of work required to get this done. We’ve already alluded to the amount of investment that had to happen to make Web Platform Tests work for browsers. That took many years, and it’s ongoing. And so there had to have been somebody at some point to make a decision, like, we’ve just got to get down to business and do this. So that had to happen.
But even before that, the standards that make this possible — the Accessible Rich Internet Applications standard itself — only came into being in 2014. So that standard, people should understand, is something different from WCAG. WCAG is, like, the building code that says put a ramp on the building. And ARIA is the standard that says, how do you build that ramp?
And so we didn’t have that until 2014. So there wasn’t any way to say that this element on the page is a button, and this element on the page is a link, and this one’s a checkbox, unless you were using out-of-the-box — what we call pure, vanilla — HTML. But if you were building websites the way most modern websites are built, there’s a lot of other fancy stuff on that website that there was no way to make accessible without that ARIA standard.
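As a minimal sketch of Matt’s point (our illustration, not from the session): vanilla HTML elements carry their roles for free, while a custom widget built from generic elements exposes nothing to a screen reader until ARIA supplies its role and state.

```html
<!-- Vanilla HTML: the role comes for free. -->
<button type="button">Order</button>
<input type="checkbox" checked> Pepperoni

<!-- A custom widget exposes no semantics until ARIA names its role
     and state (keyboard handling via script is omitted here). -->
<div role="button" tabindex="0">Order</div>
<div role="checkbox" aria-checked="true" tabindex="0">Pepperoni</div>
```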
MIKE SHEBANEK: Yeah. And while screen readers have been around for 30 years, they haven’t been around for 30 years on mobile devices. I mean, we think of our world today as, well, we’ve always had smartphones. How would we ever have lived without them? But there was a day, not that long ago, when we didn’t even have smartphones. And even when smartphones came out, a screen reader didn’t come out on the first one. It took a while to figure that one out, too.
So it does take some time. But it’s been a dream for a long time, and it’s just needed the time to incubate. And that’s what’s been happening. And so really, why we’re excited about this — and you can hear the enthusiasm in our voices. We’re at that moment where it’s really time to sort of present this and encourage people now, like, there’s enough here for people to jump in on it and take part of it and actually make this real. So pretty exciting in that respect.
CAROLINE DESROSIERS: [? Right. ?]
MATT KING: Like you said, they’ve been around for 30 years, Mike, but I think it’s really important for people to understand that for the first 15 years, at least, 20 years, screen readers were really a lot more of a hack than anything else. Like, literally, they read the print being sent to the screen. They would suck data out of video drivers and things like that.
And they weren’t what we call engineered approaches. You couldn’t design accessibility, really, back in the early days of Windows and so forth, not until some standards — or APIs, we call them, really — came into place where it was possible for a screen reader to know what was on the screen without guessing. Now, they can know what’s on the screen without guessing. We have all of those standards and technology in place. Now, we can actually test to make sure that it’s doing the right thing.
MIKE SHEBANEK: Yeah. I remember, we had a hand in helping develop the VoiceOver screen reader for Mac OS. And that was only in 2005, and that was really one of the first efforts to actually build a screen reader into the operating system as if it belonged, and not just sort of let someone hack into the system and figure out what to say or do, or enable them to interact with a computer. So to your point, Matt — I think that’s a good one — it seems like we’ve been at this a lot longer than maybe we have been. So we’re catching up. We’re catching up fast.
MATT KING: Right.
CAROLINE DESROSIERS: Yeah. It seems like the stars have finally aligned, perhaps, and that was part of the drive towards pursuing this project in the first place. We finally have what we need to do it. So I guess my next question would be, what comes next? What can we expect? What are the milestones ahead? And how long will this initiative take to roll out?
MATT KING: So we have a 2023 goal that’s pretty big. We want three desktop screen readers and two screen readers on mobile operating systems to be really well-positioned for nearly full interoperability by the end of 2023. That means that all of what we call accessibility semantics that are defined for HTML and for ARIA will have at least proposed tests, and we’ll be running all of those thousands of proposed tests. Every time any one of those technologies changes, we’ll know what all of the bugs are. And by that time, we will also have in place all of the processes and the consensus, and we will have enough leverage to fix most of them.
I can’t say that we’ll be at the point where all five of those are really fully interoperable at the end of 2023. But all of the machinery to make sure that it will happen, and that it is inevitable, that’s our goal — to have it in place at that time. But this is a never-ending thing. So that’s only five screen readers. In fact, right now, we’re just focused on the three desktop screen readers. That’s it. Because if we don’t do it there, we know that we can’t do it anywhere else. So we’re starting there, and then we’re growing. And then it’ll go to more screen readers, and then more kinds of assistive technologies. But that, 2023, that’s a big giant milestone for us that we’re aiming to — we’re really pushing hard to get there.
CAROLINE DESROSIERS: Fantastic. Well, I feel like there’s so much we didn’t get to cover today. And I’m really curious to know how we go about making this happen in practice, and what kind of impacts this is ultimately going to have on accessibility moving forward. So how can people listening today learn more at this point?
MICHAEL FAIRCHILD: Yeah. So that’s a deep topic. It’ll take a while to cover it, and there’s a lot of different angles there. So we have another session tomorrow on the schedule, and we’ll cover that topic in more depth and dive into some of the more technical details.
CAROLINE DESROSIERS: OK, great. So final thoughts today. What’s your main takeaway on why this initiative is so important? And why should people believe in this?
MIKE SHEBANEK: Well, it certainly can be done. We’ve seen this happen with the web and with other big projects where standards came along and allowed interoperability. And then suddenly, those ecosystems flourished, and that’s what we’re proposing here. We want people to grab on to that vision that things can and should be much better than they are.
And in a lot of ways, we’re just calling on everyone to say, don’t put up with this anymore. It shouldn’t be that you have to know how to use three or four screen readers on three or four different devices with different operating systems just to get your work done, just to go to school and learn, just to stay in touch with your friends. You should be able to pick the one that suits you best, and enjoy it, and have it work, and know it’s going to work before you even try it.
So that’s the call to action here. That’s the excitement around this. And that’s a massive step change for accessibility — to become sort of more formal, and more consistent, and higher quality across the board. So for us, it’s a very exciting moment. And we’re really encouraging, if you’re one of those developers, if you’re one of those platform owners, if you make a web browser, even if you’re just an end user of a screen reader — it’s time to start saying, we want more, and we want it to be better. And we have a way to do this. Let’s get on board. Let’s work together and make it the world we want it to be.
MATT KING: Yeah, it is going to take a community, a lot of work and effort. And we know that there are going to be some more bumps along the way. It’s not going to be just clear, smooth sailing. But we really believe that this isn’t actually optional at this point. Because if we’re ever going to have a web where it is possible to build for every single user, then we just have to have this. It’s a big investment. It’s a lot of work. And it might seem like, compared to the number of people involved and what we get out of it, it’s disproportionate. It’s not. People matter. Every one of us matters. The investment is important, and it needs to be made.
MIKE SHEBANEK: Yeah, I’ll just — and —
MICHAEL FAIRCHILD: Mike and —
MIKE SHEBANEK: Go ahead, Michael.
MICHAEL FAIRCHILD: Oh, go ahead. No, you. Go ahead.
MIKE SHEBANEK: I was just going to say — I was thinking about your question, Caroline. It’s such a good one. And I’m sure there are people out there saying, this seems impossible. Like, it’s never happened. It probably will never happen. And I just want to encourage people and just say this. When Apple proposed to make a smartphone out of glass, the entirety of the vision loss community said, that’s impossible. We will never be able to touch that glass and understand what’s under our finger and be able to use that device. And yet when you go around, almost everyone who’s blind and uses technology has a smartphone, and it’s glass, and it was figured out, and it works even better than what people had before.
And so I think we’re really encouraged by breakthrough moments like that. And we’re feeling like this is that breakthrough moment for web browsing and for assistive tech interoperability. And we want to paint that picture for people to say, like, this can really happen. And we’ve seen these — we’ve experienced these moments before, and this is that. And that’s why we’re excited to share it.
MICHAEL FAIRCHILD: Yeah. I think Mike and Matt said it best. Let’s make the web more stable and more inclusive.
CAROLINE DESROSIERS: Well, this was a wonderful discussion and an absolute pleasure to moderate today. Thank you, Matt, Mike, and Michael for sharing your work and your ideas. And I’m very much looking forward to learning more and watching this project unfold. So bottom line, I feel like we need to continue to find ways to make screen readers a great experience so that people have technology that’s actually working for them to achieve what they set out to do.
And we hope that this panel has provided everyone with an introduction to this project and the possibilities for the future. So please bring your questions to tomorrow’s session, which will be an interactive deep dive into the technical aspects of how this all works and how we can make it real together. We hope to see you there.
[MUSIC PLAYING]