DESCRIPTION
Apple has long embraced accessibility as a bedrock design principle. Not only has Apple created some of the most popular consumer products in history, but these same products are also some of the most powerful assistive devices ever. Apple’s Sarah Herrlinger and Jeffrey Bigham will discuss the latest accessibility technology from Apple and how the company fosters a culture of innovation, empowerment and inclusion.
Speakers
Sarah Herrlinger, Senior Director of Global Accessibility Initiatives, Apple
Chris Fleizach, Accessibility Engineering Lead for iOS, watchOS, and tvOS, Apple
Matthew Panzarino, Editor-in-Chief, TechCrunch (moderator)
SESSION TRANSCRIPT
[MUSIC PLAYING]
MATTHEW PANZARINO: Hello, I’m Matthew Panzarino, editor-in-chief of TechCrunch, and I am here, thankfully and gratefully, with Sarah Herrlinger, who is the Senior Director of Global Accessibility Initiatives at Apple, and Chris Fleizach, who is engineering lead for iOS, watchOS, and tvOS accessibility. So I’m really excited to have both of them here with me. Hello, how are you?
SARAH HERRLINGER: Doing well, thank you very much for having us here.
MATTHEW PANZARINO: Wonderful.
CHRIS FLEIZACH: Yeah, doing great. It’s going to be fun.
MATTHEW PANZARINO: Great. I think that a good place to start this conversation is to sort of set the tone and help people to understand a little bit about how accessibility works, and how accessibility work happens at Apple, and what’s different about its approach. So maybe you could start talking a little bit about that, Sarah?
SARAH HERRLINGER: Yeah, sure. Well, accessibility is something that is incredibly important to us. Our goal is to really design for everyone. When we talk about making products that are simple and intuitive and easy to use, we’re not doing that just for some people. We’re trying to do it for everyone. And so accessibility is something that we see as a basic human right.
We also consider it to be one of our six core corporate values. And a little factoid– we actually started our first office of disability back in 1985, which was five years before the ADA even became the ADA. And I think really from the start, that goal of making products that everyone can use has permeated the design of our products since day one.
MATTHEW PANZARINO: And I think that given that Apple is such a large organization– obviously, tens of thousands of employees right now, and many of those obviously work at headquarters– or now distributed like we all are– on a variety of different products, I think it’s interesting to kind of talk a little bit about how Apple integrates accessibility with the product development teams specifically. Because I think in a lot of cases, what we’ve seen over the years is that accessibility teams are tacked on, or they come in at the end of a product development process to try to adapt the work that’s been done for people with disabilities or different conditions. And I think that there’s a certain afterthought aspect to a lot of different accessibility features that we see. And so I’m just curious about how that differs at Apple.
SARAH HERRLINGER: Yeah, for us, I think we do have a very different mindset. Accessibility is something that’s brought in at the very early stages of everything that we do. The accessibility teams’ focus is kind of multifold. Certainly, one part of it is to create really amazing assistive technology, to figure out, for example, how do you make a touchscreen accessible to someone who will never see the touchscreen, when it’s essentially a piece of glass? And how do you move around in a touchscreen environment? It could also be, how do you make a touchscreen work for someone who will never touch a touchscreen?
So they really focus a lot on amazing assistive technology, but as well, they’re working with every design team on every new product to say, OK, we’re going to come out with this new whatever it might be, and how do we ensure that this product will work for everyone? So it’s a really early and often way that we get in there and talk to all of the different teams, and make sure that they’re thinking about accessibility from day one, and then working to integrate all of our really cool ideas into all of these different new products.
MATTHEW PANZARINO: And let’s rewind it a bit. Because speaking of the product development process, I know that early on, when the iPhone was introduced– I remember, I’m that old– there was definitely a strong reaction in the larger community that the touchscreen interface was going to be a problem because it lacked the tactile nature of buttons. You’re removing a lot of the ability for people with, obviously, motor skills issues, or even sight issues, to navigate the interface, because there’s no tactile feedback. And I know that VoiceOver is one of the things that came out of that. So I was just curious– how was that initial reaction, and then the genesis of that VoiceOver product– how did that kind of happen [INAUDIBLE]?
CHRIS FLEIZACH: Matthew, I was on the macOS VoiceOver team when the iPhone first came out. And we saw the device come out, and we started to think, we can probably make this accessible, even though there was a lot of fear and uncertainty and doubt about the whole experience. And I know different organizations were getting worked up, and worried that they might be left behind.
And we were lucky enough to get in very early. I mean, the project was very secret until it shipped. Very soon after it shipped, we were able to get involved and start prototyping things. It took about three years to get things ready to ship in 2009 with iOS 3. And that whole time, we were working sort of in secret, knowing that we had something that was just a blockbuster on our hands.
And I remember a great story. I was eating lunch with one of my engineer friends who happens to be a VoiceOver user. And I saw Steve sitting close by at the lunch area and positioned myself close enough where he might be intrigued to come over. And so he took the bait, and he came over, and we talked a little bit about this. And my friend who’s an engineer said, Steve, what about the iPhone? And Steve said, maybe we can do that.
And so this sort of pushed us along the way to think about what we could do that would be really amazing and push this ball forward. And the response has been incredible. I mean, it’s so great to see people take this technology and go out and be independent with it, and do things that we never thought would have been possible.
MATTHEW PANZARINO: Yeah. And I think that adaptation of new technologies is obviously kind of like a key theme with Apple, right? Apple takes new technologies at the sort of edge of the productization envelope and makes them mass successful, to the tune of millions or billions of people. And over the years, it’s introduced a lot of technology. Some recent ones include Dolby Atmos support– the surround sound in AirPods– LiDAR, and improved cameras. But those are often leveraged for accessibility features as well. And so how early in the process are accessibility teams brought in to work on these new technologies, and kind of wrap them around accessibility opportunities, or vice versa?
SARAH HERRLINGER: We actually get brought in really early on these projects. I think as other teams are thinking about their use cases maybe for a broader public, they’re talking to us about the types of things that they’re doing so that we can start imagining what we might be able to do with them from an accessibility perspective. So whether it be something like liDAR, or a lot of the work we’ve been doing in AI and machine learning, it’s having our teams be able to say, what’s the specific use case that is really life-changing for our communities, and how do we make sure that whatever they’re working on, we are taking that and making it a part of the work that we do?
CHRIS FLEIZACH: Yeah, one great example of this was when Face ID started to be developed. We were brought in over a year before it ever shipped to customers. And they asked for our expertise in this. And one of the jobs that we have is to sort of poke and prod. It’s like, who would be impacted here? How could we improve the situation, and find all those edge cases? So one thing that we pushed hard for at that point was to ensure that facial diversity was included in the training set, to make sure that Face ID would work for a wide range of people right out of the gate.
MATTHEW PANZARINO: Yeah, that’s a good example. There have been many different uses for ML and AI over recent years across Apple’s spectrum of products. And you mentioned it there briefly, Sarah. But I’m just curious, what’s kind of new as far as leveraging the neural engine and leveraging ML, as far as accessibility goes, in products coming up at Apple?
SARAH HERRLINGER: Yeah, we have a couple of great new things that we’ve built in in the iOS 14 timeframe. I would say a couple of really interesting ones to think about, specifically in support of the blind and low vision communities, would be things like VoiceOver Recognition, which is a new part of VoiceOver that really looks at how we give more information about elements that are on the screen. That could be anything from taking a photo with your camera, or having an image on the screen, and being able to describe the elements in it in a more contextual, human way. So rather than just saying dog, ball, pool, to give you an idea of three things in it, it would be able to say, dog jumping over a pool to catch a ball. So really, just giving you a lot more context about what’s going on in an image than we have been able to do previously.
Also, being able to do more to read text in images. And then lastly, a really cool feature that we’ve added in, which is called Screen Recognition. And the idea here is to be able to augment what developers are doing on their own around making their apps accessible. So while we always work with developers to try and get them to do the necessary accessibility work on their own, Screen Recognition gives you the opportunity to kind of understand other elements on the screen– so sliders, buttons, things like that. So we’re really excited about what VoiceOver Recognition is going to be able to do to open up a lot more contextual information for our users.
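The “read text in images” capability Sarah mentions has a developer-facing cousin in Apple’s public Vision framework. The sketch below is not the internal VoiceOver Recognition feature– just a minimal, hedged illustration of on-device text recognition using VNRecognizeTextRequest; the function name and queue choice are assumptions for the example.

import Vision
import UIKit

/// Runs on-device text recognition on an image and returns the recognized lines.
/// Uses only the public Vision API; nothing leaves the device.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    // VNRecognizeTextRequest performs OCR entirely on device.
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else {
            completion([])
            return
        }
        // Keep the highest-confidence candidate for each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // Run off the main thread; the completion is invoked on this background queue.
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}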
MATTHEW PANZARINO: Yeah, and let’s split this into two conversations really quickly, or two questions. Because I want to come back to the ML discussion around on-screen detection. But really quickly– I recently watched a pretty impressive video showing sort of a new kind of wrinkle in the screen detection, which showed a person walking, tapping the screen on a regular basis, and getting a description of what was being shown through the iPhone’s camera. So effectively, a lens onto the world that described, as you said, a park with trees and a bench, a man standing with a child, right?
CHRIS FLEIZACH: It’s so cool.
MATTHEW PANZARINO: And therefore, essentially translating the real world over audio through the camera. Can you talk a little bit about that jump from, hey, we see stuff on the screen, to hey, we see stuff in the world, and we can tell you what it is?
CHRIS FLEIZACH: I think that that project is leveraging the image descriptions that Sarah was just referencing. And it’s one of those things that only Apple could eventually do, where we could develop a technology that’s supposed to be applied to photos, and realize, you know what? We can just apply this to the viewfinder of a camera, so you can walk around and get these descriptions output to you automatically. So that’s a really cool connection that one of our engineers working on it just made. He’s like, you know what? Why can’t we just do this? And he said, yeah, we can do that.
And what’s really cool is that because it’s on-device, because it doesn’t have to send anything to a server, we can just generate that image description in like a quarter of a second. It’s only going to use a limited amount of memory, so it doesn’t jettison the camera at the same time. Everything can run together. And of course, you don’t have to worry about your privacy at all. Everything just stays on device just straight from the beginning.
And because we have those sorts of constraints when we build up these kinds of ML projects, we end up with things that create user experiences that are so cool that we can deploy them everywhere. You talked about one example where it just worked in the camera, and you can walk around like a lens. But it also just works in Twitter, and Instagram, and your photo library, and the web. And no one else had to do it. Those apps didn’t have to change a bit. We could just build it into the system and make it a seamless, great experience.
MATTHEW PANZARINO: Yeah, and that part of it is really the second part, right? The on-screen detection– you’ve got a white paper coming out. It should actually be out by the time people are listening to our conversation today. And that paper goes over in detail sort of how you’re using pixel-level analysis of a screen. And if I get the gist correctly– and please explain it in better detail– it’s enabling accessibility features even if a developer did not have the resources or ability to, or just didn’t get around to, making their app accessible using Apple’s internal frameworks, right?
CHRIS FLEIZACH: Yeah, yeah, this is one of the things that I’ve been so excited about for so many years now. If I could break down the history of iPhone accessibility, the first sort of 10 years was about built-in accessibility. You walk into an Apple Store, no matter what your disability, you buy it, you walk out, you’re independent. You can turn it on, and do almost everything. And not only do you get to turn it on– Stocks works. The Measure app works. Every single app out of the box works. And this built-in accessibility was sort of unheard of 10 years ago, especially at a mainstream company.
When we did that, we realized we had solved this problem. The industry has moved forward. You can get screen readers on almost every major device. What’s next? Well, what about all those other apps out there? Sure, there’s API to make your app accessible, and there are a lot of great accessible apps out there. But there are a lot of apps that are hard to make accessible, or that it’s hard to get the education out about. I mean, it’s still a niche field. It’s not easy to let people even know that this thing exists.
So we thought, what can we do so that you can still have a usable experience, so you can do something? So someone can turn this feature on, and the user can operate the app and get things done. And then also, sometimes apps come out with bugs. A certain app may be completely accessible, but someone forgot to label a button, so you don’t get to hear what it’s called– it just says button. And if that’s the buy button, or the order button, that can really put a dent in your day. So by leveraging Screen Recognition, you can sort of get past these things, and make these apps usable again. And that’s what’s really so exciting about it.
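The unlabeled-button problem Chris describes is the kind of thing developers normally fix themselves with UIKit’s accessibility API. As a minimal sketch– the button, icon, and label text here are hypothetical– it looks something like this:

import UIKit

// A hypothetical image-only purchase button. With only an icon and no label,
// VoiceOver may have nothing meaningful to announce beyond "button".
let buyButton = UIButton(type: .system)
buyButton.setImage(UIImage(systemName: "cart"), for: .normal)

// An explicit label, hint, and trait tell VoiceOver what the control is and
// what it does, so users hear "Buy, button" instead of just "button".
buyButton.isAccessibilityElement = true
buyButton.accessibilityLabel = "Buy"
buyButton.accessibilityHint = "Purchases the selected item"
buyButton.accessibilityTraits = .button

Screen Recognition, as described in the conversation, is aimed at the apps where work like this was never done, or was missed for a particular control.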
MATTHEW PANZARINO: And there’s a multiplicative effect there, because there’s no work on the developer’s part. And I think there are plenty of teams building– small one-, two-, or three-person teams– who may want to build accessible apps but may lack the resources or time to do it, just because of development cycles. And so this kind of leapfrogs over that.
CHRIS FLEIZACH: Right, and Screen Recognition is not meant to replace what you can do. I mean, we make API, and give advice, and we have lots of WWDC sessions about this, about how to make a great accessible experience. And we are always pushing app developers to go for that. Because you can do some really incredible things in terms of efficiency, and ease of use, and good descriptions. Screen Recognition is about closing that gap on the other– whatever– 80% to 90% of apps that are really hard to make accessible.
And we want to try to enable that, and allow people to use all that content at the same time. So if a great photo app comes out, and it was moving so fast that no one had the chance to learn how to use the API to make it accessible, guess what? You can use it with Screen Recognition and image descriptions to get descriptions of what’s going on in all the photos in it. And now this is great, because it makes you part of the conversation. You don’t have to sit on the sidelines watching as people are talking about something new.
SARAH HERRLINGER: It’s been really fun to hear from so many people out in the world who started using these features, and talking about how valuable they are for them. Just to see tweets pop up, or people just saying, wow, for the first time, I’m playing this game that I never had access to before, or I just started using this app everybody’s been talking about. And it’s really fun to see where this can go.
[INTERPOSING VOICES]
CHRIS FLEIZACH: I’ve got a great story.
MATTHEW PANZARINO: Go ahead.
CHRIS FLEIZACH: I’m going to interrupt you one more time.
MATTHEW PANZARINO: Yeah, no, please.
CHRIS FLEIZACH: I mentioned playing games, and we’ve seen that feedback. When we were developing this and we just had it working, one of the engineers who was working with someone who was helping to test it said, oh, this is working, why don’t you give it a try? And she said, oh, let’s try it on one of those top games in the App Store. So they started playing, and I heard from the engineer working on it on my team– he’s like, oh, this is great. She was so happy she was able to play, and she was able to beat me. And I heard from her, and it was like, I’m playing this game and I beat him. I loved this. I never had a chance to play this game before.
And so it was like one of those moments where it’s almost a teary-eyed moment, where it’s like, finally, we are doing this. We can make this stuff happen now with ML, and on-device technology, and the Neural Engine. You put this all together, and all of a sudden, you can enable people to play games that they would never have been able to play before.
MATTHEW PANZARINO: Yeah, I think there’s a particular confluence of the technologies and approaches that Apple is using that sort of intrigues people who work in this field. But also, of course, it has benefits for the people that it’s designed for. And that’s, of course, as you mentioned, on-device processing, which enables privacy. And then the frontline technologies that are actually being shipped to billions of consumers are leveraged for accessibility. It’s not like an afterthought. Or it’s not, oh, we get that tech in accessibility three years later or whatever.
So there’s a combination of things there, but the one thing that I keep coming back to is the original premise of the iPhone was to replicate or modernize the idea of the information appliance, a slab that became whatever you needed it to become. You tap the calculator button, it’s a calculator, right? You tap the camera button, and it’s a camera now. And I think that all too often, the accessibility universe is expected– people who need these affordances are expected to wrap themselves around the way the world works, and not wrap the world around the way that they work.
And I think that that’s kind of like the advantage of this edge tech being integrated, is that they have for the first time kind of a tool that they can carry with them, a magic wand that wraps the world around them, rather than have them having to conform, conform, conform, conform every day. It’s like, hey, the world is conforming to my way of working. And I think that’s a pretty special thing. And it only comes when you have a partnership, where people buy in to that idea that, hey, we built this tech, but we also know that you want to leverage it in these ways.
SARAH HERRLINGER: Well, I think one of the keys to that is the fact that we’ve always incorporated members of the community into the process as we’ve done this. So we look at all of the features of that device, and we don’t look at it and say, no one would want to use the camera if they were blind. We look at it and say, why wouldn’t you want to use the camera? So how do we make the camera work for this community? Every problem is really an opportunity. And so our teams are always looking at every piece, every nook and cranny of what everyone else is doing, and figuring out ways to try and make those accessible– and not just accessible, but a really amazing experience for the communities that we support.
MATTHEW PANZARINO: Yeah. And you mentioned the team members. And I think that’s an important thing to touch on. All too often, we do see accessibility features that are well-meaning, but not well-executed. And many times, you can trace that back to the makeup of the team. And so you mentioned it a little bit, but could you expand a little bit about the importance of having people with disabilities work on these technologies, and describe how that’s prioritized?
SARAH HERRLINGER: Well, that, I think, is central to what we do. We believe in the mantra of the disability community of nothing about us without us. Because you can’t design a product that’s really going to work for someone if you’re on the outside looking in, and you think you know how something is going to work.
So we have a lot of people on our accessibility team who have a really wide background, and who use our assistive technology in their daily lives every day. So whether it be VoiceOver users, or Switch Control users, or whatever it might be, we have people who are dependent on these technologies, and therefore are able to help us really think through not just how they should work– what’s the user experience– but also, where are there holes to plug? Where are there things that we could be doing more of, in ways that we may not even be thinking about if we aren’t members of that specific community?
CHRIS FLEIZACH: Yeah, I’ve got one great story from the image descriptions project, where we can describe an image with a full sentence. When we first started and the model started rolling out, obviously, we had people on our team who were using it. And we noticed right away that a lot of photos with white canes in them were being described as brooms or sticks. And so this is something that popped up for us, and we went out and sourced thousands of images of people carrying or using white canes, and incorporated that into the training set, so that images like that would be described in a much better way. And something like that wouldn’t have happened naturally if we didn’t have people involved in the process from the beginning.
MATTHEW PANZARINO: Right, right. Yeah, that makes total sense. And one of the technologies I think that is most intriguing lately has been LiDAR, right? Because I think that when it was introduced on the iPad Pro, it certainly was like, oh hey, this exists, let’s start using it for AR applications and things like that. But obviously, once the iPhone 12 Pro and Pro Max had it on there, you started to see some other applications. So one of the things that we talked about recently– we published a piece on it– was the people detection. So can you talk a little bit about that?
SARAH HERRLINGER: Yeah, sure. And people detection, I think, is another one of these great examples of things that have come out of the community within Apple, the blind community here. Truth be told, we started working on the concept of people detection a while ago. So I think some people have thought it has been specifically about our current environment of 2020, when in reality, this was one of those holes to fill. We had people on our team who said, one of the things I struggle with is, as I’m moving through space and just doing regular things in my life, like commuting or shopping, I don’t really have a context as to where people are around me. So for example, if for some reason, I’m in a line, I don’t know if the person in front of me moved. How do I get that kind of information?
And so we first started looking at ARKit, with people occlusion built in, as a way to be able to kind of know where a person is in the field of view. And then when LiDAR got built into both the iPad Pro and then the new iPhone 12 Pro and iPhone 12 Pro Max, we were really able to build on that to increase the accuracy of what one could do with people detection and that people occlusion in ARKit. And so by building those two things together, plus using all of Chris’s team’s amazing work on VoiceOver, and building Magnifier, and kind of creating this great toolkit to support blind and low vision users, we were able to create people detection, which does real-time detection of individuals around you, giving you a lot more information about the proximity of an individual to you as you move through space.
And it’s built to be used both while you’re standing still and also on the move. And the idea behind it is that, in either feet or meters, it will tell you the distance of the closest person to you. And it does that through an audible announcement– it will tell you the number of feet away. It will also play a tone to tell you, is this person getting too close? Have they come within your threshold area? It also uses haptics to give you an idea, once again, of where that person is in proximity to you. And then there’s also a visual for someone– even if you aren’t a member of the blind or low vision communities, and you just kind of want to know, has this person gotten within six feet of me, or whatever it might be?
MATTHEW PANZARINO: Right.
SARAH HERRLINGER: [INAUDIBLE] so that you can get that information as well.
CHRIS FLEIZACH: Yeah, people detection is such a great example where Apple can take cutting-edge technology, new sensor input, and combine that with user experience and design expertise that’s really unique to Apple accessibility, and the people working in this space, to make something. Because it’s not just detecting people. ARKit can do that. Anyone can just detect people, right? But what was different about what we did– part of it was, what is the right feedback to play? How often do you have to hear the feedback? Do you want to hear speech now, or do you want to hear a sound or haptics here? But it’s also about little things.
One thing that we kept encountering while developing it was that it would detect small body parts, like your hand or your foot, and would beep. And so if you’re trying to use this and test it, it sounds like it’s going off all the time. And we realized it’s because of the way that a user would hold this– they might hold their cane, and the phone would be angled down while they’re doing that, or moving around, and it would sort of detect a foot here or an arm here. And so part of that process was figuring out, well, let’s weed out small body parts so we can focus on people that are actually in front of you, and give you accurate distance measurements. I mean, this was a product that was created more by the users of the technology than almost by the engineers.
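For a sense of the pieces involved, here is a rough sketch of how LiDAR scene depth and on-device person detection could be wired together to announce the distance to the nearest person and buzz when someone comes close. It uses public ARKit, Vision, and UIKit APIs, but it is not Apple’s implementation; the class name, threshold value, and simplified coordinate mapping are assumptions for illustration only.

import ARKit
import UIKit
import Vision

// Illustrative sketch only: combines LiDAR depth with person detection to
// announce distance and give haptic feedback within a proximity threshold.
final class PersonDistanceEstimator: NSObject, ARSessionDelegate {
    private let session = ARSession()
    private let haptics = UIImpactFeedbackGenerator(style: .heavy)
    private let proximityThreshold: Float = 2.0  // meters; arbitrary for this sketch

    func start() {
        // Scene depth requires a LiDAR-equipped device.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // A real implementation would throttle this; running Vision on every frame is wasteful.
        guard let depthMap = frame.sceneDepth?.depthMap else { return }

        // Detect people in the camera image, entirely on device.
        let request = VNDetectHumanRectanglesRequest()
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
        try? handler.perform([request])
        guard let person = (request.results as? [VNDetectedObjectObservation])?.first else { return }

        // Sample the LiDAR depth at the center of the detected person.
        // (Coordinate mapping is simplified and ignores device orientation.)
        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        let x = min(Int(person.boundingBox.midX * CGFloat(width)), width - 1)
        let y = min(Int((1.0 - person.boundingBox.midY) * CGFloat(height)), height - 1)
        guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        let meters = base.advanced(by: y * rowBytes)
            .assumingMemoryBound(to: Float32.self)[x]

        // Announce through VoiceOver, and buzz if the person is within the threshold.
        let announcement = String(format: "Person %.1f meters away", Double(meters))
        UIAccessibility.post(notification: .announcement, argument: announcement)
        if meters < proximityThreshold {
            haptics.impactOccurred()
        }
    }
}

As Chris notes, the hard part is not the detection itself but the feedback design– when to speak, when to play a tone or haptic, and how to filter out stray detections like a hand or foot at the edge of the frame.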
MATTHEW PANZARINO: Yeah, that’s cool. And your ability to engage a community, and have people work on it who would use it day to day, then allows you to train the models appropriately, right? Because you can train them off into a wrong fork very, very quickly. And then all of a sudden, boom, boom, boom, it’s false positive after false positive.
CHRIS FLEIZACH: Robustness analysis and modeling is such an important thing, and something that doesn’t get a lot of attention. I mean, we did this with the people detection. We did it with the image descriptions, to make sure that for all those possibly sensitive or hot topics, we investigated them and tried to have mitigation plans for anything, so that the captions we present in those cases are representative, accurate, and always respectful.
MATTHEW PANZARINO: Yeah, yeah, good point. And then what do you think the overall impact of the iPhone has been on the blind and low vision community? Because I think that there certainly have been advancements in many different ways that are connected to the iPhone, whether it be connecting directly to cochlear implants, or other accessibility features that have helped people with motor skills issues. Back Tap is a great example of something that came out of there, and therefore benefits the entire world. People love Back Tap, and it came out of accessibility research. But I think specifically, the blind and low vision community has gained a lot from the iPhone over the years. How do you think that that impact is measured?
SARAH HERRLINGER: Well, I think you can look at public statistics that tell us that almost 70% of the blind community using a mobile device is using an iOS device. And the American Foundation for the Blind recently did a study about COVID, where they said that four out of five people said that iPhone was the technology that they rely on most in the world right now. They also found that over 50% of people were really scared to be out in the world and figure out social distancing and things. So I think, as we’re able to do things– not just people detection– the work we’ve done with the iPhone has profoundly affected the community.
And I think one of the big things that people don’t really talk about or think about that much is just the fact that we’ve built it into a consumer product, which mitigates social stigma, and gives people the opportunity to just go to a store and buy the same product that everyone else is buying, and use it in the way that everyone else is using it, and be a part of the larger Apple community. And the blind member of the household can be tech support for everybody else, because they’re just using that same device, and they know how it works, you know? It’s not something foreign and different.
MATTHEW PANZARINO: Right.
SARAH HERRLINGER: It’s just something that becomes a ubiquitous part of all of our lives and no one’s left out.
MATTHEW PANZARINO: Yeah, and it speaks to the difficulty, of course, for the entire community, this feeling of otherness, right? Everybody has an iPhone– not everybody, but Apple wishes– everybody’s got a smartphone.
[LAUGHTER]
And it’s a very common device, with features that everybody uses or doesn’t use on an individual basis. And I think that that aspect of it is probably very empowering overall. Because you’re like, it’s just a phone that I use like everybody else. I use it my way, you use it your way.
CHRIS FLEIZACH: And part of that’s been created just with all the user feedback that we’ve gotten over the years. I mean, every email that gets sent to AppleCare, every bug that gets filed– someone on my team ends up reading that. Oftentimes, I will read every single one of them. We look at what the community is saying. And we take that feedback very seriously, to try to incorporate everything that we can.
So if we see someone saying, you know what this product would really need to help me? It would be this thing. Then we look at that honestly, and say, how do we best interpret that and make it work for the person? And so that’s one of the reasons why you’re going to find more settings under Accessibility than anywhere else– more than on any other technology on any platform– just because we are devoted completely to ensuring that we try to provide as much access to everybody as we can.
SARAH HERRLINGER: Yeah. Well, I think the other piece is we look at accessibility not as a checkbox, but as a broad spectrum. Everybody uses their device differently. Whether you self-identify as having a disability or not, we’re all setting up our devices in different ways. And so for us, accessibility– when you, Matthew, were talking about use it your way, I use it my way– that’s the whole point, and every single one of us is using it our own way. And we want to just keep building out more and more features, and making those features work together. So whatever is the combination that you need to be more effective using your device, that’s our goal.
MATTHEW PANZARINO: Excellent. Well, I could talk for hours about this, but I think that’s about the end of our time, and it’s a great way to wrap it up. So thank you so much, both of you. I really appreciated it and enjoyed it, and I hope the audience does as well.
SARAH HERRLINGER: Well, thank you very much.
CHRIS FLEIZACH: Thank you.
MATTHEW PANZARINO: Thanks.
[MUSIC PLAYING]