DESCRIPTION
Map apps on mobile phones are miraculous tools accessible via voice output, but mainstream apps don’t announce the detailed location information (which people who are blind or visually impaired really want), especially inside buildings and in public transportation settings. Efforts in the U.S. and U.K. are improving accessible navigation.
Speakers
- Mike May, Good Maps
- Nick Giudice, University of Maine
- Tim Murdoch, Waymap
SESSION TRANSCRIPT
[MUSIC PLAYING]
MIKE MAY: Well, as you know, part of the accessible wayfinding toolbox involves visual assistance like Be My Eyes, which is, as you know, part of the Good Maps offering on the Explorer app. And I think we’ll get into very quickly here talking about what are some of the wayfinding challenges going forward. And that may take a little bit of reflection on, where are we now? How have we gotten where we are? What are the different components of accessible wayfinding?
There are really three major pieces. One is positioning. Another is the map data. And finally, the user interface. And we have two very good experts on all three of these topics– Nick Giudice and Tim Murdoch. So I’m going to pass it over to Nick to begin with and talk about positioning, mapping, and the UI. And I know, Nick, the UI is of particular interest to you. How do we make wayfinding something that blind people can use affordably, accessibly, and productively?
NICK GIUDICE: Thanks, Mike. Yeah, and hello, everyone. Great question. We probably could all go on about this for a long time. I’ll try to be brief. So as Mike said, these three components are all critical aspects. And each of them has relevance to accessible wayfinding for blind folks, low vision folks, and anyone else.
I’ll start with the User Interface, or the UI, which is oftentimes done at the end. And that’s part of the problem. I think that one of the things moving forward– Mike particularly has been a champion of this for years, as have some others that have really thought about accessibility of navigation.
But for this to really work, we need to think about the user interface right alongside some of these other more engineering, technical aspects of localization and mapping, and think about how they all relate to each other. One of the things that I think is really critical moving forward is thinking about the user interface in a way that models more closely how we actually use different types of information in our brains.
So using what you can think of as bio-inspired interfaces. Our brain has all these different inputs that we use, right? So I’m congenitally blind, so I don’t use my visual input as well as some, but I use some vision. Obviously, hearing, touch, smell, taste. And we’re really good at processing and integrating all this information into one synthesis.
But most of our interfaces are visual, and most of the interfaces that are not visual use language, which is really efficient because we speak, and it’s simple. But it’s also not inherently spatial. So if I say, go over there– over there– unless you point or you do something with your head, the words themselves mean nothing. I could just say, eat chocolate cake, and it would have as much useful spatial content.
And so figuring out how to use language in a way that’s understandable, takes less cognitive effort, and is very quick and precise is difficult. So future interfaces should use more perceptual information that directly conveys spatial content, because navigation is about space and interacting with space.
And these other senses that I mentioned, the ones the brain uses– they’re very good at spatial content: vision, audition, and touch. And so what I’m really excited about, some of the areas that I’m working on and some others are certainly doing as well, is using things like touch. Can we make real-time maps? We may talk a little bit more about that, but we’re using vibration-based maps that allow you to feel, on the face of your iPhone, the route that you’re on or the relations around you as you’re getting the narrative instructions.
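To make the vibration-map idea concrete, here is a minimal sketch in Python. It assumes a route already projected into screen coordinates and a touch callback; the names, the tolerance, and the print statement standing in for a real haptic engine are all illustrative, not any shipping app’s implementation.

    import math

    # A route rendered on the touchscreen as a polyline of (x, y) pixel points.
    ROUTE = [(40, 500), (40, 300), (200, 300), (200, 80)]
    TOUCH_TOLERANCE_PX = 20  # how close the finger must be to "feel" the route

    def dist_point_to_segment(p, a, b):
        """Distance from point p to the line segment a-b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def finger_on_route(touch, route=ROUTE, tol=TOUCH_TOLERANCE_PX):
        """True when the touch point lies within `tol` pixels of the route."""
        return any(dist_point_to_segment(touch, a, b) <= tol
                   for a, b in zip(route, route[1:]))

    def on_touch_moved(touch):
        # On a real device this would fire the haptic engine; here we just report.
        if finger_on_route(touch):
            print(f"vibrate: finger at {touch} is on the route")

    on_touch_moved((45, 400))   # on the first leg -> vibrates
    on_touch_moved((150, 450))  # off the route -> silent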
Or using spatialized audio– hearing things out in space to your left or to your right instead of being told, Mike is at 10 feet at 11 o’clock. You have to be able to process, well, what the heck is 10 feet if I only think in yards? And half of people now don’t even know what 11 o’clock means, if they’ve only ever dealt with digital interfaces.
So instead, you hear the thing localized in 3D space– the Microsoft Soundscape project is a really great commercial example of a product that’s using this. So I think as we move forward, interfaces that use these types of spatial presentations, that integrate the different ways our brain is already really good at combining multi-sensory or multi-modal information– moving beyond just a visual interface for sighted folks or just a linguistic interface for blind folks– that’s where we’re headed. And I’ll pass it on.
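The geometry behind such spatialized cues is simple, even though production systems like Soundscape use full head-related transfer function rendering. A rough sketch, with invented names and numbers, of turning a target position and the user’s facing direction into a left/right audio pan:

    import math

    def relative_bearing(user_pos, user_heading_deg, target_pos):
        """Bearing of the target relative to where the user is facing,
        in degrees in (-180, 180]; negative is left, positive is right."""
        dx = target_pos[0] - user_pos[0]  # east offset in meters
        dy = target_pos[1] - user_pos[1]  # north offset in meters
        absolute = math.degrees(math.atan2(dx, dy))  # 0 = north, 90 = east
        return (absolute - user_heading_deg + 180) % 360 - 180

    def stereo_pan(rel_deg):
        """Crude amplitude pan: -1 = hard left, +1 = hard right."""
        return math.sin(math.radians(max(-90.0, min(90.0, rel_deg))))

    # The target is 3 m away, ahead and slightly left, while I face east (90).
    rel = relative_bearing((0, 0), 90, (3, 1))
    print(f"play the cue at {rel:.0f} degrees, pan {stereo_pan(rel):+.2f}")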
MIKE MAY: Yeah. Nick, thank you. And certainly, I know you studied under Dr. Jack Loomis, and the kind of 3D audio, the spatial presentation of information that is so popular and so effective with Soundscape, goes all the way back to the mid-’90s and what Jack Loomis did at UC Santa Barbara.
NICK GIUDICE: 1985 I think is when he first proposed this, yeah.
MIKE MAY: Yeah. I mean, it’s– but the advent of headphones that don’t cover your ears has really made it more practical than it was back in those days. And we’ve tried to incorporate a lot of user interface components and best practices into the Good Maps Explorer app as well. Tim, let me shoot that over to you for the user interface. How are you focusing on that with Waymap, and what’s your thinking on this topic?
TIM MURDOCH: Thanks, Mike. And so I completely agree with Nick. I think the user interface for us is a central part of the whole proposition. We’ve been working with the Royal Society for Blind Children here in the UK for a number of years now. And I think the real heart for us is the confidence to be able to go out and to be able to get about. It’s the confidence that gives you that freedom.
And so we’ve been working on a number of different fronts. And I think we’ve driven the user experience through a collaborative process to eventually define a standard. So one of the things that we’ve been really keen on is to drive forward the new CTA standard that came out very recently, which is all about giving audio instructions to vision-impaired people based on where they are. And as Nick was saying, to be able to say, go over there, or, turn to 3 o’clock– these are really complex things you need to get right, which actually plays to the other two things that Mike mentioned as particularly important.
So we’ve got to get the user experience right, but we’ve also got to get good maps. So we’ve got to be able to understand the environment that you’re in. But if we’re going to give a really good user experience, we’ve got to be able to automate those instructions. And those instructions need to be really sensitive not only to where I am but also to where I’m facing.
So if I’m facing in a particular direction and I receive an instruction, it needs to be contextual. It needs to understand, well, I’m actually in this place. I’m trying to get to that particular destination. But I’m currently facing this direction.
So if we can give those instructions, which are contextually aware, then we can give a lovely, really integrated and confidence-building experience. And that, for us, is the heart of where we’re going. So we’re working really hard on developing a location technology that works on your phone and gives you that step-by-step accuracy, so that we can give you the confidence to get out in not just one environment but in multiple environments.
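At its simplest, a heading-aware instruction is just the difference between where you face and where the route goes next. The sketch below illustrates the idea only; the angle thresholds and phrasing are invented for the example and are not the CTA standard’s actual rules or any product’s implementation.

    def turn_instruction(heading_deg, bearing_to_target_deg):
        """Turn an absolute bearing into a first-person instruction,
        relative to the direction the user is currently facing."""
        rel = (bearing_to_target_deg - heading_deg + 180) % 360 - 180
        if abs(rel) <= 20:
            return "continue straight ahead"
        if abs(rel) >= 160:
            return "turn around"
        side = "right" if rel > 0 else "left"
        return f"bear slightly {side}" if abs(rel) <= 60 else f"turn {side}"

    # The same destination yields opposite instructions for two facings:
    print(turn_instruction(heading_deg=0, bearing_to_target_deg=95))    # turn right
    print(turn_instruction(heading_deg=180, bearing_to_target_deg=95))  # turn left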
And that for us is– it becomes a bit like an audio-based augmented reality. So if you look at things like the Bose sunglasses, which give you this 3D soundscape that is sensitive to where you’re facing and how you turn your head– we love that, because actually, that’s a really powerful way of delivering the sort of experience that we think is really, really important.
MIKE MAY: Yeah, you mentioned that. And of course, sometimes it’s one step forward, two steps back. And I know with those sunglasses, the newer models now won’t have the head-tracking component– it’s only the older ones that do. But we still have the ability not to have our ears blocked, and that’s been fundamental to using any kind of mobile accessible technology.
You mentioned CTA. That’s the Consumer Technology Association standard. And it’s interesting because, of course, standards mean, here are some guidelines; this is what we suggest you go by. But that’s, of course, different from regulations. It’s up to people to comply and to go along with something.
And I think part of the importance of having uniformity among apps is the fact that, for example, at Good Maps, any data that we create for indoor mapping will be shared free of charge with other apps. So a blind person has a choice of their own user interface, but they get the benefit of the content, which the venues will pay for. And since the data is free and can be shared among different accessibility apps, blind people have options, which is really a huge benefit.
Let’s touch a bit more on another key component of accessible navigation, and that is positioning. And of course, outdoors, we’re all used to GPS. We know how that works and how it doesn’t work, and everybody pretty much agrees on the final frustrating 50-foot problem. And a lot of different organizations are trying to address that.
So let me have both of you think about that a little bit and mention it, and then segue into indoor positioning because that’s the new pioneering frontier for accessible navigation– indoor mapping and positioning. Nick, have any comments on this part of the equation?
NICK GIUDICE: Yeah. You know, probably one of the most influential talks I had on this or something– Mike, you won’t even remember. It was from sometime in, like, 2002. We were talking, and we were complaining because you had a product that you had at that point. And we were trying stuff, but it wasn’t exactly right.
And at some point, you’re like, you know, if you waited until GPS was at centimeter accuracy for everyone, we would be waiting for years and years and have nothing. And it’s so important to just get mostly there and then let people use their own skills. And yeah, it doesn’t mean that it’s perfect, but it means that we’re close.
I think that’s really a critical thing to remember, because even though indoor positioning isn’t perfect yet– whatever we want to call it, the last meter, the last 2 meters, the last 3 meters, whatever you need to get to– at some point, the issue isn’t, can you get to 1 centimeter or 2 centimeters? It’s, can you get someone to where they want to get to? Can they perform what they want to be able to do?
And the fact that people are now working on it– it’s getting so it’s good. Not great, but good. And it’s working. It’s getting you there. That’s good enough to really begin to feel that this is going to work. So yeah, my feeling here is one of the problems is that, unlike outdoors, where we have this one system, GPS, that works pretty much everywhere– and yes, there are line-of-sight issues, urban canyons, foliage, or whatever.
But inside, there isn’t one thing, generally. So if you’re going to use radio frequency, or you’re going to use beacons, Wi-Fi, whatever, you need to put them in all these places. And so there’s a lot more infrastructure build-out. And there are other approaches that we’ve tried. Like, we’ve tried magnetic signatures, where you try to use the magnetic information in the building to localize you. And that’s kind of cool, and it kind of works. But if you get away from the wall, it doesn’t work. Or if an elevator goes by, it makes some sort of noise that screws everything up.
So I think the answer here is going to be some sort of sensor fusion, where it’s going to be multiple technologies, multiple people really working together. And I think that some of the technologies that were initially thought of as not going to be the solution– and I’m interested in what both of you think– are now actually going to be a big part of the solution. So using cameras. Using optical lidar– things that are able to take real-time imagery in some form and use that for positioning, instead of just using Wi-Fi or beacons, which I think were what everyone thought would ultimately work, or cell phone positioning.
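As a toy illustration of sensor fusion: real systems use Kalman or particle filters, but even a simple inverse-variance weighted average shows how a precise-but-intermittent source (a camera fix) can dominate coarse-but-steady ones (beacons, Wi-Fi). All the source names and numbers below are invented.

    # Each positioning subsystem reports an (x, y) fix in meters, plus a rough
    # standard deviation for how much it trusts itself.
    estimates = {
        "ble_beacons": ((12.0, 4.5), 3.0),  # coarse, but available everywhere
        "wifi":        ((11.2, 5.1), 2.0),
        "camera":      ((11.6, 4.8), 0.5),  # precise when a landmark is in view
    }

    def fuse(estimates):
        """Inverse-variance weighted average of the available position fixes."""
        wx = wy = wtotal = 0.0
        for (x, y), sigma in estimates.values():
            w = 1.0 / (sigma * sigma)
            wx, wy, wtotal = wx + w * x, wy + w * y, wtotal + w
        return wx / wtotal, wy / wtotal

    x, y = fuse(estimates)
    print(f"fused position: ({x:.2f}, {y:.2f}) m")  # dominated by the camera fix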
So yeah, I think we’re getting there. I think the future is going to see this nut cracked. There are a lot of really big teams that are interested and want to do this, because everyone needs indoor localization. It’s not just blind folks; there are lots of reasons. But I think we’re going to see it using lots of different sensors that work together and, hopefully, seamlessly share that information.
MIKE MAY: Yeah. A laudable goal. And as you both know, indoor navigation and positioning have been worked on for at least 25 years. And the big breakthrough was really when Bluetooth beacons and Wi-Fi fingerprinting started to be tested and rolled out commercially over the last 8 to 10 years. That was a big jump forward. And I think we’re now poised for additional jumps forward as we put lidar and camera-based positioning into practice, which is what we’re using in the Good Maps Explorer app. How about you, Tim? What are you thinking about in terms of positioning?
TIM MURDOCH: Yeah, so I suppose I’ve got a bit of a minority report here. We tried beacons for many, many years when we created the standard. And actually, beacons and Wi-Fi simply don’t give you the accuracy that you need indoors. They’re also just impractical to deploy at scale. And obviously, one of the things we’re trying to do is provide a service in any location.
And so we’ve actually developed a dead reckoning technique that does the sensor fusion Nick was mentioning earlier and follows people as they walk around any building. We don’t need to survey any Wi-Fi, we don’t need to survey any magnetometer interference, and we don’t need to install beacons.
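A stripped-down sketch of pedestrian dead reckoning, assuming step events and headings have already been extracted from the phone’s accelerometer and compass/gyro. The fixed stride length is an assumption for the example; this is an illustration of the general technique, not any particular product’s algorithm.

    import math

    STEP_LENGTH_M = 0.7  # assumed average stride; real systems estimate per user

    def dead_reckon(start, step_headings_deg):
        """Integrate detected steps into a walked path. Each element is the
        compass heading (0 = north, 90 = east) at the moment a step was
        detected from the accelerometer."""
        x, y = start
        path = [(x, y)]
        for heading in step_headings_deg:
            rad = math.radians(heading)
            x += STEP_LENGTH_M * math.sin(rad)  # east component
            y += STEP_LENGTH_M * math.cos(rad)  # north component
            path.append((round(x, 2), round(y, 2)))
        return path

    # Ten steps east along a corridor, then five steps north through a doorway.
    # Stride and heading errors accumulate with every step, which is why real
    # systems continually correct the track (for example, against the map).
    print(dead_reckon((0.0, 0.0), [90] * 10 + [0] * 5))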
So what we’re able to do is make the outdoor/indoor experience completely seamless. And the beauty of that is that GPS is going to get better over the next few years– there are some amazing things going on there– so the outdoor environment will improve. And Apple and Google are naturally extending their indoor location competence, too; they’re now providing Wi-Fi fingerprinting out of the box for specific locations. Those are all really good things.
And then what we’re able to do is take advantage of that with our current algorithm, which gives us a really accurate location as people walk around. So we see a dead reckoning technique getting us to the last meter. Cameras and lidar we think are really important too, but they get you the much more dynamic environment. If there’s an obstruction in the way, if someone’s left a scooter on the sidewalk, we can see that with [INAUDIBLE]. So that’s where lidar and image recognition come in.
So what you’ve then got is that the last inch comes from cameras, which is a relatively computationally expensive thing to do. But what we’re able to do right now is give a location that works on an old-school iPhone 7 or so. So it’s a fairly old-school technique that becomes available to everyone with the equipment they’ve got today. All they need is a pair of headphones, ideally bone-conducting, so it doesn’t occlude the sound.
And what that then allows us to do is get people out and about with the equipment they’ve got today. And as time goes on and new techniques emerge, we’re able to fuse that information into the current platform to give them a much deeper, better experience over time. So it plays to where Nick was going, which I completely agree with. What we have today is the ability, through dead reckoning, to give a great experience.
And where we’re absolutely going over the next four years is just going to get better and better and better– not just for those with vision impairments, but for all sorts of users, whether they’re sighted but require some other assistance, or whether they’re just a tourist in a foreign land. We want to be able to give a broad range of experiences using that same fundamental approach.
MIKE MAY: Yeah, Tim. Let me ask you this. Dead reckoning has been the Holy Grail forever. And the problem really was around the sensors, and the drift, and the accumulation of error over distance. And so I’m guessing that it’s better sensors and better algorithms that allow you to correct for that error as you proceed along a route using dead reckoning. Is that true?
TIM MURDOCH: Yeah. So I think we saw it with the generation of iPhone around the iPhone 7, and a similar thing on Android phones. At that point, you’ve got a combination of things that worked to our benefit. One is the sensors got better, and they’re still getting better– the gyros and so on that we use.
Also, the computational capability of the phone got more capable. And so really, on anything from the iPhone 7 on, we’ve been able to run an algorithm that we first developed at my previous company, an engineering company, where we were following blue-light services into buildings that were unknown. We were doing some amazing technology on bespoke devices and desktop-based analysis. And to be able to move that onto the phone has been an amazing breakthrough for us, which is what we’ve built our company on.
MIKE MAY: Great. I mean, that’s so important, because when you don’t have to modify the infrastructure, it makes it more affordable. And that brings us to one of the key problems, which is mapping. And this is why our company was formed and why we’re called Good Maps– because that’s really where it’s all at. If we can get a handle on the positioning, then what about the maps?
And a very small percentage of buildings are mapped– airports, baseball stadiums, some large facilities. There is positioning now in LED lights at some big companies like Target and Home Depot, which they’re really using for their own staff and stocking, but consumers aren’t using it yet. But we’re turning the corner to where that kind of thing can be used.
But how do you map these buildings, particularly if you’re not a huge company that’s going to invest in this? And how do you scale this? I’ll share my opinion after I touch base with both of you on how you think we’re going to get these maps to sync up with this cool positioning that’s starting to happen. Nick?
NICK GIUDICE: Yeah, this is the area, the actual process, that’s really critical, and I work on this part of it the least. You know, with GPS outdoors, a lot of people don’t even think about it, but underlying why your GPS works is that it’s hitting an underlying map, or a GIS. And as Mike said, those just aren’t consistently done, or even done at all, in most indoor settings.
But just one thing to add to this. Mike and Tim are working on these issues. What a lot of people don’t think about, though, is that the big companies working on this, like Google and Apple, may get this mapping technology figured out. But a lot of what they won’t have is information that’s useful to blind folks for orientation.
So something like a change in carpet. Something underfoot– textured carpet, brick, wood. Those can be really important cues to give in a description. Or something about the wall, or different sounds, smells, feels. These are things that may not be in a normal, traditional map used on a device– something that Apple or Google may roll out– but may be really critical to what blind users would want. And I’m really excited that the mapping you’re doing, Mike, through Good Maps, is going to be shared, because people will be able to build on this information, I’m imagining, and put it in there, which will be really critical for blind users.
MIKE MAY: Yeah. Well, the interesting thing about maps is, when you think about it, they’re the hardest thing to make accessible, and that’s because a map is a graphic. It’s a picture. And so short of doing video description or image description for every one of these– and more and more of that is happening, by Apple, by Facebook, by others– how do we do it in the context of wayfinding for streets? How do you tell somebody it’s a four-way intersection, or it’s an eight-way intersection, these kinds of things?
And indoors, you don’t have street names. You have hallways, and they’re not named. They’re just hallways. So there are a lot of challenges indoors, and that’s really where we need to collaborate, both from a scaling perspective and in the presentation, so we figure out, in an indoor setting, how do we convey this picture, this map picture, to the blind user? Tim, what do you think about this?
TIM MURDOCH: Yeah, so I’m a massive supporter of the whole area of mapping. And I think it is the key that unlocks everything. So one of the things, if we look at Apple: they have an Indoor Mapping Data Format, IMDF, that they use. And it’s really interesting if you look at that. It was designed to help a shopping mall lay out the shops inside a large mall– I mean, that’s basically how it’s designed. It’s become more than that, but you can still see that focus when you look at the [INAUDIBLE].
And actually, we believe in extending that and creating a new standard. So we’re big believers in standards, because standards mean that people collaborate to come up with a solution that people adopt. And if we get the right standards in there, then what that means is that we can actually distribute the map-creation exercise to everyone. So if everyone creates maps to a common format and a common approach, building very much on what’s already out there– [INAUDIBLE] and others– then we believe that having open, accessible maps that are free everywhere, where lots of different applications can work together providing different elements of expertise– we think that’s an amazing way to go forward.
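As a rough illustration of what such a shared format could look like: IMDF is GeoJSON-based, and a mapped feature could carry accessibility attributes of the kind Nick mentioned earlier. The "accessibility" block below is hypothetical, not part of the published IMDF specification, and all values are invented.

    import json

    # One mapped indoor feature in a GeoJSON/IMDF-like shape.
    corridor = {
        "type": "Feature",
        "geometry": {
            "type": "Polygon",
            "coordinates": [[[-0.1276, 51.5072], [-0.1274, 51.5072],
                             [-0.1274, 51.5074], [-0.1276, 51.5074],
                             [-0.1276, 51.5072]]],
        },
        "properties": {
            "category": "corridor",
            "level_id": "ground",
            "name": {"en": "Main concourse, east corridor"},
            "accessibility": {  # hypothetical fields for blind-relevant cues
                "floor_surface": "textured carpet, changing to tile at the doors",
                "landmark_sound": "fountain on the left near the midpoint",
            },
        },
    }
    print(json.dumps(corridor, indent=2))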
So having worked very hard on the audio standard– and we’re just about to publish one on cognitive impairment, on how we give first-person instructions to those people who need a different mode of instruction– we’re now getting behind another standard for the mapping.
And for us, that’s really important. If we can roll out a standard approach to mapping, then it means it’s not just down to the likes of Mike and myself to generate those maps. Everyone can make those maps. And everyone can share those maps, so that your local school or your local office or restaurant can participate in a way that would otherwise be impossible– one company can’t do every building on the planet. This is very much something that we want venue owners to participate in and contribute to, so that we can all benefit from it.
MIKE MAY: Yeah. And of course, what you’re referring to is the power of crowdsourcing, and this has been done in so many different ways in terms of information and mapping points of interest. It’s something I’ve really embraced from the early days of working on accessible navigation– creating what we called user points of interest. And that really augments the commercial database, which has millions of points of interest.
But there are ones that might be specific to a blind person that a sighted person may not necessarily appreciate or need. And this folds into another principle, which is that the more we can piggyback on the mainstream, the more affordable and accessible these things will be. Look, for example, at doing routing with Apple and Google and your normal maps– they are accessible. But a sighted person wants to hear, where do I turn? They don’t want to hear about all the details in between.
Whereas, as a blind person, I’m more interested in the details: I want to know the names of the intersections, and where to turn, and what the names of the streets are. The same thing happens in other venues, like airports. If you get to a gate, the sighted person can see a map that shows them the route to their connecting gate. A blind person can’t benefit from that, because it is a graphic.
So how do we piggyback on that commercial data so we make it affordable, but provide the user information to the blind person that makes it accessible? That’s really combining the best of both worlds. And that’s one of the ways we try to scale maps– by using commercial maps that are out there in addition to ones that we’re going to create. Nick, I think you were going to say something about that.
NICK GIUDICE: Yeah, just one more thing that builds on both of these points. There’s still a lot of benefit to be gotten out of the mapping component and the UI component, even if the positioning isn’t totally correct yet. We’re seeing this already, but I think we’ll increasingly see more systems using the ability to do pre-journey exploration and virtual navigation.
Now, the research is very clear: people benefit a lot from being able to explore something ahead of time and building up their cognitive map, their mental representation of the space, which really helps build confidence and an idea of where you’re going before you actually get there. Even if the system isn’t perfect, you’ve already been able to build a lot of knowledge of what’s there and where you’re going, from your house, as you’ve virtually explored. I think that’s really important, and it’s increasingly being done. And as we add to that and build on user-defined attributes, I think that’s going to be something that’s used more and more.
MIKE MAY: Yeah, Nick. Thanks very much. We’re approaching the end of our time, so I’ll shoot it back to Tim for closing comments. And if somebody wants to reach you, you might mention a contact for that.
TIM MURDOCH: So sorry, Mike, you broke up for me just at one vital point there, so I hope I respond appropriately. Just to say, on mapping: I think it’s really interesting that the mapping actually starts to open up another layer of detail, which we’ve done some amazing work on. If you were to do a route with satellite navigation in your car, you’d use the roads to create a graph and then compute a least-cost route. That’s how that works.
Well, if you’re a vision-impaired person walking around a large building, the routing is different– you don’t have those roads. You have corridors, but you also have large, open spaces. So it’s much more akin to sailing than it is to driving.
So actually, for generating the route calculations for that experience– an open environment with an open map– there are really interesting new algorithms that we’ve been working on, taken from the world of sailing, to take people through a building. No longer do we have to force people to walk along a wall, because we can take them directly across the room, because we can guide them. And I think there are some really amazing new things we can do because of the emerging technologies coming through at the moment.
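One way to read the sailing analogy: instead of following a graph of corridors, the router works with line of sight across open space. A minimal sketch of that idea (illustrative only, not any product’s algorithm) is a string-pulling pass that collapses a wall-hugging path into straight legs wherever nothing blocks the way. Obstacles here are simplified to circles, and all coordinates are invented.

    import math

    # Obstacles as circles (pillars, kiosks): (center_x, center_y, radius) in meters.
    OBSTACLES = [(5.0, 5.0, 1.0), (9.0, 2.0, 1.5)]

    def seg_point_dist(a, b, c):
        """Distance from point c to the line segment a-b."""
        (ax, ay), (bx, by), (cx, cy) = a, b, c
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(cx - ax, cy - ay)
        t = max(0.0, min(1.0, ((cx - ax) * dx + (cy - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(cx - (ax + t * dx), cy - (ay + t * dy))

    def line_of_sight(a, b, obstacles=OBSTACLES, clearance=0.5):
        """True if the straight segment a-b clears every obstacle by `clearance`."""
        return all(seg_point_dist(a, b, (ox, oy)) > r + clearance
                   for ox, oy, r in obstacles)

    def smooth(path):
        """String-pulling: from each point, jump to the farthest visible waypoint,
        so the route cuts straight across open space instead of hugging walls."""
        out, i = [path[0]], 0
        while i < len(path) - 1:
            j = len(path) - 1
            while j > i + 1 and not line_of_sight(path[i], path[j]):
                j -= 1
            out.append(path[j])
            i = j
        return out

    # A wall-hugging path around the edge of an open hall...
    wall_path = [(0, 0), (0, 8), (4, 8), (8, 8), (12, 8), (12, 4)]
    print(smooth(wall_path))  # ...collapses to a few straight legs across the room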
MIKE MAY: Great, Tim. Thanks so much. I want to thank my colleagues– Tim over there in the UK, and Nick across the country at the University of Maine. This is Mike May from Good Maps. And please be in touch with any of us. Goodmaps.com is where you can reach our company and reach me. We’d like to continue the discussion further, and you can help us in getting the world mapped. Thank you very much for having us, and back over to Will.
[MUSIC PLAYING]