Indoor Navigation: Can Inertial Navigation, Computer Vision and other new technologies Work Where GPS Can't?
DESCRIPTION: Thanks to mobile phones, GPS, and navigation apps, people who are blind or visually impaired can get around outdoors independently. Navigating indoors is another matter. For starters, GPS is often not available indoors. Then there are the challenges of knowing where the door is, finding the stairs, or avoiding the couch someone moved. Combining on-phone and in-cloud technologies like inertial navigation, audio AR, LiDAR and computer vision may be the foundation for a solution, if product developers can map indoor spaces, provide indoor positioning and deliver an accessible user interface.
NICHOLAS GIUDICE: Thanks, Will. And welcome to everybody tuning in today. My name is Nicholas Giudice, and I’m excited to be moderating this panel on blindness and indoor navigation-related technology and the associated challenges. This is an area of a lot of hot research. There are big challenges and big issues still being figured out, and there’s a range of promising technological solutions on the market, on the horizon, and in development.
And so on our panel today, we have three leaders in the field who are championing different innovative technological approaches to the indoor navigation challenge for blind and visually impaired folks. It is, I’ll note, an all-guys panel. Let me start by introducing everybody.
Starting with Roberto Manduchi. Roberto is a professor of Computer Science and Engineering at UC Santa Cruz. I’ve known Roberto for years. He does a lot of excellent research on accessibility. Related to navigation, he’s done work with computer vision, inertial technology, and some cool indoor mapping techniques. I look forward to hearing what you’re doing.
We have Mike May joining us. Mike is an evangelist at GoodMaps. He’s working on development of a seamless outdoor-to-indoor navigation system. Mike is one of the early pioneers in his field. He led the development and rollout of the first commercial navigation system for blind travelers. He’s been really a leader for decades. So welcome, Mike.
Paul Ruvolo. Paul is an associate professor of Computer Science at Olin College. He was also a key collaborator in the creation and development of the Clew app. So welcome, Paul, and welcome to all three of you.
So my goal today is to get discussion going around some of the big issues in indoor navigation, the research that’s going on, and how it connects to blind folks. Some of these issues certainly pertain to sighted individuals as well, but we’re talking mostly about the indoor navigation challenge for blind folks.
I don’t want to talk about technology in isolation; perhaps some of the approaches that people are working on can be combined. We’ll discuss where we’re at and where we’re going. And I want to start by asking each of the panelists to give a very brief, two to three-minute description of your approach, what you’re addressing, and some of the pros and cons of this approach.
Just one note. As moderator, I am going to try to keep us on time and on task. That can be difficult when we have these types of discussions. So I may occasionally jump in to steer the discussion and otherwise, just let it roll. So I am going to go– what do I have here? I’m going to go alphabetically by first name, and I think that puts Mike up. Mike, want to tell us a little bit about what you’re doing?
MIKE MAY: Yeah, thanks, Nick. Thanks for the introduction. And I have to give credit: you say I’m an early pioneer, but earlier than me were people like Dr. Jack Loomis. We’ve been talking about navigation for many years. The better a blind person gets around, the better we engage in life; the better anybody gets around, for that matter.
And so Jack Loomis was one of the earlier pioneers at UC Santa Barbara. And I know he was a mentor to you. And he worked a lot with Roberto, so I’m excited that these many years later, we’re working on indoor navigation. I did so at Sendero Group, where I had my own company for about 18 years.
And then I joined GoodMaps in 2019 to focus on indoor navigation, thinking about what’s different between outdoor and indoor navigation. As an evangelist, I had two roles. One was to evangelize to the world about the value of accessible navigation, to help scale indoor navigation, because it’s obviously a challenging thing to do when you compare it with outdoor GPS, which is ubiquitous and free. Indoor navigation has a lot of other challenges in terms of scaling as well as which technologies you use.
And at GoodMaps, we started out with beacons. This was the technique first commercialized by Apple with iBeacon seven or eight years ago, and it has reaped mediocre results in terms of accuracy, viability, and price. GoodMaps started with that for the first year. Then we were able to switch over to something we think is a more practical approach, because the less infrastructure you have to modify, the better off you are in terms of mapping and positioning indoors, and there’s no better way to do that than with imagery.
So both LiDAR and camera imaging are being used by GoodMaps: first to scan a building and map it, and then for the user to come back with their phone camera and walk through that building. The images streamed from the phone are compared against the 3D point cloud that was created by LiDAR to determine one’s position within it. And voila, you can hear where you are relative to the reception desk, the restrooms, the entrance, the stairs, the elevators, and everything.
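[Editor’s note: Comparing streamed phone images against a scan built ahead of time is, at its core, a visual localization problem. The sketch below is only a loose illustration of that idea, not GoodMaps’ actual pipeline; the descriptors, poses, and landmark labels are invented. The phone’s current camera frame is reduced to a feature descriptor and matched against descriptors of keyframes stored when the building was mapped, returning the pose of the best match.]

```python
import math

def localize(query_descriptor, keyframes):
    """Toy visual-localization step: match a query image descriptor
    against descriptors of keyframes captured during the building scan,
    and return the stored pose of the closest match.

    keyframes: list of (descriptor, pose) pairs, where pose is an
    (x, y, floor) tuple recorded when the building was mapped.
    """
    best_pose, best_dist = None, float("inf")
    for descriptor, pose in keyframes:
        dist = math.dist(query_descriptor, descriptor)
        if dist < best_dist:
            best_pose, best_dist = pose, dist
    return best_pose

# Hypothetical 4-dimensional descriptors for three mapped keyframes
keyframes = [
    ([0.9, 0.1, 0.0, 0.2], (12.0, 3.5, 1)),   # near reception, floor 1
    ([0.1, 0.8, 0.3, 0.0], (25.0, 3.5, 1)),   # near restrooms, floor 1
    ([0.0, 0.2, 0.9, 0.1], (2.0, 10.0, 2)),   # stairs, floor 2
]
query = [0.85, 0.15, 0.05, 0.25]  # frame streamed from the phone
print(localize(query, keyframes))  # (12.0, 3.5, 1)
```

In a real system the descriptors come from learned or hand-crafted image features, the match is refined into a full 6-degree-of-freedom pose against the point cloud, and inertial data smooths the estimate between frames.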
So in that way, it’s an indoor version of what we’re used to outdoors, hearing about points of interest, like Starbucks and other kinds of things, and street intersections. Indoors, though, has a lot of other challenges, and we’ll get into that as the other panelists discuss their techniques. I think we’re all addressing the same problem with some interesting technologies.
NICHOLAS GIUDICE: Thank you, Mike, for the overview and for staying on time. Paul, you’re next.
PAUL RUVOLO: All right. Thanks, Nick. It’s definitely a pleasure to be here at this panel. So the approach that we’ve been taking over here at Olin College has some commonalities with the approach at GoodMaps. So we’ve also been looking into approaches that use the camera on the smartphone in combination with inertial sensors to estimate the motion of the device as it moves about in an environment.
In our app, Clew, we’re really focusing on the specific case where a user is recording a route in an indoor environment or a short route outdoors with their own phone for use at a later time. So they’re able to hit the Record button, navigate through a space, hit the Stop button, and then either navigate back to their starting location or save that route in their phone for navigating at a later point in time.
So this approach has some pros and cons. Of course, one of the pros is that it doesn’t require any infrastructure at all, and it doesn’t require anybody to map the space ahead of time, so that’s good. On the negative side, for really long routes, it can sometimes be inaccurate. So it’s best used on relatively short routes, on the order of about 100 meters or so. But that still puts it in range of a number of useful situations for indoor navigation.
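[Editor’s note: The record-then-retrace workflow Paul describes can be sketched in a few lines. This is an illustrative simplification, not Clew’s implementation: poses here are bare (x, y) positions, whereas the real app logs full device poses from ARKit’s visual-inertial tracking and also handles saving, alignment, and guidance.]

```python
class RouteRecorder:
    """Minimal sketch of record-and-retrace navigation: log device
    poses while recording, then replay them in reverse to guide the
    user back to the starting location."""

    def __init__(self):
        self.recording = False
        self.route = []

    def start(self):
        self.recording = True
        self.route = []

    def log_pose(self, x, y):
        # Called periodically by the tracking system while recording.
        if self.recording:
            self.route.append((x, y))

    def stop(self):
        self.recording = False
        return list(self.route)

    def return_route(self):
        # Navigating back to the start means following the recorded
        # waypoints in reverse order.
        return list(reversed(self.route))

rec = RouteRecorder()
rec.start()
for pose in [(0, 0), (5, 0), (5, 8)]:
    rec.log_pose(*pose)
rec.stop()
print(rec.return_route())  # [(5, 8), (5, 0), (0, 0)]
```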
In addition to the Clew app, I’ve also been working on a number of approaches based on tags. So the idea of putting up special markers in an indoor environment as a way to extend the range of apps like Clew. And in addition to my academic work, I’ve also been collaborating with companies, including the folks at Supersense on developing similar types of technology as well.
NICHOLAS GIUDICE: Great. Thank you, Paul. And Roberto?
ROBERTO MANDUCHI: Yeah, thank you, Nick. By the way, I should say I’m super honored to be in this panel. I’m also very happy that Nick is asking questions to us. We’ve been collaborating for a number of years, so it’s always a pleasure to be together.
So I have been working on wayfinding indoors on and off for several years. And with my colleague James Kaplan at the Smith-Kettlewell Eye Research Laboratory in San Francisco many years ago, we experimented with using cell phones. They were not called smartphones back then. We were using Nokia phones.
And we were using the camera on the phones, which was a little bit of an innovative thing back then. We were placing color markers in different places in the environment and seeing how a blind person could navigate by detecting these markers. That is, of course, an example of infrastructure modification.
More recently, I’ve been concentrating on inertial sensors on the phone. And part of that is because I believe that for the system to be usable in a smartphone, you don’t really want to have your smartphone out with the camera facing forward as you’re walking in a potentially crowded situation. It would be much better if you could keep your phone in your pocket.
So what can you do with the phone in your pocket, without infrastructure, without [INAUDIBLE]? Well, the phone is loaded with inertial sensors of different types. So we’ve been studying a little bit what you can do to get your position using these sensors. And this is something that has been studied for many years.
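[Editor’s note: The phone-in-pocket positioning Roberto describes is usually called pedestrian dead reckoning. The toy sketch below shows the core idea only: count steps as peaks in accelerometer magnitude and advance the position one stride per step along the current heading. The threshold and stride values are illustrative, not tuned; real systems fuse the gyroscope and magnetometer for heading and adapt stride length per user.]

```python
import math

def dead_reckon(start, heading_deg, accel_magnitudes,
                step_threshold=11.0, stride_m=0.7):
    """Toy pedestrian dead reckoning: each rising edge of the
    accelerometer-magnitude signal above `step_threshold` counts as
    one step, moving the estimate `stride_m` meters along the heading.
    """
    x, y = start
    heading = math.radians(heading_deg)
    prev_above = False
    for a in accel_magnitudes:
        above = a > step_threshold
        if above and not prev_above:        # rising edge = one step
            x += stride_m * math.cos(heading)
            y += stride_m * math.sin(heading)
        prev_above = above
    return x, y

# Simulated magnitude samples (m/s^2): three peaks = three steps
samples = [9.8, 12.0, 9.8, 9.6, 12.5, 9.8, 9.7, 11.8, 9.8]
print(dead_reckon((0.0, 0.0), 0.0, samples))  # about 2.1 m along x
```

The weakness Roberto alludes to is that these estimates drift: every small heading or stride error accumulates, which is why inertial positioning is often combined with maps or occasional absolute fixes.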
And I should also point out that indoor wayfinding is not just for blind people. There are whole conferences devoted to indoor wayfinding. But I do believe that if the user cannot see, there’s a whole different set of requirements, a different set of interface mechanisms, and also different accuracy requirements that are very important.
NICHOLAS GIUDICE: Thank you. So I want to move on. I have some questions that I’ll kind of lob out, and I think they will probably lead to some other discussion. We’ll jump around through these, and anyone who wants to or has some input can answer.
I think an important first question in my head is, we’ve all used some form of outdoor navigation system, my question is, what are the major differences between indoor and outdoor navigation for blind people? I’m not talking about relying on GPS, but more in terms of how the systems work.
MIKE MAY: I’ll take a stab at that, Nick. Some of the big differences are that the distances indoors are less than what you commonly have outdoors. In an outdoor route, you might be going 10 blocks or 5 miles. Indoors, you’re probably wandering around a building that might be at most 100 or 200 meters across, so much shorter distances, which means that things come at you a lot faster. The distances between turns are shorter, and it’s therefore a little harder for the technology to establish your direction of travel.
Also, indoors, you have hallways that aren’t named, unlike streets outdoors. Your points of interest have names, the same as outdoors, so that part is easy. But indoors, if you’re guiding somebody, it’s got to be a lot more about a little bit left, or a little bit right, or 11 o’clock, 1 o’clock. You can’t tell people, at least not yet, go down hallway 7. And these things present some user interface challenges that we don’t have to deal with outdoors.
It’s interesting, and it’s something that can be accomplished. But we’re early in the evolution of this stuff, and we don’t have a lot of feedback from users yet as to what works and what doesn’t. At these short distances, one real problem is that when VoiceOver is talking, it could say turn left, and by the time it finishes talking, you’ve missed the turn. So, interesting things to deal with.
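[Editor’s note: The timing problem Mike raises, where an instruction finishes speaking after the user has already reached the turn, can be made concrete with a bit of arithmetic. The figures below are rough illustrative assumptions (typical walking speed, a typical text-to-speech rate), not measured values from any product.]

```python
def announce_lead_distance(utterance, walking_speed_mps=1.2,
                           speech_rate_wps=2.5, buffer_m=1.0):
    """How far before a turn an instruction must start playing so it
    finishes before the user arrives: speech duration (words divided
    by speaking rate) times walking speed, plus a safety buffer.
    """
    speech_seconds = len(utterance.split()) / speech_rate_wps
    return speech_seconds * walking_speed_mps + buffer_m

# A longer instruction has to start farther from the turn.
print(announce_lead_distance("Turn left"))                       # 1.96 m
print(announce_lead_distance("At the next hallway, turn left"))  # 3.88 m
```

With turns only a few meters apart indoors, even a short phrase eats up most of the available lead distance, which is one argument for the non-speech cues discussed later in the panel.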
NICHOLAS GIUDICE: So, Mike, just following up on something you mentioned. The issue of naming is not a technological problem, I’m realizing. It’s a function of how buildings are designed. We don’t have avenues; we don’t have addressing like we do outside. And so it seems like that would be a hard thing to solve, because it’s not part of the natural way we build indoor structures.
MIKE MAY: Right. And I’ve heard of some buildings that are actually starting to name hallways because of this kind of challenge. But I can’t help but hearken way back to the Jack Loomis days, when Reg Golledge had a chapter about the archaeology of navigation. In it, he talked about how our brains are wired to navigate by landmarks and how difficult that is for blind people. We’re not getting landmarks.
So indoors, I think the real key is that you do have landmarks. You can smell a Starbucks if you’re walking by it, but what about all the other things in a place that don’t have a smell or a sound associated with them? If we can announce a landmark indoors, that gives us some flexibility to make our own independent choice as to which way we want to go, rather than just turn at 11 o’clock in 27 feet.
ROBERTO MANDUCHI: Can I give my two cents? Thanks, Mike, it all makes sense. Let me just add a couple of things on the difference between indoor and outdoor. Besides the obvious fact that you can’t rely on GPS indoors, I would say there is a practical difference, which certainly could be alleviated. Outdoors, you have an incredible amount of GIS information, and it keeps accumulating. Google Maps, Apple Maps, and all the other companies that provide data to them are increasing the amount of data day by day.
I mean, Google Maps right now, for many cities, is aware of sidewalks, crossings, and a lot of other things. And day by day, this data increases. Indoors is a little bit different. You have some venues that are very well mapped; there are companies, including GoodMaps, that can map venues for you.
But I would say, as of now, for my building here, I would not have a precise map except for the one I built through an online mapping tool that we developed. Other than that, you don’t have an OpenStreetMap for indoors, at least not one that’s up and functioning as of now. So there’s also a bit of a lack of information. Like I said, this could change in the future, but I see it as an issue right now.
NICHOLAS GIUDICE: Let me build on that a little bit. So we can deconstruct navigation systems, whether they’re for indoor or outdoor use, into three general components: the localization technology for positioning, the underlying map or database, and the user interface.
And with respect to indoor navigation, most of the work seems to me to be focused on the first component, solving how to accurately localize people, and this is obviously important. But what are your thoughts on the other two components, which are also hugely important? What are the possibilities here for mapping and for new user interfaces? And how are these being advanced?
PAUL RUVOLO: Nick, can I say a few words about that?
NICHOLAS GIUDICE: Absolutely.
PAUL RUVOLO: Yeah. So we’ve thought a lot about the user interface side, and one thing I think is interesting is that with some of the newer technologies, where you can get really fast feedback on how a user is moving around in space, there’s an opportunity for an interactive type of directions, where as soon as the user is facing the correct direction, you could provide some sort of cue.
So providing ways to get information that’s not as tied to natural language: the idea being that if you say “turn left”, you might have already passed the turn, but if you give some other sort of cue when someone’s pointed in the right direction, that’s a new kind of thing opened up by some of these technologies.
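[Editor’s note: The confirm-when-facing cue Paul describes reduces to a small geometry check: compare the user’s heading with the compass bearing to the next waypoint and fire a cue when they agree within a tolerance. This is an illustrative sketch; the function and parameter names are invented, not from Clew.]

```python
import math

def heading_cue(user_heading_deg, user_pos, waypoint, tolerance_deg=15.0):
    """Return True when the user is facing the next waypoint, i.e.
    when their heading is within `tolerance_deg` of the compass
    bearing from their position to the waypoint. Positions are (x, y)
    with y pointing north; headings are compass degrees."""
    dx = waypoint[0] - user_pos[0]
    dy = waypoint[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    # Wrap the angular error into [-180, 180) before comparing.
    error = (user_heading_deg - bearing + 180) % 360 - 180
    return abs(error) <= tolerance_deg

# Waypoint due east of the user: cue only fires near a 90-degree heading.
print(heading_cue(92.0, (0, 0), (10, 0)))   # True
print(heading_cue(180.0, (0, 0), (10, 0)))  # False
```

An app might poll this on every tracking update and play a tone or haptic tap while it returns True, letting the user sweep until the cue locks on instead of parsing "turn left".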
But I think there are a lot of usability challenges, too, which we’ve uncovered with the Clew app. Smartphones are really good at detecting their own position, and they can relate things in the environment to where they are and where they’re pointing, but that is not necessarily where the user perceives themselves to be in space.
So making that translation between where the phone estimates it is and where the user actually is, and how the user can interpret that, is a big challenge. We focus some of our efforts on trying to translate between those spaces in an intelligent way to increase usability. But I think it’s still an area that is pretty challenging, and we’re still looking into it.
MIKE MAY: Yeah, there’s a huge, huge effort that needs to go into scaling mapping. You might think, well, every building has blueprints. But it’s amazing, when you start looking into it, that those blueprints aren’t necessarily updated. They might not be digital. They’re in different pieces. How do you cobble them all together?
And then, thinking of the millions of buildings around the world, how do you even get to some scale where it’s useful? There’s not an easy answer to that, but it is starting to happen. And when there’s a motivation, a need, a value, then all of a sudden these huge undertakings can be addressed and hopefully accomplished.
And I think that’s happening because this is a situation that’s appealing to the general public. Any building could benefit from having maps for multiple reasons: for sighted users, for blind people, for people in wheelchairs, for first responders, for asset tracking, for any number of reasons where it would be really nice if you could just say, well, I need to go find a particular product in a Target or a Home Depot or a grocery store, and not have to wander around or find somebody to ask where that is.
You put it into the app. These stores all have good product apps, but what they don’t have is indoor mapping and positioning. If you can integrate those apps together, then all of a sudden you can have this experience where you search for a product, get guided to the store, walk inside, get guided to the product, and you’re in and out of there in five minutes. With that kind of motivation, I think we might be able to address this huge problem of finding ways to digitally map a lot of buildings.
ROBERTO MANDUCHI: Nick, let me take a stab, if you don’t mind, at the interface part. When we think of navigation systems, in general, we think of the paradigm of your car navigation system, which is turn-by-turn directions. The system tells you, turn right here.
Now, when the car navigation says turn right, you don’t turn right right away. You look at the road, understand where the intersection is, and understand when it’s the right time for you to turn. That is because the time at which the instruction was uttered might not be the exact time at which you should turn, in part because of the inaccuracy of GPS.
This is even more true if you cannot see. Suppose you have a turn-by-turn system as your interface, and as you’re walking in a corridor, the system tells you, now turn right. Because of the inaccuracy of your localization, which is inherent, you always have to expect the system not to be perfect. If you turn exactly when you are told, you hit a wall. That is not good, and not just because you hit the wall: it’s not good because it is confusing, and because you may start losing faith in the system.
So in my opinion, the paradigm used to provide directional information to the user has to account for multiple things: the fact that localization is not accurate, the fact that the map may not even be accurate, and also the sensory abilities of the user.
In my opinion, a real system is not a turn-by-turn system. The system should give you information about the environment around you, and it should give information about where you should go. But you, as the user, are in charge. You should be empowered to make more informed decisions. The user is not a robot; the user is the one who needs to decide what to do. And I think that is something that is not very clear in the design of the interface systems I see around.
NICHOLAS GIUDICE: Yeah. Let me dig into that a little more. And I’m admittedly a UI person. But most navigation technologies rely on speech and natural language instructions, whether they’re indoors or outdoors. On the one hand, this makes good sense. It’s easy to generate. It’s natural; people understand it. But it also has certain limitations, right?
So language is error-prone to interpret; there’s often ambiguity. You might be told “near the corner.” Well, how near? That wall issue you just mentioned, Roberto. Language isn’t inherently spatial; you have to cognitively process it. But it turns out navigation is intrinsically spatial.
So I guess my question is, why don’t we see more user interfaces that try to utilize the senses that are intact, senses that can directly specify spatial content, what I call perceptual interfaces?
So for instance, Mike mentioned Jack Loomis, and Jack and his colleagues Bobby Klatzky and Reg Golledge, since the ’80s, were interested in spatialized audio, presenting the world around you as if the sounds are coming from those locations. And now we get that in the Soundscape app and some other apps. Very few systems that I know of, at least, are using haptics in a real-time way. So do you see a role for that, and do you have an idea as to why this isn’t happening more?
ROBERTO MANDUCHI: Well, probably I’m the last who should speak here, and probably Nick is the person in the room with the most experience with that. I do believe that haptics of some sort, and of course acoustics, have a role, and there are tools, a vibrating bracelet, and others. In practice, the right recipe is a combination: acoustic signals, which must not be too overbearing, and haptics, which have a very limited vocabulary, so you want to use them for what they do best.
As an engineer, to me it’s more a matter of convenience. Soundscape is great for somebody who wants to keep headphones or earbuds on. But in very practical situations, that might not be what you want to do.
So I think, in a sense, natural language is perhaps the most direct, the one that we all speak [INAUDIBLE], though as you say, not necessarily the easiest to understand [INAUDIBLE]. But I think it’s very important to also keep in mind that, again, you want to keep your phone in your pocket, and you might not want to wear other types of devices. So there is a little bit of a compromise here in my mind.
PAUL RUVOLO: Yeah, I can build off that a bit. Real-time haptics, I think, are very powerful, and that’s what I was referencing before; we do use that in Clew. But if you restrict yourself to what you get from mainstream smartphones, the haptic signal is very impoverished. There’s only a single haptic actuator on most phones, and it really limits the richness of the type of information you can give.
And I do like the idea of natural language. I like Roberto’s idea that the user is not a robot. I think there would be a lot of potential in giving descriptions of scenes and what could be done in them. And then maybe there’s a secondary level where non-natural-language haptic feedback could be used to perform a particular task.
So, giving the user some choice in the matter, but also using more inherently spatial mechanisms to give feedback as they do a particular thing. But I think it would be great if richer haptic systems were available that people could leverage, and that we could assume people had access to.
MIKE MAY: And because you don’t have all of the sensors and feedback loops that you might want in one smartphone, I always talk about the accessible toolbox. And I think that’s practically the way people should be approaching navigation in the near future. Tactile maps are great, things like TMAPs, or 3D maps if you can get them, to preview ahead of time, looking at things virtually. Using multiple techniques is so important, because it’s just not one device that does everything.
The other thing that’s probably worth mentioning, in terms of what Roberto was saying about coming to a turn, being told turn now, and running into a wall: with GPS, I learned early on that a lot has to do with expectations. When somebody says turn now, what they really mean is turn at the next available opportunity.
And people used to get really annoyed with GPS when it was off. But then they just got used to that factor, and to the fact that GPS accuracy might vary from 10 meters to 30 meters; that was just the nature of it. So I know that in the first five years or so, people used to complain about this stuff all the time. And it wasn’t that GPS got more accurate; it’s that people’s expectations got more accurate, and familiarity with what to expect really softened their understanding of how to approach things.
And the same thing is yet to happen indoors, because there aren’t enough places that are mapped and people using them. But I think we’ll learn what we can hope for indoors and then combine that with multiple devices, and hopefully we’ll have something that’s moderately useful most of the time.
NICHOLAS GIUDICE: Mike, I remember you saying this when you were first putting out the product in the early 2000s. You were saying, if everyone waited for GPS to have centimeter accuracy, we’d still be waiting 20 years from now. It’s good enough, and you have to get your expectations in the right place, and then it makes sense. And it’s been hugely impactful, I think, on a lot of people.
I have another question that I want to throw out; I’m interested in your opinions on it. There’s a lot of interest nowadays, and a lot of people using, human-in-the-loop systems. We have Aira and Be My Eyes, and there are also AI versions. But I mean particularly human in the loop, where the blind user may be navigating while calling and talking with someone and getting information.
What are your thoughts on these? Do you think they represent a viable solution that doesn’t require a lot of these other components? They don’t need localization, they don’t need mapping, and they don’t really need a UI beyond the call itself. Is this a solution, or is this something that’s meant to augment the types of things that you’re doing?
MIKE MAY: It’s part of the accessible toolbox, for sure.
NICHOLAS GIUDICE: Yeah.
MIKE MAY: I mean, the real value of navigation is it gets you in the vicinity of something. It may not get you right to the doorway. Even if it’s indoors and the system has 1- or 2-meter accuracy, you can still get confused if there are two restroom doors right next to each other and they’re not labeled men’s and women’s. You might need Be My Eyes or Aira to get on the line. And that’s what we’ve done in the GoodMaps apps: we have a button to call up these apps, because we know that you need some eyeballs sometimes.
PAUL RUVOLO: Yeah, I definitely agree with that. I think there’s a lot of potential in combining augmented reality technology with human in the loop, and I’ve explored it a little in some research projects, though never released anything. The idea is that it’s great to call up somebody and have in situ guidance, but indicating spatial concepts in natural language can be difficult.
There might be lag on the video feed, and the bandwidth for communicating that may not be so great. So imagine having somebody jump on and locate things you’re interested in in your immediate environment, but have them be remembered by the system in the context of its map of the local environment.
I think there’s a lot of potential there for making systems easier to use and increasing their efficiency. And of course, the viability of this for a big audience is tied to the efficiency; there’s a cost trade-off there. So I don’t know, I think it could be hugely impactful. I haven’t heard of anything that does that yet on the Aira or Be My Eyes side, but it’s something I’ve definitely thought a lot about in my own work.
NICHOLAS GIUDICE: Paul, you mentioned user testing. I think this is a really important issue, to avoid what I think of as the engineering trap: when something’s designed just because someone has an idea and thinks, oh, this would be useful, but they never talked to end users. How are you folks getting users involved in the testing? And how does their input help drive the advancements you’re making?
ROBERTO MANDUCHI: Well, let me speak first. The user is important from the beginning; co-design is a big word right now. And I have to say, in my laboratory, we are sinners, because we probably don’t do things exactly as we should. Let me put it this way: I’m trying to learn from my mistakes and from observation of prior systems with people in the loop. The engineering trap that you mentioned is the number one problem that I see constantly, all the time.
We, as technologists, tend to do things just because we can, tend to propose a solution and then look for a problem. And this ties in, by the way, with the human in the loop mentioned before. I think that you need to get the best solution possible, regardless.
You have your toolbox, as Mike says. Artificial intelligence, which is what everybody is talking about, is not the destination. It is one of several means by which you can get there. But yes, obviously, the user in the loop is important to understand countless things, such as how you would use a system.
As engineers, we tend to forget about the abilities of a person. We tend to, again, model the user as a completely clueless system, a robot that we endow with sensors. In fact, I think the right way a system should work is to empower you, to help you work at your best by supplying the information that is missing.
NICHOLAS GIUDICE: So I know we’re kind of– sorry, go on, Paul.
PAUL RUVOLO: Oh, I was going to say, I totally agree. I’ll add one thing on that last point, about thinking about the abilities of users. One of the biggest moments where that really hit home for me was about making wrong assumptions. I had somebody visiting the lab who was an expert navigator, could do echolocation, walk around buildings, no problem. And of course, seeing that was awesome.
But then, interestingly, when I had this person try out Clew, they had a very difficult time using the feedback from it. It was telling them where the phone was pointing relative to the route, and they had a difficult time mapping that to where their body was. It was exactly the kind of situation where something I thought would be easy was really hard, and something I thought would be really hard was actually really doable. And that always stuck with me.
So I’ve always tried to involve users a lot. Especially now, if you have an app that doesn’t require a lot of physical infrastructure, it’s really nice to take advantage of the community of folks online. We ran a big co-design program over the summer, where people were paid to participate and got some career development workshops as well. And that was really beneficial in charting some new directions for us.
MIKE MAY: Yeah, at GoodMaps, we’ve been working on something in conjunction with the American Printing House for the Blind, which is in direct contact with blind students around the country. We’ve set up a study with five different schools for the blind to learn more about what blind students currently do, what would help them, and how we could modify our software. Maybe a smartphone doesn’t suit a lot of the students; what’s different about how they use it versus adults?
And one of the things we’ve heard already is that teachers worry about the distraction factor. If the phone is talking to you and giving you information, and you start concentrating on that and forget about your cane, that could be dangerous. So we’ve got to figure out how to deal with that distraction factor, probably through the teaching techniques that are used.
NICHOLAS GIUDICE: So for anyone listening to this panel: these are leaders, and they’re all involving participants and users. It’s so important, and I’m really glad to hear that this is a big part of all of your programs. I know we’re running near the end. I’m curious, one thing that you’ve all mentioned at some point is the challenge of infrastructure buildout versus infrastructure-free approaches.
Technology is always changing and evolving, so what do you predict are the most exciting changes on the technological front in, let’s say, the next two years? And this could be hardware, AI, whatever you’re excited about in this domain.
MIKE MAY: Nick, I’m excited by potentially collaborating with others who are creating maps out there. For example, we’re on a NIDILRR grant where one of the companies is Acuity, the largest lighting company in the world. And they have installed positioning with 6-inch to 1-foot accuracy in all Target stores, Home Depots, and a lot of other stores and airports.
And that information, unfortunately, is not accessible to blind people. So how can we, GoodMaps, help facilitate that level of access to all of these stores and airports, as well as benefit from that level of accuracy? I think it’s very exciting when positioning can be built into lighting systems as new buildings go up. This really expands where we can have access.
PAUL RUVOLO: I guess on my side, one thing I’m very excited about is seeing whether augmented reality products, hardware devices like glasses and so on, will lead to the availability of really good indoor maps that can be accessed by folks who want to leverage them to create apps for accessibility.
That ubiquity of maps, and standardization, I don’t know if it’s going to come in the next two years, but it will be a huge boon for accessibility. And one of the key drivers could, of course, be companies working specifically in this space, like GoodMaps, or it could come from larger players as some other trends unfold.
ROBERTO MANDUCHI: Nick, I am horrible at predicting the future of technology. To give you an idea when the–
NICHOLAS GIUDICE: Just excited, Roberto, you don’t have to predict.
ROBERTO MANDUCHI: When the iPhone came out in 2007, I thought, such a bad idea, nobody’s going to buy it. So I will abstain from telling you what I think is going to happen. But I am going to go back to your point about infrastructure. I believe this issue of whether we should develop technology assuming there is zero infrastructure to support us, or whether we can expect that at some point some infrastructure will be in place that makes things immensely easier, is an ongoing debate. It’s never going to stop.
I think we have to be a little bit flexible. I am expecting that there will be some sort of infrastructure; there always has been. Some of it will be put in place to help everyone, and that’s the curb-cut effect: infrastructure that helps everyone, for example, get better localization in buildings.
And there are, of course, big players here, and a lot of money and costs involved. If you remember when Talking Signs from Smith-Kettlewell came out, at some point they were hoping it would be used everywhere, perhaps as part of ADA requirements. That would have changed things immensely. It never happened. So it’s an evolving, dynamic situation. I think one has to be flexible and assume there are going to be steps in both directions, with and without infrastructure.
NICHOLAS GIUDICE: Excellent. Well, I think that wraps it up. I want to thank our panel. You guys have been great; this has been a really fun and informative discussion. Thanks to all who are attending this session. I hope you’ve also learned a lot and want to get involved. Everyone here is interested in collaborating and talking with you. Our contact information is on the website, so get excited, learn from some of the things you’re hearing, and let’s innovate. Back to you, Will.