DESCRIPTION
Discover how AI and computer vision are revolutionizing mobility assistance through two groundbreaking devices. This session explores how Glidance's Glide and Biped's NOA combine advanced technology with thoughtful design to enhance independent navigation for people with visual impairments. Learn how these innovative solutions use AI-powered sensors, real-time obstacle detection, and intuitive interfaces to create more confident, seamless mobility experiences.
Speakers
Amos Miller, Co-founder and CEO, Glidance
Maël Fabien, Co-founder and CEO, Biped Robotics
Victor Tsaran, Senior Technical Program Manager, Accessibility, Google (moderator)
SESSION TRANSCRIPT
[MUSIC PLAYING]
VOICEOVER: Transforming Mobility: Glidance and Biped make mobility safer, more intuitive, and profoundly empowering. Amos Miller, CEO of Glidance; Maël Fabien, CEO of Biped; Victor Tsaran, Senior Technical Program Manager of Accessibility, Google.
VICTOR TSARAN: Hi, my name is Victor Tsaran, and I would like to start by thanking Sight Tech Global for giving me this opportunity to moderate what will turn out to be a really amazing panel with Amos and Maël, from two great companies. We're going to talk about this in more detail in just a few minutes. Amos is here from Glidance; Maël is from Biped. But instead of me presenting or introducing them, I would rather they introduce themselves in their own words. Who's going to go first? Amos, your name starts with the letter A.
AMOS MILLER: Thank you, Victor. And hi, everyone. My name is Amos Miller. I’m the co-founder and CEO of Glidance, where we’re developing a new intelligent guide for people with blindness.
MAËL FABIEN: And Maël. All right. Hello, everyone. My name is Maël, the co-founder and CEO of Biped Robotics, calling from Switzerland today, where our company is located. And we are also building a mobility device for those who are blind or have low vision.
VICTOR TSARAN: Thank you both. This is a really great time for all of us. And I just wanted to add that I myself am completely blind. So to me, hearing from two visionaries and two leaders in this industry is both personally gratifying and inspiring, because one of these devices, or perhaps even both, is going to change my personal life. So it's absolutely great to have this opportunity to talk to both of you. The first thing I want to start with is the need that you're trying to address with both devices. And I would like you to maybe give a quick introduction to the devices themselves, because I think you talked about the companies in general, but what people might want to hear is what exactly those devices do. Before we go there, though, I wanted to shed a bit of light on why I think this is so important. In the field of blindness, we talk a lot about different issues that blind people face, whether from birth or after losing their sight later in life. One of them is literacy. But another big issue is mobility. A lot of blind kids, and the teachers who work with them, will tell you that mobility is one of the most profound things blind people have to learn. And if it's not done right, it might actually impair their ability to later go into professional fields and build careers. Because if you can't really move around freely, wherever you need to go, whether it's to work or for personal needs, that certainly limits where you can go in your life and with your life. So to me, seeing both of you trying to address this big need is really, really important. I just wanted to have both of you describe in more detail what each device does, so that the listeners and viewers can draw their own conclusions about how you approach the problem of solving mobility. Maybe I'll start with Maël. Could you talk a bit more about NOA and Biped in general?
MAËL FABIEN: Yeah, absolutely. I'm probably just going to start by telling the backstory of why we are building this, because that really explains the set of features we came up with. Right at the beginning of COVID, I met a white cane user in Lausanne, in Switzerland, heading to the ophthalmic hospital. He was on a FaceTime call with a friend, and on FaceTime his friend was giving him GPS instructions, like, OK, at the next intersection, you're going to turn left. Right, turn left now. He was also giving him obstacle-avoidance advice, like, watch out, you've got stairs to your right, and describing where the entrance of the hospital was located: right across the courtyard, you'll find the entrance with revolving doors, et cetera. Seeing that in action made me think, well, the person is obviously using a cane, and is obviously quite an advanced cane user, but discovering new places and going to new destinations is probably still quite challenging. Coming from an AI research background, I was not familiar with the field at all, and I just knew that if we were to build something, we would have to build it in a very user-focused way. So we approached the ophthalmic hospital, and what we noticed was that GPS was not precise enough; that obstacle avoidance needs to act as a complement to what the cane detects at ground level, also catching head-level obstacles, holes, steps, and moving obstacles that the cane might pick up too late; and finally, finding things: finding doors, finding crosswalks, et cetera. That's what NOA stands for in the end: Navigation, Obstacles, and AI. We really tried to build it around the cane and around the use of a guide dog, building on top of what already exists in terms of mobility. And I have a device here. We basically built a vest; it's kind of a small harness, about two pounds or one kilogram, worn on the shoulders, and it's black. On the left-hand side of the chest, there are three cameras with a 170-degree field of view, and on the right-hand side, there's a small AI computer that runs all the computation. It gives audio feedback to the person. So the person can hear 'turn right at two o'clock,' hear beeps when obstacles are coming closer, and press buttons located on the side to look for specific objects or get a full scene description. That's, in a nutshell, how we approached it and what it currently does.
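As a rough illustration of those proximity beeps, here is a minimal sketch that maps an obstacle's distance to a beep cadence, so closer obstacles beep faster. The function name, the five-meter range, and the interval values are assumptions for demonstration, not NOA's actual tuning.

```python
# Illustrative sketch only: closer obstacles produce faster beeps.
# Range and interval constants are assumed values, not Biped's real parameters.

def beep_interval_s(distance_m: float, max_range_m: float = 5.0):
    """Seconds between beeps, or None if the obstacle is out of range."""
    if distance_m >= max_range_m:
        return None  # far enough away: stay silent, don't overwhelm the user
    fraction = max(distance_m, 0.0) / max_range_m
    # Shorten the interval from ~1.0 s at max range down to 0.1 s at contact.
    return 0.1 + 0.9 * fraction

for d in (6.0, 4.0, 2.0, 0.5):
    print(f"{d} m -> {beep_interval_s(d)}")
```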
VICTOR TSARAN: Very cool. And thanks for spelling out NOA for me, by the way. I knew the navigation part; I just don't know why I forgot about the AI. That's a shame, because AI is the first thing I should have remembered. So thanks for spelling that out. Okay. Amos, how about Glidance?
AMOS MILLER: Yeah, of course. Thank you, Victor. And Maël, it's really great to hear your vision and your ideas. It's just amazing to be in the presence of collaborators who are working together to address what is a very serious problem. Like you say, Victor, mobility, the ability to get around, the ability to simply go places, is so foundational to quality of life, to opportunity, to employment. When I moved to London, back when I was about 25, I started a new job at a tech company. I'm blind myself. And I had to commute through London in the middle of winter, cold and dark, tapping with my cane, trying to commute, and then getting a guide dog. You can't have those kinds of opportunities without the ability to get around. Our angle on this quest at Glidance is a bit different. Maël described how cane users and guide dog users benefit from additional information in order to get around more confidently. I've done a lot of work in mobility; I spent a lot of time at Microsoft on Microsoft Soundscape, which some of the audience might be familiar with. And through my work, I realized that relying on people being confident cane users or guide dog users is leaving a lot of people behind, and that there are a lot of people who need more help, especially people who lose their sight later in life. Through my research and exploration, I realized that the information technology gives the user is only going to be as effective as the way it is delivered to the user when they're out there. If you're going to navigate through a crowded space, 'move left a bit, right a bit' is not going to quite cut it; it's not fine-grained enough, especially if you're going to provide a primary mobility aid as an alternative to the cane or the guide dog. So at Glidance, we concluded that what we needed was something that is physically connected to the ground and guides. That's the essence of Glide. Glide is an autonomous, intelligent guide, a mobility aid. It has two wheels on the ground and a long handle. You hold onto the handle and nudge the device forward, and then the wheels begin to autonomously steer the way, a little bit to the left, a bit to the right, guiding you around obstacles, keeping you on a safe path, and guiding you to line-of-sight targets or potentially along entire routes as well. Really, the goal here is: we have canes, we have guide dogs, and I believe there's a need for another option for people. That's what we are working on at Glidance.
VICTOR TSARAN: Yeah. That's really awesome. Thank you both. Couldn't agree more. And this sort of leads me into the next question, which is the approach that each of you took. I know you both touched on this already. Maël, you were influenced by the person you met, and so you are trying to solve the immediate problem they had, which is: how do we help someone who already uses the cane, how do we augment cane navigation with something that will bridge the gap and provide information that the cane doesn't. On the other hand, it's interesting what you mentioned, Amos, about sound systems. I'm a user of Soundscape, so I'm a little bit biased here. But I do have to say, and Maël, you alluded to this with the audio signals, there's a tendency, especially within the last decade or so, to push everything into speech. And the problem with speech, which I call a one-dimensional stream of information, is that by the time you hear something spoken, you could be in danger of hitting a tree or falling or whatever. So as important as speech is for us, for reading emails or writing documents, it may not be a good enough means of communication when you need an almost instantaneous response. Which is where audio signals become super important, especially the way that Soundscape solved it, by using stereo positioning in headphones. And forgive me, Maël, I don't know if that's the approach that NOA takes, but I find this especially useful when I don't need to hear spoken words telling me that something is left or right. And again, in English these are short words; in other languages they might be longer. Audio signals become a really interesting alternative to speech. But what I'm really also interested to ask you about, and this is not a competition kind of question, but I'm still curious: in one case, Glidance's device stands on the ground, which is super convenient, but it also introduces other things, like the user needs to remember to pick it up in some places, or might need to find a space for the unit to stand, things like that. Whereas in the other situation, with NOA, the user has to wear something on their shoulders that might be too heavy. How do you think about ergonomics for these devices? Because I'm sure there are always pluses and minuses to both. So, Amos, do you want to start and talk a little bit about ergonomics? Because I'm kind of curious.
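To make the stereo-positioning idea concrete, here is a minimal sketch of constant-power panning, where a short cue is weighted between the left and right ears by the target's bearing. The pan law and the value ranges are assumptions for illustration, not Soundscape's or NOA's actual implementation.

```python
import math

# Hypothetical sketch: pan a short audio cue by bearing so direction is heard
# instantly, with no spoken words needed. Constant-power panning is assumed.

def stereo_gains(bearing_deg: float) -> tuple[float, float]:
    """Map a bearing (-90 = hard left, +90 = hard right) to (left, right) gains."""
    clamped = max(-90.0, min(90.0, bearing_deg))
    theta = (clamped + 90.0) / 180.0 * (math.pi / 2)
    return math.cos(theta), math.sin(theta)  # constant-power pan law

print(stereo_gains(-90))  # (1.0, 0.0): cue fully in the left ear
print(stereo_gains(0))    # (~0.71, ~0.71): centered
print(stereo_gains(45))   # biased toward the right ear
```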
AMOS MILLER: Yeah, absolutely. I mean, you are spot on. I think that there are different technologies and different interfaces that can provide users with different kinds of information: voice, audio, haptics. What I found is that voice and sound and haptics are very good, as long as they're used wisely, when you already have the ability to micro-navigate: to use the cane effectively, use the guide dog effectively, orient yourself effectively, so that the additional information really enriches your experience. But if you're going to build a viable mobility aid that will be the primary mobility aid, for me, I think it's still a ways away before we can use wearables for that. I often say that once we're able to plug directly into the brain, everything will change. But until then, from my experience, I have my doubts that you can wear something or feel something on your body and just walk along, with nothing in your hand, confident that you're not going to trip over anything or walk into anything. And so that's really the essence of Glide: it's physical, it's connected to the ground. You push against it, it pushes against you. It moves you, you move it. That's essential to the experience and to the confidence and trust you build in the device. But as you say, for this to be a viable mobility aid, it also has to be light. You have to be able to pick it up. You have to be able to get into a car quickly and easily with it; you can't expect a forklift to come and put it in the back of your car every time. If you get on a bus or a train, you want to be able to put it in the overhead compartment. You want to be able to walk on rough terrain and smooth terrain. All kinds of requirements are essential to make this a viable physical device that guides you. So these are really the constraints and requirements that I believe are essential if we're going to bring about a mobility aid of this nature.
VICTOR TSARAN: Very cool. Thank you. Maël, what are some of your thoughts?
MAËL FABIEN: Yeah. I mean, I think Amos put it very well. If you want to build a primary mobility aid, I also don't believe in the approach of having something like NOA give a continuous sound that you have to follow. That would be way too overwhelming, and it's been tried and researched across a couple of devices; it just doesn't work as such. And this is where the angles we took are very different: one is a primary mobility aid, a replacement, and the other is an addition to a primary mobility aid. There are two aspects to the ergonomics of what we design: the form factor and the audio interface. What we realized over the test sessions is that the audio really needs to be very sparse, giving only the essential information that's needed at a specific moment and avoiding overwhelming the user at all costs. We spent probably two years roughly just building a trajectory prediction model that looks at the dynamics of the scene, whether someone's about to collide with you, et cetera, just to be able to filter out more sounds and not overwhelm the user. Otherwise, the risk of overwhelming someone, with a continuous sound for example, is very high. And then for the hardware, there's hardly a one-size-fits-all solution. I'm just glad there are more and more companies in this field building alternative ways to wear the device, embedding technology on the shoulders, or the forehead, or a belt, or whatever. We tried a belt in the very first place, then quickly took it for a spin and realized the cameras were blocked half the time by the hand of the user, who is either a guide dog or a white cane user. So we went to chest level, and that's where we got the first success. But it was like a GoPro clip, and with straps and everything, it was really hard to clip on; too much of a hassle to put on in the first place. And so that's how we came to this form factor. Head mounts, we're not so sure about. I don't think it's worth investing a lot of time and money in a head mount; we'd rather wait for smart glasses to have day and night vision with 3D sensors and everything. That's going to come one day, but it's still super challenging. The shoulders felt very natural for building this complement to canes and dogs.
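As a toy illustration of that kind of filtering, the sketch below raises an alert only when a tracked object's extrapolated path comes within a safety radius of the user over a short horizon. The constant-velocity assumption and the thresholds are mine for illustration, not Biped's actual trajectory prediction model.

```python
import numpy as np

# Illustrative collision filter: alert only when a predicted trajectory
# actually approaches the user, so most detections stay silent.

def should_alert(rel_pos, rel_vel, horizon_s=3.0, radius_m=0.5) -> bool:
    """True if the object comes within radius_m of the user within horizon_s."""
    rel_pos = np.asarray(rel_pos, dtype=float)  # position relative to user (m)
    rel_vel = np.asarray(rel_vel, dtype=float)  # relative velocity (m/s)
    speed_sq = rel_vel @ rel_vel
    if speed_sq < 1e-9:
        return bool(np.linalg.norm(rel_pos) < radius_m)  # effectively static
    # Time of closest approach under a constant-velocity assumption.
    t = np.clip(-(rel_pos @ rel_vel) / speed_sq, 0.0, horizon_s)
    return bool(np.linalg.norm(rel_pos + t * rel_vel) < radius_m)

print(should_alert([3.0, 1.0], [-1.0, -0.3]))  # converging pedestrian -> True
print(should_alert([3.0, 1.0], [0.5, 0.0]))    # walking away -> False
```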
VICTOR TSARAN: Yeah, very cool. Thank you. And thank you for describing your thought process, because, as a technologist myself, this is one of the interesting aspects to me: how do we arrive at a certain invention? Obviously, it takes phases. It takes thinking. It takes trial and failure. So thank you for pointing out that you went through several iterations before you got to where you are today. One other thing I do want to point out, because it's important to me personally: what both mobility aids allow us to do is travel spontaneously. My wife and I travel quite a bit; we try to go somewhere every year, to a different country. And most blind people, the way we're taught is, you need to go from point A to point B, right? The mobility instructor shows you: you start here, then you remember your route, whether it's to work or to your grocery store or whatever. But whenever you travel, you often hear sighted people say, oh, we went there and we saw this. And it's like, oh my God, it would be so cool if somebody just dropped me on some random street and I could explore what's around me without always having to plan for something. And I think this is where, hopefully, this technology is going to get us: this ability to really travel spontaneously, to get to unplanned places, things like that. And so this leads me to the next idea; I know we could talk about this for another hour, but both devices rely on AI quite a bit. Could you both quickly talk about what role AI really plays in each device? I know we could debate this AI versus that trendy AI versus real AI, but if you look at both devices, what role does AI really play in them? I want to start with Maël maybe, because you guys have an AI button. So talk about that.
MAËL FABIEN: Yeah, absolutely. For us, we're leveraging the field of AI, mostly focused on computer vision, first to do scene description. You can really think of it as a left-to-right description; that's why we have such a wide-field-of-view camera, so you can get a full description to really grasp what your surroundings are like, what the key elements are that are useful for mobility, but also to discover the place. For that, we have a big silver button in the middle of the piece that goes on the right-hand side of your chest. Then, more specifically, with the buttons on the right edge of the device, you can look for specific elements; these are object recognition models. You can look for doors, you can look for crosswalks, things like that. This mix of models runs on the edge, meaning locally on the device, without requiring internet connectivity. And for the things that are super demanding in terms of computing resources, using AI connected to the cloud is a pretty relevant solution so far. That's how we're building it. And when you look at, I think no longer than last week, OpenAI released the Realtime API, being able to do speech-to-speech but also ingesting a video feed in the middle, so you can ask questions as you walk down the street, for example. I think this is really where the technology is going, and this is what we have in mind for the next few steps: really going into video description. By the time this video airs, that feature will probably already be released. You can think of it as an AI narrator that talks as you walk, really for exploration. You're not traveling somewhere with a specific destination; you're relying on the voice that's gradually talking and highlighting the key elements in the streets.
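Here is a hedged sketch of that edge/cloud split: latency-critical detections run locally on the device, while heavy, open-ended scene description goes to a cloud model when connectivity allows. Every function name below is a hypothetical stand-in, not Biped's actual API.

```python
# Hypothetical stand-ins for a local model and a hosted multimodal model.
def run_edge_model(task: str, frame) -> str:
    return f"[edge result for {task}]"       # fast, offline-capable, on-device

def query_cloud_model(frame) -> str:
    return "[rich cloud scene description]"  # heavier compute, needs network

def handle_request(kind: str, frame, online: bool) -> str:
    """Route a request to the edge or the cloud, degrading gracefully offline."""
    if kind in {"obstacle", "door", "crosswalk"}:
        return run_edge_model(kind, frame)         # latency-critical: always local
    if kind == "scene_description" and online:
        return query_cloud_model(frame)            # demanding: cloud when possible
    return run_edge_model("basic_caption", frame)  # offline fallback

print(handle_request("door", frame=None, online=False))
print(handle_request("scene_description", frame=None, online=True))
```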
VICTOR TSARAN: Sounds great. Thank you. I just want to pass it over to Amos to share some thoughts, because you guys have robotics and AI and everything in between. So maybe you could talk for a minute or two about that.
AMOS MILLER: Yeah, absolutely, just to be brief. Glide is powered by two stereo depth cameras at the handle, and a bunch of other sensors on the device, so that it can have a good, detailed understanding of its immediate surroundings and at long range as well. Our proprietary AI is mainly a system that we call the Senseable Wayfinding System. This is a system based on computer vision, local maps, priors, and foundation models that enables the device to make sense of the environment and help the user make sensible navigation decisions. I'll give you an example. It might sound obvious, but for computers nothing is obvious. If you walk along a corridor and the corridor turns to the left, the computer can't see through the wall, so it doesn't know that the corridor continues there. Something needs to give it the belief that, based on what it can see, the corridor does continue in that direction. So the Senseable Wayfinding platform that Glidance is investing a lot in is really about enabling Glide to make sense of environments and actually guide the user sensibly in a space. The other aspect is exactly what Maël talked about: the potential and opportunity for human-computer interaction. The voice, the instant understanding of a scene and an environment. And I think that's really where the magic happens: the combination of a really responsive, interactive interface that allows me as a user to understand what's going on around me, and the robot making sensible navigation decisions.
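To make the corridor example concrete, here is a toy Bayesian sketch: a prior ("corridors that bend usually continue") is combined with partial visual evidence to form a belief that the path continues around the turn. The numbers and the structure are purely illustrative, not how the Senseable Wayfinding System actually works.

```python
# Toy belief update: combine a prior about corridors with what vision can see.

def continuation_belief(prior: float, likelihood_if_continues: float,
                        likelihood_if_dead_end: float) -> float:
    """Posterior probability that the corridor continues, via Bayes' rule."""
    num = prior * likelihood_if_continues
    den = num + (1.0 - prior) * likelihood_if_dead_end
    return num / den

# Assumed prior: 90% of corridors that bend do continue. The camera sees a
# wall ahead but open floor curving left; that view is more likely if the
# corridor continues (0.8) than if it dead-ends (0.4).
print(continuation_belief(prior=0.9,
                          likelihood_if_continues=0.8,
                          likelihood_if_dead_end=0.4))  # ~0.95: keep guiding left
```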
VICTOR TSARAN: Sounds great. Thank you both. And maybe quickly, could you share the websites where people can find more information about both devices? Amos, do you want to go first?
AMOS MILLER: Yeah, so our website is Glidance.io. It's the word 'guidance' with the U switched to an L. So, Glidance.io. We have extensive community engagement sessions for people who are interested in following the development of Glide. We are still about a year away from the device shipping, and we really strongly encourage people to go to the website, sign up, and take part. We're building this future together, and these kinds of sessions are the best places to engage and bring people on board.
VICTOR TSARAN: Very cool. Glidance.io. And Maël?
MAËL FABIEN: Yeah. Our website is biped.ai. That's spelled B-I-P-E-D, dot ai, like 'bipedal.' And our device is usually available within six to nine weeks. But most people actually want to try it first, and they're right to do so. Marco is our COO; he's usually the one on the road in the U.S. We have a couple of distributors, so people can just reach out and we'll find the nearest distributor to schedule a test.
VICTOR TSARAN: Sounds great. Exciting times ahead. I'm really looking forward to this technology disrupting mobility and making blind people and people with visual impairments more productive, so we can finally go places. Thank you both for this wonderful panel. And thank you, Sight Tech Global, for again giving me this opportunity to talk to these two wonderful, amazing, inventive people. Superb, great people. Thanks, gang.
[MUSIC PLAYING]