DESCRIPTION
Join pioneering entrepreneurs as they explore how AI and innovative technology are complementing traditional mobility aids for the visually impaired. Learn how BenVision, Lighthouse Tech, and WeWALK are revolutionizing spatial navigation, stylish assistive devices, and smart cane technology to enhance independence and safety.
Speakers
- Nathan Deutsch, COO and Innovation Lead, Lighthouse Tech
- Kürşat Ceylan, Co-Founder, WeWALK
- Patrick Burton, Co-Founder and CEO, BenVision
- Moderator: Anat Nulman, Founder and CEO, Assistive Consulting
SESSION TRANSCRIPT
[MUSIC PLAYING]
VOICEOVER: Smart Mobility, Advancing Solutions for the Visually Impaired. Speakers, Nathan Deutsch, COO and Innovation Lead, Lighthouse Tech. Kürşat Ceylan, Co-Founder of WeWalk. Patrick Burton, Co-Founder and CEO of BenVision. Moderator, Anat Nulman, Founder and CEO, Assistive Consulting.
ANAT NULMAN: Thank you, Karae and Ross. Thank you for hosting our session. What an honor to join an amazing lineup of speakers and panelists here at Sight Tech Global. And welcome to our session, Smart Mobility: Advancing Solutions for People Who Are Visually Impaired. Today you will hear from three visionary leaders in assistive technology as they explore the groundbreaking solutions transforming mobility for people who are visually impaired. From cutting-edge approaches to smart mobility, from obstacle detection systems to the integration of AI, you will hear firsthand how these pioneers are tackling age-old challenges with creativity and compassion. Our expert panelists will share their insights on emerging technologies, how these technologies integrate with traditional solutions such as white canes and guide dogs, the critical role of mobility training, and the future of AI in mobility, navigation, and obstacle avoidance. But before we dive in, let us introduce ourselves. My name is Anat Nulman. I've been an advocate, an ally, and an evangelist in the assistive technology field for the past 11 years. I've worked for two manufacturers of assistive devices and launched multiple products to market. Last year, I founded Assistive Consulting, where I help companies that build solutions for people with disabilities, primarily people who are visually impaired, bring their innovations to market and reach as many users as possible. My current clients span diverse areas like mobility, smart glasses, Braille, and accessible telecommunications. Today, I'm joined by three extraordinary leaders: Kürşat Ceylan, co-founder and chief product officer at WeWalk; Patrick Burton, CEO and co-founder at BenVision; and Nathan Deutsch, operations and innovation lead at Lighthouse Tech. I will let them introduce themselves. Kürşat, please take it away.
KÜRŞAT CEYLAN: Thank you, Anat. It's a privilege to be here together with you. I am blind from birth, and I studied in a primary school for blind children. Then I attended an inclusive high school, and I studied psychological counseling at university. After my graduation, I dived into technology work, and as a team, we have implemented various technologies for visually impaired people, such as audio description technology for movie theaters. For the last four years, we have been focusing on WeWalk smart cane technology. I am one of the co-founders of WeWalk, and at WeWalk we focus on enhancing the mobility of visually impaired people through our patented smart cane technology.
ANAT NULMAN: Thank you, Kürşat. Patrick?
PATRICK BURTON: Excellent. Such an honor to be here at Sight Tech Global. So my name is Patrick Burton, and I am an accessibility hacker. And if you don't know what that means, that's because I made it up just now. But I'm CEO and co-founder of BenVision. We make Speakaboo, but the more exciting thing that we're working on, in my opinion, is called Ben, the Binaural Experience Navigator, if you want to be extra. And what Ben does is use music and spatial audio to give an echolocation-like navigation experience. So think sonar meets symphony. That's what we're building.
ANAT NULMAN: I love it. Thank you, Patrick. Nathan?
NATHAN DEUTSCH: So again, honored to be here. Hi, Anat. Hi, co-panelists. I'm Nathan, COO at Lighthouse Tech. I'm a social scientist by training, and I'm interested in the way people learn about and interact with their environments, and how they imagine and design solutions to the problems that they perceive. We produce TAMI, a smart eyewear solution for people living with blindness and vision impairments. Personally, I'm sighted; I have no vision impairments or visual perceptual impairments myself. But I've been working on this project for roughly three years now, so I'm relatively new to this space.
ANAT NULMAN: Thank you very much, panelists. So before we jump into the discussion on solutions, let's first try to identify what problems mobility solutions are aiming to solve. What specific navigation and mobility challenges do people who are visually impaired face in their daily lives? How is your technology addressing those challenges? And what gaps remain in the current landscape of assistive technology for mobility? Kürşat, let's start with you.
KÜRŞAT CEYLAN: So it's a really deep question, because there are different angles to it. When I go out, there are several challenges I have to deal with. We cannot say that visually impaired people have only one problem while going somewhere, and that if we solve it, everything will be perfect. No. Detecting obstacles is one of them, and it's important. Most people think that the only problem visually impaired people have is detecting obstacles. But even if I can detect obstacles, I still have to know which direction to walk in. And while we are going somewhere, we have to build our spatial awareness, what is around us. To increase our spatial awareness and to navigate somewhere, we have to use different applications. Can you imagine? We are going somewhere with our white cane, using it with one hand, and with the other hand we have to use our smartphone to run different applications, Google Maps or apps developed for blind people, and we have to switch between them. That's why I believe visually impaired people face challenges in detecting obstacles, increasing their spatial awareness, and juggling different applications. With the WeWalk smart cane, we target all those problems. You don't need to switch between applications, and you don't need to hold your smartphone. You can put it in your pocket and access all that information from your smart cane, either by using physical buttons or by simply talking to your cane, thanks to our custom-trained voice assistant technology.
ANAT NULMAN: Well, thank you very much, Kürşat, for sharing that. Nathan, do you have anything to add?
NATHAN DEUTSCH: Yeah, sure. I mean, there are problems that training and traditional tools already deal with very well. But there are problems that continue to cause a lot of anxiety, and I think we've zeroed in on one of those problems that can deeply affect people's lives. We're focused on providing people with a safe mobility experience by protecting the upper body from collisions with obstacles, in the zone that the traditional white cane can't see. But another important gap that we've really worked hard on is stigma and inclusion. On that path, we're making a discreet wearable in the form of glasses. The company's founders are fashion eyewear industry experts, and we're leveraging that experience to create discreet assistive technology that enhances safety but also reduces the stigma around what can often be a bulky assistive device. At the same time, we're complementing those traditional tools. So in the current landscape of assistive technology, we're really moving toward a nice wearable integration. We think wearables are an exciting space to be working in right now, not just in assistive technology. We're seeing a lot of innovation in the eyewear space, and we're bringing our solution in there too. We want to experiment with that form factor to make it effective, to make mobility safer.
KÜRŞAT CEYLAN: By the way, thank you for bringing up the upper-body obstacle detection aspect. You know, you can see I have some scars on my forehead. It is a real pain point for visually impaired people. I totally get you.
NATHAN DEUTSCH: Cool, thanks.
ANAT NULMAN: Definitely. And Patrick, do you want to share a few thoughts from your side? I know your solution uses music to tackle some of those problems, but what problems are you guys tackling specifically?
PATRICK BURTON: Absolutely. So I myself am sighted, but I work at a charity for the blind, and I see people tripping over things and bumping into things and hitting their heads on things, like Kürşat. And I would say that what I see the most, beyond the loss of independence and mobility, is the mental and emotional exhaustion that comes from relying on your other senses to get around. Something that I love about this panel is that all of our solutions, after researching them, use nonverbal cues in one way or another. So many assistive tech solutions for sight loss rely on speech. I should know, we make one of them, Speakaboo. But when you're trying to accomplish a task like navigating from one room to another, that carries a really heavy cognitive load. It can be exhausting and really difficult to think about anything else but navigation when you're trying to follow a set of directions. So what BenVision is doing to solve that is taking advantage of augmented reality and spatial audio. We anchor virtual speakers that play different music to different objects, waypoints, and regions around your space, so that rather than following a laundry list of directions to get somewhere, you can actually just walk toward where the sound is coming from. Through testing, we found that not only is it more intuitive to navigate that way, but it actually frees up space in your brain. Two of our beta testers were talking with each other about the experience of using Ben while they were using it, and suddenly one of them realized, oh my God, I can carry a conversation while I'm using this. I never thought that would be possible with a solution like this. So I like to think that through our use of music, we're putting the humanity back into assistive tech. We're giving you a piece of your brain back.
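To make the anchoring idea concrete, here is a minimal sketch of how a virtual speaker pinned to a world position might be panned and attenuated from the listener's pose. It is an illustration under assumed conventions, not BenVision's implementation; the class, the constant-power panning law, and the inverse-distance attenuation are all assumptions.

```python
import math

class AudioAnchor:
    """A musical loop pinned to a world position (hypothetical structure)."""
    def __init__(self, name, x, y):
        self.name = name          # e.g. "doorway" or "kitchen waypoint"
        self.x, self.y = x, y

def stereo_gains(listener_x, listener_y, heading_rad, anchor):
    """Return (left_gain, right_gain, distance_m) for one anchored source."""
    dx, dy = anchor.x - listener_x, anchor.y - listener_y
    distance = math.hypot(dx, dy)
    # Bearing of the anchor relative to the listener's facing direction.
    azimuth = math.atan2(dy, dx) - heading_rad
    pan = -math.sin(azimuth)      # +azimuth (counterclockwise) is the listener's left
    # Constant-power pan: centre -> equal gains, hard sides -> one channel only.
    angle = (pan + 1.0) * math.pi / 4.0
    left, right = math.cos(angle), math.sin(angle)
    atten = 1.0 / max(distance, 1.0)   # nearer anchors sound louder
    return left * atten, right * atten, distance

door = AudioAnchor("doorway", x=3.0, y=4.0)
print(stereo_gains(0.0, 0.0, heading_rad=0.0, anchor=door))  # mostly left channel, 5 m away
```

A real spatial audio engine would add head-related transfer functions and elevation cues; this only captures the walk-toward-the-sound intuition.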
ANAT NULMAN: Fascinating. There are obviously many, many challenges, but let's home in on obstacle avoidance specifically. Despite new technologies, why is obstacle avoidance still an ongoing challenge? What are the limitations of current technology in accurately detecting obstacles, especially in dynamic or complex environments, which describes a lot of the environments we're in on a daily basis? How are you innovating to address that particular issue, and what advancements do you foresee in the next few years? So Patrick, let's start with you this time.
PATRICK BURTON: Excellent. Yes. I love this question. You guys are all about to find out what a big nerd I am. So the challenge is really, I mean, it's just such a complex problem. Computer vision comes in so many different flavors, and each of them has its own strengths and weaknesses. For instance, there's object recognition, which draws a bounding box around detected object categories, so you can roughly approximate an object's location by its box. Then there's another method called segmentation, which takes it one step further by identifying which pixels on your screen belong to which object category. And then there's depth estimation, for which LiDAR is a common method; that can tell you how close or far away an object is from you. But there's no color information there, so it can't tell you what the objects are. And nowadays we have visual large language models, or VLLMs for short, which are trained on much larger datasets of images and can give you precise information about what you're seeing in natural language. You could even ask a VLLM, what is that over on the right side of the screen? And it'll tell you, oh, that's a box with a trophy on it, or something. They're great, but sometimes they hallucinate; sometimes they give you the wrong information. So all of these different methods have different strengths and weaknesses. Some give you information that others don't, and some take more compute power than others. I think the perfect solution will be one that marries all of them together and knows which to use in the appropriate scenario. And to get even more technical, there are other challenges, like tracking objects. If an object disappears from the camera's field of view and then comes back, how does the model know that it's the same object? Or if it disappears behind a tree and then reappears, how do we know it's the same object and not a different one? Thankfully, the models are starting to handle that a little better, but they still make mistakes sometimes. So what it boils down to, I think, is that the human brain is an incredible thing. It does all of this processing of light information and feeds it to us so that we can digest it instantly, but we truly don't understand everything the brain is doing in this process, and that makes it hard to emulate, especially in real time. So that's the problem of interpreting the information. Now, how do you communicate it all to the user once you have it? That's the big issue we're addressing with Ben: for real-time assistance, especially in crucial moments involving safety, speech just isn't going to cut it. Hearing somebody say 'watch out for that car' isn't going to have the same effect as hearing a horn blaring at you. They mean the same thing, but one is certainly going to be more effective than the other. And speech takes a lot of brain power to process, too. We need to tap into the human brain's remarkable ability to process nonverbal cues like music or haptics. That's where I sometimes get frustrated that people look at BenVision like it's an art project or a novelty because we're using music. We're not just a novelty. Through testing, we've shown that music is actually one of the best ways to solve this problem.
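As a small sketch of the "marry the methods" idea, the following combines one model's bounding boxes with another's depth map to rank nearby obstacles. The inputs, the 10th-percentile trick, and the 5-metre cutoff are illustrative assumptions, not any product's actual pipeline.

```python
import numpy as np

def rank_obstacles(boxes, labels, depth_map, max_range_m=5.0):
    """boxes: (x1, y1, x2, y2) pixel coords; depth_map: HxW array in metres."""
    ranked = []
    for (x1, y1, x2, y2), label in zip(boxes, labels):
        patch = depth_map[y1:y2, x1:x2]        # depth pixels inside the box
        if patch.size == 0:
            continue
        # A low percentile resists depth noise better than the raw minimum.
        dist = float(np.percentile(patch, 10))
        if dist <= max_range_m:
            ranked.append((dist, label))
    return sorted(ranked)                       # nearest obstacle first

depth = np.full((480, 640), 8.0)                # background ~8 m away
depth[200:300, 300:400] = 1.2                   # something ~1.2 m away
print(rank_obstacles([(300, 200, 400, 300)], ["chair"], depth))  # [(1.2, 'chair')]
```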
ANAT NULMAN: Very, very interesting. Thank you, Patrick, for sharing that. And for those who are listening to us today who are coming from the tech side, I’m glad that you highlighted some of the technologies that you guys implement in BenVision. Nathan, would you like to add?
NATHAN DEUTSCH: Yeah, sure. I'd love to add something here, and I'm glad to let Patrick be the nerd. Maybe I'll get into a slightly different subject, though. Just to note that the technology is always improving, and in our case, what that means is we can do more in smaller spaces, on shorter timelines, than ever. Lately, we can start to put these pieces together in novel, exciting form factors, like smart glasses, so it's exciting for us to work with these things. New designs have to emerge, new ways of interacting discreetly with the technology and the environment around you, when you change these things up. Also, just remember that it's a tool. You can't just pick it up and go; there are challenges involved in developing a relationship with the tool you're using, so you can use it effectively and so it doesn't distract you from other things. So that's our angle. BenVision is musical; we chose a different feedback channel, which is haptics. That helps, because awareness of the environment around you and the locations of obstacles is complex. There are tools that can afford major improvements in people's lives because they increase the possibility of perceiving these things in the environment around you, but you need to grow your skills and abilities; the technology grows with the user. And I like to bring it back to a low-tech tool, the white cane, which is a really great example of this. It's a very reliable tool. Unfortunately, it doesn't protect you in the upper-body zone, and we're trying to pack some technology into a very tiny space to close that gap.
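As one hypothetical example of nonverbal feedback design, the sketch below maps obstacle distance to a vibration pulse interval, so closer means faster pulsing. The ranges and the linear mapping are assumptions for illustration; this is not Lighthouse Tech's actual encoding.

```python
# A minimal sketch (assumed scheme, not a real device's): map obstacle
# distance to the gap between vibration pulses, so closer obstacles
# produce faster pulses the wearer can learn without any audio channel.

def pulse_interval_s(distance_m, min_range=0.3, max_range=2.0):
    """Closer obstacle -> shorter gap between vibration pulses."""
    if distance_m >= max_range:
        return None                      # out of range: stay silent
    d = max(distance_m, min_range)
    # Linear map: 0.3 m -> 0.1 s between pulses, 2.0 m -> 1.0 s.
    frac = (d - min_range) / (max_range - min_range)
    return 0.1 + frac * 0.9

for d in (0.3, 1.0, 2.5):
    print(d, pulse_interval_s(d))        # 0.1 s, ~0.47 s, then silence
```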
ANAT NULMAN: Thank you, Nathan. And I also want to point out that using haptics, for example, opens up these solutions to a different demographic: people who have both vision and hearing loss. Auditory cues are often not available to them, because they can't hear them or cannot hear them very well. So it's not just serving people who are visually impaired; it's also serving other folks, expanding the technology and making it more accessible. And I'm glad you mentioned white canes, Nathan. When I'm out and about in the community and I talk about different mobility solutions, there's always a question: is this instead of the white cane, or instead of a guide dog? Can new technology replace these traditional aids? Or should there be some kind of hybrid, given that the white cane is the oldest assistive device available to people who are visually impaired? And if it is a hybrid model, where all these new devices and solutions complement guide dogs and white canes, how would that work? Nathan, let's start with you, because you alluded to this earlier, but I just want to home in on it a little more.
NATHAN DEUTSCH: Yeah, sure. If there's an ideal hybrid model, or if it's some hybrid we're talking about, then it's going to be highly individualized. It depends on the person. But one thing that we continue to hear from people, and that we've internalized as we're building our product, is that you do want to have different kinds of feedback. You want the feedback from the white cane, and you need other kinds of feedback for your upper-body area and the way you move. That's something we've taken into consideration in making smart glasses, and we always make sure to say this is for use with a white cane or guide dog. We also hear about the benefits of having some redundancy in the system, so if your fancy device falls in the toilet or something, it's not going to prevent you from going about your day. These are all considerations we take into account when we say, no, we're not trying to replace a traditional tool.
ANAT NULMAN: Mm-hmm. And Kürşat, WeWalk is a smart cane. So you took the oldest mobility device and made it smarter. Tell us more about it.
KÜRŞAT CEYLAN: So as a blind person, as you may guess, I am a real white cane lover. That's why we didn't replace the white cane; we just improved it. But maybe I want to go back to the previous question as well, the obstacle detection question. I agree with Patrick. To deliver better mobility, better obstacle detection capacity, we have to aggregate data from different sources, such as cameras, LiDAR, ultrasonic, etcetera. However, all those technologies need more computing power, and more computing power means more power consumption. At the end of the day, you will find yourself holding a really bulky device. What we are doing with the WeWalk smart cane right now resembles the traditional white cane: it is as slim as a traditional white cane and as light as a traditional white cane, but we added obstacle detection for upper-body obstacles using an ultrasonic sensor. Again, as Patrick mentioned, all those technologies have downsides and upsides. That's why we didn't just drop in an obstacle detection sensor; our engineers improved it. We reduce the disadvantages of the ultrasonic sensor, for example, by narrowing its field of view. That is important, also because we don't want to give visually impaired people too much stimulus. That's how we developed the WeWalk smart cane: we improved the white cane rather than replacing it with another technology.
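For readers curious how ultrasonic ranging and the "not too much stimulus" principle can fit together, here is a minimal sketch: distance derived from the echo's round-trip time, plus a small debounce filter so one noisy reading doesn't buzz the user. The threshold and hit count are illustrative assumptions, not WeWALK firmware.

```python
# Hedged sketch of ultrasonic obstacle alerting (assumed parameters).

SPEED_OF_SOUND_M_S = 343.0

def echo_to_distance_m(round_trip_s):
    return round_trip_s * SPEED_OF_SOUND_M_S / 2.0   # sound travels out and back

class ObstacleAlert:
    def __init__(self, threshold_m=1.5, consecutive=3):
        self.threshold_m = threshold_m
        self.consecutive = consecutive
        self.hits = 0

    def update(self, round_trip_s):
        """Feed one echo measurement; return True only when an alert should fire."""
        if echo_to_distance_m(round_trip_s) < self.threshold_m:
            self.hits += 1                # another in-range reading
        else:
            self.hits = 0                 # reading out of range: reset
        return self.hits >= self.consecutive

alert = ObstacleAlert()
for t in (0.009, 0.006, 0.006, 0.006):    # ~1.54 m, then three ~1.03 m echoes
    print(alert.update(t))                # False, False, False, True
```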
ANAT NULMAN: Thank you, Kürşat. Patrick?
PATRICK BURTON: Yeah. First of all, I'm just blown away by both of your innovations. Your products are so cool. So, to speak to your first question, about whether guide dogs and white canes are still valuable: of course they are. First of all, all technology can fail, even white canes and guide dogs. White canes can be misplaced. Guide dogs can be, I don't know, distracted by food. Maybe. But that said, I don't think either of those solutions is going anywhere. And I love what Kürşat is doing by building on the tried and true white cane, because that's been around for, I think, 100 years, and for good reason: it works. There are so many gaps in computer vision technology, like I mentioned before, that a white cane or a guide dog can navigate with ease. So I don't think those solutions are going anywhere, and I don't think they need to. But on the flip side, there are some places where they fall short, where you, Nathan and Kürşat, are both doing great innovation to fill in those gaps. They can't protect you from head-level obstacles. They can't read signs. They can't help you find a misplaced phone charger, for instance. I think all of our solutions fill those gaps in different and innovative ways. But when I conduct focus groups with the blind community, the overwhelming sentiment I get is, you'll take my cane or my guide dog over my cold, dead body. And we're not trying to take those solutions away, because, again, they're great innovations, and there are so many gaps in current technology that I don't think are going to be solved anytime soon. So I'm thankful for those.
ANAT NULMAN: Thank you, all three of you, for your insights. You know, the question I asked, are guide dogs or white canes obsolete, was a little bit of a trigger question. I think none of these solutions are really trying to replace the traditional tools, but I wanted to hear your responses and how each of your companies tackles this issue and sees it differently. On the other hand, and along the lines of tradition, there is orientation and mobility training. It's a critical component, and most blind people, at least in the US, who use the cane usually go through orientation and mobility training. Why is it still critical, even with the advancement of new technology? How do you see your products complementing traditional mobility training methods like O&M? And how can we ensure that users are fully equipped to maximize the benefits of both traditional training and the new technologies? Kürşat, let's start with you, because I know you have a very exciting solution for both orientation and mobility trainers and learners.
KÜRŞAT CEYLAN: Yeah. So we have two different products. The first one is the smart cane itself, for visually impaired people: it detects obstacles, gives navigation, and more. But we also have another product designed for O&M trainers. As you know, right now all O&M training sessions are based on observation. When I leave the classroom, unfortunately, my trainer is not able to monitor my mobility experience. But thanks to WeWalk's built-in sensors and AI technology, we designed a dashboard for trainers where they can see my mobility experience, because when I go out, I'm not swinging my white cane randomly. There are techniques, and there are some essential metrics as well: for example, cane angle, swipe angle, swipe count, walking speed, walking step count, and confidence level. All those metrics are really essential to developing better mobility skills. With our technology, O&M trainers are able to see all those metrics for their trainees. That is the technology we developed. And although technology has advanced so much, O&M training sessions are still so important, because at the end of the day, as a blind person, I have to manage my own orientation. This is my responsibility. I cannot hand it off to a technology. I cannot say, okay, there will be a technology and it will take me from A to B, so I will not need any orientation skills. No. At that point, most probably, you will feel really miserable. O&M sessions give you self-confidence and the ability to participate in social life. That's why, even as technology advances, we will keep getting O&M training.
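To illustrate how metrics like these could fall out of simple sensor streams, here is a hedged sketch deriving swipe count and sweep width from a sampled cane sweep angle. The reversal-counting approach and all names are assumptions, not the WeWalk dashboard's actual computation.

```python
def swipe_metrics(sweep_angles_deg):
    """sweep_angles_deg: sampled left/right cane angle over one walk."""
    swipes = 0
    extremes = []
    direction = 0
    for prev, cur in zip(sweep_angles_deg, sweep_angles_deg[1:]):
        step = cur - prev
        new_dir = (step > 0) - (step < 0)        # +1 sweeping right, -1 sweeping left
        if new_dir and direction and new_dir != direction:
            swipes += 1                          # sweep reversed: one swipe completed
            extremes.append(prev)                # angle where the reversal happened
        if new_dir:
            direction = new_dir
    width = (max(extremes) - min(extremes)) if len(extremes) >= 2 else 0.0
    return {"swipe_count": swipes, "swipe_width_deg": width}

# Two full left-right sweeps between roughly -30 and +30 degrees:
print(swipe_metrics([-30, 0, 30, 0, -30, 0, 30]))
# {'swipe_count': 2, 'swipe_width_deg': 60}
```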
ANAT NULMAN: Thank you, Kürşat. Nathan, do you have anything to add?
NATHAN DEUTSCH: Yeah, I have something small to add. I really appreciate what you're doing with WeWalk, Kürşat. In our case, one of the best ways to get the device into the hands of people who can really use it is through those who are doing the training and needs assessments. That's where we interface with training; they're the people who can respond to the user's needs. So we're providing glasses through a network of distributors and user associations that can provide this training, and we want to make sure there's support along the way. That's all I would add in our case. But really exciting work you're doing there, Kürşat.
ANAT NULMAN: Thank you. Thank you both. Our session would not be complete without a discussion on AI. And while we could spend hours and days talking about AI, today we unfortunately have time to only scratch the surface. But it's still important to mention it. What role will AI play in navigation and obstacle avoidance for people who are visually impaired in the next five to ten years, since it's changing very rapidly? What are some ethical concerns and safety guardrails that need to be in place as AI becomes more and more integrated into mobility technologies and into our everyday life in general? And how do you see the evolution of AI-driven assistive devices impacting the independence and quality of life of users? Patrick, do you want to share your thoughts on that?
PATRICK BURTON: Oh, please. Another nerd question. I'll try not to spend hours or days talking about it. So for BenVision specifically, a big challenge we have right now is that whether you're in a dining room or a conference hall, the objects and obstacles you'd want to be aware of are going to be entirely different. So how do we account for the changing context of every scenario in our algorithm? That presents a big challenge, and a bigger challenge is, how do we intuitively communicate all that information with sound? Right now, there's a huge burden placed on our audio director, who's doing an amazing job, but she can't possibly design intuitive sounds for every single object in the world. So those are two big things AI can help us with, because, as you know, AI is a workhorse: it can account for all of those different contexts, and it can actually compose different sounds to match every single object in the universe. To your question about ethical concerns, data privacy is probably the biggest one, right? It presents a dilemma, because on the one hand, collecting data from users' cameras, especially the users of our products, is actually the best way to directly and exponentially speed up our ability to improve the models we're using. But on the other hand, that obviously carries some serious privacy and ethical concerns. What I can say is that some of our contemporaries are flipping the choice over to the user and giving them the option: if they opt in to sharing their data, they get free credits or premium services. I think that's probably the right way to handle it. In terms of looking out to the future, I think that, honestly, visual large language models are going to kind of take over all those other methods I mentioned, like object recognition, segmentation, even depth estimation. I think eventually visual LLMs will be smart enough to handle all of those different things. Maybe that will make everything else obsolete, but it's just a matter of getting enough training data and then compressing the models so that the earth doesn't blow up from all of the compute power required to run them.
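The opt-in pattern Patrick describes can be captured in a few lines. The sketch below gates any upload on explicit, revocable consent while local processing always runs; every name and function here is hypothetical, not a real product API.

```python
class DataSharingPolicy:
    """Hypothetical consent gate for model-improvement data collection."""
    def __init__(self):
        self.opted_in = False

    def set_consent(self, opted_in: bool):
        self.opted_in = opted_in              # users can opt in or out at any time

    def handle_frame(self, frame, upload_queue):
        """Always process the frame locally; queue an upload only with consent."""
        result = run_local_inference(frame)   # hypothetical local model call
        if self.opted_in:
            upload_queue.append(frame)        # e.g. rewarded with credits
        return result

def run_local_inference(frame):
    return f"processed {frame}"               # stand-in for a real model

policy = DataSharingPolicy()
queue = []
policy.handle_frame("frame-1", queue)         # no consent: nothing uploaded
policy.set_consent(True)
policy.handle_frame("frame-2", queue)
print(queue)                                  # ['frame-2']
```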
ANAT NULMAN: Well, thank you for those succinct thoughts, delivered without spending hours; I think you've covered many interesting considerations about AI. Kürşat, picking up on what Patrick said about users opting in and out and whether they're interested in sharing their data, I know you guys are doing something like that as well. Could you tell us more?
KÜRŞAT CEYLAN: Yes, of course. I mentioned the technology developed for O&M trainers; of course, trainees can easily opt in or opt out. Thank you for bringing that up. And as Nathan underlined, form factor is so crucial for developing assistive technologies. I believe in the upcoming years we will see smaller sensors and smaller processors, and they will allow us to build more ergonomic devices. Also, in terms of AI, as you know, edge computing will be so important. It will allow us to aggregate all the data from different sources and different sensors, as Patrick mentioned. I believe in the upcoming years we will have devices in our hands that won't even need to connect to the internet; they will be able to run AI models on the device itself.
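As one concrete, assumed example of the on-device inference Kürşat anticipates, the sketch below runs a small quantized model with the TFLite runtime, with no network access involved. The package and the model file name are assumptions; any edge inference runtime would illustrate the same point.

```python
# Hedged sketch of edge inference: the model runs entirely on the device,
# so no sensor data ever needs to leave it. Assumes the tflite-runtime
# package and a small quantized model file exist.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="obstacle_classifier.tflite")  # hypothetical model
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Stand-in input of the right shape; on a real cane or headset this
# would come from an onboard sensor, never from the network.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                          # inference happens on-device
print(interpreter.get_tensor(out["index"]))
```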
ANAT NULMAN: Thank you. Thank you, Kürşat. Nathan, you know, your company is based on a form factor that is fashionable, and it goes along the lines of what Kürşat said: with new advancements in technology and in AI, there is a growing opportunity to make devices smaller and less obtrusive. So tell us more about your thoughts on that.
NATHAN DEUTSCH: Yeah, I'd just say we're in both an AI revolution and a wearables revolution, and this is where smaller is better. To us, those things go hand in hand in our design. What excites us is efficiency and also discreetness, discreetness in terms of how we work together with our devices. We're starting to see interesting stuff coming out in the wild, not just from us. But I would say that neither AI nor wearables is going to be the magic bullet for people living with vision impairment in and of itself. Still, there's a lot going on in the space now, and a lot of know-how around new technologies, so it's a great time to be out there doing this stuff.
ANAT NULMAN: Thank you. Thank you all. Such an interesting discussion. As we're wrapping up our session, I want to give you an opportunity to share a few final thoughts and takeaways for our listeners. So, Nathan, let's start with you, and let's keep it really short.
NATHAN DEUTSCH: Sure. So one final thought on the future of navigation and obstacle avoidance: as a company, we're staying focused, eyeing up one challenge at a time. We hope that by doing that, we can fit better into the larger picture, the larger ecosystem. That's the key take-home from my conversations with people testing the product. Of course, there are always people who want everything, but we also get feedback that some of these problems need to be solved one step at a time, and we take that seriously. It's smart eyewear and wearable technology that we're building on, so it's very important that we make a usable tool and stick to our mission of providing a discreet device that people can wear and use in a dignified way, without it screaming, 'I've got an assistive device on my face.'
ANAT NULMAN: Patrick?
PATRICK BURTON: Yes, thank you so much for having me on this panel. My parting thought is that universal design, sometimes called inclusive design, is the idea that when we design for the inclusion of the most marginalized communities, everyone benefits as a whole. That's central to our philosophy, and it's been the core idea behind the design of both Speakaboo and Ben. You know, curb cuts, automatic doors, closed captioning, even large language models, they all have their roots in accessibility. And I think that's something important for everyone, especially the sighted and fully abled people who might be listening, to remember: when you support this industry, you're not just giving to a charitable cause. You're not just helping a marginalized group of people. You're actually investing in the future of humanity as a whole.
ANAT NULMAN: Beautifully said. Beautifully said. Kürşat?
KÜRŞAT CEYLAN: Whoever wants to develop technology for visually impaired people shouldn't forget community contribution. It's crucial. At WeWalk, we're always in touch with our community: from Australia to Canada, the USA to the UK, we are partnering with leading blindness organizations. We developed our smart cane with all those contributions, and we see the positive impact on our products. I strongly recommend that everyone seek community contribution.
ANAT NULMAN: Thank you. What a lively discussion. I really appreciate you taking the time to share your insights. I hope that our listeners and viewers have enjoyed this conversation. And thank you again, Sight Tech Global. We are truly privileged to be here as part of the panel, and we're looking forward to seeing you next year.
[MUSIC PLAYING]