Inventors invent: Three new takes on assistive technology
DESCRIPTION: Inventors have long been inspired to apply their genius to helping blind people. Think of innovators like Louis Braille and Ray Kurzweil, to name just two. Today's ambitious pioneers have the cheap sensors, high-speed data networks, and data and compute "in the cloud" to do more than ever before. In this session, three founders present products that have just entered or will soon enter production, which they believe will improve the lives of people with disabilities.
NED DESMOND: Thank you, Will. This is Ned Desmond from Sight Tech Global. And we’re here today with three inventors, founders, pioneers, trying to figure out the next generation of assistive tech and how it will impact the communities that it hopes to reach.
First up, we have Keith Kirkland, who is with Wayband. He invented a product called Wayband and is part of a team called WearWorks. So Keith, can you tell us in five minutes what the Wayband is all about?
KEITH KIRKLAND: Yeah. Happy to. Hi, this is Keith, everyone. I’m the co-founder at WearWorks. And at WearWorks, we build products and experiences that communicate information through touch.
And so our Wayband, our first product, is a wristband that gently guides you to an end destination using only vibration cues, without the need for any visuals or audio. And so the way it works is you get a Wayband device, which I have on my wrist right here. It’s a very small, curved rectangular device, very discreet, that sits on your wrist quite comfortably, if I say so myself.
And you connect it to the Wayband app, which is a mapping application that we built custom for the Wayband. And just like Google Maps, you tell it where you want to go. You can use a screen reader or VoiceOver, you can do audio input, or you can type if you have a lot of functional vision. And after that, unlike Google, you then put your phone away. And the device gently signals on your wrist which way you should go, left or right.
And so the way it does that is we’ve invented this thing called the Haptic Corridor. What the Haptic Corridor is, it’s a 360-degree tactile experience. When you’re going the right way, if you can imagine a slice missing from a pizza pie, or Pac-Man’s open mouth if you play video games, Pac-Man’s open mouth is the right way to go. Everything else is some varying degree of wrong.
And so what we’ve done is we built a 360-degree experience. When you’re facing the right way, you feel absolutely no vibrations at all. And when you turn 180 degrees the wrong way, you feel the strongest vibration we can give you. Everything in between is a gradient.
So let’s say you need to make a left turn. You’re walking on a straight line and you feel no vibration. Everything is going perfect.
You get to your corner and right before you get to your corner, your Wayband will give you a confirmation buzz, beep beep, saying that, hey, you collected that dot on the corner. Now we’re going to navigate you and point you at the next dot.
And that next dot is a left-hand turn. So now you feel a vibration on the wrist. When you turn to the right, let’s say, you sense immediately that the vibration starts to get heavier. And you instantaneously know that you’re turning in the wrong direction.
So when you turn toward the left, you start to feel the vibration get lighter, and when you feel nothing, you know that’s your new path. And so that’s how we can navigate you turn by turn through an entire city without any visual or audio cues. And that’s also how we helped the first blind person run 15 miles of the New York City Marathon without sighted assistance, for the first time ever.
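For readers who want a concrete picture, here is a minimal sketch of the Haptic Corridor logic Keith describes: a silent corridor around the target bearing (the missing pizza slice), with vibration intensity growing the further you turn away from it. The corridor width and the linear intensity ramp are illustrative assumptions, not WearWorks’ actual parameters.

```python
def corridor_intensity(heading_deg, target_deg, corridor_half_width=15.0, max_intensity=1.0):
    """Map angular deviation from the target bearing to a vibration intensity.

    Inside the 'Pac-Man mouth' corridor (within corridor_half_width degrees
    of the target bearing) intensity is zero; outside it, intensity grows
    linearly, reaching max_intensity when facing 180 degrees the wrong way.
    """
    # Smallest angular difference between heading and target, folded into [0, 180]
    error = abs((heading_deg - target_deg + 180.0) % 360.0 - 180.0)
    if error <= corridor_half_width:
        return 0.0  # facing the right way: no vibration
    # Linear gradient from the corridor edge up to 180 degrees off course
    return max_intensity * (error - corridor_half_width) / (180.0 - corridor_half_width)
```

With this mapping, a wearer who drifts right of a left-hand turn feels the vibration "get heavier," and turning back toward the corridor makes it fade to nothing.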
NED DESMOND: Pretty remarkable, Keith. And what’s the story as far as production of the device goes? When will it be in market?
KEITH KIRKLAND: Yes. So we’re looking to launch in market by June of next year. Right now we’re in preproduction. We’re selecting our manufacturers and getting that all secured and ready to go to make our first 5,000 units.
We’re also in the middle of launching a pilot program. Our pilot starts on November 30, if anyone wants to sign up. We’d love to have some members of the community, of course, as many as possible, to test out the device.

Ultimately, our goal is to build the best experience that we can. It’s both a sighted and non-sighted experience, so you don’t necessarily need to be blind to use the device. We optimized it for the blind experience, but made it so that it’s accessible to everyone, with sight or without. And we’d love feedback from anyone in either community who’d be interested in testing out the product.

So we’ll launch the pilot in November and take all the feedback. And you know, we want you to be really honest with us. Ultimately, the goal is to make the best thing we can make. And that starts with telling us where the experience fails and how we can make sure that it doesn’t fail when we have 5,000 users using the device all around the country. Yeah.
NED DESMOND: What is the price point on it, Keith?
KEITH KIRKLAND: So the Wayband’s going to be $249. And right now it’s on sale on preorder for $179. So the first 1,000 units we’re selling for $179. We’re advertising those as exclusively as we can to members and allies of the blind and visually impaired community.
NED DESMOND: Great. Well, if we could step back a little bit and talk about the development of this product. Were blind or low vision people involved in the development of the product?
KEITH KIRKLAND: Yeah. So since the very beginning, we’ve been working with the blind and visually impaired community. It started with our first advisor, Marcus Engel, who is a writer, author, and public speaker. He has written several books around health care and this space.
We also have some Paralympic athletes that we’ve been working with: Charles-Edouard Catherine and, of course, Simon Wheatcroft, who ran the marathon. And plus, we’ve been talking to organizations: the National Federation of the Blind, the American Printing House for the Blind, Lighthouse Guild affiliates in various states around the country, the Royal National Institute of Blind People in the UK, and a few other organizations around the world. So we’ve been deeply embedded in understanding not only what the challenges are here in the United States, but also what they are in developing markets globally.
NED DESMOND: A lot of the use cases you’ve mentioned involve athletics, in particular, running a marathon. Do you think this is the strongest way to think about Wayband? Or is it just as useful in day-to-day functioning and navigation?
KEITH KIRKLAND: Yeah. So running a marathon was never our goal. You know, that was actually Simon’s goal. He saw what we were doing.
And said, hey, if this device can be ready for the marathon in six months, I’ll run with it. And we were, like, OK. Let’s see what we can do.
But the way we built it really was, initially, we built it because we wanted to get people out of their phones and back into the real world. And then we saw a wonderful application for the blind and visually impaired community because of their challenges around autonomy and mobility. And so this is really kind of like your everyday navigation device.
Meet a friend for coffee at a new cafe. Go and record a route that you go to all the time, just so you can have that extra confidence that the device is kind of there with you, guiding you along the way, in case you make some mistakes. Ultimately, we just want to reduce a bit of the mental math that’s necessary for a person who’s blind to just go outside and do something.
NED DESMOND: Are there advantages to a haptic-based system like this, as opposed to one that’s based more on verbal communication, turn left, turn right?
KEITH KIRKLAND: Yeah. And so right now, the way we see it is we have this skin that is almost entirely unutilized as a communications channel. What we’ve been doing is only communicating these haptic notifications that we get from our cell phones. And now smartwatches are taking it to a slightly different level with a bit more distinction. And so what we see as the biggest advantage is that your ears or your remaining vision remain completely free to focus on what’s most important, which is keeping yourself safe.
And meanwhile, you get all of the information directly through your skin. It’s discreet. It’s personal.
You can design or you can edit the haptic volume so that the sensitivity feels great to you. Touch is a huge part of our experience. So we wanted to make sure that that was really adaptable based off of people’s personal preferences. And so yes, the overall advantage is that your ears and your eyes remain free to focus on the task at hand, which is keeping yourself safe.
NED DESMOND: And does it take very long to learn how to translate those haptic signals into movement? Is it pretty intuitive? Or is there a little bit of a learning curve with that?
KEITH KIRKLAND: Yes. So it’s ridiculously intuitive. We’ve tried it with thousands of people. Most people I give the device to, I tell them to spin around, and then I tell them to spin again and stop in the direction that they think the device is telling them is the right way to go. And 95% of people can figure it out within about 10 seconds.
So we really took the same principles (we’re all designers) that you use in graphic design to visually guide someone’s eye across a screen. And we took those principles and put them into haptics, in a field that we’re calling, effectively, haptic design. It’s utilizing the skin not just to give you buzz, buzz, buzz and have you eventually translate it, but to give you a very intuitive expression, like a punch or a kiss, that’s instantaneously recognizable. And that shortens the learning curve, because language acquisition is ridiculously slow.
NED DESMOND: Interesting. And then, from a technical standpoint, where is the navigation data coming from? Are you using an API on one of the map platforms?
KEITH KIRKLAND: Yeah, exactly. So we’re using OpenStreetMap for the mapping platform, which is an open-source mapping project. And then we’re running Mapbox routing on top of OpenStreetMap. So that’s how we’re giving you the route from point A to point B. And then we’re tying Mapbox directions into our patented haptic navigation system to point you the right way at the right time.
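A routing service like Mapbox returns a route as a sequence of waypoint coordinates; the app’s remaining job at each step is to compute the bearing from the user’s position to the next waypoint (the "dot" Keith mentioned) and hand that angle to the haptic system. The standard great-circle initial-bearing formula, sketched here for illustration (this is not WearWorks’ published code):

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees (0 = north, clockwise),
    from point (lat1, lon1) toward point (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    # atan2 gives (-180, 180]; normalize to [0, 360)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
```

The difference between this bearing and the wearer’s compass heading is exactly the angular error that a haptic-corridor scheme would translate into vibration.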
NED DESMOND: I see. Have you been in touch at all with the other mapping platforms like Google or Apple?
KEITH KIRKLAND: Yeah. Yeah. Actually, we built our system so that the mapping program can be interchangeable. And ultimately, what we find is that sometimes Mapbox works better and sometimes Google works better. So being able to have an aggregate of both systems will make for a more effective navigation solution.
And ultimately, our goal is to become, like, Waze for pedestrians, where, when there’s construction on a street, we can update the route based off of the actual roads that people have walked, so that we’re giving people the most updated version of the information. That’s going to come with time and machine learning, once we get a lot more data points. But what we’re starting with right now is replacing the visual and audio cues with haptic cues.
NED DESMOND: Time’s up, Keith. But that’s great. Thank you very much. That was Keith Kirkland and Wayband.
So now let’s turn to Andreas Forsland from Cognixion. Andreas, as I said at the outset, has a different product. But it’s right on the cutting edge of the science and the technology that might ultimately create a direct brain-to-device connection. So Andreas, could you take us through the work at Cognixion and tell us how it’s going to help people with disabilities? You have five minutes.
ANDREAS FORSLAND: Thanks, Ned. Absolutely. Yeah. So Cognixion has been focusing on understanding how the brain works from a language perspective. For the most part, Cognixion has been targeting all of our efforts towards creating neural prosthetics for individuals with speech disabilities, speech and motor. So if you think about someone like Stephen Hawking, or someone that has autism, or cerebral palsy, or has had a brain injury or a stroke, oftentimes these individuals’ central nervous system or even peripheral nervous system has been compromised, which renders communication difficult to impossible.
So what we’ve been working on is building a direct interface that combines EEG-based brain sensing, so dry electrodes that are placed on the scalp and are able to detect certain kinds of brainwave patterns in your mind. And we’ve coupled those sensors with an augmented reality headset. We project holograms onto a clear lens in front of the user’s face on the headset, allowing us to present buttons, whether those are application buttons for generating speech, like a predictive keyboard, or other things like smart home controls with Alexa integrations.
So think about someone in a wheelchair being able to communicate with a caregiver in their home through a visual interface that’s directly in front of their eyes, and being able to make selections in that interface through their mental attention on specific items that are presented to them. Those controls could basically be generating audible speech for someone that’s in their proximity. Because it’s also got cellular and Wi-Fi connectivity, we can plug directly into a home network. So if they have Alexa home controls or Google home controls, et cetera, they can essentially do anything that those speech AIs enable through those skills.
Where we are right now is we’re in our alpha prototyping phase. So we’ve been doing human factor studies with individuals. We have a fully integrated system that works. And at this point, we’re in the final processes of getting the product ready for design for manufacturing.
And early next year, we should be in a position to be able to do our first 100 units. And we’re initially identifying partners that are in the accessibility departments at major technology corporations or other corporations as well as research labs around the world. So the device will be CE certified. It will be available internationally.
And primarily, our first market is the research market. But for our second market, we’ll be pursuing FDA 510(k) clearance and CMS accreditation next year, which will enable us to then qualify for reimbursement through Medicaid, Medicare, and private insurance as an augmentative speech-generating device. There are existing reimbursement codes for speech-generating devices, which our technology qualifies for. So as soon as we have the CMS accreditation, we’ll be able to start to tap into Medicare and Medicaid as a reimbursement model. So that’s an overview of what we’re creating.
And how we’re able to actually do that scientifically is we are not just reading your mind, per se. What we’re doing is writing to the brain and reading the signals that are being written to it. What that means in layman’s terms is that there are specific frequencies we can attach, certain motion frequencies, like vibration patterns, so like [INAUDIBLE] in the eyes. So we can add frequencies, vibration patterns, to the graphics, or light-flicker patterns to the buttons.
And we can parse those frequencies out of your brainwaves. Your brain is an inherently noisy popcorn popper. [LAUGHS] And so we’re able to, at a very high degree of accuracy (100%, within around one second of latency), determine exactly what item you’re looking at, with no cameras. So it’s along the lines of something comparable to eye tracking, without the need for cameras.
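What Andreas describes is, in effect, the SSVEP (steady-state visual evoked potential) approach: each on-screen button flickers at its own tag frequency, and the frequency of the button you attend to shows up in your EEG. A toy sketch of the frequency-picking step is below; real systems use far more robust detectors (e.g. canonical correlation analysis across multiple channels), so this single-channel FFT version is illustrative only.

```python
import numpy as np

def detect_attended_frequency(eeg, fs, candidate_freqs):
    """Pick which flicker frequency dominates one EEG channel via FFT power.

    eeg: 1-D signal array; fs: sample rate in Hz; candidate_freqs: the tag
    frequencies attached to the on-screen buttons. Returns the candidate
    whose spectral bin carries the most energy.
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Power at the bin closest to each candidate tag frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]
```

On a two-second window of synthetic "EEG" containing a 12 Hz component buried in noise, the function picks 12 Hz out of a candidate set like [8, 10, 12, 15], which is the essence of turning visual attention into a button press.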
And so right now, we’re positioning the product as an accessibility device. We’ve learned through the literature, and from a lot of our scientific partners who have done research in this kind of stimulation of the visual cortex (basically stimulating the eyes through the optic nerve to the visual cortex and the occipital lobe), that there are a number of therapeutic and clinical opportunities and benefits that this device presents. We’re not going to go to market initially making any medical claims, other than that it’s a really wonderful measurement device that can provide alternative communication access. But I think as it relates to this community, from a vision perspective, it really opens the door to where neuroscience and vision technology can come together and understand what’s going on behind the eyes, not just what’s happening in front of the eyes.
NED DESMOND: That’s a great place to break, Andreas. Thank you. That was a lot in five minutes. Thank you for trying–
ANDREAS FORSLAND: You’re welcome.
NED DESMOND: –to compress it all in there. But let’s back up just a little bit. And could you describe this first generation device? What it is and who will be using it?
ANDREAS FORSLAND: Absolutely. So it’s a headband that includes a clear visor on the front. Essentially, it’s a mobile heads-up device: a head-mounted display where you put a mobile phone on the front and it reflects into a clear lens.
And so that reflection is what the user would see looking through the clear lens. So the person wearing the headband can see out. And then the person that’s outside, their communication partner, can also see their face and see their eyes.
The display itself will be projecting predictive keyboards and other kinds of holographic menus that the user can select from, so they can store phrases, shortcut phrases, or what we call one-shot phrases that can be sent to Alexa as key commands, like turn on the lights, turn off the lights, turn on the TV, text Mom, things like this.
NED DESMOND: So let me interrupt for one second. So what they’re looking at in that heads up display is being detected by your electrodes that are reading brainwaves. And then that is conveyed to whatever external system, like Alexa, it’s engaged with.
ANDREAS FORSLAND: Yeah. It contains two parts. The AR, or augmented reality, piece is the application. And that application running in AR is a speech-generating application.
So the BCI, or brain-computer interface, that is, the electronics and the electrodes on the back of the head that are part of the same headset, those are picking up your brainwaves and monitoring what items you’re paying attention to. And so based on the item that you’re visually paying attention to, we can detect that through your brainwaves and send a command to, say, provide a keystroke in the augmented reality, based on the item that you’re mentally paying attention to. So we’re not trying to just read your mind or what you’re thinking. We’re just really looking for very specific signals that we’re sending through the visual cortex.
NED DESMOND: And for this first generation product, who is the target in terms of the users who would benefit from this particular type of interface?
ANDREAS FORSLAND: Individuals– typically, it’s a literate population, that’s sort of a teen or adult population that either has a congenital, acquired, or a progressive neurological disorder. So someone that might have been born with cerebral palsy is a great use case for this. Someone who has ALS, which is a progressive disorder, someone who has multiple sclerosis or PSP, someone who’s had a stroke would be a good use case for this.
Individuals who have developed a level of literacy and understanding, there’s a cognitive awareness of what’s going on around them. And really what they’re looking for is a faster way to access the words that they want to express. If you can imagine not being able to use your arms or your voice to be able to communicate, you have to look for other senses to be able to control a digital interface to generate those expressions.
NED DESMOND: And what’s the hardest part of this technology? There are a lot of technologies involved in what you’re doing. What’s the toughest part?
ANDREAS FORSLAND: Two of the toughest parts. One is just signal quality over time. Because we’re designing this, like what Keith is designing, as something you can wear all day for multiple days. So it’s designed for long-duration wear. Most virtual reality and augmented reality headsets were not designed for extended wear. So we’re designing it for extended wear, which means that the physical properties of the headset need to be comfortable for long-duration wear.
As well, for the biometric, or EEG, signal processing, the signal quality and the signal-to-noise over time are difficult to get right. So that’s where we have to write pretty sophisticated machine learning models that run locally on the headset to adapt to various types of conditions that emerge. I’d say those are the two biggest concerns.
The third, which I think is endemic to all technology, is battery life. So how do you get something that’s as sophisticated as this to run on low power so that you could, in fact, actually use it for an entire day?
NED DESMOND: I see. And then last question, just to bring this back to the community of folks who are blind or have low vision, how would this technology potentially apply to their futures?
ANDREAS FORSLAND: You know, it’s interesting. We’ve been looking at the headset as a general-purpose accessibility platform. And the application that’s running on it is a speech-generating application. But because of the array of sensors that are available on the headset, if there are individuals who would like to use this wearable platform to develop assistive tech for vision impairment or vision enhancement, it’s definitely an open platform that we can partner with others on, to have a visual interface that could be controlled as well.
I think in the short term, though, it’s really a great platform for any clinical or scientific research into what’s going on in the eyes over time and how that’s affected when you’re out and about in everyday life. So studying what’s going on in your visual cortex, really targeted at researchers, would be the most appropriate short-term use.
NED DESMOND: Great. Thank you, Andreas. That was Andreas Forsland from Cognixion. And next up, we have Karthik Mahadevan from Envision– LetsEnvision. LetsEnvision was initially an app on the Android phone. But it’s taken a very exciting next step into glasses. So take it away, Karthik. You have five minutes.
KARTHIK MAHADEVAN: All right. Yes. So I’m Karthik from Envision. And I’m here today mainly to talk about the Envision Glasses that we just launched.
Basically, Envision has been a very popular app amongst the blind and visually impaired community for a while. And what it does is help them take images of things. We extract information from those images with artificial intelligence and speak it out to them.
So it can help them do things like recognition of text, recognition of objects, of faces, and so much more. And all of that is implemented in a design that is very easy to access with screen readers and [INAUDIBLE] and stuff like that.
When we had that out, a lot of the demand we kept getting from our end users was that it would be awesome to be able to access all that technology, but in a totally hands-free way. Especially if they’re out and about and they already have a cane or a dog in one hand, having to hold a phone in the free hand and take pictures with it is not the most ideal experience.
So what Envision has done is enter into a partnership with Google, who just came out with the second edition of Google Glass. We took that hardware and developed our software on it, to introduce to the market a product that is a combination of the Google Glass hardware and the Envision software. And that we call the Envision Glasses.
And what they do is, basically, it’s something like what I have on at the moment. It’s sort of like a plastic smart glass that sits on your face like a pair of spectacles would. And it has a camera toward the front of the glasses.
And there is a touchpad to interact with it. You can make use of the touchpad to take images of things around you. And then, depending on the kind of information that you’re looking for, be it things like text, objects, or faces, all of that will be spoken out to you through a speaker within the glasses themselves.
So one of the biggest use cases that people have for this is mainly the recognition of text. It can do recognition of text in over 60 different languages, including different scripts, like Arabic or Chinese, you name it. It can also very well do recognition of handwritten text. A lot of effort has been put into optimizing this application for as many text-recognition scenarios as possible.
And since it’s in a form factor that doesn’t occupy their hands, it’s very intuitive for them to use. And it’s also a form factor that they’re very OK with actually having on their face when they’re out and about.
One of the additional functionalities that we added to the glasses was the ability for them to make a video call. So if they’re ever in a situation where the AI is not able to offer them the information that they’re looking for, they can always make a call to a friend or a family member, who will be able to answer this call. And they get to see a video feed directly of what the glasses are able to see.
And they can just offer them assistance over audio. And they don’t have to actually be holding anything in their hands. They can just be out and about with a cane, and they can still be talking to a friend or a family member who can offer them assistance.
So we had a pre-order campaign of this. And we just shipped out 111 pairs of these glasses to the first customers, who did make a pre-order of this in November. And at the moment, Envision is focused on picking up a distributor [INAUDIBLE] network of it.
So by the time all you guys are able to hear this, everybody who wants a pair of these glasses will be able to make a purchase of it either on our website of Envision, or through a distributor in your area.
NED DESMOND: Thank you, Karthik. So just to help our audience here. Karthik is actually wearing these glasses right now. And they look like a pretty normal pair of glasses, actually. You wouldn’t really think they were anything fancy, unless you looked at the right side of his head where the normal arm for a pair of glasses is much thicker and contains the electronics, presumably. And there’s a little extra thick lens, like a little block of Plexiglas that hangs down just partially covering his eyelid on the right side, which is, I assume, where the camera is, right? Karthik, is that fair?
KARTHIK MAHADEVAN: No. That actually is the screen. And the camera is off, like, a circle that is next to it.
NED DESMOND: I see.
KARTHIK MAHADEVAN: Yeah. So it does have a small screen. But for our use cases, we totally ignore the screen, because the audience is the blind and the visually impaired. So you interact with the glasses entirely on the basis of audio. But you still have a screen in there.
NED DESMOND: So compared to a lot of other devices that involve a headset of some type to assist, this is a very lightweight presentation. And it almost looks like a very cool designer pair of glasses.
And these are, in fact, the Google glasses, right? The next generation of Google Glass? And what Envision has done is you’ve essentially taken over the functionality of the Google Glass and applied your own software and your own AI to make this work for the use case that you’re designing for. Is that correct?
KARTHIK MAHADEVAN: Exactly. Yeah. Yeah. So we took the hardware of the second edition of the Google Glass. And we put our software into it entirely.
NED DESMOND: I see. And then when you were trying to understand the transition from just being an app on a phone and delivering all the great functionality that you did on that platform, how was it to transition that to the glasses themselves and to give people a hands-free experience? That must have changed everything in a lot of ways.
KARTHIK MAHADEVAN: Yes, it did. So it’s not easy. Because I would say the capabilities of a smartphone have advanced quite a lot in the past few years, in terms of the camera, the processor, and all of that. So to optimize our algorithms from that to operate on the processor that is in a smart glass, that took a lot of effort.
At the same time, we were also working with different interactions, right? So for example, when somebody wants to read a document with our app, we do edge detection, because people often put a document on a table and then take a picture with their phone. So the edges of the document are, I would say, very well defined.
But when they take an image of a document with their glasses, they’re actually holding up the document in front of them. So you cannot really do edge detection on the document anymore, because they’re either holding the edges of the document, or there is enough stuff in the background that there isn’t a clear distinction. So we actually had to innovate a lot. And we were doing it with a group of beta testers who were sort of helping us out with feedback on the best experience to be brought to the glasses. So the technology is the same, but we had to put a lot of effort into bringing new design elements into it so that it can also operate on the glasses.
NED DESMOND: Now from a technical standpoint, is all of the data and the algorithm onboard the device, or are you in the cloud? How is it working?
KARTHIK MAHADEVAN: So it’s a combination of things. There is a trade-off that we offer to the end users. There is the possibility to do a few things offline, and that will happen, I would say, faster.
But then you can also do a bunch of things online, which improves your accuracy. So it is a trade-off between latency and accuracy. If you want the information quickly, you do it offline. But if you want it to be more accurate, then you do it online.
NED DESMOND: And is there a navigation feature built into this?
KARTHIK MAHADEVAN: Not yet. That’s actually something that we don’t intend to build ourselves. We actually intend to open up this whole thing as, like, a platform, so people who have a lot more expertise in building stuff like navigation can build on it. For example, if Keith tomorrow actually wants his app to be on the glasses, that is totally a possibility, to collaborate with other people. So there can be a whole array of complementary apps that could be added to the glasses in the future.
NED DESMOND: Great. Well, we’re out of time. But Karthik, that was great. Thank you very much. That was Karthik from Envision.
Thank you, Keith, Karthik, and Andreas. That was a great session. And I’m going to hand the show back to Will. Thanks again, gentlemen.