DESCRIPTION
Dedicated devices versus accessible platforms? Victor Reader Stream versus iPhones and Alexa? How will AT companies take advantage of a world with cloud data and edge computational power, AI algorithms, and more demanding customers than ever? HumanWare, eSight, and APH are already looking far into that future.
Speakers
Greg Stilson, American Printing House (APH)
Gilles Pepin, HumanWare
Charles Lim, eSight
Moderator: Betsy Beaumon
SESSION TRANSCRIPT
[MUSIC PLAYING]
BETSY BEAUMON: Hello, everyone. This is going to be a really exciting panel talking about dedicated assistive tech devices and the future. I think it's an important area; a lot of people outside of assistive technology aren't even aware how robust this industry is and how much this specialized arena has influenced the general tech market and tracked some of the really important advances in that market. And at the same time, these companies have overcome a bunch of technical challenges to be able to effectively serve their users.
And I’m excited about how much this assistive tech industry and these three gentlemen, in particular, are looking into the future. And they are going to share how they’re advancing access through cloud computing, mobility, and AI. So why don’t we jump right in? Can each of you share one of your latest product innovations that gives us an idea of the future in your world? Greg, why don’t we start with you?
GREG STILSON: Great. So at the American Printing House, we really focus on advancements to help students learn and play more inclusively, right? And so one of the barriers that has always existed is access to tactile material. Many of you may not know that the production of tactile textbooks, Braille textbooks, and tactile graphics (raised line graphics or graphics with different tactile materials) is quite a manual process.
From the time that we get the book at the American Printing House from the publisher to actually creating a Braille copy of that book, it's a massively manual process, and it also creates a significant time delay before the student gets access to that book. And so one of the projects that we're working on right now, and we put out a public request for information for this, is the idea of a dynamic tactile display. And the idea with this display is to basically utilize a lot of the mainstream capabilities for AI object recognition, scene detection, and image filtering, and really focus on bringing that paper textbook into an e-textbook kind of concept for blind or low vision students.
Today, you can gain access to Braille textbooks and electronic Braille textbooks through single-line Braille displays. They’ve been around for a long, long time where you have one line of essentially anywhere from 12 to 40 or 80 cells, depending on what you’re doing. But that’s a single-line, really linear approach.
The idea with this is that you essentially would get a full page of refreshable Braille and tactile graphics that a blind or low vision student could use to participate with their sighted peers at the exact same time. And so this is something that's going to take a lot of our assistive technology expertise, I would say, but also really focus on the innovations that are happening from the inclusive design work that the mainstream is doing as well.
BETSY BEAUMON: That’s great. I find these advances really, really exciting and exciting for people all around the world. And looking forward to more about that. Gilles, how about one of your latest product innovations?
GILLES PEPIN: Yeah. Thank you, Betsy. And just to piggyback on what Greg just said, we’ve done a lot of work in the last two years with APH. And in my view, access to electronic documents in Braille is really, really important. And we’ve developed the Chameleon and the Mantis with APH.
And we also developed the NLS e-reader, the Braille e-reader that they will introduce soon. And this really, really expands access to electronic documents. So we're very proud and very excited about all these things that are happening with tactile access to broaden access to information.
On my side at HumanWare, I think we've been involved in a number of very exciting projects. We are living through very exciting times in terms of new technologies and things that are happening out there. But the thing that I'd like to focus on is really what I would call the next generation of our VictorReader Trek. The VictorReader Trek is a GPS-based navigation device that really helps people go from point A to point B.
And we're building right now what we call the extended navigation tool. That will become a personal assistant. It's really a mobile hand-held device that still has GPS, but the newest GPS technology with more accuracy. And we're also including cameras and sensors.
And the dream is really to provide a tool that will let users discover their surroundings, be more aware, and be more in control of what's around them. And that's very important. And this is made possible because of new technologies, especially computer vision with artificial intelligence. Computer vision has taken huge steps forward because of AI, and that enables us to do much more than we were able to do before.
Natural language understanding is another one. We're going to be adding indoor navigation, again working with APH; they've created a company called GoodMaps, and there is a lot happening there. The features on the device will also be cloud based, which will expand the possibilities of these devices.
So just to give you a few examples, one of the things that has been asked about again and again is micro navigation. We're able to bring someone very close to their destination, but what about those last 40 feet to get to the door, to get to the place where I want to be? And I think with this, with a camera and sensors, we can really get the information for people to get exactly to the place they want to be.
How do you cross a street? How do you find pedestrian crossing information? If you're waiting at a corner for a bus, how do you make sure that the bus that is coming is the right one for you? These are all problems that we're going to be able to solve. Indoor navigation: how do you get to the restroom in a restaurant?
This device should find all of the little details and information about where this restroom is. So I think you get the idea. But finding your white cane, finding your phone, finding your keys are things that will be possible to do with this device.
So it's really exciting to see all the potential that AI is bringing to devices like that, things that we would never have been able to do with standard computing today. I just want to say that version one is likely to be a bit more modest than what I just described. But we will get there eventually. And I think it's very exciting for people.
BETSY BEAUMON: That’s fantastic. I love it and love to hear about where that’s going. Charles, speaking of where people are going, talk about some of your new stuff.
CHARLES LIM: Yeah. Thank you very much, Betsy. It's very good to be here with you, and talking to my colleagues Greg and Gilles about personal assistants and tactile innovation is definitely very exciting. For us at eSight, we're a wearable technology company. And these are exciting times for wearable technology and AI, at least where we stand right now.
So what we do, just to give you guys a quick overview, is make a head-mounted wearable device that's worn daily by thousands of individuals with low vision and legal blindness. It allows our users, for example people with visual acuities from 20/60 to 20/800 caused by different eye conditions, to actually see better. Wearing eSight, many achieve 20/20 acuity. So we are one of the most advanced wearables in this space.
And what makes us truly unique is that in addition to improving sight, we have now ensured 100% mobility retention. The device does this by keeping the wearer's natural peripheral vision available with our patented bioptic tilt. So wearers can easily adjust the amount of enhanced vision versus natural vision that they would like to access, making it easier for them to navigate to new places.
And when it comes to our latest product innovation that we're working on: this summer we released our fourth generation device, eSight 4. It's now powered by the cloud and integrated with our mobile app capabilities and AI. So that creates a lot more excitement for our next generation device, because not only does the new device provide a new form factor that offers greater mobility, it's also wireless, with vision controls integrated directly into the headset.
So it allows you to use it seamlessly and makes it a lot more comfortable. But because it's also connected to the cloud, it's future-proof, which allows us to update the device anytime we would like with new applications and to leverage all of the new applications and software that the cloud has to offer, some of which my colleagues Gilles and Greg have also mentioned. On top of that, with the mobile revolution, we've embedded a mobile app onto it as well.
So you can actually integrate and use a lot of the different applications on your mobile phone with your eSight device. A good example would be sharing the apps that you're using on your phone with your eSight HMD and vice versa, giving some of your caretakers the ability to see what you're doing and track what you're doing, so that if you need some help, they can literally help you from half a world away. So I think there have been a lot of advances in the cloud, mobile, and wearable space.
And it’s very exciting. I think we’re only at the beginning of it. And I think there’s a lot more excitement to come.
BETSY BEAUMON: Yeah, I agree with you. And I think you can all hear from all three of the panelists that there's this really interesting mix of their specialized knowledge and specialized technology and taking advantage of the latest in mainstream tech. So I guess I'd love you guys to dive in a little deeper: how have some of these latest technologies, including mobility, cloud computing, and the other things you've been mentioning, actually changed your approach to your products?
And did they create a different kind of canvas to work on? How do you look at leveraging and staying on top of the latest technology? And maybe, Gilles, we’ll start with you on this one.
GILLES PEPIN: Well, I think, Betsy, for the last 30 years– and Greg was telling me don’t go back that far. But we’ve always been building our products on mainstream technology. We do not develop base technologies. We integrate technologies in products to break barriers for people with vision loss. And I think what’s important is we’re seeing that even today– and for me, AI today is really a big, big, big stepping stone for us for the future.
And so it is exciting. And mastering these technologies for small companies like the companies in this field is always a challenge, but it's so exciting. I'm looking at what we've done in AI, for example, in the last two years. I've talked about micro navigation to get to the specific door that you want to go to. And with AI, we were able to do something that we would never have been able to do with conventional algorithms.
The example I wanted to give is that we have built a data set of pictures. We took about 10,000 pictures of doors and civic numbers in different countries and different settings, varying the lighting and the way we would take the pictures. And then we just had some people annotate those, to identify, let's say, the doors and the civic numbers in each of these pictures. And we were able to just feed that information, these pictures, into a pre-trained AI model.
And what comes out of that is that we are now able to recognize over 90% of the doors when someone just scans around with a camera; the system will be able to identify the doors. So when you talk about a canvas for the future, I do believe that AI is extremely powerful. And it is a setting for us and a base for us to really build our future.
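[What Gilles describes is essentially transfer learning on a custom detection dataset: annotated photos fed into a pre-trained model. The following is a minimal sketch of that kind of workflow, assuming torchvision's COCO-pretrained Faster R-CNN; the class names, stand-in data, and training settings are illustrative assumptions, not HumanWare's actual pipeline.]

```python
# Rough sketch: fine-tune a pretrained detector on annotated door photos.
# Classes, data, and hyperparameters are illustrative assumptions only.
import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + "door" + "civic number"

# Start from a model pretrained on COCO, then swap the box predictor so it
# outputs only our classes -- the "feed pictures into a pre-trained model" step.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Stand-in for the annotated dataset: each item is (image, {boxes, labels}).
# In practice these would come from the ~10,000 annotated photos.
def fake_sample():
    image = torch.rand(3, 480, 640)
    target = {"boxes": torch.tensor([[120.0, 60.0, 340.0, 420.0]]),
              "labels": torch.tensor([1])}  # 1 = "door"
    return image, target

loader = DataLoader([fake_sample() for _ in range(8)], batch_size=2,
                    collate_fn=lambda batch: tuple(zip(*batch)))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device).train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

for epoch in range(2):
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # detection losses per component
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

[Most of the effort in practice goes into the annotated data rather than the training loop itself, which is consistent with the panel's point about data collection being the bottleneck.]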
And one aspect that I think is extremely important with AI is sharing these data sets. I mean, the biggest challenge with AI is to get this data available to train models. Now we’ve been working lately with Microsoft Research and London University.
There's a project called ORBIT, which is building a data set of the specific pictures that blind and low vision people really need to have access to. And collaboration: we're at the point with AI where the phase is collaboration and sharing data instead of competing. Competing will come later. But for now, I think the priority is to build a great data set of information that we can feed AI models with.
BETSY BEAUMON: Yeah. That’s great. As somebody who spends a fair bit of time talking about some of the challenges that AI presents for people with disabilities, it is also very notable how many amazing opportunities there are.
And I think you are all highlighting them. Charles, say more about how you see the canvas of these mainstream technologies. You’ve mentioned I think a lot of examples already just in your intro. But I’d love to hear more about how you approach it.
CHARLES LIM: Yeah, for sure. Thank you, Betsy. This is Charles. So one of the major technological advancements that we launched with eSight 4 is the move over to the cloud. So now the device is not only a hardware device. It's actually more of a connected device.
And we did this for a few reasons. The first one is to better leverage our own data, which we can now capture with our cloud back end, and to provide more value to the consumer. So to Gilles' point, data is really a gold mine for us right now; understanding how people use the device and using that to power our AI algorithms later on are critical. That is enabled by the cloud infrastructure we have built, which is one of the mainstream technologies.
The second reason is that we would like to leverage and integrate some of the more relevant technologies that have come out, for example in mobile and beyond. So, for example, voice assistant services like Siri are very important for us. And because we have now integrated with the mobile app, we have access to Siri. But we also have access to embedded GPS sensors and other capabilities of mobile technology that would otherwise be unavailable or very cost prohibitive for us to develop on our own and embed into our device.
So I think the ability to leverage a lot of the advanced services and applications offered by the mainstream, like the cloud providers, and integrate them into our technology is unique. And that opens doors to applications for our users that help improve their quality of life in ways that would otherwise not be possible, like access to good quality facial recognition, object recognition, and mobile GPS, just to name a few.
BETSY BEAUMON: Yeah, awesome. Greg, how are you looking at things maybe differently than you guys have in the past?
GREG STILSON: Yeah. Thanks, Betsy. With the tactile sort of surface and tactile learning process, it’s a really hard problem to solve. And the reason I say that is because what we’re trying to do is basically create a way for a student to dynamically understand a print graphic, a print picture, something that allows for this impromptu learning that happens so frequently with sighted kids in the classroom, right?
So a teacher in the classroom can say, hey, look at this. I'm using this new app, and look at the way that this atom is formed or look at the way that the mitochondria here looks, and things like that, right? All of that today has to be provided to a student far, far, far in advance so that a tactile graphic can be handmade, right? So you have these tactile graphic experts who are doing so much manual creation of this kind of content. And that creates a delay in learning and doesn't allow for the impromptu learning process.
What we're trying to do is use this dynamic tactile surface in a way where it can be used for that impromptu learning process. Our dream with this is to be able to connect to it via HDMI or casting and utilize the existing filters that are used in a lot of the most common photography apps, the way that AI edits photos, and things like that. Because if you hand me, and I'm a blind person myself, a picture of somebody's face and say, OK, this is somebody's face, and I just cast that onto this tactile surface as a tactile graphic, I'll never know what it is because there's too much detail for a tactile surface.
So we then have to filter that out so that I'm only really seeing the outlines and the bare minimum needed to get a sense of what it is, right? And so for us, we're really relying on a lot of the mainstream filters that are out there to do those types of things. But we're also relying on artificial intelligence to look at these images and say, OK, this image would be optimized with this filter, so you want to strip out X, Y, and Z so that the tactile learner can understand it.
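[As a rough sketch of the kind of filtering Greg is describing, a classical edge-detection pass can strip a photo down to bare outlines before it is sent to a tactile surface. The file names, thresholds, and pin-array resolution below are assumptions for illustration, not APH's actual processing chain.]

```python
# Rough sketch: reduce a photo to bare outlines for a tactile rendering.
# File names, thresholds, and the pin-array resolution are illustrative.
import cv2

image = cv2.imread("diagram.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), 0)               # suppress fine texture
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)  # keep strong outlines only

# Downsample to an assumed tactile pin-array resolution and binarize:
# 1 means a raised pin, 0 means a flat pin.
tactile = cv2.resize(edges, (120, 80), interpolation=cv2.INTER_AREA)
pins = (tactile > 0).astype("uint8")

cv2.imwrite("tactile_preview.png", pins * 255)             # quick visual check
```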
I think in addition we're also looking at building our own data set. We've got thousands and thousands of tactile graphics that we've produced manually here at the Printing House, right? And so what that gives us is the starting image and then the end result of what the tactile graphic looks like.
Our dream is to be able to build these data sets and run models on the start and the end result, so that as time progresses, we can learn what types of images need to be filtered in what way to create readable tactile graphics, right? And down the road, our goal is to do the same with the way that Braille production is done, because much of Braille production today involves transcribers who are hand-moving pieces of Braille to align them with other pieces of Braille.
And the reason I say that is, look at something like spatial math, right? With long division you have multiple lines of math on top of each other to create a problem that you're looking to solve. Well, a lot of that doesn't happen magically. A lot of that is done by hand by a transcriber.
So if we can learn what the transcriber does in response to the way that a print document looks, our dream is that we'll be able to sort of automate that and produce this kind of content automatically. So those are a couple of examples of how we're starting to look at using AI in our tactile journey here.
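[One way to picture the "start image, end result" data set Greg describes is as paired samples that an image-to-image model could later be trained on. Here is a minimal sketch of such a paired dataset; the directory layout, file naming, and class name are hypothetical, not APH's actual tooling.]

```python
# Minimal sketch of a paired "print graphic -> finished tactile graphic"
# dataset; the directory layout and file naming are assumptions.
from pathlib import Path
from torch.utils.data import Dataset
from torchvision.io import read_image

class PrintToTactilePairs(Dataset):
    """Each item pairs a source print graphic with the tactile graphic a
    human transcriber produced from it, ready for an image-to-image model."""

    def __init__(self, root: str):
        self.print_dir = Path(root) / "print"
        self.tactile_dir = Path(root) / "tactile"
        # Assume the two folders use identical file names for each pair.
        self.names = sorted(p.name for p in self.print_dir.glob("*.png"))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        source = read_image(str(self.print_dir / name)).float() / 255.0
        target = read_image(str(self.tactile_dir / name)).float() / 255.0
        return source, target
```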
BETSY BEAUMON: That’s great. Anybody want to jump in on any other examples in this area?
GILLES PEPIN: I just want to come back to what Charles said, which is extremely important: data sets, or data collection, are the biggest challenge that we're going to face using AI. And I think eSight is in a very good position to capture a lot of information from their users. The problem is privacy.
I mean, imagine that even with our device, the device that I described, we start taking pictures. Imagine the person wants to take a look at, say, three different credit cards and wants to identify each one. If we used that data later on, we would be using that person's credit card information. So privacy is a challenge.
We're going to have to get approval, almost on an individual basis, to use those pictures or those images. So this is a step that everybody will have to work through to get as many pictures as possible to really feed into our systems, our models. But this is a big challenge.
BETSY BEAUMON: Others? Charles, do you have a comment?
CHARLES LIM: Yeah. I fully agree with Gilles. And this is also one of the challenges that we have, because our users have different eye conditions. Some are centered more around, I guess, peripheral vision. Some are more central vision issues.
And the ability for us to understand them and help them better actually requires a lot more data. So I think the ability to capture data based on their eye condition and how they use the device is one of the challenges that we're facing, in addition to privacy and security, of course, in helping them have a better user experience. So I couldn't agree more. It's definitely one of the challenges of our time if we're going to leverage a lot of the deep learning capabilities we have.
GILLES PEPIN: And when you look at it carefully, I mean, the pipeline, the chain of actions you have to take to get from the data set to the end result, it's complex. But it's not that difficult. Once you control that chain, that pipeline, it's possible.
But getting the data is costly, slow, and difficult. So that's why we need to share. But we also need approval from the people who will give us access to their information. And I think that's the biggest hurdle to getting to the thousands and millions of different pictures and images that we can use.
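[As a small illustration of the per-person approval Gilles mentions, one simple approach is to record a consent flag with every captured image and filter on it before anything enters a training set. The record fields and values below are assumptions, not any vendor's actual schema.]

```python
# Sketch: keep only images whose owners explicitly approved training use.
# The record fields and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CapturedImage:
    path: str
    user_id: str
    consented_to_training: bool  # recorded when the user grants approval

def training_pool(images):
    """Return only the images cleared for model training."""
    return [img for img in images if img.consented_to_training]

captured = [
    CapturedImage("img_001.jpg", "user_a", True),
    CapturedImage("img_002.jpg", "user_a", False),  # e.g. a credit card photo
    CapturedImage("img_003.jpg", "user_b", True),
]
print([img.path for img in training_pool(captured)])  # ['img_001.jpg', 'img_003.jpg']
```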
BETSY BEAUMON: Yeah. So it seems to me you all are doing really cool stuff. What fun things to work on. And you’re making people’s lives better using pretty much the latest technology that’s out there and the latest thinking, whether it’s concerns about data or the solutions to dealing with it. Say more about working at your organizations, and what that’s like, and even where maybe you have at times been ahead of mainstream technology.
GILLES PEPIN: Who wants to take this?
[LAUGHTER]
I can start a little bit by saying, I mean, I think this market, this community is always, always a very early adopter of new technologies. So we’ve been very often ahead of technology. And that’s where, Greg, I start talking about the past. But I remember OCR at the beginning of the 1990s.
The accuracy was low. You needed a board that you would add to your computer. It was all key. It was difficult. And it was very expensive.
But the need of blind people was so great because they needed a system, a reading system to be able to access printed documents by themselves. So they were ready to pay. They were ready to accept low accuracy. They were ready to go forward with technology that was not mainstream ready.
Same for GPS. Same for e-books. I think, Betsy, you've been very much involved in that. E-books were a great example right at the beginning, in the 1990s again, when e-books were being talked about in the mainstream. But they were nowhere near completion.
But the DAISY Consortium and digital talking books were there. And they were starting to be distributed. So these are examples of us being ahead of the mainstream and using technologies that had just appeared but were not primetime mainstream at that point.
So they are more expensive. They’re not as good. But they’re very useful for our community.
GREG STILSON: Yeah, this is Greg. I remember very clearly as a blind person myself being an early adopter of GPS navigation tools, right? And this is before every car had a GPS system inside of it or everybody had a GPS system on their phone, right? I remember very clearly I was using a Braille note taker. It was a very large kind of bulky device.
And it was wired via a serial connection to a GPS receiver that was larger than most cell phones today. And I had to clip it onto the collar of my shirt so that it would stay facing the sky, right? And when I crossed the street, I looked like I was going to take off into orbit because I had wires coming off all over me, right? But the reality is that our population, our community, sees the value. We see the life changing impact.
I for the first time knew that there was a Walgreens on this corner or whatever store that I had no idea was there, right? And so for us, we’re willing to accept some sort of early adopter kind of challenges when we see the potential and the value, right? As Gilles said, looking at those audio books and things like that, blind people were using audiobooks before audiobooks were cool because we needed to get content as quickly as possible.
And so for us, I echo what you said Betsy in that the work that we do in our field you see a direct correlation between the amazing technology that you build and the lives that you are changing. And that’s something that I want to make sure is very clear, that the work you do has a significant impact in this field.
CHARLES LIM: Agree.
BETSY BEAUMON: It is one of the most gratifying things about all of this work. So I appreciate you saying that. Charles, do you have anything to add to that?
CHARLES LIM: No, I fully agree with my colleagues, Gilles and Greg. And, actually, I would like to emphasize what Greg just mentioned. I’ve worked in a lot of different industries. And this is the industry that actually makes a big difference in terms of changing people’s lives.
It was a first for me. For example, we were helping one of our users, and all of a sudden the parents started to cry and shake my hand, because it changed things so that their kid can now go back to school and can actually read. I've done a lot of stuff in my career, and that is definitely very touching. So I think this industry is doing a lot of good for society. And I think this is one of the best things that we can do as a whole.
BETSY BEAUMON: Yeah. And that is such a great note from all of you to end on as we get toward the end of this session, because I think this is not only cool work but important work that you're all doing. Could you each maybe briefly share your most hopeful thought about the future and the work you're doing? And maybe we'll start with Greg and jump back to the rest.
GREG STILSON: I always tell people that today is one of the best times to be a blind person in technology. And the reason I say that is because the things that we've been doing for years in assistive technology are starting to be recognized by the mainstream companies, by the Apples, the Googles, the Microsofts, and the Amazons of the world, as having value, and the things that we're doing help more than just this niche population, right?
So for me, my hope is that that trend continues, and that they're going to continue to work with us and to see the value in what we're doing to help larger populations, because you never know: something that can help a blind kid may help another child with a different disability, or a child with no disability at all. That's something that we're starting to see even in the video game population, with the move towards everyone being able to game now. So for me, the work that we've been doing for years, it's extremely validating to see that entering the mainstream population and to be able to continue working with the larger scale companies of the world that partner with us down the road. So that's my hope.
BETSY BEAUMON: That’s awesome. Charles, do you have any brief hopes as you look at the future?
CHARLES LIM: Yeah. So I’m actually very optimistic about the future because I think in this day and age for us, there are so many technological innovations that can help people in any way, shape, or form that I would echo what Greg said. It’s one of the best times to be blind because I actually have one of my good friends who is blind, and he has leveraged technology to reach his full potential, both at work and in terms of what he’s looking to do writing books, making sure that he’s making a difference in people’s lives.
So I think all of the work that everyone is doing in this industry is going to be very helpful for society moving forward. And I think we should be all very proud of that. And because of that, I’m very optimistic for the future.
BETSY BEAUMON: Thanks. And Gilles, last word to you.
GILLES PEPIN: Oh, I completely agree. I mean, we are living through exciting times, not only for blind people because of technology, but for those of us working in technology; working for the right reasons and bringing innovation to this community is just fantastic. And we're seeing, with AI again but also with other technologies that are out there today, more and more opportunities to break barriers that are still there after many, many years, and we're trying to break those barriers.
So I believe that technology is accelerating as many have said before. And AI is a great accelerator. And I think we’re going to get to fantastic results with all this.
BETSY BEAUMON: Yep. I agree. Thank you all so much for all of your work and for your expertise today on the panel. Take care.
GILLES PEPIN: Thank you.
GREG STILSON: Thank you.
CHARLES LIM: Thank you.
[MUSIC PLAYING]