DESCRIPTION

This session will spotlight the trajectory of HumanWare and how current technological trends impact the future of product development. Join Eric Beauchamp, Francois Boutrouille, and Peter Tucic for a discussion of how HumanWare’s 32 years of developing blindness and low vision technology have evolved, and how that evolution will continue with the advent of artificial intelligence and machine learning. Participants will develop a better understanding of how the challenge of providing products that solved singular tasks has now shifted to integrating the complexities of deep learning technology to interact with dynamic objectives in real time.
- Eric Beauchamp, Director of Product Management
- Francois Boutrouille, Emerging Technology Leader
- Peter Tucic, Brand Ambassador of Blindness Products
SESSION TRANSCRIPT
ROBERT FRAWLEY: OK, great. So hello and welcome to Plotting the Course, Delving into the Past, Present, and Future of Assistive Technology for the Visually Impaired Community Through the Lens of Artificial Intelligence. My name is Robert Frawley. And on behalf of Sight Tech Global, I’m excited to have you join us today.
In this 30-minute breakout session hosted by Humanware, you’ll hear from Eric Beauchamp, Director of Product Management at Humanware, Francois Boutrouille, Emerging Technology Leader at Humanware, and Peter Tucic, Brand Ambassador of Blindness Products at Humanware. Before we begin, just a couple of housekeeping items. This session is being recorded and will be available post-event on our Sight Tech Global YouTube channel. If you have any questions or comments throughout this breakout session, please use the Q&A box. And we will send those to our panelists after this session. And with that, please take it away, Peter.
PETER TUCIC: Thank you so much, Robert. And thank you to everyone for being here. It is definitely a different type of conference. I know many of us have attended quite a few virtual events. So we’re really glad you are here. And Humanware, myself included, would like to thank the Sight Tech Global conference chairs and everyone for giving us the opportunity to be here.
We do have an intro slide up, which basically has our Humanware logo and our website. But I do want to talk a little bit about what Humanware does. For those of you, again, who aren’t familiar with us, we are an assistive technology company. Our main focus is to improve the ability of those who are blind or visually impaired to interact with mainstream products, as well as just the world around them.
We have been around for a long time. We just had, I believe, our 32nd anniversary here at Humanware in some form. So we’ve been around for quite some time. And as the brand ambassador of blindness products, as Robert gave you my title, I am totally blind. And I’m a user of many of our blindness and speech products. And I’ll be talking quickly about three categories of products that we make.
I’m really glad to have Francois and Eric here, because once I give an overview, a little bit of who we are, we’re going to really talk about what we do: how we have brought products to market over the years and what we look at, as well as taking a glance at how the world around us is changing, and how artificial intelligence, specifically machine learning, deep learning, and these different types of emerging technologies, are going to really play a role in the future of assistive technology. And I mean, more specifically, for those of us who are blind or visually impaired.
At Humanware, we have really three main categories when it comes to the sort of product set that we offer. The first of these is braille or blindness products: braille displays, or refreshable braille products. I did not want to make everyone sit through slide after slide of what these products are. But when we talk about the braille side of things, braille is very linear. A braille reader reads one line at a time; it’s a very linear way of taking in the world.
And what we try to do with refreshable braille products is not only encourage braille literacy, well, just literacy in general, so that somebody can actually read the world around them. With a refreshable braille display, we can read anything that is on a computer screen, anything that is presented through a screen reader, such as VoiceOver or JAWS or NVDA. But also, it’s a way to take information and make it more usable. Because a lot of times when information is presented to a sighted user on a screen, it can be very visually organized. And we try to take that and help organize and present that information in a linear way.
So refreshable braille products are a big part of what we do. When we talk about refreshable braille, these products work in conjunction with mainstream devices, such as an iPhone, an iPad, or an Android tablet, as well as working independently. So we do have independent Google-certified braille tablets, which we refer to as note takers. And those products are more of your full standalone braille-first products that give access to a braille-first word processor or email client and a braille-first planner, as well as improving the usability of third-party applications.
So again, when we launch something like a phone app, such as, let’s say, Amazon, there are tabs across the bottom of the screen. One of them may be for you, and another may be titled with something else. And we can use first-letter navigation to quickly jump to certain parts of the screen, right? We’re improving that usability for a braille user, for somebody who reads the world one line at a time.
And this is actually something we’ll touch on at the end of this discussion: how do we determine what information is relevant, and how do we improve that? So Humanware is very much into promoting braille literacy.
We also have a speech product category when it comes to our devices. One of our most popular products worldwide is something called the Victor Reader Stream, part of the Victor Reader line of products. These are products that are used by a wide variety of individuals, but mostly by those who are newer to vision loss, or who are not as comfortable or familiar with touchscreen devices. That’s not at all to say that is our only user group. But it is something we build to allow an easy way to access auditory content. So a very easy way to consume podcasts, read books, listen to music, and record notes.
In addition, we also have a GPS component to that, a standalone blindness-oriented GPS system. And that is something Francois, who is far smarter than I am on these products, will touch on: how we got there, and how we develop these pieces.
But that sort of is our speech side of things. So again, a way for somebody who is blind or visually impaired to consume audio content via a push-button device, in addition to having the ability to work with GPS for instructions, whether we are walking, in a car, or looking at points of interest around us. So that’s kind of that second strand of what we do.
And the third is our low vision product category. Low vision can affect anybody, but primarily, it is something that is age-related, as we know. That, again, does not mean that is our only focus. But we make products that range from devices that can magnify what is directly in front of you, such as a recipe book or a sewing or knitting pattern, to very intelligent devices that, much like our braille devices, incorporate mainstream Google-certified tablets. So we have the same sort of approach on the low vision side.
So you can combine not only magnifying the book that you are reading, but then jumping into Google Classroom or using a third-party application with screen magnification, in addition to distance viewing and other pieces. So on the low vision side, we have the standalone products. We also have handheld magnification, which would encompass spot reading, quickly looking at information such as a bill or some mail, and doing that in a very portable way.
So the point I’m making is, when we look at the product set that we have, we are making products for an age range of anywhere from four or five all the way up to 100 plus, as well as every facet or every level of vision loss. From somebody who is new to vision loss, whether that be age-related or from a genetic condition, to somebody like myself who has been totally blind their entire life, which is who we make braille products for, as well as products for somebody who just finds using mainstream devices more difficult. Right?
We want to improve that usability of those third-party applications. So a lot goes into how we get to what we do. And I just wanted to give a brief overview of what Humanware does and the different levels of products that we produce, to guide us into the roundtable discussion with Eric and Francois. And I guess it’ll lead me in; I’m going to ask some questions to really help Eric and Francois along here. They don’t need my help, but it’s really to help guide the discussion. Because what we want to talk about is how does, and how will, artificial intelligence affect what it is that we have built and what we will build.
And I guess, to start it off, and I think it’s very relevant to our audience: Eric, when we talk about where you came from, and Francois is in the same boat, how did you go from being somebody who was more on the development side of things? Because I know you were a programmer, and you’re really good at that. How did you go from someone who built or programmed to now being in a role where you’re more of a visionary, where you’re adapting or reading the market and figuring out what people need?
Because I think a lot of people out there are not familiar, or not as familiar, with assistive technology. So can you talk a little bit about how you made that shift into AT, and into looking at not just the programming, but the analyzing side of things?
ERIC BEAUCHAMP: Thank you very much for that great introduction, Peter. Now I understand why we call you the brand ambassador. You have this beautiful speech.
PETER TUCIC: No, no, no. Yeah, exactly.
ERIC BEAUCHAMP: So my name is Eric Beauchamp. I’ve been with the company for 10 years now. I’m a computer engineer by trade, and before I joined the company 10 years ago, I developed a lot of applications, ranging from aircraft simulation controllers to software in the financial world. So I was doing all these kinds of programming.
And then I ended up at Humanware. And at Humanware, we develop products that help people out. And I was always curious about the how, the what, and the who we’re developing these products for. So by asking these questions, I started being interested in the who.
And when I started understanding the who, I started getting excited about these products. And I wanted to do more. And I felt good about myself because I was helping people who were visually impaired be more productive in their lives. So that’s how I made the switch from the R&D part of things to the marketing and product management side. I became the product manager for low vision devices; you described our portfolio very well, from the simple electronic magnifier range to the more complex and intelligent devices out there. But it was my interest in the what, the how, and the who we do it for that gave me my passion for this industry.
PETER TUCIC: And for a second, I thought you were talking about the band The Who. I thought you just really got into the band The Who, and that got you into AT. And I know, Francois, you had that development side too. And now you’re in a very unique role with Humanware. Can you talk a little bit about how you got to where you are right now?
FRANCOIS BOUTROUILLE: Yes, I’ve been with the company for the last 18 years now. And I was there at the very beginning of the development of our GPS navigation tool called Trekker. And for all those years, we have tried to [INAUDIBLE] to simplify the device, to be more efficient and more usable by different types of people. Even older people can use this type of product.
But at some point, we realized that it was great, we could lead a person from point A to point B, but it was not enough. Playing with a GPS receiver and digital maps was not enough. We could enhance the user experience even more.
So we decided to take a step back and say, OK, what doesn’t exist in this industry yet, but does in academic fields, in various projects led by researchers in AI? And we were wondering how we could take those new technologies not already in the mainstream and find the most promising ones to introduce into our own products. So that’s what we have been doing.
And I would call it a job of applied research: sitting between the academic world, startups, and the work of integrating the technology into real products. So we are in the middle of all that, trying to offer some new type of user experience at the end.
PETER TUCIC: I love it. And I think when we look at that, it wasn’t that we decided, but that we started to realize that technology wasn’t just about solving a problem. We used to build products, and we still do, that solve the problem of accessing information. How do we more easily browse the web? Or how do we more easily look at the board at the front of the room?
But can both of you, and I’ll throw it to Eric first, talk a little bit about our first forays into AI? When we started thinking about ways to not just make the font bigger, but to actually use artificial intelligence to improve our products, or start to build it into them. And I know one that comes to mind is something like the Diamond Edge font, if you want to touch on it. How did we start to implement that? And this is kind of the past side of this discussion, before we get to where we are now and what we’re looking at in the future.
ERIC BEAUCHAMP: Yeah, and to answer that question, you have to understand what artificial intelligence is. And I looked it up; a very simple definition is that artificial intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. So right away, you know, think about that first part of that definition and our products, like you said, the Diamond Edge. What is Diamond Edge? It’s really a marketing term that we use here at Humanware.
But it really identifies the OCR part of things. OCR is an acronym for Optical Character Recognition. So what we do in our intelligent products is that when we put something, some reading material, underneath a camera, we take a capture of it. And that intelligence, that program, the algorithm, will go through that capture and detect all the writing on the material, and then replace that writing with a computer font.
And that computer font can be magnified as big as you want without any loss of image quality, because you’ve replaced that font. You’re not relying on the quality of the camera. You’re really relying on how well the algorithm or the program did at detecting all the characters on the material.
And by doing that, it gives you the best contrast and the biggest zoom factor. And then what you end up with is that you can have the machine read it out loud to you. So that first part of the definition of AI, replacing what a human can do, you have it right there. You have the reading part of things, the seeing part of things.
And then the second part is more about the future that’s coming. I’ll read it out to you as food for thought. And I love this little definition: the term may also be applied to any machine that exhibits traits associated with the human mind, such as learning and problem solving. So that is the future of AI.
But you’re right, Peter, it all started somewhere. And that’s kind of where it started. And we had several products in our past that were the start of character recognition and the Diamond Edge.
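The flow Eric describes is capture, recognize, re-render. Here is a minimal sketch of that idea, assuming the open-source Tesseract engine via pytesseract rather than HumanWare’s proprietary Diamond Edge recognizer; the font file and layout constants are illustrative assumptions.

```python
# Minimal sketch of capture-recognize-rerender, assuming the open-source
# Tesseract engine via pytesseract in place of HumanWare's proprietary
# Diamond Edge recognizer. Font path and layout values are assumptions.
from PIL import Image, ImageDraw, ImageFont
import pytesseract

def rerender_page(image_path: str, font_size: int = 72,
                  fg: str = "yellow", bg: str = "black") -> Image.Image:
    """OCR a captured page and redraw it as crisp, high-contrast text."""
    # Step 1: detect all the writing on the captured material.
    text = pytesseract.image_to_string(Image.open(image_path))

    # Step 2: replace that writing with a computer font, at whatever size
    # and contrast colors suit the user's vision.
    font = ImageFont.truetype("DejaVuSans.ttf", font_size)  # assumed font file
    lines = [ln for ln in text.splitlines() if ln.strip()]
    line_height = int(font_size * 1.4)
    page = Image.new("RGB", (2000, max(1, len(lines)) * line_height + 40), bg)
    draw = ImageDraw.Draw(page)
    for i, line in enumerate(lines):
        draw.text((20, 20 + i * line_height), line, font=font, fill=fg)
    return page
```

Because the output is drawn from a scalable font rather than camera pixels, the zoom factor and contrast colors can be changed freely on re-render, which is exactly the property Eric highlights.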
PETER TUCIC: Right. And I think it’s neat because it augments what optical character recognition already did, right? We didn’t invent OCR. We just made it better, took it to the next level for somebody who has low vision. Using the ability to take a picture of text, we not only turn it into that font, but then what if you want to read it by column, or by line? Right? Making it more applicable, more usable, by implementing that AI.
I always think of that example because we took something that already existed and applied some intelligence to it to make it more functional. [INTERPOSING VOICES]
ERIC BEAUCHAMP: And adapt it to the needs of the user. Changing the contrast colors is one example also.
PETER TUCIC: Exactly. And Francois, from your side, you know, we’ve had GPS products for over 15 years, actually longer than that. How did you see that evolve? How did we bring some of that intelligence in, to make it more than just turn right, turn left? Which is enough intelligence in itself, but I know even on those products we’ve done a lot to make that better.
FRANCOIS BOUTROUILLE: Yes, yes, you are right. So let me give you an example from an ongoing project that we have. We want to guide the person to point B, but at a certain point, maybe the address is not accurate, or the GPS position has some inaccuracy.
And we need to do better in order to get the person to reach the very final destination, which is the door. In that case, we want to understand the outside world, the outdoor scene, in order to understand where the buildings are, and where the possible doors are.
So we have developed an approach based on deep learning to understand all that. And basically, we need to do some work on the data. So we have a complete AI pipeline in order to accomplish that AI task.
So we collect some data. And I will share with you a few slides about that.
PETER TUCIC: That’s perfect because that’s where I was headed. How do we collect that data? How do we get kind of that information? It’s easy to say, well, I think I want to solve this problem. But how do we gather or collect that info anyway?
FRANCOIS BOUTROUILLE: Yes. So here, what we want to do is understand how the person can reach a door and enter a home or a store or whatever address they want to reach. And we have collected thousands of images of doors, buildings, and house numbers. We tried not to be too specific to a given area; we need a broad, varied database with many, many images. And we did that first by collecting data in the Montreal area in Canada, looking at residential areas, commercial areas, even the suburbs, and trying to get the maximum that we could just walking around the streets, a bit like Google Maps, but specifically for the pedestrian.
And then, here you can see that based on those images, we trained some models. Because at the root of the AI we are looking at is the training of models, which are going to be the pieces of software that will make decisions and will [INAUDIBLE] to accomplish a certain task. And in that very specific case, it’s to detect the doors that you can see here, with house numbers.
And in that case, we are able to make a better decision and say to the user, OK, you are not far from that location. You can walk maybe 30 feet straight, and you will reach the entrance of the home.
And doing that is good, but we have to deal with the very difficult task of handling images where the world is not perfect. The door is not always clearly visible in an image. Sometimes we have some occlusion, what we call occlusion: it can be a stair railing, or it can be a person who was in front of the door when we collected the data. So we have many different situations where it’s difficult, and we have to take care of that.
So it’s not an easy path. But when you get the right data, you can offer something which is very interesting.
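As a rough illustration of the detection step Francois outlines, here is a sketch of how a trained detector might score doors and house numbers in a single camera frame. It assumes a torchvision Faster R-CNN fine-tuned on the kind of street-level dataset he describes; the weights file, class labels, and threshold are hypothetical, not HumanWare’s actual model.

```python
# Rough sketch of the detection step, assuming a torchvision Faster R-CNN
# fine-tuned on street-level images of doors and house numbers. The weights
# file, class labels, and threshold are hypothetical, not HumanWare's model.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

LABELS = {1: "door", 2: "house_number"}  # assumed classes (0 is background)

def detect_entrances(frame_path: str, weights_path: str,
                     threshold: float = 0.6):
    """Return (label, score, box) detections above a confidence threshold."""
    # Standard detector architecture; in practice it would be trained on the
    # varied dataset Francois describes (residential, commercial, suburban,
    # including partially occluded doors).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=3)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()

    img = F.to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]

    # The score threshold is one simple guard against false positives from
    # occlusion (stair railings, pedestrians) and other imperfect scenes.
    return [(LABELS[int(lbl)], float(s), box.tolist())
            for box, lbl, s in zip(out["boxes"], out["labels"], out["scores"])
            if float(s) >= threshold]
```

In a real pipeline, detections like these would be fused with the GPS fix to produce an instruction such as “walk about 30 feet straight ahead to the entrance.”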
PETER TUCIC: But the variables come into play as well, right? I mean, it could be snowy. Or even if you have that perfect door, I think there are so many different variables that the AI has to account for.
FRANCOIS BOUTROUILLE: Exactly. Yes. But you know what? What we want to do is maybe do less, but do it better. I think one of the principles that we are working with is that if we want to accomplish too much, if we are too optimistic about AI, maybe we will not succeed. We need to be very pragmatic and say, OK, let’s try to introduce the technology the best way that we can by tackling a few interesting problems, but not everything at the same time. It will take years, but we are starting.
PETER TUCIC: And I guess that leads me on, because, again, we don’t want to run out of time, and I think about where we are, right? We have a wide range of products for a wide range of users. And we have begun, and have successfully implemented, a lot of intelligence, whether that is improving mainstream technology or developing our own specific technology. And when we look at moving forward, I love what we’re delving into, what you’re looking at, Francois, with learning how to traverse very complex environments, right? Being able to walk and identify where doors are, or signage, or things like that.
But when we look at where this is going, and taking the last couple of minutes here to think about the future, how is it going to impact us as a blind and visually impaired community? And not only how will it do that, but will it come in steps? How do you foresee it coming into play? And how many years do you think we’re looking at?
Because, proverbially, we’re always hearing two years from now or three years from now. But we heard that four years ago, right? So what are your thoughts in terms of how this technology will ultimately help build these types of products?
ERIC BEAUCHAMP: I think I can build on what Francois was saying previously: we need to ask the right questions of the right users. What do the users want to do with it? And where can we bring that technology? Francois was talking about a subset of objects that we can detect. Which objects can we detect and be good at detecting? What would make a difference to the users in their day-to-day lives?
In the example Francois gave, it was going from point A to point B. And then what makes it a better UX, or user experience, is to bring that person to the door directly. And how can we bring that person to the door? By detecting the address, for example. And then what happens at the door? Open the door. Where do we go? We go inside.
PETER TUCIC: Right.
ERIC BEAUCHAMP: What’s going to happen next? Is it indoor navigation? Is it scene detection? Is that where we’re going to go?
I think the technology will develop with time. And we can take that mainstream technology, apply it for the visually impaired community, and help them be more effective in their day-to-day tasks. I don’t know if you want to add something to that. [INTERPOSING VOICES]
PETER TUCIC: What I think I hear you saying, and Francois, I’d be curious if you would agree with Eric, is that it’s going to be in steps, right? We identify. And I love what you’re saying about starting with less is more. Let’s perfect the task we can perfect, is what I’m hearing.
And then we start to move on. Because, again, if we bite off more than we can chew, we’re never going to get there, right? It’s kind of in phases.
FRANCOIS BOUTROUILLE: Yeah. I totally agree. I think there are two points here. The first is how people will interact with the system. We didn’t talk about the dialogue and personal assistant agents that can help the user. But there is something which is very important: it’s the speech recognition capability, the natural language understanding.
So it means that a few years from now, and it’s already starting, people will interact in a much easier way through natural language understanding. They will be able to establish a kind of dialogue with the system. It will be less difficult than operating with buttons and menus, et cetera.
And I think, at the moment, what such an assistant does is, for the user, a bit of a passive situation. The person requests something, and the system gives an answer. I see it as passive.
And in the future, it will be much more proactive. People will take the initiative to ask the system specific requests: where is there an empty seat in this restaurant? Can you guide me to the approaching crosswalk in the street? You see, this type of thing.
So I think it will take some years. To be honest, I think it will be more than two years, probably three to five years, before it is much easier to work with. But that is the way I think the development will go: better interaction, a better user interface, and having the user at the center of the system. The user will be controlling the system, not the other way around.
ERIC BEAUCHAMP: I agree with Francois. How the user will interact, and how we’re going to present information to the user, is very important. You can have all the technology and all the intelligence, but if you pour out all the information at the same time, what is the user going to do with all that information? There’s a lot of thinking that has to go into how to present that information to the user. I totally agree with that, Francois, yeah.
PETER TUCIC: Thanks so much. Well, we’re almost out of time. And I will say, as somebody who is totally blind and travels all over the place, I look forward to the day I can go to an airport gate and easily locate a seat to sit in. And I have no problem asking for help. But it’s that whole point of completing that final task, right? How can I simply, without any assistance, locate a restroom?
Some of those tasks are very, very simple ones. They’re not seen as major problems. But they’re things that we face. And I think that this sort of AI, and combining that intelligence with the experience we’ve put into our products over the past 32 years, is going to start to make some major differences. So thank you so much, you guys. Thanks for being here. And I really appreciate it.
I hope this was helpful. I know it went by fast. We’re like on the bullet train over here. But thank you to the Sight Tech Global conference chairs again for giving us the opportunity. We hope everyone found it helpful.
And for more information, you can visit us at www.humanware.com. And if there’s anything else anybody wants to say, please make it happen, Francois. Thank you so much. And thanks so much to Eric.
ERIC BEAUCHAMP: And I think we should pass it on to Robert to finish it off.
ROBERT FRAWLEY: Great. Thank you, guys. This was a great breakout session. So I will be sending any Q&As that your team received to you via email. So be on the lookout for those.
So everyone, we’re going to be closing out the session now. Please make your way back to the main stage by going to SightTechGlobal.com/event. And we hope you enjoy the rest of the show.
ERIC BEAUCHAMP: Thank you.
PETER TUCIC: Thanks, guys.