[MUSIC PLAYING] NICHOLAS GIUDICE: Thanks, Alice. And hello, welcome to everybody tuning in today. My name is Nicholas Giudice, and we'll be hearing about what's been happening with the Dot Pad since the team was here at Sight Tech Global last year. Excitingly for me, at least, I was recently able to get my hands on one of the Dot Pad units and put it through its paces. And I'll say, I was definitely impressed. And I don't say that just because I'm moderating this session. I'm actually difficult to impress. I'm a researcher who does a lot of work in the domain of tactile technologies. I'm also a blind user who uses a lot of tech in my daily life, and I have a pretty good sense of what works and what's hype. After working and playing with this unit for a while, I can say it's definitely the real deal. I'm really excited about what it means for the next era of tactile access, especially for graphics. I see all kinds of applications for blind folks in education, work, navigation, and different types of social engagement. So let's get on and hear about the cool stuff from our panel. We have Eric, the CEO, and Ki from Dot with us today. Hey, both. Welcome. ERIC JU YOON KIM: Hi. Thanks for having us. NICHOLAS GIUDICE: Yeah, I think we have a lot of cool stuff. Sorry. KI KWANG SUNG: Yeah, thank you for having us. This is Ki. NICHOLAS GIUDICE: I think there's a lot of great stuff to talk about. Let's jump right into it. Why don't you start by giving us an update on what's been going on with the Dot Pad since last year. ERIC JU YOON KIM: I think first Ki can give some software updates, and I'll do some hardware updates. KI KWANG SUNG: Yeah. So my name is Ki. I'm a co-founder at Dot. Today I'd love to give you some updates about the software and about the integration of the Dot Pad with screen readers.
Recently, we have collaborated with many screen readers, including NVDA, and we are also talking with JAWS about integrating our software. We are working with Google to integrate the Dot Pad with ChromeOS and, of course, with Apple VoiceOver. So today I'm really excited to give you more detail about our collaborations with all of these screen readers. We also have an update on generative AI: we are integrating our software with generative AI APIs to generate tactile graphics and to convert images into tactile graphics. NICHOLAS GIUDICE: Yeah, we'll dig into that a little more. That's very exciting. ERIC JU YOON KIM: Yes. And as Ki mentioned, we have a lot of software collaborations and software updates, but we also have a lot of hardware improvements. Just imagine a tactile graphic display as a monitor. In the visual world, there are different sizes of monitors, and we now have four different sizes of Dot Pad. The largest one is 832 cells, which you can imagine as more than 6,000 pins. The smallest one has 16 cells, so it's really compact. The current standard version is 320 cells, which is more than 2,400 pins. So like this, we are working hard to offer different sizes for different purposes. For example, we saw a lot of need from developers: when developers are coding, they want to see a bigger screen, and the same goes for some creative purposes. In collaborations with designers and developers, they also want a very big screen. And it's not just different sizes; we have our next tactile cell coming. It's the third generation, so we call it the D3 cell. The current version is the D2 cell, the second generation. The D3 cell is amazingly fast: more than 10 times faster than what we have right now.
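For reference on the cell counts Eric mentions: a standard Braille cell has 8 pins (2 columns by 4 rows), which is where the pin totals come from. A quick arithmetic check (my illustration, not Dot's specification of the exact graphics-area layout):

```python
PINS_PER_CELL = 8  # standard 8-dot Braille cell: 2 columns x 4 rows

# Cell counts for the Dot Pad sizes mentioned in the talk.
for name, cells in {"large": 832, "standard": 320, "small": 16}.items():
    print(f"{name}: {cells} cells = {cells * PINS_PER_CELL} pins")
```

This matches the figures quoted: 832 cells is 6,656 pins ("more than 6,000"), and 320 cells is 2,560 pins ("more than 2,400").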
So we expect that with this speed and refresh rate, we can even put up video content, or games, or sports broadcasts. It's going to be more and more dynamic. Of course, there are some gaps: when you feel tactile content through your fingers, you need some time to familiarize yourself with it. But we believe we now have the fundamental technology to present a lot of diverse content, including more dynamic content like video. So we are very excited; so many entertainment-related things, like games, and other exciting things could happen. NICHOLAS GIUDICE: Let me follow up on that, because the refresh rate is one of the limitations I see in most displays that have been out there. It's really difficult to make these cells refresh the screen quickly; it's OK for static images, but it's really slow for anything dynamic. So these D3 cells you're talking about, they're not in the units yet, but they're coming. Is that what you're saying? ERIC JU YOON KIM: Yes. NICHOLAS GIUDICE: And they're going to be about 10 times faster, if I'm hearing you correctly. ERIC JU YOON KIM: Exactly. It's 10 times faster. The development has finished, so right now we're working on the production side. We are planning some test production really soon, but the actual device will come a little bit later. This is coming, though; development is done. NICHOLAS GIUDICE: Well, that's cool, because that was one of the limitations when I was using this: you'd get a graphic to come up and it would take a while. I know that's kind of the normal practice, but it's frustrating. So that's really cool to hear. Can you speak a little more about the different incarnations and sizes of the products? Is the tactile resolution of the cells the same, in terms of pin density, pin spacing, and those types of things?
ERIC JU YOON KIM: Yes. We know there is a Braille standard, and we are following it. One of our core principles for the Dot Pad is that we need to present Braille and graphics at the same time in the existing format, so the spacing is the same. But in the future, we believe the dots can be more dense, so the display can present more specific or delicate information. We also believe it's going to move to a 3D format, so you can actually feel the scale of the content, and colors. NICHOLAS GIUDICE: So you're saying, ultimately, your goal is to have different dot heights. ERIC JU YOON KIM: Exactly. NICHOLAS GIUDICE: Oh, great. One of the things I noticed with the version of the pad I was working with: it had a one-line Braille display, so I thought that was the only place the Braille would be. It was cool to see Braille on the main graphics part of the display, integrated with the graphics, because obviously that's what you need to do in the real world. Other units I've seen can only do Braille on the dedicated display, so I thought that's obviously something people are going to want. My question here was: you mentioned screen reader access, and I was able to connect and do some testing with NVDA, which worked well. It was really cool to see multi-line Braille on the big part of the display. I didn't think I would like it as much as I did, and once I was using it, I thought, oh wow, my one-line display kind of stinks. But so far that doesn't work with graphics, so I'm assuming that's something you're working on. When do you imagine having full screen reader integration, where you'd be able to access the graphics and the text on the same display? ERIC JU YOON KIM: I think Ki can answer that. KI KWANG SUNG: Yeah. Basically, we are collaborating with many screen readers.
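As context for the cell geometry under discussion: a Braille cell is a 2-column by 4-row grid of pins, which is exactly how Unicode encodes Braille patterns (U+2800 to U+28FF). The sketch below (my illustration, not Dot's code) shows the idea of rendering mixed graphics on a cell grid by packing each 2x4 block of a binary image into one Braille character:

```python
# Map a binary image (rows of 0/1) onto Unicode Braille cells.
# Each cell covers a 2-wide x 4-tall pixel block; Unicode assigns one
# bit per dot: dots 1-3 are the left column (bits 0-2), dots 4-6 the
# right column (bits 3-5), dots 7-8 the bottom row (bits 6-7).
DOT_BIT = {(0, 0): 0, (0, 1): 1, (0, 2): 2,
           (1, 0): 3, (1, 1): 4, (1, 2): 5,
           (0, 3): 6, (1, 3): 7}

def image_to_braille(pixels: list[list[int]]) -> str:
    rows, cols = len(pixels), len(pixels[0])
    lines = []
    for cy in range(0, rows, 4):          # one text line per 4 pixel rows
        line = ""
        for cx in range(0, cols, 2):      # one cell per 2 pixel columns
            code = 0x2800                 # blank Braille pattern
            for (dx, dy), bit in DOT_BIT.items():
                x, y = cx + dx, cy + dy
                if y < rows and x < cols and pixels[y][x]:
                    code |= 1 << bit      # raise this dot
            line += chr(code)
        lines.append(line)
    return "\n".join(lines)
```

For example, a fully raised 2x4 block maps to "⣿" (U+28FF), and an empty block maps to the blank pattern (U+2800).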
But first, Apple and VoiceOver: we have integrated the Dot Pad with VoiceOver, and the official support went public about a year and a half ago. So VoiceOver officially supports the Dot Pad in the accessibility settings, and when you turn on VoiceOver, the Dot Pad can display the graphics. And also NVDA: beyond VoiceOver on iOS, there are the Windows screen readers such as JAWS and NVDA. We are talking with JAWS, and we are collaborating with NVDA right now. That work is going to be done by early next year, around January or February. We can already see multi-line Braille at this moment, and we'll be able to see spatial information, including tactile graphics, by February next year. NICHOLAS GIUDICE: Excellent. I had a fun time using VoiceOver and playing with the emojis, because every year there's some emoji in my texts, but I have no idea what it looks like. So that was fun. Can you talk a little more about the AI layer? That API is really cool. I've done some work in this area, segmenting graphical content and converting a visual image to tactile output, which is a much lower-bandwidth modality. It's just hard. So I'm interested: what can you say about how the AI is facilitating this process? How is it working? ERIC JU YOON KIM: Well, I think it started with a fundamental problem. Right after we started this project, we realized there are not many standards or UI/UX examples; there wasn't even standardized content. That made it very hard, especially because there was little prior research in this area. Of course, there is some research we used as a reference, but there certainly isn't much content.
So we thought: what if we could use AI to generate specific information, especially graphic and image information, for blind people in a tactile format, and train the AI to get better and better in terms of how it feels under the fingers. For example, at first we started by transferring images in existing formats to tactile; we used AI for that purpose. Right now, we are researching hard how blind people could type or say something and have AI actually generate tactile images for exactly that purpose. NICHOLAS GIUDICE: Make a circle, something like that. Make a picture of a dog, something like that. ERIC JU YOON KIM: Yes. From simple images to complicated images, or even creative images, and maybe Venn diagrams or charts too. NICHOLAS GIUDICE: So are you using proprietary AI algorithms that you're developing, or open-source APIs? How are you doing this? KI KWANG SUNG: Basically, we're using open APIs for image-generative AI. We actually use two different APIs: one is generative AI to create simple images and graphics, and the other converts that simple image into tactile graphics. We combine them together, so that when we type something in text, such as "lion" or "giraffe", the outcome is automatically translated into tactile graphics on the Dot Pad display. NICHOLAS GIUDICE: One thing you hit on that really is needed is more standards in this area. It would be great if companies, researchers, and other people could come together and specify something people could use, because it is a huge limitation for people working on accessible graphics. ERIC JU YOON KIM: Exactly. NICHOLAS GIUDICE: Another part I found cool was, I believe it's called the Dot Canvas app. Maybe you can talk a little more about that.
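The two-stage pipeline Ki describes, text prompt to a generative-image API and then a converter from image to tactile graphics, could be sketched as follows. This is my illustration, not Dot's implementation: `generate_image` is a hypothetical stand-in for a real generative AI call, and the tactile stage is shown as a simple brightness threshold onto a binary pin grid, which is only one plausible approach.

```python
# Sketch of a two-stage text -> image -> tactile pipeline.

def generate_image(prompt: str) -> list[list[int]]:
    # Stage 1 (hypothetical): call a generative-image API with the prompt.
    # For this demo it just returns a tiny 4x4 grayscale checkerboard.
    return [[255 if (r + c) % 2 == 0 else 0 for c in range(4)]
            for r in range(4)]

def image_to_pins(image: list[list[int]], threshold: int = 128) -> list[list[int]]:
    # Stage 2: convert grayscale values to binary pin states (1 = raised).
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def text_to_tactile(prompt: str) -> list[list[int]]:
    # Combine the two stages, as described in the talk.
    return image_to_pins(generate_image(prompt))
```

A real converter would do far more than thresholding (edge extraction, simplification, label placement), but the structure, one API producing a simple image and a second step translating it for the pin display, is the same.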
This is where you can tweak and manipulate images, or draw on a tablet, an iPad, and I think you have Android support coming, and then the output is rendered on the display. It's kind of like a paint program, but it was neat because I could actually draw on the iPad and then feel what I drew on the Dot Pad. KI KWANG SUNG: Yeah. Dot Canvas is the first Dot-owned software for the Dot Pad. And as you mentioned, it's like a PowerPoint for the Dot Pad. You can draw something, and you can put shapes and figures on the canvas. If you type something, it will automatically be translated into Braille and displayed on the canvas, so you can display it on the Dot Pad as well. One good thing about Dot Canvas is that you can also upload any image or PDF file. An image can be translated into tactile graphics and then displayed on the Dot Pad, and any PDF file can be translated into multi-line Braille text on the Dot Canvas. ERIC JU YOON KIM: I think one very cool thing is that it's interactive. The canvas can be connected with multiple Dot Pads, say 10 Dot Pads, for example. You can imagine doing a presentation: you have your images and data on the canvas, and you can send them simultaneously to 10 Dot Pads. If you have people who want to share real-time information, it can be really, really interactive. You can put a label on something, or you can change the graph shapes. So it's a new, more professional communication tool that we are imagining. We also have a share functionality, like Google Drive or iCloud: on Dot Drive, you can share your work with other people. We already see a lot of professionals sharing their content. For example, some science teachers worked on how to explain the shape of the Sun at different times. So we are very excited.
Users are already sharing a lot of professional tactile and Braille content on Dot Canvas. NICHOLAS GIUDICE: Yeah. I could imagine a classroom with a teacher, as you said, or a colleague doing something and having it shared in real time. Often that's the huge problem, right? I think we'll get to this, because you've also talked about books, but when you have to get a book made into Braille or a tactile equivalent, not only is it really expensive, it takes months. So being able to do this in real time is huge. As a researcher myself, and I know there are other people listening who use tactile technology and are interested in how this could be used, I'm interested in your views on how this relates to research. But specifically, one problem I noticed that isn't just about research is how you can do transformations of graphics on the screen, for instance zooming and panning when you have a large image. I do some work with large-format maps, and on the one hand the Dot Pad seems like a great way to show maps, but I couldn't figure out how to get anything to zoom in a way that was useful. Is that something you're working on, or where does that stand? KI KWANG SUNG: In Dot Canvas, we don't really support zooming in and out of images, because the canvas is static, like a PowerPoint. So when users have complex images with different information, they create different slides of images and graphics to tell the story for education. But at the same time, we are adding zoom in and out and image navigation functionality in the NVDA screen reader integration. So when there are images and graphics on a Windows PC, the NVDA screen reader can go to any image or graphic.
We are also supporting zoom in and out functionality, so you can get an abstract version or a specific, detailed version of the image, and people can navigate through the graphics. NICHOLAS GIUDICE: Oh, so like hard-coded zooming, almost? KI KWANG SUNG: Yeah. Basically, what we want to do is show the whole layout of the screen information first, and then, when the user focuses on a specific element, show the detailed information for it. NICHOLAS GIUDICE: And I would imagine the type of graphic makes a difference. It's much easier to figure out how to zoom on something like an SVG than on a fixed PNG or something. KI KWANG SUNG: Yes. NICHOLAS GIUDICE: One thing I kept thinking about when I was using the system was how useful it would be if it supported some kind of bidirectional input and output on the device. Instead of being purely an output display, imagine if I could touch the cells directly and draw on the display, or click on something on the display. I know that's one of the big benefits people talk about with the competing Graphiti device that is exciting to people. I'm just wondering, is there anything like that you could imagine being adopted into the Dot cells in the future? ERIC JU YOON KIM: Well, I think — you mean touch, right? Making it touchable? NICHOLAS GIUDICE: Touchable, yeah, having it be touch sensitive. ERIC JU YOON KIM: Exactly. For that part, right now we are doing that with HumanWare and APH on the Monarch device. NICHOLAS GIUDICE: Oh, you are. OK. ERIC JU YOON KIM: Yeah. The Monarch can actually track fingers, so you can use your fingers to navigate content, and it knows where your fingers are on the tactile screen. The Monarch is the most advanced device among the partnerships we're working on, so on that device it's already possible.
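The "overview first, then detail on focus" pattern Ki describes can be illustrated with a toy sketch (my illustration, not Dot's implementation): downsampling the pin grid gives the abstract overview, while cropping the focused region at full resolution gives the detailed view.

```python
# Toy overview-then-detail zooming on a binary pin grid.

def overview(grid: list[list[int]], factor: int) -> list[list[int]]:
    # Abstract view: one coarse pin per factor x factor block,
    # raised if any pin in the block is raised.
    rows, cols = len(grid), len(grid[0])
    return [
        [1 if any(grid[y][x]
                  for y in range(cy, min(cy + factor, rows))
                  for x in range(cx, min(cx + factor, cols)))
         else 0
         for cx in range(0, cols, factor)]
        for cy in range(0, rows, factor)
    ]

def detail(grid: list[list[int]], top: int, left: int,
           height: int, width: int) -> list[list[int]]:
    # Detailed view: crop the focused region at full resolution.
    return [row[left:left + width] for row in grid[top:top + height]]
```

On a small example, a shape in the top-right corner shows up as a single raised coarse pin in the overview, and focusing on that corner recovers the full-resolution pins.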
And I think in the near future there will be diverse ways of using touch on a tactile screen. Right now, we are using the Razer sensor to track fingers. There are some pros and cons, but that's the best approach we have right now, so we're working on that. NICHOLAS GIUDICE: Can you briefly explain, because I know some people, including myself apparently, are a little confused between the Dot Pad and the Monarch. I know your technology underlies both, but what's the fundamental difference? ERIC JU YOON KIM: Well, the fundamental difference is, first of all, the size. The Monarch is 480 cells, and it is specifically designed for textbooks, for K-12 education. APH has a great mission and vision to digitize all the textbooks for students in the world, starting of course in the US. So it's a different size and different in functionality; it also has its own OS, so it's a more complete device. The Dot Pad is more like a screen, a monitor that can be used for diverse purposes: research, your professional career, your home, or entertainment. With the Dot Pad, we would like to explore as much potential as we can. For example, right now we are working on games with some of our partners. We recently made Texas Hold'em, and a lot of blind people were really excited and enjoyed playing it. We are imagining: what if blind people could play multiplayer games in their homes, competing with each other through a tactile screen, plus sound and talking? We see so much potential in this. We also recently had a project with Toyota in Japan. Akio Toyoda, the chairman of Toyota, wanted racing games to be accessible to blind people, so we created a solution for racing games. With the commentary voice, you hear everything that's going on on the circuit, you can feel where the car is on the Dot Pad, and you can track the record.
So the Dot Pad is like the monitor we see every day: different types, different sizes, a wider variety of purposes. The Monarch is specifically for education, and it's a more complete, more advanced device. NICHOLAS GIUDICE: Are they also using your work with the AI layer and the Dot Canvas in these things? Is that included in their OS as well? ERIC JU YOON KIM: I think in the future it will be integrated, but at this moment, it's a big project, and we have a full schedule to support K-12 education, so that's the first priority right now. NICHOLAS GIUDICE: I know we're running out of time, and I know there are a lot of people who are really interested in these products, in what you're doing, in the advancements, and in how quickly things are happening. My question is: how do you think of your market, and how do customers get a hold of these things? Because if I understand your strategy, you get the Dot Pad or the Monarch through educational organizations or other types of distributors, not direct to consumers. So how does an interested customer get their hands on one of these? KI KWANG SUNG: Yeah. The Monarch is focused on the education market. The Dot Pad has a wider variety of content, and also entertainment functionality. So basically we are targeting not only education but also rehabilitation, job access, and the entertainment market. We are targeting many different types of rehabilitation centers, and also B2B customers; for example, blind employees working in a company who have to collaborate with their colleagues using Microsoft Office documentation tools. So for the Dot Pad, we are working with distributors for the B2B and rehabilitation markets, for blind employees and blind people generally.
So yeah, that's one strategy. And in the future, we would love to make the Dot Pad's price dramatically more affordable for every individual blind user. I think in the future we can provide the Dot Pad at a really low price for individual blind people, and we could even offer a subscription model, like a rental service, to lower the price barrier, so that we can actually get the Dot Pad to individuals. NICHOLAS GIUDICE: Yeah. And then the B2C market — that would be great. So many people would love this. I know we're almost out of time. I want to thank you both, Eric and Ki. This was really interesting. I know there are going to be a lot more updates, and I think a lot of people in the audience are interested. Everyone should feel free to reach out to us if you have questions, and we can certainly keep this discussion going. Thanks to you both, and thanks to everybody at Sight Tech Global. Back to you, Alice. [MUSIC PLAYING]