DESCRIPTION
Join Envision's founders for an in-depth look at their pioneering journey from smartphone apps to smart glasses, culminating in Ally, their revolutionary AI assistant. Learn how this conversational AI platform is breaking new ground in accessibility across multiple platforms, enabling greater independence for blind, low-vision, elderly, and disabled users worldwide.

Speakers
Karthik Mahadevan, CEO, Envision
Karthik Kannan, CTO, Envision
Joe Devon, Moderator, founder of Ally Audits and chair of the GAAD Foundation

SESSION TRANSCRIPT
[MUSIC PLAYING]
VOICEOVER: Unveiling Ally, the AI-Powered Leap From Apps to Assistants. Speakers: Karthik Mahadevan, CEO of Envision; Karthik Kannan, CTO of Envision. Moderator: Joe Devon, founder of Ally Audits and chair of GAAD.
JOE DEVON: Thank you, Ross and Karae, for that introduction, and thank you to Sight Tech Global for producing this session with the Vista Center. My name is Joe Devon. I am moderating this fireside chat. I am the chair of the GAAD Foundation, co-founder of Global Accessibility Awareness Day, and I recently started a company called Ally Audits. I'm really excited to revisit with the Envision team about their product. So first, let me introduce Karthik Mahadevan. Please describe your role at Envision and a little bit about your background, Karthik.
KARTHIK MAHADEVAN: Yes, hi, Joe. I appreciate that a lot. Thank you so much. I'm Karthik, co-founder and CEO at Envision. I started Envision about six years ago with a simple aim: to increase independence for people who are blind or have low vision through a smartphone app. Over the years, it has evolved into several things, including a smart glasses-based solution, and today we're super excited to talk about what's next. At Envision, I'm mainly focused on the design and strategy aspects of things, making sure that we're always keeping an eye toward the future and building the best accessibility tools that we can.
JOE DEVON: Awesome. Thanks, Karthik. And now, Karthik Kannan, the CTO of Envision. I'd like to invite you to say a few words about your background and involvement here.
KARTHIK KANNAN: Sure. Thank you so much for having me on this panel today. My name is Karthik Kannan, and I'm one of the founders and the CTO of Envision. I have the most fun job in the world: I get to play with the coolest technologies, especially artificial intelligence, smart glasses, AR, XR, you name it, and figure out ways we can take the advances in all these areas and effectively help the 380 million blind and low-vision people across the world today. That's my role in a nutshell: constantly sniff around for the cutting edge and think about ways we can apply that technology to help people and improve accessibility.
JOE DEVON: Great. I'm really excited for our conversation. I love to geek out about AI models and go into a lot of tech detail, but because not all of the audience wants that, we'll do a little bit of it; bear with us, we won't go too deep. I do think there are some interesting things to learn from it. So, just to set the table: it's been about a year since we last did this, and while I'd love to jump right in and talk about your new product, I think a little bit of background would help, like how you got to the point where you felt you needed a new product, and what the history was. I'd like to invite you first, KM, to share your thoughts on it. But let's make this a conversation, so KK, if you want to come in with some thoughts as well, please feel free; we don't need to do question-answer, question-answer each time.
KARTHIK MAHADEVAN: Yeah, sure. Of course. I think it'd be good to give the historical context to begin with. When we started Envision back in 2018, we started from the observation that we live in a visual world: a lot of the information around us is visual, and it's not always accessible. So we began an exploration to see if the technologies of the day could help translate that visual information into accessible content. We started with image recognition. This is the AI of 2018, which is archaic by the standards of the AI we have today. We started with very simple object recognition and very simple OCR, and that's how we began building our application. Ever since then, it has been an incremental evolution of our technology, where we kept increasing the different kinds of visual information Envision could recognize, and the depth and accuracy with which it could recognize them. It could recognize something simple like text to begin with, but that evolved into recognizing headings, tables, graphs, and so on over time. The same with objects: in the beginning it would simply say "this is a cup, this is a chair," but over time it became more detailed, faster, and more accurate, and this evolved across both the app and the glasses.

As soon as we had the onset of LLMs with GPT-3, we started to see an almost pivotal change in the way these AI models operate and function. What we understood very quickly is that the era of discrete AI models is eventually going to go away: you won't need one model that only recognizes text, another that only recognizes objects, and so on. So we started thinking about how we could build a very simple way for people to just get answers to their questions, without them having to think about which model to use or which kind of recognition to run. That is where we started: building something simple in the early days and then following along the technological evolution from that point on, while also understanding the needs of the user base we have on the app and the glasses, which is over half a million at the moment. That's basically what led us to start thinking differently about how we build our tools. KK has a lot of experience on that from a technology standpoint; he was really early on the whole GPT bandwagon. So maybe he can speak to what he saw from the technology side.
JOE DEVON: Thank you, KM. Before you do that, I'm going to ask you a question, KK. I would like you to define OCR, since KM mentioned it, for those who may not know what it means. Also, from what I understood from KM, you are basically a front end to different models, where there's something called multimodal models that can handle vision, hearing, text. So can you speak to that, as well as what AI improvements have happened? What are the stages where, okay, here is something new that we can leverage from a foundation model in our product that was not possible six months ago, one year ago, two years ago? What is the evolution of that?
KARTHIK KANNAN: Sure. To answer your question: firstly, optical character recognition, or OCR, is very basically a computer being able to recognize text as accurately as a human, or sometimes even better. When you take a picture today with Apple devices, for example, it's very straightforward to select the text in the image and copy it; that's OCR at work. It's a field that has been evolving for over 30 or 35 years now.

Broadly speaking, in the very early days of computing we had a text-based interface, where you had to type in complex, cryptic commands and speak to the computer in a language that it understands. Over the years we've been trying to bridge that gap, trying to get computers to speak our language and understand what we say. In between, we had the graphical user interface, where people could drag a mouse or use a keyboard to interact with the computer. Now, with these language models, we finally have a way of interacting where computers can actually speak our language, instead of us learning how to use them by speaking theirs. That's the stage we're at. For our users, who might be on the elderly side or might not be that tech-savvy, this is the biggest thing that has ever happened to them in the accessibility space: all of a sudden, you could be an 80-year-old grandma who has never used an iPhone before, and you can get more out of a computer than a power user could two or three years ago. That is what Envision is trying to do with Ally: we take all these really powerful, complex AI tools, and instead of you sitting and figuring out "I have to swipe forward, I need to tap this button, I need to remember to speak this command," you can just ask a question and get a response, and we decide under the hood what to do.

I think November of 2023 was when we first got access to GPT-4 Vision; we'd been playing with models a little earlier on that front. But models being able to understand images and interpret them with increasing accuracy has been one of the biggest game-changing things that has happened to us in this space, and it's what enabled us to build something like Ally. Another thing I've noticed happening in this space is that these models are now able to interact with the outside world. It's no longer a talking parrot where you type something and it just gives you a response; it can actually take action on your behalf, in a safe way. What insiders call function calling, or tool use, has again improved dramatically over the last six months. There are still some kinks to iron out for sure, but we can now reliably call these very complex AI models, and we know which query to route to which model far more easily than we could six months before. That spectrum of change is what we're capitalizing on. So these are some of the advances: being able to understand images better; being able to understand which tool to use based on the query a user asks; and of course, as time goes on, more modalities like audio are also being included. That's what's making this possible right now.
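To make the tool-use idea concrete, here is a minimal sketch of the routing pattern Karthik Kannan describes, with the language model stubbed out by a keyword heuristic. The tool names and the pick_tool logic are illustrative assumptions, not Envision's actual implementation.

```python
# Minimal sketch of query-to-tool routing ("function calling"), with the
# language model stubbed out. Tool names and pick_tool are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str, bytes], str]  # (question, image) -> answer


def read_document(question: str, image: bytes) -> str:
    # Real tool: run OCR on the image, then answer the question from that text.
    return f"[stub] would OCR the image and answer: {question}"


def describe_scene(question: str, image: bytes) -> str:
    # Real tool: send the image to a vision model and describe the surroundings.
    return f"[stub] would describe the scene for: {question}"


TOOLS: Dict[str, Tool] = {
    "read_document": Tool("read_document", "Read text, menus, labels, documents", read_document),
    "describe_scene": Tool("describe_scene", "Describe surroundings, objects, people", describe_scene),
}


def pick_tool(question: str) -> str:
    """Stand-in for the model's tool choice. A real system would pass the tool
    descriptions to an LLM and let it return the tool name and arguments."""
    text_words = ("read", "menu", "label", "document", "price", "text", "calories")
    if any(w in question.lower() for w in text_words):
        return "read_document"
    return "describe_scene"


def answer(question: str, image: bytes) -> str:
    tool = TOOLS[pick_tool(question)]
    return tool.run(question, image)


if __name__ == "__main__":
    print(answer("How much is the cappuccino?", b"<jpeg bytes>"))
    print(answer("What do you see on my windowsill?", b"<jpeg bytes>"))
```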
JOE DEVON: And so, in order to route a query to the correct model, or the best model, did you fine-tune or train your own model to be the router in the middle?

KARTHIK KANNAN: Yeah. So what we did was we built this on a base of open-source models, and we did that for several reasons: one, because we wanted better control over the privacy aspect of it, and we wanted better control over which queries get routed to which models. So we ended up fine-tuning, for example, a vision model that understands different types of visual content. If you show Ally a picture of a document, it's able to understand that you are indeed talking about the document you're holding. That kind of intelligence layer is what we built on top of existing open-source models.
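As an illustration of the fine-tuning step described here, this is roughly what training rows for such an image-content router could look like; the JSONL layout and the label set are assumptions for the example, not Envision's actual schema.

```python
# Hypothetical JSONL rows for fine-tuning an image-content classifier that the
# router consults before choosing a downstream model. Labels are illustrative.
import json

examples = [
    {"image": "menu_001.jpg",   "question": "How much is the cappuccino?",     "label": "document"},
    {"image": "window_014.jpg", "question": "What do you see?",                "label": "scene"},
    {"image": "bottle_007.jpg", "question": "How many calories are in this?",  "label": "product_label"},
]

with open("router_train.jsonl", "w", encoding="utf-8") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")
```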
JOE DEVON: So, KM, how has the reaction been from the audience, in terms of the product you built up until the new one? Please describe what their reaction has been, what they felt they wanted, and why you decided to create a new product.
KARTHIK MAHADEVAN: Yeah, so the reaction has been, I would say, very surprising, and in the most pleasant way. The insight we were working off of, with all these users on the Envision app and the glasses, is that when people opened the app or the glasses, they were always looking for one particular thing. They had a question, and they just needed an answer to that question. What these apps were making them do instead was perform tasks. To give an example: if I hold up a menu, I already know that I want to know how much the cappuccino is. That's the question I have. But in our previous application, you had to open the app, open Scan Text, take a picture, and then read through the whole thing until you got to the cappuccino and found out how much it costs. With the new approach, we start with the question: the user opens the app, and the first thing they do is ask. After that, we take all the heavy lifting to the back end, where we understand the intent behind the question. So if someone asks "how much is the cappuccino," and we know they're trying to read the menu they're holding, then we crop the menu, run OCR on it, look for the price of the cappuccino within the OCR output using an LLM, and speak the answer back to the user. That decreases the time to answer and the number of actions needed to get an answer by a lot. That was the primary insight we were working on: every user is just looking to ask a question, and if we can give them the answer in the fastest way possible, and that's what we optimize for, then we can build a great tool. Having conversation as the layer for that, and having this router that understands intent very well and can route the prompt or question to the right AI tool, is where we put a lot of our effort and energy. And exactly the reaction we got from the first round of beta testers was that the speed of access to information was significantly improved: as soon as they had a need for information, within a fraction of a second they had the answer they were looking for. For example, we spoke to a user with a data entry job, where his task is to look for a particular data point in a document and enter it into his computer. Earlier, he had to do the whole take-a-picture, Scan Text flow and then hunt for that information, which used to take him 15 to 20 minutes; now, with Ally, he can do it in 15 seconds. He just holds up the document and asks Ally to look for the data. That's how much of an efficiency improvement we've seen for a job like this. So the whole conversation aspect has been incredible.
But the other aspect of Ally that we also put significant effort into has been the most surprising one for me, and that's the personalization aspect. Ally is not just a conversational assistant; it's also a personal assistant, and it's personal in two ways. First, you can offer Ally information about yourself: about what you do, what you like, what you don't like. Ally will use all of that information about you as context to answer any kind of question. To go back to the menu example: if you hold up a menu, you can also ask Ally for a recommendation, and because it understands what dietary preferences and allergies you have, it can actually offer you a great recommendation from that menu. That personalization aspect is something that has seen a great reception. The other part of being a personal assistant is that you can give Ally a personality, so you can define exactly how you want it to speak to you. You can ask it to be straight to the point and professional, or you can ask it to be more joyful and funny. To be very honest, when we were introducing that, we thought it would be more of a gimmick, but we have seen incredible adoption of that feature. It has become something people keep coming back to; there are people who have built character stories for their Allies. So that's the feature that's becoming very, very sticky with all the users that we have.
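A minimal sketch of the question-first flow and the personalization context described above, assuming hypothetical placeholder ocr() and llm_answer() steps in place of the real models; the profile fields and prompt wording are illustrative, not Envision's actual pipeline.

```python
# Sketch of the "start with the question" flow: OCR the relevant content, then
# let an LLM answer using the extracted text plus stored user preferences.
# Every function body here is a placeholder for the real model calls.
from typing import Dict


def ocr(image: bytes) -> str:
    """Placeholder for the OCR step (crop the menu, extract its text)."""
    return "Espresso 2.50\nCappuccino 3.20\nFlat white 3.60 (contains milk)"


def llm_answer(prompt: str) -> str:
    """Placeholder for the LLM call that turns the prompt into a short spoken answer."""
    return "The cappuccino is 3.20. Note that it contains milk."


USER_PROFILE: Dict[str, str] = {  # personalization context the user chose to share
    "dietary_preferences": "lactose intolerant",
    "tone": "straight to the point",
}


def ask_ally(question: str, image: bytes) -> str:
    extracted = ocr(image)
    prompt = (
        f"User profile: {USER_PROFILE}\n"
        f"Text read from the image:\n{extracted}\n"
        f"Question: {question}\n"
        "Answer briefly, in the requested tone, using only the text above."
    )
    return llm_answer(prompt)


print(ask_ally("How much is the cappuccino?", b"<menu photo>"))
```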
JOE DEVON: Very cool. Before the panel, I was actually playing with it. I named my assistant Carl and asked it to look around and answer a bunch of questions; it was very cool. So, KK, is this something that you can demo to some degree right now?
KARTHIK KANNAN: Sure, I can go ahead and show people Ally in action. My favorite way to experience Ally is on the glasses, because it's completely hands-free: you can hold things and move them around, and since it's almost real-time, it gives you feedback if it needs to take a better picture or if the lighting is off, so it's responsive that way. So what I'm going to do is wear my glasses and connect them to a speaker so the audience can hear the output. I'm going to go ahead and bring up my Ally.

VOICEOVER: Okay. It's a pleasure to meet you. How may I be of assistance today?

KARTHIK KANNAN: All right, Ally, could you take a look at my windowsill and tell me what you see?
VOICEOVER: Let me take a look. The plant on the left appears to be a variety with reddish leaves, possibly a type of rubber plant or a similar species. The one on the right with thicker leaves looks like a jade plant, which is a type of succulent. Would you like guidance on their care?
KARTHIK KANNAN: Amongst the two of them, which one do you think is the healthier looking plant?
VOICEOVER: Let me take a look. Estimating plant health from a photo can be tricky. Both plants appear to be in decent condition. Look for factors like leaf color, firmness, and absence of spots or wilting to determine health.
KARTHIK KANNAN: I'd like to let you know that I actually watered both of them, so I'm their plant daddy. All right, I have a bottle here with me. Could you tell me how many calories are in it?
VOICEOVER: Let me take a look. I can’t see the exact nutritional information. You might be able to find the calories on the bottle label or the manufacturer’s website.
KARTHIK KANNAN: Hold on. I'm going to go ahead and bring this into focus. Or could you just read this out to me?
VOICEOVER: Let me take a look. I'm sorry, I can't read the text on the bottle. You might try checking the label for the name of the drink.
KARTHIK KANNAN: Okay. So I'm just going to go ahead and have it describe the room in general as well. Ally, could you go ahead and quickly give me a description of what you see in front of you?
VOICEOVER: Let me take a look. The room appears to be a dining area connected to a kitchen. There's a wooden table with various items on it, such as a laptop, mugs, and a water bottle. On the windowsill, there are several plants. The kitchen in the background has a sink and some kitchenware. The room has a cozy and lived-in feel.
KARTHIK KANNAN: And I'm actually showing you off live at this event called Sight Tech Global. Do you want to say hi to everyone who's watching right now?
VOICEOVER: A pleasure and good day to all at Sight Tech Global. I trust the event is enlightening and inspiring for everyone involved. Do enjoy your time.

KARTHIK KANNAN: All right. Bye.

VOICEOVER: Farewell and take care. Don't hesitate to reach out.
JOE DEVON: All right. For those who aren't technical, I think it's like magic watching this, especially if you can't see and you need a device you can speak to, so you don't have to ask someone to describe the room to you. It's kind of like magic. But if you're not technical, you may not know just what it takes to get something like this to work. And I think the real toughest part of all of this, beyond the pictures, beyond the visual, is the conversation. It's listening to you, it's analyzing, it's multimodal, but you can interrupt it, and it figures that out, understands the interruption, and keeps going. I think that's an amazing achievement, and obviously it's going to get better and better over time. Is that something you were able to push off to the foundation models, or did you have to do a lot of work on your end to make that work?
KARTHIK KANNAN: Yeah, we had to do quite a bit of work in grounding a lot of the responses in facts. That's why, for example, Ally will not speculate about what text is in front of you if it's not able to read it properly, and it will not speculate about things it cannot know with certainty. Being able to ground this information in facts and work on the factuality was something we put a lot of effort into. We used open-source models; some of the ideas we used existed only as research papers, and we took those papers and actually implemented them; and some of it uses proprietary models. So it's a mix of a whole bunch of things to make it as grounded as possible, so that people who use it can trust it over time. If it's not sure about something, it's not going to shy away from saying, "Hey, I don't know what this is. Maybe you need sighted assistance to help you with it," rather than giving you information it assumes is right about the world. That is one area where we put a lot of effort. We also put a lot of effort into ensuring that we know how data travels through this whole stack, because people who are blind or have low vision trust us with this information, trust us with what they want Ally to see. So we have an additional responsibility to ensure that we know how data travels through this pipeline and can account for it. Those are the two key areas where we spent the most time, apart from, of course, all the bells and whistles that you see Ally using today.
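One simple way to picture the grounding behavior described here is an application-layer confidence gate: if the recognition step is not confident enough, the assistant declines rather than guessing. The threshold, the OcrResult shape, and the wording below are assumptions for illustration only, not Envision's implementation.

```python
# Sketch of a grounding check: decline to answer when the OCR/vision step is
# not confident enough, instead of speculating. Values are illustrative.
from dataclasses import dataclass


@dataclass
class OcrResult:
    text: str
    confidence: float  # 0.0 to 1.0, as reported by the recognition model


MIN_CONFIDENCE = 0.80


def grounded_reply(question: str, ocr_result: OcrResult) -> str:
    if ocr_result.confidence < MIN_CONFIDENCE or not ocr_result.text.strip():
        return ("I'm sorry, I can't read the text clearly enough to answer that. "
                "You might try better lighting, or ask for sighted assistance.")
    return f"Answering '{question}' using only the text I could read: {ocr_result.text}"


print(grounded_reply("How many calories?", OcrResult(text="", confidence=0.31)))
print(grounded_reply("What drink is this?", OcrResult(text="Sparkling water, 0 kcal", confidence=0.95)))
```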
JOE DEVON: Well, just great job. Really fantastic. And KM, did I hear right? Did you say 500,000 users?
KARTHIK MAHADEVAN: Yeah, that's on the Envision app and the glasses: we have about 500,000 users there. The Ally app has been in a private beta for a couple of months, and we have about 2,000 users on that beta so far. So yeah, 2,001 as of today.
JOE DEVON: Yes.
KARTHIK MAHADEVAN: So we're super excited to open it up for everyone to start playing around with. And I think the interesting aspect of Ally is that when we actually go live, it's going live on several platforms at once: it's available on iOS, on Android, on the glasses, and on the web. So you can have a consistent experience of using Ally across all of these different platforms. Our aim is to put it on as many platforms and devices as possible, so it can be accessible across the stack. This unlocks amazing possibilities: you can start a conversation on the glasses, say, by taking a picture of a document, and then when you go to work and sit behind a laptop, you can access the contents of that document on the web. That kind of cross-device, cross-platform access is also a unique thing, and it's something we are putting into Ally as much as possible from day one.
JOE DEVON: Yeah, which is fantastic, and definitely not an easy thing to do; it takes some investment. And where do you see things going? I'll stick with you, KM, on the business side of it. In terms of markets: where are your users now, and where do you see your users coming from in the future? Also, how does AI impact the entire accessibility industry? Will assistive technology look completely different? In a year, or five years from now, are all the big problems solved or not?
KARTHIK MAHADEVAN: Yeah. So I think with Ally, there are a lot of really exciting opportunities that we're exploring simultaneously. Simply from the customer standpoint, we're seeing a lot of excitement from our audience who are blind or have low vision, but also from people who are not. I've been using Ally so much for everyday tasks myself. We're also seeing a lot of adoption and excitement among the elderly, people who are not as tech-savvy and who have been kept away from these technological advances so far. We believe that offering them a very simple conversational interface, one they can just speak to any way they want, is unlocking a whole new segment for us.

But we're also seeing some very exciting pathways on the business end of this. We were at the NIB conference a few weeks ago, talking to a lot of the NIB agencies there, and what we're doing with them is a pilot where you can hook Ally up to a lot of the internal databases at these agencies. You can hook it up to a knowledge base or to an inventory system, and then an employee there can simply speak to Ally and ask how much inventory of something there is, or what the process is to do something at the company. All of this becomes available through a very easy-to-use conversational interface, where you don't have to struggle through a bunch of inaccessible and cumbersome steps in between. So this whole Ally for enterprise is something that's very, very exciting. I think folks like these NIB agencies will probably be the first ones to come in, but we believe this can expand to a lot more opportunities: any company, employer, or service that wants to make their service more accessible can simply hook their existing internal knowledge bases up to Ally, and all of a sudden they have a very easy-to-interact-with interface that all of their blind or low-vision employees or users can immediately have access to.
JOE DEVON: Very cool. Now I'm going to go toward a bit of a techie question for you, KK. Are you using AI to build AI products? Are you using one of those AI coding assistants like GitHub Copilot or Cursor? If so, can you describe the experience for those who haven't done it, and which might be the most important part of the process for you: the models you use, which ones you tend to prefer, and for what purpose?
KARTHIK KANNAN: Yeah. We both follow each other on Twitter, so you know how much of a huge fan I am of Claude 3.5 Sonnet and the Cursor ecosystem. So yes, I do use these coding LLMs quite heavily. I use AI to actually write Ally; in fact, I would say around 20 or 30 percent of Ally is written by AI itself, and I'm hoping we can get that number to maybe more than 50 percent in the coming year. For anyone who's interested in starting off with this, I would say it's a great boon for people who are already fairly experienced programmers, because these models still tend to make a lot of mistakes, and they don't get the full context of what you're doing. It's like these models can write letters, and we want them to write novels: they know the alphabet well, but to write an actual story you need context about the characters and everything else going on, and a codebase is a story, essentially; it's telling a story in some way. So you need models that understand context, and that's happening over time. With what we have today, I'm a huge fan of Claude 3.5 Sonnet, and they just launched a new version last week which improves on what is possible today. Cursor, for those of you who don't know, is what they call an integrated development environment: it's like a text editor, but you can actually write code in it. Both of these put together are a great combination; in fact, everybody at Envision who writes code heavily uses these two tools. I can't recommend them enough. I'm a huge fan.
JOE DEVON: Yeah, as am I. The only annoying thing is it doesn't remember the architecture that you pick, so it may recommend one kind of architecture for logging in one file, and then it doesn't remember that and goes a different way in a different file. I've seen that over and over again. But it's coming a long way, and I think it's going to be great for accessibility.
KARTHIK KANNAN: I think it's going to be great for accessibility. I think it's going to be great even from the perspective of role-playing. One of the big things I like is that these models are able to understand and personalize their responses to what you ask for, and a very interesting offshoot of these models getting really good at coding is that they also get good at reasoning, and reasoning is a very key step in role-playing. Advances that happen in one area have these amazing offshoots and advantages in other areas. So it's a very exciting space; it just keeps changing every other week, and at this point I'm living more on Twitter than in my house to keep up with all the stuff that's going on. I'm very excited for this space.
JOE DEVON: Yeah, I am too. And I know that you and I could do a two-hour chat on just Cursor and the coding tools and the models alone, but unfortunately, believe it or not, we've already been speaking for about half an hour, which is our time window. So I would like to give both of you a chance to wrap up, provide final thoughts, and share where people can access Ally or any of your products and connect with you as well.
KARTHIK MAHADEVAN: Yes, I appreciate that, Joe. Thank you so much. So yeah, we're super excited about Ally. It's going to be a conversational, personal, and ubiquitous assistant that you can use everywhere, and we're launching it publicly at Sight Tech Global. You can go right away to ally.me, that's a-l-l-y dot me, and there will be links to download Ally for iOS, Android, the web, and the glasses, so you can immediately have access to all of these platforms right away. I definitely encourage all of you to start playing around with it. The thing I would close with is that the Ally you're experiencing today is the worst this technology is ever going to be. That's the bottom line: it's only going to improve from here, it's only going to get better. I think in the future, Ally will become the interface you talk to in order to access the computer, to interact with the internet, to do all sorts of activities online. That's the future we are super excited about, so now is the time to install Ally. Please play around with it and offer us as much feedback as you can.
JOE DEVON: Thank you so much. And to reach you personally, LinkedIn or your socials?
KARTHIK MAHADEVAN: I'm Karthik Mahadevan on LinkedIn, and I'm @KartikIO on X, or Twitter. Those are the two platforms where you can reach out to me. Thank you.

JOE DEVON: Thank you, Karthik. And now, Karthik Kannan.
KARTHIK KANNAN: Like KM mentioned, please play around with Ally. Many years ago, my grandma was one of the smartest people that I knew, and I worked very hard to teach her how to use a computer. She couldn't speak much English, and I keep thinking about the moment we live in right now, where we don't have to learn the language of the computers anymore; the computers are speaking our language. Ally today is multilingual: you can speak to it in your own language, get responses in your own language, and pretty much use it to do any task that a computer expert could do. That's the beauty of this technology: it allows anybody and everybody to just get things done. And we named Ally to represent a real ally, a real friend that you can use as you go about your day. So I'm really excited for this, and I want you all to try it. You can reach out to me on Twitter, or X; that's where I spend most of my time. You can reach out to me at M-E-T-H-E-K-A-R-T-H-I-K, or you can just search for Karthik Kannan and find me out there. I would love to hear your thoughts on Ally. And thank you so much, Joe, for doing this; you were the right person to do this panel. We knew it from the moment we thought about this. So thank you so much for doing this with us.
JOE DEVON: Well, thank you both, Karthik and Karthik. This was great. I wish we had another hour or two to talk some more. If anybody wants to reach out to me, I'm also pretty active on X, talking about AI and accessibility, at Joe Devon. And with that, thank you to all of you. Thank you again to Sight Tech Global and the Vista Center, and I'll pass it back to you, Karae and Ross. And that is a wrap.
[MUSIC PLAYING]