With ever more powerful computing and data resources available in the cloud, Microsoft’s Seeing AI mobile app is poised to become a steadily better ally for anyone with vision challenges. Co-founder Saqib Shaikh leads the engineering team that’s charting the app’s cloud-enabled future.
As AI-based computer vision, voice recognition and natural language processing race ahead, the engineering challenge is to design devices that can perceive the physical world and communicate that information in a timely manner. Amnon Shashua’s OrCam MyEye is the most sophisticated effort yet to merge those technologies in a seamless experience on a dedicated device.
Whether it’s Alexa, Tesla or Facebook, AI is already deeply embedded in our daily lives. Few understand that better than Dr. Kai-Fu Lee, a scientist who developed the first speaker-independent, continuous speech recognition system as a Ph.D. student at Carnegie Mellon, led Google in China, and held senior roles at Microsoft and Apple. Today, Dr. Lee runs Sinovation Ventures, a $2 billion fund based in China, serves as president of Sinovation’s Artificial Intelligence Institute, and has 50 million followers on social media.
Dedicated devices versus accessible platforms? Victor Reader Stream versus iPhones and Alexa? How will assistive technology (AT) companies take advantage of a world of cloud data, edge computing power, AI algorithms, and more demanding customers than ever? Humanware, eSight and APH are already looking far into that future.
The screen reader is arguably the most consequential digital technology ever for people who are blind or visually impaired. At the same time, screen readers depend on a dizzying array of keyboard commands, and — when it comes to reading websites in a browser — they struggle with the ugly reality of poor website accessibility. New technologies may lead the way to better outcomes.
When Alexa launched six years ago, no one imagined that the voice assistant would reach into millions of daily lives and become a huge convenience for people who are blind or visually impaired. This fall, Alexa introduced personalization and conversational capabilities that are a step-change toward more human-like home companionship. Amazon’s Josh Miele and Anne Toth will discuss the impact on accessibility as Alexa becomes more capable.
Inventors have long been inspired to apply their genius to helping blind people. Think of innovators like Louis Braille and Ray Kurzweil, to name just two. Today's ambitious pioneers have the cheap sensors, high-speed data networks, and data and compute "in the cloud" to do more than ever before. In this session, three founders present products, just entering or soon to enter production, that they believe will improve the lives of people with disabilities.
While it's clear that AI-based technologies like natural language processing and computer vision are powerful tools for accessibility, there are also areas where AI technologies inject bias against people with disabilities by measuring them against "norms" established in databases. This panel will look at examples of where that is happening (in employment software, benefits determination, even self-driving cars) and approaches that will help address these issues from the ground up.
Apple has long embraced accessibility as a bedrock design principle. Not only has Apple created some of the most popular consumer products in history; those same products are also among the most powerful assistive devices ever. Apple’s Sarah Herrlinger and Jeffrey Bigham will discuss the latest accessibility technology from Apple and how the company fosters a culture of innovation, empowerment and inclusion.
For an AI to interpret the visual world on behalf of people who are blind or visually impaired, the AI needs to know what it’s looking at, and no less important, that it’s looking at the right thing. Mainstream computer vision databases don’t do that well — yet.
If people who are blind or visually impaired find Uber and Lyft liberating, imagine how they will feel summoning a fully autonomous ride from an app on their mobile phones. But wait: how exactly will they locate the cars, and what happens when they climb in? Presenter Clem Wright is responsible for the autonomous taxi service’s accessibility, and he will be joined by leaders from two organizations closely involved in that effort: The Lighthouse for the Blind SF and the Foundation for Blind Children.
Technologists like to imagine how their work affects people, but that’s no substitute for truly knowing the real impact on lives, or better yet, understanding what people, especially people with disabilities, really want from their surroundings and community. In her recent book, What Can a Body Do? professor and designer Sara Hendren’s “aim … isn’t to throw cold water on innovation; it’s to recenter the people, behind the tools, who must work with their surroundings, their adaptations at least as miraculous as the technology that helps them.” (Katy Waldman, in her New Yorker review)
Map apps on mobile phones are miraculous tools, accessible via voice output, but mainstream apps don’t announce the detailed location information that people who are blind or visually impaired really want, especially inside buildings and in public transportation settings. Efforts in the U.S. and U.K. are improving accessible navigation.
It’s one thing for an AI-based system to “know” when it’s time to turn left, who came through the door or how far away the couch is: It’s quite another to convey that information in a timely fashion with minimal distraction. Researchers are making use of haptics, visual augmented reality (AR), sound and language to figure out the right solutions.
Access to technology and information is a civil right. Yet when technologists design exciting new innovations, those designs rarely include blind people. Advocates urge us to employ a variety of strategies, from education to litigation, to ensure accessibility is baked into all future tech and information systems. Harvard Law’s first Deafblind graduate Haben Girma, disability rights attorney Lainey Feingold, and George Kerscher, Chief Innovations Officer with the DAISY Consortium, will discuss strategies for creating a future fully accessible to persons with disabilities, including those who are Black, Indigenous, People of Color, and persons with print disabilities.