The Screen Reader Interoperability Gap – A Hidden Systemic Barrier

At Sight Tech Global two years ago, we unveiled the ARIA AT initiative, which aimed to address the frustrating, damaging reality that screen readers are not interoperable on the Web, unlike their cousins for sighted users – browsers like Chrome and Safari. In other words, any developer who takes accessibility seriously has to test their code on JAWS, VoiceOver, NVDA, and the rest. In this session, the people advancing the ARIA AT project are back with a refresher, progress to report, and a call to action.
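To make the gap concrete, here is a minimal, hypothetical sketch (the widget and the spoken renderings are illustrative, not drawn from the ARIA AT test suite): the same standards-compliant ARIA markup can be announced differently by different screen readers, which is exactly why developers end up testing in each one.

```ts
// Illustrative only: a custom toggle button built with ARIA.
// The markup below is valid, yet JAWS, NVDA, and VoiceOver may each
// announce its state differently (e.g., "pressed" vs. "on") – there is
// no single mandated spoken rendering, which is the gap ARIA AT measures.
const toggle = document.createElement("div");
toggle.setAttribute("role", "button");        // expose the div as a button
toggle.setAttribute("aria-pressed", "false"); // expose its toggle state
toggle.tabIndex = 0;                          // make it keyboard-focusable
toggle.textContent = "Mute";

toggle.addEventListener("click", () => {
  // Flip the state; how (and whether) the change is announced
  // varies by screen reader and browser pairing.
  const pressed = toggle.getAttribute("aria-pressed") === "true";
  toggle.setAttribute("aria-pressed", String(!pressed));
});

document.body.append(toggle);
```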

Wisk: The people’s autonomous (and accessible!) air taxi

“Where is my flying car?” is a longstanding Silicon Valley lament. Almost here, is the answer, and Wisk is one of many startups closing in on that promise with an autonomous (no pilot), electric, 12-prop four-seater that’s more or less like a flying Waymo, though initially it will fly only pre-set routes to destinations like LAX airport from locations around LA. Beat the traffic, right? What’s remarkable about Wisk is how the company is building accessibility into the experience from the start. That narrow staircase for passengers? Guide dogs need something wider. Check.

Glidance: It’s not a cane. It’s not a dog. It’s a self-driving mobility aid.

For years, technologists have experimented with ways to assemble powerful new technologies like computer vision, digital navigation, and a variety of sensors to help blind and visually impaired people navigate more easily. Former Microsoft engineer Amos Miller, who is blind himself, had an idea: why not attach the familiar form of a cane to a small, two-wheeled assembly that uses multimodal AI to steer and brake, guiding the user to their destination? Could people, especially those who lose their vision later in life, easily afford the device and use it right out of the box? That’s what Miller aims to deliver with Glidance.

Immediately after this session, Amos Miller will be available for live questions in a breakout session listed in the agenda.

Andrew Leland on his instant classic: “The Country of the Blind”

To lose one’s sight to the unpredictable course of retinitis pigmentosa is an experience many people with sight loss know all too well. In the US alone, an estimated 100,000 people have the condition, but few of them are authors and journalists of considerable skill who can relate in wonderfully compelling detail the very personal experience of losing their sight while also starting a family, maintaining social and work connections, and navigating the many perspectives on blindness swirling in the American scene. Only human, not artificial, intelligence is on tap for this conversation with the author of the remarkable new book, “The Country of the Blind.”

Where will AI take accessibility? A conversation with Mike Shebanek

At Meta, Mike Shebanek has a ringside view of the emerging AI universe. Not only is Meta one of the top contenders developing the most powerful generative AI models, it is a player in hardware as well, with the rollout later this year of the Meta Quest 3 AR/VR headset and Ray-Ban Meta smart glasses. That, combined with his leadership on the evolution of VoiceOver at Apple earlier in his career, gives Shebanek an almost unique perspective on where accessibility and assistive tech are headed. Are we nearing a time when critical technologies like GPUs, sensors, and generative, multimodal AI might yield remarkable agents that were once the realm of sci-fi? Will we think of those technologies as purpose-built for people with disabilities, or will they be facets of something much bigger, a vision of universal design, the realization that, to quote artist and designer Sara Hendren, all tech is assistive technology?

AI gets complicated: emerging debates and surprising challenges

For all the remarkable advantages AI has brought to accessibility, there has always been a backbeat of issues, notably around certain kinds of bias against people with disabilities. Now that AI is infiltrating more and more day-to-day experiences and generative AI is taking wing, the range of issues facing blind people is growing fast. In some cases, it’s about advocating for technologies like autonomous taxis (think Waymo) or facial recognition that offer big advantages but are opposed by other interests in the name of privacy or public safety; in other cases, the challenge is making sure emerging generative AIs take into account the worlds of eBraille and the ever-evolving language of the community.

Generative AI: What just happened?

Since the launch of OpenAI’s ChatGPT in November last year, the technology and startup worlds have been transfixed by the possibilities inherent in what’s referred to as “generative AI” – AI that can actually create content, whether fluent-sounding essays, stunning images, computer code, or much more. Many of the sessions at Sight Tech Global discuss the impact of generative AI on accessibility, which is vast, if also problematic in some cases. At the same time, many AI experts warn that generative AI is too powerful and advancing too quickly, and argue that it should be regulated to prevent a potential catastrophe. Dr. Stuart Russell, Professor of Computer Science at the University of California, Berkeley, is one of the world’s leading authorities on AI and author of the bestselling book “Human Compatible: Artificial Intelligence and the Problem of Control.”

Be My AI: What happens when an accessibility favorite makes the jump to AI?

Founded in 2015 by Hans Jørgen Wiberg, Be My Eyes quickly established itself as a wildly helpful mobile phone app for people with no or limited vision. Today, more than 500,000 blind users rely on 6.8 million sighted volunteers (covering 180 languages) to take their call and, by looking through the camera on the blind user’s phone, describe what they see.

The huge leaps in AI capabilities in the past year, however, have opened incredible possibilities. Can AI do better than all those human volunteers? In September, Be My Eyes launched its GPT-4-based beta, “Be My AI,” in an exclusive collaboration with the leader in generative AI, Sam Altman’s OpenAI. We’ll hear from the Be My Eyes team about how they integrated AI, what they are hearing from thousands of users in the beta, how humans are still in the loop – for now – and how they handle the model’s tendency to “hallucinate.”

Immediately after this session, the speakers will be available for live questions in a breakout session listed in the agenda.

Alexa, what is your future?

When Alexa launched in 2014, no one imagined that the voice assistant would reach into millions of daily lives and become a huge convenience for people who are blind or visually impaired. This fall, Alexa introduced personalization and conversational capabilities that are a step-change toward more human-like home companionship. Amazon’s Josh Miele and Anne Toth will discuss the impact on accessibility as Alexa becomes more capable.