AI gets complicated: emerging debates and surprising challenges

For all the remarkable advantages AI has brought to accessibility, there has always been a backbeat of issues, notably around certain kinds of bias against people with disabilities. Now that AI is infiltrating more and more day-to-day experiences and generative AI is taking wing, the range of issues facing blind people is growing fast. In some cases, the work is advocating for technologies like autonomous taxis (think Waymo) or facial recognition that offer big advantages but are opposed by other interests in the name of privacy or public safety; in other cases, the challenge is making sure emerging generative AIs take into account the worlds of eBraille and the ever-evolving language of the community.

Generative AI: What just happened?

Since the launch of OpenAI’s ChatGPT in November 2022, the technology and startup worlds have been transfixed by the possibilities inherent in what’s referred to as “generative AI,” meaning AI that can actually create content, whether that’s remarkably fluent essays, stunning images, computer code, or much more. Many of the sessions at Sight Tech Global discuss the impact of generative AI on accessibility, which is vast, if also problematic in some cases. At the same time, many AI experts warn that generative AI is too powerful and advancing too quickly, and argue that it should be regulated to prevent a potential catastrophe. Dr. Stuart Russell, Professor of Computer Science at the University of California, Berkeley, is one of the world’s leading authorities on AI and author of the bestselling book “Human Compatible: Artificial Intelligence and the Problem of Control.”

Be My AI: What happens when an accessibility favorite makes the jump to AI?

Founded in 2015 by Hans Jørgen Wiberg, Be My Eyes quickly established itself as a wildly helpful mobile phone app for people with no or limited vision. Today, more than 500,000 blind users rely on 6.8 million sighted volunteers (covering 180 languages) to take their call and, by looking through the camera on the blind user’s phone, describe what they see.

The huge leaps in AI capabilities over the past year, however, have opened incredible possibilities. Can AI do better than all those human volunteers? In September, Be My Eyes launched its GPT-4-based beta, “Be My AI,” in an exclusive collaboration with the leader in generative AI, Sam Altman’s OpenAI. We’ll hear from the Be My Eyes team about how they integrated the AI, what they are hearing from the thousands of users in the beta, how humans are still in the loop – for now – and how they handle GPT-4’s tendency to “hallucinate.”

Immediately after this session, the speakers will be available for live questions in a breakout session listed in the agenda.

Alexa, what is your future?

When Alexa launched in 2014, no one imagined that the voice assistant would reach into millions of daily lives and become a huge convenience for people who are blind or visually impaired. This fall, Alexa introduced personalization and conversational capabilities that are a step-change toward more human-like home companionship. Amazon’s Josh Miele and Anne Toth will discuss the impact on accessibility as Alexa becomes more capable.

AI decision systems permeate our lives. Now what?

AI decision systems have permeated most of the critical decisions within our society. They shape our views, wants, friendships, hates, and fears. They influence who is hired, fired, admitted, prioritized for healthcare, funded, voted for, and seen as a security threat. Some argue that current AI systems are mechanizing eugenics and segregation: they automate, amplify, and accelerate past discrimination more efficiently, accurately, and consistently. What is the impact if you have a disability? AI is also poised to make its next leap, moving beyond statistical modeling based on big data sets. What are the opportunities and risks of these emerging forms of AI?

Did computer vision AI just get worse or better?

The ability of assistive tech devices to recognize objects, faces, and scenes comes from a type of AI called computer vision, which has traditionally required vast databases of images labeled by humans to train AI algorithms. A new technique called “one-shot learning” learns dramatically faster because the AI trains itself on images from across the Internet, with no human supervision needed. Is that a good idea?
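To make the contrast concrete: in one common formulation of one-shot learning, a model that has already learned good image embeddings can classify a new category from a single labeled example, by comparing embeddings rather than retraining on thousands of labels. The sketch below is purely illustrative, assuming hypothetical hand-written embedding vectors; a real system would obtain them from a pretrained vision model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_shot_classify(query, support):
    """Assign the query the label of its most similar support example.

    support maps each label to ONE example embedding -- hence "one-shot":
    no per-class training set, just a single reference per category.
    """
    return max(support, key=lambda label: cosine(query, support[label]))

# Hypothetical toy embeddings standing in for a pretrained model's output.
support = {
    "door":   [0.9, 0.1, 0.0],
    "stairs": [0.1, 0.8, 0.3],
}

print(one_shot_classify([0.85, 0.2, 0.1], support))  # -> door
```

The key design point is that the heavy lifting (learning the embedding space) happens once, up front, often with self-supervision on unlabeled web images; recognizing a new object then needs only one example, not a freshly labeled database.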