Archives: Sessions
Spotlight: PitchAbility 2025 Winner: XR Navigation
Accessible digital map tools are essential for creating inclusive maps. Audiom, the first product from XR Navigation, is a digital map viewer and editor that is fully usable visually, auditorily, and through text, making digital geography accessible to blind and low-vision people for the first time. XR Navigation’s strategic goal is that every map be accessible to everyone, and it provides the platform that map tools can use to make their maps accessible and legally compliant.
Spotlight: Cooking Up Independence with Brava’s Accessible Smart Oven
Cooking safely and independently can be challenging for people who are blind or have low vision. In this session, the Brava team joins Sam Seavey of The Blind Life to explore how Brava’s smart oven is breaking down barriers in the kitchen. Sam, who has reviewed Brava on his YouTube channel, shares his first-hand cooking experiences, and together they discuss accessibility improvements and look ahead to Brava’s next frontier: conversational app control powered by GPT.
Spotlight: EchoVision: Redefining Independence Through AI and Inclusive Design
This Spotlight Session unveils the story behind Agiga’s EchoVision, the groundbreaking smart glasses designed specifically for the blind and low-vision community. Discover what sets this product apart: real-time scene description and contextual understanding powered by advanced AI. Our speakers delve into how deep collaboration with the blind and low-vision community influenced every design decision, prioritizing accessibility, comfort, and trust, with authentic user experiences and a live product demonstration. To learn more, join the Facebook Group and the Google Group, and watch the YouTube videos.
Designing AI That Sees Us: Powered by Blind-Centric Data
How do we ensure that the next generation of AI systems truly works for blind and low-vision people? The answer is in the design, and for that, you need the right data. The vast majority of AI models are trained on datasets that underrepresent or incorrectly categorize disability, creating a “disability data desert” that reduces accuracy on disability-related objects by up to 30%. Without blindness-relevant data, AI models may confidently describe obstacles that are not there, misread signage, or mislabel medication. Discover how Be My Eyes’ transparent, privacy-first data collection is revolutionizing AI’s ability to understand the lived experience of blind and low-vision users.
The Virtuous Alt Text Cycle: Engineering Context & Quality in AI-Generated Alt Text
Generative AI offers the promise of large-scale web accessibility, yet its automated image descriptions often fall short in accuracy, context, and equity. This session explores AI’s dual role as a powerful but imperfect alt text author, examining its strengths and weaknesses. We will present solutions for building resilient workflows through Human-in-the-Loop (HITL) strategies, moving beyond simple error correction to cultivate a virtuous alt text cycle where expert human input informs adaptive, context-aware AI. Join us to critically evaluate the AI-generated description process, champion quality-focused alt text solutions, and understand why integrating AI into your workflow, rather than letting it replace human expertise, is essential for truly effective alt text outcomes.
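To make the HITL idea concrete, here is a minimal, hypothetical TypeScript sketch of a virtuous alt text cycle: an AI draft is routed to a human reviewer when model confidence is low, and reviewer edits are collected as feedback that can inform future prompts or retraining. The interfaces, function names, and the 0.8 threshold are illustrative assumptions, not part of any product or process discussed in this session.

interface AltTextDraft {
  imageUrl: string;
  draft: string;      // AI-generated candidate description
  confidence: number; // model-reported confidence, 0..1
}

interface AltTextRecord extends AltTextDraft {
  finalText: string;  // human-approved or human-edited description
  edited: boolean;    // true if the reviewer changed the draft
}

// Stand-in for a call to whatever captioning model is in use.
async function describeImage(imageUrl: string): Promise<AltTextDraft> {
  return { imageUrl, draft: "Placeholder description.", confidence: 0.62 };
}

// Stand-in for a human review step, e.g. a CMS review queue.
async function humanReview(draft: AltTextDraft): Promise<AltTextRecord> {
  const finalText = draft.draft; // a reviewer may accept or rewrite this
  return { ...draft, finalText, edited: finalText !== draft.draft };
}

// The "virtuous cycle": low-confidence drafts go to a reviewer, and every
// human correction is kept so it can feed back into the AI side of the loop.
async function altTextCycle(imageUrls: string[]): Promise<AltTextRecord[]> {
  const feedback: AltTextRecord[] = [];
  const records: AltTextRecord[] = [];
  for (const url of imageUrls) {
    const draft = await describeImage(url);
    const record = draft.confidence < 0.8
      ? await humanReview(draft)
      : { ...draft, finalText: draft.draft, edited: false };
    if (record.edited) feedback.push(record); // corrections become training signal
    records.push(record);
  }
  return records; // "feedback" would be exported for prompt or model updates
}

In a real deployment, the confidence threshold, the review interface, and how the feedback corpus is fed back to the model would all be specific to the publishing workflow.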
Crossing the Lines: The Power and Promise of Multiline Braille Technology
Multiline braille technology is rapidly transforming tactile literacy, access to information, and digital inclusion for blind people around the world. Join this insightful session to hear from the architects of this revolution: American Printing House, HumanWare, and Dot Inc., who are collaborating across continents to deliver the next generation of multiline braille and tactile graphics displays.
AI That Sees, Hears, and Understands: Google’s Accessible Technology
Explore how Google is using AI to transform accessibility for people with disabilities. This session showcases innovative vision-focused tools including TalkBack with Gemini for detailed image descriptions, Lookout’s Image Q&A feature, Pixel Magnifier with voice search, Guided Frame for photo composition, and the StreetReaderAI prototype. Learn how machine learning is leveling the playing field and making the visual world more accessible. Features a live Gemini demo and insights into Google’s accessibility mission.
Responsible AI in Action: Microsoft Is Building a Fair and Inclusive AI Future
AI has the power to transform lives, but only if it’s built for everyone. This panel digs into the challenges of AI and the principles used to create accessible solutions for all, with speakers sharing Microsoft’s commitment to responsibly design, build, and release accessible AI technologies. A demo of the Ask Microsoft Accessibility bot, “AskMa,” showcases how users can find information about the accessibility of Microsoft products and services. The panel’s call to action: Be a community of change makers, take the first step in using AI, build your skillset, and share feedback.
Unlocking Human Potential: Universal Design and AI with Meta
Join Agustya Mehta, Director of Concept Engineering at Meta’s Reality Labs, as he reflects on his lived experience with technology, how it has enhanced everyday life for the blind and low-vision community, and how AI-powered wearable technology can address some of the many challenges that have yet to be tackled. We’ll cover emerging trends in this space, Meta’s exciting portfolio of AI-powered wearables, and how developers can create groundbreaking experiences around independence and inclusivity for Ray-Ban Meta glasses and beyond.
