After the session on AI and Media, join Warner Bros. Discovery’s Angela McIntosh for a live AMA on audio description at Max®. Warner Bros. Discovery’s Max®, which launched on May 23, 2023, is an enhanced streaming platform. Tune in for live answers to your questions about audio description on Max and the future of accessibility in streaming platforms.
Live AMA with Greg Stilson on APH’s Monarch tactile display
Following his session on the main stage, join Greg Stilson for a live AMA focused on the APH Monarch rollout.
Live AMA with the founder of the OKO app from AYES
Following the main stage session with OKO, join the startup’s founder, Michiel Janssen, for a live AMA.
Live AMA with the ARIA AT team
Join the ARIA AT team for a live AMA on their work following their session on the main stage.
Live AMA with the founder of Glidance
Join the founder of Glidance, Amos Miller, for a live AMA following his main stage session.
Live AMA with the Be My AI / Be My Eyes team
Following the Be My Eyes session on the main stage, join the Be My Eyes team working on Be My AI for a live AMA and get answers to your questions about this breakthrough application of generative AI in one of the blind community’s favorite apps.
Waymo in San Francisco: A lesson in public advocacy for AI
Who loves the idea of autonomous, driverless taxis best? Hard to say, but anyone who is blind will likely tell you they can’t wait. Why? Human drivers on ride-share apps sometimes turn down passengers with guide dogs, and riding with a stranger is that much more stressful when you can’t see them. Fundamentally, it’s about mobility without reliance on other people. That’s why Lighthouse and the NFB took a keen interest in Waymo’s San Francisco rollout and even took up the cause of the autonomous taxis.
Thank you and final remarks
Envision: What happens when smart glasses really get smart?
Envision is a pioneer in the effort to connect computer vision with everyday life in the form of tech-enabled glasses that can tell a blind user about their surroundings. Using the Google Glass platform, Envision found a market with blind users who value hands-free interaction, and the experience only got better with the launch of scene-description AIs in the past two years. But what has really changed the game for Envision is generative AI, and the tantalizing possibility of a multimodal AI that acts more like an all-around personal assistant.
Immediately following this session, Karthik Mahadevan will be available to take questions live in a breakout session.
Can we enlist AI to accelerate human-led work in alt text and audio description?
To watch the recently released “All The Light We Cannot See” with audio descriptions “on” is a revelation, at least for a sighted person. The audio description uses words sparingly to augment the obvious soundscape and to call out subtle details anyone might easily miss. It’s art only a human team could produce (sorry, AI proponents), but it’s also expensive and time-consuming. In that regard, producing alt text for images online and audio descriptions for video faces the same challenge: how to do more and do it well. At Scribely and MAX, the human-first approach is paramount, but they are also exploring how AI and related tech can be narrowly channeled to speed up their vital work.