Getting around: Autonomous Vehicles, Ride-Sharing and Those Last Few Feet
DESCRIPTION: Summoning a ride from a smartphone is a dream come true for many, but when you have difficulty finding that ride, even when it’s a few feet away, the experience can be a nightmare, not to mention dangerous. How are ride-share and autonomous taxi companies working to make those last few feet from rider to car safer and better for blind and low vision riders?
BRYAN BASHIN: OK. Thank you, Will Butler, and welcome to the autonomous vehicle and accessibility panel. As CEO of LightHouse for the Blind in San Francisco, I can say we want autonomous vehicles to succeed, and there are lots of reasons for that. In these pandemic times, of course, if there’s no other human in the car, there’s no infection. There’s no chance for discrimination if you use a guide dog. And there’s a chance to be alone if you just want to be alone, something sighted people sometimes enjoy.
As blind people, we never have the chance to commute alone in a vehicle until now. And we also don’t have to explain ourselves and answer all those questions that people are infinitely curious about us. And maybe the bottom line is the bottom line. We want rides that are affordable. And if autonomous vehicles without paying for a driver will produce more affordable rides, we’re all for it. But this challenge, the autonomous vehicle challenge, is one of the most ambitious AI issues we’ve ever faced as a society.
And I want to bring Eshed Ohn-Bar into this conversation. Eshed is a researcher at Boston University with a specialty in AI. Eshed, you know, there have been problems with AI and autonomous vehicles already. This summer in Japan, a Toyota automated bus ran into a blind pedestrian in a crosswalk, and all the autonomous vehicles at the Paralympics had to be stopped. And of course, we all remember it’s already three years since the first pedestrian was killed by an autonomous vehicle in Tempe, Arizona, in 2018, through software and other issues that we hope will be resolved. But, Eshed, why is this problem so hard?
ESHED OHN-BAR: Oh, great. Great question. Thanks for having me here. I think there are multiple reasons. There are several systems that need to be combined together to make decisions on the order of seconds, and their implications could potentially be fatal.
And this is something that is incredibly difficult to design because there are so many potential cases that the system could encounter. What happens when one of the systems fails? How do you combine all these pieces together to get seamless navigation? This is the big grand challenge, and obviously, there are going to be unexpected situations, but there are also going to be interactions with diverse environments and diverse people.
And we don’t know yet how to design a system that can work in all the situations. What if you have a pedestrian with different needs, different interaction needs? What if you have a pedestrian that has a different reaction to the autonomous vehicle? Even something like recognizing the cane could be an incredibly difficult computer vision challenge that we just haven’t understood yet because that’s something that may not be represented in current data sets for autonomous vehicles. So I think accessibility really needs to be the focus of the future of autonomous vehicles in order to make sure they’re safe for everybody.
BRYAN BASHIN: You know, you mentioned the cane. This year marks the 100th anniversary of the white cane, which was first invented in Bristol, England. And it was done because of that newfangled thing, cars on the road, which posed a new threat to a blind pedestrian. That was the reason canes started to be painted white. Here we are 100 years later, and you’re telling us that being predictive in a visual system is actually still a very strong challenge. Is that what you’re saying?
ESHED OHN-BAR: Yes, the system needs to recognize parts of the cane or the mobility aid of a pedestrian and understand that a person who is blind and trying to cross the street might take a little longer to explore safely before they cross. It needs to yield properly to these pedestrians and definitely avoid the cane. A high-speed robot, even if it doesn’t hit the pedestrian, may hit their cane, and that would not be the most seamless or most inclusive experience. Yes.
BRYAN BASHIN: Yeah, that’s really interesting. I want to bring Kerry Brennan into this conversation. Kerry from Waymo, you’re a researcher looking at user experience and design, and I just wonder how do you, in your professional work, get the opinions, ideas, and expectations of blind people into the mix?
KERRY BRENNAN: Yeah, thanks so much, Bryan. We’ve done a lot, but we have a lot more to do. We are really lucky to benefit from a number of partnerships. We think it’s so important to have ongoing engagement. Accessibility is not a one-consultation-and-done kind of thing. It’s something that really has to be constant, and so we approach it from multiple angles.
I have wonderful colleagues in my public affairs team who work through our partnership and the Let’s Talk Autonomous Driving group, and we have wonderful partners there like the Foundation for Blind Children, Foundation for Senior Living, United Cerebral Palsy, and others. Of course, we love to work with LightHouse for the Blind. So we have these organizations and institutions that we’re in touch with to make sure that we’re getting feedback.
And then, on the user experience research side, when we’re getting feedback on new features we want to test, or doing foundational research on people’s mobility behaviors and mobility needs, we make an effort to include diverse and representative groups of people. So we want to make sure that we include folks who are blind or who may be visually impaired or who have other needs that might not be considered the majority but that we really want to learn from and need to learn from. We’ve been making a point of including those perspectives throughout our research, both in dedicated studies, where we say this is specifically an accessibility study, and also in a mainstreamed way, where we include those perspectives as part of any study group.
BRYAN BASHIN: And of course, this is a vexing problem because there are about 1.5 million Americans who are legally blind or have no vision, something like that. Only about 130,000 of us, like me, use a cane outdoors to cross the street. 90% of blind people do not use a cane or a dog. How does Waymo seek to predict and design for the 90% of legally blind people who are not identifying as blind but still may have the same challenges that anybody else does?
KERRY BRENNAN: I’m so glad you asked this question because I love talking about the service features of Waymo One, our ride-hailing service and the app and the vehicle, and all of that, and I hope we get to talk about that too. But the thing that powers the service is the Waymo driver. And so when we’re training the Waymo driver and helping teach the Waymo driver how to be a good user of the road, we take great care in how pedestrians are perceived and how we respond to the movement of pedestrians to make sure that we take their comfort into account.
So it’s not just a simple calculation of there’s enough space for our car to get by, so we’re going to take that turn. We want to make that person feel comfortable. We want to be polite, and we don’t assume that the pedestrian can see our vehicle. We see the pedestrian, and we act accordingly. And so I’m really proud of the work of so many of my colleagues on perception and on driving behaviors to take that into account.
There’s a current effort right now, for example, where we’re consulting orientation and mobility experts to actually take the perspective of the behaviors that are taught to individuals who are blind or visually impaired into consideration as we’re also thinking about the way that the Waymo driver behaves. So I think it’s really ultimately about understanding the pedestrian, perceiving the pedestrian, and prioritizing their comfort in such a way that it doesn’t matter — it shouldn’t only rely on a sign like a cane because there are folks who for any number of reasons might move slower than average, might move in a different forward motion pattern than average, and it’s really our job to capture that and respond accordingly.
BRYAN BASHIN: Kerry, I have to wrap my mind around this neologism you just said. When you say Waymo driver, you mean a machine, don’t you?
KERRY BRENNAN: I mean the robot driving the car.
BRYAN BASHIN: OK. All right, that’s what I thought. First time I’ve seen it used in that way. Marco Salsiccia from Lyft: you’re a blind person and heavily involved in Lyft’s autonomous efforts and the other areas where there’s surface contact with blind individuals. What do you do with Lyft, and what are you thinking about in the autonomous space?
MARCO SALSICCIA: Yeah. Hey, Bryan. So particularly with Lyft, I’m their accessibility specialist. I generally focus on the rider app at the moment, making sure that in iOS and Android, those of us who are blind or visually impaired or have any other kind of disability can get rides. And in terms of the autonomous vehicle space, when we were partnered with Aptiv back in 2019, we actually held this fun autonomous vehicle experience at NFB, where we were able to print out tactile graphics and maps to show the route we would have people take and where the driverless cars were going around Las Vegas.
And yeah, that was really well received, and it gave the teams a lot of experience and showed them the thirst that blind people really have for autonomous vehicles. Some attendees knew more about the tech than some of our own engineers did at the time, and the amount of knowledge on display surprised them. So yeah, we’ve had different partnerships, and at the moment, I’m working with the user research teams to make sure that whatever we come up with will facilitate transportation independence across the spectrum of different user needs.
BRYAN BASHIN: Well, you mentioned Las Vegas, and I’ve read some published reports recently of Lyft about to start an autonomous service there. Have you been involved in the accessibility or user experience prior to starting that service?
MARCO SALSICCIA: Not yet. I think we’re partnering with Motional, and in 2023, we’ll be launching driverless cars. But we’re starting the UX research phase and talking with different user groups with different needs about what would work best for the overall experience. I think we did complete a round of research with neurodivergent folks, which brought up a lot of really interesting data about how autonomous vehicles would be perceived, and the emotions and feelings that go into the experience.
But we’re definitely now moving more into the accessibility space, making sure that we’re designing empathetically. So, I believe with Motional’s driverless cars, and with later partnerships with Ford and Argo, we’re looking into the cars and into the experience. It’ll be handled through an app very similar to the Lyft rider app as it is right now, so it’ll be TalkBack and VoiceOver ready. And there will be a variety of audible, visual, and sensory signals within the car. I believe we’re also planning to let riders control car functionality and comfort through the app itself, with a button to actually make everything go.
So once you get into the car, there will be braille and raised lettering by the button. There will be a call-for-help button, and for general controls, we’re making sure that everything will work concurrently through the app or through the car, with audible signals and feedback, like put your belt on or any other kind of rider update. So it’s really interesting seeing what we’re coming up with, and I’ll continue to work with the team on making sure that we’re designing empathetically.
BRYAN BASHIN: I love that you mentioned neurodivergence. In our field, the plain, vanilla blind person rarely exists. Everybody is an intersection of other identities, and so you might want to just speak a little more fully about some of those other identities, multiple disabilities, different ways that you might design an autonomous vehicle to be more inclusive. Could you give us some specifics?
MARCO SALSICCIA: I don’t have any specifics about the actual research itself.
BRYAN BASHIN: OK, all right.
MARCO SALSICCIA: I do know that partnering, asking questions, and talking with the actual groups of people is the important part. The whole concept of designing with, not designing for: that’s really what we, and everyone else, should be aiming for.
BRYAN BASHIN: Well, of course, the user experience, there are some delicious engineering challenges and solutions. You had mentioned you’re thinking about later on incorporating even three-dimensional sounds, something like soundscape, to address this problem, and let me just set it up for everybody. You’re standing on the curb, and you want to know where the car is. How do you do that? Or you’ve taken your ride, you open the door, and then how do you find your way to the door of where you want to go? What are some solutions? I’m going to ask you, Marco, and then go around the panel. That might be helpful here.
MARCO SALSICCIA: Yeah, great. I mean, wayfinding is one of the biggest issues with this. Even as a blind user, how do I get to the car? I know our UX research teams, with Motional and in the partnerships, are starting to explore a variety of different options. I’ve personally brought up partnering with Microsoft Soundscape, but for users who don’t use Soundscape or don’t have iOS, there needs to be a variety of methods: a GPS hot-and-cold method with haptic feedback to alert you to how close you are to the car, honking the horn, flashing the lights.
There are a variety of different things we can be looking at. And to go back to the 2019 driverless car experience that we had, you would order the self-driving car through the Lyft rider app itself. And when you got to your destination, we were able to surface a link to Google Maps to show you walking directions from wherever the car had dropped you off, so you could move on to your destination. So those are some ideas.
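The GPS hot-and-cold method with haptic feedback that Marco mentions could be sketched minimally like this. This is purely illustrative: the function names, the 50-meter range, and the pulse-interval mapping are assumptions for the sketch, not anything Lyft or Motional has described.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def haptic_interval_s(distance_m, max_interval=2.0, min_interval=0.2, range_m=50.0):
    """Map distance to the car onto a haptic pulse interval.

    Closer ("hotter") means faster pulses; beyond range_m, no pulses at all.
    """
    if distance_m >= range_m:
        return None  # car still out of range: stay silent
    frac = distance_m / range_m
    return min_interval + frac * (max_interval - min_interval)
```

In a real app, the phone's location service would feed `haversine_m` a stream of fixes for the rider and the vehicle, and the vibration motor would be triggered every `haptic_interval_s` seconds, so the rider feels the pulses speed up as they close those last few feet.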
BRYAN BASHIN: Appreciate that. I’m going to give Kerry a chance at the same question. Kerry, when curb cuts started to be mandated in the ’70s, which was a huge boon to society, there was one thing that was lost: the easy understanding that you had stepped into the street, because you had stepped off a curb. There had to be other facilitations to fix that.
As we think about going from regular TNCs to a TNC service where there’s nobody else in the car, we lose that driver who can say, oh, you go out that door, and then you head a little bit to your left, and it’s there. So what’s Waymo thinking about to help the user without somebody in the car pointing us in the right direction?
KERRY BRENNAN: Yeah, it’s a great question, and it’s so interesting the way you frame it as the loss of the person telling you where to go. I always like to think of it as: wow, you don’t have to ask another person; you can totally do it on your own. So I think that reframing also makes it sound even more exciting. But we have been thinking about this a lot as far as wayfinding.
We’ve seen it in action because we’re operating our completely autonomous, “no one in the front seat” service down here in Arizona where I live, and we’ve gotten to see firsthand how important some of the things we put in place were, like walking directions: when you get dropped off, you can choose to have walking directions help you get the rest of the way to your destination. And our feature of tapping a button on the phone to honk the car’s horn if you can’t quite find it has been really popular with many, many of our riders.
And even before you get in the car, we heard from our riders that sometimes where the vehicle is able to drop you off is a little farther than you might normally expect. So we’re really clear when that’s the case, and we set that expectation with our riders to empower them to make the choice: you know what, I’m actually not going to take this particular ride; that’s too far of a walk for me. Those are all things we’re really proud of, but we know we’ve got lots more to do. So we’re always excited to think about what’s next.
I know other folks on the call are also involved in the Department of Transportation’s Inclusive Design Challenge, and we’re working on wayfinding in particular in terms of our participation in that. And so we’re exploring some of the ideas that Marco mentioned, thinking about how might we include sounds that aren’t the honking horn, you know, sounds that are a little bit nicer to trigger as you’re trying to just take a calm ride.
How might we include things like haptics? Because to your point about intersectional disabilities, I had a really just enlightening conversation with a research participant who is blind and also has some hearing loss. And she shared with me that even though she’s able to hear with hearing aids, she’s not able to use sound to determine directionality.
And so, for her, the kind of typical like, oh, it’s audible, so that will work for you, isn’t the correct assumption. So she needs something that’s going to be a different signal other than assuming that somebody can tell the location of the vehicle, for example, purely based on sound. And so I’m really excited by things like haptics, just things like improved directions and how we deliver those, as well as things like headlights and other visual cues.
BRYAN BASHIN: And of course, I loved your point about no two blind people being the same. We’re a spectrum of people and concerns and identities. Eshed, you’re doing some interesting research on ways in which a smartphone could actually start learning a person’s unique identities and then adapt its cues and help in order to be personalized. Do you want to speak a little about what you’re thinking and perhaps how far you’ve come along with this idea?
ESHED OHN-BAR: Yes. Especially in this use case that you brought up, you have to think about the navigation experience very broadly, because maybe you have to go from your doorstep at home all the way to an unfamiliar destination like, say, an airport. And in this kind of door-to-door journey, you might travel through different settings: from your home, outside to the autonomous vehicle, then safely out of the autonomous vehicle and on to your destination, smoothly.
So I think AI is a critical technology for enabling that seamless experience, because the question of when and how to deliver exactly the information a navigator needs is such an intricate question. It requires you to understand the context of the scene, the mobility skills and preferences of the person, and obviously a technological system that might also have some failures at some point. So how do you bring all this together, this kind of intricate system of needs and timely information? This is something AI can do. AI gives us the potential to learn these patterns.
So with the systems we’re working on, for instance, even guiding a blind individual to the door handle of the right car can be an incredibly delicate process. How to design for that is not clear. We don’t have operating autonomous vehicles today with which we can test this extensively, again and again. And maybe you have your own set of preferences versus another person’s.
So the AI system will observe you and then tailor and adapt itself to you extremely efficiently. If it said something too soon or too late, it will observe your behavior and, based on that, adapt. For instance, if you’re turning to go to an autonomous vehicle and it says turn right, but your reaction time is a little longer or shorter, it will adapt to you.
If you know where you are and you don’t need to hear that much information because it’s distracting you, it will adjust the verbosity of the system accordingly. These personalized systems are something I envision becoming critical to the design process of any real-world intelligent system because they will enable a new type of robust and seamless interaction with any person.
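The adaptation Eshed describes, observing a user’s reactions and adjusting cue timing and verbosity, might be sketched minimally like this. This is an illustrative toy only: the class name, the exponential-smoothing constant, and the cue wording are assumptions for the sketch, not the design of any system discussed on the panel.

```python
class AdaptiveGuide:
    """Toy sketch: adapt cue lead time and verbosity to one user's observed behavior."""

    def __init__(self, lead_time_s=3.0, verbose=True):
        self.lead_time_s = lead_time_s  # how early to announce a turn
        self.verbose = verbose          # full sentences vs. terse cues
        self._alpha = 0.3               # smoothing factor for the running estimate

    def observe_reaction(self, reaction_s):
        # Nudge the lead time toward what this user actually needs:
        # slower reactions pull announcements earlier, faster ones later.
        self.lead_time_s = (1 - self._alpha) * self.lead_time_s + self._alpha * reaction_s

    def observe_skip(self):
        # The user acted before the cue finished: they know the route, so be terser.
        self.verbose = False

    def cue(self, direction):
        if self.verbose:
            return f"In about {self.lead_time_s:.0f} seconds, turn {direction} toward the car door."
        return direction
```

The point of the sketch is the feedback loop: every observed reaction updates the timing estimate, and observed skips reduce verbosity, so the same guidance system ends up behaving differently for different riders.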
BRYAN BASHIN: How do you bridge that question of trust? I mean, I’m standing in front of a business. I have summoned the TNC. What happens is there’s cars everywhere. I’m standing in a gap between two cars at the curb, hoping that the TNC driver will see me. Now we’ve got an autonomous vehicle, and maybe the thing is going to be required that I step out between those two cars and into the first lane of traffic.
These are leaps of faith, and we’re in this period now, like with that woman who was killed in Tempe, Arizona, in 2018. That Uber autonomous vehicle took three long seconds before it started braking, when a sighted person could have identified her sooner. So I’m just wondering, how do we have that conversation about trust, dependability, and assumed risk?
ESHED OHN-BAR: I think trust is a really difficult challenge. When there’s a person involved in the ride-hailing service, for instance, there’s so much interaction that can happen to facilitate trust. You can call them ahead of time and say, oh, can you see me, or are you approaching? Where are you at? There’s this continual dialogue that facilitates trust. And you can also be aware of any situation that might require you to cancel a ride.
But I think the first step is to integrate individuals who are blind throughout the entire AI lifecycle. There is so much potential for technology, and I think that is understood by the community. But there’s also some skepticism, and it is a natural skepticism, and I think it will take time: as we develop more robust systems, the trust will follow, but it should definitely not be taken for granted. The main way I see us going forward is a very tight integration across the entire AI lifecycle, from beginning to end, with outreach organizations and with individuals with diverse needs. As the experience becomes more and more seamless, the trust factor will increase. Yes.
BRYAN BASHIN: And this is something in our disability community we’ve heard for a while: don’t do something to us, don’t do something for us. Do it with us. We are all at the table. I’m just wondering, as you’re doing advanced research at Boston University on AI, how do you, yourself, incorporate people with disabilities, in our case people who are blind or have low vision, into your research?
ESHED OHN-BAR: Thanks for the question. Yes, we continuously work with local organizations in Boston. One of them is the Carroll Center for the Blind, an outreach organization in Newton that is extremely active in the community. They are helping us with everything from recruitment of participants and prototype design up to longer-term longitudinal studies where we can really test these types of personalized AI systems.
So with the students we have working here, we’re doing user studies, and we’re discussing on a regular basis what some of the challenges and potential solutions are. Some of the classes we are designing here actually create simulations that incorporate canes and individuals with visual impairments.
And so we are trying to make this an educational experience for the next generation of engineers tackling this multifaceted problem, because I think it is a longer-term perspective: how can we solve these technological issues? Just like you said, the cane is still such an effective tool after all these years. The technology will have its time, but I think some maturity still needs to happen before it can be ready for prime time.
BRYAN BASHIN: Thank you. And a similar question to you, Kerry at Waymo. Yes, you’re doing UX, and you’re looking at ways to improve things narrowly, but you have a much more audacious goal in mind, which is really to change and augment Waymo’s culture entirely. Talk about the objective there and how you’re going to get there in terms of blind participation in all the things that you’re doing.
KERRY BRENNAN: Yeah, thanks so much for that question. It’s work I’m really proud of. A lot of it is that almost administrative work that doesn’t feel so shiny and exciting, but in the end, you know it’s going to lead to something that’s really great. So what our team decided we wanted to do was stop thinking about accessibility as if it were separable from good product design.
To us, inclusive product design is the only good product design. If we’re just doing product design with a very narrow definition of our user, we’re missing out on so many opportunities and ways to meet very broad needs. And so we’ve been putting in place practices that make it easier for us to just mainstream participation of people, for example, who are blind or who have disabilities into our user experience research as opposed to saying, oh, we only do that when we have the time or the resources to do a dedicated study, specifically focused on the needs of such and such. And instead, we thought, let’s just think of everybody as part of this group of users and make an effort to incorporate them.
And so that’s looked like having partnerships with local community organizations, for example, with the Access Technology group at LightHouse for the Blind to help us recruit for our studies so that a researcher knows, hey, it’s actually easy for me to do this. It’s easy for me to include blind participants because of this partnership. So I’m not going to let that stop me. I’m going to make it happen. And I think that that’s part of a broader view towards inclusion that we’ve really been pushing towards.
Anybody who has downloaded our app and filled out the screener survey to become part of our trusted tester program in San Francisco will notice that, totally optionally, they can share whether they have any particular accessibility needs, and that allows us to identify and make sure we’re meeting the needs of this diverse group of users. So those are just a few things that we’re thinking about.
Of course, we have more to do. We want more community partnerships, such as what we have with LightHouse, but I’m really excited about the way we’re making it normal. It’s not special. It shouldn’t be congratulated; it should just be a normal part of the process, and we’re really working towards that.
BRYAN BASHIN: That’s where we want to go. And in the meantime, a hard question, an honest question is, Waymo, with a market cap in the billions of dollars, how many blind employees are actually employed by Waymo now?
KERRY BRENNAN: That is a great question, and I don’t know. I don’t have the answer. That kind of sensitive data wouldn’t be shared with someone like me. But what I can tell you is it could be more.
BRYAN BASHIN: Yeah, I think that’s fair. We have a long way to go in many industries. But in this one in particular, since it’s about us, it’s a reasonable question. And we’ll be asking it for a long time to come, which turns me over to Marco, who is a blind employee employed by Lyft, for exactly these considerations. Marco, what’s it like to be a pioneer in this field? You’ve been working with Lyft for a while now. Are you the only one so far? And what are your thoughts about having more colleagues who are blind?
MARCO SALSICCIA: In terms of my exact position as accessibility specialist, working with design and engineering alike, I’m the only one at Lyft doing that. It’s worth pointing out that I’m a contractor; I’m still contracting with them. But we do have blind employees in our trust and safety space, whom I work with regularly to make sure that our internal tools are compliant or able to be used.
It would be great to have more of us specialized in a variety of different fields. We do have deaf employees, and I’ve worked with hard-of-hearing employees. We actually had a really interesting anecdote, just to show the diversity amongst those of us in the space.
One of our programmers was deaf, and we ended up having a meeting about an accessibility issue that had come up with one of our venues, which involved us sitting in a conference room with two ASL interpreters. We worked together through the interpreters and had a completely normal, perfectly cogent conversation, and we were able to solve our accessibility issue for multiple disabilities. So yeah, we do have a good diverse spectrum, but there could definitely be more.
KERRY BRENNAN: I love that story. It reminds me too how important it is that you had those interpreters available, that that meeting could happen. And I think that some of those, again, behind-the-scenes pieces — do you have a contract with an interpretation service so that that is easy and those conversations happen? Because look at the great outcome.
MARCO SALSICCIA: Yeah, and does your internal office team know how to facilitate that and get those contracts? Or, if you’re putting on a presentation, do you have description? Do you have captioning? And the other thing you have to think about: it’s great to have people who understand that and listen to it, who don’t push against it.
BRYAN BASHIN: And so now I’m going to ask you all a crystal ball question that’s entirely unfair again. Each of you is laboring in a corner of a much larger field about getting autonomous vehicles everywhere. We want them as blind people, for the reasons I said at the beginning. How long are we going to wait until this is regularly and usually available? I’m not talking about regulation so much, but technically.
Based upon your sense of the progress of the algorithms in the cars and every other part of the user experience, when do you think this will be ready for prime time, so that the average blind person in the average American city would have a reasonable chance of stepping into an autonomous vehicle like anybody else? We’ll start maybe with Eshed first. What’s your timeline? What’s your crystal ball?
ESHED OHN-BAR: Sure. Thanks. This is a fantastic question. I wish I had a crystal ball. My answer is guided by the fact that the nearest neighbor we have today is maybe something like Siri. Siri is an AI system. It’s in our everyday life, in our pockets, on our smartphones. It helps us. But it also gets things wrong sometimes. So what would be the consequence of that?
The consequence might be that it calls your mom instead of the pizza place. That’s not so bad. The problem with an autonomous vehicle is that when it gets something wrong, it can crash into things, and we as a society need to understand that it will get things wrong sometimes. Because of the severe potential consequences of these systems, I have a longer timeline of decades.
BRYAN BASHIN: Decades, OK.
ESHED OHN-BAR: I really would love to have an autonomous vehicle here tomorrow, but in my humble opinion, it will be at least decades. Yeah.
BRYAN BASHIN: OK, thank you for that honesty. Kerry, what’s your crystal ball?
KERRY BRENNAN: Well, I’m a little bit biased because, according to me, the future is here: I live in Arizona, where I can hail a completely driverless car and take a ride in it. But I know your question is writ larger. I have to say I’m more optimistic than Eshed because I’ve seen it in action, because of what I’ve seen behind the scenes and the wonderful learning we’re having in San Francisco with our trusted tester program. It is a really hard problem, and we don’t want to minimize how important it is to get it right, absolutely right. But I am quite a bit more optimistic than Eshed, and I invite him down to Arizona, especially with the Boston weather cooling rapidly, to have a look.
ESHED OHN-BAR: You should test some of your vehicles in Boston as well.
BRYAN BASHIN: Yes, that’s right. And you have 700 plus or minus Jaguars now in operation in the country, Kerry, something like that?
KERRY BRENNAN: I believe our total fleet size is at about that number, though I can’t guarantee the split between the different vehicle types.
BRYAN BASHIN: OK, and Marco, you’ve got your finger on the pulse of things emerging. We talked about Las Vegas in 2023. What’s your estimate for the average blind person in an average city having the opportunity for AV? How long?
MARCO SALSICCIA: Should I split the difference and say 10 years? I understand Eshed’s point of view, but with AI, there’s a lot that would need to be worked out and a lot of chaos to take into consideration across the country, because no two streets are alike, and cities are completely different, going from rural to dense populations.
Yeah, of course, it’s happening right now, both in Vegas and in Tempe, Arizona, and San Francisco. Or not in Tempe, but in Arizona. There are places it’s already happening, but not to the degree that anyone can just do it right now. We’re still in the testing phase. So I would be cautiously optimistic at about 10 years.
BRYAN BASHIN: OK, well, we won’t have to wait that long. I’m sure we’ll be discussing this in another edition of Sight Tech Global in an upcoming year. I want to thank all three panelists for joining us and really a fascinating discussion, and now back to you, Will Butler.