DESCRIPTION
As homes become increasingly technology-driven, inputs from multiple sources—teachable AI, multimodal understanding, sensors, computer vision, and more—will create a truly ambient, surround experience. Already, 1 in every 5 Alexa smart home interactions is initiated by Alexa without any spoken command. As Alexa develops an understanding of us and our homes well enough to predict our needs and act on our behalf in meaningful ways, what are the implications for accessibility?
Speakers
Beatrice Geoffrin, Amazon
Prem Natarajan, Amazon
SESSION TRANSCRIPT
[MUSIC PLAYING]
WILL BUTLER: All right, thanks, and here I am again. And thank you, everyone, for tuning in to our session with Amazon. I’m really pleased to be here today with Beatrice and Prem. We’re going to have a great discussion, and the title of this session hints at the idea of talking to Alexa less.
And as a blind person myself, and as someone who’s had an Amazon device for so long, I find the idea that talking less could benefit the blind and low vision population fascinating. So we’re going to dive into that. But first, before we get started, I wanted to let you both introduce yourselves and tell me, in your own words, your cocktail party banter: how do you explain what you do for Amazon, Beatrice?
BEATRICE GEOFFRIN: Yeah, hi, my name is Beatrice Geoffrin. Thanks for having us today with you. It’s great to be here. So I’ve been with Amazon for 16 years. And for the last four years, I’ve led the Alexa Trust team.
And so that means the mission of my team is to build the trust that customers have in Alexa. Really, usually, when people hear “trust,” they think about privacy and security. And that’s really important and foundational, and that’s part of my team as well. But at Amazon, we define trust a bit more broadly than that.
We also think about the fact that to build trust with customers, we need to have an experience that is really inclusive and working really well for all customers. And so my team thinks about that as well. How do we make sure that our devices and services work really well for everyone? And so as part of that, we think about how to make Alexa really accessible for everybody, including customers, for example, who are blind or visually impaired. So that’s what we do.
WILL BUTLER: Wonderful, and Prem, what do you do, and how do you work alongside Beatrice’s team?
PREM NATARAJAN: Yeah, first, thank you for hosting us, Will. Always excited to talk about Alexa. So I lead a multi-disciplinary science, engineering, and product team. In a nutshell, the goal of our team is to make human interactions with conversational AI, or AI in general, as natural as they are with other humans.
And one of our north star visions, if you will, is to make Alexa a multimodal AI assistant who’s capable of responding to both spoken language and visual context. By doing that, we enable a number of capabilities that people can leverage to make their everyday lives that much easier and lower-friction. And for that, we work closely with Beatrice’s team and other such teams across Amazon’s devices and services group to deliver those experiences to end users.
WILL BUTLER: Wonderful. I have this very vivid memory from several years ago when I was just getting to know the blindness community myself, and it was right around the time when the first Alexa-enabled devices came out. And I remember installing my Echo and connecting my contacts and looking through to see who else had an Echo that I could call.
And I laughed to myself because I realized it was early in the release of this amazing new tech, and the only people I knew who had it were all blind or low vision people – my blind friends all already had it in their homes. And the blind, visually impaired, and low vision community, we’ve always been early adopters of tech.
But I wonder, Prem, starting with you, why was Alexa and those devices that enabled Alexa, why was that such a game changer for the blind and low vision community? Why was that thing in everyone’s homes right off the bat?
PREM NATARAJAN: Yeah, technologies such as spoken language understanding or image understanding have long been seen as naturally, inherently access-oriented technologies. So as you just pointed to, Will, even early on, you could go to your favorite streaming video service and search, and you’d find a lot of examples of people describing how someone who is blind or has low vision can use these technologies.
Simple things like, what is the time right now, or what’s the weather outside – being able to access that without having to go through a tactile interface like your phone or a laptop. What an absolute game changer. Because now you can just speak to your environment, and your environment speaks back to you with the answers that you want.
And when voice is your main communication avenue, that’s phenomenal. What’s today’s news, and you hear back a summary of today’s news. What’s the traffic like today headed to the airport, and it tells you whether there’s congestion or not. So all of those things just made this natural. Now, the natural question that follows, and maybe we’ll talk about it later, is: given that conversational AI is inherently an access-oriented capability, what’s the future like, and how do we improve it? I’d be happy to chat about that a bit.
WILL BUTLER: Yeah, absolutely. I think it’s interesting. When those devices first came out, they were this amazing new access point for blind and low vision people, but the devices themselves were blind in a sense, in that they only picked up on audio. We’ll talk a little bit more about that. But I want to ask you, too, Beatrice: accessibility is part of the Alexa Trust team now, but was that always the case? Was this mass adoption of the devices expected, or somewhat of a surprise for the Alexa teams?
BEATRICE GEOFFRIN: Yeah, it came very quickly. We very quickly got feedback from blind or low vision customers and mobility-impaired customers, who realized how convenient this direct voice experience was for them – how it gives them, by voice, access to basically a computer and all the capabilities that come with it. We set out to build the device for convenience, and so I think they just embraced that convenience even more.
The device is convenient for everybody. If you’re like me and you’re in the kitchen – you’re cooking, you’re unpacking groceries, you’re taking care of kids – your hands are busy. And the convenience of that hands-busy scenario is even more valuable for people who are, for example, mobility-impaired or low vision. So we saw that. And at Amazon, when we see that customers embrace an experience, that gives us the desire to double down and see how else we can be useful for that community of customers.
WILL BUTLER: Absolutely.
BEATRICE GEOFFRIN: I think the one thing I would add as well, that we’ve heard from customers, is the fact that these speakers are relatively affordable. This is a technology that’s fairly cheap, and we’ve only taken the prices of our devices down. And so something that was maybe the one device you had in the kitchen in the beginning is more and more affordable, and people can now place them wherever they would like – they don’t have to choose where to put them in the house. So that’s also, I think, been helpful in making a device that can actually be accessed by everybody.
PREM NATARAJAN: Great point.
WILL BUTLER: But it strikes me that when it came out, Alexa was already born so human, so natural. The conversations were so natural. And of course, the conversational AI is improving. But, Prem, what’s the theory behind improving it with the bar already set so high?
PREM NATARAJAN: Yeah, one of the interesting things, Will, is that humans, users, we are natural bar raisers. Anytime something becomes satisfying, we seek to push it more and then eventually find its limitations, which then becomes the signal for us to say, oh, people want to do this with this technology, so now let’s make that possible as well.
So it’s a very interactive, if you will, kind of development process. I mean, Amazon’s customer-obsessed culture is very famous. And so any time teams say, oh, customers are trying to push the product in this direction, we say, OK, what can we do?
Now, to frame our forward-looking ideas in this area, I see two key directions. One is what we call self-service AI, and the other is self-directed, or self-learning, AI. These are two pillars of what I more generally like to call the coming age of “self” in AI – with a third pillar of self-awareness, for those who are interested.
Now, self-service AI is really all about the democratization of AI, and by democratization we mean making it equally accessible, in some sense, to everyone, so that you don’t need a computer scientist to be the one who tailors the AI to a specific task. More generally, in the engineering community, this is referred to as the low-code or no-code revolution. In the case of AI, we choose to call it self-service AI. Think of it as us enabling the barista to create an Alexa skill instead of teaching a PhD computer scientist how to make a latte. That’s really the spirit behind this push towards self-service.
Already, there are many such capabilities available within Alexa beyond what was originally launched as just conversational AI, like you pointed out. One of them I’ll call out is routines, which comes from a team that sits very close to Beatrice’s. The routines team makes things easier, and here is where we’ll also point back to the theme of speaking less, Will. You want to do a bunch of things with Alexa, but sometimes they’re clustered. Like when you set the morning alarm, maybe after that you want to hear the news, and then after that, you want to hear something else – you have a set routine in the morning.
What if you could simply have all of that be triggered by the event that your alarm went off, and then the next two things happen? So you’re speaking less, but it’s not like you’re getting less. You’re getting much more from the AI. The AI is doing more. Another example: you might say, “Alexa, goodbye,” indicating that you’re leaving your home, and you might program that to, say, set the temperature at a certain level so that you’re not wasting heat or air conditioning, or turn off the lights, or turn on the alarm, et cetera, et cetera.
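To make the trigger-plus-actions pattern Prem describes concrete, here is a minimal sketch in Python. It is illustrative only, assuming a routine is just an event name mapped to an ordered list of actions; none of these names come from Alexa’s actual APIs.

```python
# Hypothetical sketch of an event-triggered routine: one trigger, several
# actions, no extra speech required. All names here are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Routine:
    trigger: str                                   # e.g. "alarm_dismissed"
    actions: list[Callable[[], None]] = field(default_factory=list)

def play_news() -> None:
    print("Playing your morning news briefing...")

def set_thermostat(degrees: int) -> Callable[[], None]:
    return lambda: print(f"Setting thermostat to {degrees} degrees")

def lights_off() -> None:
    print("Turning off the lights")

ROUTINES = [
    Routine("alarm_dismissed", [play_news]),
    Routine("utterance:goodbye", [set_thermostat(62), lights_off]),
]

def on_event(event: str) -> None:
    """Run every routine whose trigger matches the incoming event."""
    for routine in ROUTINES:
        if routine.trigger == event:
            for action in routine.actions:
                action()

on_event("utterance:goodbye")   # one phrase, several actions fire
```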
The last point I’ll make on this is the self-learning piece, which is about trying to make Alexa continuously better once it’s out in the field interacting with users. Conceptually, I’ll just say: imagine you asked Alexa to play a piece of music, and the playback starts, and you immediately interrupt Alexa. That’s usually a pretty strong signal that something didn’t quite work out the way you wanted. And that turns out to be a super useful signal for us to learn from and say, aha, that was wrong.
How can I now improve my response the next time? And the magic is, it turns out, we can do that entirely without any human intervention and fix all of these defects out in the field on an ongoing basis. Many more examples to share in the rest of the session, Will, but I’ll stop there for now.
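As a rough illustration of that self-learning signal, the sketch below treats an immediate barge-in after playback starts as a negative label for the (request, response) pair. The three-second window and every name here are assumptions for the example, not details Amazon has published.

```python
# Hypothetical sketch: a quick interruption right after playback starts is
# logged as implicit negative feedback on the chosen response.
import time
from collections import defaultdict

BARGE_IN_WINDOW_SECONDS = 3.0    # assumed threshold, purely illustrative
feedback: dict[tuple[str, str], list[int]] = defaultdict(list)

class PlaybackSession:
    def __init__(self, request: str, response: str):
        self.request, self.response = request, response
        self.started_at = time.monotonic()

    def interrupted(self) -> None:
        elapsed = time.monotonic() - self.started_at
        # -1 = likely defect, 0 = ambiguous (user may just have changed plans)
        label = -1 if elapsed < BARGE_IN_WINDOW_SECONDS else 0
        feedback[(self.request, self.response)].append(label)

session = PlaybackSession("play my morning playlist", "playing Jazz Classics")
session.interrupted()   # immediate barge-in: recorded as a defect signal
print(feedback)
```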
WILL BUTLER: Yeah, wonderful. I wonder, on the topic of making Alexa more accessible for everyone: there are so many different types of people in the world with so many different types of accessibility needs. And sometimes accessibility needs conflict or interact with one another – someone in the family has a need that’s different from another’s.
When you’re creating the future of Alexa, mapping out the accessibility roadmap, is Amazon going and seeking out as many different types of people as possible to craft the product around the accessibility needs? Or is it based more on customer feedback coming in, reading the analytics, reading the signs, and steering it that way? Beatrice, maybe you have some insight on this.
BEATRICE GEOFFRIN: Yeah, it’s really both. And it really follows Amazon’s way of designing products in general, which is to start from the customers and work backwards. That methodology is all about trying to understand what the customer needs are, and then seeing how we could meet or even exceed those needs – delight customers – using unique capabilities that Amazon as a company, or our product, has. So that’s our overall philosophy in product development, and we apply it in just the same way to the field of accessibility.
You’re right, needs are very diverse across individuals, across different types of accessibility needs, and even within one community. And so it’s really important to do the hard work to understand what the real customer experiences are. So we do both. We look at customer contacts and feedback through our customer service, and at usage data, to see how customers use our products, along with the anecdotes that we receive. And then we also go and proactively seek out feedback, and present ideas to or seek out ideas from the different communities, to inform what we invent in our products.
If I take the example of the Show and Tell feature with Alexa: this is a product we’ve developed where a blind customer can leverage Alexa in the kitchen to recognize pantry items by showing the product to the camera of the Echo Show, and then Alexa tells the customer what the item is. It can be really convenient for sorting items or knowing what you’re going to use when you cook. And this is an example of a feature that we’ve developed with the blind community, because we’d heard about the complexity of these use cases – identifying items that cannot be distinguished by touch.
And we’ve worked with members of that community. We’ve also worked with organizations that help us contact some of these customers. In that case, we worked with the Vista Center for the Blind in Santa Cruz. They were super helpful in helping us put that product in the hands of blind customers and get their real feedback on how the product works. We can talk later about why that’s important.
So it’s been this hybrid approach of hearing from customers and of seeking them out and presenting ideas to them, and finding through those two methodologies what resonates – what customers feel is a real need that we can successfully meet.
WILL BUTLER: That’s so exciting. I spend all my days thinking about visual description from humans on Be My Eyes, and there are so many diverse use cases where someone might need an item described to them. But I guess if anybody has a database of what that item might be, it’s probably Amazon.
Tell us a little more about Show and Tell, because I think folks are really going to want to know about this. Where is it available right now? How do you get it up and running? Where can folks find this?
BEATRICE GEOFFRIN: Yeah, so customers who are interested in this feature need to have an Echo Show device, which is our Echo device that has a camera and a screen. They don’t need to enable anything. They can just take a product from their pantry, present it in front of the hardware – we have a guiding system – and ask, “Alexa, what am I holding?” Because they don’t know what they have in their hand.
And then Alexa will first guide the customer to orient the product so that it’s in front of the hardware. And if Alexa sees there’s a product but can’t tell where the label is, Alexa might advise the customer to turn the product around until Alexa can actually recognize the item. This is where the feedback from real customers was helpful, because the guidance system includes audio cues that help the customer place the item, and designing that required iteration with real customers giving feedback on how to make it work really well.
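As a rough illustration of that guidance loop: keep prompting the user to reorient the item until the recognizer is confident about the label. The recognizer stub, the confidence threshold, and the prompts below are all invented for the example, not Amazon’s implementation.

```python
# Illustrative sketch of a Show and Tell-style guidance loop. The recognizer
# here is a stand-in; a real system would run a vision model on camera frames.
import random

def recognize(frame: object) -> tuple[str, float]:
    """Stand-in for the vision model: returns (label, confidence)."""
    return ("ground cinnamon, 2 oz", random.uniform(0.2, 0.99))

def show_and_tell(max_attempts: int = 5) -> None:
    for _ in range(max_attempts):
        label, confidence = recognize("camera_frame")
        if confidence > 0.9:                      # assumed threshold
            print(f"It looks like {label}.")
            return
        print("I can see a product, but not the label. Try turning it around.")
    print("Sorry, I couldn't identify that item.")

show_and_tell()
```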
And those are the kinds of experiences that have to work really well to be efficient for customers. If it’s going to waste their time, customers are impatient. They have a lot of things to do. They have busy lives, again, and so we were really grateful to all the blind customers who helped us build the product so that it was efficient.
WILL BUTLER: That’s awesome. And so presumably Show and Tell would be available in any Alexa product that’s camera-enabled in the future, right?
BEATRICE GEOFFRIN: Right, yeah. So that’s now one of the many, many capabilities that Alexa has. And I think you said it right: Amazon has a database of products. That’s how Amazon started – selling products. So we have a lot of knowledge about products. And so we are able, again, to meet unique needs with some of our Amazon-unique strengths: this device with a camera, which can be seen as the eyes; the Alexa intelligence and voice; and then Amazon’s knowledge of products.
You’re right, there is a lot of further applicability in this world of visual description. We’ve heard from customers that they would like Alexa to be able to tell them expiration dates, for example. And then you could imagine a number of interesting use cases where Alexa could remind you of your expiration dates, and so on and so forth. I can’t really talk about the future-looking roadmap, but the applicability is broad in the kitchen. And then you can think about other places in the house, or even outside the house, where something similar could be useful. So I think we’re just at the beginning of this journey.
WILL BUTLER: Now you’re speaking my language. These are the things that I’m thinking about every single day, so I want to double tap on this. At the beginning, we talked about this, and Prem, you’ve already given us a few examples, like the no-code idea of teaching Alexa a skill. Again, the title of this session is about talking to Alexa less. Can you go a little deeper into some of these examples of what you mean by a more human, more nuanced Alexa?
PREM NATARAJAN: Yeah, so I know the title of the session, like you said, can be scary to somebody who’s blind: well, what do you mean, Alexa is going to talk less or speak less? The whole value here is what we hear from it. The intention is for Alexa to continue speaking, but for it to require less from the user to get the right information out of Alexa.
If you look at the trajectory of AI overall, we’re moving increasingly towards a regime where AI is more context-aware. It’s more aware of the state of the environment, all of which can make it capable of deriving insights that help you.
So maybe it learns over time approximately when you leave work, or your set of behaviors when you leave work, and it can automatically turn down the thermostat for you and maybe turn off the lights. So that’s one example where it’s speaking less but actually providing a lot of value to you.
But here’s another example, more closely connected to conversational AI. Many of us, whether kids or grown adults, like to use our own ways of referring to things. Maybe you want to say, Alexa, set my lights to Christmas Mode. And we thought, oh, it would be great if Alexa could ask: what does that mean? I don’t know what that means. Can you teach me? And then the user says, yeah, that means turn the house lights green – just making up an example.
Or you walk into a room and say, I feel really cold here. And Alexa says, oh, would you like me to raise the temperature? And then you say, yes, can you set the temperature to Cozy Mode? And it says, I don’t know what Cozy Mode is. Can you teach me? And you say, yeah, it’s 72 degrees, this Cozy Mode.
And then the next time, you just say, Alexa, set the temperature to Cozy Mode. And if you have an Alexa-enabled thermostat, it does that. So this is another way of making Alexa more useful and more accessible: you’re freed from having to remember exactly how to get something done, because you can actually teach Alexa how you like to say things. And so that’s another, conversational way of teaching.
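Here is a minimal sketch of that teach-and-reuse exchange: an unknown user-defined phrase triggers a teaching prompt, and the learned definition is stored per user. The flow and every name in it are illustrative assumptions, not Amazon’s implementation.

```python
# Hypothetical sketch of "teachable AI" for a personal temperature concept.
personal_concepts: dict[str, dict[str, int]] = {}   # user -> phrase -> degrees

def set_temperature(user: str, phrase: str) -> str:
    concepts = personal_concepts.setdefault(user, {})
    if phrase not in concepts:
        return f"I don't know what {phrase} is. Can you teach me?"
    return f"Setting the temperature to {concepts[phrase]} degrees."

def teach(user: str, phrase: str, degrees: int) -> None:
    personal_concepts.setdefault(user, {})[phrase] = degrees

print(set_temperature("will", "Cozy Mode"))   # asks to be taught
teach("will", "Cozy Mode", 72)                # "yeah, it's 72 degrees"
print(set_temperature("will", "Cozy Mode"))   # now sets 72 every time
```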
You might also say, hey, I have a preference for you to respond with this kind of answer when I ask this kind of question. Those are all things that this teachable AI, as we call it, allows people to do to personalize Alexa for themselves. Looking out into the future – as Beatrice said, we don’t talk about our future roadmap or specific launch plans – you can imagine a combination of things. Wouldn’t it be great if you walked into a room, and the AI knows the room is cold for you, and it senses human presence in the room, and then adjusts the temperature automatically? That goes towards the spirit of speaking less and doing more.
WILL BUTLER: Yeah, yeah. And I also imagine when it comes to accounting for different speech patterns, you’re really addressing the needs of not only children but adults as they age, which, as we know, has a huge overlap with blind and low vision community. And so if we’re helping more people as they age use Alexa more effectively, we’re helping more blind and low vision individuals, right?
PREM NATARAJAN: Indeed. And so Beatrice may have some things to talk about on adaptive listening and speaking, so I’ll hand it over to her to talk about a few of those features that are more tuned towards the customer experience being accessibility-oriented.
BEATRICE GEOFFRIN: Yeah, I am actually very passionate about our aging customer segment as well. Similarly, we’ve seen great feedback from aging customers, or their caregivers, about how Alexa is, again, directly accessible for them. They might not want to learn a new technology, but they’ve been speaking their whole lives. And so for them, using Alexa is very natural. And yet, there are things that we can do to make the experience even better.
And so we’ve launched a feature – actually, it’s a feature that might be interesting for all of your audience – where you can set Alexa to speak faster or slower, depending on your personal preference. And we’ve seen, for example, that aging customers enjoy having Alexa speak a little bit slower. It helps them make sure that they understand better what Alexa says back.
We’ve actually heard from some blind customers that they enjoy Alexa speaking a little bit faster, because they are used to faster listening rates from some of the other experiences that they use. So that’s an interesting feature.
Right now it’s a setting. You have to go into the Alexa app and decide, for your device, what speaking rate you would like Alexa to use. As Prem described, our vision for the future is to make Alexa even more natural. So could we imagine a world where Alexa understands what each customer prefers and adapts its speaking rate to the user?
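For a sense of how a speaking-rate preference could be applied at response time, here is a small sketch using SSML’s prosody rate attribute, which Alexa’s speech markup does support; the per-user preference store and helper function are hypothetical.

```python
# Sketch: apply a stored per-user speaking-rate preference when rendering
# a response as SSML. The preference store here is purely illustrative.
rate_preference: dict[str, str] = {"grandpa": "slow", "will": "fast"}

def render_speech(user: str, text: str) -> str:
    rate = rate_preference.get(user, "medium")   # default speaking rate
    return f'<speak><prosody rate="{rate}">{text}</prosody></speak>'

print(render_speech("will", "Here is a summary of today's news."))
# -> <speak><prosody rate="fast">Here is a summary of today's news.</prosody></speak>
```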
WILL BUTLER: Wow, and just know what the user wants to hear, yeah.
PREM NATARAJAN: Or even learn dynamically through a conversation. I mean, we are not static. As we know, humans are not static in our attributes, Will. We like to mix it up. Sometimes we are reflective. Other times we’re all about speed. So, yeah.
WILL BUTLER: Or tired – how you’re different in the morning or at night, something like that.
PREM NATARAJAN: Indeed.
BEATRICE GEOFFRIN: Yeah.
WILL BUTLER: I like this “imagine a world” thought experiment. So without revealing anything about the product roadmap, can we imagine a world where Alexa truly rises to meet the needs of a blind or low vision person? Where is this all going, in each of your visions of what you do?
BEATRICE GEOFFRIN: I think we’ve touched on a number of things already, but I would point to this concept of describing the world – helping blind customers know what’s around them in situations where that’s particularly useful to them. I think this is, again, a place where Alexa, with a camera, a brain – I call it that; the AI – and a voice, could play a role.
And then there’s this concept of just having to do less. The customer should not have to pull up the app and do things in the app. We know that’s doable, but it’s not convenient for blind customers.
So I really like it when Prem talks about teachable Alexa, because that’s almost programming Alexa, or creating settings on Alexa, by voice. You just use your voice to teach Alexa to do things for you. So it’s that concept of doing more, again, by voice, so that Alexa understands you, adapts, and sometimes even anticipates things that the customer would otherwise have to go and do.
PREM NATARAJAN: And maybe on the AI side, to complement what Beatrice just said on the experience side: the world we like to imagine is one of ambient intelligence, where it’s not so much about the individual pieces of hardware or the individual products, but about how they come together, along with the intelligence they embody, to create a fabric of intelligence around us that responds to us as we move through this ambient environment.
So when we think about it that way, we can imagine the AI completing tasks on your behalf. Maybe in order for it to do that, you need to have agents that can talk to each other. So we imagine an interoperable voice environment where voice agents talk with each other, and maybe they complete tasks for you.
We imagine an ambient intelligence where all of these products and services are available for you, and they are responsive to you when you need them to be. But then they recede into the background, and they’re just there waiting to help you when you need them. They’re personal, they’re proactive, and they’re able to learn insights over time that tell them exactly when something is useful to be done, et cetera.
So from the AI perspective, that’s the world we imagine. And, in a way, we’re making tangible progress towards that world – whether it’s just some products talking to each other right now, like having your smart home instrumented so that it’s all accessible through Alexa, with smart blinds and a smart door lock, et cetera. That’s the world we imagine.
WILL BUTLER: Yeah, it’s interesting. I’ve been trying to wrap my head around this idea of what an ambient AI means, and it almost contradicts the idea of a very human AI. We think of Alexa as having become this person in our lives, but ambient is moving away from being a person and into being a more intelligent environment. Is that putting it correctly, Prem?
PREM NATARAJAN: In a way. But if you think about how Beatrice’s team talks about the north star vision for Alexa as a ubiquitous assistant, it’s always assisting us in completing our mission or the tasks at hand. So yeah, you’re right. What you’re basically alluding to, Will, is the fact that this sounds like it’s beyond human.
When we talk about “human,” we’re talking about the naturalness of the interaction – about the ability to understand us the way other humans understand each other. Not so much about the fact that you can ask it to lock a network-connected door and it can. That is clearly the role of technology. And so this is really about naturalness connected with the power of technology to make things easier.
And Beatrice, anything to add on that north star vision of what Alexa could do for people with disabilities?
BEATRICE GEOFFRIN: Yeah, I think this concept of disappearing into the background and being there, doing things on the customer’s behalf, removing some of the work from the customer’s side, is what we are striving for. How do we double down on making each piece of the experience a little bit less work for the customer?
Maybe it’s a very small example and a baby step in that direction, but we launched a new feature yesterday called Conversation Mode, where you can bring Alexa into the conversation if you are with someone in front of a device and are trying to get some information. A typical example would be that you are with a friend in front of an Echo device, and you want to decide where to go for dinner. So you’re looking for restaurant suggestions, and you can say, “Alexa, join in the conversation.”
And now you can talk to your friend about where you want to go for dinner, and sometimes ask Alexa a question without having to say “Alexa” every time, because Alexa has joined the conversation. At that point, you can just ask a question, and Alexa will understand that the question is directed to the device and will answer.
And so it’s a small example of making it just slightly easier for the user to get access to the information they want, because they don’t have to wake up the device and say “Alexa” every time. And then once you’ve picked the restaurant you want to go to, Alexa exits the conversation, and you’re back to the normal mode. It’s a baby step, I think, but it’s a concrete example of what we mean by removing friction, making all these interactions more natural and easier. And all the friction we remove is just so many seconds we give back to the customer to do the other things they want to do – interact with the people around them and live their busy lives.
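A rough sketch of the gating idea behind a wake-word-free mode: while the mode is active, the assistant answers only when a device-directedness score, fused from acoustic and visual cues, clears a threshold. The cues, weights, and threshold below are invented for illustration; Amazon has not published this logic.

```python
# Hypothetical sketch: answer only speech that appears directed at the device.
DIRECTEDNESS_THRESHOLD = 0.7   # assumed value, purely illustrative

def device_directedness(looking_at_device: bool, paused_for_reply: bool,
                        asr_confidence: float) -> float:
    """Toy fusion of cues; a real system would use a learned model."""
    score = 0.5 * asr_confidence
    score += 0.3 if looking_at_device else 0.0
    score += 0.2 if paused_for_reply else 0.0
    return score

def maybe_respond(utterance: str, score: float) -> None:
    if score >= DIRECTEDNESS_THRESHOLD:
        print(f"Alexa answers: {utterance!r}")
    # otherwise stay silent: it's side conversation between the two friends

maybe_respond("any good sushi places nearby?",
              device_directedness(True, False, 0.9))    # 0.75 -> answered
maybe_respond("I had sushi just yesterday, though",
              device_directedness(False, False, 0.7))   # 0.35 -> ignored
```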
WILL BUTLER: Yeah.
PREM NATARAJAN: It’s an interesting launching point, actually. I like the introduction of Conversation Mode into the discussion, because it ties well with the ambient concept: it brings together spoken language context and visual context to make it a much more natural, friction-free experience for the user to interact with Alexa in an extended session. Alexa could always handle context across turns.
You ask, what’s the weather today, and it answers. And then you say, what’s it like in Boston, and it knows that you’re asking about the weather in Boston today. So that kind of context it handles, of course.
But this Conversation Mode that Beatrice just talked about raises the bar – to use your phrase from early on, Will, about how we are constantly raising the bar. It raises the bar on the naturalness of interaction: multiple parties can be part of the interaction with Alexa, and Alexa knows its place in the conversation and when it’s appropriate for it to speak.
WILL BUTLER: I love that. And a final thought: when assistive technology devices started being created, just like computers, they were big and bulky. And blind and low vision people who are any older than I am will remember walking around with these big backpacks full of all these different devices.
And over the years it’s gotten smaller and smaller, more compact and easier to take with you, and then it made its way into our mainstream devices. And what I’m seeing here is the idea of it becoming invisible. And I like that idea. That’s an idea that I can get behind. So thank you both for joining me and for coming to Sight Tech Global and supporting Sight Tech Global and the Vista Center. If you have any final thoughts, I’d love for you to share them.
BEATRICE GEOFFRIN: Will, thanks for having us. I would say we love delighting our customers, and we also love when our customers are not delighted and are vocal about their needs, and the [INAUDIBLE] community is great at that. So we love feedback and anecdotes. We want to hear from our customers what they want to see next. So really, if you or your audience have ideas for us, we want to start from our customers and work backwards – work from the needs that we can meet and see how we can delight customers.
WILL BUTLER: We’re good at feedback. If we have feedback, where do we go? How do we share that feedback?
BEATRICE GEOFFRIN: Contact Amazon customer service. If you do that, the feedback always finds its way back to the product team – that’s part of our mechanisms. So that would be a great way.
WILL BUTLER: Yeah, OK, wonderful.
PREM NATARAJAN: Underlining what Beatrice said – I think she summarized it perfectly. It’s been great to participate in this forum with the two of you, Will and Beatrice. Thank you for organizing this, and thank you for the opportunity. And yes, please do let us know what we can do better. And also, if you feel like it, what’s really working well for you, so we can take that as inspiration.
WILL BUTLER: No one was ever mad at a little bit of positive feedback, that’s for sure. Well, thank you both, and I hope you enjoy the rest of the event. Thanks so much again for joining us.
BEATRICE GEOFFRIN: Thank you.
[MUSIC PLAYING]