

AI & Accessibility with Deafblind Writer

Key Points

  • The episode explores how AI intersects with disability and accessibility, featuring a conversation with Elsa Honison, a deaf‑blind speculative‑fiction writer and long‑time disability advocate.
  • Elsa recounts early experiments with Microsoft’s Copilot AI, which produced distorted images and apologetic responses when asked to depict a mother with hearing aids and blindness, highlighting the technology’s initial inability to accurately represent disabled identities.
  • Both speakers note a rapid evolution in AI tools over the past two years, moving from crude, inaccurate outputs to more nuanced capabilities, yet persistent challenges remain in ensuring these tools serve as inclusive aids rather than reinforcing biases.
  • The discussion delves into practical hurdles of building accessible products, including the need for diverse data, thoughtful engineering, and respectful design that avoids tokenism or over‑compensation.
  • By blending technical, artistic, and advocacy perspectives, the interview underscores that AI can be both a powerful accessibility tool and a potential hindrance if development doesn’t actively center disabled user experiences.

Sections

  • [00:00:00](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=0s) **AI, Accessibility, and Deafblind Perspectives**: The host introduces a conversation with deafblind writer and disability advocate Elsa Honison to explore how AI is applied to accessibility, its dual role as tool and obstacle, and the technical and creative challenges of building inclusive products.
  • [00:03:12](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=192s) **AI Progress on Disability Representation**: Elsa describes how recent models such as ChatGPT and Claude have improved by dropping apologetic language, recognizing disability-specific terminology, and depicting disabled bodies accurately rather than erasing them.
  • [00:06:58](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=418s) **AI as an Adaptive Assistive Tool**: Elsa explains how she uses AI to read text, locate visual information, and organize daily tasks, a privacy-preserving alternative to human-assisted apps, and points to other accessibility professionals who champion similar technologies.
  • [00:10:54](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=654s) **Beyond the Checklist: Inclusive AI Design**: The speakers argue that treating accessibility as a compliance checkbox, such as ticking off WCAG, fails to serve the more than one billion users with diverse disabilities, and they urge AI builders to actively understand and design for this large, often overlooked audience.
  • [00:14:37](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=877s) **Avoiding Accessibility Tech Debt**: Elsa cautions against postponing accessibility fixes as tech debt, argues that AI overlays aren't genuine solutions, and urges built-in accessibility support in emerging low-code "vibe coding" platforms.
  • [00:17:54](https://www.youtube.com/watch?v=T_yHQbZ4hF0&t=1074s) **AI Browsers for Accessibility Audits**: The speakers discuss leveraging Comet and Claude Haiku 4.5 as AI-powered browser agents to generate SEO punch lists and automate accessibility QA on live websites.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=T_yHQbZ4hF0](https://www.youtube.com/watch?v=T_yHQbZ4hF0)
**Duration:** 00:20:18
[0:00] Finally, we are going to dive into the vexed question of AI and disability and accessibility. And I'm going to be interviewing a surprise guest to dive into this with me. Help me understand: how is AI being used for accessibility? How is AI both a tool and a hindrance? What are some of the challenges that come with building product in the world of accessibility? We're going to get into the vibe coding side, the engineering side a little bit. We're going to talk a little bit about even the artwork and drawing side. Very fun stuff. So have fun. I hope you guys enjoy this. I love doing interviews.

[0:33] Okay, we have a fun guest spot today. I want to interview my spouse, and there's a special reason for that. Elsa, do you want to introduce yourself and tell us a little bit about why we both thought it would be a great idea for you to do a video?

[0:50] >> Sure. So my name is Elsa Honison. I'm a deafblind speculative fiction writer and nonfiction writer. I've been spending the last 16 years of my life doing disability advocacy that has rolled into tech, and we've been talking a lot over the last two years about how disability and AI have been intersecting, which is why we thought it might be interesting to have a conversation.

[1:14] >> Yes. And for context, I've been the one that has come back to you and said AI this, AI that, what about this and that. And I feel like that conversation has really shifted over the last 24 months. Maybe it would be helpful just to give folks a sense of where we've come from and how far we've come in the last two years with AI and disability.

[1:35] >> Yeah. So two years ago, I think we were using, you know, it's been so long since I used it, because it was kind of crap, that I don't remember the name anymore.
[1:45] >> No, it wasn't even ChatGPT. It was the Windows Microsoft Edge one. Oh, Copilot. Early Copilot.

[1:52] >> It was early Copilot. So it was early Copilot. We were using it to generate images, and I made coloring book pages that were custom for our kids. One of them was a unicorn corgi and one of them was a skeleton corgi. And I was like, "Oh, you know what might be really fun is to do an image of me with kids, because it was close to Mother's Day or something."

[2:13] >> And I asked it to do a picture of a mom with one eye, with hearing aids and glasses, with kiddos, and it couldn't do it. The first version was this weird surrealist, almost Picasso thing that was vaguely terrifying. The second one gave the child the blindness, but not the adult, which I thought was an interesting choice. And it kept apologizing to me. It kept saying, "Oh, I'm so sorry that you're blind. I'm so sorry that you're deaf. This is terrible." I was like, "Excuse you. I don't want you to apologize to me." And because I am a disability activist, I started playing with AI to understand whether or not this was just Copilot. And what I discovered is that nothing could replicate my blind eye. It kept kicking it out and changing it into two eyes that matched. And so somebody would be doing the Renaissance painting selfie game and I couldn't do it, because it would turn me into a two-eyed person. So we started there, and I just kept asking questions of the AI. I kept saying things like, "Hey, what do you think about disability?" And again, it used to apologize. Over the last 24 months, ChatGPT and Claude have both stopped apologizing for my disability, which is frankly better than most average people.

[3:27] >> It's progress.

[3:28] >> It's progress. Claude even knows disability policy.
Like I can go into Claude and ask it questions about disability language choices. And Claude will say, "Well, I know person-first language. I know identity-first language. Which one do you like?" So it's starting to get it. And today there was an article, I think it was on BBC.

[3:49] >> Yeah. Are you talking about the one where someone was talking about how her body was able to be drawn now?

[3:55] >> And that was a BBC article. Yeah.

[3:56] >> So she's a prosthetic wearer, and she was able to get AI to draw her prosthetics. And I hadn't run a test in a while. I thought, I'm bored with this. I'm tired of just constantly seeing my disability erased. So I didn't want to keep trying. And then today I tried it, because I saw the article, and it's able to mostly do it. Sometimes all the way; it depends on the prompt. But now it actually will draw my cataracted eye. It will draw my hearing aids. It doesn't try to make me a non-disabled person through an AI lens.

[4:26] >> Yeah. I remember you and I had some conversations about why that was an interesting problem for the modelmaker community. And I'm really curious. I don't have an inside story on how they solved that one, but there's presumably some kind of reinforcement learning, or perhaps some new images they've ingested, that are helping the AI figure out what to draw when asked for a prosthetic or a hearing aid.

>> I mean, I think it's interesting because, from my perspective, I've been pushing a lot publicly, saying we need to have AIs that are trained to respect disability and to see it, because that's been an issue in previous AI experimentation before.
If anybody is familiar with the Moral Machine project from MIT, this is back in 2019, but they were testing the trolley problem using AI, and they fed in a whole bunch of different kinds of bodies that might be crossing the street. It's a really dark thought process, but what was darker is that there were no disabled people in the training data. So the only way you could think about whether or not the autonomous vehicle was going to hit a disabled person was if it were an old person or a child or a dog.

[5:37] >> There were no wheelchairs. There were no white canes. And so it just showed a lack of information. And I talked with people at the Allen Institute about this. I've talked with people just generally about the issue of not seeing or talking about disability within AI spaces and how dangerous that can be. So we're talking about a really sort of fun example of just being able to see yourself, but I will say that being able to see yourself, even in silly selfie games, matters for people's inclusion within community. And so it's all in some ways kind of serious. If we don't envision disabled people in a future-thinking world, we're not envisioning disabled people at all.
>> I think that's a really interesting segue, because one of the things that I've been thinking a lot about is this concept of AI as a universal enabler, a technology that helps all of us do a lot of things well. Which makes it, one, very difficult to talk about, because everyone's experience of AI is sort of their own, but it also means that it applies in a lot of surprising ways. And so, as much as you've talked here about struggling with getting AI to see you, I think there's also a side of it where I see AI as at least potentially extremely powerful as an accessibility tool, and I'd be curious for your thoughts there as well.

[6:56] >> It absolutely is. So I'll talk about my experience first, and then I have a couple of people I can shout out who people can go research. I use AI to do things like: if I can't see something far away, I can take a picture of it and tell the AI to read it to me, and then it becomes large print. If I'm trying to find something on a wall, like handwriting, or something in a whole packet of things, I can say, this is what I'm looking for, and it can zero in on that. I've used it to read prescription bottles. And I'll tell you that there used to be an app for that. It's called Be My Eyes, but I never liked that app, because it required me to interact with a stranger, a volunteer on an app, who would read something for me. And I am well known enough in the disability community that I did not want to go on an app where somebody could see my name and where I was and say, "Hey, tell me how to get around this airport that I'm currently in by myself." This just seems like a bad idea. Or blind people having to have someone read their credit cards. So now the AI can do that for me.
These are all examples of ways that you can use AI as an adaptive aid. Another one is people with ADHD sometimes use it to track medications. They'll talk to their AI agent, or they'll say, "Hey, here's what I did four hours ago. What do I do next?" And then there's people like Jesse Laurens, who you can find on LinkedIn. She's also an accessibility professional like I am. She's blinder than I am. And she used AI to help her take a cross-country trip on Amtrak. She basically used it to take pictures and tell her where she was going. It also helps her in her kids' classroom to see her kids' artwork. I do the same thing. This is an application of AI that allows people to interact with the world the way that a sighted person would, without having to be a sighted person.

[8:46] >> You know, I love the range of examples that you included there. One of the things that has come up a little bit in the Substack chat has been the range of possible use cases around neurodivergence. Not just ADHD, but we also have autistic folks in the Substack chat who are talking about how they're using AI. Maybe I'll try slapping a principle or a layer on it and see how you feel about it. It feels like one of the things that makes AI really compelling as an adaptive aid or a support in these situations is that it can fill in the spaces that you need it to fill in, and disappear when you don't want it to be there. You don't have the privacy intrusions associated with another person, the way you were talking about. If your ADHD is all about hyperactivity, maybe it's about focus, right? And about how you can get focus. If it's more about the inability to get into flow state, but you can be on the task, then you can work on that.
And there's just different ways to get engaged. I'd be curious if that resonates for you as sort of a universal take, or if it feels too prescriptive.

>> I think the way I would frame it is actually that it puts adaptability in the hands of the disabled person. One of the major issues with adaptive aids, and with sort of our culture of accessibility, is that it often relies on external forces to give you the access that you need. So, as an example, for somebody who needs a guide dog, you need that dog with you all the time, every day. You are also responsible for that guide dog all the time, every day. And that means that you are sort of externally required to use something that's not just right there and can be put down. And so I think your example of ADHD focus and flow, where you don't necessarily need to take a pill if you have something that can help you go through the guideposts: it's allowing you to take control of your accessibility in a way that's really meaningful. And I think that's true for the blindness examples that I was giving you as well. I don't have to rely on another person. I don't have to ask someone for help. I actually can take personal control and autonomy, and that's very rare in accessibility.

[10:52] >> So let's pull on that thread, because I'm really curious. I know that in your day job you think about product and accessibility a fair bit. I think about product a lot. It feels like you might have some perspective for builders, for people who are constructing with AI, on how they can think about accessibility beyond just WCAG, because I had to run WCAG for chatbots back in the day, and I feel like that can't just be the only answer, right? Like it can't just be, okay, well, we ticked the box, we're done, right?
>> I mean, it's not. And I'll give you the number one example why it's not. WCAG only solves for one disability at a time, and there are one billion disabled people in the world. One in four Americans are disabled.

>> People didn't know that. I bet most people watching had no idea.

[11:33] >> I bet most people had no idea. One in four people in the United States are disabled. And the disabled population in the world is roughly the size of China.

[11:41] >> In other words, there are more disabled people than there are users of ChatGPT right now.

[11:47] >> Yes, that's correct. So, if you're thinking about those numbers, then the number one lie that you hear in building conversations is that there are no disabled people using your product, because there are absolutely disabled people using your product. So then the next question is, well, what does accessibility really mean if WCAG doesn't solve for everything? And the answer is to get to know your audience and to get to know what your users can and can't do. Now, not every single product is made for every single disability equally. I don't expect, for example, a video game to be perfectly accessible if it is a visual thing. But I do expect you to have accessibility controls so someone can try. That is the way that we get to things like what's outside of WCAG. Well, what's outside of WCAG is lots of things. And so really look at problem solving through logic rather than problem solving through checkboxes. If you are solving something with just audio, think about whether or not a deaf person can actually access the content.

[12:48] >> Because at the end of the day, what you're solving for is equal access to information and equal access to experience.
And every experience might look different depending on what disability you need to think about.

[13:00] >> So I want to throw another question at you that feels related, and I'm sort of curious how you think about it. We are going to be in a world, in call it 10 to 12 months, where the Intel deal with Nvidia is done and there are Intel chips that are LLM-friendly in a lot of laptops, which are going to enable what I call local inference, or local LLMs. From a practical experience perspective, I think about it as: we are almost on the verge of a world that feels like Cluely all the time. So the Cluely experience is where you have this almost glass-like overlay on the screen, and it talks at you in text while you are talking or having this conversation or whatever. I don't have Cluely on, I promise, but it's just always there and it's always on. It's like a layer. It's very similar to what Meta has come up with with their glasses approach, where you put on the glasses and it's like an always-on layer. I am a little bit worried, or a little bit curious, maybe both, that having that kind of technology in glasses, having that kind of technology in laptops, is going to very much offer the opportunity to app builders to give up on the accessibility problem and say, "Well, the good news is, you know, the new version of Cluely can just see the screen and we don't have to worry about this," or, "The glasses that you'll be wearing can just sort of take care of this for you," and they just sort of punt the problem down the road, because we're expecting an intelligence layer to catch it.

[14:28] >> Oh god, that's going to be so expensive for you to fix if you don't do accessibility at the beginning. So tech debt for accessibility is very real.
People will often say, "Well, we'll just fix it in post." And that's how you end up with massive tech debt. So the first thing that I would tell people is: don't build up the accessibility tech debt, because you will regret that life choice. But I also think that the same thing is true for accessibility in terms of websites. Overlays don't do the job. They're not actually accessibility. And so I think it's the same thing with AI. AI overlays are not accessibility. The user may choose an LLM as an accessibility tool themselves; the accessibility there is the user making that choice. But you can't force a user to adapt using LLMs, because I don't think that's going to function very well, for a variety of reasons. One of them being that you can't possibly solve for every version of accessibility when it comes to using an AI agent.

>> I guess maybe my last question, and then I'll leave it to you to sort of wrap up. I am curious. We are in a world where it's not just developers building apps anymore. It's also vibe coders who are using services to build, and there are sort of accessibility implications there. I know that tools like Lovable have made big strides on the security front recently, on bringing the backend into the tool. I'm curious: I haven't seen any telegraphed updates on accessibility from those tools. Do you feel like that is a Lovable- or Bolt-level problem to solve, so vibe coders get support for that? Or how would you frame that?

[16:00] >> I do think that things like Lovable need to be thinking about that, because if you start building it in on a base level with that kind of a product, it opens the door for people like vibe coders to learn accessibility, versus relying on every single vibe coder to do the right thing.
And as much as I think all the vibe coders want to do really cool stuff, I want vibe coders to have the tools, and a little bit of a nudge, to make sure they cover everybody, including the users they can't think of, because you can't think of every product user, right? That's not realistic even when you're just thinking about non-disabled people. So expecting everybody to know everything doesn't work. And that's where I think that larger companies have an ethical responsibility to give people the support to do that thing. So I do think that Lovable needs to build the nuts and bolts, give people the opportunity to learn these things, because then you're training people in skills that they may not have, or may not have even thought they needed to get.

[16:58] >> Have you ever handed a screenshot or something like that to an LLM and asked it to generate an accessibility critique?

[17:06] >> I have.

[17:07] >> What happened?

[17:08] >> It didn't catch everything.

[17:10] >> Tell me more.

>> So, it knew a lot. It understood things like being able to see contrast, but because it wasn't looking at the website itself, it couldn't catch things like the link wasn't an accessible link, because it can't see that through a screenshot. It wasn't able to catch things like whether or not the web page was able to be read out loud using a screen reader, because if you're using a screenshot, that information isn't available to the LLM. I would be very curious to see how to solve for screen reader use with an LLM. I haven't really played around with that a whole lot recently, but it's definitely something I should look into more. Maybe I'll report back another time.

[17:52] >> You could. I think one area that I would be curious for your take on is agentic browsers as accessibility reviewers.
So, two examples come to mind. Comet, I think, is really interesting, because Comet lets you pull up the sidebar and examine a living site that it will navigate in any way you want. As an example, I have done a terrible job with SEO optimization on my own personal site, and I was like, I need a punch list. So I asked Comet to go through my site, navigate it, and figure out the issues with SEO, and it was able to look at multiple pages and come back with a very complete list.

>> Oh, that's a really interesting thought. I haven't used Comet to try doing accessibility work yet.

[18:36] >> The other one, which is in research preview now, but we're all going to get it soon, is Claude Haiku 4.5 in Chrome. So, not in the app.

[18:47] >> And so, if you're in Chrome and you install the extension, then Claude can act as a browser agent for you, and people are starting to use it to automate QA testing of live sites, because Claude can go through and do that kind of navigation.

[19:02] >> I would say I would trust Claude, because I've had really good experiences with Claude around WCAG and with accessibility in general, and I've noticed that that particular model is well trained on disability and accessibility.

[19:15] >> Which sort of fits with Anthropic's constitutional AI approach.

[19:18] >> It does. I think that they definitely have the market on that particular aspect: looking at disability as part of the ethos of who they serve. And I think it's interesting that you can kind of tell, just from talking to the different products, what comes up.

[19:34] >> Cool. Any final words of wisdom?

[19:37] >> Well, I think two things. One, don't trust ChatGPT to write perfectly accessible code. Please double-check it.
It makes mistakes. And two, because one always puts in a plug at the end of an interview: I have a book called Being Seen, which is out now. And I have a second book that's coming out next fall called Dear Blind Lady, and it basically answers all your questions about disability, even questions like these.

[20:01] >> And stuff. Well, thank you for coming on. I had a good conversation. I don't think this gets talked about enough. I'm glad we were able to have a conversation.

[20:07] >> I mean, fortunately, we live in the same house.

[20:09] >> Yeah, we talk about it a lot, but the world in general...

[20:12] >> It's true. All right. Thanks for having me on.

>> Of course. Talk soon.
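The limitation Elsa describes with screenshot-based critiques (an LLM looking at pixels can't tell whether a link has an accessible name or an image has alt text) is exactly why markup-level audits matter. As a minimal, hypothetical sketch of that kind of check, here is a standard-library Python example; the `A11yAudit` class and its findings are invented for illustration, and a real audit would use a dedicated engine such as axe-core:

```python
# Sketch: the kind of markup-level check a screenshot cannot support.
# Parsing the actual HTML exposes attributes that are invisible in pixels.
from html.parser import HTMLParser

class A11yAudit(HTMLParser):
    """Collects two simple WCAG-style findings from raw HTML."""
    def __init__(self):
        super().__init__()
        self.findings = []
        self._in_link = False
        self._link_text = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Images need alternative text (WCAG 1.1.1 Non-text Content).
        if tag == "img" and not attrs.get("alt"):
            self.findings.append("img missing alt text")
        # Track link content to check for an accessible name.
        if tag == "a":
            self._in_link = True
            self._link_text = attrs.get("aria-label", "")

    def handle_data(self, data):
        if self._in_link:
            self._link_text += data

    def handle_endtag(self, tag):
        # A link with no text and no aria-label has no accessible name
        # (relates to WCAG 2.4.4 Link Purpose).
        if tag == "a":
            if not self._link_text.strip():
                self.findings.append("link with no accessible name")
            self._in_link = False

audit = A11yAudit()
audit.feed('<a href="/x"><img src="x.png"></a><a href="/y">Read more</a>')
print(audit.findings)  # ['img missing alt text', 'link with no accessible name']
```

Both findings here come from attributes and element structure, not rendering, which is why neither would be recoverable from a screenshot alone.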