Disney to Sue X Over AI Images

Key Points

  • The speaker predicts that Disney’s lawyers will soon sue Elon Musk because X’s new image‑generation AI lacks any safeguards against producing trademark‑infringing depictions of Disney characters.
  • Disney’s litigation history—having helped shape much of modern copyright and trademark law—means it will aggressively protect its IP, and other celebrities are likely to follow suit for unauthorized, realistic portrayals.
  • Unlike other generators (e.g., Midjourney, ChatGPT, Gemini, Copilot) that employ dedicated teams to enforce copyright, trademark, and safety policies, X’s model was released with virtually no guardrails.
  • This unchecked freedom invites “bad actors” who can create misleading, photorealistic images that could spread misinformation or depict violence, damaging both public trust and the platform’s reputation.
  • The lack of responsible moderation positions X as a liability risk, potentially prompting legal action and broader societal backlash against the company.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=fOx5qeb0Bac](https://www.youtube.com/watch?v=fOx5qeb0Bac)
**Duration:** 00:07:24

Sections

  • [00:00:00](https://www.youtube.com/watch?v=fOx5qeb0Bac&t=0s) **Disney Suing Elon Over AI** - The speaker warns that X’s new image‑generation AI, which lacks safeguards against unauthorized use of Disney characters and celebrity likenesses, will soon trigger lawsuits from Disney and other public figures.
[0:00] I do not often make specific, concrete predictions about the future in tech, because I think that predicting the future is inherently a fool's errand. But I'm going to make one now: the lawyers for the Disney Corporation are going to be suing Elon Musk very, very shortly. The reason is that there are, as far as we can tell, absolutely no guardrails on the image-generation AI that his company X just released.

If you release an image-generation AI that lets users show Disney characters like Mickey Mouse doing things that are against the brand, without consent from the corporation, then you, as the company that built the model, are headed straight to court to talk with Disney's lawyers. Disney's lawyers literally wrote the book on trademark and copyright; a lot of our copyright law comes from the Disney Corporation and the way it has worked with the court system to protect the intellectual property of Disney characters. And I'm calling out Disney specifically here, but I'm not saying they're the only ones. A lot of celebrities are going to come for X as well, because another guardrail that just doesn't seem to be there is one preventing you from building celebrity images showing celebrities doing things they've never actually done. That's going to be a problem, especially as these images are near photorealistic. People are going to look at them and say, "Oh, this is what this political figure did," when they didn't do it, or "This is what this musician did," when she didn't do it.

[2:03] I recognize that part of the value proposition X is building is that it is a place where you, almost in a libertarian sense, are the one responsible for your choice of speech, and that you should be free to express yourself. So I wasn't surprised to see that the model had virtually no guardrails. But consider the contrast with the very structured guardrails on other image generators and other large language models in the wild, like Midjourney, ChatGPT, Gemini, and Copilot. These are all models with entire teams of highly paid professionals whose job is to ensure that they're not used in an unsafe manner, and that they're not used in a manner that violates copyright or trademark. If you build a highly capable model now, after all of those guardrails are in place for other models, and simply ignore all of that and say you can do whatever you want, you're going to get into a situation where no one will believe you if you say you couldn't prevent it. It will look more intentional than it would have eight, twelve, or twenty-four months ago, and you are going to attract the kinds of bad actors that society at large views as reprehensible. So you're doing some brand damage. I've seen pictures of violence on Twitter, on X, that came from that model's generation of artwork, images that were shocking and that should not just roll through a social network without any kind of consent, without any kind of guardrails at all. Most of the people using these networks are supposed to be grown-ups, and I get that this is a grown-up environment, but that does not mean you can violate IP and trademarks and expect to get away with it for long.

[4:23] So I'm paying attention to this, because typically when these things happen, they don't stay in the wild long. I expect there will be some sort of injunction, some sort of ruling from a judge that says either you need to put guardrails into this model now, or you need to pay for damages and take the model down. Something like that is my guess.

[4:51] The problem is that guardrails have to be deeply rooted to work in LLMs. You can't just slap something on at the prompt level, at the very end. Yes, Apple's prompts leaked, and there is some prompt guarding there, but you also have to think about making sure that the training materials you're using are not materials that give the LLM, or the image-generation tool, features inside its latent space that let it produce harmful material easily. These tools are trained on so much data that you cannot catch every single thing they look at, but there is something to be said for making sure you are not intentionally training them on the kinds of material that will cause real problems down the road for you, the brand that built the model.

[5:51] This actually came up, you may recall, two or three months ago, when Google rolled out their AI answers system without full guardrails, and you started to get really, really hateful Google answers that seemed highly correlated with similar answers on Reddit. That is the kind of thing where guardrails weren't fully baked in, and Google went back and fixed it. We don't talk about it as much anymore because Google's team addressed it: they had tried to put guardrails in, they hadn't put enough in, they had a whole team on it, and they fixed it. I don't think it will be quite that fast here. If you have intentionally built the entire structure of your business, your brand, and your org to not put guardrails in, it's going to be hard to add them now.

[6:40] So I don't know what's going to happen for Elon and X, but I am sure paying attention, because we have not seen a GPT-4-class model, a highly sophisticated image model, come out this publicly with this few guardrails in a while. Maybe since the launch of ChatGPT. Yes, there are always ways to jailbreak; yes, there are models on other parts of the web, not as publicly available, that are intentionally jailbroken. I know those things exist. But putting it in a public brand, where you own the brand and you have hundreds of millions of people in the app, is a different thing, and I think the lawyers of the Disney Corporation are going to agree. So I will be really curious to see how this goes.
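The distinction the speaker draws between shallow, prompt-level guardrails and deeper, training-time ones can be sketched in a few lines of Python. Everything here is illustrative: the blocklist, the function names, and the example prompts are assumptions for the sketch, not any vendor's actual policy or implementation.

```python
# Illustrative sketch of the two guardrail layers the transcript contrasts.
# The blocklist and function names are hypothetical, not any vendor's policy.

BLOCKED_TERMS = {"mickey mouse", "donald duck"}  # hypothetical trademark list

def prompt_passes_guardrail(prompt: str) -> bool:
    """Shallow, prompt-level check: reject prompts naming a blocked term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def filter_training_captions(captions: list[str]) -> list[str]:
    """Deeper, training-time check: drop captioned training examples that
    would teach the model the blocked concepts in the first place."""
    return [c for c in captions if prompt_passes_guardrail(c)]

# The weakness of the shallow layer: a paraphrase slips straight through,
# and if the concept is in the model's latent space it can still be rendered.
prompt_passes_guardrail("Mickey Mouse robbing a bank")          # blocked
prompt_passes_guardrail("a cartoon mouse in red shorts robbing a bank")  # passes
```

This is why "deeply rooted" matters in the speaker's framing: production systems layer dataset curation, model-level tuning, and output classifiers rather than relying on string matching at the prompt boundary, which a paraphrase defeats trivially.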