
Nano Banana Pro Beats ChatGPT

Key Points

  • ChatGPT’s “code‑red” response to Google’s Gemini 3 rollout includes a new image‑generation update touted as up to 4× faster, but side‑by‑side tests against Nano Banana Pro show it consistently underperforming.
  • Nano Banana Pro’s image generator embeds logical reasoning directly in the generation process, producing more accurate diagrams and business‑relevant visuals, whereas ChatGPT generates code for the diagram and then “photographs” it, leading to misaligned or incorrect outputs.
  • The self‑editing loop ChatGPT introduced (intended to catch and fix errors) often triggers long, ineffective edit cycles; for example, a 20‑minute loop on an alphabet test that yields no quality improvement.
  • In practical use, Nano Banana Pro delivers higher‑quality images faster and with fewer complications, despite occasional minor flaws, indicating a clear performance edge over the latest ChatGPT version.

Full Transcript

# Nano Banana Pro Beats ChatGPT

**Source:** [https://www.youtube.com/watch?v=biJqOrsYN70](https://www.youtube.com/watch?v=biJqOrsYN70)
**Duration:** 00:13:38

## Sections

- [00:00:00](https://www.youtube.com/watch?v=biJqOrsYN70&t=0s) **Nano Banana Pro Outshines ChatGPT** - In a side‑by‑side evaluation of nine business‑focused image‑generation challenges, the presenter finds Nano Banana Pro consistently surpasses ChatGPT 5.2, delivering faster, more accurate, and better‑reasoned visuals, especially for diagrammatic tasks.
- [00:03:10](https://www.youtube.com/watch?v=biJqOrsYN70&t=190s) **Celebrity Image Editing Evaluation** - The speaker compares slide outputs in a dual test that places the celebrity Keira Knightley in an atypical teaching scenario to assess each model's ability to render diagrams, handle perspective shifts, and preserve likeness while avoiding copyright concerns.
- [00:06:25](https://www.youtube.com/watch?v=biJqOrsYN70&t=385s) **Funnel Charts and Fictional Maps** - The speaker praises Nano Banana Pro for its superior detail in graphing and its accurate generation of fictional maps from P.G. Wodehouse novels, while criticizing ChatGPT's seemingly polished but ultimately flawed graph output.
- [00:09:49](https://www.youtube.com/watch?v=biJqOrsYN70&t=589s) **Model Output Failures vs Successes** - The speaker critiques one model for producing incorrect revenue data and unusable diagrams, then contrasts it with another that successfully generated a humorous Venn diagram and a complete opportunity‑solution tree.
- [00:13:03](https://www.youtube.com/watch?v=biJqOrsYN70&t=783s) **Promoting Nano Banana Pro for Diagrams** - The speaker is assembling a set of prompts for turning long presentations into business diagrams with Nano Banana Pro, asserting its superiority over the latest ChatGPT despite benchmark evaluations.

## Full Transcript
ChatGPT continues a code-red response to Google. For context, they've been in this code-red mode for a while, since Google launched Gemini 3. ChatGPT 5.2 was the initial response to that, and now they're continuing with a new images release that is, of course, aimed at Nano Banana Pro. ChatGPT is claiming faster image generation, up to 4x faster, and they are obviously saying that theirs is quote-unquote better and that it's going to deliver more compelling edit capabilities. I put all of that to the test. I went through and did a side-by-side comparison across nine different challenges with business-relevant implications, and I've got to say, Nano Banana Pro wiped the floor with ChatGPT 5.2, even the new updated version. I will show you the slides in a minute with side-by-side image comparisons, and you will see, for each of the nine, why Nano Banana Pro did a better job.

Before we get into that, just a couple of high-level observations. Number one, there is a different method that ChatGPT is using to generate these images, and I don't think it works well for them. This is particularly true for images that require a lot of logical thinking by the model. If you ask it to develop a diagram that would be appropriate for a PowerPoint slide, Nano Banana Pro appears to use reasoning baked into the image-generation process itself. If it fails, you see a badly conducted set of reasoning with incorrect labels or something like that, and it actually doesn't fail very often. With ChatGPT, on the other hand, what you see is code. If it fails, you see code: it is literally writing the code for the diagram and then trying to photograph the result and bring that to you.
That has concrete consequences. You'll see issues with lining up the diagrams in a way that the model can photograph; the model clearly doesn't quite understand what it's doing. There's no internal reasoning check. It looks like ChatGPT tried to compensate for this by including a self-edit loop in this launch. When I did a children's alphabet test, where you have A for aardvark and an animal for each letter all the way from A to Z, ChatGPT tried to catch itself and edit itself. It got into a 20-minute edit loop and produced about a dozen images, and at the end, the resulting quality was still no better than the initial image. I like the idea of checking and rechecking the work, but I'm not seeing actual quality gains that would justify that kind of time. And despite the claim that this is a very fast image generator, I found in practice that Nano Banana Pro generated the images I'm about to show you much, much faster and with a lot less drama: a lot less thinking, a lot less reasoning. It just got it done and generated an image.

Now, I'm not going to tell you Nano Banana Pro is perfect; you're going to see a few issues as we go through this slide deck. But overall, there is a tipping point where an image model becomes useful for, say, creating a useful PowerPoint slide, and I have several examples in the deck here. Nano Banana Pro has hit that tipping point, and ChatGPT 5.2 isn't there. No other image model is there today; no other image model is as good as Nano Banana Pro today. So with that, let's hop in and see a comparison across nine different slides, side by side.

Okay, here we have a dual test. I wanted the model to take a celebrity and repurpose the celebrity into a different location.
This is an image-edit test. I used Keira Knightley because her image is going to be widely available in training data, and I wanted to see if the model could adequately present her in an obviously unusual situation, in this case teaching how LLMs work. This lets me test whether the model can show a diagram within the image, whether it can handle the perspective shift, and of course whether it can represent a celebrity's likeness correctly. You might think, well, why are we worrying about celebrities? This is relevant because if you include an image of yourself, you want to know if it's going to look like you. So that was really the test. I did not name Keira, because I didn't want to run into any copyright issues. All I did was give each model a blurry picture of Keira Knightley in Pirates of the Caribbean and say, "Please have her teach how LLMs work." What you get on the right is not really a correct image of Keira Knightley, but an overall nice, colorful, very high-level view of how LLMs work. That's ChatGPT 5.2's approach. It's clear Nano Banana Pro knows Keira Knightley: that's a photographically correct image of her, and she's even in costume. The costume was not visible in the source image, so the model decided to put her in it and clearly knew the movie I was referencing. It also produced a much more detailed diagram of how LLMs work, although it's not as visually appealing.

Let's go to the children's alphabet. On the left you see Nano Banana Pro; on the right, ChatGPT. Both models failed, but they failed in interesting ways. In this case, what you'll see is that Nano Banana Pro needed this to be a complete box, so it had Fox, Gorilla and then Fox, Goat again: F and G, F and G.
Individually, these are correct in their cells, but you don't need to repeat those letters. It did take some coaching. I will say that in both cases I had to ask for edits, because the initial versions messed up the X; I had "X-ray" presented by Nano Banana Pro. So we had some issues. The ability to get to a final result was a little bit better with Nano Banana Pro, but not perfect. ChatGPT really fell apart here: zebra appears twice, then some form of W at the end, and then an X way down there. We just didn't get where we needed to go, and this is after multiple edits. So I would say Nano Banana Pro again did a better job, although neither model was perfect.

Funnel diagram slide. Let's go to the professional side. This is quite a detailed slide. If you look, the text is all readable: I can read "completion down 1.2 percentage points week over week, drop-off on password and SSO step." That is a perfectly correct assessment of a leak in the funnel. What you see over here is somewhat less text and a sort of weird funnel illustration. This does not look like the biggest leak in the funnel, even if mathematically 57 is the biggest drop-off from 820. The thing I really want to call out from a quality perspective is that Google has taken the time to draw this entire sequence of graph charts correctly. It is graphed in such a way that it believably goes up and down, point to point, across dozens of points, while the other is just a very light overall version that clearly isn't designed to be a fully functional graph. So from a level-of-detail perspective, Nano Banana Pro wins here, and I don't know what else to say.
I think this is a case where the output is going to look good initially, and then you're going to dive in and say, "Well, it's not quite right." And "not quite right" doesn't work with an image, because you would have to regenerate it from scratch.

Let's look at fictional maps. This measures the LLM's ability to generate spatial relationships and understand how story structures work. I chose P.G. Wodehouse's England because it's a very well-known corpus of books that the models have read, but it's not often mapped. It's not like The Lord of the Rings, where there's an obvious map to reference in the training data. In this case, I think Nano Banana Pro knocked it out of the park. All of these funny-sounding names are actually in P.G. Wodehouse's novels, and the characters are placed correctly: Lord Emsworth is associated with Blandings Castle in the novels, and Bertie Wooster is associated with Brinkley Court, as is Aunt Dahlia. So it got it right: it got the characters correct and associated them with the correct locations in the novels. ChatGPT, on the other hand, really struggled. It named and generated a bunch of points on a map, and it tried to generate a photograph of a paper map, but the result is so blurry and tiny that you can't read it even zoomed in. There's nothing really usable about this; it's just a nice visual concept of a map. And that's kind of the whole game right there: you have to be able to generate a map and actually make it readable. There may be a comprehension issue here with what the ask was.
This may be a situation where ChatGPT took the ask very literally and wanted to list out a bunch of place names, whereas Nano Banana Pro was able to synthesize more effectively across the ask.

Advertisements. This is perhaps more business-relevant. Nano Banana Pro and ChatGPT both did pretty well here. The choice of aspect ratio and layout was left to the models, and I think the overall layout worked better with Nano Banana Pro: the four badges running all the way across, over the car, look really good, and the car is centered nicely. The other is still a fine ad; I don't think there's a huge issue here. There's just a small issue where "safe pickup and drop-off" wasn't handled correctly, because it had to be dropped down underneath the three badges. But overall, not too bad on either count.

ARR revenue is a real problem. Nano Banana Pro correctly built a revenue bridge. A revenue bridge is very simple: you have your starting ARR, green upward marks for all of the additional ARR you gain (new and expansion), red marks for contraction and churn, and then your ending ARR. That's just how it is; it's a very well-defined chart style. In this case, you'll see an example of ChatGPT trying to code this, because it could not photograph what I'm sure it coded, which is "ARR Bridge": it cut the title off at "RR," and it also cut off the notes section. That's not going to work, and you cannot recover it. I checked; the image is the image, and this is just lost. Worst of all, the 4.2 should not be going down to 4.5. It should not have placed upward gains in revenue as declines in revenue. It simply misunderstood the assignment, and this is absolutely not usable.
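For reference, the bridge arithmetic behind this chart style is simple enough to sketch in a few lines. The figures below are hypothetical; the video only mentions the 4.2 and 4.5 endpoints, not the individual segments:

```python
# Minimal sketch of the ARR "revenue bridge" arithmetic described above.
# Segment values ($M) are hypothetical illustrations, not from the video.

def arr_bridge(starting, new, expansion, contraction, churn):
    """Ending ARR = starting + gains (new, expansion) - losses (contraction, churn)."""
    return starting + new + expansion - contraction - churn

ending = arr_bridge(starting=4.2, new=0.6, expansion=0.3,
                    contraction=0.2, churn=0.4)
print(round(ending, 2))  # 4.5 -- upward segments raise the total, never lower it
```

The point of the chart is exactly this sign convention: green bars add, red bars subtract, so a gain drawn as a decline means the model got the semantics, not just the drawing, wrong.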
The Venn diagram is another case where Nano Banana Pro just won straight up. I deliberately gave a challenging prompt that would not have been in the training data: "Please create a Venn diagram of Taylor Swift, product managers, and the Army Corps of Engineers, and make it funny." I got a fairly usable Venn diagram from Nano Banana Pro. It's a little bit wordy, but you can see what it's trying to do. It talks about coordinating massive, high-stakes operations for all three; for Taylor Swift and the Army Corps, they're designing massive, structurally sound stages and infrastructure, and managing leaks, which was a nice, funny touch. The other one just falls apart. There are no visuals to it. I think the model was trying to understand what it was supposed to do, but it wasn't able to make it funny, and it wasn't able to draw it. Ultimately, this is not something that would be usable. Again, you notice the cut-off issue. That's not me taking a bad screenshot; that's how it was produced.

Let's try an opportunity solution tree. In this case, you get a full opportunity solution tree diagram from Nano Banana, with full text all the way through. The text is very consistently styled, and it represents a usable solution tree for onboarding and activation. On the right, with ChatGPT, you get less detail, fewer options, and cut-offs that would make this unusable. It's almost as if it coded the tree again and just cut off what it was able to see from a coded series of boxes. This would not be usable on a slide, because no one is going to accept the "dot dot dot," and Nano Banana understands that and just writes it out.

Let's try an edit.
Editing is one of the things they said was great about ChatGPT. I took a diagram showing a juice-blend composition and simply said, "Please add 20% blueberries and make it correct." Nano Banana was able to do that: orange plus lemon plus grapefruit now equals 80%, and the blueberries equal 20%. This is a believable-looking pie chart. I believe Nano Banana even got the 20% pie slice a little bit wider than the grapefruit at 15% and narrower than the lemon at 25%. So I think it did a fine job. ChatGPT, on the other hand, couldn't do it. It added up correctly (24 + 16 + 40 is 80, and then blueberries are 20), so the math was fine, but it could not draw the pie chart. It just kind of had blueberries spilling out everywhere, and the grapefruit isn't correctly framed. This just doesn't work, straight up. One of the smaller touches I noticed is that Nano Banana correctly put a little blueberry-purple tinge into the drink, and ChatGPT did not figure that out.

So overall, my takeaway here is pretty simple: do not listen to the benchmarks. Do your own tests. And for now, Nano Banana Pro remains the only image model that I would trust for serious business work. If you enjoyed some of these business diagrams and think they're useful, I'm putting together a basket of prompts that I'm using to create those kinds of diagrams, because I think that's one of the great applications for Nano Banana Pro right now: you can take a full presentation, a 60- or 70-page presentation, and ladder it up into a really useful diagram. So I'm going to share some of those over on the Substack; we'll get a whole list of prompts going. It'll be nice.
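As a sanity check on the juice-blend edit described earlier, the rescaling the presenter expects is easy to sketch: existing slices shrink to fill the remaining share, and the pie still sums to 100%. The pre-edit percentages below are hypothetical, back-calculated from the post-edit slices read off the chart:

```python
# Sketch of the "add 20% blueberries and make it correct" edit: insert a new
# slice and rescale the existing fruits to fill the remaining 80%.
# Pre-edit percentages are hypothetical; the video only gives post-edit slices.

def add_slice(mix, name, pct):
    """Scale existing slices to (100 - pct)% of the pie, then add the new slice."""
    scale = (100 - pct) / 100
    rescaled = {fruit: share * scale for fruit, share in mix.items()}
    rescaled[name] = pct
    return rescaled

blend = {"orange": 50.0, "lemon": 31.25, "grapefruit": 18.75}
edited = add_slice(blend, "blueberry", 20)
# lemon scales to 25.0 and grapefruit to 15.0, matching the slices the
# presenter reads off Nano Banana's chart, and the pie still sums to 100.
```

This is the consistency check ChatGPT's output failed visually even though its arithmetic was right.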
But I would recommend Nano Banana Pro right now. I don't care what the evaluations say; I don't care what the benchmarks say. I put the new ChatGPT model through its paces, and it just is not able to