ChatGPT 5.1: Top 10 Takeaways

Key Points

  • Chat GPT 5.1’s most notable advance is its dramatically sharper instruction‑following ability, making it essential to write non‑contradictory, concise prompts and treat prompts like code.
  • The model now strictly obeys system‑level directives (e.g., “don’t apologize” or “use three bullets”), so conflicting instructions can cause odd oscillations and must be debugged first.
  • OpenAI markets the update as “warmer,” but the real breakthrough is its increased agency and utility, offering developers a more reliable tool for building complex workflows.
  • GPT‑5.1 operates with two distinct processing modes—an “instant” quick‑response brain and a deeper “thinking” brain—allowing users to choose speed versus thoroughness as needed.


# ChatGPT 5.1: Top 10 Takeaways

**Source:** [https://www.youtube.com/watch?v=uySTyxsmrxM](https://www.youtube.com/watch?v=uySTyxsmrxM)
**Duration:** 00:20:09

## Sections

- [00:00:00](https://www.youtube.com/watch?v=uySTyxsmrxM&t=0s) **ChatGPT 5.1 Sharper Instruction Following** - The speaker argues that the most significant advance of the November 12th release is its heightened fidelity to user instructions, making the model more agentic and useful while also increasing sensitivity to contradictory prompts.
- [00:03:24](https://www.youtube.com/watch?v=uySTyxsmrxM&t=204s) **Balancing Latency and Reasoning Depth** - The passage explains how to route simple, low‑latency model instances for everyday tasks while reserving higher‑cost, chain‑of‑thought processing for complex queries, treating latency versus depth as a primary design consideration.
- [00:07:36](https://www.youtube.com/watch?v=uySTyxsmrxM&t=456s) **Configurable Personas in GPT‑5.1** - GPT‑5.1 adds persistent personality presets, like quirky or formal, that can be tuned across chats, but they may clash with custom instructions, prompting organizations to establish standards for persona development and deployment.
- [00:12:35](https://www.youtube.com/watch?v=uySTyxsmrxM&t=755s) **Orchestrating Multi‑Step Tool Workflows** - The speaker urges delegating whole sequences of tasks (reading documents, listing open questions, drafting plans) using OpenAI's 5.1 model as an orchestrator over a full tool stack (web search, code execution, file handling, custom APIs), while emphasizing the necessity of clear tool definitions, safety guards, and robust engineering to manage failures and security risks.
- [00:16:26](https://www.youtube.com/watch?v=uySTyxsmrxM&t=986s) **Prioritizing Stable AI Workflows** - The speaker urges teams to replace ad‑hoc prompt hacks with documented, versioned core workflows (e.g., triage, summarization, drafting) supported by prompt libraries and testing, because only such repeatable processes can reliably scale in production.
- [00:19:45](https://www.youtube.com/watch?v=uySTyxsmrxM&t=1185s) **Excitement Over New Model 5.1** - The speaker celebrates the release of the agentic‑build 5.1 model, sharing how effortless workflow upgrades feel like a delightful "Christmas morning" surprise.

## Full Transcript
ChatGPT 5.1 dropped November 12th. It's the biggest update since ChatGPT 5, and everyone is talking about the emotions, the ability of the model to be warmer, and they're all missing the point. The point is that this is the most agentic and useful model that we have seen out of OpenAI, and I want to tell you why. So I'm going to get into my top 10 takeaways. I would love to hear your take. Let's hop right into it.

Number one: sharper instruction following. So what is it? ChatGPT 5.1 is explicitly tuned to follow instructions much more faithfully than ChatGPT 5 or any earlier OpenAI model. OpenAI is framing it as warmer, but the important part is that it's better at following your instructions. The way that shows up is, for example, if your prompt says "three bullets and a one-sentence summary," the model is more likely to do exactly that. If your system prompt says "don't apologize" or "don't restate the question," it's going to try to obey that. The new prompting guide explicitly calls on developers to reduce conflicting instructions, because ChatGPT 5.1 takes instructions super seriously, and if there are conflicts, it's going to try and resolve them. The edge case here is that there's an upside and a downside when you have something that follows instructions. In older models, if you had sloppy or conflicting prompts, they often got averaged out, and people got used to that. Now, contradictions like "be concise" and "explain in detail" are more likely to cause really weird behavior or oscillation. Instruction following is better, but it's still probabilistic. Long prompts, hidden defaults, or vague language will still lead to drift. So if you want to dig in more, OpenAI published a usage guide for ChatGPT 5.1, and they published a prompting guide.
They both call for stronger instruction following and the need to simplify prompts. You have to treat the prompts in your system like real specs. My takeaway here is that we continue to move toward a world where prompt is code. That means you have to separate your tone, your tools, your safety, and your workflow rules if you're a developer, instead of just piling everything into one paragraph in your system prompt. When your behavior is off, your first debugging step needs to be to look for conflicting instructions, not "maybe the model got worse" or "they nerfed it" or whatever. Assume that it takes your instructions seriously. If you're a non-technical user, your settings now matter more. If you tell ChatGPT to be brief, to explain everything, and to sound friendly in the same breath, you are going to feel that friction. You want to keep your instructions really simple and non-contradictory, and your main goal should be to have a visible effect on answer quality from what you write.

Takeaway number two: ChatGPT 5.1 has two brains, Instant and Thinking. Now, you might think this was already true with ChatGPT 5, but it's much more true with 5.1. ChatGPT 5.1 comes in two main variants: Instant is the default fast model, and Thinking is the advanced reasoning model. Thinking adapts how long it thinks: faster for simple tasks, or a much more persistent, long train of thought for complicated tasks. I've already noticed that just playing around with it in the chat, and it's even more prevalent in the API. Developers are also now able to set reasoning effort to "none," which effectively turns 5.1 into a pure non-reasoning model for very low latency use cases. So this shows up in different model options, right? You can go down to the model selector and pick them.
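For developers, the reasoning-effort knob just mentioned might be set like this. This is a sketch against the OpenAI Python SDK's Responses API; the parameter shape (`reasoning={"effort": ...}`) and the model name `gpt-5.1` are assumptions to verify against the current API reference, not something this talk specifies.

```python
# Sketch: choosing reasoning effort per request (OpenAI Python SDK assumed).
# The reasoning={"effort": ...} shape and the "gpt-5.1" model name follow
# current Responses API conventions and may differ in your SDK version.

def build_request(prompt: str, effort: str = "none") -> dict:
    """Assemble kwargs for client.responses.create().

    effort="none" skips the expensive chain of thought for low-latency
    calls; "low"/"medium"/"high" buy progressively deeper reasoning.
    """
    allowed = {"none", "low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "gpt-5.1",
        "input": prompt,
        "reasoning": {"effort": effort},
    }

# Live call (requires OPENAI_API_KEY):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.responses.create(**build_request("Summarize this ticket", "none"))
```

Keeping the request assembly in a pure function like this also makes the effort choice easy to unit-test and log, separate from the network call.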
If you're on Atlas, the browser, or if you're on auto, the surface may just auto-route you, which we've seen before. Simple requests in practice are going to feel snappier than full thinking mode, but still smart, and harder questions are going to trigger visibly longer thinking. I had questions run for multiple minutes that did not take that long on equivalent questions on ChatGPT 5. Now, "none" doesn't mean dumb. You still get language skill. You actually still get its tool calling. You just don't get the expensive chain of thought. And so more reasoning is not always better. For some tasks, overthinking can actually produce incorrect, convoluted answers, unnecessary tool calls, stuff you don't want. There will be workloads, both for non-tech and tech users, where Instant is clearly better. The implications for tech are pretty clear. You need to think about latency versus depth as a first-class design parameter. You'll be routing known-pattern tasks, templated replies, very simple transforms to something like Instant, and you're going to reserve Thinking and higher reasoning effort for problems that actually deserve it. So cost, speed, and reliability trade-offs now depend on how you route across those modes, and that needs to be a first-class object that you think about when designing systems. For non-tech, you no longer have to guess why the model is slow. You can use the quick model for day-to-day stuff and it will be good: emails, summaries, simple exploration. You only need to switch to the thinking model if you want to really wrestle with a big decision, a complicated document, really confusing data. You have that power.
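Treating latency versus depth as a first-class design parameter can look like a small router in front of the model call. The task categories, the length threshold, and the effort labels below are illustrative assumptions, not anything from the release itself.

```python
# Sketch: route known-pattern work to a low-latency path and reserve deep
# reasoning for hard problems. Categories and thresholds are illustrative.

KNOWN_PATTERNS = {"templated_reply", "simple_transform", "triage"}

def route(task_type: str, prompt: str) -> dict:
    """Return model settings for a request: fast path or thinking path."""
    if task_type in KNOWN_PATTERNS and len(prompt) < 2000:
        # Instant-style: skip chain of thought, keep tool calling.
        return {"model": "gpt-5.1", "effort": "none"}
    # Thinking-style: pay for the long train of thought.
    return {"model": "gpt-5.1", "effort": "high"}
```

The point of making this a named function rather than an inline `if` is that routing rules become something you can version, test, and tune as cost and latency data come in.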
In practice, it's going to feel more like a ski slope, where you're riding either a lot of power at the top with a long thinking parameter, or it will very quickly drop off to Instant. It's less of an even slope, if that makes sense.

Number three: prompts should be framed, again, as mini specifications. They're not wishes. The 5.1 prompting guide explicitly treats prompts as small specifications that define role, objective, inputs, and output format. The model is really tuned to respect these patterns, especially for production agents that run with code, but really for the whole model. And it shows up when you have well-structured prompts. If you say, "You are my project manager. I'm going to paste this context. I want your output to be three risks, three next steps, and a one-paragraph summary of the project status," you'll get predictable and repeatable behavior, because you're prompting in the context it expects. If you have a chatty prompt, it may still work for casual use, but it's going to be very hard to reuse. It's going to be very hard to automate. It's going to be very hard, with a chatty prompt, to get predictable results. I will also call out that we are starting to see diminishing returns on verbosity. One of the risks of very long spec prompts is that you may run into redundant or conflicting rules that backfire. So one of the things I would recommend: if you have a lengthy prompt in an agentic system today, think about reviewing it for conflicting rules using ChatGPT 5.1 Thinking, so that it can call out areas where you have conflicts within the prompt itself that could cause ChatGPT 5.1 to backfire. You want to think in terms of crisp structure and make sure that you have the right-size prompt to clarify roles, goals, and expectations.
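The role / objective / inputs / output-format pattern above is what "prompt as code" tends to look like in practice: a tiny template function instead of a freeform paragraph. The section labels come from the speaker's description of the prompting guide; the exact wording is mine.

```python
# Sketch: a prompt treated as a small specification with explicit
# role / objective / inputs / output-format sections.

def build_spec_prompt(role: str, objective: str, inputs: str, output_format: str) -> str:
    """Render a four-part prompt spec as plain text."""
    return (
        f"Role: {role}\n"
        f"Objective: {objective}\n"
        f"Inputs:\n{inputs}\n"
        f"Output format: {output_format}"
    )

prompt = build_spec_prompt(
    role="You are my project manager.",
    objective="Assess project status from the pasted context.",
    inputs="<pasted project context>",
    output_format="Three risks, three next steps, and a one-paragraph status summary.",
)
```

Because the spec is now a function signature, the same four fields can be version-controlled and reused across workflows, which is exactly what makes behavior repeatable.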
This goes back to what I talked about on Monday, the idea of a clean, Goldilocks-shaped prompt. There's no substitute for the right-sized prompt for the degree of freedom you give the model, and in this case, we're seeing more of that. Give it a right-sized degree of freedom and let it go. For tech, this means you should standardize prompt templates as if they were interfaces. You can actually have, say, a clean "summarize document" and a clean "propose plan." These probably should be version controlled, if they're not already. Consistency across specs is going to matter much more than clever phrasing, and that's going to continue to be a trend. If you're in non-tech, I'm not saying you have to learn jargon, but adopting a simple habit is going to help you a lot. If you can just learn to say who the model should think of itself as, what you want from the model, what you're giving it, and how you'd like the answer formatted, that alone is enough to make ChatGPT in chat mode feel dramatically more reliable.

Number four: configurable behavior. ChatGPT 5.1 leans into configurability. OpenAI calls out "more enjoyable to talk to" behavior. It calls out personality presets like quirky or nerdy. It shows up in your ability to pick or tune how formal or playful you want the assistant to be. And the settings do persist across chats, but combined with stronger instruction following, this means that the tone of the model feels really consistent. It feels like a consistent personality. I think people will emotionally attach to this model a little bit the way they attached to GPT-4o. Personalities remain prompts under the hood. So if you stack your own instructions over the top, they may conflict with a preset, and you'll get mixed results.
For example, if you say "no emojis, be brutally direct," that can conflict with "be friendly, be quirky," and you might get really weird results. Warmer models can also feel too chatty unless you explicitly ask them to be concise. For tech, you can now ship differentiated voices for different agents. You can have a formal enterprise assistant. You could have a casual onboarding helper. You could have a very terse internal tool for engineers. These are just different specification blocks now. They're very easy to work with, but you're going to need internal standards so marketing and legal and support don't reinvent conflicting personas. There's an organizational question now around persona development. For non-tech, you can stop fighting the default voice. Finally, if you hate it being bubbly, you can just tell it not to be bubbly and put that in the rules. If you love bubbly and warm, you can just do that. The thing to do is to make sure that your personality preset plays nicely with your system prompt, so you're not fighting.

Takeaway number five: modes and soft types for behavior. So 5.1 is more literal. You can define simple modes like review or teach or plan, and you can treat them like soft types. Each will have specific rules that you can invoke for structure and for tone just by calling that mode. The prompting guide leans into this pattern for agents really heavily, and I think there are interesting implications for both technical and non-technical teams here. For example, you can say, "When I start with 'teach,' please explain like I'm new. Give one example and provide a three-step practice exercise. When I start with 'critique,' please only point out issues and suggestions, no rewrites." With 5.1, the model will usually respect these kinds of contracts in a way that's reusable.
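The mode contracts above can be wired up as a keyword-to-rules lookup that selects the right rule block before a message is sent. Mode names and rule text are illustrative, not from the prompting guide itself.

```python
# Sketch: "soft-typed" modes mapped to short rule blocks, selected by
# the first word of the user's message. Modes and rules are illustrative.

MODES = {
    "teach": "Explain like I'm new. Give one example and a three-step practice exercise.",
    "critique": "Only point out issues and suggestions. No rewrites.",
    "plan": "Return a numbered plan with owners and open questions.",
}

def apply_mode(user_message: str, default: str = "") -> tuple[str, str]:
    """Return (mode_rules, remaining_message) based on the first word."""
    head, _, rest = user_message.partition(" ")
    key = head.lower().strip(":,")
    if key in MODES:
        return MODES[key], rest
    return default, user_message
```

Keeping each rule block to one or two short sentences matches the advice that follows: short, unambiguous mode definitions get violated far less often than long rule lists.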
These modes are still enforced by vibes, though. They're not enforced by a compiler. The model is good at following instructions, and that's what we're depending on when we use these modes. And so the model will occasionally violate a contract that you set, especially if later instructions contradict the mode. That's why I call them soft types. So if you say "teach, explain like I'm new" and then try to say "I'm super experienced, go deeper," the model may get confused. Mode definitions need to be very short. They need to be unambiguous, and long lists of rules are going to make violations more likely. It goes back to instruction following. So for tech, if you're in application design, you can define explicit sub-modes for the same model (planning or execution or critique or what have you) and swap them via system messages or tags. This gives you very differentiated tools without needing different models. It also makes evaluation much easier, because you can test each mode separately. For non-tech, in plain chat, you can get most of this benefit by using consistent keywords like think, just do it, teach, critique. Each should map to a very clear style in your system instructions. Over time, ChatGPT is going to feel like a toolbox of behaviors instead of just one generic assistant.

Takeaway number six: agentic behavior. You are in a plan, act, summarize world. ChatGPT 5.1 is positioned as a flagship model for agentic tasks: things where the model plans, where it uses tools, where it iterates, not just answers. The cookbook, which is what they released with 5.1, leans really heavily on agents that gather context and plan and verify and summarize, because that's where OpenAI thinks the tools are going.
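The gather-plan-act-verify-summarize pattern can be sketched as a small loop over stub tools. The fixed plan, the replan trigger, and the step cap below are illustrative stand-ins for decisions a real agent would delegate to the model.

```python
# Sketch of a plan -> act -> verify -> summarize loop with stub tools.
# A real agent would have the model write and revise the plan; here the
# plan, replan rule, and step cap are hard-coded stand-ins.

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    plan = ["search", "draft"]            # stand-in for a model-written plan
    notes = []
    steps = 0
    while plan and steps < max_steps:     # hard cap guards against loops
        step = plan.pop(0)
        result = tools[step](goal)
        notes.append(f"{step}: {result}")
        if "error" in result and "verify" not in plan:
            plan.insert(0, "verify")      # replan on a failed tool call
        steps += 1
    return f"Goal: {goal}\n" + "\n".join(notes)

tools = {
    "search": lambda g: f"3 sources found for {g!r}",
    "draft": lambda g: f"one-page plan for {g!r}",
    "verify": lambda g: "claims checked",
}
```

Even in this toy form, the two guardrails the talk warns about are visible: an explicit replan condition and a step budget that prevents infinite loops.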
When prompted correctly, this means 5.1 will outline a plan. It will call tools like search and code and files. It will adjust the plan based on tool outputs, and only then will it give you a final answer. So a coding agent might read files and generate patches and run tests and iterate before proposing a pull request. Now, the agent behavior is not automatic. If your prompt does not spell out planning and verification steps, 5.1 will still happily act like a one-shot chatbot. And more agentic behavior also raises the opportunity for brand-new failure modes. You get infinite loops, you get overuse of tools, you get doing too much when the user just wanted a quick answer. So when you're thinking about this from an engineering perspective, you need to design explicit agent loops: under what conditions should the model re-plan, under what conditions does it re-query tools. Logging, guardrails, and evaluation are becoming very, very important. You're not just calling a model. You're designing a tiny autonomous worker whose behavior is governed by your specification and your tool set. If you're non-tech, start thinking in terms of mini projects. Don't just think in terms of one answer at a time. So, for instance: read these three documents, list the open questions, then draft me a one-page plan that answers as many of those open questions as possible. You're delegating a whole sequence of steps, not just asking for that summary at the end.

Takeaway number seven: tools are now normal. They're not advanced. 5.1 is designed to work with a full tool stack: web search, code execution, file reading, and, for developers, custom tools and APIs.
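A custom tool definition in the style of OpenAI function calling might look like the dictionary below. A clear description, a tight input schema, and an explicit "do not call when" note do most of the work; treat the exact field names as assumptions to check against the current API reference.

```python
# Sketch: a tool schema in the style of OpenAI function calling.
# Field names (type/name/description/parameters) follow the published
# format but should be verified against the current API docs.

lookup_invoice_tool = {
    "type": "function",
    "name": "lookup_invoice",
    "description": (
        "Fetch one invoice by ID from the billing API. "
        "Do NOT call this for refunds or any write operation."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Invoice identifier, e.g. INV-1042",
            },
        },
        "required": ["invoice_id"],
        "additionalProperties": False,
    },
}
```

Note how the schema itself encodes a safety boundary (read-only, no refunds): the model's orchestration is only as safe as what the tool description and input constraints allow.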
OpenAI markets this as the flagship for coding and agentic tasks, with very strong tool-calling performance even in instant or non-reasoning mode. In ChatGPT, it can automatically use search when needed. It can read uploaded files. It can run code in certain contexts. And in your apps, you can actually orchestrate calls to your own APIs. You can orchestrate calls to your databases or services instead of just generating text. There's a lot more flexibility here. Now, we've been calling tools for a while, and we know that tool use isn't magical. The model still needs clear descriptions of what every tool does, what inputs are allowed, and when it should not call a tool, for example for sensitive operations. External tools introduce new real-world failure modes: security issues, API errors, stale data. So you need to think about 5.1 as an orchestrator over your APIs more than a text generator. The hard work for engineers is going to be in designing good tool schemas, in understanding the safety checks that need to be run, in understanding that success will depend on the quality of your tools and prompts rather than just squeezing out a slightly better text response to a random battery of questions from a chatbot. For non-tech, you don't need to know what tools are under the hood, necessarily. You just need to remember you can say things like "use the web and show me sources" or "please summarize this PDF into three bullets for the VP." That's you asking the model to reach outside itself instead of hallucinating everything from within.

Takeaway number eight: it's about reliability. What can you prompt for reliability? OpenAI keeps improving safety and reliability evals like jailbreak resistance, mental health, political bias.
And 5.1's prompting guide explicitly encourages building self-checks and verification into your prompts and workflows. Don't treat hallucination as unfixable magic, which I've been saying for a while, so it's good to see them saying it. You can ask 5.1 to explain its reasoning at a high level. You can ask it to list what should be verified externally. You can ask it to output, in a structured way, what you can automatically sanity-check. These are all things I recommend you do, particularly for higher-value workflows. In agent flows, you can make it verify via tools before answering. Now, even with better safety scores, 5.1 is not perfect. It can still hallucinate, especially when forced to answer without tools or when asked for very obscure facts. Chain of thought is also not a lie detector. It is still possible to get a well-worded but incorrect reasoning trace. The way you need to think about this from an engineering perspective is designing patterns that are safe by default. Right? Answer plus uncertainty plus verification checklist mitigates the risk of hallucination. You want to use tools to validate key claims where possible. You want to build evals that probe for failure modes in your particular domain, where they matter to you. And you want reliability to become a product of your prompt design, your tools, your monitoring, not just "this model's good." If you're in non-tech, instead of just asking "is this right?", I would suggest asking: "Give me your answer. List two things I should double-check before I trust it." Or: "Explain how confident you are, and then explain why." You're using the model to improve your own skepticism instead of just replacing it.
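The answer-plus-uncertainty-plus-checklist pattern can be enforced mechanically: ask the model for a fixed JSON shape, then validate that shape before trusting the output. The field names below are illustrative, not an OpenAI format.

```python
import json

# Sketch: validate an "answer + confidence + things to verify" response
# shape before accepting it. Field names are illustrative.

def sanity_check(raw: str) -> dict:
    """Parse a model response and reject it if the safety fields are missing."""
    data = json.loads(raw)
    if not isinstance(data.get("answer"), str) or not data["answer"]:
        raise ValueError("missing answer")
    if data.get("confidence") not in {"low", "medium", "high"}:
        raise ValueError("confidence must be low/medium/high")
    checks = data.get("verify_before_trusting")
    if not isinstance(checks, list) or len(checks) < 2:
        raise ValueError("need at least two verification items")
    return data

raw = (
    '{"answer": "Q3 revenue grew 12%", "confidence": "medium", '
    '"verify_before_trusting": ["compare against the finance dashboard", '
    '"confirm the quarter boundaries"]}'
)
```

Responses that fail the check can be retried or escalated to a human, which is what "safe by default" means in practice: the unsafe shape never reaches the user.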
Takeaway number nine: workflows are much better than one-off tricks with 5.1. 5.1 is strong enough that the bottleneck is no longer "can the model do this?" It is "do you have a repeatable way of asking the model to do it?" And that's why pattern-based prompting is so important. Teams that build with 5.1 are not necessarily the ones with the fanciest prompt hacks. They're the ones that turn really high-value tasks into workflows that are stable, with versioned prompts, with tools, with output formats. So it's not that ad hoc prompting is bad, right? It can still be fine for exploring. It can be fine for personal use. But if anything touches customers or colleagues or production, you can't improvise. That doesn't scale. You need to document your workflows. You need to share them. You need to test them. So the implication is pretty clean here. You need to be able to identify a number of core workflows: triage, summarization, recommendations, drafting, QA. There's a bunch of workflows you could get into. And you need to invest in making those bulletproof instead of chasing lots and lots of niche use cases. And I've said this before: if you're building with an agentic system, chase your core workflows and make them work. This is where prompt libraries and evaluations and prompt config systems earn their keep. And if you're non-tech, whenever ChatGPT helps you with something that you'll need again, save the prompt that works. Really simple, right? If you got an email that worked, if you got a meeting recap that worked, save it, and then just drop in those details and get a reusable prompt, because five good workflows that you can use every day are going to beat fancy random AI tricks. Number 10, the last one.
The new AI literacy is specifications plus judgment. In the 5.1 era, AI literacy is less about knowing how transformers work, and it's moving more toward two key skills. One is writing simple, non-conflicting instructions or specs, and two is applying human judgment to the outputs. OpenAI's documentation implicitly assumes this. Everything is about better instructions. Everything is about better evaluation. It's not teaching you matrix math, because you don't need to know it. So the people who get the most from 5.1 are the ones who can describe what they want really clearly and then decide whether the answer is good enough. These people don't just ask, "Give me something." They ask, "Give me this, in this form, and here's how I will use it." There's still a lot of value in understanding models at a deeper level. Don't get me wrong. I love it. I love to nerd out on it. Especially if you're setting policy or building infrastructure, it makes sense. But for most knowledge workers these days, we've moved to the point where the biggest risk to your career is overconfidence. If you are not reading good-looking answers critically, if your judgment is not there when you're evaluating AI, if you're unable to write good specs, you're going to be in trouble. Now, for engineers, the implication is pretty clear. Your comparative advantage is no longer knowing models and APIs. It's really designing good human-and-AI systems. It's clear instructions. It's well-chosen tools. It's guardrails. It's monitoring. You are becoming a builder of specs. You're becoming a designer. And the agents are increasingly small autonomous workers you are designing.
And for non-tech, you don't have to become a prompt engineer, but you do need to be able to say what you want without contradictions, and you need to be able to look at an answer and decide if you can trust it. And that's priceless. So, 10 takeaways, a lot to dig into for ChatGPT 5.1. I hope that this has been helpful for you in understanding how the model is different. Each of these 10 is a special point of emphasis in 5.1. These are not things that are generically true of all models. This is especially true of 5.1, and it's true to a lesser degree of other models in the ChatGPT or Claude families. Dig in. Every new model is a new time to get excited. I hope that this one, which feels like an agentic-build model, is going to give you a chance to build some interesting things. I've already heard of people having what I call the Christmas morning we get every few months, where you're building a workflow and suddenly you switch to 5.1 and it just works. I've had that happen a couple of times, and I'd be curious to hear if that's happened for you as well. Cheers. Enjoy 5.1.