Learning Library

← Back to Library

Observability vs Monitoring: Mythbusting

Key Points

  • Myth 1 ("just a name"): APM and observability are not interchangeable; APM was built for visibility inside monolithic runtimes, while observability is designed for complex microservice ecosystems and must cover every component, from front ends to legacy back ends.
  • Myth 2 ("log love"): relying solely on logs for diagnostics is an anti-pattern because it eliminates real-time monitoring, so issues are detected only after they impact users.
  • Effective observability combines metrics, traces, and logs with proactive monitoring to detect and address problems before they affect end users.
  • Integrating monitoring data with log information speeds up troubleshooting and prevents the destructive consequences of a logs-only strategy.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=IQn3W8EedvA](https://www.youtube.com/watch?v=IQn3W8EedvA)
**Duration:** 00:10:29

Sections

  • [00:00:00](https://www.youtube.com/watch?v=IQn3W8EedvA&t=0s) Untitled Section
  • [00:03:29](https://www.youtube.com/watch?v=IQn3W8EedvA&t=209s) Observability Pricing Myths Debunked: The speaker highlights how proactive monitoring reduces incident impact, then dispels the "sticker shock" myth by comparing flat per-host pricing with usage-based models for observability tools.
Observability? Monitoring? Aren't they the same thing? That's what a lot of people think. But we're going to debunk that myth and five others today in this video. Let's get to it.

Myth number one: Just a name. What do I mean by that? I'm talking about how a lot of people think there's not much difference between application performance monitoring and observability. The truth is, they're built for two completely different problem sets. Application performance monitoring, or APM, was built around the concept of seeing inside runtimes, things like Java or .NET. The reason it works for APM is that in the world of runtimes, especially monolithic runtimes, all your backend systems are tied to that same runtime, and all your front-end requests come into that same runtime. So if you have visibility there, you can see everything that's going on in your system. But observability is built for modern applications that run on top of microservices environments, with much more complex systems. And while there may be some runtimes in those systems, that's not enough to understand exactly what's going on throughout the entire microservice application. The only way to really do that is with observability, which can monitor all parts of the system, even going back to backend systems like your mainframe, so that you see a full picture of everything that's going on in the world of your applications.

Well, that was myth number one. Now, let's take a look at myth number two: Log love. What's log love?
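Before moving on, the core distinction in myth number one, a single runtime's view versus a request crossing many services, can be illustrated with a minimal sketch. The service names and the header key below are hypothetical, chosen only for illustration:

```python
import uuid

def call_service(service, headers, spans):
    """Each downstream service records its work under the caller's trace ID."""
    spans.append((headers["x-trace-id"], service))

def handle_frontend_request(spans):
    """Mint a trace ID once, at the edge, and pass it along to every hop."""
    headers = {"x-trace-id": uuid.uuid4().hex}
    for service in ("auth", "catalog", "mainframe-orders"):
        call_service(service, headers, spans)
    return headers["x-trace-id"]

spans = []
trace_id = handle_frontend_request(spans)

# Every hop shares one trace ID, so the full request path, legacy backends
# included, can be stitched back into a single picture.
assert all(tid == trace_id for tid, _ in spans)
print([svc for _, svc in spans])   # prints ['auth', 'catalog', 'mainframe-orders']
```

An APM agent embedded in one runtime sees only its own hop; propagating an ID across hops like this is what lets observability follow the request end to end.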
Log love refers to something I found out a few years ago from an analyst I was talking to. He asked me if I had heard about the situation where people were taking their metrics, traces, and logs, which everyone thinks of as observability, and not doing any monitoring, but rather just writing all that information into log files, and when a problem occurred, going to the logs to solve it. This isn't just a myth; it's an actual anti-pattern. An anti-pattern is something that seems like a good idea but produces the exact opposite result. In this case, that opposite result can be destructive to your environment, your applications, and your business. Let me explain why. If you're not monitoring, that is, looking at all the different pieces of your environment, plus seeing how your end users are being affected by monitoring their performance as well, and doing this in real time as it's happening, then by the time you find out there's a problem, say, through a trouble ticket, you're already too late to help yourself. But by monitoring, you have the ability to catch things before they happen. And the other nice thing is that by tying all the monitoring pieces together with the log information you do have, you actually speed up your troubleshooting. So not only do you have the chance to get in front of incidents before they impact your users, but when an incident does occur, you can solve it much faster. When I think of M/T/L in the world of monitoring versus logging, I like to put a little "2" on the M: metrics, monitoring, traces, and logs.

Now that we've looked at two myths, let's look at myth number three: Sticker shock. What are we talking about? You probably want to hear about pricing and cost for observability tools.
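The monitoring-plus-logs combination from myth number two can be sketched as a toy example. Everything here is invented for illustration: the "checkout" service, the latency threshold, and the rolling window stand in for a real monitoring backend.

```python
import logging
import time
import uuid
from collections import deque

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")   # "checkout" is a made-up service name

LATENCY_WINDOW = deque(maxlen=100)    # recent request latencies, in seconds
ALERT_THRESHOLD = 0.02                # alert when average latency exceeds this

def handle_request(work_seconds):
    """Serve one request: write a log line AND record a metric for monitoring."""
    request_id = uuid.uuid4().hex[:8]
    start = time.monotonic()
    time.sleep(work_seconds)          # simulated work
    latency = time.monotonic() - start
    # The log line carries a request ID, so when monitoring flags a problem
    # we can jump straight to the matching entries instead of grepping blind.
    log.info("request_id=%s latency=%.3fs", request_id, latency)
    LATENCY_WINDOW.append(latency)

def check_health():
    """Proactive check: fires while the problem is happening, not after a ticket."""
    if not LATENCY_WINDOW:
        return "no data"
    avg = sum(LATENCY_WINDOW) / len(LATENCY_WINDOW)
    return f"ALERT: avg latency {avg:.3f}s" if avg > ALERT_THRESHOLD else "ok"

for _ in range(3):
    handle_request(0)      # three healthy requests
print(check_health())      # prints "ok"
```

The logs-only anti-pattern keeps `handle_request` but drops `check_health`: the data still exists, yet nothing watches it in real time, so the first signal of trouble is the trouble ticket.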
And it's something we should look at, because the reality is that observability tools can be expensive, but they don't have to be. Let me explain why. There is one way of pricing where everything, all the features you get and all the things you do, is inclusive, forecastable, and known, such as charging per host in your environment. This allows you to have a very steady price quarter by quarter and year by year, based on the number of things you're actually monitoring. But there is another way that some observability and monitoring solutions charge, and that is to charge you for other things around the system: the number of applications you're running, the number of users actually using the observability tool itself, or the amount of data you're sending through the system. They might even charge you just for debugging. The problem is that you don't know what's going to happen ahead of time. You're going along at a fair clip, then one or more of these things happens, and you end up with a quarterly surprise charge. How big is that surprise going to be? There's evidence out in the marketplace that it could be $50 million or more. So when you're looking at observability solutions, keep this myth in mind, not because the surprise is definitely going to be there, but because there are ways around it. Look for solutions that have all-inclusive pricing and a fair, forecastable way of giving you that price.

Okay, we're halfway through the myths. Now we're about to talk about myth number four: Who, me? That sounds really weird. Let's talk about it. A lot of people think that observability is built to be used only by site reliability engineers, or SREs.
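Circling back to myth number three for a moment, the forecastability gap shows up clearly in a toy cost model. Every number below is invented for illustration, not any vendor's real pricing:

```python
HOSTS = 200
PER_HOST_MONTHLY = 70.0    # hypothetical flat rate, USD per host per month

def flat_cost(months):
    """Flat per-host pricing: the same known figure every month."""
    return [HOSTS * PER_HOST_MONTHLY] * months

def usage_cost(monthly_gb, debug_sessions, per_gb=0.50, per_debug=2.0):
    """Usage-based pricing: data volume plus per-feature charges."""
    return [gb * per_gb + d * per_debug
            for gb, d in zip(monthly_gb, debug_sessions)]

flat = flat_cost(3)
# An incident in month 3 triples log volume and triggers heavy debugging.
usage = usage_cost([20_000, 22_000, 66_000], [100, 120, 900])

print(flat)    # prints [14000.0, 14000.0, 14000.0] -- forecastable
print(usage)   # prints [10200.0, 11240.0, 34800.0] -- month 3 is the surprise
```

The point is structural, not the specific figures: flat pricing depends only on what you choose to monitor, while usage pricing depends on events you cannot schedule, which is exactly when incidents inflate the bill.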
But the situation with modern applications, and this is something that goes beyond traditional monitoring capabilities, is that observability allows us to give information from the systems to the individual people and organizations that need it. So, for example, you could get end-user information to your marketing team. You can get performance of different runtimes to your development organization. You can send a view of the system as a whole to your DevOps team, or, of course, to your SRE team and other IT personnel who need to see what's going on. You can even include your business users and give them the information they need. The fact is that observability takes all the data that traditional monitoring routes only through your Ops power users and democratizes it, giving everyone a view of the data they need to do their job as an application stakeholder.

Now we've made it to myth number five: Pick favorites. Pick favorites? What do you mean by that, Chris? Well, we've talked about the fact that there are all kinds of ways of pricing observability tools. But one of the things that happens is that it takes a lot of effort to get traditional monitoring tools working. So while a lot of organizations have anywhere from 8 to 20 or even hundreds of applications, the truth is that traditional monitoring tools require far too much effort, work, and cost to give you all this information. So you usually have to draw a line and pick your favorite applications to monitor, and the rest don't get any monitoring at all. Why don't I like this? Because if you have an application, it's important to somebody. Or as I like to say, every application is important to somebody.
And that means there are stakeholders of those applications, including business owners, application owners, and developers, who need the information that comes from observability. You shouldn't have to pick. That's why observability gives you this broader, better view of the entire system, as opposed to making you pick just a few applications "just in case."

Okay, we're at the last myth. Myth number six: DIY. The truth of the matter is you can build monitoring yourself, but you shouldn't. Let me explain why. Think about everything you have to do when you add monitoring to all the pieces. Plus the idea of being able to measure everything down to the front end. And don't forget, you need to detect changes, like when a service disappears, or when a new service appears within the system. Doing all that manually requires you to slow things down. And as you're trying to accelerate your development, trying to get better as an IT organization and have better-performing applications, slowing things down is a bad idea. In fact, it leads to lower-quality applications. You want to speed things up, and the only way to speed things up is to automate. That's why you want to look for an observability solution that automates things: that automates discovery, that automates mapping the system, that automates monitoring end users, and does that for all the different users you're going to bring on across all the different applications you have to monitor, and can see fully across the entire system and trace everything it needs to trace. It needs to do this automatically, or else you're going to slow down your development, which ultimately results in lower-quality applications and unhappy customers. And that's not what we want. So look for automation and stay away from manual observability. Thanks for watching.
Before  you leave, make sure you hit like and subscribe.
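As a closing sketch, the change detection described under myth number six, noticing when a service appears or disappears, amounts to diffing two discovery scans. The service names below are hypothetical:

```python
def diff_services(previous, current):
    """Compare two discovery scans; return (appeared, disappeared)."""
    prev, curr = set(previous), set(current)
    return sorted(curr - prev), sorted(prev - curr)

# Between the two scans, "inventory" vanished and "recommendations" came up.
scan_1 = ["checkout", "inventory", "payments"]
scan_2 = ["checkout", "payments", "recommendations"]

appeared, disappeared = diff_services(scan_1, scan_2)
print(appeared)      # prints ['recommendations']
print(disappeared)   # prints ['inventory']
```

An automated observability tool runs this kind of comparison continuously; doing it by hand, scan after scan across hundreds of applications, is exactly the manual work the video argues against.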