Learning Library

← Back to Library

Security: Say How, Not No

Key Points

  • Security teams should focus on “how” to enable safe adoption of new technology rather than simply saying “no,” because outright denial pushes risky behavior underground where it can’t be monitored.
  • Acting as a “brake” that controls speed—like high‑performance car brakes that allow fast driving without crashing—makes security an enabler that supports calculated risk and business agility.
  • When security becomes the “department of no,” users inevitably find work‑arounds (the “how”), leading to unmanaged, insecure practices that expose the organization to greater risk.
  • A concrete example is BYOD: employees bypass security controls by using personal devices and remote‑access tools, introducing unvetted software and viruses into the corporate network.
  • To stay effective, security must collaborate with the business, providing controlled pathways for innovation instead of acting as a constant inhibitor.

Full Transcript

Source: https://www.youtube.com/watch?v=U9Ckc3MecvA
Duration: 00:16:29

Sections

  • 00:00:00 (https://www.youtube.com/watch?v=U9Ckc3MecvA&t=0s) Security Should Enable, Not Block: Security teams need to answer "how" instead of "no," acting as controlled brakes that guide safe innovation and keep the organization in the loop rather than driving risky behavior underground.
  • 00:05:51 (https://www.youtube.com/watch?v=U9Ckc3MecvA&t=351s) User Bypasses Forbidden Wireless Policy: The speaker explains how a corporate ban on Wi‑Fi prompted a user to install an unsecured access point, creating a vulnerable network entry, and argues that providing a managed, encrypted hotspot would have mitigated the risk.
  • 00:12:56 (https://www.youtube.com/watch?v=U9Ckc3MecvA&t=776s) Shadow AI's Hidden Breach Costs: The speaker explains that shadow AI can add roughly $670,000 to the already $10 million average U.S. data‑breach cost and recommends assessing risks and offering alternatives instead of simply denying AI usage.

Transcript
Security teams, listen up. Don't say no, say how. Because when you say no, your users say how, and you are not going to like their answer. Look, I get it. I'm a security guy myself. I know how risky new tech can be, but I believe it's better to get out in front of this stuff than to stick your head in the sand and pretend that when you say no, that's what actually happens, because it isn't. When you say no, you just drive the behavior underground, where you can't monitor or control it. You're essentially out of the loop at that point. It's a lot better to get on board the train and have a say in where it's going than to stand in front of it and just yell stop!

Another way to look at it is through an analogy that I've used in some other videos. Why do you put brakes on a car? So you can stop? No. So you can go really fast. Don't believe me? How fast would you go in a car that had no brakes, or that had really bad brakes? In fact, it turns out the fastest cars in the world have the best brakes. They have to, or otherwise they'd just crash into the wall. So that's the way security should be done. It should be an enabler. It allows the organization to take calculated risks and do things it otherwise wouldn't be able to do. It shouldn't be like a parking brake that's just on all the time, saying no all the time. That becomes an inhibitor to the business. And if you remain an inhibitor to the business, the business will go around you. So it's better to be the brakes on the high-performance car, so that you are part of the solution and not part of the problem.

Those who fail to learn from history are destined to repeat it. So what does history tell us happens when the security department becomes the department of no, where, for everything you come up with, they give you a reason why you can't do it? Well, what happens is the user says how.
Let me give you some examples of how users said how when the security department said no. And you're not going to like these answers if you're in security. Let's start with something we're still dealing with today, but which has historically been an issue for quite some time: bring your own device. An employee has at home, say, a desktop system or a laptop. And they decide they're going to bring it into the office and connect in, or maybe they're on a trip or on vacation and still want to get some work done, or they've just got it at home and want to access some files from the office. So they just go ahead and do this. And how would they do it? Well, they could use some remote control software to connect into the corporate network. And now you've got a system that is not secured. In other words, the kids may be playing games on this thing as well, and it's got viruses and all kinds of other stuff running on it. And now it's got access into the system. That's what happens when the company says: your system is too insecure, just don't connect. Don't use your system. Do not bring your own device; we don't allow that. Well, the employee just figures out how to do it and does it anyway. Now you end up with an insecure system on your network anyway. So I say, when it comes to BYOD, there are really only two types of organizations: those who have a good BYOD program, where they know which devices are coming in and they've put the security controls in place, and those who haven't. But there's not a third group that doesn't have BYOD. You might outlaw it, but your employees figured out the how, and you've got bring your own device in your environment.
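A "good BYOD program" in the sense described above can be sketched as a gate: enroll personal devices, then decide access based on a minimal posture check. This is a minimal sketch, assuming a hypothetical device inventory keyed by MAC address and two made-up posture fields; a real program would use an MDM or NAC product.

```python
# Sketch of the "say how" approach to BYOD: instead of banning personal
# devices, enroll them and gate network access on a minimal posture check.
# The device inventory, fields, and policy below are hypothetical examples.

ENROLLED_DEVICES = {
    "aa:bb:cc:dd:ee:01": {"os_patched": True,  "disk_encrypted": True},
    "aa:bb:cc:dd:ee:02": {"os_patched": False, "disk_encrypted": True},
}

def byod_access_decision(mac: str) -> str:
    """Return 'allow', 'quarantine', or 'deny' for a connecting device."""
    device = ENROLLED_DEVICES.get(mac)
    if device is None:
        return "deny"        # unknown device: not enrolled in the program
    if device["os_patched"] and device["disk_encrypted"]:
        return "allow"       # enrolled and meets posture requirements
    return "quarantine"      # enrolled but needs remediation first
```

The point of the sketch is the third outcome: an unenrolled device is denied, but an enrolled, non-compliant device gets a remediation path rather than a flat no.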
Another example, and this one goes back a little further, to when we first started getting mobile phones. A lot of companies would say: mobile devices are not secure enough for you to have your corporate email downloaded onto them. So we are not going to allow that; not secure enough. Again, the security department said no. How does the user say how? Well, you're not going to like this answer either, because what they decided to do in some of these cases is say: okay, if you won't let me download my corporate email to my phone, which is where it's really convenient for me to access it when I'm in between meetings and things like that, then I'll tell you what I'll do. I will forward it from the corporate email server over to a public email server; use your favorite personal email service as an example here. And from there I'll be able to download it to my phone. So now, instead of providing a how mechanism that would have allowed the corporate email to come down to this device safely, the user took the sensitive information, put it on a public server over which we have no control and no visibility, and now they're getting it onto the device anyway. Again, the organization said no; the user figured out how. And this is much worse than what it would have been if we had crafted a solution for them in the first place.

Let's take a look at another example. How about bring your own wireless? We take wireless access pretty much for granted these days, because it feels like everywhere we go, we can walk into a building and just expect there to be a Wi-Fi hotspot. Well, it hasn't always been the case. In the early days of wireless technology, a lot of companies didn't deploy it. Why would they not deploy it?
Well, their view of it was: we're going to take all these workstations that we have, put them on a LAN of some sort, a local area network, and connect all these things up in a hardwired situation. With that, I can't have a situation where someone is sitting out in the parking lot sniffing our network traffic, because that's a risk with wireless. Wireless goes through the air, so it's not bounded just to the building itself, unless you've got some sort of shielding in the building, and most people don't have that. So the companies looked at wireless and said: these wireless access points are too risky. We are not providing wireless access. So again, the security department said no. What did the user say? How. I know how to do that. I can go to my local electronics store, buy one of these relatively cheap access points, and that will give me wireless access. And now we have an unsecured access point with a direct connection into our network, and they just run it off a port that's sitting right there in their office. So again, the organization, the security department, said no; the user said how. The user now puts out an insecure version of this. What would have been better, again, is if the organization had said: you know what, we'll give you a secured wireless hotspot, and we'll use the right kind of cryptography and all those kinds of things to make sure that this is a protected connection. Saying no led to a worse situation with the how.

Let's go back even further in the wayback machine, and I remember these days: bring your own internet. There was a time when the internet was not ubiquitous and not everyone had it all the time.
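The discovery side of the rogue-access-point story can be sketched as an allowlist check: compare what a wireless scan actually sees against the access points the organization deployed. The scan data and corporate BSSIDs here are hypothetical; real scan input would come from a wireless intrusion detection system or a survey tool.

```python
# Discovery sketch: flag rogue access points by comparing a wireless scan
# against an allowlist of corporate BSSIDs. The BSSIDs and the scan-result
# shape (list of {"ssid", "bssid"} dicts) are hypothetical examples.

CORPORATE_BSSIDS = {"00:11:22:33:44:55", "00:11:22:33:44:56"}

def find_rogue_aps(scan_results: list[dict]) -> list[str]:
    """Return SSIDs of visible APs whose BSSID is not on the corporate list."""
    return [
        ap["ssid"]
        for ap in scan_results
        if ap["bssid"] not in CORPORATE_BSSIDS
    ]
```

An AP bought at the local electronics store and plugged into an office port would show up in this list on the next scan, which is exactly the visibility that a flat "no wireless" policy gives up.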
In the early days of the internet, if you wanted to get access, a company might say: okay, this is your workstation right here, and you have access to our internal network. That is, the intranet. We'll give you that level of access. That was not uncommon. But here's this big, bad, scary external internet, and in this case, companies said: hey, we don't know what's out there. You shouldn't be on there; just stay away from it. Well, in saying no, guess what the user said? How. How might they do that? Well, if your device was a laptop back in those days, and you wanted to access systems while you were traveling, it would have a modem port; there was a modem built into it. You could plug it into an analog line in your hotel room or wherever and then dial out. So what if somebody just did that with the analog line that might still be in their office in those days? Not so common now, but it was then. Then what happens is the user is connected to the internal network and simultaneously to the external internet that we were trying to avoid because we thought it was too risky. Now this workstation has essentially become a router between the two. The company didn't want that to happen, for sure. That's a worst-case scenario. What they should have done, instead of saying no, is say how. And the way they could say how is: we're going to put the right controls on your system. Some companies would put in a proxy server you had to log in through, or a firewall that's going to do other types of mitigation. We'll put in intrusion detection systems, we'll do the kind of monitoring that's necessary, and we'll educate the users so that they can go on the internet and be safe. We take all that stuff for granted now.
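The "accidental router" condition above is simple to state: a host is simultaneously attached to the internal network and to something outside it. A minimal sketch of detecting that, assuming a hypothetical 10.0.0.0/8 internal range and interface addresses supplied by whatever inventory tooling is in use:

```python
# Sketch: flag hosts that bridge the internal network and an outside
# connection at the same time (the dial-out-from-the-office scenario).
# The 10.0.0.0/8 internal range and the sample addresses are hypothetical.
import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def is_dual_homed(interface_ips: list[str]) -> bool:
    """True if the host has both an internal and a non-internal address."""
    addrs = [ipaddress.ip_address(ip) for ip in interface_ips]
    internal = any(a in INTERNAL_NET for a in addrs)
    external = any(a not in INTERNAL_NET for a in addrs)
    return internal and external
```

The modem-era version of this check was manual; the point stands either way: discovery tells you the bridge exists, which a bare "stay away from the internet" policy never would.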
But initially, the security department said no to bring your own device, no to bring your own wireless, no to bring your own internet. And eventually, those have all become the norm, because in every one of these cases the user figured out how, and the no resulted in a massive failure. Okay, now let's move a little closer to current times and things that we're dealing with on a regular basis. The next turn of this crank was essentially bring your own cloud. Now I can basically say: there are cloud providers out there, it's not expensive, there are apps out in this space, there are file sharing services out in this space. If I want to send you a really big file, and I don't want to send it through email because I want to share it with lots of people, then I just upload it to a file sharing system in the cloud, and then all the people I want to have access to it can come along and get it. Along with, maybe, all the people I didn't intend to have access, who will also be able to come and get it. So that's a big risk. Again, when the cloud first came out, what did the organization's security department say? Don't use that. No, because we don't control all of this. This thing is risky, and I don't know what's going to happen with it. But if we don't provide users a method to do file sharing, then they're just going to figure out how to do it, and they're going to use the cloud anyway, because you can outlaw it, but that doesn't make it stop happening. If people have mobile devices, then it doesn't matter what rules you're putting in your firewall; that's not stopping those mobile devices from getting out. So people went ahead and used these things anyway. What would have been a better solution in this case?
Instead of saying no, say: you know what, we've actually contracted with this other cloud provider, where we have vetted their security, and they have a file sharing app, and we want everybody to use this one. Or, if there's another particular app that we think our employees are going to use, have them use the approved, authorized version and avoid the version that hasn't been approved. That's an example of saying how, not just saying no.

Now, if we move a little more to where we are right now, the big thing is bring your own AI. And I've worked with companies that say: you know what, those public chatbots, we know that they leak data, because when you tell them information, that information can then be used to train their models, and all kinds of bad stuff can potentially happen from that. So we're going to outlaw it. You cannot use these AI systems that are sitting out there. It's just too risky for you to be doing this kind of stuff. So we'll say no to that. Well, again, you can block it at the firewall; I've got a mobile device, I'll just access it from that. So saying no drives the behavior below ground, where the security department now has no control. What would be a better option in this case? The better option would be to say: you know what, we're going to pick an AI provider, and we're going to use their service because we've vetted its security. Better still, we're going to build our own in-house version of this.
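The pattern being described is a policy that answers every blocked request with a pointer to a sanctioned alternative. A minimal sketch of that idea, where the service names and alternatives are all hypothetical placeholders:

```python
# Sketch of "say how" as policy: map each blocked consumer service to an
# approved alternative, so a request gets a "how" instead of a bare "no".
# All service and alternative names below are hypothetical examples.

APPROVED_ALTERNATIVES = {
    "public-file-share": "vetted-enterprise-file-share",
    "public-chatbot": "in-house-ai-assistant",
}

def policy_answer(requested_service: str) -> str:
    """Answer a user's request with an approved alternative when one exists."""
    alternative = APPROVED_ALTERNATIVES.get(requested_service)
    if alternative is None:
        return f"'{requested_service}' is under review; no alternative yet"
    return f"use '{alternative}' instead of '{requested_service}'"
```

The design point is that the table has to exist before the blocking rule does; a deny list with no alternatives column is just the department of no in data form.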
Now, when I say build, that doesn't mean you have to go out and do all the model development and design and tuning and all that kind of business. You could use a platform from a cloud provider and run it on your own private instance, either in your environment, on premises, or in a private tenant in a cloud system whose security you have vetted, and then tell your employees: don't use that one, use this one. Now they have an option. You have said how, not only said no.

According to the 2025 IBM Cost of a Data Breach report, the average cost of a data breach in the US was greater than 10 million dollars. That's a big chunk of change. And if shadow AI was involved... What's shadow AI? Well, it basically means the user went ahead and created their own AI. They downloaded their own models, ran them in a cloud instance, and then maybe put in some of your data. Shadow AI increased the cost of a data breach by more than 670,000 dollars. So these are the things that are costing companies real money. Driving AI behaviors below ground actually ends up costing you more, because now you have the shadows to deal with. All of that just adds salt to the self-inflicted wound of saying no.

So let's take a look at how you can say how without giving away the whole store in the process. Some of the things you can do: start off by assessing the risk. Understand what it is that we're actually facing here, rather than just fear and superstition, fear, uncertainty and doubt, that kind of thing. That big scary internet, that terrible AI, those infected devices that users have in their homes... Go understand what's actually there, and consider the alternatives in the risk calculation. If we don't provide them something, then they will find their own way.
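The two figures quoted from the report combine straightforwardly; treating both quoted numbers as floors, the arithmetic is:

```python
# Worked arithmetic from the figures quoted in the talk (2025 IBM Cost of a
# Data Breach report): a US breach averaging over $10M, with shadow AI
# adding more than $670K on top. Both values are the quoted floors.

AVG_US_BREACH_COST = 10_000_000   # "greater than 10 million dollars"
SHADOW_AI_PREMIUM = 670_000       # "more than 670,000 dollars"

total_with_shadow_ai = AVG_US_BREACH_COST + SHADOW_AI_PREMIUM
print(f"${total_with_shadow_ai:,}")  # prints $10,670,000
```

So a US breach involving shadow AI averages out above roughly $10.67 million on the numbers given here.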
And even though you thought the internet was scary back in the day, it's a lot scarier if users end up making your system into a router that's connecting your network in ways you didn't intend. So assess the full picture of the risk, and a lot of organizations tend not to do that. Then, look for alternatives. Are there ways that we can do it? Maybe we don't want you to use that particular AI because there are too many risks with it, but here's another one that you can use instead. Maybe we don't want you doing file sharing in general, because then we have a problem with shadow data: data that's sitting around in all sorts of places over which we have no visibility and no control, and therefore it's not encrypted and it's potentially leaking out to the world. Find alternatives. Don't use this file sharing service, use the one we've approved. Don't use this AI, use the one we built for you, or one that we have contracted with, which we understand and know will do what we intend it to do. Another thing we can do is train our users. Make sure they understand what the risks are, because in many cases they don't. They just look at the bright, shining object of the new technology and say: isn't this cool? Let me do that. Well, they need to understand that in some cases, what they're doing is putting not only the company at risk, but, downstream, their jobs at risk. So these are existential threats for everyone involved. And then we need to do discovery. We need to discover all the cases where we may have shadow AI, where we have shadow data, where we have people bringing their own devices that have not been secured. We've got to do a good job of making sure we understand what the full threat space looks like.
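The discovery step for shadow AI often starts with outbound traffic: which users are reaching known public AI services? A minimal sketch, assuming a hypothetical proxy log format and a made-up domain watchlist; real deployments would use the proxy vendor's reporting or a CASB:

```python
# Discovery sketch: scan outbound proxy log lines for destinations that
# suggest unsanctioned ("shadow") AI use. The log format and the domain
# watchlist are hypothetical examples.

SHADOW_AI_DOMAINS = {"public-chatbot.example", "free-llm-api.example"}

def flag_shadow_ai(log_lines: list[str]) -> list[str]:
    """Return log lines whose destination host is on the AI watchlist."""
    flagged = []
    for line in log_lines:
        # assumed format: "<timestamp> <user> <destination-host> <path>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in SHADOW_AI_DOMAINS:
            flagged.append(line)
    return flagged
```

As the talk notes, this only catches traffic that crosses infrastructure you can see; mobile devices off the corporate network are exactly why discovery has to be paired with approved alternatives rather than relied on alone.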
So, ultimately, by doing this kind of stuff, you get out in front of the risk and keep everything above board, where you can monitor and control it. In other words: don't say no, say how. Every chance you get.