Your single source for new lessons on legal technology, e-discovery, and the people innovating behind the scenes.

From Aware to Care: How to Get Your Employees Invested in Protecting Your Security Fortress [Security Sandbox Podcast]

McKenna Brown

Subscribe to Security Sandbox

In the first episode of Season 2 of Security Sandbox, host Amanda Fennell welcomes cybersecurity evangelist, author, and host of the 8th Layer Insights podcast, Perry Carpenter. They have a nuanced conversation about how to craft a new-age security program that works better for today’s businesses and employees operating in different cultures, regions, and viewpoints.

How do you bridge the psychological gap from "aware" to "care"? How do you craft trainings that are culturally relevant, empathetic, and inclusive for your employees? How do you move the needle with your awareness program internally and with key stakeholders? What is the biggest mistake first-time administrators make in creating their security program?

All that and more in the first episode of the new season, which focuses on strengthening the strongest link in your security chain—your people—through better programs, processes, and tech.

Transcript

Amanda Fennell: Welcome to Security Sandbox! I'm Amanda Fennell, chief security officer and chief information officer at Relativity, where we help the legal and compliance world solve complex data problems securely—and that takes a lot of creativity! One of the best things about a sandbox is you can explore and try anything. When good tech meets well-trained, empowered employees, your business is more secure. This season, we're exploring ways to elevate the strongest link in your security chain—people—through creative use of technology, process, and training. Grab your shovel, and let's dig in! In today's episode, our sandbox heads to the psychology couch for a compelling conversation with cybersecurity evangelist, author, and host of the 8th Layer Insights podcast, Perry Carpenter. Also joining us is Marcin Swiety, Relativity's director of global security and IT. We're exploring how to craft a new-age security program that works better for employees operating in different cultures and regions and gets them personally invested in their organization's security posture. So let's take off our shoes, get comfortable, and start talking.

I'm going to be hard-pressed, after reading your book, to find something that we really disagree on, and I'm going to make that my effort for this conversation: to find something you and I disagree on. There were just so many things where I was like, yes, yes, exactly! Why is this so difficult? One thing I will say before we get into some of the questions I found most interesting about this book—and I don't want the whole thing to be a book review—is about this transformational security awareness. The thing I found most interesting was that you were able to keep an energy behind it that didn't feel old and stale, like we've been doing this a long time.

Perry Carpenter: That was my biggest thing with that. How do I try to make it readable and have a ... I think the phrase that I kept using in my own head when I was trying to write it is: how do I make it forward-leaning, to where you read one sentence and you naturally want to read the next, without it sounding—and I probably slipped every now and then, but—without it sounding pedantic or preachy.

AF: I think that's what we come across so much in the security industry. For any of us who've been in it for a while, we use the same terminology and the same lingo, and it's almost to the point where that's how you get your chops. You look at the person across from you like, OK, we understand each other because we're using that cultural knowledge. We have the same understanding. But the problem is that it's hard to keep people excited about something like security awareness. When it sounds, like you said, "pedantic," it sounds like ... OK, so people, we need to teach them, and we need to keep it going, and so on. So, my favorite part is that you did make it feel like it was still really fresh—something to be excited about and have energy for. But I'm going to start with something, not an unexpected question. Here we go: your "why" determines your "what."

PC: Mm hmm.

AF: I do like this part, and I'd love for you to expand a little bit about what you think the "why" is or what the "why" should be.

PC: Coming from an analyst background, I'll give you the analyst answer. The "why" is going to be an "it depends." I think when it comes to security awareness, or when it comes to any action, behavior, belief that you want somebody to have, you have to sell them on why that's important and why that should be a core value within themselves. And so my "why" is going to be subtly different than your "why," which is going to be subtly different than, let's say, somebody on a different team within your organization. I think that there's this sliding scale of "whys" that we have to determine. But for us, we have to figure out what is the "why" that I need to have and maintain as my core value and belief that's going to give me energy to try to do all the hard things on finding the "why" for everybody else. Because as soon as I can find that other person's "why"—let's say it's somebody in product marketing within a large retail organization—well, I need to tell them, I need to find a "why" that makes sense for them related to security or security behavior that then I can backtrack up and build all the messaging of the program that I need to put around that person.

AF: Marcin, you lead all these efforts with our internal Security Guardians and champions—all these programs we have where we really embed security within the teams. I'm curious, what's your "why," or do you think that we've given these people a "why they should care"?

Marcin Swiety: We are on the way there. Obviously, we started there. When you build programs like this, you usually start with content, and that's actually not the right way to start. Content is basically just throwing knowledge at people. And it's a good starting point, but in the end, this needs to be impactful for the longer term. And the only way to make it impactful is actually, you know, going into the "why." We wrapped it a little bit differently: we thought about the context of the role and tried to leverage the tools that we are using, because it's not only that knowledge, not only awareness of it, but also getting some power and permissions and tools and techniques that they can leverage in their work. And that, I think, wraps back to this "why" that we talked about—context and the real benefit of the role that they play as guardians.

AF: Is that—oh, go ahead, Perry.

PC: Yeah, I was going to say that's exactly it. It's not necessarily that we all need to use the same terminology on finding the "why" before the "what." But it is understanding the context that somebody sits in and the things that motivate them, plus the things that they need to understand in order to do their job appropriately from a security perspective. And then I think there's another side of that on the backend. If and when we perceive that they are not living up to expectations or that they are, you know, quote-unquote "failing" in some way, is it because they don't have the right motivation, or is it because I've somehow—or the organization has, not me personally—set them up to fail rather than for success? Have they somehow been slotted in an area where either they don't have the right motivation, or they don't have the right tools, or they don't have the right staffing, or something else? I think that if somebody is failing to perform the security functions of their job, before I put blame on somebody else, I need to look at the complete context of that situation and find out if we as an organization are failing them somehow.

AF: Well, that's certainly helpful for people to take accountability. It's not you, it's me, and I set you up for failure or I set you up for success, which I love. But one of the threads I saw come across is not so much the failure and success rate. It was this constant feeling of curiosity that you wanted to induce in people around security. And it's intriguing to me because I think that's actually what makes people successful in life: curiosity. And when we lose our curiosity, that's when we become stale and tired or bored or don't have energy, as you mentioned, for something. Do you think that's the key to these good programs—that we're eliciting curiosity among people? That's probably a question for both of you.

PC: I'll go first on that. I think that curiosity is a core skill that some people have. They know how to propel themselves forward by finding ways to be curious and to make things a game. And I think in other people, we have to find ways to elicit that curiosity. But as soon as we become curious about something, we engage with it in a completely different way. I mean, that's why clickbait works so well in order to get pay-per-click type of stuff. If I can give you a list of five things that will make you a better lover, and number four will blow your mind, all of a sudden you're like, "Oh, I didn't even know that there were five things! I wonder what number four is!" And then before you know it, you're about to click on that. If I can make security content that appealing in some way—to where it opens up this little, you know, itch in your mind that you're not going to be satisfied until you scratch it—then I've automatically got buy-in from that person to take the next step. And so I see my job as always winning the next 30 seconds of attention in some way. And if I can do that, then I can continue to propel that forward.

AF: Hmm, interesting. Marcin, do you think that you elicit curiosity in our organization for security?

MS: Oh, for sure. For some, yes. For some, it's a longer road, but definitely, in the end, that's the real purpose, right? We don't want to give people, you know, ready-to-use knowledge, because we cannot keep up with changes, right? We have to give them that way of thinking—an ability to go outside of the normal scheme and apply the same safe principles to behaviors we haven't necessarily already outlined, something that is fresh and new. We want them to default to security, and to make sure that they are curious—that whenever they are entering a new space, new technology, new product, they are applying the same principles on a higher level. And that's actually the curiosity: being able to be curious about how my actions will play out from a security standpoint. That's our way of thinking on how to make sure that people actually take the default secure action.

AF: We also kind of put—I don't know what the right word would be—but I guess there's a bit of fanfare around our security team. People think it's a very awesome group, and we know a lot about things that you may not know about in the security space. So we really try to offer that knowledge to people who are interested: hey, if you want to know how malware works, happy to have you shadow us, and we can show you how this tool works. And with the learning content that we have put together—all the videos and the quizzes and all this kind of stuff—we don't just say, OK, what kinds of phishing are there? It gets deeper than that, and we offer these increased levels so you can go further. Yes, we want you to understand a lot of intro information and how to better protect yourself in your own life. But if you have that curiosity to go deeper, we also want you to become just as good at security as some of us who've been doing it for years. We'd love that. It's almost like, we want you to catch that bug. We want you to be a part of that. And that's a big part of how we've set it up.

PC: Yeah, absolutely. I think that there is a push and pull that comes with these things—a setup and a resolution that should come with it. You're inducing that curiosity, and then as soon as they find a way to start to address it, you're rewarding them with comfort in some way; they get a little bit of a dopamine release, and then you build up a little bit more. It's a little bit like a great spy novel, or any good bit of writing: you build a situation, you add some conflict in some way, and then you bring a resolution, and that gives a catharsis. I think that security awareness can be a little bit like that. I also think that, on the other side of it, if I can never successfully induce curiosity or give somebody that forward momentum, well, then I need to find a way to build behavioral guardrails around that. And that can be a technology-based solution, like some kind of browser container, or it can be some other layer of security that I put in place. But if I can never induce somebody to participate, then I've got to find a way to still make that person safe, or make the organization safe with that person as part of it.

AF: You know, rewarding with comfort is such an intriguing concept in a world where security has been so penalizing—like, "Oh, you clicked too many phish, you're fired!" So I'll want to chat more on how to reward, or not. But speaking of setup, I feel like you did a good setup for me to actually ask you something. We're going to quiz you on your own book, by the way. Here we go. [Laughs]

PC: I'm going to suck at this. [Laughs] It's been a good year since I've read it.

AF: No, you're not! You mentioned the Fogg Behavior Model, and you talk about this idea of the prompt and the motivation. And you use a glass of water as an example of how you approach this. But if somebody were to ask, how would you explain this dynamic between prompt and motivation so that we can really grasp it in our own companies?

PC: Yeah, no, that's a great question. So, I see the Fogg Model as kind of the E=mc² of security, or of behavior. I mean, it's a generalized behavior model that is really simple when you first look at it, because it is just, you know, B=MAP: behavior equals a combination of motivation, ability, and prompt at the time of the behavior. And the way that Fogg talks about it is that behavior happens when three things come together at the same time. Motivation: somebody has the right amount of motivation, which can range from low to high. Ability: this can range from something that's very hard to do to something that's very easy to do. And then the prompt is something internal or external that says, all right, we want you to do this thing. And so there's a compensatory relationship between all of these different pieces. If I fully have the ability to do something, but I don't have the motivation to do it, then if you ask me to do it, I'm not going to. Any of us that are parents know that that's the case. Go clean your room. Our kids know what a good clean room looks like, but they may not be motivated to do so.

AF: I would argue that they don't know that... [Laughs]

PC: I would argue back and say at one point—

AF: We found it! We found our argument! Here we go, OK, kids cleaning the room.

PC: At one point in your life, you've gotten frustrated enough that you've gone and cleaned the room and said, this is what I want. This is the goal state. Or when you moved into your house, you set it up that way. But they've seen an example.

AF: I don't think Marcin will be surprised—and this is absolutely a completely sideways thing to say—but I do have a thing that my husband finds hilarious where I will look at my kids and say, "is your room ready for inspection?" And they look at me with doe eyes for a moment. "Uhhhh, I'll be right back!" And they go upstairs and slam everything into their closets and under the bed and everything. And that's the first place I go for inspection, right? Human nature. I'm going to go exactly where I know I used to throw things as a kid. But yes, they have seen it clean.

PC: They may know the appearance of clean. They may not know the detail of clean.

AF: Yes.

PC: So, let's assume that they have the ability. They know what it looks like. They're not lacking any training. They're not, you know, tied up in their closet when you're asking them to clean their room. So they've got everything they need. They would just rather play on their PlayStation or something. So the other thing is—and this is where I think we fail sometimes as security teams—I can have somebody with all the right motivation, and I can be prompting them all day long, but they may not have the actual ability to do the thing that I want them to do. And my rant on this is: I think we as security practitioners kind of have this mantra all the time of saying, you need better passwords, you need to not share passwords, you need to not write them down, you need to not use the same one across different systems. And as humans, right now, in the century that we live in, we're managing upwards of 200 different accounts each. Nobody that I know can remember 200 great, unique passwords across 200 different systems that are totally strong and aren't based on some kind of internal algorithm that somebody could determine. And so that's where I can have all the right motivation in the world: the ability piece is just not there. There's going to be a compromise. And at that point, you say: if I'm prompting somebody all day long, and they've got all the right motivation, and they're just never going to have the ability, the only thing I'm going to do is frustrate them and make them feel pathetic. So I need to step in there with a tool like a password manager, or with passwordless authentication, or something else to enable that, give them comfort, and bring that password proliferation problem down. Maybe they only need to remember one, five, 10—it starts to become a much more manageable number.

AF: Is it—and Marcin, I'm interested to know what you think—but you talk about this dynamic. So let's say having to handle that many passwords: the human mind isn't going to remember them all, especially as we have to refresh them so often. And so we have to use some kind of tool in order to become more effective at it. But you also mention that, at times, the size of your security team is rather small compared to what you're trying to protect in an organization. A little circle, big circle, right? And we try to approach fixing this with culture and this shift of culture, but we also do it with tools, like password managers in your example. What are some of the tools you think are most important for security awareness?

MS: I think there are a couple of those that are used daily. Some of them serve as a center point for organizations. Just to give you an example, we used to use intranet pages, Confluence, or other tools where we publish information, and that is a very basic thing to do. Those resources and tools used to be purely informative, and right now they need to be dynamic. The same goes for password managers. I remember, a long time ago, managing passwords meant text files somewhere out there, shared in another space, because there were no other tools that would make this dynamic and shared and properly protected and audited and so forth. The same goes for any other thing. And I think that's basically our doom: thinking of people, tech, and processes as separate things. They are intertwined. And whenever we talk about making sure that people default to proper behaviors in a new world with new tech, that actually means introducing new tech. Password managers used to be static; now a lot of us are using something more dynamic—in the cloud, accessible everywhere, available on all the different devices. And now we are also moving to, you know, basically small computers that do cryptographic functions to authenticate you. So leveraging that key component of each—people, tech, and process—and making them work in the same cycle, that's, I think, the key. Password managers are one example: from text files somewhere out there to a password manager backed by some cool tech and process. That's the way to go.

AF: Perry, what do you think? Is there a technology we should call out specifically? Is it learning management systems and all of the educational tools that we use? Can you give us some examples of what that might be for people who want to incorporate this into their programs?

PC: Oh, so me give some examples?

AF: Give two examples, and then I'm going to quiz Marcin on whether we have any. We'll see how we measure up.

PC: So microlearning, from my perspective, comes back to understanding people's attention spans and the fact that this model we've used in our industry for a long time—rounding everybody up in a room for a couple of hours, preaching at them for a while, giving them a test, and then sending them on their way ... none of that works. And I think we all candidly know that it doesn't work. There's a ton of psychological theory around this, with decay of learning and other things, but we are only interested in things that are relevant to us at that time. And so this comes back to curiosity again. Unless I can induce curiosity, unless I can kind of trick the mind into retaining something, it's gone. And when we only teach people about things once a year—let's say it's password policy—if I train somebody on password policy on January 15, most people aren't going to change their passwords that day. The best time to train somebody on passwords, or any behavior, is at the time they are actually about to do the thing, because they're naturally more curious. They're more inclined to be curious, at least. And so you have to inject that, but in a way that's not bothersome at the same time, which is probably another conversation. So microlearning means that I can take that thing that was a two-hour training, break it down into many small subsegments, and treat it like a marketing program. Coca-Cola doesn't just do one campaign a year where they try to get your attention for two hours. They get you in 30- and 60-second bites all the time. And I think security is the same way. We want to bring that down, cater to the attention span, spark the curiosity, and continue the messaging over and over and over again.

AF: Marcin, what are you thinking?

MS: This is something that ... I don't think we use the word "microlearning," but we actually facilitate something like this. There were a couple of cases where we had to move quickly with some cool stuff that we shipped out—for instance, our self-service. I remember there was a requirement to make sure that people are educated on how to self-service their access—this privileged access—and how to properly use it, what kinds of risks it introduces, and so on, so forth. And there was a little bit of discussion on whether to include that training as part of our internal Security Guardian program or our basic onboarding for engineering teams. But we ended up with: no, this is all self-service, and the best way for you to learn it is self-service. So we implemented it as part of the process: before you get any type of access, it actually checks every time whether you've undergone that training, and how long ago, obviously. Generally speaking, every engineer at Relativity will at some point need access to some parts that require self-service, and will need to go through that smaller, more convenient training that is very contextualized to that specific type of action. And that will definitely stick longer, because whatever that person learns through that training will be used within the next couple of days, when that access is really needed. So I think we leveraged that to some extent, and that's kind of real-time marketing in our security awareness space.

AF: So if we are trying to give them this information at a time when it's more useful or applicable—and not just an annual prod of "check the box that you paid attention"—do we also have to focus on training that's more relevant based on where you're at? Like cultural relativism. Is the security awareness training that we do in Poland the same as what we do in North America? Marcin, do you think that there's a separation?

MS: Yes and no. Obviously, I think every security professional would like to have a uniform, holistic culture. Basically, we want to have the same culture all across, the same level of security. We don't want to have teams that are lagging behind, falling into bad behaviors, and so on, so forth. Nobody wants that. Nobody wants to have focus groups that need more attention. But on the other hand, culture in a global space comes with its own flavors. There are a number of differentiators from person to person, but also from region to region. So we get that. We have that contextualization mainly in onboarding, to make sure that the way we operate—interact with people, interact with teams, interact with security resources—is really effective. And sometimes it's based on language; the purest problem of all is the language barrier and understanding intent. The same goes for the "paranoia-meter." And I am saying it that way because every security professional who works in a global company hits this problem. Look at privacy a couple of years back, when GDPR was rolled out: GDPR increased paranoia in some places, and in some places it was totally not a surprise, because the law was already there. So yeah, there is a flavor that we need to employ—making sure that people are operating at the same level in the same space, but also leveraging that human finesse and those global flavors to reinforce the message and make sure it's effective.

AF: OK, Perry. I mean, I have some debate on this one, but Perry, I'll let you tap in on this.

PC: The thing that I would say when it comes to diversity in deploying security awareness is that different people will resonate with different messages at different times and in different locations of the world. One of the big things we have to think about is, let's say you're doing a simulated phishing program. If I'm sending it to somebody in Europe and I've got a Bank of America template, it's naturally not going to be as effective as something more regional to their location. So there are definite tweaks that have to happen to make it relevant. There's also an understanding of the nuance of each specific culture. Let's say I'm trying to encourage phish reporting or incident reporting, and I'm trying to do that in a very honor-driven structure that deeply respects hierarchy—an Eastern culture like Japan. People there are less likely to speak out against their, quote unquote, "superiors" because of a lot of the concepts of honor and structure. And so I have to find ways to build messaging that is going to encourage the behavior I want within that specific region, and I'm going to do that differently than I would in America.

AF: You know, it's interesting. There's a natural inclination: if I were to get an email in a different language, I'm already not sure what's going on, and I'm going to be careful, right? But then again, if it's in Polish, I would actually think, oh, Marcin wrote me in his normal language now, right? So there's this dynamic there, and you automatically have some concern of, why is somebody sending me something I clearly can't understand? But there is this cultural aspect—and Perry, I think we've talked about this. When I worked at a global organization, the exchange had to start out with, "Hi, how is your day going? Did you spend time with your family this weekend?" And you have to do that. It's difficult for me because I'm in security. You're very straightforward: "Hey, so I need this, what's going on," blah blah blah. But you have to learn the cultural relativism of what is normal—emic versus etic. What is the perspective, culturally, internally? Because it would look odd for me to get an email that ... if somebody knows me and they send me an email with a whole bunch of "let's talk about how our weekend was," well, you clearly don't know me, and I can already tell you don't know me. Right? So I do think it's a question about how we design the security awareness program and how we approach what we're going to use as a simulation. There's a trust dynamic that you have with your company when you're in security: you're not trying to trick them, you're trying to help them build this muscle. But this dynamic can get messed up over time, and people can start to feel like we are trying to trick them. "Oh, you got me," and ... I didn't want to get you.
Actually, that's not what I wanted. And I like that so many of the things you say sound so positive, Perry—you use such great words, like trust and honor, and a more positive way to approach it. Can we use things like zero days and not lose the trust of our people? When COVID hit, a simulation went out—because attackers really did use this lure for malicious intent: free N95 masks from the CDC if you showed up in this location in the US. And there was a lot of outcry: "Oh my god, how could you take advantage of such a horrible time right now and send this to us?" But it was a legitimate exploit that was taking place. So what's the thin line here about not damaging that dynamic with your people? How do you do that?

PC: The thin line is all about the relationship that precedes the test. I think—and it's hard to say this categorically—that you work on your relationship such that there is implicit or explicit permission to do those things. So I'll give you an anecdote. I work for a company that does simulated phishing. It's our entire business model. So obviously, we believe in the efficacy of doing that, and we believe that when you do it right, it's super effective. Now, you gave an example where somebody did it wrong. They broke a relationship, or the relationship wasn't there to begin with. So, at the beginning of the pandemic, I had a customer call and say, how do we do this right? Because I don't feel good about taking a year, or six months, off of a phishing program when we've gotten great results so far. But at the same time, I'm really afraid that at this point in everybody's life, we'll do something wrong—and they had some relationship problems with their people in the past. And so I said, give me the weekend to think about it and I'll set something up for you. What I ended up doing over the weekend was creating a set of videos—basically commercials for phishing. You'd put the first one out, and it says, hey, everybody is struggling right now. There's a global pandemic going on. Everybody's changing their work habits. People are stressed out, people are sick. There are lots and lots of concerns. But you need to be aware that cybercriminals are taking advantage of that, and they're sending emails like this. And then it went on to say: because of all this, we need you to be super suspect of anything that comes in, and we are going to be using some of those same things in the way that we test. It's not to trick anybody, or to call anybody out, or to make them feel bad. It's to help us protect the organization and help you become a vital part of that.
So we set it up right, and we said, it's not to trick anybody or to laugh at them; it's because you are a critical link in the defensive posture here. That was done in a super friendly way, done as a commercial. And then you can start to send those things out, because you've set the tone appropriately. Your follow-up is critical, too, because at the point of follow-up, if you redirect to a page that feels scolding or something else, that's bad. So if somebody clicked, the next video it would go to basically said, oops, you clicked, but that's OK. You're safe, the company is safe, and here's why we've had to do these things, reinforcing all those other points.

AF: Hmm. Is there ever ... OK, Marcin, you don't get to answer this one first. Do you ever get to a place where it becomes punitive? Do you ever say, "OK, enough is enough. You have failed so many times; you have led to a company breach," et cetera? Do you ever get punitive, or do you always have this very nice, happy, holistic perspective?

PC: That's a hard one to answer. I'm way more carrot than stick, because I believe that if somebody fails over and over and over, there's probably something messed up within the context we've put that person in. That being said, there are organizations that naturally carry really, really high risk, and they cannot afford to have somebody fail a phishing test over and over and over again. That's where the punitive piece comes in. Have I asked: is my organization one that cannot tolerate this risk? And are we at the point where there is no other layer of protection we can put as a net under this person? The first thing I'm always going to do is look at whether we have somehow failed this person, leading to their supposed, quote unquote, "failure" of this phishing scenario. And if I can say with a clear conscience that we've done everything we possibly can as an organization to set this person up for success, then there is a series of steps you take somebody through, but it may not be firing that person. It may be moving that person to an area where they can be successful, a different job, or maybe reducing their permission set. I'm going to look at those things before I say three strikes and you're out.

AF: Huh. Wow. So I don't have my answer, and I'm going to ask Marcin for his, because I think I'm still exploring how to be humanistic in this way. And you've clearly gotten there. You've clearly reached the mountaintop, the Nirvana, of how to be humanistic in the way you're doing it. I think I'm like 80 percent there.

PC: I mean, it's easier for me because I'm not running a security program right now, right? If I were a CISO, my perspective might shift a little. But I'd like to think that even in a CISO seat, I'm always going to ask: how have I, or how has the company, failed this person? Maybe I've asked for budget for something like browser containers and it hasn't been approved. Well, now that person's quote unquote "failure" becomes part of my business case for this other thing that can help us.

AF: So Marcin, it's funny, because one time somebody asked me, "What is a thing that people don't know about you until they get to know you?" I thought about it, and I asked the people I work with closely, "What do I say to this? I don't know." And the answer was that I come across as very humanistic, but behind the scenes I run a very tight ship. I have a very financial, government-style background, where things have to be perfect and I expect the best, no question. Which is why I kind of struggle with this, and I think I'm still trying to find who I am as a CISO in this space, because there is so much of me that is focused like that, and I don't want to be punitive. But there's also this part of you that grew up that way: I went to school uphill both ways in the snow, why don't you? And you better not click. You have to break that cycle in order to be better as this next generation of security professionals. And I know that, and I'm still trying for it. But I'm intrigued. Marcin, how do you feel about punitive or not?

MS: I have a slightly different answer here. Whenever I come across a situation where I'd have to choose to go into a space that is punitive, I turn instead toward a space that is protective or preventive. It's more or less about making sure that we are actually fulfilling our responsibilities to our people, to our organization, to our stakeholders and customers, and then looking back at whether the response was punitive as well. And I think I'm a little bit on her side here. There are a number of things that might prevent a person from succeeding that don't largely depend on the person themselves. Maybe we put that person in the wrong role, or improperly equipped them with the "why." And maybe there are a number of things we can add to make sure it doesn't happen. But at some point, we have to fulfill our obligations and make sure our responsibilities are intact. I like the answer Perry gave about having a different point of view if he were a CISO; that's actually very apt. I have a different perspective: I look at the employees I try to protect as customers, and I'm a little bit on the side of making sure I understand twice. If someone reporting in my structure made a couple of those mistakes, I would probably be more focused on getting everything a little closer to perfect, because I'm more responsible for their performance and their success ratio, and so on, for different metrics and KPIs.

AF: Yeah, there's—Oh, go ahead, Perry.

PC: I was going to say, I think the critical thing that always comes back to my mind is that we are only human. That phrase exists for a reason, and in the right context, you or I or anybody else can be phished. We can make the mistake. At one point in the book, I list this whole set of things that security people typically run afoul of, even as we preach against them. Take passwords again: we say you need to create a strong password. It needs to be unique. It needs to not be shared. And yet in the moment when we're, say, on our mobile phone and we sign into something that makes us create an account, we don't have our password manager there. What do we do in that moment? We go, "Oh, I'll use this crap password and then I'll go back and change it later." And we never do. Or we're presented with the offer of setting up MFA, and we just don't feel like we have time right then, because we need to get into the system and don't want to be bothered with this other piece of administration. Or: I'm just going into the store for a minute, I guess I can leave my laptop on the front seat. We all do these things; just about every security professional does. And yet we preach against them, and when somebody in our organization falls on the other side of that, we tend to want to condemn that person. But I think most of us end up crossing the line in just about every area we preach vehemently against.

AF: All right. Well, I will say that I only made it halfway through the questions in the book that I had for you. This may become a serial conversation at some point.

PC: Yeah, if you ever want to do another episode or even just follow up with a chat.

AF: Yes, there's just so much here, and it's so nice to talk to somebody who keeps this so alive and so fresh. But there are a few things top of mind that I hope listeners will walk away with. The first: security programs need to take accountability as well. Did we set people up for success, and how did we make them curious about it? That's such a pure way to look at this. Leveraging people as a key component alongside tech and process, and incorporating them into that cycle and workflow, is really the best way to approach this; it's how a lot of us do it, and it's great that we're doing it that way. But I love this idea of running a security program like a marketing program, where you facilitate microlearnings, these piece-by-piece bits, keep things primed at all times, and give people what's most useful to them. And this trust dynamic: so important. And clearly, we finally disagree on what kids' clean rooms look like. We found our one thing to disagree on. [Laughs] But it's just been a wonderful time, and I do hope we get to talk again.

PC: Yeah, thank you.

MS: Thank you.

Artwork for this series was created by Guss Tsatsakis.

Follow Along with Security Sandbox by Subscribing to The Relativity Blog


McKenna Brown is a member of the marketing team at Relativity, specializing in content development.
