There's no doubt about it: security isn't a challenge you can automate away. There isn't a robot or script ready and waiting to stop every adversary before they infiltrate your systems, and there isn't one that can replace your human teams. It's just not going to happen.
In fact, this month's Security Sandbox guests—Lou Yeck, Qlik's CISO, and Hector Pena, Relativity's senior manager of security—told Amanda Fennell, Relativity's CSO, that automation can actually prompt the need for more headcount.
How? Why? Listen in to find out.
Amanda Fennell: Thanks for tuning in. If you enjoy today's episode, please rate and review us wherever you get your podcasts.
Welcome to Security Sandbox. I'm Amanda Fennell, chief security officer and chief information officer at Relativity, where we help the legal and compliance world solve complex data problems securely. And that takes a lot of creativity. One of the best things about a sandbox is you can explore and try anything. When good tech meets well-trained, empowered employees, your business is more secure. This season, we're exploring ways to elevate the strongest link in your security chain—people—through a creative use of technology, process, and training. Grab your shovel, and let's dig in.
In today's episode, our sandbox heads to the scripts for a simplified conversation with Lou Yeck, Qlik's CISO, and Hector Pena, Relativity's senior manager of security, on the delicate art of blending automation and analytics with human expertise in your security program. Let's dive in.
All right, so Lou and Hector, I feel like I basically went into my LinkedIn and popped in the keyword “automation,” and the two of you were the ones who popped up. So that's how we got to today. But across my work history, you two are also the people who come to mind most when I think about this topic. So let's level set with automation. It can mean a lot of things. I'm going to ask what it means to you, and I want this to be a very existential answer—how you see automation, but also how your teams and your security program see automation. Lou, you are up to bat.
Louis Yeck: Thanks, Amanda. So, I mean, I think, for me, it's very simple. You know, it's how do we use technology to augment our teams? Just like everybody else, we have a shortage in staff, right? And there's never—there's the never-ending task list of things to do. So how do we leverage technology to help us, you know, simplify processes and even make decisions for things like repetitive tasks? I think that's really where we've benefited from the most. I think the repetitive tasks one has been, I think, a really good one for us. Like, an example I would use is cloud compliance—right?—using automation to let technology make corrective actions for violations, so users don't have to do that, especially ones that happen frequently. And then—you know, then we can follow up later on with policies and enforcement and things like that.
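Lou's cloud-compliance example can be sketched as a simple detect-and-correct loop. This is an illustrative sketch only—the resource shapes and policy rules below are invented for the example and don't reflect any cloud provider's actual API:

```python
# Illustrative sketch of automated cloud-compliance remediation:
# scan resource configurations for policy violations, apply the
# corrective action automatically, and keep an audit trail for the
# follow-up conversation about policies and enforcement.

POLICY = {
    "public_access": False,  # storage must not be publicly readable
    "encryption": True,      # storage must be encrypted at rest
}

def remediate(resources):
    """Correct violations in place and return an audit trail."""
    audit = []
    for res in resources:
        for setting, required in POLICY.items():
            if res.get(setting) != required:
                audit.append((res["name"], setting, res.get(setting), required))
                res[setting] = required  # the automated corrective action
    return audit

buckets = [
    {"name": "logs", "public_access": True, "encryption": True},
    {"name": "backups", "public_access": False, "encryption": False},
]
trail = remediate(buckets)
# Every bucket is now compliant; `trail` records what was fixed so a
# human can follow up on frequent offenders with policy changes.
```

The point of the pattern is that the frequent, repetitive violations get corrected without a person in the loop, while the audit trail preserves the data humans need for enforcement decisions later.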
AF: I mean, I wish I could just hand that over to Hector and be like, Hector, what are your thoughts? But that would be easy. And since we work together, you know I'm not going to do that. I'm going to ask you to go straight for the negative then, which—you're good at this part. What are some mistakes that you might have made along the way with thinking of automation and, like, how you've implemented it? You've used it with your team. You tried to integrate analytics. What are some mistakes we've made?
Hector Pena: So some of the mistakes I have identified with automation is actually thinking you've automated everything, just for items to fall through the cracks. So you may think you had documented all your manual, mundane, repeatable tasks, only to go and identify that you only automated maybe a section of it—maybe 10 percent, 30 percent, 40, 50, whatever the case is, right? So essentially, what you thought you fully automated end-to-end requires you to actually do more work so that you continue to automate. So the mistake that I guess I would identify is thinking that your small wins are your major wins when technically you've only started the battle, and you've yet to even win the war of automation.
AF: Oh, man. So negative to start—I love it. I'm not surprised at all. That's exactly where I thought you were going to go with it. It's a good one. I will say that, Lou, as a caveat, to really make sure that you don't get any feet to the fire here, I believe that your perspective and information does not in any way represent Qlik. Is that correct?
LY: Fair enough.
AF: That's fair enough. So in your background—totally sanitized, unrelated—have there been any things that you made some mistakes with of how you tried to do analytics and automation in your security program? Just like Hector—he saw a lot of things he thought were big wins, and they were just the tip of the iceberg.
LY: I mean, I think for us—'cause I relate to Hector's example—I think for us it was awareness training, right? We had automation for, you know, user awareness training. And we thought we had it all sorted out, and it would go out to all of our users. And what we found was that our platform was sending training to the wrong users in the wrong departments. And then we had other unexpected bugs and inconsistencies. And so overall, we thought we were simplifying our lives because users could take the training, and then we would see the completion. In actuality, what ended up happening is we ended up having to do more work further right of the schedule—of our process—because we had manual activities where we had all these interactions with users who had questions, who couldn't find things. And then—like I said, we had inconsistencies. And so at the end of the day, it turned out to be a large learning experience for us, where we are going back to chart a new path later on this year. And let's leave it at that.
AF: Well, so like, let's navigate back to the friendly forest here of the positive. So there's a big problem with automation. It seems to be about how it's framed. And so I think that's the question of how does it go from being a job replacer to a job amplifier? So do you have any perspective on how you frame it, or is that something that you haven't really homed in on?
LY: No, I think it's definitely not a job replacer. I mean, again, I think for us it's a force multiplier, right? It's helping us, you know, realize economies of scale. As I mentioned, it's things like repetitive tasks, right? Being able to even, like, just simplify that and that allows you to elevate activities for your staff to focus on other things. Part of this other conversation is, with automation, you can have decisions being made. And then you push some of those tasks down to other tiers, like an operations or a service tier. So if a decision is made through technology, then your service tiers are just handling the ticketing and post-decision-making cleanup. And so for us, I think it's helping us to move some of the responsibilities to a different level. And—or you could look at it, conversely, of elevating a different group to be empowered to make decisions, as well.
AF: Hector, it's time to tap you in. Do you want to fight with him or you agree with this?
HP: Sure, I guess. So it's going back to the original question of job replacer or job amplifier. I would lean more towards the job amplifier, personally. So I think that we put people in their positions with what they need to succeed and to be able to handle their manual tasks. I think in Lou's case, he's talking about compliance trainings, right? So we put people in place to handle that job. But there's just so much overhead that goes into that type of work, such as having to verify that users have completed those trainings, that they're actually taking effect, and that any blocks or access-request restrictions based off of those trainings are being enforced.
I think putting somebody in that role and putting automation around that will amplify their role—to be able to spread out to further parts of the organization, to be able to complete those tasks with faster response times, faster integrations, being able to analyze the results with more speed—versus the job replacer. We don't want to build the automation to remove that person so that we're no longer having somebody actually monitor that compliance training and security training that Lou might be referencing, right? We don't want to put an automation, a script, a robot in that case. We want an actual human to be able to guide that automation to amplify the results of that type of work.
AF: Why do you like humans so much, Hector? You're always fighting for that, that you want to make sure that people are not forgetting the human element of automation. But what's the deal here?
HP: Personally, I will tell you, because I can talk to them. I can have an exchange with them and try to figure out the pain points, where the automation may just throw a trace-type error back at me or an output that says, “go find this.” So it comes down to the technical part of where I can have a human that interacts with the framework and actually pinpoint where items need to be corrected, need to be improved, versus where the automation is just doing what it's told to do. So I do have—the banter of the human is what I definitely enjoy in the automation life cycle.
AF: Oh, you just like to banter back and forth, right? They get the "Futurama" jokes.
AF: But that's, you know, the main point. If you have people who are starting out—so I'll start from the beginning for everybody. If we're starting out, let's look back a couple years. Let's do a little bit of a flashback—best practices if you're going to build automation and analytics into your program. You're on a small scale. You're on a budget. Where would you tell people to start and what resources? And then, Lou, I'll kick it to you after Hector chimes in.
HP: So small scale, small budget, I would definitely try to eliminate any type of, I'd say, verification processes, for example. Let's say vulnerability verification, phishing verification, endpoint, just deployments, tools—try to make sure that your services are healthy so that you actually know how effective your program is. If you can shine light on a lot of, like, dark spots, dark areas within an organization such as visibility coverage, cyber coverage, where your weaknesses might be—and automation can help you uncover that, to help guide you of how good your security posture is—I think that could be a win at a low cost so that you know where to focus on so that when, you know, maybe the budget expands one day, you know where to invest.
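The low-cost verification Hector suggests can be as simple as diffing an asset inventory against each tool's deployment list to surface the "dark spots." A minimal sketch—the inventory sources and host names here are hypothetical:

```python
# Sketch of coverage verification on a small budget: compare what
# you believe you own against what each security tool actually
# covers, and surface the gaps to guide future investment.

inventory = {"host-01", "host-02", "host-03", "host-04"}  # asset inventory
edr_agents = {"host-01", "host-03"}                       # endpoint agent deployed
vuln_scans = {"host-01", "host-02", "host-03"}            # in scan scope

def coverage_gaps(assets, covered):
    """Return the assets a tool does not cover, in stable order."""
    return sorted(assets - covered)

gaps = {
    "edr": coverage_gaps(inventory, edr_agents),
    "vuln_scanning": coverage_gaps(inventory, vuln_scans),
}
# `gaps` now shows exactly where visibility is missing—i.e., where
# to focus when the budget expands.
```

Even this much, run on a schedule, answers the "how healthy are my services?" question without buying anything new.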
AF: All right. Verifications—I'm taking that as the tagline. All right, Lou, what did you think? How did you start out?
LY: So I would agree with that because it's—I look at those as repetitive, right? Because if you're doing validations—right?—you're basically going to be going across the company and asking, you know, similar questions to try to uncover those dark spots. But if you have everything, you know, in front of you, that allows you to see everything and kind of identify any areas that require additional inspection, right? I think to take it up a level, too, once you get to that point, that's where you can have the automation of even just enforcement because everything is point in time.
If you're looking at something, and you're trying to identify areas that may require some improvement—'cause we probably have them like everyone else—in different parts of the company. How do you make sure that it stays clean, right? Once—because you're going to come clean your room and then, five minutes later, it may not be clean again. But that's where I think automation comes in because if someone comes in and makes a change or does something, you can have a bot come in and come back and say, “okay, now we're going to clean the room up because you weren't supposed to do that.” And I think there's some benefits to that, right?
AF: So when you started out, you have some ideas and some priorities of how you're going to approach what your low-hanging fruit would be for some of your automation. What—I mean, there's no nice way to ask this, but did you have any screw-ups, something you automated that at the end you were like, that was totally not worth it? The juice was not worth the squeeze on that one. And I'll just take a step back and see who mentions something first and be really quiet.
HP: I do have a very technical example I can provide you, so if you’d like to...
AF: Oh, let's go for it.
AF: I feel like I better know about it, but okay.
HP: You might not, just because it does operate in the background. So we built an automation to actually—to lock out user accounts that may have actually had a security alert trigger them. Lo and behold, there's a lot of false positives, and some people started to get locked out of their accounts.
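The false-positive lockout story suggests an obvious amendment: require corroborating signals before taking a disruptive action. This is a hypothetical sketch of that idea, not Relativity's actual implementation—the alert sources and threshold are invented:

```python
# Sketch of alert-driven account lockout with a corroboration
# threshold: lock only when multiple distinct alert sources agree,
# and route single-signal users to human review instead.

LOCK_THRESHOLD = 2  # distinct alert sources required before lockout

def decide_lockouts(alerts):
    """alerts: (user, source) pairs. Return (users to lock, users to review)."""
    sources = {}
    for user, source in alerts:
        sources.setdefault(user, set()).add(source)
    lock = {u for u, s in sources.items() if len(s) >= LOCK_THRESHOLD}
    review = set(sources) - lock
    return lock, review

alerts = [
    ("alice", "impossible_travel"),
    ("alice", "password_spray"),
    ("bob", "impossible_travel"),  # single signal: review, not lockout
]
lock, review = decide_lockouts(alerts)
```

The design choice here is exactly the human-in-the-loop point Hector keeps making: the automation narrows the queue, and a person makes the call on the ambiguous cases.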
AF: Again, let's bring it back to the happy. We'll sprinkle some magic automation glitter. What was a success then, Lou—one thing that you were like, yeah, we nailed it on that one?
LY: For us, we have automation for if there's an event for devices, we will take action to isolate or remove from the network. The value prop is the time to respond and put them in a timeout. They—the devices go on a timeout so we can investigate them. So, at, you know, 2 o'clock in the morning when these things happen—the isolation happens almost within minutes. And then we can sit there and analyze what's happened afterwards.
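Lou's "timeout" pattern—isolate immediately, investigate afterward—can be sketched as an event handler that quarantines first and leaves analysis for a human. The event shape and severity scale below are illustrative, not any vendor's API:

```python
# Sketch of automated device isolation: on a qualifying event,
# isolate the device within minutes (even at 2 a.m.) and queue it
# for analyst investigation afterward.

import datetime

quarantine = []  # devices awaiting analyst review

def handle_event(event):
    """Isolate the device right away; the investigation happens later."""
    if event["severity"] >= 7:  # hypothetical severity threshold
        event["isolated_at"] = datetime.datetime.now(
            datetime.timezone.utc
        ).isoformat()
        quarantine.append(event)
        return "isolated"
    return "monitor"

status = handle_event({"device": "laptop-042", "severity": 9})
# The device is off the network in minutes; analysts work through
# the quarantine queue on their own schedule.
```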
AF: Okay. I'd go for that. I feel like that's a success. I always like to kick people off the network. All right, Hector, what's your thinking—something that you were like, yeah, this was awesome?
HP: I would lean into just enrichment of automation, right? So, often what's, like—
AF: Wait, wait, wait. You're not starting with phishing? That's not where you're going?
HP: Well, we might lead into that discussion based ...
AF: Okay, okay.
HP: ... off the enrichment talk if you want it to go that way. But, yeah, I mean, we have hundreds of phishing emails, hundreds of alerts that trigger all day long. And the enrichment process of having an analyst go to every single one of your vendors, every single one of your queues, just to go and get some intel—indicators of compromise, CVEs, any emerging threats—the automation of actually having that enrichment all in a single location for you, that automatically gives you some of those results, cuts down all the time of having to go and manually analyze that. You get faster time to response, faster information, and better analytics around your data so that you can make a true positive or false positive determination on a phishing email, or an alert that triggers on one of your endpoints. So I would say one of our biggest wins here is just enrichment of our logs, of our SIEM platform, of the output of our phishing program. So that has been a major win in the incident response world for us.
AF: Okay. I mean, like, I appreciate the loop in for that one. I think there's a lot of, like—it goes back to me as a highlight here about what we consider a success whenever you are doing something. And so for you, it's not just about the efficiency and time, but also the enrichment of it, the quality of something—of what we're getting. So that's helpful.
One thing I feel like we've navigated—and, Lou, I don't know if we've talked about this before—but it's really about adoption of this idea of automation. And it feels like it's not always adopted. And I feel like it's a little bit how I am whenever people mention AI. I'm immediately like, is it AI or ML? Like, I'm immediately defensive, as are most people in security, because we've seen so many people utilize the words incorrectly. And so automation's the same thing—I think it immediately gets a lot of people's backs up, and they don't necessarily, A, understand it, and, B, they think it is just magical dust that we sprinkle upon things, and things happen. So what does adoption for automation look like, and how do we do that or fail at that—and what lessons could people take from it?
LY: So I think adoption is key for this because, in order for you to have success, the decision-making or the work—the outcome—is going to be driven with groups, and not just the security teams, but others. And, you know, an example I'll give is automation in our CI/CD pipeline—we have components of security embedded in our ...
AF: That was a bingo. You said CI/CD. That's a security bingo right there. All right. I'm putting it down.
LY: I was going to say, I'm checking the box here.
AF: Checking the—(laughter). Keep going.
LY: But I think it's validation of security in your CI/CD pipeline, but also your development lifecycle, right? So getting everybody on board with—you're going to have these checks at different checkpoints within your lifecycle, and making sure that it's automated. You know, for us, mostly it's automated so that we have different checks in place, and it's about getting that data. And Hector mentioned enrichment—you know, getting it back to the stakeholders, whether it's something like a finding in code or something related to even a scan—right?—but making sure that that automation gets back to the impacted team so that they can take the corrective actions.
But the automation is key because then it takes away from a user having to chase a group or a bunch of developers. It just kind of shows up—but the adoption piece is everybody being on board with, “this is the process, and here's where the automated steps are going to come in, and then the expectations once those automated actions or the reports come in.” We agree on what the expected outcomes are in terms of corrections or fixes and things like that.
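The "findings just show up for the right team" step Lou describes is essentially routing by ownership. A small sketch—the ownership map, component names, and finding format are assumptions for illustration:

```python
# Sketch of routing automated pipeline findings to the team that
# owns the affected component, so nobody has to chase developers
# by hand. Unowned components fall back to a triage queue.

OWNERS = {
    "auth-service": "identity-team",
    "billing": "payments-team",
}

def route_findings(findings):
    """Group scan findings by owning team for corrective action."""
    by_team = {}
    for f in findings:
        team = OWNERS.get(f["component"], "security-triage")
        by_team.setdefault(team, []).append(f)
    return by_team

routed = route_findings([
    {"component": "auth-service", "issue": "hardcoded secret"},
    {"component": "legacy-tool", "issue": "outdated dependency"},
])
# identity-team sees its finding directly; the unowned one lands in
# the security-triage queue instead of getting lost.
```

The adoption piece is agreeing up front on that ownership map and on what each team does when a routed finding arrives.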
AF: Adoption from the executive team or the company—does that have to take place? Do you have to have buy-in from the CEO of the company to work on automation?
HP: So personally, I'm going to say no, because I think we somewhat do operate in our own little pocket of Relativity, as well as within the industry. I would say, no, you don't need it to come from top-down, from CEOs to executives, et cetera. I think there's a playing field where we can operate on our own, and we can automate what works for us. And, honestly, we don't ever have to go and interact with any other users if we don't want to. So if they don't want to automate it, we'll either automate it for you or kind of take a get-out-of-our-way mentality—we're automating this, right? You know, welcome to the new age, right? And then there's also, I guess, in security—maybe there's a little bit of—in Relativity, it was freedom, right? You pick your own language. You pick your own scripting—Python, Bash, you know, PowerShell—whatever works for us. We weren't really told what to use, so we got to pick our own flavor, and we kind of went with either Python or PowerShell. And we were able to automate it that way, without, like—this is the standard for an engineering team, or this is how our product is built; use this type of automation ...
AF: And that didn't set off any security alerts at all. We love PowerShell in security. It's awesome.
HP: Oh, yeah. Yes. It's fantastic, right?
AF: So I posit—and I'm going to say this shamelessly, for myself, Lou: instead of a shameless plug for Qlik, I'm going to do a shameless plug for myself as, like, the world's best boss. I'm going to get the mug. I'm going to shamelessly say that you were given that amount of latitude because of trust and because you were buffered. Because I do think there is a lot of attention on the executive level—the CEO, the company, the founder, and so on—on automation, and they just loved hearing about the wins after it was done. They didn't necessarily need to know how everything was made. And so coming out every three months or whatever and saying, hey, we've automated this new thing in security was kind of like, “oh, that sounds amazing, security's awesome,” as opposed to, “why are you so slow?”
So it goes back to what automation solves for: faster, more efficient, and less resourcing required for something. You can give people more interesting things to work on so you can retain top talent and so on and so on. I think it's kind of, like, this pattern that comes up with automation—shameless plug for me.
Lou, adoption—going back to that, did you need adoption about the things that you wanted to automate in your program from your CEO and from the exec team?
LY: No. I mean, I think for us, it was also very similar, is that we just ...
AF: No. You have to say I was awesome. You have to say “Amanda's a special unicorn, you're really lucky to work (laughter)”—start from there. Start from there.
LY: Well, you are very ...
AF: Hector, you're so lucky.
LY: You are awesome. Yeah, you are awesome. But, you know, I think, in general, in the security industry, it helps to have good leadership at the top that's supportive of security. You know, as long as you don't go stick your finger in the outlet, I think you get some latitude. At least that's what happened with me. And having very supportive leadership was definitely key. And my leadership is very techie, so it's helpful.
AF: Oh, so, okay. Great point, actually. And let's just—let's be super edgy here and talk about that. Okay. Leadership in the tech industry being techie—it seems like that's not always the norm. So how do you actually communicate efficacy of automation when it's not technical? So you have to have somebody in your company that's not technical. How did you convince them that it was something that was really useful to be done if they're not technical?
LY: So I'll be honest with you, I think it goes back to resourcing and then saying like, well, if we have to continue to have these tasks as manual—I mean, because remember, automation isn't just the manual tasks, but also reducing the number of steps users have to take. But the workforce would continue to have to grow, right? So part of this is, it's either going to be we have to grow a larger bench for operations. Users may leave because they don't want to do these tasks anymore, right? And so it comes down to, for me, money and resource expansion. So automation is—the automation piece helps to say, “hey, if we do these things, we're simplifying and it's reduced costs or reduced offshoring costs” or things like that.
AF: Yeah. I will say that I've thought about this for a few years now. There's this best line, this moment in "The Matrix"—which, by the way, "The Matrix," the original one—great. But any one after that is successively less great. And so I don't remember which one it is, if it's the second or third one that had come out. But there's a moment when they're, like, overwhelmed by what's going to happen and how many machines there are. And they just say, at that moment, “the machines are digging.” And that really resonated with me in terms of, like, the cyber realm, because we can't compete with the automation that's taking place with adversaries. We just can't. We can't throw enough resources at that, and we can't throw enough bodies and people and so on. There's just not enough bodies for the grenades.
And so at some point, you have to fight them with what they also are doing, and that's the automation. They've written scripts. They've done botnets, et cetera. So they're doing all of this stuff. We have to do the same thing and deploy the same thing. But also, in order to be better, we've got a little bit more that we have to do. We have to reach a little bit further and look a little bit further along than they are in order to beat them. So I've always felt there's, like, a resonating thing about the resourcing, the people. That's the selling point of, we're never going to win that battle. So you have to find a different way to try to win it. Hector, unmute that microphone, buddy. I know you've got some stuff to say.
HP: Oh, no, you're good. Sorry. It was just—I had a knot in my throat. Didn't want that on the audio.
AF: (Mimicking coughing)
HP: Can edit that. Yeah.
AF: Let me clear my throat. All right.
HP: Yes. So do you mind repeating the last bit of the question?
AF: No, I'm not repeating anything. Like, I am totally going to put you on the spot. But, no, the only thing I would say is that I don't think we can fight anything without automation because adversaries are automating. So if you go back to that and you look at this idea of, like, how did you sell people in automation in any way, shape, or form, was it because you said, we're going to be faster and we're going to need less people because now you argue about the fact that you don't have more people? But my question is—and this is a—we're going to take a one-on-one moment right now in front of everyone in the podcast. Your argument is that I need more people because we automated all this other stuff, but we have other things we need to focus on. But I thought that's why I let you automate it, was so you didn't need more people.
HP: So that's my go-to—one of your lessons of automation. And one of the early points I made is, you only automated a subset of it, right? So I'm going to use, definitely, our analytics. We absorb data entry points from all over the organization—from our cloud platform, from our corporate environment, from our end users, from our SaaS products—and it's constantly feeding in, right? And we are trying to automate as much of the analysis as possible, right? So we try to enrich it. We'll try to close it. We'll try to make a determination of, what would the human do, right? And the human says, hey, this is malware. So now we've programmed this bot to make the malware determination for us. And as a result of that, we set up another layer, and we set up another layer, and another layer. By the time we realize it, we maybe had 100, 150 different custom alerts, and we've probably only automated maybe about 10, 20 of them that actually matter.
So what about the rest of them? We're constantly trying to put an automation out. But as we keep moving along the automation line, who's verifying that the automation actually works from the beginning? Did we get it right? Does it need tuning? Sometimes the mentality is set it and forget it: hey, we won this battle; let's move on to the next one. But, you know, hey, you're losing that battle again in the background. So now you have to take people who are moving the front line forward and realize, hey, there are people coming up from behind you. I think you're saying this in terms of how adversaries are automating—they're constantly trying to adapt. I actually had a conversation with somebody on my team this week and I made a reference. I was like, “yeah, Relativity is a fortress.” He's like, “no, Relativity is a country, and we're trying to defend about 200 different borders.”
AF: We automated quality versus quantity, which we said earlier. It's about what you automated. Was it actually something that was super valuable that you were able to get to, or were we just trying things out and we weren't there yet?
HP: I agree with you on that. So that's kind of my value. That is, more automation sometimes might equate to more people. That's very—it's an oxymoron in its own sense, right? It's like, how do you need more people when you're automating stuff? Well, you know, just because you automate it, doesn't mean it actually went away. Now it's just possibly enriched. There's more output, there's more data streams from that automation. Who's absorbing all that?
AF: That's a really good point that I hope is a big takeaway for people—that more automation doesn't necessarily mean less people. I think peripherally, a lot of people think, wait, if you're automating everything, where's my job going? So let's remove that fear. First of all, if you're a talented individual, we would find a role for you. So it's not about that. And it would actually behoove you to automate that so that we can give you more exciting stuff to work on. So it's a good disclaimer that automating doesn't necessarily mean less people. But, Lou, that was your selling point. It would take less resources. That's why we should automate. So what's your response to that?
LY: There's always more work to do. And so automation helps to reduce your, you know, your monotonous tasks, your repetitive tasks, right? I think this is the theme of the conversation here, that even as Hector has mentioned, I always look at this as there's always more work to do. And so using automation to get the data and to shorten decision points and decision-making is key. But it could lead to more work, right? But there's always more work to do in security. There's always—as you mentioned, attackers are automating. I think that was a really good point, and so we've got to evolve too.
AF: Well, look, I've got some takeaways in case this isn't the obvious one for everybody who's been listening. But I've got some, and I'm just going to do thumbs up if we agree with this, all right? I'm going to make some statements. We're going to see if it's controversial. So in automation, you may have won the battle, but the war is going to be ongoing, and what looks like a big achievement—it's already a thumbs up. All right, what looks like a big achievement might only be the tip of the iceberg in tasks that you would need to automate. So for everyone who can't see—obviously, we use video. So I can see the cues from people we're talking to. But this is a thumbs up from Hector, a thumbs up from Lou. We like that. The war is ongoing.
Okay, second one. Implemented correctly, automation, analytics—this can be a force multiplier for your talent, but your talent should also be force multipliers for your automation. Feels like there's an organic amount here that the humans have to be working with the automation. I don't see thumbs up on that one. That's a no? Talk about it. Where do you think it needs to be?
HP: I guess let me try to decompress that statement. So you're saying that more automation technically leads to force multipliers, which means that as a result, your people should also force multiply their work, right? So essentially, you—as the person who automates their job, their role, their task, their repeatable item, that they should be—their job should be amplified, so now they have to go and do more work? Is that kind of what you're trying to get at?
AF: Oh, man. Hector is not happy about that one. I know, right? He's like, wait, are you asking for more from my people? But, no, I think there's an organic, which comes first? But, Lou, go for it. I see you about to say something.
LY: I think the work—like, it can lead to more work, right? It just may not be the security team. Like, you're going to have findings, you're going to have something. And I was just thinking about this. Like, your automation could do—you know, the data comes in, and you're going to get a bunch of outputs of things that need to be addressed. And so somebody is going to have to do them, right? The automation is going to tell you something, and that something is going to have to be addressed by a subset of people. And so by being more efficient, you could be throwing more work over the wall. I guess that's all I'm thinking about.
AF: I think that's actually a way cooler mention to call out because it'd be awesome if I could just say, like, that was my, you know, final point and that's it. But it's a great provocative point that sometimes when you're automating something, you just caused more work, or you just threw it over the wall. You just moved it along. Someone else is still having to deal with the manual work.
AF: So really seeing something through to completion is a much longer conversation than just, like, okay, well, I automated this one thing. We don't care about anything else. That's how we measure success, so whatever. Screw you. I think there's something about keep going along and follow the thread, keep pulling the thread to make sure that you've automated to completion.
HP: I have a real-world example of that as well. Specifically on throwing it over the wall—we automated our vulnerability detection program, where we took the vulnerabilities that one of our security vendors identified and automatically routed them out to about 50 or 60 different engineering teams. They got maybe a thousand tickets. And we said, "have fun," right? So we automated the identification part, but now they're stuck verifying and closing the tickets and adding them to their Scrums and their Agile workflows. So now what we have to do on the back end to close the loop is say, "hang on, let me save you; let me go and automatically close those for you now so that you no longer have to worry about it." We not only threw it over the fence, we had to reel it back, and then we said, "here's a better product."
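To make the "close the loop" step concrete: the pattern Hector describes can be sketched as a follow-up job that compares open tickets against the latest scan results and flags for closure any ticket whose finding has disappeared. This is a minimal illustration, not Relativity's actual implementation; the function and data structures are hypothetical.

```python
# Hypothetical sketch: auto-close vulnerability tickets whose finding
# no longer appears in the latest scan. All names here are illustrative.

def tickets_to_close(open_tickets, latest_findings):
    """Return ticket IDs whose vulnerability is no longer detected.

    open_tickets:    dict mapping ticket_id -> (asset, vuln_id)
    latest_findings: set of (asset, vuln_id) pairs still detected by the scanner
    """
    return sorted(
        ticket_id
        for ticket_id, finding in open_tickets.items()
        if finding not in latest_findings
    )

# Example: two tickets were auto-filed, but the scanner now sees only one finding.
open_tickets = {
    "SEC-101": ("web-01", "CVE-2021-44228"),
    "SEC-102": ("db-02", "CVE-2021-3156"),
}
latest_findings = {("web-01", "CVE-2021-44228")}

print(tickets_to_close(open_tickets, latest_findings))  # ['SEC-102']
```

In practice the closure step would call the ticketing system's API (e.g., a transition endpoint) rather than just printing IDs, and you'd want an audit note on each ticket explaining why it was closed automatically.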
AF: Oh, we did that? When did we do that? That's horrible. That sounds vaguely familiar from two years ago.
HP: Yeah. It was about mid-2021. It's when we merged our detection automation and security operations teams.
AF: Yeah, I feel like this was a moment in time that I had to deal with a lot of arguments from engineering. But, yeah, that was not cool, dude. That was not cool.
AF: Thanks for the (laughter) ...
HP: Lessons learned.
AF: Lessons learned. All right. So I guess the last thing I would say that's a simple point for a takeaway is I feel like good leadership will trust the team to make the changes that need to be made. So there's a little bit of latitude that seems to be required for creativity and innovation to happen. We have to know what good is. But you got to have a little bit of trust there. And I think it's required. And it sounds like, Lou, you seem to think that, like, it's the technical side of it that allowed for the trust. You have a technical leadership team. They know. They understand. They understand the value of what you're doing with your analytics, obviously, as a company that's driven by analytics. So it seems like that's an agreement that you would say, yes, good leadership will trust the team to make some changes.
AF: Is that a—that's a test.
LY: Hands down. Totally.
AF: Okay. Hector, shameless plug back for best boss ever—do you think good leadership will trust the team to make the changes?
HP: I would say so. It's ...
AF: Womp, womp (laughter).
HP: I feel like I've gained your trust over the last five years of our working relationship here, but ...
AF: It was definitely—there's a lot of trust for it. But I think that that's something that's been transferred reasonably well to the team that works under you. Like, whenever you've proven yourself over time and time again that you fundamentally are passionate about what you're doing, you're trying to just secure our company, our data, and so on, there's a certain amount of trust that gets instilled that I know that you're doing it for the right reasons. This isn't something you're trying to do for any other reason other than you care. So I think it's super helpful. That's my quality moment.
AF: All right. So I end on quotes, and I think this is a fun one. I recently took a trip to Rome, which means I've reemerged with my obsession with Marcus Aurelius. Obviously, for everyone who's watched "Gladiator," we all love him. By the way, just like "Braveheart"—not historically accurate whatsoever in so many different ways, but let's just move past that. But as Roman emperor—I think he was only emperor for maybe 20 years—he imparted really impactful wisdom in his Meditations. And I came across something, because I've been reading a lot and watching some documentaries about him recently, and he said something that sounded very intriguing.
“Loss is nothing else but change, and change is nature's delight.” There's a certain aspect of “nature's delight” which sounded like such an intriguing pairing of words, that feels like all the changes that come at us that we're trying to automate. It's this idea of almost, like, mischievousness. It's this idea that there's so many different things that are happening in the world that we can't really be in front of. There's going to constantly be this change. We always say change is constant and so on. But how can we handle this? And there's a certain amount of acceptance of change and constantly changing that I think automation gets in front of and tries to tackle. We know we can't solve for everything, but we can solve for a little bit of nature's delight. So I thought that was a cool one.
Gentlemen, it was a pleasure to have both of you. I appreciate that my regex search for automation people on LinkedIn worked. So thank you for joining today, and thanks for talking about how you built your automation programs.
HP: Thank you, Amanda.
LY: Thanks for having me.
AF: Thanks for digging into these topics with us today. We hope you got some valuable insights from the episode. Please share your comments, and give us a rating. We'd love to hear from you.