AI in Security

Posted on Tuesday, Jan 6, 2026
AI is changing the way a lot of technical teams are doing their jobs, and security teams are no exception. In this episode, we talk with Oren Saban of Mate Security about the impact of AI on the security space and the potential for increasing the success of security teams.

Transcript

Mandi Walls (00:09): Welcome to Page It to the Limit, a podcast where we explore what it takes to run software in production successfully. We cover leading practices used in the software industry to improve the system reliability and the lives of the people supporting those systems. I’m your host, Mandi Walls. Find me at LNXCHK on Twitter. Alright, hello folks. Welcome back. This is our first episode of the new year, 2026. I’m here today with Oren Saban. We are going to be talking about all kinds of stuff. Oren, give us a little bit about who you are and what you do these days.

Oren Saban (00:46): So nice to meet you, Mandi. I am Oren, Oren Saban, the Chief Product Officer of Mate, Mate Security. We’re a company building products for security teams, elite SOC teams that want to utilize AI to become better. And we’ll talk about that, but maybe a bit about myself. I live in Tel Aviv. Before my current role at Mate, I was leading another product team as Director of Product Management at Apex, doing the other way around, security for AI, a startup that got acquired by Tenable. And before that I was at Microsoft, building the XDR solution and later on Security Copilot. I love AI, life, cybersecurity, everything in between, and I’m very excited about what we do. Good morning.

Mandi Walls (01:34): Great. Awesome. Thank you so much for being with us today. So let’s dive right into it. So what are you expecting folks or what are you hoping to give folks as far as helping them do better security with AI? Where’s the state of the art in that space right now?

Oren Saban (01:51): So I think we all know that security, and specifically security operations, gets to a kind of impossible equation. The impossible SOC equation, we call it. There is infinite scope, and it’s growing all the time, right? We got cloud, which brought up a lot of alerts; now we’re getting AI, which brings up more stuff, more data, on top of the regular things, and it all sums up. And on top of that, we have very high stakes. Every alert might matter or might not, and the decisions have very high stakes. It got to a point where it’s really hard, or nearly impossible, to really manage what’s coming in a regular enterprise. And you can add layers of complexity, right? There is multiple tooling, there are M&As and subsidiaries, and a lot of things that security teams in an enterprise have to handle. And we are seeing an opportunity, one of a kind now with AI I would say, to actually change this, to change the way it works, to change the paradigm of: yeah, 90% of what’s in my queue is false positives and benign positives, most of the things I see are wrong, people burn out after 18 months. We’re seeing an opportunity to bring back the joy of doing security into security work and get rid of a lot of the grunt work.

Mandi Walls (03:17): That seems really important. Like you say, we do see a lot of false positives, folks over-tuning things out of the necessity to make sure they’re not missing anything. Like you mentioned, the stakes are very high across all of these. The attack surface seems infinite right now. There are just so many places where things can be misconfigured or just plain wrong and you don’t necessarily know it. Everything is so complex. So how do you go about figuring out how to help people, or what looks like the next best target to help folks with?

Oren Saban (03:54): The way we think about it is that AI is changing the approach. There are about 50 startups now claiming to be AI agents, and most of them are treating the environment in the same way. What I mean by that is there’s a playbook, or even some agentic response, but those are mostly generic. And I think it’s the same path that we’ve seen with SOAR: you need to configure a lot.

Mandi Walls (04:21): MmHmm

Oren Saban (04:21): The insight we’re building on is that effective security operations actually requires deep organizational context and understanding. It’s the same way that if we bring a senior security analyst onto the team, why can’t they be operational from day one? They don’t have the context. They don’t know the architecture, they don’t know the escalation paths, they don’t know the conventions, what the baseline is. And it’s not just here’s how to investigate phishing, but here’s how your organization will actually investigate phishing, with your tools, with your policies, with how risk averse you are, who travels where, which vendors you’re working with, what’s normal behavior for your finance team versus what’s not. And the secret sauce of Mate is the ability to learn that within hours, so that if someone logs in, if Mandi logs in from a new geolocation, Mate will know Mandi is traveling now. And this is kind of the “how did you know that” moment that I think is special. It’s the ability to bring the context of the specific organization to the table. And it’s not magic; this is really something that AI is really good at. Let’s take the data from all the sources and try to normalize it into something that is actually meaningful. And this is how Mate is fueled by your knowledge and context to deliver better investigation and response.
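
To make that concrete, here is a minimal Python sketch of the kind of context-aware login check described above. The baseline locations and travel calendar are hypothetical placeholders for data a product would learn from identity, HR, and calendar sources; this is an illustration, not Mate’s actual logic.

```python
from datetime import datetime

# Hypothetical organizational context, normally learned from identity-provider,
# HR, and calendar data rather than hardcoded like this.
KNOWN_LOCATIONS = {"mandi": {"US-NC", "US-GA"}}
TRAVEL_CALENDAR = {"mandi": [("IL", datetime(2026, 1, 5), datetime(2026, 1, 9))]}

def assess_login(user: str, country: str, when: datetime) -> str:
    """Classify a login using organizational context instead of a generic geo rule."""
    if country in KNOWN_LOCATIONS.get(user, set()):
        return "benign: matches the user's normal locations"
    for dest, start, end in TRAVEL_CALENDAR.get(user, []):
        if dest == country and start <= when <= end:
            return "benign: user has booked travel to this location"
    return "escalate: login from a location with no supporting context"

print(assess_login("mandi", "IL", datetime(2026, 1, 6)))  # travel explains it
print(assess_login("mandi", "KP", datetime(2026, 1, 6)))  # no context, escalate
```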

Mandi Walls (05:48): That’s super fascinating, right? That’s the part you want; you want the anomaly detection. I don’t necessarily need to be pinged every time somebody goes to the coffee shop, they do that every Thursday afternoon, but if their computer suddenly shows up somewhere very weird, you want to know about it. But then, yeah, that’s a lot of training on that data for sure. Super interesting. So how did you get into this part of it? What made you step into security in the first place?

Oren Saban (06:15): I think like many security folks, by mistake. I dealt with other areas, more towards defense tech, and later on even e-commerce and web development, and joining Microsoft I got the opportunity to join a security team. And for me, at first it was almost overwhelming. I’ve run a lot of red team blue team trainings with elite SOC teams around the world, with people that had been there years before me. It was super complex going into this world at first, but I really found something special with security. For me, it connected all the dots. What does that mean? It’s complex technology-wise, you’ve got to be at the edge of technology, which is awesome, and it’s complex from a product perspective. You need to solve technical, research-oriented problems alongside good user experience, which I think wasn’t the first priority in the past; nowadays organizations lean in for good user experience, and it’s important to understand how it changes the productivity of the team and the overall happiness. And the pace is really high with security; you’ve got a problem that is always growing. Unlike a lot of other areas, threats and attackers are not stepping away. And for me it connects also to impact, where you actually put your energy into something that is good for the world. I can help make the world a safer place, whether it’s helping hospitals, which we do today, or some other very critical environments. This is something that for me is very fulfilling, and I’m happy that most of my day is actually putting energy into something that I believe in.

Mandi Walls (08:01): That’s amazing. Yeah, absolutely. I feel like we really only hear about security stories when something has gone very, very wrong, and there’s so much that goes on just in the day-to-day. Yeah, some bad things do slip past, but there’s so much coming at folks all the time that some of it gets filtered out, and when one thing pokes through, you’re just like, it happens. And I feel like things are escalating. I see more even just basic spam getting through, and that stuff is tied somewhere to something horrific that’s going to happen. And then here in the US we’ve had some really horrible stories about ransomware at hospitals and things like that. Like you mentioned, it just never seems to end. So with the inclusion of AI solutions, you get more bandwidth, because we’re not making more people. It doesn’t feel like more folks are willing to step in and be security practitioners right now, so we need something to help folks cope with all of these things that are coming. It seems like AI is a good place to get that rolling for sure.

Oren Saban (09:17): A good, dangerous one. It’s really a question of how you are going to use it. How do you build trust with this new AI thing? We know there are lots of hallucinations, and there is really a question of, I don’t yet take a presentation that is AI-made and show it to my board, so how are you going to trust some solution to take actions in my environment or to close out stuff? And a lot of the approach, for me with AI in general, in working with AI, is transparency first. Every action needs to have a full audit trail with its reasoning. I think that’s something that Cursor is really good at, being able to solve that with user experience. There is a circle, right? AI is generating stuff and then the human is verifying it. Andrej Karpathy talks about it.

Oren Saban (10:13): How do we make the part of the human verifying shorter? And for me, making it shorter is a lot about building a great user experience that lets me see the red and green, makes it easy for me to see the changes, so I can lean in, say what I think about it, and get to the critical decision-making moment. I think of it exactly the same way with AI in security. How am I able to verify it very quickly without reading long pages of incident summaries? Because that’s probably something I don’t want, and even if I want to bring AI to my team, they didn’t sign up for reading long reports of AI-generated content. No one does. That’s not what we’re in security for. You want to solve complex problems, you want to actually hunt for threats. And I think it’s a lot about giving the team control over this autonomy, per se. It’s not just about pushing the automation forward, but how we do it deliberately, with control, so that we build trust in the actions it can take, and the governance part is easy for us, so that it grows with trust and control.

Mandi Walls (11:25): And how do you find folks, what is the process of them building more trust with the system? Do you find folks are leaning on human-in-the-loop more at the beginning, or are they just kind of using the AI as an advisor and doing their own research on top of that? How do folks evolve into that kind of trust space with the product?

Oren Saban (11:49): So I think it’s combined, and there is a process, like going from when you first used ChatGPT to where it is today, for example. It’s a new type of technology and you learn what it’s good at and what it’s not as good at, and then you know how to work with it. So yes, it’s about putting a human in the loop for critical decisions and putting guardrails in place. Yes, it’s about treating it as an assistant that is very smart. It’s great, Mate knows your environment, it will go and search all your policies and data lakes, figure things out, and bring you up some data. But then I want to see how it got to this decision, what its hypotheses were, and why it chose this one, so that I can work on top of that and actually put my thought process in. I haven’t seen yet a fully autonomous SOC working with AI.

Oren Saban (12:38): I don’t think that’s where we are at, at least not in 2025 or maybe early 2026. It’s about how we bring AI into more and more processes. For me, I look at it as what we call an instant promotion. As an analyst I get an instant promotion: now I manage a set of agents that can take a lot of the grunt work, like go bring the info about this evidence across multiple threat intelligence sources, or go see if this IP had any prevalence in the org, is this IP managed by us or not. All these little things are just questions: go bring them, bring it back. We use Perplexity to search over the internet, but then I want to be the coordinator, eventually saying, okay, this is actually bad. Machines don’t yet have this intuition that we have, this gut feeling of something smells bad here, I really don’t know yet what it is, but it’s not what I’ve seen before. We’ll get there, but I think this personally is going to take time.
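
As an illustration of the grunt work being delegated here, the following is a hedged Python sketch of fanning enrichment questions about an IP out to parallel tasks. The lookup_* helpers are hypothetical stand-ins for threat-intelligence, asset-inventory, and internet-search calls, not any real API.

```python
import concurrent.futures

# Hypothetical enrichment helpers; in practice these would call threat-intel
# feeds, the asset inventory, and an internet search tool.
def lookup_threat_intel(ip: str) -> str:
    return f"{ip}: no known-bad reputation in the feeds checked"

def lookup_org_prevalence(ip: str) -> str:
    return f"{ip}: seen on 3 hosts in the last 30 days"

def lookup_asset_ownership(ip: str) -> str:
    return f"{ip}: not in the managed IP ranges"

def enrich_indicator(ip: str) -> dict:
    """Fan the routine questions out in parallel and collect the answers."""
    tasks = {
        "threat_intel": lookup_threat_intel,
        "prevalence": lookup_org_prevalence,
        "ownership": lookup_asset_ownership,
    }
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, ip) for name, fn in tasks.items()}
        return {name: fut.result() for name, fut in futures.items()}

# The analyst stays the coordinator: they read the evidence and make the call.
print(enrich_indicator("203.0.113.7"))
```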

Mandi Walls (13:39): Yeah. Yeah. And that feels like it’s part of the future training: as it learns what you do and how you respond to what it brings you, it will start to learn more about the things that you care most about, or are most focused on, or have maybe the most impact on your environment and whatever your risk posture is for all of those kinds of things. There are so many reports that come through every day, and it takes time to go out and figure out, are we actually running this thing, in this version, that is the bad thing they told us about? Do we have to care about this or not? Having little AI agents go out and do a lot of that work for you seems like a massive way to save a whole lot of time.

Oren Saban (14:26): And specifically the grunt work, I would say. So there are lots of types of tasks that you have to do in security operations. A lot of them are repetitive, and those are definitely the ones I’m thinking agents should take. Some of it is a thinking process. The way I work today with Claude or Cursor is that sometimes we contemplate together: challenge me, this is my hypothesis, what do you think? Bring up five more. Can you validate each of those and bring evidence for each? And then we can estimate which one is the most probable, right? Doing security is a game of bets in many cases. There are some places where I’m really sure, and then there are a lot of places where I have to choose where to put my effort. Now there are 10 critical alerts, which one is the most interesting? And within that game of bets, adding more knowledge to the table, being able to unify it and bring some insights on top of it, is something that AI is really good at, and it really gives me kind of an Iron Man suit as an analyst that I didn’t have before.

Mandi Walls (15:32): We’ve got vibe coding, right? With folks who are not familiar, not super experienced with coding environments, actually putting code together. Do you see that in the security space as well? Folks who maybe don’t have the expertise in their organization, who are maybe looking for a little bit more of an aggressive advisor from those products.

Oren Saban (15:53): So definitely, and I think here it’s another place where you need to be very cautious as a SOC manager, for example, adopting AI for your team. It might be very easy to go through the phase of, for me as a junior analyst, kind of like what’s happening for a junior engineer, if the AI said it, I’m not even sure how I’m going to validate it, because I don’t have the knowledge. If the AI read this PowerShell command line and said this is bad or good, that is dangerous for me. And I think it’s the same thing in vibe coding: it’s our most senior engineers that are able to do amazing things because of the validation, and for the junior ones, someone who’s just starting in the field, you want to make sure that you’re also learning.

Oren Saban (16:49): You’re not just pushing the work away, because then when it hits the fan, you need to be able to go back and trace how the decision got made. Humans are still accountable in security operations; they’re still accountable. I don’t see a CISO saying to the board, the agent said it wasn’t malicious and that’s why we decided it’s okay, and the ransomware is because of the agent, go blame the agent. I don’t see that. The humans are still going to be accountable. And that’s why, as a SOC manager or a leader in security operations, you need to ask your people to be able to explain how this decision got made, what the reasoning is, do you really understand it. And for us as product builders, it’s a lot about how we guide, what we show, how we train your people to be better. Team upskilling is a crucial part of bringing AI in, of moving a SOC to be more AI-augmented, and of being able to teach your people and talk in a way that they can understand and learn from, so they’re better the next time. And I think it also makes for a happier team, one that is getting better and is happy with their jobs.

Mandi Walls (18:02): Yeah. Oh, absolutely. Especially if you’re taking and offloading the grunt work, the things that feel like a slog and eat so much of your decision-making capability out of the day, and really lead to that mental fatigue people get buried under when they’re inundated with mounds and mounds of information every day. It’s super fascinating, super interesting. So we mentioned earlier that we do see our large language models hallucinating, making things up that sound plausible; that’s why they show up in the output. How does that impact the security side? Are there protections that you can put in place to make sure that what you’re getting back is believable, or other guardrails for the systems there?

Oren Saban (18:51): Guardrails and agent governance are a crucial part. I think it’s kind of the same thing with cars, right? We couldn’t build fast cars if we didn’t have good brakes on board. And the same thing happens here. If you don’t have the capability, and for sure for an enterprise, to guardrail the agent from taking the wrong action, or not validating content, or being susceptible to prompt injections or other types of biases, you’re just not going to use it, at least not our customers.

Oren Saban (19:25): And the way we think about guardrails is in many different aspects along the way. So it’s guardrailing the content that is coming in; it’s hardcoded guardrails on what actions the agent can or can’t take, and if it can, how many of those and what the rate limit is; and where a human should always be involved. Maybe it’s a production environment or a high-value asset, or the blast radius is high. And if the potential impact is very low, maybe it’s an email, sure, the agent can take it once we get to a certain level of confidence. So there are a lot of different mechanisms of governance and guardrails for agents, and I don’t think it can work with one without the others. You can’t run fast with technology, at least not in security, where the potential impact is so high, unless these come together.
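
Here is a minimal Python sketch of the kind of action guardrail described above. The action allowlist, rate limit, and confidence thresholds are invented for illustration; a real policy would be richer and specific to the environment.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str            # e.g. "quarantine_email", "isolate_host"
    blast_radius: str    # "low", "medium", "high"
    confidence: float    # agent's confidence that the action is correct
    target_is_hva: bool  # touches a high-value asset or production?

ALLOWED_ACTIONS = {"quarantine_email", "disable_inbox_rule", "isolate_host"}
MAX_AUTO_ACTIONS_PER_HOUR = 5  # simple rate limit, illustrative only

def decide(action: ProposedAction, auto_actions_this_hour: int) -> str:
    """Return 'auto', 'ask_human', or 'block' for a proposed agent action."""
    if action.name not in ALLOWED_ACTIONS:
        return "block"                      # hardcoded allowlist of actions
    if auto_actions_this_hour >= MAX_AUTO_ACTIONS_PER_HOUR:
        return "ask_human"                  # rate limit exceeded
    if action.target_is_hva or action.blast_radius == "high":
        return "ask_human"                  # always involve a human here
    if action.blast_radius == "low" and action.confidence >= 0.9:
        return "auto"                       # e.g. quarantining a single email
    return "ask_human"

print(decide(ProposedAction("quarantine_email", "low", 0.95, False), 1))  # auto
print(decide(ProposedAction("isolate_host", "high", 0.99, True), 1))      # ask_human
```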

Mandi Walls (20:17): Excellent. So over the next five years or so, how do you see the role or the day-to-day job of, say, a SOC engineer or a CISO evolving in the face of some of these newer products and how things are changing? What do you think that job ends up looking like in a few years?

Oren Saban (20:41): I think like a lot of jobs: less grunt work, more strategy, more decision-making on the key critical aspects, and being able to work with and utilize AI much better. So it’s going to come from both directions. We as people, humans, employees, are going to get better at utilizing the AI tooling, and the AI is going to be easier to maneuver and manage. I think we’ll make progress on both parts. And I think the SOC will tilt more towards orchestration and investigation process design to some extent, and really how we build the architecture so it’s secure by design, and being able, again, to govern on top of the patterns that we see. Which questions should I ask? Even during an investigation I can think more about the process as I go; it gives me the capability to take my head out of the water. And the skills are going to change, from how do I ask this specific query in this specific language, to can I reason about this risk, how good am I at validating the AI’s suggestion and guiding it to be better the next time. So I think everybody’s going to level up.

Mandi Walls (22:05): Yeah, super interesting. I mentioned earlier, I feel like we’ve never had enough folks in the security practices anywhere I’ve ever been. It’s always seemed like they’re understaffed and overworked, and there are too many things that need to be focused on, and then we’re dealing with crazy stuff in the cloud and everything else. So I’m looking forward to them having some more time to do some thinking about the things that matter.

Oren Saban (22:31): I’m not really sure if they’ll have more time, because when I think about my role, even I’m currently understaffed; I have so much stuff I want to do that I haven’t gotten to yet. So today we’re in the minus from where we want to be. I think we’re going to get closer to equilibrium maybe, but there is still going to be a lot of stuff you will need to do. And as I said, the problem space will continue to grow.

Oren Saban (22:58): The type of work for sure is going to change, that’s for sure. Whether later on you’re going to have more time in security, I don’t know; the threats are not putting their work aside either, they’re using AI too. I’m not even talking about complex AI-based attacks, but even the simple stuff of creating more phishing that is more unique and harder to trace, or building many types of malware with obfuscation that is, again, harder to trace. So we’re shifting the way the world works, the security world works; I’m not sure yet about a complete change in the time we have.

Mandi Walls (23:37): Awesome. What do you see then on the other side of that, actually securing AI systems, and the things that folks are working on there?

Oren Saban (23:49): So security for AI, right?

Mandi Walls (23:53): Yeah,

Oren Saban (23:55): So obviously I haven’t forgotten about it, having been in this space from the beginning. There are multiple layers to securing AI. The first is the classic security around the AI stack itself, the infra, and this in many cases overlaps with regular security things we’ve done, from data security to application security to access control and securing, again, data pipelines. All the stuff we know from regular security applies here as well. What happened is that everybody ran to develop really fast and sometimes forgot some of the secure-by-design aspects of building products. After that there’s the safety and governance of the AI behavior. We spoke about guardrails; there’s auditing and the blast radius limits. What’s going to happen if I click the button? We need to be able to make sure that the model doesn’t go crazy.

Oren Saban (24:57): And I think one place where we saw it, even before AI, is what happened with CrowdStrike. That was a place where I think a lot of people on security teams felt like we gave too much permission, the kernel permission, and it broke us, and we don’t want to get to the same place with AI. I won’t be surprised if at some point something like that happens; it’s not going to be Mate, but people tend to be very permissive with AI, aside from the security people, and it might cause some events. We will see. And I think the third part of securing AI is everything about the new attack surfaces, new things, and some of them are modifications of old ones. We used to have SQL injection, now we have prompt injection, and the impact might be different; there is data poisoning and other aspects of really how you drift the AI.

Oren Saban (25:55): I’ve already seen command lines that are supposedly talking to the Copilot in the SOC, trying to maneuver its decision about a specific command line, or in phishing we see things aimed at Microsoft Copilot, trying to guide it, and we’re seeing some interesting things around this world. So again, it’s a lot about layering security on top of this. Eventually, LLMs are kind of like compute: you can utilize them, and you need to be able to secure the compute. And there’s the rule from Meta, which I really like. Have you heard about this for security?

Mandi Walls (26:35): I don’t think so. Yeah.

Oren Saban (26:37): Yeah. So it’s basically how you make sure that your agents are safe. It’s called the Rule of Two. It’s a security framework by Meta that says an agent within a single session must be limited to no more than two of the following. The following are basically: processing untrustworthy inputs, so things coming from external sources; access to sensitive systems or private data; and the ability to change state or communicate externally. So the core idea is that if an agent has all three simultaneously, a malicious prompt embedded in untrusted input could instruct the agent to access private data and exfiltrate it. And this kind of flow is a flow that we don’t want to have. I think there are going to be a lot of other aspects of how we actually manage agents and keep them secure as more and more capabilities come in, but this is going to be interesting for sure.
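
A hedged sketch of how a check like Meta’s Agents Rule of Two could be expressed in code. The capability names mirror the three properties listed above; the session model is hypothetical and much simpler than any real agent framework.

```python
# The three capabilities from Meta's Agents Rule of Two:
#   A: process untrustworthy inputs (external content)
#   B: access sensitive systems or private data
#   C: change state or communicate externally
UNTRUSTED_INPUT = "untrusted_input"
SENSITIVE_ACCESS = "sensitive_access"
EXTERNAL_EFFECTS = "external_effects"

def violates_rule_of_two(session_capabilities: set[str]) -> bool:
    """An agent session should hold at most two of the three risky capabilities."""
    risky = {UNTRUSTED_INPUT, SENSITIVE_ACCESS, EXTERNAL_EFFECTS}
    return len(session_capabilities & risky) > 2

# A session that reads external email, queries the data lake, and can send
# requests out of the network holds all three, which is exactly the exfiltration
# flow described above, so it should be split up or require human approval.
print(violates_rule_of_two({UNTRUSTED_INPUT, SENSITIVE_ACCESS, EXTERNAL_EFFECTS}))  # True
print(violates_rule_of_two({UNTRUSTED_INPUT, SENSITIVE_ACCESS}))                    # False
```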

Mandi Walls (27:43): Yeah. Yeah, I’ll find that link and we’ll put it in the show notes for folks. That sounds super interesting to kind of read through. I think a lot of folks are excited about agents and what they can do, but at the same time not really sure what they would trust an agent to do for them yet. So it would be very interesting to see how they emerge in the next year or so, what folks end up doing with them. Super interesting. Alright, we’re almost at the end of our time. So do you have any parting thoughts or anything else you’d like to mention for folks? Is there anything you’re excited about coming in the new year that you want to share with everybody?

Oren Saban (28:19): In general? Definitely an exciting time in tech. I’m super excited about the problem that we solve and the technology we’re bringing in. I think there is a lot coming from us, whether it’s deeper forensics, the ability to go deeper into an investigation, and really the whole management of investigations end to end. I think security people, as I said before, didn’t come into this field to read AI-generated summaries or click through the same tools 12 times for the hundredth time. They came to outsmart attackers, and when things actually happen, that’s when you get excited in security, right? It’s sad to say, but when there’s a real attack, then you have adrenaline. The goal of AI in security shouldn’t be to replace that; it should be to give analysts back the work they actually signed up for. And that’s what gets me excited about what’s coming in the next few months.

Mandi Walls (29:21): Awesome. That sounds great. That definitely translates to what we work on with our audience as well, as far as automation and all these other components: we want you to get back to the interesting things that you’re working on and the things that interest you, and give that grunt work to your agents, give it to your automation, so you don’t have to do that stuff anymore. Exactly. Oren, this has been great. I’ve learned a lot. This is not a part of the industry I know a lot about these days; my security knowledge is probably 25 years old, so I don’t click on things, but that’s about the extent of it. So this has been super interesting. We’ll make sure we link to Mate Security in the show notes for folks who want to find out more and learn more about what you guys do. So yeah, thank you so much for being on today. This has been great.

Oren Saban (30:10): Thank you for hosting me, that was really helpful. Absolutely.

Mandi Walls (30:13): Well, shout out to Sharone for the intro; she’s always hooking me up with great folks. So with that, we’ll wish everybody an uneventful day. We’ll be back in a couple of weeks with another episode. That does it for another installment of Page It to the Limit. We’d like to thank our sponsor, PagerDuty, for making this podcast possible. Remember to subscribe to this podcast if you like what you’ve heard. You can find our show notes at pageittothelimit.com, and you can reach us on Twitter at PageIt2theLimit, using the number two. Thank you so much for joining us, and remember, uneventful days are beautiful days.

Show Notes

Additional Resources

Guests

Oren Saban


“I love cybersecurity because it’s a win-win-win: fight the bad guys, build awesome products, and take technology to its edge. That never stopped being exciting.”

Oren combines deep security operations expertise with AI product development experience. Before Mate, he led product for Microsoft Defender XDR and Security Copilot, where his work helped thousands of security teams reduce mean-time-to-response. While there, he ran red-blue SOC simulations to map how security teams actually work, translating those insights into product decisions that now help over 10,000 organizations. He later served as Director of Product at Apex AI Security, leading an AI security investigation platform from concept to enterprise deployments and Gartner recognition. His experience spans both large-scale security platforms and AI-first security products. Oren also heads PM101, Israel’s flagship product leadership course, where he teaches how to build practical, explainable AI products that scale in the real world. His teaching experience helps him communicate complex AI concepts to both technical and business audiences.

Hosts

Mandi Walls

Mandi Walls (she/her)

Mandi Walls is a DevOps Advocate at PagerDuty. For PagerDuty, she helps organizations along their IT Modernization journey. Prior to PagerDuty, she worked at Chef Software and AOL. She is an international speaker on DevOps topics and the author of the whitepaper “Building A DevOps Culture”, published by O’Reilly.