
During the Black Hat 2024 conference, CyberRisk Alliance TV sat down with Shimon Modi, Dataminr VP of Product Management, Cyber, to discuss how publicly available data can be used to create a more resilient cyber structure for organizations—and the critical role both AI and Dataminr play in doing so.

For the full interview, listen to the podcast below or read the transcript that follows, which has been edited for clarity and length.


Has AI come to the rescue of cybersecurity?

Absolutely not. We haven’t gotten to that point yet. I don’t think we’ll get to that point anytime soon. When we talk about AI to the rescue, there are two sides to the coin. AI is a disruptive technology. It can be used just as effectively by the adversaries as it can be by the defenders. 

And when we talk about AI to the rescue, it’s really about thinking through it carefully: what can AI do to lessen the burden today’s security professionals are under so that they can stay on top of everything coming at their organization and at their attack surface?

So when we really think about AI to the rescue, it’s from the perspective of how we help human analysts get through complex analytical processes with an ever-increasing volume of information to process. They just cannot do that on their own. It’s not humanly possible.

Does Dataminr care about the humans who are doing the work?

We absolutely do. And we also recognize that humans will always be part of the solution. We don’t think of AI as a silver bullet. AI is not going to replace humans. Artificial intelligence is supposed to be—in my opinion and as we see the space evolve—a way for humans to actually do higher order analytical work. 

I’ve been in the cyber world for 25 years, and the last 15 of them have been spent building products that focus on human analysts who are doing a lot of drudge work—all the way from threat intelligence to threat detection to investigations and remediation.

My passion point has always been around how we help these humans do the higher order analytical work so that machines can do the things that are repeatable, or that humans are just not capable of doing at volume and scale.

Is there anything in particular that you’re excited about regarding security teams and the effective use of AI?

Absolutely. And to take a step back, Dataminr’s mission is to help security teams take faster actions against emerging risks and threats using publicly available information. Because we recognize there are a lot of signals available beyond an enterprise’s four walls. There’s just a plethora of information out there. 

For a human analyst to be able to harness that so that they can take faster action is important. Time matters. Seconds matter, both from a physical standpoint as well as a cyber standpoint. And so being able to detect these events that are happening in the public domain, recognize their relevance and deliver them to the appropriate organization with the appropriate context so that the analyst can take the appropriate mitigative action is core to Dataminr’s mission.

Can Dataminr take trillions of data points and apply them, not only to things like natural disasters, but to what threat actors are doing?

Yes. When we think about public sources, there’s a spectrum, all the way from human-generated information that you can find on social media, Telegram channels and the deep and dark web—all publicly available. You may need special infrastructure to access it, but if you have it, you can actually get to it.

On the other side of the spectrum is machine-generated data. You have internet scanners that are generating large volumes of information, all machine-generated and machine-readable. You also have sensor networks and code repositories. Again, publicly available and machine-readable. So when we think about public sources, it spans an entire spectrum: human-generated to machine-generated, surface web to deep and dark web, and everything in between.

For us [Dataminr], AI has a very important role to play; it enables us to extract the right signals from all this information. We generally think about AI playing a role in this across three main layers. At its very core, it’s all about how you detect the signal and classify it. Is this a physical risk or is this a cyber risk? Is this a fire or is this a ransomware attack? Leaked credentials? 

Being able to do that at scale, across millions of data points and trillions of computations, is where a lot of the power of what we do from an AI perspective comes in. On top of that, as I was describing, there’s the human-generated information that our AI platform consumes. Take Telegram channels: it’s a discussion between threat actors. They are either doing a transaction or trying to figure out what kind of campaign they want to launch. It’s a blurb, right?

Now for an analyst to read through it and try to extract the relevant information, that is analytical work that should be automated. That’s where we use generative AI to:

  • Create the lead 
  • Figure out what is relevant and who the threat actors are
  • Determine how this matters to you as an organization
  • Create a human-readable caption so that organizations can get the right context at the right time
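
To make those steps a little more concrete, here is a minimal sketch of the general pattern rather than Dataminr’s actual pipeline: a crude keyword vote stands in for the signal classifier, and a single generative-model call extracts the threat actors, the affected organization and an analyst-ready caption. The client library, model name, prompt and RiskAlert fields are all illustrative assumptions.

# Minimal sketch: classify a public post, then use a generative model to turn it
# into a structured, analyst-readable alert. Names and fields are illustrative.
import json
from dataclasses import dataclass, asdict

from openai import OpenAI  # assumed LLM client; any comparable API would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CYBER_KEYWORDS = {"ransomware", "credentials", "leak", "exploit", "phishing"}
PHYSICAL_KEYWORDS = {"fire", "flood", "earthquake", "explosion", "protest"}

@dataclass
class RiskAlert:
    risk_type: str      # "cyber", "physical" or "unknown"
    threat_actors: list
    affected_org: str
    caption: str        # one-line human-readable summary for the analyst

def classify(post: str) -> str:
    """Crude stand-in for a learned classifier: a simple keyword vote."""
    words = set(post.lower().split())
    cyber = len(words & CYBER_KEYWORDS)
    physical = len(words & PHYSICAL_KEYWORDS)
    if cyber == physical == 0:
        return "unknown"
    return "cyber" if cyber >= physical else "physical"

def to_alert(post: str) -> RiskAlert:
    """Ask a generative model to extract actors, victim and a caption."""
    prompt = (
        "Return JSON with keys threat_actors (list of strings), affected_org "
        "(string) and caption (one sentence for a security analyst) "
        "describing this post:\n" + post
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    # Assumes the model returns exactly the three requested keys.
    fields = json.loads(resp.choices[0].message.content)
    return RiskAlert(risk_type=classify(post), **fields)

if __name__ == "__main__":
    post = ("LockBit claims breach of Acme Corp; sample data and admin "
            "credentials posted, full leak in 24 hours unless paid.")
    print(json.dumps(asdict(to_alert(post)), indent=2))

The point is the shape of the workflow: cheap classification up front, generative extraction only on the posts that matter, and a structured alert the analyst can act on.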

Something that we are very, very excited about now, with some of the AI innovations that have come out in the last couple of years, is how you stitch this evolving fabric of signals together into a live brief as an event unfolds. Because again, that’s what analysts want.

Watch Video: Dataminr ReGenAI: Live, Dynamically Updated Event Briefs

Whenever there is some kind of an event, guess what happens? Ten tabs open up and you start tracking the event. What is its impact? With every new piece of information that arrives as time passes, you need to know the updated context behind it.

And so again, AI has a role to play in bringing this context to the analysts so that they can then either modify, update or double down on whatever risk mitigation strategy they’re following.
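
As a rough illustration of that “live brief” idea, and a simplified sketch rather than how ReGenAI is actually built, the general pattern is to keep an event’s accumulated signals as state and regenerate the brief every time new information arrives. The class and field names below are hypothetical, and a plain template stands in for the generative summarization step.

# Minimal sketch of a "live brief": an event accumulates signals over time and
# the brief is regenerated on every update. In practice the rendering step
# would be handled by a generative model; a template is used here.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LiveBrief:
    title: str
    signals: list = field(default_factory=list)

    def add_signal(self, source: str, text: str) -> str:
        """Record a new signal and return the refreshed brief."""
        self.signals.append({
            "time": datetime.now(timezone.utc).isoformat(timespec="seconds"),
            "source": source,
            "text": text,
        })
        return self.render()

    def render(self) -> str:
        lines = [f"{self.title} ({len(self.signals)} signals)"]
        # Newest information first, so the analyst sees the latest context.
        for s in reversed(self.signals):
            lines.append(f"  [{s['time']}] {s['source']}: {s['text']}")
        return "\n".join(lines)

brief = LiveBrief("Ransomware attack on Acme Corp (hypothetical)")
print(brief.add_signal("leak site", "Victim listed with a 24-hour deadline"))
print(brief.add_signal("Telegram", "Actors discussing sale of stolen credentials"))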

How do formats like text, image and video give actionable information to security teams?

If we take a step back and think about all of those signals in an atomic sense, it becomes a fragmented reality. What we have done at Dataminr is to really think about it from a multimodal fusion perspective. To make it real with an example: ransomware attacks just keep increasing, right? They’re all financially motivated.

So guess what they do? Break in. It’s a smash and grab. Get in, get the data, put it up on their ransomware leak site, start their clock and say, “Within the next 24 hours, if you don’t pay up, all this data’s going to go into the public domain.”


Dataminr in Action: CDK Ransomware

See how Dataminr helped customers stay ahead of the ransomware attack on software vendor CDK Global.

Watch Video

Well, when they first go do that, guess what happens? They put up a screenshot of some data as evidence that they actually have it. Then they may mention the name of the company, though not quite the exact name, or they may even use the company logo. So now you have an image as evidence of the data, you have the logo of the victim company, and the post may be in a language that is not English.

You then have this tapestry of signals: an image, a logo that computer vision can recognize, text in a different language. Bringing it all together so that you can deliver one unified risk event to the analyst is a game changer.

This is where we see the multimodal aspect of this really come together. Now, from an audio and a video perspective, as the threat landscape evolves, I’m sure there’ll be signals available for cyber from those. But today we see a lot of the implications of audio and video really driving physical risk detection.
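
To sketch what that kind of fusion might look like in the abstract, the toy example below correlates signals from different modalities by the organization they point at and merges them into one unified risk event. It assumes the upstream extraction, such as computer vision for the logo, OCR for the screenshot and translation of the text, has already produced structured fields; the names and the confidence heuristic are illustrative, not Dataminr’s implementation.

# Toy multimodal fusion: group signals that reference the same organization
# into a single unified risk event. Upstream extraction is assumed done.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Signal:
    modality: str        # "image", "logo", "text", ...
    organization: str    # who the signal appears to be about
    detail: str

def fuse(signals: list[Signal]) -> list[dict]:
    """Correlate signals by organization and emit one event per victim."""
    by_org: dict[str, list[Signal]] = defaultdict(list)
    for s in signals:
        by_org[s.organization.lower()].append(s)
    events = []
    for org, group in by_org.items():
        modalities = sorted({s.modality for s in group})
        events.append({
            "organization": org,
            "modalities": modalities,
            "evidence": [f"{s.modality}: {s.detail}" for s in group],
            # Toy heuristic: more independent modalities, more confidence.
            "confidence": min(1.0, 0.4 + 0.2 * len(modalities)),
        })
    return events

signals = [
    Signal("image", "Acme Corp", "screenshot of customer database rows"),
    Signal("logo", "Acme Corp", "company logo detected on the leak-site post"),
    Signal("text", "Acme Corp", "post (translated from Russian): pay within 24 hours"),
]
for event in fuse(signals):
    print(event)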

What trends do you think organizations should zero in on over the next 12 to 24 months?

We said earlier that AI is a threat, so let’s move on to what is really happening out there. Given the Dataminr solution that we have, we have a unique vantage point on all the activity that we see in the public domain. We look at ransomware attacks, leaked credentials, domain impersonations and phishing attempts. We see fraud and scams. But each of these on its own is just one type of threat activity.

There are two big trends that we are starting to see as it relates to the cyber threat landscape—and what we see CISOs (chief information security officers) and our customers really, really starting to gravitate towards.

No. 1: The cyber criminal ecosystem

This ecosystem is really coming to the forefront. When you think about a ransomware attack, it’s financially motivated; the attackers want something in return that is of financial value. So you see data breaches happen. To actually pull off a data breach, though, you need to be able to get credentials. This is why initial access brokers are becoming more and more popular.

Well, how do initial access brokers get someone to leak credentials? They set up phishing campaigns, they do domain impersonation, they create fake login screens. And so there’s this attack chain all the way through. But that is an ecosystem. And thinking about it as an ecosystem versus individual attack patterns is something that we are starting to see CISOs really, really gravitate towards.

No. 2: Third-party attacks 

The second thing that we are seeing is not just the attacks, but who the attacks are targeting. Increasingly, cyber attacks—especially ransomware attacks or leaked-credential attacks, which eventually do lead to ransomware and data breaches—are targeting an organization’s third parties rather than the organization itself.

The playbook was written with what happened to Target 10 years ago. What we are seeing out there is that threat actors have essentially updated that playbook. They’re going after third parties with these tactics because if they get leaked credentials from a third party, they can then try to get access to the organization that they actually care about.

Increasingly, we are seeing this. In fact, if you look at Gartner’s big report, third-party risk is among the top three concerns for CISOs. If you look at this year’s Verizon Data Breach Investigations Report (DBIR), they actually had a special section on data breaches that targeted third parties. So it is clearly becoming a top-of-mind concern.

Those are the two big trends: financially motivated attacks, data breaches and leaked credentials, but extremely targeted at third parties.

If you think about the GDP per capita of that [cyber crime], it really changes the dynamic. It is becoming increasingly rampant. And part of the reason why we see cyber criminals being so successful is that third parties are hard to monitor. It is hard enough to monitor your own organization.

Now, can you imagine trying to extend your own capabilities to a third party? Yet they are part of your attack surface. The only thing that organizations can really do right now is rely on vendor questionnaires or some kind of security ratings, which only happen every so often.

How can the cyber industry help keep pace given there is still a practitioner shortage?

It’s a very complex question with a complex answer. It’s like a PhD thesis; it takes me back to my academia days. It’s a multi-layered answer with many different perspectives, but I believe there are two things that everyone should be thinking about.

No. 1: Hygiene basics

One is hygiene basics. Make sure you do the basics right. Make sure you have effective user training. Users are part of the problem, but they’re also part of the solution. I don’t know if you saw the latest deepfake attack that targeted a Ferrari executive. It was reported last week that one of the top-ranking execs at Ferrari got a call from someone pretending to be the CEO.

At first, he almost fell for it, but then he thought something was fishy, so he asked the person behind the deepfake, “What is the book that you recently asked me to read?”

Simple, but those are the kind of things that we can start thinking about, which are just basic in nature, but bring the user into the solution. 

No. 2: The technology component 

But then on the flip side, if you start thinking about the technology component of it, AI definitely has a role to play in a very thoughtful way. How do we bring AI to bear upon the massive data challenge that we have, as well as the complex human and analytical challenge that we have? How do we relieve the drudgery? How do we relieve analysts of some of the work they have to do today, which should be automated or augmented by AI?

And that’s where Dataminr really, really focuses on solving that problem from a public data standpoint. There’s just too much data out there. There are relevant signals that are buried in the public data. How do we use AI in a way that delivers the right signals to the right people at the right time?



Dataminr Pulse for Cyber Risk

Learn how Dataminr Pulse for Cyber Risk helps organizations like yours stay ahead of digital risk, third-party risk, vulnerability intelligence and cyber-physical risk.

Learn More