Cybersecurity is in the middle of a new arms race, and the weapon of choice in this era is AI.
AI is a classic double-edged sword: a powerful shield for defenders and a potent new tool for those with malicious intent. Navigating this complex battleground requires a steady hand and a deep understanding of both the technology and the people who would misuse it.
To get a view from the front lines, AI News caught up with Rachel James, Principal AI ML Threat Intelligence Engineer at global biopharmaceutical company AbbVie.
“In addition to the built-in AI augmentation that has been vendor-supplied in our current tools, we also use LLM analysis on our detections, observations, correlations and associated rules,” James explains.
James and her team use large language models to sift through a mountain of security signals, looking for patterns, spotting duplicates, and finding dangerous gaps in their defenses before an attacker can.
“We use this to determine similarity, duplication, and provide gap analysis,” she says, noting that the next step is to weave in even more external threat data. “We are looking to enhance this with the integration of threat intelligence in our next phase.”
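To make the idea concrete, here is a minimal sketch of how similarity and duplication checks across detection rules can work, using sentence embeddings and cosine similarity. This is not AbbVie's implementation; the model name, the sample rules, and the threshold are illustrative assumptions.

```python
# Minimal sketch (illustrative, not AbbVie's pipeline): flag near-duplicate
# detection rules by embedding their descriptions and comparing cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Hypothetical rule descriptions standing in for real detections/correlations.
rules = [
    "Alert on PowerShell spawning encoded commands from Office processes",
    "Detect encoded PowerShell launched by winword.exe or excel.exe",
    "Flag outbound DNS queries to newly registered domains",
]

model = SentenceTransformer("all-MiniLM-L6-v2")    # small sentence-embedding model (assumed choice)
embeddings = model.encode(rules, convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)      # pairwise cosine similarity matrix

THRESHOLD = 0.75  # purely illustrative; would be tuned to the rule corpus
for i in range(len(rules)):
    for j in range(i + 1, len(rules)):
        score = scores[i][j].item()
        if score >= THRESHOLD:
            print(f"Possible duplicate: rule {i} <-> rule {j} (similarity {score:.2f})")
```

The same pairwise scores can be inverted for gap analysis: rules (or incoming threat reports) with no close neighbour in the existing corpus point to coverage that may be missing.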
Central to this operation is a specialized threat intelligence platform called OpenCTI, which helps them build a unified picture of threats from a sea of digital noise.
AI is the engine that makes this cybersecurity effort possible, taking vast quantities of messy, unstructured text and organizing it into a standardized format known as STIX. The grand vision, James says, is to use language models to connect this core intelligence with all the other areas of their security operation, from vulnerability management to third-party risk.
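For readers unfamiliar with STIX, the snippet below is a minimal sketch of what that structuring step produces: an indicator pulled out of free-text reporting expressed as a STIX 2.1 object with the OASIS stix2 Python library, the kind of JSON a platform such as OpenCTI can ingest. The indicator name, description, and IP address are made-up examples.

```python
# Minimal sketch: represent an extracted indicator as a STIX 2.1 object.
# Values are examples only; 203.0.113.5 is a documentation-range address.
import stix2

indicator = stix2.Indicator(
    name="Suspicious C2 address from vendor report",          # hypothetical name
    description="Extracted from unstructured threat reporting",
    pattern="[ipv4-addr:value = '203.0.113.5']",
    pattern_type="stix",
    valid_from="2024-01-01T00:00:00Z",
)

# Serialized JSON that a threat intelligence platform like OpenCTI can consume.
print(indicator.serialize(pretty=True))
```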
Harnessing this power, however, comes with a healthy dose of caution. As a core contributor to a major industry initiative, James is acutely aware of the pitfalls.
“I would be remiss if I didn’t point out the work of a tremendous group of folks I am part of – the ‘OWASP Top 10 for GenAI’ as a foundational way of understanding the risks that GenAI can introduce,” she says.
Beyond specific vulnerabilities, James points to three fundamental trade-offs that business leaders need to confront:
Accepting the risk that comes with the creative but often unpredictable nature of generative AI.
The lack of transparency in how AI reaches its conclusions, a problem that only grows as the models become more complex.
The risk of misjudging the real return on investment for any AI project, where hype can easily lead to overestimating the benefits or underestimating the effort required in such a fast-moving field.
To build a better cybersecurity posture in the AI era, you have to understand your attacker. This is where James’ deep expertise comes into play.
“This is definitely my specific area of expertise – I have a cyber threat intelligence background and have conducted and documented extensive research into threat actors’ interest, use, and development of AI,” she notes.
James actively tracks adversary chatter and tool development through open-source channels and her own automated collection from the dark web, sharing her findings on her cybershujin GitHub. Her work also involves getting her own hands dirty.
“As the lead for the Prompt Injection entry for OWASP, and co-author of the Guide to Red Teaming GenAI, I also spend time developing adversarial input techniques myself and maintain a network of professionals in this field,” James adds.
So, what does this all mean for the future of the industry? For James, the path ahead is clear. She points to a striking parallel she noticed years ago: “The cyber threat intelligence lifecycle is nearly identical to the data science lifecycle foundational to AI/ML systems.”
This alignment is a huge opportunity. “Without a doubt, in terms of the datasets we can work with, defenders have a unique chance to capitalize on the power of intelligence data sharing and AI,” she asserts.
Her final message offers both encouragement and a warning for her peers in the cybersecurity world: “Data science and AI will be a part of every cybersecurity professional’s life moving forward, embrace it.”