Industry warned AI is widening gap between testing and development

AI-assisted code development is increasingly causing friction between security testing and development teams, according to a new report shared with QA Financial.

In fact, “large amounts of noisy, unclear test results” increasingly slow security teams’ ability to prioritise issues and impede developers’ remediation workflows, a researcher at a US software testing firm claims.

A more integrated, automated DevSecOps strategy is therefore needed to secure faster, AI-enabled pipelines, argues Steven Zimmerman of Black Duck Software, based in Burlington, Massachusetts.

The findings and recommendations form the basis of Black Duck’s latest annual “Global State of DevSecOps” report, which provides an overview of the current state of application security testing.

For the report, Black Duck surveyed around 1,000 software developers, application security professionals, chief information security officers, and DevOps engineers. The company, formerly known as the Synopsys Software Integrity Group, was acquired by Clearlake Capital and Francisco Partners for close to $2.1 billion in October 2024.

Zimmerman and his team found that more than nine out of ten respondents affirmed that they are using AI assistance in their code development.


“AI-generated code can create ambiguity about IP ownership and licensing, especially when the AI model uses datasets that might include open source or other third-party code without attribution,” he wrote.
“AI-assisted coding tools also have the potential to introduce security vulnerabilities into codebases,” Zimmerman added.

Many companies are not yet addressing these risks, however. While 85% of respondents said they have some measures in place to address the potential issues with AI-generated code, only a quarter were “very confident” in those measures.

Another 41% were moderately confident, but 26% were either “slightly” or “not at all” confident.

“When one out of four respondents report lacking confidence in their own software security measures, there is a serious problem,” Zimmerman stated.

Compounding that problem, one in five admit that “their development teams are bypassing corporate policies and using unsanctioned—and, one would assume, unsupervised—AI tools.”

AI-powered tools

Zimmerman continued by saying that, currently, most organisations are navigating how to permit and use AI-assisted development tools.

“Developers are employing AI either to develop code from inception or to complete code they have already begun,” he said.

Around a quarter of organisations allow developers to use AI-based tools to write code and modify projects, while just over 40% restrict the use of AI solutions to specific developers or teams.

Just 5% report that they have not yet embraced AI-assisted development and are certain that their developers are not using AI development tools, Zimmerman pointed out.

However, he added that a fifth of organisations are aware that at least some developers are using AI tools in development despite organisational prohibitions.


“Using AI-based development tools without authorisation is a major organisational risk.”

– Steven Zimmerman

If security teams have no visibility into AI tools, or the code that comes from them, it will be exceedingly challenging to adjust DevSecOps programmes to maintain adequate levels of security while augmenting productivity, Zimmerman explained.

“What this statistic really tells us is that AI development tools are already in use even if organisations think they are not, and an organisation needs to make a plan to implement them securely or risk losing visibility into the wider software security risk landscape,” he argued.

Evaluation challenges

The researchers further found that 24% of organisations expressed high confidence in the automated mechanisms they have put in place to assess AI-generated or AI-completed code, while 41% reported moderate confidence in their capacity to test this code automatically.

This leaves a fifth of companies that are only slightly confident, 6% that are not at all confident in the preparedness of their organisation, and 5% that lack sufficient visibility or for which this is not a current priority.

“While some organizations may be able to manage AI-generated code issues with their current AppSec infrastructure, others may need to allocate additional security resources, consolidate testing tools, integrate automated testing mechanisms, and unify policies across projects and teams,” Zimmerman noted.

“These measures will allow them to install safety nets and security gates to enable their organizations to adjust to the changes in their pipelines as rapidly as AI will propel them,” he stressed.
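To illustrate what such an automated security gate might look like in practice, below is a minimal sketch in Python. The findings-report format and severity labels here are hypothetical stand-ins; real SAST and SCA scanners each emit their own schemas, so this is an outline of the pattern rather than any specific tool’s interface.

```python
#!/usr/bin/env python3
"""Minimal CI security-gate sketch: fail the pipeline when a scan
report contains findings at or above a severity threshold.

Assumes a simplified, hypothetical report format:
    {"findings": [{"id": "...", "severity": "critical", ...}, ...]}
"""
import json
import sys

# Rank severities so findings can be compared against a configurable threshold.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_path: str, threshold: str = "high") -> int:
    with open(report_path) as f:
        report = json.load(f)

    # Collect every finding at or above the blocking threshold.
    blocking = [
        finding for finding in report.get("findings", [])
        if SEVERITY_RANK.get(finding.get("severity", "").lower(), 0)
        >= SEVERITY_RANK[threshold]
    ]
    for finding in blocking:
        print(f"BLOCKED: {finding.get('id', '?')} ({finding['severity']})")

    # A non-zero exit code fails the CI stage, stopping the merge or deploy.
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1], threshold="high"))
```

Wired into a pipeline stage, the non-zero exit code blocks the change until the flagged findings are triaged, which is the “security gate” behaviour the report describes.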

According to Zimmerman, it is also important to note that, in addition to application security issues, AI-assisted development may introduce software license compliance issues, potentially jeopardising intellectual property by incorporating third-party code carrying reciprocal licenses.

Self-aware or self-sabotaging?

One key takeaway from the findings, according to Zimmerman, is that organisations embracing AI-enabled development are approaching this challenge with varying levels of caution.

“The key factor is the level of confidence each organisation has in its own security protocols,” he said.
“Some companies are proceeding with cautious confidence, while others appear to be taking serious risks with their development security.”

Zimmerman called it “no surprise” that, of the 27% of organisations that allow free AI use across their company, 81% report either high or moderate confidence in their ability to secure it.

“These organisations are ready to go and they’re confident that they have the controls in place to mitigate risk,” he said.

However, “it is a bit of a surprise,” as Zimmerman put it, that the 43% of respondents taking a more phased approach to AI-enabled development also reported moderate confidence in their ability to secure AI-generated code, even while allowing only select development teams to use it in their work.


“The risk for an organisation is when AI-generated code and risk mitigation controls are not a priority.”

– Steven Zimmerman

Meanwhile, a fifth of surveyed organisations report lower overall confidence in their ability to secure AI-generated code, while recognising that development teams are establishing unauthorised secondary AI workflows that circumvent security.

A further 5% of respondents disallow the use of AI in development and are sure their developers are not using it.

“I can only speculate whether this confidence about managing AI risk stems from this disallowance, or because they’re getting controls in place before they open the gates,” Zimmerman said.

However, each of these cohorts also includes respondents that admit to being only slightly, or not at all, confident in their ability to secure AI tools and their output within the context of their development pipelines.

“The least-concerning subset of this group are those that do not permit AI use at all, either because of a lack of confidence in preparation or because its use is not a priority for them,” he continued.

“The risk for an organisation in this group is when AI-generated code and risk mitigation controls are not a priority despite knowing that it’s already being used in development,” he shared.

“While this may feel like controlled use, it is still critical to evaluate risk visibility and establish automated security gates.”

However, the group most at risk, according to Zimmerman, are those respondents that reported allowing AI use during development despite also reporting a clear lack of confidence in their preparations to mitigate risks.

“While the risks posed by AI development are similar to those posed by traditional application development, such as weak source code or vulnerable open source, they manifest at an even faster velocity,” he explained.

Zimmerman did stress that most organisations have learned to accommodate open source dependencies in their development pipelines and have built systems to discover vulnerabilities as they are published, so they can patch and update in a timely manner.
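As a concrete illustration of that kind of system, the sketch below checks a single pinned dependency against the public OSV.dev vulnerability database. The package name and version are illustrative examples, not drawn from the report, and a production pipeline would iterate over a full lockfile rather than one package.

```python
"""Sketch: check a pinned dependency against a public vulnerability
feed (here the OSV.dev API) so known issues can be patched promptly."""
import json
import urllib.request

def query_osv(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return known vulnerabilities for one pinned package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # OSV returns {"vulns": [...]} when matches exist, {} otherwise.
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Illustrative example: an old version of a popular library.
    for vuln in query_osv("jinja2", "2.4.1"):
        print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```

Run on a schedule or on every build, a check like this surfaces newly published vulnerabilities in existing dependencies, which is what enables the timely patching Zimmerman describes.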

“Most companies test proprietary source code regularly to detect weaknesses and insecure configurations,” he said.

“To incorporate the needs of AI-assisted development into these processes, security and development teams must cooperate by using a DevSecOps toolkit that satisfies each group’s needs for efficiency and reliability,” Zimmerman concluded.

