Parasoft steps up efforts in race to embrace agentic AI tools for QA

Igor Kirilenko, the company’s chief product officer since 2021

California-based software testing solution provider Parasoft has rolled out a set of new features aimed squarely at software quality assurance teams, including what it claims is the first agentic AI-powered assistant for service virtualization.

The update is designed to simplify and accelerate the testing process for both traditional and AI-infused applications, particularly in environments using large language models and emerging agent-based architectures.

The core update centers on the company’s Virtualize platform, which now includes a chat-based AI assistant that allows testers to generate virtual services by simply describing their needs in natural language.

This removes the need for deep domain knowledge or custom development skills, potentially reducing setup time from hours to minutes. QA teams can now simulate complex environments without manually writing service mocks or code stubs, freeing up time for test coverage and analysis.
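To ground what is being replaced: a "virtual service" stands in for a real downstream dependency during testing. The sketch below is not Parasoft's implementation — it is a minimal hand-written stub of the kind QA teams would otherwise code themselves, faking a hypothetical payments API with a canned JSON response.

```python
# Minimal illustrative stub (not Parasoft's implementation): a fake
# downstream "payments" service returning a canned JSON response.
# The endpoint and payload are invented for illustration.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaymentStub(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "AUTHORIZED", "amount": 42.00}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), PaymentStub)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A test hitting the virtual service instead of the real dependency.
url = f"http://127.0.0.1:{server.server_port}/payments/123"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data["status"])  # AUTHORIZED
```

Writing and maintaining stubs like this for every dependency is the manual overhead that a natural-language assistant is intended to eliminate.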

The agentic AI assistant complements existing Parasoft offerings like SOAtest and CTP, which are increasingly being adapted for AI-intensive workflows.

Igor Kirilenko, Parasoft’s chief product officer, said the move reflects an effort to give QA teams more accessible tools to test earlier in the development cycle, particularly in distributed systems where test environments are difficult to replicate.


“Traditional methods of validating and extracting data are ineffective in dealing with responses that vary randomly with AI-infused applications.”

– Igor Kirilenko

Another significant feature addresses a growing challenge in AI testing: validating the unpredictable output of LLMs. Traditional data extraction and verification techniques, which often depend on fixed patterns or schemas, struggle with the variable responses generated by generative models.

Parasoft’s new natural language validation tools allow testers to extract and check dynamic outputs without hard-coded logic. The company says this approach simplifies the process for both technical and non-technical team members working on AI-integrated applications.
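The underlying problem can be shown in a few lines. The sketch below is illustrative, not Parasoft's tooling: an exact-match assertion passes on one phrasing of an LLM response and fails on a semantically identical one, while an intent-level check that extracts the facts under test tolerates the rephrasing. The responses are invented examples.

```python
# Illustrative sketch (not Parasoft's tooling): exact-match assertions
# break on variable LLM output; checking extracted facts does not.
# The two responses below are invented, semantically equivalent examples.
import re

responses = [
    "Your order #1042 has shipped and should arrive Friday.",
    "Order 1042 is on its way, expected delivery: Friday.",
]

expected_exact = "Your order #1042 has shipped and should arrive Friday."

# Fixed-pattern check: passes on one phrasing, fails on the other.
exact_results = [r == expected_exact for r in responses]

# Intent-level check: extract the facts we care about, ignore phrasing.
def validate(response: str) -> bool:
    has_order = re.search(r"\b1042\b", response) is not None
    has_shipping = any(w in response.lower() for w in ("shipped", "on its way"))
    has_eta = "friday" in response.lower()
    return has_order and has_shipping and has_eta

loose_results = [validate(r) for r in responses]
print(exact_results)  # [True, False]
print(loose_results)  # [True, True]
```

Parasoft's pitch is that testers can express checks like `validate` in plain language rather than hand-coding the extraction logic.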

The release also adds support for the Model Context Protocol, an emerging open standard that governs how GenAI agents interact with software tools in enterprise environments. Parasoft’s tools can now simulate and test MCP-based interactions, giving QA teams the ability to validate agent workflows and integration points using a codeless interface.
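MCP messages follow JSON-RPC 2.0, so a simulated MCP interaction is concretely testable. The sketch below is not Parasoft's API: it shapes an agent's hypothetical `tools/call` request (the tool name and arguments are invented), runs structural checks a QA harness might apply, and returns the kind of canned result a virtualized MCP server would answer with.

```python
# Rough sketch of an MCP-style interaction a QA team might simulate and
# validate. MCP messages are JSON-RPC 2.0; the tool name and arguments
# are invented for illustration, and this is not Parasoft's API.

# An agent's tool-call request, as it might appear on the wire.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "lookup_account", "arguments": {"account_id": "A-19"}},
}

def check_request(msg: dict) -> list[str]:
    """Return a list of validation problems (empty means the message is OK)."""
    problems = []
    if msg.get("jsonrpc") != "2.0":
        problems.append("missing/invalid jsonrpc version")
    if "id" not in msg:
        problems.append("missing id")
    if msg.get("method") != "tools/call":
        problems.append("unexpected method")
    params = msg.get("params", {})
    if not isinstance(params.get("name"), str):
        problems.append("tool name must be a string")
    if not isinstance(params.get("arguments"), dict):
        problems.append("arguments must be an object")
    return problems

# A virtualized MCP server would answer with a canned JSON-RPC result.
def simulate_response(msg: dict) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": msg["id"],
        "result": {"content": [{"type": "text", "text": "balance: 100.00"}]},
    }

print(check_request(request))            # [] -> structurally valid
print(simulate_response(request)["id"])  # echoes the request id: 7
```

The point of a codeless interface on top of checks like these is that testers declare the expected message shape rather than writing the validation code themselves.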

The features are available immediately and come as more organizations explore autonomous software agents, large language models, and hybrid architectures in testing workflows.

Strategic focus

Parasoft’s latest update continues a broader shift among software testing vendors toward incorporating AI into QA tooling, not only to increase automation, but also to address the specific complexities introduced by generative AI and agentic systems.

With its previous updates, the company has integrated machine learning and NLP technologies into areas like API testing, test creation, and service virtualization. Its Virtualize and SOAtest platforms are already used widely in enterprise DevOps pipelines, especially in industries where complete test environments are difficult to replicate due to dependency on external or unavailable services.

The introduction of agentic AI into virtualization mirrors a wider industry trend toward co-pilot-style tools that allow non-expert users to interact with sophisticated systems through simple, conversational prompts.

While these features can boost productivity, they also raise new challenges for test accuracy and validation—especially when dealing with AI-generated outputs that vary from one test to the next.

This is where Parasoft’s natural language-based validation comes in, Kirilenko stressed: by enabling QA teams to frame assertions in everyday language, the company is attempting to bridge the gap between AI unpredictability and enterprise-grade quality assurance.

These tools are particularly valuable in financial services and other regulated sectors adopting LLMs and AI agents for internal and customer-facing applications, he noted.

Support for the Model Context Protocol further signals Parasoft’s intent to stay relevant as agent-based architectures evolve. MCP, which is being embraced by vendors including Tricentis, allows AI agents to interact with enterprise software tools in structured and auditable ways.

QA teams now need tools to simulate, test, and verify these interactions just as rigorously as traditional APIs, and Parasoft appears to be building toward that future.

While some in the testing community remain cautious about the risks of over-automating with AI, Parasoft’s focus remains on augmenting, not replacing, human testers. The goal, according to Kirilenko, is to reduce setup complexity and manual overhead so teams can spend more time testing what matters.

These latest additions underscore that Parasoft, like several of its competitors, sees AI not merely as a testing target, but as a core part of the modern testing toolchain itself.

