
Blinq.io launch will harness GPT for testing

Tal Barmeir, together with Guy Arieli, founded Experitest in 2009. They sold that mobile-focused continuous testing platform to Digital.ai – the Texas-based DevOps management platform – in 2020. Now they have a new testing business, Blinq.io, which is designed to harness ChatGPT for software testing and will fully launch in September 2023. Vodafone, the mobile communications group, is a design partner. QA Financial spoke to Barmeir to find out more.

How did Blinq.io come about?

With Experitest we saw how the software testing industry evolved rapidly as mobile applications were launched onto various smartphones. And what we are seeing right now is artificial intelligence becoming a significant driver of change. Software testing is still dominated by a lot of hard manual work: either manual testing itself, or creating automated scripts and then maintaining those scripts as the platform has to adapt to new releases and versions.

Blinq.io is leveraging the power and the intelligence of AI, specifically GPT models, and we’re using that to replace and enhance any type of work that humans are doing.

And how does Blinq.io do that?

What we are doing is creating a virtual software tester. We have a platform that can fulfil the tasks of both manual and automated test script creation, exactly as a human tester would. It can analyse the task and create on-the-fly test automation scripts, which it is later able to maintain. As this virtual tester is an instance of computing power, we can instantaneously create as many of these testers as we want, when we want them. So we can launch a hundred thousand of those tester instances just before a software release date, test 100% of the scope of the application or website, and then basically kill these virtual tester instances the minute the software has been released.
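
As a rough illustration of the "virtual tester" idea Barmeir describes, here is a minimal sketch in Python: several short-lived worker instances, each asking a GPT-style model to turn a plain-language test case into an automation script. The prompt, the model name, the Playwright output format and the helper names are assumptions made for this article, not Blinq.io's implementation.

```python
# Illustrative sketch only: spin up many short-lived "virtual tester" workers,
# each asking an LLM to turn a plain-language test case into an automation script.
# The prompt, model name and helper names are assumptions, not Blinq.io's code.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def virtual_tester(test_case: str) -> str:
    """One ephemeral tester instance: generate a runnable script for a test case."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep generation as repeatable as possible
        messages=[
            {
                "role": "system",
                "content": "You are a test automation engineer. "
                           "Return only a Playwright Python script, no commentary.",
            },
            {"role": "user", "content": f"Write an automated test for: {test_case}"},
        ],
    )
    return response.choices[0].message.content


test_cases = [
    "Log in with a valid account and check that the dashboard loads",
    "Submit the payment form with an expired card and expect a validation error",
]

# Launch one tester instance per test case, then let them disappear after the run.
with ThreadPoolExecutor(max_workers=len(test_cases)) as pool:
    scripts = list(pool.map(virtual_tester, test_cases))
```

In practice each generated script would still need to be executed and its results collected; the point of the sketch is only the elasticity of the tester instances.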

The testing industry today is facing a lot of challenges in its ability to recruit and retain very good test automation engineers, and to keep pace with the very large volume of software testing requirements. Our solution is fully scalable and removes that need. We're currently using GPT-4 as the large language model (LLM) for Blinq.io, but the platform works with any generative AI engine, and longer term we are planning on switching engines if and when we need to. As some of the open-source AI engines develop, we believe that they could probably perform the tasks more quickly and effectively.
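
Barmeir's point about staying engine-agnostic suggests a thin abstraction layer between the testing logic and the LLM. A minimal sketch of that general idea follows, assuming an OpenAI-style client; the interface and class names are illustrative, not Blinq.io's actual API.

```python
# Illustrative sketch of keeping the LLM engine swappable behind one interface.
# The interface and class names are assumptions for this article, not Blinq.io's API.
from typing import Protocol


class TestGenEngine(Protocol):
    def generate(self, prompt: str) -> str: ...


class GPT4Engine:
    """Backend using an OpenAI-style chat API."""

    def __init__(self) -> None:
        from openai import OpenAI
        self._client = OpenAI()

    def generate(self, prompt: str) -> str:
        response = self._client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


class OpenSourceEngine:
    """Placeholder for an open-source model served behind the same interface."""

    def generate(self, prompt: str) -> str:
        raise NotImplementedError("Plug in a local or hosted open-source model here.")


def build_test_script(engine: TestGenEngine, test_case: str) -> str:
    # The rest of the platform depends only on TestGenEngine, so the underlying
    # model can be switched without touching this code.
    return engine.generate(f"Write an automated test for: {test_case}")
```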

How do you ensure consistency of output from the LLM?

The consistency and stability of the output of the LLM agent is obviously a very significant consideration, which affects the ability to work with it in the long term. If you simply ask an LLM agent to perform a test without providing any sort of refinement, you get sporadic answers which change every time. So we have learned how to narrow or widen the space of possibilities in which the LLM agent is working, so that we can create the consistency needed for it to be used in a definitive way.
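
One common, general-purpose way to narrow an LLM's space of possibilities is to pin the sampling temperature to zero, constrain the model to a fixed action vocabulary and output schema, and validate the result before use. The sketch below illustrates that technique under assumed prompts, model and names; it is not a description of Blinq.io's method.

```python
# Illustrative sketch of one generic way to narrow an LLM's output space so that
# test generation is repeatable: zero temperature, a fixed vocabulary of allowed
# actions, a JSON-only response format and a validation pass that rejects anything
# outside that vocabulary. Not a description of Blinq.io's actual method.
import json

from openai import OpenAI

ALLOWED_ACTIONS = {"navigate", "click", "fill", "assert_text"}

SYSTEM_PROMPT = (
    "You convert a test case into JSON of the form "
    '{"steps": [{"action": "...", "target": "...", "value": "..."}]}. '
    f"Only these actions are allowed: {sorted(ALLOWED_ACTIONS)}."
)

client = OpenAI()


def generate_steps(test_case: str) -> list[dict]:
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0,  # remove sampling randomness
        response_format={"type": "json_object"},  # force machine-readable output
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": test_case},
        ],
    )
    steps = json.loads(response.choices[0].message.content)["steps"]
    # Reject any answer that drifts outside the constrained action vocabulary.
    for step in steps:
        if step["action"] not in ALLOWED_ACTIONS:
            raise ValueError(f"Unexpected action from the model: {step['action']}")
    return steps
```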

Who are your competitors in the market?

I’m sure there are other companies that might be developing similar products, but at this point we’re not aware of them, as this is a relatively new capability – software testing using an LLM agent. Some other AI software testing support tools improve a certain aspect of the testing process, such as object identification or visual comparison of the layout, but what we’re doing is different. We’re actually replacing the human decision-making of a tester. We’re getting instructions from the “brains” of the AI in order to execute the test, instead of just using it to improve a certain aspect of a tool that is ultimately operated by a human tester.

Are you working in partnership with any other companies?

Yes, we’re working with design partners spanning various verticals: some in communications, such as Vodafone, but others in other areas such as transportation; there’s a cruise company and some hardware component creators. So there are several verticals in which we’re working with the beta version, in order to improve and fine-tune the model and the product and make sure it’s aligned with the requirements of each vertical. But the main vertical that we’re focussing on is actually the financial sector, where we’re working with two leading banks in particular – one in Europe and the other in Asia. This is the primary vertical that we’re focused on, and the product is optimised for the financial sector to start with, but it already works very well with other verticals.

Why is the platform most suitable for financial services firms?

To be specific, it’s most suitable for companies that use a lot of standardised, formalised or templatised text interactions – so financial institutions, such as banks and insurance companies; but also transportation companies, airlines and retailers. We also see a clear use case for B2B or internal enterprise applications, such as SAP applications, or customer-facing applications which follow a standardised pattern of ingesting and outputting information.

What is the longer-term opportunity for using LLM as the basis for testing?

We think that the testing of software which has already been released is going to become an even larger bottleneck, because a lot of the product code in use today is being created at least somewhat automatically with AI-powered tools. The result is that functional testing, which focuses on how real human beings interact with the software, is of growing importance. That’s why we are focussing on that element, which is the ultimate point of contact with the software application, alongside performance testing, load testing and API testing. We believe that the other parts of the software testing process, such as unit code tests and so forth, are already largely solved, so we don’t see those as significant areas for innovation.
