QA Financial Forum New York | 15 May 2024

QA Financial Forum: LambdaTest on how firms can leverage AI

At the QA Financial Forum in New York City, Jay Singh delved into the power of AI-based tooling for test orchestration in the cloud.

At the QA Financial Forum in New York City yesterday, one of the biggest names at the annual conference was LambdaTest, a San Francisco-headquartered provider of a cloud-based test automation platform.

At the event, the company’s chief customer officer and co-founder, Jay Singh, discussed how to spin up test infrastructure on demand, prioritize the right sequence of tests irrespective of legacy test tooling, and maintain data integrity.

He delved into the power of AI-based tooling for test orchestration in the cloud, highlighting how financial firms can leverage it to revolutionize test infrastructure, prioritize the sequence of testing, and maintain data integrity.

LambdaTest’s Jay Singh (right) during the event in New York City

Following the event, QA Financial checked in with Singh’s colleague and co-founder Asad Khan, currently the CEO of LambdaTest, which was founded in 2017.

With over a decade of experience in the software testing industry, Khan helped build 360logica into a multi-million dollar business within five years; the company was sold to Saksoft in 2014.

Prior to founding LambdaTest, Khan worked as a Lead Engineer at GlobalLogic, serving customers such as top-tier banks Bank of America, Wachovia, PNC, Wells Fargo, and BNY Mellon.

QA Financial was curious to hear what this industry veteran makes of the current shape of the testing market, the emergence of GenAI and automation, and which trends may define tomorrow’s QA market.

QA Financial: Mr Khan, firstly, financial firms are facing enormous challenges with traditional QA and testing methodologies. How do you channel the ever-changing needs and rapidly evolving demands from your customers?

Asad Khan: Financial firms are grappling with ongoing challenges as they modernize their QA practices. To comply with strict regulations like GDPR, CCPA, and PSD2, they need to be very thorough in their testing process.

California-based Asad Khan, the CEO of LambdaTest

Another crucial aspect is that end customers in financial services expect a secure, personalized and seamless experience. Ensuring that applications meet these demands across various platforms and devices is essential. To tackle these diverse and evolving needs, our approach is comprehensive.

This includes engaging directly with customers through dedicated channels to gather frequent feedback, conducting surveys, and participating in industry-specific forums that address FinTech testing challenges. Our analysis helps identify where traditional methods may be lacking and highlights functionalities that are most valuable to the finance sector.

QA Financial: You emphasized customer engagement: how do you integrate customer feedback directly into your roadmaps and strategies?

Asad Khan: Customer feedback is one of the most important parts of our product development process. Gathering feedback helps us ensure that our development roadmap remains aligned with the evolving needs of every industry, not only finance, in a dynamic technology landscape. One way of achieving this is maintaining open communication channels with our customers, such as scheduling meetings at regular intervals and addressing their concerns through our support channels. This allows them to directly submit feature requests, report bugs, and suggest improvements to the platform. All this feedback is meticulously analysed and prioritized based on its relevance and impact.

Apart from leveraging surveys and polls, we have also built the LambdaTest Community, where QA professionals from all around the world can share experiences, ask questions, discuss new trends and technologies, and provide feedback on our platform. This gives us valuable insights into real-world testing scenarios and ever-changing user needs.

“We often think that automation is sufficient to handle manual repetitive tasks, however, it cannot replace the expertise of human testers.”

– Asad Khan

QA Financial: Speaking of insights and ever-changing user needs: what will be the biggest trend or change within the QA space that you foresee in the next, let’s say, 2-4 years?

Asad Khan: As we already know, AI and automation in testing processes are capable of taking over repetitive, manual tasks, freeing up QA teams to take on more complex parts of testing. This allows them to focus on ensuring that applications are secure, reliable, and performant, ultimately improving the overall quality of the software we use every day.

Apart from AI and automation, another trend that may take hold in the future is prioritizing performance testing. With the growing complexity of systems and rising user expectations, performance engineering is becoming increasingly crucial. Teams are now emphasizing the integration of performance engineering throughout the development process to ensure that applications perform well under various stress conditions.

Another trend is moving towards TaaS, or Testing as a Service. There’s a shift happening in the way companies approach test automation, with a move towards TaaS. Senior QA leaders are really getting behind this, pushing the use of AI-powered platforms that streamline everything from creating test cases to analyzing the results. This isn’t just about saving resources—it’s changing the game for QA teams, giving them the freedom to focus on more strategic, creative work. This means they can really dive deep into making processes better, which boosts productivity and drives innovation in testing.

Finally, another trend is an increased focus on explainable AI. With AI taking on a bigger role in QA activities, there’s a growing need for something called explainable AI, or XAI. It’s all about making sure that when AI helps us by identifying why a test didn’t go as planned, everyone can understand the reasons. This clarity is crucial because it makes troubleshooting a lot smoother and lets QA teams fix issues faster and more efficiently.

Asad Khan at his office in San Francisco

QA Financial: Speaking of AI, the impact of GenAI and LLMs is revolutionary, to say the least. How can these technologies best be exploited? And are there any pitfalls businesses should look out for?

Asad Khan: GenAI and LLMs offer a multitude of advantages in the QA landscape. One of the benefits these technologies bring is streamlining test case creation: LLMs can sift through existing test cases and application functionalities to automatically generate new, pertinent test cases. This automation can drastically cut down the time and labour traditionally needed for crafting test cases, particularly for intricate financial applications.
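As an illustration of this test-case-generation idea, here is a minimal sketch of assembling existing test cases into a prompt that an LLM could be asked to extend. This is not LambdaTest's implementation; the helper name and prompt wording are my own, and the call to an actual LLM is deliberately left out.

```python
def build_testcase_prompt(existing_cases: list[str], feature: str) -> str:
    """Assemble a prompt asking an LLM to propose new test cases,
    grounded in the cases a team already maintains.

    Sending this prompt to a model is left to whatever LLM client
    the team uses; only the prompt construction is sketched here.
    """
    numbered = "\n".join(f"{i + 1}. {case}" for i, case in enumerate(existing_cases))
    return (
        "You are a QA engineer for a financial application.\n"
        f"Feature under test: {feature}\n"
        "Existing test cases:\n"
        f"{numbered}\n"
        "Propose additional test cases covering edge cases not listed above."
    )
```

In practice the existing cases would be pulled from the team's test management system rather than hard-coded.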

Also, AI helps to improve test data management. In the finance industry, generating realistic yet anonymized test data can be quite a challenge due to strict privacy regulations. However, by using diverse financial datasets for training, we can create synthetic test data that closely mirrors real-world financial scenarios. This approach ensures that we can carry out effective testing without compromising sensitive information.
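A minimal sketch of the synthetic-data idea, using seeded random sampling as a stand-in for a trained generative model. All names, fields, and distributions here are illustrative assumptions, not LambdaTest's approach; the point is that no real customer record is ever touched.

```python
import random
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SyntheticTransaction:
    account_id: str
    amount: float
    currency: str
    merchant_category: str
    timestamp: datetime

def make_synthetic_transactions(n: int, seed: int = 42) -> list[SyntheticTransaction]:
    """Generate anonymized, realistic-looking transactions for testing.

    Account IDs are fabricated and amounts are drawn from a skewed
    distribution resembling real spending, so tests exercise plausible
    data without exposing any sensitive information.
    """
    rng = random.Random(seed)  # seeded so test runs are reproducible
    categories = ["groceries", "travel", "utilities", "dining", "transfer"]
    start = datetime(2024, 1, 1)
    return [
        SyntheticTransaction(
            account_id=f"ACCT-{rng.randrange(10**8):08d}",  # fabricated ID
            amount=round(rng.lognormvariate(3.5, 1.0), 2),  # long-tailed amounts
            currency="USD",
            merchant_category=rng.choice(categories),
            timestamp=start + timedelta(minutes=rng.randrange(60 * 24 * 90)),
        )
        for _ in range(n)
    ]
```

A production system would fit the distributions to anonymized aggregates of real data instead of hard-coding them, but the privacy property is the same: the generated records are synthetic end to end.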

AI also helps to proactively identify issues. By analyzing patterns in user behaviour, LLMs can foresee potential edge cases or unexpected user interactions that might cause errors. This forward-looking testing approach allows financial institutions to pinpoint and resolve problems before they affect actual users.

QA Financial: And what about the risks and challenges?

Asad Khan: Yes, LLMs and GenAI come with their own set of challenges, such as bias and accuracy concerns. The outputs of large language models are only as good as the datasets they learn from. If these datasets have inherent biases, the generated test cases and synthetic data might not accurately reflect real-world scenarios, leading to flawed or misleading test results. It’s vital to choose and manage the data used for training these models with care to maintain the integrity of the outcomes.

“Bringing AI into testing frameworks demands strong security measures to protect the training data and the models themselves.”

– Asad Khan

Another challenge is explainability. Understanding why a model generates certain test cases or data points can be tough. This opacity can complicate the process of debugging and solving issues during testing, as the reasoning behind the decisions isn’t always clear.

And then there are security risks. Bringing AI into testing frameworks demands strong security measures to protect the training data and the models themselves. Keeping these security protocols tight is crucial to avoid introducing new vulnerabilities into the system.

QA Financial: Testing the quality of AI apps is a timely issue, as there are no global standards yet. How do you think businesses should go about that?

Asad Khan: The lack of global standards for testing AI applications poses a significant challenge. There are some hands-on steps businesses can take to ensure their AI-powered apps meet high-quality standards. The first is functional testing: unlike traditional software, AI apps require more comprehensive functional testing to ensure they perform their expected tasks accurately and reliably. This includes crafting scenarios that test how the model processes various inputs, recognizes patterns, and delivers the correct outputs.
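The functional-testing step described above can be sketched as a small harness that runs (input, expected output) cases against any model callable and collects the mismatches. Both the harness and the toy rule-based "model" below are illustrative assumptions, not a real fraud model or a LambdaTest tool.

```python
def run_functional_checks(model, cases):
    """Run each (input, expected) case through the model callable
    and return the list of mismatches as (input, expected, got)."""
    failures = []
    for inp, expected in cases:
        got = model(inp)
        if got != expected:
            failures.append((inp, expected, got))
    return failures

# Toy stand-in for an AI model: flags large transactions for review.
def toy_fraud_flagger(txn: dict) -> str:
    return "review" if txn["amount"] > 10_000 else "approve"
```

The same harness works whether `model` is a rules engine, a scikit-learn classifier, or a call into an LLM, which is the point: the functional contract is tested independently of how the model is built.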

Also important is non-functional testing. Apart from basic functionality, it’s important to test other critical aspects of AI apps, such as performance: see how the model performs under stress to ensure it can handle real-world demands without faltering. Scalability testing, too: check whether the model can efficiently manage growth, whether in data volume or user numbers.
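The stress-testing idea can be sketched as a tiny concurrency harness that fires requests at a callable and reports latency percentiles. This is a minimal illustration under my own assumptions; real performance testing would use a dedicated load tool rather than a script like this.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_latencies(call, n_requests: int, concurrency: int) -> list[float]:
    """Invoke `call` n_requests times across `concurrency` worker
    threads and return the sorted per-request latencies in seconds."""
    def timed() -> float:
        t0 = time.perf_counter()
        call()  # the system under test: an HTTP request, a model inference, ...
        return time.perf_counter() - t0

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(timed) for _ in range(n_requests)]
        return sorted(f.result() for f in futures)

def percentile(sorted_latencies: list[float], p: float) -> float:
    """Nearest-rank percentile over an already-sorted latency list."""
    idx = min(len(sorted_latencies) - 1, int(p / 100 * len(sorted_latencies)))
    return sorted_latencies[idx]
```

Scalability testing is then a matter of repeating the measurement at increasing `concurrency` levels and watching whether the tail percentiles (p95, p99) stay within budget.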

Then there is security testing: ensure the app and model are well protected from malicious attacks and security threats. Also important is clarifying AI decision-making: understanding the rationale behind an AI model’s decisions is key. Technologies that enhance transparency, known as explainable AI, can aid in debugging, identifying biases, and fostering trust in the system.

Finally, focus on data quality and bias prevention: the effectiveness of AI models greatly depends on the quality of the training data. It’s critical to ensure that the data is diverse, unbiased, and mirrors real-life scenarios. Conducting fairness tests can pinpoint and mitigate biases in the data, preventing skewed or unjust outcomes.

QA Financial: Moving on, what are the key ingredients for real-world success of test strategies?

Asad Khan: When it comes to finance firms, even minor testing errors can lead to big problems like security gaps, a clunky user interface, or payment gateway issues. To make test strategies more effective, it is crucial to understand what your business aims to achieve, and to make sure your tests verify that the application meets these goals and adheres to required standards. One suggestion here is to incorporate testing early in your development lifecycle to find bugs before they escalate, avoiding last-minute panic and ensuring the software is delivered on time and functions properly.

“As AI plays a bigger role in testing, tools that clarify why tests fail become increasingly important for effective debugging and decision-making.”

– Asad Khan

There might be scenarios when you have limited testing resources. In such cases, identify and prioritize the key functionalities and scenarios that are most likely to cause issues and concentrate your efforts there. Utilizing data from past tests and user behaviour can help you predict potential problem areas and adjust your test coverage to be more effective.
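The prioritization approach described above can be sketched as a simple risk score that combines each test's historical failure rate with a business-criticality weight, then orders the suite by that score. The field names, the weighting, and the smoothing are my own illustrative choices, not a prescribed formula.

```python
def risk_score(runs: int, failures: int, criticality: float) -> float:
    """Historical failure rate weighted by business criticality.

    Laplace smoothing (+1/+2) keeps brand-new tests with no history
    from being scored as zero risk.
    """
    failure_rate = (failures + 1) / (runs + 2)
    return failure_rate * criticality

def prioritize_tests(history: dict[str, dict]) -> list[str]:
    """Order test names so the riskiest run first when time is short."""
    return sorted(
        history,
        key=lambda name: risk_score(
            history[name]["runs"],
            history[name]["failures"],
            history[name]["criticality"],
        ),
        reverse=True,
    )
```

With limited resources, a team would run the top of this ordering first, so a flaky payment-gateway test outranks a stable footer-link check even if both fail occasionally.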

We often think that automation is sufficient to handle manual repetitive tasks; however, it cannot replace the expertise of human testers. Instead, automation should free up your QA team to take on more complex and strategic tasks, such as refining test cases or trying out new testing methods. Security is particularly paramount for financial applications, which handle sensitive data, so make sure to perform regular penetration tests and vulnerability assessments to keep your applications safe from threats.

QA Financial: Thank you. Looking ahead, is there anything else you would like to share with our readers?

Asad Khan: The future of QA is exciting, but it still comes with its own set of challenges. Cultivating skilled in-house QA teams remains essential, even as AI and automation change the game. Training staff in the latest technologies and practices enables them to effectively use new tools while keeping them integral to the QA process. As AI plays a bigger role in testing, tools that clarify why tests fail become increasingly important for effective debugging and decision-making. Staying current with ongoing learning and exploring new testing methods and tools is also crucial. By embracing these practices, the financial services industry can lead in advanced QA, creating a testing environment that not only drives innovation and ensures security but also enhances the overall digital experience for customers.

