
AI Fintech Project 2: Enhanced QA Processes Using AI as a Collaborative Partner


Customer service, financial stability, risk management, and a robust network of physical branches used to define the top companies in the finance industry. But a digital revolution has accelerated growth as the best finance companies embrace technology and innovation.

Fintech firms, in particular, continue to shake things up by offering more accessible wealth management tools for everyone from big businesses to traditionally overlooked populations. By educating individuals on building credit and saving, fintech companies use technology to serve as a stepping stone toward financial stability.

For many in the finance industry, the next logical step is to start evaluating artificial intelligence (AI) and where it might solve critical business problems and align with each company’s specific goals. Growth Acceleration Partners (GAP) takes a consultative approach, strategizing on AI with clients by suggesting it as a solution to business problems only when it makes sense, rather than trying to fit AI into any product without a clear purpose.

We’re not afraid to say it: AI is great, but sometimes it’s overkill. That’s why it’s paramount to understand the challenge you’re looking to solve, and then create tailored strategies to determine the appropriate AI applications.

One area where GAP frequently sees concerns, especially challenges related to security, compliance, and seamless user experience, is quality assurance (QA). Software quality engineering services go beyond handling sensitive financial data; they also improve internal software and data engineering processes. And when AI is implemented alongside QA, those improvements compound.

Here are the details of a recent AI advisory proposal GAP shared with an existing client:

Enhanced QA Processes Using AI as a Collaborative Partner

GAP currently works with a fintech company focused on ways to increase economic inclusion and financial resilience. Implementing a consulting mindset, the GAP team proposed ideas to create a better way to handle QA with AI.

The client is particularly interested in generating artifacts related to development tickets in an interactive, supervised manner, and recently received forward-thinking recommendations for optimizing its QA procedures.

In this proposed process, a software engineer takes a short description and uses AI to generate one of the artifacts of interest, such as test cases or basic automated test code. A QA engineer then reviews it, and once approved, those two artifacts serve as the basis for a third artifact, such as a longer description or a classification.

For example, a QA engineer can take the information from a user story and, with the help of AI tools, quickly generate the artifacts needed to test that feature’s implementation in the application. AI can streamline the creation of artifacts such as test cases, test plans, traceability matrices, and automated test scripts, reducing the time previously spent generating content from scratch. The engineer is then free to focus on reviewing the information for accuracy and making adjustments as needed, delivering high-quality artifacts that add real value to the testing process with greater efficiency. Other examples include generating bug reports and reviewing test scripts, all with AI assistance.

GAP’s team would work with the client to utilize user story data to iteratively generate detailed descriptions, classification priorities, and other metadata for development tickets. And then, the GAP team would automate the creation of test code for specific QA tools to facilitate testing processes based on generated metadata.

The recommended process for AI-driven end-to-end test case automation is to do this iteratively, carefully, and with supervision so the output is relevant. At GAP, we look at results, refine, and get approval along the way. With this process, we can get to boilerplate code — but it’s not magic.
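The supervised, iterative flow described above can be sketched as a simple human-in-the-loop pipeline. This is a minimal illustration under stated assumptions, not GAP’s actual implementation: the function names (`draft_artifact`, `qa_review`, `run_pipeline`) are hypothetical, the drafting step would call an AI model in practice, and the review step would be a QA engineer rather than a stub.

```python
def draft_artifact(kind, context):
    """Hypothetical stand-in for an AI call that drafts a QA artifact
    (test cases, test plan, automation code) from the given context."""
    return f"[{kind} drafted from: {context}]"

def qa_review(artifact):
    """Hypothetical stand-in for human review: a QA engineer either
    approves the draft (possibly after edits) or rejects it."""
    return True, artifact  # this sketch approves every draft as-is

def run_pipeline(user_story):
    approved = {}
    # Stage 1: draft the initial artifacts from the short description,
    # pausing for review after each one.
    for kind in ("test cases", "test plan"):
        draft = draft_artifact(kind, user_story)
        ok, final = qa_review(draft)
        if not ok:
            return approved  # nothing flows downstream without approval
        approved[kind] = final
    # Stage 2: only the approved artifacts seed the automation draft,
    # which is itself reviewed before being accepted.
    code_draft = draft_artifact("automation script", "; ".join(approved.values()))
    ok, final = qa_review(code_draft)
    if ok:
        approved["automation script"] = final
    return approved

artifacts = run_pipeline("As a user, I can reset my password via email")
```

The point of the structure is that each stage gates the next: an artifact only becomes input to later generation steps after a human has signed off on it.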

By the way…

Have you ever heard the phrase “selling smoke” (or perhaps “blowing smoke”)? It can mean someone is talking nonsense or trying to sell you something totally useless. And that’s what we’re seeing a lot of when it comes to products or services that claim to integrate AI into quality assurance (QA) processes.

These snake oil salesmen claim to be able to take a ticket from a simple description all the way to boilerplate code. Or, the demo they show has a pristine, seven-paragraph description that already contains most of the answers.

But in real-life settings, you’ll be working with either far less detailed or much more complicated descriptions, along with prioritization for whichever classification system the team adheres to. We’re talking two lines, not seven paragraphs! Also, other artifacts that are usually created manually require a lot of context based on the state of the project — and what it’s meant to do.

These “blowing smoke” demos involve plenty of showmanship, but they aren’t useful for real implementations because they’re unrealistic.

The process is neither iterative nor supervised. It looks nice, but it can’t actually be implemented.

GAP’s QA Center of Excellence confirmed AI will not replace the professionals in this field. Instead, AI enhances an engineer’s performance and improves how they allocate their time across different stages of the process. It’s important for clients to know we still need professionals driving the process.

At GAP, no smoke is blown. Instead, we precisely execute on a vision of how implementation should actually work. And this level of trust is why clients keep using GAP to consult, design, build, and modernize software and data solutions.