This post was contributed by Provar’s Chief Strategy Officer, Richard Clark.

In 2019, I wrote an article about why test automation didn’t need AI. My original article was even renamed Demystifying AI: What Does Artificial Intelligence Mean for Test Automation? because the original title was considered too controversial and risked alienating people.

My reasoning at the time was based on the limited and misleading claims about artificial intelligence across the software industry: deliberate attempts by companies either to inflate their valuations for investors wanting to jump on the bandwagon, or to mislead potential customers about the value of AI within their products. Four years later, I no longer stand by that article.

Since I wrote that article, three major things have changed:

  1. The market and audience have generally learned the lesson of false idols when it comes to AI in testing and have become more educated about the actual value of AI. We now better understand Narrow/Weak AI and General/Strong AI.
  2. New tools have come to market, especially Conversational and Generative AI solutions with actual commercial licenses, that offer practical uses of AI beyond fixing a brittle element locator or explaining what a test case is trying to do to rectify a poor user experience. AI as a service now has real potential to build upon, and software companies no longer need to build their machine learning systems from scratch.
  3. Here at Provar, we moved our product suite beyond Salesforce test automation. We became a software quality vendor with integrated solutions, each bringing valuable benefits to our rapidly growing customer base. This includes using a new algorithm (application intelligence built on Salesforce metadata) for test case generation and leveraging the OpenAI ChatGPT API to suggest potential test scenarios from a user story. We also integrated, rather than rebuilt, existing test optimization machine learning and static code analysis tools.
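
To make that last point concrete, here is a minimal, hypothetical sketch of asking OpenAI’s Chat Completions API to suggest test scenarios from a user story. The model name, prompt wording, and helper names are my own assumptions for illustration, not Provar’s actual implementation:

```python
def build_prompt(user_story: str) -> str:
    """Wrap a user story in an instruction asking for test scenarios."""
    return (
        "You are a QA engineer. Suggest five test scenarios, "
        "including edge cases, for this user story:\n" + user_story
    )

def suggest_scenarios(user_story: str, model: str = "gpt-4o-mini") -> str:
    """Send the prompt to the Chat Completions API and return the reply text."""
    from openai import OpenAI  # third-party: pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(user_story)}],
    )
    return response.choices[0].message.content

# Example (requires an API key):
# print(suggest_scenarios("As a sales rep, I want to convert a lead to an "
#                         "opportunity so that I can track the deal."))
```

The heavy lifting is entirely in the hosted model; the integration itself is little more than prompt construction, which is exactly why “AI as a service” is so easy to build upon.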

At the same time, the hype around AI has never been more prominent in the public domain. The consumer appetite for AI was whetted first by voice assistants like Amazon’s Alexa, and most of us have one or more of these in our homes. Did they transform our lives? What have they replaced? And how has this heightened the demand for AI and intelligent capabilities across the board?

Intelligence on Artificial Intelligence 

Is Alexa intelligent? No. Does it use AI? Absolutely. From voice recognition (natural language processing) to personalization and “remembering” my favorite things, we’re seeing machine learning at work.

The reality is that most voice assistants do use some AI. Still, they essentially automate searching the internet and filtering out harmful material (most of the time). If you want a conversational dialog, you have to tell the assistant which skill to apply (typically an AWS Lambda function with an algorithmic flow, like a chatbot), and you’ll be back at square one once you exit that skill. It won’t “remember” the conversation unless programmed to, though the vendor does hold a record of it in its database, and presumably some data about you is collated and categorized.

At the same time, we’ve seen a rise in driverless car technology, which many people see on the road daily. We’re yet to trust this technology enough to make it mainstream. Still, it’s only a matter of time, in my opinion, once we overcome the moral dilemmas (save the child in front or risk killing the driver). One of the other challenges to driverless cars is learning to deal with human drivers who have learned to bully them! Guess what – people are smart and have learning models, too.

We should also remember that all these machine learning systems are trained on real-world data, which can teach them that rules aren’t always followed, rather than relying on algorithms alone. Humans are notoriously bad decision-makers, and data scientists help us draw the line between good and bad training data.

Rise of the Co-Pilot

The benefits these integrations bring aren’t changes that mean we need fewer people at work. Instead, they’re simply co-pilots, or assistants, that help us do more in less time. For example, an airline pilot in 2023 has a much easier time flying than one in 1923, or even 1993, as the plane does a lot of the work for them. It does this through a combination of computer automation (flight control), artificial intelligence (route planning, fuel saving), and engineering (hydraulics). When the plane’s computers get it wrong, it can spell disaster, so a human must be able to override them, and a human still generally takes off and lands the plane.

Tragically, the Boeing 737 MAX crashes were reportedly down to a combination of the pilots’ training conflicting with the flight software’s interpretation of faulty sensor data. The computer pushed the nose of the aircraft down due to an incorrect stall warning, and the actions required of the pilots to override it had changed since the previous aircraft model.

We also need people on the plane who will reassure us when things go wrong, take necessary safety actions, and deal with the chaotic behavior of other humans. Few of us are ready for computers to talk to computers about aircraft maintenance, refueling, or passenger safety. Pilots remain essential, not least because knowing how to land a plane under challenging conditions is much more complicated than flying straight and level for 9 hours.

Let me get back to the point. Voice assistants and driverless cars are changing how the general public perceives AI, from a death-wielding robot bent on exterminating humans to something that helps them with their lives – a convenience. The sudden public rise of Generative and Conversational AI tools mirrors that of voice assistants: one vendor caught the media’s attention first, and everyone else working hard on their own AIs rushed to get attention. I’m in no position to judge which Generative AI is better, but we all know who was first to steal the headlines.

The value of the new generation of AIs, not just ChatGPT, is impressive. This is best demonstrated by how quickly Microsoft swooped on OpenAI and started building GPT into its products in weeks, not the months you would normally have expected. Likewise, companies like Salesforce have announced their own integrations. They did this because it’s surprisingly easy, and far more manageable than building your own machine learning system and asking a data scientist to train it on the right data.

What Salesforce especially appears to be getting right with its recent AI Cloud announcement is the additional value it adds. Through its existing secure and trusted architecture, Salesforce helps implement appropriate ethics and security by tokenizing conversations through data masking, checking for toxicity, maintaining audit logs, and utilizing different AI models for different types of requests. This includes being able to automatically ground the model for each customer using their application data or their own externally hosted language model.

The Application of AI in Testing

When it comes to AI in testing, some prominent areas of benefit can be unlocked through automation, including using AI for:

  • Image-Based Testing: Allowing rapid verification of textual and visual information within graphics, for user experience and accessibility analysis or for solutions that are hard to test with traditional tools.
  • Scenario Generation: Verifying that sufficient test coverage has been achieved for the functionality based on your unique criteria, goals, and risks. We’ve started this with Provar Manager, and more will come soon.
  • Result Analysis: Summarizing the trends and changes in test performance and results quickly and concisely so stakeholders can communicate them. Collecting test results in Provar Manager will unlock the future opportunity to utilize Salesforce AI Cloud effectively.
  • Performance Optimization: Both orchestrating test execution so that defects are found as early as possible and can be reworked, and rewriting tests to improve their performance. Our near-future microservices will unlock background optimization and recommendations.
  • Intelligent Test Generation: Provar Automation already provides metadata-based test generation, which delivers an initial level of coverage and a rapid return on investment. Through the use of AI, we want to extend this to cover non-Salesforce applications and align with actual user journeys and business processes.
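
As a toy illustration of the result-analysis idea above, the following sketch turns raw pass/fail counts into a one-line trend summary a stakeholder could read. The field names are my own assumptions, not Provar Manager’s actual schema:

```python
def summarize_trend(runs: list[dict]) -> str:
    """Given per-run pass/fail counts (oldest first), report the latest
    pass rate and how it changed from the previous run."""
    rates = [r["passed"] / (r["passed"] + r["failed"]) for r in runs]
    latest = rates[-1]
    delta = latest - rates[-2] if len(rates) > 1 else 0.0
    direction = "up" if delta > 0 else "down" if delta < 0 else "flat"
    return f"Latest pass rate {latest:.0%} ({direction} {abs(delta):.0%} vs previous run)"

print(summarize_trend([{"passed": 90, "failed": 10}, {"passed": 95, "failed": 5}]))
# → Latest pass rate 95% (up 5% vs previous run)
```

The point is not the arithmetic, which is trivial, but the framing: once results are collected in one place, summarization (whether by simple code or by a language model) becomes a cheap layer on top.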

At Provar, we’ve already delivered on some of these initiatives through product development and partnerships, and we have even more on the horizon. Adopting AI tools within our business has also led to rapid productivity improvements. The title of this article came from ChatGPT, for example!

Beware of False Idols

In test automation, some solutions that use AI can take several months to train on a specific customer’s application by analyzing actual user actions, while others harvest data from multiple customers. Either way, reacting immediately to changes, without waiting for the model to re-train, is beyond both approaches.

Meanwhile, I’ve also seen a lot of “AI washing” by vendors desperate to show AI: products that don’t consider the ethical issues, aren’t really using AI, or, even worse, offer no value to the user.

I recently read a book by UCL Mathematics Professor Hannah Fry called Hello World. She discusses the ethical challenges of AI in general and the differences between an algorithm and an artificial intelligence, specifically a machine learning solution. Two quotes in particular stand out for me:

“People sometimes just see the word ‘AI’ and it’s all sparkly and magical. It can make them forget about all of the other important things that have to go alongside it.”

– Simon Brook interviewing Hannah Fry

 “Whenever we use an algorithm – especially a free one – we need to ask ourselves about the hidden incentives. Why is this app giving me all this stuff for free? What is this algorithm really doing? Is this a trade I’m comfortable with? Would I be better off without it?”

– Hannah Fry, Hello World: Being Human in the Age of Algorithms

My favorite quote, however, is the one she uses to assess whether claims about AI are bogus, her so-called “magic” test. Apologies for the language, but I feel it’s more impactful left uncensored:

“If you take out all the technical words and replace them with the word ‘magic’ and the sentence still makes grammatical sense, then you know that it’s going to be bollocks.”

– Hannah Fry, Hello World: Being Human in the Age of Algorithms
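
Fry’s “magic” test is mechanical enough to sketch in code. Here is a toy illustration; the buzzword list is my own assumption, and a real vendor-claim detector would need a far richer vocabulary:

```python
import re

# Hypothetical buzzword list; extend to taste.
TECH_WORDS = ["machine learning", "neural network", "deep learning", "algorithm", "AI"]

def magic_test(sentence: str) -> str:
    """Replace each technical term with 'magic', per Fry's heuristic.
    Longer phrases are substituted first so 'machine learning' wins over 'AI'."""
    for word in sorted(TECH_WORDS, key=len, reverse=True):
        sentence = re.sub(r"\b" + re.escape(word) + r"\b", "magic", sentence,
                          flags=re.IGNORECASE)
    return sentence

print(magic_test("Our AI harnesses machine learning to find bugs automatically."))
# → Our magic harnesses magic to find bugs automatically.
```

If the output still reads as a grammatical sentence, the original claim told you nothing about how the product actually works.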


More in-depth thoughts on this topic, and on how some test automation solutions promote AI-driven solutions when they are really just harnessing intelligent capabilities (an important distinction), are forthcoming in a future blog post, but for now, I’ll close with this.

AI has unmistakable value, but the people element will remain, even when it comes to AI for software quality. AI isn’t a “magic” solve-all, so beware of solutions framing themselves as such. Embrace AI tools, learn how to use them securely and ethically, ensure they add benefit to your work practices, and measure the increase in productivity or quality they are delivering.

Interested in learning more about how Provar is using intelligent capabilities in its solutions? Connect with us today!