This post was contributed by Provar’s Chief Strategy Officer, Richard Clark.
Back in 2019, I wrote an article about why test automation didn’t need AI. My original article was even renamed to Demystifying AI: What Does Artificial Intelligence Mean for Test Automation? because the original title was considered too controversial and risked alienating readers.
My reasoning at the time was based on the limited and misleading claims about artificial intelligence across the software industry: companies were deliberately either inflating their valuations for investors eager to jump on the bandwagon or misleading potential customers about the value of AI within their products. Four years later, I no longer stand by that article.
Since I wrote that article, three major things have changed:
- The market and audience have generally learned the lesson of false idols when it comes to AI in testing and have become more educated about the real value of AI. We now have a better understanding of Narrow/Weak AI and General/Strong AI.
- New tools have come to market, especially Conversational and Generative AI solutions with real commercial licenses, offering practical uses of AI that go beyond fixing a brittle element locator or compensating for a poor user experience by explaining what a test case is trying to do. AI as a service now offers real potential to build upon, and software companies no longer need to build their own machine learning systems from scratch.
- Here at Provar, we moved our product suite beyond Salesforce test automation and became a software quality vendor, with integrated solutions, each with valuable benefits to our rapidly growing customer base. This includes using a new algorithm (via application intelligence using Salesforce metadata) for test case generation, plus leveraging the OpenAI ChatGPT API for suggesting potential test scenarios from a user story. We also integrated, rather than rebuilt, existing test optimization machine learning and static code analysis tools.
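As a purely illustrative sketch (not Provar’s actual implementation), here is roughly what prompting the OpenAI chat API for test scenarios from a user story can look like; the model name and prompt wording are my own assumptions:

```python
# Illustrative sketch only: ask an LLM to suggest test scenarios from
# a user story. The prompt wording and model name are assumptions,
# not Provar's actual implementation.

def build_prompt(user_story: str) -> str:
    """Construct a prompt asking the model for candidate test scenarios."""
    return (
        "You are a QA analyst. Given the user story below, list candidate "
        "test scenarios, one per line, covering happy paths and edge cases.\n\n"
        f"User story: {user_story}"
    )

def suggest_scenarios(user_story: str, model: str = "gpt-3.5-turbo") -> list[str]:
    from openai import OpenAI  # requires the openai package and an API key

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(user_story)}],
    )
    text = response.choices[0].message.content or ""
    # One scenario per non-empty line, stripping any leading list markers.
    return [line.lstrip("-* ").strip() for line in text.splitlines() if line.strip()]
```

Calling `suggest_scenarios("As a sales rep, I want to convert a lead to an opportunity.")` would return a list of candidate scenarios to review – a starting point for a human tester, not a replacement for one.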
At the same time, the hype around AI has never been more in the public domain. The consumer AI appetite was whetted first by voice assistants like Amazon’s Alexa, and most of us have one or more of these in our homes. Did they transform our lives? What have they replaced? And how has this heightened the demand for AI and intelligent capabilities across the board?
Intelligence on Artificial Intelligence
Is Alexa intelligent? No. Does it use AI? Absolutely. Voice recognition (natural language processing), personalization, being able to “remember” my favorite things – what we’re really seeing is machine learning.
The reality is that most voice assistants do use some AI, but they largely automate searching the internet and filtering harmful material (most of the time), and if you want a conversational dialog you have to tell them which skill to apply. Once you exit that skill (often an AWS Lambda function following an algorithmic flow, much like a chatbot) you’ll be back at square one. It won’t “remember” the conversation unless programmed to, but it does hold a record of it in the vendor’s database, and presumably some data about you is collated and categorized.
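To make the skill-as-a-Lambda-function point concrete, here is a heavily simplified, hypothetical sketch of that stateless algorithmic flow; real Alexa request and response payloads are far richer than this:

```python
# Minimal sketch of a stateless, chatbot-like skill handler, in the
# spirit of an AWS Lambda function behind a voice assistant. The event
# shape is simplified and hypothetical.

def handler(event: dict, context=None) -> dict:
    request_type = event.get("request", {}).get("type")

    if request_type == "LaunchRequest":
        speech = "Welcome! Ask me for a fun fact."
    elif request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        # A fixed algorithmic flow: intent in, canned branch out.
        if intent == "FunFactIntent":
            speech = "Honey never spoils."
        else:
            speech = "Sorry, I don't know that one."
    else:
        # No memory of past turns: exiting the skill resets everything.
        speech = "Goodbye."

    return {"response": {"outputSpeech": {"type": "PlainText", "text": speech}}}
```

Every invocation starts from scratch – the “conversation” lives only in whatever state the vendor chooses to persist elsewhere.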
At the same time, driverless car technology has risen to the point where many people see it on the road every day. We’re yet to trust this technology enough to make it mainstream, but it’s only a matter of time in my opinion. Once we overcome the moral dilemmas (save the child in front, or risk killing the driver), one of the other challenges for driverless cars is learning how to deal with human drivers who have learned to bully them! Guess what – people are smart and have learning models too.
We should also remember that all these machine learning systems are trained on real data, which potentially teaches them that rules aren’t always followed and that we shouldn’t rely on algorithms alone. Humans are notoriously bad decision-makers, and data scientists help us find the line between good and bad training data.
Rise of the Co-Pilot
The benefits these integrations bring aren’t changes that mean we need fewer people to do the work. Instead, they’re simply co-pilots, or assistants, that help us do more in less time. For example, an airline pilot in 2023 has a much easier time flying than one in 1923, or even 1993, as the plane does a lot of the work. It does this through a combination of computer automation (flight control), artificial intelligence (route planning, fuel saving), and engineering (hydraulics). When the plane’s computers get it wrong it can spell disaster, so a human is still required to override them and, generally, to take off and land the plane.
Tragically, the Boeing 737 MAX crashes were reportedly down to a conflict between the pilots’ training and the flight software’s interpretation of faulty sensor data. The computer pushed the nose of the aircraft down due to an incorrect stall indication, and the actions required of the pilot to override it had changed since the previous aircraft model.
We also need people on the plane who are going to reassure us when things go wrong, take necessary safety actions, and deal with the chaotic behavior of other humans. Few of us are ready for computers to talk to computers about aircraft maintenance, refueling, or passenger safety. Pilots remain essential, not least because knowing how to land a plane in difficult conditions is much harder than flying straight and level for 9 hours.
Let me get back to the point. Voice assistants and driverless cars are changing how the general public perceives AI from being a death-wielding robot who wants to exterminate humans to something to help them with their lives – a convenience. The sudden public rise of Generative and Conversational AI tools is similar to Voice Assistants in that one vendor caught the media’s attention first and everyone else that’s been working hard on their own AIs has rushed to get attention. I’m in no position to judge which Generative AI is better, but we all know who was first to steal the headlines.
The value of the new generation of AIs is quite impressive, and not just ChatGPT. This is best demonstrated by how quickly Microsoft swooped in on OpenAI and started to build GPT into its products in weeks, not the months you would normally have expected. Likewise, companies like Salesforce have announced their own integrations. They did this because it’s surprisingly easy – far easier than building your own machine learning system and asking a data scientist to train it correctly on the right data.
What Salesforce especially appears to be getting right with their recent AI Cloud announcement is adding additional value. Through their existing secure and trusted architecture they’re helping to implement the appropriate ethics and security by tokenizing conversations through data masking, checking for toxicity, maintaining audit logs, and utilizing different AI models for different types of requests. This includes being able to automatically ground the model for each customer using their own application data, or use their own externally hosted language model.
The Application of AI in Testing
When it comes to AI in testing, there are some obvious areas where benefits can be unlocked through automation, including using AI for:
- Image-Based Testing: Allowing rapid verification of textual and visual information within graphics for user experience and accessibility analysis, or for solutions that are hard to test with traditional tools.
- Scenario Generation: Verifying that sufficient test coverage has been achieved for the functionality to be delivered based on your unique criteria, goals, and risks. We’ve started this with Provar Manager and more will be coming in the future.
- Result Analysis: Being able to summarize the trends and changes in test performance and results quickly and concisely for communication to stakeholders. Collecting test results in Provar Manager will unlock the future opportunity to utilize Salesforce AI Cloud effectively.
- Performance Optimization: Both in terms of orchestration of test execution to ensure defects are found as early as possible so they can be reworked, and in terms of rewriting of tests to improve test performance. Our near-future microservices will unlock background optimization and recommendations.
- Intelligent Test Generation: Provar Automation already provides metadata-based test generation, which provides an initial level of coverage and a rapid return on investment. Through the use of AI we want to extend this to cover non-Salesforce applications and align with actual user journeys and business processes.
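As a concrete illustration of the orchestration point in the list above, one simple heuristic (my own sketch, not Provar’s algorithm) is to order tests by historical failure rate, so the tests most likely to expose defects run earliest:

```python
# Hypothetical sketch of failure-aware test ordering: run the tests
# most likely to fail (per historical results) first, so defects are
# found as early as possible in the run.
from collections import defaultdict

def prioritize(test_names: list[str], history: list[tuple[str, bool]]) -> list[str]:
    """Order tests by descending historical failure rate.

    history: (test_name, passed) tuples from previous runs.
    """
    runs: dict[str, int] = defaultdict(int)
    fails: dict[str, int] = defaultdict(int)
    for name, passed in history:
        runs[name] += 1
        if not passed:
            fails[name] += 1

    def failure_rate(name: str) -> float:
        return fails[name] / runs[name] if runs[name] else 0.0

    # Stable sort keeps the original order for ties (e.g. brand-new tests).
    return sorted(test_names, key=failure_rate, reverse=True)
```

A machine-learning approach would refine this with signals like code-change proximity or test duration, but even this crude ordering moves defect discovery earlier in the pipeline.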
At Provar we’ve already delivered on some of these initiatives through product development and partnerships and we have even more on the horizon. Adopting AI tools within our own business is also leading to rapid productivity improvements. The title of this article came from ChatGPT, for example!
Beware of False Idols
For test automation, some solutions do use AI, but they can take several months to train on a specific customer’s application by analyzing real user actions, or they harvest data from multiple customers. Either way, reacting immediately to changes without waiting for the model to retrain is beyond the means of either approach.
Meanwhile, I’ve also seen a lot of “AI washing” by vendors desperate to show AI: solutions that ignore the ethical issues, don’t actually use AI, or, even worse, deliver no value to the user.
I recently read a book by UCL Mathematics Professor, Hannah Fry, called Hello World. She discusses both the ethical challenges of AIs in general and the differences between an algorithm and an artificial intelligence, or specifically a machine learning solution. Two quotes from her, in particular, stand out for me:
“People sometimes just see the word ‘AI’ and it’s all sparkly and magical. It can make them forget about all of the other important things that have to go alongside it.”
– Simon Brook interviewing Hannah Fry
“Whenever we use an algorithm – especially a free one – we need to ask ourselves about the hidden incentives. Why is this app giving me all this stuff for free? What is this algorithm really doing? Is this a trade I’m comfortable with? Would I be better off without it?”
– Hannah Fry, Hello World: Being Human in the Age of Algorithms
My favorite quote, however, is this one, which she uses to assess whether claims about AI are bogus – her so-called “magic” test. Apologies for the language, but I feel it’s more impactful left uncensored:
“If you take out all the technical words and replace them with the word ‘magic’ and the sentence still makes grammatical sense, then you know that it’s going to be bollocks.”
– Hannah Fry, Hello World: Being Human in the Age of Algorithms
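Tongue firmly in cheek, Fry’s “magic” test is mechanical enough to sketch in code; the buzzword list here is my own illustrative choice, not anything from the book:

```python
import re

# A playful sketch of Hannah Fry's "magic" test: swap the technical
# buzzwords in a claim for the word "magic" and see whether it still
# reads the same. The buzzword list is illustrative, not exhaustive.
BUZZWORDS = ["AI", "machine learning", "deep learning", "neural network", "blockchain"]

def magic_test(sentence: str) -> str:
    # Replace longer phrases first so "machine learning" isn't split up;
    # word boundaries stop "AI" from matching inside words like "maintain".
    for word in sorted(BUZZWORDS, key=len, reverse=True):
        sentence = re.sub(rf"\b{re.escape(word)}\b", "magic", sentence, flags=re.IGNORECASE)
    return sentence

# magic_test("Our AI uses machine learning to find bugs")
# -> "Our magic uses magic to find bugs": still grammatical, so beware.
```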
Conclusion
More in-depth thoughts on this topic, and on how some test automation solutions promote themselves as AI-driven when they are really just harnessing intelligent capabilities (an important distinction), will follow in a future blog post, but for now, I’ll close with this.
AI has unmistakable value, but the people element will remain, even when it comes to AI for software quality. AI isn’t a “magic” solve-all, so beware of solutions framing themselves as such. Embrace AI tools, learn how to use them securely and ethically, ensure they add benefit to your work practices, and measure the increase in productivity or quality they are delivering.
Interested in learning more about how Provar is using intelligent capabilities in its solutions? Connect with us today!