Provar’s Chief Strategy Officer, Richard Clark, has been generous in sharing his thoughts on AI in the test automation and software quality spaces; take a look through our recent blog posts for more of his insights. Today, we’re diving deeper into AI in this Q&A with Richard.

Be sure to follow Provar on this blog and social media for more on this hot topic!

How will the rapid evolution of AI impact the overall growth and dynamics of the Salesforce AppExchange marketplace?

Initially, I think the primary difference will be in how listings present themselves. We’ll see plenty of false AI claims from tools that are really just using an algorithm. We hope Salesforce will implement changes that help people understand which external services a listing uses, and we also want clarity on compatibility with Salesforce’s own Einstein GPT features.

Conversational and Generative AI tools should create more inbound leads for the AppExchange, as they’re becoming more popular than Google Search for specific inquiries. Salesforce must make better application recommendations to admins, highlighting apps that can improve their roles and their company’s performance. Whether those apps themselves use AI doesn’t matter; what matters is that Salesforce uses AI to evaluate its recommendations and surface the apps that add the most benefit to customers.

How can AI-driven apps on the AppExchange differentiate themselves from competitors and provide value to Salesforce users?

That’s an interesting point. I’d argue that AI-driven apps don’t inherently provide additional value to Salesforce users. However, they can add more value for the authors of those applications, for example when the authors need to release fewer updates, or when their apps become more dynamic for the organization in which they’re installed.

The biggest problem remains false claims about AI in apps. Hannah Fry, a professor at University College London and a regular TV panel guest, puts it well in her book: “If you replace all the technical words with magic and the sentence still makes grammatical sense, you know it’s going to be [excuse my language] bollocks.”

Don’t get me wrong; AI-driven apps can offer value to users. For example, one app can generate in-app help without anyone having to write it first, while another can auto-complete data entry by suggesting values based on previous similar interactions.

The apps that throw AI chat windows onto every screen so you type in what you want in a sentence instead of clicking buttons are either misguided or trying to hide the fact they have a poor user experience. Expert users use a combination of voice, keyboard shortcuts, and automation to improve their productivity.

To find real AI apps with real benefits on the AppExchange, we need Salesforce to provide some level of tagging and scoring that marketing teams can’t exaggerate. There need to be clear reasons why AI is better or faster than the alternatives.

What are AI’s most significant opportunities for your product on the AppExchange platform?

We have an OpenAI API integration in our Provar Manager product. To use that feature, customers need an OpenAI API subscription, which carries a monthly operating cost based on the number of queries. I’m excited about Salesforce’s potential to help us secure better pricing for our customers, using shared accounts and collective purchasing power to access OpenAI and other AI services.
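
For context, here’s a minimal sketch of what this kind of integration looks like from the customer’s side, using the standard OpenAI Python client. The model name, prompt, and surrounding code are illustrative assumptions rather than Provar Manager’s actual implementation:

```python
# Illustrative only: not Provar Manager's actual code.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # the customer's own OpenAI subscription key

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a Salesforce test-analysis assistant."},
        {"role": "user", "content": "Suggest likely causes for this failed test step: ..."},
    ],
)

print(response.choices[0].message.content)
```

Because billing is per token, every call like this contributes to the variable monthly cost mentioned above, which is why better collective pricing would matter to customers.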

Furthermore, Salesforce has a massive opportunity here with Data Cloud and Einstein GPT. We can generate billions of test results, and it would be horrifically expensive to build our own LLM on top of them. Letting customers apply our models to their data to tweak the results is incredibly exciting, and doing that in a secure and trusted environment is critical for enterprise adoption.

Instead of asking QA teams to write and execute test cases blindly, we can analyze existing test coverage, modify the data on those tests, and generate new test cases or data combinations. This approach helps us continually increase application quality. Feeding the bugs that occur in production back into this process is also incredibly valuable to the results we can achieve.
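
As a rough illustration of that idea, the sketch below compares the field-value combinations already covered by existing tests against the full space of interesting values and proposes the missing ones as candidate test data. The field names and values are hypothetical, not Provar’s data model:

```python
from itertools import product

# Hypothetical field values a Salesforce test suite might need to cover.
field_values = {
    "record_type": ["Customer", "Partner"],
    "country": ["US", "UK", "DE"],
    "opt_in": [True, False],
}

# Combinations already exercised by existing tests (extracted from coverage data).
already_covered = {
    ("Customer", "US", True),
    ("Customer", "UK", False),
    ("Partner", "US", True),
}

# Everything not yet covered becomes a candidate new test case.
all_combinations = set(product(*field_values.values()))
missing = sorted(all_combinations - already_covered)

for combo in missing:
    print("Candidate new test data:", dict(zip(field_values, combo)))
```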

Thinking outside of the AppExchange, we see that the same tools help us develop our product. We use them in our research, marketing, and content feedback, but most of all, AI is driving a massive increase in our online event sign-ups and product interest.

What are the significant risks and challenges associated with implementing AI-driven solutions on the AppExchange, and how can businesses mitigate these risks?

The most significant risks concern bias, toxicity, data protection, and intellectual property. There are plenty of exaggerated horror stories, which, to be fair, are essentially no different from when people paste code or data into an email, a Google search, or, god forbid, Stack Overflow.

To mitigate the risks, we should immediately update ISO 27001 policies and retrain staff to understand the risks and threats. We must also urgently educate people that the results of Generative AI are no more trustworthy than using Facebook for fact-checking: you need to challenge and check the results before quoting them. That same ability to exaggerate and invent is also why these tools are so exciting. The answer to these challenges isn’t to block access – businesses choosing to prevent access are like those that refused to move to the cloud until the 2020 pandemic forced them to.

As always, it comes down to adequate education and governance of what is being shared and how the results are used. It’s not black and white: if a developer wants a regex to validate a phone number, that’s an entirely different information category (public domain) than if I wanted to pass an AI my unique algorithm for calculating insurance premiums (company confidential). Information classification is critical.
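
To make that distinction concrete, the kind of answer the public-domain question produces looks like this. The pattern below is purely illustrative (not Provar’s, and not complete for every locale):

```python
import re

# A phone-number validation regex is exactly the kind of public-domain answer
# it's reasonable to ask an AI for; nothing confidential is being shared.
PHONE_PATTERN = re.compile(r"^\+?[0-9][0-9\s\-()]{6,14}$")

def is_valid_phone(number: str) -> bool:
    """Return True if the string looks like a plausible phone number."""
    return bool(PHONE_PATTERN.match(number.strip()))

print(is_valid_phone("+44 20 7946 0958"))  # True
print(is_valid_phone("not-a-number"))      # False
```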

Regarding data protection, toxicity, and bias, it’s worth looking into Salesforce’s announcements on their AI Cloud and what layers they’re implementing. I’m excited to see this is being opened up to ISV partners, too.

What challenges have you faced while developing and implementing AI features in your AppExchange solutions, and how have you overcome them?

I’ll be honest: it was straightforward to implement. Our biggest challenge at first was performance and time-outs when OpenAI first hit the mainstream market. They’ve since responded with geofenced services, and the paid plans have made this much more reliable.

The limits and requirements for prompting and breaking down queries and responses are also well documented, but to be honest, that’s just a learning curve, not a challenge. 
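
For anyone facing that same learning curve, here’s a minimal sketch of the kind of query break-down involved. The character-based limit and overlap are illustrative assumptions rather than documented values:

```python
# Break a long input into prompt-sized chunks before sending it to an LLM.
# The 8,000-character limit and 200-character overlap are assumptions for
# illustration, not Provar's actual settings.
def chunk_text(text: str, max_chars: int = 8000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks that stay under a rough prompt limit."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # keep a little context across chunk boundaries
    return chunks

# Each chunk would then be analyzed in its own request,
# and the partial answers combined afterwards.
```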

How is your company addressing ethical considerations when developing AI-powered applications for the Salesforce AppExchange?

Regarding security, any data we transfer or record onto our cloud platform is documented, non-PII, and anonymized against our product licenses.

For transparency, we’ve notified our customer advisory board in advance of what data we share and when. We’ve also consulted specific large enterprise customers with regulatory compliance needs before implementing changes.

Regarding accountability, all our data use in AI is optional: our users can disable those integrations if they wish. Where public AI models are used, we provide multiple options so customers can choose their trusted providers.

For fairness and guarding against bias, we template our prompts rather than acting as an open chatbot front end to the AI. However, this is where the news from Salesforce on their AI strategy will be most helpful, including implementing data masking to protect customer data from external LLMs.
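
As an illustration of prompt templating combined with basic masking, here’s a hypothetical sketch; the template, masking rule, and field names are assumptions for the example, not Provar’s implementation:

```python
import re

# A fixed template keeps the AI on-task instead of exposing an open chatbot.
PROMPT_TEMPLATE = (
    "You are assisting with Salesforce test analysis.\n"
    "Summarize the likely cause of this test failure:\n"
    "Test: {test_name}\n"
    "Error: {error_message}\n"
)

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(text: str) -> str:
    """Replace obvious PII (here, just email addresses) with placeholders."""
    return EMAIL_RE.sub("<EMAIL>", text)

def build_prompt(test_name: str, error_message: str) -> str:
    """Fill the fixed template with masked values, never free-form user chat."""
    return PROMPT_TEMPLATE.format(
        test_name=mask_pii(test_name),
        error_message=mask_pii(error_message),
    )

print(build_prompt("Create Contact", "Duplicate record for jane.doe@example.com"))
```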

How can businesses evaluate and select the most appropriate AI-driven AppExchange solutions for their specific needs?

Again, I’d suggest not picking solutions based on their AI-driven capability; pick the best tool that meets your requirements rather than the shiniest one. AI-driven solutions also carry an operational cost, so budget for variable costs for any such product and make sure the solution can limit your cost exposure, for example through a pre-pay or fair usage policy, so you don’t get an unexpectedly large bill at the end of the month, quarter, or year.

First, calculate the cost of not having a solution, so you can measure whether any solution will actually deliver the effect you need. Regardless of AI, too many solutions are implemented without checking the ROI and payback period, or without measuring whether that ROI was delivered.
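
A back-of-the-envelope version of that check might look like the following; all of the figures are made-up illustrative assumptions:

```python
# Rough ROI and payback check for an AI-driven solution (illustrative numbers).
cost_of_doing_nothing = 120_000  # e.g. annual cost of defects reaching production
annual_licence_cost   = 30_000   # fixed subscription
annual_usage_cost     = 12_000   # variable AI/API costs (capped by fair usage)
expected_savings      = 90_000   # portion of the "do nothing" cost you expect to remove

annual_cost = annual_licence_cost + annual_usage_cost
roi = (expected_savings - annual_cost) / annual_cost
payback_months = 12 * annual_cost / expected_savings

print(f"ROI: {roi:.0%}")                        # ROI: 114%
print(f"Payback: {payback_months:.1f} months")  # Payback: 5.6 months
```

The point is simply to have those numbers written down before buying, and then to measure against them afterwards.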

How does your company stay informed about the latest AI research and ethical guidelines to ensure the responsible development and deployment of your AppExchange solutions?

That is a real challenge, for the same reason it’s a challenge for everyone: there’s a lot of misinformation out there. We work with industry analysts such as Gartner and Forrester to support us, and my role is explicitly to stay abreast of the latest threats and opportunities.

UNESCO has also published a paper on ethical guidelines for AI, so research plays a significant role.

While that’s aimed at countries, not companies, evaluating the concerns and areas of potential impact is helpful. We also work closely with Salesforce, and I expect to see their security review process adjust to consider ethical guidelines and detect bias.

Currently, we restrict user input to stay within the boundaries of our application scope. This includes a User Story from Elements Cloud, Jira, or ADO, formatted using specific templates, with control over the prompts we send. I believe that apps allowing uncontrolled Generative AI access risk misuse, which could lead to potentially damaging results. Imagine if I asked my DevOps release management tool to share advice on what went wrong, instead of which changes were low risk to deploy. If people can, they will. That’s something customer service and testing taught me a long time ago!

Most of this is common sense, though. If we created an application that automatically tested websites using fuzzy logic to test different data field entries, it would be pretty obvious we’d want to avoid the malicious use of this to try and crack passwords or gain backdoor access.

Can you share with us any exciting new AI-driven features or upcoming releases that your company is working on?

Yes, we’re collaborating with several strategic partners to use AI to help enhance the integration experience. Where our products already integrate technically, there’s additional expertise that AI can provide to augment customer teams and streamline the process.

Not everyone can afford to hire the best business analyst, test architect, or DevOps engineer. Automating some of those capabilities end-to-end, and providing feedback on their effectiveness, will be transformational; it delivers the real benefits of DevOps rather than focusing on just one solution. We emphasize being a continuous quality company, and we want that commitment to apply both to our solution and to our customers’ use of it.

There’s a saying in sales that there are two winners at every opportunity: the vendor who wins the deal and the vendor who qualifies out early. It’s the same with application development. We ultimately want to maximize the business benefit of making changes, while using adoption and historical success information to determine when a change shouldn’t be made and to suggest alternative ways to achieve the same result.

In the long term, our goal is to automate the complete implementation of valid requirements and evaluate their effectiveness through reinforcement learning. We won’t see entire apps created this way anytime soon, but it makes sense to start with minor changes and work our way up from there, stopping where it’s no longer adding value or accuracy.

That’s a bit cryptic, but I can’t go into specifics for obvious reasons. Wait and see what’s announced around Dreamforce this year and next year!

Want to learn more about how Provar’s robust solutions can help your organization? Connect with us today!