AI is fast emerging as one of the most transformative technologies; it is no longer hype. As software testing professionals, we now find ourselves dealing with both 'Testing of AI' and 'AI in testing.'

'Testing of AI' refers to any activity aimed at detecting differences between an AI system's actual and expected behavior.

'AI in testing,' by contrast, means AI-based testing: applying AI techniques, or techniques guided by AI, within the test approach itself.

Testing of AI:

Do we have to take a fundamentally different approach to testing an AI system? Some aspects of AI testing are the same as for conventional, explicitly programmed rule-based systems, but AI systems pose new challenges because they are built from training data and training procedures. They are nondeterministic, statistically oriented systems.
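Because the outputs are statistical, the assertions often need to be statistical too. Here is a minimal sketch of the idea: instead of asserting exact outputs, assert that measured accuracy over many trials stays within a tolerance band. The `flaky_classifier` below is a hypothetical stand-in for a real model, and the 90% accuracy and tolerance values are illustrative assumptions.

```python
import random

def flaky_classifier(x):
    # Hypothetical stand-in for a real model: returns the true label
    # (x % 2) about 90% of the time, a wrong label otherwise
    true_label = x % 2
    return true_label if random.random() < 0.9 else 1 - true_label

def measured_accuracy(n_trials=10_000, seed=42):
    # Fix the seed so the statistical test itself is repeatable
    random.seed(seed)
    correct = sum(flaky_classifier(i) == i % 2 for i in range(n_trials))
    return correct / n_trials

# A statistical assertion: accuracy within a tolerance band,
# rather than an exact-output check
acc = measured_accuracy()
assert 0.88 <= acc <= 0.92, f"accuracy {acc:.3f} outside tolerance"
```

The tolerance band and trial count are a design decision: with 10,000 trials, a ±0.02 band around the expected 0.90 accuracy is many standard deviations wide, so the check is stable while still catching real regressions.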

'Testing of AI' demands the same tactics that testing professionals apply to testing in general:

  • Identifying the right testing characteristics
  • Understanding the (AI) system components
  • Determining the proper testing workflow
  • Contextualizing the application scenarios

As you create a test strategy for an AI system, spend time understanding the following so that you can independently evaluate the system's quality:

  • The engineering process of building models
  • Effective data cleaning/data wrangling strategies
  • Features of the domain and feature engineering strategies
  • Model complexity - especially how to deal with the overfitting problem
  • Model pipelines and model training using cross-validation objects with appropriate training and test data
  • Selection of the right algorithm for the right task and the most impactful hyperparameters for the algorithms
  • Model evaluation metrics for a variety of AI tasks
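To make the cross-validation point above concrete, here is a minimal from-scratch k-fold cross-validation loop. The majority-class "model" is a deliberately trivial stand-in; in practice you would plug in a real training routine (or use your ML framework's cross-validation utilities), but the hold-out logic is the same.

```python
import random
from statistics import mean

def k_fold_indices(n, k, seed=0):
    # Shuffle example indices, then deal them into k roughly equal folds
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_score(train, evaluate, X, y, k=5):
    # Hold each fold out once as test data; train on the remaining folds
    scores = []
    for fold in k_fold_indices(len(X), k):
        held_out = set(fold)
        X_tr = [x for i, x in enumerate(X) if i not in held_out]
        y_tr = [t for i, t in enumerate(y) if i not in held_out]
        model = train(X_tr, y_tr)
        scores.append(evaluate(model, [X[i] for i in fold], [y[i] for i in fold]))
    return mean(scores)

# Trivial stand-in "model": always predict the majority training label
def train_majority(X, y):
    return max(set(y), key=y.count)

def eval_accuracy(majority_label, X, y):
    return sum(t == majority_label for t in y) / len(y)

X = list(range(20))
y = [0] * 13 + [1] * 7            # imbalanced labels
score = cross_val_score(train_majority, eval_accuracy, X, y, k=5)
print(f"mean cross-validated accuracy: {score:.2f}")  # 0.65
```

Because every example is held out exactly once, the averaged score estimates how the model generalizes to unseen data, which is exactly the separation of training and test data that the bullet above calls for.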

AI in testing:

There is much noise in this space, but software testing is an ideal candidate for AI techniques. Testing professionals should look for opportunities wherever an expert tester must make a data-driven decision to run the testing workflow more efficiently and effectively. Tasks such as 'Risk-Based Testing,' 'Regression Test Selection,' 'Automated Test Result Analysis,' and 'Auto Healing' are good candidates where data-driven automated decision making can be beneficial.
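As a toy sketch of one such data-driven decision, regression test selection, consider ranking tests by a risk score that combines historical failure rate with whether the test exercises recently changed code. The scoring formula, field names, and test names here are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    runs: int                   # historical executions
    failures: int               # historical failures
    covers_changed_code: bool   # does the test exercise recently changed files?

def risk_score(t: TestRecord) -> float:
    # Illustrative heuristic: failure rate, doubled when the test touches
    # recently changed code; never-run tests are treated as maximally risky
    failure_rate = t.failures / t.runs if t.runs else 1.0
    return failure_rate * (2.0 if t.covers_changed_code else 1.0)

def select_tests(records, budget):
    # Pick the `budget` riskiest tests for this regression run
    return sorted(records, key=risk_score, reverse=True)[:budget]

history = [
    TestRecord("test_login", runs=100, failures=20, covers_changed_code=True),    # 0.40
    TestRecord("test_report", runs=100, failures=5, covers_changed_code=False),   # 0.05
    TestRecord("test_checkout", runs=50, failures=15, covers_changed_code=True),  # 0.60
    TestRecord("test_search", runs=200, failures=2, covers_changed_code=False),   # 0.01
]
picked = [t.name for t in select_tests(history, budget=2)]
print(picked)  # ['test_checkout', 'test_login']
```

A real implementation would learn the scoring function from historical build data rather than hard-coding it, but the shape of the decision is the same: replace an expert's intuition about which tests to run with a prediction driven by the data.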

The market is crowded with 'AI-enabled' solutions. As a testing practitioner, keep the following in mind as you innovate:

  • Critically examine your test engineering workflow and break it down into tasks.
  • Investigate data-driven automation opportunities that yield high ROI, and insert a prediction machine into the workflow.
  • Remember: reimagine your workflow as AI-enabled rather than inserting a point solution into your current workflow. As you 'AI-enable' your testing workflow, some tasks performed by expert testers may be automated; as a result, the ordering and emphasis of the remaining tasks may change, and you may need to create new tasks.