Molecular counterfactuals help researchers explain AI predictions

Machine learning


Understanding machine learning predictions by exploring the road not travelled

Machine learning methods can efficiently solve complex problems by training models on known data and applying those models to related problems. However, understanding why a model returns a particular result, which is vital to validating and applying that result, is often technically challenging, conceptually difficult and model-specific. Now, a team in the US working on explainable AI for chemistry has developed a method that generates counterfactual molecules as explanations and works flexibly across different machine learning models.
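The core idea of a counterfactual explanation is model-agnostic: treat the model as a black box and search for a minimally perturbed input that changes its prediction. The team's method does this with molecular structures; the toy sketch below illustrates the same principle on a numeric feature vector. The function name `find_counterfactual`, the search strategy and the threshold "model" are all illustrative assumptions, not the authors' actual implementation.

```python
def find_counterfactual(predict, x, step=0.1, max_steps=50):
    """Search for a nearby input that flips the black-box prediction.

    Tries progressively larger single-feature perturbations; because it
    only calls predict(), it works with any model (hypothetical helper,
    not the published method, which perturbs molecular structures).
    """
    original = predict(x)
    for k in range(1, max_steps + 1):
        for i in range(len(x)):
            for sign in (1, -1):
                candidate = list(x)
                candidate[i] += sign * k * step
                if predict(candidate) != original:
                    return candidate  # smallest flip found so far
    return None  # no counterfactual within the search budget


# Toy black-box "model": classifies by a simple sum threshold.
predict = lambda x: int(sum(x) > 1.0)

cf = find_counterfactual(predict, [0.4, 0.5])
# cf is a nearby point whose prediction differs from the original input's.
```

Comparing the counterfactual with the original input then reveals which feature change was responsible for flipping the prediction, which is the explanatory payoff of the approach.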