Three Ways A Causal Approach Can Improve Trust In AI

Forbes Technology Council

Bernd Greifeneder is the CTO and Founder of Dynatrace, a software intelligence company that helps to simplify enterprise cloud complexity.

IT, development and business departments are under more pressure than ever to innovate. At the same time, applications have become increasingly complex as organizations move to more dynamic, multicloud environments in pursuit of greater agility. DevOps and SRE teams need to make sense of this complexity and optimize their services, but doing so drains the time they can devote to innovation. The move to cloud-native architectures is also making it harder for these teams to quickly identify vulnerabilities.

As this continues, organizations are increasingly turning to AI to improve efficiency and reduce the burden on overworked teams by automating operations, development and security workflows.

But how can organizations trust that AI is making the right decisions and implementing the right solutions?

Precision and accountability are key to building trust in AI, particularly when organizations are asking it to provide insights into optimizing digital services or identifying security vulnerabilities. This is why many are turning to causal AI. This approach models cloud environments through a dependency graph, or topology, that retains context and semantics, helping make links between cause and effect. The graph is refreshed in real time and isn’t reliant on historical data for learning behavior, unlike typical machine learning. This opens up the "black box" where other types of AI operate, helping build trust for three key reasons.
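
To make this concrete, here is a minimal sketch in Python of the kind of dependency graph such an approach maintains. The service names are hypothetical, and a real platform would discover and refresh this topology automatically rather than hand-coding it:

```python
# Minimal sketch of a service topology: each service maps to the
# downstream services it depends on. All names are hypothetical.
topology = {
    "web-frontend": ["checkout"],
    "checkout": ["payments", "inventory"],
    "payments": ["payments-db"],
    "inventory": ["inventory-db"],
    "payments-db": [],
    "inventory-db": [],
}

# The graph is updated as the environment changes (here, a deploy adds
# a dependency), rather than being learned from historical data.
def add_dependency(graph, service, dependency):
    graph.setdefault(service, []).append(dependency)
    graph.setdefault(dependency, [])

add_dependency(topology, "checkout", "fraud-check")
```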

Reason 1: Decisions are more transparent.

First, causal AI is transparent and can be held accountable for its actions. The reasoning behind decisions is explainable and easily understood by humans. This is because the AI follows a deterministic approach, based on a causal graph that learns and updates in real time. This reflects the step-by-step fault tree analysis commonly used in safety engineering.

For example, if an application’s performance degrades, causal AI will work through dependencies to identify anomalies across the entire service delivery chain. It can trace these back to the precise root cause and surface the context behind the problem, detailing the dependencies involved, before triggering appropriate action for resolution. Human operators can refer back to this fault tree to follow the entire thought process and see how and why the AI reached its decision.
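
As a rough illustration of that fault tree walk, the following Python sketch follows anomalous dependencies from an affected service until it reaches nodes whose own dependencies are healthy. The topology and the set of anomalous services are hypothetical; a production system would derive both from live monitoring data:

```python
# Deterministic root-cause walk over a dependency graph, in the spirit
# of fault tree analysis. All names and values are hypothetical.
topology = {
    "frontend": ["checkout"],
    "checkout": ["payments", "inventory"],
    "payments": ["payments-db"],
    "inventory": [],
    "payments-db": [],
}

# Services currently showing anomalous behavior (e.g., high latency).
anomalous = {"frontend", "checkout", "payments", "payments-db"}

def root_causes(service, graph, anomalous):
    """A root cause is an anomalous node none of whose own
    dependencies are anomalous: the deepest fault on the path."""
    causes = set()

    def walk(node):
        bad_deps = [d for d in graph[node] if d in anomalous]
        if not bad_deps:
            causes.add(node)
        for dep in bad_deps:
            walk(dep)

    walk(service)
    return causes

print(root_causes("frontend", topology, anomalous))
# {'payments-db'}: the upstream anomalies are symptoms, not causes.
```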

Reason 2: Results can be reproduced.

A cornerstone of science is reproducibility: repeating an experiment and getting the same result increases the trustworthiness and accuracy of the output. The same is true of AI and automated decision-making. To trust decisions made by AI, users need to know that those decisions are highly reproducible.

Causal AI enables reproducibility because of its transparent, fully explainable, step-by-step fault tree analysis. Human operators can see the full set of circumstances that led to a decision, so they can be confident the outcome was correct and that it would be the same if an identical set of circumstances arose again. Ultimately, this increases trust in the decisions the AI makes.
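
In miniature, reproducibility follows from making the diagnosis a pure function of a recorded snapshot of the environment: replaying the same snapshot always yields the same conclusion. A toy Python sketch, with hypothetical values:

```python
# Reproducibility in miniature: a diagnosis that is a pure function of
# a recorded snapshot can be replayed later with an identical result.
snapshot = {
    "topology": {"checkout": ["payments"], "payments": []},
    "anomalous": {"checkout", "payments"},
}

def diagnose(snap):
    # Follow anomalous dependencies to the deepest one; no randomness,
    # no hidden state, so identical inputs give identical outputs.
    node = "checkout"
    while any(d in snap["anomalous"] for d in snap["topology"][node]):
        node = next(d for d in snap["topology"][node]
                    if d in snap["anomalous"])
    return node

first = diagnose(snapshot)
replay = diagnose(snapshot)
assert first == replay == "payments"  # same snapshot, same conclusion
```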

Reason 3: Bias is eliminated.

Causal AI helps eliminate trained bias in decision-making. Other forms of AI, such as machine learning, correlate information from historical data but operate within a black box. Often, even their designers cannot explain how the AI arrives at a decision, because the output is merely a statistical convergence. If these systems are fed poor data or converge on a skewed pattern, they can misread signals or reach the wrong conclusion, entrenching bias rather than removing it.

More thought needs to go into how organizations harness AI in a way that takes it beyond time-based correlation of limited data sets. AI needs the full context behind the data on which it bases decisions, so it can identify genuine causal dependencies and rule out spurious ones. For example, it doesn’t always rain when somebody opens an umbrella, so it would be wrong for AI to conclude that an open umbrella means rain. When it comes to AIOps, causal AI can remove this kind of bias because it understands how IT services are interconnected and depend on each other.
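
The umbrella example can be made concrete with a toy simulation: in data generated by a world where storms cause both rain and open umbrellas, the observed correlation between umbrellas and rain is near-perfect, yet intervening to force umbrellas open leaves the rain rate unchanged. This is an illustrative sketch, not any product’s implementation:

```python
import random
random.seed(0)

# Toy world: a storm is the common cause of both rain and umbrellas.
def world(force_umbrella=None):
    storm = random.random() < 0.3
    rain = storm
    umbrella = storm if force_umbrella is None else force_umbrella
    return umbrella, rain

# Observational data: umbrellas and rain co-occur perfectly...
obs = [world() for _ in range(1000)]
p_rain_given_umbrella = sum(r for u, r in obs if u) / sum(u for u, _ in obs)

# ...but intervening on the umbrella does not change the rain rate.
do = [world(force_umbrella=True) for _ in range(1000)]
p_rain_do_umbrella = sum(r for _, r in do) / len(do)

print(round(p_rain_given_umbrella, 2))  # 1.0: pure correlation
print(round(p_rain_do_umbrella, 2))     # ~0.3: rain's true base rate
```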

Imagine a customer transaction failing at checkout in a mobile app. Rather than assuming the fault lies with an anomaly detected in one service that failed at the same point in time, and directing teams to investigate it, causal AI follows the dependency graph to identify which other processes and services caused the transaction to fail, even if they show no symptoms. This can reveal whether the anomaly was just another symptom of the fault rather than its root cause. With other approaches, the AI would likely spot the anomaly and assume it was the root cause. If the problem were a recurring issue, the AI would receive the same input each time and would soon become biased toward assuming the anomaly was the reason for the failure.

It’s only possible to eliminate this bias with causal AI, which evaluates the full context of what happened and why. This enables users to better understand its rationale and gives them greater confidence that the AI is making a fair and unbiased decision.

Causality is key to improving trust.

While it can help build trust, causal AI isn’t the holy grail. Organizations must ensure they choose the right AI for a problem. For use cases where the rules or systems change dynamically, causal AI is likely the best approach. Anyone looking to build a use case should start by identifying the risk to their organization if automated processes were influenced by biased decision-making. For example, is there a risk of revenue loss, or could there be reputational damage?

When evaluating vendors of AI solutions for these use cases, organizations should ensure they have a good understanding of how the AI operates and reaches its decisions, so they can identify whether it follows a causal approach. Ultimately, this will give teams the confidence to trust AI enough to step away from manual involvement.

If they win the trust of their workforce and customers, organizations will be well on the road to successfully adopting AI and automation — and delivering the business transformation they’ve aspired to.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

