Are Algorithms Making Better Decisions?


In boardrooms and government offices, the excitement around artificial intelligence and predictive analytics is palpable.

Algorithms now promise to tell us which customers are likely to churn, which loan applicants are risky, or which maintenance issues deserve priority. Yet despite spending billions on analytics tools, many organizations quietly admit that performance barely improves after rollout. Why? Because the real challenge is not the algorithm itself; it is how people use (or ignore) its advice.

A new study published in the Strategic Management Journal by Hyunjin Kim, Edward L. Glaeser, Andrew Hillis, Scott Duke Kominers, and Michael Luca, titled Decision Authority and the Returns to Algorithms, delves into this problem.

The researchers examined how much of the promised value of data-driven systems disappears when frontline professionals have the authority to reject their recommendations. Their evidence is a powerful reminder that the benefits of technology depend as much on governance and culture as on code.
The study analyzed a large-scale pilot in an inspection department that used two predictive models – one simple and one sophisticated – to rank restaurants by their risk of violations. On paper, both algorithms were far superior to the old manual system.

They predicted risky locations more accurately and should have helped inspectors catch more serious problems. But when the rankings were put into practice, the city saw almost no improvement in actual outcomes. The reason? Inspectors often ignored the rankings, choosing instead to visit familiar or convenient sites or to balance workloads geographically.

Those human overrides effectively wiped out the model's predicted gains.
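A back-of-envelope calculation shows how quickly low compliance erodes a model's promise. The numbers below are invented for illustration; they are not taken from the study:

```python
# Hypothetical illustration: realized gains shrink with the override rate.
# All figures are made up for clarity; none come from the study.

baseline_hit_rate = 0.20  # share of visits finding a serious violation under manual scheduling
model_hit_rate = 0.40     # share that would find one if the ranking were always followed
follow_rate = 0.25        # fraction of visits where inspectors actually follow the ranking

realized_hit_rate = follow_rate * model_hit_rate + (1 - follow_rate) * baseline_hit_rate
predicted_gain = model_hit_rate - baseline_hit_rate    # 0.20
realized_gain = realized_hit_rate - baseline_hit_rate  # 0.05

print(f"Predicted gain: {predicted_gain:.0%}, realized gain: {realized_gain:.0%}")
# With only a quarter of visits following the model, three quarters
# of the promised improvement never materializes.
```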

This pattern – excellent predictions, poor results – occurs far beyond public-sector inspections. Banks, hospitals, logistics companies and retailers all experience it.

Employees and managers rely on their experience more than the system, or they work toward a different goal altogether. A credit officer may bypass the model's rejection of a borderline applicant to achieve a monthly loan-volume target. A maintenance team may skip the flagged site to save travel time. These choices may make sense, but collectively they reduce the value of the analytics investment.

The lesson for leaders is clear: deploying algorithms is not just a software project; it is a change in who holds decision authority. Managers should make it clear when employees are expected to follow the model and when professional judgment can override it. A simple policy like 'follow the model unless any of these documented exceptions apply' can make a huge difference. Transparency is equally important – each override should be documented with a concise reason. This allows leaders to audit patterns, identify valid local insights, and detect biases or habits that undermine performance.
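As a concrete sketch, such a policy can be enforced in software. The exception codes, field names, and log format below are hypothetical, not drawn from the study:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Documented exceptions under which an override is permitted (illustrative).
ALLOWED_EXCEPTIONS = {"SITE_CLOSED", "ACCESS_DENIED", "SAFETY_CONCERN"}

@dataclass
class OverrideLog:
    entries: list = field(default_factory=list)

    def record(self, site_id: str, inspector: str, exception_code: str, reason: str) -> bool:
        """Accept an override only if it cites a documented exception, and log it."""
        if exception_code not in ALLOWED_EXCEPTIONS:
            return False  # rejected: follow the model's ranking
        self.entries.append({
            "site_id": site_id,
            "inspector": inspector,
            "exception": exception_code,
            "reason": reason,
            "logged_at": datetime.now(timezone.utc).isoformat(),
        })
        return True

log = OverrideLog()
log.record("RST-0042", "inspector_7", "ACCESS_DENIED", "Premises locked; owner unreachable")  # accepted
log.record("RST-0043", "inspector_7", "TOO_FAR", "Long drive")  # rejected: not a documented exception
```

The point of the log is not bureaucracy: it turns every exception into auditable data that leaders can later review for patterns.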

Aligning incentives also matters. If employees are rewarded for speed or convenience rather than accuracy or risk reduction, they will naturally game the system. Performance metrics should match the outcomes the algorithm was designed to improve. For example, an oversight unit measured on the number of visits completed per week might resist a model that recommends far-away sites; a unit measured on actual risk reduction would embrace it.
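A toy comparison makes the misalignment concrete. The figures are hypothetical:

```python
# Two inspectors' weeks, scored two ways (hypothetical numbers).
fast_inspector = {"visits_completed": 25, "violations_resolved": 3}
model_follower = {"visits_completed": 12, "violations_resolved": 8}

def visits_metric(week):          # rewards speed and convenience
    return week["visits_completed"]

def risk_reduction_metric(week):  # rewards what the algorithm optimizes for
    return week["violations_resolved"]

# Under the visits metric the fast inspector "wins" (25 vs 12);
# under the risk-reduction metric the model follower does (8 vs 3).
print(visits_metric(fast_inspector), visits_metric(model_follower))
print(risk_reduction_metric(fast_inspector), risk_reduction_metric(model_follower))
```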

The study also emphasizes the value of feedback loops. When employees can see whether their overrides outperform or underperform the algorithm, both the people and the model improve. Over time, this builds trust in the system and sharpens professional intuition. In contrast, when overrides are never reviewed, the same costly patterns are repeated indefinitely.
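One way to close the loop is a periodic audit that compares outcomes when the model was followed against outcomes when it was overridden. The record format below is illustrative:

```python
# Hypothetical audit of logged visits; field names are invented for illustration.
visits = [
    {"followed_model": True,  "violation_found": True},
    {"followed_model": True,  "violation_found": False},
    {"followed_model": False, "violation_found": False},
    {"followed_model": False, "violation_found": True},
    # ... in practice, thousands of logged visits
]

def hit_rate(records):
    """Share of visits that found a violation."""
    return sum(r["violation_found"] for r in records) / len(records) if records else 0.0

followed = [v for v in visits if v["followed_model"]]
overrode = [v for v in visits if not v["followed_model"]]

print(f"Hit rate when following the model: {hit_rate(followed):.0%}")
print(f"Hit rate after overrides:          {hit_rate(overrode):.0%}")
# Reviewed regularly, these two numbers reveal whether overrides add
# local insight or quietly erode the model's value.
```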

Everyday ride-hailing experiences help to clarify this dynamic. In Nigeria, drivers often confer with passengers before deviating from the app's suggested route, using local knowledge of traffic or road conditions to find a faster path. These thoughtful overrides, guided by experience and consensus, can lead to better trips. In contrast, drivers in countries such as the United States generally follow the app's instructions precisely, trusting the system for fairness and consistency. Both approaches make sense in their contexts, but they illustrate how culture, beliefs, and institutional expectations shape the balance between human discretion and algorithmic authority. Organizations face a similar choice every day: when to trust the model and when to trust the expert.

For Nigerian companies investing heavily in digital transformation, this insight is particularly timely. Many banks, telecoms, and energy companies now use predictive analytics for credit scoring, fraud detection, and maintenance scheduling. But the technology will pay off only if the decision-making process evolves with it. Without clear governance, human overrides and conflicting incentives can undo years of data science work. The returns to algorithms come not from the mathematics alone, but from disciplined execution.

Looking ahead, the most promising frontier is explainable AI. When an algorithm can clearly show why it made a recommendation – by revealing the factors behind the score or ranking – users are more likely to trust it and follow it. Combining this with structured override logs can ultimately bridge the gap between prediction accuracy and operational performance.
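As a minimal illustration, even a simple linear risk score can be made explainable by reporting each factor's contribution alongside the total. The weights and factor names below are invented:

```python
# Invented weights for an illustrative linear risk score.
WEIGHTS = {
    "days_since_last_inspection": 0.4,
    "past_violation_count": 1.2,
    "customer_complaints": 0.8,
}

def score_with_explanation(features: dict) -> tuple[float, list]:
    """Return the risk score plus each factor's contribution, largest first."""
    contributions = [(name, WEIGHTS[name] * value) for name, value in features.items()]
    total = sum(c for _, c in contributions)
    contributions.sort(key=lambda pair: pair[1], reverse=True)
    return total, contributions

score, why = score_with_explanation(
    {"days_since_last_inspection": 2.0, "past_violation_count": 3.0, "customer_complaints": 1.0}
)
print(f"Risk score: {score:.1f}")
for factor, contribution in why:
    print(f"  {factor}: +{contribution:.1f}")
```

Seeing that a score is driven mainly by, say, past violations gives a user a concrete reason either to accept the recommendation or to articulate why local knowledge says otherwise.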

The bottom line is that better predictions do not automatically lead to better decisions. Human judgment is inevitable, but it must be integrated systematically, not haphazardly. Organizations that define when to follow the model, require transparency for exceptions, and align incentives with true objectives will ultimately realize the value their algorithms promise. In the age of AI, leadership is still about accountability, not automation.

Omagbitse Barrow is the Chief Executive of Learning Impact, an Abuja-based strategy and management consultancy firm.