Algorithmic decision-making and artificial intelligence (“AI”) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies will encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.
Ensuring that societal values are reflected in algorithms and AI technologies will likely require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing.
Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That's because, like algorithms, companies' internal operations appear as "black boxes" to those on the outside. This gives managers an informational advantage over the investing public, one that could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage.
To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. We should subject societally impactful “black box” algorithms to comparable scrutiny.
Credit: James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian, and Vic Katyal for hbr.org (November 2018)