Human knowledge workers have been compelled to be increasingly rational, data-driven and transparent about their decision-making. How does the emergence of intuitive and opaque AI colleagues fit into this?

New team members may occasionally cause new conflicts – especially so when they belong to another species and are treated differently from the rest.
Consider the case of machine-learning-based software systems being deployed to help knowledge workers make better decisions. It doesn’t really matter whether we call this “AI” or not, or whether we believe in the possibility of “General AI”.
As a baseline understanding, let’s assume that McKinsey is right in predicting that “30 percent of the activities in 60 percent of all occupations could be automated“, which would mean that “most workers – from welders to mortgage brokers to CEOs – will work alongside rapidly evolving machines.”
Now, for a knowledge-working human, there are by and large two ways to make a decision:
- One is slow, clean, step-by-step, deliberate, data-driven and ultimately explainable. In its ideal form, the very way in which the decision has been formed is, by virtue of its logical and empirical foundation, identical to its explanation.
- The other is fast, messy, intuition-based, experience-driven and often hard (or even impossible) to explain, at least right away – i.e. the decision-maker isn’t in a position to immediately understand, explain or justify how and why their decision is indeed right and rational.
In the history of ideas about making corporate workers successful, Malcolm Gladwell’s 2005 hit book “Blink” was probably the last prominent defense of the merits of the latter, intuition-based approach.
Today, the ideal of corporate and managerial decision-making is anything but ‘blink’-based (at least outside the C-level, where exceptions apply – intuition-based decision-making remains far more acceptable in the upper echelons of management). The vast majority of knowledge workers, however, have to base their decisions strictly on a transparent compound of data and logic (often with an unfortunate emphasis on the former alone), and for about a decade now, entire organisations have set out to “become more data-driven“.
In practice this means that a human decision-maker, say Sally from Marketing, will not only be required to base her decision on data, but must also be able to demonstrate and explain how exactly her decision was made and what data she used in support of her conclusion. “Trust my training and experience, this just feels right” isn’t what Sally’s bosses or their board want to hear, even if they may occasionally operate on a hunch alone.
Interestingly, the opposite is also true for Sally’s new “AI” colleague, which for now may provide business recommendations without the burden of having to explain how it came to its conclusions.
This isn’t because it wouldn’t be useful to have AI systems explain their output (see the note on “Explainable AI” below), but because they currently simply can’t: the best-performing breed of machine learning systems is incapable of providing proper insight into how exactly a specific conclusion came about. This is especially true for systems based on deep neural networks (“Deep Learning”), which aren’t designed (and perhaps cannot be designed) to provide discrete specifics about how exactly their decision-making output has been computed.
Instead, they are judged purely on past statistical performance: we have extensively evaluated the system’s output against empirical data and found its performance to be good enough (i.e. as good as or better than Sally) for it to be deployed to produce predictions on yet unseen cases.
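To make this concrete, here is a minimal sketch in Python of how such a deployment decision is typically justified – purely by comparing the model’s aggregate performance on held-out data against a human baseline, with no per-decision explanation in sight. The data is synthetic and the baseline accuracy attributed to Sally is a made-up figure:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical business decisions and their known outcomes.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# An opaque but well-performing model: we never ask it to explain itself.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Hypothetical accuracy of the human decision-maker ("Sally") on comparable cases.
HUMAN_BASELINE = 0.78

model_accuracy = accuracy_score(y_test, model.predict(X_test))

# The deployment decision rests solely on aggregate past performance.
deploy = model_accuracy >= HUMAN_BASELINE
print(f"model accuracy: {model_accuracy:.2f} -> deploy: {deploy}")
```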
To be clear, there are laudable efforts around “Explainable AI“, which investigates methods for having machine learning based systems explain or justify their outputs in more detail. However, it is far from clear whether these efforts will succeed, even as the machine learning methods in question are being rolled out across industry and government at full speed.
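For illustration, one such post-hoc probe is permutation importance, sketched below (reusing the hypothetical model and held-out data from the previous example). Note that it only yields a global ranking of input features, not an explanation of any individual decision – which is precisely the gap “Explainable AI” research tries to close:

```python
from sklearn.inspection import permutation_importance

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# features whose shuffling hurts most are deemed most "important".
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranking = sorted(
    enumerate(result.importances_mean), key=lambda item: item[1], reverse=True
)
for feature_idx, importance in ranking[:5]:
    print(f"feature {feature_idx}: mean importance {importance:.3f}")
```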
Assuming that we’ll see more and more of these opaque yet performant machine learning systems deployed alongside human workers, we should perhaps be prepared for a new type of workplace conflict to arise out of this “discriminatory” practice – especially if the collaborative human-machine frontier turns out to be as crucial as predicted (see again the above-linked report by the World Economic Forum).
Given that many knowledge workers already know this kind of decision-making double standard from their superiors, the emergence of similarly exempted AI colleagues may well be experienced as an unfortunate power grab – one that works to the detriment of widespread and successful AI adoption.