Smart Machines Can Make Organizations Less Smart
Digital tools should complement group learning, not take its place
Based on the research of Sirkka Jarvenpaa

Artificial intelligence has been hailed as a game-changer for business operations. But a survey of 300 organizations, released in July by the Massachusetts Institute of Technology, found that 95% are getting zero return on their investments in it.
In a new article, Sirkka Jarvenpaa, professor of information, risk, and operations management at Texas McCombs, explores one possible reason. She asks whether smart technology is making organizations less intelligent. She co-authored the article with Liisa Välikangas of Technical University of Denmark.
By organizational intelligence, she means the collective knowledge that staffers develop and share through word of mouth, debating diverse viewpoints, and solving problems together: knowledge such as work routines, tricks of the trade, and shared experiences.
Such learning is especially vital for navigating change, she says. “Organizational intelligence is the ability to adapt and bring your organization with you.”
Surveying a broad range of research, Jarvenpaa finds several ways AI and other digital technologies can undermine organizational intelligence. To preserve it, she says, teams need to work in concert with intelligent tech, challenging its answers and making their own judgments. This interview has been edited for length and clarity.
What got you interested in this question?
The news is full of articles about the automation of major business processes. But at the same time, very few companies can really report substantive overall productivity gains. Organizations don’t seem to be learning, even though they have these efficient machines at every person’s fingertips.
I’ve also spent some time in Northern Europe, which is advanced in its use of the latest tools. There, public sector organizations found that the majority of their staff use large language models [the engines of AI programs such as ChatGPT], but the organizations are not necessarily learning from that use.
How can intelligent tech pose a threat to organizational intelligence?
Collective learning and organizational intelligence are fundamentally social. The purpose of organizations is to coordinate people’s actions toward some kind of goal that we have all agreed to work toward.
We have highly efficient digital tools, but we can’t learn without old-fashioned social processes, like discussions and playing devil’s advocate.
You point out three ways in which technology can challenge group learning. Please talk about the first: relying too much on digital answers that AI provides.
You have images, you have documents, and you treat them as a source of knowledge or valued information. But they are just data objects.
Large language models are terrible at temporal sense-making. They mix and match different kinds of information that are detached from their sources. There’s a lack of context: for example, what the data is documenting, who documented it, and when.
We need to make sure that we are not dropping our social tools. Are we discussing the output? Do we question what is not present in the answer? These digital objects don’t tell us what is not known, which can be as important as what is known.
Another process is acceleration, which happens when machines solve problems faster than people can. What’s the danger?
Curiosity can become lost as we speed up our information processing with these tools. How do we maintain our curiosity by bringing different perspectives, expertise, and skill sets together?
Another thing that can be lost with digital tools is hunches. Sometimes, somebody is making a live presentation, and just by looking at their expression, you know that something is wrong. But neither you nor the person can articulate it yet. By having a conversation, you can put words to the problem.
Acceleration has to be complemented with social tools so that you’re leveraging these hunches that can act as a trigger for more collective discovery.
Finally, please talk about the need to vet information as a group.
Your team needs to interrogate the answers you get from these tools. Don’t take them for granted. Check the references, because references can be hallucinated.
Again, look for limitations, for situations that are not covered by the answer. Ask about possible short-term and long-term harms.
For example, have we taken into consideration a previous marketing campaign that has similarities to this marketing campaign? Maybe there were some unanticipated problems that happened last time, and the machine may not know about them.
Another great practice is role reversal. Ask the large language model to ask you questions, to see how well you understand its answer. As you answer, you and the AI can do a joint assessment of the solution.
It’s like mentoring. Both the mentor and mentee need to continue to learn to sustain learning. The student who’s been mentored is providing answers but is also continually doubting and asking questions of the mentor. The mentor and mentee are learning collectively.
Is the lesson to integrate machines into organizational learning rather than letting them take its place?
Yes. I’m in information systems. I believe that there’s a lot of value from technology. It is just, how do we use it?
It’s not throwing out the technological tools but making sure that we are using all our social tools and techniques to keep up with them. In the long term, humans and machines working together beat humans working alone or machines working alone.
“Organizational Learning Lens: Does Intelligent Technology Make Organizations More or Less Intelligent?” is published in Strategic Organization.
Story by Steve Brooks