
 Message 1758 
 Mike Powell to All 
 AI-caused skill erosion 
 17 Sep 25 09:07:23 
 
TZUTC: -0500
MSGID: 1507.consprcy@1:2320/105 2d2ffc7e
PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1
FORMAT: flowed

Researchers warn that skill erosion caused by AI could have a devastating and
lasting impact on businesses - but it may already be too late

Date:
Tue, 16 Sep 2025 17:57:00 +0000

Description:
Esko Penttinen, Associate Professor, and Joona Ruissalo, Post-doctoral
researcher at Aalto University, tell us about the risks of skill erosion, and
why AI makes the problem more urgent.

FULL STORY
======================================================================

The AI boom is changing workplaces in myriad ways that extend well beyond
efficiency gains. As companies automate more knowledge work, researchers warn
of a worrying threat: the erosion of human skills. 

This "de-skilling", once seen as the natural shedding of obsolete tasks, can
leave employees unable to perform essential functions when automation fails. 

Few studies capture this as clearly as The Vicious Circles of Skill Erosion,
published in 2023. That paper examined an accounting firm where reliance on
automation fostered complacency and eroded staff awareness, competence, and
the ability to assess outputs. 

When the system was removed, the firm realized its employees could no longer
perform core accounting tasks. 

The paper's findings are more pertinent than ever in an era where AI tools 
are becoming ubiquitous. 

I spoke with two of the paper's original authors, Esko Penttinen (Associate
Professor at Aalto University) and Joona Ruissalo (Post-doctoral researcher,
also at Aalto University), about the risks of skill erosion, why the issue is
more urgent in the age of AI, and what businesses can do to prevent it.

Your research talks about AI's erosion of workplace skills. What motivated you
to explore that subject?

 Esko Penttinen (EP): In our research team, we pursue what we call a
phenomenon-based or problem-based approach to research. By that, we mean that
we always start a research project with a practical problem that we encounter
in real life. 

In this case, it was a serendipitous interaction with an informant at an
accounting company who mentioned, in a side sentence, that an automation
system had been removed from their IT architecture, revealing that their
accountants' skills related to the underlying business process had eroded. 

We found this observation made by the accountant fascinating and embarked on 
a case study to understand how this skill erosion manifested in the
organization and how the erosion had happened. This truly was a revelatory
case in the sense that it is extremely difficult to get access to this kind 
of failure case. 

Organizations are typically reluctant to share their experiences regarding
failures. We were very lucky and remain grateful that the organization let us
study this phenomenon.

Why does it matter for every business and beyond? 

 EP: Our main finding points towards the delicate balancing act of handing
tasks over to technology while remaining mindfully engaged in the business process.
This is what we depict in the figure in our paper (Figure 3 on page 1391). 

We claim that most companies need to take a stance on this mindful conduct
vs. automation reliance conundrum. These loops are by no means mutually
exclusive, but we claim that it is very easy to go to an extreme: either
fully automate something or conduct the process entirely manually. 

The sweet spot is located somewhere in between these extremes. But this is
easier said than done, as is shown in our case.

(Figure 3 from the paper; image credit: Association for Information Systems)

Some might say that this is just evolution and that things have to change.
How is this different? 

 EP: This is a good question, and we struggle with it. For decades (and
centuries for that matter), the objective of technological development has
been to free human effort for more productive work. 

There is a pro-automation argument claiming that we should automate
everything that can be automated so that human effort can be directed to
higher thoughts. 

This applies in many arenas: accountants should not be manually entering
invoice data into systems, but rather analyzing that data to provide
insights to managers. 

The flipside of this is that higher thoughts cannot be had without engaging 
in details. 

By engaging in tedious manual work, employees often immerse themselves in the
nitty-gritty details of the business process. And through this immersion,
better insights can be gained.

Given the quasi-evangelistic nature of the current AI ecosystem, can anything
be done to mitigate this erosion?

 EP: There are measures that organizations can take, for sure. We are writing
a paper on this topic, hopefully getting it out next year. 

In that paper, we stress the importance of technical and organizational
control points: checks imposed on employees to verify that they are in the
loop, in other words, that they understand the actions taken by the automated
agents or AI tools. 

 Joona Ruissalo (JR): Other options are to organize recurrent workshops where
employees work together to solve complex or unusual cases, or even to build
automation-free training environments to raise awareness of potential gaps in
skills and domain knowledge. 

Also, to avoid falling asleep at the wheel, organizations can run periodic
audits in which employees are asked to justify outputs, or implement a
nudging feature that regularly asks the human employee to validate an output
with justifications. 
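
To make the idea of such a nudging feature concrete, here is a minimal sketch
in Python. It is not from the paper or the interview; the function name,
sampling rate, and log file are illustrative assumptions only. The wrapper
occasionally holds back an automated output until the employee records a short
justification, which could later feed the periodic audits mentioned above.

import random

VALIDATION_RATE = 0.2  # assumed fraction of outputs that trigger a nudge

def deliver_output(task_id, ai_output, ask_user):
    """Return the automated output, sometimes requiring a justification first.

    ask_user is any callable that shows a prompt and returns the typed text,
    e.g. the built-in input() in a console workflow.
    """
    if random.random() < VALIDATION_RATE:
        justification = ask_user(
            f"Task {task_id}: briefly explain why this output is correct "
            f"before accepting it:\n{ai_output}\n> "
        )
        # Keep the justification so later audits can review human engagement.
        with open("validation_log.txt", "a") as log:
            log.write(f"{task_id}\t{justification}\n")
    return ai_output

# Example use: result = deliver_output("invoice-4711", automated_posting, input)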

In addition, adding explanation features to the essential systems employees
use in their daily work allows them to learn, or quickly recap, how the
system operates to produce a specific output. 

These measures should ideally be implemented in tandem to challenge employees
to engage in reflection and to critically evaluate AI outputs. 

Actively maintaining an organization's skill and knowledge capital puts it in
a position to adjust quickly to external shocks, such as core systems being
taken down for an unspecified time, and to changing technological conditions,
so that it can better co-evolve with new technological capabilities.

One of the themes we discussed via email was prompt engineering (or
composition, as I put it). Can you draw a link between prompt engineering and
the issue of skill erosion?

 JR: Getting the prompt right is one thing, but it is quite another to
evaluate the outputs it produces. 

These require different skill sets, but both necessitate competence in the
problem domain, such as accounting or software engineering, and lengthy
exposure to its contextual intricacies before you become efficient at
composing the prompt and then validating the output you receive. 

Of course, you can take ready-made prompts and let them spit out a response
without scrutinizing it in depth, but where is the critical evaluation of the
outputs and the active reflection on why and how you are going about the
process? 

This is where the dynamics of skill erosion come into play: repeatedly
relying on pre-validated prompts, whether written by you in the past or by
someone else, ends your active engagement with the task, and you no longer
apply your skills and knowledge to the full. 

As the prompts continue to produce the desired outputs, such as accurate
financial information or lines of code, we run the risk of automation
complacency, where we become even more reliant on the generative AI's outputs
and are pushed to the fringes of being in the loop. 

And as more time passes, this is the moment when the issue of skill erosion
might blow up in our faces: the prompt that produced accurate output for a
long time no longer does so, because the underlying GenAI tool's model has
changed (such as OpenAI forcing the move from GPT-4 models to a single GPT-5
model) or because the software built on lines of code created with GenAI
tools breaks. 

If we have become complacent about conducting work mindfully and digging into
the details, it is likely that skills have eroded over time. 

Therefore, on an individual level, critical thinking and active reflection
are essential. GenAI tools' responses can look convincing at first, but, as
we know, looks can be deceiving: the responses can be suboptimal or partly
hallucinated. 

This issue is even more profound for junior employees, who will likely have
fewer chances to immerse themselves in the work context and fewer challenges
to solve if they are mostly evaluating GenAI tools' outputs. In other words,
there are fewer chances to learn on the job and become experienced.

Given the urgency and the clear risks associated with the
phenomenon of skill erosion, why isn't this issue pushed atop agendas? 

 EP: What makes this phenomenon tricky is its latent nature. If a company
fully automates a business process, there are no problems as long as the
system works. 

This was true in our case organization as well. 

The system was in a way too perfect, effectively optimizing the client
companies' handling of their fixed assets. So why push something atop
agendas if there are no problems? 

Problems only arise when something goes wrong. In this case, it was a top
management decision to discontinue the automation system. 

The environment changed, leading to the discovery of the detrimental latent
effects on employee skills. In other contexts, it might be some other trigger
that unearths the impactful long-term problems related to automation
reliance.

Anything else you want to discuss that wasn't covered in the questions above?

 EP: Which skills should be retained, and which can be forgotten or allowed
to erode? Drawing this line seems to be problematic, partly because
environments change.

Something considered redundant now might not be considered redundant in the
future. 

We encourage companies to engage in scenario analysis: what are the possible
and foreseeable alternative scenarios on the organizational, technological,
and environmental fronts? 

How likely is it that an automation technology or an AI tool that an
organization has deployed suddenly becomes unavailable? 

How likely is it that an environmental change impacts the skills required in
the business process that I am personally responsible for? 

What if our organization makes a strategic decision that impacts our IT
infrastructure in a way that jeopardizes it? 

These are the questions that we would like companies to consider.
======================================================================
Link to news story:
https://www.techradar.com/pro/researchers-warn-that-skill-erosion-caused-by-ai-could-have-a-devastating-and-lasting-impact-on-businesses-but-it-may-already-be-too-late
$$
--- SBBSecho 3.28-Linux
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700
SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 470
SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 712/848 902/26 2320/0 105 304 3634/12 5075/35
PATH: 2320/105 229/426

