
WHERE ROBOTS KNOCK, KNOWLEDGE LOCKS



In a week when the transparency of governments, industries and multinationals has once again been called into question, a new study* takes a look at knowledge hiding within organisations…


As humans, we are intimately acquainted with the concept of ‘black box’ functionality. We rely on our autonomic nervous system (ANS) to control essentials such as heart rate, breathing and digestion, and we generally only notice these processes when things go wrong, or when we begin to suspect they might go wrong in the future.


And it’s kind of similar to governments and industries really. Things tend to jog along just so long as a government’s citizens, or an industry’s consumers (and competing constituents), appear to be well-served.


However, it’s a very different story when things do go wrong here. Instead of seeking help from a professional, as we would in a medical emergency, the feeling is that, as citizens and consumers, we ourselves should be able to fully understand all the issues.** After all, even our justice system is built on the assumption that twelve random non-experts should be capable of deciding between guilt and innocence, albeit with the assistance of a judge.***


And of course, as upstanding citizens ourselves, we would never deliberately try to conceal something to the detriment of our colleagues or organisations…


The new study published this week defines knowledge hiding as ‘an intentional attempt by an individual to withhold or conceal knowledge that has been requested by another person. This behaviour can induce a breakdown of trust among employees, reduce team creativity and collaborative efficiency, and ultimately erode core organisational competitiveness by deteriorating cultural cohesion and weakening innovation capacity’, adding:


‘These risks become particularly salient in environments where AI technologies are increasingly embedded into organizational workflows. The uncertainty and threat appraisals triggered by such technological transitions may further amplify employees’ propensity to engage in knowledge hiding.’


According to the study, there are two forces, or theories, at play here:

 

1)   The Conservation of Resources (COR) Theory proposes that individuals are fundamentally driven to acquire, protect and maintain valuable resources, including time, energy and social support… Common forms of threat appraisal, including work stress and job security, have been shown to significantly deplete individuals’…

 

2)   …Psychological Availability: the degree to which individuals are ready to deploy their physical, emotional and cognitive resources when engaging in their work roles.


With the result that:

 

‘From the perspective of cognitive resources, when individuals face threats of resource loss, they experience stress responses and reallocate limited resources to cope with such threats. The perceived threat associated with AI awareness is not a momentary event but rather a persistent stressor that requires employees to continually invest cognitive resources in assessing the risk of being replaced. This ongoing cognitive load encroaches upon the cognitive resources that should otherwise be directed to the work role.’


The study then looks at how a third force (and theory) may reduce the impact of these pressures:

 

3)   Social Identity Theory (SIT) suggests that ‘individuals construct their self-concept through their membership in social groups and reinforce their sense of belonging by aligning with group values. Within organizational contexts, this theory extends to the concept of person-organization fit, which refers to the degree of alignment between individuals and organizations in terms of values, goals and culture. …Employees whose values closely align with organizational culture tend to interpret AI implementation as an integral part of strategic change rather than a threat to their individual careers. This perspective reduces psychological resistance and alleviates the depletion of psychological availability.’


The Business of Pleasure, in common with all commerce, will increasingly need to both comprehend and come to terms with the diverse impacts of AI on its constituent organisations and the teams of individuals on which those organisations rely.  


But it will also need to understand how the prospect of radical change will increasingly impact on its audiences, their priorities and behaviours.


And this isn’t just a commercial imperative. I personally believe that it is part of the unwritten contract between The Business of Pleasure and the people who give us their hard-earned pennies. 


I have a little mantra that I deploy when I notice colleagues’ ‘psychological availability’ beginning to wane, like after a major on-sale or a week of cancelled previews:

 

‘We are not the sausage-makers or sock-sellers. We are the people the sausage-makers and sock-sellers come to when they want to dream.’

 

And with that great privilege, I believe, comes great responsibility.

 

DT 19 July 2025


*Zhen Liu, Quanxing Lin, Shimin Tu and Xin Xu, Frontiers in Psychology, 18 July 2025

 

**Within my own limited experience of The Business of Pleasure, I can think of several common practices that would require as much explanation as major brain surgery.

 

***The recent Leveson Report has controversially suggested that ‘more complex cases’ should be excluded from trial by jury.





 
 
 
