What makes an employee AI-Ready? It’s not just tech skills

HR Tools
11.03.2026
Amelie Vrijdags

Imagine two colleagues: Sarah and Tom.

The moment her organisation makes a generative AI tool available, Sarah starts experimenting. Within a few days, she has watched tutorials, mastered basic prompting, and is already showing her team how to save time in their daily operations. Tom, in the same team with the exact same access, feels overwhelmed. He notes that he “doesn’t have time to play around with it,” argues that “AI makes too many mistakes,” and declares he’ll “start using it once it becomes mandatory.” After one awkward first attempt, he stops trying altogether.

Same organisation. Same tools. Same access. Yet completely different adoption outcomes. This is not a coincidence. It is not a matter of age, education, or even general work motivation. Even when training is offered and policies are clear, adoption speed and skill levels can vary dramatically between colleagues. Most people will recognise aspects of both Sarah and Tom in themselves. Importantly, these differences should not be interpreted as a divide between those who “can” and those who “cannot” work with AI. Rather, they reflect different starting points: everyone can get on the AI boat, but not everyone boards it in the same way.

The Shift from Skills to Psychology

While technical know-how plays a role, research shows that who we are, our personality (Seibert et al., 2021), and how efficiently we process information, our cognitive abilities (Grassini et al., 2025), are strong predictors of successful technology adoption.

In an effort to identify the right talent, some organisations and hiring managers look for tools to assess acquired AI skills, such as prompting. However, this is a "moving target" approach. Learnable skills change so rapidly that AI skills tests rarely offer a stable foundation for long-term talent decisions. By the time a dedicated “AI skills test” is fully validated, the technology—and the skills required to use it—may already be outdated.

What remains stable are the underlying psychological markers, which we can assess reliably through classical psychometric tests. Hudson’s Tools for Talent combine the results of various assessments — such as a personality questionnaire and cognitive reasoning tests — in a smart, weighted approach. This allows them to map four essential core domains that together form an individual’s Digital Learning Agility.
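As an illustration of what a weighted combination of assessment scores could look like, here is a minimal sketch. The four domain names mirror the model described in this article, but the weights, the 0–100 score scale, and the `digital_learning_agility` function are hypothetical assumptions, not Hudson's actual scoring method.

```python
# Hypothetical sketch: combining standardised assessment scores (0-100 scale)
# into a single Digital Learning Agility indicator.
# The weights below are illustrative assumptions, not Hudson's actual model.

DOMAIN_WEIGHTS = {
    "learning_ability": 0.30,   # cognitive reasoning tests
    "agile_thinking": 0.25,     # Openness-related personality scales
    "critical_thinking": 0.25,  # critical disposition plus reasoning
    "growth_mindset": 0.20,     # Emotional Stability / resilience scales
}

def digital_learning_agility(scores: dict[str, float]) -> float:
    """Return the weighted average of the four domain scores."""
    return sum(DOMAIN_WEIGHTS[d] * scores[d] for d in DOMAIN_WEIGHTS)

profile = {
    "learning_ability": 80,
    "agile_thinking": 70,
    "critical_thinking": 60,
    "growth_mindset": 50,
}
print(digital_learning_agility(profile))  # 0.3*80 + 0.25*70 + 0.25*60 + 0.2*50 = 66.5
```

In practice such a composite would of course rest on validated norms and weightings rather than round numbers; the sketch only shows the weighted-average idea.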

Those who score high on these competencies are equipped to adopt new technology quickly and effectively, while those who score lower may require more time, guidance, and structured support. Below, we explore the four competencies that form the foundation of Hudson’s Digital Learning Agility model, along with their scientific underpinnings.

Digital Learning Agility

1. Learning Ability: Cognitive horsepower

First of all, there is raw processing capacity: Learning Ability. Successfully adopting new technology requires learning new concepts and is significantly influenced by cognitive abilities, such as working memory and processing speed (Grassini et al., 2025). In our model, we measure this through Abstract Reasoning. AI systems are often complex and "inscrutable" (hard to interpret directly). Employees need the cognitive horsepower to recognise patterns, make sense of ambiguous output, and integrate it into their work.

  • What this means for the workplace: In a chaotic information landscape, Learning Ability is the engine of efficiency. High cognitive ability acts as a lever: these individuals process the logic of AI faster, allowing them to adapt to new technologies quickly.

2. Agile Thinking: Fuelled by Openness

Another core aspect of digital agility is being receptive to the unknown and to change, which our model defines as Agile Thinking.

Research demonstrates that while traits like Agreeableness often predict the acceptance of established technologies, the Big Five personality factor Openness to Experience is the distinct predictor for positive attitudes toward Artificial Intelligence specifically (Grassini et al., 2025). AI represents a disruptive frontier. Professionals who score high on Openness possess a naturally innovative and change-oriented mindset, and are more likely to view AI as a tool for exploration rather than a threat (Kovbasiuk et al., 2025).

  • What this means for the workplace: It’s about intellectual curiosity. Employees with high Agile Thinking don’t wait for a manual; they love experimenting with new tools and use out-of-the-box thinking to discover how technology can transform their daily tasks.

3. Critical Thinking: The bridge to trust

A common misconception is that trusting AI means believing the machine is perfect; in reality, blind trust leads to disappointment. The third core domain is therefore Critical Thinking, which combines a critical personality with strong cognitive reasoning as the true bridge to adoption.

Interestingly, research reveals that employees develop higher trust in AI systems when they actively uncover their limitations, such as hallucinations or technical constraints (Übellacker, 2025). Having hands-on experience with the system's boundaries is essential; users must understand not just what AI can do, but also what it cannot do, to trust it effectively (Glikson & Woolley, 2020). By identifying these gaps, users move from blind reliance to realistic expectations.

  • What this means for the workplace: We need professionals who can objectively analyse and evaluate AI output, which defines the Critical Thinking domain. This "human-in-the-loop" capability ensures that AI output is validated, ethical, and contextualised within the business environment.

4. Growth Mindset: The antidote to "AI Anxiety"

Adopting AI involves trial, error, and inevitable glitches. This is where a Growth Mindset becomes crucial. Our research identifies Emotional Stability as a defining factor in sustaining this mindset.

High levels of Neuroticism (the inverse of Emotional Stability) are consistently correlated with “Technostress” and “AI Anxiety” (Symasek et al., 2025), while Self-Efficacy serves as a buffer against these fears (Montag et al., 2023). Professionals with high Emotional Stability possess the resilience to handle the unpredictability of AI. When a prompt fails, they don't panic; they persevere.

  • What this means for the workplace: A true Growth Mindset is more than just wanting to use a tool. It requires the emotional stability to stay positive and the perseverance to keep going through setbacks. When combined with the ambition to master new tasks, this resilience helps employees turn potential “technostress” into a drive to develop new expertise.

From Insight to Impact: Actionable Applications

The value of Hudson’s Digital Learning Agility framework lies in moving away from a "one-size-fits-all" approach to digital transformation. By applying these insights, organisations can tailor their talent strategies for both recruitment and internal development.

Identifying your AI champions

In selection, the framework provides a lens for identifying future-proof talent. By focusing on individuals with high Digital Learning Agility, organisations can find their "AI Champions"—the Sarahs who will lead the charge, mentor others, and drive organic adoption of new tools through their natural curiosity and cognitive agility.

Tailoring support for every profile

For existing teams, these insights provide a roadmap for personalised support. A lower score on a specific domain is not a verdict; it is a signal for a different approach. And even the enthusiastic early adopters benefit from the right structure to maximise their impact.

  • Providing Structure: While a "Sarah" thrives on trial and error, a "Tom" may need a highly structured environment with clear step-by-step guidelines and psychological safety to overcome initial hesitation. By allocating dedicated, management-sanctioned experimentation time, organisations increase the likelihood that “Toms” will actively engage with AI instead of reverting to familiar routines.

  • Targeted Development: If the friction in adoption is caused by the Critical Thinking dimension, development can focus on validation techniques. If it's a Growth Mindset challenge, the focus shifts to creating psychological safety and reinforcing a "human-in-the-loop" approach, ensuring employees feel secure enough to experiment without fear of failure.

  • Creating Synergy: Knowing these profiles allows for strategic pairing—connecting high-scorers with those who need more guidance creates a peer-to-peer learning culture. This also allows the “Sarahs” to strengthen their skills in mentoring, reflection, and knowledge sharing.
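The tailoring logic above can be sketched as a simple lookup from a person's lowest-scoring domains to a suggested support focus. The domain names come from the model, but the threshold, the recommendation texts, and the `development_priority` helper are illustrative assumptions, not a validated development protocol.

```python
# Hypothetical sketch: mapping low-scoring Digital Learning Agility domains
# to a support focus drawn from the article. Threshold and wording are
# illustrative assumptions, not a validated development protocol.

SUPPORT_FOCUS = {
    "learning_ability": "step-by-step guidelines and extra practice time",
    "agile_thinking": "sanctioned experimentation time on low-stakes tasks",
    "critical_thinking": "training in validating and contextualising AI output",
    "growth_mindset": "psychological safety and a human-in-the-loop approach",
}

def development_priority(scores: dict[str, float],
                         threshold: float = 60) -> list[str]:
    """Return support recommendations for every domain scoring below the
    threshold, weakest domain first."""
    low = sorted((d for d in scores if scores[d] < threshold), key=scores.get)
    return [SUPPORT_FOCUS[d] for d in low]

tom = {"learning_ability": 72, "agile_thinking": 45,
       "critical_thinking": 58, "growth_mindset": 40}
for focus in development_priority(tom):
    print(focus)
# Weakest first: growth_mindset (40), then agile_thinking (45),
# then critical_thinking (58); learning_ability (72) needs no extra support.
```

The point of the sketch is simply that a profile yields a prioritised, individualised plan rather than a pass/fail verdict.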

Conclusion: A Holistic View of Talent

While organisational AI readiness—encompassing strategy, business alignment, and technology infrastructure—is a vital foundation, it only tells part of the story. There remains a significant difference between employees in terms of the enthusiasm and speed with which they adopt AI; individuals possess varying levels of personal readiness.

Understanding individual differences in AI readiness is not about deciding who belongs in the future of work, but about ensuring that everyone has a viable path to get there.

Curious how these insights can transform your talent strategy? Contact us.

About the author

Amelie Vrijdags, Senior Expert | Expert Psychologist

Amelie Vrijdags is a senior expert and Expert Psychologist in Hudson Benelux’s R&D department, which develops assessment instruments that guide organisations through various HR procedures in both the private and public sectors. As Hudson Benelux’s main point of contact for all questions related to the quality of its assessment instruments, she is also involved in most research studies carried out by Hudson and its academic partners.

References

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660.

Grassini, S., Oltedal Thorp, S., Sævild Ree, A., Sevic, A., & Cipriani, E. (2025). Distinct predictors of positive attitudes toward artificial intelligence and general technology: big five traits, gender, and age. Behaviour & Information Technology.

Kovbasiuk, A., Triantoro, T., Przegalińska, A., Sowa, K., Ciechanowski, L., & Gloor, P. (2025). The personality profile of early generative AI adopters: a big five perspective. Central European Management Journal, 33(2), 252-264.

Montag, C., Kraus, J., Baumann, M., & Rozgonjuk, D. (2023). The propensity to trust in (automated) technology mediates the links between technology self-efficacy and fear and acceptance of artificial intelligence. Computers in Human Behavior Reports, 11.

Seibert, D., Godulla, A., & Wolf, C. (2021). Understanding How Personality Affects the Acceptance of Technology: A Literature Review. Working paper.

Symasek, L., Yeazitzis, T., Weger, K., & Mesmer, B. (2025). Recent Developments in Individual Difference Research to Inform the Adoption of AI Technology. Systems, 13(3), 156.

Übellacker, T. (2025). Making sense of AI limitations: how individual perceptions shape organizational readiness for AI adoption. arXiv preprint.
