03/03/2026
The AI trust gap: Why employees fear automation more than leaders
Across the UK, AI adoption is accelerating, investment is rising, and board confidence is high. But our research reveals that trust on the ground is not keeping pace.
Workforce data from our latest Wishlist survey shows a clear pattern:
- Early-career professionals are significantly more likely than senior leaders to believe AI will replace people in most areas of work.
- Meanwhile, nearly all leaders report active AI investment, yet a notable minority of junior staff are unsure what’s being implemented or how it will affect them.
Why AI feels threatening at the frontline
Our research indicated that leaders frame AI around:
- Productivity
- Efficiency
- Risk reduction
- Competitive advantage
But to workers, AI feels different - it feels personal. Early-career employees associate automation with:
- Reduced control over how work is done
- Opaque decision-making
- Fewer opportunities to build career-defining skills
Our research suggests that when AI is deployed as a top-down decision, particularly via generic, off-the-shelf tools, employees experience it as something done to them, find it threatening, and see their on-the-ground knowledge of processes and systems ignored. Such deployments prescribe an idealised workflow, whereas in reality teams rely on informal shortcuts to keep processes and the business running. That perception gap creates an unrealistic set of use cases for AI, which leads to resistance before a single AI model goes live and to failure when one does.
The hidden risk of off-the-shelf AI
Generic, off-the-shelf AI tools often:
- Surface inconsistent or low-quality data
- Generate outputs that users don’t trust
- Add monitoring without improving autonomy
- Disrupt processes rather than streamline them
When AI exposes structural weaknesses in systems and data architecture, employees blame the technology. In reality, the problem is implementation: AI readiness is not just about adopting tools; it's about developing systems people trust.
Bespoke software as a trust engine
At Propel Tech, we approach AI differently. As bespoke systems developers, we treat AI integration as an engineering and design challenge rather than a procurement exercise. Bespoke software development allows organisations to:
- Understand use cases in real time
- Map real operational workflows before introducing AI
- Co-design solutions with frontline teams that actually work
- Embed transparency, governance, and oversight into interfaces
- Improve data quality before automation is layered in
- Keep human decision-making at the centre of AI workflows
When teams see their input reflected in the systems they use, AI shifts from replacement to augmentation. Trust in AI is not built through messaging; it is built through detailed systems audits and rigorous software development that integrates teams into the process and produces meaningful technology architecture, so that AI genuinely improves the workplace.
Is your infrastructure ready?
Many AI failures are not model failures; they are failures of code quality, integration, data readiness, or security.
Before investing further in AI tools, organisations should ask:
- Is our existing software architecture AI-ready?
- Is our data structured and secure?
- Are we building on stable, maintainable code?
- Have we assessed long-term scalability and governance risk?
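Questions like "Is our data structured?" can be made concrete with automated checks. The sketch below is a minimal, illustrative example of a data-readiness probe, assuming tabular records arrive as a list of dicts (for instance, rows parsed from a CSV export); the column names and the 5% threshold are hypothetical, not part of any real audit tool.

```python
# Minimal sketch of an automated data-readiness check (illustrative only).
# Flags required columns that are absent and columns with too many blanks.

def data_readiness_report(rows, required_columns, max_missing_ratio=0.05):
    """Return a list of human-readable data-quality issues."""
    issues = []
    if not rows:
        return ["dataset is empty"]
    # Columns that appear in at least one record.
    present = set().union(*(row.keys() for row in rows))
    for col in required_columns:
        if col not in present:
            issues.append(f"missing column: {col}")
            continue
        missing = sum(1 for row in rows if row.get(col) in (None, ""))
        ratio = missing / len(rows)
        if ratio > max_missing_ratio:
            issues.append(f"column '{col}' is {ratio:.0%} empty")
    return issues

# Example: two of three records lack an email; no record has created_at.
rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},
    {"id": 3},
]
print(data_readiness_report(rows, ["id", "email", "created_at"]))
# -> ["column 'email' is 67% empty", "missing column: created_at"]
```

A real audit would go further (schema types, referential integrity, access controls), but even a check this small surfaces the kind of structural weaknesses that otherwise only appear after an AI model is deployed.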
If you’re unsure about where to start, book your FREE AI Software Audit to assess your code quality, security posture, and AI readiness before scaling automation.