A growing number of companies are experimenting with so-called “digital twins” — AI versions of employees designed to think, respond and even make decisions like the real person.

The idea is spreading, and more companies are expected to adopt the technology as AI tools mature.

Richard Skellett, a senior analyst at a technology consultancy, has built his own AI twin.

His “Digital Richard” is trained on his own meetings, documents and presentations, and can be queried to help with business decisions or client work.

It also has separate private sections for personal tasks.

Skellett’s company has rolled out similar digital twins across its workforce, and other firms are beginning to test the concept.

In some cases, they’ve even been used to cover staff on leave or help ease people into retirement, without needing to hire replacements.

Supporters say the benefits are clear. Instead of sending emails, making calls or sitting through meetings, colleagues can simply ask a digital twin for updates.

Josh Bersin, a US-based consultant, says the technology is making his team far more productive, even coining the term “superworker” to describe employees boosted by AI.

But the rise of digital twins is also raising uncomfortable questions, starting with who actually owns the AI version of you: the worker or the company?

Should workers be paid more if their digital twin increases output? And what happens if the AI makes a mistake?

Experts caution that using personal data to train these systems touches on issues around consent and employment rights. However, there’s little clear regulation in place.