Why businesses judge AI like humans — and what that means for adoption

As businesses rush to adopt AI, they’re discovering an unexpected truth: Even the most rational enterprise buyers aren’t making purely rational decisions. Their subconscious requirements go far beyond conventional software evaluation standards.
Let me share an anecdote: It’s November 2024, and I’m sitting in a New York City skyscraper, working with a fashion brand on their first AI assistant. The avatar, Nora, is a 25-year-old digital assistant displayed on a six-foot-tall kiosk. She has sleek brown hair, a chic black suit and a charming smile. She waves “hi” when she recognizes a client’s face, nods as they speak and answers questions about company history and tech news. I came prepared with a standard technical checklist: response accuracy, conversation latency, face recognition precision…
But my client didn’t even glance at the checklist. Instead, they asked, “Why doesn’t she have her own personality? I asked about her favorite handbag, and she didn’t name one!”
Changing how we evaluate technology
It’s striking how quickly we forget these avatars aren’t human. While many worry about AI blurring the lines between humans and machines, I see a more immediate challenge for businesses: A fundamental shift in how we evaluate technology.
When software begins to look and act human, users stop evaluating it as a tool and begin judging it as a human being. This phenomenon of judging non-human entities by human standards is called anthropomorphism; it has been well studied in human-pet relationships and is now emerging in human-AI relationships.
When it comes to procuring AI products, enterprise decisions are not as rational as you might think, because the decision-makers are still human. Research has shown that unconscious perceptions shape most human-to-human interactions, and enterprise buyers are no exception.
Thus, businesses signing an AI contract aren’t just entering into a “utility contract” seeking cost reduction or revenue growth anymore; they’re entering an implicit “emotional contract.” Often, they don’t even realize it themselves.
Getting the ‘AI baby’ perfect?
Although every software product has always had an emotional element, when the product becomes nearly indistinguishable from a real human being, this aspect grows far more prominent and far more unconscious.
These unconscious reactions shape how your employees and customers engage with AI, and my experience tells me just how widespread these responses are; they’re deeply human. Consider these four examples and the psychological ideas underlying them:
When my client in New York asked about Nora’s favorite handbag, wishing she had a personality of her own, they were tapping into social presence theory: treating the AI as a social being that needs to feel present and real.
One client fixated on their avatar’s smile: “The mouth shows a lot of teeth — it’s unsettling.” This reaction reflects the uncanny valley effect, where nearly human-like features provoke discomfort.
Conversely, a visually appealing yet less functional AI agent sparked praise because of the aesthetic-usability effect — the idea that attractiveness can outweigh performance issues.
Yet another client, a meticulous business owner, kept delaying the project launch. “We need to get our AI baby perfect,” he repeated in every meeting. “It needs to be flawless before we can show it to the world.” This obsession with creating an idealized AI entity suggests a projection of an ideal self onto our AI creations, as if we’re crafting a digital entity that embodies our highest aspirations and standards.
What matters most to your business?
So, how can you lead the market by tapping into these hidden emotional contracts, and beat competitors who are simply stacking up one fancy AI solution after another?
The key is determining what matters for your business’s unique needs. Set up a testing process. This will not only help you identify top priorities but, more importantly, deprioritize minor details, no matter how emotionally compelling they are. Since the sector is so new, there are almost no ready-made playbooks. But you can be a first mover by establishing your own way of figuring out what suits your business best.
For example, the client’s question about the AI avatar’s personality was validated by testing with internal users. By contrast, most testers couldn’t tell the difference between the several versions the business owner had agonized over for his “perfect AI baby,” meaning we could stop at a “good enough” point.
To help you recognize these patterns more easily, consider hiring team members or consultants with a background in psychology. None of the four examples above is a one-off; each reflects a well-researched psychological effect that also occurs in human-to-human interactions.
Your relationship with the tech vendor must also change: They need to be a partner who navigates the experience with you. Set up weekly meetings after signing the contract and share your takeaways from testing so they can build a better product for you. If you don’t have the budget for that, at least buffer extra time to compare products and test with users, allowing those hidden “emotional contracts” to surface.
We are at the forefront of defining how humans and AI interact. Successful business leaders will embrace the emotional contract and set up processes to navigate its ambiguity; that is how they will win the market.
Joy Liu has led enterprise products at AI startups and cloud and AI initiatives at Microsoft.