AI adoption has moved fast — faster than most organizations’ ability to make it meaningful. Tools are deployed, pilots are launched, and dashboards light up, yet many leaders are left asking the same question: Why doesn’t this feel like progress yet?
In this episode of #shifthappens, Erin Rollenhagen, Founder & CEO of People‑Friendly Tech, shares a grounded perspective on what makes AI adoption work in practice. Her insights move past surface‑level innovation and into something more durable: understanding where people struggle, simplifying what feels overwhelming, and designing within clear boundaries so trust has room to grow.
Core Insights from the Conversation
Throughout the conversation, one idea comes through consistently: AI succeeds not when it tries to impress, but when it quietly supports people in moments that matter. When experiences feel natural, helpful, and respectful, adoption follows and trust builds over time.
Start Where Users Hurt
Early on, Erin shares what might be the most important principle for anyone working with AI:
“You go where the pain is.” It sounds obvious, but it’s also where many AI initiatives go wrong.
Too often, teams start with a capability. Add a chatbot, automate this decision, or use AI here — and only later try to justify it with a user need. But that’s backward. Real value comes from starting with the moments that frustrate people, slow them down, or make them feel uncertain.
And here’s the key nuance: don’t ask users what to build — ask them which tasks feel slow, tedious, or frustrating to complete.
When users suggest features, they are usually offering a solution to a problem they haven’t fully articulated. The real work is digging deeper. Why do they want that feature? What’s getting in their way? What are they trying to accomplish that feels harder than it should?
When teams skip that step, they risk building exactly what was requested while still failing to solve the real problem.
Simplify Until It’s Obvious
Once you’ve found the pain, the instinct is to fix it with complexity: more options, controls, and intelligence.
However, the better move is often the opposite.
As Erin puts it, “Simplifying is an amazing source of innovation.”
Some of the most transformative tech products didn’t win because they did more — they won because they made things easier to understand. As Erin notes, this is why people first gravitated to tools like TurboTax and QuickBooks. These tools took systems most people found complex and intimidating and made it easy to understand what mattered and take action.
AI is especially powerful here, not because it’s flashy, but because it can quietly take some of the burden off the user. One example Erin shares involves using AI to summarize long, jargon‑heavy healthcare documents in plain language. The original documents still exist, and nothing is hidden, but the summary removes barriers to understanding critical health information, helping people know what’s happening and what to do next.
That small shift matters. When people feel like they understand what’s happening, trust grows. Confidence grows. Suddenly, the experience feels supportive instead of adversarial.
Simplification doesn’t mean oversimplifying. It means respecting people’s time, attention, and mental energy.
Turn Constraints into Leverage
Security, compliance, and regulation usually show up in conversations as blockers to AI adoption: things teams must “deal with” before they can get creative.
This perspective flips that framing.
Constraints, Erin explains, are what make creativity possible. Just like solving a math problem requires boundaries, designing responsible AI requires knowing where the walls are. Once those walls are clear, teams can move with confidence.
That mindset is especially important in regulated industries. Decisions about where AI models live, how data moves, and what safeguards exist aren’t just technical details — they directly affect user trust.
And it’s not only about what AI decides but how AI interacts.
Does a chatbot understand different accents or different phrasings? How about different ways of expressing the same need? If not, some users will experience the system as less natural or less fair than others. That’s not a technical glitch; it’s a trust issue.
Choosing the right use cases matters, too. High‑stakes decisions demand extra care, testing, and often human involvement. Other moments, like summaries, recommendations, or backend efficiency, offer room for AI to help without putting trust at risk.
Align Business Goals with Human Outcomes
One of Erin’s most grounded insights is the acknowledgment that leadership goals and user needs are both valid, yet different.
As Erin explains, “The leadership’s always right about what they’re trying to accomplish for their brand. And the users are always right about where their pain points are.”
Problems arise when teams prioritize one and ignore the other.
Wanting to be seen as innovative is a legitimate business goal, but innovation only lands when it connects to something meaningful in a user’s real life. AI features introduced purely for optics tend to feel hollow. Features introduced to solve real problems tend to speak for themselves.
The role of good design – and honest consulting – is finding the overlap. Sometimes that means recommending a different AI approach than leadership initially imagined. Those conversations can be uncomfortable, but they’re often the difference between short‑term excitement and long‑term success.
Build Trust One Decision at a Time
What makes this conversation resonate is its honesty. There’s no promise of instant transformation and no single framework guaranteed to make AI adoption effortless. Instead, the focus stays on intention — on choosing the right problems, respecting human limits, and designing experiences that feel supportive rather than disruptive.
Building AI that people welcome doesn’t happen all at once. It happens through small, thoughtful decisions: simplifying rather than adding, testing rather than assuming, and meeting people where they are rather than where technology wants to go.
When organizations lead with empathy, clarity, and trust, AI stops feeling like something users must adapt to and becomes something they rely on. That’s how progress sticks, and that’s how change feels human.
Episode Resources
#shifthappens Research: The State of AI Report
#shifthappens Insights:
#shifthappens Podcasts:
Erin Rollenhagen on LinkedIn
Dux Raymond Sy on LinkedIn