The 8 Beliefs Keeping Engineering Firms Stuck on AI

Key Takeaways:

  • Engineering firms are blocked by belief systems that do not respond to feature lists or vendor demos, and addressing those beliefs is the prerequisite to real adoption.
  • The 8 beliefs fall into three categories: identity threats, cultural friction, and operational misunderstanding.
  • After working with 40+ engineering projects, the pattern is consistent: belief shifts precede tool adoption, every time.
  • The antidote is a structured, low-risk starting point that lets leaders act without betting the firm.

Last month I sat across from the owner of a 30-person engineering firm who told me, with complete sincerity, that he had been “about to start with AI” for 14 months. He had attended three conferences, evaluated two platforms, read a shelf of articles, and changed exactly zero workflows.

He is not unusual. According to McKinsey’s November 2025 State of AI survey (n=1,993 across 105 countries), 88% of companies now report regular AI use in at least one business function, and only 39% can trace any financial benefit from it. That gap between adoption and impact is not a technology problem. It is a belief problem.

After implementing AI across 40+ engineering projects over the past two years, I keep seeing the same pattern. Capable, technically sharp leaders who built careers on knowing the right answers are hesitating for reasons that have almost nothing to do with software selection or pricing.

These are belief systems, and belief systems do not bend when you show someone a product demo or hand them a comparison spreadsheet.

The 8 Beliefs (And Why They Matter More Than Your Tech Stack)

Over the past year, I have catalogued the specific beliefs that surface again and again in conversations with engineering firm owners, principals, and department heads. Some come from industry forums and published research, but most come from the honest conversations that happen after a conference session ends and leaders stop performing confidence.

These 8 beliefs, ranked by a combination of frequency and stopping power, explain more about AI stagnation than any technology comparison ever could.

1. “I know AI matters, but I don’t know where to start.”
This is the most common belief, and it looks like a logistics problem on the surface. Underneath, it is a fear of looking incompetent in a domain where competence was always the foundation of professional respect. Leaders who built careers on having answers now face a domain where they do not, and asking for help feels uncomfortably like admitting they fell behind. I have watched firm owners spend 6 to 12 months in “research mode” rather than risk looking like a beginner in front of their team.

2. “My expertise is becoming worthless.”
This is the deepest wound in the entire list. Leaders who spent decades accumulating domain knowledge watch AI replicate portions of it in seconds, and the threat feels personal in a way that budget or security concerns never do. In my practice, this fear shows up most strongly in senior engineers with 20+ years of experience, and it is often the hidden reason behind surface-level objections about data security or cost.

3. “If people find out I use AI, they will respect me less.”
Teams are using AI in secret because the cultural signal in most firms says that competence means doing it yourself. Wharton professor Ethan Mollick’s research shows that when workers know their AI use is monitored, they use it significantly less, even when reducing usage hurts their performance. In our Fall 2025 Accelerator, we saw this dynamic firsthand: participants were far more open about their AI use once leadership modeled it first.

4. “AI will just make me do more work, faster, with no escape.”
UC Berkeley Haas researchers found in a 2026 study (published in Harvard Business Review) that workers using generative AI did not work less. They worked faster, took on broader projects, and often extended into more hours voluntarily. The treadmill runs faster, and nobody feels like they can step off. This fear is rational, and it is also solvable when firms decide in advance what recovered time will be used for.

5. “We need to get it right before we start.”
This is perfectionism dressed as diligence. Leaders form committees, launch pilot programs, and wait for a comprehensive strategy while the people closest to the actual work know exactly where the friction is and cannot get permission to fix it. I worked with one firm that spent 9 months evaluating platforms before we helped them get their first meaningful AI win in under two weeks.

6. “AI is for tech companies, not for businesses like mine.”
This belief is fading but remains persistent among owner-operators in service, manufacturing, and trades. It conflates AI adoption with “becoming a tech company,” which misreads AI entirely. AI is infrastructure for every business, the same way electricity and telephones were. In our Accelerator cohorts, the participants who are often most surprised by AI’s practical value are the ones from non-tech backgrounds who assumed it was not for them.

7. “If I don’t act right now, it’s already too late.”
This is the panic belief, and it creates as much paralysis as belief number one, just from the opposite direction. Instead of delaying because they lack a first step, leaders freeze because they feel the window has already closed. A Dataiku/Harris Poll survey of 504 CEOs (March 2025) found that 74% said they could lose their jobs within two years if they do not deliver measurable AI-driven business gains. That kind of pressure creates urgency that often paralyzes rather than mobilizes.

8. “Whatever time AI saves, we will repay by fixing mistakes.”
This is the trust tax. Leaders experience AI as a trade where speed now means rework later, and when rework becomes unpredictable, trust collapses and pilots stall before they can prove value. This belief is addressable by starting AI integration on workflows where the verification burden is manageable (internal documentation, email drafting, formatting) rather than on high-stakes technical deliverables.

The 5 Patterns Underneath

Across all 8 beliefs, five deeper patterns emerge that reveal what is really going on beneath the surface.

Pattern 1: Identity threat drives more resistance than any information gap. The core friction for most leaders is that AI challenges the story they tell themselves about who they are and why they are valuable. That threat feels true even when it is factually wrong, and feelings drive decisions faster than spreadsheets.

Pattern 2: Leaders are caught between urgency and uncertainty. Two equally loud signals compete (“move now or fall behind” and “nobody knows where this is going”), and the result is oscillation rather than action.

Pattern 3: Cultural permission is missing in most organizations. AI adoption is officially encouraged and culturally punished at the same time, which produces a workforce of invisible users who gain individual speed without building organizational capability.

Pattern 4: Speed gets confused with significance. The dominant narrative says AI collapses time, and it does, but leaders fill recaptured time with more tasks instead of better decisions, which means the treadmill just runs faster without anyone getting ahead.

Pattern 5: Tool adoption gets mistaken for transformation. Deploying a tool is a procurement decision that takes a week, while developing a capability requires changing how people think, decide, and collaborate, which takes sustained effort that most leaders are trying to skip.

What Actually Works

The cost of leaving these beliefs unaddressed compounds every quarter. Firms that wait another 6 to 12 months will face the same learning curve with less competitive runway, tighter hiring markets, and clients who have already started expecting AI-enhanced delivery speeds from their engineering partners.

In my practice, we have found that the antidote to these beliefs is a structured, low-risk entry point that lets leaders act without betting the firm on a technology they do not fully understand yet.

That is why we built the Crawl-Walk-Run-Sprint methodology. It starts where the beliefs are weakest: small, daily-use applications that save time on tasks engineers already resent doing. No committee required, no strategy document needed, just one workflow improved this week.

Think of it like commissioning a new process system. You would never run the entire plant at full capacity on day one. You start with loop checks, then cold commissioning, then hot commissioning, then ramp-up, and each phase proves the next one is safe to begin.

The beliefs listed above are real and rational, and they are optimized for a world that no longer exists. The work is to give leaders a safe, structured way to test a different story about what AI means for their firm and their career.

The remaining posts in this series unpack each belief cluster in detail: the evidence behind the beliefs, the reframes that shift them, and the specific steps engineering firms have used to move from awareness to action.

Your next step: Run your numbers through the free AI ROI Calculator. It takes about 10 minutes and shows you which workflows to optimize first, estimated weekly time savings for your team size, and a prioritized action plan for where AI creates real value. That is the structured, low-risk starting point this post describes.
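If you want a rough sense of the math before running the calculator, a back-of-envelope estimate looks like the sketch below. The inputs (team size, hours per person on a workflow, a 30% savings rate, a loaded hourly rate) are illustrative assumptions for a mid-size firm, not figures from the calculator itself.

```python
# Back-of-envelope estimate of AI time savings for one workflow.
# All input values are hypothetical placeholders; substitute your own.

def weekly_hours_recovered(team_size: int, hours_per_person: float,
                           savings_rate: float) -> float:
    """Team-wide hours recovered per week on a single workflow."""
    return team_size * hours_per_person * savings_rate

def annual_value(hours_per_week: float, loaded_hourly_rate: float,
                 weeks_per_year: int = 48) -> float:
    """Convert recovered hours into a rough annual dollar figure."""
    return hours_per_week * loaded_hourly_rate * weeks_per_year

if __name__ == "__main__":
    # Example: 30-person firm, 4 hrs/person/week on documentation,
    # assuming AI trims 30% of that time at a $120 loaded rate.
    hours = weekly_hours_recovered(team_size=30, hours_per_person=4,
                                   savings_rate=0.30)
    print(f"Hours recovered per week: {hours:.0f}")
    print(f"Rough annual value: ${annual_value(hours, 120):,.0f}")
```

Even with conservative assumptions, the arithmetic usually makes the case for starting with one high-friction, low-stakes workflow rather than waiting for a firm-wide strategy.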

 


Shane Chalupa, PE

Co-Founder of Obnovit, where he helps engineering-powered businesses build practical AI capabilities that actually work. Through systematic education and hands-on enablement, Shane guides teams from AI-overwhelmed to confidently implementing systems that save team members hours every week. Drawing from 40+ AI implementations across a variety of projects, he has built a framework that creates lasting team capability, not dependency on consultants.
