
Last week’s GPT-5 launch disappointed many who expected AGI-level breakthroughs. But if you actually build things, here’s what matters: GPT-5 is a viable alternative to Claude for coding work, and that competition benefits everyone who ships. I’ve been testing GPT-5 in Cursor for a week, and it untangled a few issues where I’d been going in circles with Claude Code.
Meanwhile, Anthropic pushed Claude Sonnet’s API context window to 1M tokens—a size that lets you analyze huge specs or large codebases in one go. In practice, you can keep very large documents—think hundreds of thousands of words—in a single prompt, making some “no-RAG” workflows practical. (Google’s Gemini family offers long-context modes as well.)
So if the tools are this capable, why are most businesses still tapping less than 10% of what LLMs can do?
Barrier 1: “I don’t know what’s possible” (the concept gap)
Most folks think “LLM = chat, search, emails.” That mental model is incomplete: these models can draft scripts that operate your tools. Many professionals don’t realize LLMs can:
Generate working scripts in Python, JavaScript, or PowerShell.
Create API integrations between different software platforms.
Write Python or AutoLISP routines for AutoCAD automation.
Build Excel macros that reflect real business rules.
Produce Dynamo scripts for Revit workflows.
The knowledge gap isn’t about AI’s capability—it’s about understanding that these tools can create other tools. You don’t need to know programming; you need to know what to ask for.
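To make the “API integrations” point concrete, here is the kind of glue code an LLM can draft in a minute: a sketch only, where the field names (`proj_no`, `projectNumber`, the date formats) are hypothetical stand-ins for whatever your two platforms actually use.

```python
# Sketch of LLM-drafted integration glue: translating a record exported
# from one platform into the schema a second platform expects.
# All field names here are hypothetical examples.

def to_target_schema(record: dict) -> dict:
    """Map a source-platform record into the target platform's format."""
    return {
        # Source stores project numbers with stray whitespace and lowercase.
        "projectNumber": record["proj_no"].strip().upper(),
        # Missing client names get a visible placeholder instead of crashing.
        "clientName": record.get("client", "UNKNOWN"),
        # Source stores dates as DD/MM/YYYY; target wants ISO 8601 (YYYY-MM-DD).
        "startDate": "-".join(reversed(record["start"].split("/"))),
    }

source_record = {"proj_no": " a-1042 ", "client": "Acme Eng", "start": "01/09/2025"}
print(to_target_schema(source_record))
# -> {'projectNumber': 'A-1042', 'clientName': 'Acme Eng', 'startDate': '2025-09-01'}
```

The transformation logic is trivial, which is exactly the point: this is the tedious plumbing nobody wants to write by hand, and it is squarely within what an LLM can generate from a plain-English description of the two formats.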
Barrier 2: “I’m too busy to explore” (the exploration tax is real)
People aren’t lazy—they’re over-subscribed. Docs, meetings, inbox, compliance… and learning time gets squeezed out. Learning to prompt effectively for automation typically requires:
Initial experimentation time.
Trial and error with your specific use cases.
Ongoing (but decreasing) debugging and refinement.
That upfront commitment can easily be a full day of unbillable time. When you’re drowning in deadlines, finding that day feels impossible. But here’s the math most teams skip: automate just three repetitive tasks that each save 45 minutes per week, and you recoup the investment in under a month. After that, it’s a time dividend—roughly 120 hours annually. Small, consistent wins compound faster than a never-started “AI initiative.”
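The math above, spelled out as a runnable back-of-the-envelope calculation (the 8-hour setup figure is the “full day of unbillable time” assumption from the paragraph):

```python
# Payback arithmetic for the "three tasks, 45 minutes each" scenario.
tasks = 3
minutes_saved_per_task_per_week = 45
setup_hours = 8  # assumed: one full unbillable day of experimentation

hours_saved_per_week = tasks * minutes_saved_per_task_per_week / 60  # 2.25 h
weeks_to_break_even = setup_hours / hours_saved_per_week             # ~3.6 weeks
hours_saved_per_year = hours_saved_per_week * 52                     # 117 h

print(f"Break even in {weeks_to_break_even:.1f} weeks; "
      f"~{hours_saved_per_year:.0f} hours back per year")
# -> Break even in 3.6 weeks; ~117 hours back per year
```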
Barrier 3: “We’re not allowed” (policy, data, and security)
This is the thorniest problem. Even when individuals see the potential, they hit institutional walls:
“IT says we can’t use AI tools.” A legitimate data-security concern—but often a sledgehammer where a scalpel would do. Define safe lanes: non-sensitive data only, enterprise endpoints, anonymization, least-privilege accounts, logging, and a fast approval path for low-risk pilots.
“That’s not how we’ve always done it.” Cultural inertia and unclear ownership slow change. Run a small, time-boxed pilot on one repetitive task, track minutes saved and defects avoided, document the workflow, and scale it with a named champion.
“What if the AI makes a mistake?” A fair worry—manual processes miss too. Keep a human in the loop, start with dry-run modes, set confidence thresholds and audit trails, and compare error rates to baseline before moving from assist to automation.
Breaking Through: A Practical Path Forward
Despite these barriers, change is possible. Here’s how to start:
For Individuals
Start with private experiments. Use your personal ChatGPT account to automate something in your own workflow.
Document wins: time saved, errors avoided, stress reduced.
Example tasks: batch-rename files to your firm’s naming convention; create Excel formulas that clean up consultant markups; write a Python script to extract specific data from PDFs.
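A minimal sketch of the first example task, assuming a hypothetical `PROJ-<number>_<original name>` convention (substitute your firm’s actual pattern). The `dry_run` flag prints the plan before any file is touched, which is a sensible default for anything LLM-generated:

```python
# Batch-rename PDFs to a hypothetical firm naming convention.
# dry_run=True (the default) only reports what *would* happen.
import tempfile
from pathlib import Path

def batch_rename(folder, project_number, dry_run=True):
    plan = []
    for path in sorted(Path(folder).glob("*.pdf")):
        new_name = f"PROJ-{project_number}_{path.name}"
        plan.append((path.name, new_name))
        if not dry_run:
            path.rename(path.with_name(new_name))
    return plan

# Demo against a throwaway folder so nothing real is renamed.
with tempfile.TemporaryDirectory() as folder:
    for name in ("site-plan.pdf", "elevations.pdf"):
        (Path(folder) / name).touch()
    for old, new in batch_rename(folder, "1042"):
        print(f"{old} -> {new}")
```

Run it once in dry-run mode, eyeball the plan, then call it again with `dry_run=False`.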
For Teams
Pick non-sensitive, high-frequency tasks.
Create “automation hours”—e.g., Friday afternoons—for LLM-assisted experiments, and make it official unbillable time.
Start with processes that don’t touch client data: CAD standard compliance checks, drawing-set organization, internal resource scheduling, meeting-minute formatting.
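For the CAD-standard compliance idea, a sketch of the simplest possible check: flag sheet names that don’t match the firm’s numbering convention. The pattern below (`A-101_Floor Plan` style) is a hypothetical example; adjust the regex to your actual standard.

```python
# Flag drawing sheets whose names break a hypothetical numbering
# convention: one or two discipline letters, a dash, a 3-digit number,
# an underscore, then a title (e.g. "A-101_Floor Plan").
import re

SHEET_PATTERN = re.compile(r"^[A-Z]{1,2}-\d{3}_.+$")

def non_compliant(sheet_names):
    """Return the names that do not match the convention."""
    return [name for name in sheet_names if not SHEET_PATTERN.match(name)]

sheets = ["A-101_Floor Plan", "S201 Framing", "M-301_HVAC", "untitled"]
print(non_compliant(sheets))
# -> ['S201 Framing', 'untitled']
```

Because it touches only internal file names, a check like this clears most data-security objections while still producing a measurable win.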
For Organizations
Establish clear AI use policies rather than blanket bans.
Run pilot programs with volunteer teams; let early adopters demonstrate value before scaling.
Define what data can/cannot be processed, approved tools and platforms, review processes for AI-generated outputs, and training requirements.
The Real Opportunity
While the tech world debates whether GPT-5 is “intelligent,” practitioners already have tools that can transform daily work—right now, not in some hypothetical AGI future. Every hour spent on repetitive admin is an hour stolen from real engineering challenges, better design, or simply getting home on time. The tools exist; the capability is here.

My challenge: pick one annoying task this week—just one—open ChatGPT or Claude, describe what you want to automate, and start with “Write a Python script that…” or “Create an Excel macro to…”. You might be surprised by what happens next. The revolution isn’t coming; it’s here—we’re just too busy to notice.
