You’re reading The Steady Beat, a weekly pulse of must-reads for anyone orchestrating teams, people, and agents across the modern digital workplace – whether you’re managing sprints, driving roadmaps, leading departments, or just making sure the right work gets done. Curated by the team at Steady.
The Real Moat
Microsoft’s 2026 Work Trend Index just landed, and the headline isn’t the number of agents (active agents in Microsoft 365 grew 15x year-over-year); it’s what most organizations are doing with all those agent outputs. Spoiler: losing them. Surveying 19,854 respondents across 10 markets, Microsoft found organizational culture, manager support, and talent practices explain 67% of AI’s impact on outcomes. Individual skills account for just 32%. The system beats individual skill by roughly 2x. Yet only 19% of firms qualify as “Frontier” – where capability and readiness actually reinforce each other – with half stuck in undefined middle ground. Microsoft’s term for what the leaders are building is owned intelligence: value-creation know-how unique to your firm, accumulated over time, impossible to replicate. The problem for the rest of the pack: as agents take on more work, they generate valuable signals about what worked, what failed, and where outcomes drifted – and those signals stay local or never spread. Adoption metrics (licenses, prompts) hide the failure mode. Absorption is what actually compounds: whether agent signals reshape future work. The advice for leaders who don’t want to be commoditized by the next platform shift: stop chasing tool adoption. Pick one coordination loop and instrument it end-to-end. Execution is getting cheap. Coordination is the moat.
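To make the adoption-versus-absorption distinction concrete, here is a minimal Python sketch, with invented field names and toy data, of how the two numbers can diverge over the same work log:

```python
# Hypothetical sketch: adoption counts whether agents touched the work;
# absorption counts whether what the agent learned fed later work.
# Field names and data are invented for illustration.

tasks = [
    {"id": 1, "agent_used": True,  "signal_reused": True},
    {"id": 2, "agent_used": True,  "signal_reused": False},
    {"id": 3, "agent_used": True,  "signal_reused": False},
    {"id": 4, "agent_used": False, "signal_reused": False},
]

agent_tasks = [t for t in tasks if t["agent_used"]]
adoption = len(agent_tasks) / len(tasks)
absorption = sum(t["signal_reused"] for t in agent_tasks) / len(agent_tasks)

print(f"adoption:   {adoption:.0%}")    # what the license dashboard shows: 75%
print(f"absorption: {absorption:.0%}")  # what actually compounds: 33%
```

The first number looks like success. The second is the one Microsoft says separates Frontier firms from the pack.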
— Steady, 8m, #ai, #strategy, #leadership
Amplifier
The 10-20% AI productivity bump most teams report? Shrivu Shankar argues that’s the “free” part – the gains you get by sprinkling AI on top of how you already work. Anything beyond that requires rebuilding both your personal practice and your org’s operating model. Skip either and AI amplifies whatever was already broken. His personal pitfall list is uncomfortably specific. Skipping upfront planning leads to undebuggable systems. Running too many parallel agents exceeds human cognitive capacity. Staying “in the loop” prevents you from automating verification. Treating each session as ephemeral wastes the chance to build reusable scaffolding. And offloading too much cognitive work prevents the skill development that makes you valuable in the first place – especially for juniors. The org-level pitfalls are even harder to fix. Measuring usage instead of outcomes incentivizes hollow metrics. Tool sprawl without governance fragments knowledge. Fast generation outpaces review capacity, creating “review debt” that compounds quietly. Functional handoffs absorb whatever compression gains you got from coding faster. His one-line summary should be on a poster: “AI multiplies what you already do well. If your org runs on handoffs, AI accelerates the frequency of handoffs.” For most teams, the path to real leverage isn’t more usage – it’s redesigning the work around what AI changes, not bolting it onto what already existed.
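The review-debt dynamic is easy to model. A toy sketch, with invented rates, showing how a modest weekly gap compounds:

```python
# Toy model of "review debt": AI-assisted generation outpacing the
# team's honest review capacity. Rates are invented; the point is
# the direction of the curve.

generated_per_week = 30  # PRs produced with AI assistance
reviewed_per_week = 22   # what the team can actually review well

debt = 0
for week in range(1, 9):
    debt += max(0, generated_per_week - reviewed_per_week)
    print(f"week {week}: {debt} unreviewed or rubber-stamped PRs")

# Eight weeks in, 64 changes have shipped with less scrutiny than the
# team's own bar requires. Nothing in a usage dashboard shows it.
```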
— Shrivu’s Substack, 10m, #ai, #productivity, #engineering
Buyer’s Remorse
The receipts on the AI layoff cycle are coming back, and they’re brutal. Gartner projects that 50% of companies that attributed headcount cuts to AI will rehire those same functions by 2027. A Careerminds survey found one in three employers spent more on restaffing than they saved – not efficiency, just a wire transfer with extra steps. Sarah Choudhary’s argument is that the underlying AI investment isn’t rescuing the math either: IBM found only 25% of AI projects deliver promised returns and 16% ever scale; MIT puts the “fully embraced AI and saw measurable profit” rate at 5%. The other 95% are cutting people to fund infrastructure that hasn’t paid for itself. Klarna is the canonical reversal – its AI assistant was billed as doing the work of 700 customer-service agents in 2024, then publicly walked back in 2025 when quality collapsed. This isn’t about the tech; it’s about leadership. 64% of CEOs admit they invest before understanding the value because they’re afraid of falling behind. On a conference stage, that’s vision. On a balance sheet, it’s an unfunded liability. Choudhary’s rule for leaders who actually want to come out ahead: budget the reversal before the layoff. If the business case still works with rehiring costs priced in, proceed. If it doesn’t, you’re not running a strategy – you’re running a career arc dressed up as transformation.
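Choudhary’s rule is expected-value arithmetic. A minimal sketch, with placeholder numbers rather than figures from the article:

```python
# Back-of-the-envelope version of "budget the reversal before the
# layoff". All inputs are placeholders, not from the article.

def layoff_case(savings_per_role, roles_cut, severance_per_role,
                rehire_cost_per_role, p_rehire):
    """Expected net savings once a possible reversal is priced in."""
    gross = savings_per_role * roles_cut
    upfront = severance_per_role * roles_cut
    expected_reversal = p_rehire * rehire_cost_per_role * roles_cut
    return gross - upfront - expected_reversal

# Gartner's projection (half of AI-attributed cuts rehired by 2027)
# suggests p_rehire = 0.5 is not a pessimistic input.
net = layoff_case(savings_per_role=120_000, roles_cut=100,
                  severance_per_role=30_000,
                  rehire_cost_per_role=90_000, p_rehire=0.5)
print(f"expected net savings: ${net:,.0f}")
# If this number goes negative, it was never a strategy.
```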
— Forbes, 7m, #ai, #leadership, #strategy
Bring Your Tools
Banning AI from tech interviews in 2026 is like banning candidates from Googling syntax or opening their IDE – it tests the wrong skill against the wrong job. A JetBrains survey found 90% of developers regularly use at least one AI tool at work. Gregor Ojstersek argues that the interview should reflect the work, not pretend it’s 2018. He’s earned the right to say this – he solved 200+ LeetCode problems himself and still found them irrelevant to actual engineering responsibilities. The deeper problem with AI-banned coding tests is that they were already a poor signal in the era they were designed for: memorization-friendly puzzles that filter for who studied recently, not who builds well. Stripping AI out now just makes that signal worse, because you’re evaluating people on workflows they haven’t used in years. The replacement isn’t permissive vibe coding either. It’s harder: design interview tasks where AI can produce a plausible answer, then judge candidates on what they did with it. Did they spot the bug the model missed? Did they push back on the wrong abstraction? Did they ask the right clarifying question before generating anything? Modern engineering is judgment plus AI for speed.
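One way to picture that interview format: hand the candidate a plausible, AI-flavored answer and grade what they notice. A hypothetical example with a planted bug:

```python
# Hypothetical exercise in the spirit Ojstersek describes. The snippet
# "works" on the happy path but carries a subtle, AI-plausible bug.

def merge_user_settings(defaults, overrides):
    """Return defaults with overrides applied."""
    merged = defaults  # bug: aliases the dict instead of copying it
    for key, value in overrides.items():
        merged[key] = value
    return merged

defaults = {"theme": "light", "notifications": True}
result = merge_user_settings(defaults, {"theme": "dark"})
print(result)    # looks correct: {'theme': 'dark', 'notifications': True}
print(defaults)  # the signal: shared defaults were silently mutated

# A strong candidate spots the aliasing, asks whether defaults are
# shared state, and reaches for {**defaults, **overrides} instead.
```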
— Engineering Leadership, 6m, #hiring, #engineering, #ai
Slop Cannons
Jake Handy has a name for the engineers and designers who use AI agents as high-throughput artifact generators: slop cannons. They run multiple parallel agents, produce sprawling PRs that need quick patches to land, and trust model output over peer review. The numbers say the behavior is spreading fast and quietly costing teams more than it saves. AI-authored PRs on GitHub jumped 325% in six months. CodeRabbit found them 1.7x buggier than human-written code. A METR study found developers who reported feeling 20% faster were actually 19% slower – a perception/reality gap that compounds in any org running on velocity narratives. Underneath the volume problem is a quality problem. Models exhibit sycophancy 58% of the time, defaulting to agreement instead of pushback. AI-generated code fails secure coding benchmarks 45% of the time. Developers score 17% lower on conceptual quizzes about code they wrote with AI – they shipped it, but they don’t actually understand it. Handy’s prescription: cap agents at two, mandate specs before running any of them, enforce adversarial review, protect junior developers from over-automation, and read your diffs aloud before merging. None of it requires new tooling. It requires the discipline to slow the cannon down before it shells your own codebase.
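Most of those prescriptions fit in a script. Here’s a sketch of a diff-size guard as a pre-merge check; the threshold and base branch are assumptions, not Handy’s:

```python
# Sketch of a pre-merge guard: refuse diffs too large to review
# honestly. Threshold and base branch are assumed conventions.

import subprocess
import sys

MAX_CHANGED_LINES = 400  # past this, review quality drops off a cliff

def changed_lines(base: str = "origin/main") -> int:
    out = subprocess.run(
        ["git", "diff", "--shortstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    # --shortstat output: " 3 files changed, 120 insertions(+), 40 deletions(-)"
    counts = [int(tok) for tok in out.split() if tok.isdigit()]
    return sum(counts[1:]) if len(counts) > 1 else 0

if __name__ == "__main__":
    n = changed_lines()
    if n > MAX_CHANGED_LINES:
        sys.exit(f"Diff touches {n} lines: split it, or read it aloud first.")
    print(f"{n} changed lines: reviewable.")
```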
— Handy AI, 9m, #ai, #engineering, #quality
Echo of the Week
Echoes are AI agents in Steady that automatically gather and deliver work context to teams on a schedule—answering recurring questions about progress, capacity, and coordination so you stop burning hours assembling the same information manually.
Product Progress Overview is the Echo for anyone who’s tired of translating engineering work into business language by hand. Every Friday morning, it pulls the past week’s merged pull requests, organizes them into coherent themes by application area, and delivers a non-technical summary suited for stakeholders who don’t read diffs. Engineering managers reporting to non-technical leadership get a head start on board updates. DevRel teams get the bones of a release note. Product managers get a customer-friendly recap of what shipped. The promise is simple: no more interrupting engineers to ask “what actually got done this week?” The Echo does the translation work automatically, so the people who need visibility get it without anyone losing flow.
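For the mechanically curious, the first step the Echo automates looks roughly like this sketch, which pulls last week’s merged PRs from the GitHub API (repo and token are placeholders); the theming and plain-language summary are the parts Steady layers on top:

```python
# Sketch: fetch the past week's merged PRs from GitHub.
# REPO and the token are placeholders.

from datetime import datetime, timedelta, timezone
import requests

REPO = "your-org/your-repo"  # placeholder
since = datetime.now(timezone.utc) - timedelta(days=7)

prs = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "closed", "sort": "updated",
            "direction": "desc", "per_page": 100},
    headers={"Authorization": "Bearer YOUR_TOKEN"},  # placeholder
    timeout=30,
).json()

# Closed PRs include unmerged ones; keep only merges from the past week.
merged = [
    pr for pr in prs
    if pr.get("merged_at")
    and datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00")) >= since
]
for pr in merged:
    print(f'#{pr["number"]} {pr["title"]}')
```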
The lightweight teamwork OS
Teams rely on two coordination loops to function: a big-picture loop connecting plans to progress, and a ground-level loop keeping teammates in sync.
Problem is, the status quo for running those loops is an incomplete, inconsistent, and inefficient tangle of meetings, emails, chat threads, dashboards, and manual toil.
Steady is the teamwork OS that runs both loops for you. Purpose-built agents continuously distill updates and activity into personalized intelligence that keeps everyone aligned and informed automatically.
The outcome: high-performing teams that deliver better work, 3X faster.
Learn more at runsteady.com.