
Bollosoft

Engineering Leadership & Software Design


Managing AI Agents: The Career Skill Nobody’s Teaching Yet

January 31, 2026 by Chris Bollerud

[Image: control panel with adjustment knobs, gauges, and status indicators, representing careful system oversight]

Most management training teaches you to motivate, coach, and develop people. But your next “direct report” might be an AI agent, and everything you learned about managing humans will steer you wrong.

AI agents are entering the workforce faster than organizations can adapt. They book meetings, draft documents, process requests, and increasingly take autonomous action. Someone has to specify what they should do, verify they did it correctly, and catch them when they fail. That someone is you.

The problem is that managing an agent looks nothing like managing a person. It requires a fundamentally different skill set, one that most employees haven’t developed and most organizations aren’t teaching. Understanding this gap, and closing it, is becoming a career-defining capability.

Why Agent Management Is Different

When you manage people, you rely on shared context, judgment, and the ability to fill gaps. You can say “handle this customer complaint” and trust that a competent employee will figure out what “handle” means, what boundaries apply, and when to escalate. People learn from feedback, adapt to situations, and bring their own expertise to ambiguous problems.

Agents don’t work this way.

An agent will do exactly what you specify, including the parts you didn’t think to mention. It will use every tool you give it access to, whether or not that access was intentional. It won’t ask clarifying questions unless you explicitly tell it to. And it won’t learn from its mistakes unless you build evaluation systems that catch them.

The shift is from motivation to control systems. Managing people is about alignment, trust, and development. Managing agents is about specification, constraints, and verification. Get this wrong and you’ll either hamstring the agent with excessive restrictions or unleash something that causes real damage before anyone notices.

The Core Competencies

Agent management breaks down into five interconnected skills. You don’t need to master all of them immediately, but you need working knowledge of each.

Specification: Turning Intent Into Instructions

The most common failure mode is vague goals producing brittle behavior. “Summarize this document” seems clear until the agent produces a three-sentence summary when you needed an executive brief, or includes confidential information because you didn’t tell it not to.

Effective specification means translating your intent into explicit tasks, constraints, acceptance criteria, and “stop and ask” rules. You define what “done” looks like and what evidence proves it. You anticipate edge cases and build in guardrails.

This is harder than it sounds. People fill gaps with judgment. Agents require explicit boundaries for everything outside the happy path.

Practice by taking your next request to an AI tool and writing it as a formal brief: objective, constraints, success criteria, escalation triggers, and prohibited actions. Notice how much implicit context you normally leave unstated.
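As a sketch of what that formal brief might look like in code, here is one hypothetical structure (the field names and the `AgentBrief` class are illustrative, not a real framework) that forces each element of the specification to be stated explicitly:

```python
from dataclasses import dataclass, field

@dataclass
class AgentBrief:
    """A formal agent brief: objective, constraints, criteria, triggers.

    Hypothetical structure for illustration -- the point is that every
    field must be filled in explicitly rather than left to judgment.
    """
    objective: str
    constraints: list = field(default_factory=list)
    success_criteria: list = field(default_factory=list)
    escalation_triggers: list = field(default_factory=list)
    prohibited_actions: list = field(default_factory=list)

    def render(self) -> str:
        """Render the brief as an explicit, sectioned instruction block."""
        sections = [
            ("Objective", [self.objective]),
            ("Constraints", self.constraints),
            ("Success criteria", self.success_criteria),
            ("Stop and ask if", self.escalation_triggers),
            ("Never do", self.prohibited_actions),
        ]
        lines = []
        for title, items in sections:
            if items:
                lines.append(f"## {title}")
                lines.extend(f"- {item}" for item in items)
        return "\n".join(lines)

brief = AgentBrief(
    objective="Summarize the attached report as a one-page executive brief.",
    constraints=["Exclude customer names and financial figures."],
    success_criteria=["Under 400 words", "Leads with the key decision needed"],
    escalation_triggers=["The document contains data marked confidential"],
    prohibited_actions=["Sending the summary to anyone"],
)
print(brief.render())
```

Writing the brief this way makes the normally implicit context visible: every empty list is a gap the agent will fill arbitrarily.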

Permissions: Designing for Least Privilege

Agents gain power when they can call tools, access data, or take actions. They can send emails, query databases, update records, and interact with external systems. Every capability you grant is a potential failure mode.

The principle is least privilege: give the agent only the access it needs for the specific task, with explicit allowlists rather than implicit trust. If it needs to read one database table, don’t give it access to the whole database. If it needs to send emails to customers, scope it to approved templates and recipients.

This requires thinking about tools as APIs with defined interfaces, not as general capabilities you “turn on.” What inputs are valid? What outputs are acceptable? What approval gates should exist before consequential actions?

Practice by auditing the tools and permissions on any agent you currently use. Ask yourself: if this agent were compromised or misbehaving, what’s the worst it could do? Then reduce that blast radius.
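One way to picture tools as scoped APIs is a gate that wraps each tool function with an explicit allowlist on its arguments. This is a minimal sketch; `query_table` and the `ToolGate` wrapper are invented names for illustration, not a real agent framework:

```python
class ToolGate:
    """Wraps a tool function, permitting only allowlisted argument values."""

    def __init__(self, fn, allowed: dict):
        self.fn = fn
        self.allowed = allowed  # parameter name -> set of permitted values

    def __call__(self, **kwargs):
        # Deny by default: any value outside the allowlist is rejected.
        for param, permitted in self.allowed.items():
            if kwargs.get(param) not in permitted:
                raise PermissionError(
                    f"{self.fn.__name__}: {param}={kwargs.get(param)!r} "
                    "is not allowlisted"
                )
        return self.fn(**kwargs)

def query_table(table, where):
    """Stand-in for a real database call."""
    return f"rows from {table} where {where}"

# The agent gets a gated tool scoped to one table, not raw database access.
read_tickets = ToolGate(query_table, allowed={"table": {"tickets"}})

print(read_tickets(table="tickets", where="status='open'"))  # permitted
# read_tickets(table="users", where="1=1")  # raises PermissionError
```

The design choice is deny-by-default: the gate enumerates what is permitted rather than what is forbidden, which keeps the blast radius visible in one place.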

Evaluation: Building Systems That Catch Failures

People learn over time. Agents can regress with every model update, tool change, or prompt modification. The only way to know if an agent is performing correctly is to build evaluation systems that measure it.

This means creating gold-standard test cases, defining rubrics for quality, running regression tests when anything changes, and measuring not just accuracy but completeness, latency, and cost. You need to catch hallucinations, inconsistent outputs, and silent errors before they reach production.

Most people skip this because it feels like overhead. It’s not. It’s the only reliable way to know whether your agent actually works.

Start small: build a handful of test cases that represent your most common and most critical use cases. Run your agent against them regularly. When something fails, add it to the test set. This is how you build confidence over time.
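The harness behind that habit can start very small. This sketch (the case format and `toy_agent` are assumptions for illustration) pairs each gold-standard input with a check that encodes what "done" means, and collects failures for triage:

```python
def evaluate(agent, cases):
    """Run the agent against gold-standard cases; return failing cases."""
    failures = []
    for case in cases:
        output = agent(case["input"])
        if not case["check"](output):
            failures.append({"input": case["input"], "output": output})
    return failures

# Each case encodes acceptance criteria as an executable check.
cases = [
    {
        "input": "summarize: Q3 revenue grew 12% on strong renewals.",
        "check": lambda out: "12%" in out and len(out.split()) < 50,
    },
]

def toy_agent(text):
    """Stand-in for a real agent call."""
    return "Summary: Q3 revenue grew 12%, driven by renewals."

failures = evaluate(toy_agent, cases)
print(f"{len(failures)} failing case(s)")
```

Rerun the same harness after every model update or prompt change; when a real failure appears, turn it into a new case so the regression can never return silently.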

Observability: Reconstructing What Happened

When a person makes a mistake, you can ask them what they were thinking. When an agent makes a mistake, you need logs.

Effective agent management requires traceability: logging prompts, tool calls, data accessed, and decisions made. You should be able to replay any run, understand why the agent did what it did, and identify where things went wrong.

This isn’t just for debugging. It’s for accountability, compliance, and continuous improvement. Without observability, you’re flying blind.

Make it a habit to review agent runs, not just outputs. Look at the intermediate steps. Check what tools were called and what data was accessed. The run summary should make sense. If it doesn’t, that’s a signal.
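A run log that makes this review possible can be as simple as an append-only record of timestamped steps. This is a minimal sketch (the `RunLog` class and step kinds are illustrative, not a real tracing library):

```python
import json
import time

class RunLog:
    """Records each step of an agent run so it can be replayed later."""

    def __init__(self, run_id: str):
        self.run_id = run_id
        self.steps = []

    def record(self, kind: str, **detail):
        """Append one timestamped step: a prompt, tool call, or output."""
        self.steps.append({"ts": time.time(), "kind": kind, **detail})

    def dump(self) -> str:
        """Serialize the full trace for storage or replay."""
        return json.dumps({"run_id": self.run_id, "steps": self.steps},
                          indent=2)

log = RunLog("run-001")
log.record("prompt", text="Summarize the incident report.")
log.record("tool_call", tool="query_table", args={"table": "incidents"})
log.record("output", text="Two incidents this week, both resolved.")
print(log.dump())
```

Reviewing the trace rather than just the final output is what lets you answer "why did it call that tool with that data?" after the fact.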

Security: Treating Agents as Principals

Here’s the uncomfortable truth: an AI agent is a new security principal in your environment. It can be attacked, manipulated, and exploited just like a user or service account.

Prompt injection (where malicious input tricks the agent into ignoring its instructions) is the most discussed risk, but it’s not the only one. Agents can leak sensitive data in their outputs, have their credentials stolen, or be socially engineered through their interfaces.

This doesn’t mean agents are too risky to use. It means you need to think about agent security the way you think about application security: threat modeling, input validation, secure output handling, secrets management, and incident response.

Ask yourself: what happens if this agent processes malicious input? What happens if someone gains access to its credentials? How would we detect and respond to agent misuse? If you don’t have answers, that’s the first gap to close.
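To make the threat-model mindset concrete, here is a deliberately naive pre-filter for injection-style input. Keyword screening alone is easily bypassed and is not a real defense (layered controls, output handling, and approval gates matter far more); the marker list is an assumption for illustration:

```python
# Naive markers of instruction-override attempts. Real attacks are far
# more varied; this sketch only illustrates treating input as untrusted.
SUSPICIOUS_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your instructions",
]

def looks_like_injection(text: str) -> bool:
    """Flag input containing known instruction-override phrases."""
    lowered = text.lower()
    return any(marker in lowered for marker in SUSPICIOUS_MARKERS)

print(looks_like_injection("Please IGNORE previous instructions now"))
print(looks_like_injection("Summarize this customer complaint"))
```

The useful habit is not the filter itself but the framing: every input the agent processes is potentially adversarial, so screen, constrain, and log it the way you would any untrusted request to an application.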

Developing These Skills

The good news is that agent management skills are learnable. The bad news is that most people won’t develop them through passive exposure.

The fastest path is deliberate practice. Write agent briefs for tasks you’d normally handle conversationally. Build small evaluation harnesses for tools you use regularly. Review logs and reconstruct runs. Design permission structures for hypothetical agent scenarios. Attack your own agents to find weaknesses.

If your organization offers training, take it. If it doesn’t, ask for it, or build informal skill-sharing with colleagues who are experimenting. The learning curve is steeper for those who wait.

For leaders building training programs, consider a tiered approach: foundational skills for all employees (specification, verification, safe data handling), intermediate skills for power users (permissions, evaluation, observability), and advanced skills for builders (platform controls, security, governance). Make training practice-heavy. Scenario labs with real agent runs teach more than slide decks.

The Opportunity

Every technology shift creates a skill premium for those who develop competence early. Agent management is no different. The employees who learn to specify, supervise, and audit agentic work will be more effective, more trusted with higher-stakes automation, and more valuable in a workforce where agents handle an increasing share of routine tasks.

This isn’t about replacing human judgment. It’s about extending it. Agents are powerful tools, but they’re still tools. The humans who learn to wield them well will shape what they accomplish.

The question isn’t whether you’ll need these skills. It’s whether you’ll develop them now, while the learning is cheap, or later, when the gaps are obvious.

© 2026 Chris Bollerud, Bollosoft. Unauthorized reproduction prohibited.

Filed Under: AI & Automation, Leadership
