This is a follow-up to our live webinar, AI for Humans: The Basics, the Tools, and What Actually Matters. If you have not watched it yet, you should. Ethan Erdebil broke down the terms that intimidate people.
Generative AI is a single model guessing the next best word. It is trained on almost all the language in the world and predicts what comes next based on patterns in that data. When you open ChatGPT and ask a question, that is generative AI. One tool, one response.
An AI Agent is a tool you configure for a specific purpose. Give it a defined scope such as researching policy, reviewing compliance, or drafting reports, and it will focus entirely on that task. Think of it less like a worker and more like a very focused assistant that never loses track of what it was asked to do.
Agentic AI is what happens when multiple agents work together, the way humans do in an organization. As Ethan put it in the webinar, this can be anywhere from two agents all the way up to thousands, keeping each other in check and working toward a shared goal. That is the shift worth understanding.
First, What AI Actually Is and What It Is Not
Ethan and Nilufer were direct about this in the webinar and it bears repeating. AI is not conscious. It has no emotions, no ethics, no self-awareness, and it is not a replacement for people. At its core it is pattern recognition software trained on vast amounts of language, predicting the next most likely word or action based on what it has seen before.
That is not a limitation to be embarrassed about. It is a feature to be understood. AI is exceptional at repetition, scale, and speed but genuinely poor at judgment, ethics, and relationships. The moment you misunderstand what it is, Ethan warned, you start offloading decisions you should be making yourself and that is where things go wrong.
One more thing worth stating. If your communication is vague, your AI output will be too. As Nilufer put it, AI is a mirror of your organization. Prompting well is the same skill as managing people well. If you do not give your team enough context they go off in the wrong direction, and an AI agent will do exactly the same thing, just faster.
Three Things Worth Taking Seriously
Agents keep each other accountable. Each agent has a defined role. When they work together they check each other’s outputs: fewer errors slip through, context does not get lost, and no single agent is overloaded. One writes, another reviews, another fact-checks.
Not all AI treats your data the same way. Some public tools may use your inputs to train future model versions, which is why many government organizations are not permitted to use them for work-related tasks. Copilot is the approved option for most organizations. It keeps your data within your organization’s environment. The trade-off is that it may not produce the same output quality as other models, but when working with sensitive information, using an approved tool is not a trade-off at all. It is the baseline.
Your legacy systems are not the blocker you think they are. Almost any existing system can have AI worked into it. Start with a small prototype on one team, one process, one use case. Measure it, learn from it, then expand.
One Thing to Try Today
Stop prompting AI with questions. Start delegating it with roles. Instead of “summarize this document,” try: “You are a senior policy analyst. Review this document and identify the top three operational risks with supporting evidence.” The difference in output quality will be immediate.
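The switch from question to delegation can be sketched as a tiny prompt-building helper. The `delegate` function and its wording here are illustrative, not part of any real AI tool's API:

```python
def delegate(role: str, task: str) -> str:
    """Build a role-based prompt: assign the model a persona, then the task."""
    return f"You are a {role}. {task}"


# Question-style prompt vs. delegated role prompt
question = "Summarize this document."
delegated = delegate(
    "senior policy analyst",
    "Review this document and identify the top three operational risks "
    "with supporting evidence.",
)
```

The role line does the heavy lifting: it narrows the model's framing before the task is stated, which is why the output reads like an analyst's review rather than a generic summary.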
How Agents Actually Talk to Each Other
This was one of the moments in the webinar that made people lean forward. Agents communicate by passing structured messages to one another like a relay race. One completes its task and hands the output to the next, which uses it as its own starting point. No human sits in the middle copying and pasting results. If you watched the session, you will hear Ethan Erdebil walk through how this works in real organizational contexts.
Picture it in practice. A Writer agent drafts a report and passes that draft to a Critic agent, which reviews it for gaps. The Critic’s notes go to a Reviser which tightens the document. A Compliance agent then checks the whole thing against your organization’s policies before it ever reaches a human inbox.
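The relay described above can be sketched as a chain of functions, each taking a structured message and handing an updated one to the next agent. The agent names mirror the Writer, Critic, Reviser, and Compliance roles; the logic inside each is a stand-in for a real model call, purely for illustration:

```python
# Each "agent" is a function that accepts a structured message (a dict)
# and returns an updated message for the next agent in the relay.

def writer(msg):
    # Drafts the report from the topic.
    msg["draft"] = f"Report on {msg['topic']}."
    return msg

def critic(msg):
    # Reviews the draft and leaves notes; a real agent would call a model here.
    msg["notes"] = ["add supporting evidence"]
    return msg

def reviser(msg):
    # Tightens the draft using the critic's notes.
    for note in msg["notes"]:
        msg["draft"] += f" ({note})"
    return msg

def compliance(msg):
    # Checks the finished draft before it reaches a human inbox.
    msg["approved"] = msg["draft"].startswith("Report on")
    return msg

def run_pipeline(topic):
    msg = {"topic": topic}
    for agent in (writer, critic, reviser, compliance):
        msg = agent(msg)  # hand the output to the next agent, relay-style
    return msg

result = run_pipeline("travel policy")
```

The point of the sketch is the hand-off: no human sits between the steps copying and pasting, and each agent only sees the structured message it was given.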
The practical upside is significant. Workflows that once took hours of research, drafting, fact-checking, and formatting can now run in the background while your people focus on decisions that actually require them. You are not removing humans from the process. You are repositioning them at the moments that matter most.
What about AI and Design Thinking?
Nilufer Erdebil spoke about the integration of AI with Design Thinking. She emphasized that AI can be a partner that accelerates the ideation phase. By leveraging AI to synthesize vast amounts of user data and identify patterns, designers can get to the core problems they are trying to solve faster. There may be valuable nuances that AI misses in that synthesis, so it does not fully replace human review and insight.
This tech-enhanced clarity allows teams to move past surface-level symptoms and focus on human-centric solutions that are both innovative and highly targeted.
The real power of AI in design thinking lies in its ability to facilitate a more rigorous prototyping and testing phase, providing immediate feedback loops that refine the outcome. For professionals looking to future-proof their organizations, this approach blends human empathy with machine intelligence. This ensures innovation remains grounded in user needs while benefiting from the speed and analytical depth that AI can provide.
You Already Have an Agent and You Have Not Used It Yet
If your organization runs on Microsoft 365, this part is for you. Copilot’s built-in agent is already there, sitting quietly in your sidebar, mostly being used as a fancy autocomplete. It is significantly more than that, and most government teams are leaving the best of it completely untouched.
Think of it this way. The Copilot agent has read every document in your SharePoint, every email in your Outlook, and every Teams conversation you have access to. It is not just standing by to answer questions about that information. It is standing by to work with it inside your secure Microsoft environment, without anything leaving your ecosystem.
The reason most people do not use it this way is simple. They do not know they are allowed to give it a job. They ask it questions instead of assigning it roles. That is the switch worth flipping and it costs nothing. No new software, no new budget, no IT request.
Here is what that looks like in practice:
• Instead of “What does this policy say about travel reimbursement?”
• Try “You are a compliance officer. Review this policy and identify any reimbursement rules that may conflict with the updated federal guidelines from Q4.”
That is not a search. That is a delegated task and it is exactly the kind of shift from generative to agentic that we unpacked in the webinar. If you have not watched it yet, that framing alone is worth your time.
Agentic AI is not about replacing your team. It is about extending what your team can do. The organizations that get this right pair powerful technology with clear human intent. That requires leadership and culture, not just software. If any of this sparked questions, the full webinar is the best next step. Watch it and bring your team along: youtube.com/watch?v=CztlCC8Rssk.
To learn more about how to enhance leadership and culture to get ready for AI, book a free half hour consultation.
If you want to know more about AI basics, we offer customized AI Workshops, and our publicly available courses are listed at https://spring2innovation.com/business/business-courses/.