Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often-misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.
Show notes
Introduction to Dr. Michael Zargham (00:00:03)
- The founder and Chief Engineer at BlockScience, a systems engineering firm focused on digital public infrastructure.
Defining agents and principals (00:01:20)
- Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability
- True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions (see the sketch after this list)
- Robotics example: Arctic Rover
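A minimal sketch of that validation point, using hypothetical names and criteria: a model can score well on its loss function while still failing the checks that express what the principal actually intended.

```python
# Hypothetical illustration: low loss does not imply the principal's intent is met.

def training_loss(predictions, targets):
    """Mean squared error -- the quantity the optimizer sees."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

def validate_against_intent(outcomes, acceptance_criteria):
    """Validation in the systems-engineering sense: does the observed
    behavior satisfy what the principal actually asked for?"""
    return all(criterion(outcomes) for criterion in acceptance_criteria)

# The principal's intent expressed as checkable requirements (assumed examples).
acceptance_criteria = [
    lambda outcomes: max(outcomes) <= 100.0,  # e.g. never exceed a safety limit
    lambda outcomes: min(outcomes) >= 0.0,    # e.g. never go below a floor
]

predictions = [99.0, 101.5, 3.0]
targets = [98.0, 100.0, 2.5]

print("loss:", training_loss(predictions, targets))                          # small
print("valid:", validate_against_intent(predictions, acceptance_criteria))   # False: limit exceeded
```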
LLMs vs agents: Key distinctions (00:07:40)
- LLMs by themselves are "high-dimensional word calculators," not agents
- Agents are more complex systems with LLMs as components
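One way to picture "LLM as a component," with all names below assumed for illustration: the language model is a pure text-in, text-out function, while the agent wraps it with persistent state, tools, and a policy that decides what to do with the model's output.

```python
# Hypothetical composition: the LLM is one part inside a larger agent system.

class WordCalculator:
    """Stand-in for an LLM: text in, text out, no goals and no state."""
    def complete(self, prompt: str) -> str:
        return f"[completion for: {prompt}]"

class Agent:
    """Adds what the LLM alone lacks: memory across turns, tools,
    and a decision step that routes the model's output."""
    def __init__(self, llm: WordCalculator):
        self.llm = llm
        self.memory: list[str] = []                      # persistent state
        self.tools = {"search": lambda q: f"results for {q}"}

    def step(self, observation: str) -> str:
        self.memory.append(observation)
        draft = self.llm.complete(observation)
        # Policy: treat the draft as either a final answer or a tool request.
        if draft.startswith("TOOL:"):
            return self.tools["search"](draft.removeprefix("TOOL:"))
        return draft

agent = Agent(WordCalculator())
print(agent.step("What changed since yesterday?"))
```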
Systems engineering perspective (00:13:20)
- Systems engineering approaches from civil engineering and materials science offer valuable frameworks for AI development
Constraints and guardrails (00:21:40)
- Guardrails provide deterministic constraints ("musts" or "shalls") versus constitutional AI's softer guidance ("shoulds")
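A rough sketch of the "must" versus "should" distinction, with assumed names and rules: the guardrail is a deterministic check that can block an output outright, while the constitutional-style principle is only guidance folded into the prompt, with nothing enforcing it.

```python
# Hypothetical contrast between a hard guardrail and a soft principle.

BLOCKED_TERMS = {"account_number", "ssn"}   # assumed example of a hard constraint

def guardrail(output: str) -> str:
    """A 'must/shall': deterministic, checked after generation, no exceptions."""
    if any(term in output.lower() for term in BLOCKED_TERMS):
        raise ValueError("Output rejected: violates a hard constraint")
    return output

def constitutional_prompt(user_request: str) -> str:
    """A 'should': guidance the model is asked to follow, but not enforced."""
    principle = "You should avoid revealing personal identifiers."
    return f"{principle}\n\nUser: {user_request}"

print(guardrail("Your balance is available in the app."))   # passes the check
try:
    guardrail("The SSN on file is ...")                      # blocked deterministically
except ValueError as err:
    print(err)
print(constitutional_prompt("Summarize my account history."))
```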
Engineering with uncertainty (00:31:44)
- Robust agent systems require both exploration (lab work) and exploitation (hardened deployment) phases with different standards
- The transition from static input-output systems to closed-loop dynamical systems represents the shift toward truly agentic behavior
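A toy sketch of that shift, with all names assumed: the static version maps one input to one output, while the closed-loop version feeds the effects of its own actions back in as the next observation until a goal is met.

```python
# Hypothetical contrast: static (open-loop) call vs. closed-loop agentic control.

def static_system(prompt: str) -> str:
    """Static input-output: one request, one response, no feedback."""
    return f"answer({prompt})"

class CounterEnv:
    """Trivial environment: state is an integer the agent pushes toward a target."""
    def __init__(self):
        self.state = 0

    def observe(self):
        return self.state

    def apply(self, action):
        self.state += 1          # the action changes the world the agent will see next
        return self.state

def closed_loop_agent(environment, goal_reached, max_steps: int = 10) -> list[str]:
    """Closed loop: observe -> act -> the environment changes -> observe again.
    The agent's own actions shape its next observation, which makes the system
    dynamical rather than a fixed input-output mapping."""
    history = []
    observation = environment.observe()
    for _ in range(max_steps):
        action = f"act_on({observation})"        # stand-in for an LLM-backed policy
        observation = environment.apply(action)  # feedback path closes the loop
        history.append(action)
        if goal_reached(observation):
            break
    return history

print(static_system("status?"))
print(closed_loop_agent(CounterEnv(), goal_reached=lambda s: s >= 3))
```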
Accountability and responsibility (00:37:42)
- Authority and accountability must align - people shouldn't be held responsible for systems they don't have authority to control
Final thoughts and conclusion (00:44:00)
Do you have a question about autonomous, multi-step systems?
Check out these great episodes about complex systems and digital twins:
Or connect with The AI Fundamentalists to share your feedback and insights:
- LinkedIn - Episode summaries, shares of cited articles, and more.
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- Visit our page - see past episodes and submit your feedback! Your input continues to inspire future episodes.