Metaphysics and modern AI: What is thinking? - Series Intro

This episode introduces a special project from The AI Fundamentalists’ hosts and friends. We hope you're ready for a metaphysics mini‑series exploring what thinking and reasoning really mean and how those definitions should shape AI research.

Join us for thought-provoking discussions as we tackle basic questions: What is metaphysics and its relevance to AI? What constitutes reality? What defines thinking? How do we understand time? And perhaps most importantly, should AI systems attempt to "think," or are we approaching the entire concept incorrectly?

Chapters

  • Intro to the metaphysics mini‑series (0:03)
  • Why metaphysics for AI (0:33)
  • "What is thinking?" Voices from friends (2:24)
  • “Reasoning” in LLMs (3:43)
  • Turing test vs. true thinking (5:06)
  • Agentic limits and stepwise reasoning (10:55)
  • Math, context, and systemic failures (12:58)
  • Defining the roadmap for the series (15:33)

Metaphysics and its implications for modern AI: A series introduction

What if our industry’s favorite word, “reasoning,” is mostly a magic trick? We kick off a metaphysics mini‑series to ground AI debates in first principles: existence, objects, properties, causation, change, and space‑time. Instead of treating “thinking” as a vibe, we ask what deliberation really requires (memory, context, goals, and stepwise inference) and measure today’s models against that bar. Across the series, we explore why mixture‑of‑experts and ensemble prompting look like reflection but often reduce to coordinated sampling and ranking.
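To make the “coordinated sampling and ranking” point concrete, here is a minimal sketch in Python. The `generate()` and `score()` functions are hypothetical stand‑ins, not any real model API; the shape of the loop is the point: draw several candidates, rank them, and return the top one.

```python
# Minimal sketch of ensemble prompting as sampling + ranking.
# Assumptions: generate() and score() are hypothetical placeholders for a
# model's sampling call and a ranking heuristic (reward model, vote, log-prob).
import random

def generate(prompt: str, temperature: float = 0.8) -> str:
    # Hypothetical sampler: a real system would call an LLM here.
    return f"candidate answer ({random.random():.3f}) to: {prompt}"

def score(candidate: str) -> float:
    # Hypothetical ranker: returns a preference score for one candidate.
    return random.random()

def ensemble_prompt(prompt: str, n_samples: int = 5) -> str:
    # What looks like "reflection" is coordinated sampling followed by ranking.
    candidates = [generate(prompt) for _ in range(n_samples)]
    return max(candidates, key=score)

print(ensemble_prompt("What is thinking?"))
```

Nothing in that loop builds a causal model or revisits its own goals; it only widens the search and picks a winner, which is why fluency from such pipelines can be mistaken for deliberation.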

We share candid definitions of thinking from our colleagues and use them to test popular claims. From the Apple “Illusion of Thinking” paper to the humble Towers of Hanoi, we show where performance collapses as problems scale, even when algorithms are provided. In agentic setups—planning, tool use, and multi‑step execution—we examine why context slips, objectives drift, and math composition fails when interdependent parts must cohere. The Turing test, born as imitation, helps explain the confusion: fluency can mask the absence of causal models, temporal reasoning, and robust decomposition.
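For listeners who want the Towers of Hanoi example in hand, here is a minimal sketch of the classic recursive algorithm; this is our own illustration, not code from the Apple paper. Even with the procedure fully specified, the move count grows as 2^n − 1, which is one concrete way problem size outruns a fixed reasoning budget.

```python
# Minimal sketch: the classic recursive Towers of Hanoi solution.
# The algorithm is trivial to state, but the output length is exponential.
def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    if n == 0:
        return []
    moves = hanoi(n - 1, src, dst, aux)   # move the top n-1 disks out of the way
    moves.append((src, dst))              # move the largest disk
    moves += hanoi(n - 1, aux, src, dst)  # stack the n-1 disks back on top
    return moves

for n in (3, 5, 10, 15):
    print(n, "disks ->", len(hanoi(n)), "moves")  # always 2**n - 1
```

Scaling n costs the recursion almost nothing, but a model emitting moves step by step must stay coherent across an ever‑longer chain, which is where the collapses discussed in the episode show up.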

Our goal isn’t to put down LLMs; it’s to name what they do well and stop promising what they can’t deliver. Drawing on thinkers from Aristotle and Plato through modern safety research, we sketch a roadmap for clearer definitions, better benchmarks, and more honest system design. We also preview upcoming conversations on space‑time, causation, and world modeling, and invite your toughest references and counterexamples to keep us honest.

Subscribe, share with a friend who loves (or doubts) “reasoning mode,” and tell us: how would you define thinking—and what evidence would change your mind?

Resources

Do you have questions about metaphysics or reasoning in AI?

Connect with the hosts to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - See past episodes and submit your feedback! Your input continues to inspire future episodes.