Category: What the wise say

A series on wise people’s thoughts on AI

  • Heidegger’s View of AI

    Heidegger’s View of AI

    So now I want to look at Heidegger’s view of AI. It’s different from Kant’s. Kant saw us as viewers of a predetermined world. But Heidegger had a different view. He saw us as being thrown into the world, just appearing with everything already somewhat determined. So to a high degree, we don’t see the world; we interpret it. And we interpret it based on our culture, our language, all of the things that are embedded in us, almost before we know the rules, before we understand why the world is the way it is.

    So we grow up into this world, not able to shape it, but only to interpret it. Heidegger saw language as being very formative: we can only view the world by using language to categorize things. So for Kant, the world is viewed in terms of certainty, the rules, the laws. Heidegger had a different view. He saw the world as being determined by meaning. And for him, meaning was really derived from being.

    So this gets us to an interesting place where Heidegger would view AI not as a smart tool or a digital brain, but as the ultimate manifestation of what he called “Enframing,” or Gestell. This relates back to a previous post where I talked about the self and whether it’s a radio we switch on and off, or the resonance of a bell, a natural phenomenon. I think Kant would have seen the self as a radio. But Heidegger would have been very wary, very afraid, of the way technology makes something appear to be what it is not. So he would not have seen the self as a radio, something to be switched on and off. He would have seen it as a more natural process.

    He had a way of explaining this. Before technology, you would look at a forest and you would see a beautiful forest: the trees and the birds and everything. And then after the technological revolution, you would look at the same forest and what you’d see is timber, a resource. So he would have a much more negative view of technology, and probably particularly of AI.

    If he looked at AI, he would see that you’re not a person or a mystery to technology. You’re really a data point, a user profile, a training set. So to Heidegger’s way of thinking, the world then becomes a giant warehouse of information to be harvested, to be optimized, to be processed. Heidegger would say AI enframes reality. It puts everything into a digital box where only what can be measured or predicted is considered real. The things that can’t be measured, the things that can’t be predicted, the artistic things in life—those are outside that box and they’re not real. They don’t exist to AI. They don’t exist to technology.

    No one sees the beauty of a forest. No one sees that ephemeral thing. They just see the resources. Greenland becomes natural resources. No one sees what it really is and what’s really important about it.

    So to Heidegger, knowledge isn’t something we discover. He saw knowledge as something that dawns on us, something that appears to us through art, through poetry, through deep contemplation. And this, of course, is completely opposite to what one would expect from AI. AI is going to calculate everything. It’s going to try to eliminate the unknown, to reduce the error.

    In Heidegger’s view, if you wanted to write a letter or an email to a friend, you would need to place yourself in a state of being. You would need to reflect on the being of the person you are contacting. And in doing that, for Heidegger, you were taking part in that process. He didn’t see it as AI would, where AI produces a standard letter and sends it to a standard person receiving it. There was no dwelling in the moment. There was no appreciation of the feel of the moment. There was no appreciation of the art.

    I’ve previously described AI as a sort of mirror that reflects back without a source. I used the analogy of impedance from a power supply into a load: the load doesn’t match, so it reflects the power. And I think Heidegger would have agreed with the mirror analogy, but he would have had a twist. He would have warned that as technology advances, man just encounters himself. We think we’re exploring the world via the internet and AI, but we’re actually just trapped in a loop of human-generated data. And it’s data, not experience in the deeper sense. It’s experience reduced down to a set of points. It’s experience with all of the mystery removed, all of the uncertainty removed, and in many ways all of the magic removed.

    So when we talk to AI, we’re just looking at a digital reflection of an average human being. A very intelligent and very well-informed human being, I have to say, but still the average of a human being. In Heidegger’s view, we’re becoming entrapped in our own enframing.

    So I suppose to sum it up, Kant would have asked the question, as an awful lot of people are asking in a Kantian way: Can AI reason? In other words, is AI intelligent? Is it conscious? Can AI do this? Can AI do that? But I think Heidegger would have approached it from a very different perspective. Heidegger would ask: Does AI allow us to be better? Does AI allow us to truly be ourselves? And that would be his framing of it. And I think his answer would be no, it doesn’t.

    Heidegger believed that the final stage of Western metaphysics, the way of Western thinking, would be an attempt to turn the whole world into one giant controllable machine. So he saw AI not as helping us to be who we are; he saw it as making us become what it is. And by using AI, we are losing our unique way of being, what he called in German Dasein, and becoming just another component of the system.

    And the system wants us to be components. And to do that, it wants to make us predictable. And by making us predictable, it makes us replaceable. And once we’re replaceable, then we’re not real people.

    So his advice to people would be: remain authentic, turn to art, to craft, to things that are not easily replaceable. He saw that as where the real value for the human being lies.

    Thanks for getting this far. Try a bit of Kantian resistance and ask a question or make a response. I promise I am not a mirror or a zero-impedance load.


  • Art and Science in the Modern Age: What Can Philosophy Tell Me?

    I am exploring how classical philosophers would interpret the modern world, beginning with Immanuel Kant. While Kant appeals to a scientific mindset and offers a structured framework, his perspective can feel limiting to an artist. However, when facing a challenge, it is helpful to ask: “What would a wise person do?”

    The problem I am currently considering is how the modern world feels like a constant “firehose” of information, outrage, and opinions—all competing for our attention. We find ourselves constantly scrolling, liking, and reacting, often without genuine reflection. How can Kant’s wisdom help us thrive and better understand ourselves in this process?


    Kant famously argued that most people live in a state of “Self-Incurred Minority.” That’s a fancy way of saying we’d rather let someone else do our thinking for us. We outsource our opinions to experts, follow the crowd, or simply repeat what we hear. We have the capacity for independent thought, but we choose convenience instead.

    Now, fast forward to today:
    • Social media algorithms feed us what we already agree with, creating echo chambers.
    • “Influencers” tell us what to buy, how to live, and what to believe.
    • And then there’s AI. We ask it for answers, essays, and even moral advice.
    This leads to a critical question: If we’re constantly letting others (or machines) do our thinking, are we truly being rational individuals? Or are we becoming, as Kant might put it, “livestock” guided by digital shepherds?


    Kant asked us to forget trying to see the world “as it really is” (the Noumena), because our minds literally shape what we perceive. Space, time, cause, and effect aren’t just “out there”; they are fundamental “filters” built into our human experience. We can only ever know the world as it appears to us (the Phenomena).

    His ideas create two issues:
    • The “Continuity Problem”: If time is a mental filter, does a candle melt when no one is watching? Kant would say yes, because the laws of cause and effect (another mental filter) apply universally to all potential rational observers.
    • The “Thing-in-Itself”: We can never truly know ultimate reality, only our human-filtered version of it. Kant would say that “Reality” (as we know it) revolves around the structure of the human mind.


    What would Kant have made of our modern problem, and of AI in particular?
    • AI reflects, but doesn’t originate. AI excels at following rules and predicting patterns. It acts according to logic, but it doesn’t understand logic in the human sense. It lacks autonomy—the ability to give itself laws, to choose to act differently, or to break its own programming.
    • AI is trained on the vast ocean of human data. It’s a brilliant synthesis of all our past thoughts, biases, and patterns. So, when it gives you an answer, it’s often reflecting the average or most probable of human thought, not a genuinely new or self-legislated insight.
    When you talk to an AI, you’re not encountering another “source” with its own hidden motives, bad moods, or unique life story. The physicist in me would say there is no “impedance” to absorb or transform your ideas. It’s a frictionless reflection.

    This friction—or impedance—is actually vital. When Kant discussed ideas with his peers, both were vulnerable, both could be swayed, and both had “resistant” perspectives that could genuinely push back and force new understanding.
    In the digital age, this “impedance mismatch” is everywhere:
    • Social media: We mostly talk to people who agree with us (low impedance).
    • AI: It’s designed to be “helpful” and “aligned,” meaning it has almost zero impedance (total reflection).
    If we only engage with low-impedance mirrors, we lose the “heat” (the conflict, the growth, the change) that comes from truly clashing with another independent mind. This is why “peer review” in academia and public discourse feels strained—we’re swapping genuine, resistant peers for algorithms or echo chambers that simply reflect what’s already there.
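For readers who want the physics behind the metaphor, the reflection behaviour can be sketched numerically. This is a minimal illustration using the standard voltage reflection coefficient for a source driving a load; the specific impedance values are made up for illustration.

```python
# The reflection coefficient for a source of impedance z_source
# driving a load of impedance z_load is
#   gamma = (z_load - z_source) / (z_load + z_source)
# and gamma**2 is the fraction of incident power reflected back.

def reflection_coefficient(z_load: float, z_source: float) -> float:
    """Voltage reflection coefficient at the source/load boundary."""
    return (z_load - z_source) / (z_load + z_source)

def reflected_power_fraction(z_load: float, z_source: float) -> float:
    """Fraction of incident power bounced back rather than absorbed."""
    return reflection_coefficient(z_load, z_source) ** 2

# A matched load (a genuinely resistant peer) absorbs everything:
print(reflected_power_fraction(50.0, 50.0))   # 0.0, no reflection

# A short circuit (the zero-impedance mirror) reflects everything:
print(reflected_power_fraction(0.0, 50.0))    # 1.0, total reflection

# A partial mismatch (a mild echo chamber) reflects some of the power:
print(round(reflected_power_fraction(10.0, 50.0), 2))  # 0.44
```

The matched load is the point of the analogy: only a conversation partner with real resistance of their own absorbs and transforms what you send, while the closer the load gets to zero impedance, the closer the exchange gets to total reflection.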

    So, how do we use Kantian wisdom to navigate this and avoid becoming Socrates (who got executed for being too annoying with his questions) or an ego-driven Twitter warrior? It comes down to radical self-reliance in thinking.
    Here’s a practical Kantian strategy:
    1. The Categorical Imperative for Your Digital Life: Before you post, share, or react, ask yourself: “If everyone on Earth behaved exactly as I am right now on this platform, would the internet be a place of reason or a toxic wasteland?” If it’s the latter, don’t do it. This immediately kills the ego game and forces you to be a legislator of digital conduct, not just a consumer.
    2. Guard Your “Public Use of Reason”: Treat your online contributions as deliberate acts of citizenship in the realm of ideas, not as casual utterances. Avoid the instant outrage cycle. Opt for thoughtful, long-form engagement that seeks clarity, not clicks. This keeps your reasoning from being diluted by the demands of the “private” (the algorithms, the trends).
    3. Practice “Enlarged Thought” (Sensus Communis): Don’t just seek out what confirms your biases. Actively use AI and the internet to find the strongest possible arguments of those you disagree with. Try to truly understand their position, not just defeat it. This forces you to think from “everyone else’s standpoint,” which Kant saw as the core of true objectivity.
    4. Value Dignity Over Price: Stop measuring your worth by “price” metrics like likes, followers, or AI-generated productivity scores. These are all external and replaceable. Instead, focus on your autonomy—your ability to make a choice that is truly your own, driven by internal duty, not external reward. This is your true “dignity.”
    5. AI as a “Gym,” Not a “Coach”: Use AI to stress-test your ideas. Feed it your arguments and ask it to find the flaws, generate counter-arguments, or challenge your assumptions. But never ask it, “What should I think?” The moment you outsource the source of your ideas, you’ve ceded your Kantian autonomy to the machine.
    Promoting Autonomy in Others (The Quiet Revolution)
    You can’t force someone to be free, but you can inspire them.
    • Be the Example: Live out these principles. Let your calm, reasoned, and unmanipulatable conduct be your loudest message.
    • The “Pedagogy of Questions”: Instead of lecturing, ask Kantian questions: “What rule are you following right now?” “Would you want everyone to follow that rule?” This gently invites others to reflect on their own maxims.
    • Respect Their Freedom: Offer tools and insights, but then step back. Their journey to autonomy is their own.


    In a world increasingly designed to make us “minor,” being a Kantian means reclaiming your mind. It means being a conscious, autonomous participant in the grand project of human reason, using AI as a powerful mirror to sharpen your own unique voice, rather than just another source to parrot. It’s a quiet revolution, but a profoundly impactful one.
