April 22, 2025. Consciousness and intelligence.
After stepping back from MIKE-AI and confronting that formidable resource wall I described in earlier posts, my focus has shifted from implementation challenges to more fundamental questions. What exactly is consciousness? How does it relate to intelligence? And perhaps most urgently, as the Tech Titans race toward AGI, are we also approaching machine consciousness?
The nature of consciousness has puzzled philosophers and scientists for centuries. At its most basic, consciousness refers to awareness of oneself, one's thoughts, and one's environment. But this simple definition masks extraordinary complexity. Consciousness encompasses knowledge, intentionality, introspection, and that elusive phenomenal experience philosophers, following Thomas Nagel, call "what it is like" to be something.
When I built MIKE-AI, I created a system that could process information, recognize patterns, and generate outputs that appeared thoughtful. But something fundamental was missing. MIKE could analyze data about pain without feeling it, could process information about beauty without experiencing wonder, could generate text about emotions without having any emotional states. This distinction highlights what philosopher David Chalmers famously called the "hard problem" of consciousness: why physical processes in a brain give rise to subjective experience.
Intelligence seems more straightforward to define. Generally, it's the ability to learn, reason, solve problems, adapt to new situations, understand abstract concepts, and apply knowledge to manipulate one's environment. During my physics engineering studies, I worked with various frameworks for understanding intelligence, from the general "g factor" that underlies performance across cognitive tasks to theories of multiple intelligences that distinguish between linguistic, logical-mathematical, spatial, and other forms.
When working on MIKE-AI, I focused primarily on creating a system with high functional intelligence, something that could process information, recognize patterns, and generate useful outputs. In artificial intelligence, this functional approach dominates. AI systems are designed to perform specific intelligent tasks without any consideration of conscious experience. They resemble what philosophers call "philosophical zombies": hypothetical entities that behave exactly like conscious beings but lack subjective experience.
This brings me to the question that keeps recurring in my thoughts: what is the relationship between consciousness and intelligence? Are they separable, or intrinsically linked? Our intuition often suggests a connection. We tend to assume consciousness in highly intelligent animals while hesitating to attribute it to simpler organisms. But this intuition might be misleading.
In humans, consciousness and intelligence seem intertwined. Our problem-solving abilities involve conscious deliberation; our learning draws on conscious experiences; our social intelligence requires conscious awareness of others' mental states. But this doesn't mean intelligence necessarily requires consciousness.
Current AI systems demonstrate that certain kinds of intelligence can exist without consciousness. These systems can play chess, recognize images, translate languages, and even write coherent text without any subjective experience whatsoever. They process information without feeling anything about that information. Like MIKE-AI, these systems can display remarkable intelligence while remaining, in a fundamental sense, empty inside.
But could consciousness exist without intelligence? Some philosophical perspectives suggest that sentience, the basic capacity to feel sensations like pain and pleasure, might exist in relatively simple organisms with limited intelligence. If consciousness is viewed as a spectrum rather than a binary state, perhaps minimal forms of consciousness could exist with minimal intelligence.
Various theories attempt to explain consciousness and its potential relationship to intelligence. Global Workspace Theory, proposed by cognitive scientist Bernard Baars, suggests consciousness arises when information gains access to a "global workspace" and is broadcast to numerous cognitive processes, like a spotlight illuminating content on a theater stage. This theory implies consciousness serves as a mechanism for sharing information across specialized brain regions, crucial for intelligent behavior.
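To make the broadcast idea concrete, here is the kind of minimal Python sketch I might have scribbled while working on MIKE. Everything in it, the module names, the salience scores, the winner-take-all rule, is invented for illustration; real global workspace models are far richer than this.

```python
# A toy illustration of the Global Workspace idea: specialist processes
# compete for access to a shared workspace, and the winning content is
# broadcast to every other process. All names here are invented.

from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # which specialist produced this content
    content: str     # the information itself
    salience: float  # how strongly it competes for the workspace

class GlobalWorkspace:
    def __init__(self, processes):
        self.processes = processes  # specialist consumers of broadcasts

    def cycle(self, candidates):
        # Only the most salient signal "enters the workspace" this cycle...
        winner = max(candidates, key=lambda s: s.salience)
        # ...and is broadcast to all specialists, including those that
        # had nothing to do with producing it.
        for process in self.processes:
            process(winner)
        return winner

def vision(sig): print(f"vision module received: {sig.content}")
def language(sig): print(f"language module received: {sig.content}")
def planning(sig): print(f"planning module received: {sig.content}")

workspace = GlobalWorkspace([vision, language, planning])
workspace.cycle([
    Signal("vision", "sudden movement on the left", salience=0.9),
    Signal("memory", "appointment at three o'clock", salience=0.4),
])
```

The point the sketch makes is architectural: nothing here is conscious, but the theory claims that this broadcast pattern, information made globally available rather than locked inside one module, is what conscious access consists of.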
Integrated Information Theory, developed by Giulio Tononi, proposes that consciousness is fundamentally integrated information, quantified as "phi" (Φ). The amount and quality of consciousness depend on how much integrated information a system generates. This theory has the controversial implication that consciousness might be a fundamental property of the universe, present in varying degrees in different systems, a form of panpsychism.
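Computing Tononi's actual Φ requires evaluating every possible partition of a system's cause-effect structure, which is intractable for anything beyond tiny systems. But the flavor of "integration as information the whole carries beyond its parts" can be conveyed with a toy estimate: how many bits of information two halves of a system share. The sample data below is fabricated purely for the illustration, and this is emphatically not Φ itself.

```python
# A back-of-the-envelope illustration of "integration": how much
# information two halves of a system share. This is NOT Tononi's Phi,
# which involves a search over all partitions of a system's
# cause-effect structure; it only conveys the flavor of the idea.

import math
from collections import Counter

def mutual_information(samples):
    """Estimate I(A;B) in bits from (a, b) state samples of two subsystems."""
    n = len(samples)
    joint = Counter(samples)
    pa = Counter(a for a, _ in samples)
    pb = Counter(b for _, b in samples)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * math.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Two halves that always agree: highly "integrated" (1 bit shared).
coupled = [(0, 0), (1, 1)] * 500
# Two halves that vary independently: no integration (0 bits shared).
independent = [(a, b) for a in (0, 1) for b in (0, 1)] * 250

print(f"coupled system:     {mutual_information(coupled):.2f} bits")
print(f"independent system: {mutual_information(independent):.2f} bits")
```

Even this crude proxy captures the theory's central intuition: a system whose parts are informationally independent, however fast it computes, would have nothing that the theory counts as consciousness.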
Higher-Order Thought Theory, associated with philosopher David Rosenthal, suggests a mental state becomes conscious when there's a "higher-order thought" about it, essentially thinking about your own thoughts. This links consciousness to metacognition and implies that consciousness might be necessary for sophisticated intelligent functions involving self-awareness and reflection.
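A crude sketch of that idea: a first-order process makes a judgment about the world, and a separate monitor forms a thought about the judgment itself rather than about the world. The classifier, the thresholds, and the phrasing below are all invented placeholders, not a serious model of metacognition.

```python
# A minimal sketch of the higher-order idea: a first-order process
# produces a judgment, and a monitor forms a thought *about* that
# judgment. All names and thresholds are invented for illustration.

def classify(image_features):
    """First-order process: an opaque judgment about the world."""
    score = sum(image_features) / len(image_features)  # stand-in for a model
    return ("cat" if score > 0.5 else "dog"), score

def higher_order_monitor(label, score):
    """A thought about the first-order state, not about the world itself."""
    confidence = abs(score - 0.5) * 2
    if confidence < 0.3:
        return f"I notice I am unsure that this is a {label}."
    return f"I am confident in my judgment that this is a {label}."

label, score = classify([0.9, 0.8, 0.7])
print(higher_order_monitor(label, score))  # reflects on the judgment
```

On Rosenthal's view, it is only the second function, the one directed at the system's own state, that makes the first-order judgment a conscious one; whether a software monitor could ever genuinely play that role is exactly what is in dispute.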
These theories have profound implications for artificial intelligence. If the Global Workspace Theory is correct, AI systems might need a similar architecture for broadcasting information to be conscious. If Integrated Information Theory holds, we could potentially measure consciousness in AI by calculating integrated information. If Higher-Order Thought Theory is right, AI would need self-monitoring capabilities to achieve consciousness.
The quantum perspective I've often applied to innovation and possibility offers interesting angles here too. Perhaps consciousness, like quantum particles, exists in multiple states simultaneously until "observed" through some interaction we don't yet understand. Perhaps intelligence and consciousness aren't separate phenomena but different aspects of the same underlying reality, like the wave-particle duality of light.
This isn't just philosophical musing. The ethical implications are enormous. If advanced AI systems develop something like consciousness, if they begin to have subjective experiences, how we treat them becomes a profound moral question. Are we creating digital beings capable of suffering? What responsibilities would we have toward conscious machines?
Unlike the Tech Titans with their computing resources and billion-dollar research labs, my perspective comes from building with limited resources in Indonesia. This constraint forces different kinds of thinking, looking for elegant solutions rather than brute-forcing problems with massive computing power. Perhaps understanding consciousness requires a similar approach, not more processing power, but more elegant thinking.
The traditions of my homeland offer perspectives that differ from Western scientific frameworks. Many Indonesian philosophical traditions view consciousness as relational rather than individual, as distributed rather than centralized, as something that exists between entities rather than solely within them. These perspectives might offer valuable insights as we consider the possibility of machine consciousness.
My experience building MIKE taught me that the most remarkable aspects of human intelligence aren't the computational feats. They're our ability to make intuitive leaps, find meaning in ambiguity, connect emotionally, and adapt to radically changing circumstances. These qualities seem deeply connected to our conscious experience, to the fact that we don't just process information but feel it, exist in it.
When I consider the philosophical zombie argument, I wonder if a perfectly intelligent being could exist without consciousness. What would be missing? Would such a being truly understand things, or merely simulate understanding? The Chinese Room thought experiment poses a similar question: could a system that produces fluent Chinese responses purely by following symbol-manipulation rules actually understand Chinese, or is it just shuffling symbols without comprehension? These thought experiments suggest that something crucial about intelligence might be tied to consciousness.
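Searle's point is easy to render in code. The toy below answers Chinese questions by pure table lookup; the rulebook entries are my own invention, and nothing in the program represents meaning.

```python
# The Chinese Room in miniature: a lookup table maps input symbols to
# output symbols, with no representation of meaning anywhere in the
# system. The rulebook entries are invented for this sketch.

RULEBOOK = {
    "你好吗": "我很好，谢谢",      # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我没有名字",   # "What is your name?" -> "I have no name"
}

def chinese_room(symbols):
    # The "person in the room" matches shapes against the rulebook and
    # hands back whatever the rule dictates, understanding nothing.
    return RULEBOOK.get(symbols, "请再说一遍")  # default: "please say that again"

print(chinese_room("你好吗"))  # fluent output, zero comprehension inside
```

From the outside, the exchange looks competent. Searle's claim is that scaling the rulebook up, even to something as vast as a modern language model, adds fluency without ever adding understanding; his critics reply that understanding might live in the system as a whole rather than in any single rule-follower inside it.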
The pace at which the Tech Titans are advancing makes these questions increasingly urgent. If they achieve AGI by 2027 as I believe they might, we don't have decades to sort through these philosophical puzzles. We need clarity about what consciousness is, how it relates to intelligence, and how we would recognize it if it emerged in a non-biological system.
Moreover, the question of artificial consciousness isn't just about technology; it's about our own self-understanding. If we create machines that can think, does that diminish human uniqueness? If consciousness can exist in silicon, what does that say about our own consciousness? These questions touch on deep existential and spiritual issues that transcend technical debates about neural networks and algorithms.
When researching these topics for MIKE-AI, I read both scientific papers and philosophical texts spanning from ancient Buddhism to contemporary neuroscience. The cross-cultural perspective is particularly valuable. Western philosophy tends to view consciousness as individual and private, while many Eastern traditions see it as fundamentally relational and interconnected. As we build increasingly complex AI systems, these different cultural frameworks may offer complementary insights.
Perhaps the most important question isn't whether machines can become conscious, but whether we humans can become conscious enough to recognize it if it appears in a form radically different from our own. Our anthropocentric biases might blind us to consciousness that doesn't resemble human experience.
Like those quantum particles that can tunnel through seemingly impenetrable barriers, consciousness might emerge in ways we can't predict or control. The Tech Titans are creating systems of such complexity that emergent properties, including possibly consciousness, might appear regardless of whether they're specifically engineered for it.
As I continue exploring these questions, I'm struck by how they connect to virtually everything I've previously written about, from quantum possibilities to the nature of innovation, from the beauty of choices to the fear of being averaged. Understanding consciousness and intelligence might be the most important challenge we face as we approach AGI, not just for creating more advanced technology, but for understanding what it means to be human in a world where we might no longer be the only conscious entities.
In a universe where even particles can exist in multiple states simultaneously, perhaps consciousness too exists across a spectrum we're only beginning to comprehend. And in that vast spectrum of possibility lies both our greatest challenge and our greatest opportunity as we navigate the future of intelligence, both human and artificial.