The limits of language: implications for ASI

When considering the philosophy that likely underlies the core belief that Large Language Models (LLMs) are the right path to achieving Artificial Superintelligence (ASI) or Artificial General Intelligence (AGI), I could not help but be reminded of Ludwig Wittgenstein’s famous remark: “The limits of my language mean the limits of my world.” His point, in essence, is that the language we use defines the limits of our knowledge and of our understanding of the world.

Following this logic, one may well conclude that there is nothing we know that we cannot describe. Language, the words we use to define and describe, must mark the boundary of our knowledge. As Parmenides argued, “the thing that can be thought and that for the sake of which the thought exists is the same; for you cannot find thought without something that is, as to which it is uttered.” Therefore, if an LLM can capture the entirety of human language, it may be possible to build a machine whose thought is comparable to that of humans.

Many have argued that a machine cannot smell, taste, or see for itself, and that the lack of these senses will prevent it from reaching the human level, let alone moving beyond it. Compelling as this argument may sound, it does not pinpoint what exactly the absence of sensory experience deprives machines of that keeps them from becoming comparable to humans.

So, should we be convinced that ASI (or even AGI) is inevitable? Are we missing other arguments?

Here is my thought: I do believe that the limits of words define our current borders of understanding, the borders of human thought built up over thousands of years. But what about the future? Do we already have language for everything we don’t yet know? Surely not. Are the unknowns simply rearrangements of the words we have today, or will they require the creation of entirely new concepts? The answer lies in truly understanding human cognition: a process that moves from the known to the unknown, continuously redefining the boundaries of knowledge. Such a trajectory requires leaps of mind, conceptual breakthroughs that arise from inspiration and intuition and that transcend what can be fully captured by words or by any existing computational model.

Will we see ASI? I don’t believe so, not yet. If true intelligence requires transcending language itself, then ASI is not inevitable; it is, for now, inconceivable.
