AI’s Evolution: Are We the Fungi of the New World?

In 2016, the world paused as AlphaGo, an AI, out-thought Lee Sedol, a grandmaster of the ancient board game Go. It was a surreal moment, akin to witnessing a seismic shift in the very foundation of human prowess. But was it merely a surprise, or a harbinger of a metamorphosis of intelligence?

Cade Metz, in “Genius Makers”, ponders, “What does it mean to be smart? To be human?” Now, think deeper – do we equate intelligence with computation, strategy, or intuition? And if AI outmanoeuvres us in these arenas, does it become more intelligent, more human than us?

Reflect back to Earth’s early days, to the “great oxygenation event”. The world then was ruled by fungi and algae. Then photosynthesis was born, leading to the rise of plants that eventually paved the path for animals. Now, imagine if the fungi had consciousness. Imagine them coming together, debating, fearing the rise of photosynthesis, declaring, “This will displace us! It’s dangerous!” Sound familiar?

Joscha Bach draws this parallel, nudging us to realize that perhaps, in the grand scheme, humanity’s apprehension about AI mirrors the fungi’s would-be fear of photosynthesis. We might be standing at the brink of an evolutionary leap – not of biology, but of intelligence.

There are opposing views as well, centred on the unpredictable trajectories AI might take. Much like the unpredictability of how photosynthesis would redefine life, we’re on the edge, looking into an abyss of possibilities with AI. The questions linger: How many trajectories are we comfortable with? Where do we draw the boundaries?

While concerns are valid, should they paralyze us into inaction? Or, should they fuel our endeavors to shape, guide, and understand this evolution? Are we, in our fear of becoming obsolete, hindering the next epoch of intelligence?

As we continue to chart the unknown territories of AI, we must not just push computational boundaries but constantly question our place, purpose, and definitions. For in the grand tapestry of existence, it’s not just about surviving but evolving.

A philosophical question about the nature of progress, intelligence, and long-term survival

The Trade-offs of Delaying Evolution: If we could somehow delay the evolution of AI, we might gain additional time to address some of humanity’s pressing challenges, such as climate change, resource scarcity, or geopolitical tensions. Ideally, in that window, humanity would mature both technologically and ethically.

The Perils of Hindrance: However, the flip side is that by hindering the rise of AI, we might be stalling an inevitable step in the progression of intelligence. It’s been argued that every evolutionary leap (from single cells to multicellular organisms, from reptiles to mammals, etc.) faces challenges and resistance. Each leap could be seen as a necessary step towards the development of more complex and robust forms of life. If AI is the next evolutionary “step,” then trying to prevent it might be against the natural progression of things.

AI as Humanity’s Guardian: There’s also the perspective that a sufficiently advanced and benevolent AI could solve many of the challenges that we, as humans, have found insurmountable. From reversing climate change to curing all diseases, an AI’s computational power and capability might be humanity’s best hope for survival in the long run.

Ethical Implications: However, it’s essential to consider the ethical implications. If we rely on AI for our salvation, what becomes of human agency and free will? Is a world preserved by machines one where humans can still find purpose and meaning?

The Nature of Intelligence and Evolution: Going back to Joscha Bach’s analogy, nature doesn’t have a preference or a moral stance. It simply evolves based on the principles of survival and replication. In that view, if AI is better equipped to survive and thrive in the universe, it would naturally come to dominate.

It’s a complex balance. 

By trying to delay the evolution of AI, we could be buying time for humanity to address its challenges, but we could also be postponing an inevitable and possibly beneficial transition.

At its heart, this is a question about the value we place on human experience, agency, and the unknown potential of artificial intelligence.