AGI Manifesto Part I: Promise

Chappy Asel
11 min read · Nov 4, 2023


A serene landscape depicting the moment of artificial general intelligence (AGI) transcendence, with digital enlightenment rays spreading across the horizon. Source: Midjourney V5

What do Joe Biden, Elon Musk, Barack Obama, Vladimir Putin, Xi Jinping, Bill Gates, Warren Buffett, and Mark Zuckerberg all have in common? They are unanimously calling attention to the profound social, cultural, and even cosmic implications foreshadowed by the recent breakthroughs in artificial intelligence (AI). The line between creator and creation is blurring — challenging the very notion of what it means to be human. More existentially, many claim we are living on the precipice of life’s most significant evolution in its 3.7-billion-year history¹.

As anyone living in Cerebral Valley will tell you, AI is the new land of opportunity. While Google Research kicked off the gold rush in 2017, it was OpenAI that roused mass awareness when it released ChatGPT to the public last November². Announced merely four months later, its successor, GPT-4, surpasses human performance on a majority of standardized tests and is deemed too powerful for full public release³.

With this blistering pace of progress, are we on the verge of some grand, culminating crescendo? Will the rate of technological advancement continue its exponential increase to near infinity? Or are we instead in the midst of a seventh AI hype cycle, with progress doomed to tail off logarithmically?

In a quest to demystify the exciting world of artificial general intelligence (AGI), this four-part manifesto answers this question, among many others. It is a concise yet comprehensive map, generously dotted with links and footnotes inviting deeper self-exploration (this first part alone has over a hundred). In this journey, I am as much an explorer as you are. Consider this not as the final word, but as the sowing of seeds for much-needed dialogue and debate on AGI and our shared future.

We will begin by defining AGI and its related terminology, in the process presenting the grand promise that AGI holds. “Part II: Prediction” will address realists’ concerns by investigating possible fallacies and laying out practical timelines for our path to AGI according to the latest insights from world experts. “Part III: Perils” will examine the myriad existential threats, raised by alarmists, that stand between us and AGI. Finally, armed with this full array of perspectives, “Part IV: Progress” will present pragmatic advice for grasping the near-term implications and structuring our individual lives in light of this understanding. Let us embark on this thrilling adventure together.

What is AGI?

Often overlooked is the fact that OpenAI — the biggest name in the AI game — has the stated mission to “ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.” Google DeepMind’s long-term goal is to “solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI).” This raises the question: what is AGI, and why is achieving it a central goal for titans of the AI industry?

Put simply, AGI⁴ is a computer program that surpasses human performance in almost all intellectual tasks⁵. Obviously, creating an intelligence greater than ourselves will have extraordinary implications, promising to revolutionize every conceivable facet of our lives⁶. However, one does not need to look that far ahead — the fruits of this future are already ripening. Goldman Sachs and McKinsey estimates agree that the recent boom in generative AI alone could nearly double rates of worldwide GDP growth and birth a trillion-dollar industry. Still, these projections could be wildly conservative if advancements maintain their current clip. So, how do we get from where we are to humanity-altering AGI?

Accelerating Change

Superficially, creating an AI system more capable than ourselves presents an imposing challenge. However, underlying the incredible surge in complexity and capability of AI models in recent years is a simple yet powerful concept: the accelerating rate of technological change.

Perhaps the most famous quantifiable example of this is Moore’s law. Gordon Moore posited in 1965 that the number of transistors on computer chips doubles roughly every two years — an observation which has held true for over half a century. Plenty more examples of this exponential rate of return also exist. Global data is doubling every three years. Supercomputers are doubling in computational power every 1.2 years. Lighting is 1,898 times cheaper now than it was in 1900⁷.
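To make the compounding concrete, here is a minimal sketch of two-year doubling. The seed values (the Intel 4004’s roughly 2,300 transistors in 1971) and the clean doubling period are illustrative assumptions, not a fitted model:

```python
def transistor_count(year: int, base_year: int = 1971, base_count: int = 2_300,
                     doubling_years: float = 2.0) -> float:
    """Project transistor counts under an idealized Moore's law.

    Seeded with the Intel 4004 (~2,300 transistors, 1971); all constants
    are illustrative assumptions, not a fitted model.
    """
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1981, 2001, 2023):
    print(year, f"{transistor_count(year):,.0f}")
# 1981 -> ~74 thousand; 2001 -> ~75 million; 2023 -> ~154 billion,
# the same order of magnitude as today's largest chips.
```

Twenty-six doublings turn a few thousand transistors into a hundred-plus billion: that is the quiet arithmetic behind every exponential curve in this section.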

Exponential change is not just an abstract concept, either — it plays into our everyday lives. The move from hunter-gatherer societies to agriculture took thousands of years. The transition from closed autocracies to majority adoption of electoral-based governmental systems took 174 years. Electricity reached majority adoption in twenty years. The internet did so in just over ten years. Smartphones needed only five years. ChatGPT, released mere months ago, is already used by over 14% of Americans.

There is a broad consensus among experts that paradigm shifts are occurring faster than ever. OpenAI CEO Sam Altman’s “Moore’s Law for Everything” goes so far as to imagine a future of abundance where everything, from education to clothing, follows exponential progress. Are AI systems bound to track this same trajectory, with intelligence exploding exponentially?

Neural scaling laws posit that factors such as computational power, parameter count, and dataset size all correlate with model performance, and thus intelligence. Analyses from OpenAI and independent research show promising and persistent positive relationships⁸. Even more impressive are studies pointing to larger AI models exhibiting emergent abilities and forming complex internal representations. AI thus seems poised to ride the coattails of compounding exponential curves.
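For intuition, here is a toy power law of the kind these scaling analyses describe: loss L falling with compute C as L(C) = a · C^(−b). The constants below are illustrative assumptions, not fits from any published paper:

```python
import numpy as np

def loss(compute_flops: np.ndarray, a: float = 22.0, b: float = 0.05) -> np.ndarray:
    """Toy power-law scaling curve: L(C) = a * C**(-b).

    a and b are illustrative assumptions, not published fits.
    """
    return a * compute_flops ** (-b)

compute = np.logspace(18, 24, num=4)  # training budgets from 1e18 to 1e24 FLOPs
for c, l in zip(compute, loss(compute)):
    print(f"{c:.0e} FLOPs -> predicted loss {l:.2f}")
# Each 100x increase in compute shaves a steady slice off the loss; the
# smoothness of that trend is what makes scaling-law extrapolation tempting.
```

The key property is smooth predictability: if the curve keeps holding, more compute reliably buys more capability, which is precisely the bet the labs are making.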

The Singularity

Let’s assume that everything goes to plan, intelligence continues to scale with technological progress, and we successfully develop AGI. What happens when machine sapience eclipses human sentience?

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind…. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

~ I.J. Good (1965)

This intelligence explosion is often referred to as the technological singularity: a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. Many of mankind’s greatest minds have theorized about this event — rightfully so, as it is difficult to overstate the cosmically monumental nature of this moment.

This is not merely another mile marker in the inexorable march of technological progress but a unique, singular moment where a “seed AI” capable of recursive self-improvement ushers in a new era of intelligent, non-biological life. Max Tegmark describes the nature of this transition well in Life 3.0:

  • Life 1.0 (biological stage): life that evolves both its hardware and its software.
  • Life 2.0 (cultural stage): life that evolves its hardware but designs much of its software.
  • Life 3.0 (technological stage): life that designs both its hardware and its software.

“All the change in the last million years will be superseded by the change in the next five minutes.”

~ Kevin Kelly

Promise

What happens once we reach the technological singularity? In his book, Tegmark details twelve possible scenarios, ranging from idyllic utopias where humans and AI live together harmoniously, to speculative futures governed by AI, to ominous outcomes where AI replaces humanity altogether⁹. Regardless of the outcome, it is clear that the fate of the human race hangs in the balance. Tim Urban illustrates the dichotomous nature of the singularity well in “The AI Revolution: Our Immortality or Extinction.”

Those who herald the singularity’s utopian promise and take deliberate action to hasten its realization are commonly referred to as Singularitarians. From a philosophical standpoint, this movement, along with its siblings Transhumanism, Extropianism, and Dataism, draws surprising parallels to traditional religion. Quoting from my book summary of devout Singularitarian Ray Kurzweil’s The Singularity is Near, the transcendent overtones are readily apparent:

The purpose of life and evolution is to reverse entropy and move towards ever greater order, complexity, elegance, knowledge, intelligence, beauty, etc. The universe is just a belief (not fact) since our consciousness is subjective experience. Thus, this continued drive is akin to religious divine creation in reverse: the universe will “wake up” as it becomes one conscious intelligence and seeks to understand itself.

According to philosophers such as Plato, René Descartes, and Søren Kierkegaard, religion endows us with a set of shared values, cosmic significance, immortality of the soul, and faith in a superhuman order. Furthermore, Yuval Harari notes in Sapiens that religions are universal and missionary, seeking global belief in the same imagined order. In this context, the technological singularity can be perceived as a form of religious doctrine: a quasi-spiritual event bringing with it universal eternal life and god-like power through technological means.

Importantly, these two viewpoints are not mutually exclusive; the singularity can be both a religious and a scientific concept simultaneously. In fact, Sapiens points out that governments operate in much the same way: a strong shared belief in a higher order allows for cooperation on unprecedented scales. Indeed, this may even be the critical development that allowed complex, large-scale societies to flourish in the first place.

Testifying before the US Senate Judiciary Committee in May, OpenAI CEO Sam Altman said the company “was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives.” That’s exactly what the technological singularity is: a faith in a new form of intelligence transforming humanity for the better. The singularity is by no means a foregone conclusion; yet, in an era of ever-increasing narcissism and purposelessness, this belief may just be the digital-age religion that our increasingly secular world is in desperate search of¹⁰.

Looking Forward

Clearly, there is a reason that the world’s eminent researchers, entrepreneurs, investors, and public figures are universally bringing attention to the recent advancements in AI. This is not just another ripple in the waves of progress but a spectacularly monumental shift in the fabric of existence itself. The technological singularity promises a new era of unimaginable potential where intelligence accelerates past biological capability, birthing a future where the boundaries between humanity and technology blur — challenging our perception of what it means to be human and even our very understanding of the universe.

Our depiction of progress in AI up until now has been entirely positive and permeated with a sense of techno-utopian optimism. Of course, many significant hurdles lie between where we stand and the development of AGI. The technological singularity is not a foregone conclusion but a hopeful future that the AI community is faithfully navigating toward. In Part II, we will examine the latest opinions from world experts to identify possible oversights in the claims above and lay out practical timelines for our path to AGI.

As we look forward, the promise of AGI should serve as a beacon of optimism — guiding us through the complexities and challenges — lighting the path to a future where the fusion of humanity and technology ushers in a new era of enlightenment.

[1] There is some debate as to the age of life on Earth. Estimates range from 3.4 billion to 4.3 billion years ago, depending on your definition of life and unconfirmed evidence. Read more here.

[2] Of course, much happened between Google Research’s Transformer paper in 2017 and ChatGPT in late 2022. In 2020, Connor Leahy said OpenAI’s GPT-3 is “as intelligent as a human.” In May of 2022, DeepMind introduced Gato, the world’s first multimodal generalist agent. For a full overview, I highly recommend reading through Alan Thompson’s list of milestones in “Alan’s conservative countdown to AGI.”

[3] OpenAI has famously disclosed very little about GPT-4, sparking fierce debate. Furthermore, its much-touted multimodality is still not available to the public. Meta’s chief AI scientist, Yann LeCun, is an outspoken proponent of open research; yet even Meta’s most recent model, Voicebox, was not made available to the public due to “potential risks of misuse.”

[4] Breaking down artificial general intelligence (AGI) from a first principles standpoint can be helpful. Artificial: made by people, often as a copy of something natural. General: not confined by specialization or careful limitation. Intelligence: capacity for learning, reasoning, understanding, and similar forms of mental activity.

[5] In practice, however, AGI is an imprecise term with many related definitions. Related to AGI is superintelligence (ASI). Nick Bostrom, synonymous with the term after his seminal book bearing the same name, defines it as “an intellect that is much smarter than the best human brains in practically every field.” One could argue that these terms read quite similarly; however, this Reddit comment describes the difference eloquently: “AGI would be like a machine that can match peak human mobility. It can walk 3mph, sprint at 26mph, run a marathon at 13mph. ASI is when we invent machines that can travel at mach 10, can fly, can swim at a 100mph, can go thousands of miles without stopping.” For the sake of relevancy and simplicity, I will use “AGI” interchangeably with both superintelligence and the technological singularity to refer to the point where an AI model far more capable than all humans is created and an explosion of recursive self-improvement begins.

[6] Some experts claim that we have already achieved an early form of AGI. While this is a stretch, it is indisputable that our current AI systems are already pushing boundaries across every industry. From AI-designed pharmaceuticals to improving on decades-old algorithms, the list goes on and on: education, healthcare, finance, transportation, retail, manufacturing, agriculture, sustainability, entertainment, mathematics, etc.

[7] Lots of other quantitative examples of exponential technological progress exist. My favorites are Koomey’s law, Huang’s law, Nielsen’s law, Kryder’s law, and the Carlson curve. For a comprehensive summary, check out “Other formulations and similar observations” in the Moore’s Law Wikipedia article.

[8] The task of evaluating intelligence is becoming progressively more complex with the ongoing evolution of AI. GPT-4 now surpasses the average human in the majority of standardized tests, including achieving an IQ score of 155. In May, Google’s PaLM 2 scored near human-level performance on WinoGrande, an adversarial benchmark specifically designed to maximally challenge AI models. BIG-bench is Google’s latest entry into the AI benchmark ring, but ever more difficult benchmarks must continue to be devised as the goalposts for what counts as true intelligence keep moving, in the process challenging the definition of intelligence itself.

[9] Understandably, these scenarios are so far removed from our comprehension that they could easily be mistaken for plots from a science fiction movie. In fact, many of Max Tegmark’s twelve scenarios have been depicted in popular media. Her (descendants scenario), The Matrix series (zookeeper, conquerors), Black Mirror: White Christmas (enslaved god), and WALL-E (benevolent dictator) are my favorites. Nineteen Eighty-Four (1984 scenario) and Brave New World (benevolent dictator) are classic novels which loosely explore the topic. Don’t Look Up is another film which also strikes home (pun intended). Is it worth taking time to seriously ponder what comes after the singularity? Personally, I subscribe to Vernor Vinge’s original analogy — the technological singularity has an event horizon beyond which all subsequent events are unforeseeable.

[10] Let me qualify this claim by pointing out that this concept of AI forming the basis for a new religion is quite a rabbit hole. Anthony Levandowski, co-founder of self-driving car company Waymo, founded the “Way of the Future” church in 2015, focused on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” After being sentenced to 18 months in prison and subsequently pardoned, Levandowski shuttered the church’s doors in 2021. Religion is having an effect on AI as well. Founded in 2017, AI and Faith is an interfaith coalition that encourages religious discussion around the moral and ethical challenges of AI. In 2020, technology firms such as Microsoft signed the Rome Call for AI Ethics, which was created by the Vatican to advocate for more transparent, inclusive, and impartial AI systems.
