Cosmologist and Massachusetts Institute of Technology physics professor Max Tegmark is a big-picture thinker. His previous book, “Our Mathematical Universe,” describes how our universe may be a mathematical construct and makes a compelling argument for multiverse theory. The math is a bit tricky, but the prose is easy to follow. When he defines the words “life” and “intelligence” for his new book, “Life 3.0: Being Human in the Age of Artificial Intelligence,” he first strips both words of their anthropocentric baggage.
Life, he argues, is a “process that can retain its complexity and reproduce,” and “intelligence is the ability to accomplish complex goals.” These definitions serve two purposes. They minimize potential confusion for the reader and move the frame of conversation from science and engineering to a social and political context. This is crucial, he insists, because our civilization is on the cusp of a revolution in engineering and science that could fundamentally alter the trajectory of our species’ evolution on this planet. The conversation is for everyone because the implications change everything.
“Life 3.0” begins with the evolution of intelligence and demonstrates how our progeny – both biological and man-made – will shape every aspect of life on Earth in the coming decades, whether we are ready or not. More importantly, he dispels the popular-science notion that artificial intelligence might emerge as an evil force working against humanity. He focuses instead on the very real threat that AI will attain competency, disrupting daily life for almost everyone, before we can react to or shape its influence.
Tegmark insists that narrow, competent AIs are already transforming the way we buy a cup of coffee, read maps and interpret foreign languages through the apps on our phones. Killer robots are unnecessary and beside the point. The end of the beginning is nigh.
The Three Stages of Life
Tegmark sees the evolution of life on Earth as a series of stages, proceeding naturally from the simplest to the most complex.
He defines Life 1.0 as the biological stage: a bacterium’s ability to modify its own hardware (the physical body) and software (its behavior) is limited by the speed of biological evolution. Simple organisms change only across generations; they cannot change during any one organism’s lifetime.
Life 2.0 is the cultural stage, represented best by human beings. Our hardware is a product of biological evolution, but we possess the ability to upgrade our own software through learning within our own lifetimes. Indeed, this is the main advantage of the cultural stage of life, as most of our growth toward adulthood and all of our operating intelligence is acquired after birth. None of us, he argues, are born with the ability to speak perfect English or ace our college entry exams. We modify our software as we grow and can adapt to our circumstances through learning.
Tegmark defines Life 3.0 as the technological stage. Organisms that achieve this level possess the ability to design and modify both their own hardware and their own software within their lifetimes, while retaining the ability to self-replicate.
The Three Schools of Thought on AI
Tegmark says the idea that we as human beings can create AI that meets his definition of Life 3.0 is “wonderfully controversial.” He divides adherents into three groups: digital utopians, the beneficial-AI movement and techno-skeptics.
Digital utopians are convinced that we are on the verge of creating an artificial general intelligence in the next 20-100 years and that it will ultimately benefit everyone.
Advocates of the beneficial AI movement argue for a similar timeline, but insist that AI safety research is both warranted and useful, and can be used to guide our progress toward optimal outcomes for human beings.
Techno-skeptics argue that we are, at best, hundreds of years from creating an AGI, and that worrying too much over its potential impact or capabilities is both premature and stifling to progress and innovation. Prominent techno-skeptic Andrew Ng, former chief scientist at Baidu, China’s Google equivalent, sums up their position: “fearing the rise of killer robots is like worrying about overpopulation on Mars.”
Tegmark’s book initially attempts to move the discussion away from AGI toward narrower forms of intelligence. Long before we can build a human-level AI that can rival our position as the dominant lifeform on Earth, narrow AI will transform our jobs, energy, warfare, politics, crime and relationships in ways we are only beginning to understand or discuss. It is a conversation we need to have now, before our digital offspring shake our foundations to dust.
Narrow vs. General AI
We have been living with narrow AI for a long time. Calculators can perform complex calculations faster and more accurately than any human being. Financial firms use algorithms to perform high-speed trades on the stock market and earn big returns on millions of tiny transactions over a short period of time. In 2015, Google DeepMind published an AI that could learn to master dozens of Atari video games with zero instructions. A later DeepMind system, AlphaGo, learned to play Go and in 2016 defeated Lee Sedol, one of the world’s strongest players. None of these AI systems can do everything we can do. They are not AGIs. Their focus is incredibly specific, but transformative in their achievement within the precise confines of their initial purpose.
When Google’s DeepMind published details of its game-playing system, it revealed that the AI was able to learn and master games without reference to any of the markers humans use to define them. It used a technique called deep reinforcement learning to analyze each game through a system of positive rewards. The agent interacted with streams of numbers and learned to maximize its score. It possessed no frame of reference for the traditions of the games it played, or the metaphysical and philosophical implications of strategy and competition. It simply played for rewards and learned to choose moves, mimicking player intuition, that achieved maximum results in the shortest amount of time.
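The reward-driven idea Tegmark describes can be sketched in a few lines. The following is a toy illustration, not DeepMind's actual code: it uses tabular Q-learning (a simpler lookup-table cousin of deep reinforcement learning), and the corridor environment, states and rewards are all invented for the example. The agent sees only a state index and a numeric score, yet still learns a sensible policy.

```python
import random

# Toy environment: a corridor of 5 states (0..4). Reaching state 4 pays
# reward 1; everything else pays 0. The agent knows none of this "meaning" --
# it only observes state numbers and rewards, like a score in a game.
N_STATES = 5
ACTIONS = [-1, +1]              # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(200):                      # cap episode length
            if rng.random() < EPS:                # occasionally explore
                a = rng.choice(ACTIONS)
            else:                                 # otherwise exploit (random tie-break)
                best = max(q[(s, b)] for b in ACTIONS)
                a = rng.choice([b for b in ACTIONS if q[(s, b)] == best])
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward
            # observed reward + discounted best future value
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
            s = s2
            if s == N_STATES - 1:                 # goal reached, episode over
                break
    return q

q = train()
# After training, the greedy action from every non-goal state is +1 (move right).
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

DeepMind's systems replace the lookup table with a deep neural network so the same reward-maximizing loop can scale to raw pixels, but the core logic of learning purely from score is the same.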
The implications of this type of learning strategy are profound, even for very narrow forms of AI. AlphaGo knew nothing about the millennia-old tradition behind Go, yet it rapidly learned to intuit strategies no human had discovered over thousands of years of play, using a process that appears “creative.” Most importantly, it could teach its human opponents these moves and strategies by defeating them, a feedback loop not dissimilar to the reinforcement learning that made its discoveries possible in the first place.
Tegmark argues that there is simply no reason a narrow AI cannot intuit novel, creative strategies in many other areas of human thought or achievement. The Google Brain team has made rapid advances in the translation of human languages and can now translate nearly as well as a human translator, all without understanding the meaning of the words themselves. He invites us to imagine what AI can do for other, more immediate areas of human thought and achievement, from our social and political systems to environmental, economic and warfare strategies. It is no accident that Tegmark and the Future of Life Institute, which he co-founded, argue that AI is the most important question of our time.
In short, narrow AI, when turned to problems outside of games, will help us innovate and create new strategies in many of the most vital and consequential aspects of day-to-day life. That it has begun to change the way we think about our world, long before it achieves any semblance of human-level intelligence, is a revolution we cannot afford to ignore.
Ultimately, Tegmark’s “Life 3.0” serves as a baseline resource for future conversations about the implications of our inventiveness as a species and as a roadmap for the evolution of our civilization. Like his previous work, “Life 3.0” is a book that I will read from and refer to for years to come. Tegmark invites us all to participate in a conversation that will shape the future of life on our planet and throughout the universe. It is easy to read and difficult to ignore, or to put down. Its questions are both haunting and thought-provoking.
We should discuss the ramifications of Life 3.0 before it arrives, and we must do so before we need the answers. A future ignored has a way of arriving before we are ready for it.