
The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma: Summary & Key Insights

by Mustafa Suleyman


Key Takeaways from The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma

1. The most important technologies are no longer arriving one at a time; they are arriving together, accelerating each other, and spreading faster than institutions can respond.

2. Every age believes it has seen disruption before, but history can comfort us too much.

3. Suleyman’s boldest claim is that artificial intelligence and synthetic biology are not merely industries; they are new forms of power.

4. A dangerous technology is not only one that fails; it is one that scales before society knows how to control it.

5. One of Suleyman’s most practical arguments is that governance can no longer be treated as a slow afterthought.

What Is The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma About?

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma by Mustafa Suleyman is a book about artificial intelligence, synthetic biology, and the governance of transformative technology. In The Coming Wave, Suleyman argues that humanity is entering a new era defined by technologies powerful enough to transform not just industries, but the foundations of politics, security, and human life itself. The book focuses on two especially consequential forces: artificial intelligence and synthetic biology. Unlike previous inventions, these tools are improving exponentially, spreading globally, and becoming easier for individuals, startups, and states to access. That combination creates extraordinary promise alongside extraordinary risk.

Suleyman writes with unusual authority. As a cofounder of DeepMind and later Inflection AI, he has helped build the very systems he is warning the world to govern wisely. He combines insider knowledge of frontier technology with a broad historical perspective on how power shifts when new capabilities emerge.

What makes this book matter is its central dilemma: modern societies depend on innovation and openness, yet the same openness may allow dangerous technologies to escape control. Suleyman does not argue for halting progress. Instead, he asks how we can contain powerful technologies without crushing the benefits they can bring. The result is a timely, urgent framework for anyone trying to understand the future.

This FizzRead summary covers all 10 key chapters of The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma in approximately 10 minutes, distilling the most important ideas, arguments, and takeaways from Mustafa Suleyman's work. Also available as an audio summary and Key Quotes Podcast.


Who Should Read The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma?

This book is perfect for anyone interested in artificial intelligence and emerging technology who wants to gain actionable insights in a short read. Whether you're a student, professional, or lifelong learner, the key ideas from The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma by Mustafa Suleyman will help you think differently.

  • Readers who enjoy books on AI and emerging technology and want practical takeaways
  • Professionals looking to apply new ideas to their work and life
  • Anyone who wants the core insights of The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma in just 10 minutes


Key Chapters

The most important technologies are no longer arriving one at a time; they are arriving together, accelerating each other, and spreading faster than institutions can respond. Suleyman calls this the “coming wave,” a period in which artificial intelligence, synthetic biology, robotics, quantum tools, advanced networks, and other fields converge to create unprecedented power. What makes this different from earlier technological revolutions is not just speed, but accessibility. Capabilities once reserved for governments or giant corporations are increasingly available to smaller organizations and even individuals.

This convergence matters because one breakthrough now amplifies others. AI can accelerate scientific discovery, automate design, optimize supply chains, and even help engineer new biological systems. Synthetic biology can transform medicine, agriculture, and manufacturing. Combined, these technologies become general-purpose power multipliers. They reduce the cost of creation, imitation, and disruption.

Suleyman’s core warning is that society tends to build powerful systems first and govern them later. That delay was manageable with many past technologies. It may be catastrophic with technologies that can autonomously act, replicate, manipulate information at scale, or alter biological reality. Think of AI-generated cyberattacks, lab tools that lower barriers to gene editing, or automated persuasion systems shaping public opinion in real time.

The practical implication is that people in business, government, and education can no longer think in narrow industry categories. A hospital administrator must understand AI security. A policymaker must think about bio-risk and data infrastructure together. A founder building a productivity tool may also be building a geopolitical capability.

Actionable takeaway: Stop viewing advanced technology as a sector trend and start treating it as a society-wide shift in power that requires cross-disciplinary planning now.

Every age believes it has seen disruption before, but history can comfort us too much. Suleyman examines earlier waves of transformation such as the steam engine, electricity, industrial chemistry, aviation, and computing. These technologies reshaped work, warfare, wealth, and governance. Yet even the largest past revolutions generally changed one layer of society at a time. The coming wave reaches into intelligence itself, biological design, and the infrastructure of decision-making.

Historical analogy still matters because it reveals a recurring pattern: powerful technologies begin with optimism, scale through competition, and only later trigger rules, norms, and institutions. Railroads led to regulatory regimes. Nuclear weapons led to arms control frameworks. The internet led to global communication but also misinformation, surveillance, and concentration of power. In each case, governance lagged behind capability.

Suleyman’s point is not that history repeats exactly, but that it teaches two sobering lessons. First, technology nearly always escapes the intentions of its creators. Second, containment is hardest once adoption becomes economically and strategically irresistible. Societies rarely pause when tools promise military advantage, medical breakthroughs, or massive productivity gains.

A useful example is social media. It began as a liberating communication platform, then evolved into a system with deep effects on mental health, politics, and public discourse. If relatively “light” digital tools could destabilize institutions, the stakes are much higher with AI systems that can reason, generate, persuade, and coordinate, or with biological tools that affect living systems directly.

Actionable takeaway: Use history as a warning against complacency. When evaluating a new technology, ask not only what it can do today, but how competition, scale, and unintended consequences may reshape it tomorrow.

Suleyman’s boldest claim is that artificial intelligence and synthetic biology are not merely industries; they are new forms of power. AI extends cognitive power. It can classify, predict, design, persuade, monitor, and increasingly act on the world with limited human supervision. Synthetic biology extends biological power. It allows humans to read, write, and edit life with growing precision. Together, they expand what small groups can do, compressing the distance between intention and impact.

Why does this matter? Because power is shifting away from traditional bottlenecks. In the past, large-scale influence often required land, factories, armies, or capital-intensive infrastructure. Now a small team with models, compute access, specialized tools, and global distribution can create outsized effects. A startup can produce a drug discovery breakthrough. A malicious actor can automate phishing, disinformation, or cyber intrusion. A bio-lab can speed up beneficial therapies but also lower the threshold for dangerous experimentation.

Suleyman does not frame this only as a threat. The same powers could transform medicine, climate solutions, education, accessibility, and scientific research. AI can help detect disease earlier, optimize energy systems, and support millions with low-cost tutoring. Synthetic biology can create new vaccines, sustainable materials, and resilient crops. The problem is that capability is dual-use: the same tools that heal can also harm.

This is why the book constantly returns to governance. If power is becoming more diffuse, then laws, norms, technical safeguards, audits, licensing, and international coordination become essential. The old assumption that only states or giant firms can do major damage no longer holds.

Actionable takeaway: Treat AI and synthetic biology as foundational power technologies. If you work with them, design for both upside and misuse from the very beginning.

A dangerous technology is not only one that fails; it is one that scales before society knows how to control it. Suleyman argues that the greatest risk is not a single dramatic scenario but a broad pattern of “containment failure.” As advanced technologies become cheap, general, and globally available, they can escape geographic, institutional, and legal boundaries. Once that happens, the challenge is no longer just invention but persistent management.

The risks span multiple domains. In AI, uncontained systems could supercharge cybercrime, automate fraud, generate convincing propaganda, destabilize labor markets, or be used in military settings with limited accountability. In synthetic biology, lower-cost tools and wider knowledge distribution could increase the risk of accidental leaks or deliberate misuse. Across both domains, concentration of capability also creates political risk: a handful of actors may gain immense leverage over information, infrastructure, or public life.

Suleyman is especially concerned with compounding effects. Imagine AI speeding up biological research, cloud platforms distributing tools globally, open-source communities lowering entry barriers, and geopolitical competition encouraging corners to be cut. In that world, the danger is not one villainous actor but an ecosystem where incentives drive rapid release faster than safeguards can mature.

For ordinary organizations, systemic risk can look surprisingly mundane. A company adopts AI without strong data security. A school relies on tools that manipulate attention. A government department deploys models it cannot audit. A lab uses powerful protocols without robust oversight. Individually these choices seem manageable; collectively they create fragility.

Actionable takeaway: Ask not only whether a technology works, but whether it remains safe and governable when millions of people use it, misuse it, or compete through it.

One of Suleyman’s most practical arguments is that governance can no longer be treated as a slow afterthought. Traditional regulation often assumes stable industries, visible products, and gradual change. Frontier technologies do not fit that model. They improve rapidly, cross borders instantly, and evolve through software updates, model scaling, and distributed experimentation. By the time harm becomes obvious, the technology may already be deeply embedded.

This does not mean heavy-handed bans are the answer. Suleyman advocates a layered approach: technical safeguards, corporate accountability, licensing for high-risk capabilities, better monitoring, liability frameworks, public standards, and specialized institutions with real expertise. Governance should be adaptive, evidence-based, and targeted toward dangerous capabilities rather than broad fear of innovation.

A useful parallel is aviation. Flight became safe not because risk vanished, but because society built norms, engineering standards, reporting systems, inspections, training protocols, and international cooperation. Advanced technology needs comparable maturity. For AI, this could include evaluations before deployment, red-teaming, provenance systems for generated content, controlled access to dangerous models, and independent audits. For biology, it could involve screening of DNA synthesis orders, secure lab practices, and better surveillance for emerging threats.

Suleyman also emphasizes legitimacy. If the public sees governance as captured by industry or detached from reality, trust collapses. Effective rules must involve democratic debate, technical competence, and global dialogue. The goal is not to freeze progress, but to shape it before market and military incentives make reform much harder.

Actionable takeaway: Support governance mechanisms early, while they still feel inconvenient rather than urgent. Waiting for a crisis usually means the cost of control will be far higher.

The central dilemma of the book is stark: open societies thrive on freedom, experimentation, and innovation, yet those same qualities make it hard to contain technologies that can be dangerous at scale. Suleyman argues that containment is not a perfect solution, nor is it simple. It is a continuous strategy of limiting misuse, slowing dangerous proliferation, and embedding constraints into systems before harms become unmanageable.

Containment can take many forms. At the technical level, it includes safeguards, restricted interfaces, access controls, monitoring, and fail-safes. At the institutional level, it includes export controls, safety testing, licensing, procurement rules, and legal penalties for reckless deployment. At the social level, it includes norms among researchers, public expectations, and professional standards. None of these measures is enough alone; together they create friction against catastrophe.

Suleyman knows the objections. Containment can be bypassed. Competitive markets may punish caution. States may race for strategic advantage. Open-source communities may resist restrictions. Yet he insists that imperfection is not an excuse for inaction. We already contain many dangerous capabilities imperfectly: aviation accidents still happen, but safety systems save lives; financial fraud persists, but compliance reduces it; nuclear proliferation was never fully solved, but institutions still matter.

A practical example is frontier model access. Not every highly capable system needs to be downloadable by everyone without safeguards. Tiered access, usage monitoring, identity verification, and capability thresholds may feel restrictive, but they can reduce large-scale misuse while preserving beneficial applications.

Actionable takeaway: Replace the fantasy of total control with a realistic containment mindset: build layers of friction, oversight, and restraint wherever capabilities could cause outsized harm.

Powerful technologies do not respect borders, but politics still does. Suleyman argues that no nation can solve the coming wave alone because AI models, biological knowledge, data flows, chips, talent, and cloud infrastructure operate in a global system. At the same time, international coordination is difficult precisely because these technologies are tied to economic growth, military advantage, and national prestige.

This creates a classic collective-action problem. Every major actor may privately recognize the dangers of uncontrolled proliferation, yet still feel pressure to move faster because rivals might gain an edge. The result is a race dynamic: accelerate now, govern later. Suleyman warns that this logic is unstable. If countries treat frontier technologies purely as strategic assets, the world may end up with weaker safety standards, more secrecy, and higher systemic risk.

Still, he does not dismiss coordination as naive. He points to the possibility of partial cooperation: shared safety benchmarks, export controls on critical inputs, incident reporting norms, scientific monitoring, red lines around especially dangerous applications, and trusted forums where states and firms can exchange information. Perfect harmony is unrealistic; practical coordination is not.

For businesses and citizens, geopolitics may seem distant, but it shows up in concrete ways: who controls chips and cloud infrastructure, how AI standards are written, whether global biosecurity systems share data quickly, and how cross-border companies handle compliance. The future will likely involve a patchwork of competition and collaboration rather than one grand treaty.

Actionable takeaway: Think globally even when acting locally. Any serious strategy for AI or biotech should account for international incentives, standards, and dependencies rather than assuming one country can safely go it alone.

Technological revolutions do not just create tools; they reorganize who is valuable, who is vulnerable, and how income flows through society. Suleyman argues that the coming wave could bring enormous economic abundance while also intensifying inequality, labor displacement, and concentration of power. AI in particular threatens to automate not only routine manual work, but also cognitive tasks once considered secure: drafting, analysis, customer support, coding, legal research, and aspects of design or management.

This does not mean human work disappears wholesale. More likely, jobs will be reconfigured. Some roles will be enhanced by AI, others fragmented into lower-value oversight tasks, and new categories will emerge around system supervision, safety, integration, and trust. The problem is timing. Productivity gains may arrive faster than institutions can help workers retrain, relocate, or maintain bargaining power. Meanwhile, firms controlling compute, data, and models could capture disproportionate value.

Suleyman’s concerns extend beyond employment. If advanced systems allow a small number of platforms or states to dominate information, logistics, and decision support, then economic inequality may merge with political inequality. Access to capability becomes a determinant of power.

Practical responses include updated education systems, portable benefits, stronger competition policy, public investment in shared infrastructure, and labor policies that value adaptation rather than static job categories. Individuals can respond by building complementary skills: judgment, domain expertise, relationship-building, ethical reasoning, and the ability to work productively with AI tools.

Actionable takeaway: Don’t assume the future of work is simply job loss or job growth. Prepare for rapid role redesign, and invest in skills and policies that keep people economically relevant as capability shifts.

One of the book’s most important ethical claims is that responsibility for the coming wave does not belong only to regulators. Governments matter, but so do founders, researchers, investors, engineers, educators, and users. Suleyman argues that companies building frontier systems are not neutral suppliers. They are shaping the distribution of power in society. Their choices about access, safety, incentives, transparency, and deployment carry public consequences.

This is especially significant in AI, where private labs often move faster than public institutions. A product decision inside one company can affect millions of classrooms, workplaces, elections, or security environments. In synthetic biology, procurement protocols or screening standards can influence global biosecurity. Investors, too, shape behavior through what they reward: reckless scale at any cost, or disciplined growth with safety measures built in.

Suleyman pushes against the comforting idea that ethics can be delegated. Individuals inside organizations often see problems before institutions do. Researchers notice risky capabilities. Product teams spot misuse patterns. Employees sense when incentives undermine safety. Citizens also have a role through democratic pressure, informed debate, and consumer expectations.

A practical example is internal escalation. A responsible company should empower teams to pause deployments, run adversarial testing, document incidents, and accept short-term friction in exchange for long-term trust. Likewise, schools and workplaces should teach people how to evaluate AI outputs, protect privacy, and recognize manipulation rather than adopting tools blindly.

Actionable takeaway: If you are involved with advanced technology in any role, treat safety and societal impact as part of your job description, not as someone else’s problem.

Suleyman’s message is ultimately neither anti-technology nor utopian. It is a plea for civilization-level maturity. The coming wave will not be stopped, and trying to shut down innovation entirely would likely fail while sacrificing immense benefits. But surrendering to acceleration without guardrails is equally dangerous. The real challenge is to preserve openness, prosperity, and scientific progress while preventing a world in which powerful systems overwhelm institutions, concentrate authority, or empower destructive actors.

This requires a new mindset of deliberate restraint. Restraint is not the enemy of ambition; it is what makes ambition sustainable. In practice, that means prioritizing robust institutions, transparent standards, trusted public oversight, secure technical design, and global norms around the most dangerous capabilities. It also means accepting trade-offs. Some products may launch later. Some forms of access may remain restricted. Some profits may be deferred in favor of safety.

Suleyman’s broader contribution is to reframe the debate. The question is no longer whether technology will transform the century. It will. The question is whether democratic societies can adapt fast enough to shape that transformation. If they cannot, power may drift toward whoever can build and deploy fastest, regardless of wisdom or accountability.

For readers, the book is a call to seriousness. Whether you are a policymaker, founder, teacher, investor, or citizen, you are living through a period in which technical literacy and civic responsibility increasingly belong together.

Actionable takeaway: Embrace innovation, but demand restraint worthy of the power these technologies carry. The future will be shaped not just by what we can build, but by what we choose to limit.


About the Author

Mustafa Suleyman

Mustafa Suleyman is a British entrepreneur, technologist, and writer known for his influential role in the development of modern artificial intelligence. He co-founded DeepMind in 2010, helping build one of the world’s most important AI research companies before its acquisition by Google. Later, he co-founded Inflection AI, continuing his work at the frontier of intelligent systems. Beyond company building, Suleyman has become a prominent public voice on the ethics, governance, and long-term societal impact of advanced technology. His perspective is distinctive because it combines hands-on experience creating powerful AI systems with deep concern about how those systems should be managed. In The Coming Wave, he draws on that background to explore the intersection of technology, power, policy, and human responsibility in the twenty-first century.



Frequently Asked Questions about The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma

The Coming Wave by Mustafa Suleyman argues that humanity is entering a new era defined by artificial intelligence and synthetic biology, technologies powerful enough to transform not just industries but politics, security, and human life itself. Drawing on his experience cofounding DeepMind and later Inflection AI, Suleyman asks how open societies can contain these powerful, fast-spreading technologies without crushing the benefits they bring. This summary distills the book's key ideas across 10 chapters.
