
What We Owe the Future: Summary & Key Insights
Key Takeaways from What We Owe the Future
A simple idea can rearrange an entire moral worldview: people who do not exist yet can still matter.
The moral stakes of the future become enormous once we recognize how much life may still lie ahead.
Some actions matter not because of what they do today, but because of what they set in motion.
It is tempting to think that history unfolds inevitably, but MacAskill emphasizes how often it hinges on fragile contingencies.
Few ideas are more unsettling than human extinction, yet MacAskill argues that we must confront it directly.
What Is What We Owe the Future About?
What We Owe the Future by William MacAskill is an ethics book published in 2022. What if the most important moral choices of our time are not only about the people alive today, but about the billions or even trillions who may live after us? In What We Owe the Future, philosopher William MacAskill makes the case that our responsibilities extend far beyond the present generation. He argues that because future people can matter just as much as those alive now, decisions made in this century could shape the entire trajectory of human civilization. That claim turns questions about technology, politics, war, institutions, and moral progress into matters of extraordinary ethical importance. MacAskill combines philosophy, history, economics, and risk analysis to develop the case for longtermism: the view that positively influencing the long-run future should be a central moral priority. Rather than offering abstract speculation alone, he grounds his argument in examples of historical turning points, existential risks, and the possibility that today’s values may become locked in for centuries. As a leading moral philosopher, Oxford professor, and co-founder of major effective altruist organizations, MacAskill brings both intellectual rigor and practical urgency to one of the biggest ethical questions of our age: what do we owe the people who come after us?
This FizzRead summary covers all 10 key chapters of What We Owe the Future in approximately 10 minutes, distilling the most important ideas, arguments, and takeaways from William MacAskill's work. Also available as an audio summary and Key Quotes Podcast.
Who Should Read What We Owe the Future?
This book is perfect for anyone interested in ethics and looking to gain actionable insights in a short read. Whether you're a student, professional, or lifelong learner, the key ideas from What We Owe the Future by William MacAskill will help you think differently.
- ✓ Readers who enjoy ethics and want practical takeaways
- ✓ Professionals looking to apply new ideas to their work and life
- ✓ Anyone who wants the core insights of What We Owe the Future in just 10 minutes
Want the full summary?
Get instant access to this book summary and 100K+ more with Fizz Moment.
Get Free Summary — available on the App Store, free to download
Key Chapters
A simple idea can rearrange an entire moral worldview: people who do not exist yet can still matter. MacAskill begins from a strikingly intuitive premise—if someone will exist in the future and can experience happiness, suffering, flourishing, or loss, then their wellbeing deserves moral consideration. The fact that they are not alive today does not automatically make their interests less important. We already accept this in everyday life when we save for our children, preserve natural resources, or build durable institutions. Longtermism extends that instinct much further.
This matters because our moral focus is often pulled toward what is immediate, visible, and emotionally close. We donate after disasters, worry about current elections, and prioritize today’s crises. MacAskill does not say these concerns are unimportant. Instead, he asks us to widen the lens. If future generations may number in the billions across centuries—or far more if civilization endures for a very long time—then choices that shape their lives could have immense moral significance.
Consider climate policy, AI governance, nuclear security, or biosecurity. These are not only present-day policy debates; they may determine whether future people inherit stability or catastrophe. Even small improvements in how we manage these risks could echo across generations.
The practical implication is not to ignore current suffering, but to include future wellbeing in our moral calculations. A useful personal test is this: when evaluating a decision, ask not only, “Who is affected now?” but also, “How might this influence people fifty, one hundred, or one thousand years from now?” Making that question a habit is the first step toward longtermist thinking.
The moral stakes of the future become enormous once we recognize how much life may still lie ahead. Humanity is a very young species by cosmic standards. If civilization survives major dangers, there is no obvious reason human or post-human life could not continue for thousands or millions of years, or even far longer. MacAskill uses this possibility to challenge our ordinary scale of ethical concern. We tend to treat the future as a small extension of the present, but it may instead contain the overwhelming majority of all lives that will ever exist.
This is not just a mathematical curiosity. If the potential future is vast, then preventing events that permanently curtail it—such as extinction or irreversible civilizational collapse—carries extraordinary moral importance. Imagine a library containing nearly all the books ever to be written, and then imagine letting it burn because we were focused only on the few volumes already on our desks. MacAskill argues that this is roughly how shortsighted our ethics can become when we neglect the long-run future.
A practical example is pandemic preparedness. Investing in surveillance systems, medical stockpiles, and global coordination may seem expensive in the short term, but if such systems reduce the chance of a civilization-ending event, their long-term benefits could be immense. The same logic applies to reducing nuclear risks or building robust democratic institutions.
The takeaway is to develop “scale sensitivity.” When assessing causes, policies, or careers, do not ask only how urgent they feel now. Ask how many lives and how much value could be affected over the very long run. Ethical seriousness requires matching our concern to the true scale of what is at stake.
Some actions matter not because of what they do today, but because of what they set in motion. MacAskill argues that when evaluating the consequences of our choices, we should pay attention to indirect, delayed, and compounding effects. A policy may appear modest in the present yet profoundly shape future institutions, technologies, norms, and risks. The long-term ripple effects can easily exceed the short-term outcomes that dominate public attention.
History offers countless examples. The abolitionist movement did not only improve conditions for those alive at the time; it reshaped legal systems, moral norms, and political identities for generations. The creation of democratic institutions often produced lasting trajectories of accountability and civil liberty. Scientific breakthroughs such as vaccination changed not just one era’s health outcomes but the health prospects of countless future people.
This idea also changes how we think about personal impact. A person choosing a career in research, public policy, institutional reform, or communication may contribute to systems that endure far beyond their lifetime. Building stronger mechanisms for international cooperation, improving decision-making under uncertainty, or creating norms of scientific responsibility can all have multiplier effects across time.
MacAskill does not suggest that we can forecast the future with precision. Rather, he argues that uncertainty is not an excuse for neglect. We regularly act under uncertainty in medicine, business, and government; ethics should do the same. The goal is to identify interventions that seem robustly beneficial across many possible futures.
An actionable takeaway is to look for leverage. Before committing time, money, or attention, ask: could this action influence structures, incentives, or values that persist? Prioritizing durable positive effects is one of the most practical ways to honor the long term.
It is tempting to think that history unfolds inevitably, but MacAskill emphasizes how often it hinges on fragile contingencies. Specific leaders, inventions, social movements, accidents, and institutional decisions have altered the course of civilization. If that is true, then the present may also contain pivotal moments whose consequences extend far into the future. We are not merely passengers in history; under the right conditions, we may be steering it.
This insight supports one of the book’s most important warnings: value lock-in. Sometimes institutions or technologies become so entrenched that they preserve a particular set of norms, incentives, or power structures for very long periods. A political constitution, a surveillance regime, a dominant AI architecture, or a global governance framework might become difficult to reverse once established. If bad values become locked in, future generations may inherit a world that is stable yet deeply unjust.
Examples already surround us. Colonial borders have shaped political instability for generations. Fossil fuel infrastructure created path dependence that made climate transition harder. Social media platforms established engagement-maximizing norms before societies understood their long-term effects on attention and trust.
The practical implication is that periods of rapid change deserve special moral attention. Emerging technologies, constitutional design, international institutions, and educational norms may all be more important than they appear because they can set the baseline for centuries to come.
A useful takeaway is to treat foundational decisions as high-stakes decisions. When supporting policies, companies, or movements, ask not only whether they solve today’s problem, but whether they create a pattern we would want repeated indefinitely. In pivotal times, careful design matters more than speed alone.
Few ideas are more unsettling than human extinction, yet MacAskill argues that we must confront it directly. If humanity’s future could be vast, then events that end civilization entirely are not just tragedies for current populations; they would erase all possible future lives, achievements, cultures, and discoveries. That makes existential risk reduction one of the most morally urgent projects available to us.
MacAskill distinguishes between ordinary disasters and existential catastrophes. A terrible earthquake or financial crash can cause immense suffering, but humanity can recover. An existential event would permanently destroy civilization’s potential—through nuclear war, engineered pandemics, runaway artificial intelligence, or some combination of cascading failures. The key point is permanence. Once the future is lost, it cannot be regained.
This perspective changes priorities. Biosecurity is no longer just a public health issue; it becomes a civilizational safeguard. Nuclear de-escalation is not merely strategic diplomacy; it is future protection. Responsible AI development is not only a technical matter; it may be central to whether humanity retains control over its destiny.
Practical examples include funding better disease surveillance, strengthening international norms against bioweapons, improving nuclear command-and-control systems, and supporting AI alignment research. Even communication systems that reduce misinformation during crises can help societies respond more effectively to existential threats.
The actionable takeaway is clear: take low-probability, high-impact risks seriously. In your civic life, philanthropy, or career, consider supporting institutions that increase resilience against catastrophic threats. The future is too valuable to leave its survival to luck.
Technological advancement is often treated as automatically good, but MacAskill offers a more careful view: progress increases our power, and power magnifies both virtue and error. New technologies can cure diseases, reduce poverty, and expand knowledge. They can also scale surveillance, accelerate war, and amplify irreversible mistakes. The question is not whether progress should stop, but whether our moral and institutional maturity can keep pace with our capabilities.
This is why MacAskill pays attention to the relationship between innovation and moral change. Humanity has made real ethical progress in some areas—expanding rights, reducing violence in certain contexts, and deepening concern for marginalized groups. Yet moral progress is uneven and fragile. A civilization with advanced technology but shallow moral wisdom may become more dangerous, not less.
Artificial intelligence is a central example. Powerful AI could dramatically increase productivity, medical discovery, and scientific understanding. But if developed recklessly, it could centralize power, destabilize labor markets, manipulate information ecosystems, or create uncontrollable systems. Similar reasoning applies to biotechnology, where life-saving tools can also create novel risks.
MacAskill’s broader argument is that steering progress matters as much as generating it. Societies need norms, laws, and institutions capable of channeling innovation toward widely shared, long-term benefit. Ethics cannot be an afterthought appended once systems are already deployed.
A practical takeaway is to support “differential progress”: accelerating developments that improve safety, coordination, and wisdom while slowing or scrutinizing developments that raise existential danger. In professional terms, that may mean working not only on what can be built, but on what should be built and under what governance.
One of the book’s most philosophically challenging themes is population ethics: how should we think about actions that affect not only the quality of future lives, but the number of future people who exist at all? MacAskill does not pretend this issue is simple. Instead, he shows that many of our deepest intuitions about morality become unstable when we think at civilizational scale.
If future people matter, then creating conditions under which many worthwhile lives can exist seems valuable. At the same time, we do not want ethics to become a crude numbers game that ignores justice, rights, or quality of life. MacAskill navigates this tension by emphasizing that what matters is not merely maximizing headcount, but protecting and enabling flourishing futures. The aim is a world where future people can live lives genuinely worth living.
This debate affects real policy questions. Consider urban planning, climate policy, reproductive freedom, migration systems, or education. These shape not only present welfare but the size, distribution, and opportunities of future populations. Likewise, existential risk reduction is significant partly because it preserves the possibility of many future lives.
Population ethics also reminds us to be humble. Our moral theories may be incomplete, and some dilemmas resist clean answers. Still, uncertainty does not eliminate responsibility. Even if we cannot settle every philosophical debate, we can recognize that destroying the conditions for future flourishing would be a profound moral loss.
The actionable lesson is to think in terms of enabling flourishing at scale. Support policies and institutions that preserve options, expand opportunity, and improve the expected quality of future lives. A humane longtermism is concerned not just with more lives, but with better and freer ones.
Lasting moral concern requires durable structures. MacAskill argues that if we truly care about future generations, we cannot rely only on individual goodwill or one-time acts of generosity. We need institutions—governments, laws, research bodies, international agreements, courts, educational systems, and cultural norms—that systematically represent long-term interests. Otherwise, short election cycles, market pressures, and media incentives will keep pushing attention toward the immediate and the visible.
Many current institutions are poorly designed for long horizons. Politicians are rewarded for quick wins, corporations for quarterly performance, and citizens for reacting to urgent headlines. Yet problems like climate change, AI governance, biodiversity loss, and catastrophic risk require planning across decades or longer. This mismatch creates a governance gap between the timescales of our incentives and the timescales of our consequences.
MacAskill points toward reforms that could close this gap: independent future generations commissions, better forecasting systems, stronger international coordination, constitutional safeguards, and public investment in resilience. Some countries have experimented with ombudsmen or parliamentary committees tasked with representing future citizens. Even when imperfect, these efforts signal an important shift: future people need institutional advocates.
At a smaller scale, organizations can build long-term thinking into strategy through scenario planning, red-team exercises, ethical review, and resilience metrics. Families, schools, and nonprofits can do the same by asking how decisions affect not only current stakeholders but future ones.
The takeaway is to embed long-term concern into systems, not just sentiments. If you want the future to matter in practice, support organizations and policies that make future impacts visible, measurable, and politically actionable.
The future is not determined by technology alone; it is also shaped by the stories societies tell, the virtues they reward, and the values they pass on. MacAskill stresses that cultural and moral stewardship may be among the most underestimated forms of long-term influence. Institutions matter, but institutions themselves are built and sustained by norms—ideas about dignity, truth, responsibility, cooperation, and what counts as progress.
History shows how moral norms can spread and endure. Abolition, women’s rights, concern for animals, and democratic ideals all gained force not merely through policy changes, but through cultural transformation. Likewise, destructive norms—nationalist fervor, dehumanization, prejudice, or glorified domination—can guide societies toward violence and repression for generations.
In the modern world, culture moves through education, media, research communities, religious traditions, family life, and digital platforms. That means writers, teachers, artists, journalists, founders, and public intellectuals may influence the future in ways that are difficult to measure but deeply consequential. A culture that values truth-seeking, humility, scientific integrity, and concern for strangers is better positioned to navigate powerful technologies and global risks.
MacAskill’s perspective broadens the idea of impact. Helping shape better norms—around responsible innovation, intergenerational justice, or global solidarity—may be a significant contribution even if its effects are diffuse. The long-term future depends not only on what we build, but on the character with which we build it.
The actionable takeaway is to practice and promote values that scale well into the future: honesty, compassion, restraint, intellectual humility, and stewardship. Cultural influence may feel indirect, but over time it can become civilizational infrastructure.
Grand moral ideas matter only if they can guide action. MacAskill therefore ends on a practical question: what should individuals and societies do differently if longtermism is true? His answer is not that everyone must become a philosopher of the far future. It is that we should take seriously opportunities to reduce existential risk, improve institutions, and preserve the possibility of flourishing futures.
For individuals, this may affect career choice, charitable giving, political engagement, and habits of attention. Someone with technical skills might contribute to AI safety, biosecurity, or forecasting. A policymaker might work on arms control, democratic resilience, or science governance. A philanthropist might fund neglected long-term risks that receive too little public support. A teacher or communicator might cultivate future-oriented citizenship.
Practical longtermism also requires balance. MacAskill does not argue for abandoning present-day moral duties. Rather, he invites us to add a neglected dimension to our ethical reasoning. Helping people now and protecting the future are often complementary: stronger health systems, better institutions, and wiser technology policy can do both.
Importantly, action under uncertainty should be guided by robustness, humility, and learning. Because the future is hard to predict, we should favor interventions that seem beneficial across many scenarios and remain open to revising our views as evidence improves.
The takeaway is to make one concrete long-term commitment. Choose a cause, habit, or professional direction that contributes to a safer, wiser, more resilient future. Longtermism becomes real not when we admire the idea, but when we let it shape what we do next.
About the Author
William MacAskill is a Scottish philosopher, writer, and academic whose work focuses on ethics, rational decision-making, and how to do the most good. He is an associate professor in philosophy at the University of Oxford and a leading public voice in contemporary moral philosophy. MacAskill is also closely associated with the effective altruism movement and helped found influential organizations such as Giving What We Can, 80,000 Hours, and the Centre for Effective Altruism. His earlier books, including Doing Good Better, brought ideas about evidence-based altruism to a broad audience. In What We Owe the Future, he extends that project by examining humanity’s obligations to future generations, combining philosophical rigor with practical concern for policy, risk, and civilization’s long-term trajectory.
Get This Summary in Your Preferred Format
Read or listen to the What We Owe the Future summary by William MacAskill anytime, anywhere. FizzRead offers multiple formats so you can learn on your terms — all free.
Available formats: App · Audio · PDF · EPUB — All included free with FizzRead
Download What We Owe the Future PDF and EPUB Summary
Key Quotes from What We Owe the Future
“A simple idea can rearrange an entire moral worldview: people who do not exist yet can still matter.”
“The moral stakes of the future become enormous once we recognize how much life may still lie ahead.”
“Some actions matter not because of what they do today, but because of what they set in motion.”
“It is tempting to think that history unfolds inevitably, but MacAskill emphasizes how often it hinges on fragile contingencies.”
“Few ideas are more unsettling than human extinction, yet MacAskill argues that we must confront it directly.”
Frequently Asked Questions about What We Owe the Future
What We Owe the Future by William MacAskill is an ethics book that explores key ideas across 10 chapters. It makes the case for longtermism: the view that positively influencing the long-run future should be a central moral priority. MacAskill argues that future people can matter just as much as those alive now, and that decisions made in this century about technology, politics, institutions, and moral progress could shape the entire trajectory of human civilization.
You Might Also Like

Abortion
David Boonin

Blind Spots: Why We Fail to Do What's Right and What to Do About It
Max H. Bazerman, Ann E. Tenbrunsel

Digital Ethics: Research and Practice
Christopher Burr, Luciano Floridi (Editors)

Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference
William MacAskill

Eating Animals
Jonathan Safran Foer

Ethics of Artificial Intelligence
Various Authors
Ready to read What We Owe the Future?
Get the full summary and 100K+ more books with Fizz Moment.