Table of Contents
- Is it actually possible to positively influence the long-term future?
- Isn't much of longtermism obvious? Why are people only just realising all this?
- Isn't longtermism just applied utilitarianism?
- Is longtermism asking that we make enormous sacrifices for the future? Isn't that unreasonably demanding?
- What has longtermism got to do with effective altruism?
- What's the point of caring about the long-run future if we're just going to go extinct anyway?
- Why focus on humanity? What about animals, or nature?
- Isn't much of longtermism driven by tiny probabilities of enormous payoffs?
- Could longtermism justify totalitarianism or other political harms?
- Longtermists don't seem to pay as much attention to climate change as I would have expected. What's going on there?
If you have a question about longtermism, you might be able to find an answer in this FAQ. You can select one of the questions in the contents above to skip to it.
We’ve included some brief responses to the most common objections to longtermism, in theory and practice. Some objections may be based on understandable misconceptions; others pose important and difficult questions for longtermists. You shouldn't get the impression that all these questions have agreed-upon, knock-down answers. Indeed, longtermists continue to debate and disagree about many of them.
If you have a question that’s not answered below, you're welcome to get in touch. You could also ask some members of the effective altruism community about it, by posting your question on the Effective Altruism Forum.
Is it actually possible to positively influence the long-term future?
It's possible to influence the future in trivial ways: you could carve your name into hard stone, and it might still be legible thousands of years from now. But that's not really the question, because you're not influencing the future in positive and important ways. Our actions might also end up influencing the long-run future in more important ways — but in ways we failed to predict, or couldn't possibly have predicted. So it matters that we can actually foresee ways to positively influence the future. The important question, then, is: is it possible to predictably and positively influence the long-term future? It's reasonable to wonder whether there really is anything we can do now that could make a meaningful difference to the long-run future.
However, there is a strong case for thinking we can do things today to influence the very long run.
Mitigating the worst effects of climate change works as a 'proof of concept'. Beyond the harms caused by unmanaged climate change in the near term, we know that carbon dioxide can persist in the Earth's atmosphere for tens of thousands of years. We also know that there are ways to reduce how much we emit, such as by investing in green technology and pricing carbon emissions in line with their true social cost. If we successfully switch to supplying almost all the world's energy without emitting carbon, there's no reason to expect future generations will undo our work — our success would last long into the future.
Our efforts today could influence the long-term future in even more dramatic ways. The philosopher Toby Ord argues that this century could be a time of unprecedented and unsustainable risk to humanity's entire potential to flourish long into the future — so-called existential risk. But there's hope: he argues that we could choose to bring this period to a close. Through deliberate action, we could succeed in preventing a catastrophe large enough to set humanity on a much worse course, or even cause human extinction. The major risks of this kind are human in origin: like the risk of an engineered pandemic much worse than COVID-19, or irreversibly powerful artificial intelligence which we fail to align with the right values. But because the biggest risks are human in origin, we are surely also capable of reducing them. It's hard to imagine a clearer instance of positively influencing the long-run future than preventing an existential catastrophe.
For more, see the section of the introduction on this site titled 'Our actions could influence the long-term future'.
Isn't much of longtermism obvious? Why are people only just realising all this?
Some features of the longtermist worldview really are not especially new, or sophisticated, or hard to understand. For instance, it's not controversial to suggest that at least some future people matter: prospective parents are doing something straightforwardly reasonable when they make preparations for the child they plan to have, to make sure that child has a good life. And it's also not very controversial to say that people matter even if they're not closely related to you, so this concern for future family can be generalised.
This raises the question of why longtermism isn't far more widely believed and acted on, and why it's only now being taken seriously as an intellectual project.
One answer is that the full picture of longtermism actually required a series of surprisingly recent discoveries. For instance, it was only through discoveries in geology and cosmology that we began to fully appreciate how much time we have left on and beyond Earth. And we have only recently begun to appreciate just how long the effects of our activities can persist over time: it was late in the 20th century when climate scientists began to form a consensus that human-caused greenhouse gases can persist in the atmosphere for tens of thousands of years.
Further, although there's a sense in which future people obviously matter, path-breaking conceptual work was required to understand what kinds of ethical obligation we might have to future people. Before North America was colonised, the Iroquois developed and taught a ‘seventh generation principle’ — that decisions we make today should benefit seven generations into the future. Similar concepts were reinvented in contemporary thought — such as in Derek Parfit's Reasons and Persons, and Jonathan Schell's The Fate of the Earth. As the historian Thomas Moynihan describes, historical views about the future of humanity almost never left room for a sense of 'stakes' — for the idea that it could be down to us to ensure things go well. In fact, very few people even noticed the possibility that humanity might accidentally go extinct.
Moreover, for most of history it simply wasn't clear how to positively influence the long-run future, even if you had cared about doing so. This may have only recently changed. First, we now have technologies powerful enough to influence the entire future, such as nuclear weapons. Second, we’ve made progress in the social and physical sciences, enabling us to more accurately predict some long-term effects of our actions.
Only with the invention of nuclear weapons did humanity begin acquiring the means to destroy itself. And other similarly powerful technologies are on the horizon, including artificial intelligence, advanced biotechnology, and geoengineering. Ensuring the safe development and deployment of these technologies looks like a promising way to improve the future over long time horizons, especially by reducing the probability of an existential catastrophe. So even if the abstract importance of improving the long-term future was always obvious, it's beginning to look far more practically important.
Notice also that it took a long time for many moral views that we now consider obvious to become widespread. For a long time, only a few fringe thinkers and advocates spoke out about the abolition of slavery, extending suffrage to women, or the idea of treating animals humanely. During those times, many of the arguments for these moral views may have been well known and appreciated, but the views themselves remained far from obvious for a long time.
Isn't longtermism just applied utilitarianism?
Utilitarian theories of ethics focus on bringing about the best consequences for the world by improving the lives of all sentient beings. One key feature is impartiality: utilitarianism holds that we should give equal moral consideration to the wellbeing of all individuals, regardless of characteristics such as their gender, race, nationality, or even species. In other words: good and bad things like happiness or suffering matter — indeed, matter just as much — regardless of who experiences them.
Utilitarian theories also ask us to be sensitive to the scale of good or bad outcomes — for instance, an outcome with twice as many happy or unhappy people should be counted as twice as good or bad. This is significant because we know that it is easy to be insensitive to scale, in a way that may bias us against tackling especially large problems.
Longtermism plausibly follows from most versions of utilitarianism (assuming some of our actions can meaningfully and predictably affect the long-term future). Impartial moral theories like utilitarianism naturally suggest that just as it doesn’t morally matter where you are born, it doesn’t matter when you are born. So the emphasis on impartiality calls for expanding our moral ‘circle of concern’ to include future generations.
Moreover, the focus on scale means that utilitarianism values the long-term future in proportion to its vast scope and duration — fully appreciating the trillions of lives it may be home to.
However, you do not need to believe in utilitarianism to find longtermism compelling. First, many other non-utilitarian consequentialist theories (theories that assess the value of acts according to their effects) agree on the importance of impartiality and sensitivity to scale: the particular features that set utilitarianism apart from other theories, like prioritarianism or egalitarianism, don’t seem necessary for longtermism.
Moreover, even many non-consequentialist moral theories may agree that positively influencing the long-term future is a key moral priority of our time. For example, we may find reasons to protect future generations which are grounded in the past. Taking in the long sweep of human history, we might feel compelled by a kind of duty of continuity, or solidarity with past generations. The philosopher Toby Ord writes:
Because the arrow of time makes it so much easier to help people who come after you than people who come before, the best way of understanding the partnership of the generations may be asymmetrical, with duties all flowing forwards in time — paying it forwards. On this view, our duties to future generations may thus be grounded in the work our ancestors did for us when we were future generations.
On the other hand, while there are many non-consequentialist reasons for caring about future generations, they may fall a little short of the claim that positively influencing the long-term future is a key moral priority of our time. Plausibly, a principled sensitivity to the potential scale of the future — and the influence our actions will have on it — is important for longtermism.
In any case, you don't need to associate with a particular fully-formed ethical theory at all to adopt the longtermist perspective. It's obviously fine to assess the case for longtermism on its own merits, based on the arguments that stand on their own or appeal to sensible intuitions. Similarly, it’s totally fine to decide to care about environmentalism without paying much attention to which overall moral theory seems most plausible.
But it's worth giving this question its due: certain kinds of utilitarianism do suggest especially strong forms of longtermism, especially 'total utilitarianism', which determines the value of outcomes by adding up the total amount of well-being they contain. Philosophers Hilary Greaves and William MacAskill explain in ‘The case for strong longtermism’ how stronger versions of longtermism could follow from standard versions of total utilitarianism.
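As a minimal illustration of what 'adding up the total amount of well-being' means (a sketch for this FAQ, not a formula drawn from Greaves and MacAskill's paper): if an outcome contains N individuals, and individual i has lifetime well-being w_i, total utilitarianism ranks outcomes by the sum

```latex
V(\text{outcome}) = \sum_{i=1}^{N} w_i
```

so an outcome containing twice as many equally happy people scores twice as high, which is why this view values a vast future in proportion to its scale.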
Is longtermism asking that we make enormous sacrifices for the future? Isn't that unreasonably demanding?
If protecting and improving the long-run future is as important as longtermism claims, it might make sense to pass up short-term benefits now out of a concern for future generations. Some people object that there could be no upper limit to the kind of sacrifices that longtermism could recommend we make today.
In theoretical discussions, there are difficult questions to answer about how much one generation really should be willing to sacrifice for the future. Strong versions of longtermism might indeed say that we have overwhelming moral obligations to protect the future, because there is so much at stake, and no limit to what can be demanded of us. If true, then the right thing to do may well be to make sacrifices, at an individual and societal level. However, longtermism is absolutely compatible with strict limits on what it is reasonable to demand of any single generation.
Fortunately, this discussion is largely moot, because the world currently spends close to nothing on directly trying to protect and improve the long-run future. Of course, we spend some of our resources on projects that could improve the future incidentally, such as through broad efforts to mitigate the effects of climate change. But, globally speaking, very few resources are currently devoted to thinking and acting squarely on positively influencing the long-term future. As such, longtermists can and do disagree about the ideal level of such spending, while all agreeing that it makes sense to spend and do far more.
In recent years, very roughly $200 million has been spent annually on longtermist cause areas, and about $20 billion has so far been committed by philanthropists engaged with longtermist ideas (this post gives a good overview of the funding situation). That means that less than one part in 100,000 (0.001%) of gross world product (the combined income of all the countries in the world) is deliberately targeted at protecting the long-run fate of humanity — less than half the combined earnings of the world's ten highest-paid athletes, less than 0.2% of the revenue of the U.S. casino gaming industry, and less than 5% of what the U.S. spends annually on ice cream.
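To make the scale comparison concrete, here is a minimal arithmetic sketch. The $200 million figure comes from the paragraph above; the gross world product figure of roughly $100 trillion is an assumption used only for illustration.

```python
# Rough, order-of-magnitude illustration of the funding comparison above.
longtermist_spending = 200e6   # ~$200 million per year (figure quoted above)
gross_world_product = 100e12   # assumed ~$100 trillion; an approximation, not a figure from the text

share = longtermist_spending / gross_world_product
print(f"Share of gross world product: {share:.5%}")  # about 0.00020%, well under 0.001%
```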
But it is also important to be honest about where, and to what extent, longtermism might in fact suggest making sacrifices in the present on behalf of future generations. If you think that something is a key moral priority of our time, and that society currently underappreciates its importance, that’s got to have some striking implications for how we should be spending our resources and allocating our focus. As such, longtermism does have some bold implications for how our society should be spending, for what policies and norms might be best, and for how individuals can make a positive difference through their career. From the perspective of ‘business as usual’, it could suggest a surprising level of caution and delay around pursuing potentially dangerous technologies, or investing a surprising amount in measures to guard against catastrophes whose effects last long into the future. So it would be dishonest to say that longtermism asks nothing of us. It may well be demanding in this sense — but not unreasonably so.
We are already familiar with the idea that making sacrifices for others is often the right thing to do: we would be prepared to ruin expensive shoes in order to save a child drowning in a pond, or to delay a lucrative career to help raise a child. Some things do clearly matter enough to demand at least some level of sacrifice. If longtermism is correct, then the long-run future matters a huge amount. Therefore, if there came an opportunity to benefit that future by giving something up in the short term, then longtermism could demand that of us. But this might be reasonable — just as common-sense ethics demands that we jump into a pond to save a drowning child, even if that means ruining our expensive shoes.
Yet, there is room for reasonable disagreement about exactly how much longtermism should demand of us, when its demands conflict with other moral priorities. Consider how philanthropic resources are spent: there are many pressing problems in the world, but only finite resources. If you were able to decide where these resources were allocated, you would have a difficult job on your hands: in some sense, every dollar diverted to longtermist cause areas (like biosecurity) is a dollar you could have spent on eradicating malaria, or improving animal welfare. When we choose to give to one cause, there is therefore a sense in which we’re disregarding another cause, which could count as a kind of sacrifice. You might reasonably think that longtermism is not the only moral view that is able to make demands on how you spend your time or money, or how society decides what to prioritise. For instance, you could also think that lifting living people out of poverty should be another key priority of our time. In that case, there’s no easy answer about how much longtermism should demand compared to that priority. A humble FAQ would be overstepping its remit if it claimed to have the definitive answer to that kind of question!
What has longtermism got to do with effective altruism?
Effective altruism is the project of using evidence and reason to find ways to make progress on the world’s most pressing problems, and then taking action, by using our time and money to actually make a difference. It’s both an intellectual project and a practical one. Effective altruism is trying to build a broad research field focused on finding the best ways to do good. But it’s also putting the results of that research into practice, and trying to bring about real positive changes.
A big international community has formed around this project. People inspired by effective altruism work on a variety of causes, from improving the welfare of animals, to improving global health and alleviating poverty, to trying to reduce the chance of global catastrophes, such as a devastating pandemic.
Much of what is now called 'longtermism' was developed by people who were and are part of the effective altruism community. But they can't claim close to full credit for the longtermist worldview — others were developing the foundational ideas long before effective altruism existed.
Both longtermism and effective altruism are intellectual movements: a set of key ideas and questions which can motivate research across disciplines, and spur action. Effective altruism has a strong community aspect: a growing number of people across the world identify with its core ideas, and work with one another towards shared goals. Currently, many people who identify with longtermism would also say that they’re part of this effective altruist community. But that needn’t be the case: although it’s natural to see how an interest in effective altruism could lead to a particular focus on longtermism, there’s nothing inherent about longtermism that means you also need to feel part of effective altruism to care about it. Nor, for that matter, does caring about effective altruism imply that you must care about longtermism.
Indeed, it’s not clear that longtermism needs to be a ‘community’ or ‘identity’ at all. Many people might eventually work on problems associated with longtermism, but they don't need to feel like they're part of a big longtermist community. In this respect, you might compare longtermism with broad approaches in ethics, like human rights; or with paradigms in economics, like sustainable development. Researchers, activists, and policymakers might all care deeply about those ideas, without considering themselves part of a single human rights or sustainable development 'community'.
So you really don't need to identify with effective altruism to make progress on longtermist research, or to work towards improving the very long-run future. In practice, many people working at key longtermist organisations do not in fact identify much with effective altruism, and they span all sorts of social, spiritual, and political affiliations.
What's the point of caring about the long-run future if we're just going to go extinct anyway?
You might think that humanity is currently on a path towards extinction or unrecoverable ruin in the not-so-distant future. If true, the long-run future is unlikely to matter (since people won’t be around by then), and so it wouldn't make much sense to think that improving it should be a moral priority.
However, that belief alone wouldn't be enough to conclude that longtermism is misguided, as long as we have some control over whether we go extinct. In fact, this is likely to be true: the key risks to humanity’s survival are all caused by humans, so it must be possible for humans to mitigate them. Therefore, in line with longtermism, your conclusion should really be that avoiding extinction should be a moral priority, in order to leave the world fit for thousands of future generations to flourish.
For this objection to really work, you would need to think that (i) we're likely to go extinct soon, and (ii) there's nothing we can do about that. It's very hard to see how both these things could be true. Presumably, if the risk of human extinction is high (and the expected number of future generations is therefore low), there would be many things we can do to bring down that risk. If on the other hand the risk is low, it may be more difficult to lower the risk even further, but it will already be very unlikely that we’ll go extinct soon, and so we should expect a large number of future generations.
Some people might fear climate change causing human extinction in the next few centuries. However, while climate change will have devastating global effects, a balanced review of the evidence suggests that human extinction as a direct result of climate change is not very likely (see here, here, and here).
Mitigating the effects of climate change is clearly immensely valuable. In fact, it probably does reduce the overall risk of extinction, because the damage from climate change could plausibly be a ‘risk factor’, for example by increasing the chance of international conflict. But it looks like neither climate change, nor any other foreseeable trend, is very likely to cause literal human extinction any time soon. We're not so unavoidably doomed as to give up on the prospect of surviving, and even flourishing, for a very long time.
Why focus on humanity? What about animals, or nature?
Longtermism places a special focus on humanity — it talks about human potential, and future people. But humans share a planet with thousands of species of nonhuman animals that also matter morally, especially because many are capable of suffering. Other aspects of nature might matter intrinsically too: perhaps there is something worth protecting about environments which haven't been corrupted or interfered with by humans. So why this focus on humans?
The reason is that (for better or worse) humans find themselves in a position of responsibility and control over the fate of animals and nature, and they are the only creatures capable of understanding and acting on moral reasons for improving how the future goes. Animals and perhaps entire ecosystems can be moral patients — the kinds of things we should care for — but humans are in effect the only moral agents — creatures capable of planning what is best to do, from a moral perspective.
So 'human potential' should be taken to mean something like 'the futures which could be chosen by humans', rather than 'futures for the human species alone'. The value of the long-run future needn't be determined just by the humans that live in it — you could very well express a deep concern for improving the wellbeing of animals, or for protecting features of nature, over the long run. This is true in practice: many longtermists are also committed vegetarians or vegans.
Isn't much of longtermism driven by tiny probabilities of enormous payoffs?
One suspicion about the case for longtermism is that it relies on extremely speculative guesses about how to influence the future, which are highly unlikely to pay off, but are nonetheless supposed to derive their importance from the enormous size of the stakes — such that even the most remote possibility of success makes those guesses worth putting resources into. But this could look objectionably 'fanatical', especially if people and resources that go towards longtermist projects might otherwise go to far more tangible, immediately pressing problems.
This problem is especially relevant for efforts to mitigate existential risks: risks that threaten to curtail humanity's potential, by causing extinction or some equivalently bad and unrecoverable state. We know, for instance, that the chance of an extinction-level asteroid colliding with Earth this century is vanishingly low — but because such a collision would destroy humanity's entire future prospects, longtermists might seem committed to recommending that we pour enormous sums into an asteroid defence system. Something about this reasoning seems suspicious.
It is true that, by necessity, longtermist reasoning relies to an unusual extent on extrapolating from incomplete data, using creative forecasting methods, and accepting a lot of uncertainty. But it is not true that longtermism derives its importance from somehow multiplying together enormously valuable possible futures with vanishingly small probabilities.
Consider the longtermist case for avoiding catastrophic risks. The argument is not that while the risks are tiny, the 'reward' being squandered is proportionally even larger. Rather, the problem is that many of the risks look unacceptably, unsustainably high. Furthermore, it looks possible to reduce them by meaningful amounts, not just by tiny slivers of probability. In fact, you often barely need to appeal to future generations to see why we should do more to address these risks, as researcher Carl Shulman explains in this podcast episode.
One especially concerning feature of gambling on small probabilities of enormous payoffs is that it can mean entirely forgoing nearly certain benefits, or even very likely causing a small amount of harm. But this isn’t the case for longtermism either. In fact, the interventions that longtermism suggests also seem to be great for the near term, and likely to have significant benefits even in the cases where they don’t end up literally averting an existential catastrophe. For instance, most of what we could do to address the risk of worst-case pandemics will also help with less severe pandemics. The same applies to addressing risks from extreme climate change, nuclear exchange, and powerful artificial intelligence — work that seems very likely to be useful in the relatively short term, even if the worst-case scenarios don’t materialise.
And while it's impossible to avoid uncertainty about how exactly the future will look, there are actions we can take now that seem to be robustly good for the long-term future — such as reducing the chance of a great power war this century, or building measures against engineered pandemics.
Could longtermism justify totalitarianism or other political harms?
A key feature of longtermism is its recognition of the enormous scale of the stakes: the amount of value that lies in the long-term future could be extraordinarily large, but it hangs in the balance: it could depend on our actions today. What if the best or only way to ensure that the future goes well looks like enforcing a highly objectionable political regime, or committing what looks like serious harm in the near term?
In particular, you might imagine a situation where we develop such dangerous and inexpensive technology this century that quite serious measures, such as mass surveillance, end up looking necessary to avoid catastrophe (see Bostrom (2019), 'The Vulnerable World Hypothesis'). Consider, for instance, a world in which biotechnology advances to the point that a smart high-schooler can create and release a deadly, highly infectious pathogen. To prevent catastrophic pandemics from ravaging the world, societies may decide to institute intrusive surveillance and enforcement mechanisms. The longtermist emphasis on the size of the stakes could make such measures more likely to sound like the correct course. This line of thinking is especially concerning, not least because it sounds similar to historical justifications of totalitarianism, and the atrocities totalitarian regimes commit.
The liberal philosopher Isaiah Berlin summarised this kind of argument:
To make mankind just and happy and creative and harmonious forever — what could be too high a price to pay for that? To make such an omelette, there is surely no limit to the number of eggs that should be broken.
It's possible to pick up on various quotations from writing about longtermism, reproduced outside of their wider context, and come away with worries along these lines. So a concern that longtermism could justify major political harms is somewhat understandable. But, frankly, it's very hard to see how it relates to what longtermists currently work on or care about.
In fact, totalitarianism is especially concerning for longtermists, because a totalitarian regime aided by novel technology for surveillance and enforcement might itself constitute the kind of existential risk that longtermists work towards preventing. As such, a great deal of discussion and action within longtermism is focused on spreading political norms that have stood the test of time — anti-authoritarianism, liberalism, and the idea of an open society.
The totalitarian regimes of the past were responsible for the worst atrocities in human history: hardship and deaths caused not by nature but by human choices. Those regimes failed with horrendous consequences not just because of some over-willingness to make sacrifices for a better future, but because they were straightforwardly wrong that revolutionary violence and actual totalitarianism make the world better in the short or long term. It is therefore very hard to think of a realistic scenario where the longtermist perspective would recommend using mass repression or violence but other reasonable perspectives would not (in a case like the decision to fight the Axis powers in World War II, for instance, many reasonable perspectives agreed that fighting was justified).
That said, it would be a mistake to ignore concerns that longtermist ideas might become twisted or misinterpreted in order to justify political harms in the future. We know that even very noble aspirations can eventually lead to terrible consequences in the hands of normal, fallible people. Because this worry is legitimate, we should take great care to communicate longtermist ideas sensitively and honestly, and to guard against them being misused or taken in dangerous directions.
Longtermists don't seem to pay as much attention to climate change as I would have expected. What's going on there?
When you think about ways to improve the long-term future, climate change might come immediately to mind. We know beyond reasonable doubt that human activity disrupts Earth's climate, and that climate change will have devastating effects, such as more extreme weather events, the mass displacement of (predominantly poor) people, and biodiversity loss. Some of these effects could last a very long time, because greenhouse gases can persist in the Earth's atmosphere for tens of thousands of years. Finally, we have control over how much damage we cause, such as by redoubling efforts to develop green technology, building more zero-carbon energy sources, and pricing carbon emissions in line with their true social cost. For these reasons, longtermists have strong reasons to be concerned about climate change, and many are actively working on climate issues (see this report for example). Yet, longtermists overall seem to focus relatively more on other problems, many of which sound far more obscure. What's going on?
One special concern for longtermists is the possibility of an existential catastrophe: an event which irrevocably destroys humanity's long-term potential. Climate change is sometimes described as an 'existential threat', but the longtermist definition of an existential catastrophe sets an extremely high bar, and it's not clear that climate change is among the most plausible causes of a catastrophe in that sense. Drawing on the best available data at the time of writing, the philosopher Toby Ord estimates the chance of an existential catastrophe directly caused by climate change over the next century at around 1 in 1,000 (mostly driven by the possibility of extreme, 'runaway' scenarios); in contrast, his estimated risk from engineered pandemics is 30 times higher, and the risk from misaligned artificial intelligence is (on Ord’s guess) very roughly 100 times higher. To be sure, climate change is an ongoing global emergency. However, given our present state of knowledge, it seems unlikely to cause human extinction.
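To spell out the arithmetic behind those multiples, here is a short sketch using only the figures quoted above:

```python
# Illustrative arithmetic for the risk estimates discussed above.
climate_risk = 1 / 1_000           # Ord's estimate: roughly 1 in 1,000 this century
pandemic_risk = 30 * climate_risk  # "30 times higher": roughly 1 in 33
ai_risk = 100 * climate_risk       # "very roughly 100 times higher": roughly 1 in 10

for name, risk in [("climate change", climate_risk),
                   ("engineered pandemics", pandemic_risk),
                   ("misaligned artificial intelligence", ai_risk)]:
    print(f"{name}: about 1 in {round(1 / risk)}")
```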
So while many climate advocates refer to climate change as an ‘existential threat’, it doesn’t quite meet the definition of ‘existential threat’ as longtermists understand it. In this way, what might appear to be substantial disagreement over the severity of climate change’s impacts could partly be a case of words being used in different ways.
Another reason why some longtermists prioritise threats to the future other than climate change is that efforts to mitigate climate change, fortunately, receive an appreciable amount of public attention and resources. They are thus comparatively less neglected than, for example, efforts to reduce risks from nuclear weapons, engineered pathogens, or artificial intelligence. This matters because the more resources are already being spent to address a problem, the less impactful additional resources tend to be. Thus, there are likely to be relatively fewer ‘low-hanging fruit’ left to pick for making progress on mitigating the effects of climate change compared to other, more neglected, issues. Roughly $1 trillion per year is currently being invested in green tech and other mitigation strategies, and nonprofits appear to now be spending around $10 billion per year. By contrast, consider the threat of a pandemic worse than COVID-19. COVID-19 has caused more than ten million premature deaths and trillions of dollars in economic damage, but less than $100 million of nonprofit funding seems to be directed at improving pandemic preparedness (as of 2019). So it looks like an additional expert scientist, or donation, could go further working on pandemic preparedness than on mitigating the general effects of climate change.
To be clear, there's a strong case that we should be spending much more than we currently do on practical solutions to problems related to climate change. But the regrettable fact is that our resources are finite: to do the most good with the resources available, we must identify and work on the problems that look most pressing, even at the expense of other important problems.
That said, there are ways to address climate change that look especially neglected from a longtermist perspective. There is active discussion about nuclear power, and about how to ensure that, if geoengineering is done, it is done safely and responsibly. Another key example is work on modelling worst-case outcomes, such as proposed 'runaway' effects — since these scenarios could cause damage that lasts far into the future and is close to impossible to fully recover from. Such modelling may also help us study and mitigate other risks, such as the climatic effects caused by large-scale nuclear war, supervolcano eruptions, or massive asteroid impacts.