Research Questions
This is a compilation of sources of potential research questions on existential risk mitigation. All credit for these research questions and lists goes to Arden Koehler, Howie Lempel, David Manheim, and particularly Michael Aird, as well as other researchers and experts cited in these lists; mistakes are our own. We have compiled this list as a conversation-starter, not as a set of strong endorsements. Listings are lightly edited for this context.
Research questions about entirely natural risks (e.g. asteroid strikes, supervolcanoes) have been excluded from this list because research strongly suggests that the risks involved are extremely low; anthropogenic risks appear more urgent (Ord, 2020).
Longtermism, Existential Risks, or Global Catastrophic Risks (GCRs) Generally
- The Precipice, Appendix F: Policy and research recommendations - Toby Ord, 2020
- Also available here
- Research questions that could have a big social impact, organised by discipline - Arden Koehler & Howie Lempel (80,000 Hours), 2020
- Pp. 83-112 of Legal Priorities Research: A Research Agenda - Legal Priorities Project, 2021
- Open Research Questions - Center on Long-Term Risk
- ALLFED’s research and Effective Thesis topic ideas - 2019
- Crucial questions for longtermists
- Some history topics it might be very valuable to investigate
- Questions related to moral circles that are listed at the end of this post and in this comment
- Research statement [FHI RSP]
AI Risks: Technical
- Concrete Problems in AI Safety - Amodei et al., 2016
- “Clusters of ideas that we believe warrant further attention and research” - Center for Human-Compatible AI (CHAI)
- Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda - Nate Soares and Benya Fallenstein (MIRI)
- Alignment for Advanced Machine Learning Systems - Jessica Taylor et al. (MIRI), 2016
- Research Agenda v0.9: Synthesising a human's preferences into a utility function - Stuart Armstrong, 2019
- Related talk here
- FLI AI Safety Research Landscape - Future of Life Institute, 2018
- Associated paper here
- Materials from or related to Paul Christiano that have been described as research agendas, collections of questions, or supporting materials:
- Iterated Distillation and Amplification
- AI alignment landscape
- Directions and desiderata for AI alignment
- Other research agendas are listed here
AI Strategy/Governance
- The Centre for the Governance of AI’s research agenda - 2018
- Pp. 35-55 of Legal Priorities Research: A Research Agenda - Legal Priorities Project, 2021
- Artificial Intelligence and Global Security Initiative Research Agenda - Center for a New American Security
- Promising research projects - AI Impacts, 2018
- Cooperation, Conflict, and Transformative Artificial Intelligence (the Center on Long-Term Risk’s research agenda) - Jesse Clifton, 2019
- A survey of research questions for robust and beneficial AI - Future of Life Institute
- Proposals by individual researchers:
- Problems in AI Alignment that philosophers could potentially contribute to - Wei Dai, 2019
- Problems in AI risk that economists could potentially contribute to - Michael Aird, 2021
- Technical AGI safety research outside AI - Richard Ngo, 2019
- “Studies which could illuminate our strategic situation with regard to superintelligence” - Luke Muehlhauser, 2014 (he also made a list in 2012)
- A shift in arguments for AI risk - Tom Sittler, 2019
- Longtermist AI policy projects for economists - Risto Uuk
Biorisks
- Pp. 56-82 of Legal Priorities Research: A Research Agenda - Legal Priorities Project, 2021
- Project Ideas in Biosecurity for EAs - David Manheim ("In conjunction with a group of other EA biosecurity folk"), 2021
- 80 Questions for UK Biological Security - Luke Kemp et al., 2021
Climate Risks
- Climate research suggestions from The Precipice - Toby Ord, 2020
- Additional research questions - 80,000 Hours, 2020
- More generally, what environmental problems — if any — pose existential risks? (Adapted from Effective Thesis)
- What are potential risks from geoengineering technologies, and which of these technologies — if any — might be promising for mitigating climate change?
Nuclear Risks
- Nuclear risks research suggestions from The Precipice - Toby Ord, 2020
- Improve modelling of nuclear winter and of the climate effects of asteroids, comets, and supervolcanoes.
- Work on resolving the key uncertainties in nuclear winter modelling.
- Characterise the remaining uncertainties, then use Monte Carlo techniques to show the distribution of possible outcomes, with a special focus on the worst-case possibilities compatible with our current understanding (a minimal illustrative sketch follows this list).
- Investigate which parts of the world appear most robust to the effects of nuclear winter and how likely civilisation is to continue there.
- Research and develop methods for genetically engineering or breeding crops that could thrive in the tropics during a nuclear winter scenario (Adapted from ALLFED, Effective Thesis topic ideas)
- How does China’s nuclear no-first-use policy affect global stability and potential strategic doctrines for emerging technologies? (Adapted from Brian Tse, personal correspondence)
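To illustrate the kind of Monte Carlo exercise mentioned in the nuclear winter item above, here is a minimal sketch in Python. It samples two uncertain inputs (stratospheric soot injection and cooling per teragram of soot), propagates them to a peak-cooling outcome, and reports tail percentiles. All distributions and coefficients below are hypothetical placeholders chosen for illustration, not estimates from the nuclear winter literature.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000  # number of Monte Carlo samples

# Illustrative (not empirically calibrated) parameter distributions:
# soot injected into the stratosphere (Tg); lognormal to allow a heavy upper tail
soot_tg = rng.lognormal(mean=np.log(30), sigma=0.8, size=N)
# global mean cooling per Tg of soot (degrees C per Tg); placeholder values
cooling_per_tg = rng.normal(loc=0.15, scale=0.05, size=N).clip(min=0)

# Outcome metric: peak global mean surface cooling (degrees C)
peak_cooling = soot_tg * cooling_per_tg

# Summarise the distribution, with emphasis on the worst-case tail
for q in (50, 90, 95, 99):
    print(f"{q}th percentile peak cooling: {np.percentile(peak_cooling, q):.1f} C")
print(f"P(peak cooling > 8 C): {(peak_cooling > 8).mean():.3f}")
```

A real analysis would replace these placeholder distributions with ones fitted to the modelling literature and would propagate many more uncertain parameters, but the structure (sample inputs, compute outcomes, examine the tail) is the same.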