The Ecology of Extinction: Evolutionary Game Theory and Existential Risk from Artificial Superintelligence

Recent surveys indicate that a significant fraction of AI researchers assign non-trivial probability to human extinction or civilisational collapse resulting from advanced AI systems. In this talk, I apply frameworks from evolutionary game theory to two key strategic questions raised by the development of artificial superintelligence (ASI). First, I model competition among near-AGI systems and their developers as an evolutionary game, asking whether competitive dynamics might naturally constrain dangerous capability growth; I argue that they are unlikely to do so. Second, I analyse international coordination on development moratoria as a public goods game, exploring the conditions under which stable cooperation can emerge and the parameter regimes in which it cannot. I conclude by discussing the broader risk landscape and the roles that academics can play through research contributions, public communication, and political engagement.

This article was published on 2026-03-03.
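As a rough illustration of the first question, the standard tool for modelling competition of this kind is the replicator equation. The sketch below is a minimal, hypothetical example (the two strategies and the payoff matrix are my assumptions for illustration, not the model presented in the talk): when "racing" for capability strictly dominates "restraint", restrained developers are driven out, so competitive dynamics alone do not limit capability growth.

```python
# Hypothetical replicator-dynamics sketch of a two-strategy
# "capability race" game. Strategies and payoffs are illustrative
# assumptions, not the speaker's actual model.
import numpy as np

# Payoff matrix A[i, j]: row player's payoff for strategy i against j.
# Strategy 0 = "restrain", 1 = "race". Racing strictly dominates here.
A = np.array([[3.0, 0.0],
              [4.0, 1.0]])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i (f_i - fbar)."""
    f = A @ x                 # fitness of each strategy
    fbar = x @ f              # population-average fitness
    x = x + dt * x * (f - fbar)
    return x / x.sum()        # renormalise against numerical drift

x = np.array([0.9, 0.1])      # start with 90% restrained developers
for _ in range(5000):
    x = replicator_step(x)

print(x)  # the "race" share goes to fixation despite its small start
```

Because racing strictly dominates in this payoff matrix, the outcome is independent of the initial mix: restraint is eliminated, which is the sense in which competition is unlikely to self-constrain.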
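The second question can likewise be sketched with the textbook linear public goods game. In the toy version below (all parameters, n countries, cost c, multiplier r, and the sanction s, are illustrative assumptions), defecting from a moratorium is dominant whenever the marginal per-capita return r/n is below 1, and cooperation becomes individually rational only once an added incentive such as a sanction exceeds c(1 - r/n):

```python
# Hypothetical n-player public goods game for a development moratorium.
# All parameter values are illustrative assumptions.
def payoff(contributes, others_contributing, n=10, c=1.0, r=3.0):
    """Payoff to one country: contributors pay cost c; pooled
    contributions are multiplied by r and shared equally among all n."""
    total = others_contributing + (1 if contributes else 0)
    return r * c * total / n - (c if contributes else 0.0)

# With r < n, the marginal per-capita return r/n < 1, so defection
# dominates regardless of how many others contribute:
for k in range(10):  # k = number of *other* contributors
    assert payoff(False, k) > payoff(True, k)

# A sanction s on defectors restores cooperation once s > c * (1 - r/n):
s = 0.8  # here c * (1 - r/n) = 0.7, so s = 0.8 suffices
assert payoff(True, 9) > payoff(False, 9) - s
```

The threshold s > c(1 - r/n) is one concrete instance of the kind of parameter regime the abstract refers to: below it, stable cooperation on a moratorium cannot emerge from self-interest alone.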