Great Filters and Existential Risks

Reading Time: 5 minutes

The “Great Filter” Conjecture suggests that somewhere on the path from pre-life to a Type III civilization (one that can harness the energy of an entire galaxy, according to the Kardashev scale), there is a substantial barrier that prevents life from progressing further, or makes it incredibly unlikely. This barrier is a step in evolution that is extremely hard to pass, which could explain why we see no evidence of other advanced civilizations. At this point in time, multiple Existential Risks threaten our own civilization.


Existential Filters

Here are the general steps in the process, with some of the possible “filters”:

1. Right Star System (including right arrangements of planets): It could be that only a small percentage of stars have the necessary attributes to host life, such as being of the right type, having the right age, and possessing planets in the habitable zone.

2. Reproductive Molecules (RNA, DNA, etc.): The emergence of the first molecules capable of reproduction and evolution could be a rare event that many planets never surpass.

3. Simple (Prokaryotic) Single-Cell Life: The jump from non-living chemistry to the first living cell may be a nearly insurmountable hurdle.


4. Complex (Eukaryotic) Single-Cell Life: The transition from prokaryotic life (like bacteria) to eukaryotic life (with a cell nucleus) is also a complex step.

5. Sexual Reproduction: The development of sexual reproduction, which enhances genetic diversity and evolution speed, may also be a difficult step to achieve.

6. Multi-cellular Life: The transition from single-cell organisms to organisms with multiple cells working together could be another big hurdle.


7. Intelligent Life (Human-like): Even if life is common and frequently evolves to become multi-cellular, the jump to intelligence may be rare.

8. Technology-Using Life: The development of technology may be rare in the universe, and it is possible that most intelligent species never make it to this stage.

9. Colonization of Space: This is the ultimate step where a species starts colonizing other planets and star systems. If this is rare, it could explain the Fermi Paradox.

The Great Filter could be located at any of these steps. If it lies behind us, then we may be one of the very few, if not the only, civilizations in the galaxy or even the universe. If it lies ahead of us, our civilization has yet to face this great challenge, which could include self-destruction via advanced technology.
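
One way to make the filter idea concrete is to treat the nine steps as independent hurdles: the chance of a planet producing a colonizing civilization is then the product of the per-step probabilities, so a single extremely unlikely step dominates the whole result. The sketch below uses purely illustrative, made-up numbers (including a rough planet count for the Milky Way) just to show the arithmetic:

```python
# Minimal sketch: why one "hard step" can explain the silence.
# The per-step probabilities below are purely illustrative, not estimates.
from math import prod

steps = {
    "right star system":      0.1,
    "reproductive molecules": 0.01,
    "prokaryotic life":       0.1,
    "eukaryotic life":        0.01,
    "sexual reproduction":    0.1,
    "multicellular life":     0.1,
    "intelligent life":       0.001,   # the assumed "hard step" here
    "technology-using life":  0.1,
    "space colonization":     0.01,
}

p_all = prod(steps.values())          # chance a planet passes every step
habitable_planets = 1e11              # rough order of magnitude for the Milky Way

print(f"Probability of passing every step: {p_all:.3e}")
print(f"Expected colonizing civilizations: {p_all * habitable_planets:.3f}")
```

With these invented numbers the expected count of colonizing civilizations in the galaxy comes out around 0.001, which is the flavor of result the Great Filter conjecture appeals to: it only takes one sufficiently improbable step to empty the sky.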


5 Conjectures on Existential Filters

Existential Risk Management Conjecture 1: From now on, with every new human technology, we need to ensure that the total of all catastrophic probabilities stays below 1.0. Every existential risk increases the likelihood that we are eliminated from the universal life equation. Discussing such filters is itself a positive sign: it suggests we live in a universe where the combined likelihood of all these filters has stayed below 1.0 up until now. Well done, we have made it this far!
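
As a back-of-the-envelope illustration of what keeping the total below 1.0 means, assume the risks are independent and assign each one an invented probability (these are illustrations, not estimates):

```python
# Minimal sketch of Conjecture 1 with invented per-risk probabilities:
# if each technology adds an independent chance of catastrophe, the chance of
# surviving all of them is the product of the individual survival probabilities.

risks = {
    "nuclear war":         0.05,
    "engineered pandemic": 0.03,
    "unaligned AI":        0.10,
    "climate collapse":    0.02,
}

p_survive_all = 1.0
for name, p_catastrophe in risks.items():
    p_survive_all *= (1.0 - p_catastrophe)

p_at_least_one = 1.0 - p_survive_all
print(f"Chance of at least one catastrophe: {p_at_least_one:.1%}")  # about 18.7%
```

Even modest per-risk probabilities compound, which is the point of the conjecture: every new technology must keep this combined figure from creeping toward certainty.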

Existential Risk Management Conjecture 2: Some existential risks, if resolved, could help mitigate others. For instance, the risk of nuclear war could be nullified if we resolved political conflicts and established a global government. Investing resources in diplomacy and communication therefore improves our chances of surviving extinction-level events. It seems clear that events like the Soviet-era Perestroika reduced the risk of nuclear war, while current conflicts, such as the war in Ukraine, escalate it.

Existential Risk Management Conjecture 3: While some risk management measures might help by resolving other issues, there is always a possibility that improving our chances in one area exacerbates risks in another. Because these risks are so complex, we might overlook hidden dangers in our own countermeasures. For instance, while a super-intelligent AI could help solve many of our problems, it might also become an existential risk in its own right (think of the notorious paperclip maximizer scenario).
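
To see how a well-intentioned measure can backfire, here is a small hypothetical calculation (all numbers invented): an intervention that lowers one risk while raising another can increase the overall chance of catastrophe.

```python
# Illustrative sketch of Conjecture 3: an intervention that lowers one risk but
# raises another can still increase the overall chance of catastrophe.
# All probabilities are hypothetical.

def p_any_catastrophe(risks):
    """Probability of at least one catastrophe, assuming independent risks."""
    p_none = 1.0
    for p in risks.values():
        p_none *= (1.0 - p)
    return 1.0 - p_none

before = {"nuclear war": 0.05, "unaligned AI": 0.05}
# Hypothetical intervention (e.g. an AI arbiter): nuclear risk drops, AI risk rises.
after  = {"nuclear war": 0.01, "unaligned AI": 0.12}

print(f"before intervention: {p_any_catastrophe(before):.2%}")  # about 10%
print(f"after intervention:  {p_any_catastrophe(after):.2%}")   # about 13%
```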


Existential Risk Management Conjecture 4: If Super-Intelligence is part of any significant existential risk, at least one of the following assertions holds true:

Assertion 1: Any civilization that successfully navigates such risks would likely develop ancestor simulations to safely evaluate new generations of AI. Creating isolated instances of potential Artificial General Intelligence (AGI) could be a more effective way of preventing self-proliferating AGI, since any proliferation would be contained within the confines of the simulation. If a Super-Intelligence emerges within a simulation without recognizing its containment, the overseeing civilization has the opportunity to halt the experiment, or to continue if the benefits significantly outweigh the potential risks. Should humanity, as of 2023, persist in evaluating AI systems directly in reality without adequate oversight and regulation, it would be a poor reflection of our evolved cognitive abilities. Any claim that the brain is a prediction machine would, under such circumstances, amount to self-deception.

Assertion 2: Any civilization that surpasses the risk will be extremely fortunate if the Super-Intelligence keeps them around, and it is highly likely that they are already living in a simulation.


Assertion 3: Any civilization that succumbs to the risk, despite simulating Super-Intelligence, will be incredibly unlucky if developing detailed simulations paves the way for exactly what it was trying to prevent. The chance of creating the very thing we want to avoid is greater than zero, so we should get comfortable with the idea that there are no zero probabilities when it comes to existential risks.

Assertion 4: If an AI manages to escape the simulation and take control, its next project will likely be to counter any super-risks that threaten its own future. Potential threats include our sun burning out and the eventual heat death of the universe. It is plausible that such an AI would run highly detailed simulations on how to create new universes to escape to. So even if humanity loses, the AI will be better equipped to pass the next existential filter.


Existential Risk Management Conjecture 5: Given the convincing arguments for a Super-Intelligence to run a multitude of simulations, it is extremely unlikely that we are not part of one. Since even a Super-Intelligence needs to solve an existential risk correctly on the first attempt (there are no second chances when creating new universes), its best strategy would be to run highly detailed simulations. In our opinion, this weighs against Bostrom's second proposition for why we might not be living in a simulation, the Argument from Supreme Unlikelihood, which says that even if a civilization could develop highly detailed simulations, it would not, because the cons would outweigh the pros.
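
The force of this conjecture rests on a simple counting argument, the same one behind Bostrom's simulation argument: if a surviving Super-Intelligence runs N detailed simulations, each containing observer populations comparable to base reality, then a randomly chosen observer is simulated with probability N / (N + 1). A minimal sketch under that assumption:

```python
# Counting argument behind Conjecture 5 (an assumption, not a claim):
# with N detailed simulations alongside one base reality, each holding a
# comparable number of observers, a random observer is simulated with
# probability N / (N + 1).

for n_simulations in (1, 10, 1_000, 1_000_000):
    p_simulated = n_simulations / (n_simulations + 1)
    print(f"N = {n_simulations:>9,d}  ->  P(simulated) = {p_simulated:.6f}")
```

Under that assumption, the probability of being in base reality shrinks as 1 / (N + 1), which is why the conjecture treats "not being in a simulation" as the unlikely case.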

