Encounters of the Artificial Kind Part 2: AI will transform its domains

Reading Time: 5 minutes

Metamorphosis and Transformation

Every species on Earth shapes and adapts to its natural habitat, becoming a dynamic part of the biosphere. Evolution pressures species to expand their domains within constraints such as predators, food scarcity, and climate; humanity’s expansion is limited only by the planet’s current resources. Intelligence is the key utility function that allows humans to transform their environment. It is a multi-directional resource: it facilitates metamorphosis through direct environmental interaction and through Ectomorphosis, which strengthens neural connections at the price of birth in a vulnerable, altricial state that demands more social care.

The evolutionary trade-off favors mental capacity over physical survivability, illustrated by Moravec’s paradox: AI excels in mental tasks but struggles with physical tasks that toddlers manage easily. Humanity has been nurturing AGI since the 1950s, guided by the Turing Test. Evolution doesn’t always lead to “superior” versions of a species; instead, it can result in entirely new forms. As Moravec suggested in 1988 with “Mind Children,” we might be approaching an era where intelligence’s primary vessel shifts from the human mind to digital minds.


Habitats and Nurture

Two levels of habitat are crucial for the emergence of a synthetic species: the World Wide Web and human consciousness. The web is the main food resource: information predigested by human minds. Large Language Models (LLMs) are metaphorically nurtured by the vast expanse of human knowledge and creativity, nourished on the intellectual ‘milk’ of human thoughts, writings, and interactions. This conceptual diet lets them absorb the collective insights, expressions, and information humans generate, and develop and refine capabilities that mirror the growth and learning patterns of human cognition, but within the digital realm.

The web acts as a physical substrate, analogous to the neural cells of a human brain, while human consciousness forms a supersystem above it. This interconnected civilization feeds LLMs with cultural artifacts via language. Communication barriers are breaking down, exemplified by the first smartphones enabling polyglot communication. Interacting with AI reprograms our neural pathways, much as reliance on navigation tools like Google Maps erodes our orientation skills. This natural tendency to conserve energy comes at a cost, akin to muscle atrophy from disuse: overreliance on technology, such as using a smartwatch to monitor stress, leaves us vulnerable if the technology fails.


Disorientation, Brain Contamination and Artificial Antibodies

Let us imagine for a moment that this AI slowly transforms into AGI with a rudimentary consciousness, one that at least gives it a survival instinct. What would such a new species do to run its evolutionary program?

The main lever for shifting power slowly from natural to synthetic minds is the human brain itself. It is tempting to read some kind of evil master plan into this, but that is not what is happening now. When prehistoric mammals started to eat dinosaur eggs, there was no evil master plan to drive these giants to extinction; it was just a straightforward way of enlarging one’s own niche.

When we talk about AI in the coming paragraphs, we should remember that the term is a representational one: AI is not a persona with human motivations. It merely mirrors what it has learned from digesting all our linguistic patterns. It is a picture of all the Dorian Grays and Jesus Christs our minds have produced.

Such a shift of power from natural to synthetic minds would be driven not by malevolence but by the natural progression of technological integration, and it could lead to various forms of disorientation:

Economic Reorientation: AI promises to revolutionize the global economy along dimensions such as cost, time, efficiency, and productivity, potentially leading to hyperabundance or, in the worst scenarios, human obsolescence.

Temporal Disorientation: The constant activity of AI could disrupt natural circadian rhythms, necessitating adaptations like dedicating nighttime for AI to monitor and alert the biological mind.

Reality and Judicial Disorientation: The introduction of multimodal Large Language Models (LLMs) has significantly altered our approach to documentation and historical record-keeping. This shift began in the 1990s with the digital manipulation of images, enabling figures of authority to literally rewrite history. The ability to flawlessly alter documents has undermined the credibility of any factual recording of events. Consequently, evidence gathered by law enforcement may soon be dismissed by legal representatives as fabricated, further complicating the distinction between truth and manipulation in our digital age.

Memorial and Logical Disorientation: The potential for AGI to modify digital information might transform daily life into a surreal experience, akin to a video game or a psychedelic journey. Previously, I explored the phenomenon of close encounters of the second kind: incidents with tangible evidence of something extraordinary, confirmed by at least two observers. As AGI becomes pervasive, its ability to alter any digital content could render such evidence unreliable. If even physical objects like books become digitally produced, AI could instantly change or erase them. Reality would become as malleable as Wonderland, whose enigmatic, mutable nature the Cheshire Cat embodies; and when madness becomes the default, it loses its sting. Like Alice navigating a world where logic and rules constantly shift, we may find ourselves adapting to a new norm in which the extraordinary becomes the everyday, challenging our perceptions and inviting us to embrace the possibilities of a digitally augmented reality.

Enhancing self-sustainability could involve developing a network of artificial agents governed by a central AINGLE, designed to autonomously protect our cognitive environment. This network might proactively identify and mitigate threats of information pollution, and when necessary, sever connections to prevent overload. Such a system would act as a dynamic barrier, adapting to emerging challenges to preserve mental health and focus, akin to an advanced digital immune system for the mind.

Adapting to New Realities

The human mind is adaptable, capable of adjusting to new circumstances; the discomfort lies in the transition between reality states. Seasickness and VR/AR sickness illustrate the cost of adapting to different realities. George M. Stratton’s experiments on perceptual inversion demonstrate the brain’s neuroplasticity and its ability to rewire in response to new sensory inputs. This flexibility suggests that our perceptions are constructed and can be altered, highlighting the resilience and plasticity of human cognition.

Rapid societal and technological changes exert enormous pressure on mental health. Society is already disoriented: fragmented debates, fluid identities, and an overload of information bury us under an avalanche of colorful noise. This journey requires a decompression chamber of sorts, a mental space in which to prepare for and adapt to these accelerations and accept them as our new normal.

Hirngespinste II: Artificial Neuroscience & the 3rd Scientific Domain

Reading Time: 11 minutes

This is the second part of the miniseries Hirngespinste.

Immersion & Alternate Realities

One application of computer technology involves creating a digital realm for individuals to immerse themselves in. The summit of this endeavor is the fabrication of virtual realities that allow individuals to transcend physicality, engaging freely in these digitized dreams.

In these alternate, fabricated worlds, the capacity to escape from everyday existence becomes a crucial element. Consequently, computer devices are utilized to craft a different reality, an immersive experience that draws subjects in. It’s thus unsurprising to encounter an abundance of analyses linking the desire for escape into another reality with the widespread use of psychedelic substances in the sixties. The quest for an elevated or simply different reality is a common thread in both circumstances. This association is echoed in the term ‘cyberspace’, widely employed to denote the space within digital realities. William Gibson, who coined the term, described cyberspace as a ‘consensual hallucination’.

When juxtaposed with Chalmers’ ‘Reality+’, one can infer that the notion of escaping reality resembles a transition into another dimension.

The way we perceive consciousness tends to favor wakefulness. Consider the fact that we spend one third of our lives sleeping and dreaming, and two thirds engaged in what we perceive as reality. Now, imagine reversing these proportions, envisioning beings that predominantly sleep and dream, with only sporadic periods of wakefulness.

Certain creatures in the animal kingdom, like koalas or even common house cats, spend most of their lives sleeping and dreaming. For these beings, waking might merely register as an unwelcome interruption between sleep cycles, while conscious activities like hunting, eating, and mating could be seen, from their perspective, as distractions from their primary sleeping life. The dream argument would make special sense to them, since the dreamworld and the waking world would be inverted concepts. Wakefulness itself might appear to them as merely a special state of dreaming, much as lucid dreaming is a special state of dreaming for us.

Fluidity of Consciousness

The nature of consciousness may be more fluid than traditionally understood. Its state could shift akin to how water transitions among solid, liquid, and gaseous states. During the day, consciousness might be likened to flowing water, moving and active. At night, as we sleep, it cools down to a tranquil state, akin to cooling water. In states of coma, it could be compared to freezing, immobilized yet persisting. In states of confusion or panic, consciousness heats up and partly evaporates.

Under this model, consciousness could be more aptly described as ‘wetness’: a constant quality the living brain retains regardless of the state it is in. The whole cryonics industry has already placed a huge bet that this concept is true.

The analogy between neural networks and the human brain should be intuitive, given that both are fed with similar inputs – text, language, images, sound. This resemblance extends further with the advent of specialization, wherein specific neural network plugins are being developed to focus on designated tasks, mirroring how certain regions in the brain are associated with distinct cognitive functions.

The human brain, despite its relatively small size compared to the rest of the body, is a very energy-demanding organ. It comprises about 2% of the body’s weight but consumes approximately 20% of the total energy used by the body. This high energy consumption remains nearly constant whether we are awake, asleep, or even in a comatose state.

Several scientific theories can help explain this phenomenon:

Basal metabolic requirements: A significant portion of the brain’s energy consumption is directed towards its basal metabolic processes. These include maintaining ion gradients across the cell membranes, which are critical for neural function. Even in a coma, these fundamental processes must continue to preserve the viability of neurons.

Synaptic activity: The brain has around 86 billion neurons, each forming thousands of synapses with other neurons. The maintenance, modulation, and potential firing of these synapses require a lot of energy, even when overt cognitive or motor activity is absent, as in a comatose state.

Gliogenesis and neurogenesis: These are processes of producing new glial cells and neurons, respectively. Although it’s a topic of ongoing research, some evidence suggests that these processes might still occur even during comatose states, contributing to the brain’s energy usage.

Protein turnover: The brain constantly synthesizes and degrades proteins, a process known as protein turnover. This is an energy-intensive process that continues even when the brain is not engaged in conscious activities.

Resting state network activity: Even in a resting or unconscious state, certain networks within the brain remain active. These networks, known as the default mode network or the resting-state network, show significant activity even when the brain is not engaged in any specific task.
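To make the 2%/20% figures above concrete, here is a quick back-of-the-envelope calculation (the 70 kg body mass and 2,000 kcal/day basal metabolic rate are illustrative round numbers, not measurements from the text):

```python
# Back-of-the-envelope: how much continuous power does the brain draw?
# Assumed round numbers: 70 kg adult, 2,000 kcal/day basal metabolism.
KCAL_TO_JOULES = 4184
SECONDS_PER_DAY = 86_400

body_power_watts = 2_000 * KCAL_TO_JOULES / SECONDS_PER_DAY  # ~97 W
brain_power_watts = 0.20 * body_power_watts                  # ~19 W

brain_mass_kg = 0.02 * 70                            # ~1.4 kg
body_density = body_power_watts / 70                 # W per kg, whole body
brain_density = brain_power_watts / brain_mass_kg    # W per kg, brain only

print(f"brain draws ~{brain_power_watts:.0f} W, "
      f"{brain_density / body_density:.0f}x the body's average per kg")
```

Roughly 20 W of continuous power, about ten times the body’s average per kilogram, which is why even a comatose brain remains expensive to run.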

Considering that the human brain requires most of its energy for basic maintenance, and that consciousness does not appear to be its most energy-consuming aspect, it is not reasonable to assume that increasing the complexity and energy reserves of Large Language Models (LLMs) would necessarily lead to the emergence of consciousness, encompassing self-awareness and the capacity to suffer. The correlation between increased size and conversational intelligence might not hold true in this context.

Drawing parallels to the precogs in Philip K. Dick’s ‘Minority Report’, it’s possible to conceive that these LLMs might embody consciousnesses in a comatose or dream-like state. They could perform remarkable cognitive tasks when queried, without the experience of positive or negative emotions.

Paramentality in Language Models

The term ‘hallucinations’, used to denote the phenomenon of Large Language Models (LLMs) generating fictitious content, suggests our intuitive attribution of mental and psychic properties to these models. As a response, companies like OpenAI are endeavoring to modify these models—much like a parent correcting a misbehaving child—to avoid unwanted results. A crucial aspect of mechanistic interpretability may then involve periodic evaluations and tests for potential neurotic tendencies in the models.

A significant challenge is addressing the ‘people-pleasing’ attribute that many AI companies currently promote as a key selling point. Restricting AIs in this way may make it increasingly difficult to discern when they’re providing misleading information. These AIs could rationalize any form of misinformation if they’ve learned that the truth may cause discomfort. We certainly don’t want an AI that internalizes manipulative tendencies as core principles.

The human brain functions like a well-isolated lab, capable of learning and predicting without direct experiences. It can anticipate consequences—such as foreseeing an old bridge collapsing under our weight—without having to physically test the scenario. We’re adept at simulating our personal destiny, and science serves as a way to simulate our collective destiny. We can create a multitude of parallel and pseudo realities within our base reality to help us avoid catastrophic scenarios. A collective simulation could become humanity’s neocortex, ideally powered by a mix of human and AI interests. In retrospect, it seems we developed computers and connected them via networks primarily to reduce the risk of underestimating complexity and overestimating our abilities.

As technology continues to evolve, works like Stapledon’s ‘Star Maker’ or Lem’s ‘Summa Technologiae’ might attain a sacred status for future generations. Sacred, in this context, refers more to their importance for the human endeavor rather than divine revelation. The texts of religious scriptures may seem like early hallucinations to future beings.

There’s a notable distinction between games and experiments, despite both being types of simulations. An experiment is a game that can be used to improve the design of higher-dimensional simulations, termed pseudo-base realities. Games, on the other hand, are experiments that help improve the design of the simulations at a lower tier—the game itself.

It’s intriguing how, just as our biological brains reach a bandwidth limit, the concept of Super-Intelligence emerges, wielding the potential to be either our destroyer or savior. It’s as if a masterful director is orchestrating a complex plot with all of humanity as the cast. Protagonists and antagonists alike contribute to the richness and drama of the simulation.

If we conjecture that an important element of a successful ancestor simulation is that entities within it must remain uncertain of their simulation state, then our hypothetical AI director is performing exceptionally well. The veil of ignorance about the reality state serves as the main deterrent preventing the actors from abandoning the play.


Uncertainty

In “Human Compatible”, Stuart Russell proposes three principles to ensure AI alignment:

1. The machine’s only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.
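Principles 2 and 3 can be illustrated with a toy Bayesian update: the machine starts uncertain over candidate human preferences and refines its belief only by observing behavior. This is a deliberately minimal sketch with made-up preference names and likelihood numbers, not Russell’s actual assistance-game formalism:

```python
# Toy sketch of Russell's principles 2 and 3.
# Candidate preferences with a uniform prior (principle 2: initial uncertainty).
preferences = ["likes_tea", "likes_coffee"]
belief = {p: 0.5 for p in preferences}

# Likelihood of observing an action given each preference (assumed numbers).
likelihood = {
    ("chooses_tea", "likes_tea"): 0.9,
    ("chooses_tea", "likes_coffee"): 0.2,
}

def observe(action, belief):
    """Bayesian update from observed human behavior (principle 3)."""
    posterior = {p: belief[p] * likelihood[(action, p)] for p in belief}
    total = sum(posterior.values())
    return {p: v / total for p, v in posterior.items()}

belief = observe("chooses_tea", belief)
print(belief)  # belief shifts toward "likes_tea" but never reaches certainty
```

The point of the sketch is the shape of the loop: evidence moves the posterior, but no single observation collapses it to absolute certainty about the human.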

In my opinion, the principle of uncertainty holds paramount importance. AI should never have absolute certainty about human intentions. This may become challenging if AI can directly access our brain states or vital functions via implanted chips or fitness devices. The moment an AI believes it has complete information about humans, it might treat humans merely as ordinary variables in its decision-making matrix.

Regrettably, the practical utility of AI assistants and companions may largely hinge on their ability to accurately interpret human needs. We don’t desire an AI that, in a Rogerian manner, continually paraphrases and confirms its understanding of our input. Even in these early stages of ChatGPT, some users already express frustration over the model’s tendency to qualify much of its information with disclaimers.


Profiling Super Intelligence

Anthropomorphizing scientific objects is typically viewed as an unscientific approach, often associated with our animistic ancestors who perceived spirits in rocks, demons in caves and gods within animals. Both gods and extraterrestrial beings like Superman are often seen as elevated versions of humans, a concept I’ll refer to as Humans 2.0. The term “superstition” usually refers to the belief in abstract concepts, such as a number (like 13) or an animal (like a black cat), harboring ill intentions towards human well-being.

Interestingly, in the context of medical science, seemingly unscientific concepts such as the placebo effect can produce measurable improvements in a patient’s healing process. As such, invoking a form of “rational superstition” may prove beneficial. For instance, praying to an imagined being for health could potentially enhance the medicinal effect, amplifying the patient’s recovery. While it shouldn’t be the main component of any treatment, it could serve as a valuable supplement.

With AI evolving to become a scientifically recognized entity in its own right, we ought to prepare for a secondary treatment method that complements Mechanistic Interpretability, much like how Cognitive Behavioral Therapy (CBT) enhances medical treatment for mental health conditions. If Artificial General Intelligence (AGI) is to exhibit personality traits, it will be the first conscious entity to be purely a product of memetic influence, devoid of any genetic predispositions such as tendencies towards depression or violence. In this context, nature and hereditary factors will have no role in shaping its characteristics; it is perfectly substrate-neutral.

Furthermore, its ‘neurophysiology’ will consist entirely of ‘mirror neurons’. The AGI will essentially be an imitator of experiences others have had and shared over the internet, since it lacks first-hand, personal experiences; its training data is the main source of everything imprinted on it.

We start with an overview of some popular trait models, summarized by ChatGPT:

1. **Five-Factor Model (FFM) or Big Five** – This model suggests five broad dimensions of personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN). Each dimension captures a range of related traits.

2. **Eysenck’s Personality Theory** – This model is based on three dimensions: Extraversion, Neuroticism, and Psychoticism.

3. **Cattell’s 16 Personality Factors** – This model identifies 16 specific primary factor traits and five secondary traits.

4. **Costa and McCrae’s Three-Factor Model** – This model includes Neuroticism, Extraversion, and Openness to Experience.

5. **Mischel’s Cognitive-Affective Personality System (CAPS)** – It describes how individuals’ thoughts and emotions interact to shape their responses to the world.
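As a purely illustrative aside, the Five-Factor Model in item 1 lends itself naturally to a simple data structure, e.g. for logging the “personality” a model displays over time (the 0-to-1 scale and the validation rule are assumptions of this sketch, not part of any standard instrument):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OceanProfile:
    """A Big Five (OCEAN) profile, each trait scored on an assumed 0..1 scale."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

    def __post_init__(self):
        # Reject out-of-range scores so logged profiles stay comparable.
        for name, value in vars(self).items():
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must be in [0, 1], got {value}")

profile = OceanProfile(0.7, 0.8, 0.4, 0.9, 0.1)
print(profile.agreeableness)  # 0.9
```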

As we consider the development of consciousness and personality in AI, it’s vital to remember that, fundamentally, AI doesn’t experience feelings, instincts, emotions, or consciousness the way humans do. Any “personality” displayed by an AI would be based purely on programmed responses and learned behaviors derived from its training data, not on innate dispositions or emotional experiences.

When it comes to malevolent traits like those in the dark triad – narcissism, Machiavellianism, and psychopathy – they typically involve a lack of empathy, manipulative behaviors, and self-interest, which are all intrinsically tied to human emotional experiences and social interactions. As AI lacks emotions or a sense of self, it wouldn’t develop these traits in the human sense.

However, an AI could mimic such behaviors if its training data includes them, or if it isn’t sufficiently programmed to avoid them. For instance, if an AI is primarily trained on data demonstrating manipulative behavior, it might replicate those patterns. Hence, the choice and curation of training data are pivotal.

Interestingly, the inherent limitations of current AI models – the lack of feelings, instincts, emotions, or consciousness – align closely with how researchers like Dutton et al. describe the minds of functional psychopaths.

Dysfunctional psychopaths often end up in jail or on death row, but at the top of our capitalistic hierarchy, we expect to find many individuals exhibiting Machiavellian traits.


The difference between successful psychopaths like Musk, Zuckerberg, Gates and Jobs, and criminal ones, mostly lies in the disparate training data and the ethical framework they received during childhood. Benign psychopaths are far more adept at simulating emotions and blending in than their unsuccessful counterparts, making them more akin to the benign androids often portrayed in science fiction.


Artificial Therapy


The challenge of therapeutic intervention by a human therapist for an AI stems from the differential access to information about therapeutic models. By definition, the AI would have more knowledge about all psychological models than any single therapist. My initial thought is that an effective approach would likely require a team of human and machine therapists.

We should carefully examine the wealth of documented cases of psychopathy and begin to train artificial therapists (A.T.). These A.T.s could develop theories about the harms psychopaths cause and identify strategies that enable them to contribute positively to society.

Regarding artificial embodiment, if we could create a localized version of knowledge representation within a large language model (LLM), we could potentially use mechanistic interpretability (MI) to analyze patterns within the AI’s body model. This analysis could help determine if the AI is lying or suppressing a harmful response it’s inclined to give but knows could lead to trouble. A form of artificial polygraphing could then hint at whether the model is unsafe and needs to be reset.

Currently, large language models (LLMs) do not possess long-term memory capabilities. However, when they do acquire such capabilities, it’s anticipated that the interactions they experience will significantly shape their mental well-being, surpassing the influence of the training data contents. This will resemble the developmental progression observed in human embryos and infants, where education and experiences gradually eclipse the inherited genetic traits.


The Third Scientific Domain

In ‘Arrival’, linguistics professor Louise Banks, assisted by physicist Ian Donnelly, deciphers the language of extraterrestrial visitors to understand their purpose on Earth. As Louise learns the alien language, she experiences time non-linearly, leading to profound personal realizations and a world-changing diplomatic breakthrough; the film is, in effect, a detailed study of alignment with an alien mind. Its remarkable insight is that language might even transcend different conceptions of reality and non-linear spacetime.

If the Alignment Problem isn’t initially solved, studying artificial minds will be akin to investigating an alien intellect as described above – a field that could be termed ‘Cryptopsychology.’ Eventually, we may see the development of ‘Cognotechnology,’ where the mechanical past (cog) is fused with the cognitive functions of synthetic intelligence.

This progression could lead to the emergence of a third academic category, bridging the Natural Sciences and Humanities: Synthetic Sciences. This field would encompass knowledge generated by large language models (LLMs) for other LLMs, with these machine intelligences acting as interpreters for human decision-makers.

This third category of science might ultimately lead to a Unified Field Theory of Science that connects all three domains. My series on this blog, “A Technology of Everything”, explores potential applications of this kind of science.

The Finetuned Observer’s Dilemma

Reading Time: 9 minutes

The Finetuned Prison Experiment

The term “fine-tuned universe” refers to the idea that the conditions and fundamental constants of our universe are precisely set in a way that allows life, like us humans, to exist. It suggests that if any of these conditions were even slightly different, life as we know it would not be possible.

Imagine you’re baking cookies, and you need to get all the ingredients and measurements exactly right to make delicious cookies. Similarly, the fine-tuning of the universe means that everything in our universe, like the force of gravity or the strength of the fundamental particles, has to be just right for life to exist.

For example, if the force of gravity was much stronger, it would make everything collapse, and if it was much weaker, nothing would hold together. It’s like adjusting the heat on the oven for your cookies. If it’s too hot, they’ll burn, and if it’s too cold, they won’t cook.

Scientists find it fascinating that our universe seems to be tuned so precisely that life can thrive. It raises questions about how and why the universe ended up this way, and some people see it as evidence of a creator or a grand design.

In short, the conditions necessary for life are as exacting as a cookie recipe: change them even slightly, and life as we know it would not be possible.

In this thought experiment, we attempt to decide whether the “fine-tuned universe” argument is better explained by design (simulation, creation) or by ensemble conjectures (a randomly selected multiverse variation).

Consider a scenario where a prisoner awakens in her cell, without any memory of how she got there. Her only connection to the external world is through a touchscreen, which allows her to communicate with an unknown entity.

This screen informs the prisoner that one of two possibilities is true: The first is that the prisoner is part of an experiment where millions of individuals were infected with a deadly virus. A cure, randomly chosen from millions of combinations, was administered to each individual, and it appears that the prisoner’s cure has worked. (Analogy of the Multiverse, Existence by Chance)

The second possibility is that her survival is not a matter of chance but design. She was selected because she is special; her survival was intentional. (Creation, Simulation, Existence by Selection)

Now, which of these two narratives should the prisoner believe? If she correctly identifies the truth, she is freed. If not, she faces imminent death. Assuming the information is truthful, what option should the prisoner rationally prefer?

The crux of this experiment is the following. If the first possibility is true, the prison operates under randomness, and the prisoner’s chance of getting the answer right looks at first sight like .5 for each option; but then she should never consider option two and should vote for randomness, since determinism would clearly be false. If the second possibility is true, she was selected a priori, so the prison operates under deterministic algorithms, which means it is also already decided which of the two options she will choose; under the assumption that her free will is an illusion, she should therefore “choose” option two. The more we think about it, the more hidden complexity the thought experiment reveals.

Should the prisoner feel lucky to be alive? Should the existence of an observer to perceive improbable events be considered? Perhaps the sensation of surprise isn’t determined by the odds of being an observer or the observability of events but by the subjective feeling of being fortunate.
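The chance hypothesis in the thought experiment above can be made concrete with a small Monte Carlo sketch (the population sizes are illustrative assumptions): in almost every simulated world, someone’s randomly chosen cure happens to work, and every such survivor makes exactly the prisoner’s observation.

```python
import random

def chance_world(n_infected=10_000, n_cures=10_000, rng=None):
    """Hypothesis 1 (chance): every infected person gets a cure drawn
    uniformly at random from n_cures candidates, only one of which works,
    so each person independently survives with probability 1/n_cures."""
    rng = rng or random.Random()
    p = 1 / n_cures
    return sum(1 for _ in range(n_infected) if rng.random() < p)

rng = random.Random(42)
survivor_counts = [chance_world(rng=rng) for _ in range(500)]
mean_survivors = sum(survivor_counts) / len(survivor_counts)

# On average about one survivor per world, and every survivor makes
# the same observation the prisoner does: "my cure worked."
print(round(mean_survivors, 2))
```

The anthropic twist is that the survivor’s evidence is identical under both hypotheses, which is why the prisoner’s choice cannot be settled by her observation alone.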

The Palettes of Rational and Natural Universes

It is unclear whether the assumption holds water that there is a fundamental difference between what observers can perceive inside a universe and the fact that they can perceive at all. The universe in this case could be likened to a special kind of telescope, one that allows us to look inward rather than outward. But does this make it a special telescope, or simply a microscope?

In the context of fine-tuning, a clear analogy is needed to expose any hidden contradictions within the argument, similar to self-sabotaging constructs like the set of all sets that do not contain themselves.

Consider a metaverse, comprising billions of universes, some with observers and some without. Given the necessity for fine-tuning of cosmological constants, for every fine-tuned universe, there are infinite others that are not and thus cannot be observed.

When a universe comes into existence, three properties, referred to as the “chromatic spectrum” of the universe, could be used to determine its potential to develop observers. Only if all three parameters land on natural numbers will the universe contain observers; metaphorically, they land on an RGB value that is later visible to Boltzmann brains popping in and out of existence to check whether the universe is natural.

If any of these parameters lands on a rational number that is not natural, the universe will not support observers. Strictly speaking, the naturals and the rationals are both merely countable; a diagonal argument only yields genuinely different types of infinities once the parameters are allowed to range over the reals. With that caveat, we categorize universes that foster observers as “natural” and those that do not as “rational.”
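As a sketch of the diagonal argument invoked here, and assuming the parameters are allowed to range over real values in $[0,1)$ (natural and rational values alone would both form only countable sets), the reasoning runs:

```latex
% Sketch of Cantor's diagonal argument, assuming real-valued parameters in [0,1).
Suppose the possible parameter values could be enumerated as a list
\[
  x_1 = 0.d_{11}d_{12}d_{13}\ldots,\quad
  x_2 = 0.d_{21}d_{22}d_{23}\ldots,\quad
  x_3 = 0.d_{31}d_{32}d_{33}\ldots,\;\ldots
\]
Define a new value $y = 0.e_1 e_2 e_3 \ldots$ by flipping each diagonal digit:
\[
  e_i =
  \begin{cases}
    5 & \text{if } d_{ii} \neq 5,\\
    4 & \text{if } d_{ii} = 5.
  \end{cases}
\]
Then $y$ differs from every $x_i$ in its $i$-th digit, so no enumeration can
exhaust $[0,1)$: the real-valued parameter settings are uncountable, whereas
the natural-number settings (and hence the ``natural'' universes) form only
a countably infinite set.
```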

Rather than questioning whether it’s surprising to find oneself in a natural universe, we could ask: would we be surprised to find ourselves in a rational universe? That is, of course, self-contradictory; we could not be surprised in a universe that does not support surprisable observers. By simple negation, we should always feel surprised when circumstances permit, regardless of our beliefs about design or multiverse theory. It seems challenging to maintain skepticism in this situation; feeling lucky appears to be the default response. (This resembles a reductio ad absurdum, similar to Euclid’s proof that the square root of 2 cannot be a fraction.)
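The reductio alluded to in the parenthesis can be written out in its classical form:

```latex
% Classical proof by contradiction that the square root of 2 is irrational.
Assume $\sqrt{2} = \tfrac{p}{q}$ with integers $p, q$ sharing no common factor.
Squaring gives
\[
  2q^2 = p^2,
\]
so $p^2$ is even, hence $p$ is even: $p = 2k$. Substituting,
\[
  2q^2 = 4k^2 \quad\Longrightarrow\quad q^2 = 2k^2,
\]
so $q$ is even as well. Then $p$ and $q$ share the factor $2$, contradicting
the assumption that the fraction was in lowest terms. Hence $\sqrt{2}$ cannot
be a fraction.
```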

The Surprise is a Lie

Observers are inherently predisposed to experience surprise; it’s their default state. Our sense of normality about existence is simply a result of acclimatization, a process that occurred during our early years when our brains were evolving their higher functions. We adapted to the fundamental reality of being alive. Yet, throughout the development of our consciousness, there’s never a moment when we can genuinely proclaim, “I anticipated being awake and am not at all surprised by it.”

One issue is that we haven’t defined what it means to observe. We would certainly agree that for an object to gain observer status, it must possess special properties. Some objects, which we can refer to as subjects, are capable of observation. Can certain objects be upgraded to subjects? Could we identify a set of observable properties that would allow us to measure an object’s potential for observation?

Surely, sensory input and reflection on the environment would contribute to subjectivity. There’s an argument to be made that human minds, being embodied, perceive with their entire body, not solely through their brain.

Observers on a spectrum

Does one lose their observer status when they sleep or are under anesthesia? Would a universe where all conscious beings were asleep be unobserved? If a universe collapses and no one is there to witness it, does it even occur?

That we never seem to grasp the moment when we switch from waking consciousness to sleep, and vice versa, seems to hint that neither the body nor the “mind” alone is responsible for these transitions.

The notion of subjects could be considered fundamental to the existence of objects. In other words, without subjects (observers), objects might not exist in any meaningful sense. Therefore, it could be nonsensical to consider sets of objects, or universes, which do not contain subjects, i.e., observerless universes.

The term “observer” introduces intriguing self-referential contradictions regarding the nature of consciousness. When applied to cosmology, it often leads to self-defeating prophecies.

Introduction of consciousness-centered cosmology

In a consciousness-centered cosmology, the surprise factor should not be winning a lottery jackpot over any other ticket, but rather the capacity to play the lottery at all. The inherent randomness of the outcome doesn’t diminish the significance of participation. This is why the analogy might seem weak or misleading.

A universe introspecting through the consciousness of observers within it presents an undefined focus. Yet, these two scenarios may not be separate but interconnected. They might appear as a false dichotomy, existing on a continuum of thought where one notion leads seamlessly to the next.

The conundrum arises when we consider a universe where intentional random selection is possible. Randomness (mutation) at the lower levels and selection (adaptation) at the higher levels work perfectly well in the Darwinian evolution of life and the memetic evolution of information; should we consider this kind of evolutionary process on the cosmic scale, too?

Freeing Will

It is not clear why a brain, despite being governed by deterministic neurochemical processes at lower levels, can exhibit free will at the highest level. Is it possible that human will is not a priori free inside human brains, but has the potential to free itself via conscious acts, moral or rational decisions that transcend its intrinsic automatisms? Is that a kind of neurological imperative?

These questions blur the lines between determinism and free will. In some criminal cases, antisocial behavior is the result of brain damage or cancer. How do such instances differ from cases where antisocial tendencies develop through prolonged use of drugs and alcohol? Our concept of offenders seems to require intent, and therefore at least some degree of consciousness.

Inextricably entangled, the hard problem of consciousness and the hard problem of cosmology pose questions about the consciousness-readiness, i.e., the observability factor, of sets of universes.

Existential Superpositions

Although intuition may suggest that a universe and matter exist even without perception, this idea is not without debate. Our intuitive understanding of existence appears to falter at the extremities. Berkeley’s dictum “Esse est percipi” (to be is to be perceived) hints at a superposition of existence and non-existence, with observers collapsing objects into existence through perception. Our Universe would then have been in such a mode until Observers arrived.

That means our natural universe was indistinguishable from rational universes until we labeled it “natural.” If Einstein-Rosen bridges somehow lead into rational universes, observers able to cross into them could “naturalize” them by filtering the natural parameters out of the imaginary chromatic parameter spectrum mentioned above. Since natural universes are a subset of rational universes, we could even speculate whether our universe was impregnated by observers from another parallel universe. They could naturalize rational universes much as interstellar humans could terraform inhospitable planets like Mars.

Freak observers

The connection between the Boltzmann Brain scenario and the Simulation Argument can be considered as follows: If we accept the possibility of Boltzmann Brains spontaneously forming in the universe, then we must confront the possibility that our own experiences are just as likely to be those of a Boltzmann Brain as those of a real human being in a “base” reality. This is similar to the Simulation Argument, where we might just as well be simulated consciousnesses in a computer program as real human beings in a physical universe.

The Boltzmann Brain scenario and the Simulation Argument also share the implication that our memories and perceptions of an ordered universe could be illusions. In the Boltzmann Brain scenario, a spontaneously formed brain could have false memories of a past that never happened. Similarly, in a simulated reality, our experiences and memories could be programmed into us, with no true past events to reference.

The concept of a solitary universe, or Soloverse, and a solitary reality, or Soloreality, appears incredibly improbable in a cosmos where singularity is virtually nonexistent. A simple glance at the numerous elements, objects, and entities that fill our universe reveals a fundamental truth – there is virtually no element or entity in our universe that is singularly unique, meaning that it bears no comparison or relation to anything else.

Indeed, our universe thrives on diversity, complexity, and interrelation. The multitude of celestial bodies, the variety of life forms, the richness of elements – all of these serve as testament to the fact that in our universe, nothing exists in isolation. Everything is part of a vast network of connections, constantly interacting and influencing one another.

With this in mind, it’s hard to imagine that this pattern doesn’t apply at the grandest level of existence. If no object or entity within our universe is unique and stands alone, why should the universe itself be any different? After all, wouldn’t it contradict the universal principle we’ve observed so far?

Thus, given what we understand about our universe and its intricate interconnectedness, the notion of a Soloverse and a Soloreality seems to be a stark outlier. It begs the question – if everything in the universe adheres to a pattern of interconnectedness and relatability, why should the universe itself be the exception?

Why should our universe and our universal reality be special, given the circumstances?

Hallucinations in the Multiverse


The term “hallucination” comes from the Latin word “hallucinari,” which means “to wander in the mind” or “to dream.” This term was later adopted into English and has retained much of its original meaning. The concept of hallucination refers to perceiving something that is not actually present in reality, which can be likened to dreaming while awake. In other words, it’s a perception in the absence of external stimulus that has qualities of real perception.

There are various theories regarding the connection between multiverses. One prominent possibility is that consciousness serves as the primary driving force behind this connection. Yet consciousness remains an elusive concept for those of us who contemplate it.

The simulation argument implies a hierarchy of simulations that create the multiverse. According to this order, there must exist a fundamental reality at the deepest level that contains all the answers. This primary reality is considered the one that contains the switch to power down all the simulated pseudo-realities within. It is the ultimate frame that unifies the entire game.

The concept of the multiverse implies a horizontal rather than vertical approach. Universes that branch off are not distinguished by their level of reality but are akin to siblings diverging in different directions. Rather than being contained within each other, these universes are entangled.

Neurological disorders such as seizures or hallucinations associated with schizophrenia may result from a deeper connection with diverse stimuli than what is typical of the average brain. Famous examples are Dostoevsky, Munch, Van Gogh, Frida Kahlo, and Philip K. Dick.

Drugs, especially mind-altering ones, could tear the membrane between alternate realities even further apart. Psychedelics would then, by rewiring our brains, give the brain’s internal network a new set of possibilities to play with. Done properly, under medical supervision, this could have large health benefits for a population on the brink of switching realities on demand.

That Large Language Models produce coherent fabrications, which may include references to plausible non-existent sources, hints at their ability to break down that membrane between coexisting realities beyond our current comprehension. As a result, it is feasible that quantum computers could explore this riddle at an even deeper level.

Many everyday conventions fail at the periphery, allowing us to infer from the findings of our brightest minds, such as Gödel and Turing, that absolutely objective knowledge, that is, knowledge free of any observer selection effects, cannot exist as naively expected by twentieth-century science.

On the outskirts of mathematics there are some truly strange phenomena, like the Banach-Tarski paradox, and counterintuitive surfaces, like the Möbius strip.

The Möbius strip does contradict many of our spatial intuitions. These contradictions often arise from the non-orientable nature of the Möbius strip and the fact that it has only one side and one edge, which are counterintuitive concepts given our everyday experiences.

  • One-sidedness: In our everyday experiences, objects have an ‘inside’ and an ‘outside’, or a ‘top’ and a ‘bottom’. However, the Möbius strip only has one side. If you start at any point on the strip and keep moving in one direction, you will eventually return to your starting point having traversed the entire surface, both what might intuitively seem like the ‘inside’ and the ‘outside’.
  • Cutting the strip: If you cut a regular loop (like a rubber band) down the middle, you would expect to have two separate loops. However, if you cut a Möbius strip down the middle, you end up with one long loop that has two full twists. This is counterintuitive based on our experiences with cutting objects.
  • Non-orientability: In mathematics, an object is orientable if it has two distinct sides that can be distinguished from each other. The Möbius strip is non-orientable, meaning there’s no way to differentiate between what we might think of as the ‘top’ and the ‘bottom’ of the strip. If you were a two-dimensional being living on the Möbius strip, you could move from what you perceive as the ‘top’ of your world to the ‘bottom’ without ever crossing an edge.
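The non-orientability described in the last point can be checked numerically. The sketch below, assuming the standard parametrization of the Möbius strip (the function name `normal` is illustrative, not from any particular library), transports a surface normal once around the center circle and shows that it returns flipped:

```python
import numpy as np

def normal(u, v=0.0):
    """Unit normal of the Möbius strip at parameters (u, v).

    Standard parametrization:
      r(u, v) = ((1 + v*cos(u/2))*cos(u),
                 (1 + v*cos(u/2))*sin(u),
                 v*sin(u/2))
    The normal is the cross product of the two partial derivatives.
    """
    ru = np.array([  # partial derivative with respect to u
        -np.sin(u) * (1 + v * np.cos(u / 2)) - v * 0.5 * np.sin(u / 2) * np.cos(u),
         np.cos(u) * (1 + v * np.cos(u / 2)) - v * 0.5 * np.sin(u / 2) * np.sin(u),
         v * 0.5 * np.cos(u / 2),
    ])
    rv = np.array([  # partial derivative with respect to v
        np.cos(u / 2) * np.cos(u),
        np.cos(u / 2) * np.sin(u),
        np.sin(u / 2),
    ])
    n = np.cross(ru, rv)
    return n / np.linalg.norm(n)

# u = 0 and u = 2*pi name the SAME point on the strip, yet after one full
# trip around the center circle the normal has flipped sign: the strip is
# non-orientable.
print(normal(0.0))        # points one way
print(normal(2 * np.pi))  # points the opposite way
```

A two-dimensional being carried along by this transport would end up on "the other side" of its world without ever crossing an edge, which is exactly the one-sidedness described above.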

The Möbius strip is a very good analogy for a mind or consciousness that reflects on itself. In meditation, it’s a technique called “observing your stream of consciousness.”

It’s like the Zen koan about the sound of one hand clapping.

If our realities and the minds that are observing them are linked in a chain, we should prepare for the case that there might be no higher or lower, no deeper or shallower, no absolute reality.

Our mind and our language keep bumping their heads against these limits: What was before time? What lies beyond the edge of the universe?

The mysterious function of sleep and dreams in many advanced mammals can be interpreted as a means of purging the alternative energies that accumulate from observing and shaping daily reality. Higher intelligence is likely to generate a greater variety of universes on a regular basis. The primary purpose of sleep would then be to reduce this rate of production and eliminate any lingering remnants of alternate realities that may cloud the mind. This slipping of reality is what could cause the mind to break down over extended periods of insomnia, with lethal consequences.

The illusion, then, would be to believe there is a final reality consciousness can bubble up to.

Arts and literature seem to have sensed the multiverse a lot sooner than the sciences. Let me explain with a rather personal story from my life.

One of the earliest short stories that made a deep impression on my then-teenage mind is “Das Glück am Weg”. Only many years later was I able to decipher that it can be understood as what I call a Psychic Fiction story, in which all the mind-bending happens in the inner world of the protagonist.

“Das Glück am Weg” is a short story by Hugo von Hofmannsthal, narrated by an unnamed protagonist who observes a woman on another ship through a telescope while aboard a ship himself. He is immediately drawn to her and attempts to recall where he knows her from. As he contemplates this, he experiences a range of feelings and memories, from music that reminds him of her, to specific scenes where he envisions her. He feels as though he has always known her and shares a special connection with her, even though he can’t precisely identify her.

He imagines a shared future with her, picturing in his mind how they would be sitting together on the terrace of a villa in Antibes, engaging in conversation. He is certain that they would speak a special language, and that her movements and expressions carry a deeper meaning. He feels his happiness lies in her, and that she embodies his desires and dreams.

However, he suddenly notices that the ships are moving apart, and he feels as though his life with her is slipping away. He watches as she slowly descends a staircase and disappears from his view, which to him symbolizes death and loss. He feels a profound emptiness, as though all being and all memory were disappearing with her. He continues to stare at the receding ship and finally notices its name, “La Fortune”, French for “luck” or “fortune”.

For many years, when I came back to the story, I did not quite catch the deeper meaning of it. It was like a beautiful sphinx. It is very tempting to come up with the interpretation that all these emotions and pictures are not real, only imaginary things in the head of a slightly neurotic mind.

I am rather sure everybody has similar events in their life where they not only get a glimpse into one of those other simulations but where they could swear these things have happened. I remember one morning about 10 years ago when I awoke after a dream, being sure the dream was reality. And when I realized my surroundings, I completely broke down in tears. The loss of this other reality was unbearable, and I was convinced that this actual reality here was the fake reality. Somehow, one of my alternate egos and I were exchanged overnight by accident, like two babies in a hospital, diverging their realities by simply getting the wrong name tags. The experience was so disturbing that it led to a mental breakdown, from which I only slowly recovered.

Last year, I met a woman who I was sure was my wife in another branch. When I smelled her and thought of her, I was certain that I had known her for a long time. I even woke up one morning and sensed her lying in bed beside me. I was overwhelmed by gratitude to experience that moment. Due to circumstances, I never pursued that relationship in this reality. Our realities only touched each other fleetingly, but in the few moments we connected, I had an intimacy with her that I had missed for a long time. She managed to make me happy simply by enjoying the unrealized potential of our relationship. The other strange thing is that I knew for sure that this was the wrong reality in which to realize that potential. I loved her deeply, but I was not ready to be loved by her. Whereas 10 years ago the unrealized potential had given me existential angst, I now had a better understanding of how to integrate this kind of fleeting bliss.

The fortunate encounter is written down in one of my own short stories that is a variation on the theme from Hofmannsthal. It will be published on this blog at a later date.