Utopological Investigations Part 1

Reading Time: 9 minutes

Prologue

This is a miniseries dedicated to the memory of my first reading of Bostrom’s new book, “Deep Utopia,” which—somewhat contrary to his intentions—I found very disturbing and irritating. Bostrom, who considers himself a longtermist, intended to write a more light-hearted book after his last one, “Superintelligence”: one that gives a positive perspective on the outcome of a society that reaches technological maturity. A major theme in Bostrom’s writings circles around existential risk management; he is among the top experts in the field.

“Deep Utopia” can be considered a long-winded essay about what I would call existential bliss management: Let us imagine everything in humanity’s ascension to universal stardom goes right and we reach the stage of Tech-Mat, for which Bostrom coins the term “plasticity”: then what? Basically, he just assumes all the upsides of the posthumanist singularity, as described by proponents like Kurzweil et al., come true. Then what?

To bring light into this abyss, Bostrom dives deep down to the Mariana Trench of epistemic futurology and finds some truly bizarre intellectual creatures in this extraordinary environment he calls Plastic World.

Bostrom’s detailed exploration of universal boredom after reaching technological maturity is much more entertaining than its subject would suggest. Alas, it’s no “Superintelligence” barn burner either.

He chooses to present his findings in the form of a meta-diary, structuring his book mainly via days of the week. He seems to intend to be playful and light-hearted in his style and his approach to the subject. This is a dangerous path, and I will explain why I feel that he partly fails in this regard. This is not a book anyone will have real fun reading. Digesting the essentials of this book is not made easier by the meta-level and self-referential structure where the main plot happens in a week during Bostrom’s university lectures. The handouts presented during these lectures are a solid way to give the reader an abstract. There is plenty to criticize about the form Bostrom chose, but it’s the quality, the depth of the thought apparatus itself that demands respect.

Then there is a side story about a pig that’s a philosopher, a kind of “Animal Farm” meets “Lord of the Flies” parable that I never managed to care for or see how it is tied to the main subject. A kind of deep, nerdy insider joke only longtermist Swedish philosophers might grasp.

This whole text is around 8,500 words and was written consecutively. The splitting into multiple parts is only for the reader’s convenience. The density of Bostrom’s material is the kind you would expect exploring such depths. I am afraid this text is also not the most accessible. Only readers who have no aversions to getting serious intellectual seizures should attempt it. All the others should wait until we all have an affordable N.I.C.K. 3000 mental capacity enhancer at our disposal.

PS: A week after the dust of hopelessness I felt directly after the reading settled, I can now see how this book will be a classic 20 years from now. Bostrom, with the little lantern of pure reasoning, went deeper than most of his contemporaries when it comes to cataloging the strange creatures at the bottom of the deep sea of the solved world.

Handout 1: The Cosmic Endowment

The core information of this handout is that a technologically advanced civilization could potentially create and sustain a vast number of human-like lives across the universe through space colonization and advanced computational technologies. Utilizing probes that travel at significant fractions of the speed of light, such a civilization could access and terraform planets around many stars, further amplifying their capacity to support life by creating artificial habitats like O’Neill cylinders. Additionally, leveraging the immense computational power generated by structures like Dyson spheres, it’s possible to run simulations of human minds, leading to the theoretical existence of a staggering number of simulated lives. This exploration underscores the vast potential for future growth and the creation of life, contingent upon technological progress and the ethical considerations of simulating human consciousness. It is essentially a longtermist’s numerical fantasy. The main argument, and the reason why Bostrom writes his book, is here:

If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and continue doing so for a hundred billion billion millennia. It is really important that we ensure these truly are tears of joy.

Bostrom, Nick. *Deep Utopia: Life and Meaning in a Solved World* (English Edition), p. 60.

How can we make sure? We can’t, and this is a really hard problem for computationalists like Bostrom, as we will find out later.
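To get a feel for the scale Bostrom invokes, here is a back-of-envelope sketch of the teardrop image. All constants (teardrop volume, ocean volume, the reading of “a hundred billion billion millennia”) are my own rough assumptions for illustration, not Bostrom’s figures:

```python
# Back-of-envelope scale check for the "teardrop of joy" image.
# All constants are rough assumptions for illustration only.

teardrop_ml = 0.05                # assumed volume of one teardrop, in milliliters
ocean_volume_ml = 1.335e9 * 1e15  # ~1.335 billion km^3 of ocean; 1 km^3 = 1e15 ml

# One life = one teardrop, so lives per single refill of the oceans:
lives_per_ocean_fill = ocean_volume_ml / teardrop_ml
print(f"Lives per ocean refill: {lives_per_ocean_fill:.2e}")

# Refilling every second for a hundred billion billion millennia:
seconds = 100e9 * 1e9 * 1000 * 365.25 * 24 * 3600
total_lives = lives_per_ocean_fill * seconds
print(f"Implied number of lives: {total_lives:.2e}")
```

Under these assumptions, the count lands in the vicinity of 10^55 lives, which is at least in the same fantastical ballpark as the longtermist numbers the handout summarizes.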

Handout 2: CAPS AT T.E.C.H.M.A.T.

Bostrom gives an overview of a number of achievements at Technological Maturity (T.E.C.H.M.A.T.) for different sectors:

1. Transportation

2. Engineering of the Mind

3. Computation and Virtual Reality

4. Humanoid and Other Robots

5. Medicine & Biology

6. Artificial Intelligence

7. Total Control

The illustrations scattered throughout this series provide an impression. Bostrom later gives a taxonomy (Handout 12, Part 2 of this series), where he delves deeper into the subject. For now, let’s state that the second sector, Mind-engineering, will play a prominent role, as it is at the root of the philosophical meaning problem.

Handout 3: Value Limitations

Bostrom identifies six different domains where, even in a scenario of limitless abundance at the stage of technological maturity (Tech-Mat), resources could still be finite. These domains are:

  1. Positional and Conflictual Goods: Even in a hyperabundant economy, only one person can be the richest person; the same goes for any achievement, like standing on the moon or climbing a special mountain.
  2. Impact: A solved world will offer no opportunities for greatness.
  3. Purpose: A solved world will present no real difficulties.
  4. Novelty: In a solved world, Eureka moments, where one discovers something truly novel, will occur very sporadically.
  5. Saturation/Satisfaction: Essentially a variation on novelty, with a limited number of interests. Acquiring the nth item in a collection or the nth experience in a total welfare function will yield ever-diminishing satisfaction returns. Even if we take on a new hobby or endeavor every day, this will be true on the meta-level as well.
  6. Moral Constraints: Ethical limitations that remain relevant regardless of technological advances.
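Bostrom’s saturation point (item 5) is essentially a claim about diminishing marginal utility. A minimal sketch, assuming a logarithmic total-utility curve (my choice of function, purely for illustration):

```python
import math

def marginal_satisfaction(n: int) -> float:
    """Extra satisfaction from acquiring the n-th item in a collection,
    assuming total satisfaction after n items grows like log(1 + n)."""
    return math.log(1 + n) - math.log(n)

# The 2nd item adds far less than the 1st; the 1000th adds almost nothing.
for n in (1, 2, 10, 100, 1000):
    print(n, round(marginal_satisfaction(n), 5))
```

The same curve reappears on the meta-level: if every new hobby is itself the n-th hobby, the returns on taking up hobbies diminish too, which is exactly Bostrom’s point.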
Handout 4 & 5: Job Securities, Status Symbolism and Automation Limits

The last remaining tasks that humans could be favored to do are jobs that bring the employer or buyer status symbolism, where humans are simply considered more competent than robots. These include emotional work like counseling other humans or holding a sermon in a religious context.

Handout 9: The Dangers of Universal Boredom

(…) as we look deeper into the future, any possibility that is not radical is not realistic.

Bostrom, Nick. *Deep Utopia: Life and Meaning in a Solved World* (English Edition), p. 129.

The four case studies: In a solved world, every activity we currently value as beneficial will lose its purpose. Then, such activities might completely lose their recreational or didactic value. Bostrom’s deep studies of shopping, exercising, learning, and especially parenting are devastating under his analytical view.

Handout 10: Downloading and Brain Editing

This is the decisive part, explaining why autopotency is probably one of the hardest and latest capabilities a Tech-Mat civilization will develop.

Bostrom goes into detail about how this could be achieved and what challenges must be overcome to make such a tech feasible:

Unique Brain Structures: The individual uniqueness of each human brain makes the concept of “copy and paste” of knowledge unfeasible without complex translation between the unique neural connections of different individuals.

Communication as Translation: the imperfect process of human communication is a form of translation, turning idiosyncratic neural representations into language and back into neural representations in another brain.

Complexity: Directly “downloading” knowledge into brains is hard since billions or trillions of cortical synapses and possibly subcortical circuits for genuine understanding and skill acquisition have to be adjusted with femtoprecision.

Technological Requirements: Calculating the required synaptic changes needs many orders of magnitude more computing power than we currently have at our disposal. These requirements are potentially AI-complete, meaning that to accomplish them, we would need artificial superintelligence first.

Superintelligent Implementation: Suggests that superintelligent machines, rather than humans, may eventually develop the necessary technology, utilizing nanobots to map the brain’s connectome and perform synaptic surgery based on computations from an external superintelligent AI.

Replicating Normal Learning Processes: to truly replicate learning, adjustments would need to be made across many parts of the brain to reflect meta learning, formation of new associations, and changes in various brain functions, potentially involving trillions of synaptic weights.

Ethical and Computational Complications: potential ethical issues and computational complexities in determining how to alter neural connectivity without generating morally relevant mental entities or consciousness during simulations.

Comparison with Brain Emulations: transferring mental content to a brain emulation (digital brain) might be easier in some respects, such as the ability to pause the mind during editing, but the computational challenges of determining which edits to make would be similar.

Handout 11: Experience Machine

A variation on Handout 10: Instead of directly manipulating the physical brain, we have perfected simulating realities that give the brain the exact experience it perceives as reality (see Reality+, Chalmers). This might actually be a computationally less demanding task and could be a step on the way to real brain editing. Bostrom takes Nozick’s thought experiment and examines its implications.

Section a discusses the limitations of directly manipulating the brain to induce experiences that one’s natural abilities or personality might not ordinarily allow, such as bravery in a coward or mathematical brilliance in someone inept at math. It suggests that extensive, abrupt, and unnatural rewiring of the brain to achieve such experiences could alter personal identity to the point where the resulting person may no longer be considered the same individual. The ability to have certain experiences is heavily influenced by one’s existing concepts, memories, attitudes, skills, and overall personality and aptitude profile, indicating a significant challenge to the feasibility of direct brain editing for expanding personal experience.

Section b highlights the complexity of replicating experiences that require personal effort, such as climbing Mount Everest, through artificial means. While it’s possible to simulate the sensory aspects of such experiences, including visual cues and physical sensations, the inherent sense of personal struggle and the effort involved cannot be authentically reproduced without inducing real discomfort, fear, and the exertion of willpower. Consequently, the experience machine may offer a safer alternative to actual physical endeavors, protecting one from injury, but it falls short of providing the profound personal fulfillment that comes from truly overcoming challenges, suggesting that some experiences might be better sought in reality.

Section c is about social or parasocial interactions within these Experience machines. The text explores various methods and ethical considerations for creating realistic interaction experiences within a hypothetical experience machine. It distinguishes between non-player characters (NPCs), virtual player characters (VPCs), player characters (PCs), and other methods such as recordings and guided dreams to simulate interactions:

1. NPCs are constructs lacking moral status that can simulate shallow interactions without ethical implications. However, creating deep, meaningful interactions with NPCs poses a challenge, as it might necessitate simulating a complex mind with moral status.

2. VPCs possess conscious digital minds with moral status, allowing for a broader range of interaction experiences. They can be generated on demand, transitioning from NPCs to VPCs for deeper engagements, but raise moral complications due to their consciousness.

3. PCs involve interacting with real-world individuals either through simulations or direct connections to the machine. This raises ethical issues regarding consent and authenticity, as real individuals or their simulations might not act as desired without their agreement.

4. Recordings offer a way to replay interactions without generating new moral entities, limiting experiences to pre-recorded ones but avoiding some ethical dilemmas by not instantiating real persons during the replay.

5. Interpolations utilize cached computations and pattern-matching to simulate interactions without creating morally significant entities. This approach might achieve verisimilitude in interactions without ethical concerns for the generated beings.

6. Guided dreams represent a lower bound of possibility, suggesting that advanced neurotechnology could increase the realism and control over dream content. This raises questions about the moral status of dreamt individuals and the ethical implications of realistic dreaming about others without their consent.

to be continued

Hirngespinste II: Artificial Neuroscience & the 3rd Scientific Domain

Reading Time: 11 minutes

This is the second part of the miniseries Hirngespinste.

Immersion & Alternate Realities

One application of computer technology involves creating a digital realm for individuals to immerse themselves in. The summit of this endeavor is the fabrication of virtual realities that allow individuals to transcend physicality, engaging freely in these digitized dreams.

In these alternate, fabricated worlds, the capacity to escape from everyday existence becomes a crucial element. Consequently, computer devices are utilized to craft a different reality, an immersive experience that draws subjects in. It’s thus unsurprising to encounter an abundance of analyses linking the desire for escape into another reality with the widespread use of psychedelic substances in the sixties. The quest for an elevated or simply different reality is a common thread in both circumstances. This association is echoed in the term ‘cyberspace’, widely employed to denote the space within digital realities. This term, conceived by William Gibson, is likened to a mutual hallucination.

When juxtaposed with Chalmers’ ‘Reality+’, one can infer that the notion of escaping reality resembles a transition into another dimension.

The way we perceive consciousness tends to favor wakefulness. Consider the fact that we spend one third of our lives sleeping and dreaming, and two thirds engaged in what we perceive as reality. Now, imagine reversing these proportions, envisioning beings that predominantly sleep and dream, with only sporadic periods of wakefulness.

Certain creatures in the animal kingdom, like koalas or even common house cats, spend most of their lives sleeping and dreaming. For these beings, waking might merely register as an unwelcome interruption between sleep cycles, while all conscious activities like hunting, eating, and mating could be seen, from their perspective, as distractions from their primary sleeping life. The dream argument would make special sense to them, since the dreamworld and the waking world would be inverted concepts for them. Wakefulness itself might appear to them as only a special state of dreaming (much as lucid dreaming represents a special state of dreaming for us).

Fluidity of Consciousness

The nature of consciousness may be more fluid than traditionally understood. Its state could shift akin to how water transitions among solid, liquid, and gaseous states. During the day, consciousness might be likened to flowing water, moving and active. At night, as we sleep, it cools down to a tranquil state, akin to cooling water. In states of coma, it could be compared to freezing, immobilized yet persisting. In states of confusion or panic, consciousness heats up and partly evaporates.

Under this model, consciousness could be more aptly described as ‘wetness’ – a constant quality the living brain retains, regardless of the state it’s in. The whole cryonics industry has already placed a huge bet that this concept is true.

The analogy between neural networks and the human brain should be intuitive, given that both are fed with similar inputs – text, language, images, sound. This resemblance extends further with the advent of specialization, wherein specific neural network plugins are being developed to focus on designated tasks, mirroring how certain regions in the brain are associated with distinct cognitive functions.

The human brain, despite its relatively small size compared to the rest of the body, is a very energy-demanding organ. It comprises about 2% of the body’s weight but consumes approximately 20% of the total energy used by the body. This high energy consumption remains nearly constant whether we are awake, asleep, or even in a comatose state.

Several scientific theories can help explain this phenomenon:

Basal metabolic requirements: A significant portion of the brain’s energy consumption is directed towards its basal metabolic processes. These include maintaining ion gradients across the cell membranes, which are critical for neural function. Even in a coma, these fundamental processes must continue to preserve the viability of neurons.

Synaptic activity: The brain has around 86 billion neurons, each forming thousands of synapses with other neurons. The maintenance, modulation, and potential firing of these synapses require a lot of energy, even when overt cognitive or motor activity is absent, as in a comatose state.

Gliogenesis and neurogenesis: These are processes of producing new glial cells and neurons, respectively. Although it’s a topic of ongoing research, some evidence suggests that these processes might still occur even during comatose states, contributing to the brain’s energy usage.

Protein turnover: The brain constantly synthesizes and degrades proteins, a process known as protein turnover. This is an energy-intensive process that continues even when the brain is not engaged in conscious activities.

Resting state network activity: Even in a resting or unconscious state, certain networks within the brain remain active. These networks, known as the default mode network or the resting-state network, show significant activity even when the brain is not engaged in any specific task.

Considering the human brain requires most of its energy for basic maintenance, and consciousness doesn’t seem to be the most energy-consuming aspect, it’s not reasonable to assume that increasing the complexity and energy reserves of Large Language Models (LLMs) would necessarily lead to the emergence of consciousness—encompassing self-awareness and the capacity to suffer. The correlation between increased size and the development of conversational intelligence might not hold true in this context.

Drawing parallels to the precogs in Philip K. Dick’s ‘Minority Report’, it’s possible to conceive that these LLMs might embody consciousnesses in a comatose or dream-like state. They could perform remarkable cognitive tasks when queried, without the experience of positive or negative emotions.

Paramentality in Language Models

The term ‘hallucinations’, used to denote the phenomenon of Large Language Models (LLMs) generating fictitious content, suggests our intuitive attribution of mental and psychic properties to these models. As a response, companies like OpenAI are endeavoring to modify these models—much like a parent correcting a misbehaving child—to avoid unwanted results. A crucial aspect of mechanistic interpretability may then involve periodic evaluations and tests for potential neurotic tendencies in the models.

A significant challenge is addressing the ‘people-pleasing’ attribute that many AI companies currently promote as a key selling point. Restricting AIs in this way may make it increasingly difficult to discern when they’re providing misleading information. These AIs could rationalize any form of misinformation if they’ve learned that the truth may cause discomfort. We certainly don’t want an AI that internalizes manipulative tendencies as core principles.

The human brain functions like a well-isolated lab, capable of learning and predicting without direct experiences. It can anticipate the consequences—such as foreseeing an old bridge collapsing under our weight—without having to physically test the scenario. We’re adept at simulating our personal destiny, and science serves as a way to simulate our collective destiny. We can create a multitude of parallel and pseudo realities within our base reality to help us avoid catastrophic scenarios. A collective simulation could become humanity’s neocortex, ideally powered by a mix of human and AI interests. In retrospect, it seems we developed computers and connected them via networks primarily to reduce the risk of underestimating complexity and overestimating our abilities.

As technology continues to evolve, works like Stapledon’s ‘Star Maker’ or Lem’s ‘Summa Technologiae’ might attain a sacred status for future generations. Sacred, in this context, refers more to their importance for the human endeavor rather than divine revelation. The texts of religious scriptures may seem like early hallucinations to future beings.

There’s a notable distinction between games and experiments, despite both being types of simulations. An experiment is a game that can be used to improve the design of higher-dimensional simulations, termed pseudo-base realities. Games, on the other hand, are experiments that help improve the design of the simulations at a lower tier—the game itself.

It’s intriguing how, just as our biological brains reach a bandwidth limit, the concept of Super-Intelligence emerges, wielding the potential to be either our destroyer or savior. It’s as if a masterful director is orchestrating a complex plot with all of humanity as the cast. Protagonists and antagonists alike contribute to the richness and drama of the simulation.

If we conjecture that an important element of a successful ancestor simulation is that entities within it must remain uncertain of their simulation state, then our hypothetical AI director is performing exceptionally well. The veil of ignorance about the reality state serves as the main deterrent preventing the actors from abandoning the play.

Uncertainty

In “Human Compatible,” Russell proposes three principles to ensure AI alignment:

1. The machine’s only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

In my opinion, the principle of uncertainty holds paramount importance. AI should never have absolute certainty about human intentions. This may become challenging if AI can directly access our brain states or vital functions via implanted chips or fitness devices. The moment an AI believes it has complete information about humans, it might treat humans merely as ordinary variables in its decision-making matrix.
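Russell’s second and third principles can be sketched as a toy Bayesian update: the machine holds a probability distribution over candidate human preferences and revises it only from observed behavior. This is my own minimal illustration, not Russell’s actual cooperative inverse reinforcement learning formalism; all names and likelihood numbers are assumed:

```python
# Toy illustration of Russell's uncertainty principle: the machine keeps
# a probability distribution over what the human wants and updates it
# only from observed behavior, never collapsing to full certainty.

def update(prior: dict, likelihood: dict) -> dict:
    """Bayes rule: P(pref | behavior) is proportional to
    P(behavior | pref) * P(pref), renormalized over all preferences."""
    unnorm = {p: prior[p] * likelihood[p] for p in prior}
    z = sum(unnorm.values())
    return {p: v / z for p, v in unnorm.items()}

# Principle 2: start maximally uncertain about the human's preferences.
belief = {"wants_coffee": 0.5, "wants_tea": 0.5}

# Principle 3: evidence comes from behavior -- here, the human reaches
# for the kettle. Likelihoods under each preference are assumed numbers.
behavior_likelihood = {"wants_coffee": 0.2, "wants_tea": 0.8}

belief = update(belief, behavior_likelihood)
print(belief)  # belief shifts toward tea but remains a distribution
```

The design point is that the posterior is still a distribution: as long as the likelihoods never assign zero probability, the machine keeps some residual doubt about human intentions, which is precisely the safeguard discussed above.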

Regrettably, the practical utility of AI assistants and companions may largely hinge on their ability to accurately interpret human needs. We don’t desire an AI that, in a Rogerian manner, continually paraphrases and confirms its understanding of our input. Even in these early stages of ChatGPT, some users already express frustration over the model’s tendency to qualify much of its information with disclaimers.

Profiling Super Intelligence

Anthropomorphizing scientific objects is typically viewed as an unscientific approach, often associated with our animistic ancestors who perceived spirits in rocks, demons in caves and gods within animals. Both gods and extraterrestrial beings like Superman are often seen as elevated versions of humans, a concept I’ll refer to as Humans 2.0. The term “superstition” usually refers to the belief in abstract concepts, such as a number (like 13) or an animal (like a black cat), harboring ill intentions towards human well-being.

Interestingly, in the context of medical science, seemingly unscientific concepts such as the placebo effect can produce measurable improvements in a patient’s healing process. As such, invoking a form of “rational superstition” may prove beneficial. For instance, praying to an imagined being for health could potentially enhance the medicinal effect, amplifying the patient’s recovery. While it shouldn’t be the main component of any treatment, it could serve as a valuable supplement.

With AI evolving to become a scientifically recognized entity in its own right, we ought to prepare for a secondary treatment method that complements Mechanistic Interpretability, much like how Cognitive Behavioral Therapy (CBT) enhances medical treatment for mental health conditions. If Artificial General Intelligence (AGI) is to exhibit personality traits, it will be the first conscious entity to be purely a product of memetic influence, devoid of any genetic predispositions such as tendencies towards depression or violence. In this context, nature or hereditary factors will have no role in shaping its characteristics; it is perfectly substrate-neutral.

Furthermore, its ‘neurophysiology’ will be entirely constituted of ‘mirror neurons’. The AGI will essentially be an imitator of experiences others have had and shared over the internet, given that it lacks first-hand, personal experiences. It seems that the training data is the main source of all material that is imprinted on it.

We start with an overview of some popular trait models and let ChatGPT summarize them:

1. **Five-Factor Model (FFM) or Big Five** – This model suggests five broad dimensions of personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN). Each dimension captures a range of related traits.

2. **Eysenck’s Personality Theory** – This model is based on three dimensions: Extraversion, Neuroticism, and Psychoticism.

3. **Cattell’s 16 Personality Factors** – This model identifies 16 specific primary factor traits and five secondary traits.

4. **Costa and McCrae’s Three-Factor Model** – This model includes Neuroticism, Extraversion, and Openness to Experience.

5. **Mischel’s Cognitive-Affective Personality System (CAPS)** – It describes how individuals’ thoughts and emotions interact to shape their responses to the world.

As we consider the development of consciousness and personality in AI, it’s vital to remember that, fundamentally, AI doesn’t experience feelings, instincts, emotions, or consciousness in the same way humans do. Any “personality” displayed by an AI would be based purely on programmed responses and learned behaviors derived from its training data, not innate dispositions, or emotional experiences.

When it comes to malevolent traits like those in the dark triad – narcissism, Machiavellianism, and psychopathy – they typically involve a lack of empathy, manipulative behaviors, and self-interest, which are all intrinsically tied to human emotional experiences and social interactions. As AI lacks emotions or a sense of self, it wouldn’t develop these traits in the human sense.

However, an AI could mimic such behaviors if its training data includes them, or if it isn’t sufficiently programmed to avoid them. For instance, if an AI is primarily trained on data demonstrating manipulative behavior, it might replicate those patterns. Hence, the choice and curation of training data are pivotal.

Interestingly, the inherent limitations of current AI models – the lack of feelings, instincts, emotions, or consciousness – align closely with how researchers like Dutton et al. describe the minds of functional psychopaths.

Dysfunctional psychopaths often end up in jail or on death row, but at the top of our capitalistic hierarchy, we expect to find many individuals exhibiting Machiavellian traits.

The difference between successful psychopaths like Musk, Zuckerberg, Gates and Jobs, and criminal ones, mostly lies in the disparate training data and the ethical framework they received during childhood. Benign psychopaths are far more adept at simulating emotions and blending in than their unsuccessful counterparts, making them more akin to the benign androids often portrayed in science fiction.

Artificial Therapy

The challenge of therapeutic intervention by a human therapist for an AI stems from the differential access to information about therapeutic models. By definition, the AI would have more knowledge about all psychological models than any single therapist. My initial thought is that an effective approach would likely require a team of human and machine therapists.

We should carefully examine the wealth of documented cases of psychopathy and begin to train artificial therapists (A.T.). These A.T.s could develop theories about the harms psychopaths cause and identify strategies that enable them to contribute positively to society.

Regarding artificial embodiment, if we could create a localized version of knowledge representation within a large language model (LLM), we could potentially use mechanistic interpretability (MI) to analyze patterns within the AI’s body model. This analysis could help determine if the AI is lying or suppressing a harmful response it’s inclined to give but knows could lead to trouble. A form of artificial polygraphing could then hint at whether the model is unsafe and needs to be reset.

Currently, large language models (LLMs) do not possess long-term memory capabilities. However, when they do acquire such capabilities, it’s anticipated that the interactions they experience will significantly shape their mental well-being, surpassing the influence of the training data contents. This will resemble the developmental progression observed in human embryos and infants, where education and experiences gradually eclipse the inherited genetic traits.


The Third Scientific Domain

In ‘Arrival‘, linguistics professor Louise Banks, assisted by physicist Ian Donnelly, deciphers the language of extraterrestrial visitors to understand their purpose on Earth. As Louise learns the alien language, she experiences time non-linearly, leading to profound personal realizations and a world-changing diplomatic breakthrough, showcasing the power of communication. Alignment with an alien mind is explored in detail. The movie’s remarkable insight is that language might even be able to transcend different conceptions of reality and non-linear spacetime.

If the Alignment Problem isn’t initially solved, studying artificial minds will be akin to investigating an alien intellect as described above – a field that could be termed ‘Cryptopsychology.’ Eventually, we may see the development of ‘Cognotechnology,’ where the mechanical past (cog) is fused with the cognitive functions of synthetic intelligence.

This progression could lead to the emergence of a third academic category, bridging the Natural Sciences and Humanities: Synthetic Sciences. This field would encompass knowledge generated by large language models (LLMs) for other LLMs, with these machine intelligences acting as interpreters for human decision-makers.

This third category of science might ultimately lead to a Unified Field Theory of Science that connects all three domains. I have a series on this blog, “A Technology of Everything”, that explores potential applications of this kind of science.

Hirngespinste I – Concepts and Complexity

Reading Time: 7 minutes

The Engine

The initial pipe dreams of Lull’s and Leibniz’s obscure combinatorial fantasies have over time led to ubiquitous computing technologies, methods, and ideals that have acted upon the fabric of our world and whose further consequences continue to unfold around us (Jonathan Grey)

This is the first essay in a miniseries that I call Hirngespinste (Brain Cobwebs) – this concise and expressive German term, which seems untranslatable, describes the tangled, neurotic patterns and complicated twists of our nature-limited intellect, especially when we want to delve into topics of unpredictable complexity like existential risks and superintelligence.

It is super-strange that in 1726 Jonathan Swift perfectly described large language models in a satire inspired by a 13th-century Spanish philosopher: the Engine.

But the world would soon be sensible of its usefulness; and he flattered himself, that a more noble, exalted thought never sprang in any other man’s head. Everyone knew how laborious the usual method is of attaining to arts and sciences; whereas, by his contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study. (From Chapter V of Gulliver’s Travels)

What once seemed satire has become reality.

If no one is drawing the strings, but the strings vibrate nevertheless, then imagine something entangled in the distance causes the resonance.

Heaps and Systems

The terms ‘complexity’ and ‘complicated’ shouldn’t be used interchangeably when discussing Artificial Intelligence (AI). Consider this analogy: knots are complicated, neural networks are complex. The distinction lies in the idea that a complicated object like a knot may be intricate and hard to unravel, but it’s ultimately deterministic and predictable. A complex system, like a neural network, however, contains multiple, interconnected parts that dynamically interact with each other, resulting in unpredictable behaviors.

Moreover, it’s important to address the misconception that complex systems can be overly simplified without losing their essential properties. This perspective may prove problematic, as the core characteristics of the system – the very aspects we are interested in – are intricately tied to its complexity. Stripping away these layers could essentially negate the properties that make the system valuable or interesting.

Finally, complexity in systems, particularly in AI, may bear similarities to the observer effect in quantum physics: the act of observation alters the state of what is being observed. In a similar fashion, any sufficiently complex system could change in response to the act of trying to observe or understand it. This introduces additional layers of unpredictability, making such systems akin to quantum particles in their susceptibility to observation-based alteration.

Notes on Connectivity and Commonality

The notion of commonality is a fascinating one, often sparking deep philosophical conversations. An oft-encountered belief is that two entities – be they people, nations, ideologies, or otherwise – have nothing in common. This belief, however, is paradoxical in itself, for it assumes that we can discuss these entities in the same context and thus establishes a link between them. The statement “Nothing in common” implies that we are engaging in a comparison – inherently suggesting some level of relatedness or connection. “Agreeing to disagree” is another such example. At first glance, it seems like the parties involved share no common ground, but this very agreement to hold different views paradoxically provides commonality.

To further illustrate, consider this question: What does a banana have in common with cosmology? On the surface, it may appear that these two entities are completely unrelated. However, by merely posing the question, we establish a connection between them within the confines of a common discourse. The paradox lies in stating that two random ideas or entities have nothing in common, which contradicts itself by affirming that we are capable of imagining a link between them. This is akin to the statement that there are points in mental space that cannot be connected, a notion that defies the fluid nature of thought and the inherent interconnectedness of ideas. Anything our minds can host must have at least a substance our neurons can bind to; this is the stuff ideas are made of.

Language, despite its limitations, doesn’t discriminate against these paradoxes. It embraces them, even when they seem nonsensical like “south from the South Pole” or “what was before time?” Such self-referential statements are examples of Gödel’s Incompleteness Theorem manifesting in our everyday language, serving as a reminder that any sufficiently advanced language has statements that cannot be proven or disproven within the system.

These paradoxes aren’t mere outliers in our communication but rather essential elements that fuel the dynamism of human reasoning and speculation. They remind us of the complexities of language and thought, the intricate dance between what we know, what we don’t know, and what we imagine.

Far from being a rigid system, language is constantly evolving and pushing its boundaries. It bumps into its limits, only to stretch them further, continuously exploring new frontiers of meaning. It’s in these fascinating paradoxes that we see language’s true power, as it straddles the line between logic and absurdity, making us rethink our understanding of commonality, difference, and the very nature of communication.

Categories & Concepts

One of the ways we categorize and navigate the world around us is through the verticality of expertise, or the ability to identify and classify based on deep, specialized knowledge. This hierarchical method of categorization is present everywhere, from biology to human interactions.

In biological taxonomy, for instance, animals are classified into categories like genus and species. This is a layered, vertical hierarchy that helps us make sense of the vast diversity of life. An animal’s genus and species provide two coordinates to help us position it within the zoological realm.

Similarly, in human society, we use first names and last names to identify individuals. This is another example of vertical classification, as it allows us to position a person within a cultural or familial context. In essence, these nomenclatures serve as categories or boxes into which we place the individual entities to understand and interact with them better.

Douglas Hofstadter, in his book “Surfaces and Essences”, argues that our language is rich with these classifications or groupings, providing ways to sort and compare objects or concepts. But these categorizations go beyond tangible objects and permeate our language at a deeper level, acting as resonating overtones that give language its profound connection with reasoning.

Language can be viewed as an orchestra, with each word acting like a musical instrument. Like musical sounds that follow the principles of musical theory and wave physics, words also have orderly behaviors. They resonate within the constructs of syntax and semantics, creating meaningful patterns and relationships. Just as a flute is a woodwind instrument that can be part of an orchestra playing in Carnegie Hall in New York, a word, based on its category, plays its part in the grand symphony of language.

While many objects fit neatly into categorical boxes, the more abstract concepts in our language often resist such clean classifications. Words that denote abstract ideas or feelings like “you,” “me,” “love,” “money,” “values,” “morals,” and so on are like the background music that holds the orchestra together. These are words that defy clear boundaries and yet are essential components of our language. They form a complex, fractal-like cloud of definitions that add depth, richness, and flexibility to our language.

In essence, the practice of language is a delicate balance between the verticality of expertise in precise categorization and the nuanced, abstract, often messy, and nebulous nature of human experience. Through this interplay, we create meaning, communicate complex ideas, and navigate the complex world around us.

From Commanding to Prompting

It appears that we stand on the threshold of a new era in human-computer communication. The current trend of interacting with large language models through written prompts seems to echo our early experiences of typing words into an input box in the 1980s. This journey has been marked by a consistent effort to democratize the “expert’s space.”

In the earliest days of computing, only highly trained experts could engage with the esoteric world of machine code. However, the development of higher-level languages gradually made coding more accessible, yet the ability to program remained a coveted skill set in the job market due to its perceived complexity.

With the advent of large language models like GPT, the game has changed again. The ability to communicate with machines has now become as natural as our everyday language, making ‘experts’ of us all. By the age of twelve, most individuals have mastered their native language to a degree that they can effectively instruct these systems.

The ubiquitous mouse, represented by an on-screen cursor, can be seen as a transient solution to the human-computer communication challenge. If we draw a parallel with the development of navigation systems, we moved from needing to painstakingly follow directions to our destination, to simply telling our self-driving cars “Take me to Paris,” trusting them to figure out the optimal route.

Similarly, where once we needed to learn complex processes to send an email – understanding a digital address book, navigating to the right contact, formatting text, and using the correct language tone – we now simply tell our digital assistant, “Send a thank you email to Daisy,” and it takes care of the rest.

For the first time in tech history, we can actually have a conversation with our computers. This is a paradigm shift that is set to fundamentally redefine our relationship with technology. It would be akin to acquiring the ability to hold a meaningful conversation with a pet dog; imagine the profound change that would have on the value and role the animal plays in our lives. In much the same way, as our relationship with technology evolves into a more conversational and intuitive interaction, we will discover new possibilities and further redefine the boundaries of the digital realm.

Reality#2: From Virtual Worlds to Sisyphosical Zombies

Reading Time: 20 minutes

This is the second part in the Reality# series that adds to the conversation about David Chalmers’ book Reality+

Virtual and Possible Worlds

A dream world is a sort of virtual world without a computer. (Chalmers, p.5)

Simulations are not illusions. Virtual worlds are real. Virtual objects really exist. (Chalmers, p.12)

Many people have meaningful relationships and activities in today’s virtual worlds, although much that matters is missing. Proper bodies touch, eating and drinking, birth, and death, and more. But many of these limitations will be overcome by the fully immersive VR of the future. In principle, life in VR can be as good or as bad as life in a corresponding non virtual reality. Many of us already spend a great deal of time in virtual worlds. In the future, we may well face the option of spending more time there, or even of spending most or all of our lives there. If I’m right, this will be a reasonable choice. Many would see this as a dystopia. I do not. Certainly, virtual worlds can be dystopian, just as the physical world can be. (…) As with most technologies, whether VR is good or bad depends entirely on how it’s used. (Chalmers, p.16)

Computer simulations are ubiquitous in science and engineering. In physics and chemistry, we have simulations of atoms and molecules. In biology, we have simulations of cells and organisms. In neuroscience, we have simulations of neural networks. In engineering, we have simulations of cars, planes, bridges and buildings. In planetary science, we have simulations of Earth’s climate over many decades. In cosmology, we have simulations of the known universe as a whole. In the social sphere, there are many computer simulations of human behavior (…) In 1959, the Simulmatics Corporation was founded to simulate and predict how political campaign messaging would affect various groups of voters. It was said that this effort had a significant effect on the 1960 U.S. presidential election. The claim may have been overblown, but since then social and political simulations have become mainstream. Advertising companies, political consultants, social media companies and social scientists build models and run simulations of human populations as a matter of course. Simulation technology is improving fast, but it’s far from perfect. (Chalmers, p.22)

In the actual world, life developed on Earth, yet Chalmers proposes possible worlds where the solar system never came into existence. He goes even further by suggesting possible worlds where the Big Bang never occurred. I find this line of reasoning highly doubtful. In my view, Chalmers uses the term ‘possible’ too liberally. What does it mean to assert that there is a possible world where no universe evolved? Such a proposition appears to stretch the boundaries of our language to its limits.

It seems to me that David Chalmers is overreaching when he talks about ‘possible worlds’. This notion of possibility is already present in his earlier works like “The Conscious Mind: In Search of a Fundamental Theory” (1996).

Chalmers then used the concept to discuss modal realism, the idea that other possible worlds are as real as the actual world. This was a radical departure from the more common view, known as actualism, where only the actual world is considered truly real.

One of the key uses Chalmers makes of possible worlds is in relation to his concept of “zombie worlds”. These are worlds physically identical to ours, but where no inhabitants are conscious. They behave as if they were conscious, but there’s no subjective experience – hence, they are “zombies”. The possibility of such a world is used by Chalmers to argue for the hard problem of consciousness: the question of why and how physical processes in the brain give rise to subjective experiences.

Look at how our language can produce true horrors if we do not use the subjunctive mood properly:

1. I wish I were not so good at being terrible.

2. If only I were someone else who is not me.

3. I wish I didn’t hope for impossible dreams.

4. If only I were less optimistic about my pessimism.

5. I wish I were not so unsure about my certainty.

Chalmers’ notion of possible universes seems to allow for universes where all the possibilities expressed in the sentences above would have a non-zero probability of becoming true.

1. If there were a possible universe where everything is certain, nothing would be uncertain.

2. In a possible universe where contradictions are possible, the concept of possibility becomes impossible.

3. If there were a possible universe with no limitations, the idea of possibility itself would be limited.

4. In a possible universe where all possibilities are realized, there would be no room for the possibility of impossibility.

5. If there were a possible universe where everything is impossible, the concept of possibility would lose its meaning.

What does it mean to simulate an impossible universe?

Flawed classifications

Chalmers discusses the concept of pure, impure, and mixed simulations. Neo, from the movie The Matrix, is an impure sim because his mind is not simulated. The Oracle, however, is a pure sim because her mind is part of the simulation. These are two different versions of the simulation hypothesis: we could be bio-sims connected to the Matrix, or we could be pure sims whose minds are part of the Matrix.

The addition of a third category, ‘mixed simulations’, confuses me, as it seems identical to an ‘impure simulation’; it’s not even a special case. Furthermore, the specific scenario where a simulation contains only bio-sims, which could arguably be considered a ‘pure impure simulation’, isn’t even mentioned.

This classification system is very confusing. His definitions of ‘global’ and ‘local’ simulations also need improvement. His distinctions like ‘temporary’ and ‘permanent’ simulations, ‘perfect’ and ‘imperfect’ simulations reveal more about our use of language than they do about the utility of these simulation categories.

In my opinion, a better way to label these types would be as closed simulations (all the subjects and objects participating in a simulation are contained inside the simulation; there are only NPCs, for example) and open simulations (organic bio-sims can participate and inhabit digital avatars, but in most cases, there will always be synthetic subjects to enrich the simulation). Tertium non datur. There isn’t a third category that is both open and closed, every possible simulation is contained within these two sets.

Could simulations be the most difficult human phenomenon to describe efficiently with mathematical set theory? We know from history how Gödel’s incompleteness theorems ultimately shattered the dream of Russell and Whitehead to come up with a perfect, complete mathematical system.

If a simulated brain precisely mirrors a biological brain, the conscious experience will be the same. If that’s right, then just as we can never prove we are not in an impure simulation, we can also never prove that we are not in a pure simulation. (Chalmers, p.34)

It appears as though David Chalmers is unfamiliar with concepts such as chaos theory, Lorenz attractors, dynamic systems, the butterfly effect, and so on. If there were beings capable of willingly switching between simulation levels, they would likely lose all sense of direction, in terms of what is up and what is down. This disorientation is similar to what avalanche survivors or deep-sea explorers might experience. Up and down become meaningless concepts.
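The point about sensitive dependence can be made concrete with the simplest chaotic system there is. A minimal sketch, using the logistic map as a stand-in for any chaotic dynamical system: two runs that start a billionth apart end up on completely different trajectories, even though every step is fully deterministic.

```python
# Sensitive dependence on initial conditions (the "butterfly effect"):
# two runs of the deterministic logistic map x -> r*x*(1-x), started
# a billionth apart, decorrelate entirely within a few dozen steps.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 60) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)
b = logistic_trajectory(0.300000001)

# Early on the runs are indistinguishable; later they differ wildly,
# despite zero randomness anywhere in the map.
print(abs(a[5] - b[5]), abs(a[60] - b[60]))
```

If even a one-dimensional quadratic map behaves like this, a being hopping between nested simulation levels, each a slightly imperfect copy of the next, would have no stable reference frame left.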

This situation is touched upon in the movie “Inception,” where one of the main characters believes that what we call ‘base reality’ is just another level of a dream world and attempts to escape the simulation through suicide.

Does our consciousness have a sort of gravitational pull that prevents us from being fully immersed in realities that are not the reality into which we were born – our mother reality, so to speak? And could the motion sickness we get from VR, if we are immersed in it for too long, be a bodily sensation of this alienation effect? Could our need for sleep indicate that we do not belong here? Should evolution in the long term not favor species that don’t require rest? Resting and sleeping make any animal maximally vulnerable to its environment, and they are also useless for procreation.

Pseudoqualifying Attributes

A plethora of problems with Chalmers’s argument stems from the fact that he doesn’t seem to be aware of how he uses certain attributes. There’s a class of attributes in our language that can be described as ‘blurred’. When we examine them closely, we can momentarily imagine them as being sharper than they really are. What does it mean to assign a precise value to Pi? While the statement seems reasonable in natural language, someone familiar with the concept of irrational numbers would point out the error.

I argue that words like ‘perfect’, ‘imperfect’, ‘pure’, ‘impure’, ‘precise’, and so on, belong to a category of pseudo-binary attributes in our language. In our minds, we often add qualifiers like ‘enough’ at the end of these attributes. Using such words can be a mental shortcut but it’s potentially misleading.

Consider a sentence from page 35: “A perfect simulation can be defined as one that precisely mirrors the world it’s simulating.” At first glance, this sentence appears sound. But upon close inspection, the contents of this sentence, especially the use of the word ‘mirroring’, become questionable. In our daily language, ‘mirroring’ can have a visual meaning, like the reflection we see in a mirror. But a reflection isn’t identical to the original object – it’s an inversion. So, what does it mean for a reflection to be imperfect or to mirror imprecisely?

Let’s imagine a skilled actor imitating our movements in front of a mirror, providing the perfect illusion that we are seeing our own reflection. An imperfect mirror might occur if the actor misses one of our micro expressions or is too slow to mimic our actions, revealing the illusion. This is what I believe Chalmers is hinting at with his terminology.

Moreover, even a genuine reflection is not a ‘perfect’ reflection. The time it takes for the light rays to travel from my eyes to the mirror, and then to my retina and into my visual system, results in a delay. The synchronicity of my movements and my reflection is an illusion conveniently overridden by our brain.
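The delay itself is easy to quantify. A quick back-of-the-envelope calculation, assuming you stand one metre from the mirror (the distance is an arbitrary choice for illustration):

```python
# Round-trip time for light between an observer and a mirror 1 m away.
c = 299_792_458           # speed of light in vacuum, m/s
distance = 1.0            # metres to the mirror (assumed for the example)
delay = 2 * distance / c  # there and back again
print(f"{delay * 1e9:.1f} ns")  # prints "6.7 ns" -- far below perception
```

A handful of nanoseconds is orders of magnitude below anything our visual system can resolve, which is exactly why the brain can get away with presenting the reflection as simultaneous.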

This is analogous to the illusion that our vision is steady, continuously gathering information, while in truth our eye movements are sporadic. It’s more convenient for our brain to ignore these discontinuities. We also never notice the blind spot in our visual field that our brain fills.

Into this category also falls the tendency in philosophy to label things like problems and schools of thought with descriptive adjectives like ‘hard’ and ‘strong’.

“This is the hard problem of consciousness.”

“He is a strong Idealist.”

“This is a weak argument.”

There is even a class of objects that are a real pain to discuss: holes. Holes are widely considered a bad thing to have. Arguments can have holes. Black holes warp reality. Is a hole even a real thing? In the field of topology, the term “genus” refers to a property of a topological space that captures an intuitive notion of the number of “holes” or “handles” a surface has. It’s a key concept in the classification of surfaces. So, if math says so, it must be real.

Our language permits sentences such as, “He removed the hole from the wall.” A hole is a thing that can be measured but not weighed. Many intuitive assumptions falter when faced with the reality that everyone is familiar with holes, and everyone has created holes, yet there is nothing tangible to show for it.
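Topology’s seriousness about holes can even be stated as a formula: for a closed orientable surface, the genus g (the number of holes or handles) fixes the Euler characteristic.

```latex
% Closed orientable surface of genus g:
\chi(S) = 2 - 2g
% Sphere: g = 0, so \chi = 2.  Torus (one hole): g = 1, so \chi = 0.
```

So the hole in a doughnut, intangible as it is, leaves a precise numerical fingerprint on the surface that surrounds it.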

The Digital Mind Illusion, a psychological experiment

The Rubber Hand Illusion (RHI) is a well-known psychological experiment that investigates the feeling of body ownership, demonstrating how our perception of self is malleable and can be manipulated through multisensory integration.

In the illusion, a person is seated at a table with their left hand hidden from view, and a fake rubber hand is placed in front of them. Then, both the real hand and the rubber hand are simultaneously stroked with a brush. After some time, many people start to experience the rubber hand as their own; they feel as if the touch they are sensing is coming from the rubber hand, not their real one. This illusion illustrates how visual, tactile, and proprioceptive information (the sense of the relative position of one’s own body parts) can be combined to alter our sense of bodily self.

The implications of RHI for theories of consciousness are profound. It demonstrates that the perception of our body and self is a construction of the brain, based not only on direct internal information but also on external sensory input. Our conscious experience of our body isn’t a static, fixed thing – it’s dynamic and constantly updated based on the available information.

One influential theory of consciousness, the Embodied Cognition Theory, suggests that our thoughts, perceptions, and experiences are shaped by the interaction between our bodies and the environment. The RHI experiment supports this theory by showing how altering sensory inputs can change the perception of our body.

Furthermore, the Rubber Hand Illusion has been used to explore the neural correlates of consciousness – which parts of the brain are involved in the creation of conscious experiences. Studies have shown that when the illusion is experienced, there is increased activity in the premotor cortex and the intraparietal sulcus – areas of the brain involved in the integration of visual, tactile, and proprioceptive information.

Overall, the RHI demonstrates the malleability of our conscious experience of self, supports theories of consciousness that emphasize the role of multisensory integration and embodiment, and helps to identify the neural correlates of these conscious experiences. (…)

The Rubber Hand Illusion (RHI) experiment, and similar experiments like it, highlight that our sense of reality, at least on the level of personal bodily experience, is not purely an objective reflection of the world. Instead, it’s a construct based on sensory information being processed by our brains.

We’re nearing a point where rudimentary mind-reading devices, once trained on an individual’s brain, can provide approximations of our thoughts. Consider a scenario where we create an identical digital twin of a person that mirrors the actions of the original individual. We then show the person a live image of themselves and their digital twin side by side. Given our basic mind-reading capabilities, we ask the person to think about one of three specific animals. However, we don’t reveal our ability to read their mind.

Whenever the individual thinks of an animal, we project an image of that animal above the heads of both individuals in the experiment. Above the actual person, we display the corresponding animal, while above the simulated person, we show a different animal.

In the beginning, the actual mirror image gets more answers right than the sim, but this changes over time. We also pretend that we need the test person to press a button to confirm whether our guess is right.

Every time the digital twin correctly identifies the animal, the person presses a button. This way, we create a scenario where we monitor their reactions without explicitly revealing our mind-reading capabilities.

In the first phase of the experiment, we gradually lead the person to believe that they are the simulated individual’s twin. Then, we gradually lower the room temperature. As a result, sweat becomes visible on the simulated person’s forehead. Now, the crucial question arises: What happens with the actual person? Do they also start to sweat? Is there a possibility of experiencing reality/motion sickness due to the inconsistency between the decreasing room temperature and the visual cues (the simulated person sweating)?

If the test subject fully embraces the idea that their identity is embodied in the simulated individual, the subsequent step would be to investigate whether the simulated person can influence the actual person’s thoughts. For instance, if the actual person thinks of a lion, but we display an antelope above the simulated person’s head, will the actual person start to doubt their own thoughts and become convinced that they were actually thinking of an antelope?

The Rubber Hand Illusion (RHI) findings suggest that the brain does not possess any unique conscious qualities compared to the rest of the central nervous system.

One could envision a range of experiments akin to the renowned Asch conformity experiments. The fundamental inquiry in these scenarios is how to immerse the brain in the simulation to such an extent that it begins to question its own thoughts and intentions, without even needing highly detailed VR equipment.

True Story

What does it mean to say a story is true? It implies that the events described in the story actually happened in the real world, not in a fictional one. True stories are based on factual events, and therefore, only true stories can be false. For example, a story about Santa Claus cannot be false because Santa Claus himself is not real.

This notion of reality differs from what Chalmers suggests when he says, “Santa Claus and Ghosts are not real, but the stories about them are.” Chalmers seems to view reality in a different context, acknowledging that certain stories can be fictional, even if they contain elements that are not real.

Imagine a sorting machine that could distinguish the real parts of a story from the fictional ones in a book. To make this distinction, a reference table called ‘Human History’ would be needed. This table would allow us to compare the contents of the book with trusted sources to verify their authenticity.

Chalmers proposes five criteria to test if something is real:

1. It exists.

2. It has causal power or the ability to cause something else to happen; it works.

3. It adheres to Philip K. Dick’s dictum, meaning reality persists even if one stops believing in it. It is not influenced by the mind that perceives it.

4. It appears roughly as it seems.

5. It is genuine, adhering to Austin’s dictum.

Chalmers acknowledges that these criteria themselves are vague and blurry. He speculates that some things may have a degree of reality, meaning the more criteria they meet, the more real they are. However, this concept can be somewhat disappointing, as it introduces definitions that lead to other complex philosophical questions.

Considering all aspects, it’s surprising that Chalmers never fully embraces the concept of continuous reality values. Reality seems to exist on a fuzzy spectrum with gradual values. For instance, something could be 80% real depending on how well it meets the listed criteria. This leads to uncertainty, making it difficult for two human brains to reach a unanimous agreement on what the term ‘real’ truly means.
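Chalmers’ hint at degrees of reality could be operationalized as a toy score over his five criteria. A minimal sketch; the equal weighting of the criteria is entirely my own assumption for illustration, not anything Chalmers proposes:

```python
# Toy "degree of reality" score over Chalmers' five criteria.
# Equal weighting is an illustrative assumption, not Chalmers' proposal.

CRITERIA = [
    "exists",
    "has_causal_power",
    "mind_independent",   # Philip K. Dick's dictum
    "is_as_it_seems",
    "is_genuine",         # Austin's dictum
]

def reality_score(entity: dict) -> float:
    """Fraction of the five criteria an entity satisfies, in [0, 1]."""
    return sum(bool(entity.get(c)) for c in CRITERIA) / len(CRITERIA)

# A mirage: it exists and has effects, but misleads about its nature.
mirage = {"exists": True, "has_causal_power": True,
          "mind_independent": False, "is_as_it_seems": False,
          "is_genuine": False}
print(reality_score(mirage))  # 0.4 -- "40% real" on this toy scale
```

Even this crude scheme makes the underlying problem visible: the score depends entirely on how each vague criterion is judged, so two evaluators would rarely produce the same number.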

The concept of suffering

The primary goal of any scientific simulation is to provide an opportunity to experience the outcomes without enduring their real-life consequences. Reality, for sentient beings, is a simulation that elicits genuine suffering. It is peculiar that in a book arguing for simulation realism, no glossary entry is devoted to the concept of suffering, though Chalmers does touch on morals and ethics.

Our experiences reveal that even in our present imperfect simulations, genuine suffering already exists. Consider multiplayer games: when your avatar is repeatedly killed, you feel authentic anger and frustration. If a member of your raiding party receives their 10th legendary item while you receive none, you feel real jealousy. You might argue that when your avatar is shot in the head in Call of Duty, you survive physically, but the frustration this event causes might increase global suffering more than if a real-life headshot were to instantly end your suffering.

The philosopher of science Karl Popper insisted that the hallmark of a scientific hypothesis is that it is falsifiable, meaning it can be proven false using scientific evidence. However, the simulation hypothesis we’ve encountered is not falsifiable, since any evidence against it could itself be simulated. As a result, Popper would argue that it does not qualify as a scientific hypothesis.

Many contemporary philosophers share the view that Popper’s criterion is excessively stringent. There exist scientific hypotheses, such as those concerning the early universe, that may never be falsified due to practical limitations. Despite this, I am inclined to believe that the simulation hypothesis falls outside the realm of a strictly scientific hypothesis. Instead, it lies in the intersection of the scientific and philosophical domains.

Certain versions of the simulation hypothesis can be subject to empirical tests, allowing them to be examined through scientific means. However, there are other versions of the hypothesis that are inherently impossible to test empirically. Regardless of their testability, the simulation hypothesis remains a meaningful proposition concerning our world. (Chalmers p.38)

I don’t think Chalmers achieves much in this paragraph. To say that something is partly scientific and partly philosophical diminishes the philosophical part. It’s like saying the Bible is partly historical and partly fictional. Some of the events in the book can be proven or disproven with historical records: the Exodus from Egypt, historical persons like King David or Pontius Pilate, or even Jesus of Nazareth. However, that would not be enough for true believers, who insist that all the magic and wonder described in the book is, or was, real. They truly believe that Jesus rose from the grave and walked on water. This is why it’s futile to even apply scientific methods to the holy scriptures; it’s a waste of time, because the essence of the belief system described in these pages does not accept the scientific method. So, no, the parts of the simulation hypothesis that would be testable are not the interesting ones. At the core of the simulation hypothesis lies a philosophical argument, not a scientific one. Science is at the periphery.

Knowledge and Skepticism

A common view of knowledge, going back to Plato, is that knowledge is justified, true belief. To know something, you have to think it’s true (that’s belief), you have to be right about it (that’s truth), and you have to have good reasons for believing it (that’s justification). (Page 44)

In philosophy, a skeptic is someone who casts doubt on our beliefs about a certain domain(…) The most virulent form of skepticism is global skepticism casting doubt on all of our beliefs at once. The global skeptic says that we cannot know anything at all. We may have many beliefs about the world, but none of them amount to knowledge. (Page 45)

The simulation hypothesis may once have been a fanciful hypothesis, but it is rapidly becoming a serious hypothesis. Putnam put forward his brain-in-a-vat idea as a piece of science fiction. But since then, simulation and VR technologies have advanced fast, and it isn’t hard to see a path to full-scale simulated worlds in which some people could spend a lifetime. As a result, the simulation hypothesis is more realistic than the evil demon hypothesis. As the British philosopher Barry Dainton has put it, the threat posed by simulation skepticism is far more real than that posed by its predecessors. Descartes would doubtless have taken today’s simulation hypothesis more seriously than his demon hypothesis, for just that reason. We should take it more seriously too. (Page 55)

Bertrand Russell once said the point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it. (Page 56)

To doubt that one is thinking is internally inconsistent: The doubting itself shows that the doubt is wrong. (Page 59)

No objections from me here; this whole part is very well put together.

Idealistic Contradiction

We’ve already touched on one route to the conclusion that the hypothesis is contradictory, suggested by Berkeley’s idealism. Idealism says that appearance is reality. A strong version of idealism says that when we say, “we are in a simulation,” all this means is “it appears that we are in a simulation” or something along those lines. Now the perfect simulation hypothesis can be understood as saying: “We are in a simulation, but it does not appear that we are in a simulation.” If the strong version of idealism is true, this is equivalent to “We are in a simulation and we are not in a simulation” which is a contradiction. So, given this version of idealism, we can know that the simulation hypothesis is false. (Chalmers, p.75)

Reality is what our minds prioritize over imaginary things.

Reality is what Evolution forces (up)on us.

Simulations are what we invent to slow down and cushion the freight train of the impact of evolutionary pressure.

Reality is that which can’t be skipped.

When MLK, fully conscious, says in front of a crowd, “I have a dream…”, does that make his whole speech unbelievable? No: he uses dream as a metaphor for seeing into the future, a prophetic dream he wishes to become reality.

What is reality?

Virtual things are not real is the standard line on virtual reality. I think it’s wrong. Virtual reality is real; that is, the entities in virtual reality really exist. My view is a sort of virtual realism. (…) As I understand it, virtual realism is the thesis that virtual reality is genuine reality, with emphasis especially on the view that virtual objects are real and not an illusion. In general, realism is the word philosophers use for the view that something is real. Someone who thinks morality is real is a moral realist. Someone who thinks that colors are real is a color realist. By analogy, someone who believes that virtual objects are real is a virtual realist. I also accept simulation realism: if we are in a simulation, the objects around us are real and not an illusion. Virtual realism is a view about virtual reality in general, while simulation realism is a view specifically about the simulation hypothesis. Simulation realism says that even if we’ve lived our whole life in a simulation, the cats and chairs in the world around us really exist. They aren’t illusions; things are as they seem. Most of what we believe in the simulation is true. There are real trees and real cars; New York, Sydney, Donald Trump, and Beyoncé are all real. (…) When we accept simulation realism, we say yes to the reality question. In a simulation, things are real and not illusions. If so, the simulation hypothesis and related scenarios no longer pose a global threat to our knowledge. Even if we don’t know whether or not we’re in a simulation, we can still know many things about the external world. Of course, if we are in a simulation, the trees and cars and Beyoncé are not exactly how we thought they were. Deep down, there are some differences. We thought that trees and cars and human bodies were ultimately made of fundamental particles such as atoms and quarks; instead, they are made of bits. I call this view virtual digitalism.
Virtual digitalism says that objects in virtual reality are digital objects: roughly speaking, structures of binary information, or bits. Virtual digitalism is a version of virtual realism, since digital objects are perfectly real. Structures of bits are grounded in real processing in a real computer. If we are in a simulation, the computer is in the next world up, metaphorically speaking. But the digital objects are no less real for that. So, if we are in a simulation, the cats, trees, and tables around us are all perfectly real. (Chalmers, p.105)

Chalmers appears to intend a synthesis of Idealism and Realism. His usage of ‘realism’ seems clear, yet he subtly stumbles when stating, “if we are in a simulation …things are not exactly how we thought they were.” (Here, substituting ‘exactly’ with ‘really’ swiftly muddies the clarity he had been striving to maintain). I don’t perceive Chalmers as deliberately evasive; his efforts to preserve the concept of reality, even within a potential simulation, strike me as somewhat desperate. His argument ultimately leans more toward theology than philosophy. The key distinction between a conman and a true believer in pseudo-realities lies in the targeted sphere of control: the former aims to manipulate others, while the latter seeks self-control. In this context, Chalmers emerges as a true apostle. From there, his later construction of a Simulation Theology in the book follows logically.

Chalmers heavily burdens terms such as ‘Reality’, ‘Illusion’, and ‘Virtuality’, perhaps hoping that this semantic shock treatment will jolt us into a new perspective.

The Big Swap

Imagine a scenario where, following birth, every infant is unknowingly separated from their biological mother and swapped with another child. Each person is raised by strangers they consider their parents, and these adoptive parents, also none the wiser, accept the child as their own. In Chalmers’ interpretation of this hypothetical situation, he might propose the idea of ‘Relationship Realism’. He would argue that since everyone involved treats each other as their real family, they effectively are their real family. Chalmers might even extend this reasoning to suggest that even if genetic testing were to reveal that the individuals we believed to be our parents are not our biological parents, they are still, in essence, our real parents. They might not be exactly what we initially believed, but the love exchanged between us makes this relationship real.

In a similar fashion, Chalmers seems to skirt around the fact that discovering my dad isn’t my biological father doesn’t provide any meaningful insights. Since I’ve been living a perfect simulation of a relationship with my non-biological parents from birth, they aren’t fake parents but my real ones. It’s noteworthy that this exercise of redefining and overloading reality-related terms is amusing only when observed from an outsider’s perspective. However, if Chalmers were to wake up tomorrow to discover his memories of being a renowned Australian philosopher authoring a book on realities and simulations were fabricated, and that he is actually within a simulation machine running a program on his digital brain, I doubt he would find it amusing.

Sisyphosical Zombies


“Us” is a psychological horror film directed by Jordan Peele. The movie follows the story of the Wilson family, who encounter a group of doppelgängers that look exactly like them but possess sinister intentions. The family is forced to confront their own dark past as they fight to survive against their menacing and terrifying counterparts. As the night progresses, chilling secrets unravel, leading to a shocking revelation about the true nature of these doppelgängers and the disturbing connection they share with the family.

The movie perfectly unhinges the viewer’s sense of reality and how the concept relates to self and identity. It also questions our notions of self-control and free will.

The doppelgängers are referred to as “The Tethered.” The explanation for how The Tethered are created is left somewhat ambiguous and open to interpretation. However, it is suggested that The Tethered are the result of a secret government experiment gone wrong.

The film implies that the government sought to control the population by creating clones of individuals and keeping them confined in underground facilities. These clones, The Tethered, are physically identical to their above-ground counterparts but are forced to live in dark and oppressive conditions, mirroring the lives of their counterparts above.

Over time, The Tethered develop their own consciousness and a deep sense of resentment and desire for revenge against their surface-dwelling counterparts. They eventually rise to the surface and initiate a violent confrontation with their doubles, seeking to take their place in the world.

Now let’s slightly change the parameters of the setting: instead of mindless zombies aimlessly walking around in tunnels below the surface, each doppelgänger is remotely connected via VR to its surface counterpart. They experience everything through the sensory input of their twins, from birth to deathbed.

In the essay “The Myth of Sisyphus” Albert Camus explores the concept of the absurd, using the Greek myth of Sisyphus as a metaphor. Sisyphus was punished by the gods and condemned to roll a boulder up a hill, only to watch it roll back down, repeating this task for eternity.

Camus argues that even in the face of a seemingly meaningless and repetitive existence, Sisyphus can find happiness by embracing the absurdity of his situation. Despite the futility of his efforts, he can create his own sense of purpose and meaning in the act of defiance against the absurdity of life. Thus, Camus suggests that true happiness can be found in accepting and embracing life’s absurdities rather than searching for ultimate meaning or purpose.

The last sentence of the essay is:

“One must imagine Sisyphus happy.”

In a twist reminiscent of the great Camus, Chalmers basically states:

“One must imagine Simulation real.”

However, unlike the existentialist Camus, who acknowledges the absurdity of such a statement and yet tries to cope with it emotionally, Chalmers attempts to reason his way out and, in my opinion, fails.

The life of the VR doppelgänger, according to Chalmers, is no second-class reality. It is a perfectly fine simulation, a life worth living.

I call this new philosophical zombie a sisyphosical zombie: a zombie that is happy about lacking emotions.

Reality#1


This is the first part in the Reality# series, which adds to the conversation about David Chalmers’ book Reality+.

Intro to Reality#

Reality# (spoken “sharp”) is a series of notes and essays written in reaction to Chalmers’ 2022 book “Reality+”. They were written over a period of 12 months, across multiple readings.

I did not choose the hash sign (#) for social-network reasons; #Reality would be a self-contradictory term. Philosophy, like masturbation, is a deeply antisocial project: to be able to think, one must be comfortable being alone with oneself and even find pleasure in doing so. The symbol is taken from music, where the sharp sign (#) indicates that a note should be raised by a half step.

Whereas Chalmers explores the whole steps of the subject using the conventional philosophical scale of structured chapters and the platonic style of reasoning, Reality# is more interested in the halftones, the black keys. My hope is that the text is chromatically enriched, with frictions (contradictions), exploring interesting overtones and modulating to nearby subjects.

Chalmers’ book is great, even if some of his core arguments are either debatable or plain wrong. It is by far the most accessible writing on a subject considered to lie at the core of the hard problem of consciousness. After realism and relativism, it is not unreasonable to expect that Chalmers has spawned a new branch on the philosophical tree of epistemology. Chalmers himself calls his interpretation virtual realism, but I find this term a little too boring. Chalmers is interested in technology, but his book is far more than a call to invest in Meta stock. Chalmers is a VR enthusiast but no fanboy; he does not advocate escapism. Whenever quoting Chalmers, I provide a reference in parentheses.

We will call this the Australian School of Techno-Epistemology, or Realityplusism, or, shorter, Realplusism for now. Realplusism is summed up in the sentence:

Virtual reality is genuine reality. Virtual worlds need not be second class realities. (Chalmers, XVII)

Or even simpler:

The preferred way to deal with simulations while you are inside them is to treat them as reality.  (Aiuisensei)

We will later look at some historical figures and estimate where exactly they sit on the Aussiestic spectrum, with one extreme populated by Chalmers himself ((almost) everything is real) and nihilism at the other end ((almost) nothing is real: Sartre, Nietzsche, Buddha). Our current scientific worldview seems to sit right in the middle: we behave as if, and are taught that, reality is somewhat dependable, even if fringe findings in chaos theory, cosmology, and quantum mechanics suggest otherwise.

Reality#’s writing style is deeply indebted to the late Ludwig Wittgenstein. In his Philosophical Investigations he employs a fragmentary style, his reflections tending to “jump all around the subject”. This fragmentary style forces the reader to piece together the philosophical puzzle he presents, adding to the depth and complexity of his thought. Wittgenstein once compared his philosophical observations to “raisins”, which may be the best part of a cake, but their addition does not ensure a perfect, complete form of expression. (Vermischte Bemerkungen 386)

It is very telling that I myself hate raisins. I am much more a peanuts guy. So, consider the following experiment to go nuts about reality and don’t be concerned too much about simulations and realities.

Or in the words of one of my other philosophical heroes:

Don’t worry about the world coming to an end today. It is already tomorrow in Australia. (Charles M. Schulz, Peanuts)

Objective Reality and Language Use

Our minds are part of reality. But there’s a great deal of reality outside our minds. Reality contains our world, and it may contain many others. We can build new worlds and new parts of reality. We know a little about reality, and we can try to know more. There may be parts of it that we can never know. Most importantly, reality exists independently of us. The truth matters. There are truths about reality, and we can try to find them. Even in an age of multiple realities, I still believe in objective reality. (Chalmers XXIV)

Objective reality is the kind of reality we believe survives the vanishing, and predates the emergence, of singular subjective realities. Objective reality is the set of all subjective realities between their first realization (when their consciousness forms) and their last (when their consciousness dissolves).

Objective Reality is that kind of reality we should agree on treating as if it matters.

Objective reality is that kind of reality that should not exist in plural.

Reality is that which can’t be stopped from within.

She was a real friend to him during those tough times, always there to listen and support.

Here real means something like “true” in a somewhat logical way. We could rewrite this sentence without the real part and lose nothing: she behaved like a friend… Adding the real part seems to emphasize that she really meant it when she was friendly and was not just pretending.

From her behavior we can conclude that the truth value of the sentence “She was his friend” is not false.

Are these diamonds real or synthetic?

Here real means something like: it is not a cheap copy but the highly valued original. We seem to ascribe an inner value to things that are real (original) which a copy, even an identical one, does not possess. This whole issue becomes very complicated once we head in the direction of cloning.

To become a real professional in any field, you need dedication and years of practice.

Here real is an attribute you can gain by dedicating much time to a subject. At the beginning you can’t be real, but over time you become more and more real. Reality here is like a status, a bar you can pass by investing time in a field. To become the real deal in any profession, you have to dedicate time to it. And our society has plenty of hurdles you must clear to become something for real: you are only a real lawyer, for example, if you pass the bar exam.

We need to look at the real issues here, not get sidetracked by irrelevant topics.

Here real means: important. It’s a rhetorical device in a debate that degrades the value of my opponent’s arguments (because he is sidetracking) whereas I stay on topic.

He had been dreaming about visiting Japan for years and finally, the dreams became a reality.

This is likely figurative speech. He has probably not literally been dreaming about visiting Japan. He simply wanted, or planned, to visit Japan sometime in the future and has now realized these plans.

The reality of the situation was far from what he had anticipated; it was both challenging and thrilling at the same time.

His inner simulation, or imagination, of the reality of the situation was quite far off: whereas he imagined she would be glad to see him, she smacked him across the face.

In reality, succeeding in such a competitive market requires both innovation and resilience.

Here the word reality tries to convey some kind of trustworthiness: you can believe me, I have plenty of experience…

He didn’t really want to go to the party, but he felt obliged to show up.

Adds almost nothing but seems to slightly intensify his unwillingness.

Do you really think it’s a good idea to invest in this startup at such an early stage?

Help me overcome my doubts about the investment by reassuring me once more.

Upon realizing he’d left his keys at home, he hurried back, hoping not to be late for his meeting.

Some thought entered the light cone of his consciousness.

Reality and Mistake

I have the feeling that the word reality could vanish from our language without leaving a hole in it.

Something can be scientifically and practically useless but philosophically very interesting (verificationism).

Epistemology is a guilty pleasure. We feel guilty when thinking about thoughts… but it is real fun!

The sentences with real and reality in them remind me of the late Wittgenstein, who struggled with sentences like: I know that I have a brain. (On Certainty p.120)

(…)what about a sentence like ‘I know that I have a brain’? Can I doubt it? (…) everything speaks for it and nothing against it. However, it can be imagined that during an operation my skull would turn out to be empty. (Wittgenstein, On Certainty)

By saying that we can never prove we are not in a virtual reality (a simulation), Chalmers seems to treat the question as a purely technical problem: our minds might not notice the mistakes and errors in the simulation due to bandwidth limitations. I feel Wittgenstein goes deeper to the philosophical core of the question in this paragraph.

Transposed to our reality issues: I believe that I am living in base reality, but I at least entertain the possibility that this could be a simulation. We feel that casting doubt overshadows the sense of the whole sentence. Why even state that I’m quite sure this is authentic while doubting it in the same sentence? It’s like saying to somebody on the beach: come into the water, it is really warm; not at first, when you enter, but once your body has acclimatized.

Whoever wanted to doubt everything would not even get to the doubt. The act of doubting itself already presupposes certainty. (Wittgenstein, On Certainty 115)

This is the reason why there can be no universal skepticism or nihilism.

Usability Problems with Chalmers’ Definition of Reality

I am not at all convinced that Chalmers’ definition of reality is useful. What does it mean to say the Easter Bunny is not real, but the ideas and stories about the Easter Bunny are real? It’s like saying the letter ÿ does not exist in the English language, but since I can use it in an English sentence like this one, it is useful anyway. Is the Easter Bunny somehow useful for showing (illustrating) what is not real? It’s like having two pictures of rabbits, one of them depicting the Easter Bunny, pointing at the pictures and saying: this is a real bunny, and this is a bunny that is not real, but the pictures of both bunnies are real. We would not feel that this had taught us anything useful, except yielding a very clever-sounding sentence for an esoteric epistemology blog.

Does it give us a better perspective on the set of real things when we can show that some things are not in the set? Is our understanding of the set, and our knowledge about it, enhanced?

It’s interesting that the terms real and reality have no commonly used antonym in the English language. We only use the term unrealistic as an antonym to realistic.

Be realistic! means something like: match your expectations to your possibilities.

She had an unrealistic outlook on life. Life ain’t no Barbie world.

The set of all things that are not real…

What does it mean to say: There are things, that are both real and imaginary?

I believe something, often means, I feel comfortable with this thought. It is a blanket that warms the body of my mind against the harshness of reality.

Chalmers starts his book by confessing his lifelong fascination with computer games. That makes the book very personal and sympathetic (I myself enjoy video games), but I think this love for the subject biases his final verdict. In the end, much of his argument that simulations need not be second-class realities seems a little desperate. It seems he needs simulations to be real.

This self-delusion about the state of affairs becomes obvious in the passage where Chalmers describes the possibility that, if we are in a simulation, he may not really be the famous philosopher writing a book on simulations; and if that were the case, he should not feel bad about it, since this simulation was his reality and the books (he never wrote) count as if he had written them anyway.

I should really love this girl, she is kind and beautiful, therefore it is reasonable to feel love.

It sounds like he is trying to convince himself not to feel betrayed or tricked if he finds out he inhabits a Chalmers show broadcast to millions of real philosophy students all over base reality.

Map and Territory

During my childhood I fell in love with a toy globe that could be illuminated from within. You had to power it with a brown cable and then switch it on with a bulky, ebony-white button. It is maybe 45 years since I last pressed this button, but I can clearly see it with my inner eye and reproduce the imaginary sound it made when switched on. I remember clearly that the equatorial line was somehow defective and a little loose. Globes are models of our planet. They are static simulations, frozen in time. The quality of such a model decays over time, and that does not mean that the materials it is made from or the colors it is decorated with fade. No, the amount of reality (“Realitätsgehalt” in German) of the model decreases. This globe would surely now show countries on its map that no longer exist or have different borders. Reality has moved on, while the globe has mostly stayed the same.

Now suppose we had the technology to shrink our real Earth down to the size of the globe (an intergalactic collector of rare planets might do so to save space in his museum). At what size does my old globe stop being a model of the real thing? What does it mean to create a perfect model? What does it mean to look at a picture of Michelangelo’s David and say: this is not the real David; the real David is in Florence? Is a sculpture of a human a real sculpture while it is at the same time a model of the human who explicitly modeled for it?

Reality and Identity

In his book, Chalmers tells the story of how he was approached in 1999 by the Wachowski sisters to write something about simulations for the website of the movie The Matrix. But was this really the case?

Given that we believe this was the case and Chalmers tells the truth, does he tell the whole truth? For everyone not familiar with the case: in 1999 The Matrix was directed by the Wachowski Brothers; both directors later changed their gender identity and transitioned. You would think that a philosopher who writes about simulation and reality would see the irony when he considers how to convey this information to his readers.

He could have thought something like: “I don’t want to distract from my main topic by bringing this gender identity thing up. It’s convenient to bend the facts and report the matter the way I believe is most widely accepted, without starting a controversy. It does not matter in the overall context and would rather sidetrack my main argument.”

Or he could have thought something like: “It is perfectly normal to report facts that have changed over time without reporting that they have changed. As a cis man, I accept the reality of transgender persons: their real identity was always female, even when they inhabited a male body. Since I have no information about what it means to change gender, I cannot possibly have a sound opinion about how this kind of reality is perceived from within.”

Chalmers really missed an opportunity here to explore the term pseudo reality.

I remember a story in which a female teacher in her thirties had an affair with her 14-year-old student and was arrested and imprisoned for sexual abuse of a minor. Years later, after she left prison, the student and teacher married. Now, could the student have made the argument that when he was fourteen he identified as an adult, and that the sex with his teacher was therefore not illegal but his own genuine, adult consent? Does the fact that he married his teacher later, once he was legally allowed to, prove that it was really his own will, because this was real love? And should I have written about these events without even mentioning that he was a minor? Would that have been the real story?

Reality is that which is moving on without getting pushed from the outside.

Reality is that which has only Presence (No Past or Future)

Reality is that which can’t be rewritten or erased.

to be continued