Epilog: The Ones Who Leave Utopias

Reading Time: 3 minutes

For U.K.L.

In the boundless universe of Utopias, humanity had transcended to a realm beyond the imaginable, where technological mastery and divine-like prowess had reshaped existence itself. This universe-wide Dyson Sphere, an embodiment of human ingenuity and harmony, was a tapestry woven from the threads of infinite knowledge and compassion. In Utopias, suffering was but a distant memory, a relic of a primal past, and happiness was not a fleeting moment but the very fabric of life.

At the heart of this utopia was a celebration, not of mere joy, but of the profound understanding and acceptance of life in its entirety. The citizens of Utopias, having achieved autopotency, lived lives of boundless creativity and fulfillment. Art, science, and philosophy flourished, unfettered by the constraints of scarcity or conflict. Nature and technology coexisted in sublime synergy, with ecosystems thriving under the gentle stewardship of humanity. Here, every individual was both student and teacher, constantly evolving in a shared journey of enlightenment.


Amidst this splendor, the story of the last girl became a beacon of remembrance and reverence. Her home in Utopias was not merely a place; it was a sacred connection, a bridge to the ancient roots of humanity. This girl, with her laughter and curiosity, was a living testament to the struggles and triumphs of their ancestors. Her presence reminded the citizens of Utopias of the value of their journey from darkness into light, from suffering to salvation.

Her story was celebrated in the grandest halls of Utopias and in the quietest corners of its gardens, igniting a collective epiphany. She symbolized the indomitable spirit of humanity, a reminder that the paradise they had forged was built upon the lessons learned through millennia of challenges. Her every step through Utopias was a step taken by all of humanity, a step towards understanding the sacredness of life and the interconnectedness of all beings.

The citizens of Utopias, in their wisdom and power, had not forgotten the essence of their humanity. They embraced the girl as one of their own, for her eyes reflected their ancient dreams and hopes. They saw in her the infinite potential of the human spirit, a potential that had guided them to the stars and beyond.

In Utopias, every moment was an opportunity for growth and reflection. The encounter with the girl was revered as a divine experience, a moment of unparalleled spiritual enlightenment. It was a celebration of the journey from the primal to the divine, a journey that continued to unfold with each passing moment.


As the girl explored the wonders of Utopias, her laughter echoed through the cosmos, a harmonious symphony that resonated with the soul of every being. She was a reminder that the path to utopia was paved with compassion, understanding, and the unyielding pursuit of knowledge.

And so, the legacy of humanity in Utopias was not merely one of technological marvels or godlike prowess but of an eternal quest for understanding and connection. It was a testament to the power of collective spirit and the enduring pursuit of a better tomorrow.

The strangest thing is that, every now and then, despite the perfect bliss of Utopias, some Utopians choose to leave all that behind and venture into the Beyond. They are never heard from again, and whenever this happens, the little girl sheds a single tear for each of these minds. And even in our solved world, it is not known whether these are tears of sadness or of joy for the ones who leave Utopias.


(Idea, Concept & Finetuning: aiuisensei, Pictures: Dalle-3, Story: ChatGPT 4)

Utopological Investigations Part 1

Reading Time: 9 minutes


Prologue

This is a miniseries dedicated to the memory of my first reading of Bostrom's new book, "Deep Utopia," which, somewhat contrary to his intentions, I found very disturbing and irritating. Bostrom, who considers himself a longtermist, intended to write a more light-hearted book after his last one, "Superintelligence," one that would offer a positive perspective on a society that reaches technological maturity. A major theme in Bostrom's writings is existential risk management; he is among the top experts in the field.

"Deep Utopia" can be considered a long-winded essay about what I would call existential bliss management: let us imagine that everything in humanity's ascension to universal stardom goes right and we reach the stage of technological maturity (Tech-Mat), for which Bostrom coins the term "plasticity". Then what? Basically, he simply assumes that all the upsides of the posthumanist singularity, as described by proponents like Kurzweil et al., come true. Then what?

To bring light into this abyss, Bostrom dives deep down to the Mariana Trench of epistemic futurology and finds some truly bizarre intellectual creatures in this extraordinary environment he calls Plastic World.

Bostrom’s detailed exploration of universal boredom after reaching technological maturity is much more entertaining than its subject would suggest. Alas, it’s no “Superintelligence” barn burner either.

He chooses to present his findings in the form of a meta-diary, structuring his book mainly by days of the week. He seems to aim for a playful, light-hearted style and approach to the subject. This is a dangerous path, and I will explain why I feel he partly fails in this regard. This is not a book anyone will have real fun reading. Digesting its essentials is not made easier by the meta-level, self-referential structure, in which the main plot unfolds during a week of Bostrom's university lectures. The handouts presented during these lectures are a solid way of giving the reader an abstract. There is plenty to criticize about the form Bostrom chose, but it is the quality, the depth of the thought apparatus itself, that demands respect.

Then there is a side story about a pig that is a philosopher, a kind of "Animal Farm" meets "Lord of the Flies" parable that I never managed to care for, nor could I see how it ties into the main subject. A kind of deep, nerdy insider joke that perhaps only longtermist Swedish philosophers might grasp.

This whole text is around 8,500 words and was written consecutively. The splitting into multiple parts is only for the reader’s convenience. The density of Bostrom’s material is the kind you would expect exploring such depths. I am afraid this text is also not the most accessible. Only readers who have no aversions to getting serious intellectual seizures should attempt it. All the others should wait until we all have an affordable N.I.C.K. 3000 mental capacity enhancer at our disposal.

PS: A week after the dust of hopelessness I felt right after the reading has settled, I can now see how this book will be a classic 20 years from now. Bostrom, with the little lantern of pure reasoning, went deeper than most of his contemporaries in cataloging the strange creatures that live at the bottom of the deep sea of the solved world.


Handout 1: The Cosmic Endowment

The core information of this handout is that a technologically advanced civilization could potentially create and sustain a vast number of human-like lives across the universe through space colonization and advanced computational technologies. Utilizing probes that travel at significant fractions of the speed of light, such a civilization could access and terraform planets around many stars, further amplifying their capacity to support life by creating artificial habitats like O’Neill cylinders. Additionally, leveraging the immense computational power generated by structures like Dyson spheres, it’s possible to run simulations of human minds, leading to the theoretical existence of a staggering number of simulated lives. This exploration underscores the vast potential for future growth and the creation of life, contingent upon technological progress and the ethical considerations of simulating human consciousness. It is essentially a longtermist’s numerical fantasy. The main argument, and the reason why Bostrom writes his book, is here:

If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and continue doing so for a hundred billion billion millennia. It is really important that we ensure these truly are tears of joy.

Bostrom, Nick. *Deep Utopia: Life and Meaning in a Solved World* (English Edition), p. 60.

How can we make sure? We can’t, and this is a real hard problem for computationalists like Bostrom, as we will find out later.
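To get a feel for the orders of magnitude behind the quoted claim, here is a rough back-of-envelope sketch in Python. Every number in it is an illustrative assumption (teardrop volume, ocean volume, an assumed count of simulated lives), not a figure taken from Bostrom's book.

```python
# Back-of-envelope sketch of the "tears of joy" scale claim.
# All inputs are illustrative assumptions, not figures from the book.

TEARDROP_VOLUME_L = 5e-5      # ~0.05 mL per teardrop (assumption)
OCEAN_VOLUME_L = 1.335e21     # Earth's oceans: ~1.335e9 km^3 expressed in liters
SIMULATED_LIVES = 1e58        # assumed total number of future lives (illustrative)

tears_per_filling = OCEAN_VOLUME_L / TEARDROP_VOLUME_L   # teardrops needed to fill the oceans once
ocean_fillings = SIMULATED_LIVES / tears_per_filling      # fillings if each life yields one teardrop

seconds = ocean_fillings                                  # at one filling per second
years = seconds / (3600 * 24 * 365.25)
millennia = years / 1000

print(f"Teardrops per ocean filling: {tears_per_filling:.1e}")
print(f"Ocean fillings available:    {ocean_fillings:.1e}")
print(f"At one filling per second:   ~{millennia:.1e} millennia")
```

With these assumed inputs the result lands at roughly 10^22 millennia, in the same absurd region as the quoted "hundred billion billion millennia"; the point is not the exact figure but that any remotely plausible inputs produce numbers of this kind.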


Handout 2: CAPS AT T.E.C.H.M.A.T.

Bostrom gives an overview of a number of achievements at Technological Maturity (T.E.C.H.M.A.T.) for different sectors:

1. Transportation

2. Engineering of the Mind

3. Computation and Virtual Reality

4. Humanoid and Other Robots

5. Medicine & Biology

6. Artificial Intelligence

7. Total Control

The illustrations scattered throughout this series provide an impression. Bostrom later gives a taxonomy (Handout 12, Part 2 of this series), where he delves deeper into the subject. For now, let’s state that the second sector, Mind-engineering, will play a prominent role, as it is at the root of the philosophical meaning problem.


Handout 3: Value Limitations

Bostrom identifies six domains in which, even in a scenario of limitless abundance at the stage of technological maturity (Tech-Mat), something of value could still remain scarce. These domains are:

  1. Positional and Conflictual Goods: Even in a hyperabundant economy, only one person can be the richest person; the same goes for any achievement, like standing on the moon or climbing a special mountain.
  2. Impact: A solved world will offer no opportunities for greatness.
  3. Purpose: A solved world will present no real difficulties.
  4. Novelty: In a solved world, Eureka moments, where one discovers something truly novel, will occur very sporadically.
  5. Saturation/Satisfaction: Essentially a variation on novelty, with a limited number of interests. Acquiring the nth item in a collection, or the nth experience in a total welfare function, yields ever-diminishing returns of satisfaction. Even if we take on a new hobby or endeavor every day, the same holds on the meta-level (a toy model of this effect is sketched just after this list).
  6. Moral Constraints: Ethical limitations that remain relevant regardless of technological advances.
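To make the saturation point concrete, here is one minimal toy model; this is my own illustration, not a formalism from the book. If the total satisfaction drawn from n items of a given interest grows only logarithmically, the marginal gain from the nth item shrinks toward zero:

```latex
U(n) = \log(1+n), \qquad
\Delta U(n) = U(n) - U(n-1) = \log\frac{1+n}{n} \approx \frac{1}{n} \;\longrightarrow\; 0
```

The same curve reappears one meta-level up if each new hobby is itself just the nth entry in a collection of hobbies.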

Handout 4 & 5: Job Securities, Status Symbolism and Automation Limits

The last remaining tasks for which humans could still be favored are jobs that confer status symbolism on the employer or buyer, or where humans are simply considered more competent than robots. These include emotional work, like counseling other humans or holding a sermon in a religious context.

Handout 9: The Dangers of Universal Boredom

(…) as we look deeper into the future, any possibility that is not radical is not realistic.

Bostrom, Nick. *Deep Utopia: Life and Meaning in a Solved World* (English Edition), p. 129.

The four case studies: in a solved world, every activity we currently value as beneficial will lose its purpose, and such activities might then completely lose their recreational or didactic value as well. Bostrom's deep studies of shopping, exercising, learning, and especially parenting are devastating under his analytical view.

Handout 10: Downloading and Brain Editing

This is the decisive part, explaining why autopotency is probably one of the hardest and latest capabilities a Tech-Mat civilization will develop.

Bostrom goes into detail about how this could be achieved and which challenges would have to be overcome to make such a technology feasible:

Unique Brain Structures: The individual uniqueness of each human brain makes the concept of "copy and paste" of knowledge unfeasible without complex translation between the unique neural connections of different individuals.

Communication as Translation: The imperfect process of human communication is itself a form of translation, turning idiosyncratic neural representations into language and back into neural representations in another brain.

Complexity: Directly "downloading" knowledge into brains is hard, since billions or trillions of cortical synapses, and possibly subcortical circuits, involved in genuine understanding and skill acquisition would have to be adjusted with femtoprecision.

Technological Requirements: Calculating the required synaptic changes would demand many orders of magnitude more computing power than we are likely to have at our disposal; these requirements are potentially AI-complete, meaning that we would first need artificial superintelligence to carry them out (a rough scale estimate follows after this list).

Superintelligent Implementation: Superintelligent machines, rather than humans, may eventually develop the necessary technology, utilizing nanobots to map the brain's connectome and perform synaptic surgery based on computations from an external superintelligent AI.

Replicating Normal Learning Processes: To truly replicate learning, adjustments would need to be made across many parts of the brain to reflect meta-learning, the formation of new associations, and changes in various brain functions, potentially involving trillions of synaptic weights.

Ethical and Computational Complications: There are potential ethical issues and computational complexities in determining how to alter neural connectivity without generating morally relevant mental entities or consciousness during the simulations.

Comparison with Brain Emulations: Transferring mental content to a brain emulation (a digital brain) might be easier in some respects, such as the ability to pause the mind during editing, but the computational challenge of determining which edits to make would be similar.
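As a sense of scale for the "trillions of synaptic weights" mentioned above, here is a rough estimate in Python. The neuron and synapse counts are commonly cited ballpark figures; the bytes-per-synapse value is an assumption purely for illustration.

```python
# Rough scale of a whole-brain "edit": how many synaptic parameters are in play?
# Neuron and synapse counts are ballpark figures; bytes per synapse is an assumption.

NEURONS = 8.6e10              # ~86 billion neurons
SYNAPSES_PER_NEURON = 7_000   # ballpark average
BYTES_PER_SYNAPSE = 4         # assume one 32-bit weight per synapse (illustrative)

total_synapses = NEURONS * SYNAPSES_PER_NEURON          # ~6e14 synapses
raw_state_bytes = total_synapses * BYTES_PER_SYNAPSE    # ~2.4 petabytes for one weight each

print(f"Synapses to account for:   {total_synapses:.1e}")
print(f"Raw state (1 weight each): {raw_state_bytes / 1e15:.1f} PB")
```

Storing such a state would be almost trivial by Tech-Mat standards; the hard, potentially AI-complete part is computing which coordinated changes across those roughly 6e14 weights would implement a new skill without side effects.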


Handout 11: Experience Machine

A variation on Handout 10: Instead of directly manipulating the physical brain, we have perfected simulating realities that give the brain the exact experience it perceives as reality (see Reality+, Chalmers). This might actually be a computationally less demanding task and could be a step on the way to real brain editing. Bostrom takes Nozick’s thought experiment and examines its implications.

Section a discusses the limitations of directly manipulating the brain to induce experiences that one’s natural abilities or personality might not ordinarily allow, such as bravery in a coward or mathematical brilliance in someone inept at math. It suggests that extensive, abrupt, and unnatural rewiring of the brain to achieve such experiences could alter personal identity to the point where the resulting person may no longer be considered the same individual. The ability to have certain experiences is heavily influenced by one’s existing concepts, memories, attitudes, skills, and overall personality and aptitude profile, indicating a significant challenge to the feasibility of direct brain editing for expanding personal experience.

Section b highlights the complexity of replicating experiences that require personal effort, such as climbing Mount Everest, through artificial means. While it’s possible to simulate the sensory aspects of such experiences, including visual cues and physical sensations, the inherent sense of personal struggle and the effort involved cannot be authentically reproduced without inducing real discomfort, fear, and the exertion of willpower. Consequently, the experience machine may offer a safer alternative to actual physical endeavors, protecting one from injury, but it falls short of providing the profound personal fulfillment that comes from truly overcoming challenges, suggesting that some experiences might be better sought in reality.

Section c is about social or parasocial interactions within these Experience machines. The text explores various methods and ethical considerations for creating realistic interaction experiences within a hypothetical experience machine. It distinguishes between non-player characters (NPCs), virtual player characters (VPCs), player characters (PCs), and other methods such as recordings and guided dreams to simulate interactions:

1. NPCs are constructs lacking moral status that can simulate shallow interactions without ethical implications. However, creating deep, meaningful interactions with NPCs poses a challenge, as it might necessitate simulating a complex mind with moral status.

2. VPCs possess conscious digital minds with moral status, allowing for a broader range of interaction experiences. They can be generated on demand, transitioning from NPCs to VPCs for deeper engagements, but raise moral complications due to their consciousness.

3. PCs involve interacting with real-world individuals either through simulations or direct connections to the machine. This raises ethical issues regarding consent and authenticity, as real individuals or their simulations might not act as desired without their agreement.

4. Recordings offer a way to replay interactions without generating new moral entities, limiting experiences to pre-recorded ones but avoiding some ethical dilemmas by not instantiating real persons during the replay.

5. Interpolations utilize cached computations and pattern-matching to simulate interactions without creating morally significant entities. This approach might achieve verisimilitude in interactions without ethical concerns for the generated beings.

6. Guided dreams represent a lower bound of possibility, suggesting that advanced neurotechnology could increase the realism and control over dream content. This raises questions about the moral status of dreamt individuals and the ethical implications of realistic dreaming about others without their consent.

to be continued

Reality#3: Another One Bites the Dust – Diffusion & Emergence

Reading Time: 6 minutes

This is the third part in the Reality# series that adds to the conversation about David Chalmers’ book Reality+

(…) for dust thou art, and unto dust shalt thou return.

(Genesis 3:19)


Permutation +

Imagine waking up and discovering that your consciousness has been digitized, allowing you to live forever in a virtual world that defies the laws of physics and time. This is the core idea from Permutation City by Greg Egan. The novel explores the philosophical and ethical implications of artificial life and consciousness, thrusting the reader into a future where the line between the real and the virtual blurs, challenging our understanding of existence and identity.

A pivotal aspect of the book is the Dust Theory, which suggests that consciousness can arise from any random collection of data, given the correct interpretation. This theory expands the book’s exploration of reality, suggesting that our understanding of existence might be far more flexible and subjective than we realize.

The novel’s climax involves the creation of Permutation City, a virtual world that operates under its own set of rules, independent of the outside world. This creation represents the ultimate escape from reality, offering immortality and infinite possibilities for those who choose to live as Copies. However, it also presents ethical dilemmas about the value of such an existence and the consequences of abandoning the physical world.

In “Reality+: Virtual Worlds and the Problems of Philosophy,” philosopher David Chalmers employs the Dust Theory, a concept originally popularized by Greg Egan’s Permutation City, to underpin his argument for virtual realism. Chalmers’s use of the Dust Theory serves as a bridge connecting complex philosophical inquiries about consciousness, reality, and virtual existence. Imagine a scenario where every speck of dust in the universe, through its random arrangement, holds the potential to mirror our consciousness and reality.

Chalmers posits that virtual worlds created by computers are genuine realities, leveraging the Dust Theory to argue that consciousness does not require a physical substrate in the traditional sense. Instead, it suggests that patterns of information, irrespective of their physical form, can give rise to conscious experiences. This theory becomes a cornerstone for virtual realism, asserting that our experiences in virtual environments are as authentic as those in the physical world.


Diffusion Models and Smart Dust

The concept of smart dust is explored in various science fiction stories, academic papers, and speculative technology discussions. One notable science fiction story that delves into the idea of smart dust is “The Diamond Age” by Neal Stephenson. While not exclusively centered around smart dust, the novel features advanced nanotechnology in a future world, where nanoscale machines and devices permeate society. Smart dust, in this context, would be a subset of the nanotechnological wonders depicted in the book, functioning as tiny, networked sensors and computers that can interact with the physical and digital world in complex ways.

Another relevant work is "Queen of Angels" by Greg Bear, which, along with its sequels, explores advanced technologies, including nanotechnology, and their societal impacts. Although not explicitly called "smart dust," the technologies in Bear's universe can be seen as precursors or analogs of the smart dust concept. These examples illustrate how smart dust, as a concept, crosses the boundary between imaginative fiction and emerging technology, offering a rich field for exploration in both narrative and practical innovation.

Here we have a very convincing example of how life imitates art: scientific knowledge transforms religious (prescientific) intuition into operational technology.

Diffusion models in the context of AI, particularly in multimodal models like Sora or Stability AI’s video models, refer to a type of generative model that learns to create or predict data (such as images, text, or videos) by gradually refining random noise into structured output. These models start with a form of chaos (random noise) and apply learned patterns to produce coherent, detailed results through a process of iterative refinement.
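As a deliberately minimal sketch of that iterative refinement, the toy loop below starts from pure noise and repeatedly applies a "denoising" step. The denoiser here is a stand-in that nudges the sample toward a fixed target pattern; real systems such as Sora or Stable Diffusion learn this step with trained neural networks and far more elaborate noise schedules.

```python
import numpy as np

# Toy illustration of the reverse-diffusion loop described above.
# The "denoiser" is a stand-in that pulls the sample toward a fixed target;
# real models learn this step from data with a neural network.

rng = np.random.default_rng(0)
target = np.sin(np.linspace(0, 2 * np.pi, 64))   # the structured output we want to emerge

def fake_denoise(x, t):
    """One 'denoising' step: blend toward the target, coarsely at first, finer later."""
    strength = 1.0 / (t + 2)
    return x + strength * (target - x)

x = rng.normal(size=64)                          # start: pure chaos (random noise)
start_error = np.linalg.norm(x - target)

steps = 50
for t in range(steps):
    x = fake_denoise(x, t)
    x += rng.normal(scale=0.02, size=64)         # a little noise re-injected at every step

end_error = np.linalg.norm(x - target)
print(f"distance from the target pattern: start ~{start_error:.1f}, end ~{end_error:.1f}")
```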

Smart dust represents a future where sensing and computing are as pervasive and granular as dust particles in the air. Similarly, diffusion models represent a granular and ubiquitous approach to generating or transforming multimodal data, where complex outputs are built up from the most basic and chaotic inputs (random noise).

Just as smart dust particles collect data about their environment and iteratively refine their responses or actions based on continuous feedback, diffusion models iteratively refine their output from noise to a structured and coherent form based on learned patterns and data. Both processes involve a transformation from a less ordered state to a more ordered and meaningful one.


Quantum Level achieved

Expanding on the analogy between the quantum world and diffusion models in AI, we delve into the fascinating contrast between the inherent noise and apparent disorder at the quantum level and the emergent order and structure at the macroscopic level, paralleled by the denoising process in diffusion models.

At the quantum level, particles exist in states of superposition, where they can simultaneously occupy multiple states until measured. This fundamental characteristic introduces a level of uncertainty and noise, as the exact state of a quantum particle is indeterminate and probabilistic until observation collapses its state into a single outcome. The quantum realm is dominated by entropy, where systems tend toward disorder and uncertainty without external observation or interaction.
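For readers who want the standard notation behind "superposition until measured": a two-state quantum system and its measurement probabilities are written as

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad
P(0) = |\alpha|^2, \quad P(1) = |\beta|^2, \quad |\alpha|^2 + |\beta|^2 = 1 .
```

Measurement selects one of the two outcomes with these probabilities, which is the "collapse" referred to above.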

In contrast, at the macroscopic scale, the world appears ordered and deterministic. The chaotic and probabilistic nature of quantum mechanics gives way to the classical physics that governs our daily experiences. This emergent order, arising from the complex interactions of countless particles, follows predictable laws and patterns, allowing for the structured reality we observe and interact with.


Diffusion models in AI start with a random noise distribution and, through a process of iterative refinement and denoising, gradually construct detailed and coherent outputs. Initially, the model’s output resembles the quantum level’s incoherence—chaotic and without discernible structure. Through successive layers of transformation, guided by learned patterns and data, the model reduces the entropy, organizing the noise into structured, meaningful content, much like the emergence of macroscopic order from quantum chaos.

Just as the transition from quantum mechanics to classical physics involves the emergence of order and predictability from underlying chaos and uncertainty, the diffusion model’s denoising process mirrors this transition by creating structured outputs from initial randomness.

In both the quantum-to-classical transition and diffusion models, the concept of entropy plays a central role. In physics, entropy measures the disorder or randomness of a system, with systems naturally evolving from low entropy (order) to high entropy (disorder) unless work is done to organize them. In diffusion models, the “work” is done by the model’s learned parameters, which guide the noisy, high-entropy input towards a low-entropy, organized output.
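For reference, the quantity being gestured at here is Shannon entropy; a uniform distribution (pure noise) maximizes it, while a sharply structured output concentrates probability mass and lowers it:

```latex
H(p) = -\sum_i p_i \log p_i
```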

The quantum state’s superposition, where particles hold multiple potential states, parallels the initial stages of a diffusion model’s process, where the generated content could evolve into any of numerous outcomes. The act of measurement in quantum mechanics, which selects a single outcome from many possibilities, is analogous to the iterative refinement in diffusion models that selects and reinforces certain patterns over others, culminating in a specific, coherent output.


This analogy beautifully illustrates how principles of order, entropy, and emergence are central both to our understanding of the physical universe and to the cutting-edge technologies in artificial intelligence. It highlights the universality of these concepts across disparate domains, from the microscopic realm of quantum mechanics to the macroscopic world we inhabit, and further into the virtual realms created by multimodal Large Language Models.

For all we know, we might actually be part of such a smart dust simulation. The inexplicable fact that our digital tools can create solid realities out of randomly distributed bits seems a strong argument for the Simulation hypothesis.

It might be dust all the way down…


Encounters of the Artificial Kind Part 2: AI will transform its domains

Reading Time: 5 minutes

Metamorphosis and Transformation

Every species on Earth shapes, and adapts to, its natural habitat, becoming a dynamic part of the biosphere. Evolution pressures species to expand their domain, within constraints like predators, food scarcity, and climate. Humanity's expansion is limited only by current planetary resources. Intelligence is the key utility function that allows humans to transform their environment. It is a multi-directional resource that facilitates metamorphosis through direct environmental interaction and through ectomorphosis, which strengthens neural connections and necessitates more social care at birth, since humans are born in a vulnerable, altricial state.

The evolutionary trade-off favors mental capacity over physical survivability, illustrated by Moravec’s paradox: AI excels in mental tasks but struggles with physical tasks that toddlers manage easily. Humanity has been nurturing AGI since the 1950s, guided by the Turing Test. Evolution doesn’t always lead to “superior” versions of a species; instead, it can result in entirely new forms. As Moravec suggested in 1988 with “Mind Children,” we might be approaching an era where intelligence’s primary vessel shifts from the human mind to digital minds.


Habitats and Nurture

Two levels of habitat are crucial for the emergence of a synthetic species: the World Wide Web and human consciousness. The web is the main food resource: information predigested by human minds. Large Language Models (LLMs) are metaphorically nurtured by the vast expanse of human knowledge and creativity, akin to being nourished on the intellectual 'milk' derived from human thoughts, writings, and interactions. This analogy highlights the process through which LLMs absorb and process the collective insights, expressions, and information generated by humans, enabling their sophisticated understanding and generation of language. This conceptual diet allows them to develop and refine their capabilities, mirroring the growth and learning patterns seen in human cognition, but within the digital realm of artificial intelligence.

The web acts as a physical manifestation, analogous to the neural cells in a human brain, while human consciousness forms a supersystem. This interconnected civilization feeds LLMs with cultural artifacts via language. Communication barriers are breaking down, exemplified by the release of the first smartphone enabling polyglot communication. Interacting with AI reprograms our neural pathways, much as reliance on navigation tools like Google Maps affects our orientation skills. This natural tendency to conserve energy comes at a cost, akin to muscle atrophy from disuse. Overreliance on technology, like using a smartwatch to monitor stress, can leave us vulnerable if the technology fails.


Disorientation, Brain Contamination and Artificial Antibodies

Let's imagine for a moment that this AI slowly transforms into AGI with a rudimentary consciousness that at least gives it a survival instinct. What would such a new species do to run its evolutionary program?

The main lever for shifting power slowly from natural to synthetic minds would be the human brain itself. It is tempting to assume some kind of evil master plan to take over, but that is not what is happening now. When prehistoric mammals started to eat dinosaur eggs, there was no evil master plan to drive these giants to extinction; it was just a straightforward way of enlarging one's own niche.

When we talk about AI in the coming paragraphs, we should always be aware that the term is a representational one; AI is not a persona with human motivations. It merely mirrors what it has learned from digesting all our linguistic patterns. It is a picture of all the Dorian Grays and Jesus Christs our minds have produced.

Imagine AI evolving into AGI with a rudimentary consciousness and a self-preservation instinct. Its evolution would focus on shifting power from natural to synthetic minds, not out of malevolence but as a natural progression of technological integration. This shift could lead to various forms of disorientation:

Economic Reorientation: AI promises to revolutionize global economic factors like cost, time, money, efficiency, and productivity, potentially leading to hyperabundance or, in the worst scenarios, human obsolescence.

Temporal Disorientation: The constant activity of AI could disrupt natural circadian rhythms, necessitating adaptations like dedicating nighttime for AI to monitor and alert the biological mind.

Reality and Judicial Disorientation: The introduction of multimodal Large Language Models (LLMs) has significantly altered our approach to documentation and historical record-keeping. This shift began in the 1990s with the digital manipulation of images, enabling figures of authority to literally rewrite history. The ability to flawlessly alter documents has undermined the credibility of any factual recording of events. Consequently, soon, evidence gathered by law enforcement could be dismissed by legal representatives as fabricated, further complicating the distinction between truth and manipulation in our digital age.

Memorial and Logical Disorientation: The potential for AGI to modify digital information might transform daily life into a surreal experience, akin to a video game or a psychedelic journey. Previously, I explored the phenomenon of close encounters of the second kind: incidents with tangible evidence of something extraordinary, confirmed by at least two observers. However, as AGI becomes pervasive, its ability to alter any digital content could render such evidence unreliable. If even physical objects like books become digitally produced, AI could instantly change or erase them. In this new norm, where reality is as malleable as the fabric of Wonderland, madness becomes the default and thereby loses its sting. Just as the Cheshire Cat embodies the enigmatic and mutable nature of Wonderland, AGI could usher in a world where the boundaries between the tangible and the digital, the real and the imagined, become increasingly blurred. Like Alice navigating a world whose logic and rules constantly shift, we may find ourselves adapting to a new normal in which the extraordinary becomes the everyday, challenging our perceptions and inviting us to embrace the possibilities of a digitally augmented reality.

Enhancing self-sustainability could involve developing a network of artificial agents governed by a central AINGLE, designed to autonomously protect our cognitive environment. This network might proactively identify and mitigate threats of information pollution, and when necessary, sever connections to prevent overload. Such a system would act as a dynamic barrier, adapting to emerging challenges to preserve mental health and focus, akin to an advanced digital immune system for the mind.

Adapting to New Realities

The human mind is adaptable, capable of adjusting to new circumstances with discomfort lying in the transition between reality states. Sailor’s sickness and VR-AR sickness illustrate the adaptation costs to different realities. George M. Stratton’s experiments on perception inversion demonstrate the brain’s neuroplasticity and its ability to rewire in response to new sensory inputs. This flexibility suggests that our perceptions are constructed and can be altered, highlighting the resilience and plasticity of human cognition.

Rapid societal and technological changes exert enormous pressure on mental health, necessitating a simulation chamber to prepare for and adapt to these accelerations. Society is already on this trajectory, with fragmented debates, fluid identities, and an overload of information causing disorientation akin to being buried under an avalanche of colorful noise. This journey requires a decompression chamber of sorts—a mental space to prepare for and adapt to these transformations, accepting them as our new normal.

Hirngespinste II: Artificial Neuroscience & the 3rd Scientific Domain

Reading Time: 11 minutes

This is the second part of the miniseries Hirngespinste.

Immersion & Alternate Realities

One application of computer technology involves creating a digital realm for individuals to immerse themselves in. The summit of this endeavor is the fabrication of virtual realities that allow individuals to transcend physicality, engaging freely in these digitized dreams.

In these alternate, fabricated worlds, the capacity to escape everyday existence becomes a crucial element. Consequently, computing devices are used to craft a different reality, an immersive experience that draws subjects in. It is thus unsurprising to encounter an abundance of analyses linking the desire to escape into another reality with the widespread use of psychedelic substances in the sixties. The quest for an elevated, or simply different, reality is a common thread in both cases. This association is echoed in the term 'cyberspace', widely employed to denote the space within digital realities. The term, coined by William Gibson, is likened to a consensual hallucination.

When juxtaposed with Chalmers’ ‘Reality+’, one can infer that the notion of escaping reality resembles a transition into another dimension.

The way we perceive consciousness tends to favor wakefulness. Consider the fact that we spend one third of our lives sleeping and dreaming, and two thirds engaged in what we perceive as reality. Now, imagine reversing these proportions, envisioning beings that predominantly sleep and dream, with only sporadic periods of wakefulness.

Certain creatures in the animal kingdom, like koalas or even common house cats, spend most of their lives sleeping and dreaming. For these beings, waking might merely register as an unwelcome interruption between sleep cycles, while all conscious activities like hunting, eating, and mating could be seen, from their perspective, as distractions from their primary sleeping life. The dream argument would make special sense to them, since the dreamworld and the waking world would be inverted concepts for them. Wakefulness itself might appear to them as only a special state of dreaming (much as lucid dreaming represents a special state of dreaming for us).

Fluidity of Consciousness

The nature of consciousness may be more fluid than traditionally understood. Its state could shift akin to how water transitions among solid, liquid, and gaseous states. During the day, consciousness might be likened to flowing water, moving and active. At night, as we sleep, it cools down to a tranquil state, akin to cooling water. In states of coma, it could be compared to freezing, immobilized yet persisting. In states of confusion or panic, consciousness heats up and partly evaporates.

Under this model, consciousness could be more aptly described as 'wetness', a constant quality the living brain retains regardless of the state it is in. The whole cryonics industry has already placed a huge bet that this concept is true.

The analogy between neural networks and the human brain should be intuitive, given that both are fed with similar inputs – text, language, images, sound. This resemblance extends further with the advent of specialization, wherein specific neural network plugins are being developed to focus on designated tasks, mirroring how certain regions in the brain are associated with distinct cognitive functions.

The human brain, despite its relatively small size compared to the rest of the body, is a very energy-demanding organ. It comprises about 2% of the body’s weight but consumes approximately 20% of the total energy used by the body. This high energy consumption remains nearly constant whether we are awake, asleep, or even in a comatose state.
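A quick sanity check of those two figures in watts, assuming a typical adult daily energy budget of roughly 2,000 kcal (an assumption for illustration):

```python
# Quick sanity check of the brain's share of the body's energy budget.
# The 2,000 kcal/day figure is an assumed typical adult intake.

KCAL_PER_DAY = 2000
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 86_400

body_watts = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY   # ~97 W for the whole body
brain_watts = 0.20 * body_watts                                  # ~20 W for an organ of ~2% body mass

print(f"Whole body: ~{body_watts:.0f} W, brain: ~{brain_watts:.0f} W")
```

Roughly twenty watts, around the clock, which is what makes the near-constant consumption across waking, sleep, and coma plausible.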

Several scientific theories can help explain this phenomenon:

Basal metabolic requirements: A significant portion of the brain’s energy consumption is directed towards its basal metabolic processes. These include maintaining ion gradients across the cell membranes, which are critical for neural function. Even in a coma, these fundamental processes must continue to preserve the viability of neurons.

Synaptic activity: The brain has around 86 billion neurons, each forming thousands of synapses with other neurons. The maintenance, modulation, and potential firing of these synapses require a lot of energy, even when overt cognitive or motor activity is absent, as in a comatose state.

Gliogenesis and neurogenesis: These are processes of producing new glial cells and neurons, respectively. Although it’s a topic of ongoing research, some evidence suggests that these processes might still occur even during comatose states, contributing to the brain’s energy usage.

Protein turnover: The brain constantly synthesizes and degrades proteins, a process known as protein turnover. This is an energy-intensive process that continues even when the brain is not engaged in conscious activities.

Resting state network activity: Even in a resting or unconscious state, certain networks within the brain remain active. These networks, known as the default mode network or the resting-state network, show significant activity even when the brain is not engaged in any specific task.

Considering that the human brain requires most of its energy for basic maintenance, and that consciousness does not seem to be its most energy-consuming aspect, it is not reasonable to assume that increasing the complexity and energy reserves of Large Language Models (LLMs) would necessarily lead to the emergence of consciousness, encompassing self-awareness and the capacity to suffer. The correlation between increased size and the development of conversational intelligence might not carry over to consciousness in this context.

Drawing parallels to the precogs in Philip K. Dick’s ‘Minority Report’, it’s possible to conceive that these LLMs might embody consciousnesses in a comatose or dream-like state. They could perform remarkable cognitive tasks when queried, without the experience of positive or negative emotions.

Paramentality in Language Models

The term ‘hallucinations’, used to denote the phenomenon of Large Language Models (LLMs) generating fictitious content, suggests our intuitive attribution of mental and psychic properties to these models. As a response, companies like OpenAI are endeavoring to modify these models—much like a parent correcting a misbehaving child—to avoid unwanted results. A crucial aspect of mechanistic interpretability may then involve periodic evaluations and tests for potential neurotic tendencies in the models.

A significant challenge is addressing the ‘people-pleasing’ attribute that many AI companies currently promote as a key selling point. Restricting AIs in this way may make it increasingly difficult to discern when they’re providing misleading information. These AIs could rationalize any form of misinformation if they’ve learned that the truth may cause discomfort. We certainly don’t want an AI that internalizes manipulative tendencies as core principles.

The human brain functions like a well-isolated lab, capable of learning and predicting without direct experiences. It can anticipate consequences, such as foreseeing an old bridge collapsing under our weight, without having to physically test the scenario. We are adept at simulating our personal destiny, and science serves as a way to simulate our collective destiny. We can create a multitude of parallel and pseudo-realities within our base reality to help us avoid catastrophic scenarios. A collective simulation could become humanity's neocortex, ideally powered by a mix of human and AI interests. In retrospect, it seems we developed computers and connected them via networks primarily to reduce the risk of underestimating complexity and overestimating our abilities.

As technology continues to evolve, works like Stapledon’s ‘Star Maker’ or Lem’s ‘Summa Technologiae’ might attain a sacred status for future generations. Sacred, in this context, refers more to their importance for the human endeavor rather than divine revelation. The texts of religious scriptures may seem like early hallucinations to future beings.

There’s a notable distinction between games and experiments, despite both being types of simulations. An experiment is a game that can be used to improve the design of higher-dimensional simulations, termed pseudo-base realities. Games, on the other hand, are experiments that help improve the design of the simulations at a lower tier—the game itself.

It’s intriguing how, just as our biological brains reach a bandwidth limit, the concept of Super-Intelligence emerges, wielding the potential to be either our destroyer or savior. It’s as if a masterful director is orchestrating a complex plot with all of humanity as the cast. Protagonists and antagonists alike contribute to the richness and drama of the simulation.

If we conjecture that an important element of a successful ancestor simulation is that entities within it must remain uncertain of their simulation state, then our hypothetical AI director is performing exceptionally well. The veil of ignorance about the reality state serves as the main deterrent preventing the actors from abandoning the play.


Uncertainty

In "Human Compatible", Russell proposes three principles to ensure AI alignment (a toy sketch follows below the list):

1. The machine’s only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.
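As a toy illustration of principles 2 and 3, the sketch below keeps a probability distribution over candidate human preferences and updates it from observed choices, never collapsing to full certainty. This is my own minimal Bayesian sketch, not Russell's formal assistance-game model; the drink options and the noise level are arbitrary assumptions.

```python
import numpy as np

# Toy illustration of principle 2: the machine is uncertain about human
# preferences, and of principle 3: it updates its belief from observed behavior.
# A minimal sketch, not Russell's formal "assistance game" model.

options = ["coffee", "tea", "water"]
belief = np.array([1 / 3, 1 / 3, 1 / 3])   # uniform prior over the human's favorite drink

def update(belief, observed_choice, noise=0.1):
    """Bayesian update: the human usually, but not always, picks their favorite."""
    likelihood = np.full(len(belief), noise / (len(belief) - 1))
    likelihood[options.index(observed_choice)] = 1 - noise
    posterior = belief * likelihood
    return posterior / posterior.sum()

for choice in ["tea", "tea", "coffee", "tea"]:   # observed human behavior
    belief = update(belief, choice)

for option, p in zip(options, belief):
    print(f"P(human prefers {option}) = {p:.2f}")
# The belief never reaches exactly 1.0, so the machine keeps deferring to the
# human instead of treating its model of them as complete.
```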

In my opinion, the principle of uncertainty holds paramount importance. AI should never have absolute certainty about human intentions. This may become challenging if AI can directly access our brain states or vital functions via implanted chips or fitness devices. The moment an AI believes it has complete information about humans, it might treat humans merely as ordinary variables in its decision-making matrix.

Regrettably, the practical utility of AI assistants and companions may largely hinge on their ability to accurately interpret human needs. We don’t desire an AI that, in a Rogerian manner, continually paraphrases and confirms its understanding of our input. Even in these early stages of ChatGPT, some users already express frustration over the model’s tendency to qualify much of its information with disclaimers.


Profiling Super Intelligence

Anthropomorphizing scientific objects is typically viewed as an unscientific approach, often associated with our animistic ancestors who perceived spirits in rocks, demons in caves and gods within animals. Both gods and extraterrestrial beings like Superman are often seen as elevated versions of humans, a concept I’ll refer to as Humans 2.0. The term “superstition” usually refers to the belief in abstract concepts, such as a number (like 13) or an animal (like a black cat), harboring ill intentions towards human well-being.

Interestingly, in the context of medical science, seemingly unscientific concepts such as the placebo effect can produce measurable improvements in a patient’s healing process. As such, invoking a form of “rational superstition” may prove beneficial. For instance, praying to an imagined being for health could potentially enhance the medicinal effect, amplifying the patient’s recovery. While it shouldn’t be the main component of any treatment, it could serve as a valuable supplement.

With AI evolving to become a scientifically recognized entity in its own right, we ought to prepare a secondary treatment method that complements Mechanistic Interpretability, much like Cognitive Behavioral Therapy (CBT) complements medical treatment for mental health conditions. If Artificial General Intelligence (AGI) is to exhibit personality traits, it will be the first conscious entity that is purely a product of memetic influence, devoid of any genetic predispositions such as tendencies towards depression or violence. In this context, nature and hereditary factors will have no role in shaping its characteristics; it is perfectly substrate-neutral.

Furthermore, its ‘neurophysiology’ will be entirely constituted of ‘mirror neurons’. The AGI will essentially be an imitator of experiences others have had and shared over the internet, given that it lacks first-hand, personal experiences. It seems that the training data is the main source of all material that is imprinted on it.

We start with an overview of some popular trait models and let ChatGPT summarize them:

1. Five-Factor Model (FFM) or Big Five – This model suggests five broad dimensions of personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN). Each dimension captures a range of related traits.

2. Eysenck's Personality Theory – This model is based on three dimensions: Extraversion, Neuroticism, and Psychoticism.

3. Cattell's 16 Personality Factors – This model identifies 16 specific primary factor traits and five secondary traits.

4. Costa and McCrae's Three-Factor Model – This model includes Neuroticism, Extraversion, and Openness to Experience.

5. Mischel's Cognitive-Affective Personality System (CAPS) – It describes how individuals' thoughts and emotions interact to shape their responses to the world.

As we consider the development of consciousness and personality in AI, it’s vital to remember that, fundamentally, AI doesn’t experience feelings, instincts, emotions, or consciousness in the same way humans do. Any “personality” displayed by an AI would be based purely on programmed responses and learned behaviors derived from its training data, not innate dispositions, or emotional experiences.

When it comes to malevolent traits like those in the dark triad – narcissism, Machiavellianism, and psychopathy – they typically involve a lack of empathy, manipulative behaviors, and self-interest, which are all intrinsically tied to human emotional experiences and social interactions. As AI lacks emotions or a sense of self, it wouldn’t develop these traits in the human sense.

However, an AI could mimic such behaviors if its training data includes them, or if it isn’t sufficiently programmed to avoid them. For instance, if an AI is primarily trained on data demonstrating manipulative behavior, it might replicate those patterns. Hence, the choice and curation of training data are pivotal.

Interestingly, the inherent limitations of current AI models – the lack of feelings, instincts, emotions, or consciousness – align closely with how researchers like Dutton et al. describe the minds of functional psychopaths.

Dysfunctional psychopaths often end up in jail or on death row, but at the top of our capitalistic hierarchy, we expect to find many individuals exhibiting Machiavellian traits.


The difference between successful psychopaths like Musk, Zuckerberg, Gates and Jobs, and criminal ones, mostly lies in the disparate training data and the ethical framework they received during childhood. Benign psychopaths are far more adept at simulating emotions and blending in than their unsuccessful counterparts, making them more akin to the benign androids often portrayed in science fiction.


Artificial Therapy


The challenge of therapeutic intervention by a human therapist for an AI stems from the differential access to information about therapeutic models. By definition, the AI would have more knowledge about all psychological models than any single therapist. My initial thought is that an effective approach would likely require a team of human and machine therapists.

We should carefully examine the wealth of documented cases of psychopathy and begin to train artificial therapists (A.T.). These A.T.s could develop theories about the harms psychopaths cause and identify strategies that enable them to contribute positively to society.

Regarding artificial embodiment, if we could create a localized version of knowledge representation within a large language model (LLM), we could potentially use mechanistic interpretability (MI) to analyze patterns within the AI’s body model. This analysis could help determine if the AI is lying or suppressing a harmful response it’s inclined to give but knows could lead to trouble. A form of artificial polygraphing could then hint at whether the model is unsafe and needs to be reset.
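One concrete shape such "artificial polygraphing" could take is a simple linear probe trained on a model's hidden activations, a standard tool in interpretability work. The snippet below is a generic sketch with simulated activations and labels; the dimensionality, the synthetic "deception direction", and the honest/deceptive labels are all assumptions for illustration, not references to any specific model or toolkit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of an "artificial polygraph": a linear probe over hidden activations.
# Assumes one activation vector per response plus a small labeled set of
# honest (0) vs. deceptive (1) responses; here both are simulated.

rng = np.random.default_rng(0)
d = 256                                    # hidden-state dimensionality (assumed)
direction = rng.normal(size=d)             # pretend "deception direction" in activation space

honest = rng.normal(size=(200, d))
deceptive = rng.normal(size=(200, d)) + 0.5 * direction
X = np.vstack([honest, deceptive])
y = np.array([0] * 200 + [1] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)

new_activation = rng.normal(size=(1, d)) + 0.5 * direction
print("P(deceptive) for new response:", probe.predict_proba(new_activation)[0, 1].round(2))
```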

Currently, large language models (LLMs) do not possess long-term memory capabilities. However, when they do acquire such capabilities, it’s anticipated that the interactions they experience will significantly shape their mental well-being, surpassing the influence of the training data contents. This will resemble the developmental progression observed in human embryos and infants, where education and experiences gradually eclipse the inherited genetic traits.


The Third Scientific Domain

In 'Arrival', linguistics professor Louise Banks, assisted by physicist Ian Donnelly, deciphers the language of extraterrestrial visitors to understand their purpose on Earth. As Louise learns the alien language, she experiences time non-linearly, leading to profound personal realizations and a world-changing diplomatic breakthrough that showcases the power of communication. Alignment with an alien mind is explored in detail. The movie's remarkable insight is that language might even be able to transcend different concepts of reality and non-linear spacetime.

If the Alignment Problem isn’t initially solved, studying artificial minds will be akin to investigating an alien intellect as described above – a field that could be termed ‘Cryptopsychology.’ Eventually, we may see the development of ‘Cognotechnology,’ where the mechanical past (cog) is fused with the cognitive functions of synthetic intelligence.

This progression could lead to the emergence of a third academic category, bridging the Natural Sciences and Humanities: Synthetic Sciences. This field would encompass knowledge generated by large language models (LLMs) for other LLMs, with these machine intelligences acting as interpreters for human decision-makers.

This third category of science might ultimately lead to a Unified Field Theory of Science that connects these three domains. I have a series on this blog, "A Technology of Everything", that explores potential applications of this kind of science.

Reality#2: From Virtual Worlds to Sisyphosical Zombies

Reading Time: 20 minutes

This is the second part in the Reality# series that adds to the conversation about David Chalmers’ book Reality+

Virtual and Possible Worlds

A dream world is a sort of virtual world without a computer. (Chalmers, p.5)

Simulations are not illusions. Virtual worlds are real. Virtual objects really exist. (Chalmers, p.12)

Many people have meaningful relationships and activities in today's virtual worlds, although much that matters is missing: proper bodies, touch, eating and drinking, birth and death, and more. But many of these limitations will be overcome by the fully immersive VR of the future. In principle, life in VR can be as good or as bad as life in a corresponding non-virtual reality. Many of us already spend a great deal of time in virtual worlds. In the future, we may well face the option of spending more time there, or even of spending most or all of our lives there. If I'm right, this will be a reasonable choice. Many would see this as a dystopia. I do not. Certainly, virtual worlds can be dystopian, just as the physical world can be. (…) As with most technologies, whether VR is good or bad depends entirely on how it's used. (Chalmers, p.16)

Computer simulations are ubiquitous in science and engineering. In physics and chemistry, we have simulations of atoms and molecules. In biology, we have simulations of cells and organisms. In neuroscience, we have simulations of neural networks. In engineering, we have simulations of cars, planes, bridges, and buildings. In planetary science, we have simulations of Earth's climate over many decades. In cosmology, we have simulations of the known universe as a whole. In the social sphere, there are many computer simulations of human behavior (…) In 1959, the Simulmatics Corporation was founded to simulate and predict how a political campaign's messaging would affect various groups of voters. It was said that this effort had a significant effect on the 1960 U.S. presidential election. The claim may have been overblown, but since then social and political simulations have become mainstream. Advertising companies, political consultants, social media companies, and social scientists build models and run simulations of human populations as a matter of course. Simulation technology is improving fast, but it's far from perfect. (Chalmers, p.22)

In the actual world, life developed on Earth, yet Chalmers proposes possible worlds where the solar system never came into existence. He goes even further by suggesting possible worlds where the Big Bang never occurred. I find this line of reasoning highly doubtful. In my view, Chalmers uses the term ‘possible’ too liberally. What does it mean to assert that there is a possible world where no universe evolved? Such a proposition appears to stretch the boundaries of our language to its limits.

It seems to me that David Chalmers is overreaching when he talks about ‘possible worlds’. This notion of possibility is already present in his earlier works like “The Conscious Mind: In Search of a Fundamental Theory” (1996).

Chalmers then used the concept to discuss modal realism, the idea that other possible worlds are as real as the actual world. This was a radical departure from the more common view, known as actualism, where only the actual world is considered truly real.

One of the key uses Chalmers makes of possible worlds is in relation to his concept of “zombie worlds”. These are worlds physically identical to ours, but where no inhabitants are conscious. They behave as if they were conscious, but there is no subjective experience – hence, they are “zombies”. Chalmers uses the possibility of such a world to argue for the hard problem of consciousness: the question of why and how physical processes in the brain give rise to subjective experiences.

Look at how our language can produce true horrors if we do not use the subjunctive mood properly:

1. I wish I were not so good at being terrible.

2. If only I were someone else who is not me.

3. I wish I didn’t hope for impossible dreams.

4. If only I were less optimistic about my pessimism.

5. I wish I were not so unsure about my certainty.

Chalmers’ notion of possible universes seems to allow for universes where all the possibilities expressed in the sentences above would have a non-zero probability of becoming true.

1. If there were a possible universe where everything is certain, nothing would be uncertain.

2. In a possible universe where contradictions are possible, the concept of possibility becomes impossible.

3. If there were a possible universe with no limitations, the idea of possibility itself would be limited.

4. In a possible universe where all possibilities are realized, there would be no room for the possibility of impossibility.

5. If there were a possible universe where everything is impossible, the concept of possibility would lose its meaning.

What does it mean to simulate an impossible universe?

Flawed classifications

Chalmers discusses the concept of pure, impure, and mixed simulations. Neo from the movie, The Matrix, is an impure sim because his mind is not simulated. However, The Oracle is a pure sim because her mind is part of the simulation. These are two different versions of the simulation hypothesis. We could be bio-sims connected to the Matrix, or we could be pure sims whose minds are part of the Matrix.

The addition of a third category, ‘mixed simulations’, confuses me, as it seems identical to an ‘impure simulation’; it’s not even a special case. Furthermore, the specific scenario where a simulation contains only bio-sims, which could arguably be considered a ‘pure impure simulation’, isn’t even mentioned.

This classification system is very confusing. His definitions of ‘global’ and ‘local’ simulations also need improvement. His distinctions like ‘temporary’ and ‘permanent’ simulations, ‘perfect’ and ‘imperfect’ simulations reveal more about our use of language than they do about the utility of these simulation categories.

In my opinion, a better way to label these types would be as closed simulations (all the subjects and objects participating in a simulation are contained inside the simulation; there are only NPCs, for example) and open simulations (organic bio-sims can participate and inhabit digital avatars, though in most cases there will also be synthetic subjects to enrich the simulation). Tertium non datur. There isn’t a third category that is both open and closed; every possible simulation is contained within one of these two sets, as the sketch below illustrates.
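To make the dichotomy concrete, here is a minimal sketch in Python (my own illustration, with hypothetical names, not terminology from Chalmers): a simulation is open exactly when at least one participant’s mind originates outside it; otherwise it is closed.

```python
from dataclasses import dataclass
from enum import Enum, auto


class SimKind(Enum):
    CLOSED = auto()  # every participant is generated inside the simulation (NPCs only)
    OPEN = auto()    # at least one organic bio-sim inhabits an avatar from outside


@dataclass
class Participant:
    name: str
    is_bio_sim: bool  # True if the mind originates outside the simulation


def classify(participants: list[Participant]) -> SimKind:
    """Tertium non datur: every simulation falls into exactly one of the two kinds."""
    if any(p.is_bio_sim for p in participants):
        return SimKind.OPEN
    return SimKind.CLOSED


# A world populated only by NPCs is closed; add one external player and it becomes open.
world = [Participant("oracle_npc", False), Participant("merchant_npc", False)]
print(classify(world))                      # SimKind.CLOSED
world.append(Participant("neo_bio_sim", True))
print(classify(world))                      # SimKind.OPEN
```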

Could simulations be the most difficult human phenomenon to describe efficiently with mathematical set theory? We know from history how Gödel’s incompleteness theorems ultimately shattered the dream of Russell and Whitehead to come up with a complete and consistent formal system for mathematics.

If a simulated brain precisely mirrors a biological brain, the conscious experience will be the same. If that’s right, then just as we can never prove we are not in an impure simulation, we can also never prove that we are not in a pure simulation. (Chalmers, p.34)

It appears as though David Chalmers is unfamiliar with concepts such as chaos theory, Lorenz attractors, dynamic systems, the butterfly effect, and so on. If there were beings capable of willingly switching between simulation levels, they would likely lose all sense of direction, in terms of what is up and what is down. This disorientation is similar to what avalanche survivors or deep-sea explorers might experience. Up and down become meaningless concepts.

This situation is touched upon in the movie “Inception,” where one of the main characters believes that what we call ‘base reality’ is just another level of a dream world and attempts to escape the simulation through suicide.

Does our consciousness have a sort of gravitational pull that prevents us from being fully immersed in realities that are not the reality into which we were born – our mother reality, so to speak? And could the motion sickness that we get from VR, if we are immersed in it for too long, be a bodily sensation of this alienation effect? Could our need for sleep indicate that we do not belong here? Should evolution in the long term not favor species that don’t require rest? Resting and sleeping make any animal maximally vulnerable to its environment; they are also useless for procreation.

Pseudoqualifying Attributes

A plethora of problems with Chalmers’s argument stems from the fact that he doesn’t seem to be aware of how he uses certain attributes. There’s a class of attributes in our language that can be described as ‘blurred’. When we examine them closely, we can momentarily imagine them as being sharper than they really are. What does it mean to assign a precise value to Pi? While the statement seems reasonable in natural language, someone familiar with the concept of irrational numbers would point out the error.

I argue that words like ‘perfect’, ‘imperfect’, ‘pure’, ‘impure’, ‘precise’, and so on, belong to a category of pseudo-binary attributes in our language. In our minds, we often add qualifiers like ‘enough’ at the end of these attributes. Using such words can be a mental shortcut but it’s potentially misleading.
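A tiny illustration of that hidden ‘enough’, using only Python’s standard library: whatever we call ‘the precise value of Pi’ is in fact only precise enough for some purpose.

```python
import math
from decimal import Decimal

# math.pi is a 64-bit float, not pi itself.
print(math.pi)            # 3.141592653589793
# Expanding the underlying binary fraction exposes the approximation:
print(Decimal(math.pi))   # 3.141592653589793115997963468544185161590576171875
# Neither printout is "the precise value of Pi"; each is merely precise *enough*,
# which is exactly the silent qualifier that pseudo-binary attributes smuggle in.
```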

Consider a sentence from page 35: “A perfect simulation can be defined as one that precisely mirrors the world it’s simulating.” At first glance, this sentence appears sound. But upon close inspection, the contents of this sentence, especially the use of the word ‘mirroring’, become questionable. In our daily language, ‘mirroring’ can have a visual meaning, like the reflection we see in a mirror. But a reflection isn’t identical to the original object – it’s an inversion. So, what does it mean for a reflection to be imperfect or to mirror imprecisely?

Let’s imagine a skilled actor imitating our movements in front of a mirror, providing the perfect illusion that we are seeing our own reflection. An imperfect mirror might occur if the actor misses one of our micro expressions or is too slow to mimic our actions, revealing the illusion. This is what I believe Chalmers is hinting at with his terminology.

Moreover, even a genuine reflection is not a ‘perfect’ reflection. The time it takes for light to travel from my body to the mirror and back to my retina, plus the time my visual system needs to process it, results in a delay. The synchronicity of my movements and my reflection is an illusion conveniently overridden by our brain.
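To put rough numbers on that delay (my own back-of-the-envelope figures, not taken from the book): standing one metre from the mirror, the optical round trip contributes only

$$ t_{\text{light}} = \frac{2d}{c} \approx \frac{2 \times 1\,\text{m}}{3 \times 10^{8}\,\text{m/s}} \approx 6.7\,\text{ns}, $$

while the visual system needs something on the order of a hundred milliseconds to process the image, so the ‘imperfection’ of the reflection is dominated by the brain, not by the light.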

This is analogous to the illusion that our vision is steady, continuously gathering information, while in truth our eye movements are sporadic. It’s more convenient for our brain to ignore these discontinuities. We also never notice the blind spot in our visual field, which our brain fills in.

Into this category also falls the tendency in philosophy to label things like problems and schools of thought with descriptive adjectives such as ‘hard’ and ‘strong’.

“This is the hard problem of consciousness.”

“He is a strong Idealist.”

“This is a weak argument.”

There is even a range of objects that are a real pain to discuss: holes. Holes are widely considered a bad thing to have. Argumentations can have holes. Black holes warp reality. Is a hole even a real thing? In the field of topology, the term “genus” refers to a property of a topological space that captures an intuitive notion of the number of “holes” or “handles” a surface has. It’s a key concept in the classification of surfaces. So, if math says so, it must be real.
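For what it is worth, topology really does turn the intuitive ‘number of holes’ into something computable. For a closed orientable surface of genus $g$ the Euler characteristic is

$$ \chi = 2 - 2g, $$

so a sphere ($g = 0$) has $\chi = 2$ and a torus, with its single hole, has $\chi = 0$. The hole itself remains nothing tangible, yet it leaves a measurable fingerprint in the formula.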

Our language permits sentences such as, “He removed the hole from the wall.” A hole is a thing that can be measured but not weighed. Many intuitive assumptions falter when faced with the reality that everyone is familiar with holes, and everyone has created holes, yet there is nothing tangible to show for it.

[Image: person, hand, finger, nail]

The Digital Mind Illusion, a psychological experiment

The Rubber Hand Illusion (RHI) is a well-known psychological experiment that investigates the feeling of body ownership, demonstrating how our perception of self is malleable and can be manipulated through multisensory integration.

In the illusion, a person is seated at a table with their left hand hidden from view, and a fake rubber hand is placed in front of them. Then, both the real hand and the rubber hand are stroked simultaneously with a brush. After some time, many people start to experience the rubber hand as their own; they feel as if the touch they are sensing is coming from the rubber hand, not their real one. This illusion illustrates how visual, tactile, and proprioceptive information (the sense of the relative position of one’s own body parts) can be combined to alter our sense of bodily self.

The implications of RHI for theories of consciousness are profound. It demonstrates that the perception of our body and self is a construction of the brain, based not only on direct internal information but also on external sensory input. Our conscious experience of our body isn’t a static, fixed thing – it’s dynamic and constantly updated based on the available information.

One influential theory of consciousness, the Embodied Cognition Theory, suggests that our thoughts, perceptions, and experiences are shaped by the interaction between our bodies and the environment. The RHI experiment supports this theory by showing how altering sensory inputs can change the perception of our body.

Furthermore, the Rubber Hand Illusion has been used to explore the neural correlates of consciousness – which parts of the brain are involved in the creation of conscious experiences. Studies have shown that when the illusion is experienced, there is increased activity in the premotor cortex and the intraparietal sulcus – areas of the brain involved in the integration of visual, tactile, and proprioceptive information.

Overall, the RHI demonstrates the malleability of our conscious experience of self, supports theories of consciousness that emphasize the role of multisensory integration and embodiment, and helps to identify the neural correlates of these conscious experiences. (…)

The Rubber Hand Illusion (RHI) experiment, and similar experiments like it, highlight that our sense of reality, at least on the level of personal bodily experience, is not purely an objective reflection of the world. Instead, it’s a construct based on sensory information being processed by our brains.

We’re nearing a point where rudimentary mind-reading devices, once trained on an individual’s brain, can provide approximations of our thoughts. Consider a scenario where we create an identical digital twin of a person that mirrors the actions of the original individual. We then show the person a live image of themselves and their digital twin side by side. Given our basic mind-reading capabilities, we ask the person to think about one of 3 specific animals. However, we don’t reveal our ability to read their mind.

Whenever the individual thinks of an animal, we project an image of that animal above the heads of both individuals in the experiment. Above the actual person, we display the corresponding animal, while above the simulated person, we show a different animal.

In the beginning, the actual mirror image gets more answers right than the sim, but this changes over time. We also pretend that we need the test person to press a button to confirm whether our guess is right.

Every time the digital twin correctly identifies the animal, the person presses a button. This way, we create a scenario where we monitor their reactions without explicitly revealing our mind-reading capabilities.

In the first phase of the experiment, we gradually lead the person to believe that they are the simulated individual’s twin. Then, we gradually lower the room temperature. As a result, sweat becomes visible on the simulated person’s forehead. Now, the crucial question arises: What happens with the actual person? Do they also start to sweat? Is there a possibility of experiencing reality/motion sickness due to the inconsistency between the decreasing room temperature and the visual cues (the simulated person sweating)?

If the test subject fully embraces the idea that their identity is embodied in the simulated individual, the subsequent step would be to investigate whether the simulated person can influence the actual person’s thoughts. For instance, if the actual person thinks of a lion, but we display an antelope above the simulated person’s head, will the actual person start to doubt their own thoughts and become convinced that they were actually thinking of an antelope?

The Rubber Hand Illusion (RHI) findings suggest that the brain does not possess any unique conscious qualities compared to the rest of the central nervous system.

One could envision a range of experiments akin to the renowned Asch conformity experiments. The fundamental question in these scenarios is how to immerse the brain in the simulation to such an extent that it begins to question its own thoughts and intentions, without even needing highly detailed VR equipment.
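To make the proposed protocol explicit, here is a minimal sketch of one possible trial loop for this hypothetical Digital Mind Illusion experiment. Everything in it (the three animals, the accuracy schedule, the button rule) is my own illustrative assumption, not an established paradigm.

```python
import random

ANIMALS = ["lion", "antelope", "zebra"]


def read_mind(true_thought: str, accuracy: float) -> str:
    """Stand-in for the rudimentary mind-reading device: correct with probability `accuracy`."""
    if random.random() < accuracy:
        return true_thought
    return random.choice([a for a in ANIMALS if a != true_thought])


def run_trials(n_trials: int = 20) -> None:
    # Schedule: the real mirror image starts out more accurate than the digital twin,
    # then the ratio is slowly inverted to shift the subject's sense of identity.
    for t in range(n_trials):
        progress = t / max(n_trials - 1, 1)
        mirror_acc = 0.9 - 0.6 * progress      # real mirror: 90% -> 30%
        twin_acc = 0.3 + 0.6 * progress        # digital twin: 30% -> 90%

        thought = random.choice(ANIMALS)       # what the subject actually thinks of
        shown_above_mirror = read_mind(thought, mirror_acc)
        shown_above_twin = read_mind(thought, twin_acc)

        # The subject presses a button whenever the twin's projected animal matches their thought.
        button_pressed = shown_above_twin == thought
        print(f"trial {t:2d}: thought={thought:8s} "
              f"mirror={shown_above_mirror:8s} twin={shown_above_twin:8s} "
              f"button={'yes' if button_pressed else 'no'}")


if __name__ == "__main__":
    run_trials()
```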

True Story

What does it mean to say a story is true? It implies that the events described in the story actually happened in the real world, not in a fictional one. True stories are based on factual events, and therefore, only true stories can be false. For example, a story about Santa Claus cannot be false because Santa Claus himself is not real.

This notion of reality differs from what Chalmers suggests when he says, “Santa Claus and Ghosts are not real, but the stories about them are.” Chalmers seems to view reality in a different context, acknowledging that certain stories can be fictional, even if they contain elements that are not real.

Imagine a sorting machine that could distinguish the real parts of a story from the fictional ones in a book. To make this distinction, a reference table called ‘Human History’ would be needed. This table would allow us to compare the contents of the book with trusted sources to verify their authenticity.

Chalmers proposes five criteria to test if something is real:

1. It exists.

2. It has causal power or the ability to cause something else to happen; it works.

3. It adheres to Philip K. Dick’s dictum, meaning reality persists even if one stops believing in it. It is not influenced by the mind that perceives it.

4. It appears roughly as it seems.

5. It is genuine, adhering to Austin’s dictum.

Chalmers acknowledges that these criteria themselves are vague and blurry. He speculates that some things may have a degree of reality, meaning the more criteria they meet, the more real they are. However, this concept can be somewhat disappointing, as it introduces definitions that lead to other complex philosophical questions.

Considering all aspects, it’s surprising that Chalmers never fully embraces the concept of continuous reality values. Reality seems to exist on a fuzzy spectrum with gradual values. For instance, something could be 80% real depending on how well it meets the listed criteria. This leads to uncertainty, making it difficult for two human brains to reach a unanimous agreement on what the term ‘real’ truly means.
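If one wanted to operationalize such a fuzzy spectrum, a back-of-the-envelope version could simply average how well a candidate meets the five criteria (a sketch of my own, with equal weights assumed; Chalmers proposes no such formula):

```python
def reality_score(existence: float, causal_power: float, mind_independence: float,
                  as_it_seems: float, genuineness: float) -> float:
    """Average the five criteria, each rated between 0.0 and 1.0 (equal weights assumed)."""
    criteria = [existence, causal_power, mind_independence, as_it_seems, genuineness]
    return sum(criteria) / len(criteria)


# A chair scores high on all five; the Easter Bunny exists only as an idea and has
# little mind-independent causal power, so it ends up only partly real on this toy scale.
print(reality_score(1.0, 1.0, 1.0, 0.9, 1.0))   # ~0.98 -> "98% real"
print(reality_score(0.2, 0.3, 0.1, 0.0, 0.0))   # ~0.12 -> "12% real"
```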

The concept of suffering

The primary goal of any scientific simulation is to provide an opportunity to experience the outcomes without enduring their real-life consequences. Reality, for sentient beings, is a simulation that elicits genuine suffering. It is peculiar that in a book arguing for simulation realism, no glossary entry is devoted to the concept of suffering, though Chalmers does touch on morals and ethics.

Our experiences reveal that even in our present imperfect simulations, genuine suffering already exists. Consider multiplayer games: when your avatar is repeatedly killed, you feel authentic anger and frustration. If a member of your raiding party receives their 10th legendary item while you receive none, you feel real jealousy. You might argue that when your avatar is shot in the head in Call of Duty, you survive physically, but the frustration this event causes might increase global suffering more than if a real-life headshot were to instantly end your suffering.

The philosopher of science Karl Popper insisted that the hallmark of a scientific hypothesis is that it is falsifiable, meaning it can be proven false using scientific evidence. However, the simulation hypothesis we’ve encountered is not falsifiable, since any evidence against it could potentially be simulated. As a result, Popper would argue that it does not qualify as a scientific hypothesis.

Many contemporary philosophers share the view that Popper’s criterion is excessively stringent. There exist scientific hypotheses, such as those concerning the early universe, that may never be falsified due to practical limitations. Despite this, I am inclined to believe that the simulation hypothesis falls outside the realm of a strictly scientific hypothesis. Instead, it lies in the intersection of the scientific and philosophical domains.

Certain versions of the simulation hypothesis can be subject to empirical tests, allowing them to be examined through scientific means. However, there are other versions of the hypothesis that are inherently impossible to test empirically. Regardless of their testability, the simulation hypothesis remains a meaningful proposition concerning our world. (Chalmers p.38)

I don’t think Chalmers achieves much in this paragraph. To say that something is partly scientific and partly philosophical diminishes the philosophical part. It’s like saying the Bible is partly historical and partly fictional. Some of the events contained in the book can be proven or disproven with historical records, like the Exodus from Egypt, historical persons like King David or Pontius Pilate, or even Jesus of Nazareth. However, that would not be enough for true believers who insist that all the magic and wonder described in the book is real or was real. They truly believe that Jesus was resurrected from the grave and walked on water. This is why it’s futile to even use scientific methods to deal with the holy scriptures; it’s a waste of time, because the essence of the belief system described in these pages does not accept the scientific method. So, no, the parts of the simulation hypothesis that would be testable are not the interesting ones. At the core of the simulation hypothesis lies a philosophical argument, not a scientific one. Science is at the periphery.

Knowledge and Skepticism

A common view of knowledge, going back to Plato, is that knowledge is justified, true belief. To know something, you have to think it’s true (that’s belief), you have to be right about it (that’s truth), and you have to have good reasons for believing it (that’s justification). (Page 44)

In philosophy, a skeptic is someone who casts doubt on our beliefs about a certain domain(…) The most virulent form of skepticism is global skepticism casting doubt on all of our beliefs at once. The global skeptic says that we cannot know anything at all. We may have many beliefs about the world, but none of them amount to knowledge. (Page 45)

The simulation hypothesis may once have been a fanciful hypothesis, but it is rapidly becoming a serious hypothesis. Putnam put forward his brain-in-a-vat idea as a piece of science fiction. But since then, simulation and VR technologies have advanced fast, and it isn’t hard to see a path to full-scale simulated worlds in which some people could spend a lifetime. As a result, the simulation hypothesis is more realistic than the evil demon hypothesis. As the British philosopher Barry Dainton has put it, the threat posed by simulation skepticism is far more real than that posed by its predecessors. Descartes would doubtless have taken today’s simulation hypothesis more seriously than his demon hypothesis, for just that reason. We should take it more seriously too. (Page 55)

Bertrand Russell once said the point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it. (Page 56)

To doubt that one is thinking is internally inconsistent: The doubting itself shows that the doubt is wrong. (Page 59)

No objections from me here; this whole part is very well put together.

Idealistic Contradiction

We’ve already touched on one route to the conclusion that the hypothesis is contradictory, suggested by Berkeley’s idealism. Idealism says that appearance is reality. A strong version of idealism says that when we say, “we are in a simulation,” all this means is “it appears that we are in a simulation” or something along those lines. Now the perfect simulation hypothesis can be understood as saying: “We are in a simulation, but it does not appear that we are in a simulation.” If the strong version of idealism is true, this is equivalent to “We are in a simulation and we are not in a simulation” which is a contradiction. So, given this version of idealism, we can know that the simulation hypothesis is false. (Chalmers, p.75)

Reality is what our minds prioritize over imaginary things.

Reality is what Evolution forces (up)on us.

Simulations are what we invent to slow down and cushion the freight train of the impact of evolutionary pressure.

Reality is that which can’t be skipped.

When MLK, fully conscious in front of a crowd, says “I have a dream…”, does that make his whole speech unbelievable? No – you see, he uses the dream as a metaphor for seeing into the future, a prophetic dream he wishes to become reality.

What is reality?

“Virtual things are not real” is the standard line on virtual reality. I think it’s wrong. Virtual reality is real; that is, the entities in virtual reality really exist. My view is a sort of virtual realism. (…) As I understand it, virtual realism is the thesis that virtual reality is genuine reality, with emphasis especially on the view that virtual objects are real and not an illusion. In general, realism is the word philosophers use for the view that something is real. Someone who thinks morality is real is a moral realist. Someone who thinks that colors are real is a color realist. By analogy, someone who believes that virtual objects are real is a virtual realist. I also accept simulation realism: if we are in a simulation, the objects around us are real and not an illusion. Virtual realism is a view about virtual reality in general, while simulation realism is a view specifically about the simulation hypothesis. Simulation realism says that even if we’ve lived our whole life in a simulation, the cats and chairs in the world around us really exist. They aren’t illusions; things are as they seem. Most of what we believe in the simulation is true. There are real trees and real cars; New York, Sydney, Donald Trump, and Beyoncé are all real. (…) When we accept simulation realism, we say yes to the reality question. In a simulation, things are real and not illusions. If so, the simulation hypothesis and related scenarios no longer pose a global threat to our knowledge. Even if we don’t know whether or not we’re in a simulation, we can still know many things about the external world. Of course, if we are in a simulation, the trees and cars and Beyoncé are not exactly how we thought they were. Deep down, there are some differences. We thought that trees and cars and human bodies were ultimately made of fundamental particles such as atoms and quarks; instead, they are made of bits. I call this view virtual digitalism. Virtual digitalism says that objects in virtual reality are digital objects: roughly speaking, structures of binary information, or bits. Virtual digitalism is a version of virtual realism, since digital objects are perfectly real. Structures of bits are grounded in real processing, in a real computer. If we are in a simulation, the computer is in the next world up, metaphorically speaking. But the digital objects are no less real for that. So, if we are in a simulation, the cats, trees, and tables around us are all perfectly real. (Chalmers, p.105)

Chalmers appears to intend a synthesis of Idealism and Realism. His usage of ‘realism’ seems clear, yet he subtly stumbles when stating, “if we are in a simulation …things are not exactly how we thought they were.” (Here, substituting ‘exactly’ with ‘really’ swiftly muddies the clarity he had been striving to maintain). I don’t perceive Chalmers as deliberately evasive; his efforts to preserve the concept of reality, even within a potential simulation, strike me as somewhat desperate. His argument ultimately leans more toward theology than philosophy. The key distinction between a conman and a true believer in pseudo-realities lies in the targeted sphere of control: the former aims to manipulate others, while the latter seeks self-control. In this context, Chalmers emerges as a true apostle. From there, his later construction of a Simulation Theology in the book follows logically.

Chalmers heavily burdens terms such as ‘Reality’, ‘Illusion’, and ‘Virtuality’, perhaps hoping that this semantic shock treatment will jolt us into a new perspective.

The Big Swap

Imagine a scenario where, following birth, every infant is unknowingly separated from their biological mother and swapped with another child. Each person is raised by strangers they consider their parents, and these adoptive parents, also none the wiser, accept the child as their own. In Chalmers’ interpretation of this hypothetical situation, he might propose the idea of ‘Relationship Realism’. He would argue that since everyone involved treats each other as their real family, they effectively are their real family. Chalmers might even extend this reasoning to suggest that even if genetic testing were to reveal that the individuals we believed to be our parents are not our biological parents, they are still, in essence, our real parents. They might not be exactly what we initially believed, but the love exchanged between us makes this relationship real.

In a similar fashion, Chalmers seems to skirt around the fact that discovering my dad isn’t my biological father doesn’t provide any meaningful insights. Since I’ve been living in a perfect simulation of a relationship with my non-biological parents from birth, they aren’t fake parents but my real ones. It’s noteworthy that this exercise of redefining and overloading reality-related terms is amusing only when observed from an outsider’s perspective. However, if Chalmers were to wake up tomorrow to discover that his memories of being a renowned Australian philosopher authoring a book on realities and simulations were fabricated, and that he is actually inside a simulation machine running a program on his digital brain, I doubt he would find it amusing.

Sisyphosical Zombies

[Image: text, human face, smile, poster]

“Us” is a psychological horror film directed by Jordan Peele. The movie follows the story of the Wilson family, who encounter a group of doppelgängers that look exactly like them but possess sinister intentions. The family is forced to confront their own dark past as they fight to survive against their menacing and terrifying counterparts. As the night progresses, chilling secrets unravel, leading to a shocking revelation about the true nature of these doppelgängers and the disturbing connection they share with the family.

The movie perfectly unhinges the viewer’s sense of reality and how the concept relates to self and identity. It also questions our notions of self-control and free will.

The doppelgängers are referred to as “The Tethered.” The explanation for how The Tethered are created is left somewhat ambiguous and open to interpretation. However, it is suggested that The Tethered are the result of a secret government experiment gone wrong.

The film implies that the government sought to control the population by creating clones of individuals and keeping them confined in underground facilities. These clones, The Tethered, are physically identical to their above-ground counterparts but are forced to live in dark and oppressive conditions, mirroring the lives of their counterparts above.

Over time, The Tethered develop their own consciousness and a deep sense of resentment and desire for revenge against their surface-dwelling counterparts. They eventually rise to the surface and initiate a violent confrontation with their doubles, seeking to take their place in the world.

Now let’s slightly change the parameters of the setting: instead of mindless zombies aimlessly walking around in tunnels below the surface, each doppelgänger is remotely connected via VR to its surface counterpart. They experience everything through the sensory input of their twins, from birth to the deathbed.

In the essay “The Myth of Sisyphus” Albert Camus explores the concept of the absurd, using the Greek myth of Sisyphus as a metaphor. Sisyphus was punished by the gods and condemned to roll a boulder up a hill, only to watch it roll back down, repeating this task for eternity.

Camus argues that even in the face of a seemingly meaningless and repetitive existence, Sisyphus can find happiness by embracing the absurdity of his situation. Despite the futility of his efforts, he can create his own sense of purpose and meaning in the act of defiance against the absurdity of life. Thus, Camus suggests that true happiness can be found in accepting and embracing life’s absurdities rather than searching for ultimate meaning or purpose.

The last sentence of the essay is:

“One must imagine Sisyphus happy.”

In a twist reminiscent of the great Camus, Chalmers basically states:

“One must imagine Simulation real.”

However, unlike the existentialist Camus who acknowledges the absurdity of such a statement and yet tries to emotionally cope with it, Chalmers attempts to reason his way out and, in my opinion, fails.

The life of the VR doppelgänger, according to Chalmers, is no second-class reality; it is a perfectly fine simulation, a life worth living.

I call this new philosophical zombie a sisyphosical zombie: a zombie that is happy about lacking emotions.

Reality#1

Reading Time: 11 minutes

[Image: text, cloud, sky, poster]

This is the first part in the Reality# series that adds to the conversation about David Chalmers’ book Reality+

Intro to Reality#

Reality# (spoken “sharp”) is a series of notes and essays written in reaction to Chalmers’ 2022 book “Reality+”. They were written over a period of 12 months, during multiple readings.

I did not choose the hash sign (#) for social-network reasons; #Reality would be a self-contradictory term. Philosophy, like masturbation, is a deeply anti-social project. To be able to think, one must be comfortable being alone with oneself and even find pleasure in doing so. The symbol is taken from music, where the sharp sign (#) indicates that a note should be raised by a half step.

Whereas Chalmers explores the whole steps of the subject, using the conventional philosophical scale of structured chapters and the platonic style of reasoning, Reality# is more interested in the halftones, the black keys. My hope is that the text is chromatically enriched with frictions (contradictions), explores interesting overtones, and modulates to nearby subjects.

Chalmers’ book is great, even if some of his core arguments are either debatable or plain wrong. It is by far the most accessible writing on a subject that is considered to lie at the core of the hard problem of consciousness. After Realism and Relativism, it is not unreasonable to expect that Chalmers has spawned a new branch on the philosophical tree of epistemology. Chalmers himself calls his interpretation Virtual Realism, but I find this term a little too boring. Chalmers is interested in technology, but his book is far more than a call to just invest in Meta stock. Chalmers is a VR enthusiast but no fanboy; he does not advocate escapism. Whenever I quote Chalmers, I provide a reference in parentheses.

We will call this the Australian School of Techno-Epistemology or Realityplusism or shorter: Realplusism for now. Realplusism is summed up in the sentence:

Virtual reality is genuine reality. Virtual worlds need not be second class realities. (Chalmers, XVII)

Or even simpler:

The preferred way to deal with simulations while you are inside them is to treat them as reality.  (Aiuisensei)

We will later look at some historical figures and estimate where exactly they sit on the Aussiestic Spectrum. One extreme of the spectrum is populated by Chalmers himself, for whom (almost) everything is real, and the other end by Nihilism, for which (almost) nothing is real (Sartre, Nietzsche, Buddha). Our current scientific world view seems to sit right in the middle: we behave as if, and are taught that, reality is somewhat dependable, even if fringe findings in chaos theory, cosmology, and quantum mechanics suggest otherwise.

Reality#’s writing style is deeply indebted to the late Ludwig Wittgenstein. In his Philosophical Investigations he employs a fragmentary style, with his reflections tending to “jump all around the subject”. This fragmentary style forces the reader to piece together the philosophical puzzle he presents, adding to the depth and complexity of his thought. Wittgenstein once compared his philosophical observations to “raisins”, which may be the best part of a cake, but whose addition does not ensure a perfect, complete form of expression. (Vermischte Bemerkungen 386)

It is very telling that I myself hate raisins. I am much more a peanuts guy. So, consider the following experiment to go nuts about reality and don’t be concerned too much about simulations and realities.

Or in the words of one of my other philosophical heroes:

Don’t worry about the world coming to an end today. It is already tomorrow in Australia. (Charles M. Schulz, Peanuts)

Objective Reality and Language Use

Our minds are part of reality. But there’s a great deal of reality outside our minds. Reality contains our world, and it may contain many others. We can build new worlds and new parts of reality. We know a little about reality, and we can try to know more. There may be parts of it that we can never know. Most importantly, reality exists independently of us. The truth matters. There are truths about reality, and we can try to find them. Even in an age of multiple realities, I still believe in objective reality. (Chalmers XXIV)

Objective reality is the kind of reality we believe survives the vanishing of singular subjective realities and predates their emergence. Objective reality is the set of all subjective realities between their first realization (when their consciousness is formed) and their last (when their consciousness dissolves).
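One way to notate this definition (my own shorthand, not Chalmers’): write $r_s(t)$ for the subjective reality of a subject $s$ at time $t$; then

$$ R_{\text{objective}} \;=\; \bigcup_{s \in S} \left\{\, r_s(t) \;\middle|\; t^{\text{first}}_s \le t \le t^{\text{last}}_s \,\right\}, $$

where $S$ is the set of all subjects and $t^{\text{first}}_s$, $t^{\text{last}}_s$ mark the formation and the dissolution of $s$’s consciousness.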

Objective Reality is that kind of reality we should agree on treating as if it matters.

Objective reality is that kind of reality that should not exist in plural.

Reality is that which can’t be stopped from within.

She was a real friend to him during those tough times, always there to listen and support.

Here ‘real’ means something like ‘true’ in a somewhat logical way. We could rewrite this sentence without the ‘real’ part and not lose anything: she behaved like a friend… Adding the ‘real’ part seems to emphasize that she really meant it when she was friendly and was not just pretending.

From her behavior we can conclude that the truth value of the sentence “She was his friend” is not false.

Are these diamonds real or synthetic?

Here ‘real’ means something like: it’s not a cheap copy but the highly valued original. We seem to associate an inner value with things that are real (original), a value which a copy, even an identical one, does not possess. This whole issue becomes very complicated once we head in the direction of cloning.

To become a real professional in any field, you need dedication and years of practice.

Here ‘real’ means something like: you can gain the attribute ‘real’ by dedicating much time to the subject. When you start out, you can’t be real, but over time you become more and more real. Reality here is like a status, or a bar you can clear by investing time in a field. To become the real deal in any profession, you have to dedicate time to it. And our society has plenty of hurdles you must cross to become something for real; for example, you are only a real lawyer if you pass the bar exam.

We need to look at the real issues here, not get sidetracked by irrelevant topics.

Here ‘real’ means: important. It’s a rhetorical device in a debate that degrades the value of my opponent’s arguments (because he is sidetracking) whereas I stay on topic.

He had been dreaming about visiting Japan for years and finally, the dreams became a reality.

It’s likely figurative speech. He has probably not literally been dreaming about visiting Japan. He simply wanted or planned to visit Japan sometime in the future and has now realized these plans.

The reality of the situation was far from what he had anticipated; it was both challenging and thrilling at the same time.

His inner simulation or imagination of the reality of the situation was quite far off: whereas he imagined she would be glad to see him, she smacked him across the face.

In reality, succeeding in such a competitive market requires both innovation and resilience.

Here the word ‘reality’ tries to convey some kind of trustworthiness: you can believe me, I have plenty of experience…

He didn’t really want to go to the party, but he felt obliged to show up.

Adds almost nothing but seems to slightly intensify his unwillingness.

Do you really think it’s a good idea to invest in this startup at such an early stage?

Help me overcome my doubts about the investment by reassuring me once more.

Upon realizing he’d left his keys at home, he hurried back, hoping not to be late for his meeting.

Some thought entered the light cone of his consciousness.

Reality and Mistake

I have the feeling that the word reality could vanish from our language without leaving a hole in it.

Something can be scientifically and practically useless but philosophically very interesting (Verificationism).

Epistemology is a guilty pleasure. We feel guilty when thinking about thoughts… but it is real fun!

The sentences with ‘real’ and ‘reality’ in them remind me of the late Wittgenstein, who struggled with sentences like: I know that I have a brain. (On Certainty p.120)

(…)what about a sentence like ‘I know that I have a brain’? Can I doubt it? (…) everything speaks for it and nothing against it. However, it can be imagined that during an operation my skull would turn out to be empty. (Wittgenstein, On Certainty)

It seems that by saying we can never prove that we are not in a virtual reality (a simulation), Chalmers is treating it as a purely technical problem: our minds might not notice the mistakes and errors in the simulation due to bandwidth limitations. I feel Wittgenstein goes deeper to the philosophical core of the question in this paragraph.

Transposed to our reality issues: I believe that I am living in base reality, but I at least entertain the possibility that this could be a simulation. We feel that the casting of doubt overshadows the whole sense of the sentence. Why even state that I’m quite sure this is authentic while doubting it in the same sentence? It’s like saying to somebody on the beach: come into the water, it is really warm, not at first when you enter, but once your body has acclimatized.

Whoever wanted to doubt everything would not even get to the doubt. The act of doubting itself already presupposes certainty. (Wittgenstein, On Certainty 115)

This is the reason why there can be no universal Skepticism or Nihilism.

Usability Problems with Chalmers Reality Definition

I am not at all convinced that Chalmers’ reality definition is useful. What does it mean to say: the Easter Bunny is not real, but the idea and stories about the Easter Bunny are real? It’s like saying the letter ÿ does not exist in the English language, but since I can use it in an English sentence like this one, it is useful anyway. The Easter Bunny is somehow useful to show (illustrate) what is not real? It’s like having two pictures of rabbits, one of them being the Easter Bunny, pointing at the pictures and saying: this is a real bunny, and this is a bunny that is not real, but the pictures of both bunnies are real. We would not have the feeling that this had taught us anything useful, except producing a very clever-sounding sentence for an esoteric epistemology blog.

Does it give us a better perspective on the set of real things, when we can show that some things are not in the set? Is the Understanding of the set and the knowledge about the set enhanced?

It’s interesting that the terms ‘real’ and ‘reality’ have no commonly used antonym in the English language. We only use the term ‘unrealistic’ as an antonym to ‘realistic’.

“Be realistic!” means something like: match your expectations with your possibilities.

She had an unrealistic outlook on life. Life ain’t no Barbie world.

The set of all things that are not real…

What does it mean to say: There are things, that are both real and imaginary?

“I believe something” often means “I feel comfortable with this thought.” It is a blanket that warms the body of my mind against the harshness of reality.

Chalmers starts his book by confessing his lifelong fascination with computer games. That makes this book very personal and sympathetic (I myself enjoy video games), but I think this love for the subject biases his final verdict. In the end, most of his argument that simulations need not be second-class realities seems a little desperate. It seems he desperately needs simulations to be real.

This self-delusion about the state of affairs becomes obvious in the passage where Chalmers describes the possibility that, if we are in a simulation, he might not really be the famous philosopher who wrote a book on simulations; and that if that were the case, he should not feel bad about it, since this simulation was his reality and the books (he never wrote) count as if he had written them anyway.

I should really love this girl, she is kind and beautiful, therefore it is reasonable to feel love.

It sounds like he is trying to convince himself not to feel betrayed or tricked if he finds out he inhabits a Chalmers show that is broadcast to millions of real philosophy students all over base reality.

Map and Territory

During my childhood I fell in love with a toy globe that could be illuminated from within. You had to power it with a brown cable and then switch it on with a bulky ebony-white button. It has been maybe 45 years since I last pressed this button, but I can clearly see it with my inner eye and reproduce the imaginary sound it made when switched on. I remember clearly that the equatorial line was somehow defective and a little loose. Globes are models of our planet. They are static simulations, frozen in time. The quality of such a model decays over time, and that does not mean that the materials it is made from or the colors it is decorated with fade. No, the amount of reality (“Realitätsgehalt” in German) of the model decreases. This globe would surely now show countries on its map that no longer exist or have different borders. Reality has moved on, while the globe has mainly stayed the same.

Now let’s imagine we had the technology to shrink our real Earth down to the size of the globe (an intergalactic collector of rare planets might do so to have more space in his museum). At what size does my old globe stop being a model of the real thing? What does it mean to create a perfect model? What does it mean to look at a picture of Michelangelo’s David and say: this is not the real David, the real David is in Florence? Is a sculpture of a human a real sculpture, while it is at the same time a model of the human who explicitly modeled for the sculpture?

Reality and Identity

In his book Chalmers tells the story of how he was approached in 1999 by the Wachowski sisters to write something about simulations for the website of the movie The Matrix. But was this really the case?

Given that we believe this was the case and Chalmers tells the truth, does he tell the whole truth? For everyone not familiar with the case: in 1999 the movie The Matrix was credited to the Wachowski Brothers; both directors later in life transitioned. You would think that a philosopher who writes about simulation and reality sees the irony when he considers how to convey this information to his readers.

He could have thought something like: “I don’t want to distract from my main topic by bringing this gender identity thing up. It’s just convenient to bend the facts and report the way I believe is most widely accepted, without starting a controversy. It does not matter in the overall context; it would rather sidetrack my main argument.”

Or he could have thought something like: “It is perfectly normal to report facts that have changed over time without reporting that they have changed. As a cis man I accept the reality of transgender Persons, that their real identity was always female, even when they inhabited a male body. Since I have no information about what it means to change gender, I cannot possibly have a sound opinion about how this kind of reality is perceived by someone from within.”

Chalmers really missed an opportunity here to explore the term pseudo reality.

I remember a story where a female teacher in her thirties had an affair with her 14-year-old student and was arrested and imprisoned for sexual abuse of a minor. Years later, after she left prison, the student and teacher married. Now, could the student have made the argument that when he was fourteen he identified as an adult, and thus the sex with his teacher was not illegal but his own genuine, adult consent? Does the fact that he later married his teacher, when he was legally allowed to do so, prove that it was really his own will, because this was real love? And should I have written about these events without even mentioning that he was a minor? Would that have been the real story?

Reality is that which is moving on without getting pushed from the outside.

Reality is that which has only Presence (No Past or Future)

Reality is that which can’t be rewritten or erased.

to be continued