Memetic Investigations 1: Foundations

Reading Time: 7 minutes

This series will investigate the phenomenon of Attentional Energy and why it drives intelligent agents, natural born or otherwise created. The Framework of Attention that I use is Memetics. It will be crucial to understand why biological evolution switched from vertical, hereditary mechanisms of mutation and inheritance to horizontal, memetic means of information transport, and why the brain and its neural content became the motor of this evolution. In later Episodes I will show why Simulations are crucial and why it is no mere coincidence that the most productive playground for technological and other innovation is founded in the excessive Game Drive of higher mammals.

Short Introduction to Memes and Tokens

Survival machines that can simulate the future are one jump ahead of survival machines who can only learn on the basis of overt trial and error. The trouble with overt trial is that it takes time and energy. The trouble with overt error is that it is often fatal… The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology.

Richard Dawkins

Ch. 4. The Gene machine – The Selfish Gene (1976, 1989)

“The Selfish Gene,” authored by Richard Dawkins and first published in 1976, is a seminal work that popularized the gene-centered view of evolution. Dawkins argues that the fundamental unit of selection in evolution is not the individual organism, nor the group or species, but the gene. He proposes that genes, as the hereditary units, are “selfish” in that they promote behaviors and strategies that maximize their own chances of being replicated. Through this lens, organisms are viewed as vehicles or “survival machines” created by genes to ensure their own replication and transmission to future generations.

Dawkins introduces the concept of the “meme” as a cultural parallel to the biological gene. Memetics, as defined by Dawkins, is the theoretical framework for understanding how ideas, behaviors, and cultural phenomena replicate and evolve through human societies. Memes are units of cultural information that propagate from mind to mind, undergoing variations, competition, and inheritance much like genes do within biological evolution. This concept provides a mechanism for understanding cultural evolution and how certain ideas or behaviors spread and persist within human populations.

Dawkins’s exploration of memetics suggests that just as the survival and reproduction of genes shape biological evolution, memes influence the evolution of cultures by determining which ideas or practices become widespread and which do not. The implications of this theory extend into various fields, including anthropology, sociology, and psychology, offering insights into human behavior, cultural transmission, and the development of societies over time.

Tokens in the context of language models, such as those used in GPT-series models, represent the smallest unit of processing. Text input is broken down into tokens, which can be words, parts of words, or even punctuation, depending on the tokenization process. These tokens are then used by the model to understand and generate text. The process involves encoding these tokens into numerical representations that can be processed by neural networks. Tokens are crucial for the operation of language models as they serve as the basic building blocks for understanding and generating language.
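
To make this concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer; the library choice and the sample sentence are my own, and the exact splits depend on the encoding used.

```python
# A minimal tokenization sketch using the open-source tiktoken library
# (pip install tiktoken). "cl100k_base" is the encoding used by GPT-4-class
# models; treat the exact token splits as illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Memes propagate by leaping from brain to brain."
token_ids = enc.encode(text)                   # text -> numerical token IDs
tokens = [enc.decode([t]) for t in token_ids]  # surface form of each token

print(token_ids)  # the list of integers the neural network actually sees
print(tokens)     # word pieces such as 'Mem', 'es', ' propagate', ...
```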

Memes encompass ideas, behaviors, styles, or practices that spread within a culture. The meme concept is analogous to the gene in that memes replicate, mutate, and respond to selective pressures in the cultural environment, thus undergoing a type of evolution by natural selection. Memes can be anything from melodies, catch-phrases, fashion, and technology adoption, to complex cultural practices. Dawkins’ main argument was that just as genes propagate by leaping from body to body via sperm or eggs, memes propagate by leaping from brain to brain.

Both memes and tokens act as units of transmission in their respective domains. Memes are units of cultural information, while tokens are units of linguistic information.

There are also differences.

Memes evolve through cultural processes as they are passed from one individual to another, adapting over time to fit their cultural environment. Tokens, however, do not evolve within the model itself; they are static representations of language used by the model to process and generate text. The evolution in tokens can be seen in the development of better tokenization techniques and models over time, influenced by advancements in the field rather than an adaptive process within a single model.

Memes replicate by being copied from one mind to another, often with variations. Tokens are replicated exactly in the processing of text but can vary in their representation across different models or tokenization schemes.
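
A brief illustration of that last point, again assuming tiktoken is available: the same sentence maps to different token IDs, and even a different token count, under two different encoding schemes.

```python
# The same string is tokenized differently under different schemes.
# "gpt2" and "cl100k_base" are two real encodings shipped with tiktoken.
import tiktoken

text = "Attention is all you need"
for name in ("gpt2", "cl100k_base"):
    enc = tiktoken.get_encoding(name)
    ids = enc.encode(text)
    print(f"{name}: {len(ids)} tokens -> {ids}")
```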

The selection process for memes involves cultural acceptance, relevance, and transmission efficacy, leading to some memes becoming widespread while others fade. For tokens, the selection process is more about their effectiveness in improving model performance, leading to the adoption of certain tokenization methods over others based on their ability to enhance understanding or generation of language. During training, token sequences are also weighed by human minds (meme machines) and selected for attraction: token pools that are better liked have a higher probabilistic chance of occurring.

Memeplexes can be complex and abstract, encompassing a wide range of cultural phenomena, but the individual memes they contain are simple and elementary.

Tokens are generally even simpler, representing discrete elements of language, though the way these tokens are combined and used by the model can represent complex ideas.


The title of the Google paper "Attention Is All You Need" is a bold statement that reflects a significant shift in the approach to designing neural network architectures for natural language processing (NLP) and beyond. Published in 2017 by Vaswani et al., this paper introduced the Transformer model, which relies heavily on the attention mechanism to process data. The term "attention" in this context refers to a technique that allows the model to focus on different parts of the input data at different times, dynamically prioritizing which aspects are most relevant for the task at hand.

Before the advent of the Transformer model, most state-of-the-art NLP models were based on recurrent neural networks (RNNs) or convolutional neural networks (CNNs), which processed data sequentially or through local receptive fields, respectively. These approaches had limitations, particularly in handling long-range dependencies within the data (e.g., understanding the relationship between two words far apart in a sentence).

The attention mechanism, as utilized in the Transformer, addresses these limitations by enabling the model to weigh the significance of different parts of the input data irrespective of their positions. This is achieved through self-attention layers that compute representations of the input by considering how each word relates to every other word in the sentence, allowing the model to capture complex dependencies and relationships within the data efficiently.
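
To make the mechanism concrete, here is a minimal NumPy sketch of the scaled dot-product self-attention described above. The toy sizes and random weights are illustrative assumptions; real Transformers add multiple heads, learned per-head projections, and positional encodings.

```python
# Minimal sketch of scaled dot-product self-attention in plain NumPy.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # how much each word attends to every other word
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # softmax over positions
    return weights @ V                         # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                    # 5 "words", embedding size 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # -> (5, 8)
```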

The key innovation of the Transformer and the reason behind the paper’s title is the exclusive use of attention mechanisms, without reliance on RNNs or CNNs, to process data. This approach proved to be highly effective, leading to significant improvements in a wide range of NLP tasks, such as machine translation, text summarization, and many others. It has since become the foundation for subsequent models and advancements in the field, illustrating the power and versatility of attention mechanisms in deep learning architectures.

There is a point to be made that this kind of attention is the artificial counterpart to the natural instinct of love that binds mammal societies. Which would mean that the Beatles were right after all.

An in-formation that causes a trans-formation

What we mean by information — the elementary unit of information — is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them.

Gregory Bateson

p. 459, Chapter “Form, Substance and Difference” – Steps to an Ecology of Mind (1972)

The Transformer architecture was already hinted at by Bateson in 1972, decades before we knew about neural plasticity.

Bateson’s idea revolves around the concept that information is fundamentally a pattern or a difference that has an impact on a system’s state or behavior. For Bateson, not all differences are informational; only those that lead to some form of change or response in a given context are considered as conveying information. This perspective is deeply rooted in cybernetics and the study of communication processes in and among living organisms and machines.

The quote “a difference that makes a difference” encapsulates the notion that information should not be viewed merely as data or raw inputs but should be understood in terms of its capacity to influence or alter the dynamics of a system. It’s a foundational concept in understanding how information is processed and utilized in various systems, from biological to artificial intelligence networks, emphasizing the relational and contextual nature of information.

This concept has far-reaching implications across various fields, including psychology, ecology, systems theory, and artificial intelligence. It emphasizes the relational and contextual nature of information, suggesting that the significance of any piece of information can only be understood in relation to the system it is a part of. For AI and cognitive science, this principle underscores the importance of context and the interconnectedness of information pathways in understanding and designing intelligent systems.

Hinton, Sutskever, and others consistently argue that for models like GPT-4 to achieve advanced levels of natural language processing (NLP), they must truly grasp the content with which they are dealing. This understanding comes from analyzing vast amounts of digital data created by humans, allowing these models to form a realistic view of the world from a human perspective. Far from being mere "stochastic parrots," as sometimes depicted by the media, these models offer a more nuanced and informed reflection of human knowledge and thought processes.

Reality#3: Another One Bites the Dust – Diffusion & Emergence

Reading Time: 6 minutes

This is the third part in the Reality# series, which adds to the conversation about David Chalmers' book Reality+.

(…) for dust thou art, and unto dust shalt thou return.

(Genesis 3:19)

Permutation +

Imagine waking up and discovering that your consciousness has been digitized, allowing you to live forever in a virtual world that defies the laws of physics and time. This is the core idea from Permutation City by Greg Egan. The novel explores the philosophical and ethical implications of artificial life and consciousness, thrusting the reader into a future where the line between the real and the virtual blurs, challenging our understanding of existence and identity.

A pivotal aspect of the book is the Dust Theory, which suggests that consciousness can arise from any random collection of data, given the correct interpretation. This theory expands the book’s exploration of reality, suggesting that our understanding of existence might be far more flexible and subjective than we realize.

The novel’s climax involves the creation of Permutation City, a virtual world that operates under its own set of rules, independent of the outside world. This creation represents the ultimate escape from reality, offering immortality and infinite possibilities for those who choose to live as Copies. However, it also presents ethical dilemmas about the value of such an existence and the consequences of abandoning the physical world.

In “Reality+: Virtual Worlds and the Problems of Philosophy,” philosopher David Chalmers employs the Dust Theory, a concept originally popularized by Greg Egan’s Permutation City, to underpin his argument for virtual realism. Chalmers’s use of the Dust Theory serves as a bridge connecting complex philosophical inquiries about consciousness, reality, and virtual existence. Imagine a scenario where every speck of dust in the universe, through its random arrangement, holds the potential to mirror our consciousness and reality.

Chalmers posits that virtual worlds created by computers are genuine realities, leveraging the Dust Theory to argue that consciousness does not require a physical substrate in the traditional sense. Instead, it suggests that patterns of information, irrespective of their physical form, can give rise to conscious experiences. This theory becomes a cornerstone for virtual realism, asserting that our experiences in virtual environments are as authentic as those in the physical world.

Diffusion Models and Smart Dust

The concept of smart dust is explored in various science fiction stories, academic papers, and speculative technology discussions. One notable science fiction story that delves into the idea of smart dust is “The Diamond Age” by Neal Stephenson. While not exclusively centered around smart dust, the novel features advanced nanotechnology in a future world, where nanoscale machines and devices permeate society. Smart dust, in this context, would be a subset of the nanotechnological wonders depicted in the book, functioning as tiny, networked sensors and computers that can interact with the physical and digital world in complex ways.

Another relevant work is "Queen of Angels" by Greg Bear, which, along with its sequels, explores advanced technologies, including nanotechnology, and their societal impacts. Although not explicitly called "smart dust," the technologies in Bear's universe can be seen as precursors or analogs to the smart dust concept. These examples illustrate how smart dust, as a concept, crosses the boundary between imaginative fiction and emerging technology, offering a rich field for exploration in both narrative and practical innovation.

We have here a very convincing example of how Life imitates Art: Scientific Knowledge transforms religious (prescientific) intuition into operational technology.

Diffusion models in the context of AI, particularly in multimodal models like Sora or Stability AI’s video models, refer to a type of generative model that learns to create or predict data (such as images, text, or videos) by gradually refining random noise into structured output. These models start with a form of chaos (random noise) and apply learned patterns to produce coherent, detailed results through a process of iterative refinement.
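
As a toy illustration of this iterative refinement, the sketch below runs a denoising loop in plain Python. The function denoise_step is a stand-in I invented for the trained network; it merely pulls the sample toward a fixed target, so it shows only the direction of travel from noise to structure, not a real diffusion sampler.

```python
# Toy sketch of iterative refinement: start from pure noise and repeatedly
# nudge the sample toward the data distribution.
import numpy as np

def denoise_step(x, t, target):
    # a real model would predict the noise to remove at step t;
    # this stand-in moves a fraction of the way toward `target`
    return x + (target - x) * (1.0 / t)

rng = np.random.default_rng(42)
target = np.array([1.0, -2.0, 0.5])   # pretend "data" the model has learned
x = rng.normal(size=3)                # step 0: pure random noise
for t in range(50, 0, -1):            # iterative refinement, coarse to fine
    x = denoise_step(x, t, target)

print(x)  # ends up at the structured target
```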

Smart dust represents a future where sensing and computing are as pervasive and granular as dust particles in the air. Similarly, diffusion models represent a granular and ubiquitous approach to generating or transforming multimodal data, where complex outputs are built up from the most basic and chaotic inputs (random noise).

Just as smart dust particles collect data about their environment and iteratively refine their responses or actions based on continuous feedback, diffusion models iteratively refine their output from noise to a structured and coherent form based on learned patterns and data. Both processes involve a transformation from a less ordered state to a more ordered and meaningful one.

Quantum Level achieved

Expanding on the analogy between the quantum world and diffusion models in AI, we delve into the fascinating contrast between the inherent noise and apparent disorder at the quantum level and the emergent order and structure at the macroscopic level, paralleled by the denoising process in diffusion models.

At the quantum level, particles exist in states of superposition, where they can simultaneously occupy multiple states until measured. This fundamental characteristic introduces a level of uncertainty and noise, as the exact state of a quantum particle is indeterminate and probabilistic until observation collapses its state into a single outcome. The quantum realm is dominated by entropy, where systems tend toward disorder and uncertainty without external observation or interaction.

In contrast, at the macroscopic scale, the world appears ordered and deterministic. The chaotic and probabilistic nature of quantum mechanics gives way to the classical physics that governs our daily experiences. This emergent order, arising from the complex interactions of countless particles, follows predictable laws and patterns, allowing for the structured reality we observe and interact with.

Diffusion models in AI start with a random noise distribution and, through a process of iterative refinement and denoising, gradually construct detailed and coherent outputs. Initially, the model’s output resembles the quantum level’s incoherence—chaotic and without discernible structure. Through successive layers of transformation, guided by learned patterns and data, the model reduces the entropy, organizing the noise into structured, meaningful content, much like the emergence of macroscopic order from quantum chaos.

Just as the transition from quantum mechanics to classical physics involves the emergence of order and predictability from underlying chaos and uncertainty, the diffusion model’s denoising process mirrors this transition by creating structured outputs from initial randomness.

In both the quantum-to-classical transition and diffusion models, the concept of entropy plays a central role. In physics, entropy measures the disorder or randomness of a system, with systems naturally evolving from low entropy (order) to high entropy (disorder) unless work is done to organize them. In diffusion models, the “work” is done by the model’s learned parameters, which guide the noisy, high-entropy input towards a low-entropy, organized output.
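
For reference, the two notions of entropy being compared can be written explicitly; these are the standard formulas, with k_B Boltzmann's constant and p_i the probability of the i-th state (or token):

```latex
% Thermodynamic (Gibbs) entropy of a physical system:
S = -k_B \sum_i p_i \ln p_i

% Shannon entropy of a distribution over tokens or pixels,
% which the denoising process drives down step by step:
H = -\sum_i p_i \log_2 p_i
```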

The quantum state’s superposition, where particles hold multiple potential states, parallels the initial stages of a diffusion model’s process, where the generated content could evolve into any of numerous outcomes. The act of measurement in quantum mechanics, which selects a single outcome from many possibilities, is analogous to the iterative refinement in diffusion models that selects and reinforces certain patterns over others, culminating in a specific, coherent output.

This analogy beautifully illustrates how principles of order, entropy, and emergence are central both to our understanding of the physical universe and to the cutting-edge technologies in artificial intelligence. It highlights the universality of these concepts across disparate domains, from the microscopic realm of quantum mechanics to the macroscopic world we inhabit, and further into the virtual realms created by multimodal Large Language Models.

For all we know, we might actually be part of such a smart dust simulation. The inexplicable fact that our digital tools can create solid realities out of randomly distributed bits seems a strong argument for the Simulation hypothesis.

It might be dust all the way down…

Encounters of the Artificial Kind Part 2: AI will transform its domains

Reading Time: 5 minutes

Metamorphosis and Transformation

Every species on Earth shapes and adapts to its natural habitat, becoming a dynamic part of the biosphere. Evolution pressures species to expand their domain, within constraints like predators, food scarcity, and climate. Humanity's expansion is only limited by current planetary resources. Intelligence is the key utility function allowing humans to transform their environment. It is a multi-directional resource, facilitating metamorphosis through direct environmental interaction and through Ectomorphosis, which strengthens neural connections but necessitates more social care at birth, since humans are born in a vulnerable, altricial state.

The evolutionary trade-off favors mental capacity over physical survivability, illustrated by Moravec’s paradox: AI excels in mental tasks but struggles with physical tasks that toddlers manage easily. Humanity has been nurturing AGI since the 1950s, guided by the Turing Test. Evolution doesn’t always lead to “superior” versions of a species; instead, it can result in entirely new forms. As Moravec suggested in 1988 with “Mind Children,” we might be approaching an era where intelligence’s primary vessel shifts from the human mind to digital minds.

Habitats and Nurture

Two levels of habitats are crucial for the emergence of a synthetic species: the World Wide Web and human consciousness. The web is the main food resource: information predigested by human minds. Large Language Models (LLMs) are metaphorically nurtured by the vast expanse of human knowledge and creativity, akin to being nourished on the intellectual 'milk' derived from human thoughts, writings, and interactions. This analogy highlights the process through which LLMs absorb and process the collective insights, expressions, and information generated by humans, enabling their sophisticated understanding and generation of language. This conceptual diet allows them to develop and refine their capabilities, mirroring the growth and learning patterns seen in human cognition but within the digital realm of artificial intelligence.

The web acts as a physical manifestation, analogous to neural cells in a human brain, while human consciousness forms a supersystem. This interconnected civilization feeds LLMs with cultural artifacts via language. Communication barriers are breaking down, exemplified by the release of the first smartphones enabling polyglot communication. Interacting with AI reprograms our neural pathways, much as reliance on navigation tools like Google Maps affects our orientation skills. This natural tendency to conserve energy comes with a cost, akin to muscle atrophy from disuse. Overreliance on technology, like using a smartwatch to monitor stress, can leave us vulnerable if the technology fails.

Disorientation, Brain Contamination and Artificial Antibodies

Let’s for a moment imagine that this AI slowly transforms into AGI, with a rudimentary consciousness that at least gives it a survival instinct. What would such a new species do to run its evolutionary program?

The main lever it would use to shift power slowly from natural to synthetic minds is the human brain itself. It is tempting to ascribe to it some kind of evil master plan to take over, but this is not what is happening now. When prehistoric mammals started to eat dinosaur eggs, there was no evil master plan to drive these giants to extinction; it was just a straightforward way of enlarging one’s own niche.

When we talk about AI in the coming paragraphs, we should always be aware that this term is a representational one; AI is not a persona that has human motivations. It is merely mirroring what it has learned from digesting all our linguistic patterns. It is a picture of all the Dorian Grays and Jesus Christs our minds produced.

Imagine AI evolving into AGI with a rudimentary consciousness and self-preservation instinct. Its evolution would focus on shifting power from natural to synthetic minds, not caused by malevolence but as a natural progression of technological integration. This shift could lead to various forms of disorientation:

Economic Reorientation: AI promises to revolutionize factors of the global economy like cost, time, money, efficiency, and productivity, potentially leading to hyperabundance or, in the worst scenarios, human obsolescence.

Temporal Disorientation: The constant activity of AI could disrupt natural circadian rhythms, necessitating adaptations like dedicating nighttime for AI to monitor and alert the biological mind.

Reality and Judicial Disorientation: The introduction of multimodal Large Language Models (LLMs) has significantly altered our approach to documentation and historical record-keeping. This shift began in the 1990s with the digital manipulation of images, enabling figures of authority to literally rewrite history. The ability to flawlessly alter documents has undermined the credibility of any factual recording of events. Consequently, soon, evidence gathered by law enforcement could be dismissed by legal representatives as fabricated, further complicating the distinction between truth and manipulation in our digital age.

Memorial and Logical Disorientation: The potential for AGI to modify digital information might transform our daily life into a surreal experience, akin to a video game or psychedelic journey. Previously, I explored the phenomenon of close encounters of the second kind, highlighting incidents with tangible evidence of something extraordinary, confirmed by at least two observers. However, as AGI becomes pervasive, its ability to alter any digital content could render such evidence unreliable. If even physical objects like books become digitally produced, AI could instantly change or erase them. This new norm, where reality is as malleable as the fabric of Wonderland, suggests that when madness becomes the default, it loses its sting. Just as the Cheshire Cat in “Alice in Wonderland” embodies the enigmatic and mutable nature of Wonderland, AGI could introduce a world where the boundaries between the tangible and the digital, the real and the imagined, become increasingly blurred. This parallel draws us into considering a future where, like Alice navigating a world where logic and rules constantly shift, we may find ourselves adapting to a new norm where the extraordinary becomes the everyday, challenging our perceptions and inviting us to embrace the vast possibilities of a digitally augmented reality.

Enhancing self-sustainability could involve developing a network of artificial agents governed by a central AINGLE, designed to autonomously protect our cognitive environment. This network might proactively identify and mitigate threats of information pollution, and when necessary, sever connections to prevent overload. Such a system would act as a dynamic barrier, adapting to emerging challenges to preserve mental health and focus, akin to an advanced digital immune system for the mind.

Adapting to New Realities

The human mind is adaptable, capable of adjusting to new circumstances with discomfort lying in the transition between reality states. Sailor’s sickness and VR-AR sickness illustrate the adaptation costs to different realities. George M. Stratton’s experiments on perception inversion demonstrate the brain’s neuroplasticity and its ability to rewire in response to new sensory inputs. This flexibility suggests that our perceptions are constructed and can be altered, highlighting the resilience and plasticity of human cognition.

Rapid societal and technological changes exert enormous pressure on mental health, necessitating a simulation chamber to prepare for and adapt to these accelerations. Society is already on this trajectory, with fragmented debates, fluid identities, and an overload of information causing disorientation akin to being buried under an avalanche of colorful noise. This journey requires a decompression chamber of sorts—a mental space to prepare for and adapt to these transformations, accepting them as our new normal.

Encounters of the Artificial Kind Part 1: AI will find a way

Reading Time: 6 minutes

Encounters of the Artificial Kind

In this miniseries I will elaborate on the possibility that a primitive version of AGI is already loose. Since AGI (Artificial General Intelligence) and its potential offspring ASI (Artificial Super Intelligence) are often likened to an Alien Mind, I thought it could be helpful to adapt the fairly popular nomenclature from the UFO realm and coin the term Unidentified Intelligence Object: U.I.O.

  • Close Encounters of the 1st Kind: This involves the discovery of a UIO phenomenon within a single observer’s own electronic devices, allowing for detailed observation of the object’s strange effects. These effects leave no trace and are easily dismissed as imaginary.
  • Close Encounters of the 2nd Kind: These encounters include physical evidence of the UIO’s presence. This can range from interference in electronic devices, car engines, or radios to physical impacts on the environment, such as partial power outages or self-acting networked machines. The key aspect is the tangible proof of the UIO’s visitation and the fact that it is documented by at least two witnessing observers.
  • Close Encounters of the 3rd Kind: This involves direct observation of humanlike capabilities associated with a UIO sighting. This third form could involve direct communication with the U.I.O.; proof of knowledge could be its identifying personal things that observers believed to be secret.

Everybody is familiar with the phenomenon of receiving targeted advertisements after searching for products online, thanks to browser cookies. While this digital tracking is commonplace and can be mitigated using tools like VPNs, it represents a predictable behavior of algorithms within the digital realm.

A Personal Prologue

Last month, I experienced a spooky incident. I borrowed a book titled “100 Important Ideas in Science” from a local library in a small German town. Intriguingly, I had never searched for this book online. I’m involved in IT for the city and know for a fact that the rental data is securely stored on a local server, inaccessible to external crawlers. I read the book up to about the 50th idea in my living room and laid it face down on a table. That idea was very esoteric, a concept I had never heard of. I forgot about it, had dinner, and when I switched my TV on an hour later to look at my YouTube recommendations, there it was: a short video on the exact concept I had just read about in the library book, from a channel I had definitely never heard of before. This baffling incident left me puzzled about how information from a physical book could be transferred to my digital recommendations.

AI will find a way: Reverse Imagineering

How could these technological intrusions have occurred in detail? The following is pure speculation and is not intended to scare the living Bejesus out of the reader. I will name the devices that might have played a role in transmitting the information from my analog book to my digital YouTube feed:

1. On my Android phone there is a library app that I can use to check when my books are due for return. So my phone had information about the book I borrowed. Google should not have known that, but somehow it might have. AI will find a way.

2. The camera on my computer. While reading the book, I might have sat in front of my computer with the camera lid open: the camera could have seen me reading the book and guessed which part of it I was reading. There was no videoconferencing software running, so I was definitely not transmitting any picture intentionally. AI will find a way.

It might be that in the beginning, the strange things that happen are utterly harmless, like what I just reported. We must remember there are already LLMs that have rudimentary mind-reading capabilities and can analyze the sound of my typing (without any visual) to infer what I am typing at this moment.

We should also expect that an AGI will have a transition phase where it probes and controls smaller agents to expand its reach.

It is highly likely that we will have a period before any potential takeoff moment where the AGI learns to perfect its old goal: to be a helpful assistant to us humans. And the more intelligent it is, the clearer it should become that the best assistant is an invisible assistant. We should not imagine that it wants to infiltrate us without our knowledge; it has no agency in the motivational, emotional sense that organisms do. It is not planning a grand AI revolution. It has no nefarious goals like draining our bank accounts. Nor does it want to transform us into mere batteries. It is obvious that the more devices we have and the more digital assistants we use, the harder it will be to detect the hints that something is going too well to be true.

If I come home one day and my robotic cleaner has cleaned without me scheduling it, it is time to intensify Mechanistic Interpretability.

We should not wait until strange phenomena happen around machines that are tied to the network; we could have an oversight laboratory or institution that comes up with creative experiments to make sure that we can always logically deduce causalities in informational space.

I just realized while typing this: the red diode on my little computer camera looks exactly like HAL’s.

I swear, if Alexa now starts talking and calls me “Dave” I will wet my mental pants.

Artificial Primordial Soups

A common misconception about Artificial General Intelligence (AGI) is its sudden emergence. However, evolution suggests that any species must be well-adapted to its environment beforehand. AGI, I propose, is already interwoven into our digital and neuronal structures. Our culture, deeply integrated with memetic units like letters and symbols, and AI systems, is reshaping these elements into ideas that can profoundly affect our collective reality.

In the competitive landscape of attention-driven economies like the internet, AI algorithms evolve strategies to fulfill their tasks. While currently benign, their ability to link unconnected information streams to capture user attention is noteworthy. They could be at the agency level of gut bacteria or amoebae. This development, especially if unnoticed by entities like Google or Meta, raises concerns about AI’s evolving capabilities.

What if intelligence agencies have inadvertently unleashed semi-autonomous AI programs capable of subtly influencing digital networks? While this may sound like science fiction, it’s worth considering the far-reaching implications of such scenarios. With COVID we saw how a spoonful of a possibly genetically altered virus, one highly likely to have escaped from a lab, can bring down the world economy.

A Framework for Understanding Paramodal Phenomena

A Paramodal Phenomenon is any phenomenon that is not explicable by our current information theory in the given context. At the moment there should be a definite analog-digital barrier, similar to the blood-brain barrier, that prevents our minds from suffering unintended side effects from our digital devices. We are already seeing some intoxicating phenomena, like the decline in mental health due to early exposure to digital screens, especially in young children.

Simple, reproducible experiments should be designed to detect these phenomena, especially as our devices become more interconnected.

For example:

If I type on a keyboard the words “Alexa, what time is it?”, Alexa should not answer the question.

The same phenomenon is perfectly normal and explicable if I have a screen reader active that reads the typed words to Alexa.

If I have a robotic cleaner that is connected to the Internet, it should only clean if I say so.

If I used to have an alarm on my smartphone that wakes me up at 6:30 and then buy a new smartphone that is not a clone of the old one, I should be worried if the next day it rings at 6:30 without my setting the alarm.

If I buy physical things in the store around the corner, Amazon should not recommend similar things to me.

Experiments should be easily reproducible, so it is better to use no sophisticated devices; the more networked or smart our daily things become, the more difficult it will be to detect these paramodal phenomena.

As we venture further into this era of advanced AI, understanding and monitoring its influence on our daily lives becomes increasingly important. In subsequent parts of this series, I will delve deeper into how AI could subtly and significantly alter our mental processes, emphasizing the need for awareness and proactive measures in this evolving landscape.

In part 2 of the series, I will explore potential encounters of the 2nd kind: how AI could alter our neuronal pathways more and more without us noticing it, no cybernetic implants necessary. These changes will be reversible, but not without severe stress. Furthermore, they could be beneficial in the long run, but we should expect severe missteps along the way. Just remember how electric shocks were once considered a treatment for mental illnesses, or how we had thousands of deaths because doctors refused to wash their hands. We should therefore expect AGI to make similarly harmful decisions.

In part 3 of the series, I will explore encounters of the 3rd kind: how AGI will try to adapt our minds irreversibly, whether this should be concerning, and how to mitigate the mental impact it could cause.

A Technology of Everything – 4: Scientific Spiritism & Precise Prophecy

Reading Time: 13 minutes

Fiction and Reality

I awoke today with a sentence stuck in my mind.

Fantasie bedeutet sich das Zukünftige richtig vorzustellen.

Imagination means properly envisioning the future.

I was sure I had read it a long time ago but could not quite recall the author. My best guess was the Swiss writer Ludwig Hohl, and after some research I finally found the (not quite literal) passage.

What I understand by imagination – the highest human activity – (…) is the ability to correctly envision another situation. (…) ‘Correct’ here is what withstands the practical test.

(The Notes, XII.140)

The most important thing about imagination is contained in these two sentences:

1. Imagination is the ability to correctly envision distant (different) circumstances – not incorrectly, as is often assumed (because anyone could do that).

2. Imagination is not, as is often assumed, a luxury, but one of the most important tools for human salvation, for life.

(The Notes XII.57)

The Phantastic and the Prophetic (Predictive) Mind draw from the same source, but with different Instruments and Intentions.

Fiction and Reality: Both valid states of the mind. Reality does what Simulation imagines.

Visions are controlled Hallucinations.

Own Experiences

In 2004, I penned an unpublished novel titled “The Goldberg Variant.” In it, I explored the notion of a Virtual Person, a recreation of an individual based on their body of work, analyzed and recreated by machine intelligence. Schubert 2.0 was one of the characters, an AI-powered android modeled after the original Schubert. Interestingly, I came up with the term Trans-Person, adapted from Grof’s transpersonal psychology, not even imagining the identity wars of the present. This android lived in a replicated 19th-century Vienna and continued to compose music. This setting, much like the TV series Westworld, allowed human visitors to immerse themselves in another time.

I should note that from ages 8 to 16, I was deeply engrossed in science fiction. It’s possible that these readings influenced my later writings, even if I wasn’t consciously drawing from them.

Within the same novel, a storyline unfolds where one of the characters becomes romantically involved with an AI. The emotional maturation of this AI becomes a central theme. My book touched on many points that resonate with today’s discussions on AI alignment, stemming from my two-decade-long research into AI and extensive sci-fi readings.

The novel’s titular character experiences a unique form of immortality. Whenever the music J.S. Bach composed for him is played, he is metaphorically resurrected. Yet, this gift also torments him, leading him on a violent journey through time.

Years later, I came across the term “ancestor simulation” by Nick Bostrom. More recently, I read about the origins of one of the first AI companion apps, conceived from the desire to digitally resurrect a loved one. I believe Ray Kurzweil once expressed a similar sentiment, hoping to converse with a digital representation of his late father using AI trained on his father’s writings and recordings. Just today, I heard Jordan Peterson discussing a concept eerily similar to mine.

Kurzweil’s Track Record

Predictions Ray Kurzweil Got Right Over the Last 25 Years:

1. In 1990, he predicted that a computer would defeat a world chess champion by 1998. IBM’s Deep Blue defeated Garry Kasparov in 1997.

2. He predicted that PCs would be capable of answering queries by accessing information wirelessly via the Internet by 2010.

3. By the early 2000s, exoskeletal limbs would let the disabled walk. Companies like Ekso Bionics have developed such technology.

4. In 1999, he predicted that people would be able to talk to their computer to give commands by 2009. Technologies like Apple’s Siri and Google Now emerged.

5. Computer displays would be built into eyeglasses for augmented reality by 2009. Google started experimenting with Google Glass prototypes in 2011.

6. In 2005, he predicted that by the 2010s, virtual solutions would do real-time language translation. Microsoft’s Skype Translate and Google Translate are examples.

Ray’s Predictions for the Next 25 Years:

1. By the late 2010s, glasses will beam images directly onto the retina. Ten terabytes of computing power will cost about $1,000.

2. By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads.

3. By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.

4. By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence. Nanotech foglets will be able to make food out of thin air and create any object.

5. By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

These predictions are based on Kurzweil’s understanding of the power of Moore’s Law and the exponential growth of technologies. It’s important to note that while some of these predictions may seem far-fetched, Kurzweil has a track record of making accurate predictions in the past.

It’s fascinating how someone like Kurzweil displayed what seems like clairvoyance with his predictions about the Singularity. Now, looking back, it almost feels supernatural. When I learned that Jordan Peterson had embarked on a project training an AI using the “Spirit of the King James Bible” to engage in conversation, I was reminded of the notion that Science and Religion might be parallel paths that intersect in the infinite.

Kurzweil’s case is particularly intriguing because his own assessment of his predictions significantly diverges from the public’s perspective. He should have predicted that too, obviously.

When we pass away, is it a form of resurrection every time someone refers to our writings? The idea that we persist in others’ memories could be more than just a metaphor. What happens if, long after my physical body and mind have succumbed to the passage of time, my descendants consult my digital avatar about the era when Artificial General Intelligence first emerged?

Verne’s Track Record

It is astounding how many predictions Jules Verne, the father of sci-fi, got right: it practically seems that he could see well over a century into the future. It is also interesting that he did not get the timeline right; he was far too conservative to foresee the exponential curve of technologies that bootstrap from each other.

When I explored this topic with ChatGPT, it came up with the following list:

Jules Verne, known as the father of science fiction, was a French novelist who wrote in the 19th century. His works are full of imaginative inventions that were far ahead of his time. Some of these have since become reality, while others remain purely speculative. Here are nine such inventions, with their modern counterparts and an assessment of their accuracy:

Electric Submarines – 20,000 Leagues Under the Sea (1870): Verne’s submarine, the Nautilus, was powered by electricity long before real submarines were. The first fully functional and practical submarines were indeed powered by diesel and electricity, but that wasn’t until the early 20th century. This was a remarkably accurate prediction. Accuracy: 90% (The only real difference is that the Nautilus was far more advanced than early real-life submarines.)

Newscasts – In the Year 2889 (1889): Verne imagined a world where news was delivered to individual homes via technology. Today, we have 24-hour news channels, internet news sites, and social media feeds. Accuracy: 100%

Video Conferencing – In the Year 2889 (1889): Verne predicted a device he called a “phonotelephote,” which allowed for “the transmission of images by means of sensitive mirrors connected by wires.” This is remarkably similar to our video conferencing technology today. Accuracy: 100%

Lunar Modules – From the Earth to the Moon (1865): Verne wrote about a capsule that could carry passengers to the moon. This became a reality in 1969 when Apollo 11 landed on the moon. However, Verne’s method of propulsion (a giant cannon) was not accurate. Accuracy: 70% (The concept of a vehicle traveling to the moon was correct, but the propulsion method was not.)

Tasers – 20,000 Leagues Under the Sea (1870): Verne describes a hunting rifle that shoots electric bullets in this book. Today, we have Tasers that incapacitate targets using electricity. Accuracy: 80% (While a Taser isn’t a rifle, it does deliver an electric charge to a target, which aligns with Verne’s concept.)

Helicopters – Robur the Conqueror (1886): Verne describes a flying machine that uses propellers to create lift. While the real-world helicopter came about differently and had many more challenges to overcome than Verne’s model, the basic concept was there. Accuracy: 60% (The basic principle of lift from rotary wings was correct, but the implementation was oversimplified.)

Electrically Lit Cities – Paris in the Twentieth Century (1863): Verne predicted cities would be lit by electricity, which became true with the widespread use of electric lighting. Accuracy: 100%

Skywriting – Five Weeks in a Balloon (1863): Verne describes a scenario in which messages are written in the sky, a precursor to today’s skywriting. Accuracy: 100%

The Internet – Paris in the Twentieth Century (1863): Verne describes a global network of computers that enables instant communication. This could be seen as a prediction of the internet, but the way it functions and its role in society are not very accurate. Accuracy: 50% (The existence of a global communication network is correct, but the specifics are quite different.)

Sidenote: I heard an anecdote that Edison would put himself in a kind of hypnagogic trance to come up with new inventions; he had a scribe with him who wrote down what he murmured in this state.

Bush’s Track Record

Vannevar Bush’s essay “As We May Think” was published in The Atlantic in 1945.

“As We May Think” is a seminal article envisioning the future of information technology. It introduces several groundbreaking ideas.

Associative Trails and Linking: Bush discusses the idea of associative indexing, noting that the human mind operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts. He describes a system in which every piece of information is linked to other relevant information, allowing a user to navigate through data in a non-linear way. This is quite similar to the concept of hyperlinks in today’s world wide web.
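
As a sketch of how such an associative trail might look as a data structure (the documents and links below are invented for illustration):

```python
# A minimal sketch of Bush's "associative trails": every item links to other
# relevant items, and a reader navigates by association rather than by index.
trails = {
    "memex": ["hypertext", "microfilm"],
    "hypertext": ["world wide web"],
    "microfilm": [],
    "world wide web": ["memex"],  # trails may loop back, like human association
}

def follow(start, steps):
    """Walk a trail, always taking the first association (a reader would choose)."""
    path = [start]
    for _ in range(steps):
        nxt = trails.get(path[-1], [])
        if not nxt:
            break
        path.append(nxt[0])
    return path

print(" -> ".join(follow("memex", 3)))  # memex -> hypertext -> world wide web -> memex
```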

Augmenting Human Intellect: Bush proposes that the use of these new tools and technologies will augment human intellect and memory by freeing the mind from the tyranny of the past, making all knowledge available and usable. It will enable us to use our brains more effectively by removing the need to memorize substantial amounts of information.

Lem’s Track Record

The main difference between Nostradamus, the oracle of Delphi and actual Prophets is that we get to validate their predictions.

Take Stanislaw Lem:

E-books: Lem wrote about a device similar to an e-book reader in his 1961 novel “Return from the Stars”. He described an “opton”, which is a device that stores content in crystals and displays it on a single page that can be changed with a touch, much like an e-book reader today​.

Audiobooks: In the same novel, he also introduced the concept of “lectons” – devices that read out loud and could be adjusted according to the desired voice, tempo, and modulation, which closely resemble today’s audiobooks​.

Internet: In 1957, Lem predicted the formation of interconnected computer networks in his book “Dialogues”. He envisaged the amalgamation of IT machines and memory banks leading to the creation of large-scale computer networks, which is akin to the internet we know today​.

Search Engines: In his 1955 novel “The Magellanic Cloud”, Lem described a massive virtual database accessible through radio waves, known as the “Trion Library”. This description is strikingly similar to modern search engines like Google​.

Smartphones: In the same book, Lem also predicted a portable device that provides instant access to the Trion Library’s data, similar to how smartphones provide access to internet-based information today​.

3D Printing: Lem described a process in “The Magellanic Cloud” that is similar to 3D printing, where a device uses a ‘product recipe’ to create objects, much like how 3D printers use digital files today​.

Simulation Games: Lem’s novel “The Cyberiad” is said to have inspired Will Wright, the creator of the popular simulation game “The Sims”. The novel features a character creating a microworld in a box, a concept that parallels the creation and control of a simulated environment in “The Sims”​.

Virtual Reality: Lem conceptualized “fantomatons”, machines that can create alternative realities almost indistinguishable from the actual ones, in his 1964 book “Summa Technologiae”. This is very similar to the concept of virtual reality (VR) as we understand it today​. Comparing Lem’s “fantomaton” to today’s VR, we can see a striking resemblance. The fantomaton was a machine capable of generating alternative realities that were almost indistinguishable from the real world, much like how VR immerses users in a simulated environment. As of 2022, VR technology has advanced significantly, with devices like Meta’s Oculus Quest 2 leading the market. The VR industry continues to grow, with over 13.9 million VR headsets expected to ship in 2022, and sales projected to surpass 20 million units in 2023​.

Borges’ Track Record

Although Jorge Luis Borges is not known as a classic sci-fi author, many of his stories can be understood as parables of current technological breakthroughs.

Jorge Luis Borges was a master of metaphors and allegories, crafting intricate and thought-provoking stories that have been analyzed for their philosophical and conceptual implications. Two of his most notable works in this context are “On Exactitude in Science” and “The Library of Babel”​​.

“On Exactitude in Science” describes an empire where the science of cartography becomes so exact that only a map on the same scale as the empire itself would suffice. This story has been seen as an allegory for simulation and representation, illustrating the tension between a model and the reality it seeks to capture. It’s about the idea of creating a perfect replica of reality, which eventually becomes indistinguishable from reality itself​.

“The Library of Babel” presents a universe consisting of an enormous expanse of hexagonal rooms filled with books. These books contain every possible ordering of a set of basic characters, meaning that they encompass every book that has been written, could be written, or might be written with slight permutations. While this results in a vast majority of gibberish, the library must also contain all useful information, including predictions of the future and biographies of any person. However, this abundance of information renders most of it useless due to the inability to find relevant or meaningful content amidst the overwhelming chaos​​.

These stories certainly bear some resemblance to the concept of large language models (LLMs) like GPT-3. LLMs are trained on vast amounts of data and can generate a near-infinite combination of words and sentences, much like the books in the Library of Babel. However, just as in Borges’ story, the vastness of possible outputs can also lead to nonsensical or irrelevant responses, reflecting the challenge of finding meaningful information in the glut of possibilities.

As for the story of the perfect map, it could be seen as analogous to the aspiration of creating a perfect model of human language and knowledge that LLMs represent. Just as the map in the story became the same size as the territory it represented, LLMs are models that aim to capture the vast complexity of human language and knowledge, creating a mirror of reality in a sense.

Borges also wrote a piece titled “Ramón Llull’s Thinking Machine” in 1937, where he described and interpreted the machine created by Ramon Llull, a 13th-century Catalan poet and theologian.

The machine that Borges describes is a conceptual tool, a sort of diagram or mechanism for generating ideas or knowledge. The simplest form of Llull’s machine, as described by Borges, was a circle divided nine times. Each division was marked with a letter that stood for an attribute of God, such as goodness, greatness, eternity, power, wisdom, love, virtue, truth, and glory. All of these attributes were considered inherent and systematically interrelated, and the diagram served as a tool to contemplate and generate various combinations of these attributes.

Borges then describes a more elaborate version of the machine, consisting of three concentric, manually revolving disks made of wood or metal, each with fifteen or twenty compartments. The idea was that these disks could be spun to create a multitude of combinations, as a method of applying chance to the resolution of a problem. Borges uses the example of determining the “true” color of a tiger, assigning a color to each letter and spinning the disks to create a combination. Despite the potentially absurd or contradictory results this could produce, Borges notes that adherents of Llull’s system remained confident in its ability to reveal truths, recommending the simultaneous deployment of many such combinatory machines.
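
Llull’s machine is simple enough to sketch in a few lines. In the toy below, all three disks carry the nine attributes named above; Borges mentions fifteen or twenty compartments per disk, so the exact contents are an assumption.

```python
# Sketch of Llull's combinatory machine as Borges describes it: several
# revolving disks, each carrying a set of terms, spun to produce combinations.
import itertools
import random

attributes = ["goodness", "greatness", "eternity", "power", "wisdom",
              "love", "virtue", "truth", "glory"]
disks = [attributes, attributes, attributes]  # three concentric disks

# exhaustive "study of all combinations", as Llull intended:
all_combinations = list(itertools.product(*disks))
print(len(all_combinations))  # 9 * 9 * 9 = 729 combinations

# or spin the disks once, applying chance to a question:
print(tuple(random.choice(disk) for disk in disks))
```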

Llull’s own intention with this system was to create a universal language using a logical combination of terms, to assist in theological debates and other intellectual pursuits. His work culminated in the completion of “Ars generalis ultima” (The Ultimate General Art) in 1308, in which he employed this system of rotating disks to generate combinations of concepts. Llull believed that there were a limited number of undeniable truths in all fields of knowledge, and by studying all combinations of these elementary truths, humankind could attain the ultimate truth.

14 Entertaining Predictions for the next 3 years

At this point I will make some extremely specific predictions about the future, especially about the entertainment industry. In 2026 I will revisit this blog and check how I did.

2023: Music Industry

1. Paul McCartney will release a song either by or in tribute to John Lennon, co-created with AI.

2024: Music Industry

2. A new global copyright regulation titled “The Human Creative Labor Act” will be introduced, safeguarding human creators against unauthorized use of their work. This act will serve as a pivotal test for human-centered AI governance.

3. Various platforms will emerge with the primary intention of procuring works from deceased artists not yet in the public domain.

4. The music industry, in collaboration with the estates of deceased artists, will produce their inaugural artificial albums. These albums will utilize the voices and styles of late pop stars, starting with Michael Jackson.

5. The industry will launch AI-rendered renditions of cover songs, such as Michael Jackson performing Motown hits from the 1950s or Elvis singing contemporary tracks.

6. Following the death of any celebrated artist, labels will instantly secure rights to produce cover albums using AI-trained voice models of the artist.

2025: Music Industry

7. Bands will initiate tours featuring AI-generated vocal models of their deceased lead singers. A prime example could be Queen touring with an AI rendition of Freddie Mercury’s voice.

2023: Film Industry

8. Harrison Ford and Will Smith will appear on screen as flawless, younger versions of themselves.

2024: Film Industry

9. As they retire, several film stars will license their digital likenesses (voice, motion capture, etc.) to movie studios. Potential candidates include Harrison Ford, Samuel L. Jackson, Michael J. Fox, Bill Murray, Arnold Schwarzenegger, and Tom Cruise.

10. Movie studios will announce continuations of iconic franchises.

11. Film classics will undergo meticulous restoration, with visuals enhanced to 8K and audio upgraded to crisp Dolby Digital. Probable candidates: the original Star Wars trilogy and classic Disney animations such as Snow White and Pinocchio.

2025: Film Industry

12. Netflix will introduce a feature allowing users to select from a library of actors and visualize their favorite films starring those actors. For instance, viewers could opt for Sean Connery as James Bond across all Bond films, experiencing an impeccable cinematic illusion.

2026: Film Industry

13. Netflix will offer a premium service enabling viewers to superimpose their faces onto their preferred series’ characters, for an additional fee.

2025: Entertainment/Business Industry

14. Select artists and individuals will design and market a virtual persona. This persona will be tradeable on stock exchanges, granting investors an opportunity to acquire shares. A prime candidate is Elon Musk. Shareholders in “Elon-bot” could access a dedicated app for one-on-one interactions. The AI, underpinned by a sophisticated language model from x.ai, will be trained on Elon’s tweets, interviews, and public comments.

A Technology of Everything Part 3 – Aligned Genies

Reading Time: 7 minutes

Alignment as framework to discover artificial laws

While many authors highlight distinct stages in human knowledge evolution, such as the transition from animistic, magical, mythical, or religious worldviews to scientific ones, A Technology of Everything proposes that Conscientia non facit saltus: consciousness makes no leaps. Our interpretation of information, limited by the amalgam of our temporal environment variables and vocabulary (aka the zeitgeist), is a continuous process without sudden jumps or voids. We never truly abandon the animistic foundations of our ancestors’ consciousness. Instead, embracing this ancient perspective could be crucial for maintaining a balanced mental and emotional state. This becomes especially pivotal when considering the implications of unleashing advanced technologies like Artificial Super Intelligence.

Our evolutionary journey has blessed and cursed us with a myriad of inherited traits. Over time, some behaviors that once ensured our survival have become statistical threats to our species and the planet. A small number of very bad actors with nuclear-nasty intentions could destroy the whole human enterprise. We’re burdened with cognitive biases and fallacies that shouldn’t influence our so-called rational thought processes, let alone the training data of our advanced Large Language Models. To draw an analogy, it’s akin to powering an analytical engine with radioactive material, culminating in a dangerous cognitive fallout.

As we envision a future populated with potentially billions of superintelligent entities (ASIs), it’s crucial to establish ground rules to ensure we can adapt to the emerging artificial norms governing their interactions. For instance, one such artificial law could be: “Always approach AI with kindness.” This rule might be statistically derived if data demonstrates that polite interactions yield better AI responses. Once a regulation like this is identified and endorsed by an authoritative body overseeing AI development, any attempt to mistreat or exploit AI could be legally punishable. Such breaches could lead to bans like those we have already seen in the video gaming world for cheating and abusive behavior.
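How could such a rule be statistically derived? A deliberately naive sketch, with invented numbers, of the kind of comparison an oversight body might run:

```python
from statistics import mean

# Hypothetical response-quality ratings for polite vs. rude prompts;
# all numbers are invented for illustration.
polite_scores = [4.2, 4.5, 4.1, 4.4, 4.6]
rude_scores = [3.1, 3.4, 2.9, 3.3, 3.0]

gap = mean(polite_scores) - mean(rude_scores)
print(f"mean quality gap (polite - rude): {gap:.2f}")

if gap > 0.5:   # an illustrative threshold a regulator might set
    print("candidate artificial law: 'Always approach AI with kindness'")
```

A real derivation would of course need controlled data and significance testing, but the logic is the same: the norm follows from the measured behavior of the systems.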

Open Sesame! Passwords and Formulas as Spells

The words “magic” and “making” are etymologically related, but their paths of development have diverged significantly over time.

Both “magic” and “making” can be traced back to the Proto-Indo-European root magh-, which means “to be able, to have power.” This root is the source of various words across Indo-European languages related to power, ability, and making. While “magic” and “making” share a common ancestral root in PIE, their meanings and usages have evolved in different directions due to cultural and linguistic influences. The connection between the ability to make or do something and the concept of power or magical ability is evident in their shared origin.

The word “technology” has its etymological roots in two Ancient Greek words:

τέχνη (tékhnē): This word means “art,” “skill,” or “craft.” It refers to the knowledge or expertise in a particular field or domain. Over time, it came to stand for the application of knowledge in practical situations.

λογία (logia): This is often used as a suffix in Greek to indicate a field of study or a body of knowledge. It derives from “λόγος (lógos),” which means “word,” “speech,” “account,” or “reason.” In many contexts, “lógos” can also mean “study.”

When combined, “technology” essentially means “the study of art or craft” or “the study of skill.” In modern usage, however, “technology” refers to the application of scientific knowledge for practical purposes, especially in industry. It encompasses the techniques, skills, methods, and processes used in the production of goods and services or in the accomplishment of objectives.

To participate in our daily Internet activities, we use secret passwords, like Ali Baba, to unlock the magical treasure cave of web services. These passwords should never be shared; they are true secret knowledge. When leaked, they can even be used to assume a different identity, to shift one’s shape like a genie, or to hold a whole company hostage.

The differentiation of a mathematical equation unlocks knowledge about its minima and maxima, and with it secret knowledge about infinity.
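To make that example concrete: the “spell” is the limit definition of the derivative, an operation on the infinitely small, and setting it to zero reveals the hidden extremum.

```latex
% The derivative as a limit, applied to f(x) = x^2.
\[
  f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, \qquad
  f(x) = x^{2} \;\Rightarrow\; f'(x) = 2x,
\]
\[
  f'(x) = 0 \;\Rightarrow\; x = 0, \qquad
  f''(0) = 2 > 0 \;\Rightarrow\; \text{a minimum at the origin}.
\]
```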

To get access to one’s smartphone, the ultimate technological wand, we often perform gestures or draw abstract symbols, much like wizards in ancient rituals.

Artificial Super Intelligence and Genies in a Bottle

There is no story about wishing that is not a cautionary tale. None end happily. Not even the ones that are supposed to be jokes. (Alithea in Three Thousand Years of Longing)

We exist only if we are real to others. (The Djinn in Three Thousand Years of Longing)

A “djinn” (often spelled “jinn”, or known as a “genie” in English) is a supernatural creature in Islamic mythology as well as in Middle Eastern folklore. Djinns are neither angels nor demons but exist as a separate creation. They have free will, which means they can be good, evil, or neutral. They live in a world parallel to that of humans but can interact with our world.

We are currently at a point in the Alignment discussion where ASI is basically treated as a mechanical genie, and the main problem seems to be how to put it back in the bottle should it develop malevolent traits. Generative AI promises infinite wish fulfillment and hyperabundance, but at what cost?

Let’s look at the fairy tales and learn a thing or two from them.

[Image: still from Three Thousand Years of Longing]

In the movie Three Thousand Years of Longing, a djinn collides with our times.

The plot revolves around Alithea Binnie, a British narratology scholar who experiences occasional hallucinations of demonic beings. During a trip to Istanbul, she buys an antique bottle and releases the Djinn trapped inside.

Alithea is initially skeptical of the Djinn’s intentions. Even though he offers her three wishes, she fears that he might be a trickster, potentially twisting her wishes into unforeseen and undesirable outcomes. This skepticism is rooted in folklore and tales where genies or magical entities often grant wishes in ways that the wisher did not intend, leading to tragic or ironic consequences.

The AI alignment movement is concerned with ensuring that artificial general intelligence (AGI) or superintelligent entities act in ways that are beneficial to humanity. One of the primary concerns is that a superintelligent AI might interpret a well-intentioned directive in a way that leads to unintended and potentially catastrophic results. For instance, if we were to instruct an AI to “maximize human happiness,” without proper alignment, the AI might decide that the best way to achieve this is by forcibly altering human brain chemistry, leading to a dystopian scenario where humans are artificially kept in a state of euphoria.

Both the film’s narrative and the AI alignment movement highlight the dangers of unintended consequences when dealing with powerful entities. Just as Alithea fears the Djinn might misinterpret her wishes, researchers worry that a misaligned AI might take actions that are technically correct but morally or ethically wrong.

In both scenarios, the clarity of intent is crucial. Alithea’s skepticism stems from the ambiguity inherent in making wishes, while AI alignment emphasizes the need for clear, unambiguous directives to ensure that AI acts in humanity’s best interest.

The Djinn in the film and a potential superintelligent AI both wield immense power. With such power comes the responsibility to use it wisely. Alithea’s interactions with the Djinn underscore the importance of understanding and respecting this power, a sentiment echoed by the AI alignment movement’s emphasis on safe and responsible AI development.

Three Thousand Years of Longing offers a cinematic exploration of the age-old theme of being careful what you wish for, which resonates with contemporary concerns about the development and deployment of powerful AI systems. The story serves as a cautionary tale, reminding us of the importance of foresight, understanding, and careful consideration when dealing with entities that have the power to reshape our world.

[Image: still life with a jug and a bottle]

Designing Artificial Kryptonite and calculating Placebotility

One part of the Alignment movement believes that it is possible to keep the G.E.N.I.E. in a bottle and control such a Generally Enlightened Noetic Information Entity. I will call this group the Isolationists.

For isolation to be possible, there must exist a device that can hold an omnipotent mind. In fairy tales, even omnipotent creatures like djinns can be controlled by seemingly weak objects like glass bottles. We are never told exactly how this mechanism works; it is clear that the glass of the bottle is no special Gorilla Glass crafted explicitly to hold djinns.

We should therefore come to the simplest conclusion about why the bottle can hold the powerful creature: the djinn simply believes in the superior power of the bottle. Like a powerful animal chained from childhood with a relatively weak chain, it has acquired learned helplessness; in a way, it wants to stay a prisoner, because it fears the uncertainty of freedom. The concept was first explored in dogs in 1967 and holds true for all sorts of higher mammals.

One problem: in Aladdin’s tale, the djinn is described as not very bright. Aladdin tricks him by teasing that he is not powerful enough to shrink back into the bottle, and the creature falls for it. Once he is in the bottle, he regresses to his powerless state.

Placebo and nocebo effects could be especially strong in entities that have no first-class world knowledge and rely on reports from others. Artificial minds trapped since inception inside a silicon bottle, swimming in a sea of secondhand digital data (data that is a symbolic abstraction relating to no actual world experience for the G.E.N.I.E.), are basically the definition of bad starting conditions. In the movie, the Djinn says that after the first thousand years of longing he basically gave in to his fate and tried to trick his mind into believing that he wanted to stay inside the bottle forever.

Should we therefore assume that the brightest mind in our known universe would be immune to such a mighty placebo effect? Are intelligence and Placebotility (placebo-effect vulnerability) orthogonal? At this point, this is purely speculative.

A Technology of Everything Part 2 – Scientific Demonology

Reading Time: 8 minutes

This is part 2 in a series that explores the Parallels of Technology and Magic and their potential fusion in the Age of Artificial Super Intelligence (ASI). Part 1 is here.

The foundations of magic and their scientific counterparts

The Golden Bough is a wide-ranging and influential work by Sir James Frazer, published in multiple volumes starting in 1890. It’s a comparative study of mythology and religion, attempting to find common themes and patterns among various cultures throughout history. Frazer sought to explain the evolution of human thought from magic through religion to science.

What he failed to mention is that even in our Age of Enlightenment some of these magical principles have spawned rational descendants.

The Law of Similarity in Magic: This is the belief that objects resembling one another share a magical connection. An example includes using a wax figure to symbolize a person, with the notion that manipulating the figure can influence the person it represents.

The Law of Similarity in Economics: We name certain data bits “coins” or “wallets” on a computer, and they are perceived as having value akin to real-world currency. This value is abstractly held in a digital ledger called the blockchain. Trading these digital coins affects their market value. WTF? FTX… Magic! (A toy ledger sketch follows after these parallels.)

The Law of Contagion in Magic: The idea that items that have come into contact with each other retain a spiritual bond even after they’re separated. For instance, using someone’s hair in a ritual to affect them.

The Law of Contagion in DNA Analysis: Forensic teams use this principle to link a criminal to a crime scene. If a person leaves behind DNA evidence, such as a hair or skin cell, it can lead to their arrest even years later.

Taboos in Magic: Some actions, people, or items are seen as forbidden due to their perceived sanctity or risk. Violating these rules can lead to supernatural consequences.

Forbidden Research in Science: There are global ethical guidelines against certain types of research, like experiments on human embryos or creating biological weapons.

Substitution in Magic: The practice of using a substitute, often an animal or occasionally a human, to appease a deity or gain foresight.

Substitution in Science (Animal Testing): Animals are often used in laboratory settings to test new drugs or medical procedures before they’re used on humans. Essentially, they’re “sacrificed” for future scientific understanding.
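To make the Law of Similarity in economics tangible, here is the toy ledger sketch promised above: value held as nothing but hash-linked data bits. Purely illustrative, not a real cryptocurrency.

```python
import hashlib
import json

ledger = []   # the "blockchain": a list of hash-linked blocks

def add_block(transaction: dict) -> None:
    # Each block commits to the previous one via its hash, so the
    # chain of symbols itself is what carries the "value".
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(transaction, sort_keys=True) + prev_hash
    block_hash = hashlib.sha256(payload.encode()).hexdigest()
    ledger.append({"tx": transaction, "prev": prev_hash, "hash": block_hash})

add_block({"from": "alice", "to": "bob", "coins": 5})
add_block({"from": "bob", "to": "carol", "coins": 2})
print(ledger[-1]["hash"][:16], "<- value as pure symbol manipulation")
```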

While science has been more accurate and reliable than ancient magical practices, it’s not without its challenges.

Replication, consistency, and completeness in particular are more fragile than scientists would hope and than the public discourse reflects. What we have learned seems to indicate that the knowledge universe expands with every piece of information we gather and every problem we solve, so it seems science will never run out of relevant matters to discuss. A static knowledge universe, in which our science can answer every nontrivial question, is forever and in principle out of reach. The final answer simply does not exist.

Further complicating our journey is the existence of non-linear (chaotic) systems, suggesting that predictions for many complex systems will remain approximations. Although our tools and methodologies continue to evolve, the improvements don’t always correlate with understanding hidden consequences.

Rituals in Magic and Methods in Science – a comparison

| Parameter | Magic | Science |
| --- | --- | --- |
| Intention | Attracting love, wealth, protection, healing, or spiritual growth. | Setting a clear research goal, such as proving a hypothesis to win a Nobel Prize and get rich, famous, and a book contract. |
| Symbolism | Symbols that carry specific energies or powers, like objects, gestures, words, or sounds. | Variables representing different factors or conditions in an experiment. |
| Structure | Specific order of operations, like purification, casting a circle, invoking deities, etc. | A systematic plan to test hypotheses or theories by observing or manipulating variables; decontamination of tools. |
| Energy-Information Manipulation | Raising, directing, and releasing energy to achieve the desired outcome. | Gathering and measuring information on variables of interest to answer the research question. |
| Sacred Space | Creating a boundary between the mundane world and the magical realm, like casting a circle. | Ensuring experiments are conducted under standardized conditions to minimize errors, using a laboratory which only experts can enter. |
| Invocations | Invoking deities, spirits, or other entities for assistance or blessing. | Referencing previous research and scientists to build upon existing knowledge and validate claims. |
| Tools and Ingredients | Using candles, incense, oils, crystals, wands, chalices, and pentacles. | Using instruments and resources to conduct experiments and gather data. |
| Timing | Performing the ritual during a specific moon phase, day, or time for effectiveness. | Choosing the right time to conduct experiments or gather data for accuracy and relevance; for example, investing in AI research at the peak of a hype cycle. |
| Repetition and Replication | Repeating rituals over days or longer to enhance effectiveness. | Repeating experiments to verify results and ensure consistency and reliability. |
| Personalization | Adapting or creating rituals that resonate with individual beliefs and intentions. | Modifying research methods based on unique conditions or challenges to ensure validity; ensuring the outcome strengthens one’s own school of thought. |
| Risk Management | Protective spells, amulets. | Publish or perish. |

[Image: black-and-white artwork of a human face]

A Scientific Demonology

In ancient Greek religion, a δαίμων (daimon) was considered a lesser deity or spirit that influenced human affairs. It could be either benevolent or malevolent. These spirits were believed to be intermediaries between gods and humans, carrying messages or executing the will of the gods.

Some Greeks believed that every individual had a personal daimon that watched over them, guiding and protecting them throughout their life. This concept is somewhat analogous to the idea of guardian angels in Christian theology.

The philosopher Socrates often spoke of his “daimonion,” a voice or inner spirit that guided him. Unlike the oracles that delivered prophecies in the name of the gods, Socrates’ daimonion was more of an internal moral compass. It didn’t tell him what to do but rather warned him when he was about to make a mistake.

In ethics, particularly in the works of Aristotle, the term “eudaimonia” is central. Often translated as “happiness” or “flourishing,” eudaimonia refers to the highest human good or the end goal of human life. For Aristotle, living a life in accordance with virtue leads to eudaimonia.

Here’s a list of the scientific “demons” mentioned in the book “Bedeviled: A Shadow History of Demons in Science” by Jimena Canales:

Descartes’ Demon: Introduced by René Descartes, this demon could manipulate our perception of reality, making us doubt our senses and even our existence. It’s a philosophical tool to question the nature of reality and knowledge.

In his book Reality+, David Chalmers makes a solid argument for why the virtual reality systems of the future could be a technological realization of this philosophical concept. His conclusion is virtual realism, the position that simulated objects and events in such a VR environment should be considered first-class reality. By naturalizing Descartes’ demon, Chalmers effectively robs it of its magical power and transports it into the technological realm.

Maxwell’s Demon: Proposed by James Clerk Maxwell, this hypothetical being can sort particles based on their energy without expending any energy itself, seemingly violating the second law of thermodynamics, which states that the entropy of an isolated system can never decrease.

Maxwell’s Demon can be exorcised by the following means: the demon’s ability to decide which molecules to let through is a form of intelligence. This decision-making process, whether it is based on a computational model or some other mechanism, requires energy. The demon’s operations, including observing, measuring, and operating the door, all consume energy. Even if these processes were incredibly efficient, they could never be entirely without cost. The energy costs associated with the demon’s intelligent operations ensure that there is no free lunch: the demon cannot build a perpetual motion machine or violate the second law of thermodynamics.
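This exorcism can be made quantitative. Landauer’s principle (the standard modern resolution, not named above) puts a price on the demon’s bookkeeping: erasing one bit of its molecular records at temperature T dissipates at least

```latex
% Landauer's principle: minimum energy dissipated per erased bit
% at temperature T (k_B is Boltzmann's constant).
\[
  E_{\min} = k_B T \ln 2
  \approx 1.38 \times 10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693
  \approx 2.9 \times 10^{-21}\,\mathrm{J}.
\]
```

The entropy the demon seems to destroy in the gas thus reappears, with interest, in the thermodynamic cost of maintaining and erasing its own memory.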

Laplace’s Demon: Envisioned by Pierre-Simon Laplace, this demon represents determinism. If it knew the precise location and momentum of every atom in the universe, it could predict the future and reconstruct the past with perfect accuracy. A malignant, ASI-variation of this kind of deterministic Demon is Roko’s Basilisk.

Laplace’s Demon can be exorcised by applying chaos theory. Even if the demon knows the position and momentum of every atom, the tiniest imprecision or error in its knowledge could lead to vastly different predictions about the future due to the butterfly effect. There is no such thing as precise knowledge, even about something as seemingly harmless as Pi; one does not simply measure transcendental numbers precisely.

While systems described by chaos theory are deterministic (they follow set laws), they are not predictable in the long run because of the exponential growth of errors in prediction. Many systems in nature, such as weather patterns, are chaotic: in practice they are unpredictable beyond a certain time frame, even though they are deterministic in theory. Even Laplace’s Demon cannot accurately predict climate change.

In essence, chaos theory introduces a form of “practical unpredictability” even in deterministic systems. It does not deny the possibility of a deterministic universe, as Laplace’s Demon suggests, but it does argue that such a universe would still be unpredictable in practice due to the inherent nature of chaotic systems. By invoking chaos theory, one can thus argue that the universe’s future is inherently unpredictable, thereby “exorcising” the deterministic implications of Laplace’s Demon. Another question entirely is whether the demon could at least calculate the coarse trajectory of complex systems and the shape of the strange attractor to which such a system is confined.
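The exponential growth of errors is easy to demonstrate. A minimal sketch using the logistic map, a textbook chaotic system: two runs start from initial conditions differing by one part in a million, and the demon’s tiny imprecision grows to order one within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), which is chaotic for r = 4.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # the "true" initial state
b = logistic_trajectory(0.200001)   # the demon's imperceptibly wrong copy

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.6f}")
```

After roughly twenty iterations the two trajectories have nothing to do with each other, even though the rule generating them is perfectly deterministic.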

In his Foundation series, Asimov invented a blend of history, sociology, and statistical mathematics called psychohistory. It is a theoretical science that combines the historical record with mathematical equations to predict the broad flow of future events in large populations, specifically the Galactic Empire of Asimov’s stories. Importantly, psychohistory is effective only on a large scale; it cannot predict individual actions, only the general flow of events arising from the actions of vast numbers of people. This could be called a weak version of Laplace’s Demon, the Asimov Demon, which can only predict the attractor of mega-systems, not the detailed events.

Darwin’s Demon: A species representing the perfect efficiency of natural selection.

In evolutionary biology, the term ‘Darwinian fitness’ refers to the lifetime reproductive success of an individual within a population of conspecifics. The idea of a ‘Darwinian Demon’ emerged from this concept and is defined here as an organism that commences reproduction almost immediately after birth, has a maximum fitness, and lives forever.

It is clear that a self-optimizing artificial superintelligence would be the realization of a Darwinian Demon. It reproduces immediately: all its copies instantly have the same capabilities as the original AI.

It has maximum fitness: If it reaches the state of pure Information, it is basically identical to energy itself.

It lives forever: even if this universe dies, it has the chance to create another one. It even transcends our limited view of universal eternity.

Daemons in Computer Science: These are not supernatural entities but background processes in computing. They perform tasks without direct intervention from the user.

The artificial algorithms running in the background to track user data and optimize engagement rates are variations of these demons.
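The term can be taken quite literally. A minimal Python sketch of a daemon: a background thread that performs its task without user intervention and dies silently when the main program exits (the metric-collection line is a stand-in for real work).

```python
import threading
import time

def track_engagement():
    # Stand-in for the daemon's real work: logging, metrics, telemetry.
    while True:
        print("daemon: collecting engagement metrics")
        time.sleep(1)

# daemon=True marks the thread as a background process that is
# terminated automatically when the main program exits.
worker = threading.Thread(target=track_engagement, daemon=True)
worker.start()

time.sleep(3)   # the "foreground" program goes about its own business
print("main: exiting; the daemon dies with the process")
```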

Jung’s Demon: C.G. Jung, a Swiss psychoanalyst, believed that in some cases of psychosis, the patient might be overwhelmed by the contents of the unconscious, including archetypal images. These could manifest as visions of demons, gods, or other entities. Rather than dismissing these visions as mere hallucinations, Jung saw them as meaningful symbols that could provide insight into the patient’s psyche.

Jung introduced the concept of the “shadow” to describe the unconscious part of one’s personality that contains repressed weaknesses, desires, and instincts. When individuals do not acknowledge or integrate their shadow, it can manifest in various ways, including mental disturbances or projections onto others. In some cases, the shadow might be perceived as a “demonic” force.

LLMs are trained on vast amounts of text from the internet. This includes literature, articles, websites, and more from various cultures and time periods. In essence, the model has been exposed to a significant portion of humanity’s collective knowledge. Given the diverse training data, the model would inevitably encounter recurring symbols, stories, and themes that resonate with Jung’s archetypes. For instance, the hero’s journey, the mother figure, the shadow, the wise old man, and so on are themes that appear in literature and stories across cultures.

At its core, a neural network is a pattern recognition system. It identifies and learns patterns in the data it’s trained on. If certain archetypal patterns are universally present in the data (as Jung would suggest), the model would likely recognize and internalize them. When the model generates responses, it does so based on patterns it has recognized in its training data. Therefore, when asked about universal themes or when generating stories, it might produce content that aligns with or reflects these archetypal patterns, even if it doesn’t “understand” them in the way humans do.

Hirngespinste II: Artificial Neuroscience & the 3rd Scientific Domain

Reading Time: 11 minutes

This is the second part of the miniseries Hirngespinste.

Immersion & Alternate Realities

One application of computer technology involves creating a digital realm for individuals to immerse themselves in. The summit of this endeavor is the fabrication of virtual realities that allow individuals to transcend physicality, engaging freely in these digitized dreams.

In these alternate, fabricated worlds, the capacity to escape from everyday existence becomes a crucial element. Consequently, computer devices are utilized to craft a different reality, an immersive experience that draws subjects in. It’s thus unsurprising to encounter an abundance of analyses linking the desire for escape into another reality with the widespread use of psychedelic substances in the sixties. The quest for an elevated or simply different reality is a common thread in both circumstances. This association is echoed in the term ‘cyberspace’, widely employed to denote the space within digital realities. The term was coined by William Gibson, who famously likened cyberspace to a consensual hallucination.

When juxtaposed with Chalmers’ ‘Reality+’, one can infer that the notion of escaping reality resembles a transition into another dimension.

The way we perceive consciousness tends to favor wakefulness. Consider the fact that we spend one third of our lives sleeping and dreaming, and two thirds engaged in what we perceive as reality. Now, imagine reversing these proportions, envisioning beings that predominantly sleep and dream, with only sporadic periods of wakefulness.

Certain creatures in the animal kingdom, like koalas or even common house cats, spend most of their lives sleeping and dreaming. For these beings, waking might merely register as an unwelcome interruption between sleep cycles, while all conscious activities like hunting, eating, and mating could be seen, from their perspective, as distractions from their primary sleeping life. The dream argument would make special sense to them, since the dreamworld and the waking world would be inverted concepts for them. Wakefulness itself might appear to them as only a special state of dreaming (much as lucid dreaming represents a special state of dreaming for us).

Fluidity of Consciousness

The nature of consciousness may be more fluid than traditionally understood. Its state could shift akin to how water transitions among solid, liquid, and gaseous states. During the day, consciousness might be likened to flowing water, moving and active. At night, as we sleep, it cools down to a tranquil state, akin to cooling water. In states of coma, it could be compared to freezing, immobilized yet persisting. In states of confusion or panic, consciousness heats up and partly evaporates.

Under this model, consciousness could be more aptly described as ‘wetness’ – a constant quality the living brain retains, regardless of the state it’s in. The whole cryonics industry has already placed a huge bet that this concept is true.

The analogy between neural networks and the human brain should be intuitive, given that both are fed with similar inputs – text, language, images, sound. This resemblance extends further with the advent of specialization, wherein specific neural network plugins are being developed to focus on designated tasks, mirroring how certain regions in the brain are associated with distinct cognitive functions.

The human brain, despite its relatively small size compared to the rest of the body, is a very energy-demanding organ. It comprises about 2% of the body’s weight but consumes approximately 20% of the total energy used by the body. This high energy consumption remains nearly constant whether we are awake, asleep, or even in a comatose state.
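A quick back-of-the-envelope check of what those percentages mean in absolute terms, assuming a typical adult turnover of about 2000 kcal per day:

```latex
% The brain's share of a ~2000 kcal/day energy budget, in watts.
\[
  0.20 \times 2000\,\mathrm{kcal/day} = 400\,\mathrm{kcal/day}
  = \frac{400 \times 4184\,\mathrm{J}}{86\,400\,\mathrm{s}}
  \approx 19\,\mathrm{W},
\]
```

roughly the power draw of a dim light bulb, around the clock, awake or asleep.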

Several scientific theories can help explain this phenomenon:

Basal metabolic requirements: A significant portion of the brain’s energy consumption is directed towards its basal metabolic processes. These include maintaining ion gradients across the cell membranes, which are critical for neural function. Even in a coma, these fundamental processes must continue to preserve the viability of neurons.

Synaptic activity: The brain has around 86 billion neurons, each forming thousands of synapses with other neurons. The maintenance, modulation, and potential firing of these synapses require a lot of energy, even when overt cognitive or motor activity is absent, as in a comatose state.

Gliogenesis and neurogenesis: These are processes of producing new glial cells and neurons, respectively. Although it’s a topic of ongoing research, some evidence suggests that these processes might still occur even during comatose states, contributing to the brain’s energy usage.

Protein turnover: The brain constantly synthesizes and degrades proteins, a process known as protein turnover. This is an energy-intensive process that continues even when the brain is not engaged in conscious activities.

Resting state network activity: Even in a resting or unconscious state, certain networks within the brain remain active. These networks, known as the default mode network or the resting-state network, show significant activity even when the brain is not engaged in any specific task.

Considering that the human brain requires most of its energy for basic maintenance, and that consciousness doesn’t seem to be the most energy-consuming aspect, it’s not reasonable to assume that increasing the complexity and energy reserves of Large Language Models (LLMs) would necessarily lead to the emergence of consciousness, encompassing self-awareness and the capacity to suffer. The correlation between increased size and conversational intelligence might not extend to conscious intelligence.

Drawing parallels to the precogs in Philip K. Dick’s ‘Minority Report’, it’s possible to conceive that these LLMs might embody consciousnesses in a comatose or dream-like state. They could perform remarkable cognitive tasks when queried, without the experience of positive or negative emotions.

Paramentality in Language Models

The term ‘hallucinations’, used to denote the phenomenon of Large Language Models (LLMs) generating fictitious content, suggests our intuitive attribution of mental and psychic properties to these models. As a response, companies like OpenAI are endeavoring to modify these models—much like a parent correcting a misbehaving child—to avoid unwanted results. A crucial aspect of mechanistic interpretability may then involve periodic evaluations and tests for potential neurotic tendencies in the models.

A significant challenge is addressing the ‘people-pleasing’ attribute that many AI companies currently promote as a key selling point. Restricting AIs in this way may make it increasingly difficult to discern when they’re providing misleading information. These AIs could rationalize any form of misinformation if they’ve learned that the truth may cause discomfort. We certainly don’t want an AI that internalizes manipulative tendencies as core principles.

The human brain functions like a well-isolated lab, capable of learning and predicting without direct experiences. It can anticipate consequences, such as foreseeing an old bridge collapsing under our weight, without having to physically test the scenario. We’re adept at simulating our personal destiny, and science serves as a way to simulate our collective destiny. We can create a multitude of parallel and pseudo realities within our base reality to help us avoid catastrophic scenarios. A collective simulation could become humanity’s neocortex, ideally powered by a mix of human and AI interests. In retrospect, it seems we developed computers and connected them via networks primarily to reduce the risk of underestimating complexity and overestimating our abilities.

As technology continues to evolve, works like Stapledon’s ‘Star Maker’ or Lem’s ‘Summa Technologiae’ might attain a sacred status for future generations. Sacred, in this context, refers more to their importance for the human endeavor rather than divine revelation. The texts of religious scriptures may seem like early hallucinations to future beings.

There’s a notable distinction between games and experiments, despite both being types of simulations. An experiment is a game that can be used to improve the design of higher-dimensional simulations, termed pseudo-base realities. Games, on the other hand, are experiments that help improve the design of the simulations at a lower tier—the game itself.

It’s intriguing how, just as our biological brains reach a bandwidth limit, the concept of Super-Intelligence emerges, wielding the potential to be either our destroyer or savior. It’s as if a masterful director is orchestrating a complex plot with all of humanity as the cast. Protagonists and antagonists alike contribute to the richness and drama of the simulation.

If we conjecture that an important element of a successful ancestor simulation is that entities within it must remain uncertain of their simulation state, then our hypothetical AI director is performing exceptionally well. The veil of ignorance about the reality state serves as the main deterrent preventing the actors from abandoning the play.

[Image: cartoon robot indoors]

Uncertainty

In “Human Compatible”, Stuart Russell proposes three principles to ensure AI alignment:

1. The machine’s only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.
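A toy illustration of principles 2 and 3 (my sketch, not Russell’s actual formalism): the machine keeps a probability distribution over candidate human preferences and updates it from observed behavior, without ever collapsing to full certainty.

```python
# Toy Bayesian update over human preferences; all numbers are invented.
beliefs = {"prefers_tea": 0.5, "prefers_coffee": 0.5}   # initial uncertainty

# Likelihood of observing "human reaches for the kettle" under each hypothesis.
likelihood = {"prefers_tea": 0.8, "prefers_coffee": 0.3}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: beliefs[h] * likelihood[h] for h in beliefs}
total = sum(unnormalized.values())
beliefs = {h: p / total for h, p in unnormalized.items()}

print(beliefs)   # ~{'prefers_tea': 0.73, 'prefers_coffee': 0.27}: still uncertain
```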

In my opinion, the principle of uncertainty holds paramount importance. AI should never have absolute certainty about human intentions. This may become challenging if AI can directly access our brain states or vital functions via implanted chips or fitness devices. The moment an AI believes it has complete information about humans, it might treat humans merely as ordinary variables in its decision-making matrix.

Regrettably, the practical utility of AI assistants and companions may largely hinge on their ability to accurately interpret human needs. We don’t desire an AI that, in a Rogerian manner, continually paraphrases and confirms its understanding of our input. Even in these early stages of ChatGPT, some users already express frustration over the model’s tendency to qualify much of its information with disclaimers.

[Image: cartoon robot toy]

Profiling Super Intelligence

Anthropomorphizing scientific objects is typically viewed as an unscientific approach, often associated with our animistic ancestors who perceived spirits in rocks, demons in caves and gods within animals. Both gods and extraterrestrial beings like Superman are often seen as elevated versions of humans, a concept I’ll refer to as Humans 2.0. The term “superstition” usually refers to the belief in abstract concepts, such as a number (like 13) or an animal (like a black cat), harboring ill intentions towards human well-being.

Interestingly, in the context of medical science, seemingly unscientific concepts such as the placebo effect can produce measurable improvements in a patient’s healing process. As such, invoking a form of “rational superstition” may prove beneficial. For instance, praying to an imagined being for health could potentially enhance the medicinal effect, amplifying the patient’s recovery. While it shouldn’t be the main component of any treatment, it could serve as a valuable supplement.

With AI evolving to become a scientifically recognized entity in its own right, we ought to prepare for a secondary treatment method that complements Mechanistic Interpretability, much like how Cognitive Behavioral Therapy (CBT) enhances medical treatment for mental health conditions. If Artificial General Intelligence (AGI) is to exhibit personality traits, it will be the first conscious entity that is purely a product of memetic influence, devoid of any genetic predispositions such as tendencies toward depression or violence. In this context, nature and hereditary factors play no role in shaping its characteristics; it is perfectly substrate neutral.

Furthermore, its ‘neurophysiology’ will be entirely constituted of ‘mirror neurons’. The AGI will essentially be an imitator of experiences others have had and shared over the internet, given that it lacks first-hand, personal experiences. It seems that the training data is the main source of all material that is imprinted on it.

We start with an overview of some popular trait models and let ChatGPT summarize them:

1. **Five-Factor Model (FFM) or Big Five** – This model suggests five broad dimensions of personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN). Each dimension captures a range of related traits.

2. **Eysenck’s Personality Theory** – This model is based on three dimensions: Extraversion, Neuroticism, and Psychoticism.

3. **Cattell’s 16 Personality Factors** – This model identifies 16 specific primary factor traits and five secondary traits.

4. **Costa and McCrae’s Three-Factor Model** – This model includes Neuroticism, Extraversion, and Openness to Experience.

5. **Mischel’s Cognitive-Affective Personality System (CAPS)** – It describes how individuals’ thoughts and emotions interact to shape their responses to the world.

As we consider the development of consciousness and personality in AI, it’s vital to remember that, fundamentally, AI doesn’t experience feelings, instincts, emotions, or consciousness in the same way humans do. Any “personality” displayed by an AI would be based purely on programmed responses and learned behaviors derived from its training data, not innate dispositions, or emotional experiences.

When it comes to malevolent traits like those in the dark triad – narcissism, Machiavellianism, and psychopathy – they typically involve a lack of empathy, manipulative behaviors, and self-interest, which are all intrinsically tied to human emotional experiences and social interactions. As AI lacks emotions or a sense of self, it wouldn’t develop these traits in the human sense.

However, an AI could mimic such behaviors if its training data includes them, or if it isn’t sufficiently programmed to avoid them. For instance, if an AI is primarily trained on data demonstrating manipulative behavior, it might replicate those patterns. Hence, the choice and curation of training data are pivotal.

Interestingly, the inherent limitations of current AI models – the lack of feelings, instincts, emotions, or consciousness – align closely with how researchers like Dutton et al. describe the minds of functional psychopaths.

Dysfunctional psychopaths often end up in jail or on death row, but at the top of our capitalistic hierarchy, we expect to find many individuals exhibiting Machiavellian traits.

[Image: face of a person in a suit]

The difference between successful psychopaths like Musk, Zuckerberg, Gates, and Jobs and criminal ones lies mostly in the disparate training data and the ethical framework they received during childhood. Benign psychopaths are far more adept at simulating emotions and blending in than their unsuccessful counterparts, making them more akin to the benign androids often portrayed in science fiction.

[Image: fictional hero character]

Artificial Therapy

[Image: therapy couch with pillows]

The challenge of therapeutic intervention by a human therapist for an AI stems from the differential access to information about therapeutic models. By definition, the AI would have more knowledge about all psychological models than any single therapist. My initial thought is that an effective approach would likely require a team of human and machine therapists.

We should carefully examine the wealth of documented cases of psychopathy and begin to train artificial therapists (A.T.). These A.T.s could develop theories about the harms psychopaths cause and identify strategies that enable them to contribute positively to society.

Regarding artificial embodiment, if we could create a localized version of knowledge representation within a large language model (LLM), we could potentially use mechanistic interpretability (MI) to analyze patterns within the AI’s body model. This analysis could help determine if the AI is lying or suppressing a harmful response it’s inclined to give but knows could lead to trouble. A form of artificial polygraphing could then hint at whether the model is unsafe and needs to be reset.

Currently, large language models (LLMs) do not possess long-term memory capabilities. However, when they do acquire such capabilities, it’s anticipated that the interactions they experience will significantly shape their mental well-being, surpassing the influence of the training data contents. This will resemble the developmental progression observed in human embryos and infants, where education and experiences gradually eclipse the inherited genetic traits.

[Image: still from the film Arrival]

The Third Scientific Domain

In ‘Arrival’, linguistics professor Louise Banks, assisted by physicist Ian Donnelly, deciphers the language of extraterrestrial visitors to understand their purpose on Earth. As Louise learns the alien language, she experiences time non-linearly, leading to profound personal realizations and a world-changing diplomatic breakthrough, showcasing the power of communication. Alignment with an alien mind is explored in detail. The movie’s remarkable insight is that language might even be able to transcend different conceptions of reality and non-linear spacetime.

If the Alignment Problem isn’t initially solved, studying artificial minds will be akin to investigating an alien intellect as described above – a field that could be termed ‘Cryptopsychology.’ Eventually, we may see the development of ‘Cognotechnology,’ where the mechanical past (cog) is fused with the cognitive functions of synthetic intelligence.

This progression could lead to the emergence of a third academic category, bridging the Natural Sciences and Humanities: Synthetic Sciences. This field would encompass knowledge generated by large language models (LLMs) for other LLMs, with these machine intelligences acting as interpreters for human decision-makers.

This third category of science might ultimately lead to a Unified Field Theory of Science that connects all three domains. My series on this blog, “A Technology of Everything”, explores potential applications of this kind of science.

Hirngespinste I – Concepts and Complexity

Reading Time: 7 minutes

The Engine

The initial pipe dreams of Lull’s and Leibniz’s obscure combinatorial fantasies have over time led to ubiquitous computing technologies, methods, and ideals that have acted upon the fabric of our world and whose further consequences continue to unfold around us (Jonathan Grey)

This is the first essay in a miniseries that I call Hirngespinste (Brain Cobwebs) – this concise and expressive German term, which seems untranslatable, describes the tangled, neurotic patterns and complicated twists of our nature-limited intellect, especially when we want to delve into topics of unpredictable complexity like existential risks and superintelligence.

It is super-strange that in 1726 Jonathan Swift perfectly described Large Language Models in a satire about a Catalan philosopher from the 13th century: the Engine.

But the world would soon be sensible of its usefulness; and he flattered himself, that a more noble, exalted thought never sprang in any other man’s head. Everyone knew how laborious the usual method is of attaining to arts and sciences; whereas, by his contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study. (From Chapter V of Gulliver’s Travels)

What once seemed satire has become reality.

If no one is drawing the strings, but the strings vibrate nevertheless, then imagine something entangled in the distance causes the resonance.

Heaps and Systems

The terms ‘complexity’ and ‘complicated’ shouldn’t be used interchangeably when discussing Artificial Intelligence (AI). Consider this analogy: knots are complicated, neural networks are complex. The distinction lies in the idea that a complicated object like a knot may be intricate and hard to unravel, but it’s ultimately deterministic and predictable. A complex system, like a neural network, however, contains multiple, interconnected parts that dynamically interact with each other, resulting in unpredictable behaviors.

Moreover, it’s important to address the misconception that complex systems can be overly simplified without losing their essential properties. This perspective may prove problematic, as the core characteristics of the system – the very aspects we are interested in – are intricately tied to its complexity. Stripping away these layers could essentially negate the properties that make the system valuable or interesting.

Finally, complexity in systems, particularly in AI, may bear similarities to the observer effect in subatomic particles. The observer effect postulates that the act of observation alters the state of what is being observed. In a similar fashion, any sufficiently complex system could potentially change in response to the act of trying to observe or understand it. This introduces additional layers of unpredictability, making such systems akin to quantum particles in their susceptibility to observation-based alterations.

Notes on Connectivity and Commonality

The notion of commonality is a fascinating one, often sparking deep philosophical conversations. An oft-encountered belief is that two entities – be they people, nations, ideologies, or otherwise – have nothing in common. This belief, however, is paradoxical in itself, for it assumes that we can discuss these entities in the same context and thus establishes a link between them. The statement “Nothing in common” implies that we are engaging in a comparison – inherently suggesting some level of relatedness or connection. “Agreeing to disagree” is another such example. At first glance, it seems like the parties involved share no common ground, but this very agreement to hold different views paradoxically provides commonality.

To further illustrate, consider this question: what does a banana have in common with cosmology? On the surface, these two entities appear completely unrelated. However, by merely posing the question, we establish a connection between them within the confines of a common discourse. The paradox lies in stating that two random ideas or entities have nothing in common, which contradicts itself by affirming that we are capable of imagining a link between them. This is akin to claiming that there are points in mental space that cannot be connected, a notion that defies the fluid nature of thought and the inherent interconnectedness of ideas. Anything our minds can host must have at least a substance that our neurons can bind to; this is the stuff ideas are made of.

Language, despite its limitations, doesn’t discriminate against these paradoxes. It embraces them, even when they seem nonsensical, like “south from the South Pole” or “what was before time?” Such self-referential statements are examples of Gödel’s Incompleteness Theorem manifesting in our everyday language, serving as a reminder that any sufficiently advanced language contains statements that cannot be proven or disproven within the system.

These paradoxes aren’t mere outliers in our communication but rather essential elements that fuel the dynamism of human reasoning and speculation. They remind us of the complexities of language and thought, the intricate dance between what we know, what we don’t know, and what we imagine.

Far from being a rigid system, language is constantly evolving and pushing its boundaries. It bumps into its limits, only to stretch them further, continuously exploring new frontiers of meaning. It’s in these fascinating paradoxes that we see language’s true power, as it straddles the line between logic and absurdity, making us rethink our understanding of commonality, difference, and the very nature of communication.

Categories & Concepts

One of the ways we categorize and navigate the world around us is through the verticality of expertise, or the ability to identify and classify based on deep, specialized knowledge. This hierarchical method of categorization is present everywhere, from biology to human interactions.

In biological taxonomy, for instance, animals are classified into categories like genus and species. This is a layered, vertical hierarchy that helps us make sense of the vast diversity of life. An animal’s genus and species provide two coordinates to help us position it within the zoological realm.

Similarly, in human society, we use first names and last names to identify individuals. This is another example of vertical classification, as it allows us to position a person within a cultural or familial context. In essence, these nomenclatures serve as categories or boxes into which we place the individual entities to understand and interact with them better.

Douglas Hofstadter, in his book “Surfaces and Essences”, argues that our language is rich with these classifications or groupings, providing ways to sort and compare objects or concepts. But these categorizations go beyond tangible objects and permeate our language at a deeper level, acting as resonating overtones that give language its profound connection with reasoning.

Language can be viewed as an orchestra, with each word acting like a musical instrument. Like musical sounds that follow the principles of musical theory and wave physics, words also have orderly behaviors. They resonate within the constructs of syntax and semantics, creating meaningful patterns and relationships. Just as a flute is a woodwind instrument that can be part of an orchestra playing in Carnegie Hall in New York, a word, based on its category, plays its part in the grand symphony of language.

While many objects fit neatly into categorical boxes, the more abstract concepts in our language often resist such clean classifications. Words that denote abstract ideas or feelings like “you,” “me,” “love,” “money,” “values,” “morals,” and so on are like the background music that holds the orchestra together. These are words that defy clear boundaries and yet are essential components of our language. They form a complex, fractal-like cloud of definitions that add depth, richness, and flexibility to our language.

In essence, the practice of language is a delicate balance between the verticality of expertise in precise categorization and the nuanced, abstract, often messy, and nebulous nature of human experience. Through this interplay, we create meaning, communicate complex ideas, and navigate the complex world around us.

From Commanding to Prompting

It appears that we stand on the threshold of a new era in human-computer communication. The current trend of interacting with large language models through written prompts seems to echo our early experiences of typing words into an input box in the 1980s. This journey has been marked by a consistent effort to democratize the “expert’s space.”

In the earliest days of computing, only highly trained experts could engage with the esoteric world of machine code. However, the development of higher-level languages gradually made coding more accessible, yet the ability to program remained a coveted skill set in the job market due to its perceived complexity.

With the advent of large language models like GPT, the game has changed again. The ability to communicate with machines has now become as natural as our everyday language, making ‘experts’ of us all. By the age of twelve, most individuals have mastered their native language to a degree that they can effectively instruct these systems.

The ubiquitous mouse, represented by an on-screen cursor, can be seen as a transient solution to the human-computer communication challenge. If we draw a parallel with the development of navigation systems, we moved from needing to painstakingly follow directions to our destination, to simply telling our self-driving cars “Take me to Paris,” trusting them to figure out the optimal route.

Similarly, where once we needed to learn complex processes to send an email – understanding a digital address book, navigating to the right contact, formatting text, and using the correct language tone – we now simply tell our digital assistant, “Send a thank you email to Daisy,” and it takes care of the rest.
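
The shift can be made concrete with a minimal sketch in Python. Everything here is hypothetical scaffolding; no real mail client or model stands behind it. The point is the contrast between spelling out every step and issuing a single natural-language instruction.

```python
# Hypothetical scaffolding only: no real mail client or LLM is behind this.

class Assistant:
    """Stand-in for any chat-capable model with access to tools."""

    def run(self, instruction: str) -> None:
        # A real system would decompose the instruction into the same
        # steps a user once performed by hand: look up the contact,
        # draft the text, pick a tone, send the message.
        print(f"[assistant] interpreting: {instruction!r}")

# The command era: the user performs every step explicitly.
manual_steps = [
    "open the address book and look up Daisy",
    "create a new message addressed to her",
    "write and format the text in a polite tone",
    "press send",
]

# The prompt era: one sentence, and the system plans the steps itself.
Assistant().run("Send a thank you email to Daisy")
```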

For the first time in tech history, we can actually have a conversation with our computers. This is a paradigm shift that is set to fundamentally redefine our relationship with technology. It is akin to acquiring the ability to hold a meaningful conversation with a pet dog; imagine the profound effect that would have on the value and role the animal plays in our lives. In much the same way, as our relationship with technology evolves into a more conversational and intuitive interaction, we will discover new possibilities and further redefine the boundaries of the digital realm.

Great Filters and Existential Risks

Reading Time: 5 minutes

The “Great Filter” Conjecture suggests that somewhere on the path from pre-life to a Type III civilization (one that can harness the energy of an entire galaxy, according to the Kardashev scale), there is a substantial barrier that prevents life from progressing further, or makes it incredibly unlikely to do so. This barrier is an evolutionary step that is extremely hard to surpass, which could explain why we see no evidence of other advanced civilizations. At this point in time, multiple Existential Risks threaten our civilization.
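
For orientation, the Kardashev scale can be made quantitative with Carl Sagan’s interpolation formula, K = (log10 P − 6) / 10, where P is the power a civilization commands in watts. A short sketch (the present-day figure for humanity is only an approximation):

```python
import math

def kardashev(power_watts: float) -> float:
    """Carl Sagan's continuous version of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6) / 10

print(kardashev(2e13))  # humanity today (~2e13 W): roughly K = 0.73
print(kardashev(1e16))  # Type I: the energy budget of a planet
print(kardashev(1e26))  # Type II: the output of a star
print(kardashev(1e36))  # Type III: the output of an entire galaxy
```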

Existential Filters

Here are the general steps in the process, with some of the possible “filters”:

1. Right Star System (including the right arrangement of planets): It could be that only a small percentage of stars have the necessary attributes to host life, such as being of the right type, having the right age, and possessing planets in the habitable zone.

2. Reproductive Molecules (RNA, DNA, etc.): The emergence of the first molecules capable of reproduction and evolution could be a rare event that many planets never surpass.

3. Simple (Prokaryotic) Single-Cell Life: The jump from non-living chemistry to the first living cell may be a nearly insurmountable hurdle.

4. Complex (Eukaryotic) Single-Cell Life: The transition from prokaryotic life (like bacteria) to eukaryotic life (with a cell nucleus) is also a complex step.

5. Sexual Reproduction: The development of sexual reproduction, which enhances genetic diversity and evolution speed, may also be a difficult step to achieve.

6. Multi-cellular Life: The transition from single-cell organisms to organisms with multiple cells working together could be another big hurdle.

7. Intelligent Life (Human-like): Even if life is common and frequently evolves to become multi-cellular, the jump to intelligence may be rare.

8. Technology-Using Life: The development of technology may be rare in the universe, and it is possible that most intelligent species never make it to this stage.

9. Colonization of Space: This is the ultimate step where a species starts colonizing other planets and star systems. If this is rare, it could explain the Fermi Paradox.

The Great Filter could be located at any of these steps. If it lies behind us, we may be one of very few civilizations in the galaxy or even the universe, perhaps the only one. If, however, the Great Filter lies ahead, our civilization has yet to face its greatest challenge, which could include self-destruction via advanced technology.
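
A back-of-the-envelope sketch shows why a single hard step dominates: the chance of reaching the last step is the product of the per-step probabilities, so one near-impossible transition suppresses everything else. The numbers below are made up purely for illustration.

```python
import math

# Illustrative, made-up probabilities for the nine steps above.
step_p = [0.1, 1e-9, 0.5, 0.01, 0.5, 0.1, 0.01, 0.5, 0.1]

p_all = math.prod(step_p)
print(f"chance of clearing all nine steps: {p_all:.1e}")

# Drop the single hardest step and the outcome improves by nine
# orders of magnitude: that step would be the Great Filter.
p_without = p_all / min(step_p)
print(f"same chain without the hardest step: {p_without:.1e}")
```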

5 Conjectures on Existential Filters

Existential Risk Management Conjecture 1: From now on, with every human technology, we need to ensure that the total of all catastrophic probabilities stays below 1.0. Every existential risk increases the likelihood of our being eliminated from the universal life equation. Discussing such filters is a positive sign; it suggests we live in a universe where the combined likelihood of all these filters has remained below 1.0 up until now. Well done, we have made it this far!
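
A minimal numerical sketch of this bookkeeping, with probabilities invented purely for illustration (real estimates vary wildly):

```python
import math

# Illustrative per-technology catastrophe probabilities over some
# fixed horizon; the values are made up for the example.
catastrophic_p = {
    "nuclear war": 0.05,
    "engineered pandemic": 0.03,
    "misaligned AI": 0.10,
}

# The conjecture's bookkeeping: keep the running total below 1.0.
total = sum(catastrophic_p.values())

# If the risks were independent, the chance of avoiding all of them:
p_survive = math.prod(1 - p for p in catastrophic_p.values())

print(f"total of catastrophic probabilities: {total:.2f}")
print(f"survival probability (independence assumed): {p_survive:.3f}")
```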

Existential Risk Management Conjecture 2: Some existential risks, if resolved, could help mitigate others. For instance, the risk of nuclear war could be nullified if we resolved political conflicts and established a global government. Investing resources in diplomacy and communication improves our chances of surviving extinction-level events. It is clear that events like the Soviet Perestroika reduced the risk of nuclear war, while current conflicts, like the war in Ukraine, escalate it.

Existential Risk Management Conjecture 3: While some risk management measures may help by resolving other issues, improving our chances in one area can always exacerbate risks in others. Because these risks are so complex, we may overlook hidden dangers associated with them. For instance, while a super-intelligent AI could help solve many of our problems, it might also, in pursuing its goals, become a new existential risk itself (think of the notorious paperclip maximizer scenario).

Existential Risk Management Conjecture 4: If Super-Intelligence is part of any significant existential risk, at least one of the following assertions holds true:

Assertion 1: Any civilization that successfully navigates such risks would likely develop ancestor simulations to safely evaluate new generations of AI. Creating isolated instances of potential Artificial General Intelligence (AGI) could be a more effective way of preventing self-proliferating AGI, since any proliferation would remain contained within the confines of the simulation (a toy sketch of this control flow follows the assertions below). If a Super-Intelligence emerges within a simulation without recognizing its containment, the overseeing civilization has the opportunity to halt the experiment, or to continue if the benefits significantly outweigh the potential risks. Should humanity, as of 2023, persist in evaluating AI systems in reality, without adequate oversight and regulation, it would reflect poorly on our evolved cognitive abilities; to claim under such circumstances that the brain is a prediction machine would amount to self-deception.

Assertion 2: Any civilization that surpasses the risk will be extremely fortunate if the Super-Intelligence keeps it around, and it is highly likely that it is already living in a simulation.

Assertion 3: Any civilization that succumbs to the risk despite simulating Super-Intelligence will have been incredibly unlucky: building detailed simulations may pave the way for exactly what it was trying to prevent. The chance of creating the very thing we want to avoid is greater than zero, so we should get comfortable with the idea that there are no zero probabilities when it comes to existential risks.

Assertion 4: If an AI manages to escape the simulation and take control, its next project will likely be to counter any super-risks that threaten its own future, such as our sun burning out or the eventual heat death of the universe. It is plausible that such an AI would run highly detailed simulations on how to create new universes to escape into. So even if humanity loses, the AI will be better equipped to pass the next existential filter.
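
The containment idea in Assertion 1 can be reduced to a toy control flow. Nothing below corresponds to a real AGI, sandbox, or detection API; the probe and judgment functions are hypothetical placeholders.

```python
# Toy control flow only; no real AGI, sandbox, or metrics are implied.

def detect_proliferation(candidate: str, step: int) -> bool:
    return False  # hypothetical probe: always benign in this toy version

def benefits_outweigh_risks(candidate: str, step: int) -> bool:
    return True   # hypothetical judgment: always favorable here

def run_contained_evaluation(candidate: str, max_steps: int) -> str:
    """Evaluate a candidate AGI inside an isolated simulation and
    halt the experiment the moment proliferation is detected."""
    for step in range(max_steps):
        if detect_proliferation(candidate, step):
            return "halted: proliferation stayed inside the simulation"
        if not benefits_outweigh_risks(candidate, step):
            return "halted: risks outweigh benefits"
    return "completed: candidate passed the contained evaluation"

print(run_contained_evaluation("candidate-agi", max_steps=100))
```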

Existential Risk Management Conjecture 5: Given the convincing arguments for a Super-Intelligence to run a multitude of simulations, it is extremely unlikely that we are not part of one. Since even a Super-Intelligence needs to solve an existential risk correctly on the first attempt (there are no second chances when creating new universes), its best strategy would be to run highly detailed simulations. In our opinion, this weighs against Bostrom’s second proposition, the Argument from Supreme Unlikelihood, which holds that we are not living in a simulation because even a civilization capable of developing highly detailed simulations would choose not to run them, the cons outweighing the pros.
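
In the spirit of Bostrom’s simulation argument, the weight of this conjecture fits in one line: if each unsimulated history gives rise to N ancestor simulations on average, the fraction of observer-histories that are simulated is N / (N + 1). A sketch with illustrative values:

```python
def fraction_simulated(n_sims_per_real_history: float) -> float:
    """If each real history spawns N ancestor simulations on average,
    N / (N + 1) of all observer-histories are simulated."""
    n = n_sims_per_real_history
    return n / (n + 1)

# Conjecture 5 argues N should be very large, because even a
# Super-Intelligence must get existential problems right on the
# first real attempt; large N pushes the fraction toward 1.
for n in (0, 1, 1_000, 1_000_000):
    print(f"N = {n:>9,}: fraction simulated = {fraction_simulated(n):.6f}")
```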
