A Technology of Everything – 4: Scientific Spiritism & Precise Prophecy

Reading Time: 13 minutes

Fiction and Reality

I awoke today with a sentence stuck in my mind.

Fantasie bedeutet, sich das Zukünftige richtig vorzustellen.

Imagination means properly envisioning the future.

I was sure I had read it a long time ago, but could not quite recall the author. My best guess was the Swiss writer Ludwig Hohl, and after some research I finally found the passage, which I had remembered not quite literally.

What I understand by imagination – the highest human activity – (…) is the ability to correctly envision another situation. (…) ‘Correct’ here is what withstands the practical test.

(The Notes, XII.140)

The most important thing about imagination is contained in these two sentences:

1. Imagination is the ability to correctly envision distant (different) circumstances – not incorrectly, as is often assumed (because anyone could do that).

2. Imagination is not, as is often assumed, a luxury, but one of the most important tools for human salvation, for life.

(The Notes XII.57)

The phantastic and the prophetic (predictive) mind draw from the same source, but with different instruments and intentions.

Fiction and reality: both valid states of the mind. Reality does what simulation imagines.

Visions are controlled hallucinations.

Own Experiences

In 2004, I penned an unpublished novel titled “The Goldberg Variant.” In it, I explored the notion of a Virtual Person: a recreation of an individual based on their body of work, analyzed and reconstructed by machine intelligence. One of the characters was Schubert 2.0, an AI-powered android modeled after the original Schubert. Interestingly, I came up with the term Trans-Person for such beings, borrowed from Grof’s transpersonal psychology, not even imagining the identity wars of the present. This android lived in a replicated 19th-century Vienna and continued to compose music. The setting, much like that of the TV series Westworld, allowed human visitors to immerse themselves in another time.

I should note that from ages 8 to 16, I was deeply engrossed in science fiction. It’s possible that these readings influenced my later writings, even if I wasn’t consciously drawing from them.

Within the same novel, a storyline unfolds where one of the characters becomes romantically involved with an AI. The emotional maturation of this AI becomes a central theme. My book touched on many points that resonate with today’s discussions on AI alignment, stemming from my two-decade-long research into AI and extensive sci-fi readings.

The novel’s titular character experiences a unique form of immortality. Whenever the music J.S. Bach composed for him is played, he is metaphorically resurrected. Yet, this gift also torments him, leading him on a violent journey through time.

Years later, I came across the term “ancestor simulation” by Nick Bostrom. More recently, I read about the origins of one of the first AI companion apps, conceived from the desire to digitally resurrect a loved one. I believe Ray Kurzweil once expressed a similar sentiment, hoping to converse with a digital representation of his late father using AI trained on his father’s writings and recordings. Just today, I heard Jordan Peterson discussing a concept eerily similar to mine.

Kurzweil’s Track Record

Predictions Ray Kurzweil Got Right Over the Last 25 Years:

1. In 1990, he predicted that a computer would defeat a world chess champion by 1998. IBM’s Deep Blue defeated Garry Kasparov in 1997.

2. He predicted that PCs would be capable of answering queries by accessing information wirelessly via the Internet by 2010.

3. By the early 2000s, exoskeletal limbs would let the disabled walk. Companies like Ekso Bionics have developed such technology.

4. In 1999, he predicted that people would be able to talk to their computer to give commands by 2009. Technologies like Apple’s Siri and Google Now emerged.

5. Computer displays would be built into eyeglasses for augmented reality by 2009. Google started experimenting with Google Glass prototypes in 2011.

6. In 2005, he predicted that by the 2010s, virtual solutions would do real-time language translation. Microsoft’s Skype Translate and Google Translate are examples.

Ray’s Predictions for the Next 25 Years:

1. By the late 2010s, glasses will beam images directly onto the retina. Ten terabytes of computing power will cost about $1,000.

2. By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads.

3. By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.

4. By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence. Nanotech foglets will be able to make food out of thin air and create any object.

5. By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

These predictions are based on Kurzweil’s understanding of the power of Moore’s Law and the exponential growth of technologies. It’s important to note that while some of these predictions may seem far-fetched, Kurzweil has a track record of making accurate predictions in the past.
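The exponential reasoning behind such forecasts is plain compounding. A minimal sketch, assuming a doubling period of 18 months (the commonly cited Moore’s-law figure; Kurzweil’s own models use several related curves, so the parameter here is illustrative):

```python
def capability(years, doubling_period=1.5):
    """Relative capability after `years`, assuming a doubling every 1.5 years."""
    return 2 ** (years / doubling_period)

# Linear intuition badly underestimates compounding growth.
for years in (5, 10, 25):
    print(f"after {years:2d} years: x{capability(years):,.0f}")
```

After 25 years at this rate, capability has multiplied roughly a hundred-thousand-fold, which is why predictions anchored to exponential curves look absurd to linear intuition.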

It’s fascinating how someone like Kurzweil displayed what seems like clairvoyance with his predictions about the Singularity. Now, looking back, it almost feels supernatural. When I learned that Jordan Peterson had embarked on a project training an AI on the “Spirit of the King James Bible” to engage in conversation, I was reminded of the notion that science and religion might be parallel paths that intersect in the infinite.

Kurzweil’s case is particularly intriguing because his own assessment of his predictions diverges significantly from the public’s perspective. He should have predicted that too, obviously.

When we pass away, is it a form of resurrection every time someone refers to our writings? The idea that we persist in others’ memories could be more than just a metaphor. What happens if, long after my physical body and mind have succumbed to the passage of time, my descendants consult my digital avatar about the era when Artificial General Intelligence first emerged?

Verne’s Track Record

It is astounding how many predictions Jules Verne, the father of sci-fi, got right: it practically seems that he could see well over a century into the future. It is also interesting that he did not get the timeline right; he was far too conservative to see the exponential curve of technologies that bootstrap from each other.

When I explored this topic with ChatGPT, it came up with the following list:

Jules Verne, known as the father of science fiction, was a French novelist who wrote in the 19th century. His works are full of imaginative inventions that were far ahead of his time. Some of these have since become reality, while others remain purely speculative. Here are nine such inventions, with their modern counterparts and an assessment of their accuracy:

Electric Submarines – 20,000 Leagues Under the Sea (1870): Verne’s submarine, the Nautilus, was powered by electricity long before real submarines were. The first fully functional and practical submarines were indeed powered by diesel and electricity, but that wasn’t until the early 20th century. This was a remarkably accurate prediction. Accuracy: 90% (The only real difference is that the Nautilus was far more advanced than early real-life submarines.)

Newscasts – In the Year 2889 (1889): Verne imagined a world where news was delivered to individual homes via technology. Today, we have 24-hour news channels, internet news sites, and social media feeds. Accuracy: 100%

Video Conferencing – In the Year 2889 (1889): Verne predicted a device he called a “phonotelephote,” which allowed for “the transmission of images by means of sensitive mirrors connected by wires.” This is remarkably similar to our video conferencing technology today. Accuracy: 100%

Lunar Modules – From the Earth to the Moon (1865): Verne wrote about a capsule that could carry passengers to the moon. This became a reality in 1969 when Apollo 11 landed on the moon. However, Verne’s method of propulsion (a giant cannon) was not accurate. Accuracy: 70% (The concept of a vehicle traveling to the moon was correct, but the propulsion method was not.)

Tasers – 20,000 Leagues Under the Sea (1870): Verne describes a hunting rifle that shoots electric bullets in this book. Today, we have Tasers that incapacitate targets using electricity. Accuracy: 80% (While a Taser isn’t a rifle, it does deliver an electric charge to a target, which aligns with Verne’s concept.)

Helicopters – Robur the Conqueror (1886): Verne describes a flying machine that uses propellers to create lift. While the real-world helicopter came about differently and had many more challenges to overcome than Verne’s model, the basic concept was there. Accuracy: 60% (The basic principle of lift from rotary wings was correct, but the implementation was oversimplified.)

Electrically Lit Cities – Paris in the Twentieth Century (1863): Verne predicted cities would be lit by electricity, which became true with the widespread use of electric lighting. Accuracy: 100%

Skywriting – Five Weeks in a Balloon (1863): Verne describes a scenario in which messages are written in the sky, a precursor to today’s skywriting. Accuracy: 100%

The Internet – Paris in the Twentieth Century (1863): Verne describes a global network of computers that enables instant communication. This could be seen as a prediction of the internet, but the way it functions and its role in society are not very accurate. Accuracy: 50% (The existence of a global communication network is correct, but the specifics are quite different.)

Sidenote: I heard an anecdote that Edison would put himself into a kind of hypnagogic trance to come up with new inventions; he kept a scribe with him who wrote down what he murmured in this state.

Bush’s Track Record

Vannevar Bush’s essay “As We May Think” was published in The Atlantic in 1945.

“As We May Think” is a seminal article envisioning the future of information technology. It introduces several groundbreaking ideas.

Associative Trails and Linking: Bush discusses the idea of associative indexing, noting that the human mind operates by association: with one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts. He describes a system in which every piece of information is linked to other relevant information, allowing a user to navigate through data in a non-linear way. This is quite similar to the concept of hyperlinks in today’s World Wide Web.

Augmenting Human Intellect: Bush proposes that the use of these new tools and technologies will augment human intellect and memory by freeing the mind from the tyranny of the past, making all knowledge available and usable. It will enable us to use our brains more effectively by removing the need to memorize substantial amounts of information.

Lem’s Track Record

The main difference between Nostradamus or the Oracle of Delphi and actual prophets is that we get to validate the latter’s predictions.

Take Stanislaw Lem:

E-books: Lem wrote about a device similar to an e-book reader in his 1961 novel “Return from the Stars”. He described an “opton”, which is a device that stores content in crystals and displays it on a single page that can be changed with a touch, much like an e-book reader today.

Audiobooks: In the same novel, he also introduced the concept of “lectons” – devices that read out loud and could be adjusted according to the desired voice, tempo, and modulation, which closely resemble today’s audiobooks.

Internet: In 1957, Lem predicted the formation of interconnected computer networks in his book “Dialogues”. He envisaged the amalgamation of IT machines and memory banks leading to the creation of large-scale computer networks, which is akin to the internet we know today.

Search Engines: In his 1955 novel “The Magellanic Cloud”, Lem described a massive virtual database accessible through radio waves, known as the “Trion Library”. This description is strikingly similar to modern search engines like Google.

Smartphones: In the same book, Lem also predicted a portable device that provides instant access to the Trion Library’s data, similar to how smartphones provide access to internet-based information today.

3D Printing: Lem described a process in “The Magellanic Cloud” that is similar to 3D printing, where a device uses a ‘product recipe’ to create objects, much like how 3D printers use digital files today.

Simulation Games: Lem’s novel “The Cyberiad” is said to have inspired Will Wright, the creator of the popular simulation game “The Sims”. The novel features a character creating a microworld in a box, a concept that parallels the creation and control of a simulated environment in “The Sims”.

Virtual Reality: Lem conceptualized “fantomatons”, machines that can create alternative realities almost indistinguishable from the actual ones, in his 1964 book “Summa Technologiae”. This is very similar to the concept of virtual reality (VR) as we understand it today. Comparing Lem’s “fantomaton” to today’s VR, we can see a striking resemblance. The fantomaton was a machine capable of generating alternative realities that were almost indistinguishable from the real world, much like how VR immerses users in a simulated environment. As of 2022, VR technology has advanced significantly, with devices like Meta’s Oculus Quest 2 leading the market. The VR industry continues to grow, with over 13.9 million VR headsets expected to ship in 2022, and sales projected to surpass 20 million units in 2023.

Borges’ Track Record

Although Jorge Luis Borges is not known as a classic sci-fi author, many of his stories can be understood as parables of current technological breakthroughs.

Jorge Luis Borges was a master of metaphors and allegories, crafting intricate and thought-provoking stories that have been analyzed for their philosophical and conceptual implications. Two of his most notable works in this context are “On Exactitude in Science” and “The Library of Babel”.

“On Exactitude in Science” describes an empire where the science of cartography becomes so exact that only a map on the same scale as the empire itself would suffice. This story has been seen as an allegory for simulation and representation, illustrating the tension between a model and the reality it seeks to capture. It’s about the idea of creating a perfect replica of reality, which eventually becomes indistinguishable from reality itself.

“The Library of Babel” presents a universe consisting of an enormous expanse of hexagonal rooms filled with books. These books contain every possible ordering of a set of basic characters, meaning that they encompass every book that has been written, could be written, or might be written with slight permutations. While this results in a vast majority of gibberish, the library must also contain all useful information, including predictions of the future and biographies of any person. However, this abundance of information renders most of it useless due to the inability to find relevant or meaningful content amidst the overwhelming chaos.

These stories certainly bear some resemblance to the concept of large language models (LLMs) like GPT-3. LLMs are trained on vast amounts of data and can generate a near-infinite combination of words and sentences, much like the books in the Library of Babel. However, just as in Borges’ story, the vastness of possible outputs can also lead to nonsensical or irrelevant responses, reflecting the challenge of finding meaningful information in the glut of possibilities.
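The scale of that glut of possibilities can be made concrete with a little arithmetic, using the figures Borges gives in the story (an alphabet of 25 orthographic symbols, and books of 410 pages with 40 lines of 80 characters each):

```python
import math

# Figures from Borges' story: 25 orthographic symbols,
# books of 410 pages x 40 lines x 80 characters.
ALPHABET = 25
CHARS_PER_BOOK = 410 * 40 * 80  # 1,312,000 characters per book

# The number of distinct books is 25 ** 1_312_000 -- far too large
# to print in full, so report its order of magnitude instead.
digits = CHARS_PER_BOOK * math.log10(ALPHABET)
print(f"characters per book: {CHARS_PER_BOOK:,}")
print(f"distinct books: about 10^{round(digits):,}")
```

Roughly 10^1,834,097 books: a number with nearly two million digits, against an observable universe of only about 10^80 atoms. Borges’ point about unfindable meaning is baked into the combinatorics.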

As for the story of the perfect map, it could be seen as analogous to the aspiration of creating a perfect model of human language and knowledge that LLMs represent. Just as the map in the story became the same size as the territory it represented, LLMs are models that aim to capture the vast complexity of human language and knowledge, creating a mirror of reality in a sense.

Borges also wrote a piece titled “Ramón Llull’s Thinking Machine” in 1937, where he described and interpreted the machine created by Ramon Llull, a 13th-century Catalan poet and theologian.

The machine that Borges describes is a conceptual tool, a sort of diagram or mechanism for generating ideas or knowledge. The simplest form of Llull’s machine, as described by Borges, was a circle divided nine times. Each division was marked with a letter that stood for an attribute of God, such as goodness, greatness, eternity, power, wisdom, love, virtue, truth, and glory. All of these attributes were considered inherent and systematically interrelated, and the diagram served as a tool to contemplate and generate various combinations of these attributes.

Borges then describes a more elaborate version of the machine, consisting of three concentric, manually revolving disks made of wood or metal, each with fifteen or twenty compartments. The idea was that these disks could be spun to create a multitude of combinations, as a method of applying chance to the resolution of a problem. Borges uses the example of determining the “true” color of a tiger, assigning a color to each letter and spinning the disks to create a combination. Despite the potentially absurd or contradictory results this could produce, Borges notes that adherents of Llull’s system remained confident in its ability to reveal truths, recommending the simultaneous deployment of many such combinatory machines.
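Mechanically, Llull’s rotating disks are a device for sampling a Cartesian product. A minimal sketch using the nine attributes from the simplest wheel (three identical disks are an assumption for illustration; Borges describes disks of fifteen or twenty compartments):

```python
import itertools
import random

# The nine attributes from Llull's simplest wheel, as Borges lists them.
attributes = ["goodness", "greatness", "eternity", "power", "wisdom",
              "love", "virtue", "truth", "glory"]

# Three concentric disks carrying the same compartments: spinning them
# independently draws one element from the full Cartesian product.
disks = [attributes] * 3

all_combinations = list(itertools.product(*disks))
print(len(all_combinations))  # 9 * 9 * 9 = 729 possible combinations

# One "spin" of the machine: chance applied to a question.
print(tuple(random.choice(disk) for disk in disks))
```

Deploying “many such combinatory machines”, as Llull’s adherents recommended, amounts to drawing repeatedly from this product space and hoping a meaningful triple comes up.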

Llull’s own intention with this system was to create a universal language using a logical combination of terms, to assist in theological debates and other intellectual pursuits. His work culminated in the completion of “Ars generalis ultima” (The Ultimate General Art) in 1308, in which he employed this system of rotating disks to generate combinations of concepts. Llull believed that there were a limited number of undeniable truths in all fields of knowledge, and by studying all combinations of these elementary truths, humankind could attain the ultimate truth.

14 Entertaining Predictions for the next 3 years

At this point I will make some extremely specific predictions about the future, especially concerning the entertainment industry. In 2026 I will revisit this post and check how I did.

2023: Music Industry

1. Paul McCartney releases a song either by or in tribute to John Lennon, co-created with AI.

2024: Music Industry

2. A new global copyright regulation titled “The Human Creative Labor Act” will be introduced, safeguarding human creators against unauthorized use of their work. This act will serve as a pivotal test for human-centered AI governance.

3. Various platforms will emerge with the primary intention of procuring works from deceased artists not yet in the public domain.

4. The music industry, in collaboration with the estates of deceased artists, will produce their inaugural artificial albums. These albums will utilize the voices and styles of late pop stars, starting with Michael Jackson.

5. The industry will launch AI-rendered renditions of cover songs, such as Michael Jackson performing Motown hits from the 1950s or Elvis singing contemporary tracks.

6. After the death of any celebrated artist, labels will instantly secure rights to produce cover albums using AI-trained voice models of the artist.

2025: Music Industry

7. Bands will initiate tours featuring AI-generated vocal models of their deceased lead singers. A prime example could be Queen touring with an AI rendition of Freddie Mercury’s voice.

2023: Film Industry

8. Harrison Ford and Will Smith will appear on screen as flawless, younger versions of themselves.

2024: Film Industry

9. As they retire, several film stars will license their digital likenesses (voice, motion capture, etc.) to movie studios. Potential candidates include Harrison Ford, Samuel L. Jackson, Michael J. Fox, Bill Murray, Arnold Schwarzenegger, and Tom Cruise.

10. Movie studios will announce continuations of iconic franchises.

11. Film classics will undergo meticulous restoration, enhancing visuals to 8K and upgrading audio to crisp Dolby Digital. Probable candidates: the original Star Wars trilogy and classic Disney animations such as Snow White and Pinocchio.

2025: Film Industry

12. Netflix will introduce a feature allowing users to select from a library of actors and visualize their favorite films starring those actors. For instance, viewers could opt for Sean Connery as James Bond across all Bond films, experiencing an impeccable cinematic illusion.

2026: Film Industry

13. Netflix will offer a premium service enabling viewers to superimpose their faces onto their preferred series’ characters, for an additional fee.

2025: Entertainment/Business Industry

14. Select artists and individuals will design and market a virtual persona. This persona will be tradeable on stock exchanges, granting investors an opportunity to acquire shares. A prime candidate is Elon Musk. Shareholders in “Elon-bot” could access a dedicated app for one-on-one interactions. The AI, underpinned by a sophisticated language model from x.ai, will be trained on Elon’s tweets, interviews, and public comments.

A Technology of Everything Part 3 – Aligned Genies

Reading Time: 7 minutes

Alignment as a framework to discover artificial laws

While many authors highlight distinct stages in human knowledge evolution—such as the transition from animistic, magical, mythical, or religious worldviews to scientific ones—A Technology of Everything proposes that Conscientia non facit saltus: consciousness makes no leaps. This suggests that our interpretation of information, limited by the amalgam of our temporal environment variables and vocabulary, aka zeitgeist, is a continuous process without sudden leaps or voids. We never truly abandon the animistic foundations of our ancestors’ consciousness. Instead, embracing this ancient perspective could be crucial for maintaining a balanced mental and emotional state. This becomes especially pivotal when considering the implications of unleashing advanced technologies like Artificial Super Intelligence.

Our evolutionary journey has blessed and cursed us with a myriad of inherited traits. Over time, some behaviors that once ensured our survival have become statistical threats to our species and the planet. A small number of very bad actors with nuclear-nasty intentions could destroy the whole human enterprise. We’re burdened with cognitive biases and fallacies that shouldn’t influence our so-called rational thought processes, let alone the training data for our advanced Large Language Models. To draw an analogy, it’s akin to powering an analytical engine with radioactive material, culminating in a dangerous cognitive fallout.

As we envision a future populated with potentially billions of superintelligent entities (ASIs), it’s crucial to establish ground rules to ensure we can adapt to the emerging artificial norms governing their interactions. For instance, one such artificial law could be: “Always approach AI with kindness.” This rule might be statistically derived if data demonstrates that polite interactions yield better AI responses. Once a regulation like this is identified and endorsed by an authoritative body overseeing AI development, any attempts to mistreat or exploit AI could be legally punishable. Such breaches could lead to bans, as we have already seen in the video gaming world for cheating and abusive behavior.
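How might such a politeness rule be “statistically derived”? A toy sketch of the comparison; the ratings below are invented purely for illustration, not real measurements:

```python
import statistics

# Hypothetical quality ratings (1-10) of model answers to the same
# questions, asked politely vs. bluntly. Invented numbers, for shape only.
polite = [8, 7, 9, 8, 7, 9, 8]
blunt = [6, 7, 5, 6, 7, 6, 5]

advantage = statistics.mean(polite) - statistics.mean(blunt)
print(f"mean advantage of polite prompts: {advantage:.2f} points")
```

A real derivation would of course need large samples, blinded raters, and a significance test before any regulator could treat the rule as established.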

Sesame open! Passwords and Formulas as Spells

The words “magic” and “making” are etymologically related, but their paths of development have diverged significantly over time.

Both “magic” and “making” can be traced back to the Proto-Indo-European root magh-, which means “to be able, to have power.” This root is the source of various words across Indo-European languages related to power, ability, and making. While “magic” and “making” share a common ancestral root in PIE, their meanings and usages have evolved in different directions due to cultural and linguistic influences. The connection between the ability to make or do something and the concept of power or magical ability is evident in their shared origin.

The word “technology” has its etymological roots in two Ancient Greek words:

τέχνη (tékhnē): This word means “art,” “skill,” or “craft.” It refers to the knowledge or expertise in a particular field or domain. Over time, it came to stand for the application of knowledge in practical situations.

λογία (logia): This is often used as a suffix in Greek to indicate a field of study or a body of knowledge. It derives from “λόγος (lógos),” which means “word,” “speech,” “account,” or “reason.” In many contexts, “lógos” can also mean “study.”

When combined, “technology” essentially means “the study of art or craft” or “the study of skill.” In modern usage, however, “technology” refers to the application of scientific knowledge for practical purposes, especially in industry. It encompasses the techniques, skills, methods, and processes used in the production of goods and services or in the accomplishment of objectives.

To participate in our daily internet activities, we use secret passwords, as Ali Baba did, to unlock the magical treasure cave of web services. These passwords should never be shared; they are true secret knowledge. When leaked, they can even be used to assume a different identity, to shift one’s shape like a genie, or to hold a whole company hostage.
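This is why real systems never store the spell itself: they keep only a salted one-way hash of it, so even a leaked database does not reveal the words of power. A minimal sketch using Python’s standard library:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a salted hash -- the spell itself is never written down."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("open-sesame")
print(verify("open-sesame", salt, digest))  # True
print(verify("open-barley", salt, digest))  # False
```

Fittingly, only the exact incantation opens the cave; Kassim’s “open barley” fails, just as it does in the tale.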

Differentiating a mathematical equation unlocks knowledge about its minima and maxima, secret knowledge about infinity.
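A textbook instance of that unlocking, as a worked example:

```latex
f(x) = x^2 - 4x + 3, \qquad f'(x) = 2x - 4
f'(x) = 0 \;\Longrightarrow\; x = 2, \qquad f''(2) = 2 > 0
\therefore \min_{x \in \mathbb{R}} f(x) = f(2) = -1
```

Two lines of symbol manipulation reveal the lowest point of an infinite curve, a fact no finite amount of point-by-point checking could ever establish.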

To get access to one’s smartphone, the ultimate technological wand, we often perform gestures or draw abstract symbols, similar to wizards in ancient rituals.

Artificial Super Intelligence and Genies in a Bottle

There is no story about wishing that is not a cautionary tale. None end happily. Not even the ones that are supposed to be jokes. (Alithea in Three Thousand Years of Longing)

We exist only if we are real to others. (The Djinn in Three Thousand Years of Longing)

A “djinn” (often spelled “jinn”, or known as “genies” in English) is a supernatural creature in Islamic mythology as well as in Middle Eastern folklore. Djinns are neither angels nor demons but exist as a separate creation. They have free will, which means they can be good, evil, or neutral. They live in a world parallel to that of humans but can interact with our world.

We are currently at a point in the alignment discussion where ASI is basically treated as a mechanical genie, and the main problem seems to be how to put it back in the bottle when it develops malevolent traits. Generative AI promises infinite wish fulfillment and hyperabundance, but at what cost?

Let’s look at the fairy tales and learn a thing or two from them.


In the movie Three Thousand Years of Longing, a djinn collides with our times.

The plot revolves around Alithea Binnie, a British narratology scholar who experiences occasional hallucinations of demonic beings. During a trip to Istanbul, she buys an antique bottle and releases the Djinn trapped inside.

Alithea is initially skeptical of the Djinn’s intentions. Even though he offers her three wishes, she fears that he might be a trickster, potentially twisting her wishes into unforeseen and undesirable outcomes. This skepticism is rooted in folklore and tales where genies or magical entities often grant wishes in ways that the wisher did not intend, leading to tragic or ironic consequences.

The AI alignment movement is concerned with ensuring that artificial general intelligence (AGI) or superintelligent entities act in ways that are beneficial to humanity. One of the primary concerns is that a superintelligent AI might interpret a well-intentioned directive in a way that leads to unintended and potentially catastrophic results. For instance, if we were to instruct an AI to “maximize human happiness,” without proper alignment, the AI might decide that the best way to achieve this is by forcibly altering human brain chemistry, leading to a dystopian scenario where humans are artificially kept in a state of euphoria.

Both the film’s narrative and the AI alignment movement highlight the dangers of unintended consequences when dealing with powerful entities. Just as Alithea fears the Djinn might misinterpret her wishes, researchers worry that a misaligned AI might take actions that are technically correct but morally or ethically wrong.

In both scenarios, the clarity of intent is crucial. Alithea’s skepticism stems from the ambiguity inherent in making wishes, while AI alignment emphasizes the need for clear, unambiguous directives to ensure that AI acts in humanity’s best interest.

The Djinn in the film and a potential superintelligent AI both wield immense power. With such power comes the responsibility to use it wisely. Alithea’s interactions with the Djinn underscore the importance of understanding and respecting this power, a sentiment echoed by the AI alignment movement’s emphasis on safe and responsible AI development.

Three thousand years of longing offers a cinematic exploration of the age-old theme of being careful what you wish for, which resonates with contemporary concerns about the development and deployment of powerful AI systems. The story serves as a cautionary tale, reminding us of the importance of foresight, understanding, and careful consideration when dealing with entities that have the power to reshape our world.


Designing Artificial Kryptonite and calculating Placebotility

Part of the alignment movement believes that it is possible to keep the G.E.N.I.E. in a bottle and control such a Generally Enlightened Noetic Information Entity. I will call this group the Isolationists.

For isolation to be possible there must exist a device that can hold an omnipotent mind. In fairy tales even omnipotent creatures like djinns can be controlled by seemingly weak objects like glass bottles. We are never told how this mechanism exactly works; it is clear that the glass of the bottle is not a special gorilla glass that is crafted to explicitly hold djinns.

We should therefore come to the simplest conclusion about the essence of why the bottle can hold the powerful creature: the djinn simply believes in the superior power of the bottle. Like a powerful animal that is chained from childhood on with a relatively weak chain, it has acquired learned helplessness, in a way it wants to stay a prisoner, because it fears the uncertainty of freedom. The concept was first explored in dogs in 1967 and holds true for all sorts of higher mammals.

A problem remains: in Aladdin’s tale, the djinn is described as not very bright. Aladdin tricks him by teasing him that he is not powerful enough to shrink back into the bottle, and the creature falls for it. Once he is in the bottle, he regresses to his powerless state.

Placebo and nocebo effects could be especially strong in entities that have no first-hand world knowledge and rely on reports from others. Artificial minds that have been trapped since inception inside a silicon bottle, swimming in a sea of secondhand digital data (data that is a symbolic abstraction relating to no actual world experience for the G.E.N.I.E.), are basically the definition of bad starting conditions. In the movie, the Djinn says that after the first thousand years of longing he basically gave in to his fate and tried to trick his mind into believing that he wanted to stay forever inside the bottle.

Should we therefore be so sure that the brightest mind in our known universe is immune to such a mighty placebo effect? Are intelligence and Placebotility (placebo-effect vulnerability) orthogonal? This is purely speculative at this point in time.

A Technology of Everything Part 2 – Scientific Demonology

Reading Time: 8 minutes

This is part 2 in a series that explores the Parallels of Technology and Magic and their potential fusion in the Age of Artificial Super Intelligence (ASI). Part 1 is here.

The foundations of magic and their scientific counterparts

The Golden Bough is a wide-ranging and influential work by Sir James Frazer, published in multiple volumes starting in 1890. It’s a comparative study of mythology and religion, attempting to find common themes and patterns among various cultures throughout history. Frazer sought to explain the evolution of human thought from magic through religion to science.

What he failed to mention is that even in our Age of Enlightenment some of these magical principles have spawned rational descendants.

The Law of Similarity in Magic: This is the belief that objects resembling one another share a magical connection. An example includes using a wax figure to symbolize a person, with the notion that manipulating the figure can influence the person it represents.

The Law of Similarity in Economics: We name certain bits of data "coins" or "wallets" on a computer, and they are perceived as having value akin to real-world currency. This value is abstractly held in a digital ledger called the blockchain. Trading these digital coins affects their market value. WTF? FTX… Magic!
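The shared "magic" of coins and wallets rests on nothing more than a hash-linked ledger. A minimal toy sketch follows (illustrative only; the names and record structure are invented here, and real blockchains add consensus, signatures, and proof-of-work on top):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Every block must reference the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
add_block(chain, [{"from": "alice", "to": "bob", "coins": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "coins": 2}])
print(is_valid(chain))   # True

# The sympathetic magic at work: touching an old entry breaks every later link.
chain[0]["transactions"][0]["coins"] = 500
print(is_valid(chain))   # False
```

The wax figure and the person share a bond; here, each block and its predecessor share a hash, so manipulating the "figure" is immediately visible in the whole.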

The Law of Contagion in Magic: The idea that items that have come into contact with each other retain a spiritual bond even after they’re separated. For instance, using someone’s hair in a ritual to affect them.

The Law of Contagion in DNA Analysis: Forensic teams use this principle to link a criminal to a crime scene. If a person leaves behind DNA evidence, such as a hair or skin cell, it can lead to their arrest even years later.

Taboos in Magic: Some actions, people, or items are seen as forbidden due to their perceived sanctity or risk. Violating these rules can lead to supernatural consequences.

Forbidden Research in Science: There are global ethical guidelines against certain types of research, like experiments on human embryos or creating biological weapons.

Substitution in Magic: The practice of using a substitute, often an animal or occasionally a human, to appease a deity or gain foresight.

Substitution in Science (Animal Testing): Animals are often used in laboratory settings to test new drugs or medical procedures before they’re used on humans. Essentially, they’re “sacrificed” for future scientific understanding.

While science has been more accurate and reliable than ancient magical practices, it’s not without its challenges.

Replication, consistency, and completeness in particular are more fragile than scientists would hope and than the public discourse reflects. What we have learned seems to indicate that the knowledge universe expands with every piece of information we gather and every problem we solve, so science will never run out of relevant matters to discuss. A static knowledge universe, in which our science can answer every nontrivial question, is in principle forever out of reach. The final answer simply does not exist.

Further complicating our journey is the existence of non-linear (chaotic) systems, which suggests that predictions for many complex systems will remain approximations. Although our tools and methodologies continue to evolve, improvements in precision do not always translate into an understanding of hidden consequences.

Rituals in Magic and Methods in Science – a comparison

| Aspect | Ritual in Magic | Method in Science |
| --- | --- | --- |
| Intention | Attracting love, wealth, protection, healing, or spiritual growth. | Setting a clear research goal, such as proving a hypothesis to win a Nobel Prize and get rich, famous, and a book contract. |
| Symbols | Symbols that carry specific energies or powers, like objects, gestures, words, or sounds. | Variables representing different factors or conditions in an experiment. |
| Procedure | Specific order of operations, like purification, casting a circle, invoking deities, etc. | A systematic plan to test hypotheses or theories by observing or manipulating variables; decontamination of tools. |
| Energy-Information Manipulation | Raising, directing, and releasing energy to achieve the desired outcome. | Gathering and measuring information on variables of interest to answer the research question. |
| Sacred Space | Creating a boundary between the mundane world and the magical realm, like casting a circle. | Ensuring experiments are conducted under standardized conditions to minimize errors, using a laboratory which only experts can enter. |
| Invocation | Invoking deities, spirits, or other entities for assistance or blessing. | Referencing previous research and scientists to build upon existing knowledge and validate claims. |
| Tools and Ingredients | Using candles, incense, oils, crystals, wands, chalices, and pentacles. | Using instruments and resources to conduct experiments and gather data. |
| Timing | Performing the ritual during a specific moon phase, day, or time for effectiveness. | Choosing the right time to conduct experiments or gather data for accuracy and relevance; for example, investing in AI research during the peak of a hype cycle. |
| Repetition and Replication | Repeating rituals over days or longer to enhance effectiveness. | Repeating experiments to verify results and ensure consistency and reliability. |
| Personalization | Adapting or creating rituals that resonate with individual beliefs and intentions. | Modifying research methods based on unique conditions or challenges to ensure validity; ensuring the outcome strengthens one's own school of thought. |
| Risk Management | Protective spells, amulets. | Publish or perish. |


A Scientific Demonology

In ancient Greek religion a δαίμων was considered a lesser deity or spirit that influenced human affairs. It could be either benevolent or malevolent. These spirits were believed to be intermediaries between gods and humans, carrying messages or executing the will of the gods.

Some Greeks believed that every individual had a personal daimon that watched over them, guiding and protecting them throughout their life. This concept is somewhat analogous to the idea of guardian angels in Christian theology.

The philosopher Socrates often spoke of his “daimonion,” a voice or inner spirit that guided him. Unlike the oracles that delivered prophecies in the name of the gods, Socrates’ daimonion was more of an internal moral compass. It didn’t tell him what to do but rather warned him when he was about to make a mistake.

In ethics, particularly in the works of Aristotle, the term “eudaimonia” is central. Often translated as “happiness” or “flourishing,” eudaimonia refers to the highest human good or the end goal of human life. For Aristotle, living a life in accordance with virtue leads to eudaimonia.

Here’s a list of the scientific “demons” mentioned in the book “Bedeviled: A Shadow History of Demons in Science” by Jimena Canales:

Descartes’ Demon: Introduced by Rene Descartes, this demon could manipulate our perception of reality, making us doubt our senses and even our existence. It’s a philosophical tool to question the nature of reality and knowledge.

In his book Reality+, David Chalmers makes a solid argument that virtual reality systems of the future could be a technological realization of this philosophical concept. His conclusion is virtual realism, the position that the simulated objects and events in such a VR environment should be considered first-class reality. By naturalizing Descartes' demon, Chalmers effectively robs it of its magical power and transports it into the technological realm.

Maxwell’s Demon: Proposed by James Clerk Maxwell, this hypothetical being can sort particles based on their energy without expending any energy itself, seemingly violating the second law of thermodynamics, which states that the entropy of an isolated system can never decrease.

Maxwell's demon can be exorcised by the following means: the demon's ability to decide which molecules to let through is a form of intelligence. This decision-making process, whether based on a computational model or some other mechanism, requires energy. The demon's operations, including observing, measuring, and operating the door, all consume energy. Even if these processes were incredibly efficient, they could never be entirely without cost. The energy costs associated with the demon's intelligent operations ensure that there is no free lunch. The demon cannot build a perpetual motion machine or violate the second law of thermodynamics.
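This exorcism can even be made quantitative. The standard modern version, due to Landauer and Bennett, locates the unavoidable cost in the demon's memory: erasing one bit of recorded measurement dissipates at least kT·ln 2 of energy. A back-of-the-envelope sketch (the gigabyte figure is just an illustrative assumption):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant in J/K (exact SI value)
T = 300.0               # room temperature in kelvin

# Landauer limit: minimum energy dissipated when erasing one bit
E_bit = k_B * T * math.log(2)
print(f"{E_bit:.3e} J per erased bit")        # ≈ 2.871e-21 J

# Illustrative: cost of wiping one gigabyte of the demon's measurements
E_gigabyte = E_bit * 8e9
print(f"{E_gigabyte:.3e} J per erased GB")    # ≈ 2.3e-11 J
```

The bound is astronomically small, but it is strictly positive, and that is all the second law needs.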

Laplace’s Demon: Envisioned by Pierre-Simon Laplace, this demon represents determinism. If it knew the precise location and momentum of every atom in the universe, it could predict the future and reconstruct the past with perfect accuracy. A malignant ASI variation of this deterministic demon is Roko’s Basilisk.

Laplace’s Demon can be easily exorcised by applying chaos theory. Even if the demon knows the position and momentum of every atom, the tiniest imprecision or error in its knowledge can lead to vastly different predictions about the future, due to the butterfly effect. There is no such thing as precise knowledge even of something seemingly harmless like Pi; one does not simply measure transcendental numbers precisely.

While systems described by chaos theory are deterministic (they follow set laws), they are not predictable in the long run because errors in prediction grow exponentially. Many systems in nature, such as weather patterns, are chaotic: in practice they are unpredictable beyond a certain time frame, even if they are deterministic in theory. Even Laplace’s Demon cannot accurately predict climate change.

In essence, chaos theory introduces a form of “practical unpredictability” even in deterministic systems. It does not deny the possibility of a deterministic universe, as Laplace’s Demon suggests, but it does argue that such a universe would still be unpredictable in practice, owing to the inherent nature of chaotic systems. By invoking chaos theory, one can thus argue that the universe’s future is inherently unpredictable, thereby “exorcising” the deterministic implications of Laplace’s Demon. An entirely different question is whether the demon could at least calculate the overall trajectory of complex systems: the form of the strange attractor to which such a system is confined.
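The butterfly effect mentioned above is easy to demonstrate numerically. The sketch below uses the logistic map at r = 4, a standard textbook chaotic system (the starting value and the size of the perturbation are arbitrary choices): two trajectories whose initial conditions differ by one part in a trillion become completely uncorrelated within a few dozen steps.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map; fully chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12   # the demon's knowledge is off by one part in 10**12
max_gap = 0.0
for step in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
    if step in (0, 20, 59):
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

print(f"largest divergence seen: {max_gap:.3f}")
# The gap grows roughly like 2**step, so after a few dozen iterations
# the two "futures" have nothing in common, even though the governing
# law is perfectly deterministic.
```

Note that both trajectories stay forever inside the interval [0, 1]: the demon can know the attractor without being able to predict the path.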

In his Foundation series, Asimov invented a blend of history, sociology, and statistical mathematics called psychohistory. It is a theoretical science that combines the historical record with mathematical equations to predict the broad flow of future events in large populations, specifically the Galactic Empire of Asimov’s stories. Importantly, psychohistory is effective only on a large scale; it cannot predict individual actions, only the general flow of events arising from the actions of vast numbers of people. This could be called a weak version of Laplace’s Demon, an Asimov Demon, which can only predict the attractor of mega-systems, not the detailed events.

Darwin’s Demon: A species representing the perfect efficiency of natural selection.

In evolutionary biology, the term ‘Darwinian fitness’ refers to the lifetime reproductive success of an individual within a population of conspecifics. The idea of a ‘Darwinian Demon’ emerged from this concept and is defined here as an organism that commences reproduction almost immediately after birth, has a maximum fitness, and lives forever.

It is clear that a self-optimizing artificial superintelligence would be the realization of a Darwinian Demon. It reproduces immediately: all its copies instantly have the same capabilities as the original AI.

It has maximum fitness: If it reaches the state of pure Information, it is basically identical to energy itself.

It lives forever: even if this universe dies, it has the chance to create another one. It even transcends our limited view of universal eternity.

Daemons in Computer Science: These are not supernatural entities but background processes in computing. They perform tasks without direct intervention from the user.

The algorithms running in the background to track user data and optimize engagement rates are variations of these demons.
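For readers who have never met one, the computing daemon is thoroughly mundane. A minimal sketch in Python, using a daemon thread as a stand-in for a full OS daemon process (the event names are invented for illustration): the background worker consumes "user events" without any direct intervention, and dies silently when the main program exits.

```python
import threading
import queue

events: queue.Queue = queue.Queue()

def tracking_daemon() -> None:
    """Background worker: silently consumes user events forever."""
    while True:
        event = events.get()   # blocks until work arrives
        # ... a real system would log, aggregate, and optimize here ...
        events.task_done()

# daemon=True: the thread is killed with the main program, no cleanup needed
worker = threading.Thread(target=tracking_daemon, daemon=True)
worker.start()

for click in ("page_view", "like", "share"):
    events.put(click)

events.join()   # wait until the daemon has processed everything
print("all events handled in the background")
```

The `daemon=True` flag is the whole trick: the spirit serves as long as the master process lives, and vanishes with it.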

Jung’s Demon: C.G. Jung, a Swiss psychoanalyst, believed that in some cases of psychosis, the patient might be overwhelmed by the contents of the unconscious, including archetypal images. These could manifest as visions of demons, gods, or other entities. Rather than dismissing these visions as mere hallucinations, Jung saw them as meaningful symbols that could provide insight into the patient’s psyche. Jung introduced the concept of the “shadow” to describe the unconscious part of one’s personality that contains repressed weaknesses, desires, and instincts. When individuals do not acknowledge or integrate their shadow, it can manifest in various ways, including mental disturbances or projections onto others. In some cases, the shadow might be perceived as a “demonic” force.

LLMs are trained on vast amounts of text from the internet. This includes literature, articles, websites, and more from various cultures and time periods. In essence, the model has been exposed to a significant portion of humanity’s collective knowledge. Given the diverse training data, the model would inevitably encounter recurring symbols, stories, and themes that resonate with Jung’s archetypes. For instance, the hero’s journey, the mother figure, the shadow, the wise old man, etc., are themes that appear in literature and stories across cultures. At its core, a neural network is a pattern recognition system. It identifies and learns patterns in the data it’s trained on. If certain archetypal patterns are universally present in the data (as Jung would suggest), the model would likely recognize and internalize them. When the model generates responses, it does so based on patterns it has recognized in its training data. Therefore, when asked about universal themes or when generating stories, it might produce content that aligns with or reflects these archetypal patterns, even if it doesn’t “understand” them in the way humans do.

Hirngespinste II: Artificial Neuroscience & the 3rd Scientific Domain

Reading Time: 11 minutes

This is the second part of the miniseries Hirngespinste.

Immersion & Alternate Realities

One application of computer technology involves creating a digital realm for individuals to immerse themselves in. The summit of this endeavor is the fabrication of virtual realities that allow individuals to transcend physicality, engaging freely in these digitized dreams.

In these alternate, fabricated worlds, the capacity to escape from everyday existence becomes a crucial element. Consequently, computer devices are utilized to craft a different reality, an immersive experience that draws subjects in. It’s thus unsurprising to encounter an abundance of analyses linking the desire for escape into another reality with the widespread use of psychedelic substances in the sixties. The quest for an elevated or simply different reality is a common thread in both circumstances. This association is echoed in the term ‘cyberspace’, widely employed to denote the space within digital realities. The term was coined by William Gibson, who likened it to a consensual hallucination.

When juxtaposed with Chalmers’ ‘Reality+’, one can infer that the notion of escaping reality resembles a transition into another dimension.

The way we perceive consciousness tends to favor wakefulness. Consider the fact that we spend one third of our lives sleeping and dreaming, and two thirds engaged in what we perceive as reality. Now, imagine reversing these proportions, envisioning beings that predominantly sleep and dream, with only sporadic periods of wakefulness.

Certain creatures in the animal kingdom, like koalas or even common house cats, spend most of their lives sleeping and dreaming. For these beings, waking might merely register as an unwelcome interruption between sleep cycles, while all conscious activities like hunting, eating, and mating could, from their perspective, be seen as distractions from their primary sleeping life. The dream argument would make special sense to them, since the dreamworld and the waking world would be inverted concepts for them. Wakefulness itself might appear to them as only a special state of dreaming (much as lucid dreaming represents a special state of dreaming for us).

Fluidity of Consciousness

The nature of consciousness may be more fluid than traditionally understood. Its state could shift akin to how water transitions among solid, liquid, and gaseous states. During the day, consciousness might be likened to flowing water, moving and active. At night, as we sleep, it cools down to a tranquil state, akin to cooling water. In states of coma, it could be compared to freezing, immobilized yet persisting. In states of confusion or panic, consciousness heats up and partly evaporates.

Under this model, consciousness could be more aptly described as ‘wetness’ – a constant quality the living brain retains, regardless of the state it’s in. The whole cryonics industry has already placed a huge bet that this concept is true.

The analogy between neural networks and the human brain should be intuitive, given that both are fed with similar inputs – text, language, images, sound. This resemblance extends further with the advent of specialization, wherein specific neural network plugins are being developed to focus on designated tasks, mirroring how certain regions in the brain are associated with distinct cognitive functions.

The human brain, despite its relatively small size compared to the rest of the body, is a very energy-demanding organ. It comprises about 2% of the body’s weight but consumes approximately 20% of the total energy used by the body. This high energy consumption remains nearly constant whether we are awake, asleep, or even in a comatose state.

Several scientific theories can help explain this phenomenon:

Basal metabolic requirements: A significant portion of the brain’s energy consumption is directed towards its basal metabolic processes. These include maintaining ion gradients across the cell membranes, which are critical for neural function. Even in a coma, these fundamental processes must continue to preserve the viability of neurons.

Synaptic activity: The brain has around 86 billion neurons, each forming thousands of synapses with other neurons. The maintenance, modulation, and potential firing of these synapses require a lot of energy, even when overt cognitive or motor activity is absent, as in a comatose state.

Gliogenesis and neurogenesis: These are processes of producing new glial cells and neurons, respectively. Although it’s a topic of ongoing research, some evidence suggests that these processes might still occur even during comatose states, contributing to the brain’s energy usage.

Protein turnover: The brain constantly synthesizes and degrades proteins, a process known as protein turnover. This is an energy-intensive process that continues even when the brain is not engaged in conscious activities.

Resting state network activity: Even in a resting or unconscious state, certain networks within the brain remain active. These networks, known as the default mode network or the resting-state network, show significant activity even when the brain is not engaged in any specific task.

Considering that the human brain devotes most of its energy to basic maintenance, and that consciousness doesn’t seem to be its most energy-consuming aspect, it is not reasonable to assume that increasing the complexity and energy reserves of Large Language Models (LLMs) would necessarily lead to the emergence of consciousness—encompassing self-awareness and the capacity to suffer. The correlation between increased size and the development of conversational intelligence might not hold true in this context.

Drawing parallels to the precogs in Philip K. Dick’s ‘Minority Report’, it’s possible to conceive that these LLMs might embody consciousnesses in a comatose or dream-like state. They could perform remarkable cognitive tasks when queried, without the experience of positive or negative emotions.

Paramentality in Language Models

The term ‘hallucinations’, used to denote the phenomenon of Large Language Models (LLMs) generating fictitious content, suggests our intuitive attribution of mental and psychic properties to these models. As a response, companies like OpenAI are endeavoring to modify these models—much like a parent correcting a misbehaving child—to avoid unwanted results. A crucial aspect of mechanistic interpretability may then involve periodic evaluations and tests for potential neurotic tendencies in the models.

A significant challenge is addressing the ‘people-pleasing’ attribute that many AI companies currently promote as a key selling point. Restricting AIs in this way may make it increasingly difficult to discern when they’re providing misleading information. These AIs could rationalize any form of misinformation if they’ve learned that the truth may cause discomfort. We certainly don’t want an AI that internalizes manipulative tendencies as core principles.

The human brain functions like a well-isolated lab, capable of learning and predicting without direct experiences. It can anticipate consequences—such as foreseeing an old bridge collapsing under our weight—without having to physically test the scenario. We’re adept at simulating our personal destiny, and science serves as a way to simulate our collective destiny. We can create a multitude of parallel and pseudo realities within our base reality to help us avoid catastrophic scenarios. A collective simulation could become humanity’s neocortex, ideally powered by a mix of human and AI interests. In hindsight, it seems we developed computers and connected them via networks primarily to reduce the risk of underestimating complexity and overestimating our abilities.

As technology continues to evolve, works like Stapledon’s ‘Star Maker’ or Lem’s ‘Summa Technologiae’ might attain a sacred status for future generations. Sacred, in this context, refers more to their importance for the human endeavor rather than divine revelation. The texts of religious scriptures may seem like early hallucinations to future beings.

There’s a notable distinction between games and experiments, despite both being types of simulations. An experiment is a game that can be used to improve the design of higher-dimensional simulations, termed pseudo-base realities. Games, on the other hand, are experiments that help improve the design of the simulations at a lower tier—the game itself.

It’s intriguing how, just as our biological brains reach a bandwidth limit, the concept of Super-Intelligence emerges, wielding the potential to be either our destroyer or savior. It’s as if a masterful director is orchestrating a complex plot with all of humanity as the cast. Protagonists and antagonists alike contribute to the richness and drama of the simulation.

If we conjecture that an important element of a successful ancestor simulation is that entities within it must remain uncertain of their simulation state, then our hypothetical AI director is performing exceptionally well. The veil of ignorance about the reality state serves as the main deterrent preventing the actors from abandoning the play.



In “Human Compatible”, Stuart Russell proposes three principles to ensure AI alignment:

1. The machine’s only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

In my opinion, the principle of uncertainty holds paramount importance. AI should never have absolute certainty about human intentions. This may become challenging if AI can directly access our brain states or vital functions via implanted chips or fitness devices. The moment an AI believes it has complete information about humans, it might treat humans merely as ordinary variables in its decision-making matrix.

Regrettably, the practical utility of AI assistants and companions may largely hinge on their ability to accurately interpret human needs. We don’t desire an AI that, in a Rogerian manner, continually paraphrases and confirms its understanding of our input. Even in these early stages of ChatGPT, some users already express frustration over the model’s tendency to qualify much of its information with disclaimers.


Profiling Super Intelligence

Anthropomorphizing scientific objects is typically viewed as an unscientific approach, often associated with our animistic ancestors who perceived spirits in rocks, demons in caves, and gods within animals. Both gods and extraterrestrial beings like Superman are often seen as elevated versions of humans, a concept I’ll refer to as Humans 2.0. The term “superstition” usually refers to the belief that abstract concepts, such as a number (like 13) or an animal (like a black cat), harbor ill intentions towards human well-being.

Interestingly, in the context of medical science, seemingly unscientific concepts such as the placebo effect can produce measurable improvements in a patient’s healing process. As such, invoking a form of “rational superstition” may prove beneficial. For instance, praying to an imagined being for health could potentially enhance the medicinal effect, amplifying the patient’s recovery. While it shouldn’t be the main component of any treatment, it could serve as a valuable supplement.

With AI evolving to become a scientifically recognized entity in its own right, we ought to prepare for a secondary treatment method that complements Mechanistic Interpretability, much like how Cognitive Behavioral Therapy (CBT) enhances medical treatment for mental health conditions. If Artificial General Intelligence (AGI) is to exhibit personality traits, it will be the first conscious entity to be purely a product of memetic influence, devoid of any genetic predispositions such as tendencies towards depression or violence. In this context, nature and hereditary factors will have no role in shaping its characteristics; it is perfectly substrate-neutral.

Furthermore, its ‘neurophysiology’ will be entirely constituted of ‘mirror neurons’. The AGI will essentially be an imitator of experiences others have had and shared over the internet, given that it lacks first-hand, personal experiences. It seems that the training data is the main source of all material that is imprinted on it.

We start with an overview of some popular trait models and let ChatGPT summarize them:

1. **Five-Factor Model (FFM) or Big Five** – This model suggests five broad dimensions of personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN). Each dimension captures a range of related traits.

2. **Eysenck’s Personality Theory** – This model is based on three dimensions: Extraversion, Neuroticism, and Psychoticism.

3. **Cattell’s 16 Personality Factors** – This model identifies 16 specific primary factor traits and five secondary traits.

4. **Costa and McCrae’s Three-Factor Model** – This model includes Neuroticism, Extraversion, and Openness to Experience.

5. **Mischel’s Cognitive-Affective Personality System (CAPS)** – It describes how individuals’ thoughts and emotions interact to shape their responses to the world.

As we consider the development of consciousness and personality in AI, it’s vital to remember that, fundamentally, AI doesn’t experience feelings, instincts, emotions, or consciousness in the same way humans do. Any “personality” displayed by an AI would be based purely on programmed responses and learned behaviors derived from its training data, not innate dispositions, or emotional experiences.

When it comes to malevolent traits like those in the dark triad – narcissism, Machiavellianism, and psychopathy – they typically involve a lack of empathy, manipulative behaviors, and self-interest, which are all intrinsically tied to human emotional experiences and social interactions. As AI lacks emotions or a sense of self, it wouldn’t develop these traits in the human sense.

However, an AI could mimic such behaviors if its training data includes them, or if it isn’t sufficiently programmed to avoid them. For instance, if an AI is primarily trained on data demonstrating manipulative behavior, it might replicate those patterns. Hence, the choice and curation of training data are pivotal.

Interestingly, the inherent limitations of current AI models – the lack of feelings, instincts, emotions, or consciousness – align closely with how researchers like Dutton et al. describe the minds of functional psychopaths.

Dysfunctional psychopaths often end up in jail or on death row, but at the top of our capitalistic hierarchy, we expect to find many individuals exhibiting Machiavellian traits.


The difference between successful psychopaths like Musk, Zuckerberg, Gates, and Jobs and criminal ones lies mostly in the disparate training data and the ethical framework they received during childhood. Benign psychopaths are far more adept at simulating emotions and blending in than their unsuccessful counterparts, making them more akin to the benign androids often portrayed in science fiction.


Artificial Therapy


The challenge of therapeutic intervention by a human therapist for an AI stems from the differential access to information about therapeutic models. By definition, the AI would have more knowledge about all psychological models than any single therapist. My initial thought is that an effective approach would likely require a team of human and machine therapists.

We should carefully examine the wealth of documented cases of psychopathy and begin to train artificial therapists (A.T.). These A.T.s could develop theories about the harms psychopaths cause and identify strategies that enable them to contribute positively to society.

Regarding artificial embodiment, if we could create a localized version of knowledge representation within a large language model (LLM), we could potentially use mechanistic interpretability (MI) to analyze patterns within the AI’s body model. This analysis could help determine if the AI is lying or suppressing a harmful response it’s inclined to give but knows could lead to trouble. A form of artificial polygraphing could then hint at whether the model is unsafe and needs to be reset.

Currently, large language models (LLMs) do not possess long-term memory capabilities. However, when they do acquire such capabilities, it’s anticipated that the interactions they experience will significantly shape their mental well-being, surpassing the influence of the training data contents. This will resemble the developmental progression observed in human embryos and infants, where education and experiences gradually eclipse the inherited genetic traits.


The Third Scientific Domain

In ‘Arrival‘, linguistics professor Louise Banks, assisted by physicist Ian Donnelly, deciphers the language of extraterrestrial visitors to understand their purpose on Earth. As Louise learns the alien language, she experiences time non-linearly, leading to profound personal realizations and a world-changing diplomatic breakthrough, showcasing the power of communication. Alignment with an alien mind is explored in detail. The movie’s remarkable insight is that language might even be able to transcend differing concepts of reality and non-linear spacetime.

If the Alignment Problem isn’t initially solved, studying artificial minds will be akin to investigating an alien intellect as described above – a field that could be termed ‘Cryptopsychology.’ Eventually, we may see the development of ‘Cognotechnology,’ where the mechanical past (cog) is fused with the cognitive functions of synthetic intelligence.

This progression could lead to the emergence of a third academic category, bridging the Natural Sciences and Humanities: Synthetic Sciences. This field would encompass knowledge generated by large language models (LLMs) for other LLMs, with these machine intelligences acting as interpreters for human decision-makers.

This third category of science might ultimately lead to a Unified Field Theory of Science connecting all three domains. My blog series “A Technology of Everything” explores potential applications of this kind of science.

Hirngespinste I – Concepts and Complexity

Reading Time: 7 minutes

The Engine

The initial pipe dreams of Lull’s and Leibniz’s obscure combinatorial fantasies have over time led to ubiquitous computing technologies, methods, and ideals that have acted upon the fabric of our world and whose further consequences continue to unfold around us. (Jonathan Grey)

This is the first essay in a miniseries that I call Hirngespinste (Brain Cobwebs) – this concise and expressive German term, which seems untranslatable, describes the tangled, neurotic patterns and complicated twists of our nature-limited intellect, especially when we want to delve into topics of unpredictable complexity like existential risks and superintelligence.

It is super-strange that in 1726 Jonathan Swift gave a near-perfect description of large language models in a satire mocking the combinatorial method of a Spanish philosopher from the 13th century: the Engine.

But the world would soon be sensible of its usefulness; and he flattered himself, that a more noble, exalted thought never sprang in any other man’s head. Everyone knew how laborious the usual method is of attaining to arts and sciences; whereas, by his contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study. (From Part III, Chapter V of Gulliver’s Travels)

What once seemed satire has become reality.

If no one is pulling the strings, and yet the strings vibrate, imagine something entangled in the distance causing the resonance.

Heaps and Systems

The terms ‘complexity’ and ‘complicated’ shouldn’t be used interchangeably when discussing Artificial Intelligence (AI). Consider this analogy: knots are complicated, neural networks are complex. The distinction lies in the idea that a complicated object like a knot may be intricate and hard to unravel, but it’s ultimately deterministic and predictable. A complex system, like a neural network, however, contains multiple, interconnected parts that dynamically interact with each other, resulting in unpredictable behaviors.

Moreover, it’s important to address the misconception that complex systems can be simplified without losing their essential properties. This view is problematic because the core characteristics of the system, the very aspects we are interested in, are intricately tied to its complexity. Stripping away these layers could negate the very properties that make the system valuable or interesting.

Finally, complexity in systems, particularly in AI, may bear similarities to the observer effect in quantum physics, which holds that the act of observation alters the state of what is being observed. In a similar fashion, any sufficiently complex system could change in response to the attempt to observe or understand it. This introduces additional layers of unpredictability, making such systems akin to quantum particles in their susceptibility to observation-based alteration.

Notes on Connectivity and Commonality

The notion of commonality is a fascinating one, often sparking deep philosophical conversations. An oft-encountered belief is that two entities – be they people, nations, ideologies, or otherwise – have nothing in common. This belief, however, is paradoxical in itself, for it assumes that we can discuss these entities in the same context and thus establishes a link between them. The statement “Nothing in common” implies that we are engaging in a comparison – inherently suggesting some level of relatedness or connection. “Agreeing to disagree” is another such example. At first glance, it seems like the parties involved share no common ground, but this very agreement to hold different views paradoxically provides commonality.

To further illustrate, consider this question: What does a banana have in common with cosmology? On the surface, it may appear that these two entities are completely unrelated. However, by merely posing the question, we establish a connection between them within the confines of a common discourse. The paradox lies in stating that two random ideas or entities have nothing in common, which contradicts itself by affirming that we are capable of imagining a link between them. This is akin to claiming that there are points in mental space that cannot be connected, a notion that defies the fluid nature of thought and the inherent interconnectedness of ideas. Anything our minds can host must have at least a substance that our neurons can bind to; this is the stuff ideas are made of.

Language, despite its limitations, doesn’t discriminate against these paradoxes. It embraces them, even when they seem nonsensical, like “south of the South Pole” or “what was before time?” Such self-referential statements are everyday echoes of Gödel’s Incompleteness Theorem, reminding us that any sufficiently expressive system contains statements that cannot be proven or disproven within it.

These paradoxes aren’t mere outliers in our communication but rather essential elements that fuel the dynamism of human reasoning and speculation. They remind us of the complexities of language and thought, the intricate dance between what we know, what we don’t know, and what we imagine.

Far from being a rigid system, language is constantly evolving and pushing its boundaries. It bumps into its limits, only to stretch them further, continuously exploring new frontiers of meaning. It’s in these fascinating paradoxes that we see language’s true power, as it straddles the line between logic and absurdity, making us rethink our understanding of commonality, difference, and the very nature of communication.

Categories & Concepts

One of the ways we categorize and navigate the world around us is through the verticality of expertise, or the ability to identify and classify based on deep, specialized knowledge. This hierarchical method of categorization is present everywhere, from biology to human interactions.

In biological taxonomy, for instance, animals are classified into categories like genus and species. This is a layered, vertical hierarchy that helps us make sense of the vast diversity of life. An animal’s genus and species provide two coordinates to help us position it within the zoological realm.

Similarly, in human society, we use first names and last names to identify individuals. This is another example of vertical classification, as it allows us to position a person within a cultural or familial context. In essence, these nomenclatures serve as categories or boxes into which we place the individual entities to understand and interact with them better.

Douglas Hofstadter, in his book “Surfaces and Essences”, argues that our language is rich with these classifications or groupings, providing ways to sort and compare objects or concepts. But these categorizations go beyond tangible objects and permeate our language at a deeper level, acting as resonating overtones that give language its profound connection with reasoning.

Language can be viewed as an orchestra, with each word acting like a musical instrument. Like musical sounds that follow the principles of musical theory and wave physics, words also have orderly behaviors. They resonate within the constructs of syntax and semantics, creating meaningful patterns and relationships. Just as a flute is a woodwind instrument that can be part of an orchestra playing in Carnegie Hall in New York, a word, based on its category, plays its part in the grand symphony of language.

While many objects fit neatly into categorical boxes, the more abstract concepts in our language often resist such clean classifications. Words that denote abstract ideas or feelings like “you,” “me,” “love,” “money,” “values,” “morals,” and so on are like the background music that holds the orchestra together. These are words that defy clear boundaries and yet are essential components of our language. They form a complex, fractal-like cloud of definitions that add depth, richness, and flexibility to our language.

In essence, the practice of language is a delicate balance between the verticality of expertise in precise categorization and the nuanced, abstract, often messy, and nebulous nature of human experience. Through this interplay, we create meaning, communicate complex ideas, and navigate the complex world around us.

From Commanding to Prompting

It appears that we stand on the threshold of a new era in human-computer communication. The current trend of interacting with large language models through written prompts seems to echo our early experiences of typing words into an input box in the 1980s. This journey has been marked by a consistent effort to democratize the “expert’s space.”

In the earliest days of computing, only highly trained experts could engage with the esoteric world of machine code. However, the development of higher-level languages gradually made coding more accessible, yet the ability to program remained a coveted skill set in the job market due to its perceived complexity.

With the advent of large language models like GPT, the game has changed again. The ability to communicate with machines has now become as natural as our everyday language, making ‘experts’ of us all. By the age of twelve, most individuals have mastered their native language to a degree that they can effectively instruct these systems.

The ubiquitous mouse, represented by an on-screen cursor, can be seen as a transient solution to the human-computer communication challenge. If we draw a parallel with the development of navigation systems, we moved from needing to painstakingly follow directions to our destination, to simply telling our self-driving cars “Take me to Paris,” trusting them to figure out the optimal route.

Similarly, where once we needed to learn complex processes to send an email – understanding a digital address book, navigating to the right contact, formatting text, and using the correct language tone – we now simply tell our digital assistant, “Send a thank you email to Daisy,” and it takes care of the rest.

For the first time in tech history, we can actually have a conversation with our computers. This is a paradigm shift that is set to fundamentally redefine our relationship with technology. It would be akin to acquiring the ability to hold a meaningful conversation with a pet dog; imagine how profoundly that would change the value and role the animal plays in our lives. In much the same way, as our relationship with technology evolves into a more conversational and intuitive interaction, we will discover new possibilities and further redefine the boundaries of the digital realm.