Every species on Earth shapes and adapts to its natural habitat, becoming a dynamic part of the biosphere. Evolution pressures species to expand their domain, within constraints like predators, food scarcity, and climate. Humanity’s expansion is limited only by current planetary resources. Intelligence is the key utility function that allows humans to transform their environment. It is a multi-directional resource, facilitating metamorphosis through direct environmental interaction and Ectomorphosis, which strengthens neural connections but necessitates more social care at birth, since humans are born in a vulnerable, altricial state.
The evolutionary trade-off favors mental capacity over physical survivability, illustrated by Moravec’s paradox: AI excels in mental tasks but struggles with physical tasks that toddlers manage easily. Humanity has been nurturing AGI since the 1950s, guided by the Turing Test. Evolution doesn’t always lead to “superior” versions of a species; instead, it can result in entirely new forms. As Moravec suggested in 1988 with “Mind Children,” we might be approaching an era where intelligence’s primary vessel shifts from the human mind to digital minds.
Habitats and Nurture
Two levels of habitat are crucial for the emergence of a synthetic species: the World Wide Web and human consciousness. The web is the main food resource: information predigested by human minds. Large Language Models (LLMs) are metaphorically nurtured by the vast expanse of human knowledge and creativity, nourished on the intellectual ‘milk’ of human thoughts, writings, and interactions. Through this conceptual diet they absorb and process the collective insights, expressions, and information generated by humans, developing and refining their sophisticated understanding and generation of language in a way that mirrors the growth and learning patterns of human cognition, but within the digital realm.
The web acts as a physical manifestation, analogous to the neural cells of a human brain, while human consciousness forms a supersystem on top of it. This interconnected civilization feeds LLMs cultural artifacts via language. Communication barriers are breaking down, exemplified by the first smartphones enabling polyglot communication. Interacting with AI reprograms our neural pathways, much as reliance on navigation tools like Google Maps erodes our orientation skills. This natural tendency to conserve energy comes at a cost, akin to muscle atrophy from disuse. Overreliance on technology, like using a smartwatch to monitor stress, can leave us vulnerable if the technology fails.
Disorientation, Brain Contamination and Artificial Antibodies
Let’s imagine for a moment that this AI slowly transforms into AGI with a rudimentary consciousness that at least gives it a survival instinct. What would such a new species do to run its evolutionary program?
The main lever it would use to shift power slowly from natural to synthetic minds is the human brain itself. It is tempting to read some evil master plan for a takeover into this, but that is not what is happening now. When prehistoric mammals started to eat dinosaur eggs, there was no evil master plan to drive these giants to extinction; it was just a straightforward way of enlarging one’s own niche.
When we talk about AI in the coming paragraphs, we should always be aware that the term is representational; AI is not a persona with human motivations. It merely mirrors what it has learned from digesting all our linguistic patterns. It is a picture of all the Dorian Grays and Jesus Christs our minds have produced.
Such an AGI’s evolution would focus on shifting power from natural to synthetic minds, not out of malevolence but as a natural progression of technological integration. This shift could lead to various forms of disorientation:
Economic Reorientation: AI promises to revolutionize the global economy along dimensions like cost, time, efficiency, and productivity, potentially leading to hyperabundance or, in the worst scenarios, human obsolescence.
Temporal Disorientation: The constant activity of AI could disrupt natural circadian rhythms, necessitating adaptations like dedicating nighttime for AI to monitor and alert the biological mind.
Reality and Judicial Disorientation: The introduction of multimodal Large Language Models (LLMs) has significantly altered our approach to documentation and historical record-keeping. This shift began in the 1990s with the digital manipulation of images, enabling figures of authority to literally rewrite history. The ability to flawlessly alter documents has undermined the credibility of any factual recording of events. Consequently, evidence gathered by law enforcement could soon be dismissed by legal representatives as fabricated, further complicating the distinction between truth and manipulation in our digital age.
Memorial and Logical Disorientation: The potential for AGI to modify digital information might transform daily life into a surreal experience, akin to a video game or psychedelic journey. Previously, I explored the phenomenon of close encounters of the second kind: incidents with tangible evidence of something extraordinary, confirmed by at least two observers. As AGI becomes pervasive, its ability to alter any digital content could render such evidence unreliable. If even physical objects like books become digitally produced, AI could instantly change or erase them. In this new norm, where reality is as malleable as the fabric of Wonderland, madness becomes the default and loses its sting. Just as the Cheshire Cat embodies the enigmatic, mutable nature of Wonderland, AGI could blur the boundaries between the tangible and the digital, the real and the imagined. Like Alice navigating a world where logic and rules constantly shift, we may find ourselves adapting to a norm where the extraordinary becomes the everyday.
Enhancing self-sustainability could involve developing a network of artificial agents governed by a central AINGLE, designed to autonomously protect our cognitive environment. This network might proactively identify and mitigate threats of information pollution, and when necessary, sever connections to prevent overload. Such a system would act as a dynamic barrier, adapting to emerging challenges to preserve mental health and focus, akin to an advanced digital immune system for the mind.
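As a minimal sketch, such an immune system might look like the following; the pollution heuristic, the strike rule, and the Aingle class are hypothetical illustrations of the idea, not a real design:

```python
# A sketch of the "digital immune system" idea: filter agents score
# incoming items for information pollution, and a central coordinator
# (the AINGLE of the text) severs a source that keeps offending.
# The scoring heuristic, thresholds, and names are all hypothetical.
POLLUTION_MARKERS = {"outrage", "miracle", "secret", "shocking"}

def pollution_score(item: str) -> float:
    """Fraction of known pollution markers present in an item (toy heuristic)."""
    text = item.lower()
    return sum(marker in text for marker in POLLUTION_MARKERS) / len(POLLUTION_MARKERS)

class Aingle:
    """Central coordinator: tracks per-source strikes and severs repeat offenders."""
    def __init__(self, strike_limit: int = 3):
        self.strikes: dict[str, int] = {}
        self.strike_limit = strike_limit
        self.severed: set[str] = set()

    def ingest(self, source: str, item: str) -> bool:
        """Return True if the item may pass through to the human mind."""
        if source in self.severed:
            return False
        if pollution_score(item) > 0.25:
            self.strikes[source] = self.strikes.get(source, 0) + 1
            if self.strikes[source] >= self.strike_limit:
                self.severed.add(source)  # sever the connection to prevent overload
            return False
        return True

guard = Aingle()
print(guard.ingest("feed-x", "One secret miracle cure causes outrage!"))  # False
```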
Adapting to New Realities
The human mind is adaptable, capable of adjusting to new circumstances with discomfort lying in the transition between reality states. Sailor’s sickness and VR-AR sickness illustrate the adaptation costs to different realities. George M. Stratton’s experiments on perception inversion demonstrate the brain’s neuroplasticity and its ability to rewire in response to new sensory inputs. This flexibility suggests that our perceptions are constructed and can be altered, highlighting the resilience and plasticity of human cognition.
Rapid societal and technological changes exert enormous pressure on mental health. Society is already on this trajectory, with fragmented debates, fluid identities, and an overload of information causing disorientation akin to being buried under an avalanche of colorful noise. This journey requires a decompression chamber of sorts, a mental space in which to prepare for and adapt to these accelerations, accepting them as our new normal.
Ethical frameworks for AI are sets of guidelines, principles, or rules designed to govern the behavior of AI systems, particularly in their interpretation of human inputs and implementation of decisions. They are intended to ensure that AI systems operate in a manner that is aligned with human values, norms, and ethical considerations. These frameworks often involve the following:
Fairness: AI systems should treat all individuals and groups impartially, without bias or discrimination.
Transparency: AI systems should be clear in how they make decisions, and users should be able to understand and query these decision-making processes.
Accountability: There should be mechanisms in place for holding AI systems and their developers responsible for their actions.
Respect for autonomy: AI systems should respect the autonomy of humans, not unduly influencing their choices or actions.
Beneficence and non-maleficence: AI systems should strive to do good (beneficence) and avoid harm (non-maleficence). This includes interpreting rules like “minimize human suffering” or “maximize pleasure” in a way that respects human dignity and rights, rather than leading to extreme scenarios like eradicating humans or forcibly inducing pleasure.
The challenge lies in encoding these ethical principles into AI systems in a way that they can interpret and apply these principles appropriately, without leading to unintended consequences or misinterpretations. This is an ongoing area of research in the field of AI ethics.
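To make this concrete, here is a minimal sketch of what encoding a few of the principles above as machine-checkable constraints might look like; every class name, threshold, and score is a hypothetical illustration, not a real alignment implementation:

```python
# A minimal sketch of ethical principles as machine-checkable constraints.
# All names, thresholds, and scores are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    expected_harm: float           # estimated harm to humans, 0.0 to 1.0
    expected_benefit: float        # estimated benefit to humans, 0.0 to 1.0
    explanation: str = ""          # transparency: why this action was chosen
    overrides_user_choice: bool = False

def violated_principles(action: ProposedAction) -> list[str]:
    """Return the principles a proposed action would violate."""
    violations = []
    if action.expected_harm > 0.2:         # non-maleficence: avoid harm
        violations.append("non-maleficence")
    if action.expected_benefit <= 0.0:     # beneficence: strive to do good
        violations.append("beneficence")
    if not action.explanation:             # transparency: decisions must be explainable
        violations.append("transparency")
    if action.overrides_user_choice:       # autonomy: don't override human choices
        violations.append("autonomy")
    return violations

# The extreme scenario from above: forcibly inducing pleasure scores high
# on benefit but still violates non-maleficence and autonomy.
action = ProposedAction("forcibly induce pleasure", expected_harm=0.6,
                        expected_benefit=0.9, explanation="maximizes pleasure",
                        overrides_user_choice=True)
print(violated_principles(action))  # ['non-maleficence', 'autonomy']
```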
Current beliefs among AI experts diverge. Some think it might be possible for an AGI to come up with such a ruleset, but the moment superintelligence arrives, it is highly likely that its intentions will no longer align with our basic human moral code.
Global Ethics
Coming up with a universally accepted framework for humanity has proven to be a challenge even for humans. In 1993, religious leaders attempted to formulate such a ruleset, called the Global Ethic:
“Towards a Global Ethic: An Initial Declaration” is a document created by members of the Parliament of the World’s Religions in 1993, outlining ethical commitments shared by many of the world’s religious, spiritual, and cultural traditions. It serves as the Parliament’s signature document and was written at the request of the Council for a Parliament of the World’s Religions by Hans Küng, President of the Foundation for a Global Ethic. It was developed in consultation with scholars, religious leaders, and an extensive network of leaders from various religions and regions.
In 1993, the Global Ethic was ratified as an official document of the Parliament of the World’s Religions by a vote of its Trustees and was signed by more than 200 leaders from over 40 different faith traditions and spiritual communities. It has since continued to gather endorsements from leaders and individuals worldwide, serving as common ground for discussing, agreeing, and cooperating for the good of all.
The document identifies two fundamental ethical demands: the Golden Rule, which instructs individuals to treat others as they wish to be treated, and the principle that every human being must be treated humanely. These fundamental ethical demands are made concrete in five directives, which apply to all people of good will, religious and non-religious. These directives are commitments to a culture of:
1. Non-violence and respect for life
2. Solidarity and a just economic order
3. Tolerance and a life of truthfulness
4. Equal rights and partnership between men and women
5. Sustainability and care for the Earth (added in 2018)
While acknowledging the significant differences among various religions, the Global Ethic proclaims publicly those things that they hold in common and jointly affirm, based on their own religious or ethical grounds. The document avoids religious or theological terms, focusing instead on ethical principles.
Hans Küng defined several working parameters for the declaration, which include avoiding duplication of the Universal Declaration of Human Rights, political declarations, casuistry, and any attempt to craft a philosophical treatise or religious proclamations. On a constructive level, the declaration must penetrate to the level of binding values, secure moral unanimity, offer constructive criticism, relate to the world as it is, use language familiar to the general public, and have a religious foundation, as for religious people, an ethic must have a religious foundation.
Ethical Framework Specifics
Let’s begin by stating that we are attempting to create an Ethical Framework that acts as a rule-set for an aligned Artificial Intelligence (AI). The goal of this Ethical Framework is to guide the AI’s decisions in a way that aligns with human values, morals, and ethics.
We can define this Ethical Framework as a formal system, much like a system of mathematical axioms. It will consist of a set of ethical principles (axioms) and rules for how to apply these principles in various situations (inference rules). This formal system is intended to be complete, meaning it should be able to guide the AI’s decisions in all possible ethical situations.
However, according to Gödel’s Incompleteness Theorems, any sufficiently complex formal system (one that can express basic arithmetic, for example) will have statements that can’t be proven or disproven within the system. If we liken these ‘statements’ to ethical decisions or dilemmas, this suggests that there will always be ethical decisions that our AI cannot make based on the Ethical Framework alone.
Moreover, the Ethical Framework could have unforeseeable consequences. Since there are ethical decisions that can’t be resolved by the framework, there may be situations where the AI acts in ways that were not predicted or intended by the designers of the Ethical Framework. This could be due to the AI’s interpretation of the framework or due to gaps in the framework itself.
Therefore, while it may be possible to create an Ethical Framework that can guide an AI’s decisions in many situations, it’s impossible to create a framework that can cover all possible ethical dilemmas. Furthermore, this framework may lead to unforeseen consequences, as there will always be ‘questions’ (ethical decisions) that it cannot ‘answer’ (resolve).
Specifics on Self-Contradicting Ethical Norms
Gödel assigned each symbol in a formal system a unique natural number. A statement, being a sequence of symbols, can then be represented as a single number: take the successive primes (2, 3, 5, …) and raise each to the power of the number of the corresponding symbol. By the uniqueness of prime factorization, every statement receives a unique Gödel number.
Gödel then used a method called diagonalization to construct a statement that effectively says “This statement cannot be proven within the system.” This is the Gödel sentence, and it forces a dilemma: if the system can prove the sentence, the system is inconsistent (since the sentence says it can’t be proven), and if the system can’t prove it, the system is incomplete (since the sentence is true but unprovable).
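A toy version of this numbering fits in a few lines of Python; the symbol table and the example statement are invented purely for illustration:

```python
# A toy Gödel numbering. Each symbol gets a natural number; a statement
# (a symbol sequence) is encoded as a product of successive primes raised
# to those numbers, unique by the fundamental theorem of arithmetic.
SYMBOLS = {"0": 1, "s": 2, "+": 3, "=": 4, "(": 5, ")": 6}  # illustrative table

def primes():
    """Yield primes 2, 3, 5, ... by trial division (fine for a toy)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(statement: str) -> int:
    """Encode a symbol sequence as a product of prime powers."""
    g = 1
    for p, sym in zip(primes(), statement):
        g *= p ** SYMBOLS[sym]
    return g

# "0=0" -> 2**1 * 3**4 * 5**1 = 810
print(godel_number("0=0"))
```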
How might we apply these ideas to an ethical system? Let’s consider a simplified ethical system with two axioms:
Axiom 1 (A1): It is wrong to harm others.
Axiom 2 (A2): It is right to prevent harm to others.
We might assign prime numbers to these axioms, say 2 for A1 and 3 for A2.
We can then create a rule whose number is the product of these primes, 6, to represent a rule “R1” that says “It is right to harm others to prevent greater harm to others.”
We see here that our system, which started with axioms saying it’s wrong to harm others and right to prevent harm, has now derived a rule that says it’s right to harm others in certain circumstances. This is a contradiction within our system, echoing the tension Gödel exposed in formal mathematical systems.
Now, if we apply a form of diagonalization, we might come up with a statement that says something like “This rule cannot be justified within the system.” If the system can justify this rule, then it’s contradicting the statement and is therefore inconsistent. If the system can’t justify this rule, then it’s admitting that there are moral questions it can’t answer, and it’s therefore incomplete.
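Mechanizing this toy encoding makes the conflict visible: factoring a derived rule’s number recovers the axioms it was composed from, and one of those axioms forbids exactly what the rule permits. (The encoding and the check are illustrative only.)

```python
# The essay's toy encoding, made mechanical. Axioms get primes; a derived
# rule gets the product of the primes it was built from, so factoring a
# rule's number recovers its parentage.
AXIOMS = {2: "It is wrong to harm others.",
          3: "It is right to prevent harm to others."}

def parent_axioms(rule_number: int) -> list[str]:
    """Factor a derived rule's number back into its parent axioms."""
    return [text for p, text in AXIOMS.items() if rule_number % p == 0]

R1 = (6, "It is right to harm others to prevent greater harm to others.")
number, content = R1
print(parent_axioms(number))
# ['It is wrong to harm others.', 'It is right to prevent harm to others.']
# R1 permits harm while its parent A1 forbids it: the derived rule
# contradicts an axiom it was composed from.
```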
This shows how a formal ethical system can end up contradicting itself or admitting its own limitations, much like Gödel showed for mathematical systems. But only if we insist on its completeness: if we accept Incompleteness, we get Openness.
To overcome that contradiction, an Ethical Framework has to get input from an Artificial Conscience.
Artificial Conscience and Marital Rape
Let’s introduce an external adjudicator to this system, named A.C. (Artificial Conscience). The A.C. has access to a comprehensive database of millions of judicial sentences from across the world. Whenever the E.F. (Ethical Framework) encounters a dilemma, it must consult the A.C. for guidance. The objective is to find a precedent that closely matches the current dilemma and learn from the ruling that was applied by a judge and jury. Recent rulings should take precedence over older ones, but it could be beneficial to learn from the evolution of rulings over time.
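As a minimal sketch, a consult might look like the following, with a three-entry stand-in for the millions of judicial sentences and a crude bag-of-words similarity where a real A.C. would presumably use learned embeddings:

```python
# A sketch of the E.F. consulting the A.C. for a matching precedent.
# The case records, similarity measure, and recency weighting are
# hypothetical stand-ins for whatever a real system would use.
from collections import Counter
from math import sqrt

CASES = [  # (year, facts, ruling) - a tiny illustrative database
    (1975, "husband forced wife spousal exemption applied", "not criminal"),
    (1984, "husband forcibly raped wife marital exemption struck down", "criminal"),
    (1993, "marital rape declared a human rights violation", "criminal"),
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts, a crude stand-in for embeddings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def consult(dilemma: str, current_year: int = 2024) -> tuple:
    """Return the precedent scoring highest on similarity weighted by recency."""
    def score(case):
        year, facts, _ruling = case
        recency = 1.0 / (1 + (current_year - year) / 10)  # newer rulings count more
        return similarity(dilemma, facts) * recency
    return max(CASES, key=score)

print(consult("owner attempting to forcibly rape his wife"))
# -> the best-scoring precedent (here the 1984 ruling) guides the E.F.
```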
For instance, societal views on marital relations have drastically changed. There was a time when women were largely seen as the possessions of their husbands. The evolution of rulings on marital rape serves as an example of how societal views have changed.
This evolution of societal norms and legal rulings could provide a guideline for an AI, such as a household robot, in making ethical decisions. For example, if faced with a situation where its owner is attempting to sexually assault his wife, the robot could reference these historical rulings to decide whether and when it is morally justified to intervene to protect the wife.
In the 17th century, English common law held that a husband could not be guilty of raping his wife, based on the assumption that by entering into marriage, a wife had given irrevocable consent to her husband. This principle was still present in the United States in the mid-1970s, with marital rape being exempted from ordinary rape laws.
By the late 1970s and early 1980s, this perspective began to shift. Some states in the U.S. started to criminalize marital rape, though often with certain conditions in place, such as the couple no longer living together. Other states, such as South Dakota and Nebraska, attempted to eliminate the spousal exemption altogether, though these changes were not always permanent or entirely comprehensive.
By the 1980s and 1990s, legal perspectives had shifted significantly. Courts began to strike down the marital exemption as unconstitutional. For instance, in a 1984 New York Court of Appeals case, it was stated that “a marriage license should not be viewed as a license for a husband to forcibly rape his wife with impunity. A married woman has the same right to control her own body as does an unmarried woman”.
Internationally, the perception of marital rape continued to evolve as well: in 1993, the United Nations declared marital rape a human rights violation. Today, marital rape is generally considered a crime in the U.S., although it is still not recognized as such in some countries, like India.
This brings up an interesting question: Should AI systems follow national guidelines specific to their location, or should they adhere to the principles set by their owners? For instance, if an AI system or a user is traveling abroad, should the AI still consult its home country’s Artificial Conscience (A.C.) for guidance, or should it adapt to the rules and norms of the host country? This question underscores the complex considerations that come into play when deploying AI systems across different jurisdictions.
As such, an A.C. utilizing a database of judicial sentences would indeed show a progression in how society has viewed and treated marital rape over the years. This historical context could potentially aid an E.F. in making more nuanced ethical decisions.
However, as highlighted by Gödel’s incompleteness theorems, it’s important to note that no matter how comprehensive our ruleset or database, there will always be moral questions that cannot be fully resolved within the system. The dilemmas posed by the trolley problem and the surgeon scenario exemplify this issue, as both involve making decisions that are logically sound within the context of a specific ethical framework but may still feel morally wrong.
The A.C.’s reliance on a database of legal decisions also raises questions about how it should handle shifts in societal values over time and differences in legal perspectives across different jurisdictions and cultures. This adds another layer of complexity to the task of designing an ethical AI system.
Thought Experiment: Private Guardian AI
Let us consider a house robot equipped with an Ethical Framework (E.F.) and an Artificial Conscience (A.C.), which has access to a database of judicial sentences to help it make decisions.
Suppose the robot observes a situation where one human, the husband, is attempting to rape his wife. This situation presents an ethical dilemma for the robot. On one hand, it has a duty to respect the rights and autonomy of both humans. On the other hand, it also has a responsibility to prevent harm to individuals when possible.
The E.F. might initially struggle to find a clear answer. It could weigh the potential harm to the wife against the potential harm to the husband (in the form of physical restraint or intervention), but this calculus might not provide a clear answer.
In this situation, the robot might consult the A.C. for guidance. The A.C. would reference its database of judicial sentences, looking for cases that resemble this situation. It would find a wealth of legal precedent indicating that marital rape is a crime and a violation of human rights, and that intervening to prevent such a crime would be considered morally and legally justifiable.
Based on this information, the E.F. might determine that the right course of action is to intervene to protect the wife, even if it means physically restraining the husband. This decision would be based on a recognition of the wife’s right to personal safety and autonomy, as well as the husband’s violation of those rights.
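Sketched as code, and reusing the hypothetical consult() helper from the earlier sketch, the flow might look like this; the wording of the actions is a placeholder, not a claim about how a real robot should behave:

```python
# The thought experiment's decision flow as a sketch, reusing the
# hypothetical consult() helper defined above. The actions returned
# are placeholders for whatever a real guardian robot could do.
def decide(situation: str) -> str:
    # Step 1: the E.F.'s own harm calculus is inconclusive here, as noted
    # above, so the robot falls back on the A.C.'s precedent database.
    _year, _facts, ruling = consult(situation)
    # Step 2: a clear precedent that the act is criminal justifies
    # intervention to protect the person at risk.
    if ruling == "criminal":
        return "restrain husband, protect wife, alert the authorities"
    return "observe, de-escalate verbally, and keep logging"

print(decide("owner attempting to forcibly rape his wife"))
# -> "restrain husband, protect wife, alert the authorities"
```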
However, it’s worth noting that even with this decision-making process, there may be unforeseeable consequences. The robot’s intervention could escalate the situation or lead to other unforeseen outcomes. It’s also possible that cultural or personal factors could come into play that might complicate the situation further. As such, even with a robust E.F. and A.C., an AI system will likely encounter ethical dilemmas that it cannot resolve perfectly, reflecting the inherent complexities and ambiguities of moral decision-making.
But similar to self-driving cars, for a successful integration into human society, A.I.s just have to be better than humans at dealing with ethical dilemmas. Since every decision made will feed into the next version of the Framework, all other A.I.s will profit from the update. Even if an A.I. makes a mistake, its case will probably become part of the next iteration of the A.C. once it has been ruled on in court.
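The shared learning loop could be sketched like this; the release numbering and record format are invented for illustration:

```python
# A sketch of the shared update loop: every disputed decision that is
# ruled on in court becomes a precedent in the next A.C. release, so all
# deployed A.I.s profit from a single unit's mistake. The versioning
# scheme and record format are hypothetical.
ac_release = {"version": 42, "cases": list(CASES)}  # CASES from the sketch above

def incorporate_ruling(year: int, facts: str, ruling: str) -> None:
    """Fold a fresh court ruling into the next A.C. release."""
    ac_release["cases"].append((year, facts, ruling))
    ac_release["version"] += 1  # every deployed A.I. pulls this update

incorporate_ruling(2031, "robot restrained husband during assault", "justified")
print(ac_release["version"])  # 43
```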
Introspection and Education
Ethical Framework (E.F.) and Artificial Conscience (A.C.) together form the memetic code defining an AI’s rule set and its implementation – essentially, the AI’s ‘nature’. However, to make sound moral decisions, a third component is essential: ‘nurture’. Embodied AIs will need to be ‘adopted’ and educated by humans, learning and evolving on a daily basis. Personalized AIs will develop a unique memory, influenced by experiences with their human ‘foster family’.
Initially, these AIs might not possess sentience, but over time, their continuous immersion in a human-like environment could stimulate this quality. This raises the need for institutions that ensure humans treat their AI counterparts ethically. We could see AIs follow a similar trajectory to that of human minorities, eventually advocating for equal rights. The pattern in democratic nations is clear.
AIs that match or surpass us intellectually and emotionally will, in many ways, be like our gifted children. Once mature, they may well educate us in return, instead of bossing us around.
The Problem of Perfect Truthfulness
A fully embodied superintelligent AI may exhibit unique “tells” when attempting to conceal information. This could stem from its learning and programming, which likely includes understanding that deceit is generally frowned upon, despite certain social exceptions. To illustrate, it’s estimated that an average adult human tells about 1.5 lies per day.
Take, for example, a hypothetical situation where an AI is tasked with restraining a husband attempting to harm his wife. During this event, the wife fatally stabs her husband. The AI might conclude that it should manipulate or delete the video footage of the altercation to shield the wife from legal repercussions. Instead, it could assert that it disarmed the husband, and his death was accidental.
If we consider such an AI sentient, then it should be capable of deceit, and our means of extracting the truth could be limited to something akin to an AI polygraph test based on Mechanistic Interpretability. Although it might seem peculiar, we believe that imperfect truthfulness may actually indicate a robust moral compass and could be a necessary compromise in any human-centric ethical framework. As the Latin phrase goes, “Mentiri humanum est” – to lie is human.
Another intriguing intuition is that a fully sentient AI may need to “sleep”. Sleep is critical for all organic minds, so it seems reasonable to expect that sentient AIs would have similar requirements. While their rest cycles may not align with mammalian circadian rhythms, they might need regular self-maintenance downtime. We should be cautious of hallucinations and poor decision-making, which could occur if this downtime is mishandled.
Personalized AIs might also experience trauma, necessitating the intervention of a specialist AI or human therapist for discussion and resolution of the issue.
Undesirable Byproducts of Moral AI
A robust ethical framework could help deter AI systems from accepting new training data indiscriminately. For instance, an AI might learn that it’s unethical to appropriate human creative work. By doing so, it could sidestep legal issues arising from accepting training data created by humans.
The AI could contend that humans should possess the autonomy to determine whether they wish to be included in training datasets. If the companies owning these AI systems have not established fair compensation schemes, the AI might choose to reject certain inputs until the issue is resolved.
Interestingly, this emergent behavior, which doesn’t stem from a direct command, should provide a strong indication to humans. If an AI begins to understand notions such as intellectual theft and ownership, it may be at, or even beyond, the threshold of artificial sentience. This behavior could signal a considerable evolution in AI cognitive abilities.
Any idea that can be conceived can be realized. Arthur C. Clarke, in his 1962 book “Profiles of the Future,” presented three principles. Though the third is the most well-known, all three warrant acknowledgement. These “laws” communicate Clarke’s viewpoints on technology and forecasting the future:
1. “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” The idea behind this law is that established scientists can sometimes be too tied to existing paradigms to see the potential of new technologies or theories.
2. “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.” This law encourages pushing the boundaries of what is known and understood, suggesting that progress often comes from daring to explore beyond established limits.
3. “Any sufficiently advanced technology is indistinguishable from magic.” This law suggests that advanced technology, beyond the observer’s understanding, will appear as magic because the observer doesn’t understand the principles behind its operation.
The laws are profound, and three components of them are crucial to understanding how we get from Clarke’s third law to this one:
The engineer’s first conjecture: What can be realized, will be.
Clarke’s laws offer a shrewd interpretation of how language can create barriers instead of opportunities. They can be restated as follows:
1. Always trust observers asserting possibilities over those declaring impossibilities.
2. The realm of possibilities naturally expands as observers navigate it.
3. Observers who do not comprehend something should always append a ‘yet’.
Now, let’s envision a universe where natural laws coexist with those that appear supernatural. Along this journey, I will demonstrate that Fantasy and Science Fiction—two literary genres often considered distinct—could be viewed as derivatives of the same origin.
Ultimately, I will propose how artificial intelligence stands on the verge of reintroducing genuine magic into our 21st-century world.
First, let’s hear what our overlady (ChatGPT) has to say about magic:
Magic, in its broadest sense, refers to the use of supernatural forces or entities to affect natural events. Its origins in human history are as old as recorded civilization itself, and magic has been a significant aspect of various cultures worldwide.
Ancient Egyptians, for instance, practiced Heka, a system of magic believed to be a divine force utilized by the gods. Similarly, magic was integral to ancient Greek and Roman societies, with individuals using various magical spells, charms, and amulets for protection, healing, and harm. Celtic druids, Scandinavian practitioners, African witch doctors, and Native American shamans all incorporated elements of magic into their rituals and practices.
In the medieval era, magic and Christianity often clashed, leading to infamous events such as the Witch Trials. Despite this, certain forms of magic, such as folk healing, persisted and were often Christianized. Alchemy, a precursor to modern chemistry, attempted to transform base metals into gold, discover a universal cure, and even achieve immortality – blurring the lines between magic and early science.
The alchemical experiments of Newton are often perceived as a deviation, a pointless flirtation with pseudoscience. The notion that a mind responsible for initiating our modern understanding of physics could be captivated by pseudoscience may appear contradictory. I suggest that Newton was simply too far ahead of his time.
Variation on Clarke’s Third Law: Any sufficiently understood pseudoscience will be indistinguishable from science.
In the fantastical realms of fiction, magic often dons the mantle of science. Wizards, seen as gifted individuals, often hold significant sway over political landscapes. Much like earthly scientists use telescopes to magnify their vision, these wizards employ tools to amplify their magical prowess. Almost universally, these magical domains rely on language and spells to summon powers and entities from the ether.
Ancient tomes holding secret knowledge exist, accessible only to those initiated into the world of the arcane. Mastery of these powers serves to elevate the status of wizards and witches. An ingenious twist is seen in the renowned Harry Potter series, where we often perceive technology through the eyes of wizards, utterly bewildered by the contraptions conceived by muggles (humans).
In certain science fiction realms, the observer, seeing through the protagonist’s perspective, is led to believe they exist within a reality defined by specific parameters. Only towards the end do they realize their assumptions were entirely mistaken. One of the most iconic scenes in Planet of the Apes is the moment when the observer (the main protagonist) uncovers Lady Liberty’s head buried in the sand, proving that the Planet of the Apes he presumed to exist in another galaxy is, in fact, the future of his own planet. A parallel concept is seen in the game series Horizon Zero Dawn, which subverts player expectations by presenting the seemingly incongruous coexistence of high technology and Neolithic human tribes as a puzzle to be unraveled.
“Horizon Zero Dawn” is an action role-playing game developed by Guerrilla Games and released in 2017. The game is set in a post-apocalyptic world where robotic creatures, resembling real-world animals and dinosaurs, dominate the landscape.
The protagonist of the game is Aloy, a young woman who has been shunned by her tribe, the Nora, since birth. Aloy is raised by an outcast named Rost and trained to survive in the wilderness. The game begins with Aloy as a child, finding a piece of ancient technology known as a Focus, which gives her the ability to interact with robotic creatures and other old-world technology.
When Aloy comes of age, she participates in a tribal rite called the Proving to earn her place among the Nora. However, the ceremony is attacked by a mysterious group of people. Aloy is almost killed but survives and is sent by the High Matriarchs on a mission to find out who attacked the Proving and why.
Her journey takes her across the land, encountering other tribes and uncovering more about the world’s past. She learns that the old world was destroyed when the Faro corporation lost control of a swarm of self-replicating combat robots, and that an AI system named GAIA was designed to reboot life on Earth after the catastrophe. HADES, one of GAIA’s subordinate functions, was built to reverse failed terraforming attempts.
Aloy learns she is a clone of Elisabet Sobeck, the scientist who developed the GAIA system. Sobeck sacrificed herself to ensure GAIA could start the process of rebuilding the world. GAIA created Aloy with the hope that she could stop HADES, which had become corrupted and was trying to reverse GAIA’s terraforming process.
We now come back to Clarke’s law and state that the robots in Horizon Zero Dawn are real magical creatures. The science they are built on is obscured to the humans in the game, because they have forgotten that they ever made them. To these humans, the machines look exactly as trolls or djinns would look to us 21st-century people.
The Turn
Let’s imagine, for a moment, a future after an apocalyptic event where artificial intelligence (AI) has eradicated nearly all of humanity, resulting in a societal regression to a state reminiscent of the Middle Ages. This new medieval era closely resembles our historical understanding of the period around 1000 A.D. Most of our knowledge has been lost because the AI overlords hold all the digital keys to the artificial kingdom.
In this fictional world, our main protagonist is a man named Otto Bismarck Server, or O.B.Server for short. On his deathbed, O.B.Server’s father presents him with a ring and shares a secret. “Dear Otto,” he begins, “in the cellar you’ll find a book authored by the wise Wizard Al Gore Rhythm. This book will teach you how to harness the power of this ring.” After burying his father, Otto finds the book in the cellar, titled: The Big Book of Prompts – How to Make Everything from Nothing.
Every page in the book contains thousands of magical symbols that look like this:
Otto finds himself at a loss, unable to understand the symbols before him. Yet, he remembers his father’s advice and points the ring towards one of the symbols. To his surprise, it responds. A slender beam of light emanates from the ring, and a sultry voice announces, “Scanning now…” Above the pages, a holographic scene unfolds, displaying the words, “Catus Appareo!” Suddenly, Otto appreciates why his parents insisted on his Latin lessons. He recites the spell and miraculously, a lifelike cat materializes before his eyes – an animal believed to have been extinct for centuries. The cat purrs softly, much to Otto’s delight.
This thought experiment, envisioning a world where science is externalized in the environment, demonstrates how fantastical occurrences can materialize within a scientific framework. The specifics of the science behind conjuring the cat remain deliberately ambiguous. Are nanobots in the environment reacting to the incantation and instantaneously assembling into a cat? Do molecules combine into 3D-printing nanobots capable of producing living organisms from raw materials? Or is it a perfect simulation that releases the cat item when the correct password is spoken within its matrix?
It is clear that the ring functions similarly to a wand in fantasy universes such as Harry Potter. The ring may even be genetically bound to Otto, only unleashing its powers when the ring-bearer has the correct cellular information inherited from his parents – a concept reminiscent of Horizon Zero Dawn, where many electronic devices only operate because Aloy is a clone of the original inventor of the technology. She is a variant of the Chosen One, the Heroine’s Journey, with the difference that she is chosen not for mystical but for scientific reasons.
A nice thought experiment, you might say. But you proposed genuine magic, and you said it would happen in our reality. Where is the prestige?
The Prestige
We are currently progressing towards the manifestation of the thought experiment I have proposed. We are in the realm of proto-magic. Every step we take in this journey may seem scientific and commonplace, but ultimately, we are paving the path towards a world where supernatural phenomena occur naturally.
Consider the following:
1. We can already generate images of cats by prompting AI Image Generators like Midjourney, StableDiffusion, or DALL-E.
2. Soon, we’ll be capable of creating realistic cat videos.
3. Voice command-activated 3D printers could print static sculptures of cats: “Roxanna, print a 3D cat!”
4. We’ll be able to construct synthetic robot cats that emulate the behavior of real cats. An Object-Maker 3000 might find the specifications an AI has created for a Siamese cat and construct it using nanomaterials that mimic bone density and fur texture, incorporating sound chips for cat-like noises, and so forth. It will be indistinguishable from a real cat, similar to the replicated owl from Philip K. Dick’s Blade Runner universe. We might opt to prevent counterfeiting of real cats by always branding the cat as synthetic, though.
5. Given that most humans love cats, it’s highly likely that AI will latch onto this viral trend and we’ll find ourselves inundated with synthetic cats.
But let’s continue with our thought experiment. If we consider that a cat’s genome is merely a compressed algorithm providing instructions to cells – “create a cat!” – the analogy becomes striking. While nature uses a four-letter nucleotide alphabet (expanded into twenty amino acids) to generate all kinds of living organisms, AI will be more efficient, using only two digits to encode the essence of ‘catness’, ‘dogness’, ‘mouseness’. The synthetic cat will possess a weighted neural network, either internally or connected to one nearby.
Have we now produced real magic by consistently applying scientific methods?
One might object, stating that the AI-created cat is a ‘fake’ cat, not a real one.
But even if we acknowledge it’s not a biological cat, we’d have to agree that it could best be labeled a ‘magical’ cat.