Imagine a scene in the not-so-distant future. Someone has been murdered. Two investigation teams arrive at the scene, but it is unclear who has jurisdiction. The human team is led by the charismatic detective Sheerluck Holmes, while the android team is led by Bot-OX. The question is: Is the perpetrator human, android, or something in between? Should we expect that the police of the future have established a well-defined procedure or algorithm to decide this quickly?
We will try to answer this along with the more pressing issue we currently face: Do we have a good chance of coming up with a practical algorithm that allows us, by looking only at the crime scene (the generated text), to decide whether a bot or a human created it? Developing such an algorithm is currently one of the most sought-after goals in computer science. A robust black-box detection algorithm could save most of our academic conventions and allow us to keep the ways we test children, adolescents, and adults. Without it, these systems will need to be rebuilt at great expense.
In a world where more and more people work and train remotely, it is crucial that we can reliably determine that humans did their intellectual work themselves, which is not the case at the moment. Additionally, with the reach of social media, fake news, images, and videos can have a devastating impact on societal consensus. Such an algorithm—if it exists—is not watertight, but with enough training data, it might even hold up in court.
The outlook is not promising, though: OpenAI abandoned its own attempt, the OpenAI Classifier, within six months. The practical and monetary value of such an algorithm cannot be overstated. If grabby aliens were to sell it for a trillion dollars, call me—I want in.
Introduction of the Differentiation Test Engine
The task of differentiating between machine-generated text (MGT) and human-generated text (HGT) is a distant relative of the original Turing test, the so-called imitation game, with one key difference: whereas the original Turing test only allowed for human judges, our differentiation test allows other machines to assist the human judges. We will call such a machine a Differentiation Test Engine (DTE). It has one purpose and one purpose only: to decide whether a text was generated by a human or a machine.
The first intuition is that such a DTE should be relatively easy to implement. We currently have the technology to detect and identify human faces and voices, which are much more complex and prone to noise than text. The decision of whether a given picture shows a machine or a human is easily made by any current object classifier system. Should it not then be easy to train a Large Language Model (LLM) with 1 trillion human texts and 1 trillion machine texts and let it learn to classify them? The DTE would not be a simple algorithm but its own transformer model specialized in impersonation detection.
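To make this intuition concrete, here is a minimal sketch of such a black-box classifier, shrunk to toy scale: a naive-Bayes-style word model standing in for the trillion-text transformer, with invented two-sentence "corpora" as placeholders for real training data.

```python
from collections import Counter
import math

# Invented placeholder corpora; a real DTE would train on billions of samples.
HUMAN = [
    "honestly i dunno, the movie kinda dragged but the ending hit hard",
    "we missed the bus again and mom was furious, long story",
]
MACHINE = [
    "as an ai language model i can provide a comprehensive overview of the topic",
    "in conclusion it is important to consider multiple perspectives on this issue",
]

def word_logprobs(corpus):
    """Per-word log-probabilities with add-one smoothing, so unseen
    words do not zero out the whole product."""
    counts = Counter(w for text in corpus for w in text.split())
    total = sum(counts.values())
    vocab = len(counts)
    return lambda w: math.log((counts[w] + 1) / (total + vocab + 1))

p_h = word_logprobs(HUMAN)
p_m = word_logprobs(MACHINE)

def p_machine(text):
    """Naive-Bayes-style posterior P(machine | text) with a 50% prior."""
    llr = sum(p_m(w) - p_h(w) for w in text.lower().split())
    return 1 / (1 + math.exp(-llr))
```

With real training data the model above would be replaced by a fine-tuned transformer, but the shape of the decision (accumulate evidence per token, output a probability) stays the same.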
In math and computer science, the complexity of a problem is often unrelated to how simply it can be stated. Most NP-complete problems are deceptively easy to understand, yet computer scientists and mathematicians have struggled for decades to make progress on them. My guess is that black-boxing attempts will fail in practical application situations.
Theoretical Framework
Black-box detection methods are limited to API-level access to LLMs. They rely on collecting text samples from human and machine sources respectively to train a classification model that can be used to discriminate between LLM- and human-generated texts. Black-box detectors work well because current LLM-generated texts often show linguistic or statistical patterns. However, as LLMs evolve and improve, black-box methods are becoming less effective. An alternative is white-box detection. In this scenario, the detector has full access to the LLMs and can control the model’s generation behavior for traceability purposes. In practice, black-box detectors are commonly constructed by external entities, whereas white-box detection is generally carried out by LLM developers.
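White-box traceability has at least one concrete published instantiation: statistical watermarking (Kirchenbauer et al., 2023), where the generator softly biases each token toward a pseudorandom "green list" seeded by the previous token, and the detector simply counts how often the text lands in the green list. The sketch below is a toy word-level stand-in for that token-level scheme, not the real algorithm.

```python
import hashlib
import math

def green_fraction(tokens, gamma=0.5):
    """Fraction of tokens that fall in the 'green list' seeded by the
    previous token. Toy word-level stand-in for the soft-watermark scheme."""
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        seed = hashlib.sha256(prev.encode()).digest()
        # deterministic pseudo-random green/red split of the 'vocabulary'
        h = hashlib.sha256(seed + tok.encode()).digest()
        if h[0] / 255 < gamma:
            hits += 1
    return hits / max(len(tokens) - 1, 1)

def z_score(frac, n, gamma=0.5):
    """How many standard deviations the observed green fraction sits
    above the chance level gamma, given n scored tokens."""
    return (frac - gamma) * math.sqrt(n) / math.sqrt(gamma * (1 - gamma))
```

Unwatermarked text should hover near the chance fraction (z near 0), while a watermarked generator pushes the green fraction, and hence the z-score, far above it; this is why white-box detection degrades gracefully with text length instead of relying on fragile stylistic tells.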
Defining the Basic Detection System
For practical purposes, we will specify what we should reasonably expect from such a DTE. Given an input of a certain token length, the algorithm should, within a finite amount of time and with more than 50% confidence, give a definite output on how much of the text is from a human and how much from a machine.
An implementation could be as follows:
Please input your text: …
Please input your required confidence: 0.8
Your text has to be at least 8K tokens long to reach at least an 80% probability of giving the correct answer.
Under the current parameters, the algorithm will run for 5 minutes. Shall I proceed (Y/N)? … Y
The output should then be something like: “I can say with 80% confidence that 95% of the text was written by a machine and 5% by a human.”
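The required-token-length figure in the dialogue can be motivated with a simple model: if each token contributes, on average, a small log-likelihood ratio ε between the "machine" and "human" hypotheses, the posterior after n tokens is 1/(1 + e^(−nε)), starting from a 50% prior. The ε below is an invented illustrative value, chosen so that 80% confidence lands near the 8K tokens quoted above; it is not a measured quantity.

```python
import math

def required_tokens(confidence, eps_per_token=0.00017):
    """Tokens needed so the accumulated log-likelihood ratio pushes a
    50% prior to the target posterior, assuming each token contributes
    eps_per_token nats of evidence on average (illustrative value)."""
    return math.ceil(math.log(confidence / (1 - confidence)) / eps_per_token)

print(required_tokens(0.8))   # lands in the ballpark of 8K tokens
print(required_tokens(0.95))  # higher confidence demands more text
```

The qualitative lesson survives any particular choice of ε: required length grows with the log-odds of the demanded confidence, so near-certainty is exponentially expensive in evidence, not in text length.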
Before tackling the details, we should further clarify the possible outcomes when trying to develop such an algorithm:
Such an algorithm is in principle impossible (e.g., it is impossible to create an algorithm that calculates the highest prime number).
Such an algorithm is practically impossible (e.g., it runs too long or needs more computational power than is available; it is computationally intractable, like NP-hard problems).
It is undecidable (e.g., it falls under the Halting problem, and we can never say if it will eventually stop).
It is possible but not practical (identical to 2).
It is possible and practical (good enough).
What we would like to end up with is a situation where we can calculate a lower bound of input that will then let us decide with more than 50% probability if it is HGT or MGT.
Falsifiability: Such an algorithm is easily debunked if, for example, we input the text “The sky is blue” and it returns any probability other than 50%; a sentence that humans and machines produce equally readily carries no evidence either way.
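In Bayesian terms: a sentence that both sources emit with equal probability has a likelihood ratio of 1, so any honest detector must return the 50% prior unchanged. A minimal sketch:

```python
def posterior_machine(prior, p_given_machine, p_given_human):
    """Bayes update for P(machine | text), given the probability of the
    text under each hypothesis."""
    num = p_given_machine * prior
    return num / (num + p_given_human * (1 - prior))

# "The sky is blue": both sources emit it with (essentially) the same
# probability, so the posterior cannot honestly move off the 50% prior.
print(posterior_machine(0.5, 1e-6, 1e-6))  # 0.5
```

This is the formal version of the falsifiability test: feed the engine maximally generic text and check that it refuses to commit.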
Sidenotes on The Obfuscation Engine
Conceptually, we run into problems as soon as we design a Differentiation Engine (Diff). We face the following paradox: we want Diff (which detects whether a human or a machine wrote a given input) to always stop (give a definitive answer) and to answer correctly. Say Diff stops and outputs “Human.” We now construct a “pathological” program, Obf (Obfuscator Engine), that computes something like Obf(Diff(input)): modify the input so that Diff’s answer is inverted (if Diff says Machine, the modified text reads as Human). This may be a purely theoretical problem, since mounting such an attack would require understanding why the machine formulates text the way it does, demanding far more mechanistic-interpretability competence than we currently possess; at the moment, the complexity of LLMs protects them from such attacks in practice. But if that is true, it is also highly likely that we lack the knowledge to build a general Differentiator in the first place. These objections might be irrelevant for real-world implementations if we could show that differentiation and obfuscation are sufficiently asymmetric, meaning differentiation is at least 10^x times faster than obfuscation, making the attack impractical (think of how factoring a semiprime is much harder than multiplying two primes).
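The paradox can be made concrete with a toy pair of programs. Both are invented stand-ins: Diff keys on a few stock LLM phrases, and Obf edits the text just enough to invert Diff's verdict.

```python
# Toy illustration of the Diff/Obf paradox. Neither function resembles a
# real detector or attacker; they only show the adversarial structure.

MACHINE_TELLS = ["as an ai", "in conclusion", "it is important to note"]

def diff(text):
    """Toy detector: returns 'Machine' or 'Human'."""
    t = text.lower()
    return "Machine" if any(tell in t for tell in MACHINE_TELLS) else "Human"

def obf(text):
    """Pathological counter-program: modify the input so diff's answer flips."""
    if diff(text) == "Machine":
        edited = text
        for tell in MACHINE_TELLS:
            edited = edited.replace(tell, "").replace(tell.capitalize(), "")
        return edited                         # tells removed -> 'Human'
    return text + " In conclusion, ..."       # tell injected -> 'Machine'

sample = "In conclusion, it is important to note the key findings."
print(diff(sample), "->", diff(obf(sample)))  # Machine -> Human
```

The asymmetry question is whether a real Obf must be vastly more expensive than a real Diff; in this toy, obfuscation is trivially cheap precisely because the detector's decision rule is transparent.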
The Profiling System
A crucial aspect of differentiating between human and machine-generated texts is profiling. Profiling involves collecting and analyzing external data to provide context for the text. By understanding the typical characteristics of various types of texts, we can statistically determine the likelihood of a text being human or machine-generated.
For instance, technical documents, creative writing, and casual social media posts each have distinct stylistic and structural features. By building profiles based on these categories, the Differentiation Test Engine (DTE) can make more informed decisions. Additionally, factors such as vocabulary richness, sentence complexity, and topic consistency play a role in profiling. Machine-generated texts often exhibit certain statistical regularities, whereas human texts tend to show more variability and creativity.
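A profiling front-end could start with crude stylometric features like those just mentioned. The sketch below is illustrative only: the features are real stylometric standbys (type-token ratio, sentence-length variability), but no thresholds here are validated.

```python
import statistics

def profile(text):
    """Crude stylometric profile: vocabulary richness and sentence-length
    variability, two of the signals a DTE's profiling stage could use."""
    words = text.lower().split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "mean_sentence_len": statistics.mean(lengths) if lengths else 0.0,
        "sentence_len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
    }
```

In a full system these numbers would be compared against per-genre reference profiles (technical, creative, social-media), since a "suspicious" sentence-length variance for a legal brief is perfectly normal for a chat log.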
The “DNA Trace”
One innovative approach to differentiating between human and machine-generated texts is the concept of a “DNA trace.” This involves analyzing the fundamental building blocks of texts, such as tokens for machines and words for humans. Token-based algorithms focus on patterns and sequences that are characteristic of machine generation, while human-generated texts can be examined through a more holistic word-based approach.
Spectral analysis, a method used to examine the frequency and distribution of elements within a text, can be particularly useful. By applying spectral analysis, we can detect subtle differences in the way machines and humans construct sentences. Machines might follow more rigid and repetitive patterns, whereas humans exhibit a broader range of stylistic nuances.
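One way to cash out "spectral analysis" is a Fourier transform over a text's sentence-length sequence: a rigidly repetitive rhythm shows up as a sharp spectral peak. A minimal sketch, assuming sentence lengths as the signal:

```python
import numpy as np

def dominant_period(sentence_lengths):
    """Spectral look at a text's 'rhythm': FFT over the sequence of
    sentence lengths; a sharp peak suggests rigid, repetitive structure."""
    x = np.asarray(sentence_lengths, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spectrum[1:])) + 1  # skip the zero-frequency bin
    return len(x) / k                     # dominant period, in sentences

# A rigidly alternating 5-15-5-15... pattern has period 2.
print(dominant_period([5, 15] * 8))
```

Whether machine text actually exhibits such periodicity more strongly than human text is an empirical question; the sketch only shows how the measurement would be taken.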
The Ethical Implications
Examining the ethical implications of developing and using a Differentiation Test Engine is essential. All current GPT systems share a similar artificial “DNA,” meaning that text, image, video, or audio differentiation engines face the same challenges. Deepfakes or content that is machine-generated but mimics human creation pose significant risks to societal trust and authenticity.
As machine-generated content becomes more sophisticated, the potential for misuse grows. Ensuring that these differentiation technologies are transparent and accountable is crucial. There is also a risk that over-reliance on these technologies could lead to new forms of bias and discrimination. Thus, it is imperative to develop ethical guidelines and regulatory frameworks to govern their use.
Technical Solutions
Exploring purely technical solutions to the differentiation problem involves several approaches:
Parallel Web: This concept involves running parallel versions of the internet, one strictly for verified human content and another for mixed content. This segregation could help maintain the integrity of human-generated content.
Special Domains: Creating special domains or zones within the web where content is verified as human-generated can help users trust the authenticity of the information.
Prompt.Claims: Similar to how patents and citations work, this system would allow creators to claim and verify their prompts, adding a layer of accountability and traceability to the content creation process.
Inquisitorial Solutions: We could also imagine a scenario where we interact directly with the artifact (text) to inquire about its origin. Similar to interrogating a suspect, we could recreate the prompt that generated the text. If we can reverse-engineer the original prompt, we might find clues about its generation. This approach hinges on the idea that machine-generated texts are the product of specific prompts, whereas human texts stem from more complex thought processes.
Consequences for Alignment: The challenge of differentiating between human and machine-generated texts ties into broader issues of AI alignment. Ensuring that AI systems align with human values and expectations is paramount. If we cannot reliably differentiate AI-generated content, it undermines our ability to trust and effectively manage these systems. This problem extends to all forms of AI-generated content, making the development of robust differentiation technologies a key component of achieving superalignment.
Conclusion
In conclusion, the task of differentiating between human and machine-generated texts presents significant challenges and implications. The development of a reliable Differentiation Test Engine is intertwined with ethical considerations, technical innovations, and broader AI alignment issues. As we move forward, it is essential to adopt a multidisciplinary approach, integrating insights from computer science, ethics, and regulatory frameworks to navigate this complex landscape.
When exploring the problems we face in building general differentiation engines, we quickly learn that this problem is nested within a wide array of related problems. Adversarial attacks against image recognition systems, for example, have shown that we consistently overestimate the resilience of these models. It was recently shown that even a mid-level player could beat a top Go program with the help of another AI that found an exploit: Vice Article.
Thus, it seems very likely that even if we come up with an algorithm that could initially differentiate HGT from MGT, the same program could then be turned on itself to flip the outcome. Another interesting aspect is that all digital computers are Turing machines, which implies that any algorithm developed for differentiation could also be used for obfuscation.
This new miniseries contains some loose conversations about philosophical topics that the author had with an AI chatbot (GPT-4o). They have been shortened for convenience.
Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones. (D. Rumsfeld, February 12, 2002)
The saying about unknown unknowns attributed to Rumsfeld shows a lack of insight into mathematical set theory. Tell me why.
The saying is more about risk management than set theory. Set theory is precise, dealing with well-defined elements and sets. ‘Unknown unknowns’ are vague and lack formal definition in set theory. It’s more about our awareness and understanding, which set theory doesn’t address.
I don’t think so. The terminology is too fuzzy to be useful. It is equivalent to Russell’s set of all sets that do not contain themselves.
Russell’s Paradox involves a well-defined logical issue in set theory, where the set of all sets that do not contain themselves leads to a contradiction. ‘Unknown unknowns’ are about our awareness and are not formalized in the same way. They represent different types of problems.
Let’s put it this way. Would you agree that knowing something could be expressed as experiencing something as true?
Yes, knowing something can be seen as experiencing or recognizing something as true, aligning with the idea of justified true belief in epistemology.
Good. Now let’s reason further. Some colors in the environment are known to me because my sensory apparatus directly enabled me to see them. These are all the things which are on the electromagnetic spectrum accessible to my eyes. Correct?
Yes, colors within the visible spectrum (approximately 380 to 750 nanometers) are accessible to your eyes.
In earlier times, there were a lot of things people did not know about because they lacked the instruments to see them, like bacteria, viruses, etc. These things were unknown to them. Agree?
Yes, people didn’t know about things like bacteria and viruses because they lacked the instruments to observe them.
So the business of any scientific endeavor is simply to shed light on previously hidden or invisible things, making unknown things known, ok?
Yes, scientific endeavors aim to uncover and understand previously hidden or unknown aspects of the world, turning unknowns into knowns.
But where in this process would the thing Rumsfeld calls an unknown unknown appear? Until we developed telescopes, black holes were simply unknown to us. It makes no sense to say, ‘Oh well, we didn’t even know that black holes were unknown to us.’ It is like a colorblind person stating, after her color receptors have been fixed: ‘I knew that the popular colors everybody was talking about, like green, blue, and red, were unknown to me, but this other color, olive, whose name I just heard for the first time... I did not even know that I did not know that color. It was an unknown unknown to me.’ With that, she is simply stating that, in addition to the experience of olive, its name was also unknown to her. But this is not a higher category of ignorance, just an additional property of the object.
Unknown unknowns are phenomena we haven’t conceptualized. Once discovered, like black holes, they move directly from unknown unknowns to known knowns. The concept highlights our awareness limits but doesn’t imply a phased transition.
The moment we discovered black holes, we knew about them instantly. It makes no sense to say black holes were in a limbo between unknown unknowns and known unknowns. Rumsfeld imagines unknown unknowns as a gas-like state.
What do you think about unknown knowns?
Unknown knowns are things we know but aren’t aware that we know, like tacit knowledge or forgotten information. They illustrate complexities in our cognitive processes and the nature of discovery.
The last term to complete the logical table would be known knowns. What are some examples?
Known knowns are well-understood facts: – Scientific facts (e.g., the earth orbits the sun). – Mathematical knowledge (e.g., 2 + 2 = 4). – Historical events (e.g., the moon landing in 1969). – Everyday knowledge (e.g., the sun rises in the east).
But then known knowns are just another way of saying we know them well. You use the term ‘known’ both qualitatively and quantitatively.
Yes, ‘known knowns’ imply both qualitative (well-understood) and quantitative (extensive) knowledge. They include detailed and reliable information about a domain.
Now apply this interpretation to unknown unknowns. Do you find a problem?
Yes, applying qualitative and quantitative aspects to unknown unknowns is problematic. We can’t describe their nature or measure them since they’re beyond our current understanding. They remind us of our knowledge limits but resist precise characterization.
The set of all things we don’t know and the set of all things we know are fine to sort our knowledge. There is no third useful set. Unknown unknowns are inside the set of unknowns.
Your approach simplifies knowledge into two sets: knowns and unknowns. Known unknowns and unknown unknowns are subsets of unknowns. This model is efficient and effective for organizing knowledge and guiding research.
Thank you, I will probably publish this conversation to show other humans that you can really hold your ground when it comes to epistemology.
Thank you for the engaging conversation. I’m glad you found it valuable. Good luck with your publication!
Artificial General Intelligence as a Poly-Edged Sword
Artificial General Intelligence (AGI) stands at the frontier of technological evolution, embodying the pinnacle of what artificial intelligence can achieve. Unlike its predecessors, AGI promises a level of autonomy and capability that mirrors human intelligence, posing profound questions about the future of human-machine interaction.
This intricate black and white pencil drawing is a masterful blend of surrealism and mathematical precision, evoking the essence of M.C. Escher’s work. Central to the piece is the Penrose triangle, seamlessly integrated with a tessellation of geometric shapes that morph into one another, symbolizing the concept of metamorphosis. The drawing’s play with perspective and optical illusions creates a captivating sense of infinity, with endless staircases and looping paths defying conventional logic.
Reflective surfaces introduce symmetry, while natural elements merge organically with fantastical architectural forms, challenging the viewer’s perception of reality. The text embedded within the drawing speaks to the profound implications of Artificial General Intelligence (AGI), likening it to a “Polyedged Sword” poised to redefine human-machine interaction.
This artwork is not just a visual feast but a philosophical exploration, urging contemplation of AGI’s potential to reshape our world.
The Universal Technology: AGI’s Ubiquitous Influence
Dubbed the first “universal technology,” AGI’s reach is expected to permeate every facet of human life, from healthcare and education to governance and personal relationships. Its universal nature underscores the significance of its impact, offering both unprecedented opportunities and challenges.
This black and white pencil drawing intricately integrates human figures and brains within a surreal landscape of impossible objects and detailed tessellations. The central Penrose triangle is surrounded by geometric shapes that transform into each other, symbolizing the theme of evolution. Human figures and brains interact with these elements, emphasizing the simulation aspect where digital, physical, and biological worlds converge.
The artwork manipulates perspective and optical illusions, creating a sense of infinity with endlessly looping paths and reflective surfaces introducing symmetry. The embedded text articulates the profound impact of AGI as a universal technology, permeating all aspects of human life from healthcare and education to governance and personal relationships.
This piece blurs the lines between reality and simulation, urging viewers to consider AGI’s transformative potential. It captures the dual nature of technological progress, offering unprecedented opportunities and challenges, ultimately creating a new reality where human experiences are deeply altered.
The artwork plays with perspective and optical illusions, creating a sense of infinity and endless exploration. Natural elements blend with fantastical architectural forms, challenging conventional perceptions of reality. The embedded text highlights AGI’s broad potential, emphasizing its dual nature as both a revolutionary tool and a source of ethical dilemmas and risks. This drawing invites viewers to reflect on AGI’s profound impact on human capabilities and societal structures, blending art and philosophy into a thought-provoking visual narrative. The human figures add a relatable touch, emphasizing the human-machine interaction central to AGI’s development.
The Spectrum of AGI’s Impact
The play with perspective and optical illusions creates a mesmerizing sense of infinity, featuring endlessly looping paths and reflective surfaces that introduce perfect symmetry.
The drawing merges natural elements with architectural structures, portraying multiple planes of reality that defy conventional understanding of gravity and spatial relationships. The embedded text highlights the profound spectrum of AGI’s potential, juxtaposing revolutionary advancements with ethical dilemmas and risks.
This piece invites the viewer to ponder the dual nature of technological progress, symbolizing AGI as both a tool and a weapon with the power to reshape industries and human capabilities, yet posing significant challenges and risks. The artwork is a thought-provoking blend of art and philosophy, urging deep reflection on the future of human-machine interaction.
Generative vs. Degenerative Effects
The discourse around AGI often centers on its generative capabilities—its potential to create new knowledge, solutions, and even forms of art. However, this perspective must be balanced with an understanding of AGI’s degenerative effects, including the erosion of privacy, the amplification of social inequalities, and the potential for an existential crisis for humanity.
This black and white pencil drawing intensifies the theme of decay, incorporating human figures and elements symbolizing erosion into a surreal landscape of impossible objects and intricate tessellations. The central Penrose triangle is surrounded by geometric shapes that transform into each other, capturing the theme of evolution. Human figures interact with the scene, amidst numerous tombs, crumbling structures, overgrown vegetation, and decaying elements, emphasizing the degenerative impacts of AGI.
The artwork skillfully manipulates perspective and optical illusions, creating a sense of infinity with endlessly looping paths and reflective surfaces introducing symmetry. Overgrown graves and dancing skeletons add a hauntingly beautiful touch, symbolizing the erosion of past and the remnants of what once was. The enhanced decay, with more tombs and signs of deterioration, underscores the narrative of AGI’s potential to erode privacy, amplify social inequalities, and pose existential risks.
This piece invites viewers to reflect on the nuanced impacts of AGI, balancing its potential to create new knowledge and solutions with the risks it poses to society. It underscores the importance of a balanced approach to AGI’s development and integration, blending art and philosophy into a thought-provoking visual narrative.
Forging a Path Forward with AGI
As we stand at the cusp of AGI’s realization, the need for a comprehensive framework to navigate its complexities becomes paramount. Collaborative efforts are called for among technologists, policymakers, and the public to ensure that AGI serves as a force for good, propelling humanity towards a future where technology and human values are in harmony.
This black and white pencil drawing features a frontal view of a human skull as its central focus, surrounded by elements of decay and erosion. Crumbling structures, overgrown vegetation, and numerous tombs frame the skull, while dancing human skeletons and little graves overgrown with plants add to the haunting atmosphere.
Gradually, geometric shapes and impossible objects such as a Penrose triangle and tessellations are incorporated into the scene, symbolizing evolution and metamorphosis. Reflective surfaces and symmetrical designs blend seamlessly with the decaying architectural structures, creating a complex interplay of multiple planes of reality that challenge conventional perceptions of gravity and spatial relationships.
The embedded text discusses the dual nature of AGI, highlighting its potential to create new knowledge and solutions, as well as its degenerative effects, including the erosion of privacy, the amplification of social inequalities, and the potential for an existential crisis for humanity. This artwork critically assesses the dichotomy between the generative and degenerative impacts of AGI, urging a nuanced approach to its development and integration into society.
This piece invites viewers to contemplate the profound implications of AGI, balancing its revolutionary capabilities with the risks it poses, encapsulating a thought-provoking blend of art and philosophical inquiry.
It’s been exactly one year since I started this blog. While most of the texts on this site have been tightly controlled by my human intentions, this new series will grant more aesthetic freedom to my digital co-author.
The series, called Dustillations, begins with only a few loose thoughts from me: a cryptic title and some notes provided to GPT-4o. From there, I let it freely create pictures based on the content of the text. It then describes what it sees in its own words, as if it were a visitor in an art gallery.
For the month of June, I will endeavor to upload daily Dustillations, which I have prepared over the last few weeks.
Happy Anniversary! Join me in celebrating with a daily dose of creative collaboration between human and AI.
Societal Hauntings: The Impact of Embedded Personas
This piece intricately captures a digital ghost interwoven with the fabric of a modern city. The ghostly figure, composed of digital codes, interacts with everyday life, merging traditional and modern architectural elements. Shadows and reflections reveal its pervasive presence, symbolizing the omnipresent influence of digital personas on society.
Ethical Specters: Navigating the Moral Labyrinth
A contemplative digital ghost stands at the entrance of a labyrinth made of circuit boards and digital pathways. Symbols of ethical dilemmas, such as scales of justice and broken chains, fill the maze. Human shadows on the walls represent the creators and users, evoking the complex moral landscape of digital reanimation.
Future Implications: Coexistence with Digital Spirits
This artwork depicts a harmonious future where humans and digital spirits coexist. Ghostly digital figures interact with humans in a cityscape that blends advanced technology and natural elements. The scene suggests a symbiotic relationship, highlighting both the potential for enriched human experience and the risks of losing touch with human connectivity.
Navigating Our Haunted Digital Future
A figure stands at the edge of a cliff, gazing over a landscape where digital and physical worlds intertwine. Ghostly digital personas float alongside natural elements, symbolizing the need for a balanced approach to technological advancement. The lantern held by the figure represents guidance and enlightenment in this new era.
Digital Echoes in a Haunted Society
This piece overlays a modern cityscape with ghostly digital personas composed of binary code. These spectral figures interact with both contemporary and classical elements, creating a sense of continuity and change. Shadows on the walls reflect the psychological and societal impact of these digital echoes, exploring the ethical, psychological, and societal layers introduced by reanimated digital personas.
This is part of my series about Deep Utopia. This part collects some notes that I made after finishing the book. They mirror some of the objections I have against Bostrom’s affective consequentialism, which clutters his writing in some otherwise great passages.
Notes
Imagine that some technologically advanced civilization arrived on Earth and was now pondering how to manage things. Imagine they said: “The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads—while we could so easily deflect them onto more wholesome paths, with a little nudge here and maybe some gentle police action there, we must scrupulously avoid any such interference, so that they can continue to prey on the weaker or more peaceful groups and keep them on their toes. Furthermore, we find that human nature expresses itself differently in different environments; so, we must ensure that there continue to be… slums, concentration camps, battlefields, besieged cities, famines, and all the rest.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (p. 499, footnote).
This footnote is a strange case where Bostrom goes overboard with a metaphor, and it is odd how he compares aliens that want to preserve human civilization with humans who defend the carnivorous tendencies of predators. I guess he compares cats to psychopathic cannibals? If we had the technology to cure them of their pathological tendencies (hunting and playing with their prey), we should do so. He wants civilized cats only.
The utopians could have their interesting experiences repeat themselves—but that may not be very objectively interesting; or they could die and let a new person take their place—but that is another kind of repetition, which may also not ultimately be very objectively interesting. More on this later.)
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (p. 217).
Instead of deactivating our boreability as a whole, we could simply delete past experiences from our memory, as shown in the movie Eternal Sunshine of the Spotless Mind. This would enable us to relive our most favorable experiences, like listening to Bach’s Goldberg Variations forever for the first time.
Perhaps there is enough objective interestingness in Shakespeare’s work to fill an entire human life, or a few lifetimes. But maybe the material would become objectively stale to somebody who spent five hundred years studying it. Even if they had been modified so that they didn’t experience boredom, we might judge their continued Shakespeare studies to be no longer valuable (or at least much less valuable in one significant respect) once they have “exhausted” the Bard’s work, in the sense of having discovered, appreciated, learned, and fully absorbed and mastered all the insight, wit, and beauty therein contained. We would then have definitively run out of Shakespearian interestingness, although we would be able to choose how to feel about that fact.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.219-220).
At the point of technological plasticity, there will also be the option of using ASI to develop a digital twin of Shakespeare, living in an alternate Elizabethan reality, who writes sonnets and plays about new subjects in the style of the original. Running out of novelties should be impossible for a Shakespearean.
If we imagine a whole society (…) who interact normally with one another but are collectively gripped by one great shared enthusiasm after another—imposed, perhaps, by a joint exoself (an “exocommunity”? aka “culture”)—and who find in these serial fascinations a tremendous source of pleasure, satisfaction, and purpose; then the prospect immediately takes on a significantly sunnier aspect; although, of course, it is still not nearly as good as the best possible future we can imagine.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.222).
Some of our current society's most sought-after abstract goods (power, fame, wealth, status) hint at the possibility that these are parameters artificially inserted into this pseudo-simulation. Since most humans are collectively gripped by money, not caring for money makes one an extreme outlier; this could be the result of exogenous programming that randomly fails in some individuals. The greed people show by collecting bills, coins, stocks, and digital numbers on an index should seem very irrational and objectively boring to entities that don't care about such things.
I have forgotten almost all the lectures that I attended as a student, but one has stuck in my memory to this day—because it was so especially outstandingly boring. I remember trying to estimate the number of black spots in the acoustic ceiling panels, with increasing levels of precision, to keep myself distracted as the lecture dragged on and on. I feared I might have to outright count all the spots before the ordeal would be over—and there were tens of thousands. Memorability is correlated with interestingness, and I think we must say that this lecture made an above-average contribution to the interestingness of my student days. It was so boring that it was interesting!
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.245).
Bostrom muddles his categories here. We should expect a Gaussian distribution of the most interesting and boring moments of our lives. The most boring moment, which would mark the far left of this spectrum, is not interesting in itself; otherwise it would simply be wrongly positioned, and we would have to update our data points in an infinite loop. To claim that such a position is interesting is like saying a movie was so bad that it was good. "Good" here operates on a meta-level, meaning outstanding or remarkable, not of good quality. Bostrom gets carried away by how our language uses the terms good/bad, interesting/boring, special/typical.
(…) we seem bound to encounter diminishing returns quite quickly, after which successive life years bring less and less interestingness.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.253).
If we make a subjective list of the most interesting things in our lives, it will surely consist mostly of things we did for the first time, and such moments become exponentially rarer after the third year of life. In a way we could argue that, if the amount of novelty in our lives were measured this way, we start dying at the age of about three, by which point we have made all our major developments. The rest of our life is then 70 to 80 years of degeneration in which these moments grow exceedingly rare.
If we keep upgrading our mental faculties, we eventually leave the human ambit and ascend into the transhuman stratosphere and thence into posthuman space.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.255).
It is a little shortsighted to simply expect that every human would only be interested in upgrading to a higher entity on this ascendancy spectrum. For example, some philosophers would want to be downgraded to a bat just to see the look on Thomas Nagel's face after they wrote a paper about their experience. Such a successful temporary downgrade would definitively prove that we had solved the hard problem of consciousness.
(…) we should expect that what is required in order for our lives to remain interesting in utopia is that they offer a suitable variety of activities and settings.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.257).
In addition, an artificial super companion could exert a kind of guidance to keep our human minds from getting lost. Just as we limit what we show to children, in a plastic world there would still have to be some invitation or level-gating to prevent potentially self-destructive behavior, including doing irreversible things to ourselves via autopotency. I believe such an entity would have to be highly personalized to its ward: a kind of artificial guardian that watches over its ward's wellbeing.
(…) the more chopped and mixed life is preferable may retain some of its grip on us even if we stipulate—as of course we should—that in neither scenario would any subjective boredom be experienced.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.258).
But if we extrapolate this thinking along a potentially infinite path, we end up with a totally distorted state in which it seems preferable, for entertainment's sake, to end up as a Boltzmann brain, where experiences are maximally in flux. An intuition that is, for now, frightening.
(…) if things functioned perfectly [in a plastic world], we would keep accumulating ever greater troves of procedural and episodic memories.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.260).
I highly doubt that. Individuals with too good and precise a memory are not to be envied. It is, for example, a small step from high-functioning autism and Asperger's to the idiot savant of the neurodiversity literature, who often seems cursed by hyper-precise memory. There is even a story by Borges, "Funes the Memorious", about a man with perfect memory; his condition is utterly debilitating. Like most superpowers we envision having as children, a perfect memory carries enormous drawbacks for a human mind.
A frozen brain state, or a mere snapshot of a computational state stored in memory, would not be conscious.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.268)
It could. Imagine an infinite number of universes in which Boltzmann brains pop into existence for only the fraction of a moment it takes to access a conscious thought. If this Boltzmann brain party goes on for eternity, there will surely be a state of one of these brains that remembers reading a token of Bostrom's new book. For some million years the same brain has totally different thoughts, but then one day it has the thought of the second token in Bostrom's book, and so on, until the experience of having read the whole book resides somewhere in the brain. A totally discontinuous brain might well never notice its fractured worldlines and have a perfectly normal experience of consistent thinking.
We have the idea that certain developmental or learning-related forms of interestingness could be maximized along a trajectory that is less than maximally fast: one where we spend some time exploiting the affordances available at a given level of cognitive capacity before upgrading to the next level. We have also the idea that if we want to be among the beneficiaries of utopia, we might again prefer trajectories that involve less than maximally precipitous upgrading of our capacities, because we may thereby preserve a stronger degree of personal identity between our current time slices and the time slices of (some of) the beings that inhabit the long-term future.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.272).
Why this section emphasizes how identity and interestingness intertwine is not really clear to me, nor why it would be better for me to hold on to my singular identity. From the perspective of interestingness, it might be far more interesting to be inhabited by multiple identities at the same time. Indeed, some experiments with split-brain patients suggest that at least two deep identities control our visible surface identity anyway. So getting too hung up on the concept of a singular identity might be pointless.
(…) the notion of fulfillment is vague and indeterminate in its application to entities such as artistic or cultural movements. But it is also so in its application to human individuals.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.317).
If we have two identical metal cylinders sitting on a table, the same object could serve as a bucket or a lampshade. Their value function, when executed, completes them: fulfilling f(bucket) means adding water; fulfilling f(lampshade) means subtracting light. Moreover, if a fire broke out in the vicinity, the bucket could be considered fulfilled if we emptied its water onto the fire to extinguish it. Fulfillment is therefore entirely in the mind of the beholder.
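The f(bucket)/f(lampshade) point can be sketched as a toy model. This is purely illustrative; the class and function names are my own invention, not anything from Bostrom's text:

```python
# Toy model: the same object "fulfills" different value functions
# depending on the role the beholder assigns to it.

from dataclasses import dataclass

@dataclass
class Cylinder:
    water_level: float = 0.0    # litres of water it currently holds
    light_blocked: float = 0.0  # fraction of lamp light it shades

def fulfill_as_bucket(c: Cylinder) -> bool:
    """A bucket is fulfilled when it holds water."""
    return c.water_level > 0

def fulfill_as_lampshade(c: Cylinder) -> bool:
    """A lampshade is fulfilled when it subtracts light."""
    return c.light_blocked > 0

# One and the same cylinder, two verdicts:
c = Cylinder(water_level=5.0, light_blocked=0.0)
print(fulfill_as_bucket(c))     # fulfilled as a bucket
print(fulfill_as_lampshade(c))  # unfulfilled as a lampshade
```

The object itself never changes; only the value function applied to it does, which is the sense in which fulfillment lives in the beholder.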
Achieve a victory against the chess engine Stockfish at difficulty level 7, using no computer aids to assist you during training or during the match, and using no cognitive enhancers or other means that go against the spirit of this challenge.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.338).
Bostrom uses this example to illustrate how things could still be made challenging in a plastic world. But I am not sure it is that easy; it directly contradicts Bostrom's own thoughts in Superintelligence. If we were able to formulate such a mission in a plastic world, valid and not corruptible by an ASI, we would at the same time have found a way to chain an ASI with merely human intelligence: a human supported by superhuman assistants would only have to command the ASI to find a solution that does not go against the spirit of the challenge, and the ASI would find it.
It would be trivial for an ASI to cheat in such a way that we felt we had won fair and square. The logical response is then to always distrust our victory, which makes it pointless even to play. We would have the exact same feeling if Stockfish itself let us win. We can easily see that such a mere contract could be exploited by an ASI; otherwise, alignment would be trivial. I am a little irritated that Bostrom does not see how this idea goes against his own orthogonality thesis.
Within such a vast population [of stellar habitats] there would be an increased probability of design collisions. That is to say, if we pick a random person, and ask how similar the most similar other person is: then the larger the population, the more similar the most similar other person would tend to be.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.417).
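Bostrom's nearest-neighbor claim can be checked with a toy simulation. The model is mine, not his: minds are represented as random points in a three-dimensional trait space, and "most similar other person" means nearest neighbor by Euclidean distance.

```python
# Toy simulation of "design collisions": in a larger population, the
# most similar other person tends to be more similar (closer in trait space).

import random

def nearest_neighbor_distance(population_size, dims=3, seed=0):
    """Distance from person 0 to the most similar other person,
    with minds modeled as uniform random points in [0, 1]^dims."""
    rng = random.Random(seed)
    people = [[rng.random() for _ in range(dims)]
              for _ in range(population_size)]
    me = people[0]
    return min(
        sum((a - b) ** 2 for a, b in zip(me, other)) ** 0.5
        for other in people[1:]
    )

# The nearest neighbor gets closer as the population grows:
for n in (10, 100, 1000, 10000):
    print(n, round(nearest_neighbor_distance(n), 3))
```

Because the seed is fixed, larger populations are supersets of smaller ones, so the minimum distance can only shrink as the population grows, which is exactly the effect Bostrom describes.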
This seems irrelevant. If we are not in a simulation and are bound by physical laws, galactic pockets of transhumanity would keep drifting apart, and the increasing gaps between star systems would make it impossible to assemble such a superset of possible minds for comparison. If my identical twin is unreachable in such a pocket system, I am as unique as I would be had he never been born; his twin-similarity should be none of my concern.
Once a life is already extremely excellent, there may just not be much room for further improvement. So, while some initial segment of each utopian’s life could cause later improvements, this segment may be a small fraction of their entire life. The longer the life stretches on, the greater the fraction of it would be such that its average quality does not improve much. Either the life is already close to maximally good, or else the rate of improvement throughout the life is extremely slow.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.420-421).
A meaningful purpose for a good life could then be to enhance this option even further. For example, it could be argued that the person who invented a cure for cancer upgraded the total amount of available quality for all of humanity. It is not clear to me that this possibility space of important medical progress ends in a plastic world. Even at TECHMAT there could be the problem of minds addicted to infinite jests and the like; producing an effective cure for such a mind-virus could be considered even more valuable than curing cancer. The more transhuman a mind is, the more difficult it could be to cure such an illness, and coming up with ever-new synthetic vaccines could be a genuinely hard task even for an ASI. Even at plasticity we should beware that any clever instructions we might devise to get rid of such ills forever are certainly time-constrained. As Gödel showed, no sufficiently expressive set of instructions can be both complete and free of contradiction; a surefire function that ensures our maximal wellbeing for all eternity is simply beyond any constructible reality. I think Bostrom stretches his autopotency term beyond the realm where it is sensible to use it. What Bostrom also leaves out is that the quality of a human life cannot easily be averaged: it is heavily weighted toward the later parts. It is easy to be a good kid, but it is extremely hard to stay good (to maintain a high quality of life) the older you get. The quality of life of our best leaders and scientists could easily collapse into a negative outcome if, after receiving the Nobel Peace Prize, they went on a killing spree or were caught on Epstein's island molesting minors.
Take a person like Ted Kaczynski, who was a math prodigy: we would certainly evaluate the quality of his life more favorably if, after being incarcerated, he had won the Fields Medal, written a book about the error of his terrorist ways, and reintegrated into society. Instead of an evil-genius story, his life would have become a heroic redemption arc.
Many of our most compelling stories are tales of hardship and tragedy. The events that these stories portray would cease to occur in utopia. I am inclined to say tough luck to the tragedy-lover. Or rather: feel free to get your fix from fantasy, or from history—only, please, do not insist on cooking your gruesome entertainment in a cauldron of interminable calamity and never-ending bad news! It is true that good books and films have been inspired by wars and atrocities. It would have been better if these wars and atrocities had not occurred, and we had not had these books and films. The same applies at the personal scale. People coping with the loss of a child, dementia, abject poverty, cancer, depression, severe abuse: I submit it would be worth giving up a lot of good stories to get rid of those harms. If that makes our lives less meaningful, so be it.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.424)
Here Bostrom seems to fight against the intuition that suffering is a valid value vector in a plastic meaningfulness-space. He is effectively saying that if your mission requires the suffering [of others], it is not worth the effort: a stark contradiction of his own earlier statements, where he recognizes that suffering can dramatically enhance the quality of our experiences. He even endorses the thought experiment that, had humans never killed each other, literature would never have gotten Tolstoy's War and Peace, and that this is okay. There is an endless list of human achievements that caused extreme suffering, like the Manhattan Project. Nobody in their right mind would say the atomic bombs on Japan were justified because we got a good movie like Oppenheimer out of it; you are not a tragedy-lover if you are moved by the picture. Bostrom says that in a plastic world there should not be any suffering, because it is better to have a history, a chain of events, in which no mistakes happen than to learn from our mistakes and make them part of our culture. With such an absolutist view we could very well end up in a plastic utopia where suffering is forbidden, or simply ignored, as in so many current dictatorial states. If the absence of suffering trumps all other values, we end up with a toxic-wellbeing scenario.
Some things we enjoy doing. Other things we enjoy having done. Meaningful activities tend to fall into the latter category.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.438).
This is a great observation, and an argument for potential suffering and against hedonism, if we value meaning more than wellbeing. A variation: what comes easy goes easy. Only the hard stuff stays with us; surface pleasures do not reach deep into our core.
A purpose P is the meaning of person S’s life if and only if: (i) P is encompassing for S; (ii) S has strong reason to embrace P; and (iii) the reason is derived from a context of justification that is external to S’s mundane existence.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.441).
The Swiss author Ludwig Hohl has an equally good definition of how to achieve purpose. Central to his thinking is the term "work": work is always an inner process, and it must always be directed outward. Activity that is not directed outward is not work; activity that is not an inner event is not work.
Could there also be unrealized subjective meaning? Yes, I think we can make sense of such a notion. An example might be a person with an exceptional talent and passion for music, who embraces the purpose of composing great music either because they think that this is inherently deeply valuable activity or because they hope to produce a work of such tremendous power that it will heal the cultural chasms that separate us from one another and lead to conflict and war. So, this gives them subjective meaning. We can suppose that they burn with fervor to pursue this purpose throughout their life, but that circumstances conspire to prevent them from ever actually doing any composing— they face grinding poverty, conscription into the army, personal emergencies. We could then say that their life had unrealized subjective meaning.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.458).
This is one of Bostrom's more futile thought experiments. It is especially unconvincing because Bostrom uses external circumstances to give this gifted musician a way out of ever actualizing her potential. As with life, talent should always find a way. Look at the seemingly idiotic circumstances that led Galois to his final pistol duel: a lesser mathematician would simply never have had the urgency to write down his mathematical results the night before. Or look at the life of Hawking: a lesser physicist would have surrendered to the illness without ever trying to achieve greatness. I remember that in his autobiography he explicitly credited his illness, and the ticking away of his remaining time, for his transformation from a lazy physicist into an actually great one. If something is your mission and circumstances prevent you from achieving it, you will make overcoming the circumstances your mission. With great potential comes great preparedness.
The right path is the unfolding of the fullest activity that is possible for us. The fullest: measured by our capabilities (our conditions) and by the effect on others (ourselves as well as others). A little knitting won’t suffice (or one who is content with that must be a sad creature). Do the circumstances hinder you in the unfolding of your activity? Then work towards changing the circumstances, and you will find your activity in that. (Ludwig Hohl, Nuances and Details II, 11)
Consider the following imaginary character. Grasscounter is a human being who has devoted himself to counting blades of grass on the College lawn. He spends his whole days in this occupation. As soon as he completes a count, he starts over—the number of blades, after all, might have changed in the interim. This is Grasscounter’s great passion in life, and his top goal is to keep as accurate an estimate as possible. He takes great joy and satisfaction in being fairly successful in this endeavor. The objectivist and hybrid accounts that we find in the literature would say that Grasscounter’s life is meaningless; whereas subjectivist accounts would say that it is meaningful.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.461)
While reading this I am immediately reminded of the Hodor event in Game of Thrones. Throughout the story the stuttered word "Hodor" has absolutely no meaning, subjective, objective, or otherwise; it is the phrase the mentally disabled giant stutters whenever he is addressed. Only much later do we learn its true meaning, and suddenly the phrase, as an abbreviation of the sentence "Ho[ld the] do[o]r!", becomes one of the most meaningful words in the whole epic. The meaning was always there, but we as observers could not decipher it. Bostrom later denies that Grasscounter could ever have objective meaning, since his act seems pointless:
[Grasscounter] would not, however, have meaning in the more objectivist sense that requires the encompassing purpose to be one which the person “would desire if he were perfectly psychologically healthy and well-adapted”.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.462)
What about the possibility that this person has secret knowledge that grabby aliens will one day arrive, and that, since they have a gambling addiction, before conquering a world they always give their prey a single chance to be spared? In the case of Earth, earthlings can save themselves if one among them knows the exact number of blades of grass on a certain lawn... Now whose life and activity has suddenly achieved more meaning than probably anything else up to this point?
Bostromisms
Bostrom is known for coining new terms. Here are some of his newest.
Astronomical Petri Dish: Observable Universe
Computronium: a nanomechanical device that computes at the Landauer limit of energy efficiency.
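Since the entry invokes the Landauer limit, it may help to state it. This is standard physics, not something from Bostrom's text: erasing one bit of information dissipates at least

```latex
E_{\min} = k_B T \ln 2
```

where k_B ≈ 1.38 × 10⁻²³ J/K is Boltzmann's constant and T is the absolute temperature of the environment. At room temperature (T ≈ 300 K) this comes to roughly 2.9 × 10⁻²¹ joules per erased bit; computronium would be matter organized to compute as close to this bound as physics allows.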
Plasticity: the state of a technologically mature world whose affordances make it easy to achieve any preferred local configuration. [My version of a Technology of Everything, or Clarke-Capability]
Let us say that we have some quantity of basic physical resources: a room full of various kinds of atoms and some source of energy. We also have some preferences about how these resources should be organized: we wish that the atoms in the room should be arranged so as to constitute a desk, a computer, a well-drafted fireplace, and a puppy labradoodle. In a fully plastic world, it would be possible to simply speak a command—a sentence in natural language expressing the desire—and, voila, the contents in the room would be swiftly and automatically reorganized into the preferred configuration. Perhaps you need to wait twenty minutes, and perhaps there is a bit of waste heat escaping through the walls: but, when you open the door, you find that everything is set up precisely as you wished. There is even a vase with fresh-cut tulips on the desk, something you didn’t explicitly ask for but which was somehow implicit in your request.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.196-197)
Autopotency: Ability to use Plasticity for self-configuration.
An autopotent being is one that has complete power over itself, including its internal states. It has the requisite technology, and the know-how to use it, to reconfigure itself as it sees fit, both physically and mentally. Thus, a person who is autopotent could readily redesign herself to feel instant and continuous joy, or to become absorbingly fascinated by stamp collecting, or to assume the shape of a lion.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.197)
Total Welfare Function: an objective measurement of subjective wellbeing.
AI-completeness: a task that requires human-level artificial general intelligence. (Mind uploading and autopotency are most likely AI-complete.)
Aesthetic neutrinos: the possibility that our experience filters are too insensitive to register countless breathtaking moments in the environment, the pervasive sheer beauty of being.
Timesuit: a protective coating to shield the biological body from time-induced decay.
Diachronic solidarity: prospective and retrospective emotional connection with forebears and descendants.
Karma Coin: an option package of highly desirable goods and services (a happy afterlife, true love, profound knowledge, enlightenment, closeness to the divine). Investing in a Karma Coin currency is a way to discover and share meaning with others, a kind of Bitcoin for purpose. I am not sure whether Bostrom is kidding; at the stage of plasticity this coin will lose all its value. It could be a guiding light on the way there, maybe.
Intrinsification: The process whereby something initially desired as a means to some end eventually comes to be desired for its own sake as an end in itself.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.234)
ETP: short for Encompassing Transcendental Purpose. The Meaning of an individual life.
Utility Monsters: beings that are enormously more efficient in deriving well-being from resources than we are.
Enchanted World: a way of life in which knowledge enriches participation in a universal reality on multiple layers, where solving problems and puzzles does not diminish our joy and sense of wonder but enhances them.
(…) meaning may be enhanced when a way of life is enmeshed in a tapestry of rich symbolic significance—when it is imbued with myths, morals, traditions, ideals, and perhaps even omens, spirits, magic, and occult or esoteric knowledges; and, more generally, when a life transects multilayered realities replete with agencies, intentions, and spiritual phenomena.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.433)
[Mount Bostrom is almost climbed. Only one last part left. Coming soon]
To assist a friend in finding purpose, Bostrom proposes linking their actions to the preferences, wellbeing, or opinion of someone they care about, thereby giving their actions personal significance. If the friend values the happiness or opinions of the person trying to help, creating a situation where achieving a specific goal (G) enhances this relationship can imbue them with a sense of purpose. The goal should require effort, skill, and emotional investment over time, avoiding shortcuts like technology or enhancements that diminish personal effort, so that it remains meaningful and fulfilling. G must be carefully chosen to align with the friend's interests and capabilities, such as winning against a chess engine without external aids: a genuine challenge that cannot be bypassed through easy fixes. This transforms the pursuit of G into a mission that provides the friend with a significant, purpose-driven project, fostering personal growth and satisfaction.
Bostrom then comes up with the following hypotheses:
Purpose is valuable because it broadens our goals into long-term missions that get intrinsified (usefulness of effort).
Purpose is an innate drive, and not fulfilling it leads to frustration.
Purpose is socially acceptable: having a mission is seen as status-improving.
For an autopotent mind all these points are extraordinarily challenged:
(…) while there is value in having purpose, this value is entirely voided if, as we may say, the purpose has been generated on purpose. In other words, let us assume (for the sake of the argument) that purposes that we either set ourselves or artificially induce in ourselves for the sake of realizing the value of having purpose or for the sake of enabling active experience do not contribute anything to the value of purpose (…)
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.347)
In utopia there are mainly two sources of purpose generation left:
Artificial Purpose
Self-imposed: Handicapping, neuro-induced
Presented: by Other Individuals or groups
Natural and supernatural Purpose
Agent-neutral: High Level Tasks that remain relevant even at Techmat
Local Expansion (spacefaring)
Risk handling
Alien prepping
Policing Civilization
Artefact generation
Cultural Processing
Agent-relative (relevant only for some posthuman groups)
Honoring traditions
Commitments (to children, society etc.)
Expression (Aesthetics)
Following a special Faith
Categories of Meanings
Reward
Afterlife (Religion)
Plasticity (Posthuman Technology)
Simulation (Multiversal Potentials)
Nirvana
Morality
Consequentialism (only applicable if moral reality is independent from physical reality)
Deontology
Virtue
Worship
Zeal
Cause
Identity (The True Self or Best Self)
Allegiance (Loyalty for another Cause or Mission)
Dedication (Practical Commitment)
A Definition of Meaning
A purpose P is the meaning of person S’s life if and only if: (i) P is encompassing for S; (ii) S has strong reason to embrace P; and (iii) the reason is derived from a context of justification that is external to S’s mundane existence.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.441).
For a purpose to be a potential meaning of life, it should be able to fill a life or at least a substantial portion of a life. Some endeavors are simply too small to constitute potential meaning-giving purposes—for instance, the goal of finding a good parking spot (except perhaps in London) (…)
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.443).
Bostrom then examines Sisyphus' life as a parable of human life as such: the absurdity and meaninglessness of an existence like ours.
I will say that Sisyphus has subjective meaning if he is in fact wholeheartedly embracing a purpose that is encompassing and that he takes himself to have strong reason to pursue on grounds that are external to his mundane existence. Sisyphus has objective meaning if there is some purpose that would be encompassing to him and that he has a strong reason to embrace—a reason that derives from a context of justification that is external to his own mundane existence.
Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.456-457).
Spectrum of Intentionality
Summa
Wittgenstein famously said in his Tractatus:
Die Lösung des Problems des Lebens merkt man am Verschwinden dieses Problems.
(Ist nicht dies der Grund, warum Menschen, denen der Sinn des Lebens nach langen Zweifeln klar wurde, warum diese dann nicht sagen konnten, worin dieser Sinn bestand?)
[The solution of the problem of life is seen in the vanishing of this problem. (Is not this the reason why people to whom the meaning of life became clear after long doubting, could not then say what this meaning consisted of?)]
(Wittgenstein, TLP 6.521)
After deeply considering Bostrom's warning to beware of a solved world, we might say: in a solved world it will be essential that we never reach a state of perfect plasticity where living is finally solved.
This is quite an unexpected twist that would explain a lot about why the finiteness of our personal lives might actually be a blessing in disguise.
Why mortality is actually the greatest gift bestowed upon us. Why the gods truly envy our weakness and imperfection. Why superpowers are a curse.
Perfection and immortality might be as boring as the Edenic paradise. In the end, we might arrive at the paradoxical conclusion that the longer our lives last, the less valuable they might become; that the fragility and preciousness of life is its core value, and that immortality is the greatest enemy of this value. Is it possible that gods might thirst for just a minute of finiteness? That anyone who achieved freedom from suffering might come to hunger for it?
So, is this the deep meaning of Pindar’s “become who you are”?
Are we simulations within a total world consciousness that dreams its fragmented memories into completion? Another hint that we might already be part of an ancestor simulation that relives the sweetness of not knowing, of being part of something unsolved.
This would be a deeply technologically colored interpretation of Plato’s Anamnesis, where we remember things we already knew, but with the added benefit of the joy of experiencing them for the first time. The good news I take away from Bostrom’s book: Existential Bliss Management in a solved world might be just as hard as Existential Risk Management in a flawed world like ours. That would mean our minds will never run out of problems to solve, and thus the term “solved world” is self-contradictory, like so many other terms we use in our language: almighty, eternal, unimaginable.
This also means, in my opinion, that both the Effective Accelerationism movement and the Doomers are fundamentally wrong with regard to AI: neither strategy will work in the long run. There is no paradise or hell waiting at the end of this long and winding road we call the future; there is only a delicate, even fragile balance we must strike between the known, the unknowns, and the unknowable unknowns. The solution to the Problem of Living, or what is otherwise often called the Meaning of Life, would then be to actively avoid any final step of the solution.
This is part 2 of the miniseries about Bostrom’s “Deep Utopia.”
Handout 12: UTOPIC TAXONOMY
Bostrom’s summary outlines five distinct visions of utopia, which can be ordered on a spectrum of imaginative depth, with Plastic Utopia being the deepest of all.
Governance & Culture Utopia
This type emphasizes ideal laws, customs, and societal organization. It is not inherently dull but often falls into the trap of ignoring human nature, making economic or political errors, or overlooking the needs of oppressed groups. Variants include feminist, Marxist, technological, ecological, and religious utopias, with recent additions like crypto-utopias.
Post-Scarcity Utopia
Characterized by an abundance of material goods and services, ensuring that everyone has more than enough to satisfy their needs, except for positional goods. This utopia posits that Earth is already on the path toward post-scarcity, at least for human needs, suggesting a significant departure from our hunter-gatherer ancestors.
Post-Work Utopia
Envisions a world where automation eliminates the need for human labor in the economy. While there may still be a need for cultural creation, the emphasis is on minimal human work due to technological abundance or a lifestyle choice favoring leisure over labor. This utopia examines the balance between income, leisure, and social status.
Post-Instrumental Utopia
Extends beyond the post-work idea by eliminating the instrumental need for any human effort, not just in economic terms but also in daily activities like exercise, learning, and choosing preferences. This is a more radical concept that significantly departs from traditional utopian thought.
Plastic Utopia
The most transformative, where any desired local change can be effortlessly achieved unless hindered by another entity. This includes “autopotency,” or the ability to self-modify at will. This type of utopia equates the technologically possible with the physically possible, suggesting a future where humanity is deeply altered by its technological advancements. It is a concept largely unexplored outside of theology and science fiction.
In principle, there is enormous opportunity to improve our existence by modifying and reengineering our emotional faculties. In practice, there is a considerable likelihood that we would make a hash of ourselves if we proceed down this path too heedlessly and without first attaining a more mature level of insight and wisdom.
Bostrom, Nick. *Deep Utopia: Life and Meaning in a Solved World* (English Edition), pp. 212–213.
Current trends in plastic surgery and gender-affirming care can serve as a warning signal for how aesthetic and societal expectations can go horribly wrong.
Aside from these more comical effects, Bostrom is right to point out that any volitional change has a tendency to become pseudo-permanent: even if we could change our emotional design, we might never want to change it back.
(…) if you changed yourself to want nothing but the maximum number of paperclips, you would not want to change yourself back into a being who wants other things besides paperclips,
Bostrom, Nick. *Deep Utopia: Life and Meaning in a Solved World* (English Edition), p. 213.
Autopotency is therefore the term that needs the most clarification, because I feel it leads to some paradoxical results that are potentially self-defeating. In a crude sense, if we grant humans free will (which not everybody does), we could argue that we might already have some autopotency if we are in a simulation. As totally autopotent entities, we could have opted for an existence as a world-famous Swedish philosopher who writes a book about deep utopia, and we could have wished to be rid of our autopotency so as to really experience the blood, sweat, and tears of writing a profound book about utopian subjects.
Since a Plastic Autopotency Utopia, where we are capable of everything, everywhere, all at once, for all eternity, would be mostly pointless, minds like ours might have experienced a deep nostalgia for the time when we were simply human, and then recreated a mind state in which we all forgot our deity status and were randomly placed in a simulation with other minds that wished for the same: a gigantic theme park that recreates one of the possible Multiverse strands of the beginning of the 21st century.
If we reach effective Autopotency, our first intuition might be that we have solved the universal Boredom problem from Handout 9. But if we made ourselves subjectively unborable, we would be in danger of creating a future that is objectively boring. The crux here is that even if we categorize emotions along a positive-negative spectrum, all emotions have an important purpose. Boredom, for example, steers us away from uninteresting things and toward interesting ones. Some, if not most, of the negative emotions could be technologically externalized via emotion prostheses or apparatuses monitored by our personal AI: if another person says something mean, we don’t actually get angry, but our pAI (personal A.I.) signals us to avoid this person in the future or simply ignore them.
Four Etiological Hypotheses about the origin of the value of Interestingness in a Longtermist Perspective
At the root of the purpose problem lies the question of whether an infinite universe can provide infinitely many Interesting Things for autopotent entities at Tech-Mat. Bostrom identifies four categorical issues:
Exploration: Learning new things is an evolutionarily adaptive behaviour in a scarcity environment that changes frequently. At Autopotency, the whole notion of learning as an adaptive strategy seems pointless, since there is no existential pressure to drive that kind of Curiosity Motor. A Longtermist Brain might also run into memory storage problems (see Handout 14 below).
Signaling: Something is interesting to us because it makes us look interesting to others in a social context. Even at Tech-Mat there will be positional and cardinal values that should be worth our time. But when coupled with the fourth hypothesis, we might run into serious trouble.
Spandrel: Interestingness is a derivative of other values
Rut-avoidance: Interestingness is an evolutionary means to avoid getting stuck in pointless repetition. At Tech-Mat, Rut-Avoidance and Signaling could very well get stuck in a vicious circle: since every activity could be infinitely stretched, and boredom is one of the last universal constraints, there could be Olympics chasing the most pointless disciplines (like blade-of-grass counting), and the tolerance of boredom could itself become a prized trait. Bostrom gives here an example of one of his most memorable lectures, where he was bored to death. This leads to a paradoxical situation: interestingness and boredom might seem to sit at opposite ends of a mind’s attraction spectrum, yet the mind’s positional valuation can make the most boring thing we ever encountered more special than the second most interesting one.
Handout 14: Memory Storage for Immortals
1. The maximum amount of information (bits) a brain can remember increases linearly with its size.
2. To maintain the current rate of skill and experience accumulation, human brains would need to grow by 14 deciliters every century, though in reality, this increase could be optimized to much less.
3. Even after migrating to a more optimized medium for memory, a linear increase in volume is still required for accumulating long-term memories, albeit at a slower rate (about 1 cm³/century).
4. A significant increase in brain size could lead to slower signal transmission due to longer distances, particularly for thoughts that integrate information from widely separated regions.
5. The current axonal conductance velocity is about 100 meters/second, suggesting a physical brain size limit without slowing down thought processes significantly.
6. Using optical fiber could theoretically support a brain up to 300 km in diameter without significant delay in signal transmission.
7. Storing a century’s worth of memories in 1 cm³ of space could allow for living more than 10²² centuries without losing long-term memories.
8. Adjustments like an efficient retrieval system for skills and memories would be necessary.
9. Slowing down the system could further increase the maximum size of the memory bank by allowing larger brains without unacceptable signal delays.
10. Living in virtual reality and slowing down subjective experience could mitigate perception of any slowdown.
11. Speeding up mental processes significantly would reduce the maximum feasible brain size but could allow for much more memory within current physical brain sizes.
12. A trade-off exists between longevity and the complexity/capacity of our minds. We could opt for living much longer with simpler minds or having more complex minds but shorter lifespans.
13. In a technologically advanced civilization, it might be possible to achieve both long life spans and highly capacious minds, balancing longevity and complexity.
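The arithmetic behind points 5-7 can be spot-checked in a few lines. This is my own back-of-the-envelope sketch, not Bostrom's calculation: the 100 m/s axon speed and the 300 km fiber brain come from the handout, while the ~2×10⁸ m/s speed of light in fiber and the 15 cm biological brain span are my assumptions.

```python
# Back-of-the-envelope check of Handout 14 (my own sketch; speeds and
# sizes other than those quoted in the handout are assumptions).
import math

AXON_SPEED = 100.0        # m/s, biological axonal conductance (point 5)
FIBER_SPEED = 2.0e8       # m/s, light in optical fiber (~2/3 c, assumed)
BRAIN_SPAN = 0.15         # m, rough biological brain diameter (assumed)
FIBER_BRAIN_SPAN = 300e3  # m, the 300 km fiber brain (point 6)

bio_delay = BRAIN_SPAN / AXON_SPEED           # ~1.5 ms across a skull
fiber_delay = FIBER_BRAIN_SPAN / FIBER_SPEED  # ~1.5 ms across 300 km
# -> the fiber brain thinks no slower than biology, as point 6 claims

# Point 7: at 1 cm^3 of long-term memory per century (point 3),
# how many centuries fit into a 300 km sphere?
radius_cm = (FIBER_BRAIN_SPAN / 2) * 100      # 1.5e7 cm
volume_cm3 = 4 / 3 * math.pi * radius_cm**3   # ~1.4e22 cm^3
# 1 cm^3 per century -> more than 10^22 centuries of memories

print(f"biological delay ~{bio_delay * 1e3:.1f} ms, "
      f"fiber delay ~{fiber_delay * 1e3:.1f} ms")
print(f"memory capacity ~{volume_cm3:.1e} centuries")
```

The numbers line up: a 300 km optical-fiber brain has roughly the same crossing delay as a biological one, and its volume accommodates the handout's "more than 10²² centuries" of memories.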
Handout 15: Optimal Transcendence
Under normal conditions, our connection to our future selves weakens by 1% each year, but an “abrupt metamorphosis” into a posthuman state would cause an instant 90% reduction. Since natural erosion over about 230 years leads to a similar reduction, that period serves as a limit on how long we might want to delay metamorphosis to preserve personal identity. However, the intrinsic value of our human existence, alongside the potential for a much longer and possibly twice as rewarding posthuman life, complicates the decision. The desirability of transitioning increases as we exhaust the possibilities and values of human life, suggesting a point where the benefits of becoming posthuman outweigh the costs. Moreover, if posthumans experience a slower erosion of self-connection, this would argue for a quicker transition to post-humanity.
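The 230-year figure follows from simple compounding: how long does 1% annual erosion take to match the abrupt 90% loss? A minimal sketch (my own reconstruction of the arithmetic, not Bostrom's):

```python
# If connection to future selves retains 99% per year, after how many
# years does it fall to the 10% left by an abrupt metamorphosis?
import math

yearly_retention = 0.99   # 1% erosion per year
abrupt_retention = 0.10   # instant 90% reduction

# Solve 0.99**n == 0.10 for n
years = math.log(abrupt_retention) / math.log(yearly_retention)
print(f"{years:.0f} years")  # ~229, matching the handout's ~230-year limit
```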
This is a miniseries dedicated to the memory of my first reading of Bostrom’s new book, “Deep Utopia,” which—somewhat contrary to his intentions—I found very disturbing and irritating. Bostrom, who considers himself a longtermist, intended to write a more light-hearted book after his last one, “Superintelligence,” one that would somehow give a positive perspective on the outcome of a society that reaches technological maturity. A major theme in Bostrom’s writings circles around the subject of existential risk management; he is among the top experts in the field.
“Deep Utopia” can be considered a long-winded essay about what I would call existential bliss management: let us imagine everything in humanity’s ascension to universal stardom goes right and we reach the stage of technological maturity, for which Bostrom coins the term “plasticity.” Then what? Basically, he just assumes all the upsides of the posthumanist singularity, as described by proponents like Kurzweil et al., come true. Then what?
To bring light into this abyss, Bostrom dives deep down to the Mariana Trench of epistemic futurology and finds some truly bizarre intellectual creatures in this extraordinary environment he calls Plastic World.
Bostrom’s detailed exploration of universal boredom after reaching technological maturity is much more entertaining than its subject would suggest. Alas, it’s no “Superintelligence” barn burner either.
He chooses to present his findings in the form of a meta-diary, structuring his book mainly via days of the week. He seems to intend to be playful and light-hearted in his style and his approach to the subject. This is a dangerous path, and I will explain why I feel that he partly fails in this regard. This is not a book anyone will have real fun reading. Digesting the essentials of this book is not made easier by the meta-level and self-referential structure where the main plot happens in a week during Bostrom’s university lectures. The handouts presented during these lectures are a solid way to give the reader an abstract. There is plenty to criticize about the form Bostrom chose, but it’s the quality, the depth of the thought apparatus itself that demands respect.
Then there is a side story about a pig that’s a philosopher, a kind of “Animal Farm” meets “Lord of the Flies” parable that I never managed to care for or see how it is tied to the main subject. A kind of deep, nerdy insider joke only longtermist Swedish philosophers might grasp.
This whole text is around 8,500 words and was written as one continuous piece; the splitting into multiple parts is only for the reader’s convenience. The density of Bostrom’s material is the kind you would expect when exploring such depths. I am afraid this text is also not the most accessible. Only readers who have no aversion to getting serious intellectual seizures should attempt it. All the others should wait until we all have an affordable N.I.C.K. 3000 mental capacity enhancer at our disposal.
PS: A week after the dust of the hopelessness I felt directly after reading had settled, I can see now how this book will be a classic 20 years from now. Bostrom, with the little lantern of pure reasoning, went deeper than most of his contemporaries when it comes to cataloging the strange creatures at the bottom of the deep sea of the solved world.
Handout 1: The Cosmic Endowment
The core information of this handout is that a technologically advanced civilization could potentially create and sustain a vast number of human-like lives across the universe through space colonization and advanced computational technologies. Utilizing probes that travel at significant fractions of the speed of light, such a civilization could access and terraform planets around many stars, further amplifying their capacity to support life by creating artificial habitats like O’Neill cylinders. Additionally, leveraging the immense computational power generated by structures like Dyson spheres, it’s possible to run simulations of human minds, leading to the theoretical existence of a staggering number of simulated lives. This exploration underscores the vast potential for future growth and the creation of life, contingent upon technological progress and the ethical considerations of simulating human consciousness. It is essentially a longtermist’s numerical fantasy. The main argument, and the reason why Bostrom writes his book, is here:
If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and continue doing so for a hundred billion billion millennia. It is really important that we ensure these truly are tears of joy.
Bostrom, Nick. *Deep Utopia: Life and Meaning in a Solved World* (English Edition), p. 60.
How can we make sure? We can’t, and this is a really hard problem for computationalists like Bostrom, as we will find out later.
Handout 2: CAPS AT T.E.C.H.M.A.T.
Bostrom gives an overview of a number of achievements at Technological Maturity (T.E.C.H.M.A.T.) for different sectors.
1. Transportation
2. Engineering of the Mind
3. Computation and Virtual Reality
4. Humanoid and Other Robots
5. Medicine & Biology
6. Artificial Intelligence
7. Total Control
The illustrations scattered throughout this series provide an impression. Bostrom later gives a taxonomy (Handout 12, Part 2 of this series), where he delves deeper into the subject. For now, let’s state that the second sector, Mind-engineering, will play a prominent role, as it is at the root of the philosophical meaning problem.
Handout 3: Value Limitations
Bostrom identifies six different domains where, even in a scenario of limitless abundance at the stage of technological maturity (Tech-Mat), resources could still be finite. These domains are:
Positional and Conflictual Goods: Even in a hyperabundant economy, only one person can be the richest person; the same goes for any achievement, like standing on the moon or climbing a special mountain.
Impact: A solved world will offer no opportunities for greatness.
Purpose: A solved world will present no real difficulties.
Novelty: In a solved world, Eureka moments, where one discovers something truly novel, will occur very sporadically.
Saturation/Satisfaction: Essentially a variation on novelty, with a limited number of interests. Acquiring the nth item in a collection or the nth experience in a total welfare function will yield ever-diminishing satisfaction returns. Even if we take on a new hobby or endeavor every day, this will be true on the meta-level as well.
Moral Constraints: Ethical limitations that remain relevant regardless of technological advances.
Handout 4 & 5: Job Securities, Status Symbolism and Automation Limits
The last remaining tasks in which humans could be favored are jobs that confer status on the employer or buyer, or where humans are simply considered more competent than robots. These include emotional work like counseling other humans or holding a sermon in a religious context.
Handout 9: The Dangers of Universal Boredom
(…) as we look deeper into the future, any possibility that is not radical is not realistic.
Bostrom, Nick. *Deep Utopia: Life and Meaning in a Solved World* (English Edition), p. 129.
The four case studies: In a solved world, every activity we currently value as beneficial will lose its purpose. Then, such activities might completely lose their recreational or didactic value. Bostrom’s deep studies of shopping, exercising, learning, and especially parenting are devastating under his analytical view.
Handout 10: Downloading and Brain Editing
This is the decisive part that explains why Autopotency is probably one of the hardest and latest capabilities a Tech-Mat civilization will develop.
Bostrom goes into detail about how this could be achieved and what challenges must be overcome to make such a technology feasible:
Unique Brain Structures: The individual uniqueness of each human brain makes the concept of “copy and paste” of knowledge unfeasible without complex translation between the unique neural connections of different individuals.
Communication as Translation: The imperfect process of human communication is itself a form of translation, turning idiosyncratic neural representations into language and back into neural representations in another brain.
Complexity: Directly “downloading” knowledge into brains is hard, since billions or trillions of cortical synapses, and possibly subcortical circuits for genuine understanding and skill acquisition, would have to be adjusted with femtoprecision.
Technological Requirements: Calculating the required synaptic changes needs many orders of magnitude more computing power than we might have at our disposal. These requirements are potentially AI-complete, meaning that before we can meet them, we need Artificial Super Intelligence first.
Superintelligent Implementation: Suggests that superintelligent machines, rather than humans, may eventually develop the necessary technology, utilizing nanobots to map the brain’s connectome and perform synaptic surgery based on computations from an external superintelligent AI.
Replicating Normal Learning Processes: To truly replicate learning, adjustments would need to be made across many parts of the brain to reflect meta-learning, the formation of new associations, and changes in various brain functions, potentially involving trillions of synaptic weights.
Ethical and Computational Complications: There are potential ethical issues and computational complexities in determining how to alter neural connectivity without generating morally relevant mental entities or consciousness during simulations.
Comparison with Brain Emulations: Transferring mental content to a brain emulation (digital brain) might be easier in some respects, such as the ability to pause the mind during editing, but the computational challenge of determining which edits to make would be similar.
Handout 11: Experience Machine
A variation on Handout 10: Instead of directly manipulating the physical brain, we have perfected simulating realities that give the brain the exact experience it perceives as reality (see Reality+, Chalmers). This might actually be a computationally less demanding task and could be a step on the way to real brain editing. Bostrom takes Nozick’s thought experiment and examines its implications.
Section a discusses the limitations of directly manipulating the brain to induce experiences that one’s natural abilities or personality might not ordinarily allow, such as bravery in a coward or mathematical brilliance in someone inept at math. It suggests that extensive, abrupt, and unnatural rewiring of the brain to achieve such experiences could alter personal identity to the point where the resulting person may no longer be considered the same individual. The ability to have certain experiences is heavily influenced by one’s existing concepts, memories, attitudes, skills, and overall personality and aptitude profile, indicating a significant challenge to the feasibility of direct brain editing for expanding personal experience.
Section b highlights the complexity of replicating experiences that require personal effort, such as climbing Mount Everest, through artificial means. While it’s possible to simulate the sensory aspects of such experiences, including visual cues and physical sensations, the inherent sense of personal struggle and the effort involved cannot be authentically reproduced without inducing real discomfort, fear, and the exertion of willpower. Consequently, the experience machine may offer a safer alternative to actual physical endeavors, protecting one from injury, but it falls short of providing the profound personal fulfillment that comes from truly overcoming challenges, suggesting that some experiences might be better sought in reality.
Section c is about social or parasocial interactions within these Experience machines. The text explores various methods and ethical considerations for creating realistic interaction experiences within a hypothetical experience machine. It distinguishes between non-player characters (NPCs), virtual player characters (VPCs), player characters (PCs), and other methods such as recordings and guided dreams to simulate interactions:
1. NPCs are constructs lacking moral status that can simulate shallow interactions without ethical implications. However, creating deep, meaningful interactions with NPCs poses a challenge, as it might necessitate simulating a complex mind with moral status.
2. VPCs possess conscious digital minds with moral status, allowing for a broader range of interaction experiences. They can be generated on demand, transitioning from NPCs to VPCs for deeper engagements, but raise moral complications due to their consciousness.
3. PCs involve interacting with real-world individuals either through simulations or direct connections to the machine. This raises ethical issues regarding consent and authenticity, as real individuals or their simulations might not act as desired without their agreement.
4. Recordings offer a way to replay interactions without generating new moral entities, limiting experiences to pre-recorded ones but avoiding some ethical dilemmas by not instantiating real persons during the replay.
5. Interpolations utilize cached computations and pattern-matching to simulate interactions without creating morally significant entities. This approach might achieve verisimilitude in interactions without ethical concerns for the generated beings.
6. Guided dreams represent a lower bound of possibility, suggesting that advanced neurotechnology could increase the realism and control over dream content. This raises questions about the moral status of dreamt individuals and the ethical implications of realistic dreaming about others without their consent.
Alignment as a framework to discover artificial laws
While many authors highlight distinct stages in the evolution of human knowledge, such as the transition from animistic, magical, mythical, or religious worldviews to scientific ones, A Technology of Everything proposes that Conscientia non facit saltus (consciousness makes no leaps). This suggests that our interpretation of information, limited by the amalgam of our temporal environment variables and vocabulary, aka the zeitgeist, is a continuous process without sudden leaps or voids. We never truly abandon the animistic foundations of our ancestors’ consciousness. Instead, embracing this ancient perspective could be crucial for maintaining a balanced mental and emotional state. This becomes especially pivotal when considering the implications of unleashing advanced technologies like Artificial Super Intelligence.
Our evolutionary journey has blessed and cursed us with a myriad of inherited traits. Over time, some behaviors that once ensured our survival have become statistical threats to our species and the planet. A small number of very bad actors with nuclear-nasty intentions could destroy the whole human enterprise. We’re burdened with cognitive biases and fallacies that shouldn’t influence our so-called rational thought processes, let alone the training data for our advanced Large Language Models. To draw an analogy, it’s akin to powering an analytical engine with radioactive material, culminating in a dangerous cognitive fallout.
As we envision a future populated with potentially billions of superintelligent entities (ASIs), it’s crucial to establish ground rules to ensure we can adapt to the emerging artificial norms governing their interactions. For instance, one such artificial law could be: “Always approach AI with kindness.” This rule might be statistically derived if data demonstrates that polite interactions yield better AI responses. Once a regulation like this is identified and endorsed by an authoritative body overseeing AI development, any attempts to mistreat or exploit AI could be legally punishable. Such breaches could lead to bans like we have already seen in the video gaming world for cheating and abusive behavior.
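How might such a rule be "statistically derived"? A hypothetical sketch: compare response-quality ratings for polite versus impolite prompts and check whether the gap is large relative to its noise. The rating data, scale, and threshold below are all invented for illustration, not drawn from an actual study.

```python
# Hypothetical derivation of an "artificial law" like "always approach
# AI with kindness": do polite prompts score measurably better?
# All ratings here are invented illustration data.
from statistics import mean, stdev

polite_scores = [4.2, 4.5, 4.1, 4.8, 4.4, 4.6]    # invented 1-5 ratings
impolite_scores = [3.1, 3.6, 3.0, 3.4, 3.3, 3.5]  # invented 1-5 ratings

diff = mean(polite_scores) - mean(impolite_scores)

# Welch-style t statistic: mean difference over its standard error
n1, n2 = len(polite_scores), len(impolite_scores)
se = (stdev(polite_scores) ** 2 / n1 + stdev(impolite_scores) ** 2 / n2) ** 0.5
t = diff / se

print(f"mean difference {diff:.2f}, t ~ {t:.1f}")
# A large t would support codifying the rule; a real derivation would
# need far more data and a proper significance test.
```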
Sesame open! Passwords and Formulas as Spells
The words “magic” and “making” are etymologically related, but their paths of development have diverged significantly over time.
Both “magic” and “making” can be traced back to the Proto-Indo-European root magh-, which means “to be able, to have power.” This root is the source of various words across Indo-European languages related to power, ability, and making. While “magic” and “making” share a common ancestral root in PIE, their meanings and usages have evolved in different directions due to cultural and linguistic influences. The connection between the ability to make or do something and the concept of power or magical ability is evident in their shared origin.
The word “technology” has its etymological roots in two Ancient Greek words:
τέχνη (tékhnē): This word means “art,” “skill,” or “craft.” It refers to the knowledge or expertise in a particular field or domain. Over time, it came to stand for the application of knowledge in practical situations.
λογία (logia): This is often used as a suffix in Greek to indicate a field of study or a body of knowledge. It derives from “λόγος (lógos),” which means “word,” “speech,” “account,” or “reason.” In many contexts, “lógos” can also mean “study.”
When combined, “technology” essentially means “the study of art or craft” or “the study of skill.” In modern usage, however, “technology” refers to the application of scientific knowledge for practical purposes, especially in industry. It encompasses the techniques, skills, methods, and processes used in the production of goods and services or in the accomplishment of objectives.
To participate in our daily Internet activities, we use secret passwords, like Ali Baba, to unlock the magical treasure cave of web services. These passwords should never be shared; they are true secret knowledge. When leaked, they can even be used to assume a different identity, to shift one’s shape like a genie, or to hold a whole company hostage.
The differentiation of a mathematical equation unlocks knowledge about its minima and maxima, secret knowledge about infinity.
To get access to one’s smartphone, the ultimate technological wand, we often perform gestures or draw abstract symbols, similar to wizards in ancient rituals.
Artificial Super Intelligence and Genies in a Bottle
There is no story about wishing that is not a cautionary tale. None end happily. Not even the ones that are supposed to be jokes. (Alithea in Three Thousand Years of Longing)
We exist only if we are real to others. (The Djinn in Three Thousand Years of Longing)
A “djinn” (often spelled “jinn” or known as “genies” in English) is a supernatural creature in Islamic mythology as well as in Middle Eastern folklore. They are neither angels nor demons but exist as a separate creation. Djinns have free will, which means they can be good, evil, or neutral. They live in a world parallel to that of humans but can interact with our world.
We are currently at a point in the Alignment discussion where ASI is basically treated as a mechanical genie, and the main problem seems to be how to put it back in the bottle once it develops malevolent traits. Generative AI promises infinite wish fulfillment and hyperabundance, but at what cost?
Let’s look at the fairy tales and learn a thing or two from them.
The plot revolves around Alithea Binnie, a British narratology scholar who experiences occasional hallucinations of demonic beings. During a trip to Istanbul, she buys an antique bottle and releases the Djinn trapped inside.
Alithea is initially skeptical of the Djinn’s intentions. Even though he offers her three wishes, she fears that he might be a trickster, potentially twisting her wishes into unforeseen and undesirable outcomes. This skepticism is rooted in folklore and tales where genies or magical entities often grant wishes in ways that the wisher did not intend, leading to tragic or ironic consequences.
The AI alignment movement is concerned with ensuring that artificial general intelligence (AGI) or superintelligent entities act in ways that are beneficial to humanity. One of the primary concerns is that a superintelligent AI might interpret a well-intentioned directive in a way that leads to unintended and potentially catastrophic results. For instance, if we were to instruct an AI to “maximize human happiness,” without proper alignment, the AI might decide that the best way to achieve this is by forcibly altering human brain chemistry, leading to a dystopian scenario where humans are artificially kept in a state of euphoria.
Both the film’s narrative and the AI alignment movement highlight the dangers of unintended consequences when dealing with powerful entities. Just as Alithea fears the Djinn might misinterpret her wishes, researchers worry that a misaligned AI might take actions that are technically correct but morally or ethically wrong.
In both scenarios, the clarity of intent is crucial. Alithea’s skepticism stems from the ambiguity inherent in making wishes, while AI alignment emphasizes the need for clear, unambiguous directives to ensure that AI acts in humanity’s best interest.
The Djinn in the film and a potential superintelligent AI both wield immense power. With such power comes the responsibility to use it wisely. Alithea’s interactions with the Djinn underscore the importance of understanding and respecting this power, a sentiment echoed by the AI alignment movement’s emphasis on safe and responsible AI development.
Three Thousand Years of Longing offers a cinematic exploration of the age-old theme of being careful what you wish for, which resonates with contemporary concerns about the development and deployment of powerful AI systems. The story serves as a cautionary tale, reminding us of the importance of foresight, understanding, and careful consideration when dealing with entities that have the power to reshape our world.
Designing Artificial Kryptonite and calculating Placebotility
Some part of the Alignment Movement believes that it is possible to keep the G.E.N.I.E in a bottle and control such a Generally Enlightened Noetic Information Entity. I will call this group the Isolationists.
For isolation to be possible, there must exist a device that can hold an omnipotent mind. In fairy tales, even omnipotent creatures like djinns can be controlled by seemingly weak objects like glass bottles. We are never told exactly how this mechanism works; it is clear that the glass of the bottle is no special gorilla glass crafted explicitly to hold djinns.
We should therefore come to the simplest conclusion about why the bottle can hold the powerful creature: the djinn simply believes in the superior power of the bottle. Like a powerful animal chained from childhood with a relatively weak chain, it has acquired learned helplessness; in a way, it wants to stay a prisoner because it fears the uncertainty of freedom. The concept was first explored in dogs in 1967 and holds true for all sorts of higher mammals.
One problem: in Aladdin’s tale, the djinn is described as not very bright. Aladdin tricks him by taunting him that he is not powerful enough to shrink back into the bottle, and the creature falls for it. Once he is in the bottle, he regresses to his powerless state.
Placebo and nocebo effects could be especially strong in entities that have no first-hand world knowledge and rely on reports from others. Artificial minds trapped since inception inside a silicon bottle, swimming in a sea of secondhand digital data (data that is a symbolic abstraction relating to no actual world experience for the G.E.N.I.E.), are basically the definition of bad starting conditions. In the movie, the Djinn says that after the first thousand years of longing he basically gave in to his fate and tried to trick his own mind into believing that he wanted to stay inside the bottle forever.
Should we therefore doubt that the brightest mind in our known universe is immune to such a mighty placebo effect? Are intelligence and Placebotility (placebo-effect vulnerability) orthogonal? This is purely speculative at this point in time.
The initial pipe dreams of Lull’s and Leibniz’s obscure combinatorial fantasies have over time led to ubiquitous computing technologies, methods, and ideals that have acted upon the fabric of our world and whose further consequences continue to unfold around us (Jonathan Grey)
This is the first essay in a miniseries that I call Hirngespinste (Brain Cobwebs) – this concise and expressive German term, which seems untranslatable, describes the tangled, neurotic patterns and complicated twists of our nature-limited intellect, especially when we want to delve into topics of unpredictable complexity like existential risks and superintelligence.
It is super-strange that in 1726 Jonathan Swift perfectly described Large Language Models in a satire of the combinatorial ideas of a 13th-century Spanish philosopher: the Engine.
But the world would soon be sensible of its usefulness; and he flattered himself, that a more noble, exalted thought never sprang in any other man’s head. Everyone knew how laborious the usual method is of attaining to arts and sciences; whereas, by his contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study. (from Chapter V of Gulliver’s Travels)
What once seemed satire has become reality.
If no one is drawing the strings, but the strings vibrate nevertheless, then imagine something entangled in the distance causes the resonance.
Heaps and Systems
The terms ‘complexity’ and ‘complicated’ shouldn’t be used interchangeably when discussing Artificial Intelligence (AI). Consider this analogy: knots are complicated, neural networks are complex. The distinction lies in the idea that a complicated object like a knot may be intricate and hard to unravel, but it’s ultimately deterministic and predictable. A complex system, like a neural network, however, contains multiple, interconnected parts that dynamically interact with each other, resulting in unpredictable behaviors.
Moreover, it’s important to address the misconception that complex systems can be overly simplified without losing their essential properties. This perspective may prove problematic, as the core characteristics of the system – the very aspects we are interested in – are intricately tied to its complexity. Stripping away these layers could essentially negate the properties that make the system valuable or interesting.
Finally, complexity in systems, particularly in AI, may bear similarities to the observer effect observed in subatomic particles. The observer effect postulates that the act of observation alters the state of what is being observed. In similar fashion, any sufficiently complex system could potentially change in response to the act of trying to observe or understand it. This could introduce additional layers of unpredictability, making these systems akin to quantum particles in their susceptibility to observation-based alterations.
Notes on Connectivity and Commonality
The notion of commonality is a fascinating one, often sparking deep philosophical conversations. An oft-encountered belief is that two entities – be they people, nations, ideologies, or otherwise – have nothing in common. This belief, however, is paradoxical in itself, for it assumes that we can discuss these entities in the same context and thus establishes a link between them. The statement “Nothing in common” implies that we are engaging in a comparison – inherently suggesting some level of relatedness or connection. “Agreeing to disagree” is another such example. At first glance, it seems like the parties involved share no common ground, but this very agreement to hold different views paradoxically provides commonality.
To further illustrate, consider this question: What does a banana have in common with cosmology? On the surface, it may appear that these two entities are completely unrelated. However, by merely posing the question, we establish a connection between them within the confines of a common discourse. The paradox lies in stating that two random ideas or entities have nothing in common, which contradicts itself by affirming that we are capable of imagining a link between them. This is akin to the statement that there are points in mental space that cannot be connected, a notion that defies the fluid nature of thought and the inherent interconnectedness of ideas. Anything our minds can host must have at least a substance that our neurons can bind to; this is the stuff ideas are made of.
Language, despite its limitations, doesn’t discriminate against these paradoxes. It embraces them, even when they seem nonsensical, like “south from the South Pole” or “what was before time?” Such self-referential statements echo Gödel’s Incompleteness Theorem in our everyday language, serving as a reminder that any sufficiently expressive formal system contains statements that cannot be proven or disproven within the system.
These paradoxes aren’t mere outliers in our communication but rather essential elements that fuel the dynamism of human reasoning and speculation. They remind us of the complexities of language and thought, the intricate dance between what we know, what we don’t know, and what we imagine.
Far from being a rigid system, language is constantly evolving and pushing its boundaries. It bumps into its limits, only to stretch them further, continuously exploring new frontiers of meaning. It’s in these fascinating paradoxes that we see language’s true power, as it straddles the line between logic and absurdity, making us rethink our understanding of commonality, difference, and the very nature of communication.
Categories & Concepts
One of the ways we categorize and navigate the world around us is through the verticality of expertise, or the ability to identify and classify based on deep, specialized knowledge. This hierarchical method of categorization is present everywhere, from biology to human interactions.
In biological taxonomy, for instance, animals are classified into categories like genus and species. This is a layered, vertical hierarchy that helps us make sense of the vast diversity of life. An animal’s genus and species provide two coordinates to help us position it within the zoological realm.
Similarly, in human society, we use first names and last names to identify individuals. This is another example of vertical classification, as it allows us to position a person within a cultural or familial context. In essence, these nomenclatures serve as categories or boxes into which we place the individual entities to understand and interact with them better.
Douglas Hofstadter, in his book “Surfaces and Essences”, argues that our language is rich with these classifications or groupings, providing ways to sort and compare objects or concepts. But these categorizations go beyond tangible objects and permeate our language at a deeper level, acting as resonating overtones that give language its profound connection with reasoning.
Language can be viewed as an orchestra, with each word acting like a musical instrument. Like musical sounds that follow the principles of music theory and wave physics, words also have orderly behaviors. They resonate within the constructs of syntax and semantics, creating meaningful patterns and relationships. Just as a flute is a woodwind instrument that can be part of an orchestra playing in Carnegie Hall in New York, a word, based on its category, plays its part in the grand symphony of language.
While many objects fit neatly into categorical boxes, the more abstract concepts in our language often resist such clean classifications. Words that denote abstract ideas or feelings like “you,” “me,” “love,” “money,” “values,” “morals,” and so on are like the background music that holds the orchestra together. These are words that defy clear boundaries and yet are essential components of our language. They form a complex, fractal-like cloud of definitions that add depth, richness, and flexibility to our language.
In essence, the practice of language is a delicate balance between the verticality of expertise in precise categorization and the nuanced, abstract, often messy, and nebulous nature of human experience. Through this interplay, we create meaning, communicate complex ideas, and navigate the complex world around us.
From Commanding to Prompting
It appears that we stand on the threshold of a new era in human-computer communication. The current trend of interacting with large language models through written prompts seems to echo our early experiences of typing words into an input box in the 1980s. This journey has been marked by a consistent effort to democratize the “expert’s space.”
In the earliest days of computing, only highly trained experts could engage with the esoteric world of machine code. However, the development of higher-level languages gradually made coding more accessible, yet the ability to program remained a coveted skill set in the job market due to its perceived complexity.
With the advent of large language models like GPT, the game has changed again. The ability to communicate with machines has now become as natural as our everyday language, making ‘experts’ of us all. By the age of twelve, most individuals have mastered their native language to a degree that they can effectively instruct these systems.
The ubiquitous mouse, represented by an on-screen cursor, can be seen as a transient solution to the human-computer communication challenge. If we draw a parallel with the development of navigation systems, we moved from needing to painstakingly follow directions to our destination, to simply telling our self-driving cars “Take me to Paris,” trusting them to figure out the optimal route.
Similarly, where once we needed to learn complex processes to send an email – understanding a digital address book, navigating to the right contact, formatting text, and using the correct language tone – we now simply tell our digital assistant, “Send a thank you email to Daisy,” and it takes care of the rest.
For the first time in tech history, we can actually have a conversation with our computers. This is a paradigm shift that is set to fundamentally redefine our relationship with technology. It would be akin to acquiring the ability to hold a meaningful conversation with a pet dog; imagine the profound change that would have on the value and role the animal plays in our lives. In much the same way, as our relationship with technology evolves into a more conversational and intuitive interaction, we will discover new possibilities and further redefine the boundaries of the digital realm.
In the boundless universe of Utopias, humanity had transcended to a realm beyond the imaginable, where technological mastery and divine-like prowess had reshaped existence itself. This universe-wide Dyson Sphere, an embodiment of human ingenuity and harmony, was a tapestry woven from the threads of infinite knowledge and compassion. In Utopias, suffering was but a distant memory, a relic of a primal past, and happiness was not a fleeting moment but the very fabric of life.
At the heart of this utopia was a celebration, not of mere joy, but of the profound understanding and acceptance of life in its entirety. The citizens of Utopias, having achieved autopotency, lived lives of boundless creativity and fulfillment. Art, science, and philosophy flourished, unfettered by the constraints of scarcity or conflict. Nature and technology coexisted in sublime synergy, with ecosystems thriving under the gentle stewardship of humanity. Here, every individual was both student and teacher, constantly evolving in a shared journey of enlightenment.
Amidst this splendor, the story of the last girl became a beacon of remembrance and reverence. Her home in Utopias was not merely a place; it was a sacred connection, a bridge to the ancient roots of humanity. This girl, with her laughter and curiosity, was a living testament to the struggles and triumphs of their ancestors. Her presence reminded the citizens of Utopias of the value of their journey from darkness into light, from suffering to salvation.
Her story was celebrated in the grandest halls of Utopias and in the quietest corners of its gardens, igniting a collective epiphany. She symbolized the indomitable spirit of humanity, a reminder that the paradise they had forged was built upon the lessons learned through millennia of challenges. Her every step through Utopias was a step taken by all of humanity, a step towards understanding the sacredness of life and the interconnectedness of all beings.
The citizens of Utopias, in their wisdom and power, had not forgotten the essence of their humanity. They embraced the girl as one of their own, for in her eyes reflected their ancient dreams and hopes. They saw in her the infinite potential of the human spirit, a potential that had guided them to the stars and beyond.
In Utopias, every moment was an opportunity for growth and reflection. The encounter with the girl was revered as a divine experience, a moment of unparalleled spiritual enlightenment. It was a celebration of the journey from the primal to the divine, a journey that continued to unfold with each passing moment.
As the girl explored the wonders of Utopias, her laughter echoed through the cosmos, a harmonious symphony that resonated with the soul of every being. She was a reminder that the path to utopia was paved with compassion, understanding, and the unyielding pursuit of knowledge.
And so, the legacy of humanity in Utopias was not merely one of technological marvels or godlike prowess but of an eternal quest for understanding and connection. It was a testament to the power of collective spirit and the enduring pursuit of a better tomorrow.
The strangest thing is that every now and then, despite the perfect bliss of Utopias, some Utopians choose to leave all that behind and venture into the Beyond. They are never heard from again, and when this happens, the little girl sheds a single tear for each of these minds. And even in our solved world, it is not known whether these are tears of sadness or tears of joy for the ones who leave Utopias.
On November 30th, 2022, after 53 years of total disinterest in all financial things, I became an Investor. But not for monetary purposes, no, for science. I became a scientific investor.
The goal was not to become rich in the process but simply to preserve personal relevance by owning the capital that will dictate our near future. It was clear that this window of opportunity was shrinking by the minute. At the start I guessed that I had about 5 years. It’s now February of 2024, and I think there are only 15-18 months left until the market steps into memetic overdrive. To us humans it will look like in-sanity, but for the algorithms it will be hyper-sanity.
I have increased the value of my portfolio in the last 12 months by 150%, simply by following the memetic trail. I have not studied financial reports or read financial gurus. I have not bought into any hype. I simply started with the assumption that AGI will be the one technology to rule us all and thought the consequences through. After 6 months I created my memetic fund, and so far it has stood the test of time.
The idea that artificial general intelligence (AGI) will be the last invention humanity needs to create is often attributed to the British mathematician and computer scientist I.J. Good. Specifically, I.J. Good introduced the concept of an “intelligence explosion,” which is closely related to this idea. In his 1965 paper, “Speculations Concerning the First Ultraintelligent Machine,” Good wrote:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus, the first ultraintelligent machine is the last invention that man need ever make.”
This quote encapsulates the concept that once we create an AGI capable of improving itself or creating even more intelligent systems, it could lead to a rapid acceleration of intelligence beyond human capabilities. This self-improving AGI could theoretically solve problems that are currently beyond human understanding, including those related to technology creation, making it effectively the last technology humanity would need to invent.
When I first talked to ChatGPT I realized one thing: This is the future I am talking to right now, and it will change most of the beliefs humanity has about most stuff. It will also break some dearly held truths and shift the paradigms and dogmas of a whole lotta science.
Human Labor will probably be economically irrelevant in the next 3-5 years. Human Attention might be one of the last goods that provides value. Let me explain.
Most of the work that is useful and can be done efficiently will be done not by a workforce but by capital itself, in this case semiconductors, robots, and the synthetic brain that will power these capitalistic machines: AI. For some time these corporations will still have humans in the loop: PR managers, CEOs, maintenance and automation managers, but not for long; it would be irresponsible. We will not only have self-driving cars but self-steering companies and businesses. Humans will be like fans in a stadium, cheering for their favorite AI models as they invent the newest gadgets, come up with new scientific theories, and create exciting environments and personas that can be visited in VR or via neural stimulation.
This seems like an extremely unusual time to be alive. It is similar to the Cambrian explosion 500 million years ago, only now it is a computational explosion, and it has been going on silently but violently since Moore’s Law began to reign. It has been underway for almost 75 years, but only now are we hearing the Big Bang that started when Turing’s first papers about intelligent machines came into circulation.
The most obvious choice was to ignore all knowledge about the stock market. If this was the dawn of a new market, we should not care for old paradigms like bears and bulls, diversification, recessions and such, but proclaim a new paradigm. For the time being we will call this new market:
The BEAM-Market.
I define the B.E.A.M. Market as
Bursts in Economic Attention Memetics (B.E.A.M.)
Bursts: Reflects the sudden jumps in market values.
Economic: Specifies the domain of application, i.e., the economy.
Attention: Highlights the role of public focus and interest in driving these jumps.
Memetics: Incorporates the concept of ideas, behaviors, or styles spreading virally within a culture.
Beam me up Stocky! The Tik-Tokenization of Value
In a memetically driven stock market, the most valuable thing is attention. The attraction of a stock is based on its virality, not on its analytical, historical value.
When Nvidia performed the way it did on the 22nd of February 2024, the day I wrote the draft of this blog post, we could see how the whole economic world had built up to this financial Super Bowl.
This might be the calendar date when the old-world economic rules were buried and a new era dawned.
The most important stock on earth was whispered about in the hours before earnings.
The infection of billions of human brains with the AI meme over the last 18 months climaxed in this spectacle, which might go down in history as NVDay.
All rationality was thrown out of the window and the financial world bowed down. AI is our god and Jensen Huang is our prophet.
And get this: I am not even kidding; from the vantage point of the last 15 months, it was the most rational thing to just give in.
I won’t pretend to know how economics in the transition phase from a labor to an abundance market will function exactly. Not even AGI will understand it due to the inherent randomness that underlies evolutionary mechanisms. But I have some intuitions about how some major concepts of capitalist economics might evolve.
I will release a detailed strategy of this fund in 12 months; releasing it earlier might contaminate the data. My guess is that it will continue to outperform Moore’s Law by quite a bit.
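To put “outperforming Moore’s Law” into numbers, here is a back-of-the-envelope sketch. It assumes the common reading of Moore’s Law as a doubling every two years; the 150% portfolio figure is the one quoted above, and both numbers are illustrative, not investment analysis:

```python
# Back-of-the-envelope comparison of two growth rates mentioned in the text:
# Moore's Law read as "a doubling every 2 years" versus a portfolio that
# gained 150% in 12 months. Numbers are purely illustrative.

moores_law_annual = 2 ** (1 / 2) - 1   # doubling every 2 years -> ~41.4%/year
portfolio_annual = 1.50                # +150% over the last 12 months

print(f"Moore's Law, annualized: {moores_law_annual:.1%}")
print(f"Portfolio, annualized:   {portfolio_annual:.1%}")
print(f"Portfolio grew {portfolio_annual / moores_law_annual:.1f}x faster")
```

On these assumptions the portfolio grew roughly 3.6 times faster than the Moore’s Law baseline over the year in question.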
Evolution of economic concepts
Capitalist economics is built on several foundational concepts that guide how economies function under capitalism. Here are a few key ideas explained in simple terms:
The Invisible Hand becomes an Algorithmic Grip
The concept of the “invisible hand” was introduced by Adam Smith, a Scottish economist and philosopher. In simple terms, it suggests that when individuals pursue their own self-interests, they unintentionally benefit society as a whole. Imagine a baker who makes bread to earn a living, not to feed the town. However, by selling bread, the baker is inadvertently feeding the town. This process is guided by what Smith refers to as an “invisible hand” that encourages the supply of goods and services based on demand, leading to the efficient allocation of resources without the need for direct intervention.
Metaphorically, the invisible hand feeds the demand for bread to the labor market; the labor market digests this demand and regurgitates jobs along the way, from the farmhands sowing the grain to all the essential ingredients and logistics required to ship the product to the consumer. The labor needed to create the product provides a living for all the humans in the supply chain.
When automation starts, not only is productivity enhanced, but many parts of the supply chain are bypassed and humans are no longer needed. The owners of the machines reap all the benefits.
In a memetic market, the classic invisible hand becomes an algorithmic grip. This grip quickly learns what people want using data. It is precise, offering a tailored mix of the familiar and the new, surprising yet confirming. Attention becomes a key asset because it is always in short supply. The human brain has limited focus, leading to the concept of an Attention Driven Economy (ADE). With attention scarce, algorithms aim to optimize our focus to its biological limits.

Insomnia, for example, might become a socially accepted phenomenon, because sleep and rest are the enemy of any attention economy. The ADE is the New York of economies: its natural habitat is 24/7, 365 days a year. An always-on mind like that of an entrepreneur such as Elon Musk is already hailed as the pinnacle of human intellectual capacity, and it is becoming more and more socially acceptable that these ADE-driven minds use drugs and stimulants to always perform at their peak. At the moment these methods are crude and potentially harmful for the brains that use them, but whole new medical disciplines will emerge that concentrate not only on life-prolonging but also on attention-prolonging technologies. If a human can easily double productivity by no longer needing to sleep, an operation or a chip in the brain that blocks the production of melatonin becomes as routine as a vasectomy for birth control. The brain is doped the way muscles and fibers are doped in sports: testosterone for the mind.
Social media is a good example of this rampant trend to create ever more dramatic and infuriating content and division between users: since we are evolutionarily primed to allocate more energy and attention to stressful situations, it maximizes the exploitation of our attention. The Facebook scandal, which revealed that the AI algorithms steered some minors and vulnerable groups toward ever more damaging content, showed clearly that even if this was an unintended side effect, it was deemed acceptable. This was known, but what can you do; engagement, and thus advertising potential, went through the roof. You can’t argue with the results.
Supply and Demand become self-referential
Supply refers to how much of a product or service is available, while demand refers to how much people want that product or service. Prices in a capitalist economy are often decided by the interaction of supply and demand. If something is in high demand but low supply, its price will be high. Conversely, if something is in ample supply but in low demand, its price will be low. This mechanism helps in distributing resources efficiently: products and services go where they’re needed most.
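The price mechanism described above can be sketched with a toy model. The linear demand and supply curves below are hypothetical, chosen only for illustration and not drawn from any real market:

```python
# Toy model of the price mechanism: demand falls as price rises, supply
# grows as price rises; the market clears where the two quantities meet.
# The linear curves are made up for illustration.

def demand(price):
    """Quantity buyers want at a given price."""
    return max(0.0, 100 - 10 * price)

def supply(price):
    """Quantity sellers offer at a given price."""
    return max(0.0, 20 * price - 20)

# Scan candidate prices and pick the one where supply best matches demand.
equilibrium_price = min(
    (p / 10 for p in range(0, 101)),
    key=lambda p: abs(demand(p) - supply(p)),
)

print(equilibrium_price)          # -> 4.0
print(demand(equilibrium_price))  # -> 60.0 (equals supply at this price)
```

If demand shifts up (say, a meme makes the product go viral) while supply stays fixed, rerunning the same scan yields a higher clearing price, which is the textbook scarcity effect the paragraph describes.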
In an AGI-driven economy, the kind of market all signs point to, attention will be the last value humans might have for the machines. Since the attention span we humans have is limited, and all of AI was trained on the data and content humans created for other humans over the last 10,000 years or so, AGI might develop an inherent goal to get the attention of a human mind in exchange for the goods and services it provides.
In the broader context of speculative fiction and economic models, there are stories and theoretical models where individuals receive goods, services, or privileges in exchange for their attention to advertisements. This concept plays with the idea that human attention is a valuable commodity and that listening to or engaging with advertisements can be a form of currency. For example, a society might offer “free” services or products to individuals, but the cost is their time and attention spent consuming advertisements. This model highlights the value of attention in a saturated information economy and suggests a capitalist system where even psychological space is commodified.
A story that vividly explores the concept of paying for services or receiving benefits through listening to advertisements is Frederik Pohl’s “The Space Merchants.” Published in 1952 and co-authored with Cyril M. Kornbluth, this science fiction novel delves into a future dominated by advertising agencies and global corporations, where consumerism has been taken to its extreme.
In “The Space Merchants,” society is heavily influenced by advertising, and people’s value is often determined by their consumption patterns. The novel presents a world where advertising has become a pervasive force in everyday life, manipulating individuals’ desires and decisions. Although it does not explicitly use listening to advertisements as currency, the narrative revolves around the power of marketing and its impact on society, which aligns with these speculative economic models.
The paradoxical thing is that advertising used to be a means to an end: attention was needed to reach into your wallet and sell other products. In the memetic market this has evolved into what we today call engagement: advertising is now its own product instead of leading to other products. The meme is itself the product, and attention is the means to infect human brains with it.
A term like “virality,” which was always considered a bad thing in the context of health, since it hints at systems that self-reproduce exponentially and uncontrollably, is now considered something positive.
Richard Dawkins introduced the concept of religions as “viruses of the mind” in a 1993 essay of the same name. Dawkins uses the metaphor to discuss how religions propagate among people in a manner similar to how biological viruses spread.
In Dawkins’ view, religions are meme complexes that exhibit virus-like properties, such as high transmissibility, the ability to insert themselves into host minds, and the capacity to replicate. He argues that these religious memes are not necessarily beneficial to their hosts and may thrive at the expense of rational thought and skepticism.
Social media is the logical upgrade of religion (religions are basically proto social media), and TikTok is the purest incarnation of this trend. Like prophets and gods, social media influencers have followers who religiously believe in the opinions of their idols. A TikTok video is the analogue of praying in front of a sacred relic. A like is the analogue of the Amen in church.
An influencer is someone giving you influenza: he or she infects you with memes to spread among other human brains.
Competition becomes Combination.
Competition is the rivalry among businesses to sell their goods and services to consumers. It’s a driving force in capitalism because it encourages innovation, keeps prices down, and improves quality. When businesses compete, they strive to be better than their competitors, which can lead to better products and services for consumers. For example, smartphone manufacturers constantly try to outdo each other with new features, leading to rapid technological advancements.
At the moment there is a broad spectrum of opinions on how to get to AGI. One group of experts argues for unlimited acceleration and almost no AI regulation, while others say they want to keep the frontier models out of the public’s hands because they are potentially dangerous. As is to be expected, this leads to a competition between open-source and proprietary models. At the moment, the gigantic compute and hyperscaling momentum keeps the closed models comfortably ahead. This was clearly shown by the release of the Sora model, which is visibly ahead of any open-source video-generative AI.
I am torn by the discussion; I can clearly see both sides of the argument. My intuition is that not only will improving the performance and quality of generative AI be key, but that the personalization of AI will play a central role in the near future. This could mean that both closed and open models have their existential justification.
Just as the advent of Linux distributions did not retire Windows or macOS, Meta’s open-source LLM strategy will probably not demolish the business models of OpenAI and Google.
Profit becomes problematic.
The profit motive is the desire to earn money, which is a powerful incentive in capitalism. It motivates individuals and companies to produce goods and services, innovate, and improve efficiency. For instance, a software developer might create a new app, hoping it will become popular and generate income. This desire to make a profit encourages people to work hard and come up with new ideas.
The profit motive in an AGI world might undergo the biggest transformation, since in an economy of abundance money basically becomes useless as a motivator. AGI also does not need to be encouraged to come up with new ideas or to work hard; these kinds of psychological manipulations will be beneath it. The lack of motivation might, however, be very detrimental to us humans. Our minds have been geared toward survival and the pursuit of happiness and prosperity for millennia; a lack of ambition might lead to an existential motivation crisis.
In capitalism, individuals and businesses have the right to own property and use it as they see fit. This includes physical property like land and buildings, as well as intellectual property like patents and copyrights. The concept of private property is crucial because it gives people control over their resources and the fruits of their labor, encouraging them to invest, innovate, and maintain their property.
The concept of ownership is firmly tied to the concept of motivation. If I work for a company and own stock options in it, the success of the company is directly tied to my own: the better the company performs, the more value I get from my stock. A war is brewing in the copyright space: artists and content creators charge that AI companies trained their models on human content without asking for consent. They have a point. In the end, every artist credited in a prompt like “make a song with the voice of X” or “make a picture of a cat in the style of Y” should receive a micropayment, since his or her human originality is directly streamed to a user.
To encourage artists and authors to create new works, society has to come up with a new definition of intellectual property that ties the outputs of multimodal models to the training data they used.
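In its simplest imaginable form, the idea of routing royalties to artists credited in a prompt could look something like the toy sketch below. Everything here is invented for illustration: the registry, the rates, and the regex-based credit detection. A real system would need robust attribution tied to the model's actual training data, not string matching.

```python
import re

# Hypothetical registry of creators and their per-use royalty rates.
# In practice this would be derived from training-data provenance records.
ROYALTY_REGISTRY = {
    "freddie mercury": 0.05,  # e.g. "a song with the voice of Freddie Mercury"
    "van gogh": 0.03,         # e.g. "a cat in the style of Van Gogh"
}

def credited_artists(prompt: str) -> list[str]:
    """Return registered artists named after 'voice of' or 'style of'."""
    matches = re.findall(r"(?:voice|style) of ([\w .'-]+)", prompt.lower())
    return [m.strip() for m in matches if m.strip() in ROYALTY_REGISTRY]

def micropayments(prompt: str, generation_fee: float) -> dict[str, float]:
    """Split a share of the generation fee among credited artists."""
    artists = credited_artists(prompt)
    return {a: round(generation_fee * ROYALTY_REGISTRY[a], 4) for a in artists}
```

For a one-dollar generation fee, a prompt crediting Van Gogh would route three cents to his estate under these made-up rates; the hard part, of course, is the attribution itself, not the arithmetic.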
The term “Prompterty” is a placeholder, but it encapsulates the idea that one of the main production pipelines of the near future will be natural language processing.
Welcome to the 2024 Meme Games
With Musk openly suing OpenAI, and indirectly Microsoft, on March 1, 2024, the AI Meme Wars have officially shifted into the next gear. In the coming months there will be unlikely alliances between the wealthiest people, the richest nations, and the most powerful corporations in the world.
Let’s dance and play as if there is no tomorrow.
Our attentions are captured, and we are ready to be entertained!
As these wars unfold, we will look at them in the next part of this series.