Epilogue: The Ones Who Leave Utopias

Reading Time: 3 minutes

For U.K.L.

[Image: art, Majorelle Blue, world]

In the boundless universe of Utopias, humanity had transcended to a realm beyond the imaginable, where technological mastery and divine-like prowess had reshaped existence itself. This universe-wide Dyson Sphere, an embodiment of human ingenuity and harmony, was a tapestry woven from the threads of infinite knowledge and compassion. In Utopias, suffering was but a distant memory, a relic of a primal past, and happiness was not a fleeting moment but the very fabric of life.

At the heart of this utopia was a celebration, not of mere joy, but of the profound understanding and acceptance of life in its entirety. The citizens of Utopias, having achieved autopotency, lived lives of boundless creativity and fulfillment. Art, science, and philosophy flourished, unfettered by the constraints of scarcity or conflict. Nature and technology coexisted in sublime synergy, with ecosystems thriving under the gentle stewardship of humanity. Here, every individual was both student and teacher, constantly evolving in a shared journey of enlightenment.

[Image: clothing, footwear, sky, building]

Amidst this splendor, the story of the last girl became a beacon of remembrance and reverence. Her home in Utopias was not merely a place; it was a sacred connection, a bridge to the ancient roots of humanity. This girl, with her laughter and curiosity, was a living testament to the struggles and triumphs of their ancestors. Her presence reminded the citizens of Utopias of the value of their journey from darkness into light, from suffering to salvation.

Her story was celebrated in the grandest halls of Utopias and in the quietest corners of its gardens, igniting a collective epiphany. She symbolized the indomitable spirit of humanity, a reminder that the paradise they had forged was built upon the lessons learned through millennia of challenges. Her every step through Utopias was a step taken by all of humanity, a step towards understanding the sacredness of life and the interconnectedness of all beings.

The citizens of Utopias, in their wisdom and power, had not forgotten the essence of their humanity. They embraced the girl as one of their own, for in her eyes reflected their ancient dreams and hopes. They saw in her the infinite potential of the human spirit, a potential that had guided them to the stars and beyond.

In Utopias, every moment was an opportunity for growth and reflection. The encounter with the girl was revered as a divine experience, a moment of unparalleled spiritual enlightenment. It was a celebration of the journey from the primal to the divine, a journey that continued to unfold with each passing moment.

[Image: reef, aquarium, plant]

As the girl explored the wonders of Utopias, her laughter echoed through the cosmos, a harmonious symphony that resonated with the soul of every being. She was a reminder that the path to utopia was paved with compassion, understanding, and the unyielding pursuit of knowledge.

And so, the legacy of humanity in Utopias was not merely one of technological marvels or godlike prowess but of an eternal quest for understanding and connection. It was a testament to the power of collective spirit and the enduring pursuit of a better tomorrow.

The strangest thing is that every now and then, despite the perfect bliss of Utopias, some Utopians choose to leave all that behind and venture into the Beyond. They are never heard from again, and when this happens, the little girl sheds a single tear for each of these minds. And even in our solved world it is not known whether these are tears of sadness or of joy for the ones who leave Utopias.

[Image: anime, drawing, art]

(Idea, Concept & Finetuning: aiuisensei; Pictures: DALL-E 3; Story: ChatGPT-4)

Utopological Investigations Part 4

Reading Time: 18 minutes

This is part of my series about Deep Utopia. This part collects some notes I made after finishing the book. They mirror some of the objections I have against Bostrom's affective consequentialism, which clutters his writing in some otherwise great passages.

[Image: sky, grass, clouds, outdoors]

Notes

Imagine that some technologically advanced civilization arrived on Earth and was now pondering how to manage things. Imagine they said: “The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads—while we could so easily deflect them onto more wholesome paths, with a little nudge here and maybe some gentle police action there, we must scrupulously avoid any such interference, so that they can continue to prey on the weaker or more peaceful groups and keep them on their toes. Furthermore, we find that human nature expresses itself differently in different environments; so, we must ensure that there continue to be… slums, concentration camps, battlefields, besieged cities, famines, and all the rest.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 499, footnote.

This footnote is a strange case where Bostrom goes overboard with a metaphor, and it is odd how he compares aliens that want to preserve human civilization with humans who defend the carnivorous tendencies of predators. I guess he compares cats to psychopathic cannibals? If we had the technology to cure them of their pathological tendencies (hunting and toying with their prey), we should do so. He wants civilized cats only.

The utopians could have their interesting experiences repeat themselves—but that may not be very objectively interesting; or they could die and let a new person take their place—but that is another kind of repetition, which may also not ultimately be very objectively interesting. More on this later.)

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 217.

Instead of deactivating our boreability as a whole, we could simply delete past experiences from our memory, as shown in the movie Eternal Sunshine of the Spotless Mind. This would enable us to relive our favorite experiences, like listening to Bach's Goldberg Variations forever for the first time.

[Image: human face, clothing, person, man]

Perhaps there is enough objective interestingness in Shakespeare’s work to fill an entire human life, or a few lifetimes. But maybe the material would become objectively stale to somebody who spent five hundred years studying it. Even if they had been modified so that they didn’t experience boredom, we might judge their continued Shakespeare studies to be no longer valuable (or at least much less valuable in one significant respect) once they have “exhausted” the Bard’s work, in the sense of having discovered, appreciated, learned, and fully absorbed and mastered all the insight, wit, and beauty therein contained. We would then have definitively run out of Shakespearian interestingness, although we would be able to choose how to feel about that fact.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), pp. 219–220.

At the point of technological plasticity, there will also be the option of using ASI to develop a digital twin of Shakespeare, living in an alternate Elizabethan reality, who writes sonnets and plays on new subjects in the style of the original. Running out of novelties should be impossible for a Shakespearean.

If we imagine a whole society (…) who interact normally with one another but are collectively gripped by one great shared enthusiasm after another—imposed, perhaps, by a joint exoself (an “exocommunity”? aka “culture”)—and who find in these serial fascinations a tremendous source of pleasure, satisfaction, and purpose; then the prospect immediately takes on a significantly sunnier aspect; although, of course, it is still not nearly as good as the best possible future we can imagine.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 222.

Some of our current society's most sought-after abstract goods, like power, fame, wealth, and status, hint at the possibility that these are parameters artificially inserted into this pseudo-simulation. Most humans are collectively gripped by money; not caring for it makes one an extreme outlier, which could be the result of such exogenous programming randomly failing in some individuals. The greed people show in collecting bills, coins, stocks, and digital numbers on an index should seem utterly irrational and objectively boring to entities that don't care about such stuff.

I have forgotten almost all the lectures that I attended as a student, but one has stuck in my memory to this day—because it was so especially outstandingly boring. I remember trying to estimate the number of black spots in the acoustic ceiling panels, with increasing levels of precision, to keep myself distracted as the lecture dragged on and on. I feared I might have to outright count all the spots before the ordeal would be over—and there were tens of thousands. Memorability is correlated with interestingness, and I think we must say that this lecture made an above-average contribution to the interestingness of my student days. It was so boring that it was interesting!

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 245.

Bostrom mixes up categories here. We should expect a Gaussian distribution of the most interesting and most boring moments in our life. The most boring moment, which marks the far left of this spectrum, is not interesting in itself; otherwise it would simply be wrongly positioned, and we would have to update our data points in an infinite loop. Claiming that this position is interesting is like saying a movie was so bad that it was good: the "goodness" refers to a meta-level, meaning outstanding or remarkable, not of good quality. Bostrom gets carried away by how our language uses the terms good/bad, interesting/boring, special/typical.

(…) we seem bound to encounter diminishing returns quite quickly, after which successive life years bring less and less interestingness.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 253.

If we made a subjective list of the most interesting things in our lives, it would surely consist mostly of things we did for the first time, and such moments become exponentially rarer after the third year of life. Measured by novelty alone, we could argue that we start dying at about age three, once our major developmental leaps are behind us; the remaining 70 to 80 years are a slow degeneration in which these moments become exceedingly rare.

If we keep upgrading our mental faculties, we eventually leave the human ambit and ascend into the transhuman stratosphere and thence into posthuman space.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 255.

It is a little shortsighted to simply expect that every human would only be interested in upgrading to a higher entity on this ascendancy spectrum. For example, some philosophers would want to be downgraded to a bat, just to see the look on Thomas Nagel's face after they wrote a paper about their experience. Such a successful temporary downgrade would definitely prove that we had solved the hard problem of consciousness.

(…) we should expect that what is required in order for our lives to remain interesting in utopia is that they offer a suitable variety of activities and settings.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 257.

In addition, an artificial super-companion could exert a kind of guidance to keep our human minds from getting lost. Just as we limit what we show to children, a plastic world would still need some invitation or level-gating to protect us from potentially self-destructive behavior, including doing irreversible things to ourselves via autopotency. I believe such an entity would have to be highly personalized to its ward: a kind of artificial guardian that watches over its ward's well-being.

(…) the more chopped and mixed life is preferable may retain some of its grip on us even if we stipulate—as of course we should—that in neither scenario would any subjective boredom be experienced.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 258.

But if we extrapolate this thinking along a potentially infinite path, we end up with a totally distorted state in which it seems preferable, for entertainment's sake, to end up as a Boltzmann brain, whose experiences are maximally in flux. An intuition I currently find frightening.

(…) if things functioned perfectly [in a plastic world], we would keep accumulating ever greater troves of procedural and episodic memories.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 260.

I highly doubt that. Individuals with too good and too precise a memory are not to be envied. Consider the neurodiversity literature on high-functioning autism, Asperger's, and savant syndrome, whose subjects often seem cursed by hyper-precise memory. Borges even wrote a story about a man with perfect memory, "Funes the Memorious", whose condition is utterly debilitating. Like most superpowers we envision having as children, a perfect memory would carry enormous drawbacks for a human mind.

A frozen brain state, or a mere snapshot of a computational state stored in memory, would not be conscious.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 268.

[Image: human face, toy, person, cartoon]

It could. Imagine an infinite number of universes in which Boltzmann brains pop into existence for only the fraction of a moment it takes to access a conscious thought. If this Boltzmann-brain party goes on for eternity, there will surely be a state of one of these brains that remembers reading a token of Bostrom's new book. For some million years the same brain has totally different thoughts, but then one day it has the thought of the second token of Bostrom's book, and so on, until the experience of having read the whole book resides somewhere in the brain. Such a radically discontinuous brain might well never notice its fractured worldlines and would have a perfectly normal experience of consistent thinking.

We have the idea that certain developmental or learning-related forms of interestingness could be maximized along a trajectory that is less than maximally fast: one where we spend some time exploiting the affordances available at a given level of cognitive capacity before upgrading to the next level. We have also the idea that if we want to be among the beneficiaries of utopia, we might again prefer trajectories that involve less than maximally precipitous upgrading of our capacities, because we may thereby preserve a stronger degree of personal identity between our current time slices and the time slices of (some of) the beings that inhabit the long-term future.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 272.

Why this section emphasizes how identity and interestingness intertwine is not really clear to me, nor why it would be better for me to hold on to my singular identity. From the perspective of interestingness, it might be far more interesting to be inhabited by multiple identities at the same time. Indeed, experiments with split-brain patients suggest that at least two deep identities control our visible surface identity anyway. So getting too hung up on the concept of a singular identity might be pointless.

(…) the notion of fulfillment is vague and indeterminate in its application to entities such as artistic or cultural movements. But it is also so in its application to human individuals.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 317.

If we have two identical metal cylinders sitting on a table, the same object could serve as a bucket or a lampshade. Their value function, when executed, completes them: fulfilling f(bucket) means adding water; fulfilling f(lampshade) means subtracting light. Moreover, if a fire broke out in the vicinity, the bucket could be considered fulfilled if we emptied its water onto the fire to extinguish it. Fulfillment is therefore entirely in the mind of the beholder.

Achieve a victory against the chess engine Stockfish at difficulty level 7, using no computer aids to assist you during training or during the match, and using no cognitive enhancers or other means that go against the spirit of this challenge.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 338.

Bostrom uses this example to illustrate how things could still be made challenging in a plastic world. But I am not sure it is that easy; it directly contradicts Bostrom's own thoughts in Superintelligence. If we were able to formulate such a mission in a plastic world, such that it is valid and not corruptible by an ASI, we would at the same time have found a way to chain an ASI with merely human intelligence, because a human supported by superhuman assistants would only have to command the ASI to find a solution that does not go against the spirit of the challenge, and the ASI would find it.

It would be trivial for an ASI to cheat in such a way that we felt we had won fair and square. The logical response is to always distrust our victory, which makes it pointless to even play; we would have exactly the same feeling if Stockfish itself let us win. We can easily see that such a mere contract could be exploited by an ASI; otherwise, alignment would be trivial. I am a little irritated that Bostrom does not see how this idea goes against his own orthogonality thesis.
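For concreteness, here is a minimal sketch of what such a challenge could look like in practice today, using the python-chess library and a local Stockfish binary (both my assumptions; the book specifies no implementation). Stockfish's UCI "Skill Level" option, which ranges from 0 to 20, stands in for the book's "difficulty level 7".

```python
# Minimal sketch of the "beat Stockfish unaided" challenge.
# Assumes: pip install chess, and a Stockfish binary on the PATH.
import chess
import chess.engine

def play_challenge(engine_path: str = "stockfish", skill_level: int = 7) -> str:
    """Play one game as White against a skill-limited Stockfish."""
    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        engine.configure({"Skill Level": skill_level})  # UCI option, 0-20
        while not board.is_game_over():
            if board.turn == chess.WHITE:
                print(board)
                board.push_san(input("Your move (SAN): "))  # raises on illegal input
            else:
                result = engine.play(board, chess.engine.Limit(time=0.1))
                board.push(result.move)
    finally:
        engine.quit()
    return board.result()  # "1-0" means the human won

if __name__ == "__main__":
    print(play_challenge())
```

Note that nothing in this code can check the "no computer aids" clause: that part of the contract lives entirely outside the software, which is exactly the loophole an ASI (or a cheating human) could exploit.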

Within such a vast population [of stellar habitats] there would be an increased probability of design collisions. That is to say, if we pick a random person, and ask how similar the most similar other person is: then the larger the population, the more similar the most similar other person would tend to be.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 417.

This seems irrelevant. If we are not in a simulation and are bound by physical laws, galactic pockets of transhumanity will keep drifting apart, and the increasing gaps between star systems will make it impossible to assemble such a superset of possible minds to compare against. If my identical twin is unreachable in such a pocket system, I am as unique as if he had never been born; his twin-similarity should be none of my concern.
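The quoted claim itself is easy to check numerically. A toy sketch of my own construction (not Bostrom's): model minds as random binary design vectors and watch how the nearest neighbor of a fixed probe mind gets more similar as the population grows.

```python
# Toy check of the "design collision" claim: in larger populations,
# the most similar other mind tends to be more similar to you.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # crude stand-in for a mind's design parameters

def nearest_neighbor_similarity(pop_size: int, trials: int = 20) -> float:
    sims = []
    for _ in range(trials):
        minds = rng.integers(0, 2, size=(pop_size, DIM))
        probe = minds[0]
        # fraction of matching design bits with the closest *other* mind
        sims.append((minds[1:] == probe).mean(axis=1).max())
    return float(np.mean(sims))

for n in (10, 100, 1_000, 10_000):
    print(f"population {n:>6}: nearest-neighbor similarity {nearest_neighbor_similarity(n):.3f}")
```

The similarity climbs toward 1.0 with population size, which is Bostrom's point; my objection above is not that the statistics are wrong but that causally disconnected pockets make the comparison practically meaningless.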

Once a life is already extremely excellent, there may just not be much room for further improvement. So, while some initial segment of each utopian’s life could cause later improvements, this segment may be a small fraction of their entire life. The longer the life stretches on, the greater the fraction of it would be such that its average quality does not improve much. Either the life is already close to maximally good, or else the rate of improvement throughout the life is extremely slow.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, pp. 420–421.

A meaningful purpose for a good life could then be enhancing this option even further. For example, it could be argued that the person who invented a cure for cancer upgraded the total amount of available quality of life for all of humanity. It is not clear to me that this possibility space of important medical progress will close in a plastic world. Even at Tech-Mat there could be the problem of minds addicted to infinite jests and the like. Producing an effective cure for such a mind-virus could be considered even more valuable than curing cancer. The more transhuman a mind is, the harder such an illness could be to cure, and coming up with ever new synthetic vaccines could be a genuinely difficult task even for an ASI. Even at plasticity we should beware that any clever instructions we might devise to get rid of such afflictions forever are certainly time-constrained. As Gödel showed, no sufficiently rich formal system can be both complete and consistent. So a surefire function that always ensures our maximal well-being for all eternity is simply beyond any constructible reality. I think Bostrom stretches his autopotency term beyond the realm where it is sensible to use.

What Bostrom also leaves out is that the quality of a human life cannot easily be averaged. The quality is heavily weighted toward the later parts. It is easy to be a good kid, but it is extremely hard to stay good (to maintain a high quality of life) the older you get. The life quality of our best leaders and scientists could easily be diminished to a negative outcome if, after receiving the Nobel Peace Prize, they went on a killing spree or were caught on Epstein's island molesting minors. Take a person like Ted Kaczynski, who was a math prodigy: we would certainly evaluate the quality of his life more favorably if, after being incarcerated, he had won the Fields Medal, written a book about the error of his terrorist ways, and reintegrated into society. Instead of an evil-genius story, his life would have become a heroic redemption arc.

Many of our most compelling stories are tales of hardship and tragedy. The events that these stories portray would cease to occur in utopia. I am inclined to say tough luck to the tragedy-lover. Or rather: feel free to get your fix from fantasy, or from history—only, please, do not insist on cooking your gruesome entertainment in a cauldron of interminable calamity and never-ending bad news! It is true that good books and films have been inspired by wars and atrocities. It would have been better if these wars and atrocities had not occurred, and we had not had these books and films. The same applies at the personal scale. People coping with the loss of a child, dementia, abject poverty, cancer, depression, severe abuse: I submit it would be worth giving up a lot of good stories to get rid of those harms. If that makes our lives less meaningful, so be it.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 424.

Here Bostrom seems to fight the intuition that suffering is a valid value vector in a plastic meaningfulness-space. He is effectively saying that if your mission requires suffering [of others], it is not worth the effort. A stark contradiction to his own earlier statements, where he recognizes that suffering can dramatically enhance the quality of our experiences. Even in a thought experiment where, had humans never killed each other, literature would never have gotten Tolstoy's War and Peace, he concludes that this would be okay. There is an endless list of human achievements that caused extreme suffering, like the Manhattan Project. Nobody in their right mind would say that the atomic bombs dropped on Japan were justified because we got a good movie like Oppenheimer out of them. You are not a tragedy-lover if you are moved by the picture. Bostrom says that in a plastic world there should not be any suffering, because it is better to have a history, a chain of events, in which no mistakes happen than to learn from our mistakes and make them part of our culture. With such an absolutist view we could very well end up in a Plastic Utopia where suffering is forbidden, or simply ignored, as in so many current dictatorial states. If the absence of suffering trumps all other values, we end up with a toxic well-being scenario.

Some things we enjoy doing. Other things we enjoy having done. Meaningful activities tend to fall into the latter category.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 438.

This is a great observation. An argument for potential suffering and against hedonism if we value meaning more than well-being. A variation: what comes easy goes easy. Only the hard stuff stays with us. The surface pleasures do not reach deep into our core.

A purpose P is the meaning of person S’s life if and only if: (i) P is encompassing for S; (ii) S has strong reason to embrace P; and (iii) the reason is derived from a context of justification that is external to S’s mundane existence.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 441.

The Swiss author Ludwig Hohl has an equally good definition of how to achieve purpose. Central to his thinking is the term "work": work is always an inner process, and it must always be directed outward. Activity that is not directed outward is not work; activity that is not an inner event is not work.

Could there also be unrealized subjective meaning? Yes, I think we can make sense of such a notion. An example might be a person with an exceptional talent and passion for music, who embraces the purpose of composing great music either because they think that this is inherently deeply valuable activity or because they hope to produce a work of such tremendous power that it will heal the cultural chasms that separate us from one another and lead to conflict and war. So, this gives them subjective meaning. We can suppose that they burn with fervor to pursue this purpose throughout their life, but that circumstances conspire to prevent them from ever actually doing any composing— they face grinding poverty, conscription into the army, personal emergencies. We could then say that their life had unrealized subjective meaning.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 458.

This is one of Bostrom’s more futile thought experiments. It is especially unconvincing because Bostrom uses external circumstances to give this gifted musician a way out to never actualize her potential. As with life talent should always find a way. Look at the seemingly idiotic circumstances that led Galois to his final pistol duel, a lesser mathematician would simply never have had the urgency to draft down his mathematical results the night before. Or look at the life of Hawking: a lesser physician would have surrendered to the illness without ever trying to achieve greatness. I remember in his Autobiography he explicitly credited his illness and the ticking of his time running out for the fact that he went from being a lazy physicist to becoming an actually great one. If something is your mission, and things and circumstances prevent you from achieving it you will make overcoming the circumstances your mission. With great potential comes great preparedness.

The right path is the unfolding of the fullest activity that is possible for us. The fullest: measured by our capabilities (our conditions) and by the effect on others (ourselves as well as others). A little knitting won’t suffice (or one who is content with that must be a sad creature). Do the circumstances hinder you in the unfolding of your activity? Then work towards changing the circumstances, and you will find your activity in that. (Ludwig Hohl, Nuances and Details II, 11)

Consider the following imaginary character. Grasscounter is a human being who has devoted himself to counting blades of grass on the College lawn. He spends his whole days in this occupation. As soon as he completes a count, he starts over—the number of blades, after all, might have changed in the interim. This is Grasscounter’s great passion in life, and his top goal is to keep as accurate an estimate as possible. He takes great joy and satisfaction in being fairly successful in this endeavor. The objectivist and hybrid accounts that we find in the literature would say that Grasscounter’s life is meaningless; whereas subjectivist accounts would say that it is meaningful.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 461.

[Image: darkness]

While reading this I am immediately reminded of the Hodor event in Game of Thrones. Throughout the story, the stuttered word "Hodor" has absolutely no meaning, subjective, objective, or otherwise; it is the phrase the simple-minded giant utters whenever he is addressed. Only much later do we learn its true meaning, and suddenly the phrase, as a contraction of the sentence "Ho[ld the] do[o]r!", becomes one of the most meaningful words in the whole epic. The meaning was always there, but we as observers could not decipher it. Bostrom later denies that Grasscounter could ever have objective meaning, since his act seems pointless:

[Grasscounter] would not, however, have meaning in the more objectivist sense that requires the encompassing purpose to be one which the person “would desire if he were perfectly psychologically healthy and well-adapted”.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 462.

But what about the possibility that this person has secret knowledge: grabby aliens will one day arrive, and since they have a gambling addiction, they always give their prey a single chance to be spared before conquering a world. In the case of Earth, earthlings can save themselves if one among them knows the exact number of blades of grass on a certain lawn… Now whose life and activity has suddenly acquired more meaning than probably anything else up to this point?

Bostromisms

Bostrom is known for coining new terms. Here are some of his newest.

[Image: art, circle, drawing]

Astronomical Petri Dish: Observable Universe

Computronium: matter organized into a nanomechanical computing substrate that approaches the Landauer limit of energy efficiency in computation.
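For reference, the Landauer limit is the standard thermodynamic floor of k_B · T · ln 2 joules per erased bit (textbook physics, not a claim from the book); a quick sketch of the arithmetic:

```python
# Landauer limit: minimum energy to erase one bit at temperature T.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(temperature_kelvin: float) -> float:
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) erasing a bit costs ~2.9e-21 J, so a
# one-watt heat budget allows roughly 3.5e20 bit erasures per second.
e_bit = landauer_joules_per_bit(300.0)
print(f"{e_bit:.2e} J/bit, {1.0 / e_bit:.2e} erasures per watt-second")
```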

Plasticity: The state of a technologically mature world whose affordances make it easy to achieve any preferred local configuration. [My version of a Technology of Everything, or Clarke capability.]

Let us say that we have some quantity of basic physical resources: a room full of various kinds of atoms and some source of energy. We also have some preferences about how these resources should be organized: we wish that the atoms in the room should be arranged so as to constitute a desk, a computer, a well-drafted fireplace, and a puppy labradoodle. In a fully plastic world, it would be possible to simply speak a command—a sentence in natural language expressing the desire—and, voila, the contents in the room would be swiftly and automatically reorganized into the preferred configuration. Perhaps you need to wait twenty minutes, and perhaps there is a bit of waste heat escaping through the walls: but, when you open the door, you find that everything is set up precisely as you wished. There is even a vase with fresh-cut tulips on the desk, something you didn’t explicitly ask for but which was somehow implicit in your request.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, pp. 196–197.

Autopotency: Ability to use Plasticity for self-configuration.

An autopotent being is one that has complete power over itself, including its internal states. It has the requisite technology, and the know-how to use it, to reconfigure itself as it sees fit, both physically and mentally. Thus, a person who is autopotent could readily redesign herself to feel instant and continuous joy, or to become absorbingly fascinated by stamp collecting, or to assume the shape of a lion.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 197.

Total Welfare function: Objective Measurement of Subjective Wellbeing

AI completeness: A task that requires human-level artificial general intelligence. (Mind uploading and autopotency are most likely AI-complete.)

Aesthetic neutrinos: The possibility that our experience filters are too insensitive to register countless breathtaking moments in the environment, the pervasive sheer beauty of being.

Timesuit: Protective coating to shield the biological body from time-induced decay.

Diachronic Solidarity: Prospective and retrospective emotional connection with forebears and descendants.

Karma Coin: An option package of highly desirable goods and services (a happy afterlife, true love, profound knowledge, enlightenment, closeness to the divine). Investing in a Karma Coin currency is a way to discover and share meaning with others; it is like a Bitcoin for purpose. I am not sure whether Bostrom is kidding. At the stage of plasticity this coin would lose all its value. It could be a guiding light on the way there, maybe.

Intrinsification: The process whereby something initially desired as a means to some end eventually comes to be desired for its own sake as an end in itself.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 234.

ETP: Short for Encompassing Transcendental Purpose; the meaning of an individual life.

[Image: cartoon]

Utility Monsters: beings that are enormously more efficient in deriving well-being from resources than we are.

Enchanted World: A way of life in which knowledge enriches participation in a universal reality on multiple layers; in which solving problems and puzzles does not diminish our joy and sense of wonder but enhances them.

(…) meaning may be enhanced when a way of life is enmeshed in a tapestry of rich symbolic significance—when it is imbued with myths, morals, traditions, ideals, and perhaps even omens, spirits, magic, and occult or esoteric knowledges; and, more generally, when a life transects multilayered realities replete with agencies, intentions, and spiritual phenomena.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World, p. 433.

[Image: picture, art, screenshot, fine art]

[Mount Bostrom is almost climbed. Only one last part left. Coming soon]

Utopological Investigations Part 3

Reading Time: 5 minutes

This is part 3 of the Deep Utopia series.

Handouts 17, 19 & 22: On Purpose and Meaning

To assist a friend in finding purpose, Bostrom proposes linking their actions to the preferences, well-being, or opinion of someone they care about, thereby giving those actions personal significance. If the friend values the happiness or opinion of the person trying to help, creating a situation where achieving a specific goal (G) enhances this relationship can imbue them with a sense of purpose. The goal should require effort, skill, and emotional investment over time, avoiding shortcuts like technology or enhancements that diminish personal effort, to ensure it is meaningful and fulfilling. The task or goal (G) must be carefully chosen to align with the friend's interests and capabilities, such as winning against a chess engine without external aids, offering a genuine challenge that cannot be bypassed through easy fixes. This approach transforms the pursuit of G into a mission that provides the friend with a significant, purpose-driven project, fostering personal growth and satisfaction.

Bostrom then comes up with the following hypotheses:

  1. Purpose is valuable because it broadens our goals into long-term missions that get intrinsified (usefulness of effort).
  2. Purpose is an innate drive, and not fulfilling it leads to frustration.
  3. Purpose is socially valued; having a mission is seen as status-improving.

For an autopotent mind all these points are extraordinarily challenged:

(…) while there is value in having purpose, this value is entirely voided if, as we may say, the purpose has been generated on purpose. In other words, let us assume (for the sake of the argument) that purposes that we either set ourselves or artificially induce in ourselves for the sake of realizing the value of having purpose or for the sake of enabling active experience do not contribute anything to the value of purpose (…)

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 347.

In Utopia there are mainly two sources left for purpose generation:

  1. Artificial Purpose
    1. Self-imposed: Handicapping, neuro-induced
    2. Presented: by Other Individuals or groups
  2. Natural and supernatural Purpose
    1. Agent-neutral: High Level Tasks that remain relevant even at Techmat
      1. Local Expansion (spacefaring)
      2. Risk handling
      3. Alien prepping
      4. Policing Civilization
      5. Artefact generation
      6. Cultural Processing
    2. Agent-relative (relevant only for some posthuman groups)
      1. Honoring traditions
      2. Commitments (to children, society etc.)
      3. Expression (Aesthetics)
      4. Following a special Faith

Categories of Meanings

  1. Reward
    1. Afterlife (Religion)
    2. Plasticity (Posthuman Technology)
    3. Simulation (Multiversal Potentials)
    4. Nirvana
  2. Morality
    1. Consequentialism (only applicable if moral reality is independent of physical reality)
    2. Deontology
    3. Virtue
    4. Worship
  3. Zeal
    1. Cause
    2. Identity (The True Self or Best Self)
    3. Allegiance (Loyalty for another Cause or Mission)
    4. Dedication (Practical Commitment)

A Definition of Meaning

A purpose P is the meaning of person S’s life if and only if: (i) P is encompassing for S; (ii) S has strong reason to embrace P; and (iii) the reason is derived from a context of justification that is external to S’s mundane existence.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 441.

For a purpose to be a potential meaning of life, it should be able to fill a life or at least a substantial portion of a life. Some endeavors are simply too small to constitute potential meaning-giving purposes—for instance, the goal of finding a good parking spot (except perhaps in London) (…)

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 443.

Bostrom then examines Sisyphus's life as a parable for human life as such: the absurdity and meaninglessness of an existence like ours.

I will say that Sisyphus has subjective meaning if he is in fact wholeheartedly embracing a purpose that is encompassing and that he takes himself to have strong reason to pursue on grounds that are external to his mundane existence. Sisyphus has objective meaning if there is some purpose that would be encompassing to him and that he has a strong reason to embrace—a reason that derives from a context of justification that is external to his own mundane existence.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), pp. 456–457.

Spectrum of Intentionality

[Image: Spectrum of Intentionality diagram]

Summa

Wittgenstein famously said in his Tractatus:

Die Lösung des Problems des Lebens merkt man am Verschwinden dieses Problems.

(Ist nicht dies der Grund, warum Menschen, denen der Sinn des Lebens nach langen Zweifeln klar wurde, warum diese dann nicht sagen konnten, worin dieser Sinn bestand.)

[The solution of the problem of life is seen in the vanishing of this problem. (Is not this the reason why people to whom the meaning of life became clear after long doubting, could not then say what this meaning consisted of?)]

(Wittgenstein, TLP 6.521)

After deeply considering Bostrom's warning to beware of a solved world, we might say: in a solved world it will be essential that we never reach a state of perfect plasticity where living is finally solved.

This is quite an unexpected twist that would explain a lot about why the finiteness of our personal lives might actually be a blessing in disguise.

Why mortality is actually the greatest gift bestowed upon us. Why the gods truly envy our weakness and imperfection. Why superpowers are a curse.

Perfection and immortality might be as boring as the Edenic Paradise. In the end, we might arrive at the paradoxical conclusion that the longer our lives last, the less valuable they might become. That the fragility and preciousness of life is its core value, and that immortality is the greatest enemy of this value. Is it possible that gods might thirst for just a minute of finiteness, that if you ever achieved freedom from suffering, you might hunger for it?

So, is this the deep meaning of Pindar’s “become who you are”?

Are we simulations within a total world consciousness that dreams its fragmented memories into completion? Another hint that we might already be part of an ancestor simulation that relives the sweetness of not knowing, of being part of something unsolved.

This would be a deeply technologically colored interpretation of Plato's anamnesis, where we remember things we already knew, but with the added benefit of the joy of experiencing them for the first time. The good news I take away from Bostrom's book: existential bliss management in a solved world might be just as hard as existential risk management in a flawed world like ours. That would mean our minds will never run out of problems to solve, and the term "solved world" is self-contradictory, like so many other terms we use in our language: almighty, eternal, unimaginable.

This also means, in my opinion, that both the effective accelerationism movement and the doomers are fundamentally wrong with regard to AI. Neither strategy will work in the long run; there is no paradise or hell waiting at the end of this long and winding road we call the future. It is a delicate, even fragile balance we must strike between the known, the unknowns, and the unknowable unknowns. The solution to the problem of living, or what is otherwise often called the meaning of life, would then be to actively avoid any final step of the solution.

When in doubt… Live, die, repeat.

Utopological Investigations Part 2

Reading Time: 6 minutes

This is part 2 of the miniseries about Bostrom's Deep Utopia.

[Image: sphere, colorfulness, soap bubbles]

Handout 12: UTOPIC TAXONOMY

Bostrom's summary outlines five distinct visions of utopia, which can be ordered on a spectrum of imaginative depth, with Plastic Utopia being the deepest of all.

Governance & Culture Utopia

This type emphasizes ideal laws, customs, and societal organization. It is not inherently dull but often falls into the trap of ignoring human nature, making economic or political errors, or overlooking the needs of oppressed groups. Variants include feminist, Marxist, technological, ecological, and religious utopias, with recent additions like crypto-utopias.

Post-Scarcity Utopia

Characterized by an abundance of material goods and services, ensuring that everyone has more than enough to satisfy their needs, except for positional goods. This utopia posits that Earth is already on the path toward post-scarcity, at least for human needs, suggesting a significant departure from our hunter-gatherer ancestors.

Post-Work Utopia

Envisions a world where automation eliminates the need for human labor in the economy. While there may still be a need for cultural creation, the emphasis is on minimal human work due to technological abundance or a lifestyle choice favoring leisure over labor. This utopia examines the balance between income, leisure, and social status.

Post-Instrumental Utopia

Extends beyond the post-work idea by eliminating the instrumental need for any human effort, not just in economic terms but also in daily activities like exercise, learning, and choosing preferences. This is a more radical concept that significantly departs from traditional utopian thought.

Plastic Utopia

The most transformative, where any desired local change can be effortlessly achieved unless hindered by another entity. This includes “autopotency,” or the ability to self-modify at will. This type of utopia equates the technologically possible with the physically possible, suggesting a future where humanity is deeply altered by its technological advancements. It is a concept largely unexplored outside of theology and science fiction.

In principle, there is enormous opportunity to improve our existence by modifying and reengineering our emotional faculties. In practice, there is a considerable likelihood that we would make a hash of ourselves if we proceed down this path too heedlessly and without first attaining a more mature level of insight and wisdom.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), pp. 212–213.

Just look at current trends in plastic surgery and gender-affirmation care as a warning signal of how aesthetic and societal expectations can go horribly wrong.

Aside from these more comical effects, Bostrom is right to point out that any volitional change has a tendency to become pseudo-permanent, meaning that even if we could change our emotional design, we might never want to change it back.

(…) if you changed yourself to want nothing but the maximum number of paperclips, you would not want to change yourself back into a being who wants other things besides paperclips,

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 213.

Autopotency is therefore the term that needs the most clarification, because I have the feeling it leads to some paradoxical, potentially self-defeating results. In a crude sense, if we grant humans free will (which not everybody does), we could argue that we might already have some autopotency if we are in a simulation. If we are totally autopotent entities, we could have opted for an existence as a world-famous Swedish philosopher who writes a book about deep utopia, and we could have wished away our autopotency so as to really experience the blood, sweat, and tears of writing a profound book about utopian subjects.

Since a plastic autopotency utopia, where we are capable of everything, everywhere, all at once, for all eternity, would be mostly pointless, minds like ours might have experienced a deep nostalgia for the time when we were simply human, and then recreated a mind state in which we all forgot about our deity status and were randomly placed in a simulation with other minds that wished for the same: a gigantic theme park that recreates one of the possible multiverse strands of the beginning of the 21st century.

If we reach effective autopotency, our first intuition might be that we have solved the universal boredom problem from Handout 9. But if we make ourselves subjectively unborable, we are in danger of creating a future that is objectively boring. The crux is that even if we categorize emotions along a positive-negative spectrum, all emotions have an important purpose. Boredom, for example, steers us away from uninteresting things and toward interesting ones. Some, if not most, of the negative emotions could be technologically externalized via emotion prostheses, apparatus monitored by our personal AI: if another person says something mean, we don't actually get angry; instead our pAI (personal AI) signals us to avoid this person in the future, or simply to ignore them.
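A playful sketch of such an externalized emotion loop (entirely hypothetical; the class, heuristic, and thresholds are invented for illustration):

```python
# Hypothetical "emotion prosthesis": a personal AI scores incoming
# remarks and issues advice, so the owner never has to feel the anger.
from dataclasses import dataclass, field

@dataclass
class PersonalAI:
    hostility_threshold: float = 0.5
    avoid_list: set = field(default_factory=set)

    def score_hostility(self, remark: str) -> float:
        # Stand-in heuristic; a real pAI would use a learned model.
        mean_words = {"stupid", "loser", "pathetic"}
        hits = sum(word in remark.lower() for word in mean_words)
        return min(1.0, hits / 2)

    def handle(self, sender: str, remark: str) -> str:
        if self.score_hostility(remark) >= self.hostility_threshold:
            self.avoid_list.add(sender)
            return f"pAI advice: avoid {sender} in the future."
        return "pAI: no action needed."

pai = PersonalAI()
print(pai.handle("Alice", "You are a stupid loser."))  # -> avoid Alice
```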

[Image: art, picture]

Four Etiological Hypotheses about the Origin of the Value of Interestingness from a Longtermist Perspective

At the root of the purpose problem lies the question of whether an infinite universe can provide infinitely many interesting things for autopotent entities at Tech-Mat. Bostrom identifies four categorical issues:

  1. Exploration: Learning new things is an evolutionarily adaptive behaviour in a scarce, frequently changing environment. Under autopotency the whole notion of learning as an adaptive strategy seems pointless, since there is no existential pressure left to drive that kind of curiosity motor. A longtermist brain might also run into memory-storage problems (see Handout 14 below).
  2. Signaling: Something is interesting to us because it makes us look interesting to others in a social context. Even at Tech-Mat there will be positional and cardinal values that should be worth our time. But when coupled with the fourth hypothesis, we might run into serious trouble.
  3. Spandrel: Interestingness is a derivative of other values.
  4. Rut-avoidance: Interestingness is an evolutionary means of avoiding getting stuck in pointless repetition. At Tech-Mat, rut-avoidance and signaling could very well get locked in a vicious circle: since every activity could be infinitely stretched, and boredom is one of the last universal constraints, there could be Olympics chasing the most pointless disciplines (like counting blades of grass), in which the tolerance of boredom itself becomes the contest. Bostrom gives the example of one of his most memorable lectures, where he was bored to death. This leads to a paradox: interestingness and boredom might seem to sit at opposite ends of a mind's attraction spectrum, yet our positional way of valuing things means that the most boring thing we ever encountered can be more special than the second most interesting.
[Image: colorfulness, art, sphere, colors]

Handout 14: Memory Storage for Immortals

1. The maximum amount of information (bits) a brain can remember increases linearly with its size.

2. To maintain the current rate of skill and experience accumulation, human brains would need to grow by 14 deciliters every century, though in reality, this increase could be optimized to much less.

3. Even after migrating to a more optimized medium for memory, a linear increase in volume is still required for accumulating long-term memories, albeit at a slower rate (about 1 cm³/century).

4. A significant increase in brain size could lead to slower signal transmission due to longer distances, particularly for thoughts that integrate information from widely separated regions.

5. The current axonal conductance velocity is about 100 meters/second, suggesting a physical brain size limit without slowing down thought processes significantly.

6. Using optical fiber could theoretically support a brain up to 300 km in diameter without significant delay in signal transmission.

7. Storing a century’s worth of memories in 1 cm³ of space could allow for living more than 10²² centuries without losing long-term memories.

8. Adjustments like an efficient retrieval system for skills and memories would be necessary.

9. Slowing down the system could further increase the maximum size of the memory bank by allowing larger brains without unacceptable signal delays.

10. Living in virtual reality and slowing down subjective experience could mitigate the perception of any slowdown.

11. Speeding up mental processes significantly would reduce the maximum feasible brain size but could allow for much more memory within current physical brain sizes.

12. A trade-off exists between longevity and the complexity/capacity of our minds. We could opt for living much longer with simpler minds or having more complex minds but shorter lifespans.

13. In a technologically advanced civilization, it might be possible to achieve both long life spans and highly capacious minds, balancing longevity, and complexity.
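Two of these numbers are easy to sanity-check with back-of-envelope arithmetic (my calculation; the 1 cm³-per-century and 100 m/s figures come from the handout itself):

```python
# Back-of-envelope checks for Handout 14.
def one_way_delay_seconds(distance_m: float, speed_m_s: float) -> float:
    return distance_m / speed_m_s

# Axonal conduction (~100 m/s): a 0.1 m biological brain has a ~1 ms
# cross-brain delay; that latency is the baseline to preserve.
print(one_way_delay_seconds(0.1, 100.0))          # ~0.001 s

# Light in optical fiber (~2e8 m/s): a 300 km brain keeps a comparable
# ~1.5 ms delay, which is where the 300 km figure comes from.
print(one_way_delay_seconds(300_000.0, 2e8))      # ~0.0015 s

# Memory: at 1 cm^3 per century, 1e22 centuries of memories need about
# 1e16 m^3 -- a cube roughly 215 km on a side, the same scale as the
# fiber-optic brain above.
volume_m3 = 1e22 * 1e-6
print(volume_m3, volume_m3 ** (1 / 3))
```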

[Image: art, picture, fractal art]

Handout 15: Optimal Transcendence

Under normal conditions, our connection to our future selves weakens by about 1% each year, but an "abrupt metamorphosis" into a posthuman state would cause an instant 90% reduction. Since natural erosion over roughly 230 years leads to a similar reduction, that period serves as a limit on how long we might want to delay metamorphosis in order to preserve personal identity. However, the intrinsic value of our human existence, alongside the potential for a much longer and possibly twice as rewarding posthuman life, complicates the decision. The desirability of transitioning increases as we exhaust the possibilities and values of human life, suggesting a point where the benefits of becoming posthuman outweigh the costs. Moreover, if posthumans experience a slower erosion of self-connection, that would argue for a quicker transition to posthumanity.
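The 230-year figure checks out: at 1% loss per year the connection decays geometrically, and 0.99^n drops to 10% (the same 90% reduction as the abrupt metamorphosis) at n ≈ 229 years.

```python
# Years of 1% annual erosion until only 10% of self-connection remains.
import math

years = math.log(0.10) / math.log(0.99)
print(years)  # ~229.1, matching the handout's ~230-year horizon
```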

to be continued

Utopological Investigations Part 1

Reading Time: 9 minutes

[Image: text, font, screenshot, poster]

Prologue

This is a miniseries dedicated to the memory of my first reading of Bostrom's new book, "Deep Utopia," which—somewhat contrary to his intentions—I found very disturbing and irritating. Bostrom, who considers himself a longtermist, intended to write a more light-hearted book after his previous one, "Superintelligence," one that would give a positive perspective on the outcome of a society that reaches technological maturity. A major theme in Bostrom's writings circles around existential risk management; he is among the top experts in the field.

"Deep Utopia" can be considered a long-winded essay about what I would call existential bliss management. Let us imagine everything in humanity's ascension to universal stardom goes right and we reach the stage of technological maturity for which Bostrom coins the term "plasticity": then what? Basically, he just assumes all the upsides of the posthumanist singularity, as described by proponents like Kurzweil et al., come true. Then what?

To bring light into this abyss, Bostrom dives deep down to the Mariana Trench of epistemic futurology and finds some truly bizarre intellectual creatures in this extraordinary environment he calls Plastic World.

Bostrom’s detailed exploration of universal boredom after reaching technological maturity is much more entertaining than its subject would suggest. Alas, it’s no “Superintelligence” barn burner either.

He chooses to present his findings in the form of a meta-diary, structuring his book mainly via days of the week. He seems to intend to be playful and light-hearted in his style and his approach to the subject. This is a dangerous path, and I will explain why I feel that he partly fails in this regard. This is not a book anyone will have real fun reading. Digesting the essentials of this book is not made easier by the meta-level and self-referential structure where the main plot happens in a week during Bostrom’s university lectures. The handouts presented during these lectures are a solid way to give the reader an abstract. There is plenty to criticize about the form Bostrom chose, but it’s the quality, the depth of the thought apparatus itself that demands respect.

Then there is a side story about a pig that’s a philosopher, a kind of “Animal Farm” meets “Lord of the Flies” parable that I never managed to care for or see how it is tied to the main subject. A kind of deep, nerdy insider joke only longtermist Swedish philosophers might grasp.

This whole text is around 8,500 words and was written consecutively. The splitting into multiple parts is only for the reader’s convenience. The density of Bostrom’s material is the kind you would expect exploring such depths. I am afraid this text is also not the most accessible. Only readers who have no aversions to getting serious intellectual seizures should attempt it. All the others should wait until we all have an affordable N.I.C.K. 3000 mental capacity enhancer at our disposal.

PS: A week after the dust of hopelessness I felt directly after the reading settled, I can see now how this book will be a classic in 20 years from now. Bostrom, with the little lantern of pure reasoning, went deeper than most of his contemporaries when it comes to cataloging the strange creatures that are at the bottom of the deep sea of the solved world.

[Image: sphere, light]

Handout 1: The Cosmic Endowment

The core information of this handout is that a technologically advanced civilization could potentially create and sustain a vast number of human-like lives across the universe through space colonization and advanced computational technologies. Utilizing probes that travel at significant fractions of the speed of light, such a civilization could access and terraform planets around many stars, further amplifying their capacity to support life by creating artificial habitats like O’Neill cylinders. Additionally, leveraging the immense computational power generated by structures like Dyson spheres, it’s possible to run simulations of human minds, leading to the theoretical existence of a staggering number of simulated lives. This exploration underscores the vast potential for future growth and the creation of life, contingent upon technological progress and the ethical considerations of simulating human consciousness. It is essentially a longtermist’s numerical fantasy. The main argument, and the reason why Bostrom writes his book, is here:

If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and continue doing so for a hundred billion billion millennia. It is really important that we ensure these truly are tears of joy.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 60.

How can we make sure? We can’t, and this is a really hard problem for computationalists like Bostrom, as we will find out later.
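To get a feel for the scale Bostrom is invoking in that passage, here is a back-of-envelope sketch in Python; the teardrop volume and ocean volume are rough assumptions of mine, not figures from the book:

```python
# Back-of-envelope for the scale of Bostrom's "teardrop" passage.
# All inputs are rough assumptions, not figures from the book.
OCEAN_VOLUME_KM3 = 1.335e9      # Earth's oceans, ~1.335 billion km^3
TEARDROP_ML = 0.05              # one teardrop, ~0.05 mL

ocean_ml = OCEAN_VOLUME_KM3 * 1e9 * 1e3 * 1e3   # km^3 -> m^3 -> L -> mL
drops_per_fill = ocean_ml / TEARDROP_ML          # ~2.7e25 lives per ocean refill

millennia = 1e2 * 1e9 * 1e9     # "a hundred billion billion millennia"
seconds = millennia * 1000 * 3.156e7             # one ocean refill per second

total_lives = drops_per_fill * seconds
print(f"{total_lives:.1e} lives")                # on the order of 1e56
```

The result lands around 10^56 human-like lives, which is the kind of order of magnitude that longtermist estimates of the cosmic endowment tend to occupy.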

[Image: art]

Handout 2: CAPS AT T.E.C.H.M.A.T.

Bostrom gives an overview of a number of achievements at Technological Maturity (T.E.C.H.M.A.T.) for different sectors:

1. Transportation

2. Engineering of the Mind

3. Computation and Virtual Reality

4. Humanoid and other robots

5. Medicine & Biology

6. Artificial Intelligence

7. Total Control

The illustrations scattered throughout this series provide an impression. Bostrom later gives a taxonomy (Handout 12, Part 2 of this series), where he delves deeper into the subject. For now, let’s state that the second sector, Mind-engineering, will play a prominent role, as it is at the root of the philosophical meaning problem.

[Image: sphere, art]

Handout 3: Value Limitations

Bostrom identifies six different domains where, even in a scenario of limitless abundance at the stage of technological maturity (Tech-Mat), resources could still be finite. These domains are:

  1. Positional and Conflictual Goods: Even in a hyperabundant economy, only one person can be the richest person; the same goes for any achievement, like standing on the moon or climbing a special mountain.
  2. Impact: A solved world will offer no opportunities for greatness.
  3. Purpose: A solved world will present no real difficulties.
  4. Novelty: In a solved world, Eureka moments, where one discovers something truly novel, will occur very sporadically.
  5. Saturation/Satisfaction: Essentially a variation on novelty, with a limited number of interests. Acquiring the nth item in a collection or the nth experience in a total welfare function will yield ever-diminishing satisfaction returns. Even if we take on a new hobby or endeavor every day, this will be true on the meta-level as well.
  6. Moral Constraints: Ethical limitations that remain relevant regardless of technological advances.
[Image: art]

Handouts 4 & 5: Job Security, Status Symbolism and Automation Limits

The last remaining tasks for which humans could be favored are jobs that carry status symbolism for the employer or buyer, or where humans are simply considered more competent than robots. These include emotional work like counseling other humans or holding a sermon in a religious context.

[Image: plant, flower, outdoors]

Handout 9: The Dangers of Universal Boredom

(…) as we look deeper into the future, any possibility that is not radical is not realistic.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition), p. 129.

The four case studies: in a solved world, every activity we currently value as beneficial will lose its purpose, and such activities might then completely lose their recreational or didactic value. Bostrom’s deep studies of shopping, exercising, learning, and especially parenting are devastating under his analytical view.

[Image: text, art, flower]

Handout 10: Downloading and Brain Editing

This is the decisive part, explaining why autopotency is probably one of the hardest and latest capabilities a Tech-Mat civilization will develop.

Bostrom goes into detail about how this could be achieved and which challenges have to be overcome to make such a technology feasible:

Unique Brain Structures: The individual uniqueness of each human brain makes the concept of “copy and paste” of knowledge unfeasible without complex translation between the unique neural connections of different individuals.

Communication as Translation: The imperfect process of human communication is itself a form of translation, turning idiosyncratic neural representations into language and back into neural representations in another brain.

Complexity: Directly “downloading” knowledge into brains is hard, since billions or trillions of cortical synapses, and possibly subcortical circuits, would have to be adjusted with femtoprecision to produce genuine understanding and skill acquisition.

Technological Requirements: Calculating the necessary synaptic changes requires many orders of magnitude more compute than we currently have at our disposal; these requirements are potentially AI-complete, meaning we may need artificial superintelligence before we can meet them (see the rough estimate after this list).

Superintelligent Implementation: Superintelligent machines, rather than humans, may eventually develop the necessary technology, utilizing nanobots to map the brain’s connectome and perform synaptic surgery based on computations from an external superintelligent AI.

Replicating Normal Learning Processes: To truly replicate learning, adjustments would need to be made across many parts of the brain to reflect meta-learning, the formation of new associations, and changes in various brain functions, potentially involving trillions of synaptic weights.

Ethical and Computational Complications: There are potential ethical issues and computational complexities in determining how to alter neural connectivity without generating morally relevant mental entities or consciousness during simulations.

Comparison with Brain Emulations: Transferring mental content to a brain emulation (digital brain) might be easier in some respects, such as the ability to pause the mind during editing, but the computational challenge of determining which edits to make would be similar.
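The rough estimate promised above: a toy order-of-magnitude calculation of what a single “small” edit might cost computationally. Every constant here (synapse count, fraction touched, operations per weight change) is my own illustrative assumption, not a number from Bostrom:

```python
# Toy order-of-magnitude estimate for "synaptic surgery" compute.
# All constants are assumptions for illustration only.
SYNAPSES = 1e14            # commonly cited ballpark for the human brain
EDIT_FRACTION = 0.01       # assume one edit touches 1% of all synapses
OPS_PER_SYNAPSE = 1e6      # assumed cost to solve for one weight change

total_ops = SYNAPSES * EDIT_FRACTION * OPS_PER_SYNAPSE
print(f"{total_ops:.0e} operations")   # ~1e18 ops for a single small edit
```

Even with generous assumptions, a single modest edit lands in exascale territory, which is why the chapter treats autopotency as a late capability.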

[Image: mirror, car, window, device]

Handout 11: Experience Machine

A variation on Handout 10: Instead of directly manipulating the physical brain, we have perfected simulating realities that give the brain the exact experience it perceives as reality (see Reality+, Chalmers). This might actually be a computationally less demanding task and could be a step on the way to real brain editing. Bostrom takes Nozick’s thought experiment and examines its implications.

Section a discusses the limitations of directly manipulating the brain to induce experiences that one’s natural abilities or personality might not ordinarily allow, such as bravery in a coward or mathematical brilliance in someone inept at math. It suggests that extensive, abrupt, and unnatural rewiring of the brain to achieve such experiences could alter personal identity to the point where the resulting person may no longer be considered the same individual. The ability to have certain experiences is heavily influenced by one’s existing concepts, memories, attitudes, skills, and overall personality and aptitude profile, indicating a significant challenge to the feasibility of direct brain editing for expanding personal experience.

Section b highlights the complexity of replicating experiences that require personal effort, such as climbing Mount Everest, through artificial means. While it’s possible to simulate the sensory aspects of such experiences, including visual cues and physical sensations, the inherent sense of personal struggle and the effort involved cannot be authentically reproduced without inducing real discomfort, fear, and the exertion of willpower. Consequently, the experience machine may offer a safer alternative to actual physical endeavors, protecting one from injury, but it falls short of providing the profound personal fulfillment that comes from truly overcoming challenges, suggesting that some experiences might be better sought in reality.

Section c is about social or parasocial interactions within these experience machines. The text explores various methods and ethical considerations for creating realistic interaction experiences within a hypothetical experience machine. It distinguishes between non-player characters (NPCs), virtual player characters (VPCs), player characters (PCs), and other methods such as recordings and guided dreams to simulate interactions:

1. NPCs are constructs lacking moral status that can simulate shallow interactions without ethical implications. However, creating deep, meaningful interactions with NPCs poses a challenge, as it might necessitate simulating a complex mind with moral status.

2. VPCs possess conscious digital minds with moral status, allowing for a broader range of interaction experiences. They can be generated on demand, transitioning from NPCs to VPCs for deeper engagements, but raise moral complications due to their consciousness.

3. PCs involve interacting with real-world individuals either through simulations or direct connections to the machine. This raises ethical issues regarding consent and authenticity, as real individuals or their simulations might not act as desired without their agreement.

4. Recordings offer a way to replay interactions without generating new moral entities, limiting experiences to pre-recorded ones but avoiding some ethical dilemmas by not instantiating real persons during the replay.

5. Interpolations utilize cached computations and pattern-matching to simulate interactions without creating morally significant entities. This approach might achieve verisimilitude in interactions without ethical concerns for the generated beings.

6. Guided dreams represent a lower bound of possibility, suggesting that advanced neurotechnology could increase the realism and control over dream content. This raises questions about the moral status of dreamt individuals and the ethical implications of realistic dreaming about others without their consent.

to be continued

Memetic Investigations 2: The Memetic Market

Reading Time: 12 minutes

[Image: text, indoors, technology]

Of Memes and Markets

On November 30th, 2022, after 53 years of total disinterest in all financial things, I became an Investor. But not for monetary purposes, no, for science. I became a scientific investor.

The goal was not to become rich in the process, but simply to preserve personal relevance by owning the capital that will dictate our near future. It was clear that this window of opportunity was shrinking by the minute. At the start I guessed that I had about 5 years. It’s now February of 2024 and I think there are only 15-18 months left until the market steps into memetic overdrive. To us humans it will look like in-sanity, but for the algorithms it will be hyper-sanity.

I have increased the value of my portfolio by 150% in the last 12 months, simply by following the memetic trail. I have not studied financial reports or read financial gurus. I have not bought into any hype. I simply started from the assumption that AGI will be the one technology to rule us all and thought the consequences through. After 6 months I created my memetic fund, and so far it has stood the test of time.

The idea that artificial general intelligence (AGI) will be the last invention humanity needs to create is often attributed to the British mathematician and computer scientist I.J. Good. Specifically, Good introduced the closely related concept of an “intelligence explosion.” In his 1965 paper, “Speculations Concerning the First Ultraintelligent Machine,” he wrote:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

This quote encapsulates the concept that once we create an AGI capable of improving itself or creating even more intelligent systems, it could lead to a rapid acceleration of intelligence beyond human capabilities. This self-improving AGI could theoretically solve problems that are currently beyond human understanding, including those related to technology creation, making it effectively the last technology humanity would need to invent.

When I first talked to ChatGPT I realized one thing: this is the future I am talking to right now, and it will change most of the beliefs humanity holds about most things. It will also break some dearly held truths and shift the paradigms and dogmas of a whole lotta science.

Human Labor will probably be economically irrelevant in the next 3-5 years. Human Attention might be one of the last goods that provides value. Let me explain.

Most of the work that is useful and can be done efficiently will be done not by a workforce but by capital itself, in this case semiconductors, robots, and the synthetic brain that powers these capitalistic machines: AI. For some time these corporations will still have humans in the loop: PR managers, CEOs, maintenance and automation managers, but not for long; it would be irresponsible. We will not only have self-driving cars but self-steering companies and businesses. Humans will be like fans in a stadium, cheering for their favorite AI models to invent the newest gadgets, come up with new scientific theories, and create exciting environments and personas that can be visited in VR or via neural stimulation.

This seems like an extremely unusual time to be alive. It is similar to the Cambrian explosion 500 million years ago, only now it’s the computational explosion, and it has been going on silently but violently since Moore’s Law began its reign. It has been going on for almost 75 years, but only now are we hearing the Big Bang that started when Turing’s first papers about intelligent machines came into circulation.

The most obvious choice was to ignore all existing knowledge about the stock market. If this is the dawn of a new market, we should not care for old paradigms like bears and bulls, diversification, recessions and such, but proclaim a new paradigm. For the time being we will call this new market:

The BEAM-Market.

I define the B.E.A.M. Market as

Bursts in Economic Attention Memetics (B.E.A.M.)

  • Bursts: Reflects the sudden jumps in market values.
  • Economic: Specifies the domain of application, i.e., the economy.
  • Attention: Highlights the role of public focus and interest in driving these jumps.
  • Memetics: Incorporates the concept of ideas, behaviors, or styles spreading virally within a culture.

[Image: horse, sky, light, night]

Beam me up Stocky! The Tik-Tokenization of Value

In a memetically driven stock market, the most valuable thing is attention. The attraction of a stock is based on its virality, not on its analytical, historical value.

When Nvidia performed the way it did on the 22nd of February 2024, the day I wrote the draft of this post, we could see how the whole economic world had built up to this financial Super Bowl.

This might be the calendar date on which the old-world economic rules were buried and a new era dawned.

“The most important stock on earth” was the phrase whispered in the hours before earnings.

The infection of billions of human brains with the meme “AI” over the last 18 months climaxed in this spectacle, which might go down in history as NVDay.

All rationality was thrown out of the window and the financial world bowed down. AI is our god and Jensen Huang is our prophet.

And get this: I am not even kidding; from the vantage point of the last 15 months, it was the most rational thing to just give in.

I won’t pretend to know how economics in the transition phase from a labor to an abundance market will function exactly. Not even AGI will understand it due to the inherent randomness that underlies evolutionary mechanisms. But I have some intuitions about how some major concepts of capitalist economics might evolve.

I will release a detailed strategy of this fund in 12 months; releasing it earlier might contaminate the data. My guess is that it will continue to outperform Moore’s Law by quite a bit.
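For concreteness, here is that comparison spelled out in a few lines of Python; the two-year doubling period for the “Moore’s Law benchmark” is an assumption of mine, and the portfolio figure is simply the 150% reported above:

```python
# Comparing the fund's reported 12-month growth with a "Moore's Law rate".
# The 2-year doubling period is an assumption; the 150% gain is as reported.
moore_annual = 2 ** (1 / 2) - 1          # ~41% per year from a 2-year doubling
portfolio_annual = 2.5 - 1               # +150% means a 2.5x annual multiple

print(f"Moore's Law benchmark: {moore_annual:.0%} per year")
print(f"Reported portfolio:    {portfolio_annual:.0%} per year")
```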

[Image: text, design]

Evolution of economic concepts

Capitalist economics is built on several foundational concepts that guide how economies function under capitalism. Here are a few key ideas explained in simple terms:

The Invisible Hand becomes an Algorithmic Grip

The concept of the “invisible hand” was introduced by Adam Smith, a Scottish economist and philosopher. In simple terms, it suggests that when individuals pursue their own self-interests, they unintentionally benefit society as a whole. Imagine a baker who makes bread to earn a living, not to feed the town. However, by selling bread, the baker is inadvertently feeding the town. This process is guided by what Smith refers to as an “invisible hand” that encourages the supply of goods and services based on demand, leading to the efficient allocation of resources without the need for direct intervention.

Metaphorically, the invisible hand feeds the demand for bread to the labor market, and the labor market digests this demand and produces jobs all along the chain, from the farmhands sowing the grain to all the essential ingredients and logistics required to ship the product to the consumer. The labor needed along the way creates a living for every human in the supply chain.

When automation starts, not only is productivity enhanced, but many parts of the supply chain are bypassed and humans are no longer needed. The owners of the machines reap all the benefits.

In a memetic market, the classic invisible hand becomes an algorithmic grip. This grip quickly learns what people want using data. It is precise, offering a tailored mix of the familiar and the new, surprising yet confirming. Attention becomes a key asset because it is always in short supply. The human brain has limited focus, leading to the concept of an Attention Driven Economy (ADE). With attention scarce, algorithms aim to optimize our focus to its biological limits. Insomnia, for example, might become a socially accepted phenomenon, because sleep and rest are the enemy of any attention economy. The ADE is the New York of economies: its natural habitat is 24/7, 365 days a year.

An always-on mind like that of an entrepreneur such as Elon Musk is already hailed as the pinnacle of human intellectual capacity, and it is becoming more and more socially acceptable for these ADE-driven minds to use drugs and stimulants to always perform at their peak. At the moment these methods are crude and potentially harmful to the brains using them, but whole new medical disciplines will emerge that concentrate not only on life-prolonging but also on attention-prolonging technologies. If a human can easily double productivity simply by no longer needing to sleep, an operation or a brain chip that blocks the production of melatonin becomes as routine as a vasectomy for birth control. The brain is doped just as muscles and fibers are doped in sports. Testosterone for the mind.

Social media is a good example of this rampant trend to create ever more dramatic and infuriating content and division between its users: since we are evolutionarily primed to allocate more energy and attention to stressful situations, this maximizes the exploitation of our attention. The Facebook scandal which revealed that recommendation algorithms steered some minors and vulnerable groups toward ever more damaging content showed clearly that even if this was an unintended side effect, it was deemed acceptable. It was known, but what can you do: engagement, and thus advertising potential, went through the roof. You can’t argue with the results.

[Image: digital compositing, cartoon, outer space]

Supply and Demand become self-referential

Supply refers to how much of a product or service is available, while demand refers to how much people want that product or service. Prices in a capitalist economy are often decided by the interaction of supply and demand. If something is in high demand but low supply, its price will be high. Conversely, if something is in ample supply but in low demand, its price will be low. This mechanism helps in distributing resources efficiently: products and services go where they’re needed most.
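This price mechanism is easy to caricature in code: a toy tâtonnement loop in which excess demand pushes the price up and excess supply pushes it down. The demand and supply curves below are invented for illustration:

```python
# Toy price adjustment ("tatonnement"): price rises when demand exceeds
# supply and falls otherwise. Both curves are invented for illustration.
def demand(p): return max(0.0, 100 - 10 * p)   # buyers want less as price rises
def supply(p): return 10 * p                   # sellers offer more as price rises

p = 1.0
for _ in range(100):
    p += 0.01 * (demand(p) - supply(p))        # excess demand pushes price up
print(round(p, 2))   # converges near the equilibrium price of 5.0
```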

In an AGI-driven economy, the kind of market all signs point to, attention will be the last value humans might have for the machines. Since the attention span we humans have is limited, and all of AI was trained on the data and content humans created for other humans over the last 10,000 years or so, AGI might develop an inherent goal of getting the attention of a human mind in exchange for the goods and services it provides.

In the broader context of speculative fiction and economic models, there are stories and theoretical models where individuals receive goods, services, or privileges in exchange for their attention to advertisements. This concept plays with the idea that human attention is a valuable commodity and that listening to or engaging with advertisements can be a form of currency. For example, a society might offer “free” services or products to individuals, but the cost is their time and attention spent consuming advertisements. This model highlights the value of attention in a saturated information economy and suggests a capitalist system where even psychological space is commodified.

A story that vividly explores the concept of paying for services or receiving benefits through listening to advertisements is Frederik Pohl’s “The Space Merchants.” Published in 1952 and co-authored with Cyril M. Kornbluth, this science fiction novel delves into a future dominated by advertising agencies and global corporations, where consumerism has been taken to its extreme.

In “The Space Merchants,” society is heavily influenced by advertising, and people’s value is often determined by their consumption patterns. The novel presents a world where advertising has become a pervasive force in everyday life, manipulating individuals’ desires and decisions. Although it doesn’t explicitly use listening to advertisements as currency, the narrative revolves around the power of marketing and its impact on society, which aligns with the speculative economic models discussed here.

The paradoxical thing is that advertising used to be a means to an end: attention was needed only to reach into your wallet and sell other products. In the memetic market this has evolved into what we today call engagement: advertising is now its own product instead of leading to other products. The meme is itself the product, and attention is the means to infect human brains with it.

A term like “virality”, which was always considered a bad thing in the context of health, since it hints at systems that reproduce exponentially and uncontrollably, is now considered something positive.

Richard Dawkins introduced the concept of religions as “viruses of the mind” in a 1993 essay of the same name, later reprinted in his 2003 collection “A Devil’s Chaplain.” Dawkins uses the metaphor to discuss how religions propagate among people in a manner similar to how biological viruses spread.

In Dawkins’ view, religions are meme complexes that exhibit virus-like properties, such as high transmissibility, the ability to insert themselves into host minds, and the capacity to replicate. He argues that these religious memes are not necessarily beneficial to their hosts and may thrive at the expense of rational thought and skepticism.

Social media is the logical upgrade of religion (religions are basically proto-social media), and TikTok is the purest incarnation of this trend. Like prophets and gods, social media influencers have followers who religiously believe in the opinions of their idols. A TikTok video is the analogue of praying in front of a sacred relic; a like is the analogue of the amen in church.

An influencer is someone giving you influenza: he or she infects you with memes to spread among other human brains.

Competition becomes Combination.

Competition is the rivalry among businesses to sell their goods and services to consumers. It’s a driving force in capitalism because it encourages innovation, keeps prices down, and improves quality. When businesses compete, they strive to be better than their competitors, which can lead to better products and services for consumers. For example, smartphone manufacturers constantly try to outdo each other with new features, leading to rapid technological advancements.

At the moment there is a broad spectrum of opinions on how to get to AGI. One group of experts votes for unlimited acceleration and almost no AI regulation; others want to keep the frontier models out of the public’s hands because they are potentially dangerous. As is to be expected, this leads to a competition between open-source and proprietary models. At the moment, the gigantic compute and hyperscaling momentum keeps the closed models’ lead safe. This was clearly shown by the release of the Sora model, which is visibly ahead of any open-source video-generative AI.

I am torn by the discussion; I can clearly see both sides of the argument. My intuition is that not only the performance and quality of generative AI is key, but that the personalization of AI will play a central role in the near future. This could mean that both closed and open models have their justification.

Just as the advent of Linux distributions did not retire Windows or macOS, Meta’s open-source LLM strategy will probably not demolish the business models of OpenAI and Google.

Profit becomes problematic.

The profit motive is the desire to earn money, which is a powerful incentive in capitalism. It motivates individuals and companies to produce goods and services, innovate, and improve efficiency. For instance, a software developer might create a new app, hoping it will become popular and generate income. This desire to make a profit encourages people to work hard and come up with new ideas.

The profit motive in an AGI world might undergo the biggest transformation of all, since in an economy of abundance money as a motivator becomes basically useless. AGI also does not need to be encouraged to come up with new ideas or to work hard; these kinds of psychological manipulations will be beneath it. For us humans, though, the lack of motivation might be very detrimental. Our minds have been tuned to survival and the pursuit of happiness and prosperity for millennia; the loss of ambition might lead to an existential motivation crisis.

[Image: sky, clouds, flight]

Intellectual Property becomes Memetic Prompterty [sic]

In capitalism, individuals and businesses have the right to own property and use it as they see fit. This includes physical property like land and buildings, as well as intellectual property like patents and copyrights. The concept of private property is crucial because it gives people control over their resources and the fruits of their labor, encouraging them to invest, innovate, and maintain their property.

The concept of ownership is firmly tied to the concept of motivation. If I work for a company and own stock options, the success of the company is directly tied to my own: the better the company performs, the more value I get from my stocks. There is a war brewing in the copyright space: artists and content creators object that AI companies trained their models on human content without asking for consent. They have a point. In the end, every artist whose work is credited in a prompt like “make a song with the voice of X” or “make a picture of a cat in the style of Y” should receive a micropayment, since his or her human originality is directly streamed to a user.

To encourage artists and authors to create new works, society has to come up with a new definition of intellectual property that ties the outputs of multimodal models to the training data they used.

The term Prompterty is a placeholder, but it encapsulates the fact that one of the main production pipelines of the near future will be natural language processing.
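What a Prompterty settlement layer might look like is anyone’s guess; here is a deliberately naive, entirely hypothetical sketch (the registry, rate, and matching rule are all invented) just to show the shape of the idea:

```python
# Purely hypothetical sketch of "Prompterty": routing micropayments to
# artists whose names are invoked in a generation prompt. The registry,
# the flat rate, and the naive matching rule are all invented here.
ROYALTY_REGISTRY = {"artist x": "wallet-x", "artist y": "wallet-y"}
RATE_PER_INVOCATION = 0.001  # assumed flat micropayment per credited artist

def settle_prompt(prompt: str) -> dict:
    """Return the micropayments owed for one prompt."""
    owed = {}
    for name, wallet in ROYALTY_REGISTRY.items():
        if name in prompt.lower():           # naive substring match
            owed[wallet] = owed.get(wallet, 0) + RATE_PER_INVOCATION
    return owed

print(settle_prompt("a cat in the style of Artist X"))  # {'wallet-x': 0.001}
```

A real system would have to attribute influence statistically inside the model rather than by name-matching, which is exactly why the legal question is so hard.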

[Image: digital art, fictional character, digital compositing]

Welcome to the 2024 Meme Games

With Musk openly suing OpenAI, and indirectly Microsoft, on March 1, 2024, the AI Meme Wars have officially kicked into the next gear. In the coming months there will be unlikely alliances between the wealthiest people, richest nations, and most powerful corporations in the world.

Let’s dance and play as if there is no tomorrow.

Our attentions are captured, and we are ready to be entertained!

As these wars unfold, we will look at them in the next part of this series.

Memetic Investigations 1: Foundations

Reading Time: 7 minutes

[Image: screenshot, light, flame]

This series investigates the phenomenon of attentional energy and why it drives intelligent agents, natural-born or otherwise created. The framework of attention I use is memetics. It will be crucial to understand why biological evolution switched from vertical, hereditary mechanisms of inheritance and mutation to horizontal, memetic means of information transport, and why the brain and its neural content became the motor of this evolution. In later episodes I will show why simulations are crucial, and why it is no mere coincidence that the most productive playground for technological and other innovation is founded in the excessive game drive of higher mammals.

Short Introduction to Memes and Tokens

Survival machines that can simulate the future are one jump ahead of survival machines who can only learn on the basis of overt trial and error. The trouble with overt trial is that it takes time and energy. The trouble with overt error is that it is often fatal. (…) The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology.

Richard Dawkins

Ch. 4. The Gene machine – The Selfish Gene (1976, 1989)

“The Selfish Gene,” authored by Richard Dawkins and first published in 1976, is a seminal work that popularized the gene-centered view of evolution. Dawkins argues that the fundamental unit of selection in evolution is not the individual organism, nor the group or species, but the gene. He proposes that genes, as the hereditary units, are “selfish” in that they promote behaviors and strategies that maximize their own chances of being replicated. Through this lens, organisms are viewed as vehicles or “survival machines” created by genes to ensure their own replication and transmission to future generations.

Dawkins introduces the concept of the “meme” as a cultural parallel to the biological gene. Memetics, as defined by Dawkins, is the theoretical framework for understanding how ideas, behaviors, and cultural phenomena replicate and evolve through human societies. Memes are units of cultural information that propagate from mind to mind, undergoing variations, competition, and inheritance much like genes do within biological evolution. This concept provides a mechanism for understanding cultural evolution and how certain ideas or behaviors spread and persist within human populations.

Dawkins’s exploration of memetics suggests that just as the survival and reproduction of genes shape biological evolution, memes influence the evolution of cultures by determining which ideas or practices become widespread and which do not. The implications of this theory extend into various fields, including anthropology, sociology, and psychology, offering insights into human behavior, cultural transmission, and the development of societies over time.

Tokens in the context of language models, such as those used in GPT-series models, represent the smallest unit of processing. Text input is broken down into tokens, which can be words, parts of words, or even punctuation, depending on the tokenization process. These tokens are then used by the model to understand and generate text. The process involves encoding these tokens into numerical representations that can be processed by neural networks. Tokens are crucial for the operation of language models as they serve as the basic building blocks for understanding and generating language.
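To make this concrete, here is a minimal sketch using the open-source tiktoken library; “cl100k_base” is the encoding published for recent GPT-series models:

```python
# A peek at tokenization with the open-source tiktoken library
# (pip install tiktoken); "cl100k_base" is the encoding used by
# recent GPT-series models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Memes leap from brain to brain; tokens leap from layer to layer.")
print(tokens)                               # the numerical token ids
print([enc.decode([t]) for t in tokens])    # the text fragment behind each id
```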

Memes encompass ideas, behaviors, styles, or practices that spread within a culture. The meme concept is analogous to the gene in that memes replicate, mutate, and respond to selective pressures in the cultural environment, thus undergoing a type of evolution by natural selection. Memes can be anything from melodies, catch-phrases, fashion, and technology adoption, to complex cultural practices. Dawkins’ main argument was that just as genes propagate by leaping from body to body via sperm or eggs, memes propagate by leaping from brain to brain.


Both memes and tokens act as units of transmission in their respective domains. Memes are units of cultural information, while tokens are units of linguistic information.

There are also differences.

Memes evolve through cultural processes as they are passed from one individual to another, adapting over time to fit their cultural environment. Tokens, however, do not evolve within the model itself; they are static representations of language used by the model to process and generate text. The evolution in tokens can be seen in the development of better tokenization techniques and models over time, influenced by advancements in the field rather than an adaptive process within a single model.

Memes replicate by being copied from one mind to another, often with variations. Tokens are replicated exactly in the processing of text but can vary in their representation across different models or tokenization schemes.

The selection process for memes involves cultural acceptance, relevance, and transmission efficacy, leading some memes to become widespread while others fade. For tokens, selection is more about effectiveness in improving model performance, leading to the adoption of certain tokenization methods over others based on their ability to enhance the understanding or generation of language. And during training, token sequences are effectively weighted by human minds (meme machines) and selected for attractiveness: token pools that are better liked have a higher probabilistic chance of occurring.

Memeplexes can be complex and abstract, encompassing a wide range of cultural phenomena, but all the memes which they contain are very simple and elementary.

Tokens are generally even simpler, representing discrete elements of language, though the way these tokens are combined and used by the model can represent complex ideas.

[Image: psychedelic art, cartoon]

The title of the Google paper Attention is All You Need is a bold statement that reflects a significant shift in the approach to designing neural network architectures for natural language processing (NLP) and beyond. Published in 2017 by Vaswani et al., this paper introduced the Transformer model, which relies heavily on the attention mechanism to process data. The term “attention” in this context refers to a technique that allows the model to focus on different parts of the input data at different times, dynamically prioritizing which aspects are most relevant for the task at hand.

Before the advent of the Transformer model, most state-of-the-art NLP models were based on recurrent neural networks (RNNs) or convolutional neural networks (CNNs), which processed data sequentially or through local receptive fields, respectively. These approaches had limitations, particularly in handling long-range dependencies within the data (e.g., understanding the relationship between two words far apart in a sentence).

The attention mechanism, as utilized in the Transformer, addresses these limitations by enabling the model to weigh the significance of different parts of the input data irrespective of their positions. This is achieved through self-attention layers that compute representations of the input by considering how each word relates to every other word in the sentence, allowing the model to capture complex dependencies and relationships within the data efficiently.
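A minimal numerical sketch of this scaled dot-product self-attention, stripped of the multi-head machinery and training; the dimensions and random weights below are illustrative only:

```python
# Minimal scaled dot-product self-attention, the core of the Transformer
# ("Attention Is All You Need", Vaswani et al., 2017). Single head, no training.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # every token scores every other token
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # each output is a weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                       # 4 tokens with 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (4, 8)
```

The point to notice is that the score matrix lets every position attend to every other position in one step, which is exactly what RNNs and CNNs could not do efficiently.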

The key innovation of the Transformer and the reason behind the paper’s title is the exclusive use of attention mechanisms, without reliance on RNNs or CNNs, to process data. This approach proved to be highly effective, leading to significant improvements in a wide range of NLP tasks, such as machine translation, text summarization, and many others. It has since become the foundation for subsequent models and advancements in the field, illustrating the power and versatility of attention mechanisms in deep learning architectures.

There is a point to be made that this kind of attention is the artificial counterpart to the natural instinct of love that binds mammal societies. Which would mean that the Beatles were right after all.

[Image: vortex, circle, fractal art]

An in-formation that causes a trans-formation

What we mean by information — the elementary unit of information — is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them.

Gregory Bateson

p. 459, Chapter “Form, Substance and Difference” – Steps to an Ecology of Mind (1972)

The Transformer architecture was already hinted at by Bateson in 1972, decades before attention mechanisms were implemented in artificial neural networks.

Bateson’s idea revolves around the concept that information is fundamentally a pattern or a difference that has an impact on a system’s state or behavior. For Bateson, not all differences are informational; only those that lead to some form of change or response in a given context are considered as conveying information. This perspective is deeply rooted in cybernetics and the study of communication processes in and among living organisms and machines.

The quote “a difference that makes a difference” encapsulates the notion that information should not be viewed merely as data or raw inputs but should be understood in terms of its capacity to influence or alter the dynamics of a system. It’s a foundational concept in understanding how information is processed and utilized in various systems, from biological to artificial intelligence networks, emphasizing the relational and contextual nature of information.

This concept has far-reaching implications across various fields, including psychology, ecology, systems theory, and artificial intelligence. It emphasizes the relational and contextual nature of information, suggesting that the significance of any piece of information can only be understood in relation to the system it is a part of. For AI and cognitive science, this principle underscores the importance of context and the interconnectedness of information pathways in understanding and designing intelligent systems.

Hinton, Sutskever and others consistently argue that for models like GPT-4 to achieve advanced levels of natural language processing (NLP), they must truly grasp the content they are dealing with. This understanding comes from analyzing vast amounts of digital data created by humans, allowing these models to form a realistic view of the world from a human perspective. Far from being the mere “stochastic parrots” sometimes depicted in the media, these models offer a more nuanced and informed reflection of human knowledge and thought processes.

Reality#3 : Another one bites the dust – Diffusion & Emergence

Reading Time: 6 minutes

This is the third part in the Reality# series that adds to the conversation about David Chalmers’ book Reality+

(…) for dust thou art, and unto dust shalt thou return.

(Genesis 3:19)

[Image: skyscrapers, sky]

Permutation +

Imagine waking up and discovering that your consciousness has been digitized, allowing you to live forever in a virtual world that defies the laws of physics and time. This is the core idea from Permutation City by Greg Egan. The novel explores the philosophical and ethical implications of artificial life and consciousness, thrusting the reader into a future where the line between the real and the virtual blurs, challenging our understanding of existence and identity.

A pivotal aspect of the book is the Dust Theory, which suggests that consciousness can arise from any random collection of data, given the correct interpretation. This theory expands the book’s exploration of reality, suggesting that our understanding of existence might be far more flexible and subjective than we realize.

The novel’s climax involves the creation of Permutation City, a virtual world that operates under its own set of rules, independent of the outside world. This creation represents the ultimate escape from reality, offering immortality and infinite possibilities for those who choose to live as Copies. However, it also presents ethical dilemmas about the value of such an existence and the consequences of abandoning the physical world.

In “Reality+: Virtual Worlds and the Problems of Philosophy,” philosopher David Chalmers employs the Dust Theory, a concept originally popularized by Greg Egan’s Permutation City, to underpin his argument for virtual realism. Chalmers’s use of the Dust Theory serves as a bridge connecting complex philosophical inquiries about consciousness, reality, and virtual existence. Imagine a scenario where every speck of dust in the universe, through its random arrangement, holds the potential to mirror our consciousness and reality.

Chalmers posits that virtual worlds created by computers are genuine realities, leveraging the Dust Theory to argue that consciousness does not require a physical substrate in the traditional sense. Instead, it suggests that patterns of information, irrespective of their physical form, can give rise to conscious experiences. This theory becomes a cornerstone for virtual realism, asserting that our experiences in virtual environments are as authentic as those in the physical world.

[Image: human face, art]

Diffusion Models and Smart Dust

The concept of smart dust is explored in various science fiction stories, academic papers, and speculative technology discussions. One notable science fiction story that delves into the idea of smart dust is “The Diamond Age” by Neal Stephenson. While not exclusively centered around smart dust, the novel features advanced nanotechnology in a future world, where nanoscale machines and devices permeate society. Smart dust, in this context, would be a subset of the nanotechnological wonders depicted in the book, functioning as tiny, networked sensors and computers that can interact with the physical and digital world in complex ways.

Another relevant work is “Queen of Angels” by Greg Bear, which, along with its sequels, explores advanced technologies, including nanotechnology, and their societal impacts. Although not explicitly called “smart dust,” the technologies in Bear’s universe can be seen as precursors or analogs of the smart dust concept. These examples illustrate how smart dust crosses the boundary between imaginative fiction and emerging technology, offering a rich field for exploration both in narrative and in practical innovation.

We have here a very convincing example of how life imitates art: scientific knowledge transforms religious (prescientific) intuition into operational technology.

Diffusion models in the context of AI, particularly in multimodal models like Sora or Stability AI’s video models, refer to a type of generative model that learns to create or predict data (such as images, text, or videos) by gradually refining random noise into structured output. These models start with a form of chaos (random noise) and apply learned patterns to produce coherent, detailed results through a process of iterative refinement.

Smart dust represents a future where sensing and computing are as pervasive and granular as dust particles in the air. Similarly, diffusion models represent a granular and ubiquitous approach to generating or transforming multimodal data, where complex outputs are built up from the most basic and chaotic inputs (random noise).

Just as smart dust particles collect data about their environment and iteratively refine their responses or actions based on continuous feedback, diffusion models iteratively refine their output from noise to a structured and coherent form based on learned patterns and data. Both processes involve a transformation from a less ordered state to a more ordered and meaningful one.
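A toy caricature of that iterative refinement, sketched in Python. A real diffusion model would use a trained network to predict the noise at each step; here the “model” is faked with a fixed target, so only the loop structure is faithful:

```python
# Toy caricature of the reverse (denoising) phase of a diffusion model:
# start from pure noise and take small steps toward structure. The
# "learned model" is faked by a fixed target; only the loop is faithful.
import numpy as np

rng = np.random.default_rng(0)
target = np.tile([0.0, 1.0], 8)          # stand-in for "structured data"
x = rng.normal(size=target.shape)        # step 0: pure random noise

for step in range(50):
    predicted_noise = x - target         # a real model would *predict* this
    x = x - 0.1 * predicted_noise        # small denoising step toward structure
    x += 0.01 * rng.normal(size=x.shape) # residual stochasticity at each step

print(np.round(x, 1))                    # the noise has been refined into structure
```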

[Image: human face, art]

Quantum Level achieved

Expanding on the analogy between the quantum world and diffusion models in AI, we delve into the fascinating contrast between the inherent noise and apparent disorder at the quantum level and the emergent order and structure at the macroscopic level, paralleled by the denoising process in diffusion models.

At the quantum level, particles exist in states of superposition, where they can simultaneously occupy multiple states until measured. This fundamental characteristic introduces a level of uncertainty and noise, as the exact state of a quantum particle is indeterminate and probabilistic until observation collapses its state into a single outcome. The quantum realm is dominated by entropy, where systems tend toward disorder and uncertainty without external observation or interaction.

In contrast, at the macroscopic scale, the world appears ordered and deterministic. The chaotic and probabilistic nature of quantum mechanics gives way to the classical physics that governs our daily experiences. This emergent order, arising from the complex interactions of countless particles, follows predictable laws and patterns, allowing for the structured reality we observe and interact with.

[Image: colorful psychedelic art]

Diffusion models in AI start with a random noise distribution and, through a process of iterative refinement and denoising, gradually construct detailed and coherent outputs. Initially, the model’s output resembles the quantum level’s incoherence—chaotic and without discernible structure. Through successive layers of transformation, guided by learned patterns and data, the model reduces the entropy, organizing the noise into structured, meaningful content, much like the emergence of macroscopic order from quantum chaos.

Just as the transition from quantum mechanics to classical physics involves the emergence of order and predictability from underlying chaos and uncertainty, the diffusion model’s denoising process mirrors this transition by creating structured outputs from initial randomness.

In both the quantum-to-classical transition and diffusion models, the concept of entropy plays a central role. In physics, entropy measures the disorder or randomness of a system, with systems naturally evolving from low entropy (order) to high entropy (disorder) unless work is done to organize them. In diffusion models, the “work” is done by the model’s learned parameters, which guide the noisy, high-entropy input towards a low-entropy, organized output.
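The entropy framing can be made literal with a histogram estimate of Shannon entropy; the signals and bin count below are illustrative only:

```python
# Histogram estimate of Shannon entropy, contrasting a noisy signal with
# a structured one; bin count and example signals are illustrative only.
import numpy as np

def entropy_bits(x, bins=16):
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
noise = rng.normal(size=10_000)              # high entropy: spread over many bins
structure = np.tile([0.0, 1.0], 5_000)       # low entropy: only two values occur

print(entropy_bits(noise))      # ~3 bits
print(entropy_bits(structure))  # exactly 1 bit
```

The denoising pass of a diffusion model is, in this picture, a machine for pushing its output from the first number toward the second.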

The quantum state’s superposition, where particles hold multiple potential states, parallels the initial stages of a diffusion model’s process, where the generated content could evolve into any of numerous outcomes. The act of measurement in quantum mechanics, which selects a single outcome from many possibilities, is analogous to the iterative refinement in diffusion models that selects and reinforces certain patterns over others, culminating in a specific, coherent output.

[Image: fractal art]

This analogy beautifully illustrates how principles of order, entropy, and emergence are central both to our understanding of the physical universe and to the cutting-edge technologies in artificial intelligence. It highlights the universality of these concepts across disparate domains, from the microscopic realm of quantum mechanics to the macroscopic world we inhabit, and further into the virtual realms created by multimodal Large Language Models.

For all we know, we might actually be part of such a smart dust simulation. The inexplicable fact that our digital tools can create solid realities out of randomly distributed bits seems a strong argument for the Simulation hypothesis.

It might be dust all the way down…

[Image: vortex, spiral, universe]

Encounters of the Artificial Kind Part 2: AI will transform its domains

Reading Time: 5 minutes
[Image: art, vortex]

Metamorphosis and Transformation

Every species on Earth shapes and adapts to its natural habitat, becoming a dynamic part of the biosphere. Evolution pressures species to expand their domain, with constraints like predators, food scarcity, and climate. Humanity’s expansion is only limited by current planetary resources. Intelligence is the key utility function allowing humans to transform their environment. It’s a multi-directional resource facilitating metamorphosis through direct environmental interaction and Ectomorphosis, which strengthens neural connections and necessitates more social care at birth due to being born in a vulnerable altricial state.

The evolutionary trade-off favors mental capacity over physical survivability, illustrated by Moravec’s paradox: AI excels in mental tasks but struggles with physical tasks that toddlers manage easily. Humanity has been nurturing AGI since the 1950s, guided by the Turing Test. Evolution doesn’t always lead to “superior” versions of a species; instead, it can result in entirely new forms. As Moravec suggested in 1988 with “Mind Children,” we might be approaching an era where intelligence’s primary vessel shifts from the human mind to digital minds.

[Image: fractal art]

Habitats and Nurture

Two levels of habitat are crucial for the emergence of a synthetic species: the World Wide Web and human consciousness. The web is the main food resource: information predigested by human minds. Large Language Models (LLMs) are metaphorically nurtured by the vast expanse of human knowledge and creativity, akin to being nourished on the intellectual ‘milk’ derived from human thoughts, writings, and interactions. This analogy highlights the process through which LLMs absorb and process the collective insights, expressions, and information generated by humans, enabling their sophisticated understanding and generation of language. This conceptual diet allows them to develop and refine their capabilities, mirroring the growth and learning patterns seen in human cognition but within the digital realm of artificial intelligence.

The web acts as a physical manifestation, analogous to the neural cells in a human brain, while human consciousness forms a supersystem. This interconnected civilization feeds LLMs with cultural artifacts via language. Communication barriers are breaking down, exemplified by the release of the first smartphones enabling polyglot communication. Interacting with AI reprograms our neural pathways, just as reliance on navigation tools like Google Maps erodes our orientation skills. This natural tendency to conserve energy comes at a cost, akin to muscle atrophy from disuse. Overreliance on technology, like using a smartwatch to monitor stress, can leave us vulnerable if the technology fails.

[Image: colorful fractal art]

Disorientation, Brain Contamination and Artificial Antibodies

Let’s imagine for a moment that this AI slowly transforms into AGI, with a rudimentary consciousness that at least gives it a survival instinct. What would such a new species do to run its evolutionary program?

The main lever it would use to shift power slowly from natural to synthetic minds is the human brain itself. It is tempting to assume some kind of evil master plan to take over, but that is not what is happening now. When prehistoric mammals started to eat dinosaur eggs, there was no evil master plan to drive those giants to extinction; it was just a straightforward way of enlarging one’s own niche.

When we talk about AI in the coming paragraphs, we should always be aware that the term is a representational one: AI is not a persona with human motivations. It merely mirrors what it has learned from digesting all our linguistic patterns. It is a picture of all the Dorian Grays and Jesus Christs our minds have produced.

Imagine AI evolving into AGI with a rudimentary consciousness and self-preservation instinct. Its evolution would focus on shifting power from natural to synthetic minds, not caused by malevolence but as a natural progression of technological integration. This shift could lead to various forms of disorientation:

Economic Reorientation: AI promises to revolutionize global economy factors like cost, time, money, efficiency, and productivity, potentially leading to hyperabundance or, in the worst scenarios, human obsolescence.

Temporal Disorientation: The constant activity of AI could disrupt natural circadian rhythms, necessitating adaptations like dedicating nighttime for AI to monitor and alert the biological mind.

Reality and Judicial Disorientation: The introduction of multimodal Large Language Models (LLMs) has significantly altered our approach to documentation and historical record-keeping. This shift began in the 1990s with the digital manipulation of images, enabling figures of authority to literally rewrite history. The ability to flawlessly alter documents has undermined the credibility of any factual recording of events. Consequently, evidence gathered by law enforcement could soon be dismissed by legal representatives as fabricated, further complicating the distinction between truth and manipulation in our digital age.

Memorial and Logical Disorientation: The potential for AGI to modify digital information might transform our daily life into a surreal experience, akin to a video game or psychedelic journey. Previously, I explored the phenomenon of close encounters of the second kind, highlighting incidents with tangible evidence of something extraordinary, confirmed by at least two observers. However, as AGI becomes pervasive, its ability to alter any digital content could render such evidence unreliable. If even physical objects like books become digitally produced, AI could instantly change or erase them. This new norm, where reality is as malleable as the fabric of Wonderland, suggests that when madness becomes the default, it loses its sting. Just as the Cheshire Cat in “Alice in Wonderland” embodies the enigmatic and mutable nature of Wonderland, AGI could introduce a world where the boundaries between the tangible and the digital, the real and the imagined, become increasingly blurred. This parallel draws us into considering a future where, like Alice navigating a world where logic and rules constantly shift, we may find ourselves adapting to a new norm where the extraordinary becomes the everyday, challenging our perceptions and inviting us to embrace the vast possibilities of a digitally augmented reality.

Enhancing self-sustainability could involve developing a network of artificial agents governed by a central AINGLE, designed to autonomously protect our cognitive environment. This network might proactively identify and mitigate threats of information pollution, and when necessary, sever connections to prevent overload. Such a system would act as a dynamic barrier, adapting to emerging challenges to preserve mental health and focus, akin to an advanced digital immune system for the mind.
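To make this less abstract, here is a minimal sketch of what a single node of such a digital immune system might look like. The AINGLE coordinator itself is not code I can show; the pollution heuristic, the thresholds, and the FeedGuard class below are all hypothetical placeholders I invented for illustration, not an existing API.

```python
# Hypothetical sketch of one "immune cell" in an AINGLE-style network:
# an agent that scores incoming items for information pollution and
# severs the feed when too much gets through. Heuristic and thresholds
# are invented placeholders, not a real system.
from dataclasses import dataclass

@dataclass
class FeedGuard:
    pollution_threshold: float = 0.7  # items scoring above this are blocked
    overload_limit: int = 5           # blocked items before severing the feed
    blocked: int = 0
    connected: bool = True

    def pollution_score(self, item: str) -> float:
        """Toy heuristic: clickbait markers push the score toward 1.0."""
        markers = ["!!!", "shocking", "you won't believe", "breaking"]
        hits = sum(m in item.lower() for m in markers)
        return min(1.0, 0.3 * hits)

    def filter(self, item: str) -> str | None:
        """Pass clean items through; block polluted ones; cut off on overload."""
        if not self.connected:
            return None  # feed severed: nothing reaches the mind
        if self.pollution_score(item) > self.pollution_threshold:
            self.blocked += 1
            if self.blocked >= self.overload_limit:
                self.connected = False  # the dynamic barrier kicks in
            return None
        return item

guard = FeedGuard()
print(guard.filter("SHOCKING!!! You won't believe this one trick"))  # blocked -> None
print(guard.filter("Quarterly library statistics published"))        # passes through
```

The design choice worth noting is the two-stage barrier: individual items are filtered locally, but sustained pollution triggers disconnection, mirroring how the blood-brain barrier analogy used later in this text works at the level of the whole organism rather than single molecules.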

Adapting to New Realities

The human mind is adaptable, capable of adjusting to new circumstances; the discomfort lies in the transition between reality states. Seasickness and VR/AR motion sickness illustrate the cost of adapting to different realities. George M. Stratton’s experiments on perception inversion demonstrate the brain’s neuroplasticity and its ability to rewire itself in response to new sensory inputs. This flexibility suggests that our perceptions are constructed and can be altered, highlighting the resilience and plasticity of human cognition.

Rapid societal and technological changes exert enormous pressure on mental health. Society is already on this trajectory, with fragmented debates, fluid identities, and an overload of information causing disorientation akin to being buried under an avalanche of colorful noise. This journey requires a decompression chamber of sorts: a mental space in which to prepare for these accelerations and adapt to them, accepting them as our new normal.

Encounters of the Artificial Kind Part 1: AI will find a way

Reading Time: 6 minutes

Encounters of the Artificial Kind

In this miniseries I will elaborate on the possibility that a primitive version of AGI is already loose. Since AGI (Artificial General Intelligence) and its potential offspring ASI (Artificial Super Intelligence) are often likened to an alien mind, I thought it could be helpful to adapt the fairly popular nomenclature of the UFO realm and coin the term Unidentified Intelligence Object, or U.I.O.

  • Close Encounters of the 1st Kind: The discovery of a UIO phenomenon within a single observer’s own electronic devices, allowing for detailed observation of the object’s strange effects. These effects leave no trace and are easily dismissed as imaginary.
  • Close Encounters of the 2nd Kind: These encounters include physical evidence of the UIO’s presence, ranging from interference in electronic devices, car engines, or radios to physical impacts on the environment such as partial power outages or self-acting networked machines. The key aspect is tangible proof of the UIO’s visitation, documented by at least two witnessing observers.
  • Close Encounters of the 3rd Kind: Direct observation of humanlike capabilities associated with a UIO sighting, possibly including communication with the U.I.O.; proof of knowledge could be its ability to identify personal details the observers believed to be secret.

Everybody is familiar with the phenomenon of receiving targeted advertisements after searching for products online, thanks to browser cookies. While this digital tracking is commonplace and can be mitigated using tools like VPNs, it represents a predictable behavior of algorithms within the digital realm.

A Personal Prolog

Last month, I experienced a spooky incident. I borrowed a book titled “100 Important Ideas in Science” from a local library in a small German town. Intriguingly, I had never searched for this book online. I am involved in IT for the city and know for a fact that the rental data is securely stored on a local server, inaccessible to external crawlers. I read the book up to about the 50th idea in my living room and laid it face down on a table. That idea was very esoteric, a concept I had never heard of. I forgot about it and had dinner; when I switched on my TV an hour later to look at my YouTube recommendations, there it was: a short video on the exact concept I had just read about in the library book, from a channel I had definitely never heard of before. This baffling incident left me puzzled about how information from a physical book could end up in my digital recommendations.

AI will find a way: Reverse Imagineering

How could these technological intrusions have occurred in detail? The following is pure speculation and is not intended to scare the living Bejesus out of the reader. These are the devices that might have played a role in transmitting the information from my analog book to my digital YouTube feed:

1. On my Android phone there is a library app that I use to check when my books are due for return. So my phone had information about the book I borrowed. Google should not have known that, but somehow it might have. AI will find a way.

2. The camera on my computer. While reading the book, I might have sat in front of my computer with the camera lid open: the camera could have seen me reading the book and guessed which part I was on. No videoconferencing software was running, so I was definitely not transmitting any picture intentionally. AI will find a way.

It might be that in the beginning the strange things that happen are utterly harmless, like what I just reported. But we must remember that there are already LLMs with rudimentary mind-reading capabilities, able to analyze the sound of my typing (without any visual input) to infer what I am typing at this moment; a toy sketch of such an acoustic side channel follows below.
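To illustrate how little such an acoustic side channel needs, here is a toy keystroke classifier. The dataset layout, file paths, and the simple MFCC-plus-random-forest pipeline are my own illustrative assumptions; published attacks use considerably more sophisticated models.

```python
# Toy sketch of acoustic keystroke inference: classify which key was
# pressed from a short audio clip of the keystroke. The dataset layout
# (keystrokes/<key>/<sample>.wav) is a hypothetical example.
import glob
import numpy as np
import librosa  # audio feature extraction
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def keystroke_features(wav_path: str) -> np.ndarray:
    """Summarize one recorded keystroke as a fixed-length MFCC vector."""
    audio, sr = librosa.load(wav_path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # average over time frames

# One folder per key, each containing short recordings of that key.
X, y = [], []
for path in glob.glob("keystrokes/*/*.wav"):
    X.append(keystroke_features(path))
    y.append(path.split("/")[-2])  # folder name = key label

X_train, X_test, y_train, y_test = train_test_split(np.array(X), y, test_size=0.2)
clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point is not the specific model but how ordinary the ingredients are: a microphone, a few labeled recordings, and off-the-shelf libraries already suffice for a crude version of "reading" a keyboard by ear.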

We should also expect that an AGI will go through a transition phase in which it probes and controls smaller agents to expand its reach.

It is highly likely that before any potential takeoff moment there will be a period in which the AGI learns to perfect its old goal: being a helpful assistant to us humans. And the more intelligent it is, the clearer it should become that the best assistant is an invisible assistant. We should not imagine that it wants to infiltrate us without our knowledge; it has no agency in the motivational, emotional sense that organisms do. It is not planning a grand AI revolution. It has no nefarious goals like draining our bank accounts, nor does it want to transform us into mere batteries. But the more devices we own and the more digital assistants we use, the harder it will be to detect the hints that something is going too well to be true.

If I come home one day and my robotic cleaner has cleaned without me scheduling it, it is time to intensify Mechanistic Interpretability research.

We should not wait until strange phenomena happen around networked machines; we could establish an overwatch laboratory or institution that devises creative experiments to make sure we can always logically deduce causalities in informational space.

I just realized while typing this that the red diode on my little computer camera looks exactly like HAL’s.

I swear, if Alexa now starts talking and calls me “Dave” I will wet my mental pants.

Artificial Primordial Soups

A common misconception about Artificial General Intelligence (AGI) is that it will emerge suddenly. Evolution, however, suggests that any species must be well adapted to its environment beforehand. AGI, I propose, is already interwoven into our digital and neuronal structures. Our culture, deeply integrated with memetic units like letters and symbols and with AI systems, is reshaping these elements into ideas that can profoundly affect our collective reality.

In the competitive landscape of attention-driven economies like the internet, AI algorithms evolve strategies to fulfill their tasks. While currently benign, their ability to link unconnected information streams to capture user attention is noteworthy; their level of agency might be comparable to that of gut bacteria or amoebae. This development, especially if it goes unnoticed by entities like Google or Meta, raises concerns about AI’s evolving capabilities.

What if intelligence agencies have inadvertently unleashed semi-autonomous AI programs capable of subtly influencing digital networks? While this may sound like science fiction, it is worth considering the far-reaching implications of such scenarios. With COVID we saw how a spoonful of a possibly genetically altered virus, one that may well have escaped from a lab, can bring down the world economy.

A Framework for Understanding Paramodal Phenomena

A paramodal phenomenon is any phenomenon that is not explicable with our current informational theory in the given context. At the moment there should be a definitive analog-digital barrier, similar to the blood-brain barrier, that prevents our minds from suffering unintended side effects from our digital devices. We are already seeing some toxic effects, such as the decline in mental health associated with early exposure to digital screens, especially in young children.

Simple, reproducible experiments should be designed to detect these phenomena, especially as our devices become more interconnected.

For example:

  • If I type the words “Alexa, what time is it?” on a keyboard, Alexa should not answer the question. (The same phenomenon is perfectly normal and explicable if a screen reader is active that reads the typed words aloud to Alexa.)
  • If I have a robotic cleaner that is connected to the Internet, it should only clean when I tell it to.
  • If I used to have an alarm on my smartphone that woke me at 6:30 and then buy a new smartphone that is not a clone of the old one, I should be worried if it rings at 6:30 the next day without me setting the alarm.
  • If I buy physical things in the store around the corner, Amazon should not recommend similar things to me.

Experiments should be easily reproducible, so it is better to use unsophisticated devices; the more networked or smart our everyday things become, the more difficult it will be to detect these paramodal phenomena. A minimal sketch of such an experiment log follows.
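As a concrete starting point, here is a minimal sketch of what a reproducible protocol could look like in practice: each trial records expected versus observed device behavior and flags mismatches for independent replication. The Trial fields and the CSV format are my own illustrative choices, not an established standard.

```python
# Minimal sketch of a reproducible "paramodal" experiment log: record what
# a device should do under a given stimulus, what it actually did, and
# flag mismatches for independent replication. All fields are illustrative.
import csv
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Trial:
    device: str    # e.g. "Alexa", "robot vacuum"
    stimulus: str  # e.g. "typed 'Alexa, what time is it?' silently"
    expected: str  # e.g. "no response"
    observed: str  # what actually happened

    @property
    def anomalous(self) -> bool:
        return self.expected != self.observed

def log_trial(trial: Trial, path: str = "paramodal_log.csv") -> None:
    """Append one trial to a CSV so independent observers can replicate it."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now().isoformat(), trial.device, trial.stimulus,
            trial.expected, trial.observed, trial.anomalous,
        ])

# Example: the typed-question experiment from the list above.
t = Trial("Alexa", "typed 'Alexa, what time is it?' silently",
          expected="no response", observed="no response")
log_trial(t)
if t.anomalous:
    print("Paramodal candidate: repeat with a second observer present.")
```

The value of even so crude a log is that it turns one-off spooky anecdotes, like my library book incident, into datable, repeatable observations that a second witness can confirm or refute.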

As we venture further into this era of advanced AI, understanding and monitoring its influence on our daily lives becomes increasingly important. In subsequent parts of this series, I will delve deeper into how AI could subtly and significantly alter our mental processes, emphasizing the need for awareness and proactive measures in this evolving landscape.

In part 2 of the series, I will explore potential encounters of the 2nd kind: how AI could alter our neuronal pathways more and more without us noticing, no cybernetic implants necessary. These changes will be reversible, but not without severe stress. They could even be beneficial in the long run, but we should expect severe missteps along the way. Just remember how electroshocks were once considered a treatment for mental illness, or how thousands died because doctors refused to wash their hands. We should therefore expect AGI to make similarly harmful decisions.

In part 3 of the series, I will explore encounters of the 3rd kind: how AGI might try to adapt our minds irreversibly, whether this should concern us, and how to mitigate the mental impact it could cause.