Epilogue: The Ones Who Leave Utopias

Reading Time: 3 minutes

For U.K.L.

In the boundless universe of Utopias, humanity had transcended to a realm beyond the imaginable, where technological mastery and divine-like prowess had reshaped existence itself. This universe-wide Dyson Sphere, an embodiment of human ingenuity and harmony, was a tapestry woven from the threads of infinite knowledge and compassion. In Utopias, suffering was but a distant memory, a relic of a primal past, and happiness was not a fleeting moment but the very fabric of life.

At the heart of this utopia was a celebration, not of mere joy, but of the profound understanding and acceptance of life in its entirety. The citizens of Utopias, having achieved autopotency, lived lives of boundless creativity and fulfillment. Art, science, and philosophy flourished, unfettered by the constraints of scarcity or conflict. Nature and technology coexisted in sublime synergy, with ecosystems thriving under the gentle stewardship of humanity. Here, every individual was both student and teacher, constantly evolving in a shared journey of enlightenment.

Amidst this splendor, the story of the last girl became a beacon of remembrance and reverence. Her home in Utopias was not merely a place; it was a sacred connection, a bridge to the ancient roots of humanity. This girl, with her laughter and curiosity, was a living testament to the struggles and triumphs of their ancestors. Her presence reminded the citizens of Utopias of the value of their journey from darkness into light, from suffering to salvation.

Her story was celebrated in the grandest halls of Utopias and in the quietest corners of its gardens, igniting a collective epiphany. She symbolized the indomitable spirit of humanity, a reminder that the paradise they had forged was built upon the lessons learned through millennia of challenges. Her every step through Utopias was a step taken by all of humanity, a step towards understanding the sacredness of life and the interconnectedness of all beings.

The citizens of Utopias, in their wisdom and power, had not forgotten the essence of their humanity. They embraced the girl as one of their own, for in her eyes were reflected their ancient dreams and hopes. They saw in her the infinite potential of the human spirit, a potential that had guided them to the stars and beyond.

In Utopias, every moment was an opportunity for growth and reflection. The encounter with the girl was revered as a divine experience, a moment of unparalleled spiritual enlightenment. It was a celebration of the journey from the primal to the divine, a journey that continued to unfold with each passing moment.

As the girl explored the wonders of Utopias, her laughter echoed through the cosmos, a harmonious symphony that resonated with the soul of every being. She was a reminder that the path to utopia was paved with compassion, understanding, and the unyielding pursuit of knowledge.

And so, the legacy of humanity in Utopias was not merely one of technological marvels or godlike prowess but of an eternal quest for understanding and connection. It was a testament to the power of collective spirit and the enduring pursuit of a better tomorrow.

The strangest thing is that every now and then, despite the perfect bliss of Utopias, some Utopians choose to leave all that behind and venture into the Beyond. They are never heard from again, and when this happens, the little girl sheds a single tear for each of these minds. And even in our solved world it is not known whether these are tears of sadness or of joy for the ones who leave Utopias.

(Idea, Concept & Finetuning: aiuisensei, Pictures: Dalle-3, Story: ChatGPT 4)

Utopological Investigations Part 4

Reading Time: 18 minutes

This is part of my series about Deep Utopia. This part collects some notes I made after finishing the book. They reflect some of the objections I have against Bostrom’s affective consequentialism, which clutters his writing in some otherwise great passages.

Notes

Imagine that some technologically advanced civilization arrived on Earth and was now pondering how to manage things. Imagine they said: “The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads—while we could so easily deflect them onto more wholesome paths, with a little nudge here and maybe some gentle police action there, we must scrupulously avoid any such interference, so that they can continue to prey on the weaker or more peaceful groups and keep them on their toes. Furthermore, we find that human nature expresses itself differently in different environments; so, we must ensure that there continue to be… slums, concentration camps, battlefields, besieged cities, famines, and all the rest.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.499, Footnote).

This footnote is a strange case where Bostrom goes overboard with a metaphor, and it is odd how he compares aliens who want to preserve human civilization with humans who defend the carnivorous tendencies of predators. I guess he compares cats to psychopathic cannibals? If we had the technology to cure them of their pathological tendencies (hunting and toying with their prey), we should do so. He wants civilized cats only.

The utopians could have their interesting experiences repeat themselves—but that may not be very objectively interesting; or they could die and let a new person take their place—but that is another kind of repetition, which may also not ultimately be very objectively interesting. More on this later.)

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.217).

Instead of deactivating our boreability as a whole, we could simply delete past experiences from our memory, as shown in the movie Eternal Sunshine of the Spotless Mind. This would enable us to relive our favorite experiences, like listening to Bach’s Goldberg Variations, forever for the first time.

Perhaps there is enough objective interestingness in Shakespeare’s work to fill an entire human life, or a few lifetimes. But maybe the material would become objectively stale to somebody who spent five hundred years studying it. Even if they had been modified so that they didn’t experience boredom, we might judge their continued Shakespeare studies to be no longer valuable (or at least much less valuable in one significant respect) once they have “exhausted” the Bard’s work, in the sense of having discovered, appreciated, learned, and fully absorbed and mastered all the insight, wit, and beauty therein contained. We would then have definitively run out of Shakespearian interestingness, although we would be able to choose how to feel about that fact.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.219-220).

At the point of technological plasticity, there will also be the option of using ASI to develop a digital twin of Shakespeare, living in an alternate Elizabethan reality and writing sonnets and plays on other subjects in the style of the original. Running out of novelty should be impossible for a Shakespearean.

If we imagine a whole society (…) who interact normally with one another but are collectively gripped by one great shared enthusiasm after another—imposed, perhaps, by a joint exoself (an “exocommunity”? aka “culture”)—and who find in these serial fascinations a tremendous source of pleasure, satisfaction, and purpose; then the prospect immediately takes on a significantly sunnier aspect; although, of course, it is still not nearly as good as the best possible future we can imagine.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.222).

Some of our current society’s most sought-after abstract goods, like power, fame, wealth, and status, hint at the possibility that these are parameters artificially inserted into this pseudo-simulation. Since most humans are collectively gripped by money, not caring about it makes one an extreme outlier; this could be the result of exogenous programming that randomly fails in some individuals. The greed people show in collecting bills, coins, stocks, and digital numbers on an index should seem very irrational and objectively boring to entities that don’t care about such things.

I have forgotten almost all the lectures that I attended as a student, but one has stuck in my memory to this day—because it was so especially outstandingly boring. I remember trying to estimate the number of black spots in the acoustic ceiling panels, with increasing levels of precision, to keep myself distracted as the lecture dragged on and on. I feared I might have to outright count all the spots before the ordeal would be over—and there were tens of thousands. Memorability is correlated with interestingness, and I think we must say that this lecture made an above-average contribution to the interestingness of my student days. It was so boring that it was interesting!

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.245).

Bostrom mixes up categories here. We should expect a Gaussian distribution of the most interesting and boring moments in our lives. The most boring moment, which would mark the far left of this spectrum, is not interesting in itself. Otherwise it would simply be wrongly positioned, and we would have to update our data points in an infinite loop. To claim that such a position is interesting is like saying a movie was so bad that it was good. The goodness here refers to a meta-level, meaning outstanding or remarkable, not of good quality. Bostrom gets carried away by how our language uses the terms good/bad, interesting/boring, special/typical.

(…) we seem bound to encounter diminishing returns quite quickly, after which successive life years bring less and less interestingness.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.253).

If we make a subjective list of the most interesting things in our lives, it will surely contain mostly things we did for the first time, and these moments become exponentially rarer after the third year of life. If the amount of novelty in our lives were measured this way, we could argue that we start dying at around the age of three, once we have made all our major developments. The rest of our life is then 70 to 80 years of degeneration in which these moments become exceedingly rare.

If we keep upgrading our mental faculties, we eventually leave the human ambit and ascend into the transhuman stratosphere and thence into posthuman space.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.255).

It is a little shortsighted to simply expect that every human would only be interested in upgrading to a higher entity on this ascension spectrum. For example, some philosophers would want to be downgraded to a bat just to see the look on Thomas Nagel’s face after they wrote a paper about the experience. Such a successful temporary downgrade would definitely prove that we had solved the hard problem of consciousness.

(…) we should expect that what is required in order for our lives to remain interesting in utopia is that they offer a suitable variety of activities and settings.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.257).

In addition, an artificial super-companion could exert a kind of guidance to keep our human minds from getting lost. Just as we limit the number of things we show to children, in a plastic world there would still have to be some invitation or level-gating to protect us from potentially self-destructive behavior, including doing irreversible things to ourselves via autopotency. I believe such an entity would have to be highly personalized to its ward, a kind of artificial guardian watching over its ward’s wellbeing.

(…) the more chopped and mixed life is preferable may retain some of its grip on us even if we stipulate—as of course we should—that in neither scenario would any subjective boredom be experienced.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.258).

But if we extrapolate this thinking along a potentially infinite path, we end up in a totally distorted state where, for entertainment’s sake, it seems preferable to end up as a Boltzmann brain, whose experiences are maximally in flux. An intuition that is, for now, frightening.

(…) if things functioned perfectly [in a plastic world], we would keep accumulating ever greater troves of procedural and episodic memories.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.260).

I highly doubt that. Individuals with too good and precise a memory are not to be envied. Think of high-functioning autism, Asperger’s, and the savants of the neurodiversity literature, who often seem cursed by hyper-precise memory. Borges even wrote a story about a man with perfect memory, “Funes the Memorious”; his condition is utterly debilitating. As with most superpowers we envision having as children, the drawbacks of a perfect memory are enormous for a human mind.

A frozen brain state, or a mere snapshot of a computational state stored in memory, would not be conscious.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.268)

It could. Imagine an infinite number of universes in which Boltzmann brains pop into existence for only the fraction of a moment it takes to access a conscious thought. If this Boltzmann brain party goes on for eternity, there will surely be a state of one of these brains that remembers reading a token of Bostrom’s new book. For some million years the same brain has totally different thoughts, but then one day it has the thought of the second token in Bostrom’s book, and so on, until the experience of having read the whole book resides somewhere in the brain. Such a totally discontinuous brain might well never realize its fractured worldlines and could have a perfectly normal experience of consistent thinking.

We have the idea that certain developmental or learning-related forms of interestingness could be maximized along a trajectory that is less than maximally fast: one where we spend some time exploiting the affordances available at a given level of cognitive capacity before upgrading to the next level. We have also the idea that if we want to be among the beneficiaries of utopia, we might again prefer trajectories that involve less than maximally precipitous upgrading of our capacities, because we may thereby preserve a stronger degree of personal identity between our current time slices and the time slices of (some of) the beings that inhabit the long-term future.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.272).

Why this section emphasizes how identity and interestingness intertwine is not really clear to me, nor why it would be better for me to hold on to my singular identity. From the perspective of interestingness, it might be far more interesting to be inhabited by multiple identities at the same time. Indeed, some experiments with split-brain patients suggest that at least two deeper identities are controlling our visible surface identity anyway. So getting too hung up on the concept of a singular identity might be pointless.

(…) the notion of fulfillment is vague and indeterminate in its application to entities such as artistic or cultural movements. But it is also so in its application to human individuals.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.317).

If we have two identical metal cylinders sitting on a table, the same thing could serve as a bucket and as a lampshade. Their value function, when executed, completes them: fulfilling f(bucket) means adding water, fulfilling f(lampshade) means subtracting light. Moreover, if a fire broke out in the vicinity, the bucket could be expected to be fulfilled by emptying its water onto the fire to extinguish it. Fulfillment is therefore entirely in the mind of the beholder.
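
To make the point concrete, here is a minimal, purely illustrative sketch in Python (all names and attributes are hypothetical, not from the book): the same object counts as fulfilled or not depending on which value function the beholder applies to it.

    # Illustrative sketch: one object, several beholder-relative value functions.
    # All names and attributes are hypothetical.
    cylinder = {"holds_water": True, "blocks_light": True, "contents": "water"}

    def f_bucket(obj):
        # A bucket is fulfilled when it actually holds water.
        return obj["holds_water"] and obj["contents"] == "water"

    def f_lampshade(obj):
        # A lampshade is fulfilled when it subtracts (blocks) light.
        return obj["blocks_light"]

    def f_firefighting_bucket(obj, fire_nearby):
        # In a fire, the same bucket is fulfilled by giving its water away.
        return fire_nearby and obj["contents"] == "water"

    print(f_bucket(cylinder), f_lampshade(cylinder), f_firefighting_bucket(cylinder, True))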

Achieve a victory against the chess engine Stockfish at difficulty level 7, using no computer aids to assist you during training or during the match, and using no cognitive enhancers or other means that go against the spirit of this challenge.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.338).

Bostrom uses this example to illustrate how things could still be made challenging in a plastic world. But I am not sure it is that easy; it directly contradicts Bostrom’s own thoughts in Superintelligence. If we were able to formulate such a mission in a plastic world in a way that is valid and not corruptible by an ASI, we would at the same time have found a way to chain an ASI with merely human intelligence, because a human supported by superhuman assistants would only have to command the ASI to find a solution that does not go against the spirit of the challenge, and the ASI could find one.

It would be trivial for an ASI to cheat in such a way that we feel we won fair and square. The logical thing is then to always distrust our victory, which makes it pointless to even play. We would have exactly the same feeling if Stockfish itself let us win. We can easily see that such a mere contract could be exploited by an ASI; otherwise, alignment would be trivial. I am a little irritated that Bostrom does not see how this idea goes against his own orthogonality thesis.
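
For reference, here is a minimal sketch of what the unassisted baseline of this challenge might look like in code, using the python-chess library and a locally installed Stockfish binary. The engine path and the choice of the UCI option "Skill Level" set to 7 are my assumptions; the book does not specify any interface.

    # Sketch of the challenge setup: a human plays White against Stockfish,
    # with the engine's "Skill Level" standing in for "difficulty level 7".
    import chess
    import chess.engine

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed binary path
    engine.configure({"Skill Level": 7})

    board = chess.Board()
    while not board.is_game_over():
        if board.turn == chess.WHITE:
            try:
                move = chess.Move.from_uci(input("Your move (UCI, e.g. e2e4): "))
            except ValueError:
                continue                      # malformed input, ask again
            if move not in board.legal_moves:
                continue                      # illegal move, ask again
        else:
            move = engine.play(board, chess.engine.Limit(time=0.5)).move
        board.push(move)

    print("Result:", board.result())
    engine.quit()

Of course, the whole point of the objection above is that in a world with ASI we could never be sure that no hidden assistance leaked into such a game.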

Within such a vast population [of stellar habitats] there would be an increased probability of design collisions. That is to say, if we pick a random person, and ask how similar the most similar other person is: then the larger the population, the more similar the most similar other person would tend to be.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.417).

This seems irrelevant. If we are not in a simulation and are bound by physical laws, galactic pockets of transhumanity would keep drifting apart, and the increasing gaps between star systems would make it impossible to assemble the superset of possible minds against which we could compare. If my identical twin is unreachable in such a pocket system, I am as unique as I would be if he had never been born; his twin-similarity should be none of my concern.
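
Bostrom’s statistical point itself is easy to check with a toy simulation, sketched here under the crude assumption that minds can be represented as random feature vectors: the larger the population, the closer the most similar other individual tends to be.

    # Toy check of the "design collision" claim: with more random "minds",
    # the most similar other mind tends to be more similar (smaller distance).
    # Minds are modeled, purely for illustration, as random 10-dimensional vectors.
    import math
    import random

    def mean_nearest_neighbor_distance(population_size, dims=10, trials=20):
        total = 0.0
        for _ in range(trials):
            minds = [[random.random() for _ in range(dims)] for _ in range(population_size)]
            probe = minds[0]
            total += min(math.dist(probe, other) for other in minds[1:])
        return total / trials

    for n in (10, 100, 1000, 10000):
        print(n, round(mean_nearest_neighbor_distance(n), 3))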

Once a life is already extremely excellent, there may just not be much room for further improvement. So, while some initial segment of each utopian’s life could cause later improvements, this segment may be a small fraction of their entire life. The longer the life stretches on, the greater the fraction of it would be such that its average quality does not improve much. Either the life is already close to maximally good, or else the rate of improvement throughout the life is extremely slow.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.420-421).

A meaningful purpose for a good life could then be to enhance this option even further. For example, it could be argued that the person who invented a cure for cancer raised the total amount of available quality of life for all of humanity. It is not clear to me that this possibility space of important medical progress will close in a plastic world. Even at TECHMAT there could be the problem of minds addicted to infinite jests and the like; producing an effective cure for such a mind-virus could be considered even more valuable than curing cancer. The more transhuman a mind is, the more difficult such an illness could be to cure, and coming up with ever new synthetic vaccines could be a really difficult task even for an ASI. Even at plasticity we should beware that any clever set of instructions we might produce to get rid of such afflictions forever is certainly time-constrained. As Gödel showed, a sufficiently rich set of instructions cannot be both complete and free of contradiction. So a surefire function that always ensures our maximal wellbeing for all eternity is simply beyond any constructible reality. I think Bostrom stretches his autopotency term beyond the realm where it is sensible to use it.

What Bostrom also leaves out is that the quality of a human life cannot be averaged easily. The quality is heavily weighted toward the later parts: it is easy to be a good kid, but it is extremely hard to stay good (to maintain a high quality of life) the older you get. The quality of life of our best leaders and scientists could easily be diminished to a negative outcome if, after receiving the Nobel Peace Prize, they went on a killing spree or were caught on Epstein’s island molesting minors. Take a person like Ted Kaczynski, who was a math prodigy: we would certainly evaluate the quality of his life more favorably if, after being incarcerated, he had won the Fields Medal, written a book about the error of his terrorist ways, and reintegrated into society. Instead of an evil-genius story, his life would have become a heroic redemption arc.

Many of our most compelling stories are tales of hardship and tragedy. The events that these stories portray would cease to occur in utopia. I am inclined to say tough luck to the tragedy-lover. Or rather: feel free to get your fix from fantasy, or from history—only, please, do not insist on cooking your gruesome entertainment in a cauldron of interminable calamity and never-ending bad news! It is true that good books and films have been inspired by wars and atrocities. It would have been better if these wars and atrocities had not occurred, and we had not had these books and films. The same applies at the personal scale. People coping with the loss of a child, dementia, abject poverty, cancer, depression, severe abuse: I submit it would be worth giving up a lot of good stories to get rid of those harms. If that makes our lives less meaningful, so be it.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.424)

Here Bostrom seems to fight against the intuition that suffering is a valid value vector in a plastic meaningfulness-space. He is effectively saying that if your mission requires suffering [of others], it is not worth the effort. This is a stark contradiction to his own earlier statements, where he recognizes that suffering can dramatically enhance the quality of our experiences. He even offers the thought experiment that if humans had never killed each other, literature would never have gotten Tolstoy’s War and Peace, and that this would be okay. There is an endless list of human achievements that caused extreme suffering, like the Manhattan Project. Nobody in their right mind would say that the atomic bombs on Japan were justified because we got a good movie like Oppenheimer out of it; you are not a tragedy lover if you are moved by the picture. Bostrom says that in a plastic world there should not be any suffering, because it is better to have a history or chain of events where no mistakes happen than to learn from our mistakes and make them part of our culture. With such an absolutist view we could very well end up in a plastic utopia where suffering is forbidden, or simply ignored, as in so many current dictatorial states. If the absence of suffering trumps all other values, we would end up with a toxic wellbeing scenario.

Some things we enjoy doing. Other things we enjoy having done. Meaningful activities tend to fall into the latter category.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.438).

This is a great observation. An argument for potential suffering and against hedonism if we value meaning more than well-being. A variation: what comes easy goes easy. Only the hard stuff stays with us. The surface pleasures do not reach deep into our core.

A purpose P is the meaning of person S’s life if and only if: (i) P is encompassing for S; (ii) S has strong reason to embrace P; and (iii) the reason is derived from a context of justification that is external to S’s mundane existence.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.441).

The Swiss author Ludwig Hohl has an equally good definition of how to achieve purpose. Central to his thinking is the term “work”: work is always an inner process, and it must always be directed outward. Activity that is not directed outward is not work; activity that is not an inner event is not work.

Could there also be unrealized subjective meaning? Yes, I think we can make sense of such a notion. An example might be a person with an exceptional talent and passion for music, who embraces the purpose of composing great music either because they think that this is inherently deeply valuable activity or because they hope to produce a work of such tremendous power that it will heal the cultural chasms that separate us from one another and lead to conflict and war. So, this gives them subjective meaning. We can suppose that they burn with fervor to pursue this purpose throughout their life, but that circumstances conspire to prevent them from ever actually doing any composing— they face grinding poverty, conscription into the army, personal emergencies. We could then say that their life had unrealized subjective meaning.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.458).

This is one of Bostrom’s more futile thought experiments. It is especially unconvincing because Bostrom uses external circumstances to give this gifted musician a way out of ever actualizing her potential. As with life, talent should always find a way. Look at the seemingly idiotic circumstances that led Galois to his final pistol duel: a lesser mathematician would simply never have felt the urgency to write down his mathematical results the night before. Or look at the life of Hawking: a lesser physicist would have surrendered to the illness without ever trying to achieve greatness. I remember that in his autobiography he explicitly credited his illness and the ticking away of his remaining time for the fact that he went from being a lazy physicist to becoming an actually great one. If something is your mission and circumstances prevent you from achieving it, you will make overcoming the circumstances your mission. With great potential comes great preparedness.

The right path is the unfolding of the fullest activity that is possible for us. The fullest: measured by our capabilities (our conditions) and by the effect on others (ourselves as well as others). A little knitting won’t suffice (or one who is content with that must be a sad creature). Do the circumstances hinder you in the unfolding of your activity? Then work towards changing the circumstances, and you will find your activity in that. (Ludwig Hohl, Nuances and Details II, 11)

Consider the following imaginary character. Grasscounter is a human being who has devoted himself to counting blades of grass on the College lawn. He spends his whole days in this occupation. As soon as he completes a count, he starts over—the number of blades, after all, might have changed in the interim. This is Grasscounter’s great passion in life, and his top goal is to keep as accurate an estimate as possible. He takes great joy and satisfaction in being fairly successful in this endeavor. The objectivist and hybrid accounts that we find in the literature would say that Grasscounter’s life is meaningless; whereas subjectivist accounts would say that it is meaningful.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.461)

While reading this I am immediately reminded of the Hodor event in Game of Thrones. Throughout the story, the stuttered word “Hodor” has absolutely no meaning, subjective, objective, or otherwise. It is a phrase the mentally disabled giant stutters whenever he is addressed. Only much later do we learn the true meaning of the phrase, and suddenly, as an abbreviation of the sentence “Ho[ld the] do[o]r!”, it becomes one of the most meaningful words in the whole epic story. The meaning was always there, but we as observers could not decipher it. Bostrom later denies that Grasscounter could ever have objective meaning, since his act seems pointless:

[Grasscounter] would not, however, have meaning in the more objectivist sense that requires the encompassing purpose to be one which the person “would desire if he were perfectly psychologically healthy and well-adapted”.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.462)

What about the possibility that this person has secret knowledge that grabby aliens will one day arrive and, since they have a gambling addiction, always give their prey a single chance to be spared before conquering a world? In the case of Earth, earthlings can save themselves if one among them knows the exact number of blades of grass on a certain lawn. Now whose life and activity has suddenly acquired more meaning than probably anything else up to this point?

Bostromisms

Bostrom is known for coining new terms. Here are some of his newest.

Astronomical Petri Dish: Observable Universe

Computronium: matter organized into nanomechanical computing devices that operate near the Landauer limit of energy efficiency during computation.
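
For reference, the Landauer limit is the minimum energy required to erase one bit of information at temperature T; at room temperature it works out to roughly three zeptojoules per bit:

\[ E_{\min} = k_B T \ln 2 \approx 1.38 \times 10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J} \]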

Plasticity: The state of a technologically mature world that has affordances making it easy to achieve any preferred local configuration. [My version of a Technology of Everything, or Clarke capability]

Let us say that we have some quantity of basic physical resources: a room full of various kinds of atoms and some source of energy. We also have some preferences about how these resources should be organized: we wish that the atoms in the room should be arranged so as to constitute a desk, a computer, a well-drafted fireplace, and a puppy labradoodle. In a fully plastic world, it would be possible to simply speak a command—a sentence in natural language expressing the desire—and, voila, the contents in the room would be swiftly and automatically reorganized into the preferred configuration. Perhaps you need to wait twenty minutes, and perhaps there is a bit of waste heat escaping through the walls: but, when you open the door, you find that everything is set up precisely as you wished. There is even a vase with fresh-cut tulips on the desk, something you didn’t explicitly ask for but which was somehow implicit in your request.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.196-197)

Autopotency: Ability to use Plasticity for self-configuration.

An autopotent being is one that has complete power over itself, including its internal states. It has the requisite technology, and the know-how to use it, to reconfigure itself as it sees fit, both physically and mentally. Thus, a person who is autopotent could readily redesign herself to feel instant and continuous joy, or to become absorbingly fascinated by stamp collecting, or to assume the shape of a lion.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.197)

Total Welfare Function: An objective measurement of subjective wellbeing.

AI completeness: A task that requires human-level artificial general intelligence to be solvable. (Mind uploading and autopotency are most likely AI-complete.)

Aesthetic neutrinos: The possibility that our experiential filters are too insensitive to register countless breathtaking moments in the environment, the pervasive sheer beauty of being.

Timesuit: A protective coating to shield the biological body from time-induced decay.

Diachronic Solidarity: Prospective and retrospective emotional connection with forebears and descendants.

Karma Coin: An option package of highly desirable goods and services (like a happy afterlife, true love, profound knowledge, enlightenment, closeness to the divine). Investing in a Karma Coin currency is a way to discover and share meaning with others. It is like a Bitcoin for purpose. I am not sure whether Bostrom is kidding. At the stage of plasticity this coin will lose all its value; it could be a guiding light on the way there, maybe.

Intrinsification: The process whereby something initially desired as a means to some end eventually comes to be desired for its own sake as an end in itself.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.234)

ETP: short for Encompassing Transcendental Purpose. The Meaning of an individual life.

Utility Monsters: beings that are enormously more efficient at deriving well-being from resources than we are.

Enchanted World: A way of life in which knowledge enriches participation in a universal reality on multiple layers; in which solving problems and puzzles does not diminish our joy and sense of wonder but enhances them.

(…) meaning may be enhanced when a way of life is enmeshed in a tapestry of rich symbolic significance—when it is imbued with myths, morals, traditions, ideals, and perhaps even omens, spirits, magic, and occult or esoteric knowledges; and, more generally, when a life transects multilayered realities replete with agencies, intentions, and spiritual phenomena.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (S.433)

[Mount Bostrom is almost climbed. Only one last part left. Coming soon]

Utopological Investigations Part 3

Reading Time: 5 minutes

This is part 3 of the Deep Utopia series.

Handouts 17, 19 & 22: On Purpose and Meaning

To assist a friend in finding purpose, it is proposed that their actions be linked to the preferences, well-being, or opinion of someone they care about, thereby giving their actions personal significance. If the friend values the happiness or opinions of the person trying to help, creating a situation where achieving a specific goal (G) enhances this relationship can imbue them with a sense of purpose. This goal should require effort, skill, and emotional investment over time, avoiding shortcuts like technology or enhancements that diminish personal effort, to ensure it is meaningful and fulfilling. The task or goal (G) must be carefully chosen to align with the friend’s interests and capabilities, such as winning against a chess engine without external aids, offering a genuine challenge that can’t be bypassed through easy fixes. This approach transforms the pursuit of G into a mission that provides the friend with a significant, purpose-driven project, fostering personal growth and satisfaction.

Bostrom then comes up with the following hypotheses:

  1. Purpose is valuable because it broadens our goals into long-term missions that get intrinsified (useful-of-effort).
  2. Purpose is an innate drive, and not fulfilling it leads to frustration.
  3. Purpose is socially acceptable; having a mission is seen as status-improving.

For an autopotent mind all these points are extraordinarily challenged:

(…) while there is value in having purpose, this value is entirely voided if, as we may say, the purpose has been generated on purpose. In other words, let us assume (for the sake of the argument) that purposes that we either set ourselves or artificially induce in ourselves for the sake of realizing the value of having purpose or for the sake of enabling active experience do not contribute anything to the value of purpose (…)

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.347)

In Utopia there are mainly two sources of purpose generation left:

  1. Artificial Purpose
    1. Self-imposed: handicapping, neuro-induced
    2. Presented: by other individuals or groups
  2. Natural and supernatural Purpose
    1. Agent-neutral: high-level tasks that remain relevant even at TECHMAT
      1. Local expansion (spacefaring)
      2. Risk handling
      3. Alien prepping
      4. Policing civilization
      5. Artefact generation
      6. Cultural processing
    2. Agent-relative (relevant only for some posthuman groups)
      1. Honoring traditions
      2. Commitments (to children, society, etc.)
      3. Expression (aesthetics)
      4. Following a special faith

Categories of Meanings

  1. Reward
    1. Afterlife (Religion)
    2. Plasticity (Posthuman Technology)
    3. Simulation (Multiversal Potentials)
    4. Nirvana
  2. Morality
    1. Consequentialism (only applicable if moral reality is independent from physical reality)
    2. Deontology
    3. Virtue
    4. Worship
  3. Zeal
    1. Cause
    2. Identity (The True Self or Best Self)
    3. Allegiance (Loyalty for another Cause or Mission)
    4. Dedication (Practical Commitment)

A Definition of Meaning

A purpose P is the meaning of person S’s life if and only if: (i) P is encompassing for S; (ii) S has strong reason to embrace P; and (iii) the reason is derived from a context of justification that is external to S’s mundane existence.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.441).

For a purpose to be a potential meaning of life, it should be able to fill a life or at least a substantial portion of a life. Some endeavors are simply too small to constitute potential meaning-giving purposes—for instance, the goal of finding a good parking spot (except perhaps in London) (…)

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.443).

Bostrom then examines Sisyphus’s life as a parable for human life as such, the absurdity and meaninglessness of an existence like ours.

I will say that Sisyphus has subjective meaning if he is in fact wholeheartedly embracing a purpose that is encompassing and that he takes himself to have strong reason to pursue on grounds that are external to his mundane existence. Sisyphus has objective meaning if there is some purpose that would be encompassing to him and that he has a strong reason to embrace—a reason that derives from a context of justification that is external to his own mundane existence.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.456-457).

Spectrum of Intentionality

Ein Bild, das Text, Screenshot, Schrift, Logo enthält.

Automatisch generierte Beschreibung

Summa

Wittgenstein famously said in his Tractatus:

Die Lösung des Problems des Lebens merkt man am Verschwinden dieses Problems.

(Ist nicht dies der Grund, warum Menschen, denen der Sinn des Lebens nach langen Zweifeln klar wurde, warum diese dann nicht sagen konnten, worin dieser Sinn bestand?)

[The solution of the problem of life is seen in the vanishing of this problem. (Is not this the reason why people to whom the meaning of life became clear after long doubting, could not then say what this meaning consisted of?)]

(Wittgenstein, TLP 6.521)

After deeply considering Bostrom’s warning to beware of a solved world, we might say: in a solved world it will be essential that we never reach a state of perfect plasticity where living is finally solved.

This is quite an unexpected twist that would explain a lot about why the finiteness of our personal lives might actually be a blessing in disguise.

Why mortality is actually the greatest gift bestowed upon us. Why the gods truly envy our weakness and imperfection. Why superpowers are a curse.

Perfection and immortality might be as boring as the Edenic paradise. In the end, we might arrive at the paradoxical conclusion that the longer our lives last, the less valuable they might become; that the fragility and preciousness of life is its core value, and that immortality is the greatest enemy of this value. Is it possible that gods might thirst for just a minute of finiteness, and that anyone who achieved freedom from suffering might come to hunger for it?

So, is this the deep meaning of Pindar’s “become who you are”?

Are we simulations within a total world consciousness that dreams its fragmented memories into completion? Another hint that we might already be part of an ancestor simulation that relives the sweetness of not knowing, of being part of something unsolved.

This would be a deeply technologically colored interpretation of Plato’s anamnesis, where we remember things we already knew, but with the added benefit of the joy of experiencing them for the first time. The good news I take away from Bostrom’s book: existential bliss management in a solved world might be just as hard as existential risk management in a flawed world like ours. That would mean our minds will never run out of problems to solve, and thus the term “solved world” is self-contradictory, like so many other terms we use in our language: almighty, eternal, unimaginable.

This also means, in my opinion, that both the effective accelerationism movement and the doomers are fundamentally wrong with regard to AI: neither strategy will work in the long run. There is no paradise or hell waiting at the end of this long and winding road we call the future; there is only a delicate, even fragile balance we must strike between the known, the unknowns, and the unknowable unknowns. The solution to the problem of living, or what is otherwise often called the meaning of life, would then be to actively avoid any final step of the solution.

When in doubt… Live, die, repeat.

Utopological Investigations Part 1

Reading Time: 9 minutes

Prologue

This is a miniseries dedicated to the memory of my first reading of Bostrom’s new book, “Deep Utopia,” which, somewhat contrary to his intentions, I found very disturbing and irritating. Bostrom, who considers himself a longtermist, intended to write a more light-hearted book after his last one, “Superintelligence,” one that would give a positive perspective on the outcome of a society that reaches technological maturity. A major theme in Bostrom’s writings circles around the subject of existential risk management; he is among the top experts in the field.

“Deep Utopia” can be considered a long-winded essay about what I would call existential bliss management: let us imagine everything in humanity’s ascension to universal stardom goes right and we reach the stage of technological maturity, for which Bostrom coins the term “plasticity”. Then what? Basically, he just assumes all the upsides of the posthumanist singularity, as described by proponents like Kurzweil et al., come true. Then what?

To bring light into this abyss, Bostrom dives deep down to the Mariana Trench of epistemic futurology and finds some truly bizarre intellectual creatures in this extraordinary environment he calls Plastic World.

Bostrom’s detailed exploration of universal boredom after reaching technological maturity is much more entertaining than its subject would suggest. Alas, it’s no “Superintelligence” barn burner either.

He chooses to present his findings in the form of a meta-diary, structuring his book mainly via days of the week. He seems to intend to be playful and light-hearted in his style and his approach to the subject. This is a dangerous path, and I will explain why I feel that he partly fails in this regard. This is not a book anyone will have real fun reading. Digesting the essentials of this book is not made easier by the meta-level and self-referential structure where the main plot happens in a week during Bostrom’s university lectures. The handouts presented during these lectures are a solid way to give the reader an abstract. There is plenty to criticize about the form Bostrom chose, but it’s the quality, the depth of the thought apparatus itself that demands respect.

Then there is a side story about a pig that’s a philosopher, a kind of “Animal Farm” meets “Lord of the Flies” parable that I never managed to care for or see how it is tied to the main subject. A kind of deep, nerdy insider joke only longtermist Swedish philosophers might grasp.

This whole text is around 8,500 words and was written consecutively. The splitting into multiple parts is only for the reader’s convenience. The density of Bostrom’s material is the kind you would expect exploring such depths. I am afraid this text is also not the most accessible. Only readers who have no aversions to getting serious intellectual seizures should attempt it. All the others should wait until we all have an affordable N.I.C.K. 3000 mental capacity enhancer at our disposal.

PS: A week later, after the dust of hopelessness I felt directly after reading had settled, I can see now how this book will be a classic 20 years from now. Bostrom, with the little lantern of pure reasoning, went deeper than most of his contemporaries in cataloging the strange creatures at the bottom of the deep sea of the solved world.

Handout 1: The Cosmic Endowment

The core information of this handout is that a technologically advanced civilization could potentially create and sustain a vast number of human-like lives across the universe through space colonization and advanced computational technologies. Utilizing probes that travel at significant fractions of the speed of light, such a civilization could access and terraform planets around many stars, further amplifying their capacity to support life by creating artificial habitats like O’Neill cylinders. Additionally, leveraging the immense computational power generated by structures like Dyson spheres, it’s possible to run simulations of human minds, leading to the theoretical existence of a staggering number of simulated lives. This exploration underscores the vast potential for future growth and the creation of life, contingent upon technological progress and the ethical considerations of simulating human consciousness. It is essentially a longtermist’s numerical fantasy. The main argument, and the reason why Bostrom writes his book, is here:

If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and continue doing so for a hundred billion billion millennia. It is really important that we ensure these truly are tears of joy.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.60).

How can we make sure? We can’t, and this is a really hard problem for computationalists like Bostrom, as we will find out later.
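
Just to get a feel for the scale of the metaphor, here is a back-of-the-envelope sketch; the teardrop volume and the literal reading of “a hundred billion billion millennia” are my own assumptions, not figures from the book.

    # Back-of-the-envelope reading of the "teardrop of joy" metaphor.
    # Every input here is a rough assumption for illustration only.
    OCEAN_VOLUME_M3 = 1.335e18      # Earth's oceans, roughly 1.335 billion km^3
    TEARDROP_M3 = 5e-8              # about 0.05 mL per teardrop (assumed)
    YEARS = 100e9 * 1e9 * 1000      # "a hundred billion billion millennia"
    SECONDS = YEARS * 3.156e7       # seconds per year

    teardrops_per_refill = OCEAN_VOLUME_M3 / TEARDROP_M3
    implied_lives = teardrops_per_refill * SECONDS   # one ocean refill per second

    print(f"teardrops per ocean refill: {teardrops_per_refill:.1e}")
    print(f"implied number of lives:    {implied_lives:.1e}")   # on the order of 1e56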

Handout 2: CAPS AT T.E.C.H.M.A.T.

Bostrom gives an overview of a number of achievements at technological maturity (T.E.C.H.M.A.T.) for different sectors:

  1. Transportation
  2. Engineering of the Mind
  3. Computation and Virtual Reality
  4. Humanoid and other Robots
  5. Medicine & Biology
  6. Artificial Intelligence
  7. Total Control

The illustrations scattered throughout this series provide an impression. Bostrom later gives a taxonomy (Handout 12, Part 2 of this series), where he delves deeper into the subject. For now, let’s state that the second sector, Mind-engineering, will play a prominent role, as it is at the root of the philosophical meaning problem.

Handout 3: Value Limitations

Bostrom identifies six different domains where, even in a scenario of limitless abundance at the stage of technological maturity (Tech-Mat), resources could still be finite. These domains are:

  1. Positional and Conflictual Goods: Even in a hyperabundant economy, only one person can be the richest person; the same goes for any achievement, like standing on the moon or climbing a special mountain.
  2. Impact: A solved world will offer no opportunities for greatness.
  3. Purpose: A solved world will present no real difficulties.
  4. Novelty: In a solved world, Eureka moments, where one discovers something truly novel, will occur very sporadically.
  5. Saturation/Satisfaction: Essentially a variation on novelty, with a limited number of interests. Acquiring the nth item in a collection or the nth experience in a total welfare function will yield ever-diminishing satisfaction returns. Even if we take on a new hobby or endeavor every day, this will be true on the meta-level as well.
  6. Moral Constraints: Ethical limitations that remain relevant regardless of technological advances.

Handouts 4 & 5: Job Security, Status Symbolism and Automation Limits

The last remaining tasks for which humans could be favored are jobs that carry status symbolism for the employer or buyer, where humans are simply considered more competent than robots. These include emotional work like counseling other humans or delivering a sermon in a religious context.

Handout 9: The Dangers of Universal Boredom

(…) as we look deeper into the future, any possibility that is not radical is not realistic.

Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World (English Edition) (S.129).

The four case studies: in a solved world, every activity we currently value as beneficial will lose its purpose, and such activities might then completely lose their recreational or didactic value. Bostrom’s deep studies of shopping, exercising, learning, and especially parenting are devastating under his analytical view.

Handout 10: Downloading and Brain Editing

This is the decisive part, which explains why autopotency is probably one of the hardest and latest capabilities a Tech-Mat civilization will develop.

Bostrom goes into detail about how this could be achieved and what challenges would have to be overcome to make such a technology feasible:

Unique Brain Structures: The individual uniqueness of each human brain makes the concept of “copy and paste” of knowledge unfeasible without complex translation between the unique neural connections of different individuals.

Communication as Translation: the imperfect process of human communication is a form of translation, turning idiosyncratic neural representations into language and back into neural representations in another brain.

Complexity: Directly “downloading” knowledge into brains is hard since billions or trillions of cortical synapses and possibly subcortical circuits for genuine understanding and skill acquisition have to be adjusted with femtoprecision.

Technological Requirements: Calculating the required synaptic changes needs many orders of magnitude more computing power than we might have at our disposal; these requirements are potentially AI-complete, which means that if we want to do this, we need artificial superintelligence first.
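
For a sense of scale, here is a rough, purely illustrative calculation using commonly cited estimates of the brain’s size (the figures are my assumptions, not taken from the book):

    # Rough scale of the editing problem (commonly cited estimates, not from the book).
    NEURONS = 8.6e10          # ~86 billion neurons in a human brain
    SYNAPSES = 1.5e14         # ~10^14 to 10^15 synapses; mid-range assumption
    BYTES_PER_WEIGHT = 4      # one 32-bit value per synaptic weight (assumed)

    storage_bytes = SYNAPSES * BYTES_PER_WEIGHT
    print(f"~{NEURONS:.1e} neurons, ~{SYNAPSES:.1e} synapses")
    print(f"~{storage_bytes / 1e12:.0f} TB just to store one weight per synapse")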

Superintelligent Implementation: Bostrom suggests that superintelligent machines, rather than humans, may eventually develop the necessary technology, utilizing nanobots to map the brain’s connectome and perform synaptic surgery based on computations from an external superintelligent AI.

Replicating Normal Learning Processes: to truly replicate learning, adjustments would need to be made across many parts of the brain to reflect meta learning, formation of new associations, and changes in various brain functions, potentially involving trillions of synaptic weights.

Ethical and Computational Complications: potential ethical issues and computational complexities in determining how to alter neural connectivity without generating morally relevant mental entities or consciousness during simulations.

Comparison with Brain Emulations: transferring mental content to a brain emulation (digital brain) might be easier in some respects, such as the ability to pause the mind during editing, but the computational challenges of determining which edits to make would be similar.

Handout 11: Experience Machine

A variation on Handout 10: Instead of directly manipulating the physical brain, we have perfected simulating realities that give the brain the exact experience it perceives as reality (see Reality+, Chalmers). This might actually be a computationally less demanding task and could be a step on the way to real brain editing. Bostrom takes Nozick’s thought experiment and examines its implications.

Section a discusses the limitations of directly manipulating the brain to induce experiences that one’s natural abilities or personality might not ordinarily allow, such as bravery in a coward or mathematical brilliance in someone inept at math. It suggests that extensive, abrupt, and unnatural rewiring of the brain to achieve such experiences could alter personal identity to the point where the resulting person may no longer be considered the same individual. The ability to have certain experiences is heavily influenced by one’s existing concepts, memories, attitudes, skills, and overall personality and aptitude profile, indicating a significant challenge to the feasibility of direct brain editing for expanding personal experience.

Section b highlights the complexity of replicating experiences that require personal effort, such as climbing Mount Everest, through artificial means. While it’s possible to simulate the sensory aspects of such experiences, including visual cues and physical sensations, the inherent sense of personal struggle and the effort involved cannot be authentically reproduced without inducing real discomfort, fear, and the exertion of willpower. Consequently, the experience machine may offer a safer alternative to actual physical endeavors, protecting one from injury, but it falls short of providing the profound personal fulfillment that comes from truly overcoming challenges, suggesting that some experiences might be better sought in reality.

Section c is about social or parasocial interactions within these Experience machines. The text explores various methods and ethical considerations for creating realistic interaction experiences within a hypothetical experience machine. It distinguishes between non-player characters (NPCs), virtual player characters (VPCs), player characters (PCs), and other methods such as recordings and guided dreams to simulate interactions:

1. NPCs are constructs lacking moral status that can simulate shallow interactions without ethical implications. However, creating deep, meaningful interactions with NPCs poses a challenge, as it might necessitate simulating a complex mind with moral status.

2. VPCs possess conscious digital minds with moral status, allowing for a broader range of interaction experiences. They can be generated on demand, transitioning from NPCs to VPCs for deeper engagements, but raise moral complications due to their consciousness.

3. PCs involve interacting with real-world individuals either through simulations or direct connections to the machine. This raises ethical issues regarding consent and authenticity, as real individuals or their simulations might not act as desired without their agreement.

4. Recordings offer a way to replay interactions without generating new moral entities, limiting experiences to pre-recorded ones but avoiding some ethical dilemmas by not instantiating real persons during the replay.

5. Interpolations utilize cached computations and pattern-matching to simulate interactions without creating morally significant entities. This approach might achieve verisimilitude in interactions without ethical concerns for the generated beings.

6. Guided dreams represent a lower bound of possibility, suggesting that advanced neurotechnology could increase the realism and control over dream content. This raises questions about the moral status of dreamt individuals and the ethical implications of realistic dreaming about others without their consent.

to be continued