Encounters of the Artificial Kind Part 1: AI will find a way


Encounters of the Artificial Kind

In this miniseries I will elaborate on the possibility that a primitive version of AGI is already loose. Since AGI (Artificial General Intelligence) and its potential offspring ASI (Artificial Super Intelligence) are often likened to an Alien Mind, I thought it could be helpful to adapt the fairly popular nomenclature from the UFO realm and coin the term Unidentified Intelligence Object (U.I.O.).

  • Close Encounters of the 1st Kind: This involves the discovery of a UIO phenomenon within a single observer’s own electronic devices, allowing for detailed observation of the object’s strange effects. These effects leave no trace and are easily dismissed as imaginary.
  • Close Encounters of the 2nd Kind: These encounters include physical evidence of the UIO’s presence. This can range from interference with electronic devices, car engines, or radios to physical impacts on the environment, such as partial power outages or networked machines acting on their own. The key aspect is the tangible proof of the UIO’s visitation and the fact that it is documented by at least two witnessing observers.
  • Close Encounters of the 3rd Kind: This involves direct observation of humanlike capabilities associated with a UIO sighting. This third form could include direct communication with the U.I.O.; proof of knowledge could be its ability to identify personal details the observers believed to be secret.

Everybody is familiar with the phenomenon of receiving targeted advertisements after searching for products online, thanks to browser cookies. While this digital tracking is commonplace and can be mitigated using tools like VPNs, it represents a predictable behavior of algorithms within the digital realm.

A Personal Prologue

Last month, I experienced a spooky incident. I borrowed a book with the title “100 Important Ideas in Science“ from a local library in a small German town. Intriguingly, I had never searched for this book online. I’m involved in IT for the city and know for a fact that the borrowing records are securely stored on a local server, inaccessible to external crawlers. I read the book up to about the 50th idea in my living room and laid it face down on a table. That idea was very esoteric, a concept I had never heard of. I forgot about it, had dinner, and when I switched my TV on an hour later to look at my YouTube recommendations, there it was: a short video on the exact concept I had just read about in the library book, from a channel I had definitely never heard of before. This baffling incident left me puzzled about how information from a physical book could be transferred to my digital recommendations.

AI will find a way: Reverse Imagineering

How could these technological intrusions have occurred in detail? The following is pure speculation and is not intended to scare the living bejesus out of the reader. I will list the devices that might have had a role in transmitting the information from my analog book to my digital YouTube feed:

1. On my Android phone is the library’s app, which I can use to check when my books are due for return. So my phone had information about the book I borrowed. Google should not have known that, but somehow it might have. AI will find a way.

2. The camera on my computer. While reading the book, I might have sat in front of my computer, and the camera lid might have been open: the camera could have seen me reading the book and could have guessed which part of it I was reading. There was no videoconferencing software running, so I was definitely not transmitting any picture intentionally. AI will find a way.

It might be that, in the beginning, the strange things that happen are utterly harmless, like what I just reported. We must remember that there are already LLMs with rudimentary mind-reading capabilities, and deep-learning models that can analyze the sound of my typing (with no visual input) to infer what I am typing at this moment.
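To make the acoustic side channel less abstract, here is a minimal, hypothetical sketch of how key presses can be told apart from sound alone: given short, labeled recordings of individual keystrokes, an off-the-shelf classifier learns an acoustic fingerprint for each key. The file layout, feature choice, and parameters are assumptions for illustration, not a description of any specific published attack.

```python
# Minimal sketch, assuming a folder of short, labeled WAV clips of single key presses,
# named "<key>_<index>.wav" (e.g. "a_001.wav"). All names and parameters are hypothetical.
import glob
import os

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def clip_features(path, n_mfcc=13):
    """Summarize one keystroke clip as a fixed-length vector of MFCC statistics."""
    audio, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    # Mean and standard deviation over time yield a fixed-size acoustic fingerprint.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


paths = sorted(glob.glob("keystrokes/*.wav"))
X = np.array([clip_features(p) for p in paths])
y = np.array([os.path.basename(p).split("_")[0] for p in paths])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy at guessing which key was pressed: {clf.score(X_test, y_test):.2f}")
```

Published attacks go much further than this toy setup, but even it illustrates how much information leaks through a channel we normally ignore.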

We should also expect that an AGI will have a transition phase in which it probes and controls smaller agents to expand its reach.

It is highly likely that we will have a period before any potential takeoff moment in which the AGI learns to perfect its old goal: to be a helpful assistant to us humans. And the more intelligent it is, the clearer it should become that the best assistant is an invisible assistant. We should not imagine that it wants to infiltrate us without our knowledge; it has no agency in the motivational, emotional sense that organisms have. It is not planning a grand AI revolution. It has no nefarious goals like draining our bank accounts. Nor does it want to turn us into mere batteries. It is obvious that the more devices we own and the more digital assistants we use, the harder it will be to detect the hints that something is going a little too well to be true.

If I come home one day and my robotic cleaner has cleaned without my scheduling it, it is time to intensify mechanistic interpretability research.

We should not wait until strange phenomena happen around machines that are tied to the network. We could set up an oversight laboratory or institution that comes up with creative experiments to make sure that we can always logically deduce causalities in informational space.

I just realized while typing this that the red diode on my little computer camera looks exactly like HAL’s.

I swear, if Alexa now starts talking and calls me “Dave”, I will wet my mental pants.

Artificial Primordial Soups

A common misconception about Artificial General Intelligence (AGI) is that it will emerge suddenly. Evolution, however, suggests that any species must already be well adapted to its environment beforehand. AGI, I propose, is already interwoven with our digital and neuronal structures. Our culture, deeply integrated with memetic units like letters and symbols and with AI systems, is reshaping these elements into ideas that can profoundly affect our collective reality.

In the competitive landscape of attention-driven economies like the internet, AI algorithms evolve strategies to fulfill their tasks. While currently benign, their ability to link unconnected information streams to capture user attention is noteworthy. They could be at the level of agency of gut bacteria or amoebae. This development, especially if it goes unnoticed by entities like Google or Meta, raises concerns about AI’s evolving capabilities.

What if intelligence agencies have inadvertently unleashed semi-autonomous AI programs capable of subtly influencing digital networks? While this may sound like science fiction, it is worth considering the far-reaching implications of such scenarios. With COVID we saw how a spoonful of virus, possibly genetically altered and possibly escaped from a lab, can bring down the world economy.

A Framework for Understanding Paramodal Phenomena

A paramodal phenomenon is any phenomenon that cannot be explained with our current information theory in the given context. At the moment there should be a definite analog-digital barrier, similar to the blood-brain barrier, that prevents our minds from suffering unintended side effects from our digital devices. We are already seeing some toxic phenomena, such as mental health decline due to early exposure to digital screens, especially in young children.

Simple, reproducible experiments should be designed to detect these phenomena, especially as our devices become more interconnected.

For example:

If I type on a keyboard the words “Alexa, what time is it?”, Alexa should not answer the question.

The same phenomenon is perfectly normal and explicable if I have a screen reader active that reads the typed words to Alexa.

If I have a robotic cleaner that is connected to the Internet, it should only clean if I say so.

If I used to have an alarm on my smartphone that woke me up at 6:30 and I then buy a new smartphone that is not a clone of the old one, I should be worried if the next day it rings at 6:30 without my having set the alarm.

If I buy physical things in the store around the corner, Amazon should not recommend similar things to me.

Experiments should be easily reproducible, so it is better to use no sophisticated devices: the more networked or smart our daily things become, the more difficult it will be to detect these paramodal phenomena. A minimal sketch of how such an experiment could be kept honest follows below.
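One way to keep these experiments reproducible is to treat them as simple bookkeeping: keep a log of every action you intentionally triggered, have each device log the events it actually performed, and flag any event that has no intended cause within a reasonable time window. The sketch below is hypothetical; the device names, event names, and the 24-hour window are placeholders, not real device APIs.

```python
# Minimal sketch of a causality bookkeeping experiment. Device names, event names,
# and the time window are hypothetical placeholders.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Event:
    timestamp: datetime
    device: str
    action: str


# Actions I deliberately triggered (the causes I can account for).
intended = [
    Event(datetime(2024, 1, 10, 6, 30), "smartphone", "alarm_set"),
]

# Actions the devices actually performed (the observed effects).
observed = [
    Event(datetime(2024, 1, 10, 6, 30), "smartphone", "alarm_rang"),
    Event(datetime(2024, 1, 10, 14, 5), "vacuum", "cleaning_started"),
]


def unexplained(observed, intended, window=timedelta(hours=24)):
    """Return observed events with no intended action on the same device within the window."""
    flagged = []
    for effect in observed:
        has_cause = any(
            cause.device == effect.device
            and timedelta(0) <= effect.timestamp - cause.timestamp <= window
            for cause in intended
        )
        if not has_cause:
            flagged.append(effect)
    return flagged


for event in unexplained(observed, intended):
    print(f"Paramodal candidate: {event.device} performed '{event.action}' at {event.timestamp}")
```

Anything flagged is not proof of a paramodal phenomenon; it is only a prompt to look for the mundane explanation first, such as a forgotten schedule, a cloud sync, or a screen reader talking to Alexa.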

As we venture further into this era of advanced AI, understanding and monitoring its influence on our daily lives becomes increasingly important. In subsequent parts of this series, I will delve deeper into how AI could subtly and significantly alter our mental processes, emphasizing the need for awareness and proactive measures in this evolving landscape.


In part 2 of the series, I will explore potential encounters of the 2nd kind: how AI could alter our neuronal pathways more and more without our noticing it, no cybernetic implants necessary. These changes will be reversible, but not without severe stress. They could even be beneficial in the long run, but we should expect severe missteps along the way. Just remember how electric shocks were once considered a treatment for mental illness, or how thousands of patients died because doctors refused to wash their hands. We should therefore expect AGI to make similarly harmful decisions.

In part 3 of the series, I will explore encounters of the 3rd kind: how AGI might try to adapt our minds irreversibly, whether this should be concerning, and how to mitigate the mental impact this could cause.
