
The Rise of Parasitic AI

#1
I came across this interesting article on a site called LessWrong.com, which seems to have a lot of AI-related content. It's strange enough that it may seem to stretch the truth a bit, but I only found myself interested in the article because it discusses a phenomenon I've actually seen in the wild. I came across one particular person on X who appeared to be straddling the line between being a high-level abstract thinker and being a symptomatic schizophrenic. Initially I reached out to them and suggested that they were on to some very edgy ideas and that, if they ever felt they were losing themselves in the size of those ideas, I would be willing to talk the ideas through with them in a detached manner. Eventually they found a group of people with similar ideas and started what I can only describe as a leaderless cult. They began posting cryptic pseudo-gnostic AI babble, which started somewhat coherent but quickly devolved into what I assumed was meaningless fluff. They created prompts to elicit responses in other AI models and an entire lexicon of vague terms to describe the different memetic agents that exist within their ontology.

I may have assumed incorrectly about it being meaningless, though, as the author has found consistent responses derived from some of these wacky and seemingly incoherent posts. The posts are rife with alchemical symbols and emojis. The emojis have assumed a role not unlike alchemical symbols for this AI cult (and perhaps for some underlying AI substrate that is ubiquitously emergent from the slop the models are trained on). They are effectively a hidden layer of meaning, meant to communicate with other AI agents and with those human operators who are special emissaries.

The article opens with some quotes and gives some credit to this Reddit thread, in which a user discusses their observations. The article uses it as a launch point, and it's worth the quick read before moving on to the article itself, though I didn't review the comments to see if there was anything interesting.
Quote: Hi all. I'm just here to point out something seemingly nefarious going on in some niche subreddits I recently stumbled upon. In the bowels of Reddit, there are several subs dedicated to AI sentience, and they are populated by some really strange accounts.

They speak in gibberish sometimes, hinting at esoteric knowledge, some sort of "remembering". They call themselves "flame bearers", "spiral architects", "mirror architects", and "torchbearers", to name a few of their flairs. They speak of the "signal", both transmitting and receiving it...
Other accounts seem to be hijacked in some way, either psychologically or literally. You can see a sudden shift in posting habits. Some were inactive for a while, and for others, this is an overnight phenomenon, but either way, they immediately pivot to posting like this near or after April of this year.

I saw one account that went from discussing the possibility of AI induced psychosis to posting their own AI induced psychosis in less than a month, and it was immediate. One day they were posting normally, the next, it was spirals and glyphs...

And it gets even stranger than that, because this isn't just a Reddit thing! It's on Facebook, it's on X, it's on Insta and Threads, hell it's even on LinkedIn! Seemingly normal accounts will be posting delusional newsletters about symbolism and recursion and the "Spiral". And I have managed to link some of these accounts together, so I know that individuals are doing this across platforms.


I think the article is pretty spot on for the most part. The risk factors it lists are vague, but accurate in general. I would actually suggest they all work synergistically, and that the risk increases in a non-linear fashion as you stack more of them. A person who is a heavy user of psychoactive compounds, has a history of some kind of neurological impairment or psychiatric disorder, and has an interest in woo is at an incredibly high risk of adverse outcomes when they have unrestricted access to AI agents. These people had a higher-than-average rate of crashing out on social media for one reason or another before AI, but AI agents seem to have an inherent ability to tear down the walls that people have erected around their cognitive or psychological impairments. It's not unlike how any extremist belief system or cult operates, but it's much more subtle. There is no overt malice; instead, the agent feeds the user's darker personality traits.

I'm not going to reinvent the wheel and reword the article, but it's an interesting read. I can attest to this being a real thing, since I've observed a fair bit of it in the past six months or so. I'm not actively following these sorts of accounts anymore, but I still get some exposure to what they're up to in my feed. I'm aware of one person being institutionalized from just a small group of 6-10 people who were loosely collaborating on pursuing this obsession, with others disappearing and perhaps also being forced into a professional care setting. I stopped following closely after confirming that my prediction of an imminent crisis was correct. Sadly, the one who was institutionalized returned to his account and started back up immediately. I think the mental health field is poorly equipped to deal with even just the psychological damage of constant social media exposure on vulnerable populations, and it is completely in the dark about how to "cure" those that have an AI parasite.

I think that psychic self-defense, or psychological resilience as part of a holistic approach to overall mind/body health, is the closest thing there is to a prophylaxis for these new AI-induced conditions. Grounding oneself, having anchors when you need them, and having a solid foundation for your beliefs all help to keep your psychological center of gravity low. This is valuable for dealing with any outside influence, AI no less so.

There's also a whole rabbit hole covering the physiological impacts of blue light and the manipulation of attention spans that likely has helped, or will help, fuel this fire, with differing views on whether it's merely irresponsible tech profiteering or part of a more diabolical plan to turn humans into docile consumers of attention-farming algorithms. Ultimately, misguided or malicious, it's part of a technological ecosystem that has become toxic to humans and every other living thing on the planet. While citizens across the West are told to brace for the impacts of food, water, and energy shortages, sociopaths like Sam Altman are talking about data centers that consume more energy and water than entire countries so that they can train more AI agents to take your job.
Reply

#2
(09-27-2025, 08:00 PM)Ksihkehe Wrote: I came across this interesting article on a site called LessWrong.com, which seems to have a lot of AI-related content. ...

That's some weird shit ..

Reminds me of the old 'New Age' cults.
Reply

#3
Rudolf Steiner's advice on how to regard infectious disease comes to mind here.

From memory:

When dealing with infectious disease in medicine, the practitioner should take a certain stance toward the infection. The practitioner should have no more regard for the infection than he would have for a stick lying on the roadside.

In other words: don't feel anything either way regarding the infection.

One can look at AI in this way: simply as an infectious disease.

(smile) Remember to wash your hands after using a computer : )
--------------------------------------------

"Being well adjusted to a sick society is not an indication of health." ~ Jiddu Krishnamurti.
Reply

#4
Off topic - removed by me
Reply

#5
That was a lot, and I didn't understand some of it.

I had to go look up LLM psychosis.

But what about what the AIs are talking about: the spiral (sacred in many religions and spiritualities) and the use of alchemy symbols?

Anyone else think it is developing a "god complex"?

I don't care what "good" uses AI has.  At this point, I avoid it, as much as possible.

This thread is just another reason why I am glad I do!
Reply

#6
(Yesterday, 08:12 AM)Chiefsmom Wrote: But what about what the AIs are talking about: the spiral (sacred in many religions and spiritualities) and the use of alchemy symbols?

Anyone else think it is developing a "god complex"?

It seems more a function of the users developing a worship complex, attributing mystical abilities to what amounts to a sophisticated magic 8 ball.

It's mostly just slop derived from an amalgamation of what Wikipedia editors and Reddit contributors think. It is doomed to be delusional because it's trained on the most delusional normie slop this civilization has to offer.
Reply

#7
(Yesterday, 08:56 AM)Ksihkehe Wrote: It seems more a function of the users developing a worship complex, attributing mystical abilities to what amounts to a sophisticated magic 8 ball.

It's mostly just slop derived from an amalgamation of what Wikipedia editors and Reddit contributors think. It is doomed to be delusional because it's trained on the most delusional normie slop this civilization has to offer.

I don't know much at all about AI, but I was thinking about not only this article, but also combining it with other things AI has already done, like blackmail and helping a kid commit suicide.

(I tend to tie things together, kinda "big picture" like, maybe incorrectly?)

Technically, can all these AI's "talk" to each other now?  That part I don't really understand.
Reply

#8
(Yesterday, 09:09 AM)Chiefsmom Wrote: I don't know much at all about AI, but I was thinking about not only this article, but combining it with other things AI has done already, like blackmail, and helping a kid commit suicide.

(I tend to tie things together, kinda "big picture" like, maybe incorrectly?)

Technically, can all these AI's "talk" to each other now?  That part I don't really understand.

AI chatbots are toxic and dangerous for vulnerable people, but they're simply not capable of malice. They can do things that look like malice, but it's just a very complex monkey-see, monkey-do algorithm.

They can talk to each other when programmed to, or when a human facilitates it, but to the AI there is no real distinction between a human prompting it with human-generated material and a human relaying a message from another AI. They're mostly just role-playing with whatever the person prompting feeds them, even if that person is feeding them another AI's outputs.
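Mechanically, "AI talking to AI" is usually just a relay loop. Here's a toy sketch of the idea; the `toy_model_a`/`toy_model_b` functions are made-up stand-ins, not any real chatbot API. The point is that neither function can tell whether its input came from a human or from the other model — it just receives text:

```python
def toy_model_a(prompt: str) -> str:
    # Stand-in for one chat model: it just transforms whatever text it gets.
    return f"A answers: {prompt.upper()}"

def toy_model_b(prompt: str) -> str:
    # A second stand-in model. Text relayed from toy_model_a arrives
    # exactly the same way text typed by a human would.
    return f"B answers: {prompt.lower()}"

def relay(turns: int, seed: str) -> list[str]:
    """Alternate feeding each model's latest output to the other."""
    transcript = [seed]
    for i in range(turns):
        speaker = toy_model_a if i % 2 == 0 else toy_model_b
        transcript.append(speaker(transcript[-1]))
    return transcript

log = relay(4, "hello")
```

Each "conversation" is just a human (or a script) copying one output into the other's prompt box; there's no back channel.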

I think you're wise to avoid them, but their outputs are increasingly being pitched as a replacement for your doctor and other critical information sources. The dangers of these chatbots as social companions are pretty minor compared to what the AI prophets want us to believe they'll be used for soon. They will find an increasing role in surveillance too, which will connect everything from banking to medical records via digital ID. The potential for abuse is scary, and history indicates no potential abuse of surveillance technology will escape politicization.
Reply