2026-05-11

模倣子 Mortal Computation

 Memetics index





I just had a flash about my own mental health issues that might have relevance to our discussions of consciousness.


Imagine that certain quantities of identity, such as self-esteem (the right to be protected, to participate, etc.) or enjoyment (the right to pursue one’s interests, the right to feel that one’s own interests and enjoyment are existentially important, etc.), form a kind of hypersurface.


So the “imitation” of actual mental health may be accomplished by pinning down a few points on that surface, such as “I like movies,” “I like sports,” “I’m a good person/worker/Christian,” or “I avoid situations where I might encounter conflict.”


The problem is that such a hack is a sparse, discontinuous scatter of points, and is therefore nowhere differentiable and not meaningfully integrable.
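Here’s a toy sketch of what I mean, in Python (entirely my own construction, with made-up anchor values): a smooth “surface” has a derivative everywhere you ask for one, while a handful of isolated anchor points has no value at all in between, so there’s nothing to differentiate, and, being a measure-zero set, nothing to meaningfully integrate.

```python
import numpy as np

# Toy contrast (made up for illustration): "actual" mental health as a smooth
# function over situations, vs. an "imitation" pinned down at a few points only.
smooth = lambda x: np.tanh(x)                # defined and differentiable everywhere
anchors = {-2.0: 0.9, 0.5: 0.7, 3.0: 0.95}   # the sparse hack: values ONLY here

x, h = 1.0, 1e-6
print((smooth(x + h) - smooth(x)) / h)  # difference quotient exists: ~0.42

# The sparse version has no value at x = 1.0 (not an anchor), so there is
# nothing to take a difference quotient of; and a finite point set has
# measure zero, so integrating over it yields nothing meaningful.
print(anchors.get(x))  # -> None: the "surface" simply isn't there between anchors
```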


That’s probably a whole tin of worms 🪱 unto itself, but let’s move on for the moment.


The “mortal substrate”* question may rush to the rescue here.


* mortal computation, I think 


I’ll have to listen to that paper again, since the more I think about it, the more I find the “mortal substrate” idea pseudo-argumentative and question-beggy.


Like yet another “here’s another implementation difference so human consciousness wins again, nyah, nyah, nyah!”


Oh, just that the brain 🧠 implements consciousness one way, and silicon 🤖 is a different implementation. 


But I think it does add a “smoothing effect,” which means you get a tighter approximation for less energy and hardware; you get some non-linearities (I always worry I’m misusing that term), which give you differentiability and integrability everywhere, while a digital substrate, or a severely traumatized human one, gives you singularities everywhere.


You can try to approximate it better and better with more test cases and mantras or whatever, but always, and especially in exactly those problematic edge cases you’re trying to deal with, fix, and close up, you get hallucinations, wrong answers that just seem so dang right, or plain non-convergence.
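A standard concrete illustration of that edge-case blow-up is Runge’s phenomenon (this sketch is mine, nothing from the podcast): force one global polynomial through more and more equally spaced samples of a perfectly tame function, and the fit diverges near the edges of the interval rather than converging.

```python
import numpy as np

# Runge's phenomenon: more "test cases" make the edge behavior worse, not better.
target = lambda x: 1.0 / (1.0 + 25.0 * x**2)    # a perfectly smooth function

for n in (5, 9, 15):                             # number of sample points
    xs = np.linspace(-1.0, 1.0, n)
    coeffs = np.polyfit(xs, target(xs), n - 1)   # polynomial through all n points
    edge = 0.95                                  # a problematic edge case
    err = abs(np.polyval(coeffs, edge) - target(edge))
    print(f"n={n:2d}  error at the edge = {err:.3f}")  # grows as n grows
```

More points buy a better fit in the middle and a worse one exactly where the trouble already was, which is the shape of the complaint above.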


The idea 💡 that a drug 💊 impacts the brain 🧠 in such-and-such a way does defeat the incremental neuron-with-chip replacement analogy, but it also seems like a kind of red herring 🐟 or question-begging.


The stuff they talk about with “mortal substrate”* strikes me as the presence of a kind of hysteresis in the wetware 🧠 (biological) substrate, eg, the more you think about a thing or skill, the more ingrained it gets. Implementing this same kind of phenomenon, which is inherent in the meat-brain 🧠 🥩 platform, is actually quite expensive in the straight silicon 🤖 approach, requiring a lot more hardware, more time, and thus more power, to not even quite do it.


A physics/mathematics question: do spaces whose points exhibit hysteresis have higher granularity, or any other interesting properties?


I’m thinking along lines such as: even if the number of states is the same, the number of pathways by which a given state may be approached is lessened, which represents a higher level of learning.
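A minimal toy of that (my own construction, a two-threshold, Schmitt-trigger-style unit): the same final input leaves the unit in different states depending on the path taken, so fewer input histories reach any given state.

```python
# Toy hysteretic unit: the state depends on the input's history, not just
# its current value (two thresholds, like a Schmitt trigger).
class HystereticUnit:
    def __init__(self, on=0.8, off=0.2):
        self.on, self.off, self.state = on, off, 0

    def step(self, x):
        if x >= self.on:
            self.state = 1       # latch on only past the high threshold
        elif x <= self.off:
            self.state = 0       # release only below the low threshold
        return self.state        # in between, history decides

u1, u2 = HystereticUnit(), HystereticUnit()
path_up   = [0.1, 0.9, 0.5]   # spike past 0.8, then settle back to 0.5
path_flat = [0.1, 0.5, 0.5]   # approach 0.5 without ever latching
print([u1.step(x) for x in path_up])    # -> [0, 1, 1]: remembers the excursion
print([u2.step(x) for x in path_flat])  # -> [0, 0, 0]: same final input, off
```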


This idea of hysteretic N-spaces might have a lot of relevance to macromemetics as well, which should surprise no one.


Have you heard of brain organoids?


I think that’s the term…


Had another listen. It seems shot through with question-begging à la “why IS consciousness impossible to implement on non-biological substrates?”


More on that later.


One thought is how trauma victims may look like allopoietic entities as well, rather like factories, etc.


More on that… and the ramifications for discrete manifolds and therapy directions.


The landscapes of a digital intelligence’s and a trauma victim’s perceptual/cognitive manifolds may have useful similarities, eg, their rigidity, propensity to conflations (hallucination), undifferentiability, and widespread singularities.


The podcast listed a final danger, ie, not so much that we create a real intelligence but that we “computerify” ourselves in the process.


I submit that this already happened long ago, eg, with calling computer storage “memory” and such.


Pathology and edge/corner cases shed light, showing that something different is happening behind the scenes, eg, dissociative disorder, trauma (same thing), amnesia, perceptual anomalies, addictions, etc.


“Normie” reactions to and prescriptions for addiction and dissociative identity disorder ring like “that’s not how memory works” (because of course it works the same as a computer’s) or “just download better software/tweak some parameters” (because of course self-destructive/dysfunctional behavior is just wrong, bad code, with no underlying bases that make any sense).


It might be useful to look at digital intelligences and trauma cases as sparse, punctate spaces mapped onto hypermanifolds.


This has ramifications that may shed light on the nature of consciousness, the refinement of AI 🤖, and the treatment of dissociative disorders.


More to come on this…


Therapists regularly corner patients about their irrational behavior, probably by forcing an alter to acknowledge his/her own limitations and logical inconsistencies, as well as irrational fears (this is all part and parcel of dissociative therapy, among others).


It’s sometimes an “ah-ha” moment leading immediately to healing, but not necessarily, rather like an AI 🤖 that “hallucinates.”


Alcoholics and other mental patients confabulate similarly, and I tend to agree that this is a better term than “hallucination” for AIs 🤖, but not for the reason given, ie, that hallucination is a “feeling” rather than an “action,” and is therefore beyond the ken of mere automata.


It’s also questionable that confabulation can be blithely classed as an “action”: another “computerification” (the opposite of the anthropomorphization they incessantly worry about in the other direction), where a brain 🧠 function is tacitly classed as algorithmic… because it just “seems” that way. Analogies become models, models become descriptions, descriptions become assumptions about physical reality; we must be cautious in all directions.

