2026-05-11

ๆจกๅ€ฃๅญ Mortal Computation

 Memetics index





I just had a flash about my own mental health issues that might have relevance to our discussions of consciousness 


Imagine that some quantities of identity such as self-esteem (the right to be protected, participate, etc) or enjoyment (the right to pursue one’s interests, the right to feel that one’s own interests and enjoyment are existentially important, etc) are a kind of hypersurface. 


So the “imitation” of actual mental health may be accomplished by a few points on that surface, such as “I like movies” or “I like sports” or “I’m a good person/worker/Christian” or “I avoid situations where I might encounter conflict”


The problem is that such a hack is sparse and discontinuous, and therefore non-differentiable and not meaningfully integrable 
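To make that concrete, here’s a toy sketch (anchors and numbers all hypothetical, not from anywhere): a “felt self-worth” surface known only at a few sparse anchor points, queried by nearest-neighbor lookup, versus a smooth underlying function. A finite-difference derivative is flat almost everywhere on the sparse version and blows up at the seams between anchors, which is roughly what “not meaningfully differentiable” cashes out to.

```python
import math

# Toy sketch (all values hypothetical): a "felt self-worth" surface
# known only at a few sparse anchor situations, queried by
# nearest-neighbor lookup, vs. a smooth underlying function.
anchors = {0.0: 0.2, 0.3: 0.9, 0.7: 0.1, 1.0: 0.8}  # situation -> worth

def sparse_worth(x):
    # The "imitation": piecewise constant, hence discontinuous.
    return anchors[min(anchors, key=lambda a: abs(a - x))]

def smooth_worth(x):
    # Stand-in for a genuinely integrated sense of worth.
    return 0.5 + 0.4 * math.sin(2 * math.pi * x)

def num_derivative(f, x, h=1e-6):
    # Central finite difference.
    return (f(x + h) - f(x - h)) / (2 * h)

print(num_derivative(sparse_worth, 0.10))  # flat patch -> 0.0
print(num_derivative(sparse_worth, 0.15))  # seam between anchors -> blows up
print(num_derivative(smooth_worth, 0.15))  # smooth -> finite everywhere
```

Between the anchors the hack answers with whatever point happens to be nearest, so small changes in situation produce either no change at all or a violent jump, never a gradient you could follow.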


That’s probably a whole tin of worms ๐Ÿชฑ unto itself, but let’s move on for the moment


The “mortal substrate” question may rush to the rescue here


* mortal computation, I think 


I’ll have to listen to that paper again, since the more I think about it the more I find the “mortal substrate” idea pseudo-argumentative and question-beggy


Like yet another “here’s another implementation difference so human consciousness wins again, nyah, nyah, nyah!”


Oh, just that the brain ๐Ÿง  implements consciousness one way, and silicon ๐Ÿค– is a different implementation. 


But I think it does add a “smoothing effect,” which means you get a tighter approximation for less energy and hardware; you get some non-linearities (I always worry I’m misusing that term), which give you differentiability and integrability everywhere


While a digital system, or a severely traumatized human, gives you singularities everywhere 


You can try to approximate it better and better with more test cases and mantras or whatever, but always, and especially in those problematic edge cases you’re trying to deal with, fix, or close up, you get hallucinations, wrong answers that just seem so dang right, or outright non-convergence 


The idea ๐Ÿ’ก that a drug ๐Ÿ’Š impacts the brain ๐Ÿง  in such and such a way does defeat the incremental neuron-with-chip analogy, but it also seems like a kind of red herring ๐ŸŸ or question-begging 


The stuff they talk about with “mortal substrate”* strikes me as the presence of a kind of hysteresis in the wetware ๐Ÿง  (biological) substrate, eg, the more you think about a thing or skill, the more ingrained it gets, whereas implementing this same kind of phenomenon, inherent in the meat-brain ๐Ÿง ๐Ÿฅฉ platform, is actually quite expensive in the straight silicon ๐Ÿค– approach, requiring a lot more hardware, more time, and thus more power to not even quite do it 


A physics/mathematics question: do spaces whose points exhibit hysteresis have higher granularity, or any other interesting properties?


I’m thinking along lines such as how even if the number of states is the same, the number of pathways whereby given states may be approached is lessened, which represents a higher level of learning
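A minimal sketch of that path-dependence, assuming a Schmitt-trigger-style hysteretic element (the thresholds and inputs are made up for illustration): the state you end in depends on the route the input took, not just its final value, so the history itself does work that a memoryless element would need extra machinery to reproduce.

```python
# Toy hysteretic element, Schmitt-trigger style (thresholds invented).
# The state flips up only above `hi` and back down only below `lo`;
# between the two thresholds the previous state persists, ie, memory.

def run(inputs, lo=0.3, hi=0.7, state=0):
    for x in inputs:
        if x > hi:
            state = 1
        elif x < lo:
            state = 0
        # lo <= x <= hi: state unchanged (the hysteresis band)
    return state

# Same final input (0.5), different histories, different final states:
print(run([0.1, 0.9, 0.5]))  # -> 1  (last crossing was upward)
print(run([0.9, 0.1, 0.5]))  # -> 0  (last crossing was downward)
```

Two trajectories ending at the same input value land in different states, which is exactly the “fewer pathways per state, more information in the pathway” intuition above.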


This idea of hysteresic N-spaces might have a lot of relevance to macromemetics as well, which should surprise no one.  


Have you heard of brain organoids?


I think that’s the term…


Had another listen. It seems shot through with question-begging à la “why IS consciousness impossible to implement on non-biological substrates?”


More on that, later. 


One thought is how trauma victims may look like allopoietic entities as well, rather like factories, etc


More on that…and ramifications for discrete manifolds and therapy directions 


The landscape of a digital intelligence and a trauma victim’s perceptual/cognitive manifolds may have useful similarities, eg, their rigidity, propensity to conflations (hallucination), non-differentiability, and widespread singularities. 


The podcast listed a final danger, ie, not so much that we create a real intelligence but that we “computerify” ourselves in the process


I submit that this has already long ago happened, eg, with calling computer storage “memory” and such 


Pathology and edge/corner cases shed light, showing that something different is happening behind the scenes, eg, dissociative disorder, trauma (same thing), amnesia, perceptual anomalies, addictions, etc


“Normie” reactions to and prescriptions for addiction and dissociative identity disorder ring like “that’s not how memory works” (because of course it works the same as a computer) or “just download better software/tweak some parameters” (because of course self-destructive/dysfunctional behavior is just wrong, bad code, and has no underlying bases that make any sense)


It might be useful to look at digital intelligences and trauma cases as sparse, punctate spaces mapped onto hypermanifolds.


This has ramifications that may shed light on the nature of consciousness, the refinement of AI ๐Ÿค– and the treatment of dissociative disorders


More to come on this…


Therapists regularly corner patients around their irrational behavior, probably by forcing an alter to acknowledge his/her own limitations and logical inconsistencies as well as irrational fears (this is all part and parcel of dissociative therapy, among others)


It’s sometimes an “ah-ha” moment leading immediately to healing, but not necessarily, rather like an AI ๐Ÿค– who “hallucinates”


Alcoholics and other mental patients confabulate similarly, and I tend to agree that this is a better term than “hallucination” for AIs ๐Ÿค– but not for the reason given, ie, that hallucination is a “feeling” rather than an “action” and therefore beyond the ken of mere automata. 


It’s also questionable that confabulation can be blithely classed as an “action” — another “computerification” (as opposed to the anthropomorphization which they incessantly worry about in the other direction) where a brain ๐Ÿง  function is tacitly classed as algorithmic… because it just “seems” that way. Analogies become models, become descriptions, become assumptions about physical reality—we must be cautious in all directions. 


They touch on this, ie, humans make the tests so that they alone can pass them, no animals or other designs allowed, and the term I was searching for is essentialism. 


It’s like question-begging or putting up strawmen, but here more applicable perhaps. 



I thought of another implication of the podcast…oh yes, I think I’ve got it. 


It’s to do with analogues between network behavior in response to increased load, how individuals in different cultures respond to questions when they don’t know the answer or when they know a bad outcome for the asker will not come back to bite them*, and the confabulation behavior of AI ๐Ÿค– 


Computer networks respond to increased loads by resending and ultimately dropping packets. Packets travel by different routes when loads need to be distributed, arrive in different sequences, and are reassembled at the destination, with resend requests sent for bits that didn’t arrive. 
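A toy receiver along those lines (the message, packet split, and drop pattern are all invented for illustration): buffer whatever arrives by sequence number, ask for resends to fill the gaps, and only then reassemble in order.

```python
# Toy receiver sketch (message and drop pattern invented): buffer
# packets by sequence number, request resends for the gaps, and
# reassemble in order once everything has arrived.

def reassemble(arrived, total):
    """arrived: dict mapping sequence number -> payload."""
    missing = [seq for seq in range(total) if seq not in arrived]
    if missing:
        return None, missing                 # ask sender to resend these
    return "".join(arrived[s] for s in range(total)), []

in_flight = {2: "wor", 0: "hel"}             # packets 1 and 3 dropped
msg, resend = reassemble(in_flight, total=4)
print(resend)                                # -> [1, 3]

in_flight.update({1: "lo ", 3: "ld"})        # sender answers the resend
msg, resend = reassemble(in_flight, total=4)
print(msg)                                   # -> hello world
```

Note the receiver never “makes up” the missing payload; it either waits or asks again, which is precisely the design choice consciousness seems not to make.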


This in ways resembles the “many drafts” model of consciousness. 


One question is whether to send a resend request, a send failure, or to confabulate a message; and similar questions exist on the sending side: respond properly/improperly to resend requests, ignore them, press on despite failure messages or lack of acknowledgments and keep sending more stuff, stop sending midway, or other such. 


This is all Byzantine generals stuff, but how does consciousness deal with it? Like a fish ๐Ÿ  out of water ๐Ÿ’ฆ: not gracefully, just flop around or have a seizure and hope for a state reboot. 


I subscribe to the notion, mentioned in the podcast, I think, that consciousness serves to “model” the information from the senses into some kind of set of expectations, on a many drafts approach applied to sensory input flowing in over a distributed time period, and when that expectation begins to clash egregiously with what keeps coming in, the consciousness starts to confabulate and ultimately falls into any of a number of usually well-known and unattractive failure modes (a psycho-cerebral ๐Ÿง  404 error, if you will ๐Ÿ˜œ)
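Here’s a deliberately crude sketch of that failure mode (the tolerance and blending rule are my own assumptions, not anything from the podcast): a predictor tracks a running expectation of the input stream and, once an incoming value clashes with that expectation beyond tolerance, reports the expectation instead of the input, ie, it confabulates.

```python
# Toy "expectation vs. input" model (all thresholds hypothetical).
# While prediction error stays within tolerance, the input is accepted
# and the running expectation is nudged toward it; once the clash gets
# egregious, the model reports its own expectation instead: confabulation.

def perceive(stream, alpha=0.5, tolerance=1.0):
    expect = stream[0]
    out = []
    for x in stream:
        if abs(x - expect) <= tolerance:
            out.append(("observed", x))           # input accepted
            expect = (1 - alpha) * expect + alpha * x
        else:
            out.append(("confabulated", expect))  # input papered over
    return out

# Gradual drift is tracked; the sudden jump to 5.0 is denied, replaced:
print([tag for tag, _ in perceive([1.0, 1.2, 1.1, 5.0])])
# -> ['observed', 'observed', 'observed', 'confabulated']
```

The same machinery that makes gradual drift cheap to track is what makes the egregious clash invisible: the model has no branch for “I don’t know.”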


* Japanese speakers may not tend to say “I don’t know” while Americans do, and may confabulate/hallucinate, and Americans may rein it in for more serious questions while the Japanese may actually go the opposite way, among many other scenarios. As frequency or difficulty of questions accelerates, confabulation or demurring behavior may increase or phase change 
