2020-02-24
模倣子 The Dining Philosophers and Alienation
Memetic Index - Index of Memetic Materials
Introduction
The Dining Philosophers Problem is a computer science allegory that models how starvation can result in a simple system where all members should have equal access to food. My question is whether this system may be modeled with the macromemetic tools I've developed, and whether these will show how starvation can happen. I hope that it can also illustrate the macromemetic concept of memetic desolation, which is related to alienation. The idea is that memetic desolation always leads quickly to a resort to violence. The theory is that the resort to violence may be pushed away by adding memes and memetic pathways to the system, even if the value of said pathways is not immediately clear, or they do not seem to have any direct effect on the functioning of the system.
Is it possible to model The Dining Philosophers satisfactorily using macromemetic tools, and is it in turn possible to make simple changes to the system, while preserving its essence, to bring about a non-alienated system where nobody 'starves'?
The Problem
Figure 1. The Dining Philosophers' Table
I like the description where there are chopsticks on either side of, and a bowl of rice in front of, each philosopher. All any of Descartes, Voltaire, Socrates, Confucius, or Plato knows is whether he is hungry, and whether he can pick up the chopstick to his right or his left. He must be able to pick up both in order to eat. Each philosopher has the following memes available to deploy: eat!, pick-(up-)right!, pick-left!, think!, sleep!, and then, to put his chopsticks back down, put-left! and put-right!
So far we have a collection of ideomemetic systems (1) where each philosopher has a collection of states: Sleeping, Thinking, Eating. He may freely transition between Sleeping and Thinking, but he must deploy two memes, pick-right! and pick-left!, passing through the intermediate state of HoldingRight or HoldingLeft, before he may transition to Eating (HoldingBoth).
Figure 2.a. Memetic State Diagram
By the way, with a table of this size we can have at most two philosophers eating at once, but the non-eating philosophers may still pick up their right or left chopsticks, which is the kind of thing that leads to deadlocks.
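As a concrete aside on that deadlock, here is a minimal sketch in Python (my own illustration, not part of the original problem statement) of the classic failure mode: every philosopher deploys pick-left! and then waits forever to deploy pick-right!.

```python
# Hypothetical sketch (names and layout are mine): five philosophers, five
# chopsticks; philosopher i needs chopstick i (his left) and chopstick
# (i + 1) % 5 (his right) in order to eat.

PHILOSOPHERS = ["Descartes", "Voltaire", "Socrates", "Confucius", "Plato"]
N = len(PHILOSOPHERS)

# holder[c] is the index of the philosopher holding chopstick c, or None.
holder = {c: None for c in range(N)}

# Step 1: everyone deploys pick-left! at more or less the same time.
for i in range(N):
    holder[i] = i

# Step 2: everyone now wants to deploy pick-right!, and nobody can.
for i in range(N):
    right = (i + 1) % N
    if holder[right] is None:
        print(f"{PHILOSOPHERS[i]} picks up his right chopstick and eats.")
    else:
        print(f"{PHILOSOPHERS[i]} is stuck: his right chopstick is held by "
              f"{PHILOSOPHERS[holder[right]]}.")

# Every philosopher holds one chopstick and waits on his neighbor forever:
# a deadlock, and, if nobody ever puts a chopstick down, starvation.
```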
The Antisocial Agent
If we really want to kick up the potential for antisocial (passive-aggressive) behavior, we could allow them to transition back to Thinking or even Sleeping without putting down either or even both chopsticks! In the 'Good Citizen' agent in the case above, by contrast, the agent has states which make it clear that he knows to transition back from Eating to Thinking by laying down both his chopsticks. Eating transitions back to either Hold(ing)Left or Hold(ing)Right, and finally back to Thinking. These meme deployments have clear state transitions as we see here:
Table 1. The 'Good Citizen' Philosopher-Agent
Eating [ ] eat!, put-right!HoldLeft, put-left!HoldRight
HoldRight [ ] put-right!Thinking
HoldLeft [ ] put-left!Thinking
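To make the table above a little more concrete, here is a minimal sketch, in Python, of the 'Good Citizen' agent as a plain transition table. It is reconstructed from Table 1 and the description earlier; the dictionary layout and the deploy() helper are my own illustration, and, following the text, Eating is treated as the same thing as HoldingBoth.

```python
# Hypothetical encoding of the 'Good Citizen' philosopher-agent as a plain
# transition table: (current state, meme) -> new state. Memes not listed
# for a state are simply not deployable from that state.
GOOD_CITIZEN = {
    ("Thinking",  "pick-left!"):  "HoldLeft",
    ("Thinking",  "pick-right!"): "HoldRight",
    ("HoldLeft",  "pick-right!"): "Eating",     # now holding both
    ("HoldRight", "pick-left!"):  "Eating",     # now holding both
    ("Eating",    "eat!"):        "Eating",     # Table 1
    ("Eating",    "put-right!"):  "HoldLeft",   # Table 1
    ("Eating",    "put-left!"):   "HoldRight",  # Table 1
    ("HoldLeft",  "put-left!"):   "Thinking",   # Table 1
    ("HoldRight", "put-right!"):  "Thinking",   # Table 1
    ("Thinking",  "sleep!"):      "Sleeping",
    ("Sleeping",  "wake!"):       "Thinking",
}

def deploy(state, meme):
    """Return the new state, or None if the meme is not deployable here."""
    return GOOD_CITIZEN.get((state, meme))

# A well-behaved dinner: pick up both sticks, eat, put both sticks back down.
state = "Thinking"
for meme in ["pick-left!", "pick-right!", "eat!", "put-right!", "put-left!"]:
    state = deploy(state, meme) or state
    print(f"{meme:12s} -> {state}")
```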
Below we see how an antisocial ideomemeplex (1) could be set up where each philosopher has the possibility of hanging onto one or both chopsticks while merely thinking or even sleeping; he could even twirl one or both chopsticks, or what-have-you. Here we start to see a true ideomemeplex, where we observe the behavior of the antisocial agent, but the internal state of the person that 'causes' that action is not outwardly visible (2).
Figure 2.b. Antisocial Behavior State Diagram
Adding MIAO (3) notation, we see how the philosopher with the above ideomemeplex sort of picks up and puts down the chopsticks for his own internal reasons, and when he has both LEFT(STICK) and RIGHT(STICK), then and only then is he able to deploy the eat! meme to go to the Eating state. So some of his deployment descriptors would look like:
Table 2.a. Deployment Descriptors with MIAOs
Thinking [ LEFT, RIGHT ] eat!Eating
Eating [ ] eat!, done!Thinking
Thinking [ LEFT ] put-left!, pick-right!
Thinking [ RIGHT ] pick-left!, put-right!
Thinking [ ] think!, sleep!Sleeping
Sleeping [ ] sleep!, wake!Thinking
Obviously, the pick-right! and pick-left! memes only work when there are chopsticks to pick up. So we might further add details such as:
Table 2.b. Deployment Descriptors with variable-state MIAOs
Thinking [ LEFT.INHAND, RIGHT.INHAND ] eat!Eating
Eating [ ] eat!, done!Thinking
Thinking [ LEFT.AVAIL ] pick-left!
Thinking [ RIGHT.AVAIL ] pick-right!
Thinking [ LEFT.INHAND ] put-left!
Thinking [ RIGHT.INHAND ] put-right!
Thinking [ ] think!, sleep!Sleeping
Sleeping [ ] sleep!, wake!Thinking
I think this is, in fact, a much more 'accurate' representation of what one of our philosophers is really doing. It's more human. The first state diagram in Fig. 2.a is more 'gamelike' in that it shows how an agent should behave, i.e., that he should put his chopsticks down before he goes back to thinking or sleeping, whereas in truth there is no especial reason why he should do so.
Another thing we notice is that the selfish philosopher-agent has no holding states, and no state transitions for many of his meme deployments. He doesn't seem to care whether he has one or both chopsticks. He doesn't have a state for either, so he's apparently indifferent to whether he picks one up or puts one down, or indeed, whether he has the chopsticks he needs when he wants to start Eating.
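One way to read Table 2.b is as a set of guards on the chopstick MIAOs. Here is a minimal sketch, again in Python and again my own encoding (the 'TAKEN' sub-state is an assumption for the neighbor-is-hogging case), of how those guards pick out which memes are eligible.

```python
# Hypothetical sketch of Table 2.b: which memes the antisocial agent may
# deploy, given his state and the sub-states of the LEFT and RIGHT chopstick
# MIAOs. Sub-states here: "AVAIL" (on the table), "INHAND" (held by this
# philosopher), "TAKEN" (held by a neighbor; my own addition).

DESCRIPTORS = [
    # (state, required MIAO sub-states, meme, new state or None for "no change")
    ("Thinking", {"LEFT": "INHAND", "RIGHT": "INHAND"}, "eat!",        "Eating"),
    ("Eating",   {},                                    "done!",       "Thinking"),
    ("Thinking", {"LEFT": "AVAIL"},                     "pick-left!",  None),
    ("Thinking", {"RIGHT": "AVAIL"},                    "pick-right!", None),
    ("Thinking", {"LEFT": "INHAND"},                    "put-left!",   None),
    ("Thinking", {"RIGHT": "INHAND"},                   "put-right!",  None),
    ("Thinking", {},                                    "think!",      None),
    ("Thinking", {},                                    "sleep!",      "Sleeping"),
    ("Sleeping", {},                                    "wake!",       "Thinking"),
]

def eligible_memes(state, miaos):
    """List the memes deployable from `state` given the MIAO sub-states."""
    return [meme for (s, cond, meme, _new) in DESCRIPTORS
            if s == state and all(miaos.get(k) == v for k, v in cond.items())]

# Example: Thinking, left stick in hand, right stick hogged by a neighbor.
print(eligible_memes("Thinking", {"LEFT": "INHAND", "RIGHT": "TAKEN"}))
# -> ['put-left!', 'think!', 'sleep!']  (he cannot eat!, nor pick-right!)
```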
The Self-Satisfied Agent
In both the original state diagram and in the updated one (Figs. 2.a. and b.), where the philosopher-agent doesn't really 'care' about whether he's holding one or both chopsticks, he appears to be able to deploy any meme he likes, which is true if his neighbors are not hogging the chopsticks. He can do whatever he likes, and gets the result, the state change, that he wants.
At this point we question how well our state diagrams are working for us, since they seem to show that we can simply deploy eat! to hop over to Eating. In its current form, the state diagram does not illustrate very well how the eat! meme relates to the LEFT and RIGHT stick MIAOs, although this is made clear in the deployment descriptors (first lines of Tables 2.a. and 2.b.), i.e., the only times the eat! meme may be deployed from the Thinking state is when both LEFT and RIGHT chopstick MIAOs are .INHAND, and the only times they may be pick!ed is when they are .AVAIL.
In other words, my outwardly self-satisfied agent is in fact dependent upon some unseen other person, upon whom he has no influence (no memes to deploy resulting in resonance, i.e., recognition and response).
This is memetic desolation, or alienation. The starving philosopher is forced to resort to non-memetic means (violence) to deal with his situation.
Tea, Anyone? and Other-Bullying
A philosopher-agent who is starving must be able to make some kind of contact, some kind of memetic overture, to his fellow diners. These are not necessarily his neighbors, and I'll try to explain why. In memetic engineering, the first problem to fix is the memetic alienation, and assuming that the system is not badly broken in some other way (4), fixing that may be all that need be done.
One random idea is to give every philosopher a new meme which he can deploy and which affects all the other philosophers. Just impacting his neighbors might not be enough, but enough on that for now; let's just take this as an example of a problem-solving approach.
Say there's always a teapot in the center of the table, and any of the philosophers may pick it up and offer to fill the teacups of all the others. Those who are asleep might not hear the offer, and those thinking might ignore it, and we may have to deal with those, but those who are eating should put down their chopsticks and hold out their cups.
Or better still, let's have all of the philosophers eating out of cha-wans 茶碗, small rice bowls that also serve for drinking tea, which is kind of what I was imagining anyway. So those eating must quickly finish the last of their rice, put down their chopsticks, and hold out their bowls for some tea.
Anyone who is eating must stop, and both of their neighbors are necessarily not eating, since at least one of their chopsticks is taken. But it doesn't matter if the only people holding chopsticks are those eating. If thinkers put down their chopsticks for tea, then that's okay, too. The problem remains those who keep their chopsticks when sleeping.
What if these selfish philosophers don't all stop what they're doing, put down their chopsticks, and put their bowls out for some tea? The answer is: make the offer of tea meme a bullying opportunity.
One solution is that if your neighbor doesn't wake up and hold out his bowl with both hands (which requires putting down any chopsticks he may be holding), you give him a nudge.
An obvious problem is that offer-tea! is a freely deployable meme, i.e., there's nothing stopping people from offering it over and over, so one feels the need for some kind of bullying meme to prevent that. One possibility is that it takes time for a fresh pot of tea to steep, so anybody could complain that the tea is weak.
So we have stuff like:
Table 3. New Tea-related Behaviors
Eating [ offer-tea! ] put-left!, put-right!; put-out-bowl!
[ WEAK_TEA, offer-tea! ] complain!
[ NEIGHBOR_TEABOWL, offer-tea! ] nudge!
[ complain! ] defend!
Then we start to get into engineering more and more memes, like one to negate the complaining when somebody [ WEAK_TEA ] complain!s, i.e., when the tea is in fact fine, and nudge! for when a neighbor doesn't put his bowl out.
Again, this is just an example of a memetic engineering solution to a memetic desolation problem by making more interactions available to those who would otherwise have no choice but to resort to violence.
Memes Coming From Others
How does it work? There are some notational issues, in the state diagrams and transition matrices (5), as I mentioned above. In Table 3, we see a couple of new deployment modes for which there may be as yet no clear representation in current state diagrams: memes coming from others, and memetic deployment options that don't depend upon the current state. The latter could merely be taken as a shorthand, but a notational convention might help a lot.
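As a purely speculative sketch of one such convention, in Python and with names of my own invention, one could fold MIAOs and recently heard memes from other agents into a single set of 'conditions' on each descriptor, and allow descriptors with no state, which is essentially how Table 3 reads.

```python
# Speculative sketch: treat memes heard from other agents (e.g. offer-tea!)
# just like MIAO conditions when deciding which memes may be deployed, and
# let a descriptor with no state apply regardless of the agent's own state.
# All names here are illustrative, not established notation.

TEA_DESCRIPTORS = [
    # (state or None, conditions (MIAOs and/or heard memes), deployable memes)
    ("Eating", {"offer-tea!"},                     ["put-left!", "put-right!", "put-out-bowl!"]),
    (None,     {"WEAK_TEA", "offer-tea!"},         ["complain!"]),
    (None,     {"NEIGHBOR_TEABOWL", "offer-tea!"}, ["nudge!"]),
    (None,     {"complain!"},                      ["defend!"]),
]

def options(state, conditions):
    """Memes deployable given my state and the MIAOs/heard memes around me."""
    out = []
    for st, cond, memes in TEA_DESCRIPTORS:
        if (st is None or st == state) and cond <= conditions:
            out.extend(memes)
    return out

# A philosopher who is Eating when somebody deploys offer-tea!, and the tea
# happens to be weak:
print(options("Eating", {"offer-tea!", "WEAK_TEA"}))
# -> ['put-left!', 'put-right!', 'put-out-bowl!', 'complain!']
```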
Summary and Conclusions
The Dining Philosophers Problem is a microcosm and an example of memetic desolation leading to alienation and starvation and/or violence. The solution reached by memetic engineering, i.e., to add memetic pathways, allows agents to communicate in states in which they were formerly alienated and had only recourse to violence or acceptance of death.
Moreover, this solution mirrors those put forward by computer scientists to resolve exactly this sort of deadlock problem both in effectiveness and elegance.
Again, problems of alienation leading to violence, such as the millions of drivers on American roadways who experience 'road rage,' can lend themselves to similar simple, if counterintuitive, solutions, and we need to start applying these principles.
It remains to shore up the notational system to better deal with memes deployed by others, which I think will shed a lot of light on how to represent the action of immunomemes as well.
________________________________________
(1) A memeplex that only exists for a given person. In this case, each philosopher has his own POV, his own state, but an outside observer can still see the overall situation.
(2) This is exemplified by antisocial drivers who do things out of some kind of self-justified spite, which, when interviewed, they will reveal. For instance, 'he cut me off, so I rode his bumper, honking,' or 'he was going too slow, so I burned around him and flipped him off,' or 'he was riding my bumper, so I slowed way below the speed limit.'
(3) Memetic Iconic Anchoring Object. In this case, the chopsticks, which may or may not themselves be present or absent, or in-hand, strongly govern which memes are eligible for deployment.
(4) For example, not enough food. The assumption of The Dining Philosophers Problem is that there is plenty of food, infinitely much, in fact. The problem is, as in the USA, distribution, not total quantity. A philosopher starves because he can't get his hands on a chopstick, not because his bowl is empty. The former is, we hope, a macromemetic problem, and we attack it as such.
(5) Examples of those not presented here. Resolving issues associated with MIAOs and memes from other agents determining state and available memes to deploy is stuff for another essay.
2020-02-16
漫画 Porcadis Baseball Caps
Manga index
I'm going to get some of these made soon. Post a comment if you'd like one and I'll get in touch with you! I think I'll add them to my Etsy site!
2020-02-11
漫画 Chubbot VII
Manga Index
I think I need to practice drawing more of these (some of the previous ones are better). And of course there is "Chubby Chick Cop" manga that I have been kicking around. Is she another alter ego of 'Deb' from Cloud of Dreams, The Ferrisburgh and Porcadis Riots sections of Zeppelin, et al, or a totally new character?
2020-02-10
Mermaid MMMXL
Sushi mermaid
What does it mean that the mermaid lets herself be eaten, particularly by feminine hands?
模倣子 Memetic State Diagrams and Transition Matrices
Memetic Index
Introduction
The idea is that a memetic system (the states, the memes that cause transitions between those states, and the operative MIAOs) can be represented in a diagram; behind that diagram is a description of all of the deployment descriptors (2), and from those may be constructed a set of transition matrices.
There are problems with this system as it stands. While it can be good as a design tool, i.e., for working out which memes one wants, and the states and MIAOs to support their operation, when one neglects individual behavior and decision-making, it can come up short in modeling an actual memetic system in operation.
One problem is modeling how individuals decide to deploy memes. Another is modeling the states of individuals. Things like "criminal" behavior imply that the criminal tries to avoid detection, which means there are "hidden states" in the system (somebody committed a crime but nobody else knows yet, others know, but not who's responsible, the criminal is caught, etc.). Anyway, more work to do here, but the promise is better and better computational modeling, leading ultimately, we hope, towards a Hari Seldonesque Psychohistory-like tool.
The Blue Shirt Tuesday Memeplex
Here's a memetic state diagram from an essay I wrote. This is from an actual live experiment I conducted.
Here're the deployment descriptors (2) for the above system.
Morning [ ] wear-blue!Blue, wear-blue!NoBlue
Blue [ BLUE, DONUT ] eat!BlueAte, announce!, explain!
NoBlue [ BLUE, DONUT ] eat!NoBlueAte, recuse!, explain!
BlueAte [ BLUE, DONUT ] praise!, shame!, explain!, announce!
NoBlueAte [ BLUE, DONUT ] shame!Pariah, confess!, pardon!, explain!, cheat!NoBlue
Pariah [ PARIAH, BLUE, DONUT ] shame!, explain!, confess!, pardon!NoBlueAte
Note how "cheating" is modeled by a special cheat! meme that takes the cheater back to a state of NoBlue, i.e., of not wearing blue, but not having snuck a doughnut, i.e., a NoBlueAte state. This is special because it affects the state of the system, but is not a meme exchanged among persons, per se, but only "inside the head of the cheater." In this sense, it's kind of hinting at the idea of idiomemetic systems or endomemetic systems, in other words, systems which are only perceivable by the individual.
Here are transition matrices based on the above. Note how they do not take into account the idea that different agents might have different probabilities of deploying certain memes. This can have relevance in the atrophy of certain memes, especially immunomemes, that keep the system healthy, even if they appear "ancillary" to its primary function. In this other essay I discuss how conformists who do not shame cheaters can be as bad or worse for the long-term healthy functioning of a system as rebels or even criminals (9).
Note that the above is more of a description of an engineered memetic system (which is what it originally was) as a system of rules (6), as opposed to a description of an organic, already functioning memetic system, with actual individuals who have different roles within the system (7).
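As a rough sketch of how the transition matrices mentioned above might be derived mechanically from the deployment descriptors (Python, my own encoding; the handling of the ambiguous Morning row is an assumption), with no distinction between agents, exactly as noted above:

```python
# Hypothetical encoding of the Blue Shirt Tuesday deployment descriptors.
# A meme with no listed target state leaves the state unchanged (None here).
BLUE_SHIRT = {
    "Morning":   {"wear-blue!": "Blue"},  # or NoBlue, if the shirt stays home
    "Blue":      {"eat!": "BlueAte", "announce!": None, "explain!": None},
    "NoBlue":    {"eat!": "NoBlueAte", "recuse!": None, "explain!": None},
    "BlueAte":   {"praise!": None, "shame!": None, "explain!": None,
                  "announce!": None},
    "NoBlueAte": {"shame!": "Pariah", "confess!": None, "pardon!": None,
                  "explain!": None, "cheat!": "NoBlue"},
    "Pariah":    {"shame!": None, "explain!": None, "confess!": None,
                  "pardon!": "NoBlueAte"},
}

MEMES = sorted({meme for row in BLUE_SHIRT.values() for meme in row})

# Print a crude transition matrix: rows are states, columns are memes,
# and each cell is the resulting state ('-' if the meme is not deployable).
print("".join(s.ljust(11) for s in ["state"] + MEMES))
for state, row in BLUE_SHIRT.items():
    cells = [(row[m] or state) if m in row else "-" for m in MEMES]
    print("".join(s.ljust(11) for s in [state] + cells))
```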
An Example Where Agents Figure in the Matrices
Here's a set of memetic transition matrices for a fairly arbitrary sample memetic system which I whipped up as an example in my Escaping Meme Hell through Play essay (1).
Note that agents are one of the axes of the matrices. This gives us a record of which agent may deploy which memes from each state.
Let's see if we can describe it with memetic deployment descriptors (2) and finally a state diagram (2).
Memetic Deployment Descriptors
These indicate, for each state (and collections of MIAOs), which memes may be deployed, and to which new state they lead. States are in CamelCaps, MIAOs are in ALL CAPS, and memes are in lowercase.
S1 [ ] P1.m1!S2, P1.m2!S2, P2.m1!S3, P2.m2!S4, P2.m3!S5, P3.m3!S1
In other words, when in state S1 (which has no attendant MIAOs), agent P1 can deploy meme m1 to take the system to state S2, or they may deploy m2 to the same end, or agent P2 may take us to state S3 by deploying meme m1, and so on.
S2 [ ] P1.m1!, P1.m2!S3, P1.m3!S4, P2.m1!S3, P2.m2!S4, P2.m3!S1, P3.m3!S1
Note here that P1 deploying m1 does not change state -- we stay in S2 -- so the new state is not mentioned for brevity.
S3 [ ] P1.m3!S2, P2.m1!, P2.m2!S4
S4 [ ] P1.m3!, P2.m3!S5, P3.m2!S5
S5 [ ] P1.m1!S3, P3.m1!S4
To make construction of our state diagram easier, we can arrange the deployment descriptors by state, rather than by agent.
S1 [ ] P3.m3!, P1.m1!S2, P1.m2!S2, P2.m1!S3, P2.m2!S4, P2.m3!S5
S2 [ ] P1.m1!, P2.m3!S1, P3.m3!S1, P1.m2!S3, P2.m1!S3, P1.m3!S4, P2.m2!S4
S3 [ ] P2.m1!, P1.m3!S2, P2.m2!S4
S4 [ ] P1.m3!, P2.m3!S5, P3.m2!S5
S5 [ ] P1.m1!S3, P3.m1!S4
State Diagram
Note how this diagram may be directly constructed from the deployment descriptors and/or the transition matrix of the memeplex.
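To illustrate that claim, here is a minimal sketch, in Python, that parses the descriptor lines above into a list of labeled edges, which is all a state diagram is; the parsing helper is my own and assumes the exact spelling used above.

```python
import re

# The deployment descriptors from above, in the compact Px.my!Sz notation.
RAW = [
    "S1 [ ] P1.m1!S2, P1.m2!S2, P2.m1!S3, P2.m2!S4, P2.m3!S5, P3.m3!S1",
    "S2 [ ] P1.m1!, P1.m2!S3, P1.m3!S4, P2.m1!S3, P2.m2!S4, P2.m3!S1, P3.m3!S1",
    "S3 [ ] P1.m3!S2, P2.m1!, P2.m2!S4",
    "S4 [ ] P1.m3!, P2.m3!S5, P3.m2!S5",
    "S5 [ ] P1.m1!S3, P3.m1!S4",
]

def parse(lines):
    """Yield (state, agent, meme, next_state) tuples; a missing target state
    means the deployment leaves the state unchanged."""
    entry = re.compile(r"(P\d+)\.(m\d+)!(S\d+)?")
    for line in lines:
        state = line.split()[0]
        for agent, meme, target in entry.findall(line):
            yield state, agent, meme, target or state

# The resulting edge list *is* the state diagram; grouping the same edges by
# (state, next state) gives the cells of the transition matrix directly.
for state, agent, meme, nxt in parse(RAW):
    print(f"{state} --{agent}.{meme}--> {nxt}")
```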
Summary and Conclusions
Depending upon whether you are designing a memeplex (or enhancements to an existing one), trying to describe an existing memeplex from sample data, or trying to discuss a real or hypothetical memeplex or interesting subset of one, a transition matrix, a collection of deployment descriptors, or a state diagram (or a combination of these) may be the best visualization tool for you.
As I've shown above, starting with any of the three, you can produce one of the others directly, and deterministically. They may also be used to check the correctness of each other, i.e., what looks like a good state diagram may prove flawed when the deployments are all written out, or when a transition matrix is laid out.
It's clear how all of these tools illustrate, and depend upon, the Second Law of Macromemetics, i.e., that the deployment of a meme results in a change of state. However, the First Law of Macromemetics, i.e., that agents try to optimize resonance (10) when deploying memes, remains elusive. These three tools allow us to represent a memeplex, but they do not give us a deployment decision theory, i.e., how do agents decide, given the memetic environment, including other agents, which memes, if any, to deploy, and how do they resolve 'jinx events,' race conditions, and so on?
Again, an elaborated theory of endomemetics, or ideomemetics (11), may be required to address these questions, or it must be demonstrated that such theoretical scaffolding is not in fact required in order to use Macromemetic theory. More work to be done here...
A Side Note on Curbing Road Rage
One sees in the traffic situation in the USA many antisocial driving habits, exemplified by failure to signal properly, driving slow in the fast lane, reckless passing, cutting off other drivers, tailgating, honking in anger, etc. An immediate macromemetic observation one can make is that drivers are in a state of memetic desolation (also known as alienation), which is to say, they have very few memes at their disposal to deploy to one another, apart from signaling (3) and honking the horn (4).
As I will probably write about later, I have an idea that making the steering wheel work like a See-and-Say (8), so you can make your car make bleating, farting, mooing, "Tah-tah-tah-tah...Charge!", "Whuh, whuh, whuh," "Boooo," and other sounds, as well as flashing the headlights and taillights (back and forth, etc.), would give drivers more ways to communicate (exchange memes) with one another, and thereby eliminate memetic desolation (alienation) and reduce road rage and antisocial driving behavior as a direct result, in accordance with macromemetic theory.
What we should see is that "cultures" of ways of honking these various sounds would appear, perhaps quite different for different populations (5), and combinations and sequences of various honks, perhaps like a circa 1980s fighting video game, could indicate to drivers that somebody wants to get right and turn out, wants to form a group to all go fast together and who's the leader, that somebody is slow in the wrong lane and please quit it, I want to pass, please go 'round, etc.
Afterword: What About The Dining Philosophers?
Is the fact that a philosopher can starve to death, surrounded by his fellows, an example of memetic desolation?
Are most of their states internal? It's mostly about the states of the chopsticks. Is it possible to represent the system as a collection of memetic transformations? And a transition matrix?
Is it possible to transform the system so that everybody is aware of somebody starving? And then to take "moral" rescue action? Could such a system then degenerate back into uncaring, if the given memes are not used? An example of memetic atrophy?
_____________________________________________________
(1) To be published. I'll try to add a link to it later.
(2) Final terms to be decided.
(3) A.k.a., flasher, winker, indicator, turn signal.
(4) and there is only one horn sound, certainly per car.
(5) One example is that in India, supposedly, since all the side mirrors are cut off to save space on the road, the horn is sounded to indicate one wishes to pass the car in front. In the US and Japan, flashing the headlights is used for this (assuming the driver in front is actually looking, which is a pretty bad bet).
(6) And according to The Second Law of Immunomemetics, any system of rules equates to a system of bullying behaviors.
(7) such as being at the center of a memetic nexus, subscribed to a memetic nexus, etc.
(8) A child's toy, rather like a clock face, where an arrow can point anywhere around the circle, and all around it are different animals, and the toy makes the animal's sound when it points to it. "The cow says 'moo'," "The horse says 'neigh'," "The sheep says 'baaah'," etc.
(9) In fact, I'm working on a theory that a memetic system may need criminals in order to keep healthy the immunomemes that fight crime and criminality. There's probably some "designing the rebellion" in here...more later.
(10) The process of deployment involves a. 'recognition' and b. 'response.' Resonance involves both, i.e., that one's target agent recognize that one has enacted a meme, and that they then respond. Failure to recognize can constitute alienation (resulting from memetic destitution), while recognition without response can equal oppression. No response can be a resonance, as in my much-touted nudist colony example. A working definition of oppression may be where the deployer lacks a certain 'control' over the response, i.e., is unable to evince a 'reasonable' response. Passive-aggression is thus an example of an oppressive response. More on this topic later...
(11) The two appear to be the same, but aren't, really. An endomemeplex is hidden, but may be shared by a number of individuals, while an ideomemeplex is specific to a given individual. A given individual's ideomemeplex may be an 'instance' of some known endomemeplex. The contents of ideo- and endomemeplexes may, in principle, be learned through memetic hacking, i.e., a personal interview process. The addition of the 'need' for ideomemeplexes in order to resolve decision theory issues could make things messy, whether through out-of-control complexity, non-determinism (12), or other ills.
(12) The strength of Macromemetics is its basis in observable reality, i.e., memes being deployed is an observable phenomenon. If in order for it to work we must add an unobservable (13) ideomemeplex concept, it is tantamount to an admission of defeat, i.e., that humans just have a certain je-ne-sais-quoi about the way they do things, and that's that.
(13) Even if you 'observe' an ideomemeplex through an interview or other process, it's incomplete, is of questionable accuracy, and subject to continual change. If the dynamics of the whole system is too strongly tied to these unobservables, then much of the value of the science remains out of reach.
______________________________
APPENDIX
Here's a good description of why humans like to process memes.
This essay contains diagrams that attempt to assign a weight to the probability of a given agent deploying a given meme.
2020-02-04
Mermaid MMMXXXIV
This is like my character Honquèrelle in my comics, Schrödinger’s Catachresis and Chestnut Quest and elsewhere.