Introduction
My idea is that filling out the notation and basic structure of alliance theory will make it possible to move forward and accommodate data after the fact. The idea is that allies deploy memes which help their beneficiaries get to good states or avoid bad states, and possibly create new states for them to get to.
Another theory, which may have a lot in common with it, is Oppression Theory, in which oppressors take the same role as allies (benefactors) but act to prevent good things from happening, cause bad things to happen, and create negative states. This is for later. Hopefully, once the notation and fundamental structural dynamics of alliance theory are in place, they will transfer directly to oppression theory.
What Does an Ally Do?
The 2nd Law of Macromemetics states that deployment of a meme leads to a state transition. This suggests several possibilities. When an agent deploys a meme, the system transitions to a new state:
State1.any-agent.meme1! => State2 (fig 1.)
In order for a protegee to have a different outcome, they would have to transition to a different state:
State1.protegee.meme1! => State2a (fig 1.a.)
How does an ally accomplish this? Looking at a transition matrix model, this would involve somehow substituting State2a for State2 in the transition matrix for State1. This is a mutation of the previous transition matrix (as defined by the 3rd Law of Macromemetics).
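As a toy illustration (not part of the source theory), the transition matrix and its ally-induced mutation can be sketched as a lookup table. The function name `deploy` and the fallback-to-`any-agent` rule are invented for this sketch; the state and meme names follow the figures.

```python
# Hypothetical sketch: a transition matrix as a dict keyed by
# (state, agent, meme), with an ally "mutating" the protegee's row.

def deploy(matrix, state, agent, meme):
    """Return the next state for a deployment, falling back to the
    rule for any-agent when the agent has no special entry."""
    return matrix.get((state, agent, meme),
                      matrix.get((state, "any-agent", meme)))

# fig 1: State1.any-agent.meme1! => State2
matrix = {("State1", "any-agent", "meme1"): "State2"}

# The ally substitutes State2a for State2 in the protegee's row,
# a mutation of the transition matrix (per the 3rd Law).
matrix[("State1", "protegee", "meme1")] = "State2a"

print(deploy(matrix, "State1", "schmoe", "meme1"))    # State2
print(deploy(matrix, "State1", "protegee", "meme1"))  # State2a
```

The point of the sketch is that the ally does not touch the generic row; only the protegee's lookup is overridden.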
We could say that the "goal" of any agent, including the protegee, is to get to some final state, State3, which could mean getting a job, getting a college admission, avoiding some police action, and so on. This could look like any of:
State2.any-agent.meme2! => State3 (fig 2.a-e.)
State2.protegee.meme2! => State3
State2.protegee.meme3! => State3
State2a.protegee.meme2! => State3
State2a.protegee.meme2a! => State3
An Immunomemetic Perspective
I recently came up with a notation for immunomemetic interactions in which we pile up the meme deployments in a long string. For example, a regular agent making his or her way from the initial state to the final, successful state would look like:
State1.any-agent.meme1!meme2! => State3 (fig. 3.)
For our protegee agent, we have to resolve some of the disparate conditions above. We were positing that, thanks to the intercession of the ally, the protegee might either go to State2 as the result of deploying meme1, or go to some new state, State2a, that other agents don't get to go to, that is:
State1.protegee.meme1! => State2 (fig. 4.a, b.)
State1.protegee.meme1! => State2a
There is of course nothing wrong with different agents deploying different memes to go to the same state. What is not allowed is the same agent deploying the same meme in the same state and going to different new states: if there were some special difference that predicated such a forked transition, it would mean there were actually two different states, not one. Hence the deployment descriptors in fig. 4a. and 4b. are mutually exclusive. However, we could add an immunomemetic notation indicating the intercessory action of an ally:
State1.protegee.meme1! => State2 (fig. 5a, b)
State1.protegee.meme1!ally.help1! => State2b (1)
Alternatively, the protegee could reach State2 on his or her own, and then be helped on to State3 by the ally's action. We could imagine things like "getting from State2 to State3 is hard":
State1.protegee.meme1!meme2!ally.help2! => State3 (fig. 5c)
Okay, just to clarify and reïterate: according to the 2nd Law of Macromemetics, a successful meme deployment results in a state change, but there is a competition among the eligible agents for the successful deployment that leads to the ultimate state transition. Whoever gets the clever comment out first gets credit and controls which state the group goes to next (which is automatically determined by the meme deployment). This usually puts the successful agent in the catbird seat for him or her (or his or her allies) to deploy additional memes which further enhance his or her status and standing. Hence, when we talk about the weights associated with which agents are most likely to deploy a meme and leverage a transition, the high-status agents are necessarily at an advantage. Let's take another look at an immunomemetic expression of the deployment pathways that a regular schmoe agent and a protegee agent could take to State3, the State of Success:
State1.any-agent.meme1!meme2! => State3 (fig. 6a)
State1.protegee.meme1!ally.help1!meme2a! => State3 (fig. 6b)
State1.protegee.meme1!meme2!ally.help2! => State3 (fig. 6c)
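The "piled up" deployment strings of fig. 6 can be read as a walk through a transition table. The following sketch is illustrative only: the table contents are invented to be consistent with figs. 6a and 6b, and `run_path` is a hypothetical helper, not part of the theory's machinery.

```python
def run_path(table, start, deployments):
    """Fold a deployment string (a list of (agent, meme) pairs) through
    the transition table, returning the final state reached."""
    state = start
    for agent, meme in deployments:
        # An agent-specific rule wins; otherwise use the any-agent rule.
        state = table.get((state, agent, meme),
                          table.get((state, "any-agent", meme)))
    return state

# Assumed transition table, consistent with figs. 6a and 6b.
table = {
    ("State1", "any-agent", "meme1"):  "State2",
    ("State2", "any-agent", "meme2"):  "State3",
    ("State2", "ally", "help1"):       "State2b",
    ("State2b", "protegee", "meme2a"): "State3",
}

# fig. 6a: State1.any-agent.meme1!meme2! => State3
assert run_path(table, "State1",
                [("schmoe", "meme1"), ("schmoe", "meme2")]) == "State3"
# fig. 6b: the ally's help1! diverts the protegee through State2b.
assert run_path(table, "State1",
                [("protegee", "meme1"), ("ally", "help1"),
                 ("protegee", "meme2a")]) == "State3"
```

Both agents end at State3; the difference is purely in which intermediate states the string passes through.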
Deployment theory (2) holds that there are weights and probabilities associated with deployment decisions, and then there are race conditions and "jinx events" (4). For example, it may be useful to say that, given a certain memetic state (a collection of agents all in contact and each with deployment opportunities), there is a 100% chance that some memetic deployment will take place; in some, perhaps many, social situations, that deployment is "no meme gets deployed" (5).
Concrete Example: A Job Application Process
Let's imagine a company that accepts job applications, chooses one (or zero) from the stack to interview, and then there is some chance that an interviewed candidate may be hired. For example, applicant agent schmoe applies for the job in the hope of getting interviewed, as follows:
Apply.schmoe.apply! => Interviewed (fig. 7)
We take P(State.agent.meme!) as the probability that a given meme will be successfully deployed at a given "memetic deployment juncture" (6). For example, P(Apply.schmoe.apply!) is the probability that agent "schmoe" will have his or her application accepted for an interview. Let's also say that P(Apply.company.no-action!) is the probability that the company takes no action on the outstanding applications on a given day. Hence we could have the following scenario where two resumes are in the inbox:
P(Apply.schmoe1.apply!) = 10% (fig. 8)
P(Apply.schmoe2.apply!) = 10%
P(Apply.company.no-action!) = 80%
So there's only a 20% chance that anybody will be interviewed today. Each prospective interviewee has a 10% chance of being chosen, and this would be less if there were more applicants. Now we have a protegee of an ally who works at (or has influence at) the company. Normally we'd have:
P(Apply.protegee.apply!) = (100% - P(Apply.company.no-action!)) / # of resumes (fig. 8b)
but we want to see something like:
P(Apply.protegee.apply!ally.help1!) = 90% (fig. 8c)
P(Apply.company.no-action!ally.help1!) = 10%
You could perhaps still give the other applicants a fraction of the 10%, too, for example.
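The fig. 8 weights can be sketched numerically. The even-split rule for fig. 8b comes from the text; the particular renormalization of the remaining 10% in the ally case (5% no-action, 5% for the other applicant) is just one assumed split, as the text leaves it open.

```python
def baseline_weights(no_action, n_resumes):
    """fig. 8b: each applicant gets (1 - P(no-action)) / # of resumes."""
    share = (1.0 - no_action) / n_resumes
    weights = {"no-action": no_action}
    for i in range(n_resumes):
        weights[f"schmoe{i + 1}"] = share
    return weights

# fig. 8: two resumes in the inbox, 80% chance of no action today.
print(baseline_weights(0.80, 2))

# fig. 8c with an ally interceding: protegee gets 90%, and the leftover
# 10% is split (assumed here) between no-action and the other applicant.
with_ally = {"protegee": 0.90, "no-action": 0.05, "schmoe1": 0.05}
assert abs(sum(with_ally.values()) - 1.0) < 1e-9
```

Either way, the weights at a deployment juncture must sum to 100%; the ally's intercession reshuffles them rather than adding probability from nowhere.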
People who pass their interview are now eligible for a job when one comes up. The path to employment looks like:
Interviewed.schmoe.hire! => Hired (fig. 9)
We're leaving out time considerations (6), that is, how an ally might be able to make things move a little faster, but our model effectively captures this anyway. Even if there is no protegee getting fast-tracked in the resume inbox or in the hiring pool, a regular schmoe faces delays from the fact that the company may take no action most of the time, and that one is in competition with many other applicants. For instance, if the company is 70% likely not to perform interviews on a given day, or give jobs to people who have been interviewed, and if an average of three people have applied, or have been interviewed, at any given time, then there's only a 10% chance you will be chosen at each stage. That translates into a 1% chance that you will make it all the way through the process by the shortest path. Assuming time spent sitting in the pool of applicants or interviewees doesn't decrease one's chances, the chance of landing the job on a given day just over a week in (having been passed up about three times in each pool) comes to about 3%.
By contrast, if having an ally gives you a 90% chance of passing each of the two processes, then you have an 81% chance of getting through the whole process at the fastest possible speed.
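The arithmetic behind this comparison can be checked directly. Assumptions in this sketch: each day, each pool independently advances one member with the stated probability, and the "about 3%" figure is read as the chance that the second (final) success lands on day 8, i.e. just over a week in, which is one plausible interpretation of the text.

```python
from math import comb

p = 0.30 / 3            # 10%: company acts 30% of days, 3 candidates
shortest_schmoe = p * p          # clear both stages at top speed: 1%
shortest_ally = 0.90 * 0.90      # 90% per stage with an ally: 81%

# Chance the second success falls exactly on day 8 (negative binomial:
# exactly one success in the first 7 days, then success on day 8).
day8 = comb(7, 1) * p**2 * (1 - p)**6

print(shortest_schmoe, shortest_ally, day8)
```

The last value works out to roughly 3.7%, in the same ballpark as the "about 3%" in the text.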
Immunomemetic Notation
I call the notation "immunomemetic" because it mirrors a way of denoting immunomemes which I have recently developed. For example, if an agent (person) called schlemazel is at a party and tells a bad joke, getting no response from his fellow partygoers, we could denote it thusly:
Party.schlemazel.bad-joke! => Crickets (fig. 10a)
However, if another partygoer, a bully, mocks schlemazel for his bad joke, we have:
Party.schlemazel.bad-joke!bully.mock! => Derision (fig. 10b)
Telling a (bad) joke, or indeed making any "risky" social move, is what we call a "bullying opportunity," i.e., a chance for another agent to deploy an "immunomeme" (7). That can further give other agents space to deploy additional immunomemes:
Party.schlemazel.bad-joke!bully.mock!others.[mock!,deride!] => Pariah (fig. 10c)
This notation is a shorthand which effectively hides the states along the way. It can also be a notation for compelled states (10). There do not appear to be any compelled states in our party gaffe scenario, since we can write it out in long form:
Party.schlemazel.bad-joke! => Crickets (fig. 11a-d)
Crickets.bully.mock! => Derision
Derision.others.mock! => Pariah
Derision.others.deride! => Pariah
The immunomemetic notation allows us to express a series of meme deployments, and the states they drive us through (but about which we don't really care), as a single meme connecting only two states, with a probability of that meme being successfully deployed given the current state. We can then think in clear terms about the probability of success of a given meme deployment versus the probability of a successful immunomemetic counter-deployment (8).
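The collapsing of the long form (fig. 11) into the shorthand (fig. 10c) can be sketched as taking the product of per-step success probabilities while discarding the intermediate states. The per-step probabilities below are invented for illustration; only the chain structure comes from the figures.

```python
# Long-form chain of fig. 11, one (state, agent, meme, next, P) per step.
# Probabilities are made up for the sake of the example.
chain = [
    ("Party",    "schlemazel", "bad-joke", "Crickets", 0.9),
    ("Crickets", "bully",      "mock",     "Derision", 0.8),
    ("Derision", "others",     "mock",     "Pariah",   0.5),
]

# The shorthand keeps only the two end states and the product of the
# step probabilities; the intermediate states are hidden.
start, end = chain[0][0], chain[-1][3]
p_composite = 1.0
for _state, _agent, _meme, _next, p in chain:
    p_composite *= p

print(f"{start} => {end}: P = {p_composite:.2f}")  # Party => Pariah: P = 0.36
```

This is what lets us compare a single number for the whole counter-deployment pathway against the probability of the original meme resonating.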
This notation would seem to work well with alliance theory. An ally deploys a meme in response to a protegee's deployment, and the probability of success is modified favorably, as we see in figures 8b and 8c. In our party example, our friend schlemazel could have a "sakura" (11) in the group who would either laugh! or groan! at the bad-joke! to potentially elicit a more favorable response from the audience and defeat any immunomemetic attempts.
Party.schlemazel.bad-joke!sakura.[laugh!,groan!] => JokeReceived (fig. 12a-c)
JokeReceived.others.[laugh!,groan!] => StatusUp
JokeReceived.[bully,others].[ignore!,mock!] => JokeReceived
So we can say something like:
P(Party.schlemazel.bad-joke!) is low, while... (fig. 12d,e)
P(Party.schlemazel.bad-joke!sakura.[laugh!,groan!]) is high.
One other property of immunomemes is that they are highly resonant (8). An immunomeme has the effect of diverting the system very reliably to a "safe state," thereby subverting an agent's attempt to deploy a novel meme and take the system either to a novel state or to a state to which the agent does not normally have access due to his or her status, in other words, to introduce a mutation (MADSAM) into the memeplex (12). Similarly, we expect alliance memes to be highly resonant as well, i.e., not likely to fail when deployed.
The concept of "highly resonant" or "very likely to succeed" may still need more careful definition. The reality is that agents typically deploy memes with a high degree of confidence of success, i.e., confidence that the state transition they are aiming for will in fact occur. However, most of the time agents deploy no memes at all, or memes which make little change to the state of the system, so that's not saying much. Immunomemes stand out because of their aggressive nature (although they are very common), and alliance memes stand out because they can potentially cause large shifts to the system (certainly for the protegee in question).
In summary, the notation for immunomemes seems to work for alliances as well.
What Does the Ally Get Out of It?
This is a good question, which merits an entire essay of its own. We could especially use some anecdotal data to do with mentors and protegees (13) to get some actual test cases and see what's really going on. Having said that, the simple fact of being able to deploy a meme that is guaranteed to land is solid gold. Is that a particular property of an ally meme? If it can be clearly demonstrated that an ally meme has this same, strange, fundamental property of immunomemes, i.e., that they are guaranteed to succeed, then this could be a valuable insight.
If this is true, it could have to do with closure (14), which may be a necessary and sufficient description of the action of an immunomeme: the morphology of the meme itself is very clear (16), it's well-marked, and it has good closure. It becomes almost like conservative financing. If the memes an ally deploys to help protegees are like immunomemes, then they yield reliable rewards, and all that is needed is a crop of protegees to give the ally occasions to deploy them.
In sum, being able to deploy additional memes is always a plus. Agents always want access to more memes. This is easier said than done: being able to deploy a meme doesn't just mean learning it; there have to be lots and lots of others who are able to resonate with it.
It could be that helping others doesn't involve all that many novel memes; the memetic pathways may mostly be there already. This is a critical point. The principle that agents seek meme addition is sound and basic. The idea that alliance activities may provide this is an interesting theory, but it would be good to back it up with some data.
Summary & Conclusions
A lot of these ideas need to be backed up by more study, research, and data.
Notation-wise, alliance memes may look a lot like immunomemes. That makes it straightforward to at least write down alliance systems using existing notation and model them with existing tools.
Immunomemes and alliance memes may have even more in common.
Immunomemes appear to have the basic properties of good marking and closure, and they are also reliable sources of memetic reward (18). Alliance memes appear to have these same properties. With immunomemes one waits for others to break the rules before deploying, while with alliance memes one needs an influx of people needing help. Alliance memes potentially have very high rewards, with lots of high-yield memes (in existing pathways) already in place. This needs to be elaborated, and data gathered, to see if there are examples of this mechanism at work. The pretendian phenomenon could be an interesting case study (13).
In short, we have some interesting starts on some theoretical models, and no roadblocks yet in describing how alliance systems might work. It would be good to get real-world data.
____________
(1) See Dining Philosophers essay. A "compelled state" is one in which the agent cannot remain for any duration, and from which there are limited meme deployments available to exit to other states.
(2) Deployment theory refers to the decision process whereby a system determines which meme is going to be deployed by which agent, thereby leading to the next state of the system. It's about modeling what really happens. One idea is to involve endomemetic theory (3).
(3) Endomemetic theory, which also relates to macromemetic modeling of psychology, is where we posit the presence of a memetic system inside every agent, that maintains its own state, that generates deployment decisions.
(4) A "jinx event" is where two or more agents try to deploy a meme (possibly the same one) at the same time, producing some kind of collision. Part of deployment theory (2) has to do with deciding which agent gets to "push the Jeopardy! button." It could well be that even if multiple agents deploy memes at the same moment, it's really the exposed cohort that decides which state the fabric transitions to. Remember that things like deployment theory and state transition models are just that: simple computational models for something that is very analog, very parallel.
(5) No meme getting deployed in many cases acts as if one did, in terms of state transitions. I trot out my example of a nudist colony, where taking all your clothes off would normally elicit a big response, but at the nudist colony no response is the desired memetic resonance.
(6) A memetic deployment juncture is a loose term for the idea that there is a kind of "clock" associated with a model of a memetic system. There's a kind of assumption that agents are in a "wait state" for much of the time, until the machine moves forward a notch. Things like the game of baseball, Robert's Rules of Order, and the Dining Philosophers problem (1) are fun to use as simplified systems since they have this property built in: the system is always in a well-defined state, and each state defines which agents are allowed to do what.
(7) A bullying opportunity opens up a chance to deploy an immunomeme. The First Law of Immunomemetics states that any stable memeplex contains an immunomemeplex with memes that support the stability of the memeplex (8), that is, impeding the Modification, Addition, or Deletion of a State, Agent, or Meme (MADSAM) in the memetic matrix. In other words, an attempt to de-assert an existing meme, or deploy a novel meme, is met with an immunomemetic deployment.
(8) Going back to probabilities of meme deployments, the would-be deployer of self.meme! weighs the likely success of said deployment against the probability of self.meme!bully.immunomeme! and which state that counterdeployment will lead to. For instance:
BeginState.self.meme! => IncreasedStatus
BeginState.self.meme!bully.immunomeme! => LostStatus
Human beings (and other animals) are good at predicting these kinds of outcomes. It's a gamble between the probability that self.meme! will resonate and the probability that self.meme!bully.immunomeme! will be deployed (and resonate). One thing about immunomemes is that they have a high degree of resonance (otherwise they wouldn't be very good immunomemes, right?). They don't expose the deployer to much risk of immunomemetic retaliation themselves. We have something like:
P(self.meme!) / P(self.meme!bully.immunomeme!) > 1 ("likely success")
P(self.meme!) / P(self.meme!bully.immunomeme!) < 1 ("likely oppression") (9)
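This gamble can be sketched as a toy decision rule. The threshold form assumes the ratio reading of the inequality above, and the probabilities plugged in are invented for illustration.

```python
def worth_deploying(p_meme, p_counter):
    """True when P(self.meme!) / P(self.meme!bully.immunomeme!) exceeds 1,
    i.e. the deployment falls on the 'likely success' side of the gamble."""
    return p_meme / p_counter > 1.0

# Invented numbers: a safe quip vs. a risky one in a hostile room.
assert worth_deploying(0.6, 0.2)       # likely success: deploy
assert not worth_deploying(0.1, 0.7)   # likely oppression: stay quiet
```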
(9) Here we start to see a working definition of "oppression." An agent tries to deploy memes that lead to better status, that is, to better and better opportunities to deploy more memes that benefit the agent. Deploying memes that make this less likely, or that provide bullying opportunities leading to the deployment of immunomemes, tends to take the agent in the opposite direction. "Oppression" exists when an agent has few opportunities to deploy memes that improve his or her situation, or faces lots of bullying opportunities. Of course, bullying opportunity memes are better than alienation (memetic destitution), which leads directly to violence.
(10) A compelled state (see Glossary) is a state from which the system must immediately depart, and which typically has only one deployable meme (though a choice is not excluded). In other words, an agent is placed in a state where they are forced to immediately perform one and only one meme. As a side note, "agents hate to be put into compelled states." In terms of policy and engineering of memetic systems, the preceding statement is right up there with "memetic destitution (alienation) leads to violence."
(11) "Sakura" is the Japanese word for a "plant": an accomplice, shill, or somebody in the audience in cahoots with the speaker to help them get laughs or ask helpful questions.
(12) See MADSAM in Introduction to Macromemetics. The Third Law of Macromemetics defines a mutation to a memeplex as the Modification, Addition, or Deletion of a State, Agent, or Meme, and it is precisely such mutations that immunomemes work to prevent (3rd Law of Immunomemetics).
(13) Pretendianism is an interesting case for a number of reasons. The benefactor actions should be easy to tease out. It's an obscure niche issue, so there should be fewer global social biases to obscure or blur the transactions taking place. Furthermore, it's a fundamentally fraudulent situation, but the dupery is cloaked by ignorance of, or pretensions of ignorance of, Native American culture. Hence, real Natives (or their allies) are able to perceive the fraud in sharp relief. In sum, lots of good, clean data which slots right into our area of research.
(14) Closure, closely related to marking (15), is the property of a memetic deployment whereby it is clear (or unclear) whether the transaction has been completed, or only partially completed.
(15) Marking is the property of a meme that it is easy or difficult to tell that a meme has been deployed, and, by extension, that an acceptable reply or response has been made to said meme.
(16) Some typical immunomemes include "We already talked about that," "And he's talking about [this] again," "That will never work," and such. These memes are fairly dumb. There are more sophisticated ones (17).
(17) A potentially interesting line of research into immunomemes (or memes in general, including notation) may be "joke analysis." Simple joke forms like knock-knock jokes and "a guy walks into a bar" jokes, and "what do you call a..." jokes may shine light on parameterized memes and immunomemes.
(18) Memetic reward is the physiological reward that an agent receives when a memetic loop, a memetic transaction, is closed and completed. Also known as a memetic orgasm.