Introduction
There are tools for designing and modeling microchips, which are enormously complex, with millions of devices. The states of these devices, and of the chip as a whole, may resemble the states of a memetic system: agents in "states," with dispositions to deploy memes that change the state of the whole system.
These tools, or something resembling them, might be useful in modeling the behavior of a memetic system. There may even be tools to detect closed pathways and the like, and to find the "skins" of memetic systems.
Propagation Speeds
One issue is what "propagation times" look like in a memetic system model. In a microchip, propagation is governed by the clock speed, i.e., in principle a computation that involves a certain number of gates in series requires that same number of clock cycles to complete. However, computations may in principle proceed in parallel, even to the point of competing results being thrown out at the end.
In a memetic system we can think of a "result" that the memeplex reaches after propagation through some series of pathways, and there may be multiple such pathways (1).
In a memeplex, the propagation time of a step may be thought of in terms of the meme most likely to be deployed by some agent, leading to another state. The reasoning here is that whichever agent is able to deploy something fastest determines what happens next. In practice, we think of each available meme as having a "weight," i.e., some kind of probability of being deployed.
So we're back to the idea of the probability of given memes being deployed in given states, and so the probability of given pathways happening.
Pathways
First off, in one sense all pathways are ultimately closed (2). In another, no pathway is truly closed: there is always some non-zero probability that memetic exchanges will reach beyond some arbitrarily defined (or enforced) boundary (3).
What does a pathway look like? We need some notation here. I attempted to make a start on this in my previous essay on Closed Pathways. Obviously, deployment descriptor notation is at least somewhat suited to this:
State.agent.meme! => NewState
Each agent in each state has a weighted probability of deploying a given meme. Hence we could write something like:
W:State.agent.meme! => P:NewState
We can start to see the idea of Game Theory-like truth tables emerging. The simplest system is one with two agents and three states, with two memes available to each agent in the first state. We can write the state matrix for the first state as follows (5):
| State A | agent-a | agent-b |
| --- | --- | --- |
| meme-b! | 20%:StateB | 30%:StateB |
| meme-c! | 10%:StateC | 40%:StateC |
Here we assume that the same meme deployed by different agents leads to the same final state, which is not required.
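Before going further, it may help to pin down a concrete representation. Below is a minimal sketch, assuming a nested mapping from agent to meme to (weight, next state); the name `STATE_A` and the dict structure are my own illustration, not part of the notation:

```python
# Assumed representation: the State A matrix as a nested dict,
# agent -> meme -> (weight, next state).
STATE_A = {
    "agent-a": {"meme-b!": (0.20, "StateB"), "meme-c!": (0.10, "StateC")},
    "agent-b": {"meme-b!": (0.30, "StateB"), "meme-c!": (0.40, "StateC")},
}

# The deployment weights across all agents and memes should total 100%.
total = sum(w for memes in STATE_A.values() for (w, _) in memes.values())
assert abs(total - 1.0) < 1e-9

# Total probability that the system moves from State A to StateB:
p_b = sum(w for memes in STATE_A.values()
          for (w, nxt) in memes.values() if nxt == "StateB")
print(f"P:A => B = {p_b:.0%}")  # 50%
```

Summing weights over agents like this is what the exit-probability calculations later in the essay do by hand.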
How do we interpret this? The most likely single deployment (40%) is agent-b.meme-c! => StateC; however, there is a total 50% chance that somebody will deploy meme-b! => StateB. Game Theory gives us the idea of equilibrium states. In Macromemetics, agents try to maximize their own deployment opportunities (4). In practice, the important bit for an agent is that they themselves be the one to deploy the meme, as opposed to where the meme leads, as such. For instance, let's imagine that both StateB and StateC lead back to StateA in a closed path:
| State B | agent-a | agent-b |
| --- | --- | --- |
| meme-a! | 50%:StateA | 10%:StateA |
| meme-c! | 10%:StateC | 30%:StateC |

| State C | agent-a | agent-b |
| --- | --- | --- |
| meme-a! | 10%:StateA | 40%:StateA |
| meme-b! | 20%:StateB | 30%:StateB |
To sum up in deployment descriptors:
A.a.b! => B.a.a! => A ( 20% x 50% = 10% )
A.a.c! => C.a.a! => A ( 10% x 10% = 1% )
A.a.b! => B.a.c! => C.a.a! => A ( 20% x 10% x 10% = 0.2% )
A.a.c! => C.a.b! => B.a.a! => A ( 10% x 20% x 50% = 1% )
A.b.b! => B.b.a! => A ( 30% x 10% = 3% )
A.b.c! => C.b.a! => A ( 40% x 40% = 16% )
A.b.b! => B.b.c! => C.b.a! => A ( 30% x 30% x 40% = 3.6% )
A.b.c! => C.b.b! => B.b.a! => A ( 40% x 30% x 10% = 1.2% )
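The multiplications above can be mechanized. Here is a hedged sketch: the `WEIGHTS` table simply transcribes the three state matrices, and `path_probability` multiplies per-step deployment weights along a pathway (the function name and data layout are my own assumptions):

```python
# WEIGHTS[state][agent][meme] = (weight, next_state), transcribed from
# the State A/B/C matrices above.
WEIGHTS = {
    "A": {"a": {"b!": (0.20, "B"), "c!": (0.10, "C")},
          "b": {"b!": (0.30, "B"), "c!": (0.40, "C")}},
    "B": {"a": {"a!": (0.50, "A"), "c!": (0.10, "C")},
          "b": {"a!": (0.10, "A"), "c!": (0.30, "C")}},
    "C": {"a": {"a!": (0.10, "A"), "b!": (0.20, "B")},
          "b": {"a!": (0.40, "A"), "b!": (0.30, "B")}},
}

def path_probability(start, steps):
    """Multiply weights along a pathway; steps is a list of (agent, meme)."""
    p, state = 1.0, start
    for agent, meme in steps:
        w, state = WEIGHTS[state][agent][meme]
        p *= w
    return p

# A.a.b! => B.a.a! => A
print(path_probability("A", [("a", "b!"), ("a", "a!")]))  # 0.1
```

Each deployment descriptor chain maps directly onto one call, so longer circuits are just longer step lists.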
What does this mean? It seems the longer pathways are less likely. These pathways total 36%...of what? These are closed pathways in which a single agent, agent-a or agent-b, deploys all the memes that make the circuit. There are others in which one agent deploys the first meme and the other the second.
A.a.b! => B.b.a! => A ( 20% x 10% = 2% )
A.b.b! => B.a.a! => A ( 30% x 50% = 15% )
A.a.c! => C.b.a! => A ( 10% x 40% = 4% )
A.b.c! => C.a.a! => A ( 40% x 10% = 4% )
This gives us another 25% for a total of 61%. What about the other 39%? In practice, there are infinitely many pathways in which we go from State A to State B or C, and then bounce back and forth for an indeterminate period before returning to State A. How much do these pathways contribute to the "total" and what does that even mean?
We could compute the probability of jumping back and forth between State B and C without going to State A for however many iterations.
B => C => B
B.a.c! => C.a.b! => B ( 10% x 20% = 2% )
B.b.c! => C.b.b! => B ( 30% x 30% = 9% )
B.a.c! => C.b.b! => B ( 10% x 30% = 3% )
B.b.c! => C.a.b! => B ( 30% x 20% = 6% )
How meaningful is this?
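One way to make the "other 39%" concrete: collapse the agents, sum the weights into state-to-state transition probabilities, and enumerate first-return pathways from State A by length. A sketch, under my own assumed representation (the transition numbers are just the per-state sums computed later in the essay):

```python
# P(state -> next state), summed over agents and memes.
TRANS = {
    "A": {"B": 0.5, "C": 0.5},
    "B": {"A": 0.6, "C": 0.4},
    "C": {"A": 0.5, "B": 0.5},
}

def first_return_prob(start, max_len):
    """Total probability of returning to `start` within max_len steps."""
    total = 0.0
    frontier = {start: 1.0}  # probability mass currently at each state
    for _ in range(max_len):
        nxt = {}
        for state, p in frontier.items():
            for s2, w in TRANS[state].items():
                if s2 == start:
                    total += p * w  # pathway closes here; stop following it
                else:
                    nxt[s2] = nxt.get(s2, 0.0) + p * w
        frontier = nxt
    return total

for n in (2, 3, 10, 30):
    print(n, round(first_return_prob("A", n), 4))
```

Length-2 pathways alone give 55%, length-3 adds another 25%, and because the system is closed the cumulative total climbs toward 100% as the bounce length grows, which suggests the "total" in question is simply the certainty of eventual return.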
Closed Systems
One thing is clear: the three states A, B, and C form a closed system, in that there are no routes out to other states, i.e., states from which the system might never return.
But return from what? Who or what, if anything, is "moving" or "not returning?"
It would seem to be the collective state of all of the agents in the submemeplex (6), but how do we define which agents are or are not in the cohort of a submemeplex? Can they be members of, or inured to, other submemeplexes? What determines things like the memetic inventory of a submemeplex or the cohort of a submemeplex?
What about the ways in which the system can exit and enter given states or constellations of states? For example, what are the relative probabilities of the ways in which the system can exit and enter State A?
P:A => B = W:A.a.b! + W:A.b.b! = 20% + 30% = 50%
P:A => C = W:A.a.c! + W:A.b.c! = 10% + 40% = 50%
P:A => [ B, C ] = 100% (closed system)
So it's even money whether we go from State A to either State B or C. What about the other states?
P:B => A = W:B.a.a! + W:B.b.a! = 50% + 10% = 60%
P:B => C = W:B.a.c! + W:B.b.c! = 10% + 30% = 40%
P:B => [ A, C ] = 100% (closed system)
P:C => A = W:C.a.a! + W:C.b.a! = 10% + 40% = 50%
P:C => B = W:C.a.b! + W:C.b.b! = 20% + 30% = 50%
P:C => [ A, B ] = 100% (closed system)
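These exit probabilities can be derived mechanically from the state matrices by summing weights over agents and memes. A sketch, again under my own assumed dict representation of the tables:

```python
# WEIGHTS[state][agent][meme] = (weight, next_state), from the matrices above.
WEIGHTS = {
    "A": {"a": {"b!": (0.20, "B"), "c!": (0.10, "C")},
          "b": {"b!": (0.30, "B"), "c!": (0.40, "C")}},
    "B": {"a": {"a!": (0.50, "A"), "c!": (0.10, "C")},
          "b": {"a!": (0.10, "A"), "c!": (0.30, "C")}},
    "C": {"a": {"a!": (0.10, "A"), "b!": (0.20, "B")},
          "b": {"a!": (0.40, "A"), "b!": (0.30, "B")}},
}

def exit_probs(state):
    """Sum deployment weights over agents and memes, keyed by next state."""
    out = {}
    for memes in WEIGHTS[state].values():
        for w, nxt in memes.values():
            out[nxt] = out.get(nxt, 0.0) + w
    return out

for s in "ABC":
    probs = exit_probs(s)
    print(s, probs, "total:", round(sum(probs.values()), 2))
```

Each state's exits summing to 100% is exactly the "closed system" check written out above.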
So when in States A or C, there is an even chance of going to either of the other two states, but in State B there is a higher chance of going to State A than to State C.
What about the probability of a given agent being the one to deploy a meme in a given state (10)? What is a good notation for this?
P:A.a.[ b!, c! ] = W:A.a.b! + W:A.a.c! = 20% + 10% = 30%
P:A.b.[ b!, c! ] = W:A.b.b! + W:A.b.c! = 30% + 40% = 70%
P:B.a.[ a!, c! ] = W:B.a.a! + W:B.a.c! = 50% + 10% = 60%
P:B.b.[ a!, c! ] = W:B.b.a! + W:B.b.c! = 10% + 30% = 40%
P:C.a.[ a!, b! ] = W:C.a.a! + W:C.a.b! = 10% + 20% = 30%
P:C.b.[ a!, b! ] = W:C.b.a! + W:C.b.b! = 40% + 30% = 70%
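This per-agent sum also mechanizes neatly: an agent's "share" of a state is the total weight of their available deployments there. A sketch (the helper name `agent_share` is mine; the weight table transcribes the matrices above):

```python
# WEIGHTS[state][agent][meme] = (weight, next_state), from the matrices above.
WEIGHTS = {
    "A": {"a": {"b!": (0.20, "B"), "c!": (0.10, "C")},
          "b": {"b!": (0.30, "B"), "c!": (0.40, "C")}},
    "B": {"a": {"a!": (0.50, "A"), "c!": (0.10, "C")},
          "b": {"a!": (0.10, "A"), "c!": (0.30, "C")}},
    "C": {"a": {"a!": (0.10, "A"), "b!": (0.20, "B")},
          "b": {"a!": (0.40, "A"), "b!": (0.30, "B")}},
}

def agent_share(state, agent):
    """Probability that `agent` is the one to deploy a meme in `state`."""
    return sum(w for w, _ in WEIGHTS[state][agent].values())

for s in "ABC":
    for a in ("a", "b"):
        print(f"P:{s}.{a} = {agent_share(s, a):.0%}")
```

This makes the "power" asymmetry easy to scan: agent-a's share peaks at 60% in State B, while agent-b holds 70% in States A and C.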
So we see that agent-a has more "power" in State B, while agent-b has substantially more in both States A and C. This is all just a coincidence of the numbers I chose arbitrarily, and this coincidence extends to how agent-a is twice as predisposed to the A => B state transition, where State B is more favorable to them (4).
Summary & Conclusions
Things are a bit murky yet. How to express the sum of probabilities of paths between states, even in a simple memetic system, needs work. This recalls the Feynman Diagram idea, i.e., that the sum over all paths, with decreasing likelihoods, gives the probability that a given state change will eventuate.
In principle, agents try to steer the system towards states in which they themselves have more influence, or the chance to deploy more influential memes. I take this idea as a kind of given, but it may be flawed. Game Theory or other such modeling techniques may be relevant, but also psychology, i.e., is this really a good description of agents' motivation?
It may be good in future to optimize over paths, which might tie in better with Game Theory concepts. Agents want to deploy certain memes in certain states in an effort to effect certain paths, i.e., paths into other states where they're likely to have access to future memes and paths. So it may be more about a set of possible paths that an agent is looking for, as opposed to a given immediate meme.
There is the problem of immunomemes, and also alliance memes, their shiny counterpart (13). Memes are deployed based on a fear of immunomemes or the promise of alliance memes.
The problem of how to define a "state" is a bit murky, and may run into circular definitions. The same goes for a "submemeplex" (6) and its cohort and memetic inventory. There is also the question of overlap between submemeplexes, and between their cohorts and inventories.
A submemeplex can only be discussed in terms of some supermemeplex (9). We want to get at the concept of a (sub)memeplectic skin, and this implies memetic contact (8) between a submemeplex and a larger supermemeplex across which the memetic intercourse (12) characteristics change, presumably in some dramatic, consistent, and readily identifiable way. In other words, is there a boundary condition which may be identified?
I didn't really explore the idea of how memetic matrices and microchip layouts might be alike, and thus how analysis systems and software might apply. More work for a future essay.
_______________________________________
(1) It might be interesting to take a system such as Robert's Rules of Order as an example of a system that reaches decisions through the actions of multiple individuals, leading to "enabling states" and "compelled states."
(2) The definition of a "cohort" is a group of agents who are memetically connected. That may be by a common language, geographic proximity, some electronic network (phone, on-line game, etc.), or other such. In other words, a group of agents (usually people) who are able to exchange memes. Hence, pathways that lead "out" of this cohort in a sense do not really exist.
(3) Hence the idea of a "memeplectic skin" is an unavoidably nebulous concept, but nonetheless a valuable one to define and quantify.
(4) An agent tries to steer the system towards states where they themselves have more and more opportunities for deployment of future memes. What this all means in a practical modeling sense needs to be better defined. How is this reflected in the weights of deployment opportunities in a state matrix, for instance?
(5) It often makes sense for there to be a meme deployment that leads back to the same state. For example, an agent discussing a motion (discuss!) in a committee may or may not lead the system out of the Discuss(ion) state.
(6) It may be useful to define a submemeplex as a collection of agents and memes in which one or both form a subset of those of a larger memeplex (7). Also implied is that the agents of the submemeplex are in memetic contact (8) with those of the supermemeplex (9).
(7) A subculture or a counter-culture could be a submemeplex.
(8) Memetic Contact: Otherwise able to exchange memes, i.e., not impeded by physical distance or isolation, lack of a common language, or lack of access to communication technology.
(9) Supermemeplex: A memetic system which contains additional agents, memes, or allowed state transitions.
(10) The idea is that agents favor deploying memes that lead to states where they are even more able to deploy further memes (11). This may or may not be reflected in our example, by the way, since the weights are selected arbitrarily. This may be where Game Theory or other methods of analysis come in. For example, State B favors agent-a, and this is somewhat reflected by how agent-a is twice as likely to favor meme-b! in the State A and State C matrices, but I could just as well have selected other weights.
(11) The ability to deploy memes may be related to agent "status" or "power." For example, agent-a only has a total of 30% control over which meme gets deployed in State A, while they have 60% control in State B and again only 30% in State C. One could imagine that such a system would "settle" to each agent leaning 100% into wanting to jump to the state where they have the most power, but what would such a "settling" look like? Some kind of cooperation from other agents? This suggests other states.
(12) The type of memetic intercourse, or exchange of memes, across some posited skin is unclear at this point. One can imagine a much lessened number of memes that can cross this boundary (again, still a murky concept), or that the proportion of contact memes would shoot up (or decrease?), and that there would be a sharply different immunomemetic loading profile (again, either up or down). Again, the "dimensionality" of the space in which all of this takes place is yet to be elaborated.
(13) Immunomemes and Alliance Memes appear to be symmetrical. Symmetry-breaking factors appear to be the resulting benefit to the agent deploying the original meme, but also the presence of "rules". In the Triangular Baseball thought experiment, and also the Candy Conspiracy "experiment," we see how alliance memes and immunomemes "help" one agent to achieve a state transition that would otherwise be impossible. This is not reflected in the model in this essay, but might solve some problems. Perhaps for a future essay!