The Detailed Dynamics of Dynamical Systems

This material is intended to supplement, not to replace, the required reading Pissanetzky (2012b). This article is the first in a series of articles. The other articles are Verification for the theory of Detailed Dynamics, and Historical Overview of Related Work.

This article contains an overview of a new theory of Physics that applies to a class of complex dynamical systems that interact strongly and frequently with their environment. Important members of the class are complex adaptive systems, which acquire information about their environment, identify regularities, and develop behaviors that appear to the observer as “intelligent” or “suitable for survival.” Examples are animals, plants, and motor proteins. Computers are also members of that class: they are dynamical systems that acquire information from the environment in the form of programs, input data files, and sensors, but they are not adaptive. An important goal of this theory is to make them adaptive.


Traditional approaches used in Physics and Complex Systems Science for the study of complex systems include Statistical Mechanics and the theory of Non-linear Systems. In both cases, details are ignored, and the dynamics is described in some average sense by a simplified rule, such as a statistical distribution or a non-linear differential equation. Such approaches cannot describe the details of a dynamics that changes constantly with the flow of input information rather than following any prescribed rule or statistical distribution. The present theory formalizes a new mode of inquiry for the study of complex dynamical systems.


The present theory uses causal sets to describe the dynamics of a system in full detail, down to a certain granularity. In Physics, causal sets are used to formalize causality. Algorithms, computer programs, and signals coming from sensors are causal sets. Causal sets are executable: they can be compiled and executed on a computer, or they can control actuators or motor nerves. The theory assumes that a causal set that describes the dynamics of the system of interest is given. The theory is then derived directly from the fundamental principles of Physics: causality, symmetry, least-action, and the laws of Thermodynamics.
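As a concrete illustration, a causal set can be represented on a computer as a set of elements plus a set of precedence relations, and causality requires those relations to contain no loops. The following Python sketch uses made-up element names and a toy diamond-shaped relation set; it is not code from the required reading.

```python
# A toy causal set: elements plus precedence relations.
# (x, y) means "x precedes (causes) y". Names are illustrative only.
causal_set = {
    "elements": {"a", "b", "c", "d"},
    "relations": {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")},
}

def is_acyclic(elements, relations):
    """Check that the precedence relations contain no causal loops."""
    succ = {e: set() for e in elements}
    for x, y in relations:
        succ[x].add(y)
    visited, in_path = set(), set()
    def dfs(node):
        if node in in_path:
            return False          # a cycle would violate causality
        if node in visited:
            return True
        in_path.add(node)
        ok = all(dfs(n) for n in succ[node])
        in_path.discard(node)
        visited.add(node)
        return ok
    return all(dfs(e) for e in elements)

print(is_acyclic(causal_set["elements"], causal_set["relations"]))  # True
```

A cyclic relation set such as {("a", "b"), ("b", "a")} would fail the check, which is what makes the representation causal rather than merely relational.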



The principle of causality states that effects follow their causes. It applies to all dynamical systems in our world, and is responsible for the direction of time and the direction of entropy change. It does not apply inside black holes, where time is believed not to exist.


The dynamics of a system is described by a set of state variables, which form the state space of the system. A state is a single point in the state space, corresponding to a certain set of values of the state variables. As the dynamics of the system causes transitions from one state to another, the representative point moves in the state space, describing a trajectory. The state variables are the elements of the causal set, and the causal relationships that apply for the transitions are the precedence relations of the causal set. Causal chains in the causal set correspond to trajectories in state space. Random or unexpected events are permitted, and they initiate causal chains.


A causal set is a discrete representation of the system. Only a discrete representation can provide sufficient detail for a representation of the dynamics. Ironically, the important advantage of being able to work with a detailed dynamics comes with a serious drawback of a practical nature: the power of Mathematical Analysis cannot be applied. Set theory and group theory are the tools, problems must be solved by computation, and theorems must be proved by construction.



The principle of symmetry states that any physical system that has a symmetry of the action also has a conservation law and a conserved quantity. Conserved quantities are important in Physics and natural science because they tell us what is observable, what is stable and invariant enough so we can detect it and measure it. Attractors observed in complex systems of all kinds are conserved quantities. Behaviors developed by adaptive systems are conserved quantities. Classes of objects used in object-oriented programming are conserved quantities. Mass, energy, linear and angular momentum, electric charge, and other fundamental properties of dynamical systems are conserved quantities.


Causal sets have a partial order, and because the order is only partial, they always have symmetry and a corresponding conservation law and conserved quantity. This fact is formalized by the Central Theorem of this theory, given below. Unfortunately, the principle of symmetry does not say how to determine the law of conservation or how to calculate the conserved quantity. Doing this is, precisely, the major goal of the new theory. But the task requires another principle, the principle of least-action. This is because conserved quantities appear only in conservative systems, systems that follow least-action trajectories. Causal sets in general are not conservative and do not follow least-action trajectories.



The principle of least-action establishes that every conservative dynamical system that evolves in state space from an initial state to a final state must follow a least-action trajectory. These systems are discrete, so they either start from an initial state and halt at a final state, or they loop forever with some period. In the first case, the trajectory is the causal set. In the second case, the period is the causal set.


Let I be the initial state, and F the final state. The set {I, F} is known as the behavior of the system. In any system, even one of relatively small size, there is an enormously large number of trajectories with the behavior {I, F}. They differ by their action: every trajectory has a physical quantity associated with it, called the action, and the principle states that the system in the real world will follow a trajectory where the action is minimized. The principle does not say why this happens; it only says that real systems in the real world behave that way. And it just so happens that the number of least-action trajectories with the behavior {I, F} is usually very small. So, as a side effect, the principle also resolves the combinatorial problem.


An action functional is now postulated. It was experimentally observed in 2005, but it enters the theory as a postulate. The value of the functional is given by Equation (2) in the required reading Pissanetzky(2012b). It is defined as a measure of the causal set that depends on a permutation of the elements, but not on the elements themselves. The permutation defines a trajectory because it lists the elements in the order they are visited, or initialized by the dynamics. The permutation also defines a measure for each precedence relation in the causal set, given by the distance between the two elements of the relation as measured in the permutation, and the value of the functional is defined as twice the sum of the individual measures. If the local action of a state transition is defined as twice the distance between the elements, then the total action of a trajectory is the sum of the local actions of which the trajectory is made. The values of local actions and total action are positive integers.
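The definition above can be sketched for a small causal set. The element names and the diamond-shaped relation set below are illustrative; the authoritative definition of the functional is Equation (2) of the required reading.

```python
# Toy causal set relations: (x, y) means "x precedes y".
RELATIONS = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}

def action(perm, relations=RELATIONS):
    """Total action of a trajectory given as a permutation of elements.

    The local action of a relation (x, y) is twice the distance between
    x and y as measured in the permutation; the total action is the sum
    of the local actions. All values are positive integers.
    """
    pos = {e: i for i, e in enumerate(perm)}
    return sum(2 * abs(pos[y] - pos[x]) for x, y in relations)

print(action(("a", "b", "c", "d")))  # 12
print(action(("a", "c", "b", "d")))  # 12
print(action(("a", "d", "b", "c")))  # 16
```

Note that the value depends only on where the elements sit in the permutation, not on what the elements are, in agreement with the definition above.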


The number of legal permutations of the causal set is very large, even for small causal sets. To each legal permutation there corresponds a trajectory in state space, and to each legal permutation with the least value of the functional there corresponds a physical trajectory in state space. A search in the set of all legal permutations can therefore determine all physical trajectories with the given behavior {I, F}. Minimizing the functional has the following effects:

  1. The dimension of the state space is reduced and the combinatorial explosion is eliminated;

  2. The system becomes conservative: group-theoretical block systems become observable;

  3. Self-organization takes place, the block systems are self-organized attractors; and

  4. The block systems are causal sets; iterating the process results in hierarchies.
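The search over legal permutations can be illustrated with the same toy diamond-shaped causal set: enumerate all permutations, keep those that respect the precedence relations, and select the ones with the least action. This brute-force sketch is only feasible for tiny sets, and the element names are illustrative.

```python
from itertools import permutations

ELEMENTS = ("a", "b", "c", "d")
RELATIONS = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}

def is_legal(perm):
    # A permutation is legal if every cause appears before its effects.
    pos = {e: i for i, e in enumerate(perm)}
    return all(pos[x] < pos[y] for x, y in RELATIONS)

def action(perm):
    pos = {e: i for i, e in enumerate(perm)}
    return sum(2 * abs(pos[y] - pos[x]) for x, y in RELATIONS)

legal = [p for p in permutations(ELEMENTS) if is_legal(p)]
least = min(action(p) for p in legal)
physical = [p for p in legal if action(p) == least]
print(physical)  # [('a', 'b', 'c', 'd'), ('a', 'c', 'b', 'd')]
```

For this toy set both legal permutations happen to be least-action; in larger causal sets the least-action subset is typically a small fraction of the legal permutations, which is the combinatorial reduction described above.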

The fact that the action of a trajectory is the sum of local contributions, all of which are positive integers, is very important. It means that the least-action trajectories are found by minimizing each local action, in any order or all at the same time. This statement is a form of the principle of locality, well known in Computer Science. It also fits well with the notion of parallel programming. In Physics, it corresponds to the notion that a physical object is influenced only by its immediate surroundings. In Biophysics, it is what makes the existence of the brain, and of adaptive systems in general, possible. An adaptive system can be imagined as consisting of many parts, each of which knows how to minimize its own contribution. The virtual machine, the new version of the SCA algorithm, and the massively parallel neural network implementation proposed in Section 4 of the required reading, would not be possible without the principle of locality.



Once the principle of least-action has been applied to the given causal set, and the set of least-action permutations with behavior {I, F} has been found, then the system becomes conservative, and conserved quantities can be observed. Or, in terms used in Complex Systems Science, self-organization takes place and the system converges to its attractors. This theory proposes that the conserved quantity is a group-theoretical block system defined over the set of least-action permutations. A block system is a partition of the set into contiguous and disjoint subsets, known as blocks, that is invariant for all least-action permutations. Procedures for finding block systems are well known in group theory.
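As an illustration of the block-system idea (a simplified stand-in, not the group-theoretical procedure itself), the following sketch finds the finest partition into contiguous blocks whose element sets agree across all least-action permutations of the toy causal set used earlier:

```python
def block_system(perms):
    """Finest partition into contiguous blocks invariant for all perms.

    A prefix segment is closed off as a block as soon as every
    permutation agrees on the set of elements it contains.
    """
    n = len(perms[0])
    blocks, start = [], 0
    for end in range(1, n + 1):
        sets = {frozenset(p[start:end]) for p in perms}
        if len(sets) == 1:          # all permutations agree on this segment
            blocks.append(set(perms[0][start:end]))
            start = end
    return blocks

# The two least-action permutations of the toy diamond causal set.
least_action = [("a", "b", "c", "d"), ("a", "c", "b", "d")]
print([sorted(b) for b in block_system(least_action)])
# [['a'], ['b', 'c'], ['d']]
```

The partition {a}, {b, c}, {d} is the same no matter which least-action trajectory the system takes, which is exactly the invariance that makes the block system a conserved quantity.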


The resulting block system must obey the order defined by the precedence relations in the original causal set, so the original precedence relations must be applied to the block system. Relations whose endpoints fall in different blocks are said to be induced, resulting in new inter-block precedence relations; the remaining relations become encapsulated as intra-block relations. The result of this process is that the block system itself becomes a new causal set. The blocks of the system, with their encapsulated relations, are the elements of the new causal set, and the induced relations are the corresponding precedence relations of the new causal set.
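A minimal sketch of this inducing step, using the toy diamond causal set and its block system {a}, {b, c}, {d} from the earlier examples (blocks are indexed 0, 1, 2):

```python
RELATIONS = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
BLOCKS = [frozenset({"a"}), frozenset({"b", "c"}), frozenset({"d"})]

def induce(relations, blocks):
    """Split relations into induced (inter-block) and encapsulated ones."""
    block_of = {e: i for i, b in enumerate(blocks) for e in b}
    induced, encapsulated = set(), set()
    for x, y in relations:
        if block_of[x] == block_of[y]:
            encapsulated.add((x, y))                 # stays inside a block
        else:
            induced.add((block_of[x], block_of[y]))  # relation between blocks
    return induced, encapsulated

induced, encapsulated = induce(RELATIONS, BLOCKS)
print(sorted(induced))       # [(0, 1), (1, 2)]
print(sorted(encapsulated))  # []
```

Here the blocks form a new three-element causal set with the chain of induced relations 0 → 1 → 2; in this small example no relations happen to be encapsulated.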


As the block system is now a causal set, it also has its own action functional, its own least-action trajectories, and its own higher-level block system. The higher-level block system is, again, a causal set. The process can be repeated, until only one element remains. The result is a hierarchical network of block systems, which has the property of being scale-free. That is to say, it is a mathematical fractal. Every intermediate level in the hierarchy is at the same time a causal set and a block system. It is the block system for the causal set in the level immediately below, and the causal set for the block system immediately above.


If certain conventions are established to standardize the definitions of the blocks, such as requiring the blocks to be minimal, then a theorem of uniqueness can be stated. The number of causal sets is countably infinite, and the number of finite hierarchies is countably infinite. The theorem posits that the correspondence between causal sets and hierarchies is a bijection. Functions E and E⁻¹ in the required reading define the bijection.



Minimizing the action of the trajectory also minimizes the free energy and entropy involved in the trajectory. Trajectories are uncertain, because the system is assumed to select trajectories randomly. The entropy of the system is a measure of the uncertainty. A system with many trajectories has a high entropy and a high uncertainty. The conservative system obtained by minimization of the action has fewer states, far fewer trajectories, and a much smaller entropy and uncertainty. The conservative system may be certain, if it has only one least-action trajectory. But this is not the usual case. Usually, the conservative system still has many least-action trajectories, meaning that some entropy and some uncertainty remain. That’s why the block systems are necessary. The block systems are invariant under the trajectories, meaning they are the same no matter what trajectory the system takes. The block systems are certain. This property is what makes the block systems of essential importance. The certainty of block systems supports the mechanisms of meaning. A hierarchical network constitutes an ontology of concepts and their relationships that can be used to reason. Because of uniqueness, hierarchical networks are shared and define the objects that support communication. Because of compression and sharing, the supported form of communication is very efficient.
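The entropy argument can be illustrated numerically under the assumption, made here only for illustration, that trajectories are equally likely and that entropy is taken as the logarithm of the trajectory count. The toy causal set below (two independent chains a < b and c < d) is not from the required reading.

```python
import math
from itertools import permutations

ELEMENTS = ("a", "b", "c", "d")
RELATIONS = {("a", "b"), ("c", "d")}   # two independent causal chains

def action(perm):
    pos = {e: i for i, e in enumerate(perm)}
    return sum(2 * abs(pos[y] - pos[x]) for x, y in RELATIONS)

# Trajectories before minimization: all legal permutations.
legal = [p for p in permutations(ELEMENTS)
         if all(p.index(x) < p.index(y) for x, y in RELATIONS)]

# Trajectories after minimization: only the least-action permutations.
least = min(map(action, legal))
physical = [p for p in legal if action(p) == least]

print(len(legal), len(physical))  # 6 2
print(math.log(len(legal)), "->", math.log(len(physical)))
```

Minimizing the action shrinks the trajectory set from 6 to 2, so the entropy drops from ln 6 to ln 2 but does not vanish; the remaining uncertainty is precisely why the trajectory-invariant block systems, rather than individual trajectories, carry the certainty.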



The central theorem proposed for this theory states the following:


A causal dynamical system with 3 or more elements has a symmetry of the action, a conservation law, and a unique conserved quantity, described as a scale-free hierarchical network of group-theoretical block systems, and representing an attractor of the system.

The central theorem of this theory posits that a given body of causal information has a corresponding unique structure, described as a scale-free hierarchical network of group-theoretical block systems. The structure depends on and is determined by the existing information alone. The block systems correspond to levels in the hierarchy, and each block in a block system is in turn a network of the same kind. The lowest level in the hierarchy corresponds to the granularity of the available information. Structures are to be considered stable forms of the given information. If the information changes, so does the corresponding structure, as determined by function E. Some structures are very sensitive to even the slightest changes in the given information; this phenomenon is known as the butterfly effect. If the information is complex, changes can give rise to deterministic chaos.

The verification of the theory is discussed next.