Home page for Dr. Sergio Pissanetzky   


Main Articles
        Causal inference 101
        The Theory of Detailed Dynamics
        The Theory of Detailed Dynamics and the Combinatorial Explosion
        Verification for the Theory of Detailed Dynamics
        Historical Overview of Related Work
        The mechanics of embodiment
        A case study: The European Example
        An example of Image Recognition: Point Separation
        An example of high cognitive function: Euler equations
        The Quest for Certainty
        Differential equations of Physics in the Theory of Detailed Dynamics

Supporting Files
        The European Example: Original Java Code
        The European Example: Execution Trace
        The European Example: Causal Set for Level 1
        The European Example: Permutations for Level 1
        The European Example: Interpretation of Level 2
        The European Example: The Fractal Hierarchy
        The Point Separation Example: Point Coordinates
        The Point Separation Example: The Causal Set
        The Point Separation Example: Permutation
        The Movement

Project Plan
        Core implementation

Reading
        Is Causal Inference a New Theory of Physics?
        How to Build an Intelligent Machine
        You won the Lottery, but your check is shredded
        The brain as a physical system
        Schrödinger's cat
        Maxwell's Demon

Publications 
Publications prior to 2000
References
References for causal sets
Who is Sergio Pissanetzky?
Contact information
News


Research Interests

My research interest is causality, the principle that effects follow their causes. I am interested in the mathematical and theoretical aspects of causality, and in its applications to Computation, Artificial Intelligence, Complexity, Physics, and Neuroscience. I am particularly interested in complex adaptive systems, and in the application of causality to one of the greatest unsolved problems of the 20th century: the identification of regularities in information.¹ On a scale from purely theoretical to strongly applied, I am positioned near the middle, shifted somewhat toward the theoretical but with strong consideration given to applications. I want to make computers and robots completely autonomous from humans, and I believe they are not yet autonomous because they cannot identify regularities.

Causality and time are closely related; some scientists believe they are the same. We understand that there is a past and a future, and that time progresses in the direction from the past to the future. Scientists believe that the direction of time was set by the "big bang" and has remained the same ever since. The entire universe is causal, time is unidirectional, and effects follow their causes. The only exception is black holes: cosmologists believe that time and causality do not exist inside them. But on this side of a black hole's horizon, everything is causal.

But these are not the things I work on. My field of inquiry is causality itself. It may seem odd that something as basic as causality should have a theory, but it does: there is a large and rapidly growing body of theoretical work on causality. Because our universe is causal, my work is of a very general nature. I use causal sets as the fundamental building blocks for the theory of causality. I use causal sets to represent physical systems, I study their properties and transformations, and I try to apply what I find to the physical systems themselves. When I say physical system, I mean anything that is physical and is not in a black hole, natural or artificial, living or not. To me, the brain is a physical system, and a computer running a program is another. I study both in the context of causality, and I believe I know why they function so differently.

The development of this theory was made possible by a simple experimental observation I made around 2005. I observed self-organization in causal sets when I minimized a certain action functional that I had also observed, and which depends on, and only on, the given causal set. At the time of the experiment I was using canonical matrices, but they are equivalent to causal sets. I reported the experiment in some detail in the Introduction of Pissanetzky (2011a).

An appropriate environment and a good deal of inspiration for these developments were provided by the Workshop on Automation and Robotics held once a year at the NASA Gilruth Center, Johnson Space Center, where I presented a series of talks. I published a number of papers on the subject, but the only two major papers published so far are Pissanetzky (2009a) and Pissanetzky (2011a). I published the physical interpretation of the functional in Pissanetzky (2012).

¹ Note: Regularities are also known by other names, such as patterns, conserved quantities, attractors, and invariant representations. The identification of regularities is also known as the binding problem: the problem of finding associations among elements of information.


Action functional and causal logic

The fact that the functional was experimentally observed in a physical system is very important. It means that the functional is natural, a property of nature with a physical meaning, rather than some cost function artificially engineered to gain a cost advantage. And it makes the theory a fundamental theory of Physics, rather than a method intended to solve some particular problem. I now know, but did not know in 2005, that the process of minimizing the functional removes entropy from the system. The removal of entropy reduces the uncertainty of the system by restricting its dynamics to the subset of states in the system's attractors, and the attractors are identifiable regularities because they remain invariant under the restricted dynamics. We say that the system has converged to its attractors, or that it has self-organized, or that it is less uncertain and more compressed because the regularities are now certain. The resulting regularities are a property of the system: given the system, the regularities follow.

The regularities obtained by minimizing the functional on a given causal set form a block system. A block system is a partition of the causal set that has a property of invariance: it remains invariant under the reduced dynamics of the attractor. It can also be said that the minimization of the functional, and the subsequent removal of entropy, has defined the reduced dynamics and created associations relative to that dynamics by binding some elements of the causal set, thus forming the blocks. Block systems are usually defined and studied in the context of group theory. When the block system is formed, some of the precedence relations in the partial order become encapsulated in the blocks. The remaining relations are induced in the block system and become precedence relations among the blocks themselves. The block system is itself a set (of blocks), and it has a partial order defined by the induced relations. The block system therefore satisfies the definition of a causal set and is itself a causal set. Since the block system is a causal set, it has its own action functional and its own entropy. The functional of the block system can be minimized, and another, usually smaller block system is obtained. Repeating the process yields a multilevel inheritance hierarchy of block systems. The hierarchy has the properties of a mathematical fractal. If the hierarchy is displayed with the finest-grained level at the bottom and the coarsest-grained level at the top, then each level in the hierarchy represents the regularities of the level below it. Several such hierarchies for small causal sets have been calculated and published; see for example Fig. 1 in Pissanetzky (2011c) and Fig. 3 in Pissanetzky (2011a).
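To make the minimization concrete, here is a minimal sketch on a toy causal set of four elements. The functional used in the sketch, the total index distance between causally related pairs in a legal permutation, is an illustrative assumption of mine; the precise functional of the theory is defined in Pissanetzky (2011a).

    # Minimize an illustrative action functional over the legal
    # permutations (topological sorts) of a toy causal set.
    from itertools import permutations

    relations = {(0, 1), (1, 2), (0, 3)}   # (a, b) means "a causes b"
    elements = range(4)

    def is_legal(perm):
        # A permutation is legal if every cause precedes all of its effects.
        pos = {e: i for i, e in enumerate(perm)}
        return all(pos[a] < pos[b] for a, b in relations)

    def action(perm):
        # Assumed functional: total index distance over related pairs.
        pos = {e: i for i, e in enumerate(perm)}
        return sum(pos[b] - pos[a] for a, b in relations)

    legal = [p for p in permutations(elements) if is_legal(p)]
    best = min(legal, key=action)
    print(len(legal), "legal permutations; minimum action", action(best), "at", best)
    # -> 3 legal permutations; minimum action 4 at (0, 3, 1, 2)

Only the order of the elements varies from one legal permutation to another; the causal set itself stays fixed. That invariance is the symmetry discussed further below.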

The minimization of the action functional over a causal set results in an identifiable regularity: the inheritance hierarchy of the causal set. Combined with the fact that the functional depends only on the causal set, this can be viewed as a function Ƹ, a directed map from the set of all causal sets to the set of all block systems. The set of all causal sets is countably infinite. The set of all block systems can be shown to be countably infinite as well, by way of the inverse transformation Ƹ⁻¹, which exists and is computable. The hierarchy is obtained by repeated application of function Ƹ. The existence and properties of the two functions Ƹ and Ƹ⁻¹ make the map between causal sets and inheritance hierarchies bijective. Function Ƹ is believed to be uncomputable; or, to be more precise, the expression of the functional is uncomputable. Once the expression is known, from the observation of a physical system, its value is computable. In fact, its value for a given causal set is easy to compute, because the expression is very simple.

By definition, a mathematical logic is a domain-independent model of knowledge about the physical world that includes a representation language and a process of inference capable of deriving new facts about the physical world from facts that are already known. As discussed below, causal sets have an outstanding power of representation of the physical world. Function Ƹ, which starts from a causal set representing known facts about a physical system and derives the corresponding inheritance hierarchy, a new, previously unknown fact about the system, qualifies as a process of inference. Together, the function and the causal sets define a new mathematical logic. I would like to propose that the new logic be known as causal logic, and the inference it contains as causal inference.

Causal logic is the centerpiece of this theory. I believe causal logic is the solution to the unsolved problem I referred to above. Around 1850, Hermann von Helmholtz predicted that a form of inference should exist in the human vision system, with the power to uncover regularities in retinal images. He called it "unconscious inference," because we are not aware of its existence. I believe causal inference is the unconscious inference that Helmholtz was looking for. Later, in the 20th century, and even in the first decade of the 21st century, several lines of thought predicted the need for a similar inference, in some cases with considerable and precise detail, but were not able to explain how it works. Among them I may mention Jeff Hawkins' 2004 book On Intelligence, where he extensively refers to certain "invariant representations" that exist in the brain but never explains what they are, and Murray Gell-Mann's 1995 book The Quark and the Jaguar, where he extensively refers to "regularities" in information and proposes some insightful approaches to obtaining them, but finally leaves the problem open. Joaquín Fuster (2005) has studied the cerebral cortex and reports conclusions very similar to those obtained from the theory outlined here.

These four elements, the causal sets that formalize causality, the observed phenomenon of self-organization, the action functional with a physical meaning that causes the self-organization, and the causal logic that derives new facts from known facts, are the basic ingredients of which the theory of causality is made.


The four Fundamental Principles of Nature

A number of principles are used in Physics to describe nature, but the following four are fundamental:

Causality. Causality is the subject of the preceding Sections. A more detailed article will be published soon.

Self-organization. A symmetry of a physical system is any property or feature of the system that remains unchanged when a certain transformation is applied to the system. A system can have many different symmetries. The principle states that, if a dynamical system has a symmetry, then there exists a corresponding law that establishes the conservation of a certain quantity. That "quantity" can be anything that pertains to the system: a value, a structure, a law, a property. The classical examples are the invariance of the equations of motion of a mechanical system under spatial translations, which gives rise to the conservation of linear momentum, and their invariance under translations in time, which gives rise to the conservation of energy. The principle states that a conserved quantity exists, but does not specify what it is or how to obtain it.
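As a standard textbook illustration of the symmetry-conservation correspondence (this is Noether's theorem in Lagrangian notation, added here for orientation; it is not specific to the theory of causality): if the Lagrangian L(q, q̇) of a system does not depend on the coordinate q, then the equations of motion give

    ∂L/∂q = 0  ⇒  d/dt (∂L/∂q̇) = 0,

so the conjugate momentum p = ∂L/∂q̇ is the conserved quantity that the principle guarantees.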

Causal sets with a partial order have a symmetry: a causal set remains the same if its elements are permuted in any way that does not violate the partial order. To this symmetry there corresponds a conserved quantity that pertains to the causal set. However, unlike the principle, the theory of causality does specify a precise way of obtaining the corresponding conserved quantity: the conserved quantity corresponding to the symmetry is a block system. The block system is also a causal set, usually smaller than the original one. It has its own symmetry and its own conserved block system. Iterating, a hierarchy of block systems on the given set is obtained. These facts are rich in significance, universal in nature, and of central importance in the theory of causality, particularly when combined with the power causal sets have to represent nature.

By contrast, in Computer Science, the fundamental building blocks are strings of characters. But strings are sets with a total order. They have no symmetries, no conserved quantities, and no structures. An unfortunate fact, indeed.

Least-action. It is customary to describe a dynamical system by means of a set of independent variables, each of them a function of time. It is also customary to define a multi-dimensional state space, the coordinates of which are the variables. A single point in state space describes the complete state of the system at a particular instant of time. As the system evolves in time as a result of its dynamics, the point moves in state space and describes a trajectory. According to the principle of least action, if the system travels between two given points in state space, then it does so along a path of stationary action. Fermat's principle, that light travels from one point to another along the path of least time, is a popular example of the application of the principle to this particular case.

Different theories of Physics describe action differently. It is therefore better, at this introductory level, to explain action in a more intuitive way. Think of action as traveling energy: energy that travels from one point to another, or an energy density that travels along a path. Think of a causal set represented as a directed graph, where energy travels from one vertex to the next along a directed edge. The dimensions of action are energy times time, so the action for the transfer would be the product of the amount of energy transferred and the time it takes to transfer it. In a slightly more precise definition, action is a functional that assigns a number to any given trajectory in state space. If a pair of points is given in state space, then there can be many different trajectories between them, and the action takes a different value for each trajectory. The principle then states that the trajectory actually followed by the system is one with a stationary value of the action.
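In the standard textbook notation, which goes beyond what this page needs but makes the definition exact: the action assigned to a trajectory q(t) between times t₁ and t₂ is the time integral of the Lagrangian L, and the principle selects the trajectories that make it stationary under small variations,

    S[q] = ∫ L(q, q̇, t) dt  (integrated from t₁ to t₂),    δS = 0.

The dimensions check out: L has dimensions of energy, so S has dimensions of energy times time, as stated above.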

The principle of self-organization, and the principle of least action, are known to be closely tied. This fact is not only confirmed by the theory of causality, but also essential for its development. 

Entropy. The classical and more familiar approach to Thermodynamics is to define entropy in terms of two measurable quantities: heat exchanges between bodies, and temperature. If a system at temperature T gains an amount of heat ΔQ, then its entropy increases by an amount ΔS = ΔQ/T. It follows that, if heat in the amount ΔQ passes from a "hot" system at temperature T1 to a "cold" system at temperature T2, where T1 > T2, without performing any work, then the net change of entropy is −ΔQ/T1 + ΔQ/T2, which is a positive number, meaning that the combined entropy of the two systems has increased. Thus, when heat flows from hot to cold, it also flows in the direction in which the total entropy increases. This definition originated from studies of the Carnot cycle, and as a response to the quest for perpetual motion machines.
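For a concrete check with round numbers (an illustration of the formula, not a measurement): let ΔQ = 100 J pass from a system at T1 = 400 K to a system at T2 = 300 K. Then

    ΔS = −100/400 + 100/300 ≈ −0.25 + 0.33 = +0.08 J/K,

a net increase, exactly as the sign argument above requires.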

The modern approach considers the internal energy E and the entropy S of a system as independent state variables, separately defined and externally controlled. Recalling that a state variable is one that depends on the state of the system, the definition means that both E and S can be calculated if the state of the system is given. In this approach, temperature is defined as the derivative of the energy with respect to the entropy: T = dE/dS. The classical definition above is a particular case of the modern definition. For, if a system at temperature T gains an amount of heat ΔQ, then by conservation of energy its internal energy must increase by the same amount, ΔE = ΔQ. The increase of entropy would then be ΔS = ΔE/T. This can be written as T = ΔE/ΔS, and the definition of T is obtained by taking the limit.

In the modern approach, the entropy of a system is defined as a measure of the uncertainty of the system's dynamics. As explained above, a system is described by a set of variables that can have different values at different instants of time, and the state of the system at a certain instant of time is the particular combination of values of the variables at that instant. The dynamics of the system is the set of rules that determine how the system transitions from one state to the next. Say that the system is in a certain state A, and a transition to some destination state is about to take place. The rules of the dynamics specify the conditions for a transition to be possible, but in general there can be many destination states that satisfy the conditions, and the rules do not give preference to any one of them. A transition will take place from A to one of the candidate destinations, but there is an uncertainty as to which one. The entropy of state A is a measure of that uncertainty. The classical example is a die: if state A is "die in my hand" and I throw it, it can land in any of 6 possible states. Note that the entropy is a property of the state; that is why we say the entropy is a function of state.
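A convenient way to put a number on that uncertainty is the Shannon entropy of the distribution over destination states. The sketch below is standard information theory, added here for illustration; it is not the specific entropy functional of the theory above.

    # Shannon entropy of the die example: six equally likely destination
    # states give log2(6) ≈ 2.585 bits; a biased die is less uncertain.
    import math

    def shannon_entropy(probs):
        # Entropy in bits of a discrete probability distribution.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    fair = [1 / 6] * 6
    biased = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]

    print(round(shannon_entropy(fair), 3))    # 2.585
    print(round(shannon_entropy(biased), 3))  # 2.161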

Entropy and Information. The first lesson to be learned about information is that information is a property of a physical system. Information does not exist by itself: there is always a physical system or medium that carries the information. It can be an optical disk, a computer's memory, a brain's memory, a beam of light or gamma rays, or a fiber-optic cable that carries television signals. Radiation coming from the stars carries with it information about the history of the universe. Astronomy is the art of reading that information.

In this Section, I treat information as a physical system and study its physical properties. Information has long been known to have physical properties of its own, independent of the medium that carries it. But it was not until very recently, in March 2012, that a direct measurement of the energy of information was completed. The amount of heat released by the act of erasing one bit of information was experimentally measured to be about 3 × 10⁻²¹ joules. This release of heat is real. It is a limitation for modern-day computing, and, even more importantly, it confirms the physical nature of information.
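The measured value agrees with Landauer's theoretical bound for erasing one bit at room temperature, a standard calculation that I add here for comparison:

    k T ln 2 = (1.38 × 10⁻²³ J/K) × (300 K) × (0.693) ≈ 2.9 × 10⁻²¹ J.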

If information is viewed as a physical system, then it must also obey the four fundamental principles of nature, have physical properties such as energy and entropy, and, of course, it must also have a state and a dynamics. And the entropy must be a measure of the uncertainty in the information.


Complex Dynamical Systems and Complex Adaptive Systems.

A complex dynamical system (CDS) is any physical system that has a dynamics and can be considered as composed of many interacting parts. The system is open if it interacts with its environment, or closed if it does not. A complex adaptive system (CAS) is an open CDS that can find regularities in its interactions with the environment and use the regularities to adjust its behavior based on the prediction of possible outcomes. There is a very large volume of literature on these subjects, but much of it deals with applications to particular systems. For general-purpose basic information and definitions, see the information-theoretic primer developed by Prokopenko et al. in 2009. For more fundamental concepts and profound insights, consider The Quark and the Jaguar by Physics Nobel laureate Murray Gell-Mann.

We shall soon see that complex systems do not need to be all that complex. In fact, systems with as few as a single-digit number of parts can exhibit many of the features usually attributed to systems with a very large number of parts, such as attractors and deterministic chaos.


Causal sets. Are they powerful enough to represent physical systems?

Causal sets are a particular case of partially ordered sets. Anything said about partially ordered sets applies to causal sets. But the converse is not true, and the differences are very important. Partially ordered sets can be finite or infinite, and nearly all of the attention in that field is focused on infinite partially ordered sets. But causal sets are always finite. The study and applications of causal sets are very different from those of partially ordered sets.

Under certain conditions, any algorithm or computer program that runs on a computer and halts is a causal set. That happens because algorithms and computer programs that halt satisfy the definition of a causal set. But the fundamental reason is that a real-world computer program running on a computer is a physical system, one that really exists in the physical world, and causality applies in the physical world. It can equally well be said that my research interests are in the properties and transformations of computer programs.
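To make the correspondence concrete, here is a minimal sketch of one way to read a causal set off a straight-line program, using write-before-read data dependencies. The five-statement program and the extraction rule are illustrative assumptions of mine, not the construction used in the papers cited above.

    # Statement j causally follows statement i when j reads a variable
    # that i wrote. Each entry is (variable written, variables read).
    program = [
        ("a", set()),        # a = input()
        ("b", set()),        # b = input()
        ("c", {"a", "b"}),   # c = a + b
        ("d", {"c"}),        # d = 2 * c
        ("e", {"a", "c"}),   # e = a * c
    ]

    relations = set()
    last_writer = {}
    for j, (target, reads) in enumerate(program):
        for var in reads:
            relations.add((last_writer[var], j))  # write-before-read edge
        last_writer[target] = j

    print(sorted(relations))
    # -> [(0, 2), (0, 4), (1, 2), (2, 3), (2, 4)]

The result is a partial order: statements 0 and 1 are unordered and may execute in either order, which is exactly the permutation symmetry of causal sets discussed above.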

Because computer programs have been used to simulate practically anything we can think of, one can say that causal sets have been used to simulate practically anything we can think of, and that causal sets have an unparalleled ability to represent our knowledge about the world that surrounds us.

There is a big difference, though. Causal sets allow one to deal with causality and knowledge in a mathematical way: they allow transformations to be applied and consequences to be drawn from the transformations. This would be very difficult to do with programs, because the notation used in programming languages is not mathematically friendly.

Saying that my research interest is in causality also means that my research interest is in computer programs and their properties and transformations, and that I work with computer programs. I do not write them; I transform them. And the results are fascinating. It further means that the scope of my research is very wide: it relates to several disciplines, among them Computer Science, Complex Systems Science, Artificial Intelligence, and Physics.

This website contains a number of short articles describing different aspects of my work. Currently (July 2012), only a few of the articles are posted, but I am working on many more. There is a great deal of material here, and I hope you will find it interesting. Check back frequently; the site is updated regularly.

Sergio Pissanetzky