Verification of the Theory of Detailed Dynamics
Required reading: outline of the theory, and Pissanetzky (2012b).
The theory of detailed dynamics is derived directly from the fundamental principles of Physics and a postulated action functional. The theory is expected to correctly predict a very large number of natural phenomena, but the expectation must be verified by actually predicting different phenomena and verifying the predictions by direct comparison with experiment or with heuristic theories that have themselves been verified to correctly predict phenomena in some domain. The term "verification" is used here with that meaning.
No amount of verification can prove a theory of nature. But each time a verification succeeds, the theory becomes stronger. A traditional practice is to challenge the theory by formulating unusual predictions and then trying to verify them. Verification may be difficult in some cases, but the difficulty of verification cannot be held against the theory. On the other hand, any theory of natural science is always falsifiable. To falsify a theory, an experiment must be found whose outcome the theory demonstrably cannot explain. A situation may arise where a theory has been proved to correctly predict a range of phenomena, but not the outcome of a certain specific experiment. When this happens, a boundary has been found for the domain of phenomena that the theory can predict, and a new domain of phenomena has been discovered. The new domain may necessitate a new theory, or even the formulation of a new principle of nature or the modification of an existing one.
The theory of concern is new, and the process just described is only beginning to happen. The paper Pissanetzky (2012b) is an attempt at prediction in the domain of self-programming by using the theory. But in recent months of 2012, even in recent weeks of October, there has been a spate of independent research with results that strongly concur with the Central Theorem, or with other tenets of the theory such as the use of action to explain the detailed dynamics of complex systems.
It is the purpose of this article to review that literature. A slide shows how the theory was developed from the bottom up, using the mathematical properties of causal sets and the physics of the action functional to formulate predictions, and how these predictions can be compared with heuristic theories formulated top-down in a variety of disciplines to explain their experimental observations. There follows a summary of results, categorized by discipline.
N1. Based on a simple model of brain tissue, a prediction was proposed in November 2011 in Section 3.8 of Pissanetzky (2011a) that the total length of dendritic trees in the brain must be optimally short. This prediction was independently confirmed by Cuntz, Mathy, and Häusser (June 2012), who proposed a 2/3 optimally short power law valid for all regions of the brain, for a wide diversity of dendritic trees, and even across species, with extensive experimental support. This law replaces a previously considered 4/3 power law, which was not optimally short.
The importance of this finding cannot be overemphasized. It is an application of the divide-and-conquer technique to the problem of brain function: it splits the problem in two, and it provides a critical confirmation of my prediction. One half of the problem is to explain why and how those connections are optimally short. Factors could be the need to save biological material, critical space in the brain, and energy wasted in the transmission of signals. The other half is to explain what follows from the fact that the connections are optimally short. Optimally short connections are a condition for the brain to be able to use causal logic. As I have proposed (see, for example, Conjecture K2 and Section 3.8 of Pissanetzky 2011a), the length of the connections corresponds to the value of the action functional for the information stored in memory, and the fact that the connections are optimally short confirms that the brain is using causal logic. I believe this happens as a side effect: the connections are made short for other reasons, not for causal logic, but causal logic is the result. The existence of neural cliques, discussed below, is yet another confirmation of the presence of causal logic in the brain, as neural cliques are the expected consequence of short connections.
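The correspondence between connection length and action can be made concrete with a minimal sketch. The following is an illustrative assumption, not taken from the source: the causal pairs are hypothetical, and the action is modeled as the total separation between causally linked elements when they are laid out in a linear order, so that shorter "wiring" means smaller action.

```python
def action(order, causal_pairs):
    """Sum of separations between cause and effect in a linear layout.

    This models connection length: each causal pair contributes the
    distance between its two elements in the given order.
    """
    pos = {element: i for i, element in enumerate(order)}
    return sum(abs(pos[b] - pos[a]) for a, b in causal_pairs)

# Hypothetical causal set: a -> b, a -> c, b -> d, c -> d
pairs = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]

print(action(["a", "b", "c", "d"], pairs))  # 1 + 2 + 2 + 1 = 6
print(action(["a", "d", "b", "c"], pairs))  # 2 + 3 + 1 + 2 = 8
```

Under this toy definition, the first layout keeps causally linked elements closer together and therefore has a smaller action, paralleling the claim that optimally short connections correspond to a minimized action functional.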
Unfortunately, there seems to be no awareness in Neuroscience of the existence of this second half of the problem of brain function.
Furthermore, the finding is one of the very few quantitative predictions of brain function available from Neuroscience.
N2. In 2005, in the Preface of Cortex and Mind, J. Fuster lists the seven most salient ideas that he defends in the book, and reveals that a paradigm shift has been occurring in neuroscientific thinking. The seven ideas describe the brain as a scale-free hierarchical network of cognits, where the cognits themselves are elements of cognitive information, also represented as networks, and where any cortical neuron can be part of many networks, and thus of many percepts, memories, or items of experience or knowledge. Fuster's extensive experimental experience in Neuroscience gives his ideas much weight.
N3. In 2010, theoretical neuroscientist K. Friston, inventor of Dynamic Causal Modeling (DCM), a technique used to interpret fMRI data, proposed a free-energy functional that accounts for action, perception, and learning, and reviewed some key theories of the brain that he attempts to unify from the free-energy perspective. Subsequently, in 2011, Daunizeau et al. reviewed DCM and described causal models of functional integration among brain regions.
N4. In 2006, L. Lin et al. discovered neural cliques and proposed that the brain can be understood as a scale-free hierarchical network. Neural cliques correspond to blocks in the theory of detailed dynamics.
N5. On September 22, 2012, renowned neuroscientist David Eagleman, author of Incognito, stated that biology, not free will, is what determines our decisions, and assembled a panel in Houston to discuss the scientific, moral, and legal consequences of that fact. In causal logic, decisions correspond to the invariant behaviors described by the hierarchies, and are described by the deterministic function E. Deterministic chaos, corresponding to the butterfly effect described in the theory, may give the appearance of free will.
B1. On September 19, 2012, Susanne Still et al. published results of their studies on motor proteins. They consider proteins as dynamical systems that interact with their environment and learn, predict, and respond by modifying their state according to the interactions. They separate the dynamical information into a fraction that is predictive and a remainder that is complex but does not improve predictive power, and thus corresponds to thermodynamic inefficiency. This discussion is nearly identical, given the proper equivalence of terms, to the discussion presented in the theory, particularly in the sections on the principle of least action and the laws of thermodynamics. However, causal sets and an action functional have not yet been considered. An assumed probability distribution is used to define a free-energy functional, making the theory approximate and heuristic.
E1. In September 2012, the National Human Genome Research Institute released results from the ENCODE project, supported by more than 500 scientists, in which DNA is described as "an extraordinarily complex network that tells our genes what to do and when, with millions of on-off switches." This description makes DNA look a lot like a computer program with data files. That is, a causal set.
E2. In August 2012, evolutionary biologist V. S. Lerner, who is an expert in evolution dynamics and studies regularities in the flow of evolutionary information, proposed an action functional and least-entropy trajectories to explain the observations.
A1. In his 2004 book On Intelligence, Jeff Hawkins proposed that the key to brain function and intelligence is the ability to find patterns in knowledge of the world and make predictions based on the patterns, and that teaching a computer how to find the patterns, rather than any specific tasks, would be enough for the computer to behave intelligently. Hawkins refers to the patterns as invariant representations of knowledge, and describes them as tree-like, scale-free hierarchical networks of "nodes" that match sensory input. He justifies the theory by arguing that the remarkably uniform arrangements observed in cortical tissue reflect a single algorithm that underlies processing in the cortex.
There is strong agreement between Hawkins' theory and the theory of detailed dynamics. Hawkins proposes the hierarchical networks, recognizes the need for invariance in the representations, requires that the representations match sensory input, and expects that the ability to find them should be enough to produce intelligent behavior. However, the two theories differ fundamentally, and precisely on the most critical points.
Hawkins' theory is heuristic. It attempts to approximate the behavior of physical matter by means of ad-hoc hypotheses, but ignores the very large body of existing knowledge about the behavior of matter represented by the principles and laws of Physics. His theory prescribes the existence of the networks, rather than deriving them from first principles, as I have done in my theory. His invariant representations depend on a number of parameters that must be adjusted to a best fit by humans. Properties of the representations are forced upon the system, rather than derived from theory. Hawkins' theory is narrow AI, not general intelligence.
By contrast, in the new theory of detailed dynamics, networks are a consequence, not a prescription. They and their properties are directly derived from first principles. They match sensory input, are invariant, and require no adjustable parameters or human intervention of any kind. And all that, not because I say so, but because it all follows from the principles. I sympathize with Hawkins' expectation that the ability to find regularities is sufficient for general intelligence, but I do not necessarily subscribe to the full strength of that claim. Instead, I believe that, once that ability is attained, the rest will follow from what we already know how to do.
CE1. U.S. patent 8,254,699, assigned to Google, Inc., was issued in August 2012. The inventors propose a system for automatic large-scale video object recognition with applications for the Internet of Things. The patent describes "feature vectors" and "rounds of dimensionality reduction." Given the proper equivalence of terms, this discussion closely parallels the discussion presented in the theory, particularly in the sections on the principle of least action and the laws of thermodynamics.
CE2. Scale-free hierarchical networks have been studied and used for decades in the object-oriented analysis and design of software. They correspond to classes, objects, and inheritance hierarchies. The software development cycle involves three very different processes, frequently concurrent, that correspond exactly to the execution of causal logic. Process 1 is writing the code, using information obtained from a problem statement and other sources. Process 2 is refactoring, the purpose of which is to make the code more understandable so as to allow further development. Refactoring corresponds to the minimization of the action functional: it is the search for the least-action permutations of the information, that is, of the existing code, although usually only one permutation is found. Process 3 generates the block system and the invariant hierarchies of behaviors, that is, the hierarchies of classes and objects.
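The search for least-action permutations described above can be sketched in miniature. Everything concrete here is an assumption made for illustration: the four hypothetical "code elements", the causal pairs among them, the action defined as the total separation of causally linked pairs, and the brute-force search over all orderings that respect causality.

```python
from itertools import permutations

# Hypothetical causal set over four code elements (cause -> effect).
pairs = [("read", "parse"), ("read", "log"), ("parse", "emit")]
elements = ["read", "parse", "log", "emit"]

def respects_causality(order):
    """True if every cause precedes its effect in the given order."""
    pos = {e: i for i, e in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in pairs)

def action(order):
    """Total separation of causally linked pairs in the order."""
    pos = {e: i for i, e in enumerate(order)}
    return sum(pos[b] - pos[a] for a, b in pairs)

# Enumerate all causally valid permutations and keep the least-action ones.
valid = [p for p in permutations(elements) if respects_causality(p)]
least = min(action(p) for p in valid)
print([p for p in valid if action(p) == least])
```

In this toy example three orderings respect causality, and the search singles out the one with the smallest action, mirroring the idea that refactoring selects one least-action arrangement of the existing code among the causally valid ones. Brute force is used only because the example is tiny; it is not meant as a practical minimization method.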
PHYS1. On October 19, 2012, T. Hartonen and A. Annila proposed that natural networks of all kinds should be described in terms of physical action, that the phenomenon of emergence follows from the principle of least action, and that systems in evolution are described by the least-action consumption of free energy. This publication follows another by T. K. Pernu and A. Annila, earlier in 2012. The authors use methods of statistical mechanics to describe the network system in a way that conserves energy and preserves causality. This work is a strong confirmation of the theory of detailed dynamics. However, the approach appears to be superficial because it subsumes numerous mechanistic details in its general concepts, as the authors themselves admit. Statistical mechanics cannot describe the dynamics in sufficient detail. The work is heuristic, the properties of causal sets are ignored, the action functional is not defined, and the anticipated hierarchical networks cannot be quantitatively calculated, as they are in my theory.
PHYS2. On March 8, 2012, A. Bérut et al. published an experimental confirmation of the 50-year-old Landauer principle, which states that the erasure of information from a memory is a dissipative process. They actually measured the energy released when a single classical bit of information is deleted, thus confirming that the principle is correct. The confirmation strengthens the ground on which my comments on the thermodynamics of causal logic stand.
PHIL1. On May 17, 2012, Hanns Sommer and Lothar Schreiber, following Heidegger's affirmation that pure calculations produce no ‘intelligence’, presented a formalization of the basic principles of cognition. They proposed that a "natural logic" follows from these principles, and that Mathematics is obtained from that logic only in a second step, by abstraction. These conclusions, which are of a philosophical nature, show a surprising degree of agreement, even in the finest detail, with the present theory, which is a theory of Physics. Causal logic is the proposed "natural logic." The central thesis of the present theory is that intelligence follows from Physics, that is, from the world, and not from Mathematics, which follows as an abstraction. The coincidence between the philosophical results and the theory of Physics is a very strong independent verification of both.
The volume and diversity of the research published in recent years, and even in recent months, is impressive. These authors do not know each other, do not cite each other, and do not seek to apply causal logic to their respective fields of inquiry, because they do not seem to be familiar with my work. Yet they systematically arrive at very similar, in many cases nearly identical, conclusions. The situation appears to be ripe for a general understanding of something that is so important and so ubiquitous in our world but has eluded science for so long. Is all of this a coincidence? I see it rather as a movement towards a general quantitative understanding of physical reality. I propose that the different conclusions reached by these researchers are overlapping parts of a big picture, one that we are only beginning to see, and that causal logic provides the structure that will help science assemble the whole picture.
Next reading: Historical overview of the binding problem.