Overview of Previous Work on Causal Logic

This article discusses work that can be considered preliminary to the theory presented here. It is important to know the nature of the problem being addressed, the outcome of previous attempts, and the stature of the researchers who made them.

 

There is no preliminary work by other authors that leads directly to Causal Logic (CL). This is consistent with CL being an experimental discovery, not a conclusion drawn from previous research. There is, instead, a very large body of work by authors who tried more traditional approaches to some CL-related problems and obtained very important but mostly negative results. Some highlights of that work are covered in this Section.

 

One line of thought, followed by Boole, Frege, Gödel, Church, and Turing, focused on the limits of mathematical deduction. We now know that limits to deduction do exist and that not all functions can be computed, and we generally accept the definition of computation as anything that a Turing machine can compute. An overview and references can be found in Section 1.2.2 of Artificial Intelligence (2010).

 

Another line of thought focused on the binding problem. A quick way to introduce the binding problem to those unfamiliar with it is the following statement, written by Douglas Hofstadter in his characteristic style on page 633 of Metamagical Themas:

 

The major question of AI is this: What in the world is going on to enable you to convert 100,000,000 retinal dots into one single word 'mother' in one tenth of a second?


What is going on is known as a process of binding. I propose that CL is the process that binds the retinal dots and makes sense of them.

 

Around 1850, physicist and physiologist Hermann von Helmholtz, in his studies of vision, concluded that a process of inference was needed to explain the processing of visual images in the brain. He named that process "unconscious inference," because we are not aware that it is happening. He appears to have been the first person to correctly identify what was later called the binding problem. I propose that CL is the same as unconscious inference.

 

In 1912, Bertrand Russell, who wanted to build up knowledge from experience and hoped to construct the mathematical world from first-order predicate calculus, confronted the binding problem and initiated a line of thought that was pursued for decades by mathematical logicians, notably Wittgenstein in both his early and his later work, and that finally proved to have no solution in the context of that calculus. A more detailed account of the history of the binding problem and of the notion of mind as machine in the 20th century, including references, is found in Sections 1, 2, and 3 of a 2011 paper by evolutionary biologist Stuart Kauffman.

 

The modern view of the same old binding problem is the phenomenon known as self-organization, routinely observed in all kinds of dissipative physical systems and the object of intense scrutiny in Complex Systems Science (CSS). A dynamical system is said to self-organize when it suddenly and unexpectedly evolves towards an organized state where recognizable structures and behaviors are present. But self-organization, too, remains unexplained. Mathematician and philosopher S. Barry Cooper refers to it as "a case of complexity outstripping computer resources and human ingenuity." Cooper focuses on computability and definability, and writes about "the smallness of the computable world in relation with what we can describe." Authors also refer to self-organization as a case where "you get something for nothing," or even as "magic." The fact is, the binding problem remains unsolved to this day. I propose that CL has finally solved the binding problem.

 

Physics has many laws, but the supreme law of nature, the law of laws, is the principle of symmetry. The term symmetry refers to transformations of a physical system that leave the system unchanged. For example, the laws of motion of a billiard ball are the same in America as they are in Antarctica. In short, the principle states that symmetries in physical systems lead to conservation laws and the existence of conserved quantities, which appear as functions or structures and have the property of remaining invariant under the transformations. This fundamental principle is of critical importance in modern theoretical Physics.
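
To make this concrete, here is a standard textbook example, added only as an aid to the reader and not specific to this theory. A free particle of mass $m$ has the Lagrangian

    L = \tfrac{1}{2} m \dot{x}^{2},

which is left unchanged by the translation $x \to x + a$. The Euler-Lagrange equation then reads

    \frac{d}{dt} \frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x} = 0,

so the momentum $p = \partial L / \partial \dot{x} = m \dot{x}$ is constant in time: the translation symmetry appears as a conserved quantity.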

 

In 1918, Emmy Noether formalized the principle and published her now-famous theorem, in which she followed a constructive approach and proved the existence of conserved quantities by actually calculating them. Her theorem has played, and still plays, a fundamental role in the development of modern theoretical Physics, but it has been largely ignored in other sciences because it applies only to a certain class of differential equations that appear in the variational analysis of Lagrangian systems.
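
For reference, the simplest textbook form of the theorem, stated here as an aid to the reader and not quoted from the main paper: if the Lagrangian $L(q, \dot{q}, t)$ is left unchanged by the infinitesimal transformation $q \to q + \epsilon K(q)$, then the quantity

    Q = \frac{\partial L}{\partial \dot{q}} \, K(q)

satisfies $dQ/dt = 0$ along every solution of the Euler-Lagrange equations. The symmetry does not merely imply that a conserved quantity exists; it delivers $Q$ explicitly, which is the constructive aspect mentioned above.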

 

But differential equations can be discretized and expressed as infinitesimal causal sets (Section 3.4 of the main paper), from which the original equations can be recovered as a limiting case. Noether's theorem is therefore really a theorem about causality, a consequence of CL. I believe it can be proved from CL in its entirety, and I have proved one simple subcase of it in Section 2.6 of the main paper. I propose that CL generalizes Noether's theorem.
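
As a toy illustration of what discretizing an equation into an ordered set of cause-effect pairs can mean, consider an Euler discretization of dx/dt = -x, in which each state causes the next. This sketch is mine, for illustration only; it is not the infinitesimal causal set construction of Section 3.4 of the main paper.

    # Toy sketch only: Euler discretization of dx/dt = -x viewed as a chain of
    # cause-effect pairs. Each state is an element of the set, and events[m]
    # causally precedes events[n] whenever m < n.
    def f(x):
        return -x

    dt, x = 0.1, 1.0
    events = [x]
    for n in range(50):
        x = x + dt * f(x)     # the effect caused by the previous state
        events.append(x)

    print(events[-1])         # about 0.0052; the exact solution exp(-5) = 0.0067
                              # is recovered in the limit dt -> 0

The original equation is recovered as the limiting case dt -> 0, which is the direction of the claim made above.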

 

Noether's theorem is also a theorem about self-organization, because conserved quantities are self-organized structures. It actually solves the binding problem in one case, a fact that few people have realized. The theorem applies only to conservative systems, not to dissipative systems, a fact that has profound consequences for AGI.

 

A physical system that is dissipative and has excess energy will dissipate that energy and settle into a stationary state, also known as an attractor. At that point dissipation stops, simply because there is no more free energy, and the system becomes conservative. Noether's theorem then applies, and, if symmetries exist, self-organized, conserved structures make their unexpected appearance.
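
A minimal numerical sketch of the first step, dissipation to an attractor; the oscillator, its parameters, and the integrator are my illustrative choices, not taken from the main paper.

    # Damped harmonic oscillator x'' + c*x' + k*x = 0, integrated with small
    # explicit Euler steps. The mechanical energy decreases (dissipation) until
    # the trajectory settles near the attractor x = 0, v = 0.
    k, c, dt = 1.0, 0.5, 0.001
    x, v = 1.0, 0.0
    for step in range(200_000):          # integrate up to t = 200
        a = -k * x - c * v               # restoring force plus damping
        x, v = x + dt * v, v + dt * a

    energy = 0.5 * v**2 + 0.5 * k * x**2
    print(x, v, energy)                  # all vanishingly small: the system
                                         # has reached the attractor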

 

But symmetries always exist in a causal set. If a differential equation is discretized as a causal set, the causal set will always have symmetries and will always give rise to self-organized, conserved structures. All of this, including the convergence to attractors, is predicted by CL. I propose CL as the solution to both the binding problem and the self-organization problem.
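
The kind of symmetry meant here can be shown with a tiny example of my own, chosen only for illustration: a causal set in which two incomparable causes share one effect admits a permutation that leaves the causal relation unchanged.

    from itertools import permutations

    # Toy causal set: elements {0, 1, 2} with relations 0 -> 2 and 1 -> 2,
    # so 0 and 1 are incomparable. A symmetry is a permutation of the elements
    # that maps the relation set onto itself.
    relations = {(0, 2), (1, 2)}
    elements = [0, 1, 2]

    symmetries = [p for p in permutations(elements)
                  if {(p[a], p[b]) for a, b in relations} == relations]

    print(symmetries)   # [(0, 1, 2), (1, 0, 2)]: besides the identity,
                        # swapping the two incomparable causes is a symmetry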

 

In Computer Science, most of the attention has focused on algorithms, their development, and their complexity. But transformations of algorithms, such as refactoring and the object-oriented analysis of software, where binding takes place and conserved quantities are obtained, have received relatively little theoretical attention and are still considered something that human developers do better than machines. Tools have been developed, of course, but they must be used under human control.

 

Still in Computer Science, it is useful to mention Solomonoff's paradox, which is highly relevant to self-programming and to AGI in general. In his studies of machine learning, Solomonoff (2010), who did not intend to produce a paradox, writes:

 

A heuristic programmer would try to discover how he himself would solve (some problem) - then write a program to simulate himself.

 

When Solomonoff writes "would try to discover," he is referring to a certain process that takes place in the programmer's brain - perhaps with the help of some notes on paper. When he writes "how he himself would solve (the problem)," he is referring to the result or output of that process. And when he refers to "a program to simulate himself," he is referring to a different process, the program. A program is an algorithm, a set of rules that explain how to solve the problem, but it is not the process that found those rules. Solomonoff thus identifies two different processes: the process that discovers the process that solves the problem, which in this theory corresponds to CL, and the process that actually solves the problem, the program, which in the theory corresponds to the algorithm inferred by CL. When the programmer writes the program, she is only copying the algorithm from her brain to the computer. The process that discovered the algorithm remains hidden in her brain and is not in the program. CL is not in the algorithm. Paradoxically, the more the heuristic programmer endeavors to program her intelligence, the farther away she gets from that goal. Systems that display this type of behavior, where one process controls another, are known as host-guest systems; see Pissanetzky (2010a).

 

The effect of causal inference (CI) on a sparse canonical matrix representing a causal set is to block-diagonalize it. Techniques and approximations for block-diagonalizing sparse matrices are well known. Several of them are discussed in Pissanetzky (1984). One of them is reduction (Section 5.3 ibidem), where the associated digraph is partitioned into strong components and the associated causal set into blocks. Another is the Cuthill-McKee algorithm (Section 4.6 ibidem), which in its reverse form can be used to reduce the profile of the sparse matrix. Both algorithms compact most elements of the matrix near the diagonal and could be regarded as processes of self-organization and binding. They may find an application as preprocessors for CI. However, there are two problems. First, the profile is a global function of the entire matrix, unlike the functional proposed here, which is local and can be easily implemented on a massively parallel computer. Second, invariance of the blocks under transformations is not proved, because the set of permutations that defines the symmetry and the transformation is never obtained.
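
A minimal sketch of both techniques using off-the-shelf routines from scipy.sparse.csgraph; the 5x5 pattern below is an arbitrary toy example, not a canonical matrix from the theory, and the bandwidth printed at the end is only a crude proxy for the profile.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import reverse_cuthill_mckee, connected_components

    # Arbitrary toy sparsity pattern with one nonzero far from the diagonal.
    A = csr_matrix(np.array([
        [1, 0, 0, 0, 1],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 1, 1, 0],
        [1, 0, 0, 0, 1],
    ]))

    # Reverse Cuthill-McKee: a symmetric permutation that tends to move the
    # nonzero elements toward the diagonal.
    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    B = A[perm, :][:, perm]

    def bandwidth(m):
        rows, cols = m.nonzero()
        return int(np.max(np.abs(rows - cols)))

    print(bandwidth(A), bandwidth(B))   # 4 before, 1 after the permutation

    # Reduction: partition the associated digraph into strong components,
    # which here yields the two blocks {0, 4} and {1, 2, 3}.
    n_blocks, labels = connected_components(A, directed=True, connection='strong')
    print(n_blocks, labels)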

 
