Maxwell's Demon

Recalling what we learned from the Carnot cycle, any flow of energy into (or out of) a closed physical system causes not only the system's energy but also its entropy to increase (or decrease). The system transitions from a state of lower (higher) energy and entropy to another of higher (lower) energy and entropy. Since energy and entropy are functions of state, the system cannot, by itself, return to its original state: the transformation is irreversible. If the system is then left alone, its energy will remain the same, and if it is conservative its entropy will also remain the same. But if internal flows of energy exist in the system, as would be the case for a closed system with a dynamics, then the entropy will increase on its own and will stabilize only when the flows of energy stop.
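
One standard way to make this recap quantitative is the Clausius inequality: when a small amount of heat δQ enters a system at temperature T, the change in its entropy S satisfies

    dS ≥ δQ / T,

with equality holding only for an idealized reversible process; any real, irreversible flow produces additional entropy that the system cannot shed by itself.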

Information, too, has energy. This is Landauer's principle, now some fifty years old. The energy of information was confirmed by direct measurement for the first time only recently, in March 2012. Information has entropy as well. The entropy of information is a measure of the degree of uncertainty in the information, that is, the degree to which the information departs from being perfectly causal and deterministic. The lessons from the Carnot cycle apply to information too: any flow of information will cause a corresponding increase in entropy, and therefore in uncertainty. Which brings me to Artificial Intelligence (AI).
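
To put rough numbers on both statements, here is a small Python sketch of my own, not part of the argument itself: Landauer's principle bounds the energy cost of erasing one bit at k·T·ln 2, and Shannon entropy is the usual quantitative measure of the uncertainty just described.

    import math

    k_B = 1.380649e-23                       # Boltzmann constant, J/K
    T = 300.0                                # room temperature, K

    # Landauer's principle: erasing one bit dissipates at least k_B * T * ln 2 of energy.
    landauer_bound = k_B * T * math.log(2)
    print(f"Landauer bound at 300 K: {landauer_bound:.2e} J per bit")   # ~2.9e-21 J

    def shannon_entropy(probs):
        """Shannon entropy in bits: the uncertainty carried by a distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(shannon_entropy([1.0]))        # 0.0 bits: perfectly deterministic, no uncertainty
    print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a maximally uncertain binary outcome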

An AI machine is an open physical system: it has memory, and it interacts with its environment. It also has energy and entropy. When the AI interacts with its environment -- for example, when a sensor or a sensory organ detects light or sound -- the AI stores information from the interaction in its memory. The acquisition of information causes the state of the AI to transition from one of lower energy and entropy to another of higher energy and entropy. This new state is more uncertain than the original state. The more the AI learns, the more uncertain it becomes.
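
As a toy illustration of that last sentence (my own, and only meant to make the bookkeeping concrete): if each sensor reading carries some noise, the total uncertainty held in memory grows with every reading the system keeps.

    import math
    import random

    def shannon_entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    p_noise = 0.3                                             # assumed noise level per reading
    bits_per_reading = shannon_entropy([p_noise, 1 - p_noise])

    memory = []
    for target in (10, 100, 1000):
        while len(memory) < target:
            memory.append(1 if random.random() < p_noise else 0)   # store a noisy reading
        # For independent readings, the uncertainty of the stored record grows linearly.
        print(f"{len(memory):>4} readings, ~{len(memory) * bits_per_reading:.0f} bits of uncertainty")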

The same happens to a computer that is being programmed by a developer. A computer with a program is a physical system with a dynamics determined by the program. But the more program code the developer installs on the computer, the more uncertain the computer-program system becomes. It is not possible to make the system more certain, or to cause it to self-organize, by way of programming, irrespective of how smart or imaginative the programming is, even if the computer-program system is claimed to be an AI system. This statement constitutes proof that self-organization, and the causal logic I am proposing, are uncomputable.

But the uncomputability of self-organization does not make AI impossible -- after all, our brains work. Does it make AI more difficult? No, not even that. It actually makes AI much easier. Consider an AI system that is learning, meaning that it is acquiring more information, more energy, more entropy, and more uncertainty. The problem is with the entropy/uncertainty. We would like to remove the excess entropy. Removing entropy from the AI can only be achieved by interaction with another system, say a computer or a separate program on the same computer, provided the interaction involves a flow of energy and entropy from the AI to that other system. This last statement carries an important conclusion for AI: an AI system must necessarily be a host-guest system, consisting of two parts, the host and the guest. The useful information is the guest; the host acts on the guest to remove its uncertainty. I have proposed this idea before [Pissanetzky(2010A)]. In fact, nearly all attempted AI systems are host-guest systems: they learn and then they process the information in some way. But the devil is in the details, and this idea brings us to the next, much bigger puzzle.
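
The host-guest split can be sketched in a few lines of Python. This is only my toy illustration of the division of labour, not the system proposed in [Pissanetzky(2010A)], and the "keep what repeats, discard what does not" rule is a crude stand-in for a real way of telling certainty from uncertainty:

    import random
    from collections import Counter

    class Guest:
        """The useful information: a growing record of what the AI has learned."""
        def __init__(self):
            self.memory = []
        def learn(self, reading):
            self.memory.append(reading)

    class Host:
        """Acts on the guest to remove uncertainty without erasing everything."""
        def remove_uncertainty(self, guest):
            counts = Counter(guest.memory)
            # Crude stand-in criterion: regularities repeat, noise does not.
            guest.memory = [r for r in guest.memory if counts[r] > 1]

    guest, host = Guest(), Host()
    for _ in range(50):
        guest.learn("light -> shadow")                      # a causal regularity
        guest.learn(f"noise-{random.randrange(10**9)}")     # an unrepeatable reading
    host.remove_uncertainty(guest)
    print(len(guest.memory))   # ~50: the regularity survives, the noise has left the system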

The amount of entropy that an outgoing flow of energy removes from the AI is variable. At one end of the spectrum, it would be possible to remove energy by erasing all the information gained by learning. Doing that would certainly reduce the entropy and uncertainty of the AI, but it would also eliminate the useful information. At the other end, it would be desirable to remove entropy preferentially and leave the useful information behind, without caring about the energy. This would require a new type of Maxwell's demon, one that can tell information from entropy, or certainty from uncertainty, and allow the entropy but not the information to pass -- and, of course, do so without violating the Second Law of Thermodynamics. A transformation like that would preserve the causal, deterministic behavior of the system, which is the certainty, while eliminating as much uncertainty as possible.

A transformation like that is known as a behavior-preserving transformation. It is commonplace in Software Engineering, where it is known as refactoring. Every developer practices refactoring nearly all of the time. By definition, refactoring is a transformation of the code -- that is, of the useful information, the information that determines the dynamics of the computer -- such that the behavior of the program is preserved. This remark suggests that the Maxwell's demon we are seeking does exist. The trouble is that, so far, it appears to exist only in the brain.
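
A minimal example of what such a behavior-preserving transformation looks like in practice (the function itself is invented purely for illustration): the refactored version removes the duplication, and with it some of the "uncertainty" in the code, while a quick check confirms that the behavior is untouched.

    # Before refactoring: redundant, repetitive code.
    def shipping_cost_before(weight_kg, express):
        if express:
            if weight_kg <= 1:
                return 10.0 + 5.0
            else:
                return 10.0 + 5.0 + 2.0 * (weight_kg - 1)
        else:
            if weight_kg <= 1:
                return 10.0
            else:
                return 10.0 + 2.0 * (weight_kg - 1)

    # After refactoring: the duplication is gone, the behavior is identical.
    def shipping_cost_after(weight_kg, express):
        cost = 10.0 + 2.0 * max(0.0, weight_kg - 1)
        return cost + (5.0 if express else 0.0)

    # The transformation preserves behavior: both versions agree on every tested input.
    for w in (0.5, 1.0, 3.0):
        for e in (False, True):
            assert shipping_cost_before(w, e) == shipping_cost_after(w, e)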

My contribution to Artificial Intelligence is the discovery, by experimental observation and artificial replication, of the Maxwell's demon in question [Pissanetzky(2011A)]. I have published on this subject and provided the reference, but I will expand on it further in another article.