All Theories Are Part of The Theory of Information

 

This is a Monday morning brain dump to get the juices going. 

 

"Contingencies" is a difficult concept to fully elaborate in a useful manner.  A contingent thing- an event, a structure, a set of information - is such a thing by the fact that it has no existence outside of its contingent relationships.  In some sense it's the age old rhetorical question, "if a tree falls in a forest and no one is around does it make a noise?"    The key in that question is "noise."  Noise is a contingent concept in both the common sense idea as well as any physical sense.  Sound (and other sensory concepts) is contingent in that there must be a relation between the waves/particles (and their possible sources) and an observer.  Without the observer one cannot classify/name/label a sound, a sound.  A sound literally is the effect on the observer.  Hopefully that is a useful introduction to a basic idea of contingency. 

 

The definitions muddy when we consider contingency in a deeper and broader way, such as in discussing human behavior or economics.  Over the eons humans have attempted to create and codify contingencies.  The codification is really more an attempt to reduce the noisy complication of the wider universe into acceptable and useful contingencies (laws, rules, guidelines, best practices, morals, ethics, social norms, standards, etc.).  The sciences and humanities also codify, but wrap these efforts in slicker packages of "discovering the natural laws" and figuring out "how to best live."

 

These published codifications are NOT the contingencies they purport to represent, but they are contingent in and of themselves.  In these broader contexts, contingencies refer to and are part of a complex network of relationships.  Whether expounded as physical or chemical models, philosophic frameworks, first-order logics, or computer programs, all of these systems are contingent systems in the sense of their basis in previous systems, their relations to associated phenomena, and the substrate of their exposition and execution.  A computer program's representation of information and its utility as a program is highly contingent on the computer hardware it runs on, the human language it's written in, the compiler logic used to encode it for the computer, the application of the output, and so on.

 

The latest science in computer theory, the social sciences, neuroscience, quantum physics, and cosmology (and chemistry...) has somewhat converged on very thorny (challenging to the intuition) ideas of superpositions, asymmetry/symmetry, networks (neural and otherwise), and a more probabilistic mathematics.  These are all models and sciences of contingency, and essentially a unified theory of information, which in turn is a unified theory of networks/graphs (or geometry, for the 19th-century minded).  The core phenomenon that makes these ideas useful as explanations is missing information: how reliable a probabilistic statement can be made about contingent things (events, objects, etc.).

 

The building blocks sometimes employed in these theories involve Bayesian models, the assumption of the real/actual existence of space and time, and concepts of simple logic ("if then") and other first-order logic.  These are often chosen as building blocks because of their obvious common-sense, intuitive appeal.  However, upon inspection even these assumptions add a layer that is severely off from the actual contingencies being studied, and the building-block assumptions are themselves highly contingent.  The "model-reality distance" and this "contingent in and of themselves"-ness quickly, exponentially, erode the relevance of the model.

 

Consider even a basic notion of "if then" thinking/statements in a cross-substrate contingent situation, such as a simple computer program running on a basic, common computer.  A program as simple as "X = 1 + .0000000000000000000000000001; if X equals 1 then print 'The answer is definitely 1!'" is going to print the THEN statement even though, logically and symbolically, the condition is not true (a human can obviously tell).  Logically speaking, the program should print nothing at all; practically (in the world of daily life) the program prints the statement and everything is "ok," on average.  The abstract "if then" statement is contingent on the substrate that implements/executes/interprets it (the computer OS and hardware).  The contingencies build up from there: the language one implements the statement in matters, as does the ability of any observer or implementing entity to understand left-to-right notation, mathematical statements, variable replacement, etc.
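
As a minimal sketch of that point (assuming 64-bit IEEE-754 floating point, which is what most common hardware and languages use), the comparison below succeeds even though the symbolic math says it shouldn't:

    # The substrate (double-precision floats) cannot represent the tiny
    # difference: machine epsilon is roughly 2.2e-16, far larger than 1e-28.
    x = 1 + 0.0000000000000000000000000001   # symbolically, x != 1

    if x == 1:
        print("The answer is definitely 1!")   # this line runs anyway

The abstract statement and its execution live on different substrates, and the substrate wins.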

 

An important note: these issues of contingency are NOT resolved by further If-Then statements.  That is, we cannot fix the shortcoming of If-Then thinking just by building up more and more If-Then statements.  The If, the Then, and the references they check (what is the X we're testing, and is it really the X?), as well as the Then and its command, all suffer from the same infinite regress as the original simple if-then statement we questioned.  How does anything definitively establish that X is in fact the thing the statement/logic is checking for?

 

The main idea here is that in all attempts to model or identify contingencies, information goes missing, or the information was never to be had to begin with.  This is a key convergent finding in mathematics (the incompleteness theorems, chaos theory), computer science (the halting problem, computational irreducibility, P vs. NP), quantum physics (the uncertainty principle), biology (complexity theory), and statistics (Bayesian models, etc.).  How important that missing/unknown information is depends on the situation at hand: what is the tolerance for error/inaccuracy?  In high-frequency economic trading, the milliseconds and trade amounts matter a lot.  In shooting a basketball, there's a fairly large tolerance for mismodeling.  In detecting the Higgs boson, the margin of tolerance approaches the Planck length (the smallest physical distance we know of...).  The development of probability theory allows us to make useful statements about contingent situations/things.  The more we can observe similarly behaving/existing contingent things, the more useful our probability models become.  EXCEPT... sometimes not.  The Black Swan.
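
A toy sketch of the "more observations make the model more useful" claim, using a simple Beta-Bernoulli update (the function name, priors, and numbers here are illustrative assumptions, not anything from the original argument):

    def beta_update(successes, failures, prior_a=1.0, prior_b=1.0):
        """Posterior mean and a rough width after observing successes/failures."""
        a = prior_a + successes
        b = prior_b + failures
        mean = a / (a + b)
        # Variance of a Beta(a, b) posterior; its square root is a rough width.
        var = (a * b) / ((a + b) ** 2 * (a + b + 1))
        return mean, var ** 0.5

    print(beta_update(7, 3))        # ~ (0.67, 0.13)  -- vague after 10 observations
    print(beta_update(7000, 3000))  # ~ (0.70, 0.005) -- tight after 10,000

Until, of course, a black swan: an event the model assigned essentially zero probability because no amount of past observation had encoded it.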

 

If-Then and similar logic models of thinking are insufficient as explanatory reference frames.  Per the above, they simply do not account for the rich effects of very small amounts of missing information or misinformation.  Which brings us to the other building blocks almost universally used in science: space and time.  These are robust common-sense and, in some cases, scientific concepts, but they are not fundamental (in that they cannot escape being contingent in and of themselves).  Time is contingent on observers and measuring devices; it literally is the observable effect of information encoding between contingent events, and it does not have an independent existence.  Space is more difficult to unwind than time, in that it is a very abstract concept of relative "distance" between things.  This is a useful concept even at the lowest abstraction levels.  However space, as physical space, is not fundamental.  Instead, space should be reconceived as a network distance between contingent subnetworks (how much of an intervening network needs to be activated to relate two subnetworks), as in the sketch below.  Spacetime is the combined, observable (yet RELATIVE to the contingent) distance in total information between contingent things (events, objects, etc.).
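
A toy sketch of "space as network distance": how many intervening links have to be traversed to relate two subnetworks.  The graph, the node names, and the hop-count metric are illustrative assumptions, not a claim about how such a distance must be defined:

    from collections import deque

    def network_distance(graph, source, target):
        """Breadth-first search: fewest intervening links between two nodes."""
        seen = {source}
        queue = deque([(source, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == target:
                return dist
            for neighbor in graph.get(node, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append((neighbor, dist + 1))
        return float("inf")  # no contingent path relates the two subnetworks

    # Two "subnetworks" joined by a sparse bridge of intermediate relations.
    graph = {
        "earth": ["satellite"],
        "satellite": ["earth", "relay"],
        "relay": ["satellite", "mars"],
        "mars": ["relay"],
    }
    print(network_distance(graph, "earth", "mars"))  # 3

On this picture, adding more encoded relations (more edges) between two subnetworks literally shrinks the "distance" between them.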

 

This is important!  Accepting common notions of If-Then logic and spatio-temporal elements prevents the convergence of explanatory models (which, if they are really explanatory of reality, should converge!).  A unified notion of spacetime as information distance between networks brings together theories of behavior, learning, neural networks, computer science, genetics, etc. with quantum mechanics and cosmology.  The whole kit and caboodle.  It also explains why mathematics continues to be unusually effective in all the sciences... Mathematics is a descriptive, symbolic language of relations and contingency.  Converging all theories upon a common set of building blocks does not INVALIDATE those theories and models or their utility, nor does it make them unnecessary.  Quite the opposite.  Information IS the question at hand, and HOW it is encoded is exactly what contingencies are embodied as.  Humans as humans, not as computers, are what we study in human behavior.  So we need theories of human behavior.  Planets, atoms, computers, numbers, ants, proteins, and on and on all have embodied contingencies that explanation requires be understood in nuanced but connected ideas.

 

Once enough models of the relations of contingent things are encoded in useful ways (knowledge! computer programs/simulations/4D printing!!), spacetime travel becomes more believable... not like the 1950s movies, but via simulation and recreated/newly created, ever-larger universes with their own spacetime trajectories/configurations.  That's fun to think about, but there's actually a much more serious point here.  The more information that is encoded between networks (the solar system, humans and their machines, etc.), the less spacetime (per my definition above) is required to go from one subnetwork of existence (planet Earth and humanity) to another (Mars and Martianity), etc.  A deep implication here is a practical answer to why there is a speed of light and whether it can be broken (it can, and has: http://time.com/4083823/einstein-entanglement-quantum/).  The speed of light is due to the contingencies between massive networks: anything more sophisticated than a single electron has such a huge set of contingencies that for it to be "affected" by light or anything else, those effects must affect the contingencies too.  This is the basis of spacetime: how much spacetime is engaged in "affecting" something.

 

This is not a clever sci-fi device, nor a semantic, philosophic word play.  Information theory and network theory are JUST beginning and are rapidly advancing both theoretically (category theory, information theory, graph theory, PAC learning, etc.) and practically (deep learning, etc.).  Big data and machine learning/deep learning/learning theory are going to end up looking EXACTLY like fundamental physics theory: all theories of getting by with missing information, or of the limits to what can be known by any entity smaller than the whole universe.  To the universe itself, the universe is the grand unified theory, and explanations are unnecessary.