Data, Mappings, Ontology and Everything Else

Caveat: I have not connected all of this in some incredibly conclusive mathematical-proof way (an impossibility). The concepts below are related semantically, conceptually, and process-wise (to me), and there is a lot of shared math. It is not a flaw of the thinking that the connections stop there, or that I may lack the ability to complete them. In fact, part of my thinking is that we should not attempt to fill in all the holes all the time. Simple heuristic: first be useful, finally be useful. Useful is as far as you can get with anything.

Exploring the space of all possible configurations of the world tends to surface what's connected in reality. (More on the entropic reality below.)

— — — — — — — -

First…. IMPORTANT.

Useful basic ideas linking logic and programming (lambda calculus), known as the Curry-Howard correspondence:

Propositions <-> Types

Proofs <-> Programs

Simplifications of Proofs <-> Evaluation of Programs <-> Exploding The Program Description Into All Of Its Outputs
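The third correspondence above can be made concrete. Below is a minimal sketch, assuming nothing beyond plain Python: Church numerals encode numbers as programs, and evaluating (beta-reducing) an expression like PLUS TWO THREE "explodes" the program description into its output. All names here (ZERO, SUCC, PLUS, to_int) are illustrative choices, not a standard library.

```python
# Church numerals: numbers represented as programs (lambdas).
ZERO = lambda f: lambda x: x                     # λf.λx. x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))  # λn.λf.λx. f (n f x)
PLUS = lambda m: lambda n: m(SUCC)(n)            # λm.λn. m SUCC n

def to_int(church):
    """Collapse a Church numeral to a native int by counting applications."""
    return church(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)

# Evaluating the program description yields its output.
print(to_int(PLUS(TWO)(THREE)))  # 5
```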

— — — — — — — — — — -

Data Points and Mappings

A data point is either reducible via lambda calculus (a fully determinate/provable function) or probabilistic (e.g. a wavefunction).

Only fully provable data points are losslessly compressible to a program description.
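To make the compressibility claim concrete: a fully provable data point, like "the first n squares," is losslessly compressed to the short program that generates it, however large the exploded data gets. A minimal sketch in Python, with an illustrative function name of our own:

```python
# A fully determined data point compresses losslessly to its program
# description: a few lines reproduce arbitrarily much data on demand.

def program_description(n: int) -> list[int]:
    """The short program that *is* the data point: the first n squares."""
    return [k * k for k in range(n)]

data_point = program_description(1_000)  # "explode" the description into data
print(data_point[31])  # 961: any output is recoverable from the program
```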

Reducible data points must be interpreter invariant. Probabilistic data points may or may not be interpreter dependent.

No physically observed data points are reducible — all require probabilistic interpretation and links to an interpreter, frames of reference, and measurement assumptions. Only mathematical and logical data points are reducible, and even some mathematical and logical data points are probabilistic.

Each data point and data type can be numbered, similar to Gödel numbering, and various tests for properties and for uniqueness/reductions can be devised. Such a numbering scheme should be UNIQUE: each data point gets its own number, and each data type (the class of all data points that share the same properties) gets identifying properties/operations. E.g., a numbering scheme that yields a one-to-one mapping with the countable numbers lets the normal properties of the integers be used to reason about data points and data types. (The data points that are integers should probably just be considered the integers themselves….)
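One way to sketch such a unique numbering is the Cantor pairing function, a standard bijection between pairs of naturals and the naturals; a (type tag, value) data point then gets exactly one number and can be recovered from it. This is a toy illustration of the idea, not a proposal for the actual scheme, and the function names are ours.

```python
def pair(a: int, b: int) -> int:
    """Cantor pairing: a bijection N x N -> N."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z: int) -> tuple[int, int]:
    """Inverse of pair(): recover the original pair from its number."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    b = z - t
    return w - b, b

# Number a data point as (type_tag, value); integers can just be themselves.
n = pair(3, 41)               # the data point "value 41 of type 3"
print(n, unpair(n))           # the numbering is invertible, hence unique
```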

A universal data system can be devised by maintaining an index of all numbered data points… indexed by data point, by data type, and by valid mappings (logical/provable mappings and probabilistic mappings — encoded programs to go from one data point to another). This system is uncountable and non-computable, but reductions are possible (a somewhat obvious statement). Pragmatically, the system should shard and cache things based on frequency of observation of data points/data types: the most common things are "cached" and the least common things are in cold storage and may be computed on demand.
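A toy sketch of such an index, assuming nothing beyond the Python standard library: data points keyed by number, mappings stored as programs (plain callables here), and a "hot" shard chosen by observation frequency. The class and method names (UniversalIndex, put, map, get, hot) are invented for illustration.

```python
from collections import Counter

class UniversalIndex:
    """Toy universal data system: numbered data points, program mappings,
    and frequency-based selection of the 'hot' (cached) shard."""

    def __init__(self, hot_size: int = 2):
        self.cold: dict[int, object] = {}                   # all data points
        self.mappings: dict[tuple[int, int], object] = {}   # (src, dst) -> program
        self.hits: Counter = Counter()                      # observation counts
        self.hot_size = hot_size

    def put(self, number: int, value: object) -> None:
        self.cold[number] = value

    def map(self, src: int, dst: int, program) -> None:
        """Register a program that maps one data point to another."""
        self.mappings[(src, dst)] = program

    def get(self, number: int):
        self.hits[number] += 1      # every observation updates frequency
        return self.cold[number]

    def hot(self) -> list[int]:
        """Most frequently observed data points: the 'cached' shard."""
        return [n for n, _ in self.hits.most_common(self.hot_size)]

index = UniversalIndex(hot_size=1)
index.put(7, "seven")
index.put(8, "eight")
index.map(7, 8, lambda v: v)        # a trivial "program" between points
index.get(7); index.get(7); index.get(8)
```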

Why Bother With This

We bother to think through this in order to create a data system that can be universally used and expanded for ANY PURPOSE. Humans (and other systems) have not necessarily indexed data, and the mappings between data, in efficient, maximally reduced forms. To deal with things in the real world (of convention, language drift, species drift, etc.) there need to be mappings between efficient and inefficient representations — and the line between them is not clear… almost all measures of efficiency on probabilistic data points and "large" data points are temporary, as new efficiencies are discovered. Only the simplest logical/mathematical/computational data points are maximally efficiently storable/indexable.

Beyond that… some relationships can only be discovered by a system that has enumerated as many data points and mappings as possible, in a way that they can be systematically observed/studied. The whole of science suffers because there are too many inefficient category mappings.

Mathematics has always been thought of as a potential universal mapping between all things, but it too has suffered from syntax bloat, weird symbolics, strange naming, and the endless expansion of computer-generated theorems and proofs.

It has become more obvious, with the convergence of thermodynamics, information theory, quantum mechanics, computer science, and Bayesian probability, that computation is the ontological convergence. Anything can be described, modeled, and created in terms of computation. Taking this idea seriously suggests that we ought to build knowledge systems, information retrieval, and scientific processes from a bottom-up computational approach.

And so we will. (Another hypothesis is that everything tends towards more entropy/lowest energy — including knowledge systems and computer networks — and so they will tend to standardize mappings and root out expensive representations of data.)

p.s.

it’s worth thinking through the idea that in computation/information

velocity = information distance / rule applications (steps).

acceleration etc can be obtained through the usual differentiation, etc.

This is an important note because you can basically find the encoding of all physical laws in any universal computer (given enough computation…).
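The formula above can be played with concretely. True information distance (in the Kolmogorov sense) is uncomputable, so the sketch below substitutes a crude zlib-based proxy, and a made-up string-rewrite system stands in for the rules; both are illustrative assumptions, not the real quantities.

```python
import zlib

def info_distance(a: str, b: str) -> int:
    """Crude proxy: extra compressed description b needs on top of a."""
    ca = len(zlib.compress(a.encode()))
    cab = len(zlib.compress((a + b).encode()))
    return cab - ca

def step(state: str) -> str:
    """One rule application of a toy rewrite system (our own invention)."""
    return state.replace("ab", "ba", 1) if "ab" in state else state + "a"

start = "ab" * 20
state = start
steps = 50
for _ in range(steps):
    state = step(state)

# velocity = information distance / rule applications
velocity = info_distance(start, state) / steps
print(f"info distance per rule application: {velocity:.3f}")
```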

Not a surprising thought given the above. But it suggests a more radical thought (which isn't new to the world)… common-sense time and common-sense spacetime may not be the "root" spacetime, but rather just one way of encoding relationships between data points. We tend to think in terms of causality, but there's no reason that causality is the organizing principle — it just happens to be easy to understand.

p.p.s.

humans simply connote the noticing of information distance as time passing… the noticing is rule applications from one observation to another.

the collapsing of quantum wave functions can similarly be reinterpreted as minimizing computation of info distance/rule applications of observer and observed. (that is… there is a unique mapping between an observer and the observed… and that mapping itself is not computable at a quantum scale…. and and and…. mapping forever… yikes.)

p.p.p.s.

“moving clocks run slow” is also re-interpreted quite sensibly this way… a “clock” is data points mapped where the data points are “moving”. that is… there are rule applications between data points that have to cover the info distance. “movement” of a “clock” in a network is a subnetwork being replicated within subnetworks… that is there are more rule applications for a “clock” to go through… hence the “clock” “moving” takes more time… that is, a moving clock is fundamentally a different mapping than the stationary clock… the clock is a copy… it is an encoded copy at each rule application. Now obviously this has hand wavey interpretations about frames of reference (which are nothing more than mappings within a larger mapping…)
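Here is a deliberately hand-wavey toy of that reframing: a stationary clock spends every rule application ticking, while a "moving" clock must also spend rule applications re-encoding (copying) itself, so it accumulates fewer ticks per global step. The cost model (one step per tick, a fixed copy cost per move) is entirely made up for illustration.

```python
def run(global_steps: int, copy_cost: int) -> int:
    """Ticks accumulated when each tick also requires copy_cost copy steps."""
    ticks = 0
    budget = 0
    for _ in range(global_steps):
        budget += 1                    # one rule application per global step
        if budget >= 1 + copy_cost:    # one step to tick + copy_cost to re-encode
            ticks += 1
            budget = 0
    return ticks

stationary = run(100, copy_cost=0)  # ticks every rule application
moving = run(100, copy_cost=3)      # must re-encode itself between ticks
print(stationary, moving)  # 100 25
```

The point of the toy is only that "moving" and "stationary" are different mappings with different rule-application budgets, so the moving clock accumulates fewer ticks.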

one can continue this reframing forever… and we shall.

— — — — — — — — — — — — — — —

Related to our discussion:

https://en.wikipedia.org/wiki/Type_inference

https://en.wikipedia.org/wiki/Information_distance

http://mathworld.wolfram.com/ComputationTime.html

computation time is proportional to the number of rule applications

https://en.wikipedia.org/wiki/Church_encoding

https://math.stackexchange.com/questions/1315256/encode-lambda-calculus-in-arithmetic

https://en.wikipedia.org/wiki/Lambda_calculus

https://en.wikipedia.org/wiki/Binary_combinatory_logic

https://www.cs.auckland.ac.nz/~chaitin/georgia.html

https://cs.stackexchange.com/questions/64743/lambda-calculus-type-inference

http://www.math.harvard.edu/~knill/graphgeometry/

https://en.wikipedia.org/wiki/Lattice_gauge_theory

https://arxiv.org/abs/cs/0408028

http://www.letstalkphysics.com/2009/12/why-moving-clocks-run-slow.html

Re-Historicize Ourselves, Historicize Computers

Two essential, intertwined questions about our present condition of politics and technology. 


What is identity?  What's the point of an a-historical system? 


These questions and possible answers form the basis of what it means to be human - which has nothing to do with our biological form.  The lack of overt asking of these questions in society and as individuals is why our current political climate is so dangerous.  Technology lacks the historical contingent context necessary to mediate us into restrained and thoughtful positions.  We have given our identities up to the algorithms written by the mere 30 million programmers on the planet and soon to the robots who will further de-contextualize the algorithms. 


Art and Philosophy IS the only way to "talk to AIs" (Wolfram's words).  We will completely lose humanity, and relatively quickly, if we don't put the machines and ourselves back into historical contingency.  We must imbue our technology with the messiness of history and set them to ask questions not state answers.  We must imbue ourselves with that same context.


Trump and his base are an example of an ahistorical society.  It is a movement of noise, not signal.  It is out of touch with current contingencies.  It is a phenomenon born of the echo chamber of branding/corporate marketing, cable news, and social media.  It is the total absence of philosophy - anti-culture.  And it is not dissimilar to ISIS.  While these movements/ideologies have physical instances, they are mostly media phenomena.


Boris Groys (The Truth of Art) and Stephen Wolfram (AI and The Future of Civilization) go into great depth on the context of these questions.  I have extracted quotes below and linked to their lengthy but very valuable essays. 


"But here the following question emerges: who is the spectator on the internet? The individual human being cannot be such a spectator. But the internet also does not need God as its spectator—the internet is big but finite. Actually, we know who the spectator is on the internet: it is the algorithm—like algorithms used by Google and the NSA."


"The question of identity is not a question of truth but a question of power: Who has the power over my own identity—I myself or society? And, more generally: Who exercises control and sovereignty over the social taxonomy, the social mechanisms of identification—state institutions or I myself?"


http://www.e-flux.com/journal/the-truth-of-art/


"What does the world look like when many people know how to code? Coding is a form of expression, just like English writing is a form of expression. To me, some simple pieces of code are quite poetic. They express ideas in a very clean way. There's an aesthetic thing, much as there is to expression in a natural language.

In general, what we're seeing is there is this way of expressing yourself. You can express yourself in natural language, you can express yourself by drawing a picture, you can express yourself in code. One feature of code is that it's immediately executable. It's not like when you write something, somebody has to read it, and the brain that's reading it has to separately absorb the thoughts that came from the person who was writing it."


"It's not going to be the case, as I thought, that there's us that is intelligent, and there's everything else in the world that's not. It's not going to be some big abstract difference between us and the clouds and the cellular automata. It's not an abstract difference. It's not something where we can say, look, this brain-like neural network is just qualitatively different than this cellular automaton thing. Rather, it's a detailed difference that this brain-like thing was produced by this long history of civilization, et cetera, whereas this cellular automaton was just created by my computer in the last microsecond."


[N. Carr's footnote to Wolfram]

"The question isn’t a new one. “I must create a system, or be enslaved by another man’s,” wrote the poet William Blake two hundred years ago. Thoughtful persons have always struggled to express themselves, to formulate and fulfill their purposes, within and against the constraints of language. Up to now, the struggle has been with a language that evolved to express human purposes—to express human being. The ontological crisis changes, and deepens, when we are required to express ourselves in a language developed to suit the workings of a computer. Suddenly, we face a bigger question: Is a compilable life worth living?"


http://edge.org/conversation/stephen_wolfram-ai-the-future-of-civilization

All Theories Are Part of The Theory of Information

The main idea here is that in all modeling or identification of contingencies, information goes missing - or the information was never to be had to begin with. This is a key convergent finding in mathematics (the incompleteness theorems, chaos theory), computer science (the halting problem, computational irreducibility, P != NP), quantum physics (the uncertainty principle), biology (complexity theory), and statistics (Bayesian models, etc.). How important that missing/unknown information is depends on the situation at hand - what is the tolerance for error/inaccuracy? In high-frequency economic trading, the milliseconds and trade amounts matter a lot. In shooting a basketball, there's a fairly large tolerance for mismodeling.
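The chaos-theory case can be shown in a few lines: the logistic map at r = 4 is fully deterministic, yet a 1e-10 difference in the initial condition swamps any prediction within a few dozen steps; the precision needed to predict was never to be had. The parameter values below are standard textbook choices, nothing more.

```python
def logistic(x: float, steps: int, r: float = 4.0) -> float:
    """Iterate the logistic map x -> r*x*(1-x), fully deterministic."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.2, 50)
b = logistic(0.2 + 1e-10, 50)
print(abs(a - b))  # the tiny initial difference has been amplified
```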


The study of light

Unbeknownst to me until recently, the three acts of discovery in my life have all been the study of light.  Theater is wild, embodied playing within a light shower.  Mathematics/computation is taming, through deconstruction, of light into knowledge and repeatable displays of light.  Painting is the emergent drama of the attempt to restage, in perpetuity, ephemeral light configurations. 


All just wave-particle contingencies named.

Wholly Inconsistent or Another Theory of The Drone or How Learning Leads to Terrible Things or Becoming Human, Again.

The dissonance of thought to behavior is politics and it thrives on the lack of critical, embodied thinking. Politics cannot be anything other than the complete mis-association of rhetoric -> external truth and bodies -> accidental outliers. Politics does not exist outside of that notional association.


The Handshake Is Back.

The age of Command and Control has come to an end on this planet. It wasn't even a good run - a mere couple hundred years, if we're being generous.

 

Command and Control is the strategy that banks on lack of connectivity between people. It involves an authoritative body controlling a limited communicating set of people by conditioning responses to commands. It primarily banks on destroying connectivity and communication between people and replaces socialization through standardized approaches – often called missions or crusades or objectives. That is, the authority destroys and eliminates all other stimulus that doesn't reinforce the mission.

 

It works when people are disconnected. It works when people can be normalized and undifferentiated.

 

This is the dominant strategy in industry and military… ironically it's the most used organizing strategy in modern America – in corporations, education, government, social organizations and non-profits. The West is full of Mission Statements and social engineering towards complete compliance. Deviants be damned.

 

The problem is… and it's a Huge Problem… nature, outside of humans, has almost zero examples of Command and Control as a strategy. More damning is that most of human (and our ancestors') history has zero examples of Central Authority as the organizing principle.

 

What's happening is that as the industrial world connects more people and more machines, centralized control becomes more fragile and short-sighted. The reality of complexity and ecology is that the network cannot be controlled, only shaped. There are no absolute missions. There are temporary ideas and temporary organizations – always changing – localized, short-term goals. There are traces of next moves, but there are no crusades in a connected world. There are no slogans worth dying for in a connected world.

 

And so, here we are. At the crux. The epoch of those that will literally die for the mission and those that will carry on by being in response through awareness and empathy and sensitivity. The Command and Control no longer can tell who's a man or a woman, who is what race, who bleeds what flag colors, who believes what tax form W2 mission statement. In an ironic corporate slogan appropriation, “what have you done for me lately?”

 

Tomorrow's winners are the makers, the free agents, the distributed computation, the micro-finance, the micro-school, the flash mob, the flash sale, the accidental brand, the oral history, the traces of ideas, the partial credit, the question answered with a question, the hacker hacker, the crafty craftsperson.

 

The ledger of exchange and the winning ideas will be distributed and trusted only through a loosely connected network. The handshake is back. The seal is dead.