compute, ontology, Theory Russell Foltz-Smith

Data, Mappings, Ontology and Everything Else

Caveat: I do not have all of this connected in some conclusive mathematical-proof way (an impossibility). The concepts below are related semantically, conceptually and process-wise (to me), and there is a lot of shared math. It is not a flaw in the thinking that no tighter connection exists, or that I may lack the ability to make it. In fact, part of my thinking is that we should not attempt to fill in all the holes all the time. Simple heuristic: first be useful, finally be useful. Useful is as far as you can get with anything.

Exploring the space of all possible configurations of the world's things tends to surface what is connected in reality. (More on the entropic reality below.)

— — — — — — —

First…. IMPORTANT.

Useful basic ideas linking logic and programming (the lambda calculus):

Propositions <-> Types

Proofs <-> Programs

Simplifications of Proofs <-> Evaluation of Programs <-> Exploding the Program Description Into All of Its Outputs
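
(These three arrows are the Curry-Howard correspondence. A minimal sketch in Python, my illustration rather than anything from the original: read a function's type as a proposition and its body as the proof.)

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proposition: if A implies B and B implies C, then A implies C.
# Program: the only way to produce the promised return type is to
# compose, so the function body *is* the proof of the proposition.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

# Simplifying the proof <-> evaluating the program:
inc_then_show = compose(lambda n: n + 1, str)
print(inc_then_show(41))  # "42"
```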

— — — — — — — — — —

Data Points and Mappings

A data point is either reducible via the lambda calculus (a fully determinate/provable function) or it is probabilistic (e.g. a wavefunction).

Only fully provable data points are losslessly compressible to a program description.

Reducible data points must be interpreter-invariant. Probabilistic data points may or may not be interpreter-dependent.

No physically observed data points are reducible — all require probabilistic interpretation and links to the interpreter, frames of reference and measurement assumptions. Only mathematical and logical data points are reducible. Some mathematical and logical data points are probabilistic.
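
A tiny illustration of "losslessly compressible to a program description" (my sketch, not the post's): a fully determined data point can be shipped as the short program that regenerates it, while a patternless (probabilistic) one cannot.

```python
import random
import zlib

program = "[n * n for n in range(10_000)]"        # the program description
determined = str(eval(program)).encode("ascii")   # the exploded data point

# The description losslessly regenerates the data point and is
# orders of magnitude smaller than the data point itself.
assert str(eval(program)).encode("ascii") == determined
print(len(determined), "bytes of data,", len(program), "bytes of program")

# Patternless bytes admit no such short description; even generic
# compression barely shrinks them.
noise = random.randbytes(len(determined))
print(len(zlib.compress(determined)), "vs", len(zlib.compress(noise)))
```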

Each data point can be numbered, similar to Gödel numbering, and various tests for properties and uniqueness/reductions can be devised. Such a numbering scheme should be UNIQUE: each data point gets its own number, and each data type (the class of all data points sharing the same properties) gets identifying properties/operations. E.g. perhaps a numbering scheme yields a one-to-one mapping with the countable numbers, so the ordinary properties of the integers can be used to reason about data points and data types. (It should be assumed that the data points of the integers are probably simply the integers themselves….)
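
One standard way to get such a unique numbering for structured data points is a pairing function (my illustration; the Cantor pairing function below is textbook material, not something from the original post):

```python
def cantor_pair(x: int, y: int) -> int:
    """Bijection from pairs of naturals to naturals: every pair
    of data points gets exactly one number, and vice versa."""
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z: int) -> tuple[int, int]:
    """Invert the pairing, recovering the original pair."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return w - y, y

n = cantor_pair(3, 5)               # the pair (3, 5) becomes 41...
assert cantor_unpair(n) == (3, 5)   # ...and 41 recovers (3, 5)
```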

A universal data system can be devised by maintaining an index of all numbered data points… indexed by data point, by data type and by valid mappings (logical/provable mappings and probabilistic mappings — encoded programs to go from one data point to another). The full system is uncountable and non-computable, but reductions are possible (a somewhat obvious statement). Pragmatically the system should shard and cache things based on frequency of observation of data points/data types (the most common things are "cached" and the least common things sit in cold storage and may be computed on demand…).
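
A toy sketch of that pragmatic layer (all names and structure here are my own assumptions, nothing canonical): an index keyed by data-point number, with mappings stored as programs and observation frequency deciding what stays hot.

```python
from collections import Counter

class UniversalIndex:
    """Toy index of numbered data points with frequency-based caching."""

    def __init__(self, hot_size: int = 2):
        self.cold = {}         # number -> data point (cold storage)
        self.hot = {}          # most-observed data points, kept "cached"
        self.hits = Counter()  # observation counts per number
        self.mappings = {}     # (src, dst) -> program encoding the mapping
        self.hot_size = hot_size

    def put(self, number, point):
        self.cold[number] = point

    def add_mapping(self, src, dst, program):
        self.mappings[(src, dst)] = program

    def observe(self, number):
        self.hits[number] += 1
        # Re-shard: promote the most frequently observed points.
        top = {n for n, _ in self.hits.most_common(self.hot_size)}
        self.hot = {n: self.cold[n] for n in top if n in self.cold}
        return self.hot.get(number, self.cold.get(number))

idx = UniversalIndex()
idx.put(41, (3, 5))
idx.put(7, "rare")
idx.add_mapping(41, 7, lambda point: "rare")  # a program, not a fact
for _ in range(3):
    idx.observe(41)  # 41 earns its place in the hot cache
```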

Why Bother With This

We bother to think through this in order to create a data system that can be universally used and expanded for ANY PURPOSE. Humans (and other systems) have not necessarily indexed data, and the mappings between data, in efficient, most-reduced forms. To deal with things in the real world (of convention, language drift, species drift, etc.) there needs to be a mapping between efficiently and inefficiently represented things — and the line is not clear… almost all measures of efficiency on probabilistic data points and "large" data points are temporary, as new efficiencies keep being discovered. Only the simplest logical/mathematical/computational data points are maximally efficiently storable/indexable.

Beyond that… some relationships can only be discovered by a system that has enumerated as many data points and mappings as possible, in a way that lets them be systematically observed/studied. The whole of science suffers because there are too many inefficient category mappings.

Mathematics has always been thought of as a potential universal mapping between all things, but it too has suffered from syntax bloat, weird symbolics, strange naming, and the endless expansion of computer-generated theorems and proofs.

It has become more obvious, with the convergence of thermodynamics, information theory, quantum mechanics, computer science and Bayesian probability, that computation is the ontological convergence point. Anything can be described, modeled and created in terms of computation. Taking this idea seriously suggests that we ought to create knowledge systems, information retrieval and scientific processes from a computational, bottom-up approach.

And so we will. (Another hypothesis is that everything tends toward more entropy/lowest energy… including knowledge systems… and computer networks… and so they will tend to standardize mappings and root out expensive representations of data.)

p.s.

it’s worth thinking through the idea that in computation/information

velocity = information distance / rule applications (steps).

Acceleration etc. can be obtained through the usual differentiation.
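
A rough way to operationalize this (entirely my sketch: information distance is approximated with compression, in the spirit of the normalized compression distance, and "rule applications" is just a step count):

```python
import zlib

def info_distance(a: bytes, b: bytes) -> float:
    """Crude information distance via compression (NCD-style)."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def velocity(before: bytes, after: bytes, steps: int) -> float:
    """velocity = information distance / rule applications (steps)."""
    return info_distance(before, after) / steps

# A toy "rule" applied once per byte; acceleration would fall out of
# differencing velocities across successive observations.
s0 = b"aaaaaaaaaaaaaaaa"
s1 = bytes((c + 1) % 256 for c in s0)
print(velocity(s0, s1, steps=len(s0)))
```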

This note is important because you can basically find the encoding of all physical laws in any universal computer (given enough computation…).

Not a surprising thought given the above. But it suggests a more radical thought (which isn't new to the world)… common-sense time and common-sense spacetime may not be the "root" spacetime… but rather just one way of encoding relationships between data points. We tend to think in terms of causality, but there's no reason that causality is the organizing principle — it just happens to be easy to understand.

p.p.s.

humans simply connote the noticing of information distance as time passing… the noticing is rule applications from one observation to another.

the collapsing of quantum wave functions can similarly be reinterpreted as minimizing computation of info distance/rule applications of observer and observed. (that is… there is a unique mapping between an observer and the observed… and that mapping itself is not computable at a quantum scale…. and and and…. mapping forever… yikes.)

p.p.p.s.

"Moving clocks run slow" is also re-interpreted quite sensibly this way… a "clock" is data points mapped where the data points are "moving". That is… there are rule applications between data points that have to cover the info distance. "Movement" of a "clock" in a network is a subnetwork being replicated within subnetworks… that is, there are more rule applications for a "clock" to go through… hence the "moving" clock takes more time… that is, a moving clock is fundamentally a different mapping than the stationary clock… the clock is a copy… an encoded copy at each rule application. Now obviously this has hand-wavy interpretations about frames of reference (which are nothing more than mappings within a larger mapping…).

one can continue this reframing forever… and we shall.

— — — — — — — — — — — — — — —

Related to our discussion:

https://en.wikipedia.org/wiki/Type_inference

https://en.wikipedia.org/wiki/Information_distance

http://mathworld.wolfram.com/ComputationTime.html

computation time is proportional to the number of rule applications

https://en.wikipedia.org/wiki/Church_encoding

https://math.stackexchange.com/questions/1315256/encode-lambda-calculus-in-arithmetic

https://en.wikipedia.org/wiki/Lambda_calculus

https://en.wikipedia.org/wiki/Binary_combinatory_logic

https://www.cs.auckland.ac.nz/~chaitin/georgia.html

https://cs.stackexchange.com/questions/64743/lambda-calculus-type-inference

http://www.math.harvard.edu/~knill/graphgeometry/

https://en.wikipedia.org/wiki/Lattice_gauge_theory

https://arxiv.org/abs/cs/0408028

http://www.letstalkphysics.com/2009/12/why-moving-clocks-run-slow.html

ontology, perception, Life, compute Russell Foltz-Smith

The Data Science of Art or Pictures of Data about Pictures

My larger theory is that data = art and art = data, and that Artificial Intelligence will be nothing more or less than a continued exercise in the artistic-historical integration of new mediums and forms. However, this post isn't another rehash of those ideas. This one is about the data of art.

Here I offer some insights from computationally analyzing art (my own, while pointing to others' analyses). Quite a few excellent and very detailed data-science analyses of art have come out recently, due to the fact that more and more collections are being digitized and data-faceted. Here's a fantastic write-up of MoMA's collection. Michael Trott offers a very detailed analysis of ratios and other hard facets of a couple hundred years of visual art. Last year Google made waves with Deep Dream, which was really an analysis of how neural networks work. Turn that analysis on its head and you get art creation (my first point above... the lines between art and data are more than blurry!).

On to some pictures depicting data about pictures!

A cluster analysis of 534 of my visual works

While an artist has intimate knowledge of their own work, it is likely highly biased knowledge. We often become blind to our habits and tend to forget our average work. Doing a large-scale, unemotional analysis of all the art is enlightening.

The cluster analysis above was created using machine learning to cluster images by similar features (color, composition, size, subject matter, etc.); I did everything in Mathematica 11/Wolfram Language. A cursory view shows density in deep reds and faces/portraits. My work tends to be more varied as I move into dry media (pencil, pastels) and remove the color (or the algorithms simply don't find as much similarity). And, of course, the obvious note here is that this is just a quick view without attempting to optimize the machine learning or feed it cleaned-up images (color correction etc.). What else do you notice about the general shape of my work? (I am currently running a larger analysis on 4,500 images instead of just 534, curious whether things will look different.)
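
For anyone who wants to replicate the idea on their own images, here is a rough Python analogue (my sketch; the post itself used Mathematica 11/Wolfram Language, and the "artworks" folder below is hypothetical): reduce each image to a small color feature vector, then cluster.

```python
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def features(path: Path, size=(16, 16)) -> np.ndarray:
    """Crude feature vector: a downsampled RGB thumbnail, flattened.
    (A fuller pipeline would add composition, subject, texture, etc.)"""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=float).ravel() / 255.0

paths = sorted(Path("artworks").glob("*.jpg"))  # hypothetical folder
X = np.stack([features(p) for p in paths])

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
for path, label in zip(paths, kmeans.labels_):
    print(label, path.name)
```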

Drilling in a bit, we find some curiosities. Below we see a couple of deeper dives into particular images, where we break down each image's geometry, the detection of objects/subjects, and a little bit of color analysis. What I find most interesting is just how often similar compositions show up across varied subject matter. Not terribly surprising, as it's all just marks on a page, but it is interesting just how biased I am toward certain geometries.

I tend to love strong crossing lines, or lines running directly vertical or horizontal. From my own art history studies, we find this isn't necessarily optimal for keeping a viewer's attention. Often these geometries take eyes right off the page.

Below is a detailed view of a few images and their 4 closest neighboring images/works. Pretty weird in a lot of ways! I am heartened to see that over an 18-month period of work I haven't particularly sunk into a local maximum or minimum of ideas. I can see many older works in a cluster with newer works, as old themes/ideas resurface and are renewed, often without active thought. Another study I should do is to sort these by source of the picture, as I can tell that photos taken from phones and posted to Instagram etc. often distort what was really there (not a bad thing, just something to analyze to measure effects/artifacts).

[Image: artneighbors.png, a grid of works shown with their four nearest neighbors]

I'm going to continue drilling into all this and do a full color-spectrum analysis, histograms of subject matter, categorization of mediums and materials, etc. The point is not to find a formula but instead to learn, become aware and enjoy. I do not find data forensics to be limiting or mechanical. You can see in the above that even the math is fairly interpretive and loose: the object identification is often off, faces are often missed, images are misgrouped... OR are they?

One of the recurring themes in my art exploration is the simple question "what art entertains a computer/algorithm?" or, more provocatively: what if to art is to be human, which is really just data science....

compute, Theory, living, ontology, philosophy Russell Foltz-Smith

Re-Historicize Ourselves, Historicize Computers

Two essential, intertwined questions about our present condition of politics and technology. 


What is identity? What's the point of an ahistorical system?


These questions and possible answers form the basis of what it means to be human - which has nothing to do with our biological form. The lack of overt asking of these questions, in society and as individuals, is why our current political climate is so dangerous. Technology lacks the historically contingent context necessary to mediate us into restrained and thoughtful positions. We have given our identities up to the algorithms written by the mere 30 million programmers on the planet, and soon to the robots that will further de-contextualize the algorithms.


Art and Philosophy IS the only way to "talk to AIs" (Wolfram's words). We will completely lose humanity, and relatively quickly, if we don't put the machines and ourselves back into historical contingency. We must imbue our technology with the messiness of history and set it to ask questions, not state answers. We must imbue ourselves with that same context.


Trump and his base are an example of an ahistorical society. It is a movement of noise, not signal. It is out of touch with current contingencies. It is a phenomenon born of the echo chamber of branding/corporate marketing, cable news and social media. It is the total absence of philosophy - anti-culture. And it is not dissimilar to ISIS. While these movements/ideologies have physical instances, they are mostly media phenomena.


Boris Groys (The Truth of Art) and Stephen Wolfram (AI and The Future of Civilization) go into great depth on the context of these questions.  I have extracted quotes below and linked to their lengthy but very valuable essays. 


"But here the following question emerges: who is the spectator on the internet? The individual human being cannot be such a spectator. But the internet also does not need God as its spectator—the internet is big but finite. Actually, we know who the spectator is on the internet: it is the algorithm—like algorithms used by Google and the NSA."


"The question of identity is not a question of truth but a question of power: Who has the power over my own identity—I myself or society? And, more generally: Who exercises control and sovereignty over the social taxonomy, the social mechanisms of identification—state institutions or I myself?"


http://www.e-flux.com/journal/the-truth-of-art/


"What does the world look like when many people know how to code? Coding is a form of expression, just like English writing is a form of expression. To me, some simple pieces of code are quite poetic. They express ideas in a very clean way. There's an aesthetic thing, much as there is to expression in a natural language.

In general, what we're seeing is there is this way of expressing yourself. You can express yourself in natural language, you can express yourself by drawing a picture, you can express yourself in code. One feature of code is that it's immediately executable. It's not like when you write something, somebody has to read it, and the brain that's reading it has to separately absorb the thoughts that came from the person who was writing it."


"It's not going to be the case, as I thought, that there's us that is intelligent, and there's everything else in the world that's not. It's not going to be some big abstract difference between us and the clouds and the cellular automata. It's not an abstract difference. It's not something where we can say, look, this brain-like neural network is just qualitatively different than this cellular automaton thing. Rather, it's a detailed difference that this brain-like thing was produced by this long history of civilization, et cetera, whereas this cellular automaton was just created by my computer in the last microsecond."


[N. Carr's footnote to Wolfram]

"The question isn’t a new one. “I must create a system, or be enslaved by another man’s,” wrote the poet William Blake two hundred years ago. Thoughtful persons have always struggled to express themselves, to formulate and fulfill their purposes, within and against the constraints of language. Up to now, the struggle has been with a language that evolved to express human purposes—to express human being. The ontological crisis changes, and deepens, when we are required to express ourselves in a language developed to suit the workings of a computer. Suddenly, we face a bigger question: Is a compilable life worth living?"


http://edge.org/conversation/stephen_wolfram-ai-the-future-of-civilization

Life, ontology, philosophy, Space, compute, Theory, time, living, perception Russell Foltz-Smith

All Theories Are Part of The Theory of Information


 

This is a Monday morning brain dump to get the juices going. 

 

"Contingencies" is a difficult concept to fully elaborate in a useful manner.  A contingent thing- an event, a structure, a set of information - is such a thing by the fact that it has no existence outside of its contingent relationships.  In some sense it's the age old rhetorical question, "if a tree falls in a forest and no one is around does it make a noise?"    The key in that question is "noise."  Noise is a contingent concept in both the common sense idea as well as any physical sense.  Sound (and other sensory concepts) is contingent in that there must be a relation between the waves/particles (and their possible sources) and an observer.  Without the observer one cannot classify/name/label a sound, a sound.  A sound literally is the effect on the observer.  Hopefully that is a useful introduction to a basic idea of contingency. 

 

The definitions muddy when considering contingency in a deeper and broader way, such as in discussing human behavior or economics. Over the eons humans have attempted to create and codify contingencies. The codification is really an attempt to reduce the noisy complication of the wider universe into acceptable and useful contingencies (laws, rules, guidelines, best practices, morals, ethics, social norms, standards, etc.). The sciences and humanities also codify, but wrap these efforts in slicker packages of "discovering the natural laws" and figuring out "how to best live."

 

These published codifications are NOT the contingencies they purport to represent, but they are contingent in and of themselves. In these broader contexts contingencies refer to, and are part of, a complex network of relationships. Whether expounded as physical or chemical models, philosophic frameworks, first-order logics or computer programs, all of these systems are contingent systems: contingent on their basis in previous systems, their relations to associated phenomena, and the substrate of their exposition and execution. A computer program's representation of information, and its utility as a program, is highly contingent on the computer hardware it runs on, the human language it's written in, the compiler logic used to encode it for the computer, the application of the output, and so on.

 

The latest science in computer theory, the social sciences, neuroscience, quantum physics and cosmology (and chemistry....) has somewhat converged onto very thorny (challenging-to-the-intuition) ideas: superpositions, asymmetry/symmetry, networks (neural and otherwise) and a more probabilistic mathematics. These are all models and sciences of contingency - essentially a unified theory of information, which in turn is a unified theory of networks/graphs (or geometry, for the 19th-century minded). The core issue that makes these ideas useful explanations is missing information: how reliable a probabilistic statement can be made about contingent things (events, objects, etc.).

 

The building blocks sometimes employed in these theories involve Bayesian models, the assumption of the real/actual existence of space and time, and concepts of simple logic ("if then") and other first-order logic concepts. These are often chosen because of their obvious common-sense/human-intuitional connection. However, upon inspection even these assumptions add a layer that is severely off from the actual contingencies being studied, and the building-block assumptions are also highly contingent in and of themselves. The "model-reality distance" and this "contingent in and of themselves"-ness quickly and exponentially erode the relevance of the model.

 

Consider even a basic notion of "if then" thinking/statements in a cross-substrate contingent situation - such as a simple computer program running on a basic, common computer. A program as simple as "if X equals 1 then print 'The answer is definitely 1!'. X = 1 + .0000000000000000000000000001" is going to print the THEN statement even though it's logically, symbolically not true (a human can obviously tell). (Logically speaking, the program should print nothing at all. Practically, in the world of daily life, the program prints the statement and everything is "ok", on average.) The abstract "if then" statement is contingent on the substrate that implements/executes/interprets it (the computer OS and hardware). The contingencies build up from there (the language the statement is implemented in matters, as does the ability of any observer or implementing entity to understand left-to-right notation, mathematical statements, variable replacement, etc.).
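
A minimal Python rendering of that example (my sketch; any language with standard 64-bit floats behaves the same way):

```python
# 64-bit floats carry roughly 16 significant decimal digits, so the
# tiny addend below is absorbed entirely by rounding.
x = 1 + 0.0000000000000000000000000001

if x == 1:
    # Symbolically false, yet this branch runs on ordinary hardware.
    print("The answer is definitely 1!")

print(x == 1)  # True
```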

 

An important note: these issues of contingency ARE NOT resolved by further If-Then statements. That is, we cannot fix the shortcoming of If-Then thinking just by building up more If-Then statements. The If, the Then, the references they check (what is the X we're testing, and is it in fact that X?), and the Then's command all suffer from the same infinite regress as the original simple if-then statement we questioned! How does anything definitively say X is in fact the thing the statement/logic is checking for?

 

The main idea here is that in all attempts at modeling or identifying contingencies, information goes missing, or the information was never to be had to begin with. This is a key convergent finding in mathematics (the incompleteness theorems, chaos theory), computer science (the halting problem, computational irreducibility, P != NP), quantum physics (the uncertainty principle), biology (complexity theory) and statistics (Bayesian models, etc.). How important that missing/unknown information is depends on the situation at hand - what is the tolerance of error/inaccuracy. In the case of high-frequency economic trading, the milliseconds and trade amounts matter a lot. In shooting a basketball, there's a fairly large tolerance margin for mismodeling. For noticing the Higgs boson, the margin of tolerance approaches the Planck length (the smallest physical distance we know of...). The development of probability theory allows us to make useful statements about contingent situations/things. The more we can observe similarly behaving/existing contingent things, the more useful our probability models become. EXCEPT... sometimes not. The Black Swan.

 

If-Then and similar logic models of thinking are insufficient as explanatory reference frames. Per the above, they simply do not account for the rich effects of very small amounts of missing information or misinformation. Which brings us to the other building blocks almost universally used in science - space and time. These are robust common-sense, and in some cases scientific, concepts, but they are not fundamental (in that they cannot escape being contingent in and of themselves). Time is contingent on observers and measuring devices - it literally is the observable effect of information encoding between contingent events; it does not have an independent existence. Space is more difficult to unwind than time, in that it is a very abstract concept of relative "distance" between things. This is a useful concept even at the lowest abstraction levels. However space, as physical space, is not fundamental. Instead space should be reconciled as a network distance between contingent subnetworks (how much of an intervening network needs to be activated to relate two subnetworks). Spacetime is the combined, observable (yet RELATIVE to the contingent) distance in total information between contingent things (events, objects, etc.).
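
A toy rendering of "space as network distance" (my illustration only): the distance between two subnetworks is how many intervening nodes must be activated to relate them.

```python
from collections import deque

def network_distance(graph: dict, src: str, dst: str) -> int:
    """Breadth-first search: hops of intervening network that must
    be 'activated' to relate src and dst."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return -1  # the subnetworks are unrelated

# Hypothetical subnetworks: Earth relates to Mars through machines.
g = {
    "earth": ["satellite"],
    "satellite": ["earth", "probe"],
    "probe": ["satellite", "mars"],
    "mars": ["probe"],
}
print(network_distance(g, "earth", "mars"))  # 3
```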

 

This is important! Accepting common notions of If-Then logic and spatio-temporal elements prevents the convergence of explanatory models (which, if they are really explanatory of reality, should converge!). A unified notion of spacetime as information distance between networks brings together theories of behavior, learning, neural networks, computer science, genetics etc. with quantum mechanics and cosmology. The whole kit and caboodle. It also explains why mathematics continues to be unusually effective in all the sciences... mathematics is a descriptive symbolic system of relations and contingency. Converging all theories upon a common set of building blocks does not INVALIDATE those theories and models in their utility, nor does it make them unnecessary. Quite the opposite. Information IS the question at hand, and HOW it is encoded is exactly what contingencies are embodied as. Humans as humans, not as computers, are what we study in human behavior. So we need theories of human behavior. Planets, atoms, computers, numbers, ants, proteins, and on and on all have embodied contingencies that explanation requires be understood in nuanced but connected ideas.

 

Once enough models of the relations of contingent things are encoded in useful ways (knowledge! computer programs/simulations/4D printing!!), spacetime travel becomes more believable... not like 1950s movies, but by simulation and by recreated/newly created, ever-larger universes with their own spacetime trajectories/configurations. That's fun to think about, but there is actually a much more serious point here. The more information that is encoded between networks (the solar system, humans and their machines, etc.), the less spacetime (per my definition above) is required to go from one subnetwork of existence (planet Earth and humanity) to another (Mars and martianity), etc. A deep implication here is an answer to why there is a (practical) speed of light and whether it can be broken (it arguably can, and has, via entanglement: http://time.com/4083823/einstein-entanglement-quantum/). The speed of light is due to the contingencies between massive networks - anything more sophisticated than a single electron has such a huge set of contingencies that for it to be "affected" by light or anything else, those effects must affect the contingencies too. This is the basis of spacetime: how much spacetime is engaged in "affecting" something.

 

This is not a clever sci-fi device, nor semantic, philosophic wordplay. Information and network theory are JUST beginning, and are rapidly advancing both theoretically (category theory, information theory, graph theory, PAC, etc.) and practically (deep learning, etc.). Big data and machine learning/deep learning/learning theory are going to end up looking EXACTLY like fundamental physics theory - all theories of getting by with missing information, of a limit to what can be known by any entity smaller than the whole universe. To the universe, the universe is the grand unified theory, and explanations are unnecessary.

compute, ontology, perception, philosophy Russell Foltz-Smith

The study of light

Unbeknownst to me until recently, the three acts of discovery in my life have all been the study of light. Theater is wild, embodied playing within a light shower. Mathematics/computation is the taming of light, through deconstruction, into knowledge and repeatable displays of light. Painting is the emergent drama of the attempt to restage in perpetuity ephemeral configurations of light.


All just wave-particle contingencies named.

philosophy, perception, ontology, living, compute Russell Foltz-Smith

What is a painting? What is a poem? What is a program? What is a person?

an alien wants to know what our poems, paintings, programs and people actually are.  What Is We.

Where exactly do we find the IT of a painting, poem, program or person? An intrinsic, contained whole? An experienced essence?

These are the biggest questions of all, the questions that inform the whole of politics, religion, science, humanities, culture, family, education and identity itself.

Consider an alien from a faraway galaxy, arrived here or near Earth, wondering what exactly is at hand. Suppose the alien doesn't have eyes or ears or fingers or anything like humans or humanlike animals have. This alien lacks computers like ours, meaning none of this alien's computers/perceptive tools run our biology or our operating systems. What exactly would Earth and humans and our artifacts be? What would our poems and our programs and our paintings and our people seem "like" to an alien?

There would be no decoder key overtly explaining or making sense of any of the human experience, for any such decoder would be written/described within the very objects the decoder decodes. Our natural language, color theories, agile programming methodologies, data types, frames, etc. would not make any sense at any level standing alone. An alien would literally need to learn humanity from the ground up. And it's not even clear whether natural language, human behavior patterns, visual systems or bits and bytes would be "the ground" from whence to go up.

Why is alien ignorance the case? Is there a conceivable reality in which this ignorance isn't the case? The only possible case to be made is that of ideal or universal, or at least beyond-human, forms/ideas/information. While it is impossible to completely rule out the possibility of ideal forms/universals, it seems incredibly improbable considering that no two humans will ever agree on what exactly we mean by a painting, a poem, a program, or even a person. In fact, that's exactly why we have these expressions and their, well, expressions. Whatever the existence of a poem is, it is more than its commonly stated rules and favorite examples. Paintings have battled their own existence since ancient hominids traced pigment and scratched rocks on rocks. Programs, while wholly invented by humans mostly within our lifetimes, have no full expression of their behavior. A person, who does all these other ill-defined things, cannot possibly be defined by the infinitude of its ill-defined activities.

The situation for exact knowledge and clear definitions is worse, though. Even the formulations/simple ideas/systems we've created entirely are not free of unlimited ignorance of their essence. The halting problem and the incompleteness theorems, from computer science and mathematics, our most exacting disciplines of creation, thwart, beyond the shadow of any reasonable doubt, any idea that we can know all things to the full extent, except the most simplistic.

This entire essay is probably not convincing. The human apparatus isn't set up well for ignorance and the unknown. Our biology seems to gravitate to pattern recognition, and this, in turn, leads our institutions to peddle in the same. We all teach each other that We Can Know, We Must Know, We Will Know, despite the fact that this has never actually been proven nor empirically shown, nor has it lasted even within the most fundamental faith ideals. It makes sense, to some extent, that we wish to know, and even believe we can know, considering what the value of knowing would seem to be: if we knew, we could control, or at least come to grips. Even that is a bizarre, baseless notion once we dig into it. Anything within the limits of knowing is so simple it's uninteresting, and in almost every case fading or short-lived. Find a case and surface it, please!

Yet it is still interesting, or at least motivating, to try to explore and identify these things. Aren't we all doing these activities? Or being these things? Any and all?

My own response to these questions is simply... I don't know but I'd like to keep finding out what it is.   And I even reject the idea of I.   I take it as a given that I'm exploring paintings, poems and programs not for the person that is me, but for the persons I am and am connected to.   I don't paint my paintings or write my poems or create my programs.  What tinkles from these fingers someone else's DNA made and someone else's skill trained is a shared activity of connectivity.  

And so.

There remains only inquiry and more inquiry.  That is what all is not.

Until the aliens tell all myselves otherwise.

philosophy, quantifier, ontology, perception, compute, government Russell Foltz-Smith

The Handshake Is Back.

The age of Command and Control has come to an end on this planet. It wasn't even a good run - a mere couple hundred years, if we're being generous.

 

Command and Control is the strategy that banks on a lack of connectivity between people. It involves an authoritative body controlling a set of people with limited communication by conditioning their responses to commands. It banks primarily on destroying connectivity and communication between people, replacing socialization with standardized approaches - often called missions or crusades or objectives. That is, the authority destroys and eliminates all other stimulus that doesn't reinforce the mission.

 

It works when people are disconnected. It works when people can be normalized and undifferentiated.

 

This is the dominant strategy in industry and the military… and ironically it's the most used organizing strategy in modern America - in corporations, education, government, social organizations and non-profits. The West is full of Mission Statements and social engineering toward complete compliance. Deviants be damned.

 

The problem is… and it's a Huge Problem… nature, outside of humans, has almost zero examples of Command and Control as a strategy. More damning is that most of human (and our ancestors') history has zero examples of Central Authority as the organizing principle.

 

What's happening is that as the industrial world connects more people and more machines, centralized control becomes more fragile and short-sighted. The reality of complexity and ecology is that the network cannot be controlled; it is shaped. There are no absolute missions. There are temporary ideas and temporary organizations - always changing - localized, short-term goals. There are traces of next moves, but there are no crusades in a connected world. There are no slogans worth dying for in a connected world.

 

And so, here we are. At the crux. The epoch of those who will literally die for the mission and those who will carry on by being in response, through awareness and empathy and sensitivity. Command and Control can no longer tell who's a man or a woman, who is what race, who bleeds what flag colors, who believes what tax-form W2 mission statement. In an ironic appropriation of corporate sloganeering: "what have you done for me lately?"

 

Tomorrow's winners are the makers, the free agents, the distributed computation, the micro-finance, the micro-school, the flash mob, the flash sale, the accidental brand, the oral history, the traces of ideas, the partial credit, the question answered with a question, the hacker hacker, the crafty craftsperson.

 

The ledger of exchange and the winning ideas will be distributed and trusted only through a loosely connected network. The handshake is back. The seal is dead.

philosophy, perception, ontology, compute Russell Foltz-Smith

On Originality and Uniqueness

A Question of Originality and Uniqueness

The more I think and read, the less original (in thought and expression and being) I become to myself. The evidence mounts against original thought as the investigation deepens. Borrowers in genetics and memetics, all organic things are. I am thus led to wonder whether there is any unique entity/thing/idea in existence, or that could be in existence. Unique here defined: the unique contains or embodies WHOLLY and ONLY isolated, not-found-anywhere-else properties and relations.

This is all unique-ish


Theories and Atoms

There are many theories in the world built on the establishment of a central, unique entity or atomic element:

proteins in genetics

sub atomic particles in quantum physics

persons in sociology and psychology

numbers in mathematics

bits and algorithms in computer science

and so on…

The question of the falsity of all theories comes down to the frame of measurement reference. Experiments falsify or support theories based on measurements of relational phenomena. What properties of what entities should be measured and investigated in a theory and its experiments? As the frames of reference resolve, measurement access is cut off to possibilities that might reveal the measured objects as non-unique (borrowed/unoriginal/non-atomic) entities. For example, by measuring and investigating behavior between humans (and not the cells, proteins, chemistry and atoms that comprise them), experiments and theories become blind to what are possible (and in most cases likely) relevant causal networks. Time and time again it is found that an observed behavior isn't due to some reified personhood but really to chemical exchange in organ systems and their cells and the environment. And even those exchanges are explained and mediated by network patterns and geometry (neural networks/memory, protein folding, chemical bonds, atomic spin, etc.).

Infinite Regress of Contingency

Examples of contingent explanation can be endlessly drawn out. So much so that it doesn't seem plausible that any fixed fidelity-level of explanation is fully contained. The infinite regress of the network of explanations seems to imply that the phenomena themselves are an infinite regress of relations.

Here's the stake in the ground, so to speak. It's all networks and relations - everything in contingency. The resolution of anything, in its totality, is infinite. That is, to fully measure and explain a thing, all of its contingencies must be drawn out. This essential nature forces a diversification of ideas, knowledge disciplines, engineering activities, languages and philosophies. The work of discovery will never be done.

The Basis of Originality

The originality of ideas or activity (unique things) was never pure. The regress of contingency ensures this. Originality can be thought of as a measure of energy between observed states of affairs (ideas, concepts, explanations, pixels on a screen, music, art, societies, economies, etc.). To go from here to there… the connection, the leap, the activity of relating is the originality. It's a paradoxical concept. The perceived distance between the original and the unoriginal is usually infinitesimal, but the mass of contingencies of the unoriginal (the borrowed things) tends toward infinity. Thus connecting anew requires more energy (time, a.k.a. computation, a.k.a. connecting).

For example, getting a new law passed in a small jurisdiction is much easier (it takes less time and has fewer nodes to convince) than in a large one (more time, more nodes to convince, more nodes in opposition). Old (established, highly contingent) laws have mass; they stick around. Getting a new (original) law in place, with even a slight change, in a system as large as the USA requires enormous energy (time/computation/politicking) - and the more entrenched and contingent the law, the harder it is… think of the US Constitution.

Returning to the fore: the idea of originality is reified and romantic. It is personal. It is ephemeral. It is a mere superposition of possibility collapsed into the new, which is barely different (the only difference being new participants) from the old. But if the audience hasn't crossed the bridge themselves, the new appears new. As they walk across the bridge, alone, they'll find the same old same old… there's nothing new under the sun except what's new to you.

Implications

This is why the deeper and wider science and art and philosophy go, the more they circle back on themselves… finding the same shape to phenomena across spacetime and all levels of fidelity. To connect wider and more diverse networks, new vocabularies and new perceptive tools must be engineered. Those new tools must then come under study and interpretation and ruled use. On and on.

And Yet

The basic question remains. Is there anything unique from all things or any things? A simple case…. Is 0 different than 1? It seems so but how is it different? Where is that difference? What does the work of difference? Is a circle different than a square?

Certainly these examples are too simple. The answers appear to be YES, they are different. A circle is unique from a square. But… how are they fundamentally different? Through use? Through their mathematical properties? Through definition alone? I can approximate both with a series of lines at various angles, so the method used to generate them may make them different, or may not. A circle has no sides/infinite sides and a square has 4... there's a concept of equidistance from the center in both, but it's deployed differently... there's an infinity in both (pi and the Pythagorean theorem...)... on and on...

Is geometry – the relation of things to other things, the shape of things – the only way in which things are unique? (a taxonomy of possible unique things https://en.wikipedia.org/wiki/List_of_mathematical_shapes)

The question seems stuck in an infinite regress of definitions and connections.

An Anti Conclusion For Now

Originality and Uniqueness do not hold up well as standalone, substantial concepts. At best, I'm left in contingency. Things are contingent on other things on other things… occasionally whispering out possible unique relations that require a regress of investigation to reveal more sameness. Perhaps.

compute, time, philosophy, perception, living Russell Foltz-Smith

am I OK? a remark on authenticity.

A remark on authenticity where I non-definitely answer the question of Am I OK?

am I OK?

This is the question I get asked the most nowadays. Certainly when posts online stop being predominantly jokes about the NFL or filtered photos of babies doing funny stuff, and start being drippy, gloopy bullshit painted rectangles with captions like "exist. I. do. not.", the question sort of begs itself. That, and the bizarro, yet totally cliche, year that age 38 to 39 turned out to be… a seemingly constant drip of great news followed by shit news… you tend to reshape your expression a bit.

This is a #selfie of #you


But that’s really not what the question is about, right?


The question "am I OK?" is about authenticity and freedom and sincerity, or rather the lack thereof. Our hyperconnected world and our American society's obsession with brand awareness have led to this confused and inauthentic mediation between people (and machines). Listing all the causes of the mass delusion of what I'll call the brand of My Endless Happiness (M.E.H.: the American Dream!) is a waste of energy (as mathematicians do, listing trivial causes is left as an exercise for the reader).


MEH has us all engaged in small talk, trivia and endless duckfaced happy posts from all the fun things we’re doing instead of communicating.  MEH has us all outraged at the outrages we all share (death and taxes from Presidents!) - those things that are mostly removed from us and outside our direct control - instead of VOTING.  MEH has us big box shopping on Black Fridays and Cyber Mondays and whining about credit card bills in January instead of MAKING THINGS FOR EACH OTHER.   MEH has us reading Fifty Shades rather than, well, GETTING IT ON.


and so on without so-ing on!


No, I'm not OK! OK is MEH. OK is eating Bennigan's leftovers while binge-watching Game of Thrones and playing Angry Birds (that's a madlib: insert your own CHAIN RESTAURANT, POPULAR TV SHOW, FAVORITE PHONE APP).


OK is watching GOP circus debates and ranting online while passing up the last 6 local election cycles because INSERT EXCUSE.   


OK is ok, it's normal. It's buzzed but not drunk nor sober. It's brohugs, not embraces or yodels/chants. It's regrams of misquoted inspiration, not climbing that mountain 5 miles from your house. It's watching TV, not playing jazz with friends. It's OK, not GREAT! AMAZING! FUCKING PISSED! BUMMED! DEVASTATED! ENGAGED!


And it’s ok.   It’s perfectly ok to want OK.   It’s OK to be ok.  Sometimes, ok is exactly where to be.  Sometimes it’s 100x better than not-OK.   

Am I OK? Maybe. Sometimes. Here and there.


Above all, I'm trying to engage. That's it. Chasing experience. Being a Maker and Doer rather than mostly an observer and critic.


Am I sad?   sometimes.  People die.  People get sick.  People hurt.  Animals hurt.  The world and life is hard.   


Am I desperate?  always.  Desperate to exist.  Desperate to renew my own agency.


Am I depressed? Sometimes. Self-diagnosis is generally a bad idea, but I can tell you, yes, I have been and do get depressed, and darkness descends. And the times when it does… as far as I can tell it's because I'm sitting there OK, letting life happen to me.


Am I drunk?  sometimes.   sometimes more than others!   probably more than I should be in quantity and quality.


Am I happy? not that often, but at least once a week.   There’s two kinds of happiness, generally, to me.  True joy… usually that’s experiencing something awesome with others.  and then the little happiness… an espresso on Sunday mornings reading the NYTimes Book Review. (though that might actually be True Joy!).


Am I having fun? YES!   Fun isn’t tickling and playing tag, though that is fun to do.  FUN! is trying To Become Something, fun is Trying To Become A Person.  Fun is being so bad at something you have to do it every single day unending to see even a shred of better than truly terrible at that something.   


Am I Trying to be An Artist?  No.   Such a limited label, IMHO.   I hate labels.


Am I Trying to be a Philosopher? Yes! But only for a limited time. Philosophers, like Politicians, make poor career titles… the idea of making a career (bringing home the bacon) out of something that should literally be blowing up careers seems like a recipe for MEH.


Am I Trying to have a Career in Anything? No. I have tattoos on my hands (the things I use to DO STUFF) to remind myself of Information Destiny. Everything is Information. I am trying to Inform My Being. Always.


Am I Authentic?   No.   I’m trying.  Each day I’m trying more and more to authentically engage myself and the world.  


Am I Free?  No.   None of us can ever be free of contingencies.  I think Authenticity and Freedom go together.   And they are a process, not an end point.


Do I Want My Endless Happiness?  No, absolutely not under any terms do I want MEH.   I do not seek happiness at the expense of authenticity/freedom.   Life is life (ugh, tautologies…)   life is struggle and competition and birth and death and growth and shrinkage and change and stasis and highs and lows.   It’s hurt and joy.   It’s fast food and gourmet.  Literally life exists on the border - the jagged blurry line - of order and chaos.


So to answer your question, no I’m not ok.  Are you?

ontology, perception, philosophy, time, compute Russell Foltz-Smith

A Dialog (between friends) on The Law of the Conservation of Computation

the most fundamental law of everything:

computation is conserved.

Two friends have a dialogue on the matter.

Russell [8:16 AM] 
This is happening to programs and programming too. http://www.worksonbecoming.com/thoughts-prefaces/2015/10/1/this-is-contingency

Some Works
This is contingency
Remarks on the contingency of new forms and the phenomenon of replication.

Schoeller [8:16 AM] 
Very NKS.

Schoeller [8:17 AM]
I’m somewhat less certain of this outcome than you — it relies heavily on everyone playing nice and working with each other.

Russell [8:18 AM] 
That's just you.

Schoeller [8:18 AM] 
Which is challenging — witness the web API boom/bust of 5 years ago.

Russell [8:18 AM] 
The arc of assimilation is clear

Schoeller [8:18 AM] 
It’s the pragmatism/skepticism in me.

Russell [8:19 AM] 
Most humans, almost 6.997 billion of them, have no idea about computers

Schoeller [8:19 AM] 
I get it. But the pace of progress can be furiously slow in the face of economics.

Schoeller [8:19 AM]
For instance — where’s my flying car?

Schoeller [8:20 AM]
We’re not going to have networked 3D printed robots manufacturing things for some time.

Russell [8:20 AM] 
That's not progress

Russell [8:20 AM]
Flying cars aren't selected for

Russell [8:20 AM]
They lack survivability value

Russell [8:21 AM]
Amazon Prime delivery moving to Amazon Flex... as they push delivery times to zero, one must manufacture close to the source

Russell [8:21 AM]
Of the transaction

Russell [8:21 AM]
It's happening

Russell [8:21 AM]
Who needs to fly except the drones

Schoeller [8:21 AM] 
I understand the vision. I’m just not convinced it’ll happen...

Schoeller [8:22 AM]
Well me for one :simple_smile:

Schoeller [8:22 AM]
Drone delivery is another unlikely occurrence at any large scale.

Schoeller [8:23 AM]
The economics/logistics just don’t make any sense. Packages are heavy.

Russell [8:23 AM] 
Personal drivers. Personal shoppers. Personal virtual assistants. ... All are shaping the world to not need all this movement. Once we're three degrees removed from these activities we won't care if it's a machine doing it all.

Russell [8:24 AM]
So make people want less heavy stuff

Russell [8:24 AM]
Sell them a kindle and ebooks

Russell [8:24 AM]
:)

Schoeller [8:24 AM] 
Agree on that.

Russell [8:24 AM] 
It's happening.

Schoeller [8:24 AM] 
Although (sidebar) dead-tree’s not dead.

Schoeller [8:25 AM]
You can’t digitize the tactile feel of thumbing through the pages of a book.

Schoeller [8:25 AM]
I suspect it’ll become boutique. Soft-cover trade books are done. But hardcover, well-bound, limited edition will carry on and do quite well.

Russell [8:27 AM] 
Nice try

Schoeller [8:27 AM] 
Back on track — A lot of this future stuff is the same: the hyperloop is just the next space elevator which was the next flying car, etc.

Russell [8:27 AM] 
You can destroy people's ability to touch

Russell [8:27 AM]
Negative sir

Schoeller [8:27 AM] 
I like my fingers, thank you very  much :wink:

Russell [8:27 AM] 
I'm making a much bigger systematic argument

Russell [8:28 AM]
Don't care about the specific forms

Russell [8:28 AM]
Only that forms get selected and replicated

Schoeller [8:28 AM] 
Well, it has to be grounded in something.

Russell [8:28 AM] 
Replicability!

Russell [8:28 AM]
Is it computationally efficient!

Russell [8:29 AM]
Boom boyeeee

Schoeller [8:29 AM] 
Much of the problem of flying cars, drone delivery, space elevators, 3d printed manufacturing, and hyperloops is the connection from physics -> economics.

Schoeller [8:29 AM]
We don’t have that with software. There, the challenge is the rate and format of the bits flying around.

Russell [8:30 AM] 
Hence computationally efficient

Russell [8:30 AM]
Economic networks also replicate computational efficiency.

Russell [8:31 AM]
Commodities have stable-ish values because the idea is computationally efficient. Utility etc. is well established in the network. So they are exchanged etc.

Schoeller [8:32 AM] 
You’re asserting, then, that competition == computational efficiency?

Russell [8:32 AM] 
Correct

Russell [8:32 AM]
Efficiency must have survivability.

Russell [8:32 AM]
The trivial would not be efficient for economies

Schoeller [8:33 AM] 
I can buy that. At least in the sense of efficiency from the perspective of the system as a whole. Not for any given agent participating in the system.

Russell [8:33 AM] 
Yes.

Schoeller [8:33 AM] 
The agents are horrifically inefficient.

Schoeller [8:33 AM]
(individually)

Russell [8:34 AM] 
Hard to separate them from the system

Schoeller [8:34 AM] 
True, unless you’re an agent.

Russell [8:34 AM] 
I believe there is a law of the conservation of computation.

Schoeller [8:35 AM] 
computation can neither be created nor destroyed, but can only change form?

Russell [8:35 AM] 
Correct

Russell [8:36 AM]
And that results in all other conservation laws

Russell [8:36 AM]
And is why competition in all networks is computational efficient

Russell [8:36 AM]
And cannot be any other way

Schoeller [8:36 AM] 
It’ll take a bit for me to wrap my head around that idea.

Russell [8:37 AM] 
The singularity is pure probability.  Computationally irreducible.

Russell [8:37 AM]
Once probability breaks down into four forces and matter and light etc. we have pattern

Russell [8:37 AM]
But by the law of the conservation of computation it can't go to all pattern.

Russell [8:37 AM]
Or that would reduce computation

Russell [8:38 AM]
So competition between networks must proceed.

Russell [8:40 AM]
And per my blog post: replication normalizes nodes in the network; as they become more fully normalized, the network of replication starts to collide with other networks of replication, where the selected normalizations start competing. Until a new form and new networks begin the process again.

Russell [8:40 AM]
Computation merely moves around these networks as the process of complexification and simplification double back over and over.

Russell [8:40 AM]
Even any American company is an example

Russell [8:41 AM]
We are simplifying and normalizing them all the time.

Russell [8:41 AM]
Employees replicate basic skills

Russell [8:41 AM]
And we recruit for these skills

Russell [8:41 AM]
 revenue lines get simplified

Russell [8:41 AM]
marketing simplifies messages to the world

Russell [8:42 AM]
All for survivability.

Russell [8:42 AM]
But this also exposes companies to competition

Russell [8:42 AM]
It gets easier to poach employees.  And to see ideas and strategies on the outside.

Russell [8:42 AM]
Soon it tips and companies need New products. New marketing. New employees.

Russell [8:43 AM]
All the while computation is preserved in the wider network

Schoeller [8:43 AM] 
Where I’m struggling is how this copes with the notion that the universe tends toward disorder.

Russell [8:44 AM] 
Normalized forms become dispensable as individual nodes.

Russell [8:44 AM]
Disorder is pure noise.

Schoeller [8:44 AM] 
Order in the universe is effectively random.

Russell [8:44 AM] 
Total entropy.

Russell [8:45 AM]
Which, if every network normalizes towards highly replicated forms, means they have less internal competition. They have heat death.

Russell [8:45 AM]
Which is total entropy.

Russell [8:45 AM]
Again. A singularity is pure probability.

Russell [8:46 AM]
No pattern.

Russell [8:46 AM]
Randomness.

Schoeller [8:46 AM] 
I can buy that. Certainly there’s a low probability that any agent will succeed, thus the entropy tends to increase.

Russell [8:46 AM] 
Fully replicated forms are those that maximize survivability.

Russell [8:46 AM]
So some super weird platonic object between order and chaos

Russell [8:46 AM]
Between infinities.

Russell [8:46 AM]
A circle for example is a weird object

Russell [8:47 AM]
Rule 110 is a weird object
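
(Aside, since Rule 110 keeps coming up: it is an elementary cellular automaton that Matthew Cook proved Turing-complete, and it visibly lives on the border between order and chaos. A minimal sketch, assuming Python and a fixed-width window with dead cells at the edges:)

```python
# Rule 110: each cell's next state is looked up from its 3-cell neighborhood.
# Proven Turing-complete (Cook, 2004); it sits between order and chaos.
RULE = 110

def step(cells):
    """One synchronous update; the edges are treated as dead (0)."""
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

cells = [0] * 63 + [1]  # a single live cell on the right
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```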

Schoeller [8:47 AM] 
Here’s a question — where does the computation come from to achieve fully replicated forms?

Schoeller [8:48 AM]
Presumably there’s some notion of “potential” computation?

Russell [8:48 AM] 
Negative.

Russell [8:48 AM]
There's only computation

Russell [8:48 AM]
Potential is a relational concept

Schoeller [8:49 AM] 
Hmm… then back to my question.

Russell [8:49 AM] 
There is no potential time

Russell [8:49 AM]
There is no potential dimension

Russell [8:50 AM]
There is no potential temperature

Schoeller [8:50 AM] 
Right, but time only moves forward — there’s no notion of conservation of time.

Russell [8:50 AM] 
Ah!

Russell [8:50 AM]
But I'm suggesting there is

Russell [8:50 AM]
Time is computation

Schoeller [8:50 AM] 
Actually, there is potential temperature. Temperature == energy.

Russell [8:51 AM] 
Yes it gets rather semantic

Schoeller [8:51 AM] 
The whole field is “thermodynamics”

Russell [8:51 AM] 
Yes which is superseded by computation

Russell [8:51 AM]
Hence why info theory and thermodynamics are isomorphic

Russell [8:51 AM]
They are just substrate discussions

Russell [8:51 AM]
Which go away in the math
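
(Aside: the isomorphism being leaned on here is concrete at least at the level of entropy. Shannon's H and Gibbs's S have the same functional form, differing only by Boltzmann's constant and a change of logarithm base. A quick sketch, assuming Python:)

```python
import math

def shannon_entropy(p):
    """H = -sum p_i * log2(p_i), in bits (information theory)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def gibbs_entropy(p, k_B=1.380649e-23):
    """S = -k_B * sum p_i * ln(p_i), in J/K (statistical thermodynamics)."""
    return -k_B * sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
H, S = shannon_entropy(p), gibbs_entropy(p)
# Same quantity up to the constant factor k_B * ln(2):
print(H, S, S / (1.380649e-23 * math.log(2)))  # last value equals H
```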

Schoeller [8:52 AM] 
Well, strictly speaking that math doesn’t govern, but attempts to describe.

Russell [8:53 AM] 
Look at how computer science handles time

Russell [8:53 AM]
Steps or cycles

Russell [8:53 AM]
It defines time as compute steps

Russell [8:53 AM]
Hahahahaha

Schoeller [8:53 AM] 
If info theory and thermo are isomorphic, then the principle of potential has to translate in some way. It’s important because that’s one of the foundations of conservation of energy.

Russell [8:54 AM] 
Yes yes

Russell [8:54 AM]
I'll find a translation for you

Russell [8:54 AM]
It's got something to do with Chaitin's number

Schoeller [8:55 AM] 
Computer science handles time as a long from a particular, arbitrary point. And calculates differences as a byproduct of the way it operates.
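
(Aside: both senses of machine time are easy to make concrete, assuming Python: wall-clock time as a count from an arbitrary epoch, and algorithmic time as a count of compute steps. The Collatz function below is just an arbitrary illustration of the second sense:)

```python
import time

# Sense 1: time as a long counted from an arbitrary epoch (1970-01-01 UTC).
nanos_since_epoch = time.time_ns()

# Sense 2: time as compute steps -- we simply count update iterations.
def collatz_steps(n):
    """Number of update steps for n to reach 1; the step count IS the 'time'."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(nanos_since_epoch)  # some large long, relative to an arbitrary zero point
print(collatz_steps(27))  # 111 steps, regardless of the clock on the wall
```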

Schoeller [8:55 AM]
A “quantum” computer would handle time very differently.

Russell [8:56 AM] 
Yes. Keep going.

Schoeller [8:56 AM] 
“We” calculate time from celestial positions.

Schoeller [8:56 AM]
None of that relates to the more generalized notion of time.

Russell [8:57 AM] 
I propose the translation of time fits within the law of conservation of computation

Russell [8:57 AM]
Quantum computers are closer to singularities. Computing with pure probabilities

Russell [8:57 AM]
Classical computers compute with approximated machine precision probabilities
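
(Aside: "machine precision" here is concrete. Under standard IEEE-754 doubles, even trivial sums of probabilities pick up rounding error, assuming Python:)

```python
# Classical machines approximate the reals; even trivial probability sums
# pick up rounding error under IEEE-754 double precision:
p = 0.1 + 0.2
print(p == 0.3)              # False
print(p)                     # 0.30000000000000004
print(abs(p - 0.3) < 1e-12)  # True -- we compute with tolerances, not exactness
```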

Russell [8:58 AM]
Somewhere things get super weird with math (where algebra and geometry meet probability theory)

Russell [8:58 AM]
Math itself suffers the same challenge

Schoeller [8:59 AM] 
Yes, well math likes to be very precise.

Russell [8:59 AM] 
That which symbolically lacks pure probability, humans and classical computers can handle

Russell [9:00 AM]
Once you deal with infinities and infinitesimals you start getting to pure probabilities and math theory starts bleeding.

Schoeller [9:00 AM] 
Okay, so I can accept a notion of a conservation of probability of time.

Russell [9:00 AM] 
N-order logics require (n+1)-order logics, and incompleteness and set paradoxes follow.

Russell [9:01 AM]
Math itself becomes computationally weird.

Schoeller [9:01 AM] 
i.e. that the probability of an event occurring or not occurring within a system is 1. Of course, that’s tautological.

Schoeller [9:02 AM]
But also that it would hold for any number of events over any set of times.

Russell [9:02 AM] 
Because once a math system becomes computationally inefficient it all of a sudden is incomplete. And we reduce to "some things are true but we can't prove them in this system"

Russell [9:03 AM]
Yes, pure probability is binary.  Either everything happens or nothing happens.

Russell [9:03 AM]
If everything happens, you must conserve computation as that everything happens

Russell [9:03 AM]
Can't be more than 1! Can't be less than 1!

Schoeller [9:04 AM] 
Well, I think what I’m saying is that my need for “potential” computation is solved by probability.

Russell [9:04 AM] 
And local events of everything take on less than all computation because of the halting problem.

Schoeller [9:04 AM] 
Although I haven’t completely convinced myself.

Russell [9:04 AM] 
If the halting problem weren't true, every event/computation could self-inspect and computation would tend to 0
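
(Aside: Turing's argument for why such total self-inspection is impossible is short enough to sketch. The halts() oracle below is hypothetical, which is the whole point:)

```python
# Suppose, for contradiction, a perfect self-inspector existed:
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) halts. Cannot exist."""
    ...

def trouble(program):
    # Do the opposite of whatever the oracle predicts about self-application.
    if halts(program, program):
        while True:
            pass  # loop forever
    return "halted"

# Consider trouble(trouble): if the oracle says it halts, it loops; if the
# oracle says it loops, it halts. Either answer is wrong, so no such oracle
# exists -- computations cannot, in general, inspect their own fate.
```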

Russell [9:05 AM]
Chaitin's number is a measure of probability

Russell [9:05 AM]
Complexity is a measure of probability

Russell [9:05 AM]
Probability is a notion of unknown information
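
(Aside: Chaitin's Ω is literally defined as a halting probability, the sum of 2^(-|p|) over every self-delimiting program p that halts. It is uncomputable, so any code can only gesture at it. A toy sketch, assuming Python, a made-up five-slot instruction set, and a step budget standing in for the undecidable part:)

```python
import random

# Toy stand-in for Chaitin's Omega: estimate the halting probability of random
# programs in an invented instruction set. A real Omega uses self-delimiting
# programs and is uncomputable; the step budget is our imperfect substitute.
def halts(prog, budget=1000):
    pc, steps = 0, 0
    while steps < budget:
        if pc >= len(prog) or prog[pc] == 0:
            return True                 # ran off the end or hit a halt opcode
        pc = max(0, pc - prog[pc]) + 1  # jump back prog[pc] slots, then step forward
        steps += 1
    return False                        # assume non-halting once the budget is spent

random.seed(0)
trials = 10_000
halted = sum(halts([random.randint(0, 3) for _ in range(5)]) for _ in range(trials))
print(halted / trials)  # a crude, budget-limited Omega-like number
```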

Russell [9:05 AM]
All data of everything would contain every program and all outputs

Russell [9:06 AM]
And the total probability of any and all events is 1.  All information is known

Russell [9:06 AM]
And at the same time it is 0

Schoeller [9:06 AM] 
Here wouldn’t the truth of the halting problem arise from the fact the system is influenced by elements outside the system?

Russell [9:06 AM] 
Because all information is computationally irreducible, of the maximal kind

Schoeller [9:06 AM] 
(ie. similar to thermo)

Schoeller [9:06 AM]
Therefore a computation can never know its inputs.

Russell [9:06 AM] 
Yes.  Halting problem is exactly that

Russell [9:06 AM]
Unknowns

Schoeller [9:06 AM] 
And thus, can never know its outputs.

Schoeller [9:07 AM]
Because the program can’t see beyond itself.

Russell [9:07 AM] 
It's not a matter of inputs

Russell [9:07 AM]
It emerges from computation!

Russell [9:07 AM]
Elementary CAs show this

Russell [9:07 AM]
Godel showed this

Russell [9:08 AM]
Mere DESCRIPTION!  Description is computation

Russell [9:09 AM]
I think wolfram gave in too easily

Russell [9:09 AM]
He still believes in Euclidean time

Russell [9:09 AM]
Or whatever Greek time

Schoeller [9:10 AM] 
Right. And if computation is probabilistic, the program couldn’t even know, necessarily, what it was actually computing at any given point (until that point occurs).

Schoeller [9:11 AM]
Yeah, I think your theory only works if time is a probability not a discrete measure.

Russell [9:12 AM]
Time isn't discrete.

Russell [9:12 AM]
It's pure difference

Schoeller [9:12 AM] 
Which is really to say that the outcome of a computation can’t be known until the state of the system is known.

Schoeller [9:12 AM]
Which itself can’t be known with any certainty until it occurs.

Schoeller [9:13 AM]
Or, it’s all wibbly, wobbly, timey, wimey stuff.

Schoeller [9:14 AM]
Or, possibly the Heisenberg uncertainty principle as applied to computation.

Russell [9:14 AM] 
But 2+2 is 4

Schoeller [9:14 AM] 
Only if the state of the system is consistent.

Schoeller [9:14 AM]
(which it happens to be)

Russell [9:15 AM] 
And that math statement is a "localized" statement

Schoeller [9:15 AM] 
So, the probability of 2+2=4 is very, very close to 1, but not exactly. Possibly so close that it approaches 1 in the limit.

Schoeller [9:16 AM]
Right. So, part of why the state for 2+2=4 is consistent is because we’ve defined it that way.

Russell [9:16 AM] 
It's what I call robust

Russell [9:16 AM]
In most universes 2+2 is 4

Russell [9:16 AM]
In the multiverse there are universes where that's not true
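
(Aside: a tame, well-known instance of a "small universe" where the statement fails is modular arithmetic, assuming Python:)

```python
# In the integers, 2 + 2 = 4. In the "small universe" of arithmetic mod 3,
# the same addition rule gives a different answer, and "4" doesn't exist at all.
print(2 + 2)        # 4 -- the familiar, "robust" case
print((2 + 2) % 3)  # 1 -- in Z/3Z, 2 + 2 = 1
print((2 + 2) % 4)  # 0 -- in Z/4Z, 2 + 2 = 0
```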

Schoeller [9:16 AM] 
But, if you shift from say Cartesian to spherical, it doesn’t necessarily hold unless you change what “2” and “4” mean.

Russell [9:17 AM] 
But those are very small universes that reduce quickly

Russell [9:17 AM]
Yes.

Russell [9:17 AM]
Thank you!

Schoeller [9:17 AM] 
i.e. their definition is relative to the system you’re computing within.

Russell [9:17 AM] 
Counting and the math emerges from the computational systems

Russell [9:17 AM]
Yes.

Russell [9:18 AM]
And in the entirety of the multiverse all maths exist.  All description exists.

Schoeller [9:19 AM] 
Sure. That’s as tautological as the probability that something either exists or does not is 1.

Schoeller [9:20 AM]
Since the probability of anything existing within an infinite, unbounded system would also be 1.

Russell [9:20 AM] 
And your point?

Russell [9:21 AM]
Math loves tautologies

Russell [9:21 AM]
We have to state them all the time

Russell [9:21 AM]
Or reduce to them

Schoeller [9:22 AM] 
Well, it’s consistent with probability theory. So, that’s nice.

Russell [9:22 AM] 
Is that what symbolics and rule replacements are?

Russell [9:23 AM]
One giant computational tautology

Schoeller [9:23 AM] 
If you’re going to have a theory that talks about local behavior within systems, you have to have consistency when you take that to its extreme limit — such as when the system contains everything possible.

Schoeller [9:24 AM]
Aren’t you just describing the state of the system with symbolics and rules?

Russell [9:25 AM] 
Sure.

Russell [9:25 AM]
And the state of everything is what?

Schoeller [9:25 AM] 
Here describe means “govern” (unlike my earlier math statement)

Russell [9:26 AM] 
Isn't that the state of all sub states or local states?

Russell [9:26 AM]
Of which some local states are meta-descriptions of sub-sub-states or neighboring states

Schoeller [9:26 AM] 
I think the state of everything is that the probability of anything is 1.

Schoeller [9:26 AM]
It’s rather useless, but so is the notion of the state of everything.

Russell [9:27 AM] 
Govern gets tricky because it's not sensible as a fundamental concept. E.g. the spin of a quark doesn't govern. It's just a property.

Russell [9:27 AM]
Gravity and the other forces don't govern.

Russell [9:28 AM]
They are descriptions of relationships

Schoeller [9:28 AM] 
Sure, but the definition of “2” on a Cartesian plane is.

Russell [9:28 AM] 
If gravity is merely spacetime curvature, a geometry, that doesn't mean it governs.

Russell [9:28 AM]
What is the definition of 2 governing?

Schoeller [9:29 AM] 
It’s governing the behavior of 2 within the Cartesian system.

Russell [9:29 AM] 
It's merely a description of relations between an x position and a y position on a description of a plane

Schoeller [9:29 AM] 
i.e. that 2 can’t be 3 or an apple.

Russell [9:30 AM] 
Ah.  Yes.  Definition bounds localized networks.

Russell [9:30 AM]
2 is a 3 in some systems

Russell [9:31 AM]
Say a simple system of primes and non primes without concern of actual quantity
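
(Aside: that coarser system is easy to sketch, assuming Python. If the only property a universe tracks is primality, then 2 and 3 are the same object there; the classify() helper is just an illustration:)

```python
def classify(n):
    """A universe that only sees primality, not quantity."""
    if n < 2:
        return "non-prime"
    return "prime" if all(n % d for d in range(2, int(n ** 0.5) + 1)) else "non-prime"

# In this system, 2 and 3 collapse to the same thing:
print(classify(2) == classify(3))  # True  -- "2 is a 3" here
print(classify(2) == classify(4))  # False -- 2 and 4 remain distinct
```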

Schoeller [9:31 AM] 
I think this idea holds. The symbols and rules govern the system in a computational sense. But that does not mean that the system itself governs any physical phenomena. Only that it describes (to the extent that the rules reasonably describe the same.)

Schoeller [9:31 AM]
— moving back to describe and govern meaning different things —

Russell [9:31 AM] 
Yes, I'm in agreement

Russell [9:31 AM]
Govern is a localized concept of bounding relations

Russell [9:32 AM]
Let's return to the main q in all this

Russell [9:32 AM]
WHAT DOES THE WORK OF COMPUTATION?

Schoeller [9:32 AM] 
Yes, bounding relations that define a specific system within the multiverse of possible systems.

Schoeller [9:35 AM]
Well, the computation would have to be done within the medium of the system, right?

Schoeller [9:36 AM]
It can’t be just one thing. Because we’ve already enumerated that there are quantum computers that are different from regular computers, which are different from the human brain.

Russell [9:36 AM] 
yeah, i haven't figured this out.

Russell [9:36 AM]
other than, it's everything i'm trying to figure out.

Schoeller [9:37 AM] 
And to some degree, you pick the computational medium when you define the system. At least in the programming world. Mathematica vs Java vs Spark.

Russell [9:38 AM] 
i think it's this.... or related.... to perceive/observe/describe/explain at all, whatever sub-network of everything (whatever universe, computer, entity, person, rock...) IS. And the IS and IS NOT of breaking out of total relation to everything is COMPUTATION. And it's a super weird notion. But the mere simplification of total relation to partial relation IS the COMPUTATIONAL ACT.

Schoeller [9:39 AM] 
And with a math problem, you’re defining the computational medium to be the human brain.

Russell [9:40 AM] 
well, within the human / this universe frame of reference or partial relation to everything, yes.

Schoeller [9:41 AM] 
Agree that it’s a weird notion that computational singularity doesn’t “seem” to underlie everything. But the rules and computation have to be related and even dependent.

Russell [9:41 AM] 
whether we can COMPUTE  or "IS" with a different substrate... well, i think so.... i think "computers" and "virtual reality" are moving our COMPUTE/DESCRIPTION/RELATION to everything beyond/outside the Human Brain.

Schoeller [9:42 AM] 
So, it’s easier if we constrain ourselves to the systems we make up.

Schoeller [9:43 AM]
As for what computes the physical world — maybe there’s a lesson in evolution theory, where “computation” is quite literally random mutations of the medium itself.

Schoeller [9:44 AM]
And where the “selection”/“survival”/“success” of the computation occurs outside the system (back to the halting problem discussion above)

Schoeller [9:46 AM]
I should clarify “But the rules and computation have to be related and even dependent.” … within a system. In the multiverse, anything goes. :simple_smile:

Russell [9:48 AM] 
yes, on your evolutionary theory... or something similar to that. the resolution of probabilities IS computation. resolution being like the resolution of superpositions in quantum stuff.

Russell [9:48 AM]
i believe that basically happens as you move through logic systems and computational systems, i.e. Russell's theory of types etc.

Russell [9:49 AM]
related to all this mumbo jumbo: http://plato.stanford.edu/entries/quine-nf/

Schoeller [9:50 AM] 
It’s an example of a chaotic system where order appears to arise naturally, so it seems like it’d be a reasonable starting place to think about other physical systems.

Russell [9:50 AM] 
yes, i say we conclude there for now

Schoeller [9:50 AM] 
I think the key is the halting problem bit — that the computation can’t possibly know if it’s successful. That occurs outside the system where the computation is valid. It only blindly executes.

Russell [9:51 AM] 
we've created something between chaos and order in this dialog

Russell [9:51 AM]
which will be non trivial to clear up.
