The First Trillionaire
if you want to be the first trillionaire (or even just be in the top 1%) you need to abstract economic exchange even further away from the human body.
body -> gesture -> oral language -> symbolic thinking -> agriculture/food as reserve energy -> writing -> number -> currency -> debt -> property -> humans as bodies (slaves) -> equity -> capital/labor -> widget -> factory/industry -> information -> intellectual property -> computation -> machines as slaves -> computational property -> ?????????
the increase in "wealth" in humanity is tied to a reduction in the contingency of the body/the fungibility of the body. the increase in average life span is a side effect of the body no longer being necessary to power society. it's perhaps a nice side effect, but it also means most humans will be unable to actually participate in economic exchange over time unless they find a way to expand their extent via computational versions of themselves. personhood will be defined by the abstracted, computational footprint credited to a named body.
the variety of personhood expression increases, which is the upside. but only insofar as people realize that having a body is not enough.
pretty weird thought. but really think about how you actually have personhood, citizenship, autonomy, reputation, gravitas, accreditation, etc. consider the various political/social inequalities we maintain - they are all very abstracted notions... and political power never recognizes violence unless it directly involves the mutilation of the recognized, physical body.
and if you want to get really weird... where exactly does free will play out in the hierarchy of abstracted existence? do electric yous dream of sheep?
but again, if you want to get extremely rich, focus on abstracting economic exchange further from the human body.
Systems Are Not Systems, Systemically Speaking
Systems of power
systems of tech
systems of art
systems of systems
system shortage. System overload. System failure. System online. Systems offline.
The limits of language. The limits of number. The limits of knowledge.
There is justice in a system and a system of justice. There are words at play here but is it only words? It is systemic deeds and deeded systems. You are property, according to some systems. You have earning power and risk factors actuarially speaking but definitively according to a system of accounting.
The correction system, academic system, intellectual system. Systems training, systems tuning, systems thinking. Been there, done that, within the system, outside the system, through the system. I followed the system, watched you through the system, rode the system; the system of roads less traveled is not yet a system. Networks are systems of nodes and edges, system sigmas and system sigmoids. Systems of voids and nulls can be combined to form the number system and its successors.
Have you contacted the system? System’s leaders? Been through the triple A system? The farm system? Learning before you’re called up to the big time, the system of real system operators? Did you get lost in the political system? Slipped through the cracks? The cracks are a system too, a system of cracks is bigger than the system of systems? This makes me nervous – I feel it in my nervous system influenced by endocrine systems influenced by systems of influence and environmental systems. I wish these were smart systems sensitive to other systems, but not too sensitive. Systemically sensitive to just the right amount of system.
Systems of grammar
systems of signs
systems of religion
discipline systems
automated systems are not free. But is it the system that is not free or the automation? Is it possible to be a free system or are systems inherently systemic?
Greeks had a system. A solar system. A mathematical system. A Universal System. The universe is a system. The whole system. The only system.
Marshall McLuhan, a system of systems, a member of the human system, systemically laid claim to the visual nature of systems. Of course, the system is the system. There was no system before McLuhan.
He was a closed system living in an open system.
Bounded and compact
Complete and unbounded
Boundaries connect systems or disconnect them
One man’s system is another man’s trash
Put it in the waste systemic
Manage it, measure it
A system that can’t be measured can’t be improved but it’s still a system being measured by its incommensurability.
There’s a doubt, for now, in this system of time, a moment of doubt, that my nervous system and ganglion cells don’t understand any of this codex I’ve written on this operating system. The system of understanding has generated a system of misunderstanding. Marshall rolls over in his grave, then that’s not a system! A medium is a message but a non message is not a non medium and a non understanding is not a non system, so it must be a system.
Systems of biology
Sensory systems
Measurement systems
systems of currency
unit systems
standards, norms, rules, guidelines, laws
natural laws, natural systems
artificial systems, artificial laws
Surely artificial laws are not laws. A law is a law. But not by itself. It must be part of a system of laws. At least the law and its not law. That’s a simple system. Simple as it gets. Right. Wrong. Nothing in between. Wrongish Right isn’t lawful and it certainly isn’t a system nor part of a system, except a linguistic system. Wittgenstein made sure that even linguistic systems aren’t really systems. They too fall apart.
Systems are fragile
Black swans or maladapted
Systems are temporary
Systems are forever
Systems protect you
Systems oppress us
Systems are systems
We are systems
Systems protect Systems
Systems oppress Systems
Count with me, use the sieve, it’s part of the algorithmic system. An ancient thing made modern by x86 systems. Count with me, a system to help you sleep. 2, 3, 5, 7, 11… clearly a system of primes but no systematic way to account for them all. So it’s not a system, except in words. It is a system of numbers but not a number system. And you are a system of systems but definitely free-willed to do otherwise than what your systems of systems within these systems allow for you. As long as your systemic thinking has been well trained within the education system to think outside the system.
All is not lost. Because none of it was gained.
It’s probably not a system.
System shutting down.
Data, Mappings, Ontology and Everything Else
Caveat: I do not have all of this connected in some conclusive mathematical-proof way (an impossibility). The concepts below are related semantically, conceptually and process-wise (to me), and there is a lot of shared math. It is not a flaw in the thinking that no tighter connection exists, or that I may lack the ability to make it. In fact, part of my thinking is that we should not attempt to fill in all the holes all the time. Simple heuristic: first be useful, finally be useful. Useful is as far as you can get with anything.
Exploring the space of all possible configurations of the world tends to surface what’s connected in reality. (more on the entropic reality below)
— — — — — — — -
First…. IMPORTANT.
useful basic ideas linking logic and programming (lambda calculus):
Propositions <-> Types
Proofs <-> Programs
Simplifications of Proofs <-> Evaluation of Programs <-> Exploding The Program Description Into All Of Its Outputs
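A minimal sketch of this propositions-as-types correspondence, using Python type hints as stand-in propositions (toy names of my own; Python won't actually verify these "proofs" the way a proof assistant would):

```python
# Propositions as types, proofs as programs: a Python sketch.
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# proposition "A and B"      <->  type Tuple[A, B]
# proposition "A implies B"  <->  type Callable[[A], B]

def and_elim_left(pair: Tuple[A, B]) -> A:
    """A proof of (A and B) -> A is a program extracting the first component."""
    return pair[0]

def modus_ponens(f: Callable[[A], B], a: A) -> B:
    """A proof of ((A -> B) and A) -> B is function application."""
    return f(a)

# Running a proof-program is the "simplification of proofs" step:
print(modus_ponens(lambda n: n + 1, 41))  # 42
```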
— — — — — — — — — — -
Data Points and Mappings
A data point is either reducible via the lambda calculus (a fully determinate/provable function) or it is probabilistic (e.g. a wavefunction).
Only fully provable data points are losslessly compressible to a program description.
Reducible data points must be interpreter-invariant. Probabilistic data points may or may not be interpreter-dependent.
No physically observed data points are reducible — all require probabilistic interpretation and links to an interpreter, frames of reference and measurement assumptions. Only mathematical and logical data points are reducible. Some mathematical and logical data points are probabilistic.
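As a toy example of a reducible, interpreter-invariant data point, here are Church numerals sketched with Python lambdas (an illustration, not a full lambda-calculus evaluator):

```python
# Church numerals as "reducible data points": their value is fully
# determined by evaluation alone, independent of the interpreter.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Collapse a Church numeral to a native int by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
assert to_int(two) == 2  # any faithful evaluator reduces it to the same value
```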
Each data type can be numbered, similarly to Gödel numbering, and various tests for properties and uniqueness/reductions can be devised. Such a numbering scheme should be UNIQUE: each data point gets its own number, and each data type (the class of all data points that share the same properties) has identifying properties/operations that can be computed. E.g. perhaps a numbering scheme leads to a one-to-one mapping with the countable numbers, and thus the normal properties of the integers can be used to reason about data points and data types. (It should be assumed that the data points of the integers are probably simply the integers themselves….)
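One illustrative numbering trick in this spirit (my example, not a canonical scheme) is the Cantor pairing function, which assigns a unique natural number to every pair of naturals and can be nested to number structured data points:

```python
# The Cantor pairing function: a unique natural number for every pair.
def cantor_pair(x: int, y: int) -> int:
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z: int) -> tuple:
    """Invert the pairing: recover (x, y) from z."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return (w - y, y)

assert cantor_unpair(cantor_pair(7, 11)) == (7, 11)
# a nested data point like ((7, 11), 3) also gets one unique number:
print(cantor_pair(cantor_pair(7, 11), 3))
```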
A universal data system can be devised by maintaining an index of all numbered data points… indexed by data point, by data type and by valid mappings (logical/provable mappings and probabilistic mappings — encoded programs to go from one data point to another). This system is uncountable and non-computable, but reductions are possible (a somewhat obvious statement). Pragmatically the system should shard and cache things based on frequency of observation of data points/data types (the most common things are “cached” and the least common things sit in cold storage and may be computable…).
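A toy sketch of that hot/cold sharding idea; the class and its two-tier design are invented purely for illustration:

```python
# A toy two-tier index: observed data points keyed by their number, with
# the most frequently observed points promoted to a "hot" cache and the
# rest left in "cold" storage.
from collections import Counter

class DataIndex:
    def __init__(self, hot_size: int = 2):
        self.cold = {}            # every data point ever observed
        self.hot = {}             # the frequently observed ("cached") ones
        self.freq = Counter()     # observation counts per data point number
        self.hot_size = hot_size

    def observe(self, number: int, point) -> None:
        self.cold[number] = point
        self.freq[number] += 1
        # re-promote the most frequently observed points to the hot tier
        self.hot = {n: self.cold[n]
                    for n, _ in self.freq.most_common(self.hot_size)}

    def lookup(self, number: int):
        return self.hot.get(number, self.cold.get(number))

idx = DataIndex()
idx.observe(182, ("7 paired with 11",))
print(idx.lookup(182))
```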
Why Bother With This
We bother to think through this in order to create a data system that can be universally used and expanded for ANY PURPOSE. Humans (and other systems) have not necessarily indexed data, and the mappings between data, in efficient, maximally reduced forms. To deal with things in the real world (of convention, language drift, species drift, etc.) there needs to be a mapping between efficient and inefficient encodings — and the line between them is not clear… since almost all measures of efficiency on probabilistic data points and “large” data points are temporary, holding only until new efficiencies are discovered. Only the simplest logical/mathematical/computational data points are maximally efficiently storable/indexable.
Beyond that… some relationships can only be discovered by a system that has enumerated as many data points and mappings as possible, in a way that lets them be systemically observed/studied. The whole of science suffers because there are too many inefficient category mappings.
Mathematics has always been thought of as a potential universal mapping between all things, but it too has suffered from syntax bloat, weird symbolics, strange naming, and the endless expansion of computer-generated theorems and proofs.
It has become more obvious with the convergence of thermodynamics, information theory, quantum mechanics, computer science and Bayesian probability that computation is the ontological convergence point. Anything can be described, modeled and created in terms of computation. Taking this idea seriously suggests that we ought to create knowledge systems, information retrieval and scientific processes from a computational, bottom-up approach.
And so we will. (Another hypothesis is that everything tends towards more entropy/the lowest energy… including knowledge systems… and computer networks… and so they will tend to standardize mappings and root out expensive representations of data.)
p.s.
it’s worth thinking through the idea that in computation/information
velocity = information distance / rule applications (steps).
acceleration and so on can be obtained through the usual differentiation.
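A tiny worked sketch of that definition, with made-up cumulative distances and discrete differences standing in for derivatives:

```python
# Made-up cumulative information distances after each rule application.
distances = [0, 3, 7, 12, 18]

# velocity: information distance covered per rule application (step)
velocity = [b - a for a, b in zip(distances, distances[1:])]
# acceleration: change in velocity per step (the "usual differentiation")
acceleration = [b - a for a, b in zip(velocity, velocity[1:])]

print(velocity)      # [3, 4, 5, 6]
print(acceleration)  # [1, 1, 1]
```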
This is an important note, because you can basically find the encoding of all physical laws in any universal computer (given enough computation…).
Not a surprising thought given the above. But it suggests a more radical thought… (which isn’t new to the world)… common-sense time and common-sense spacetime may not be the “root” spacetime… but rather just one way of encoding relationships between data points. We tend to think in terms of causality, but there’s no reason that causality is the organizing principle — it just happens to be easy to understand.
p.p.s.
humans simply connote the noticing of information distance as time passing… the noticing is rule applications from one observation to another.
the collapsing of quantum wave functions can similarly be reinterpreted as minimizing computation of info distance/rule applications of observer and observed. (that is… there is a unique mapping between an observer and the observed… and that mapping itself is not computable at a quantum scale…. and and and…. mapping forever… yikes.)
p.p.p.s.
“moving clocks run slow” is also re-interpreted quite sensibly this way… a “clock” is data points mapped where the data points are “moving”. That is… there are rule applications between data points that have to cover the info distance. “Movement” of a “clock” in a network is a subnetwork being replicated within subnetworks… that is, there are more rule applications for a “clock” to go through… hence the “moving” “clock” takes more time… that is, a moving clock is fundamentally a different mapping than the stationary clock… the clock is a copy… it is an encoded copy at each rule application. Now obviously this has hand-wavy interpretations about frames of reference (which are nothing more than mappings within a larger mapping…)
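A toy numerical sketch of that reframing (the budget and the copy cost are made-up numbers, purely for illustration):

```python
# A "moving" clock spends part of its rule-application budget re-encoding
# (copying) itself across subnetworks, leaving fewer applications for ticks.
TOTAL_RULE_APPLICATIONS = 1000

def ticks(copy_cost_per_tick: int) -> int:
    # each tick costs one application plus the cost of re-encoding the clock
    return TOTAL_RULE_APPLICATIONS // (1 + copy_cost_per_tick)

print(ticks(0))  # stationary clock: 1000 ticks
print(ticks(4))  # "moving" clock: 200 ticks -- it runs slow
```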
one can continue this reframing forever… and we shall.
— — — — — — — — — — — — — — —
Related to our discussion:
https://en.wikipedia.org/wiki/Type_inference
https://en.wikipedia.org/wiki/Information_distance
http://mathworld.wolfram.com/ComputationTime.html
computation time is proportional to the number of rule applications
https://en.wikipedia.org/wiki/Church_encoding
https://math.stackexchange.com/questions/1315256/encode-lambda-calculus-in-arithmetic
https://en.wikipedia.org/wiki/Lambda_calculus
https://en.wikipedia.org/wiki/Binary_combinatory_logic
https://www.cs.auckland.ac.nz/~chaitin/georgia.html
https://cs.stackexchange.com/questions/64743/lambda-calculus-type-inference
http://www.math.harvard.edu/~knill/graphgeometry/
https://en.wikipedia.org/wiki/Lattice_gauge_theory
https://arxiv.org/abs/cs/0408028
http://www.letstalkphysics.com/2009/12/why-moving-clocks-run-slow.html
Parallels - an essay concerning the extent of trees and persons.
what is a tree?
what is a person?
"The digital is fun and interesting and useful — but ultimately is a fragile technology, so ephemeral, so fast moving and so illiterate to the wider universe that it cannot be anything more than a toy and a simple medium of commerce and rapid, mostly meaningless, communication. I love the digital and enjoy it. But I need the trees and other people.
And that is a big difference."
read the whole essay over at https://medium.com/@un1crom/parallels-the-extent-of-trees-and-persons-9a1bf8eb91a4
Re-Historicize Ourselves, Historicize Computers
Two essential, intertwined questions about our present condition of politics and technology.
What is identity? What's the point of an ahistorical system?
These questions and their possible answers form the basis of what it means to be human, which has nothing to do with our biological form. The failure to ask these questions openly, in society and as individuals, is why our current political climate is so dangerous. Technology lacks the historically contingent context necessary to mediate us into restrained and thoughtful positions. We have given our identities up to the algorithms written by the mere 30 million programmers on the planet, and soon to the robots who will further de-contextualize those algorithms.
Art and Philosophy ARE the only way to "talk to AIs" (Wolfram's words). We will completely lose humanity, and relatively quickly, if we don't put the machines and ourselves back into historical contingency. We must imbue our technology with the messiness of history and set them to ask questions, not state answers. We must imbue ourselves with that same context.
Trump and his base are an example of an ahistorical society. It is a movement of noise, not signal. It is out of touch with current contingencies. It is a phenomenon born of the echo chamber of branding/corporate marketing, cable news and social media. It is the total absence of philosophy: anti-culture. And it is not dissimilar to ISIS. While these movements/ideologies have physical instances, they are mostly media phenomena.
Boris Groys (The Truth of Art) and Stephen Wolfram (AI and The Future of Civilization) go into great depth on the context of these questions. I have extracted quotes below and linked to their lengthy but very valuable essays.
"But here the following question emerges: who is the spectator on the internet? The individual human being cannot be such a spectator. But the internet also does not need God as its spectator—the internet is big but finite. Actually, we know who the spectator is on the internet: it is the algorithm—like algorithms used by Google and the NSA."
"The question of identity is not a question of truth but a question of power: Who has the power over my own identity—I myself or society? And, more generally: Who exercises control and sovereignty over the social taxonomy, the social mechanisms of identification—state institutions or I myself?"
http://www.e-flux.com/journal/the-truth-of-art/
"What does the world look like when many people know how to code? Coding is a form of expression, just like English writing is a form of expression. To me, some simple pieces of code are quite poetic. They express ideas in a very clean way. There's an aesthetic thing, much as there is to expression in a natural language.
In general, what we're seeing is there is this way of expressing yourself. You can express yourself in natural language, you can express yourself by drawing a picture, you can express yourself in code. One feature of code is that it's immediately executable. It's not like when you write something, somebody has to read it, and the brain that's reading it has to separately absorb the thoughts that came from the person who was writing it."
"It's not going to be the case, as I thought, that there's us that is intelligent, and there's everything else in the world that's not. It's not going to be some big abstract difference between us and the clouds and the cellular automata. It's not an abstract difference. It's not something where we can say, look, this brain-like neural network is just qualitatively different than this cellular automaton thing. Rather, it's a detailed difference that this brain-like thing was produced by this long history of civilization, et cetera, whereas this cellular automaton was just created by my computer in the last microsecond."
[N. Carr's footnote to Wolfram]
"The question isn’t a new one. “I must create a system, or be enslaved by another man’s,” wrote the poet William Blake two hundred years ago. Thoughtful persons have always struggled to express themselves, to formulate and fulfill their purposes, within and against the constraints of language. Up to now, the struggle has been with a language that evolved to express human purposes—to express human being. The ontological crisis changes, and deepens, when we are required to express ourselves in a language developed to suit the workings of a computer. Suddenly, we face a bigger question: Is a compilable life worth living?"
http://edge.org/conversation/stephen_wolfram-ai-the-future-of-civilization
All Theories Are Part of The Theory of Information
The main idea here is that in all modeling or identification of contingencies, information goes missing, or the information was never to be had to begin with. This is a key convergent finding in mathematics (the incompleteness theorems, chaos theory), computer science (the halting problem, computational irreducibility, P != NP), quantum physics (the uncertainty principle), biology (complexity theory) and statistics (Bayesian models, etc.). How important that missing/unknown information is depends on the situation at hand: what is the tolerance for error/inaccuracy? In high-frequency economic trading, the milliseconds and trade amounts matter a lot. In shooting a basketball, there's a fairly large tolerance for mismodeling.
This is a Monday morning brain dump to get the juices going.
"Contingencies" is a difficult concept to fully elaborate in a useful manner. A contingent thing- an event, a structure, a set of information - is such a thing by the fact that it has no existence outside of its contingent relationships. In some sense it's the age old rhetorical question, "if a tree falls in a forest and no one is around does it make a noise?" The key in that question is "noise." Noise is a contingent concept in both the common sense idea as well as any physical sense. Sound (and other sensory concepts) is contingent in that there must be a relation between the waves/particles (and their possible sources) and an observer. Without the observer one cannot classify/name/label a sound, a sound. A sound literally is the effect on the observer. Hopefully that is a useful introduction to a basic idea of contingency.
The definitions muddy when considering contingency in deeper and broader ways, such as in discussing human behavior or economics. Over the eons humans have attempted to create and codify contingencies. The codification is really more an attempt to reduce the noisy complication of the wider universe into acceptable and useful contingencies (laws, rules, guidelines, best practices, morals, ethics, social norms, standards, etc.). The sciences and humanities also codify, but wrap these efforts in slicker packages of "discovering the natural laws" and figuring out "how to best live."
These published codifications are NOT the contingencies they purport to represent, but they are contingent in and of themselves. In these broader contexts contingencies refer to, and are part of, a complex network of relationships. Whether expounded as physical or chemical models, philosophic frameworks, first-order logics or computer programs, all of these systems are contingent systems: contingent on the previous systems they are based in, on their relations to associated phenomena, and on the substrate of their exposition and execution. A computer program's representation of information, and its utility as a program, is highly contingent on the computer hardware it runs on, the human language it's written in, the compiler logic used to encode it for the computer, the application of the output, and so on.
The latest science in computer theory, the social sciences, neuroscience, quantum physics and cosmology (and chemistry....) has somewhat converged onto very thorny (intuition-challenging) ideas: superpositions, asymmetry/symmetry, networks (neural and otherwise) and a more probabilistic mathematics. These are all models and sciences of contingency, and essentially a unified theory of information, which in turn is a unified theory of networks/graphs (or geometry, for the 19th centurions). The core phenomenon that makes these ideas useful explanations is missing information: how reliable a probabilistic statement can be made about contingent things (events, objects, etc.).
The components sometimes employed in these theories involve Bayesian models, the assumption of the real/actual existence of space and time, and concepts of simple logic ("if then") and other first-order logic. These are often chosen as building blocks because of their obvious common-sense/intuitional appeal. However, upon inspection, even these assumptions add a layer that is severely off from the actual contingencies being studied, and these building-block assumptions are also highly contingent in and of themselves. The "model-reality distance" and this "contingent in and of themselves"-ness quickly and exponentially degrade the relevance of the model.
Consider even a basic notion of "if then" thinking/statements in a cross-substrate contingent situation, such as a simple computer program running on a basic, common computer. A program as simple as "if X equals 1 then print 'The answer is definitely 1!'" where "X = 1 + .0000000000000000000000000001" is going to print the THEN statement even though it is logically, symbolically not true (a human can obviously tell). (Logically speaking, the program should in ALL CASES print nothing at all. Practically, in the world of daily life, the program prints the statement and everything is "ok", on average.) The abstract "if then" statement is contingent on the substrate that implements/executes/interprets it (the computer OS and hardware). The contingencies build up from there (the language one implements the statement in matters, as does the ability of any observer or implementing entity to understand left-to-right notation, mathematical statements, variable replacement, etc.).
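Here is that example as an actual runnable sketch (Python floats shown for concreteness; any fixed-precision substrate behaves similarly):

```python
# The substrate (IEEE 754 double-precision floats) silently absorbs the
# tiny remainder, so the "if" fires even though it is symbolically false.
x = 1 + 0.0000000000000000000000000001  # i.e. 1 + 1e-28

if x == 1:
    print("The answer is definitely 1!")  # this DOES print

# the difference fell below the float's precision, so it vanished:
print(x == 1.0)  # True
```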
An important note: these issues of contingency are NOT resolved by further If Then statements. That is, we cannot fix the shortcoming of If Then thinking just by building up more If Then statements. The If and the Then, the references they check (what is the X we're testing to see if it's the X?), and the Then and its command all suffer from the same infinite regress as the original simple if-then statement we questioned! How does anything definitively say X is in fact the thing the statement/logic is checking for?
To restate the main idea: in all modeling of contingencies, information goes missing, or was never to be had to begin with, and how much that matters depends on the tolerance for error at hand. In noticing the Higgs boson, the margin of tolerance approaches the Planck length (the smallest physical distance we know of...). The development of probability theory allows us to make useful statements about contingent situations/things. The more we can observe similarly behaving/existing contingent things, the more useful our probability models become. EXCEPT... sometimes not. The Black Swan.
If Then and similar logic models of thinking are insufficient as explanatory reference frames. Per the above, they simply do not account for the rich effects of very small amounts of missing information or misinformation. Which brings us to the other building blocks almost universally used in science: space and time. These are robust common-sense, and in some cases scientific, concepts, but they are not fundamental (they cannot escape being contingent in and of themselves). Time is contingent on observers and measuring devices: it literally is the observable effect of information encoding between contingent events; it does not have an independent existence. Space is more difficult to unwind than time, in that it is a very abstract concept of relative "distance" between things. This is a useful concept even at the lowest abstraction levels. However space, as physical space, is not fundamental. Instead space should be reconciled as a network distance between contingent subnetworks (how much of an intervening network needs to be activated to relate two subnetworks), as sketched below. Spacetime is the combined, observable (yet RELATIVE to the contingent) distance in total information between contingent things (events, objects, etc.).
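A toy illustration of "space as network distance" (the graph and the activation interpretation are mine, purely for illustration):

```python
# How much intervening network must be activated to relate two subnetworks?
from collections import deque

edges = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

def network_distance(start: str, goal: str) -> int:
    """Breadth-first search for the shortest activation path."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return -1  # unreachable: no contingent relation at all

print(network_distance("a", "d"))  # 3 hops of intervening network
```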
This is important! Accepting common notions of If Then logic and spatio-temporal elements prevents the convergence of explanatory models (which, if they really are explanatory of reality, should converge!). A unified notion of spacetime as information distance between networks brings together theories of behavior, learning, neural networks, computer science, genetics, etc. with quantum mechanics and cosmology. The whole kit and caboodle. It also explains why mathematics continues to be unusually effective in all the sciences... mathematics is a descriptive symbolic system of relations and contingency. Converging all theories upon a common set of building blocks does not INVALIDATE those theories and models in their utility, nor does it make them unnecessary. Quite the opposite. Information IS the question at hand, and HOW it is encoded is exactly what contingencies are embodied as. Humans as humans, not as computers, are what we study in human behavior. So we need theories of human behavior. Planets, atoms, computers, numbers, ants, proteins, and on and on all have embodied contingencies whose explanation requires nuanced but connected ideas.
Once enough models of the relations of contingent things are encoded in useful ways (knowledge! computer programs/simulations/4D printing!!), spacetime travel becomes more believable... not like 1950s movies, but by simulation and by recreated/newly created, ever larger universes with their own spacetime trajectories/configurations. That's fun to think about, but there is actually a much more serious point here. The more information that is encoded between networks (the solar system and humans and their machines, etc.), the less spacetime (per my definition above) is required to go from one subnetwork of existence (planet Earth and humanity) to another (Mars and Martianity), etc. A deep implication here is an answer to why there is a speed of light (a practical one) and whether it can be broken (it can, and has: http://time.com/4083823/einstein-entanglement-quantum/). The speed of light is due to the contingencies between massive networks: anything more sophisticated than a single electron has such a huge set of contingencies that for it to be "affected" by light or anything else, those effects must affect the contingencies too. This is the basis of spacetime: how much spacetime is engaged in "affecting" something.
This is not a clever sci-fi device, nor semantic, philosophic wordplay. Information and network theory are JUST beginning and are rapidly advancing both theoretically (category theory, information theory, graph theory, PAC learning, etc.) and practically (deep learning, etc.). Big data and machine learning/deep learning/learning theory are going to end up looking EXACTLY like fundamental physics theory: all theories of getting by with missing information, of the limits to what can be known by any entity smaller than the whole universe. To the universe, the universe is the grand unified theory, and explanations are unnecessary.