The process of making art beguiles me.
It’s simple in a lot of ways — make marks somewhere until what’s there is interesting. Then again, there are issues. Issues of uniqueness, interestingness, self-indulgence, meaning, distraction, triteness emerge repeatedly. Worse yet, for me, a person who’s spent too much energy reflecting on free will or the lack thereof, it’s unclear where the act of creation, the artistic act, actually takes place.
Obviously all that is easy to dismiss. The silly meanderings of a person doing art — an artist — navel gazing. Do the work. Do work. The powerful antidote to all creative existential questions. There’s some value in that slogan. Certainly a good motivator when there’s literally no movement at all. But it can also be a nudge into directionless and poorly executed work. Or worse, it can lead to sustained ruts of facsimile or inefficient messes.
Or it can lead to raw expressions of the more primal kind. “Do work” can be the stuff that’s been there for a while but needed permission to just come out. “Do work” may be what surfaces the most essential ideas.
And maybe it’s the only thing that can be done. It’s possible that all the other considerations of style, analysis, efficiency, meaning, symbolism, form, and composition are merely warm-ups — the calisthenics of artful mark making.
My deepest suspicion is that the last paragraph above is the most correct notion. And not because there’s some logical, linguistic argument or a rational art history point, but because the world itself, reality in whole and in part, doesn’t have the ideas of meaning, uniqueness, interestingness, composition in all the lovely artistic senses. The world just is. What we see, hear, taste, touch, smell, feel, observe, sense just is — filtered, in relation, in biased relation, in more or less organized ways. Observation from object through senses to surface seems as reasonable a way to make marks as carefully planning things out through imagination and technical execution. Perhaps, though, it’s just no worse an approach; not better, just no worse.
Hilariously, I find my own argument above to just Do Work incredibly lazy. It is the ultimate justification to just do work without understanding what one is doing. And in a doubly hilarious move, I think it’s lazy only until you Do Enough Work that the Doing Work accidentally emerges into understanding.
Insights come in unexpected moments and, in art practice, are rarely noticeable by an audience. Until enough work is done. So art, as a dialog between artist and audience, requires Double Doing Work — enough work that insight occurs to the artist, then enough work to render that insight readily to the audience. Yikes. There might actually be three times the Do Work required when considering that the audience’s exposure to the art is filtered by algorithms. An artist must do work for their own insight, do work for the audience’s insight, AND do work for the algorithm’s insight on connecting artist and audience.
Apparently Doing Work is an endless recursion into More Work.
It’s quite possible art is never actually created.
I have to go consider the meaning of all this.
Parallels - an essay concerning the extent of trees and persons.
what is a tree?
what is a person?
"The digital is fun and interesting and useful — but ultimately is a fragile technology, so ephemeral, so fast moving and so illiterate to the wider universe that it cannot be anything more than a toy and a simple medium of commerce and rapid, mostly meaningless, communication. I love the digital and enjoy it. But I need the trees and other people.
And that is a big difference."
read the whole essay over at https://medium.com/@un1crom/parallels-the-extent-of-trees-and-persons-9a1bf8eb91a4
The Data Science of Art or Pictures of Data about Pictures
My larger theory is that data=art and art=data and that Artificial Intelligence will be nothing more or less than a continued exercise in artistic-historical integration of new mediums and forms. However this post isn't another rehash of those ideas. This one is about the data of art.
Here I offer some insights from computationally analyzing art (my own, plus pointers to others' analyses). There are quite a few excellent and very detailed data science analyses of art that have come out recently, due to the fact that more and more collections are being digitized and data faceted. Here's a fantastic write-up of MoMA's collection. Michael Trott offers a very detailed analysis of ratios and other hard facets of a couple hundred years of visual art. Last year Google made waves with Deep Dream, which was really an analysis of how neural networks work. Turn that analysis on its head and you get art creation (my first point above... the lines between art and data are more than blurry!)
On to some pictures depicting data about pictures!
A cluster analysis of 534 of my visual works
While an artist has intimate knowledge of their own work, it is likely highly biased knowledge. We often become blind to our habits and tend to forget our average work. Doing a large-scale, unemotional analysis of all the art is enlightening.
The cluster analysis above was created using machine learning to cluster images by similar features (color, composition, size, subject matter, etc.) (I did everything in Mathematica 11/Wolfram Language). A cursory view shows density in deep reds and faces/portraits. My work tends to be more varied as I move into dry media (pencil, pastels) and remove the color (or the algorithms simply don't find as much similarity). And, of course, the obvious note here is that this is just a quick view without attempting to optimize the machine learning or feed it cleaned-up images (color correction etc.). What else do you notice about the general shape of my work? (I am currently running a larger analysis on 4500 images instead of just 530, curious if things will look different.)
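For the curious, a minimal Wolfram Language sketch of this kind of feature-based clustering might look like the following (the folder path is a placeholder, not my actual archive, and the feature extraction is left to the built-in defaults):

(* Hypothetical sketch: cluster a folder of images by automatically extracted features. *)
imgs = Import /@ FileNames["*.jpg", "~/art-archive"];  (* placeholder folder *)
small = ImageResize[#, 256] & /@ imgs;                 (* shrink for speed *)
FeatureSpacePlot[small]                                (* lay similar works near each other in 2D *)
clusters = FindClusters[small];                        (* group works by those same learned features *)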
Drilling in a bit we find some curiosities. Below we see a couple of deeper dives into particular images where we break down the images' geometry, detection of objects/subjects, and a little bit of color analysis. What I find most interesting is just how often similar compositions show up across varied subject matter. Not terribly surprising, as it's all just marks on a page, but it is interesting just how biased I am toward certain geometries.
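One way to get a rough per-image breakdown with built-in functions (not necessarily the exact pipeline used for the figures, and the results are as loose as noted further down):

(* Rough per-image breakdown: subject guess, palette, faces, and line geometry for one work. *)
img = First[small];
ImageIdentify[img]       (* best-guess subject/object label *)
DominantColors[img, 5]   (* the five most prominent colors *)
ImageLines[img]          (* detected straight lines, a crude proxy for composition *)
FindFaces[img]           (* bounding boxes of any detected faces *)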
I tend to love strong crossing lines, or lines that run directly vertical or horizontal. In my own art history studies we find this isn't necessarily optimal for keeping a viewer's attention. Often these geometries are taking eyes right off the page.
Below is a bit of a detailed view of a few images and their 4 closest neighboring images/works. Pretty weird in a lot of ways! I am heartened to see that over an 18-month period of work I haven't particularly sunk into a maximum or minimum of ideas. I can see many older works in a cluster with newer works as old themes/ideas resurface and are renewed, often without active thought. Another study I should do is to sort these out by the source of the picture, as I can tell that photos taken from phones and posted to Instagram etc. often distort what was really there (not a bad thing, just something to analyze to measure effects/artifacts).
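The neighbor views come from a simple nearest-image query; one way to sketch it (the choice of ImageDistance as the metric is an assumption, and other metrics would change the neighborhoods):

(* Sketch: the four visually closest works to a given image. *)
neighbors[img_] := Rest @ Nearest[small, img, 5, DistanceFunction -> ImageDistance]
neighbors[First[small]]  (* Rest drops the query image itself, which is always the nearest *)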
I'm going to continue drilling into all this and do a full color spectrum analysis, histograms of subject matter, categorization of mediums and materials, etc. The point is not to find a formula but instead to learn, become aware, and enjoy. I do not find data forensics to be limiting or mechanical. You can see in the above that even the math is fairly interpretive and loose - the object identification is often off, faces are often missed, images are misgrouped... OR are they?
One of my recurring themes in my art exploration is a simple question: "what art entertains a computer/algorithm?" Or, more provocatively: what if to make art is to be human, and being human is really just data science....
All Theories Are Part of The Theory of Information
This is a Monday morning brain dump to get the juices going.
"Contingencies" is a difficult concept to fully elaborate in a useful manner. A contingent thing- an event, a structure, a set of information - is such a thing by the fact that it has no existence outside of its contingent relationships. In some sense it's the age old rhetorical question, "if a tree falls in a forest and no one is around does it make a noise?" The key in that question is "noise." Noise is a contingent concept in both the common sense idea as well as any physical sense. Sound (and other sensory concepts) is contingent in that there must be a relation between the waves/particles (and their possible sources) and an observer. Without the observer one cannot classify/name/label a sound, a sound. A sound literally is the effect on the observer. Hopefully that is a useful introduction to a basic idea of contingency.
The definitions muddy when considering contingency in a deeper and broader way, such as in discussing human behavior or economics. Over the eons humans have attempted to create and codify contingencies. The codification is really more an attempt to reduce the noisy complication of the wider universe into acceptable and useful contingencies (laws, rules, guidelines, best practices, morals, ethics, social norms, standards, etc). The sciences and humanities also codify but wrap these efforts in slicker packages of "discovering the natural laws" and figuring out "how to best live."
These published codifications are NOT the contingencies they purport to represent, but they are contingent in and of themselves. In these broader contexts contingencies refer to and are part of a complex network of relationships. Expounded as physical or chemical models, philosophic frameworks, first order logics, or computer programs, all of these systems are contingent systems in the sense of their basis in previous systems, their relations to associated phenomena, and the substrate of their exposition and execution. A computer program's representation of information and its utility as a program is highly contingent on the computer hardware it runs on, the human language it's written in, the compiler logic used to encode it for the computer, the application of the output, and so on.
The latest science in computer theory, social sciences, neuroscience, quantum physics and cosmology (and chemistry....) has somewhat converged onto very thorny (challenging to the intuition) ideas of superpositions, asymmetry/symmetry, networks (neural and otherwise) and a more probabilistic mathematics. These are all models and sciences of contingency, and essentially a unified theory of information, which in turn is a unified theory of networks/graphs (or geometry, for the 19th centurions). The core idea that makes these explanations useful is one of missing information: how reliable a probabilistic statement can be made about contingent things (events, objects, etc.).
The components of the models that are sometimes employed in these theories involve Bayesian models, the assumption of the real/actual existence of space and time, and concepts of simple logic ("if then") and other first order logic concepts. These are often chosen as building blocks because of their obvious common sense/human intuitional connection. However, upon inspection even these assumptions add a layer that is severely off from the actual contingencies being studied, and these building block assumptions are also highly contingent in and of themselves. The "model reality distance" and the "contingent in and of themselves"-ness quickly, exponentially erode the relevance of the model.
Consider even a basic notion of "if then" type thinking/statements in a cross-substrate contingent situation - such as a simple computer program running on a basic, common computer. A program as simple as "if X equals 1 then print 'The answer is definitely 1!'. X = 1 + .0000000000000000000000000001" is going to print the THEN statement even though it's logically, symbolically not true (a human can obviously tell). (The program in ALL CASES should print nothing at all, logically speaking. Practically, in the world of daily life, the program prints the statement and everything is "ok", on average.) The abstract "if then" statement is contingent on the substrate that implements/executes/interprets it (the computer OS and hardware). The contingencies build up from there (the language one implements the statement in matters, the ability of any observer or implementing entity to understand left-to-right notation, mathematical statements, variable replacement, etc).
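To make that concrete, here is the same toy program as a Wolfram Language sketch; machine-precision arithmetic silently swallows the tiny addend, so the Then branch fires even though the symbolic claim is false:

(* Machine precision cannot represent the tiny remainder, so x collapses to exactly 1. *)
x = 1. + 1.*^-28;
If[x == 1, Print["The answer is definitely 1!"]]
(* prints the message, even though symbolically 1 + 10^-28 is not equal to 1 *)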
An important note: these issues of contingency ARE NOT further If Then statements. That is, we cannot resolve the shortcoming of If Then thinking by just building up all of the If Then statements. The If and the Then, and the references they are checking (what is the X we're testing, and is it in fact that X?), and the Then and its command, all suffer from the same infinite regress as the original simple if-then statement we questioned! How does anything definitively establish that X is in fact the thing the statement/logic is checking for?
The main idea here is that in all approaches to modeling or identifying contingencies, information goes missing, or the information was never to be had to begin with. This is a key convergent finding in mathematics (incompleteness theorems, chaos theory), computer science (the halting problem, computational irreducibility, P != NP), quantum physics (the uncertainty principle), biology (complexity theory) and statistics (Bayesian models, etc). How important that missing/unknown information is to a situation is contingent on the situation at hand - what is the tolerance of error/inaccuracy. In the case of high frequency economic trading, the milliseconds and trade amounts matter a lot. In shooting a basketball, there's a fairly large tolerance margin of mismodeling. When detecting the Higgs boson, the margin of tolerance is almost the Planck length (the smallest physical distance we know of...). The development of probability theory allows us to make useful statements about contingent situations/things. The more we can observe similarly behaving/existing contingent things, the more useful our probability models become. EXCEPT... Sometimes not. The Black Swan.
If Then and similar logic models of thinking are insufficient as explanatory reference frames. Per the above, they simply do not account for the rich effects of very small amounts of missing information or misinformation. Which brings us to the other building blocks almost universally used in science - space and time. These are robust common sense and, in some cases, scientific concepts, but they are not fundamental (in that they cannot escape being contingent in and of themselves). Time is contingent on observers and measuring devices - it literally is the observable effect of information encoding between contingent events; it does not have an independent existence. Space is more difficult to unwind than time in that it is a very abstract concept of relative "distance" between things. This is a useful concept even at the lowest abstraction levels. However space, as physical space, is not fundamental. Instead space should be reconciled as a network distance between contingent subnetworks (how much of an intervening network needs to be activated to relate two subnetworks). Spacetime is the combined, observable (yet RELATIVE to the contingent) distance in total information between contingent things (events, objects, etc).
This is important! Accepting common notions of If Then logic and spatiotemporal elements prevents the convergence of explanatory models (which, if they are really explanatory of reality, should converge!). A unified notion of spacetime as information distance between networks brings together theories of behavior, learning, neural networks, computer science, genetics, etc. with quantum mechanics and cosmology. The whole kit and caboodle. It also explains why mathematics continues to be unusually effective in all sciences... Mathematics is a descriptive symbolic language of relations and contingency. Converging all theories upon a common set of building blocks does not INVALIDATE those theories and models in their utility, nor does it make them unnecessary. Quite the opposite. Information IS the question at hand, and HOW it is encoded is exactly what contingencies are embodied as. Humans as humans, not as computers, are what we study in human behavior. So we need theories of human behavior. Planets, atoms, computers, numbers, ants, proteins, and on and on all have embodied contingencies that explanation requires be understood in nuanced but connected ideas.
Once enough models of the relations of contingent things are encoded in useful ways (knowledge! Computer programs/simulations/4d printing!!), spacetime travel becomes more believable... Not like 1950s movies, but by simulation and recreated/newly created, ever larger universes with their own spacetime trajectories/configurations. That's fun to think about, but there is actually a much more serious point here. The more information that is encoded between networks (the solar system and humans and their machines, etc.), the less spacetime (per my above definition) is required to go from one subnetwork of existence (planet earth and humanity) to another (Mars and martianity), etc. A deep implication here is an answer to why there is a speed of light (a practical one) and whether it can be broken (it can, and has: http://time.com/4083823/einstein-entanglement-quantum/). The speed of light is due to the contingencies between massive networks - anything more sophisticated than a single electron has such a huge set of contingencies that for it to be "affected" by light or anything else, those effects must affect the contingencies too. This is the basis of spacetime: how much of it is engaged in "affecting" something.
This is not a clever sci-fi device, nor a semantic, philosophic word play. Information and network theory are JUST beginning and are rapidly advancing both theoretically (category theory, information theory, graph theory, PAC, etc.) and practically (deep learning, etc.). Big data and machine learning/deep learning/learning theory are going to end up looking EXACTLY like fundamental physics theory - all theories of getting by with missing information, or a limit to what can be known by any entity smaller than all the universe. To the universe, the universe is the grand unified theory, and explanations are unnecessary.