Certainty : safety :: variety : aesthetics

YouTube knows what books I buy. It has algorithms for that – predictive marketing. And so, it recently recommended videos with Daniel Kahneman, Malcolm Gladwell, Daniel Goleman, and an awesome TED talk titled The surprising science of happiness (2012) by Harvard psychologist Dan Gilbert on the meaningfulness of synthetic happiness versus natural happiness.

The algorithms worked. I liked the recommendations.

I still haven’t finished Daniel Goleman’s Focus (2013). Today I will. Today. Yesterday I also broke an 18-day chain of reading for at least 30 min/day. But I won’t miss two days in a row! I’m getting right on that. My other habits of exercising and writing every day are still going strong.

I like analogies, metaphors, comparisons; hence, the title. I heard Tony Robbins explain once that learning simply means connecting what one already knows to something he doesn’t yet know; hence, analogies, metaphors, comparisons.

Entrepreneur and neuroscientist Beau Lotto, in his book Deviate (2017), illustrates how the brain does not make giant leaps. It takes small steps. Much like crossing a room, one must put one foot in front of the other to get there. A particular connection of ideas may seem like a giant leap to outsiders, but in the mind of the person making the connection, it represents the next logical step in his thinking.


Among other videos as part of my morning routine, I listened to Malcolm Gladwell on engineering hits (2014). He shares a story of two businesses that have developed algorithms to identify, predict, and ultimately control for success; one for success in movies, the other for success in music. The algorithms resulted from neural networks, or machine learning. The idea is simple:

(1) Define the question such that one can answer it through objective measurement – break it down to math

(2) Obtain a very, very large data set

(3) Train the neural network or machine by identifying what is versus what is not (e.g., a successful movie script or musical composition)

(4) Continually run tests to assess the training and refine the algorithm

(5) Release the algorithm into the wild to predict the future
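The five steps above can be sketched with a toy example. This is not the actual system either business built – just a minimal perceptron in plain Python, trained on invented two-feature data (say, a song’s tempo and “hook density”), to show the training loop in miniature:

```python
# Toy data for step (2): each example is (features, label).
# Features are invented; label 1 = hit, 0 = miss. A real data set
# would be very, very large -- ours is deliberately tiny.
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.8), 1),
    ((0.2, 0.1), 0), ((0.3, 0.2), 0), ((0.1, 0.3), 0),
]

def predict(weights, bias, x):
    """Step (1): the question reduced to math -- a weighted sum."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

def train(data, epochs=20, lr=0.1):
    """Steps (3)-(4): show labeled examples repeatedly and nudge the
    weights whenever the current prediction is wrong."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            error = y - predict(w, b, x)
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

w, b = train(data)
# Step (5): release it into the wild -- score an unseen composition.
print(predict(w, b, (0.85, 0.75)))  # prints 1, i.e., predicted hit
```

A single perceptron is the smallest possible “neural network”; the real systems Gladwell describes stack many such units and learn far richer features, but the train-test-refine loop is the same.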

If I knew how to use Google’s TensorFlow, I’d most definitely use it. It’s Google’s “open-source machine learning framework for everyone.”


Toward the end of the video of Malcolm Gladwell’s speech, the talk concludes that systematizing art results in a tragedy. The algorithm might create wealth for its producers, but being an algorithm, its products remain formulaic. They may succeed in the marketplace, yet they take away the joy of creativity and the thrill of risk-taking.

The algorithms that the two businesses developed proved accurate to an unbelievable degree. Yet the developers admitted that some of their favorite movies and songs didn’t score well at the box office or on the Billboard charts. Malcolm Gladwell pointed out that his interviewees felt sad at the idea of taking some of their favorite works of art and re-engineering them into hits in accordance with their respective algorithms.

To that extent, maybe yes. Maybe one can become too successful in systematizing art. In a way, doing so relegates the uniquely human expression of artistic creativity to machines. The idea feels cold and empty – hardly alive, let alone human.

On the higher planes of Maslow’s hierarchy, maybe we don’t really want to systematize those parts of the human experience. We want some uncertainty there. Variety.


What about in those areas lower on Maslow’s hierarchy (e.g., safety, security, basic survival)? Would we disagree about certainty in those areas?

Take predictive policing, for example. If I created an algorithm to stop sex offenders from doing harm, would anyone argue that the offender deserves the freedom to pursue his creative expression? I doubt it.

Or terrorism? Or murder? Or car theft? Or arson? Or whatever other crime?

I believe we can do this now. With enough data and computing power, we can distinguish a child predator from everyone else based on his digital fingerprint alone. If we can train YouTube or Amazon or Google to predict what I’m going to search for, why can’t we use the same approach to predict crime? We can. We do.

John Stuart Mill famously authored five methods for discerning causation from correlation. Simply put: look for what all cases have in common (agreement), what differs between otherwise similar cases (difference), how the effect varies as a suspected cause varies (concomitant variation), and what remains after eliminating known causal connections (residues) – the fifth method combines agreement and difference.
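Two of Mill’s methods reduce neatly to set operations. A toy sketch in Python – the case data and circumstance names are invented, purely to illustrate the logic, not any real criminological model:

```python
# Circumstances present in each case where the effect (a crime) occurred.
cases_with_effect = [
    {"night", "alcohol", "prior_offense", "rain"},
    {"night", "alcohol", "prior_offense", "crowd"},
    {"night", "alcohol", "prior_offense"},
]
# Circumstances present in cases where the effect did NOT occur.
cases_without_effect = [
    {"night", "rain"},
    {"alcohol", "crowd"},
]

# Method of agreement: keep circumstances common to every positive case.
agreement = set.intersection(*cases_with_effect)

# Method of difference: eliminate anything that also appears when the
# effect is absent -- a crude process of elimination.
candidates = agreement - set.union(*cases_without_effect)

print(sorted(candidates))  # prints ['prior_offense']
```

Real predictive systems replace the crisp sets with statistical weights, but the underlying move – intersect the positives, subtract the negatives – is the same.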

And voilà! A system of systems to liberate victims of abuse, to formalize what we probably should formalize. If you’ve seen Minority Report (2002), you can probably see how such a system also opens up new forms of abuse. That’s true of any decision. The better question is: is it worth it? Do the benefits of such a system outweigh its costs relative to the next best decision?


Given the nature of my job, I see daily how the military values certainty over variety. Just take a look at the word uniform itself, or the word regimented.

The higher a military member climbs, the more stifling this feels. Early on, he may feel a certain relief in the freedom from having to think by simply following orders, but as he grows into leadership positions, certainty feels like distrust – especially when he clearly sees that he deserves the discretion to decide certain things.