On Serendipity

Ever since I read Anne Fadiman’s At Large and at Small, I’ve been enamored of the idea of writing familiar essays. A familiar essay is a medium-length piece of writing that’s primarily a personal reflection on a specific, almost whimsical, topic. Hence this first attempt: a series of musings on serendipity, on creating and seeking out opportunities, and finally on applying a mathematical technique called simulated annealing to optimizing life.

For years, I meant to read Yuval Noah Harari’s magnum opus Sapiens, but never got around to it. Some hilarity ensued, because I knew the premise of the book and had read some excerpts online. I started brandishing opinions (“you know, Sapiens is so much better than the sequel, Homo Deus”), until it all came crashing down when someone asked me if I’d read the book at all. Since then, I’ve made a mental note to myself not to talk about something unless I’ve read the original source.

When I went home in December 2018, I realized that we had a copy of Sapiens at home, and my dad said, “you should read this. It’s great! He explains so many of our behaviors with historical and scientific arguments.” Caught up in jet lag, I read it quickly. At the same time, I read another book from the home bookshelf, Paul Davies’ The Mind of God. It’s a popular physics/science book about things we take for granted, such as the fact that the world obeys mathematical laws at all. It is also metaphysical in the literal sense of being about physics: why and how do we observe physical laws at all, and why these specific ones?

Anyway, why am I talking about these two seemingly unrelated books? At that particular point in my PhD, I was thinking (and still am) about where we stand in the grand scheme of things, and about whether or not pursuing science research is really a noble, selfless thing to do. Sapiens disabused me of this notion: we’re doing science only because someone is willing to pay for it, which means that we are primarily doing it for those people and agencies. Scientists (and sometimes the rest of humanity) like to romanticize things by framing science as a “pursuit of ultimate truth”, or a natural, elegant and honorable thing to do. Ever since I started my PhD in the USA, I’ve realized how much of the research funding in my field comes from the US Departments of Defense and Energy and from the research arms of the army, navy and air force. If these are the agencies funding my work, they expect to be able to put it to use. I’ve glossed over the part where I mentioned Sapiens disabusing me of my notion of ideal science; that’s something for another post.

For me, the standout part of The Mind of God was the section on algorithmic complexity. This is a mathematical construct, quantified by something called Kolmogorov complexity. The Kolmogorov complexity of a string of data is the length of the shortest computer program that can generate the string as its output: a billion repetitions of “01” can be produced by a tiny program, while a truly random string of the same length admits no description shorter than itself. It’s a powerful concept but useful mainly as a thought experiment, since it is uncomputable in general. Still, learning about it started tying together some strands of research topics I had been thinking about at the time. In the book, the concept came up when discussing the possibility that the universe is a large computer and that we live in it as a simulation, or computer program. Thinking about things this way reminded me of our humility as a species; we can aspire to control physical matter via physical laws, but we are at the mercy of how the universe behaves. The laws of physics and mathematics will, most likely, exist in the same form whether or not we probe them.

I like to read multiple books at the same time, and sometimes it pays me back handsomely. This was one such instance. For a few hours, I would read Sapiens and think about our grandiose notions about ourselves and our pursuit of ever-increasing technology (and insecurity/anxiety); then, for a subsequent few hours reading The Mind of God, I would think about our place in the cosmic ballet of equations and laws, wondering if we are mere puppets. These concepts merged (for me) very well, which meant that I got more out of the two books together than I would have reading each separately. Both books also addressed questions and concerns I had about (a) my place, and humanity’s place, in the world, and (b) why I should continue to pursue science.

Somehow, the earlier bad decision not to read these two books ended up helping me: because of that seemingly irrational choice, I extracted valuable insights from both, and in synergy. While I don’t want to advocate making bad decisions and hoping they’ll repay you later, I will say this: sometimes, make the irrational decision. I’ve been meaning to write about creating opportunities and leaving yourself open to serendipitous effects for a few months now, and I think I finally have a good analogy to convey my point.

Simulated annealing is a nature-inspired optimization algorithm that I learnt about in one of my undergrad courses. Say you have a function f whose minimum you wish to find. Start at a random initial point, and keep moving in a direction in which the function’s value decreases. Eventually, you will hit a minimum. However, it might be a “local” minimum, not the overall (global) minimum of the function. To remedy this, simulated annealing proposes a tweak: at each step, instead of always moving in a direction of decreasing f, sometimes accept a move that makes things worse and increases f. The name comes from annealing in metallurgy, where a metal is cooled slowly so that it settles into a low-energy state; in the algorithm, the probability of accepting a worse solution decreases over time according to a “cooling schedule”.

Basically, simulated annealing says: to get the best solution, sometimes accept worse solutions so that you don’t get stuck in a narrow valley of the function’s landscape.
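The loop above can be sketched in a few lines of Python. Everything here, including the double-well function `bumpy` and the particular temperature schedule and parameters, is my own illustrative choice, not taken from any textbook implementation:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=5.0, cooling=0.9995, iters=20000):
    """Minimize f, occasionally accepting worse moves to escape local minima."""
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t0  # "temperature": high early on, cooled gradually
    for _ in range(iters):
        cand = x + random.uniform(-step, step)  # propose a nearby point
        fc = f(cand)
        # Always accept improvements; accept worse moves with probability
        # exp(-(increase)/t), which shrinks as the temperature cools.
        if fc <= fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling  # cooling schedule: worse moves become rarer over time
    return best_x, best_fx

# A double-well function: a shallow local minimum near x = +1 and the
# deeper, global minimum near x = -1.
def bumpy(x):
    return (x * x - 1) ** 2 + 0.3 * x

random.seed(0)  # fixed seed so the run is reproducible
x_min, f_min = simulated_annealing(bumpy, x0=1.0)
```

Starting at x = 1, pure downhill motion would settle into the shallow basin on the right; the occasional uphill moves let the search hop over the barrier at x = 0 and find the deeper basin on the left.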

How does this apply to serendipity? We all have priorities and goals, and we want to spend most of our working time moving towards them. We don’t want to spend time on things that don’t have a “return on investment”. And that is natural: a lofty goal requires many hours, weeks and sometimes years of focused attention, and being sidetracked does not help. However, some of the best creative things come from chance encounters. And how do chance encounters arise? Whenever the outcome is positive for us, we say that they arise serendipitously.

This means that cultivating serendipity is one of the most rewarding things we can do for ourselves and our goals. Many creative processes are highly nonlinear, and sometimes all it takes is a single atomic idea to bridge the gap between problem and solution. One nice way of thinking about serendipity is via the simulated annealing algorithm.

We’re all on a quest to optimize our lives. We want our lives to be as good as possible given the circumstances, and we want to minimize regret. However, if we get stuck in “local minima”, we may never reach our full potential. So how do we find the “best possible solution”, the “global minimum”? Simulated annealing!

Every once in a while, accept a task, invitation, or engagement that doesn’t tie in to your narrow goal. This gives you a broader, bird’s-eye picture of the landscape you are in, which is exactly the point. Be systematic about generating the chance encounters that further your progress towards your goals: attend talks in disparate fields, read books that you enjoy but won’t necessarily “use”, meet people and ask them about their work rather than talking about your own. I have found that most of the time these experiences are enjoyable when I go in with this mindset, and every once in a while, they help me connect the dots in a way I never even thought of. If that isn’t worth it, what is? In the end, even simulated annealing came from a merging of optimization theory and …metallurgy. :)