When I was a small boy I used to spend hours playing the “war” card game. The thing about this game is that it’s totally predictable and deterministic: each player gets a shuffled half of the deck, and cards are played one by one in a fixed order. No decision or skill is involved whatsoever; the result is completely determined by the pack you were dealt, so we could just as well determine the winner by a coin toss. It really is a stupid game when you think about it, assuming you value games whose result can be influenced. But as kids, we neither understood that nor cared, and so it was a really fun game; we would sit for hours and watch the decks play out, with every bit of the emotional rollercoaster adults have watching or playing games of the more sophisticated variety. The Buddhists might say life itself isn’t so different: we should let go of our illusions of control, relax, and enjoy seeing the decks play out; viewed in this sense, I now find the game highly educational.

Unfortunately, some of the lessons of the game do not appear to have taken hold in the general population nor in government circles. Take professional sports, for example. Whenever an Israeli athlete happens to win a medal - say, a silver Olympic medal in some esoteric field like judo or sailboarding - the news fills with politicians calling for an increase in the sports budget, citing how “investment will lead to more medals”. Sounds logical, right? Except it isn’t. No amount of money will significantly change the outcome, and this should be pretty clear to everyone involved. Top performance in professional sports is basically an outlier phenomenon: people with abnormal genetics and abnormal lifestyles. Thus a country’s performance in the Olympic games is dictated by the size of its population (more people, more outliers), the genetic composition of the population (East Africans will dominate running events; it’s just the way it is) and the methods used for training and conditioning (legal and illegal).
Note that the methods of training and conditioning are widespread and everybody converges on them, so you can gain a few years’ lead at best. So unless you change the genetic composition of the population or the population size, there’s really nothing you can do to improve your results - except, of course, importing already-proven athletes from abroad; unsurprisingly, many “Israeli” sports champions are, well, not Israeli. Note that while you cannot improve the results, you can easily degrade them by not drawing athletes from the population, as is the case in India - they simply don’t care enough about Olympic medals to draw outliers from their vast population. But with a population that is just 0.125% of the world population, there is only so much you can do. So what’s the point of investing in national sports teams? There isn’t one; you might as well toss a coin - just like the war card game. Or you could relax and enjoy the show.
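The “more people, more outliers” argument can be made concrete with a back-of-the-envelope calculation. This is a sketch, not data: the talent probability and the population figures below are illustrative assumptions, chosen only to show how strongly the odds of producing a single outlier depend on population size.

```python
def chance_of_outlier(population: int, p: float = 1e-7) -> float:
    """Probability that at least one person in a population of the
    given size clears a rare 'medal-level' talent threshold that any
    individual clears with probability p (p is an assumed, made-up
    rate for illustration only)."""
    return 1 - (1 - p) ** population

# Rough, illustrative population sizes:
small_country = chance_of_outlier(9_000_000)      # ~Israel-sized
large_country = chance_of_outlier(1_400_000_000)  # ~India/China-sized
```

With these made-up numbers the small country has roughly even odds of containing even one such outlier, while the large country contains some with near certainty - and no budget line in the world changes `p` for a whole population.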
This lesson should have been obvious, but sadly many decision makers are completely unaware of it, leading to what I call “management by prayer”: hoping that doing more of something that has absolutely no impact on the system will somehow yield results. Again and again we see managers and executives pursuing futile agendas without realizing that the large-scale results are completely determined by distributions outside their control, and the maximal effect of their particular schemes is negligible at best. But if this is so obvious, why are people so easily fooled into believing management by prayer works? The answer is scaling. For a single athlete, funding and training definitely make a difference (though it levels off at some point) - so decision makers assume the same effect will hold for a population of athletes. Assuming that the methods that work at small scale will work at large scale is a common fallacy; different methods are required, and often a completely opposite strategy. For example, the laptop I’m working on is a relatively expensive model, as I completely depend on it for my work - but if you run a 100,000-node server farm you would be a fool to buy expensive machines; the reliability strategy used at large scale is based on redundancy rather than the survival of individual machines. In other words, at scale you capitalize on the distributions of the system, or change something fundamental that hopefully impacts those distributions; manipulating individual components is like draining the desert one sand grain at a time.
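The laptop-versus-server-farm point is just arithmetic, and it’s worth seeing the numbers. A minimal sketch, with illustrative availability figures I’ve assumed for the comparison (they are not vendor numbers): one premium machine versus several cheap replicas serving the same role.

```python
def availability_single(a: float) -> float:
    """Availability of a single machine with per-machine availability a."""
    return a

def availability_redundant(a: float, replicas: int) -> float:
    """Availability of a redundant group: the service is up as long as
    at least one of the independent replicas is up."""
    return 1 - (1 - a) ** replicas

premium = availability_single(0.999)           # one expensive machine
cheap_trio = availability_redundant(0.99, 3)   # three cheap machines
```

Three 99%-available cheap machines yield 99.9999% availability as a group, comfortably beating the single 99.9% premium machine - the small-scale strategy (harden the individual) and the large-scale strategy (exploit the distribution through redundancy) really are opposites.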
Many years ago Dr. Deming alerted managers to this problem and created a method to help people identify the correct strategy for dealing with problems: Statistical Process Control. Without going too deep into the method, it basically gives a statistical criterion to distinguish between two types of events: those attributed to special causes, which should be handled individually, and those attributed to common causes, which should be handled systematically. The criterion is somewhat arbitrary, but at least it’s explicit and so it can be tweaked - which is where domain and statistical knowledge enter the picture. The problem is that practically no one uses (or has even heard of) SPC outside of some specific engineering disciplines - yet practically all of us make implicit decisions of that kind on a daily basis.

These kinds of decisions are the basis of what I call “thinking in dichotomies”: categorizing the world into discrete groups (often two). This type of thinking works well at small scale, which is where most of us spend most of our lives, but fails horribly at large scale. For example, take the common male/female dichotomy: it works pretty well in our daily lives, as most of the people we know fall into one of these categories for most practical purposes, and when they don’t we simply bash or ignore them. But as Sabine Hossenfelder points out, biological sex is more a spectrum than a dichotomy, which creates a myriad of problems when you have to deal with large populations. You’d think we’d have abolished the “gender” field from forms by now, as it creates more problems than value, but instead we try to deal with this by adding categories - a strategy doomed to fail, as no number of categories will be enough given the huge number of humans looking to distinguish themselves. A similar problem pops up when we compose large-scale populations from discrete units - or as Mark Burgess calls it, the scaling of true and false.
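To make the SPC criterion mentioned above tangible, here is a minimal individuals-chart (XmR) sketch: sigma is estimated from the average moving range (divided by the standard d2 constant of 1.128), and points outside mean ± 3 sigma are flagged as special causes to investigate individually, while everything inside the limits is common-cause variation, addressed only by changing the system. The data is made up for illustration.

```python
def control_limits(xs):
    """Lower and upper control limits for an individuals (XmR) chart,
    with sigma estimated from the average moving range / d2 (1.128)."""
    m = sum(xs) / len(xs)
    moving_ranges = [abs(b - a) for a, b in zip(xs, xs[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return m - 3 * sigma, m + 3 * sigma

def special_causes(xs):
    """Points outside the control limits: handle these individually.
    Everything else is common-cause variation: change the system."""
    lo, hi = control_limits(xs)
    return [x for x in xs if x < lo or x > hi]

# Illustrative data: a stable process with one special-cause spike.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 14.5, 10.0, 9.7, 10.3]
```

Note the design choice: sigma comes from the moving range rather than the overall standard deviation, precisely so that a special-cause spike doesn’t inflate the limits and hide itself - the kind of tweakable, explicit detail where statistical knowledge enters the picture.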
For anyone experienced with Statistical or Quantum Mechanics this is old news, and science created a solution long ago: use different models for different scales! This strategy is often called “emergence”, as new types of behavior (and the associated models) “emerge” from the underlying models. Often there is a shift from discrete models to continuous or spectral models, and vice versa. Likewise, we should tune our thinking to use the correct model for the scale and problems at hand.
Let’s take bicycle theft as an example (I’ve had three stolen in the last five years). As an individual, I am mostly concerned with the safety of my own bicycle and not so much with others’ - and so the strategy I employ is to differentiate my bicycle from the rest. This can be done by purchasing a better lock, buying cheaper bicycles, or locking them in a more protected location. Some bicycles will still be stolen, and that’s fine as long as they aren’t mine; I did nothing to impact the large-scale trend. In contrast, if I were working for the government trying to reduce bicycle theft, I would employ radically different strategies! Better locks for everyone help very little, as thieves will just up their lock-picking game rapidly (where I live they sometimes cut the guardrails that bicycles are locked to instead of picking the locks), and likewise any strategy aimed at individual players (thieves or bicycle owners) is doomed to fail. What the government should do (and fails to do) is handle the conditions impacting the entire population, and thus the distributions at hand: bicycle prices, poverty levels, the efficiency of criminal investigations, etc. In other words, the notion that “punishment” deters crime is completely bogus (as research has noted, it’s the probability of getting caught that matters, not the severity of the punishment). The much-ignored implication is that promoting, demoting, rewarding and punishing people in the workplace has very little impact on overall performance! Points 10 and 11 of Deming’s 14 points say exactly that, with some extra wisdom on the horrible results of manipulating incentives.
These kinds of opposing goals and strategies are quite prevalent once you start paying attention, so adjusting your thinking accordingly and reasoning in the right constructs is very valuable. It also helps in understanding the different viewpoints of different players. If I am a startup founder, success is quite dichotomous (failure, plus a few categories of success at best), but as a VC, success is more of a spectrum, measured as a continuous yield. The founder needs his specific startup to succeed and cares very little about the portfolio, while the VC cares little about individual companies and very much about the aggregate valuation of the portfolio. For a VC it might even make sense to hurt one company in order to strengthen another. Engineers working with large systems should internalize this lesson - we often need to adopt the VC perspective and think of our systems in terms of continuums and spectra, not categories and dichotomies. Thus safety is not a dichotomy - there is no such thing as “unsafe”; everything is “safe” to some degree. Automation is not a dichotomy - there is no such thing as “manual”; everything is “automatic” to some degree. After all, SSHing into a server and “manually” changing config files is still highly automated compared to directly changing bytes on disk. This change in thinking also helps you be more ROI-oriented, as you let go of dichotomous success criteria, and makes for better collaboration - wicked problems can only be improved by compromising.