



Phantom status quo

  1. For a moment, let's place ourselves in a fictional world where we are capable of rational thought. Where all things that were known in the past are also known in the present. And where we can use this information to make reasonable (even if probabilistic) assumptions about the effects of our actions.

In this world I can see a single reason for not changing the status quo: effort. The change might require an amount of effort disproportionate to the predicted benefits.

  2. Downgrading to a world with rational actors who are near-perfect predictors given all current data, but who aren't omniscient about the past, we get a second reason: lack of information that might have been available to those who erected the status quo.

  3. Going further down, into a world where we are rational but neither near-perfect predictors nor omniscient about the past, we get a third reason: unforeseen consequences.

A change might look promising, but if the change is a big enough departure from the status quo, we risk slipping into issues that we couldn't have foreseen.

Combine these two things, and we get Chesterton's fence.

There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."

In other words, even if we see no reason for the status quo to exist, that might just be due to our lack of foresight or hindsight. The reason for the status quo might seem obvious if we were better at predicting and/or had the data available to those that built it.

  4. Coming back to the real world, with imperfect data, imperfect predictive abilities, and irrational humans, we get the final arguments for the status quo.

But let's assume we struggle to act rationally, and that we treat our various mental biases as artifacts: potentially useful artifacts (e.g. a built-in awareness of Chesterton's fence), but artifacts that can be discarded in situations where we can see that the higher-level objections against change don't apply.

I think codebases make for a perfect thought-experiment here.

Let's assume that, in our codebase, everything is CamelCase. A new guy comes along citing studies that claim a 6% decrease in release bugs if a team uses snake_case for variable names.

We can't make the argument from effort here, since refactoring CamelCase to snake_case takes about 4 button clicks in our IDE, a push to master, and telling everyone to pull.

It might cause minor inconveniences (e.g. people having to slightly refactor the PRs they are working on), but even if there's only a 10% chance that the study is correct, an expected 0.6% fewer release bugs on average is a worthwhile goal.
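
To make the "effort" side concrete: even without an IDE, the mechanical part of such a rename is roughly a regex pass over the codebase. A minimal sketch of that automation, assuming Python and a deliberately crude regex (a real refactor would lean on the IDE or a proper parser so that strings, comments, and class names aren't mangled):

    import re
    from pathlib import Path

    def camel_to_snake(name: str) -> str:
        # insert an underscore before each uppercase "hump", then lowercase everything
        return re.sub(r'(?<=[a-z0-9])([A-Z])', r'_\1', name).lower()

    def convert_file(path: Path) -> None:
        source = path.read_text()
        # crude matcher for camelCase identifiers; purely illustrative
        converted = re.sub(
            r'\b[a-z]+(?:[A-Z][a-z0-9]*)+\b',
            lambda match: camel_to_snake(match.group(0)),
            source,
        )
        path.write_text(converted)

    for py_file in Path('.').rglob('*.py'):
        convert_file(py_file)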

Chesterton's fence doesn't apply either, since code is fairly deterministic: our tools will warn us of any resulting naming conflicts, and on the off chance that anything still goes wrong, we have our unit tests.

Do we, as wannabe rational actors, change our code to snake_case?

I don't know.

I think this is an example of what I'd call a "Phantom status quo": a status quo that can be changed easily and whose consequences are easy to foresee.

Most status quos are solid and entrenched. We might all agree that a certain change to the building code of our city would be nice, but we'd have to go ahead and physically modify every house. Furthermore, since we aren't perfect predictors, we couldn't really know the change is desirable until we tried it, at which point we would have already sunk a lot of effort into it.

In a codebase, however, the status quo is often easy to change and the changes are easy to test. So the two main obstacles to changing things are gone.

I don't want to be irrational, but I also know that, if a new team member came to me and told me "Hey, we should change the casing of our variables to this because a study shows it reduces release bugs by 6%", my default reaction would be "No".

Even if I had read the study, and the study used good methodology, had good theoretical arguments, and was done by people I trust, my reaction would still be "No".

Or, perhaps more realistically, let's say our deployment toolchain uses a specific compiler, say clang. Someone comes in saying that gcc just made some improvements in areas x, y, z, so switching to gcc should bring a 2% overall speed improvement. I think my reaction here is still "No".


But is there any rational reason to defend a Phantom status quo? The ones I can come up with are as follows.

1. Avoiding minor-improvement loops

Let's stick to the compiler example. Say that we need only spend 30 minutes, on average, to change the compiler we use to release.

Let's also say that I believe it's reasonable to spend 30 minutes in order to improve performance by 2%.

But what if, in 2 weeks, clang releases a new version with improvements x, y, z? Now one of our clang-using developers notices that clang suddenly produces binaries 4% faster than the gcc version.

Well, had I made the first change, I'd now be forced to once again change compilers.

What if this happens every few weeks? Either clang or gcc ends up compiling our code slightly better, and now I have to spend 30 minutes every two weeks changing release compilers.

Furthermore, the change back will not be instant. Maybe everyone is busy right now and it takes 1 week to make the change, which means I am missing out on that extra 4% of performance for a whole week.

I basically got 3 weeks of 2% improved performance out of switching to gcc. But I would have gotten 2 weeks of 0% improved performance and 1 week of 6% improved performance if I had just stuck with clang: the same total benefit... and I'd have saved 60 minutes of my life.
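
Spelled out as a back-of-the-envelope calculation, using the made-up numbers from the story above (and assuming that staying on clang means the new clang version gets picked up for free):

    # Benefit is measured in "percent-weeks" of improved performance over the
    # 3-week window; cost is minutes spent switching release compilers.

    # Policy A: switch to gcc now, then switch back once the new clang lands.
    benefit_switch = 2 + 2 + 2   # +2% for all three weeks (the switch back lands after week 3)
    cost_switch = 30 + 30        # 30 minutes to gcc, 30 minutes back to clang

    # Policy B: just stay on clang.
    benefit_stay = 0 + 0 + 6     # nothing for two weeks, then +6% once the new clang ships
    cost_stay = 0

    print(benefit_switch, benefit_stay)  # 6 6  -> the same total benefit
    print(cost_switch, cost_stay)        # 60 0 -> plus an hour of work for nothing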

Granted, this is a contrived example, but I think it generalizes.

Usually when I make a change for the sake of performance, I look for 2x kinds of improvements and I'd wager this is the underlying reason.

The time it takes you to write inline before every function that only gets called once might be the time it takes for your compiler to get smart enough to do that inlining automatically.

2. Avoiding bikeshedding loops

Assuming we are perfectly rational actors that never care about "leaving a mark" or signaling importance, bikeshedding is not an issue.

However, I feel like even in a semi-fictional world I can't make this assumption with a clear conscience.

At some point, if an impactless change is allowed, people will start making impactless changes all the time.

A good example of this is the "npm effect", where people constantly broke things down into smaller and smaller packages, until you got monstrosities such as is-odd and is-even. The fact that something like is-odd came about is not even the scary part; the scary part is that it has 32 dependencies.

But this sort of reasoning stems from realizing the status quo is spectral. There's no obvious harm in breaking up sub-modules of a package into their own packages or writing an abstraction to tidy up some code.

However, there's also no obvious stopping point.

It might be that jonschlinkert and his 1500 micro-packages are just some sort of post-modern artistic statement, but even so, art is a mirror to reality, and I think this illustrates the point perfectly well.

Granted, the problem here is that you can't follow the opposite of this advice either. If you never modularize anything you'll end up with an ugly monolith.

I think the "is there a huge benefit" rule applies fairly well here: if you can come up with a really good argument as to why splitting things up will save you time, split them up. If you can't, then don't do it, even if you could and there's no obvious argument against it, because there are no good arguments against it all the way down.

3. Avoiding the real world

I started this chain of reasoning with the assumption that code is more or less a "deterministic" system, where one can foresee the consequences of a change.

But it isn't, not really. As far as I can tell, the order in which pip decides to install requirements is complicated, dependent on the pip version, and can't be easily controlled.

Even for recent versions, the specifications are very loose. If I read it correctly, the order in which I specify my requirements shouldn't matter.

Yet, in spite of the fact that the ordering doesn't seem to matter in the deterministic realm of ideal programming, in the real world I know that if I rearrange even a single line of this requirements.txt file, there's a 20% chance that within 2 weeks I'll get reports of install issues I thought I'd fixed half a year ago.
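
For a flavour of what "rearranging a single line" means here, a purely illustrative requirements.txt (the package names and version pins are made up for the example; swapping any two lines leaves the declared constraints untouched, yet, going by the experience above, the install can still end up behaving differently):

    # requirements.txt -- illustrative only; packages and pins are made up
    numpy>=1.18
    scipy==1.4.*
    torch>=1.5,<1.6
    requests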

So really, maybe the "rational" assumption that the programming world is deterministic doesn't always hold. Maybe sometimes certain patterns do get crystallized due to a form of selection.

Maybe it really doesn't matter if a function returns 0 or 1 as long as you never use the return value. But maybe the indeterminable yet not-random behavior of your CPU when a cosmic ray strikes it means that the difference between those two things is the difference between utter disaster and business as usual... Probably not, it's something like e^(-BB(100)) unlikely, but why risk it?


So overall I think I retain my bias for Phantom status quos, even though it might well be misguided: I might be masking an irrational aversion to change with fancy sociological arguments.

But you never know, and on the whole, provided no compelling reason to change it, I think I'll choose to keep my mental status quo.

Published on: 2020-05-11









