Meditations on software


I have been writing some code lately (it goes with the job description), and one useful, extremely sensible piece of advice for such an undertaking is:

Make interfaces easy to use and hard to misuse

Let’s consider an example of why this advice is useful. Suppose you are doing something completely unrelated to software, such as cooking. What you expect from your favorite tool, say, a knife, is that it cuts the things you want to cut (such as vegetables) and does not cut the things you do not want to cut (such as your hand). Hence the knife has a handle that lets you hold it safely and a blade that actually cuts.

In software engineering, the idea is the same. You present the user of your library with an interface that allows them to do what the library is good for and makes it hard to break internal assumptions. In the otherwise quite scary world of numerics and optimizers, the optimizer interface lets you describe your optimization problem and push the equivalent of a big flashy button labeled “SOLVE”. At no point are you allowed to change the internal state of the solver, play around with magic values, or alter the optimization routine in any way other than what the interface allows.
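As a toy illustration of the “big SOLVE button” idea, here is a minimal sketch. All names and the particular method are hypothetical (this is not any real solver’s API): the interface accepts only a problem description and exposes a single solve() call, while the internal state stays private.

```python
# Hypothetical sketch: a solver whose interface is "describe the problem,
# press SOLVE". Internals (here, a golden-section search) are not exposed.
class Solver:
    def __init__(self, objective, bounds):
        self._objective = objective   # function to minimize (assumed unimodal)
        self._bounds = bounds         # (lo, hi) search interval

    def solve(self, tolerance=1e-6):
        """The big SOLVE button: returns an approximate minimizer."""
        lo, hi = self._bounds
        phi = (5 ** 0.5 - 1) / 2      # golden ratio conjugate
        while hi - lo > tolerance:
            a = hi - phi * (hi - lo)
            b = lo + phi * (hi - lo)
            if self._objective(a) < self._objective(b):
                hi = b                # minimum lies in [lo, b]
            else:
                lo = a                # minimum lies in [a, hi]
        return (lo + hi) / 2

solver = Solver(objective=lambda x: (x - 3) ** 2, bounds=(0.0, 10.0))
print(solver.solve())  # ≈ 3.0
```

The point of the design is that the user literally cannot reach the loop variables or swap out the search strategy; the only levers are the ones the interface deliberately offers.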

Writing good library code is, to a non-trivial extent, designing good interfaces. The same applies to tooling, programming languages, and hardware. This is one of the reasons we have cables and sockets that are either obviously asymmetrical or fully symmetrical in design, for example. This is one of the reasons USB-C and Lightning are superior connectors: they are really, really hard to plug into the socket the wrong way.

However, in the software world, tools and libraries evolve and gain more features. They accumulate flexibility, configurability, and other nice shiny features until they become Turing-complete. And at this stage, it becomes very easy to misuse them. Any tool that allows custom, unbounded scripting evolves into a science of its own, and stacking several of these tools on top of each other leads to new job descriptions, because everything becomes so unfathomably complex.

So, maybe some features are better left unimplemented.


This could be your typical rant against abstraction layers (although I would not mind you joining my personal crusade against them), but there is another thought that has been nagging me. Returning to the kitchen example, let us imagine a world where all you have is a fork. Assume, even, that you can have as many forks as you want.

Forks are a good invention. They are fine instruments, capable of many things; with enough creativity you can make a spoon out of two forks, you can sharpen a fork so that it becomes capable of cutting, and you might even beat an egg white with one.

However, daily cooking beyond eating pre-cooked meals would become a challenge. Cutting off a piece of cheese for your typical breakfast would be a nightmare. Dicing vegetables, or eating soup or any liquid food, would be quite an exercise in applied creativity.

Naturally, kitchen operations would gain significantly in complexity. Creative solutions would have to be engineered. People would want their daily share of Geschnetzeltes, and they would receive it, at a horrendous price. Cooks and kitchen engineers would take pride in how far they can go to bring the desired foods to the tables of the (paying) masses. Some of them would take pride in the complexity of their solutions, in the number of forks used to make a good Bolognese.

“But wait”, I hear you say, “this is utter bullshit. Why should we constrain ourselves to just one instrument? This is a contrived example. Naturally, people would evolve tools better suited to tasks like cutting. You are describing a non-problem.”

Oh well.

In the software world, “make things hard to misuse” plus the usual proneness to hype ends up in decisions to use The One And Only Tech Stack™ (insert your favorite here) that is Designed To Be Used By Everyone™ and comes with its share of opinionated assumptions about how things are meant to be done. Naturally, good design means things can be done, but are not meant to be done, in any other way. Management and fellow engineers are no strangers to hype (just think of it: we were promised automated code generation over fifteen years ago, and most software jobs still revolve around passing around JSON with the occasional seasoning of SQL), so they often jump on the new technology train even if the only new thing is the packaging (yes, Go, I’m looking at you right here: you’re just Java 1.4 in disguise).

Suppose now, for a moment, that the requirements are extended in such a way that the assumptions about what’s right and what’s wrong no longer hold. Then the real fun begins. Since one somehow has to solve the problem at hand, the software engineers need to explore the fun, creative ways of problem-solving, including, but not limited to, using a pre-processor, inventing a whole new language on top of the one that does not satisfy the needs, using code generators, and so on. This adds an additional layer of complexity, a source of pride for some (look how complex my problem is!) and a source of headache for others. I don’t want to shame anyone who actually has to work down in the trenches and engineer knives out of forks, but in my eyes it is indefensible to normalize or glorify this practice: this is work that has to be done only because there are no better tools.

Where does this leave us? I started with “Make interfaces easy to use and hard to misuse”, but now I think I have something to add to this. My version of this rule would be:

Make interfaces easy to use, hard to misuse, and discourage misuse by being honest about your assumptions and offering a comprehensive interface.
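To make the rule concrete, here is a minimal, hypothetical sketch (all names are mine, not from any real library) of “being honest about your assumptions”: the assumptions are encoded in a type rather than buried in documentation, so the interface itself refuses to be misused.

```python
# Hypothetical sketch: encode the assumption "a timeout is a positive
# number of seconds" in a type, instead of hoping callers read the docs.
class Timeout:
    """A duration that is, by construction, positive and in seconds."""
    def __init__(self, seconds):
        if not (isinstance(seconds, (int, float)) and seconds > 0):
            raise ValueError("Timeout must be a positive number of seconds")
        self.seconds = float(seconds)

    @classmethod
    def from_millis(cls, ms):
        # An explicit unit conversion, so nobody passes milliseconds by accident.
        return cls(ms / 1000.0)

def fetch(url, timeout):
    # A stand-in for a network call: it only documents what it would do.
    # The assumption is enforced by the type, so -1 or "30" cannot sneak in.
    if not isinstance(timeout, Timeout):
        raise TypeError("pass a Timeout, not a bare number")
    return f"GET {url} (timeout={timeout.seconds}s)"

print(fetch("https://example.org", Timeout.from_millis(2500)))
```

The design choice here is the classic “parse, don’t validate”: invalid states are rejected once, at the boundary, and everything downstream may rely on the assumption without re-checking it.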

Obviously there will be people who, just for the sake of the argument, will write Tetris in Brainfuck or Doom in JavaScript. However, just because something is possible (Turing completeness is a marvelous thing!) does not mean it should be done in a business context, where engineer time matters and is not well spent fighting the tech stack instead of the business problem. No tech stack is meant to be used by everyone; they are opinionated pieces of software, and we should handle them that way.

Lessons learned

I have been TAing the lectures “Computer Networks and Distributed Systems” and “Mathematics for CS Students” for a term each, and by now I have gathered some experience with the exams. It is, overall, a very mixed experience.

Zeroth, most people do actually have some kind of understanding of the topics. But there is a long way from intuition to understanding what is actually happening in the lecture and why it is happening the way it is. (Actually, this is a verification procedure for learning: if you know exactly what problems the lecture is solving and by what means, then you are most probably doing it right.)

First, some of the kids are pretty bad at reading and comprehension. If the question is “What are the pros and cons of various methods of achieving X?”, then the wrong approach is to tell me that a major downside of a method is that you have to implement it. Seriously? Let’s draw an analogy: a major disadvantage of owning a car is that you have to buy a car, and a major disadvantage of public transport is that you have to buy a ticket. Yeah, I’m not very impressed by this involved comparison, either.

Second, numbers and computation are a serious issue. This was evident in the networks exam, and it is even more evident in the calculus exam, even though the students somehow managed to pass the initial “solve 50% of the homework” filter. Integration seems to be like magic: sometimes it works, sometimes it does not, and most people seem to have no idea why. My hint “solve 20 integrals and then you’ll know how” was not appreciated. In the networks exam it was even worse: people failed at dividing large numbers. (And it was awful to look at.)

Third, complex concepts are not easy to understand. (Captain Obvious reporting!) “This function is continuous and not continuous at the same time”, yeah, right. Sure, university lectures are not meant to be easy per se, but they are also not meant to be mandatory for everyone. And this is freshman material, not formal semantics from outer space. The same continues in the computer networks lecture, where some of the students write things like “Alice sends her private key to Bob”. If I were a columnist, I would write an awfully long lecture on how Facebook makes us disrespect privacy, but luckily I think that using “us” in the “us sinners” sense is a dirty rhetorical move, so I’ll just facepalm (or facedesk) one more time.
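For the record, the whole point the students missed can be shown in a few lines. Here is a deliberately toy (tiny-number, insecure, illustrative-parameters-only) Diffie-Hellman exchange: only public values ever cross the wire, and private keys never leave their owners.

```python
# Toy Diffie-Hellman: NOT secure, numbers are tiny and purely illustrative.
p, g = 23, 5                     # public modulus and generator, known to all

alice_private = 6                # stays with Alice, never transmitted
bob_private = 15                 # stays with Bob, never transmitted

alice_public = pow(g, alice_private, p)   # this is what Alice sends to Bob
bob_public = pow(g, bob_private, p)       # this is what Bob sends to Alice

# Each side combines the OTHER party's public value with its OWN private
# value; both arrive at the same shared secret.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

assert alice_shared == bob_shared
print(alice_shared)  # → 2
```

An eavesdropper sees p, g, and the two public values, but recovering a private exponent from them is the discrete-logarithm problem, which is what the scheme’s security rests on (with realistically large numbers, of course).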

On the other hand, most people do seem to pass the exams, so it’s not all bad. But the aftertaste is pretty bitter.

Election day

A disclosure: I am not American. Hence, my interest in the American elections may be very alien to actual Americans (just as the interests of the candidates may be alien to me). I have a different background; my political views (as in: what should be a priority, and what are good means) cluster differently. I also have a strong hype allergy. Long story short: I have the freedom of not having to choose and the possibility of saying “I strongly dislike both candidates” without having an impact on the outcome. The reasons are manifold, but just to give you a hint: I dislike Trump for his far-right campaign and his attitudes. I also dislike Clinton for the “vote for me, you sexist pile of shit” sentiments of her campaign and her rather hawkish policy.

I went to sleep on Tuesday with the thought that I had missed an excellent opportunity to bet on Clinton against some politically active bloggers. On Wednesday, I woke up, and the first words on my phone’s display that my mind registered were “immigration office”. I then thought that not betting had actually been a wise move (and, like many wise moves, this one was due to laziness). And then the Internet exploded with pain.


Communication hardness

This is not a post on computational complexity. (I can write one, though, and even on communication.)

There have been several incidents in my life that follow a pattern, and I should probably summarize them, if only to think the pattern over. It has happened to me several times that I was trying to convey to another person a thought, an idea, or a concept and was utterly failing at the task. It has taken me hours to clarify what I meant, what I wanted to say, and what, for me, the logical implications were. In the end, after the task was done and the idea communicated (or so I thought), my first reaction was, “Oh wow, this was hard. I think I need a drink now.”

Now one could conclude that I am simply incapable of communicating my thoughts, but this hypothesis is invalidated by contradictory observations. The simplest assumption that matches my observations is that it is, in fact, hard to communicate complex ideas; if the person I am trying to communicate with has a different intuition (even for the same problem!), then explanations that are completely clear to me may come across as confusing.

This is very, very sad. It increases the communication overhead, it reduces the flow of ideas, and it makes communication sometimes rather frustrating. Furthermore, it limits the number of people you have fun talking to. On the other hand, this is a very good reason to appreciate those people more.

One decade of not learning

Today is a remarkable anniversary.

On October 9, 2006, a seismic event originating somewhere on the Korean peninsula exposed a lot of interesting facts about political, economic, and military experts. The event itself was quickly characterized as an explosion, and several explanations were proposed.

  • North Korea had tested a nuclear device
  • North Korea had detonated a large conventional bomb
  • North Korea had tested a nuclear device, but it had failed to detonate properly

The latter two were by far the most popular, as it seemed unimaginable that these hungry, backwards, Juche-hailing, and ideologically incompetent people could ever design such a technical masterpiece. Ten days later, the United States confirmed that the event originated from a 0.8-kiloton nuclear explosion. Ten years later, the public perception of North Korea is by and large still where it was back in 2006; people even make epic movies about it.

Now, saying “confirmation bias” would be just casting a spell and hoping that it magically explains everything. I think this effect has more components to it.

First, it seems that historical scale is not easy to develop an accurate intuition for. For example, all the cool technical advances in air and space travel are not that recent: the first satellite flew 60(!) years ago, closer in time to the Wright brothers’ plane than to today, and the Concorde first flew almost half a century ago. From this point of view, it is not entirely unintuitive that even with a 40-year technological handicap, one should be capable of creating rockets and nuclear weapons. (Just to remind you, the mid-1960s correspond to the Saturn V, the XB-70, and the SR-71.) This makes possible developments a matter of resources and engineering capabilities.

Second, there is the question of ideology and existing stereotypes. Clearly, North Korea is not a nice place to live. Clearly, the state exerts far more pressure and control on the individual than anyone would deem acceptable. But even if this has an influence on the competence of North Korean engineers (it obviously does), it remains questionable to flatly deny them the engineering capabilities of the 1960s. People get surprisingly fact-agnostic when it comes to weapons.

Third, there is the question of results. Ten years have passed; did the perception of North Korea change? It does not seem so. One can still make funny jokes about the Dear Leader and the failures of their space program, yet this does not change the facts. And the facts are that these guys are pretty close to intercontinental ballistic missiles. Now would probably be a good moment to take them seriously.


Let’s talk about literature. Again.

I have stumbled upon several discussions and a nice word that I cannot help but translate into English. The word is “rivetism”, which roughly expands to “an overwhelming desire to enforce absolute correctness in the details” and stems (as I’ve heard) from discussions about the literary merits of a movie being measured by the correctness of the number of rivets on a tank turret. Needless to say, “rivetism” is a pejorative.

Yet there is a point, and an arising question. Suppose you watch a movie about something you know, and know well. Take, for example, cooking. And there is this guy who takes a frying pan and loudly announces he’s going to make a soup. Ridiculous, right? Or imagine a book about school where everyone loves gym class. A slightly unrealistic scenario, doesn’t it seem so? So, when your expertise on a topic is sufficient to discern unrealistic assumptions, the hitherto suspended disbelief kicks back in, and you have trouble connecting to the characters. This is the reason people from air and space engineering are not so overwhelmed by Gravity or pretty much any other space-fiction movie that pretends to depict reality: technical or scientific issues that are obvious to a professional are often overlooked by the authors. What makes this interesting is that disbelief is more readily suspended if the fiction is clearly depicted as fiction, without further discussion of the technical details. That’s why Warhammer 40K works pretty well for me and Ready Player One does not.

Sadly, rivetism becomes inevitable once you become more closely acquainted not only with technical issues but with real-life social interaction as well. Working in, or at least witnessing, a structure of at least 100 people gives you insights about human interaction, and it is often the case that the interaction patterns observed in real life do not match those depicted in fiction… at all. It begins with all those superhero movies, which are plainly unbelievable once you know how many people work to make a single flight of a plane possible. It goes on with secret organizations no one has ever heard of, yet with unlimited budgets, and so on.

Probably one just has to accept the inevitable truth that authors rarely have an idea about the (social and technical) mechanisms they use in their plots, relax, and try to enjoy the narrative or the action nevertheless.

On Brexit

I started to think about current events, and, as it sometimes happens, I had more thoughts than I initially thought I had.
First and foremost, I am surprised. Very surprised. I went to sleep on Wednesday with the feeling that the UK would vote “Remain” (though not by a large margin). Last year, I was nearly brave enough to bet a bottle of whisky on it. (Hey, I would have made the same bet on Sunday!) I was wrong. However, as I do not carry any responsibility (I am not a famous expert, I have no obligations, and my predictions have little, if any, influence), my sorrow at being wrong is limited, and I consider this an opportunity to update my mental model and ask questions.
Why did this happen? Obviously, more people were motivated by fear of THE IMMIGRANTS (doubly fun in a country with a rich colonial history) than by fear of losing economic ties. This alone tells us something about the vote: it was a vote against something, not for something. If a society votes out of fear, for the lesser evil, that is already a rather unhealthy sign [CITATION NEEDED]. Even worse, this is something a political party can take advantage of by yelling “vote or lose” without having to do anything.
Who motivated the “Leave” voters? There was a “Leave” campaign, led by non-marginal political elites. This indicates a controversy within the political class; more than that, it indicates a lack of consensus on foreign policy, something that has happened only rarely in the last 70+ years between the Iron Curtain and the Atlantic Ocean. It would be interesting to know why a significant part of the political class decided that leaving the EU is more profitable; it would be even more interesting to know what they know.
What has Cameron done? As I perceive it, Cameron (and his political surroundings) played a game of bluff with Brussels and threatened to exit. As Brussels was not as prone to bluffing as Cameron thought, he announced a referendum, and suddenly other forces hijacked the issue. In football, this is called an own goal.
Is democracy to blame? This is probably the most polarizing question, and some Germans are currently taking pride in the constitutional impossibility of nationwide referenda in Germany. As this is also a question that begs for simple answers and mantras, I will try to highlight the sides of the issue as I see them. The good side of referenda is that you have direct influence on a decision. The bad side is that issues can be complicated, and it is often hard to obtain the information required to make a qualified decision, especially if political campaigns work with memes instead of actual reasons (and they do). This means we should probably do the scientific thing and consider previous work on the topic and the empirical data. The empirical data suggests that referenda can be a working mode of operation if your country is at least well connected. (Hello, Switzerland!) The data does not suggest that referenda cannot work in other conditions, although there are indicators that not all issues should be decided by popular vote, such as the death penalty (Germany has, at times, had a popular majority for the death penalty). This does not mean that popular votes are bad or that the voter is dumb; it just means that the public benefit is not the sum of individual benefits. In my opinion, this also means that questions decided by popular vote should be asked in a clear, understandable manner, and that all political elites should be ready to accept the decision as is and carry it out; a referendum should not be an arena for political infighting.

Political movements suck

(Forgive me the many political posts; I am currently re-formulating my world view, and this way you suffer the least. There are, however, emotionally more demanding alternatives.)

For some time, I have had a grudge against political movements, but I could not pinpoint the reasons. As this state (“I don’t like it, but I have no idea why”) did not really satisfy me, I tried to find the reasons for this emotional condition.

The first reason is buzzwords. Buzzwords are words or phrases that provoke an intuitive response without carrying actual meaning. For examples, look at party names (Christian Democratic Union, doesn’t that sound nice? No goals, but this warm fuzzy feeling of the good old days), at anything a “political scientist” says, and sometimes even at whitepapers (would a double-blind test distinguish between Net platform neutrality and Ultra-Hardcore?). Buzzwords are bad, m-kay? No clear goals means no clear proposals, which means no clear requirements, which means no responsibility.

The second reason is something I call topic clustering. Since I don’t want it to become yet another buzzword, I’ll define it: topic clustering means that a voter chooses not between individual problems he’d like to have solved, in a preferred order and with given solution methods, but between clusters of problems represented by political movements or candidates. This profits the political movements, but not the individual voter. In the extreme case, society shapes itself after the political spectrum, with predefined thought patterns for the individual (What political cocktail do you prefer? Wait, you want to mix it yourself? That’s not available, sorry). For example, suppose you like (completely at random) science and technology, social progress (as in worker rights, equality, and such), and, let’s say, nuclear power, since you are strongly convinced that this is a consistent, non-contradictory set of beliefs. It turns out that there is no party with exactly the same goals, which is not a problem in itself, but it forces you to choose which goals you prefer. Fractional voting is not a thing; hence, you have to decide which part of that cocktail is really important and which is not.

Topic clustering also leads to the third unfavorable phenomenon, the one I dislike most. Political movements are groups of people. Groups of people tend to work on a friend-or-foe basis, which is sometimes okay, except when it isn’t. I have had a feeling (and it grows stronger and is forming into a suspicion) that as a member of a social circle you are somewhat pressured to subscribe to the complete cluster of topics formed by some movement, as political groups are, by definition, the entities that also define the ideological agenda. This pressure is not a bad thing in itself, but it forms patterns of thought that lead to pigeonholing people. And pigeonholing leads you to believe that there are only finitely many (fingers-of-one-hand many) types of people, which mostly reduces to two kinds: the nice, smart ones who share 99% of your ideas, and the ugly, dumb ones. This is, at the very least, insulting to the variety of human experience. However, the real problem is the pressure to run with the party line, which appears whenever the cluster of goals turns out to be not entirely conflict-free. In that case, some goal is declared the politically nicer one, an arbitrary, political decision made to attract popular support. I won’t give you extremely recent, wild examples, but consider the Alan Turing trial. There, a political decision was made to prosecute Turing, because at that point it was the status quo that a homosexual person was a liability to the state, disregarding any previous accomplishments, which were pretty recent back then. A more recent example involves the civil war in Libya, where several political entities pursued the noble goal of supporting their side, and then immediately forgot about the consequences, instead of following the less symbol-laden policy of decreasing entropy. Now, Libya is a clusterfuck.
Systems have failed, people have died and will die, but as this does not happen in the press, the situation seems to be nothing to worry about; otherwise, one would be defending the wrong side. This kind of thinking makes me cringe and think that if the people and the political culture endorsing this way of thought lose the media battle to cat GIFs, I won’t exactly mourn.

See also [1], [2].

Cultural observations

I’m invading the Domain of Culture.

People close to the movie industry sometimes complain about the lack of original plots and the dominance of comic adaptations (hey there, Marvel!), sequels (who said “Fast and Furious”?), and other derivative content. Book adaptations occur in this list, too, but I prefer to consider them a different case, since books are, in general, original content (I said “in general”; there are counterexamples). While this is not bad as such, it has the obvious drawback that plots become unimportant and predictable. The extreme case is horror movies: there, you just get the same scary flick with adrenaline and naked bodies. Unimportant and predictable plots are, in my opinion, a bad thing, since I have a preconception that I should leave a movie with the feeling that I have seen something interesting. Something interesting means interesting characters; a predictable character, however, is not really interesting.

It is, however, an understandable tendency. From the perspective of the big studios, a movie is good if it earns money, and money is easily earned with another sequel to some well-known franchise. Sometimes it even has an interesting plot, but this does not have to be the general case. Sometimes it works out when Disney/Pixar brings out another tale, but other than that, the conservative policy seems to work well. From the perspective of the customer, it is safe to go to a movie with known qualities, especially if you liked the last installments in the same cinematic universe and would like to see the same, but in a different colour.

Something like this has happened in human history at least once, a few hundred years ago. Before the Enlightenment, the art market was completely dominated by the church and the upper class. So what we see in the art of the early Middle Ages is the Bible, in different settings, but still the Bible. A fixed number of themes, with little variation. At some point, however, art moved on to different topics. Why?

I’d argue it has something to do with the new citizen class that could afford bread and butter and still had something left over for entertainment. And since they were not the church, they probably wanted to see in the paintings in their homes something closer and more concrete, like a scene from their own lives. Which, in turn, was a perfect incentive for artists to paint for profit at industrial scale. The Netherlands were particularly famous for industrializing art.

This is probably the key to my question. Whenever a new class, with its own cultural context and some ability to pay, appears on the stage, art will react and generate something that appeals to the new audience. So, probably, the Chinese will somehow stir up the movie market. Or the Russians. Or Latin America. Someone will, eventually.