Meditations on software


I have been writing some code lately (it goes with the job description), and one useful and extremely sensible piece of advice for such an endeavor is:

Make interfaces easy to use and hard to misuse

Let’s consider an example of why this advice is useful. Suppose you are doing something completely unrelated to software, such as cooking. What you expect from your favorite tool, say, a knife, is that it cuts the things you want to cut (such as vegetables) and does not cut the things you do not want to cut (such as your hand). Hence the knife has a handle that lets you hold it safely and a blade that actually cuts.

In software engineering, the idea is the same. You present the user of your library with an interface that allows them to do what the library is good for and makes it hard to break internal assumptions. In the otherwise quite scary world of numerics and optimizers, the optimizer interface lets you describe your optimization problem and push the equivalent of a big flashy button labeled “SOLVE”. At no point are you allowed to change the internal state of the solver, play around with magic values, or alter the optimization routine in any way beyond what the interface permits.
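As an illustration only (the `Solver` class below is a hypothetical sketch, not any real library’s API), such an interface might look like this: you describe the problem, press SOLVE, and everything else stays private.

```python
# Hypothetical sketch of an "easy to use, hard to misuse" solver interface.
# The caller describes the problem and presses the big SOLVE button;
# the internal state (current iterate, step size) stays private.

class Solver:
    def __init__(self, objective, initial_guess):
        self._objective = objective   # the function to minimize
        self._x = initial_guess       # private: not part of the interface
        self._step = 0.01             # private "magic value"

    def solve(self, iterations=1000):
        """The big flashy SOLVE button: a naive local-search sketch."""
        x = self._x
        for _ in range(iterations):
            # Try a small step in each direction and keep any improvement.
            for candidate in (x - self._step, x + self._step):
                if self._objective(candidate) < self._objective(x):
                    x = candidate
        return x

# Usage: minimize (x - 3)^2 starting from 0; the result lands near 3.
result = Solver(lambda x: (x - 3) ** 2, 0.0).solve()
```

The point is not the (deliberately naive) search loop, but the shape of the interface: nothing in it invites the caller to poke at `_x` or `_step`.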

Writing good library code is, to a non-trivial extent, designing good interfaces. The same applies to tooling, programming languages, and hardware. This is one of the reasons our cables have sockets that are either obviously asymmetrical or fully symmetric in design, for example. This is also why USB-C and Lightning are superior connectors: it is really, really hard to plug them into the socket the wrong way.

However, in the software world, tools and libraries evolve and gain features. They accumulate flexibility, configurability, and other nice shiny capabilities until they become Turing-complete. And at this stage, it becomes very easy to misuse them. Any tool that allows custom, unbounded scripting evolves into a science of its own, and stacking several of these tools on top of each other creates new job descriptions, because everything becomes so unfathomably complex.

So, maybe some features are better left unimplemented.


This could be your typical rant against abstraction layers (although I would not mind you joining my personal crusade against them), but there is another thought that has been nagging me. Returning to the kitchen example, let us imagine a world where all you have is a fork. Assume even that you can have as many forks as you want.

Forks are a good invention. They are fine instruments, capable of many things; with enough creativity you can make a spoon out of two forks, you can sharpen a fork until it becomes capable of cutting, and you can even beat an egg white with a fork.

However, daily cooking beyond eating pre-cooked meals would become a challenge. Cutting off a piece of cheese for your typical breakfast would be a nightmare. Dicing vegetables, or eating soup or any liquid food, would be quite an exercise in applied creativity.

Naturally, kitchen operations would gain significantly in complexity. Creative solutions would have to be engineered. People would want their daily share of Geschnetzeltes, and they would receive it, at a horrendous price. Cooks and kitchen engineers would take pride in how far they are able to go to bring the desired foods to the tables of the (paying) masses. Some of them would take pride in the complexity of their solutions, in the number of forks used to make a good Bolognese.

“But wait”, I hear you say, “this is utter bullshit. Why should we constrain ourselves to just one instrument? This is a contrived example. Naturally, people would evolve tools better suited for tasks like cutting. You are describing a non-problem.”

Oh well.

In the software world, “make things hard to misuse” plus the usual proneness to hype ends in decisions to use The One And Only Tech Stack™ (insert your favorite here) that is Designed To Be Used By Everyone™ and comes with its share of opinionated assumptions about how things are meant to be done. Naturally, good design means things can, but are not meant to, be done in any other way. Management and fellow engineers are no strangers to being hyped (just think of it: we were promised automated code generation over fifteen years ago, and most software jobs still revolve around passing around JSON with the occasional seasoning of SQL), so they often jump on the new technology train even if the only new thing is the packaging (yes, Go, I’m looking at you right here; you’re just Java 1.4 in disguise).

Suppose now, for a moment, that the requirements are extended in such a way that the assumptions about what’s right and what’s wrong no longer hold. Then the real fun begins. Since the problem at hand somehow has to be solved, the software engineers must explore the fun, creative ways of problem-solving, including, but not limited to, using a pre-processor, inventing a whole new language on top of the one that does not satisfy the needs, using code generators, and so on. This adds an additional layer of complexity, a source of pride for some (look how complex my problem is!) and a source of headache for others. I don’t want to shame anyone who actually has to work down in the trenches and engineer knives out of forks, but in my eyes it is indefensible to normalize or glorify this practice: this is work that has to be done only because there are no better tools.

Where does this leave us? I started with “Make interfaces easy to use and hard to misuse”, but now I think I have something to add to this. My version of this rule would be

Make interfaces easy to use, hard to misuse, and discourage misuse by being honest about your assumptions and offering a comprehensive interface.

Obviously there will be people who, just for the sake of the argument, will write Tetris in Brainfuck or Doom in JavaScript. However, just because something is possible (Turing completeness is a marvelous thing!) does not mean it should be done in a business context, where engineering time matters and is not well spent fighting the tech stack instead of the business problem. No tech stack is meant to be used by everyone; they are opinionated pieces of software, and we should handle them that way.


When I was a child, I wanted to become an astronaut.

I look at the world through my portholes. The light illuminates my capsule, my command module. There is some food, picked to my taste. There are regular communication sessions with the rest of the team; sometimes my friends chime in.

It is easy to lose track of the days. Here, every day feels like the next. Sure, there are working days and days off, but other than that, weekdays are basically indistinguishable. My geostationary orbit is stable and safe; every time I look down I see the same spot. It is nice to occasionally look down; up here it is easy to forget that there is actually something beyond my ship.

When I was a child, I wanted to become an astronaut. Now I fly, on a solo mission, locked in a thirty-something-square-metre capsule, the world behind my fourteen- and fifteen-inch portholes, at my fingertips and yet unreachable.

Ground Control, are you still there?

Quantifying blame

You have probably heard the phrase “We are all to blame”.

I want now to argue that whoever says this is committing a fallacy, and for this, I have to build some argumentative infrastructure.

Some explanations first. I was motivated to write this piece after some politician said somewhere online that “we are all to blame for [some unfortunate situation which is not really relevant to this piece]”. This pressed my berserk buttons (all of them), and I had to think about why I was so displeased with these words coming from this person. So, enjoy the results of my thoughts.

In today’s moral consensus (and my personal view), blame (and its less offensive sibling, responsibility) is a function of power. If you can change the situation, you are to blame. If you cannot, you are not. So far, so simple, and up to this point there is no contradiction in everyone being to blame for something. However, the statement misses several issues.

First, what is the consequence? Usually, “we are all to blame” results in “you should pay and atone”, or just a deep-sounding “we should all atone”. However, other than sounding deep, these words do not mean anything material by themselves.

Second, what is the measure? The implied connotation is that everyone is equally to blame. And in the cases under discussion, this is about as true as deriving “real possibility” from “nonzero probability”. In our world, the share of responsibility for any outcome is not equal. A politician has more power to change environmental policies than a nurse, and a doctor has more power over a patient than a schoolchild 50 kilometers away, even taking into account that the schoolchild has the theoretical option of studying medicine. But if no measure is supplied, the implied meaning is that everyone is equally responsible, by which no one is actually responsible.

What do we have in the end? I propose a heuristic: whoever says “we are all to blame” implies “we are all equally responsible” and thereby tries to scatter her share of responsibility. In the best case, this is a fallacy. In the worst case, it is an insult to reason and an attempt to evade judgment.

Lessons learned

I have been TAing the lectures “Computer Networks and Distributed Systems” and “Mathematics for CS students” for a term each, and by now I have gathered some experience with the exams. Overall, it is a very mixed experience.

Zeroth, most people do actually have some kind of understanding of the topics. But it is a long way from intuition to understanding what is actually happening in the lecture and why it is happening the way it is. (Actually, this is a verification procedure for learning: if you know exactly what problems the lecture is solving and by what means, then you are most probably doing it right.)

First, some of the kids are pretty bad at reading and understanding. If the question is “What are the pros and cons of various methods of achieving X?”, then the wrong approach is to tell me that a major downside is having to implement X. Seriously? Let’s draw an analogy: a major disadvantage of owning a car is that you have to buy a car, and a major disadvantage of public transport is that you have to buy a ticket. Yeah, I’m not very impressed by this involved comparison either.

Second, numbers and computation are a serious issue. This was evident in the networks exam, and it is even more evident in the calculus exam, even though the students somehow managed to pass the initial “solve 50% of the homework” filter. Integration seems to be like magic: sometimes it works, sometimes it does not, and most people seem to have no idea why. My hint “solve 20 integrals and then you’ll know how” was not appreciated. In the networks exam, it was even worse; people failed at dividing large numbers. (And it was awful to look at.)

Third, complex concepts are not easy to understand. (Captain Obvious reporting!) “This function is continuous and not continuous at the same time”, yeah, right. Sure, university lectures are not meant to be easy per se, but they are also not meant to be mandatory for everyone. And this is freshman material, not formal semantics from outer space. But this continues in the computer networks lecture, where some of the students write things like “Alice sends her private key to Bob”. If I were a columnist, I would write an awfully long lecture on how Facebook makes us disrespect privacy, but luckily I think that using “us” in the “us sinners” sense is a dirty rhetorical move, so I’ll just facepalm (or facedesk) one more time.

On the other hand, most people do seem to pass the exams, so it’s not all bad. But the aftertaste is pretty bitter.

Book review: Ancillary Justice/Sword/Mercy

Another book cycle finished. Actually, it was finished a while ago, but I got wound up in random events and did not find the time to blog. My bad.

I have already written something about the first part, and now I would like to revise my conclusion. I told you that Ann Leckie is an heir to Banks, and I will probably stay with that opinion. But she is also an heir to Asimov, in the sense that she likes to talk about the evolution of social structures in the far future, over large distances.

Ann Leckie’s concept of the Empire is a distributed personality ruling everything; the premise is that it is possible to link a human body to a distributed mind and let it act as an agent of said mind. As the agent is semi-autonomous and may not always be in contact with her other selves, communication delays may let parts of her personality act independently; hence, the stability of the Empire may be in question.

As far as space operas go, this particular one is pretty constrained in time and space. However, this is not a bad thing, as the questions Leckie discusses are large and require attention, even in the far future. Again, in the tradition of early sci-fi, today’s questions are asked in the setting of a possible tomorrow to look at them from a different perspective.

What I liked most, however, were the characters. Not all of them are my favorites, but at least the main character, Breq, is exactly the rational and cold-blooded person I expected to see in her position. Sadly, not all of her surroundings are, but in most cases they do not break suspension of disbelief (which is already very good!).

Also, I liked the ending. I don’t want to spoil it, so I will just say that it is not the one you’d typically expect and also the one that makes the most sense. Yes, this is not a contradiction.

Election day

A disclosure: I am not American. Hence, my interest in the American elections may be very alien to actual Americans (just as the interests of the candidates may be alien to me). I have a different background, and my political views (as in: what should be a priority and what are good means) are clustered differently. I also have a strong hype allergy. Long story short: I have the freedom of not having to choose and the possibility of saying “I strongly dislike both candidates” without having an impact on the outcome. The reasons are manifold, but just to give you a hint: I dislike Trump for his far-right campaign and his attitudes. I also dislike Clinton for the “vote for me, you sexist pile of shit” campaign sentiments and her rather hawkish policy.

I went to sleep on Tuesday with the thought that I had missed an excellent opportunity to bet on Clinton against some politically active bloggers. On Wednesday, I woke up, and the first words on my phone’s display that my mind recognized were “immigration office”. I then thought that not betting had actually been a wise move (and like many wise moves, this one was due to laziness). And then the Internet exploded with pain.



I have twice had the opportunity to talk about responsibility and things related to it, and I should probably sum up not only my own but also others’ thoughts on the topic.

The first, and foremost, requirement for responsibility is the possibility to consciously decide on an action. Not deciding is also a decision, but being unable to decide has nothing to do with responsibility. Hence, learning responsible behavior cannot be done by passively observing other people, their actions, or the results of those actions. This does not mean that you don’t have to observe; it means that you cannot be called a responsible person just because you have visited a (generalized) museum, read some books, and know some more or less relevant facts.

It’s a little like math: responsibility is an ability. You are not a math graduate just because you can cite the Banach-Tarski result at cocktail parties. Instead, you are a math graduate because you know how to apply your knowledge to new problems. Applying the same reasoning to responsible behavior, we get that being responsible means having an idea of what the results of your actions will be, making a choice that will benefit the people you care about, and not denying accountability.

One can see that in this reasoning, the group of beneficiaries is not clearly defined. Moral or economic imperatives may alter the definition of benefit. The core, however, stays the same: You have several options, you choose the Right Thing™, you accept the outcome.

To sum up, “let’s look at others’ experiences to learn responsible behavior” misses the point completely.

Communication hardness

This is not a post on computational complexity. (I can write one, though, and even on communication.)

There have been several incidents in my life that follow a pattern, and I should probably summarize them, if only to think about it. It has happened to me several times that I was trying to convey to another person a thought, an idea, or a concept and was utterly failing at the task. It has taken me hours to clarify what I meant, what I wanted to say, and what, for me, the logical implications were. In the end, after the task was done and the idea communicated (or so I thought), my first reaction was “Oh wow, this was hard. I think I need a drink now”.

Now, one could conclude that I am simply incapable of communicating my thoughts, but this hypothesis is invalidated by contradicting observations. The simplest assumption that matches my observations is that it is, in fact, hard to communicate complex ideas; if the person I am talking to has a different intuition (even for the same problem!), then explanations that are completely clear to me may come across as confusing.

This is very, very sad. It increases the communication overhead, it reduces the flow of ideas, and it makes communication sometimes rather frustrating. Furthermore, it constrains the number of people you have fun talking to. On the other hand, this is a very good reason to appreciate those people more.

One decade of not learning

Today is a remarkable anniversary.

On October 9, 2006, a seismic event originating somewhere on the Korean peninsula exposed a lot of interesting facts about political, economic, and military experts. The event itself was quickly characterized as an explosion, and several explanations were proposed.

  • North Korea has tested a nuclear device
  • North Korea has ignited a large bomb
  • North Korea has tested a nuclear device, but it failed to detonate properly

The latter two were by far the most popular, as it seemed unimaginable how these hungry, backwards, Juche-hailing and ideologically incompetent people could ever design such a technical masterpiece. Ten days later, the United States confirmed that the event originated from a 0.8-kiloton nuclear explosion. Ten years later, the public perception of North Korea is by and large still where it was back in 2006; people even make epic movies about it.

Now, saying “confirmation bias” would be just casting a spell and hoping it magically explains everything. I think this effect has more components to it.

First, it seems that historical scale is not easy to develop an accurate intuition for. For example, all the cool technical advances in air and space travel are not that recent: the first flight of the Concorde (1969) lies only 66 years after the Wright brothers’ plane, and the first satellite flew 60(!) years ago. From this point of view, it is not entirely unintuitive that even with a 40-year technological handicap, one should be capable of creating rockets and nuclear weapons. (Just to remind you, 1966 is the era of the Saturn V, the XB-70 and the SR-71.) This makes such developments a matter of resources and engineering capability.
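The year arithmetic behind this intuition is easy to check (a quick illustration; 2016 is taken as the reference year, since the post marks the tenth anniversary of the 2006 test):

```python
# Rough timeline arithmetic for air and space travel (illustration only).
wright_flyer = 1903     # Wright brothers' first powered flight
concorde = 1969         # Concorde's first flight
sputnik = 1957          # first artificial satellite
reference_year = 2016   # assumed year of writing

gap_concorde_to_wright = concorde - wright_flyer   # 66 years
gap_concorde_to_now = reference_year - concorde    # 47 years
satellite_age = reference_year - sputnik           # 59 years, i.e. roughly 60
```

Barely one human lifetime separates the Wright Flyer from supersonic passenger aircraft, which is the point: this kind of engineering is old enough to be reproducible by a determined state.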

Second, there is the question of ideology and existing stereotypes. Clearly, North Korea is not a nice place to live. Clearly, the state exerts far more pressure and control on the individual than anyone would deem acceptable. But even if this has an influence on the competence of North Korean engineers (it obviously does), it remains somewhat questionable to flatly deny them the engineering capabilities of the 1960s. People get surprisingly agnostic when it comes to weapons.

Thirdly, there is the question of results. Ten years have passed, and has the perception of North Korea changed? It does not seem so. One can still make funny jokes about the Dear Leader and the failures of their space program, yet this does not change the facts. And the facts are that these guys are pretty close to having intercontinental ballistic missiles. Now would probably be a good moment to take them seriously.