I have been writing some code lately (it goes with the job description), and one useful, extremely sensible piece of advice for anyone doing such a thing is:
Make interfaces easy to use and hard to misuse
Let’s consider an example of why this advice is useful. Suppose you are doing something completely unrelated to software, such as cooking. What you expect from your favorite tool, say, a knife, is that it cuts the things you want to cut (such as vegetables) and does not cut the things you do not want to cut (such as your hand). Hence the knife has a handle that allows you to hold it safely and a blade that actually cuts.
In software engineering, the idea is the same. You present the user of your library with an interface that allows them to do what the library is good for and makes it hard to break internal assumptions. In the otherwise quite scary world of numerics and optimizers, the optimizer interface lets you describe your optimization problem and push the equivalent of a big flashy button labeled “SOLVE”. At no point are you allowed to change the internal state of the solver, play around with magic values, or alter the optimization routine in any way other than what the interface allows.
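Such an interface might look like the following minimal sketch (all names and the descent routine are hypothetical, invented here for illustration): the caller describes the problem, presses the SOLVE button, and cannot touch the internals.

```python
class Solver:
    """A deliberately small surface: describe the problem, press SOLVE."""

    def __init__(self, objective, initial_guess, tolerance=1e-8):
        if tolerance <= 0:
            raise ValueError("tolerance must be positive")
        self._objective = objective     # the problem description
        self._x = float(initial_guess)
        self._tolerance = tolerance     # validated once, then kept private

    def solve(self, max_iterations=1000):
        """The big flashy SOLVE button: a fixed routine the caller
        cannot rewire mid-run."""
        for _ in range(max_iterations):
            if abs(self._step()) < self._tolerance:
                break
        return self._x

    def _step(self):
        # Internal detail: one crude finite-difference descent step.
        h = 1e-6
        grad = (self._objective(self._x + h)
                - self._objective(self._x - h)) / (2 * h)
        self._x -= 0.1 * grad
        return 0.1 * grad


# Usage: minimize (x - 3)^2. Nothing else is exposed.
best = Solver(lambda x: (x - 3) ** 2, initial_guess=0.0).solve()
```

The point is not the (deliberately naive) numerics but the shape of the surface: one constructor that validates its inputs, one button, no knobs into the machinery.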
Writing good library code is, to a non-trivial extent, designing good interfaces. Something similar applies to tooling, programming languages, and hardware. This is one of the reasons we have cables whose plugs are either obviously asymmetrical or fully symmetrical in design, for example. It is also one of the reasons USB-C and Lightning are superior connectors: they are really, really hard to plug into the socket the wrong way.
However, in the software world, tools and libraries evolve and gain more features. They accumulate flexibility, configurability, and other nice shiny features until they become Turing-complete. And at this stage, it becomes very easy to misuse them. Any tool that allows custom, unbounded scripting evolves into a science of its own, and stacking several of these tools on top of each other leads to new job descriptions, because everything becomes so unfathomably complex.
So, maybe some features are better left unimplemented.
This could be your typical rant against abstraction layers (although I would not mind you joining my personal crusade against them), but there is another thought that has been nagging me. Returning to the kitchen example, let us imagine a world where all you have is a fork. Assume even that you can have as many forks as you want.
Forks are a good invention. They are fine instruments, capable of many things: with enough creativity you can make a spoon out of two forks, you can sharpen a fork so that it becomes capable of cutting, and you might even beat an egg white with one.
However, daily cooking beyond eating pre-cooked meals would become a challenge. Cutting off a piece of cheese for your typical breakfast would be a nightmare. Dicing vegetables, or eating soup or any other liquid food, would be quite an exercise in applied creativity.
Naturally, kitchen operations would gain significantly in complexity. Creative solutions would have to be engineered. People would want their daily share of Geschnetzeltes, and they would receive it, at a horrendous price. Cooks and kitchen engineers would take pride in how far they are able to go to bring the desired foods to the tables of the (paying) masses. Some of them would take pride in the complexity of their solutions, in the number of forks used to make a good Bolognese.
“But wait”, I hear you say, “this is utter bullshit. Why should we constrain ourselves to just one instrument? This is a contrived example. Naturally people would evolve tools better suited for tasks like cutting. You are describing a non-problem.”
In the software world, “make things hard to misuse” plus the usual proneness to hype ends up in decisions to use The One And Only Tech Stack™ (insert your favorite here) that is Designed To Be Used By Everyone™ and comes with its share of opinionated assumptions about how things are meant to be done. Naturally, “good design” here means that things can be done in other ways, but are not meant to be. Management and fellow engineers are no strangers to hype (just think of it: we were promised automated code generation over fifteen years ago, and most software jobs still revolve around passing JSON back and forth with the occasional seasoning of SQL), so they often jump on the new technology train even if the only new thing is the packaging (yes, Go, I’m looking at you right here; you’re just Java 1.4 in disguise).
Suppose now, for a moment, that the requirements are extended in such a way that the assumptions about what’s right and what’s wrong no longer hold. Then the real fun begins. Since one somehow has to solve the problem at hand, software engineers need to explore the fun, creative ways of problem-solving, including, but not limited to, using a pre-processor, inventing a whole new language on top of the one that does not satisfy the needs, using code generators, and so on. This adds an additional layer of complexity, a source of pride for some (look how complex my problem is!) and a source of headache for others. I do not want to shame anyone who actually has to work down in the trenches and engineer knives out of forks, but in my eyes it is indefensible to normalize or glorify this practice: it is work that has to be done only because there are no better tools.
Where does this leave us? I started with “Make interfaces easy to use and hard to misuse”, but now I think I have something to add. My version of this rule would be:
Make interfaces easy to use, hard to misuse, and discourage misuse by being honest about your assumptions and offering a comprehensive interface.
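Concretely, “being honest about your assumptions” can be as simple as stating them where the user will read them and failing loudly when they are violated, rather than silently doing the wrong thing. A hypothetical sketch, staying with the kitchen metaphor (all names here are invented for illustration):

```python
def dice(ingredient, cube_size_mm=10):
    """Dice a solid ingredient into cubes.

    Stated assumptions (honest, not hidden):
      - the ingredient is solid; liquids cannot be diced
      - cube_size_mm lies between 1 and 50
    """
    if ingredient.get("state") != "solid":
        raise ValueError(
            f"cannot dice a {ingredient.get('state')} ingredient; "
            "this tool only handles solids"
        )
    if not 1 <= cube_size_mm <= 50:
        raise ValueError("cube_size_mm must be in [1, 50]")
    return {"name": ingredient["name"], "form": f"{cube_size_mm}mm cubes"}


# The happy path works; a violated assumption fails loudly and early.
diced = dice({"name": "carrot", "state": "solid"})
```

The docstring admits what the tool cannot do, and the error message points out that the assumption, not the user, is the limit; that is the difference between a knife with a handle and a sharpened fork.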