Responsibility

I have twice had the opportunity to talk about responsibility and things related to it, so I should probably sum up not only my own thoughts on the topic but also other people’s.

The first, and foremost, requirement for responsibility is the ability to consciously decide on an action. Not deciding is also a decision, but being unable to decide has nothing to do with responsibility. Hence, responsible behavior cannot be learned by passively observing other people, the actions of other people, or the results of those actions. This does not mean that you don’t have to observe; it means that you cannot be called a responsible person just because you have visited a (generalized) museum, read some books, and know some more or less relevant facts.

It’s a little like math: responsibility is an ability. You are not a math graduate just because you can cite the Banach-Tarski result at cocktail parties. Instead, you are a math graduate because you know how to apply your knowledge to new problems. Applying the same reasoning to responsible behavior, we get that being responsible means having an idea of what the results of your actions will be, making a choice that will benefit the people you care about, and not denying accountability.

One can see that in this reasoning, the group of beneficiaries is not clearly defined. Moral or economic imperatives may alter the definition of benefit. The core, however, stays the same: You have several options, you choose the Right Thing™, you accept the outcome.

To sum up, “let’s look at others’ experiences to learn responsible behavior” misses the point completely.

Philosophy ahead, brace yourself!

At some point in the past, I discovered a wonderful, fully general argument against (some kinds of) biological research. The argument goes like this: “People should not be allowed to play God”. This sounds great, looks great, and gives you an awesome +10 bonus to wisdom. But what does it mean?

This sentence implies that there are matters puny humans are not allowed to investigate, because meddling with them carries a responsibility nobody can be trusted with and lies beyond the boundary of the socially acceptable. But that sounds patronizing and cold, right? And it opens the road to uneasy questions like “Where is the line?” that disrupt the generality of your shiny argument; note that in the “playing God” wording, the decision is (implied to be) deferred to God. The sentence carries all the premises of Christianity: that the world was purposefully created by a deity, that humans are no deities, and that humans are (unlike deities) not responsible.

I shall not discuss whether deities exist; in this context, it is mostly moot. The real problem here is that constructive arguments (like the lack of responsibility/control and what one can do about it) are replaced by irrational, faith-like constructs. Which is not okay, since the discussion is about mundane, scientific things and has to be conducted via rational arguments, not via media campaigning and fast-food philosophy.

Optimization Heuristics, part III: The bitter pill

I have previously told you that the ways in which Nature works lead to surprising and wonderful results: evolution produces human biochemistry, thermodynamics produces beautiful crystals, and so on.

I did not, however, tell you that it happens fast. Because it does not.

Homo sapiens took several million years to evolve. The thermodynamic processes are enormously complex and play out on a vastly stretched time scale. And even then, the results are far from optimal. Because, seriously, four blood types? The whole of mammalian biochemistry? The blind spot? No engineer would ever design something like that.

The main problem in the heuristics business is that even the most sophisticated nature-inspired process is just another clever strategy for randomly poking around in the search space. The problems (especially the complicated, we-have-no-idea-how-to-solve-them ones) have the rather bad habit of not behaving the way you want them to (because otherwise you’d just solve them in a traditional fashion). There is a famous result, called the No Free Lunch Theorem, which states that, averaged over all possible problems, all algorithms are equally bad. And bad here means exponentially bad, the kind of bad you do not want to mess with. One may argue that this result is a bit overrated, since we never actually want to consider all problems; but even if we restrict ourselves to “sane” problems with exactly one local maximum, bad performance can still be shown to occur, which makes “hard” guarantees on optimization time rather difficult to obtain. There are promising results for fluffy toy problems where the probabilistic machinery is not too awful. Still, for any real problem (like optimizing antenna designs) there are no theoretical bounds; but then again, when there is little knowledge about the problem structure, any solution strategy that works at all is better than none.
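
For the curious: the formal statement (roughly following the original Wolpert and Macready paper; treat the exact symbols as my paraphrase) says that for any two algorithms a_1 and a_2 and any number of evaluations m, the distribution of observed objective-value sequences d^y_m, summed over all possible objective functions f, is exactly the same:

    \[ \sum_{f} P(d^{y}_{m} \mid f, m, a_{1}) = \sum_{f} P(d^{y}_{m} \mid f, m, a_{2}). \]

Whatever an algorithm gains on some functions, it must lose on the others.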

Optimization Heuristics, part II: Learning from Nature

Last time, I talked about what happens when we have no idea how to solve a particular problem, and I concluded that in such a case it might be useful to turn our attention to nature’s way of solving problems. Today, I want to talk about the ideas we can extract from nature.

So, what does nature offer us? The most striking example is, of course, evolution. It may therefore seem like a good idea to simulate evolution. That usually means that you have a population of objects in your search space and impose some kind of selection pressure on them, making it hard for bad solutions to procreate. It is also usual to add some kind of mutation, since without mutation, you are not getting anywhere.
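
To make this a bit more concrete, here is a minimal sketch in Python (my own toy choices of problem and parameters, nothing canonical): a population of bit strings, truncation selection as the selection pressure, and bit-flip mutation, maximizing an embarrassingly simple “count the ones” function that stands in for a real, expensive objective.

    import random

    def fitness(candidate):
        # Toy objective ("OneMax"): count the ones in the bit string.
        # In a real application, this would be the expensive simulation.
        return sum(candidate)

    def mutate(candidate, rate=0.05):
        # Flip each bit independently with a small probability.
        return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

    def evolve(length=50, pop_size=30, generations=200):
        # Start from a random population of bit strings.
        population = [[random.randint(0, 1) for _ in range(length)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Selection pressure: only the better half gets to procreate.
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]
            # Offspring are mutated copies of randomly chosen survivors.
            children = [mutate(random.choice(parents))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    best = evolve()
    print(fitness(best), "ones out of", len(best))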

This sounds simple enough. But there is a catch: you might have noticed that, in producing the current biological landscape, evolution was not in any particular hurry. The point of evolution is that it works at all, not that it is efficient. Nevertheless, evolutionary algorithms have been around for forty years, and they do work really well in practice. There is also the notorious example of nozzle shape optimization, one of the main selling points for artificial evolution: no sane engineer could imagine such nozzle shapes, but search heuristics have no notion of “sane”, “intuitive” or the like, which lets them exploit every aspect of the problem. That is both good and bad: good, since they can go ways no one would go, and bad, since they will explore a lot of places in the search space that are not necessarily interesting. Think about the configuration of the mammalian eye, and especially the blind spot: no mutation can undo that now, and future humans (if they do not turn into cyborgs) are stuck with it, since the mutation distance between the current eye design and something more optimal is too large.

Another great hit is simulated annealing, which is not among the hot trends anymore, but the idea is still cool: the search process is modeled after the thermodynamic processes in the lattice structure of cooling metal. As the temperature sinks, the probability of changing the solution sinks with it, and the solution stabilizes, much like a metal slab solidifies and reaches some thermodynamically optimal state as it cools. This approach has been thoroughly analyzed and has several cool properties: for some rather hard problems, it is expected to deliver optimal solutions. The key word is “expected”: since the process is stochastic, no guarantees can be made. Actually, the property is rather weak. But it is nevertheless cool.
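
Again a minimal sketch in Python, with a made-up one-dimensional objective standing in for a real problem and a geometric cooling schedule, which is just one common choice among many:

    import math
    import random

    def energy(x):
        # Stand-in objective to be minimized; imagine something much nastier.
        return (x - 3.0) ** 2 + math.sin(5.0 * x)

    def simulated_annealing(start=0.0, temperature=10.0, cooling=0.995, steps=10000):
        current, current_energy = start, energy(start)
        for _ in range(steps):
            # Propose a small random change to the current solution.
            candidate = current + random.uniform(-0.5, 0.5)
            candidate_energy = energy(candidate)
            delta = candidate_energy - current_energy
            # Always accept improvements; accept worse solutions with a
            # probability that shrinks as the temperature sinks.
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                current, current_energy = candidate, candidate_energy
            # Cool down: the solution gradually "solidifies".
            temperature *= cooling
        return current, current_energy

    print(simulated_annealing())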

A lot of other heuristics have been developed: DNA computing, particle swarm optimization, ant colony optimization, and a whole lot of other techniques that borrow ideas from processes occurring in nature. But they all come at a price, because all of them have an ugly side, about which I will talk in the next post.

PSA, For Science! (Optimization Heuristics, part one)

One of my next assignments is to write a (rather short, by my standards) essay on artificial immune systems. In the meantime, I will probably dump some of the background information (in a readable fashion) on you, my dear reader. Mostly for Science, but also just to give my explanation skills a try. As I am not allowed to disclose my essay before the deadline, I will write not quite about the topic itself, but about some interesting things from its general context.

Optimization Heuristics, part one, or How to stop worrying and start relying on Nature

What do I mean by “optimization heuristics”? Or, better, what do I mean when I talk about “optimization”? Optimization is a problem from the vast domain of mathematics in which one needs to find some value x that causes a function f to achieve its maximal value. This boring stuff suddenly becomes very exciting when f is your income. Or the efficiency of your car’s engine. Or something else, completely unrelated.
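
In symbols, the whole task is nothing more than finding

    \[ x^{*} = \arg\max_{x \in S} f(x), \]

where S is the set of inputs we are allowed to try and f is whatever quantity we happen to care about.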

As you might still remember from school, optimization was this task of finding the extrema of some awfully boring function. And you had to differentiate once, twice, thrice to be able to talk about the behavior of the function in question. So, what is the point of researching optimization if any schoolchild with some knowledge of calculus can do it? Surely it cannot be that easy.
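
For instance, for something as tame as f(x) = -x^2 + 4x, the whole procedure is

    \[ f'(x) = -2x + 4 = 0 \;\Rightarrow\; x = 2, \qquad f''(2) = -2 < 0, \]

so x = 2 is a maximum with f(2) = 4, and no schoolchild breaks a sweat over it.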

The sad part is, not only is it not easy, it is also very, very, very complicated. Your income does not behave like a nice polynomial function (otherwise you’d have figured out how to optimize it into the skies), and neither do the flow properties of the air in a turbine. In some cases, we cannot even write the function down in a form we could analyze on paper.

This leaves us with a heap of unresolved questions. Surely, we know that there is some function which dictates the behavior of, say, a turbine. We can simulate the turbine. We can try various configurations and compare them. And that is it. This general kind of problem is called black-box optimization: since the details of the function to optimize are unknown to us, f is said to be a black box.

What we can do now is try different strategies for finding the optimal input for f. The most trivial one is to poke randomly into the set of allowed input values (called the search space, for intuitive reasons) and compare the results. This works for humans, so why should it not work for machines? The main problem: this kind of search strategy can lead to very long optimization times. Imagine yourself searching for a book on your friend’s bookshelf. You could jump around randomly, sure, but there almost surely is some system in which the books are arranged, right? Why not just search from the beginning? Well, that might work, but the bookshelf is large, and you can expect to go through half of it before you find the right book.
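
In code, the “poke randomly and keep the best” strategy is embarrassingly short. Here is a sketch in Python, where black_box is merely a placeholder for the expensive simulation we are not allowed to look inside:

    import random

    def black_box(x):
        # Placeholder: in reality, this would be a turbine simulation or
        # something similarly expensive that we can only evaluate, not inspect.
        return -(x[0] - 1.0) ** 2 - (x[1] + 2.0) ** 2

    def random_search(evaluations=1000, low=-10.0, high=10.0):
        best_x, best_value = None, float("-inf")
        for _ in range(evaluations):
            # Poke at a random point of the search space...
            x = [random.uniform(low, high), random.uniform(low, high)]
            value = black_box(x)
            # ...and remember it if it is the best seen so far.
            if value > best_value:
                best_x, best_value = x, value
        return best_x, best_value

    print(random_search())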

We have to keep in mind that we have just faced an inherent problem: any search strategy in a black-box scenario might lead to long optimization times, because what we are doing here is basically just poking into the search space with some more or less clever strategy. But this is a problem of the scenario itself, and there is nothing you can do about it once you confine yourself to knowing nothing about the behavior of f whatsoever.

So, what can we do if we face a difficult problem? We might look for already existing, working solutions. One of the first things that comes to mind is that somewhere, somehow, something similar has happened in nature. The main problem with this is that nature has no optimization goals. But some natural processes behave like optimization processes. So we could try to copy strategies from nature simply because they seem to work, and hope that they will work for our problems, too.

(To be continued)

Say NO to drugs and bad IDEs!

I guess I would be more productive if I spent the time it takes to write this post actually coding on either of the two projects I have going on, but I have to vent, or I’ll just go on swearing in real life.

So, one project involves writing generic neural network code and training it on robots. The first part is actually fun (if you can wait long enough and accept that your toy functions need a lot of time to get optimized). The second part is turning into a small nightmare. Why?

  • First, the idea is that the robots are tested in a virtual environment run by proprietary software (and the bad kind of proprietary: the kind developed by small companies with strict licensing policies) that needs to be connected to the license server all the time, for no reason other than authenticating your license.
  • Second, the software provides a development environment. Which sounds better than it actually is: it is just an editor with code highlighting and keyword completion. Think Kate or Notepad++. I did write somewhat non-trivial database applications in PHP, like, six years ago with nothing more than an advanced editor with code highlighting, but back then I was not bound by any deadline or other obligation (it was a “just for fun” project), and I later moved on to better-suited tools.
  • Third, the lack of any publicly available documentation makes me code by example, which is somewhat acceptable in a small context (as I just need to plug the sensors into the ANN and the ANN outputs into the robot’s driving system), but in general it is simply impossible, since all the cool libraries (like Qt) are unavailable and I am left alone with the STL and Boost.
  • Fourth, and most important: the run/test cycles are completely inadequate for a three-week project. Training an ANN for a robot means running the simulation in real time and praying that the ANN topology is just right and the stars stand in positions suited for such shamanic work.

That left me with one way out: I had to code and debug the ANN code separately. Which has its own fun parts, since if you have no idea what to expect from a dynamic system, you cannot tell for sure whether its behavior is the intended one. But at least the run/test cycles here are under a minute.
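
Purely as an illustration of what “debugging separately” boils down to (the real code is C++ against the simulator, so take this as a hypothetical Python sketch): build a tiny network with hand-picked weights, feed it known inputs, and compare the output against a value computed by hand. If the two ever disagree, the bug is in the network code, not in the robot.

    import math

    def neuron(weights, bias, inputs):
        # One neuron: weighted sum of the inputs plus bias, squashed by tanh.
        return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

    def layer(weight_rows, biases, inputs):
        # A fully connected layer is just one such neuron per output.
        return [neuron(w, b, inputs) for w, b in zip(weight_rows, biases)]

    # Tiny 2-2-1 network with fixed, hand-picked weights.
    hidden_w, hidden_b = [[0.5, -0.25], [1.0, 1.0]], [0.0, -0.5]
    output_w, output_b = [[1.0, -1.0]], [0.1]

    inputs = [1.0, 2.0]
    hidden = layer(hidden_w, hidden_b, inputs)
    output = layer(output_w, output_b, hidden)

    # Value computed by hand once, kept as a regression check.
    expected = math.tanh(math.tanh(0.0) - math.tanh(2.5) + 0.1)
    assert abs(output[0] - expected) < 1e-9
    print(output[0])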

(The music currently playing in my earphones, Kill Dash Nine by Monzy, reflects my feelings towards the Webots software package as a whole and the buddies who come up with such problems.)