Spring in Trondheim, for reals!

Since yesterday, there has been no real sunset, and soon the nights will be bright enough to make night walks enjoyable.

I have spent the last few days taking photos of the city; the weather was so good you could almost walk around in a t-shirt.

The Nidaros Cathedral.

In contrast to Germany, national flags appear everywhere in Norway.

Scandinavian melancholy. Pictures of nature + black and white filter = cover picture for any melodic metal band.

Mandatory pictures from the Bakke bridge are mandatory:

A monument to sailors.

Philosophy ahead, brace yourself!

At some point in the past, I discovered a wonderful, fully general argument against (some kinds of) biological research. The argument goes like this: “People should not be allowed to play God”. This sounds great, looks great, and gives you an awesome +10 bonus to wisdom. But what does it actually mean?

This sentence implies that there are matters puny humans are not allowed to investigate, because meddling with them carries a responsibility nobody can be trusted with and lies beyond the boundary of the socially acceptable. But that sounds patronizing and cold, right? And it opens the road to uneasy questions like “Where is the line?” that disrupt the generality of your shiny argument; note that in the “playing God” wording, the decision is (implied to be) deferred to God. The sentence carries all the premises of Christianity with it: that the world has been purposefully created by a deity, that humans are not deities, and that humans are (unlike deities) not responsible.

I shall not discuss whether deities exist; in this context, it is mostly moot. The real problem here is that constructive arguments (like the lack of responsibility and control, and what one can do about it) are replaced by irrational, faith-like constructs. That is not okay, since the discussion is about mundane, scientific matters and has to be conducted via rational arguments, not via media campaigning and fast-food philosophy.

Last days of classes…

…were last week. Now I am officially entitled to have some fun, finish one last lab, and prepare for the exam. My time in Trondheim is slowly coming to an end, and it feels rather surreal.

On the positive side, the days are really long now. The first sunlight is visible at 3 AM, and sunset is somewhere around 11 PM. I am looking forward to virtually nonstop sunlight and hopefully some night walks 🙂 The only downside is my window, which faces east, which means that I wake up to the sun shining into my eyes.

Optimization Heuristics, part III: The bitter pill

I have previously told you that the ways in which Nature works lead to surprising and wonderful results: evolution produces human biochemistry, thermodynamics produces beautiful crystals, and so on.

I did not, however, tell you that it happens fast. Because it does not.

Homo sapiens took several million years to evolve. The thermodynamic processes are enormously complex and happen on a vastly stretched time scale. And even then, the results are far from optimal. Because, seriously, four blood types? The whole mammalian biochemistry? The blind spot? No engineer would ever design it that way.

The main problem in the heuristics business is that even the most sophisticated nature-inspired process is just another clever strategy for randomly poking around in the search space. The problems (especially the complicated, we-have-no-idea-how-to-solve-it ones) have the rather bad habit of not behaving the way you want them to behave (because otherwise you would just solve them in a traditional fashion).

There is a famous result, the No Free Lunch Theorem, which states that, averaged over all problems, all algorithms perform equally badly. Badly means here exponentially badly, the kind of bad you do not want to mess with. One may argue that this result is a bit overrated, because we do not want to consider all problems; but even when we restrict ourselves to “sane” problems with exactly one local maximum, it can still be shown that bad performance may occur, which makes “hard” guarantees on optimization time rather complicated to obtain. There are promising results for fluffy toy problems where the probabilistic machinery is not too awful. Still, for any real problem (like optimizing antenna designs) there are no theoretical bounds; but then again, for this kind of problem, where there is little knowledge about the problem structure, any solution strategy that works is better than none.

Optimization Heuristics, part II: Learning from Nature

Last time, I talked about what happens when we have no idea how to solve a particular problem, and I concluded that in that case it might be useful to turn our attention to nature’s way of solving problems. Today, I want to talk about the ideas we can extract from nature.

So, what does nature offer us? The most striking example is, of course, evolution. So it may seem like a good idea to simulate evolution. That usually means that you keep a population of objects in your search space and impose some kind of selection pressure on them, making it hard for bad solutions to procreate. It is also usual to add some kind of mutation, since without mutation you are not getting anywhere.
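
To make this concrete, here is a minimal sketch of such a simulated-evolution loop in Python. Everything in it is made up for illustration: the toy objective, the population size, and the Gaussian mutation are just one simple choice among many, not a reference implementation of any particular evolutionary algorithm.

```python
import random

def fitness(x):
    # Toy objective we pretend to know nothing about; its peak sits at x = 3.
    return -(x - 3.0) ** 2

def evolve(pop_size=20, generations=100, mutation_strength=0.5):
    # Start from a random population of candidate solutions.
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection pressure: only the better half gets to procreate.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Offspring are mutated copies of the survivors.
        children = [p + random.gauss(0, mutation_strength) for p in parents]
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # ends up somewhere close to 3.0
```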

This sounds simple enough. But there is a catch: you might have noticed that evolution was not exactly in a hurry to produce the current biological landscape. The point of evolution is that it works at all, not that it is efficient. Nevertheless, evolutionary algorithms have been around for forty years, and they work really well in practice. There is also a notorious example of nozzle shape optimization, which is one of the main selling points for artificial evolution: no sane engineer could imagine such nozzle shapes, but search heuristics have no notion of “sane”, “intuitive” or the like, which makes them exploit all aspects of the problem. That is both good and bad: good, since they can go ways no one else would go, and bad, since they will explore a lot of places in the search space that are not necessarily interesting. Think about the layout of the mammalian eye, and especially the blind spot: no mutation can undo it now, and future humans (unless they turn into cyborgs) are stuck with it, since the mutation distance between the current eye design and something more optimal is too huge.

Another great hit is simulated annealing, which is not among the hot trends anymore, but the idea is still cool: the search process is modeled after thermodynamic processes in the lattice structure of cooling metal. As the temperature sinks, the probability of changing the solution sinks too, and the solution stabilizes, much like a metal slab solidifies and reaches some thermodynamically optimal state as it cools. This approach has been thoroughly analyzed and has several cool properties: for some rather hard problems, it is expected to deliver optimal solutions. The key word is “expected”: since the process is stochastic, no hard guarantees can be made. Actually, the property is rather weak. But it is cool nevertheless.
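
For comparison, here is an equally rough sketch of the annealing idea, again in Python and again with an arbitrary toy objective and cooling schedule rather than anything canonical:

```python
import math
import random

def energy(x):
    # Toy quantity to minimize; stands in for whatever we actually care about.
    return (x - 3.0) ** 2

def anneal(start=0.0, temperature=10.0, cooling=0.99, steps=10000):
    current = start
    for _ in range(steps):
        candidate = current + random.gauss(0, 1)
        delta = energy(candidate) - energy(current)
        # Better solutions are always accepted; worse ones only with a probability
        # that shrinks as the temperature sinks, like a cooling metal settling down.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
        temperature *= cooling
    return current

print(anneal())  # settles somewhere close to 3.0
```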

A lot of other heuristics have been developed: DNA computing, particle swarm optimization, ant colony optimization, and a whole lot of other techniques that borrow ideas from processes occurring in nature. But they all come at a price, because all of them have an ugly side, about which I will talk in the next post.

Dancing show

This will not be about math 😉

I was volunteered (yes, exactly so) into participating in a dancing show at Studentensamfundet (which is something like a big student activity house; also totally worth visiting). It was great (after four hours of going through the choreography and studying the music, that is), and it reminded me a bit of my graduation from school, only better, because back then the dancing was rather… limited.

So yes, I enjoyed it a lot (and so did my partner), although I was nervous before the beginning (think pre-exam excitement); afterwards, part of the dancing society sort-of celebrated the performance in a rather cute Italian place. In hindsight, it was a really awesome evening.

(It came to my notice that there is a video of the whole show and I’ll be damned if I don’t get my hands on it.)

PSA, For Science! (Optimization Heuristics, part one)

One of my next assignments is to write a (rather short, by my standards) essay on artificial immune systems. In the meantime, I will probably dump some of the background information (in a readable fashion) on you, my dear reader. Mostly for Science, but otherwise just to give my explanation skills a try. As I am not allowed to disclose my essay before the deadline, I will write not quite about the topic itself, but about some interesting things from its general context.

Optimization Heuristics, part one, or How to stop worrying and start relying on Nature

What do I mean by “optimization heuristics”? Or better, what do I mean when I talk about “optimization”? Optimization is a problem from the vast domain of mathematics, where one often needs to find some value x which causes a function f to achieve its maximal value. This boring stuff suddenly becomes very exciting when f is your income. Or the efficiency of your car’s engine. Or something else entirely.
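
In the usual notation (just to pin the idea down, nothing more), the task reads:

```latex
% Find the input x* from the set S of allowed inputs
% that makes the objective f as large as possible.
x^{*} = \arg\max_{x \in S} f(x)
```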

As you might still remember from school, optimization was this task of finding the extrema of some awfully boring function. And you had to differentiate once, twice, thrice to be able to talk about the behavior of the function in question. So, what is the point of researching optimization if any schoolchild with some knowledge of calculus can do it? Surely it cannot be so easy.
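
As a toy reminder of that schoolbook recipe, applied to a made-up function f(x) = -(x - 3)^2 + 5:

```latex
% Differentiate, set to zero, check the second derivative:
f'(x) = -2(x - 3) = 0 \;\Rightarrow\; x = 3, \qquad f''(3) = -2 < 0,
% so x = 3 is a maximum, with value f(3) = 5.
```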

The sad part is that not only is it not easy, it is very, very, very complicated. Your income does not behave like a nice polynomial function (otherwise you would have figured out how to optimize it into the skies), and neither do the flow properties of the air in turbines. In some cases, we cannot even write down the function in question in a form that we can analyze on paper.

This leaves us with a heap of unresolved questions. Surely, we know that there is some function which dictates the behavior of, say, a turbine. We can simulate the turbine. We can try various configurations and compare them. And that is it. This general kind of problem is called black-box optimization, since the details of the function to optimize are unknown to us; f is said to be a black box.
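
In code, the black box is nothing more than a function you may call but not look into. The following stand-in is entirely hypothetical (there is no real simulator behind it); it only illustrates the interface we are stuck with:

```python
def evaluate_turbine(configuration):
    """Hypothetical stand-in for an expensive black-box simulation.

    In reality this would launch a long-running simulation and report, say,
    the turbine's efficiency; here it just computes a made-up score so the
    snippet runs. Calling it is the *only* access we have to f.
    """
    # Pretend efficiency peaks at some unknown "sweet spot" configuration.
    sweet_spot = [0.7, 0.2, 0.5]
    return -sum((c - s) ** 2 for c, s in zip(configuration, sweet_spot))
```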

What we can do now is try different strategies for finding the optimal input to f. The most trivial one is to poke randomly into the set of allowed input values (called the search space, for intuitive reasons) and compare the results. This works with humans, so why should it not work with machines? The main problem: this kind of search strategy can lead to long waiting times. Imagine yourself searching for a book on your friend’s bookshelf. You can jump around randomly, sure, but there almost surely is some system in which the books are arranged, right? Why not just search from the beginning? Well, this might work, but the bookshelf is large, and you can expect to go through half of it before you find the right book.
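
The “poke randomly and keep the best” strategy, written out against the toy evaluate_turbine stand-in from the previous sketch:

```python
import random

def random_search(evaluate, dimensions=3, budget=1000):
    best_config, best_score = None, float("-inf")
    for _ in range(budget):
        # Poke at a random point of the search space...
        candidate = [random.random() for _ in range(dimensions)]
        score = evaluate(candidate)
        # ...and remember the best thing we have stumbled upon so far.
        if score > best_score:
            best_config, best_score = candidate, score
    return best_config, best_score

print(random_search(evaluate_turbine))
```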

We have to keep in mind that we have just faced an inherent problem: any search strategy in a black-box scenario might lead to long optimization times, because what we are doing here is basically just poking into the search space with some more or less smart strategy. But this is a problem of the scenario itself, and there is nothing you can do about it once you confine yourself to no knowledge about the behavior of f whatsoever.

So, what can we do if we face a difficult problem? We might look for already existing and working solutions. One of the first things that comes to mind is that somewhere, somehow, something similar happened in nature. The main problem with this is that nature has no optimization goals. But some natural processes behave like optimization processes. So we could try to copy strategies from nature just because they seem to work, and hope that they will work for our problems, too.

(To be continued)

I stand corrected…

I have been bashing Cyberbotics lately. I just want to note that their customer support, even for non-premium customers, was unexpectedly good. They fixed my problem working 24/7, and I am very thankful for that.

The problem with simulators

(This will be a university-related rant)

The problem with simulators is… well, the problem with those enterprise-grade robot simulators is that they are a niche product and come with all the disadvantages of one. For instance, there are no online discussions of particular issues, since the user base is very small and consists of academia and enterprise customers.

The second problem is that the software is buggy. Always. Something will not work; in my case, the robot’s light sensors are returning bogus values. And since no discussions are available, you have to rely on customer support, which is never available in the hours when you actually experience failures. Which means that “Our support team will get back to you about it within a couple of business days.” is not a satisfying answer in any respect.

tl;dr: Someday, the whole Cyberbotics staff will burn in hell.

Northern lights II

And this, ladies and gentlemen, is how cool the northern lights can look if you have steady hands and a camera that plays along.