Disclaimer: This is to be read as a personal position, not a hardcore philosophical work (for hardcore philosophical works, read Kant, who seems to say much the same, albeit differently). The text may therefore contain simplifications, logical shortcuts, and claims derived from personal or second-hand experience, and may not generalize to everyone.
Let’s talk about rules. Even in the age of Enlightenment, it is customary to treat rules as something holy and unalterable, like the Ten Commandments. Examples include the infamous “dating rules”, laws, and the (un)written social code. However, it is important to remember that rules have a purpose: they constrain everyone’s actions and thus impose a bound on entropy. Rules are a good thing because they limit the space of possibilities and let you concentrate on the “allowed” alternatives; rules make others’ behavior predictable.
This means that the point of rules is to follow them cooperatively. Cooperation seems to be an important trait: the results of Ultimatum and Dictator games show a culture-independent human tendency to cooperate and to punish non-cooperative behavior. In short, in the Ultimatum game player A proposes how to divide a pile of, say, gold, and player B may accept or decline the offer; if B declines, nobody gets anything. It turns out that Bs decline offers of less than roughly 30% of the pile, and As tend to offer more than the minimal nonzero amount. From the game-theoretic perspective, neither policy is optimal: B should be happy with anything at all, and A should exploit this. Even more interestingly, in the Dictator game B cannot decline, yet A still offers B a nonzero sum just for being there. I don’t want to get much into the nature vs. nurture debate (which is at this point rather irrelevant), but the results seem reproducible across cultures, and the most popular interpretation of them is that
1. Cooperation is something we expect from human beings.
2. Non-cooperative behavior is punished in order to enable (1).
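The Ultimatum game logic above can be sketched in a few lines of code. This is a minimal illustration, not a model from the literature; the ~30% rejection threshold is the empirical regularity mentioned above, and the pile size and offers are made-up numbers:

```python
def ultimatum(pile, offer, accept_threshold):
    """Player A offers `offer` out of `pile`; B accepts iff the offer
    meets B's threshold (a fraction of the pile).
    Returns (A's payoff, B's payoff)."""
    if offer >= accept_threshold * pile:
        return pile - offer, offer
    return 0, 0  # B declines: nobody gets anything

# Game-theoretically "rational" play: B accepts any nonzero offer,
# so A offers the minimum and keeps almost everything.
print(ultimatum(100, 1, accept_threshold=0.0))   # → (99, 1)

# Empirically observed play: Bs reject offers below ~30%,
# so a minimal offer leaves both players with nothing...
print(ultimatum(100, 1, accept_threshold=0.3))   # → (0, 0)

# ...which is why As learn to offer substantially more.
print(ultimatum(100, 40, accept_threshold=0.3))  # → (60, 40)
```

The punchline is visible in the payoffs: once B’s willingness to punish is real, A’s “irrationally generous” offer is the one that actually pays.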
All this brings me to two conclusions: first, there is no actual point in rules outside of a social context, and, second, cooperative behavior can and probably should be enforced. This means there is no point in following rules just for the beauty of it if you cannot expect cooperation. Thus, if you expect or experience non-cooperative behavior, it is in your best interest to stop following the part of your personal rulebook that forces you to cooperate. This is not to be interpreted as a ratchet mechanism; you can still revert to a more cooperative mode when your expectations change. Furthermore, there is no point in following rules that affect only the cooperating parties themselves: no external costs means no social context, which in turn means no observable difference. In short, rules are constraints, and your goals are what you want to reach under those constraints; reaching them is obviously easier with fewer constraints, and there is no inherent goal that tells you “do not overstretch the constraints”.
There is, however, one issue with this reasoning. It goes like this: if no bounds are present, a human being will degrade into an animal, an inhumane being. So it can be sensible to keep a personal rulebook that is independent of external events, just to “be yourself” and not (d)evolve into a “wildling” state, whatever that is. This is the point where you get to define the bounds of human behavior yourself; the small print is that this is your own decision, and you are responsible for the consequences.
So, what is the bottom line? You can define bounds for your behavior on your own, but then you carry the burden of accountability. Beyond that, you can and should voluntarily cooperate with other sentient beings, if they want to cooperate too (and most human beings seem to). Where no cooperation is to be expected, you are left with whatever you consider the minimal complete set of “human behavior”.