A friend recently asked me why I find bounded optimality interesting. Here’s why:
We need a normative framework for how agents should act under computational pressure, because that is what the real world demands. In the real world, an agent should know not to deliberate for long when it is about to get hit by a car, but should certainly perform more computation before declaring war. (This is related to our work on bounded-optimal metareasoning!) See Rationality and Intelligence: A Brief Update (Russell 2014) for more on this point.
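To make the car-vs-war intuition concrete, here is a toy sketch of the metareasoning trade-off (my own illustration, not a model from the papers cited): pick how long to think by weighing the expected gain in decision quality against the cost of the time spent thinking. The functional forms and parameter names (`stakes`, `urgency`) are purely hypothetical assumptions for the sake of the example.

```python
import math

def optimal_thinking_time(stakes, urgency, max_steps=100):
    """Return the number of deliberation steps that maximizes net value.

    Illustrative assumptions:
      - quality gain from t steps: stakes * (1 - exp(-t / 10)),
        i.e. diminishing returns to extra thought;
      - cost of t steps: urgency * t, i.e. time is pricier when urgent.
    """
    def net_value(t):
        return stakes * (1 - math.exp(-t / 10)) - urgency * t
    return max(range(max_steps + 1), key=net_value)

# An agent about to be hit by a car: extreme urgency swamps the gains
# from further thought, so the best choice is to act almost immediately.
print(optimal_thinking_time(stakes=10, urgency=5))
# Deciding whether to declare war: enormous stakes, little time pressure,
# so much more deliberation pays off.
print(optimal_thinking_time(stakes=1000, urgency=1))
```

Under these assumptions the first call returns far fewer steps than the second, which is exactly the qualitative behavior a bounded-optimal agent should show.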
It’s an elegant framework because it provides “a converging paradigm for intelligence in brains, minds, and machines” that may allow for more transfer of insights between cognitive science and artificial intelligence (Gershman, Horvitz, and Tenenbaum 2015). For example, understanding the bounded-optimal solutions that humans use may help us create better approximation strategies for artificial agents under computational pressure. Conversely, when we have an optimal solution that an artificial agent can implement, we can ask what the bounded-optimal solution would be in the real-world environment humans inhabit, and check whether that is the kind of behavior humans actually exhibit.
It provides an appealing way to bridge the gap between Marr’s computational and algorithmic levels (Griffiths, Lieder, and Goodman 2014). As mentioned in the first point, bounded optimality is necessary as a normative framework for artificial agents because the costs of computation are an important factor in decision-making in real-world environments. But humans also face computational costs that arise from intrinsic biological bounds rather than from the environment, so an interesting question is: what fundamental limits on human intelligence arise from these bounds, and how close can human bounded optimality ever come to AI bounded optimality? Being able to take computational constraints into account at varying levels of abstraction may be useful for making progress on this question.
It can potentially be used as a more principled, versatile way to predict when people will use different types of approximations, which is useful in many applications: for example, understanding the circumstances under which an employee’s decisions are likely to be less accurate than usual, or helping an artificial agent decide how much to trust your “expert knowledge”.
I’m curious what other reasons people have for finding bounded optimality interesting, so please let me know. :)