But what problems are worth solving?

May 14, 2024, by Cameron Witkowski

A semester has passed since the birth of the Society, and it’s a good moment to reflect. Already in our first series of meetings, many of the central themes surrounding intelligence entered the scene. One theme in particular, though, seemed to lurk everywhere we turned…

     We first considered intelligence from the perspective of problem-solving. This general framework underlies much of the thought in the machine learning field, including Francois Chollet’s influential measure of intelligence, described as “skill-acquisition efficiency” [1]. We investigated a number of components missing from the problem-solving view, such as emotion, consciousness, and value. Whether these components are necessary for intelligence can be debated, but they certainly aren’t included in the naive picture of “general problem-solving ability”. Most damning and central was the question: what problems are worth solving? Or what skills are worth acquiring? We didn’t seem to have any framework that could appropriately answer this. How could the importance of problems possibly be modeled?
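     To make that phrase a little more concrete, here is one loose way “skill-acquisition efficiency” could be read as a computation: skill gained per unit of priors and experience consumed. To be clear, this is an illustrative sketch of the intuition, not Chollet’s actual formalism from [1] (which, among other things, weights tasks by generalization difficulty); the function and the numbers below are hypothetical.

```python
# A loose, illustrative reading of "skill-acquisition efficiency":
# skill gained per unit of priors + experience spent. This is NOT the formal
# measure from Chollet [1]; the function and numbers are hypothetical.

def skill_acquisition_efficiency(skill_gained: float,
                                 priors_used: float,
                                 experience_used: float) -> float:
    """Higher means more skill acquired from less built-in knowledge and less data."""
    return skill_gained / (priors_used + experience_used)

# Two hypothetical systems reaching the same skill level on some task:
narrow = skill_acquisition_efficiency(skill_gained=0.9, priors_used=5.0, experience_used=100.0)
general = skill_acquisition_efficiency(skill_gained=0.9, priors_used=1.0, experience_used=10.0)

print(narrow, general)  # the second system is "more intelligent" in this loose sense
```

     Even on this generous reading, the open question stands out: the measure tells us how efficiently a system acquires a skill, but says nothing about which skills were worth acquiring in the first place.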

     To explore further, we dug deeper into a few central concepts we felt were underappreciated: imagination, emotion, and common sense—we didn’t know it at the time, but each of these would end up playing on the very same theme. What would it take to build an AI capable of imagination, emotion, or common sense? I’ll mention just a few of the interesting insights we came across.

     Imagination is actually a very general ability that includes counterfactual reasoning, planning, and learning from the past (i.e., wishing you had done something differently). It includes any image or idea you might produce that is not forced upon you by your senses. Interestingly (though it seems obvious in hindsight), everything you imagine must be some conglomeration or reshuffling of your previous experiences—a point eloquently put by Victor Gao. If I ask you to imagine a new color, are you really able to? Of course, it’s possible to imagine a color halfway between blue and green, or any other two colors you know. But a completely new color is outside the realm of even the imagination. Indeed, everything we imagine is like this.

     Imagination can be grounded in the discourse on machine learning through the concept of self-play, a connection pointed out by Sheral Kumar. Typically, we train systems in a supervised or semi-supervised framework by presenting loads of human-generated or expert-curated data. But certain types of systems, such as RL agents, can generate training data for themselves. AlphaZero, AlphaGo, and more recently AlphaGeometry are all great examples of the success this approach can bring.
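     To make the structure of self-play concrete, here is a minimal sketch of the loop: an agent plays against a copy of itself, and the outcomes it generates become its own training signal. The “game” is deliberately trivial, and every name and constant is an illustrative assumption of mine; this is not how AlphaZero or its successors are actually implemented.

```python
import random
from collections import defaultdict

# Toy self-play loop: the agent generates its own training data by playing a copy
# of itself. The game (both players pick a digit; the higher digit wins) is a
# placeholder chosen for brevity; the point is the loop structure, not the game.

value = defaultdict(float)      # estimated value of each move, learned only from self-play
ACTIONS = range(10)

def pick(epsilon=0.2):
    """Mostly-greedy move selection with a little exploration."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[a])

for episode in range(2000):
    a1, a2 = pick(), pick()                      # the agent plays against a copy of itself
    r1 = 1.0 if a1 > a2 else (0.0 if a1 == a2 else -1.0)
    value[a1] += 0.05 * (r1 - value[a1])         # the self-generated outcome is the training signal
    value[a2] += 0.05 * (-r1 - value[a2])

print(max(ACTIONS, key=lambda a: value[a]))      # should settle on the best move (9), with no human data
```

     No human labels a single example here; the agent’s own play is the dataset. That is the sense in which self-play looks like a mechanical cousin of imagination.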

     Clearly, imagination is worthwhile, and we can learn an immense amount by considering alternate sequences of events. But what in particular is worth imagining? Why don’t I spend this very moment imagining 13 orange rhinos, or a palm tree that grew in a weird L-shape? Why are certain things salient, interesting, and valuable to imagine, and others not? When we get right down to it, this question points to the very foundation of imagination—its driving force—and it’s essentially the same one that came up before: which problems are worth solving?

     Next we considered emotion, and the critical role it plays in intelligence. Will AGI have emotions, and should it? Are emotions important to intelligence, or simply an unnecessary epiphenomenon? One crucial insight we came upon is that, no matter how rational we believe ourselves to be, each of us is driven by emotion. The decisions we actually make, the paths we actually head down in life—these are consequences of our desires, ambitions, fears, hopes, loves and hates.

     But before I say any more, we must recognize the important difference between showing emotion and actually feeling it, a point made by Sheral Kumar and Liam Wall. Psychopaths and language models can clearly fake emotions—presenting the appearance of them without actually feeling them. But what does this difference actually consist in?

     While posing this question introduces thorny issues about consciousness and subjective experience, one practical inroad is to consider: when do we feel emotion strongly, versus when is it insignificant and hardly noticeable? As a first sketch, we might say we feel most strongly about the things we care about—our lives, our prized possessions, our loved ones, our status and reputation. All of a sudden we come back to what is now a recurring theme: what do we care about? What is important after all?

     Perhaps common sense will take us in a separate direction and get us away from this problem of meaning. We can treat common sense simply as a set of skills or abilities shared widely and useful for frequently occurring tasks. But from the outset, it seems common sense is inextricably tied to task completion, or problem-solving. A point made by Juan Rojas and Harsh Grover was that these problems are somewhat dependent on the person—for a criminal, it may be ‘common sense’ that they shouldn’t do blatant things that will get them caught. Here the ‘problem’ the criminal is trying to ‘solve’ is “how do I get away with it?” Not being caught is important to the criminal, but we still haven’t quite figured out: what is ‘importance’ in the first place?

     In our journey through this terrain of ideas, we kept running up against this one crucially important theme: what has value? What is significant, or meaningful? What resonates, and what do we feel strongly about? I argue not only that this question is largely ignored in machine learning discourse, but that we have no framework capable of answering where value comes from in the first place.

     It is difficult to overstate the depth of this question. I’m literally asking where value itself comes from. I’m asking: what should any of us care about? I’m asking: what is the purpose of our existence?

     At any other point in history, such a question could easily be dismissed as hopeless navel-gazing, or impractical idealism—to which the reply “put one foot in front of the next” would be appropriate. But today, as we build more and more lifelike intelligent systems, the question cannot be ignored. Because what will AGI care about? What will AGI value? What should it?

     Until we answer this question how can we hope to make any progress on the problem of alignment? What sense does it make to align AGI to human values, if we don’t fully understand what it is that we value, and where these values actually come from? If we don’t understand the nature of value itself?

     A naive answer to this question is to ground the discourse in physics (or evolutionary psychology). As physical beings maintaining our constitution against the forces of chaos, we must constantly resist the increase of entropy implied by the second law. Effectively, this means designing systems that can make better and better predictions, and will eventually lead to systems that can take actions to render the world predictable. In fact, you can derive quite a lot from the principle of least action. There’s a lot to be said for this view, and an enormous literature to support it [2][3][4]. I don’t have the space here to go too deeply into it, but I will point out what is (in my opinion) the main problem with casting meaning and value in terms of physics or evolution: the aesthetic.
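     Before turning to that problem, it is worth seeing what the “better and better predictions” idea looks like in miniature. The sketch below is a one-dimensional, predictive-coding-flavoured toy: an agent nudges an internal estimate to reduce its prediction error against noisy observations. It is only meant to give the flavour of the view; the variable names and constants are assumptions of mine, not anything taken from [2][3][4].

```python
import numpy as np

# Toy sketch of "systems that make better and better predictions": a single
# internal estimate (mu) is nudged toward each noisy observation, i.e. a
# gradient step on squared prediction error. Names and constants are illustrative.

rng = np.random.default_rng(0)

hidden_state = 2.5       # the (unknown) quantity the world keeps producing
mu = 0.0                 # the agent's current estimate / belief
learning_rate = 0.1

for step in range(50):
    observation = hidden_state + rng.normal(scale=0.3)   # noisy sensory sample
    prediction_error = observation - mu                  # "surprise", loosely speaking
    mu += learning_rate * prediction_error               # reduce future prediction error

print(f"final estimate: {mu:.2f} (true value {hidden_state})")
```

     A system like this gets better at predicting its world, and a more elaborate version could act on the world to make it more predictable. What it cannot tell us is why anything beyond predictability should matter, which is exactly where the aesthetic objection comes in.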

Man does not live on bread alone.

     If we’re minimizing entropy and maximizing efficiency, why ever take the scenic route? Why not beeline it straight from the start right to the destination? Why ever stop to smell the roses? Indeed, beauty seems quite orthogonal to the practical realities of life. Yet, what is life without beauty?

     When you look up from your screen and face reality again, ponder this. See the beauty all around you. Stop and really appreciate it, simply for what it is. To borrow the words of William Blake:


       To see a World in a Grain of Sand
       And a Heaven in a Wild Flower,
       Hold Infinity in the palm of your hand
       And Eternity in an hour.


And this is all after just one semester of the Society.


References

[1] F. Chollet, “On the Measure of Intelligence” [https://arxiv.org/abs/1911.01547]

[2] K. Friston et al., “Designing Ecosystems of Intelligence from First Principles” [https://arxiv.org/abs/2212.01354]

[3] M. Ramstead et al., “On Bayesian mechanics: a physics of and by beliefs” [https://royalsocietypublishing.org/doi/full/10.1098/rsfs.2022.0029]

[4] K. Friston et al., “The free energy principle made simpler but not too simple” [https://arxiv.org/abs/2201.06387]