Let me go a step further and posit a testable hypothesis.
Although I have not read the book in detail, Marvin Minsky appears to be partly correct in his “Society of Mind” theory of human intelligence: namely, that a human mind is, at bottom, a collection of interacting agents.
In particular, the human mind consists essentially of a collection of reinforcement learning agents with differing drives. Because these agents are bound together in a single body, and share the same basic need for sustenance to stay alive, they must necessarily learn to cooperate in order to achieve their goals.
This video, which I linked in the “Things of Your Day” thread, points to a vital piece of the puzzle:
In this light, reconsider also the somewhat discredited notion of the triune brain. Once you view its parts as intelligent in their own right, and recognise that their core differences lie not in their functions or specialties but in their motivations, things start to make much more sense.
The core hypothesis is simple: a human brain is a tightly-knit bundle of five(-ish) reinforcement learning agents, each intelligent in its own right (albeit to varying degrees), connected by several high-bandwidth channels of communication:
- The so-called “reptilian brain”, motivated primarily by the most basic desires of reproduction and sustenance.
- The limbic system, motivated by needs for friends and family, and playfulness.
- The “left brain”, motivated by the desire to explain things and to solve puzzles; cf. the Left Brain Interpreter hypothesis. Absent proper support from the other agents, the left brain has a serious failure mode of “believing your own bullshit”.
- The “right brain”, motivated by empathy and the desire to understand and be understood by others, i.e. to believe that which others believe, and/or to have others believe what it believes. Absent support from the other agents, the right brain has a serious failure mode of “believing other people’s bullshit”.
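To make the architecture concrete, here is a toy sketch of the hypothesis: several agents with differing drives sharing one action channel. Everything here is illustrative and invented for this post (the agent names, drives, actions, and the transition model are all assumptions); and for simplicity the agents do greedy utility voting rather than actual reinforcement learning — each scores candidate actions by how well they satisfy its own drive, and the shared body takes the action with the highest combined score, so cooperation falls out of the shared action channel rather than explicit negotiation.

```python
# Toy sketch of a "body" shared by several agents, each with its own drive.
# All agent names, drives, and numbers are illustrative assumptions, not
# taken from any established model. Greedy voting stands in for RL.

# Actions the shared body can take, and the drives of each agent.
ACTIONS = ["eat", "socialise", "solve_puzzle", "listen"]
AGENT_DRIVES = {
    "reptilian": "sustenance",    # basic survival needs
    "limbic": "affiliation",      # friends, family, play
    "left_brain": "explanation",  # explaining things, solving puzzles
    "right_brain": "empathy",     # understanding and being understood
}

# How each action changes each need (toy transition model).
EFFECTS = {
    "eat":          {"sustenance": +3},
    "socialise":    {"affiliation": +3, "empathy": +1},
    "solve_puzzle": {"explanation": +3},
    "listen":       {"empathy": +3, "affiliation": +1},
}

def agent_score(drive, state, action):
    """One agent's utility for an action: the gain to its own need,
    weighted by how depleted (urgent) that need currently is."""
    gain = EFFECTS[action].get(drive, 0)
    urgency = max(0, 10 - state[drive])
    return gain * urgency

def body_step(state):
    """Take the action with the highest summed score across all agents,
    then let every need decay by one."""
    best = max(ACTIONS, key=lambda a: sum(
        agent_score(d, state, a) for d in AGENT_DRIVES.values()))
    for need, delta in EFFECTS[best].items():
        state[need] = min(10, state[need] + delta)
    for need in state:
        state[need] = max(0, state[need] - 1)
    return best

# Start with sustenance badly depleted; the "reptilian" agent's urgency
# dominates the vote until the need is topped up, then other drives win.
state = {"sustenance": 2, "affiliation": 8, "explanation": 8, "empathy": 8}
history = [body_step(state) for _ in range(5)]
print(history)
```

The point of the sketch is only that no agent needs to model the others: because they all act through one body, the most urgent drive wins each round, which already produces turn-taking that looks like cooperation from the outside.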
It’s not too likely that I am exactly correct about the precise number of these agents, what their motivations are, or their localization in the brain. But I definitely think I’m onto something here.