#link #science [Link](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence)

The section that talks about Orange, Green, and Yellow is interesting. In the *Artificial Intelligence* section, he (David Chapman) explains why an AI that operates entirely from a rational approach is infeasible: it's just really hard to figure out what to do, or what approach to take, if you operate from an entirely rational perspective. There's some research referenced in Jonah Lehrer's book *How We Decide* (which I haven't read... yet?) showing that people are incapable of making decisions if they only use the rational part of their mind. Here's a quote [from a book summary](https://gulyani.com/book-review-notes-on-how-we-decide-by-jonah-lehrer/) I found:

> But interestingly, it’s not only life-or-death type situations where emotional decision-making leads to better outcomes. Lehrer points to studies where otherwise intelligent adults have become virtually incapable of making even the most trivial decisions, such as what to order for lunch, when their capacity for emotion is damaged as a result of a brain injury. This is because our feelings color our rational ideas (i.e. our known food preferences) by adding direction and intensity (i.e. I like both chicken and beef, but I feel like beef today).

I wish I knew what research Lehrer was referring to, because I'd love to read some of it, but I don't. This will have to be enough for now.

There is a connection here that is really interesting to me, and it's something that I've been relying on more and more frequently: my gut knows the right thing to do. *Right*, as everything, is a spectrum, but I seem to be at the point where my gut knows a pretty reasonable *right* way to do a lot of things that I run into in my life. And I rely on my gut heavily; that's what allows me to stop planning and spend my time *doing*.

The connection is that an AI can't be programmed such that it relies entirely on rationality. It's simply too hard to make any decision. Instead, AI needs emotions (which are effectively heuristics for making decisions) to help it decide what to do in a timely fashion. How do we know that's true? Humans are the perfect example (it's called *artificial* intelligence, after all), so let's take that parallel and run with it: humans need emotional input to make decisions, and therefore AIs do as well, and emotions act as heuristics (in my opinion) for making decisions, or for identifying the right method to make a rational decision. I doubt this is an original thought, as the Google results for "emotions as heuristics in artificial intelligence" indicate, but there may be something here that's worth pointing out.

This entire rabbit hole came from the last part of one of James Clear's weekly emails. Here's the quote:

> Writer **David Chapman** on how to improve your thinking: "Learn from fields very different from your own. They each have ways of thinking that can be useful at surprising times. Just learning to think like an anthropologist, a psychologist, and a philosopher will beneficially stretch your mind."
>
> _Source:_ [_How to Think Real Good_](https://click.convertkit-mail4.com/r8ug6grvresohp2r42u3/8ghqhohg96gxz4bk/aHR0cHM6Ly9tZXRhcmF0aW9uYWxpdHkuY29tL2hvdy10by10aGluaw==)

and, fittingly, that's where I'll end. This is a phenomenal example of applying research from brain-trauma psychology to AI, and re-applying the same principle to my everyday life.
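To make that idea concrete for myself, here's a minimal sketch in Python (every name and number is invented for illustration; this is only a caricature of the lunch example from the quote, not Lehrer's research): rational scoring alone leaves the options tied, and the affective bias supplies the "direction and intensity" that actually picks one.

```python
# Sketch of "emotions as decision heuristics" (invented names and numbers).
# Purely rational scoring leaves equally liked options tied, so reason alone
# never settles the matter; a cheap affective signal breaks the tie.

RATIONAL_SCORES = {"chicken": 0.8, "beef": 0.8}  # known preferences: a dead tie

def affective_bias(option, mood):
    """A cheap, fast gut signal, e.g. 'I feel like beef today'."""
    return mood.get(option, 0.0)

def decide(options, mood):
    # Rank rationally first; let feeling break the tie.
    return max(options, key=lambda o: (RATIONAL_SCORES[o], affective_bias(o, mood)))

print(decide(["chicken", "beef"], mood={"beef": 0.3}))  # -> beef
```

The point isn't the arithmetic; it's that the affective term is cheap to compute and always available, which is what makes the decision terminate at all.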
###### My summary for myself is:

*Start by learning from different fields and connect the ideas between them. They each have a set of incredibly efficient methods for solving problems in their disciplines, and those methods will almost certainly be based on underlying reasons that exist in other disciplines as well. As I learn more of the foundational techniques and methods from other fields, my fundamental faith and belief systems change and the way I perceive the world becomes more accurate. As that happens, my emotional responses begin to converge and I train my heuristics to work better. As my heuristics converge (which is a constantly evolving process), I can rely on them more to make decisions, which leaves me much more time to be present and engage with the world. That makes me feel alive.*

Wow. That is fascinating/insane. I feel like I just explained who I am in six sentences.

---

## The blog post

[![Ken Wilber, Boomeritis, and artificial intelligence](https://metarationality.com/images/metarationality/boomeritis_350x517.jpg)](https://www.amazon.com/dp/1590300084/?tag=meaningness-20)

It’s the perfect postmodern nightmare. You wake up to discover that you are the anti-hero character in a novel. Worse, it is a famously badly written novel. It is, in fact, an endlessly long philosophical diatribe _pretending_ to be a novel. And it uses all the [tiresome technical tricks](http://en.wikipedia.org/wiki/Postmodern_literature#Common_themes_and_techniques) of postmodern fiction. It is convolutedly self-referential; a novel about a novel that is an endlessly long philosophical diatribe pretending to be a novel about a novel about…

I’ve just read Ken Wilber’s [Boomeritis](https://www.amazon.com/dp/1590300084/?tag=meaningness-20). It’s all that.[1](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_1 "It’s also brilliant, inspiring, funny, and (in the end) touching. Two thumbs up.")

And it seems to be about me. I mean, me _personally_.

The book diagnoses the psychology of a generation. Many readers have said it is about them, in the sense that they are of that generation, and they discovered ruefully that _Boomeritis_ painted an accurate portrait. But the central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers [Continental philosophy](http://www.philosophicalgourmet.com/analytic.asp) and [social theory](http://en.wikipedia.org/wiki/Social_theory), realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints.

That describes precisely two people in the real world: me, and my sometime-collaborator [Phil Agre](http://polaris.gseis.ucla.edu/pagre/).

Do you know about “[delusions of reference](http://en.wikipedia.org/wiki/Ideas_of_reference)”? They are a form of “[patternicity](https://meaningness.com/pattern)”—seeing meaning where there is none. You believe that communications that have nothing to do with you are actually about you. It’s a typical symptom of schizophrenic psychosis.

Is _Boomeritis_ actually about me? Or is my suspicion that I am the model for its central character a sign of psychosis? A question probably only of interest to me—but I’ll return to it at the end of this page.

Mostly, instead, I’ll explain ways in which the novel, and my work at the MIT AI Lab, are relevant to _Meaningness_. All three concern the problem of the relationship of self and other.
They address the stalemate that has resulted from [confused](https://meaningness.com/fixation-and-denial "Confused stances try to avoid the anxiety of nebulosity through fixation and denial within a dimension of meaningness. [Click for details.]"), polarized ways of understanding separation and connection.

## Orange, green, yellow

_Boomeritis_ is an overview of “Spiral Dynamics”, which is a big fat hairy Theory of Life, The Universe, and Everything. I don’t like those, but the book gradually persuaded me that this one can be useful.

Spiral Dynamics contrasts three worldviews, which for some awful reason are called “orange,” “green,” and “yellow.”[2](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_2 "This is a gross simplification, and probably Spiral Dynamics geeks would object that I’m distorting their story out of recognition. For one thing, there are many more than three worldviews in the system.") You’ll find the first two, at least, familiar:

### Orange

- Rationality, science, technology, objectivity
- Materialist, capitalist, pragmatic, utilitarian
- Autonomy, independence, competition, results
- Planning, controls, contracts, procedural justice
- Detached, abstract, reductionistic, alienated
- European (intellectual) Enlightenment; modernism

Orange tends toward [dualism](https://meaningness.com/dualism "Dualism is the confused stance that everyone and everything is a clearly distinct, separate, independently-existing individual. Dualism denies connections and fixates boundaries. Compare monism, which fixates connections and denies boundaries. [Click for details.]"): the wrong idea that the self is totally distinct from the world.

### Green

- Spiritual, emotional, intuitive, subjective
- Relativist, pluralist; diversity, multi-culti, “political correctness”
- Consensus, dialog, community, process
- Harmony, healing, self-realization, social justice
- Connecting, supporting, sharing, togetherness
- Eastern (spiritual) Enlightenment; postmodernism

Green tends toward [monism](https://meaningness.com/monism "Monism is the confused stance that All is One; that my true self is mystically identified with the Cosmic Plan; that all religions and philosophies point to the same ultimate truth. Monism denies boundaries and fixates connections. Compare dualism, which fixates boundaries and denies connections. [Click for details.]"): the wrong idea that self and other are totally connected.

### The war of orange and green

You can figure this part out, right? These views hate each other; each thinks the other is the fast road to hell.

### Yellow

The thing is, orange and green are _both right_. They are also both wrong. Their virulent criticisms of each other are both correct. But their own central values are also both correct. We need the _right_ parts of both, without the wrong parts. That combination, supposedly, is yellow:

- Big picture, open systems, networks, global flows
- Flexible, simultaneous consideration of multiple perspectives
- Tolerance for chaos, change, and uncertainty
- Integration of ranking (hierarchy) and linking (community)
- Caring combined with freedom
- Voluntary, spontaneous cooperation rather than either win/lose competition or compulsory consensus processing
- Capacity to act in both orange and green modes as appropriate

If this sounds less specific than the other two, it might be because “yellow” is a work in progress.
I do think it’s pointing in the right direction, toward what I call [participation](https://meaningness.com/participation "Participation is the stance that there is no single right way of drawing boundaries around objects, or between self and other. Things are connected in many different ways and to different degrees; they may also be irrelevant to each other, or to you. Connections are formed by meaningful, on-going interaction. [Click for details.]"). That is the way to avoid the false alternatives of monism and dualism.

## Boomeritis

The Baby Boom generation—people born roughly 1946 through 1960—was the first to include many with a green worldview. That is a great accomplishment; green is a partly-right response to errors in orange.

“Boomeritis” is the syndrome of getting stuck at green, fighting fruitlessly against orange, and failing to move on to yellow. When Boomeritis looks at yellow, it sees orange—because yellow incorporates aspects of orange that green rejects.

The cause of Boomeritis is narcissism. It is based on “nobody can tell me what to do.” Hierarchy is unacceptable. I take direction from no one. Objective facts limit my fantasies, so science must be a patriarchal, oppressive myth. Reasoning often points out that I am wrong, so I ditch rationality for “intuition,” which somehow always tells me what I want to hear. In any competition, I might lose, so everyone must be awarded gold stars, because everyone is equal. Nothing, and no one, can be better or worse than anything or anyone else. That _wouldn’t be nice_—because then I might not be the most [special](https://meaningness.com/specialness "Someone is thought to be special if they are given a particular distinct value by the (imaginary) Cosmic Plan. This is not actually possible. [Click for details.]") thing in the universe.

This disease is not restricted to Boomers. Not all Boomers have it, and many who are younger or older do.

## Sidebar: Boomeritis Buddhism

I discovered _Boomeritis_ when researching the role of the Baby Boomers in Western Buddhism.[3](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_3 "I wrote about that on this page on Vividness.") Wilber discussed that in his essay “[Boomeritis Buddhism](http://wilber.shambhala.com/html/books/boomeritis/sidebar_h/index.cfm/)”:

> The result is a Buddhism that claims to be egalitarian, pluralistic, non-marginalizing, anti-stage, and especially anti-hierarchy. And, alas, all of the moves of the [mean green meme](http://en.wikipedia.org/wiki/Spiral_Dynamics#Pathologies) then swing into play: it claims to be egalitarian, but it actually condemns all those views that disagree with it (but how could it, if all views are truly equal?)
>
> It rejects the teacher-student model, since we are all equal spiritual friends on the same path together (but why are people paying these teachers money if we’re all equals here?)
>
> It rejects hierarchy in any fashion (but why does it rank its view as better than all the alternatives?) It claims that pluralism is the true voice of the Mystery of the Divine (but why does it reject all of the numerous other voices that disagree with it?)
>
> And sometimes it goes so far that it denies the importance of _enlightenment_ altogether, because all spiritual experiences are to be viewed equally without any judging or ranking, and saying that there is a thing called ‘enlightenment’ implies that those who are not enlightened are somehow inferior, and that’s not a nice thing to say, so we won’t say it.
> The very _raison d’être_ of Buddhism—namely, release from suffering in the Great Liberation of the awakened mind, which allows the compassionate salvation of all sentient beings—is tossed out the window because it is politically incorrect… Boomeritis Buddhism is probably the greatest internal threat to Dharma in the West.

I agree with this strongly,[4](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_4 "Relatedly, my Buddhist teachers (Boomers themselves) have been pointing out these problems for 20+ years. In response, they were ejected and ostracized from the (Boomer-dominated) Western Buddhist establishment. This is a classic “mean green meme” scenario. Green cannot tolerate, and tries to destroy, any contradiction to “we’re all equal.”") and have written much more about it [elsewhere](https://vividness.live/). As well as diagnosing Boomeritis Buddhism in more detail, I have tried to point a way forward, into a post-green Buddhism.

## After postmodernism

“[Postmodernism](http://en.wikipedia.org/wiki/Postmodernism)” is the academic version of the green worldview. _Boomeritis_ was originally written as an academic/philosophical criticism of postmodernism. Wilber says it was pretty unreadable when he’d finished; so at the last minute he turned it into a novel instead.

Postmodernism, although obscure and obtuse, is important because it is the dominant orthodoxy in academia, and university indoctrination is one of the main ways Boomeritis is transmitted to younger generations. It is also important because, beneath its billowing briny blather, postmodernism’s green critique of orange is _right_.

The problem is that, on its own, green leads inexorably to [nihilism](https://meaningness.com/preview-eternalism-and-nihilism "Nihilism is the stance that regards everything as meaningless. It forms a false dichotomy with eternalism, which sees everything as having a fixed meaning. The stance of meaningness recognizes the fluid mixture of meaningfulness and meaninglessness in everything. [Click for details.]"). That is not obvious; _Boomeritis_ spends most of its 456 pages explaining it. Here’s a super-condensed version:

- If meaning is purely subjective, and you embrace all perspectives as equally valid, then at points of disagreement meaning completely disintegrates.
- If ethics is merely cultural convention, there [is no way to condemn](https://meaningness.com/ethical-nihilism "Ethical nihilism is the stance that ethics are a meaningless human invention and have no real claim on us. [Click for details.]") evils such as the “honor killing” of women who have been raped.
- If [everyone is automatically equal](https://meaningness.com/ordinariness "Ordinariness is the confused stance that no one is better than anyone else, and that one’s value derives from herd membership. [Click for details.]"), there is no call to be any better than you are. There is no possibility of [nobility](https://meaningness.com/nobility "Nobility is the stance that resolves specialness and ordinariness. Nobility consists in using whatever capacities one has on behalf of others. [Click for details.]").
- If everyone is _supposed_ to be equal, all differences must be due to _evil oppressors_. Anyone who is not an oppressor is an all-good [victim](https://meaningness.com/victim-think "The stance that “it’s not my fault and I am too weak to deal with it.” [Click for details.]"). Since _we_ are victims, the oppressors must be _them_.
- We ought to [rebel](https://meaningness.com/romantic-rebellion "Romantic rebellion is the confused stance of defying authority, in an unrealistic way, to make an emotional, artistic, or personal status statement. [Click for details.]") against the oppressors (and probably kill them all). But this is automatically doomed to failure, because (by definition) the oppressors have all the power (or else we might not be victims, just lazy). So we’d better not actually try to improve anything; instead, we’ll demonstrate sincerity with the vehemence of our denunciations.

After thirty years of chewing on such contradictions, it’s widely understood that postmodernism is unworkable. There is no way forward within the green worldview.

So now what? What comes after postmodernism? Shockingly few people seem to be working on that question. It’s hard, because green’s logic, its critique of orange, seems unassailable; yet it leads to a bleak dead end.

Somehow we need to integrate what is right in both the orange and green worldviews to produce some sort of “yellow.” This web site—_Meaningness_—could be seen as one attempt at that.

_Boomeritis_ does a fine job of exposing the contradictions in green, and has a decent sketch of what yellow might look like. But then…

## Whoa! Ken, WTF?

Although I admire _Boomeritis_, I oppose much of Wilber’s other work. Mainly he advocates [monist eternalism](https://meaningness.com/big-three-stance-combinations), which I think is disastrously wrong. In fact, Wilber (together with [Eckhart Tolle](https://meaningness.com/eckhart-tolle-a-new-earth)) seems to be the main source for a new form of [pop spirituality](https://meaningness.com/pop-spirituality-monism-goes-mainstream). This movement repackages the [German Idealist philosophy](https://meaningness.com/bad-ideas-from-dead-germans) Wilber loves, in a glossy new “[spiritual but not religious](https://meaningness.com/sbnr-spiritual-but-not-religious)” form that particularly appeals to younger generations.

The key ideas here are [eternalism](https://meaningness.com/preview-eternalism-and-nihilism "Eternalism is the stance that sees the meaning of everything as fixed by an external principle, such as God or a Cosmic Plan. It forms a false dichotomy with nihilism, which regards everything as meaningless. The stance of meaningness recognizes the fluid mixture of meaningfulness and meaninglessness in everything. [Click for details.]") and [monism](https://meaningness.com/monism "Monism is the confused stance that All is One; that my true self is mystically identified with the Cosmic Plan; that all religions and philosophies point to the same ultimate truth. Monism denies boundaries and fixates connections. Compare dualism, which fixates boundaries and denies connections. [Click for details.]"):

- Eternalism: there is a God (but sometimes we’ll call it something else, like “The Absolute,” to deflect the arguments for atheism).
- Monism: you, God, and The Entire Universe are All One.[5](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_5 "Monists love capital letters. Is that because they think capitals look impressive, or is it the result of bad translations from German?")

Just at the end of _Boomeritis_, something really bad happens.[6](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_6 "Philosophically bad. It works quite well as fiction.") _[Spoiler warning:]_ The anti-hero (who may be me) becomes God.
Wilber proposes that becoming God is what comes after yellow—and the main reason to get to yellow is to go on to become God. This quest to become God is a central theme in his other work, so I shouldn’t be surprised; but I _am_ appalled.

It’s not just that I think it’s wrong. It’s that his own critique of the green worldview—its monism and its narcissism—seems to apply directly. He recognizes the contradiction, and dismisses it. He makes the usual monist-eternalist move, which goes something like this:

> When we say ‘God,’ we don’t mean God, we mean The Absolute, which is ineffable, and is the same as The Entire Universe. You have to admit that the universe exists. And when we say ‘you,’ we don’t mean your ordinary ego, we mean your [true self](https://meaningness.com/true-self "The “deep” or “true” or “authentic” self is an imaginary, inaccessible superior identity, which has a magical connection with the Cosmic Plan. “Depth psychology” is particularly big on the true self, but this confused idea has become wide-spread. [Click for details.]"), which is divine and pure, so there’s no narcissism involved. See? No problem.

This is hokum. There is no Absolute, you are not the entire universe, and there is no “true self.” This stuff is simple wish-fulfillment; a fantasy of personal omnipotence and immortality. (As I will explain in plodding detail [in the book](https://meaningness.com/monism).)

## Artificial intelligence

The [interesting part](http://en.wikipedia.org/wiki/Strong_AI) of [AI research](http://en.wikipedia.org/wiki/Artificial_intelligence) is the attempt to _create minds, people, selves_.[7](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_7 "“Artificial intelligence” is also used to mean “writing programs to do things that are hard, like playing chess.” This is interesting if you are an engineer, but has no broader implications.")

Besides the fun of playing Dr. Frankenstein, AI calls orange’s bluff. Orange says that rationality is what is essential to being human. If that’s right, we ought to be able to program rationality into a computer, and thereby create something that is also essentially human—an intelligent self—although it would not be of our species.

This project seemed to be going very well up until about 1980, when progress ground to a halt. Perhaps it was a temporary lull? Ironically, by 1985, hype about AI in the press reached its all-time peak. Human-level intelligence was supposed to be just around the corner. Huge amounts of money poured into the field. For those of us on the inside, the contrast between image and reality was [getting embarrassing](http://en.wikipedia.org/wiki/AI_winter).

What had gone wrong? An annoying philosopher named Hubert Dreyfus had been arguing for years that AI was impossible. He wrote a book about this called _What Computers Can’t Do_.[8](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_8 "Now out of print, but his revised edition, What Computers Still Can’t Do: A Critique of Artificial Reason, is still available.") We had all read it, and it was silly. He claimed that a dead German philosopher named [Martin Heidegger](http://en.wikipedia.org/wiki/Martin_Heidegger) proved that AI couldn’t work. Heidegger is famous for being the most obscure, voluminous, and anti-intellectual philosopher of all time.

I found a more sensible diagnosis. Rationality requires reasoning about the effects of actions. This turned out to be surprisingly difficult, and came to be called the “[frame problem](http://plato.stanford.edu/entries/frame-problem/)”. In 1985, I proved a series of mathematical theorems that showed that the frame problem was probably _inherently unsolvable_.[9](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_9 "The canonical citation for this would be “Planning for conjunctive goals,” Artificial Intelligence, Volume 32, Issue 3, July 1987, Pages 333-377. But that costs money, and it’s a shortened version of MIT AI Technical Report 802, which you could download for free if you want to geek out. The important bit is the intractability theorem (page 23, proof pp. 45-46). The undecidability theorems are also cute, but less philosophically relevant.")

This was a jarring result. Rational action requires a solution to the frame problem; but rationality (a mathematical proof) appeared to show that no solution was possible.[10](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_10 "Technically, what I proved was the NP-completeness of the frame problem. Roughly, this means that there is no solution that is both practical and general. There are general solutions that are “exponential time” (meaning inherently impractical), and non-general solutions that can solve particular classes of problems. Neither of these is philosophically interesting, in my opinion.")
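A toy sketch (my own illustration, not anything from Chapman's papers; it shows the combinatorial flavor of the frame problem rather than the NP-completeness construction itself): with n independent boolean facts, a fully general reasoner about action effects faces 2^n possible world-states, and every action demands an account of which facts persist.

```python
from itertools import product

# Toy illustration only: with n independent boolean facts ("fluents"),
# fully general reasoning about action effects must cope with 2**n possible
# world-states, and each action raises the question of which facts persist.

def all_states(fluents):
    """Enumerate every assignment of truth values to the fluents."""
    return [dict(zip(fluents, values))
            for values in product([False, True], repeat=len(fluents))]

def apply_action(state, effects):
    """Effect axioms say what an action changes; frame axioms must say that
    everything else persists. Spelling out 'everything else' for every
    action/fluent pair is the part that resists a general, practical fix."""
    successor = dict(state)    # persistence: copy the unchanged facts
    successor.update(effects)  # change: apply the action's effects
    return successor

fluents = ["door_open", "lamp_on", "cat_fed"]
print(len(all_states(fluents)))                    # 8 states for 3 fluents
print(apply_action(dict.fromkeys(fluents, False),  # 1 effect, 2 implicit frame axioms
                   {"lamp_on": True}))
print(f"{2 ** 30:,} states for just 30 facts")     # 1,073,741,824
```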
Orange had turned against itself, and cut off the tree-limb it was standing on. Still, as we hurtled to the ground, we figured that we’d somehow find a way out. There _had_ to be a solution, because of course we _do_ all act rationally.

At this point, Phil Agre came back from a gig in California with a shocking announcement: _Dreyfus was right._ What?? Had Phil gone over to the Dark Side? But with the announcement, he brought the secret key: a pre-publication draft of Dreyfus’ next book, [Being-in-the-World](https://www.amazon.com/dp/0262540568/?tag=meaningness-20), which for the first time made Heidegger’s magnum opus, [Being and Time](https://www.amazon.com/dp/0061575593/?tag=meaningness-20), comprehensible.

_Being and Time_ demolishes the whole orange framework. Human _being_ is not a matter of calculation. People are not isolated individuals, living in a world of dead material objects, strategizing to manipulate them to achieve utilitarian goals. We are always already embedded in a web of connections with living nature and with other people. Our actions are called forth spontaneously by the situation we find ourselves in—not rationally planned in advance.

If you have a green worldview, you’re thinking “duh, everyone knows all that—we don’t need a dead German philosopher to tell us.” But it is only because of Heidegger that you can be green. More than anyone else, he invented that worldview.

_Being-in-the-World_ showed us _why_ the frame problem was insoluble. But it also provided an alternative understanding of activity. Most of the time, you simply _see_ what to do. Action is driven by perception, not plans.

Now, _seeing_ is something us AI guys knew something about. [Computer vision research](http://en.wikipedia.org/wiki/Computer_vision) had been about identifying manufactured objects in a scene. But could it be redirected into _seeing what to do_? Yes, it could.[11](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_11 "See my “Intermediate Vision: Architecture, Implementation, and Use”, Cognitive Science 16(4) (1992), pp. 491-537.")
In a feverish few months, Agre and I developed a completely new, non-orange approach to AI.[12](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_12 "The best summary of this is in Phil’s Computation and Human Experience (Cambridge University Press, 1997). The full text of his introduction is online. My take is in Vision, Instruction, and Action (MIT Press, 1991), which is more technical and less philosophical.") We found that bypassing the frame problem eliminated a host of other intractable technical difficulties that had bedeviled the field.[13](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_13 "Probably the clearest explanation of this is in my “Penguins Can Make Cake,” AI Magazine 10(4), 1989. Interestingly, two other groups came to similar conclusions independently, just about the same time Agre and I did, although based on purely technical rather than philosophical considerations. These were Rod Brooks and the team of Leslie Kaelbling and Stanley Rosenschein.")

In 1987, we wrote a computer program called Pengi that illustrated some of what we had learned from Dreyfus, Heidegger, and the Continental philosophical tradition.[14](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_14 "“Pengi: An Implementation of a Theory of Activity,” Proceedings of the National Conference on Artificial Intelligence, 1987, pp. 268-272. Reprinted in George F. Luger, ed., Computation and Intelligence: Collected Readings, MIT Press, 1995, pp. 635-644.")

Pengi [participated](https://meaningness.com/participation "Participation is the stance that there is no single right way of drawing boundaries around objects, or between self and other. Things are connected in many different ways and to different degrees; they may also be irrelevant to each other, or to you. Connections are formed by meaningful, on-going interaction. [Click for details.]") in a life-world. It did not have to mentally represent and reason about its circumstances, because it was embedded in them, causally coupled with them in a purposive dance. Its skill came from spontaneous improvisation, not strategic planning. Its apparently intelligent activity derived from interactive dynamics that—continually involving both its self and others—were neither subjective nor objective.

Pengi was a triumph: it could do things that the old paradigm clearly couldn’t, and (although quite crude) seemed to point to a wide-open new paradigm for further research. AI was unstuck again! And, in fact, Pengi was [highly influential](http://scholar.google.com/scholar?hl=en&lr=&cites=1539961832122933456&um=1&ie=UTF-8&ei=SxaOTeeFH4SssAOqpqWDCQ&sa=X&oi=science_links&ct=sl-citedby&resnum=3&ved=0CCgQzgIwAg) for a few years.
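A minimal caricature of that control structure (invented names throughout; Pengi's real implementation was nothing like this): on every perceptual tick, the agent extracts a few task-relevant, indexical features from the scene and maps them straight to an action, with no world model and no plan search.

```python
# Caricature of perception-driven action (invented names; not Pengi's code).
# Each tick: re-perceive the scene, map features directly to an action.

def perceive(scene):
    """Deictic features like 'the-bee-chasing-me', not an objective world model."""
    return {
        "threat_adjacent": scene["bee_distance"] <= 1,
        "block_lined_up": scene["block_aligned_with_bee"],
    }

def act(features):
    """The most urgent perceived feature wins; action is improvised, not planned."""
    if features["threat_adjacent"]:
        return "run_away"
    if features["block_lined_up"]:
        return "kick_ice_block"
    return "wander"

# One action per tick, in a tight loop against a changing world:
for scene in [
    {"bee_distance": 5, "block_aligned_with_bee": False},
    {"bee_distance": 4, "block_aligned_with_bee": True},
    {"bee_distance": 1, "block_aligned_with_bee": True},
]:
    print(act(perceive(scene)))   # wander, kick_ice_block, run_away
```

The design point is that the features are indexical ("the bee chasing me") rather than objective coordinates, which is what lets perception drive action directly instead of feeding a planner.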
[![David Chapman, Vision, Instruction, and Action](https://metarationality.com/images/metarationality/vision_instruction_action_sonja_196x300.jpg)](https://www.amazon.com/dp/0262031817/?tag=meaningness-20)

Although arguably non-orange, Pengi was hardly green. Particularly, it was in no sense social. The next program I wrote, Sonja, illustrated certain aspects of what it might mean for an AI to be socially embedded.[15](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_15 "Described in Vision, Instruction, and Action. See also my “Computer rules, conversational rules,” Computational Linguistics 18(4) (December, 1992), pp. 531-536.") I will have more to say about this elsewhere when I explain [participation](https://meaningness.com/participation "Participation is the stance that there is no single right way of drawing boundaries around objects, or between self and other. Things are connected in many different ways and to different degrees; they may also be irrelevant to each other, or to you. Connections are formed by meaningful, on-going interaction. [Click for details.]"), the [nebulosity](https://meaningness.com/nebulosity "Nebulosity is the insubstantiality, transience, boundarilessness, discontinuity, and ambiguity that (this book argues) are found in all phenomena. [Click for details.]") of the self/other boundary, and the fact that [meaningness](https://meaningness.com/what-is-meaningness "“Meaningness” is the quality of being meaningful and/or meaningless. It has various dimensions, such as value, purpose, and significance. This book suggests that meaningness is always nebulous—ambiguous and fluid—but also always patterned. [Click for details.]") is neither subjective nor objective. This work is arguably “yellow,” in offering orange-language explanations for green facts of existence.

There was another problem. Pengi’s job was to play [a particular video game](http://en.wikipedia.org/wiki/Pengo_%28video_game%29). Its ability to do that had to be meticulously programmed in by hand. We found that programming more complicated abilities was difficult (although there seemed to be no obstacle in principle). Also, although perhaps ant brains come wired up by evolution to do everything they ever can, people are flexible and adaptable. We pick up new capabilities in new circumstances.

The way forward seemed to be [machine learning](http://en.wikipedia.org/wiki/Machine_learning), an existing technical field. Working with Leslie Kaelbling, I tried to find ways an AI could develop skills with experience.[16](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_16 "“Input generalization in delayed reinforcement learning: An algorithm and performance comparisons,” Proceedings of the 12th International Joint Conference on Artificial Intelligence, 1991.")

The more I thought about this, though, the harder it seemed. “Machine learning” is a fancy word for “statistics,” and statistics take an awful lot of data to reach any conclusions. People frequently learn all they need from a single event, because we understand what is going on.

In 1992, I concluded that, although AI is probably possible in principle, _no one has any clue where to start_. So I lost interest and went off to do other things.[17](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_17 "Recently, Dreyfus published an analysis of why Phil and I failed (Artificial Intelligence 171(18), 2007).")
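A toy sketch of that sample-inefficiency point (generic tabular Q-learning on a made-up corridor task, not the algorithm from the 1991 paper): the statistical learner needs hundreds of blind trials to arrive at what one understood sentence would teach a person.

```python
import random

# Generic tabular Q-learning on an invented 5-state corridor, reward only
# at the right end. Hundreds of episodes of trial and error stand in for
# what "go right" would teach a person who understood the task.

N, ACTIONS = 5, (-1, +1)                    # states 0..4; move left or right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma = 0.5, 0.9                     # learning rate, discount factor

random.seed(0)
for episode in range(500):                  # hundreds of repetitions...
    s = 0
    while s != N - 1:
        a = random.choice(ACTIONS)          # explore at random (off-policy)
        nxt = min(max(s + a, 0), N - 1)     # walls at both ends
        r = 1.0 if nxt == N - 1 else 0.0    # reward only at the goal
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# ...before the greedy policy is "always move right" in every non-goal state:
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)])  # [1, 1, 1, 1]
```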
In _Boomeritis_, the anti-hero—who may be me—says:

> I know, the computer part sounds far out, but that’s only because you don’t know what’s actually happening in AI. I’m telling you, it’s moving faster than you can imagine. (p. 306)

The reality, though, is that AI is moving slower than you can imagine. There’s been no noticeable progress in the past twenty years. And a few pages later “I” explain why:

> There are some real stumbling blocks, things having to do mostly with [background contexts](http://philosophy.uwaterloo.ca/MindDict/thebackground.html) and billions of everyday details that just cannot all be programmed. (p. 331)

## Delusions of reference

In _Boomeritis_, the AI plot is a paper-thin “[frame story](https://buddhism-for-vampires.com/stories-within-stories)” around the long philosophy lecture.[18](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_18 "Wilber says he wrote it in ten days, after the lecture was finished.") There’s just enough detail to make me think Ken Wilber did visit the MIT AI Lab, though. I suspect that he read a draft of [Flesh and Machines: How Robots Will Change Us](https://www.amazon.com/dp/037572527X//?tag=meaningness-20), by Rodney Brooks, which came out the same year as _Boomeritis_. Rod was head dude at the AI Lab then—and was my PhD supervisor. Here’s [an excerpt from his book:](http://people.csail.mit.edu/brooks/books%20&%20movies.html)

> The body, this mass of biomolecules, is a machine that acts according to a set of specifiable rules… We are machines, as are our spouses, our children, and our dogs… I believe myself and my children all to be mere machines. But this is not how I treat them. I treat them in a very special way, and I interact with them on an entirely different level. They have my unconditional love, the furthest one might be able to get from rational analysis. Like a religious scientist, I maintain two sets of inconsistent beliefs and act on each of them in different circumstances. It is this transcendence between belief systems that I think will be what enables mankind to ultimately accept robots as emotional machines, and thereafter start to empathize with them and attribute free will, respect, and ultimately rights to them… When our robots improve enough, beyond their current limitations, and when we credit humans, then too we will break our mental barrier, our need, our desire, to retain tribal specialness, differentiating ourselves from them.

If you have read _Boomeritis_, you will find this sounds familiar.

So, was I the model for the book’s anti-hero? My guess is that Wilber had a conversation with Rod, who asked him what he did. Wilber mentioned German philosophy, and Rod said “hmm, that sounds like the stuff David Chapman used to go on about.”

> “Who?”
>
> “David Chapman. He was a student here a while back. After doing some nice mathematical work, he and another guy, Phil Agre, suddenly started ranting about existential phenomenology and hermeneutics and ethnomethodology. No one could understand a word of it. We figured they were taking too much LSD.
>
> “But then they started writing programs, and the story gradually came into focus. Intelligence [depends on the body](http://en.wikipedia.org/wiki/Embodied_cognition); AI systems have to be [situated in an interpretable social world](http://en.wikipedia.org/wiki/Situated_cognition); understanding is not dependent on rules and representations; skillful action doesn’t usually come from planning.”
>
> “Whoa, that sounds like the green meme in Spiral Dynamics!”
>
> “Well, whatever. Spare me the gobbledegook. Anyway, I was thinking along pretty similar lines at the same time, because I was building robots, and it turns out that if you want to make a robot that actually works, the whole abstract/cognitive/logical paradigm is useless. It’s a matter of connecting perception with action. I never got into all that German stuff, though.”
>
> “So what happened to Chapman? It sounds like I should talk to him.”
>
> “[I haven’t a clue.](http://people.csail.mit.edu/brooks/phd%20students.html) He disappeared a long time ago.”
>
> “What a bizarre story! You know, I’ve just finished writing a long boring book critiquing postmodernism, but suddenly I’m thinking it might work better as a novel…”
If that’s _not_ what happened, the coincidental similarity of Wilber’s anti-hero to me (and/or Agre) would be almost as odd. Perhaps, though, I am a _historical inevitability_. If I had not existed, it would have been necessary to invent me—and Wilber did.

Of course, I could just ask him. But [uncertainty is more fun](https://vividness.live/certainty). “Yes” or “no” would remove the mystery, and the surreal groundlessness of not knowing whether I am a character in a novel.

Besides, it allows for retaliation…

## Retaliation

It just so happens that I am writing [a novel](https://buddhism-for-vampires.com/the-vetalis-gift) myself. Actually, it is an endlessly long philosophical diatribe, thinly disguised as a web-serial vampire romance. Already it is showing [worrying signs of postmodern literary gimmicks](https://buddhism-for-vampires.com/stories-within-stories).

Naturally, as a sword-and-sorcery novel, it has a Dark Lord; a [lich](http://en.wikipedia.org/wiki/Lich) king, who seeks to unite himself with God to obtain unlimited power.

I think you can guess where I am going with this…

---

1. [1.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_1) It’s also brilliant, inspiring, funny, and (in the end) touching. Two thumbs up.
2. [2.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_2) This is a gross simplification, and probably Spiral Dynamics geeks would object that I’m distorting their story out of recognition. For one thing, there are many more than three worldviews in the system.
3. [3.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_3) I wrote about that on [this page on Vividness](https://vividness.live/is-buddhism-just-for-baby-boomers).
4. [4.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_4) Relatedly, [my Buddhist teachers](http://arobuddhism.org/lamas/ngakchang-rinpoche-and-khandro-dechen.html) (Boomers themselves) have been pointing out these problems for 20+ years. In response, they were ejected and ostracized from the (Boomer-dominated) Western Buddhist establishment. This is a classic “mean green meme” scenario. Green cannot tolerate, and tries to destroy, any contradiction to “we’re all equal.”
5. [5.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_5) Monists love capital letters. Is that because they think capitals look impressive, or is it the result of bad translations from German?
6. [6.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_6) Philosophically bad. It works quite well as fiction.
7. [7.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_7) “Artificial intelligence” is also used to mean “writing programs to do things that are hard, like playing chess.” This is interesting if you are an engineer, but has no broader implications.
8. [8.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_8) Now out of print, but his revised edition, [What Computers Still Can’t Do: A Critique of Artificial Reason](https://www.amazon.com/dp/0262540673/?tag=meaningness-20), is still available.
9. [9.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_9) The canonical citation for this would be “[Planning for conjunctive goals,](https://www.sciencedirect.com/science/article/abs/pii/0004370287900920)” _Artificial Intelligence_, Volume 32, Issue 3, July 1987, Pages 333-377. But that costs money, and it’s a shortened version of [MIT AI Technical Report 802](https://dspace.mit.edu/handle/1721.1/6947), which you could download for free if you want to geek out. The important bit is the intractability theorem (page 23, proof pp. 45-46). The undecidability theorems are also cute, but less philosophically relevant.
10. [10.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_10) Technically, what I proved was the [NP-completeness](http://en.wikipedia.org/wiki/NP-complete) of the frame problem. Roughly, this means that there is no solution that is both practical and general. There are general solutions that are “exponential time” (meaning inherently impractical), and non-general solutions that can solve particular classes of problems. Neither of these is philosophically interesting, in my opinion.
11. [11.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_11) See my “[Intermediate Vision: Architecture, Implementation, and Use](http://csjarchive.cogsci.rpi.edu/1992v16/i04/p0491p0537/MAIN.PDF)”, _Cognitive Science_ 16(4) (1992), pp. 491-537.
12. [12.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_12) The best summary of this is in Phil’s [Computation and Human Experience](https://www.amazon.com/dp/052138432X/?tag=meaningness-20) (Cambridge University Press, 1997). The [full text of his introduction](http://polaris.gseis.ucla.edu/pagre/che-intro.html) is online. My take is in [Vision, Instruction, and Action](https://www.amazon.com/dp/0262031817/?tag=meaningness-20) (MIT Press, 1991), which is more technical and less philosophical.
13. [13.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_13) Probably the clearest explanation of this is in my “[Penguins Can Make Cake,](http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/965)” _AI Magazine_ 10(4), 1989. Interestingly, two other groups came to similar conclusions independently, just about the same time Agre and I did, although based on purely technical rather than philosophical considerations. These were [Rod Brooks](http://people.csail.mit.edu/brooks/) and the team of [Leslie Kaelbling](http://people.csail.mit.edu/lpk/) and Stanley Rosenschein.
14. [14.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_14) “[Pengi: An Implementation of a Theory of Activity,](http://www.aaai.org/Papers/AAAI/1987/AAAI87-048.pdf)” _Proceedings of the National Conference on Artificial Intelligence, 1987,_ pp. 268-272. Reprinted in George F. Luger, ed., _Computation and Intelligence: Collected Readings_, MIT Press, 1995, pp. 635-644.
15. [15.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_15) Described in [Vision, Instruction, and Action](https://www.amazon.com/dp/0262031817/?tag=meaningness-20). See also my “[Computer rules, conversational rules,](http://acl.ldc.upenn.edu/J/J92/J92-4006.pdf)” _Computational Linguistics_ 18(4) (December, 1992), pp. 531-536.
16. [16.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_16) “[Input generalization in delayed reinforcement learning: An algorithm and performance comparisons,](http://citeseer.ist.psu.edu/viewdoc/download;?doi=10.1.1.99.561&rep=rep1&type=pdf)” _Proceedings of the 12th International Joint Conference on Artificial Intelligence_, 1991.
17. [17.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_17) Recently, Dreyfus published an analysis of [why Phil and I failed](http://leidlmair.at/doc/WhyHeideggerianAIFailed.pdf) (_Artificial Intelligence_ 171(18), 2007).
18. [18.](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence#fn_mark_18) [Wilber says](http://wilber.shambhala.com/html/interviews/interview_bms_kk_1.cfm/) he wrote it in ten days, after the lecture was finished.