#link #productdevelopment #science
[Link](https://metarationality.com/how-to-think)
## My Thoughts
### The "It's Right In Front Of You" Principle
>One key idea came from a cookbook. [Fear of Cooking](https://www.amazon.com/dp/0395322162/?tag=meaningness-20) emphasizes “the IRIFOY principle”: _it’s right in front of you_. You know what scrambled eggs are supposed to be like; you can see what is happening in the pan; so you know what you need to do next. You don’t need to make a detailed plan ahead of time.
>
>IRIFOY doesn’t always work; sometimes you paint yourself into a corner if you don’t think ahead. But mostly that doesn’t happen; and Phil developed a deep theory of why it doesn’t. One aspect is: we can’t solve NP-complete problems, so we organize our lives (and our physical environments) so we don’t have to.
I love the idea of putting off a decision with the intention of handling it when *it's right in front of you*. Sometimes this backfires when some planning really was needed, but, as he says, mostly it doesn't.
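To make the contrast concrete, here's a minimal sketch of the sense-act loop IRIFOY suggests (my own illustration, not code from the post; the pan states and function names are hypothetical). Instead of following a precomputed plan, the agent re-observes the pan after each action and maps the visible state directly to what to do next:

```python
from enum import Enum, auto

class PanState(Enum):
    RAW = auto()
    RUNNY = auto()
    SOFT_CURDS = auto()
    DONE = auto()

# Stand-in for perception: we script what the pan would show over time.
# In a real kitchen, "observing" is just looking at what's in front of you.
_pan = iter([PanState.RAW, PanState.RUNNY, PanState.RUNNY,
             PanState.SOFT_CURDS, PanState.DONE])

def observe_pan() -> PanState:
    return next(_pan)

def next_action(state: PanState) -> str:
    # IRIFOY: the decision reads only the current, visible state.
    # There is no lookahead and no plan data structure to maintain.
    return {
        PanState.RAW: "pour the eggs into the hot pan",
        PanState.RUNNY: "stir gently",
        PanState.SOFT_CURDS: "pull off the heat and keep folding",
        PanState.DONE: "plate and serve",
    }[state]

state = observe_pan()
while state is not PanState.DONE:
    print(next_action(state))
    state = observe_pan()  # re-observe rather than predict
print(next_action(state))
```

The whole "policy" is one table from observable state to action; the environment itself carries the bookkeeping a planner would otherwise have to simulate ahead of time.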
---
### Actively Investigate When You Need To, And Not Before
> The classical formulation was unrealistically hard in some ways, but also artificially easy. It did not allow for any sort of uncertainty, for instance. We implemented [a series of AI programs](https://metarationality.com/ken-wilber-boomeritis-artificial-intelligence) that were effective in complex, uncertain domains, where the planning approach failed. These domains involved both inherently random events and limited sensory access to relevant factors.
>
> Our programs dealt competently with uncertainty despite _not representing it at all_. A Bayesian approach would have been overwhelmed by computational complexity; and belief probabilities wouldn’t have contributed to effective action anyway. This was the IRIFOY principle again: when our programs needed to make decisions, they could _actively investigate_ to see what they needed to know. Most of the facts about their worlds were unknowable, but they could find out enough of what mattered, and ignored the rest.
I love this sentence:
> when our programs needed to make decisions, they could *actively investigate* to see what they needed to know.
because it just works! We get to defer the investigation (and the time and preparation it costs) until we know exactly what we need to investigate. There are echoes of [[Lean Product Development|lean ideas]] here.
Plus, the idea of an AI that can investigate on demand, rather than having to know and consider everything *a priori*, is really appealing and interesting.
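Here's a toy sketch of what "actively investigate" might look like (my own illustration under assumptions, not the authors' actual programs; `investigate`, `fact_42`, and the corridor choices are hypothetical names). Rather than maintaining beliefs over every fact in the world, the agent performs a sensing action only when a pending decision needs the answer:

```python
import random

random.seed(0)

# A world with many facts, almost all of them irrelevant to any one decision.
WORLD = {f"fact_{i}": random.random() for i in range(1000)}

def investigate(key: str) -> float:
    """Hypothetical sensing action: look up a single fact, on demand."""
    return WORLD[key]

def decide() -> str:
    # No belief state, no probabilities over the 1000 facts.
    # When a decision comes up, check just the one or two facts
    # that matter and ignore the rest.
    if investigate("fact_42") > 0.5:
        return "take the left corridor"
    if investigate("fact_7") > 0.5:
        return "take the right corridor"
    return "wait and look again"

print(decide())
```

The point of the sketch: `decide` touches at most two of the thousand facts, and representing uncertainty about the other 998 would cost computation without changing the action chosen.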
---
## The Post
<iframe
src="https://metarationality.com/how-to-think"
frameborder="0"
scrolling="no"
style="overflow:hidden;height:1000px;width:100%"
id="myIframe">
</iframe>