
Had a strange moment just now
I closed a book because the mental model it would have created is concave
It was about geometry and arguing about what constitutes a hole
It had the straw example
How many holes does a straw have?
The point being made was that we need hyper-specific definitions to arrive at a solution in such cases
But for a probabilistic animal that is just exhausting and unrewarding
I mean, I do love specificity
I think being hyper-specific is what leads you to have a market AND no competition
Geometric thinking, with its fondness for demonstration and proposition, can help toward that end
But if that very mode of thinking infects other cognitive processes, it can be very concave
Adding too much overhead before decisions, in a world where volume is everything
So this brings me to a conclusion
There are certain mental models that are great
But if they occupy a huge proportion of consciousness, that is corrosive
Thus we have to delegate them to LLMs
We pass such mental models on to AI so that we get the results that are only possible with these mental models
Without the cognitive risk they pose to us
The world we live in is highly non-linear and relationships don't hold
But these mental models need consistent internal logic to run
And our brain does not have a trigger that lets us switch between the probabilistic and the certain
Thus the mental model we use the most is the one that prevails
And my line of thinking leads me to the conclusion that I must keep probabilistic thinking as my default
Every moment spent applying a heavy, formal mental model is a moment not spent moving forward or scanning for new opportunities.
This overhead accumulates.
LLMs (like GPT) can simulate extremely detailed mental frameworks without getting tired.
AI can do the mental heavy lifting.
By designating where and when to use the AI's rigorous approach, you keep yourself from slipping into that mental mode unconsciously
The bigger story here is one of cognitive synergy
Knowing when to rely on your flexible, intuitive mind to move quickly and sense opportunities.
Knowing when to harness the pure clarity of "concave" mental models, while relegating them to a computational partner so they don't take over your consciousness.