
“Measuring intelligence and reverse-engineering goals” by jessicata

Author
LessWrong ([email protected])
Published
Tue 12 Aug 2025
Episode Link
https://www.lesswrong.com/posts/eLDgDAWphHaAJYMxN/measuring-intelligence-and-reverse-engineering-goals

It is analytically useful to define intelligence in the context of AGI. One intuitive notion is epistemology: an agent's intelligence is how good its epistemology is, that is, how good it is at knowing things and making correct guesses. But "intelligence" in AGI theory often means more than epistemology. An intelligent agent is supposed to be good at achieving some goal, not just knowing a lot of things.


So how could we define intelligent agency? Marcus Hutter's universal intelligence measures an agent's ability to achieve observable reward across a distribution of environments; AIXI maximizes this measure. Testing across a distribution makes sense: it avoids penalizing "unlucky" agents that fail in the real world despite using effective strategies that succeed most of the time. However, maximizing observable reward is a sort of fixed goal function; it cannot account for intelligent agents that effectively achieve goals other than reward-maximization. This relates to inner [...]
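For concreteness, here is a minimal sketch of the measure being described, following Legg and Hutter's published formulation; the notation below is a gloss and does not appear in the post itself:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the class of computable reward-generating environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected cumulative reward that policy \pi obtains in \mu. AIXI is, roughly, the policy maximizing this quantity, with simpler environments weighted more heavily. The "goal" is baked in as observable reward, which is exactly the limitation the excerpt points to.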

---


First published: August 11th, 2025

Source: https://www.lesswrong.com/posts/eLDgDAWphHaAJYMxN/measuring-intelligence-and-reverse-engineering-goals


---


Narrated by TYPE III AUDIO.
