Alison Gopnik pointed out something very important about AGI and general capabilities. As you increase certain capabilities, i.e. strengthen the priors, you at the same time reduce the ability to generalize.
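As a toy illustration (my own sketch, not anything from Gopnik — the prior strengths and data below are made up), a Beta-Bernoulli learner in Python shows the effect: the stronger the built-in prior, the slower the agent tracks a new environment that contradicts it.

```python
def posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of a Bernoulli rate under a Beta(alpha, beta) prior."""
    return (alpha + heads) / (alpha + beta + heads + tails)

# A *new* environment the agents were not built for: a heavily biased coin.
true_rate = 0.9
observations = 10                          # a short stream of data
heads = round(true_rate * observations)    # 9 heads
tails = observations - heads               # 1 tail

# Weak prior: Beta(1, 1) is uniform -- little built-in capability,
# but it tracks the new environment quickly.
weak = posterior_mean(1, 1, heads, tails)

# Strong prior: Beta(50, 50) encodes high confidence that the rate is 0.5,
# a "capability" for fair-coin worlds that now acts as a drag.
strong = posterior_mean(50, 50, heads, tails)

print(f"weak prior estimate:   {weak:.3f}")    # ~0.833, close to 0.9
print(f"strong prior estimate: {strong:.3f}")  # ~0.536, still stuck near 0.5
```

After the same ten observations, the weak-prior agent has nearly adapted while the strong-prior agent has barely moved: the strengthened prior bought performance in one class of worlds at the cost of flexibility in others.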
This could be used to derive some theorem about the limits of capability: widening it in one place will of necessity narrow it somewhere else.
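One known result in this spirit (my gloss, not something the post cites) is the Wolpert–Macready no-free-lunch theorem for search: summed over all possible objective functions $f$, any two search algorithms $a_1$ and $a_2$ see the same distribution of outcomes,

$$\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2),$$

where $d_m^y$ is the sequence of objective values observed after $m$ evaluations. An algorithm that outperforms on some class of problems must underperform on its complement.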
So creating a general intelligence like a human's will necessarily reduce some other capabilities, e.g. in playing chess.
This indicates that there is a limit to how intelligent an entity can be, and also that AGI might not supersede us after all.