LeCun thinks we need completely different approaches, including what he calls “world models,” before we can talk seriously about AGI. His timeline? At least a decade, probably much longer.
I’m obviously no expert, but I’ve been saying this for years. LLMs are not a path to AGI or to any sort of intelligence. They are fancy statistical algorithms with probabilistic outputs. They don’t think or understand; they ingest and compute.