🤖 Seven replies to the viral Apple reasoning paper — and why they fall short /// Gary Marcus

15 June 2025 link ai

One is left simply having to test everything, all the time, with few guarantees of anything. Some model might be big enough for task T of size S yet fail at the next size up, or on a task T’ that is only slightly different, and so on. It all becomes a crapshoot.
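That "test everything, at every size" treadmill is easy to make concrete. Below is a minimal sketch of such a harness, using the paper's Tower of Hanoi task as the size-parameterized example. `ask_model`, `parse_moves`, and the "src dst" output format are hypothetical stand-ins for whatever LLM API and prompt contract you actually use; only the legality checker is fully specified.

```python
# Sketch of a size-sweep harness: ask the model to solve the same task at
# increasing sizes n, and verify every answer. ask_model is a hypothetical
# placeholder; swap in a real API client.

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError

def parse_moves(text: str) -> list[tuple[int, int]]:
    """Assumed output contract: one 'src dst' peg pair per line."""
    return [tuple(int(x) for x in line.split())
            for line in text.splitlines() if line.strip()]

def hanoi_is_solved(n: int, moves: list[tuple[int, int]]) -> bool:
    """Check that a move list legally transfers n disks from peg 0 to peg 2."""
    pegs = [list(range(n, 0, -1)), [], []]  # disk n at bottom, disk 1 on top
    for src, dst in moves:
        if not pegs[src]:
            return False  # moving from an empty peg
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False  # larger disk placed on a smaller one
        pegs[dst].append(disk)
    return pegs[2] == list(range(n, 0, -1))

for n in range(1, 13):  # sweep the size parameter; look for where it breaks
    prompt = (f"List the moves to solve Tower of Hanoi with {n} disks, "
              f"one 'src dst' pair per line, pegs numbered 0-2.")
    try:
        ok = hanoi_is_solved(n, parse_moves(ask_model(prompt)))
    except Exception:
        ok = False  # malformed or missing output counts as a failure
    print(f"n={n}: {'pass' if ok else 'fail'}")
```

Passing at n disks tells you nothing about n+1, which is exactly the crapshoot: the harness has to keep running forever, per task, per size.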

Fun as they are, I’m of the opinion that LLMs aren’t a pathway to AGI. They may lead someone to stumble onto the right path, but complex algorithms with random output do not directly lead there.

