https://www.newsweek.com/artificial-intelligence-impact-series-principles-future-ai-2058820
With the publication of the first three interviews in the Newsweek AI Impact series, it is a good time to reflect and distill the essence of what we have learned to date. The remarkable thing for me is the level of coherence and alignment among the views of the first three interviewees, despite their different backgrounds and focus areas: roboticist Rodney Brooks; neuroscientist David Eagleman; and AI innovator Yann LeCun.
But, on further reflection, this is perhaps not surprising given the intelligence of the three individuals and their innate curiosity, a combination that leads to positions both broad in scope and deep in conception. As a result, I think a common thesis has already emerged, which can be summarized as follows.
1. Magical Thinking
Humans are repeatedly seduced into thinking that any sign of intelligence is equivalent to our own; we engage in “magical anthropomorphism” of anything that appears to exhibit a human capability, and we delude ourselves about its real capabilities.
Rodney Brooks: “When we don’t have a model and can’t even conceive of the model, we of course say it’s magic. But if it sounds like magic, then you don’t understand…and you shouldn’t be buying something you don’t understand.”
David Eagleman: “Often we will ask a question to the AI, and it will give us an extraordinary answer. We’ll say, ‘My God, it’s brilliant! It has theory of mind!’ But in fact, it’s just echoing something that somebody else has already said.”
2. Beyond the IQ Test
Human intelligence cannot be quantified by a single test or score, as it is a complex interplay of cognitive, creative, social, moral and physical capabilities and developed expertise. Any evaluation of machine intelligence against human intelligence is therefore valid only within a specific domain, and only when the full array of human capabilities in that domain is evaluated and compared.
David Eagleman: “We don’t have a single definition of intelligence. It’s almost certainly one of those words that has too much semantic weight on it. Intelligence presumably involves many different things, … [so] when we ask this question, is AI actually intelligent? We don’t have some clear yardstick along which we can measure that.”
Yann LeCun: “You could think of intelligence as two or three things. One is a collection of skills, but more importantly, an ability to acquire new skills quickly, with minimal or no learning.”
3. Think Fast But Also Slow
Nobel Prize-winning psychologist Daniel Kahneman’s framework for understanding how the human brain operates comprises a System 1 mode that is fast, automatic and intuitive, and a System 2 mode that is slower, more deliberate and analytical.
Current LLMs exhibit only System 1 capabilities without a complementary System 2, as they lack reliable, accurate models of the world. The future of AI requires new models with a hierarchy of abstract representations of the real world in all its richness and with System 2 reasoning capabilities.
Yann LeCun: “An LLM produces one token after the other. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive…there’s no reasoning.”
Rodney Brooks: On System 1: “It seems to me that what LLMs have shown us is we can emulate language with that thoughtless part, which to me is a surprise.” On System 2: “It’s got social dynamics knowledge in it. It’s got knowledge of the physical world. It’s got a creative component to it for simulation of an unknown. It’s sort of intrigued by the unknown.”
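LeCun’s point about fixed per-token computation can be made concrete. The Python sketch below is a toy illustration, not any real model: the bigram table and the next_token function are stand-ins for a neural network’s forward pass. What it shows is the autoregressive loop that LLMs share: each step runs the same computation once, emits one token, and moves on, with no way to spend longer on a harder step or revise what has already been produced.

```python
# Toy sketch of autoregressive (System 1-style) generation.
# The "model" here is a hand-made bigram lookup table standing in
# for a neural network forward pass; the loop structure is the point.

import random

BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["ran", "sat"],
    "sat": ["down", "still"],
    "ran": ["away", "home"],
}

def next_token(context: list[str]) -> str:
    """One fixed-cost 'forward pass': look only at the context so far.

    A real LLM does a fixed amount of matrix arithmetic here instead,
    but the shape is the same: one pass in, one token out.
    """
    candidates = BIGRAMS.get(context[-1], ["<eos>"])
    return random.choice(candidates)

def generate(prompt: list[str], max_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(tokens)   # same computation at every step
        if tok == "<eos>":
            break
        tokens.append(tok)         # no backtracking, no planning ahead
    return tokens

print(" ".join(generate(["the"])))  # e.g. "the cat sat down"
```

A System 2 capability would have to wrap something around this loop, such as search, planning or a world model against which candidate outputs are checked; nothing inside the loop itself deliberates.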
4. Limits of Language
Written language is an insufficient basis for reliably representing the physical world and the human experience of it: it is too highly compressed and too incomplete to describe this complex, multidimensional, continuous reality. Therefore, the future of AI will not be about scaling, adapting or enhancing LLMs alone…