SPEAKING BODIES

Embodied Cognition at the Crossroads of
Philosophy, Linguistics, Psychology and Artificial Intelligence

May 13-15, 2021, Cluj-Napoca, ROMANIA

10B: Artificial intelligence and embodiment

 

Time: Saturday 15/5, 14:50-15:50

Moderator:

Răzvan Valentin Florian

Affiliation: Romanian Institute of Science and Technology
Title: Embodiment and symbolism in a framework of universal artificial intelligence
Abstract: Embodied approaches to understanding cognition have emerged in the last decades as an alternative to classical symbolic approaches. Here I attempt to explain in mechanistic terms the nature of symbolic representations, their necessity, and their limits; the embedding of symbolic representations in an embodied approach to cognition; and the continuity between symbolic and embodied representations. This is based on Hutter’s AIXI model of universal artificial intelligence (AI) and my own improvements to this model. A basic model of cognition involves an embodied agent that senses and acts within an environment, aiming to maximize the reward that it gets. Choosing actions that maximize future rewards requires the ability to predict the consequences of future actions. A general theoretical framework for prediction is Solomonoff induction, also used by the AIXI model. Intuitively, our ability to make predictions about the future is based on regularities observed in the past. If such regularities exist, it is possible to compress (in a computational sense) the data we experienced in the past. In Solomonoff induction, the philosophically challenging task of predicting the future is translated into the mechanistic task of generating all computer programs that reproduce past data; using their next outputs to predict the future; and, among these predictions, assigning more probability to the predictions of the shortest programs, which best compress past data. This approach is infeasible in practice, as it requires infinite computational resources; however, it provides an attractive theoretical framework for the study of cognition. It implies that compressed (“symbolic”) representations are necessary for cognition. In Hutter’s AIXI model, the agent tries to predict all future perceptions at “pixel level”. This implies that any compressed (“symbolic”) representations of the environment would be based on lossless compression. Such symbolic representations could be losslessly expanded to modal, perceptual ones. In such a framework, embodied representations would be fully equivalent to their symbolic counterparts, and there would be no tension between embodied and symbolic approaches to studying these cognitive systems. Why, then, is there such a tension in the study of human cognition? Because human brains, like any practical implementation of an AI system, use limited computational resources. Agents must therefore focus these limited resources on the predictions that are most useful for maximizing rewards, and the compressed representations (“symbols”) that they generate consequently use lossy compression. Predictions based on such symbols balance the ability to maximize rewards against the optimal use of computational resources, but, by construction, they cannot predict all aspects and details of the future. This leads to the classical failures of human-generated, preprogrammed symbol systems in, for example, robotic control. Theoretically, depending on the availability of computational resources, the AIXI model can be modified such that the loss of information through compression is gradually tuned from lossless to increasingly lossy, yielding symbol systems that are increasingly computationally efficient but more brittle in situations far from the average ones.
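
For readers who want the formal backbone, both ingredients sketched above have compact standard forms in Hutter’s framework (the notation below follows Hutter’s published work, not this talk’s own modifications; U is a universal monotone Turing machine, \ell(p) is the length of program p, and x* denotes any output beginning with x). Solomonoff’s prior weights every program that reproduces the past data by two to the minus its length, so the shortest programs, which best compress the data, dominate the predictions:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad
    M(x_{t+1} \mid x_{1:t}) = \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}

The AIXI agent then selects, at each step t, the action that maximizes expected cumulative reward up to a horizon m under this universal mixture:

    a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
          (r_t + \cdots + r_m) \sum_{q \,:\, U(q,\, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}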

 

Robin Zebrowski

Affiliation: Beloit College
Title: Intelligence is a Light Source: The Role of Embodiment in Conceptual Understanding and Artificial Intelligence
Abstract: The artificial intelligence project continues to move forward across multiple academic, industrial, and commercial enterprises. However, people are forging ahead across most of these domains with a poor understanding of cognition broadly, and with very little consideration of the body and the role it plays. This appears to be true despite decades of research confirming that physical bodies, as well as the physical and cultural environments of those bodies, are not merely implicated in cognition but co-constitute it. In this paper, I argue that we can isolate the optimal (or at least most promising, given limited resources) route to building AI in the strong sense of creating a creature with a mind. Drawing on a wide range of literature across the disciplines of the cognitive sciences, I argue that not merely embodiment, but specifically humanoid embodiment, matters for any serious attempt at human-level intelligence and communication. The evidence comes from experimental psychology’s gesture studies, cognitive linguistics, robotics, and social cognition work, among other fields. We must build humanoid robots, put them in the real world, and embed them in our actual social and cultural worlds if we want to ground concepts and create understanding. I argue here that researchers must drastically shift their agendas if they hope to achieve a machine with anything like a human mind capable of communicating with humans, and must recognize the necessary limitations even of this approach. Finally, an updated test for detecting mindedness is suggested, offering novel linguistic and gestural metaphor as the basis for assuming an underlying mind is present. Think of this as a kind of Gestural Turing Test: it attempts to capture much of the interdisciplinary research mentioned here and repackage it in a useful way, in the spirit of Turing’s original “put up or shut up” style of test, avoiding some of the hard questions involved in defining consciousness or the mind and focusing instead on operational definitions that might offer more nuanced and complex ways of making progress in AI.

 

Vanja Subotić

Affiliation: Institute for Philosophy, University of Belgrade
Title: The Virtues of Embodiment & Vices of Innateness: On Evaluating Deep Nets via Robotics
Abstract: Ullman (2019) discusses state-of-the-art deep network architectures, i.e., models of intelligent computation with a biological flavor. Deep nets were introduced into cognitive science during the “new wave” of the 1980s by the PDP research group in San Diego (McClelland & Rumelhart 1986). The continuity between the early parallel distributed processing (connectionist) models and current ones lies in the application of artificial neural networks, which learn by adjusting synaptic weights to produce the correct outputs for input patterns, to the study of human cognition. In line with other authors well known for opposing connectionist models (Fodor & Pylyshyn 1988, Pinker & Prince 1988, Marcus 1998, 2018), Ullman (2019) questions the extent to which success in solving perceptual problems can be relevant for more complex aspects of cognition, such as language. He proceeds to claim that AI researchers should look to neuroscience for inspiration: by acknowledging the role of innate cognitive structures and general learning mechanisms, they should provide current models with preexisting structures encoded in the networks’ circuitry. However, I will argue that the examples in Ullman (2019) – ranging from developmental psychology to zoology – suggest that what is decisive for human and animal capabilities is the body interacting with the environment, rather than constituting additional evidence in favor of the innateness hypothesis. Much of Good Old-Fashioned AI (Haugeland 1985) rests upon a panegyric to innateness, and historically, proponents of connectionism have put much effort into avoiding any commitment to the innateness hypothesis. Nonetheless, I will try to make a case for deep nets not by looking into the past of cognitive science, but by looking at state-of-the-art embodied robotics. The core claim of the embodied cognition thesis is that an agent’s particular kind of cognition depends heavily on the agent’s body and its ecological niche. Thus, a plethora of projects in robotics (e.g., Roboy, iCub, and fetus simulators) are devoted to building robots with human-like morphology so that they can grasp concepts and learn to be intelligent the way humans do. In recent years, robots have become virtual experimental laboratories with the additional advantage of real physics and real sensory stimulation, which lends more credibility to the embodied cognition thesis (Hofmann & Pfeifer 2018). Moreover, the very possibility of emulating growth and development – starting from low-level behavior such as walking and proceeding through the learning of sensorimotor regularities – paves the way toward understanding cognition without presupposing innate characteristics. Drawing on the seminal work of Lakoff & Johnson in cognitive linguistics (1980, 2008), I will argue that an embodied theoretical framework of language can be used to show that combining deep nets with embodied robotics can be promising even for higher cognitive processes such as language acquisition.
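
To make the learning principle invoked above concrete, here is a minimal, self-contained sketch (Python/NumPy; a toy delta-rule network constructed for illustration, not any model from the talk or from the PDP volumes) of a network that learns by adjusting its synaptic weights until input patterns produce the correct outputs:

    import numpy as np

    # Toy PDP-style learning: a single-layer network adjusts its
    # "synaptic" weights so that input patterns yield correct outputs.
    rng = np.random.default_rng(0)

    # Input patterns (rows) and target outputs: logical OR.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [1]], dtype=float)

    W = rng.normal(scale=0.1, size=(2, 1))  # connection weights ("synapses")
    b = np.zeros(1)                         # bias unit
    lr = 0.5                                # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(2000):
        out = sigmoid(X @ W + b)       # forward pass: produce outputs
        err = y - out                  # mismatch with the correct outputs
        delta = out * (1 - out) * err  # delta rule: error times activation slope
        W += lr * (X.T @ delta)        # adjust weights to reduce the mismatch
        b += lr * delta.sum(axis=0)

    print(np.round(sigmoid(X @ W + b), 2))  # approaches [[0], [1], [1], [1]]

The example is deliberately linearly separable; the point is only the mechanism – outputs are compared with targets and the weights are nudged accordingly – which is the same principle, scaled up, behind the deep nets under discussion.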

 
