Abstract
The advent of large language models (LLMs) presents an existential challenge to the education community: we worry that learners may surrender to the convenience of these intelligent machines, turning themselves into what philosophers call a "Chinese room". This paper theorizes the challenges posed by LLMs by scrutinizing the intersection of multiple forces of varying magnitude and duration from the perspectives of the social sciences and humanities. I argue that the encroachment of smart machines into education is not a novel phenomenon; the integration of new technologies into classrooms has been a consistent feature of the past century. Intelligent machines, however, threaten the modes of knowing we have cultivated. Specifically, they offer simplified access to vast repositories of past knowledge. By shielding learners from direct engagement with source material, they present information as tangible things in the guise of human-like language. Yet obtaining knowledge in such encapsulated forms is a restricted way of understanding the social world. These smart machines therefore expose blind spots in existing educational practices, and by reflecting on their particularities, we can further enhance our educational endeavors.