This chapter describes the evolution and current state of the xu-argument in second language acquisition (SLA) research since its initial special issue in 2021. It highlights how Large Language Models (LLMs) such as ChatGPT have begun reshaping language learning, while the theoretical framework of the xu-argument has grown more systematic and fully articulated. An interview conducted by M. Wang elaborates on the core principles and learning mechanisms of the xu-argument, detailing how it reconciles complexity and predictability in language learning and facilitates the integration of theory into teaching practice through xu-based pedagogy. The interplay between LLMs and the xu-argument is examined, with emphasis on their mutual reinforcement: LLM-mediated environments offer fertile ground for applying xu-based tasks, while the xu-argument may mitigate concerns about overreliance on LLMs by fostering learner agency and problem-solving skills.
This chapter discusses empirical investigations into the cognitive mechanisms underlying continuation tasks derived from the xu-argument. Gao, Li, and Yuan used eye-tracking to show that continuation writing elicited deeper cognitive engagement with input texts than summary writing or reading-only conditions, as evidenced by longer fixation durations and more rereading behavior. These findings provide compelling evidence that continuation writing activates sustained attention and strengthens comprehension-production coupling. Complementing this, X. P. Zhang and Chen studied how continuation-based writing enhances the use of complex English verb-argument constructions (VACs) among Chinese high school learners over eight weeks. Their results indicate that engagement with English input texts led to increased production of sophisticated VAC types and reduced reliance on simpler forms, underscoring the effectiveness of tasks that combine input and output in advancing syntactic complexity.
Another study, by Yang, Guo, and Yan, investigated the role of comparison as a core mechanism in continuation writing through the lens of the Competition Model, focusing on Chinese EFL learners’ acquisition of English articles. Their experiment tested different cue-based enhancements (paired, randomized, implicit) during comparative continuation tasks. The paired-cue condition produced the largest gains in article knowledge and accuracy of use, an outcome attributed to enhanced contrast effects that encouraged learners to actively identify similarities and differences, link explanations to input texts, and self-monitor their output. X. Y. Zhang compared continuation writing with model-as-feedback writing (MAFW) for argumentative essay development among intermediate Chinese learners. The continuation task, which required writing an essay opposing a model text, outperformed MAFW in overall writing quality and in specific components such as content, organization, and language use. This advantage was attributed to the immediate, sustained feedback that learners obtained by drawing on the input text while writing.
This chapter explores efforts to augment the benefits of continuation writing by integrating focus-on-form techniques commonly applied in instructed SLA. Zhai, Du, and Xu examined the effects of adding explicit input enhancement to comparative continuation writing with Chinese middle school students. Their findings show that all xu-based continuation groups outperformed a control group on measures of discourse competence and writing performance. However, different enhancement configurations yielded mixed and inconsistent benefits across specific indices, suggesting that the effectiveness of enhancement techniques depends on how they are designed and applied.
This chapter examines the integration of AI tools, specifically Grammarly-generated automated written corrective feedback, into xu-based iterative continuation tasks (ICTs). Zhan and Zhou used a mixed-methods design to compare grammar learning strategies, grit, and grammar competence between learners who received Grammarly feedback and those who did not. Both groups improved over time, but the Grammarly group showed broader and more robust gains across all measured variables, including strategy use and motivational factors, whereas the control group's improvement was confined to grammar competence. Qualitative data complemented the quantitative results, highlighting nuanced learner perceptions of the feedback's efficacy for grammar acquisition.