All deadlines are 11:59 pm UTC−12 (“Anywhere on Earth”).
Large language models (LLMs) have been used for a variety of time-sensitive applications such as temporal reasoning (Fatemi et al., 2024), forecasting (Jin et al., 2023) and planning (Meng et al., 2024). In addition, a growing number of interdisciplinary works use LLMs for cross-temporal research in several domains, including social science (Zhou et al., 2024), psychology (Bodroža et al., 2023), cognitive science (Huet et al., 2025), environmental science (Tian et al., 2024) and clinical studies (He et al., 2024). However, LLMs' understanding of time is hindered for several reasons, including temporal biases and knowledge conflicts in pretraining and RAG data, as well as a fundamental limitation of LLM tokenization, which fragments a date into several meaningless subtokens. Such an inadequate understanding of time can lead to inaccurate reasoning, forecasting and planning, and to time-sensitive findings that are potentially misleading.
Our workshop invites (i) cross-temporal work from the NLP community and (ii) interdisciplinary work that relies on LLMs for cross-temporal studies. See the call for papers for more details. Reference papers are available here.
Email: xtempllms@gmail.com