Paper by Chaohui and Michel accepted and presented @HHAI2025

Towards Chatbots That Know Their Users: A Comparative Study of Memory Systems and Benchmark Dataset

Chatbot applications based on large language models (LLMs) are increasingly popular. However, to develop conversational AI applications that work collaboratively alongside humans according to the Hybrid Intelligence paradigm, we need chatbots that remember earlier interactions and take the insights from those interactions into account. Due to input token limitations, LLMs often struggle with long-term interactions, particularly in highly interactive domains like simulated mental companionship. Recent methods such as Chroma incorporate external memory to handle extended texts, yet the diversity of text types and tasks across scenarios complicates choosing an appropriate Retrieval-Augmented Generation (RAG) method. To address this challenge, we propose a graph-based memory management and retrieval model. To evaluate it, we constructed a long-text dataset spanning five scenarios with varying text lengths and a total of 150 questions, encompassing both simple and complex inquiries. The dataset is used to compare our memory system against common retrieval-augmented memory approaches. Results show that while our approach can sometimes produce confusing prompts when text homogeneity is high and texts are short, it effectively captures the overall content and demonstrates reasoning in longer scenarios, thereby providing a feasible memory solution that helps chatbots better understand the user and supports sustained interaction and simulated companionship.
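The abstract does not describe the internals of the graph-based memory model, so purely as illustration, here is a minimal pure-Python sketch of the general idea behind graph-based conversational memory: past messages become nodes, edges connect messages that share keywords, and retrieval seeds on keyword overlap with the query and then expands one hop through the graph. All names (`GraphMemory`, the stopword list, the scoring) are hypothetical choices for this sketch, not the paper's method.

```python
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "i", "you", "my", "is", "to", "and", "of", "in"}

def keywords(text):
    # Naive keyword extraction: lowercase tokens, punctuation stripped, stopwords removed.
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

class GraphMemory:
    """Toy graph memory: messages are nodes, linked when they share a keyword."""

    def __init__(self):
        self.messages = []              # node id -> message text
        self.index = defaultdict(set)   # keyword -> node ids containing it
        self.edges = defaultdict(set)   # node id -> neighbouring node ids

    def add(self, text):
        nid = len(self.messages)
        self.messages.append(text)
        for kw in keywords(text):
            for other in self.index[kw]:    # link to earlier messages sharing kw
                self.edges[nid].add(other)
                self.edges[other].add(nid)
            self.index[kw].add(nid)
        return nid

    def retrieve(self, query, k=2):
        # Seed with nodes whose keywords overlap the query, then expand one hop.
        qkw = keywords(query)
        seeds = set()
        for kw in qkw:
            seeds |= self.index.get(kw, set())
        candidates = set(seeds)
        for nid in seeds:
            candidates |= self.edges[nid]
        # Rank by direct keyword overlap with the query.
        scored = sorted(
            candidates,
            key=lambda n: len(keywords(self.messages[n]) & qkw),
            reverse=True,
        )
        return [self.messages[n] for n in scored[:k]]

# Tiny demo: the one-hop expansion pulls in a related message
# ("Rex was sick") that shares no keyword with the query itself.
mem = GraphMemory()
mem.add("My dog Rex loves hiking in the mountains.")
mem.add("Rex was sick last week, so we stayed home.")
mem.add("I started a new job at a bakery.")
hits = mem.retrieve("How is my dog doing?")
```

The graph step is what distinguishes this from plain vector lookup: the sick-dog message is retrieved via its edge to the "dog" message, not via similarity to the query.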