Mitigating the Information Cocoon Effect in Cognitively Aligned Recommendations: A Human-Centered Approach

Document Type

Conference Proceeding

Publication Date

1-1-2026

Abstract

This paper introduces a Gray-Box Recommender System (RS) framework that integrates Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) to deliver transparent, context-aware, and cognitively aligned recommendations. While deep learning-based RSs achieve high accuracy, their opaque 'black-box' nature undermines user trust and system interpretability. Conversely, transparent 'white-box' models often suffer from lower predictive performance. The proposed framework navigates the trade-off between performance and explainability by architecting a partially transparent model that exposes reasoned decision-making processes without compromising algorithm security or user privacy. By augmenting a hybrid collaborative-content-based filtering model with a RAG-enhanced LLM explanation module, the framework provides contextual, cognitively aligned recommendations that mitigate critical challenges, including the information cocoon effect, cold-start problems, and recommendation hallucination. We present the formal architecture, a detailed algorithmic pipeline, and evaluation methodologies. Theoretical analysis and a proposed validation plan suggest that this human-centered, gray-box approach significantly advances personalization, interpretability, and trust in next-generation explainable RSs (XRS).
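The pipeline the abstract describes (hybrid scoring, retrieval, grounded explanation) can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual API: the blending weight `alpha`, the `retrieve_context` keyword-overlap retriever (standing in for RAG), and the `explain` template (standing in for the LLM module) are all hypothetical names introduced here.

```python
# Minimal sketch of a gray-box recommendation pipeline, assuming:
# hybrid score = alpha * collaborative + (1 - alpha) * content-based,
# a toy retriever in place of RAG, and a template in place of the LLM.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    tags: frozenset


def hybrid_score(collab: float, content: float, alpha: float = 0.6) -> float:
    """Blend collaborative and content-based scores (the white-box part)."""
    return alpha * collab + (1 - alpha) * content


def retrieve_context(item: Item, corpus: dict, k: int = 1) -> list:
    """Toy stand-in for RAG retrieval: rank corpus snippets by tag overlap."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: -sum(tag in kv[1] for tag in item.tags))
    return [text for _, text in ranked[:k]]


def explain(item: Item, score: float, context: list) -> str:
    """Template stand-in for the LLM explanation module; grounding the
    explanation in retrieved context is what mitigates hallucination."""
    return (f"Recommended {item.item_id} (score {score:.2f}) because: "
            + "; ".join(context))


item = Item("movie-42", frozenset({"sci-fi", "space"}))
score = hybrid_score(collab=0.8, content=0.5)
ctx = retrieve_context(item, {
    "d1": "A classic space opera praised for its sci-fi world-building.",
    "d2": "A romantic comedy set in Paris.",
})
print(explain(item, score, ctx))
```

Exposing the blended score and the retrieved evidence alongside the recommendation is what makes the box "gray": the ranking logic and its grounding are inspectable, while the generative explanation layer remains a black box.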

Publication Title

2026 IEEE 16th Annual Computing and Communication Workshop and Conference (CCWC 2026)

ISBN

9798331593971
