Long Context QA | Long Cite

  • Related Project: Private
  • Category: Paper Review
  • Date: 2024-09-05

LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA

  • url: https://arxiv.org/abs/2409.02897
  • pdf: https://arxiv.org/pdf/2409.02897
  • html: https://arxiv.org/html/2409.02897v1
  • abstract: Though current long-context large language models (LLMs) have demonstrated impressive capacities in answering user questions based on extensive text, the lack of citations in their responses makes user verification difficult, leading to concerns about their trustworthiness due to their potential hallucinations. In this work, we aim to enable long-context LLMs to generate responses with fine-grained sentence-level citations, improving their faithfulness and verifiability. We first introduce LongBench-Cite, an automated benchmark for assessing current LLMs’ performance in Long-Context Question Answering with Citations (LQAC), revealing considerable room for improvement. To this end, we propose CoF (Coarse to Fine), a novel pipeline that utilizes off-the-shelf LLMs to automatically generate long-context QA instances with precise sentence-level citations, and leverage this pipeline to construct LongCite-45k, a large-scale SFT dataset for LQAC. Finally, we train LongCite-8B and LongCite-9B using the LongCite-45k dataset, successfully enabling their generation of accurate responses and fine-grained sentence-level citations in a single output. The evaluation results on LongBench-Cite show that our trained models achieve state-of-the-art citation quality, surpassing advanced proprietary models including GPT-4o.
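The key output contract in LQAC is that every statement in the response carries fine-grained pointers back to specific sentences of the long input, so a reader can verify each claim without rereading the whole document. As a minimal sketch of the consumer side, the snippet below assumes a hypothetical marker format `<cite>[start-end]</cite>` with inclusive sentence indices into a numbered context; the tag syntax, the naive sentence splitter, and the helper names are illustrative assumptions, not the paper's actual implementation.

```python
import re

# Hypothetical marker: <cite>[start-end]</cite>, where start/end are inclusive
# sentence indices into the numbered source context (an assumption for
# illustration; the real format is defined in the LongCite paper/repo).
CITE_RE = re.compile(r"<cite>\[(\d+)-(\d+)\]</cite>")

def split_sentences(context: str) -> list[str]:
    """Naively split the context into numbered sentences; fine-grained
    citation presupposes some such sentence-level indexing of the input."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", context) if s.strip()]

def extract_citations(response: str, sentences: list[str]) -> list[dict]:
    """Map each citation marker in the response back to the cited source
    sentences, which is what makes a statement verifiable."""
    results = []
    for match in CITE_RE.finditer(response):
        start, end = int(match.group(1)), int(match.group(2))
        results.append({
            "span": (start, end),
            "evidence": sentences[start : end + 1],  # inclusive range
        })
    return results

if __name__ == "__main__":
    context = "Solar output varies. Earth's orbit is elliptical. Seasons follow axial tilt."
    response = "Seasons are driven by axial tilt.<cite>[2-2]</cite>"
    sents = split_sentences(context)
    for cite in extract_citations(response, sents):
        print(cite["span"], "->", cite["evidence"])
```

Sentence-level spans are what separate LQAC from coarse chunk-level citation: as the Coarse to Fine name suggests, the CoF pipeline first produces citations at chunk granularity and then refines them to exact sentences, which is how the SFT data teaches the trained models to emit the answer and its citations in a single pass.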
