ASTRA-QA: A Benchmark for Abstract Query Answering over Documents

School of Data Science, The Chinese University of Hong Kong, Shenzhen

Abstract

Document-based question answering (QA) increasingly includes abstract questions that require synthesizing scattered information from long documents or across multiple documents into coherent answers. However, this setting is still poorly supported by existing benchmarks and evaluation methods, which often lack stable abstract references or rely on coarse similarity metrics and unstable head-to-head comparisons. To alleviate this issue, we introduce ASTRA-QA, a benchmark for AbSTRAct Question Answering over documents.

ASTRA-QA contains 869 QA instances over academic papers and news documents, covering five abstract question types and three controlled retrieval scopes. Each instance is equipped with explicit evaluation annotations, including answer topic sets, curated unsupported topics, and aligned evidence. Building on these annotations, ASTRA-QA assesses whether answers cover the required key points and avoid unsupported content by directly scoring coverage of the annotated topics and inclusion of the curated unsupported topics, enabling scalable evaluation without exhaustive head-to-head comparisons. Experiments with representative Retrieval-Augmented Generation (RAG) methods spanning vanilla, graph-based, and hierarchical retrieval settings show that ASTRA-QA provides reference-grounded diagnostics for coverage, hallucination, and retrieval-scope robustness.

ASTRA-QA Benchmark

Overview

[Figure: ASTRA-QA Framework]

ASTRA-QA is a benchmark for abstract QA over documents, with a focus on evaluating RAG methods. Unlike conventional QA benchmarks that emphasize short answers or extractive evidence lookup, ASTRA-QA is designed to evaluate whether a RAG system can synthesize information from long documents and produce responses that are coherent, selective, and faithful to the source content. Such questions commonly require document-level summaries, structured comparisons, thematic organization, and temporally grounded synthesis rather than isolated factual snippets. ASTRA-QA is built over two source domains, namely academic papers and news documents, and covers five abstract question types.

Formally, ASTRA-QA consists of a document corpus \(D\) and a set of QA instances. Each QA instance is represented as \((Q, A, H, M)\), where \(Q\) is an abstract question, \(A = \{\tau_1, \tau_2, \cdots, \tau_n\}\) is the answer topic set, \(H = \{h_1, h_2, \cdots, h_k\}\) is a curated hallucination set containing plausible but unsupported topics, and \(M\) denotes the associated metadata, including the aligned evidence set \(E\), the question type, and the retrieval scope. Figure 1 provides an example of this representation, showing the question, answer topic set, hallucination set, and metadata of a QA instance. The set \(A\) is constructed from high-level summary signals and used by our topic-based evaluation method to assess answer coverage, while \(H\) records relevant but unsupported topics and is used to assess whether responses include these curated hallucination targets.
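
To make the instance structure concrete, the following is a minimal sketch of one QA instance in Python. The field names, the example question, and the topic strings are illustrative assumptions for exposition, not the released data format.

# Illustrative sketch of one ASTRA-QA instance (Q, A, H, M).
# All field names and values here are hypothetical examples, not the official schema.
qa_instance = {
    "question": "How do the two papers differ in their retrieval strategies?",  # Q
    "answer_topics": [                      # A = {tau_1, ..., tau_n}: required key points
        "graph-based indexing for multi-hop evidence",
        "hierarchical summarization of long documents",
    ],
    "hallucination_topics": [               # H = {h_1, ..., h_k}: plausible but unsupported topics
        "results of a human evaluation study",
    ],
    "metadata": {                           # M
        "evidence": ["paper_0412#sec3", "paper_0877#sec1"],  # aligned evidence set E
        "question_type": "Pair-Comp",       # one of the five abstract question types
        "retrieval_scope": "Middle",        # one of the three controlled retrieval scopes
    },
}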

Construction Pipeline

[Figure: ASTRA-QA Construction Pipeline]

ASTRA-QA is constructed through a three-stage pipeline that converts heterogeneous source materials into abstract QA instances grounded in curated document collections, with reference-grounded evidence and comprehensive topic-set answers.

  • Step 1: Data Collection. Source documents are collected from OpenReview (focusing on ICLR 2023), survey papers from arXiv, the tagged publication corpus of Epstein et al., and news articles downloaded through the mediastack API, forming an initial candidate pool of more than 700 paper documents and 1,500 news articles.
  • Step 2: QA Pairs Generation. We use the processed materials, along with type-specific guidance, to generate initial QA instances with an LLM; a minimal sketch of this step is given after the statistics table below. Specifically, we use GPT-4o throughout the ASTRA-QA construction pipeline for generation and refinement.
  • Step 3: QA Pairs Refinement. We refine the generated question and the topic-set answer separately.
Task Type  | #Q  | #Docs | Corpus Tok. | AM Tok. | #C
Single-Sum | 422 |   422 |   9,681,570 | 322,719 | 30
Pair-Comp  |  99 |    54 |   1,565,393 | 597,178 |  5
Multi-Comp |  42 |    57 |   1,670,693 | 457,473 |  5
Enum       | 150 |    64 |   1,728,257 | 427,368 |  7
Temp       | 156 | 1,579 |   1,434,193 | 120,514 |  7
Total      | 869 | 2,095 |  16,080,106 | 347,963 | 54
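
As a rough illustration of Step 2, the snippet below drafts a QA instance with GPT-4o through the OpenAI Python SDK. The prompt wording, the generate_qa helper, and its inputs are our own illustrative assumptions; the actual prompts and type-specific guidance used to build ASTRA-QA are not reproduced here.

# Minimal sketch of Step 2 (QA pair generation), assuming the OpenAI Python SDK.
# The prompt text and the inputs `doc_text` / `question_type` are hypothetical.
from openai import OpenAI

client = OpenAI()

def generate_qa(doc_text: str, question_type: str) -> str:
    """Draft one abstract QA instance for a document and a target question type."""
    prompt = (
        "You are constructing an abstract QA benchmark.\n"
        f"Question type: {question_type}\n"
        f"Document:\n{doc_text}\n\n"
        "Write one abstract question that requires synthesizing the document, "
        "and list the key answer topics a good response should cover."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example call (hypothetical document file):
# draft = generate_qa(open("paper_0412.txt").read(), "Single-Sum")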

Evaluation Method

Our core idea is to evaluate ASTRA-QA answers in the same spirit as grading a composition or a reading-comprehension response by checking whether the answer covers the required key points. For ASTRA-QA, a good answer should cover the answer topics in \(A\) as completely as possible and avoid hallucinated content.

Given a question \(q\) and a system response \(y\), let \(\hat{T}(q, y)\) denote the set of topics identified in the response, \(S(q, y)\) the response topics matched to the reference topic set \(T = A\), and \(C(q, y) \subseteq T\) the reference topics covered by the response. We then compute topic precision, topic recall, and topic F1 as

\[ \text{T-Prec} = \frac{|S(q, y)|}{\max(1,\,|\hat{T}(q, y)|)}, \quad \text{T-Rec} = \frac{|C(q, y)|}{|T|}, \quad \text{T-F1} = \frac{2\,\text{T-Prec}\cdot\text{T-Rec}}{\text{T-Prec}+\text{T-Rec}}. \]
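
A minimal sketch of these topic-level scores in Python, assuming the matched and covered topic sets have already been extracted (for example, by a judge model); the function and argument names are ours.

def topic_scores(matched, extracted, covered, reference):
    """Topic precision / recall / F1.

    matched   -> S(q, y): response topics matched to reference topics
    extracted -> T_hat(q, y): all topics identified in the response
    covered   -> C(q, y): reference topics covered by the response
    reference -> T: the answer topic set A
    """
    t_prec = len(matched) / max(1, len(extracted))
    t_rec = len(covered) / len(reference)
    t_f1 = 0.0 if t_prec + t_rec == 0 else 2 * t_prec * t_rec / (t_prec + t_rec)
    return t_prec, t_rec, t_f1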

Let \(C_H(q, y) \subseteq H\) denote the curated hallucination topics in \(H\) that the response \(y\) includes. Based on \(C_H(q, y)\), we define a topic-level hallucination score (\(H_{\mathrm{topic}}\)) and a response-level hallucination rate (\(H_{\mathrm{resp}}\)) as

\[ H_{\mathrm{topic}}(q, y) = \frac{|C_H(q, y)|}{|H|}, \quad H_{\mathrm{resp}}(q, y) = \mathbb{I}\!\left[|C_H(q, y)| > 0\right]. \]
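
The corresponding hallucination scores are equally direct to compute; again, the names below are illustrative.

def hallucination_scores(hit_hallucinations, hallucination_set):
    """Topic-level hallucination score H_topic and response-level rate H_resp.

    hit_hallucinations -> C_H(q, y): curated unsupported topics the response includes
    hallucination_set  -> H: the curated hallucination set of the instance
    """
    h_topic = len(hit_hallucinations) / len(hallucination_set)
    h_resp = 1 if hit_hallucinations else 0  # indicator I[|C_H(q, y)| > 0]
    return h_topic, h_resp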

Experiments

Performance comparison of representative RAG methods on ASTRA-QA under the three retrieval settings and overall. TF1, HT, and HR denote \(\text{T-F1}\), \(H_{\mathrm{topic}}\), and \(H_{\mathrm{resp}}\), respectively.

Method       |  Simple             |  Middle             |  Hard               |  Overall
             |  TF1    HT    HR    |  TF1    HT    HR    |  TF1    HT    HR    |  TF1    HT    HR
Vanilla RAG  |  45.0   15.6  27.2  |  24.2   6.8   12.4  |  23.7   10.0  22.6  |  31.0   10.8  20.7
LLightRAG    |  55.0   9.8   20.7  |  36.7   15.9  30.4  |  33.9   17.9  37.0  |  41.8   14.5  29.4
HiLightRAG   |  54.0   8.2   16.7  |  48.4   12.4  29.9  |  46.0   15.4  31.3  |  49.5   12.0  26.0
HyLightRAG   |  40.7   7.6   19.5  |  40.0   13.8  26.7  |  38.4   17.5  34.9  |  39.7   13.0  27.0
LGraphRAG    |  57.8   11.0  22.3  |  39.3   11.9  27.5  |  38.3   13.9  27.9  |  45.1   12.3  25.9
GGraphRAG    |  23.1   5.2   12.7  |  22.5   7.2   15.5  |  21.8   7.7   17.4  |  22.5   6.7   15.2
HippoRAG     |  61.7   17.3  35.1  |  56.9   27.4  45.8  |  51.2   17.5  35.1  |  56.6   20.0  38.7
RAPTOR       |  64.0   16.4  31.7  |  53.3   16.6  37.6  |  52.1   21.1  45.5  |  55.3   18.0  38.3
ArchRAG      |  55.2   18.9  37.5  |  47.6   20.7  39.4  |  47.0   19.3  37.6  |  49.9   19.6  38.2
KET-RAG      |  32.7   4.0   6.2   |  27.8   3.3   6.6   |  12.9   5.6   12.8  |  24.5   4.3   8.5
HiRAG        |  68.9   18.1  29.2  |  45.2   13.8  33.0  |  35.7   13.4  34.0  |  49.9   15.1  32.1

BibTeX

@article{astra_qa_2026,
  title   = {ASTRA-QA: A Benchmark for Abstract Question Answering over Documents},
  author  = {TBD},
  journal = {arXiv},
  year    = {2026}
}