Document-based question answering (QA) increasingly includes abstract questions that require synthesizing scattered information from long documents or across multiple documents into coherent answers. However, this setting is still poorly supported by existing benchmarks and evaluation methods, which often lack stable abstract references or rely on coarse similarity metrics and unstable head-to-head comparisons. To address this gap, we introduce ASTRA-QA, a benchmark for AbSTRAct Question Answering over documents.
ASTRA-QA contains 869 QA instances over academic papers and news documents, covering five abstract question types and three controlled retrieval scopes. Each instance is equipped with explicit evaluation annotations, including answer topic sets, curated unsupported topics, and aligned evidence. Building on these annotations, ASTRA-QA assesses whether answers cover the required key points and avoid unsupported content by directly scoring topic coverage and curated hallucination targets, enabling scalable evaluation without exhaustive head-to-head comparisons. Experiments with representative Retrieval-Augmented Generation (RAG) methods spanning vanilla, graph-based, and hierarchical retrieval settings show that ASTRA-QA provides reference-grounded diagnostics for coverage, hallucination, and retrieval-scope robustness.
ASTRA-QA is a benchmark for abstract QA over documents, with a focus on evaluating RAG methods. Unlike conventional QA benchmarks that emphasize short answers or extractive evidence lookup, ASTRA-QA is designed to evaluate whether a RAG system can synthesize information from long documents and produce responses that are coherent, selective, and faithful to the source content. Such questions commonly require document-level summaries, structured comparisons, thematic organization, and temporally grounded synthesis rather than isolated factual snippets. ASTRA-QA is built over two source domains, namely academic papers and news documents, and covers five abstract question types.
Formally, ASTRA-QA consists of a document corpus \(D\) and a set of QA instances. Each QA instance is represented as \((Q, A, H, M)\), where \(Q\) is an abstract question, \(A = \{\tau_1, \tau_2, \cdots, \tau_n\}\) is the answer topic set, \(H = \{h_1, h_2, \cdots, h_k\}\) is a curated hallucination set containing plausible but unsupported topics, and \(M\) denotes the associated metadata, including the aligned evidence set \(E\), the question type, and the retrieval scope. Figure 1 provides an example of this representation, showing the question, answer topic set, hallucination set, and metadata of a QA instance. The set \(A\) is constructed from high-level summary signals and used by our topic-based evaluation method to assess answer coverage, while \(H\) records relevant but unsupported topics and is used to assess whether responses include these curated hallucination targets.
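To make this representation concrete, the sketch below shows one plausible serialization of a QA instance in Python. The field names and example values are illustrative assumptions for exposition, not the benchmark's released schema.

```python
# Hypothetical serialization of a single ASTRA-QA instance (Q, A, H, M).
# Field names and values are illustrative; the released format may differ.
qa_instance = {
    "question": "What shared limitations do the retrieved papers report for graph-based retrieval?",  # Q
    "answer_topics": [                     # A = {tau_1, ..., tau_n}: required key points
        "index construction cost on large corpora",
        "sensitivity to entity extraction errors",
    ],
    "hallucination_topics": [              # H = {h_1, ..., h_k}: plausible but unsupported topics
        "claims about multilingual robustness",
    ],
    "metadata": {                          # M: aligned evidence, question type, retrieval scope
        "evidence": ["doc_0042#sec3", "doc_0107#sec5"],  # aligned evidence set E
        "question_type": "Multi-Comp",     # one of: Single-Sum, Pair-Comp, Multi-Comp, Enum, Temp
        "retrieval_scope": "Middle",       # one of: Simple, Middle, Hard
    },
}
```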
ASTRA-QA is constructed through a three-stage pipeline that converts heterogeneous source materials into abstract QA instances grounded in curated document collections, with reference-grounded evidence and comprehensive topic-set answers.
| Task Type | #Q | #Docs | Corpus Tok. | AM Tok. | #C |
|---|---|---|---|---|---|
| Single-Sum | 422 | 422 | 9,681,570 | 322,719 | 30 |
| Pair-Comp | 99 | 54 | 1,565,393 | 597,178 | 5 |
| Multi-Comp | 42 | 57 | 1,670,693 | 457,473 | 5 |
| Enum | 150 | 64 | 1,728,257 | 427,368 | 7 |
| Temp | 156 | 1,579 | 1,434,193 | 120,514 | 7 |
| Total | 869 | 2,095 | 16,080,106 | 347,963 | 54 |
Our core idea is to evaluate ASTRA-QA answers in the same spirit as grading a composition or a reading-comprehension response by checking whether the answer covers the required key points. For ASTRA-QA, a good answer should cover the answer topics in \(A\) as completely as possible and avoid hallucinated content.
Given a question \(q\) and a generated response \(y\), let \(\hat{T}(q, y)\) denote the set of topics extracted from \(y\), \(S(q, y) \subseteq \hat{T}(q, y)\) the extracted topics supported by the reference, \(C(q, y) \subseteq A\) the answer topics covered by \(y\), and \(T\) the reference answer topic set. We then compute topic precision, topic recall, and topic F1 as
\[ \text{T-Prec} = \frac{|S(q, y)|}{\max(1,\,|\hat{T}(q, y)|)}, \quad \text{T-Rec} = \frac{|C(q, y)|}{|T|}, \quad \text{T-F1} = \frac{2\,\text{T-Prec}\cdot\text{T-Rec}}{\text{T-Prec}+\text{T-Rec}}. \]
Based on the set \(C_H(q, y) \subseteq H\) of curated hallucination topics mentioned in \(y\), we define a topic-level hallucination score (\(H_{\mathrm{topic}}\)) and a response-level hallucination rate (\(H_{\mathrm{resp}}\)) as
\[ H_{\mathrm{topic}}(q, y) = \frac{|C_H(q, y)|}{|H|}, \quad H_{\mathrm{resp}}(q, y) = \mathbb{I}\!\left[|C_H(q, y)| > 0\right]. \]
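As a minimal sketch of how these scores could be computed, the Python function below implements the formulas above over plain Python sets. The function name, argument names, and the treatment of the recall denominator \(T\) as the answer topic set \(A\) are assumptions for illustration, not the benchmark's released evaluation code.

```python
def topic_metrics(extracted, supported, covered, answer_topics,
                  hallucinated_hits, hallucination_set):
    """Score one response against one ASTRA-QA instance.

    extracted          -- T-hat(q, y): topics extracted from the response
    supported          -- S(q, y): extracted topics supported by the reference
    covered            -- C(q, y): answer topics in A that the response covers
    answer_topics      -- the reference answer topic set A (used as T here)
    hallucinated_hits  -- C_H(q, y): curated hallucination topics the response mentions
    hallucination_set  -- the curated hallucination set H
    """
    t_prec = len(supported) / max(1, len(extracted))
    t_rec = len(covered) / len(answer_topics)
    t_f1 = 2 * t_prec * t_rec / (t_prec + t_rec) if (t_prec + t_rec) > 0 else 0.0
    h_topic = len(hallucinated_hits) / len(hallucination_set) if hallucination_set else 0.0
    h_resp = 1 if hallucinated_hits else 0  # indicator: any curated hallucination mentioned
    return {"T-Prec": t_prec, "T-Rec": t_rec, "T-F1": t_f1,
            "H_topic": h_topic, "H_resp": h_resp}


# Example: 2 of 3 extracted topics are supported, 2 of 4 answer topics are covered,
# and no curated hallucination topics appear in the response.
scores = topic_metrics(
    extracted={"t1", "t2", "x"},
    supported={"t1", "t2"},
    covered={"t1", "t2"},
    answer_topics={"t1", "t2", "t3", "t4"},
    hallucinated_hits=set(),
    hallucination_set={"h1", "h2"},
)
# -> T-Prec = 0.667, T-Rec = 0.5, T-F1 ~ 0.571, H_topic = 0.0, H_resp = 0
```

The `max(1, ...)` guard mirrors the precision formula and avoids division by zero when no topics are extracted from a response.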
Performance comparison of representative RAG methods on ASTRA-QA under the three retrieval scopes (Simple, Middle, Hard) and overall. TF1, HT, and HR denote \(\text{T-F1}\), \(H_{\mathrm{topic}}\), and \(H_{\mathrm{resp}}\), respectively.
| Method | Simple TF1 | Simple HT | Simple HR | Middle TF1 | Middle HT | Middle HR | Hard TF1 | Hard HT | Hard HR | Overall TF1 | Overall HT | Overall HR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Vanilla RAG | 45.0 | 15.6 | 27.2 | 24.2 | 6.8 | 12.4 | 23.7 | 10.0 | 22.6 | 31.0 | 10.8 | 20.7 |
| LLightRAG | 55.0 | 9.8 | 20.7 | 36.7 | 15.9 | 30.4 | 33.9 | 17.9 | 37.0 | 41.8 | 14.5 | 29.4 |
| HiLightRAG | 54.0 | 8.2 | 16.7 | 48.4 | 12.4 | 29.9 | 46.0 | 15.4 | 31.3 | 49.5 | 12.0 | 26.0 |
| HyLightRAG | 40.7 | 7.6 | 19.5 | 40.0 | 13.8 | 26.7 | 38.4 | 17.5 | 34.9 | 39.7 | 13.0 | 27.0 |
| LGraphRAG | 57.8 | 11.0 | 22.3 | 39.3 | 11.9 | 27.5 | 38.3 | 13.9 | 27.9 | 45.1 | 12.3 | 25.9 |
| GGraphRAG | 23.1 | 5.2 | 12.7 | 22.5 | 7.2 | 15.5 | 21.8 | 7.7 | 17.4 | 22.5 | 6.7 | 15.2 |
| HippoRAG | 61.7 | 17.3 | 35.1 | 56.9 | 27.4 | 45.8 | 51.2 | 17.5 | 35.1 | 56.6 | 20.0 | 38.7 |
| RAPTOR | 64.0 | 16.4 | 31.7 | 53.3 | 16.6 | 37.6 | 52.1 | 21.1 | 45.5 | 55.3 | 18.0 | 38.3 |
| ArchRAG | 55.2 | 18.9 | 37.5 | 47.6 | 20.7 | 39.4 | 47.0 | 19.3 | 37.6 | 49.9 | 19.6 | 38.2 |
| KET-RAG | 32.7 | 4.0 | 6.2 | 27.8 | 3.3 | 6.6 | 12.9 | 5.6 | 12.8 | 24.5 | 4.3 | 8.5 |
| HiRAG | 68.9 | 18.1 | 29.2 | 45.2 | 13.8 | 33.0 | 35.7 | 13.4 | 34.0 | 49.9 | 15.1 | 32.1 |
@article{astra_qa_2026,
title = {ASTRA-QA: A Benchmark for Abstract Question Answering over Documents},
author = {TBD},
journal = {arXiv},
year = {2026}
}