ALTOReader#
- class scikitplot.corpus.ALTOReader(input_file, chunker=None, filter_=None, filename_override=None, default_language=None, source_uri=None, source_provenance=<factory>, granularity='block', max_file_bytes=5368709120, xml_encoding=None)[source]#
ALTO XML reader for scanned document archives.
Reads a ZIP archive containing one ALTO XML file per page, extracts text and physical layout metadata from each page, and yields raw chunk dicts with the first-class fields page_number, bbox, confidence, and ocr_engine. ALTO namespaces v2, v3, and v4 are auto-detected from the root element. Pages are processed in natural-sort order of the XML filenames within the archive.
- Parameters:
- input_file : pathlib.Path
Path to the .zip archive containing ALTO XML files.
- granularity : str, optional
Chunking level within each ALTO page. One of:
"block" (default): one chunk per <TextBlock>. Suitable for paragraph-level corpus construction.
"line": one chunk per <TextLine>. Suitable for line-level OCR error analysis or fine-grained alignment.
"page": one chunk per page (all text joined). Suitable for document-level retrieval.
- max_file_bytes : int, optional
Maximum ZIP file size in bytes before raising ValueError. Default: 5 GB.
- xml_encoding : str or None, optional
Force the XML member encoding. None uses the encoding declared in each XML header (or UTF-8). Default: None.
- chunker : ChunkerBase or None, optional
Inherited from DocumentReader.
- filter_ : FilterBase or None, optional
Inherited from DocumentReader.
- filename_override : str or None, optional
Inherited from DocumentReader.
- default_language : str or None, optional
Inherited from DocumentReader.
- Attributes:
- file_type : str
Class variable. Always ".zip".
- file_types : list of str
Class variable. Registered extensions: [".zip"].
- Raises:
- ValueError
If granularity is not one of the valid values.
- ValueError
If the file exceeds max_file_bytes.
- ValueError
If a ZIP entry contains a ZipSlip ".." path traversal sequence.
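The ZipSlip guard described above can be sketched as a standalone predicate. This is a hypothetical illustration of the check, not the library's actual code:

```python
def has_zipslip(entry_name):
    """True when a ZIP entry path contains a ".." traversal component.

    Backslashes are normalised to forward slashes first, so a
    Windows-style "dir\\..\\evil.xml" entry is caught as well.
    """
    parts = entry_name.replace("\\", "/").split("/")
    return ".." in parts
```

A reader can run this over every archive member name before extracting anything, which matches the "before any file is read" guarantee in the Notes.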
See also
scikitplot.corpus._readers.PDFReader : PDF reader.
scikitplot.corpus._readers.XMLReader : Generic XML reader.
scikitplot.corpus._readers.ImageReader : OCR reader for raster images.
Notes
ALTO namespace detection: The reader inspects the root element tag of each XML file. Documents with no namespace (older ABBYY exports) are handled via the empty-string namespace path.
Word confidence (WC): ALTO WC attributes represent per-word OCR confidence. Some tools emit values in [0, 1]; others use [0, 100]. ALTOReader auto-normalises values above 1.0 by dividing by 100, so confidence is always in [0.0, 1.0] when present.
Bounding boxes: The bbox field stores (HPOS, VPOS, WIDTH, HEIGHT) in the measurement unit declared by MeasurementUnit in the ALTO header (commonly 1/10 mm for 300 DPI scans). Conversion to pixel coordinates requires the DPI value, which is not stored in the chunk; retrieve it from doc.metadata if needed (add a "dpi" key in a custom reader subclass).
Security: ZipSlip validation rejects any ZIP entry whose path contains ".." components before any file is read.
Examples
Default block-level chunking:
>>> from pathlib import Path
>>> reader = ALTOReader(input_file=Path("newspaper_scan.zip"))
>>> docs = list(reader.get_documents())
>>> print(f"Blocks extracted: {len(docs)}")
>>> print(f"Page 0 confidence: {docs[0].confidence:.3f}")
Line-level granularity:
>>> reader = ALTOReader(
...     input_file=Path("book_scan.zip"),
...     granularity="line",
... )
With source provenance:
>>> from scikitplot.corpus._base import DocumentReader
>>> from scikitplot.corpus._schema import SourceType
>>> reader = DocumentReader.create(
...     Path("periodical_1920.zip"),
...     source_type=SourceType.ARTICLE,
...     source_title="The Daily Gazette",
...     source_date="1920-03-15",
... )
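The WC normalisation and bbox unit described in the Notes can be mimicked with standalone helpers. These are hypothetical sketches that assume the 1/10 mm MeasurementUnit mentioned above, not the library's actual implementation:

```python
def normalise_wc(values):
    """Normalise ALTO WC word-confidence values to [0.0, 1.0].

    Some OCR tools emit WC in [0, 1], others in [0, 100]; values
    above 1.0 are treated as percentages and divided by 100.
    """
    out = []
    for v in values:
        v = float(v)
        out.append(v / 100.0 if v > 1.0 else v)
    return out


def tenth_mm_bbox_to_pixels(bbox, dpi):
    """Convert a (HPOS, VPOS, WIDTH, HEIGHT) box from 1/10 mm to pixels.

    One inch is 254 tenth-millimetres, so the scale factor is dpi / 254.
    """
    scale = dpi / 254.0
    return tuple(round(v * scale) for v in bbox)
```

At 300 DPI a 254-unit span (one inch) maps to 300 pixels, which is the conversion a subclass exposing a "dpi" metadata key would need.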
- chunker: ChunkerBase | None = None#
Chunker to apply to each raw text block.
None means each raw chunk is used as-is (one CorpusDocument per raw chunk).
- classmethod create(*inputs, chunker=None, filter_=None, filename_override=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, **kwargs)[source]#
Instantiate the appropriate reader for one or more sources.
Accepts any mix of file paths, URL strings, and pathlib.Path objects, in any order. URL strings (those starting with http:// or https://) are automatically detected and routed to from_url; everything else is treated as a local file path and dispatched by extension via the registry.
- Parameters:
- *inputs : pathlib.Path or str
One or more source paths or URL strings. Pass a single value for the common case; pass multiple values to get a _MultiSourceReader that chains all their documents.
- chunker : ChunkerBase or None, optional
Chunker injected into every reader. Default: None.
- filter_ : FilterBase or None, optional
Filter injected into every reader. Default: None (DefaultFilter).
- filename_override : str or None, optional
Override the source_file label. Only applied when inputs contains exactly one source. Default: None.
- default_language : str or None, optional
ISO 639-1 language code applied to all sources. Default: None.
- source_type : SourceType, list of SourceType or None, or None, optional
Semantic label for the source kind. When inputs has more than one element you may pass a list of the same length to assign a distinct type per source; None entries in the list mean "infer from extension / URL". A single value is broadcast to all sources. Default: None.
- source_title : str or None, optional
Title propagated into every yielded document. Default: None.
- source_author : str or None, optional
Author propagated into every yielded document. Default: None.
- source_date : str or None, optional
ISO 8601 publication date. Default: None.
- collection_id : str or None, optional
Corpus collection identifier. Default: None.
- doi : str or None, optional
Digital Object Identifier (file sources only). Default: None.
- isbn : str or None, optional
ISBN (file sources only). Default: None.
- **kwargs : Any
Extra keyword arguments forwarded verbatim to each concrete reader constructor (e.g. transcribe=True for AudioReader, backend="easyocr" for ImageReader).
- Returns:
- DocumentReader
A single reader when inputs has exactly one element (backward compatible with every existing call site). A _MultiSourceReader when inputs has more than one element; it implements the same get_documents() interface and chains documents from all sub-readers in order.
- Raises:
- ValueError
If inputs is empty, or if a source URL is invalid, or if no reader is registered for a file’s extension.
- TypeError
If any element of inputs is not a str or pathlib.Path.
Notes
URL auto-detection: A str element is treated as a URL when it matches ^https?:// (case-insensitive). All other strings and all pathlib.Path objects are treated as local file paths. This means you no longer need to call from_url explicitly; just pass the URL string to create.
Per-source source_type: When passing multiple inputs with different media types, supply a list:
DocumentReader.create(
    Path("podcast.mp3"),
    "report.pdf",
    "https://iris.who.int/.../content",  # returns image/jpeg
    source_type=[SourceType.PODCAST, SourceType.RESEARCH, SourceType.IMAGE],
)
Reader-specific kwargs (forwarded via **kwargs):
transcribe=True, whisper_model="small" → AudioReader, VideoReader
backend="easyocr" → ImageReader
prefer_backend="pypdf" → PDFReader
classify=True, classifier=fn → AudioReader
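The ^https?:// routing rule described above amounts to a one-line predicate. A hypothetical sketch of the dispatch test, not the library's code:

```python
import re

# Matches the URL prefix case-insensitively, per the Notes.
_URL_RE = re.compile(r"^https?://", re.IGNORECASE)


def is_url_source(source):
    """True when a create() input should be routed to from_url.

    Only str inputs can be URLs; pathlib.Path objects are always
    treated as local file paths and dispatched by extension.
    """
    return isinstance(source, str) and bool(_URL_RE.match(source))
```

Everything this predicate rejects falls through to the extension registry.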
Examples
Single file (backward-compatible):
>>> reader = DocumentReader.create(Path("hamlet.txt"))
>>> docs = list(reader.get_documents())
URL string auto-detected — no from_url() call required:
>>> reader = DocumentReader.create(
...     "https://en.wikipedia.org/wiki/Python_(programming_language)"
... )
Mixed multi-source batch:
>>> reader = DocumentReader.create(
...     Path("podcast.mp3"),
...     "report.pdf",
...     "https://iris.who.int/api/bitstreams/abc/content",
...     source_type=[SourceType.PODCAST, SourceType.RESEARCH, SourceType.IMAGE],
... )
>>> docs = list(reader.get_documents())  # chained stream from all three
- default_language: str | None = None#
ISO 639-1 language code to assign when the source has no language info.
- property file_name: str#
Effective filename used in document labels.
Returns filename_override when set; otherwise returns input_file.name.
- Returns:
- str
File name string (not a full path).
Examples
>>> from pathlib import Path
>>> reader = TextReader(input_file=Path("/data/corpus.txt"))
>>> reader.file_name
'corpus.txt'
- file_type: ClassVar[str | None] = '.zip'#
Single file extension this reader handles (lowercase, including leading dot), e.g. ".txt", ".xml", ".zip".
For readers that handle multiple extensions, define file_types (plural) instead. Exactly one of file_type or file_types must be defined on every concrete subclass.
- file_types: ClassVar[list[str] | None] = ['.zip']#
List of file extensions this reader handles (lowercase, leading dot). Use instead of file_type when a single reader class should be registered for several extensions, e.g. an image reader for [".png", ".jpg", ".jpeg", ".gif", ".webp"].
When both file_type and file_types are defined on the same class, file_types takes precedence and file_type is ignored.
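The precedence rule above can be expressed as a small resolver. A hypothetical sketch of the registry logic, not the library's actual code:

```python
def resolve_extensions(file_type, file_types):
    """Return the list of extensions a reader subclass registers for.

    file_types (plural) wins when both are defined; file_type is then
    ignored. Defining neither is a subclass error.
    """
    if file_types is not None:
        return list(file_types)
    if file_type is not None:
        return [file_type]
    raise TypeError("define file_type or file_types on the subclass")
```

For ALTOReader both are defined, so the registry would see only the file_types value, [".zip"].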
- filter_: FilterBase | None = None#
Filter applied after chunking. None triggers the DefaultFilter.
- classmethod from_manifest(manifest_path, *, chunker=None, filter_=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, encoding='utf-8', **kwargs)[source]#
Build a _MultiSourceReader from a manifest file.
The manifest is a text file with one source per line, either a file path or a URL. Blank lines and lines starting with # are ignored. JSON manifests (a list of strings or objects) are also supported.
- Parameters:
- manifest_path : pathlib.Path or str
Path to the manifest file. Supported formats:
.txt / .manifest: one source per line.
.json: a JSON array of strings (sources) or objects with at least a "source" key (and optional "source_type", "source_title" per-entry overrides).
- chunker : ChunkerBase or None, optional
Chunker applied to all sources. Default: None.
- filter_ : FilterBase or None, optional
Filter applied to all sources. Default: None.
- default_language : str or None, optional
ISO 639-1 language code. Default: None.
- source_type : SourceType or None, optional
Override source type for all sources. Default: None.
- source_title : str or None, optional
Override title for all sources. Default: None.
- source_author : str or None, optional
Override author for all sources. Default: None.
- source_date : str or None, optional
Override date for all sources. Default: None.
- collection_id : str or None, optional
Collection identifier. Default: None.
- doi : str or None, optional
DOI override. Default: None.
- isbn : str or None, optional
ISBN override. Default: None.
- encoding : str, optional
Text encoding for .txt manifests. Default: "utf-8".
- **kwargs : Any
Forwarded to each reader constructor.
- Returns:
- _MultiSourceReader
Multi-source reader chaining all manifest entries.
- Raises:
- ValueError
If manifest_path does not exist or is empty after filtering blank and comment lines.
- ValueError
If the manifest format is not recognised.
- Return type:
_MultiSourceReader
Notes
Per-entry overrides in JSON manifests: each entry may be an object with:
{
    "source": "https://example.com/report.pdf",
    "source_type": "research",
    "source_title": "Annual Report 2024"
}
String-level source_type values are coerced via SourceType(value); an invalid value raises ValueError.
Examples
Text manifest sources.txt:
# WHO corpus
https://www.who.int/europe/news/item/...
https://youtu.be/rwPISgZcYIk
WHO-EURO-2025.pdf
scan.jpg
Usage:
reader = DocumentReader.from_manifest(
    Path("sources.txt"),
    collection_id="who-corpus",
)
docs = list(reader.get_documents())
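The manifest-filtering rules above (blank lines and # comments skipped; JSON arrays of strings or objects) can be sketched as a standalone parser. A hypothetical helper, not the library's implementation:

```python
import json


def parse_manifest(text, is_json=False):
    """Parse manifest content into a list of {"source": ...} entries.

    JSON entries that are plain strings are wrapped; object entries are
    kept so per-entry overrides like "source_title" survive.
    """
    if is_json:
        return [
            {"source": item} if isinstance(item, str) else dict(item)
            for item in json.loads(text)
        ]
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            entries.append({"source": line})
    return entries
```

Each resulting "source" value would then be dispatched exactly as a create() input.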
- classmethod from_url(url, *, chunker=None, filter_=None, filename_override=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, **kwargs)[source]#
Instantiate the appropriate reader for a URL source.
Dispatches to YouTubeReader for YouTube URLs and to WebReader for all other http:// / https:// URLs.
- Parameters:
- url : str
Full URL string. Must start with http:// or https://.
- chunker : ChunkerBase or None, optional
Chunker to inject. Default: None.
- filter_ : FilterBase or None, optional
Filter to inject. Default: None (DefaultFilter).
- filename_override : str or None, optional
Override for the source_file label. Default: None.
- default_language : str or None, optional
ISO 639-1 language code. Default: None.
- source_type : SourceType or None, optional
Semantic label for the source. Default: None.
- source_title : str or None, optional
Title of the source work. Default: None.
- source_author : str or None, optional
Primary author. Default: None.
- source_date : str or None, optional
Publication date in ISO 8601 format. Default: None.
- collection_id : str or None, optional
Corpus collection identifier. Default: None.
- doi : str or None, optional
Digital Object Identifier. Default: None.
- isbn : str or None, optional
International Standard Book Number. Default: None.
- **kwargs : Any
Additional kwargs forwarded to the reader constructor (e.g. include_auto_generated=False for YouTubeReader).
- Returns:
- DocumentReader
A YouTubeReader or WebReader instance.
- Raises:
- ValueError
If url does not start with http:// or https://.
- ImportError
If the required reader class is not registered (i.e. scikitplot.corpus._readers has not been imported yet).
Notes
Prefer create for new code. Passing a URL string to create automatically calls from_url; you rarely need to call from_url directly.
Examples
>>> reader = DocumentReader.from_url("https://en.wikipedia.org/wiki/Python")
>>> docs = list(reader.get_documents())
>>> yt = DocumentReader.from_url("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
>>> docs = list(yt.get_documents())
- get_documents()[source]#
Yield validated CorpusDocument instances for the input file.
Orchestrates the full per-file pipeline:
1. validate_input: fail fast if the file is missing.
2. get_raw_chunks: format-specific text extraction.
3. Chunker (if set): sub-segments each raw block.
4. CorpusDocument construction with validated schema.
5. Filter: discards noise documents.
- Yields:
- CorpusDocument
Validated documents that passed the filter.
- Raises:
- ValueError
If the input file is missing or the format is invalid.
- Return type:
Generator[CorpusDocument, None, None]
Notes
The global chunk_index counter is monotonically increasing across all raw chunks and sub-chunks for a single file, ensuring that (source_file, chunk_index) is a unique key within one reader run.
Omitted-document statistics are logged at INFO level after processing each file.
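The (source_file, chunk_index) uniqueness guarantee above can be verified on any stream of yielded documents. A hypothetical validator operating on mappings that carry the two fields, not part of the library:

```python
def check_unique_keys(docs):
    """Raise ValueError on a duplicate (source_file, chunk_index) pair.

    Accepts any iterable of mappings exposing the two fields; returns
    the number of distinct keys seen.
    """
    seen = set()
    for doc in docs:
        key = (doc["source_file"], doc["chunk_index"])
        if key in seen:
            raise ValueError(f"duplicate document key: {key}")
        seen.add(key)
    return len(seen)
```

Such a check is only meaningful within a single reader run, since the counter resets per file.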
Examples
>>> from pathlib import Path
>>> reader = DocumentReader.create(Path("corpus.txt"))
>>> docs = list(reader.get_documents())
>>> all(isinstance(d, CorpusDocument) for d in docs)
True
- get_raw_chunks()[source]#
Iterate over ALTO XML pages in the ZIP and yield text chunks.
Each XML file in the archive is treated as one page, processed in natural-sort order. Pages with no <TextBlock> elements or no extractable text are skipped.
- Yields:
- dict
Keys:
"text" : Extracted OCR text.
"section_type" : Always TEXT.
"page_number" : Zero-based page index within the archive (promoted field).
"bbox" : Bounding box (HPOS, VPOS, WIDTH, HEIGHT) as a tuple[float, ...], or None for page-level chunks.
"confidence" : Mean word confidence in [0.0, 1.0], or None when WC attributes are absent.
"ocr_engine" : OCR engine name from the ALTO header, or None.
- Raises:
- ValueError
If the file exceeds max_file_bytes.
- ValueError
If a ZIP entry triggers the ZipSlip guard.
- zipfile.BadZipFile
If the file is not a valid ZIP archive.
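A consumer of the raw chunk dicts described above might flag pages whose mean OCR confidence is low. A hypothetical sketch; the 0.8 threshold is an assumption, not a library default:

```python
def low_confidence_pages(chunks, threshold=0.8):
    """Sorted page numbers whose mean confidence falls below threshold.

    Chunks whose "confidence" is None (no WC attributes in the ALTO
    source) are skipped rather than treated as low quality.
    """
    pages = set()
    for chunk in chunks:
        conf = chunk.get("confidence")
        if conf is not None and conf < threshold:
            pages.add(chunk["page_number"])
    return sorted(pages)
```

With granularity="block" several chunks share a page_number, so the set keeps each flagged page once.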
- granularity: str = 'block'#
Chunking granularity within each ALTO page. One of "block", "line", or "page". Default: "block".
- input_file: Path[source]#
Path to the source file.
For URL-based readers (WebReader, YouTubeReader), pass pathlib.Path(url_string) here and set source_uri to the original URL string. validate_input() is overridden in those subclasses to skip the file-existence check.
- source_provenance: dict[str, Any][source]#
Provenance overrides propagated into every yielded CorpusDocument.
Keys may include "source_type", "source_title", "source_author", and "collection_id". Populated by create / from_url from their keyword arguments.
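How such overrides might be merged into per-document metadata can be sketched as a simple overlay. A hypothetical illustration, not the library's actual merge logic:

```python
def apply_provenance(extracted, source_provenance):
    """Overlay provenance overrides onto metadata extracted from the file.

    Explicit overrides win over extracted values; None entries are
    ignored so they never erase metadata found in the source itself.
    """
    merged = dict(extracted)
    for key, value in source_provenance.items():
        if value is not None:
            merged[key] = value
    return merged
```

Skipping None keeps unset create() keywords from clobbering values the reader pulled out of the ALTO header.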
- source_uri: str | None = None#
Original URI for URL-based readers (web pages, YouTube videos).
Set this to the full URL string when input_file is a synthetic pathlib.Path wrapping a URL. File-based readers leave this None.
Examples
>>> reader = WebReader(
...     input_file=Path("https://example.com/article"),
...     source_uri="https://example.com/article",
... )
- classmethod subclass_by_type()[source]#
Return a copy of the extension → reader class registry.
- Returns:
- dict
Mapping of file extension (str) → reader class. Returns a shallow copy so callers cannot accidentally mutate the registry.
Examples
>>> registry = DocumentReader.subclass_by_type()
>>> ".txt" in registry
True
- classmethod supported_types()[source]#
Return a sorted list of file extensions supported by registered readers.
- Returns:
- list of str
Lowercase file extensions, each including the leading dot, e.g. ['.pdf', '.txt', '.xml', '.zip'].
Examples
>>> DocumentReader.supported_types()
['.pdf', '.txt', '.xml', '.zip']
- validate_input()[source]#
Assert that the input file exists and is readable.
- Raises:
- ValueError
If input_file does not exist or is not a regular file.
- Return type:
None
Notes
Called automatically by get_documents before iterating. Can also be called eagerly after construction to fail fast.
Examples
>>> reader = DocumentReader.create(Path("missing.txt"))
>>> reader.validate_input()
Traceback (most recent call last):
...
ValueError: Input file does not exist: missing.txt