TEIReader#

class scikitplot.corpus.TEIReader(input_file, chunker=None, filter_=None, filename_override=None, default_language=None, source_uri=None, source_provenance=<factory>, include_stage_directions=True, include_speaker_tags=True, max_file_bytes=209715200)[source]#

TEI/XML document reader with dramatic structure extraction.

Reads a TEI-encoded text (plays, poems, prose) and yields one raw chunk per logical text unit, detecting:

  • Verse lines (<l> / <lg>) → section_type=VERSE

  • Prose paragraphs (<p> / <ab>) → section_type=TEXT

  • Dialogue turns (<sp>) → section_type=DIALOGUE

  • Stage directions (<stage>) → section_type=STAGE_DIRECTION

Acts (<act> or <div[@type='act']>) and scenes (<scene> or <div[@type='scene']>) are tracked and their ordinal numbers propagated as act and scene_number (both one-based) into each yielded raw chunk dict.

Parameters:
input_file : pathlib.Path

Path to the TEI .xml file.

include_stage_directions : bool, optional

When True (default), stage directions are included as STAGE_DIRECTION chunks. When False, they are omitted.

include_speaker_tags : bool, optional

When True (default), the <speaker> text within a <sp> block is prepended to the chunk text (e.g. "HAMLET: To be or..."). When False, speaker names are omitted.

max_file_bytes : int, optional

Maximum file size in bytes. Default: 200 MB.

chunker : ChunkerBase or None, optional

Inherited from DocumentReader.

filter_ : FilterBase or None, optional

Inherited from DocumentReader.

filename_override : str or None, optional

Inherited from DocumentReader.

default_language : str or None, optional

Inherited from DocumentReader.

source_uri : str or None, optional

Original URI for URL-based readers. Default: None.

source_provenance : dict[str, Any], optional

Provenance overrides propagated into every yielded CorpusDocument.

Attributes:
file_type : ClassVar[str]

Not registered (None) — TEIReader is used as an instantiated object, not via the DocumentReader.create() factory. TEI files have .xml extension, which is already registered by XMLReader. To route .xml files to TEIReader instead, subclass TEIReader and set file_type = ".xml".

Raises:
ValueError

If the file exceeds max_file_bytes or is not valid XML.


See also

scikitplot.corpus._readers.XMLReader

Generic XML reader.

scikitplot.corpus._readers.ALTOReader

ALTO XML in ZIP reader.

Notes

Namespace handling: TEIReader auto-detects the TEI namespace from the root element and builds XPath expressions accordingly. No manual namespace configuration is required.

Act/scene numbering: Acts and scenes are counted in document order (not from n attributes, which are optional in TEI). First act = 1, first scene = 1.
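The namespace auto-detection and document-order counting described above can be sketched with the standard library (a simplified illustration over toy data, not the reader's internal implementation):

```python
import re
import xml.etree.ElementTree as ET

# Toy TEI fragment (hypothetical content, default TEI namespace).
TEI = """<TEI xmlns="http://www.tei-c.org/ns/1.0"><text><body>
<div type="act"><div type="scene"><l>Who's there?</l></div></div>
<div type="act"><div type="scene"><l>A</l></div>
<div type="scene"><l>B</l></div></div>
</body></text></TEI>"""

root = ET.fromstring(TEI)

# Detect the namespace from the root tag, e.g. '{http://www.tei-c.org/ns/1.0}TEI'.
match = re.match(r"\{(.+)\}", root.tag)
ns = {"tei": match.group(1)} if match else {}

rows = []
# Acts and scenes are numbered by position in document order, not by n attributes.
for act_no, act in enumerate(root.iterfind(".//tei:div[@type='act']", ns), start=1):
    for scene_no, scene in enumerate(act.iterfind("tei:div[@type='scene']", ns), start=1):
        for line_no, line in enumerate(scene.iterfind("tei:l", ns), start=1):
            rows.append((act_no, scene_no, line_no, line.text))
```

Here `rows` carries exactly the (act, scene_number, line_number) triples the reader promotes, all one-based.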

Yielded promoted fields per chunk:

  • "section_type" — TEXT, DIALOGUE, STAGE_DIRECTION, or VERSE

  • "act" — one-based act number (None if not inside an act)

  • "scene_number" — one-based scene number (None if not in scene)

  • "line_number" — one-based verse line number within the scene/act (None for prose/stage directions)

Examples

>>> from pathlib import Path
>>> reader = TEIReader(input_file=Path("hamlet_tei.xml"))
>>> docs = list(reader.get_documents())
>>> verse = [d for d in docs if d.section_type.value == "verse"]
>>> print(f"Verse lines: {len(verse)}")

Filter out stage directions:

>>> reader = TEIReader(
...     input_file=Path("hamlet_tei.xml"),
...     include_stage_directions=False,
... )
chunker: ChunkerBase | None = None#

Chunker to apply to each raw text block. None means each raw chunk is used as-is (one CorpusDocument per raw chunk).

classmethod create(*inputs, chunker=None, filter_=None, filename_override=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, **kwargs)[source]#

Instantiate the appropriate reader for one or more sources.

Accepts any mix of file paths, URL strings, and pathlib.Path objects — in any order. URL strings (those starting with http:// or https://) are automatically detected and routed to from_url; everything else is treated as a local file path and dispatched by extension via the registry.

Parameters:
*inputs : pathlib.Path or str

One or more source paths or URL strings. Pass a single value for the common case; pass multiple values to get a _MultiSourceReader that chains all their documents.

chunker : ChunkerBase or None, optional

Chunker injected into every reader. Default: None.

filter_ : FilterBase or None, optional

Filter injected into every reader. Default: None (DefaultFilter).

filename_override : str or None, optional

Override the source_file label. Only applied when inputs contains exactly one source. Default: None.

default_language : str or None, optional

ISO 639-1 language code applied to all sources. Default: None.

source_type : SourceType, list[SourceType or None], or None, optional

Semantic label for the source kind. When inputs has more than one element you may pass a list of the same length to assign a distinct type per source; None entries in the list mean “infer from extension / URL”. A single value is broadcast to all sources. Default: None.

source_title : str or None, optional

Title propagated into every yielded document. Default: None.

source_author : str or None, optional

Author propagated into every yielded document. Default: None.

source_date : str or None, optional

ISO 8601 publication date. Default: None.

collection_id : str or None, optional

Corpus collection identifier. Default: None.

doi : str or None, optional

Digital Object Identifier (file sources only). Default: None.

isbn : str or None, optional

ISBN (file sources only). Default: None.

**kwargs : Any

Extra keyword arguments forwarded verbatim to each concrete reader constructor (e.g. transcribe=True for AudioReader, backend="easyocr" for ImageReader).

Returns:
DocumentReader

A single reader when inputs has exactly one element (backward compatible with every existing call site). A _MultiSourceReader when inputs has more than one element — it implements the same get_documents() interface and chains documents from all sub-readers in order.

Raises:
ValueError

If inputs is empty, or if a source URL is invalid, or if no reader is registered for a file’s extension.

TypeError

If any element of inputs is not a str or pathlib.Path.


Return type:

Self

Notes

URL auto-detection: A str element is treated as a URL when it matches ^https?:// (case-insensitive). All other strings and all pathlib.Path objects are treated as local file paths. This means you no longer need to call from_url explicitly — just pass the URL string to create.
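The detection rule can be sketched as follows (an illustration of the documented behaviour, not the library's internal code):

```python
import re

# Documented rule: a str is a URL iff it matches ^https?:// (case-insensitive).
_URL_RE = re.compile(r"^https?://", re.IGNORECASE)

def is_url(source: str) -> bool:
    """Return True when the string should be routed to from_url."""
    return bool(_URL_RE.match(source))
```

Note that schemes other than http/https (e.g. ftp://) are treated as local file paths under this rule.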

Per-source source_type: When passing multiple inputs with different media types, supply a list:

DocumentReader.create(
    Path("podcast.mp3"),
    "report.pdf",
    "https://iris.who.int/.../content",  # returns image/jpeg
    source_type=[SourceType.PODCAST, SourceType.RESEARCH, SourceType.IMAGE],
)

Reader-specific kwargs are forwarded verbatim via **kwargs; see the **kwargs parameter above for per-reader examples.

Examples

Single file (backward-compatible):

>>> reader = DocumentReader.create(Path("hamlet.txt"))
>>> docs = list(reader.get_documents())

URL string auto-detected — no from_url() call required:

>>> reader = DocumentReader.create(
...     "https://en.wikipedia.org/wiki/Python_(programming_language)"
... )

Mixed multi-source batch:

>>> reader = DocumentReader.create(
...     Path("podcast.mp3"),
...     "report.pdf",
...     "https://iris.who.int/api/bitstreams/abc/content",
...     source_type=[SourceType.PODCAST, SourceType.RESEARCH, SourceType.IMAGE],
... )
>>> docs = list(reader.get_documents())  # chained stream from all three
default_language: str | None = None#

ISO 639-1 language code to assign when the source has no language info.

property file_name: str#

Effective filename used in document labels.

Returns filename_override when set; otherwise returns input_file.name.

Returns:
str

File name string (not a full path).

Examples

>>> from pathlib import Path
>>> reader = TextReader(input_file=Path("/data/corpus.txt"))
>>> reader.file_name
'corpus.txt'
file_type: ClassVar[str] = '.tei'#

Single file extension this reader handles (lowercase, including leading dot). E.g. ".txt", ".xml", ".zip".

For readers that handle multiple extensions, define file_types (plural) instead. Exactly one of file_type or file_types must be defined on every concrete subclass.

file_types: ClassVar[list[str] | None] = ['.tei']#

List of file extensions this reader handles (lowercase, leading dot). Use instead of file_type when a single reader class should be registered for several extensions — e.g. an image reader for [".png", ".jpg", ".jpeg", ".gif", ".webp"].

When both file_type and file_types are defined on the same class, file_types takes precedence and file_type is ignored.

filename_override: str | None = None#

Override for the source_file label in generated documents.

filter_: FilterBase | None = None#

Filter applied after chunking. None triggers the DefaultFilter.

classmethod from_manifest(manifest_path, *, chunker=None, filter_=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, encoding='utf-8', **kwargs)[source]#

Build a _MultiSourceReader from a manifest file.

The manifest is a text file with one source per line — either a file path or a URL. Blank lines and lines starting with # are ignored. JSON manifests (a list of strings or objects) are also supported.
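The text-manifest parsing rule can be sketched as (an illustration of the documented behaviour, not the library implementation):

```python
def parse_text_manifest(text: str) -> list[str]:
    """One source per line; blank lines and '#' comment lines are skipped."""
    sources = []
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if line and not line.startswith("#"):
            sources.append(line)
    return sources
```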

Parameters:
manifest_path : pathlib.Path or str

Path to the manifest file. Supported formats:

  • .txt / .manifest — one source per line.

  • .json — a JSON array of strings (sources) or objects with at least a "source" key (and optional "source_type", "source_title" per-entry overrides).

chunker : ChunkerBase or None, optional

Chunker applied to all sources. Default: None.

filter_ : FilterBase or None, optional

Filter applied to all sources. Default: None.

default_language : str or None, optional

ISO 639-1 language code. Default: None.

source_type : SourceType or None, optional

Override source type for all sources. Default: None.

source_title : str or None, optional

Override title for all sources. Default: None.

source_author : str or None, optional

Override author for all sources. Default: None.

source_date : str or None, optional

Override date for all sources. Default: None.

collection_id : str or None, optional

Collection identifier. Default: None.

doi : str or None, optional

DOI override. Default: None.

isbn : str or None, optional

ISBN override. Default: None.

encoding : str, optional

Text encoding for .txt manifests. Default: "utf-8".

**kwargs : Any

Forwarded to each reader constructor.

Returns:
_MultiSourceReader

Multi-source reader chaining all manifest entries.

Raises:
ValueError

If manifest_path does not exist or is empty after filtering blank and comment lines.

ValueError

If the manifest format is not recognised.


Return type:

_MultiSourceReader

Notes

Per-entry overrides in JSON manifests: each entry may be an object with:

{
    "source": "https://example.com/report.pdf",
    "source_type": "research",
    "source_title": "Annual Report 2024"
}

String-level source_type values are coerced via SourceType(value) and an invalid value raises ValueError.

Examples

Text manifest sources.txt:

# WHO corpus
https://www.who.int/europe/news/item/...
https://youtu.be/rwPISgZcYIk
WHO-EURO-2025.pdf
scan.jpg

Usage:

reader = DocumentReader.from_manifest(
    Path("sources.txt"),
    collection_id="who-corpus",
)
docs = list(reader.get_documents())
classmethod from_url(url, *, chunker=None, filter_=None, filename_override=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, **kwargs)[source]#

Instantiate the appropriate reader for a URL source.

Dispatches to YouTubeReader for YouTube URLs and to WebReader for all other http:// / https:// URLs.
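The dispatch decision can be sketched as follows (the exact YouTube host patterns here are an assumption for illustration, not taken from the source):

```python
from urllib.parse import urlparse

def pick_reader(url: str) -> str:
    """Return the reader class name the documented dispatch rule would select."""
    if not url.lower().startswith(("http://", "https://")):
        raise ValueError(f"not an http(s) URL: {url!r}")
    host = urlparse(url).netloc.lower()
    # Assumed YouTube host patterns; the library may match differently.
    if "youtube.com" in host or host == "youtu.be":
        return "YouTubeReader"
    return "WebReader"
```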

Parameters:
url : str

Full URL string. Must start with http:// or https://.

chunker : ChunkerBase or None, optional

Chunker to inject. Default: None.

filter_ : FilterBase or None, optional

Filter to inject. Default: None (DefaultFilter).

filename_override : str or None, optional

Override for the source_file label. Default: None.

default_language : str or None, optional

ISO 639-1 language code. Default: None.

source_type : SourceType or None, optional

Semantic label for the source. Default: None.

source_title : str or None, optional

Title of the source work. Default: None.

source_author : str or None, optional

Primary author. Default: None.

source_date : str or None, optional

Publication date in ISO 8601 format. Default: None.

collection_id : str or None, optional

Corpus collection identifier. Default: None.

doi : str or None, optional

Digital Object Identifier. Default: None.

isbn : str or None, optional

International Standard Book Number. Default: None.

**kwargs : Any

Additional kwargs forwarded to the reader constructor (e.g. include_auto_generated=False for YouTubeReader).

Returns:
DocumentReader

YouTubeReader or WebReader instance.

Raises:
ValueError

If url does not start with http:// or https://.

ImportError

If the required reader class is not registered (i.e. scikitplot.corpus._readers has not been imported yet).


Return type:

Self

Notes

Prefer create for new code. Passing a URL string to create automatically calls from_url — you rarely need to call from_url directly.

Examples

>>> reader = DocumentReader.from_url("https://en.wikipedia.org/wiki/Python")
>>> docs = list(reader.get_documents())
>>> yt = DocumentReader.from_url("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
>>> docs = list(yt.get_documents())
get_documents()[source]#

Yield validated CorpusDocument instances for the input file.

Orchestrates the full per-file pipeline:

  1. validate_input — fail fast if file is missing.

  2. get_raw_chunks — format-specific text extraction.

  3. Chunker (if set) — sub-segments each raw block.

  4. CorpusDocument construction with validated schema.

  5. Filter — discards noise documents.

Yields:
CorpusDocument

Validated documents that passed the filter.

Raises:
ValueError

If the input file is missing or the format is invalid.

Return type:

Generator[CorpusDocument, None, None]

Notes

The global chunk_index counter is monotonically increasing across all raw chunks and sub-chunks for a single file, ensuring that (source_file, chunk_index) is a unique key within one reader run.
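The uniqueness guarantee can be illustrated with a single counter shared across all raw blocks and their sub-chunks (toy data and a zero-based starting value for illustration only):

```python
from itertools import count

# Each inner list stands for the sub-chunks a chunker produced from one raw block.
sub_chunks_per_block = [["To be,", "or not to be"], ["Ay, there's the rub"]]

counter = count()  # one monotonic counter per file
indexed = [
    (text, next(counter))
    for block in sub_chunks_per_block
    for text in block
]
```

Because the counter never resets between raw blocks, every sub-chunk gets a distinct index within the file.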

Omitted-document statistics are logged at INFO level after processing each file.

Examples

>>> from pathlib import Path
>>> reader = DocumentReader.create(Path("corpus.txt"))
>>> docs = list(reader.get_documents())
>>> all(isinstance(d, CorpusDocument) for d in docs)
True
get_raw_chunks()[source]#

Parse the TEI XML file and yield one raw chunk per logical text unit.

Yields:
dict

Keys:

  • "text" — Text content of the chunk (whitespace-normalised).

  • "section_type" — One of "text", "dialogue", "stage_direction", "verse".

  • "act" — One-based act number, or None.

  • "scene_number" — One-based scene number within the act, or None.

  • "line_number" — One-based verse line number within scene, or None.

Raises:
ValueError

If the file exceeds max_file_bytes or is not valid XML.

Return type:

Generator[dict[str, Any], None, None]

include_speaker_tags: bool = True#

Prepend speaker name to dialogue chunks. Default: True.

include_stage_directions: bool = True#

Include stage directions as STAGE_DIRECTION chunks. Default: True.

input_file: Path[source]#

Path to the source file.

For URL-based readers (WebReader, YouTubeReader), pass pathlib.Path(url_string) here and set source_uri to the original URL string. validate_input() is overridden in those subclasses to skip the file-existence check.

max_file_bytes: int = 209715200#

Maximum file size in bytes. Default: 200 MB.

source_provenance: dict[str, Any][source]#

Provenance overrides propagated into every yielded CorpusDocument.

Keys may include "source_type", "source_title", "source_author", and "collection_id". Populated by create / from_url from their keyword arguments.

source_uri: str | None = None#

Original URI for URL-based readers (web pages, YouTube videos).

Set this to the full URL string when input_file is a synthetic pathlib.Path wrapping a URL. File-based readers leave this None.

Examples

>>> reader = WebReader(
...     input_file=Path("https://example.com/article"),
...     source_uri="https://example.com/article",
... )
classmethod subclass_by_type()[source]#

Return a copy of the extension → reader class registry.

Returns:
dict

Mapping of file extension (str) → reader class. Returns a shallow copy so callers cannot accidentally mutate the registry.

Return type:

dict[str, type[DocumentReader]]

Examples

>>> registry = DocumentReader.subclass_by_type()
>>> ".txt" in registry
True
classmethod supported_types()[source]#

Return a sorted list of file extensions supported by registered readers.

Returns:
list of str

Lowercase file extensions, each including the leading dot. E.g. ['.pdf', '.txt', '.xml', '.zip'].

Return type:

list[str]

Examples

>>> DocumentReader.supported_types()
['.pdf', '.txt', '.xml', '.zip']
validate_input()[source]#

Assert that the input file exists, is readable, and has a .xml suffix.

Raises:
ValueError

If the file does not exist, is not a regular file, or has a suffix other than .xml.

Return type:

None