YouTubeReader#

class scikitplot.corpus.YouTubeReader(input_file, chunker=None, filter_=None, filename_override=None, default_language=None, source_uri=None, source_provenance=<factory>, preferred_language=None, include_auto_generated=True, merge_short_cues=True, min_cue_chars=20)[source]#

Extract the transcript of a YouTube video using youtube-transcript-api.

Tries to retrieve manually-created captions first (preferred — often higher quality), then falls back to auto-generated captions. No audio is downloaded; no local model is required.

Parameters:
input_file : pathlib.Path

Wrap the YouTube URL as pathlib.Path(url) when constructing directly; from_url is the canonical construction path.

source_uri : str or None, optional

The original YouTube URL string. Set automatically by from_url().

preferred_language : str or None, optional

ISO 639-1 language code for the preferred transcript (e.g. "en", "de"). When None, the API returns the first available track. Default: None.

include_auto_generated : bool, optional

When True (default), accept auto-generated captions as a fallback if no manual caption track is found.

merge_short_cues : bool, optional

When True, consecutive cues shorter than min_cue_chars are merged with the following cue. This prevents very short subtitle segments (e.g. "[Music]") from becoming separate documents. Default: True.

min_cue_chars : int, optional

Minimum character count per cue when merge_short_cues=True. Default: 20.

chunker : ChunkerBase or None, optional

Inherited from DocumentReader.

filter_ : FilterBase or None, optional

Inherited from DocumentReader.

default_language : str or None, optional

ISO 639-1 language code for output documents. If None, the transcript language detected by the API is used. Default: None.

filename_override : str or None, optional

Override for the source_file label in generated documents. Default: None.

source_provenance : dict[str, Any], optional

Provenance overrides propagated into every yielded CorpusDocument. Populated by create / from_url from their keyword arguments. Defaults to an empty dict.

Attributes:
file_type : str

Class variable. Registry key ":youtube".

file_types : list of str

Class variable. Registered extensions: [":youtube"].

Raises:
ImportError

If youtube-transcript-api is not installed.

ValueError

If the video ID cannot be extracted from the URL.

RuntimeError

If no transcript is available for the video (private, disabled, etc.).


See also

scikitplot.corpus._readers.WebReader

General web page reader.

scikitplot.corpus._readers.VideoReader

Local video file reader.

Notes

Privacy: youtube-transcript-api fetches transcripts by making HTTP requests to YouTube’s timedtext API endpoint. It does not log in and does not require a Google API key for public videos. Private videos and videos with disabled captions will raise an error.

Quota: YouTube’s timedtext endpoint is rate-limited. For bulk ingestion, add delays between calls.
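The merge_short_cues / min_cue_chars behaviour can be sketched as a standalone function. This is a hypothetical helper, not the reader's actual implementation; the cue dicts mirror the text/start/duration shape returned by youtube-transcript-api:

```python
def merge_short_cues(cues, min_cue_chars=20):
    """Merge consecutive cues shorter than min_cue_chars with the following cue.

    Each cue is a dict with "text", "start", and "duration" keys. Returns
    merged chunks carrying the start of the first cue and the end of the last.
    """
    merged = []
    buffer_text = []
    buffer_start = None
    for cue in cues:
        if buffer_start is None:
            buffer_start = cue["start"]
        buffer_text.append(cue["text"])
        combined = " ".join(buffer_text)
        if len(combined) >= min_cue_chars:
            merged.append({
                "text": combined,
                "start": buffer_start,
                "end": cue["start"] + cue["duration"],
            })
            buffer_text = []
            buffer_start = None
    if buffer_text:  # flush a trailing run of short cues
        merged.append({
            "text": " ".join(buffer_text),
            "start": buffer_start,
            "end": cues[-1]["start"] + cues[-1]["duration"],
        })
    return merged
```

With the default min_cue_chars=20, a lone "[Music]" cue is folded into the following cue rather than becoming its own document.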

Examples

Via factory (recommended):

>>> from scikitplot.corpus._base import DocumentReader
>>> import scikitplot.corpus._readers
>>> reader = DocumentReader.from_url(
...     "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
...     default_language="en",
... )
>>> docs = list(reader.get_documents())
>>> print(f"Transcript cues: {len(docs)}")
chunker: ChunkerBase | None = None#

Chunker to apply to each raw text block. None means each raw chunk is used as-is (one CorpusDocument per raw chunk).

classmethod create(*inputs, chunker=None, filter_=None, filename_override=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, **kwargs)[source]#

Instantiate the appropriate reader for one or more sources.

Accepts any mix of file paths, URL strings, and pathlib.Path objects — in any order. URL strings (those starting with http:// or https://) are automatically detected and routed to from_url; everything else is treated as a local file path and dispatched by extension via the registry.

Parameters:
*inputs : pathlib.Path or str

One or more source paths or URL strings. Pass a single value for the common case; pass multiple values to get a _MultiSourceReader that chains all their documents.

chunker : ChunkerBase or None, optional

Chunker injected into every reader. Default: None.

filter_ : FilterBase or None, optional

Filter injected into every reader. Default: None (DefaultFilter).

filename_override : str or None, optional

Override the source_file label. Only applied when inputs contains exactly one source. Default: None.

default_language : str or None, optional

ISO 639-1 language code applied to all sources. Default: None.

source_type : SourceType, list[SourceType or None], or None, optional

Semantic label for the source kind. When inputs has more than one element you may pass a list of the same length to assign a distinct type per source; None entries in the list mean “infer from extension / URL”. A single value is broadcast to all sources. Default: None.

source_title : str or None, optional

Title propagated into every yielded document. Default: None.

source_author : str or None, optional

Author propagated into every yielded document. Default: None.

source_date : str or None, optional

ISO 8601 publication date. Default: None.

collection_id : str or None, optional

Corpus collection identifier. Default: None.

doi : str or None, optional

Digital Object Identifier (file sources only). Default: None.

isbn : str or None, optional

ISBN (file sources only). Default: None.

**kwargs : Any

Extra keyword arguments forwarded verbatim to each concrete reader constructor (e.g. transcribe=True for AudioReader, backend="easyocr" for ImageReader).

Returns:
DocumentReader

A single reader when inputs has exactly one element (backward compatible with every existing call site). A _MultiSourceReader when inputs has more than one element — it implements the same get_documents() interface and chains documents from all sub-readers in order.

Raises:
ValueError

If inputs is empty, or if a source URL is invalid, or if no reader is registered for a file’s extension.

TypeError

If any element of inputs is not a str or pathlib.Path.


Return type:

Self

Notes

URL auto-detection: A str element is treated as a URL when it matches ^https?:// (case-insensitive). All other strings and all pathlib.Path objects are treated as local file paths. This means you no longer need to call from_url explicitly — just pass the URL string to create.

Per-source source_type: When passing multiple inputs with different media types, supply a list:

DocumentReader.create(
    Path("podcast.mp3"),
    "report.pdf",
    "https://iris.who.int/.../content",  # returns image/jpeg
    source_type=[SourceType.PODCAST, SourceType.RESEARCH, SourceType.IMAGE],
)

Reader-specific keyword arguments are forwarded verbatim via **kwargs to each concrete reader constructor (see the parameter description above).

Examples

Single file (backward-compatible):

>>> reader = DocumentReader.create(Path("hamlet.txt"))
>>> docs = list(reader.get_documents())

URL string auto-detected — no from_url() call required:

>>> reader = DocumentReader.create(
...     "https://en.wikipedia.org/wiki/Python_(programming_language)"
... )

Mixed multi-source batch:

>>> reader = DocumentReader.create(
...     Path("podcast.mp3"),
...     "report.pdf",
...     "https://iris.who.int/api/bitstreams/abc/content",
...     source_type=[SourceType.PODCAST, SourceType.RESEARCH, SourceType.IMAGE],
... )
>>> docs = list(reader.get_documents())  # chained stream from all three
default_language: str | None = None#

ISO 639-1 language code to assign when the source has no language info.

property file_name: str#

Return the URL as the effective file name.

file_type: ClassVar[str] = ':youtube'#

Single file extension this reader handles (lowercase, including leading dot). E.g. ".txt", ".xml", ".zip".

For readers that handle multiple extensions, define file_types (plural) instead. Exactly one of file_type or file_types must be defined on every concrete subclass.

file_types: ClassVar[list[str] | None] = [':youtube']#

List of file extensions this reader handles (lowercase, leading dot). Use instead of file_type when a single reader class should be registered for several extensions — e.g. an image reader for [".png", ".jpg", ".jpeg", ".gif", ".webp"].

When both file_type and file_types are defined on the same class, file_types takes precedence and file_type is ignored.

filename_override: str | None = None#

Override for the source_file label in generated documents.

filter_: FilterBase | None = None#

Filter applied after chunking. None triggers the DefaultFilter.

classmethod from_manifest(manifest_path, *, chunker=None, filter_=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, encoding='utf-8', **kwargs)[source]#

Build a _MultiSourceReader from a manifest file.

The manifest is a text file with one source per line — either a file path or a URL. Blank lines and lines starting with # are ignored. JSON manifests (a list of strings or objects) are also supported.

Parameters:
manifest_path : pathlib.Path or str

Path to the manifest file. Supported formats:

  • .txt / .manifest — one source per line.

  • .json — a JSON array of strings (sources) or objects with at least a "source" key (and optional "source_type", "source_title" per-entry overrides).

chunker : ChunkerBase or None, optional

Chunker applied to all sources. Default: None.

filter_ : FilterBase or None, optional

Filter applied to all sources. Default: None.

default_language : str or None, optional

ISO 639-1 language code. Default: None.

source_type : SourceType or None, optional

Override source type for all sources. Default: None.

source_title : str or None, optional

Override title for all sources. Default: None.

source_author : str or None, optional

Override author for all sources. Default: None.

source_date : str or None, optional

Override date for all sources. Default: None.

collection_id : str or None, optional

Collection identifier. Default: None.

doi : str or None, optional

DOI override. Default: None.

isbn : str or None, optional

ISBN override. Default: None.

encoding : str, optional

Text encoding for .txt manifests. Default: "utf-8".

**kwargs : Any

Forwarded to each reader constructor.

Returns:
_MultiSourceReader

Multi-source reader chaining all manifest entries.

Raises:
ValueError

If manifest_path does not exist or is empty after filtering blank and comment lines.

ValueError

If the manifest format is not recognised.


Return type:

_MultiSourceReader

Notes

Per-entry overrides in JSON manifests: each entry may be an object with:

{
    "source": "https://example.com/report.pdf",
    "source_type": "research",
    "source_title": "Annual Report 2024",
}

String source_type values are coerced via SourceType(value); an invalid value raises ValueError.

Examples

Text manifest sources.txt:

# WHO corpus
https://www.who.int/europe/news/item/...
https://youtu.be/rwPISgZcYIk
WHO-EURO-2025.pdf
scan.jpg

Usage:

reader = DocumentReader.from_manifest(
    Path("sources.txt"),
    collection_id="who-corpus",
)
docs = list(reader.get_documents())
classmethod from_url(url, *, chunker=None, filter_=None, filename_override=None, default_language=None, source_type=None, source_title=None, source_author=None, source_date=None, collection_id=None, doi=None, isbn=None, **kwargs)[source]#

Instantiate the appropriate reader for a URL source.

Dispatches to YouTubeReader for YouTube URLs and to WebReader for all other http:// / https:// URLs.

Parameters:
url : str

Full URL string. Must start with http:// or https://.

chunker : ChunkerBase or None, optional

Chunker to inject. Default: None.

filter_ : FilterBase or None, optional

Filter to inject. Default: None (DefaultFilter).

filename_override : str or None, optional

Override for the source_file label. Default: None.

default_language : str or None, optional

ISO 639-1 language code. Default: None.

source_type : SourceType or None, optional

Semantic label for the source. Default: None.

source_title : str or None, optional

Title of the source work. Default: None.

source_author : str or None, optional

Primary author. Default: None.

source_date : str or None, optional

Publication date in ISO 8601 format. Default: None.

collection_id : str or None, optional

Corpus collection identifier. Default: None.

doi : str or None, optional

Digital Object Identifier. Default: None.

isbn : str or None, optional

International Standard Book Number. Default: None.

**kwargs : Any

Additional kwargs forwarded to the reader constructor (e.g. include_auto_generated=False for YouTubeReader).

Returns:
DocumentReader

YouTubeReader or WebReader instance.

Raises:
ValueError

If url does not start with http:// or https://.

ImportError

If the required reader class is not registered (i.e. scikitplot.corpus._readers has not been imported yet).


Return type:

Self

Notes

Prefer create for new code. Passing a URL string to create automatically calls from_url — you rarely need to call from_url directly.

Examples

>>> reader = DocumentReader.from_url("https://en.wikipedia.org/wiki/Python")
>>> docs = list(reader.get_documents())
>>> yt = DocumentReader.from_url("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
>>> docs = list(yt.get_documents())
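The YouTube-vs-web dispatch hinges on recognising a video ID in the URL. A sketch of how common YouTube URL shapes map to an 11-character video ID (illustrative helper only; the library's actual matching may accept more or fewer variants):

```python
from urllib.parse import urlparse, parse_qs


def extract_video_id(url: str) -> str:
    """Extract the video ID from watch, youtu.be, embed, and shorts URLs."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")
    if host == "youtu.be":
        return parsed.path.lstrip("/")
    if host in ("youtube.com", "m.youtube.com"):
        if parsed.path == "/watch":
            return parse_qs(parsed.query)["v"][0]
        if parsed.path.startswith(("/embed/", "/shorts/")):
            return parsed.path.split("/")[2]
    raise ValueError(f"Cannot extract a YouTube video ID from {url!r}")
```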
get_documents()[source]#

Yield validated CorpusDocument instances for the input file.

Orchestrates the full per-file pipeline:

  1. validate_input — fail fast if file is missing.

  2. get_raw_chunks — format-specific text extraction.

  3. Chunker (if set) — sub-segments each raw block.

  4. CorpusDocument construction with validated schema.

  5. Filter — discards noise documents.

Yields:
CorpusDocument

Validated documents that passed the filter.

Raises:
ValueError

If the input file is missing or the format is invalid.

Return type:

Generator[CorpusDocument, None, None]

Notes

The global chunk_index counter is monotonically increasing across all raw chunks and sub-chunks for a single file, ensuring that (source_file, chunk_index) is a unique key within one reader run.

Omitted-document statistics are logged at INFO level after processing each file.
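The five pipeline steps can be sketched as a generator, with plain dicts standing in for CorpusDocument. This is a simplified illustration of the documented control flow, not the method's actual code:

```python
def run_pipeline(raw_chunks, chunker=None, filter_=None):
    """Steps 2-5: extract, optionally sub-chunk, build documents, filter.

    chunk_index increases monotonically across all raw chunks and
    sub-chunks, including documents later discarded by the filter, so
    (source_file, chunk_index) stays unique within one reader run.
    """
    chunk_index = 0
    for raw in raw_chunks:
        # Step 3: the chunker (if set) sub-segments each raw block.
        pieces = chunker(raw["text"]) if chunker else [raw["text"]]
        for text in pieces:
            # Step 4: build the document, carrying over raw-chunk metadata.
            doc = {"text": text, "chunk_index": chunk_index,
                   **{k: v for k, v in raw.items() if k != "text"}}
            chunk_index += 1
            # Step 5: the filter discards noise documents.
            if filter_ is None or filter_(doc):
                yield doc
```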

Examples

>>> from pathlib import Path
>>> reader = DocumentReader.create(Path("corpus.txt"))
>>> docs = list(reader.get_documents())
>>> all(isinstance(d, CorpusDocument) for d in docs)
True
get_raw_chunks()[source]#

Fetch the YouTube transcript and yield one chunk per cue.

Yields:
dict

Keys:

"text"

Caption cue text.

"section_type"

Always SectionType.TEXT.

"timecode_start"

Cue start time in seconds (float); promoted to CorpusDocument.timecode_start.

"timecode_end"

Cue end time in seconds (= start + duration); promoted to CorpusDocument.timecode_end.

"source_type"

Always SourceType.VIDEO; promoted to CorpusDocument.source_type.

"transcript_type"

"manual" or "auto_generated"; non-promoted, goes to metadata.

"video_id"

YouTube video ID string.

"transcript_language"

Language code of the retrieved transcript.

Raises:
ImportError

If youtube-transcript-api is not installed.

ValueError

If the video ID cannot be extracted from the URL.

RuntimeError

If no transcript is available.

Return type:

Generator[dict[str, Any], None, None]
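The mapping from one transcript cue to the chunk dict documented above can be sketched as (hypothetical helper; youtube-transcript-api cues carry "text", "start", and "duration" keys, and timecode_end is start plus duration):

```python
def cue_to_chunk(cue, video_id, language, transcript_type):
    """Map one caption cue to the documented chunk dict shape."""
    return {
        "text": cue["text"],
        "section_type": "text",                        # always SectionType.TEXT
        "timecode_start": cue["start"],
        "timecode_end": cue["start"] + cue["duration"],
        "source_type": "video",                        # always SourceType.VIDEO
        "transcript_type": transcript_type,            # "manual" or "auto_generated"
        "video_id": video_id,
        "transcript_language": language,
    }
```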

include_auto_generated: bool = True#

Accept auto-generated captions as fallback.

input_file: Path[source]#

Path to the source file.

For URL-based readers (WebReader, YouTubeReader), pass pathlib.Path(url_string) here and set source_uri to the original URL string. validate_input() is overridden in those subclasses to skip the file-existence check.

merge_short_cues: bool = True#

Merge consecutive short cues into longer chunks.

min_cue_chars: int = 20#

Minimum cue length for merge_short_cues.

preferred_language: str | None = None#

ISO 639-1 language code for the preferred transcript track.

source_provenance: dict[str, Any][source]#

Provenance overrides propagated into every yielded CorpusDocument.

Keys may include "source_type", "source_title", "source_author", and "collection_id". Populated by create / from_url from their keyword arguments.

source_uri: str | None = None#

Original URI for URL-based readers (web pages, YouTube videos).

Set this to the full URL string when input_file is a synthetic pathlib.Path wrapping a URL. File-based readers leave this None.

Examples

>>> reader = WebReader(
...     input_file=Path("https://example.com/article"),
...     source_uri="https://example.com/article",
... )
classmethod subclass_by_type()[source]#

Return a copy of the extension → reader class registry.

Returns:
dict

Mapping of file extension (str) → reader class. Returns a shallow copy so callers cannot accidentally mutate the registry.

Return type:

dict[str, type[DocumentReader]]

Examples

>>> registry = DocumentReader.subclass_by_type()
>>> ".txt" in registry
True
classmethod supported_types()[source]#

Return a sorted list of file extensions supported by registered readers.

Returns:
list of str

Lowercase file extensions, each including the leading dot. E.g. ['.pdf', '.txt', '.xml', '.zip'].

Return type:

list[str]

Examples

>>> DocumentReader.supported_types()
['.pdf', '.txt', '.xml', '.zip']
validate_input()[source]#

Validate that the source URI is a recognisable YouTube URL.

Raises:
ValueError

If the URL is not a YouTube URL or the video ID cannot be extracted.

Return type:

None