SentenceChunker

- class scikitplot.corpus.SentenceChunker(config=None)

  Split a document into sentence-level Chunk objects.

  - Parameters:
    - config : SentenceChunkerConfig, optional
      Chunker configuration. Defaults to
      SentenceChunkerConfig(REGEX backend, no overlap, min_length=10).
  Examples

  >>> chunker = SentenceChunker()
  >>> result = chunker.chunk("Hello world. How are you? Fine thanks.")
  >>> len(result.chunks)
  3
  >>> result.chunks[0].text
  'Hello world.'
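The default REGEX backend's behavior can be approximated with the standard library alone. The sketch below is illustrative, not the library's actual implementation: it assumes the backend splits after sentence-ending punctuation followed by whitespace, and that min_length is honored by merging too-short fragments into the preceding sentence (the exact min_length semantics are an assumption here).

```python
import re

def split_sentences(text: str, min_length: int = 10) -> list[str]:
    """Minimal sketch of regex sentence splitting (hypothetical backend)."""
    # Split after '.', '!' or '?' when followed by whitespace.
    parts = [p.strip() for p in re.split(r"(?<=[.!?])\s+", text.strip()) if p.strip()]
    merged: list[str] = []
    for part in parts:
        # Assumed min_length handling: glue short fragments onto the
        # previous sentence instead of emitting tiny chunks.
        if merged and len(part) < min_length:
            merged[-1] = merged[-1] + " " + part
        else:
            merged.append(part)
    return merged

sentences = split_sentences("Hello world. How are you? Fine thanks.")
# → ['Hello world.', 'How are you?', 'Fine thanks.']
```

With the doctest input above, all three sentences clear the 10-character floor, so the sketch reproduces the three chunks shown in the example.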
- chunk(text, doc_id=None, extra_metadata=None)
Split text into sentence-level chunks.
  - Parameters:
    - text : str
      Raw document text.
    - doc_id : str, optional
      Document identifier stored in chunk metadata.
    - extra_metadata : dict[str, Any], optional
      Additional key/value pairs merged into the result metadata.
  - Returns:
    - ChunkResult
      Chunks and aggregate metadata.
  - Raises:
    - TypeError
      If text is not a str.
    - ValueError
      If text is empty or whitespace-only.
  - Return type:
    ChunkResult
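The TypeError/ValueError contract documented for chunk() can be sketched as a small guard. The function name below is hypothetical; it only mirrors the documented checks, not scikitplot's internals.

```python
def validate_text(text) -> str:
    """Illustrative input guard matching chunk()'s documented errors."""
    if not isinstance(text, str):
        # Documented: TypeError if text is not a str.
        raise TypeError(f"text must be str, got {type(text).__name__}")
    if not text.strip():
        # Documented: ValueError if text is empty or whitespace-only.
        raise ValueError("text is empty or whitespace-only")
    return text
```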
- chunk_batch(texts, doc_ids=None, extra_metadata=None)
Chunk a list of documents.
  - Parameters:
    - texts : list[str]
      Input documents.
    - doc_ids : list[str], optional
      Parallel document identifiers.
    - extra_metadata : dict[str, Any], optional
      Shared metadata merged into every result.
  - Returns:
    - list[ChunkResult]
      One result per document.
  - Raises:
    - TypeError
      If texts is not a list.
    - ValueError
      If doc_ids length does not match texts length.
  - Return type:
    list[ChunkResult]
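The chunk_batch() semantics above (one result per document, doc_ids as an optional parallel list) can be sketched as follows. This is a stand-in, not the library's code: chunk_fn substitutes for SentenceChunker.chunk and here just pairs each document with its identifier.

```python
def chunk_batch(texts, doc_ids=None, chunk_fn=lambda text, doc_id: (doc_id, text)):
    """Illustrative batch wrapper matching the documented contract."""
    if not isinstance(texts, list):
        # Documented: TypeError if texts is not a list.
        raise TypeError("texts must be a list")
    if doc_ids is not None and len(doc_ids) != len(texts):
        # Documented: ValueError on length mismatch.
        raise ValueError("doc_ids length does not match texts length")
    ids = doc_ids if doc_ids is not None else [None] * len(texts)
    # One result per document, in input order.
    return [chunk_fn(text, doc_id) for text, doc_id in zip(texts, ids)]
```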