DummyCodeEncoder#

class scikitplot.preprocessing.DummyCodeEncoder(*, columns=None, sep='|', regex=False, prefix=None, prefix_sep='_', dummy_na=False, categories='auto', drop=None, sparse_output=True, dtype=<class 'numpy.float64'>, handle_unknown='error', min_frequency=None, max_categories=None, feature_name_combiner='concat')[source]#

Encode categorical features into dummy/indicator 0/1 variables.

Each string in the Series is split by sep and returned as a DataFrame of dummy/indicator 0/1 variables.

Each variable is converted into as many 0/1 variables as there are distinct values. Columns in the output are each named after a value; if the input is a DataFrame, the name of the original variable is prepended to the value.

The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The features are encoded using a one-hot (aka ‘one-of-K’ or ‘dummy’) encoding scheme. This creates a binary column for each category and returns a sparse matrix or dense array (depending on the sparse_output parameter).

By default, the encoder derives the categories from the unique values in each feature. For features whose strings contain multiple labels separated by sep, the labels are split and expanded into one-hot encoded columns via pandas.get_dummies. Alternatively, you can specify the categories manually.

This encoding is needed for feeding categorical data to many scikit-learn estimators, notably linear models and SVMs with the standard kernels.

Compatible with sklearn pipelines and the set_output API; supports both dense and sparse output.

Read more in the User Guide. For a comparison of different encoders, refer to: Comparing Target Encoder with Other Encoders.

Parameters:
columnslist-like, default=None

Column names in the DataFrame to be encoded. If columns is None then all the columns with object, string, or category dtype will be converted.

Caution

This parameter is reserved for future use, has no effect in the current implementation, and its behavior or existence may change without warning in future versions.

sepcallable or str, default=’|’

Literal or regex separator to split each string on (e.g., “a,b,c”), or a callable that takes a string and returns a list of tokens:

  • sep=','

  • sep=r'\s*[,;|]\s*' (requires regex=True)

  • sep=lambda s: re.split(r'\s*[,;|]\s*', s.lower())

regexbool, default=False

Whether to interpret sep as a regular expression when splitting (e.g., splitting “a,b|C;” on any of “,”, “;”, or “|”):

  • sep=r'\s*[,;|]\s*'
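A minimal sketch of how the literal, regex, and callable separator forms tokenize a raw cell value, using re.split directly (the sample string and pattern are illustrative, not part of the encoder’s API):

```python
import re

raw = "a, b;C |d"

# Literal separator: only exact "," characters split the string.
literal_tokens = raw.split(",")

# Regex separator: split on ",", ";", or "|" with optional surrounding whitespace.
regex_tokens = re.split(r"\s*[,;|]\s*", raw)

# Callable separator: full control, e.g. normalize case before splitting.
split_fn = lambda s: re.split(r"\s*[,;|]\s*", s.lower())
callable_tokens = split_fn(raw)

print(literal_tokens)   # ['a', ' b;C |d']
print(regex_tokens)     # ['a', 'b', 'C', 'd']
print(callable_tokens)  # ['a', 'b', 'c', 'd']
```

The callable form is the most flexible: it can fold case, strip punctuation, or apply any normalization before tokens become dummy columns.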

prefixstr, list of str, or dict of str, default=None

String to prepend to DataFrame column names. Pass a list with length equal to the number of columns when calling get_dummies on a DataFrame. Alternatively, prefix can be a dictionary mapping column names to prefixes.

Caution

This parameter is reserved for future use, has no effect in the current implementation, and its behavior or existence may change without warning in future versions.

prefix_sepstr, default=’_’

If appending prefix, separator/delimiter to use. Or pass a list or dictionary as with prefix (e.g., “tags_a”).

Caution

This parameter is reserved for future use, has no effect in the current implementation, and its behavior or existence may change without warning in future versions.

dummy_nabool, default=False

Add a column to indicate NaNs; if False, NaNs are ignored.

Caution

When enabled for multi-feature encoding, only one feature may contain missing values. Because the combined set of categories must be unique, it is suggested to fill missing values with a dummy value instead of keeping one of the missing-value sentinels (e.g., None, np.nan, pd.NA, pd.NaT).

Caution

This parameter is reserved for future use, has no effect in the current implementation, and its behavior or existence may change without warning in future versions.

categories‘auto’ or a list of array-like, default=’auto’

Categories (unique values) per feature:

  • ‘auto’ : Determine categories automatically from the training data.

  • list : categories[i] holds the categories expected in the ith column. The passed categories should not mix strings and numeric values within a single feature, and should be sorted in case of numeric values.

The used categories can be found in the categories_ attribute.

drop{‘first’, ‘if_binary’} or an array-like of shape (n_features,), default=None

Specifies a methodology to use to drop one of the categories per feature. This is useful in situations where perfectly collinear features cause problems, such as when feeding the resulting data into an unregularized linear regression model.

However, dropping one category breaks the symmetry of the original representation and can therefore induce a bias in downstream models, for instance for penalized linear classification or regression models.

  • None : retain all features (the default).

  • ‘first’ : drop the first category in each feature. If only one category is present, the feature will be dropped entirely.

  • ‘if_binary’ : drop the first category in each feature with two categories. Features with 1 or more than 2 categories are left intact.

  • array : drop[i] is the category in feature X[:, i] that should be dropped.

When max_categories or min_frequency is configured to group infrequent categories, the dropping behavior is handled after the grouping.

Caution

This parameter is reserved for future use, has no effect in the current implementation, and its behavior or existence may change without warning in future versions.

sparse_outputbool, default=True

When True, it returns a scipy.sparse.csr_matrix, i.e. a sparse matrix in “Compressed Sparse Row” (CSR) format.

dtypenumber type, default=np.float64

Desired dtype of output.

handle_unknown{‘error’, ‘ignore’, ‘infrequent_if_exist’, ‘warn’}, default=’error’

Specifies the way unknown categories are handled during transform.

  • ‘error’ : Raise an error if an unknown category is present during transform.

  • ‘ignore’ : When an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature will be all zeros. In the inverse transform, an unknown category will be denoted as None.

  • ‘infrequent_if_exist’ : When an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature will map to the infrequent category if it exists. The infrequent category will be mapped to the last position in the encoding. During inverse transform, an unknown category will be mapped to the category denoted 'infrequent' if it exists. If the 'infrequent' category does not exist, then transform and inverse_transform will handle an unknown category as with handle_unknown='ignore'. Infrequent categories exist based on min_frequency and max_categories. Read more in the User Guide.

  • ‘warn’ : When an unknown category is encountered during transform a warning is issued, and the encoding then proceeds as described for handle_unknown="infrequent_if_exist".

Caution

This parameter is reserved for future use, has no effect in the current implementation, and its behavior or existence may change without warning in future versions.

min_frequencyint or float, default=None

Specifies the minimum frequency below which a category will be considered infrequent.

  • If int, categories with a smaller cardinality will be considered infrequent.

  • If float, categories with a smaller cardinality than min_frequency * n_samples will be considered infrequent.

Added in version 1.1: Read more in the User Guide.

Caution

This parameter is reserved for future use, has no effect in the current implementation, and its behavior or existence may change without warning in future versions.

max_categoriesint, default=None

Specifies an upper limit to the number of output features for each input feature when considering infrequent categories. If there are infrequent categories, max_categories includes the category representing the infrequent categories along with the frequent categories. If None, there is no limit to the number of output features.

Added in version 1.1: Read more in the User Guide.

Caution

This parameter is reserved for future use, has no effect in the current implementation, and its behavior or existence may change without warning in future versions.

feature_name_combiner“concat” or callable, default=”concat”

Callable with signature def callable(input_feature, category) that returns a string. This is used to create feature names to be returned by get_feature_names_out.

"concat" concatenates encoded feature name and category with feature + "_" + str(category).E.g. feature X with values 1, 6, 7 create feature names X_1, X_6, X_7.

Attributes:
categories_list of arrays

The categories of each feature determined during fitting (in order of the features in X and corresponding with the output of transform). This includes the category specified in drop (if any).

drop_idx_array of shape (n_features,)
  • drop_idx_[i] is the index in categories_[i] of the category to be dropped for each feature.

  • drop_idx_[i] = None if no category is to be dropped from the feature with index i, e.g. when drop='if_binary' and the feature isn’t binary.

  • drop_idx_ = None if all the transformed features will be retained.

If infrequent categories are enabled by setting min_frequency or max_categories to a non-default value and drop_idx_[i] corresponds to an infrequent category, then the entire infrequent category is dropped.

Changed in version 0.23: Added the possibility to contain None values.

infrequent_categories_list of ndarray

Infrequent categories for each feature.

n_features_in_int

Number of features seen during fit.

Added in version 1.0.

feature_names_in_ndarray of shape (n_features_in_,)

Names of features seen during fit. Defined only when X has feature names that are all strings.

Added in version 1.0.

feature_name_combinercallable or None

Callable with signature def callable(input_feature, category) that returns a string. This is used to create feature names to be returned by get_feature_names_out.

Added in version 1.3.

See also

GetDummies

Similar, but more limited: a pandas-based conversion to dummy codes.

pandas.Series.str.get_dummies

Convert Series of strings to dummy codes.

pandas.from_dummies

Convert dummy codes back to categorical DataFrame.

sklearn.preprocessing.OrdinalEncoder

Performs an ordinal (integer) encoding of the categorical features.

sklearn.preprocessing.TargetEncoder

Encodes categorical features using the target.

sklearn.feature_extraction.DictVectorizer

Performs a one-hot encoding of dictionary items (also handles string-valued features).

sklearn.feature_extraction.FeatureHasher

Performs an approximate one-hot encoding of dictionary items or strings.

sklearn.preprocessing.LabelBinarizer

Binarizes labels in a one-vs-all fashion.

sklearn.preprocessing.MultiLabelBinarizer

Transforms between iterable of iterables and a multilabel format, e.g. a (samples x classes) binary matrix indicating the presence of a class label.

Examples

Given a dataset with three features, we let the encoder find the unique values per feature and transform the data to a binary one-hot dummy encoding.

>>> import re
>>> import pandas as pd
>>> df = pd.DataFrame(
...     {
...         "tags": ["a,b,", " A , b", "a,B,C", None],
...         "color": ["red", "blue", "green", "Red"],
...         "value": [1, 2, 3, 4],
...     }
... )
>>> from sklearn.pipeline import Pipeline
>>> from scikitplot.preprocessing import DummyCodeEncoder
>>> pipe = Pipeline(
...     [
...         (
...             "encoder",
...             DummyCodeEncoder(
...                 # sep=',',
...                 # sep=r'\s*[,;|]\s*',
...                 sep=lambda s: re.split(r'\s*[,;|]\s*', s.lower()),
...                 regex=True,
...                 sparse_output=True,
...             ),
...         )
...     ]
... )
>>> X_trans = pipe.fit_transform(df)
>>> print(X_trans)
<Compressed Sparse Row sparse matrix of dtype 'float64'
    with 15 stored elements and shape (4, 10)>
>>> type(X_trans)
scipy.sparse._csr.csr_matrix
>>> from sklearn.pipeline import Pipeline
>>> from scikitplot.preprocessing import DummyCodeEncoder
>>> pipe = Pipeline(
...     [
...         (
...             "encoder",
...             DummyCodeEncoder(
...                 # sep=',',
...                 # sep=r'\s*[,;|]\s*',
...                 sep=lambda s: re.split(r'\s*[,;|]\s*', s.lower()),
...                 regex=True,
...                 sparse_output=False,
...             ),
...         )
...     ]
... ).set_output(transform='pandas')
>>> X_trans = pipe.fit_transform(df)
>>> print(X_trans)
   value  tags_a  tags_b  tags_c  color_blue  color_green  color_red  value_1  value_2  value_3  value_4
0      1     1.0     1.0     0.0         0.0          0.0        1.0      1.0      0.0      0.0      0.0
1      2     1.0     1.0     0.0         1.0          0.0        0.0      0.0      1.0      0.0      0.0
2      3     1.0     1.0     1.0         0.0          1.0        0.0      0.0      0.0      1.0      0.0
3      4     0.0     0.0     0.0         0.0          0.0        1.0      0.0      0.0      0.0      1.0
>>> type(X_trans)
pandas.core.frame.DataFrame
fit(X, y=None)[source]#

Fit the encoder to X.

Parameters:
Xarray-like of shape (n_samples, n_features)

The data to determine the categories of each feature.

yNone

Ignored. This parameter exists only for compatibility with Pipeline.

Returns:
self

Fitted encoder.

fit_transform(X, y=None, **fit_params)[source]#

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
Xarray-like of shape (n_samples, n_features)

Input samples.

yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_paramsdict

Additional fit parameters.

Returns:
X_newndarray array of shape (n_samples, n_features_new)

Transformed array.

get_feature_names_out(input_features=None)[source]#

Get output feature names for transformation.

Parameters:
input_featuresarray-like of str or None, default=None

Input features.

  • If input_features is None, then feature_names_in_ is used as feature names in. If feature_names_in_ is not defined, then the following input feature names are generated: ["x0", "x1", ..., "x(n_features_in_ - 1)"].

  • If input_features is an array-like, then input_features must match feature_names_in_ if feature_names_in_ is defined.

Returns:
feature_names_outndarray of str objects

Transformed feature names.

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routingMetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deepbool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
paramsdict

Parameter names mapped to their values.

property infrequent_categories_#

Infrequent categories for each feature.

inverse_transform(X)[source]#

Convert the data back to the original representation.

When unknown categories are encountered (all zeros in the one-hot encoding), None is used to represent this category. If the feature with the unknown category has a dropped category, the dropped category will be its inverse.

For a given input feature, if there is an infrequent category, ‘infrequent_sklearn’ will be used to represent the infrequent category.

Parameters:
X{array-like, sparse matrix} of shape (n_samples, n_encoded_features)

The transformed data.

Returns:
X_originalndarray of shape (n_samples, n_features)

Inverse transformed array.

set_output(*, transform=None)[source]#

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:
transform{“default”, “pandas”, “polars”}, default=None

Configure output of transform and fit_transform.

  • "default": Default output format of a transformer

  • "pandas": DataFrame output

  • "polars": Polars output

  • None: Transform configuration is unchanged

Added in version 1.4: "polars" option was added.

Returns:
selfestimator instance

Estimator instance.

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**paramsdict

Estimator parameters.

Returns:
selfestimator instance

Estimator instance.

transform(X)[source]#

Transform X using one-hot encoding.

If sparse_output=True (default), it returns an instance of scipy.sparse._csr.csr_matrix (CSR format).

If there are infrequent categories for a feature, set by specifying max_categories or min_frequency, the infrequent categories are grouped into a single category.

Parameters:
Xarray-like of shape (n_samples, n_features)

The data to encode.

Returns:
X_out{ndarray, sparse matrix} of shape (n_samples, n_encoded_features)

Transformed input. If sparse_output=True, a sparse matrix will be returned.