GetDummies#

class scikitplot.preprocessing.GetDummies(*, columns=None, sep=', ', col_name_sep='_', drop=None, sparse_output=False, dtype=<class 'numpy.float64'>, handle_unknown='error')[source]#

Multi-column, multi-label string one-hot encoder.

Custom transformer that expands string columns containing multiple labels separated by sep into one-hot encoded columns via pandas.get_dummies.

Compatible with scikit-learn pipelines and the set_output API; supports both dense and sparse output.

Parameters:
columns : str, list of str, or None, default=None

Column(s) to encode. If str, a single column. If list, multiple columns. If None, automatically detect object (string) columns containing sep.

sep : str, default=", "

String to split labels on (e.g., "a,b,c").

col_name_sep : str, default="_"

Separator for new dummy column names, e.g., "tags_a".

sparse_output : bool, default=False

If True, return a SciPy sparse CSR matrix. If False, return a pandas DataFrame (or a NumPy array if set_output(transform="default")).

handle_unknown : {"ignore", "error"}, default="error"

Strategy for unknown categories at transform time: "error" raises, "ignore" drops them.

drop : {"first", True, None}, default=None

Drop the first dummy in each feature (in sorted order) to avoid collinearity.

dtype : number type, default=np.float64

Data type for the output values (the scikit-learn default is float).
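The columns/sep mechanics can be pictured with plain pandas. The sketch below shows the operation this transformer wraps, assuming a "|" separator; the transformer's actual prefixes and normalization may differ:

```python
import pandas as pd

# A multi-label string column: labels separated by "|"
df = pd.DataFrame({"tags": ["a|b", "a", "b|c"]})

# pandas.Series.str.get_dummies splits on sep and one-hot encodes
dummies = df["tags"].str.get_dummies(sep="|").add_prefix("tags_")
out = pd.concat([df.drop(columns="tags"), dummies], axis=1)
print(out)  # columns: tags_a, tags_b, tags_c
```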

See also

DummyCodeEncoder

Extended variant that also supports converting the dummy codes to a scipy.sparse Compressed Sparse Row (CSR) matrix.

pandas.Series.str.get_dummies

Convert Series of strings to dummy codes.

pandas.from_dummies

Convert dummy codes back to categorical DataFrame.

sklearn.preprocessing.OneHotEncoder

General-purpose one-hot encoder.

sklearn.preprocessing.MultiLabelBinarizer

Multi-label binarizer for iterable of iterables.
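For comparison, the MultiLabelBinarizer route requires splitting the strings yourself first. A minimal sketch using scikit-learn's documented API:

```python
from sklearn.preprocessing import MultiLabelBinarizer

raw = ["a|b", "a", "b|c"]
labels = [s.split("|") for s in raw]  # [["a", "b"], ["a"], ["b", "c"]]

mlb = MultiLabelBinarizer()
binarized = mlb.fit_transform(labels)  # shape (3, 3), columns a, b, c
print(mlb.classes_)
print(binarized)
```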

Examples

>>> import pandas as pd
>>> df = pd.DataFrame(
...     {
...         "tags": ["a,b,", " A , b", "a,B,C", None],
...         "color": ["red", "blue", "green", "Red"],
...         "value": [1, 2, 3, 4],
...     }
... )
>>> from sklearn.pipeline import Pipeline
>>> from scikitplot.preprocessing import GetDummies
>>> pipe = Pipeline(
...     [
...         (
...             "encoder",
...             GetDummies(
...                 columns=["tags", "color"], drop=None, sparse_output=False
...             ),
...         )
...     ]
... )
>>> X_trans = pipe.fit_transform(df)
>>> print(X_trans)
   value  ta_a  ta_b  ta_c  co_blue  co_green  co_red
0      1   1.0   1.0   0.0      0.0       0.0     1.0
1      2   1.0   1.0   0.0      1.0       0.0     0.0
2      3   1.0   1.0   1.0      0.0       1.0     0.0
3      4   0.0   0.0   0.0      0.0       0.0     1.0
>>> type(X_trans)
<class 'pandas.core.frame.DataFrame'>
fit(X, y=None)[source]#

Learn dummy categories from training data.

Stores column order, prefixes, and categories for later alignment.

fit_transform(X, y=None, **fit_params)[source]#

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X : array-like of shape (n_samples, n_features)

Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params : dict

Additional fit parameters. Pass only if the estimator accepts additional params in its fit method.

Returns:
X_new : ndarray of shape (n_samples, n_features_new)

Transformed array.

get_feature_names_out(input_features=None)[source]#

Return feature names after transformation.

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

set_output(*, transform=None)[source]#

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:
transform : {"default", "pandas", "polars"}, default=None

Configure output of transform and fit_transform.

  • "default": Default output format of a transformer

  • "pandas": DataFrame output

  • "polars": Polars output

  • None: Transform configuration is unchanged

Added in version 1.4: "polars" option was added.

Returns:
self : estimator instance

Estimator instance.

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
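The nested <component>__<parameter> form is the usual scikit-learn mechanism; a sketch with a Pipeline and OneHotEncoder (used here so the snippet is self-contained):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

pipe = Pipeline([("encoder", OneHotEncoder())])

# Nested parameters use the <component>__<parameter> syntax
pipe.set_params(encoder__handle_unknown="ignore")
print(pipe.get_params()["encoder__handle_unknown"])  # "ignore"
```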

transform(X, y=None)[source]#

Transform new data into dummy-expanded format.

Steps:

  • Align columns with those learned during fit.

  • Drop unknown categories or raise an error, per handle_unknown.

  • Return dense (pandas/NumPy) or sparse output.
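The column-alignment step can be pictured with plain pandas: categories seen at fit time define the output columns, and unseen ones are dropped. This is a sketch of the "ignore" strategy, not the transformer's internals:

```python
import pandas as pd

fitted_cols = ["tags_a", "tags_b", "tags_c"]  # learned during fit

new = pd.Series(["a|d", "b"]).str.get_dummies(sep="|").add_prefix("tags_")
# "d" was never seen at fit time; reindex drops it and fills missing with 0
aligned = new.reindex(columns=fitted_cols, fill_value=0)
print(aligned)
```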