optuna.integration.lightgbm.LightGBMTunerCV

class optuna.integration.lightgbm.LightGBMTunerCV(params: Dict[str, Any], train_set: lgb.Dataset, num_boost_round: int = 1000, folds: Union[Generator[Tuple[int, int], None, None], Iterator[Tuple[int, int]], BaseCrossValidator, None] = None, nfold: int = 5, stratified: bool = True, shuffle: bool = True, fobj: Optional[Callable[[…], Any]] = None, feval: Optional[Callable[[…], Any]] = None, feature_name: str = 'auto', categorical_feature: str = 'auto', early_stopping_rounds: Optional[int] = None, fpreproc: Optional[Callable[[…], Any]] = None, verbose_eval: Union[bool, int, None] = True, show_stdv: bool = True, seed: int = 0, callbacks: Optional[List[Callable[[…], Any]]] = None, time_budget: Optional[int] = None, sample_size: Optional[int] = None, study: Optional[optuna.study.Study] = None, optuna_callbacks: Optional[List[Callable[[optuna.study.Study, optuna.trial._frozen.FrozenTrial], None]]] = None, verbosity: Optional[int] = None, show_progress_bar: bool = True)[source]

Hyperparameter tuner for LightGBM with cross-validation.

It employs the same stepwise approach as LightGBMTuner, but LightGBMTunerCV invokes lightgbm.cv() to train and validate boosters, whereas LightGBMTuner invokes lightgbm.train(). See a simple example that optimizes the validation log loss of cancer detection.

Arguments and keyword arguments for lightgbm.cv() can be passed, except metrics, init_model, and eval_train_metric. Arguments specific to LightGBMTunerCV are listed below:

Parameters
  • time_budget – A time budget for parameter tuning in seconds.

  • study – A Study instance to store optimization results. The Trial instances in it have the following user attributes: elapsed_secs is the elapsed time since the optimization started; average_iteration_time is the average time per iteration to train the booster model in the trial; lgbm_params is a JSON-serialized dictionary of the LightGBM parameters used in the trial.

  • optuna_callbacks – List of Optuna callback functions that are invoked at the end of each trial. Each function must accept two parameters with the following types, in this order: Study and FrozenTrial. Please note that this is not the callbacks argument of lightgbm.train().

  • verbosity

    A verbosity level to change Optuna’s logging level. The level is aligned to LightGBM’s verbosity.

    Warning

    Deprecated in v2.0.0. The verbosity argument will be removed in the future. The removal of this feature is currently scheduled for v4.0.0, but this schedule is subject to change.

    Please use set_verbosity() instead.

  • show_progress_bar

    Flag to show progress bars or not. To disable the progress bar, set this to False.

    Note

    Progress bars will be fragmented by logging messages of LightGBM and Optuna. Please suppress such messages to show the progress bars properly.
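As an illustration of the optuna_callbacks signature described above, the hedged sketch below defines a callback that records each finished trial's elapsed_secs user attribute. The attribute name comes from the study description above; the SimpleNamespace object is a stand-in for a real FrozenTrial so the callback can be exercised without running a study:

```python
from types import SimpleNamespace

# Trials finished so far, recorded as (number, value, elapsed_secs).
history = []

def record_trial(study, trial):
    """Optuna callback: invoked as callback(study, trial) at the end of
    each trial, receiving a Study and a FrozenTrial, in that order."""
    history.append(
        (trial.number, trial.value, trial.user_attrs.get("elapsed_secs"))
    )

# It would be passed to the tuner as:
#   LightGBMTunerCV(params, dtrain, optuna_callbacks=[record_trial])
# A stand-in trial lets us exercise the callback directly.
fake_trial = SimpleNamespace(number=0, value=0.12,
                             user_attrs={"elapsed_secs": 3.4})
record_trial(study=None, trial=fake_trial)
print(history)  # [(0, 0.12, 3.4)]
```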

__init__(params: Dict[str, Any], train_set: lgb.Dataset, num_boost_round: int = 1000, folds: Union[Generator[Tuple[int, int], None, None], Iterator[Tuple[int, int]], BaseCrossValidator, None] = None, nfold: int = 5, stratified: bool = True, shuffle: bool = True, fobj: Optional[Callable[[…], Any]] = None, feval: Optional[Callable[[…], Any]] = None, feature_name: str = 'auto', categorical_feature: str = 'auto', early_stopping_rounds: Optional[int] = None, fpreproc: Optional[Callable[[…], Any]] = None, verbose_eval: Union[bool, int, None] = True, show_stdv: bool = True, seed: int = 0, callbacks: Optional[List[Callable[[…], Any]]] = None, time_budget: Optional[int] = None, sample_size: Optional[int] = None, study: Optional[optuna.study.Study] = None, optuna_callbacks: Optional[List[Callable[[optuna.study.Study, optuna.trial._frozen.FrozenTrial], None]]] = None, verbosity: Optional[int] = None, show_progress_bar: bool = True) → None[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(params, train_set[, …])

Initialize self.

compare_validation_metrics(val_score, best_score)

higher_is_better()

run()

Perform the hyperparameter tuning with the given parameters.

sample_train_set()

Make a subset of the self.train_set Dataset object.

tune_bagging([n_trials])

tune_feature_fraction([n_trials])

tune_feature_fraction_stage2([n_trials])

tune_min_data_in_leaf()

tune_num_leaves([n_trials])

tune_regularization_factors([n_trials])
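The sample_train_set() step listed above trains on a random subset of the rows when the tuner's sample_size argument is set. The following is a hedged, conceptual sketch of that idea in numpy; sample_rows is a hypothetical helper, not the actual implementation:

```python
import numpy as np

def sample_rows(features, sample_size, seed=0):
    """Conceptual sketch (hypothetical helper): pick `sample_size` rows
    at random, as sample_train_set() does with the training Dataset
    when the tuner's sample_size argument is set."""
    rng = np.random.RandomState(seed)
    n_rows = features.shape[0]
    if sample_size is None or sample_size >= n_rows:
        return features  # nothing to subsample
    idx = rng.choice(n_rows, size=sample_size, replace=False)
    return features[idx]

X = np.arange(20).reshape(10, 2)
subset = sample_rows(X, sample_size=4)
print(subset.shape)  # (4, 2)
```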

Attributes

best_params

Return parameters of the best booster.

best_score

Return the score of the best booster.

property best_params

Return parameters of the best booster.

property best_score

Return the score of the best booster.

run() → None

Perform the hyperparameter tuning with the given parameters.

sample_train_set() → None

Make a subset of the self.train_set Dataset object.