robustx.robustness_evaluations package

Submodules

robustx.robustness_evaluations.ApproximateDeltaRobustnessEvaluator module

class robustx.robustness_evaluations.ApproximateDeltaRobustnessEvaluator.ApproximateDeltaRobustnessEvaluator(ct, alpha=0.999, R=0.995)[source]

Bases: ModelChangesRobustnessEvaluator

A robustness evaluator that uses an Approximate Plausible Δ model shifts (APΔS) approach to evaluate the robustness of a model’s predictions when a delta perturbation is applied.

This class inherits from ModelChangesRobustnessEvaluator and uses a probabilistic approach to determine whether the model’s prediction remains stable under model perturbations.

task

The task to solve, inherited from ModelChangesRobustnessEvaluator.

Type:

Task

alpha

Confidence in the prediction.

Type:

float

R

Fraction of samples for which the predictions should remain stable.

Type:

float

evaluate(ce, desired_outcome=0, delta=0.5, bias_delta=0)[source]

Evaluates whether the model’s prediction for a given counterfactual remains stable under plausible model shifts.

@param ce: The counterfactual explanation (CE) to evaluate.

@param desired_outcome: The desired output for the model (0 or 1). The evaluation will check if the model’s output matches this.

@param delta: The maximum allowable perturbation in the model parameters.

@param bias_delta: Additional bias to apply to the delta changes.

@return: A boolean indicating whether the model’s prediction is robust given the desired outcome.
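As a minimal self-contained sketch of the APΔS idea (not the library implementation): for a linear scorer, draw enough uniformly sampled parameter shifts within ±delta so that, with confidence alpha, at least a fraction R of plausible shifted models preserve the prediction whenever every sampled model does. The function name, the linear model, and the sample-size bound n ≥ ln(1 − alpha) / ln(R) are assumptions for illustration.

```python
import math
import random

def apdelta_robust(weights, bias, ce, desired_outcome=1,
                   delta=0.05, bias_delta=0.0, alpha=0.999, R=0.995, seed=0):
    """Illustrative APDeltaS-style check for a linear model w.x + b >= 0."""
    # Sample-complexity bound: n >= ln(1 - alpha) / ln(R)
    n = math.ceil(math.log(1 - alpha) / math.log(R))
    rng = random.Random(seed)
    for _ in range(n):
        # Sample one plausible model shift within +/-delta per weight
        w = [wi + rng.uniform(-delta, delta) for wi in weights]
        b = bias + (rng.uniform(-bias_delta, bias_delta) if bias_delta else 0.0)
        pred = 1 if sum(wi * xi for wi, xi in zip(w, ce)) + b >= 0 else 0
        if pred != desired_outcome:
            return False  # a sampled shifted model flips the prediction
    return True
```

A CE far from the decision boundary passes; one sitting on the boundary is rejected as soon as any sampled shift flips it.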

robustx.robustness_evaluations.DeltaRobustnessEvaluator module

class robustx.robustness_evaluations.DeltaRobustnessEvaluator.DeltaRobustnessEvaluator(ct)[source]

Bases: ModelChangesRobustnessEvaluator

A robustness evaluator that uses a Mixed-Integer Linear Programming (MILP) approach to evaluate the robustness of a model’s predictions when perturbations are applied.

This class inherits from ModelChangesRobustnessEvaluator and uses the Gurobi optimizer to determine if the model’s prediction remains stable under perturbations.

task

The task to solve, inherited from ModelChangesRobustnessEvaluator.

Type:

Task

opt

An optimizer instance for setting up and solving the MILP problem.

Type:

OptSolver

evaluate(instance, desired_output=1, delta=0.005, bias_delta=0.005, M=10000, epsilon=0.0001)[source]

Evaluates whether the instance is Delta-robust.

@param instance: The instance to evaluate.

@param desired_output: The desired output for the model (0 or 1). The evaluation will check if the model’s output matches this.

@param delta: The maximum allowable perturbation in the model parameters.

@param bias_delta: Additional bias to apply to the delta changes.

@param M: A large constant used in the MILP formulation for modeling constraints.

@param epsilon: A small constant used to ensure numerical stability.

@return: A boolean indicating whether the instance is Delta-robust.
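The MILP solves for the worst-case shifted model; for a plain linear scorer that worst case has a closed form, which this illustrative (non-Gurobi) sketch uses. The function and its assumption of a linear model are hypothetical, not the library's solver-based implementation.

```python
def delta_robust_linear(weights, bias, instance, desired_output=1,
                        delta=0.005, bias_delta=0.005):
    """Worst-case interval check for a linear model w.x + b >= 0.

    Each weight may shift by +/-delta and the bias by +/-bias_delta; the
    adversary picks shifts that push the score toward the wrong class,
    which for a linear model costs exactly delta * ||x||_1 + bias_delta.
    """
    score = sum(w * x for w, x in zip(weights, instance)) + bias
    # Maximum adversarial movement of the score under the interval shifts
    slack = delta * sum(abs(x) for x in instance) + bias_delta
    if desired_output == 1:
        return score - slack >= 0   # worst case still classified positive
    return score + slack < 0        # worst case still classified negative
```

The MILP formulation generalises this to piecewise-linear models (e.g. ReLU networks), where the worst case is no longer a closed form.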

robustx.robustness_evaluations.InputChangesRobustnessEvaluator module

class robustx.robustness_evaluations.InputChangesRobustnessEvaluator.InputChangesRobustnessEvaluator(ct)[source]

Bases: ABC

Abstract base class for evaluating the robustness of CEs with respect to input changes.

abstract evaluate(instance, counterfactual, generator)[source]

Compare the counterfactuals for the original instance and those for the perturbed instance.

@param instance: An input instance.

@param counterfactual: One or more CE points for the instance.

@param generator: CE generator.

perturb_input(instance)[source]

Default method for perturbing an input instance by adding small Gaussian noise.

@param instance: An input instance.
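A minimal sketch of the default perturbation described above; the noise scale sigma is an assumed value, not the library's default.

```python
import random

def perturb_input(instance, sigma=0.01, seed=None):
    """Perturb an input instance by adding small Gaussian noise
    (mean 0, standard deviation sigma) to each feature."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in instance]
```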

robustx.robustness_evaluations.ModelChangesRobustnessEvaluator module

class robustx.robustness_evaluations.ModelChangesRobustnessEvaluator.ModelChangesRobustnessEvaluator(ct)[source]

Bases: ABC

Abstract base class for evaluating the robustness of CEs with respect to model changes.

This class defines an interface for evaluating how robust a CE’s validity is when the model parameters are changed.

abstract evaluate(instance, neg_value=0)[source]

Abstract method to evaluate the robustness of a model’s prediction on a given instance.

Must be implemented by subclasses.

@param instance: The instance for which to evaluate robustness.

This could be a single data point for the model.

@param neg_value: The value considered negative in the target variable.

Used to determine if the counterfactual flips the prediction.

@return: Result of the robustness evaluation. The return type should be defined by the subclass.
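The contract can be sketched with a stand-in abstract class (mirroring the interface, not importing the library) and a trivial hypothetical subclass:

```python
from abc import ABC, abstractmethod

class ModelChangesRobustnessEvaluator(ABC):
    """Stand-in for the abstract interface, for illustration only."""

    @abstractmethod
    def evaluate(self, instance, neg_value=0):
        """Return the robustness result for one instance."""

class AlwaysRobustEvaluator(ModelChangesRobustnessEvaluator):
    """Trivial subclass: treats every instance as robust."""

    def evaluate(self, instance, neg_value=0):
        return True
```

Concrete evaluators such as DeltaRobustnessEvaluator and VaRRobustnessEvaluator follow this same pattern, each defining its own return type.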

robustx.robustness_evaluations.ModelChangesRobustnessScorer module

class robustx.robustness_evaluations.ModelChangesRobustnessScorer.ModelChangesRobustnessScorer(ct)[source]

Bases: ABC

Abstract base class for scoring the robustness of CEs with respect to model changes.

This class defines an interface for assigning a robustness score to a model’s predictions for a CE when the model parameters are changed.

abstract score(instance, neg_value=0)[source]

Abstract method to calculate the robustness score for a model’s prediction on a given instance.

Must be implemented by subclasses.

@param instance: The instance for which to calculate the robustness score.

This could be a single data point for the model.

@param neg_value: The value considered negative in the target variable.

Used to determine if the counterfactual flips the prediction.

@return: The calculated robustness score. The return type should be defined by the subclass.

robustx.robustness_evaluations.ModelChangesRobustnessSetEvaluator module

class robustx.robustness_evaluations.ModelChangesRobustnessSetEvaluator.ModelChangesRobustnessSetEvaluator(ct, evaluator=<class 'robustx.robustness_evaluations.ModelChangesRobustnessEvaluator.ModelChangesRobustnessEvaluator'>)[source]

Bases: object

Class for evaluating the robustness of CEs with respect to model changes.

This class uses a specified evaluator to assess the robustness of model predictions for multiple CEs.

task

The task for which robustness is being evaluated.

Type:

Task

evaluator

An instance of a robustness evaluator used to assess each instance.

Type:

ModelChangesRobustnessEvaluator

evaluate(instances, neg_value=0)[source]

Evaluates the robustness of model predictions for a set of instances.

@param instances: A DataFrame containing the instances to evaluate.

@param neg_value: The value considered negative in the target variable, used to evaluate the robustness of the model’s prediction.

@return: A DataFrame containing the robustness evaluation results for each instance.
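The set evaluator's job reduces to applying a per-instance evaluator across a collection and gathering the results. A hypothetical minimal version, with the evaluator as a plain callable and a list standing in for the DataFrame:

```python
def evaluate_set(instances, evaluator, neg_value=0):
    """Apply a per-instance robustness evaluator to each instance
    and collect the results."""
    return [evaluator(inst, neg_value) for inst in instances]
```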

robustx.robustness_evaluations.ModelMultiplicityRobustnessEvaluator module

class robustx.robustness_evaluations.ModelMultiplicityRobustnessEvaluator.ModelMultiplicityRobustnessEvaluator(models, data)[source]

Bases: ABC

Abstract base class for evaluating the robustness of CEs with respect to model multiplicity.

abstract evaluate(instance, counterfactuals)[source]

Abstract method to evaluate the robustness of the counterfactuals for the input instance under model multiplicity.

Must be implemented by subclasses.

@param instance: A single input instance for which the counterfactual is generated.

@param counterfactuals: A DataFrame of counterfactuals generated for the input by the models.

robustx.robustness_evaluations.MultiplicityValidityRobustnessEvaluator module

class robustx.robustness_evaluations.MultiplicityValidityRobustnessEvaluator.MultiplicityValidityRobustnessEvaluator(models, data)[source]

Bases: ModelMultiplicityRobustnessEvaluator

The robustness evaluator that examines how many models (in %) each counterfactual is valid on.

evaluate(instance, counterfactuals)[source]

Evaluates, on average, how many models (in %) each counterfactual is valid on.

@param instance: An input instance.

@param counterfactuals: A series of CEs.

evaluate_single(instance, counterfactual)[source]

Evaluate how many models (in %) one counterfactual is valid on.

@param instance: An input instance.

@param counterfactual: A CE.
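The single-counterfactual metric is simply the fraction of models that still predict the positive class for the CE. A self-contained sketch, with models as callables returning a label (an assumption, not the library's model interface):

```python
def validity_percentage(counterfactual, models, positive=1):
    """Percentage of models for which the counterfactual is valid,
    i.e. classified as the positive class."""
    valid = sum(1 for m in models if m(counterfactual) == positive)
    return 100.0 * valid / len(models)
```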

robustx.robustness_evaluations.SetDistanceRobustnessEvaluator module

class robustx.robustness_evaluations.SetDistanceRobustnessEvaluator.SetDistanceRobustnessEvaluator(ct)[source]

Bases: InputChangesRobustnessEvaluator

Computes the set distance between two sets of counterfactuals.

evaluate(instance, counterfactual, generator)[source]

Compare the counterfactuals for the original instance and those for the perturbed instance.

@param instance: An input instance.

@param counterfactual: One or more CE points for the instance.

@param generator: CE generator.
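One common instantiation of a set distance (an assumption, not necessarily the library's exact metric) is the average, over one set, of the minimum Euclidean distance to the other set:

```python
import math

def set_distance(set_a, set_b):
    """Average nearest-neighbour Euclidean distance from set_a to set_b.
    A small value means the perturbed-input counterfactuals stayed
    close to the originals."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return sum(min(dist(a, b) for b in set_b) for a in set_a) / len(set_a)
```

Note this form is asymmetric; a symmetric variant averages the two directions.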

robustx.robustness_evaluations.VaRRobustnessEvaluator module

class robustx.robustness_evaluations.VaRRobustnessEvaluator.VaRRobustnessEvaluator(ct, models)[source]

Bases: ModelChangesRobustnessEvaluator

A simple and common robustness evaluation method that checks the validity of a CE after retraining. Used for robustness against model changes.

task

The task to solve, inherited from ModelChangesRobustnessEvaluator.

Type:

Task

models

The list of models retrained on the same dataset.

Type:

List[BaseModel]

evaluate(instance, desired_outcome=1)[source]

Evaluates whether the instance (the CE) is predicted with the desired outcome by all retrained models. The instance is robust if this is true.

@param instance: The instance (in most cases a CE) to evaluate.

@param desired_outcome: The value considered positive in the target variable.

@return: A boolean indicating robust or not.
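The check itself is a conjunction over the retrained models. A self-contained sketch, with models as callables returning a label (an assumption, not the library's model interface):

```python
def var_robust(ce, retrained_models, desired_outcome=1):
    """The CE is robust iff every retrained model still predicts
    the desired outcome for it."""
    return all(m(ce) == desired_outcome for m in retrained_models)
```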

Module contents