robustx.evaluations package

Submodules

robustx.evaluations.CEEvaluator module

class robustx.evaluations.CEEvaluator.CEEvaluator(task)[source]

Bases: ABC

An abstract class used to evaluate CE methods for a given task

task

The task for which the CE is being evaluated

Type:

Task

abstract evaluate(counterfactual_explanations, **kwargs)[source]

Abstract method to evaluate the provided counterfactual explanations

@param counterfactual_explanations: The counterfactual explanations that are to be evaluated

@param kwargs: Additional keyword arguments for the evaluation process
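The interface described above can be sketched in plain Python. The class and attribute names follow the documentation, but this is an illustrative stand-in, not the actual robustx implementation:

```python
from abc import ABC, abstractmethod


class CEEvaluator(ABC):
    """Minimal sketch of the abstract evaluator interface (names taken
    from the docs above; not the real robustx class)."""

    def __init__(self, task):
        # The task holds the model and dataset the CEs are evaluated against
        self.task = task

    @abstractmethod
    def evaluate(self, counterfactual_explanations, **kwargs):
        """Subclasses return a metric computed over the given CEs."""
        ...


class ConstantEvaluator(CEEvaluator):
    # Trivial concrete subclass, for illustration only
    def evaluate(self, counterfactual_explanations, **kwargs):
        return 1.0
```

Because `evaluate` is abstract, instantiating `CEEvaluator` directly raises a `TypeError`; concrete evaluators such as the ones below must override it.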

robustx.evaluations.DistanceEvaluator module

class robustx.evaluations.DistanceEvaluator.DistanceEvaluator(task)[source]

Bases: CEEvaluator

An Evaluator class which evaluates the average distance of counterfactuals from their original instances

Attributes / Properties

task: Task

Stores the Task for which we are evaluating the distance of CEs

distance_func: Function

A function which takes in two DataFrames and returns a number representing distance, defaulting to Euclidean distance

valid_val: int

Stores what the target value of a valid counterfactual is defined as


evaluate() -> int

Returns the average distance of each x’ from x

evaluate(counterfactuals, valid_val=1, distance_func=<function euclidean>, column_name='target', subset=None, **kwargs)[source]

Determines the average distance of the CEs from their original instances

@param counterfactuals: pd.DataFrame, dataset containing CEs in the same order as the negative instances in the dataset

@param valid_val: int, what the target value of a valid counterfactual is defined as, default 1

@param distance_func: Function, a function which takes in two DataFrames and returns a number representing distance, defaulting to Euclidean distance

@param column_name: str, name of the target column

@param subset: optional pd.DataFrame, contains instances to generate CEs on

@param kwargs: other arguments

@return: the average distance of the CEs from their original instances
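As an illustration of the metric this evaluator computes, here is a standalone sketch of an average-Euclidean-distance function over row-aligned DataFrames. The helper name `euclidean` mirrors the default shown in the signature above, but this is an assumed implementation, not robustx's own:

```python
import numpy as np
import pandas as pd


def euclidean(x: pd.DataFrame, x_prime: pd.DataFrame) -> float:
    # Row-wise Euclidean distance between originals and counterfactuals,
    # averaged over all rows; assumes both frames are aligned row by row.
    diffs = x.to_numpy(dtype=float) - x_prime.to_numpy(dtype=float)
    return float(np.linalg.norm(diffs, axis=1).mean())


# Toy data: two original instances and their counterfactuals
originals = pd.DataFrame({"f1": [0.0, 1.0], "f2": [0.0, 1.0]})
counterfactuals = pd.DataFrame({"f1": [3.0, 1.0], "f2": [4.0, 1.0]})

avg_dist = euclidean(originals, counterfactuals)  # (5.0 + 0.0) / 2 = 2.5
```

Lower values mean the counterfactuals stay closer to the instances they explain, which is usually desirable for plausibility.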

robustx.evaluations.ManifoldEvaluator module

class robustx.evaluations.ManifoldEvaluator.ManifoldEvaluator(task)[source]

Bases: CEEvaluator

An Evaluator class which evaluates the proportion of counterfactuals that lie on the data manifold, using Local Outlier Factor (LOF)

Attributes / Properties

task: Task

Stores the Task for which we are evaluating the robustness of CEs


evaluate() -> int

Returns the proportion of CEs which lie on the data manifold

evaluate(counterfactual_explanations, n_neighbors=20, column_name='target', **kwargs)[source]

Determines the proportion of CEs that lie on the data manifold based on LOF

@param counterfactual_explanations: pd.DataFrame, containing the CEs in the same order as the negative instances in the dataset

@param n_neighbors: int, number of neighbours to compare against when deciding whether an instance is an outlier

@param column_name: str, name of the target column

@param kwargs: other arguments

@return: proportion of CEs on the manifold
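For intuition, the metric can be reproduced with scikit-learn's `LocalOutlierFactor` in novelty mode: fit LOF on the training data, then count how many counterfactuals it classifies as inliers. This is a plausible stand-in under that assumption; robustx's internal use of LOF may differ in details such as thresholding:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Training data: a tight Gaussian cluster standing in for the data manifold
X = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

# novelty=True lets us score unseen points with predict()
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(X)

# One CE near the cluster, one far away from it
ces = np.array([[0.1, -0.2], [25.0, 25.0]])
preds = lof.predict(ces)  # +1 = inlier (on manifold), -1 = outlier
proportion = float((preds == 1).mean())
```

A higher proportion indicates counterfactuals that look like realistic data points rather than adversarial off-manifold examples.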

robustx.evaluations.RobustnessProportionEvaluator module

class robustx.evaluations.RobustnessProportionEvaluator.RobustnessProportionEvaluator(task)[source]

Bases: CEEvaluator

An Evaluator class which evaluates the proportion of counterfactuals which are robust

Attributes / Properties

task: Task

Stores the Task for which we are evaluating the robustness of CEs

robustness_evaluator: ModelChangesRobustnessEvaluator

An instance of ModelChangesRobustnessEvaluator to evaluate the robustness of the CEs

valid_val: int

Stores what the target value of a valid counterfactual is defined as

target_col: str

Stores what the target column name is


evaluate() -> int

Returns the proportion of CEs which are robust for the given parameters

evaluate(counterfactuals, delta=0.005, bias_delta=0.005, M=1000000, epsilon=0.001, valid_val=1, column_name='target', robustness_evaluator=<class 'robustx.robustness_evaluations.DeltaRobustnessEvaluator.DeltaRobustnessEvaluator'>, **kwargs)[source]

Evaluates the proportion of CEs which are robust for the given parameters

@param counterfactuals: pd.DataFrame, the CEs to evaluate

@param delta: float, delta needed for the robustness evaluator

@param bias_delta: float, bias delta needed for the robustness evaluator

@param M: int, large M needed for the robustness evaluator

@param epsilon: float, small epsilon needed for the robustness evaluator

@param valid_val: int, what the target value of a valid counterfactual is defined as

@param column_name: str, what the target column name is

@param robustness_evaluator: ModelChangesRobustnessEvaluator.__class__, the CLASS of the evaluator to use

@param kwargs: other arguments

@return: proportion of CEs which are robust
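For intuition only: delta-robustness asks whether a CE keeps its valid prediction under any bounded perturbation of the model parameters (the `M` and `epsilon` parameters suggest the actual DeltaRobustnessEvaluator solves this with a mixed-integer formulation). The sketch below is a closed-form interval bound that holds only for a linear scoring model, not the general method:

```python
import numpy as np


def is_delta_robust(x_prime: np.ndarray, w: np.ndarray, b: float,
                    delta: float, bias_delta: float) -> bool:
    """Assumed simplification for a LINEAR model w·x + b: a CE x' is
    delta-robust if the score stays positive for every perturbed model
    with |w' - w| <= delta (per weight) and |b' - b| <= bias_delta."""
    # Worst case: every weight and the bias shifted against the score
    worst_score = w @ x_prime - delta * np.abs(x_prime).sum() + b - bias_delta
    return bool(worst_score > 0)


w = np.array([1.0, 1.0])
ce_far = np.array([2.0, 2.0])     # score 4.0: survives small perturbations
ce_near = np.array([0.001, 0.0])  # score 0.001: flipped by the bias shift alone

robust_far = is_delta_robust(ce_far, w, b=0.0, delta=0.005, bias_delta=0.005)
robust_near = is_delta_robust(ce_near, w, b=0.0, delta=0.005, bias_delta=0.005)
```

The evaluator then reports the fraction of CEs for which such a check succeeds, which is why CEs sitting just past the decision boundary tend to score poorly.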

robustx.evaluations.ValidityEvaluator module

class robustx.evaluations.ValidityEvaluator.ValidityEvaluator(task)[source]

Bases: CEEvaluator

An Evaluator class which evaluates the proportion of counterfactuals which are valid

Attributes / Properties

task: Task

Stores the Task for which we are evaluating the validity of CEs


evaluate() -> int

Returns the proportion of CEs which are valid

checkValidity(instance, valid_val)[source]

Checks if a given CE is valid

@param instance: pd.DataFrame / pd.Series / torch.Tensor, the CE to check the validity of

@param valid_val: int, the target column value which denotes a valid CE

@return: whether the given CE is valid

evaluate(counterfactuals, valid_val=1, column_name='target', **kwargs)[source]

Evaluates the proportion of CEs which are valid

@param counterfactuals: pd.DataFrame, the set of CEs which we want to evaluate

@param valid_val: int, target column value which denotes a valid instance

@param column_name: str, name of the target column

@param kwargs: other arguments

@return: float, proportion of CEs which are valid
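The validity metric itself is simple to state: a CE is valid when the task's model predicts `valid_val` for it. A standalone sketch, using a hypothetical stand-in classifier in place of the model the real evaluator pulls from `self.task`:

```python
import numpy as np
import pandas as pd


def validity_proportion(counterfactuals: pd.DataFrame, model,
                        valid_val: int = 1,
                        column_name: str = "target") -> float:
    # Drop the target column if present, predict on the features, and
    # report the fraction of predictions matching valid_val.
    features = counterfactuals.drop(columns=[column_name], errors="ignore")
    preds = np.asarray(model.predict(features))
    return float((preds == valid_val).mean())


class ThresholdModel:
    # Hypothetical classifier for illustration: positive iff feature sum > 0
    def predict(self, X: pd.DataFrame) -> np.ndarray:
        return (X.to_numpy(dtype=float).sum(axis=1) > 0).astype(int)


ces = pd.DataFrame({"f1": [1.0, -2.0, 3.0], "f2": [0.5, 1.0, -1.0]})
prop = validity_proportion(ces, ThresholdModel())  # 2 of 3 rows predict 1
```

Any object with a scikit-learn style `predict` works here; in robustx the model comes from the Task the evaluator was constructed with.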

Module contents