DeterministicMetrics#

class DeterministicMetrics[source]#

Define and customize deterministic metrics.

Notes

Deterministic metrics compare two timeseries, typically primary (“observed”) vs. secondary (“modeled”) values. Available metrics:

Error Metrics:
- MeanError, MeanSquareError, RootMeanSquareError
- MeanAbsoluteError, MeanAbsoluteRelativeError

Bias Metrics:
- RelativeBias, MultiplicativeBias, AnnualPeakRelativeBias

Correlation Metrics:
- PearsonCorrelation, SpearmanCorrelation, Rsquared

Efficiency Metrics:
- NashSutcliffeEfficiency, NormalizedNashSutcliffeEfficiency
- KlingGuptaEfficiency, KlingGuptaEfficiencyMod1, KlingGuptaEfficiencyMod2

Threshold-Based Metrics:
- ConfusionMatrix, FalseAlarmRatio, FrequencyBiasIndex
- ProbabilityOfDetection, ProbabilityOfFalseDetection
- CriticalSuccessIndex, SuccessRatio

Other:
- MaxValueDelta, MaxValueTimeDelta
- RootMeanStandardDeviationRatio

Example

>>> from teehr import DeterministicMetrics
>>> kge = DeterministicMetrics.KlingGuptaEfficiency(transform="log", add_epsilon=True)
>>> rmse = DeterministicMetrics.RootMeanSquareError()
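The keyword arguments shown in the class signatures below can be combined freely at construction. A further hedged sketch (the field name "flood_threshold" and output name "kge_log" are illustrative values, not required names):

>>> pod = DeterministicMetrics.ProbabilityOfDetection(
...     threshold_field_name="flood_threshold"
... )
>>> kge_custom = DeterministicMetrics.KlingGuptaEfficiency(
...     transform="log", add_epsilon=True, output_field_name="kge_log"
... )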

Methods

class AnnualPeakRelativeBias(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Annual Peak Relative Bias: bias computed on annual peak values.

default_func() Callable#

Create the annual_peak_relative_bias metric function.

\(Ann\ PF\ Bias=\frac{\sum(ann.\ peak_{sec}-ann.\ peak_{prim})}{\sum(ann.\ peak_{prim})}\)
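A minimal pandas sketch of this formula (illustrative only, not the TEEHR implementation; the column names are assumed):

import pandas as pd

def annual_peak_relative_bias(df: pd.DataFrame) -> float:
    # Group paired values by calendar year and take each year's peak.
    peaks = df.groupby(df["value_time"].dt.year).agg(
        prim_peak=("prim", "max"), sec_peak=("sec", "max")
    )
    # Sum of peak differences, normalized by the sum of primary peaks.
    return (peaks["sec_peak"] - peaks["prim_peak"]).sum() / peaks["prim_peak"].sum()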

class ConfusionMatrix(*, return_type: ~pyspark.sql.types.DataType = MapType(StringType(), IntegerType(), True), unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, threshold_field_name: str = None)#

Confusion Matrix: TP, TN, FP, FN counts based on threshold exceedance.

Additional Parameters#

threshold_field_name : str

Field name containing location-specific threshold values.

default_func() Callable#

Create the confusion_matrix metric function.

Returns counts of TP, TN, FP, FN as a dictionary.

\(TP=\sum((prim>=threshold_{prim})\ and\ (sec>=threshold_{sec}))\)

\(TN=\sum((prim<threshold_{prim})\ and\ (sec<threshold_{sec}))\)

\(FP=\sum((prim<threshold_{prim})\ and\ (sec>=threshold_{sec}))\)

\(FN=\sum((prim>=threshold_{prim})\ and\ (sec<threshold_{sec}))\)
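A minimal NumPy sketch of these counts, assuming a single threshold value applied to both series (the TEEHR metric reads per-location thresholds from threshold_field_name; the dictionary keys here are illustrative):

import numpy as np

def confusion_counts(prim, sec, threshold):
    prim_hit = np.asarray(prim) >= threshold
    sec_hit = np.asarray(sec) >= threshold
    return {
        "TP": int(np.sum(prim_hit & sec_hit)),    # both exceed
        "TN": int(np.sum(~prim_hit & ~sec_hit)),  # neither exceeds
        "FP": int(np.sum(~prim_hit & sec_hit)),   # only secondary exceeds
        "FN": int(np.sum(prim_hit & ~sec_hit)),   # only primary exceeds
    }

The threshold-based scores documented below (CriticalSuccessIndex, FalseAlarmRatio, FrequencyBiasIndex, ProbabilityOfDetection, ProbabilityOfFalseDetection, SuccessRatio) are all simple ratios of these four counts; see the combined sketch at the end of this page.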

class CriticalSuccessIndex(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, threshold_field_name: str = None)#

Critical Success Index (Threat Score): TP / (TP + FN + FP).

Additional Parameters#

threshold_field_name : str

Field name containing location-specific threshold values.

default_func() Callable#

Create the critical_success_index metric function.

\(CSI=\frac{TP}{(TP+FP+FN)}\)

class FalseAlarmRatio(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, threshold_field_name: str = None)#

False Alarm Ratio: FP / (TP + FP).

Additional Parameters#

threshold_field_name : str

Field name containing location-specific threshold values.

default_func() Callable#

Create the false_alarm_ratio metric function.

\(FAR=\frac{FP}{(TP+FP)}\)

class FrequencyBiasIndex(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, threshold_field_name: str = None)#

Frequency Bias Index: (TP + FP) / (TP + FN).

Additional Parameters#

threshold_field_name : str

Field name containing location-specific threshold values.

default_func() Callable#

Create the frequency_bias_index metric function.

\(FBIAS=\frac{TP+FP}{TP+FN}\)

class KlingGuptaEfficiency(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, sr: float = 1.0, sa: float = 1.0, sb: float = 1.0)#

Kling-Gupta Efficiency (original formulation).

Additional Parameters#

sr : float

Scaling factor for correlation component, by default 1.0.

sa : float

Scaling factor for variability component, by default 1.0.

sb : float

Scaling factor for bias component, by default 1.0.

default_func() Callable#

Create the kling_gupta_efficiency metric function.

\(KGE=1-\sqrt{(r(sec, prim)-1)^2+(\frac{\sigma_{sec}}{\sigma_{prim}}-1)^2+(\frac{\mu_{sec}}{\mu_{prim}}-1)^2}\)
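A NumPy sketch of the original formulation, with the scaling factors entering as weights on the three components (a sketch under that assumption, not the TEEHR code):

import numpy as np

def kling_gupta_efficiency(prim, sec, sr=1.0, sa=1.0, sb=1.0):
    prim = np.asarray(prim, dtype=float)
    sec = np.asarray(sec, dtype=float)
    r = np.corrcoef(sec, prim)[0, 1]    # correlation component
    alpha = sec.std() / prim.std()      # variability component
    beta = sec.mean() / prim.mean()     # bias component
    return 1.0 - np.sqrt(
        (sr * (r - 1.0)) ** 2
        + (sa * (alpha - 1.0)) ** 2
        + (sb * (beta - 1.0)) ** 2
    )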

class KlingGuptaEfficiencyMod1(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, sr: float = 1.0, sa: float = 1.0, sb: float = 1.0)#

Kling-Gupta Efficiency - modified 1 (2012).

Additional Parameters#

sr : float

Scaling factor for correlation component, by default 1.0.

sa : float

Scaling factor for variability component, by default 1.0.

sb : float

Scaling factor for bias component, by default 1.0.

default_func() Callable#

Create the kling_gupta_efficiency_mod1 metric function.

\(KGE'=1-\sqrt{(r(sec, prim)-1)^2+(\frac{\sigma_{sec}/\mu_{sec}}{\sigma_{prim}/\mu_{prim}}-1)^2+(\frac{\mu_{sec}}{\mu_{prim}}-1)^2}\)

class KlingGuptaEfficiencyMod2(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, sr: float = 1.0, sa: float = 1.0, sb: float = 1.0)#

Kling-Gupta Efficiency - modified 2 (2021).

Additional Parameters#

sr : float

Scaling factor for correlation component, by default 1.0.

sa : float

Scaling factor for variability component, by default 1.0.

sb : float

Scaling factor for bias component, by default 1.0.

default_func() Callable#

Create the kling_gupta_efficiency_mod2 metric function.

\(KGE''=1-\sqrt{(r(sec, prim)-1)^2+(\frac{\sigma_{sec}}{\sigma_{prim}}-1)^2+\frac{(\mu_{sec}-\mu_{prim})^2}{\sigma_{prim}^2}}\)

class MaxValueDelta(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Max Value Delta: difference between maximum values.

default_func() Callable#

Create the max_value_delta metric function.

\(mvd=max(value_{sec})-max(value_{prim})\)

class MaxValueTimeDelta(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Max Value Time Delta: time difference between max value occurrences.

default_func() Callable#

Create the max_value_timedelta metric function.

\(mvtd=max\_value\_time_{sec}-max\_value\_time_{prim}\)
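A pandas sketch covering both max-value metrics above (column names illustrative, not the TEEHR implementation):

import pandas as pd

def max_value_deltas(df: pd.DataFrame):
    # Difference between the series maxima (MaxValueDelta).
    mvd = df["sec"].max() - df["prim"].max()
    # Time difference between when each maximum occurs (MaxValueTimeDelta).
    t_sec = df.loc[df["sec"].idxmax(), "value_time"]
    t_prim = df.loc[df["prim"].idxmax(), "value_time"]
    return mvd, t_sec - t_prim  # (float, pandas.Timedelta)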

class MeanAbsoluteError(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Mean Absolute Error: average of absolute differences.

default_func() Callable#

Create the mean_absolute_error metric function.

\(MAE=\frac{\sum|sec-prim|}{count}\)

class MeanAbsoluteRelativeError(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Mean Absolute Relative Error: sum of absolute differences divided by the sum of primary values.

default_func() Callable#

Create the Mean Absolute Relative Error metric function.

\(Relative\ MAE=\frac{\sum|sec-prim|}{\sum(prim)}\)

class MeanError(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Mean Error: average difference between secondary and primary values.

default_func() Callable#

Create the Mean Error metric function.

\(Mean\ Error=\frac{\sum(sec-prim)}{count}\)

class MeanSquareError(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Mean Square Error: average of squared differences.

default_func() Callable#

Create the mean_squared_error metric function.

\(MSE=\frac{\sum(sec-prim)^2}{count}\)
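The simple error metrics share one difference vector; a compact NumPy sketch covering MeanError, MeanAbsoluteError, MeanSquareError, and RootMeanSquareError (documented further below), not the TEEHR implementation:

import numpy as np

def error_metrics(prim, sec):
    d = np.asarray(sec, dtype=float) - np.asarray(prim, dtype=float)
    return {
        "mean_error": d.mean(),
        "mean_absolute_error": np.abs(d).mean(),
        "mean_square_error": (d ** 2).mean(),
        "root_mean_square_error": np.sqrt((d ** 2).mean()),
    }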

class MultiplicativeBias(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Multiplicative Bias: ratio of secondary mean to primary mean.

default_func() Callable#

Create the Multiplicative Bias metric function.

\(Mult.\ Bias=\frac{\mu_{sec}}{\mu_{prim}}\)

class NashSutcliffeEfficiency(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Nash-Sutcliffe Efficiency: 1 - (MSE / variance of primary).

default_func() Callable#

Create the nash_sutcliffe_efficiency metric function.

\(NSE=1-\frac{\sum(prim-sec)^2}{\sum(prim-\mu_{prim})^2}\)

class NormalizedNashSutcliffeEfficiency(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Normalized Nash-Sutcliffe Efficiency: 1 / (2 - NSE).

default_func() Callable#

Create the nash_sutcliffe_efficiency_normalized metric function.

\(NNSE=\frac{1}{(2-NSE)}\)
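A NumPy sketch of NSE and its normalized variant (not the TEEHR implementation):

import numpy as np

def nse(prim, sec):
    prim = np.asarray(prim, dtype=float)
    sec = np.asarray(sec, dtype=float)
    return 1.0 - np.sum((prim - sec) ** 2) / np.sum((prim - prim.mean()) ** 2)

def nnse(prim, sec):
    # Rescales NSE from (-inf, 1] onto (0, 1].
    return 1.0 / (2.0 - nse(prim, sec))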

class PearsonCorrelation(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Pearson Correlation Coefficient: linear correlation between series.

default_func() Callable#

Create the Pearson Correlation Coefficient metric function.

\(r=r(sec, prim)\)

class ProbabilityOfDetection(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, threshold_field_name: str = None)#

Probability of Detection (Hit Rate): TP / (TP + FN).

Additional Parameters#

threshold_field_name : str

Field name containing location-specific threshold values.

default_func() Callable#

Create the probability_of_detection metric function.

\(POD=\frac{TP}{(TP+FN)}\)

class ProbabilityOfFalseDetection(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, threshold_field_name: str = None)#

Probability of False Detection: FP / (TN + FP).

Additional Parameters#

threshold_field_name : str

Field name containing location-specific threshold values.

default_func() Callable#

Create the probability_of_false_detection metric function.

\(POFD=\frac{FP}{(FP+TN)}\)

class RelativeBias(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Relative Bias: sum of differences divided by sum of primary values.

default_func() Callable#

Create the Relative Bias metric function.

\(Relative\ Bias=\frac{\sum(sec-prim)}{\sum(prim)}\)

class RootMeanSquareError(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Root Mean Square Error: square root of mean squared error.

default_func() Callable#

Create the root_mean_squared_error metric function.

\(RMSE=\sqrt{\frac{\sum(sec-prim)^2}{count}}\)

class RootMeanStandardDeviationRatio(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Root Mean Standard Deviation Ratio (RSR): RMSE / std dev of primary.

default_func() Callable#

Create the root_mean_standard_deviation_ratio metric function.

\(RSR=\frac{RMSE}{\sigma_{prim}}\)

class Rsquared(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Coefficient of Determination: square of Pearson correlation.

default_func() Callable#

Create the R-squared metric function.

\(r^2=r(sec, prim)^2\)
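PearsonCorrelation (documented above) and Rsquared fall out of one NumPy call; a sketch with made-up data:

import numpy as np

prim = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sec = np.array([1.1, 1.8, 3.3, 3.9, 5.2])
r = np.corrcoef(sec, prim)[0, 1]  # PearsonCorrelation
r2 = r ** 2                       # Rsquared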

class SpearmanCorrelation(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None)#

Spearman Rank Correlation Coefficient: rank-based correlation.

default_func() Callable#

Create the Spearman metric function.

\(r_s=1-\frac{6\sum(rank_{sec}-rank_{prim})^2}{count(count^2-1)}\)
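In practice this is usually computed with SciPy rather than by ranking manually; a sketch with made-up data (not the TEEHR implementation):

import numpy as np
from scipy.stats import spearmanr

prim = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sec = np.array([1.2, 1.9, 3.5, 3.7, 5.4])
rho, _pvalue = spearmanr(sec, prim)  # Pearson correlation computed on the ranks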

class SuccessRatio(*, return_type: str | ~pyspark.sql.types.ArrayType | ~pyspark.sql.types.MapType = 'float', unpack_results: bool = False, unpack_function: ~typing.Callable = <function unpack_sdf_dict_columns>, reference_configuration: str = None, bootstrap: ~typing.Any = None, add_epsilon: bool = False, transform: ~typing.Any = None, output_field_name: str = None, func: ~typing.Callable = None, input_field_names: str | ~teehr.models.str_enum.StrEnum | ~typing.List[str | ~teehr.models.str_enum.StrEnum] = None, attrs: ~typing.Dict = None, threshold_field_name: str = None)#

Success Ratio: (TP + TN) / (TP + TN + FP + FN).

Additional Parameters#

threshold_field_name : str

Field name containing location-specific threshold values.

default_func() Callable#

Create the success_ratio metric function.

\(SR=\frac{TP+TN}{TP+TN+FP+FN}\)
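All of the threshold-based scores on this page reduce to ratios of the four confusion-matrix counts (see the confusion_counts sketch under ConfusionMatrix above); a plain-Python sketch with illustrative key names:

def threshold_scores(tp, tn, fp, fn):
    return {
        "probability_of_detection": tp / (tp + fn),
        "probability_of_false_detection": fp / (fp + tn),
        "false_alarm_ratio": fp / (tp + fp),
        "critical_success_index": tp / (tp + fp + fn),
        "frequency_bias_index": (tp + fp) / (tp + fn),
        "success_ratio": (tp + tn) / (tp + tn + fp + fn),
    }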