public class ConfidenceEvaluator extends Object
| Modifier and Type | Class and Description |
|---|---|
| `static class` | `ConfidenceEvaluator.EntityConfidence` A simple class to store a confidence score and whether or not this labeling is correct. |
| Constructor and Description |
|---|
| `ConfidenceEvaluator(InstanceWithConfidence[] instances, boolean sorted)` |
| `ConfidenceEvaluator(PipedInstanceWithConfidence[] instances, boolean sorted)` |
| `ConfidenceEvaluator(Segment[] segments, boolean sorted)` |
| `ConfidenceEvaluator(Vector confidences)` |
| `ConfidenceEvaluator(Vector confidences, int nBins)` |
| Modifier and Type | Method and Description |
|---|---|
| `double` | `accuracyAtCoverage(double cov)` |
| `String` | `accuracyCoverageValuesToString()` |
| `String` | `accuracyRecallValuesToString(int totalTrue)` |
| `double` | `correlation()` Calculates Pearson's r for the correlation between confidence and correctness, where correct = 1 and incorrect = -1. |
| `double[]` | `getAccuracyCoverageValues()` Gets the accuracy at coverage for each bin of values. |
| `double[][]` | `getAccuracyRecallValues(int totalTrue)` Gets the accuracy at recall for each bin of values. |
| `double` | `getAverageAccuracy()` |
| `double` | `getAverageCorrectConfidence()` Average confidence score for the correct entities. |
| `double` | `getAverageIncorrectConfidence()` Average confidence score for the incorrect entities. |
| `double` | `getAveragePrecision()` IR average precision measure. |
| `double` | `getConfidenceMean()` |
| `double` | `getConfidenceStandardDeviation()` Standard deviation of the confidence scores. |
| `double` | `getConfidenceSum()` |
| `double` | `getWorstAveragePrecision()` For comparison, ranks segments as badly as possible (all "incorrect" before "correct"). |
| `int` | `numCorrect()` |
| `int` | `numCorrectAtCoverage(double cov)` |
| `double` | `pointBiserialCorrelation()` Correlation when one variable (Y) is binary: r = ((x̄₁ − x̄₀) / sₓ) · √(p(1 − p)), where x̄₁ = mean of X when Y is 1, x̄₀ = mean of X when Y is 0, sₓ = standard deviation of X, and p = proportion of values where Y = 1. |
| `int` | `size()` |
| `String` | `toString()` |
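The point-biserial correlation documented above can be sketched as standalone Java. Treating the confidence scores as X and correctness as the binary Y, the formula r = ((x̄₁ − x̄₀) / sₓ) · √(p(1 − p)) becomes a few loops. This is an independent illustration, not the `ConfidenceEvaluator` source; the class and method names, and the use of the population standard deviation, are assumptions.

```java
public class PointBiserialSketch {

    // Sketch of r = ((mean of X where Y=1) - (mean of X where Y=0)) / s_x * sqrt(p(1-p)).
    static double pointBiserial(double[] x, boolean[] y) {
        int n = x.length;
        double sum1 = 0, sum0 = 0, sumAll = 0;
        int n1 = 0;
        for (int i = 0; i < n; i++) {
            sumAll += x[i];
            if (y[i]) { sum1 += x[i]; n1++; } else { sum0 += x[i]; }
        }
        int n0 = n - n1;
        double mean1 = sum1 / n1;       // mean of X where Y = 1
        double mean0 = sum0 / n0;       // mean of X where Y = 0
        double mean = sumAll / n;
        double ss = 0;
        for (double v : x) ss += (v - mean) * (v - mean);
        double sx = Math.sqrt(ss / n);  // population standard deviation of X
        double p = (double) n1 / n;     // proportion of values where Y = 1
        return (mean1 - mean0) / sx * Math.sqrt(p * (1 - p));
    }

    public static void main(String[] args) {
        // High confidence on correct labelings, low on incorrect: r close to 1.
        double[] conf = {0.9, 0.8, 0.7, 0.4, 0.3, 0.2};
        boolean[] correct = {true, true, true, false, false, false};
        System.out.println(pointBiserial(conf, correct));
    }
}
```

When confidence tracks correctness well, r approaches 1; when the two are unrelated, r stays near 0.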
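The IR average precision measure behind `getAveragePrecision()` can also be sketched independently: rank items by descending confidence, then average the precision values at the ranks where a correct item appears. This is a minimal, assumed implementation of the standard measure, not the library's code.

```java
import java.util.Arrays;
import java.util.Comparator;

public class AveragePrecisionSketch {

    static double averagePrecision(double[] confidence, boolean[] correct) {
        Integer[] idx = new Integer[confidence.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Rank items by confidence, highest first.
        Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> confidence[i]).reversed());
        double sum = 0;
        int hits = 0;
        for (int rank = 0; rank < idx.length; rank++) {
            if (correct[idx[rank]]) {
                hits++;
                sum += (double) hits / (rank + 1);  // precision at this rank
            }
        }
        return hits == 0 ? 0.0 : sum / hits;
    }

    public static void main(String[] args) {
        // Correct items at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 5/6.
        double[] conf = {0.9, 0.8, 0.7, 0.6};
        boolean[] correct = {true, false, true, false};
        System.out.println(averagePrecision(conf, correct));
    }
}
```

`getWorstAveragePrecision()` is the same quantity computed on the worst possible ranking (all incorrect items before all correct ones), giving a lower-bound baseline for comparison.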
public ConfidenceEvaluator(Vector confidences, int nBins)
public ConfidenceEvaluator(Vector confidences)
public ConfidenceEvaluator(Segment[] segments, boolean sorted)
public ConfidenceEvaluator(InstanceWithConfidence[] instances, boolean sorted)
public ConfidenceEvaluator(PipedInstanceWithConfidence[] instances, boolean sorted)
public double pointBiserialCorrelation()
public double getAveragePrecision()
public double getWorstAveragePrecision()
public double getConfidenceSum()
public double getConfidenceMean()
public double getConfidenceStandardDeviation()
public double correlation()
public double[] getAccuracyCoverageValues()
public String accuracyCoverageValuesToString()
public double[][] getAccuracyRecallValues(int totalTrue)
totalTrue - the total number of true Segments

public String accuracyRecallValuesToString(int totalTrue)
public double accuracyAtCoverage(double cov)
public int numCorrectAtCoverage(double cov)
public double getAverageAccuracy()
public int numCorrect()
public double getAverageIncorrectConfidence()
public double getAverageCorrectConfidence()
public int size()
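The "accuracy at coverage" idea behind `accuracyAtCoverage(double cov)` and `getAccuracyCoverageValues()` can be sketched as follows: keep only the fraction `cov` of items the model is most confident about, and report accuracy on that subset. This is a hedged, independent illustration; the names and the rounding choice are assumptions, not taken from the `ConfidenceEvaluator` source.

```java
import java.util.Arrays;
import java.util.Comparator;

public class AccuracyAtCoverageSketch {

    static double accuracyAtCoverage(double[] confidence, boolean[] correct, double cov) {
        Integer[] idx = new Integer[confidence.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Rank items by confidence, highest first.
        Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> confidence[i]).reversed());
        // Number of items covered at this coverage level (rounding is an assumption).
        int k = (int) Math.round(cov * confidence.length);
        if (k == 0) return 0.0;
        int numCorrect = 0;
        for (int rank = 0; rank < k; rank++) {
            if (correct[idx[rank]]) numCorrect++;
        }
        return (double) numCorrect / k;
    }

    public static void main(String[] args) {
        double[] conf = {0.95, 0.9, 0.6, 0.5};
        boolean[] correct = {true, true, false, true};
        // At 50% coverage only the two most confident predictions count.
        System.out.println(accuracyAtCoverage(conf, correct, 0.5));
        // At full coverage this reduces to plain accuracy.
        System.out.println(accuracyAtCoverage(conf, correct, 1.0));
    }
}
```

Sweeping `cov` from 0 to 1 and collecting these values yields an accuracy-coverage curve, which is what the binned `getAccuracyCoverageValues()` summarizes.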
Copyright © 2019 JULIE Lab, Germany. All rights reserved.