A Heaviside Function Approximation for Neural Network Binary Classification

Nathan Tsoi, Yofti Milkessa, Marynel Vázquez
Yale University, New Haven, CT 06511
[email protected]

Abstract

Neural network binary classifiers are often evaluated on metrics like accuracy and F1-Score, which are based on confusion matrix values (True Positives, False Positives, False Negatives, and True Negatives). However, these classifiers are commonly trained with a different loss, e.g. log loss. While it is preferable to perform training on the same loss as the evaluation metric, this is difficult in the case of confusion matrix based metrics because set membership is a step function without a derivative useful for backpropagation. To address this challenge, we propose an approximation of the step function that adheres to the properties necessary for effective training of binary networks using confusion matrix based metrics. This approach allows for end-to-end training of binary deep neural classifiers via batch gradient descent. We demonstrate the flexibility of this approach in several applications with varying levels of class imbalance. We also demonstrate how the approximation allows balancing between precision and recall in the appropriate ratio for the task at hand.

Figure 1: Heaviside function H (left) and the (Kyurkchiev and Markov 2015) approximation (right) at τ = 0.7.

Figure 2: Proposed Heaviside function approximation H (left) and a sigmoid fit to our proposed Heaviside approximation (right) at τ = 0.7.

1 Introduction

In the process of creating a neural network binary classifier, the network is often trained via the binary cross-entropy (BCE) objective function. The network's output is a probability value p ∈ [0, 1] that must be translated into a binary value {0, 1} indicating set membership to the positive class. The Heaviside step function H is commonly used to determine set membership based on a heuristic threshold τ, where p ≥ τ are considered positive classification outcomes.

It is a common assumption that optimizing a desired evaluation metric is preferable to optimizing a different loss (Eban et al. 2017; Rezatofighi et al. 2019; Herschtal and Raskutti 2004; Song et al. 2016). However, the Heaviside function, commonly used to compute confusion matrix set membership in terms of true positives, false positives, false negatives, and true negatives, has a gradient with properties not conducive to optimization via gradient descent. The Heaviside function's gradient is not defined at the threshold τ and is zero everywhere else (see Fig. 1 (left) for an example with τ = 0.7). Therefore the gradient of the Heaviside function cannot be used to effectively backpropagate errors and optimize a neural network according to metrics composed of confusion matrix set values, such as F1-Score. To address this challenge, we propose a novel approximation of the Heaviside function that adheres to important properties of the original Heaviside function while allowing for optimization via gradient descent. Figure 2 (left) depicts our proposed approximation.
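The zero-gradient issue described above can be seen directly in an automatic differentiation framework. The following is a minimal sketch, assuming PyTorch (the paper does not prescribe a particular framework): thresholding with the hard Heaviside step function produces a tensor that is disconnected from the computation graph, so no error signal can reach the network's parameters.

import torch

# Predicted probabilities from some network, with gradients enabled.
p = torch.tensor([0.35, 0.72, 0.90], requires_grad=True)
tau = 0.7

# Hard Heaviside thresholding: H(p, tau) = 1 if p >= tau else 0.
hard = (p >= tau).float()

# The comparison is not differentiable, so the result carries no graph:
print(hard)          # tensor([0., 1., 1.])
print(hard.grad_fn)  # None -- no gradient can flow back to p

# Any metric built from `hard` (e.g. a true-positive count) therefore
# provides no useful gradient with respect to the network's output.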
The proposed approximation is flexible in that it can be used to directly optimize any metric based on the confusion matrix sets without re-formulating the loss. One may simply substitute the step function with the proposed Heaviside approximation and directly optimize the metric. This flexibility is important because the optimal metric for binary classifiers often changes with the level of class imbalance of the data. For example, while accuracy is popular for evaluating classifiers, accuracy will over-estimate performance (He and Garcia 2009) in datasets with high levels of class imbalance. In these cases, better estimators of performance include the harmonic mean of precision and recall (Fβ-Score).

Though metrics based on confusion matrix set values are considered not decomposable, we find that when trained with an adequately large batch size our method performs as well as or better than the commonly used BCE objective function and other methods of directly optimizing the metric. Though this requires use of a large-enough batch size during training, we find these batch sizes to be well within the limits of modern deep learning hardware when training on a variety of publicly available datasets. These datasets have varying levels of class balance, reflecting the fact that class imbalance is common in real-world data (Reed 2001).

Our approach is scalable, with runtime equivalent to the metric's computational cost. Even for metrics that require computation over multiple thresholds, such as area under the receiver operating characteristic (AUROC) curve, we suggest a constant-time algorithm which ensures training runtime scales linearly with the number of samples in the dataset. Moreover, the proposed method places no limit on the maximum size of the batch used for training, as is the case in the adversarial approach proposed by (Fathony and Kolter 2020).

In summary, our main contribution is a novel approximation of the Heaviside function to facilitate batch gradient descent when training classifiers that directly optimize evaluation metrics based on confusion-matrix sets. We explore the practical effects of batch size on non-decomposable metrics in light of our approach. Also, we show the flexibility of the approach by applying our direct optimization method to several confusion-matrix based metrics on various datasets.

2 Preliminaries

Once a binary classification network is trained, it is common to choose a threshold value τ to transform the output probability to a binary value. The threshold is often chosen by searching for an optimal value considering the cardinality of the sets in the confusion matrix (|TP|, |FP|, |FN|, |TN|). These sets are computed using the Heaviside function H:

H(p, \tau) = \begin{cases} 1 & p \geq \tau \\ 0 & p < \tau \end{cases} \qquad (1)

There are several important properties of the Heaviside function that must be considered when evaluating approximations. First, the complement of H, 1 − H, is the reflection over the line y = 0.5. Second, at the threshold p = τ, the sample could be a member of either class with equal probability. In practice, to avoid ambiguity, either the positive or negative class is chosen; we chose the positive class in this work. Figure 1 (left) shows an example Heaviside function.

The confusion matrix set membership can then be computed for a prediction p and ground truth label y via the set membership functions:

tp(p, y, \tau) = \begin{cases} H(p, \tau) & y = 1 \\ 0 & \text{else} \end{cases} \qquad fp(p, y, \tau) = \begin{cases} H(p, \tau) & y = 0 \\ 0 & \text{else} \end{cases}

fn(p, y, \tau) = \begin{cases} 1 - H(p, \tau) & y = 1 \\ 0 & \text{else} \end{cases} \qquad tn(p, y, \tau) = \begin{cases} 1 - H(p, \tau) & y = 0 \\ 0 & \text{else} \end{cases}

Consider a set of predictions p ∈ [0, 1], ground truth labels y ∈ {0, 1}, and threshold value τ ∈ (0, 1), e.g. {(p_1, y_1, τ), (p_2, y_2, τ), ..., (p_n, y_n, τ)}. The cardinality of each confusion matrix set is computed as the sum over the set membership functions:

|TP| = \sum_{i=1}^{n} tp(p_i, y_i, \tau) \qquad |FP| = \sum_{i=1}^{n} fp(p_i, y_i, \tau)

|FN| = \sum_{i=1}^{n} fn(p_i, y_i, \tau) \qquad |TN| = \sum_{i=1}^{n} tn(p_i, y_i, \tau)
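To make the cardinality computation concrete, below is a brief sketch of how the four counts could be accumulated over a batch. It assumes PyTorch, and the helper names (heaviside, confusion_counts) are ours, not from the paper; the step function is passed as an argument so that H, or a differentiable approximation of it, can be swapped in without changing the metric code.

import torch

def heaviside(p: torch.Tensor, tau: float) -> torch.Tensor:
    # Hard step function H(p, tau); ties at p == tau go to the positive class.
    return (p >= tau).float()

def confusion_counts(p: torch.Tensor, y: torch.Tensor, tau: float, step=heaviside):
    """Return (|TP|, |FP|, |FN|, |TN|) as sums of per-sample membership values.

    p: predicted probabilities in [0, 1]; y: ground-truth labels in {0, 1}.
    `step` may be the hard Heaviside or any soft approximation of it.
    """
    h = step(p, tau)
    tp = (h * y).sum()
    fp = (h * (1 - y)).sum()
    fn = ((1 - h) * y).sum()
    tn = ((1 - h) * (1 - y)).sum()
    return tp, fp, fn, tn

# Example usage with the hard step function:
p = torch.tensor([0.2, 0.8, 0.9, 0.4])
y = torch.tensor([0.0, 1.0, 0.0, 1.0])
print(confusion_counts(p, y, tau=0.7))  # (1., 1., 1., 1.)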
Common classification metrics based on the confusion matrix then build on the above set cardinalities. Two examples of these metrics are precision and recall. Precision, |TP| / (|TP| + |FP|), is the proportion of positive results that are true positive results. Recall, |TP| / (|TP| + |FN|), indicates the proportion of actual positive examples that are correctly identified.

Precision and recall represent a trade-off between classifier objectives; it is generally undesirable to optimize or evaluate for one while ignoring the other (He and Garcia 2009). This makes summary evaluation metrics that balance between these trade-offs popular. Commonly used objectives include accuracy and F1-Score:

\text{accuracy} = \frac{|TP| + |TN|}{|TP| + |TN| + |FP| + |FN|} \qquad \text{F1-Score} = \frac{2}{\text{precision}^{-1} + \text{recall}^{-1}}

Accuracy is the rate of correct predictions to all predictions. F1-Score is a specific instance of

F_\beta\text{-Score} = (1 + \beta^2) \cdot \frac{\text{precision} \cdot \text{recall}}{\beta^2 \cdot \text{precision} + \text{recall}},

which is the weighted harmonic mean of precision and recall. The parameter β is chosen such that recall is considered β times more important than precision.

When a binary classification neural network is trained in parts, independently of τ and the chosen metric, the network is unaware of the end goal and therefore is unlikely to optimize for the desired evaluation metric. This limitation often becomes evident when the goal is to balance between various confusion matrix set values, e.g., via Fβ-Score.

Prior works have approximated the Heaviside function. For instance, (Kyurkchiev and Markov 2015) proposed using a sigmoid function, as shown in Figure 1 (right). However, this prior work focused on minimizing the error between the approximation and H, not on a Heaviside approximation useful for back-propagation. Such sigmoid-based approximations do not adhere to the properties necessary for gradient descent. For example, the Heaviside approximation proposed in (Kyurkchiev and Markov 2015), s_0(k, p) = (1 + e^{-kp})^{-1}, can be reparameterized to account for τ: s_\tau(k, p) = (1 + e^{-k(p - \tau)})^{-1}. While this approximation approaches H in the limit when τ = 0.5, with an appropriate value of k, it does not necessarily approach H when τ nears the limits of its range, for a given value of k. Moreover, with increasing values of k, while the error between s_τ and H decreases, the range over which the derivative s'_τ vanishes increases. A non-zero gradient is important during training so the network can learn from similar samples belonging to different classes.
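This behavior can be checked numerically. The sketch below (again assuming PyTorch; the function name s_tau, the probe points, and the values of k are our illustrative choices) implements the reparameterized sigmoid and shows that raising k drives its derivative toward zero away from τ, which is exactly the vanishing-gradient behavior that motivates a different approximation.

import torch

def s_tau(p: torch.Tensor, tau: float, k: float) -> torch.Tensor:
    # Reparameterized sigmoid approximation of the Heaviside function:
    # s_tau(k, p) = 1 / (1 + exp(-k * (p - tau)))
    return torch.sigmoid(k * (p - tau))

tau = 0.7
p = torch.tensor([0.1, 0.5, 0.69, 0.71, 0.9], requires_grad=True)

for k in (10.0, 100.0, 1000.0):
    out = s_tau(p, tau, k).sum()
    (grad,) = torch.autograd.grad(out, p)
    # As k grows, the approximation error w.r.t. H shrinks, but the gradient
    # becomes numerically negligible everywhere except very close to p = tau.
    print(k, grad)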

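Putting the preliminaries together, the overall idea of training directly on a confusion-matrix metric, i.e., replacing H with a differentiable surrogate inside the metric and minimizing the negated metric, can be sketched as follows. This is not the authors' method: it uses the simple sigmoid surrogate from above purely as a stand-in, and the helper soft_f1_loss and its defaults are our own illustrative choices; the paper's Heaviside approximation, designed to satisfy the properties discussed here, is defined later in the full text and would be substituted in its place.

import torch

def soft_f1_loss(p: torch.Tensor, y: torch.Tensor, tau: float = 0.7,
                 k: float = 20.0, eps: float = 1e-7) -> torch.Tensor:
    """Differentiable stand-in for 1 - F1-Score over a batch.

    Uses a sigmoid surrogate for H so that gradients flow; a better-behaved
    Heaviside approximation would be substituted here.
    """
    h = torch.sigmoid(k * (p - tau))   # soft H(p, tau)
    tp = (h * y).sum()
    fp = (h * (1 - y)).sum()
    fn = ((1 - h) * y).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return 1.0 - f1                    # minimizing this maximizes F1

# Example: treat network outputs as probabilities and backpropagate the loss.
p = torch.tensor([0.2, 0.8, 0.9, 0.4], requires_grad=True)
y = torch.tensor([0.0, 1.0, 0.0, 1.0])
loss = soft_f1_loss(p, y)
loss.backward()
print(loss.item(), p.grad)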