If a study compares the combination of algorithm plus data with current "best practice," the results of the study will directly answer the question of whether the algorithm is "good enough." Otherwise, the question of whether the algorithm's sensitivity, timeliness, and false alarm rate are good enough must be addressed indirectly, by considering whether the algorithm's users would act on the anomalies it identifies. When discussing the significance of the results from an algorithm evaluation, one factor to keep in mind is that diagnostic precision (discussed in Chapter 3) is a strong determinant of an algorithm's value. If a case detection algorithm, for example, informs a clinician that a patient has a probability of "respiratory syndrome" of 0.2, in most cases this information will not influence the management of the patient. In contrast, a probability of inhalational anthrax of 0.2 would be very influential if the clinician were not already considering this diagnosis. Similarly, if an outbreak detection algorithm informed an epidemiologist that the probability of an ongoing "respiratory" outbreak was 0.2, the assessment would be less influential than if the algorithm were to suggest an outbreak due to inhalational anthrax with the same level of confidence.
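The reasoning above can be framed as a simple expected-loss comparison: the same probability warrants action only when the harm of missing the condition is large enough. The following is a minimal sketch of that idea; the `act` function and all cost values are hypothetical illustrations, not figures from the text.

```python
# Minimal expected-loss sketch of why the same probability (0.2) is
# actionable for a severe diagnosis but not for a vague one.
# The harm and cost values below are hypothetical, chosen only to
# illustrate the asymmetry discussed in the text.

def act(p_disease: float, harm_if_missed: float, cost_of_action: float) -> bool:
    """Act when the expected harm of inaction exceeds the cost of acting."""
    return p_disease * harm_if_missed > cost_of_action

# Vague "respiratory syndrome": modest harm if missed, so a probability
# of 0.2 does not change patient management.
print(act(0.2, harm_if_missed=10, cost_of_action=5))    # False

# Inhalational anthrax: catastrophic if missed, so the same probability
# of 0.2 is highly influential.
print(act(0.2, harm_if_missed=1000, cost_of_action=5))  # True
```

The asymmetry does not come from the probability itself but from the severity term it multiplies, which is why diagnostically precise outputs are more valuable than vague syndromic ones at the same confidence level.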