The timeliness, sensitivity, and specificity of an algorithm depend not only on its ability to extract information from surveillance data but also on the information content of the surveillance data that are available to the algorithm. When studying an algorithm, an evaluator—like any good scientist—will tend to hold the data available to the algorithm "constant" to avoid introducing a confounding variable into the experiment. However, the objective of algorithm development is to improve detection relative to current "best practice," and improvement in the quantity and quality of surveillance data will likely improve overall performance more than any algorithm improvement. Thus, holding data constant is not always appropriate in evaluation.
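To make the three metrics concrete, the following sketch scores a simple threshold detector on a synthetic series of daily counts. Everything here is illustrative and assumed, not drawn from the text: the function name, the threshold rule, and the definitions used (sensitivity as the fraction of outbreak days flagged, specificity as the fraction of quiet days left unflagged, timeliness as days from outbreak onset to the first correct alert).

```python
# Hypothetical sketch: scoring a threshold-based detector on daily
# syndromic counts. Names, data, and the detection rule are assumptions
# for illustration only.

def evaluate_detector(counts, outbreak_days, threshold):
    """Flag any day whose count exceeds `threshold`, then score the alerts."""
    alerts = [day for day, c in enumerate(counts) if c > threshold]
    outbreak = set(outbreak_days)
    true_alerts = [d for d in alerts if d in outbreak]
    false_alerts = [d for d in alerts if d not in outbreak]
    quiet_days = len(counts) - len(outbreak)

    # Sensitivity: fraction of outbreak days that triggered an alert.
    sensitivity = len(true_alerts) / len(outbreak)
    # Specificity: fraction of non-outbreak days that stayed quiet.
    specificity = 1 - len(false_alerts) / quiet_days
    # Timeliness: days from outbreak onset to first correct alert (None if missed).
    timeliness = (min(true_alerts) - min(outbreak)) if true_alerts else None
    return sensitivity, specificity, timeliness

# Ten days of counts with an outbreak injected on days 5-7.
counts = [3, 4, 2, 3, 4, 9, 12, 11, 4, 3]
sens, spec, lag = evaluate_detector(counts, outbreak_days=[5, 6, 7], threshold=10)
```

With the threshold set at 10, the detector misses the first outbreak day (count 9), so sensitivity is 2/3 and the alert arrives one day late; a richer or less noisy data stream could improve these numbers without any change to the algorithm, which is the point the text makes about holding data constant.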
Although we discuss algorithm evaluation and data evaluation separately in this book (and it is worthwhile scientifically to study their contributions separately), there are important scientific questions for which the appropriate method involves treating the combination of data and algorithm as the object of a study. Perhaps the best way to think of this is that, early in the course of algorithm development, it is appropriate to focus on the algorithm alone; but ultimately, to confirm that a new algorithmic approach is superior to current "best practice," the object of study becomes the combination of surveillance data and algorithm.