Most performance analysis today uses either microbenchmarks or
standard macrobenchmarks (e.g., SPEC, LADDIS, the Andrew benchmark).
However, the results of such benchmarks reveal little about how
well a particular system will handle a particular application.
Such results are, at best, useless and, at worst,
misleading. In this paper, we argue for an application-directed
approach to benchmarking, using performance metrics that reflect
the expected behavior of a particular application across a range
of hardware or software platforms. We present three
approaches to application-specific measurement: one using vectors
that characterize both the underlying system and an application,
one using trace-driven techniques, and a hybrid of the two. We argue
that such techniques should become the new standard.