Endpoint to track metrics like performance benchmarks


We use Codecov on our pull requests to monitor changes in coverage. It’s great!

We also use benchmarking tools to monitor performance. We’d love to have the same features that Codecov gives us for coverage (diffs/changes, history, etc.) for benchmark metrics.


In the simplest version of this feature, it would support a single arbitrary numeric metric, and Codecov would simply report and track the change in that metric (did the number go up or down?). This could be adapted to a number of different feature requests that have been floating around, e.g. tracking the number of tests run.
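To make the simplest version concrete, here is a rough sketch of the comparison logic I have in mind. The function name and report shape are just illustrative, not anything Codecov provides:

```python
def metric_diff(base: float, head: float) -> dict:
    """Compare one arbitrary numeric metric between the base and head
    commits and report which direction it moved (hypothetical sketch)."""
    change = head - base
    pct = (change / base * 100) if base else float("inf")
    return {
        "base": base,
        "head": head,
        "change": change,
        "pct_change": pct,
        "direction": "up" if change > 0 else "down" if change < 0 else "unchanged",
    }
```

For example, a benchmark that drops from 120.0 ms on the base commit to 90.0 ms on the head commit would be reported as a -25% change, direction "down" — exactly the kind of summary Codecov already posts for coverage.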

A more fully baked version of this would take the output of common code benchmarking libraries (pytest-benchmark for Python comes to mind) and automatically parse that output to report changes for individual tests/benchmarks.
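As a sketch of what that parsing could look like: pytest-benchmark can emit a JSON report via `--benchmark-json=report.json`, with a top-level `benchmarks` list whose entries carry a `name` and a `stats` dict (`mean`, `min`, `max`, ...). Assuming that format, a per-benchmark diff between two runs might be:

```python
import json

def load_benchmark_means(path: str) -> dict:
    """Extract {benchmark name: mean runtime} from a pytest-benchmark
    JSON report (produced with --benchmark-json=report.json)."""
    with open(path) as f:
        report = json.load(f)
    return {b["name"]: b["stats"]["mean"] for b in report["benchmarks"]}

def compare_runs(base_path: str, head_path: str) -> dict:
    """Relative change in mean runtime for each benchmark present in
    both the base and head reports (sketch, not a Codecov feature)."""
    base = load_benchmark_means(base_path)
    head = load_benchmark_means(head_path)
    return {
        name: (head[name] - mean) / mean
        for name, mean in base.items()
        if name in head
    }
```

Codecov could then render that dict the same way it renders per-file coverage changes on a PR.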


Hi @almartin82, I’m not completely sure what the ask here is. I know you mentioned benchmark metrics, but what would an ideal solution look like?