PR to add codecov badge drops coverage

See Add coverage badge by thomasrockhu · Pull Request #544 · ljharb/tape - coverage should not have dropped in a PR that does nothing but update the readme.


Thanks @ljharb, this got lost in my queue. I’ll take a look this week to see why.

@ljharb, three things come to mind right now:

  1. A significant number of commits are stuck in a processing state. We’ve recently made a fix for this.
  2. Some commits are not found. This is typically due to rebasing, but I think you need to add fetch-depth: 2 (or anything greater than 1, or 0) to the actions/checkout step (see the sketch after this list).
  3. There are a lot of uploads here per commit. We recently started enforcing a limit of approximately 100 uploads per commit. I don’t know whether you are going to hit it (if you do, Codecov will respond with an error), but I wanted to make sure you knew about it.
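
To be concrete about item 2, it’s a one-line addition to the checkout step. A minimal sketch of what that might look like - the surrounding steps are only placeholders, the fetch-depth line is the part that matters:

```yaml
steps:
  - uses: actions/checkout@v2
    with:
      # fetch the parent commit as well, so the true SHA (not just the new
      # commit that actions/checkout creates) is reachable for the uploader
      fetch-depth: 2
  - run: npm install && npm test   # placeholder for your existing test steps
```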

Re 1, thanks!

Re 2, I rebase and force-push constantly; I’m not sure why I’d need to alter the way checkouts are done. If the SHA isn’t available, I’d just expect the job to fail.

Re 3, I run 200-400 jobs per run, across 200-300 projects, all with coverage data, so that limit will effectively eliminate my ability to use Codecov at all. Is there any workaround?

@ljharb

  1. Absolutely
  2. That shouldn’t be an issue. The problem is that actions/checkout creates a new commit SHA; we use fetch-depth: 2 to grab the true SHA.
  3. Woof, ok. I don’t have a great solution for you right now. The best I can think of is to aggregate some of your uploads together. Could you describe your CI pipeline a little bit here? Maybe we can figure out a good solution.

I can certainly add fetch-depth: 2, but I haven’t had to do that on hundreds of other repos, so why would I need to do it now?

As for number 3, I’m not sure how to do that. I test every minor version of node from 0.6 onward on most projects, which amounts to around 218 jobs (a number that increases by 1-3 every time node does a release).

What’s the point of limiting it to 100 uploads? If I condense 218 jobs down to a handful of uploads using something like istanbul-merge from npm, you still have to process roughly the same amount of coverage data - what are you optimizing for with this arbitrary, unannounced limit?

@ljharb re: fetch-depth, we are making some changes so that this won’t be necessary anymore. Thanks for your patience.

As for our limits, I’ll be perfectly honest: I don’t have a good answer for you. It’s something we are working on long-term, but I don’t have a good solution for you right now. We’ve been able to increase that limit over the past week, but we are wary of processing more than 150 uploads.

What I do know is that it’s not necessarily an issue of the total amount of coverage data.


I appreciate the transparency.

I’ve been sending hundreds of uploads per run since November, when I started converting my projects en masse from Travis to Actions. When did the limit come into being, and when did the problems it fixes start?

@ljharb, the limit has actually existed for a long time. Unfortunately, we didn’t realize that it wasn’t being surfaced to clients to let them know they had hit it.

This is clearly not the way we would have wanted to communicate the change, but now that it has been made, we think it would be worse for users not to know about the limit and be left uninformed about coverage changes. We really are putting a lot of effort into finding suitable workarounds for users who are uploading more than 150 reports per commit.

Fair enough.

Can you elaborate on why this limit is important? Combining raw coverage reports into one is not a particularly intensive operation - istanbul-merge on npm does it pretty trivially. If I understood the constraints, I could be more help in finding a solution.
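
For example, a single extra step can do the merge. A rough sketch, assuming istanbul-merge’s --out flag and a coverage-parts directory holding each job’s output (both names are placeholders, not my actual setup):

```yaml
- name: Merge per-job coverage into one report
  run: |
    mkdir -p coverage
    npx istanbul-merge --out coverage/coverage-final.json coverage-parts/*/coverage-final.json
```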

@ljharb looks like this one slipped through. The limit is important for us because we were experiencing poor performance and database behavior past 150 uploads per commit. We haven’t had the resources to investigate further than that, but we decided to cap it at that level to prevent breakage for other users.

Another alternative that would make this a lot easier: instead of just uploading from every job in the matrix, I could use a Codecov action to cache the coverage info from each job, restore the cache in a final “summary” job (which I already have anyway), merge it all, and then do a single upload. Perhaps the reduced load for all customers would make it worth investing in building that action and documenting that workflow?
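
Roughly the shape I have in mind, sketched here with upload/download-artifact standing in for the cache step and istanbul-merge for the merge; the matrix, script names, and paths are all placeholders rather than my real setup:

```yaml
jobs:
  tests:
    strategy:
      matrix:
        node: [16, 14, 12, 10]               # placeholder; the real matrix is every minor since 0.6
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node }}
      - run: npm install && npm run coverage  # assumes a script that writes coverage/coverage-final.json
      - uses: actions/upload-artifact@v2      # the "cache" half: stash each job's raw coverage
        with:
          name: coverage-${{ matrix.node }}
          path: coverage/coverage-final.json

  summary:
    needs: tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/download-artifact@v2    # the "restore" half: pulls every coverage-* artifact
        with:
          path: coverage-parts
      - name: Merge all per-job reports
        run: |
          mkdir -p coverage
          npx istanbul-merge --out coverage/coverage-final.json coverage-parts/*/coverage-final.json
      - uses: codecov/codecov-action@v1       # one upload per commit instead of one per job
        with:
          file: ./coverage/coverage-final.json
```

That way the matrix still runs every job, but Codecov only ever sees a single merged report per commit.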

@ljharb that’s a really great idea. I’ll look into solving this.
