This was introduced as part of the hotfix PR to avoid hitting API rate
limits - but the condition was wrong. It was supposed to trigger in all
PR contexts, not only for the Test workflow.
(cherry picked from commit a146035a2b)
Unfortunately it still happens frequently that, after enabling
auto-merge, GitHub is stuck even though all checks have passed, and
doesn't merge the PR. Any contributor can trigger GitHub again with an
approval of the PR - this will then immediately queue the PR for merge.
Adding a hint to the posted comment should help users through this
without my intervention.
(cherry picked from commit d086c6c6b3)
This led to reaching secondary API limits in a treewide recently, so we'd
better limit it to where we actually need it.
(cherry picked from commit cd7f83638e)
We still use a few too many API requests by checking team members for
collaborator status - we can improve on that in the future.
(cherry picked from commit 17199e5ff6)
When hitting a treewide, we would previously find the username for each
user and then check all of them for collaborator status - only to then
realize that this results in more than 15 reviewers and exit.
We can put a simple stop-gap in as a safeguard, even before
de-duplicating the combined lists of maintainers and owners. We could
still hit huge numbers of code owners, but in practice we don't have
nearly as many as maintainers, so this will be sufficient for now.
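A hypothetical sketch of the stop-gap - `maintainers` and `owners` are
the raw, not yet de-duplicated lists, and the names are illustrative:

```js
const REVIEWER_LIMIT = 15;

function eligibleReviewers(maintainers, owners) {
  // Bail out before resolving usernames or checking collaborator
  // status - those are the expensive API calls we want to avoid.
  if (maintainers.length + owners.length > REVIEWER_LIMIT) return [];
  // De-duplicate and continue with the expensive checks only for
  // reasonably sized PRs.
  return [...new Set([...maintainers, ...owners])];
}
```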
(cherry picked from commit 9efe926863)
We use the files endpoint to get a list of all *names* of files touched
in the PR - but this endpoint also downloads the files / their diffs,
which is pointless and takes quite some time for huge treewides.
We're just putting in a stop-gap for now, so that we're not burning more
than one API request on this and don't spend so much time on it either.
A limit of 99 files will be more than enough for quite some time - we
will only need to raise this once we can represent package sets in
by-name properly and have "package set maintainers", who are not
committers.
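A minimal sketch of such a stop-gap in github-script, assuming the
standard `github`/`context` globals; the 99-file cutoff mirrors a
single `per_page: 100` request:

```js
// One request, never paginated - we only need the file names anyway.
const { data: files } = await github.rest.pulls.listFiles({
  owner: context.repo.owner,
  repo: context.repo.repo,
  pull_number: context.payload.pull_request.number,
  per_page: 100,
});

// More than 99 files: treat as a treewide and skip per-file handling.
if (files.length > 99) return [];
const fileNames = files.map((file) => file.filename);
```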
(cherry picked from commit 51acc56dcb)
The recent change to use the result of requesting reviewers for setting
the `needs: reviewer` label caused a regression: It would not set the
label for PRs where no reviewers were requested, because *too many were
eligible*. Still - these PRs don't have reviewers, so they need to get
attention some other way: via the label.
(cherry picked from commit 4658d0d5a3)
This was introduced shortly before the merge of the reviewers.js file,
but not actually tested - I thought it would not be easy to find a PR
triggering this warning. However, the scheduled run told me otherwise:
the staging-next PR is the perfect candidate.
(cherry picked from commit d76ffa4136)
We only recently introduced the owners.txt file to the comparison
artifact, so once the bot runs on a schedule it will hit older artifacts
very quickly - and then can't find the owners file.
We can fall back to an empty owners list in this case, because an older
artifact also means an earlier workflow run, which will already have
pinged the owners.
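A sketch of the fallback, assuming the artifact is unpacked locally and
read via node's `fs`; the path is illustrative:

```js
const fs = require("node:fs");

let owners = [];
try {
  owners = fs.readFileSync("comparison/owners.txt", "utf8")
    .split("\n")
    .filter(Boolean);
} catch (error) {
  if (error.code !== "ENOENT") throw error;
  // Older artifact: the workflow run that produced it already pinged
  // the owners, so an empty list is safe.
}
```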
(cherry picked from commit c4548e58fb)
This should fix the bug where the "needs: reviewer" label was set too
early, just to be removed immediately, because reviewers were then
requested.
(cherry picked from commit e68b0aef13)
This migrates the bash code to request reviewers to github-script. This
will allow multiple nice improvements later on, but at this stage it's
mostly a reduction in code and complexity.
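For illustration, the github-script side boils down to a single API
call - a hedged sketch, with `prNumber`, `users` and `teams` standing in
for whatever the script computes:

```js
await github.rest.pulls.requestReviewers({
  owner: context.repo.owner,
  repo: context.repo.repo,
  pull_number: prNumber, // assumed to be resolved earlier
  reviewers: users,      // individual GitHub logins
  team_reviewers: teams, // team slugs, if any
});
```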
(cherry picked from commit a23d0ab24c)
We technically counted bot approvals and approvals by deleted users for
the approval labels as well. The former don't exist yet, but if they
did, I don't think we'd count them. The latter should arguably *not* be
counted, because we can't tell anymore *who* approved, so we can't put
any weight on the approval.
This simplifies the logic, too.
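A sketch of the simplified filter, assuming `reviews` comes from
`github.rest.pulls.listReviews`; deleted users typically show up with a
`null` user and bots with `user.type === "Bot"`:

```js
const approvals = reviews.filter(
  (review) => review.state === "APPROVED" && review.user?.type === "User",
);
```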
(cherry picked from commit df6a9a739d)
This partially reverts commit 1197fe48da.
GitHub just doesn't schedule these narrow intervals. 10 minutes is
alright in practice.
(cherry picked from commit 74d6ba3ab4)
When a user tries to merge a PR but is not allowed to, it is helpful to
explicitly list the users who *are* allowed. This helps explain *why*
the merge-eligible label was set.
I objected to this proposal before, because it would incur too many API
requests. But after restructuring the checklist, this is not actually
true anymore - we can now sensibly run this only when a comment is
posted and not whenever we check a PR for eligibility.
(cherry picked from commit 1e6124a504)
By only ignoring already-handled comments when running non-dry, it's
much easier to look at existing PRs on which the merge bot has already
commented and iterate on them locally.
It's dry mode anyway, so it won't hurt to get a few more merge comments
in the console output.
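Roughly - with `dry` and `isAlreadyHandled` as assumed names:

```js
// In dry mode, process even comments the bot has already handled.
const pending = comments.filter(
  (comment) => dry || !isAlreadyHandled(comment),
);
```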
(cherry picked from commit 2d6602908b)
We previously used auto-merge first and then enqueued explicitly on the
assumption that auto-merge would fail if the PR was actually in
mergeable state already. This turned out to be false.
Instead, we currently face the problem of auto-merge sometimes getting
stuck. This seems to happen when, at the time of enabling auto-merge,
the required status checks already passed and the PR would be ready to
go - but sometimes GitHub doesn't do it. This *can* be unblocked by
approving the PR again, which seems to run the internal "let's check
whether we can merge this" procedures on the GitHub side again.
However, we can probably also solve this by just explicitly trying to
enqueue the PR first - and only if that fails, fall back to auto-merge.
I previously argued against that, based on a potential race condition,
in which a PR could become ready to merge between these two requests -
at which point the auto-merge operation would fail, if the original
assumption was true. But since we don't observe this, we might as well
switch.
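A hedged sketch of that order, using the two GraphQL mutations involved
(`enqueuePullRequest` and `enablePullRequestAutoMerge`); `prId` is
assumed to be the PR's GraphQL node ID and the error handling is
simplified:

```js
const mutation = (name) => `mutation($prId: ID!) {
  ${name}(input: { pullRequestId: $prId }) { clientMutationId }
}`;

try {
  // Works when the PR is ready to merge right now.
  await github.graphql(mutation("enqueuePullRequest"), { prId });
} catch (error) {
  // Not ready yet - let auto-merge pick it up once checks pass.
  await github.graphql(mutation("enablePullRequestAutoMerge"), { prId });
}
```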
(cherry picked from commit 747d9e2d34)
This makes reactions to merge comments and all the labeling a bit
quicker. Lower the number of backlog items to process per run
accordingly, so that we don't really need more API requests for it.
(cherry picked from commit 1197fe48da)
We used to employ the worst strategy for parallelism possible: the rate
limiter capped us at one concurrent request per second, while 100+ items
were handled in parallel. This led to every item taking the full
duration of the job to proceed, making the data fetched at the beginning
of the job stale at the end. That caused smaller hiccups when labeling,
or the merge-bot posting comments after the PR had already been closed.
GitHub allows 100 concurrent requests, but considers it a best practice
to serialize them. Since serializing all of them causes problems for us,
we should try to go higher.
Since other jobs are running in parallel, we use a conservative value of
20 concurrent requests here. We also introduce the same number of
workers going through the list of items, to make sure that each item is
handled in the shortest time possible from start to finish, before
proceeding to the next. This gives us roughly 2.5 seconds per individual
item - but speeds up the overall execution of the scheduled job to 20-30
seconds from 3-4 minutes before.
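Conceptually - a minimal worker-pool sketch with illustrative names:

```js
const WORKERS = 20;
const queue = [...items];

// Each worker pulls the next item only after fully finishing the
// current one, so no item sits half-processed for the whole job.
await Promise.all(
  Array.from({ length: WORKERS }, async () => {
    for (let item = queue.shift(); item !== undefined; item = queue.shift()) {
      await handle(item);
    }
  }),
);
```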
(cherry picked from commit 810b9ba51d)
When we recently refactored the code to use the maintainer map for
related labels, we made a mistake: When a PR has no packages with
maintainers returned from eval, the label would internally be set to `0`
instead of `false`.
The code would then go on to compare the before and after labels with
strict equality - and assume a difference, because `0 !== false`. Thus,
it seemed like new labels needed to be set, so the PUT request was
actually sent. Of course, the labels were actually the same - when
filtering the labels to be set, the `0` would also be treated as falsy,
so the label would not be set. This would result in no visible change in
the PR, but internally GitHub would replace the `updated_at` timestamp
for that PR - after all, we replaced all labels.
Repeatedly updating *all* PRs we're looking at quickly causes problems,
because we are going to look at the same PRs *again* in the next cycle -
essentially creating a feedback loop. The bot became slower and slower
over time, because it had to process more and more PRs each run.
Simply casting this to a proper Boolean should get us out of the mess
soon.
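An illustrative reconstruction of the bug, not the actual workflow code:

```js
// With `&&`, an empty package list yields the number 0, not `false`:
const after = packages.length && allMaintained; // 0 for empty PRs
const before = false;

before === after; // false, because 0 !== false -> labels get re-sent

// Fix: normalize to a real Boolean before comparing.
const fixed = Boolean(packages.length && allMaintained);
```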
(cherry picked from commit c768b4243e)
I didn't like r-ryantm "authoring", so I changed that to "created"
earlier. Arguably, using "opened" is more consistent with what is
actually checked and can consistently be used for both.
(cherry picked from commit 5407abeb7d)
The by-name check would previously be green when the
`pkgs/by-name/README.md` file was changed. This still wouldn't have let
a maintainer merge the PR, because there is no maintainer for that file,
but the feedback was not 100% accurate.
(cherry picked from commit e0c0b2c54c)
This allows committers to approve PRs with additional, optional nits
that the author-maintainer can either address or merge immediately
without these changes.
It also allows committers to approve a PR for merge, while still waiting
for other maintainers to give their feedback - they can then merge the
PR directly instead of passing it back to the committer.
(cherry picked from commit ffdc8205e5)
While it was already the case that only merge comments *after* the
latest push were acted on, the logic wasn't easy to understand. This
change should make it more obvious, especially in combination with the
next commit, that all steps (comments, approvals, merge) must happen on
the same SHA - the current head SHA of the PR.
(cherry picked from commit 9a637aa7a4)
All other conditions equal, there is no reason to prevent maintainers
from backporting changes to their packages. Maintainers are probably in
the *best* position to tell whether a certain change is backportable or
not - because they know the package well.
(cherry picked from commit 91c4d9236b)
This supports AND on the first and OR on the second level, which is
needed for some follow-up work like backports, approval-based merges or
trusted maintainers.
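In other words - a sketch with made-up condition names:

```js
// Outer level: AND. Inner level: OR.
const conditions = [
  [prIsOpen],                       // must hold
  [authorIsMaintainer, isBackport], // at least one must hold
];

const eligible = conditions.every((group) => group.some(Boolean));
```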
(cherry picked from commit 84d6678f3b)
We have not observed this merge method being used in practice, yet. Not
in the new bot, not in the old bot. It seems like auto-merge works for
all cases.
(cherry picked from commit 6848f93842)
No special casing anymore, all conditions are in the same place. This
also has the benefit of hiding the "has maintainers eligible for merge"
condition from comments, because it is only really relevant for
labeling.
(cherry picked from commit 6a3c294f6f)
This removes the need to `await` committers further down in the function
and allows re-using the cache for other teams later.
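A sketch of such a cache - caching the promise per team slug so that
concurrent callers share one request; names are assumptions:

```js
const teamCache = new Map();

function teamMembers(teamSlug) {
  if (!teamCache.has(teamSlug)) {
    teamCache.set(
      teamSlug,
      github.paginate(github.rest.teams.listMembersInOrg, {
        org: context.repo.owner,
        team_slug: teamSlug,
      }),
    );
  }
  return teamCache.get(teamSlug); // a promise; await where needed
}
```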
(cherry picked from commit c7766c637f)
We have no chance of getting a token that can request the team endpoints
in the pull_request context. This makes sense, because non-members of
the org are also not allowed to view the teams' memberships.
Thus, just fake an empty team - that's fine for the Test workflow.
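Sketched - swallow the permission error and pretend the team is empty:

```js
let members = [];
try {
  members = await github.paginate(github.rest.teams.listMembersInOrg, {
    org: context.repo.owner,
    team_slug: teamSlug, // assumed variable
  });
} catch (error) {
  // 403/404: the pull_request token can't read team membership.
  if (error.status !== 403 && error.status !== 404) throw error;
}
```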
(cherry picked from commit 421974863f)
By not keeping the node_id in the comments resulting from a failed
merge, we make sure these merges will be automatically retried.
(cherry picked from commit 89ace76ff1)
Running the nixpkgs-merge-bot in GitHub Actions instead of a separate
workflow has multiple advantages:
- A much better development workflow, with improved testability.
- The ability to label PRs with a "merge-bot eligible" label from the
same codebase.
- Using more data for merge strategy decisions, for example the number
of rebuilds.
This commit re-implements most of the features from the current
nixpkgs-merge-bot directly in the bot workflow. Instead of reacting to
webhook events, this now runs on the regular 10-minute schedule. Some
merges might be delayed a few minutes, but that should not be a problem
in practice.
To give the user early feedback, there are additional workflows running
when a comment or review is posted. These react with "eyes" to make the
user aware that the comment has been recognized.
The only feature not taken over was the size check for files in the PR.
This kind of check is not only relevant for maintainer merges - if we
want to prevent bigger files from making it into the tree, then we need
a generic CI check, which is out of scope for the merge-bot.
Other than that, everything should be implemented - any omissions are by
accident.
(cherry picked from commit eea09eb9d3)
This workflow / script is already doing more than just labeling: it's
already auto-closing package request issues.
Since we're going to migrate the nixpkgs-merge-bot into this workflow,
we'll rename things to a more generic name.
(cherry picked from commit d78de15627)
To be able to disable the pr.yml workflow on GitHub, we need to give it
a different name. Let's use the long name for consistency with
merge-group.yml. This only affects the GitHub-internal name, not the
visible name in the PR checklist, which is still "PR". This visible name
is also used by nixpkgs-review, so that won't break.
(cherry picked from commit f66a380ea3)
Instead of setting the maintainer-related labels based entirely on Eval
results, this uses the new maintainer map from the target branch. This
allows correctly labeling PRs that were created *before* a contributor
became a maintainer of the respective package.
(cherry picked from commit 6b5e6cbbee)
These are style-only changes that are not enforced via tooling, but are
used mostly consistently in the other github-script files.
(cherry picked from commit 3df31aa255)
Creates a team sync workflow that pushes the current state of teams to a
JSON file, which can then be ingested by `lib.teams` to expose member
lists.
Co-Authored-By: Alexander Bantyev <alexander.bantyev@tweag.io>
(cherry picked from commit c0c6684257)
Some PRs are empty on purpose, for example the yearly notification about
the election for voters. We should not close these just because the
merge commit is empty - we should only act if a change was intended but
the merge commit *becomes* empty.
(cherry picked from commit a705a34a22)
If the change of a PR has already been merged to the target branch
elsewhere, the PR will not be auto-closed by GitHub - and will still
show the same original diff. Still, the temporary merge commit is
actually empty. This causes all kinds of strange CI behavior, from not
showing rebuilds to not pinging maintainers.
We check the merge commit during labeling anyway, to see whether a merge
conflict is present. It's easy to just look at the number of affected
files in this merge commit - and if there are none, we can just
automatically close the PR as no longer relevant.
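A hedged sketch of the check, assuming the merge commit SHA is available
on the PR object:

```js
const { data: mergeCommit } = await github.rest.repos.getCommit({
  owner: context.repo.owner,
  repo: context.repo.repo,
  ref: pullRequest.merge_commit_sha,
});

// No affected files: the change already landed elsewhere.
if (mergeCommit.files.length === 0) {
  await github.rest.pulls.update({
    owner: context.repo.owner,
    repo: context.repo.repo,
    pull_number: pullRequest.number,
    state: "closed",
  });
}
```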
(cherry picked from commit 402b41c125)
This removes the "owners" check from codeowners-validator. With it, all
tokens and permissions can be removed, because these were only needed to
make these requests.
This solves the problem of codeowners-validator not supporting our new
nested team structure for nixpkgs-maintainers. To make the onboarding of
new teams easier, we moved all teams "under" the nixpkgs-maintainers
team. This makes them inherit the right privileges (triage) for Nixpkgs.
However, this inheritance is not recognized by codeowners-validator,
thus it assumes that these teams don't have access to Nixpkgs. This then
fails the owners check immediately.
Removing the owners check also has a few other advantages:
- This check depends on external state: If a user is renamed or a team
removed, the check will fail. This makes it a bad check for required
status checks or merge queues - the check might fail randomly,
independent of the current PR.
- Running this check in a fork will never work, because the respective
users and teams don't have access to the fork's repo.
Both of these required us to set `continue-on-error: true` most of the
time.
(cherry picked from commit f7d6d11e8e)