Nix has recently started using this package (transitively via
json-schema-for-humans) in the nix-manual, so it's prudent to fix it
on all supported systems.
(cherry picked from commit d329a02f9f)
We previously only did this for Syslinux, which we don't use on UEFI machines.
Co-authored-by: Lee Bousfield <ljbousfield@gmail.com>
(cherry picked from commit dcb0423368)
This partially reverts commit 1197fe48da.
GitHub just doesn't reliably schedule runs at such narrow intervals. 10
minutes is alright in practice.
(cherry picked from commit 74d6ba3ab4)
When a user tries to merge a PR but is not allowed to, it is helpful to
explicitly list the users who *are* allowed. This helps explain *why*
the merge-eligible label was set.
I objected to this proposal before, because it would have incurred too
many API requests. But now that we have restructured the checklist, this
is no longer true: we can sensibly run this check only when a comment is
posted, and not every time we check a PR for eligibility.
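As a rough sketch, the check could take the following shape, assuming an
octokit client and a hypothetical getAllowedMergers helper - neither
name is the bot's actual API:

    import { Octokit } from "@octokit/rest";

    // Hypothetical helper: resolves the users allowed to merge this PR.
    declare function getAllowedMergers(prNumber: number): Promise<string[]>;

    // Runs only when a merge comment is posted, so the extra API
    // requests are not incurred on every eligibility check.
    async function onMergeComment(
      octokit: Octokit,
      owner: string,
      repo: string,
      prNumber: number,
      commenter: string,
    ): Promise<void> {
      const allowed = await getAllowedMergers(prNumber);
      if (!allowed.includes(commenter)) {
        await octokit.rest.issues.createComment({
          owner,
          repo,
          issue_number: prNumber,
          body:
            `@${commenter} is not allowed to merge this PR. ` +
            `Allowed users: ${allowed.map((u) => "@" + u).join(", ")}`,
        });
      }
    }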
(cherry picked from commit 1e6124a504)
By ignoring already-handled comments only when running non-dry, it's
much easier to look at existing PRs on which the merge bot has already
commented and to iterate on them locally.
It's dry mode anyway, so a few extra merge comments in the console
output won't hurt.
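A minimal sketch of the resulting loop, with illustrative names (dryRun,
alreadyHandled and processMergeComment are not the bot's actual
identifiers):

    interface MergeComment {
      id: number;
      body: string;
    }

    declare const dryRun: boolean;
    declare function alreadyHandled(comment: MergeComment): boolean;
    declare function processMergeComment(
      comment: MergeComment,
      opts: { dryRun: boolean },
    ): Promise<void>;

    async function handleComments(comments: MergeComment[]): Promise<void> {
      for (const comment of comments) {
        // Skip already-handled comments only in real runs; dry runs
        // deliberately re-process them for local iteration.
        if (!dryRun && alreadyHandled(comment)) continue;
        await processMergeComment(comment, { dryRun });
      }
    }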
(cherry picked from commit 2d6602908b)
We previously enabled auto-merge first and then enqueued explicitly, on
the assumption that auto-merge would fail if the PR was already in a
mergeable state. This turned out to be false.
Instead, we currently face the problem of auto-merge sometimes getting
stuck. This seems to happen when, at the time of enabling auto-merge,
the required status checks have already passed and the PR would be ready
to go - but sometimes GitHub just doesn't merge it. This *can* be
unblocked by approving the PR again, which seems to re-run the internal
"let's check whether we can merge this" procedures on the GitHub side.
However, we can probably also solve this by explicitly trying to
enqueue the PR first, and only falling back to auto-merge if that fails.
I previously argued against that based on a potential race condition: a
PR could become ready to merge between the two requests, at which point
the auto-merge operation would fail - if the original assumption had
been true. But since we don't observe this in practice, we might as well
switch.
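A sketch of the enqueue-first strategy, using GitHub's GraphQL mutations
enqueuePullRequest and enablePullRequestAutoMerge (the mutations exist;
the surrounding error handling is simplified and illustrative):

    import { Octokit } from "@octokit/rest";

    async function mergeOrQueue(
      octokit: Octokit,
      pullRequestId: string, // the PR's GraphQL node id
    ): Promise<void> {
      try {
        // If the PR is already mergeable, enqueue it right away instead
        // of relying on auto-merge to trigger.
        await octokit.graphql(
          `mutation ($id: ID!) {
            enqueuePullRequest(input: { pullRequestId: $id }) {
              clientMutationId
            }
          }`,
          { id: pullRequestId },
        );
      } catch {
        // Not mergeable yet: fall back to auto-merge, which fires once
        // the required status checks pass.
        await octokit.graphql(
          `mutation ($id: ID!) {
            enablePullRequestAutoMerge(input: { pullRequestId: $id }) {
              clientMutationId
            }
          }`,
          { id: pullRequestId },
        );
      }
    }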
(cherry picked from commit 747d9e2d34)
This makes reactions to merge comments and all the labeling a bit
quicker. Lower the number of backlog items to process per run
accordingly, so that overall we don't need more API requests.
(cherry picked from commit 1197fe48da)
We used to employ the worst possible strategy for parallelism: the rate
limiter capped us at one concurrent request per second, while 100+ items
were handled in parallel. This led to every item taking the full
duration of the job to complete, so the data fetched at the beginning of
the job was stale by the end. That caused small hiccups when labeling,
or the merge-bot posting comments after the PR had already been closed.
GitHub allows 100 concurrent requests, but considers it a best practice
to serialize them. Since serializing all of them causes problems for us,
we should try to go higher.
Since other jobs are running in parallel, we use a conservative value
of 20 concurrent requests here. We also introduce the same number of
workers going through the list of items, to make sure each item is
handled start to finish in the shortest possible time before the worker
proceeds to the next. This gives us roughly 2.5 seconds per individual
item - but speeds up the overall execution of the scheduled job from
3-4 minutes to 20-30 seconds.
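A sketch of the worker-pool pattern described above: a fixed number of
workers pull items from a shared queue, so each item runs start to
finish before its worker picks up the next one (names are illustrative,
not the actual implementation):

    async function processAll<T>(
      items: T[],
      handle: (item: T) => Promise<void>,
      concurrency = 20, // conservative; leaves headroom for parallel jobs
    ): Promise<void> {
      const queue = [...items];
      const worker = async (): Promise<void> => {
        // Each worker drains the shared queue one item at a time.
        for (let item = queue.shift(); item !== undefined; item = queue.shift()) {
          await handle(item);
        }
      };
      await Promise.all(Array.from({ length: concurrency }, worker));
    }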
(cherry picked from commit 810b9ba51d)