No way to prevent regression-analysis callbacks being aborted due to failures

The regression-analysis callback scheduled as part of the qa workflow (documentation) always ends up Aborted in non-trivial cases (example). This is because it depends on all the individual regression-analysis callbacks in QA sub-workflows; each of those must in turn depend on the tasks that produce the information it analyzes, so that it doesn't run too early. As a result, we hit this code:

    def unblock_reverse_dependencies(self) -> None:
        """Unblock reverse dependencies."""
        # Shortcuts to keep line length sane
        r = WorkRequest.Results
        s = WorkRequest.Statuses
        allow_failure = self.workflow_data and self.workflow_data.allow_failure

        if self.result in {r.SUCCESS, r.SKIPPED} or allow_failure:
            # lookup work requests that depend on the completed work
            # request and unblock them if no other work request is
            # blocking them
        for rdep in self.reverse_dependencies.can_be_automatically_unblocked():
                rdep.mark_pending()
        else:  # failure and !allow_failure
            for rdep in self.reverse_dependencies.filter(status=s.BLOCKED):
                rdep.mark_aborted()

Marking the information-producing tasks as allow_failure would fix this particular problem, but would be worse in other ways, because we don't in general want to allow those QA tasks to fail. Instead, I think we need a variant of allow_failure that works the other way around: allow_failure means "if this work request fails, allow work requests that depend on it to proceed anyway", whereas we need something that says "if any of this work request's dependencies fail, allow it to proceed anyway". Perhaps allow_dependency_failures?
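To illustrate the proposed semantics, here is a minimal standalone sketch (plain dataclasses, not debusine's Django models) of how an `allow_dependency_failures` flag on the *dependent* work request could change the unblocking logic. The flag name and its placement are assumptions; the real method also checks that no *other* work request is still blocking the reverse dependency, which this sketch omits:

```python
# Hypothetical sketch of allow_dependency_failures; simplified stand-ins
# for debusine's WorkRequest model, not its real implementation.
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    BLOCKED = "blocked"
    PENDING = "pending"
    ABORTED = "aborted"


class Result(Enum):
    SUCCESS = "success"
    SKIPPED = "skipped"
    FAILURE = "failure"


@dataclass
class WorkRequest:
    name: str
    status: Status = Status.BLOCKED
    result: Result | None = None
    # Existing flag on *this* request: tolerate its own failure.
    allow_failure: bool = False
    # Proposed flag: tolerate failures among this request's dependencies.
    allow_dependency_failures: bool = False
    reverse_dependencies: list[WorkRequest] = field(default_factory=list)

    def unblock_reverse_dependencies(self) -> None:
        ok = (
            self.result in {Result.SUCCESS, Result.SKIPPED}
            or self.allow_failure
        )
        for rdep in self.reverse_dependencies:
            if rdep.status != Status.BLOCKED:
                continue
            # The new clause: a failed dependency no longer aborts a
            # reverse dependency that opted in to tolerating failures.
            if ok or rdep.allow_dependency_failures:
                rdep.status = Status.PENDING  # mark_pending()
            else:
                rdep.status = Status.ABORTED  # mark_aborted()


# A failing information-producing task no longer aborts the analysis:
producer = WorkRequest("qa-task", result=Result.FAILURE)
analysis = WorkRequest("regression-analysis", allow_dependency_failures=True)
producer.reverse_dependencies.append(analysis)
producer.unblock_reverse_dependencies()
print(analysis.status)  # Status.PENDING rather than Status.ABORTED
```

The key design point is that the flag lives on the dependent request rather than the failing one, so the QA tasks themselves still count as failures everywhere else.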
