
Fix #433 Prevent per-page hangs & avoid killing job on max backoff #438

Open

akshan-main wants to merge 1 commit into allenai:main from akshan-main:request_timeout_and_backoff_fix

Conversation

@akshan-main

Closes #433

Changes proposed in this pull request:

  • apost() now takes a timeout_s param and wraps the entire network path in asyncio.timeout(), so a stalled server can't block forever
  • When max backoff is exhausted, we return None instead of calling sys.exit(1); the existing fallback path (make_fallback_result) handles it from there, so the rest of the PDF still gets processed
  • New --request_timeout_s CLI flag (default 120s) to control the per-request timeout

Before submitting

  • I've read and followed all steps in the Making a pull request
    section of the CONTRIBUTING docs.
  • I've updated or added any relevant docstrings following the syntax described in the
    Writing docstrings section of the CONTRIBUTING docs.
  • If this PR fixes a bug, I've added a test that will fail without my fix.
  • If this PR adds a new feature, I've added tests that sufficiently cover my new functionality.

@akshan-main
Author

@jakep-allenai

@jakep-allenai
Contributor

Thanks for this suggestion, let me think on it for a day or two. The reason the job exits now is that in the giant runs we do with hundreds of millions of documents, I found it easier to have the job die and show up as an obvious error right away than to have half-complete or empty files generated when some persistent backend issue occurred. We've hit weird cluster issues where jobs worked fine, then produced empty or nearly incomplete jsonl result files, then went back to working, and that wasn't fun.

Can you explain more about the cases you ran into?

@akshan-main
Author

akshan-main commented Feb 16, 2026

Hey, I get why you'd rather crash early in giant runs. In my case it wasn't bad output; there was no output at all because of a hang. apost() waits on socket reads without a timeout, so if the server stalls mid-response, the coroutine blocks forever (there's no per-request deadline). With concurrency effectively at 1, it looks like it's stuck on the last page, but it's really just whichever page hit the wedged request first. That's why I think the timeout is important.

For the max-backoff change, I replaced sys.exit(1) because there is already fallback handling, and I wanted a one-off failure not to kill the entire PDF. But let me know if it's better to make that behavior opt-in (behind a flag), or to add a threshold so repeated failures still stop the job loudly. I can align my solution with that and open a PR for it as well.
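The backoff change described above can be sketched as follows; the names (try_page, process_page) and the tiny sleep constants are illustrative, not the actual olmocr code:

```python
import asyncio
import random

async def try_page(page):
    """Hypothetical request stub: fails until the stored countdown hits zero."""
    if page["failures_left"] > 0:
        page["failures_left"] -= 1
        return None
    return {"page": page["num"], "text": "ok"}

async def process_page(page, max_retries: int = 4):
    for attempt in range(max_retries):
        result = await try_page(page)
        if result is not None:
            return result
        # Exponential backoff with jitter, capped; scaled down here for the sketch.
        await asyncio.sleep(min(2 ** attempt, 8) * 0.001 * random.uniform(0.5, 1.5))
    # Previously this path would sys.exit(1); returning None instead lets the
    # caller's make_fallback_result-style handling keep the rest of the PDF alive.
    return None
```

A one-off wedged page then degrades to a fallback result for that page instead of killing the whole job; a threshold on consecutive None results could restore the loud-failure behavior for systemic outages.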

@akshan-main
Author

Hey @jakep-allenai, just curious to hear your thoughts on this now.

@jakep-allenai
Contributor

I've thought about it, and I don't think I can make the timeout default on most paths. For example, on our runs in a big cluster we might have 1600 concurrent requests fired off in parallel, and the server might respond to the last one only after a while. However, I see that a timeout would be more important in the inference-provider (external server) case. And yes, sometimes servers do crash, but I imagine that would just close those sockets and return a ConnectionClosed error, which would then hit the exponential backoff case.

How exactly did VLLM get wedged for you? Was it running using the subprocess inside the pipeline, or were you running an external VLLM? Any VLLM logs you can share?



Development

Successfully merging this pull request may close these issues.

[bug] Tends to get stuck at the last few pages
