Upload can still miss a chunk #50

@UniquePanda

Description

@UniquePanda

With #46 we introduced a new and more robust way of keeping track of the chunks that should be uploaded next.
While this improved performance and reliability for uploads with many files, I immediately encountered a scenario where one chunk was never uploaded.
The server logs showed that the test request was sent and answered with the correct 204 status code. However, the frontend never sent a corresponding POST request to actually upload the chunk.
At this point, I suspect the chunk's status was no longer "PENDING" because an XHR had already been created for it. That is probably why the final check didn't pick it up.

While I still don't understand why the frontend simply stopped sending requests, we should mitigate the problem.
My guess is that the XHR event listeners are sometimes never triggered, possibly because of timeouts. E.g. the response to the test GET request might be lost somewhere on the way to the client, leaving the frontend code waiting for an event that never fires.

We should implement the following three things:

  1. Add optional debug parameters to the requests so that at least some debugging can be done on the server side. E.g. append something like `&uploadTaskId=1` to every request and `&finalCheck=1` when it is the final check. There needs to be an option to toggle this on (obviously it's off by default).
  2. Make the final check abort all chunks of a file when the file is not yet uploaded, or maybe just the chunks that are not in status SUCCESS. That's necessary because the ResumableFile checks for any non-SUCCESS chunks to determine whether it is fully uploaded, while the uploadChunk function only uploads a chunk that is exactly in status PENDING. This leaves a gap for the final check when a chunk is somehow stuck in status UPLOADING.
  3. Add a timeout for every upload task. When there has been no progress or other activity within a given time frame (e.g. 2 minutes), reset the upload task and start a new upload for it, probably by aborting the chunk it was currently uploading and then uploading that specific chunk again. This way we can catch scenarios where the upload task is waiting for an event that is never fired, e.g. because the server response was lost.
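For point 1, a minimal sketch of how the debug parameters could be appended. The function name `buildChunkUrl` and the option name `debugParams` are assumptions for illustration, not existing API; only `uploadTaskId` and `finalCheck` come from the proposal above.

```javascript
// Append optional debug query parameters to a chunk request URL.
// `opts.debugParams` is a hypothetical toggle, off by default.
function buildChunkUrl(baseUrl, params, opts, taskId, isFinalCheck) {
  const query = new URLSearchParams(params);
  if (opts.debugParams) {
    // Tag every request with the upload task that sent it.
    query.set('uploadTaskId', String(taskId));
    if (isFinalCheck) {
      // Mark requests issued by the final check.
      query.set('finalCheck', '1');
    }
  }
  return baseUrl + '?' + query.toString();
}
```

With the toggle off, the URL is unchanged, so existing server-side handling is unaffected.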
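For point 2, a sketch of a final check that closes the status gap described above. The status names mirror the issue; the chunk shape and its `abort()` method are assumptions about the codebase.

```javascript
// Status values as used in the issue description.
const STATUS = { PENDING: 'PENDING', UPLOADING: 'UPLOADING', SUCCESS: 'SUCCESS' };

// Returns true if the file is fully uploaded. Otherwise, aborts any
// chunk stuck mid-flight and resets it to PENDING so that uploadChunk
// (which only touches PENDING chunks) will pick it up again.
function finalCheck(file) {
  const notDone = file.chunks.filter((c) => c.status !== STATUS.SUCCESS);
  if (notDone.length === 0) {
    return true;
  }
  for (const chunk of notDone) {
    if (chunk.status === STATUS.UPLOADING) {
      chunk.abort(); // hypothetical method that cancels the chunk's XHR
    }
    chunk.status = STATUS.PENDING;
  }
  return false;
}
```

Resetting to PENDING rather than retrying directly keeps the retry on the normal uploadChunk path.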
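For point 3, a sketch of an inactivity watchdog per upload task. The class name, the `onStall` callback, and the 2-minute default are assumptions; the idea is only that every XHR event (progress, load, error) resets the timer, and silence beyond the threshold triggers the reset-and-retry described above.

```javascript
// Per-task inactivity watchdog: fires onStall when no activity has been
// reported within timeoutMs (default 2 minutes, as suggested in the issue).
class UploadWatchdog {
  constructor(onStall, timeoutMs = 2 * 60 * 1000) {
    this.onStall = onStall;
    this.timeoutMs = timeoutMs;
    this.timer = null;
  }

  // Call from every progress/load/error handler to signal activity.
  kick() {
    clearTimeout(this.timer);
    this.timer = setTimeout(this.onStall, this.timeoutMs);
  }

  // Stop watching, e.g. when the task finished or was aborted.
  stop() {
    clearTimeout(this.timer);
  }
}
```

In `onStall`, the task would abort its current chunk and upload that specific chunk again, covering the lost-response scenario where no XHR event ever fires.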
