Avoid network timeout on large histories #7
brlcad wants to merge 4 commits into blackducksoftware:main from
Conversation
…on. this streams the log back in that case (which keeps the network pipe open) and grabs the last revision.
…on. this streams the log back in that case (which keeps the network pipe open) and grabs the last one.
add support for when extracting HEAD:1 times out the network connection....
I don't see that run(back_to_front) will return a truthy value. Please refer to Scm::Adapters::AbstractAdapter. This calls Shellout#run, which calls Shellout.execute, which returns an array of [status, outbuf.string, errbuf.string].
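A quick illustration of the point above (the method here is a stand-in modeled on the description, not the actual Shellout source): because an execute-style method returns an array, and any Array is truthy in Ruby, a bare truthiness check on the return value can never detect a failed command — the status element has to be checked explicitly.

```ruby
# Hypothetical stand-in for a Shellout-style runner: returns
# [status, outbuf, errbuf] just as described in the review comment.
def run_like_shellout(ok)
  status = ok ? 0 : 1
  [status, "stdout text", "stderr text"]
end

# The array is truthy even when the command failed:
result = run_like_shellout(false)
puts(result ? "truthy" : "falsy")   # prints truthy

# The failure is only visible in the status element:
status, _out, _err = run_like_shellout(false)
puts status                          # prints 1
```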
It would be very helpful to have test coverage of this next_revision_xml method with the different pathways covered.
The perils of writing code one cannot easily test. I fully concur that there's a problem as written. It looks like I misread a similar snippet in lib/scm/adapters/git/head.rb that led me to think a return code would result.
What about pushing the conditional into the command being run, then making the callee always just check parse().last?
run("#{back_to_front} || #{front_to_back}")
I can redo a push with that change if the explanation is unclear.
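The suggestion can be sketched as follows (the command strings and the parsing step are illustrative stand-ins, not the adapter's real code): the shell's || runs the forward-order log only when the reverse log exits nonzero, and the caller unconditionally takes the last parsed entry, which with --limit 1 on the forward log is the revision we want.

```ruby
require 'open3'

# Stand-ins for the two svn log invocations: the reverse log "times out"
# (exits nonzero), the forward log succeeds and emits two entries.
back_to_front = "false"
front_to_back = "echo '<entry>1</entry><entry>2</entry>'"

# The conditional lives in shell land: forward runs only if reverse fails.
out, _status = Open3.capture2("#{back_to_front} || #{front_to_back}")

# The callee always just takes the last entry.
entries = out.scan(%r{<entry>(\d+)</entry>}).flatten
puts entries.last   # prints 2
```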
…t wrong. push the conditional down into sh land and let the xml parser just always pull the last one (which will be the first one with --limit 1)
This change avoids a connection timeout when the reverse svn log fails, by pulling the log in forward order and extracting the last entry. The fallback runs only when the reverse log fails, which keeps overhead low for most enlistments, but it should fix failures for a number of repositories with large revision histories.
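The overall strategy described above can be sketched like this (method and command names are hypothetical, and a real status check replaces the broken truthiness test discussed in review): try the reverse log first, and only on a nonzero exit rerun the log in forward order, then take the last entry either way.

```ruby
require 'open3'

# Hypothetical helper: run reverse_cmd; if it fails (e.g. a connection
# timeout on a large history), fall back to forward_cmd, then return the
# last log entry from whichever output we got.
def last_entry(reverse_cmd, forward_cmd)
  out, status = Open3.capture2(reverse_cmd)
  unless status.success?
    out, _fallback_status = Open3.capture2(forward_cmd)
  end
  out.scan(%r{<logentry[^>]*>}).last
end

# Simulated failure of the reverse log, forward log succeeds:
puts last_entry("false",
                %q{echo '<logentry revision="1"/><logentry revision="2"/>'})
```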