feat(storage): multi-copy upload with store->pull->commit flow #593

Open: rvagg wants to merge 2 commits into rvagg/sp-sp-fetch from rvagg/pull-upload-flow

Conversation


@rvagg rvagg commented Feb 6, 2026

Sits on top of #544 which has the synapse-core side of this.


Implement store->pull->commit flow for efficient multi-copy storage replication.

Split operations API on StorageContext:

  • store(): upload data to SP, wait for parking confirmation
  • presignForCommit(): pre-sign EIP-712 extraData for pull + commit reuse
  • pull(): request SP-to-SP transfer from another provider
  • commit(): add pieces on-chain with optional pre-signed extraData
  • getPieceUrl(): get retrieval URL for SP-to-SP pulls

StorageManager.upload() orchestration:

  • Default 2 copies (endorsed primary + any approved secondary)
  • Single-provider: store->commit flow
  • Multi-copy: store on primary, presign, pull to secondaries, commit all
  • Auto-retry failed secondaries with provider exclusion (up to 5 attempts)

Provider selection:

  • Primary requires endorsed provider (throws if none reachable)
  • Secondaries use any approved provider from the pool
  • 2-tier selection per role: existing dataset, then new dataset

Callback refinements:

  • Remove redundant onUploadComplete (use onStored instead)
  • onStored(providerId, pieceCid) - after data parked on provider
  • onPieceAdded(providerId, pieceCid) - after on-chain submission
  • onPieceConfirmed(providerId, pieceCid, pieceId) - after confirmation

Type clarity:

  • Rename UploadOptions.metadata -> pieceMetadata (piece-level)
  • Rename CommitOptions.pieces[].metadata -> pieceMetadata
  • StoreError/CommitError carry providerId and endpoint for optional telemetry
  • New: CopyResult, FailedCopy for multi-copy transparency

Implements #494

@rvagg rvagg requested a review from hugomrdias as a code owner February 6, 2026 14:06
@github-project-automation github-project-automation bot moved this to 📌 Triage in FOC Feb 6, 2026

cloudflare-workers-and-pages bot commented Feb 6, 2026

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

| Status | Name | Latest Commit | Updated (UTC) |
| --- | --- | --- | --- |
| ✅ Deployment successful! (View logs) | synapse-dev | a3a248f | Feb 16 2026, 11:15 AM |


rvagg commented Feb 6, 2026

Docs lint is failing; this still needs a big docs addition, but that can come a little later as we get through review here.

Here are some notes I built up about failure modes and handling:

Multi-Copy Upload: Failure Handling

Philosophy

  1. Store failure = hard fail: If we can't store data anywhere, throw immediately
  2. All commits fail = hard fail: If no provider commits successfully, throw CommitError
  3. Partial commit failure = record and return: Record failed providers in failures[] (with role), return result with successful copies[]
  4. Secondary failure = best-effort: Retry with replacement SPs, then commit whatever succeeded
  5. Never throw away successful work: If data is committed on any provider, the user gets a result -- not an exception
  6. Explicit providers = no retry: User specified providers, respect their choice
  7. Batch semantics: All pieces must succeed on a provider, or that provider is failed
  8. Transparency over exceptions: failures[] tells the user what went wrong; copies[] tells them what worked

Partial Success Over Atomicity

When a user requests N copies and we can only achieve fewer, we commit what we have rather than throwing everything away:

  • Best-effort exhaustion: For auto-selected providers, we retry up to 5 secondaries before giving up
  • Upload work is expensive: Throwing discards successful uploads; parked pieces get GC'd by the SP
  • No information loss: throwing after partial success destroys information about what did succeed
  • Result inspection is the contract: result.copies.length < count tells the user they got fewer copies; result.failures tells them why

Failure Modes by Stage

The multi-copy upload has a sequential pipeline: select → store → pull → commit.

Stage 0: Provider Selection (before any upload)

Provider selection uses a tiered approach with ping validation at each step:

| Priority | Selection Strategy | When Used |
| --- | --- | --- |
| 1 | Existing data set with endorsed provider | Primary selection, has stored before |
| 2 | New data set with endorsed provider | Primary selection, fresh start |
| 3 | Existing data set with non-endorsed provider | Fallback if no endorsed available |
| 4 | New data set with non-endorsed provider | Final fallback |

Ping validation: Before selecting any provider, we ping their PDP endpoint. If ping fails, we try the next provider in the current tier before falling to the next tier.

| What happens | Behaviour |
| --- | --- |
| Provider ping succeeds | Use this provider |
| Provider ping fails | Try next provider in tier, warn to console |
| All providers in tier fail ping | Move to next tier |
| All tiers exhausted (providers remain but unreachable) | Throw error: "All N providers failed health check" |
| No providers remain after filtering | Throw error for primary, break loop for secondaries |

Key distinction:

  • For primary selection (first context), exhaustion = error (can't proceed)
  • For secondary selection (subsequent contexts), exhaustion = get fewer copies (proceed with what we have)
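The tiered selection with ping validation above reduces to a small loop. A hedged sketch, assuming illustrative names (`selectWithPing`, `ping`, `tiers`) that are not the SDK's actual API:

```javascript
// Illustrative sketch of Stage 0: walk tiers in priority order, ping-checking
// each candidate. Primary exhaustion throws; secondary exhaustion returns null.
// All names here (selectWithPing, ping, tiers) are assumptions, not the SDK API.

async function selectWithPing(tiers, ping, role) {
  let sawAnyProvider = false
  for (const tier of tiers) {
    for (const provider of tier) {
      sawAnyProvider = true
      if (await ping(provider)) return provider // healthy: use this provider
      console.warn(`provider ${provider.id} failed ping, trying next in tier`)
    }
    // every provider in this tier failed ping: fall through to the next tier
  }
  if (role === 'primary') {
    // can't proceed without a primary
    throw new Error(
      sawAnyProvider
        ? 'All providers failed health check'
        : 'No providers available'
    )
  }
  return null // secondary: caller proceeds with fewer copies
}
```

The key asymmetry is the `role` parameter: the same exhaustion condition is fatal for the primary but merely reduces the copy count for secondaries.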

Stage 1: Store (upload data to primary SP)

Store has two sub-stages:

| Sub-stage | What happens | Data state | Behaviour |
| --- | --- | --- | --- |
| 1a: Upload | HTTP upload stream succeeds | Data on SP (parked) | Continue to 1b |
| 1a: Upload | HTTP upload stream fails (network, timeout) | No | Throw StoreError |
| 1b: Confirm | Polling for "parked" status succeeds | Data on SP (parked) | Continue to pull |
| 1b: Confirm | Polling for "parked" status times out | Unknown (may or may not exist) | Throw StoreError |

Store failure is unambiguous from the SDK's perspective: either we have confirmed parked data, or we don't. The user can safely retry.

Note: If 1b times out, data might exist on the SP but we can't confirm it. The SP will eventually GC parked pieces that aren't committed.
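Sub-stage 1b is essentially a poll-until-parked loop with a deadline. A minimal sketch, assuming a hypothetical `waitForParked` helper and `getStatus` callback (not the SDK's actual API), with illustrative timings:

```javascript
// Sketch of sub-stage 1b: poll until the SP reports the piece as 'parked',
// throwing StoreError on timeout. waitForParked/getStatus and the timings are
// illustrative assumptions, not the real implementation.

class StoreError extends Error {
  name = 'StoreError'
}

async function waitForParked(getStatus, { timeoutMs = 60_000, intervalMs = 1_000 } = {}) {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    if ((await getStatus()) === 'parked') return // confirmed: safe to pull/commit
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  // Data may or may not exist on the SP at this point; uncommitted parked
  // pieces are eventually GC'd by the SP, so throwing here is safe to retry.
  throw new StoreError('timed out waiting for parked confirmation')
}
```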

Stage 2: Pull (SP-to-SP fetch to secondaries)

| What happens | Data on secondary? | On-chain? | Behaviour |
| --- | --- | --- | --- |
| Pull succeeds | Yes (parked) | No | Continue to commit |
| Pull fails (auto-selected) | No | No | Retry with next provider (up to 5 attempts) |
| Pull fails (explicit provider) | No | No | Record in failures[], no retry |
| All secondary attempts exhausted | No | No | Proceed to commit with primary only |

Pull failure is recoverable: data is still on the primary, no on-chain state exists yet. Retrying pull is cheap (SP-to-SP, no client bandwidth).

Stage 3: Commit (addPieces on-chain transaction)

| What happens | Data on SP? | On-chain? | Behaviour |
| --- | --- | --- | --- |
| All commits succeed | Yes | Yes | Build result with all copies |
| Primary commit succeeds, secondary fails | Yes | Primary: yes | Record secondary in failures[] |
| Primary commit fails, secondary succeeds | Yes | Secondary: yes | Record primary in failures[] with role: 'primary', return with secondary in copies[] |
| Primary commit fails, secondary also fails | Yes (parked) | No | Throw CommitError -- nothing on-chain, safe to retry |
| Secondary commit fails | Yes (parked) | No | Record in failures[] -- data on SP, will be GC'd |

Behaviour Matrix

| Scenario | Behaviour |
| --- | --- |
| Primary store fails | Throw StoreError -- nothing happened |
| Primary commit fails, secondary succeeds | Record primary in failures[] with role: 'primary', return result |
| All commits fail | Throw CommitError -- nothing on-chain |
| Secondary pull fails (auto-selected) | Retry with next provider (up to 5 attempts) |
| Secondary pull fails (explicit) | Record in failures[], no retry |
| All secondary attempts exhausted | Commit primary only, record failures |
| Secondary commit fails | Record in failures[] -- data on SP, will be GC'd |
| Failover creates new dataset | Mark isNewDataSet: true in CopyResult |
| copies.length < count | Partial success -- user should inspect failures[] |

Error Types

/** Primary store failed - no data stored anywhere, safe to retry */
class StoreError extends Error {
  name = 'StoreError'
}

/** All commits failed - data stored on SP(s) but nothing on-chain, safe to retry */
class CommitError extends Error {
  name = 'CommitError'
}

// Partial commit failures appear in result.failures[] with role: 'primary' or 'secondary'
// Only throws CommitError when ALL providers fail to commit
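Given those two classes, callers can branch on the error type to decide what is safe to retry. A standalone sketch (the classes are re-declared so the snippet runs on its own; `describeFailure` is an illustrative helper, not SDK API):

```javascript
// Branching on the two hard-fail error types. StoreError/CommitError mirror
// the declarations above; describeFailure is illustrative, not part of the SDK.

class StoreError extends Error {
  name = 'StoreError'
}
class CommitError extends Error {
  name = 'CommitError'
}

function describeFailure(err) {
  if (err instanceof StoreError) {
    // No data stored anywhere: the whole upload can be retried from scratch.
    return 'nothing stored: retry the whole upload'
  }
  if (err instanceof CommitError) {
    // Data is parked on the SP(s) but nothing is on-chain: retry commit only,
    // within the SP's GC window for uncommitted pieces.
    return 'data parked but not on-chain: retry commit before GC'
  }
  return 'unexpected error: inspect before retrying'
}
```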

What Users Must Check

Users should always inspect result.failures, not just check that upload() didn't throw:

// If ALL commits fail, upload() throws CommitError
// If at least one succeeds, we get a result:
const result = await synapse.storage.upload(data, { count: 3 })

// Check if endorsed provider (primary) failed
const primaryFailed = result.failures.find(f => f.role === 'primary')
if (primaryFailed) {
  console.warn(`Endorsed provider ${primaryFailed.providerId} failed: ${primaryFailed.error}`)
  // Data is only on non-endorsed secondaries
}

// Check if we got all requested copies
if (result.copies.length < 3) {
  console.warn(`Only ${result.copies.length}/3 copies succeeded`)
  for (const failure of result.failures) {
    console.warn(`  Provider ${failure.providerId} (${failure.role}): ${failure.error}`)
  }
}

// Every copy in copies[] is committed on-chain
for (const copy of result.copies) {
  console.log(`Provider ${copy.providerId}, dataset ${copy.dataSetId}, piece ${copy.pieceId}`)
}

Auto-Retry Logic

When user calls upload(data, { count: 2 }) without explicit providerIds or dataSetIds:

  1. Select primary (endorsed preferred)
  2. Store on primary
  3. Select secondary candidate from pool (excluding primary)
  4. Pull to secondary
  5. If pull fails:
    • Mark secondary as failed
    • Select next secondary from pool
    • Retry pull (data already on primary)
    • Repeat until: success OR exhausted pool OR hit MAX_SECONDARY_ATTEMPTS (5)
  6. If no secondary succeeded → proceed to commit with primary only
  7. Commit on all successful providers
  8. Return result with copies[] and failures[]

When user specifies providerIds or dataSetIds: no auto-retry, failures recorded in failures[].
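The numbered steps above reduce to a loop over the candidate pool. A sketch under assumed names (`pullToSecondary`, `pullTo`); only the 5-attempt cap (MAX_SECONDARY_ATTEMPTS) comes from the PR itself:

```javascript
// Sketch of steps 3-6: try candidate secondaries in order until one pull
// succeeds or the attempt budget runs out. pullToSecondary/pullTo are
// illustrative names; the 5-attempt cap matches MAX_SECONDARY_ATTEMPTS above.

const MAX_SECONDARY_ATTEMPTS = 5

async function pullToSecondary(pool, pullTo) {
  const failures = []
  let attempts = 0
  for (const provider of pool) {
    if (attempts >= MAX_SECONDARY_ATTEMPTS) break // budget exhausted
    attempts++
    try {
      // SP-to-SP pull: data is already on the primary, so retries cost no
      // client bandwidth.
      await pullTo(provider)
      return { secondary: provider, failures }
    } catch (err) {
      failures.push({ providerId: provider.id, role: 'secondary', error: String(err) })
    }
  }
  return { secondary: null, failures } // proceed to commit with primary only
}
```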

Design Decision: Primary Commit Failure Handling

Current implementation commits on all providers in parallel via Promise.allSettled(). If primary commit fails but secondary commit succeeds, we record the primary failure and return with the secondary in copies[].

Endorsed providers are selected as primary because they're curated for reliability. If primary (endorsed) fails but secondary (non-endorsed) succeeds, the user ends up with data only on non-endorsed providers. This may not meet product requirements of having one copy on an endorsed provider.

// Check if endorsed provider failed
const primaryFailed = result.failures.some(f => f.role === 'primary')
if (primaryFailed) {
  // Handle: retry, alert, or treat as error depending on requirements
}


timfong888 commented Feb 6, 2026

I noticed this:

Primary store failure = hard fail: If we can't store on primary, throw immediately

What is the test for the availability of an Endorsed Provider in the case we have more than one? If the first store fails, is there a retry?

Under retry:

Select primary (endorsed preferred)
Store on primary

If we have 2 Endorsed, and the store on primary operation fails do we retry the other endorsed?


rvagg commented Feb 8, 2026

@timfong888 I've clarified the post above with more detail:

  • Now says: Store failure = hard fail: If we can't store data anywhere, throw immediately.
  • There's now also a "Stage 0" that details how we select a provider
  • I updated "Stage 1" with details about the failure modes that can happen there too because there's nuanced ways it can go wrong.


rvagg commented Feb 9, 2026

Docs updated to pass lint, additional tests added to address some gaps.

@rvagg rvagg mentioned this pull request Feb 9, 2026
@BigLep BigLep linked an issue Feb 9, 2026 that may be closed by this pull request
@BigLep BigLep moved this from 📌 Triage to 🔎 Awaiting review in FOC Feb 9, 2026
@timfong888

I am not clear on this:

All providers in tier fail ping Move to next tier

My understanding is that if no Endorsed SP succeeds, the operation should fail, because if there is no Endorsed and we only have Approved, that carries a low durability guarantee.

@timfong888

Key distinction:

For primary selection (first context), exhaustion = error (can't proceed)
For secondary selection (subsequent contexts), exhaustion = get fewer copies (proceed with what we have)

The above seems right. If Primary exhausts, it's error, not go to the next tier, right?

@timfong888

Question: If the endorsed provider passed ping during selection but then fails during store() (HTTP upload or parking confirmation), StoreError is thrown immediately. There doesn't appear to be an attempt to try another endorsed provider. But if there is, then great; just checking.

@timfong888

All commits failed - data stored on SP(s) but nothing on-chain, safe to retry
parked pieces get GC'd by the SP

What happens if GC before retry?

@rvagg rvagg force-pushed the rvagg/pull-upload-flow branch from eb878ac to 29ac8ad Compare February 10, 2026 23:49

rvagg commented Feb 11, 2026

@timfong888:

On the tier question: yes, the current code does fall back to approved-only if no endorsed provider passes the health check. A requireEndorsed option is something I wrote down as on the table for the future, but right now the priority is "data gets stored" over "only endorsed". If that's a problem we should talk about it, but I think for launch it's the right trade-off since endorsed providers failing the health check would be an unusual situation? Maybe a hard failure is a better signal for us though.

There doesn't appear to be an attempt to try another endorsed provider

Not right now. Couple of reasons:

  1. Scope, this is where I'm drawing the line for the first iteration. First pass, best effort, fail clearly.
  2. It's hard because streams can only be consumed once. If the user gives us raw bytes or a File we could restart, but for a plain stream we can't, and the DX gets complicated fast (do we silently re-send 1GiB? what about streams that can't restart?). Better to throw and let the user decide until we work through the DX of it and see if it's worth the complexity.

What happens if GC before retry

Curio GCs unreferenced pieces after 24 hours, so there's a comfortable window for retries for the commit phase.

@timfong888

Okay. So it randomizes across the Endorsed SP for ping if no existing context.

As long as they are good and an endorsed stores and commits successfully we are good. That's a fair assumption.

rvagg added a commit that referenced this pull request Feb 12, 2026
…lity

Borrowed a lot of this from #593,
and merged with foc-devnet-info support.
@rvagg rvagg force-pushed the rvagg/sp-sp-fetch branch from 59e576b to 63c6170 Compare February 12, 2026 13:03
redpanda-f pushed a commit that referenced this pull request Feb 12, 2026
…lity (#604)

Borrowed a lot of this from #593,
and merged with foc-devnet-info support.
@rvagg rvagg force-pushed the rvagg/sp-sp-fetch branch 2 times, most recently from 2d43c4f to 70fa757 Compare February 13, 2026 03:05

rvagg commented Feb 13, 2026

Two design changes landed based on product discussion with @timfong888:

  1. Endorsed provider = hard requirement for primary. No fallback to non-endorsed. If all endorsed providers fail health check, upload() throws. Tiers 3/4 removed from primary selection. Secondary selection unchanged (any approved provider).
  2. Error enrichment for telemetry. StoreError and CommitError now carry providerId (string) and endpoint properties + toJSON() for Sentry's ExtraErrorData integration where it's enabled. bigint stored as string to avoid JSON serializer breakage. Message text also includes provider ID as fallback.

@rvagg rvagg force-pushed the rvagg/pull-upload-flow branch from 29ac8ad to 98792d4 Compare February 13, 2026 04:20
@rvagg (Collaborator Author)
synthesises with changes that are in #600 but demonstrates the multi upload() flow and the multi-piece variant


rvagg commented Feb 13, 2026

Updated on top of #544. Minor updates to the original post here (which is the commit message) to reflect latest form with newest product requirements implemented.

Implement store->pull->commit flow for efficient multi-copy storage replication.

Split operations API on StorageContext:
- store(): upload data to SP, wait for parking confirmation
- presignForCommit(): pre-sign EIP-712 extraData for pull + commit reuse
- pull(): request SP-to-SP transfer from another provider
- commit(): add pieces on-chain with optional pre-signed extraData
- getPieceUrl(): get retrieval URL for SP-to-SP pulls

StorageManager.upload() orchestration:
- Default 2 copies (endorsed primary + any approved secondary)
- Single-provider: store->commit flow
- Multi-copy: store on primary, presign, pull to secondaries, commit all
- Auto-retry failed secondaries with provider exclusion (up to 5 attempts)

Provider selection:
- Primary requires endorsed provider (throws if none reachable)
- Secondaries use any approved provider from the pool
- 2-tier selection per role: existing dataset, then new dataset

Callback refinements:
- Remove redundant onUploadComplete (use onStored instead)
- onStored(providerId, pieceCid) - after data parked on provider
- onPieceAdded(providerId, pieceCid) - after on-chain submission
- onPieceConfirmed(providerId, pieceCid, pieceId) - after confirmation

Type clarity:
- Rename UploadOptions.metadata -> pieceMetadata (piece-level)
- Rename CommitOptions.pieces[].metadata -> pieceMetadata
- StoreError/CommitError carry providerId and endpoint for optional telemetry
- New: CopyResult, FailedCopy for multi-copy transparency

Implements #494
@rvagg rvagg force-pushed the rvagg/pull-upload-flow branch from 619499d to f63e566 Compare February 13, 2026 08:22
@rjan90 rjan90 added this to the M4.0: mainnet staged milestone Feb 16, 2026

socket-security bot commented Feb 16, 2026

Review the following changes in direct dependencies. Learn more about Socket for GitHub.

| Diff | Package | Supply Chain Security | Vulnerability | Quality | Maintenance | License |
| --- | --- | --- | --- | --- | --- | --- |
| Added | @hugomrdias/docs@0.1.1 | 77 | 100 | 100 | 92 | 100 |
| Added | vite@7.3.1 | 96 | 100 | 82 | 99 | 100 |
| Added | @astrojs/starlight@0.37.6 | 99 | 100 | 85 | 96 | 100 |
| Added | vite-plugin-node-polyfills@0.25.0 | 100 | 100 | 100 | 88 | 100 |
| Added | tw-animate-css@1.4.0 | 100 | 100 | 94 | 88 | 100 |
| Added | wrangler@4.65.0 | 98 | 100 | 95 | 96 | 100 |

View full report

…docs for multi-copy

Move provider selection logic (selectProviders, fetchProviderSelectionInput,
findMatchingDataSets) from SDK internals to synapse-core as public API for
DIY users. Simplify selection from 4-tier fallback to 2-tier preference
(existing dataset -> new dataset) since endorsedIds already controls the
eligible pool. Clean up createContexts() to three explicit paths (dataSetIds,
providerIds, smartSelect) with count validation and duplicate-provider guard.
Update storage docs to reflect multi-copy as the default upload path.
@rvagg rvagg force-pushed the rvagg/pull-upload-flow branch from 4770bab to a3a248f Compare February 16, 2026 11:09

rvagg commented Feb 16, 2026

@hugomrdias (and @rjan90) I'm bailing on my 3rd PR and just putting it in here as a second commit. I discovered when doing this that I'd lost something during my rebase onto post-0.37 master (when you give providerIds and dataSetIds it should only use them and not do the cascade thing). I've put that back in the latest commit and it's now more complete (🤞). But, as you might see if you look at that commit, it's the one that pulls a bunch more stuff back into synapse-core; the previous commit didn't touch core (that was all left for #544), and this new one adds a big docs modification.

The docs have 3 levels:

  • Golden path
  • Multiple files and using the contexts directly to control the store(), pull(), commit() operations and dealing with all of the permutations in which things could go wrong (this would be a good flow for an advanced user like Shashank)
  • synapse-core only ("Using synapse-core Directly") where you can do the same thing, but only with the stateless functions from synapse-core

example-storage-e2e.js works, confirmed working for single and multiple files, small and large, in devnet and on calibnet 🥳.

redpanda-f added a commit that referenced this pull request Feb 16, 2026
test: mocked JSON RPC

Update packages/synapse-core/test/foc-devnet-info.test.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

Update packages/synapse-core/src/foc-devnet-info/src/index.ts

Co-authored-by: Rod Vagg <rod@vagg.org>

fix: make example script work again, refactor for maximum example utility (#604)

Borrowed a lot of this from #593,
and merged with foc-devnet-info support.

Update packages/synapse-core/src/foc-devnet-info/src/index.ts

Co-authored-by: Rod Vagg <rod@vagg.org>

fixes: PR review

fix: remove redundant loadDevnetInfo() function

rvagg commented Feb 19, 2026

Multi-Copy Durability in Synapse (What's New)

Store data across multiple storage providers with a single upload. The SDK handles replication server-side: data is uploaded once and providers copy it between themselves.

What's New

Multi-Copy Uploads

upload() now stores data on multiple providers by default (2 copies). Your data goes to one provider, then that provider serves it directly to the others, no additional client bandwidth used.

const result = await synapse.storage.upload(data)

// result.copies: each successful copy with provider, dataset, and retrieval URL
// result.failures: any providers that failed

If the primary copy fails (store or commit), upload() throws (StoreError or CommitError). When you get a result back, the primary is always committed. Secondary failures are reported in failures[] rather than throwing; you decide what to do about them.

Target Specific Providers

Control where your copies go:

// Specific providers
await synapse.storage.upload(data, { providerIds: [1n, 2n, 3n] })

// Specific existing datasets
await synapse.storage.upload(data, { dataSetIds: [10n, 20n] })

// Or let the SDK choose (default: 2 copies, endorsed primary)
await synapse.storage.upload(data, { count: 3 })

Split Operations for Batching & Greater Control

Break the upload pipeline into independent phases - store, pull, commit - for multi-piece batch uploads and granular error handling:

const [primary, secondary] = await synapse.storage.createContexts({
  count: 2,
  metadata: { source: "my-service" },
})

// Store multiple pieces on the primary
const stored = await Promise.all(files.map(file => primary.store(file)))
const pieceCids = stored.map(s => s.pieceCid)

// Pre-sign once for all pieces (avoids multiple wallet prompts)
const extraData = await secondary.presignForCommit(
  pieceCids.map(cid => ({ pieceCid: cid }))
)

// Secondary pulls all pieces from primary (server-to-server, no client bandwidth)
await secondary.pull({ pieces: pieceCids, from: primary, extraData })

// Commit all pieces on-chain in one transaction per provider
await primary.commit({ pieces: pieceCids.map(cid => ({ pieceCid: cid })) })
await secondary.commit({ pieces: pieceCids.map(cid => ({ pieceCid: cid })), extraData })

Each phase is independently retryable. If the on-chain commit fails, the data is already stored on the provider; retry commit() without re-uploading.

Upload Progress Visibility

Track what's happening across providers:

await synapse.storage.upload(data, {
  onStored: (providerId, pieceCid) => { /* data uploaded to provider */ },
  onPullProgress: (providerId, pieceCid, status) => { /* SP-to-SP transfer progress */ },
  onCopyComplete: (providerId, pieceCid) => { /* secondary copy confirmed */ },
  onCopyFailed: (providerId, pieceCid, error) => { /* secondary copy failed */ },
  onPiecesAdded: (txHash, providerId, pieces) => { /* on-chain tx submitted */ },
  onPiecesConfirmed: (dataSetId, providerId, pieces) => { /* on-chain tx confirmed */ },
})

Structured Errors

Errors now tell you exactly what failed and where:

  • StoreError: upload failed, no data stored anywhere. Retry with same or different provider.
  • CommitError: data stored on provider but on-chain commit failed. Retry commit() without re-uploading.

Both carry the providerId and endpoint that failed.
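A sketch of what carrying providerId and endpoint could look like, including the bigint-as-string serialization mentioned earlier in the thread; the exact constructor shape and field names here are assumptions:

```javascript
// Illustrative enriched StoreError: providerId/endpoint properties plus a
// toJSON() so telemetry (e.g. Sentry's ExtraErrorData) captures them. The
// constructor shape and field names are assumptions, not the SDK's exact API.

class StoreError extends Error {
  name = 'StoreError'
  constructor(message, { providerId, endpoint } = {}) {
    super(`${message} (provider ${providerId})`) // provider ID in message as fallback
    this.providerId = providerId
    this.endpoint = endpoint
  }
  toJSON() {
    return {
      name: this.name,
      message: this.message,
      providerId: String(this.providerId), // bigint -> string: JSON.stringify(7n) throws
      endpoint: this.endpoint,
    }
  }
}
```

With `toJSON()`, `JSON.stringify(new StoreError('upload failed', { providerId: 7n, endpoint: 'https://sp.example' }))` serializes cleanly, where a raw bigint property would make JSON serialization throw.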

Provider Selection for Core Users

For applications that need direct control without the SDK wrapper, provider selection is now available as stateless functions in @filoz/synapse-core:

import { fetchProviderSelectionInput, selectProviders } from "@filoz/synapse-core/warm-storage"

// Single multicall gathers providers, endorsements, and existing datasets
const input = await fetchProviderSelectionInput(client, {
  address: walletAddress,
  metadata: { source: "my-service" },
})

// Pure function, no network calls, deterministic
const [primary] = selectProviders(
  { ...input, endorsedIds: input.endorsedIds },  // endorsed only
  { count: 1 }
)
const [secondary] = selectProviders(
  { ...input, endorsedIds: new Set() },           // any approved provider
  { count: 1, excludeProviderIds: new Set([primary.provider.id]) }
)

SP-to-SP Pull for Core Users

Initiate and monitor server-side replication directly:

import { pullPieces, waitForPullStatus } from "@filoz/synapse-core/sp"

const result = await waitForPullStatus(client, {
  serviceURL: secondaryProvider.pdp.serviceURL,
  pieces: [{
    pieceCid,
    sourceUrl: `${primaryProvider.pdp.serviceURL}/pdp/piece/${pieceCid}`,
  }],
  payee: secondaryProvider.serviceProvider,
  payer: client.account.address,
  cdn: false,
  metadata: { source: "my-service" },
  onStatus: (response) => console.log(response.status),
})

The pull endpoint is idempotent: the same signed request can be safely retried and doubles as a status check.
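That idempotency makes a naive retry wrapper safe: re-sending the identical signed request can't double-apply, and each resend doubles as a status poll. A sketch with assumed names (`retryPull`, `sendPullRequest`):

```javascript
// Sketch of retrying an idempotent pull request. Safe precisely because the
// endpoint treats a repeated signed request as a status check, not a new pull.
// retryPull/sendPullRequest are illustrative names, not synapse-core API.

async function retryPull(sendPullRequest, { attempts = 3, delayMs = 1_000 } = {}) {
  let lastErr
  for (let i = 0; i < attempts; i++) {
    try {
      return await sendPullRequest() // identical request every time
    } catch (err) {
      lastErr = err
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
  throw lastErr // surface the final failure after exhausting attempts
}
```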

Breaking Changes

  • UploadResult.pieceId replaced by UploadResult.copies[] (each copy has its own pieceId)
  • UploadOptions.metadata renamed to UploadOptions.pieceMetadata (clarifies piece vs dataset metadata)
  • Upload callbacks renamed: onUploadComplete is now onStored; onPiecesAdded and onPiecesConfirmed now include providerId
  • forceCreateDataSet, uploadBatchSize, and providerAddress options removed


Projects

Status: 🔎 Awaiting review

Development

Successfully merging this pull request may close these issues.

GA DURABILITY: Multi-copy upload via SP-to-SP pull

3 participants