Allow S3-Compatible Cloud Object Storage in CloudETL Example #1264
metadaddy wants to merge 4 commits into confluentinc:7.7.1-post from
Conversation
🎉 All Contributor License Agreements have been signed. Ready to merge.
Hi @davetroiano - you replied to me on #1262 last month, pointing to the PR for the CLI 4 compatible samples. Any interest in taking this one on? It shows how you can send data to any S3-compatible cloud object store (Backblaze, MinIO, IBM Cloud Object Storage, etc.) by setting the endpoint and region in the AWS profile (which you've likely already done if you're using any of the AWS SDKs/CLI with one of those providers).
Description
This change allows the use of an S3-compatible cloud object store such as Backblaze B2 with the CloudETL example as an alternative to Amazon S3.
With this change, the user may configure an AWS profile with `endpoint_url` set to a value such as `https://s3.us-west-004.backblazeb2.com` in the `config` file. For example:
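A sketch of such a profile entry in `~/.aws/config`; the profile name `b2` and the region value are illustrative placeholders, not part of this change:

```ini
# ~/.aws/config -- hypothetical profile for an S3-compatible provider (Backblaze B2 here)
[profile b2]
region = us-west-004
endpoint_url = https://s3.us-west-004.backblazeb2.com
```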
The `setup_s3_storage.sh` script reads this value (via `aws configure get endpoint_url --profile $S3_PROFILE`) into a new `S3_ENDPOINT_URL` environment variable, which is used as `store.url` for the connectors. If the endpoint URL is not set in the profile, the Amazon S3 global default, `https://s3.amazonaws.com`, is used.
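In outline, that logic amounts to something like the following sketch (not the exact diff; only `S3_PROFILE` and `S3_ENDPOINT_URL` are taken from the description above):

```bash
# Read the optional endpoint_url from the selected profile; prints nothing if unset.
# `|| true` keeps a missing key from aborting the script under `set -e`.
S3_ENDPOINT_URL=$(aws configure get endpoint_url --profile "$S3_PROFILE" || true)

# Fall back to the Amazon S3 global endpoint when no custom endpoint is configured.
if [ -z "$S3_ENDPOINT_URL" ]; then
  S3_ENDPOINT_URL="https://s3.amazonaws.com"
fi

# Used later as the connectors' store.url setting.
export S3_ENDPOINT_URL
```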
There is also a fix to `read-data.sh`: the `list-objects` call was missing `--profile $S3_PROFILE`, so it incorrectly used the default profile.
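For context, the corrected call looks roughly like this (the `s3api list-objects` form and the bucket variable are assumptions; the exact command in `read-data.sh` may differ):

```bash
# Before: implicitly falls back to the default AWS profile
aws s3api list-objects --bucket "$S3_BUCKET"

# After: uses the profile selected for the example
aws s3api list-objects --bucket "$S3_BUCKET" --profile "$S3_PROFILE"
```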
Author Validation
[x] cloud-etl
Reviewer Tasks
Describe the tasks/validation that the PR submitter is asking the reviewer to perform.
[ ] cloud-etl