
Conversation

@LiebingYu
Contributor

Purpose

Linked issue: close #2425

Brief change log

Tests

API and Format

Documentation

@LiebingYu LiebingYu force-pushed the flink-custom-properties branch from 5f50aa7 to 793d437 Compare January 21, 2026 11:37
Contributor

@luoyuxia luoyuxia left a comment


@LiebingYu Thanks for the PR. LGTM. @loserwang1024 Could you please help review again?

@LiebingYu LiebingYu force-pushed the flink-custom-properties branch 4 times, most recently from 3c61339 to a845693 Compare January 25, 2026 09:35
@LiebingYu LiebingYu force-pushed the flink-custom-properties branch from a845693 to 784312a Compare January 25, 2026 09:36
@loserwang1024
Contributor

loserwang1024 commented Jan 27, 2026

@LiebingYu Should we add a special prefix to custom properties rather than skipping all the checks? I think it's dangerous for later compatibility.

Currently, org.apache.fluss.flink.utils.FlinkConnectorOptionsUtils#validateTableSourceOptions only checks the scan startup options.

Why not have the catalog add a prefix 'customer.' to custom properties when it reads from the table, and then have the table factory check for and strip the prefix? @leonardBang @wuchong, CC, WDYT?
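The flow proposed above could be sketched roughly as follows. This is a hypothetical illustration only: the class, method names, and the 'customer.' prefix are placeholders, not actual Fluss APIs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of the proposal: the catalog prefixes custom properties when
// reading table metadata, and the table factory strips the prefix again,
// so prefixed keys can be excluded from option validation.
public class CustomPropertyPrefixing {
    static final String CUSTOM_PREFIX = "customer.";

    // Catalog side: prefix every key that is not a known connector option.
    static Map<String, String> addPrefix(Map<String, String> tableProps, Set<String> knownKeys) {
        Map<String, String> result = new HashMap<>();
        tableProps.forEach((k, v) ->
                result.put(knownKeys.contains(k) ? k : CUSTOM_PREFIX + k, v));
        return result;
    }

    // Factory side: collect and strip the prefixed keys so the remaining
    // options can go through the normal validation unchanged.
    static Map<String, String> stripPrefix(Map<String, String> options) {
        Map<String, String> custom = new HashMap<>();
        options.forEach((k, v) -> {
            if (k.startsWith(CUSTOM_PREFIX)) {
                custom.put(k.substring(CUSTOM_PREFIX.length()), v);
            }
        });
        return custom;
    }
}
```

With this round trip, validation never sees an unprefixed unknown key, so no existing check has to be relaxed.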

@leonardBang leonardBang self-requested a review January 27, 2026 06:51
@leonardBang
Contributor

leonardBang commented Jan 27, 2026

> @LiebingYu Should we add a special prefix to custom properties rather than skipping all the checks? I think it's dangerous for later compatibility.

+1 for adding a specific prefix for custom properties instead of removing the existing validation for nearly all options, which makes the code fragile. Looking at the Kafka and MySQL CDC connectors, they also support custom properties with a prefix, e.g.:
Kafka: https://nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/kafka/#properties

@wuchong
Member

wuchong commented Jan 27, 2026

I think it’s a good idea to use a dedicated prefix for user-extensible properties to avoid potential key conflicts in the future.

Instead of customer.*, consider clearer and more conventional prefixes like tags.*, ext.*, or properties.*, or even begins with _ prefix such as _owner=xxx. These better convey that the keys represent custom, user-defined metadata. The term customer is ambiguous and doesn’t clearly signal extensibility. Moreover, from Fluss’s perspective, any property outside the table.* namespace is effectively a custom property.

Examples:

tags.owner = 'xxx'
ext.owner = 'xxx'
properties.owner = 'xxx'
_owner = 'xxx'

@wuchong
Member

wuchong commented Jan 27, 2026

Personally, I prefer tags.* because it clearly conveys that these are purely custom labels with no semantic impact on connector or storage behavior.

@LiebingYu
Contributor Author

> I think it’s a good idea to use a dedicated prefix for user-extensible properties to avoid potential key conflicts in the future. […]

My question is: do we really need to define a prefix for custom properties? As you said, “from Fluss’s perspective, any property outside the table.* namespace is effectively a custom property.” When the Flink connector processes properties, it can filter out those custom ones and simply skip validating them. From this perspective, custom properties don’t necessarily need a prefix, right? After all, the Fluss server imposes no restrictions on custom properties, but the Flink read/write path introduces an additional constraint (and if users create tables via the API, they can completely bypass this constraint).
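The prefix-free alternative described above could be sketched as follows. Again, this is only an illustration: the namespace checks are placeholders, not the actual Fluss option set.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the alternative: partition the table options into known keys
// (validated as before) and everything else (treated as custom and
// skipped), with no prefix required on the custom keys.
public class SkipCustomValidation {
    // Illustrative notion of "known"; the real option set would come from
    // the connector's declared ConfigOptions.
    static boolean isKnownOption(String key) {
        return key.equals("connector")
                || key.startsWith("table.")
                || key.startsWith("scan.");
    }

    static Map<String, String> optionsToValidate(Map<String, String> all) {
        Map<String, String> known = new HashMap<>();
        all.forEach((k, v) -> {
            if (isKnownOption(k)) {
                known.put(k, v);
            }
        });
        return known;
    }
}
```

The trade-off the reviewers raise is visible here: a misspelled real option (say, a typo in a scan option key that no longer matches a known namespace) would silently be classified as a custom property instead of failing validation, which is the compatibility risk behind the prefix proposal.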



Development

Successfully merging this pull request may close these issues.

Flink Connector Fails to Validate Tables with Custom Properties
