I run Open-WebUI from this Helm Chart with S3 configured for storage. On every restart of the pods, several model files are downloaded into the pod, so startup often takes more than 20 seconds. Can those files be persisted somewhere?
...
v0.6.13 - building the best AI user interface.
https://github.com/open-webui/open-webui
data_config.json: 100% 39.3k/39.3k [00:00<00:00, 34.6MB/s]
.gitattributes: 100% 1.23k/1.23k [00:00<00:00, 6.54MB/s]
onnx/model_O1.onnx: 100% 90.4M/90.4M [00:08<00:00, 10.6MB/s]
onnx/model_O3.onnx: 100% 90.3M/90.3M [00:08<00:00, 10.6MB/s]
onnx/model_quint8_avx2.onnx: 100% 23.0M/23.0M [00:03<00:00, 6.31MB/s]
openvino_model.xml: 100% 211k/211k [00:00<00:00, 1.55MB/s]
onnx/model_O4.onnx: 100% 45.2M/45.2M [00:12<00:00, 3.53MB/s]
openvino_model_qint8_quantized.xml: 100% 368k/368k [00:00<00:00, 1.80MB/s]
openvino/openvino_model.bin: 100% 90.3M/90.3M [00:05<00:00, 17.2MB/s]
openvino/openvino_model_qint8_quantized.(…): 100% 22.9M/22.9M [00:06<00:00, 3.76MB/s]
rust_model.ot: 100% 90.9M/90.9M [00:06<00:00, 14.8MB/s]
onnx/model_qint8_arm64.onnx: 100% 23.0M/23.0M [00:20<00:00, 1.11MB/s]
train_script.py: 100% 13.2k/13.2k [00:00<00:00, 20.2MB/s]
onnx/model_O2.onnx: 100% 90.3M/90.3M [00:22<00:00, 4.03MB/s]
pytorch_model.bin: 100% 90.9M/90.9M [00:09<00:00, 9.93MB/s]
tf_model.h5: 100% 91.0M/91.0M [00:05<00:00, 16.4MB/s]
onnx/model.onnx: 100% 90.4M/90.4M [00:25<00:00, 3.51MB/s]
...
If I set persistent storage to local, those files are persisted and not re-downloaded on every restart. But I would prefer the pods to run as Deployments, not StatefulSets.
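For context, the files in the log are the embedding model cache that Hugging Face / sentence-transformers downloads on first use. One approach I have been considering is to keep the workload a Deployment but mount a separate PVC only for the model cache, and point the cache environment variables at it. `HF_HOME` and `SENTENCE_TRANSFORMERS_HOME` are standard Hugging Face / sentence-transformers variables; the value keys below (`extraEnvVars`, `extraVolumes`, `extraVolumeMounts`) and the PVC name are assumptions and need to be checked against the chart's actual `values.yaml`:

```yaml
# Hypothetical values.yaml snippet -- key names are assumptions,
# verify against the chart's documented values.
extraEnvVars:
  # Redirect the Hugging Face / sentence-transformers caches to the volume.
  - name: HF_HOME
    value: /models-cache
  - name: SENTENCE_TRANSFORMERS_HOME
    value: /models-cache

extraVolumes:
  - name: models-cache
    persistentVolumeClaim:
      claimName: open-webui-models-cache  # pre-created PVC (hypothetical name)

extraVolumeMounts:
  - name: models-cache
    mountPath: /models-cache
```

If the Deployment runs more than one replica, the PVC would need a ReadWriteMany-capable storage class so all pods can share one cache; with a single replica, ReadWriteOnce should suffice.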