[pull] master from axboe:master#314
Open
pull[bot] wants to merge 1354 commits into kubestone:master from axboe:master
Signed-off-by: Runa Guo-oc <RunaGuo-oc@zhaoxin.com> Link: https://lore.kernel.org/r/20250522104032.17519-1-RunaGuo-oc@zhaoxin.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
fio_ioring_queue() bails out if the SQ's tail + 1 == head. This will always be false, since 0 <= tail - head <= entries. Probably it was meant to check whether the SQ is full, i.e. tail == head + entries. (The head index should be loaded with acquire ordering in that case.) Checking for a full SQ isn't necessary anyways, as the prior check for ld->queued == td->o.iodepth already ensures the SQ isn't full. So remove the unnecessary and misleading tail + 1 == head check. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
…sander/fio * 'opt/io_uring-sq-full-check' of https://github.com/calebsander/fio: engines/io_uring: remove unnecessary SQ full check
The crc7 checksum has a 1/128 chance of not detecting data corruption when we mangle data written to the device. Skip these tests when testing the checksum functions to avoid false test failures. Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
If a sync operation is ever requeued after a previous queue attempt returns FIO_Q_BUSY the assertion checking that the IO_U_F_FLIGHT bit is not set will fail because this bit is not cleared when the FIO_Q_BUSY return value is processed. This patch makes sure that we clear IO_U_F_FLIGHT when the queue attempt returns FIO_Q_BUSY for sync operations. The counters that are restored are not defined for sync operations, so we cannot modify them. Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
When an attempt to queue an io_u returns FIO_Q_BUSY, the io_u is added to td->io_u_requeues. If the runtime timeout expires with td->io_u_requeues not empty, the job will not close the relevant file because its file->references will be non-zero, since the requeued io_u still holds a reference to the file. This patch discards the contents of td->io_u_requeues during io_u cleanup, which leads to file closure when its last reference is destroyed. This is relevant for resource-constrained environments. Suggested-by: Jonghwi Jeong <jongh2.jeong@samsung.com> Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
Use the minimal distance to a delimiter to determine option length. The current implementation of opt_len() makes it impossible to locate the option name in a random_distribution zones list combining ':' and ',' characters. opt_len() should try to locate the option name by all possible delimiters and return the minimal-length one instead of returning the first found. Fixes: #1923 Signed-off-by: Leonid Kozlov <leonid.e.kozlov@gmail.com>
… with PI enabled Fix real_file_size calculation when PI is enabled. When PI is enabled, the extended LBA (lba_ext) should be used to calculate real_file_size instead of lba_size. This ensures FIO can access the entire device area correctly. Signed-off-by: Suho Son <suho.son@samsung.com>
…b.com/SuhoSon/fio * 'fix_real_file_size_when_pi_is_enabled' of https://github.com/SuhoSon/fio: io_uring: ensure accurate real_file_size setup for full device access with PI enabled
…hub.com/leonid-kozlov/fio * 'fix-random-distribution-parsing-failure' of https://github.com/leonid-kozlov/fio: parse: use minimum delimiter distance
Cygwin and msys2 now provide nanosleep and clock_gettime, so fio no longer needs to implement them. The presence of our implementations was triggering build failures: https://github.com/axboe/fio/actions/runs/15828051168 Since fio no longer provides clock_gettime, stop unconditionally setting clock_gettime and clock_monotonic to yes on Windows and start detecting these features at build time. These two features are successfully detected by our configure script: https://github.com/vincentkfu/fio/actions/runs/15832278184 Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
For randtrimwrite, we should issue a trim + write pair and those offsets should be the same. This works well for cases without the `offset=` option, but not for cases with it. In cases with the `offset=` option, it's necessary to subtract `file_offset`, which is the value of the `offset=` option, when calculating the offset of the write. This is a bit confusing because `last_start` is an actual offset that has already been issued through trim. However, `last_start` is the value to which `file_offset` is added. Since we add back `file_offset` later on after calling `get_next_block` in `get_next_offset`, `last_start` should be adjusted. Signed-off-by: Jungwon Lee <jjung1.lee@samsung.com> Signed-off-by: Minwoo Im <minwoo.im@samsung.com> [+ updated commit title]
* 'fix-randtrimwrite' of https://github.com/minwooim/fio: io_u: fix offset calculation in randtrimwrite
Previously when using the HTTP engine and nrfiles > 1, the engine would upload a single object N times, instead of N files once. This was due to a file name reference using the first item in the files list, instead of the file name passed in the IO information. Signed-off-by: Renar Narubin <renar.narubin@snowflake.com>
…n/fio * 'http-filename-fix' of https://github.com/sfc-gh-rnarubin/fio: engines/http: fix file name
Security tokens are an element of S3 authorization in some environments. This change adds a parameter to allow users to specify a security token, and pass this to S3 requests with the appropriate header. Signed-off-by: Renar Narubin <renar.narubin@snowflake.com>
* 'security-token' of https://github.com/sfc-gh-rnarubin/fio: engines/http: Add S3 security token support
As commit 813445e ('backend: clean up requeued io_u's') has been applied, the backend cleans up the remaining io_u's in td->io_u_requeues. However, with end_fsync=1, the __get_io_u() function returns an io_u from td->io_u_requeues if any io_u exists, and pops it. As a result, the synced io_u never puts the file it got and, finally, the file cannot be closed. This patch returns an io_u from td->io_u_free_list when td->runstate is TD_FSYNCING, so that the io_u's in td->io_u_requeues are cleaned up and the file is closed appropriately. Signed-off-by: Jonghwi Jeong <jongh2.jeong@samsung.com>
…ngjonghwi/fio * 'fsync-get-io-u-from-freelist' of https://github.com/jeongjonghwi/fio: io_u: get io_u from io_u_freelist when TD_FSYNCING
…seen events" This reverts commit ae8646a. fio_ioring_cqring_reap() returns up to max - events CQEs. However, the return value of fio_ioring_cqring_reap() is used to both add to events and subtract from max. This means that if less than min CQEs are available and the CQ needs to be polled again, max is effectively lowered by the number of CQEs that were available. Adding to events is sufficient to ensure the next call to fio_ioring_cqring_reap() will only return the remaining CQEs. Commit ae8646a ("engines/io_uring: update getevents max to reflect previously seen events") added an incorrect subtraction from max as well, so revert it. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com> Fixes: ae8646a ("engines/io_uring: update getevents max to reflect previously seen events")
fio_ioring_cqring_reap() takes both an events and a max argument and will return up to max - events CQEs. Only one of the two callers passes an existing events count. So remove the events argument and have fio_ioring_getevents() pass max - events instead. This simplifies the function signature and avoids an addition inside the loop over CQEs. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Currently fio_ioring_cqring_reap() loops over each available CQE, re-loading the tail index, incrementing local variables, and checking whether the max requested CQEs have been seen. Avoid the loop by computing the number of available CQEs as tail - head and capping it to the requested max. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
fio_ioring_cqring_reap() can't fail and returns an unsigned variable. So change its return type from int to unsigned. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
There is no point in comparing events to min again after calling io_uring_enter() to wait for events, as it doesn't change either events or min. So remove the loop condition and only compare events to min after updating events. Don't bother repeating fio_ioring_cqring_reap() before calling io_uring_enter() if less than the min requested events were available, as it's highly unlikely the CQ tail will have changed. Avoid breaking and then branching on the return value by just returning the value from inside the loop. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Add a relaxed-ordering atomic store helper, analogous to atomic_store_release() and atomic_load_relaxed(). Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
fio_ioring_getevents() advances the io_uring CQ head index in fio_ioring_cqring_reap() before fio_ioring_event() is called to read the CQEs. In general this would allow the kernel to reuse the CQE slot prematurely, but the CQ is sized large enough for the maximum iodepth and a new io_uring operation isn't submitted until the CQE is processed. Add a comment to explain why it's safe to advance the CQ head index early. Use relaxed ordering for the store, as there aren't any accesses to the CQEs that need to be ordered before the store. Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
The `ipc` global variable is initialized in the run_threads function before running the threads, which access it during IO. However, when we print out statistics, `ipc` is cleaned up (namely its pointer member `ipts`). Since it is possible to print out stats periodically, we can free `ipc->ipts` while other threads are still using it, which results in a segmentation fault. This commit avoids that by freeing `ipc->ipts` at the end of the `fio_backend` function. This addresses #1909, which causes a segfault when the --status-interval flag is set. Signed-off-by: Jakub Semrič <jakubsemric@gmail.com>
* 'prof-use-after-free-fix' of https://github.com/jsemric/fio: Fix use-after-free of idle_prof_common variable
In zonemode=zbd, random write workloads targeting zoned block devices with max write zones limits such as max_open_zones cannot do write operations to randomly chosen offsets because of the zoned block device constraint of writing at write pointers. To adjust the offsets to valid positions, fio calls the function zbd_convert_to_write_zone(). This function checks the current write target zones as the next offset candidates but may fail depending on the conditions of those zones. In such cases, the function waits for zone condition changes before retrying. However, the retry logic begins with the zone where the previous attempt ended, and selects the zones that were previously write targets. Consequently, the same zones are repeatedly chosen for writing, resulting in writes concentrating on certain zones despite the workload specifying random write. To ensure proper zone selection for random writes, modify zbd_convert_to_write_zone() to retry the zone selection based on the original offset provided to the function. The local variable 'zb' keeps the reference to the zone corresponding to the original offset. Use 'zb' at the retry attempt start. Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Link: https://lore.kernel.org/r/20260303013159.3543787-2-shinichiro.kawasaki@wdc.com Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
Currently, zbd_convert_to_write_zones() calls io_u_quiesce() when the number of write target zones hits one of the limits of write zones. This wait by io_u_quiesce() significantly degrades performance. While I tried to remove the io_u_quiesce(), I observed that test case 58 of t/zbd/test-zbd-support failed with null_blk devices that have a max_active_zones limit set. The cause of the failure is incorrect write target zone accounting in zbd_convert_to_write_zones(). This function checks the current write target zones, and selects one of them as the next write target zone. After the zone selection, it locks the zone. However, when the zone is locked, another job such as a trim workload or a write workload with the zone_reset_threshold option might have already reset the zone and removed it from the write target zones array. This unexpected zone removal from the array caused incorrect zone accounting and the test case failure. To avoid the incorrect zone accounting, call zbd_write_zone_get() after the selected zone gets locked. If the zone is removed from the write target zones array, the function adds the zone back to the array. Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Link: https://lore.kernel.org/r/20260303013159.3543787-3-shinichiro.kawasaki@wdc.com Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
When the specified block size is not aligned with the zone size or the write pointer positions at workload start, write workloads create unwritten remainder areas at the ends of zones. These remainder areas leave zones in the open condition. This disrupts the intended write target zone selection. Previous commits e1a1b59 ("zbd: finish zones with remainder smaller than minimum write block size") and e2e29bf ("zbd: finish zone when all random write target zones have small remainder") attempted to solve this problem by issuing a zone finish operation for zones with small remainders. However, this approach caused performance degradation for two reasons. First, the zone finish operation requires substantial execution time. Second, the zone finish operation requires waiting for in-flight writes from other jobs to complete, which is done by calling io_u_quiesce() before the zone finish operation. To avoid the performance degradation, introduce the new option named "write_zone_remainder". When the option is specified, issue writes to the remainder areas instead of issuing a zone finish operation. The write operation puts the zones in the full condition in the same manner as the zone finish operation, freeing up the zone resource of the device and enabling writing to other zones. Also when the option is set, skip the io_u_quiesce() which was required before the zone finish operation. The performance benefit of eliminating the waits on in-flight writes is particularly significant in asynchronous I/O workloads, where the write operations to the remainder areas are managed as part of queued I/Os. The drawback of this approach is that writing these remainders requires write sizes smaller than the minimum block size. As a result, when using the write_zone_remainder option, the random map feature must be disabled using the norandommap=1 option, which is automatically done when the option is specified.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Link: https://lore.kernel.org/r/20260303013159.3543787-4-shinichiro.kawasaki@wdc.com Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
The recent commit introduced the option write_zone_remainder. Explain how it changes handling of zone end remainders. Also, amend the zbd zone mode description to explain the default handling of zone end remainders. Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Link: https://lore.kernel.org/r/20260303013159.3543787-5-shinichiro.kawasaki@wdc.com Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
The previous commit introduced the option write_zone_remainder. To confirm the option works as expected, introduce the new option -m to the test scripts test-zbd-support and run-tests-against-nullb. Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Link: https://lore.kernel.org/r/20260303013159.3543787-6-shinichiro.kawasaki@wdc.com Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
When the -m option is provided for t/zbd/test-zbd-support, the option write_zone_remainder is specified to fio. In this case, test case 14 fails because the random map feature is disabled and random writes for conventional zones may overlap. To avoid the failure, modify the test case to count the number of overlaps. Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Link: https://lore.kernel.org/r/20260303013159.3543787-7-shinichiro.kawasaki@wdc.com Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
When the -m option is provided for t/zbd/test-zbd-support, the option write_zone_remainder is specified to fio. In this case, test case 33 fails because fio writes to the small remainder areas at zone ends, which changes the number of writes. To avoid the failure, modify the test condition of the test case. Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Link: https://lore.kernel.org/r/20260303013159.3543787-8-shinichiro.kawasaki@wdc.com Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
When the -m option is provided for t/zbd/test-zbd-support, the option write_zone_remainder is specified to fio. In this case, test case 71 fails because fio writes to the small remainder areas at zone ends, which changes the number of writes. To avoid the failure, modify the test condition of the test case. Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Link: https://lore.kernel.org/r/20260303013159.3543787-9-shinichiro.kawasaki@wdc.com Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
When parsing `ioengine=external:/path`, `td->o.ioengine_so_path` was
previously assigned as a pointer directly into the `td->o.ioengine`
string buffer. If `td->o.ioengine` was subsequently reallocated (e.g.,
due to multiple ioengine definitions or the use of include directives),
`ioengine_so_path` became a dangling pointer, resulting in a
heap-use-after-free during `dlopen_ioengine`.
Fix this by ensuring `ioengine_so_path` owns its own memory allocation
independent of the `ioengine` string. Since this field is not defined
as a standard option entry, manual lifecycle management is implemented:
1. **str_ioengine_external_cb**: Use `strdup` to store the path and
free any previously allocated string.
2. **fio_options_mem_dupe**: Explicitly duplicate the string when
copying thread options.
3. **fio_options_free**: Explicitly free the string when tearing down
thread options.
This approach resolves the UAF while adhering to the requirement of not
adding a new option entry to the parser. Verified with ASan and existing
test suites.
Signed-off-by: Matthew Suozzo <msuozzo@google.com>
* 'push-lnvrzuqpnylp' of https://github.com/msuozzo/fio: options: fix heap-use-after-free in ioengine_so_path
The --bandwidth-log option currently uses a hard-coded agg-[read,write,trim]_bw.log filename for its log files. This patch provides a means to specify the stub filename for these log files. The value assigned to this option (if supplied) will replace the "agg" in the filename. If no value is supplied the original agg-*_bw.log filenames will be used. This is useful for repeated invocations of Fio with the --bandwidth-log option. Without this option the user would have to rename the agg-*_bw.log files between invocations to avoid losing data. Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
Switch to an updated checkout@v6. The original v4 was triggering this warning: Node.js 20 actions are deprecated. The following actions are running on Node.js 20 and may not work as expected: actions/checkout@v4, actions/upload-artifact@v4. Actions will be forced to run with Node.js 24 by default starting June 2nd, 2026. Please check if updated versions of these actions are available that support Node.js 24. To opt into Node.js 24 now, set the FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the runner or in your workflow file. Once Node.js 24 becomes the default, you can temporarily opt out by setting ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see: https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/ Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
Switch to v6 of the upload-artifact action. The original v4 action was triggering this warning: Node.js 20 actions are deprecated. The following actions are running on Node.js 20 and may not work as expected: actions/checkout@v4, actions/upload-artifact@v4. Actions will be forced to run with Node.js 24 by default starting June 2nd, 2026. Please check if updated versions of these actions are available that support Node.js 24. To opt into Node.js 24 now, set the FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the runner or in your workflow file. Once Node.js 24 becomes the default, you can temporarily opt out by setting ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see: https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/ Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
Currently, rate_iops does not produce the expected I/O rate with workloads that use 'bssplit' option. Consider the following example configuration - [global] direct=1 time_based runtime=30s ioengine=io_uring thread=1 [bssplit_rate_iops_repro] filename=/dev/sdX rw=randread iodepth=8 bs=64K rate_iops=50 This works correctly and ~50 IOPS I/O rate is logged during the run. If we replace 'bs=64K' with the following bssplit option - bssplit=32ki/20:64ki/40:256ki/10:512ki/25:1mi/5 in the configuration above, then some incorrect (much lower) IOPS values are observed to be in effect at run time. This problem happens because fio, in order to derive the required I/O rate from 'rate_iops' value provided by the user, simply multiplies the IOPS value by the minimum block size (min_bs). Once bps I/O rate is calculated this way, the processing for 'rate' and 'rate_iops' becomes identical. This works if the I/O issued has the uniform min_bs, as in case of using 'bs=64K'. However, with 'bssplit' option in effect, fio may issue I/O with sizes that are much different from min_bs. Yet the code in usec_for_io() currently always calculates I/O issue delays based on min_bs leading to incorrect IOPS being produced. Fix this by modifying usec_for_io() function to check for bssplit+rate_iops being in effect. For this case, derive the IOPS rate from bps 'rate' member of thread data and then calculate the delay to the next I/O using the IOPS value, not the bps rate. Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Link: https://patch.msgid.link/20260310205804.477935-1-dmitry.fomichev@wdc.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
Add new keys to the JSON data specifying the units for latency_target and latency_window. Also add units for these values in the normal output. Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
Add more error numbers (errno) after ERANGE to support various errno strings for options like `--ignore_error=ETIMEDOUT`. The unvme-cli libunvmed ioengine returns ETIMEDOUT if a command times out. To mask this situation with the `--ignore_error=` option, errnos after ERANGE should be supported in `str2error()`. Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
o->comm may be NULL if job initialization fails or the job structure is only partially initialized before thread creation. Calling prctl(PR_SET_NAME, NULL) results in a NULL pointer dereference inside strncpy(). Add a NULL check before calling prctl(). Fixes: #2072 Reported-by: Criticayon Black Signed-off-by: Criticayon Black <1318083585@qq.com>
* 'posix-errnos' of https://github.com/minwooim/fio: options: add support more POSIX errnos
* 'fix-null-comm-prctl' of https://github.com/Criticayon/fio: backend: guard prctl(PR_SET_NAME) against NULL thread name
Issue: __show_running_run_stats() acquires stat_sem then blocks on each worker's rusage_sem. But workers need stat_sem to reach the code that posts rusage_sem, creating an ABBA deadlock. The verify path deadlocks via a blocking fio_sem_down(stat_sem). The IO path's trylock loop can mitigate the contention but times out under sustained contention with multiple workers. Fix: Moved rusage collection before the stat_sem acquire so the stat thread never holds stat_sem while waiting on rusage_sem. Added a double-check of td->runstate after setting update_rusage to guard against blocking on a worker that has already exited. The trylock loop and check_update_rusage() calls are retained as precautions. Signed-off-by: Ryan Tedrick <ryan.tedrick@nutanix.com>
…/fio * 'fix_statsem_deadlock' of https://github.com/RyanTedrick/fio: Fix stat_sem/rusage_sem deadlock during stats collection
prune_io_piece_log() is called only at the start of each loop iteration, so io_piece entries accumulated during the final do_io() run are never explicitly freed. When fio runs as a process this goes unnoticed because the OS reclaims the heap on exit. When fio is embedded as a pthread, which is a use-case of unvme-cli, the parent process keeps running, so those allocations become a genuine memory leak proportional to the number of write IOs logged for verify. Signed-off-by: Haeun Kim <hanee.kim@samsung.com> Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
* 'ipo' of https://github.com/minwooim/fio: iolog: free io_piece log on thread cleanup
Introduce a new ioengine that mmaps anonymous memory and copies data on read/write to trigger page faults. This allows us to leverage fio's powerful framework for MM-related testing, and will ideally allow us to quickly expand testing by leveraging previously FS-related fio scripts. Signed-off-by: Nico Pache <npache@redhat.com> Link: https://patch.msgid.link/20260408012004.198115-2-npache@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
Document the new page fault engine. Signed-off-by: Nico Pache <npache@redhat.com> Link: https://patch.msgid.link/20260408012004.198115-3-npache@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge page fault engine from Nico: "This series introduces a new page_fault ioengine for Anonymous memory testing. This enables using fio’s existing framework and job files for memory management style workloads without relying on a filesystem. An example job file is included to demonstrate usage and lays the groundwork for how we plan on utilizing fio to test a number of MM related workloads." * anon-fault: engines/page_fault: minor style cleanups Documentation: update the documentation to include the page_fault engine page_fault: add mmap-backed ioengine for anonymous faults
See Commits and Changes for more details.
Created by
pull[bot]