Conversation
Pull request overview
Updates the L2 test runner suite and OCI config templates, primarily to adjust how test outputs are collected/executed and to tweak generated OCI memory settings.
Changes:
- Adjusted test result JSON aggregation in the L2 test runner.
- Fixed CLI argument splitting for `diff` in bundle generation tests.
- Reworked basic sanity test async log-reading to use threading; updated annotation test startup/teardown flow.
- Removed `swap`/`swappiness` fields from the OCI JSON templates' memory resources section.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/L2_testing/test_runner/runner.py | Changes when per-suite test_results.json is read/attached into the aggregated report. |
| tests/L2_testing/test_runner/bundle_generation.py | Splits diff flags into distinct argv entries for run_command_line. |
| tests/L2_testing/test_runner/basic_sanity_tests.py | Replaces multiprocessing-based async reader with a thread-based approach. |
| tests/L2_testing/test_runner/annotation_tests.py | Starts containers via spec path and adds explicit stop after validation. |
| bundle/lib/source/templates/OciConfigJsonVM1.0.2-dobby.template | Removes swap/swappiness from generated OCI memory resources. |
| bundle/lib/source/templates/OciConfigJson1.0.2-dobby.template | Removes swap/swappiness from generated OCI memory resources. |
Comments suppressed due to low confidence (1)
tests/L2_testing/test_runner/basic_sanity_tests.py:134
`read_asynchronous` now uses a regular `threading.Thread`, but on timeout there's no way to stop it (previously `multiprocessing.Process.terminate()` did this). Because the thread is non-daemon, it can keep the test runner alive indefinitely, and repeated calls will accumulate blocked threads reading `proc.stderr`. Make the thread a daemon or add a cancellation mechanism (e.g., `Event` + non-blocking/timeout reads, or closing the stream) and ensure the loop exits cleanly when the subprocess ends (e.g., `readline()` returns an empty string) to avoid a busy loop.
found = False
reader = threading.Thread(target=_wait_for_string, args=(proc, string_to_find))
test_utils.print_log("Starting multithread read", test_utils.Severity.debug)
reader.start()
reader.join(timeout)
# if thread still running
if reader.is_alive():
    test_utils.print_log("Reader still exists, closing", test_utils.Severity.debug)
    test_utils.print_log("Not found string \"%s\"" % string_to_find, test_utils.Severity.error)
else:
    found = True
return found
# Helper function for multiprocessing - must be at module level to be picklable
def _wait_for_string(proc, string_to_find):
    """Waits indefinitely until string is found in process. Must be run with timeout multiprocess.
The new implementation uses threading, but the comments/docstrings still refer to “multiprocessing” and “picklable” behavior (e.g., module-level helper needed for multiprocessing). This is now misleading; update the comments/docstrings to reflect the threading-based approach (or remove the multiprocessing-specific notes).
- # Helper function for multiprocessing - must be at module level to be picklable
- def _wait_for_string(proc, string_to_find):
-     """Waits indefinitely until string is found in process. Must be run with timeout multiprocess.
+ # Helper function used by read_asynchronous to wait for a given string in process output.
+ def _wait_for_string(proc, string_to_find):
+     """Waits until the given string is found in the process output.
+     Intended to be run in a separate thread, with any timeout handled by the caller.
testsuites_info[tested_groups_count]['testsuite'] = []
testsuites_info[tested_groups_count]["testsuite"].append(current_test_result)
testsuites_info is appended for every executed test group, but the code indexes into it using tested_groups_count. If any prior group returns total == 0, tested_groups_count won’t advance and the next non-zero group will attach its test_results.json to the wrong entry (overwriting/mis-associating results). Consider indexing the most recently appended entry (e.g., testsuites_info[-1]) or only appending to testsuites_info when total > 0 so the index stays consistent.
- testsuites_info[tested_groups_count]['testsuite'] = []
- testsuites_info[tested_groups_count]["testsuite"].append(current_test_result)
+ testsuites_info[-1]['testsuite'] = []
+ testsuites_info[-1]["testsuite"].append(current_test_result)
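The indexing drift described above can be sketched in isolation (the group names and the simplified `record_group` helper are hypothetical, not the runner's real code):

```python
# Why indexing by a separately-maintained counter drifts when a group
# contributes zero tests, while [-1] always targets the latest entry.
testsuites_info = []
tested_groups_count = 0

def record_group(name, total):
    global tested_groups_count
    testsuites_info.append({"name": name, "testsuite": None})
    if total > 0:
        # indexing the most recently appended entry is always consistent
        testsuites_info[-1]["testsuite"] = ["results for " + name]
        tested_groups_count += 1

record_group("group_a", total=0)   # appended, but the counter does not advance
record_group("group_b", total=3)

# With counter-based indexing, group_b's results would have been attached to
# testsuites_info[0] (group_a). With [-1] they land on the correct entry:
assert testsuites_info[1]["testsuite"] == ["results for group_b"]
assert testsuites_info[0]["testsuite"] is None
```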
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated 2 comments.
Comments suppressed due to low confidence (1)
tests/L2_testing/test_runner/basic_sanity_tests.py:134
`read_asynchronous` now uses a thread that may outlive the timeout and keeps reading `proc.stderr` indefinitely. Since the thread is non-daemon and there is no stop condition, a timeout can leave a background reader running (and a later call starts another reader on the same pipe), which can hang the test runner or cause races consuming stderr output. Add a cancellation/stop mechanism (e.g., `threading.Event` + loop exit on timeout/EOF), and/or make the thread a daemon and ensure `_wait_for_string` terminates when the process exits / EOF is reached.
# Helper function for multiprocessing - must be at module level to be picklable
def _wait_for_string(proc, string_to_find):
    """Waits indefinitely until string is found in process. Must be run with timeout multiprocess.

    Parameters:
    proc (process): process in which we want to read
    string_to_find (string): what we want to find in process

    Returns:
    None: Returns nothing if found, never ends if not found
    """
    while True:
        # notice that all data are in stderr not in stdout, this is DobbyDaemon design
        output = proc.stderr.readline()
        if string_to_find in output:
            test_utils.print_log("Found string \"%s\"" % string_to_find, test_utils.Severity.debug)
            return


# we need to do this asynchronous as if there is no such string we would end in endless loop
def read_asynchronous(proc, string_to_find, timeout):
    """Reads asynchronous from process. Ends when found string or timeout occurred.

    Parameters:
    proc (process): process in which we want to read
    string_to_find (string): what we want to find in process
    timeout (float): how long we should wait if string not found (seconds)

    Returns:
    found (bool): True if found string_to_find inside proc.
    """
    found = False
    reader = threading.Thread(target=_wait_for_string, args=(proc, string_to_find))
    test_utils.print_log("Starting multithread read", test_utils.Severity.debug)
    reader.start()
    reader.join(timeout)
    # if thread still running
    if reader.is_alive():
        test_utils.print_log("Reader still exists, closing", test_utils.Severity.debug)
        test_utils.print_log("Not found string \"%s\"" % string_to_find, test_utils.Severity.error)
    else:
        found = True
    return found
@@ -242,6 +272,8 @@ def execute_test():
    output_table = []

    with test_utils.dobby_daemon(), test_utils.untar_bundle(container_name) as bundle_path:
execute_test uses test_utils.untar_bundle(container_name) but doesn't check the new untar_bundle.valid flag before calling sanitise_bundle_config and running the tests. If extraction/validation fails, the loop will run with a non-existent/partial bundle path and the failure mode becomes harder to diagnose. Consider switching to the bundle_ctx = test_utils.untar_bundle(...); with bundle_ctx as bundle_path: pattern and return early with a clear test failure when bundle_ctx.valid is false.
- with test_utils.dobby_daemon(), test_utils.untar_bundle(container_name) as bundle_path:
+ bundle_ctx = test_utils.untar_bundle(container_name)
+ if not bundle_ctx.valid:
+     test_utils.print_log(
+         f"Failed to extract or validate bundle for container '{container_name}'. Aborting tests.",
+         test_utils.Severity.error,
+     )
+     stop_wpeframework(wpeframework)
+     return 1
+ with test_utils.dobby_daemon(), bundle_ctx as bundle_path:
def sanitise_bundle_config(bundle_path):
    """Remove test-only required plugins from bundle config for wider platform compatibility."""
    config_path = bundle_path + "/config.json"
In sanitise_bundle_config, config_path = bundle_path + "/config.json" is manual path concatenation. Using os.path.join(bundle_path, "config.json") avoids issues with missing/trailing slashes and keeps path handling consistent with the rest of the test runner.
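A minimal illustration of the difference (POSIX path semantics assumed):

```python
import os.path

# os.path.join normalizes the separator handling that manual string
# concatenation gets wrong when the base path already ends with a slash.
assert os.path.join("/tmp/bundle", "config.json") == "/tmp/bundle/config.json"
assert os.path.join("/tmp/bundle/", "config.json") == "/tmp/bundle/config.json"
# Manual concatenation produces a doubled separator in the trailing-slash case:
assert "/tmp/bundle/" + "/config.json" == "/tmp/bundle//config.json"
```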
Pull request overview
Copilot reviewed 12 out of 12 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
tests/L2_testing/test_runner/basic_sanity_tests.py:134
- On timeout, the reader thread is left running (the `is_alive()` branch) and there is no way to stop it (previously `multiprocessing.Process.terminate()` was used). This can keep reading from the subprocess stderr indefinitely and interfere with later reads/cleanup. Mark the thread as daemon and/or signal it to stop (`Event` + close the pipe / stop on EOF) before returning.
reader = threading.Thread(target=_wait_for_string, args=(proc, string_to_find))
test_utils.print_log("Starting multithread read", test_utils.Severity.debug)
reader.start()
reader.join(timeout)
# if thread still running
if reader.is_alive():
    test_utils.print_log("Reader still exists, closing", test_utils.Severity.debug)
    test_utils.print_log("Not found string \"%s\"" % string_to_find, test_utils.Severity.error)
else:
    found = True
return found
# Only clean up if path exists (extraction might have failed)
if path.exists(self.path):
    print_log("Cleaning up bundle at: %s" % self.path, Severity.debug)
    run_command_line(["rm",
                      "-rf",
                      self.path])
else:
    print_log("Bundle path doesn't exist, skipping cleanup: %s" % self.path, Severity.debug)
Cleanup currently removes self.path, which may have been reassigned to a nested directory after searching for config.json (e.g., <bundle>/<bundle>/). That can leave the originally extracted top-level directory behind and accumulate junk between runs. Track the actual extraction root separately and remove that in __exit__ (and consider using self.actual_bundle_dir or removing it if unused).
- # Only clean up if path exists (extraction might have failed)
- if path.exists(self.path):
-     print_log("Cleaning up bundle at: %s" % self.path, Severity.debug)
-     run_command_line(["rm",
-                       "-rf",
-                       self.path])
- else:
-     print_log("Bundle path doesn't exist, skipping cleanup: %s" % self.path, Severity.debug)
+ # Prefer an explicit extraction root if available, otherwise fall back to self.path
+ cleanup_path = getattr(self, "actual_bundle_dir", self.path)
+ # Only clean up if path exists (extraction might have failed)
+ if path.exists(cleanup_path):
+     print_log("Cleaning up bundle at: %s" % cleanup_path, Severity.debug)
+     run_command_line(["rm",
+                       "-rf",
+                       cleanup_path])
+ else:
+     print_log("Bundle path doesn't exist, skipping cleanup: %s" % cleanup_path, Severity.debug)
with tarfile.open(bundle_tarball, 'r:gz') as tar:
    tar.extractall(path=bundle_dir)
Using tarfile.extractall() on an archive without validating member paths allows path traversal (files with ../ or absolute paths) and can overwrite arbitrary files under the working directory. Even for a repo utility script, it’s safer to implement a “safe extract” that rejects members escaping bundle_dir before extracting.
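A sketch of such a "safe extract" helper (the `safe_extractall` name and the in-memory demo tarball are illustrative, not part of the repo):

```python
import io
import os
import tarfile

def safe_extractall(tar, dest):
    """Reject members that would escape dest (absolute paths or ../
    traversal) before delegating to extractall."""
    dest_root = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest_root, member.name))
        if not (target == dest_root or target.startswith(dest_root + os.sep)):
            raise ValueError("blocked path traversal in tar member: %s" % member.name)
    tar.extractall(path=dest_root)

# Build a malicious in-memory tarball with a '../' member to demonstrate.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="../evil.txt")
    data = b"oops"
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    try:
        safe_extractall(tar, "/tmp/bundle_extract_demo")
        blocked = False
    except ValueError:
        blocked = True
assert blocked
```

Recent Python releases (3.12+, also backported as a security fix to older 3.x point releases) support the same idea natively via tarfile extraction filters, e.g. `tar.extractall(path=dest, filter="data")`.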
while True:
    # notice that all data are in stderr not in stdout, this is DobbyDaemon design
    output = proc.stderr.readline()
    if string_to_find in output:
        test_utils.print_log("Found string \"%s\"" % string_to_find, test_utils.Severity.debug)
        return
_wait_for_string loops forever calling proc.stderr.readline(). If the process exits and the pipe reaches EOF, readline() returns an empty string and this becomes a tight infinite loop, potentially pegging CPU and leaking a background thread. Handle EOF (break when output == "") and add a cancellation mechanism (e.g., threading.Event) so the reader can stop cleanly on timeout.
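One possible shape for an EOF-aware, cancellable reader (the `wait_for_string_threaded` name is hypothetical; the demo subprocess just prints a marker to stderr):

```python
import subprocess
import threading

def wait_for_string_threaded(proc, string_to_find, timeout):
    """Breaks on EOF (readline() returns '') and honours a stop Event,
    so a timed-out reader does not spin or block forever."""
    found = threading.Event()
    stop = threading.Event()

    def _reader():
        while not stop.is_set():
            line = proc.stderr.readline()
            if line == "":          # EOF: subprocess closed the pipe
                return              # avoids the tight busy loop
            if string_to_find in line:
                found.set()
                return

    t = threading.Thread(target=_reader, daemon=True)  # daemon: never blocks interpreter exit
    t.start()
    found.wait(timeout)             # returns early as soon as the string is seen
    stop.set()                      # ask a still-running reader to stop at the next line
    return found.is_set()

# Demo against a short-lived subprocess that writes to stderr.
proc = subprocess.Popen(
    ["python3", "-c", "import sys; print('hello marker', file=sys.stderr)"],
    stderr=subprocess.PIPE, text=True)
assert wait_for_string_threaded(proc, "marker", timeout=5) is True
proc.wait()
```

Note that `stop.set()` cannot interrupt a `readline()` that is already blocked on an open pipe; the daemon flag and the EOF branch are what prevent a lingering thread from keeping the runner alive or spinning after the subprocess exits.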
  launch_result = test_utils.launch_container(container_id, bundle_path)

- if launch_result:
-     return validate_output_file(container_id, expected_output)
+ # give logging plugin a moment to flush file output
+ sleep(0.5)
+ validation_result = validate_output_file(container_id, expected_output)

- return False, "Container did not launch successfully"
+ # Some environments report launch failure during cleanup hooks even when
+ # the container has actually run and produced expected output.
+ if validation_result[0]:
+     return validation_result
validation_result is computed (and can be returned as a pass) even when launch_container returns False. Since validate_output_file reads /tmp/<container>.log without clearing it first, a stale log from a previous run can make the test pass despite a launch failure. Only validate when launch_result is true, or delete/truncate the log file before launching so results can’t be contaminated.
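A sketch of the "truncate before launch" option (the `truncate_stale_log` helper and the `/tmp/<container>.log` layout follow the review's description; they are not verified repo code):

```python
import os

def truncate_stale_log(container_id, log_dir="/tmp"):
    """Clear any leftover log from a previous run before launching, so a
    later validation step cannot pass on stale output."""
    log_path = os.path.join(log_dir, container_id + ".log")
    if os.path.exists(log_path):
        # truncate rather than delete, in case something holds the path open
        with open(log_path, "w"):
            pass
    return log_path

# Demo: a "stale" log survives from a previous run, then is emptied.
path = truncate_stale_log("demo_container", log_dir="/tmp")
with open(path, "w") as f:
    f.write("stale output from previous run\n")
truncate_stale_log("demo_container", log_dir="/tmp")
with open(path) as f:
    assert f.read() == ""
os.remove(path)
```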
else:
    print_log("Bundle validation successful at: %s" % self.path, Severity.debug)

def __enter__(self):
untar_bundle.__enter__ returns self.path even when extraction/validation failed (self.valid == False). Callers that don’t explicitly check .valid will proceed with a non-existent/incorrect bundle path. Consider raising an exception from __enter__ when invalid (or returning None and documenting it) so failures are fail-fast and harder to ignore.
- def __enter__(self):
+ def __enter__(self):
+     # Fail fast if bundle extraction/validation failed
+     if not getattr(self, "valid", True):
+         raise RuntimeError("Bundle validation failed; bundle path is invalid")
Pull request overview
Copilot reviewed 14 out of 14 changed files in this pull request and generated 4 comments.
# Clean up the actual directory we found, not the expected one
# Only clean up if path exists (extraction might have failed)
if path.exists(self.path):
    print_log("Cleaning up bundle at: %s" % self.path, Severity.debug)
    run_command_line(["rm",
                      "-rf",
                      self.path])
else:
    print_log("Bundle path doesn't exist, skipping cleanup: %s" % self.path, Severity.debug)
untar_bundle.__exit__ only deletes self.path, but self.path can be reassigned to a nested directory when config.json is found in a subdirectory. In that case, the originally-extracted top-level bundle directory is left behind, which can pollute subsequent test runs and consume disk space. Track the extraction root separately (e.g., self.extracted_root = get_bundle_path(container_id + "_bundle")) and ensure cleanup removes that root (or both root + selected candidate).
- # Clean up the actual directory we found, not the expected one
- # Only clean up if path exists (extraction might have failed)
- if path.exists(self.path):
-     print_log("Cleaning up bundle at: %s" % self.path, Severity.debug)
-     run_command_line(["rm",
-                       "-rf",
-                       self.path])
- else:
-     print_log("Bundle path doesn't exist, skipping cleanup: %s" % self.path, Severity.debug)
+ # Clean up any directories associated with this bundle.
+ # Prefer cleaning both the extraction root (if tracked) and the
+ # final candidate path, to avoid leaving behind the top-level bundle.
+ paths_to_cleanup = []
+ # Extraction root, if the rest of the class has set it.
+ if hasattr(self, "extracted_root") and self.extracted_root:
+     paths_to_cleanup.append(self.extracted_root)
+ # Always attempt to clean up self.path as before, unless it is
+ # already included as the extraction root.
+ if getattr(self, "path", None) and self.path not in paths_to_cleanup:
+     paths_to_cleanup.append(self.path)
+ for bundle_path in paths_to_cleanup:
+     if path.exists(bundle_path):
+         print_log("Cleaning up bundle at: %s" % bundle_path, Severity.debug)
+         run_command_line(["rm",
+                           "-rf",
+                           bundle_path])
+     else:
+         print_log("Bundle path doesn't exist, skipping cleanup: %s" % bundle_path, Severity.debug)
2. Patches config.json to remove cgroupv2-incompatible settings
3. Repacks the bundle

Changes made for cgroupv2 compatibility:
- Removes 'swappiness' from memory resources (not supported in cgroupv2)
- Sets realtimeRuntime and realtimePeriod to valid values or removes them
- Updates rootfsPropagation to 'slave' for better compatibility
The module docstring lists behavior that the script doesn’t implement (e.g., it says it “updates rootfsPropagation to 'slave'” and “sets realtimeRuntime/realtimePeriod to valid values”, but the code currently only removes rootfsPropagation and only deletes realtime* keys when they are null). Update the docstring to reflect what the script actually does, or adjust the implementation to match the documented behavior.
- 2. Patches config.json to remove cgroupv2-incompatible settings
- 3. Repacks the bundle
-
- Changes made for cgroupv2 compatibility:
- - Removes 'swappiness' from memory resources (not supported in cgroupv2)
- - Sets realtimeRuntime and realtimePeriod to valid values or removes them
- - Updates rootfsPropagation to 'slave' for better compatibility
+ 2. Patches config.json to remove cgroupv2-incompatible or problematic settings
+ 3. Repacks the bundle
+
+ Changes made for cgroupv2 compatibility:
+ - Removes 'swappiness' from memory resources (not supported in cgroupv2)
+ - Removes null realtimeRuntime/realtimePeriod CPU settings and drops an empty 'cpu' section
+ - Removes rootfsPropagation settings that cause issues in some user-namespace environments
import os
import shutil
import subprocess
os and subprocess are imported but not used in this script. Removing unused imports avoids confusion and keeps the script minimal for CI usage.
- import os
- import shutil
- import subprocess
+ import shutil
with test_utils.dobby_daemon(), test_utils.untar_bundle(container_name) as bundle_path:
    sanitise_bundle_config(bundle_path)
untar_bundle now tracks whether extraction/validation succeeded via bundle_ctx.valid, but this test still uses with test_utils.untar_bundle(...) as bundle_path: and continues even if extraction failed or config.json wasn’t found (which can turn into confusing downstream failures). Align with other tests by creating the context (bundle_ctx = untar_bundle(...)), entering it, and returning a clear failure early when bundle_ctx.valid is false.
Pull request overview
Copilot reviewed 14 out of 14 changed files in this pull request and generated 4 comments.
  found = False
- reader = multiprocessing.Process(target=wait_for_string, args=(proc, string_to_find), kwargs={})
+ reader = threading.Thread(target=_wait_for_string, args=(proc, string_to_find), daemon=True)
  test_utils.print_log("Starting multithread read", test_utils.Severity.debug)
  reader.start()
  reader.join(timeout)
  # if thread still running
  if reader.is_alive():
      test_utils.print_log("Reader still exists, closing", test_utils.Severity.debug)
-     reader.terminate()
      test_utils.print_log("Not found string \"%s\"" % string_to_find, test_utils.Severity.error)
  else:
read_asynchronous starts a daemon thread that blocks on proc.stderr.readline(), but on timeout there’s no way to stop that thread. If a timeout happens, the lingering reader thread can keep consuming stderr lines and interfere with later read_asynchronous calls on the same proc (or just leak threads until the process exits). Consider using selectors/non-blocking IO to implement a real timeout, or keep the previous multiprocessing approach where the worker can be terminated on timeout.
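A selectors-based alternative might look like this (`read_until` is a hypothetical helper; it assumes a POSIX platform, since `select` on pipes does not work on Windows):

```python
import selectors
import subprocess
import time

def read_until(proc, string_to_find, timeout):
    """Implements a real timeout in the calling thread: poll the stderr fd
    with selectors and give up at the deadline, so no reader lingers."""
    sel = selectors.DefaultSelector()
    sel.register(proc.stderr, selectors.EVENT_READ)
    deadline = time.monotonic() + timeout
    buf = ""
    try:
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return False                      # deadline reached
            if not sel.select(timeout=remaining):
                return False                      # timed out with no data
            chunk = proc.stderr.read1(4096).decode(errors="replace")
            if chunk == "":
                return False                      # EOF: process closed the pipe
            buf += chunk
            if string_to_find in buf:
                return True
    finally:
        sel.unregister(proc.stderr)

# Demo: the marker is found well before the timeout expires.
proc = subprocess.Popen(
    ["python3", "-c", "import sys; print('ready marker', file=sys.stderr)"],
    stderr=subprocess.PIPE)
assert read_until(proc, "marker", timeout=5) is True
proc.wait()
```

The design trade-off versus a reader thread: all I/O stays on the calling thread, so a timeout truly ends the read, at the cost of POSIX-only pipe polling.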
# Helper function for multiprocessing - must be at module level to be picklable
def _wait_for_string(proc, string_to_find):
    """Waits indefinitely until string is found in process. Must be run with timeout multiprocess.

    Parameters:
The comment/docstrings around _wait_for_string still refer to multiprocessing/pickling (“must be at module level to be picklable”, “timeout multiprocess”), but the implementation is now threading-based. Please update/remove these references so the documentation matches the current behavior.
if backup:
    backup_path = bundle_tarball.with_suffix('.tar.gz.bak')
    if not backup_path.exists():
        shutil.copy2(bundle_tarball, backup_path)
        print(f"  Backed up to: {backup_path.name}")
Path.with_suffix() only replaces the final suffix, so for a file like *_bundle.tar.gz this will produce a backup name like *_bundle.tar.tar.gz.bak (duplicated .tar). Use a name-based approach instead (e.g., append .bak to the full filename, or use .with_name(bundle_tarball.name + ".bak")).
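The pitfall is easy to reproduce (the file name is illustrative):

```python
from pathlib import Path

# with_suffix() only swaps the FINAL suffix (".gz" here), so a
# multi-extension name like *_bundle.tar.gz gets mangled.
tarball = Path("sleepy_bundle.tar.gz")
assert tarball.with_suffix(".tar.gz.bak").name == "sleepy_bundle.tar.tar.gz.bak"

# Appending to the full name keeps the original extensions intact:
assert tarball.with_name(tarball.name + ".bak").name == "sleepy_bundle.tar.gz.bak"
```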
# Extract
print(f"  Extracting...")
with tarfile.open(bundle_tarball, 'r:gz') as tar:
    tar.extractall(path=bundle_dir)
tar.extractall() is vulnerable to path traversal (“tar slip”) if a tarball contains ../ or absolute paths. Even if the current bundles are trusted, this script is now executed in CI for every PR, so it’s safer to validate member paths (or use a safe extraction helper) before extracting. Also consider ensuring extract_path does not already exist (delete or fail fast) to avoid mixing stale files into the repacked bundle if a prior run left the directory behind.
Pull request overview
Copilot reviewed 14 out of 14 changed files in this pull request and generated 8 comments.
Comments suppressed due to low confidence (1)
tests/L2_testing/test_runner/network_tests.py:120
- If `launch_result` is False, the test still waits and then performs the netcat check, which will likely overwrite the earlier "Container did not launch successfully" message with a less-informative "Received '' ..." message. Consider short-circuiting the netcat validation when launch fails (or preserving the launch-failure message).
if not launch_result:
    message = "Container did not launch successfully"
    result = False

# give container time to start and send message before checking netcat listener
sleep(2)
nc_message = nc.get_output().rstrip("\n")

# check if netcat listener received message
if test.expected_output.lower() not in nc_message.lower():
    message = "Received '%s' from container, expected '%s'" % (nc_message.lower(), test.expected_output.lower())
    result = False
# Helper function for multiprocessing - must be at module level to be picklable
def _wait_for_string(proc, string_to_find):
    """Waits indefinitely until string is found in process. Must be run with timeout multiprocess.
The comment/docstring still refers to multiprocessing/pickling, but the implementation was changed to use threading.Thread. Please update the comment/docstring to avoid misleading future maintainers.
- # Helper function for multiprocessing - must be at module level to be picklable
- def _wait_for_string(proc, string_to_find):
-     """Waits indefinitely until string is found in process. Must be run with timeout multiprocess.
+ # Helper function intended to be run in a background thread
+ def _wait_for_string(proc, string_to_find):
+     """Waits until the string is found in the process. Intended to be run in a separate thread with timeout handled by the caller.
Changes made for cgroupv2 compatibility:
- Removes 'swappiness' from memory resources (not supported in cgroupv2)
- Sets realtimeRuntime and realtimePeriod to valid values or removes them
- Updates rootfsPropagation to 'slave' for better compatibility
The module docstring says rootfsPropagation is updated to 'slave', but the implementation actually removes rootfsPropagation entirely (both top-level and under linux). Please align the docstring with the actual behavior (or implement the documented behavior).
- - Updates rootfsPropagation to 'slave' for better compatibility
+ - Removes rootfsPropagation (both top-level and under 'linux') to avoid user-namespace issues
                   Severity.error)
    self.valid = False

def __enter__(self):
untar_bundle can set self.valid = False on extraction/validation errors, but __enter__ still returns a path unconditionally. This makes it easy for callers to accidentally proceed with an invalid bundle (some tests already do). Consider raising an exception in __enter__ when invalid, or returning None to force explicit handling.
- def __enter__(self):
+ def __enter__(self):
+     if not self.valid:
+         raise RuntimeError("Invalid bundle for container '%s' at path '%s'" %
+                            (self.container_id, self.path))
output = test_utils.create_simple_test_output(test, False, "Bundle extraction or validation failed", "")
return test_utils.count_print_results([output])
In the not bundle_ctx.valid early-return path, the failure result isn’t printed (unlike other tests) and log_content is passed as an empty string, which makes create_simple_test_output pull /tmp/<container>.log (potentially stale from a previous run). Consider calling print_single_result(output) before returning and passing a non-empty log_content to avoid misleading logs when the container never started.
- output = test_utils.create_simple_test_output(test, False, "Bundle extraction or validation failed", "")
- return test_utils.count_print_results([output])
+ message = "Bundle extraction or validation failed"
+ output = test_utils.create_simple_test_output(test, False, message, message)
+ test_utils.print_single_result(output)
+ return test_utils.count_print_results([output])
bundle_ctx = test_utils.untar_bundle(container_name)
with test_utils.dobby_daemon(), netcat_listener() as nc, bundle_ctx as bundle_path:
    if not bundle_ctx.valid:
        output = test_utils.create_simple_test_output(tests[0], False, "Bundle extraction or validation failed")
On bundle extraction/validation failure, create_simple_test_output is called without log_content, which causes it to auto-load /tmp/network1.log and may report stale logs even though the container never launched. Consider passing an explicit non-empty log_content (or the extraction error) to avoid misleading output.
- output = test_utils.create_simple_test_output(tests[0], False, "Bundle extraction or validation failed")
+ output = test_utils.create_simple_test_output(
+     tests[0],
+     False,
+     "Bundle extraction or validation failed",
+     "Bundle extraction or validation failed; container was not launched, so no container logs are available.",
+ )
- "limit": {{MEM_LIMIT}},
- "swap": {{MEM_LIMIT}},
- "swappiness": 60
+ "limit": {{MEM_LIMIT}}
This change removes the swap limit entirely (previously set to {{MEM_LIMIT}}). That can materially change runtime behavior (e.g., swap may become effectively unlimited / no longer disabled) on cgroup v1 systems. If the intent was only to drop cgroupv2-incompatible fields, consider keeping an explicit swap policy (or gating it by cgroup version) so memory+swap behavior remains controlled.
- "limit": {{MEM_LIMIT}}
+ "limit": {{MEM_LIMIT}},
+ "swap": {{MEM_LIMIT}}
- "limit": {{MEM_LIMIT}},
- "swap": {{MEM_LIMIT}},
- "swappiness": 60
+ "limit": {{MEM_LIMIT}}
This change removes the swap limit entirely (previously set to {{MEM_LIMIT}}). That can materially change runtime behavior (e.g., swap may become effectively unlimited / no longer disabled) on cgroup v1 systems. If the intent was only to drop cgroupv2-incompatible fields, consider keeping an explicit swap policy (or gating it by cgroup version) so memory+swap behavior remains controlled.
- "limit": {{MEM_LIMIT}}
+ "limit": {{MEM_LIMIT}},
+ "swap": {{MEM_LIMIT}}
  # Test 1
  test = tests[1]
- status = test_utils.run_command_line(["diff",
-                                       "-r --ignore-space-change",
-                                       test_utils.get_bundle_path(test.container_id),
-                                       bundle_path])
+ generated_config_path = test_utils.get_bundle_path(test.container_id) + "/config.json"
+ original_config_path = bundle_path + "/config.json"
+
+ result = True
+ message = ""
+ log = ""
+
+ try:
+     generated_config = _normalise_config(_load_json(generated_config_path))
+     original_config = _normalise_config(_load_json(original_config_path))
+
+     if generated_config != original_config:
+         result = False
+         message = "Normalized config.json mismatch"
+         log = (
+             "Generated config:\n" + json.dumps(generated_config, sort_keys=True) +
+             "\nOriginal config:\n" + json.dumps(original_config, sort_keys=True)
+         )
+ except Exception as err:
+     result = False
+     message = "Failed to compare bundle configs"
+     log = str(err)
+
- output = test_utils.create_simple_test_output(test, (status.stdout == ""), "", status.stdout)
+ output = test_utils.create_simple_test_output(test, result, message, log)
The old implementation used diff -r on the entire bundle directory, but the new logic only compares config.json. If that’s intentional, consider updating the test name/description and/or adding explicit checks for other important bundle artifacts (e.g., rootfs layout) so the test still matches its stated purpose.
Description
What does this PR change/fix and why?
If there is a corresponding JIRA ticket, please ensure it is in the title of the PR.
Test Procedure
How to test this PR (if applicable)
Type of Change
Requires Bitbake Recipe changes?
The recipe (`meta-rdk-ext/recipes-containers/dobby/dobby.bb`) must be modified to support the changes in this PR (beyond updating `SRC_REV`)