feat: add evaluation callbacks, metrics and analysis#9

Open
LAdam-ix wants to merge 13 commits into main from feature/ml-evaluation

Conversation


@LAdam-ix LAdam-ix commented Mar 17, 2026

Callbacks:

  • TilesExport: Saves individual test/predict tiles as PNG files. Used mostly in early stages to collect samples for visual inspection.

  • AnalysisExport: Runs StainAnalyzer (data PR) on test batches, computes all metrics comparing modified vs original and predicted vs original tiles. Saves results as CSV and logs summary statistics to MLflow.

  • WSIAssembler: Assembles predicted tiles back into full whole-slide images as pyramid TIFFs. Handles tile overlap using running-average blending. Buffers one slide at a time using memmap to limit memory use.
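The running-average blending mentioned above can be sketched as an incremental mean over per-pixel hit counts. This is an illustrative sketch, not the callback's actual API; the function and buffer names are assumptions:

```python
import numpy as np

def place_tile(result: np.ndarray, count: np.ndarray,
               tile: np.ndarray, x: int, y: int) -> None:
    """Blend a tile into a slide buffer with a running average.

    `result` is a float32 (H, W, 3) buffer (in the PR this would be a
    memmap); `count` tracks how many tiles have touched each pixel.
    """
    h, w = tile.shape[:2]
    region = result[y:y + h, x:x + w]
    n = count[y:y + h, x:x + w]
    n += 1
    # incremental mean: new = old + (value - old) / n
    region += (tile.astype(np.float32) - region) / n[..., None]
```

With this scheme each overlapping pixel converges to the plain average of every tile that covered it, without a second normalization pass over the whole slide.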

Metrics:

  • SSIM (Structural Similarity Index Measure)

Image metrics (metrics/image_metrics.py):

  • NMI (Normalized Median Intensity): Ratio of median intensity to 95th percentile. Measures relative brightness of the image.

  • PCC (Pearson Correlation Coefficient): Pixel-level linear correlation between two images. High PCC means the overall structure is preserved.

  • LAB Brightness PSNR: Peak signal-to-noise ratio on the L* (lightness) channel in Lab space. Measures how well brightness is preserved after normalization.

  • LAB Mean: Mean L* brightness in CIE Lab space. Useful for detecting scanner exposure differences.
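As a rough sketch of how the image metrics above could be computed (a hedged approximation; the actual functions in metrics/image_metrics.py may differ in names and signatures, and the Lab variant would first convert RGB to Lab and take the L* channel):

```python
import numpy as np

def compute_nmi(image: np.ndarray) -> float:
    """Normalized Median Intensity: median intensity / 95th percentile."""
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    p95 = np.percentile(gray, 95)
    return float(np.median(gray) / p95) if p95 > 0 else 0.0

def compute_pcc(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between the flattened pixel values of two images."""
    return float(np.corrcoef(a.ravel().astype(np.float64),
                             b.ravel().astype(np.float64))[0, 1])

def psnr(a: np.ndarray, b: np.ndarray, data_range: float = 255.0) -> float:
    """PSNR between two channels; applied to L* for the Lab brightness metric."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(data_range**2 / mse))
```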

Stain vector distance (metrics/vector_metrics.py):

Uses CIE76 Delta E to compare estimated stain vectors (hematoxylin, eosin). Stain vectors are converted from optical density to Lab color space. Comparison uses only chromaticity (a*, b*) with L*=0, so brightness differences do not affect the result; the metric focuses purely on dye/stain differences.
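A minimal sketch of the chromatic comparison (names are illustrative; the real compare_vectors in metrics/vector_metrics.py also converts the stain vectors from optical density to Lab first, and reportedly handles hematoxylin/eosin swaps):

```python
import numpy as np

def delta_e76_chromatic(lab1, lab2) -> float:
    """CIE76 Delta E restricted to chromaticity: Euclidean distance over
    (a*, b*) with the L* difference zeroed out."""
    d = np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)
    d[0] = 0.0  # ignore lightness so brightness never contributes
    return float(np.linalg.norm(d))

def compare_stain_pairs(ref, est) -> float:
    """Compare (H, E) Lab vector pairs; try both assignments and keep the
    cheaper one, mirroring the swap handling, then average the two distances."""
    direct = delta_e76_chromatic(ref[0], est[0]) + delta_e76_chromatic(ref[1], est[1])
    swapped = delta_e76_chromatic(ref[0], est[1]) + delta_e76_chromatic(ref[1], est[0])
    return min(direct, swapped) / 2
```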

Summary by CodeRabbit

  • New Features

    • Multi-metric image analysis with per-image and per-run CSV exports and summary statistics (vector similarity, SSIM, PCC, NMI, brightness PSNR) plus utilities to compute and export baselines.
    • Tile-level prediction export during test/predict and automatic assembly of tiles into whole-slide pyramidal images.
    • Image denormalization utilities and centralized image/vector metric functions exposed for easier reuse.
  • Chores

    • Configuration updated to enable analysis, tile export, and assembly callbacks and to set a top-level output directory.

@LAdam-ix LAdam-ix requested a review from matejpekar March 17, 2026 13:10
@LAdam-ix LAdam-ix self-assigned this Mar 17, 2026
@LAdam-ix LAdam-ix requested review from a team and ejdam87 and removed request for ejdam87 March 17, 2026 13:10

coderabbitai bot commented Mar 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds image and vector metric functions, a new StainAnalyzer class with multi-metric comparisons and CSV export, three new Lightning callbacks (TilesExport, AnalysisExport, WSIAssembler) plus a DenormalizationCallback, and registers callbacks plus top-level output_dir in the default config.

Changes

Cohort / File(s) Summary
Configuration
configs/default.yaml
Registered three opt-in callbacks (TilesExport, AnalysisExport, WSIAssembler) and referenced callbacks.analysis_export in trainer; added top-level output_dir and metadata.output_dir; minor YAML formatting adjustments.
Metrics
stain_normalization/metrics/__init__.py, stain_normalization/metrics/image_metrics.py, stain_normalization/metrics/vector_metrics.py
Added image metric functions (compute_nmi, compute_pcc, compute_mean_brightness, compute_lab_brightness_psnr) and vector utilities (_od_to_lab, delta_e76, compare_vectors); re-exported selected metrics in the package __all__.
Analyzer
stain_normalization/analysis/__init__.py, stain_normalization/analysis/analyzer.py
New StainAnalyzer: multi-metric comparisons vs. a reference (precompute support), paired tile handling, result aggregation, statistics/baseline computations, and CSV export (results, statistics, baseline_ranges).
Callbacks package exports
stain_normalization/callbacks/__init__.py
Now re-exports AnalysisExport, DenormalizationCallback, TilesExport, and WSIAssembler in the package __all__.
Callback base
stain_normalization/callbacks/_base.py
New DenormalizationCallback providing denormalize and tensor→uint8 HWC numpy image conversion utilities used by other callbacks.
AnalysisExport callback
stain_normalization/callbacks/analysis_export.py
New AnalysisExport callback: initializes analyzers, runs per-tile comparisons during test, accumulates metrics, saves CSVs under analysis_metrics/{modified,predicted}, and logs artifacts.
TilesExport callback
stain_normalization/callbacks/tiles_export.py
New TilesExport callback: exports per-tile images during test/predict with per-slide sampling policy (n_first, sample_rate) and slide-aware directory layout.
WSIAssembler callback
stain_normalization/callbacks/wsi_assembler.py
New WSIAssembler callback: collects predicted tiles into per-slide buffers, running-average blending for overlaps, temporary-buffer workflow, and writes pyramidal TIFFs (pyvips) with MPP embedding; handles metadata and failures.
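The DenormalizationCallback's tensor-to-image conversion described above amounts to undoing the normalization, rescaling to the 8-bit range, and moving channels last. A numpy stand-in sketch (the real code operates on torch tensors, and the mean/std defaults here are assumptions):

```python
import numpy as np

def tensor_to_image(chw: np.ndarray, mean: float = 0.5, std: float = 0.5) -> np.ndarray:
    """Denormalize a CHW float array and convert it to an HWC uint8 image."""
    img = chw * std + mean                   # undo Normalize(mean, std)
    img = np.clip(img, 0.0, 1.0) * 255.0     # rescale to 8-bit range
    return img.round().astype(np.uint8).transpose(1, 2, 0)  # CHW -> HWC
```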

Sequence Diagram(s)

sequenceDiagram
    participant Trainer as Lightning Trainer
    participant DataModule as Trainer.datamodule
    participant WSI as WSIAssembler
    participant Buffers as SlideBuffers
    participant Temp as TempDir
    participant TIFF as Output TIFF

    Trainer->>WSI: on_predict_start()
    WSI->>DataModule: request slides metadata
    DataModule-->>WSI: slides list (path, level, extents, mpp)
    WSI->>Temp: create temp directory

    loop For each prediction batch
        Trainer->>WSI: on_predict_batch_end(batch)
        WSI->>Buffers: ensure slide buffer open
        loop For each tile in batch
            WSI->>Buffers: _place_tile(tile_image, xy)
            Buffers->>Buffers: running-average blend into buffers
        end
    end

    Trainer->>WSI: on_predict_end()
    WSI->>Buffers: close active slide (flush)
    Buffers->>Temp: write raw buffers
    Temp->>TIFF: convert buffers → pyramidal TIFF (embed mpp)
    TIFF-->>Trainer: final slide artifacts

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • matejpekar
  • vejtek
  • 172454

Poem

🐰 I hopped through pixels, vectors bright,

I nudged the tiles and stitched the night.
CSV crumbs and TIFF delight,
I munched the code till morning light.
Hoppity exports — out of sight!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 60.61%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title 'feat: add evaluation callbacks, metrics and analysis' directly summarizes the main changes: introduction of evaluation callbacks (TilesExport, AnalysisExport, WSIAssembler), metrics implementations (image metrics, vector metrics), and analysis tooling (StainAnalyzer). The title is concise, clear, and accurately reflects the primary purpose of the changeset.



@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the evaluation capabilities of the stain normalization pipeline by introducing a robust set of callbacks and metrics. The changes enable detailed analysis of model predictions, from individual tile inspection to comprehensive whole-slide image reconstruction and quantitative assessment of image quality and stain consistency. This provides a more thorough understanding of the model's performance and facilitates better decision-making during development and deployment.

Highlights

  • New Evaluation Callbacks: Introduced three new callbacks: TilesExport for saving individual image tiles, AnalysisExport for computing and logging various image and stain metrics, and WSIAssembler for reassembling predicted tiles into whole-slide images.
  • Comprehensive Metric Suite: Added several new metrics for image quality and stain vector comparison, including SSIM, Normalized Median Intensity (NMI), Pearson Correlation Coefficient (PCC), LAB Brightness PSNR, and CIE76 Delta E for stain vectors.
  • Stain Analysis Framework: Implemented a StainAnalyzer class to facilitate the comparison of images using the newly defined metrics, accumulate results, and generate summary statistics and baseline ranges.
  • Configuration Updates: Updated the default.yaml configuration to include the new callbacks and integrate AnalysisExport into the default trainer callbacks for automated evaluation.
Changelog
  • configs/default.yaml
    • Added configurations for TilesExport, AnalysisExport, and WSIAssembler callbacks.
    • Included analysis_export in the default trainer callbacks list.
    • Added an output_dir configuration parameter.
  • stain_normalization/analysis/__init__.py
    • Added StainAnalyzer to the module's __all__ export list.
  • stain_normalization/analysis/analyzer.py
    • Created StainAnalyzer class to compare images using various metrics.
    • Implemented methods to accumulate results, compute summary statistics, and determine baseline ranges.
    • Added functionality to save results and statistics to CSV files.
  • stain_normalization/callbacks/__init__.py
    • Exported NormalizationCallback, AnalysisExport, TilesExport, and WSIAssembler classes.
  • stain_normalization/callbacks/_base.py
    • Introduced NormalizationCallback as a base class for callbacks requiring image denormalization.
    • Provided denormalize and tensor_to_image utility methods.
  • stain_normalization/callbacks/analysis_export.py
    • Implemented AnalysisExport callback to compute and log image and stain metrics during testing.
    • Utilized StainAnalyzer to compare modified and predicted images against originals.
    • Integrated MLflow logging for analysis metrics.
  • stain_normalization/callbacks/tiles_export.py
    • Implemented TilesExport callback to save individual predicted, original, and modified image tiles.
    • Added logic to save a specified number of initial tiles and then sample randomly.
  • stain_normalization/callbacks/wsi_assembler.py
    • Implemented WSIAssembler callback to reassemble predicted tiles into whole-slide pyramid TIFFs.
    • Managed memory efficiently using np.memmap for large image buffers.
    • Handled tile overlap using a running average blending technique.
  • stain_normalization/metrics/__init__.py
    • Exported compare_vectors, compute_lab_brightness_psnr, compute_nmi, and compute_pcc functions.
  • stain_normalization/metrics/image_metrics.py
    • Defined compute_nmi for Normalized Median Intensity calculation.
    • Defined compute_pcc for Pearson Correlation Coefficient between images.
    • Defined compute_lab_brightness_psnr for PSNR calculation on the L* channel in Lab color space.
  • stain_normalization/metrics/vector_metrics.py
    • Defined _od_to_lab to convert optical density vectors to Lab color space.
    • Defined delta_e76 to calculate CIE76 Delta E for chromaticity differences.
    • Defined compare_vectors to compare two sets of stain vectors, including handling potential swaps.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive set of features for evaluation, including new callbacks for exporting analysis results and tiles, a WSI assembler, and a suite of image and stain vector metrics. The implementation is generally solid, particularly the WSIAssembler which robustly handles large image processing. I've identified a few areas for improvement, including a potential bug in the metric calculation logic, suggestions for improving robustness, and an optimization in one of the metric functions. My detailed comments are below.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
stain_normalization/analysis/analyzer.py (2)

91-97: Clarify the is_paired semantics in documentation.

The is_paired flag on line 97 determines whether paired metrics (SSIM, PCC, LAB PSNR) are computed. Currently, it's True only when a per-call reference is passed, not when using the fixed self._ref_img.

This behavior makes sense (paired metrics require spatially aligned images), but it's subtle. Consider adding a brief docstring note explaining when paired metrics are computed vs. skipped.


111-114: Consider replacing assert with explicit validation.

Using assert for runtime validation can be problematic since assertions are disabled when Python runs with -O (optimized mode). If estimate_stain_vectors returns None (e.g., image has too much background), this will silently fail in optimized mode.

Proposed fix
-        assert (
-            ref_vectors is not None
-        )  # fails if reference has too much background and no valid stain vectors
-        img_vectors = estimate_stain_vectors(image)
+        if ref_vectors is None:
+            raise ValueError(
+                "Reference stain vectors unavailable (image may have too much background)"
+            )
+        img_vectors = estimate_stain_vectors(image)
+        if img_vectors is None:
+            raise ValueError(
+                "Could not estimate stain vectors for image (may have too much background)"
+            )

Similarly for line 139:

-        assert ref_nmi is not None
+        if ref_nmi is None:
+            raise ValueError("Reference NMI unavailable")
stain_normalization/callbacks/wsi_assembler.py (1)

105-116: Consider narrowing the exception type or using logging.

The bare Exception catch (flagged by static analysis) is understandable for robustness, but it may inadvertently swallow unexpected errors like KeyboardInterrupt (though that's a BaseException). Consider:

  1. Using logging.exception() instead of print + traceback.print_exc() for consistent logging
  2. Optionally re-raising after recording the failure if you want to surface critical errors
Optional improvement using logging
+import logging
+
+logger = logging.getLogger(__name__)
+
 ...
-        except Exception:
-            print(f"ERROR: Failed to save slide '{slide_name}'")
-            traceback.print_exc()
+        except Exception:
+            logger.exception("Failed to save slide '%s'", slide_name)
             self._failed_slides.append(slide_name)
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@configs/default.yaml`:
- Around line 59-62: The TilesExport and WSIAssembler callback configs
(callbacks.tiles_export and callbacks.wsi_assembler) are defined but not
registered with the trainer, so their Lightning hooks will never run; register
them by adding ${callbacks.tiles_export} and ${callbacks.wsi_assembler} to the
trainer callbacks list (the same place where ${callbacks.model_checkpoint},
${callbacks.early_stopping}, ${callbacks.analysis_export} are listed) so
trainer.callbacks includes these two entries, or alternatively add a short
comment/documentation in the config indicating they must be enabled via CLI
overrides if you prefer conditional registration.

In `@stain_normalization/callbacks/analysis_export.py`:
- Around line 26-42: The on_test_batch_end signature incorrectly types outputs
as list[torch.Tensor]; update its annotation to match the project's Outputs
alias (a single batched Tensor) so it reads outputs: Outputs (or torch.Tensor)
and keep the body that indexes outputs[b] unchanged; ensure the import or
reference to the Outputs type alias is added if needed and verify
tensor_to_image(outputs[b]) and any static type checks now accept the
batched-tensor usage.

In `@stain_normalization/callbacks/wsi_assembler.py`:
- Around line 1-6: Run the Ruff autoformatter on
stain_normalization/callbacks/wsi_assembler.py (uvx ruff format --fix
stain_normalization/callbacks/wsi_assembler.py) or manually apply the same fixes
so imports (tempfile, traceback, dataclass, Path, Any) and surrounding
whitespace follow the project's Ruff rules, ensuring proper import ordering,
spacing, and a trailing newline to satisfy the CI Ruff check for the
wsi_assembler module.
- Around line 84-89: The count buffer currently created as count_buf (np.memmap
with dtype=np.uint8) can overflow if >255 tiles overlap; change the dtype to
np.uint16 (or larger as needed) when creating count_buf in the WSI assembler to
prevent wraparound, update any dependent code that reads/writes count_buf (e.g.,
increments or dtype assumptions) to handle the new integer width, and ensure any
memory calculations or memmap shape logic around count_buf remain valid with the
larger element size.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 67fce064-bf2d-49cd-8fb7-b74f1a787e18

📥 Commits

Reviewing files that changed from the base of the PR and between d410604 and 58bcf58.

📒 Files selected for processing (11)
  • configs/default.yaml
  • stain_normalization/analysis/__init__.py
  • stain_normalization/analysis/analyzer.py
  • stain_normalization/callbacks/__init__.py
  • stain_normalization/callbacks/_base.py
  • stain_normalization/callbacks/analysis_export.py
  • stain_normalization/callbacks/tiles_export.py
  • stain_normalization/callbacks/wsi_assembler.py
  • stain_normalization/metrics/__init__.py
  • stain_normalization/metrics/image_metrics.py
  • stain_normalization/metrics/vector_metrics.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
stain_normalization/callbacks/wsi_assembler.py (1)

84-89: ⚠️ Potential issue | 🟠 Major

Prevent overlap-counter overflow in the blending path.

At Line 86, count_buffer uses np.uint8; once overlap exceeds 255, Line 172 wraps and Line 163 blends with corrupted counts.

Proposed fix
         count_buf = np.memmap(
             Path(tmp.name) / "count.raw",
-            dtype=np.uint8,
+            dtype=np.uint16,
             mode="w+",
             shape=(h, w),
         )
@@
         count_img = pyvips.Image.rawload(
-            str(count_path), meta.extent_x, meta.extent_y, 1
+            str(count_path), meta.extent_x, meta.extent_y, 1, format="ushort"
         )

Also applies to: 172-172, 198-200
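The wraparound this comment warns about is easy to reproduce with a toy counter:

```python
import numpy as np

count8 = np.zeros((1,), dtype=np.uint8)
for _ in range(256):   # 256 tiles overlapping the same pixel
    count8 += 1        # silently wraps past 255 back to 0

count16 = np.zeros((1,), dtype=np.uint16)
count16 += 256         # the wider dtype keeps the true count
```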

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@stain_normalization/callbacks/wsi_assembler.py`:
- Around line 151-153: The slice math can yield non-positive or out-of-range
heights/widths when x/y are outside the target region; before computing h and w
in the WSI assembly code, validate and clamp the coordinates: ensure ey > y and
ex > x, clamp h = max(0, min(tile.shape[0], ey - y)) and w = max(0,
min(tile.shape[1], ex - x)), and skip the tile assignment if h == 0 or w == 0
(or adjust x/y so the slice is within bounds). Update the block that computes h,
w and slices tile (variables tile, x, y, ex, ey) to perform these checks and
only perform tile = tile[:h, :w] when h and w are positive and valid.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ce89273f-1d24-4e6c-9d70-fc73c9eb4ec8

📥 Commits

Reviewing files that changed from the base of the PR and between 58bcf58 and cce0264.

📒 Files selected for processing (1)
  • stain_normalization/callbacks/wsi_assembler.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
stain_normalization/callbacks/wsi_assembler.py (1)

146-159: ⚠️ Potential issue | 🟠 Major

Still reject negative tile coordinates before slicing.

The new max(0, ...) guard handles tiles that start past the right or bottom edge, but negative x or y still flow into result_buffer[y:y+h, x:x+w]. NumPy interprets those indices from the end of the array, so a bad coordinate can write into the wrong region instead of being clipped.

Proposed fix
     def _place_tile(self, tile: np.ndarray[Any, Any], x: int, y: int) -> None:
         """Place a predicted tile into the active slide buffer with overlap averaging."""
         assert self._active is not None
         sb = self._active
         ex, ey = sb.meta.extent_x, sb.meta.extent_y
+
+        if x < 0 or y < 0 or x >= ex or y >= ey:
+            raise ValueError(
+                f"Tile coordinates out of bounds: x={x}, y={y}, extent=({ex}, {ey})"
+            )
 
-        h = max(0, min(tile.shape[0], ey - y))
-        w = max(0, min(tile.shape[1], ex - x))
-        if h == 0 or w == 0:
+        h = min(tile.shape[0], ey - y)
+        w = min(tile.shape[1], ex - x)
+        if h <= 0 or w <= 0:
             return
         tile = tile[:h, :w]
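The hazard described here is easy to reproduce: with a negative x, NumPy resolves the slice from the right edge, so the tile is written into the interior of the buffer instead of being clipped (a minimal repro, not the PR's code):

```python
import numpy as np

buf = np.zeros((4, 4), dtype=np.uint8)
tile = np.full((2, 2), 7, dtype=np.uint8)
x, y = -3, 0                  # out-of-range coordinate slips through
buf[y:y + 2, x:x + 2] = tile  # slice(-3, -1) selects columns 1 and 2
```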
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@stain_normalization/callbacks/wsi_assembler.py` around lines 146 - 159, The
_place_tile method fails to reject negative tile coordinates before slicing,
allowing negative x or y to be interpreted from the end of the NumPy arrays;
update _place_tile to clamp/adjust x and y (and correspondingly adjust tile
offsets) so that x>=0 and y>=0 before computing region = sb.result_buffer[y:y+h,
x:x+w] and count = sb.count_buffer[y:y+h, x:x+w]; specifically compute tx =
max(0, -x) and ty = max(0, -y), then advance the tile/view by ty: and tx:,
reduce h and w accordingly, and shift x and y to max(0,x)/max(0,y) so slicing
never receives negative indices (refer to function _place_tile and variables x,
y, h, w, tile, result_buffer, count_buffer).
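The clamping scheme the prompt describes (shift the tile's own origin by the negative overhang, then slice with non-negative indices) can be sketched as a standalone function. This is a hypothetical illustration with NumPy, not the PR's actual `_place_tile` (no overlap averaging or buffers):

```python
import numpy as np

def place_tile(buffer: np.ndarray, tile: np.ndarray, x: int, y: int) -> None:
    """Accumulate a tile into buffer, clipping any part outside [0, extent)."""
    ey, ex = buffer.shape[:2]
    # Shift the tile's own origin when x or y is negative,
    # so the buffer slice never receives a negative index.
    ty, tx = max(0, -y), max(0, -x)
    y, x = max(0, y), max(0, x)
    h = min(tile.shape[0] - ty, ey - y)
    w = min(tile.shape[1] - tx, ex - x)
    if h <= 0 or w <= 0:
        return  # tile lies entirely outside the buffer
    buffer[y:y + h, x:x + w] += tile[ty:ty + h, tx:tx + w]

buf = np.zeros((4, 4))
# Tile starts at (-2, -2): only its 1x1 bottom-right corner overlaps the buffer.
place_tile(buf, np.ones((3, 3)), x=-2, y=-2)
```

Without the `ty`/`tx` shift, `buffer[-2:-1, ...]` would silently write near the end of the array instead of being clipped.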
🧹 Nitpick comments (1)
stain_normalization/callbacks/analysis_export.py (1)

37-43: Pass a stable image_id into the analyzers.

Right now results.csv contains anonymous rows, so you cannot map an outlier back to slide_name/xy or the PNG exports when debugging. Threading a stable ID through both compare() calls makes the export much more usable.

Proposed fix
         """Computes metrics for each sample and accumulates results."""
         for b in range(len(outputs)):
-            original_img = batch[1][b]["original_image"].astype("uint8")
-            modified_img = (batch[1][b]["modified_image"] * 255).astype("uint8")
+            slide_name = batch[1][b]["slide_name"]
+            xy = batch[1][b]["xy"]
+            image_id = f"{slide_name}/{xy}"
+            original_img = batch[1][b]["original_image"].astype("uint8")
+            modified_img = (batch[1][b]["modified_image"] * 255).astype("uint8")
             predicted_img = self.tensor_to_image(outputs[b])
 
-            self.mod_analyzer.compare(modified_img, paired_image=original_img)
-            self.pred_analyzer.compare(predicted_img, paired_image=original_img)
+            self.mod_analyzer.compare(
+                modified_img, image_id=image_id, paired_image=original_img
+            )
+            self.pred_analyzer.compare(
+                predicted_img, image_id=image_id, paired_image=original_img
+            )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@stain_normalization/callbacks/analysis_export.py` around lines 37 - 43, The
loop in analysis_export.py calls mod_analyzer.compare and pred_analyzer.compare
without any stable identifier, making results.csv rows anonymous; update the
loop (where tensor_to_image is used to build predicted_img from outputs and
original_img/modified_img are read from batch) to compute or extract a stable
image_id (e.g., batch[1][b]["image_id"] or compose from slide_name and xy
coordinates, falling back to a deterministic index-based id) and pass that
image_id into both mod_analyzer.compare(...) and pred_analyzer.compare(...) as
an explicit parameter (e.g., image_id=image_id); also ensure the
Analyzer.compare implementations accept and propagate this image_id into
CSV/exports.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@stain_normalization/analysis/analyzer.py`:
- Around line 174-182: get_statistics currently calls
df[numeric_cols].describe(...) which raises on zero rows; update get_statistics
to detect empty results (self.results or df[numeric_cols]) and return an empty
DataFrame with the same expected columns/structure (or a describe()-shaped empty
result) instead of calling describe() when there are no samples so save_csv
(which always calls get_statistics) won't fail during teardown; reference the
get_statistics method, the self.results attribute and the numeric_cols selection
when implementing the guard.
- Around line 43-48: The constructor currently expands any falsy metrics (e.g.,
an explicit empty list) into all metrics; change the assignment in __init__ so
only None maps to self.AVAILABLE_METRICS, e.g. set self.metrics =
self.AVAILABLE_METRICS if metrics is None else metrics, so passing [] is
preserved; update any related docstring or comments on the metrics parameter in
the Analyzer class to reflect this behavior.
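The empty-results guard asked for above can be sketched as follows. This is a hypothetical shape assuming pandas and a `results` list of per-sample metric dicts, not the PR's actual `get_statistics`:

```python
import pandas as pd

def get_statistics(results: list[dict]) -> pd.DataFrame:
    """Summary stats over numeric columns; empty-safe so teardown never raises."""
    df = pd.DataFrame(results)
    numeric = df.select_dtypes(include="number")
    if numeric.empty:  # no samples or no numeric columns: return an empty frame
        return pd.DataFrame()
    return numeric.describe()

stats = get_statistics([])                              # empty DataFrame, no exception
stats2 = get_statistics([{"ssim": 0.9}, {"ssim": 0.7}])  # normal describe() output
```

The guard also covers the case where rows exist but carry only non-numeric columns (e.g. just an `image_id` string), since `numeric.empty` is true whenever either axis has length zero.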

In `@stain_normalization/callbacks/wsi_assembler.py`:
- Around line 58-69: The code keys slide metadata by Path(row.path).stem which
causes collisions for identical filenames in different directories; change the
key to a unique identifier derived from the full slide path (e.g., use the full
path string or a sanitized/hashed version of Path(row.path).as_posix()) when
populating _slide_meta in the loop over slides_df.iterrows(), keep constructing
_SlideMeta from the same row fields, and update any other places that rely on
slide_path.stem (e.g., the batch metadata usage in predict_dataset.py) to use
the same unique identifier so output filenames and lookup keys remain consistent
and collision-free.

In `@stain_normalization/metrics/image_metrics.py`:
- Around line 47-53: Update the docstring in
stain_normalization/metrics/image_metrics.py for the function that returns the
"Mean L* brightness of an image in CIE Lab color space" by replacing the Unicode
en-dash in the range text `0–100` with an ASCII hyphen so it reads `0-100`;
ensure the docstring line that currently contains `Mean L* value (0–100 scale,
higher = brighter).` is changed accordingly to satisfy the linter.

---

Duplicate comments:
In `@stain_normalization/callbacks/wsi_assembler.py`:
- Around line 146-159: The _place_tile method fails to reject negative tile
coordinates before slicing, allowing negative x or y to be interpreted from the
end of the NumPy arrays; update _place_tile to clamp/adjust x and y (and
correspondingly adjust tile offsets) so that x>=0 and y>=0 before computing
region = sb.result_buffer[y:y+h, x:x+w] and count = sb.count_buffer[y:y+h,
x:x+w]; specifically compute tx = max(0, -x) and ty = max(0, -y), then advance
the tile/view by ty: and tx:, reduce h and w accordingly, and shift x and y to
max(0,x)/max(0,y) so slicing never receives negative indices (refer to function
_place_tile and variables x, y, h, w, tile, result_buffer, count_buffer).

---

Nitpick comments:
In `@stain_normalization/callbacks/analysis_export.py`:
- Around line 37-43: The loop in analysis_export.py calls mod_analyzer.compare
and pred_analyzer.compare without any stable identifier, making results.csv rows
anonymous; update the loop (where tensor_to_image is used to build predicted_img
from outputs and original_img/modified_img are read from batch) to compute or
extract a stable image_id (e.g., batch[1][b]["image_id"] or compose from
slide_name and xy coordinates, falling back to a deterministic index-based id)
and pass that image_id into both mod_analyzer.compare(...) and
pred_analyzer.compare(...) as an explicit parameter (e.g., image_id=image_id);
also ensure the Analyzer.compare implementations accept and propagate this
image_id into CSV/exports.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7099bc42-ac2b-4fa5-abc6-3053c1efae97

📥 Commits

Reviewing files that changed from the base of the PR and between cce0264 and e50c106.

📒 Files selected for processing (6)
  • configs/default.yaml
  • stain_normalization/analysis/analyzer.py
  • stain_normalization/callbacks/analysis_export.py
  • stain_normalization/callbacks/tiles_export.py
  • stain_normalization/callbacks/wsi_assembler.py
  • stain_normalization/metrics/image_metrics.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • configs/default.yaml


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
stain_normalization/analysis/analyzer.py (1)

152-157: Consider replacing assert with explicit checks for robustness.

The assert ref_nmi is not None (and similar for ref_brightness at line 163) could fail if self.metrics is modified after initialization to include metrics that weren't precomputed. While this is unlikely in normal usage, replacing asserts with explicit ValueError checks would provide clearer error messages in production.

♻️ Optional: Replace asserts with explicit checks
         if "nmi" in self.metrics:
-            assert ref_nmi is not None
+            if ref_nmi is None:
+                raise ValueError(
+                    "NMI metric requires a reference image with 'nmi' in metrics at init time."
+                )
             img_nmi = compute_nmi(image)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@stain_normalization/analysis/analyzer.py` around lines 152 - 157, Replace the
assert checks that guard precomputed reference values with explicit runtime
checks that raise informative exceptions: in analyzer.py inside the block that
handles "nmi" (where self.metrics is consulted and compute_nmi(image) is called,
and result["ref_nmi"], result["nmi"], result["nmi_diff"] are set), replace
"assert ref_nmi is not None" with an explicit check that raises a ValueError (or
similar) with a clear message if ref_nmi is missing; do the same for the
"brightness" branch that checks ref_brightness so callers get a descriptive
error instead of an AssertionError.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@stain_normalization/analysis/analyzer.py`:
- Around line 152-157: Replace the assert checks that guard precomputed
reference values with explicit runtime checks that raise informative exceptions:
in analyzer.py inside the block that handles "nmi" (where self.metrics is
consulted and compute_nmi(image) is called, and result["ref_nmi"],
result["nmi"], result["nmi_diff"] are set), replace "assert ref_nmi is not None"
with an explicit check that raises a ValueError (or similar) with a clear
message if ref_nmi is missing; do the same for the "brightness" branch that
checks ref_brightness so callers get a descriptive error instead of an
AssertionError.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c00411a5-8f2c-43e1-833b-f54b50031618

📥 Commits

Reviewing files that changed from the base of the PR and between e50c106 and 0052904.

📒 Files selected for processing (2)
  • stain_normalization/analysis/analyzer.py
  • stain_normalization/metrics/image_metrics.py

Member


Where is this used? Use torchmetrics instead of designing your own. Look here for how to integrate it. Also, if you need some custom ones, you can implement your own, like here.

Collaborator Author


This is used to analyze the predicted images after normalization, and also for outside analysis of the standalone dataset through scripts that handle loading the data.
Some of the metrics could easily be rewritten to work with tensors, but many of them also use rational staining and skimage color-space transformations, which operate in NumPy, so all of that would need to be reimplemented, or a different library found that can do it with tensors.

Collaborator Author

@LAdam-ix LAdam-ix Mar 18, 2026


It seems that for color-space transformations on tensors there is a library called kornia, but the stain vectors would still need to be computed separately after transforming the images back to NumPy. Normally, validation uses SSIM and MeanAbsoluteError from torchmetrics. I'm using this only on test/predict to get extra info, mainly those stain vectors.

Member


Actually, that's not a problem. It's quite common to convert a tensor to numpy for a specific operation and then convert it back to a tensor. This does introduce a slight overhead, but it's negligible.

Collaborator Author


I implemented it with tensors but also kept the old version for standalone dataset analysis and for when I need to see exact per-tile metric data.


@LAdam-ix LAdam-ix requested a review from matejpekar March 18, 2026 13:59

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
stain_normalization/callbacks/_base.py (1)

17-20: Consider caching device-transferred tensors.

self.std.to(device) and self.mean.to(device) are called on every denormalize() invocation. For batched processing, this creates redundant tensor copies. Caching the device-transferred tensors (e.g., after first use) could reduce overhead.

Optional optimization
 def denormalize(self, tensor: torch.Tensor) -> torch.Tensor:
     """Reverse normalization: tensor → [0, 1] float."""
     device = tensor.device
-    return (tensor * self.std.to(device)) + self.mean.to(device)
+    if not hasattr(self, '_cached_device') or self._cached_device != device:
+        self._std = self.std.to(device)
+        self._mean = self.mean.to(device)
+        self._cached_device = device
+    return (tensor * self._std) + self._mean
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@stain_normalization/callbacks/_base.py` around lines 17 - 20, The denormalize
method repeatedly calls self.std.to(device) and self.mean.to(device) creating
redundant copies; modify the class (where denormalize is defined) to cache
device-transferred tensors (e.g., store self._mean_device and self._std_device
keyed by device or keep last_device/last_std/last_mean) and in denormalize() use
the cached tensors when tensor.device matches the cached device, updating the
cache only when the device changes so you avoid repeated .to(device) calls.
stain_normalization/callbacks/analysis_export.py (1)

56-59: Consider guarding MLflow calls against missing active run.

If this callback runs outside an active MLflow run context, mlflow.log_artifact will raise an exception. A defensive check could improve robustness:

if mlflow.active_run():
    for f in mod_dir.glob("*"):
        mlflow.log_artifact(str(f), artifact_path="analysis_metrics/modified")
    for f in pred_dir.glob("*"):
        mlflow.log_artifact(str(f), artifact_path="analysis_metrics/predicted")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@stain_normalization/callbacks/analysis_export.py` around lines 56 - 59, The
current loop calls mlflow.log_artifact unconditionally and will raise if there
is no active MLflow run; wrap the artifact logging in a guard that checks
mlflow.active_run() before iterating mod_dir and pred_dir so logging only
happens when a run exists, e.g., check mlflow.active_run() and then iterate
mod_dir.glob("*") and pred_dir.glob("*") to call mlflow.log_artifact; ensure
both artifact_path values ("analysis_metrics/modified" and
"analysis_metrics/predicted") are preserved and skip or optionally warn when no
active run is present.
stain_normalization/callbacks/wsi_assembler.py (1)

106-117: Broad exception catch is reasonable here but consider narrowing.

Ruff flags the bare Exception catch (BLE001). While narrowing to specific exceptions (e.g., IOError, pyvips errors) would be cleaner, the current approach ensures cleanup happens regardless of failure mode. The error is logged with full traceback, and the slide is recorded as failed.

If you want to satisfy the linter while maintaining robustness:

Optional refinement
-        except Exception:
+        except (IOError, OSError, RuntimeError) as e:
             print(f"ERROR: Failed to save slide '{slide_name}'")
             traceback.print_exc()
             self._failed_slides.append(slide_name)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@stain_normalization/callbacks/wsi_assembler.py` around lines 106 - 117,
Replace the bare "except Exception:" in the block that calls
self._save_slide(slide_name, self._active) with either (preferred) a narrowed
exception tuple that lists the expected failure modes (e.g., except (IOError,
OSError, pyvips.Error) as e:) and keep the same logging, failure recording
(self._failed_slides.append(slide_name)), and cleanup; or (if you must catch
everything) change to "except Exception as e:  # noqa: BLE001" so Ruff is
satisfied while preserving the traceback logging and failure handling; keep
references to _save_slide, self._active, self._failed_slides, and the
traceback.print_exc() call unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@stain_normalization/metrics/vector_metrics.py`:
- Around line 39-58: The function compare_vectors currently returns
was_swapped=False when either input contains NaN which encodes an actual pairing
decision; instead return an explicit unknown state (e.g., was_swapped=None) or
add a separate pairing_defined flag so callers can detect and skip export/
pairing; update compare_vectors' return shape and annotation to allow
Optional[bool] (or include pairing_defined: bool) and adjust downstream usage
(e.g., stain_normalization/analysis/analyzer.py where the pairing is consumed)
to check for None/ pairing_defined before treating was_swapped as a real
decision.
- Around line 60-74: The current pairing uses OD-space dot products to decide
was_swapped but then reports Lab delta E, causing inconsistent selection;
instead convert vecs1 and vecs2 into Lab with _od_to_lab, compute total delta E
for the straight pairing (delta_e76(lab1_a, lab2_a) + delta_e76(lab1_b, lab2_b))
and for the swapped pairing (delta_e76(lab1_a, lab2_b) + delta_e76(lab1_b,
lab2_a)), set was_swapped to True if the swapped total is smaller, then set
vecs2_paired accordingly and return the individual delta_e76 values and
was_swapped; update references to vecs1, vecs2, _od_to_lab, delta_e76,
was_swapped, vecs2_paired.
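The consistent-space pairing this comment asks for, choosing straight vs swapped assignment by total distance in the same Lab space used for reporting, can be sketched like this. Hypothetical helper names; CIE76 Delta E is simply the Euclidean distance between two Lab points:

```python
import numpy as np

def delta_e76(lab1, lab2) -> float:
    """CIE76 Delta E: Euclidean distance between two Lab points."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)))

def pair_vectors(lab1_a, lab1_b, lab2_a, lab2_b):
    """Pick straight vs swapped pairing by total Lab-space distance."""
    straight = delta_e76(lab1_a, lab2_a) + delta_e76(lab1_b, lab2_b)
    swapped = delta_e76(lab1_a, lab2_b) + delta_e76(lab1_b, lab2_a)
    was_swapped = swapped < straight
    if was_swapped:
        lab2_a, lab2_b = lab2_b, lab2_a
    return delta_e76(lab1_a, lab2_a), delta_e76(lab1_b, lab2_b), was_swapped

# Second image's stain vectors arrive in reversed order: the swap is detected
# because the swapped pairing has zero total distance.
d_a, d_b, swapped = pair_vectors([0, 10, 0], [0, 0, 10], [0, 0, 10], [0, 10, 0])
```

Deciding the pairing with the same metric that is reported avoids the inconsistency where an OD-space dot product picks one assignment but Lab Delta E would have preferred the other.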

---

Nitpick comments:
In `@stain_normalization/callbacks/_base.py`:
- Around line 17-20: The denormalize method repeatedly calls self.std.to(device)
and self.mean.to(device) creating redundant copies; modify the class (where
denormalize is defined) to cache device-transferred tensors (e.g., store
self._mean_device and self._std_device keyed by device or keep
last_device/last_std/last_mean) and in denormalize() use the cached tensors when
tensor.device matches the cached device, updating the cache only when the device
changes so you avoid repeated .to(device) calls.

In `@stain_normalization/callbacks/analysis_export.py`:
- Around line 56-59: The current loop calls mlflow.log_artifact unconditionally
and will raise if there is no active MLflow run; wrap the artifact logging in a
guard that checks mlflow.active_run() before iterating mod_dir and pred_dir so
logging only happens when a run exists, e.g., check mlflow.active_run() and then
iterate mod_dir.glob("*") and pred_dir.glob("*") to call mlflow.log_artifact;
ensure both artifact_path values ("analysis_metrics/modified" and
"analysis_metrics/predicted") are preserved and skip or optionally warn when no
active run is present.

In `@stain_normalization/callbacks/wsi_assembler.py`:
- Around line 106-117: Replace the bare "except Exception:" in the block that
calls self._save_slide(slide_name, self._active) with either (preferred) a
narrowed exception tuple that lists the expected failure modes (e.g., except
(IOError, OSError, pyvips.Error) as e:) and keep the same logging, failure
recording (self._failed_slides.append(slide_name)), and cleanup; or (if you must
catch everything) change to "except Exception as e:  # noqa: BLE001" so Ruff is
satisfied while preserving the traceback logging and failure handling; keep
references to _save_slide, self._active, self._failed_slides, and the
traceback.print_exc() call unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0187134f-5a48-4165-8781-770ea73d8d8f

📥 Commits

Reviewing files that changed from the base of the PR and between 0052904 and 8989a91.

📒 Files selected for processing (7)
  • stain_normalization/callbacks/__init__.py
  • stain_normalization/callbacks/_base.py
  • stain_normalization/callbacks/analysis_export.py
  • stain_normalization/callbacks/tiles_export.py
  • stain_normalization/callbacks/wsi_assembler.py
  • stain_normalization/metrics/image_metrics.py
  • stain_normalization/metrics/vector_metrics.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • stain_normalization/metrics/image_metrics.py
  • stain_normalization/callbacks/tiles_export.py


@LAdam-ix LAdam-ix requested a review from matejpekar March 20, 2026 09:10
@LAdam-ix LAdam-ix requested a review from matejpekar March 21, 2026 22:40
Comment on lines +29 to +30
normalize_mean: list[float] | None = None,
normalize_std: list[float] | None = None,
Member


How can that ever be None?

Member


I don't see a reason why to keep the old version. You can obtain the per tile data from the metric data = metric(preds, targets)

@LAdam-ix
Collaborator Author

I don't see a reason why to keep the old version. You can obtain the per tile data from the metric data = metric(preds, targets)
If I understand correctly, metric(preds, targets) only returns an aggregated number from all tiles in the batch, no? In pure ML it probably doesn't make sense to keep it, but in the context of my thesis I want per-tile data: exact medians, best/worst tiles, percentile distributions. I also use it for standalone dataset analysis. I'll remove it from the PR but keep the code, and then add it plus a demo to the thesis submission.
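The aggregated-vs-per-tile distinction in question can be illustrated with a plain-NumPy stand-in. This is not the torchmetrics API itself, just a sketch of the two reduction levels (torchmetrics metrics typically reduce over the batch unless configured otherwise):

```python
import numpy as np

def mae_per_tile(preds: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Mean absolute error per tile: reduce over pixels, keep the batch axis."""
    return np.abs(preds - targets).reshape(len(preds), -1).mean(axis=1)

preds = np.array([[[0.0, 0.0]], [[1.0, 1.0]]])  # two 1x2 "tiles"
targets = np.zeros_like(preds)

per_tile = mae_per_tile(preds, targets)   # one value per tile
aggregated = per_tile.mean()              # what a default batch reduction reports
```

From the per-tile vector you can then derive medians, percentiles, and best/worst tiles, which is exactly the information a single aggregated scalar discards.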

@LAdam-ix LAdam-ix requested a review from matejpekar March 27, 2026 01:55