
Conversation

@fderuiter
Owner

🧱 Mason: Refactor Predictor to use Dependency Injection (Fixes DIP/OCP)

🏚️ Violation: The `Cera` struct was tightly coupled to the concrete `Predictor` implementation, instantiating it directly in its constructor. `CeraTrainer` also reached into `Predictor` internals (its `layers`) to perform optimization. This violated DIP (depend on abstractions, not concretions) and OCP (swapping the predictor required modifying existing code).

🏗️ Fix:

  1. Extracted a `PredictorModel` trait in `math_explorer/src/climate/predictor.rs`.
  2. Refactored `Cera` to hold a `Box<dyn PredictorModel>`.
  3. Added `Cera::new_with_predictor` to allow injecting custom predictors.
  4. Moved the "optimization" logic (a random-perturbation simulation) from `CeraTrainer` to `Predictor::update_weights`, improving encapsulation; see the sketch after this list.
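
For concreteness, here is a minimal Rust sketch of the resulting shape. The trait name, the `update_weights` signature, the `Box<dyn PredictorModel>` field, the `Result<Self, String>` return, and the channel-count check are grounded in this PR and the `cargo fmt` diff further down; the exact `CeraConfig` definition is an assumption, not the repository's verbatim code.

```rust
// Sketch only: `CeraConfig`'s field set is assumed; the trait method and
// the constructor validation mirror the PR description and the CI diff.
pub trait PredictorModel {
    /// Weight-update hook (signature visible in the fmt diff below).
    fn update_weights(&mut self, learning_rate: f32);
}

/// Assumed minimal config; only these two fields appear in the diff.
pub struct CeraConfig {
    pub aligned_channels: usize,
    pub latent_channels: usize,
}

pub struct Cera {
    config: CeraConfig,
    predictor: Box<dyn PredictorModel>,
}

impl Cera {
    /// Injection point: any `PredictorModel` implementation can be supplied.
    pub fn new_with_predictor(
        config: CeraConfig,
        predictor: Box<dyn PredictorModel>,
    ) -> Result<Self, String> {
        if config.aligned_channels > config.latent_channels {
            return Err(format!(
                "aligned_channels ({}) cannot be greater than latent_channels ({})",
                config.aligned_channels, config.latent_channels
            ));
        }
        Ok(Cera { config, predictor })
    }
}
```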

🔗 Principle: Dependency Inversion Principle (DIP), Open/Closed Principle (OCP), Single Responsibility Principle (SRP).

🧪 Verification:

  • Ran `cargo test climate` to verify that the `cera` and `predictor` tests pass.
  • Verified that `Cera::new` still works for existing tests (the factory method stays backward compatible, though the field type changed).
  • Verified that `CeraTrainer` works with the new abstraction.

PR created automatically by Jules for task 6198338620396898498 started by @fderuiter

- Extracted `PredictorModel` trait in `math_explorer/src/climate/predictor.rs`.
- Implemented `PredictorModel` for `Predictor`.
- Refactored `Cera` struct to use `Box<dyn PredictorModel>` instead of concrete `Predictor`.
- Added `Cera::new_with_predictor` for dependency injection.
- Decoupled `CeraTrainer` from `Predictor` internals by moving the weight-update logic to `Predictor::update_weights`; a sketch follows below.
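
A minimal sketch of that relocated logic, reconstructed from the `cargo fmt` diff below. The loop body and the `update_weights` signature come from the diff; the single-field `Layer` struct and the final `kernel -= grad * lr` step are assumptions. It assumes the `nalgebra` and `rand` crates are dependencies.

```rust
use nalgebra::DMatrix;

/// Assumed minimal layer shape; the real `Layer` likely has more fields.
pub struct Layer {
    pub kernel: DMatrix<f32>,
}

pub struct Predictor {
    layers: Vec<Layer>,
}

impl Predictor {
    /// Random-perturbation "optimization" moved in from `CeraTrainer`:
    /// each kernel is nudged by a uniform pseudo-gradient in [-0.5, 0.5).
    pub fn update_weights(&mut self, learning_rate: f32) {
        for layer in self.layers.iter_mut() {
            let grad_k = DMatrix::from_fn(layer.kernel.nrows(), layer.kernel.ncols(), |_, _| {
                rand::random::<f32>() - 0.5
            });
            // Assumed update rule: step the kernel against the pseudo-gradient.
            layer.kernel -= grad_k * learning_rate;
        }
    }
}
```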

This change satisfies the Dependency Inversion Principle (DIP) and Open/Closed Principle (OCP) by allowing the predictor implementation to be swapped or extended without modifying `Cera` or `CeraTrainer`.
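
To illustrate that extensibility claim, a hypothetical test double (the name `NoopPredictor` is invented here) can now be injected without touching `Cera`, using the `PredictorModel` trait sketched above:

```rust
/// Hypothetical stand-in used only to illustrate the injection seam.
struct NoopPredictor;

impl PredictorModel for NoopPredictor {
    fn update_weights(&mut self, _learning_rate: f32) {
        // Deliberately a no-op: handy as a test double.
    }
}

// `Cera` only names the trait, so this compiles with no change to it:
// let cera = Cera::new_with_predictor(config, Box::new(NoopPredictor))?;
```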

Co-authored-by: fderuiter <127706008+fderuiter@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly afterward. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@fderuiter
Owner Author

```
Run cargo fmt -- --check
Diff in /home/runner/work/math/math/math_explorer/src/climate/cera.rs:65:
         predictor: Box<dyn PredictorModel>,
     ) -> Result<Self, String> {
         if config.aligned_channels > config.latent_channels {
-             return Err(format!(
+            return Err(format!(
                 "aligned_channels ({}) cannot be greater than latent_channels ({})",
                 config.aligned_channels, config.latent_channels
             ));

Diff in /home/runner/work/math/math/math_explorer/src/climate/predictor.rs:74:
 }

     fn update_weights(&mut self, learning_rate: f32) {
-         for layer in self.layers.iter_mut() {
+        for layer in self.layers.iter_mut() {
             let grad_k = DMatrix::from_fn(layer.kernel.nrows(), layer.kernel.ncols(), |_, _| {
                 rand::random::<f32>() - 0.5
             });

Error: Process completed with exit code 1.
```

@google-labs-jules
Copy link
Contributor

I have run `cargo fmt` to fix the indentation issues. Thanks for pointing them out.

- Applied `cargo fmt` to ensure code style compliance.

Co-authored-by: fderuiter <127706008+fderuiter@users.noreply.github.com>
fderuiter merged commit 452aca6 into main on January 27, 2026.
1 check passed.
fderuiter deleted the mason-refactor-predictor-dip-6198338620396898498 branch on January 27, 2026 at 20:09.