A sophisticated multi-agent reinforcement learning system for the Kaggle Lux AI Season 3 Challenge, featuring custom pathfinding algorithms, Deep Q-Networks, and strategic exploration-exploitation balance for competitive gameplay in a dynamic space environment.
This project implements intelligent agents capable of competing in the Lux AI Season 3 challenge, where two teams control units in deep space to explore ancient relics, harvest energy, and score points across best-of-5 match sequences. The challenge features:
- Dynamic Environments: Procedurally generated 24x24 maps with asteroids, nebulae, and energy nodes
- Fog of War: Limited vision based on unit positions and environmental factors
- Strategic Depth: Balance between exploration in early matches and exploitation in later matches
- Randomized Mechanics: Game parameters vary between matches, requiring adaptive strategies
- DQN-Based Agent: Deep Q-Network implementation using JAX for high-performance training
- Q-Learning Agent: Traditional reinforcement learning with experience replay and exploration strategies
- Relic Bot Base: Specialized pathfinding and relic discovery agent
- Defensive Sapper: Strategic agent with defensive positioning and energy sapping capabilities
- JAX Integration: High-performance numerical computing with GPU acceleration support
- Custom Space Mapping: Efficient fog-of-war tracking and terrain analysis using JAX arrays
- Pathfinding System: A* pathfinding with dynamic obstacle avoidance
- Modular Architecture: Clean separation between agent logic, environment interaction, and utilities
- Comprehensive Testing: Unit tests for core functionality with pytest
- Replay System: Episode recording for post-match analysis and debugging
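The fog-of-war tracking described above can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual `space.py` code; it is shown with NumPy for brevity, and `jax.numpy` mirrors the same API for the JAX-array version:

```python
import numpy as np

# Hypothetical sketch of persistent fog-of-war tracking on a 24x24 map.
# The project stores these grids as JAX arrays; jax.numpy mirrors the
# NumPy API used here, so the same logic applies.
MAP_SIZE = 24
UNKNOWN = -1  # sentinel for tiles never observed

def update_fog(known_terrain, explored_mask, obs_terrain, visible_mask):
    """Merge one turn's observation into the persistent map state."""
    explored_mask = explored_mask | visible_mask
    # Overwrite terrain only where the tile is currently visible.
    known_terrain = np.where(visible_mask, obs_terrain, known_terrain)
    return known_terrain, explored_mask

# Example: start fully unknown, then observe a 3x3 patch of empty space.
known = np.full((MAP_SIZE, MAP_SIZE), UNKNOWN)
explored = np.zeros((MAP_SIZE, MAP_SIZE), dtype=bool)
obs = np.zeros((MAP_SIZE, MAP_SIZE), dtype=int)  # 0 = empty space
visible = np.zeros((MAP_SIZE, MAP_SIZE), dtype=bool)
visible[0:3, 0:3] = True

known, explored = update_fog(known, explored, obs, visible)
```

Keeping a separate boolean `explored` mask alongside the terrain grid lets the agent distinguish "never seen" tiles from "seen but empty" ones, which matters for relic search.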
```
lux_agent_comp/
├── core/                      # Core agent framework
│   ├── base_agent.py          # Abstract base agent with DQN support
│   ├── space.py               # Space representation and fog-of-war tracking
│   ├── pathfinding.py         # A* pathfinding implementation
│   ├── node.py                # Node structure for pathfinding
│   └── debug.py               # Visualization tools
├── lux/                       # Lux AI environment integration
│   ├── kit.py                 # Lux AI kit utilities
│   └── utils.py               # Helper functions
├── agent.py                   # Main agent implementation
├── dqn_agent.py               # Deep Q-Network agent
├── main.py                    # Kaggle submission entry point
├── evaluate_agents.py         # Agent evaluation and testing
├── test_env.py                # Environment testing utilities
├── profiling.py               # Performance profiling
├── saved_agents/              # Version-controlled agent snapshots
│   ├── q_learning_agent/
│   ├── relic_bot_base/
│   └── relic_bot_defensive_sapper/
└── tests/                     # Unit and integration tests
```
| Category | Technologies |
|---|---|
| Core ML/RL | JAX, Flax, Stable-Baselines3, Ray RLlib, OpenRL |
| Environment | Gymnasium, PettingZoo, luxai-s3 |
| Visualization | Matplotlib, Seaborn, Plotly, TensorBoard, Weights & Biases |
| Utilities | NumPy, Pygame (rendering), Beautiful Soup |
| Testing | pytest, pytest-cov |
| Package Management | uv, pip |
- Python 3.10 or higher
- (Optional) CUDA-capable GPU for JAX acceleration
1. Clone the repository

   ```shell
   git clone https://github.com/jmduea/lux_agent_comp.git
   cd lux_agent_comp
   ```

2. Install dependencies using uv (recommended)

   ```shell
   # Install uv if you don't have it
   pip install uv
   # Install project dependencies
   uv sync
   ```

   Or using pip:

   ```shell
   pip install -e .
   ```

3. Install development dependencies (for testing)

   ```shell
   uv sync --group dev
   ```
```shell
python main.py
```

This agent is designed to work with the Kaggle submission system and reads input from stdin following the Lux AI protocol.
Compare different agent implementations:
```shell
python evaluate_agents.py
```

This script runs multiple games between agents and saves replays to the `replays/` directory for analysis.
```shell
python test_env.py
```

```shell
# Run all tests
pytest

# Run with coverage
pytest --cov=core --cov=lux

# Run specific test files
pytest tests/core/test_space.py
```

```shell
python dqn_agent.py
```

Monitor training progress using TensorBoard or Weights & Biases integration.
```shell
python create_submission.py
```

This packages your agent and dependencies into a submission-ready format for Kaggle.
The primary competition agent featuring:
- Relic exploration with unknown tile targeting
- Energy-aware movement decisions
- Dynamic action selection based on game state
- JAX-accelerated state processing
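The "unknown tile targeting" and "energy-aware movement" bullets above can be sketched roughly like this. The threshold, action names, and (row, col) coordinate convention are illustrative assumptions, not the project's actual code:

```python
import numpy as np

# Hypothetical sketch of energy-aware exploration: a unit heads for the
# nearest unexplored tile, but idles to recharge when energy is low.
LOW_ENERGY = 30  # assumed threshold; the real value would be tuned per match

def choose_action(unit_pos, unit_energy, explored_mask):
    if unit_energy < LOW_ENERGY:
        return "center"  # stay put and accumulate energy
    unknown = np.argwhere(~explored_mask)  # (row, col) of unexplored tiles
    if unknown.size == 0:
        return "center"  # map fully explored
    # Manhattan distance to every unknown tile; pick the closest target.
    dists = np.abs(unknown - unit_pos).sum(axis=1)
    target = unknown[np.argmin(dists)]
    # Greedy single step toward the target (A* would handle obstacles).
    dr, dc = np.sign(target - np.array(unit_pos))
    if dr < 0:
        return "up"
    if dr > 0:
        return "down"
    if dc > 0:
        return "right"
    if dc < 0:
        return "left"
    return "center"
```

In the real agent, the single greedy step would be replaced by a call into the A* pathfinder so that asteroids and nebulae are routed around rather than walked into.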
Deep reinforcement learning agent with:
- Flax-based neural network architecture
- Experience replay buffer
- Target network for stable training
- Epsilon-greedy exploration
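Two of the ingredients listed above, the experience replay buffer and epsilon-greedy exploration, can be sketched as follows. Class names and capacities are illustrative, not the project's actual `dqn_agent.py` code:

```python
import random
from collections import deque
import numpy as np

# Hedged sketch of a fixed-capacity experience replay buffer and
# epsilon-greedy action selection, two standard DQN components.
class ReplayBuffer:
    def __init__(self, capacity=10_000):
        # deque(maxlen=...) silently drops the oldest transitions.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch, stacked column-wise for training.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

def epsilon_greedy(q_values, epsilon, rng=random):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return int(np.argmax(q_values))
```

Decaying `epsilon` over training shifts the agent from exploration toward exploitation, mirroring the early-match/late-match balance the challenge rewards.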
- Q-Learning Agent: Traditional Q-learning with custom reward shaping
- Relic Bot Base: Specialized relic discovery and collection strategy
- Defensive Sapper: Focuses on defensive positioning and opponent disruption
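The "custom reward shaping" mentioned for the Q-Learning Agent can be illustrated with a tabular update like the one below. The coefficients and reward terms are hypothetical examples of shaping, not the project's actual values:

```python
import numpy as np

# Illustrative tabular Q-learning backup with a shaped reward.
ALPHA, GAMMA = 0.1, 0.99  # learning rate and discount (assumed values)

def shaped_reward(points_gained, tiles_revealed, energy_spent):
    # Base match score plus small bonuses/penalties to densify the
    # learning signal between scoring events.
    return points_gained + 0.1 * tiles_revealed - 0.01 * energy_spent

def q_update(q_table, state, action, reward, next_state):
    """One Bellman backup: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))."""
    td_target = reward + GAMMA * np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (td_target - q_table[state, action])
    return q_table
```

The shaping terms reward information gain (revealed tiles) and penalize wasted energy, so the agent gets feedback even on turns where no points are scored.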
- Development: Implement and test agents locally
- Evaluation: Run agents against baseline implementations
- Profiling: Identify performance bottlenecks using `profiling.py`
- Visualization: Analyze game replays and agent behavior
- Iteration: Refine strategy based on evaluation results
- Submission: Package and submit to Kaggle
- Exploration vs Exploitation: Early match exploration is crucial for mapping relic positions and energy distributions
- JAX Performance: Using JAX for state representation provides significant performance improvements
- Pathfinding: Custom A* implementation handles dynamic obstacles (asteroids, nebulae) efficiently
- Vision Management: Proper fog-of-war tracking is essential for strategic decision-making
- Modular Design: Separating concerns allows for rapid agent iteration and testing
- Implement Monte Carlo Tree Search (MCTS) for strategic planning
- Add multi-agent coordination strategies
- Enhance opponent prediction and modeling
- Optimize energy harvesting algorithms
- Implement adaptive strategy selection based on game parameters
- Add comprehensive reward shaping for RL training
- Integrate transformer-based models for sequence prediction
This is a personal competition project, but feedback and suggestions are welcome! Feel free to open issues for discussion.
This project is open source and available for educational and portfolio purposes.
- Lux AI Challenge Team for creating an engaging competition
- Kaggle community for strategies and insights
- JAX and Flax teams for excellent ML frameworks
Author: Jon Duea (jdueadev@gmail.com)
Repository: github.com/jmduea/lux_agent_comp