Skip to content

eshwanthkartitr/Gate_neural_network_cpp

Folders and files

NameName
Last commit message
Last commit date

Latest commit

ย 

History

9 Commits
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 

Repository files navigation

Neural Network Logic Gates

C++ License Build Platform

Faast af boiiii

๐ŸŽฏ Overview

This project is a revisit of something I first built back in my first year using Python. The idea was simple: train a small neural network to learn the behavior of basic logic gates. This C++ version is cleaner, faster, and more structured, but still stays true to that original spirit of learning by building from scratch.

Itโ€™s a compact, fromโ€‘scratch C++11 implementation that covers gates like OR, AND, XOR, NOT, NAND, and NOR. The goal is to make the code humanโ€‘friendly and easy to follow, while still showing the essential concepts of forward/backward propagation, activation functions, and training with gradient descent.

โœจ Features

  • ๐Ÿง  Complete Logic Gate Support - OR, AND, XOR, NOT, NAND, NOR
  • โš™๏ธ JSON Configuration - Easy parameter tuning without recompilation
  • ๐Ÿ“Š ASCII Visualization - Decision boundary display in terminal
  • ๐Ÿ“ˆ Real-time Monitoring - Loss tracking during training
  • Tested on Windows

๐Ÿš€ Quick Start

Prerequisites

  • C++11 compatible compiler (g++, clang++, MSVC)
  • Standard library support

Installation & Usage

# Clone the repository
git clone https://github.com/eshwanthkartitr/Gate_neural_network_cpp.git
cd Gate-neural-network-cpp

# Compile
g++ -std=c++11 -Wall -O2 -o logic_gates logic_gates_main.cpp

# Run
./logic_gates

Alternatively, use the provided Makefile:

make logic_gates
./logic_gates

๐Ÿ“ Project Structure

neural-network-logic-gates/
โ”œโ”€โ”€logic_gates_main.cpp    # Main program with JSON parser
โ”œโ”€โ”€ gates_config.json       # Gate configurations
โ”œโ”€โ”€NN.cpp                  # Neural network class
โ”œโ”€โ”€layer.cpp               # Layer implementations
โ”œโ”€โ”€activation.cpp           # Activation functions
โ”œโ”€โ”€losses.cpp              # Loss functions (BCE, MSE)
โ”œโ”€โ”€utils.cpp               # Utility functions
โ”œโ”€โ”€main.cpp                # Original XOR example
โ”œโ”€โ”€Makefile               # Build configuration
โ””โ”€โ”€README.md               # This file

โš™๏ธ Configuration

Customize training parameters by editing gates_config.json:

{
  "gates": [
    {
      "name": "XOR",
      "inputs": [[0,0], [0,1], [1,0], [1,1]],
      "outputs": [[0], [1], [1], [0]],
      "epochs": 4000,
      "learning_rate": 0.1
    },
    {
      "name": "AND",
      "inputs": [[0,0], [0,1], [1,0], [1,1]],
      "outputs": [[0], [0], [0], [1]],
      "epochs": 2000,
      "learning_rate": 0.1
    }
  ]
}

Configuration Parameters

Parameter Description Example
name Gate identifier "XOR", "AND", "OR"
inputs Training input vectors [[0,0], [0,1], [1,0], [1,1]]
outputs Expected output vectors [[0], [1], [1], [0]]
epochs Number of training iterations 4000
learning_rate Gradient descent step size 0.1

๐Ÿ—๏ธ Network Architecture

For 2-Input Gates (OR, AND, XOR, NAND, NOR)

Input(2) โ†’ Linear(2โ†’4) โ†’ ReLU โ†’ Linear(4โ†’1) โ†’ Sigmoid โ†’ Output(1)

For 1-Input Gates (NOT)

Input(1) โ†’ Linear(1โ†’3) โ†’ ReLU โ†’ Linear(3โ†’1) โ†’ Sigmoid โ†’ Output(1)

๐Ÿ“Š Sample Output

Loaded 6 gates from gates_config.json
Training All Logic Gates from JSON...

=== Training XOR Gate ===
Epochs: 4000, Learning Rate: 0.1
Epoch 4000/4000 - Loss: 0.0001
XOR Gate training completed!

=== Testing XOR Gate ===
Input -> Output (Probability) -> Predicted -> Expected
------------------------------------------------
0,0 -> 0.0234 -> 0 -> 0 [โœ“ OK]
0,1 -> 0.9876 -> 1 -> 1 [โœ“ OK]
1,0 -> 0.9823 -> 1 -> 1 [โœ“ OK]
1,1 -> 0.0187 -> 0 -> 0 [โœ“ OK]
Accuracy: 4/4 (100%)

=== Hyperplane Visualization for XOR Gate ===
Decision boundary (0=blue, 1=red):
   0.0 0.2 0.4 0.6 0.8 1.0
1.0 0 0 0 1 1 1
0.8 0 0 1 1 1 1
0.6 0 1 1 1 1 1
0.4 1 1 1 1 1 0
0.2 1 1 1 1 0 0
0.0 1 1 1 0 0 0

Successfully trained all 6 logic gates! ๐ŸŽ‰

๐ŸŽ›๏ธ Customization

Adding Custom Gates

Create new gate definitions in gates_config.json:

{
  "name": "CUSTOM_3INPUT_GATE",
  "inputs": [
    [0,0,0], [0,0,1], [0,1,0], [0,1,1],
    [1,0,0], [1,0,1], [1,1,0], [1,1,1]
  ],
  "outputs": [[0], [1], [1], [0], [1], [0], [0], [1]],
  "epochs": 5000,
  "learning_rate": 0.05
}

Modifying Network Architecture

Edit the createNetwork() function in logic_gates_main.cpp:

// Example: Deeper network
network.add(new Linear(2, 8));    // More neurons
network.add(new Relu());
network.add(new Linear(8, 4));    // Additional hidden layer
network.add(new Relu());
network.add(new Linear(4, 1));
network.add(new Sigmoid());

๐Ÿ”ง Build Options

Using Makefile

make logic_gates        # Release build
make debug             # Debug build with symbols
make clean             # Clean build files

Manual Compilation

# Release build
g++ -std=c++11 -Wall -O2 -o logic_gates logic_gates_main.cpp

# Debug build
g++ -std=c++11 -Wall -g -DDEBUG -o logic_gates_debug logic_gates_main.cpp

# With additional optimizations
g++ -std=c++17 -Wall -O3 -march=native -o logic_gates logic_gates_main.cpp

๐Ÿงฎ Mathematical Foundation

Forward Propagation

For a 2-layer network:

hโ‚ = Wโ‚x + bโ‚          # Linear transformation
aโ‚ = ReLU(hโ‚)          # Non-linear activation
hโ‚‚ = Wโ‚‚aโ‚ + bโ‚‚         # Second linear transformation
ลท = ฯƒ(hโ‚‚)              # Sigmoid output

Backward Propagation

Gradients computed using the chain rule:

โˆ‚L/โˆ‚Wโ‚‚ = โˆ‚L/โˆ‚ลท ร— โˆ‚ลท/โˆ‚hโ‚‚ ร— โˆ‚hโ‚‚/โˆ‚Wโ‚‚
โˆ‚L/โˆ‚Wโ‚ = โˆ‚L/โˆ‚ลท ร— โˆ‚ลท/โˆ‚hโ‚‚ ร— โˆ‚hโ‚‚/โˆ‚aโ‚ ร— โˆ‚aโ‚/โˆ‚hโ‚ ร— โˆ‚hโ‚/โˆ‚Wโ‚

Activation Functions

Function Formula Derivative
Sigmoid ฯƒ(x) = 1/(1+e^(-x)) ฯƒ'(x) = ฯƒ(x)(1-ฯƒ(x))
ReLU ReLU(x) = max(0,x) ReLU'(x) = x > 0 ? 1 : 0

Loss Functions

Function Formula Use Case
Binary Cross-Entropy BCE = -1/N ฮฃ[yร—ln(ลท) + (1-y)ร—ln(1-ลท)] Binary classification
Mean Squared Error MSE = 1/N ฮฃ(y - ลท)ยฒ Regression tasks

๐Ÿ“Š Logic Gates Truth Tables

Gate 0,0 0,1 1,0 1,1 Complexity
OR 0 1 1 1 Linear โญ
AND 0 0 0 1 Linear โญ
XOR 0 1 1 0 Non-linear โญโญโญ
NAND 1 1 1 0 Linear โญ
NOR 1 0 0 0 Linear โญ
NOT 1โ†’0 0โ†’1 - - Linear โญ

โšก Performance Benchmarks

Gate Training Time Epochs Final Loss Memory Usage
OR/AND/NAND/NOR ~1-2s 2000 <0.001 ~2MB
XOR ~3-4s 4000 <0.001 ~2MB
NOT ~0.5s 1500 <0.001 ~1MB

System: Intel i7, 16GB RAM, compiled with -O2

๐Ÿ’ก Tips & Best Practices

Training Optimization

  • ๐Ÿ“ˆ Increase epochs for better XOR convergence (try 5000-8000)
  • โš™๏ธ Adjust learning rate if training is unstable (try 0.05-0.2)
  • ๐Ÿ—๏ธ Add more layers for complex custom gates
  • ๐Ÿ“Š Monitor loss - should decrease and stabilize near 0

Troubleshooting

Issue Symptom Solution
Slow Convergence Loss decreasing slowly Increase learning rate
Unstable Training Loss oscillating Decrease learning rate
Poor XOR Performance XOR accuracy <90% Increase epochs or add neurons
Compilation Errors Missing C++11 features Update compiler or add -std=c++11

๐Ÿค Contributing

Contributions are welcome! Here's how you can help:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Areas

  • ๐Ÿง  New activation functions (Tanh, Leaky ReLU, Swish)
  • ๐Ÿ“Š Additional layer types (Dropout, Batch Normalization)
  • ๐ŸŽจ Enhanced visualization (matplotlib integration)
  • ๐Ÿ“ˆ Performance optimizations (SIMD, threading)
  • ๐Ÿงช Unit tests and benchmarks

๐Ÿ“š Learning Resources

๐Ÿ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • Inspired by classic neural network tutorials
  • Built for educational purposes and learning
  • Special thanks to the open-source community

โญ Star this repository if it helped you learn neural networks! โญ

Built with โค๏ธ for learning neural networks from scratch

About

No description, website, or topics provided.

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors