Interactive Monte Carlo simulation of the VNAE framework. A visual playground to test how Theta and Beta parameters influence win rates in asymmetric systems.
Updated Feb 4, 2026 - MATLAB
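The VNAE framework's internals are not described in the listing, so the following is only a minimal sketch of what a Monte Carlo win-rate estimate over two parameters might look like (in Python rather than the repo's MATLAB). The payoff rule, and the way `theta` and `beta` enter it, are illustrative assumptions, not the repo's actual model.

```python
import random

def simulate_match(theta, beta, rng):
    """One trial of a hypothetical asymmetric contest.

    `theta` and `beta` stand in for the parameters named in the repo
    description; here they simply scale each side's random strength.
    """
    a = rng.random() * theta  # side A's draw, scaled by theta
    b = rng.random() * beta   # side B's draw, scaled by beta
    return a > b

def win_rate(theta, beta, trials=100_000, seed=0):
    """Monte Carlo estimate of side A's win probability."""
    rng = random.Random(seed)
    wins = sum(simulate_match(theta, beta, rng) for _ in range(trials))
    return wins / trials
```

With this toy rule, `win_rate(1.0, 1.0)` comes out near 0.5, and increasing `theta` relative to `beta` pushes the estimate up, which is the kind of parameter sweep the interactive playground presumably visualizes.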
Training repo for Toy GPT: unigram model + small neutral corpus (000_cat_dog.txt)
Training repo for Toy GPT: context-3 model + small neutral corpus (000_cat_dog.txt)
Training repo for Toy GPT: bigram model + small domain corpus (010_llm_glossary.txt)
Training repo for Toy GPT: context-3 model + small structured corpus (001_animals.txt)
Training repo for Toy GPT: unigram model + small structured corpus (001_animals.txt)
Training repo for Toy GPT: bigram model + small structured corpus (001_animals.txt)
Training repo for Toy GPT: context-2 model + small structured corpus (001_animals.txt)
Training repo for Toy GPT: context-2 model + small neutral corpus (000_cat_dog.txt)
Training repo for Toy GPT: bigram model + small neutral corpus (000_cat_dog.txt)
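As a rough illustration of what the bigram variants above train, here is a minimal word-level bigram model over a tiny made-up corpus. The actual Toy GPT code and corpora such as 000_cat_dog.txt are not reproduced here; this is a generic sketch of the technique.

```python
import random
from collections import defaultdict, Counter

def train_bigram(tokens):
    """Count next-token frequencies for each token (a bigram model)."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample a token sequence by repeatedly drawing the next token
    in proportion to its bigram count after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt_counts = counts.get(out[-1])
        if not nxt_counts:
            break  # dead end: token never appeared with a successor
        toks, weights = zip(*nxt_counts.items())
        out.append(rng.choices(toks, weights=weights)[0])
    return out

# Toy stand-in corpus (not one of the repo's corpus files).
corpus = "the cat sat on the mat the dog sat on the rug".split()
model = train_bigram(corpus)
print(" ".join(generate(model, "the", 6)))
```

The context-2 and context-3 variants generalize this by keying the counts on the last two or three tokens instead of one.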
Can a cellular neural network learn how to reliably express meaningful patterns? This is a toy model for a gene regulatory network for cell specification.
Numerical consistency tests for a relational realisation-budget formulation of special relativity (SR) and the static Schwarzschild sector.
🚀 Train a custom unigram model with simple, efficient methods for easy adoption in natural language processing tasks.
Training repo for Toy GPT: context-3 model + small domain corpus (010_llm_glossary.txt)
🚀 Train a 200-bigram language model for text generation and other natural language processing tasks.
🐾 Train bigram models on 200 animal names for efficiently generating relevant text in natural language processing tasks.
🐾 Train an AI model on 300 diverse contexts about animals to improve understanding and interaction in natural language processing applications.
Interactive browser demo showing how reference concentration can dominate mean activation in a reaction–diffusion-style crowd model. Two identical systems with the same mean activation diverge when reference structure differs.
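The demo itself is interactive, but its core claim, that reference concentration can dominate mean activation, can be sketched numerically. The saturating (Hill-type) response used below is an illustrative assumption, not necessarily the demo's actual reaction–diffusion rule.

```python
def response(activation, reference):
    """Hill-type saturating response relative to a reference
    concentration (an assumed stand-in for the demo's dynamics)."""
    return activation / (activation + reference)

def mean_response(activations, reference):
    """Population-mean effective output under a shared reference."""
    return sum(response(a, reference) for a in activations) / len(activations)

acts = [0.4, 0.6]  # identical activations in both runs, mean 0.5
low_ref = mean_response(acts, 0.1)   # weak reference concentration
high_ref = mean_response(acts, 1.0)  # strong reference concentration
# Same activations and same mean, yet the effective outputs diverge
# sharply once the reference structure differs.
```

Here `low_ref` comes out more than twice `high_ref`, illustrating how the reference term, not the mean activation, drives the output.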
🚂 Train a custom GPT model on 300 contexts to enhance natural language processing tasks and interactive applications.
Training repo for Toy GPT: context-2 model + small domain corpus (repo tour)