NEURAL NETWORK EVOLUTION: CHROME DINO GAME

A machine learning demonstration using Chrome's iconic "No Internet" dinosaur game. Watch 100 AI-controlled dinosaurs learn to avoid obstacles through genetic algorithms and neural networks—no manual programming required.

HUD readout: current generation, dinos alive, best score, average score, all-time record, and playback speed (up to 3x). Rank tiers are color-coded: top 20 gold, ranks 21-40 blue, 41-60 purple, 61-80 pink, and 81-100 gray.

Simulation Statistics

Total Generations: complete evolution cycles
Best Score Ever: highest fitness achieved
Average Performance: mean score across all generations
Current Best: top score this generation

Generation History

Each bar represents one generation. Colors show the proportional contribution of each rank tier to the total population score, sorted from highest (bottom) to lowest (top). Watch how gold dinos dominate more as evolution progresses.


Performance by Rank

For each rank tier, the table reports its color, the number of dinos still alive, the average score, and the best score.

About This Project

This interactive demonstration recreates Google Chrome's offline dinosaur game as a testbed for neuroevolution, which combines neural networks with genetic algorithms to evolve game-playing AI. Rather than training through backpropagation, the population improves through natural selection over generations.

How It Works

Each dinosaur has its own neural network "brain" that controls its decisions to jump or duck based on what it sees.

The Neural Network

Each dino's brain is a simple feedforward neural network with sigmoid activations: it takes what the dino currently sees as input and outputs the decision to jump or duck.

The network starts with uniformly random weights, so generation 1 performs terribly. But through evolution, successful patterns emerge.
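A minimal sketch of one dino "brain" as described above. The input features (distance to the next obstacle, obstacle height, game speed) and the layer sizes are illustrative assumptions, not taken from the demo's actual code:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class DinoBrain:
    """Feedforward network with one hidden layer and sigmoid activations."""

    def __init__(self, n_inputs=3, n_hidden=4, n_outputs=2):
        # Uniform random weights: generation 1 acts essentially at random.
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                   for _ in range(n_hidden)]
        self.w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)]
                   for _ in range(n_outputs)]

    def think(self, inputs):
        # Standard forward propagation: inputs -> hidden -> outputs.
        hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
                  for row in self.w1]
        return [sigmoid(sum(w * h for w, h in zip(row, hidden)))
                for row in self.w2]

brain = DinoBrain()
# Hypothetical observation: [obstacle distance, obstacle height, game speed].
jump, duck = brain.think([0.8, 0.2, 0.5])  # each output lies in (0, 1)
```

An output above some threshold (say 0.5) would trigger the corresponding action.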

The Genetic Algorithm

After all 100 dinos die, a new generation is created through natural selection: the top performers are preserved unchanged (elitism), the rest of the population is rebuilt from copies of those survivors, and Gaussian noise applied to the copied weights introduces variation.

This process mimics biological evolution: successful traits propagate, unsuccessful ones die out, and mutations introduce variety.
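The selection step can be sketched as follows. The elite count of 20 (chosen here to match the gold tier) and the mutation parameters are assumptions for illustration, not the demo's actual values:

```python
import random

POP_SIZE = 100
ELITE_COUNT = 20  # assumed to correspond to the gold tier

def mutate(genome, rate=0.1, sigma=0.5):
    # Gaussian noise on a fraction of weights introduces variety.
    return [w + random.gauss(0, sigma) if random.random() < rate else w
            for w in genome]

def next_generation(population, fitnesses):
    # Sort genomes by fitness (survival time), best first.
    ranked = [g for _, g in sorted(zip(fitnesses, population),
                                   key=lambda p: p[0], reverse=True)]
    elites = ranked[:ELITE_COUNT]              # survive unchanged (elitism)
    children = [mutate(random.choice(elites))  # mutated clones fill the rest
                for _ in range(POP_SIZE - ELITE_COUNT)]
    return elites + children
```

Here a "genome" is simply the flattened list of a brain's weights; fitness is the number of frames survived.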

Key Observations

Several patterns typically emerge as the dinos improve over generations:

Early generations fail quickly, as random neural weights produce poor decisions.
As evolution progresses, successful jump timing emerges.
Eventually, top performers can last thousands of frames.
The gold-ranked dinos represent the best evolved strategies, while lower ranks show the population's diversity.
Game speed increases over time, creating selection pressure that favors increasingly sophisticated obstacle avoidance.

Technical Implementation

The neural network architecture uses standard feedforward propagation with sigmoid activation functions. Weight initialization follows uniform random distribution, and mutations apply Gaussian noise to existing weights. The genetic algorithm implements elitism-based selection to preserve top performers while maintaining population diversity through stratified mutation rates. Fitness is measured purely by survival time, creating a clear optimization objective. This demonstrates core machine learning concepts: function approximation, gradient-free optimization, and the exploration-exploitation tradeoff.
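The stratified mutation rates mentioned above might look like the following sketch, where lower-ranked tiers receive stronger Gaussian noise so that the best strategies are preserved while the population stays diverse. The per-tier sigma values are illustrative assumptions:

```python
import random

# Assumed noise scale per rank tier, gold (0) through gray (4).
TIER_SIGMA = {0: 0.05, 1: 0.15, 2: 0.30, 3: 0.50, 4: 0.80}

def tier_of(rank, tier_size=20):
    # Ranks 0-19 -> tier 0 (gold), 20-39 -> tier 1 (blue), etc.
    return rank // tier_size

def mutate_by_rank(genome, rank):
    # Top tiers get gentle noise; bottom tiers explore more aggressively.
    sigma = TIER_SIGMA[tier_of(rank)]
    return [w + random.gauss(0, sigma) for w in genome]
```

This balances the exploration-exploitation tradeoff: elite genomes exploit what already works, while heavily mutated low-rank genomes keep exploring new behaviors.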