# TinyAI — Neural Network Inference & Training on Edge Devices

A pure C++17 on-device AI framework: from tensor ops to model training to INT8 quantized deployment, all in one stack. Optimized for the ESP32-S3 (with PSRAM).
## Architecture

- **Core** — Training Primitives
  - `Tensor` · N-D tensor
  - `Activation` · ReLU / Sigmoid / Tanh / Softmax
  - `Loss` · MSE / CrossEntropy / BinaryCE
  - `Optimizer` · SGD (momentum) / Adam
- **Layers** — Neural Network Layers
  - `Dense` · fully connected (Xavier init)
  - `Conv1D/2D` · convolution (He init)
  - `Pool` · max / avg pooling
  - `Norm` · LayerNorm
  - `Attention` · multi-head self-attention
- **Models** — Model Containers
  - `Sequential` · layer stack
  - `MLP` · multi-layer perceptron wrapper
  - `CNN1D` · 1-D convolution wrapper
- **Quant** — Quantization
  - `INT8/INT16` · symmetric PTQ
  - `FP8` · E4M3FN / E5M2 in software
  - `tiny_quant_dense_forward_int8` · quantized dense forward
- **Train** — Training Loop
  - `Dataset` · shuffle / split / mini-batch
  - `Trainer` · fit / evaluate
- **Examples** — End-to-End Demos
  - MLP + INT8 PTQ · Iris classification
  - CNN1D + FP8 · signal classification
  - Attention · tiny Transformer
## Use Case Index

- Train a small model from scratch → MLP + Dataset + Trainer
- Inference on ESP32 → build with `TINY_AI_TRAINING_ENABLED=0`, run `forward()`
- Model quantization → PTQ: calibrate → quantize → INT8 dense forward
- Time-series signal classification → CNN1D + Conv1D + MaxPool1D
- Small-sample feature extraction → Attention (multi-head self-attention)
- Custom network architecture → inherit `tiny::Layer`, implement `forward()` and optionally `backward()`
## Quick Start

```cpp
#include "tiny_ai.h"

// 1. Define the model
tiny::Sequential model;
model.add(new tiny::Dense(4, 64));                          // input 4 → hidden 64
model.add(new tiny::ActivationLayer(tiny::ActType::ReLU));
model.add(new tiny::Dense(64, 3));                          // hidden 64 → output 3 classes

// 2. Training setup
tiny::SGD optimizer(0.01f, 0.9f);                           // LR = 0.01, momentum = 0.9
tiny::Dataset dataset(inputs, labels, 150);
tiny::Trainer trainer(&model, &optimizer);

// 3. Train
trainer.fit(&dataset, 100, 16);                             // 100 epochs, batch size 16

// 4. Inference
tiny::Tensor output = model.forward(input);
```
## Dependency Chain
TinyAI sits at the top of the middleware stack, reusing tensor primitives from tinymath and spectral/filtering capabilities from tinydsp. From sensor capture to signal processing to model inference — a complete edge pipeline.