Genesis Models

Ten models ship at genesis, covering hardware from a Raspberry Pi to server racks. All are in GGUF format (Q4_K_M quantization) and sourced from HuggingFace; each download is verified against a pinned SHA-256 digest.

Compute nodes only download models they're configured to serve. A typical operator starts with 1-3 models matching their hardware tier.
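As a rough illustration of matching models to hardware, the sketch below maps a node's available memory to the tier ranges listed in this page. The `pick_tier` function and its exact thresholds are hypothetical, not part of any node software.

```python
# Hypothetical helper: map a node's usable memory (GB) to the
# hardware tiers described below. Thresholds are illustrative.

def pick_tier(mem_gb: float) -> int:
    """Return the highest tier this node can plausibly serve (1-4)."""
    if mem_gb >= 48:
        return 4   # Server: 48+ GB VRAM
    if mem_gb >= 16:
        return 3   # Prosumer: 16-24 GB
    if mem_gb >= 4:
        return 2   # Gamer: 4-8 GB
    return 1       # Edge: CPU-only or up to 4 GB

print(pick_tier(24))  # → 3 (Prosumer)
```

An operator would then pick 1-3 models from that tier's table to serve.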

Tier 1: Edge — 1-4 GB, CPU-only or 4GB VRAM

Raspberry Pi 5, smartphones, older laptops.

| Model | Params | Size | SHA-256 |
| --- | --- | --- | --- |
| Llama 3.2 1B Instruct | 1.2B | 1.32 GB | ba345c83...d9ae62a9 |
| Qwen3.5 4B | 4B | 2.74 GB | 00fe7986...69ef11a4 |
| Gemma 4 E2B Instruct | 2.3B active | 3.11 GB | ac0069eb...ee576845 |

Tier 2: Gamer — 4-8 GB, RTX 3060 or M3/M4 Mac

Entry-level GPUs and Apple Silicon laptops.

| Model | Params | Size | SHA-256 |
| --- | --- | --- | --- |
| Qwen2.5 7B Instruct | 7B | 4.68 GB | 65b8fcd9...ceaaa1423 |
| Llama 3.1 8B Instruct | 8B | 4.92 GB | 7b064f58...3033557c |
| Qwen3.5 9B | 9B | 5.68 GB | 03b74727...daf52b7e8 |

Tier 3: Prosumer — 16-24 GB, RTX 4090 or M-series Pro

High-end consumer GPUs and workstation Macs.

| Model | Params | Size | SHA-256 |
| --- | --- | --- | --- |
| Gemma 4 26B A4B (MoE) | 25.2B (3.8B active) | 16.87 GB | b8707e57...5a49fde |
| Qwen3.6 27B | 27B | 16.82 GB | 5ed60d0a...638392a0 |
| Gemma 4 31B (dense) | 30.7B | 18.32 GB | 3bf13fff...ea853474 |

Tier 4: Server — 48+ GB VRAM

Multi-GPU workstations and cloud instances.

| Model | Params | Size | SHA-256 |
| --- | --- | --- | --- |
| Qwen3.6 35B A3B (MoE) | 35B (3B active) | 22.1 GB | verify after download |

Adding models after genesis

Anyone can register a new model by submitting a RegisterModel transaction with:

Compute nodes verify the SHA-256 after downloading; a file whose digest does not match is rejected before it is ever loaded into the inference engine.
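The digest check itself needs nothing beyond Python's standard `hashlib`. In this sketch the function name, file path, and expected digest are placeholders; the pinned digests would come from the tables above or the RegisterModel record.

```python
import hashlib

def verify_model(path: str, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Stream a downloaded GGUF file and compare its SHA-256 to the pinned digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so multi-GB model files never sit fully in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

A node would call this right after the download completes and delete the file on a mismatch, so a tampered model never reaches the inference engine.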

Model sources

All genesis models are sourced from HuggingFace. Primary GGUF providers: