# Node Operators
arknet has three node roles: Validator (consensus, seed, relay), Compute (inference), and Verifier (output verification). One binary, one install, pick your role.
## Node types
| Role | What it does | Earns | Minimum hardware |
|---|---|---|---|
| Validator | Runs Tendermint BFT consensus, produces blocks, acts as circuit relay and seed node for the mesh | Block rewards | 2 vCPU, 2 GB RAM |
| Compute | Runs AI models via llama.cpp, serves inference, announces loaded models via gossip | 80% of job emission | Depends on model size |
| Verifier | Re-executes inference jobs deterministically and validates compute output. Catches cheaters. | 7% of job emission | Same as compute |
## Install

```sh
curl -fsSL https://arknet.arkengel.com/install.sh | sh
```

Or build from source:

```sh
git clone https://github.com/st-hannibal/arknet.git
cd arknet
cargo build --release
cp target/release/arknet /usr/local/bin/
```
## Validator setup
Validators run consensus, relay P2P traffic for NAT-ed peers, and serve as seed nodes for the mesh.
### Step 1: Initialize

```sh
arknet init
```
This creates `~/.arknet/` with your node identity (Ed25519 keypair) and default config.
### Step 2: Configure

Edit `~/.arknet/node.toml`:

```toml
[roles]
validator = true

[network]
network = "mainnet"
bootstrap_peers = []
```
Bootstrap peers: For the first validator on a new network, leave bootstrap_peers empty. For subsequent validators, add the multiaddr of at least one existing validator.
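For a subsequent validator, the config might point at an existing one. A sketch (the IP below is a documentation placeholder and the peer ID is illustrative; substitute a real validator's multiaddr):

```toml
[network]
network = "mainnet"
bootstrap_peers = [
  # placeholder: replace with a real validator's IP and peer ID
  "/ip4/203.0.113.10/tcp/26656/p2p/12D3KooWExamplePeerId"
]
```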
### Step 3: Copy genesis

Download the genesis configuration:

```sh
curl -fsSL https://raw.githubusercontent.com/st-hannibal/arknet/main/genesis/mainnet/genesis.toml \
  -o ~/.arknet/genesis.toml
```
### Step 4: Start

```sh
arknet start --role validator
```

Your peer ID is logged at startup:

```
[INFO] peer_id=12D3KooW...
```
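For long-running validators you will likely want the process supervised. A minimal systemd unit sketch, not part of arknet itself (the unit name, user, and binary path are assumptions; adjust to your install):

```ini
# /etc/systemd/system/arknet.service (assumed location)
[Unit]
Description=arknet validator
After=network-online.target
Wants=network-online.target

[Service]
User=arknet
ExecStart=/usr/local/bin/arknet start --role validator
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now arknet`.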
### Step 5: Open ports
Open port 26656 TCP+UDP in your firewall. This is the P2P port — all node discovery, gossip, relay, and consensus traffic flows through it.
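With ufw, for example, opening the P2P port looks like this (ufw itself is an assumption about your setup; iptables or a cloud security group is equivalent):

```sh
# Allow arknet P2P traffic on both transports
sudo ufw allow 26656/tcp
sudo ufw allow 26656/udp
```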
## Compute setup
Compute nodes run AI models and serve inference requests from SDK clients through the mesh.
### Step 1: Initialize

```sh
arknet init
```
### Step 2: Configure

Edit `~/.arknet/node.toml`:

```toml
[roles]
compute = true

[network]
bootstrap_peers = ["/ip4/<validator-ip>/tcp/26656/p2p/<validator-peer-id>"]
```
Replace `<validator-ip>` and `<validator-peer-id>` with an actual validator's address. You can find public seeds in `seeds.json`.
### Step 3: Start

```sh
arknet start --role compute
```
### Step 4: Load a model

Use the local admin RPC to load a model:

```sh
curl -X POST http://127.0.0.1:26657/v1/models/load \
  -H "Content-Type: application/json" \
  -d '{
    "model_ref": "Qwen/Qwen3-0.6B-Q8_0",
    "url": "https://huggingface.co/Qwen/Qwen3-0.6B-GGUF/resolve/main/Qwen3-0.6B-Q8_0.gguf",
    "sha256": "9465e63a22add5354d9bb4b99e90117043c7124007664907259bd16d043bb031",
    "size_bytes": 639446688
  }'
```
The node downloads the GGUF file, verifies the SHA-256 digest, and loads it into memory. Once loaded, the model is auto-announced via gossip heartbeat every 60 seconds on the arknet/pool/offer/1 topic. SDK clients in the mesh will discover it automatically.
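The digest check the node performs is the same one you can run by hand on a manually downloaded GGUF. A small sketch using `sha256sum` (the helper name is ours, not an arknet command):

```shell
# verify_digest FILE EXPECTED_SHA256: exit 0 if the file matches the digest
verify_digest() {
  # sha256sum -c expects "HASH  FILE" (two spaces) on stdin
  echo "$2  $1" | sha256sum -c --quiet -
}
```

For example, `verify_digest Qwen3-0.6B-Q8_0.gguf 9465e63a...` before copying a pre-downloaded file into place.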
### Step 5: Open ports
Open port 26656 TCP+UDP in your firewall.
Model list: See Genesis Models for all available models with SHA-256 digests and download URLs.
## Verifier setup
Verifiers are the anti-cheating layer. They re-run a sample of inference jobs deterministically and compare the output against what the compute node reported. If a compute node returned garbage, the verifier flags it and the compute node gets slashed.
### How verification works
Three tiers of verification, from light to heavy:
| Tier | Method | When used |
|---|---|---|
| Optimistic | Trust the output, slash if disputed | Default — fast, low overhead |
| Deterministic | Re-run with same seed, compare token-by-token | Random 5% sample of jobs |
| TEE | Hardware attestation proves code ran correctly | When compute node has TEE |
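Conceptually, the deterministic tier reduces to re-running the job with the same seed and comparing the two outputs byte for byte. A rough shell sketch (the helper names are illustrative, not arknet commands):

```shell
# compare_outputs REPORTED REEXECUTED: exit 0 if the streams are identical
compare_outputs() {
  cmp -s "$1" "$2"
}

# Per-sampled-job decision a verifier applies: matching output passes,
# any divergence triggers a dispute transaction
check_job() {
  if compare_outputs "$1" "$2"; then
    echo "ok"
  else
    echo "dispute"
  fi
}
```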
### Step 1: Initialize

```sh
arknet init
```
### Step 2: Configure

Edit `~/.arknet/node.toml`:

```toml
[roles]
verifier = true

[network]
bootstrap_peers = ["/ip4/<validator-ip>/tcp/26656/p2p/<validator-peer-id>"]
```
### Step 3: Start

```sh
arknet start --role verifier
```
The verifier subscribes to receipt gossip, selects jobs to verify, re-executes them, and submits dispute transactions if output doesn't match.
### Step 4: Load the same models

A verifier needs the same models loaded as the compute nodes it checks. Load models the same way as a compute node — use `/v1/models/load` on the local admin RPC.
At genesis: The validator node handles verification. As the network grows, independent verifier nodes provide stronger security guarantees since they have no incentive to collude with compute nodes. Verifiers earn 7% of job emission for each receipt they validate.
### Hardware requirements
A verifier needs the same hardware as a compute node — it re-runs the same models. Match the compute tier table below for the models you want to verify.
## Ports
| Port | Default | Public? | Purpose |
|---|---|---|---|
| P2P | 26656 TCP+UDP | Yes — must be open | Node discovery, gossip, relay, consensus, inference. Noise-encrypted. |
| RPC | 26657 | No — localhost only | Operator admin. Load models, check status, submit transactions. |
| Metrics | 9090 | No — localhost only | Prometheus scrape endpoint for monitoring. |
Only port 26656 needs to be publicly accessible. The RPC and metrics ports bind to localhost by default and should stay that way unless you have a specific reason to expose them.
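To operate the RPC from another machine without exposing it, tunnel over SSH instead (host and user below are placeholders):

```sh
# Forward local port 26657 to the node's loopback-bound admin RPC
ssh -N -L 26657:127.0.0.1:26657 operator@your-node-host
```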
## Configuration reference

Full `~/.arknet/node.toml` reference:

```toml
[roles]
validator = false   # run consensus, act as relay/seed
compute = false     # run inference, announce models
verifier = false    # verify compute output, earn 7%

[network]
network = "mainnet"               # chain ID
p2p_listen = "0.0.0.0:26656"      # P2P bind address
rpc_listen = "127.0.0.1:26657"    # admin RPC bind address
metrics_listen = "127.0.0.1:9090" # Prometheus metrics
bootstrap_peers = []              # multiaddrs of known peers
```
## Monitoring

Check node status via the local admin RPC:

```sh
# Node health
curl http://localhost:26657/health

# Chain status (height, peers, consensus)
curl http://localhost:26657/v1/status

# List loaded models
curl http://localhost:26657/v1/models

# Connected peers
curl http://localhost:26657/peers
```
### Prometheus metrics
Scrape http://localhost:9090/metrics with Prometheus. Key metrics:
- `arknet_block_height` — current chain height
- `arknet_connected_peers` — number of connected peers
- `arknet_inference_requests_total` — total inference requests served
- `arknet_inference_latency_seconds` — histogram of inference latency
- `arknet_model_loaded` — currently loaded models (label: `model_ref`)
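A minimal Prometheus scrape job for a node on the same host might look like this (the job name and interval are our choices, not mandated by arknet):

```yaml
scrape_configs:
  - job_name: arknet
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9090"]
```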
## Hardware requirements
| Role | Minimum | Notes |
|---|---|---|
| Validator | 2 vCPU, 2 GB RAM | No GPU needed. t3.small or equivalent. |
| Compute (small models) | 2 vCPU, 4 GB RAM | For sub-1B parameter models. t3.medium or equivalent. |
| Compute (medium models) | 4 vCPU, 8 GB RAM, GPU w/ 8 GB VRAM | For 3B-8B parameter models. |
| Compute (large models) | 8 vCPU, 16 GB RAM, GPU w/ 16+ GB VRAM | For 13B+ parameter models. |
Match the model tier to your hardware. See Genesis Models for model sizes.
## Earning ARK
Verified inference jobs mint ARK from the emission schedule. The reward is split:
- 80% compute — the node that ran the inference
- 7% verifier — the node that verified the output
- 5% treasury — governance-controlled fund
- 3% burn — permanently destroyed (deflationary)
- 5% delegators — pro-rata by staked amount
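The five shares sum to 100%. As a sanity check, here is the split for an arbitrary emission amount (the amount is illustrative; integer division truncates, which real chain accounting would have to handle explicitly):

```shell
emission=1000000000  # illustrative job emission, in base units

compute=$(( emission * 80 / 100 ))     # node that ran the inference
verifier=$(( emission * 7 / 100 ))     # node that verified the output
treasury=$(( emission * 5 / 100 ))     # governance-controlled fund
burn=$(( emission * 3 / 100 ))         # permanently destroyed
delegators=$(( emission * 5 / 100 ))   # pro-rata by staked amount

echo "compute=$compute verifier=$verifier treasury=$treasury burn=$burn delegators=$delegators"
```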
Bootstrap period (first 6 months or until 100 validators): No stake required, no cost to users. Run your node and earn ARK from block emission for serving free inference. This is how the token supply bootstraps from zero. See Tokenomics for details.
## Staking

```sh
# Stake for a role
arknet wallet stake --role compute --amount 50000000000000

# Begin unstaking (starts 14-day unbonding)
arknet wallet unstake --role compute --amount 50000000000000

# Finalize unbonding after 14 days
arknet wallet complete-unbond --role compute --unbond-id 1

# Move stake to a different node (1-day cooldown)
arknet wallet redelegate --role compute --to-node 0x... --amount 50000000000000
```
## Wallet management

```sh
# Create or show wallet
arknet wallet create
arknet wallet address
arknet wallet balance

# Send ARK
arknet wallet send --to 0x... --amount 1000000000
```
## Becoming a seeder
A seed node is a validator with a stable public IP or DNS name that helps new nodes bootstrap into the mesh. See the Seeder Guide for how to get your node listed in seeds.json.
## TEE (confidential inference)
If your server has Intel TDX or AMD SEV-SNP hardware, you can offer confidential inference. Prompts are encrypted to your enclave and even your host OS cannot read them. TEE-verified jobs earn 1.5x emission rewards.
```sh
# Generate enclave keypair
arknet tee keygen

# Register TEE capability on-chain
arknet tee register --platform intel-tdx --quote-file attestation.bin

# Enable in node.toml:
# [tee]
# enabled = true
# platform = "intel-tdx"
```
## Troubleshooting

- No peers connecting — verify port 26656 is open (TCP and UDP). Check `curl http://localhost:26657/peers`.
- Model not loading — check disk space. GGUF files can be large. Verify the SHA-256 matches.
- Node not earning — ensure your model is loaded and announced. Check that `curl http://localhost:26657/v1/models` returns your model.
- Role changes — roles are set at boot time. Edit `node.toml` and restart the node. There is no hot role reload.