⚡ 5-Minute Quick Start¶
Get MATL running with Byzantine resistance in 5 minutes
🎯 What You'll Build¶
A simple federated learning system with:

- ✅ 45% Byzantine tolerance (vs. the 33% classical limit)
- ✅ Automatic attack detection
- ✅ Reputation-weighted aggregation
- ✅ Real-time trust scores
Time: 5 minutes | Level: Beginner | Lines of code: ~30
📦 Step 1: Install (30 seconds)¶
# Option A: From PyPI (recommended)
pip install zerotrustml
# Option B: From source
git clone https://github.com/Luminous-Dynamics/mycelix
cd mycelix/0TML
pip install -e .
Verify installation:
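For example, a bare import confirms the package is on your path (it raises ModuleNotFoundError otherwise):

python -c "import zerotrustml; print('zerotrustml OK')"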
🚀 Step 2: Run Example (4 minutes)¶
Create quick_start.py:¶
"""5-Minute MATL Quick Start"""
import numpy as np
from zerotrustml import MATLClient, MATLMode
# Step 1: Initialize MATL client (uses in-memory backend for quick start)
matl = MATLClient(
mode=MATLMode.MODE1, # PoGQ validation
backend="memory", # No setup required!
node_id="quickstart"
)
print("β
MATL client initialized")
# Step 2: Simulate 5 clients training
num_clients = 5
byzantine_clients = [3, 4] # Clients 3 and 4 are malicious
for round_num in range(10):
print(f"\nπ Round {round_num + 1}/10")
gradients = []
trust_scores = []
# Each client submits a gradient
for client_id in range(num_clients):
# Generate gradient (random for demo)
if client_id in byzantine_clients:
# Byzantine clients: submit poisoned gradients
gradient = np.random.randn(100) * 10 # 10x magnitude
print(f" β οΈ Client {client_id}: Byzantine attack!")
else:
# Honest clients: submit normal gradients
gradient = np.random.randn(100)
print(f" β
Client {client_id}: Honest gradient")
# Submit gradient to MATL for validation
result = matl.submit_gradient(
gradient=gradient,
metadata={
"client_id": client_id,
"round": round_num,
}
)
gradients.append(result["gradient"])
trust_scores.append(result["trust_score"])
print(f" Trust score: {result['trust_score']:.3f}")
# Aggregate using reputation-weighted mean
aggregated = matl.aggregate(
gradients=gradients,
trust_scores=trust_scores,
method="reputation_weighted"
)
print(f"\n π― Aggregation complete")
print(f" Average trust (honest): {np.mean([trust_scores[i] for i in range(num_clients) if i not in byzantine_clients]):.3f}")
print(f" Average trust (byzantine): {np.mean([trust_scores[i] for i in byzantine_clients]):.3f}")
print("\nβ
Training complete! MATL successfully isolated Byzantine clients.")
print(f"\nπ Final Trust Scores:")
for client_id in range(num_clients):
status = "β οΈ BYZANTINE" if client_id in byzantine_clients else "β
HONEST"
print(f" Client {client_id}: {trust_scores[client_id]:.3f} {status}")
Run it:¶
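python quick_start.py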
Expected Output:¶
✅ MATL client initialized

🔄 Round 1/10
  ✅ Client 0: Honest gradient
    Trust score: 0.500
  ✅ Client 1: Honest gradient
    Trust score: 0.500
  ✅ Client 2: Honest gradient
    Trust score: 0.500
  ⚠️ Client 3: Byzantine attack!
    Trust score: 0.500
  ⚠️ Client 4: Byzantine attack!
    Trust score: 0.500

  🎯 Aggregation complete
  Average trust (honest): 0.500
  Average trust (byzantine): 0.500

... (9 more rounds) ...

🔄 Round 10/10
  ✅ Client 0: Honest gradient
    Trust score: 0.847
  ✅ Client 1: Honest gradient
    Trust score: 0.861
  ✅ Client 2: Honest gradient
    Trust score: 0.839
  ⚠️ Client 3: Byzantine attack!
    Trust score: 0.142
  ⚠️ Client 4: Byzantine attack!
    Trust score: 0.138

  🎯 Aggregation complete
  Average trust (honest): 0.849
  Average trust (byzantine): 0.140

✅ Training complete! MATL successfully isolated Byzantine clients.

📊 Final Trust Scores:
  Client 0: 0.847 ✅ HONEST
  Client 1: 0.861 ✅ HONEST
  Client 2: 0.839 ✅ HONEST
  Client 3: 0.142 ⚠️ BYZANTINE
  Client 4: 0.138 ⚠️ BYZANTINE
🎉 Success! What Just Happened?¶
Round 1: Everyone Equal¶
- All clients start with trust score = 0.5
- MATL hasn't learned who to trust yet
Rounds 2-10: Trust Diverges¶
- Honest clients: Trust increases (0.5 → 0.85)
  - Gradients are consistent with global model improvement
  - PoGQ validation passes
- Byzantine clients: Trust decreases (0.5 → 0.14)
  - Gradients are 10x larger (poisoned)
  - PoGQ validation fails
  - Reputation drops each round
Aggregation: Reputation-Weighted¶
# Instead of a simple mean (FedAvg):
simple_mean = sum(gradients) / len(gradients)

# MATL uses a reputation-weighted mean (note the normalization by total weight):
weights = [trust**2 for trust in trust_scores]
weighted_mean = sum(g * w for g, w in zip(gradients, weights)) / sum(weights)

Result: Byzantine gradients have minimal impact (trust 0.14 → weight 0.14² ≈ 0.02)
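To see this concretely, here is a small self-contained comparison on scalar "gradients"; the values are illustrative, with trust scores chosen to match the demo's final round:

```python
import numpy as np

# Three honest values near 1.0, two poisoned outliers
gradients = np.array([1.0, 1.1, 0.9, 10.0, -12.0])
trust_scores = np.array([0.85, 0.86, 0.84, 0.14, 0.14])

simple_mean = gradients.mean()                          # dragged to 0.2 by attackers
weights = trust_scores**2
weighted_mean = np.average(gradients, weights=weights)  # stays near 0.97

print(f"Simple mean (FedAvg):     {simple_mean:.3f}")
print(f"Reputation-weighted mean: {weighted_mean:.3f}")
```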
🔬 Try It Yourself: Experiments¶
Experiment 1: More Byzantine Nodes¶
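One way to run this experiment is to enlarge the malicious set in quick_start.py:

```python
byzantine_clients = [2, 3, 4]  # 3 of 5 clients (60%) are now malicious
```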
Observation: The system stays safe even with more attackers, as long as their reputation stays low.

Experiment 2: Sleeper Agent¶
# Start honest, then attack at round 5
if client_id == 3 and round_num >= 5:
    gradient = np.random.randn(100) * 10  # Attack!
Experiment 3: Different Attack Magnitudes¶
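For example, vary the multiplier on the poisoned gradients in quick_start.py:

```python
gradient = np.random.randn(100) * 2    # subtle attack: slower to detect
# gradient = np.random.randn(100) * 50  # blatant attack: caught almost immediately
```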
Observation: Stronger attacks are detected faster.

🎯 What's Next?¶
5 Minutes → 30 Minutes¶
Full MATL Integration Tutorial →

- Real MNIST dataset
- PyTorch model training
- Production-ready code
- Persistent PostgreSQL backend
30 Minutes → 45 Minutes¶
Healthcare FL Tutorial →

- HIPAA-compliant medical AI
- Diabetic retinopathy detection
- 5 hospitals collaborating
- Differential privacy
Hands-On Learning¶
Interactive Playground →

- Byzantine tolerance calculator
- Trust score simulator
- Attack type comparison
💡 Core Concepts Explained¶
1. Trust Scores¶
Range: 0.0 (completely untrusted) to 1.0 (fully trusted)
How they change:
if gradient_passes_validation:
    trust_score += learning_rate * (1 - trust_score)  # Move toward 1
else:
    trust_score -= learning_rate * trust_score        # Move toward 0
Default learning rate: 0.15 (adapts quickly but stably)
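A standalone sketch of this update rule (mirroring the formula above, not the library's internal code) shows how quickly trust diverges:

```python
def update_trust(trust, passed, learning_rate=0.15):
    """Apply one trust-score update using the rule above."""
    if passed:
        return trust + learning_rate * (1 - trust)  # move toward 1
    return trust - learning_rate * trust            # move toward 0

honest, byzantine = 0.5, 0.5  # everyone starts at 0.5
for round_num in range(1, 11):
    honest = update_trust(honest, passed=True)         # always passes PoGQ
    byzantine = update_trust(byzantine, passed=False)  # always fails PoGQ
    print(f"Round {round_num:2d}: honest={honest:.3f}  byzantine={byzantine:.3f}")
```

After ten rounds this yields roughly 0.90 vs. 0.10, in line with the demo's final scores.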
2. PoGQ Validation¶
Proof of Gradient Quality (PoGQ) checks whether a gradient:

- Has reasonable magnitude (not 100x normal)
- Improves model accuracy (rather than degrading it)
- Is consistent with honest behavior patterns

Pass rate:

- Honest nodes: >95%
- Byzantine nodes: <20%
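The real checks are internal to MATL, but a minimal magnitude filter might look like the sketch below; the 3x threshold and reference norm are illustrative assumptions, not the actual PoGQ parameters:

```python
import numpy as np

def passes_magnitude_check(gradient, reference_norm, max_ratio=3.0):
    """Illustrative check: reject gradients far larger than a typical honest one.

    MATL's real PoGQ also evaluates accuracy impact and behavioral
    consistency, which require model state and submission history.
    """
    return np.linalg.norm(gradient) <= max_ratio * reference_norm

reference = np.sqrt(100)              # expected L2 norm of an honest 100-dim gradient
honest = np.random.randn(100)         # norm ≈ 10
poisoned = np.random.randn(100) * 10  # norm ≈ 100

print(passes_magnitude_check(honest, reference))    # True
print(passes_magnitude_check(poisoned, reference))  # False
```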
3. Reputation-Weighted Aggregation¶
Formula:
weights = [trust**2 for trust in trust_scores]  # Squaring emphasizes differences
aggregated = np.average(gradients, weights=weights, axis=0)

Why square?

- Amplifies trust differences
- Low-trust nodes have minimal impact
- Example: 0.9² = 0.81 vs. 0.1² = 0.01 (an 81× difference!)
4. Byzantine Power¶
System is safe when:
byzantine_power = sum(trust**2 for trust in byzantine_trust_scores)
honest_power = sum(trust**2 for trust in honest_trust_scores)
safe = byzantine_power < honest_power / 3
This enables 45% tolerance (vs 33% with equal voting)
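Plugging in the final trust scores from the demo's expected output confirms the safety condition holds:

```python
honest_trust = [0.847, 0.861, 0.839]
byzantine_trust = [0.142, 0.138]

byzantine_power = sum(t**2 for t in byzantine_trust)  # ≈ 0.039
honest_power = sum(t**2 for t in honest_trust)        # ≈ 2.16
print(byzantine_power < honest_power / 3)             # True: the system is safe
```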
📋 Quick Reference¶
Minimal MATL Integration¶
# Initialize once
matl = MATLClient(mode=MATLMode.MODE1, backend="memory")
# Inside training loop:
result = matl.submit_gradient(gradient, metadata)
aggregated = matl.aggregate(gradients, trust_scores)
Configuration Options¶
MATLClient(
    mode=MATLMode.MODE1,      # PoGQ validation
    backend="memory",         # or "postgresql", "holochain"
    learning_rate=0.15,       # Trust score update rate
    bootstrap_rounds=3,       # Rounds before trust diverges
    min_trust_threshold=0.3,  # Exclude nodes below this
)
Key Methods¶
# Submit a gradient for validation
result = matl.submit_gradient(
    gradient=...,  # np.ndarray
    metadata=...,  # optional dict: client_id, round, etc.
)

# Aggregate with reputation weighting
aggregated = matl.aggregate(
    gradients=...,     # list of np.ndarray
    trust_scores=...,  # list of float
    method="reputation_weighted",  # or "simple_mean", "median"
)
# Get client trust score
trust = matl.get_trust_score(client_id)
# Get all trust scores
all_trust = matl.get_all_trust_scores()
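For example, you could combine these to drop low-trust clients before aggregating (a sketch that assumes get_all_trust_scores() returns a client_id → trust_score mapping):

```python
MIN_TRUST = 0.3  # matches min_trust_threshold in the configuration above

all_trust = matl.get_all_trust_scores()  # assumed shape: {client_id: trust_score}
trusted = [cid for cid, trust in all_trust.items() if trust >= MIN_TRUST]
print(f"Aggregating over {len(trusted)} trusted clients: {trusted}")
```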
❓ Common Issues¶
Issue: "No module named 'zerotrustml'"¶
Solution: Install from PyPI or source (see Step 1)
Issue: "Backend connection failed"¶
Solution: Use backend="memory" for quick start (no database required)
Issue: "All trust scores remain 0.5"¶
Solution: Run more rounds (trust needs ~5 rounds to diverge) or lower bootstrap_rounds
Issue: "Byzantine nodes not detected"¶
Solution: Make attacks more obvious (e.g., gradient * 10) or check PoGQ threshold
📖 Understanding the Code¶
Line-by-Line Breakdown¶
- MODE1: uses PoGQ (Proof of Gradient Quality) validation
- backend="memory": no database setup required (for demos)
- submit_gradient(): validates gradient quality, updates the trust score, and returns the validated gradient plus its trust score
- aggregate(): weighs gradients by trust² (quadratic weighting), so low-trust nodes have minimal impact, and returns the final aggregated gradient

🚀 Production Checklist¶
Before deploying to production:
- Switch from backend="memory" to backend="postgresql"
- Set up a PostgreSQL database
- Enable TLS for client connections
- Configure differential privacy (if needed)
- Set up monitoring and alerting
- Read the Production Operations Runbook
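A production setup might then look like the sketch below; the connection parameter is a hypothetical placeholder, so check the MATLClient reference for the actual options:

```python
matl = MATLClient(
    mode=MATLMode.MODE1,
    backend="postgresql",  # persistent trust state across restarts
    # Hypothetical connection string; consult the API reference:
    # dsn="postgresql://matl:CHANGE_ME@db.internal:5432/matl",
    learning_rate=0.15,
    min_trust_threshold=0.3,
)
```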
🎯 Next Steps¶
You now understand:

- ✅ How to install and use MATL
- ✅ How trust scores evolve over time
- ✅ How reputation-weighted aggregation works
- ✅ How MATL detects and isolates Byzantine nodes
Continue learning:

1. MATL Integration Tutorial - Real MNIST training
2. Interactive Playground - Hands-on experiments
3. FAQ - Common questions answered
4. Architecture Docs - Technical deep dive
💬 Get Help¶
- Quick questions: Check FAQ
- Bug reports: GitHub Issues
- Discussions: GitHub Discussions
- Email: [email protected]
Congratulations! 🎉 You've successfully run MATL and achieved Byzantine resistance in 5 minutes!
Ready for more? → Full MATL Integration Tutorial