
⚑ 5-Minute Quick Start

Get MATL running with Byzantine resistance in 5 minutes


🎯 What You'll Build

A simple federated learning system with:

  • ✅ 45% Byzantine tolerance (vs 33% classical limit)
  • ✅ Automatic attack detection
  • ✅ Reputation-weighted aggregation
  • ✅ Real-time trust scores

Time: 5 minutes | Level: Beginner | Lines of code: ~30


📦 Step 1: Install (30 seconds)

# Option A: From PyPI (recommended)
pip install zerotrustml

# Option B: From source
git clone https://github.com/Luminous-Dynamics/mycelix
cd mycelix/0TML
pip install -e .

Verify installation:

python -c "from zerotrustml import MATLClient; print('✅ MATL installed!')"


🚀 Step 2: Run Example (4 minutes)

Create quick_start.py:

"""5-Minute MATL Quick Start"""
import numpy as np
from zerotrustml import MATLClient, MATLMode

# Step 1: Initialize MATL client (uses in-memory backend for quick start)
matl = MATLClient(
    mode=MATLMode.MODE1,  # PoGQ validation
    backend="memory",     # No setup required!
    node_id="quickstart"
)

print("✅ MATL client initialized")

# Step 2: Simulate 5 clients training
num_clients = 5
byzantine_clients = [3, 4]  # Clients 3 and 4 are malicious

for round_num in range(10):
    print(f"\n📊 Round {round_num + 1}/10")

    gradients = []
    trust_scores = []

    # Each client submits a gradient
    for client_id in range(num_clients):
        # Generate gradient (random for demo)
        if client_id in byzantine_clients:
            # Byzantine clients: submit poisoned gradients
            gradient = np.random.randn(100) * 10  # 10x magnitude
            print(f"  ⚠️  Client {client_id}: Byzantine attack!")
        else:
            # Honest clients: submit normal gradients
            gradient = np.random.randn(100)
            print(f"  ✅ Client {client_id}: Honest gradient")

        # Submit gradient to MATL for validation
        result = matl.submit_gradient(
            gradient=gradient,
            metadata={
                "client_id": client_id,
                "round": round_num,
            }
        )

        gradients.append(result["gradient"])
        trust_scores.append(result["trust_score"])

        print(f"      Trust score: {result['trust_score']:.3f}")

    # Aggregate using reputation-weighted mean
    aggregated = matl.aggregate(
        gradients=gradients,
        trust_scores=trust_scores,
        method="reputation_weighted"
    )

    print(f"\n  🎯 Aggregation complete")
    print(f"     Average trust (honest): {np.mean([trust_scores[i] for i in range(num_clients) if i not in byzantine_clients]):.3f}")
    print(f"     Average trust (byzantine): {np.mean([trust_scores[i] for i in byzantine_clients]):.3f}")

print("\n✅ Training complete! MATL successfully isolated Byzantine clients.")
print("\n📈 Final Trust Scores:")
for client_id in range(num_clients):
    status = "⚠️  BYZANTINE" if client_id in byzantine_clients else "✅ HONEST"
    print(f"   Client {client_id}: {trust_scores[client_id]:.3f} {status}")

Run it:

python quick_start.py

Expected Output:

✅ MATL client initialized

📊 Round 1/10
  ✅ Client 0: Honest gradient
      Trust score: 0.500
  ✅ Client 1: Honest gradient
      Trust score: 0.500
  ✅ Client 2: Honest gradient
      Trust score: 0.500
  ⚠️  Client 3: Byzantine attack!
      Trust score: 0.500
  ⚠️  Client 4: Byzantine attack!
      Trust score: 0.500

  🎯 Aggregation complete
     Average trust (honest): 0.500
     Average trust (byzantine): 0.500

... (9 more rounds) ...

📊 Round 10/10
  ✅ Client 0: Honest gradient
      Trust score: 0.847
  ✅ Client 1: Honest gradient
      Trust score: 0.861
  ✅ Client 2: Honest gradient
      Trust score: 0.839
  ⚠️  Client 3: Byzantine attack!
      Trust score: 0.142
  ⚠️  Client 4: Byzantine attack!
      Trust score: 0.138

✅ Training complete! MATL successfully isolated Byzantine clients.

📈 Final Trust Scores:
   Client 0: 0.847 ✅ HONEST
   Client 1: 0.861 ✅ HONEST
   Client 2: 0.839 ✅ HONEST
   Client 3: 0.142 ⚠️  BYZANTINE
   Client 4: 0.138 ⚠️  BYZANTINE

🎉 Success! What Just Happened?

Round 1: Everyone Equal

  • All clients start with trust score = 0.5
  • MATL hasn't learned who to trust yet

Rounds 2-10: Trust Diverges

  • Honest clients: Trust increases (0.5 → 0.85)
      • Gradients are consistent with global model improvement
      • PoGQ validation passes
  • Byzantine clients: Trust decreases (0.5 → 0.14)
      • Gradients are 10x larger (poisoned)
      • PoGQ validation fails
      • Reputation drops each round

Aggregation: Reputation-Weighted

# Instead of a simple mean (FedAvg):
simple_mean = sum(gradients) / len(gradients)

# MATL uses reputation weighting, normalized by the total weight:
weights = [trust**2 for trust in trust_scores]
weighted_mean = sum(g * w for g, w in zip(gradients, weights)) / sum(weights)

Result: Byzantine gradients have minimal impact (0.14² ≈ 0.02 weight)
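The effect is easy to see numerically. Here is a minimal NumPy sketch with made-up gradients and trust scores in the spirit of the demo above (not the library's internals):

```python
import numpy as np

# Three honest clients and one byzantine client (10x magnitude), mirroring the demo
gradients = [np.ones(4), np.ones(4), np.ones(4), np.full(4, 10.0)]
trust_scores = [0.85, 0.86, 0.84, 0.14]

# FedAvg treats everyone equally: the poisoned gradient drags the mean up to 3.25
simple_mean = np.mean(gradients, axis=0)

# Reputation weighting: quadratic weights, normalized inside np.average
weights = np.array([t**2 for t in trust_scores])
weighted_mean = np.average(gradients, axis=0, weights=weights)

print(simple_mean[0])    # 3.25
print(weighted_mean[0])  # ~1.08: the byzantine weight 0.14^2 ~ 0.02 barely counts
```

The poisoned gradient triples the FedAvg result, but under quadratic reputation weighting it shifts the aggregate by only a few percent.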


🔬 Try It Yourself: Experiments

Experiment 1: More Byzantine Nodes

byzantine_clients = [2, 3, 4]  # 60% Byzantine!
Observation: System still safe if they have low reputation

Experiment 2: Sleeper Agent

# Start honest, then attack at round 5
if client_id == 3 and round_num >= 5:
    gradient = np.random.randn(100) * 10  # Attack!
Observation: Trust drops rapidly after attack starts

Experiment 3: Different Attack Magnitudes

gradient = np.random.randn(100) * 50  # Even stronger attack
Observation: Stronger attacks detected faster


🎯 What's Next?

5 Minutes → 30 Minutes

Full MATL Integration Tutorial →

  • Real MNIST dataset
  • PyTorch model training
  • Production-ready code
  • Persistent PostgreSQL backend

30 Minutes → 45 Minutes

Healthcare FL Tutorial →

  • HIPAA-compliant medical AI
  • Diabetic retinopathy detection
  • 5 hospitals collaborating
  • Differential privacy

Hands-On Learning

Interactive Playground →

  • Byzantine tolerance calculator
  • Trust score simulator
  • Attack type comparison


💡 Core Concepts Explained

1. Trust Scores

Range: 0.0 (completely untrusted) to 1.0 (fully trusted)

How they change:

if gradient_passes_validation:
    trust_score += learning_rate * (1 - trust_score)  # Move toward 1
else:
    trust_score -= learning_rate * trust_score         # Move toward 0

Default learning rate: 0.15 (adapts quickly but stably)
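To see these dynamics concretely, the update rule above can be simulated in plain Python, independent of the MATL client (the exact final values depend on the learning rate and round count):

```python
def update_trust(trust, passed, learning_rate=0.15):
    """One application of the update rule: move toward 1 on pass, toward 0 on fail."""
    if passed:
        return trust + learning_rate * (1 - trust)
    return trust - learning_rate * trust

honest, byzantine = 0.5, 0.5  # everyone starts at the neutral score
for _ in range(10):
    honest = update_trust(honest, passed=True)         # keeps passing PoGQ
    byzantine = update_trust(byzantine, passed=False)  # keeps failing PoGQ

print(f"honest:    {honest:.3f}")     # climbs toward 1.0
print(f"byzantine: {byzantine:.3f}")  # decays toward 0.0
```

After 10 rounds the two scores have separated by roughly an order of magnitude, which matches the divergence seen in the demo output.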

2. PoGQ Validation

Proof of Quality Gradient checks whether a gradient:

  • Has reasonable magnitude (not 100x normal)
  • Improves model accuracy (does not degrade it)
  • Is consistent with honest behavior patterns

Pass rate:

  • Honest nodes: >95%
  • Byzantine nodes: <20%
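The actual PoGQ checks are internal to MATL, but the magnitude test can be sketched as a simple norm-ratio filter. This is illustrative only; `magnitude_check`, `max_ratio`, and the choice of reference norm are assumptions, not the library's real thresholds:

```python
import numpy as np

def magnitude_check(gradient, reference_norm, max_ratio=5.0):
    """Reject gradients whose L2 norm is far above a reference (e.g. a typical honest norm)."""
    return float(np.linalg.norm(gradient)) <= max_ratio * reference_norm

rng = np.random.default_rng(0)
honest = rng.standard_normal(100)
poisoned = rng.standard_normal(100) * 10  # the 10x attack from the demo

reference = float(np.linalg.norm(honest))
print(magnitude_check(honest, reference))    # True: within bounds
print(magnitude_check(poisoned, reference))  # False: ~10x the reference norm
```

A crude filter like this already catches the quick-start attack; the accuracy and consistency checks listed above are what catch subtler poisoning.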

3. Reputation-Weighted Aggregation

Formula:

weights = [trust**2 for trust in trust_scores]  # Square emphasizes differences
aggregated = np.average(gradients, weights=weights, axis=0)

Why square?

  • Amplifies trust differences
  • Low-trust nodes have minimal impact
  • Example: 0.9² = 0.81 vs 0.1² = 0.01 (81× difference!)

4. Byzantine Power

System is safe when:

byzantine_power = sum(trust**2 for trust in byzantine_trust_scores)
honest_power = sum(trust**2 for trust in honest_trust_scores)

safe = byzantine_power < honest_power / 3

This enables 45% tolerance (vs 33% with equal voting)
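Plugging the final trust scores from the demo run into the safety condition shows how much headroom reputation weighting buys:

```python
# Final trust scores from the demo run above
honest_trust = [0.847, 0.861, 0.839]
byzantine_trust = [0.142, 0.138]

byzantine_power = sum(t**2 for t in byzantine_trust)  # ~0.04
honest_power = sum(t**2 for t in honest_trust)        # ~2.16

print(byzantine_power < honest_power / 3)  # True: well inside the safety margin
```

Even though 2 of 5 clients (40%) are Byzantine by headcount, their quadratic voting power is under 2% of the honest power once their reputation has collapsed.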


📊 Quick Reference

Minimal MATL Integration

# Initialize once
matl = MATLClient(mode=MATLMode.MODE1, backend="memory")

# Inside training loop:
result = matl.submit_gradient(gradient, metadata)
aggregated = matl.aggregate(gradients, trust_scores)

Configuration Options

MATLClient(
    mode=MATLMode.MODE1,      # PoGQ validation
    backend="memory",          # or "postgresql", "holochain"
    learning_rate=0.15,        # Trust score update rate
    bootstrap_rounds=3,        # Rounds before trust diverges
    min_trust_threshold=0.3,   # Exclude nodes below this
)

Key Methods

# Submit gradient for validation
result = matl.submit_gradient(
    gradient=np.ndarray,
    metadata=dict,           # Optional: client_id, round, etc.
)

# Aggregate with reputation weighting
aggregated = matl.aggregate(
    gradients=List[np.ndarray],
    trust_scores=List[float],
    method="reputation_weighted",  # or "simple_mean", "median"
)

# Get client trust score
trust = matl.get_trust_score(client_id)

# Get all trust scores
all_trust = matl.get_all_trust_scores()

❓ Common Issues

Issue: "No module named 'zerotrustml'"

Solution: Install from PyPI or source (see Step 1)

Issue: "Backend connection failed"

Solution: Use backend="memory" for quick start (no database required)

Issue: "All trust scores remain 0.5"

Solution: Increase bootstrap_rounds or run more rounds (need ~5 rounds to diverge)

Issue: "Byzantine nodes not detected"

Solution: Make attacks more obvious (e.g., gradient * 10) or check PoGQ threshold


🎓 Understanding the Code

Line-by-Line Breakdown

# Create MATL client
matl = MATLClient(mode=MATLMode.MODE1, backend="memory")
  • MODE1: Uses PoGQ (Proof of Quality Gradient) validation
  • backend="memory": No database setup required (for demos)

# Submit gradient
result = matl.submit_gradient(gradient, metadata)
  • Validates gradient quality
  • Updates trust score
  • Returns validated gradient + trust score

# Aggregate
aggregated = matl.aggregate(gradients, trust_scores, method="reputation_weighted")
  • Weighs gradients by trust² (quadratic weighting)
  • Low-trust nodes have minimal impact
  • Returns final aggregated gradient


🚀 Production Checklist

Before deploying to production:

  • Switch from backend="memory" to backend="postgresql"
  • Set up PostgreSQL database
  • Enable TLS for client connections
  • Configure differential privacy (if needed)
  • Set up monitoring and alerting
  • Read Production Operations Runbook

🎯 Next Steps

You now understand:

  • ✅ How to install and use MATL
  • ✅ How trust scores evolve over time
  • ✅ How reputation-weighted aggregation works
  • ✅ How MATL detects and isolates Byzantine nodes

Continue learning:

  1. MATL Integration Tutorial - Real MNIST training
  2. Interactive Playground - Hands-on experiments
  3. FAQ - Common questions answered
  4. Architecture Docs - Technical deep dive


💬 Get Help


Congratulations! 🎉 You've successfully run MATL and achieved Byzantine resistance in 5 minutes!

Ready for more? → Full MATL Integration Tutorial