Frequently Asked Questions
Quick answers to common questions about Mycelix Protocol and MATL
Byzantine Resistance
Q: How does 45% Byzantine tolerance differ from the classical 33% limit?
A: Classical Byzantine Fault Tolerance (BFT) systems treat all nodes equally. With N nodes, the system fails when more than ⌊N/3⌋ are malicious.
MATL uses reputation-weighted validation: each node's influence is weighted by the square of its reputation score, so what matters is weighted voting power rather than raw node count.
Key insight: even with 60% malicious nodes, if their reputation is low (0.1), their Byzantine power stays small:
- 60 malicious nodes × (0.1)² = 0.6 weighted power
- vs. 40 honest nodes × (0.9)² = 32.4 weighted power
- The system remains safe: 0.6 < 32.4 / 3 = 10.8 ✅
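As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python (an illustration of the quadratic reputation weighting, not MATL's internal code):

```python
# Minimal sketch of reputation-weighted Byzantine power (illustration only;
# mirrors the arithmetic above rather than MATL's internal API).
def weighted_power(node_count: int, reputation: float) -> float:
    """Effective voting power of a group: count x reputation^2."""
    return node_count * reputation ** 2

malicious = weighted_power(60, 0.1)  # 0.6
honest = weighted_power(40, 0.9)     # 32.4

# Classical BFT-style safety condition, applied to *weighted* power:
assert malicious < honest / 3        # 0.6 < 10.8 -> system remains safe
print(f"malicious={malicious}, honest={honest}, threshold={honest / 3:.1f}")
```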
Why it works: New attackers start with low reputation. They must earn trust over multiple rounds before having influence.
Q: What if all nodes start malicious?
A: MATL assumes at least some honest bootstrap nodes exist initially. If ALL nodes are malicious from the start:
- The system would slowly converge as nodes with better gradients gain more trust
- However, this scenario is impractical in real deployments
- Use bootstrap validation with known-good nodes for cold starts
Best practice: Deploy with 3-5 trusted bootstrap nodes that have pre-established reputation.
Q: Can a sleeper agent defeat MATL?
A: Partially, but damage is limited:
Scenario: Node behaves honestly for 10 rounds (builds trust to 0.8), then attacks
MATL's response (illustrated in the sketch below):
1. Round 11: Attack detected by PoGQ, trust drops to 0.65
2. Round 12: Continued malicious behavior, trust drops to 0.4
3. Round 13: Trust below threshold (0.3), node excluded from aggregation
Result: ~3 rounds of partial damage before complete isolation
Detection rate: 87% for sleeper agents (our hardest attack type to catch)
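A rough sketch of this trust-decay timeline (the per-round penalties and the 0.3 exclusion threshold are the example numbers above, not MATL's actual update rule):

```python
# Illustrative trust decay for the sleeper-agent scenario above.
# Penalties and the 0.3 threshold are example values, not MATL's update rule.
TRUST_THRESHOLD = 0.3

trust = 0.8                      # trust earned during 10 honest rounds
penalties = [0.15, 0.25, 0.15]   # PoGQ penalties once the attack begins

for round_num, penalty in enumerate(penalties, start=11):
    trust -= penalty
    excluded = trust < TRUST_THRESHOLD
    print(f"Round {round_num}: trust={trust:.2f}, excluded={excluded}")
    if excluded:
        break                    # node is dropped from aggregation
```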
Integration & Deployment
Q: Does MATL work with PyTorch, TensorFlow, and JAX?
A: Yes! MATL is framework-agnostic:
Key requirement: Convert gradients to NumPy arrays before submission.
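A minimal conversion sketch (the framework-specific tensors and the `matl_client` / `metadata` objects are assumed to be set up as in the integration tutorial):

```python
# Sketch: convert framework-specific gradients to NumPy before submission.
# `matl_client` and `metadata` are assumed to exist as in the tutorial.
import numpy as np

def to_numpy(grad):
    """Best-effort conversion of a PyTorch / TensorFlow / JAX gradient."""
    if hasattr(grad, "detach"):   # PyTorch tensor
        return grad.detach().cpu().numpy()
    if hasattr(grad, "numpy"):    # TensorFlow eager tensor
        return grad.numpy()
    return np.asarray(grad)       # JAX / NumPy arrays pass through

# result = matl_client.submit_gradient(to_numpy(local_gradient), metadata)
```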
Q: What's the performance overhead?
A: MATL adds under 30% overhead to each individual operation plus about 0.7ms of validation latency, which works out to roughly +48% on total round time in the benchmark below:
| Operation | Baseline | With MATL | Overhead |
|---|---|---|---|
| Gradient Submission | 2.3ms | 3.0ms | +30% |
| Validation (PoGQ) | 0ms | 0.7ms | +0.7ms |
| Aggregation | 1.2ms | 1.5ms | +25% |
| Total Round Time | 3.5ms | 5.2ms | +48% |
Network overhead:
- Per-round communication: +12KB (trust scores + proofs)
- Bandwidth increase: ~15% over baseline FL

Optimization tips (see the configuration sketch below):
- Use Mode 1 (PoGQ oracle) for production (lowest overhead)
- Enable gradient compression for bandwidth savings
- Batch multiple updates when possible
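A hedged configuration sketch of those tips: `MATLMode.MODE1` appears elsewhere in this FAQ, while `gradient_compression` and `batch_updates` are illustrative parameter names, not confirmed constructor arguments.

```python
# Overhead-conscious client configuration (sketch).
# MATLMode.MODE1 is the PoGQ-oracle mode used elsewhere in this FAQ;
# gradient_compression and batch_updates are hypothetical knobs shown only
# to illustrate the tips above.
matl_client = MATLClient(
    mode=MATLMode.MODE1,        # PoGQ oracle: lowest validation overhead
    gradient_compression=True,  # hypothetical: trade CPU for ~15% bandwidth
    batch_updates=True,         # hypothetical: amortize per-round costs
)
```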
Q: Can I use MATL in production?
A: Yes! MATL is production-ready:
Production deployments:
- 1000-node testnet validation
- 100+ continuous training rounds
- Real PostgreSQL + Holochain + Ethereum backends
- HIPAA-compliant healthcare deployment

Requirements:
- Python 3.10+
- PostgreSQL 15+ (or Holochain for distributed deployments)
- 2GB RAM per coordinator
- TLS 1.3 for client connections
See: Production Operations Runbook
Q: How do I migrate from FedAvg to MATL?
A: Only two calls change: submit each gradient for validation, then aggregate with reputation weights.
Before (FedAvg):
# Baseline federated averaging
aggregated = sum(gradients) / len(gradients)
model.apply_gradient(aggregated)
After (MATL):
# Submit each local gradient for validation
results = [matl_client.submit_gradient(g, metadata) for g in gradients]
# Use reputation-weighted aggregation
aggregated = matl_client.aggregate(
    gradients=[r["gradient"] for r in results],
    trust_scores=[r["trust_score"] for r in results],
    method="reputation_weighted"  # Instead of simple mean
)
model.apply_gradient(aggregated)
That's it! See MATL Integration Tutorial for complete example.
Healthcare & Privacy
Q: Is MATL HIPAA compliant?
A: Yes, when configured with differential privacy:
matl_client = MATLClient(
mode=MATLMode.MODE2, # TEE-backed validation
# Differential privacy for PHI protection
privacy=DifferentialPrivacy(
epsilon=1.0, # Privacy budget
delta=1e-5, # Failure probability
clip_norm=1.0, # Gradient clipping
),
# HIPAA audit logging
audit_logger=AuditLogger(
backend="postgresql",
retention_years=7, # HIPAA requirement
encrypt=True,
),
)
HIPAA compliance features:
- ✅ Encrypted gradient transmission (TLS 1.3)
- ✅ Differential privacy (ε = 1.0, δ = 1e-5)
- ✅ 7-year audit logs (HIPAA requirement)
- ✅ Access control with role-based permissions
- ✅ PHI never leaves local nodes
Q: How much privacy does differential privacy provide?
A: It depends on epsilon (ε):
| ε Value | Privacy Level | Use Case |
|---|---|---|
| ε < 0.1 | Very strong | Financial records, genomics |
| ε = 1.0 | Strong | Healthcare (HIPAA) ✅ |
| ε = 5.0 | Moderate | General research |
| ε > 10 | Weak | Public datasets |
MATL default: ε = 1.0 (strong privacy, HIPAA-compliant)
Trade-off: lower ε means more privacy but slightly lower model accuracy:
- ε = 1.0: ~2-3% accuracy reduction
- ε = 0.1: ~5-8% accuracy reduction
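For intuition on where that accuracy cost comes from, here is a generic Gaussian-mechanism sketch of how `clip_norm`, ε, and δ translate into gradient noise. It follows the standard DP recipe and is not MATL's internal `DifferentialPrivacy` implementation:

```python
# Generic Gaussian-mechanism sketch (standard DP recipe, for intuition only;
# not MATL's internal DifferentialPrivacy code).
import numpy as np

def privatize(gradient: np.ndarray, epsilon: float = 1.0,
              delta: float = 1e-5, clip_norm: float = 1.0) -> np.ndarray:
    # 1. Clip so the gradient's L2 sensitivity is bounded by clip_norm.
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    # 2. Add Gaussian noise calibrated to (epsilon, delta) for a single release.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=gradient.shape)

# Lower epsilon -> larger sigma -> more noise -> stronger privacy, lower accuracy.
```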
Technical Questions
Q: What backends does MATL support?
A: Four backends with different guarantees:
| Backend | Speed | Immutability | Decentralization | Best For |
|---|---|---|---|---|
| PostgreSQL | ⚡⚡⚡ Fastest | ⚠️ Mutable | ❌ Centralized | Development, private networks |
| Holochain | ⚡⚡ Fast | ✅ Immutable | ✅ Distributed | P2P, agent-centric apps |
| Ethereum | ⚡ Slow | ✅ Immutable | ✅ Public | Public audits, cross-org |
| Cosmos | ⚡⚡ Medium | ✅ Immutable | ✅ App-specific | Custom governance |
Recommendation: Start with PostgreSQL for development, migrate to Holochain for production.
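A backend-selection sketch for reference; the `backend` and `backend_url` parameter names below are assumptions for illustration, so check the client docs for the exact constructor arguments:

```python
# Illustrative backend selection (parameter names are assumptions, not the
# confirmed constructor signature; see the client docs for the exact API).
matl_client = MATLClient(
    mode=MATLMode.MODE1,
    backend="postgresql",                       # or "holochain", "ethereum", "cosmos"
    backend_url="postgresql://localhost/matl",  # hypothetical connection string
)
```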
Q: How does MATL detect cartels?
A: Graph-based clustering analysis:
Cartel definition: a group of malicious nodes that coordinate attacks
Detection method (see the sketch after this answer):
1. Build a gradient-similarity graph: connect nodes with similar gradients
2. Apply community detection (Louvain algorithm)
3. Flag clusters where:
   - >70% of nodes have low trust scores
   - Gradients are suspiciously similar (cosine similarity >0.95)
   - Attack timing is coordinated
Detection rate: 94% for cartel attacks with 5+ members
Counter-strategy: attackers must choose between:
- Coordinating (high similarity) → easy to detect
- Acting independently (low similarity) → lower impact
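A compact sketch of that pipeline using NetworkX; the similarity and 70% thresholds come from the list above, the `low_trust` cutoff is an assumed value, and this is an illustration rather than MATL's actual detector:

```python
# Cartel-detection sketch: gradient-similarity graph + Louvain communities.
# Thresholds follow the description above; illustration only.
import numpy as np
import networkx as nx

def detect_cartels(gradients, trust_scores, sim_threshold=0.95, low_trust=0.5):
    n = len(gradients)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    # 1. Connect nodes whose gradients are suspiciously similar (cosine > 0.95).
    for i in range(n):
        for j in range(i + 1, n):
            cos = np.dot(gradients[i], gradients[j]) / (
                np.linalg.norm(gradients[i]) * np.linalg.norm(gradients[j]) + 1e-12)
            if cos > sim_threshold:
                G.add_edge(i, j)
    # 2. Community detection (Louvain).
    communities = nx.community.louvain_communities(G, seed=0)
    # 3. Flag communities where >70% of members have low trust scores.
    return [c for c in communities
            if len(c) > 1
            and sum(trust_scores[k] < low_trust for k in c) / len(c) > 0.7]
```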
Q: Can MATL work offline?
A: Yes, with asynchronous aggregation:
matl_client = MATLClient(
mode=MATLMode.MODE1,
async_aggregation=True, # Enable offline operation
buffer_size=100, # Buffer up to 100 updates
)
# Works even when nodes are intermittently offline
result = matl_client.submit_gradient(
gradient=gradient,
allow_buffering=True, # Queue if offline
)
When buffered updates sync:
- Automatic retry with exponential backoff
- Trust scores adjusted based on freshness
- Old updates (>24h) automatically discarded
Use case: Edge devices with spotty connectivity (IoT, mobile)
Research & Academic
Q: Where can I read the research paper?
A: Multiple resources:
- PoGQ Whitepaper Outline - High-level overview
- Section 3 Draft - Byzantine tolerance breakthrough
- MATL Technical Whitepaper - Complete technical specification
Submission target: MLSys 2026 or ICML 2026 (January 15, 2026 deadline)
Q: How do I cite Mycelix/MATL?
A: (Preprint - citation will be updated after publication)
@article{mycelix2025matl,
title={MATL: Adaptive Trust Middleware for Byzantine-Resistant Federated Learning},
author={Stoltz, Tristan and [Co-authors]},
journal={arXiv preprint arXiv:2509.XXXXX},
year={2025}
}
Q: What datasets have you tested on?
A: Multiple datasets across domains:
| Dataset | Task | Clients | Rounds | Byzantine % | Accuracy |
|---|---|---|---|---|---|
| MNIST | Digit classification | 20 | 100 | 45% | 97.2% |
| CIFAR-10 | Image classification | 50 | 200 | 40% | 84.1% |
| Diabetic Retinopathy | Medical imaging | 5 | 50 | 20% | 92.8% |
Experimental validation: See Healthcare FL Tutorial
Community & Support
Q: How do I report a bug?
A: Open a GitHub Issue and include:
1. MATL version (matl_client.version)
2. Python version
3. Backend (PostgreSQL/Holochain/Ethereum)
4. Minimal reproduction code
5. Expected vs. actual behavior
Response time: Usually within 24 hours
Q: How can I contribute?
A: Multiple ways to help:
Code contributions:
- See Contributing Guide
- Check Good First Issues
- Submit pull requests

Documentation:
- Fix typos or unclear explanations
- Add examples or tutorials
- Translate to other languages

Research:
- Test on new datasets
- Compare with other defenses
- Publish research using MATL

Community:
- Answer questions in Discussions
- Share your use case
- Star the repo ⭐
Licensing & Commercial Use
Q: Can I use MATL commercially?
A: Yes!
Open source: Apache 2.0 License for the SDK and core libraries
- ✅ Commercial use allowed
- ✅ Modification allowed
- ✅ Distribution allowed
- ⚠️ Must include license and attribution

Commercial licensing: available for:
- Enterprise support contracts
- Custom feature development
- Private modifications
Contact: [email protected]
Q: What's the difference between Mycelix Protocol and MATL?
A: Mycelix Protocol = the complete framework with 4 pillars:
1. Byzantine-Resistant FL (MATL/0TML)
2. Agent-Centric Economy (Holochain)
3. Epistemic Knowledge Graph (3D truth framework)
4. Constitutional Governance (Modular charters)

MATL (Mycelix Adaptive Trust Layer) = just the Byzantine-resistance middleware:
- Pluggable into any FL system
- Can be used standalone
- Part of the larger Mycelix ecosystem
Think of it as: HTTP (MATL) vs The Web (Mycelix Protocol)
Getting Started
Q: What's the fastest way to try MATL?
A: Follow the 5-minute quick start:
1. Install (30 seconds)
2. Copy 2 lines (30 seconds)
3. Run the example (4 minutes)
See: MATL Integration Tutorial
Still Have Questions?
Ask in:
- GitHub Discussions - Public questions
- GitHub Issues - Bug reports
- Email: [email protected] - Private inquiries

Or explore:
- Interactive Playground - Hands-on experiments
- Tutorials - Step-by-step guides
- Architecture Docs - Technical deep dive
Last updated: November 11, 2025