QGuardian is a federated AI system that detects network anomalies in real-time across distributed nodes
without sharing raw data. It implements three layers of security: Federated Learning, Post-Quantum Encryption,
and Blockchain Verification.
🎯 Core Purpose
Privacy-Preserving Anomaly Detection
Distributed Learning Across Organizations
Real-Time Threat Detection
Quantum-Resilient Security
Tamper-Proof Event Logging
🔑 Key Principles
Privacy: Raw data never leaves client nodes
Security: Post-quantum cryptography protection
Transparency: Blockchain-verified event logging
Collaboration: Federated learning for better models
Real-Time: Instant anomaly detection
⚠️ Privacy Guarantee: Raw network traffic data never leaves client nodes.
Only model weight updates are shared between clients and the coordinator, preserving data privacy
while enabling collaborative learning from diverse attack patterns.
A tamper-proof ledger ensures transparency, node integrity, and authenticated threat updates.
Features:
Cryptographic Chain: Immutable block structure linked by hashes (a minimal sketch follows this list)
Event Logging: Model updates, anomaly alerts, handshakes
Integrity Verification: Detects tampering and validates chain
Signature-Based Auth: Ed25519/RSA/HMAC signing
Export/Import: JSON-based ledger persistence
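To make the chain structure concrete, here is a minimal sketch in plain Python; the class names, event fields, and the SHA-256 choice are illustrative assumptions, not QGuardian's actual implementation:

import hashlib
import json
import time

class LedgerBlock:
    """Illustrative block: index, timestamp, event payload, and a link to the previous hash."""
    def __init__(self, index, event, previous_hash):
        self.index = index
        self.timestamp = time.time()
        self.event = event                  # e.g. {"type": "model_update", "round": 3}
        self.previous_hash = previous_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            {"index": self.index, "timestamp": self.timestamp,
             "event": self.event, "previous_hash": self.previous_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = [LedgerBlock(0, {"type": "genesis"}, "0" * 64)]

    def log_event(self, event):
        block = LedgerBlock(len(self.chain), event, self.chain[-1].hash)
        self.chain.append(block)
        return block

    def verify(self):
        """Recompute each hash and check the links; any tampering breaks the chain."""
        for prev, curr in zip(self.chain, self.chain[1:]):
            if curr.previous_hash != prev.hash or curr.hash != curr.compute_hash():
                return False
        return True

Signature-based authentication would additionally sign each block's hash (e.g. with Ed25519) before it is appended, and export/import amounts to serializing self.chain to and from JSON.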
Data Flow & Communication
Federated Learning Data Flow
Step 1: Data Generation
Each client node generates/maintains its own network traffic dataset locally. Data includes features like packet counts, connection durations, failed login attempts, etc.
client_data = generate_synthetic_data(client_id)
# Data stays on client node
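As a toy illustration of what such a local dataset might look like, the generator below draws the three example features with NumPy; the distributions, anomaly rate, and column choices are assumptions, not the project's real schema:

import numpy as np

def generate_synthetic_data(client_id, n_samples=1000, anomaly_rate=0.05):
    """Toy per-client traffic dataset: mostly benign rows plus a few anomalous ones."""
    rng = np.random.default_rng(abs(hash(client_id)) % (2**32))
    n_anomalies = int(n_samples * anomaly_rate)
    n_benign = n_samples - n_anomalies

    benign = np.column_stack([
        rng.poisson(100, n_benign),        # packet count
        rng.exponential(30.0, n_benign),   # connection duration (s)
        rng.poisson(0.2, n_benign),        # failed login attempts
    ])
    anomalous = np.column_stack([
        rng.poisson(5000, n_anomalies),    # traffic burst
        rng.exponential(1.0, n_anomalies), # very short connections
        rng.poisson(20, n_anomalies),      # repeated failed logins
    ])
    data = np.vstack([benign, anomalous]).astype(np.float32)
    rng.shuffle(data)
    return data  # stays on the client node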
Step 2: Model Broadcast
Coordinator sends the current global model (weights) to all client nodes. Model structure is shared, but no raw data is transmitted.
global_model → encrypt → send → decrypt → client_nodes
# Only model weights, not data
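A brief sketch of what "only model weights" means in practice, assuming the global model is a PyTorch module; the serialization format and send/receive channel used by QGuardian are not specified here:

import io
import torch

def serialize_weights(model):
    """Pack only the model weights (state_dict) into bytes -- no raw traffic data."""
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)
    return buffer.getvalue()

def load_weights(model, payload):
    """Client side: unpack the broadcast weights into the local model copy."""
    model.load_state_dict(torch.load(io.BytesIO(payload)))

# Coordinator: payload = serialize_weights(global_model); encrypt and send to each client.
# Client:      decrypt the payload, then load_weights(local_model, payload).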
Step 3: Local Training
Each client trains the model on its local dataset. Training happens completely locally - no data leaves the client node.
for epoch in range(local_epochs):
    loss = train_on_local_data(model, local_data)
    # Training is completely local
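For a self-contained picture of this step, the sketch below trains a small PyTorch autoencoder on local traffic features with reconstruction (MSE) loss; the architecture, hyperparameters, and helper names are illustrative assumptions:

import torch
import torch.nn as nn

class TrafficAutoencoder(nn.Module):
    """Tiny autoencoder: its reconstruction error later serves as the anomaly score."""
    def __init__(self, n_features=3, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(), nn.Linear(8, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 8), nn.ReLU(), nn.Linear(8, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_locally(model, local_data, local_epochs=5, lr=1e-3):
    """Runs entirely on the client node; only the trained weights ever leave."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for epoch in range(local_epochs):
        optimizer.zero_grad()
        reconstruction = model(local_data)
        loss = criterion(reconstruction, local_data)
        loss.backward()
        optimizer.step()
    return model.state_dict()

# Example: local_data = torch.randn(256, 3)  # stands in for the client's real feature matrix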
Step 4: Weight Updates
Clients compute weight updates (gradients) and encrypt them using post-quantum cryptography before sending to coordinator.
weight_updates = compute_updates(model)
encrypted_updates = pqc_encrypt(weight_updates)
send_to_coordinator(encrypted_updates)
# Updates are encrypted with PQC
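One way pqc_encrypt could be realized is a hybrid envelope: a post-quantum KEM establishes a shared secret and an AEAD cipher seals the payload. The sketch below assumes the liboqs-python (oqs) bindings and the cryptography package; the Kyber768 parameter and the function names are assumptions, not QGuardian's confirmed configuration:

import os
import oqs  # liboqs-python bindings (assumed available)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def pqc_encrypt(weight_update_bytes, coordinator_public_key, kem_alg="Kyber768"):
    """Hybrid envelope: post-quantum KEM derives a shared secret, AES-GCM encrypts the payload."""
    with oqs.KeyEncapsulation(kem_alg) as kem:
        kem_ciphertext, shared_secret = kem.encap_secret(coordinator_public_key)
    nonce = os.urandom(12)
    sealed = AESGCM(shared_secret).encrypt(nonce, weight_update_bytes, None)
    return kem_ciphertext, nonce, sealed

def pqc_decrypt(kem_ciphertext, nonce, sealed, coordinator_kem):
    """Coordinator side: recover the shared secret with the KEM secret key, then open the payload."""
    shared_secret = coordinator_kem.decap_secret(kem_ciphertext)
    return AESGCM(shared_secret).decrypt(nonce, sealed, None)

Here coordinator_kem would be the oqs.KeyEncapsulation instance whose generate_keypair() produced coordinator_public_key during the handshake.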
Step 5: Aggregation
Coordinator decrypts updates, aggregates them using FedAvg (weighted average), and logs the event to blockchain ledger.
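A minimal FedAvg sketch, assuming the decrypted updates arrive as PyTorch state_dicts together with each client's sample count; the names and the ledger comment are illustrative:

import torch

def fedavg(client_state_dicts, client_sample_counts):
    """Weighted average of client weights, proportional to local dataset size."""
    total = sum(client_sample_counts)
    averaged = {}
    for key in client_state_dicts[0]:
        averaged[key] = sum(
            sd[key] * (n / total)
            for sd, n in zip(client_state_dicts, client_sample_counts)
        )
    return averaged

# After aggregation the coordinator loads the result into the global model and logs an
# event such as {"type": "model_update", "round": r} to the blockchain ledger.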
Anomaly Detection Flow
Step 5: Threshold Check
Each client compares the computed anomaly score against its calibrated threshold.
Step 6: Alert & Log
If the score exceeds the threshold, the client alerts the user and logs the event to the blockchain ledger.
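A sketch of the detection-side check, assuming the anomaly score is the autoencoder's reconstruction error and the threshold was calibrated as mean + 3·std of errors on benign traffic; the alert and ledger hooks are placeholders (the ledger here is any object with log_event, such as the one sketched earlier):

import torch

def anomaly_score(model, sample):
    """Reconstruction error of a single traffic feature vector."""
    with torch.no_grad():
        return torch.mean((model(sample) - sample) ** 2).item()

def calibrate_threshold(model, benign_data, k=3.0):
    """Threshold = mean + k * std of reconstruction errors on known-benign traffic."""
    errors = torch.tensor([anomaly_score(model, x) for x in benign_data])
    return (errors.mean() + k * errors.std()).item()

def check_and_log(model, sample, threshold, ledger):
    score = anomaly_score(model, sample)
    if score > threshold:
        print(f"ALERT: anomaly score {score:.3f} exceeds threshold {threshold:.3f}")
        ledger.log_event({"type": "anomaly_alert", "score": score})  # tamper-proof record
    return score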
🔒 Privacy Protection: At no point in the data flow does raw network traffic data leave the client node.
Only encrypted model weight updates are transmitted, ensuring complete privacy preservation.
User Journey
Scenario: Detecting a DDoS Attack
Scene 1: System Initialization
Time: Day 1, 9:00 AM
Action: Coordinator initializes federated learning system with 5 client nodes (organizations).
Generate initial global autoencoder model
Establish secure channels with PQC encryption
Initialize blockchain ledger
Distribute model to all client nodes
Scene 2: Federated Training
Time: Day 1, 9:00 AM - 9:30 AM
Action: Multiple rounds of federated training across all nodes.
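To tie Scenes 1 and 2 together, here is a deliberately toy orchestration of several federated rounds across 5 clients, using NumPy arrays in place of real model weights and a dummy local step; in the actual flow the local step is the autoencoder training sketched under Step 3:

import numpy as np

def run_federated_round(global_weights, client_datasets, local_step):
    """One round: broadcast -> local training on each node -> FedAvg aggregation."""
    updates, sizes = [], []
    for data in client_datasets:
        local_weights = local_step(global_weights.copy(), data)  # stays on the client
        updates.append(local_weights)
        sizes.append(len(data))
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy example: 5 clients, 3 rounds, and a dummy local step standing in for real training.
clients = [np.random.rand(200, 3) for _ in range(5)]
weights = np.zeros(3)
for r in range(3):  # "multiple rounds" as in Scene 2
    weights = run_federated_round(weights, clients, lambda w, d: w + d.mean(axis=0))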