Interactive Simulation
Earth-Space Distributed AI
Explore how RotaStellar coordinates federated learning, model partitioning, and synchronization across Earth and orbital infrastructure.
Our gradient compression achieves a 100× reduction in per-round traffic while maintaining convergence by combining Top-K sparsification with stochastic quantization. This is critical for space links, where bandwidth is measured in KB/s, not GB/s.
The key insight is error feedback: compression errors are accumulated locally and added to the next round's gradients, ensuring no information is permanently lost.
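Written out (a standard error-feedback recursion; the notation here is ours, not RotaStellar's), with $\mathcal{C}$ the sparsify-and-quantize operator and $e_0 = 0$:

$$\tilde{g}_t = g_t + e_t, \qquad e_{t+1} = \tilde{g}_t - \mathcal{C}(\tilde{g}_t)$$

Whatever $\mathcal{C}$ drops in round $t$ is re-injected in round $t+1$, so the error never compounds.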
```python
# RotaStellar gradient compression (sketch; see decompress below)
import torch

def compress_gradient(gradient, k_ratio=0.01):
    flat = gradient.flatten()
    # Top-K sparsification: keep only the top 1% of entries by magnitude
    k = max(1, int(flat.numel() * k_ratio))
    _, indices = torch.topk(flat.abs(), k)
    values = flat[indices]
    # 8-bit stochastic quantization: round up with probability equal to
    # the fractional part, so the rounding is unbiased in expectation
    scale = values.abs().max().clamp(min=1e-12) / 127
    scaled = values / scale
    low = scaled.floor()
    quantized = (low + (torch.rand_like(scaled) < scaled - low).float()).to(torch.int8)
    # Error feedback: keep the compression residual for the next round
    error = gradient - decompress(quantized, indices, scale, gradient.shape)
    return quantized, indices, scale, error

def decompress(quantized, indices, scale, shape):
    # Rebuild a dense gradient tensor from the sparse int8 payload
    flat = torch.zeros(shape.numel(), device=quantized.device)
    flat[indices] = quantized.float() * scale
    return flat.view(shape)

# Compression ratio: 32-bit × N params → 8-bit × 0.01N values + indices
# For 70B params: 280 GB → 2.8 GB (indices) + 0.7 GB (values) ≈ 100×
```
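A minimal sketch of how a worker would drive this in the training loop; `gradient_stream` and `transmit` are hypothetical stand-ins for the raw-gradient source and the uplink call, not RotaStellar's API.

```python
# Hypothetical per-worker loop: fold the stored residual into the new
# gradient before compressing, then keep the fresh residual.
error_buf = None
for grad in gradient_stream():          # hypothetical gradient source
    if error_buf is not None:
        grad = grad + error_buf         # re-inject last round's residual
    payload, indices, scale, error_buf = compress_gradient(grad)
    transmit(payload, indices, scale)   # hypothetical uplink call
```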
| Metric | Naive Sync | FedAvg | RotaStellar |
|---|---|---|---|
| Bandwidth per round | 280 GB | 280 GB | 2.8 GB |
| Handles intermittent connectivity | No | Partial | Yes (async) |
| Eclipse resilience | Fails | Stalls | Continues |
| Convergence (rounds to 95% of final accuracy) | N/A | 1,800 | 1,250 |
| Final accuracy | N/A | 94.2% | 94.7% |
| Supports orbital nodes | No | No | Native |
Finding the optimal model partition is a constrained optimization problem. We minimize end-to-end latency subject to memory, bandwidth, and energy constraints; a brute-force version of the search is sketched after the constraint list.

Choosing the split layer s, with layers 0:s on the ground and layers s:L in orbit, we minimize

Latency(s) = Time_ground(0:s) + Activation_size(s) / Bandwidth + RTT + Time_orbital(s:L)

subject to:
- Memory(0:s) ≤ Ground_VRAM
- Memory(s:L) ≤ Orbital_VRAM
- Activation_size(s) / Bandwidth + RTT ≤ Latency_budget
- Compute(s:L) ≤ Orbital_energy_budget
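A minimal brute-force sketch of this search, assuming per-layer profiles are available; every input name (`layers`, `ground_vram`, `bandwidth`, and so on) is an illustrative assumption, not RotaStellar's API.

```python
# Hypothetical brute-force split search. `layers` is a list of dicts with
# per-layer 'mem' (bytes), 'ground_time'/'orbital_time' (seconds),
# 'orbital_energy' (joules), and 'act_bytes' (activation size at that layer).
def best_split(layers, ground_vram, orbital_vram, bandwidth, rtt,
               latency_budget, orbital_energy_budget):
    best_s, best_latency = None, float("inf")
    for s in range(1, len(layers)):     # layers[:s] on ground, layers[s:] in orbit
        ground, orbital = layers[:s], layers[s:]
        link_time = ground[-1]["act_bytes"] / bandwidth + rtt
        if sum(l["mem"] for l in ground) > ground_vram:
            continue                    # Memory(0:s) ≤ Ground_VRAM
        if sum(l["mem"] for l in orbital) > orbital_vram:
            continue                    # Memory(s:L) ≤ Orbital_VRAM
        if link_time > latency_budget:
            continue                    # link transfer fits the latency budget
        if sum(l["orbital_energy"] for l in orbital) > orbital_energy_budget:
            continue                    # Compute(s:L) ≤ Orbital_energy_budget
        latency = (sum(l["ground_time"] for l in ground) + link_time
                   + sum(l["orbital_time"] for l in orbital))
        if latency < best_latency:
            best_s, best_latency = s, latency
    return best_s, best_latency
```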
Built on Open Research
Every capability demonstrated here is grounded in our published research, open datasets, and benchmarks.
Models
gradient-compress, model-partition, sync-scheduler, checkpoint-optimizer, bandwidth-predict
View 15 models →
Datasets
Link Budget Archive, ISL Topology, Space Network Traces, Federated Training Logs
View 13 datasets →
Ready to build Earth-space AI?
Get early access to distributed compute capabilities and start coordinating AI workloads across ground and orbital infrastructure.