Tariq Hussain: Site 31 just went dark. Dust storm knocked out the satellite uplink. If the anomaly detector was running in the cloud, we'd be blind for the next six hours. But it's running on the gateway, so we're still getting alerts locally.
Priya Sharma: Last quarter we pushed 4.2 petabytes of raw sensor data to the cloud for inference. The bandwidth costs alone were $340,000. If the models run on-site, we send only the alerts — kilobytes instead of petabytes.
You have built offline-first PWAs. You know the pattern: cache critical resources locally so the application works without a network connection. Edge AI applies the same philosophy to machine learning inference.
// Service worker caches for offline use
self.addEventListener('fetch', handleOffline)

// Model cached locally, runs without network
const pred = localModel.predict(input)

A vibration sensor on a gas turbine samples at 10 kHz. By the time you send that data to a cloud endpoint and get a prediction back, the bearing has already failed. Edge inference gives you sub-100ms response times.
Remote sites have unreliable networks. Satellite uplinks fail. Cell towers go down. An edge model keeps working regardless. Just like a service worker serves cached content when fetch() fails, an edge model serves predictions when the network is unavailable.
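The fallback pattern can be sketched in a few lines. This is an illustrative sketch, not Terra Grid's actual code: `cloudPredict` and the `Model` interface are hypothetical stand-ins for whatever cloud client and edge runtime a site actually uses.

```typescript
// Hypothetical shape: any edge runtime exposing predict() fits here.
interface Model {
  predict(input: number[]): number;
}

// Try the cloud endpoint first; fall back to the local model on any
// network failure -- the same strategy a service worker uses when
// fetch() fails and it serves from the cache instead.
async function predictWithFallback(
  cloudPredict: (input: number[]) => Promise<number>,
  localModel: Model,
  input: number[]
): Promise<{ prediction: number; source: "cloud" | "edge" }> {
  try {
    const prediction = await cloudPredict(input);
    return { prediction, source: "cloud" };
  } catch {
    // Uplink is down: serve the locally cached model's prediction.
    return { prediction: localModel.predict(input), source: "edge" };
  }
}
```

The caller never has to care which path answered; the `source` field is only there so a dashboard can show whether a site is currently operating offline.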
Streaming raw sensor data to the cloud is expensive and raises data sovereignty concerns. Edge models process data locally and transmit only results.
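The size of that reduction is easy to quantify. The sample rates and alert sizes below are illustrative assumptions, not Terra Grid's measured numbers:

```typescript
// Bytes transmitted per day when streaming raw samples to the cloud.
function rawBytesPerDay(sampleRateHz: number, bytesPerSample: number): number {
  return sampleRateHz * bytesPerSample * 86_400; // 86,400 seconds per day
}

// Bytes transmitted per day when sending only alert messages.
function alertBytesPerDay(alertsPerDay: number, bytesPerAlert: number): number {
  return alertsPerDay * bytesPerAlert;
}

// One 10 kHz vibration sensor with 4-byte samples: 3.456 GB/day raw.
const raw = rawBytesPerDay(10_000, 4);
// Assume 20 alerts/day at 512 bytes each: ~10 KB/day.
const alerts = alertBytesPerDay(20, 512);
console.log(`Reduction: ~${Math.round(raw / alerts)}x`); // ~337,500x
```

That is a single sensor; multiply by thousands of sensors across dozens of sites and the raw-streaming approach stops being a cost problem and becomes a physical impossibility over a 2 Mbps satellite uplink.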
// Calculate round-trip latency for cloud inference
function calculateCloudLatency(
dataSizeKB: number,
uploadMbps: number,
serverInferenceMs: number,
responseSizeKB: number,
downloadMbps: number
): number {
const uploadMs = (dataSizeKB / (uploadMbps * 125)) * 1000;
const downloadMs = (responseSizeKB / (downloadMbps * 125)) * 1000;
return uploadMs + serverInferenceMs + downloadMs;
}
// Site 31: satellite uplink, 2 Mbps up, 5 Mbps down
const cloudLatency = calculateCloudLatency(256, 2, 50, 1, 5);
console.log(`Cloud: ${cloudLatency.toFixed(0)}ms`); // ~1,076ms
// Edge: no network overhead
const edgeLatency = 89; // local inference only
console.log(`Edge: ${edgeLatency}ms`); // 89ms

Calculate and compare cloud vs edge latency for a Terra Grid site.
Write a function cloudLatency(dataSizeKB, uploadMbps, serverMs) that calculates total round-trip time in ms. Formula: upload time = (dataSizeKB / (uploadMbps * 125)) * 1000, then add serverMs. Compare it to a fixed edge latency of 89ms.
function cloudLatency(dataSizeKB, uploadMbps, serverMs) {
  // Calculate upload time in ms, add server inference time
  return null; // your code here
}

function isFasterOnEdge(dataSizeKB, uploadMbps, serverMs) {
  const edgeMs = 89;
  // Return true if edge is faster than cloud
  return null; // your code here
}
You understand why Terra Grid cannot rely on the cloud. Every site must be self-sufficient.
Next: understanding the hardware you're deploying to