Hassan Al-Rashid: Welcome to Terra Systems. I run infrastructure for the Grid — 47 monitoring sites across six continents, 2 million sensors feeding data every second. Your job is to make our ML models run out there, not in here.
GRID: 47 sites monitored. 2,147,483 sensors active. Average inference latency: 8,234ms. Target: 89ms. Current architecture: unacceptable.
Tariq Hussain: Your model needs to run on this. 512MB RAM, ARM processor, no internet until Thursday. Welcome to the edge.
You have spent your career deploying code to the cloud. But Terra Grid does not have that luxury. When a turbine bearing starts failing at 2 AM on a wind farm in Patagonia, you cannot wait for a round trip to a data center in Virginia. The model must be there, on the device, ready.
```typescript
// Content served from nearest PoP
fetch('https://cdn.example.com/asset.js')

// Model runs on local device
const prediction = model.predict(sensorData)
```

You already understand edge computing. Every time you configure a CDN, you are making the same architectural decision: move the work closer to the user. CDNs cache static assets at edge locations to reduce latency. Edge AI caches models on local devices to reduce inference latency.
```typescript
// Node runtime on the gateway: tfjs-node supports loading from file:// paths
import * as tf from '@tensorflow/tfjs-node';

// Cloud inference: send data, wait for response
async function cloudInference(sensorData: number[]) {
  const response = await fetch('https://api.terra-grid.com/predict', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ readings: sensorData })
  });
  return response.json(); // ~8,234ms round trip
}

// Edge inference: model runs locally on the device
// (in production, load the model once at startup, not per call)
async function edgeInference(sensorData: number[]) {
  const model = await tf.loadLayersModel('file://./model/model.json');
  const input = tf.tensor2d([sensorData]);
  const prediction = model.predict(input) as tf.Tensor;
  return prediction.dataSync(); // ~89ms local
}
```

The cloud version requires connectivity, adds network latency, and fails when the satellite uplink goes down. The edge version runs locally, responds in milliseconds, and works offline. Same model, different deployment target.
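In practice, a site often keeps both paths available: try the local model first and use the network only when local inference fails. A minimal sketch of that dispatcher, with the inference functions injected as parameters (the names and shapes here are illustrative, not part of Terra Grid's API):

```typescript
// Edge-first dispatch: prefer local inference, fall back to the cloud
// path only if the local model throws (not loaded, out of memory, etc.).
type Infer = (readings: number[]) => Promise<number[]>;

async function edgeFirst(
  readings: number[],
  edge: Infer,
  cloud: Infer
): Promise<number[]> {
  try {
    return await edge(readings);
  } catch {
    // Local inference failed; use the network path as a last resort.
    return cloud(readings);
  }
}
```

Injecting the functions keeps the pattern independent of any particular runtime, so the same dispatcher works whether the local path is TensorFlow.js, ONNX Runtime, or something else.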
Terra Grid monitors industrial infrastructure: wind farms, solar arrays, pipeline networks, water treatment plants. Each site has gateway devices — small ARM-based computers connected to hundreds of sensors. Your models run on those gateways.
The constraints are real: 512MB RAM, quad-core ARM processors, intermittent connectivity, and power budgets measured in watts. Every byte matters.
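A quick back-of-envelope check shows why. Weights dominate a model's memory footprint, so its size is roughly parameter count times bytes per weight (the 5M-parameter figure below is an illustrative assumption, not a real Terra Grid model):

```typescript
// Approximate in-memory size of a model's weights, in megabytes.
function modelMemoryMB(paramCount: number, bytesPerWeight: number): number {
  return (paramCount * bytesPerWeight) / (1024 * 1024);
}

const params = 5_000_000;              // hypothetical 5M-parameter model
console.log(modelMemoryMB(params, 4)); // float32: ~19 MB
console.log(modelMemoryMB(params, 1)); // int8-quantized: ~4.8 MB
```

Quantizing from float32 to int8 cuts the weight footprint by 4x, which matters when the model shares 512MB with the OS, the runtime, and the sensor data buffers.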
Classify scenarios as edge or cloud computing.
Write a function classifyCompute(latencyMs, needsOffline) that returns 'edge' if latency is under 100ms OR offline is required, otherwise returns 'cloud'.
```typescript
function classifyCompute(latencyMs, needsOffline) {
  // Return 'edge' or 'cloud' based on requirements
  return null; // your code here
}
```
GRID assigns you to Site 12 — a wind farm in Patagonia with no reliable uplink.
Next: understanding why the edge matters