NEON: Before neural networks, there were rules. If the average brightness exceeds 128, the camera is active. If there is a red spot above intensity 200, a recording LED is lit. If the frame changes by less than 5% between captures, the feed is static. Rules like these get you to 62% accuracy. Machine learning gets you to 95%.
Noor: So a threshold classifier is basically an if/else chain on pixel statistics? That's just... programming. Where does the ML start?
Every ML journey starts with a baseline. Before training a neural network, build the simplest rule-based solution — if/else on pixel values. This gives you two things: a baseline accuracy to beat and a deep understanding of why hand-written rules are insufficient for real-world vision tasks.
if (brightness > 128) return 'day'; else return 'night';            // rule-based
if (meanPixel > threshold) return 'active'; else return 'inactive'; // same pattern

import * as tf from '@tensorflow/tfjs';
// The simplest possible "model" — if/else on pixel values
function classifyBrightness(image: tf.Tensor3D): string {
  // tf.tidy disposes the intermediate mean tensor once we've read its value
  const mean = tf.tidy(() => image.mean().dataSync()[0]);
  return mean > 128 ? 'daytime' : 'nighttime';
}
// Slightly better: use channel statistics
function classifyCameraStatus(image: tf.Tensor3D): string {
  // tf.tidy disposes every intermediate tensor created inside the callback,
  // so no manual dispose() bookkeeping is needed
  return tf.tidy(() => {
    const [r, g, b] = tf.split(image, 3, 2); // split along the channel axis
    const rMean = r.mean().dataSync()[0];
    const gMean = g.mean().dataSync()[0];
    const bMean = b.mean().dataSync()[0];
    const brightness = (rMean + gMean + bMean) / 3;

    // Rule chain
    if (brightness < 30) return 'offline';          // Very dark = camera off
    if (rMean > 200 && gMean < 50) return 'alert';  // Red LED = recording
    if (brightness > 200) return 'overexposed';     // Washed out = malfunction
    return 'active';                                // Normal operation
  });
}
// Test it
const testImage = tf.randomUniform([224, 224, 3], 0, 255, 'int32') as tf.Tensor3D;
console.log('Status:', classifyCameraStatus(testImage));
testImage.dispose();

// How good is our rule-based classifier?
interface LabeledImage {
image: tf.Tensor3D;
label: string;
}
function measureAccuracy(
classifier: (img: tf.Tensor3D) => string,
testSet: LabeledImage[]
): number {
let correct = 0;
for (const sample of testSet) {
const prediction = classifier(sample.image);
if (prediction === sample.label) correct++;
}
return correct / testSet.length;
}
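To see how this metric exposes a rule's blind spots, here is a toy run of the same accuracy loop. It is a minimal sketch with made-up numbers: each "frame" is reduced to its mean brightness and paired with a human label, so you can check the result by hand.

```typescript
// Toy stand-in for a frame: just its mean brightness plus a human label.
// All names and values here are illustrative, not real Meridian data.
type ToySample = { meanBrightness: number; label: string };

const toySet: ToySample[] = [
  { meanBrightness: 220, label: 'daytime' },
  { meanBrightness: 40,  label: 'nighttime' },
  { meanBrightness: 90,  label: 'daytime' },  // indoor scene: dim, but daytime
  { meanBrightness: 100, label: 'daytime' },  // tinted lens: dim, but daytime
];

// The same threshold rule as classifyBrightness above
const ruleClassifier = (s: ToySample): string =>
  s.meanBrightness > 128 ? 'daytime' : 'nighttime';

// The same accuracy loop as measureAccuracy above
let correct = 0;
for (const s of toySet) {
  if (ruleClassifier(s) === s.label) correct++;
}
console.log(correct / toySet.length); // 2 of 4 right: every dim-but-daytime frame fails
```

The two failures are exactly the cases the threshold cannot express: frames that are dim by pixel statistics but clearly daytime to a human.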
// Results on Meridian surveillance feeds:
// Threshold classifier: 62% accuracy
// - Gets daytime/nighttime right most of the time
// - Fails on indoor scenes (always 'nighttime')
// - Fails on tinted camera lenses
// - Cannot distinguish camera types at all
//
// The problem: rules don't generalize.
// A camera behind tinted glass looks 'nighttime' to pixel stats
// but is clearly 'active' to a human eye.
//
// This is why we need ML:
// Instead of writing rules, show the model examples
// and let it learn its own features.
console.log('Rule-based accuracy: 62%');
console.log('Target accuracy: 95%');
console.log('Gap: 33 points. Closing it is what neural networks are for.');

Build a threshold-based classifier and measure its accuracy.
Write a threshold-based image classifier that categorizes images based on mean brightness. Return 'dark' for mean < 64, 'dim' for mean < 128, 'bright' for mean < 192, and 'overexposed' for mean >= 192.
function classifyBrightness(pixels: number[]): string {
  // 1. Compute the mean of all pixel values
  // 2. Classify based on thresholds:
  //    mean < 64   → 'dark'
  //    mean < 128  → 'dim'
  //    mean < 192  → 'bright'
  //    mean >= 192 → 'overexposed'
  return ''; // your code here
}

function measureAccuracy(
  classifier: (pixels: number[]) => string,
  testData: { pixels: number[]; label: string }[]
): number {
  // Return the fraction of correct predictions
  return 0; // your code here
}
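If you get stuck, here is one way the two functions could come together. This is a sketch, not the canonical solution (the `Solved` suffix is added here to avoid clashing with the starter names, and empty input is not handled):

```typescript
// One possible solution sketch for the exercise above.
function classifyBrightnessSolved(pixels: number[]): string {
  const mean = pixels.reduce((sum, p) => sum + p, 0) / pixels.length;
  if (mean < 64) return 'dark';
  if (mean < 128) return 'dim';
  if (mean < 192) return 'bright';
  return 'overexposed';
}

function measureAccuracySolved(
  classifier: (pixels: number[]) => string,
  testData: { pixels: number[]; label: string }[]
): number {
  let correct = 0;
  for (const sample of testData) {
    if (classifier(sample.pixels) === sample.label) correct++;
  }
  return correct / testData.length;
}

console.log(classifyBrightnessSolved([50, 60, 70])); // mean 60 → 'dark'
```

Note the threshold order matters: because each `if` falls through to the next, checking the smallest threshold first means each branch implicitly covers a half-open range.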
62% accuracy. Not good enough to detect Meridian's drones. Noor needs neural networks: models that learn their own rules from data instead of relying on hand-written thresholds.
Module 2: building a proper image dataset for training