Disclaimer: Signal Ward is an educational simulation. All clinical scenarios are fictional. Nothing in this course constitutes medical advice.
Dr. Karimi: Everyone, sit down. Last night the vendor NLP system flagged a penicillin allergy note as "no known allergies." The patient received amoxicillin. We caught it at the pharmacy — barely. I'm canceling the vendor contract today.

Imran: The note said "pt has documented PCN allergy — anaphylaxis 2019." The system couldn't parse the abbreviation.

Dr. Karimi: We're building our own. Khalil, you built the patient portal frontend. I need you on the ML interface. Vikram will handle the NLP internals. We're calling it Diagnostic One.
A vendor system failed because it relied on rigid keyword matching. Machine learning takes a fundamentally different approach: instead of hand-coded rules, it learns patterns from data. As a frontend developer, you already think in systems. Now you'll apply that thinking to text intelligence.
The vendor system had rules like if (note.includes('no known allergies')). But clinical text is messy, abbreviated, and context-dependent. ML models learn from thousands of examples, discovering patterns no rule author could anticipate.
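To see why keyword rules break on clinical text, here is a sketch in the style of the failed vendor check. The exact rule is an illustrative assumption, not the vendor's actual code; the note text is quoted from the scene above.

```javascript
// A rule-based check in the style of the failed vendor system.
// Illustrative only -- not the vendor's actual code.
function hasNoKnownAllergies(note) {
  return note.toLowerCase().includes('no known allergies');
}

const note = 'pt has documented PCN allergy — anaphylaxis 2019';

console.log(hasNoKnownAllergies(note)); // false: the phrase never appears
console.log(note.includes('penicillin')); // false: the "PCN" abbreviation defeats the keyword

// The rule sees neither phrase, so a downstream system that treats
// "no match" as "no known allergies" silently drops the allergy.
```

A human reads "PCN allergy — anaphylaxis" and knows exactly what it means; the keyword rule has no way to generalize beyond the literal strings it was given.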
const app = { ui: 'React', state: 'Redux', api: 'REST' };
const pipeline = { input: 'text', model: 'classifier', output: 'prediction' };

Just as a web app has layers — UI, state management, API — an ML pipeline has layers too. Data comes in, gets transformed, flows through a model, and produces a prediction. You already understand this architecture. The components are different, but the pattern is the same.
import * as tf from '@tensorflow/tfjs';
// In a web app, data flows through layers:
// User Input → Validation → State → UI Update
// In an ML pipeline, data flows the same way:
// Raw Text → Preprocessing → Model → Prediction
// Step 1: Represent text features as numbers
// Each position = a feature (e.g., contains "allergy", contains "penicillin", etc.)
const noteFeatures = tf.tensor([1, 1, 0, 1, 0]);
// Step 2: A model applies learned weights to these features
const weights = tf.tensor([0.8, 0.9, 0.1, 0.7, 0.2]);
// Step 3: Produce a prediction (a dot product of features and weights)
const score = noteFeatures.mul(weights).sum();
console.log('Allergy risk score:', score.dataSync()[0]);
// Output: ~2.4 — high risk, flag for review

Think of Diagnostic One the way you'd think about a React application. Components receive props, process them, and render output. An ML model receives features, processes them through learned weights, and outputs predictions.
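The weighted sum above, and the threshold step that turns a score into a decision, can also be written in plain JavaScript with no library. This sketch reuses the feature and weight values from the example; the 1.5 threshold is an illustrative assumption, not a tuned value — a real model learns its decision boundary from data.

```javascript
// Plain-JavaScript version of the weighted sum (dot product) above.
const noteFeatures = [1, 1, 0, 1, 0]; // 1 = feature present, 0 = absent
const weights = [0.8, 0.9, 0.1, 0.7, 0.2]; // learned importance of each feature

// Multiply each feature by its weight and sum the results.
const score = noteFeatures.reduce(
  (sum, feature, i) => sum + feature * weights[i],
  0
);

// A real model applies a learned decision boundary here;
// a fixed threshold stands in for that step (assumed value).
const THRESHOLD = 1.5;
const label = score >= THRESHOLD ? 'flag-for-review' : 'no-alert';

console.log('Allergy risk score:', score); // ~2.4
console.log('Decision:', label); // flag-for-review
```

Training is the process of finding weight values like these automatically; here they are hand-written only to make the arithmetic visible.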
// Your React mental model:
// <Component props={data} /> → rendered UI
// The ML equivalent:
// model.predict(features) → classification
// Diagnostic One will have three core capabilities:
const diagnosticOne = {
classify: 'Categorize note type (progress, discharge, allergy)',
extract: 'Pull structured data from unstructured text',
alert: 'Flag dangerous drug interactions or missing info',
};

Create a simple feature vector representing a clinical note's characteristics.
Create a feature tensor representing a clinical note. The note mentions 'allergy' (index 0), 'penicillin' (index 1), and 'anaphylaxis' (index 3). Set those positions to 1 and the rest to 0. The tensor should have 5 elements.
import * as tf from '@tensorflow/tfjs';

// Create a 1D tensor with 5 elements
// Positions 0, 1, 3 should be 1 (features present)
// Positions 2, 4 should be 0 (features absent)
const noteFeatures = null; // your code here
The team commits to building Diagnostic One. Khalil begins sketching the system architecture.
Next: turning clinical text into numbers a model can understand.