HTML Refresher and Detailed Study - Part 6
Contents
- WebSockets Basic Exposure
- MDN docs for WebSockets is too complex
- WS/WSS are protocols but use a hack to piggyback over HTTP protocol for initial handshake/setup
- After HTTP piggyback handshake, WebSocket connection is somewhat similar to BSD TCP socket connection
- WebSocket communication is called real-time by Web devs
- Technical precision wise, WebSockets are 'soft real-time' and not 'hard real-time'
- 'real-time' terminology drift between early 2000s and now
- Plain JS (soft) real-time data visualization app using Web Sockets, Canvas API and JSON data
- Performance aspects of Canvas API draw functions including specific case of Windows 11
- Short overview of WebGL
- Implementation of 'high-performance live dashboard' WebSockets, Canvas and JSON plain JS app
- Server-Sent Events vs. WebSockets for Live Telemetry Dashboard app
- Using Two-way communication over WebSockets for Live Telemetry Dashboard app
- Global/Broadcast architecture over WebSockets when serving thousands of concurrent clients
- Common practice to stringify JSON data before sending on WebSockets using Text Frames; Massive scale may need Binary Frames
Gemini chat: 8-HTML Refresher n Study
Dates: Around 3 May 2026 to 5 May 2026
WebSockets Basic Exposure
Me: WebSockets
I think I need to get some basic exposure to websockets. HTML JS on frontend and node.js on backend.
I saw this video tutorial: How to use WebSockets - JavaScript Tutorial For Beginners (~14 mins, by dcode, May 2020). It is quite easy to understand. Ur views on the tutorial please?
Gem: Your assessment is spot on—it is an excellent, no-nonsense primer. When you are exploring modern web APIs, getting bogged down in framework-specific implementations right away can obscure how the underlying technology actually works. This video avoids that trap perfectly by keeping the focus on the raw mechanics.
Here are a few reasons why this tutorial is a great starting point for your exploration:
- Vanilla implementation: By sticking strictly to the ws package in Node [00:01:45] and the native WebSocket object in the browser [00:05:27], the tutorial keeps the focus entirely on the API itself, free from any React or Next.js state management quirks.
- Protocol visibility: Taking the time to open the browser's Network tab to inspect the 101 Switching Protocols handshake [00:08:06] is a fantastic debugging habit. It visually proves that a standard HTTP request is being "upgraded" to a persistent, bidirectional WebSocket connection.
- Event-driven clarity: The code cleanly maps out the necessary event listeners—such as on('connection') [00:03:06], on('message'), and on('close')—showing exactly how the server receives a message, transforms it, and pushes it back to the client [00:11:29].
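The event wiring described above can be sketched in a few lines. This is a minimal sketch, not the tutorial's exact code: it assumes the ws npm package, uses the v8-era WebSocketServer export name, and pulls the message logic out into a plain function so it is visible on its own.

```javascript
// Minimal echo-server sketch in the spirit of the tutorial.
// Assumes `npm install ws`; port 8082 matches the tutorial's example.

// Pure message transform, kept separate from the socket wiring.
function transform(data) {
  // Since ws v8, `data` arrives as a Buffer, so coerce to string first.
  return data.toString().toUpperCase();
}

function startEchoServer(port = 8082) {
  const { WebSocketServer } = require('ws'); // npm install ws
  const wss = new WebSocketServer({ port });
  wss.on('connection', (ws) => {
    ws.on('message', (data) => ws.send(transform(data)));
    ws.on('close', () => console.log('client disconnected'));
  });
  return wss;
}
```

The point of splitting transform() out is that the interesting part of a WebSocket server is rarely the wiring; it is what you do with each message.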
What to keep in mind as you build on this: While this handles the core concepts beautifully, integrating this into a modern, component-driven stack introduces a few architectural considerations:
- Component Lifecycles: If you bring this into a React or Astro frontend, you will need to manage the connection lifecycle carefully (often inside a useEffect hook) to ensure you don't accidentally open duplicate connections every time a component re-renders.
- Serverless Constraints: The tutorial relies on a long-running Node process. If you eventually deploy applications using serverless functions, native WebSockets do not work out-of-the-box because those functions are ephemeral. In those scenarios, you would typically spin up a dedicated server for the sockets or rely on managed services.
- Security: As the creator briefly notes, while ws:// is fine for local testing, you will need to swap to wss:// (WebSocket Secure) in production to ensure the data stream is encrypted [00:05:55].
It is a fantastic foundation. Getting comfortable with these raw event listeners makes it much easier to debug the more complex real-time abstractions you might encounter later.
...
--- end chat extracts ---
On 4 to 5 May 2026, I tried out the sample programs of the above video. Barring one small change, the rest of the code worked as expected.
The small change had to be made in the server-side file index.js, as given below (the comment is broken into smaller lines to prevent a horizontal scroll issue in Blogger):
// In ws v7 and earlier (e.g., in the 2020 tutorial),
// 'data' was automatically converted to a string.
// Since ws v8, the 'data' parameter is a Buffer,
// so toString() is required before using string methods.
ws.send(data.toString().toUpperCase());
--- restarting chat extracts ---
MDN docs for WebSockets is too complex
Me: Then I had a look at MDN - https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API The client side part was OK but the server side part was just way too complex. What's ur view?
Gem: You are completely right to feel that way. When looking at MDN's guide to writing a WebSocket server, it drops you straight into the deep end of bit manipulation, masking keys, frame decoding, and byte-level HTTP upgrades.
Here is why MDN feels so complex, and why you actually don't need to worry about most of it:
1. MDN is Explaining "How to Build a Server from Scratch"
The MDN documentation you looked at is largely aimed at someone who wants to build a WebSocket library from raw TCP sockets.
When you establish a WebSocket connection, you don't just send text over a pipe. The protocol requires:
- The Upgrade Handshake: Taking a standard HTTP request and generating a specific Sec-WebSocket-Accept header using an SHA-1 hash of a magic string to prove the server understands WebSockets.
- Frame Masking: Data sent from the client to the server is XOR-masked with a random 32-bit key to prevent certain types of caching attacks.
- Frame Decoding: Determining whether the payload length field is 7 bits, 16 bits, or 64 bits, then extracting the payload, tracking Continuation frames (if the message is split into chunks), and handling Ping/Pong heartbeat frames.
2. Node.js ws Does the Heavy Lifting
In the YouTube tutorial, the author used the ws npm package
(npm i ws).
This package handles 100% of the complexity described in the MDN
article.
When you write this:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8082 });
wss.on('connection', ws => {
ws.on('message', data => {
console.log(`Received message => ${data}`)
});
});
The ws library is silently managing the HTTP handshake, unmasking
the XOR encryption, assembling fragmented frames, and abstracting the entire
protocol down to a simple .on('message') event.
3. The Front-end is Built-in; The Back-end is Not
The reason the client-side part of MDN looks okay is that the
WebSocket API is built natively into all modern web browsers. The
browser handles the complex frame assembly and masking automatically.
Node.js, however, does not have a native WebSocket server
implementation built into its core standard library (it has raw TCP and HTTP,
but no WebSocket specific engine). Therefore, you must rely on a third-party
library like ws or Socket.io to provide that
abstraction layer.
My View
Think of the MDN server-side documentation like reading an engineering manual on how internal combustion engines ignite fuel, whereas the YouTube tutorial is teaching you how to drive the car.
Unless your goal is to write your own low-level network protocol library, you
can safely ignore the byte-level frame documentation on MDN. Stick to the
ws package for Node.js, and focus on the event-driven
architecture (handling connections, messages, and disconnections) as
demonstrated in the tutorial.
WS/WSS are protocols but use a hack to piggyback over HTTP protocol for initial handshake/setup
Me: The ws and wss keywords are a little strange. They are not other protocols like ftp even if they are used in the place where protocol is specified. Am I right here?
Gem: You are picking up on a very real quirk of how the web works, but technically, you are slightly off here!
WebSocket actually is a completely distinct communication protocol, just like FTP or HTTP.
However, your intuition that it feels "strange" or different from something like FTP is spot on. The reason it feels weird is because of how a WebSocket connection begins its life.
Here is a straightforward breakdown of why it works this way and why we use
the ws:// and wss:// prefixes.
The "Piggyback" Approach
If you want to use FTP, your computer connects directly to an FTP server (usually on port 21) and starts speaking the FTP language immediately.
WebSockets are different. They were designed to work seamlessly with the existing web, which is entirely built around HTTP (ports 80 and 443). To get past strict corporate firewalls and use existing web infrastructure, WebSockets are designed to piggyback on HTTP.
When you use a ws:// URL, here is what actually happens:
- The Disguise: Your browser connects to the server and speaks standard HTTP.
- The Request: It sends a normal-looking HTTP request, but includes two special headers: Connection: Upgrade and Upgrade: websocket.
- The Switch: If the server understands WebSockets, it replies with an HTTP 101 Switching Protocols status code.
- The New Protocol: At that exact millisecond, HTTP is thrown out the window. The TCP connection stays open, but both sides stop speaking HTTP and start speaking the WebSocket protocol.
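On the wire, the "disguise" and the "switch" look roughly like this (header values taken from the example exchange in RFC 6455):

```
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After that 101 response, the same TCP connection carries WebSocket frames instead of HTTP.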
Why the URL Schemes Exist
Even though the connection starts as HTTP, the browser needs to know your ultimate intention so it can send that "Upgrade" request.
The prefixes map directly to their HTTP counterparts:
- ws:// (WebSocket): The equivalent of http://. It operates over a standard, unencrypted connection. It is great for local development but vulnerable to snooping.
- wss:// (WebSocket Secure): The equivalent of https://. It is wrapped in a secure TLS/SSL encryption layer. You absolutely must use this in production to keep your data safe and to prevent middle-man routers from interfering with the persistent connection.
So, while ws and wss are indeed placed exactly where
you would put http or ftp to denote the protocol
scheme, they are unique because they act as a command to the browser:
"Start with HTTP, but immediately upgrade to WebSocket."
Me: U wrote: So, while ws and wss are indeed placed exactly where you would put http or ftp to denote the protocol scheme, they are unique because they act as a command to the browser: "Start with HTTP, but immediately upgrade to WebSocket."
---------
That's what I found confusing. Start with HTTP but you are saying ws and not http! Whereas in ftp you don't do any such gimmicks.
But your explanation that this is to piggyback on web explains it. If it were to use a new dedicated port directly with ws protocol, then existing corporate firewalls would block it, whereas 80 and 443 ports would pass (it) through. Did I get that correctly?
Gem: You nailed it. You got it exactly right.
If WebSockets had been introduced as a completely standalone protocol on a new, dedicated port (like FTP on port 21 or SSH on port 22), network administrators worldwide would have blocked it by default. Corporate firewalls are notoriously strict and are usually configured to drop all incoming and outgoing traffic except on port 80 (HTTP) and port 443 (HTTPS).
By designing WebSockets to initiate over those exact same ports using a standard HTTP request, the creators ensured that the traffic could easily pass through almost every router, proxy, and firewall in the world. It bypassed the need to convince thousands of IT departments to update their security rules.
To address your point about the URL being confusing: it absolutely is a gimmick. It is a very clever, standardized hack.
Think of the ws:// or wss:// prefix not as a
reflection of the initial network traffic, but as an
internal instruction for your browser.
When your code says new WebSocket('ws://...'), it tells
the browser:
"Connect to this server on the standard HTTP port just like normal,
but instead of asking for a standard webpage, immediately send the WebSocket
Upgrade headers."
You have successfully untangled one of the most common points of confusion when learning how real-time web communication actually works under the hood!
After HTTP piggyback handshake, WebSocket connection is somewhat similar to BSD TCP socket connection
Me: Once the handshake is done, and we have ws/wss protocol proper in play, we may be having something very similar to a typical BSD TCP socket connection between client and server but with WS/WSS protocol managing the exchanges. Am I on right track?
Gem: You are exactly on the right track.
Once that 101 Switching Protocols handshake is complete, the HTTP
layer is entirely discarded. What you are left with is a persistent,
full-duplex TCP connection. It is fundamentally the exact same underlying TCP
socket you would be interacting with if you were writing a raw C/C++ network
application using BSD sockets.
The key difference—and the reason we need the WebSocket protocol instead of just using raw TCP directly in the browser—is that TCP is a stream of bytes, whereas WebSockets are message-based.
Here is exactly what the WebSocket protocol manages on top of that raw TCP socket:
- Message Framing: Raw TCP does not know what a "message" is. If you send "Hello" and then "World" over raw TCP, the receiver might read them as "Hel" and "loWorld" depending on how the network buffers the packets. You would normally have to write complex logic to look for delimiters (like \n) or manually parse payload lengths to separate your messages. The WebSocket protocol handles this automatically by wrapping your data in "frames," ensuring that your on('message') event always fires with exactly one complete message.
- Data Masking: It applies the XOR masking to client-to-server messages to prevent aggressive proxy servers from caching or misinterpreting the persistent stream.
- Control Frames: It manages special non-data frames, specifically Ping and Pong. These are used under the hood to check if the connection is still alive, preventing the silent drop-offs that are notoriously hard to detect with raw TCP.
- Standardized Closure: It handles a clean, two-way handshake for closing the connection (the Close frame), complete with standard status codes (e.g., 1000 for normal closure, 1006 for abnormal closure).
So, your mental model is spot on: it is just a raw, open TCP pipe, but the WebSocket protocol is sitting at both ends acting as a highly specialized translator. It takes your distinct messages, converts them into a raw byte stream for the TCP socket, and then perfectly reassembles them into distinct messages on the other side so you don't have to write the parsing logic yourself.
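To make the "Hel" / "loWorld" problem concrete, here is a toy length-prefix framing scheme of the kind you would have to hand-roll over raw TCP. This is not the actual WebSocket frame format (which adds masking, opcodes, and variable-length headers); it is a minimal sketch of the same idea.

```javascript
// Toy framing: each message is sent as [4-byte big-endian length][payload].
// The receiver must buffer arbitrary chunk boundaries and re-split messages.

function frame(message) {
  const payload = Buffer.from(message, 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

// Stateful parser: feed it chunks of any size; it emits complete messages.
function makeParser() {
  let pending = Buffer.alloc(0);
  return function feed(chunk) {
    pending = Buffer.concat([pending, chunk]);
    const messages = [];
    while (pending.length >= 4) {
      const len = pending.readUInt32BE(0);
      if (pending.length < 4 + len) break; // wait for more bytes
      messages.push(pending.subarray(4, 4 + len).toString('utf8'));
      pending = pending.subarray(4 + len);
    }
    return messages;
  };
}

// Simulate TCP mangling the boundaries: "Hello" then "World" is sent,
// but the bytes arrive split mid-message. The parser still recovers
// exactly two complete messages.
const stream = Buffer.concat([frame('Hello'), frame('World')]);
const feed = makeParser();
const part1 = feed(stream.subarray(0, 7)); // header + only 3 payload bytes
const part2 = feed(stream.subarray(7));    // the rest
```

This buffering-and-resplitting logic is exactly what the WebSocket protocol layer does for you on every message, so your on('message') handler never sees a torn fragment.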
WebSocket communication is called real-time by Web devs
Me: And this is called real-time communication (btw client and server) ...
Gem: ...
Because the connection remains open, the latency is reduced to practically zero—limited only by the physical speed of the network (the time it takes electrons or photons to travel between the client and server). There is no waiting for a new connection to be established, and no waiting for the client to "ask" for data.
However, your mention of an "open listener thread" brings up a fantastic architectural distinction, especially since you are looking at JavaScript (Node.js and the browser).
Here is how that "listener" actually works under the hood without blocking your program:
The "Equivalent" of a Thread: The Event Loop
In older or more traditional server languages (like older Java or C++
architectures), you would literally have a dedicated thread sitting there
running a while(true) loop, waiting for data on that specific
socket. If you had 10,000 users, you needed 10,000 threads.
JavaScript and Node.js handle this differently. They are fundamentally single-threaded but utilize Non-Blocking I/O and an Event Loop.
Here is what happens instead of blocking a thread:
- Offloading to the OS: When you establish a WebSocket and attach an on('message') listener, Node.js tells the underlying operating system, "Hey, keep an eye on this TCP socket. I'm going to go do other things. Tap me on the shoulder if any data arrives."
- Continuing Execution: Your Node.js program (and your browser UI) continues running its single thread, handling other users, rendering graphics, or running calculations. It is never "blocked" waiting for the network.
- The Event Trigger: The millisecond a WebSocket frame arrives over the network, the operating system catches it and places a message in the JavaScript Event Queue.
- The Callback: The Event Loop sees the message in the queue and executes your on('message') callback function with the new data.
Why this defines "Real-Time"
This non-blocking, push-based architecture is the core of real-time communication for two reasons:
- No Polling: Before WebSockets, the only way to get "live" data was for the client to ask the server every second: "Any updates? No? Okay." This wasted massive amounts of bandwidth and CPU with constant HTTP handshakes.
- Server Push: With WebSockets, the server does not have to wait to be asked. If an event happens on the backend (e.g., another player moves, a stock price changes), the server instantly pushes that data down the open pipe to the client.
...
Technical precision wise, WebSockets are 'soft real-time' and not 'hard real-time'
Me: Hmm. But 'real-time' is a bit of marketing hype here I think. Network latency will be there though that practically may be very small if client and server are connected via high-speed networks.
Gem: You have hit on a fantastic and very important distinction. You are completely right to call it out as a bit of "marketing hype" or, at the very least, a term that has been heavily diluted by web developers.
When web developers say "real-time," they are using the term very loosely compared to how an electrical engineer or a physicist uses it.
Here is the reality of what "real-time" actually means in different contexts, and why your skepticism is spot on:
1. "Hard" Real-Time (The True Definition)
In traditional systems engineering, a "hard real-time" system is one where a delayed response is considered a total system failure.
- Examples: The anti-lock braking system in your car, a pacemaker, or the flight control software on a fighter jet.
- If the data doesn't arrive within a strict, guaranteed microsecond deadline, people die or things crash.
WebSockets are absolutely not hard real-time. The internet is inherently chaotic. Packets get dropped, routers get congested, and Wi-Fi signals drop. You cannot guarantee delivery times over the public web.
2. "Soft" Real-Time (The Web Developer Definition)
When a web developer says "real-time," what they actually mean is "Near Real-Time" or "As fast as the network physically allows, without our software getting in the way."
Before WebSockets, a chat application had to use polling. The client would ask the server every 3 seconds, "Any new messages?" If a message arrived at the server at second 0.1, it had to wait 2.9 seconds for the client's next request to be delivered. That is an artificial, software-induced delay.
WebSockets eliminate that artificial delay. As soon as the server has the message, it fires it down the TCP pipe.
The Latency Reality
As you correctly pointed out, we are still bound by the laws of physics.
- If your server is in New York and your client is in Puttaparthi, the data still has to travel halfway across the globe.
- Even traveling at the speed of light through fiber optic cables, plus the time it takes to hop across dozens of routers, you are looking at a baseline physical latency of maybe 150 to 250 milliseconds.
So, is it "real-time"? No.
But from the perspective of a user chatting with a friend or watching a live stock ticker, having data arrive in 200 milliseconds (the speed of the network) instead of 3,200 milliseconds (the speed of the network plus a polling delay) feels instantaneous. That "instantaneous feel" is what the web industry branded as "real-time communication."
'real-time' terminology drift between early 2000s and now
Me: In early 2000s when I was last heavily into such stuff, real time term in software dev meant real time embedded systems. Coming from that period, after a big gap of around two decades, I got confused by the same 'real-time' term being used in web dev.
Gem: Returning to modern software development after stepping away from commercial tech in the early 2000s means walking into an environment where half the dictionary has been repurposed. That terminology shift must have been incredibly jarring!
Your confusion is completely justified. You are coming from an era and an engineering discipline where "real-time" meant a Real-Time Operating System (RTOS) like VxWorks or QNX, dealing with strict clock cycles, hardware interrupts, and deterministic execution. In that world, if a task takes 10 milliseconds instead of the guaranteed 5 milliseconds, the system fails.
Here is exactly how web developers ended up hijacking that term while you were away:
In the early 2000s, the web was almost entirely static. You clicked a link, the screen went white, and the server slowly painted a new page.
By the mid-2000s, technologies like AJAX allowed pages to update small pieces of data in the background without reloading the whole screen. Web developers called this "dynamic."
But when WebSockets finally hit the scene as a standardized technology around 2011, it completely blew web developers' minds. Going from an environment where the client always had to ask for data, to an environment where a server could unilaterally push an event to a browser the millisecond it happened, felt like magic.
Because web developers generally didn't have backgrounds in embedded C or hardware timing, they didn't know (or care) about the strict engineering definition of "hard real-time." To a web developer whose previous baseline was a 3-second HTTP polling delay, a 150-millisecond network push felt instantaneous. So, they grabbed the strongest word available to market this new capability: "Real-Time Web."
You are entirely correct that it is a semantic drift. When a web developer today says "real-time," they just mean "event-driven over a persistent network connection." Your original, embedded-systems definition remains the mathematically and architecturally rigorous one.
Are you planning to test out WebSockets in a specific project or dashboard as you explore these modern full-stack tools?
Me: to clarify things in my mind - Hard real-time and soft real-time are two terms I need to digest. the early 2000s real-time is now hard real-time. And web real-time is soft real-time though as web is so dominant in sw now, it has hijacked the real-time term. In other words, unqualified real-time should be viewed as soft real-time. What do u say?
Gem: You hit the nail on the head. That is a perfect, 100% accurate summary of how the terminology has evolved and where it stands today.
Your conclusion about the "unqualified" use of the term is the most insightful takeaway. Because the web development ecosystem is so massive, it dominates the industry's vocabulary and search results.
Here is the modern rule of thumb for reading tech documentation or discussing architecture today:
- In web, mobile, or enterprise software: "Real-time" means Soft Real-Time (WebSockets, push notifications, event streams). It means fast enough for human perception, but with no strict timing guarantees.
- In aerospace, automotive, robotics, or medical devices: "Real-time" absolutely still means Hard Real-Time (RTOS, microsecond deadlines, deterministic execution). A missed deadline is a catastrophic system failure.
You have successfully bridged the gap between your early-2000s systems engineering knowledge and modern web architecture!
As you continue building out full-stack web and mobile applications using frameworks like React, Astro, or Expo, you will be operating entirely in this event-driven, soft real-time world. Are there any specific features—perhaps live data dashboards or instant state updates—that you are considering implementing in your current work?
Plain JS (soft) real-time data visualization app using Web Sockets, Canvas API and JSON data
Me: I need to build up on this websocket tutorial to get to a "real-time" (which has to be interpreted as soft real-time) data visualization web app example/tutorial which uses Web Sockets, Canvas API and JSON data. Can you provide me such an example/tutorial whose code is as reasonably simple as possible?
Gem: This is the perfect next step. Combining WebSockets with the HTML5 Canvas API and JSON is the foundational architecture for almost every high-performance live dashboard, whether it is plotting financial market ticks, server telemetry, or IoT sensor data.
To keep this strictly focused on the raw mechanics—with no frameworks getting in the way—we will build a "Live Telemetry Dashboard."
The Node.js server will generate a random data point every 100 milliseconds and push it as a JSON string over the WebSocket. The client will parse that JSON and use the Canvas API to draw a scrolling line chart.
Here is the complete, minimal implementation in two files.
1. The Server (server.js)
Make sure you are in a directory with the ws package installed
(npm install ws). This script sets up the server and introduces a
critical concept: cleaning up intervals when a client
disconnects.
...
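Since the server code itself is elided in this extract, here is an independent minimal sketch of what such a telemetry server can look like. It assumes the ws package; the 100 ms interval and the interval cleanup on disconnect follow the description above, and makeDataPoint is an illustrative name, not one from the original code.

```javascript
// Telemetry server sketch: push a random JSON data point every 100 ms
// to each connected client, and clear the interval when the client leaves.

// Pure data-point generator; illustrative shape: { t: timestamp, v: value }.
function makeDataPoint() {
  return { t: Date.now(), v: Math.random() * 100 };
}

function startTelemetryServer(port = 8082) {
  const { WebSocketServer } = require('ws'); // npm install ws
  const wss = new WebSocketServer({ port });
  wss.on('connection', (ws) => {
    const timer = setInterval(() => {
      ws.send(JSON.stringify(makeDataPoint()));
    }, 100);
    // Critical cleanup: without this, the interval keeps firing (and
    // throwing on a closed socket) after the client disconnects.
    ws.on('close', () => clearInterval(timer));
  });
  return wss;
}
```

The setInterval/clearInterval pairing is the "critical concept" mentioned above: every open connection owns a timer, and the close event is where that timer must die.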
2. The Client (index.html)
This file handles the connection, the JSON parsing, and the math required to turn raw numbers into a scrolling visual graph.
...
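The heart of the elided client code is the coordinate math: keeping a sliding window of recent values and mapping each one onto canvas pixels. A pure-function sketch of that mapping follows; the function names and the 0-100 value range are illustrative, not taken from the original code.

```javascript
// Map the i-th of n data points onto a canvas of the given size.
// X spreads points evenly; Y is inverted because canvas y grows downward.

function toCanvasPoint(value, i, n, width, height, vMin = 0, vMax = 100) {
  const x = (i / (n - 1)) * width;
  const y = height - ((value - vMin) / (vMax - vMin)) * height;
  return { x, y };
}

// Sliding window: keep only the most recent maxPoints values.
function pushPoint(points, value, maxPoints = 100) {
  const next = points.concat(value);
  return next.length > maxPoints ? next.slice(next.length - maxPoints) : next;
}

// On each WebSocket message the client would do roughly:
//   points = pushPoint(points, JSON.parse(event.data).v);
//   then clear the canvas and ctx.lineTo() through the mapped points.
```

Everything else in such a client (beginPath, moveTo/lineTo, stroke) is just walking this mapped array.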
How to run it:
- Run node server.js in your terminal.
- Double-click index.html to open it in your browser.
Why this architecture is powerful:
You will notice the drawChart() function doesn't run on a
setInterval or requestAnimationFrame loop on the
client side. It only executes when the
ws.addEventListener('message') fires.
Because we are relying on the event-driven nature of WebSockets, the browser only spends CPU cycles drawing the chart when there is actually new data to display. This is the essence of efficient, soft real-time web architecture.
(Ravi: I later implemented the example code Gemini had provided above on my PC (in canvas-live-dashboard folder of samples-and-ravi-tests project). It worked as expected.)
Performance aspects of Canvas API draw functions including specific case of Windows 11
Me: Great! Very useful! i will delve into details later on. Some top level questions first.
The client redraws the chart entirely every time, if I understood it correctly. is that necessary? Would there be a performance penalty?
Canvas drawing is said to bypass DOM. So on Windows 11, what would happen? would the Canvas API draw functions map to Windows OS graphics calls (used to be called Win32 API in early 2000s)? If so, that would be as fast as the device can draw those charts.
Gem: These are excellent, systems-level questions. Your background in early 2000s architecture is leading you exactly to the right architectural bottlenecks.
Here is the breakdown of what is actually happening under the hood with both the rendering strategy and the operating system pipeline.
1. The Redraw Strategy: Is it necessary?
Yes, in the specific example provided, clearing and redrawing the entire canvas is necessary, but the reason lies in the scrolling nature of the chart.
When a new data point arrives, every existing line segment on the screen needs to move slightly to the left. Because the Canvas API is an immediate-mode graphics surface—meaning it has no memory of the lines you drew previously, it just remembers the final pixels—you cannot simply tell it to "move the existing lines."
You have two choices to achieve the scrolling effect:
- Full Redraw (The code provided): Clear the screen and iterate through the array, recalculating the X/Y coordinates for every point based on their new positions.
- Pixel Shifting: Use ctx.getImageData() to grab the actual pixel matrix, shift it left by a few pixels using ctx.putImageData(), and then just draw the one new line segment on the right edge.
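The pixel-shifting option can be illustrated without a browser: an ImageData buffer is just a flat RGBA array, and shifting left by k pixels is a per-row copy. This sketch shows the CPU work that the getImageData()/putImageData() round trip implies (it is a hand-rolled equivalent, not the Canvas API itself).

```javascript
// Shift a width x height RGBA pixel buffer left by k pixels, zero-filling
// the vacated right edge. Each pixel is 4 bytes (R, G, B, A).

function shiftLeft(pixels, width, height, k) {
  const out = new Uint8ClampedArray(pixels.length); // zero-initialized
  for (let row = 0; row < height; row++) {
    const rowStart = row * width * 4;
    // Copy pixels [k, width) of this row into positions [0, width - k).
    out.set(
      pixels.subarray(rowStart + k * 4, rowStart + width * 4),
      rowStart
    );
  }
  return out;
}
```

Even though this copy is cheap in isolation, getImageData() forces the browser to read the rendered surface back from the GPU into CPU memory, which is the expensive part discussed below.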
The Performance Penalty: There is absolutely a CPU penalty
for the full redraw. It is an O(N) operation where N is the
number of data points.
However, for 100 points, modern JavaScript engines will execute that
for loop and issue the drawing commands in a fraction of a
millisecond. It is so fast that the "Pixel Shifting" method is
actually often slower because reading from the GPU back into CPU
memory (which getImageData does) is a notoriously expensive
operation in web browsers.
If you were plotting 50,000 points per frame, the full redraw would absolutely bottleneck. At that scale, you would step away from the 2D Canvas API entirely and use WebGL to push the coordinate array directly to the GPU so a shader could process the vertices in parallel.
2. Bypassing the DOM and the Windows 11 Pipeline
Your intuition here is spot on. This is exactly why Canvas is used for high-frequency data instead of standard HTML elements.
If you tried to build this chart using HTML <div> elements
or SVG, every new point would force the browser's rendering engine to
recalculate the Layout tree (reflow), update the CSS Object Model, and repaint
the screen. This "retained-mode" UI update is massive overhead.
The Canvas API completely bypasses that tree. It sits in the DOM as a single black box.
How it maps to Windows 11: In the early 2000s, drawing to the
screen meant grabbing a Device Context handle (HDC) and firing
off Win32 GDI calls like MoveToEx() and LineTo().
The CPU did almost all the heavy lifting.
Today, the pipeline in modern browsers (like Chrome or Edge) looks like this:
- Your JavaScript calls ctx.lineTo(x, y).
- The browser's JavaScript engine passes this to its internal 2D graphics engine (for Chromium-based browsers, this is a C++ library called Skia).
- On Windows 11, Skia does not use the legacy Win32 GDI. Instead, it translates your path commands into hardware-accelerated DirectX calls (specifically Direct2D layered over Direct3D).
- These instructions are batched and sent directly to your graphics card (GPU).
So, to answer your question: Yes, it is as fast as the device can draw.
By using the Canvas API, your JavaScript is effectively driving the Windows 11 DirectX graphics pipeline, utilizing the GPU to draw the vectors. This is why you can push buttery-smooth 60 frames-per-second animations in a web browser today, something that was entirely unthinkable with web technologies two decades ago.
Short overview of WebGL
Me: Continuing on 5 may 2026 ...
In above response, you wrote: If you were plotting 50,000 points per frame, the full redraw would absolutely bottleneck. At that scale, you would step away from the 2D Canvas API entirely and use WebGL to push the coordinate array directly to the GPU so a shader could process the vertices in parallel.
----
Please give me a short overview of WebGL.
Gem: Given your background in systems engineering and Win32 APIs, explaining WebGL is actually quite straightforward because you are already familiar with the underlying concepts.
If the HTML5 2D Canvas API is the modern web's equivalent to the old Windows GDI (using simple commands to draw shapes), WebGL is the web's direct equivalent to OpenGL or Direct3D.
It is a JavaScript API that gives you near bare-metal access to the device's Graphics Processing Unit (GPU).
Here is a short, systems-level overview of how WebGL works and why it is the go-to solution when you hit the limits of standard Canvas drawing.
1. No Pre-built Shapes
The first shock for developers moving to WebGL is that it has no
ctx.lineTo() or ctx.fillRect() commands. It
doesn't know what a circle, a line, or a cube is.
WebGL only knows how to draw three elemental primitives: Points, Lines, and Triangles. Everything you see on a WebGL screen—from a 2D stock chart to a fully 3D video game—is constructed out of thousands or millions of tiny triangles.
2. The Power of Buffers
In our previous 2D Canvas example, JavaScript (the CPU) ran a for loop, iterating through the array and saying, "Draw a line here. Now draw a line here. Now draw a line here."
This sequential processing is the bottleneck.
In WebGL, you bypass sequential drawing. Instead, you create a strictly typed array (like a Float32Array in JavaScript) containing the X,Y coordinates of all 50,000 points. You then issue a command to copy that entire block of memory from your system RAM directly into the GPU's Video RAM (VRAM) in one massive chunk. This is called a Vertex Buffer Object (VBO).
3. Shaders (The GPU Programs)
Once your data is sitting in the VRAM, you need to tell the GPU what to do with it. You do this by writing tiny, specialized programs called Shaders.
These are not written in JavaScript. They are written in GLSL (OpenGL Shading Language), which is a strictly typed language that looks almost exactly like C. You compile these Shaders inside your browser, and they run directly on the GPU hardware.
There are two mandatory Shaders in WebGL:
- The Vertex Shader: Its only job is to calculate the final screen position of a point. If you have 50,000 points in your buffer, the GPU uses its massively parallel architecture (thousands of tiny cores) to execute the Vertex Shader for all 50,000 points simultaneously.
- The Fragment Shader: Once the points are mapped, this shader calculates the exact color of every individual pixel (fragment) that needs to be drawn between those points.
Why it solves the 50,000 point bottleneck
If you receive 50,000 data points over a WebSocket, using WebGL changes the entire math of the render.
Instead of your single-threaded JavaScript iterating through a loop 50,000 times to calculate screen coordinates, your JavaScript simply updates the Buffer array and says to the GPU: "Here is the new array. Fire the Shaders."
The GPU—which was explicitly designed to do vector matrix multiplication in parallel—processes all 50,000 coordinates instantly, freeing up your CPU to go back to listening to the WebSocket for the next batch of data.
It is vastly more complex to set up than the 2D Canvas—you have to write the GLSL C-code, manage memory pointers, and compile the shaders—but the performance ceiling is exponentially higher.
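To make the buffer idea concrete, here is a small sketch of the CPU-side step: packing data points into the Float32Array that becomes the VBO payload. This is my own illustration, not code from the chat; the packPoints helper is hypothetical, and the gl.* calls are left as comments because they require a browser canvas WebGL context.

```javascript
// Pack (x, y) data points into one Float32Array, ready to be uploaded
// to the GPU as a Vertex Buffer Object in a single copy.
function packPoints(points) {
  const vertices = new Float32Array(points.length * 2);
  points.forEach((p, i) => {
    vertices[i * 2] = p.x;     // X coordinate
    vertices[i * 2 + 1] = p.y; // Y coordinate
  });
  return vertices;
}

const vertices = packPoints([{ x: 0, y: 0 }, { x: 0.5, y: 0.5 }]);

// In the browser, the whole array is then copied to VRAM in one chunk
// and drawn by the shaders (illustrative WebGL calls):
// const vbo = gl.createBuffer();
// gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
// gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.DYNAMIC_DRAW);
// gl.drawArrays(gl.POINTS, 0, vertices.length / 2);
```

The key point is that JavaScript touches the data once, to build the typed array; everything after gl.bufferData happens on the GPU.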
Implementation of 'high-performance live dashboard' WebSockets, Canvas and JSON plain JS app
(Ravi: I later implemented the example code Gemini had provided earlier in this chat for 'high-performance live dashboard', on my PC (in canvas-live-dashboard folder of samples-and-ravi-tests project). It worked as expected.) The code and detailed exchanges with Gemini on it are in a README.md in that project folder. Given below are relevant excerpts from that exchange. Note that these exchanges are also in the same Gemini chat as above exchanges.
Server-Sent Events vs. WebSockets for Live Telemetry Dashboard app
Me: I implemented the "Live Telemetry Dashboard" example you gave earlier. It is working as expected. Great!
A question: Looks like this simple telemetry dashboard could have been implemented using Server-Sent Events too, as the client is not sending any messages to the server. Am I right?
Gem: You are 100% correct. Your architectural instincts are spot on.
Because the telemetry dashboard only streams data in one direction—from the server to the client—using WebSockets (which are designed for full-duplex, bidirectional communication) is technically overkill for that specific data flow.
Server-Sent Events (SSE) would have been a perfect, and in some ways superior, fit for that exact scenario.
Here is a breakdown of how SSE fits into this picture and the architectural trade-offs between the two.
What are Server-Sent Events (SSE)?
While WebSockets throw away HTTP after the initial handshake to create a raw TCP pipe, SSE embraces HTTP entirely.
When you use SSE (via the EventSource API in the browser), the client makes a standard HTTP request to the server. The server responds, but instead of sending a Content-Length header and closing the connection, it sends a Content-Type: text/event-stream header and just keeps the HTTP response open indefinitely, pushing chunks of text data down the wire whenever it wants.
Why SSE is great for Live Telemetry
- Built-in Reconnection: This is the killer feature of SSE. If a WebSocket connection drops (due to network blips or server restarts), it stays dead. You have to write your own JavaScript logic to detect the drop and attempt a reconnect. With SSE, the browser handles this natively. If the connection drops, the browser automatically tries to reconnect, even sending a Last-Event-ID header so the server knows where it left off.
- Pure HTTP: Because SSE operates entirely over standard HTTP, there are no custom protocols, no HTTP Upgrade headers, and no custom proxy configurations needed. It passes through firewalls and load balancers exactly like a standard web page.
- Simpler Backend: You don't need a specialized package like ws. In Node.js, you just write to the standard HTTP response stream (res.write('data: {"value": 42}\n\n')).
Why WebSockets usually win in modern apps (The Catch)
If SSE is so perfect for one-way data, why does the industry default to WebSockets almost universally?
- Binary Support: WebSockets can stream raw binary data (like ArrayBuffer or Blob). SSE is strictly limited to UTF-8 text. If your telemetry data grows massively and you want to switch from JSON strings to a packed binary format (like Protocol Buffers) to save bandwidth, SSE cannot handle it.
- The Browser Connection Limit: This is the historic Achilles' heel of SSE. Under the older HTTP/1.1 protocol, browsers strictly limit you to 6 open connections per domain. If a user opens your telemetry dashboard in 7 different browser tabs, the 7th tab will just hang indefinitely. WebSockets do not have this browser-enforced limit. (Note: If your server uses HTTP/2, this limit disappears due to multiplexing, but it remains a classic gotcha.)
- Future-Proofing: Dashboards rarely stay one-way forever. Eventually, someone asks for a feature like, "Can I click a point on the chart to pause the server sensor?" With WebSockets, you just send a message back up the existing pipe. With SSE, you have to spin up a completely separate standard HTTP POST request to send that command.
You identified a classic engineering trade-off. For a strictly one-way JSON data feed, SSE is simpler and more robust out of the box. However, developers often reach for WebSockets anyway, viewing them as a more flexible foundation in case the app's requirements evolve into two-way communication down the road.
Using Two-way communication over WebSockets for Live Telemetry Dashboard app
Me: Interesting! Can you expand the example to introduce a feature like 'pause the server sensor' which brings in client to server communication and which then can be done only using WebSockets and not SSE?
Gem: This is exactly the scenario where WebSockets justify their place over Server-Sent Events.
If you were using SSE, you would have to build a completely separate HTTP endpoint (e.g., POST /api/sensor/pause), configure your client to make a fetch() request to it, and figure out how to match that specific HTTP request to the specific SSE stream currently open for that user.
With WebSockets, the bidirectional pipe is already open. The client simply whispers back up the exact same pipe, "Hey, hold on a second."
Here is how you expand the previous example to include full, two-way communication.
1. The Updated Server (server.js)
We need to refactor the server slightly to move the setInterval into a reusable function so we can stop and start it based on client messages. We will add a ws.on('message') listener to handle incoming commands.
...
2. The Updated Client (index.html)
We will add a button to the DOM and attach a click listener that fires a message upstream using ws.send().
...
The Architectural Takeaway
Notice how clean the client-side code is for sending a command. It is just ws.send(JSON.stringify({ command: 'pause' })).
There are no cross-origin resource sharing (CORS) configurations to worry about, no HTTP headers to set, and no need to establish a new TCP handshake. The socket is simply sitting there, ready to accept data flowing in either direction instantly. This is why WebSockets remain the undisputed champion for highly interactive dashboards!
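Since the refactored server code is elided above, here is a hedged sketch of the kind of per-connection sensor controller the refactor describes. This is my own reconstruction, not the original code; createSensor and its method names are hypothetical.

```javascript
// One controller per connection: the sensor loop can be stopped and
// restarted by commands arriving over the same WebSocket.
function createSensor(send, intervalMs = 100) {
  let timer = null; // the setInterval handle, scoped to this connection

  const start = () => {
    if (timer) return; // already running
    timer = setInterval(() => {
      send(JSON.stringify({ value: Math.floor(Math.random() * 100) }));
    }, intervalMs);
  };

  const stop = () => {
    clearInterval(timer);
    timer = null;
  };

  // Dispatches an incoming WebSocket message like {"command":"pause"}.
  const handleCommand = raw => {
    const { command } = JSON.parse(raw.toString());
    if (command === 'pause') stop();
    if (command === 'resume') start();
  };

  return { start, stop, handleCommand, isRunning: () => timer !== null };
}

// Illustrative wiring with the ws package:
// wss.on('connection', ws => {
//   const sensor = createSensor(data => ws.send(data));
//   sensor.start();
//   ws.on('message', sensor.handleCommand);
//   ws.on('close', sensor.stop);
// });
```

Factoring the timer into a small controller like this keeps the connection handler readable and makes the pause/resume logic testable in isolation.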
Global/Broadcast architecture over WebSockets when serving thousands of concurrent clients
Me: The updated app seems to be working well. The Pause and Resume feature is working as expected. Further I tried opening two connections by duplicating the client browser tab. I could independently Pause/Resume the server (sending the data).
Gem: You have stumbled upon a fantastic and fundamental concept of Node.js event-driven architecture!
Your observation is exactly right, and the reason it works that way highlights the difference between Connection-Scoped architecture and Global/Broadcast architecture.
Here is why your two browser tabs operated completely independently, and how this dictates the way you design real-time systems.
1. The Power of Closures (Why it was independent)
If you look closely at the server code from the previous example, the setInterval (the sensor) and the dataStream variable are defined inside the wss.on('connection', ws => { ... }) callback block.
wss.on('connection', ws => {
  let dataStream = null; // Scoped strictly to THIS connection
  const startSensor = () => {
    dataStream = setInterval(...);
  };
});
Because of how JavaScript handles closures, every time a new client connects (or you open a new tab), Node.js creates a brand new, isolated execution context in memory.
- Tab 1 connects: It gets its own ws object, its own dataStream variable, and its own unique setInterval loop ticking in the background.
- Tab 2 connects: It gets a completely separate ws object and a separate setInterval loop.
When Tab 1 sends a "pause" command, it only clears the timer associated with Tab 1's specific closure. Tab 2 remains entirely unaffected.
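The per-connection isolation can be demonstrated with a few lines of plain JavaScript (an illustrative toy of mine, not the dashboard code): each call to the factory gets its own private state, exactly as each invocation of the wss.on('connection', ...) callback does.

```javascript
// Toy model of per-connection closures: every call to onConnection()
// creates a fresh, private `paused` flag, just as every WebSocket
// connection gets its own dataStream variable in the server above.
function onConnection() {
  let paused = false; // scoped strictly to THIS "connection"
  return {
    pause: () => { paused = true; },
    isPaused: () => paused,
  };
}

const tab1 = onConnection();
const tab2 = onConnection();
tab1.pause();
// tab1 is now paused; tab2 is completely unaffected
```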
2. The Architectural Fork in the Road
Discovering this independent behavior forces an important architectural decision when you are building real-time apps. You have to decide which of the following two models fits your data:
Model A: Independent Streams (What you built)
- The Use Case: User-specific dashboards. For example, if the dashboard is showing the user's personal server CPU usage, or their private chat messages.
- The Trade-off: It is heavy on the server. If you have 10,000 users connected, your Node.js server is actively running 10,000 independent setInterval loops simultaneously.
Model B: Global Broadcast (The Pub/Sub approach)
- The Use Case: Shared data. For example, a live stock market ticker, a multiplayer game state, or a global chat room.
- The Architecture: Instead of a loop for every user, you run one global loop on the server. When that loop ticks, it iterates through a list of all currently connected clients and sends the exact same data payload to everyone at once.
3. How a Broadcast Server Looks
If you ever need to switch your dashboard from "Independent" to "Global Broadcast" (where pausing it stops the data for everyone), you pull the generator outside the connection block and loop through wss.clients:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8082 });

// 1. GLOBAL STATE: One single data generator for the whole server
let globalDataStream = setInterval(() => {
  const data = JSON.stringify({ value: Math.floor(Math.random() * 100) });
  // 2. BROADCAST: Iterate through every connected client and push the data
  wss.clients.forEach(client => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(data);
    }
  });
}, 100);

wss.on('connection', ws => {
  console.log('New client joined the broadcast.');
  // If ANY client sends a pause command, it stops the global stream for EVERYONE
  ws.on('message', message => {
    const cmd = JSON.parse(message.toString()).command;
    if (cmd === 'pause') clearInterval(globalDataStream);
  });
});
It is a great exercise to open up your network inspector and watch those independent streams operating side-by-side. Are you planning to adapt this telemetry architecture to visualize a specific dataset you are working with, or are you primarily mapping out these network concepts for future reference?
Me: Fascinating alternative architecture of global broadcast which keeps setInterval loops down to just 1 for all clients on the server, with downside of lack of individual client pause/resume (only global pause/resume). As of now, I want to focus on only simple two-way WebSocket live-dashboard sample code. But your inputs are useful for any future needs I may have.
Gem: That is a very smart approach. Sticking exclusively to the isolated, two-way architecture is the best way to deeply internalize how the event listeners and the persistent TCP pipe behave before introducing any extra complexity.
You now have a clean, working foundation. Because it is written in plain Node.js and native HTML5 Canvas, you can freely experiment with the message frequencies, the JSON payload structures, or the drawing logic without any third-party frameworks obscuring the raw mechanics.
Whenever you are ready to expand on this codebase, transition it into a component-driven environment, or tackle those larger architectural questions down the road, just say the word!
Common practice to stringify JSON data before sending on WebSockets using Text Frames; Massive scale may need Binary Frames
Me: I had a closer look at the code. Before sending, you stringify the JSON data and after receiving you parse the string into JSON. Can you compare this with fetch API? Also is it common to stringify and parse JSON data this way when using WebSockets?
Gem: That is a very sharp observation. You are looking right at the serialization layer.
Yes, manually stringifying and parsing JSON is absolutely the standard, everyday practice when working with WebSockets.
Comparing this manual serialization to the modern fetch API highlights exactly how bare-bones the WebSocket protocol is once the initial handshake is complete. Here is a breakdown of how they compare and why WebSockets require this manual step.
1. WebSockets vs. The fetch API
When you use the fetch API, you are operating at a much higher level of abstraction than a raw WebSocket frame.
Sending Data: In both fetch and WebSockets, you must use JSON.stringify(). Neither the HTTP protocol nor the WebSocket protocol natively understands JavaScript objects; they only understand text and bytes.
- Fetch: fetch(url, { method: 'POST', body: JSON.stringify(data) })
- WebSocket: ws.send(JSON.stringify(data))
Receiving Data (The Big Difference): This is where fetch spoils developers with convenience.
- Fetch: When a server replies with JSON, you typically call response.json(). Under the hood, the browser looks at the HTTP headers, reads the incoming byte stream, buffers it completely, and automatically runs the JSON parser for you.
- WebSocket: WebSockets have no headers per message. When a message arrives, the ws.on('message') event just hands you the raw string payload. There is no Content-Type: application/json attached to the frame to tell the browser what it is. Because the browser does not know if you sent JSON, plain text, or XML, it forces you to explicitly call JSON.parse(event.data).
2. Is this common practice?
Yes, it is the de facto standard for 90% of web real-time applications.
When you send a message over a WebSocket, the protocol requires you to define the frame as one of two distinct types:
- Text Frame: Must be valid UTF-8 text.
- Binary Frame: Raw byte data (like an ArrayBuffer or a Blob).
Because JavaScript objects cannot be sent over the wire directly, if you want to send structured data using a Text Frame, JSON is universally the most efficient and standard way to serialize it.
The Architectural Ceiling (When JSON isn't enough)
Given your background in systems engineering, you might immediately spot the flaw in this: calling JSON.stringify and JSON.parse thousands of times per second consumes a significant amount of CPU, and sending property keys (like "timestamp":) repeatedly wastes network bandwidth.
While JSON is the standard for most apps, when developers hit massive scale—like a multiplayer game sending 60 updates per second, or high-frequency trading dashboards—they abandon JSON and Text Frames entirely.
Instead, they switch to Binary Frames and use serialization libraries like Protocol Buffers (Protobuf), FlatBuffers, or MessagePack. These tools compile the data down to a highly compressed raw byte stream, completely bypassing the CPU overhead of JSON parsing.
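To make the bandwidth argument concrete, here is a small sketch of mine (an illustrative fixed layout, not code from the chat) comparing a JSON text frame with a hand-packed binary frame for one telemetry sample:

```javascript
// One telemetry sample sent two ways.
const sample = { timestamp: 1714800000000, value: 42 };

// Text frame: the JSON string repeats the property keys in every message.
const textFrame = JSON.stringify(sample); // 38 bytes for this sample

// Binary frame: a fixed layout agreed on by both ends — an 8-byte
// float64 timestamp followed by a 1-byte value — so no keys travel
// on the wire and no JSON parsing happens on receive.
const binaryFrame = new ArrayBuffer(9);
const view = new DataView(binaryFrame);
view.setFloat64(0, sample.timestamp);
view.setUint8(8, sample.value);

// The receiver decodes with the same fixed layout:
const decoded = {
  timestamp: view.getFloat64(0),
  value: view.getUint8(8),
};
```

Full serialization libraries like Protobuf generalize this idea: the schema lives in code on both ends, so only the values cross the wire.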
But until you hit those extreme performance bottlenecks, JSON.stringify() and JSON.parse() are exactly the right tools for the job!