Disclaimer: I am the author of Stl.Fusion, an open-source library that helps address the long-standing problem described in this post. But the problem is real, and the post would be relevant even if this library didn't exist.
What’s the biggest difference between the modern and the future web apps?
To answer this question, let’s ask the opposite one first: what’s the most rudimentary UX feature used by almost any web app today?
I nominate the "Refresh" (F5) button. Consider how far hardware has come while we kept relying on it:
- The Intel P5 (Pentium 60 MHz), the best consumer CPU of 1993, could crunch ~60 MIPS (million instructions per second)
- The 2019 Ryzen Threadripper 3990X crunches 2,356,230 MIPS, which is ~40,000x more.
- The floating-point performance difference is even more staggering: the 35.7 TFLOPS on the CUDA cores of the upcoming NVidia RTX 3090 is 500,000x more than the 60 MFLOPS of the Intel P5, and that's not counting up to 285 extra (but specialized) TFLOPS on the 3090's Tensor Cores (TF32 & sparse matrices).
So modern PCs are 40,000 to ~5 million times faster; the amount of memory (2–4 MB → 16–64 GB) and connection bandwidth (19.2 Kbps → 1 Gbps) increased similarly.
Why is the F5 / "Refresh" action still acceptable?
To be fair, lots of SPAs are capable of updating a part of their content in real-time, but even these apps do it quite selectively, i.e. they still make you hit F5 from time to time. And an average web app just doesn't update anything unless you take some action.
Why is the default still "non-real-time"?
- Almost 30 years of reliance on “Refresh” / F5 turned it into a familiar action for a majority of users, but more importantly, for web developers. As a result, everyone thinks it’s ok to rely on it — at least sometimes. So, it’s a chicken and egg problem: the more real-time apps we see, the less it becomes acceptable to create a regular one.
- There are cases when it’s desirable to have only explicit updates.
- There is a common perception that real-time updates are complex to implement and costly to run. There must be code "watching" for every change and notifying every client that's "observing" it, which already looks like a potential performance problem: N clients making 1 action per second each & watching for other clients' actions = O(N²) updates per second. Besides that, the client should somehow apply every change notification it gets, the server should properly serialize the updates it sends, etc. — in short, this looks scary.
I agree with only the first statement; as for #2 and #3, I'm going to debunk them later in this post.
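To see where the scary O(N²) estimate comes from, here is a quick back-of-the-envelope calculation. This is my own illustration with made-up numbers, assuming the worst case described above: every client watches every other client's actions.

```typescript
// Back-of-the-envelope estimate of naive change fan-out, assuming the
// worst case: every client watches every other client's actions.
function notificationsPerSecond(
  clients: number,
  actionsPerClientPerSec: number = 1
): number {
  // Each action has to be pushed to every other watching client.
  return clients * actionsPerClientPerSec * (clients - 1);
}

console.log(notificationsPerSecond(10));   // 90/sec — trivial
console.log(notificationsPerSecond(1000)); // 999,000/sec — O(N²) bites
```

The quadratic growth is real, but as we'll see below, it's an argument against the naive implementation, not against real-time updates as such.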
“Only explicit updates” is never the best option
Even if you want certain content to be updated manually, delivering the update notification in real-time is a better option than doing nothing at all.
- "The post was edited - [click here to see the changes]"
- "New content was added to your feed. Do you want to see it?"
- “This search result has changed. Do you want to see a new one?”
Let’s visualize this:
- If there is a separate notification, the update is usually manual, so the user decides the update delay.
- Otherwise the update delay can be anything from zero (instant updates) to infinity (no automatic updates). The shorter the delay, the more real-time you are, but simultaneously, the more resources you need to scale the system. Can infinite delay be your sweet spot? Yes, but it's highly unlikely. And it's even less likely to be the sweet spot for every piece of content you present.
- No notification + no update is the worst case: the only way a user may learn about the update is by… triggering the update! 🙈
Why are real-time updates valuable?
Nearly every creature on Earth evolved to react in real-time, including us. We implicitly value an event happening right now more than a similar event in the past, because the opportunity it presents tends to shrink over time.
And that’s why real-time features are so useful — if you use them right, they help to significantly increase user engagement.
Let me try to prove this by listing some real-time features I’d love to see in well-known products:
Medium and other content sharing services (Reddit, etc.):
- “10 people including your friends Alice and Bob are reading this post now — do you want to [show them a note] right now or [start a video chat with the readers]?”
- “The author edits the content of this post right now. Do you want us to [notify you once he publishes an update]?” and generally more notifications like this one.
- "Your friend AY has been writing a post titled 'Go vs C#, part 3' for the 3rd day in a row. Do you want to ask him to [be an early reviewer of this post]?"
- Real-time Quip-style comments attached to the content, but also presented below as usual threads. IMO, Medium's comments are a disaster.
Google Maps: I totally don't get why it completely ignores friends.
- “Are you driving to Glendale? Share your [approximate] or [precise location] with your friends there and let them know you’re coming!”
Interestingly, Facebook, WhatsApp, or Telegram could probably implement this even better.
E-commerce (goods, services):
- Real-time offers & discounts based on estimated probability of buying vs leaving and other knowns (quantity in stock, profit margin, etc.) to maximize the profit.
- Real-time Q/A. Lots of sales are driven by emotions, and the chance to sell drops dramatically once your customer has a question with no answer. There is a Q/A section on almost any e-commerce website, but I've never seen an experience like this: "Hey, the seller of this product just saw your question on her phone. Wait a moment, and we'll tell you whether she's going to respond right now." — "Ok, she's writing a response: … words typed. Feel free to [add more comments] while she's typing or leave a [voice message]".
- Sales assistants for the most valuable buyers. "An expert in Sony cameras, Charlie, is ready to help you — just start a [voice] or [text chat]". And oh god, this shouldn't work as a regular chat: seeing something like "Charlie's recommendations", a list of products with a real person's comments, would be more convenient.
Live support chat widgets:
Honestly, such chats are mostly a disaster. Ok, I get it: you don't want to pay for support and try to rely more on bots. But think of all the things like "sorry, we can't open insecure links", no real emoticons, links shown as-is (no preview, even for the service's own products!), auto-closing a chat after 5 min. of inactivity (why?), no "continue chat" feature, not a single attempt to upsell something after a successful interaction, the boring "rate your interaction with our support agent on a scale of 1 to 10", and asking that question in the same form even when the chat history has a bunch of F-words.
But speaking of real-time features, what I’d really love to see is:
- Voice chat / audio messaging with automatic transcription
- I'm pretty sure a fair % of people would prefer a video chat; in some cases it's simply a necessity
- Real-time "how would you rate the interaction with our support so far?" — and a real-time support agent change based on the response and the availability of other agents: "We're sorry to see this rating. If you think it makes sense to replace Alice with someone else who could help you better, click [Confirm] to do this right now".
These are just some examples. Obviously, you need a fair amount of tuning and A/B testing to tell whether such features are really useful. But at first glance they look valuable, which usually means that if the cost of adding them is reasonable, we should try to implement them at some point.
And that’s how we get to the second question:
Are real-time updates significantly more costly to implement?
The short answer is: no. There is a difference, but it's probably less than 10%. The reasons we mentally exaggerate the cost are:
- When you think of real-time, you typically imagine SignalR-style interaction, which requires pub/sub, many different event types and event processors, more complex error handling, but most importantly, an event-driven architecture. Most web apps, though, are initially designed to work in request-response mode, and migrating to an event-driven architecture is nearly as complex as adding a GUI to a complex console app.
- Interestingly, even though SignalR helps you implement pub/sub, it still doesn't solve the biggest problem. If you build a fully real-time system, the number of event types is proportional to the number of API endpoints you have, and a majority of these events could be defined as reactions to other events: "Product added to the shopping cart" -> "Cart content changed" -> "Cart cost changed" -> "Volume discount changed" -> "Display new volume discount". You need at least 5 event types and 5 handlers to implement this chain of reactions, right?
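To make that pain concrete, here's a sketch of the explicit event-driven style described above — my own illustration, not tied to SignalR or any real codebase. Each link in the chain needs its own event type and handler, even though every step is really just derived data:

```typescript
// One event type and one handler per link in the shopping-cart chain —
// the explicit event-driven style the text describes as painful.
type Event =
  | { kind: "ProductAddedToCart"; productId: string }
  | { kind: "CartContentChanged" }
  | { kind: "CartCostChanged" }
  | { kind: "VolumeDiscountChanged" }
  | { kind: "DisplayNewVolumeDiscount" };

const log: string[] = [];

function handle(e: Event): void {
  log.push(e.kind);
  switch (e.kind) {
    // Every handler's only job is to raise the next event in the chain.
    case "ProductAddedToCart":       handle({ kind: "CartContentChanged" }); break;
    case "CartContentChanged":       handle({ kind: "CartCostChanged" }); break;
    case "CartCostChanged":          handle({ kind: "VolumeDiscountChanged" }); break;
    case "VolumeDiscountChanged":    handle({ kind: "DisplayNewVolumeDiscount" }); break;
    case "DisplayNewVolumeDiscount": break; // the actual UI update would happen here
  }
}

handle({ kind: "ProductAddedToCart", productId: "sku-42" });
// log now holds all five event kinds, in order
```

Five types and five handlers for one logical change — and this machinery multiplies with every new endpoint.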
That’s why real-time feels complex. But what if I told you:
- Request-response architecture is perfectly fine for real-time apps — you don’t need to refactor much at all.
- You don't need events, event handlers, etc. It's fine to have them, of course, but there is a way to achieve 99% of your goals with one universal event and an implicit subscription to it.
- Since the subscriptions and unsubscriptions are implicit, you also don’t need to worry about explicit pub/sub.
- And the solution is nearly as scalable as your non-real time system was.
As you might guess, this post is about Stl.Fusion, a free (MIT license) open-source library that lets you keep a request-response architecture while building real-time apps and, as a result, drops the cost of adding real-time features to nearly zero.
If this sounds interesting, I recommend reading:
As for the original topic, I’d like to talk about the last piece here:
The #1 reason to have a fully real-time UI
We live in 2020, and what happened this year shows we are inevitably transitioning to mostly online collaboration and communication. And as I showed above:
- The lack of real-time is a real problem in web apps. "Non-real-time by default" plays a big role here, plus it's hard to add real-time features quickly.
- But simultaneously, real-time interactions are quite valuable.
- And the further we go, the more valuable they will be: more and more apps are getting "multiplayer" features because they're used by groups of people (teams, etc.) rather than individuals. The "multiplayer" aspect was ignored for years in the web world, and that's why the potential for improvement here is huge. E.g. the group-centric experience alone helped Google Docs to crush Microsoft Word.
So why is it so hard to add real-time features? It's hard because most developers are lazy — but not in the way you might think. Being lazy is actually a good thing; in fact, that's why many of us love to write code in the first place. That's also why we love hacks — high-ROI solutions to specific problems.
The trap is:
- “Non-real time” is the default
- At some point you decide to add your first real-time feature. And since it looks unique and pretty painful to add (you need to refactor a fair amount of code), you end up with a hacky way of adding it.
- Once you see the next real-time feature, you just repeat the same process.
That's why almost any large app implements N different ways of "pushing" updates. All of them are problem- or domain-specific, so adding the (N+1)-th real-time feature usually means the same amount of pain as adding the first one.
That's why every real-time feature is explicitly penalized. But more importantly, "real-time is costly" becomes an implicit constraint, and as a result, people tend to think less about anything real-time, even though pure logic says they should do exactly the opposite.
I'd argue the real cost we pay for sticking with the "non-real-time is the default" motto is the value of the opportunities we lose.
The photo above shows the Tesla Model 3 UI. A lot has been written about it, but I'd like to highlight a single feature: no matter how good or bad it is now, contrary to a typical car's UI, it's designed to evolve rather than to stay the same. Tesla made a huge upfront investment to open up this room for future improvements — a powerful onboard computer, fully electronic controls, 8 cameras, next-gen software, etc. — none of which is absolutely necessary for an electric car. The fact that these investments were made while Tesla was burning billions to scale its manufacturing proves this wasn't an experiment, but the plan.
Interestingly, this "designed to evolve" concept also helps the Model 3 retain almost 90% of its value over a 3-year period — that's unimaginable for any other car.
Most successful Apple products were also designed to evolve much faster than their competitors. This doesn't fully explain their success, of course. But no doubt non-stop evolution paired with excellent quality helped drive product growth by turning every customer into an evangelist.
So, what does "designed to evolve" mean?
- Substantial room for new features is created in advance
- There is a foundation allowing you to add new features much faster.
And speaking of web apps, a low-cost option to be fully real-time fits the above description quite well. This is the promise Stl.Fusion stands on:
Real-time everywhere at ~zero cost.
Is this a good motto for your next web app? Please share your thoughts.