This sounded unhinged so I just had to check the article to confirm…
Yep, another AI startup. Ok, there’s a whole lot of wtf going on here.
AI codebots in the cloud doing your code for you, cool, I guess.
So you need to watch them? And presumably intervene if necessary? Ok.
So then:
They decided that they’d stream a video of the AI codebots doing their thing.
At 40 Mbps per stream.
For “enterprise use”.
Where presumably they want lots of users.
And then they didn’t know about locked-down enterprise internet and had to engineer a fallback to JPEG for when things aren’t great for them. Newsflash: with streaming video peaking at 40 Mbps per user, things will never be great for your product in the real world.
How, in any way, does this scale to anything approaching success? Their back end now has to have the compute power to encode and serve up gigabits of streaming video for anything more than ~50 concurrent users, let alone the compute usage of the actual “useful” bit, the AI codebots.
For, say, 5 users out of a site of 200, IT departments will now see hundreds of megabits of streaming traffic - and if they’re proactive, they will choke those endpoints to a crawl so that their pathetic uplink has a chance to serve the other 195 users.
All of this for a system that is fundamentally working on maybe 5 kB of visible Unicode text at any particular moment.
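Back-of-envelope, using the article’s 40 Mbps/stream figure and the ~5 kB text estimate above (just my napkin math in Python, not anything from the article):

    # Bandwidth math: 40 Mbps/stream video vs. the text actually changing.
    STREAM_MBPS = 40

    for users in (5, 50, 200):
        total_mbps = users * STREAM_MBPS
        print(f"{users:3d} concurrent viewers -> "
              f"{total_mbps:,} Mbps ({total_mbps / 1000:.1f} Gbps)")

    # Compare with shipping the text itself: ~5 kB/s of updates, generously.
    text_kbps = 5 * 8
    print(f"Streaming the text: ~{text_kbps} kbps/user, "
          f"~{STREAM_MBPS * 1000 // text_kbps:,}x less than one video stream")

Five viewers already saturate a 200 Mbps uplink; fifty need 2 Gbps of encode-and-serve capacity before the agents have done anything useful.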
You are right but this comment gives me iPod introduction feelings. That company will be huge in some years.
Quit reading at:
…AI platform where autonomous coding agents…
But your comment made me go back and look out of disbelief. How does a person get this far down a rabbit hole?
How does a person get this far down a rabbit hole?
I don’t know. Software engineering is tangential to my field but I have to wonder, is software efficiency even a consideration these days?
It seems that maybe a week of just solid thinking about what they have and what they need - WITHOUT touching a keyboard - could have put them in a better position. But move fast and break things doesn’t seem to accommodate that kind of approach.
What a glorious future AI is heralding.
How does a person get this far down a rabbit hole?
AI psychosis is a real thing, and it’s apparently a lot easier to fall into these rabbit holes than most people think (unless, I suspect, like me, you have a thick foundation of rock-solid cynicism that the AI simply will never penetrate). This is probably another interesting example of it.
unless, I suspect, like me, you have a thick foundation of rock-solid cynicism that the AI simply will never penetrate
Do we know each other or something :).
Honestly great comment, couldn’t agree more.
I’m commenting on this because I want to read/discuss it later. I can’t seem to save this post
But our WebSocket streaming layer sits on top of the Moonlight protocol, which is reverse-engineered from NVIDIA GameStream.
Mf? The GameStream protocol is designed to be ultra low latency because it’s made for game streaming. You do not need ultra-low-latency streaming to watch your agents typing, WTF?
They’re gonna be “working” on a desktop. Why the hell didn’t you look into VNC instead? RDP??
You know, protocols with built-in compression and other techniques to reduce their bandwidth usage? Hello?? The fuck are you doing??? Why aren’t they streaming the text changes and rendering them client side?
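For what it’s worth, “stream the text changes” could be as small as this, a rough sketch with the Python websockets package (read_editor_buffer and the message shape are invented for illustration, not from the article):

    # Sketch: push text updates over a WebSocket instead of video frames.
    # Assumes the `websockets` package; a real version would diff and send deltas.
    import asyncio
    import json
    import time

    import websockets

    def read_editor_buffer() -> str:
        # Hypothetical stand-in for grabbing the agent's visible text.
        return f"agent output as of {time.strftime('%H:%M:%S')}"

    async def stream_editor(websocket):
        last = ""
        while True:
            current = read_editor_buffer()
            if current != last:
                # A few hundred bytes per update, not megabits.
                await websocket.send(json.dumps({"type": "replace", "text": current}))
                last = current
            await asyncio.sleep(0.1)  # 10 updates/sec is plenty for watching typing

    async def main():
        async with websockets.serve(stream_editor, "localhost", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())

Even pushing the whole 5 kB buffer ten times a second is ~400 kbps, two orders of magnitude under a single 40 Mbps video stream.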
That makes too much sense and you can’t shoehorn AI for that sweet sweet VC money.
It’s still AI, it’s just more like a standard language server approach. Based on the article they don’t seem willing to learn how existing tools work (oof, reading documentation? Nothx).
Laundering VC money for this. 😂
You are making it so people need to live watch AIs coding? This is insanity. This bubble is gonna hurt.
hardware-accelerated, WebCodecs-powered, 60fps H.264 streaming pipeline over WebSockets and replaced it with grim | curl when the WiFi got a bit sketchy.
I think they took https://justuse.org/curl/ a bit too seriously.
We’re building Helix, an AI platform where autonomous coding agents work in cloud sandboxes. Users need to watch their AI assistants work. Think “screen share, but the thing being shared is a robot writing code.”
Oh, they are all about useless tech. Why would you need to watch an agent code at 1080p 60 fps using 40 Mbps?
Yup, crazy. I record all my coding screencasts at 15 fps and it looks fine while the video file is tiny.
Thanks, I was confused about why the Helix editor might need screen sharing. Haha.
I was thinking the mattress company. Totally different vibes if they are needing to send video…
Would have been more sensible to stream H.264 at a lower fps. Or maybe stream the text directly.
I don’t understand why they bother with the “modern” method if the fallback works so well and is much simpler and cheaper.
JPEG method tops out at 5-10 fps.
Modern method is better if network can keep up.
Don’t need high fps to watch an ai type.
Have you ever told an engineer not to build something overdesigned and fun to do?
When the unga bunga solution works better than the modern one
From what I read, the modern solution has smooth 60 fps, compared to 2-10 fps with the JPEG method. Granted, that probably also factors in low network speeds, but I’d imagine you may hit a framerate cap lower than 60 when just spamming JPEGs.
you don’t need 60 fps to read text? All you need is to stream the text directly?
They didn’t explicitly say, but it sounds like the JPEG solution can’t put out a substantial fps. If you start to do fancier stuff like sending partial screenshots or deltas only, then you get the same issues as H.264 (you miss a keyframe and things start to degrade). Also, if you try to put out 30 JPEGs per second you could start to get TCP queuing (i.e. you can’t see screenshot 31 until screenshot 30 is complete). UDP might have made this into a full replacement, but as they said, sometimes it’s blocked.
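The usual mitigation for the queuing problem (my sketch, not something the article describes): hold only the newest frame per client and let slow links skip stale ones, so the viewer sees the latest screenshot a bit late instead of falling ever further behind:

    # Sketch: "latest frame wins" so slow clients drop stale JPEGs instead of
    # queuing them (avoids screenshot 31 waiting on screenshot 30).
    import asyncio

    class LatestFrame:
        def __init__(self):
            self._frame = b""
            self._event = asyncio.Event()

        def publish(self, jpeg_bytes: bytes) -> None:
            self._frame = jpeg_bytes  # overwrite: stale frames are simply dropped
            self._event.set()

        async def next(self) -> bytes:
            await self._event.wait()
            self._event.clear()
            return self._frame

    async def demo():
        latest = LatestFrame()

        async def capture():  # fast producer: ~30 "JPEGs"/sec
            for i in range(60):
                latest.publish(f"frame {i}".encode())
                await asyncio.sleep(1 / 30)

        async def slow_client():  # slow consumer: 5 sends/sec
            for _ in range(10):
                frame = await latest.next()    # always the newest frame
                print("sent", frame.decode())  # real code: await websocket.send(frame)
                await asyncio.sleep(0.2)       # simulated slow link

        await asyncio.gather(capture(), slow_client())

    asyncio.run(demo())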
Yeah… I mean they should have just copied whatever video conferencing platforms do because they all work fine behind corporate proxies and they also don’t suffer from this “increasing delay” problem.
I haven’t actually looked into what they do, but presumably it’s something like WebRTC with a fallback to HLS, with closed-loop feedback about the delay.
Though in fairness it doesn’t sound like “watching an AI agent” is the most critical thing and mjpeg is surprisingly decent.
The video conferencing platform my work uses works well because it’s a large well-known platform and they punched holes for it into the firewall and the vpn. Not really something a service provider can just replicate.