Hacker News
Latest
California has more money than projected after admin miscalculated state budget
2026-04-21 @ 20:28:57Points: 56Comments: 23
Zindex – Diagram Infrastructure for Agents
2026-04-21 @ 20:27:37Points: 11Comments: 5
I don't want your PRs anymore
2026-04-21 @ 20:21:40Points: 118Comments: 77
Framework Laptop 13 Pro
2026-04-21 @ 18:00:34Points: 690Comments: 392
Cal.diy: open-source community edition of cal.com
2026-04-21 @ 17:58:21Points: 110Comments: 33
Meta capturing employee mouse movements, keystrokes for AI training data
2026-04-21 @ 17:40:39Points: 172Comments: 111
Britannica11.org – a structured edition of the 1911 Encyclopædia Britannica
2026-04-21 @ 17:33:50Points: 169Comments: 80
The Vercel breach: OAuth attack exposes risk in platform environment variables
2026-04-21 @ 17:14:35Points: 213Comments: 84
I built a tiny Unix‑like 'OS' with shell and filesystem for Arduino UNO (2KB RAM)
2026-04-21 @ 17:14:26Points: 53Comments: 12
Trellis AI (YC W24) is hiring engineers to build self-improving agents
2026-04-21 @ 17:01:15Points: 1
A Periodic Map of Cheese
2026-04-21 @ 16:31:21Points: 142Comments: 62
Kasane: New drop-in Kakoune front end with GPU rendering and WASM Plugins
2026-04-21 @ 15:53:22Points: 38Comments: 5
Fusion Power Plant Simulator
2026-04-21 @ 14:26:52Points: 129Comments: 70
Show HN: GoModel – an open-source AI gateway in Go
2026-04-21 @ 14:11:53Points: 147Comments: 56
I’ve been building GoModel since December with a couple of contributors. It's an open-source AI gateway that sits between your app and model providers like OpenAI, Anthropic or others.
I built it for my startup to solve a few problems:
- track AI usage and cost per client or team
- switch models without changing app code
- debug request flows more easily
- reduce AI spend with exact and semantic caching
How is it different?
- ~17MB Docker image (LiteLLM's is more than 44x bigger: "docker.litellm.ai/berriai/litellm:latest" is ~746 MB on amd64)
- request workflow is visible and easy to inspect
- config is environment-variable-first by default
I'm posting now partly because of the recent LiteLLM supply-chain attack. Their team handled it impressively well, but some people are looking at alternatives anyway, and GoModel is one. Website: https://gomodel.enterpilot.io
Any feedback is appreciated.
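The exact-caching idea mentioned in the post can be sketched in a few lines of Python. This illustrates the general technique (hash a canonicalized request, reuse the stored response), not GoModel's actual implementation; all names here are hypothetical:

```python
import hashlib
import json

class ExactCache:
    """Cache LLM responses keyed by a hash of the normalized request."""

    def __init__(self):
        self._store = {}

    def _key(self, request: dict) -> str:
        # Sort keys so semantically identical requests hash identically.
        canonical = json.dumps(request, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

    def get(self, request: dict):
        return self._store.get(self._key(request))

    def put(self, request: dict, response: str) -> None:
        self._store[self._key(request)] = response

cache = ExactCache()
req = {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]}
cache.put(req, "hello!")
# The same request with reordered keys still hits the cache.
same = {"messages": [{"role": "user", "content": "hi"}], "model": "gpt-4o"}
hit = cache.get(same)
```

Semantic caching replaces the exact hash with an embedding-similarity lookup, trading guaranteed correctness for a higher hit rate.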
Show HN: VidStudio, a browser based video editor that doesn't upload your files
2026-04-21 @ 11:58:16Points: 223Comments: 78
Some of the features: multi-track timeline; frame-accurate seek; MP4 export; audio, video, image, and text tracks; and a WebGL-backed canvas where available. It also works on mobile.
Under the hood, WebCodecs handles frame decode for timeline playback and scrubbing, which is what makes seeking responsive since decode runs on the hardware decoder when the browser supports it. FFmpeg compiled to WebAssembly handles final encode, format conversion, and anything WebCodecs does not cover. Rendering goes through Pixi.js on a WebGL canvas, with a software fallback when WebGL is not available. Projects live in IndexedDB and the heavy work runs in Web Workers so the UI stays responsive during exports.
Happy to answer technical questions about the tradeoffs involved in keeping the whole pipeline client-side. Any feedback welcome.
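Frame-accurate seek ultimately means mapping a timeline position to a discrete frame before asking the decoder for it. A minimal sketch of that mapping (in Python for illustration; the editor itself does this in JavaScript against WebCodecs timestamps) shows why exact rational arithmetic helps at non-integer rates like 29.97 fps:

```python
from fractions import Fraction

def seek_target(t_seconds: float, fps: Fraction) -> tuple[int, Fraction]:
    """Map a timeline position to the nearest earlier frame.

    Returns (frame_index, frame_timestamp). Rational arithmetic
    avoids the cumulative drift float timestamps develop at rates
    like 30000/1001 (29.97) fps.
    """
    t = Fraction(t_seconds).limit_denominator(1_000_000)
    frame_index = int(t * fps)          # truncate to the frame containing t
    return frame_index, Fraction(frame_index, 1) / fps

# 1.5 s into a 29.97 fps clip falls inside frame 44.
idx, ts = seek_target(1.5, Fraction(30000, 1001))
```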
Tim Cook's Impeccable Timing
2026-04-21 @ 11:30:03Points: 284Comments: 370
Laws of Software Engineering
2026-04-21 @ 11:04:56Points: 762Comments: 383
A type-safe, realtime collaborative Graph Database in a CRDT
2026-04-21 @ 10:33:24Points: 138Comments: 41
Colorado River disappeared from the record for 5M years: now we know where it was
2026-04-20 @ 17:05:29Points: 40Comments: 8
Show HN: Ctx – a /resume that works across Claude Code and Codex
2026-04-20 @ 16:35:05Points: 47Comments: 20
Here is a video of how it works: https://www.loom.com/share/5e558204885e4264a34d2cf6bd488117
I initially built ctx because I wanted to start a workstream in Claude and continue it from Codex. Since then, I’ve added a few quality-of-life improvements, including the ability to search across previous workstreams, manually delete parts of the context, and branch off existing workstreams. I’ve started using ctx instead of the native ‘/resume’ in Claude/Codex because I often have a lot of sessions going at once, and with the lists these apps currently give you, it’s not always obvious which one is the right one to pick back up. ctx gives me a much clearer way to organize and return to the sessions that actually matter.
It’s simple to install: after you clone the repo, run one line, ./setup.sh, which adds the skill to both Claude Code and Codex. After that, you should be able to use ctx directly in your agent as a skill with ‘/ctx [command]’ in Claude and ‘ctx [command]’ in Codex.
A few things it does:
- Resume an existing workstream from either tool
- Pull existing context into a new workstream
- Keep stable transcript binding, so once a workstream is linked to a Claude or Codex conversation, it keeps following that exact session instead of drifting to whichever transcript file is newest
- Search for relevant workstreams
- Branch from existing context to explore different tasks in parallel
It’s intentionally local-first: SQLite, no API keys, and no hosted backend. I built it mainly for myself, but thought it would be cool to share with the HN community.
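The "stable transcript binding" bullet above can be illustrated with a tiny SQLite sketch: record the concrete session id the first time a workstream is linked, and never overwrite it, so resumes follow that exact session rather than whichever transcript is newest. The schema and names here are illustrative, not ctx's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE workstreams (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        tool TEXT NOT NULL,       -- 'claude' or 'codex'
        session_id TEXT           -- bound once, then stable
    )
""")

def bind(conn, workstream_id: int, session_id: str) -> None:
    # Bind only if unbound; the WHERE clause makes later binds no-ops,
    # so the link never drifts to a newer transcript.
    conn.execute(
        "UPDATE workstreams SET session_id = ? "
        "WHERE id = ? AND session_id IS NULL",
        (session_id, workstream_id),
    )

conn.execute("INSERT INTO workstreams (name, tool) VALUES ('auth-refactor', 'claude')")
bind(conn, 1, "sess-abc123")
bind(conn, 1, "sess-newest")  # ignored: already bound
row = conn.execute("SELECT session_id FROM workstreams WHERE id = 1").fetchone()
```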
My practitioner view of program analysis
2026-04-20 @ 15:27:30Points: 18Comments: 0
Show HN: Mediator.ai – Using Nash bargaining and LLMs to systematize fairness
2026-04-20 @ 15:07:04Points: 141Comments: 72
Yet if John Nash solved negotiation in the 1950s, why does it seem like nobody uses it today? The issue was that Nash's solution required each party to the negotiation to provide a "utility function" that takes a set of deal terms and produces a utility number. But even experts have trouble producing such functions for non-trivial negotiations.
A few years passed and LLMs appeared, and about a year ago I realized that while LLMs aren’t good at directly producing utility estimates, they are good at doing comparisons, and this can be used to estimate utilities of draft agreements.
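Turning pairwise comparisons into utility estimates is a classical problem, and one standard tool is the Bradley–Terry model, fit below with a simple minorize-maximize iteration. This is a sketch of the general comparisons-to-utilities technique, not necessarily what Mediator.ai does:

```python
def bradley_terry(items, wins, iters=200):
    """Estimate utilities from pairwise comparison counts.

    wins[(a, b)] = number of times draft a was preferred to draft b.
    Returns scores normalized to sum to 1 (larger = preferred more).
    """
    scores = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            num = sum(w for (a, b), w in wins.items() if a == i)
            den = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0))
                / (scores[i] + scores[j])
                for j in items if j != i
            )
            new[i] = num / den if den else scores[i]
        total = sum(new.values())
        scores = {i: s / total for i, s in new.items()}
    return scores

# Draft A preferred to B 8/10 times; B preferred to C 7/10 times.
wins = {("A", "B"): 8, ("B", "A"): 2, ("B", "C"): 7, ("C", "B"): 3}
scores = bradley_terry(["A", "B", "C"], wins)
```

Note that A and C are never compared directly, yet the model still ranks all three, which is what makes comparison-based elicitation practical.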
This is the basis for Mediator.ai, which I soft-launched over the weekend. Be interviewed by an LLM to capture your preferences, then invite the other party or parties to do the same. These preferences are then used as the fitness function for a genetic algorithm that searches for an agreement all parties are likely to accept.
An article with more technical detail: https://mediator.ai/blog/ai-negotiation-nash-bargaining/
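For readers unfamiliar with the underlying math: the Nash bargaining solution selects the agreement maximizing the product of each party's utility gain over their disagreement point, prod_i (u_i(x) - d_i). A brute-force toy in Python (Mediator.ai searches with a genetic algorithm over richer preference models; this just enumerates splits of a dollar):

```python
def nash_best(candidates, utilities, disagreement):
    """Return the candidate maximizing the Nash product
    prod_i (u_i(x) - d_i), skipping any deal where some party
    would do better by walking away."""
    best, best_product = None, float("-inf")
    for x in candidates:
        gains = [u(x) - d for u, d in zip(utilities, disagreement)]
        if any(g <= 0 for g in gains):
            continue  # worse than no deal for someone
        product = 1.0
        for g in gains:
            product *= g
        if product > best_product:
            best, best_product = x, product
    return best

# Divide a dollar: x is my share, 1 - x is yours, nothing if no deal.
splits = [i / 100 for i in range(101)]
best = nash_best(splits, [lambda x: x, lambda x: 1 - x], [0.0, 0.0])
# With symmetric linear utilities the Nash solution is the even split.
```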