Hacker News

Latest

The Dank Case for Scrolling Window Managers

2026-01-30 @ 04:17:38 · Points: 45 · Comments: 18

Moltbook

2026-01-30 @ 03:55:34 · Points: 164 · Comments: 78

Stargaze: SpaceX's Space Situational Awareness System

2026-01-30 @ 03:11:43 · Points: 48 · Comments: 7

Software is mostly all you need

2026-01-29 @ 23:06:34 · Points: 45 · Comments: 35

Grid: Free, local-first, browser-based 3D printing/CNC/laser slicer

2026-01-29 @ 22:38:57 · Points: 255 · Comments: 83

Cutting Up Curved Things

2026-01-29 @ 22:34:54 · Points: 42 · Comments: 7

Backseat Software

2026-01-29 @ 22:10:07 · Points: 71 · Comments: 9

The WiFi only works when it's raining (2024)

2026-01-29 @ 20:47:36 · Points: 152 · Comments: 52

Flameshot

2026-01-29 @ 19:30:35 · Points: 167 · Comments: 58

PlayStation 2 Recompilation Project Is Absolutely Incredible

2026-01-29 @ 18:55:38 · Points: 376 · Comments: 170

County pays $600k to pentesters it arrested for assessing courthouse security

2026-01-29 @ 18:48:09 · Points: 383 · Comments: 177

My Mom and Dr. DeepSeek (2025)

2026-01-29 @ 18:45:27 · Points: 166 · Comments: 91

Project Genie: Experimenting with infinite, interactive worlds

2026-01-29 @ 17:02:39 · Points: 546 · Comments: 258

Reflex (YC W23) Senior Software Engineer Infra

2026-01-29 @ 17:00:42 · Points: 1

Launch HN: AgentMail (YC S25) – An API that gives agents their own email inboxes

2026-01-29 @ 16:42:33 · Points: 141 · Comments: 145

We're building AgentMail (https://agentmail.to), the email inbox API for agents. We're not talking about AI for your email; this is email for your AI.

Email is an optimal interface for long-running agents. It’s multithreaded and asynchronous with full support for rich text and files. It’s a universal protocol with identity and authentication built in. Moreover, a lot of workflow critical context already lives in email.

We wanted to build email agents that you can forward your work to and get back a completed task. The agents could act entirely autonomously, since you wouldn't need to delegate your identity to them. If they got stuck, they could simply send you, or anyone else, an email.

Using Gmail, we kept getting stuck on the limitations of their API. No way to create inboxes programmatically. Rate and sending limits. OAuth for every single inbox. Keyword search that doesn't understand context. Per-seat pricing that doesn't work for agents.

So we built what we wished existed: an email provider for developers. APIs for creating inboxes and configuring domains. Email parsing and threading. Text extraction from attachments. Realtime webhooks and websockets. Semantic search across inboxes. Usage-based pricing that works for agents.
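As a rough sketch of what that developer-facing surface could look like (the base URL, endpoint path, and field names below are illustrative assumptions, not AgentMail's documented API):

```javascript
// Hypothetical sketch of programmatic inbox creation over a REST API.
// Endpoint, payload shape, and auth scheme are assumptions for illustration.
const BASE = 'https://api.agentmail.example'; // placeholder base URL

// Pure helper: build the JSON body for creating an agent inbox.
function inboxRequest(username, domain) {
  return { username, domain, address: `${username}@${domain}` };
}

// POST the request; relies on the global fetch available in Node 18+.
async function createInbox(apiKey, username, domain) {
  const res = await fetch(`${BASE}/v0/inboxes`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(inboxRequest(username, domain)),
  });
  if (!res.ok) throw new Error(`create inbox failed: ${res.status}`);
  return res.json();
}
```

Incoming mail would then arrive on the webhook/websocket side rather than by polling, which is what makes the model workable for long-running agents.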

Developers, startups, and enterprises are already deploying email agents with AgentMail. Agents that convert conversations and documents into structured data. Agents that source quotes, negotiate prices, and get the best deals. Agents that emulate internet users for training models on end-to-end tasks.

Here's a demo of Clawdbots communicating using AgentMail: https://youtu.be/Y0MfUWS3LKQ

You can get started with AgentMail for free at https://agentmail.to

Looking forward to hearing your thoughts and feedback.

Is the RAM shortage killing small VPS hosts?

2026-01-29 @ 15:42:57 · Points: 155 · Comments: 189

Deep dive into Turso, the “SQLite rewrite in Rust”

2026-01-29 @ 14:51:56 · Points: 150 · Comments: 119

Moltworker: a self-hosted personal AI agent, minus the minis

2026-01-29 @ 14:43:07 · Points: 197 · Comments: 61

Waymo robotaxi hits a child near an elementary school in Santa Monica

2026-01-29 @ 14:08:56 · Points: 416 · Comments: 662

Claude Code daily benchmarks for degradation tracking

2026-01-29 @ 13:59:07 · Points: 638 · Comments: 303

A lot of population numbers are fake

2026-01-29 @ 13:36:54 · Points: 321 · Comments: 266

AGENTS.md outperforms skills in our agent evals

2026-01-29 @ 13:08:11 · Points: 285 · Comments: 122

The paper model houses of Peter Fritz (2013)

2026-01-27 @ 22:24:33 · Points: 9 · Comments: 0

CISA’s acting head uploaded sensitive files into public version of ChatGPT

2026-01-27 @ 21:02:36 · Points: 122 · Comments: 210

The most dangerous code: Validating SSL certs in non-browser software (2012) [pdf]

2026-01-27 @ 18:45:56 · Points: 8 · Comments: 2

Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser)

2026-01-27 @ 18:33:05 · Points: 12 · Comments: 2

Why: I originally started building a new game engine in WebGPU, and I loved the iteration loop of writing TypeScript & instantly seeing the changes in the browser with hot reloading. After getting something working and shipping a demo, I realized that shipping a whole browser doesn't really work if I also want the same codebase to work on mobile.

Sure, I could use a webview, but that's not always a good or consistent experience for users - there are nuances with Safari on iOS supporting WebGPU, but not the same features that Chrome does on desktop. What I really wanted was a WebGPU runtime that is consistent & works on any platform.

I was inspired by Deno's --unstable-webgpu flag, but I realized that Deno probably wouldn't be a good fit long term because it doesn't support iOS or Android & doesn't bundle a window / event system (they have "bring your own window", but that means writing a lot of custom code for events, dealing with windowing, not to mention more specific things like implementing a WebAudio shim, etc.). So that got me down the path of building a native runtime specifically for games & that's Mystral Native.

So now with Mystral Native, I can have the same developer experience (write JS, use shaders in WGSL, call requestAnimationFrame) but get a real native binary I can ship to players on any platform without requiring a webview or a browser. No 200MB Chromium runtime, no CEF overhead, just the game code and a ~25MB runtime.

What it does:

- Full WebGPU via Dawn (Chrome's implementation) or wgpu-native (Rust)
- Native window & events via SDL3
- Canvas 2D support (Skia), Web Audio (SDL3), fetch (file/http/https)
- V8 for JS (same engine as Chrome/Node), also supports QuickJS and JSC
- ES modules, TypeScript via SWC
- Compile to single binary (think "pkg"): `mystral compile game.js --include assets -o my-game`
- macOS .app bundles with code signing, Linux/Windows standalone executables
- Embedding API for iOS and Android (JSC/QuickJS + wgpu-native)
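To make the browser-parity idea concrete, here's a minimal sketch of what a `game.js` for a runtime like this might look like, assuming the standard web APIs (`navigator.gpu`, `requestAnimationFrame`) the post describes; the render-pass setup is elided, and `advanceAngle` is just an illustrative frame-timing helper, not part of any real API:

```javascript
// Minimal game.js sketch written against browser-parity APIs.
// WebGPU pipeline/encoder setup is elided for brevity.

// Pure helper: advance a rotation angle by dtMs milliseconds at degPerSec.
function advanceAngle(angle, dtMs, degPerSec = 90) {
  const TAU = 2 * Math.PI;
  return (angle + (degPerSec * Math.PI / 180) * (dtMs / 1000)) % TAU;
}

async function main() {
  // Same feature detection you'd write for a browser build.
  if (!globalThis.navigator?.gpu) return;
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  let angle = 0;
  let last = performance.now();
  function frame(now) {
    angle = advanceAngle(angle, now - last);
    last = now;
    // ...encode a render pass with `device` using `angle` here, then loop:
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

main();
```

The appeal of the approach is that this exact file can run in a browser during development and, per the compile step above, ship as a standalone native binary.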

It's early alpha: the core rendering path works well, and I've tested on Mac, Linux (Ubuntu 24.04), and Windows 11, plus some custom builds for iOS & Android to validate that they can work, but there's plenty to improve. Would love to get some feedback and see where it can go!

MIT licensed.

Repo: https://github.com/mystralengine/mystralnative

Docs: https://mystralengine.github.io/mystralnative/

Nannou – A creative coding framework for Rust

2026-01-27 @ 18:27:01 · Points: 13 · Comments: 0

Skapa, a parametric 3D printing app like an IKEA manual (2025)

2026-01-26 @ 20:45:42 · Points: 14 · Comments: 3

The Home Computer Hybrids: Atari, TI, and the FCC

2026-01-26 @ 19:35:30 · Points: 16 · Comments: 0

Show HN: Ourguide – OS wide task guidance system that shows you where to click

2026-01-26 @ 18:19:45 · Points: 46 · Comments: 22

I started building this because whenever I didn't know how to do something on my computer, I found myself constantly tabbing between chatbots and the app, pasting screenshots, and asking "what do I do next?" Ourguide solves this with two modes. In Guide mode, the app overlays your screen and highlights the specific element to click next, eliminating the need to leave your current window. There is also Ask mode, a vision-integrated chat that captures your screen context (which you can toggle on and off anytime), so you can ask, "How do I fix this error?" without having to explain what "this" is.

It’s an Electron app that works OS-wide, is vision-based, and isn't restricted to the browser.

Figuring out how to show the user where to click was the hardest part of the process. I originally trained a computer vision model with 2300 screenshots to identify and segment all UI elements on a screen and used a VLM to find the correct icon to highlight. While this worked extremely well—better than SOTA grounding models like UI Tars—the latency was just too high. I'll be making that CV+VLM pipeline OSS soon, but for now, I’ve resorted to a simpler implementation that achieves <1s latency.

You may ask: if I can show you where to click, why can't I just click too? While trying to build computer-use agents during my job in Palo Alto, I hit the core limitation of today's computer-use models: benchmark scores still hover in the mid-50% range (OSWorld). VLMs often know what to do but not what it looks like; without reliable visual grounding, agents misclick and stall. So, I built computer use—without the "use." It provides the visual grounding of an agent but keeps the human in the loop for the actual execution to prevent misclicks.

I personally use it for the AWS Console's "treasure hunt" UI, like creating a public S3 bucket with specific CORS rules. It’s also been surprisingly helpful for non-technical tasks, like navigating obscure settings in Gradescope or Spotify. Ourguide really works for any task when you’re stuck or don't know what to do.

You can download and test Ourguide here: https://ourguide.ai/downloads

The project is still very early, and I’d love your feedback on where it fails, where you think it worked well, and which specific niches you think Ourguide would be most helpful for.
