Hacker News
Latest
Rendezvous with Rama
2026-03-09 @ 21:34:45 · Points: 64 · Comments: 63
So you want to write an "app" (2025)
2026-03-09 @ 20:50:59 · Points: 18 · Comments: 6
Oracle is building yesterday's data centers with tomorrow's debt
2026-03-09 @ 20:36:43 · Points: 111 · Comments: 36
Thomas Selfridge: The First Airplane Fatality
2026-03-09 @ 20:32:45 · Points: 28 · Comments: 3
Things I've Done with AI
2026-03-09 @ 19:24:20 · Points: 64 · Comments: 80
The Most Beautiful Freezer in the World: Notes on Baking at the South Pole
2026-03-09 @ 19:12:14 · Points: 19 · Comments: 4
Bluesky CEO Jay Graber is stepping down
2026-03-09 @ 19:09:03 · Points: 234 · Comments: 209
Durdraw – ANSI art editor for Unix-like systems
2026-03-09 @ 18:59:32 · Points: 25 · Comments: 12
Workers report watching Ray-Ban Meta-shot footage of people using the bathroom
2026-03-09 @ 18:51:34 · Points: 149 · Comments: 58
Show HN: The Mog Programming Language
2026-03-09 @ 17:57:00 · Points: 96 · Comments: 45
- Mog is a statically typed, compiled, embedded language (think statically typed Lua) designed to be written by LLMs -- the full spec fits in 3,200 tokens.
- An AI agent writes a Mog program, compiles it, and dynamically loads it as a plugin, script, or hook.
- The host controls exactly which functions a Mog program can call (capability-based permissions), so permissions propagate from agent to agent-written code.
- Compiled to native code for low-latency plugin execution -- no interpreter overhead, no JIT, no process startup cost.
- The compiler is written in safe Rust so the entire toolchain can be audited for security. Even without a full security audit, Mog is already useful for agents extending themselves with their own code.
- MIT licensed, contributions welcome.
Motivations for Mog:
1. Syntax Only an AI Could Love: Mog is designed for AIs to write, so the spec fits easily in context (~3,200 tokens), and it's intended to minimize foot-guns to lower the error rate when generating Mog code. This is why Mog has no operator precedence: expressions that mix operators must be parenthesized explicitly, e.g. (a + b) * c. It's also why there's no implicit type coercion, which I've found over the decades to be an annoying source of runtime bugs. Mog also has only limited support for generics, and there's absolutely no support for metaprogramming, macros, or syntactic abstraction.
For human programmers, these restrictions would be onerous. But LLMs don't care, and the less expressivity you have to trust them with, the better.
2. Capabilities-Based Permissions: There's a paradox in existing security models for AI agents. If you give an agent like OpenClaw unfettered access to your data, that's insecure and you'll get pwned. But if you sandbox it, it can't do most of what you want. Worse, if you run scripts the agent wrote, those scripts don't inherit the permissions that constrain the agent's own bash tool calls, which leads to pwnage and other chaos. And that's before you even consider running one of the many OpenClaw plugins that ship with malware.
Mog tries to solve this by taking inspiration from embedded languages. It compiles all the way to machine code, ahead of time, but the compiler doesn't output any dangerous code (at least it shouldn't -- Mog is quite new, so that could still be buggy). This allows a host program, such as an AI agent, to generate Mog source code, compile it, and load it into itself using dlopen(), while maintaining security guarantees.
The main trick is that a Mog program on its own can't do much. It has no direct access to syscalls, libc, or memory outside the arena the host gives it. It can basically call functions, do heap allocations (only within that arena), and return something. If the host wants the Mog program to be able to do I/O, it has to supply the functions that the Mog program will call. A core invariant is that a Mog program should never be able to crash the host program, corrupt its state, or consume more resources than the host allows.
This allows the host to inspect the arguments to any potentially dangerous operation that the Mog program attempts, since it's code that runs in the host. For example, a host agent could give a Mog program a function to run a bash command, then enforce its own session-level permissions on that command, even though the command was dynamically generated by a plugin that was written without prior knowledge of those permission settings.
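Here's a minimal sketch of that pattern in plain TypeScript (the names below are illustrative -- this isn't Mog's actual host API, which isn't shown in this post):

    // The guest can only touch the outside world through functions the host
    // hands it; a plugin given no runBash capability cannot shell out at all.
    type Capabilities = { runBash: (cmd: string) => Promise<string> };

    function makeCapabilities(allowed: RegExp[]): Capabilities {
      return {
        runBash: async (cmd: string) => {
          // The host inspects the argument before anything dangerous runs,
          // enforcing session-level permissions the plugin knows nothing about.
          if (!allowed.some((re) => re.test(cmd))) {
            throw new Error(`command blocked by host policy: ${cmd}`);
          }
          return `ran: ${cmd}`; // real execution would happen here
        },
      };
    }

    // Passed to the loaded Mog plugin at load time.
    const caps = makeCapabilities([/^git status$/, /^ls\b/]);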
(There are a couple other tricks that PL people might find interesting. One is that the host can limit the execution time of the guest program. It does this using cooperative interrupt polling: the compiler inserts runtime checks that test whether the host has asked the guest to stop. This costs roughly 10% in performance on extremely tight loops, which are the worst case. It could almost certainly be optimized.)
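In TypeScript terms, the inserted check behaves roughly like this (illustrative only -- Mog's compiler emits the equivalent test in native code at loop back-edges):

    // Shared flag the host can set from another thread/worker.
    const stopFlag = new Int32Array(new SharedArrayBuffer(4));

    function guestHotLoop(n: number): number {
      let acc = 0;
      for (let i = 0; i < n; i++) {
        acc += i; // the guest's real work
        // Compiler-inserted poll: one cheap load per iteration is the ~10%
        // worst-case overhead mentioned above for extremely tight loops.
        if (Atomics.load(stopFlag, 0) !== 0) {
          throw new Error("guest interrupted by host");
        }
      }
      return acc;
    }

    // Host side, from another thread (a single JS thread can't interrupt
    // itself): Atomics.store(stopFlag, 0, 1);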
3. Self Modification Without Restart: When I try to modify my OpenClaw from my phone, I have to restart the whole agent. Mog fixes this: an agent can compile and run new plugins without interrupting a session, which makes it dynamically responsive to user feedback (e.g., you tell it to always ask you before deleting a file and without any interruption it compiles and loads the code to... actually do that).
Async support is built into the language by adapting LLVM's coroutine lowering to our Rust port of the QBE compiler, which is what Mog uses for compilation. The Mog host library can be slotted into an async event loop (tested with Bun), so Mog async calls get scheduled seamlessly by the agent's event loop. Another trick is that the Mog program uses a stack inside the memory arena that the host provides, rather than the system stack. A guard page sits between that stack and the heap, so a stack overflow faults safely instead of corrupting memory, with no runtime checks needed.
Lots of work still needs to be done to make Mog a "batteries-included" experience like Python. Most of that work involves fleshing out a standard library to include things like JSON, CSV, SQLite, and HTTP. One high-impact addition would be an `llm` library that lets the guest make LLM calls through the agent; it should support multiple models and token budgeting, so the host can prevent the plugin from burning too many tokens.
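The token-budgeting half of that could look something like this on the host side (a hypothetical sketch -- no such `llm` library exists yet, and `callModel` is a stand-in for whatever the agent actually uses):

    // Because the budget lives in host code, the guest can't bypass or reset it.
    function makeLlmCapability(
      budget: number,
      callModel: (prompt: string) => Promise<{ text: string; tokensUsed: number }>,
    ) {
      let remaining = budget;
      return async (prompt: string): Promise<string> => {
        if (remaining <= 0) throw new Error("token budget exhausted");
        const { text, tokensUsed } = await callModel(prompt);
        remaining -= tokensUsed;
        return text;
      };
    }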
I suspect we'll also want to do more work to make the program lifecycle operations more ergonomic. And finally, there should be a more fully featured library for integrating a Mog host into an AI agent like OpenClaw or OpenAI's Codex CLI.
Fixfest is a global gathering of repairers, tinkerers, and activists
2026-03-09 @ 17:34:27 · Points: 127 · Comments: 13
Florida judge rules red light camera tickets are unconstitutional
2026-03-09 @ 17:20:29 · Points: 241 · Comments: 373
Building a Procedural Hex Map with Wave Function Collapse
2026-03-09 @ 17:02:22 · Points: 320 · Comments: 44
DARPA's new X-76
2026-03-09 @ 16:54:31 · Points: 119 · Comments: 125
Launch HN: Terminal Use (YC W26) – Vercel for filesystem-based agents
2026-03-09 @ 16:53:52 · Points: 68 · Comments: 51
Here's a demo: https://www.youtube.com/watch?v=ttMl96l9xPA.
Our biggest pain point with hosting agents was that you'd need to stitch together multiple pieces: packaging your agent, running it in a sandbox, streaming messages back to users, persisting state across turns, and moving files to and from the agent workspace.
We wanted something like Cog from Replicate, but for agents: a simple way to package agent code from a repo and serve it behind a clean API/SDK. We wanted to provide a protocol to communicate with your agent, but not constrain the agent logic or the harness itself.
On Terminal Use, you package your agent from a repo with a config.yaml and Dockerfile, then deploy it with our CLI. You define the logic of three endpoints (on_create, on_event, and on_cancel) that track the lifecycle of a task (conversation). The config.yaml contains details about resources, build context, etc.
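In rough TypeScript shape, the handlers look like this (the names come from the post; the signatures are assumptions, not the actual Terminal Use SDK):

    type Task = { id: string };
    type AgentEvent = { taskId: string; message: string };

    // Called once when a new task (conversation) is created.
    export async function on_create(task: Task): Promise<void> {
      // set up the workspace / agent state
    }

    // Called for each user turn: run the harness, stream messages back.
    export async function on_event(event: AgentEvent): Promise<void> {}

    // Called when the user cancels: stop in-flight work, clean up.
    export async function on_cancel(task: Task): Promise<void> {}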
Out of the box, we support Claude Agent SDK and Codex SDK agents. By "support", we mean we have an adapter that converts from the SDK message types to ours. If you'd like to use your own custom harness, you can convert and send messages with our types (Vercel AI SDK v6 compatible). For the frontend, we have a Vercel AI SDK provider that lets you use your agent with Vercel's AI SDK, and a messages module so that you don't have to manage streaming and persistence yourself.
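An adapter like that is essentially a mapping between message schemas; something like this (the types below are simplified stand-ins, not the real Claude Agent SDK or Vercel AI SDK v6 types):

    // Simplified source and target shapes, for illustration only.
    type SdkMessage = { type: "text"; text: string };
    type WireMessage = { role: "assistant"; parts: { type: "text"; text: string }[] };

    function toWire(msg: SdkMessage): WireMessage {
      return { role: "assistant", parts: [{ type: "text", text: msg.text }] };
    }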
The part we think is most different is storage.
We treat filesystems as first-class primitives, separate from the lifecycle of a task. That means you can persist a workspace across turns, share it between different agents, or upload/download files independent of the sandbox being active. Further, our filesystem SDK provides presigned URLs, so your users can upload and download files directly without proxying file transfers through your backend.
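The direct-upload flow those presigned URLs enable looks roughly like this on the client (the presigned URL itself would come from the filesystem SDK, whose real API isn't shown here):

    // The browser PUTs straight to storage -- no proxying through your backend.
    async function uploadDirect(presignedUrl: string, file: Blob): Promise<void> {
      const res = await fetch(presignedUrl, { method: "PUT", body: file });
      if (!res.ok) throw new Error(`upload failed: ${res.status}`);
    }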
Because agent logic and filesystem storage are decoupled, it's easy to iterate on your agents without worrying about the files in the sandbox: if you ship a bug, you can deploy a fix and auto-migrate all your tasks to the new deployment. If you make a breaking change, you can specify that existing tasks stay on the existing version and only new tasks use the new one.
We're also adding support for multi-filesystem mounts with configurable mount paths and read/write modes, so storage stays durable and reusable while mount layout stays task-specific.
On the deployment side, we've been influenced by modern developer platforms: simple CLI deployments, preview/production environments, git-based environment targeting, logs, and rollback. All the configuration you need to build, deploy, and manage resources for your agent lives in the config.yaml file, which makes it easy to build and deploy your agent in CI/CD pipelines.
Finally, we've explicitly designed our platform so that your CLI coding agents can help you build, test, and iterate on your agents. With our CLI, your coding agents can send messages to your deployed agents and download filesystem contents to help you understand your agent's output. A common way we test: we write markdown files with user scenarios we'd like to cover, then ask Claude Code to impersonate our users and chat with our deployed agent.
What we do not have yet: full parity with general-purpose sandbox providers. For example, preview URLs and lower-level sandbox.exec(...) style APIs are still on the roadmap.
We're excited to hear any thoughts, insights, questions, and concerns in the comments below!
JSLinux Now Supports x86_64
2026-03-09 @ 16:43:39 · Points: 190 · Comments: 46
Jolla on track to ship new phone with Sailfish OS, user-replaceable battery
2026-03-09 @ 16:41:54 · Points: 171 · Comments: 112
An opinionated take on how to do important research that matters
2026-03-09 @ 16:24:22 · Points: 58 · Comments: 9
Restoring a Sun SPARCstation IPX part 1: PSU and NVRAM (2020)
2026-03-09 @ 15:23:08 · Points: 83 · Comments: 46
Is legal the same as legitimate: AI reimplementation and the erosion of copyleft
2026-03-09 @ 15:12:53 · Points: 256 · Comments: 275
Show HN: DenchClaw – Local CRM on Top of OpenClaw
2026-03-09 @ 14:55:42 · Points: 67 · Comments: 70
Building consumer / power-user software always gave me more joy than FDEing into an enterprise. Manually adding AI tools to a cloud harness for every small new thing didn't bring me joy, at least not as much as completely local software that is open source and has all the powers of OpenClaw (I can now talk to my CRM on Telegram!).
A week ago, we launched Ironclaw, an Open Source OpenClaw CRM Framework (https://x.com/garrytan/status/2023518514120937672?s=20) but people confused us with NearAI’s Ironclaw, so we changed our name to DenchClaw (https://denchclaw.com).
OpenClaw today feels like early React: the primitive is incredibly powerful, but the patterns are still forming, and everyone is piecing together their own way to actually use it. What made React explode was the emergence of frameworks like Gatsby and Next.js that turned raw capability into something opinionated, repeatable, and easy to adopt.
That is how we think about DenchClaw. We are trying to make it one of the clearest, most practical, and most complete ways to use OpenClaw in the real world.
Demo: https://www.youtube.com/watch?v=pfACTbc3Bh4#t=43
npx denchclaw
I use DenchClaw daily for almost everything I do. It also works as a coding agent like Cursor -- DenchClaw built DenchClaw. I'm addicted now that I can ask it, "hey, in the companies table only show me the ones with more than 5 employees", and it updates the table live rather than my having to add a filter manually. On Dench, everything (table filters, views, column toggles, calendar/Gantt views, etc.) sits in the file system, so OpenClaw can work with it directly using Dench's CRM skill.
The CRM is built on top of DuckDB, the smallest, most performant, and most feature-rich database we could find. Thank you, DuckDB team!
It creates a new OpenClaw profile called "dench" and opens a new OpenClaw Gateway, which means you can run all your usual openclaw commands by prefixing them with `openclaw --profile dench`. The gateway starts on a port in the 19001 range, and the DenchClaw frontend is served at localhost:3100. Once you open it in Safari, just add it to your Dock to use it as a PWA.
Think of it as Cursor for your Mac (it also works on Linux and Windows), built on OpenClaw. DenchClaw has a file tree view, so you can use it as an elevated Finder to do anything on your Mac. I use it to create slides and do LinkedIn outreach using MY browser.
DenchClaw finds your Chrome profile and copies it fully into its own, so you won't have to log in to all your websites again. DenchClaw sees what you see, does what you do. It's an everything app that sits locally on your Mac.
Just ask it "hey, import my Notion" or "hey, import everything from my HubSpot", and it will literally go into your browser, export all objects and documents, and put them in its own workspace for you to use.
We'd love you all to break it, stress-test its CRM capabilities and how it streams subagents for lead enrichment, and hook it into your Apollo, Gmail, Notion, and everything else there is. Looking forward to comments/feedback!