Hacker News

Latest

Gas Town: From Clown Show to v1.0

2026-04-14 @ 19:18:53 · Points: 44 · Comments: 36

ClawRun – Deploy and manage AI agents in seconds

2026-04-14 @ 19:12:09 · Points: 22 · Comments: 3

California ghost-gun bill wants 3D printers to play cop, EFF says

2026-04-14 @ 19:08:32 · Points: 119 · Comments: 2

I wrote to Flock's privacy contact to opt out of their domestic spying program

2026-04-14 @ 17:47:00 · Points: 392 · Comments: 166

OpenSSL 4.0.0

2026-04-14 @ 17:45:34 · Points: 131 · Comments: 34

Show HN: Plain – The full-stack Python framework designed for humans and agents

2026-04-14 @ 17:43:17 · Points: 34 · Comments: 13

Turn your best AI prompts into one-click tools in Chrome

2026-04-14 @ 17:09:43 · Points: 55 · Comments: 27

Spain to expand internet blocks to tennis, golf, movies broadcasting times

2026-04-14 @ 16:59:09 · Points: 383 · Comments: 355

Claude Code Routines

2026-04-14 @ 16:54:33 · Points: 251 · Comments: 156

5NF and Database Design

2026-04-14 @ 16:22:49 · Points: 101 · Comments: 43

Show HN: Kelet – Root Cause Analysis agent for your LLM apps

2026-04-14 @ 16:16:51 · Points: 37 · Comments: 18

AI agents don't crash. They just quietly give wrong answers. You end up scrolling through traces one by one, trying to find a pattern across hundreds of sessions.

Kelet automates that investigation. Here's how it works:

1. You connect your traces and signals (user feedback, edits, clicks, sentiment, LLM-as-a-judge, etc.)
2. Kelet processes those signals and extracts facts about each session
3. It forms hypotheses about what went wrong in each case
4. It clusters similar hypotheses across sessions and investigates them together
5. It surfaces a root cause with a suggested fix you can review and apply

The key insight: individual session failures look random. But when you cluster the hypotheses, failure patterns emerge.
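
The clustering step can be sketched with a toy similarity measure: Jaccard word overlap standing in for whatever semantic similarity Kelet actually computes (the function names and threshold here are illustrative, not Kelet's API):

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets; a stand-in for a real
    embedding-based semantic similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_hypotheses(hypotheses: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-link clustering: each hypothesis joins the first
    cluster whose representative is similar enough, else starts a new one."""
    clusters: list[list[str]] = []
    for h in hypotheses:
        for c in clusters:
            if similarity(h, c[0]) >= threshold:
                c.append(h)
                break
        else:
            clusters.append([h])
    return clusters

sessions = [
    "tool call timed out fetching user profile",
    "tool call timed out fetching order history",
    "model hallucinated a product id",
]
print(cluster_hypotheses(sessions, threshold=0.3))
```

With real data the point is the same: two timeouts that look like unrelated one-offs end up in the same cluster, and the pattern becomes visible.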

The fastest way to integrate is through the Kelet Skill for coding agents — it scans your codebase, discovers where signals should be collected, and sets everything up for you. There are also Python and TypeScript SDKs if you prefer manual setup.

It’s currently free during beta. No credit card required. Docs: https://kelet.ai/docs/

I'd love feedback on the approach, especially from anyone running agents in prod. Does automating the manual error analysis sound right?

Show HN: A memory database that forgets, consolidates, and detects contradiction

2026-04-14 @ 15:41:01 · Points: 35 · Comments: 21

YantrikDB is a cognitive memory engine — embed it, run it as a server, or connect via MCP. It thinks about what it stores: consolidation collapses duplicate memories; contradiction detection flags incompatible facts; temporal decay with a configurable half-life lets unimportant memories fade the way human memory does.

Single Rust binary. HTTP + binary wire protocol. 2-voter + 1-witness HA cluster via Docker Compose or Kubernetes. Chaos-tested failover, runtime deadlock detection (parking_lot), per-tenant quotas, Prometheus metrics. Ran a 42-task hardening sprint last week — 1178 core tests, cargo-fuzz targets, CRDT property tests, 5 ops runbooks.

Live on a 3-node Proxmox homelab cluster with multiple tenants. Alpha — primary user is me, looking for the second one.

Show HN: LangAlpha – what if Claude Code was built for Wall Street?

2026-04-14 @ 14:48:46 · Points: 77 · Comments: 26

MCP tools don't really work for financial data at scale. One tool call for five years of daily prices dumps tens of thousands of tokens into the context window. And data vendors pack dozens of tools into a single MCP server; the schemas alone can eat 50k+ tokens before the agent does anything useful. So we auto-generate typed Python modules from the MCP schemas at workspace init and upload them into the sandbox. The agent just imports them like a normal library. Only a one-line summary per server stays in the prompt. We have around 80 tools across our servers, and the prompt cost is the same whether a server has 3 tools or 30. This part isn't finance-specific; it works with any MCP server.
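
The module-generation trick can be sketched roughly: take a tool's JSON schema and emit a typed Python stub that forwards to the server at runtime. The schema shape is simplified, and `_call_mcp` is a hypothetical transport helper, not LangAlpha's actual code:

```python
# Hypothetical MCP tool schema (shape simplified from the MCP spec).
TOOL_SCHEMA = {
    "name": "get_daily_prices",
    "description": "Daily OHLCV prices for a ticker.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string"},
            "years": {"type": "integer"},
        },
        "required": ["ticker"],
    },
}

PY_TYPES = {"string": "str", "integer": "int", "number": "float", "boolean": "bool"}

def generate_stub(schema: dict) -> str:
    """Emit a typed Python wrapper that forwards to the MCP server at
    runtime, so the import, not the schema, is what costs tokens."""
    props = schema["inputSchema"]["properties"]
    required = set(schema["inputSchema"].get("required", []))
    params = []
    for name, spec in props.items():
        ann = PY_TYPES.get(spec.get("type"), "object")
        params.append(f"{name}: {ann}" if name in required else f"{name}: {ann} = None")
    sig = ", ".join(params)
    return (
        f"def {schema['name']}({sig}):\n"
        f'    """{schema["description"]}"""\n'
        f"    return _call_mcp({schema['name']!r}, locals())\n"
    )

print(generate_stub(TOOL_SCHEMA))
```

The generated source gets written into the sandbox once at init; from then on the agent sees an ordinary function signature instead of a 50k-token schema dump.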

The other big thing was making research actually persist across sessions. Most agents treat a single deliverable (a PDF, a spreadsheet) as the end goal. In investing that's day one. You update the model when earnings drop, re-run comps when a competitor reports, keep layering new analysis on old. But try doing that across agent sessions: files don't carry over, and you re-paste context every time. So we built everything around workspaces. Each one maps to a persistent sandbox, one per research goal. The agent maintains its own memory file with findings and a file index that gets re-read before every LLM call. Come back a week later, start a new thread, and it picks up where it left off.
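
The memory-file pattern is straightforward to sketch; the paths and field names below are illustrative, not LangAlpha's actual format:

```python
import json
import tempfile
from pathlib import Path

WORKSPACE = Path(tempfile.gettempdir()) / "workspace-demo"  # stand-in for a persistent sandbox
MEMORY = WORKSPACE / "memory.json"

def save_memory(findings: list[str], file_index: list[str]) -> None:
    """Persist findings and a file index inside the workspace."""
    WORKSPACE.mkdir(parents=True, exist_ok=True)
    MEMORY.write_text(json.dumps({"findings": findings, "files": file_index}))

def build_context() -> str:
    """Re-read the memory file before every LLM call, so a brand-new
    thread starts with everything prior sessions learned."""
    if not MEMORY.exists():
        return "No prior research in this workspace."
    m = json.loads(MEMORY.read_text())
    return ("Prior findings:\n" + "\n".join(m["findings"])
            + "\nFiles on disk: " + ", ".join(m["files"]))

save_memory(["Q3 margins compressed 120bps"], ["comps.xlsx"])
print(build_context())
```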

We also wanted the agent to have real domain context the way Claude Code has codebase context. Portfolio, watchlist, risk tolerance, financial data sources, all injected into every call. Existing AI investing platforms have some of that but nothing close to what a proper agent harness can do. We wanted both and couldn't find it, so we built it and open-sourced the whole thing.

Rare concert recordings are landing on the Internet Archive

2026-04-14 @ 13:46:31 · Points: 424 · Comments: 124

Show HN: Kontext CLI – Credential broker for AI coding agents in Go

2026-04-14 @ 13:26:53 · Points: 56 · Comments: 24

The problem isn't just secret sprawl. It's that there's no lineage of access. You don't know which developer launched which agent, what it accessed, or whether it should have been allowed to. The moment you hand raw credentials to a process, you've lost the ability to enforce policy, audit access, or rotate without pain. The credential is the authorization, and that's fundamentally broken when autonomous agents are making hundreds of API calls per session.

Kontext takes a different approach. You declare what credentials a project needs in a .env.kontext file:

  GITHUB_TOKEN={{kontext:github}}
  STRIPE_KEY={{kontext:stripe}}
  LINEAR_TOKEN={{kontext:linear}}
Then run `kontext start --agent claude`. The CLI authenticates you via OIDC, and for each placeholder: if the service supports OAuth, it exchanges the placeholder for a short-lived access token via RFC 8693 token exchange; for static API keys, the backend injects the credential directly into the agent's runtime environment. Either way, secrets exist only in memory during the session — never written to disk on your machine. Every tool call is streamed for audit as the agent runs.
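
A rough sketch of the client-side half: parsing the template so that only service names, never secrets, exist on the local machine. The regex and function are illustrative, not Kontext's implementation (and Kontext itself is Go, not Python):

```python
import re

PLACEHOLDER = re.compile(r"\{\{kontext:([a-z0-9_-]+)\}\}")

def parse_env_template(text: str) -> dict[str, str]:
    """Map each env var name to the service its placeholder names.
    The real values are minted server-side per session."""
    out: dict[str, str] = {}
    for line in text.splitlines():
        if "=" not in line or line.lstrip().startswith("#"):
            continue
        key, value = line.split("=", 1)
        m = PLACEHOLDER.fullmatch(value.strip())
        if m:
            out[key.strip()] = m.group(1)
    return out

template = """GITHUB_TOKEN={{kontext:github}}
STRIPE_KEY={{kontext:stripe}}
"""
print(parse_env_template(template))  # {'GITHUB_TOKEN': 'github', 'STRIPE_KEY': 'stripe'}
```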

The closest analogy is a Security Token Service (STS): you authenticate once, and the backend mints short-lived, scoped credentials on-the-fly — except unlike a classical STS, we hold the upstream secrets, so nothing long-lived ever reaches the agent. The backend holds your OAuth refresh tokens and API keys; the CLI never sees them. It gets back short-lived access tokens scoped to the session.

What the CLI captures for every tool call: what the agent tried to do, what happened, whether it was allowed, and who did it — attributed to a user, session, and org.
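
An audit record along those lines might look something like this (field names are illustrative, not Kontext's actual schema):

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditEvent:
    """One record per tool call: what was attempted, the outcome,
    the policy decision, and who it is attributed to."""
    tool: str
    args_digest: str   # hash of arguments, so the log itself holds no secrets
    outcome: str
    allowed: bool
    user: str
    session: str
    org: str
    ts: float

evt = AuditEvent("github.create_pr", "sha256:ab12", "success",
                 True, "alice", "sess-42", "acme", time.time())
print(asdict(evt)["allowed"])  # True
```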

Install with one command: `brew install kontext-dev/tap/kontext`

The CLI is written in Go (~5ms hook overhead per tool call), uses ConnectRPC for backend communication, and stores auth in the system keyring. Works with Claude Code today, Codex support coming soon.

We're working on server-side policy enforcement next — the infrastructure for allow/deny decisions on every tool call is already wired, we just need to close the loop so tool calls can also be rejected.

We'd love feedback on the approach. Especially curious: how are teams handling credential management for AI agents today? Are you just pasting env vars into the agent chat, or have you found something better?

GitHub: https://github.com/kontext-dev/kontext-cli
Site: https://kontext.security

jj – the CLI for Jujutsu

2026-04-14 @ 10:33:39 · Points: 452 · Comments: 388

Backblaze has stopped backing up OneDrive and Dropbox folders and maybe others

2026-04-14 @ 08:30:27 · Points: 857 · Comments: 527

Introspective Diffusion Language Models

2026-04-14 @ 07:57:33 · Points: 208 · Comments: 41

A new spam policy for “back button hijacking”

2026-04-14 @ 03:06:27 · Points: 789 · Comments: 452

DaVinci Resolve – Photo

2026-04-14 @ 02:25:15 · Points: 1015 · Comments: 256

Lean proved this program correct; then I found a bug

2026-04-14 @ 00:25:08 · Points: 371 · Comments: 165

Let's Talk Space Toilets

2026-04-13 @ 22:41:19 · Points: 89 · Comments: 33

The Orange Pi 6 Plus

2026-04-11 @ 17:48:02 · Points: 49 · Comments: 16

The Mouse Programming Language on CP/M

2026-04-10 @ 23:18:37 · Points: 37 · Comments: 3

Carol's Causal Conundrum: a zine intro to causally ordered message delivery

2026-04-10 @ 18:43:00 · Points: 34 · Comments: 3

The acyclic e-graph: Cranelift's mid-end optimizer

2026-04-10 @ 12:37:27 · Points: 60 · Comments: 18

Nucleus Nouns

2026-04-10 @ 12:06:43 · Points: 51 · Comments: 12

guide.world: A compendium of travel guides

2026-04-09 @ 18:17:32 · Points: 40 · Comments: 5

YouTube now world's largest media company, topping Disney

2026-04-09 @ 15:50:54 · Points: 207 · Comments: 161

The M×N problem of tool calling and open-source models

2026-04-09 @ 15:07:13 · Points: 116 · Comments: 39
