Hacker News

Latest

Universal vaccine against respiratory infections and allergens

2026-03-10 @ 22:33:48 · Points: 43 · Comments: 16

U+237C ⍼ Is Azimuth

2026-03-10 @ 22:33:45 · Points: 64 · Comments: 5

Cloudflare Crawl Endpoint

2026-03-10 @ 22:27:15 · Points: 48 · Comments: 14

RISC-V Is Sloooow

2026-03-10 @ 20:11:54 · Points: 142 · Comments: 123

Tell HN: Apple development certificate server seems down?

2026-03-10 @ 19:56:52 · Points: 44 · Comments: 17

Nothing is flagged at https://developer.apple.com/system-status/, but I haven't been able to install apps for development on my own devices since 11 AM PDT.

Other people on Reddit seem to be hitting this too [0]. Does anyone know anything about it?

[0]: https://www.reddit.com/r/iOSProgramming/comments/1rq4uxl

Edit: Now getting intermittent 502s from https://ppq.apple.com/. Something is definitely going on.

HyperCard discovery: Neuromancer, Count Zero, Mona Lisa Overdrive (2022)

2026-03-10 @ 19:17:26 · Points: 89 · Comments: 23

Agents that run while I sleep

2026-03-10 @ 19:09:46 · Points: 181 · Comments: 129

FFmpeg-over-IP – Connect to remote FFmpeg servers

2026-03-10 @ 18:26:39 · Points: 95 · Comments: 39

Billion-Parameter Theories

2026-03-10 @ 17:49:53 · Points: 85 · Comments: 59

Launch HN: RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon

2026-03-10 @ 17:14:52 · Points: 172 · Comments: 77

Also, we've open-sourced RCLI, the fastest end-to-end voice AI pipeline on Apple Silicon. Mic to spoken response, entirely on-device. No cloud, no API keys.

To get started:

  brew tap RunanywhereAI/rcli https://github.com/RunanywhereAI/RCLI.git
  brew install rcli
  rcli setup   # downloads ~1 GB of models
  rcli         # interactive mode with push-to-talk
Or:

  curl -fsSL https://raw.githubusercontent.com/RunanywhereAI/RCLI/main/install.sh | bash
The numbers (M4 Max, 64 GB, reproducible via `rcli bench`):

LLM decode – 1.67x faster than llama.cpp, 1.19x faster than Apple MLX (same model files):

- Qwen3-0.6B: 658 tok/s (vs mlx-lm 552, llama.cpp 295)
- Qwen3-4B: 186 tok/s (vs mlx-lm 170, llama.cpp 87)
- LFM2.5-1.2B: 570 tok/s (vs mlx-lm 509, llama.cpp 372)
- Time-to-first-token: 6.6 ms

STT – 70 seconds of audio transcribed in *101 ms*. That's 714x real-time. 4.6x faster than mlx-whisper.

TTS – 178 ms synthesis. 2.8x faster than mlx-audio and sherpa-onnx.

We built this because demoing on-device AI is easy but shipping it is brutal. Voice is the hardest test: you're chaining STT, LLM, and TTS sequentially, and if any stage is slow, the user feels it. Most teams fall back to cloud APIs not because local models are bad, but because local inference infrastructure is.

The thing that's hard to solve is latency compounding. In a voice pipeline, you're stacking three models in sequence. If each adds 200ms, you're at 600ms before the user hears a word, and that feels broken. You can't optimize one stage and call it done. Every stage needs to be fast, on one device, with no network round-trip to hide behind.
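
Plugging in the stage latencies quoted elsewhere in this post (101 ms STT, 6.6 ms time-to-first-token, 178 ms TTS), the compounding arithmetic looks like this (a toy sum, just to make the point concrete):

```python
# Back-of-the-envelope: time before the user hears the first audio is the
# sum of the sequential stage latencies. Numbers taken from this post.
stages_ms = {
    "stt": 101.0,            # 70 s of audio transcribed in 101 ms
    "llm_first_token": 6.6,  # time-to-first-token
    "tts": 178.0,            # synthesis latency
}

total_ms = sum(stages_ms.values())
print(f"time to first audio: {total_ms:.1f} ms")  # ~285.6 ms

# The hypothetical pipeline from the paragraph above, 200 ms per stage:
slow_ms = 3 * 200.0
print(f"naive pipeline: {slow_ms:.0f} ms")  # 600 ms
```

Because the stages run in sequence, shaving any one of them only helps up to its share of the total; all three have to be fast.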

We went straight to Metal. Custom GPU compute shaders, all memory pre-allocated at init (zero allocations during inference), and one unified engine for all three modalities instead of stitching separate runtimes together.
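
The "all memory pre-allocated at init" pattern can be sketched like this (plain Python stand-in, illustrative only, not MetalRT's actual code):

```python
# Sketch of allocate-everything-at-init: every scratch buffer the engine
# will ever need is created once, so the hot inference loop performs zero
# allocations. Buffer names and sizes here are made up for illustration.

class ScratchPool:
    def __init__(self, sizes):
        # One-time allocation, before inference starts.
        self._bufs = {name: bytearray(n) for name, n in sizes.items()}

    def get(self, name):
        # Reuse the same buffer on every step; never allocate mid-inference.
        return self._bufs[name]

pool = ScratchPool({"attention": 4096, "activations": 8192})
buf = pool.get("attention")  # same object on every call
```

On a GPU the same idea means sizing all Metal buffers up front so the per-token path never touches the allocator.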

MetalRT is the first engine to handle all three modalities natively on Apple Silicon. Full methodology:

LLM benchmarks: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...

Speech benchmarks: https://www.runanywhere.ai/blog/metalrt-speech-fastest-stt-t...

How: Most inference engines add layers between you and the GPU: graph schedulers, runtime dispatchers, memory managers. MetalRT skips all of it. Custom Metal compute shaders for quantized matmul, attention, and activation - compiled ahead of time, dispatched directly.
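
For readers unfamiliar with quantized matmul, here is the arithmetic such a kernel performs, as a pure Python toy (not MetalRT's shader code; scales and weights are made-up examples):

```python
# Toy symmetric int8 quantization plus a dequantizing mat-vec: the same
# arithmetic a quantized-matmul GPU kernel runs, minus the parallelism.

def quantize_row(row, scale):
    """Map floats to int8 using a per-row scale (symmetric quantization)."""
    return [max(-128, min(127, round(w / scale))) for w in row]

def quantized_matvec(q_rows, scales, x):
    """y[i] = scales[i] * sum_j q[i][j] * x[j], dequantizing on the fly."""
    return [s * sum(q * xj for q, xj in zip(row, x))
            for row, s in zip(q_rows, scales)]

# Example: a 2x3 weight matrix with per-row scales.
W = [[0.5, -0.25, 0.75], [1.0, 0.0, -0.5]]
scales = [0.01, 0.01]
Q = [quantize_row(row, s) for row, s in zip(W, scales)]
y = quantized_matvec(Q, scales, [1.0, 2.0, 3.0])  # ~[2.25, -0.5]
```

The win of a custom shader is doing this multiply-accumulate directly on the packed int8 weights, with no graph scheduler or runtime dispatcher in between.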

Voice pipeline optimization details: https://www.runanywhere.ai/blog/fastvoice-on-device-voice-ai...

RAG optimizations: https://www.runanywhere.ai/blog/fastvoice-rag-on-device-retr...

RCLI is the open-source voice pipeline (MIT) built on MetalRT: three concurrent threads with lock-free ring buffers, double-buffered TTS, 38 macOS actions by voice, local RAG (~4 ms over 5K+ chunks), 20 hot-swappable models, and a full-screen TUI with per-op latency readouts. Falls back to llama.cpp when MetalRT isn't installed.
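
The single-producer/single-consumer ring buffer idea can be sketched as follows (a minimal illustration; RCLI's actual lock-free implementation is in the repo):

```python
# Minimal SPSC ring buffer: the producer thread only ever advances `head`,
# the consumer only ever advances `tail`. With atomic index reads/writes
# this needs no lock. Illustrative sketch, not RCLI's code.

class SpscRing:
    def __init__(self, capacity):
        # One slot stays empty so "full" and "empty" are distinguishable.
        self._buf = [None] * (capacity + 1)
        self._head = 0  # next write position (producer-owned)
        self._tail = 0  # next read position (consumer-owned)

    def push(self, item):
        nxt = (self._head + 1) % len(self._buf)
        if nxt == self._tail:          # full: caller drops or backpressures
            return False
        self._buf[self._head] = item
        self._head = nxt               # publish only after the write
        return True

    def pop(self):
        if self._tail == self._head:   # empty
            return None
        item = self._buf[self._tail]
        self._tail = (self._tail + 1) % len(self._buf)
        return item
```

In a voice pipeline, one such ring can sit between each pair of stages (mic→STT, STT→LLM, LLM→TTS) so the three threads stream into each other without blocking.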

Source: https://github.com/RunanywhereAI/RCLI (MIT)

Demo: https://www.youtube.com/watch?v=eTYwkgNoaKg

What would you build if on-device AI were genuinely as fast as cloud?

I built a programming language using Claude Code

2026-03-10 @ 16:37:29 · Points: 101 · Comments: 148

Launch HN: Didit (YC W26) – Stripe for Identity Verification

2026-03-10 @ 15:08:05 · Points: 48 · Comments: 45

Hey HN! I'm building Didit (https://didit.me) with my identical twin brother Alejandro. We are building a unified identity layer: a single integration that handles KYC, AML, biometrics, authentication, and fraud prevention globally. Here’s a demo: https://www.youtube.com/watch?v=eTdcg7JCc4M&t=7s.

Being identical twins, we’ve spent our whole lives dealing with identity confusion, so it’s a bit ironic that we ended up building a company to solve identity for the internet.

Growing up in Barcelona, we spent years working on products where identity issues were a massive pain. We eventually realized that for most engineering teams, "global identity" is a fiction—in reality it is a fragmented mess. You end up stitching together one provider for US driver's licenses, another for NFC chip extraction in Europe, a third for AML screening, a fourth for government database validation in Brazil, a fifth for liveness detection on low-end Android devices, and yet another for biometric authentication and age estimation. Orchestrating these into a cohesive flow while adapting to localized regulations like GDPR or CCPA is a nightmare that makes no sense for most teams to be working on.

When we looked at the existing "enterprise" solutions, we were baffled. Most require a three-week sales cycle just to see a single page of documentation. Pricing is hidden behind "Contact Us" buttons, and the products themselves are often bloated legacy systems with high latency and abysmal accuracy.

We also noticed a recurring pattern: these tools are frequently optimized only for the latest iOS hardware, performing poorly on the mid-range or older Android devices that make up a huge percentage of the market. This results in a "leaky" funnel where legitimate users drop off due to technical friction and fraud goes undetected because data points are spread across disparate systems. Also, these systems are expensive, often requiring massive annual commits that price out early-stage startups.

We wanted to build a system that is accessible to everyone—a tool that works like Stripe for identity, where you can get a sandbox key in thirty seconds and start running real verifications with world-class UX and transparent pricing.

To solve this, we took the "delusional" path of full vertical integration. Rather than just wrapping existing APIs, we built our own ID verification and biometric AI models—from classification and fraud detection to OCR models for almost every language. This vertical integration is fundamental to how we handle user data. Because we own the entire stack, we control the flow of sensitive information from end-to-end. Your users' data doesn't get bounced around through a chain of third-party black boxes or regional middle-men. This allows us to provide a level of security and privacy that is impossible when you are just an orchestration layer for other people's APIs.

We believe that identity verification is one of the most critical problems on the internet, and must be solved correctly and ethically. Many people are rightfully skeptical, especially given recent news about projects that have turned identity into a tool for mass data collection or surveillance. We don’t do anything of the sort, but we also don’t want to be coerced in the future, so we facilitate data minimization on the customer side. Instead of a business asking for a full ID scan, we allow them to simply verify a specific attribute—like "is this person over 18?"—without ever seeing the document itself. Our goal is to move the industry away from data hoarding and toward zero knowledge, or at least minimal knowledge, verification.
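
The shape of such an attribute-only check might look something like this (every field name below is hypothetical, for illustration only, and not Didit's actual API):

```python
# Hypothetical attribute-only verification result. Field names are
# illustrative, not Didit's real schema. The point: the business receives
# a boolean claim, never the document image or the birthdate behind it.

verification_result = {
    "session_id": "sess_abc123",  # made-up identifier
    "claims": [
        {"attribute": "age_over_18", "verified": True},
    ],
    # No "document_image", no "date_of_birth": data minimization by design.
}

def is_adult(result):
    """Return True iff an age_over_18 claim was verified."""
    return any(
        c["attribute"] == "age_over_18" and c["verified"]
        for c in result["claims"]
    )
```

The verifier holds (or discards) the sensitive data; the relying business only ever learns the predicate it asked about.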

The result of our all-in-one approach is a platform that increases onboarding rates while lowering identity costs. We’ve focused on building a high-confidence automated loop that reduces the need for manual review by up to 90%, catching sophisticated deepfakes and spoofing attempts that standard vision models miss. Our SDK is optimized for low bandwidth connections, ensuring it works on spotty 3G networks where legacy providers usually fail.

We are fully live, and you can jump into the dashboard at https://business.didit.me to see the workflow orchestration immediately. Our pricing is transparent and success-based; we don’t believe in hiding costs behind a sales call.

We’re here all day to answer any question—whether it’s about how we handle NFC verification, our approach to deepfake detection, the general ethics behind biometric data retention, or how we think about the future of identity. We’d love your brutal HN feedback on our APIs, platform, and integration flow!

Debian decides not to decide on AI-generated contributions

2026-03-10 @ 14:53:13 · Points: 259 · Comments: 205

We are building data breach machines and nobody cares

2026-03-10 @ 14:50:43 · Points: 89 · Comments: 33

Tony Hoare has died

2026-03-10 @ 14:50:16 · Points: 1367 · Comments: 188

Meta acquires Moltbook

2026-03-10 @ 14:38:06 · Points: 379 · Comments: 249

Rebasing in Magit

2026-03-10 @ 13:38:39 · Points: 177 · Comments: 121

After outages, Amazon to make senior engineers sign off on AI-assisted changes

2026-03-10 @ 13:31:17 · Points: 393 · Comments: 361

Show HN: How I Topped the HuggingFace Open LLM Leaderboard on Two Gaming GPUs

2026-03-10 @ 13:18:55 · Points: 257 · Comments: 79

Intel Demos Chip to Compute with Encrypted Data

2026-03-10 @ 13:10:48 · Points: 213 · Comments: 83

Online age-verification tools for child safety are surveilling adults

2026-03-10 @ 12:55:42 · Points: 508 · Comments: 292

I put my whole life into a single database

2026-03-10 @ 10:07:48 · Points: 408 · Comments: 201

Show HN: What's my JND? – a colour guessing game

2026-03-10 @ 10:01:58 · Points: 28 · Comments: 25

Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

2026-03-10 @ 08:54:53 · Points: 361 · Comments: 365

Levels of Agentic Engineering

2026-03-10 @ 08:48:40 · Points: 103 · Comments: 58

Yann LeCun raises $1B to build AI that understands the physical world

2026-03-10 @ 08:46:53 · Points: 279 · Comments: 304

Open Weights isn't Open Training

2026-03-09 @ 23:37:08 · Points: 73 · Comments: 27

Invoker Commands API

2026-03-08 @ 08:16:53 · Points: 45 · Comments: 10

Show HN: Joha – a free browser-based drawing playground with preset shape tools

2026-03-08 @ 06:45:45 · Points: 7 · Comments: 0

You can click or drag to quickly generate individual shapes like waves, stars, layered squares, particles, textured strokes, and ring patterns, then combine them into larger compositions.

It’s designed for fast visual exploration and composition rather than precise vector editing.

Under the hood, it’s built with Vue 3, Vite, and p5.js for the drawing engine.

Exploring the ocean with Raspberry Pi–powered marine robots

2026-03-07 @ 17:29:54 · Points: 30 · Comments: 5

Archives

2026

2025

2024

2023

2022