Hacker News

Latest

How to Find LinkedIn Profiles and Work Emails in 5 Minutes

2026-01-22 @ 22:31:27 · Points: 5 · Comments: 0

Anthropic Economic Index: economic primitives

2026-01-22 @ 21:54:02 · Points: 24 · Comments: 22

Viking Ship Museum in Denmark announces the discovery of the largest cog

2026-01-22 @ 21:43:14 · Points: 23 · Comments: 10

Show HN: CLI for working with Apple Core ML models

2026-01-22 @ 20:12:26 · Points: 30 · Comments: 1

Why does SSH send 100 packets per keystroke?

2026-01-22 @ 19:27:32 · Points: 213 · Comments: 151

'Active' sitting is better for brain health: review of studies

2026-01-22 @ 19:03:56 · Points: 52 · Comments: 24

I was banned from Claude for scaffolding a Claude.md file?

2026-01-22 @ 18:38:27 · Points: 270 · Comments: 216

Recent discoveries on the acquisition of the highest levels of human performance

2026-01-22 @ 18:01:02 · Points: 87 · Comments: 44

CSS Optical Illusions

2026-01-22 @ 17:41:22 · Points: 115 · Comments: 11

Show HN: First Claude Code client for Ollama local models

2026-01-22 @ 17:26:12 · Points: 28 · Comments: 12

Here is the release note from Ollama that made this possible: https://ollama.com/blog/claude

Technically, what I do is pretty straightforward:

- Detect which local models are available in Ollama.

- When internet access is unavailable, the client automatically switches to Ollama-backed local models instead of remote ones (sketched below).

- From the user’s perspective, it is the same Claude Code flow, just backed by local inference.
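
To make that concrete, here is a rough Python sketch of the detect-and-fallback step. GET /api/tags is Ollama's standard model-listing endpoint; the connectivity probe and the model preference list are illustrative stand-ins, not the actual client's logic:

    import json
    import socket
    import urllib.request

    OLLAMA_URL = "http://localhost:11434"

    def local_models() -> list[str]:
        """List models installed in the local Ollama (GET /api/tags)."""
        with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=2) as r:
            return [m["name"] for m in json.load(r)["models"]]

    def internet_up(host="api.anthropic.com", port=443) -> bool:
        """Cheap reachability probe; a real client would retry/back off."""
        try:
            socket.create_connection((host, port), timeout=2).close()
            return True
        except OSError:
            return False

    def pick_backend() -> str:
        if internet_up():
            return "remote"  # normal Claude Code flow
        models = local_models()
        for preferred in ("qwen3-coder:30b",):  # best performer so far
            if preferred in models:
                return f"ollama:{preferred}"
        return f"ollama:{models[0]}" if models else "remote"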

In practice, the best-performing model so far has been qwen3-coder:30b. I also tested glm-4.7-flash, which was released very recently, but it struggles with reliably following tool-calling instructions, so it is not usable for this workflow yet.

Launch HN: Constellation Space (YC W26) – AI for satellite mission assurance

2026-01-22 @ 17:03:21 · Points: 28 · Comments: 11

We're Constellation Space (https://constellation-io.com/). We built an AI system that predicts satellite link failures before they happen. Here's a video walkthrough: https://www.youtube.com/watch?v=069V9fADAtM.

Between us, we've spent years working on satellite operations at SpaceX, Blue Origin, and NASA. At SpaceX, we managed constellation health for Starlink. At Blue, we worked on next-gen test infra for New Glenn. At NASA, we dealt with deep space communications. The same problem kept coming up: by the time you notice a link is degrading, you've often already lost data.

The core issue is that satellite RF links are affected by dozens of interacting variables. A satellite passes overhead, and you need to predict whether the link will hold for the next few minutes. That depends on: the orbital geometry (elevation angle changes constantly), tropospheric attenuation (humidity affects signal loss via ITU-R P.676), rain fade (calculated via ITU-R P.618; rain rates in mm/hr translate directly to dB of loss at Ka-band and above), ionospheric scintillation (we track the Kp index from magnetometer networks), and network congestion on top of all that.

The traditional approach is reactive. Operators watch dashboards, and when SNR drops below a threshold, they manually reroute traffic or switch to a backup link. With 10,000 satellites in orbit today and 70,000+ projected by 2030, this doesn't scale. Our system ingests telemetry at around 100,000 messages per second from satellites, ground stations, weather radar, IoT humidity sensors, and space weather monitors. We run physics-based models in real-time - the full link budget equations, ITU atmospheric standards, orbital propagation - to compute what should be happening. Then we layer ML models on top, trained on billions of data points from actual multi-orbit operations.
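
For a sense of scale, here is a back-of-the-envelope Python version of that physics layer: slant range from elevation angle, free-space path loss, and a rain-fade term. Every constant below is illustrative, and the single power-law rain coefficient stands in for the full ITU-R P.676/P.618 procedures the real system runs:

    import math

    RE = 6371.0  # mean Earth radius, km

    def slant_range_km(elev_deg: float, alt_km: float) -> float:
        """Distance to a satellite at altitude alt_km seen at elevation elev_deg."""
        el = math.radians(elev_deg)
        return math.sqrt((RE + alt_km) ** 2 - (RE * math.cos(el)) ** 2) - RE * math.sin(el)

    def fspl_db(dist_km: float, freq_ghz: float) -> float:
        """Free-space path loss (distance in km, frequency in GHz)."""
        return 92.45 + 20 * math.log10(dist_km) + 20 * math.log10(freq_ghz)

    def rain_fade_db(rain_mm_hr: float, eff_path_km: float, k=0.2, alpha=1.0) -> float:
        """Specific attenuation gamma = k * R^alpha (dB/km) times an effective
        path length. k and alpha are made-up Ka-band-ish values, not ITU tables."""
        return k * rain_mm_hr ** alpha * eff_path_km

    # Ka-band LEO pass: 25 deg elevation, 550 km altitude, 5 mm/hr rain.
    d = slant_range_km(25.0, 550.0)
    loss = fspl_db(d, 20.0) + rain_fade_db(5.0, 4.0)
    c_dbw = 55.0 + 35.0 - loss  # EIRP (dBW) + rx gain (dBi) - path losses
    n_dbw = -228.6 + 10 * math.log10(300) + 10 * math.log10(100e6)  # kTB noise floor
    print(f"{d:.0f} km path, {loss:.1f} dB loss, C/N = {c_dbw - n_dbw:.1f} dB")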

The ML piece is where it gets interesting. We use federated learning because constellation operators (understandably) don't want to share raw telemetry. Each constellation trains local models on their own data, and we aggregate only the high-level patterns. This gives us transfer learning across different orbit types and frequency bands - learnings from LEO Ka-band links help optimize MEO or GEO operations. We can predict most link failures 3-5 minutes out with >90% accuracy, which gives enough time to reroute traffic before data loss. The system is fully containerized (Docker/Kubernetes) and deploys on-premise for air-gapped environments, on GovCloud (AWS GovCloud, Azure Government), or standard commercial clouds.
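
The aggregation step they describe is in the spirit of federated averaging. A generic numpy sketch of that idea (our guess at the shape of it, not their actual pipeline) would be:

    import numpy as np

    def fed_avg(updates: list, n_samples: list) -> np.ndarray:
        """Sample-weighted average of per-operator weight updates (FedAvg).
        Only these deltas cross each operator's security boundary."""
        total = sum(n_samples)
        return sum(w * (n / total) for w, n in zip(updates, n_samples))

    # Three operators report updates for the same parameter vector:
    updates = [np.array([0.10, -0.20]), np.array([0.30, 0.00]), np.array([0.05, -0.10])]
    counts = [1_000_000, 250_000, 500_000]
    global_update = fed_avg(updates, counts)  # weighted toward the biggest dataset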

Right now we're testing with defense and commercial partners. The dashboard shows real-time link health, forecasts at 60/180/300 seconds out, and root cause analysis (is this rain fade? satellite setting below horizon? congestion?). We expose everything via API - telemetry ingestion, predictions, topology snapshots, even an LLM chat endpoint for natural language troubleshooting.

The hard parts we're still working on: prediction accuracy degrades for longer time horizons (beyond 5 minutes gets dicey), we need more labeled failure data for rare edge cases, and the federated learning setup requires careful orchestration across different operators' security boundaries. We'd love feedback from anyone who's worked on satellite ops, RF link modeling, or time-series prediction at scale. What are we missing? What would make this actually useful in a production NOC environment?

Happy to answer any technical questions!

AnswerThis (YC F25) Is Hiring

2026-01-22 @ 17:00:40 · Points: 1

Show HN: isometric.nyc – giant isometric pixel art map of NYC

2026-01-22 @ 16:52:35 · Points: 528 · Comments: 139

I didn't write a single line of code.

Of course no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped!

I wrote a deep dive on the workflow and some thoughts about the future of AI coding and creativity:

http://cannoneyed.com/projects/isometric-nyc

Reverse engineering Lyft Bikes for fun (and profit?)

2026-01-22 @ 16:45:52 · Points: 45 · Comments: 13

Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)

2026-01-22 @ 16:31:47 · Points: 30 · Comments: 9

https://www.linum.ai/field-notes/launch-linum-v2

We're Sahil and Manu, two brothers who spent the last 2 years training text-to-video models from scratch. Today we're releasing them under Apache 2.0.

These are 2B param models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly better motion capture and aesthetics.

We're not claiming to have reached the frontier. For us, this is a stepping stone towards SOTA - proof we can train these models end-to-end ourselves.

Why train a model from scratch?

We shipped our first model in January 2024 (pre-Sora) as a 180p, 1-second GIF bot, bootstrapped off Stable Diffusion XL. Image VAEs don't understand temporal coherence, and without the original training data, you can't smoothly transition between image and video distributions. At some point you're better off starting over.

For v2, we use T5 for text encoding, Wan 2.1 VAE for compression, and a DiT-variant backbone trained with flow matching. We built our own temporal VAE but Wan's was smaller with equivalent performance, so we used it to save on embedding costs. (We'll open-source our VAE shortly.)
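
For readers unfamiliar with flow matching: in its common rectified-flow form, you interpolate between Gaussian noise and a data latent and regress the constant velocity along that straight line. A toy numpy version of one training step (shapes and the do-nothing stand-in model are ours; the real backbone is their DiT variant with text conditioning):

    import numpy as np

    rng = np.random.default_rng(0)

    def flow_matching_step(latents, model) -> float:
        x1 = latents                                  # VAE latents of video clips
        x0 = rng.standard_normal(x1.shape)            # Gaussian source samples
        t = rng.uniform(size=(x1.shape[0], 1, 1, 1))  # one timestep per sample
        xt = (1 - t) * x0 + t * x1                    # straight-line interpolant
        target_v = x1 - x0                            # constant velocity along the path
        pred_v = model(xt, t)                         # real model also takes text cond
        return float(np.mean((pred_v - target_v) ** 2))  # MSE on the velocity

    # Smoke test with a do-nothing "model":
    loss = flow_matching_step(rng.standard_normal((4, 8, 16, 16)), lambda x, t: 0 * x)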

The bulk of development time went into building curation pipelines that actually work (e.g., hand-labeling aesthetic properties and fine-tuning VLMs to filter at scale).

What works: Cartoon/animated styles, food and nature scenes, simple character motion. What doesn't: Complex physics, fast motion (e.g., gymnastics, dancing), consistent text.

Why build this when Veo/Sora exist? Products are extensions of the underlying model's capabilities. If users want a feature the model doesn't support (character consistency, camera controls, editing, style mapping, etc.), you're stuck. To build the product we want, we need to update the model itself. That means owning the development process. It's a bet that will take time (and a lot of GPU compute) to pay off, but we think it's the right one.

What’s next?

- Post-training for physics/deformations

- Distillation for speed

- Audio capabilities

- Model scaling

We kept a “lab notebook” of all our experiments in Notion. Happy to answer questions about building a model from 0 → 1. Comments and feedback welcome!

Show HN: BrowserOS – "Claude Cowork" in the browser

2026-01-22 @ 16:30:58 · Points: 40 · Comments: 17

The big differentiator: on BrowserOS you can use local LLMs or BYOK and run the agent entirely on the client side, so your company/sensitive data stays on your machine!

Today we're launching filesystem access... just like Claude Cowork, our browser agent can read files, write files, and run shell commands! But honestly, we didn't plan for this. It turns out the privacy decision we made 9 months ago accidentally positioned us for this moment.

The architectural bet we made 9 months ago: Unlike other AI browsers (ChatGPT Atlas, Perplexity Comet) where the agent loop runs server-side, we decided early on to run our agent entirely on your machine (client side).

But building everything on the client side wasn't smooth. We initially built our agent loop inside a Chrome extension, but we kept hitting walls -- the service worker is single-threaded JS, and there's no access to Node.js libraries. So we made the hard decision 2 months ago to throw everything away and start from scratch.

In the new architecture, our agent loop sits in a standalone binary that we ship alongside our Chromium. And we use gemini-cli for the agent loop with some tweaks! We wrote a neat adapter to translate between Gemini format and Vercel AI SDK format. You can look at our entire codebase here: https://git.new/browseros-agent
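
The adapter's job is mechanical but fiddly. The gist of the translation, shown here as a Python sketch of the two public message shapes (the real adapter is TypeScript and also handles tool calls, system prompts, and streaming):

    def to_gemini(messages):
        """Vercel-AI-SDK-style messages -> Gemini 'contents'."""
        role_map = {"user": "user", "assistant": "model"}
        return [
            {"role": role_map[m["role"]], "parts": [{"text": m["content"]}]}
            for m in messages
            if m["role"] in role_map  # system prompts travel separately in Gemini
        ]

    def from_gemini(contents):
        """Inverse direction: Gemini 'contents' -> SDK-style messages."""
        role_map = {"user": "user", "model": "assistant"}
        return [
            {"role": role_map[c["role"]],
             "content": "".join(p.get("text", "") for p in c["parts"])}
            for c in contents
        ]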

How we give browser access to filesystem: When Claude Cowork launched, we realized something: because Atlas and Comet run their agent loop server-side, there's no good way for their agent to access your files without uploading them to the server first. But our agent was already local. Adding filesystem access meant just... opening the door (with your permissions ofc). Our agent can now read and write files just like Claude Code.

What you can actually do today:

a) Organize files in my desktop folder https://youtu.be/NOZ7xjto6Uc

b) Open the top 5 HN links, extract the details, and write a summary into an HTML file https://youtu.be/uXvqs_TCmMQ

--- Where we are now

If you haven't tried us since the last Show HN (https://news.ycombinator.com/item?id=44523409), give us another shot. The new architecture unlocked a ton of new features, and we've grown to 8.5K GitHub stars and 100K+ downloads:

c) You can now build more reliable workflows using n8n-like graph https://youtu.be/H_bFfWIevSY

d) You can also use BrowserOS as an MCP server in Cursor or Claude Code https://youtu.be/5nevh00lckM

We are very bullish on the browser being the right platform for a Claude Cowork-like agent. The browser is the most commonly used app by knowledge workers (emails, docs, spreadsheets, research, etc.). Even Anthropic recognizes this -- for Claude Cowork, they have a janky integration with the browser via a Chrome extension. But owning the entire stack allows us to build differentiated features that wouldn't be possible otherwise. Ex: browser ACLs.

Agents can do dumb or destructive things, so we're adding browser-level guardrails (think IAM for agents): "role(agent): can never click buy" or "role(agent): read-only access on my bank's homepage."
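
As a sketch of what such a guardrail could look like (rule syntax and fields invented for illustration; first match wins, like a firewall chain):

    import fnmatch

    RULES = [
        {"url": "*",              "action": "click", "target": "*buy*", "allow": False},
        {"url": "*.mybank.com/*", "action": "write", "target": "*",     "allow": False},
        {"url": "*",              "action": "*",     "target": "*",     "allow": True},
    ]

    def permitted(url: str, action: str, target: str) -> bool:
        for rule in RULES:
            if all(fnmatch.fnmatch(v, rule[k]) for k, v in
                   (("url", url), ("action", action), ("target", target))):
                return rule["allow"]
        return False  # default deny if no rule matched

    assert not permitted("https://shop.example.com", "click", "buy now")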

Curious to hear your take on this and the overall thesis.

We’ll be in the comments. Thanks for reading!

GitHub: https://github.com/browseros-ai/BrowserOS

Download: https://browseros.com (available for Mac, Windows, Linux!)

It looks like the status/need-triage label was removed

2026-01-22 @ 16:10:20 · Points: 266 · Comments: 67

GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers

2026-01-22 @ 15:20:48 · Points: 634 · Comments: 340

Tree-sitter vs. Language Servers

2026-01-22 @ 14:47:58 · Points: 195 · Comments: 54

Qwen3-TTS family is now open sourced: Voice design, clone, and generation

2026-01-22 @ 13:51:25 · Points: 424 · Comments: 127

Design Thinking Books (2024)

2026-01-22 @ 11:51:10 · Points: 263 · Comments: 120

Show HN: Synesthesia, make noise music with a colorpicker

2026-01-22 @ 05:52:54 · Points: 27 · Comments: 12

NOTE! Turn the volume way down before using the site. It is noise music. :)

Mote: An Interactive Ecosystem Simulation [video]

2026-01-21 @ 22:30:18 · Points: 51 · Comments: 8

Preserved Fish, Boss of New York City

2026-01-19 @ 17:01:55 · Points: 20 · Comments: 2

Your app subscription is now my weekend project

2026-01-18 @ 22:16:43 · Points: 158 · Comments: 134

Keeping 20k GPUs healthy

2026-01-18 @ 16:16:11 · Points: 70 · Comments: 27

Vulnerable WhisperPair Devices – Hijack Bluetooth Accessories Using Fast Pair

2026-01-17 @ 23:33:27 · Points: 20 · Comments: 4

Compiling Scheme to WebAssembly

2026-01-17 @ 23:29:54 · Points: 51 · Comments: 8

A Year of 3D Printing

2026-01-17 @ 20:56:03 · Points: 73 · Comments: 75

My first year in sales as technical founder

2026-01-17 @ 19:24:28 · Points: 45 · Comments: 10

Archives

2026

2025

2024

2023

2022