The latest feature: google_lens_detect uses OpenCV to find objects in an image, crops each one, and sends the crops to Google Lens for identification. With that pipeline, GPT-OSS-120B, a text-only model with zero vision support, correctly identified an NVIDIA DGX Spark and a SanDisk USB drive from a desk photo.
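The detect-and-crop step is plain OpenCV. Here's a minimal sketch of that stage; the preprocessing and thresholds are my assumptions, not necessarily what google_lens_detect actually uses, and the Lens upload itself happens in the Playwright-driven browser session:

    import cv2

    def detect_and_crop(image_path: str, min_area: int = 5000):
        """Find object-like regions and return them as cropped images."""
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        crops = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h >= min_area:  # skip specks and edge noise
                crops.append(img[y:y + h, x:x + w])
        return crops

    # Each crop is then uploaded to Google Lens in the automated browser,
    # and the text identifications go back to the text-only model.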
Beyond Lens, the package also includes Google Search, News, Shopping, Scholar, Maps, Finance, Weather, Flights, Hotels, Translate, Images, Trends, and more. 17 tools total.
Two commands: pip install noapi-google-search-mcp && playwright install chromium
GitHub: https://github.com/VincentKaufmann/noapi-google-search-mcp
PyPI: https://pypi.org/project/noapi-google-search-mcp/
Booyah! Hacker News
Latest
Communities Are Not Fungible
2026-02-11 @ 07:42:57 | Points: 18 | Comments: 10
CoLoop (YC S21) Is Hiring Ex Technical Founders in London
2026-02-11 @ 07:00:46 | Points: 1
Windows Notepad App Remote Code Execution Vulnerability
2026-02-11 @ 06:15:33 | Points: 131 | Comments: 65
Show HN: I taught GPT-OSS-120B to see using Google Lens and OpenCV
2026-02-11 @ 05:40:20 | Points: 21 | Comments: 11
Stay Hungry, Stay Foolish (2005)
2026-02-11 @ 05:32:10 | Points: 26 | Comments: 14
Rivian R2: Electric Mid-Size SUV
2026-02-11 @ 01:00:15 | Points: 111 | Comments: 198
Fun With Pinball
2026-02-11 @ 00:21:55 | Points: 81 | Comments: 8
The Day the Telnet Died
2026-02-10 @ 22:20:40 | Points: 337 | Comments: 236
The Falkirk Wheel
2026-02-10 @ 20:42:20 | Points: 72 | Comments: 28
Tambo 1.0: Open-source toolkit for agents that render React components
2026-02-10 @ 20:16:12 | Points: 90 | Comments: 20
We've been building Tambo for about a year, and just released our 1.0.
We make it easier to register React components with Zod schemas, then build an agent that picks the right component and renders it with the right props.
We handle many of the complications of building generative user interfaces: managing state between the user, the agent, and the React component; rendering partial props; and handling auth between your user and MCP servers. We also support adding MCP servers and most of the MCP spec.
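Tambo itself is TypeScript/React with Zod, but the core pattern is easy to sketch. Here is a hypothetical, language-agnostic analogue in Python (the names and the dict-based schema stand-in are invented, not Tambo's API): the agent emits a {component, props} choice, and the props are validated against the registered schema before anything renders.

    import json
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Registered:
        name: str
        description: str            # what the agent matches against
        props: dict[str, type]      # stand-in for a Zod schema
        render: Callable[[dict[str, Any]], str]

    REGISTRY = [
        Registered(
            name="WeatherCard",
            description="Current weather for a city",
            props={"city": str, "temperature_c": float},
            render=lambda p: f"<WeatherCard city={p['city']!r} temp={p['temperature_c']} />",
        ),
    ]

    def render_choice(agent_output: str) -> str:
        """Validate the agent's {component, props} pick, then render."""
        choice = json.loads(agent_output)
        comp = next(c for c in REGISTRY if c.name == choice["component"])
        for key, typ in comp.props.items():
            if not isinstance(choice["props"].get(key), typ):
                raise TypeError(f"prop {key!r} must be {typ.__name__}")
        return comp.render(choice["props"])

    print(render_choice('{"component": "WeatherCard", '
                        '"props": {"city": "London", "temperature_c": 12.5}}'))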
We are 100% open-source, with 8k+ GitHub stars, thousands of developers, and over half a million messages processed by our hosted service.
If you're building AI agents with generative UI, we'd like to hear from you.
How did Windows 95 get permission to put Weezer video 'Buddy Holly' on the CD?
2026-02-10 @ 19:25:55 | Points: 171 | Comments: 131
The Singularity will occur on a Tuesday
2026-02-10 @ 17:04:31 | Points: 1068 | Comments: 594
Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)
2026-02-10 @ 16:47:29 | Points: 161 | Comments: 39
AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.
For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.
Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I
Rowboat has two parts:
(1) A living context graph: Rowboat connects to sources like Gmail and meeting-notes tools like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it, as sketched below.
(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.
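To make the storage model in (1) concrete, here is a minimal sketch of writing a dated, backlinked Markdown entry. The vault path and function name are invented for illustration; Rowboat's actual extraction step is LLM-driven, this only shows the plain-Markdown-with-backlinks idea:

    from datetime import date
    from pathlib import Path

    VAULT = Path("notes")  # hypothetical local vault directory

    def upsert_note(title: str, line: str, links: list[str]) -> None:
        """Append a dated, backlinked entry to a Markdown note."""
        VAULT.mkdir(exist_ok=True)
        backlinks = " ".join(f"[[{name}]]" for name in links)
        entry = f"- {date.today()}: {line} {backlinks}\n"
        with (VAULT / f"{title}.md").open("a", encoding="utf-8") as f:
            f.write(entry)

    # A deadline change from a standup updates the project note and
    # links back to the person who made the original commitment:
    upsert_note("Q3 Roadmap", "Launch slipped to March 15", ["John"])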
Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.
Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.
Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.
We’d love to hear your thoughts and welcome contributions!
Mathematicians disagree on the essential structure of the complex numbers (2024)
2026-02-10 @ 16:36:30 | Points: 202 | Comments: 250
Competition is not market validation
2026-02-10 @ 16:04:21 | Points: 108 | Comments: 29
Ex-GitHub CEO launches a new developer platform for AI agents
2026-02-10 @ 15:44:47 | Points: 479 | Comments: 441
Simplifying Vulkan one subsystem at a time
2026-02-10 @ 13:26:14 | Points: 249 | Comments: 159
Europe's $24T Breakup with Visa and Mastercard Has Begun
2026-02-10 @ 11:42:14 | Points: 892 | Comments: 740
The Feynman Lectures on Physics (1961-1964)
2026-02-10 @ 11:36:14 | Points: 309 | Comments: 78
Clean-room implementation of Half-Life 2 on the Quake 1 engine
2026-02-10 @ 11:21:56 | Points: 381 | Comments: 78
Show HN: I built a macOS tool for network engineers – it's called NetViews
2026-02-10 @ 05:20:32 | Points: 213 | Comments: 55
I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.
About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.
NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.
As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.
Discovery & Scanning:
* ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports).
* Fast scans using ARP tables first, then ICMP, to avoid the usual “nmap wait”.
Wireless Visibility:
* Detailed Wi-Fi connection performance and signal data.
* Visual and audible tools to quickly locate the access point you’re associated with.
Monitoring & Timelines:
* Connection and ping timelines over 1, 2, 4, or 8 hours.
* Continuous “live ping” monitoring to visualize latency spikes, packet loss, and reconnects.
Low-level Traffic (but only what matters):
* Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter.
* mDNS decoded into human-readable output (this took months of deep dives).
Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.
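The ARP-table-first strategy is worth spelling out: hosts already in the kernel's ARP cache need no probe at all, and only the remainder get an ICMP fallback. NetViews does this in Swift over BSD sockets; here is a rough Python sketch of the same idea, shelling out to arp and ping instead of using raw sockets:

    import re
    import subprocess

    def arp_table() -> dict[str, str]:
        """Parse `arp -a` output into {ip: mac} (macOS/BSD format)."""
        out = subprocess.run(["arp", "-a"], capture_output=True,
                             text=True).stdout
        pat = re.compile(r"\((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-f:]+)", re.I)
        return dict(pat.findall(out))

    def icmp_alive(ip: str) -> bool:
        """Fallback probe for hosts the ARP cache missed.

        -W is the per-reply wait in milliseconds on macOS ping
        (the same flag means seconds on Linux).
        """
        r = subprocess.run(["ping", "-c", "1", "-W", "500", ip],
                           capture_output=True)
        return r.returncode == 0

    known = arp_table()  # instant: reads the cache, sends no packets
    print(f"{len(known)} hosts already in the ARP cache")
    # The rest of the subnet would then be ping-swept, e.g.:
    # alive = [ip for ip in candidates if ip not in known and icmp_alive(ip)]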
I’d love feedback from anyone who builds or uses network diagnostic tools:
- Does this fill a gap you’ve personally hit on macOS?
- Are there better approaches to scan speed or event visualization that you’ve used?
- What diagnostics do you still find yourself dropping to the CLI for?
Details and screenshots: https://netviews.app
There’s a free trial and paid licenses; I’m funding development through direct sales rather than ads or subscriptions. Licenses include free upgrades.
Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.
A brief history of oral peptides
2026-02-09 @ 21:23:27 | Points: 116 | Comments: 42
Show HN: JavaScript-first, open-source WYSIWYG DOCX editor
2026-02-09 @ 16:33:38 | Points: 91 | Comments: 27
As an experiment, we gave Claude Code the OOXML spec, a concrete editor architecture, and a Playwright-based test suite. The agent iterated in a (Ralph) loop over a few nights and produced a working editor from scratch.
Core text editing works today. Tables and images are functional but still incomplete. MIT licensed.
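For a sense of what the agent was iterating against, a smoke test in the spirit of that Playwright suite might look like the following; the URL and selector are invented for illustration, not taken from the actual suite:

    from playwright.sync_api import sync_playwright

    def test_basic_typing():
        """Type into the editor and confirm the text lands in the DOM."""
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("http://localhost:3000")   # assumed dev-server URL
            page.click(".editor-surface")        # assumed editor selector
            page.keyboard.type("Hello, DOCX")
            assert "Hello, DOCX" in page.inner_text(".editor-surface")
            browser.close()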