Hacker News
Latest
The A in AGI Stands for Ads
2026-01-18 @ 14:25:49 · Points: 172 · Comments: 109
What is Plan 9?
2026-01-18 @ 13:32:25 · Points: 76 · Comments: 14
Software engineers can no longer neglect their soft skills
2026-01-18 @ 13:14:20 · Points: 46 · Comments: 60
Keystone (YC S25) Is Hiring
2026-01-18 @ 12:00:10 · Points: 1
Starting from scratch: Training a 30M Topological Transformer
2026-01-18 @ 11:39:14 · Points: 61 · Comments: 16
A free and open-source rootkit for Linux
2026-01-18 @ 09:36:25 · Points: 45 · Comments: 10
Consent-O-Matic
2026-01-18 @ 09:35:19 · Points: 137 · Comments: 74
Command-line Tools can be 235x Faster than your Hadoop Cluster (2014)
2026-01-18 @ 08:58:40 · Points: 120 · Comments: 77
Iconify: Library of Open Source Icons
2026-01-18 @ 06:53:36 · Points: 393 · Comments: 42
Show HN: GibRAM, an in-memory ephemeral GraphRAG runtime for retrieval
2026-01-18 @ 06:47:17 · Points: 45 · Comments: 4
I have been working with regulation-heavy documents lately, and one thing kept bothering me. Flat RAG pipelines often fail to retrieve related articles together, even when they are clearly connected through references, definitions, or clauses.
After trying several RAG setups, my subjective impression was that GraphRAG is a better mental model for this kind of data. The Microsoft GraphRAG paper and reference implementation were helpful starting points. In practice, though, I kept hitting one recurring friction point: graph storage and vector indexing are usually handled by separate systems, which felt unnecessarily heavy for short-lived analysis tasks.
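To make the retrieval pattern concrete, here is a minimal sketch of graph-expanded retrieval (illustrative only — the names, data shapes, and function are hypothetical, not GibRAM's actual API): vector similarity finds seed text units, then one hop along cross-reference edges pulls in related articles that flat top-k similarity would miss.

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def graph_expand_retrieve(query_vec, units, edges, k=2):
    """Rank text units by similarity, then add 1-hop graph neighbors.

    units: {unit_id: embedding}; edges: {unit_id: [related unit_ids]},
    e.g. explicit cross-references between regulation articles.
    """
    ranked = sorted(units, key=lambda u: cosine(query_vec, units[u]), reverse=True)
    result = []
    for u in ranked[:k]:
        if u not in result:
            result.append(u)
        for neighbor in edges.get(u, []):
            if neighbor not in result:
                result.append(neighbor)  # pulled in by structure, not similarity
    return result

# Article 7 cites Article 2's definitions; flat top-2 would drop art2.
units = {"art1": [1.0, 0.0], "art2": [0.0, 1.0], "art7": [0.9, 0.1]}
edges = {"art7": ["art2"]}
print(graph_expand_retrieve([1.0, 0.0], units, edges, k=2))  # → ['art1', 'art7', 'art2']
```

Because the graph and the embeddings sit in the same process, the expansion step is a dict lookup rather than a round-trip to a second system.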
To explore this tradeoff, I built GibRAM (Graph in-buffer Retrieval and Associative Memory). It is an experimental, in-memory GraphRAG runtime where entities, relationships, text units, and embeddings live side by side in a single process.
GibRAM is intentionally ephemeral. It is designed for exploratory tasks like summarization or conversational querying over a bounded document set. Data lives in memory, scoped by session, and is automatically cleaned up via TTL. There are no durability guarantees, and recomputation is considered cheaper than persistence for the intended use cases.
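The session-scoped, TTL-swept lifecycle described above can be sketched roughly like this (a hypothetical illustration of the idea, not GibRAM's implementation): each session owns its entities, relationships, text units, and embeddings in one in-memory structure, and expired sessions are dropped rather than persisted.

```python
import time

class SessionStore:
    """In-memory, TTL-scoped store: graph and vectors live side by side."""

    def __init__(self, ttl_seconds=600.0):
        self.ttl = ttl_seconds
        self.sessions = {}  # session_id -> (expires_at, data)

    def create(self, session_id):
        data = {"entities": {}, "relationships": [], "text_units": {}, "embeddings": {}}
        self.sessions[session_id] = (time.monotonic() + self.ttl, data)
        return data

    def get(self, session_id):
        entry = self.sessions.get(session_id)
        if entry is None or time.monotonic() > entry[0]:
            # Expired: drop it. Recomputation is cheaper than persistence here.
            self.sessions.pop(session_id, None)
            return None
        return entry[1]

    def sweep(self):
        # Periodic cleanup of expired sessions; returns how many were removed.
        now = time.monotonic()
        dead = [sid for sid, (exp, _) in self.sessions.items() if now > exp]
        for sid in dead:
            del self.sessions[sid]
        return len(dead)
```

There are no durability guarantees by design: if a session's data is gone, the caller re-indexes the bounded document set.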
This is not a database and not a production-ready system. It is a casual project, largely vibe-coded, meant to explore what GraphRAG looks like when memory is the primary constraint instead of storage. Technical debt exists, and many tradeoffs are explicit.
The project is open source, and I would really appreciate feedback, especially from people working on RAG, search infrastructure, or graph-based retrieval.
GitHub: https://github.com/gibram-io/gibram
Happy to answer questions or hear why this approach might be flawed.
ThinkNext Design
2026-01-18 @ 06:27:24 · Points: 163 · Comments: 70
Show HN: Figma-use – CLI to control Figma for AI agents
2026-01-18 @ 05:55:48 · Points: 30 · Comments: 9
What it does: 100 commands to create shapes, text, frames, and components, modify styles, and export assets. JSX importing that's ~100x faster than any plugin API import. Works with any LLM coding assistant.
Why I built it: The official Figma MCP server can only read files. I wanted AI to actually design — create buttons, build layouts, generate entire component systems. Existing solutions were either read-only or required verbose JSON schemas that burn through tokens.
Demo (45 sec): https://youtu.be/9eSYVZRle7o
Tech stack: Bun + Citty for CLI, Elysia WebSocket proxy, Figma plugin. The render command connects to Figma's internal multiplayer protocol via Chrome DevTools for extra performance when dealing with large groups of objects.
Try it: bun install -g @dannote/figma-use
Looking for feedback on CLI ergonomics, missing commands, and whether the JSX syntax feels natural.