HTTP Strict Transport Security (HSTS)
3 by arunc | 0 comments on Hacker News.
Tuesday, December 30, 2025
Monday, December 29, 2025
New top story on Hacker News: Show HN: Per-instance TSP Solver with No Pre-training (1.66% gap on d1291)
Show HN: Per-instance TSP Solver with No Pre-training (1.66% gap on d1291)
4 by jivaprime | 0 comments on Hacker News.
OP here. Most Deep Learning approaches for TSP rely on pre-training with large-scale datasets. I wanted to see if a solver could learn "on the fly" for a specific instance, without any priors from other problems. I built a solver using PPO that learns from scratch per instance. It achieved a 1.66% gap on TSPLIB d1291 in about 5.6 hours on a single A100.

The Core Idea: My hypothesis was that while optimal solutions are mostly composed of 'minimum edges' (nearest neighbors), the actual difficulty comes from a small number of 'exception edges' outside of that local scope. Instead of pre-training, I designed an inductive bias based on the topological/geometric structure of these exception edges. The agent receives guidance on which edges are likely promising based on micro/macro structures, and PPO fills in the gaps through trial and error. It is interesting to see RL reach this level without a dataset.

I have open-sourced the code and a Colab notebook for anyone who wants to verify the results or tinker with the 'exception edge' hypothesis. Code & Colab: https://ift.tt/dx93voO Happy to answer any questions about the geometric priors or the PPO implementation!
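A minimal sketch of the candidate-edge split the post describes (illustrative only, not the author's code): most tour edges come from each city's k nearest neighbors, and the interesting few are the "exception edges" outside that pool.

```python
# Classify tour edges as k-NN ("minimum") edges vs. exception edges.
# coords and tour are stand-ins for a real TSPLIB instance and candidate tour.
import numpy as np

def split_edges(coords: np.ndarray, tour: list[int], k: int = 10):
    n = len(coords)
    dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    # Each city's k nearest neighbours (column 0 is the city itself).
    knn = np.argsort(dists, axis=1)[:, 1:k + 1]
    minimum, exception = [], []
    for i in range(n):
        a, b = tour[i], tour[(i + 1) % n]
        (minimum if b in knn[a] or a in knn[b] else exception).append((a, b))
    return minimum, exception

coords = np.random.rand(100, 2)           # random instance as a placeholder
tour = list(np.random.permutation(100))   # random tour as a placeholder
mins, excs = split_edges(coords, tour)
print(f"{len(excs)} of {len(mins) + len(excs)} edges fall outside the k-NN pool")
```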
Sunday, December 28, 2025
New top story on Hacker News: Ask HN: Best Podcasts of 2025?
Ask HN: Best Podcasts of 2025?
22 by adriancooney | 14 comments on Hacker News.
The Rest is Politics, Leading, Philosophize This and Stratechery (paid) are the podcasts that stood out the most in 2025. Curious what other HNers listen to.
Saturday, December 27, 2025
Friday, December 26, 2025
Thursday, December 25, 2025
New top story on Hacker News: Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat
Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat
13 by Evidlo | 0 comments on Hacker News.
I wanted to share this fun craft activity for the holidays that I've been doing with my family over the last few years. I came up with these while cutting up some cans trying to make an aluminum version of paper spinners. There are a variety of shapes that work, but generally bigger+lighter spinners are better. Also incandescent bulbs are the best, but LEDs work too. They remind me of candle carousels I would see at my grandparents' house during Christmas. Let me know what you think!
Wednesday, December 24, 2025
Tuesday, December 23, 2025
Monday, December 22, 2025
Sunday, December 21, 2025
Saturday, December 20, 2025
New top story on Hacker News: Show HN: HN Wrapped 2025 - an LLM reviews your year on HN
Show HN: HN Wrapped 2025 - an LLM reviews your year on HN
9 by hubraumhugo | 3 comments on Hacker News.
I was looking for some fun project to play around with the latest Gemini models and ended up building this :) Enter your username and get:
- Generated roasts and stats based on your HN activity in 2025
- Your personalized HN front page from 2035 (inspired by a recent Show HN [0])
- An xkcd-style comic of your HN persona

It uses the latest gemini-3-flash and gemini-3-pro-image (nano banana pro) models, which deliver pretty impressive and funny results. A few examples:
- dang: https://ift.tt/UGT3bsZ
- myself: https://ift.tt/v6nK0jC

Give it a try and share yours :)

[0] https://ift.tt/GltEwDA
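The basic pipeline, sketched under assumptions (this is not the project's code): pull a user's 2025 activity from the public Algolia HN API, then ask a Gemini model for the roast. The model name is the one the post mentions; swap in whatever is available to you.

```python
import requests
from google import genai  # pip install google-genai

def fetch_hn_items(username: str) -> list[str]:
    # Algolia HN search: OR-group (comment,story) AND author tag,
    # filtered to items created since 2025-01-01 UTC.
    url = "https://hn.algolia.com/api/v1/search_by_date"
    params = {
        "tags": f"(comment,story),author_{username}",
        "numericFilters": "created_at_i>1735689600",
        "hitsPerPage": 100,
    }
    hits = requests.get(url, params=params, timeout=10).json()["hits"]
    return [(h.get("title") or h.get("comment_text") or "")[:200] for h in hits]

client = genai.Client()  # reads the API key from the environment
items = fetch_hn_items("dang")
resp = client.models.generate_content(
    model="gemini-3-flash",  # model named in the post; adjust as needed
    contents="Write a playful roast of this HN user's 2025 activity:\n" + "\n".join(items),
)
print(resp.text)
```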
Friday, December 19, 2025
New top story on Hacker News: Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)
Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)
9 by linggen | 5 comments on Hacker News.
Hi HN, Working across multiple projects, I got tired of re-explaining our complex multi-node system to LLMs. Documentation helped, but plain text is hard to search without indexing and doesn't work across projects. I built Linggen to solve this.

My Workflow: I use the Linggen VS Code extension to "init my day." It calls the Linggen MCP server to load memory instantly. Linggen indexes all my docs as if it were remembering them; it is awesome. One click loads the full architectural context, removing the "cold start" problem.

The Tech:
- Local-First: Rust + LanceDB. Code and embeddings stay on your machine. No accounts required.
- Team Memory: Index knowledge so teammates' LLMs get context automatically.
- Visual Map: See file dependencies and refactor "blast radius."
- MCP-Native: Supports Cursor, Zed, and Claude Desktop.

Linggen saves me hours. I'd love to hear how you manage complex system context!

Repo: https://ift.tt/bi7RZdC
Website: https://linggen.dev
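To make the local-first flow concrete, here is a tiny sketch using LanceDB's Python client (Linggen itself is Rust; this only illustrates the idea). The embed() function is a toy stand-in — a real setup would use an embedding model.

```python
import hashlib
import lancedb  # pip install lancedb

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy deterministic "embedding" so the example is self-contained.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dim]]

db = lancedb.connect("./memory")  # everything stays on local disk
docs = [
    "auth-service talks to user-db over gRPC",
    "billing runs as a nightly batch job",
]
table = db.create_table("docs", data=[{"text": d, "vector": embed(d)} for d in docs])

# "Init my day": retrieve the most relevant architectural context.
for hit in table.search(embed("how does auth reach the database?")).limit(1).to_list():
    print(hit["text"])
```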
Thursday, December 18, 2025
New top story on Hacker News: Show HN: Paper2Any – Open tool to generate editable PPTs from research papers
Show HN: Paper2Any – Open tool to generate editable PPTs from research papers
5 by Mey0320 | 0 comments on Hacker News.
Hi HN, We are the OpenDCAI group from Peking University. We built Paper2Any, an open-source tool designed to automate the "Paper to Slides" workflow based on our DataFlow-Agent framework.

The Problem: Writing papers is hard, but creating professional architecture diagrams and slides (PPTs) is often more tedious. Most AI tools just generate static images (PNGs) that are impossible to tweak for final publication.

The Solution: Paper2Any takes a PDF, text, or sketch as input, understands the research logic, and generates fully editable PPTX (PowerPoint) files and SVGs. We prioritize flexibility and fidelity, allowing you to specify page ranges, switch visual styles, and preserve original assets.

How it works:
1. Multimodal Reading: Extracts text and visual elements from the paper. You can now specify page ranges (e.g., Method section only) to focus the context and reduce token usage.
2. Content Understanding: Identifies core contributions and structural logic.
3. PPT Generation: Instead of generating one flat image, it generates independent elements (blocks, arrows, text) with selectable visual styles and organizes them into a slide layout (see the sketch below).

Links:
- Demo: http://dcai-paper2any.cpolar.top/
- Code (DataFlow-Agent): https://ift.tt/H60ROgZ

We'd love to hear your feedback on the generation quality and the agent workflow!
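A sketch of what "independent editable elements" means in practice, using python-pptx (illustrative only, not the Paper2Any pipeline): each block, arrow, and label is its own shape that stays movable in PowerPoint.

```python
from pptx import Presentation
from pptx.util import Inches
from pptx.enum.shapes import MSO_SHAPE

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])  # blank layout

# Two architecture blocks, each an independent shape.
encoder = slide.shapes.add_shape(
    MSO_SHAPE.ROUNDED_RECTANGLE, Inches(1), Inches(2), Inches(2), Inches(1)
)
encoder.text_frame.text = "Encoder"

decoder = slide.shapes.add_shape(
    MSO_SHAPE.ROUNDED_RECTANGLE, Inches(5), Inches(2), Inches(2), Inches(1)
)
decoder.text_frame.text = "Decoder"

# The connecting arrow is its own shape too, so it remains editable.
slide.shapes.add_shape(
    MSO_SHAPE.RIGHT_ARROW, Inches(3.1), Inches(2.3), Inches(1.8), Inches(0.4)
)

prs.save("architecture.pptx")
```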
Wednesday, December 17, 2025
Tuesday, December 16, 2025
New top story on Hacker News: Show HN: Zenflow – orchestrate coding agents without "you're right" loops
Show HN: Zenflow – orchestrate coding agents without "you're right" loops
11 by andrewsthoughts | 4 comments on Hacker News.
Hi HN, I'm Andrew, founder of Zencoder. While building our IDE extensions and cloud agents, we ran into the same issue many of you likely face when using coding agents in complex repos: agents getting stuck in loops, apologizing, and wasting time. We tried to manage this with scripts, but juggling terminal windows and copy-paste prompting was painful. So we built Zenflow, a free desktop tool to orchestrate AI coding workflows. It handles the things we were missing in standard chat interfaces:

- Cross-Model Verification: You can have Codex review Claude's code, or run them in parallel to see which model handles the specific context better (a toy version is sketched after this list).
- Parallel Execution: Run five different approaches on a backlog item simultaneously, mixing "Human-in-the-Loop" for hard problems with "YOLO" runs for simple tasks.
- Dynamic Workflows: Configured via simple .md files. Agents can actually "rewire" the next steps of the workflow dynamically based on the problem at hand.
- Project list/kanban views across all workloads.

What we learned building this: To tune Zenflow, we ran 100+ experiments across public benchmarks (SWE-Bench-*, T-Bench) and private datasets. Two major takeaways that might interest this community:

- Benchmark Saturation: Models are becoming progressively overtrained on all versions of SWE-Bench (even Pro). We found public results diverging significantly from performance on private datasets. If you are building workflows, you can't rely on public benchmarks.
- The "Goldilocks" Workflow: In autonomous mode, heavy multi-step processes often multiply errors rather than fix them. Massive, complex prompt templates look good on paper but fail in practice. The most reliable setups landed in a narrow "Goldilocks" zone of just enough structure without over-orchestration.

The app is free to use and supports Claude Code, Codex, Gemini, and Zencoder. We've been dogfooding this heavily, but I'd love to hear your thoughts on the default workflows and whether they fit your mental model for agentic coding.

Download: https://ift.tt/izQZ07A
YT flyby: https://www.youtube.com/watch?v=67Ai-klT-B8
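A toy version of cross-model verification, not Zenflow's internals: one agent drafts a change, a second model reviews it, and the workflow gates on the review. The CLI names here are hypothetical placeholders — substitute the agent CLIs you actually use.

```python
import subprocess

def run_agent(cmd: list[str], prompt: str) -> str:
    # Pipe the prompt to an agent CLI and capture its reply.
    return subprocess.run(
        cmd, input=prompt, capture_output=True, text=True, check=True
    ).stdout

task = "Add input validation to the /signup handler."
draft = run_agent(["agent-a", "--print"], task)            # placeholder CLI
review = run_agent(["agent-b", "--print"],                 # placeholder CLI
                   f"Review this patch strictly for bugs:\n{draft}")

# Gate on the reviewer instead of trusting "you're right" loops.
if "LGTM" in review:
    print("verified draft:\n", draft)
else:
    print("reviewer objections:\n", review)
```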
Monday, December 15, 2025
New top story on Hacker News: Show HN: A pager
Show HN: A pager
38 by keepamovin | 22 comments on Hacker News.
Hello HN, I basically don't use notifications for anything. The noise is too much. Slack is too loud. Email is too slow. But sometimes you do need a note in your face. I found myself missing 1990s pagers. I wanted a digital equivalent - something that does one thing: beep until I ack it. So I built UDP-7777.

Concept:
- 0% Cloud: It listens on UDP port 7777. No accounts, no central servers. You don't need Tailscale/ZeroTier/WG/etc.; it just works easily across a set of devices.
- CAPCODES: It maps your IP address (LAN or Tailscale) to a retro 10-digit "CAPCODE" that looks like a phone number (e.g., (213) 070-6433 for loopback).
- Minimalism: Bare-bones interface. Just a box, a few buttons, and a big red blinker.

The Tech: It's a single binary written in Go (using Fyne). It implements "burst fire" UDP (sending each packet 3x) to ensure delivery without the handshake overhead of TCP. (Both ideas are sketched below.)

New in v2.2.7:
- Frequency Tuning: Bind specifically to your Tailscale/ZeroTier interface.
- Squelch: Optional shared-secret keys to ignore unauthorized packets.
- Heartbeat: Visual/audio alerts that persist until you physically click ACK.

I built this for anyone looking to cut through the noise: DevOps teams handing off the "on-call IP", or deep-work focus where you only want interruptions from a high-trust circle. I'd love to hear your thoughts on the IP-to-Phone-Number mapping logic (it's purely visual, but I'm really into it).

Site & Binaries (Signed for Mac/Win): https://udp7777.com
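Two of the post's ideas, sketched in Python rather than the app's Go: the IP-to-CAPCODE mapping (a 32-bit IPv4 address always fits in 10 decimal digits, which reproduces the post's loopback example) and "burst fire" UDP.

```python
import socket
import struct

def capcode(ip: str) -> str:
    # Interpret the IPv4 address as a 32-bit integer, zero-padded to 10 digits.
    n = struct.unpack("!I", socket.inet_aton(ip))[0]
    s = f"{n:010d}"
    return f"({s[:3]}) {s[3:6]}-{s[6:]}"

print(capcode("127.0.0.1"))  # -> (213) 070-6433, matching the post's example

def page(ip: str, message: bytes, port: int = 7777, burst: int = 3) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(burst):  # no handshake, so repeat to improve delivery odds
        sock.sendto(message, (ip, port))
    sock.close()

page("127.0.0.1", b"deploy finished -- ACK me")
```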
Sunday, December 14, 2025
Saturday, December 13, 2025
Friday, December 12, 2025
Thursday, December 11, 2025
New top story on Hacker News: Show HN: SIM – Apache-2.0 n8n alternative
Show HN: SIM – Apache-2.0 n8n alternative
10 by waleedlatif1 | 0 comments on Hacker News.
Hey HN, Waleed here. We're building Sim ( https://sim.ai/ ), an open-source visual editor to build agentic workflows. Repo here: https://ift.tt/GKQo8a9 . Docs here: https://docs.sim.ai . You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves. We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:

- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...
- Tool calling with granular control: forced, auto
- Agent memory: conversation memory with sliding-window support (by last n messages or tokens)
- Trace spans: detailed logging and observability for nested workflows and tool calling
- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents
- Workflow deployment versioning with rollbacks
- MCP support, Human-in-the-loop block
- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)

Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives. (A toy version of this scheduling model is sketched below.) Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)

[1] https://ift.tt/QPnpqwI
[2] https://ift.tt/4FNhca7
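A minimal sketch of that execution model (not Sim's engine): each node runs as soon as all of its upstream dependencies have finished, so independent branches execute concurrently by default.

```python
import asyncio

async def run_dag(deps: dict[str, set[str]]):
    done: dict[str, asyncio.Event] = {n: asyncio.Event() for n in deps}

    async def run_node(name: str):
        # Wait for every upstream block, then do this node's work.
        await asyncio.gather(*(done[d].wait() for d in deps[name]))
        print(f"running {name}")
        await asyncio.sleep(0.1)  # stand-in for the block's real work
        done[name].set()

    await asyncio.gather(*(run_node(n) for n in deps))

# fetch and classify run concurrently once trigger completes; notify joins them.
asyncio.run(run_dag({
    "trigger": set(),
    "fetch": {"trigger"},
    "classify": {"trigger"},
    "notify": {"fetch", "classify"},
}))
```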
Wednesday, December 10, 2025
New top story on Hacker News: Show HN: Automated license plate reader coverage in the USA
Show HN: Automated license plate reader coverage in the USA
21 by sodality2 | 3 comments on Hacker News.
Built this over the last few days, based on a Rust codebase that parses the latest ALPR reports from OpenStreetMap, calculates navigation statistics from every tagged residential building to nearby amenities, and tests each route for intersection with those ALPR cameras (Flock being the most widespread).

These have gotten more controversial in recent months due to their indiscriminate, large-scale data collection, with 404 Media publishing many original pieces ( https://ift.tt/CJcSVgr ) about their adoption and (ab)use across the country. I wanted to use open-source datasets to track the rapid expansion, especially per county, as this data can be crucial for 'deflock' movements petitioning counties and city governments to ban and remove them.

In some counties, the tracking has become so widespread that most people can't go anywhere without being photographed. This includes possibly sensitive areas, like places of worship and medical facilities. The argument for their legality rests upon the notion that these cameras are equivalent to 'mere observation', but the enormous scope and the data sharing agreements in place to share and access millions of records without warrants blur the lines of the Fourth Amendment.
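For anyone who wants to poke at the same open data, here is a hedged sketch that pulls ALPR cameras from OpenStreetMap via the Overpass API. OSM tagging conventions vary; "surveillance:type"="ALPR" is the common convention but not the only one in the wild, and the area filter below is just an example.

```python
import requests

query = """
[out:json][timeout:60];
area["ISO3166-2"="US-VA"]->.a;   // example region: Virginia
node(area.a)["man_made"="surveillance"]["surveillance:type"="ALPR"];
out body;
"""
resp = requests.post(
    "https://overpass-api.de/api/interpreter", data={"data": query}, timeout=90
)
cameras = resp.json()["elements"]
print(f"{len(cameras)} tagged ALPR cameras")
for cam in cameras[:5]:
    print(cam["lat"], cam["lon"], cam.get("tags", {}).get("operator", "?"))
```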
Tuesday, December 9, 2025
Monday, December 8, 2025
Sunday, December 7, 2025
Saturday, December 6, 2025
Friday, December 5, 2025
Thursday, December 4, 2025
Wednesday, December 3, 2025
Tuesday, December 2, 2025
New top story on Hacker News: OpenAI declares 'code red' as Google catches up in AI race
OpenAI declares 'code red' as Google catches up in AI race
64 by goplayoutside | 83 comments on Hacker News.
Monday, December 1, 2025
Sunday, November 30, 2025
Saturday, November 29, 2025
Friday, November 28, 2025
Thursday, November 27, 2025
Wednesday, November 26, 2025
Tuesday, November 25, 2025
Monday, November 24, 2025
Sunday, November 23, 2025
Saturday, November 22, 2025
Friday, November 21, 2025
Thursday, November 20, 2025
Wednesday, November 19, 2025
Tuesday, November 18, 2025
Monday, November 17, 2025
Sunday, November 16, 2025
Saturday, November 15, 2025
Friday, November 14, 2025
Thursday, November 13, 2025
Wednesday, November 12, 2025
Tuesday, November 11, 2025
Monday, November 10, 2025
Sunday, November 9, 2025
Saturday, November 8, 2025
Friday, November 7, 2025
Thursday, November 6, 2025
Wednesday, November 5, 2025
Tuesday, November 4, 2025
Sunday, November 2, 2025
Saturday, November 1, 2025
New top story on Hacker News: Nutella maker in hazelnut stand-off with Turkish dealers
Nutella maker in hazelnut stand-off with Turkish dealers
27 by bookofjoe | 16 comments on Hacker News.
https://ift.tt/zhYDiSl
Friday, October 31, 2025
Thursday, October 30, 2025
Wednesday, October 29, 2025
Tuesday, October 28, 2025
Monday, October 27, 2025
New top story on Hacker News: 10M people watched a YouTuber shim a lock; the lock company sued him – bad idea
10M people watched a YouTuber shim a lock; the lock company sued him – bad idea
140 by Brajeshwar | 57 comments on Hacker News.
https://www.youtube.com/shorts/YjzlmKz_MM8
Sunday, October 26, 2025
Saturday, October 25, 2025
New top story on Hacker News: Dead soldiers' teeth reveal diseases that doomed Napoleon's army
Dead soldiers' teeth reveal diseases that doomed Napoleon's army
6 by reaperducer | 62 comments on Hacker News.
https://ift.tt/5wLjyxA... https://ift.tt/4tPJOmZ...
Friday, October 24, 2025
Thursday, October 23, 2025
New top story on Hacker News: Show HN: Tommy – Turn ESP32 devices into through-wall motion sensors
Show HN: Tommy – Turn ESP32 devices into through-wall motion sensors
4 by mike2872 | 0 comments on Hacker News.
Hi HN! I would like to present my project called TOMMY, which turns ESP32 devices into motion sensors that work through walls and obstacles using Wi-Fi sensing.

TOMMY started as a project for my own use. I was frustrated with motion sensors that didn't detect stationary presence and left dead zones everywhere. Presence sensors existed but were expensive and needed one per room. I explored echo localization first, but microphones listening 24/7 felt too creepy. Then I discovered Wi-Fi sensing - a huge research topic, but nothing production-ready yet. It ticked all the boxes: it could theoretically detect stationary presence through breathing/micromovements, and it worked through walls and furniture so devices could be hidden away.

Two years and dozens of research papers later, TOMMY has evolved into software I'm honestly quite proud of. Although it doesn't have stationary presence detection yet (coming Q1 2026), it detects motion really well. It works as a Home Assistant add-on or Docker container, supports a range of ESP32 devices, and can be flashed through the built-in tool or used alongside existing ESPHome setups.

I released the first version a couple of months ago on Home Assistant's subreddit and got a lot of interest and positive feedback. More than 200 people joined the Discord community and almost 2,000 downloaded it. Right now TOMMY is in beta, which is completely free for everyone to use. I'm also offering free lifetime licenses to every beta user who joins the Discord channel. You can read more about the project at https://ift.tt/byX6tZw . Please join the Discord channel if you are interested in the project.

A note on open source: There's been a lot of interest in having TOMMY as an open-source project, which I fully understand. I'm reluctant to open source before reaching sustainability, as I'd love to work on this full time. However, privacy is verifiable - it's 100% local with no data collection (easily confirmed via packet sniffing or network isolation). Happy to help anyone verify this.
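The core signal-processing idea behind Wi-Fi sensing, reduced to a toy (illustrative, not TOMMY's detector): motion perturbs the radio channel, so the variance of CSI subcarrier amplitudes over a sliding window rises. The synthetic data below stands in for real CSI captures.

```python
import numpy as np

rng = np.random.default_rng(0)
frames, subcarriers = 600, 52
csi = rng.normal(1.0, 0.01, (frames, subcarriers))       # quiet-room baseline
csi[300:400] += rng.normal(0, 0.2, (100, subcarriers))   # someone walks through

def motion_score(csi: np.ndarray, window: int = 50) -> np.ndarray:
    # Mean per-subcarrier standard deviation over a sliding window.
    amp = np.abs(csi)
    return np.array([
        amp[i:i + window].std(axis=0).mean()
        for i in range(len(amp) - window)
    ])

score = motion_score(csi)
threshold = 5 * score[:200].mean()  # calibrate on a known-empty period
print("motion detected in windows:", np.flatnonzero(score > threshold)[:5], "...")
```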
Wednesday, October 22, 2025
Tuesday, October 21, 2025
Monday, October 20, 2025
Sunday, October 19, 2025
New top story on Hacker News: Ask HN: What are people doing to get off of VMware?
Ask HN: What are people doing to get off of VMware?
24 by jwithington | 19 comments on Hacker News.
In certain large industries it feels like there's more urgency to migrate off of VMware than there is to do genAI stuff. Do others sense this? If so, what options do you see for folks to keep their servers but move off of VMware? Is it all RedHat?
24 by jwithington | 19 comments on Hacker News.
In certain large industries it feels like there's more urgency to migrate off of VMware than there is to do genAI stuff. Do others sense this? If so, what options do you see for folks to keep their servers but move off of VMware? Is it all RedHat?
Saturday, October 18, 2025
Friday, October 17, 2025
New top story on Hacker News: Show HN: We packaged an MCP server inside Chromium
Show HN: We packaged an MCP server inside Chromium
7 by felarof | 2 comments on Hacker News.
Hey HN, we just shipped a browser with an inbuilt MCP server! We're a YC startup (S24) building BrowserOS, an open-source Chromium fork. We're a privacy-first alternative to the new wave of AI browsers like Dia and Perplexity Comet. Since launching ~3 months ago, the #1 request has been to expose our browser as an MCP server.

Google beat us to launch with chrome-devtools-mcp (solid product btw), which lets you build/debug web apps by connecting Chrome to coding assistants. But we wanted to take this a step further: we packaged the MCP server directly into our browser binary. That gives three advantages:
1. MCP server setup is super simple: no npx install, no starting Chrome with CDP flags; you just download the BrowserOS binary.
2. With our browser's inbuilt MCP server, AI agents can interact using your logged-in sessions (unlike chrome-devtools-mcp, which starts a fresh headless instance each time).
3. Our MCP server also exposes new APIs from Chromium's C++ core to click, type, and draw bounding boxes on a webpage. Our APIs are also not based on CDP (Chrome DevTools Protocol) and are robust against anti-bot detection.

A few example use cases for BrowserOS-mcp:
a) Frontend development with Claude Code: instead of screenshot-pasting, claude-code gets WYSIWYG access. It can write code, take a screenshot, check console logs, and fix issues in one agentic sweep. Since it has your sessions, it can do QA tasks like "test the auth flow with my Google Sign-In." Here's a video of claude-code using BrowserOS to improve CSS styling with back-and-forth checking: https://youtu.be/vcSxzIIkg_0
b) Use as an agentic browser: You can install BrowserOS-mcp in claude-code or Claude Desktop and do things like form-filling, extraction, multi-step agentic tasks, etc. It honestly works better than Perplexity Comet! Here's a video of claude-code opening the top 5 Hacker News posts and summarizing them: https://youtu.be/rPFx_Btajj0

How we packaged the MCP server inside the Chromium binary: We package the server as a Bun binary and expose MCP tools over HTTP instead of stdio (to support multiple sessions). A BrowserOS controller is installed as an extension at the application layer, and the MCP server connects to it over WebSocket to control the browser. Here's a rough architecture diagram: https://dub.sh/browseros-mcp-diag

How to install and use it: We put together a short guide here: https://ift.tt/qSQMVTK

Our vision is to reimagine the browser as an operating system for AI agents, and packaging an MCP server directly into it is a big unlock for that! I'll be hanging around all day, would love to get your feedback and answer any questions!
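To make the architecture concrete, here is a tiny MCP server exposing one browser-style tool over HTTP, using the official Python MCP SDK (BrowserOS embeds a Bun binary instead; this only shows the shape of the interface, and the click logic is a placeholder).

```python
from mcp.server.fastmcp import FastMCP  # pip install "mcp[cli]"

mcp = FastMCP("browser-tools")

@mcp.tool()
def click(selector: str) -> str:
    """Click the element matching a CSS selector in the active tab."""
    # In BrowserOS this would cross a WebSocket to the controller extension,
    # which calls into Chromium's C++ core. Here we just echo the request.
    return f"clicked {selector}"

if __name__ == "__main__":
    # HTTP (rather than stdio) lets several clients share one running server.
    mcp.run(transport="streamable-http")
```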
Thursday, October 16, 2025
New top story on Hacker News: Show HN: How Useless Are You? A brutally honest skills check
Show HN: How Useless Are You? A brutally honest skills check
14 by mraspuzzi | 9 comments on Hacker News.
We built this to answer "am I a fit for this role?" after noticing how hard it is to get honest feedback when applying to a YC startup or something else entirely. It's a custom 5-minute challenge that roasts you after. Added a leaderboard for those who want to see how they stack up. Roast us below.
Wednesday, October 15, 2025
Tuesday, October 14, 2025
Monday, October 13, 2025
Sunday, October 12, 2025
New top story on Hacker News: Show HN: I built a simple ambient sound app with no ads or subscriptions
Show HN: I built a simple ambient sound app with no ads or subscriptions
6 by alpaca121 | 0 comments on Hacker News.
I’ve always liked having background noise while working or falling asleep, but I got frustrated that most “white noise” or ambient sound apps are either paywalled, stuffed with ads, or try to upsell subscriptions for basic features. So I made Ambi, a small iOS app with a clean interface and a set of freely available ambient sounds — rain, waves, wind, birds, that sort of thing. You can mix them, adjust volume levels, and just let it play all night or while you work. Everything works offline and there are no hidden catches. It’s something I built for myself first, but I figured others might find it useful too. Feedback, bugs, and suggestions are all welcome. https://ift.tt/ATXzOt0...
Saturday, October 11, 2025
Friday, October 10, 2025
Thursday, October 9, 2025
Wednesday, October 8, 2025
Tuesday, October 7, 2025
Monday, October 6, 2025
Sunday, October 5, 2025
New top story on Hacker News: Show HN: ut – Rust based CLI utilities for devs and IT
Show HN: ut – Rust based CLI utilities for devs and IT
7 by ksdme9 | 2 comments on Hacker News.
Hey HN, I find myself reaching for tools like it-tools.tech or other random sites every now and then during development or debugging. So, I built a toolkit with a sane and simple CLI interface for most of those tools. For the curious and lazy, at the moment, ut has tools for:
- Encoding: base64 (encode, decode), url (encode, decode)
- Hashing: md5, sha1, sha224, sha256, sha384, sha512
- Data Generation: uuid (v1, v3, v4, v5), token, lorem, random
- Text Processing: case (lower, upper, camel, title, constant, header, sentence, snake), pretty-print, diff
- Development Tools: calc, json (builder), regex, datetime
- Web & Network: http (status), serve, qr
- Color & Design: color (convert)
- Reference: unicode

For full disclosure, parts of the toolkit were built with Claude Code (I wanted to use this as an opportunity to play with it more). Feel free to open feature requests and/or contribute.
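For sanity-checking ut's output, the Python stdlib equivalents of a few of these tools (ut's own subcommand syntax isn't shown in the post, so no invocations are assumed here):

```python
import base64
import hashlib
import uuid
from urllib.parse import quote, unquote

print(base64.b64encode(b"hello"))               # base64 encode
print(hashlib.sha256(b"hello").hexdigest())     # sha256
print(uuid.uuid4())                             # uuid v4
print(quote("a b&c"), unquote("a%20b%26c"))     # url encode / decode
```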
Saturday, October 4, 2025
New top story on Hacker News: Show HN: Run – a CLI universal code runner I built while learning Rust
Show HN: Run – a CLI universal code runner I built while learning Rust
11 by esubaalew | 4 comments on Hacker News.
Hi HN — I'm learning Rust and decided to build a universal CLI for running code in many languages. The tool, Run, aims to be a single, minimal-dependency utility for: running one-off snippets (from CLI flags), running files, reading and executing piped stdin, and providing language-specific REPLs that you can switch between interactively.

I designed it to support both interpreted languages (Python, JS, Ruby, etc.) and compiled languages (Rust, Go, C/C++). It detects languages from flags or file extensions, can compile temporary files for compiled languages, and exposes a unified REPL experience with commands like :help, :lang, and :quit.

Install: cargo install run-kit (or use the platform downloads on GitHub). Source & releases: https://ift.tt/PDl9nek

I used Rust while following the official learning resources and used AI to speed up development, so I expect there are bugs and rough edges. I'd love feedback on: usability and UX of the REPL, edge cases for piping input to language runtimes, security considerations (sandboxing/resource limits), and packaging and cross-platform distribution.

Thanks — I'll try to answer questions and share design notes.
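The dispatch idea at the heart of a universal runner, sketched in Python for illustration (Run itself is Rust): map a file extension to either an interpreter or a compile-then-execute recipe, then shell out.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

INTERPRETED = {".py": ["python3"], ".js": ["node"], ".rb": ["ruby"]}

def run_file(path: str) -> None:
    ext = Path(path).suffix
    if ext in INTERPRETED:
        subprocess.run(INTERPRETED[ext] + [path], check=True)
    elif ext == ".go":
        subprocess.run(["go", "run", path], check=True)
    elif ext == ".rs":
        # Compiled languages need a temporary binary, as the post describes.
        with tempfile.TemporaryDirectory() as tmp:
            binary = str(Path(tmp) / "main")
            subprocess.run(["rustc", path, "-o", binary], check=True)
            subprocess.run([binary], check=True)
    else:
        sys.exit(f"no runner for {ext}")

run_file(sys.argv[1])
```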
Friday, October 3, 2025
Thursday, October 2, 2025
Wednesday, October 1, 2025
Tuesday, September 30, 2025
Monday, September 29, 2025
Sunday, September 28, 2025
Saturday, September 27, 2025
Friday, September 26, 2025
Thursday, September 25, 2025
Wednesday, September 24, 2025
Tuesday, September 23, 2025
New top story on Hacker News: Show HN: FlyCode – Recover Stripe payments by automatically using backup cards
Show HN: FlyCode – Recover Stripe payments by automatically using backup cards
10 by JakeVacovec | 25 comments on Hacker News.
We built FlyCode after seeing subscription businesses lose ~35% of recurring revenue each year to failed payments — even when customers had other valid cards on file.

The problem: When a customer's primary card fails, Stripe retries a few times, then cancels the subscription. If that customer has a backup card, it isn't tried. At least 20% of active customers have more than one card on file, which means a lot of preventable churn.

Our solution: FlyCode automatically identifies whether a customer has other valid cards on file and retries them when a subscription payment fails. You can configure when these retries happen during the dunning period (beginning, middle, end) and define validity rules (e.g. "card was used in the last 180 days"). It's a Stripe app — no code changes needed. We've seen 18-20% higher recovery rates from our core retry engine, plus another 5-10% from using backup cards. Importantly, there was no increase in refunds or chargebacks — in fact, rates were lower than merchant averages. Big companies like Microsoft and Amazon already do this internally; we wanted to make the same capability accessible to smaller SaaS teams.

Under the hood: FlyCode monitors for failed invoices, checks for available backup methods via Stripe's PaymentMethod API, and systematically retries in a way that avoids service disruption or manual workflows. (A rough sketch of that flow is below.)

We're Jake, Etai, and Tzachi — we previously built payment recovery systems at startups and enterprises, which is how we discovered this gap. You can try it here: [ https://ift.tt/xzNwomf ]

We'd love feedback from anyone dealing with subscription payment failures. What's been your experience with involuntary churn? Have you considered leveraging backup payment methods?
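A hedged sketch of the backup-card mechanism against Stripe's Python SDK — this is the general idea, not FlyCode's retry engine; their validity rules and dunning-stage timing are more involved than the naive filter here.

```python
import time
import stripe

stripe.api_key = "sk_test_..."  # placeholder key

def retry_with_backup_cards(invoice_id: str, customer_id: str) -> bool:
    invoice = stripe.Invoice.retrieve(invoice_id)
    failed_pm = invoice.default_payment_method
    cards = stripe.PaymentMethod.list(customer=customer_id, type="card")
    for pm in cards.auto_paging_iter():
        if pm.id == failed_pm:
            continue  # skip the card that already failed
        if pm.created < time.time() - 180 * 86400:
            continue  # naive stand-in for "used in the last 180 days"
        try:
            stripe.Invoice.pay(invoice_id, payment_method=pm.id)
            return True
        except stripe.error.CardError:
            continue  # this backup declined too; try the next card on file
    return False
```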
Monday, September 22, 2025
Sunday, September 21, 2025
New top story on Hacker News: Lightweight, highly accurate line and paragraph detection
Lightweight, highly accurate line and paragraph detection
3 by colonCapitalDee | 0 comments on Hacker News.
Saturday, September 20, 2025
Friday, September 19, 2025
Thursday, September 18, 2025
New top story on Hacker News: Learn Your Way: Reimagining Textbooks with Generative AI
Learn Your Way: Reimagining Textbooks with Generative AI
51 by FromTheArchives | 12 comments on Hacker News.
Wednesday, September 17, 2025
Tuesday, September 16, 2025
Monday, September 15, 2025
New top story on Hacker News: Show HN: AI-powered web service combining FastAPI, Pydantic-AI, and MCP servers
Show HN: AI-powered web service combining FastAPI, Pydantic-AI, and MCP servers
10 by Aherontas | 2 comments on Hacker News.
Hey all! I recently gave a workshop talk at PyCon Greece 2025 about building production-ready agent systems. For the workshop, I put together a demo repo: https://ift.tt/UAjGfrd... (I will also add the slides soon to my blog: https://ift.tt/0ZYk5fR )

The idea was to show how multiple AI agents can collaborate using FastAPI + Pydantic-AI, with protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) for safe communication and orchestration.

Features:
- Multiple agents running in containers
- MCP servers (Brave search, GitHub, filesystem, etc.) as tools
- A2A communication between services
- Minimal UI for experimentation with tech-trend and repo analysis

I built this repo because most agent frameworks look great in isolated demos, but fall apart when you try to glue agents together into a real application. My goal was to help people experiment with these patterns and move closer to real-world use cases. It's not production-grade, but I would love feedback, criticism, or war stories from anyone who's tried building actual multi-agent systems.

Big questions: Do you think agent-to-agent protocols like MCP/A2A will stick? Or will the future be mostly single powerful LLMs with plugin stacks?

Thanks — excited to hear what the HN crowd thinks!
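The smallest version of the FastAPI + Pydantic-AI pairing the repo demonstrates — a sketch, not the workshop code; the model string and result attribute may need adjusting to your pydantic-ai version.

```python
from fastapi import FastAPI
from pydantic_ai import Agent

app = FastAPI()
agent = Agent(
    "openai:gpt-4o",  # any supported provider:model string
    system_prompt="You analyse tech trends in GitHub repos.",
)

@app.get("/analyse")
async def analyse(q: str) -> dict:
    result = await agent.run(q)
    return {"answer": result.output}  # .data in older pydantic-ai releases

# Run with: uvicorn main:app --reload
```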
Sunday, September 14, 2025
Saturday, September 13, 2025
Friday, September 12, 2025
Thursday, September 11, 2025
Wednesday, September 10, 2025
New top story on Hacker News: Launch HN: Recall.ai (YC W20) – API for meeting recordings and transcripts
Launch HN: Recall.ai (YC W20) – API for meeting recordings and transcripts
13 by davidgu | 3 comments on Hacker News.
Hey HN, we're David and Amanda from Recall.ai ( https://www.recall.ai ). Today we're launching our Desktop Recording SDK, a way to get meeting data without a bot in the meeting: https://ift.tt/82ac1b0 . It's our biggest release in quite a while, so we thought we'd finally do our Launch HN :) Here's a demo that shows it producing a transcript from a meeting, followed by examples in code: https://www.youtube.com/watch?v=4croAGGiKTA . API docs are at https://docs.recall.ai/ .

Back in W20, our first product was an API that lets you send a bot participant into a meeting. This gives developers access to audio/video streams and other data in the meeting. Today, this API powers most of the meeting recording products on the market. Recently, recording meetings through a desktop app instead of a bot has become popular. Many products like Notion and ChatGPT have added desktop recording functionality, and LLMs have made it easier to work with unstructured transcripts. But it's actually hard to reliably record meetings at scale with a desktop app, and most developers who want to add recording functionality don't want to build all this infrastructure. A basic recording with just the microphone and system audio is fairly straightforward, since you can use the system APIs. It gets a lot harder when you want to capture speaker names, produce a video recording, get real-time data, or run this in production at scale:

- Capturing speaker names involves using accessibility APIs to screen-scrape the video conference window and monitor who is speaking at what time. When video conferencing platforms change their UI, we must ship a fix immediately so this keeps working.
- Producing a clean video recording that doesn't capture the video conferencing platform's UI involves detecting the participant tiles, cropping them out, and compositing them together into a clean video.
- Because the desktop recording code runs on end-user machines, we need to make it as efficient as possible. This means writing highly platform-optimized code, taking advantage of hardware encoders when available, and spending a lot of time on profiling and performance testing.

Meeting recording has zero margin for failure: if anything breaks, you lose the data forever. Reliability is especially important, which dramatically increases the engineering effort required. Our Desktop Recording SDK takes care of all this and lets developers build meeting recording features into their desktop apps, so they can record both video conferences and in-person meetings without a bot.

We built Recall.ai because we experienced this problem ourselves. At our first startup, we built a tool for product managers that included a meeting recording feature, and 70% of our engineering time was consumed by just this feature! We ended up starting Recall.ai to solve it instead. Since then, over 2,000 companies have used us to power their recording features, e.g. HubSpot for sales call recording and ClickUp for their AI note taker. Our users are engineering teams building commercial products for financial services, telehealth, incident management, sales, interviewing, and more. We also power internal tooling for large enterprises.

Running this sort of infrastructure has led to unexpected technical challenges! For example, we had to debug a 1-in-36-million segfault in our audio encoder ( https://ift.tt/G9hD4lT... ), we encountered a Postgres lock-up that only occurs when you have tens of thousands of concurrent writers ( https://ift.tt/ZbCPdpu ), and we saved over $1M a year on AWS by optimizing the way we shuffle data between our processes ( https://ift.tt/5c8vl3h ).

You can try it here: https://www.recall.ai . It's self-serve with $5 of free credits. Pricing starts at $0.70 for every hour of recording, prorated to the second, with volume discounts at scale. All data recorded through Recall.ai is the property of our customers, we support 0-day retention, and we don't train models on customer data. We would love your feedback!
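To make "prorated to the second" concrete, here is a minimal sketch of the arithmetic, assuming simple linear proration from the stated $0.70/hour rate (actual billing rules and volume discounts may differ):

    # Sketch: prorated recording cost at the stated $0.70/hour rate.
    # Assumes simple linear proration to the second; real billing
    # (discounts, rounding) may differ.
    HOURLY_RATE_USD = 0.70

    def recording_cost(seconds: int) -> float:
        """Cost of a recording billed per second at an hourly rate."""
        return HOURLY_RATE_USD * seconds / 3600

    # A 47-minute meeting:
    print(f"${recording_cost(47 * 60):.4f}")  # ~$0.5483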
Tuesday, September 9, 2025
Monday, September 8, 2025
Sunday, September 7, 2025
Saturday, September 6, 2025
Friday, September 5, 2025
Thursday, September 4, 2025
New top story on Hacker News: A high schooler writes about AI tools in the classroom
A high schooler writes about AI tools in the classroom
88 by dougb5 | 74 comments on Hacker News.
https://ift.tt/TcXbSxm
Wednesday, September 3, 2025
New top story on Hacker News: Vector search on our codebase transformed our SDLC automation
Vector search on our codebase transformed our SDLC automation
18 by antonybrahin | 2 comments on Hacker News.
Hey HN, in software development, the process of turning a user story into detailed documentation and actionable tasks is critical. This manual process, however, is often a source of inconsistency and a significant time investment, so I wanted to see if I could streamline and elevate it. I know this is a hot space, with big players like GitHub and Atlassian building integrated AI, and startups offering specialized platforms. My goal wasn't to compete with them, but to see what was possible by building a custom, "glass box" solution using the best tool for each part of the job, without being locked into a single ecosystem. What makes this approach different is the flexibility and full control. Instead of a pre-packaged product, this is a resilient workflow built on Power Automate, which acts as the orchestrator for a sequence of API calls:

- Five calls to the Gemini API for the core generation steps (requirements, tech spec, test strategy, etc.).
- One call to an Azure OpenAI model to create vector embeddings of our codebase.
- One call to Azure AI Search to perform the Retrieval-Augmented Generation (RAG). This was the key to getting context-aware, non-generic outputs: it reads our actual code to inform the technical spec and tasks.
- A number of direct calls to the Azure DevOps REST API (using a PAT) to create the wiki pages and work items, since the standard connectors were a bit limited.

The biggest challenge was moving beyond simple prompts and engineering a resilient system. Forcing the final output into a rigid JSON schema instead of parsing free text was a game-changer for reliability (a sketch of that validation step follows below). The result is a system that saves us hours on every story and produces remarkably consistent, high-quality documentation and tasks. The full write-up with all the challenges, final prompts, and screenshots is in the linked blog post. I'm here to answer any questions. Would love to hear your feedback and ideas!
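Since the post credits the rigid JSON schema for reliability, here is a minimal sketch of what that validation step could look like in Python. The schema and field names are hypothetical illustrations, not the author's actual prompt contract:

    # Sketch: validate LLM output against a rigid JSON schema instead of
    # parsing free text. Schema and field names are hypothetical.
    import json
    from jsonschema import validate  # pip install jsonschema

    STORY_SCHEMA = {
        "type": "object",
        "properties": {
            "requirements": {"type": "array", "items": {"type": "string"}},
            "tasks": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "estimate_hours": {"type": "number"},
                    },
                    "required": ["title"],
                },
            },
        },
        "required": ["requirements", "tasks"],
    }

    def parse_model_output(raw: str) -> dict:
        """Reject anything that isn't well-formed, schema-conforming JSON."""
        data = json.loads(raw)        # raises ValueError on malformed JSON
        validate(data, STORY_SCHEMA)  # raises ValidationError on schema drift
        return data

    raw = '{"requirements": ["OAuth login"], "tasks": [{"title": "Add OAuth"}]}'
    print(parse_model_output(raw))

Rejecting non-conforming output and retrying is usually cheaper than trying to repair free text after the fact.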
Tuesday, September 2, 2025
Monday, September 1, 2025
New top story on Hacker News: Show HN: woomarks, transfer your Pocket links to this app or self-host it
Show HN: woomarks, transfer your Pocket links to this app or self-host it
9 by earlyriser | 0 comments on Hacker News.
Pocket is shutting down and I really, really liked it. So I built woomarks, an app that lets you save links with a similar UI. It's very minimal, but it does everything I liked about Pocket, you can bulk-import your links, and you can use the hosted app or self-host.

- Public app that you can test: https://woomarks.com/
- My self-hosted version, where you can see my saves: https://ift.tt/m9ZM5dY
- Repository if you want to self-host: https://ift.tt/nMdf8s6

Export links from Pocket here: https://ift.tt/y4KiESX (the last day is in October 2025).

Features:
- Add/delete links
- Search
- Tags
- Bookmarklet (useful for a 2-click save)
- Data reads from: a CSV file on the server (these links are public) and local storage in the browser (these links are visible only to the user)
- Local-storage saving
- Import to local storage from a CSV file
- Export to CSV from local storage
- Export to CSV from the server CSV (useful when links are "deleted" using the app but are actually just hidden via a local-storage blacklist; a sketch of that merge follows below)
- Export to CSV from both places
- No external libraries; vanilla CSS and vanilla JS
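The blacklist-based "delete" is worth a sketch: the server CSV never changes, and deletion is just a client-side filter applied at export time. A rough Python equivalent of that merge, with made-up file and column names:

    # Sketch: export server-side CSV saves minus locally "deleted" links.
    # File and column names are hypothetical, not woomarks' actual format.
    import csv

    def export_visible(server_csv: str, blacklist: set[str]) -> list[dict]:
        """Return server rows whose URL is not locally blacklisted."""
        with open(server_csv, newline="", encoding="utf-8") as f:
            return [row for row in csv.DictReader(f)
                    if row["url"] not in blacklist]

    deleted = {"https://example.com/old-link"}  # the local-storage blacklist
    for row in export_visible("bookmarks.csv", deleted):
        print(row["url"], row.get("tags", ""))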
Sunday, August 31, 2025
New top story on Hacker News: Show HN: Anonymous Age Verification
Show HN: Anonymous Age Verification
4 by jwally | 2 comments on Hacker News.
I'm not an expert in this area, but here's an attempt at a cost-effective, anonymous age verification flow that probably covers ~70% of use cases in the United States. The basic premise is to leverage your bank (which already had to perform KYC on you to open an account) to attest to your age for age-restricted merchant sites (pornhub, gambling, etc.) without sharing any more information than necessary. The flow works like this:

1) You go to gambling.com
2) It asks you to verify your age
3) You choose "Bank Verification"
4) You trigger a WebAuthn credential-creation flow
5) gambling.com gives you a string to copy
6) You log into your bank
7) You go to bank.com/age-verify
8) You paste in the string you were given
9) The bank verifies it (and you) and creates a signed payload with your age claims (over_18: true, over_21: false), as sketched below
10) You copy this and go back to gambling.com
11) You paste the string back into gambling.com
12) You perform a WebAuthn authentication flow
13) gambling.com verifies everything (signatures, WebAuthn, etc.)
14) gambling.com sets a session cookie and STRONGLY encourages you to create an account (with a passkey), so you don't have to verify your age on every visit

The mechanics might feel off, but this feels like it's in the neighborhood of a workable way to perform anonymous age verification. It is virtually free and requires extremely light infrastructure. Banks could be incentivized with small payments, or offer it because everyone else does and they don't want to be left behind.
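Step 9 is the cryptographic core: the bank signs a minimal claims payload that the merchant can verify without learning anything else about you. A minimal sketch with Ed25519; the field names, challenge binding, and key distribution are assumptions, not part of the proposal:

    # Sketch: bank signs an age-claims payload; merchant verifies it.
    # Field names, challenge handling, and key distribution are assumptions.
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.exceptions import InvalidSignature

    # Bank side: sign claims bound to the merchant's challenge string.
    bank_key = Ed25519PrivateKey.generate()
    claims = {"over_18": True, "over_21": False,
              "challenge": "string-from-merchant"}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = bank_key.sign(payload)

    # Merchant side: verify with the bank's published public key.
    bank_public = bank_key.public_key()
    try:
        bank_public.verify(signature, payload)
        print("age claims verified:", claims)
    except InvalidSignature:
        print("rejected: bad signature")

Binding the signature to the merchant-supplied challenge is what stops a signed payload from being replayed on a different site or session.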
Saturday, August 30, 2025
Friday, August 29, 2025
Thursday, August 28, 2025
Wednesday, August 27, 2025
Tuesday, August 26, 2025
New top story on Hacker News: Show HN: SecretMemoryLocker – File Encryption Without Static Passwords
Show HN: SecretMemoryLocker – File Encryption Without Static Passwords
7 by YuriiDev | 0 comments on Hacker News.
I built SecretMemoryLocker ( https://ift.tt/3K57sAi ), a file encryption tool that generates keys dynamically from your answers to personal questions instead of using a static master password. This makes offline brute-force attacks much more difficult. Think of it as a password manager meets mnemonic seed recovery, but without storing any sensitive keys on disk.

Why? I kept losing master passwords and wanted a solution that wasn't tied to a single point of failure. I also wanted to create a "digital legacy" that my family could access only under specific conditions. The core principle is knowledge-based encryption: the key only exists in memory while you provide the correct answers.

Status:
* MVP is ready for Windows (.exe).
* Linux and macOS support is planned.
* UI is available in English, Spanish, and Ukrainian.

Key features:
* No static secrets: no master password or seed phrase is ever stored. The key is reconstructed on the fly.
* Knowledge-based key generation: the final encryption key is derived from a combination of your personal answers and file metadata.
* Offline brute-force resistance: uses MirageLoop, a decoy system that activates when incorrect answers are entered. Instead of decrypting real data, it generates an endless sequence of AI-created questions from a secure local database, creating an illusion of progress while keeping your real data untouched.
* Offline AI generation mode: an optional offline Q&A generator (prototype).

How it works (simplified):
1) Files are packed into an AES-256-encrypted ZIP archive.
2) A JSON key file stores the questions in an encrypted chain. Each subsequent question is encrypted with a key derived from the previous correct answer and the file's hash, which forces you to answer sequentially.
3) The final encryption key for the ZIP file is derived by combining the hashes of all your correct answers:

K_final = SHA256(H(answer1 + file_hash) + H(answer2 + file_hash) + ...)

(Note: we are aware that a fast hash like SHA256 is not ideal for a KDF. We plan to migrate to Argon2 in a future release to further strengthen resistance against brute-force attacks.)

To encrypt, you provide a file, which creates two outputs:

your_file.txt → your_file_SMLkey.json + your_file_SecretML.zip

To decrypt, you need both files and the correct answers.

Install & quick start: download the EXE from GitHub Releases (no dependencies needed): https://ift.tt/cy2ZRuK

Encrypt: SecretMemoryLocker.exe --encrypt "C:\docs\important.pdf"
Decrypt: SecretMemoryLocker.exe --decrypt "C:\docs\important_SMLkey.json"

I would love your feedback on the concept, the user experience, and any security assumptions I've made. Thanks!
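The stated derivation is concrete enough to transcribe directly. A minimal Python sketch of the formula above, mirroring the post's SHA256 construction (not the planned Argon2 variant):

    # Sketch of the stated derivation:
    # K_final = SHA256(H(answer1 + file_hash) + H(answer2 + file_hash) + ...)
    # Mirrors the post's SHA256 construction; Argon2 is planned later.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def derive_final_key(answers: list[str], file_hash: bytes) -> bytes:
        chunks = b"".join(h(a.encode("utf-8") + file_hash) for a in answers)
        return hashlib.sha256(chunks).digest()

    file_hash = h(b"contents of important.pdf")
    key = derive_final_key(["first pet", "childhood street"], file_hash)
    print(key.hex())  # 32-byte key for the AES-256 archive

As the note concedes, SHA256 here is fast by design, so low-entropy answers remain guessable offline; a memory-hard KDF like Argon2 raises that cost substantially.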
Monday, August 25, 2025
Sunday, August 24, 2025
Saturday, August 23, 2025
Friday, August 22, 2025
Thursday, August 21, 2025
New top story on Hacker News: Show HN: Tool shows UK properties matching group commute/time preferences
Show HN: Tool shows UK properties matching group commute/time preferences
5 by fryingdan | 3 comments on Hacker News.
I came up with this idea when I was looking to move to London with a friend. I quickly learned how frustrating it is to trial-and-error housing options for days on end, only to be denied after all that searching by some grotesque counteroffer. On top of this, finding properties that meet the budgets, commuting preferences, and work locations of everyone in a group is a Sisyphean task: it often ends in failure, with somebody exceeding their original budget or dropping out.

To solve this I built a tool ( https://closemove.com/ ) that:
- lets you enter between 1 and 6 people's workplaces, budgets, and maximum commute times
- filters public rental listings and shows only the ones that satisfy everyone's constraints
- shows results in either a list or map view

No sign-up or validation is required at present. It's currently UK-only, but please let me know if you'd want me to expand it to your city/country. It currently works best in London (with walking, cycling, driving, and public transport links connected) and works decently in the rest of the UK (walking, cycling, and driving only). This started as a side project and still needs improvement. I'd appreciate any feedback!
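The matching itself is an intersection of per-person constraints. A toy version of the filter; the fields and the commute function are placeholders, since the real service presumably calls routing APIs:

    # Sketch: keep only listings satisfying every group member's budget
    # and commute limit. commute_minutes() stands in for a routing API;
    # all fields are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Person:
        workplace: tuple[float, float]  # (lat, lon)
        max_budget: int                 # rent share per month
        max_commute_min: int

    @dataclass
    class Listing:
        location: tuple[float, float]
        rent_per_person: int

    def commute_minutes(a: tuple[float, float], b: tuple[float, float]) -> float:
        # Placeholder: a real implementation would call a routing service.
        return (abs(a[0] - b[0]) + abs(a[1] - b[1])) * 100

    def matches(listing: Listing, group: list[Person]) -> bool:
        return all(
            listing.rent_per_person <= p.max_budget
            and commute_minutes(p.workplace, listing.location) <= p.max_commute_min
            for p in group
        )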
Wednesday, August 20, 2025
Tuesday, August 19, 2025
Monday, August 18, 2025
Sunday, August 17, 2025
New top story on Hacker News: Show HN: NextDNS Adds "Bypass Age Verification"
Show HN: NextDNS Adds "Bypass Age Verification"
20 by nextdns | 3 comments on Hacker News.
We just shipped a new feature in NextDNS: Bypass Age Verification. More and more sites (especially adult ones) now force users to upload IDs or selfies to continue. We think that's a terrible idea: handing over government documents to random sites is a huge privacy risk. This new setting works around those verification flows via DNS tricks. It's available today to all users, including free accounts. We're curious how the HN community feels about this. Is it the right way to protect privacy online, or will it just provoke regulators to push harder? https://nextdns.io
Saturday, August 16, 2025
Friday, August 15, 2025
New top story on Hacker News: When the CIA got away with building a heart attack gun
When the CIA got away with building a heart attack gun
38 by douchecoded | 10 comments on Hacker News.
Thursday, August 14, 2025
Wednesday, August 13, 2025
Tuesday, August 12, 2025
Monday, August 11, 2025
New top story on Hacker News: Show HN: Play Pokémon to unlock your Wayland session
Show HN: Play Pokémon to unlock your Wayland session
3 by anajimi | 2 comments on Hacker News.
Hello everyone! I've created a Game Boy emulator setup that unlocks my Wayland session, and I wanted to share the project here! I've been a Linux enthusiast since I was a kid. What always captivated me was the freedom to customize my system exactly the way I wanted. With Wayland, we've reached an incredible level of performance; it's like turning your operating system into a video game! I've always been fascinated by the blend of fun and the serious, technical nature of an OS. That's what inspired me to create this project. I started by studying Wayland, its protocol, and how to build a compositor. Then I became particularly intrigued by the concept of a locker, which reminded me a bit of an escape game. That's when I thought: how cool would it be to solve a puzzle to unlock your session instead of just typing a password? Since I've worked with emulators in the past and I'm a huge Pokémon fan, the idea of building the puzzle around that game came to me instantly! Technically, the locker code and the Wayland protocol handling have been implemented from scratch (using EGL and wl_keyboard_listeners). My locker runs a version of the gbcc emulator modded by myself. The emulator waits for one precise value to be set at a given memory address. I modded the Pokémon game to my needs: when the password is correct, the game writes the expected value to that memory address, so the emulator knows it should unlock the session. Hope you appreciate this project!
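The unlock mechanism reduces to polling one emulator memory address for a sentinel value. A minimal sketch of that watcher; the address and value are invented for illustration, and the real mod uses its own:

    # Sketch: poll emulator RAM for a sentinel value at a fixed address,
    # then unlock the session. Address/value are invented for illustration.
    import time

    UNLOCK_ADDR = 0xC123   # hypothetical Game Boy work-RAM address
    UNLOCK_VALUE = 0x42    # hypothetical sentinel written by the modded ROM

    def watch_and_unlock(read_byte, unlock, poll_s: float = 0.1) -> None:
        """read_byte(addr) -> int is supplied by the emulator core;
        unlock() releases the Wayland session lock."""
        while read_byte(UNLOCK_ADDR) != UNLOCK_VALUE:
            time.sleep(poll_s)
        unlock()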
Sunday, August 10, 2025
Saturday, August 9, 2025
Friday, August 8, 2025
Thursday, August 7, 2025
New top story on Hacker News: GPT-5: Key characteristics, pricing and system card
GPT-5: Key characteristics, pricing and system card
107 by Philpax | 28 comments on Hacker News.
System card: https://ift.tt/SnPKQUH...
Wednesday, August 6, 2025
Tuesday, August 5, 2025
Monday, August 4, 2025
Sunday, August 3, 2025
Saturday, August 2, 2025
Friday, August 1, 2025
New top story on Hacker News: Show HN: TraceRoot – Open-source agentic debugging for distributed services
Show HN: TraceRoot – Open-source agentic debugging for distributed services
10 by xinweihe | 0 comments on Hacker News.
Hey, Xinwei and Zecheng here; we are the authors of TraceRoot ( https://ift.tt/0intYSb ). TraceRoot ( https://traceroot.ai ) is an open-source debugging platform that helps engineers fix production issues faster by combining structured traces, logs, source code context, and discussions in GitHub PRs, issues, Slack channels, etc. with AI agents.

At the heart are our lightweight Python ( https://ift.tt/s07MzWj ) and TypeScript ( https://ift.tt/LOFJDxj ) SDKs: they hook into your app using OpenTelemetry and capture logs and traces. These are sent either to a local Jaeger ( https://ift.tt/PIspe6o ) + SQLite backend or to our cloud backend, where we correlate them into a single view.

From there, our custom agent takes over. The agent builds a heterogeneous execution tree that merges spans, logs, and GitHub context into one internal structure. This allows it to model the control and data flow of a request across services. It then uses LLMs to reason over this tree, pruning irrelevant branches, surfacing anomalous spans, and identifying likely root causes. You can ask questions like "what caused this timeout?" or "summarize the errors in these 3 spans", and it can trace the failure back to a specific commit, summarize the chain of events, or even propose a fix via a draft PR.

We also built a debugging UI that ties everything together: you explore traces visually, pick spans of interest, and get AI-assisted insights with full context, meaning logs, timings, metadata, and surrounding code. Unlike most tools, TraceRoot stores long-term debugging history and builds structured context for each company, something we haven't seen many others do in this space.

What's live today:
- Python and TypeScript SDKs for structured logs and traces
- AI summaries, GitHub issue generation, and PR creation
- A debugging UI that ties everything together

TraceRoot is MIT-licensed and easy to self-host (via Docker). We support both local mode (Jaeger + SQLite) and cloud mode. We're inspired by OSS projects like PostHog and Supabase: the core is free, while enterprise features like agent mode, multi-tenancy, and Slack integration are paid. If you find it interesting, you can see a demo video here: https://www.youtube.com/watch?v=nb-D3LM0sJM

We'd love for you to try TraceRoot ( https://traceroot.ai ) and share any feedback. If you're interested, our code is available here: https://ift.tt/0intYSb . If we don't have something, let us know and we'd be happy to build it for you. We look forward to your comments!
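The "heterogeneous execution tree" is easier to see in code. A bare-bones sketch of folding spans and their logs into one prunable structure; the field names are guesses, not TraceRoot's actual internal schema:

    # Sketch: fold OpenTelemetry-style spans and their logs into one tree,
    # then prune error-free subtrees. Field names are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        span_id: str
        name: str
        status: str                      # e.g. "ok" or "error"
        logs: list[str] = field(default_factory=list)
        children: list["Node"] = field(default_factory=list)

    def build_tree(spans: list[dict], logs: dict[str, list[str]]) -> Node | None:
        nodes = {s["span_id"]: Node(s["span_id"], s["name"], s["status"],
                                    logs.get(s["span_id"], [])) for s in spans}
        root = None
        for s in spans:
            parent = s.get("parent_id")
            if parent and parent in nodes:
                nodes[parent].children.append(nodes[s["span_id"]])
            else:
                root = nodes[s["span_id"]]
        return root

    def prune_ok(node: Node) -> Node | None:
        """Drop subtrees with no errors so the agent reasons over less noise."""
        node.children = [c for c in map(prune_ok, node.children) if c]
        return node if node.status == "error" or node.children else None

Pruning before the LLM sees the tree is what keeps the context window focused on the spans that could plausibly explain the failure.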
Thursday, July 31, 2025
New top story on Hacker News: Show HN: Sourcebot – Self-hosted Perplexity for your codebase
Show HN: Sourcebot – Self-hosted Perplexity for your codebase
14 by bshzzle | 1 comments on Hacker News.
Hi HN, we're Brendan and Michael, the creators of Sourcebot ( https://ift.tt/aACFRoB ), a self-hosted code understanding tool for large codebases. We originally launched on HN 9 months ago with code search ( https://ift.tt/rvWYhQF ), and we're excited to share our newest feature: Ask Sourcebot.

Ask Sourcebot is an agentic search tool that lets you ask complex questions about your entire codebase in natural language and returns a structured response with inline citations back to your code. Some questions you might ask:

- "How does authentication work in this codebase? What library is being used? What providers can a user log in with?" ( https://ift.tt/G4BgALK )
- "When should I use channels vs. mutexes in Go? Find real usages of both and include them in your answer" ( https://ift.tt/qIgYBu0 )
- "How are shards laid out in memory in the Zoekt code search engine?" ( https://ift.tt/N2K3gkM )
- "How do I call C from Rust?" ( https://ift.tt/MrYvzHy )

You can try it yourself on our demo site ( https://ift.tt/tDUsZ0F ) or check out our demo video ( https://youtu.be/olc2lyUeB-Q ).

How is this different from existing tools like Cursor or Claude Code?

- Sourcebot focuses solely on code understanding. We believe that, more than ever, the main bottleneck development teams face is not writing code, it's acquiring the context needed to make quality changes that are cohesive within the wider codebase. This is true regardless of whether the author is a human or an LLM.
- Rather than living in your IDE or terminal, Sourcebot is a web app. This lets us play to the strengths of the web: rich UX and ubiquitous access. We put a ton of work into taking the best parts of IDEs (code navigation, file explorer, syntax highlighting) and packaging them with a custom UX (rich Markdown rendering, inline citations, @ mentions) that is easily shareable between team members.
- Sourcebot can maintain an up-to-date index of thousands of repos hosted on GitHub, GitLab, Bitbucket, Gerrit, and other hosts. This lets you ask questions about repositories without checking them out locally, which is especially helpful when ramping up on unfamiliar parts of the codebase or working with systems that are spread across multiple repositories, e.g., microservices.
- You can BYOK (Bring Your Own API Key) to any supported reasoning model. We currently support 11 different model providers (like Amazon Bedrock and Google Vertex) and plan to add more.
- Sourcebot is self-hosted, fair source, and free to use.

Under the hood, we expose our existing regular-expression search, code navigation, and file-reading APIs to an LLM as tool calls. We instruct the LLM via a system prompt to gather the context needed to answer the user's question through these tools, and then to provide a concise, structured response. This includes inline citations, which are just structured data the LLM can embed in its response; the client identifies them and renders them appropriately. We built this on some amazing libraries like the Vercel AI SDK v5, CodeMirror, react-markdown, and Slate.js, among others.

This architecture is intentionally simple. We decided not to introduce additional techniques like vector embeddings or multi-agent graphs, since we wanted to push the limits of what we could do with what we had on hand. We plan to revisit our approach as we get user feedback on what works (and what doesn't). We are really excited about pushing the envelope of code understanding.

Give it a try: https://ift.tt/AmIv0zN . Cheers!
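The described architecture, search and file-reading APIs exposed to an LLM as tool calls in a loop, fits in a short sketch. The tool surface and chat-client interface below are hypothetical; the real system is built on the Vercel AI SDK:

    # Sketch: a minimal agentic loop exposing code search as a tool call.
    # The tool names and chat-client interface are hypothetical.
    import pathlib
    import re

    def regex_search(pattern: str, repo: str) -> list[str]:
        """Grep-style tool the model can call; returns 'path:line: text' hits."""
        hits = []
        for path in pathlib.Path(repo).rglob("*.py"):
            text = path.read_text(errors="ignore")
            for i, line in enumerate(text.splitlines(), 1):
                if re.search(pattern, line):
                    hits.append(f"{path}:{i}: {line.strip()}")
        return hits[:50]  # cap results to keep the context window small

    def answer(question: str, repo: str, chat) -> str:
        """chat(messages) is a stand-in for any tool-calling LLM API."""
        messages = [{"role": "user", "content": question}]
        while True:
            reply = chat(messages)
            if reply.get("tool") == "regex_search":
                result = regex_search(reply["pattern"], repo)
                messages.append({"role": "tool", "content": "\n".join(result)})
            else:
                return reply["content"]  # final answer with inline citations

The "path:line" strings returned by the tool are exactly the kind of structured data a model can echo back as inline citations for the client to render.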