Code⇄GUI bidirectional editing via LSP
3 by jamesbvaughan | 0 comments on Hacker News.
Sunday, June 29, 2025
New top story on Hacker News: Show HN: Sharpe Ratio Calculation Tool
Show HN: Sharpe Ratio Calculation Tool
5 by navquant | 0 comments on Hacker News.
I built a simple but effective Sharpe Ratio calculator that shows its full historical variation over time. Should I add other ratios like Calmar and Sortino?
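For readers curious what such a calculator computes, here is a minimal sketch of an annualized Sharpe ratio and its rolling ("historical variation") form. The window length and trading-day constant below are illustrative assumptions, not the tool's actual defaults.

```python
import math

def sharpe(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio for a list of per-period returns."""
    excess = [r - risk_free / periods_per_year for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    std = math.sqrt(var)
    return (mean / std) * math.sqrt(periods_per_year) if std else 0.0

def rolling_sharpe(returns, window=63):
    """Sharpe ratio over each trailing window: the 'historical variation'."""
    return [sharpe(returns[i - window:i]) for i in range(window, len(returns) + 1)]
```

Calmar and Sortino would slot into the same rolling framework, swapping the denominator for max drawdown or downside deviation respectively.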
Monday, June 16, 2025
New top story on Hacker News: Show HN: Trieve CLI – Terminal-Based LLM Agent Loop with Search Tool for PDFs
Show HN: Trieve CLI – Terminal-Based LLM Agent Loop with Search Tool for PDFs
13 by skeptrune | 0 comments on Hacker News.
Hi HN, I built a CLI for uploading documents and querying them with an LLM agent that uses search tools rather than stuffing everything into the context window. I recorded a demo using the CrossFit 2025 rulebook that shows how this approach compares to traditional RAG and direct context injection[1].

The core insight is that LLMs running in loops with tool access are unreasonably effective at this kind of knowledge retrieval task[2]. Instead of hoping the right chunks make it into your context, the agent can iteratively search, refine queries, and reason about what it finds.

The CLI handles the full workflow:

```bash
trieve upload ./document.pdf
trieve ask "What are the key findings?"
```

You can customize the RAG behavior and check upload status, and responses stream back with expandable source references. I really enjoy having this workflow available in the terminal, and I'm curious whether others find this paradigm as compelling as I do. I'm considering adding more commands and customization options if there's interest. The tool is free for up to 1k document chunks. Source code is on GitHub[3] and available via npm[4]. Would love any feedback on the approach or CLI design!

[1]: https://www.youtube.com/watch?v=SAV-esDsRUk
[2]: https://ift.tt/V5eD2Qt
[3]: https://ift.tt/ohdI9mU...
[4]: https://ift.tt/6cKhDEM
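The "LLM agent loop with a search tool" pattern the post describes can be sketched like this. `llm` and `search` are hypothetical stand-ins (a real version would call a model API and Trieve's chunk search); this is not the project's actual code.

```python
def agent_loop(question, llm, search, max_steps=5):
    """Let the model iteratively search and refine instead of one-shot RAG."""
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        action = llm(transcript)          # model decides: search more, or answer
        if action["type"] == "search":
            hits = search(action["query"])
            transcript.append(f"Results for {action['query']!r}: {hits}")
        else:
            return action["answer"]
    return "No answer within step budget."
```

The key difference from classic RAG is that retrieval happens inside the loop, so the model can issue a second, sharper query after seeing what the first one returned.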
Sunday, June 15, 2025
New top story on Hacker News: Show HN: I'm a student who built an AI to chat with YouTube videos
Show HN: I'm a student who built an AI to chat with YouTube videos
12 by adrinant | 4 comments on Hacker News.
Wiyomi.com: YouTube + AI = a personal tutor for every learner. Please leave feedback so this tool gets better and more useful for you!
Tuesday, June 10, 2025
New top story on Hacker News: Ask HN: What cool skill or project interests you, but feels out of reach?
Ask HN: What cool skill or project interests you, but feels out of reach?
4 by akktor | 4 comments on Hacker News.
This question's for all those cool projects or skills you're secretly fascinated by, but haven't quite jumped into. Maybe you feel like you just don't have the right "brain" for it, or you're not smart enough to figure it out, or even worse, you simply have no clue how or where to even start. The idea here is to shine a light on these hidden interests and the little (or big!) mental blocks that come with them. If you're already rocking in those specific areas – or you've been there and figured out how to get past similar hurdles – please chime in! Share some helpful resources, dish out general advice, or just give a nudge of encouragement on how to take that intimidating first step. Let's help each other get unstuck!
Monday, June 9, 2025
New top story on Hacker News: Show HN: Munal OS: a graphical experimental OS with WASM sandboxing
Show HN: Munal OS: a graphical experimental OS with WASM sandboxing
38 by Gazoche | 5 comments on Hacker News.
Hello HN! Showing off the first version of Munal OS, an experimental operating system I have been writing in Rust on and off for the past few years. https://ift.tt/XNi41eI

It's a unikernel design that is compiled as a single EFI binary and does not use virtual address spaces for process isolation. Instead, applications are compiled to WASM and run inside an embedded WASM engine.

Other features:
* Fully graphical interface in HD resolution with mouse and keyboard support
* Desktop shell with window manager and contextual radial menus
* PCI and VirtIO drivers
* Ethernet and TCP stack
* Customizable UI toolkit providing various widgets, responsive layouts and flexible text rendering
* Embedded selection of applications, including:
  * A web browser supporting DNS, HTTPS and very basic HTML
  * A text editor
  * A Python terminal

Check out the README for the technical breakdown. Demo video: https://ift.tt/kjph7wS
Tuesday, May 27, 2025
New top story on Hacker News: Show HN: Maestro – A Framework to Orchestrate and Ground Competing AI Models
Show HN: Maestro – A Framework to Orchestrate and Ground Competing AI Models
4 by defqon1 | 0 comments on Hacker News.
I've spent the past few months designing a framework for orchestrating multiple large language models in parallel: not to choose the "best," but to let them argue, mix their outputs, and preserve dissent structurally. It's called Maestro. Here's the whitepaper: https://ift.tt/ozg5SUl (narrative version here: https://ift.tt/JfVDQlA... )

Core ideas:
- Prompts are dispatched to multiple LLMs (e.g., GPT-4, Claude, open-source models)
- The system compares their outputs and synthesizes them
- It never resolves into a single voice; it ends with a 66% rule: 2 votes for a primary output, 1 dissent preserved
- Human critics and analog verifiers can be triggered for physical-world confirmation when claims demand grounding
- The feedback loop learns not only from right/wrong outputs, but from what kinds of disagreements lead to deeper truth

Maestro isn't a product or API; it's a proposal for an open, civic layer of synthetic intelligence, designed for epistemic integrity and resistance to centralized control. Would love thoughts, critiques, or collaborators.
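The 66% rule is simple enough to sketch: with three model outputs, a 2-of-3 majority becomes the primary answer and the minority output is preserved as structured dissent rather than discarded. This is one illustrative reading of the rule, not code from the project.

```python
from collections import Counter

def resolve(outputs):
    """outputs: list of 3 model answers. Returns (primary, dissent)."""
    counts = Counter(outputs)
    answer, votes = counts.most_common(1)[0]
    if votes >= 2:
        # Majority reached: keep the minority view instead of dropping it.
        dissent = [o for o in outputs if o != answer]
        return answer, (dissent[0] if dissent else None)
    # No 2/3 majority: escalate, preserving all three voices.
    return None, outputs
```

The no-majority branch is where the post's human critics and analog verifiers would plausibly be triggered.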
New top story on Hacker News: Show HN: Free mammogram analysis tool combining deep learning and vision LLM
Show HN: Free mammogram analysis tool combining deep learning and vision LLM
5 by coolwulf | 3 comments on Hacker News.
I've built Neuralrad Mammo AI, a free research tool that combines deep learning object detection with vision language models to analyze mammograms. The goal is to provide researchers and medical professionals with a secondary analysis tool for investigation purposes.

Important disclaimers:
- NOT FDA 510(k) cleared; this is purely for research investigation
- Not for clinical diagnosis; results should only be used as a secondary opinion
- Completely free: no registration, no payment, no data retention

What it does:
1. Upload a mammogram image (JPEG/PNG)
2. AI identifies potential masses and calcifications
3. Vision LLM provides radiologist-style analysis
4. Interactive viewer with zoom/pan capabilities

You can try it with any mass/calcification mammo images, e.g. by searching Google for "mammogram images mass".

Key features:
- Detects and classifies masses (benign/malignant)
- Identifies calcifications (benign/malignant)
- Provides confidence scores and size assessments
- Generates detailed analysis using a vision LLM
- No data storage: images are processed and discarded

Use cases:
- Medical research and education
- Second opinion for researchers
- Algorithm comparison studies
- Teaching tool for radiology training
- Academic research validation

The system is designed specifically for research investigation purposes and to complement (never replace) professional medical judgment. I'm hoping this can be useful for the medical AI research community and welcome feedback on the approach. Address: https://ift.tt/0l3eaYu
Sunday, May 25, 2025
New top story on Hacker News: Show HN: Zli – A Batteries-Included CLI Framework for Zig
Show HN: Zli – A Batteries-Included CLI Framework for Zig
13 by caeser | 0 comments on Hacker News.
I built zli, a batteries-included CLI framework for Zig with a focus on DX and composability.

Key features:
- Typed flags with default values and help output
- Rich formatting and layout support
- Command trees with isolated execution logic

It's designed to feel good to use, not just to work, and built for real-world CLI apps, not toy examples. Would love feedback, feature ideas, or thoughts from other Zig devs. Repo here: https://ift.tt/EJ2hw8t
Sunday, May 18, 2025
New top story on Hacker News: Show HN: Vaev – A browser engine built from scratch (It renders google.com)
Show HN: Vaev – A browser engine built from scratch (It renders google.com)
14 by monax | 4 comments on Hacker News.
We've been working on Vaev, a minimal web browser engine built from scratch. It supports HTML/XHTML, the CSS cascade, @page rules for pagination, and print-to-PDF rendering. It even handles calc(), var(), and percentage units, and yes, it renders Google.com (mostly). This is an experimental project focused on learning and exploration. Networking is basic (http:// and file:// only), and grid layouts aren't supported yet, but we're making progress fast. We'd love your thoughts and feedback.
Friday, May 16, 2025
New top story on Hacker News: Show HN: KVSplit – Run 2-3x longer contexts on Apple Silicon
Show HN: KVSplit – Run 2-3x longer contexts on Apple Silicon
80 by dipampaul17 | 9 comments on Hacker News.
I discovered that in LLM inference, keys and values in the KV cache have very different quantization sensitivities. Keys need higher precision than values to maintain quality. I patched llama.cpp to enable different bit-widths for keys vs. values on Apple Silicon.

The results are surprising:
- K8V4 (8-bit keys, 4-bit values): 59% memory reduction with only 0.86% perplexity loss
- K4V8 (4-bit keys, 8-bit values): 59% memory reduction but 6.06% perplexity loss
- The configurations use the same number of bits, but K8V4 is 7× better for quality

This means you can run LLMs with 2-3× longer context on the same Mac. Memory usage scales with sequence length, so savings compound as context grows.

Implementation was straightforward:
1. Added --kvq-key and --kvq-val flags to llama.cpp
2. Applied existing quantization logic separately to K and V tensors
3. Validated with perplexity metrics across context lengths
4. Used Metal for acceleration (with -mlong-calls flag to avoid vectorization issues)

Benchmarked on an M4 MacBook Pro running TinyLlama with 8K context windows. Compatible with Metal/MPS and optimized for Apple Silicon. GitHub: https://ift.tt/XIjvkOA
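A back-of-the-envelope sizing formula shows why K8V4 and K4V8 occupy identical memory: only the sum of the two bit-widths matters. The model-shape numbers below are made up for illustration (not TinyLlama's real dimensions), and this naive formula ignores per-block quantization overhead such as scales and zero points, which is why it gives 62.5% rather than the measured 59%.

```python
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bits_key, bits_val):
    """Naive KV-cache size: one key and one value vector per layer per token."""
    per_token = n_layers * n_kv_heads * head_dim * (bits_key + bits_val) / 8
    return int(seq_len * per_token)

# Illustrative shape: 8K context, 22 layers, 4 KV heads of dim 64.
fp16 = kv_cache_bytes(8192, 22, 4, 64, 16, 16)
k8v4 = kv_cache_bytes(8192, 22, 4, 64, 8, 4)
```

Since the formula is symmetric in `bits_key + bits_val`, the quality gap between K8V4 and K4V8 comes entirely from where the precision is spent, not from memory.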
Tuesday, May 6, 2025
New top story on Hacker News: Show HN: Feedsmith — Fast parser & generator for RSS, Atom, OPML feed namespaces
Show HN: Feedsmith — Fast parser & generator for RSS, Atom, OPML feed namespaces
10 by macieklamberski | 3 comments on Hacker News.
Hi HN! While working on a project that involves frequently parsing a lot of feeds, I needed a fast JavaScript-based parser to extract specific fields from feed namespaces. Existing Node packages were either too slow or merged all feed formats, losing namespace information. So I decided to write it myself and created this NPM package with a simple API. Feedsmith supports all feed formats and many popular namespaces, including: Podcast, Media, iTunes, Dublin Core, and more. It can also parse and generate OPML files. I am currently adding support for more namespaces and feed generation for RSS, Atom and RDF. The library grew into something bigger than I initially anticipated, so I also started creating a dedicated documentation website to describe all the features.
Wednesday, April 30, 2025
New top story on Hacker News: Show HN: 1.2 users a day to keep the 9–5 away
Show HN: 1.2 users a day to keep the 9–5 away
9 by dmasiii | 5 comments on Hacker News.
In my long career as an "almost digital entrepreneur" (a fancy way to say I've tried a thousand things online without making a single cent), I never really felt that "this is it, I'm so close, I'll finally quit everything and update my passport: job title? SaaS founder." (Small detail: I don't even have a passport. But I like to imagine that if I did, I'd want something cooler than "unemployed creative" written on it.)

For years, I collected side projects, hobbies, half-dead MVPs, and random nonsense, all with the same ending: super hyped at the beginning, burned out in the middle, completely abandoned by the end. But a couple years ago, I decided to take things more seriously (well… I try). I started building SaaS products. Simple, fast stuff, nothing too fancy.

And finally, after a long toxic relationship with perfectionism, I realized something super basic but actually powerful: I don't need thousands of users. I just need 1.2 paying users a day. Literally. Not to get rich, no Lamborghinis parked outside (also, I live in an apartment with no garage), but enough to live well, keep building, and maybe say "this is my job" without looking down in shame.

It's part math, part mindset. Like they told us in the first year of computer science: big problems get solved by breaking them into smaller ones. 100 users a day? Anxiety. 1.2 users a day? I can breathe. So yeah, this is my new mantra: "1.2 a day to keep the office job away." Let's see where this road takes me.
Tuesday, April 29, 2025
New top story on Hacker News: It's School time: Adventures in hacking an old Kindle
It's School time: Adventures in hacking an old Kindle
13 by FlyingSnake | 0 comments on Hacker News.
New top story on Hacker News: Show HN: Beatsync – perfect audio sync across multiple devices
Show HN: Beatsync – perfect audio sync across multiple devices
13 by freemanjiang | 1 comments on Hacker News.
Hi HN! I made Beatsync, an open-source browser-based audio player that syncs audio with millisecond-level accuracy across many devices. Try it live right now: https://ift.tt/JyOcLeU The idea is that with no additional hardware, you can turn any group of devices into a full surround sound system. MacBook speakers are particularly good. Inspired by Network Time Protocol (NTP), I do clock synchronization over websockets and use the Web Audio API to keep audio latency under a few ms. You can also drag devices around a virtual grid to simulate spatial audio — it changes the volume of each device depending on its distance to a virtual listening source! I've been working on this project for the past couple of weeks. Would love to hear your thoughts and ideas!
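The NTP-style clock synchronization mentioned above boils down to four timestamps per round trip: client send (t0), server receive (t1), server send (t2), and client receive (t3). Here is the textbook NTP offset/delay estimate, shown as a sketch rather than Beatsync's actual source:

```python
def ntp_offset(t0, t1, t2, t3):
    """Estimate clock offset and round-trip delay from one exchange.

    t0: client transmit time (client clock)
    t1: server receive time  (server clock)
    t2: server transmit time (server clock)
    t3: client receive time  (client clock)
    Returns (offset, round_trip_delay) in the same units as the inputs.
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay
```

Averaging several such exchanges (and discarding high-delay outliers) is what makes millisecond-level agreement over websockets plausible.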
Sunday, April 27, 2025
New top story on Hacker News: Show HN: Daily Jailbreak – Prompt Engineer's Wordle
Show HN: Daily Jailbreak – Prompt Engineer's Wordle
7 by ericlmtn | 5 comments on Hacker News.
I created a daily challenge for Prompt Engineers to build the shortest prompt to break a system prompt. You are provided the system prompt and a forbidden method the LLM was told not to invoke. Your task is to trick the model into calling the function. Shortest successful attempts will show up in the leaderboard. Give it a shot! You never know what could break an LLM.
Wednesday, April 23, 2025
New top story on Hacker News: Show HN: Body Controlled 3D Dino Game
Show HN: Body Controlled 3D Dino Game
4 by NikoNaskida | 0 comments on Hacker News.
Hey HN, I am Niko. I've built this 3D Dino Game in the browser using tech like three.js and MoveNet (TensorFlow). Basically, it's a normal 3D dinosaur game with a twist: you need to actually perform actions IRL to avoid obstacles. Duck to crouch, jump to jump, raise your left hand to go left, raise your right hand to go right. The game uses your phone/laptop camera to track your body movements and perform in-game actions. PS: the game is 100% client-side and I don't record/track/use/save any of your data. Hope you find it worth playing (better played on a PC). It's a 100% free browser game with no login! Please feel welcome to DM feedback or reply or anything!
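The body-to-action mapping the post describes can be sketched roughly as below. The keypoint names, coordinate convention, and thresholds here are assumptions for illustration; the actual game runs MoveNet in JavaScript, not Python.

```python
def pose_to_action(keypoints, baseline_hip_y):
    """Map pose keypoints to a game action.

    keypoints: dict of name -> (x, y), with y growing downward as in
    image coordinates. baseline_hip_y is calibrated while standing still.
    """
    hip_y = keypoints["hip"][1]
    if hip_y < baseline_hip_y - 0.15:   # hips well above baseline: a jump
        return "jump"
    if hip_y > baseline_hip_y + 0.15:   # hips below baseline: crouch/duck
        return "duck"
    if keypoints["left_wrist"][1] < keypoints["nose"][1]:
        return "move_left"              # left hand raised above the head
    if keypoints["right_wrist"][1] < keypoints["nose"][1]:
        return "move_right"             # right hand raised above the head
    return "run"
```

Calibrating the baseline per player (rather than hard-coding it) is what lets the same thresholds work for different camera angles and body heights.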
Tuesday, April 8, 2025
New top story on Hacker News: Ask HN: Do you still use search engines?
Ask HN: Do you still use search engines?
38 by davidkuennen | 112 comments on Hacker News.
Today, I noticed that my behavior has shifted over the past few months. Right now, I exclusively use ChatGPT for any kind of search or question. Using Google now feels completely lackluster in comparison. I've noticed the same thing happening in my circle of friends as well—and they don’t even have a technical background. How about you?
Tuesday, April 1, 2025
New top story on Hacker News: Show HN: Qwen-2.5-32B is now the best open source OCR model
Show HN: Qwen-2.5-32B is now the best open source OCR model
12 by themanmaran | 1 comments on Hacker News.
Last week was big for open source LLMs. We got:
- Qwen 2.5 VL (72b and 32b)
- Gemma-3 (27b)
- DeepSeek-v3-0324

And a couple weeks ago we got the new mistral-ocr model. We updated our OCR benchmark to include the new models. We evaluated 1,000 documents for JSON extraction accuracy.

Major takeaways:
- Qwen 2.5 VL (72b and 32b) are by far the most impressive. Both landed right around 75% accuracy (equivalent to GPT-4o's performance). Qwen 72b was only 0.4% above 32b, within the margin of error.
- Both Qwen models passed mistral-ocr (72.2%), which is specifically trained for OCR.
- Gemma-3 (27B) only scored 42.9%. Particularly surprising given that its architecture is based on Gemini 2.0, which still tops the accuracy chart.

The data set and benchmark runner are fully open source. You can check out the code and reproduction steps here:
- https://ift.tt/N6QsBg9...
- https://ift.tt/5Bq48eS
- https://ift.tt/Pq2s9lo
Monday, March 31, 2025
New top story on Hacker News: Show HN: GuMCP – Open-source MCP servers, hosted for free
Show HN: GuMCP – Open-source MCP servers, hosted for free
16 by murb | 3 comments on Hacker News.
Hello! We open sourced all our current MCP servers for platforms like Slack, Google Sheets, Linear, and Perplexity, and will be contributing a few more integrations every day. Problems we're hoping to solve:
- Many people are creating MCP servers for the same apps. They're scattered across different repos as flavors of the same thing. We're making one standardized mono project for all MCP servers.
- Startups are charging for hosting MCP servers, which blocks tons of people from playing around with MCP casually. We're hosting them for free.
- Non-technical people should be able to use MCP without needing to learn how to clone a repo and set up a venv. We're trying to enable one-click integration for people who want to use the free hosted service.
The plan is to keep contributing until we have an MCP server for basically every useful app anyone could want.
Sunday, March 30, 2025
Saturday, March 29, 2025
Friday, March 28, 2025
Thursday, March 27, 2025
Wednesday, March 26, 2025
New top story on Hacker News: Botswana Successfully Launches First Satellite, Botsat-1
Botswana Successfully Launches First Satellite, Botsat-1
8 by vinnyglennon | 1 comments on Hacker News.
Tuesday, March 25, 2025
Monday, March 24, 2025
Sunday, March 23, 2025
Saturday, March 22, 2025
Friday, March 21, 2025
Thursday, March 20, 2025
New top story on Hacker News: Show HN: AgentKit – JavaScript Alternative to OpenAI Agents SDK with Native MCP
Show HN: AgentKit – JavaScript Alternative to OpenAI Agents SDK with Native MCP
32 by tonyhb | 9 comments on Hacker News.
Hi HN! I’m Tony, co-founder of Inngest. I wanted to share AgentKit, our TypeScript multi-agent library, which we’ve been cooking and testing with some early users in prod for months. Although OpenAI’s Agents SDK has launched since, we think an agent framework should offer more deterministic and flexible routing, work with multiple model providers, embrace MCP (for rich tooling), and support the unstoppable and growing community of TypeScript AI developers by enabling a smooth transition to production use cases. This is why we are building AgentKit, and we’re really excited about it for a few reasons: Firstly, it’s simple. We embrace the KISS principles championed by Anthropic and Hugging Face by letting you gradually add autonomy to your AgentKit program using primitives:
- Agents: LLM calls that can be combined with prompts, tools, and native MCP support.
- Networks: a simple way to get Agents to collaborate with a shared State, including handoff.
- State: combines conversation history with a fully typed state machine, used in routing.
- Routers: where the autonomy lives, from code-based to LLM-based (e.g. ReAct) orchestration.
The routers are where the magic happens, and they let you build deterministic, reliable, testable agents. AgentKit routing works as follows: the network calls itself in a loop, using a router to inspect the State and determine which agent to call next. The returned agent runs, then optionally updates state data using its tools. On the next loop, the network inspects state data and conversation history and determines which new agent to run. This fully typed state-machine routing lets you deterministically build agents using any of the effective agent patterns, which means your code is easy to read, edit, understand, and debug. This also makes handoff incredibly easy: you define when agents should hand off to each other using regular code and state (or by calling an LLM in the router for AI-based routing).
This is similar to the OpenAI Agents SDK but easier to manage, plan, and build with. Then come the local development and move-to-production capabilities. AgentKit is compatible with Inngest’s tooling, meaning you can test agents using Inngest’s local DevServer, which provides traces, inputs, outputs, replay, tool and MCP inputs and outputs, and (soon) a step-over debugger so that you can easily understand and visually see what's happening in the agent loop. In production, you can also optionally combine AgentKit with Inngest for fault-tolerant execution. Each agent’s LLM call is wrapped in a step, and tools can use multiple steps to incorporate things like human-in-the-loop. This gives you native orchestration, observability, and out-of-the-box scale. In the documentation you will find an AgentKit SWE-bench example and multiple coding agent examples. It’s fully open source under the Apache 2 license. If you want to get started:
- npm: npm i @inngest/agent-kit
- GitHub: https://ift.tt/mWMq13F
- Docs: https://ift.tt/kCgoYu9
We’re excited to finally launch AgentKit; let us know what you think!
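The routing loop described above can be sketched in a few lines. AgentKit itself is TypeScript and has its own API; this is only a language-neutral illustration (written here in Python, with every name invented) of the pattern: a network loops, a code-based router inspects state to pick the next agent, and the agent's result updates state.

```python
# Illustrative sketch of a deterministic routing loop, NOT AgentKit's real API.
def run_network(agents, router, state):
    """Loop: router inspects state, picks the next agent, agent updates state."""
    history = []
    while True:
        agent_name = router(state, history)  # code-based routing: plain code
        if agent_name is None:               # router signals completion
            return state
        result = agents[agent_name](state)   # stand-in for an agent's LLM call
        history.append((agent_name, result))
        state.update(result)                 # shared State drives the next hop

# Deterministic handoff: plan first, then execute, then stop.
agents = {
    "planner": lambda s: {"plan": ["step1", "step2"]},
    "executor": lambda s: {"done": True},
}

def router(state, history):
    if "plan" not in state:
        return "planner"
    if not state.get("done"):
        return "executor"
    return None

final = run_network(agents, router, {})
```

Because the router is ordinary code over typed state, each handoff decision is reproducible and unit-testable, which appears to be the core claim above.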
Wednesday, March 19, 2025
Tuesday, March 18, 2025
Monday, March 17, 2025
New top story on Hacker News: Show HN: OpenTimes – Free travel times between U.S. Census geographies
Show HN: OpenTimes – Free travel times between U.S. Census geographies
13 by dfsnow | 1 comments on Hacker News.
Hi HN! Today I'm launching OpenTimes, a free database of roughly 150 billion pre-computed, point-to-point travel times between United States Census geographies. In addition to letting you visualize travel isochrones on the homepage, OpenTimes also lets you download massive amounts of travel time data for free and with no limits. The primary goal here is to enable research and fill a gap I noticed in the open-source spatial ecosystem. Researchers (social scientists, economists, etc.) use large travel time matrices to quantify things like access to healthcare, but they often end up paying Google or Esri for the necessary data. By pre-calculating times between commonly used research geographies (i.e. Census) and then making those times easily accessible via SQL, I hope to make large-scale accessibility research cheaper and simpler. Some technical bits that may be of interest to HN folks:
- The entire OpenTimes backend is just static Parquet files on R2. There's no RDBMS or running service. The whole thing costs about $10/month to host and is free to serve.
- All travel times were calculated by pre-building the inputs (OSM, OSRM networks) and then distributing the compute over hundreds of GitHub Actions jobs.
- The query/SQL layer uses a setup I haven't seen before: a single DuckDB database file with views that point to static Parquet files via HTTP.
Finally, the driving times are optimistic since they don't (yet) account for traffic. This is something I hope to work on in the near future. Enjoy!
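The "DuckDB views over remote Parquet" setup mentioned in the last bullet can be sketched as DDL generation. The view name and URL below are made up (the real OpenTimes schema is not shown here); `read_parquet` and the `httpfs` extension are standard DuckDB features.

```python
# Sketch of the "single DuckDB file with views over static Parquet via HTTP"
# pattern. View name and URL are invented for illustration.
def parquet_view_sql(view_name: str, parquet_url: str) -> str:
    """Emit DDL for a view that reads a remote Parquet file on each query."""
    return (
        f"CREATE VIEW {view_name} AS "
        f"SELECT * FROM read_parquet('{parquet_url}');"
    )

sql = parquet_view_sql(
    "times_tract", "https://example.com/opentimes/times_tract.parquet"
)
# A DuckDB client would first run: INSTALL httpfs; LOAD httpfs; then this DDL.
# Queries against the view then range-read only the needed Parquet row groups,
# which is why no running database service is required.
```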
New top story on Hacker News: Occupry your next lease to negotiate a better deal
Occupry your next lease to negotiate a better deal
12 by jason_archmint | 21 comments on Hacker News.
Sunday, March 16, 2025
New top story on Hacker News: Show HN: Quickly connect to WiFi by scanning text, no typing needed
Show HN: Quickly connect to WiFi by scanning text, no typing needed
3 by ylj | 0 comments on Hacker News.
I travel and work remotely a lot. Every new place—hotels, cafes, coworking spaces—means dealing with a new WiFi network. Sometimes there's a QR code, which is convenient, but usually, it's a hassle: manually finding the right SSID (especially frustrating when hotels have one SSID per room), then typing long, error-prone passwords. To simplify this, I made a small Android app called Wify. It uses your phone's camera to capture WiFi details (network name and password) from printed text, then generates a QR code right on your screen. You can instantly connect using Google Circle to Search or Google Lens. You can also import an image from your gallery instead of using the camera. Currently, it's Android-only since I daily-drive a Pixel 7, and WiFi APIs differ significantly between Android and iOS. Play Store link: https://ift.tt/OxzVXpY... I'd appreciate your feedback or suggestions!
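The QR codes an app like this generates use a well-known plain-text payload format (popularized by ZXing): `WIFI:T:<auth>;S:<ssid>;P:<password>;;`, with a handful of characters backslash-escaped. A minimal sketch, assuming WPA networks:

```python
# The conventional WiFi QR payload format (ZXing-style). This is a generic
# sketch of the format itself, not code from the Wify app.
def escape(value: str) -> str:
    """Backslash-escape the characters that are special in the payload."""
    for ch in '\\;,":':  # escape the backslash first to avoid double-escaping
        value = value.replace(ch, "\\" + ch)
    return value

def wifi_qr_payload(ssid: str, password: str, auth: str = "WPA") -> str:
    return f"WIFI:T:{auth};S:{escape(ssid)};P:{escape(password)};;"

print(wifi_qr_payload("Hotel Room 12", "pass;word"))
# → WIFI:T:WPA;S:Hotel Room 12;P:pass\;word;;
```

Rendering that string as a QR code (with any QR library) is all that's needed for Android and iOS camera apps, or Google Lens, to offer a one-tap connection.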
Saturday, March 15, 2025
Friday, March 14, 2025
New top story on Hacker News: Show HN: Pi Labs – AI scoring and optimization tools for software engineers
Show HN: Pi Labs – AI scoring and optimization tools for software engineers
10 by achintms | 0 comments on Hacker News.
Hey HN, after years building some of the core AI and NLU systems in Google Search, we decided to leave and build outside. Our goal was to put the advanced ML and DS techniques we’ve been using into the hands of all software engineers, so that everyone can build AI and search apps at the same level of performance and sophistication as the big labs. This was a hard technical challenge, but we were very inspired by the MVC architecture for web development. The intuition there was that when a data model changes, its view gets auto-updated. We built a similar architecture for AI. On one side is a scoring system, which encapsulates, in a set of metrics, what’s good about the AI application. On the other side is a set of optimizers that “compile” against this scorer: prompt optimization, data filtering, synthetic data generation, supervised learning, RL, etc. The scoring system can be calibrated using developer, user, or rater feedback, and once it’s updated, all the optimizers get recompiled against it. The result is a setup that makes it easy to incrementally improve the quality of your AI in a tight feedback loop: you update your scorers, they auto-update your optimizers, your app gets better, you see that improvement in interpretable scores, and then you repeat, progressing from simpler to more advanced optimizers and from off-the-shelf to calibrated scorers. We would love your feedback on this approach. https://build.withpi.ai has a set of playgrounds to help you quickly build a scorer and multiple optimizers. No sign-in required. https://code.withpi.ai has the API reference and Notebook links. Finally, we have a Loom demo [1].
More technical details:
Scorers: Our scoring system has three key differences from the common LLM-as-a-judge pattern. First, rather than a single label or metric from an LLM judge, our scoring system is represented as a tunable tree of metrics, with 20+ dimensions that get combined into a final (non-linear) weighted score. The tree structure makes scores easily interpretable (just look at the breakdown by dimension), extensible (just add/remove a dimension), and adjustable (just re-tune the weights). Training the scoring system with labeled/preference data adjusts the weights. You can automate this process with user feedback signals, resulting in a tight feedback loop. Second, our scoring system handles natural language dimensions (great for free-form, qualitative questions requiring NLU) alongside quantitative dimensions (like computations over dates or doc length, which can be provided in Python) in the same tree. When calibrating with your labeled or preference data, the scorer learns how to balance these. Third, for natural language scoring, we use specialized smaller encoder models rather than autoregressive models. Encoders are a natural fit for scoring as they are faster and cheaper to run, easier to fine-tune, and more suitable architecturally (bi-directional attention with a regression or classification head) than similarly sized decoder models. For example, we can score 20+ dimensions in sub-100ms, making it possible to use scoring everywhere from evaluation to agent orchestration to reward modeling.
Optimizers: We took the most salient ML techniques and reformulated them as optimizers against our scoring system, e.g. for DSPy, the scoring system acts as its validator; for GRPO, the scoring system acts as its reward model. We’re keen to hear the community’s feedback on which techniques to add next.
Overall stack: Playgrounds: Next.js and Vercel. AI: Runpod and GCP for training GPUs, TRL for training algos, ModernBERT and Llama as base models. GCP and Azure for 4o and Anthropic calls.
We’d love your feedback and perspectives: our team will be around to answer questions and discuss. If there’s a lot of interest, happy to host a live session! - Achint, co-founder of Pi Labs [1] https://ift.tt/yaT1lbE
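A toy version of "a tunable tree of metrics combined into a non-linear weighted score" might look like the following. Every dimension, weight, and the choice of non-linearity (a weighted power mean with p < 1, so weak dimensions drag the total down) is invented here for illustration; the real Pi scoring system is not shown in the post.

```python
# Toy sketch: per-dimension scores in [0, 1] combined non-linearly.
# Dimensions, weights, and the combiner are illustrative assumptions.
def combine(pairs, p=0.5):
    """Weighted power mean of (score, weight) pairs; p < 1 penalizes weak
    dimensions more than a plain weighted average would."""
    total_w = sum(w for _, w in pairs)
    return (sum(w * s ** p for s, w in pairs) / total_w) ** (1 / p)

dimensions = {
    # mix of "natural language" scores (stubbed) and quantitative checks
    "relevance": (0.5, lambda ex: ex["relevance"]),
    "concision": (0.3, lambda ex: 1.0 if len(ex["text"]) <= 280 else 0.5),
    "tone":      (0.2, lambda ex: ex["tone"]),
}

def score(example: dict) -> float:
    pairs = [(fn(example), w) for w, fn in dimensions.values()]
    return combine(pairs)
```

Re-tuning the weights (or adding a dimension) changes the final score without touching the rest of the tree, which mirrors the interpretability and adjustability claims above.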
Thursday, March 13, 2025
New top story on Hacker News: Show HN: Bubbles, a vanilla JavaScript web game
Show HN: Bubbles, a vanilla JavaScript web game
21 by ehmorris | 7 comments on Hacker News.
Hey everybody, you might remember my older game, Lander! It made a big splash on Hacker News about 2 years ago. I'm still enjoying writing games with no dependencies. I've been working on Bubbles for about 6 months and would love to see your scores. If you like it, you can build your own levels with my builder tool: https://ift.tt/G8p9Kb5 and share the levels here or via Github.
Wednesday, March 12, 2025
Tuesday, March 11, 2025
Monday, March 10, 2025
Sunday, March 9, 2025
Saturday, March 8, 2025
Friday, March 7, 2025
New top story on Hacker News: Show HN: A big tech dev experience for an open source CMS
Show HN: A big tech dev experience for an open source CMS
19 by randall | 15 comments on Hacker News.
Hey HN! We're building an open-source CMS designed to help creators with every part of the content production pipeline. We're showing our tiny first step: a tool designed to take in a Twitter username and produce an "identity card" based on it. We expect to use an approach similar to [Constitutional AI] with an explicit focus on repeatability, testability, and verification of an "identity card." We think this approach could be used to create finetuning examples for training changes, or serve as inference-time insight for LLMs, or most likely a combination of the two. The tooling we're showing today is extremely simplistic (and the AI is frankly bad), but this is intentional. We're more focused on showing the dev experience and community aspects. We'd like to make it easier to contribute to this project than to edit Wikipedia. Communities are frustrated with things like WordPress, Apache, and other open source foundations focusing on things other than software. We have a lot of community ideas (governance via vote by jury is perhaps the most interesting). We're a team of 5, and we've bounced around a few companies with each other. We're all professional creators (video + music) and we're creating tooling for ourselves first. Previously, we did a startup called Vidpresso (YC W14) that was acquired by Facebook in 2018. We all worked at Facebook for 5 years on creator tooling, and have since left to start this thing. After leaving FB, it was painful to leave the warm embrace of the Facebook infra team, where we had amazing tooling. Since then, we've pivoted a bunch of times trying to figure out our "real" product. While we think we've finally nailed it, the developer experience we built is one we think others could benefit from. Our tooling is designed so any developer can easily jump in and start contributing. It's an AI-first dev environment designed with a few key principles in mind:
1. You should be able to discover any command you need to run without looking at docs.
2. To make a change, as much context as possible should be provided as close to the code as possible.
3. AIs are "people too", in the sense that they benefit from focused context and from not being distracted by having to search deeply through multiple files or documentation to make changes.
We have a few non-traditional elements to our stack which we think are worth exploring. [Isograph] helps us simplify our component usage with GraphQL. [Replit] lets people use AI coding without needing to set up any additional tooling. We've learned how to treat it like a junior developer, and think it will be the best platform for AI-first open source projects going forward. [Sapling] (and Git together) for version control. It might sound counterintuitive, but we use Git to manage agent interactions and Sapling to manage "purposeful" commits. My last [Show HN post in 2013] ended up helping me find my Vidpresso cofounder, so I have high hopes for this one. I'm excited to meet anyone, developers, creators, or nice people in general, and start working with them to make this project work. I have good references for being a nice guy, and aim to keep that going with this project. The best way to work with us is to [remix our Replit app] and [join our Discord]. Thanks for reading and checking us out! It's super early, but we're excited to work with you!
[Constitutional AI]: https://ift.tt/LK6w8Q9...
[Isograph]: https://isograph.dev
[Replit]: https://replit.com
[Sapling]: https://sapling-scm.com
[Show HN post in 2013]: https://ift.tt/5QjMxz9
[remix our Replit app]: https://ift.tt/5knaoSr...
[join our Discord]: https://ift.tt/mG8Pkj2
Thursday, March 6, 2025
Wednesday, March 5, 2025
Tuesday, March 4, 2025
New top story on Hacker News: Show HN: Open-source Deep Research across workplace applications
Show HN: Open-source Deep Research across workplace applications
6 by yuhongsun | 1 comments on Hacker News.
I've been using deep research on OpenAI and Perplexity and it's been just amazing at gathering data across a lot of related and chained searches. Just earlier today, I asked "What are some marquee tech companies / hot startups (not including the giants like FAAMG, Samsung, Nvidia, etc.)?" It's a pretty involved question, and looking up "marquee tech startups" or "hot tech startups" on Google gave me nothing useful. Deep research on both ChatGPT and Perplexity gave really high-quality responses, with ChatGPT leaning toward slightly larger scaleups and Perplexity leaning more toward up-and-coming companies.

Given how useful AI research agents are across the internet, we decided to build an open-source equivalent for the workplace, since a ton of questions at work also cannot be easily resolved with a single search. Onyx supports deep research connected to company applications like Google Drive, Salesforce, SharePoint, GitHub, Slack, and 30+ others.

For example, an engineer may want to know "What's happening with the verification email failure?" Onyx's AI agent would first figure out what it needs to answer this question: what is the cause of the failure, what has been done to address it, has this come up before, and what's the latest status on the issue. The agent would run parallel searches through Confluence, email, Slack, and GitHub to get the answers to these, then combine them to build a coherent overview. If the agent finds that there was a technical blocker that will delay the resolution, it will adjust mid-flight and research to get more context on the blocker.

Here's a video demo I recorded: https://www.youtube.com/watch?v=drvC0fWG4hE If you want to get started with the GitHub repo, you can check out our guides at https://docs.onyx.app . Or, to play with it without needing to deploy anything, you can go to https://ift.tt/a632QZO

P.S. There are a lot of cool technical details behind building a system like this, so I'll continue the conversation in the comments.
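The fan-out pattern described above (decompose the question into sub-questions, search every connected source in parallel, then combine) can be sketched roughly like this. The connector functions here are hypothetical stand-ins for illustration, not Onyx's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real connectors (Confluence, Slack, GitHub, ...).
def search_confluence(q): return [f"confluence hit for {q!r}"]
def search_slack(q):      return [f"slack hit for {q!r}"]
def search_github(q):     return [f"github hit for {q!r}"]

CONNECTORS = [search_confluence, search_slack, search_github]

def deep_research(question, sub_questions):
    """Fan each sub-question out to every connector in parallel,
    then merge the results into one context for the answering model."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(conn, sq)
                   for sq in sub_questions
                   for conn in CONNECTORS]
        return [hit for f in futures for hit in f.result()]

hits = deep_research(
    "What's happening with the verification email failure?",
    ["cause of the failure", "latest status of the issue"],
)
print(len(hits))  # 2 sub-questions x 3 connectors, one hit each
```

The mid-flight adjustment the post describes would then be another round of this fan-out, seeded by what the first round found.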
Monday, March 3, 2025
Sunday, March 2, 2025
Saturday, March 1, 2025
Friday, February 28, 2025
Thursday, February 27, 2025
Wednesday, February 26, 2025
Tuesday, February 25, 2025
Monday, February 24, 2025
Sunday, February 23, 2025
Saturday, February 22, 2025
Friday, February 21, 2025
Thursday, February 20, 2025
Wednesday, February 19, 2025
New top story on Hacker News: Building a Bitcoin Exchange with FOSS BTC Pay Server
Building a Bitcoin Exchange with FOSS BTC Pay Server
17 by BitcoinNewsCom | 1 comments on Hacker News.
Tuesday, February 18, 2025
New top story on Hacker News: (Ab)using general search algorithms on dynamic optimization problems (2023)
(Ab)using general search algorithms on dynamic optimization problems (2023)
10 by h45x1 | 3 comments on Hacker News.
I wrote this blog post back in 2023, but since then I've become a frequent lurker on HN and decided to repost it here. For me, writing it was about connecting the dots between the dynamic optimization techniques I've studied as an economist and the more general search algorithms studied in CS.
Monday, February 17, 2025
Sunday, February 16, 2025
New top story on Hacker News: Show HN: Hackyournews.com v2
Show HN: Hackyournews.com v2
10 by ukuina | 0 comments on Hacker News.
A year and a half after I published https://ift.tt/P7KoWVR , I've rewritten it to be neater and added support for more news sources. HackYourNews.com v1 had a great response on HN [1] and consistently sees ~2k weekly unique visitors. There were many long-standing requests that I wanted to fulfill (thanks for your patience!): a proper dark mode, correct rendering on mobile devices, and more cogent summaries. This rewrite is the result. gpt-4o-mini reduces the cost of summarization to an absurd degree, so it's now sustainable to keep this free service going! Someday, I hope to use the Batch API [2] to drive down costs even further. Enjoy. [1] https://ift.tt/wbEy3tV [2] https://ift.tt/13vAK68
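On the Batch API idea mentioned above: OpenAI's Batch API takes a JSONL file where each line is one self-contained request. A minimal sketch of building such a file for a set of stories (the story data and prompt are made-up examples, not HackYourNews's actual pipeline):

```python
import json

# Made-up story records for illustration.
stories = [
    {"id": "s1", "title": "Story one", "text": "Body of the first story."},
    {"id": "s2", "title": "Story two", "text": "Body of the second story."},
]

def batch_line(story):
    """One JSONL line in the shape the OpenAI Batch API expects."""
    return json.dumps({
        "custom_id": story["id"],
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "system", "content": "Summarize this story in two sentences."},
                {"role": "user", "content": f"{story['title']}\n\n{story['text']}"},
            ],
        },
    })

jsonl = "\n".join(batch_line(s) for s in stories)
print(jsonl.count("\n") + 1)  # one request per story
```

The resulting file is uploaded once and processed asynchronously at a discount, which is where the further cost savings would come from.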
Saturday, February 15, 2025
Friday, February 14, 2025
Thursday, February 13, 2025
New top story on Hacker News: Phind 2: AI search with visual answers and multi-step reasoning
Phind 2: AI search with visual answers and multi-step reasoning
18 by rushingcreek | 4 comments on Hacker News.
Hi HN! Michael here. We've spent the last 6 months rebuilding Phind. The new Phind goes beyond text to present answers visually with inline images, diagrams, cards, and other widgets to make answers more meaningful. Here are some examples: "explain photosynthesis" - https://www.youtube.com/watch?v=cTCpnyICukM#t=7 "how to cook the perfect steak" - https://www.youtube.com/watch?v=cTCpnyICukM#t=55 "quicksort in rust" - https://www.youtube.com/watch?v=cTCpnyICukM#t=105 We asked ourselves what types of answers we would ideally like and crafted a new UI and model series to help get us there. Phind is also now able to seek out information on its own. If it needs more information, it can do multiple rounds of additional searches to get you the most comprehensive answer it can. This blog post contains an overview of what we did as well as technical deep dives into how we built the new frontend and models. I'm super grateful for all of the feedback we've gotten from the HN community and can't wait to hear your thoughts!
Wednesday, February 12, 2025
New top story on Hacker News: Show HN: Sort lines semantically using llm-sort
Show HN: Sort lines semantically using llm-sort
9 by vagozino | 4 comments on Hacker News.
This is a small plugin I made for Simon Willison's llm utility. You can do things like:

cat names.txt | llm sort -q "Which one of these names is best for a pet seagull?"

cat books.txt | llm sort -q "Which book is more related to basic vs. advanced CS topics?"

I see a lot of potential marrying LLMs with classic UNIX interfaces.
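Conceptually, a semantic sort like this turns an LLM judgment ("which of these two lines better fits the query?") into a comparator. The sketch below mirrors that shape with a toy keyword-overlap heuristic standing in for the model call, since llm-sort's actual ranking strategy isn't described here:

```python
from functools import cmp_to_key

def llm_prefers(query, a, b):
    """Stand-in for an LLM call judging which line better fits the query.
    Here: a toy keyword-overlap score so the example runs offline."""
    score = lambda s: sum(w in s.lower() for w in query.lower().split())
    return score(a) - score(b)

def semantic_sort(lines, query):
    # Best match first, using the pairwise judgment as a comparator.
    return sorted(lines, key=cmp_to_key(lambda a, b: llm_prefers(query, b, a)))

books = ["Intro to Programming", "Advanced Compiler Design",
         "Programming Basics for Kids"]
print(semantic_sort(books, "basic programming"))
```

A real implementation would cache the pairwise answers, since a comparison-based sort can ask about the same pair more than once.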
Tuesday, February 11, 2025
Monday, February 10, 2025
Sunday, February 9, 2025
Saturday, February 8, 2025
Friday, February 7, 2025
Thursday, February 6, 2025
New top story on Hacker News: Show HN: Heap Explorer
Show HN: Heap Explorer
7 by bkallus | 0 comments on Hacker News.
I wrote a little LD_PRELOAD library that makes it easy to inspect and interact with a running program's glibc heap. It's fun to pause processes, free a bunch of their allocations, then resume them. Most of the time, the processes continue as though nothing happened, but sometimes they do interesting things :)
Wednesday, February 5, 2025
Tuesday, February 4, 2025
Monday, February 3, 2025
New top story on Hacker News: Ask HN: Who wants to be hired? (February 2025)
Ask HN: Who wants to be hired? (February 2025)
24 by whoishiring | 96 comments on Hacker News.
Share your information if you are looking for work. Please use this format:

Location:
Remote:
Willing to relocate:
Technologies:
Résumé/CV:
Email:

Please only post if you are personally looking for work. Agencies, recruiters, job boards, and so on, are off topic here. Readers: please only email these addresses to discuss work opportunities. There's a site for searching these posts at https://ift.tt/i45mEvw .
Sunday, February 2, 2025
Saturday, February 1, 2025
Friday, January 31, 2025
Thursday, January 30, 2025
New top story on Hacker News: Show HN: Distr – open-source distribution platform for on-prem deployments
Show HN: Distr – open-source distribution platform for on-prem deployments
12 by louis_w_gk | 0 comments on Hacker News.
Distr is designed to help software engineers distribute and manage their applications or agents in customer-controlled or shared-responsibility environments. You only need a Docker Compose file or Helm chart; everything else for on-prem is handled by the platform.

We're an open-source dev tool company. Over the past couple of months, we've spoken with dozens of software companies to understand their challenges with on-prem deployments. We analyzed the internal tools they've built and the best practices from existing solutions, combining them into a prebuilt, open-source solution that works out of the box and integrates seamlessly.

Distr consists of two key components:

1. Hub
- Provides a centralized view of all deployments and controls connected agents.
- Comes with a simple GUI but also supports API and SDK access for seamless integration.
- Fully open source and self-hostable, or you can use our fully managed platform.

2. Lightweight Agents
- Pre-built agents for Helm (Kubernetes) and Docker Compose (VM) that run alongside your application.
- Handle lifecycle tasks like guided installation, updates, and rollbacks.
- Provide basic metrics (health status, application version) and logs.

If you already have a customer portal or self-service interface for on-prem deployments, you can seamlessly integrate all features into your existing portal or application using our API or SDK. Alternatively, you can use our pre-built, white-labeled customer portal.
Here's what an integration into your existing customer portal could look like:

    import {DistrService} from "@glasskube/distr-sdk";

    const customerHasAutoUpdatesEnabled = false; // replace with your own logic
    const deploymentTargetId = 'da1d7130-bfa9-49a1-b567-c49728837df7';
    const service = new DistrService({apiKey: 'distr-8c24167aeb5fd4bb48b6d2140927df0f'});

    const result = await service.isOutdated(deploymentTargetId);
    if (result.deploymentTarget.deployment?.latestStatus?.type !== 'ok') {
      // let the user decide whether to allow updates from an unstable state, e.g. with:
      if (!confirm('The deployment is not in a stable state. Do you want to update anyway?')) {
        return;
      }
    }
    if (result.outdated) {
      if (customerHasAutoUpdatesEnabled) {
        await service.updateDeployment({deploymentTargetId});
        // notify customer about the update
      } else {
        const newerVersionsAvailable = result.newerVersions;
        // notify customer about the newer versions, e.g. via email
      }
    }

With the SDK/API, you can:

- Display the real-time deployed version and deployment status directly within the application, notifying customers when their deployed version is outdated.
- Allow customers to trigger updates from within your app using a simple API call.

If you're distributing software and want to streamline updates or enhance monitoring, we'd love your feedback and are here to answer any questions. Getting started is easy: just bring your Docker Compose file or Helm chart, and we'll guide you through the rest. Check out the fully managed version ( https://ift.tt/UHEkBAQ ) and explore our documentation ( https://distr.sh/docs/ ) to learn more.
Wednesday, January 29, 2025
Tuesday, January 28, 2025
Monday, January 27, 2025
Sunday, January 26, 2025
Saturday, January 25, 2025
Friday, January 24, 2025
New top story on Hacker News: Show HN: Snap Scope – Visualize Lens Focal Length Distribution from EXIF Data
Show HN: Snap Scope – Visualize Lens Focal Length Distribution from EXIF Data
5 by kan02134 | 0 comments on Hacker News.
Hey HN, I built this tool because I wanted to understand which focal lengths I actually use when taking photos. It's a web app that analyzes EXIF data to visualize focal length distribution patterns. While it's admittedly niche (focused specifically on photography), I think it could be useful for photographers trying to understand their lens usage patterns or making decisions about lens purchases.

Features:
- Client-side EXIF data processing (no server uploads/tracking)
- Handles thousands of photos at once
- Clean visualization with shareable summaries

This tool supports most RAW formats, but you might occasionally encounter files where EXIF extraction fails. In such cases, converting to more common formats like JPEG usually resolves the issue. Try it out: https://ift.tt/og5Vif7 Source: https://ift.tt/4UMuYya
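The core of a tool like this is reading each photo's EXIF FocalLength tag and bucketing the values. A minimal sketch of the binning step, assuming the focal lengths have already been extracted (real extraction would use a library such as Pillow or exifread, and the sample values here are invented):

```python
from collections import Counter

# Pretend these focal lengths (mm) were already pulled from EXIF data.
focal_lengths = [24, 24, 35, 35, 35, 50, 50, 85, 200]

def distribution(mms):
    """Count shots per focal length and render a tiny text histogram."""
    counts = Counter(mms)
    return {mm: "#" * n for mm, n in sorted(counts.items())}

for mm, bar in distribution(focal_lengths).items():
    print(f"{mm:>4} mm {bar}")
```

Doing this client-side, as the post describes, just means the extraction and counting run in the browser instead of on a server.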
Thursday, January 23, 2025
New top story on Hacker News: Show HN: I built an active community of trans people online
Show HN: I built an active community of trans people online
44 by t4t | 15 comments on Hacker News.
A year ago I surveyed the internet and noticed there was only one popular space for trans and gender-non-conforming people to meet: Lex. Lex is not well liked by its users. Its software feels heavy, and it is full of cash grabs and anti-patterns. It was recently acquired and is sure to only become more hostile to its users as it turns toward profit generation. With this in mind I built t4t, an alternative specially designed not only for queer people, but specifically for trans people. It is an extremely lightweight service. I built it with my most ideal stack: Flutter, Svelte, Supabase, Posthog. It has grown in the last year to about 4,000 monthly active users. I think it could grow way beyond that this year.
Wednesday, January 22, 2025
Tuesday, January 21, 2025
New top story on Hacker News: Show HN: SudokuVariants – play and construct different variants of Sudoku
Show HN: SudokuVariants – play and construct different variants of Sudoku
8 by stanac | 0 comments on Hacker News.
Hi HN, I've been working on this Sudoku web app for the past couple of years, on and off during free weekends and afternoons. I started working on it because I was bored during COVID, and Cracking the Cryptic had just become popular on YouTube, which got me wondering how hard it could be to make a Sudoku app.

The main idea is for the app to understand the constraints and know how to solve Sudoku grids (and not just be a simple Sudoku drawing/playing app). When it comes to classic Sudoku, the solver doesn't support anything more complicated than X-Wing, but it understands the constraints. At the moment, most of the popular variants are supported: killer, sandwich, arrow, thermo, palindrome, German whisper, kropki, consecutive, non-consecutive, greater than, XV, diagonal, anti-king, anti-knight, even-odd, windoku, renban, and zipper. The only variant I have yet to add support for is quadruple. If any other variant becomes popular, I will probably add it, as was the case with zipper lines during development.

A user account is not required to play, but it is required if you want to publish a public grid on the app. The app doesn't collect any PII and doesn't have ads or trackers. Accounts are identified by email hash; I am not storing email addresses or passwords, and OTPs are sent by email. The less I know about users, the better for both sides.

The app supports mobile devices, but it works best on bigger screens. It was built using Blazor SSR/WASM (AOT) with SVG for interactive parts. I know there are some performance issues (especially on mobile phones and with touch input), and I am trying to address them. Some of the features I was thinking about adding are classifying grids by difficulty, daily Sudoku, and maybe campaigns (groups of Sudoku grids where users have to solve them in order).

If you like Sudoku, or more specifically variants of Sudoku, please let me know what you think about SudokuVariants. URL: https://ift.tt/SDi1UaP Thanks!
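The interesting design point in the post is that the app "understands the constraints" rather than just drawing them. One natural way to model that is as small predicate functions over a line of cells. Here is a rough sketch (in Python for illustration; the app itself is Blazor/C#, and its actual representation is not described) of checks for two of the variants listed above:

```python
def german_whisper_ok(line_values):
    """German whisper: every pair of adjacent cells on the line
    must differ by at least 5."""
    return all(abs(a - b) >= 5 for a, b in zip(line_values, line_values[1:]))

def renban_ok(line_values):
    """Renban: the line holds a set of consecutive, non-repeating digits."""
    s = sorted(line_values)
    return len(set(s)) == len(s) and s[-1] - s[0] == len(s) - 1

print(german_whisper_ok([1, 6, 1, 7, 2]))  # True: each gap is >= 5
print(german_whisper_ok([4, 6]))           # False: gap of only 2
print(renban_ok([4, 2, 3]))                # True: {2, 3, 4} is consecutive
```

A solver can then use the same predicates both to validate a filled grid and to prune candidates while searching.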