Tied Crosscoders: Tracing How Chat LLM Behavior Emerges from Base Model
3 by aranguri | 0 comments on Hacker News.
Sunday, March 23, 2025
Saturday, March 22, 2025
Friday, March 21, 2025
Thursday, March 20, 2025
New top story on Hacker News: Show HN: AgentKit – JavaScript Alternative to OpenAI Agents SDK with Native MCP
Show HN: AgentKit – JavaScript Alternative to OpenAI Agents SDK with Native MCP
32 by tonyhb | 9 comments on Hacker News.
Hi HN! I’m Tony, co-founder of Inngest. I wanted to share AgentKit, our TypeScript multi-agent library, which we’ve been cooking and testing with some early users in prod for months. Although OpenAI’s Agents SDK has launched since then, we think an agent framework should offer more deterministic and flexible routing, work with multiple model providers, embrace MCP (for rich tooling), and support the unstoppable and growing community of TypeScript AI developers by enabling a smooth transition to production use cases. This is why we are building AgentKit, and we’re really excited about it for a few reasons.

Firstly, it’s simple. We embrace the KISS principles championed by Anthropic and Hugging Face by letting you gradually add autonomy to your AgentKit program using a few primitives:
- Agents: LLM calls that can be combined with prompts, tools, and native MCP support.
- Networks: a simple way to get Agents to collaborate through a shared State, including handoff.
- State: combines conversation history with a fully typed state machine, used in routing.
- Routers: where the autonomy lives, from code-based to LLM-based (e.g. ReAct) orchestration.

The routers are where the magic happens, and they allow you to build deterministic, reliable, testable agents. AgentKit routing works as follows: the network calls itself in a loop, inspecting the State to determine which agent to call next using a router. The returned agent runs, then optionally updates state data using its tools. On the next loop, the network inspects the state data and conversation history and determines which new agent to run. This fully typed state-machine routing lets you build agents deterministically using any of the effective agent patterns, which means your code is easy to read, edit, understand, and debug. It also makes handoff incredibly easy: you define when agents should hand off to each other using regular code and state (or by calling an LLM in the router for AI-based routing). This is similar to the OpenAI Agents SDK but easier to manage, plan, and build. (A rough sketch of this loop appears after this post.)

Then come the local development and move-to-production capabilities. AgentKit is compatible with Inngest’s tooling, meaning you can test agents using Inngest’s local DevServer, which provides traces, inputs, outputs, replay, tool and MCP inputs and outputs, and (soon) a step-over debugger so you can easily understand and visually see what's happening in the agent loop. In production, you can also optionally combine AgentKit with Inngest for fault-tolerant execution: each agent’s LLM call is wrapped in a step, and tools can use multiple steps to incorporate things like human-in-the-loop. This gives you native orchestration, observability, and out-of-the-box scale.

In the documentation you will find an AgentKit SWE-bench example and multiple coding-agent examples. It’s fully open source under the Apache 2 license. If you want to get started:
- npm: npm i @inngest/agent-kit
- GitHub: https://ift.tt/mWMq13F
- Docs: https://ift.tt/kCgoYu9

We’re excited to finally launch AgentKit; let us know what you think!
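To make the routing loop concrete, here is a minimal, self-contained TypeScript sketch of the pattern described above (network loop, router picks the next agent, agent updates shared state). The names and shapes are illustrative only and are not the actual @inngest/agent-kit API; see the docs linked above for the real thing.

```typescript
// Illustrative sketch of the described routing loop -- NOT the real AgentKit API.
type State = { history: string[]; data: Record<string, unknown> };

interface Agent {
  name: string;
  run(state: State): Promise<Partial<State["data"]>>; // e.g. wraps an LLM call + tools
}

type Router = (state: State, agents: Agent[]) => Agent | undefined;

// The network calls itself in a loop: inspect state, pick an agent, run it,
// merge its updates back into state, repeat until the router returns nothing.
async function runNetwork(agents: Agent[], router: Router, input: string): Promise<State> {
  const state: State = { history: [input], data: {} };
  for (let turn = 0; turn < 10; turn++) {   // hard cap to avoid infinite loops
    const next = router(state, agents);     // code-based or LLM-based routing
    if (!next) break;                       // no agent selected -> we're done
    const updates = await next.run(state);  // agent runs, optionally uses tools
    Object.assign(state.data, updates);     // typed state drives the next decision
    state.history.push(`${next.name} ran`);
  }
  return state;
}

// Example of deterministic, code-based handoff between two hypothetical agents.
const router: Router = (state, [planner, executor]) =>
  state.data["plan"] === undefined ? planner : state.data["done"] ? undefined : executor;
```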
Wednesday, March 19, 2025
Tuesday, March 18, 2025
Monday, March 17, 2025
New top story on Hacker News: Show HN: OpenTimes – Free travel times between U.S. Census geographies
Show HN: OpenTimes – Free travel times between U.S. Census geographies
13 by dfsnow | 1 comment on Hacker News.
Hi HN! Today I'm launching OpenTimes, a free database of roughly 150 billion pre-computed, point-to-point travel times between United States Census geographies. In addition to letting you visualize travel isochrones on the homepage, OpenTimes also lets you download massive amounts of travel time data for free and with no limits. The primary goal here is to enable research and fill a gap I noticed in the open-source spatial ecosystem. Researchers (social scientists, economists, etc.) use large travel time matrices to quantify things like access to healthcare, but they often end up paying Google or Esri for the necessary data. By pre-calculating times between commonly-used research geographies (i.e. Census) and then making those times easily accessible via SQL, I hope to make large-scale accessibility research cheaper and simpler. Some technical bits that may be of interest to HN folks: - The entire OpenTimes backend is just static Parquet files on R2. There's no RDBMS or running service. The whole thing costs about $10/month to host and is free to serve. - All travel times were calculated by pre-building the inputs (OSM, OSRM networks) and then distributing the compute over hundreds of GitHub Actions jobs. - The query/SQL layer uses a setup I haven't seen before: a single DuckDB database file with views that point to static Parquet files via HTTP. Finally, the driving times are optimistic since they don't (yet) account for traffic. This is something I hope to work on in the near future. Enjoy!
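The "DuckDB views over static Parquet via HTTP" setup OpenTimes describes is easy to reproduce in a few lines. Below is a small sketch using the Node.js duckdb bindings; the Parquet URL, view name, and column names are placeholders for illustration, not the actual OpenTimes file layout.

```typescript
// Sketch: query a remote Parquet file through a DuckDB view over HTTP.
// The URL and columns below are placeholders, not real OpenTimes paths.
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

// Recent DuckDB builds auto-load the httpfs extension for https:// URLs;
// on older versions run "INSTALL httpfs; LOAD httpfs;" first.
db.run(
  "CREATE VIEW times AS SELECT * FROM read_parquet('https://example.com/opentimes/part-0.parquet')",
  (err) => {
    if (err) throw err;
    // Point-to-point lookup on (hypothetical) Census geography ID columns.
    db.all(
      "SELECT origin_id, destination_id, travel_time FROM times LIMIT 10",
      (err, rows) => {
        if (err) throw err;
        console.table(rows);
      }
    );
  }
);
```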
New top story on Hacker News: Occupry your next lease to negotiate a better deal
Occupry your next lease to negotiate a better deal
12 by jason_archmint | 21 comments on Hacker News.
Sunday, March 16, 2025
New top story on Hacker News: Show HN: Quickly connect to WiFi by scanning text, no typing needed
Show HN: Quickly connect to WiFi by scanning text, no typing needed
3 by ylj | 0 comments on Hacker News.
I travel and work remotely a lot. Every new place—hotels, cafes, coworking spaces—means dealing with a new WiFi network. Sometimes there's a QR code, which is convenient, but usually, it's a hassle: manually finding the right SSID (especially frustrating when hotels have one SSID per room), then typing long, error-prone passwords. To simplify this, I made a small Android app called Wify. It uses your phone's camera to capture WiFi details (network name and password) from printed text, then generates a QR code right on your screen. You can instantly connect using Google Circle to Search or Google Lens. You can also import an image from your gallery instead of using the camera. Currently, it's Android-only since I daily-drive a Pixel 7, and WiFi APIs differ significantly between Android and iOS. Play Store link: https://ift.tt/OxzVXpY... I'd appreciate your feedback or suggestions!
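For anyone curious how the "Wi-Fi details to QR code" step works: Android (and iOS) understand a standard `WIFI:` payload string, so generating the code is mostly string formatting plus any QR library. A minimal sketch follows using the popular `qrcode` npm package; the package choice and the escaping helper are my own assumptions, not necessarily what Wify does.

```typescript
// Build the standard Wi-Fi QR payload (WIFI:T:<auth>;S:<ssid>;P:<password>;;)
// and render it as a QR code. Assumes the "qrcode" npm package is installed.
import QRCode from "qrcode";

// Special characters in SSIDs/passwords must be backslash-escaped per the spec.
const escapeField = (s: string) => s.replace(/([\\;,:"'])/g, "\\$1");

function wifiPayload(ssid: string, password: string, auth: "WPA" | "WEP" | "nopass" = "WPA"): string {
  return `WIFI:T:${auth};S:${escapeField(ssid)};P:${escapeField(password)};;`;
}

async function main() {
  // Hypothetical values captured from printed text via OCR.
  const payload = wifiPayload("Hotel-Room-412", "correct horse battery staple");
  const dataUrl = await QRCode.toDataURL(payload); // PNG data URL you can show on screen
  console.log(dataUrl.slice(0, 60) + "...");
}

main();
```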
Saturday, March 15, 2025
Friday, March 14, 2025
New top story on Hacker News: Show HN: Pi Labs – AI scoring and optimization tools for software engineers
Show HN: Pi Labs – AI scoring and optimization tools for software engineers
10 by achintms | 0 comments on Hacker News.
Hey HN, after years building some of the core AI and NLU systems in Google Search, we decided to leave and build outside. Our goal was to put the advanced ML and DS techniques we’ve been using in the hands of all software engineers, so that everyone can build AI and Search apps at the same level of performance and sophistication as the big labs. This was a hard technical challenge, but we were very inspired by the MVC architecture for web development. The intuition there is that when a data model changes, its view gets auto-updated. We built a similar architecture for AI. On one side is a scoring system, which encapsulates in a set of metrics what’s good about the AI application. On the other side is a set of optimizers that “compile” against this scorer: prompt optimization, data filtering, synthetic data generation, supervised learning, RL, etc. The scoring system can be calibrated using developer, user, or rater feedback, and once it’s updated, all the optimizers get recompiled against it. The result is a setup that makes it easy to incrementally improve the quality of your AI in a tight feedback loop: you update your scorers, they auto-update your optimizers, your app gets better, you see that improvement in interpretable scores, and then you repeat, progressing from simpler to more advanced optimizers and from off-the-shelf to calibrated scorers.

We would love your feedback on this approach. https://build.withpi.ai has a set of playgrounds to help you quickly build a scorer and multiple optimizers. No sign-in required. https://code.withpi.ai has the API reference and Notebook links. Finally, we have a Loom demo [1].

More technical details:

Scorers: Our scoring system has three key differences from the common LLM-as-a-judge pattern. First, rather than a single label or metric from an LLM judge, our scoring system is represented as a tunable tree of metrics, with 20+ dimensions which get combined into a final (non-linear) weighted score. The tree structure makes scores easily interpretable (just look at the breakdown by dimension), extensible (just add/remove a dimension), and adjustable (just re-tune the weights). Training the scoring system with labeled/preference data adjusts the weights. You can automate this process with user feedback signals, resulting in a tight feedback loop. Second, our scoring system handles natural language dimensions (great for free-form, qualitative questions requiring NLU) alongside quantitative dimensions (like computations over dates or doc length, which can be provided in Python) in the same tree. When calibrating with your labeled or preference data, the scorer learns how to balance these. Third, for natural language scoring, we use specialized smaller encoder models rather than autoregressive models. Encoders are a natural fit for scoring as they are faster and cheaper to run, easier to fine-tune, and more suitable architecturally (bi-directional attention with a regression or classification head) than similar-sized decoder models. For example, we can score 20+ dimensions in sub-100ms, making it possible to use scoring everywhere from evaluation to agent orchestration to reward modeling.

Optimizers: We took the most salient ML techniques and reformulated them as optimizers against our scoring system; e.g. for DSPy, the scoring system acts as its validator, and for GRPO, the scoring system acts as its reward model. We’re keen to hear the community’s feedback on which techniques to add next.

Overall stack: Playgrounds: Next.js and Vercel. AI: RunPod and GCP for training GPUs, TRL for training algos, ModernBERT & Llama as base models. GCP and Azure for 4o and Anthropic calls.

We’d love your feedback and perspectives; our team will be around to answer questions and discuss. If there’s a lot of interest, happy to host a live session! - Achint, co-founder of Pi Labs

[1] https://ift.tt/yaT1lbE
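To illustrate the "tunable tree of metrics combined into a weighted score" idea in code, here is a purely conceptual sketch (not Pi's actual API, dimensions, or weighting scheme): leaves score one dimension, internal nodes combine child scores with trainable weights and a non-linearity.

```typescript
// Sketch of a tunable scoring tree: leaves score one dimension, internal nodes
// combine child scores with trainable weights. Dimensions and weights are made up.
type Example = { question: string; answer: string };

interface ScoreNode {
  name: string;
  weight: number;             // re-tuned when calibrating on labeled/preference data
  score(ex: Example): number; // returns a value in [0, 1]
}

const leaf = (name: string, weight: number, fn: (ex: Example) => number): ScoreNode => ({
  name,
  weight,
  score: fn,
});

const branch = (name: string, weight: number, children: ScoreNode[]): ScoreNode => ({
  name,
  weight,
  // Weighted mean of children, squashed non-linearly so no single dimension dominates.
  score: (ex) => {
    const total = children.reduce((s, c) => s + c.weight, 0);
    const linear = children.reduce((s, c) => s + c.weight * c.score(ex), 0) / total;
    return Math.tanh(2 * linear); // illustrative non-linearity
  },
});

// A tiny two-branch tree mixing quantitative and "natural language" style dimensions.
const scorer = branch("overall", 1, [
  branch("format", 0.4, [
    leaf("not_too_long", 0.5, (ex) => (ex.answer.length < 800 ? 1 : 0.3)),
    leaf("no_counter_questions", 0.5, (ex) => (ex.answer.includes("?") ? 0.5 : 1)),
  ]),
  // In the real system a leaf like this would call a small encoder model.
  leaf("relevance", 0.6, (ex) => {
    const q = new Set(ex.question.toLowerCase().split(/\W+/).filter((w) => w.length > 3));
    const hits = ex.answer.toLowerCase().split(/\W+/).filter((w) => q.has(w)).length;
    return Math.min(1, hits / Math.max(1, q.size));
  }),
]);

console.log(
  scorer.score({ question: "explain photosynthesis", answer: "Photosynthesis converts light into chemical energy..." })
);
```

Calibration in this picture amounts to adjusting the `weight` fields (and the non-linearity) so the tree's output agrees with labeled or preference data.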
Thursday, March 13, 2025
New top story on Hacker News: Show HN: Bubbles, a vanilla JavaScript web game
Show HN: Bubbles, a vanilla JavaScript web game
21 by ehmorris | 7 comments on Hacker News.
Hey everybody, you might remember my older game, Lander! It made a big splash on Hacker News about 2 years ago. I'm still enjoying writing games with no dependencies. I've been working on Bubbles for about 6 months and would love to see your scores. If you like it, you can build your own levels with my builder tool: https://ift.tt/G8p9Kb5 and share the levels here or via GitHub.
Wednesday, March 12, 2025
Tuesday, March 11, 2025
Monday, March 10, 2025
Sunday, March 9, 2025
Saturday, March 8, 2025
Friday, March 7, 2025
New top story on Hacker News: Show HN: A big tech dev experience for an open source CMS
Show HN: A big tech dev experience for an open source CMS
19 by randall | 15 comments on Hacker News.
Hey HN! We're building an open-source CMS designed to help creators with every part of the content production pipeline. We're showing our tiny first step: a tool designed to take in a Twitter username and produce an "identity card" based on it. We expect to use an approach similar to [Constitutional AI] with an explicit focus on repeatability, testability, and verification of an "identity card." We think this approach could be used to create finetuning examples for training changes, or serve as inference-time insight for LLMs, or most likely a combination of the two. The tooling we're showing today is extremely simplistic (and the AI is frankly bad), but this is intentional. We're more focused on showing the dev experience and community aspects. We'd like to make it easier to contribute to this project than to edit Wikipedia. Communities are frustrated with things like WordPress, Apache, and other open source foundations focusing on things other than software. We have a lot of community ideas (governance via vote by jury is perhaps the most interesting). We're a team of 5, and we've bounced around a few companies with each other. We're all professional creators (video + music) and we're creating tooling for ourselves first. Previously, we did a startup called Vidpresso (YC W14) that was acquired by Facebook in 2018. We all worked at Facebook for 5 years on creator tooling and have since left to start this thing. After leaving FB, it was painful to leave the warm embrace of the Facebook infra team, where we had amazing tooling. Since then, we've pivoted a bunch of times trying to figure out our "real" product. While we think we've finally nailed it, the developer experience we built is one we think others could benefit from. Our tooling is designed so any developer can easily jump in and start contributing. It's an AI-first dev environment designed with a few key principles in mind:

1. You should be able to discover any command you need to run without looking at docs.
2. To make a change, as much context as possible should be provided as close to the code as possible.
3. AIs are "people too", in the sense that they benefit from focused context and from not being distracted by having to search deeply through multiple files or documentation to make changes.

We have a few non-traditional elements to our stack which we think are worth exploring. [Isograph] helps us simplify our component usage with GraphQL. [Replit] lets people use AI coding without needing to set up any additional tooling. We've learned how to treat it like a junior developer, and think it will be the best platform for AI-first open source projects going forward. [Sapling] (and Git together) for version control. It might sound counterintuitive, but we use Git to manage agent interactions and we use Sapling to manage "purposeful" commits. My last [Show HN post in 2013] ended up helping me find my Vidpresso cofounder, so I have high hopes for this one. I'm excited to meet anyone, developers, creators, or nice people in general, and start to work with them to make this project work. I have good references for being a nice guy, and aim to keep that going with this project. The best way to work with us is to [remix our Replit app] and [join our Discord]. Thanks for reading and checking us out! It's super early, but we're excited to work with you!

[Constitutional AI]: https://ift.tt/LK6w8Q9...
[Isograph]: https://isograph.dev
[Replit]: https://replit.com
[Sapling]: https://sapling-scm.com
[Show HN post in 2013]: https://ift.tt/5QjMxz9
[remix our Replit app]: https://ift.tt/5knaoSr...
[join our Discord]: https://ift.tt/mG8Pkj2
Thursday, March 6, 2025
Wednesday, March 5, 2025
Tuesday, March 4, 2025
New top story on Hacker News: Show HN: Open-source Deep Research across workplace applications
Show HN: Open-source Deep Research across workplace applications
6 by yuhongsun | 1 comment on Hacker News.
I’ve been using deep research on OpenAI and Perplexity and it’s been just amazing at gathering data across a lot of related and chained searches. Just earlier today, I asked “What are some marquee tech companies / hot startups (not including the giants like FAAMG, Samsung, Nvidia, etc.)?” It’s a pretty involved question, and looking up “marquee tech startups” or "hot tech startups" on Google gave me nothing useful. Deep research on both ChatGPT and Perplexity gave really high-quality responses, with ChatGPT leaning toward slightly larger scaleups and Perplexity leaning more toward up-and-coming companies. Given how useful AI research agents are across the internet, we decided to build an open-source equivalent for the workplace, since a ton of questions at work also cannot be easily resolved with a single search. Onyx supports deep research connected to company applications like Google Drive, Salesforce, SharePoint, GitHub, Slack, and 30+ others. For example, an engineer may want to know “What’s happening with the verification email failure?” Onyx’s AI agent would first figure out what it needs to answer this question: what is the cause of the failure, what has been done to address it, has this come up before, and what’s the latest status on the issue. The agent would run parallel searches through Confluence, email, Slack, and GitHub to get the answers to these, then combine them to build a coherent overview. If the agent finds that there was a technical blocker that will delay the resolution, it will adjust mid-flight and research to get more context on the blocker. Here’s a video demo I recorded: https://www.youtube.com/watch?v=drvC0fWG4hE If you want to get started with the GitHub repo, you can check out our guides at https://docs.onyx.app . Or to play with it without needing to deploy anything, you can go to https://ift.tt/a632QZO P.S. There are a lot of cool technical details behind building a system like this, so I’ll continue the conversation in the comments.
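The "figure out sub-questions, search sources in parallel, then combine" flow described above maps naturally onto a small fan-out/fan-in pattern. A rough sketch follows, with the connector and LLM calls stubbed out; none of this is Onyx's actual code or API.

```typescript
// Sketch of the described research loop: decompose a question, search several
// sources in parallel, then synthesize. Connectors and LLM calls are stubs.
type Doc = { source: string; snippet: string };
type Connector = (query: string) => Promise<Doc[]>;

const connectors: Record<string, Connector> = {
  confluence: async (q) => [{ source: "confluence", snippet: `design doc matching "${q}"` }],
  slack: async (q) => [{ source: "slack", snippet: `thread matching "${q}"` }],
  github: async (q) => [{ source: "github", snippet: `issue matching "${q}"` }],
};

// In the real system an LLM would propose these sub-questions.
function decompose(question: string): string[] {
  return [
    `root cause of: ${question}`,
    `fixes attempted for: ${question}`,
    `latest status of: ${question}`,
  ];
}

async function research(question: string): Promise<string> {
  const subQuestions = decompose(question);
  // Fan out: every sub-question hits every connector concurrently.
  const results = await Promise.all(
    subQuestions.flatMap((sq) => Object.values(connectors).map((c) => c(sq)))
  );
  const docs = results.flat();
  // Fan in: a real implementation would hand the docs to an LLM to write the overview,
  // and could loop again here if the docs reveal a blocker needing more context.
  return `Overview of "${question}" based on ${docs.length} documents from ` +
    `${new Set(docs.map((d) => d.source)).size} sources.`;
}

research("What's happening with the verification email failure?").then(console.log);
```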
Monday, March 3, 2025
Sunday, March 2, 2025
Saturday, March 1, 2025
Friday, February 28, 2025
Thursday, February 27, 2025
Wednesday, February 26, 2025
Tuesday, February 25, 2025
Monday, February 24, 2025
Sunday, February 23, 2025
Saturday, February 22, 2025
Friday, February 21, 2025
Thursday, February 20, 2025
Wednesday, February 19, 2025
New top story on Hacker News: Building a Bitcoin Exchange with FOSS BTC Pay Server
Building a Bitcoin Exchange with FOSS BTC Pay Server
17 by BitcoinNewsCom | 1 comment on Hacker News.
Tuesday, February 18, 2025
New top story on Hacker News: (Ab)using general search algorithms on dynamic optimization problems (2023)
(Ab)using general search algorithms on dynamic optimization problems (2023)
10 by h45x1 | 3 comments on Hacker News.
I wrote this blog post back in 2023, but since then I've become a frequent lurker on HN and decided to repost it here. For me, writing it was about connecting the dots between the dynamic optimization techniques I've studied as an economist and the more general search algorithms studied in CS.
Monday, February 17, 2025
Sunday, February 16, 2025
New top story on Hacker News: Show HN: Hackyournews.com v2
Show HN: Hackyournews.com v2
10 by ukuina | 0 comments on Hacker News.
A year and a half after I published https://ift.tt/P7KoWVR , I've rewritten it to be neater and added support for more news sources. HackYourNews.com v1 had a great response on HN [1] and consistently sees ~2k weekly unique visitors. There were many long-standing requests that I wanted to fulfill (thanks for your patience!): a proper dark mode, correct rendering on mobile devices, and more cogent summaries. This rewrite is the result. gpt-4o-mini reduces the cost of summarization to an absurd degree, so it's now sustainable to keep this free service going! Someday, I hope to use the Batch API [2] to drive down costs even further. Enjoy. [1] https://ift.tt/wbEy3tV [2] https://ift.tt/13vAK68
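For reference, the core summarization call with gpt-4o-mini is only a few lines with the official OpenAI Node SDK. A minimal sketch (the prompt wording and truncation limit are my own guesses, not what HackYourNews actually uses):

```typescript
// Sketch: summarize an article with gpt-4o-mini using the official OpenAI SDK.
// Requires OPENAI_API_KEY in the environment; the prompt below is illustrative.
import OpenAI from "openai";

const openai = new OpenAI();

async function summarize(articleText: string): Promise<string> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "Summarize the article in 3 concise bullet points." },
      { role: "user", content: articleText.slice(0, 12_000) }, // crude length cap
    ],
  });
  return response.choices[0].message.content ?? "";
}

summarize("Full text of a Hacker News story goes here...").then(console.log);
```

The Batch API mentioned above trades latency for roughly half the per-token price, which is why it is attractive for a nightly summarization job like this.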
Saturday, February 15, 2025
Friday, February 14, 2025
Thursday, February 13, 2025
New top story on Hacker News: Phind 2: AI search with visual answers and multi-step reasoning
Phind 2: AI search with visual answers and multi-step reasoning
18 by rushingcreek | 4 comments on Hacker News.
Hi HN! Michael here. We've spent the last 6 months rebuilding Phind. The new Phind goes beyond text to present answers visually with inline images, diagrams, cards, and other widgets to make answers more meaningful. Here are some examples: "explain photosynthesis" - https://www.youtube.com/watch?v=cTCpnyICukM#t=7 "how to cook the perfect steak" - https://www.youtube.com/watch?v=cTCpnyICukM#t=55 "quicksort in rust" - https://www.youtube.com/watch?v=cTCpnyICukM#t=105 We asked ourselves what types of answers we would ideally like and crafted a new UI and model series to help get us there. Phind is also now able to seek out information on its own. If it needs more information, it can do multiple rounds of additional searches to get you the most comprehensive answer it can. This blog post contains an overview of what we did as well as technical deep dives into how we built the new frontend and models. I'm super grateful for all of the feedback we've gotten from the HN community and can't wait to hear your thoughts!
Wednesday, February 12, 2025
New top story on Hacker News: Show HN: Sort lines semantically using llm-sort
Show HN: Sort lines semantically using llm-sort
9 by vagozino | 4 comments on Hacker News.
This is a small plugin I made for Simon Willison's llm utility. You can do things like:

cat names.txt | llm sort -q "Which one of these names is best for a pet seagull?"

cat books.txt | llm sort -q "Which book is more related to basic vs. advanced CS topics?"

I see a lot of potential marrying LLMs with classic UNIX interfaces.
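One naive way to implement the idea behind a tool like this, shown purely for illustration (the plugin's real approach, e.g. pairwise comparisons, may well differ), is to ask a model to score each line against the query and then sort by that score:

```typescript
// Naive sketch of semantic sorting: score each line against the query with an
// LLM, then sort numerically. Not necessarily how the llm-sort plugin works.
import OpenAI from "openai";

const openai = new OpenAI();

async function scoreLine(line: string, query: string): Promise<number> {
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "Answer with a single number from 0 to 10." },
      {
        role: "user",
        content: `Question: ${query}\nCandidate: ${line}\nHow well does the candidate answer the question?`,
      },
    ],
  });
  return parseFloat(res.choices[0].message.content ?? "0") || 0;
}

export async function llmSort(lines: string[], query: string): Promise<string[]> {
  const scored = await Promise.all(lines.map(async (l) => ({ l, s: await scoreLine(l, query) })));
  return scored.sort((a, b) => b.s - a.s).map((x) => x.l);
}

// Usage: llmSort(["Steven", "Gully", "Archibald"], "Which name is best for a pet seagull?")
```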
Tuesday, February 11, 2025
Monday, February 10, 2025
Sunday, February 9, 2025
Saturday, February 8, 2025
Friday, February 7, 2025
Thursday, February 6, 2025
New top story on Hacker News: Show HN: Heap Explorer
Show HN: Heap Explorer
7 by bkallus | 0 comments on Hacker News.
I wrote a little LD_PRELOAD library that makes it easy to inspect and interact with a running program's glibc heap. It's fun to pause processes, free a bunch of their allocations, then resume them. Most of the time, the processes continue as though nothing happened, but sometimes they do interesting things :)
Wednesday, February 5, 2025
Tuesday, February 4, 2025
Monday, February 3, 2025
New top story on Hacker News: Ask HN: Who wants to be hired? (February 2025)
Ask HN: Who wants to be hired? (February 2025)
24 by whoishiring | 96 comments on Hacker News.
Share your information if you are looking for work. Please use this format:

Location:
Remote:
Willing to relocate:
Technologies:
Résumé/CV:
Email:

Please only post if you are personally looking for work. Agencies, recruiters, job boards, and so on, are off topic here. Readers: please only email these addresses to discuss work opportunities. There's a site for searching these posts at https://ift.tt/i45mEvw .
Sunday, February 2, 2025
Saturday, February 1, 2025
Friday, January 31, 2025
Thursday, January 30, 2025
New top story on Hacker News: Show HN: Distr – open-source distribution platform for on-prem deployments
Show HN: Distr – open-source distribution platform for on-prem deployments
12 by louis_w_gk | 0 comments on Hacker News.
Distr is designed to help software engineers distribute and manage their applications or agents in customer-controlled or shared-responsibility environments. You only need a Docker Compose file or Helm chart; everything else for on-prem is handled by the platform. We’re an open-source dev tool company. Over the past couple of months, we’ve spoken with dozens of software companies to understand their challenges with on-prem deployments. We analyzed the internal tools they’ve built and the best practices from existing solutions, combining them into a prebuilt, open-source solution that works out of the box and integrates seamlessly. Distr consists of two key components:

1. Hub
- Provides a centralized view of all deployments and controls connected agents.
- Comes with a simple GUI but also supports API and SDK access for seamless integration.
- Fully open source and self-hostable, or you can use our fully managed platform.

2. Lightweight Agents
- Pre-built agents for Helm (Kubernetes) and Docker Compose (VM) that run alongside your application.
- Handle lifecycle tasks like guided installation, updates, and rollbacks.
- Provide basic metrics (health status, application version) and logs.

If you already have a customer portal or self-service interface for on-prem deployments, you can seamlessly integrate all features into your existing portal or application using our API or SDK. Alternatively, you can use our pre-built, white-labeled customer portal. Here’s what an integration into your existing customer portal could look like:

import {DistrService} from "@glasskube/distr-sdk";

const customerHasAutoUpdatesEnabled = false; // replace with your own logic
const deploymentTargetId = 'da1d7130-bfa9-49a1-b567-c49728837df7';
const service = new DistrService({
  apiKey: 'distr-8c24167aeb5fd4bb48b6d2140927df0f'
});

const result = await service.isOutdated(deploymentTargetId);
if (result.deploymentTarget.deployment?.latestStatus?.type !== 'ok') {
  // let the user decide whether to allow updates from an unstable state, e.g. with:
  if (!confirm('The deployment is not in a stable state. Do you want to update anyway?')) {
    return;
  }
}
if (result.outdated) {
  if (customerHasAutoUpdatesEnabled) {
    await service.updateDeployment({deploymentTargetId});
    // notify the customer about the update
  } else {
    const newerVersionsAvailable = result.newerVersions;
    // notify the customer about the newer versions, e.g. via email
  }
}

With the SDK/API, you can:
- Display the real-time deployed version and deployment status directly within the application, notifying customers when their deployed version is outdated.
- Allow customers to trigger updates from within your app using a simple API call.

If you’re distributing software and want to streamline updates or enhance monitoring, we’d love your feedback and are here to answer any questions. Getting started is easy: just bring your Docker Compose file or Helm chart, and we’ll guide you through the rest. Check out the fully managed version ( https://ift.tt/UHEkBAQ ) and explore our documentation ( https://distr.sh/docs/ ) to learn more.
Wednesday, January 29, 2025
Tuesday, January 28, 2025
Monday, January 27, 2025
Sunday, January 26, 2025
Saturday, January 25, 2025
Friday, January 24, 2025
New top story on Hacker News: Show HN: Snap Scope – Visualize Lens Focal Length Distribution from EXIF Data
Show HN: Snap Scope – Visualize Lens Focal Length Distribution from EXIF Data
5 by kan02134 | 0 comments on Hacker News.
Hey HN, I built this tool because I wanted to understand which focal lengths I actually use when taking photos. It's a web app that analyzes EXIF data to visualize focal length distribution patterns. While it's admittedly niche (focused specifically on photography), I think it could be useful for photographers trying to understand their lens usage patterns or making decisions about lens purchases. Features: Client-side EXIF data processing (no server uploads/tracking) / Handles thousands of photos at once / Clean visualization with shareable summaries This tool supports most RAW formats, but you might occasionally encounter files where EXIF extraction fails. In such cases, converting to more common formats like JPEG usually resolves the issue. Try it out: https://ift.tt/og5Vif7 Source: https://ift.tt/4UMuYya
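The client-side EXIF step can be reproduced with a small library such as exifr; the sketch below tallies focal lengths from a list of files entirely in the browser. The library choice and tag names are assumptions on my part, not necessarily what Snap Scope uses.

```typescript
// Sketch: count focal lengths from image files entirely client-side.
// Assumes the "exifr" npm package; Snap Scope's actual stack may differ.
import exifr from "exifr";

async function focalLengthHistogram(files: File[]): Promise<Map<number, number>> {
  const counts = new Map<number, number>();
  for (const file of files) {
    try {
      const tags = await exifr.parse(file, ["FocalLength", "FocalLengthIn35mmFormat"]);
      const fl: number | undefined = tags?.FocalLengthIn35mmFormat ?? tags?.FocalLength;
      if (fl) counts.set(Math.round(fl), (counts.get(Math.round(fl)) ?? 0) + 1);
    } catch {
      // Some RAW files may fail EXIF extraction, as the post notes; skip them.
    }
  }
  return counts;
}

// e.g. wire this to an <input type="file" multiple> and render the Map as a bar chart.
```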
Thursday, January 23, 2025
New top story on Hacker News: Show HN: I built an active community of trans people online
Show HN: I built an active community of trans people online
44 by t4t | 15 comments on Hacker News.
A year ago I surveyed the internet and noticed there was only one popular space for trans and gender-non-conforming people to meet: Lex. Lex is not well liked by its users. Its software feels heavy, and it is full of cash grabs and anti-patterns. It was recently acquired and is sure to only become more hostile to its users as it turns towards profit generation. With this in mind I built t4t, an alternative designed not only for queer people generally, but for trans people specifically. It is an extremely lightweight service. I built it with my most ideal stack: Flutter, Svelte, Supabase, Posthog. It has grown in the last year to about 4,000 monthly active users. I think it could grow way beyond that this year.
Wednesday, January 22, 2025
Tuesday, January 21, 2025
New top story on Hacker News: Show HN: SudokuVariants – play and construct different variants of Sudoku
Show HN: SudokuVariants – play and construct different variants of Sudoku
8 by stanac | 0 comments on Hacker News.
Hi HN, I've been working on this Sudoku web app for the past couple of years, on and off during free weekends and afternoons. I started working on it because I was bored during COVID, and Cracking the Cryptic had just become popular on YouTube, which got me wondering how hard it could be to make a Sudoku app. The main idea is for the app to understand the constraints and know how to solve Sudoku grids (and not just be a simple Sudoku drawing/playing app). When it comes to classic Sudoku, the solver doesn't support anything more complicated than X-Wing, but it understands the constraints. At the moment, most of the popular variants are supported: killer, sandwich, arrow, thermo, palindrome, German whisper, kropki, consecutive, non-consecutive, greater than, XV, diagonal, anti-king, anti-knight, even-odd, windoku, renban, and zipper. The only variant I am yet to add support for is quadruple. If any other variant becomes popular, I will probably add it, as was the case with zipper lines during development. A user account is not required to play, but it is required if you want to publish a public grid on the app. The app doesn't collect any PII, doesn't have ads or trackers. Accounts are identified by email hash; I am not storing email addresses or passwords, and OTPs are sent by email. The less I know about users, the better for both sides. The app supports mobile devices, but it works best on bigger screens. It was built using Blazor SSR/WASM (AOT) with SVG for interactive parts. I know there are some performance issues (especially on mobile phones and with touch input), and I am trying to address them. Some of the features I was thinking about adding are classifying grids by difficulty, daily Sudoku, and maybe campaigns (groups of Sudoku grids where users have to solve them in order). If you like Sudoku, or more specifically variants of Sudoku, please let me know what you think about SudokuVariants. URL: https://ift.tt/SDi1UaP Thanks!
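As a tiny illustration of what "the app understands the constraints" can mean in code (this is not the actual SudokuVariants implementation), a variant rule can be modeled as a predicate over the grid. For example, a German whisper line requires adjacent cells on the line to differ by at least 5, and a killer cage requires its cells to sum to the clue without repeating digits:

```typescript
// Sketch: modeling variant constraints as predicates over the grid.
// Cell coordinates and the checker interface are illustrative only.
type Grid = number[][]; // 0 = empty, 1..9 = placed digit
type Cell = [row: number, col: number];

interface Constraint {
  name: string;
  satisfied(grid: Grid): boolean; // ignores empty cells so partial grids pass
}

// German whisper: adjacent cells along the line must differ by at least 5.
function germanWhisper(line: Cell[]): Constraint {
  return {
    name: "german-whisper",
    satisfied: (grid) =>
      line.every(([r, c], i) => {
        if (i === 0) return true;
        const [pr, pc] = line[i - 1];
        const a = grid[pr][pc];
        const b = grid[r][c];
        return a === 0 || b === 0 || Math.abs(a - b) >= 5;
      }),
  };
}

// Killer cage: digits may not repeat, and the sum is checked once the cage is full.
function killerCage(cells: Cell[], sum: number): Constraint {
  return {
    name: "killer-cage",
    satisfied: (grid) => {
      const digits = cells.map(([r, c]) => grid[r][c]);
      const placed = digits.filter((d) => d > 0);
      if (new Set(placed).size !== placed.length) return false;
      return digits.includes(0) || digits.reduce((a, b) => a + b, 0) === sum;
    },
  };
}
```

A solver can then treat every variant uniformly: a candidate placement is legal only if all constraints touching the affected cells remain satisfied.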
Monday, January 20, 2025
Sunday, January 19, 2025
Saturday, January 18, 2025
New top story on Hacker News: Show HN: ZX Spectrum SCR to PNG Converter
Show HN: ZX Spectrum SCR to PNG Converter
2 by iamflimflam1 | 0 comments on Hacker News.
Scratching my own itch. I had to do this for showing information on ZX Spectrum games, so I thought I'd turn it into a useful tool for other people to use.
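For the curious, the .scr format is just a 6,912-byte dump of the Spectrum's screen memory: 6,144 bytes of bitmap (with the machine's famously interleaved line addressing) followed by 768 attribute bytes (ink, paper, bright, flash per 8x8 cell). A minimal sketch of the pixel decode follows; this is not the converter's actual code, and PNG encoding is left out.

```typescript
// Sketch: decode a 6912-byte ZX Spectrum .scr dump into a 256x192 RGBA buffer.
// Palette values are the conventional approximations; the flash bit is ignored.
function zxColor(index: number, bright: boolean): [number, number, number] {
  const v = bright ? 0xff : 0xd7;
  // Colour index bits: bit 0 = blue, bit 1 = red, bit 2 = green.
  return [index & 2 ? v : 0, index & 4 ? v : 0, index & 1 ? v : 0];
}

function decodeScr(scr: Uint8Array): Uint8ClampedArray {
  const rgba = new Uint8ClampedArray(256 * 192 * 4);
  for (let y = 0; y < 192; y++) {
    for (let x = 0; x < 256; x++) {
      // Interleaved bitmap addressing: the y bits are shuffled into the offset.
      const bitmapOffset =
        ((y & 0xc0) << 5) | ((y & 0x07) << 8) | ((y & 0x38) << 2) | (x >> 3);
      const pixelSet = (scr[bitmapOffset] >> (7 - (x & 7))) & 1;
      // One attribute byte per 8x8 cell: bits 0-2 ink, 3-5 paper, 6 bright, 7 flash.
      const attr = scr[6144 + (y >> 3) * 32 + (x >> 3)];
      const bright = (attr & 0x40) !== 0;
      const [r, g, b] = zxColor(pixelSet ? attr & 0x07 : (attr >> 3) & 0x07, bright);
      const i = (y * 256 + x) * 4;
      rgba[i] = r; rgba[i + 1] = g; rgba[i + 2] = b; rgba[i + 3] = 255;
    }
  }
  return rgba; // feed into a PNG encoder or a canvas ImageData
}
```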
Friday, January 17, 2025
New top story on Hacker News: Dr. TVAM – Inverse Rendering for Tomographic Volumetric Additive Manufacturing
Dr. TVAM – Inverse Rendering for Tomographic Volumetric Additive Manufacturing
7 by roflmaostc | 0 comments on Hacker News.
We published this work at SIGGRAPH Asia 2024. Built on Mitsuba 3 and Dr.Jit, our software Dr. TVAM optimizes projection patterns for TVAM. TVAM makes it possible to print centimeter-scale objects within seconds.
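As a rough intuition for what "optimizing patterns for TVAM" means, here is a toy Python sketch of the underlying inverse problem using scikit-image's Radon transform: find non-negative projector patterns whose back-projected dose is high inside the target object and low outside. This is a conceptual illustration only and is unrelated to Dr. TVAM's actual Mitsuba 3 / Dr.Jit implementation.

```python
# Toy TVAM-style pattern optimisation on a 2D slice (conceptual only).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import iradon, radon, resize

# Binary target slice: where we want the resin to cure.
target = resize(shepp_logan_phantom(), (128, 128)) > 0.1
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

# Keep corrections inside the tomographic reconstruction circle.
yy, xx = np.mgrid[:128, :128]
circle = (xx - 63.5) ** 2 + (yy - 63.5) ** 2 <= 63.5 ** 2

# Initial guess: forward projection of the target, clipped to non-negative light.
patterns = np.clip(radon(target.astype(float), theta=theta), 0.0, None)

for _ in range(20):
    # Unfiltered back-projection approximates the delivered dose.
    dose = iradon(patterns, theta=theta, filter_name=None)
    dose = dose / dose.max()
    # Push the dose above a threshold inside the target and below one outside.
    error = np.where(target, 0.9 - dose, 0.1 - dose) * circle
    patterns = np.clip(patterns + 0.5 * radon(error, theta=theta), 0.0, None)

print("mean dose inside target:", dose[target].mean())
print("mean dose outside target:", dose[~target & circle].mean())
```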
Thursday, January 16, 2025
Wednesday, January 15, 2025
New top story on Hacker News: Show HN: I Built a Fair Alternative to Product Hunt for Indie Makers Like You
Show HN: I Built a Fair Alternative to Product Hunt for Indie Makers Like You
6 by lakshikag | 2 comments on Hacker News.
I’m an indie maker, just like many of you. A few months back, I launched a product on one of the big platforms, and... nothing. It got buried under dozens of other launches within hours. All that work, all that excitement, gone in the blink of an eye. No one even saw it. It stung. I wasn’t mad (well, maybe a little), but mostly I just felt invisible. The truth is, indie makers like me don’t have big teams or budgets to fight for visibility. We rely on genuine support and connections. I couldn’t stop thinking about how many great ideas never get the attention they deserve because they’re overshadowed. So, I decided to build something different: https://itslaunched.com Here’s the idea: • 10 launches per day, max. Limiting the number of daily launches ensures that every product gets its moment in the spotlight. • 2 votes per user, per day. This isn’t a popularity contest. You only get two votes, so people have to really think about which products they want to support. It’s quality over quantity. • “Under Radar” feature. This one’s my favorite. If a product doesn’t get much love on its launch day, it gets a second chance to shine the next day. Because timing shouldn’t be the only thing standing between you and success. There’s more, like badges, comments, and streaks, but the heart of it is simple: a fair shot for indie makers. I built this because I believe every product deserves to be seen, especially the ones built by solo makers and small teams putting their heart into something they truly care about. And I didn’t build this to compete with Product Hunt. I built it to give indie makers the platform they deserve, one where their creativity truly gets noticed. If this sounds like something you’d want to check out, I’d love your thoughts. I’m still tweaking and improving it every day based on feedback. Let me know what you think, and if you’ve got a product you’re proud of, I’d love to see it shine.
Tuesday, January 14, 2025
Monday, January 13, 2025
Sunday, January 12, 2025
Saturday, January 11, 2025
Friday, January 10, 2025
New top story on Hacker News: Show HN: KeyTik: The All-in-One Input Automation Tool
Show HN: KeyTik: The All-in-One Input Automation Tool
3 by Fajar_Rahmad | 0 comments on Hacker News.
KeyTik now has its own website. Key features: - Multiple Keyboard Remap Profiles: Unlike most keyboard remappers, KeyTik can handle multiple remap profiles. You don't have to set up a remap again when you need to switch to another and then set it back when you're done; just create multiple remaps and activate or deactivate them whenever you want. - Advanced Keyboard Remap: Remapping is not limited to single keys; it also covers key combinations and text or typing (example: pressing 's' types 'Select'). - Assign Script or Remap Profile to a Specific Keyboard or Mouse Using Device VID & PID or Device Handle: Make a script or remap profile work only for a specific physical keyboard or mouse, using the device VID & PID or device handle as the identifier. - Assign Script or Remap Profile to Specific Programs Using Class or Process: Make a script or remap profile work only for a specific program class, like a specific Chrome tab or an entire program. - Auto Clicker: KeyTik ships with an auto clicker. By default it simulates 'left click' while 'e' is held; you can change the 'left click', 'e', and interval parts to your preference. - Screen Clicker: KeyTik also ships with a screen clicker, which simulates 'left click' at specific screen coordinates. You can change the coordinates and interval to your preference, and KeyTik includes a tool that finds screen coordinates and automatically copies them so you can paste them into the screen clicker in text mode. - Screen Coordinate Auto Detect and Copy: To make editing the screen clicker easier, KeyTik also includes a coordinate finder. By default you just press 'space' and it shows the coordinate and automatically copies it; the 'space' key can be changed to your preference. - Multiple Files Opener: A multiple-files opener is also included. When you press a key or key combination, it opens the files; you can change the file or program paths to your preference. Additional features: - Run & Exit Remap Profile: Activate or deactivate profiles individually, so you don't need to adjust the remap every time. - Run Profile on Startup: Run profiles on startup so they activate automatically when you turn on your device, with no need to activate them manually each time. - Delete & Store Remap Profile: Delete unnecessary profiles, or store profiles for a clean main window without permanently removing them. - Pin Profile: Pin your favorite profiles for quick and easy access. - Edit Remap Profile: Adjust a profile to your preference. - Assign Shortcut to Each Profile: Enable or disable a profile using shortcuts. - Default Mode in Create or Edit Profile: The easiest way to remap your keyboard. - Text Mode in Create or Edit Profile: Text mode lets you adjust or create your AutoHotkey script easily, without needing an external editor. - Make Window Always on Top: The "always on top" option lets you remap keys while other windows are open, without minimizing the KeyTik window; this is especially useful during gaming. - Show Stored Profiles: Display your stored profiles or restore them to the main window. - Import Profile: Use an AutoHotkey script from an external source, such as a download, and turn it into a profile. - Automatically Take Key Input: A button that lets you press the desired key and automatically fills the key entry.
Thursday, January 9, 2025
New top story on Hacker News: Show HN: TabPFN v2 – A SOTA foundation model for small tabular data
Show HN: TabPFN v2 – A SOTA foundation model for small tabular data
17 by onasta | 2 comments on Hacker News.
I am excited to announce the release of TabPFN v2, a tabular foundation model that delivers state-of-the-art predictions on small datasets in just 2.8 seconds for classification and 4.8 seconds for regression compared to strong baselines tuned for 4 hours. Published in Nature, this model outperforms traditional methods on datasets with up to 10,000 samples and 500 features. The model is available under an open license: a derivative of the Apache 2 license with a single modification, adding an enhanced attribution requirement inspired by the Llama 3 license: https://ift.tt/WLg2C9x . You can also try it via API: https://ift.tt/QnE5tsk TabPFN v2 is trained on 130 million synthetic tabular prediction datasets to perform in-context learning and output a predictive distribution for the test data points. Each dataset acts as one meta-datapoint to train the TabPFN weights with SGD. As a foundation model, TabPFN allows for fine-tuning, density estimation and data generation. Compared to TabPFN v1, v2 now natively supports categorical features and missing values. TabPFN v2 performs just as well on datasets with or without these. It also handles outliers and uninformative features naturally, problems that often throw off standard neural nets. TabPFN v2 performs as well with half the data as the next best baseline (CatBoost) with all the data. We also compared TabPFN to the SOTA AutoML system AutoGluon 1.0. Standard TabPFN already outperforms AutoGluon on classification and ties on regression, but ensembling multiple TabPFNs in TabPFN v2 (PHE) is even better. There are some limitations: TabPFN v2 is very fast to train and does not require hyperparameter tuning, but inference is slow. The model is also only designed for datasets up to 10k data points and 500 features. While it may perform well on larger datasets, it hasn't been our focus. We're actively working on removing these limitations and intend to release new versions of TabPFN that can handle larger datasets, have faster inference and perform in additional predictive settings such as time-series and recommender systems. We would love for you to try out TabPFN v2 and give us your feedback!
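For context, here is a minimal sketch of what using TabPFN looks like through its scikit-learn-style Python interface. The import path and defaults are assumptions based on the public tabpfn package rather than this post, so check the official docs before relying on them.

```python
# Minimal sketch of TabPFN usage on a small tabular dataset (sklearn-style API).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()           # no hyperparameter tuning required
clf.fit(X_train, y_train)          # "fit" stores the in-context training set
proba = clf.predict_proba(X_test)  # predictive distribution per test point
print("accuracy:", (proba.argmax(axis=1) == y_test).mean())
```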
Wednesday, January 8, 2025
New top story on Hacker News: Show HN: Counting Tap Toy
Show HN: Counting Tap Toy
8 by memalign | 1 comments on Hacker News.
Hi HN! This is a project I made for my 3-year-old who always skips “14” when counting. In Counting Tap Toy, you can tap to count various aquatic creatures. The count is displayed and announced. My hope is that seeing and hearing the numbers will reinforce 14’s existence. I find tapping all the fish while listening to the songs and popping sound effects to be pretty relaxing too. Technical details: https://ift.tt/RLapCjw This is the fourth “Tap Toy”, joining: - Slice: https://ift.tt/bMwXLdE - Fireworks: https://ift.tt/SM8gJWo - Original: https://ift.tt/ZQmcOoe
Tuesday, January 7, 2025
Monday, January 6, 2025
Sunday, January 5, 2025
Saturday, January 4, 2025
New top story on Hacker News: Show HN: Open Rewind – POC for audio and screen and video streaming to S3
Show HN: Open Rewind – POC for audio and screen and video streaming to S3
8 by wwoessi | 0 comments on Hacker News.
Got into a rabbit hole today. POC works using 'npx efficient-recorder'. Is this useful to anyone?
Friday, January 3, 2025
New top story on Hacker News: Show HN: I'm tired of sharing code using PasteBin and Slack, so I made this
Show HN: I'm tired of sharing code using PasteBin and Slack, so I made this
19 by moeen-mahmud | 17 comments on Hacker News.
Hey developers, I think we're all tired of copying and pasting our code and sharing links using PasteBin, GitHub Gist, or Slack. What if you could share code right from your favorite editor, without copying links around? That was the motivation for creating TurboGist. Right now it's still in the MVP stage, and I'm trying to gather feedback from developers like you. It's available as a beta on the VS Code Marketplace. Could you check it out? It would help me a lot. You don't need to pay a penny; it's 100% free. I'm also working on a self-hosted option and on making it a better alternative to PasteBin or GitHub Gist. Looking for your input on: - How would this fit your workflow? - Must-have features or integrations (e.g., GitHub Gist, PasteBin, etc.)? - Pain points in your current code-sharing process? - What features do you have in mind? Thanks for reading this.
Thursday, January 2, 2025