Show HN: Any-LLM – lightweight and open-source router to access any LLM Provider
27 by AMeckes | 11 comments on Hacker News.
We built any-llm because we needed a lightweight router for LLM providers with minimal overhead. Switching between models is just a string change: update "openai/gpt-4" to "anthropic/claude-3" and you're done. It uses official provider SDKs when available, which helps since providers handle their own compatibility updates. No proxy or gateway service is needed either, so getting started is straightforward: just pip install and import. It currently supports 20+ providers including OpenAI, Anthropic, Google, Mistral, and AWS Bedrock. Would love to hear what you think!
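The "provider/model" string convention can be illustrated with a minimal routing sketch. This is a hypothetical illustration of the idea, not any-llm's actual internals; the registry and handler names are invented for the example.

```python
# Sketch of provider-prefix routing: a model string like "openai/gpt-4"
# is split into a provider key and a model name, then dispatched to that
# provider's SDK wrapper. Hypothetical names, not any-llm's real API.

def split_model_string(model: str) -> tuple[str, str]:
    """Split "provider/model" into its two parts."""
    provider, _, name = model.partition("/")
    if not name:
        raise ValueError(f"expected 'provider/model', got {model!r}")
    return provider, name

# Hypothetical registry mapping a provider prefix to a callable that
# would wrap that provider's official SDK.
PROVIDERS = {
    "openai": lambda model, messages: f"[openai sdk] {model}",
    "anthropic": lambda model, messages: f"[anthropic sdk] {model}",
}

def completion(model: str, messages: list[dict]) -> str:
    """Route a request to the right provider based on the model string."""
    provider, name = split_model_string(model)
    if provider not in PROVIDERS:
        raise ValueError(f"unsupported provider: {provider}")
    return PROVIDERS[provider](name, messages)
```

With this shape, swapping "openai/gpt-4" for "anthropic/claude-3" changes only the string; the dispatch and call sites stay identical.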
Monday, July 14, 2025
Cidco MailStation as a Z80 Development Platform (2019)
13 by robin_reala | 1 comment on Hacker News.
Friday, July 11, 2025
Show HN: RULER – Easily apply RL to any agent
2 by kcorbitt | 0 comments on Hacker News.
Hey HN, Kyle here, one of the co-founders of OpenPipe. Reinforcement learning is one of the best techniques for making agents more reliable, and has been widely adopted by frontier labs. However, adoption in the outside community has been slow because it's so hard to implement. One of the biggest challenges when adapting RL to a new task is the need for a task-specific "reward function" (way of measuring success). This is often difficult to define, and requires either high-quality labeled data and/or significant domain expertise to generate. RULER is a drop-in reward function that works across different tasks without any of that complexity. It works by showing N trajectories to an LLM judge and asking it to rank them relative to each other. This sidesteps the calibration issues that plague most LLM-as-judge approaches. Combined with GRPO (which only cares about relative scores within groups), it just works (surprisingly well!). We have a full writeup on the blog, including results on 4 production tasks. On all 4 tasks, small Qwen 2.5 models trained with RULER+GRPO beat the best prompted frontier model, despite being significantly smaller and cheaper to run. Surprisingly, they even beat models trained with hand-crafted reward functions on 3/4 tasks! https://ift.tt/JqXsny3 Repo: https://ift.tt/Ktp08kA
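The pairing described above works because GRPO only uses scores relative to the other trajectories in a group, so an LLM judge that ranks N trajectories against each other is a sufficient signal. A minimal sketch of that group-relative normalization (an illustration of the GRPO idea, not OpenPipe's implementation):

```python
# Group-relative advantage computation in the style of GRPO: only a
# trajectory's score relative to its group matters, so relative rankings
# from a judge (as in RULER) carry all the needed information.
from statistics import mean, pstdev

def group_relative_advantages(scores: list[float]) -> list[float]:
    """Normalize raw judge scores within one group of N trajectories."""
    mu = mean(scores)
    sigma = pstdev(scores)
    if sigma == 0:
        # All trajectories tied: no relative signal, no gradient.
        return [0.0 for _ in scores]
    return [(s - mu) / sigma for s in scores]

# Rankings (best = N ... worst = 1) work directly as raw scores, since
# only the relative ordering within the group survives normalization.
```

Note that any monotone rescaling of the judge's scores leaves the ordering of advantages unchanged, which is why calibration of the judge's absolute scale stops mattering.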
Monday, July 7, 2025
New top story on Hacker News: Show HN: I Got Tired of Calculator Sites, So I Built My Own
Show HN: I Got Tired of Calculator Sites, So I Built My Own
8 by calculatehow | 7 comments on Hacker News.
I’ve always found that online calculators tend to have bad UIs, especially on mobile. Most of the calculator websites I’ve come across use outdated and inconvenient ways of inputting data, or they format the results in confusing ways. I’ve noticed that fraction calculators (especially mixed fractions) are terrible to use, even on desktop. I haven’t built one of those yet, but it’s something I’m planning to tackle soon. This is a project I’ve always wanted to work on, but I’m relatively new to this space. So far, I’ve created a collection of simple calculators focused on math and finance. I’d really appreciate any feedback on the UI/UX or anything else you think could be improved. You can try it here: https://ift.tt/Qxw0Ujy
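The mixed-fraction arithmetic those calculators tend to fumble is straightforward to do exactly with Python's standard library; a small sketch of the core computation (helper names are invented for the example):

```python
# Exact mixed-fraction arithmetic with the standard library: the kind of
# computation a mixed-fraction calculator has to get right.
from fractions import Fraction

def mixed(whole: int, num: int, den: int) -> Fraction:
    """Build an exact fraction from a mixed number like 2 3/4."""
    sign = -1 if whole < 0 else 1
    return Fraction(whole) + sign * Fraction(num, den)

def to_mixed(f: Fraction) -> tuple[int, int, int]:
    """Decompose a fraction back into (whole, numerator, denominator)."""
    whole, rem = divmod(abs(f.numerator), f.denominator)
    sign = -1 if f < 0 else 1
    return sign * whole, rem, f.denominator

# 2 3/4 + 1 1/2 reduces exactly, with no floating-point rounding.
total = mixed(2, 3, 4) + mixed(1, 1, 2)
```

Keeping everything in `Fraction` until the final display step avoids the rounding artifacts that make float-based fraction calculators confusing.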