Wednesday, March 27, 2024

New top story on Hacker News: Show HN: I built an interactive plotter art exhibit for SIGGRAPH

Show HN: I built an interactive plotter art exhibit for SIGGRAPH
6 by cosiiine | 0 comments on Hacker News.
I'm enthralled with using pen plotters to make generative art. Last August at SIGGRAPH, I built an interactive experience that lets others see how code can be used to make visual art. The linked blog post describes my trials and tribulations in linking a MIDI controller to one of these algorithms and sending its output to a plotter, so that people can witness the end-to-end experience.
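The core of wiring a MIDI controller to a generative algorithm is mapping 7-bit knob values onto algorithm parameters. A minimal sketch in Python; the function name and parameter ranges are my own illustrative choices, not from the post:

```python
# Hypothetical sketch: mapping a MIDI controller knob (CC values 0-127)
# onto a parameter of a generative-art algorithm before plotting.

def cc_to_param(cc_value: int, lo: float, hi: float) -> float:
    """Linearly map a 7-bit MIDI CC value onto the range [lo, hi]."""
    cc_value = max(0, min(127, cc_value))  # clamp to the valid CC range
    return lo + (hi - lo) * (cc_value / 127.0)

# e.g. a knob might control the noise amplitude of the algorithm
amplitude = cc_to_param(64, 0.0, 2.0)
```

In a real setup a library such as `mido` would deliver the CC events; the mapping itself stays this simple.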

Monday, March 25, 2024

New top story on Hacker News: ZenHammer: Rowhammer Attacks on AMD Zen-Based Platforms

ZenHammer: Rowhammer Attacks on AMD Zen-Based Platforms
19 by transpute | 4 comments on Hacker News.

New top story on Hacker News: Show HN: Tracecat – Open-source security alert automation / SOAR alternative

Show HN: Tracecat – Open-source security alert automation / SOAR alternative
16 by neochris | 7 comments on Hacker News.
Hi HN, we are building Tracecat ( https://tracecat.com/ ), an open-source automation platform for security alerts. Tracecat automates the tasks a security analyst has to do when responding to a security alert: e.g. contact victims, investigate security logs, report vulnerabilities.

The average security analyst deals with 100 alerts per day. As soon as an alert comes in, you have to investigate and respond. An average alert takes ~30 minutes to analyze (and 100 x 30 min = 50 hours, far more than one working day). Lots of things get dropped, and this creates vulnerabilities. Many breaches can be traced back to week-old alerts that didn't get properly investigated.

Since the risks and costs are so high, top security teams currently pay Splunk SOAR $100,000/year to help automate alert processing. It's a click-and-drag workflow builder with webhooks, REST API integrations, and JSON processors. A security engineer would use it to build alert automations that look like this: (1) a webhook receives an alert (e.g. an unusual PowerShell command) from Microsoft Defender; (2) send a yes/no Slackbot prompt to ask the employee about the alert; (3) if confirmed as suspicious, send the malware sample to VirusTotal for a report; (4) collect evidence from the previous steps and dump it into a ticket.

If $100k a year seems wildly expensive for a Zapier-like platform, you'd be half right. Splunk SOAR is actually a Zapier + log search + Jira ticketing system. Log storage is how Splunk turns a $99/month workflow automation tool into a pricey enterprise product. Every piece of evidence collected (e.g. Slackbot response, malware report, GeoIP enrichment) and every past workflow trail has to be searchable by a human incident responder or auditor. Security teams need to know why each alert did or didn't escalate to a SEV1.

My cofounder and I are data engineers who fell into this space.
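The four-step automation described above can be sketched as plain Python. All function names and payload fields here are hypothetical stand-ins, not Tracecat's actual API:

```python
# Hedged sketch of the four-step alert workflow described above.
# Every name and field is illustrative; external calls are stubbed.

def receive_alert(webhook_payload: dict) -> dict:
    # (1) webhook receives an alert, e.g. an unusual PowerShell command
    return {"alert_id": webhook_payload["id"], "host": webhook_payload["host"]}

def ask_employee(alert: dict) -> bool:
    # (2) yes/no Slackbot prompt; stubbed as "confirmed suspicious"
    return True

def virustotal_report(alert: dict) -> dict:
    # (3) if confirmed suspicious, submit the sample for a report (stubbed)
    return {"alert_id": alert["alert_id"], "verdict": "malicious"}

def open_ticket(evidence: list) -> dict:
    # (4) collect evidence from previous steps and dump it into a ticket
    return {"ticket": "SEC-1", "evidence": evidence}

alert = receive_alert({"id": "a-42", "host": "laptop-7"})
evidence = [alert]
if ask_employee(alert):
    evidence.append(virustotal_report(alert))
ticket = open_ticket(evidence)
```

A SOAR platform adds the parts this sketch omits: retries, audit trails, and searchable storage of every intermediate result.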
We heard our security friends constantly complain about being priced out of a SOAR (security orchestration, automation, and response) platform like Splunk SOAR. We both wrote a lot of event-driven code at school (Master's thesis) and at work (Meta / PwC). We're also early adopters of Quickwit / Tantivy, an OSS alternative to Elasticsearch / Apache Lucene that is cheaper and faster. It didn't seem that difficult to build a cheaper open-source SOAR, so we decided to do it.

Tracecat is also different in that it can run on a single VM / laptop. Splunk SOAR and Tines are built for Fortune 10 needs, which means expensive Kubernetes clusters. Most security teams don't need that scale, but are forced to pay the K8s "premium" (high complexity, hard to maintain). Tracecat uses OSS embedded databases (SQLite) and an event processing engine we built using Python 3.12 asyncio.

So far we've just got a bare-bones alpha, but you can already do quite a few things with it: trigger event-driven workflows from webhooks; use REST API integrations; parse responses using JSONPath; control flow using conditional blocks; store logs cheaply in Tantivy; open cases directly from workflows; prioritize and manage cases in a Jira-like table. Tracecat uses Pydantic V2 for fast input / output validation and Zod for fast form validation. We care a lot about data quality! It's also Apache-2.0 licensed, so anyone can self-host the platform.

On our roadmap: integrations with popular security tools (CrowdStrike, Microsoft Defender); pre-built workflows (e.g. investigating phishing emails); better docs; more AI features like auto-labeling tickets, extracting data from unstructured text, etc.

We're still early, so we would love your feedback and opinions. Feel free to try us out or share it with your security friends. We have a cloud version up and running: https://ift.tt/zQjSXUa . Dear HN readers, we'd love to hear your incident response stories and the software you use (or not) to automate the work.
Stories from security, site reliability engineering, or even physical systems like critical infrastructure monitoring are all very welcome!
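As a rough illustration of the single-process, asyncio-based event engine the post mentions, here is a minimal sketch: webhook events land on a queue and each one triggers a workflow concurrently. Purely illustrative; this is not Tracecat's code:

```python
import asyncio

# Minimal sketch of an asyncio event-processing loop: events are queued,
# and each event spawns a concurrent workflow task. Names are illustrative.

async def run_workflow(event: dict, results: list) -> None:
    await asyncio.sleep(0)  # stand-in for real I/O (API calls, log writes)
    results.append(f"handled:{event['id']}")

async def engine(events: list) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    for ev in events:
        queue.put_nowait(ev)
    results: list = []
    # drain the queue, one concurrent task per event
    tasks = [asyncio.create_task(run_workflow(queue.get_nowait(), results))
             for _ in range(queue.qsize())]
    await asyncio.gather(*tasks)
    return results

print(asyncio.run(engine([{"id": 1}, {"id": 2}])))
```

Because everything runs in one process on one event loop, this style fits the "single VM / laptop" deployment the post describes, with no cluster required.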

New top story on Hacker News: Hotel Hotspot Hijinks

Hotel Hotspot Hijinks
14 by oalders | 8 comments on Hacker News.

New top story on Hacker News: Bump Allocation: Up or Down?

Bump Allocation: Up or Down?
25 by celeritascelery | 0 comments on Hacker News.

Thursday, March 21, 2024

New top story on Hacker News: Launch HN: Soundry AI (YC W24) – Music sample generator for music creators

Launch HN: Soundry AI (YC W24) – Music sample generator for music creators
16 by kantthpel | 5 comments on Hacker News.
Hi everyone! We’re Mark, Justin, and Diandre of Soundry AI ( https://soundry.ai/ ). We provide generative AI tools for musicians, including text-to-sound and infinite sample packs.

We (Mark and Justin) started writing music together a few years ago but felt limited in our ability to create anything we were proud of. Modern music production is highly technical and requires knowledge of sound design, tracking, arrangement, mixing, mastering, and digital signal processing. Even with our technical backgrounds (in AI and cloud computing respectively), we struggled to learn what we needed to know. The emergence of latent diffusion models was a turning point for us, just like for many others in tech. All of a sudden it was possible to leverage AI to create beautiful art. After meeting our cofounder Diandre (half of the DJ duo Bandlez and an expert music producer), we formed a team to apply generative AI to music production.

We began by focusing on generating music samples rather than full songs. Focusing on samples gave us several advantages, but the biggest was the ability to build and train our custom models very quickly, due to the short length of the generated audio (typically 2-10 seconds). Conveniently, our early text-to-sample model also fit well within many existing music producers’ workflows, which often involve heavy use of samples.

We ran into several challenges when creating our text-to-sound model. The first was that we began by training our latent transformer (similar to OpenAI’s Sora) using off-the-shelf audio autoencoders (like Meta’s EnCodec) and text embedders (like Google’s T5). The domain gap between the data used to train these off-the-shelf models and our sample data was much greater than we expected, which caused us to misattribute blame for issues among the three model components (latent transformer, autoencoder, and embedder) during development.
To see how musicians can use our text-to-sound generator to write music, check out the demo: https://www.youtube.com/watch?v=MT3k4VV5yrs&ab_channel=Sound...

The second issue was more on the product design side. When we spoke with our users in depth, we learned that novice music producers had no idea what to type into the prompt box, and expert music producers felt that our model’s output wasn’t always what they had in mind when they typed in their prompt. It turns out that text is much better at specifying the contents of visual art than of music. This particular issue is what led us to our new product: the Infinite Sample Pack.

The Infinite Sample Pack does something rather unconventional: prompting with audio rather than text. Rather than requiring you to type out a prompt and specify many parameters, all you need to do is click a button to receive new samples. Each time you select a sound, our system embeds “prompt samples” as input to our model, which then creates infinite variations. By limiting the number of possible outputs, we’re able to hide inference latency by pre-computing lots of samples ahead of time. This new approach has seen much wider adoption, so this month we’ll be opening the system up so that everyone can create Infinite Sample Packs of their very own! To compare the workflows of the two products, check out our new demo using the Infinite Sample Pack: https://www.youtube.com/watch?v=BqYhGipZCDY&ab_channel=Sound...

Overall, our founding principle is to start by asking: "what do musicians actually want?" Meta's open-sourcing of MusicGen has resulted in many interchangeable text-to-music products, but ours is embraced by musicians.
By maintaining an open dialogue with our users, we’ve been able to satisfy many needs, including the ability to specify BPM and key, one-shot instrument samples (so musicians can write their own melodies), and drag-and-drop support for digital audio workstations via our desktop app and VST. To hear some of the awesome songs made with our product, take a listen to our community showcases below! https://ift.tt/mylEpBt We hope you enjoy our tool, and we look forward to the discussion in the comments.
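The latency-hiding trick described above (a limited set of prompt samples makes pre-computation feasible) can be illustrated with a tiny pool abstraction. All names here are my own, not Soundry's implementation:

```python
import random

# Sketch of hiding inference latency by pre-computing variations: because
# the set of "prompt samples" is limited, outputs can be generated ahead
# of time and served instantly from a pool. Purely illustrative.

random.seed(0)

def generate_variation(prompt_sample: str) -> str:
    # stand-in for the (slow) generative model
    return f"{prompt_sample}-var{random.randint(0, 9999)}"

class SamplePool:
    def __init__(self, prompt_samples, pool_size=8):
        # precompute a pool of variations per prompt sample, ahead of time
        self.pool = {p: [generate_variation(p) for _ in range(pool_size)]
                     for p in prompt_samples}

    def next_sample(self, prompt_sample: str) -> str:
        # serve instantly; a real system would refill in the background
        return self.pool[prompt_sample].pop()

pool = SamplePool(["kick", "snare"])
print(pool.next_sample("kick"))
```

The user-facing click then costs a dictionary lookup rather than a model inference.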

New top story on Hacker News: Array Languages: R vs. APL

Array Languages: R vs. APL
6 by todsacerdoti | 0 comments on Hacker News.

Monday, March 18, 2024

New top story on Hacker News: Ravi is a dialect of Lua, with JIT and AOT compilers

Ravi is a dialect of Lua, with JIT and AOT compilers
10 by InitEnabler | 0 comments on Hacker News.

New top story on Hacker News: Show HN: Fake or real? Try our AI image detector

Show HN: Fake or real? Try our AI image detector
11 by aymandfire | 30 comments on Hacker News.
Hey HN! We're Ayman and Dylan, co-founders of Nuanced ( https://ift.tt/98YZERs ). We want to share a tool we’re working on to detect fake and real images: https://ift.tt/gRTYn89 . The UI is bare-bones, but you’ll get the idea. Drag or upload an image and our tool will display the probability with which it thinks the image is AI-generated. If you want, you can click “No, it’s AI” to confirm that the image was AI-generated, or “No, it’s real” to confirm that it was not.

Why we’re working on this: as AI-generated images continue to blur the line between real and artificial, and as their adoption and quality rise, so too does the risk of fraud and misinformation. Not being able to trust what you see online threatens whatever level of "realness" or authenticity online material has. Companies like dating apps, news sites, and trust and safety teams have a growing need to distinguish AI-generated images from authentic ones.

Our models are trained on output from various generators, such as DALL-E 3, Midjourney, and SDXL, with continuous integration of data from the latest AI image generators. Our technology can detect deepfakes and verify user profile images, documents, IDs, or media images. Additionally, it can detect fake or counterfeit products, services, or experiences being marketed on e-commerce platforms.

We hope it’s fun and would be very interested in any cases it gets wrong, as well as whatever else you’d like to ask or say!

Sunday, March 17, 2024

New top story on Hacker News: Compressing Images with Neural Networks

Compressing Images with Neural Networks
18 by skandium | 1 comment on Hacker News.

New top story on Hacker News: Show HN: Interactive Smartlog VSCode Extension – An Interactive Git GUI

Show HN: Interactive Smartlog VSCode Extension – An Interactive Git GUI
10 by tnesbitt210 | 1 comment on Hacker News.
Interactive Smartlog is a graphical VSCode extension that presents a simplified view of the Git log, directly highlighting the branches and commits that are most relevant to your current work. And it's not just a visual tool — it's fully interactive, allowing you to add/switch/remove branches, stage/unstage files, and manage commits directly from the GUI. This tool draws inspiration from Meta's Interactive Smartlog built for the Sapling source control system, and I've adapted it to work with Git. Transitioning the functionality from Sapling to Git wasn't just about a one-to-one feature transfer; it involved changing how data is queried & presented, as well as introducing UI interactions for several Git concepts (like branches, staging/unstaging changes, etc) which are not present in the Sapling source control system. Originally a personal project to enhance my own workflow, I've published the extension on the VSCode marketplace for anyone who would like to use it. I'm keen to hear your feedback and suggestions, as community input is invaluable in shaping its future updates.

Monday, March 11, 2024

New top story on Hacker News: Who uses Google TPUs for inference in production?

Who uses Google TPUs for inference in production?
16 by arthurdelerue | 2 comments on Hacker News.
I am really puzzled by TPUs. I've been reading everywhere that TPUs are powerful and a great alternative to NVIDIA. I have been playing with TPUs for a couple of months now, and to be honest I don't understand how people can use them in production for inference:

- almost no resources online showing how to run modern generative models like Mistral, Yi 34B, etc. on TPUs
- poor compatibility between JAX and PyTorch
- very hard to understand the memory consumption of the TPU chips (no nvidia-smi equivalent)
- rotating IP addresses on TPU VMs
- almost impossible to get my hands on a TPU v5

Is it only me? Or did I miss something? I totally understand that TPUs can be useful for training, though.

Thursday, March 7, 2024

New top story on Hacker News: Gabriel García Márquez Wanted to Destroy His Last Novel. It's Being Published

Gabriel García Márquez Wanted to Destroy His Last Novel. It's Being Published
6 by lermontov | 2 comments on Hacker News.

New top story on Hacker News: Launch HN: SiLogy (YC W24) – Chip design and verification in the cloud

Launch HN: SiLogy (YC W24) – Chip design and verification in the cloud
10 by pkkim | 0 comments on Hacker News.
Hi everyone! We’re the cofounders of SiLogy ( https://silogy.io/ ). We’re building chip design and verification tools to speed up the semiconductor development cycle. Here's a demo: https://www.youtube.com/watch?v=u0wAegt79EA

Interest in designing new chips is growing, thanks to demand from AI and the predicted decline of Moore’s Law. All these chips need to be tested in simulation. Since the number of possible states grows exponentially with chip complexity, the need for verification is exploding. Chip developers already spend 70% of their time on testing. (See this video on the “verification gap”: https://www.youtube.com/watch?v=rtaaOdGuMCc ).

Tooling hasn’t kept up. The state of the art in collaborative debugging is to walk to a coworker’s desk and point to an error in a log file or waveform file. Each chip company rolls its own tooling and infra to deal with this; it was the entire job of Kay (one of our cofounders) at his last gig. But they want to work on chips, not devtools! The solutions they come up with are often inadequate and frustrating. That’s why we started SiLogy.

SiLogy is a web app to manage the entire digital verification workflow. (“Digital verification” means testing the logic of the design; it includes everything before the physical design of the chip and is the most time-consuming stage in verification.) We combine three capabilities:

Test orchestration and running: The heart of our product is a CI tool that runs Verilator, a popular open-source simulator, in a Docker container. When you push to your repo or manually trigger a job in the UI, we install your dependencies, compile your binaries into a Docker image, and run your tests. You can also rerun a single test with custom arguments from the UI.

Test results and statistics: We display logs from each test in the web app. We’re working on displaying waveform files in the app, too. We also keep track of passing and failing tests within each test suite, and we’re working on slick visualizations of test trends, to keep managers happy. :)

Collaboration: Soon you’ll be able to send a link to, and leave a comment on, a specific location within a log or waveform file, just like in Google Docs.

Unlike generic CI tools, we focus on tight integration with verification workflows. When an assertion fails, we show you the source code where it happened. We’re hard at work on waveform viewing; soon you’ll be able to generate waves from a failing test with the click of a button. Our roadmap includes support for the major commercial simulators: VCS, Xcelium, and Questa. We’re also working on a test-gen framework based on Buck2 to statically declare tests for your post-commit runs, or programmatically generate thousands of tests for nightly regressions.

We plan to sell seats, with discounts for individuals, startups, and research labs (we’re working on pricing). For now, we’re opening up guest registration so HN can play with what we hope is the future of design verification. We owe so much of what we know to this community and we’d be so grateful for any feedback. <3 You can sign up here; just press "Use guest email address" if you don't want to give up your email: https://dash.silogy.io/signup/
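The CI step described above (running a compiled Verilator simulation inside a Docker container) can be sketched as command construction. The image tag, mount paths, and binary name below are my own illustrative assumptions, not SiLogy's actual setup:

```python
import shlex

# Hypothetical sketch of a CI runner step: execute a compiled Verilator
# simulation binary inside a Docker container. All paths and the image
# tag are illustrative assumptions.

def docker_verilator_cmd(repo_dir: str, test_binary: str, extra_args=()) -> list:
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{repo_dir}:/work",         # mount the checked-out repo
        "-w", "/work",
        "verilator/verilator:latest",      # hypothetical image tag
        f"./{test_binary}", *extra_args,   # compiled simulation binary
    ]
    return cmd

print(shlex.join(docker_verilator_cmd("/tmp/chip", "obj_dir/Vtop", ["+seed=42"])))
```

Rerunning a single test with custom arguments, as the post describes, amounts to reissuing this command with different `extra_args`.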

Wednesday, March 6, 2024

New top story on Hacker News: Show HN: My first programming project – userscripts to change forum UIs

Show HN: My first programming project – userscripts to change forum UIs
9 by willthereader | 5 comments on Hacker News.
Hi, I'm Will. I'm 24, autistic, and have OCD tendencies. I'm learning to code and this is my first public project. I’d really appreciate your feedback and encouragement!

This project lets me solve some of my OCD problems online. There are a couple of parts of the forums I visit (SpaceBattles, Sufficient Velocity, and Questionable Questing) that I want to remove. Specifically, I hate seeing indicators of how much is left in a forum thread, because I keep thinking about how much content remains. It stops me from immersing myself in the story. It stresses me out. Before I learned to code, I'd use my hand to block the total chapter count so I could read the blurb and see the word count. I would do my best to ignore the page navigation bar except for the next-page button, but I usually ended up failing. One of the reasons I always read in full-screen Safari is that I didn't have to see the tab name, which always showed the page number. I learned not to hover my cursor over the window because it would tell me the page number.

This project is a series of userscripts that hide those indicators. I wrote the userscripts in JavaScript, and I used https://ift.tt/UWne3yg to run them. Although I didn't know what a userscript was until I started writing them, AI assistance allowed me to code them with minimal help from my brother, Stevie. Khanmigo helped me plan, write, and debug code. ChatGPT taught me the theory. Part of the reason I coded a lot faster with the later userscripts is that I knew enough to recognize when AI was talking about something irrelevant and redirect it. One cool moment was when I correctly predicted that I didn't need separate userscripts for SpaceBattles and Sufficient Velocity, because Sufficient Velocity used to be part of SpaceBattles.

I find it relaxing not to have to worry about accidentally seeing the chapter count or the final page number. Maybe these scripts will help one of you!

Monday, March 4, 2024

New top story on Hacker News: Show HN: Workflow Orchestrator in Golang

Show HN: Workflow Orchestrator in Golang
4 by harshadmanglani | 0 comments on Hacker News.
A brief overview: 1. Workflows steps share a running context, with access to data they need require. 2. Steps in the workflow (builders) are chained together based on a topologically sorted built from the predefined input & output. 3. No servers spin up (like Conductor/Cadence) - the orchestrator is low level and meant for simplifying business logic. 4. Before/After listeners for each step. Would love to hear your thoughts and feedback!