智能助手网
Tag aggregation: Source


hnrss.org · 2026-04-18 20:17:13+08:00 · tech

150 applications. One offer. Each application took 5+ manual steps. Separate tools, separate tabs, separate sites — none of them talking to each other. Generic output. Over an hour per application. Paste a job description — or pull it from any job site with the Chrome extension — and five AI agents run an orchestrated pipeline in under 30 seconds: analyzing the role, scoring your fit, researching the company, writing a targeted cover letter, and tailoring your resume to the role. Sequential where it needs to be, parallel where it can be, each agent's output feeding the next. Also includes a dashboard to track every application. And tools for everything around it: interview prep with mock sessions, salary negotiation, job comparison, follow-ups, thank you notes, and references. Runs on your machine. No subscriptions, no data stored on our servers — just your own Gemini API key connecting directly to Google. Comments URL: https://news.ycombinator.com/item?id=47815326 Points: 1 # Comments: 0
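For illustration, the sequential-then-parallel orchestration described above can be sketched with asyncio. This is a minimal sketch under stated assumptions: the agent names are stand-ins, and `run_agent` is a placeholder for the real Gemini API calls the product makes.

```python
import asyncio

async def run_agent(name: str, *inputs: str) -> str:
    # Stand-in for an LLM call; the real agents would prompt Gemini here.
    await asyncio.sleep(0)  # simulate network I/O
    return f"{name}({', '.join(inputs)})"

async def pipeline(job_description: str) -> dict:
    # Sequential where it needs to be: role analysis comes first.
    analysis = await run_agent("analyze_role", job_description)
    # Parallel where it can be: fit scoring and company research are independent.
    fit, research = await asyncio.gather(
        run_agent("score_fit", analysis),
        run_agent("research_company", job_description),
    )
    # Each agent's output feeding the next: the writers consume earlier results.
    letter, resume = await asyncio.gather(
        run_agent("write_cover_letter", fit, research),
        run_agent("tailor_resume", analysis, fit),
    )
    return {"letter": letter, "resume": resume}

result = asyncio.run(pipeline("Senior Backend Engineer at Acme"))
print(result["letter"])
```

The `asyncio.gather` calls are what keep the wall-clock time low: only the stages with a true data dependency run in sequence.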

linux.do · 2026-04-18 15:25:23+08:00 · tech

TechCrunch – 17 Apr 26: Sources: Cursor in talks to raise $2B+ at $50B valuation as enterprise growth... Returning backers a16z and Thrive are expected to lead the round. Est. reading time: 2 minutes. Quote: According to four people familiar with the matter, AI coding startup Cursor is close to completing a new funding round; the four-year-old company expects to raise at least $2 billion in new capital. The sources said existing investors Thrive and Andreessen Horowitz are expected to lead the round, which would value Cursor at $50 billion ahead of the new financing. https://www.bloomberg.com/news/articles/2026-04-17/ai-coding-startup-cursor-in-talks-to-raise-2-billion-in-funding 1 post, 1 participant. Read the full topic.

hnrss.org · 2026-04-18 11:09:05+08:00 · tech

devnexus is an open-source CLI that gives agents persistent shared memory across repos, sessions, and engineers. It maps dependencies and relations at the function level, builds a code graph, and writes it into a shared Obsidian vault that every agent reads before writing code. Past decisions are also linked directly to the code they touched, so no one goes down the same dead end twice. Still building it out, but I'd love to hear any thoughts or feedback. Comments URL: https://news.ycombinator.com/item?id=47812829 Points: 4 # Comments: 0
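A hypothetical sketch of the core idea (devnexus's actual graph format isn't shown in the post): extract function-level call dependencies from source, which could then be written out as linked notes in a shared vault.

```python
import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict:
    """Map each top-level function to the names it calls directly."""
    graph = defaultdict(set)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                # Only plain-name calls; attribute calls (x.y()) are skipped
                # in this simplified sketch.
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return dict(graph)

code = '''
def parse(raw):
    return raw.split()

def load(path):
    return parse(open(path).read())
'''
print(build_call_graph(code))  # e.g. {'load': {'parse', 'open'}}
```

A real tool would resolve imports and attribute calls across files, but even this per-file graph shows the shape of the data an agent could read before editing code.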

hnrss.org · 2026-04-17 20:38:48+08:00 · tech

Hey HN, recently I wrote an open-source Z-machine ( https://github.com/techbelly/elm-zmachine ) to support a course I'm teaching about interpreters and functional programming. Once I'd done that, I just had to make my own client. Partly, I wanted to enjoy playing the games I played when I was a kid. Partly, I just wanted to give my Z-machine a real test and see what kind of things I could build with access to the internals of the VM. Those old games could be super-frustrating, especially the ones that teach you how to play by killing you over and over again - looking at you, Infidel. And while I used to sit and play for hours at a time, these days I only have a few minutes here and there. So, in Planedrift, every time you move, the full transcript and game state are snapshotted to localStorage. You can close the tab mid-game and come back to exactly where you were, or use the history list to jump back in time. The idea is to make it easy to pick up a game for ten minutes and then put it down again. I'm no designer, but I've done my best to make it pleasant to look at. Behind the scenes it's written in Elm - which I know is not everyone's first choice, but it works for me! It only supports .z3 files at the minute, with .z5 support in progress. I've bundled the three publicly available Zorks, but you can bring your own .z3 file from one of the online archives. I'm thinking of adding more comprehensive note-taking, maybe auto-mapping and transcript search; I'm also playing with some plug-in ideas and, of course, dark mode! What do you think? What features should I prioritize? Ultimately, I hope you play some old Infocom games with Planedrift and enjoy it. Comments URL: https://news.ycombinator.com/item?id=47805289 Points: 2 # Comments: 0

hnrss.org · 2026-04-17 19:39:46+08:00 · tech

Hi HN, I built NotchPrompter because I needed a simple way to read notes while looking at the camera during calls, without heavy or paid software. - 100% free & open-source - native macOS (SwiftUI) - minimalist - focuses on the essentials. Feedback and contributions are more than welcome! PS No, I didn't use AI for it. I always wanted to play with SwiftUI and this is my 6th approach to this. Previous projects were too complex for my beginner skills. I'm mainly a Java developer. It took me ~5 months to build this during free weekends. Comments URL: https://news.ycombinator.com/item?id=47804818 Points: 1 # Comments: 0

linux.do · 2026-04-17 18:00:02+08:00 · tech

Source: Cheat Sheet — claude.nagdy.me

1. Slash Commands
- /help: Show available commands and usage hints
- /clear: Start a fresh conversation; CLAUDE.md instructions stay active
- /compact: Summarize the conversation to reduce context usage; accepts focus instructions
- /context: Show context window usage with a visual breakdown of token allocation
- /diff: Open an interactive viewer for uncommitted changes
- /model: Switch between Sonnet, Opus, and Haiku mid-session
- /cost: Show session token usage and estimated cost
- /status: Show current version, model, and account info
- /doctor: Run a health check on your Claude Code installation
- /init: Scan your project and generate a starter CLAUDE.md
- /memory: View and edit CLAUDE.md memory files (global, project, auto)
- /review: Review code changes on the current branch with suggestions
- /permissions: View and manage tool permissions; configure in .claude/settings.json
- /config: Open Claude Code configuration and settings
- /login: Switch Anthropic accounts
- /branch: Fork the conversation into a parallel branch to explore alternatives
- /rewind: Roll back to a previous message and undo file changes after that point
- /resume: Resume a previously saved session by name or ID
- /rename: Rename the current session for easier recall later
- /export: Export the conversation to a markdown file
- /effort: Set reasoning depth: low, medium, high, or max
- /plan: Enter planning mode: Claude researches first, then presents a plan for approval
- /btw: Ask a side question without adding it to conversation history
- /batch: Split work across parallel agents in isolated git worktrees
- /loop: Run a task on a recurring interval within your session
- /schedule: Create a cloud-backed scheduled task that runs even when offline
- /debug: Toggle verbose mode to see tool calls and thinking steps
- /simplify: Review recently changed files for code quality, reuse, and efficiency
- /agents: List, create, edit, or remove subagent definitions
- /mcp: Show active MCP server connections and available tools
- /plugin: Manage plugins: install, list, remove, or reload
- /reload-plugins: Hot-reload all plugin files during development
- /sandbox: Enable OS-level isolation for file system and network access

2. Keyboard Shortcuts
- Shift+Tab: Cycle through permission modes: default → plan → acceptEdits → auto
- Option+T / Alt+T: Toggle extended thinking on or off
- Ctrl+C: Cancel the current operation or stop a running command
- Ctrl+D: Exit Claude Code from the terminal
- Ctrl+B: Background a currently running subagent task
- Ctrl+O: Toggle verbose/debug mode (same as /debug)
- Ctrl+G: Open the current plan in an external editor
- Ctrl+K: Open site search on claude.nagdy.me
- Esc: Dismiss the active dialog or current suggestion

3. CLI Flags
- claude -p "prompt": Run a one-shot prompt non-interactively; the foundation for CI/CD integration
- --output-format json: Return structured JSON output, useful for parsing in scripts and pipelines
- --model <name>: Override the default model for a single invocation
- --permission-mode <mode>: Set permission mode: default, plan, acceptEdits, auto, or bypassPermissions
- --sandbox: Enable OS-level isolation for safe automated analysis
- --max-turns <n>: Cap execution at n turns; useful for time-limiting automated runs
- --no-session-persistence: Don't save session data; good for disposable automation tasks
- --resume: Resume the most recent Claude Code session
- --continue: Continue a paused workflow from the current repository
- --agent <name>: Start a session with a specific subagent
- --plugin-dir <path>: Load a plugin for this session only (for testing)
- --bare: Cleanest output for scripted usage; no formatting or decoration
- --worktree: Run in an isolated git worktree for experimental work

4. Key Configuration Files
- CLAUDE.md: Project-level instructions, conventions, and workflow notes; committed to git and shared with the team
- CLAUDE.local.md: Personal overrides for CLAUDE.md; git-ignored, not shared
- .claude/settings.json: Project settings (permissions, hooks, MCP servers); committed to git
- .claude/settings.local.json: Personal project settings; git-ignored, overrides .claude/settings.json
- ~/.claude/CLAUDE.md: Global user instructions that apply to all projects
- ~/.claude/settings.json: Global user settings that apply to all projects
- .claude/skills/: Project-scoped custom skills (SKILL.md files); committed to git
- .claude/agents/: Project-scoped subagent definitions; committed to git
- .claude/rules/*.md: Path-scoped rules; use the frontmatter paths: field to target specific files
- .mcp.json: Project MCP server configuration; committed to git, shared with the team
- ~/.claude.json: User/local MCP server configuration

5. MCP Server Management
- claude mcp add <name> <uri>: Register a new MCP server; supports HTTP and stdio transports
- claude mcp add --header: Add an MCP server with authentication headers
- claude mcp list: List all configured MCP servers with transport and connection status
- claude mcp get <name>: Show details for a specific MCP server
- claude mcp remove <name>: Remove an MCP server configuration
- claude mcp add-from-claude-desktop: Import MCP server configurations from Claude Desktop
- /mcp: Show active connections in-session and trigger OAuth flows
- mcp__server__tool: MCP tools appear namespaced; use them naturally in conversation

6. Hook System
- PreToolUse: Runs before a tool executes; can block the action (exit code 2)
- PostToolUse: Runs after a tool completes; use for formatting, linting, or logging
- UserPromptSubmit: Intercept user input before Claude processes it
- Stop: Runs when Claude finishes responding; check completion criteria
- SubagentStart / SubagentStop: Track subagent lifecycle for orchestration and logging
- Hook types: command (shell), prompt (LLM evaluation), agent (subagent), http (webhook)
- Hook matchers: Filter which tools trigger hooks: exact name, regex, or * for all
- Skill-level hooks: Define hooks in SKILL.md frontmatter; scoped to that skill only

7. Permission Modes
- default: Ask before write/edit/bash operations; Read, Glob, and Grep always allowed
- plan: Research and present plans only; no file modifications until approved
- acceptEdits: Allow file edits without prompting; still ask for Bash commands
- auto: Allow all operations without prompting; use in trusted environments
- bypassPermissions: Skip all safety checks; only for fully automated CI/CD pipelines
- Allow patterns: Pre-approve specific tools in .claude/settings.json
- Deny patterns: Block dangerous operations regardless of permission mode

8. Subagents
- @"agent-name": Invoke a specific agent inline during conversation
- claude --agent <name>: Start a full session with a specific agent from the CLI
- .claude/agents/*.md: Define project-scoped agents with frontmatter for tools, model, and effort
- Built-in agents: general-purpose, Explore (Haiku, read-only), Plan (research first)
- isolation: worktree: Run the agent in an isolated git worktree for safe experimentation
- background: true: Run the agent in the background; use Ctrl+B to background a running agent

9. Plugin Management
- /plugin install <name>: Install a plugin from the official marketplace
- /plugin install github:user/repo: Install a plugin directly from a GitHub repository
- /plugin list: List installed plugins with their skills, agents, and hooks
- /reload-plugins: Hot-reload plugin files during development
- claude --plugin-dir ./path: Load a plugin for one session only (for testing)
- .claude-plugin/plugin.json: Required plugin manifest; declares name, version, author, userConfig
- plugin-name:command: Plugin commands are namespaced to avoid conflicts

10. Common Workflow Combos
- /effort high → /plan → approve → implement: Deep work: set high reasoning, plan first, then execute
- /diff → /cost → /export → /compact: End of session: review changes, check cost, export, then compact
- /batch <instruction>: Large refactors: split work across parallel agents in isolated worktrees
- /loop 5m <check>: Monitoring: poll build status, error logs, or deploy health on an interval
- /branch → experiment → /resume: Exploration: branch the conversation, try an approach, resume if it fails
- echo $DIFF | claude -p "review" --output-format json: CI/CD: pipe diffs into Claude for automated code review with JSON output
- /init → edit CLAUDE.md → commit: Project setup: generate instructions, customize, share with the team

1 post, 1 participant. Read the full topic.
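The hook system entry above says a PreToolUse command hook can block an action by exiting with code 2. A minimal sketch of such a hook in Python, assuming the event arrives as JSON on stdin with tool_name/tool_input fields (the field names are assumptions from common hook conventions, not verified against the cheat sheet):

```python
import json
import sys

def pre_tool_use_exit_code(event: dict) -> int:
    """Decide a PreToolUse hook's exit code: per the cheat sheet,
    2 blocks the tool call and 0 lets it proceed."""
    tool = event.get("tool_name", "")
    command = event.get("tool_input", {}).get("command", "")
    if tool == "Bash" and "rm -rf" in command:
        return 2  # block destructive-looking Bash commands
    return 0

# A real command hook would end with:
#   sys.exit(pre_tool_use_exit_code(json.load(sys.stdin)))
print(pre_tool_use_exit_code(
    {"tool_name": "Bash", "tool_input": {"command": "rm -rf build/"}}
))  # → 2
```

Wired into .claude/settings.json with a matcher on Bash, a script like this runs before every shell command and vetoes the ones it dislikes.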

hnrss.org · 2026-04-17 17:45:10+08:00 · tech

ShadowStrike Phantom is an open-source endpoint protection platform on GitHub. There will be three main product tiers, all built on the ShadowStrike Phantom shared modules (PhantomCore, the PhantomEmulator/disassembler, the PhantomCortex AI/ML models, and the PhantomSensor kernel driver):
- Phantom Home: for home users; a local UI to control the antivirus plus extras such as privacy tools and game mode.
- Phantom EDR: for endpoints; a local web dashboard for community enterprise users, endpoint-specific additional protections, and forensics.
- Phantom XDR: extended detection for endpoints, including SIEM integrations; every related feature will be added to this product.
The Community/Home/EDR/XDR products will be able to work locally on the host machine. We are also planning Phantom Pro and Phantom Enterprise products that will include cloud-based systems, global threat intelligence, and online web threat-intel dashboards for companies. [Of course, we need capital to do these things, so they are part of our long-term plan.] Currently we are mostly mapping the attack surface of ShadowStrike Phantom and fuzzing it with our harnesses, writing integration/unit tests, running Coverity/PVS-Studio scans, working on the product splits and their additional protection features, and chasing bugfixes, security vulnerabilities, and kernel BSODs. Pretty much everything... If you are interested in open-source endpoint detection and response / extended detection and response / antivirus systems, kernel work, and lots of C/C++: GitHub: https://github.com/ShadowStrike-Labs/ShadowStrike Comments URL: https://news.ycombinator.com/item?id=47804144 Points: 1 # Comments: 0

hnrss.org · 2026-04-17 15:47:30+08:00 · tech

Hi HN, This is Tudor from Xata. You can think of Xata as an open-source, self-hosted alternative to Aurora/Neon. Highlight features: - Fast copy-on-write branching. - Automatic scale-to-zero and wake-up on new connections. - 100% vanilla Postgres: we run upstream Postgres, no modifications. - Production grade: high availability, read replicas, automatic failover/switchover, upgrades, backups with PITR, IP filtering, etc. You can self-host it, or you can use our cloud service ( https://xata.io ). Background story: we've existed as a company for almost 5 years, have offered a Postgres service from the start, and have launched several different products and open-source projects here on HN before, including pgroll and pgstream. About a year and a half ago, we started rearchitecting our core platform from scratch. It has been running in production for almost a year now and is serving customers of all sizes, including many multi-TB databases. One of our goals in designing the new platform was to make it cloud-independent, with a careful selection of dependencies. Part of the reason was to be able to offer it in any cloud; the other part is the subject of today's announcement: we wanted it to be open source and self-hostable. Use cases: We think Xata OSS is appropriate for two use cases: - Quickly spin up preview / testing / dev / ephemeral environments with realistic data. We think that for many companies this is a better alternative to seed or synthetic data, and it lets you catch more classes of bugs. Combined with anonymization, especially in the world of coding agents, this is an important safety and productivity enabler. - Offer an internal PGaaS. The alternative we usually see at customers is a Kubernetes operator, but there's more to a Postgres platform than just the operator. Xata is more opinionated and comes with APIs and a CLI. Technical details: We wanted from the start to offer CoW branching and vanilla Postgres. This basically meant doing CoW at the storage layer, underneath Postgres. We tested a bunch of storage systems for performance and reliability and ultimately landed on OpenEBS. OpenEBS is an umbrella project covering multiple storage engines for Kubernetes; the one we use is the replicated storage engine (aka Mayastor). Small side note on separating storage from compute: since the introduction of PlanetScale Metal, there has been a lot of discussion about the performance of local storage. We had these discussions internally as well, and what's nice about OpenEBS is that it supports both: there are local storage engines and over-the-network storage engines. For our purpose of running CoW branches, however, the advantages of the separation are pretty clear: it allows spreading compute across multiple nodes while keeping the storage volumes colocated, which is needed for CoW. So for now the Xata platform is focused on this, but it's entirely possible to run Xata with local storage: it's basically a storage-class change away. Another small side note: while Mayastor is serving us well, and it's what we recommend for OSS installations, we have been working on our own storage engine in parallel (called Xatastor). It is the key to sub-second branching and wake-up times, and we'll release it in a couple of weeks. For the compute layer, we are building on top of CloudNativePG. It's a stable and battle-tested operator covering all the production-grade concerns. We did add quite a lot of services around it, though: our custom SQL gateway, a "branch" operator, control plane and authentication services, etc. The end result is what we think is an opinionated but flexible Postgres platform: higher-level and easier to use than a K8s operator, with a lot of batteries-included goodies. Let us know if you have any questions! Comments URL: https://news.ycombinator.com/item?id=47803480 Points: 2 # Comments: 0

hnrss.org · 2026-04-17 02:20:14+08:00 · tech

I built an open-source research agent. You ask a question, it searches the web via Tavily, synthesizes an answer with an LLM, and shows the sources it used. Answers stream in real-time. The interesting part is the backend. It's a single JS file (~100 lines) that handles web search, LLM streaming, and per-user conversation history. No vector database, no Redis, no separate storage service. It runs inside a cell — an isolated environment with a built-in database, search index, and filesystem. The cell handles persistence and streaming natively, so the agent code only has to deal with the actual logic. Tech: Next.js frontend, Tavily for search, OpenRouter for LLM (Gemini 2.5 Flash default). Demo: https://youtu.be/jvTVA7J925Y Comments URL: https://news.ycombinator.com/item?id=47797393 Points: 6 # Comments: 0
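The search-synthesize-stream loop described above can be sketched in a few lines. This is an illustrative outline only: the stub functions stand in for the real Tavily and OpenRouter calls, and none of the names come from the project's actual code.

```python
def search_web(query: str) -> list:
    # Real version: call Tavily's search API and return ranked sources.
    return [{"url": "https://example.com", "content": f"facts about {query}"}]

def stream_llm(prompt: str):
    # Real version: stream tokens from OpenRouter (e.g. Gemini 2.5 Flash).
    for token in f"Answer drawing on: {prompt}".split():
        yield token + " "

def answer(question: str, history: list) -> str:
    sources = search_web(question)
    context = "\n".join(s["content"] for s in sources)
    prompt = f"History: {history}\nSources: {context}\nQ: {question}"
    out = "".join(stream_llm(prompt))  # in the real app, chunks stream to the client
    history.append(question)           # per-user conversation history, persisted by the cell
    return out

history: list = []
print(answer("What is a Z-machine?", history))
```

The point of the "cell" design is that the persistence and streaming plumbing around this loop is provided by the environment, so the agent file stays close to this size.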

hnrss.org · 2026-04-16 23:32:07+08:00 · tech

Hacker News, Training custom wake words like "Hey Alexa" is often a resource-intensive task, demanding powerful hardware and complex manual tuning. NanoWakeWord is an open-source framework designed to solve this. It features an intelligent engine that automates the ML pipeline, making it possible to build high-performance, production-ready wake word models with minimal effort. What makes it different:
- Train anywhere, on anything: The core architecture is built for extreme efficiency. You can train on massive, terabyte-scale datasets using a standard laptop or even a low-spec machine, all without needing a GPU. This is achieved through memory-mapped files that stream data directly from disk, eliminating RAM limitations.
- Intelligent automation: The framework analyzes your data to automatically configure an optimal model architecture, learning schedule, and training parameters. It removes the guesswork from building a robust model.
- Total flexibility and control: While it automates everything, it also offers deep customization. You can choose from 11+ built-in architectures (from lightweight DNNs to SOTA Conformers) or easily extend the framework to add your own custom architecture. Every parameter generated by the engine can be manually overridden for full control.
- Smarter data processing: It moves beyond generic negatives. The system performs phonetic analysis on your wake word to synthesize acoustically confusing counter-examples, which drastically reduces the false positive rate in real-world use.
- Ready for the edge: Models are exported to the standard ONNX format. The framework also includes a lightweight, stateful streaming inference engine designed for low-latency performance on devices like the Raspberry Pi.
Try it in your browser (no install needed): This single Google Colab notebook is a playground to train your first model. Inside, you can select and experiment with any of the available architectures with just a few clicks.
Launch the Training Notebook: https://colab.research.google.com/github/arcosoph/nanowakewo... The goal is to produce models with an extremely low false positive rate (tests show less than one false activation every 16-28 hours on average). The project is actively developed by Arcosoph, and all feedback or questions are highly welcome! Key Links: GitHub Repo: https://github.com/arcosoph/nanowakeword PyPI Package: https://pypi.org/project/nanowakeword/ Pre-trained Models: https://huggingface.co/arcosoph/nanowakeword-models#pre-trai... Discord Community: https://discord.gg/rYfShVvacB Comments URL: https://news.ycombinator.com/item?id=47794771 Points: 1 # Comments: 0
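The memory-mapped streaming trick mentioned above (reading training windows from disk instead of loading the dataset into RAM) can be sketched with the standard library. This is illustrative only; NanoWakeWord's actual file layout and window sizes are assumptions here.

```python
import array
import mmap
import os
import tempfile

SAMPLE_BYTES = 4   # float32 samples
WINDOW = 16_000    # one second of audio at 16 kHz (assumed window size)

# Create a small stand-in "dataset" file holding three windows of float32 audio.
path = os.path.join(tempfile.mkdtemp(), "samples.f32")
with open(path, "wb") as f:
    array.array("f", [0.0] * (WINDOW * 3)).tofile(f)

def iter_windows(path: str, window: int = WINDOW):
    """Yield consecutive fixed-size windows without reading the whole file into RAM."""
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        n = len(m) // (window * SAMPLE_BYTES)
        for i in range(n):
            # The OS pages in only the slice we touch, so RAM usage stays
            # bounded by the window size, not the dataset size.
            chunk = m[i * window * SAMPLE_BYTES:(i + 1) * window * SAMPLE_BYTES]
            yield array.array("f", chunk)

print(sum(1 for _ in iter_windows(path)))  # 3
```

The same pattern scales to terabyte files: the training loop iterates windows, and the page cache does the rest, which is why no GPU-class machine with huge RAM is required for the data-handling side.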

hnrss.org · 2026-04-16 22:05:01+08:00 · tech

The open source, self-hosted alternative to Restream, StreamYard, and vMix. Self-hosted multistream studio. Ingest from any source -- OBS, hardware encoders, mobile, browser -- over RTMP, SRT, or WebRTC (WHIP coming soon). Switch between inputs live in a browser-based production switcher. Fan-out to every platform at once: YouTube, Twitch, Kick, Facebook, or any custom RTMP/RTMPS endpoint. One stream in, or many. Every destination, simultaneously. No cloud middleman, no per-channel fees, no limits. Comments URL: https://news.ycombinator.com/item?id=47793158 Points: 1 # Comments: 0