智能助手网
Tag aggregation: https


hnrss.org · 2026-04-18 22:48:39+08:00 · tech

Hi HN, I built PushToPost ( https://pushtopost.com ) to automate the tedious parts of shipping. It analyzes GitHub diffs via webhooks to generate platform-native social posts and SEO-friendly changelogs. One of the main goals is using JSON-LD schema so the changelogs get indexed and cited properly by AI search engines like Perplexity or ChatGPT. I'd love to hear your feedback on the workflow and the technical implementation! Comments URL: https://news.ycombinator.com/item?id=47816357 Points: 1 # Comments: 0

linux.do · 2026-04-18 15:43:51+08:00 · tech

I had long wanted to set up an API relay; I happened to come across sub2api, so I used it to build one. Here are the steps.

First, agree on 3 values you will replace throughout:

api.example.com — change to your own domain
[email protected] — change to your admin email
CHANGE_ME... — change to random secrets you generate yourself

1) Log in to the server and update the system

ssh root@<your server IP>
apt update
apt -y upgrade
timedatectl

This step is basic preparation: bring the system up to the current repository versions and confirm the clock is correct. An inaccurate clock breaks things like HTTPS, login sessions, and payment callbacks. Docker's current official Ubuntu documentation still recommends installing Docker Engine from the official apt repository.

2) Install Docker Engine and Docker Compose v2

First remove packages that may conflict:

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do
    apt-get remove -y $pkg
done

Add the official Docker repository:

apt-get update
apt-get install -y ca-certificates curl
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
  > /etc/apt/sources.list.d/docker.list

Install Docker and the Compose plugin:

apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Check versions and enable the service:

docker --version
docker compose version
systemctl enable docker
systemctl start docker
systemctl status docker --no-pager

The package set above (docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin) is exactly what Docker's current install documentation recommends. Docker also warns explicitly that if you use UFW or firewalld, ports published by Docker can bypass the firewall rules you see on the surface — so in production expose only 80/443 and keep 8080 for the local reverse proxy.

3) Install Git, openssl, and basic tools

apt-get install -y git curl wget nano openssl ufw

All of these are used later:

git — clone the repository
openssl — generate secrets
nano — edit configs
ufw — open 80/443/22

4) Prepare the deployment directory and fetch the official files

mkdir -p /opt/sub2api
cd /opt/sub2api
git clone https://github.com/Wei-Shaw/sub2api.git source
cp source/deploy/docker-compose.local.yml .
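Before writing any configuration, it may be worth confirming that everything installed in steps 1–3 is actually on the PATH. A minimal pre-flight sketch — my own addition, not from the Sub2API docs; check_tool is a hypothetical helper:

```shell
#!/bin/sh
# Pre-flight check (my own sketch, not part of the official deployment docs):
# confirm the tools from steps 1-3 are available before continuing.

# check_tool NAME -- print "ok: NAME" or "missing: NAME"; exit 0 iff found
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "missing: $1"
        return 1
    fi
}

missing=0
for tool in docker git curl openssl ufw nano; do
    check_tool "$tool" || missing=$((missing + 1))
done

# Compose v2 ships as a docker CLI plugin, so probe the subcommand, not a binary
if docker compose version >/dev/null 2>&1; then
    echo "ok: docker compose"
else
    echo "missing: docker compose plugin"
    missing=$((missing + 1))
fi

echo "missing tools: $missing"
```

If anything reports missing, repeat the corresponding install step before moving on.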
cp source/deploy/.env.example .env
cp source/deploy/config.example.yaml config.yaml

This is exactly the manual-deployment path in Sub2API's official notes: clone the repository, copy .env.example, create data postgres_data redis_data, then start from docker-compose.local.yml; the docs explicitly describe the local variant as "local directories, easy to migrate".

5) Generate production secrets

Generate three random values:

openssl rand -hex 32
openssl rand -hex 32
openssl rand -hex 32

Save the output; the three values are used for:

POSTGRES_PASSWORD
JWT_SECRET
TOTP_ENCRYPTION_KEY

The official .env template and deployment notes both stress that POSTGRES_PASSWORD is required, and that JWT_SECRET and TOTP_ENCRYPTION_KEY should be fixed values — otherwise persistent logins and 2FA break.

6) Write the final .env

cat > /opt/sub2api/.env <<'EOF'
BIND_HOST=127.0.0.1
SERVER_PORT=8080
SERVER_MODE=release
RUN_MODE=standard
TZ=Asia/Shanghai
POSTGRES_USER=sub2api
POSTGRES_PASSWORD=CHANGE_ME_TO_A_LONG_RANDOM_PASSWORD
POSTGRES_DB=sub2api
DATABASE_MAX_OPEN_CONNS=50
DATABASE_MAX_IDLE_CONNS=10
DATABASE_CONN_MAX_LIFETIME_MINUTES=30
DATABASE_CONN_MAX_IDLE_TIME_MINUTES=5
REDIS_PASSWORD=
REDIS_DB=0
REDIS_POOL_SIZE=1024
REDIS_MIN_IDLE_CONNS=10
REDIS_ENABLE_TLS=false
ADMIN_EMAIL=[email protected]
ADMIN_PASSWORD=
JWT_SECRET=CHANGE_ME_TO_A_LONG_RANDOM_HEX_STRING
JWT_EXPIRE_HOUR=24
JWT_ACCESS_TOKEN_EXPIRE_MINUTES=0
TOTP_ENCRYPTION_KEY=CHANGE_ME_TO_ANOTHER_LONG_RANDOM_HEX_STRING
GEMINI_OAUTH_CLIENT_ID=
GEMINI_OAUTH_CLIENT_SECRET=
GEMINI_OAUTH_SCOPES=
GEMINI_QUOTA_POLICY=
GEMINI_CLI_OAUTH_CLIENT_SECRET=
ANTIGRAVITY_OAUTH_CLIENT_SECRET=
SECURITY_URL_ALLOWLIST_ENABLED=true
SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP=false
SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS=false
SECURITY_URL_ALLOWLIST_UPSTREAM_HOSTS=
UPDATE_PROXY_URL=
EOF

Then edit it and replace the placeholders with your own values:

nano /opt/sub2api/.env

I kept the basic allowlist switches in .env but manage the actual domain list centrally in config.yaml, because the complete URL-allowlist fields in the official config.example.yaml live under security.url_allowlist.

7) Write the final config.yaml

cat > /opt/sub2api/config.yaml <<'EOF'
server:
  host: "0.0.0.0"
  port: 8080
  mode: "release"
  frontend_url: "https://api.example.com"
  trusted_proxies: []
  max_request_body_size: 268435456
  h2c:
    enabled: true
    max_concurrent_streams: 50
    idle_timeout: 75
    max_read_frame_size: 1048576
    max_upload_buffer_per_connection: 2097152
    max_upload_buffer_per_stream: 524288
run_mode: "standard"
cors:
  allowed_origins:
    - "https://api.example.com"
  allow_credentials: true
security:
  url_allowlist:
    enabled: true
    upstream_hosts:
      - "api.openai.com"
      - "api.anthropic.com"
      - "generativelanguage.googleapis.com"
      - "cloudcode-pa.googleapis.com"
      - "*.openai.azure.com"
    pricing_hosts:
      - "raw.githubusercontent.com"
    crs_hosts: []
    allow_private_hosts: false
    allow_insecure_http: false
  response_headers:
    enabled: true
    additional_allowed: []
    force_remove: []
  csp:
    enabled: true
    policy: "default-src 'self'; script-src 'self' __CSP_NONCE__ https://challenges.cloudflare.com https://static.cloudflareinsights.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; img-src 'self' data: https:; font-src 'self' data: https://fonts.gstatic.com; connect-src 'self' https:; frame-src https://challenges.cloudflare.com; frame-ancestors 'none'; base-uri 'self'; form-action 'self'"
  proxy_probe:
    insecure_skip_verify: false
  proxy_fallback:
    allow_direct_on_error: false
EOF

Change the domain:

nano /opt/sub2api/config.yaml

In the current official config example, frontend_url is used to build external links such as email links, and the URL-allowlist example explicitly lists upstream_hosts, pricing_hosts, crs_hosts, allow_private_hosts, and allow_insecure_http. I tightened allow_private_hosts and allow_insecure_http from the sample's true to false, which suits a public-facing production host better.

8) Write the final docker-compose.local.yml

cat > /opt/sub2api/docker-compose.local.yml <<'EOF'
services:
  sub2api:
    image: weishaw/sub2api:latest
    container_name: sub2api
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 100000
        hard: 100000
    ports:
      - "${BIND_HOST:-127.0.0.1}:${SERVER_PORT:-8080}:8080"
    volumes:
      - ./data:/app/data
      - ./config.yaml:/app/data/config.yaml:ro
    environment:
      - AUTO_SETUP=true
      - SERVER_HOST=0.0.0.0
      - SERVER_PORT=8080
      - SERVER_MODE=${SERVER_MODE:-release}
      - RUN_MODE=${RUN_MODE:-standard}
      - DATABASE_HOST=postgres
      - DATABASE_PORT=5432
      - DATABASE_USER=${POSTGRES_USER:-sub2api}
      - DATABASE_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
      - DATABASE_DBNAME=${POSTGRES_DB:-sub2api}
      - DATABASE_SSLMODE=disable
      - DATABASE_MAX_OPEN_CONNS=${DATABASE_MAX_OPEN_CONNS:-50}
      - DATABASE_MAX_IDLE_CONNS=${DATABASE_MAX_IDLE_CONNS:-10}
      - DATABASE_CONN_MAX_LIFETIME_MINUTES=${DATABASE_CONN_MAX_LIFETIME_MINUTES:-30}
      - DATABASE_CONN_MAX_IDLE_TIME_MINUTES=${DATABASE_CONN_MAX_IDLE_TIME_MINUTES:-5}
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=${REDIS_PASSWORD:-}
      - REDIS_DB=${REDIS_DB:-0}
      - REDIS_POOL_SIZE=${REDIS_POOL_SIZE:-1024}
      - REDIS_MIN_IDLE_CONNS=${REDIS_MIN_IDLE_CONNS:-10}
      - REDIS_ENABLE_TLS=${REDIS_ENABLE_TLS:-false}
      - ADMIN_EMAIL=${ADMIN_EMAIL:[email protected]}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-}
      - JWT_SECRET=${JWT_SECRET:-}
      - JWT_EXPIRE_HOUR=${JWT_EXPIRE_HOUR:-24}
      - JWT_ACCESS_TOKEN_EXPIRE_MINUTES=${JWT_ACCESS_TOKEN_EXPIRE_MINUTES:-0}
      - TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY:-}
      - TZ=${TZ:-Asia/Shanghai}
      - GEMINI_OAUTH_CLIENT_ID=${GEMINI_OAUTH_CLIENT_ID:-}
      - GEMINI_OAUTH_CLIENT_SECRET=${GEMINI_OAUTH_CLIENT_SECRET:-}
      - GEMINI_OAUTH_SCOPES=${GEMINI_OAUTH_SCOPES:-}
      - GEMINI_QUOTA_POLICY=${GEMINI_QUOTA_POLICY:-}
      - GEMINI_CLI_OAUTH_CLIENT_SECRET=${GEMINI_CLI_OAUTH_CLIENT_SECRET:-}
      - ANTIGRAVITY_OAUTH_CLIENT_SECRET=${ANTIGRAVITY_OAUTH_CLIENT_SECRET:-}
      - SECURITY_URL_ALLOWLIST_ENABLED=${SECURITY_URL_ALLOWLIST_ENABLED:-true}
      - SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP=${SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP:-false}
      - SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS=${SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS:-false}
      - SECURITY_URL_ALLOWLIST_UPSTREAM_HOSTS=${SECURITY_URL_ALLOWLIST_UPSTREAM_HOSTS:-}
      - UPDATE_PROXY_URL=${UPDATE_PROXY_URL:-}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD", "wget", "-q", "-T", "5", "-O", "/dev/null", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  postgres:
    image: postgres:18-alpine
    container_name: sub2api-postgres
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 100000
        hard: 100000
    volumes:
      - ./postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-sub2api}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
      - POSTGRES_DB=${POSTGRES_DB:-sub2api}
      - PGDATA=/var/lib/postgresql/data
      - TZ=${TZ:-Asia/Shanghai}
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-sub2api} -d ${POSTGRES_DB:-sub2api}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s

  redis:
    image: redis:8-alpine
    container_name: sub2api-redis
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 100000
        hard: 100000
    volumes:
      - ./redis_data:/data
    command: >
      sh -c '
      redis-server
      --save 60 1
      --appendonly yes
      --appendfsync everysec
      ${REDIS_PASSWORD:+--requirepass "$REDIS_PASSWORD"}
      '
    environment:
      - TZ=${TZ:-Asia/Shanghai}
      - REDISCLI_AUTH=${REDIS_PASSWORD:-}
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s

networks:
  sub2api-network:
    driver: bridge
EOF

This compose file still follows the official local variant: local-directory persistence, weishaw/sub2api:latest plus postgres:18-alpine and redis:8-alpine, and a /health healthcheck. I also enabled the config.yaml mount, which the official file ships commented out.

9) Create the data directories and start the containers

cd /opt/sub2api
mkdir -p data postgres_data redis_data
docker compose -f docker-compose.local.yml up -d
docker compose -f docker-compose.local.yml ps

If everything looks normal, follow the logs:

docker compose -f docker-compose.local.yml logs -f sub2api

The official Sub2API notes are explicit: in Compose mode with AUTO_SETUP=true, the first start automatically connects to PostgreSQL and Redis, runs the database migrations, creates the admin account, and generates an admin password if you did not provide one.

10) Retrieve the admin password and run a health check

If you left ADMIN_PASSWORD= empty in .env, run:

docker compose -f docker-compose.local.yml logs sub2api | grep -i "admin password"

Local health check:

curl http://127.0.0.1:8080/health

The official manual-deployment notes and command examples both show this way of reading the auto-generated admin password from the logs. (GitHub)

11) Install Caddy and enable automatic HTTPS

Add the official Caddy repository:

apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
chmod o+r /usr/share/keyrings/caddy-stable-archive-keyring.gpg
chmod o+r /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install -y caddy

This is exactly the Debian/Ubuntu stable installation path Caddy currently documents. (Caddy Web Server)

12) Write the Caddyfile

cat > /etc/caddy/Caddyfile <<'EOF'
api.example.com {
    @static {
        path /assets/*
        path /logo.png
        path /favicon.ico
    }
    header @static {
        Cache-Control "public, max-age=31536000, immutable"
        -Pragma
        -Expires
    }
    tls {
        protocols tls1.2 tls1.3
    }
    reverse_proxy 127.0.0.1:8080 {
        health_uri /health
        health_interval 30s
        health_timeout 10s
        health_status 200
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-Host {host}
        header_up CF-Connecting-IP {http.request.header.CF-Connecting-IP}
    }
    encode {
        zstd
        gzip 6
        minimum_length 256
    }
    request_body {
        max_size 100MB
    }
    log {
        output file /var/log/caddy/sub2api.log {
            roll_size 50mb
            roll_keep 10
            roll_keep_for 720h
        }
        format json
        level INFO
    }
    handle_errors {
        respond "{err.status_code} {err.status_text}"
    }
}
EOF

Check and reload:

caddy fmt --overwrite /etc/caddy/Caddyfile
caddy validate --config /etc/caddy/Caddyfile
systemctl enable caddy
systemctl restart caddy
systemctl status caddy --no-pager

The official repository does currently ship a deploy/Caddyfile that already covers TLS, reverse_proxy localhost:8080, the /health health check, real-IP header forwarding, and log rotation, so this route is the least work.

13) Open the firewall

ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
ufw status verbose

Do not expose 8080 to the public internet: BIND_HOST=127.0.0.1 already binds the app to localhost only, with Caddy reverse-proxying in front of it. This also matches Docker's official firewall advice.

14) Final verification

Verify locally first:

curl http://127.0.0.1:8080/health
curl -I https://api.example.com

Then open https://api.example.com in a browser and log in with the admin email and the password taken from the logs.

15) 5 checks to run immediately after deployment

1. Log in to the admin panel and confirm the home page loads.
2. In settings, confirm the site URL is correct. If frontend_url is misconfigured, email links and payment callbacks will break later.
3. If you enable the URL allowlist, keep only the upstream domains you actually use. The official sample ships with OpenAI, Anthropic, Gemini, Azure OpenAI and others, but enabling them all is not recommended in production.
4. To enable payments, the admin path is Settings → Payment Settings; the project currently supports
EasyPay, official Alipay, official WeChat Pay, and Stripe; multi-instance routing supports round-robin and least-amount, and callback URLs are assembled automatically from your domain.
5. If you use Stripe, remember to subscribe to the payment_intent.succeeded and payment_intent.payment_failed webhook events.

16) The most common day-to-day operations commands

cd /opt/sub2api

# Check status
docker compose -f docker-compose.local.yml ps

# Follow logs
docker compose -f docker-compose.local.yml logs -f sub2api

# Restart the app
docker compose -f docker-compose.local.yml restart sub2api

# Update the image
docker compose -f docker-compose.local.yml pull
docker compose -f docker-compose.local.yml up -d

# Stop the stack
docker compose -f docker-compose.local.yml down

The official deployment notes list this same set of commands for the local variant, and stress that the local variant is the easiest to migrate and back up as a whole directory.

4 posts · 4 participants
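One thing the step-16 command list above does not cover is backups. Since the local variant keeps everything (app data, Postgres, Redis) under /opt/sub2api, a cold backup can be as simple as stopping the stack and archiving that one directory. A sketch — my own addition, not from the official docs; backup_dir and the /root/backups destination are hypothetical:

```shell
#!/bin/sh
# Cold-backup sketch for the local deployment: archive the whole app directory.
set -eu

# backup_dir SRC DEST -- tar SRC into DEST with a timestamped name, print the path
backup_dir() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    stamp=$(date +%Y%m%d-%H%M%S)
    out="$dest/sub2api-$stamp.tar.gz"
    tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
    echo "$out"
}

# Against the live stack you would stop the containers first so the Postgres and
# Redis files are consistent, then restart (commented out so this sketch is
# side-effect free):
# cd /opt/sub2api
# docker compose -f docker-compose.local.yml down
# backup_dir /opt/sub2api /root/backups
# docker compose -f docker-compose.local.yml up -d
```

Restoring is the reverse: stop the stack, unpack the archive over /opt/sub2api, and bring it up again.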