[BUG] Failed to get usage limit info(v2) #1393

@LonePheasantWarrior

Description

Docker image pull time:
2025-09-19 20:00 UTC
Docker image source:
ghcr.io/simstudioai/simstudio:latest

After deploying locally via Docker, Blocks cannot be used: clicking an item in the Blocks panel or dragging it into the editing area has no effect. The following is the simstudio log from when the problem occurs:

   ▲ Next.js 15.4.1
   - Local:        http://localhost:3000
   - Network:      http://0.0.0.0:3000

 ✓ Starting...
[Main Instrumentation] register() called, environment: {
  NEXT_RUNTIME: "nodejs",
  NODE_ENV: "production",
}
[Main Instrumentation] Loading Node.js instrumentation...
 ✓ Ready in 250ms
[Main Instrumentation] Calling Node.js register()...
2025-09-20T14:15:57.921Z WARN [Better Auth]: Social provider github is missing clientId or clientSecret
2025-09-20T14:15:57.921Z WARN [Better Auth]: Social provider google is missing clientId or clientSecret
2025-09-20T14:16:02.178Z WARN [Better Auth]: Social provider github is missing clientId or clientSecret
2025-09-20T14:16:02.178Z WARN [Better Auth]: Social provider google is missing clientId or clientSecret
[Main Instrumentation] register() called, environment: { NEXT_RUNTIME: 'edge', NODE_ENV: 'production' }
[Main Instrumentation] Loading Edge Runtime instrumentation...
[Main Instrumentation] Calling Edge Runtime register()...
[2025-09-20T14:16:17.367Z] [ERROR] [UsageManagement] Failed to get usage limit info {"userId":"11223344556677889900","error":{}}
[2025-09-20T14:16:17.367Z] [ERROR] [UnifiedUsageAPI] Failed to get usage limit info {"userId":"11223344556677889900","error":{}}
[2025-09-20T14:16:17.828Z] [ERROR] [OllamaModelsAPI] Failed to fetch Ollama models {"error":"Unable to connect. Is the computer able to access the url?","host":"http://localhost:11434"}

The following is my Docker Compose configuration:

networks:
  default:
    external: true
    name: local

services:
  simstudio:
    image: ghcr.io/simstudioai/simstudio:latest
    restart: unless-stopped
    container_name: simstudio
    ports:
      - '3000:3000'
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgresql:5432/${POSTGRES_DB:-simstudio}
      - BETTER_AUTH_URL=${NEXT_PUBLIC_APP_URL:-https://myDomain:3000}
      - NEXT_PUBLIC_APP_URL=${NEXT_PUBLIC_APP_URL:-https://myDomain:3000}
      - BETTER_AUTH_SECRET=${BETTER_AUTH_SECRET:-11223344556677889900}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY:-11223344556677889900}
      # - OLLAMA_URL=${OLLAMA_URL:-http://localhost:11434}
      - SOCKET_SERVER_URL=${SOCKET_SERVER_URL:-http://realtime:3002}
      - NEXT_PUBLIC_SOCKET_URL=${NEXT_PUBLIC_SOCKET_URL:-http://realtime:3002}
      - RESEND_API_KEY=re_11223344556677889900
      - EMAIL_DOMAIN=notifications.myDomain
    depends_on:
      realtime:
        condition: service_healthy
    healthcheck:
      test: ['CMD', 'wget', '--spider', '--quiet', 'http://127.0.0.1:3000']
      interval: 90s
      timeout: 5s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512MB

  realtime:
    image: ghcr.io/simstudioai/realtime:latest
    restart: unless-stopped
    container_name: realtime
    ports:
      - '3002:3002'
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgresql:5432/${POSTGRES_DB:-simstudio}
      - NEXT_PUBLIC_APP_URL=${NEXT_PUBLIC_APP_URL:-http://simstudio:3000}
      - BETTER_AUTH_URL=${BETTER_AUTH_URL:-http://simstudio:3000}
      - BETTER_AUTH_SECRET=${BETTER_AUTH_SECRET:-11223344556677889900}
    healthcheck:
      test: ['CMD', 'wget', '--spider', '--quiet', 'http://127.0.0.1:3002/health']
      interval: 90s
      timeout: 5s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          memory: 512MB
        reservations:
          memory: 128MB

To prevent the warning "Some containers have not started" from appearing in my Docker GUI, I moved the migrations service into its own Compose file, which I run as a one-off job (invocation sketched after the file):

networks:
  default:
    external: true
    name: local

services:
  simstudio-migrations:
    image: ghcr.io/simstudioai/migrations:latest
    working_dir: /app/packages/db
    container_name: simstudio-migrations
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgresql:5432/${POSTGRES_DB:-simstudio}
    command: ['bun', 'run', 'db:migrate']
    restart: 'no'
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 256MB
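
For reference, this is roughly how I apply it: the migrations file is brought up once, and only afterwards do I start the application stack. The file names in the comment below are just my local naming, so treat them as assumptions:

# One-off migration run, then the long-running stack (file names are my local naming):
#   docker compose -f simstudio-migrations.yml up
#   docker compose -f simstudio.yml up -d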

The following is my PostgreSQL configuration:

networks:
  default:
    external: true
    name: local

services:
  postgresql:
    image: pgvector/pgvector:pg17
    restart: always
    container_name: postgresql
    ports:
      - '${POSTGRES_PORT:-5432}:5432'
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
      - POSTGRES_DB=${POSTGRES_DB:-simstudio}
    volumes:
      - ./data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 5s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          memory: 512MB
        reservations:
          memory: 72MB
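
A note on the healthcheck above: it hardcodes -U postgres. That matches my setup because I keep the default user, but if POSTGRES_USER or POSTGRES_DB were overridden it would presumably need to be parameterized, roughly like this (a sketch reusing the same variables as the rest of the file):

healthcheck:
  # Sketch: let Compose interpolate the same variables used elsewhere in the file,
  # so the check targets whichever user/database is actually configured.
  test: ['CMD-SHELL', 'pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-simstudio}']
  interval: 5s
  timeout: 5s
  retries: 5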

The following is the realtime container log:

2025-09-20T14:15:51.955Z WARN [Better Auth]: Social provider github is missing clientId or clientSecret
2025-09-20T14:15:51.956Z WARN [Better Auth]: Social provider google is missing clientId or clientSecret

The following is the migrations container log:

$ bunx drizzle-kit migrate --config=./drizzle.config.ts
Reading config file '/app/packages/db/drizzle.config.ts'
Using 'postgres' driver for database querying
[⣷] applying migrations...{
  severity_local: "NOTICE",
  severity: "NOTICE",
  code: "42P06",
  message: "schema \"drizzle\" already exists, skipping",
  file: "schemacmds.c",
  line: "132",
  routine: "CreateSchemaCommand",
}
{
  severity_local: "NOTICE",
  severity: "NOTICE",
  code: "42P07",
  message: "relation \"__drizzle_migrations\" already exists, skipping",
  file: "parse_utilcmd.c",
  line: "207",
  routine: "transformCreateStmt",
}
[✓] migrations applied successfully!

I have replaced sensitive values with placeholder strings.

My service is deployed in mainland China, and I suspect the "Failed to fetch" error might be related to network restrictions. I hope this issue can be resolved.
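
One more observation on the Ollama error: OLLAMA_URL is commented out in my Compose file, so simstudio apparently falls back to http://localhost:11434 (as seen in the log), and inside the container localhost refers to the container itself rather than to my host. If Ollama runs on the Docker host, something roughly like the following would presumably be needed in the simstudio service; host.docker.internal and the host-gateway mapping are assumptions about my environment, not settings taken from the project:

services:
  simstudio:
    environment:
      # Assumption: Ollama listens on the Docker host, not inside this container.
      # 'localhost' here resolves to the simstudio container itself, so the
      # default http://localhost:11434 cannot reach a host-side Ollama.
      - OLLAMA_URL=${OLLAMA_URL:-http://host.docker.internal:11434}
    extra_hosts:
      # Maps host.docker.internal to the host gateway on Linux Docker engines (20.10+).
      - 'host.docker.internal:host-gateway'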
