How to configure to use gpt-oss served by llama-server #445

@lostmsu

Description

What is the type of issue?

Documentation is missing

What is the issue?

I am running `llama-server --gpt-oss-20b-default`, which hosts an OpenAI-compatible chat endpoint at http://localhost:9000/v1. How do I configure code to use it?
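Since llama-server exposes an OpenAI-compatible API, any OpenAI-style client can be pointed at the local endpoint. A minimal stdlib sketch of what such a request would look like (the model name `gpt-oss-20b` and the `no-key` placeholder token are assumptions, not confirmed by the docs):

```python
import json
import urllib.request

# Endpoint taken from the issue text; adjust if your server runs elsewhere.
BASE_URL = "http://localhost:9000/v1"

def build_chat_request(prompt, model="gpt-oss-20b"):
    """Build an OpenAI-style /chat/completions request for the local server.

    The model name is an assumption; llama-server typically serves whatever
    model it was launched with regardless of this field.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # llama-server accepts any token unless started with --api-key.
            "Authorization": "Bearer no-key",
        },
    )

# With the server running, send it like this:
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```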

Where did you find it?

No response
