Unable to configure the LLM model #671
YinkunhaoDaEr started this conversation in General
Replies: 1 comment
You can create a custom relay (forwarding) layer in front of the OpenAI Chat API; a large language model can help generate the concrete code.
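A minimal sketch of such a relay, assuming (based on the error message below) that the only problem is the client sending the unsupported `chat_template_kwargs` argument: the relay strips that key from the request body and forwards everything else to the real OpenAI endpoint unchanged. The port, endpoint constant, and the list of stripped arguments are illustrative assumptions, not part of any official tooling.

```python
import json
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Real OpenAI chat completions endpoint; the relay forwards every POST here
# (simplification: the request path is ignored).
UPSTREAM = "https://api.openai.com/v1/chat/completions"

# Request arguments the upstream API rejects with a 400 error.
# Assumption: only chat_template_kwargs needs stripping; extend as needed.
UNSUPPORTED_ARGS = {"chat_template_kwargs"}

def sanitize(payload: dict) -> dict:
    """Drop request arguments the upstream API does not recognize."""
    return {k: v for k, v in payload.items() if k not in UNSUPPORTED_ARGS}

class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and sanitize the client's JSON request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(sanitize(payload)).encode()

        # Forward to OpenAI, passing the caller's Authorization header through.
        req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": self.headers.get("Authorization", ""),
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                data = resp.read()
                status = resp.status
        except urllib.error.HTTPError as e:
            # Relay upstream error responses back to the client as-is.
            data = e.read()
            status = e.code

        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    # Point the chat model's base_url at this relay, e.g. http://localhost:8000/v1
    HTTPServer(("0.0.0.0", 8000), RelayHandler).serve_forever()
```

With the relay running, the chat model configuration would use the relay's address as `base_url` instead of `https://api.openai.com/v1`, so the offending argument never reaches OpenAI.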
On the model configuration page, both the chat and EMBEDDING models are planned to use OpenAI models. The EMBEDDING model was configured successfully, but the chat model consistently fails to configure and keeps reporting an error.
Configuration: chat, https://api.openai.com/v1, gpt-3.5-turbo
Error: unknown error
PemjaUtils.invoke Exception:pemja.core.PythonException: <class 'RuntimeError'>: invalid llm config: {'api_key': 'sk-proj--*****************', 'base_url': 'https://api.openai.com/v1', 'model': 'gpt-3.5-turbo', 'modelType': 'chat', 'type': 'maas', 'customize': {}}, for details: Error code: 400 - {'error': {'message': 'Unrecognized request argument supplied: chat_template_kwargs', 'type': 'invalid_request_error', 'param': None, 'code': None}}