Replies: 1 comment
-
I am not clear on the ports: you are running the curl command against one port but connecting on a different one. Which app runs in a container, or are both apps running locally?
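As a first check, you can query Ollama's `/api/tags` endpoint on the exact base URL the application is configured with. This is only a minimal sketch: it assumes the `requests` package is available, and the host and port are placeholders rather than values taken from the post.

```python
# Minimal sketch; the host and port below are hypothetical, not from the post.
import requests

BASE_URL = "http://127.0.0.1:11435"  # hypothetical custom Ollama port (default is 11434)

# /api/tags lists the models the Ollama server knows about. If this call
# succeeds against the same URL the application uses, the port itself is fine.
resp = requests.get(f"{BASE_URL}/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])
```

Note that if the application runs in a container while Ollama runs on the host (or on another cluster node), `localhost` inside the container does not point at the host, which is what the question above is getting at.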
-
Hello,
I have connection problems with my local LLMs on the GPU cluster.
My LLM specification looks like this:
The base_url should be correct because I had to choose a different port for Ollama. But I always get the error:
But when I check directly on the server, it does find the models:
Does anyone have any idea what the problem could be?
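Since the original spec and error message are not shown above, here is only a generic sketch of pointing an OpenAI-compatible client at an Ollama server on a non-default port; the host, port, and model name are assumptions, not the poster's actual configuration. Ollama exposes an OpenAI-compatible API under `/v1`:

```python
# Generic sketch only: the host, port, and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://gpu-node:11435/v1",  # hypothetical cluster host and non-default port
    api_key="ollama",                     # Ollama ignores the key, but the client requires one
)

reply = client.chat.completions.create(
    model="llama3",  # must match a model name reported by the server's /api/tags
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```

A connection error at this point usually means the base_url host/port is not reachable from where the code runs (wrong port, firewall, or container networking), whereas an unknown-model error means the URL is right but the model name does not match what the server lists.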