Replies: 2 comments
-
Hi @rmdes, thank you 🙏 I really appreciate it! Off the top of my head, one option would be to run a local instance of Ollama on your system instead of relying on the one in the Docker image with GPU passthrough. You'd just need to point OLLAMA_URL in the services/api/.env file at the default Ollama port and remove the ollama service from the docker-compose file. I also came across this Reddit thread: https://www.reddit.com/r/docker/s/xFWeFQdbbY. I'm not sure how well it works, though, since I don't have a Radeon GPU to test it myself.
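For concreteness, a rough sketch of that change. This assumes the default Ollama port (11434) and that the compose service is called `ollama`; the exact service name and file layout in your checkout may differ:

```yaml
# services/api/.env — point the API container at the Ollama instance
# running natively on the host instead of the containerized one:
#   OLLAMA_URL=http://host.docker.internal:11434

# docker-compose.yml — remove (or comment out) the whole `ollama` service
# block and any `depends_on: ollama` entries. On Linux, also make the host
# reachable from inside the api container:
services:
  api:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

After that, start Ollama on the host (`ollama serve`) before bringing the stack up, so the API has something to connect to.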
-
Hi @rmdes,
-
Hey, congrats on how you set this up, it seems very interesting! I wanted to try it out with a bunch of documents I have locally, but I'm getting this error after the build sequence:
Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]
Is there a way to use this with an AMD Ryzen 9 7940HS w/ Radeon 780M Graphics (16) @ 4.00 GHz?