I recently updated Kotaemon to the latest version after six months of exceptional performance, where it could summarize 40-page documents in under 20 seconds, even on a smaller machine.
After the upgrade, however, the application now takes 30 minutes to summarize a 6-page resume. The hardware and LLM (a local Llama 3.1 8B model served through Ollama) remain unchanged. Unlike the older version, where the built-in LLM shipped as part of the package and worked seamlessly out of the box, the local Ollama model now has to be pulled manually.
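For reference, this is the manual step I had to perform that the old version handled automatically. The model tag below is my assumption of the intended model ("Llama 3.1 8B"); adjust it to whatever the settings screen actually expects:

```shell
# Pull the local model manually so Kotaemon can find it.
# "llama3.1:8b" is assumed here; substitute your configured model tag.
ollama pull llama3.1:8b

# Verify the model is now available to the Ollama server.
ollama list
```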
I’m attaching screenshots of the current LLM settings for review. Please confirm whether Llama 3.1 8B is the most suitable model, or whether a better alternative exists. Additionally, was the LLM in the previous version customized in some way? That could explain its much higher efficiency compared to the current setup.
There are also dependency issues with GraphRAG, the NanoGraphRAG Collection, and the LightRAG Collection, which do not function correctly despite being installed from the links provided. This is concerning, since such dependencies should ideally be identified and resolved by the developers before release. Furthermore, citations no longer work as they did in the previous version.
Please provide clear guidance on resolving these issues and restoring the application to its previous level of efficiency.
I genuinely like the application compared to the alternatives.