
Cannot create Quiz with Ollama #19

Open · Sauron-sol opened this issue Oct 7, 2024 · 20 comments
Labels: bug (Something isn't working)

@Sauron-sol commented Oct 7, 2024

Hello,

I'm experiencing an issue when trying to create a quiz using Ollama. After selecting my file, I encountered the following error: `Cannot read properties of undefined (reading 'forEach')`.
My file is located in a folder, so I also tried using both the "Add note" and "Add folder" options to test if there was a bug, but the issue persists.

Could you please provide guidance on how I can successfully create a quiz?
Thanks

ECuiDev added the bug label Oct 7, 2024
@ECuiDev (Owner) commented Oct 7, 2024

Do you get the error when you try to add a note/folder or when you try to generate questions?

@Sauron-sol (Author)

When I try to generate questions

@ECuiDev (Owner) commented Oct 7, 2024

Do you have Ollama installed and running when you try to generate questions? If so, what generation model are you using?

@LeonelRFF

Hi, have you pulled nomic-embed-text? It's required for the plugin to work. If you don't have it, open cmd or PowerShell and install it:

```
ollama pull nomic-embed-text
```

While I'm passing by: it seems a bug occurs in this specific case 😁. As you can see in the video, the option cards can't be displayed. I realized it's because the callout looks like this:

quiz.mp4
> [!question] Structured content is more enjoyable because each paragraph has to be delimited by an element of type _______________
> a) <b>
> b) <i>
> c) <p>
> d) <h1>
>> [!success]- Answer
>> c) <p>

If I change `<h1>` or the other options to a different word, it works correctly.

@ECuiDev (Owner) commented Oct 7, 2024

This is because the actual HTML is being rendered. I'll add a generation-level fix for this in the future, but for now you should enclose HTML with backticks so that it gets treated as text.

> [!question] Structured content is more enjoyable because each paragraph has to be delimited by an element of type `_______________`.
> a) `<b>`
> b) `<i>`
> c) `<p>`
> d) `<h1>`
>> [!success]- Answer
>> c) `<p>`

@Sauron-sol (Author)

Yes, Ollama is running. Here are the models I'm using for generation and embedding (when I run `ollama ps`, both models show as running). I also tried llama3.2:latest, but it doesn't work (and of course I have nomic-embed-text:latest installed).

@ECuiDev (Owner) commented Oct 8, 2024

Hmm, that's very strange. Is it possible for you to send a video of what happens when you try to generate a quiz?

@Sauron-sol (Author)

QuizGenerator.mp4

Here's the video

@ECuiDev (Owner) commented Oct 8, 2024

What happens if you try generating using a small note (a few hundred tokens)?

@Sauron-sol (Author)

It seems to work, but not with more than 1000 tokens. I tried it with 349 tokens and it worked.

@ECuiDev (Owner) commented Oct 8, 2024

I believe the hardware affects the model speed, so maybe if your machine isn't powerful enough and you make a really large request, the model could time out? That feels like the most likely scenario to me right now.

You could also try asking in the Ollama discord server if anyone has encountered a similar problem when sending a large number of tokens in a single request.
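
If you want to rule the plugin out, you could also hit your local Ollama instance directly with an oversized prompt and see whether it fails on its own. Something like this untested sketch (the model name and filler prompt are just examples; `num_ctx` is Ollama's context-window option):

```ts
// Untested sketch: call the local Ollama REST API directly with a large
// prompt, bypassing the plugin, to check whether big requests fail by themselves.
// Assumes Ollama's default endpoint at http://localhost:11434.
async function testLargePrompt(prompt: string): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2:latest",   // whichever generation model you pulled
      prompt,
      stream: false,              // one JSON response instead of a stream
      options: { num_ctx: 4096 }, // raise the context window (default is 2048)
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = await res.json();
  console.log(data.response);
}

// Roughly 1,500 tokens of filler to reproduce the >1000-token failure.
testLargePrompt("lorem ipsum ".repeat(750)).catch(console.error);
```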

ECuiDev closed this as completed Oct 11, 2024
@SeamusMullan

I've been having the same issue with the following specs and settings:

Apple MacBook Air M2 (2022), 8 GB RAM
macOS Sequoia 15.0.1

I've noticed this issue also only seems to appear when there are more than 1000 tokens, which makes me think it's an Ollama limitation. Perhaps splitting multiple pages into chunks and processing them separately could be a temporary fix, if there's no known solution in the Ollama Discord.

@ECuiDev (Owner) commented Oct 11, 2024

I see. I'll look into a fix.

ECuiDev reopened this Oct 11, 2024
@NotPubi commented Oct 16, 2024

Hello, I am having the same issue. Any ideas?

@ECuiDev (Owner) commented Oct 16, 2024

Do you get the same error when using more than 1000 tokens?

@NotPubi commented Oct 16, 2024 via email

@ECuiDev (Owner) commented Oct 16, 2024

How many questions did you try generating? And what models did you try?

@NotPubi commented Oct 17, 2024 via email

@ECuiDev (Owner) commented Oct 17, 2024

I would try starting from a working baseline and going from there to see what breaks it. For example, generate one true-or-false question from 200 tokens, then increase either the number/type of questions or the number of tokens.

@SeamusMullan

Is there a way we could split the input into chunks and generate questions with multiple calls to Ollama?

Something like every 800-1000 tokens, since that seems to be the limit people are hitting before they get an error.
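
Rough sketch of the idea (hypothetical helper names, a naive 4-characters-per-token estimate, and Ollama's /api/generate endpoint; not actual plugin code):

```ts
// Split the note into ~800-token pieces and generate questions for each
// piece with a separate call to the local Ollama REST API.
const CHARS_PER_TOKEN = 4; // crude heuristic, not a real tokenizer
const MAX_TOKENS_PER_CALL = 800;

function chunkText(text: string): string[] {
  const maxChars = MAX_TOKENS_PER_CALL * CHARS_PER_TOKEN;
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

async function generatePerChunk(note: string, model: string): Promise<string[]> {
  const questions: string[] = [];
  for (const chunk of chunkText(note)) {
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        prompt: `Write one quiz question about the following text:\n\n${chunk}`,
        stream: false, // single JSON response per chunk
      }),
    });
    const data = await res.json();
    questions.push(data.response);
  }
  return questions;
}
```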
