
There seems to be a problem here #8

Open

marcury6 opened this issue Sep 8, 2024 · 2 comments

Comments


marcury6 commented Sep 8, 2024

I downloaded transformers and updated it to the latest version, downloaded all of the model files from Hugging Face, and upgraded PyTorch with pip install --upgrade torch. I also went to the NVIDIA toolkit repository and installed CUDA Toolkit 12.4.1, and finally ran this in the terminal:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

But I still get an error.

Error message:

Traceback (most recent call last):
File "C:\Users\Administrator\Desktop\Qwen2-Boundless-main\continuous_conversation.py", line 10, in
model = AutoModelForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\python\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\python\Lib\site-packages\transformers\modeling_utils.py", line 3318, in from_pretrained
raise ImportError(
ImportError: Using low_cpu_mem_usage=True or a device_map requires Accelerate: pip install accelerate
(screenshot attached: BaiduShurufa_2024-9-8_21-57-10)
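
For context, transformers raises this ImportError whenever from_pretrained is called with a device_map or with low_cpu_mem_usage=True but the accelerate package is not installed. A minimal sketch of the kind of loading code that hits it (the model id and the arguments below are assumptions, not necessarily what continuous_conversation.py actually contains):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ystemsrx/Qwen2-Boundless"  # assumed model id

# device_map="auto" (or low_cpu_mem_usage=True) makes transformers depend on accelerate
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)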

marcury6 (Author) commented Sep 8, 2024

I had missed that line in the error message. Running
pip install accelerate
in the terminal fixed it.

When the script finally ran, there were still two odd warnings:

Assistant: The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
E:\python\Lib\site-packages\transformers\models\qwen2\modeling_qwen2.py:580: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
attn_output = torch.nn.functional.scaled_dot_product_attention(

ystemsrx (Owner) commented Sep 9, 2024

These two warnings have no real impact. One says the attention mask is not set, and the other says PyTorch was not compiled with Flash Attention. You can safely ignore both.
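
If you would rather silence the first warning than ignore it, a rough sketch (assuming model and tokenizer were loaded as in the snippet above; the prompt text and max_new_tokens are placeholders): tokenize with return_tensors="pt" so an attention_mask is produced, then pass it to generate() together with an explicit pad_token_id.

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)

output_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],  # avoids the "attention mask is not set" warning
    pad_token_id=tokenizer.eos_token_id,      # makes the pad token explicit
    max_new_tokens=512,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The Flash Attention warning only means this PyTorch build falls back to a regular scaled_dot_product_attention kernel, so the output itself is unaffected.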
