Can't run it on Linux? #50
Linux is supported. After installing the Python dependencies and compiling the executable, it works out of the box. I haven't written the documentation yet.
Installation instructions will be published soon.
Thanks, looking forward to the good news.
Really looking forward to installation instructions! Thank you.
@Bills135 @Karen0103 After downloading the Linux program from the links above
I will later provide an example of server-side deployment used together with ChatGPT applications.
Can it be installed from source? There is no configure file.
There is no configure script at the moment; you need to install wails yourself: https://wails.io/docs/gettingstarted/installation
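As a rough sketch of what installing wails usually involves (assuming a recent Go toolchain is already on PATH; the exact version requirements are on the linked page, not in this thread):

```shell
# Install the wails CLI with Go (see https://wails.io/docs/gettingstarted/installation)
go install github.com/wailsapp/wails/v2/cmd/wails@latest
# Check that the platform dependencies (gcc, gtk, webkit, npm, ...) are present
wails doctor
```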
Hi, it looks like the Linux usage tutorial hasn't been finished yet. I'm trying to deploy it; could you give me some pointers? https://github.com/josStorer/RWKV-Runner/blob/master/build/linux/Readme_Install.txt
@baofengqqwwff For the GUI, install the dependencies following these instructions and it will work.
Is there a way to start the webui via Python? Launching the executable directly fails with: Gtk-WARNING **: 16:57:37.766: cannot open display:
@baofengqqwwff Instructions will be published later.
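For what it's worth, later posts in this thread start the backend headlessly with Python rather than via the GUI executable, which sidesteps the missing display; roughly:

```shell
# Headless backend start, as used later in this thread (no Gtk display needed)
git clone https://github.com/josStorer/RWKV-Runner --depth=1
cd RWKV-Runner
python3 ./backend-python/main.py --webui > log.txt &
```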
@Bills135 @Karen0103 @baofengqqwwff |
I wrote an AUR package config: https://gist.github.com/BoyanXu/9961a27587984073458d15cfa47a0ab0. The problem I ran into is that the strict requirement of torch==1.13.1+cu117 triggers issue #17 on my machine, which has torch-2.0.1-2. The program seems to rely on the default. For now, a workaround may be to use a virtual environment and remove the PyTorch dependency from the PKGBUILD accordingly. But I wonder what prevents the project from supporting the latest PyTorch. Will PyTorch 2.0.x be supported in the future?
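A minimal sketch of the virtual-environment workaround (the pinned torch version comes from this thread; the wheel index URL and the requirements path are my assumptions about the usual setup):

```shell
# Create an isolated environment so the system torch-2.0.1 is not used
python3 -m venv .venv
source .venv/bin/activate
# Pin the torch build the project expects (index URL is the usual cu117 wheel repo)
pip install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
# Then install the remaining backend dependencies (path is an assumption)
pip install -r backend-python/requirements.txt
```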
@boyanxu |
@josStorer I'm currently trying to deploy just the backend of Runner on a "揽睿星舟" compute server; the base environment I created is as follows. If it is required, I'm not sure how the compile step should be carried out (even though I've read the docs carefully and can compile the CUDA kernel in BlinkDL's ChatRWKV project on my own, I still don't know how to apply it to the Runner project).
@eyaeya Compilation is optional; in the runner's backend inference service, calling
Question 1: @josStorer After deploying only the backend of Runner on the "揽睿星舟" compute server (without switching a model), opening URL/docs to verify that the API is running returns an error.

Repro steps:
1. Run the commands from the documentation; they completed successfully.
2. Open in the browser
Question 2: After deploying only the backend on a Linux server, switching the model fails.

Environment:
CUDA version

Repro steps:

1. On a freshly created server environment, install the dependencies and start the backend service:

   ```
   user@lsp-ws:~ /netdisk/data/RWKV-Runner$
   ```

2. Switch the model:

   ```
   user@lsp-ws:~ /netdisk/data/ninja$
   ```

   The response is:

   ```
   {"detail":"failed to load: CUDA out of memory. Tried to allocate 224.00 MiB (GPU 0; 23.70 GiB total capacity; 21.72 GiB already allocated; 202.56 MiB free; 22.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"}
   ```

Attachment: the server-side deployment method I followed:

```shell
# install git python3.10 npm by yourself
# (the next step was not executed, since the environment already ships with Python)
git clone https://github.com/josStorer/RWKV-Runner --depth=1
python3 ./backend-python/main.py --webui > log.txt &
# install CUDA support
curl http://127.0.0.1:8000/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/rwkv_v5.2_7B_role_play_16k.pth","strategy":"cuda fp32","customCuda":"true","deploy":"true"}'
```
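As a general note, loading with "cuda fp16" instead of "cuda fp32" roughly halves a model's GPU memory use, which is relevant to the out-of-memory error above; a sketch of the same switch-model call with that strategy (same assumed server, port, and model path as in this thread, and not necessarily the fix the maintainer suggested):

```shell
# Same endpoint as above, but with the half-precision strategy to reduce GPU memory
curl http://127.0.0.1:8000/switch-model -X POST \
  -H "Content-Type: application/json" \
  -d '{"model":"./models/rwkv_v5.2_7B_role_play_16k.pth","strategy":"cuda fp16","customCuda":"true","deploy":"true"}'
```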
Thanks @josStorer for replying late at night. I'll try again tomorrow following this and report back. Merry Christmas 🎄
Feedback: it runs successfully. Below is the run record.

Environment: entered the VSCode online debugging interface.

Run commands (this environment already ships with python3-dev, no need to install it):
- Install the backend
- Install the frontend
- Install the CUDA dependencies
- Start the server
- Switch the model
- Call the API

Notes: on this platform and environment the CUDA kernel is already included, so the following steps are unnecessary.

Check the Ubuntu version:

```
lsb_release -a
```

Install the CUDA kernel, Base Installer:

```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
```

Driver Installer:

```
sudo apt-get install -y nvidia-kernel-open-545
```
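To illustrate the "call the API" step above, a hedged sketch (the endpoint path and payload follow the backend's OpenAI-style API; the exact field names are my assumption, so adjust to your deployment):

```shell
# Assumed OpenAI-compatible chat endpoint on the default port 8000
curl http://127.0.0.1:8000/chat/completions -X POST \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}],"stream":false}'
```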
@josStorer However, I still haven't resolved the error when opening http://URL:8000/docs, although it doesn't seem to affect API calls.
@josStorer A question: on Linux with (cuda fp16), do I still need to set
Could someone write a complete guide? I find this Linux installation process really confusing.
```
%cd /content/RWKV-Runner/frontend
%cd ..
```
This part fails: tsc not found
Traceback (most recent call last): |
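A likely fix, assuming the failure is simply that the frontend's dev dependencies (which include the TypeScript compiler) were never installed:

```shell
cd frontend
# Install dependencies from package-lock.json; this puts tsc in node_modules/.bin
npm ci
# Confirm the compiler is now resolvable
npx tsc --version
```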
@eyaeya No, you don't.
Looking at the docs, only mac / windows modes are covered, right?