Replies: 4 comments 1 reply
-
Take a look at
-
I have a C++-based implementation that I want to run with DML. I call OrtSessionOptionsAppendExecutionProvider_DML with my Ort::SessionOptions object like this, but later, in the actual inference, it just crashes with a Microsoft com_error exception. I don't know if this is connected to me using the C++ API apart from this snippet, but I can see that your demo programs for DML are all C through and through, and those programs work. Our program works fine with CPU inference if we comment this part out:
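The original snippet did not survive, but a minimal sketch of the kind of call described above might look like the following (assuming ONNX Runtime built with the DirectML EP; the model path is a placeholder). Note that the DML provider documentation requires disabling memory patterns and using sequential execution:

```cpp
// Sketch (not the poster's original code): appending the DirectML
// execution provider via the C-style helper, then creating a session.
#include <onnxruntime_cxx_api.h>
#include "dml_provider_factory.h"

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-test");
    Ort::SessionOptions options;

    // The DML EP does not support memory pattern optimization or
    // parallel execution, so both must be turned off first.
    options.DisableMemPattern();
    options.SetExecutionMode(ExecutionMode::ORT_SEQUENTIAL);

    // Append DML on adapter 0. Ort::SessionOptions converts implicitly
    // to OrtSessionOptions*, so the C helper can be called directly;
    // ThrowOnError turns a failure OrtStatus* into an Ort::Exception.
    Ort::ThrowOnError(
        OrtSessionOptionsAppendExecutionProvider_DML(options, 0));

    Ort::Session session(env, L"model.onnx", options);  // placeholder path
    return 0;
}
```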
-
Yes, my bad. I was first using the directml.dll distributed with my Windows 11, but this does not seem to be updated through Windows Update at all, and it was too old (1.6). When I use 1.10 it works; I just hadn't skipped past all those exceptions in VS. The question remains: why aren't there C++ methods in SessionOptions for DML as there are for the other providers?

I also wanted to ask why onnxruntime.dll is not the same irrespective of which providers were enabled when it was built. As it stands, there is no pre-built binary we can use if we want to offer both CUDA and DML. To me it seems that onnxruntime.dll should be the same, but I can see that it gets 1 MB larger when you enable DML, which is logical, since the DML interface is not a separate provider DLL (which would have been more logical). So I tried using the DML-enabled onnxruntime.dll with the CUDA provider DLL from another pre-built binary, but that didn't work. I don't get link errors, so the AppendExecutionProvider_CUDA method is there, but calling it doesn't work.
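For contrast with DML, CUDA does have a C++-style method on Ort::SessionOptions, which is the asymmetry the comment above points out. A sketch of that call, assuming ONNX Runtime 1.10+ built with the CUDA EP:

```cpp
// Sketch: appending the CUDA execution provider through the C++ API.
#include <onnxruntime_cxx_api.h>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "cuda-test");
    Ort::SessionOptions options;

    // Zero-initialize to get the documented defaults, then pick a device.
    OrtCUDAProviderOptions cuda_options{};
    cuda_options.device_id = 0;

    // Unlike DML, this is a member function; it throws Ort::Exception
    // on failure (e.g. when the CUDA EP is not available in the build).
    options.AppendExecutionProvider_CUDA(cuda_options);

    Ort::Session session(env, L"model.onnx", options);  // placeholder path
    return 0;
}
```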
-
Hey @BengtGustafsson, take a look at the perf tests for all execution providers, including DML. If you search for "dml" there you'll see the headers you need to include along with how to pass the session options. After setting the desired options you just create the session as usual. This file will also outline any other execution providers you may be curious about in C++.

Best,
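Putting the advice above together, a sketch of the full flow once the DML provider has been appended might look like this (input/output tensor names and shape are hypothetical; substitute your model's actual ones):

```cpp
// Sketch: DML session options, session creation, and one inference run.
#include <onnxruntime_cxx_api.h>
#include "dml_provider_factory.h"
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions options;
    options.DisableMemPattern();
    options.SetExecutionMode(ExecutionMode::ORT_SEQUENTIAL);
    Ort::ThrowOnError(
        OrtSessionOptionsAppendExecutionProvider_DML(options, 0));

    Ort::Session session(env, L"model.onnx", options);  // placeholder path

    // Hypothetical 1x3x224x224 float input; a CPU tensor is fine, the
    // runtime copies it to the DML device as needed.
    std::vector<float> data(1 * 3 * 224 * 224, 0.0f);
    std::vector<int64_t> shape{1, 3, 224, 224};
    auto mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator,
                                          OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem, data.data(), data.size(), shape.data(), shape.size());

    // Hypothetical I/O names; query the session for the real ones.
    const char* in_names[]  = {"input"};
    const char* out_names[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               in_names, &input, 1, out_names, 1);
    return 0;
}
```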
-
Is your feature request related to a problem? Please describe.
I have to compile my C++ code in x86 mode. I want to accelerate my code using DirectML, but I could not find a DirectML interface in the header "onnxruntime_cxx_api.h". How can I use DirectML to accelerate inference with the C++ API?
System information
Describe the solution you'd like
I want to call the DirectML interface from the C++ library, but I could not find how to append the DML execution provider.
Additional context
Could you give me a sample, please? Thanks!