Fixed: "Torch is not able to use GPU" AMD Stable Diffusion --skip-torch-cuda-test (AUTOMATIC1111) Bug
TLDR
The video offers a solution for users who hit a runtime error when running the webui-user.bat file for Stable Diffusion with AMD support. It guides viewers through editing the configuration files, upgrading pip, and installing the necessary packages to resolve the issue. The steps are designed to help users successfully build and run Stable Diffusion with or without ONNX and Olive support, and the video encourages feedback and subscription for more AI and technology content.
Takeaways
- 🛠️ Update the Stable Diffusion build following the latest video instructions to avoid runtime errors.
- 💻 If you encounter the GPU-related error with Torch, force DirectML by adding `--use-directml` to the command line arguments.
- 📂 Locate and edit the `requirements.txt` and `webui-user.bat` files in the Stable Diffusion folder.
- 🖋️ Append `torch-directml` at the end of `requirements.txt` and add `--use-directml` to the command line arguments in `webui-user.bat`.
- 🚀 Activate the virtual environment with `venv\Scripts\activate` from a command prompt opened with admin privileges (a full command sequence is sketched after this list).
- 🔄 Upgrade pip using the command `python.exe -m pip install --upgrade pip` to ensure a successful build process.
- 📦 Install httpx with the command `pip install httpx==0.24.1` to satisfy the requirements.
- 🔧 Execute `pip install -r requirements.txt` to install the necessary packages for Stable Diffusion.
- 🌐 Launch the Stable Diffusion web interface by running the modified `webui-user.bat` file.
- 🎯 If building with ONNX and Olive support, also add `--onnx` to the command line arguments.
- 📢 Seek help in the comments section if the procedure does not work, and share success with a thumbs up or comment.
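A minimal sketch of the full command sequence described above, assuming the DirectML fork lives in `C:\stable-diffusion-webui-directml` (the path and folder name are assumptions; adjust them to your own install):

```bat
:: Run from a command prompt opened as Administrator (sketch; path is assumed)
cd C:\stable-diffusion-webui-directml

:: Activate the virtual environment created by the original build
venv\Scripts\activate

:: Upgrade pip, pin httpx, then install the edited requirements
python.exe -m pip install --upgrade pip
pip install httpx==0.24.1
pip install -r requirements.txt

:: Finally, launch the web UI with the edited COMMANDLINE_ARGS
webui-user.bat
```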
Q & A
What is the main issue discussed in the video?
-The main issue is a runtime error encountered when running the webui-user.bat file to build Stable Diffusion with AMD support, which started around mid-December when the repo reverted to defaulting to CUDA.
How can one fix the error with Torch not being able to use the GPU?
-To fix the error, add `--use-directml` to your command line arguments before running the webui-user.bat file. This forces the build to use DirectML instead of CUDA.
What are the two files that need to be edited to resolve the issue?
-The two files that need to be edited are `requirements.txt` and `webui-user.bat`.
What should be added to the `requirements.txt` file to fix the issue?
-At the bottom of the `requirements.txt` file, add `torch-directml` and then save the file.
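For illustration, the end of the edited `requirements.txt` would look roughly like this (the existing entries above the new line are left untouched; `torch-directml` is the only addition):

```text
# ...existing entries remain unchanged...
torch-directml
```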
What changes should be made to the `webui-user.bat` file?
-In the `webui-user.bat` file, add `--use-directml` to the `COMMANDLINE_ARGS` line to force the use of DirectML.
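A minimal sketch of the edited `webui-user.bat`, assuming an otherwise default file (only the `COMMANDLINE_ARGS` line changes; `--onnx` is optional and only needed for ONNX/Olive builds):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
:: Force DirectML instead of CUDA; append --onnx here as well for ONNX/Olive support
set COMMANDLINE_ARGS=--use-directml

call webui.bat
```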
What is the next step after editing the files?
-After editing the files, open a command prompt in admin mode, navigate to the Stable Diffusion folder, and run `venv\Scripts\activate` to activate the environment.
How can you ensure that the environment is activated successfully?
-The environment is activated successfully when its name appears in parentheses at the start of the prompt line, in the bottom left-hand corner of the command prompt window.
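For example, after a successful activation the prompt line typically changes to something like the following (the folder path is illustrative):

```bat
(venv) C:\stable-diffusion-webui-directml>
```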
What command should be executed to upgrade pip?
-To upgrade pip, execute the command `python.exe -m pip install --upgrade pip`.
Which package is recommended to install after upgrading pip?
-After upgrading pip, it is recommended to install the `httpx` package with the command `pip install httpx==0.24.1`.
What is the final step to build Stable Diffusion?
-The final step is to execute the command `pip install -r requirements.txt` to install the necessary requirements and build Stable Diffusion on your PC.
What should you expect if the webui-user.bat file compiles and builds successfully?
-If the file compiles and builds successfully, it should launch your browser and open the Stable Diffusion web interface, indicating a working Stable Diffusion setup.
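On a successful launch the console typically ends with a local URL similar to the line below, and the browser opens that address (port 7860 is the default and may differ on your machine):

```text
Running on local URL:  http://127.0.0.1:7860
```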
How can viewers provide feedback on the video?
-Viewers can provide feedback by leaving a comment, giving a thumbs up, or subscribing to the channel for future AI and technology videos.
Outlines
💻 Fixing GPU Error in Stable Diffusion Build
The paragraph discusses a common issue faced when building Stable Diffusion with AMD support, where an error occurs when running the webui-user.bat file: Torch is not able to use the GPU. The video provides a solution by adding specific command-line arguments so the build uses DirectML rather than failing the CUDA test. It also mentions that the issue might be resolved in future repo updates and advises viewers to follow the steps carefully to avoid the error.
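For reference, the failing launch usually prints a message along these lines (exact wording varies by version); note that the suggested `--skip-torch-cuda-test` flag only bypasses the check, while the `--use-directml` argument covered in this video is what actually lets Torch use the AMD GPU:

```text
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test
to COMMANDLINE_ARGS variable to disable this check
```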
Keywords
💡stable diffusion
💡AMD support
💡runtime error
💡CUDA
💡command line arguments
💡requirements.txt
💡webui-user.bat
💡DirectML
💡ONNX
💡Olive
💡pip
💡httpx
Highlights
Introduction to building Stable Diffusion with AMD support.
Addressing a common runtime error when running the webui-user.bat file.
The error may be resolved in future updates to the repo.
Instructions based on a previous build video.
Editing the 'requirements.txt' file to add 'torch-directml'.
Modifying the 'webui-user.bat' file to force the use of DirectML.
Running the setup natively or with ONNX and Olive support.
Opening Windows Explorer to locate the Stable Diffusion config files.
Opening an admin-mode command prompt and changing directory to the Stable Diffusion folder.
Activating the environment and upgrading pip.
Installing httpx with a specific version (0.24.1).
Executing the command to install requirements from 'requirements.txt'.
Compiling and building Stable Diffusion on your PC.
Launching the browser and accessing the Stable Diffusion web interface upon a successful build.
Providing a solution for users experiencing issues with their Stable Diffusion build.
Invitation for feedback on the procedure in the comments section.
Encouragement to subscribe for future AI and technology videos.