Thank you for watching! Please consider subscribing.
👉 Subscribe
This video is a step-by-step guide on how to run LLaMA and other models on an AMD GPU.
0:06 Intro
1:34 Ensure that ROCm is installed; if not, check the ROCm installation tutorial (a quick GPU check is sketched below):
7:53 Install the bitsandbytes library (sanity-check sketch below)
12:34 Download the LLaMA model
13:49 Start the webui and test it
17:50 Download a LoRA model
18:58 Start the webui and load the LoRA (loading sketch below)
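Before the later steps, it helps to confirm that the ROCm build of PyTorch can actually see the AMD GPU. The minimal Python sketch below is not from the video; it assumes a ROCm-enabled PyTorch wheel is already installed:

```python
# Quick check that the ROCm build of PyTorch detects the AMD GPU.
import torch

print("HIP/ROCm version:", torch.version.hip)       # None on a CPU- or CUDA-only build
print("GPU available:", torch.cuda.is_available())  # True when the AMD GPU is visible
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0)) # e.g. the Radeon card name
```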
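Once bitsandbytes is installed (on AMD this means a ROCm-compatible build of the library rather than the upstream CUDA wheel), a short sanity check like the sketch below confirms its 8-bit layers run on the GPU; the layer sizes here are arbitrary:

```python
# Sanity check: run a tiny 8-bit linear layer on the GPU via bitsandbytes.
# Assumes a ROCm-compatible bitsandbytes build and a working ROCm PyTorch.
import torch
import bitsandbytes as bnb

layer = bnb.nn.Linear8bitLt(16, 16, has_fp16_weights=False).to("cuda")  # weights quantize on the move to GPU
x = torch.randn(1, 16, dtype=torch.float16, device="cuda")
print("Output shape:", layer(x).shape)  # expect torch.Size([1, 16])
```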
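The webui in the video is assumed here to be oobabooga's text-generation-webui, which handles model and LoRA loading for you once the files are in its models/ and loras/ folders. As a standalone illustration of what that loading step amounts to, here is a rough Python sketch using transformers and peft; the model and LoRA paths are placeholders, not the exact files used in the video:

```python
# Sketch: load a local LLaMA checkpoint in 8-bit and attach a LoRA adapter,
# roughly what the webui does when you pick a model and a LoRA.
# Paths are placeholders; point them at your own downloaded files.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "models/llama-7b"   # placeholder: HF-format LLaMA weights
lora_path = "loras/my-lora"     # placeholder: LoRA adapter folder

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(
    base_path,
    load_in_8bit=True,          # 8-bit weights via bitsandbytes
    device_map="auto",          # place layers on the AMD GPU
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(model, lora_path)   # apply the LoRA

prompt = "Explain what a LoRA adapter does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```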
👉 The Discord server invite is:
There is a LLaMA bot free to use there; see this for a demo:
If you would like to support me, here are my Ko-fi link: and Patreon page:
Thank you!