Run large language models locally on an AMD GPU – LLaMA and LoRA: step-by-step Ubuntu tutorial

Thank you for watching! Please consider subscribing. Thank you!
This video is a step-by-step guide to running LLaMA and other models on an AMD GPU.
0:06 Intro
1:34 Ensure that ROCm is installed; if not, check the tutorial on
7:53 Install the bitsandbytes library
12:34 Download the LLaMA model
13:49 Start the webui and test it
17:50 Download the LoRA model
18:58 Start the webui and load the LoRA
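The steps above can be sketched as a shell walkthrough. This is a minimal sketch, not the exact commands from the video: the model repo, LoRA name, and webui flags (`--model`, `--load-in-8bit`, `--lora`, as used by text-generation-webui) are assumptions, and upstream bitsandbytes targets CUDA, so an AMD setup typically needs a ROCm-capable build.

```shell
#!/usr/bin/env bash
# Sketch of the tutorial's pipeline on Ubuntu; names in <angle brackets>
# are placeholders, not values from the video.

# Step 1: verify ROCm is installed (rocminfo lists the GPU agents)
if command -v rocminfo >/dev/null 2>&1; then
    ROCM_STATUS="ROCm found"
else
    ROCM_STATUS="ROCm not found - install it first (see the linked tutorial)"
fi
echo "$ROCM_STATUS"

# Step 2: bitsandbytes - the upstream PyPI wheel is built for CUDA;
# on AMD you would build a ROCm-compatible version instead of just:
#   pip install bitsandbytes

# Step 3: download a LLaMA model into the webui's models folder
# (huggingface-cli ships with the huggingface_hub package)
#   huggingface-cli download <model-repo> --local-dir models/<model-name>

# Steps 4-6: start the webui, then relaunch with a LoRA applied
#   python server.py --model <model-name> --load-in-8bit
#   python server.py --model <model-name> --load-in-8bit --lora <lora-name>
```

The ROCm check is the only part that runs unconditionally; the install and launch commands are left commented because they depend on your GPU, drivers, and the webui you use.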

👉The Discord server invite is  There is a LLaMA bot that is free to use; see this for a demo:

If you would like to support me, here is my Ko-fi link: and Patreon page:
Thank you!


Amazon Affiliate Disclaimer

“As an Amazon Associate I earn from qualifying purchases.”

Learn more about the Amazon Affiliate Program