THE EASIEST WAY to Run Meta’s LLaMA Model on Windows (any GPU)

Download Git:
Download Python:
Tinygrad:
LLaMA Model Leak:

Put this in a batch script:

rem OPT=1 enables tinygrad's kernel optimizations (assumed intent of "OPTLOCAL")
set OPT=1
rem GPU=1 selects tinygrad's GPU backend instead of the CPU
set GPU=1
python examples/llama.py
rem keep the window open so the model's output stays visible
pause
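Why the `set` lines work: a variable set in a batch script is inherited by any process the script launches, so `examples/llama.py` can read it. A minimal Python sketch of what this looks like from the script's side (the `GPU` name comes from the batch script above; the check itself is illustrative, not tinygrad's actual code):

```python
import os

# What `set GPU=1` in the batch script accomplishes: the variable is placed
# in the environment that the child `python` process inherits.
os.environ["GPU"] = "1"

# Inside llama.py, the flag can then be read back like this:
use_gpu = os.environ.get("GPU") == "1"
print("GPU backend requested:", use_gpu)  # → GPU backend requested: True
```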

Join the Discord server:
The $30 microphone I'm using:

Timestamps:
00:00:00 Tinygrad deep learning framework
00:00:40 Requirements
00:01:02 Downloading and installing Tinygrad
00:01:58 Where to download LLaMA model
00:02:24 Change model location in Tinygrad
00:03:05 Creating batch script
00:04:27 Running the batch file
00:05:20 Talking to model
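The steps in the timestamps can be sketched end to end as one Command Prompt session. This is a sketch, not a transcript of the video: the repository URL and the editable pip install are assumptions, and the LLaMA weights location is whatever path you edited into the example in the "Change model location" step.

```bat
rem clone tinygrad and install it into the current Python environment
git clone https://github.com/tinygrad/tinygrad
cd tinygrad
python -m pip install -e .

rem run the LLaMA example with the GPU backend enabled
set GPU=1
python examples/llama.py
```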
