From 9707ec1b9c764efbd27243548dc7147b24b2e427 Mon Sep 17 00:00:00 2001
From: leafspark <78000825+leafspark@users.noreply.github.com>
Date: Fri, 2 Aug 2024 21:13:59 -0700
Subject: [PATCH] Update README.md

---
 README.md | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index f4cc714..31d1040 100644
--- a/README.md
+++ b/README.md
@@ -1,24 +1,26 @@
-AutoGGUF - Automated GGUF Model Quantizer
+# AutoGGUF - automated GGUF model quantizer
 
 This application provides a graphical user interface for quantizing GGUF models
 using the llama.cpp library. It allows users to download different versions
 of llama.cpp, manage multiple backends, and perform quantization tasks
 with various options.
 
-Main features:
+**Main features**:
 1. Download and manage llama.cpp backends
 2. Select and quantize GGUF models
 3. Configure quantization parameters
 4. Monitor system resources during quantization
 
-Usage:
-Run the main.py script to start the application.
+**Usage**:
+1. Install dependencies, either using the `requirements.txt` file or `pip install PyQt6 requests psutil`.
+2. Run the `run.bat` script to start the application, or run the command `python src/main.py`.
 
-Dependencies:
+**Dependencies**:
 - PyQt6
 - requests
 - psutil
 
-Author: leafspark
-Version: 1.0.0
-License: apache-2.0
\ No newline at end of file
+**To be implemented:**
+- Actual progress bar tracking
+- Download safetensors from HF and convert to unquanted GGUF
+- Specify multiple KV overrides
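The two-step Usage section this patch introduces can be sketched as a shell session; it is a setup fragment assembled from the commands the patch itself lists, assuming a POSIX shell and that the repository has already been cloned (on Windows, the patch says `run.bat` is the equivalent of the last step):

```shell
# Step 1: install the runtime dependencies named in the README
# (equivalently: pip install -r requirements.txt)
pip install PyQt6 requests psutil

# Step 2: launch the AutoGGUF GUI from the repository root
python src/main.py
```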