From 1c36d737cf45d1e927ae495f7d08bae16048d575 Mon Sep 17 00:00:00 2001
From: leafspark <78000825+leafspark@users.noreply.github.com>
Date: Fri, 2 Aug 2024 21:28:19 -0700
Subject: [PATCH] revert accidental change

---
 README.md | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index f4cc714..bafe4c7 100644
--- a/README.md
+++ b/README.md
@@ -1,24 +1,29 @@
-AutoGGUF - Automated GGUF Model Quantizer
+# AutoGGUF - automated GGUF model quantizer
 
 This application provides a graphical user interface for quantizing GGUF models
 using the llama.cpp library. It allows users to download different versions of
 llama.cpp, manage multiple backends, and perform quantization tasks with various
 options.
 
-Main features:
+**Main features**:
 1. Download and manage llama.cpp backends
 2. Select and quantize GGUF models
 3. Configure quantization parameters
 4. Monitor system resources during quantization
 
-Usage:
-Run the main.py script to start the application.
+**Usage**:
+1. Install dependencies, either using the `requirements.txt` file or `pip install PyQt6 requests psutil`.
+2. Run the `run.bat` script to start the application, or run the command `python src/main.py`.
 
-Dependencies:
+**Dependencies**:
 - PyQt6
 - requests
 - psutil
 
-Author: leafspark
-Version: 1.0.0
-License: apache-2.0
\ No newline at end of file
+**To be implemented:**
+- Actual progress bar tracking
+- Download safetensors from HF and convert to unquanted GGUF
+- Specify multiple KV overrides
+
+**User interface:**
+![image](https://github.com/user-attachments/assets/b1b58cba-4314-479d-a1d8-21ca0b5a8935)
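
For reference, the usage steps described in the updated README reduce to a short command sequence. This is a minimal sketch, assuming a working Python 3 environment and that the commands are run from the repository root; the package names, the `requirements.txt` file, and the `src/main.py` entry point are taken directly from the README text in the patch above (on Windows, the README's `run.bat` script can be used in place of the final command).

```bash
# Option A: install dependencies from the repository's requirements file
pip install -r requirements.txt

# Option B: install the three runtime dependencies directly
pip install PyQt6 requests psutil

# Start the application
python src/main.py
```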