This commit is contained in:
BuildTools 2024-08-04 16:04:06 -07:00
commit eba074ae89
1 changed file with 19 additions and 14 deletions

@@ -1,29 +1,31 @@
![AutoGGUF-banner](https://github.com/user-attachments/assets/0f74b104-0541-46a7-9ac8-4a3fcb74b896)
# AutoGGUF - automated GGUF model quantizer
-This application provides a graphical user interface for quantizing GGUF models
+AutoGGUF provides a graphical user interface for quantizing GGUF models
using the llama.cpp library. It allows users to download different versions of
llama.cpp, manage multiple backends, and perform quantization tasks with various
options.
-**Main features**:
+## Features:
1. Download and manage llama.cpp backends
2. Select and quantize GGUF models
3. Configure quantization parameters
4. Monitor system resources during quantization
-**Usage**:
+## Usage:
-Cross platform:
+**Cross platform**:
1. Install dependencies, either using the `requirements.txt` file or `pip install PyQt6 requests psutil`.
2. Run the `run.bat` script to start the application, or run the command `python src/main.py`.
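Before launching, the dependencies from step 1 can be sanity-checked programmatically. This is a hedged sketch, not part of AutoGGUF itself: the module names come from the install command above, and the helper name `missing_deps` is hypothetical.

```python
# Sketch: verify that the modules installed in step 1 are importable
# before launching AutoGGUF. Module names assumed from the README's
# install command (PyQt6, requests, psutil); "missing_deps" is a
# hypothetical helper, not part of the AutoGGUF codebase.
import importlib.util

def missing_deps(mods=("PyQt6", "requests", "psutil")):
    """Return the subset of required modules that cannot be found."""
    return [m for m in mods if importlib.util.find_spec(m) is None]

# An empty list means every dependency resolved; otherwise install
# the listed modules with pip before running `python src/main.py`.
print(missing_deps())
```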
-Windows:
+**Windows**:
1. Download the latest release, extract it to a folder, and run `AutoGGUF.exe`
2. Enjoy!
-**Building**:
+## Building:
-Cross platform:
+**Cross platform**:
```bash
cd src
pip install -U pyinstaller
@@ -31,20 +33,20 @@ # AutoGGUF - automated GGUF model quantizer
cd dist/main
./main
```
-Windows:
+**Windows**:
```bash
build RELEASE/DEV
```
Find exe in `build/<type>/dist/AutoGGUF.exe`.
-**Dependencies**:
+## Dependencies:
- PyQt6
- requests
- psutil
- shutil
- OpenSSL
-**Localizations:**
+## Localizations:
The following languages are currently supported (machine translated, except for English):
```python
@@ -81,7 +83,7 @@ # AutoGGUF - automated GGUF model quantizer
```
In order to use them, please set the `AUTOGGUF_LANGUAGE` environment variable to one of the listed language codes.
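As a sketch, the environment variable can be set either in the shell or from Python before the application starts. `"en"` is used here as an assumed example code, since English is listed as supported; substitute any code from the list above.

```python
# Sketch: select a UI language for AutoGGUF via the environment
# variable named in the README. "en" is an assumed example code;
# use one of the codes from the list above.
import os

os.environ["AUTOGGUF_LANGUAGE"] = "en"

# The application would then be launched as in the Usage section,
# e.g. `python src/main.py`, inheriting this environment variable.
print(os.environ["AUTOGGUF_LANGUAGE"])
```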
-**Issues:**
+## Issues:
- Actual progress bar tracking
- Download safetensors from HF and convert to unquanted GGUF
- Perplexity testing
@@ -94,12 +96,15 @@ # AutoGGUF - automated GGUF model quantizer
- ~~Cannot select output/token embd type~~ (fixed in v1.1.0)
- ~~Importing presets with KV overrides causes UI thread crash~~ (fixed in v1.3.0)
-**Prerelease issues:**
+## Prerelease issues:
- Base Model label persists even when GGML type is selected
-**Troubleshooting:**
+## Troubleshooting:
- ~~llama.cpp quantizations errors out with an iostream error: create the `quantized_models` directory (or set a directory)~~ (fixed in v1.2.1, automatically created on launch)
- SSL module cannot be found error: Install OpenSSL, or run from source with `python src/main.py` or via the `run.bat` script (after `pip install requests`)
-**User interface:**
+## User interface:
![image](https://github.com/user-attachments/assets/906bf9cb-38ed-4945-a32e-179acfdcc529)
## Stargazers:
[![Star History Chart](https://api.star-history.com/svg?repos=leafspark/AutoGGUF&type=Date)](https://star-history.com/#leafspark/AutoGGUF&Date)