AutoGGUF - automated GGUF model quantizer

AutoGGUF provides a graphical user interface for quantizing GGUF models using the llama.cpp library. It allows users to download different versions of llama.cpp, manage multiple backends, and perform quantization tasks with various options.

Features:

  1. Download and manage llama.cpp backends
  2. Select and quantize GGUF models
  3. Configure quantization parameters
  4. Monitor system resources during quantization
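
Under the hood, a quantization task (feature 2) amounts to invoking llama.cpp's quantize tool with the selected model, output path, and quantization type. A minimal sketch of how such a command might be assembled and run, assuming a quantize binary from a downloaded backend (names, paths, and the default type here are illustrative, not AutoGGUF's actual internals):

```python
import subprocess

def build_quantize_command(quantize_bin: str, input_gguf: str,
                           output_gguf: str, quant_type: str = "Q4_K_M") -> list:
    # llama.cpp's quantize tool takes: <binary> <input> <output> <type>
    return [quantize_bin, input_gguf, output_gguf, quant_type]

def run_quantize(cmd: list) -> int:
    # Run the command, capturing stdout/stderr for logging, and
    # surface the exit code so the UI can report success or failure.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode

print(build_quantize_command("./llama-quantize", "model.gguf", "model-q4.gguf"))
# → ['./llama-quantize', 'model.gguf', 'model-q4.gguf', 'Q4_K_M']
```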

Usage:

Cross platform:

  1. Install dependencies with pip install -r requirements.txt, or directly with pip install PyQt6 requests psutil.
  2. Start the application with python src/main.py (on Windows, you can use the run.bat script instead).

Windows:

  1. Download the latest release, extract all files to a folder, and run AutoGGUF.exe
  2. Enjoy!

Building:

Cross platform:

cd src
pip install -U pyinstaller
pyinstaller main.py --onefile
cd dist
./main

Windows:

Run build.bat with either RELEASE or DEV as the build type:

build RELEASE

Find the exe in build/<type>/dist/AutoGGUF.exe.

Dependencies:

  • PyQt6
  • requests
  • psutil
  • shutil (Python standard library)
  • OpenSSL

Localizations:

The following languages are currently supported (machine translated, except for English):

{
    'en-US': _English,              # American English
    'fr-FR': _French,               # Metropolitan French
    'zh-CN': _SimplifiedChinese,    # Simplified Chinese
    'es-ES': _Spanish,              # Spanish (Spain)
    'hi-IN': _Hindi,                # Hindi (India)
    'ru-RU': _Russian,              # Russian (Russia)
    'uk-UA': _Ukrainian,            # Ukrainian (Ukraine)
    'ja-JP': _Japanese,             # Japanese (Japan)
    'de-DE': _German,               # German (Germany)
    'pt-BR': _Portuguese,           # Portuguese (Brazil)
    'ar-SA': _Arabic,               # Arabic (Saudi Arabia)
    'ko-KR': _Korean,               # Korean (Korea)    
    'it-IT': _Italian,              # Italian (Italy)
    'tr-TR': _Turkish,              # Turkish (Turkey)
    'nl-NL': _Dutch,                # Dutch (Netherlands)
    'fi-FI': _Finnish,              # Finnish (Finland)
    'bn-BD': _Bengali,              # Bengali (Bangladesh) 
    'cs-CZ': _Czech,                # Czech (Czech Republic)
    'pl-PL': _Polish,               # Polish (Poland)
    'ro-RO': _Romanian,             # Romanian (Romania)
    'el-GR': _Greek,                # Greek (Greece)
    'pt-PT': _Portuguese_PT,        # Portuguese (Portugal)
    'hu-HU': _Hungarian,            # Hungarian (Hungary)
    'en-GB': _BritishEnglish,       # British English
    'fr-CA': _CanadianFrench,       # Canadian French
    'en-IN': _IndianEnglish,        # Indian English
    'en-CA': _CanadianEnglish,      # Canadian English
    'zh-TW': _TraditionalChinese,   # Traditional Chinese (Taiwan)
}

To use one of them, set the AUTOGGUF_LANGUAGE environment variable to one of the listed language codes.
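
Language selection from that environment variable can be sketched as follows; the helper name, the fallback-to-English behavior, and the abbreviated language set are assumptions for illustration, not AutoGGUF's actual implementation:

```python
import os

# Abbreviated set of codes; the full table is listed above.
SUPPORTED_LANGUAGES = {"en-US", "fr-FR", "zh-CN", "es-ES", "de-DE"}

def resolve_language(default: str = "en-US") -> str:
    # Read AUTOGGUF_LANGUAGE, falling back to the default when the
    # variable is unset or names an unsupported code.
    code = os.environ.get("AUTOGGUF_LANGUAGE", default)
    return code if code in SUPPORTED_LANGUAGES else default

os.environ["AUTOGGUF_LANGUAGE"] = "de-DE"
print(resolve_language())  # → de-DE
```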

Issues:

  • Saving a preset while quantizing crashes the UI thread (planned fix: remove this feature)
  • Cannot delete a task while it is processing; cancel it first or the program crashes (planned fix: disallow deletion before cancelling, or cancel automatically)
  • Base Model text still shows when GGML is selected as LoRA type (fix: include text in show/hide Qt layout)
  • Cannot disable llama.cpp update check on startup (fixed in v1.3.1)
  • _internal directory required, will see if I can package this into a single exe on the next release (fixed in v1.3.1)
  • Custom command line parameters (added in v1.3.0)
  • More iMatrix generation parameters (added in v1.3.0)
  • Specify multiple KV overrides (added in v1.1.0)
  • Better error handling (added in v1.1.0)
  • Cannot select output/token embd type (fixed in v1.1.0)
  • Importing presets with KV overrides causes UI thread crash (fixed in v1.3.0)

Planned features:

  • Actual progress bar tracking
  • Download safetensors from HF and convert to unquantized GGUF
  • Perplexity testing
  • Managing shards (coming in the next release)
  • Time estimated for quantization
  • Dynamic values for KV cache, e.g. autogguf.quantized.time=str:{system.time.milliseconds} (coming in the next release)
  • Ability to select and start multiple quants at once (saved in presets) (coming in the next release)
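
The planned dynamic KV values could work by expanding placeholders before the override string is passed to llama.cpp. A sketch under that assumption, using the placeholder syntax from the example above (the resolver names and expansion rules are illustrative only):

```python
import re
import time

def expand_placeholders(value: str) -> str:
    # Map each known placeholder name to a resolver; unknown
    # placeholders are left untouched in the output.
    resolvers = {
        "system.time.milliseconds": lambda: str(int(time.time() * 1000)),
    }
    return re.sub(
        r"\{([^{}]+)\}",
        lambda m: resolvers.get(m.group(1), lambda: m.group(0))(),
        value,
    )

print(expand_placeholders("autogguf.quantized.time=str:{system.time.milliseconds}"))
```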

Troubleshooting:

  • llama.cpp quantization errors out with an iostream error: create the quantized_models directory (or set an output directory) (fixed in v1.2.1; the directory is created automatically on launch)
  • SSL module cannot be found error: install OpenSSL, or run from source with python src/main.py using the run.bat script (after pip install requests)
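
The v1.2.1 fix for the first item boils down to creating the output directory up front. A minimal sketch, assuming the directory name quantized_models from the bullet above (the helper name is hypothetical):

```python
import os

def ensure_output_dir(path: str) -> str:
    # Create the output directory if it doesn't exist; this avoids the
    # iostream error llama.cpp raises when the output path is missing.
    # exist_ok=True makes the call safe to repeat on every launch.
    os.makedirs(path, exist_ok=True)
    return path

ensure_output_dir("quantized_models")
```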

Contributing:

Fork the repo and make your changes. Before merging, make sure your branch has the latest commits, and include a changelog of what's new in the pull request description.

User interface:

[Screenshot of the AutoGGUF user interface]

Stargazers:

Star History Chart