
AutoGGUF - automated GGUF model quantizer


The most comprehensive GUI tool for GGUF model quantization. Stop wrestling with command lines - quantize, merge, and optimize your models with just a few clicks.

Features

  • 📩 Update and manage llama.cpp backends
  • 🗃️ Download and quantize GGUF/safetensors models
  • 📐 Configure quantization parameters
  • 💻 Monitor system resources in real time during quantization
  • Parallel quantization + imatrix generation
  • 🎉 LoRA conversion and merging
  • 📁 Preset saving and loading
  • AutoFP8 quantization
  • 🪓 GGUF splitting and merging
  • 🌐 HTTP API for automation and monitoring

Why AutoGGUF?

  • Fast: Saves time on manual configuration
  • Simple: Clean UI, no terminal needed
  • Powerful: Handles models of any size, limited only by your available RAM
  • Resource-aware: Optimized memory management and efficient UI library

(Screenshot: AutoGGUF v1.8.1 showcase)

Quick Start

Cross-platform

  1. Clone the repository:

     ```shell
     git clone https://github.com/leafspark/AutoGGUF
     cd AutoGGUF
     ```

  2. Install dependencies:

     ```shell
     pip install -r requirements.txt
     ```

  3. Run the application:

     ```shell
     python src/main.py
     ```

     or use the run.bat script.

macOS and Ubuntu builds are produced by GitHub Actions; you can download the binaries from the releases section.

Windows (for the impatient)

Standard builds:

  1. Download the latest release
  2. Extract all files to a folder
  3. Run AutoGGUF-x64.exe
  4. Any necessary folders will be automatically created

Setup builds:

  1. Download setup variant of latest release
  2. Extract all files to a folder
  3. Run the setup program
  4. The .GGUF extension will be registered with the program automatically
  5. Run the program from the Start Menu or desktop shortcuts

After launching the program, you can access its local server on port 7001 (set the AUTOGGUF_SERVER environment variable to "enabled" first).
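As a sketch of that gate, startup logic along these lines could decide whether to expose the server. The helper name and the exact comparison are assumptions for illustration, not AutoGGUF's actual code:

```python
import os

def server_enabled() -> bool:
    """Illustrative check: only start the HTTP server when the
    AUTOGGUF_SERVER environment variable is set to "enabled"."""
    return os.environ.get("AUTOGGUF_SERVER", "").strip().lower() == "enabled"

# Example: enable the server for this process, then check the flag.
os.environ["AUTOGGUF_SERVER"] = "enabled"
print(server_enabled())  # True
```

An unset or differently-valued variable leaves the server off, which keeps the default behavior safe on shared machines.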

Verifying Releases

Linux/macOS:

```shell
gpg --import AutoGGUF-v1.5.0-prerel.asc
gpg --verify AutoGGUF-v1.9.1-Windows-avx2.zip.sig AutoGGUF-v1.9.1-Windows-avx2.zip
sha256sum -c AutoGGUF-v1.9.1.sha256
```

Windows (PowerShell):

```powershell
# Import the public key
gpg --import AutoGGUF-v1.5.0-prerel.asc

# Verify the signature
gpg --verify AutoGGUF-v1.9.1-Windows-avx2.zip.sig AutoGGUF-v1.9.1-Windows-avx2.zip

# Check SHA256
$fileHash = (Get-FileHash -Algorithm SHA256 AutoGGUF-v1.9.1-Windows-avx2.zip).Hash.ToLower()
$storedHash = (Get-Content AutoGGUF-v1.9.1.sha256 | Select-String AutoGGUF-v1.9.1-Windows-avx2.zip).Line.Split()[0]
if ($fileHash -eq $storedHash) { "SHA256 Match" } else { "SHA256 Mismatch" }
```

Release keys are identical to the ones used for commit signing.
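The SHA256 step above can also be done portably in Python. This is a sketch under two assumptions: the manifest uses the common `<digest>  <filename>` layout produced by sha256sum, and the helper name is mine, not part of AutoGGUF:

```python
import hashlib
import os

def verify_sha256(archive_path: str, sha256_file: str) -> bool:
    """Compare a file's SHA-256 digest against the matching line in a
    checksum manifest (lines of the form '<hex digest>  <filename>')."""
    digest = hashlib.sha256()
    with open(archive_path, "rb") as f:
        # Hash in 1 MiB chunks so large archives don't load into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    actual = digest.hexdigest().lower()
    with open(sha256_file) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                # sha256sum may prefix binary-mode filenames with '*'.
                name = parts[1].lstrip("*")
                if name in (archive_path, os.path.basename(archive_path)):
                    return parts[0].lower() == actual
    return False
```

Run it from the directory containing the download, e.g. `verify_sha256("AutoGGUF-v1.9.1-Windows-avx2.zip", "AutoGGUF-v1.9.1.sha256")`.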

Building

Cross-platform

```shell
pip install -U pyinstaller
./build.sh RELEASE  # or DEV
cd build/<type>/dist/
./AutoGGUF
```

Windows

```bat
pip install -U pyinstaller
build RELEASE  :: or DEV
```

Find the executable in build/<type>/dist/AutoGGUF-x64.exe.

You can also use Nuitka, which may result in a slower build but a faster output executable:

```bat
build_optimized RELEASE  :: or DEV
```

Localizations

View the list of supported languages at AutoGGUF/wiki/Installation#configuration (LLM translated, except for English).

More languages will be added as soon as possible!

To use a specific language, set the AUTOGGUF_LANGUAGE environment variable to one of the listed language codes. Languages that are not yet fully supported will fall back to English.
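The fallback behavior can be sketched as a small resolver. The set of codes below is an illustrative subset (see the wiki for the real list), and the function name is an assumption, not AutoGGUF's actual code:

```python
import os

# Illustrative subset of supported language codes; the full list is on the wiki.
SUPPORTED_LANGUAGES = {"en", "ja", "de"}

def resolve_language(default: str = "en") -> str:
    """Pick the UI language from AUTOGGUF_LANGUAGE, falling back to
    English when the variable is unset or names an unsupported code."""
    code = os.environ.get("AUTOGGUF_LANGUAGE", default).strip().lower()
    return code if code in SUPPORTED_LANGUAGES else default
```

For example, `AUTOGGUF_LANGUAGE=ja` selects Japanese, while an unknown code like `xx` silently resolves to English.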

Issues

  • Some inconsistent logging and signal handling
  • Missing or duplicated translations
  • Buggy/incomplete API interfaces

Planned Features

  • Time estimation for quantization
  • Quantization file size estimate
  • Perplexity testing
  • bitsandbytes support

Project Status

AutoGGUF has now entered maintenance mode. It's considered stable and feature-complete for most use cases, so I'm not actively developing new features, but I'll continue to publish occasional builds, update dependencies regularly, and fix critical bugs as needed. If you encounter issues or have suggestions, feel free to open an issue.

Support

  • "SSL module cannot be found" error: install OpenSSL, or run from source (python src/main.py, or the run.bat script) after installing dependencies with pip install requests
  • Check out the Wiki for advanced usage and configuration

Contributing

Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description. Read CONTRIBUTING.md for more information.

Stargazers

Star History Chart

Last Updated: 5/15/2025