- updated project files
- added certifi to backend download and update checking
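
  A minimal sketch of how certifi can back urllib-based HTTPS requests; `fetch_url` is an illustrative helper, not necessarily the project's actual download/update-check code:

  ```python
  import ssl
  from urllib.request import urlopen

  import certifi


  def fetch_url(url: str) -> bytes:
      # certifi supplies an up-to-date CA bundle, so HTTPS downloads and
      # update checks verify certificates even without a usable system store.
      context = ssl.create_default_context(cafile=certifi.where())
      with urlopen(url, context=context) as response:
          return response.read()
  ```
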
- add and fix type hints
- small file formatting changes
- update formatting of KV pairs to be cleaner
- update tensor data formatting and remove redundant KV pairs property
- add human readable mappings from KV pairs into model properties
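
  A hedged sketch of the idea, assuming a simple lookup table from GGUF KV keys to friendly labels; the actual keys and labels AutoGGUF maps may differ:

  ```python
  # Assumed lookup table; the real key set and labels may differ.
  KV_TO_PROPERTY = {
      "general.architecture": "Architecture",
      "general.name": "Model Name",
      "general.file_type": "File Type",
      "llama.context_length": "Context Length",
  }


  def humanize_kv(kv_pairs: dict) -> dict:
      """Return only the KV pairs we can label, keyed by readable names."""
      return {
          label: kv_pairs[key]
          for key, label in KV_TO_PROPERTY.items()
          if key in kv_pairs
      }
  ```
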
- update CUDA backend check for latest llama.cpp format
- use urllib globally
- use localizations for menubar
- bump AutoGGUF version to v2.0.0
- rename imports_and_globals.py to globals.py
- reformat code
- use file select for Merge/Split GGUF functions
- move general functions verify_gguf and process_args to globals.py
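
  For reference, a minimal `verify_gguf` could simply check the file's magic bytes; the implementation shown is an assumption and the real function may validate more:

  ```python
  def verify_gguf(path: str) -> bool:
      """Cheap sanity check: a GGUF file starts with the magic bytes b"GGUF"."""
      try:
          with open(path, "rb") as f:
              return f.read(4) == b"GGUF"
      except OSError:
          return False
  ```
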
- create Plugins class for extensibility
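
  A rough sketch of a directory-based plugin loader, assuming a `register()` hook; the directory name, hook name, and behaviour of the real Plugins class may differ:

  ```python
  import importlib.util
  from pathlib import Path


  class Plugins:
      """Load every .py file from a plugins directory and call its optional
      register() hook. Directory and hook names here are illustrative."""

      def __init__(self, plugin_dir: str = "plugins") -> None:
          self.loaded = {}
          for path in Path(plugin_dir).glob("*.py"):
              spec = importlib.util.spec_from_file_location(path.stem, path)
              module = importlib.util.module_from_spec(spec)
              spec.loader.exec_module(module)
              if hasattr(module, "register"):
                  module.register(self)  # let the plugin hook into the app
              self.loaded[path.stem] = module
  ```
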
- add RAM and CPU usage graphs
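
  A sketch of the sampling side using psutil and a QTimer; `update_graphs` stands in for whatever widget actually draws the charts:

  ```python
  import psutil
  from PySide6.QtCore import QTimer


  def start_usage_polling(update_graphs) -> QTimer:
      # Sample CPU and RAM once per second; the first cpu_percent() call
      # returns 0.0 because it measures usage since the previous call.
      timer = QTimer()
      timer.timeout.connect(
          lambda: update_graphs(
              psutil.cpu_percent(interval=None),
              psutil.virtual_memory().percent,
          )
      )
      timer.start(1000)  # interval in milliseconds
      return timer  # keep a reference so the timer is not garbage collected
  ```
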
- add input validation using wraps
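
  A minimal sketch of the pattern, assuming a path-validation decorator; the checks AutoGGUF actually performs may be different:

  ```python
  from functools import wraps
  from pathlib import Path


  def validate_model_path(func):
      @wraps(func)  # preserve the wrapped function's name and docstring
      def wrapper(self, model_path: str, *args, **kwargs):
          if not model_path or not Path(model_path).exists():
              raise ValueError(f"Invalid model path: {model_path!r}")
          return func(self, model_path, *args, **kwargs)

      return wrapper
  ```
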
- reduce strictness of iMatrix status checking
- add right click context menu to models list
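
  A sketch of the Qt wiring involved; the menu action and the `on_delete` callback are placeholders, not the actual implementation:

  ```python
  from PySide6.QtCore import Qt
  from PySide6.QtWidgets import QListWidget, QMenu


  def setup_context_menu(model_list: QListWidget, on_delete) -> None:
      model_list.setContextMenuPolicy(Qt.ContextMenuPolicy.CustomContextMenu)

      def show_menu(pos):
          item = model_list.itemAt(pos)
          if item is None:
              return
          menu = QMenu(model_list)
          delete_action = menu.addAction("Delete")
          delete_action.triggered.connect(lambda: on_delete(item))
          menu.exec(model_list.mapToGlobal(pos))

      model_list.customContextMenuRequested.connect(show_menu)
  ```
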
- add more configuration options for AUTOGGUF_MODEL_DIR_NAME, AUTOGGUF_OUTPUT_DIR_NAME, and AUTOGGUF_RESIZE_FACTOR (these get created on startup)
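
  A sketch of how such environment-driven options are typically read; the default values shown are assumptions:

  ```python
  import os

  # Fallback values here are assumptions; AutoGGUF's actual defaults may differ.
  MODEL_DIR_NAME = os.environ.get("AUTOGGUF_MODEL_DIR_NAME", "models")
  OUTPUT_DIR_NAME = os.environ.get("AUTOGGUF_OUTPUT_DIR_NAME", "quantized_models")
  RESIZE_FACTOR = float(os.environ.get("AUTOGGUF_RESIZE_FACTOR", "1.1"))

  # The model and output directories are created on startup if missing.
  os.makedirs(MODEL_DIR_NAME, exist_ok=True)
  os.makedirs(OUTPUT_DIR_NAME, exist_ok=True)
  ```
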
- move some UI helper functions out of AutoGGUF.py and into ui_update and Ta
- optimize imports for utility classes
- fix some missing imports
- replaced all PyQt6 imports with PySide6 (see the migration sketch below)
- updated signal syntax (pyqtSignal to Signal)
- modified requirements.txt to use PySide6
- ensured compatibility with the Apache-2.0 license
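
  A before/after sketch of the migration; the worker class shown is hypothetical:

  ```python
  # Before (PyQt6):
  # from PyQt6.QtCore import pyqtSignal
  #     progress = pyqtSignal(int)

  # After (PySide6):
  from PySide6.QtCore import QObject, Signal


  class QuantizationWorker(QObject):  # hypothetical worker class
      progress = Signal(int)   # emitted with a percentage
      finished = Signal(str)   # emitted with the output file path
  ```
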
- add the ability to select and run multiple quantization types simultaneously (see the sketch below), including:
  - replacing the quantization type dropdown with a multi-select list
  - updating preset saving and loading to handle multiple quantization types
  - modifying the quantize_model function to process all selected types
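
  A sketch of the widget change and the selection helper, with example type names; the real function and variable names are assumptions:

  ```python
  from PySide6.QtWidgets import QAbstractItemView, QListWidget


  def build_quant_type_list(parent=None) -> QListWidget:
      # Multi-select list replacing the old single-choice dropdown; the
      # type names listed are examples, not the full llama.cpp set.
      widget = QListWidget(parent)
      widget.setSelectionMode(QAbstractItemView.SelectionMode.ExtendedSelection)
      widget.addItems(["Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"])
      return widget


  def selected_quant_types(widget: QListWidget) -> list:
      # quantize_model can iterate over this list and run each type in turn.
      return [item.text() for item in widget.selectedItems()]
  ```
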
- fix formatting issue with previous commit
- use error and in progress messages from localizations in QuantizationThread