From 8fcbc14f4ca19101e4c84302d876e2f54a17e7bf Mon Sep 17 00:00:00 2001
From: leafspark <78000825+leafspark@users.noreply.github.com>
Date: Sun, 4 Aug 2024 09:04:44 -0700
Subject: [PATCH] update to v1.3.0 and build instructions

---
 README.md | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 375515e..5b0cd3f 100644
--- a/README.md
+++ b/README.md
@@ -22,18 +22,20 @@ # AutoGGUF - automated GGUF model quantizer
 2. Enjoy!
 
 **Building**:
-```
+```bash
 cd src
 pip install -U pyinstaller
 pyinstaller main.py
 cd dist/main
-main
+./main
 ```
 
 **Dependencies**:
 - PyQt6
 - requests
 - psutil
+- shutil
+- OpenSSL
 
 **Localizations:**
 
@@ -75,9 +77,15 @@ # AutoGGUF - automated GGUF model quantizer
 **Issues:**
 - Actual progress bar tracking
 - Download safetensors from HF and convert to unquanted GGUF
+- Perplexity testing
+- Cannot disable llama.cpp update check on startup
+- `_internal` directory required, will see if I can package this into a single exe on the next release
+- ~~Custom command line parameters~~ (added in v1.3.0)
+- ~~More iMatrix generation parameters~~ (added in v1.3.0)
 - ~~Specify multiple KV overrides~~ (added in v1.1.0)
 - ~~Better error handling~~ (added in v1.1.0)
 - ~~Cannot select output/token embd type~~ (fixed in v1.1.0)
+- ~~Importing presets with KV overrides causes UI thread crash~~ (fixed in v1.3.0)
 
 **Troubleshooting:**
 - ~~llama.cpp quantizations errors out with an iostream error: create the `quantized_models` directory (or set a directory)~~ (fixed in v1.2.1, automatically created on launch)