mirror of https://github.com/leafspark/AutoGGUF
update to v1.3.0 and build instructions
This commit is contained in:
parent: cb11440e1f
commit: 8fcbc14f4c
README.md (12 lines changed)
@@ -22,18 +22,20 @@ # AutoGGUF - automated GGUF model quantizer
2. Enjoy!

**Building**:

```bash
cd src
pip install -U pyinstaller
pyinstaller main.py
cd dist/main
./main
```
**Dependencies**:
- PyQt6
- requests
- psutil
- shutil
- OpenSSL
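Two entries in this list need no pip install: `shutil` is part of the Python standard library, and OpenSSL is reached through the stdlib `ssl` module (or the system's OpenSSL). A quick sanity check, assuming the listed names match their import names:

```bash
# Sanity-check the dependency list: shutil and ssl (which wraps
# OpenSSL) ship with the Python standard library and need no install;
# PyQt6, requests, and psutil come from pip.
python3 -c "import shutil, ssl; print('stdlib deps OK')"
```

The pip-installable entries would then be something like `pip install PyQt6 requests psutil` (PyPI package names assumed).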

**Localizations:**

@@ -75,9 +77,15 @@ # AutoGGUF - automated GGUF model quantizer
**Issues:**
- Actual progress bar tracking
- Download safetensors from HF and convert to unquanted GGUF
- Perplexity testing
- Cannot disable llama.cpp update check on startup
- `_internal` directory required, will see if I can package this into a single exe on the next release
- ~~Custom command line parameters~~ (added in v1.3.0)
- ~~More iMatrix generation parameters~~ (added in v1.3.0)
- ~~Specify multiple KV overrides~~ (added in v1.1.0)
- ~~Better error handling~~ (added in v1.1.0)
- ~~Cannot select output/token embd type~~ (fixed in v1.1.0)
- ~~Importing presets with KV overrides causes UI thread crash~~ (fixed in v1.3.0)
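The `_internal` directory mentioned above is PyInstaller's default one-folder layout. A single-file build could be sketched as below — this is a sketch, not the project's actual build configuration, and one-file binaries unpack to a temporary directory at startup, so they launch more slowly:

```bash
# Sketch: one-file PyInstaller build to avoid shipping _internal/.
cd src
pip install -U pyinstaller
pyinstaller --onefile --name AutoGGUF main.py
# Result is a single executable at dist/AutoGGUF instead of dist/main/.
```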

**Troubleshooting:**
- ~~llama.cpp quantization errors out with an iostream error: create the `quantized_models` directory (or set a directory)~~ (fixed in v1.2.1, automatically created on launch)
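Before the v1.2.1 auto-create fix, the workaround amounted to creating that output directory by hand before quantizing:

```bash
# Pre-v1.2.1 workaround: make sure the output directory exists before
# quantizing; -p makes this safe to re-run.
mkdir -p quantized_models
ls -d quantized_models   # → quantized_models
```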