mirror of https://github.com/leafspark/AutoGGUF
update issues and troubleshooting
parent 41ebb7d609
commit eedf326c02

README.md

@@ -78,8 +78,8 @@ # AutoGGUF - automated GGUF model quantizer
- Actual progress bar tracking
- Download safetensors from HF and convert to unquantized GGUF
- Perplexity testing
- Cannot disable llama.cpp update check on startup
- `_internal` directory required, will see if I can package this into a single exe on the next release
- ~~Cannot disable llama.cpp update check on startup~~ (fixed in v1.3.1)
- ~~`_internal` directory required, will see if I can package this into a single exe on the next release~~ (fixed in v1.3.1)
- ~~Custom command line parameters~~ (added in v1.3.0)
- ~~More iMatrix generation parameters~~ (added in v1.3.0)
- ~~Specify multiple KV overrides~~ (added in v1.1.0)

@@ -87,8 +87,12 @@ # AutoGGUF - automated GGUF model quantizer
- ~~Cannot select output/token embd type~~ (fixed in v1.1.0)
- ~~Importing presets with KV overrides causes UI thread crash~~ (fixed in v1.3.0)

**Prerelease issues:**
- Base Model label persists even when GGML type is selected

**Troubleshooting:**
- ~~llama.cpp quantization errors out with an iostream error: create the `quantized_models` directory (or set a directory)~~ (fixed in v1.2.1, the directory is now created automatically on launch; the old manual workaround is sketched below)
- SSL module cannot be found error: install OpenSSL, or run from source with `python src/main.py` via the `run.bat` script (requires `pip install requests`); see the run-from-source sketch below

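For context on the first troubleshooting item above: before v1.2.1 the workaround was simply to create the output directory by hand before starting a quantization. A minimal sketch, assuming the default `quantized_models` folder is created in the AutoGGUF working directory (per the item, you can also point the app at a different directory instead):

```bat
REM Pre-v1.2.1 workaround: create the output directory by hand in the
REM AutoGGUF folder before quantizing.
REM v1.2.1 and later create it automatically on launch.
mkdir quantized_models
```
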
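For the SSL item: when the packaged build reports that the SSL module cannot be found, the fallback is either installing OpenSSL or running AutoGGUF from source. A minimal sketch of the run-from-source path, assuming Python and pip are on PATH and the repository is checked out; the bundled `run.bat` presumably wraps the same steps, and releases may need dependencies beyond `requests`:

```bat
REM Fallback for the "SSL module cannot be found" error: run from source
REM instead of the packaged exe (or install OpenSSL system-wide).
pip install requests
python src/main.py
```
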
**User interface:**
*(AutoGGUF interface screenshots)*