mirror of https://github.com/leafspark/AutoGGUF
Merge branch 'main' of https://github.com/leafspark/AutoGGUF
This commit is contained in:
commit 1adad11a62

README.md (19 changed lines)
@@ -84,9 +84,9 @@ ## Localizations:
 In order to use them, please set the `AUTOGGUF_LANGUAGE` environment variable to one of the listed language codes.

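A minimal sketch of that setting; `en_US` is only an assumed placeholder, so substitute any code actually listed in the README (on Windows cmd, use `set AUTOGGUF_LANGUAGE=...` instead):

```bash
# pick one of the listed language codes; en_US here is only an assumed placeholder
export AUTOGGUF_LANGUAGE=en_US

# then launch AutoGGUF, e.g. from source
python src/main.py
```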
 ## Issues:
-- Actual progress bar tracking
-- Download safetensors from HF and convert to unquanted GGUF
-- Perplexity testing
+- Saving a preset while quantizing causes a UI thread crash (planned fix: remove this feature)
+- Cannot delete a task while it is processing; you must cancel it first or the program crashes (planned fix: don't allow deletion before cancelling, or cancel automatically)
+- Base Model text still shows when GGML is selected as the LoRA type (fix: include the text in the show/hide Qt layout)
 - ~~Cannot disable llama.cpp update check on startup~~ (fixed in v1.3.1)
 - ~~`_internal` directory required, will see if I can package this into a single exe on the next release~~ (fixed in v1.3.1)
 - ~~Custom command line parameters~~ (added in v1.3.0)
@@ -96,13 +96,22 @@ ## Issues:
 - ~~Cannot select output/token embd type~~ (fixed in v1.1.0)
 - ~~Importing presets with KV overrides causes UI thread crash~~ (fixed in v1.3.0)

-## Prerelease issues:
-- Base Model label persists even when GGML type is selected
+## Planned features:
+- Actual progress bar tracking
+- Download safetensors from HF and convert to unquantized GGUF
+- Perplexity testing
+- Managing shards (coming in the next release)
+- Estimated time for quantization
+- Dynamic values for KV overrides, e.g. `autogguf.quantized.time=str:{system.time.milliseconds}` (coming in the next release)
+- Ability to select and start multiple quants at once (saved in presets) (coming in the next release)

 ## Troubleshooting:
 - ~~llama.cpp quantization errors out with an iostream error: create the `quantized_models` directory (or set a directory)~~ (fixed in v1.2.1, automatically created on launch)
 - SSL module cannot be found error: install OpenSSL, or run from source (`python src/main.py` using the `run.bat` script, after `pip install requests`); see the sketch below
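A hedged sketch of that run-from-source fallback; it assumes a working Python install, and it is only an assumption that `run.bat` performs roughly these same steps on Windows:

```bash
# install the dependency named in the README (other requirements may also apply)
pip install requests

# launch AutoGGUF from source instead of the packaged exe
# (assumption: run.bat wraps roughly these steps on Windows)
python src/main.py
```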

+## Contributing:
+Simply fork the repo and make your changes; when merging, make sure you have the latest commits. The description should contain a changelog of what's new.

 ## User interface:

|

|
||||||