mirror of https://github.com/leafspark/AutoGGUF
docs: update readme to v1.4.2
added and organized shields, clarified building instructions, and marked a bugfix as resolved in the Known Issues section
parent 08bff51ab3
commit eb8096699d
README.md | 32
@@ -2,14 +2,26 @@
# AutoGGUF - automated GGUF model quantizer

<!-- Project Status -->
[Release](https://github.com/leafspark/AutoGGUF/releases)
[Commits](https://github.com/leafspark/AutoGGUF/commits)

<!-- Project Info -->
[Powered by llama.cpp](https://github.com/ggerganov/llama.cpp)
[License](https://github.com/leafspark/AutoGGUF/blob/main/LICENSE)

<!-- Repository Stats -->

<!-- Contribution -->
[Code style: black](https://github.com/psf/black)
[Issues](https://github.com/leafspark/AutoGGUF/issues)
[Pull requests](https://github.com/leafspark/AutoGGUF/pulls)

AutoGGUF provides a graphical user interface for quantizing GGUF models using the llama.cpp library. It allows users to download different versions of llama.cpp, manage multiple backends, and perform quantization tasks with various options.
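
Each quantization task is roughly equivalent to running the quantization tool that ships with llama.cpp against a GGUF file. A minimal sketch of what the GUI automates (the binary name, paths, and model names below are illustrative and depend on the llama.cpp backend release you download):

```bash
# Illustrative sketch only: quantize an FP16 GGUF model to Q4_K_M with a downloaded llama.cpp backend.
# Newer llama.cpp releases ship the tool as `llama-quantize`; older ones call it `quantize`.
./llama-quantize ./models/MyModel-F16.gguf ./models/MyModel-Q4_K_M.gguf Q4_K_M
```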

@@ -55,17 +67,19 @@ ### Cross-platform

### Windows
```bash
build RELEASE/DEV
build RELEASE | DEV
```
Find the executable in `build/<type>/dist/AutoGGUF.exe`.

## Dependencies

- PyQt6
- requests
- psutil
- shutil
- OpenSSL
- numpy
- torch
- safetensors
- gguf (bundled)
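
For running from source, the third-party packages above can be installed with pip; a minimal sketch (the virtual-environment layout is just an example):

```bash
# Minimal sketch: set up an environment with the third-party packages listed above.
python -m venv venv
source venv/bin/activate    # on Windows: venv\Scripts\activate
pip install PyQt6 requests psutil numpy torch safetensors
# shutil is part of the Python standard library, gguf is bundled with AutoGGUF,
# and OpenSSL is normally provided by the system or the Python build.
```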
## Localizations

@@ -77,7 +91,7 @@ ## Known Issues

- Saving a preset while quantizing causes a UI thread crash (planned fix: remove this feature)
- A task cannot be deleted while it is processing (planned fix: disallow deletion until the task is cancelled, or cancel it automatically)
- Base Model text still shows when GGML is selected as LoRA type (fix: include text in show/hide Qt layout)
- ~~Base Model text still shows when GGML is selected as LoRA type (fix: include text in show/hide Qt layout)~~ (fixed in v1.4.2)

## Planned Features

@@ -95,7 +109,7 @@ ## Troubleshooting

## Contributing

Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description.
Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description. Read `CONTRIBUTING.md` for more information.

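A typical fork-and-sync flow for picking up the latest commits before opening the pull request might look like this (a sketch only; the username and branch name are placeholders, and `main` is taken as the default branch):

```bash
# Sketch of a common fork workflow; <your-username> and my-feature are placeholders.
git clone https://github.com/<your-username>/AutoGGUF.git
cd AutoGGUF
git remote add upstream https://github.com/leafspark/AutoGGUF.git
git checkout -b my-feature
# ...make and commit your changes...
git fetch upstream
git rebase upstream/main    # make sure you have the latest commits before merging
git push --force-with-lease origin my-feature
```
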
## User Interface