Updated changelog.

Tim Dettmers 2021-10-20 19:26:43 -07:00
parent a6eae2e7f2
commit d06c5776e4


@@ -1,11 +1,11 @@
-v0.0.21
+### 0.0.21
 - Ampere, RTX 30 series GPUs now compatible with the library.
-v0.0.22:
+### 0.0.22:
 - Fixed a bug where a `reset_parameters()` call on the `StableEmbedding` would lead to an error in older PyTorch versions (from 1.7.0).
-v0.0.23:
+### 0.0.23:
 Bugs:
 - Unified quantization API: each quantization function now returns `Q, S`, where `Q` is the quantized tensor and `S` is the quantization state, which may hold absolute max values, a quantization map, or more. For dequantization, all functions now accept the inputs `Q, S`, so that `Q` is dequantized with the quantization state `S`.
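The unified `Q, S` round trip described in the 0.0.23 entry can be illustrated with a short sketch. This is a minimal, hedged example assuming the block-wise routines in `bitsandbytes.functional` follow the pattern the entry describes; the exact signatures in v0.0.23 may differ.

```python
# Minimal sketch of the unified quantization API described above.
# Assumes the block-wise routines in bitsandbytes.functional; exact
# signatures in v0.0.23 may differ.
import torch
import bitsandbytes.functional as F

A = torch.randn(4096, device="cuda")

# Quantization returns Q (the quantized tensor) and S (the quantization
# state, e.g. per-block absolute max values and the quantization map).
Q, S = F.quantize_blockwise(A)

# Dequantization accepts Q together with its state S.
A_restored = F.dequantize_blockwise(Q, S)

print((A - A_restored).abs().max())  # small block-wise quantization error
```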
@@ -18,7 +18,18 @@ Features:
 - Block-wise quantization routines now support CPU Tensors.
-v0.0.24:
+### 0.0.24:
 - Fixed a bug where a float/half conversion led to a compilation error for CUDA 11.1 on Turing GPUs.
 - Removed the Apex dependency for bnb LAMB.
+### 0.0.25:
+Features:
+- Added `skip_zeros` for block-wise and 32-bit optimizers (see the sketch after the diff). This ensures correct updates for sparse gradients and sparse models.
+- Added support for Kepler GPUs. (#4)
+Bug fixes:
+- Fixed an `undefined symbol: __fatbinwrap_38` error for P100 GPUs on CUDA 10.1. (#5)
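For the `skip_zeros` entry in 0.0.25, here is a minimal sketch of how such a flag would be used. It assumes `skip_zeros` is exposed as a constructor argument on the bnb optimizers; the changelog names the flag but not the exact call surface, which may differ in v0.0.25.

```python
# Hedged sketch for the v0.0.25 `skip_zeros` feature. Assumes the flag
# is accepted by the optimizer constructor; the exact parameter surface
# in this release may differ.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()

# With skip_zeros=True, parameter entries whose gradients are exactly
# zero are skipped, so optimizer state (momentum, variance) is not
# advanced for them -- the correct behavior for sparse gradients and
# sparse models.
optimizer = bnb.optim.Adam(model.parameters(), lr=1e-3, optim_bits=32,
                           skip_zeros=True)

out = model(torch.randn(8, 1024, device="cuda"))
out.sum().backward()
optimizer.step()
optimizer.zero_grad()
```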