Note: this release has an issue when compiling the liblz4 dynamic library on Mac OS-X. This issue is fixed in a later release.
Warning: this version has a known bug in the decompression function which makes it read a few bytes beyond the input limit. Upgrading to a later v1.x release is recommended.
Dave Watson (djwatson) managed to carefully optimize the LZ4 decompression hot loop, offering substantial speed improvements on x86 and x64 platforms.
Here are some benchmarks run on a Core iK, with source compiled using gcc v8. Given that decompression speed has always been a strong point of lz4, the improvement is quite substantial.
The new decoding loop is automatically enabled on x64 and x86 CPUs. For other CPU types, since our testing capabilities are more limited, the new decoding loop is disabled by default.
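No source change is needed to benefit from the new loop, since it sits behind the regular decompression entry points. Below is a minimal round-trip sketch using the public block API; the LZ4_FAST_DEC_LOOP build macro mentioned in the comment is our reading of how the loop can be forced on or off for other targets, so treat it as an assumption and check lz4.c in this release.

```c
#include <stdio.h>
#include <string.h>
#include "lz4.h"   /* liblz4 public block API */

/* Compress and then decompress a small buffer.
 * On x86/x64 builds, LZ4_decompress_safe() goes through the new decoding
 * hot loop. On other targets the loop is off by default; as an assumption,
 * it can be forced at build time with -DLZ4_FAST_DEC_LOOP=1 (or =0). */
int main(void)
{
    const char* const text = "LZ4 is a very fast compression algorithm.";
    const int srcSize = (int)strlen(text) + 1;

    char compressed[128];
    const int cSize = LZ4_compress_default(text, compressed, srcSize, (int)sizeof(compressed));
    if (cSize <= 0) return 1;

    char regen[128];
    const int dSize = LZ4_decompress_safe(compressed, regen, cSize, (int)sizeof(regen));
    if (dSize != srcSize) return 1;

    printf("round-trip ok: %s\n", regen);
    return 0;
}
```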
The outcome will vary depending on the exact target and build chain. For example, in our limited tests on ARM platforms, we found that benefits vary strongly depending on CPU manufacturer, chip model, and compiler version, making it difficult to offer a "generic" statement. The ARM situation may prove extreme though, due to the proliferation of available variants.
Other CPU types may prove easier to assess.
The _destSize() compression variants reverse the usual logic, by trying to fit as much input data as possible into a fixed output budget. This is used for example in WiredTiger and EroFS, which cram as much data as possible into the size of a physical sector, for improved storage density.
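A minimal sketch of that usage pattern, assuming the LZ4_compress_destSize() variant; the 4 KB sector size and the helper name are illustrative, not taken from the text above.

```c
#include "lz4.h"

#define SECTOR_SIZE 4096   /* illustrative fixed output budget */

/* Compress as much of `src` as fits into one fixed-size sector.
 * On return, *consumed holds the number of input bytes actually packed,
 * and the return value is the compressed size (<= SECTOR_SIZE). */
static int pack_into_sector(const char* src, int srcSize,
                            char* sector, int* consumed)
{
    *consumed = srcSize;   /* in: bytes available; out: bytes used */
    return LZ4_compress_destSize(src, sector, consumed, SECTOR_SIZE);
}
```

The caller then advances its input pointer by *consumed and starts a new sector, repeating until the input is exhausted.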
When compressing small inputs, the fixed cost of clearing the compression's internal data structures can become a significant fraction of the total compression cost. This release reduces that fixed cost (see the zero-cost state initialization described below), which proves especially effective when compressing a lot of small data. Going further in that direction is the next stage, and is likely to happen in a future release.
This is a maintenance release, mainly triggered by a reported issue. Big thanks to Pashugan for finding and sharing a reproducible sample.
On the CLI, the --fast option is equivalent to the acceleration parameter in the API, with which the user forfeits some compression ratio for the benefit of better speed. The verbose CLI output has also been fixed, and now displays the real amount of time spent compressing instead of CPU time.
Partial decoding can be useful to save CPU time and memory, when the objective is to extract a limited portion from a larger block; it is implemented on top of the main decompression loop, which allows it to take advantage of all the optimization work that has gone into the main implementation. LZ4 decompression speed has always been a strong point. Compression speeds also receive a welcome boost, though the improvement is not evenly distributed, with higher levels benefiting quite a lot more.
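A minimal sketch of the two API-level counterparts just mentioned, assuming LZ4_compress_fast() for the acceleration parameter and LZ4_decompress_safe_partial() for partial decoding; the buffer sizes, the acceleration value of 8, and the helper name are illustrative.

```c
#include "lz4.h"

/* Trade some ratio for speed at compression time, then decode only the
 * first `wanted` bytes of the block at decompression time.
 * `prefix` must have room for at least `wanted` bytes. */
static int compress_then_partial_decode(const char* src, int srcSize,
                                        char* prefix, int wanted)
{
    char compressed[16 * 1024];

    /* acceleration > 1 favors speed over ratio; 1 behaves like LZ4_compress_default() */
    const int cSize = LZ4_compress_fast(src, compressed, srcSize,
                                        (int)sizeof(compressed), 8);
    if (cSize <= 0) return -1;

    /* Decode only the requested prefix of the block, saving CPU time and
     * output memory compared to decompressing the whole block. */
    return LZ4_decompress_safe_partial(compressed, prefix, cSize,
                                       wanted, wanted);
}
```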
Should you aim for the best possible decompression speed, it's possible to request LZ4 to actively favor decompression speed, even if it means sacrificing some compression ratio in the process. This can be requested in a variety of ways depending on the interface, such as using the --favor-decSpeed command on the CLI. The resulting compressed object always decompresses faster, but is also larger. Your mileage will vary depending on file content, and the speed gain is matched by a corresponding file size increase, which tends to be proportional.
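On the API side, the same trade-off can be requested through the frame interface. To the best of our reading, LZ4F_preferences_t carries a favorDecSpeed field for this purpose, and the option only takes effect at high compression levels; the sketch below rests on those assumptions, so check the lz4frame.h shipped with this release.

```c
#include <stdlib.h>
#include <string.h>
#include "lz4frame.h"

/* Compress one frame while favoring decompression speed over ratio.
 * Assumption: LZ4F_preferences_t exposes a favorDecSpeed field, and the
 * option only applies at high compression levels. */
static size_t compress_favor_dec_speed(const void* src, size_t srcSize, void** outBuf)
{
    LZ4F_preferences_t prefs;
    memset(&prefs, 0, sizeof(prefs));
    prefs.compressionLevel = 12;   /* high level, where the option applies */
    prefs.favorDecSpeed    = 1;    /* same intent as CLI --favor-decSpeed  */

    const size_t bound = LZ4F_compressFrameBound(srcSize, &prefs);
    *outBuf = malloc(bound);
    if (*outBuf == NULL) return 0;

    const size_t cSize = LZ4F_compressFrame(*outBuf, bound, src, srcSize, &prefs);
    if (LZ4F_isError(cSize)) { free(*outBuf); *outBuf = NULL; return 0; }
    return cSize;
}
```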
This release also adds a new way, under certain conditions, to perform the compression state initialization at effectively zero cost. New, experimental LZ4 APIs have been introduced to take advantage of this functionality in block mode, and LZ4 Frame mode has been modified to use this faster reset whenever possible.
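The exact list of new block-mode functions is not reproduced above; the following is a hedged sketch of the fast-reset pattern as we understand it, assuming LZ4_compress_fast_extState_fastReset() is among the new entry points (check lz4.h, where experimental declarations may sit behind LZ4_STATIC_LINKING_ONLY).

```c
#define LZ4_STATIC_LINKING_ONLY   /* in case the fastReset entry points are still experimental */
#include "lz4.h"

/* Compress many small, independent inputs while reusing one state, so the
 * expensive full initialization is not paid again for every block. */
static void compress_many(const char* const* inputs, const int* sizes, int count,
                          char* dst, int dstCapacity)
{
    LZ4_stream_t* const state = LZ4_createStream();   /* allocated and fully initialized once */
    if (state == NULL) return;

    for (int i = 0; i < count; i++) {
        /* The fastReset variant reuses the already-initialized state cheaply,
         * instead of wiping the whole structure for each small input. */
        const int cSize = LZ4_compress_fast_extState_fastReset(
                              state, inputs[i], dst, sizes[i], dstCapacity, 1);
        (void)cSize;   /* a real caller would check for 0 and store the compressed block */
    }

    LZ4_freeStream(state);
}
```

The point of the pattern is that the full initialization happens exactly once, when the state is created; each subsequent call only performs the cheap reset.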