Parallel Data Compression Using LZMA
Nowadays a tremendous amount of data is generated and shared every second, so there is a real need for efficient
data storage management and bandwidth utilization. Data compression is one solution to this problem. The compression
ratio and the time required for compression and decompression are the two main pillars of data compression; if either
of the two is poor, the compression scheme may be termed inefficient.
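Both metrics are straightforward to measure. The following minimal sketch, using Python's standard `lzma` module and a made-up repetitive payload, computes the compression ratio as original size divided by compressed size alongside the compression time:

```python
import lzma
import time

# Hypothetical sample payload; repetitive text compresses well under LZMA.
data = b"parallel data compression using LZMA " * 1000

start = time.perf_counter()
compressed = lzma.compress(data)
elapsed = time.perf_counter() - start

# Compression ratio: original size divided by compressed size (higher is better).
ratio = len(data) / len(compressed)
print(f"ratio = {ratio:.1f}, time = {elapsed:.4f} s")
```

A scheme that improves one metric at the expense of the other (for example, a faster compressor with a much worse ratio) would still count as inefficient under this definition.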
LZMA (Lempel-Ziv-Markov chain algorithm) is one of the finest algorithms for data compression, and text
compression in particular, in terms of compression ratio. However, it has not seen commensurate adoption because of its
comparatively high compression time. We aim to improve its running time by implementing it in a parallel fashion. GPGPU
computing has provided the computing community with a well-defined and economical path to parallel implementation.
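One common way to parallelize a sequential compressor, sketched below under our own assumptions rather than as the paper's actual GPGPU design, is to split the input into fixed-size blocks and compress each block independently, so blocks can be processed concurrently. The chunk size and the use of threads (CPython's `lzma` releases the GIL during compression) are illustrative choices:

```python
import lzma
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1 << 16  # 64 KiB blocks; an assumed, tunable parameter

def compress_chunk(chunk: bytes) -> bytes:
    # Each block is compressed independently of the others,
    # so all blocks can be compressed in parallel.
    return lzma.compress(chunk)

def parallel_compress(data: bytes) -> list[bytes]:
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compress_chunk, chunks))

def parallel_decompress(blocks: list[bytes]) -> bytes:
    # Blocks are likewise independent on the decompression side.
    return b"".join(lzma.decompress(b) for b in blocks)

if __name__ == "__main__":
    payload = b"some highly repetitive text " * 10000
    blocks = parallel_compress(payload)
    assert parallel_decompress(blocks) == payload
```

The trade-off in such block-wise schemes is that each block starts with an empty dictionary, which can cost some compression ratio; the block size controls the balance between parallelism and ratio.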
Here we suggest an implementation of a general data compression algorithm designed with the parallel nature of
the target computing devices in mind. As we have discovered over the past few months, this not only decreases the
compression time but also yields a compression ratio slightly better than that of