Raphael Canut
2017-11-17 16:09:27 UTC
Hello,
I am progressively getting back to the NHW Project development (but it is
not that easy, I am a little depressed currently...).
I realize that what the NHW codec needs is more HF energy preservation, but since keeping all the HF energy seems to require a lot of bits, I wanted to select specifically the HF energy around blurred edges, so that edges do not show aliasing. Is it possible to distinguish this HF energy from other kinds: noise, small details, ...? I think it is possible, as the SPIHT algorithm tends to give such results. Can someone explain where in the interband dependencies (parent->child coefficient trees) this HF energy is taken into account?
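For context, here is a minimal sketch of the parent->child relation used in SPIHT's spatial orientation trees and the bit-plane significance test (this is an illustration of the standard SPIHT structure, not NHW code; the function names are mine). Because edge energy tends to persist across scales, a parent coefficient near an edge usually has significant descendants, which is how SPIHT groups and captures that HF energy cheaply:

```python
import numpy as np

def children(i, j):
    # Standard SPIHT parent->child mapping outside the LL band:
    # coefficient (i, j) has four children in the next finer scale.
    return [(2*i, 2*j), (2*i, 2*j + 1), (2*i + 1, 2*j), (2*i + 1, 2*j + 1)]

def descendants(i, j, size):
    # All descendants of (i, j) inside a size x size coefficient array.
    out = []
    stack = children(i, j)
    while stack:
        y, x = stack.pop()
        if y < size and x < size:
            out.append((y, x))
            stack.extend(children(y, x))
    return out

def significant(coeffs, coords, n):
    # A set is significant at bit plane n if any |coefficient| >= 2**n.
    # SPIHT spends one bit on this test for a whole descendant tree,
    # which is why edge HF energy is located so efficiently.
    return any(abs(coeffs[y, x]) >= (1 << n) for (y, x) in coords)
```

So the HF energy around an edge is accounted for exactly in those descendant sets: one significance bit per tree either prunes a whole insignificant region or drills down toward the edge coefficients.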
The additional, but secondary, problem is that if I manage to define 2 or 4 new entropy words for the HF energy of blurred edges, they will unbalance my current Huffman tree, so I will have to build another Huffman tree that takes these new words into account.
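For what it's worth, the simplest way to absorb new words is to rebuild the Huffman tree from scratch with the updated frequency table rather than patching the old one. A minimal sketch (the symbol names like "HF_EDGE_0" are hypothetical placeholders, not NHW's actual entropy words):

```python
import heapq
from itertools import count

def huffman_code(freqs):
    # Build a Huffman code from {symbol: frequency}.
    # To add new entropy words, just extend `freqs` and call this again.
    tiebreak = count()  # keeps heap entries comparable when weights tie
    heap = [(w, next(tiebreak), sym) for sym, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tiebreak), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node: recurse both ways
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                        # leaf: record the code word
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# Hypothetical frequency table: frequent old words plus two new edge words.
freqs = {"zero": 60, "small": 25, "HF_EDGE_0": 10, "HF_EDGE_1": 5}
codes = huffman_code(freqs)
```

Since the rebuilt code is optimal for the new frequencies, the rarer edge words naturally get longer code words without degrading the common ones.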
I think this is the major thing that remains to be done (along with improving the compression schemes). So I will try to take a look at and study the SPIHT algorithm.
Any help, advice, opinion,... welcome!
Cheers,
Raphael