Why is it possible to use “future” pixels in an experiment but not in practice? It would seem that the image, or part of it, could be stored in memory and that the encoder could use any pixel as part of a context. The answer is that the decoder cannot do the same: it reconstructs the image pixel by pixel, so a context is usable only if it consists of pixels the decoder has already decoded.

One disadvantage of a large context is that it takes the algorithm longer to “learn” it. A 20-bit context, for example, allows for 2^20 ≈ 1 million different contexts. It takes many millions of pixels to collect enough counts for all those contexts, which is one reason large contexts do not result in better compression. One way to improve our method is to implement a two-level algorithm that uses a long context only if that context has already been seen Q times or more (where Q is a parameter, typically set to a small value such as 2 or 3). If a context has been seen fewer than Q times, it is deemed unreliable, and only a small subset of it is used to predict the current pixel. Figure 4.117 shows four such contexts, where the pixels of the subset are labeled S. The notation p,q means a two-level context of p bits with a subset of q bits.
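The following is a minimal Python sketch of this two-level idea, under stated assumptions: the class name TwoLevelContextModel, the averaging predictor, and the use of a bit mask to select the q-bit subset are illustrative choices, not details taken from the figure or the text.

```python
from collections import defaultdict

Q = 2  # reliability threshold: a long context must be seen at least Q times


class TwoLevelContextModel:
    """Sketch of a two-level context model.

    The full p-bit context is used for prediction only after it has been
    seen at least Q times; otherwise we fall back to the q-bit subset
    selected by q_mask. The averaging predictor and the bit layout are
    assumptions made for illustration.
    """

    def __init__(self, q_mask):
        self.q_mask = q_mask  # selects the q-bit subset of the p-bit context
        self.long_counts = defaultdict(lambda: [0, 0])   # context -> [count, pixel sum]
        self.short_counts = defaultdict(lambda: [0, 0])

    def predict(self, ctx):
        count, total = self.long_counts[ctx]
        if count >= Q:                       # long context is reliable: use it
            return total / count
        count, total = self.short_counts[ctx & self.q_mask]
        if count > 0:                        # otherwise use the short subset context
            return total / count
        return 128                           # nothing seen yet: neutral 8-bit default

    def update(self, ctx, pixel):
        # Both tables are updated, so the long context can become
        # reliable once it has accumulated Q observations.
        for table, key in ((self.long_counts, ctx),
                           (self.short_counts, ctx & self.q_mask)):
            stats = table[key]
            stats[0] += 1
            stats[1] += pixel
```

In this sketch the mask plays the role of the S-labeled subset in Figure 4.117: it keeps the q context bits that remain usable while the full p-bit context has not yet been seen Q times.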