IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 53, NO. 3, MARCH 2005 445
A Successively Refinable Lossless Image-Coding Algorithm

İsmail Avcıbaş, Member, IEEE, Nasir Memon, Member, IEEE, Bülent Sankur, Senior Member, IEEE, and Khalid Sayood
Abstract—We present a compression technique that provides progressive transmission as well as lossless and near-lossless compression in a single framework. The proposed technique produces a bit stream that results in a progressive, and ultimately lossless, reconstruction of an image similar to what one can obtain with a reversible wavelet codec. In addition, the proposed scheme provides near-lossless reconstruction with respect to a given bound after decoding of each layer of the successively refinable bit stream. We formulate the image data-compression problem as one of successively refining the probability density function (pdf) estimate of each pixel. Within this framework, restricting the region of support of the estimated pdf to a fixed-size interval then results in near-lossless reconstruction. We address the context-selection problem, as well as pdf-estimation methods based on context data at any pass. Experimental results for both lossless and near-lossless cases indicate that the proposed compression scheme, which innovatively combines lossless, near-lossless, and progressive coding attributes, gives competitive performance in comparison with state-of-the-art compression schemes.
Index Terms—Embedded bit stream, image compression, lossless compression, near-lossless compression, probability mass estimation, successive refinement.
I. INTRODUCTION

LOSSLESS or reversible compression refers to compression techniques in which the reconstructed data exactly matches the original. Near-lossless compression denotes compression methods that give quantitative bounds on the nature of the loss that is introduced. Such compression techniques provide the guarantee that no pixel difference between the original and the compressed image is above a given value [1]. Both lossless and near-lossless compression find potential applications in remote sensing, medical and space imaging, and multispectral image archiving. In these applications, the volume of the data would call for lossy compression for practical storage or
Paper approved by K. Illgner, the Editor for Speech, Image, Video, and Signal Processing of the IEEE Communications Society. Manuscript received August 7, 2002; revised May 31, 2003 and February 22, 2004. This work was supported in part by the National Science Foundation under INT 9996097. The work of İ. Avcıbaş was supported in part by the TÜBİTAK BDP program. The work of K. Sayood was supported by NASA Goddard Space Flight Center. İ. Avcıbaş is with the Electrical and Electronics Engineering Department, Uludag University, 16059 Bursa, Turkey (e-mail: avcibas@uludag.edu.tr). N. Memon is with the Computer Science Department, Polytechnic University, Brooklyn, NY 11201 USA (e-mail: memon@poly.edu). B. Sankur is with the Electrical and Electronics Engineering Department, Bogaziçi University, İstanbul, Turkey (e-mail: sankur@boun.edu.tr). K. Sayood is with the Electrical Engineering Department, University of Nebraska at Lincoln, Lincoln, NE 68588-0511 USA (e-mail: ksayood@eecomm.unl.edu). Digital Object Identifier 10.1109/TCOMM.2005.843421
transmission. However, the necessity to preserve the validity and precision of data for subsequent reconnaissance, diagnosis operations, forensic analysis, as well as scientific or clinical measurements, often imposes strict constraints on the reconstruction error. In such situations, near-lossless compression becomes a viable solution, as, on the one hand, it provides significantly higher compression gains vis-à-vis lossless algorithms, and on the other hand, it provides guaranteed bounds on the nature of loss introduced by compression.

Another way to deal with the lossy-lossless dilemma faced in applications such as medical imaging and remote sensing is to use a successively refinable compression technique that provides a bit stream that leads to a progressive reconstruction of the image. Using wavelets, for example, one can obtain an embedded bit stream from which various levels of rate and distortion can be obtained. In fact, with reversible integer wavelets, one gets a progressive reconstruction capability all the way to lossless recovery of the original. Such techniques have been explored for potential use in teleradiology, where a physician typically requests portions of an image at increased quality (including lossless reconstruction) while accepting initial renderings and unimportant portions at lower quality, thus reducing the overall bandwidth requirements. In fact, the new still-image compression standard, JPEG 2000, provides such features in its extended form [2].

Although reversible integer wavelet-based image-compression techniques provide an integrated scheme for both lossless and lossy compression, the resulting compression performance is typically inferior to that of state-of-the-art nonembedded, predictively encoded techniques like CALIC [3] and TMW [4], [5]. Another drawback is that, while lossless compression is achieved when the entire bit stream has been received, for the lossy reconstructions at the intermediate stages, no precise bounds can be set on the extent of distortion present. Near-lossless compression in such a framework is only possible either by an appropriate prequantization of the wavelet coefficients and lossless transmission of the resulting bit stream, or by truncation of the bit stream at an appropriate point followed by transmission of a residual layer to provide the near-lossless bound. Both these approaches have been shown to provide inferior compression, as compared with near-lossless compression in conjunction with predictive coding [1].

In this paper, we present a compression technique that incorporates the above two desirable characteristics, namely, near-lossless compression and progressive refinement from lossy to lossless reconstruction. In other words, the proposed technique produces a bit stream that results in a progressive reconstruction of the image similar to what one can obtain with a reversible wavelet
0090-6778/$20.00 © 2005 IEEE
codec. In addition, our scheme provides near-lossless (and lossless) reconstruction with respect to a given bound after each layer of the successively refinable bit stream is decoded. Note, however, that these bounds need to be set at compression time and cannot be changed during decompression. The compression performance provided by the proposed technique is comparable to the best-known lossless and near-lossless techniques proposed in the literature. It should be noted that, to the best knowledge of the authors, this is the first technique reported in the literature that provides lossless and near-lossless compression, as well as progressive reconstruction, all in a single framework.

The rest of this paper is organized as follows. We first discuss our approach to near-lossless compression and the tools used in our algorithm, such as successive refinement, density estimation, and the data model, in Section II. The proposed compression method is also described in Section II. In Section III, we give experimental results, and Section IV concludes the paper.

II. FORMULATION OF OUR APPROACH
The key problem in lossless compression involves estimating the probability density function (pdf) of the current pixel based on previously known pixels (or previously received information). With this in mind, the problem of successive refinement can then be viewed as the process of obtaining improved estimates of the pdf of each pixel with successive passes over the image, until all the pixels are uniquely determined. Restricting the "support" (that is, the interval where the pdf is nonzero) of the pdf to a successively refined set of intervals leads to the integration of lossless/near-lossless compression in a single framework. More explicitly, diminishing the support of the pdf in each pass to a narrower interval gives progressiveness, while fixing the size of the interval provides near-lossless (or lossless, if the interval size is one) coding. In this unified scheme, we obtain a bit stream that gives us a near-lossless reconstruction after each pass, in the sense that each pixel is within a bounded number of quantal bins of its original value. The value of this coding-error bound decreases with successive passes, and if desired, in the final pass, we can achieve lossless compression.

In order to design a compression algorithm with the properties described above, we need three things.
1) Given a pixel neighborhood and a gray-level interval, estimate the pdf of the pixel in that interval.
2) Given a pdf for a pixel, determine how best to predict its actual value or the subinterval in which it is found.
3) Update the pdf of a pixel, given the pdfs of neighborhood pixels.

Equitz and Cover [7] have discussed the problem of successive refinement of information from a rate-distortion point of view. They show that the rate-distortion problem is successively refinable if and only if the individual solutions to the rate-distortion problem can be written as a Markov chain. One example process that admits successive refinement is a Gaussian source, together with the mean-square error (MSE) criterion. Hence, for the first requirement in our compression scheme, we adopt the Gaussian data model in the first pass. We assume the data is stationary and Gaussian in a small neighborhood, and therefore, we use linear prediction. We fit a Gaussian density function for the current pixel, with the linear prediction value taken as the optimal estimate of its mean, and the mean-square prediction error as its variance, as detailed in Section II-A. However, in the succeeding passes, we relax the Gaussian assumption.

For the second requirement, we use a technique based on Massey's optimal guessing principle [6]. Let a discrete random variable be characterized by a set of allowable values. Massey [6] has observed that the average number of guesses to determine the value of a discrete random variable is minimized by a strategy that guesses the possible values of the random variable in decreasing order of probability. The guessing game is pursued until a question of the form "Is the value equal to this candidate?" is positively answered for lossless compression, or until a question of the form "Does the value lie within this neighborhood?" is satisfied for the near-lossless scheme. Therefore, we consider the coding problem as one of asking the optimal sequence of questions to determine the exact value of the pixel, or the interval in which the pixel lies, depending on whether we are interested in lossless or near-lossless compression. Specifically, we divide the support of the current pixel's pdf, initially, into a nonoverlapping set of intervals, where the search interval is twice the uncertainty interval. The intervals are then sorted with respect to their probability mass obtained from the estimated density function. Next, the interval with the highest probability mass is identified, and if the pixel is found to lie in this interval, the probability mass outside the interval is zeroed out, and the bit 1 is fed to the entropy coder. Otherwise, bit zero is sent to the encoder, and we test whether the pixel lies in the next highest probability interval. Every time one receives a negative answer, the probability mass of the failing interval is zeroed out and the pdf normalized to a mass of 1. This process is repeated until the right interval is found. Such a pdf with intervals set to zero is called an "interval-censored pdf" in the following. At the end of the first pass, the maximum error in the reconstructed image is half the interval length, since the midpoint of the interval is selected as the reconstructed pixel value.

For the third requirement, in the remaining passes of our compression algorithm, we proceed to refine the pdf for each pixel by narrowing the size of the interval in which it is now known to lie. We investigate various pdf refinement schemes, as described in Section II-C. The refinement schemes use both causal pixels (i.e., pixels that have occurred before a given pixel in raster-scan order) as well as noncausal pixels (i.e., the ones occurring afterwards). Note that in this update scheme, the causal pixels already have a refined pdf, but the noncausal pixels do not yet. We do not want to discard the noncausal pixels, as they may contribute, along with the causal pixels, some useful information, like the presence of edges and texture patterns. On the other hand, the information provided by the causal pixels is more accurate, as compared with that of pixels yet to be visited on this pass. This differential in their precision is automatically taken into account when we refine the pdf of the current pixel, as will become clear in Section II-C, where we present several pdf-refinement techniques. In any case, with every refinement pass over the image, we continue this estimation and refinement procedure and constrain pixels to narrower and narrower intervals (smaller supports, hence, smaller values of the error bound) to the required precision, and if needed, all the way to their exact values.
Fig. 1. Ordering of the causal prediction neighbors of the current pixel.

Fig. 2. Context pixels used in the covariance estimation of the current pixel, as in (3). The number of context pixels is 40. Each of these 40 context pixels has its own causal neighborhood, as in Fig. 1, for prediction. Some of the causal neighborhood pixels on the context border fall outside the support of the context (these pixels are not shown).
We should note in passing that although image gray values are discrete and their statistics are described with a probability mass function (pmf), we find it convenient to treat them first as continuous random variables. Thus, we estimate their pdf, and then revert to their discrete nature during the compression algorithm when we have to compute estimates of their likelihood to lie in a given interval.
A. PDF Estimation in the First Pass: The Gaussian Model
Natural images, in general, may not satisfy Gaussianity or stationarity assumptions. But at a coarse level and in a reasonable-size neighborhood, the statistics can be assumed to be both Gaussian and stationary, and the Gauss–Markov property can be invoked. Based on this assumption, we fit a Gaussian density for the current pixel. In fact, we use causal neighborhood pixels to predict the current pixel via a linear regression model. Let $p_i(x)$ denote the pdf of the $i$th pixel at gray value $x$, and let the probability mass of the $k$th interval, of length $\Delta$ and delimited by the lower and upper limits $l_k$ and $u_k$, be $P_k = \int_{l_k}^{u_k} p_i(x)\,dx$.

We assume a discrete-time scalar-valued random process. Let $x_1, \ldots, x_N$ denote the random variables representing the $N$ causal neighbors of the current pixel $x_0$, where the reading order of the pixels is as shown in the mask in Fig. 1. Similarly, let us consider the $M$ context pixels of $x_0$, as shown in Fig. 2, and denote them as $y_1, \ldots, y_M$, in some reading order. For each of these context pixels, we have to consider their individual causal prediction neighborhoods. With this goal in mind, we use notation with double indexes, $y_{j,k}$, $j = 1, \ldots, M$, $k = 1, \ldots, N$, where the visited-pixel index $j$ runs over the context pixels and the prediction index $k$ runs over the $N$ prediction neighbors of each context pixel $y_j$. Thus, the $y_{j,k}$ indicate all the realizations of the neighborhoods for the context pixels. In Fig. 2, the causal mask, shown only for the "star pixel," must be replicated for each of the other context pixels. Note that each of the context pixels possesses its own prediction neighborhood consisting of its causal pixel group. This is illustrated in Fig. 2, where each of the pixels is predicted using a prediction mask as in Fig. 1 (some of the pixels forming the causal neighborhood of the border context pixels are outside the support of the mask shown). Obviously, $M \cdot N$ pixels are involved in the computation of the regression coefficients. Each of these realizations is assumed to satisfy the $N$th-order linear prediction equation. In particular, for the realization $y_j$, one has

$$y_j = \sum_{k=1}^{N} a_k\, y_{j,k} + e_j \quad (1)$$

where the $a_k$ are real-valued linear prediction coefficients of the process, and $e_j$ is a sequence that consists of independent, identically distributed (i.i.d.) random variables having a Gaussian density with zero mean and variance $\sigma^2$. Optimal minimum-MSE (MMSE) linear prediction for an $N$th-order stationary Gauss–Markov process can be formulated as

$$\hat{x}_0 = \sum_{k=1}^{N} a_k\, x_k. \quad (2)$$

According to the Gauss–Markov theorem, the minimum-variance linear unbiased estimator $\hat{\mathbf{a}} = (\hat{a}_1, \ldots, \hat{a}_N)^T$ is the least-squares solution of the ensemble of equations of the form (2), resulting from the prediction of the context pixels, and is given by [8], [9]

$$\hat{\mathbf{a}} = (\mathbf{Y}^T \mathbf{Y})^{-1} \mathbf{Y}^T \mathbf{y} \quad (3)$$

where $\mathbf{y} = (y_1, \ldots, y_M)^T$ denotes the context pixels, while the data matrix

$$\mathbf{Y} = \begin{bmatrix} y_{1,1} & \cdots & y_{1,N} \\ \vdots & \ddots & \vdots \\ y_{M,1} & \cdots & y_{M,N} \end{bmatrix}$$

consists of the prediction neighbors of the context pixels. The expected value of $\hat{x}_0$ is given by (2), and an unbiased estimator of the prediction error variance can be obtained [4] as

$$\hat{\sigma}^2 = \frac{1}{M - N} \sum_{j=1}^{M} \Bigl( y_j - \sum_{k=1}^{N} \hat{a}_k\, y_{j,k} \Bigr)^2.$$

Finally, based on the principle that the mean-square prediction for a normal random variable is its mean value, the density of $x_0$, conditioned on its causal neighbors, is then given by

$$p(x_0 \mid x_1, \ldots, x_N) = \frac{1}{\sqrt{2\pi}\,\hat{\sigma}} \exp\!\left( -\frac{(x_0 - \hat{x}_0)^2}{2\hat{\sigma}^2} \right). \quad (4)$$
B. Encoding the Interval

We can treat the problem of determining the interval where the current pixel value lies within the framework of Massey's guessing principle, as described in Section II. Let $P_1, \ldots, P_K$ denote the probabilities, where each probability is associated with an interval that has a length of $\Delta$.
Fig. 3. Causal and noncausal neighbors of the current pixel, used for probability mass estimation in the second and higher passes.
The number of intervals $K$ depends upon the quantal resolution, and becomes equal to 256 at the finest resolution. The union of these nonoverlapping intervals covers the support of the pdf. We treat the pixel value, whose bin location is being guessed, as a random variable. The rule for minimizing the number of guesses in finding the interval where the pixel lies is to choose the interval for which the probability mass $P_k$ is maximum. Having thus chosen the most likely interval of the pixel, we then use the indicator function to test whether the actual pixel value lies in that interval or not. If the $k$th interval proves to be correct, the entropy coder is fed the bit 1; otherwise, the entropy coder is fed the bit 0, and the interval with the next highest probability is tested. We used the XYZ adaptive binary arithmetic coder as our entropy coder, and the binary events to be coded were simply considered to be an i.i.d. sequence [10]. We would like to note that the arithmetic coder was beneficial only in the initial passes. In the first few passes, the data to be encoded has low entropy, as the strategy of selecting the interval with the highest probability mass proves to be correct most of the time, and the arithmetic coder takes this into account, producing a highly compressed bit stream. The performance improvement in the last pass is marginal, as the data has become more random and less correlated. We expect that context-based arithmetic coding could result in further (but nevertheless small) improvement in the bit rates, as compared with those that we obtained, as reported in Section III.

The average number of guesses that one has to make before one correctly identifies the right interval in which a pixel belongs gives the number of binary arithmetic coding operations that need to be performed, and influences the computational complexity of our technique as well. To give an example, for the Lena image and for the one-pass scheme, one can guess any pixel value in a correct interval within the first three questions, with respective cumulative probabilities of 0.73, 0.86, and 0.97. That is, with a 97% chance, we determine the correct gray-level interval of a pixel within the first three guesses.
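The link between the sorted interval probabilities and the coding work can be computed directly; the toy pmf below is our own (it does not reproduce the 0.73/0.86/0.97 figures quoted for Lena):

```python
def guess_statistics(interval_pmf):
    """Expected number of guesses and cumulative hit probabilities when
    intervals are tried in decreasing order of probability (Massey's
    strategy).  Each guess costs one binary coding operation, so the
    expectation also measures the per-pixel work of the coding step."""
    probs = sorted(interval_pmf, reverse=True)
    cumulative = []
    total = 0.0
    for p in probs:
        total += p
        cumulative.append(round(total, 2))
    expected = round(sum((k + 1) * p for k, p in enumerate(probs)), 6)
    return expected, cumulative

expected, cumulative = guess_statistics([0.1, 0.6, 0.2, 0.1])
print(expected)     # 1.7 = 0.6*1 + 0.2*2 + 0.1*3 + 0.1*4
print(cumulative)   # [0.6, 0.8, 0.9, 1.0]
```

The flatter the pmf, the closer the expectation gets to half the number of intervals, which is why the later, less correlated passes benefit less from the guessing strategy.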
C. PDF Estimation in the Refinement Passes

After the first pass, we know each pixel value to within an uncertainty interval. In the successive passes, we proceed to decrease the uncertainty about the value of the pixels. In these finer-resolution passes, we drop the Gaussian assumption. As more and more of the redundancy is removed after each pass, the correlation remaining for the later passes decreases, leading to conditions that cannot be modeled well with a Gaussian assumption. Below, we present pdf update schemes for the second and higher passes. In these passes, we use both the causal neighborhood densities, which have just been updated, and the noncausal neighborhood densities, which were updated only in the previous pass. The updating neighborhood used in the pdf updates in the second and higher passes can be deduced from Fig. 3.
Method 1: Norm-Minimizing Probability Estimation: This pdf estimate is based on a norm estimate of the current pixel's pdf, using the pdfs of the causal and noncausal neighborhoods. Let $p_i(x)$ denote the pdf to be estimated, given the causal and noncausal distributions $p_j(x)$, $j = 1, \ldots, K$, in the neighborhood of the $i$th pixel (cf. Fig. 3). Notice that in the refinement stage, the prediction neighborhood is different than in the initial stage, as the noncausal pixels also become involved. Minimizing the average squared difference of $p_i(x)$ from the neighboring densities, subject to the constraint $\int p_i(x)\,dx = 1$, and using Lagrange multipliers, we have

$$J = \sum_{j=1}^{K} \int \bigl( p_i(x) - p_j(x) \bigr)^2 dx + \lambda \Bigl( \int p_i(x)\,dx - 1 \Bigr).$$

Using the variational derivative [11] with respect to $p_i(x)$, and the interval-censoring information, one finds the updated pdf to be of the form

$$p_i(x) = \frac{1}{C}\, I_i(x) \sum_{j=1}^{K} p_j(x) \quad (5)$$

where $C$ is the normalizing constant and $I_i(x)$ is an indicator function. In other words, $I_i(x) = 1$ if the quantal level $x$ is not censored, and $I_i(x) = 0$ if $x$ falls in a censored interval. Thus, the sum in (5) is to be interpreted as an interval-censored averaging of pdfs. Recall that every pixel in the neighborhood after the first pass has an interval-censored pdf. Here we combine the interval-censored pdfs $p_j(x)$, defined over one or more intervals of the form $[l_k, u_k)$, for the pixels $j$ in the neighborhood of the $i$th pixel. If a neighboring interval-censored pdf, say $p_j(x)$, does not overlap with that of the current pixel, $p_i(x)$, which is being estimated, then it makes no contribution to the estimate. In other words, if $p_i(x)$ has an empty bin, it will veto any contribution to the corresponding bin of the estimate from its neighbors. On the other hand, if that bin was not censored in $p_i(x)$, then evidence accumulation occurs from the corresponding bins of the neighbors. Notice that this method of summing neighboring densities implicitly gives more importance to the relatively more precise pdfs in the causal neighborhoods, as they will be concentrated in narrower supports after the latest pass, while the noncausal neighborhoods will have more dispersed pdfs, that is, over a larger support, since their updates are yet to occur.
Method 2: Hellinger Norm Minimizing Probability Estimation: The relative entropy, $D(p \| q)$, is a measure of distance between two distributions. In other words, it is a measure of the inefficiency of assuming that the distribution is $q$ when the true distribution is $p$. For example, if we knew the true distribution $p$ of the random variable, then we could construct a code with average length equal to its entropy $H(p)$. If, instead, we were to use the code for distribution $q$, we would need $H(p) + D(p \| q)$ bits on the average to describe the random variable [12]. The squared Hellinger norm between distributions with densities $p$ and $q$ is defined as

$$H^2(p, q) = \int \bigl( \sqrt{p(x)} - \sqrt{q(x)} \bigr)^2 dx.$$

Many, if not all, smooth function classes satisfy the equivalence $H^2 \asymp D$. The advantage of $H$ is that it satisfies the triangle inequality, while $D$ does not [13]. However, $D$
brings in clean information-theoretic identities, such as the minimum description-length principle, stochastic complexity, etc. [13]. Taking advantage of the equivalence between $H^2$ and $D$, we can use one for the other in the derivation of the optimal estimate. When we have a class of candidate densities $p_j(x)$, $j = 1, \ldots, K$, and want to find the $p_i(x)$ which minimizes the inefficiency of assuming the distribution was $p_i$, we can minimize the total extra bits to obtain the shortest description length on the average

$$J = \sum_{j=1}^{K} D(p_i \| p_j) + \lambda \Bigl( \int p_i(x)\,dx - 1 \Bigr)$$

where $\lambda$ is the Lagrange multiplier. Again finding the variational derivative with respect to $p_i(x)$ and setting it equal to zero, we get

$$p_i(x) = \frac{1}{C} \prod_{j=1}^{K} \bigl[ p_j(x) \bigr]^{1/K} \quad (6)$$

where $C$ is the normalizing constant. In general, the relative entropy, or the Kullback–Leibler distance, has a close connection with more traditional statistical estimation measures, such as the $L_2$ norm (MSE) and the Hellinger norm, when the distributions are bounded away from zero, and is equivalent to MSE when both $p$ and $q$ are Gaussian distributions with the same covariance structure [13].

The two methods described above were implemented for the pdf update step. Their performances, as measured in terms of the final coding efficiency of the algorithm, were found to be comparable. The norm-based update of Method 1 was then adopted due to its computational simplicity.
D. Algorithm Summary
In summary, the proposed algorithm consists of first estimating the pdf of each pixel's gray value, based on the values of the previously decoded pixels, and then successively refining the estimate by restricting the support of the pdf to narrower and narrower regions in subsequent iterations. The pseudocode of the algorithm is as follows.

Encoder
  inform the decoder about the number of passes and the near-lossless parameters
  for each pass
    for each pixel in the image
      if first pass
        predict pdf as per (4)
      else
        predict pdf as per (5)
      end if
      find the interval with the highest probability mass
      while the pixel is not in the interval
        entropy code the failure event (zero)
        zero out the probability in the interval
        find the interval with the highest probability mass
      end while
      entropy code the success event (one)
      zero out the probability outside the interval
    end for
  end for

Decoder
  get the number of passes and the near-lossless parameters
  for each pass
    for each pixel in the image
      if first pass
        predict pdf as per (4)
      else
        predict pdf as per (5)
      end if
      find the interval with the highest probability mass
      decode the event
      while the decoded event is failure
        zero out the probability in the interval
        find the interval with the highest probability mass
        decode the event
      end while
      take the midpoint of the interval as the reconstruction value for the pixel
      zero out the probability outside the interval
    end for
  end for

III. EXPERIMENTAL RESULTS
In this section, we present simulation results of the performance of our algorithm and compare them with those of its nearest competitors, namely, the CALIC [3], SPIHT [14], and JPEG 2000 [2] algorithms. However, before we do this, we first describe some implementation details and parameter choices used in the specific implementation of our algorithm.

In the implementation of our multipass compression procedure, we first estimate the Gaussian pdf as in (4) not based on the exact pixel values, but on their less precise values resulting from quantization. Thus, we regress on the quantized versions of the context pixels in (1). In other words, we substitute for each pixel the midpoint of the interval within which it is found. This is necessary, since the same information set must be used at both the encoder and decoder, and the context pixels can be known at the decoder only to within the precision of the current interval size. In other words, to be consistent, both the encoder and decoder guess the pixel value to be at the midpoint of the interval. It can be argued that if one were to use the centroid of each interval instead of the midpoint, the prediction could improve. However, the interval centroid and its midpoint would differ significantly only in the high-sloped intervals of the pdf. In our case, the intervals are small, and furthermore, the correct interval is often guessed in the first few steps, where the relatively flat intervals of the Gaussian pdf around its center are being used. The centroid did not bring any significant improvement; hence, for the sake of simplicity, we opted for the interval midpoint. We note that, though the initial Gaussian pdf estimate using (4) will be noisier as a consequence of midpoint quantization, the "missed" information will be recovered later using pdf updates based on context-pixel pdfs. But the pdf updates necessitate that we pass over the image more than once; hence, the multipass compression. Another consideration is the initialization of the algorithm at the
We Need Your Support

Thank you for visiting our website and your interest in our free products and services. We are nonprofit website to share and download documents. To the running of this website, we need your help to support us.

Thanks to everyone for your continued support.

No, Thanks