IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 9, NO. 1, JANUARY 2000 55
DCT-Domain Watermarking Techniques for Still Images: Detector Performance Analysis and a New Structure
Juan R. Hernández, Associate Member, IEEE, Martín Amado, and Fernando Pérez-González, Member, IEEE
Abstract—In this paper, a spread-spectrum-like discrete cosine transform domain (DCT domain) watermarking technique for copyright protection of still digital images is analyzed. The DCT is applied in blocks of 8 × 8 pixels, as in the JPEG algorithm. The watermark can encode information to track illegal misuses. For flexibility purposes, the original image is not necessary during the ownership verification process, so it must be modeled as noise. Two tests are involved in the ownership verification stage: watermark decoding, in which the message carried by the watermark is extracted, and watermark detection, which decides whether a given image contains a watermark generated with a certain key. We apply generalized Gaussian distributions to statistically model the DCT coefficients of the original image and show how the resulting detector structures lead to considerable improvements in performance with respect to the correlation receiver, which has been widely considered in the literature and makes use of the Gaussian noise assumption. As a result of our work, analytical expressions for performance measures such as the probability of error in watermark decoding and the probabilities of false alarm and detection in watermark detection are derived and contrasted with experimental results.
Index Terms—Copyright protection, discrete cosine transform, signal detection, spread spectrum communication.
I. INTRODUCTION

THE DEVELOPMENT achieved during the last decades by both image processing techniques and telecommunication networks has facilitated the current explosion of applications that considerably simplify the acquisition, representation, storage, and distribution of images in digital format. Since protocol and network design is oriented to digital data delivery, most content providers are rapidly transforming their archives to a digital format. However, all these advances have also made it possible to produce digital copies identical to the original with the greatest ease. In addition, unauthorized manipulation or reuse of information has become so common that this emerging creative process has been put in danger.

Although cryptography is an effective tool against the illegal digital distribution problem, it has to be coupled with specialized hardware in order to avoid direct access to data in digital format, something that is not only costly but would also reduce the marketing possibilities for the service provider. It is in this scenario where watermarking techniques can prove to be useful. A digital watermark is a distinguishing piece of information that is adhered to the data that it is intended to protect. For obvious reasons, we will only consider here invisible watermarks that protect rights by embedding ownership information into the digital image in an unnoticeable way. This imperceptibility constraint is attained by taking into account the properties of the human visual system (HVS), which in turn helps to make the watermark more robust to most types of attacks. In fact, robustness of the watermark is a capital issue, since it should be resilient to standard manipulations, both intentional and unintentional, and should also withstand multiple watermarking to facilitate the tracking of the subsequent transactions an image is subject to. Ideally, watermark destruction attacks should affect the original image in a similar way. However, to what extent the creator regards an image as his/hers is a difficult question whose answer depends on the application.

Watermarking, like cryptography, needs secret keys to identify legal owners; furthermore, most applications demand extra information to be hidden in the original image (steganography). This information may consist of ownership identifiers, transaction dates, serial numbers, etc., that play a key role when illegal providers are being tracked. Closely related to the embedding of this information is the extraction (watermark decoding) process whenever in possession of the secret key. In most cases of interest, there will be a certain probability of error for the hidden information, which can be used as a measure of the performance of the system. Clearly, this probability will increase with the number of information bits in the message.

Another important problem is the so-called watermark detection problem, which consists in testing whether the image was watermarked by a certain key owner. Thus, the watermark detection problem can be formulated as a binary hypothesis test, for which a probability of false alarm (i.e., of deciding that a given key was used while it was not) and a probability of detection (i.e., of correctly deciding that a given key was used) can be defined. Note that the existence of "false positives" would dramatically reduce the credibility of the system, so the probability of false alarm should be kept to a very small value by adequately designing the test so it produces acceptable probabilities of detection.

Existing algorithms for watermarking still images usually work either in the spatial domain [1]–[8] or in a transformed domain [7], [9]–[15].

Manuscript received November 30, 1998; revised July 7, 1999. This work was supported in part by CICYT under Project TIC96-0500-C10-10 and by Xunta de Galicia under Project PGIDT99 PXI32203B. The authors are with the Departamento de Tecnologías de las Comunicaciones, E.T.S.I. Telecomunicación, Universidad de Vigo, Vigo 36200, Pontevedra, Spain (e-mail: jhernan@tsc.uvigo.es; fperez@tsc.uvigo.es). Publisher Item Identifier S 1057-7149(00)00176-7.
In this paper, we will deal with the analysis of watermarking methods in the discrete cosine transform (DCT) domain, which is widely used in compression applications and consequently in digital distribution networks. Moreover, invisibility constraints are easier to impose when working in such a domain. We will assume that the original image is not present during the watermark detection and information decoding processes, since this would be hard to manage when huge quantities of images need to be compared by intelligent agents searching the net for unauthorized copies.

1057–7149/00$10.00 © 2000 IEEE

The main goal of this paper is to present a novel analytical framework that allows us to assess the performance of a given watermarking method in the DCT domain. New results have been achieved by resorting to a theoretical model of the problem that permits deriving optimal detector and decoder structures. With those results, it is possible to determine how much information one can hide in a given image for a certain probability of error, or what is the probability of correctly deciding that a given image has been watermarked. Moreover, these results are not tied to a specific key; rather, they are averaged over the set of possible keys. One relevant by-product of the rigorous analysis presented in this paper has been the discovery of new detector and decoder structures that outperform the existing ones, which are usually based on the calculation of the correlation coefficient (sometimes called similarity) between the image and the watermark. As shown in this paper, these correlation structures, which have been somewhat taken for granted in the previous literature on DCT-domain watermarking, would be optimal only if the DCT coefficients followed a Gaussian distribution. However, as many authors have pointed out, midfrequency DCT coefficients can be quite accurately modeled by a so-called generalized Gaussian distribution. This distribution is the basis for the development of the structures that will be proposed. Recent works in image watermarking have produced a vast number of algorithms; unfortunately, in most cases no theoretical results assessing their performance are known. In this regard, the theoretical approach presented here is crucial if we want to compare different methods and know their fundamental limits of performance [16].

This paper is organized as follows. In Sections II and III, the watermark generation and the watermark verification processes, respectively, are presented and formulated in mathematical notation. In Section IV, the perceptual model used in our experiments to guarantee that the generated watermarks are invisible is briefly discussed. The generalized Gaussian model for the statistical representation of images in the DCT domain is reviewed in Section V. Then, in Sections VI and VII, the watermark decoding (extraction) and detection problems, respectively, are analyzed, and optimum detector structures and expressions for performance measures are obtained. Theoretical results are contrasted against empirical data obtained through experimentation in Sections VI-D and VII-B for the watermark decoder and detector, respectively.

Fig. 1. Block diagram of the watermark embedding process.

II. WATERMARK EMBEDDING PROCESS
Let a two-dimensional (2-D) discrete sequence represent the luminance component of a sampled image. In the sequel we will always use vector notation (in boldface typesetting) to represent 2-D indexes. The watermark is generated as a DCT-domain signal employing a technique similar to the direct-sequence spread-spectrum modulation schemes used in communications (Fig. 1). In this paper, we will assume that the DCT is applied in blocks of 8 × 8 pixels, as in the JPEG algorithm [17]. We will allow the watermark to carry a hidden message with information that could be used, for instance, to identify the intended recipient of the protected image. This message is mapped by an encoder into a codeword vector. Then, a 2-D sequence is generated in what we call an expansion process, by repeating each element of the codeword in a different set of points in the DCT-domain discrete grid, in such a way that the whole transformed image is covered.

For security purposes, it is possible to introduce uncertainty about the DCT coefficients altered by each codeword element by introducing an interleaving stage, consisting of a key-dependent pseudorandom permutation of the samples. In the sequel we will denote the set of DCT coefficients associated with each codeword coefficient after the interleaving stage. The resulting signal is multiplied in a pixel-by-pixel manner by the output of a pseudorandom sequence (PRS) generator whose initial state depends on the value of the secret key. Finally, the spread-spectrum signal is further multiplied by what we call a perceptual mask, which is basically used to amplify or attenuate the watermark at each DCT coefficient so that the watermark energy is maximized while the alterations suffered by the image are kept invisible. The perceptual mask is obtained through a perceptual analysis of the original image, based on a perceptual model in which frequency-masking properties of the HVS are taken into account. The perceptual model used in the experiments presented in this paper is discussed in more detail in Section IV. The resulting signal is the watermark and is added to the original image to obtain the watermarked version.
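The embedding chain just described (message to codeword, expansion, interleaving, pseudorandom modulation, perceptual-mask weighting, and addition in the DCT domain) can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation: the image size, the scalar mask, and all function and variable names are our assumptions.

```python
import numpy as np

def dct2_8x8_matrix():
    # 8-point DCT-II basis (orthonormal), applied blockwise as in JPEG.
    k = np.arange(8)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / 16) * np.sqrt(2 / 8)
    C[0, :] /= np.sqrt(2)
    return C

def embed(image, codeword, key, mask):
    """Spread-spectrum embedding in the 8x8 blockwise DCT domain (sketch)."""
    C = dct2_8x8_matrix()
    h, w = image.shape
    # Blockwise forward DCT.
    X = image.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    X = C @ X @ C.T
    rng = np.random.default_rng(key)           # PRS generator seeded by the secret key
    n = X.size
    # Expansion: each codeword element covers a set of DCT positions;
    # the key-dependent permutation acts as the interleaving stage.
    b = np.repeat(codeword, n // len(codeword) + 1)[:n]
    b = b[rng.permutation(n)].reshape(X.shape)
    p = rng.choice([-1.0, 1.0], size=X.shape)  # pseudorandom +/-1 sequence
    W = mask * p * b                           # perceptual mask shapes the watermark
    Y = X + W                                  # watermarked DCT coefficients
    # Blockwise inverse DCT back to the pixel domain.
    y = C.T @ Y @ C
    return y.transpose(0, 2, 1, 3).reshape(h, w)
```

With a zero mask the routine returns the original image (the DCT basis is orthonormal), which is a convenient sanity check of the transform plumbing.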
III. WATERMARK VERIFICATION PROCESS
Once a watermarked image is distributed, the rights holder should be able to verify the copyright information to prove his authorship and possibly trace illegal misuses. In Fig. 2 we have represented in block-diagram form the different steps involved in the watermark verification process, which will be analyzed throughout this paper.

First, given an image, the 8 × 8 blockwise DCT transform is computed. The watermark verification process comprises two tests. First, a watermark detector decides whether the image contains a watermark generated from the secret key. If it does contain such a watermark, then the watermark decoder obtains an estimate of the message (see Section II). For each of these tests (Sections VII and VI, respectively), a set of sufficient statistics is computed. Since we assume that the original image is not available during watermark verification, we must treat it as additive noise. In fact, we will use statistical models of the DCT coefficients of common images (Section V) to analytically derive the appropriate sufficient statistics (Sections VI-A and VII-A). Suitable values for the parameters of these models must be either fixed previously or adaptively estimated from the image under test (Section VI-C). As we will see, in order to be able to compute the sufficient statistics, it is necessary to know the pseudorandom sequence and the perceptual mask used in the watermark embedding process (see Section II). Note that it is impossible to know exactly the perceptual mask, since the original image is not accessible. However, a good estimate can be obtained by applying to the received image exactly the same perceptual analysis as in the watermark embedding unit, given the low perceptual distortion that the watermark introduces.

Once the sufficient statistics are computed, a decision is made in the watermark detector after evaluating the log-likelihood function for the binary hypothesis test (Section VII-A) and comparing the result with a threshold. The sufficient statistics for watermark decoding are passed to a decoder, which obtains an estimate of the message.
IV. PERCEPTUAL ANALYSIS
As we have seen in the previous section, during the watermark generation process a perceptual mask multiplies the pseudorandom sequence to guarantee that the alterations introduced by the watermark are invisible. We will assume that the perceptual mask indicates the maximum admissible amplitude of the alteration suffered by each DCT coefficient. To obtain the mask, it is necessary to use a psychovisual model in the DCT domain. For the work presented in this paper, we have followed the model proposed by Ahumada et al. [18], [19], which has been applied by Watson to the adaptive computation of quantization matrices in the JPEG algorithm [20]. This model has been simplified here by disregarding the so-called contrast-masking effect, by which the perceptual mask at a certain coefficient depends on the amplitude of the coefficient itself. Consideration of this effect constitutes a future line of research. On the other hand, the background intensity effect, by which the mask depends on the magnitude of the DC coefficient (i.e., the background), has been taken into account.

The so-called visibility threshold T(i, j) determines the maximum allowable magnitude of an invisible alteration of the (i, j)th DCT coefficient and can be approximated in logarithmic units by the following quadratic function with parameter K:

    log T(i, j) = log T_min + K (log f(i, j) - log f_min)^2 / (r + (1 - r) cos^2 theta(i, j))    (1)

where f(i, j) = (f_x^2(i) + f_y^2(j))^{1/2}, f_x and f_y are, respectively, the vertical and horizontal spatial frequencies (in cycles/degree) of the DCT basis functions, theta(i, j) is the orientation of the spatial-frequency vector, T_min is the minimum value of the threshold, associated with the spatial frequency f_min, and r is taken as 0.7 following [18]. This mathematical model is not valid for the DC coefficient, so f_x and f_y cannot be simultaneously zero. The threshold can be corrected for each block by considering the DC coefficient X_DC of the block and the DC value corresponding to the average luminance of the screen (1024 for an 8-b image) in the following way:

    T'(i, j) = T(i, j) (X_DC / 1024)^{a_T}    (2)

Note that the actual dependence of T' on the block indices has been dropped in the notation for conciseness. The parameters T_min, f_min, K, and a_T used in our scheme have been set following [18].

Once the corrected threshold value has been obtained, the perceptual mask is calculated from it as in (3), where a Kronecker-delta term and a scaling factor allow us to introduce a certain degree of conservativeness in the watermark, to account for those effects that have been overlooked (e.g., spatial masking in the frequency domain [21]). The remaining factors in (3) express the corrected threshold in terms of DCT coefficients instead of luminances (see [18]).

In all the experiments shown in this paper, the perceptual mask obtained by following the procedure explained above has been divided by five. The only purpose of this modification is to artificially introduce a degradation of the performance so that we can obtain empirical estimates of performance measures in a reasonable amount of simulation time, within a range of values of system parameters (e.g., the pulse size) that allows us to compare results for different images. Hence, in a practical situation we should expect considerably better performance.
V. STATISTICAL MODELS OF THE DCT COEFFICIENTS
In this section, we briefly describe a statistical model that has been proposed to better characterize the DCT coefficients of common images. As we will see in the following sections, the use of these models when designing the watermark detector and decoder will lead to considerable improvements in performance [16] with respect to the correlating detector structure, usually proposed in previous literature, which would be optimum only if the original image (i.e., the channel additive noise) could be modeled as a Gaussian stochastic process.

Let the 2-D sequence obtained after applying the DCT to the original image, in blocks of 8 × 8 pixels, be given, and consider the 2-D sequence that results when we extract one fixed DCT coefficient from each block, as in (4). A detailed discussion of the shapes of the histograms for different types of images can be found in [22]. Here we will only review the results in [22] that are relevant for our work. Considering the irregularity of its histogram and its highly image-dependent nature, the DC coefficient cannot be approximated accurately by any closed-form probability density function (pdf). The AC coefficients, as shown in [22] and [23], can be reasonably well approximated by zero-mean generalized Gaussian pdf's, given by the expression

    f(x) = A e^{-|beta x|^c}    (5)

Both A and beta can be expressed as a function of c and the standard deviation sigma:

    beta = (1/sigma) [Gamma(3/c) / Gamma(1/c)]^{1/2},    A = beta c / (2 Gamma(1/c))    (6)

Therefore, this distribution is completely specified by two parameters, c and sigma. Note that the Gaussian and Laplacian distributions are just special cases of this generalized pdf, given by c = 2 and c = 1, respectively.

One of the most important conclusions in [22] is that the DCT coefficients at low frequencies cannot be well approximated by a Gaussian pdf, nor even by a Laplacian pdf. For instance, in [22] it is shown how, for a given test image with characteristics similar to the well-known Lena image, the coefficients at low frequencies can be reasonably well modeled by a generalized Gaussian pdf. This fact has important consequences, as we will see later, in the derivation of the detector structures involved in watermark verification procedures.

For the sake of generality in the theoretical derivations, we will assume that the parameters c and sigma can take different values for each coefficient of the blockwise DCT. In other words, we will model each sequence as the output of an independent identically distributed (i.i.d.) generalized Gaussian stochastic process with its own parameters c and sigma. We will also define, in order to simplify the notation, the 2-D sequences in (7) and (8) representing the parameters that characterize the pdf of each DCT coefficient.
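The pdf in (5)-(6) is easy to write down and check numerically: with c = 2 it must collapse to the Gaussian density and with c = 1 to the Laplacian. The sampling routine (drawing |X|^c from a gamma distribution) is a standard trick we add for later experiments, not something taken from the paper:

```python
import numpy as np
from math import gamma, sqrt, pi

def gg_pdf(x, c, sigma):
    """Zero-mean generalized Gaussian pdf, eq. (5)-(6)."""
    beta = sqrt(gamma(3.0 / c) / gamma(1.0 / c)) / sigma
    A = beta * c / (2.0 * gamma(1.0 / c))
    return A * np.exp(-np.abs(beta * x) ** c)

def gg_sample(rng, c, sigma, size):
    # |X|^c is gamma-distributed, which gives a simple way to draw
    # generalized Gaussian samples (an assumption of this sketch).
    beta = sqrt(gamma(3.0 / c) / gamma(1.0 / c)) / sigma
    g = rng.gamma(1.0 / c, 1.0, size=size)
    return rng.choice([-1.0, 1.0], size=size) * g ** (1.0 / c) / beta

x = np.linspace(-3, 3, 7)
gauss = np.exp(-x**2 / 2) / sqrt(2 * pi)        # N(0, 1) density
assert np.allclose(gg_pdf(x, 2.0, 1.0), gauss)  # c = 2 recovers the Gaussian
lap = np.exp(-sqrt(2) * np.abs(x)) / sqrt(2)    # Laplacian with unit variance
assert np.allclose(gg_pdf(x, 1.0, 1.0), lap)    # c = 1 recovers the Laplacian
```

Small values of c produce the sharp peak and heavy tails that, according to [22], fit low-frequency DCT coefficients far better than the Gaussian or Laplacian special cases.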
VI. HIDDEN INFORMATION DECODING
We have just seen that, even though the DCT coefficients (excluding the DC coefficient) have histograms that can be reasonably well approximated by generalized Gaussian distributions, the pure Gaussian model is in many cases far from being the most adequate representation. This fact raises the question of whether the detector structure based on a correlation receiver, usually proposed in the previous literature on DCT-domain watermarking of images [24], is appropriate for either watermark decoding or watermark detection. In the following sections we will obtain the optimum maximum likelihood (ML) decoder structures which result when a generalized Gaussian noise model is assumed, and will also analyze performance for different values of c, including the Gaussian case (c = 2). We will see that in some cases, such as the pure Gaussian distribution, improvements can be attained just by disregarding DCT coefficients with high amplitudes. We will also show how to estimate the parameters of the generalized Gaussian distribution that are necessary for a practical implementation of the newly proposed structures.
A. Optimum Decoder for the Generalized Gaussian Model
Let us assume that the codeword vector can be used to encode one out of a set of possible messages and that a set of codeword vectors indicates which codeword corresponds to each message. During the watermark verification process, the hidden information decoder obtains an estimate of the hidden message embedded in the watermarked image. If we assume that the messages are equiprobable, the detector which minimizes the average probability of error is given by the ML test, in which the estimated message is the one satisfying (9). Considering the statistical model that has been assumed for the DCT coefficient sequences, and assuming also that these sequences are statistically independent (an assumption justified by the uncorrelating properties of the DCT for common images), the ML test is equivalent to seeking the index which satisfies (10), where the candidate watermarks are generated from the corresponding codewords following the mechanism presented in Section II. We can easily prove that the coefficients in (11) are sufficient statistics for the ML hidden-information decoding process, which is equivalent to seeking the codeword that maximizes the expression in (12). For the case in which a binary antipodal constellation of codewords is used, this test is equivalent to a bit-by-bit hard decoder (13).

It is interesting to analyze the performance of the hidden information decoder for a given original image. Specifically, we will derive the bit error rate, measured as the proportion of secret keys for which an error occurs while decoding a certain bit. In order to do so, it is first necessary to statistically characterize the sufficient statistics assuming a fixed original image. In Appendix A we show that these coefficients can be modeled as the output of an additive white Gaussian noise (AWGN) channel whose noise samples are i.i.d., zero mean, and Gaussian. The parameters of this equivalent vector channel are given in (14) and (15), with the auxiliary quantities defined in (16) and (17). Therefore, when the codewords form a binary antipodal constellation, the probability of bit error for the bit-by-bit hard decoder is given by (18), where Q denotes the Gaussian tail integral

    Q(x) = (1/sqrt(2 pi)) Int_x^inf e^{-t^2/2} dt.
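For binary antipodal signaling, the hard decoder (13) reduces, per bit, to asking whether +s or -s fits the received coefficients better under the generalized Gaussian model, i.e., to the sign of the sum of |y + s|^c - |y - s|^c over the coefficients carrying that bit; c = 2 recovers the classical correlation receiver. A self-contained sketch with synthetic data (the variable names and the Gaussian toy noise are ours):

```python
import numpy as np

def decode_bits(y, s, sets, c):
    """Bit-by-bit hard ML decoding under i.i.d. generalized Gaussian noise.

    y    -- received DCT coefficients (the original image acts as noise)
    s    -- per-coefficient watermark amplitude (PRS times perceptual mask)
    sets -- sets[k] holds the coefficient indices carrying bit k
    c    -- shape parameter of the noise model (c = 2: correlation receiver)
    """
    bits = np.empty(len(sets))
    for k, idx in enumerate(sets):
        # Sufficient statistic: positive when +s fits the data better than -s.
        r_k = np.sum(np.abs(y[idx] + s[idx]) ** c - np.abs(y[idx] - s[idx]) ** c)
        bits[k] = np.sign(r_k)
    return bits

rng = np.random.default_rng(1)
n_bits, chips = 32, 128
b = rng.choice([-1.0, 1.0], size=n_bits)            # hidden message bits
s = rng.choice([-1.0, 1.0], size=n_bits * chips)    # PRS with unit mask
x = 2.0 * rng.standard_normal(n_bits * chips)       # toy "image" noise
y = x + np.repeat(b, chips) * s                     # watermarked coefficients
sets = np.arange(n_bits * chips).reshape(n_bits, chips)
decoded = decode_bits(y, s, sets, c=2.0)            # recovers b in this setup
```

With 128 coefficients per bit the per-bit statistic is many standard deviations away from the decision boundary, so the toy decoder recovers the message with very high probability.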
B. Decoder with Point Elimination
If we examine typical histograms of DCT coefficients, we can observe that there are some samples in the tails, with relatively high amplitudes, that the generalized Gaussian model cannot fit adequately. This deviation with respect to the theoretical model in the tails of the histogram leads us to think that including these samples in the optimum detector for that model may result in a loss of performance in the watermark decoder. To explore this possibility, we next define a new statistic following the same expression as that derived in Section VI-A for a generalized Gaussian noise model, but in which only those DCT coefficients whose amplitude is below a certain threshold are taken into account. We express this threshold as the product of a constant and the perceptual mask. The resulting statistic is given in (19). Following similar arguments as in Appendix A, it can be shown that these statistics can also be modeled as the output of an AWGN channel, whose parameters are now given by (20) and (21). The sums in these expressions are restricted to the sets of retained coefficients, and the 2-D indicator sequence defined in (22) equals one where the coefficient amplitude is below the threshold and zero otherwise. The mean and variance of the retained terms are given by (16) and (17). In Section VI-D, in which we discuss the experimental results, we will see how the modified statistic just derived can lead to a considerable improvement in the performance of the hidden information decoder.
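A sketch of the point-elimination statistic of (19): it is the Section VI-A statistic restricted to coefficients whose amplitude stays below a constant times the perceptual mask. The toy data below plants a few large outliers to show why eliminating them helps the c = 2 (correlation) receiver; all names and values are ours, not the paper's:

```python
import numpy as np

def decode_bit_with_elimination(y, s, mask, c, thresh_const):
    """Hard decision for one bit; coefficients with |y| above
    thresh_const * mask are excluded from the sufficient statistic."""
    keep = np.abs(y) <= thresh_const * mask   # indicator sequence of eq. (22)
    r = np.sum(np.abs(y[keep] + s[keep]) ** c
               - np.abs(y[keep] - s[keep]) ** c)
    return 1.0 if r >= 0 else -1.0

# Toy demo: Gaussian "image" noise plus a few huge outliers that the
# generalized Gaussian model does not capture.
rng = np.random.default_rng(3)
mask = np.ones(256)                      # unit perceptual mask (toy)
s = rng.choice([-1.0, 1.0], size=256)    # pseudorandom sequence
x = rng.standard_normal(256)
x[:8] = 50.0 * s[:8]                     # outliers aligned with the PRS
y = x + (-1.0) * s                       # the embedded bit is -1

plain = decode_bit_with_elimination(y, s, mask, c=2.0, thresh_const=1e9)
robust = decode_bit_with_elimination(y, s, mask, c=2.0, thresh_const=10.0)
# With an effectively infinite threshold ("plain"), the eight outliers dominate
# the correlation statistic; with elimination ("robust") they are discarded
# and the embedded bit is recovered.
```

The effect mirrors the observation in the text: a handful of tail samples that the model cannot explain is enough to degrade the optimum-for-the-model detector, and simply dropping them restores performance.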
C. Estimation of Distribution Parameters
In the preceding sections, we have assumed that the parameters c and sigma of the distributions of the DCT coefficients were known. However, in practice these parameters must be estimated from the image under test. In other words, in a practical implementation, a parameter-estimation stage has to be added at the input of the block devoted to the computation of sufficient statistics in Fig. 2.

As a first approach, the results presented in this paper correspond to a decoder structure in which c is fixed to a unique value, independent of the image under test, for all the coefficients. Then, different values have been tried in order to analyze the impact of this parameter on the performance of the decoder.
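One simple way to implement such an estimation stage (our sketch, not necessarily the estimator used in the paper) is moment matching: sigma is taken as the sample standard deviation, and c is found by numerically inverting the ratio E|X| / sqrt(E[X^2]), which for a generalized Gaussian equals Gamma(2/c) / sqrt(Gamma(1/c) Gamma(3/c)) and grows monotonically with c:

```python
import numpy as np
from math import gamma, sqrt

def gg_moment_ratio(c):
    # E|X| / sqrt(E[X^2]) for a zero-mean generalized Gaussian with shape c.
    return gamma(2.0 / c) / sqrt(gamma(1.0 / c) * gamma(3.0 / c))

def estimate_gg_params(x):
    """Moment-matching estimates of (c, sigma), by bisection on the ratio."""
    sigma = x.std()
    target = np.abs(x).mean() / sigma
    lo, hi = 0.05, 5.0                  # assumed search range for c
    for _ in range(60):                 # the ratio is increasing in c
        mid = 0.5 * (lo + hi)
        if gg_moment_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), sigma

# Sanity check on synthetic generalized Gaussian data with c = 0.5, sigma = 1:
# |X|^c is gamma-distributed, so samples can be drawn via the gamma distribution.
rng = np.random.default_rng(5)
g = rng.gamma(2.0, 1.0, size=200_000)          # shape 1/c = 2
beta = sqrt(gamma(6.0) / gamma(2.0))           # Gamma(3/c)/Gamma(1/c), sigma = 1
x = rng.choice([-1.0, 1.0], size=g.size) * g ** 2.0 / beta
c_hat, sigma_hat = estimate_gg_params(x)
```

For the Gaussian the ratio is sqrt(2/pi) and for the Laplacian 1/sqrt(2), so the estimator cleanly separates the special cases from the heavier-tailed shapes relevant to low-frequency DCT coefficients.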
