IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 3, MARCH 2015

A Framework for Secure Computations With Two Non-Colluding Servers and Multiple Clients, Applied to Recommendations

Thijs Veugen, Robbert de Haan, Ronald Cramer, and Frank Muller

Manuscript received February 17, 2014; revised September 2, 2014 and October 29, 2014; accepted October 29, 2014. Date of publication November 13, 2014; date of current version January 22, 2015. The work of T. Veugen and F. Muller was supported by the Dutch National COMMIT Programme. The work of R. de Haan was supported by the Dutch Science Foundation through the Vrije Competitie Project entitled Applications of Arithmetic Secret Sharing. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. C.-C. Jay Kuo. T. Veugen is with the Cyber Security Group, Delft University of Technology, Delft, The Netherlands, and also with the Department of Technical Sciences, TNO, The Netherlands (e-mail: thijs.veugen@tno.nl). R. de Haan was with the Centrum Wiskunde and Informatica (CWI), Amsterdam, The Netherlands. R. Cramer is with the Centrum Wiskunde and Informatica (CWI), Amsterdam, The Netherlands, and also with Leiden University, Leiden, The Netherlands (e-mail: cramer@cwi.nl). F. Muller is with the Department of Technical Sciences, TNO, The Netherlands (e-mail: frank.muller@tno.nl). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIFS.2014.2370255

Abstract—We provide a generic framework that, with the help of a preprocessing phase that is independent of the inputs of the users, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. Our approach is shown to be provably secure in an adversarial model where one of the servers may arbitrarily deviate from the protocol specification, as well as employ an arbitrary number of dummy users. We use these techniques to implement a secure recommender system based on collaborative filtering that becomes more secure, and significantly more efficient than previously known implementations of such systems, when the preprocessing efforts are excluded. We suggest different alternatives for preprocessing, and discuss their merits and demerits.

Index Terms—Secure multi-party computation, malicious model, secret sharing, client-server systems, preprocessing, recommender systems.

I. INTRODUCTION

RECOMMENDATION systems consist of a processor together with a multitude of users, where the processor provides recommendations to requesting users, which are deduced from personal ratings that were initially submitted by all users. It is easy to see that, in a non-cryptographic setup of such a system, the processor is both able to learn all data submitted by the users, and to spoof arbitrary, incorrect recommendations.

In this work we replace the recommendation processor by a general two-server processor in such a way that, as long as one of the two servers is not controlled by an adversary and behaves correctly,
1) the privacy of the ratings and recommendations of the users is maintained to the fullest extent possible, and
2) a server that is under adversarial control is unable to disrupt the recommendation process in such a way that an incorrectly computed recommendation will not be detected by the requesting user.

Our result uses a modified version of the standard model for secure multi-party computation, which is a cryptologic paradigm in which the players jointly perform a single secure computation and then abort. In our model the computation is ongoing (recommendations are repeatedly requested) and outsourced to two external servers that do not collude. This approach allows for the involvement of many users that need only be online for very short periods of time in order to provide input data to, or request output data from, the servers. In practice, one of the two servers could be the service provider that wishes to recommend particular services to users, and the other server could be a governmental organisation guarding the privacy protection of users. The role of the second server could also be commercially exploited by a privacy service provider, supporting service providers in protecting the privacy of their customers.

While this work focuses on the application of secure recommendation systems, we point out that our underlying framework is sufficiently generic for use in other, similar applications, and also easily extends to model variations involving more than two servers. In general, any configuration is foreseen consisting of multiple servers that have a joint goal of securely delivering a service to a multitude of users. The online phase will be computationally secure as long as one of the servers is honest [1]. However, the application should allow for a significant amount of preprocessing, which is independent of the data, and could be performed during inactive time.

Although most related work (see Subsection I-A) is only secure in the semi-honest model, we provide security in the malicious model (see Subsection III-B). There are several reasons why security in the malicious model is important. First of all, the main goal of securing the system is that we do not want the servers to learn the personal data of users. We therefore should not just trust them to follow the rules of the protocol. Second, in a malicious model the correctness of the user outputs is better preserved, because outputs cannot be corrupted by one (malicious) server on his own. Third, a malicious server might introduce a couple of new, so-called dummy users, which he controls. These dummy users might help him deduce more personal data than is available through the protocol outputs. We show that in our malicious model this is not possible.

A. Related Work
Most related work on privacy preserving recommendations is secure in the semi-honest model, so parties are assumed to follow the rules of the protocol. As mentioned by Lagendijk et al. [2], "Achieving security against malicious adversaries is a hard problem that has not yet been studied widely in the context of privacy-protected signal processing." Erkin et al. [3] securely computed recommendations based on collaborative filtering. They used homomorphic encryption within a semi-honest security model, just like Bunn and Ostrovsky [4]. Goethals et al. [5] stated that although such techniques can be made secure in the malicious model, that would make them unsuitable for real-life applications because of the increased computational and communication costs.

Polat and Du [6] used a more lightweight approach by statistically hiding personal data, which unfortunately has been proven insecure by Zhang et al. [7]. Atallah et al. [8] used a threshold secret-sharing approach for secure collaborative forecasting with multiple parties.

Nikolaenko et al. [9] securely computed collaborative filtering by means of matrix factorization. They used both homomorphic encryption and garbled circuits in a semi-honest security model. In another paper [10], these authors use similar techniques to securely implement Ridge regression, a different approach to collaborative filtering.

Catrina and de Hoogh [11] developed an efficient framework for secure computations in the semi-honest model, based on secret sharing and statistical security, which could also be used for a recommender system. Peter et al. [12] considered a model where users can outsource computations to two non-colluding servers. They use homomorphic encryption, each user having its own key, but require the servers to follow the rules of the protocol.

Canny's approach [13] to private collaborative filtering is able to cope with malicious behaviour. A detailed comparison with our approach can be found in Subsection V-E.

In the last several years, a couple of computation protocols have been developed which are both practical and secure in the malicious model. The idea is to use public-key techniques in a data-independent pre-processing phase, such that cheap information-theoretic primitives can be exploited in the online phase, which makes the online phase efficient. In 2011, Bendlin et al. [1] presented such a framework with a somewhat homomorphic encryption scheme for implementing the pre-processing phase. This has lately been improved by Damgård et al. [14], which has become known as SPDZ (pronounced "Speedz"). Last year, Damgård et al. [15] showed how to further reduce the precomputation effort.

B. Our Contribution

We used the SPDZ framework, as introduced in the previous subsection, which enables secure multi-party computations in the malicious model, extended it to the client-server model, and worked out a secure recommendation system within this setting. Not only did this lead to a recommendation system that is secure in the malicious model, but the online phase also became very efficient in terms of computation and communication complexity.

To extend SPDZ to the client-server model, we developed secure protocols that enable users (clients) to upload their data to the servers, and afterwards obtain the computed outputs from the servers. This required a subprotocol for generating duplicate sharings, as described in Appendix A.
To securely compute a recommendation within SPDZ, we had to develop a secure comparison protocol and a secure integer division protocol. For the comparison protocol, we combined ideas from Nishide and Ohta [16], and Schoenmakers and Tuyls [17]. In fact, we fine-tuned the bitwise comparison protocol from [17] to a linear-round secret-sharing setting, and replaced the constant-round solution of [16]. For the integer division, which forms the bottleneck of our application, we took a secure solution from Bunn and Ostrovsky [4], translated it to a secret-sharing setting, and put in our secure comparison protocol. This yielded an efficient secure integer division, because its output was relatively small in our application.

Finally, we took the effort of implementing and testing our recommender system within the SPDZ framework, and comparing it with the current state of the art.

C. Organization of the Paper

We first explain the basics of collaborative filtering and authenticated secret sharing, which are necessary for our paper. In Section III we describe how we fit secure multi-party computation into the client-server model, and the resulting security model. Section IV focusses on the secure implementation of our recommender system, the various options for preprocessing, and the more complicated subprotocols that are needed. The complexity of this implementation is analysed in Section V, and compared with related work. Finally, we stress the wide applicability of our secure framework.

II. PRELIMINARIES

After explaining collaborative filtering, we introduce the concepts of SPDZ: tagged secret sharing, and the basic protocols of the computation phase.

A. User-Based Collaborative Filtering

We provide as an example application, taken from [3], a basic implementation for generating recommendations, more specifically a collaborative filtering method with user-neighborhood-based rating prediction [18]. We do not pretend to present the best recommender system, but merely want to introduce some basic components, and show how they can be implemented securely.

There is one processor R, with N users, and M different predefined items. A small subset of size S (1 ≤ S < M) of these items is assumed to have been rated by each user, reflecting his or her personal taste. The remaining M − S items have only been rated by a small subset of users that have experienced the particular item before. A user that is looking for new, unrated items can ask the processor to produce estimated ratings for (a subset of) the M − S items. The number N of users can be large, say 1 million, M is in the order of hundreds, and S usually is a few tens [18].

During initialisation, each user n uploads to the processor a list of at most M ratings of items, where each rating V(n, m) is represented by a value within a pre-specified interval. Users can update their ratings at any time during the lifetime of the system. A user can, at any time after the initialization, request a recommendation from the processor. When the processor receives such a request from a user, it computes a recommendation for this user as follows. First, it uses the initial S ratings in each list to determine which other users are considered to be similar to the requesting user, i.e. have similarly rated the first S items. The remaining M − S entries in the lists are then used to compute and return a recommendation for the requesting user, consisting of M − S ratings averaged over all similar users.

To get an idea of the required computation, we describe the necessary computational steps.
Let U_m be the set of users that have rated item m, S < m ≤ M.

1) Each user uploads his (between S and M) ratings to the processor. Like in [3], we assume the first S (similarity) ratings have been normalized and scaled beforehand. A rating is normalized by dividing it by the length (Euclidean norm) of the vector (V(n, 1), ..., V(n, S)), yielding a real number between 0 and 1. Next, this real number is scaled and rounded to a positive integer consisting of a few (e.g. four) bits. The remaining ratings should only be scaled and rounded to an integer with the same maximal number of bits.

2) When user A asks for a recommendation, the processor computes M − S estimated ratings for A. First, the similarities $\mathrm{Sim}_{A,n} = \sum_{m=1}^{S} V(n,m) \cdot V(A,m)$ are computed for each user n.

3) Each similarity value is compared with a public threshold t ∈ N+, and its outcome is represented by the bit $\delta_n = (t < \mathrm{Sim}_{A,n})$.

4) The recommendation for user A consists of M − S estimated ratings, the estimated rating for item m, S < m ≤ M, simply being an average of the ratings of the similar users: $\mathrm{Rec}_m = \left(\sum_{n \in U_m} \delta_n \cdot V(n,m)\right) \div \left(\sum_{n \in U_m} \delta_n\right)$, where ÷ denotes integer division.

5) The processor sends back the recommendation Rec_{S+1}, ..., Rec_M to user A.

Here, we used the notation (x < y) to denote the binary result of the comparison of two numbers x and y, which equals 1 when the comparison is true, and 0 otherwise.
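To make the arithmetic of the steps above concrete, the short Python sketch below evaluates steps 2) to 4) directly on plaintext ratings; it illustrates only the computation, not the secure protocol. The function name, the dictionary-based data layout, and the choice to return 0 when no similar user has rated an item are our own assumptions, not part of the paper.

```python
def recommend(V, A, S, t):
    """Plaintext sketch of steps 2)-4).
    V: dict user -> dict item -> integer rating (items 0..M-1, the first S are
    the normalized similarity items); A: requesting user; t: public threshold."""
    # Step 2: similarities Sim_{A,n} over the first S ratings
    sim = {n: sum(V[n][m] * V[A][m] for m in range(S)) for n in V if n != A}
    # Step 3: comparison bits delta_n = (t < Sim_{A,n})
    delta = {n: int(t < s) for n, s in sim.items()}
    # Step 4: Rec_m = (sum_n delta_n * V(n,m)) div (sum_n delta_n), over users that rated m
    rec = {}
    for m in {i for ratings in V.values() for i in ratings if i >= S}:
        users_m = [n for n in delta if m in V[n]]
        den = sum(delta[n] for n in users_m)
        num = sum(delta[n] * V[n][m] for n in users_m)
        rec[m] = num // den if den else 0  # integer division; 0 if no similar user rated m
    return rec
```

In the secure version described later, exactly these additions, multiplications, comparisons, and integer divisions are evaluated on secret-shared values instead of plaintext ratings.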
B. Tagged Secret Sharing

Let p be a prime consisting of ℓ bits. We denote by F_p the field of integers modulo p with ordinary addition and multiplication, and use the elements as if they were integers of the set {0, 1, ..., p − 1}. We use + for addition and · for multiplication, both in the field F_p, and therefore omit the reduction modulo p. All secret values, which are elements of F_p, are additively shared between the servers. The framework of tagged secret sharing used here is from Bendlin et al. [1].

1) Authenticated Secret Sharing: An external dealer distributes shares of a secret value x ∈ F_p to the two servers as follows:

1) The dealer selects a value r ∈ F_p uniformly at random.
2) The dealer sends the value r to server 1 and the value x − r to server 2.

The values x_1 = r and x_2 = x − r are considered to be the share of server 1 and the share of server 2, respectively. It should be clear from the description above that the shares x_1 and x_2 are both individually statistically independent of the secret x, while together they allow one to determine the value of x, by simply adding these shares together.

In addition to the distribution of the shares, the dealer distributes authentication tags on the shares with respect to the authentication code C : F_p × F_p² → F_p, defined as C(x, (α, β)) = α · x + β. Here the value (α, β) is called the authentication key and the value α · x + β the authentication tag for the share x.

For every share x_1 of server 1, the dealer generates a random authentication key (α_2, β_2), computes the corresponding authentication tag m_1 = α_2 · x_1 + β_2, and sends the key (α_2, β_2) to server 2, and the share x_1 and tag m_1 to server 1. Symmetrically, this is done for the share x_2 of server 2.

We use the notation ⟨x⟩ to denote such a randomized share distribution with corresponding authentication keys and tags for the value x, and call ⟨x⟩ an (authenticated) secret sharing. Equivalently, we say that the value x is secret-shared when a secret sharing ⟨x⟩ has been constructed.

Suppose that for a secret sharing ⟨x⟩ the value x is to be revealed to server 1. This proceeds by server 2 sending the share x_2 and the tag m_2 to server 1. Server 1 then verifies whether m_2 = α_1 · x_2 + β_1 and recovers x = x_1 + x_2. If the tag values are incorrect, the protocol is aborted. This is referred to as opening ⟨x⟩ to server 1, and works symmetrically for server 2. A secret sharing can also be opened to a party that is external to servers 1 and 2, by sending the party all relevant shares, authentication tags and authentication keys, and letting this party do the corresponding verifications.

It is easy to see that any attempt by, for example, server 1 to correctly adjust a tag m_1 with key (α_2, β_2) during a malicious share modification from x_1 to a value x'_1 ≠ x_1 boils down to guessing the value α_2 · (x'_1 − x_1), which only succeeds with probability 1/p if the value of α_2 is unknown.

Although we only focus on two servers, the framework can cope with multiple servers. When n servers are available, the basic idea is that each server is given authentication keys for each of the n shares [1]. Intuitively, this allows an honest server to detect malicious behaviour of the remaining servers, so in SPDZ one honest server is sufficient for providing security. The quadratic amount of authentication keys has lately been improved by Damgård et al. [14], where tag authentication is linear in the number of players.

2) Structured Authentication: As we demonstrate in the next section, it is possible to perform basic linear operations on secret-shared values without interaction. However, in order to make it possible to non-interactively update the corresponding authentication keys and tags, we require both servers to fix the respective α-components of their authentication keys. Henceforth, each server i maintains a fixed private value α_i ∈ F_p that is used in all of its authentication keys, and we use the notation [x] for a secret sharing of x based on such a-priori fixed α-components.

Having an incorruptible external dealer, the servers can easily create such a structured distribution [x] from the standard distribution ⟨x⟩ obtained from the dealer. We describe the necessary steps for server 1; they are similar for server 2. All that is required is that for every α_1^x received from the dealer (the α-component of server 1's key for the sharing of x), server 1 sends the difference δ_1^x = α_1 − α_1^x to server 2, after which server 2 updates his tag m_2 to m_2 + δ_1^x · x_2. Since the values α_1 and α_1^x are only known to server 1, δ_1^x reveals no information about α_1 to server 2 [1].
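As a minimal illustration of the sharing and opening mechanics described above, the following sketch shows a dealer producing a sharing of the structured form [x] (additive shares plus one-time MAC tags under the servers' fixed α-components), and server 1 verifying a tag when the value is opened to it. The particular prime, the function names, and the dictionary layout are illustrative choices of ours and not taken from the paper.

```python
import secrets

p = 2**61 - 1  # an l-bit prime; this particular choice is only for illustration

def deal(x, alpha1, alpha2):
    """Dealer: additively share x in F_p and attach authentication tags.
    alpha1, alpha2 are the servers' fixed alpha-components (structured sharing [x])."""
    x1 = secrets.randbelow(p)            # r
    x2 = (x - x1) % p                    # x - r
    beta1 = secrets.randbelow(p)         # key component held by server 1 (for x2)
    beta2 = secrets.randbelow(p)         # key component held by server 2 (for x1)
    m1 = (alpha2 * x1 + beta2) % p       # tag on x1, held by server 1
    m2 = (alpha1 * x2 + beta1) % p       # tag on x2, held by server 2
    server1 = {"share": x1, "tag": m1, "beta": beta1}
    server2 = {"share": x2, "tag": m2, "beta": beta2}
    return server1, server2

def open_to_server1(server1, server2, alpha1):
    """Server 2 sends (x2, m2); server 1 checks the tag and reconstructs x, or aborts."""
    x2, m2 = server2["share"], server2["tag"]
    if m2 != (alpha1 * x2 + server1["beta"]) % p:
        raise ValueError("tag verification failed: abort")
    return (server1["share"] + x2) % p

# Example: share 42 and open it to server 1
alpha1, alpha2 = secrets.randbelow(p), secrets.randbelow(p)
s1, s2 = deal(42, alpha1, alpha2)
assert open_to_server1(s1, s2, alpha1) == 42
```

If server 2 sent a modified share x'_2, it would also have to shift m_2 by α_1 · (x'_2 − x_2) without knowing α_1, which succeeds only with probability 1/p, matching the argument above.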
C. Core Protocols for the Computation Phase

In this section we describe the core protocols that are needed for our recommender system. Suppose that servers 1 and 2 hold fixed partial authentication keys α_1, α_2 ∈ F_p as described in Subsection II-B.2.

1) Linear Operations: Let [x] and [y] be secret sharings of arbitrary values x, y ∈ F_p and let c ∈ F_p be a public constant. We will show how to non-interactively compute secret sharings for [x + y], [cx] and [x + c], respectively.

An authenticated secret sharing of the sum z = x + y is computed by locally adding the shares, keys, and tags of x and y. Let z_i = x_i + y_i, β_i^z = β_i^x + β_i^y and m_i^z = m_i^x + m_i^y, for i = 1, 2, where z_i, m_i^z, and β_i^z are held by server i. Then indeed z = z_1 + z_2 = x + y, and the authentication tags m_i^z of the shares of z are correct, with authentication keys (α_i, β_i^z).

From a sharing [x], an authenticated secret sharing of cx is computed by local multiplication of the shares, keys, and tags of x. Let z_i = c · x_i, β_i^z = c · β_i^x, and m_i^z = c · m_i^x, for i = 1, 2. Then indeed z = z_1 + z_2 = cx, and the authentication tags m_i^z of the shares of z are correct.

To add a public constant to a secret sharing [x], one party adds the constant to its share, and the other party adjusts its authentication key. Let z_1 = c + x_1, z_2 = x_2, β_1^z = β_1^x, β_2^z = β_2^x − c · α_2, m_1^z = m_1^x, and m_2^z = m_2^x. Then indeed z = z_1 + z_2 = c + x, and the authentication tags remain correct.

2) Multiplication: We have seen in the previous subsection that linear operations on secret-shared values can be handled non-interactively, including the updates of the corresponding authentication keys and tags. In this section we describe how to turn the multiplication of two secret-shared values into an operation that is essentially linear, using a precomputed multiplication triplet. Since the multiplication protocol below requires some secret sharings to be opened, the multiplication of two shared values requires interaction between the two servers. For clarity, we omit the updating of authentication keys and tags, and the details of opening a secret-shared value, which have been described earlier.

Let [x] and [y] denote secret sharings of arbitrary values x, y ∈ F_p, and let ([a], [b], [c]) be a given multiplication triplet such that c = ab. The following sequence of local linear operations and interactions is used to compute a sharing [z] where z = xy, making use of the precomputed multiplication triplet:

1) The servers locally compute the secret sharing [v] = [x − a] from [x] and [a], and then open it towards each other.
2) The servers locally compute the secret sharing [w] = [y − b] from [y] and [b], and then open it towards each other.
3) The servers locally compute the secret sharing [z] = [xy] = w[a] + v[b] + [c] + vw.

The correctness of the outcome of the multiplication protocol is easily verified by expanding xy = (v + a)(w + b) = wa + vb + ab + vw.
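The triplet-based multiplication can be illustrated in a few lines of Python operating on plain additive shares; as in the protocol description above, the authentication tags and key updates are omitted. The helper names and the example triplet are our own, and the "opening" is simulated locally by simply adding both shares.

```python
import secrets

p = 2**61 - 1  # illustrative prime, as in the earlier sketch

def share(x):
    """Additive two-party sharing of x in F_p."""
    x1 = secrets.randbelow(p)
    return (x1, (x - x1) % p)

def add(xs, ys):          # local computation of [x + y]
    return tuple((a + b) % p for a, b in zip(xs, ys))

def sub(xs, ys):          # local computation of [x - y]
    return tuple((a - b) % p for a, b in zip(xs, ys))

def open_shares(xs):      # both servers exchange their shares and add them
    return sum(xs) % p

def triplet_mul(xs, ys, triplet):
    """Compute a sharing of x*y from a precomputed triplet ([a], [b], [c]) with c = a*b."""
    a_s, b_s, c_s = triplet
    v = open_shares(sub(xs, a_s))   # v = x - a, opened towards both servers
    w = open_shares(sub(ys, b_s))   # w = y - b, opened towards both servers
    # [z] = w*[a] + v*[b] + [c] + v*w; the public term v*w is added by one server only
    z = [(w * a_s[i] + v * b_s[i] + c_s[i]) % p for i in (0, 1)]
    z[0] = (z[0] + v * w) % p
    return tuple(z)

# Example: a dealer-style triplet, then a multiplication of 7 and 9
a, b = 123, 456
triplet = (share(a), share(b), share((a * b) % p))
zs = triplet_mul(share(7), share(9), triplet)
assert open_shares(zs) == 63
```

In the actual protocol, each of these local linear steps also carries the tag and key updates of the linear operations above, and every opening includes the tag verification, so a deviating server escapes detection only with probability 1/p.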
III. THE SECURE CLIENT-SERVER MODEL

A. Overview

Our secure framework relies heavily on techniques from the cryptologic area of secure multi-party computation [19], [20]. The problem of secure multi-party computation considers a fully connected communication network of n parties, where the parties wish to compute the outcome of a given function f on their respective inputs. However, apart from the output, no information on the inputs should leak during the course of the computation. In this work we consider functions f that correspond with the computation of a recommendation for a user.

The ideal solution to the problem of secure multi-party computation involves an independent, incorruptible mediator that privately takes all the required inputs, computes and reveals the desired output to the appropriate parties, and then forgets everything. The research area of secure multi-party computation studies techniques that allow the parties to simulate the behavior of such a mediator.

These simulations have the following structure. First, there is an input phase that enables the parties to encrypt their respective inputs for use in the secure computation. Then a computation phase takes place, during which an encrypted output of the function f is computed from the encrypted inputs. Finally, an output phase takes place, where the output is decrypted and sent to the appropriate parties.

We consider secure multi-party computation in the preprocessing model [21], where at some point in time prior to the selection of the inputs, a preprocessing phase takes place that establishes the distribution of an arbitrary amount of correlated data between the parties involved in the computation. Consequently, this data is completely independent of the input data of the parties. The goal of the preprocessing is to remove as much of the complexity and interaction from the actual computation as possible, which as a result makes this computation extremely efficient. We discuss possible ways to implement such a preprocessing phase in Section IV-A.

In theory, using techniques from secure multi-party computation, it is possible for the users of the recommender system to securely and jointly compute the recommendations themselves. However, for a large number of users that approach quickly becomes impractical. In this work we instead let the users outsource the problem of multi-party computation to two dedicated servers that execute a number of two-party computations with preprocessing. Unlike in ordinary secure multi-party computation, the inputs are here provided by external parties, and the outputs are also returned to external parties. We therefore require special techniques to correctly integrate these operations into the computation. Moreover, we design the system in such a manner that the users need only provide every input (and possible update) once, so they need not retransmit their current inputs for the computation of every individual recommendation.

B. Adversarial Behavior and Security

We consider an adversary that is able to take full control of one of the two servers involved in the two-party computation with preprocessing. Moreover, the adversary can initially introduce an arbitrary number of so-called dummy users into the system, which are also under his complete control. Dummy users can in particular, like ordinary users, input or update their ratings, and request recommendations. The adversary can read all data available to the entities that are under his control, and can make them deviate from the cryptographic protocol specification arbitrarily.

The goal is to prove that the actions of the adversary have essentially no impact on the outcome or the security of the protocol. In order to describe the security level of our secure recommendation system we make use of the real/ideal world paradigm [22].
This paradigm is based on the comparison between two scenarios. In the first scenario, the real world, the adversary participates in a normal protocol execution. In the second scenario, the ideal world, all participants in the protocol have black-box access to the functionality that the protocol in the real world attempts to emulate. Moreover, there exists a simulator that creates a virtual environment around the adversary, and interacts with the adversary in a special fashion. The goal of the simulator in the ideal world is to simulate the expected world-view of the adversary during a real protocol execution. The security analysis now consists of showing that the adversary cannot distinguish whether it is acting during a protocol execution in the real world or in the ideal world. Since no effective attack is possible in the ideal world, it then follows that no effective attack is possible in the real world either.

Our description of the real/ideal world paradigm so far has been generic. We now provide additional details of the real/ideal world setup for our specific setting of secure recommender systems. The real world corresponds with the model outlined in Section III-A, except that we additionally have an entity called the environment that fully controls the actions of the adversary and additionally chooses the inputs of the non-adversarial users.

The ideal world describes how we would ideally expect the recommendation processor R to behave. The users in the ideal world have access to an independent, incorruptible recommendation processor I that privately takes all the required inputs and updates from the users and, upon request, privately returns the correct recommendations to the users. The two servers that are used to implement the processor R in the real world do not exist in the ideal world.

Just like in the real world, the environment in the ideal world fully controls the actions of the adversary, and provides inputs to the non-adversarial users, except that all communication between the adversary and the non-adversarial entities (i.e., the non-dummy users and the non-existent second server of the real-world setting) is intercepted by the simulator. Specifically, the goal of the simulator is to simulate all communication between the two servers, and all communication from the non-adversarial server to the dummy users, as it would occur during a protocol execution in the real world.

To help with this simulation, the simulator has access to all data generated in the preprocessing phase for the two servers, and it may in fact even be assumed, without loss of generality, that it actually generates this data. Moreover, since in the ideal world non-adversarial users communicate directly with the ideal processor, this processor notifies the simulator whenever a non-adversarial user either provides it with an input, or requests a recommendation.

The level of security that our secure recommendation system achieves can now be described as follows. It is possible to design a simulator for our secure recommendation system in such a manner that regardless of what the environment (and therefore the adversary) does, it cannot significantly distinguish whether it is acting in the real world or in the ideal world. Since no meaningful attack is possible in the ideal world, the system is therefore secure in the real world.
Because a server is able to change his share of a particular secret-shared value, and correctly guess the updated tag value with probability 1/p, perfect security is not achievable. The best security level is attained by having a trusted dealer perform the preprocessing phase, in which case the real world and the ideal world are statistically indistinguishable by the simulator, and our recommender system is statistically secure. When the preprocessing has been performed by public-key cryptography between the two servers, this reduces to computationally indistinguishable worlds, and eventually a computationally secure recommender system.

Since our core protocols, as described in Section II-C, fall within the general framework of [1], for these protocols we refer to their proof of security in the malicious model. A key point in which our model differs is that all inputs originate from, and all output is returned to, external parties. We therefore include appropriate ideal-world simulation details for our input (see Section IV-D) and output (see Section IV-E) subprotocols.

C. Our Approach

Every computation corresponds with a function f, which can be represented via an arithmetic circuit consisting of basic operations like addition and multiplication. For the recommender application we consider here, it suffices to consider these basic operations together with the more complex operations of comparison and integer division, which are composed of basic operations.