The challenges of ambient law and legal protection in the profiling era

7 Profiling and AmI

Mireille Hildebrandt

Summary. Some of the most critical challenges for 'the future of identity in the information society' must be located in the domain of automated profiling practices. Profiling technologies enable the construction and application of group profiles used for targeted advertising, anti-money laundering, actuarial justice, etc. Profiling is also the conditio sine qua non for the realisation of the vision of Ambient Intelligence. Though automated profiling seems to provide the only viable answer to the increasing information overload, and though it seems to be a promising tool for the selection of relevant and useful information, its invisible nature and pervasive character may affect core principles of democracy and the rule of law, especially privacy and non-discrimination. In response to these challenges we suggest novel types of protection next to the existing data protection regimes. Instead of focusing on the protection of personal data, these novel tools focus on the protection against invisible or unjustified profiling. Finally, we develop the idea of Ambient Law, advocating a framework of technologically embedded legal rules that guarantee a transparency of profiles that should allow European citizens to decide which of their data they want to hide, when and in which context.

So far, profiling has not been the subject of a coherent, cross-disciplinary knowledge domain. Research is fragmented between computer engineers, social scientists, lawyers, mathematicians, and those working on specific applications within, for instance, medical research, marketing or forensic science. Profiling is often reduced to data mining and discussed in highly technical terms (Fayyad et al., 1996) or from a social theory perspective in terms of semiotic or Deleuzian inquiries (Elmer, 2004; Hildebrandt, 2008).
A coherent legal perspective on profiling, integrating privacy and data protection, non-discrimination, liability issues and forensic profiling, has not been attempted yet, even if partial analyses have been made within the context of the FIDIS network (Schreurs et al., 2008; Hildebrandt and Koops, 2007; Geradts and Sommer, 2008).[1] For this reason FIDIS has devoted serious attention to the question of what profiling actually is, and how it can be defined and explained in a way that is easily understandable across different disciplines. This will be discussed in Section 7.1, mainly building on the cross-disciplinary findings of Profiling the European Citizen (Hildebrandt and Gutwirth, 2008).

[1] From a legal perspective, analyses are often made in terms of the protection of personal data, whereas specific attention to the legal status of profiles, especially group profiles, is lacking.

An important domain of research within the framework programmes of the European Commission, as well as within industry, is what has been coined Ambient Intelligence (AmI), ubiquitous computing or autonomic computing. One could translate these terms into the idea of a 'smart' adaptive environment that requires little deliberate human intervention. Though AmI depends on a series of enabling technologies for its realisation of smart environments, profiling can be seen as the enabling technology, because profiling is essential to make sense of the 'tsunami' of data generated by RFID systems and sensor technologies.[2] In Section 7.2 we address profiling within the context of AmI.

To assess the impact of profiling technologies on the identity of European citizens, two notions of identity have been introduced and explored within the FIDIS network, coined by the French philosopher Paul Ricoeur: idem and ipse.
Idem (sameness) stands for the third person, objectified observer's perspective of identity as a set of attributes that allows comparison between different people, as well as unique identification, whereas ipse (self) stands for the first person perspective constituting a 'sense of self'. Their intersection provides for the construction of a person's identity. In Section 7.3 these concepts will be further explored and their relevance for democracy and the rule of law will be discussed, pointing out that privacy, as a matter of boundary negotiations and identity construction, necessitates understanding privacy as a private interest as well as a public good.

After having discussed the risks of increased profiling throughout Sections 7.1, 7.2 and 7.3, we turn to a discussion of the legal implications. Data protection and privacy rights provide a legal framework that is mostly focused on the protection of personal data. With regard to the kind of threats generated by refined profiling, a complementary focus is needed on protection against the unwarranted application of profiles. On top of that, the legal framework still 'thinks' in terms of the technologies of the script, which renders it ineffective in protecting against dangers afforded by the technologies of the digital and the virtual. In Section 7.4 this challenge is taken up in exploring the notion of Ambient Law (AmLaw), i.e., a type of law that is articulated into the socio-technical infrastructure that it aims to protect against. Section 7.5 provides some concise conclusions.

[2] The phrase 'tsunami' of data was used in The Future Group Report (2008), written by the Informal High Level Advisory Group on the Future of European Home Affairs Policy. 'The findings and recommendations of the Future Group are meant to be an important contribution and a source of inspiration for the European Commission's proposal for the next multi-annual program in the field of Justice and Home Affairs'; see the report at p. 3.
7.1 Profiling: Definitions, Applications and Risks

Profiling occurs in a diversity of contexts: from criminal investigation to marketing research, from mathematics to computer engineering, from healthcare applications for elderly people to genetic screening and preventive medicine, from forensic biometrics to immigration policy, from credit scoring to actuarial justice. Looking into these different domains, it soon becomes clear that the term profiling is used to refer to a set of technologies that share at least one common characteristic: the use of algorithms or other mathematical (computer) techniques to create, discover or construct knowledge out of huge sets of data. Automated profiling involves different technologies (hardware), such as computers, RFID tags, biometric applications and sensors, and techniques (software), such as data cleansing, data aggregation and data mining. These technologies and techniques are integrated into socio-technical profiling practices that allow both the construction and the application of profiles. Profiles are used to make decisions, sometimes even without human intervention. The visions of Ambient Intelligence, autonomic and ubiquitous computing depend entirely on autonomic profiling, the type of profiling that allows machines to communicate with other machines and take decisions without human intervention.

7.1.1 What Is Profiling?[3]

Before proceeding to describe some of the applications and some of the risks, we need a provisional definition to clear the ground.
A working definition of profiling should take into account that the term is used both for the construction of profiles and their application:

Profiling is the process of 'discovering' patterns in databases that can be used to identify or represent a human or nonhuman subject (individual or group) and/or the application of profiles (sets of correlated data) to individuate and represent an individual subject or to identify a subject as a member of a group (which can be an existing community or a 'discovered' category).[4]

The difference between the construction and the application of profiles is a first important distinction to be made when discussing profiling, and will be discussed hereunder. After that, three more distinctions will be discussed: the difference between individual and group profiling, between direct and indirect profiling, and between distributive and non-distributive profiling.

[3] This section builds on FIDIS deliverables 7.2/3/4/5 and on part I of Profiling the European Citizen (Hildebrandt and Gutwirth, 2008).
[4] See Hildebrandt and Gutwirth (2008: 19).

Construction and Application of Profiles

As mentioned above, machine profiling makes use of mathematical techniques to uncover patterns that are invisible to the naked human eye. The process of profiling is often broken down into a series of 5 or 6 subsequent steps that are interrelated and looped together in a process of constant feedback. This process is called knowledge discovery in databases (KDD) and can be summed up as follows:

1. recording of data in a machine-readable, computable manner
2. storing and aggregating of data in databases
3. data mining, i.e. running algorithms through the database
4. interpreting the results
5. applying the resulting profiles to new data (matching) and monitoring for outliers

To highlight the feedback that is constitutive of profiling, Gasson and Browne (2008) visualise the different steps in terms of the Cross-Industry Standard Process for Data Mining (CRISP-DM), a non-proprietary and freely available standard (cf. Fig. 7.1). In this case the socio-technical nature of the process is highlighted in six steps, starting with business understanding (crucial for the choice of which data to collect and store), followed by data understanding (crucial for the choice of data recording, storing and aggregation), data preparation (data storage and aggregation), modelling (data mining), evaluation (interpretation) and deployment (application). Interestingly, the deployment of profiles implies matching them with new data, which will either confirm or falsify the patterns that have been found, thus allowing continuous fine-tuning or even reconstruction of the profiles. This way the application of profiles can loop back into the construction phase. In as far as this is the case, the difference between the construction and application of profiles is relative.

Individual and Group Profiling

Besides the difference between the construction and the application of profiles, a second distinction is the one between individual and group profiling (Jaquet-Chiffelle, 2008). At the level of the construction of profiles, individual profiling concerns the construction of the profile of an individual person, either to individuate her or to infer her preferences, habits, earning capacity or whatever other specific characteristics she may be found to have. An individual profile is inferred from the data of one individual. Group profiling then concerns the construction of the profile of a group, which can be either an existing community or a category that emerges as such in the process of data mining.
In the case of a community, the group profile may be inferred from the data of one existing community. In the case of a category, the group profile may have been inferred from the data of many individuals.

Fig. 7.1. A facsimile of the key phases of the CRISP-DM process model for the life cycle of a data mining project, from the CRISP-DM process guide and user manual[5]

Direct and Indirect Profiling

At the level of the application of profiles we can make a third distinction, speaking of either direct or indirect profiling. If an individual profile is applied to the person whose data have been used to construct the profile, we speak of direct individual profiling. If a group profile is applied to an individual whose data match with the profile, we speak of indirect individual profiling. If a group profile is applied to the group whose data have been used to infer the profile, we speak of direct group profiling. If a group profile is applied to another group, whose data match with this profile, we speak of indirect group profiling.

[5] © CRISP-DM consortium: NCR Systems Engineering Copenhagen (USA and Denmark), DaimlerChrysler AG (Germany), SPSS Inc. (USA) and OHRA Verzekeringen en Bank Groep B.V. (The Netherlands), see
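The distinctions above can be made concrete with a small sketch. The code below is purely illustrative and not taken from the chapter: the helper names, the toy records and the crude frequency-threshold 'mining' are all assumptions standing in for real data-mining techniques. It constructs a group profile from the data of many individuals (the construction phase) and then applies it to someone outside that group whose data happen to match, i.e. indirect individual profiling.

```python
# Hypothetical sketch: construction and application of a group profile.
# Real profiling uses statistical or machine-learning models; here a
# 'pattern' is simply an attribute value shared by most group members.

from collections import Counter

def build_group_profile(records, threshold=0.8):
    """Construction: keep attribute/value pairs occurring in at least
    `threshold` of the group's records (a toy stand-in for data mining)."""
    n = len(records)
    counts = Counter()
    for record in records:
        counts.update(record.items())
    return {attr: val for (attr, val), c in counts.items() if c / n >= threshold}

def matches(profile, record):
    """Application: does an individual's data match the group profile?"""
    return all(record.get(attr) == val for attr, val in profile.items())

# Construction: a group profile inferred from the data of many individuals.
group_data = [
    {"age_band": "30-40", "urban": True,  "credit": "good"},
    {"age_band": "30-40", "urban": True,  "credit": "good"},
    {"age_band": "30-40", "urban": False, "credit": "good"},
]
profile = build_group_profile(group_data)
# 'urban' is dropped (shared by only 2 of 3 members), so:
# profile == {"age_band": "30-40", "credit": "good"}

# Indirect individual profiling: the profile is applied to a person whose
# data were NOT used to construct it, but who matches the pattern and is
# therefore treated as a member of the 'discovered' category.
outsider = {"age_band": "30-40", "urban": True, "credit": "good"}
print(matches(profile, outsider))  # True
```

Applying `matches` back to the members of `group_data` themselves would correspond to direct group profiling in the terminology above; the decision taken on the matching outsider illustrates why the chapter stresses protection against the unwarranted application of profiles rather than only the protection of personal data.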