TOWARDS SEMIOTIC AGENT-BASED MODELS OF SOCIO-TECHNICAL ORGANIZATIONS

Cliff Joslyn and Luis M. Rocha
Computer Research and Applications Group (CIC-3)
Los Alamos National Laboratory
MS B265, Los Alamos, New Mexico 87545
{joslyn,rocha}@lanl.gov
http://www.c3.lanl.gov/~{joslyn,rocha}

Citation: Joslyn, Cliff and Luis M. Rocha [2000]. "Towards Semiotic Agent-Based Models of Socio-Technical Organizations." Proc. AI, Simulation and Planning in High Autonomy Systems (AIS 2000) Conference, Tucson, Arizona, USA, ed. HS Sarjoughian et al., pp. 70-79.

ABSTRACT

We present an approach to agent modeling of socio-technical organizations based on the principles of semiotics. After reviewing complex systems theory and traditional Artificial Life (ALife) and Artificial Intelligence (AI) approaches to agent-based modeling, we introduce the fundamental principles of semiotic agents as decision-making entities embedded in artificial environments and exchanging and interpreting semiotic tokens. We proceed to discuss the design requirements for semiotic agents, including those for artificial environments with a rich enough "virtual physics" to support selected self-organization; semiotic agents as implementing a generalized control relation; and situated communication and shared knowledge within a community of such agents. We conclude with a discussion of the resulting properties of such systems, in particular dynamical incoherence, and finally describe an application to the simulation of the decision structures of Command and Control Organizations.

1. MOTIVATION

Our world is becoming an interlocking collective of Socio-Technical Organizations (STOs): large numbers of groups of people hyperlinked by information channels and interacting with computer systems, which themselves interact with a variety of physical systems in order to maintain them under conditions of good control. Primary examples of STOs include Command and Control Organizations (CCOs) such as 911/Emergency Response Systems (911/ERS) and military organizations, as well as utility infrastructures such as power grids, gas pipelines, and the Internet. The architecture of such systems is shown in Fig. 1, where a physical system is controlled by a computer-based information network, which in turn interacts with a hierarchically structured organization of semiotic agents.

Figure 1: The architecture of STOs.

The potential impacts on planetary economy and ecology are just beginning to be understood. The vast complexity and quantity of information involved in these systems make simulation approaches necessary, and yet the existing formalisms available for simulation are not sufficient to reflect their full characteristics. In particular, simulations built on strict formalisms such as discrete-event systems cannot capture the inherent freedom available to humans interacting with such systems; and simulations built on formal logic, such as most Artificial Intelligence (AI) approaches, are too brittle and specific to allow for the emergent phenomena which characterize such complex systems.

We pursue an approach to agent modeling which aims squarely between the collective automata systems typically used in Complex Systems and Artificial Life (ALife), and the knowledge-based formal systems used in AI.
This approach provides a robust capability to simulate human-machine interaction at the collective level. We identify this approach as Semiotic Agent-Based Modeling, as it expands the existing Agent-Based Model (ABM) approach, as typically used in Complex Systems, with mechanisms for the creation, communication, and interpretation of signs and symbols by and between agents and their environments.

2. COMPLEX SYSTEMS AND AGENT-BASED MODELS

Many researchers are pursuing the hypothesis that ABMs are a highly appropriate method for simulating complex systems. A complex system is commonly understood as any system consisting of a large number of interacting components (agents, processes, etc.) whose aggregate activity is non-linear (not derivable from the summation of the activity of individual components), and which typically exhibits hierarchical self-organization under selective pressures.

While almost all interesting processes in nature are cross-linked to some extent, in complex systems we can distinguish a set of fundamental building blocks which interact non-linearly to form compound structures or functions. Together these form an identity whose understanding requires different explanatory modalities from those used to explain the building blocks. This process of emergence results in the need for complementary modes of description. It is also known as hierarchical self-organization; complex systems are often defined as those which have this property [Pattee, 1973].

Examples of complex systems in this sense are genetic networks which direct developmental processes, immune networks which preserve the identity of organisms, social insect colonies, neural networks in the brain which produce intelligence and consciousness, ecological networks, social networks comprised of transportation, utility, and telecommunication systems, as well as economies.

Recent developments in software engineering, artificial intelligence, complex systems, and simulation science have placed an increasing emphasis on concepts of autonomous and/or intelligent agents as the hallmark of a new paradigm for information systems. Hype has led to a situation where we can identify nearly anything as an agent, from software subroutines and objects, through asynchronous or separately threaded processes, to physically autonomous robots, AI systems, organisms, all the way to conscious entities.

We can recognize two strands of development of the concept of agent in modeling and simulation. The first school draws on examples of complex phenomena from biology such as social insects and immune systems. These systems are distributed collections of large numbers of interacting entities (agents) that function without a "leader". From simple agents, which interact locally with simple rules of behavior, merely responding appropriately to environmental cues and not necessarily striving for an overall goal, we observe a synergy which leads to a higher-level whole with much more intricate behavior than that of the component agents, e.g. insect colonies and immune responses.

Most such complex systems have been shown to be members of a broad class of dynamical systems, and their emergent phenomena have been shown to be forms of dynamical attractors (a now classical example is the work of Kauffman [1990]). More famously, ALife [Langton, 1989], a subset of Complex Systems Research, produced a number of models, called Swarms, based on simple agent rules capable of producing a higher-level identity, such as the flocking behavior of birds. In these models, agents are typically described by state-determined automata: that is, they function by reaction to input and present state, using some iterative mapping in a state space. Such ABMs can be used, for instance, to simulate massively parallel computing systems.
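As a minimal illustration of such a state-determined agent (the binary state space and the majority rule below are our own, purely hypothetical choices), the next state is computed as a fixed function of the agent's present state and the states it senses locally:

```python
# A state-determined agent: its next state is a fixed mapping of its present
# state and its local input; it carries no memory beyond that state.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class ReactiveAgent:
    state: int  # 0 or 1

    def step(self, neighbor_states: Sequence[int]) -> None:
        # Iterative mapping in the state space: adopt the majority vote of
        # the sensed neighbor states and the agent's own present state.
        votes = list(neighbor_states) + [self.state]
        self.state = 1 if sum(votes) * 2 > len(votes) else 0


# A ring of such agents updated synchronously; any collective pattern that
# appears emerges purely from local, state-determined interaction.
agents = [ReactiveAgent(state=i % 2) for i in range(12)]
for _ in range(5):
    snapshot = [a.state for a in agents]
    for i, a in enumerate(agents):
        a.step([snapshot[i - 1], snapshot[(i + 1) % len(agents)]])
print([a.state for a in agents])
```

The update rule has access to nothing but the current state and input; contrast this with the memory-bearing semiotic agents introduced in Section 3 below.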
The ABM approach rooted in Complex Systems Research contrasts with the other strand, which draws from AI. In this field the goal was usually the creation and study of a small number of very complicated actors endowed with a great deal of on-board computational intelligence and planning ability dedicated to solving specific tasks, with little or no room for emergent, collective behavior.

3. SEMIOTIC AGENT-BASED MODELS

It has become clear in recent years that the modeling of phenomena such as ecological systems, and of social systems such as STOs, requires elements of both the Complex Systems and the AI approaches. First, large human collective systems clearly manifest emergent complex behavior: the emergent constraints created by the coarse dynamics of interaction among agents can dominate overall system behavior and performance.

But a complex systems approach is not sufficient to model social systems. Rather, to model these systems appropriately we need agents whose behavior is not entirely dictated by local, state-determined interaction. A globalized human society, which impacts planetary ecology, is empowered and driven by language and hyperlinked by information channels. Its agents have access to and rely on accumulated knowledge which escapes local constraints via communication, and which is stored in media beyond individual agents and their states. Indeed, many if not most researchers in AI, Cognitive Science, and Psychology have come to pursue the idea that intelligence is not solely an autonomous characteristic of agents, but depends heavily on social, linguistic, and organizational knowledge which exists beyond individual agents.

Such agents can be characterized as situated [Clark, 1997]. It has also been shown that agent simulations which rely on shared social knowledge can model social choice more accurately [Richards et al. 1998]. We turn to semiotics, the general science of signs and symbols, because the presence of representations in such systems is so important. Such representations, whether stored internally to the agent or distributed through agent-environment couplings, can be of measured states of affairs, goal states, and possible actions.

Originally a sub-field of linguistics [Eco 1986], semiotics first became more prominent in text and media analysis, and later in biology, computer engineering, and control engineering [Meystel 1996]. Semiotic processes involve the reference and interpretation of sign tokens maintained in coding relations with their interpretants. Thus semiotics in general is concerned with issues of sign typologies, digital/analog and symbolic/iconic representations, the "motivation" (intrinsic relations of sign to meaning) of signs, and mappings among representational systems [Joslyn 1995, 1998; Rocha, 1998b, 1999].

Since models of STOs amount to modeling social networks, our agent designs need to move beyond state-determined automata by including random-access memory capabilities. Our agents are systems capable of engaging with their environments beyond concurrent state-determined interaction, by using memory to store descriptions and representations of their environments. They also have access to knowledge shared amongst the members of their particular agent society. Such agents are dynamically incoherent in the sense that their next state or action is not solely dependent on the previous state, but also on some (random-access) stable memory that keeps the same value until it is accessed, and that does not change with the dynamics of the environment-agent interaction. In this sense, our agent designs create an ABM which bridges traditional Artificial Life ABM and AI knowledge-based approaches.
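A minimal sketch of this contrast with the reactive agent shown earlier (the alarm scenario, field names, and shared store are hypothetical): the agent's next action depends on stable, random-access memory and on society-wide shared knowledge, not only on its previous state and current input.

```python
# A dynamically incoherent agent: its next action depends on the current
# percept *and* on random-access memory (local descriptions plus knowledge
# shared across the agent society) that does not change with the dynamics of
# the environment-agent interaction until it is explicitly read or written.
from dataclasses import dataclass, field
from typing import Any, Dict

# Society-wide store of shared knowledge (hypothetical contents).
SHARED_KNOWLEDGE: Dict[str, Any] = {"danger_signs": {"smoke"}}


@dataclass
class SemioticAgent:
    state: str = "idle"
    memory: Dict[str, Any] = field(default_factory=dict)  # stable, random-access

    def step(self, percept: str) -> str:
        # Consult stored descriptions and shared knowledge, not just the percept.
        if percept in SHARED_KNOWLEDGE["danger_signs"]:
            self.memory["last_alarm"] = percept      # write a representation
            self.state = "responding"
        elif self.memory.get("last_alarm") and percept == "all_clear":
            self.memory.pop("last_alarm")            # memory persists until accessed
            self.state = "idle"
        return self.state


agent = SemioticAgent()
print([agent.step(p) for p in ["noise", "smoke", "noise", "all_clear"]])
# -> ['idle', 'responding', 'responding', 'idle']
```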
4. DESIGN REQUIREMENTS FOR SEMIOTIC AGENTS

As we mentioned above, there are a variety of senses of the term "agent" available in the literature today, deriving from, for example, biology, robotics, ALife, and AI, and having applications in information systems, dynamical systems simulation, and natural systems simulation. In this section we lay out our sense of Semiotic Agents (SAs), and make a principled argument about the necessary components for SAs interacting with each other and within artificial environments.

First, what are the necessary components of agents in general? Commonly cited properties include asynchrony, interactivity, mobility, distribution, etc. In general, we can see that an agent must possess some degree of independence or autonomy, and gain identity by being distinguishable from its environment by some kind of spatial, temporal, or functional boundary [Joslyn 1998, 1999]. In seeking a specific and useful sense of "agent", we require that agents have some autonomy of action: an ability to engage in tasks in an environment without direct external control.

Thus our concept of an SA distinguishes agents specifically as decision-making systems. These have sufficient freedom over a variety of possible actions, and make specific predictions of the outcomes of their actions. Clearly this class includes AI systems, but it leaves out many of the simpler collective automata or state-transition systems typical of ALife. However, as discussed above, we are interested not in individual agent decision-making capabilities, but rather in the complex, emergent, collective behavior of populations of decision-making agents with access to simple personal and shared knowledge structures. That is, we propose multi-agent system designs that use techniques from AI, but in an enlarged ALife setting.

We thereby further distinguish SAs from pure decision-making algorithms [Wolpert et al. 1999] in that they are embedded in (hopefully rich) virtual environments in which they take actions which have consequences for the future of the agents themselves. These environmental interactions induce constraints on the freedom of decision-making on the part of the SAs, and produce emergent behavior not explicitly defined in the agents' (decision-making) rules of behavior.

4.1 Artificial Environments: The Selected Self-Organization Principle

In ABM, agents interact in an artificial environment. However, it is often the case that the distinction between agents and environments is not clear. In contrast, in natural environments, the self-organization of living organisms is bound by, and is itself a result of, inexorable laws of physics. Living organisms can generate an open-ended array of morphologies and modalities, but they can never change these laws. It is from these constant laws (and their initial conditions) that all levels of organization we wish to model, from life to cognition and social structure, emerge.
These levels of emergence typically produce their own principles of organization, which we can refer to as rules, but none of these can control or escape physical law, and they are "neither invariant nor universal like laws" [Pattee, 1995, page 27].

For ABM to be relevant for science in general, the same distinctions that we recognize in the natural world, between laws and initial conditions determined in the environment, and emergent rules of behavior of the objects of study, need to be explicitly included in an artificial form. Thus the most important consideration is that simulated agents operate within environments which have their own "laws of nature" or "virtual physics". From these, different emergent rules of behavior for agents can be generated and studied. As we have argued elsewhere [Rocha and Joslyn, 1998], these need to be explicitly distinguished in artificial environments.

Once this distinction is recognized, the freedom of decision making which the SAs have is necessarily constrained by these environmental dynamics, which are, for them, necessary. Such constraints can in fact be crucial for the development of emergent properties within agent systems embedded in those environments. For example, Gordon, Spears, et al. [1999] have simulated distributed sensor grids using an agent model interacting with an environment which manifests a certain limited virtual physics. They have shown that they can achieve hexagonal or square grids based on the dynamics of the agent interactions with those "natural laws". Similarly, Pepper and Smuts [1999] have demonstrated the development of cooperative and altruistic behavior in simulated ecologies, but only when the environment had a rich enough "texture" of simulated vegetative diversity.

Therefore, the setup of environments for multi-agent simulations needs to:

i. Specify the dynamics of self-organization: specify the laws, and their initial conditions, which characterize the artificial environment (including agents) and the emergence of context-specific rules. For example, in Lindgren's [1991] experiments, the laws of the environment are the conditions specified in the iterated prisoner's dilemma and a genetic algorithm; these are inexorable in this model. The context-specific rules are the several strategies that emerge, whose success depends on the other strategies which co-exist in the environment and which therefore also specify its demands together with the laws. However, the same laws can lead to different transitory rules, and thus to different agent environments.

ii. Observe an emergent, or specify a constructed, semantics: identify emergent or pre-programmed (but changeable) rules that generate agent behavior in tandem with environmental laws. In particular, we are interested in the behavior of agents that can simulate semantics and decision processes. As an example, consider the experiments of Hutchins and Hazlehurst [1991], which clearly separate the laws of an environment (the regularity of tide and moon states) from the artifacts used by agents to communicate the semantics of these regularities among themselves. The semantics of the artifacts are not pre-specified, but emerge in these simulations.

iii. Provide pragmatic selection criteria: create or identify a mechanism of selection so that the semantics identified in (ii) is grounded in a given environment defined by the laws of (i). These selection criteria are based on constraints imposed both by the inexorable laws of the environment and by the emergent rules. When based only on the first, we model an unchanging set of environmental demands (explicit selection), while when we include the second, we model a changing set of environmental demands (implicit selection). The first is often implemented by a Genetic Algorithm (GA) with a fixed evaluation function, and the second by a GA whose evaluation function emerges from agent interactions [e.g. Ackley and Littman, 1991].
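As a concrete illustration of keeping these three ingredients explicit within one simulation, here is a minimal sketch loosely in the spirit of Lindgren's iterated prisoner's dilemma experiments; the payoff values, genome encoding, and GA parameters are our own illustrative choices, not Lindgren's.

```python
# (i) Laws: the payoff matrix and the GA machinery are fixed for the entire run.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ROUNDS, POP, GENERATIONS, MUTATION = 20, 20, 30, 0.05


def play(a, b):
    """Iterated game between two memory-1 strategies: (first move, reply to C, reply to D)."""
    score_a = score_b = 0
    move_a, move_b = a[0], b[0]
    for _ in range(ROUNDS):
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        move_a, move_b = (a[1] if move_b == "C" else a[2],
                          b[1] if move_a == "C" else b[2])
    return score_a, score_b


# (ii) Rules: strategies are not laws; they emerge and change over generations.
population = [tuple(random.choice("CD") for _ in range(3)) for _ in range(POP)]

for _ in range(GENERATIONS):
    # (iii) Selection: each strategy's fitness depends on which other strategies
    # co-exist in the population (implicit selection), not on a fixed score.
    fitness = {i: 0 for i in range(POP)}
    for i in range(POP):
        for j in range(i + 1, POP):
            si, sj = play(population[i], population[j])
            fitness[i] += si
            fitness[j] += sj
    parents = [population[k] for k in sorted(fitness, key=fitness.get, reverse=True)[: POP // 2]]
    population = [tuple(g if random.random() > MUTATION else random.choice("CD")
                        for g in random.choice(parents))
                  for _ in range(POP)]

print("strategies surviving after selection:", set(population))
```

Here the payoff matrix and GA machinery play the role of inexorable laws, the strategies that persist are emergent and transitory rules, and selection is implicit because the evaluation of each strategy emerges from its interactions with the co-existing strategies.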
These three requirements establish a selected self-organization principle [Rocha, 1996, 1998a] observed in natural evolutionary systems. This principle is also essential for modeling the emergence of semantics and decision processes in ABMs which can inform us about natural-world phenomena. This is because without an explicit treatment or understanding of these components in a simulation, it is impossible to observe which simulation results pertain to unchangeable constraints (laws), which to changeable, emergent constraints (rules), and which to selective demands. It is often the case in Artificial Life computational experiments that one does not know how to interpret the results: is it life-as-it-could-be or physics-as-it-could-be? If we are to move these experiments to a modeling and simulation framework, then we need to establish an appropriate modeling relation with natural agent systems, which are also organized according to laws, rules, and selection processes.

4.2 Semiotic Agents

SAs as we see them need to be based on a few fundamental requirements. The primary internal components of semiotic agents involve a series of representations, in particular representations of the current state and past states (the agent's "beliefs"), and of the goal state (its "desires"). This is part of the motivation for our usage of the term "semiotic", since we draw on a number of principles from this general science of representations.

i. Measurement. Only certain aspects of the environment are measurable by agents, and this repertoire forms the potential field of knowledge for the agent, its "world as perceived".

ii. Capacity to evaluate current status. Since a goal of ABM of social systems is to study decision processes, our [...]
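A minimal sketch of the internal components named so far, i.e., a limited measurement repertoire (the "world as perceived"), beliefs about current and past states, and a goal state; the environment variables, field names, and evaluation rule are illustrative assumptions rather than the paper's own formalization.

```python
# Core representations of a semiotic agent: what it can measure, what it
# believes about current and past states, and what it desires (its goal state).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SemioticAgentCore:
    measurable: List[str]                     # i. only these aspects can be measured
    desires: Dict[str, float]                 # goal state
    beliefs: Dict[str, float] = field(default_factory=dict)        # current state
    history: List[Dict[str, float]] = field(default_factory=list)  # past states

    def measure(self, environment: Dict[str, float]) -> None:
        # The "world as perceived": unmeasurable variables never enter the beliefs.
        self.history.append(dict(self.beliefs))
        self.beliefs = {k: v for k, v in environment.items() if k in self.measurable}

    def evaluate(self) -> float:
        # ii. Capacity to evaluate current status: distance of beliefs from desires.
        return sum(abs(self.beliefs.get(k, 0.0) - v) for k, v in self.desires.items())


agent = SemioticAgentCore(measurable=["pressure"], desires={"pressure": 1.0})
agent.measure({"pressure": 3.0, "temperature": 9.0})  # temperature is invisible to it
print(agent.evaluate())                               # -> 2.0
```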