Professor Nigel Gilbert CBE
Academic and research departments
Sociology, Faculty of Arts, Business and Social Sciences, Institute for Sustainability, Surrey Institute for People-Centred Artificial Intelligence (PAI)
About
Biography
Nigel Gilbert has a Distinguished Chair in Computational Social Science at the University of Surrey. He is Director of the Centre for Research in Social Simulation, Director of the Centre for the Evaluation of Complexity Across the Nexus (CECAN), and Director of the University's Institute of Advanced Studies.
His main research interests are processual theories of social phenomena; the development of computational sociology and the methodology of computer simulation, especially agent-based modelling; and the development, appraisal and evaluation of public policies.
He read for a first degree in Engineering, initially intending to go into the computer industry. However, he was attracted into sociology and obtained his doctorate on the sociology of scientific knowledge from the University of Cambridge, under the supervision of Michael Mulkay. His research and teaching interests have reflected his continuing interest in both sociology and computer science (and engineering more widely).
He is the author or editor of several textbooks on sociological methods of research and statistics and was the founding editor of the Journal of Artificial Societies and Social Simulation as well as helping to establish the innovative online journal, Sociological Research Online.
Further details about Nigel Gilbert may be found on Wikipedia.
Editor, Social Research Update
Director, Centre for Research in Social Simulation
Director, University of Surrey Institute of Advanced Studies
Research
Research interests
Computational social science, sociology of science and science policy, innovation, sociology of the environment, energy policy, the use of models in the policy process.
Research projects
The Centre for the Evaluation of Complexity Across the Nexus, a £3m research centre hosted by the University of Surrey, brings together a unique coalition of experts to address some of the greatest issues in policy making and evaluation. Nexus issues are complex, with many diverse, interconnected factors involved. This presents a major challenge to policy making, because changing one factor can often have unexpected knock-on effects in seemingly unrelated areas, and new ways of evaluating policy are needed in these situations. CECAN will pioneer, test and promote innovative evaluation approaches and methods across nexus problem domains, such as biofuel production or climate change, where food, energy, water and environmental issues intersect. The Centre will promote evidence-based policymaking by finding ways for the results of evaluation both to inform policy and to reflect back onto future policy design. Embracing an 'open research' culture of knowledge exchange, CECAN benefits from a growing network of policymakers, practitioners and researchers and a core group of academic and non-academic experts.
Whole Systems Energy Modelling Consortium (WholeSEM)
Energy models provide essential quantitative insights into the 21st-century challenges of decarbonisation, energy security and cost-effectiveness. Models provide the integrating language that helps energy policy makers make better decisions under conditions of pervasive uncertainty. Whole systems energy modelling also has a central role in helping industrial and wider stakeholders assess future energy technologies and infrastructures, and the potential role of societal and behavioural change. Our contribution to this major four-year EPSRC-funded project is to develop models of household energy demand.
ACCESS is a funded programme of work providing leadership on the social science contribution to tackling a range of climate and environmental problems. Recognising that technology alone is not enough, the programme works to share understanding of how people, societies and systems need to change and adapt to create a healthier environment and meet net zero goals.
Policymakers in a growing number of countries are using Artificial Intelligence (AI) algorithms to decide on public service provision and state benefits, with citizens' profiles assessed and evaluated for decision making. This applies, for example, to assigning retirement or unemployment benefits and public health insurance.
AI-based social assessment technologies for public service provision categorise present and future human behaviour on scales such as legal recipients/fraudulent recipients, deserving/non-deserving, needy/non-needy, high-performing/low-performing, desirable/non-desirable, or acceptable/not-acceptable.
Delegating decisions based on such value judgements to machines raises ethical, philosophical and social issues, and leads to important questions of responsibility, accountability, transparency and the quality of social decision making about the distribution of scarce resources. Public opinion and discourse in these areas are often highly emotional, because core societal values are at stake and decisions to provide or refuse public social services have far-reaching consequences for the individuals concerned.
However, perceptions, attitudes, discussions and acceptance of AI use for public policy vary between countries, as do the types and degrees of AI implementation, reflecting the norms and values in use as well as technology status, economic models, civil society sentiments, and legislative, executive and judicial characteristics. For example, while in Germany (and most of the so-called western world) the Chinese "Social Credit System" project is portrayed as ethically debatable and worrying, the majority of China's citizens seem to welcome the initiative for its assumed benefits and its promotion of honesty in society and the economy. In the digital frontrunner Estonia, the use of AI to assess individuals' future job market performance is publicly supported as a government project and highly appreciated by citizens; according to the Digital Economy and Society Index 2019, Estonia ranks second in Europe in both digital public service provision and e-governance usage. In India, however, resistance to the central government's use of AI for welfare provision is steadily rising and gaining voice. Attitudes thus vary not only between countries but also within them, between societal groups in which winners and losers can be discerned supporting or rejecting technological developments: for example, while many municipalities and districts in the US wholeheartedly embrace the use of AI for assigning welfare provisions, legal and civil society organisations provide examples of opting out of these developments, such as stopping the Arkansas State Department of Human Services from using a computer algorithm to determine medical benefits for people with severe disabilities.
Understanding and shaping the role of AI in future societies therefore requires a participatory approach involving many relevant stakeholders, including research methods to compare empirical cases, to model future societal scenarios at a detailed level, and to create better, that is more responsible, AI technology adapted to context-specific social value requirements. AI FORA's approach embodies this general research design, realised through interdisciplinary cooperation between the technical and the social sciences embedded in a transdisciplinary multi-stakeholder framework at all levels of the project.
TRANSITION is a UK-wide Clean Air Network programme led by the University of Birmingham in collaboration with nine universities and over 20 cross-sector partners, which seeks to deliver the air quality and health benefits associated with the UK's transition to a low-emission transport economy. The academic investigators and policy, public, commercial and not-for-profit sector partners will undertake joint research to co-define indoor and outdoor air quality challenges and co-deliver innovative, evidence-based solutions.
The Participatory System Mapper (PRSM) is an app that runs in a web browser and makes it easy for people to draw networks (or 'maps') of systems collaboratively.
Supervision
Postgraduate research supervision
Nigel Gilbert is interested in supervising doctoral students wishing to study innovative ways of using computational models in the social sciences, and interdisciplinary topics bridging engineering (especially computer science) and the social sciences. For more details about PhDs in CRESS, see: https://www.surrey.ac.uk/department-sociology/study/postgraduate-research
Publications
Highlights
Agent-Based Models (Quantitative Applications in the Social Sciences), 2008, Sage Publications.
Researching Social Life, fourth edition, 2016, edited by Nigel Gilbert and Paul Stoneman, Sage Publications.
Simulation for the Social Scientist, second edition 2005, Nigel Gilbert and Klaus G. Troitzsch, Open University Press (also available in Japanese, Russian and Spanish).
Understanding Social Statistics, 2000, Jane Fielding and Nigel Gilbert, Sage Publications.
From Postgraduate to Social Scientist: A Guide to Key Skills (Sage Study Skills Series) 2006, Sage Publications.
Opening Pandora's Box, Cambridge University Press, 1984 (available online since 2003).
The effectiveness and cost of a public health intervention is dependent on complex human behaviors, yet health economic models typically make simplified assumptions about behavior, based on little theory or evidence. This paper reviews existing methods across disciplines for incorporating behavior within simulation models, to explore what methods could be used within health economic models and to highlight areas for further research. This may lead to better-informed model predictions. The most promising methods identified which could be used to improve modeling of the causal pathways of behavior-change interventions include econometric analyses, structural equation models, data mining and agent-based modeling; the latter of which has the advantage of being able to incorporate the non-linear, dynamic influences on behavior, including social and spatial networks. Twenty-two studies were identified which quantify behavioral theories within simulation models. These studies highlight the importance of combining individual decision making and interactions with the environment and demonstrate the importance of social norms in determining behavior. However, there are many theoretical and practical limitations of quantifying behavioral theory. Further research is needed about the use of agent-based models for health economic modeling, and the potential use of behavior maintenance theories and data mining.
The housing market in the UK features a mortgaging system where interest rates are either fixed for short periods (typically 2 or 5 years) or varied to track interest rates of the Bank of England base rate. The reactions of home buyers and investors to changes in the mortgage rate have impacts on the buy-to-let housing market, and this in turn impacts tenants who are renting from private landlords. Such reactions become more significant when there are financial shocks, as occurred in 2022, which create chain events that can affect house prices and rents. To explore the dynamics of the UK housing market, we introduce an Agent Based Model (ABM) featuring interactions between the mortgage, buy-to-let and rental housing markets. We use the model to understand the effects of interest rate and maximum loan-to-value shocks. The ABM demonstrates the complex associations between such shocks, house prices and rents. It shows that a sudden increase in mortgage interest rates decreases housing prices and steeply increases rent prices within 5 years. It also shows that a sudden decrease of the loan-to-value ratio significantly decreases housing prices.
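The mechanism described in this abstract, in which an interest-rate shock stresses mortgaged households and feeds through landlords into rents, can be illustrated with a toy agent-based sketch. This is not the authors' model: the agent classes, thresholds and parameters below are illustrative assumptions chosen only to show the shock-propagation logic.

```python
import random

class Household:
    """A mortgaged household: pays interest on its loan, may sell if stressed."""
    def __init__(self, loan, income):
        self.loan = loan
        self.income = income

    def repayment(self, rate):
        return self.loan * rate

class Landlord(Household):
    """A buy-to-let landlord who passes higher mortgage costs on to tenants."""
    def set_rent(self, rate, margin=0.1):
        return self.repayment(rate) * (1 + margin)

def simulate(rate, n_owners=100, n_landlords=20, seed=0):
    rng = random.Random(seed)
    owners = [Household(rng.uniform(1e5, 3e5), rng.uniform(2e4, 6e4))
              for _ in range(n_owners)]
    landlords = [Landlord(rng.uniform(1e5, 3e5), rng.uniform(3e4, 8e4))
                 for _ in range(n_landlords)]
    # Owners whose repayments exceed 40% of income are "stressed" and list
    # their homes for sale, which would push average prices down.
    forced_sales = sum(1 for h in owners if h.repayment(rate) > 0.4 * h.income)
    mean_rent = sum(l.set_rent(rate) for l in landlords) / n_landlords
    return forced_sales, mean_rent

low = simulate(0.02)
high = simulate(0.06)   # interest-rate shock
```

Comparing `low` and `high` shows the qualitative pattern the abstract reports: the rate shock produces forced sales (downward pressure on prices) while mean rents rise, because landlords pass their higher financing costs on to tenants.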
Why we need Environmental Social Sciences
Environmental issues are ultimately social issues. They are caused by, understood by, and must be solved by, people as individuals, groups, communities, and political and institutional systems. Securing the future of our planet and the wellbeing of humankind requires an in-depth understanding of the interdependent relationships between people and their social and natural environment. It requires the knowledge and expertise of all the environmental social sciences.
Who is this document for?
This document is for anyone with an interest in understanding and tackling people-environment relationships and environmental problems. This includes those working as, or with, environmental social scientists in academia, policy and practice, as well as scientists, knowledge brokers, policy makers and practitioners wishing to explicitly consider people-environment relationships in their work. Relevant questions include climate change, net zero emissions, nature-climate relations, biodiversity, the use of natural resources, consumer choices, and people's relationships with their natural environment and non-human nature.
Why this document is needed
Environmental problems are deeply rooted in social structures, and tackling them requires significant social transformation, for which environmental social science (ESS) knowledge and expertise is essential. However, the potential value and role of the environmental social sciences in research, policy and practice is not always clearly understood. Overlooking the vital role of people, and of ESS insights, contributes to inadequate environmental policy. Social science can sometimes be dismissed as common sense, is too often carried out by those without proper training, and social science expertise is often under-resourced.
There is frequently a narrow understanding of the range of insights, tools and techniques that the different environmental social sciences can offer. Knowledge of, and requests for, environmental social science (for instance by other researchers or decision makers) are often limited to research that studies how end users (consumers) respond to new technologies or environmental policies after the problems have been framed and the solutions designed. Knowledge of ESS research is often based on outdated ways of thinking, for example the knowledge-deficit model, which assumes that people lack the knowledge to "do the right thing". There is also too little emphasis on more inclusive methods that bring different groups into framing the problem. Yet there is a huge variety of environmental social sciences. Social science has played an important role in research impact, and researchers have a vast array of methods and knowledge at their disposal that is critical to understanding and improving people-environment interactions and the successful delivery of policy and practice. Environmental problems require insight and knowledge from a range of disciplines, including the social sciences. This document provides a synopsis of what environmental social science is, what it does and what it can offer to environmental research, policy and practice.
An exploration of the implications of developments in artificial intelligence for social scientific research, building on the theoretical and methodological insights provided by 'Simulating Societies'. The book is intended for the worldwide library market in social science subjects such as sociology, political science, geography and archaeology/anthropology, with significant appeal within computer science, particularly artificial intelligence, and as a personal reference for researchers.
The use of computer simulation in the study of social phenomena. Computer simulation is currently enjoying a revival as a methodological tool in the social sciences. In sociology, advances in hardware, and particularly in software, have made it possible to build much more interesting simulation models than before. This article gives an overview of two current strands of work in computer simulation (dynamic microsimulation and simulation based on distributed artificial intelligence) and suggests some general methodological principles for simulation research. Simulation shares methodological difficulties with other types of modelling, such as statistical modelling, but also offers new perspectives.
Using data taken from a major European Union funded project on speech understanding, the SunDial project, this book considers current perspectives on human computer interaction and argues for the value of an approach taken from sociology which is based on conversation analysis.
This paper uses interview data from retired households to inform a discussion about economic models of consumption. It is divided into two parts. In the first part, the economic models are described. The paper then discusses several different types of reasons for finding them unhelpful in explaining consumption. The second part of the paper considers the role of 'middle range' theories in developing plausible models of household behaviour. Phenomena which the interviews suggest are important in explaining consumption, such as time allocation, the labour supply decision, the ubiquitous durability of goods and the structure of the household, are not typically supported by middle range theory in current models. Without the constraints of such theory, it is very hard to distinguish models providing genuine explanation from those that merely fit the data. The latter part of the paper also discusses aspects of a new middle range theory of consumption suggested by the interviews.
New opportunities for building computational simulation models have multiplied over the last few years, inspired partly by extraordinary advances in computing hardware and software and partly by influences from other disciplines, particularly physics, artificial intelligence and theoretical biology. Since the mid-1980s there has been rapidly increasing interest world-wide in the possibility of using simulation in sociology and the other social sciences as sociologists have realised that it offers the possibility of building models which are process-oriented and in which some of the mechanisms of social life can be explicitly represented. This introductory chapter will explore the potential of computer simulation for the study of science by describing some recent examples, chosen to give a flavour of the range of simulation methods and the variety of research areas which are now using simulation as a research tool.
This chapter begins the specification of the ideal features of a toolkit for social simulation, starting from a consideration of the standard methodology for simulation research. Several essential components, commonly used in social science simulation research, are identified and it is argued that implementations of these will need to be included in the toolkit. Additional modules, providing graphical output, scheduling, random number generation and parameter editing are also required.
Recent approaches to providing advisory knowledge-based systems with explanation capabilities are reviewed. The importance of explaining a system's behaviour and conclusions was recognized early in the development of expert systems. Initial approaches were based on the presentation of an edited proof trace to the user, but while helpful for debugging knowledge bases, these explanations are of limited value to most users. Current work aims to expand the kinds of explanation which can be offered and to embed explanations into a dialogue so that the topic of the explanation can be negotiated between the user and the system. This raises issues of mutual knowledge and dialogue control which are discussed in the review.
First published in 1993, Analyzing Tabular Data is an accessible text introducing a powerful range of analytical methods. Empirical social research almost invariably requires the presentation and analysis of tables, and this book is for those who have little prior knowledge of quantitative analysis or statistics, but who have a practical need to extract the most from their data. The book begins with an introduction to the process of data analysis and the basic structure of cross-tabulations. At the core of the methods described in the text is the loglinear model. This and the logistic model, are explained and their application to causal modelling, to event history analysis, and to social mobility research are described in detail. Each chapter concludes with sample programs to show how analysis on typical datasets can be carried out using either the popular computer packages, SPSS, or the statistical programme, GLIM. The book is packed with examples which apply the methods to social science research. Sociologists, geographers, psychologists, economists, market researchers and those involved in survey research in the fields of planning, evaluation and policy will find the book to be a clear and thorough exposition of methods for the analysis of tabular data.
The contemporary structure of scientific activity, including the publication of papers in academic journals, citation behaviour, the clustering of research into specialties and so on has been intensively studied over the last fifty years. A number of quantitative relationships between aspects of the system have been observed. This paper reports on a simulation designed to see whether it is possible to reproduce the form of these observed relationships using a small number of simple assumptions. The simulation succeeds in generating a specialty structure with 'areas' of science displaying growth and decline. It also reproduces Lotka's Law concerning the distribution of citations among authors. The simulation suggests that it is possible to generate many of the quantitative features of the present structure of science and that one way of looking at scientific activity is as a system in which scientific papers generate further papers, with authors (scientists) playing a necessary but incidental role. The theoretical implications of these suggestions are briefly explored.
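The abstract's central claim, that a few simple assumptions reproduce the skewed productivity distributions observed in science, can be illustrated with a toy cumulative-advantage sketch. This is not the paper's actual simulation; the function name, parameters and the new-author probability are illustrative assumptions.

```python
import random
from collections import Counter

def simulate_papers(n_papers=20000, p_new_author=0.3, seed=1):
    """Toy cumulative-advantage model: each paper is written either by a
    brand-new author (with probability p_new_author) or by an existing
    author chosen in proportion to how many papers they already have."""
    rng = random.Random(seed)
    authors = [0]            # one entry per paper, holding the author's id
    next_id = 1
    for _ in range(n_papers - 1):
        if rng.random() < p_new_author:
            authors.append(next_id)
            next_id += 1
        else:
            # Sampling from the paper list weights authors by paper count
            authors.append(rng.choice(authors))
    return Counter(authors)   # author id -> number of papers

counts = simulate_papers()
papers_per_author = Counter(counts.values())
# Lotka-like pattern: far more authors with 1 paper than with 2, than with 3...
```

Plotting `papers_per_author` on log-log axes yields the heavily skewed, roughly power-law distribution that Lotka's Law describes, showing how little machinery such regularities require.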
The range of tools designed to help build agent-based models is briefly reviewed. It is suggested that although progress has been made, there is much further design and development work to be done. Modelers have an important part to play, because the creation of tools and models using those tools proceed in a dialectical relationship.
To help health economic modelers respond to demands for greater use of complex systems models in public health. To propose identifiable features of such models and support researchers to plan public health modeling projects using these models. A working group of experts in complex systems modeling and economic evaluation was brought together to develop and jointly write guidance for the use of complex systems models for health economic analysis. The content of workshops was informed by a scoping review. A public health complex systems model for economic evaluation is defined as a quantitative, dynamic, non-linear model that incorporates feedback and interactions among model elements, in order to capture emergent outcomes and estimate health, economic and potentially other consequences to inform public policies. The guidance covers: when complex systems modeling is needed; principles for designing a complex systems model; and how to choose an appropriate modeling technique. This paper provides a definition to identify and characterize complex systems models for economic evaluations and proposes guidance on key aspects of the process for health economics analysis. This document will support the development of complex systems models, with impact on public health systems policy and decision making.
Emerging research suggests exposure to high levels of air pollution at critical points in the life-course is detrimental to brain health, including cognitive decline and dementia. Social determinants play a significant role, including socio-economic deprivation, environmental factors and heightened health and social inequalities. Policies have been proposed more generally, but their benefits for brain health have yet to be fully explored. Over the course of two years, we worked as a consortium of 20+ academics in a participatory and consensus method to develop the first policy agenda for mitigating air pollution's impact on brain health and dementia, including an umbrella review and engaging 11 stakeholder organisations. We identified three policy domains and 14 priority areas. Research and Funding included: (1) embracing a complexities of place approach that (2) highlights vulnerable populations; (3) details the impact of ambient PM2.5 on brain health, including current and historical high-resolution exposure models; (4) emphasises the importance of indoor air pollution; (5) catalogues the multiple pathways to disease for brain health and dementia, including those most at risk; (6) embraces a life course perspective; and (7) radically rethinks funding. Education and Awareness included: (8) making this unrecognised public health issue known; (9) developing educational products; (10) attaching air pollution and brain health to existing strategies and campaigns; and (11) providing publicly available monitoring, assessment and screening tools. Policy Evaluation included: (12) conducting complex systems evaluation; (13) engaging in co-production; and (14) evaluating air quality policies for their brain health benefits. Given the pressing issues of brain health, dementia and air pollution, setting a policy agenda is crucial. 
Policy needs to be matched by scientific evidence and appropriate guidelines, including bespoke strategies to optimise impact and mitigate unintended consequences. The agenda provided here is the first step toward such a plan.
Microsimulation in the social sciences. Below are three articles based on presentations and discussions that took place at the Dagstuhl Seminar on social science microsimulation, "A Challenge to Computer Science", Schloß Dagstuhl, 1-5 May 1995, and that have since been published in K. G. Troitzsch, U. Mueller, G. N. Gilbert and J. E. Doran (eds), Social Science Microsimulation, 1996, Springer Verlag. The three articles are "Simulation as a research strategy" and "Computer environments and languages for social simulation: summary of an informal discussion" by G. Nigel Gilbert, and "Computer simulation and the social sciences: on the future of a difficult relationship (summary of an informal discussion)" by Klaus G. Troitzsch.
It is challenging to predict long-term outcomes of interventions without understanding how they work. Health economic models of public health interventions often do not incorporate the many determinants of individual and population behaviours that influence long-term effectiveness. The aim of this paper is to draw on psychology, sociology, behavioural economics, complexity science and health economics to: (a) develop a toolbox of methods for incorporating the influences on behaviour into public health economic models (PHEM-B); and (b) set out a research agenda for health economic modellers and behavioural/social scientists to further advance methods to better inform public health policy decisions. A core multidisciplinary group developed a preliminary toolbox from a published review of the literature and tested this conceptually using a case study of a diabetes prevention simulation. The core group was augmented by a much wider group that covered a broader range of multidisciplinary expertise. We used a consensus method to gain agreement of the PHEM-B toolbox. This included a one-day workshop and subsequent reviews of the toolbox. The PHEM-B toolbox sets out 12 methods which can be used in different combinations to incorporate influences on behaviours into public health economic models: collaborations between modellers and behavioural scientists, literature reviewing, application of the Behaviour Change Intervention Ontology, systems mapping, agent-based modelling, differential equation modelling, social network analysis, geographical information systems, discrete event simulation, theory-informed statistical and econometric analyses, expert elicitation, and qualitative research/process tracing. For each method, we provide a description with key references, an expert consensus on the circumstances when they could be used, and the resources required.
This is the first attempt to rigorously and coherently propose methods to incorporate the influences on behaviour into health economic models of public health interventions. It may not always be feasible or necessary to model the influences on behaviour explicitly, but it is essential to develop an understanding of the key influences. Changing behaviour and maintaining that behaviour change could have different influences; thus, there could be benefits in modelling these separately. Future research is needed to develop, collaboratively with behavioural scientists, a suite of more robust health economic models of health-related behaviours, reported transparently, including coding, which would allow model reuse and adaptation.
Concern has been expressed in some quarters about the ability of Agent-Based Modelling to progress rather than merely proliferate arbitrary models. This chapter offers a case study of an elderly Agent-Based Model (hereafter ABM) being resurrected because it seemed suitable to a new use, namely to study the effect of the current UK economic crisis on health and wellbeing. The chapter aims to contribute on two levels. One is to discuss the resurrection of the ABM, replicate its original results (enhanced with explanations) and extend it to the new situation. This is very much research in progress. The other aim is to show how, to be progressive, ABMs need to remain in use and effectively accessible to the research community. The present case study shows how published ABMs alone are unlikely to satisfy that requirement as time passes (and thus facilitate progressive research). Because this is work in progress, it also attempts to show how some modelling commonplaces (for example that arbitrary models still enhance understanding) work in practice. Such aspects are often written out when publishing research that is considered finished.
This is an Evaluation Policy and Practice Note that explores the application of Agent-Based Modelling (ABM) for complex policy evaluation.
This briefing explains what complexity science and systems thinking means for people developing and delivering policy. It also introduces a common language and set of symbols to help frame thinking, conversations and action on complexity.
https://www.youtube.com/watch?v=U-nqs9ak2nY
The rational choice framework is commonly used in many energy demand models and energy economic policy models. However, the notion of reasoned decision-making underpinning rational actor models is less useful for explaining the dynamics of routine household activities (e.g., cooking, showering, heating) which result in energy use. An alternative body of work, collectively referred to as social practice theories, offers a more practical explanation of routines. It is also argued that practices, i.e. the routine activities that people do in the service of normal everyday living, are at the centre of social change, and hence should be the focus of interventions concerned with demand reduction. One of the main criticisms of social practice theories, however, is that the concepts proposed are high-level and abstract and hence difficult to apply to real-world problems. Most existing practice-centric models are also abstract implementations. To address this gap, in this paper, we present a concrete, empirically-based practice-centric agent-based model to simulate the dynamics of household heating practices. We also use the model to explore consumer response to a simulated price-based demand response scheme. We show how a practice-centric approach leads to a more realistic understanding of the energy use patterns of households by revealing the underlying contexts of consumption. The overall motivation is that by gaining insight into the trajectories of unsustainable energy consuming practices, it might be possible to propose alternative pathways that allow more sustainable practices to take hold.
https://blogs.surrey.ac.uk/sociology/2020/02/18/a-computational-model-to-explore-decentralised-water-governance/
The Environmental Social Science kNowledge Exchange Map of Opportunities (ESS NEMO) is a package of systems maps and associated documentation that show the groups, organisations and individual actors that environmental social scientists could engage with in knowledge creation and/or exchange in the UK. ESS NEMO is a tool that, among other rationales, aims to promote greater collaboration and trust between environmental social scientists, and with other groups that may not traditionally be included in ESS activities. Environmental Social Science (ESS) is defined as the systematic study of people and their social and non-human physical environment (their habitat) (Gatersleben et al., forthcoming). The information presented in ESS NEMO reflects the situation as of 10th July 2024. It is acknowledged that the landscape is dynamic and changes frequently over time.
This paper describes a participatory approach to co-designing social simulation models with policymakers using a case study of modeling European Commission policy. Managing the collaboration of a wide range of individuals or organizations is challenging but increasingly important as policy making becomes more complex. A framework for a co-design process based on a participatory approach is proposed. The framework suggests that the collaborative design should go through the following phases: identifying user questions, providing data, discussing the model for validation, visualizing results, and discussing scope and limitations with stakeholders. Key findings are that the co-design process requires communication skills, patience, willingness to compromise, and motivation to make the formal world of modelers and the narrative world of policymaking meet. Furthermore, especially in participatory modeling scenarios, stakeholders have to be included not only to provide data, but also to confirm the existence, quality and availability of data.
The question of whether local regulations should limit the supply of taxicabs and control taxi fares is considered. Recently, deregulation has become a popular suggestion. Effects of taxi deregulation in monopolistic and competitive markets are discussed within a framework of eight regulatory scenarios involving different price, entry, and industry concentration factors. Results indicate no single optimal strategy, but rather several possible scenarios, each with certain advantages and drawbacks.
This paper is an attempt to show that scientific humour is an important topic analytically, because it reveals with particular clarity some of the interpretative resources by means of which scientists create social meaning. A discourse analysis is presented of a recurrent participants' 'proto-joke' dealing with communication among scientists, of a satirical article appearing in a 'joke journal', and of a cartoon. This analysis adds further support to prior work on the nature of scientists' interpretative repertoires, as well as describing some of the organizational forms which are used to construct scientists' humour. It is suggested that there is no difference, in general principle, between the social production of humour and the social production of, say, consensus or controversy; in other words, that the phenomena traditionally investigated by sociologists of science are best conceived, like humour, as highly variable outcomes of the interpretative procedures used by scientists to organize their versions of social action in specific contexts.
Everyone acknowledges the importance of responsible computing, but practical advice is hard to come by. Important Internet applications are ways to accomplish business processes. We investigate how they can be geared to support responsibility as illustrated via sustainability. Sustainability is not only urgent and essential but also challenging due to engagement with human and societal concerns, diverse success criteria, and extended temporal and spatial scopes. This article introduces a new framework for developing responsible Internet applications that synthesizes the perspectives of the theory of change, participatory system mapping, and computational sociotechnical systems.
This paper reviews the “Wizard of Oz” technique for simulating future interactive technology and develops a partial taxonomy of such simulations. The issues of particular relevance to Wizard of Oz simulations of speech input/output computer systems are discussed and some experimental variables and confounding factors are reviewed. A general Wizard of Oz methodology is suggested.
A safe and just operating space for socioecological systems is a powerful bridging concept in sustainability science. It integrates biophysical earth-system tipping points (ie, thresholds at which small changes can lead to amplifying effects) with social science considerations of distributional equity and justice. Often neglected, however, are the multiple feedback loops between self-identity and planetary boundaries. Environmental degradation can reduce self-identification with nature, leading to decreased pro-environmental behaviours and decreased cooperation with out-groups, further increasing the likelihood of transgressing planetary boundaries. This vicious cycle competes with a virtuous one, where improving environmental quality enhances the integration of nature into self-identity and improves health, thereby facilitating prosocial and pro-environmental behaviour. These behavioural changes can also cascade up to influence social and economic institutions. Given a possible minimum degree of individual self-care to maintain health and prosperity, there would seem to exist an analogous safe and just operating space for self-identity, for which system stewardship for planetary health is crucial.
This paper collects the positions of four experts who participated in a panel on simulation-based policy support. The experts are well known for their contributions in this domain, predominantly for their use of agent-based approaches. The first section addresses the increasing requirements to integrate social modeling to support the evaluation of the socio-economic impact of policies, including questions of equity. Artificial societies are presented as enablers. The second section observes that the purpose of a simulation model is very much linked to its usefulness for supporting policy decisions. This implies requirements for learning agents and better representation of time and space. The third section focuses on the need to give agents more active behavior, to let them drive the action. While digital twin technology promises to help, the current state of the art seems insufficient. The section closes by looking at trust in simulation models, which will be needed for policy support.
Household activities nowadays rely heavily on electrical and electronic devices, the operation of which is largely reflected in household energy usage. With the advance of sensor technology, smart meters are increasingly adopted in people's homes, which makes it easier to access finer-grained energy consumption data and, more importantly, enables the study of household activities via the patterns in energy consumption. In this paper, we investigate the application of the k-Nearest Neighbours algorithm (k-NN) and a Convolutional Neural Network (CNN) to predict whether specific appliances are being used (on/off status) at different times based on the total energy consumption of a whole house. The experimental results on three types of appliances in one household show that the CNN in general achieves better performance than k-NN, and that both methods perform better on appliances with relatively large energy consumption.
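To make the classification task concrete, the following is a minimal illustrative sketch of the k-NN side of the approach, using synthetic data rather than the paper's smart-meter readings: each sample is a short window of whole-house consumption, and the label is an appliance's on/off status. All names and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(n, appliance_on):
    # Synthetic stand-in for smart-meter data: a ~0.3 kW base load with
    # noise, plus ~2 kW whenever the (hypothetical) appliance is on.
    base = 0.3 + 0.05 * rng.standard_normal((n, 10))
    return base + (2.0 if appliance_on else 0.0)

# Training set: 50 windows with the appliance on, 50 with it off.
X_train = np.vstack([make_windows(50, True), make_windows(50, False)])
y_train = np.array([1] * 50 + [0] * 50)

def knn_predict(x, k=3):
    # Euclidean distance to every training window, majority vote over
    # the k nearest neighbours.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return int(nearest.sum() > k // 2)

test_on = make_windows(1, True)[0]
test_off = make_windows(1, False)[0]
print(knn_predict(test_on), knn_predict(test_off))  # → 1 0
```

Real NILM-style data are far less separable than this toy example, which is one reason the paper finds a learned CNN representation outperforms raw-distance k-NN.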
The COVID-19 pandemic is causing a dramatic loss of lives worldwide, challenging the sustainability of our health care systems, threatening economic meltdown, and putting pressure on the mental health of individuals (due to social distancing and lock-down measures). The pandemic is also posing severe challenges to the scientific community, with scholars under pressure to respond to policymakers' demands for advice despite the absence of adequate, trusted data. Understanding the pandemic requires fine-grained data representing specific local conditions and the social reactions of individuals. While experts have built simulation models to estimate disease trajectories that may be enough to guide decision-makers to formulate policy measures to limit the epidemic, they do not cover the full behavioural and social complexity of societies under pandemic crisis. Modelling that has such a large potential impact upon people's lives is a great responsibility. This paper calls on the scientific community to improve the transparency, access, and rigour of their models. It also calls on stakeholders to improve the rapidity with which data from trusted sources are released to the community (in a fully responsible manner). Responding to the pandemic is a stress test of our collaborative capacity and the social/economic value of research.
This book presents the state-of-the-art in social simulation as presented at the Social Simulation Conference 2018 in Stockholm, Sweden. It covers the developments in applications and methods of social simulation, addressing societal issues such as socio-ecological systems and policy making. Methodological issues discussed include large-scale empirical calibration, model sharing and interdisciplinary research, as well as decision making models, validation and the use of qualitative data in simulation modeling. Research areas covered include archaeology, cognitive science, economics, organization science, and social simulation education. This collection gives readers insight into the increasing use of social simulation in both its theoretical development and in practical applications such as policy making, whereby modelling and the behavior of complex systems is key. The book will appeal to students, researchers and professionals in the various fields.
At the time of writing, the UK government is attempting to tackle place-based inequality through its 'levelling up' agenda. To be effective, such interventions require local institutions with the capacity, powers, and budgets to develop and implement long-term strategies. Multi-level metagovernance, the ongoing reorganisation of local governance systems by the central state, has become a salient political process in England, characterised by fragmented system design, distorted local strategies, micromanagement and mistrustful central-local relations. These various problems are underpinned by a problematic combination of quasi-markets and state hierarchy. Together, these metagovernance mechanisms significantly constrain local capacity to deliver economic development.
In contrast to the importance placed on secondary analysis by researchers in other disciplines, British sociologists have long neglected the rich data available from large scale government surveys, perhaps because of technical obstacles. This report describes work to make the General Household Survey (GHS) data more easily accessible for sociological analysis, summarizes the contents and structure of the GHS and reviews the arrangements which have been made to allow researchers access to it.
This paper presents first results of modelling income and wealth inequalities resulting solely from housing market dynamics. An existing behaviour-based agent-based model of the English and Welsh housing market is used to analyse the demographies of more expensive and cheaper areas emerging from buyer, seller and realtor interactions. The model is analysed for a small set of macroeconomic configurations of interest rates and loan-to-value ratios. The model demonstrates how quickly higher income areas emerge, with mean incomes between 10 and 20% higher in the expensive area. This difference is accentuated for higher interest rates. The model also demonstrates the strong relationship between wealth and housing assets, with the mean level of wealth about 20% higher (and up to 50% higher for loan-to-value ratios of 80%).
This paper reviews the need for the development of transit performance measures, in the light of recent legislation and public subsidy issues for public transportation in the United States. An evaluation framework is presented, which defines and distinguishes between the efficiency, effectiveness and impact of public transit efforts. The application of this framework in evaluating public transit investments, and the use of the performance measures obtained through the application of this framework, in the allocation of funds among systems is then discussed. Research needs with respect to data collection requirements, cross-jurisdictional comparability, and the utility of the proposed performance measures for decision-making are finally addressed.
This paper aims to improve the transparency of agent-based social simulation (ABSS) models and make it easier for various actors engaging with these models to make sense of them. It studies what ABSS is and juxtaposes its basic conceptual elements with insights from the agency/structure debate in social theory to propose a framework that captures the 'conceptual anatomy' of ABSS models in a simple and intuitive way. The five elements of the framework are: agency, social structure, environment, actions and interactions, and temporality. The paper also examines what is meant by the transparency or opacity of ABSS in the rapidly growing literature on the epistemology of computer simulations. It deconstructs the methodological criticism that ABSS models are black boxes by identifying multiple categories of transparency/opacity. It argues that neither opacity nor transparency is intrinsic to ABSS. Instead, they are dependent on research habitus: practices developed in a research field that are shaped by the structure of the field and the available resources. It discusses the ways in which thinking about the conceptual anatomy of ABSS can improve its transparency.
An introduction to the Special Issue of the JASSS Forum. Science is the result of a substantially social process. That is, science relies on many inter-personal processes, including: selection and communication of research findings, discussion of method, checking and judgement of others' research, development of norms of scientific behaviour, organisation of the application of specialist skills/tools, and the organisation of each field (e.g. allocation of funding). An isolated individual, however clever and well resourced, would not produce science as we know it today. Furthermore, science is full of the social phenomena that are observed elsewhere: fashions, concern with status and reputation, group-identification, collective judgements, social norms, competitive and defensive actions, to name a few. Science is centrally important to most societies in the world, not only in technical, military and economic ways, but also in the cultural impacts it has, providing ways of thinking about ourselves, our society and our environment. If we believe the following: simulation is a useful tool for understanding social phenomena, science is substantially a social phenomenon, and it is important to understand how science operates, then it follows that we should be attempting to build simulation models of the social aspects of science. This JASSS forum presents a collection of position papers by philosophers, sociologists and others describing the features and issues the authors would like to see in social simulations of the many processes and aspects that we lump together as "science". It is intended that this collection will inform and motivate substantial simulation work as described in the last section of this introduction.
The use of computer simulations to study social phenomena has grown rapidly during the last few years. Many social scientists from the fields of economics, sociology, psychology and other disciplines now use computer simulations to study a wide range of social phenomena. The availability of powerful personal computers, the development of multidisciplinary approaches and the use of artificial intelligence models have all contributed to this development. The benefits of using computer simulations in the social sciences are obvious. This holds true for the use of simulations as tools for theory building and for their implementation as tools for sensitivity analysis and parameter optimization in application-oriented models. In both cases, simulation provides powerful tools for the study of complex social systems, especially for dynamic and multi-agent social systems in which mathematical tractability is often impossible. The graphical display of simulation output renders it user-friendly to many social scientists who lack sufficient familiarity with the language of mathematics. The present volume aims to contribute in four directions: (1) To examine theoretical and methodological issues related to the application of simulations in the social sciences. By this we wish to promote the objective of designing a unified, user-friendly simulation toolkit which could be applied to diverse social problems. While no claim is made that this objective has been met, the theoretical issues treated in Part 1 of this volume are a contribution towards this objective.
This paper introduces and illustrates a process for stakeholder-driven innovation in a highly contested domain: using artificial intelligence (AI) algorithms for social service delivery in national welfare systems. AI technologies are increasingly being applied because they are assumed to lead to efficiency gains. However, the use of AI is being challenged for its fairness. Existing biases and discrimination in service delivery appear to be perpetuated and cemented as a result of basing the AI on machine learning of past data. Fairness, however, is a dynamic cultural concept: its meaning in terms of values and beliefs, its implications for technology design, and the desired techno-futures need to be societally negotiated with all stakeholders, especially vulnerable groups suffering from current practices. The challenge is to provide contextualized, value-sensitive and participatory AI that is responsive to societal needs and change. The ‘AI for Assessment’ (AI FORA) project combines empirical research on AI-based social service delivery with gamification at community-based multi-stakeholder workshops and a series of case-specific agent-based models for assessing the status quo of AI-based distribution fairness in different countries, for simulating desired policy scenarios, and for generating an approach to ‘Better AI’. The paper is structured as follows: after introducing the participatory approach of AI FORA with its motivation and overall elements, the paper focuses on gamification and simulation as central components of the modelling strategy. Case-specific game design and ABMs are described and illustrated using the example of the AI FORA Spanish case study.
In this introduction, we outline the theoretical background for the most important concepts of the Simulating Knowledge Dynamics in Innovation Networks (SKIN) model. We describe the basic model, which we understand more as a theoretical framework than as a piece of code and preview the following chapters, which apply the SKIN model to diverse industrial sectors and develop related network models to generate insights about the dynamics of innovation networks.
Presents the introductory welcome message from the conference proceedings. May include the conference officers' congratulations to all involved with the conference event and publication of the proceedings record.
Noting that, although the analysis of citations has become a frequently used resource in recent empirical studies of science, little progress has been made towards understanding the reasons for the practice of citation, the paper explores the notion that references provide persuasive support for the results announced in the citing paper. It is argued that authors choose to cite articles they recognize to be authoritative in order to justify the validity, novelty and significance of their own work. In so doing, authors can be seen to be both demonstrating their allegiance to a particular section of the research community and contributing to the establishment of a consensus about the worth of the cited work. These ideas are applied to explain the findings of research on the quality of articles and on co-citation analysis, and are used to criticize studies which undertake a content analysis of citations.
A theoretical framework is proposed by which women as well as men may be included in class theory, and a methodology is suggested by which one aspect of women's class location, their relationship to the labour market, may be measured. It is argued that social class in a Weberian sense may be seen as comprising two distinct although related dimensions. Firstly, that based upon relationship to the labour market, measured at the level of the individual; and second, that represented by patterns of consumption (in terms of goods and services), measured at the level of the family. All those with a direct relationship to the labour market may be allocated to an occupational class position, irrespective of position within the family. Data from the General Household Survey are used to produce a preliminary occupational class schema for women which does not depend upon assumptions of skill or the manual/non-manual nature of the work.
The ways in which scientists account for and justify their own scientific views are analyzed by examining in detail transcripts of interviews with biochemists working on oxidative phosphorylation. It is shown that scientists use two repertoires, the 'empiricist' and the 'contingent', to account for their beliefs. The empiricist repertoire derives from and reinforces the traditional conception of scientific rationality according to which data obtained from impersonal, standardized routines are used to establish the validity of hypotheses and to decide between competing theories. However, when the contingent repertoire is adopted, 'facts' are seen as depending on fallible interpretative work. Both repertoires are used in informal interaction, scientists moving flexibly between the two as they construct accounts of theory-choice. In view of this variability of accounts, it is concluded that it is impossible to obtain definitive evidence of how theories are actually chosen and that a new form of sociological analysis is required. An attempt is made to illustrate such an analysis.
A general account is presented of the emergence, growth, and decline of scientific research networks and their associated problem areas. Research networks are seen to pass through three phases. The first, exploratory phase is distinguished by a lack of effective communication among participants and by the pursuit of imprecisely defined problems. The second phase is one of rapid growth, associated with increasing social and intellectual integration, made possible by improved communication. An increasingly precise scientific consensus gradually emerges from a process of negotiation, in which those participants who are members of the scientific elite exert most influence. But as consensus is achieved the problem area becomes less scientifically fruitful; and as the network grows, career opportunities diminish. Consequently, the third, final phase is one of decline and disbandment of the network, together with the movement of participants to new areas of scientific opportunity.
One of the four large demonstrator projects being funded by the UK government is a five-year project to develop decision support systems that could be used by the Department of Health and Social Security. The project is being undertaken by a group comprising companies and universities. Although the final results are not intended for operational use, since the projects are demonstrators, valuable lessons are likely to be learnt about the development of advanced software, and there will be useful examples of systems based on artificial intelligence techniques.
A class of social phenomena, exhibiting fluid boundaries and constant change, called 'collectivities,' is modeled using an agent-based simulation, demonstrating how such models can show that a set of plausible microbehaviors can yield the observed macro-phenomenon. Some features of the model are explored and its application to a wide range of social phenomena is described.
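The core mechanism such a model relies on, that plausible micro-behaviours can generate a fluid macro-phenomenon, can be illustrated with a toy sketch. This is not the paper's actual model; every rule and parameter below is an assumption chosen only to show the micro-to-macro pattern: agents carry a group tag and repeatedly adopt the most common tag among a few randomly sampled peers, so shifting 'collectivities' of varying size emerge at the aggregate level.

```python
import random

random.seed(42)
N_AGENTS, N_TAGS, STEPS = 100, 10, 200

# Each agent starts with a random group tag (its current collectivity).
tags = [random.randrange(N_TAGS) for _ in range(N_AGENTS)]

for _ in range(STEPS):
    # Micro-behaviour: one agent samples 5 peers and adopts the tag
    # that is most frequent among them (conformity to local majority).
    i = random.randrange(N_AGENTS)
    peer_tags = [tags[j] for j in random.sample(range(N_AGENTS), 5)]
    tags[i] = max(set(peer_tags), key=peer_tags.count)

# Macro-phenomenon: the emergent size distribution of collectivities.
sizes = sorted((tags.count(t) for t in set(tags)), reverse=True)
print(sizes)
```

Rerunning with different seeds, sampling rules, or step counts changes which tags dominate and how concentrated the distribution is, which is exactly the kind of micro-rule exploration the abstract describes.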
Christopher Watts and Nigel Gilbert explore the generation, diffusion and impact of innovations, which can now be studied using computer simulations. Agent-based simulation models can be used to explain the innovation that emerges from interactions among complex, adaptive, diverse networks of firms, people, technologies, practices and resources. This book provides a critical review of recent advances in agent-based modelling and other forms of the simulation of innovation. Elements explored include: diffusion of innovations, social networks, organisational learning, science models, adopting and adapting, and technological evolution and innovation networks. Many of the models featured in the book can be downloaded from the book's accompanying website. Bringing together simulation models from several innovation-related fields, this book will prove a fascinating read for academics and researchers in a wide range of disciplines, including: innovation studies, evolutionary economics, complexity science, organisation studies, social networks, and science and technology studies. Scholars and researchers in the areas of computer science, operational research and management science will also be interested in the uses of simulation models to improve the understanding of organisations.
Attempts to control the current pandemic through public health interventions have been driven by predictions based on modelling, thus bringing epidemiological models to the forefront of policy and public interest. It is almost inevitable that there will be further pandemics and controlling, suppressing and ameliorating their effects will undoubtedly involve the use of models. However, the accuracy and usefulness of models are highly dependent on the data that are used to calibrate and validate them. In this article, we consider the data needed by the two main types of epidemiological modelling (compartmental and agent-based) and the adequacy of the currently available data sources. We conclude that at present the data for epidemiological modelling of pandemics is seriously deficient and we make suggestions about how it would need to be improved. Finally, we argue that it is important to initiate efforts to collect appropriate data for modelling now, rather than waiting for the next pandemic.
There has been increasing interest in deploying Internet of Things (IoT) devices to study human behavior in locations such as homes and offices. Such devices can be deployed in a laboratory or “in the wild” in natural environments. The latter allows one to collect behavioral data that is not contaminated by the artificiality of a laboratory experiment. Using IoT devices in ordinary environments also brings the benefits of reduced cost, as compared with lab experiments, and less disturbance to the participants’ daily routines, which in turn helps with recruiting them into the research. However, in this case, it is essential to have an IoT infrastructure that can be easily and swiftly installed and from which real-time data can be securely and straightforwardly collected. In this article, we present MakeSense, an IoT testbed that enables real-world experimentation for large-scale social research on indoor activities through real-time monitoring and/or situation-aware applications. The testbed features quick setup, flexibility in deployment, the integration of a range of IoT devices, resilience, and scalability. We also present two case studies to demonstrate the use of the testbed: one in homes and one in offices.
Energy disaggregation, a.k.a. Non-Intrusive Load Monitoring, aims to separate the energy consumption of individual appliances from the readings of a mains power meter measuring the total energy consumption of, e.g. a whole house. Energy consumption of individual appliances can be useful in many applications, e.g., providing appliance-level feedback to the end users to help them understand their energy consumption and ultimately save energy. Recently, with the availability of large-scale energy consumption datasets, various neural network models such as convolutional neural networks and recurrent neural networks have been investigated to solve the energy disaggregation problem. Neural network models can learn complex patterns from large amounts of data and have been shown to outperform the traditional machine learning methods such as variants of hidden Markov models. However, current neural network methods for energy disaggregation are either computationally expensive or are not capable of handling long-term dependencies. In this paper, we investigate the application of the recently developed WaveNet models for the task of energy disaggregation. Based on a real-world energy dataset collected from 20 households over two years, we show that WaveNet models outperform the state-of-the-art deep learning methods proposed in the literature for energy disaggregation in terms of both error measures and computational cost. On the basis of energy disaggregation, we then investigate the performance of two deep-learning based frameworks for the task of on/off detection, which aims at estimating whether an appliance is in operation or not. Based on the same dataset, we show that for the task of on/off detection the second framework, i.e., directly training a binary classifier, achieves better performance in terms of F1 score.
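As a toy illustration of the on/off detection task just mentioned (not the paper's deep-learning method), an appliance can be flagged as "on" whenever its disaggregated power exceeds a threshold, and the result scored with F1. The 15 W threshold and the example readings below are invented for illustration:

```python
# Minimal on/off detection from a disaggregated appliance power trace.
# A reading counts as "on" when its power exceeds a threshold; the F1
# score compares predictions against ground-truth labels. The 15 W
# threshold and the sample readings are illustrative assumptions.

def detect_on_off(power_watts, threshold=15.0):
    """Return a list of booleans: True where the appliance is deemed on."""
    return [p > threshold for p in power_watts]

def f1_score(predicted, actual):
    """F1 = harmonic mean of precision and recall over the 'on' class."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

readings = [0.0, 2.1, 80.5, 95.2, 3.0, 0.5]          # watts, invented
truth = [False, False, True, True, False, False]      # ground truth, invented
pred = detect_on_off(readings)
```

The learned binary classifier the abstract refers to replaces the fixed threshold with a model trained on labelled traces; the evaluation by F1 is the same.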
This paper looks at 10 years of reviews in a multidisciplinary journal, The Journal of Artificial Societies and Social Simulation (JASSS), which is the flagship journal of social simulation. We measured referee behavior and referees' agreement. We found that the disciplinary background and the academic status of the referee have an influence on the report time, the type of recommendation and the acceptance of the reviewing task. Referees from the humanities tend to be more generous in their recommendations than other referees, especially economists and environmental scientists. Second, we found that senior researchers are harsher in their judgments than junior researchers, and the latter accept requests to review more often and are faster in reporting. Finally, we found that articles that had been refereed and recommended for publication by a multidisciplinary set of referees were subsequently more likely to receive citations than those that had been reviewed by referees from the same discipline. Our results show that common standards of evaluation can be established even in multidisciplinary communities.
Optimising policy choices to steer social/economic systems efficiently towards desirable outcomes is challenging. The inter-dependent nature of many elements of society and the economy means that policies designed to promote one particular aspect often have secondary, unintended, effects. In order to make rational decisions, methodologies and tools to assist the development of intuition in this complex world are needed. One approach is the use of agent-based models. These have the ability to capture essential features and interactions and predict outcomes in a way that is not readily achievable through either equations or words alone. In this paper we illustrate how agent-based models can be used in a policy setting by using an example drawn from the biowaste industry. This example describes the growth of in-vessel composting and anaerobic digestion to reduce food waste going to landfill in response to policies in the form of taxes and financial incentives. The fundamentally dynamic nature of an agent-based modelling approach is used to demonstrate that policy outcomes depend not just on current policy levels but also on the historical path taken.
This paper reports the results of a multi-agent simulation designed to study the emergence and evolution of symbolic communication. The novelty of this model is that it considers some interactional and spatial constraints on this process that have been disregarded by previous research. The model is used to give an account of the implications of differences in the agents’ behaviour, which are embodied in a spatial environment. Two communicational dimensions are identified and four types of communication strategies are simultaneously tested. We use the model to point out some interesting emergent communicational properties when the agents’ behaviour is altered by considering those two dimensions.
Models are used to inform policymaking and underpin large amounts of government expenditure. Several authors have observed a discrepancy between the actual and potential use of models in government. While there have been several studies investigating model acceptance in government, it remains unclear under what conditions models are accepted. In this paper, we address the question “What criteria affect model acceptance in policymaking?”, the answer to which will contribute to the wider understanding of model use in government. We employ a thematic coding approach to identify the acceptance criteria for the eight models in our sample. Subsequently, we compare our findings with existing literature and use qualitative comparative analysis to explore what configurations of the criteria are observed in instances of model acceptance. We conclude that model acceptance is affected by a combination of the model’s characteristics, the supporting infrastructure and organizational factors.
This book presents a multi-disciplinary investigation into extortion rackets with a particular focus on the structures of criminal organisations and their collapse, societal processes in which extortion rackets thrive and fail, and the impacts of bottom-up and top-down ways of fighting extortion racketeering. Through integrating a range of disciplines and methods, the book provides an extensive case study of empirically based computational social science. It is based on a wealth of qualitative data regarding multiple extortion rackets, such as the Sicilian Mafia, an international money laundering organisation and a predatory extortion case in Germany. Computational methods are used for data analysis, to help in operationalising data for use in agent-based models and to explore structures and dynamics of extortion racketeering through simulations. In addition to textual data sources, stakeholders and experts are extensively involved, providing narratives for analysis and qualitative validation of models. The book presents a systematic application of computational social science methods to the substantive area of extortion racketeering. The reader will gain a deep understanding of extortion rackets, in particular their entrenchment in society and the processes supporting and undermining extortion rackets. Also covered are computational social science methods, in particular computationally assisted text analysis and agent-based modelling, and the integration of empirical, theoretical and computational social science.
In order to deal with an increasingly complex world, we need ever more sophisticated computational models that can help us make decisions wisely and understand the potential consequences of choices. But creating a model requires far more than just raw data and technical skills: it requires a close collaboration between model commissioners, developers, users and reviewers. Good modelling requires its users and commissioners to understand more about the whole process, including the different kinds of purpose a model can have and the different technical bases. This paper offers a guide to the process of commissioning, developing and deploying models across a wide range of domains from public policy to science and engineering. It provides two checklists to help potential modellers, commissioners and users ensure they have considered the most significant factors that will determine success. We conclude there is a need to reinforce modelling as a discipline, so that misconstruction is less likely; to increase understanding of modelling in all domains, so that the misuse of models is reduced; and to bring commissioners closer to modelling, so that the results are more useful.
The concept of self-organization in social science is reviewed. In the first two sections, some basic features of self-organizing dynamical systems in general science are presented and the origin of the concept is reconstructed, paying special attention to social science accounts of self-organization. Then, theoretical and methodological considerations regarding the current application of the concept and prospective challenges are examined.
EC policy reveals a strong conviction that CSOs’ main function in EU-funded research and innovation projects is to take care of the ‘societal perspective’, which would not be adequately represented otherwise. With this, CSOs are supposed to be the main advocates of RRI in project consortia and are supported by all kinds of EC policy measures to fulfil this role. This conviction is problematic not only because of difficulties in defining CSOs as such. Empirical data about the role of CSOs in high-tech/high-innovation research projects and the distribution of RRI activities among consortium members reveal that the role of CSOs is much more multi-faceted (data providers, providers of access to the research field, providers of specific domain expertise, etc.) than currently assumed. Furthermore, RRI policies of the EC have managed to sensitise all other actors in consortia to the societal perspective: universities and companies are likewise active in promoting and realising RRI perspectives in project consortia, with CSOs not really standing out among them. These findings have at least two interesting implications: (1) CSOs have far more to offer than being just the ‘moral voice’ of society in research and innovation; their contribution is multi-faceted and beneficial in many respects. (2) The RRI policy of the EC is even more successful than expected: it is not just one actor type that is supposed to introduce RRI keys and elements in consortia; we can observe something like RRI diffusion among different types of consortium members, with all of them actively supporting RRI perspectives in their research work.
Computational sociology models social phenomena using the concepts of emergence and downward causation. However, the theoretical status of these concepts is ambiguous; they suppose too much ontology and are invoked by two opposed sociological interpretations of social reality: the individualistic and the holistic. This paper aims to clarify those concepts and argue in favour of their heuristic value for social simulation. It does so by proposing a link between the concept of emergence and Luhmann's theory of communication. For Luhmann, society emerges from the bottom-up as communication and he describes the process by which society limits the possible selections of individuals as downward causation. It is argued that this theory is well positioned to overcome some epistemological drawbacks in computational sociology. © 2012 Blackwell Publishing Ltd.
This paper presents the agent-based model INFSO-SKIN, which provides ex-ante evaluation of possible funding policies in Horizon 2020 for the European Commission’s DG Information Society and Media (DG INFSO). Informed by a large dataset recording the details of funded projects, the simulation model is set up to reproduce and assess the funding strategies, the funded organisations and projects, and the resulting network structures of the Commission’s Framework 7 (FP7) programme. To address the evaluative questions of DG INFSO, this model, extrapolated into the future without any policy changes, is taken as an evidence-based benchmark for further experiments. Against this baseline scenario the following example policy changes are tested: (i) What if there were changes to the thematic scope of the programme? (ii) What if there were changes to the instruments of funding? (iii) What if there were changes to the overall amount of programme funding? (iv) What if there were changes to increase Small and Medium Enterprise (SME) participation? The results of these simulation experiments reveal some likely scenarios as policy options for Horizon 2020. The paper thus demonstrates that realistic modelling with a close data-to-model link can directly provide policy advice.
This chapter addresses the relationship between sociology and Non-Equilibrium Social Science (NESS). Sociology is a multiparadigmatic discipline with significant disagreement regarding its goals and status as a scientific discipline. Different theories and methods coexist temporally and geographically. However, it has always aimed at identifying the main factors that explain the temporal stability of norms, institutions and individuals’ practices, as well as the dynamics of institutional change and the conflicts brought about by power relations, economic and cultural inequality and class struggle. Sociologists considered that equilibrium could not sufficiently explain the constitutive, maintaining and dissolving dynamics of society as a whole. As a move away from the formal apparatus for the study of equilibrium, NESS does not imply a major shift from traditional sociological theory. Complex features have long been articulated in sociological theorization, and sociology embraces the complexity principles of NESS through its growing attention to complex adaptive systems and non-equilibrium sciences, with human societies seen as highly complex, path-dependent, far-from-equilibrium, and self-organising systems. In particular, Agent-Based Modelling provides a more coherent inclusion of NESS and complexity principles into sociology. Agent-based sociology uses data and statistics to gauge the ‘generative sufficiency’ of a given microspecification by testing the agreement between ‘real-world’ and computer-generated macrostructures. When the model cannot generate the outcome to be explained, the microspecification is not a viable candidate explanation. The separation between the explanatory and pragmatic aspects of social science has led sociologists to be highly critical about the implementation of social science in policy.
However, ABM allows systematic exploration of the consequences of modelling assumptions and makes it possible to model much more complex phenomena than previously. ABM has proved particularly useful in representing socio-technical and socio-ecological systems, with the potential to be of use in policy. ABM offers formalized knowledge that can appear familiar to policymakers versed in the methods and language of economics, with the prospect of sociology becoming more influential in policy.
Peer production communities are based on the collaboration of communities of people, mediated by the Internet, typically to create digital commons, as in Wikipedia or free software. The contribution activities around the creation of such commons (e.g., source code, articles, or documentation) have been widely explored. However, other types of contribution whose focus is directed toward the community have remained significantly less visible (e.g., the organization of events or mentoring). This work challenges the notion of contribution in peer production through an in-depth qualitative study of a prominent “code-centric” example: the case of the free software project Drupal. Involving the collaboration of more than a million participants, the Drupal project supports nearly 2% of websites worldwide. This research (1) offers empirical evidence of the perception of “community-oriented” activities as contributions, and (2) analyzes their lack of visibility in the digital platforms of collaboration. Therefore, through the exploration of a complex and “code-centric” case, this study aims to broaden our understanding of the notion of contribution in peer production communities, incorporating new kinds of contributions customarily left invisible.
A multi-agent simulation embodying a theory of innovation networks has been built and used to suggest a number of policy-relevant conclusions. The simulation animates a model of innovation (the successful exploitation of new ideas) and this model is briefly described. Agents in the model representing firms, policy actors, research labs, etc. each have a knowledge base that they use to generate ‘artefacts’ that they hope will be innovations. The success of the artefacts is judged by an oracle that evaluates each artefact using a criterion that is not available to the agents. Agents are able to follow strategies to improve their artefacts either on their own (through incremental improvement or by radical changes), or by seeking partners to contribute additional knowledge. It is shown though experiments with the model's parameters that it is possible to reproduce qualitatively the characteristics of innovation networks in two sectors: personal and mobile communications and biotechnology.
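The search process this abstract describes (agents generating artefacts, an oracle scoring them against a criterion hidden from the agents, and incremental, radical or partnering strategies) can be sketched in a few lines. Everything numeric below, including the vector length, step sizes and iteration count, is an illustrative assumption and not a parameter of the actual model:

```python
import random

# Sketch: an agent holds a knowledge vector, an oracle scores artefacts
# against a hidden target, and the agent keeps incremental or radical
# changes that improve the score. Partnering pools knowledge with another
# agent. All sizes and step widths are illustrative assumptions.

random.seed(1)
TARGET = [random.random() for _ in range(8)]  # the oracle's hidden criterion

def oracle(artefact):
    """Score an artefact: the closer to the hidden target, the better."""
    return -sum((a - t) ** 2 for a, t in zip(artefact, TARGET))

def incremental(artefact):
    """Small tweak to one element of the knowledge vector."""
    out = artefact[:]
    out[random.randrange(len(out))] += random.uniform(-0.05, 0.05)
    return out

def radical(artefact):
    """Replace one element of the knowledge vector entirely."""
    out = artefact[:]
    out[random.randrange(len(out))] = random.random()
    return out

def partner(artefact, other):
    """Pool knowledge with a partner: element-wise average."""
    return [(a + b) / 2 for a, b in zip(artefact, other)]

agent = [random.random() for _ in range(8)]
initial_score = oracle(agent)
for _ in range(200):
    candidate = random.choice([incremental, radical])(agent)
    if oracle(candidate) > oracle(agent):  # keep only improvements
        agent = candidate
joint = partner(agent, [random.random() for _ in range(8)])  # a partnership artefact
```

Because only improvements are kept, the agent's oracle score can never decline, which mirrors the hill-climbing flavour of the strategies described.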
The TELL ME simulation model is being developed to assist health authorities to understand the effects of their choices about how to communicate with citizens about protecting themselves from influenza epidemics. It will include an agent based model to simulate personal decisions to seek vaccination or adopt behaviour such as improved hand hygiene. This paper focusses on the design of the agents' decisions, using a combination of personal attitude, average local attitude, the local number of influenza cases and the case fatality rate. It also describes how personal decision making is connected to other parts of the model.
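The decision mechanism outlined above can be illustrated with a minimal, hypothetical implementation. The weights, risk scaling and adoption threshold below are invented for illustration and are not the TELL ME model's actual values:

```python
# Hypothetical sketch of an agent's protection decision combining the four
# inputs listed in the abstract: personal attitude, average local attitude,
# the local number of influenza cases, and the case fatality rate.
# All weights, scalings and the threshold are invented for illustration.

def perceived_risk(local_cases, fatality_rate, population=1000):
    """Crude risk term: local incidence scaled up by severity."""
    incidence = local_cases / population
    return incidence * (1 + 10 * fatality_rate)

def adopts_protection(personal_attitude, local_attitude,
                      local_cases, fatality_rate,
                      w_personal=0.5, w_local=0.3, w_risk=0.2,
                      threshold=0.4):
    """Adopt protective behaviour when the weighted score passes a threshold."""
    risk = min(1.0, 50 * perceived_risk(local_cases, fatality_rate))
    score = (w_personal * personal_attitude
             + w_local * local_attitude
             + w_risk * risk)
    return score >= threshold

# A cautious agent surrounded by many cases adopts protective behaviour;
# a sceptical agent in an untouched area does not.
print(adopts_protection(0.8, 0.6, local_cases=40, fatality_rate=0.02))
print(adopts_protection(0.1, 0.1, local_cases=0, fatality_rate=0.0))
```

In the full model such a rule would be evaluated per agent per time step, with the local quantities read from each agent's neighbourhood.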
A number of indicators of the growth of science are critically reviewed to assess their strengths and weaknesses. The focus is on the problems involved in measuring two aspects of scientific growth: growth in manpower and growth in knowledge. It is shown that the design of better indicators depends on careful consideration of the theoretical framework within which the indicators are intended to be used. Recent advances in the sociology of science suggest ways in which the validity of existing indicators may be assessed and improved.
The NewTies project is implementing a simulation in which societies of agents are expected to develop autonomously as a result of individual, population and social learning. These societies are expected to be able to solve environmental challenges by acting collectively. The challenges are intended to be analogous to those faced by early, simple, small-scale human societies. This report on work in progress outlines the major features of the system as it is currently conceived within the project, including the design of the agents, the environment, the mechanism for the evolution of language and the peer-to-peer infrastructure on which the simulation runs.
Computational models are increasingly being used to assist in developing, implementing and evaluating public policy. This paper reports on the experience of the authors in designing and using computational models of public policy (‘policy models’, for short). The paper considers the role of computational models in policy making, and some of the challenges that need to be overcome if policy models are to make an effective contribution. It suggests that policy models can have an important place in the policy process because they could allow policy makers to experiment in a virtual world, and have many advantages compared with randomised control trials and policy pilots. The paper then summarises some general lessons that can be extracted from the authors’ experience with policy modelling. These general lessons include the observation that often the main benefit of designing and using a model is that it provides an understanding of the policy domain, rather than the numbers it generates; that care needs to be taken that models are designed at an appropriate level of abstraction; that although appropriate data for calibration and validation may sometimes be in short supply, modelling is often still valuable; that modelling collaboratively and involving a range of stakeholders from the outset increases the likelihood that the model will be used and will be fit for purpose; that attention needs to be paid to effective communication between modellers and stakeholders; and that modelling for public policy involves ethical issues that need careful consideration. The paper concludes that policy modelling will continue to grow in importance as a component of public policy making processes, but if its potential is to be fully realised, there will need to be a melding of the cultures of computational modelling and policy making.
Despite growing interest, public uptake of 'smart home technologies' in the UK remains low. Barriers for accepting and opting to use smart home technologies have been linked to various socio-technical issues, including data governance. Understanding barriers for accepting to use smart home technologies is therefore important for improving their future design. Equally, enabling the public to help shape design features of these technologies from evidence-informed and deliberative approaches is also important. However, this remains an understudied area. This article reports a UK study exploring public opinion towards smart home technologies, using a Citizens' Jury method. Findings indicate that whilst participants identified the benefits of smart home technologies, participants' data sharing intentions and practices are contingent upon the condition of trust in technology developers. Study outcomes could support practitioners and policymakers in making informed, citizen-led decisions about how to adapt existing data governance frameworks pertaining to smart home technologies.
Agent-based simulation can model simple micro-level mechanisms capable of generating macro-level patterns, such as frequency distributions and network structures found in bibliometric data. Agent-based simulations of organisational learning have provided analogies for collective problem solving by boundedly rational agents employing heuristics. This paper brings these two areas together in one model of knowledge seeking through scientific publication. It describes a computer simulation in which academic papers are generated with authors, references, contents, and an extrinsic value, and must pass through peer review to become published. We demonstrate that the model can fit bibliometric data for a token journal, Research Policy. Different practices for generating authors and references produce different distributions of papers per author and citations per paper, including the scale-free distributions typical of cumulative advantage processes. We also demonstrate the model’s ability to simulate collective learning or problem solving, for which we use Kauffman’s NK fitness landscape. The model provides evidence that those practices leading to cumulative advantage in citations, that is, papers with many citations becoming even more cited, do not improve scientists’ ability to find good solutions to scientific problems, compared to those practices that ignore past citations. By contrast, what does make a difference is referring only to publications that have successfully passed peer review. Citation practice is one of many issues that a simulation model of science can address when the data-rich literature on scientometrics is connected to the analogy-rich literature on organisations and heuristic search.
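The cumulative-advantage citation process this abstract refers to can be sketched as a simple preferential-attachment simulation. The population size, references per paper and attachment rule below are illustrative assumptions, not the paper's actual specification:

```python
import random

# Toy cumulative-advantage ("rich get richer") citation process: each new
# paper cites earlier papers with probability proportional to the citations
# they have already received, plus one so uncited papers can still be
# chosen. Sizes and the attachment rule are illustrative assumptions.

def simulate_citations(n_papers=500, refs_per_paper=3, seed=42):
    random.seed(seed)
    citations = [0]  # citation count per paper; start from one seed paper
    for _ in range(n_papers - 1):
        weights = [c + 1 for c in citations]
        for _ in range(refs_per_paper):
            cited = random.choices(range(len(citations)), weights=weights)[0]
            citations[cited] += 1
        citations.append(0)
    return citations

counts = simulate_citations()
# Share of all citations held by the 50 most-cited papers (top 10%):
top_share = sum(sorted(counts, reverse=True)[:50]) / sum(counts)
```

A process like this concentrates citations in a small fraction of papers, which is the skew that distinguishes cumulative-advantage referencing from citation-blind referencing in the model.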
Government communication is an important management tool during a public health crisis, but understanding its impact is difficult. Strategies may be adjusted in reaction to developments on the ground and it is challenging to evaluate the impact of communication separately from other crisis management activities. Agent-based modeling is a well-established research tool in social science to respond to similar challenges. However, there have been few such models in public health. We use the example of the TELL ME agent-based model to consider ways in which a non-predictive policy model can assist policy makers. This model concerns individuals’ protective behaviors in response to an epidemic, and the communication that influences such behavior. Drawing on findings from stakeholder workshops and the results of the model itself, we suggest such a model can be useful: (i) as a teaching tool, (ii) to test theory, and (iii) to inform data collection. We also plot a path for development of similar models that could assist with communication planning for epidemics.
The relationship between social segregation and workplace segregation has traditionally been studied as a one-way causal relationship mediated by referral hiring. In this paper we introduce an alternative framework which describes the dynamic relationships between social segregation, workplace segregation, individuals’ homophily levels, and referral hiring. An agent-based simulation model was developed based on this framework. The model describes the process of continuous change in the composition of workplaces and agents’ social networks, and how this process affects levels of workplace segregation and the segregation of the agents’ social networks. It is concluded that: (1) social segregation and workplace segregation may co-evolve even when hiring occurs mainly through formal channels and the population is initially integrated; (2) majority groups tend to be more homophilous than minority groups; and (3) referral hiring may be beneficial for minority groups when the population is highly segregated.
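Segregation levels in models like this are commonly summarised with an index. The sketch below computes the standard dissimilarity index; the abstract does not say which measure the model uses, and the example workplace compositions are invented:

```python
# Dissimilarity index: the share of either group that would have to change
# workplace for every workplace to mirror the overall population mix.
# The example workplace compositions below are invented for illustration.

def dissimilarity_index(workplaces):
    """workplaces: list of (minority_count, majority_count) tuples."""
    total_min = sum(m for m, _ in workplaces)
    total_maj = sum(M for _, M in workplaces)
    return 0.5 * sum(abs(m / total_min - M / total_maj)
                     for m, M in workplaces)

# Fully segregated workplaces give index 1; perfectly mixed give 0.
segregated = [(10, 0), (0, 10)]
mixed = [(5, 5), (5, 5)]
print(dissimilarity_index(segregated), dissimilarity_index(mixed))
```

Tracking such an index over simulated time is how the co-evolution of social and workplace segregation described in conclusion (1) would typically be observed.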
New methods of economic modelling have been sought as a result of the global economic downturn in 2008. This unique book highlights the benefits of an agent-based modelling (ABM) approach. It demonstrates how ABM can easily handle complexity: heterogeneous people, households and firms interacting dynamically. Unlike traditional methods, ABM does not require people or firms to optimise or economic systems to reach equilibrium. ABM offers a way to link micro foundations directly to the macro situation. Key features:
• Introduces the concept of agent-based modelling and shows how it differs from existing approaches.
• Provides a theoretical and methodological rationale for using ABM in economics, along with practical advice on how to design and create the models.
• Starts each chapter with a short summary of the relevant economic theory and then shows how to apply ABM.
• Explores both topics covered in basic economics textbooks and current important policy themes: unemployment, exchange rates, banking and environmental issues.
• Describes the models in pseudocode, enabling the reader to develop programs in their chosen language.
• Is supported by a website featuring the NetLogo models described in the book.
Agent-based Modelling in Economics provides students and researchers with the skills to design, implement, and analyze agent-based models. Third-year undergraduate, master and doctoral students, faculty and professional economists will find this book an invaluable resource.
This contribution deals with the assessment of the quality of a simulation. The first section points out the problems of the Standard View and the Constructivist View in evaluating social simulations. A simulation is good when we get from it what we originally would have liked to get from the target; in this, the evaluation of the simulation is guided by the expectations, anticipations and experience of the community that uses it. This makes the user community view the most promising mechanism for assessing the quality of a policy modelling exercise. The second section looks at a concrete policy modelling example to test this idea. It shows that the very first negotiation and discussion with the user community to identify their questions is highly user-driven, interactive, and iterative. It requires communicative skills, patience, willingness to compromise on both sides, and motivation to make the formal world of modellers and the narrative world of practical policymaking meet. Often, the user community is involved in providing data for calibrating the model. Confirming the existence, quality and availability of data, and checking formats and database requirements, is not easy. As the quality of the simulation in the eyes of the user will depend very much on the quality of the informing data and the quality of the model calibration, much time and effort need to be spent coordinating this issue with the user community. Last but not least, the user community has to check the validity of simulation results and has to believe in their quality. Users have to be enabled to understand the model, to agree with its processes and ways of producing results, and to judge the similarity between empirical and simulated data. Although the user community view may be the most promising mechanism for assessing the quality of a simulation, it is also the most work-intensive. Summarising, to trust the quality of a simulation means to trust the process that produced its results.
This process includes not only the design and construction of the simulation model itself, but also the whole interaction between stakeholders, study team, model, and findings.
There is an asymmetry in the procedures used by natural scientists to account for 'correct belief' and for 'error'. Correct belief is treated as the normal state of affairs, as deriving unproblematically from experimental evidence, and as requiring no special explanation. Errors are seen as something to be explained away, as due to the intrusion of non-scientific influences. An elaborate repertoire of interpretative resources is employed in accounting for error. Asymmetrical accounting for error and for correct belief is a social device which reinforces the traditional conception of scientific rationality and which makes the community of scientists appear as the kind of community we, and they, recognize as scientific.
The Living Earth Simulator (LES) is one of the core components of the FuturICT architecture. It will work as a federation of methods, tools, techniques and facilities supporting all of the FuturICT simulation-related activities to allow and encourage interactive exploration and understanding of societal issues. Society-relevant problems will be targeted by leaning on approaches based on complex systems theories and data science in tight interaction with the other components of FuturICT. The LES will evaluate and provide answers to real-world questions by taking into account multiple scenarios. It will build on present approaches such as agent-based simulation and modelling, multiscale modelling, statistical inference, and data mining, moving beyond disciplinary borders to achieve a new perspective on complex social systems.
In this paper, we apply the agent-based SKIN model (Simulating Knowledge Dynamics in Innovation Networks) to university-industry links. The model builds on empirical research about innovation networks in knowledge-intensive industries with procedures relying on theoretical frameworks of innovation economics and economic sociology. Our experiments compare innovation networks with and without university agents. Results show that having universities in the co-operating population of actors raises the competence level of the whole population, increases the variety of knowledge among the firms, and increases innovation diffusion in terms of quantity and speed. Furthermore, firms interacting with universities are more attractive for other firms when new partnerships are considered. These results can be validated against empirical findings. The simulation confirms that university-industry links improve the conditions for innovation diffusion and enhance collaborative arrangements in innovation networks.
Combinations of intense non-pharmaceutical interventions (lockdowns) were introduced worldwide to reduce SARS-CoV-2 transmission. Many governments have begun to implement exit strategies that relax restrictions while attempting to control the risk of a surge in cases. Mathematical modelling has played a central role in guiding interventions, but the challenge of designing optimal exit strategies in the face of ongoing transmission is unprecedented. Here, we report discussions from the Isaac Newton Institute ‘Models for an exit strategy’ workshop (11–15 May 2020). A diverse community of modellers who are providing evidence to governments worldwide were asked to identify the main questions that, if answered, would allow for more accurate predictions of the effects of different exit strategies. Based on these questions, we propose a roadmap to facilitate the development of reliable models to guide exit strategies. This roadmap requires a global collaborative effort from the scientific community and policymakers, and has three parts: (i) improve estimation of key epidemiological parameters; (ii) understand sources of heterogeneity in populations; and (iii) focus on requirements for data collection, particularly in low-to-middle-income countries. This will provide important information for planning exit strategies that balance socioeconomic benefits with public health.
While there is growing interest in the design and deployment of smart and modular homes in the UK, there remain questions about the public’s readiness and willingness to live in them. Understanding what conditions prospective residents might place upon the decision to live in such homes stands to improve their design, helping them to meet the expectations and requirements of their residents. Through direct interaction with a prototype of a smart and modular home within a university context, the current study investigated how people negotiate the prospect of smart and modular living, and the conditions they would place on doing so. The study explores the short observational experiences of 20 staff and students within a UK university context, using think-aloud interviews. Findings indicate that whilst participants were able to identify the benefits of smart and modular homes, their responses became more nuanced when they negotiated the challenges of living in them. Further, a framework of considerations and recommendations is presented which could support practitioners and policy makers in making more informed, citizen-led decisions on ways to adapt and improve these home solutions.
The academic study and the applied use of agent-based modelling of social processes has matured considerably over the last thirty years. The time is now right to engage seriously with the ethics and responsible practice of agent-based social simulation. In this paper, we first outline the many reasons why it is appropriate to explore an ethics of agent-based modelling and how ethical issues arise in its practice and organisation. We go on to discuss different approaches to standardisation as a way of supporting responsible practice. Some of the main conclusions are organised as provisions in a draft code of ethics. We intend for this draft to be further developed by the community before being adopted by individuals and groups within the field informally or formally.
The relationship between social segregation and workplace segregation has been traditionally studied as a one-way causal relationship mediated by referral hiring. In this paper we introduce an alternative framework which describes the dynamic relationships between social segregation, workplace segregation, individuals’ homophily levels, and referral hiring. An agent-based simulation model was developed based on this framework. The model describes the process of continuous change in composition of workplaces and social networks of agents, and how this process affects levels of workplace segregation and the segregation of social networks of the agents (people). It is concluded that: (1) social segregation and workplace segregation may co-evolve even when hiring of workers occurs mainly through formal channels and the population is initially integrated, (2) majority groups tend to be more homophilous than minority groups, and (3) referral hiring may be beneficial for minority groups when the population is highly segregated.
This article proposes (and demonstrates the effectiveness of) a new strategy for assessing the results of epidemic models which we designate reproduction. The strategy is to build an independent model that uses (as far as possible) only the published information about the model to be assessed. In the example presented here, the independent model also follows a different modelling approach (agent-based modelling) to the model being assessed (the London School of Hygiene and Tropical Medicine compartmental model which has been influential in COVID lockdown policy). The argument runs that if the policy prescriptions of the two models match then this independently supports them (and reduces the chance that they are artefacts of assumptions, modelling approach or programming bugs). If, on the other hand, they do not match then either the model being assessed is not provided with sufficient information to be relied on or (perhaps) there is something wrong with it. In addition to justifying the approach, describing the two models and demonstrating the success of the approach, the article also discusses additional benefits of the reproduction strategy independent of whether a match between policy prescriptions is actually achieved.
Missing data frequently occurs in quantitative social research. For example, in a survey of individuals, some of those selected for interview will not agree to participate (unit non-response) and others who do agree to be interviewed will not always answer all the questions (item non-response). At its most benign, missing data reduces the achieved sample size, and consequently the precision of estimates. However, missing data can also result in biased inferences about outcomes and relationships of interest. Broadly, if the underlying, unseen, responses from those individuals in the survey frame who have one or more missing responses differ systematically from those individuals in the survey frame whose responses are all observed, then any analysis restricted to the subset of individuals whose responses are all observed runs the risk of producing biased inferences for the target population. Thus every researcher needs to take seriously the potential consequences of missing data. This paper describes the use of Multiple Imputation (MI) to correct estimates for missing data, under a general assumption about the cause, or reason for missing data. This is generally termed the missingness mechanism. MI has robust theoretical properties while being flexible, generalisable and readily available in a range of statistical software.
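The impute-many-times-then-pool structure of MI can be sketched as follows. This is a deliberately crude illustration on a hypothetical toy dataset: real MI draws imputations from a model of the data given the missingness mechanism (e.g. chained equations), and Rubin's rules also pool the within- and between-imputation variances, not just the point estimates.

```python
import random
import statistics

def multiply_impute(values, m=20, rng=random.Random(1)):
    """Create m completed datasets, filling each missing value (None)
    with a draw from the empirical distribution of the observed values.
    A crude stand-in for a proper imputation model."""
    observed = [v for v in values if v is not None]
    return [[v if v is not None else rng.choice(observed) for v in values]
            for _ in range(m)]

def pool_means(completed):
    """Rubin's rules for the point estimate: average the m estimates."""
    return statistics.mean(statistics.mean(d) for d in completed)

# hypothetical survey responses with item non-response (None)
data = [2.1, None, 3.5, 4.0, None, 2.8, 3.9, None, 3.1, 2.5]
completed = multiply_impute(data)
print(f"pooled mean estimate: {pool_means(completed):.2f}")
```

Analysing each completed dataset and pooling, rather than imputing once, is what lets MI propagate the uncertainty due to missingness into the final estimates.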
Computational sociology models social phenomena using the concepts of emergence and downward causation. But the theoretical status of these concepts is ambiguous; they suppose too much ontology and are invoked by two opposed sociological stands, namely, individualistic and holistic interpretations of social phenomena. In this paper, we propose a theoretical alternative that not only might clarify those concepts, but also keep their heuristic and interpretative value for computational sociology. We do so by advancing two proposals. Firstly, we suggest a non-ontological framework that allows modellers to identify emergent processes. This framework asserts the macro level and micro level as the emergent by-products of an instrumental prompting (the very modellers’ act of distinguishing). Secondly, in order to support analytically the modellers’ simulations, we link this non-ontological framework with the theory of self-referential social systems. This theory gives an account of the emergence of the social realm from the bottom-up as communication and describes the process by which society limits the possible selections of individuals. These two proposals are well-positioned to overcome some epistemological drawbacks, although they also generate new challenges to computational sociology.
What activities take place at home? When do they occur, for how long do they last and who is involved? Asking such questions is important in social research on households, e.g., to study energy-related practices, assisted living arrangements and various aspects of family and home life. Common ways of seeking the answers rest on self-reporting which is provoked by researchers (interviews, questionnaires, surveys) or non-provoked (time use diaries). Longitudinal observations are also common, but all of these methods are expensive and time-consuming for both the participants and the researchers. The advances of digital sensors may provide an alternative. For example, temperature, humidity and light sensors report on the physical environment where activities occur, while energy monitors report information on the electrical devices that are used to assist the activities. Using sensor-generated data for the purposes of activity recognition is potentially a very powerful means to study activities at home. However, how can we quantify the agreement between what we detect in sensor-generated data and what we know from self-reported data, especially non-provoked data? To give a partial answer, we conduct a trial in a household in which we collect data from a suite of sensors, as well as from a time use diary completed by one of the two occupants. For activity recognition using sensor-generated data, we investigate the application of mean shift clustering and change-point detection for constructing features that are used to train a Hidden Markov Model. Furthermore, we propose a method based on the Levenshtein distance for evaluating the agreement between the activities detected in the sensor data and those reported by the participants. Finally, we analyse the use of different features for recognising different types of activities.
Agent-based simulation can model simple micro-level mechanisms capable of generating macro-level patterns, such as frequency distributions and network structures found in bibliometric data. Agent-based simulations of organisational learning have provided analogies for collective problem solving by boundedly rational agents employing heuristics. This paper brings these two areas together in one model of knowledge seeking through scientific publication. It describes a computer simulation in which academic papers are generated with authors, references, contents, and an extrinsic value, and must pass through peer review to become published. We demonstrate that the model can fit bibliometric data for a token journal, Research Policy. Different practices for generating authors and references produce different distributions of papers per author and citations per paper, including the scale-free distributions typical of cumulative advantage processes. We also demonstrate the model’s ability to simulate collective learning or problem solving, for which we use Kauffman’s NK fitness landscape. The model provides evidence that those practices leading to cumulative advantage in citations, that is, papers with many citations becoming even more cited, do not improve scientists’ ability to find good solutions to scientific problems, compared to those practices that ignore past citations. By contrast, what does make a difference is referring only to publications that have successfully passed peer review. Citation practice is one of many issues that a simulation model of science can address when the data-rich literature on scientometrics is connected to the analogy-rich literature on organisations and heuristic search.
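The NK fitness landscape on which the simulated scientists search can be sketched as follows. This is a generic illustration of Kauffman's construction, not the paper's exact implementation; the parameter values and the hill-climbing searcher standing in for a problem-solving scientist are our own choices.

```python
import itertools
import random

def nk_landscape(N=8, K=2, rng=random.Random(0)):
    """Kauffman NK landscape: each locus i contributes a random fitness
    component that depends on its own state and the states of its K
    neighbouring loci (wrapping around the genome)."""
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def fitness(genome):
        total = 0.0
        for i in range(N):
            neighbourhood = tuple(genome[(i + j) % N] for j in range(K + 1))
            total += tables[i][neighbourhood]
        return total / N  # mean of N components, each in [0, 1]

    return fitness

# a hill-climbing searcher: accept any one-bit mutation that does not
# reduce fitness -- a stand-in for incremental problem solving
rng = random.Random(0)
f = nk_landscape()
genome = [rng.randint(0, 1) for _ in range(8)]
for _ in range(200):
    candidate = genome[:]
    candidate[rng.randrange(8)] ^= 1
    if f(candidate) >= f(genome):
        genome = candidate
print(f"final fitness: {f(genome):.3f}")
```

Raising K increases the interdependence between loci, making the landscape more rugged and local search more likely to stall on a local optimum.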
The value of complexity science and related approaches in policy evaluation has been widely discussed over the last 20 years, not least in this journal. We are now at a crossroads; this Special Issue argues that the use of complexity science in evaluation could deepen and broaden, rendering evaluations more practical and rigorous. The risk is that the drive to better evaluate policies from a complexity perspective could falter. This special issue is the culmination of 4 years’ work at this crossroads in the UK Centre for the Evaluation of Complexity Across the Nexus. It includes two papers which consider the cultural and organisational operating context for the use of complexity in evaluation and four methodological papers on developments and applications. Together with strong input from practitioners, these papers aim to make complexity actionable and expand the use of complexity ideas in evaluation and policy practice.
This article traces the rise and eventual decline of a field of research devoted to the study of meteors by radar. It shows how a new experimental tool, radar, provided the impetus for the emergence of a new scientific specialty, and how this specialty later declined after its initial problems had been solved and after most of its participants had moved on to more promising fields. Radar meteor research provides an example of how new fields grow and how scientific developments affect the research careers of scientists.
This thesis offers a sociological analysis of food waste as a social issue of importance. Alongside government intervention, numerous community groups and social enterprises have emerged across the UK which attempt to mitigate the costs of food waste in different ways. Drawing on ethnographic examples, this thesis draws attention to one grassroots social response to the food waste issue, freegan dumpster diving. Freeganism is a counter-cultural movement which rejects capitalism and promotes more socially and environmentally equitable relations. Freegans reject the normative categorization of discarded food as valueless, unhygienic and inedible, and instead reclaim food disposed of by retailers for human consumption. Literature to date constructs freegan dumpster diving as a niche practice performed by individuals for political resistance or food poverty. Little attention has addressed the transformation of food waste into a valuable resource or what happens to food waste once it has been reclaimed. Drawing on participant observations and interviews conducted with six freegan community groups in the UK over 18 months, this thesis draws attention to the processes freegans engage in when dumpster diving to explore how food waste is re-valued and re-used. This emerges as a complex process. Dumpster diving is not an independent moment of recovery; attention to the different food waste pathways, as practitioners access, assess, reclaim, consume and distribute food waste varyingly, is required. Freegans regularly enact dumpster diving but for multiple reasons and in shifting configurations. A shared practice is visible across all freegan communities, albeit with some variations. These deviations allow freegans to navigate the social barriers to performance in different ways, enabling the practice to become entrenched in everyday life. When barriers prove insurmountable, practitioners move in and out of affiliation with the practice over their life-course.
A similar but distinct practice has emerged in recent years with the growth of food redistribution organizations (FROs). FROs promote the re-valuing and re-using of food waste as a joint business and charity venture, supporting retailers in managing food waste by redistributing it to vulnerable people in food poverty. Utilising insights gathered through participant observations and interviews with two different FROs, this thesis shows that these practices promote a more socially acceptable and scalable approach to reclaiming food waste than dumpster diving, through their partnerships with food retailers. This, however, is at the expense of the wider socio-political objectives at the core of freeganism. The radical philosophy of freeganism thus both defines its existence and constrains the ability for wider participation and social impact. This analysis provides useful insights into the freegan subculture and the food waste debate more widely, by exploring 1) the journeys of food waste, 2) the processes of reclaiming food waste, and 3) practitioner relationships to food waste over time and space. Freegan dumpster diving is revealed as an everyday practice that is constrained by, and constrains, everyday life. At any one time, multiple food waste practices circulate, connect and transform. If points of intervention or transition to more sustainable food waste configurations are sought, further attention to this linked nexus of practices is required.
Professional medical practice, like other organizational conduct, relies upon records which document transactions between members and their clientele. Medical practitioners employ a set of conventions providing for the systematic recording and interpretation of medical record cards that forms a social organization underlying the records cards' ordinary usage. In this paper we examine these conventions and develop a computer program which captures elements of their structure and use. By doing so we illustrate one way in which sociological analysis can contribute to the design of ‘intelligent systems.’ We also suggest that the emerging discipline of Artificial Intelligence might find recent developments in sociology pertinent to its concerns.
None of the standard network models fit well with sociological theory. This paper presents a simple agent-based model of social networks that have fat-tailed distributions of connectivity, that are assortative by degree of connectivity, that are highly clustered and that can be used to create a large variety of social worlds.
In the past few years a branch of sociology, conversation analysis, has begun to have a significant impact on the design of human–computer interaction (HCI). The investigation of human–human dialogue has emerged as a fruitful foundation for interactive system design. This book includes eleven original chapters by leading researchers who are applying conversation analysis to HCI. The fundamentals of conversation analysis are outlined, a number of systems are described, and a critical view of their value for HCI is offered. Computers and Conversation will be of interest to all concerned with HCI issues, from the advanced student to the professional computer scientist involved in the design and specification of interactive systems.
This position paper proposes a vision for the research activity about sustainability in global environmental change (GEC) taking place in the FuturICT flagship project. This activity will be organised in an "Exploratory", gathering a core network of European scientists from ICT, social simulation, complex systems, economics, demographics and Earth system science. These research teams will collaborate in building a self-organising network of data sources and models about GEC and in using new facilities fostering stakeholder participation. We develop examples of concrete directions for this research: a worldwide virtual population with demographic and some economic descriptors, ecosystem services production and distribution, and governance systems at various scales.
When designing an agent-based simulation, an important question to answer is how to model the decision making processes of the agents in the system. A large number of agent decision making models can be found in the literature, each inspired by different aims and research questions. In this paper we provide a review of 14 agent decision making architectures that have attracted interest. They range from production-rule systems to psychologically- and neurologically-inspired approaches. For each of the architectures we give an overview of its design, highlight research questions that have been answered with its help and outline the reasons for the choice of the decision making model provided by the originators. Our goal is to provide guidelines about what kind of agent decision making model, with which level of simplicity or complexity, to use for which kind of research question.
Collective representations of the quality of artifacts are produced by human societies in a variety of contexts. These representations of quality emerge from a broad range of social interactions, from the uncoordinated behaviour of large collectives of individuals, to the interaction between individuals and organizations, to complex socio-technical processes such as those enabled by online peer production systems. This special issue brings together contributions from sociology, social psychology and social simulation to shed light on the nature of these representations and the social processes that produce them.
According to the organizational learning literature, the greatest competitive advantage a firm has is its ability to learn. In this paper, a framework for modeling learning competence in firms is presented to improve the understanding of managing innovation. Firms with different knowledge stocks attempt to improve their economic performance by engaging in radical or incremental innovation activities and through partnerships and networking with other firms. In trying to vary and/or to stabilize their knowledge stocks by organizational learning, they attempt to adapt to environmental requirements while the market strongly selects on the results. The simulation experiments show the impact of different learning activities, underlining the importance of innovation and learning.
Collaborative online communities have met with massive success following the emergence of Web 2.0 services and platforms. Wikis, and notably Wikipedia, are among the most salient examples of this type of community for the collective construction of content. Wikipedia has so far concentrated most of the research effort on these communities, even though the full set of wikis constitutes an ecosystem with very great diversity of content, populations, uses and governance systems. Unlike Wikipedia, which has probably reached the critical mass that makes it viable, most wikis struggle to survive and compete to attract contributors and quality articles, and so meet varied fates, virtuous (growth in population and content) or fatal (inactivity and vandalism).
A comparison of the current structures and dynamics of UK and German biotechnology-based industries reveals a striking convergence of industrial organisations and innovation directions in both countries. This counteracts propositions from theoretical frameworks such as the varieties-of-capitalism hypothesis and the national innovation systems approach which suggest substantial differences between the industrial structures of the countries due to differing institutional frameworks. In this paper, we question these approaches and show that the observed structural alignment can be explained by the network organisation of research and production in knowledge-based industries.
This paper contributes to the debate about governance behaviour in on-line communities, particularly those associated with Open Source. It addresses evidence of normative self-regulation by analysing the discussion pages of a sample of Wikipedia Controversial and Featured articles. It was assumed that attempts by editors to influence one another within these pages will be revealed by their use of rules and norms as well as the illocutionary force of speech acts. The results reveal some unexpected patterns. Despite the Wikipedia community generating a large number of rules, etiquettes and guidelines, explicit invocation of rules and/or use of wider social norms appeared to play a small role in regulating editor behaviour. The emergent pattern of communicative exchange was not well aligned either with these rules or with the characteristics of a coherent community. Nor was it consistent with the behaviour needed to reach agreement on controversial topics. The paper concludes by offering some tentative hypotheses as to why this is so.
This paper investigates the fate of manuscripts that were rejected from JASSS, the Journal of Artificial Societies and Social Simulation, the flagship journal of social simulation. We tracked 456 manuscripts that were rejected from 1997 to 2011 and traced their subsequent publication as journal articles, conference papers or working papers. We compared the impact factor of the publishing journal and the citations of those manuscripts that were eventually published against the yearly impact factor of JASSS and the number of citations achieved by the JASSS mean and top cited articles. Only 10% of the rejected manuscripts were eventually published in a journal that was indexed in the Web of Science (WoS), although most of the rejected manuscripts were published elsewhere. Being exposed to more than one round of reviews before rejection, having received a more detailed reviewer report and being subjected to higher inter-reviewer disagreement were all associated with the number of citations received when the manuscript was eventually published. This indicates that peer review could contribute to increasing the quality even of rejected manuscripts.
User-Centred Design (UCD) researchers have been investigating smart homes for 20 years and have highlighted the approach's effectiveness in identifying the requirements of users. Despite the growing interest in smart homes, research has shown that their adoption remains low. This is due to the tendency for research to use a technology-centred approach to improve a pre-existing product or tailor it to target users. Visions of smart homes may therefore not have been fully based on a clear understanding of users’ needs and sociotechnical issues of concern. Enabling the public to have a role in shaping the future of smart home technologies and related sociotechnical issues of concern in the early stages of the UCD process has been widely recommended. Specifically, there have been calls to engage the public in sharing responsibility for developing data privacy agreements, data governance frameworks, and effectively domesticating technologies into life and ‘home’ systems. This paper introduces the citizens’ jury method to enable the public to have a role in shaping the future of smart homes and related sociotechnical issues. This is an understudied area of research that would be considerably valuable for practitioners in the usability and smart technology sectors. Findings from this paper are based on a cross-section of UK citizens, exploring their opinions on the sociotechnical issues of data security, accessibility to, and control over use of, devices and technological appliances associated with smart homes. A set of recommendations is developed to provide guidance and suggested actions on approaching these issues in the future.
An agent-based computational model, based on longitudinal ethnographic data about the dynamics of intra-group behaviour and work group performance, has been developed from observing an organizational group in the service sector. The model, in which the agents represent workers and tasks, is used to assess the effect of emotional expressions on the dynamics of interpersonal behaviour in work groups, particularly for groups that have recent newcomers. The model simulates the gradual socialization of newcomers into the work group. Through experimenting with the model, the factors that influence the socialization process were studied in order to better understand the effect of emotional expressions. It is shown that although positive emotional display accelerates the socialization process, it can have negative effects on work group performance.
None of the standard network models fit well with sociological observations of real social networks. This paper presents a simple structure for use in agent-based models of large social networks. Taking the idea of social circles, it incorporates key aspects of large social networks such as low density, high clustering and assortativity of degree of connectivity. The model is very flexible and can be used to create a wide variety of artificial social worlds.
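A simplified reading of the social-circles idea can be sketched as follows. This is our own illustration, not the paper's exact algorithm: agents are placed in a social space and linked whenever their circles overlap (i.e. they are within a fixed social distance), which tends to produce the low density, high clustering and degree assortativity that the model targets.

```python
import random

def social_circles_network(n=200, radius=0.1, rng=random.Random(3)):
    """Place n agents at random points in the unit square and link any
    pair closer than `radius` -- agents whose social circles overlap."""
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            if dx * dx + dy * dy < radius * radius:
                edges.add((i, j))
    return edges

edges = social_circles_network()
degree = {}
for i, j in edges:
    degree[i] = degree.get(i, 0) + 1
    degree[j] = degree.get(j, 0) + 1
print(f"{len(edges)} edges, mean degree {2 * len(edges) / 200:.2f}")
```

Because two neighbours of an agent are themselves likely to be close together, triangles (and hence clustering) arise naturally, unlike in a random graph of the same density.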
Peer review is not only a quality screening mechanism for scholarly journals. It also connects authors and referees either directly or indirectly. This means that their positions in the network structure of the community could influence the process, while peer review could in turn influence subsequent networking and collaboration. This paper aims to map these complex network implications by looking at 2232 author/referee couples in an interdisciplinary journal that uses double blind peer review. By reconstructing temporal co-authorship networks, we found that referees tended to recommend more positively submissions by authors who were within three steps in their collaboration network. We also found that co-authorship network positions changed after peer review, with the distances between network neighbours decreasing more rapidly than could have been expected had the changes been random. This suggests that peer review could not only reflect but also create and accelerate scientific collaboration.
Managing non-communicable diseases requires policy makers to adopt a whole systems perspective that adequately represents the complex causal architecture of human behaviour. Agent-based modelling is a computational method to understand the behaviour of complex systems by simulating the actions of entities within the system, including the way these individuals influence and are influenced by their physical and social environment. The potential benefits of this method have led to several calls for greater use in public health research. We discuss three challenges facing potential modellers: model specification, obtaining required data, and developing good practices. We also present steps to assist researchers to meet these challenges and implement their agent-based model.
Understanding home activities is important in social research to study aspects of home life, e.g., energy-related practices and assisted living arrangements. Common approaches to identifying which activities are being carried out in the home rely on self-reporting, either retrospectively (e.g., interviews, questionnaires, and surveys) or at the time of the activity (e.g., time use diaries). The use of digital sensors may provide an alternative means of observing activities in the home. For example, temperature, humidity and light sensors can report on the physical environment where activities occur, while energy monitors can report information on the electrical devices that are used to assist the activities. One may then be able to infer from the sensor data which activities are taking place. However, it is first necessary to calibrate the sensor data by matching it to activities identified from self-reports. The calibration involves identifying the features in the sensor data that correlate best with the self-reported activities. This in turn requires a good measure of the agreement between the activities detected from sensor-generated data and those recorded in self-reported data. To illustrate how this can be done, we conducted a trial in three single-occupancy households from which we collected data from a suite of sensors and from time use diaries completed by the occupants. For sensor-based activity recognition, we demonstrate the application of Hidden Markov Models with features extracted from mean-shift clustering and change points analysis. A correlation-based feature selection is also applied to reduce the computational cost. A method based on Levenshtein distance for measuring the agreement between the activities detected in the sensor data and those reported by the participants is demonstrated. We then discuss how the features derived from sensor data can be used in activity recognition and how they relate to activities recorded in time use diaries.
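The Levenshtein-distance agreement measure can be illustrated as follows. This is a generic sketch: the function names and the normalisation by the longer sequence are our assumptions, not necessarily the choices made in the paper.

```python
def levenshtein(a, b):
    """Edit distance between two activity sequences (any sequences of
    comparable labels), computed with a rolling one-row DP table."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def agreement(sensor_seq, diary_seq):
    """Normalised agreement in [0, 1]; 1 means identical sequences."""
    n = max(len(sensor_seq), len(diary_seq))
    return 1.0 if n == 0 else 1.0 - levenshtein(sensor_seq, diary_seq) / n
```

For example, comparing a sensor-detected sequence ["cook", "eat", "tv"] against a diary entry ["cook", "tv"] gives an edit distance of 1 (one missed activity), so the agreement is 2/3 under this normalisation.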
EXECUTIVE SUMMARY: One of the aims of the National Centre for Research Methods (NCRM) is to identify and foster methodological innovation in the UK. The aim of this project was to identify methodological innovations outside the UK and draw NCRM’s attention to them. The project sought out research practices that have not yet filtered through to typical research methods courses or that impact on the research process in novel ways. These usually entailed (i) technological innovation, (ii) the use of existing theoretical approaches and methods in new ways and (iii) interdisciplinarity. The project’s focus on innovative research practices ranged from data collection to analysis and covered disciplines such as (social) psychology, sociology, social work, socio-legal studies, political science (including public health and public policy) and international studies, (social) geography (area studies, demography, environmental and urban planning), (social) anthropology, (socio-)linguistics, education, communication studies, economic and social history, economics (management and business studies), science and technology studies, statistics, methods and computing. The work was conducted between October 2008 and March 2009 and written up in April and May 2009. The project gathered evidence by reviewing previous reports, carrying out desktop research, conducting an e-mail survey with academics, practitioners, research methods experts and others (N=215), registering data entries in the form of nominations of experts, institutions and links to explore (N=670), and holding interviews with gatekeepers (N=36) and telephone interviews with nominated experts (N=40). The project concluded, firstly, that innovative methodologies usually entail the use of one or more technological innovations (visual, digital or online). This could be the advent of new software or the development of online methods and the use of the Internet to conduct research.
Secondly, innovative methodologies often entail crossing disciplinary boundaries. This is observed in combinations of disciplines and methods such as in ethnography, anthropology and psychology. Thirdly, innovative methodologies often entail the use of existing theoretical approaches and methods in reformed or mixed and applied ways. This is observed in participatory methods, action research, professional work, social and consultancy work. Finally, innovative methodologies reside both inside traditional academic institutions (universities) and outside (research centres, institutes, consultancy agencies and organisations); yet even in the latter, methods developers and experts usually have academic backgrounds and previous or current affiliations, status or posts. Overall, psychology figured prominently in methodological innovations and developments, followed by survey methodology, ethnography, sociology and management. These developments were classified into mixed (N=8), qualitative (N=7) and quantitative (N=7) types of research. The institutional structures identified as ‘hosting’ these developments are primarily academic, followed by both academic and professional, then research centres, and finally professional and consultancy institutions. The majority of the innovations are a consequence of working across disciplinary boundaries, followed by developments within methods and disciplines, and then by developments in technology. Innovations were mainly spotted in North America – the USA and Canada – as well as Italy, Germany and the Netherlands. The report includes summary descriptions of the methodological innovations located by the project. As a follow-up to this project, a workshop will be organised to bring together some of the developers and experts behind these innovations. The workshop is planned to be adjacent to the NCRM Research Methods Festival to be held in July 2010.
In a drive to achieve net zero emissions, U.K. transport decarbonisation policies are predominantly focussed on measures to promote the uptake and use of electric vehicles (EVs). This is reflected in the COP26 Transport Declaration signed by 38 national governments, alongside city region governments, vehicle manufacturers and investors. However, emerging evidence suggests that EVs present multiple challenges for air quality, mobility and health, including risks from non-exhaust emissions (NEEs) and increasing reliance on vehicles for short trips. Understanding the interconnected links between electric mobility, human health and the environment, including synergies and trade-offs, requires a whole systems approach to transport policymaking. In the present paper, we describe the use of Participatory Systems Mapping (PSM) in which a diverse group of stakeholders collaboratively constructed a causal model of the U.K. surface transport system through a series of interactive online workshops. We present the map and its analysis, with our findings illustrating how unintended consequences of EV-focussed transport policies may have an impact on air quality, human health and important social functions of the transport system. We conclude by considering how online participatory causal modelling techniques could be effectively integrated with empirical metrics to facilitate effective policy design and appraisal in the transport sector.
According to the organizational learning literature, the greatest competitive advantage a firm has is its ability to learn. In this paper, a framework for modeling learning competence in firms is presented to improve the understanding of managing innovation. Firms with different knowledge stocks attempt to improve their economic performance by engaging in radical or incremental innovation activities and through partnerships and networking with other firms. In trying to vary and/or to stabilize their knowledge stocks by organizational learning, they attempt to adapt to environmental requirements while the market strongly selects on the results. The simulation experiments show the impact of different learning activities, underlining the importance of innovation and learning. This chapter is a reprint of an article published as Gilbert, GN, Ahrweiler, P. & Pyka, A. (2007). Learning in innovation networks: Some simulation experiments. Physica A, 378 (1): 100–109 DOI:10.1016/j.physa.2006.11.050. Available online at: http://www.sciencedirect.com/science/article/pii/S0378437106012714
This extended abstract presents an integrated agent-based and hydrological model to explore the impacts of dams in transboundary river basins where riparian nations have competing water uses. The purpose of the model is to explore the effects of interactions between stakeholders from multiple levels and sectors on the management of dams and its subsequent effects on the water-energy-food-environment (WEFE) nexus in river basins.
This paper seeks to learn lessons about the role of the private sector in subnational governance by analysing the UK’s Local Enterprise Partnerships (LEPs). The paper outlines the public justifications for LEPs using documentary analysis, and then considers these against findings from interviews and network analysis, concluding that the justifications are problematic. LEPs were established on the assumption that civic and business leaders needed to be brought together in business-led institutions. However, network analysis shows most civic leaders also hold private sector roles, undermining the assumed need for a ‘bringing together’. Three further justifications of the LEP model are also challenged. Firstly, business leaders were supposed to enable knowledge flows, but analysis shows that this knowledge is skewed by unrepresentative LEP boards. Secondly, it was assumed that LEPs would catalyse networks, but the networks have been built around individual interests, without transparency. Finally, LEPs were meant to mirror business structures, but this has undermined democratic accountability. Taken together, these findings suggest that the creation of LEPs has attempted to solve the wrong problem in the wrong way. The paper concludes by proposing guiding principles for the role of the private sector in the Levelling Up agenda: representation, transparency and accountability.
The book focusses on questions of individual and collective action, the emergence and dynamics of social norms and the feedback between individual behaviour and social phenomena. It discusses traditional modelling approaches to social norms and shows the usefulness of agent-based modelling for the study of these micro-macro interactions. Existing agent-based models of social norms are discussed and it is shown that so far too much priority has been given to parsimonious models and questions of the emergence of norms, with many aspects of social norms, such as norm-change, not being modelled. Juvenile delinquency, group radicalisation and moral decision making are used as case studies for agent-based models of collective action extending existing models by providing an embedding into social networks, social influence via argumentation and a causal action theory of moral decision making. The major contribution of the book is to highlight the multifaceted nature of the dynamics of social norms, consisting not only of emergence, and the importance of embedding of agent-based models into existing theory.
This asset is available at Zenodo: https://doi.org/10.5281/zenodo.6323633. This NetLogo model is a reusable component (also referred to as a Reusable Building Block, or RBB) called WATERING_IRRIGATION_RBB. To use it: (1) download the WATERING_IRRIGATION_RBB.nlogo file; (2) open the downloaded file; (3) click on the Info tab for the model description, context specification, executable demonstration, and suggestions for extending, adapting or using the model. WATERING_IRRIGATION_RBB is a sub-model of the WATER user associations at the Interface of Nexus Governance (WATERING) model (for further information about WATERING, please see https://www.youtube.com/watch?v=U-nqs9ak2nY). Please email Dr Kavin Narasimhan (k.narasimhan@surrey.ac.uk) with comments or questions. If you adapt or use the WATERING_IRRIGATION_RBB model, we would appreciate it if you cite our repository, as well as the Watershed model (http://ccl.northwestern.edu/netlogo/models/community/watershed), licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License, on which we have based this model. Please see the Info tab of the model for further documentation.
We use an agent-based model to help to refine and clarify social practice theory, wherein the focus is neither on individuals nor on any form of societal totality, but on the repeated performances of practices ordered across space and time. The recursive relationship between social practices and practitioners (individuals performing practices) is strongly emphasised in social practice theory. We intend to have this recursive relationship unfold dynamically in a model where practitioners and social practices are both considered as agents. Model conceptualisation is based on the principle of structuration theory—the focus is neither on micro causing macro nor on macro influencing micro, but on the duality between structure (macro) and agency (micro). In our case, we conceptualise the duality between practitioners and practices based on theoretical insights from social practices literature; where information is unclear or insufficient, we make systematic assumptions and account for these.
Modern knowledge-intensive economies are complex social systems where intertwining factors are responsible for the shaping of emerging industries: the self-organising interaction patterns and strategies of the individual actors (an agency-oriented pattern) and the institutional frameworks of different innovation systems (a structure-oriented pattern). In this paper, we examine the relative primacy of the two patterns in the development of innovation networks, and find that both are important. In order to investigate the relative significance of strategic decision making by innovation network actors and the roles played by national institutional settings, we use an agent-based model of knowledge-intensive innovation networks, SKIN. We experiment with the simulation of different actor strategies and different access conditions to capital in order to study the resulting effects on innovation performance and size of the industry. Our analysis suggests that actors are able to compensate for structural limitations through strategic collaborations. The implications for public policy are outlined.
Computer simulation models have been proposed as a tool for understanding innovation, including models of organisational learning, technological evolution, knowledge dynamics and the emergence of innovation networks. By representing micro-level interactions, they provide insight into the mechanisms by which various stylised facts about innovation phenomena are generated. This paper summarises work carried out as part of the SIMIAN project and to be covered in more detail in a forthcoming book. A critical review of existing innovation-related models is performed. Models compared include a model of collective learning in networks [1], a model of technological evolution based around percolation on a grid [2, 3], a model of technological evolution that uses Boolean logic gate designs [4], the SKIN model [5], a model of emergent innovation networks [6], and the hypercycles model of economic production [7]. The models are compared for the ways they represent knowledge and/or technologies, how novelty enters the system, the degree to which they represent open-ended systems, their use of networks, landscapes and other pre-defined structures, and the patterns that emerge from their operations, including networks and scale-free frequency distributions. Suggestions are then made as to what features future innovation models might contain. © Springer-Verlag Berlin Heidelberg 2014.
This asset is available at Zenodo: https://doi.org/10.5281/zenodo.6323653. This NetLogo model is a reusable component (also referred to as a Reusable Building Block, or RBB) called WATERING_CROPGROWTH_RBB. To use it: (1) download the WATERING_CROPGROWTH_RBB.nlogo file; (2) open the downloaded file; (3) click on the Info tab for the model description, context specification, executable demonstration, and suggestions for extending, adapting or using the model. WATERING_CROPGROWTH_RBB is a sub-model of the WATER user associations at the Interface of Nexus Governance (WATERING) model (for further information about WATERING, please see https://www.youtube.com/watch?v=U-nqs9ak2nY). Please email Dr Kavin Narasimhan (k.narasimhan@surrey.ac.uk) with comments or questions. If you adapt or use the WATERING_CROPGROWTH_RBB model, we would appreciate it if you cite our repository, as well as the Watershed model (http://ccl.northwestern.edu/netlogo/models/community/watershed), licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License, on which we have based the irrigation component of WATERING_CROPGROWTH_RBB. Note: WATERING was developed as an exploratory tool to understand and explain how participatory irrigation management through Water User Associations (WUAs) works. The model allows exploration of the impact of community-based water management (through WUAs) on water availability, water use and economic productivity within an irrigation scheme. While WATERING_CROPGROWTH_RBB is not WATERING, it is a sub-model of WATERING that simulates water flow and crop growth within an irrigation scheme: you can change the values of the input controls via the Interface and see how that affects water use and crop growth within the scheme (through visualisation in the NetLogo world and output plots). Our complete WATERING model includes other components to simulate various aspects of community-based water management through WUAs. Please get in touch with the author if you are interested in the complete WATERING model.
A class of social phenomena exhibiting fluid boundaries and constant change, called 'collectivities,' is modeled using an agent-based simulation. The simulation demonstrates how such models can show that a set of plausible microbehaviors can yield the observed macrophenomenon. Some features of the model are explored and its application to a wide range of social phenomena is described.
Agent-based modeling and social simulation have emerged as both developments of and challenges to the social sciences.
Compartmental models of COVID-19 transmission have been used to inform policy, including the decision to temporarily reduce social contacts among the general population (“lockdown”). One such model is a Susceptible-Exposed-Infectious-Removed (SEIR) model developed by a team at the London School of Hygiene and Tropical Medicine (hereafter, “the LSHTM model”, Davies et al., 2020a). This was used to evaluate the impact of several proposed interventions on the numbers of cases, deaths, and intensive care unit (ICU) hospital beds required in the UK. We wish here to draw attention to behaviour common to this and other compartmental models of diffusion, namely their sensitivity to the size of the population simulated and the number of seed infections within that population. This sensitivity may compromise any policy advice given. We therefore describe below the essential details of the LSHTM model, our experiments on its sensitivity, and why they matter to its use in policy making.
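The sensitivity at issue can be illustrated with a toy deterministic SEIR model. This is not the LSHTM model: the parameter values, time step and function name below are illustrative assumptions chosen only to show how outputs read at a fixed time horizon shift with population size and seed infections.

```python
def seir(N, seed_infections, beta=0.5, sigma=0.2, gamma=0.1,
         days=300, dt=0.1):
    """Minimal deterministic SEIR, Euler-integrated.
    Returns the attack rate (fraction ever infected) after `days`."""
    S = float(N - seed_infections)
    E, I, R = 0.0, float(seed_infections), 0.0
    for _ in range(int(days / dt)):
        new_e = beta * S * I / N * dt   # S -> E (transmission)
        new_i = sigma * E * dt          # E -> I (incubation)
        new_r = gamma * I * dt          # I -> R (recovery/removal)
        S -= new_e
        E += new_e - new_i
        I += new_i - new_r
        R += new_r
    return R / N
```

With these illustrative parameters, a run seeded with more infections, or run in a smaller population with the same seed count, takes off earlier, so the attack rate read off at a fixed day differs substantially even though the eventual final sizes converge; this is the kind of population-size and seed sensitivity the experiments above probe.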