Dr Wolfgang Garn


Senior Lecturer in Analytics
PhD, MSc, BSc (Hons)
+44 (0)1483 682005
Student feedback & consultation hours: Tuesday 1pm-1:30pm (online), Thursday 1pm-2pm.

About

Affiliations and memberships

AIS
Association for Information Systems
ARGESIM
Working Group Simulation News
EURO
The Association of European Operational Research Societies
EUROSIM
Federation of European Simulation Societies
FITCE
Federation of Telecommunications Engineers of the European Community
INFORMS
Institute for Operations Research and the Management Sciences
ÖGOR
Austrian Society of Operations Research

Research

Research interests

Research projects

Indicators of esteem

  • Reviewer for the ...

    • European Journal of Operational Research
    • Neurocomputing Journal
    • International Journal of Production Economics
    • International Conference on Information Systems
    • and many more

Supervision

Postgraduate research supervision

Teaching

Publications

Highlights

    Eleanor Ruth Mill, Wolfgang Garn, Christopher Turner (2024) Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design, In: Applied Artificial Intelligence, Routledge

    This paper demonstrates a design and evaluation approach for delivering real-world efficacy of an explainable artificial intelligence (XAI) model. The first of its kind, it leverages three distinct but complementary frameworks to support a user-centric and context-sensitive, post-hoc explanation for fraud detection. Using the principles of scenario-based design, it amalgamates two independent real-world sources to establish a realistic card fraud prediction scenario. The SAGE (Settings, Audience, Goals and Ethics) framework is then used to identify key context-sensitive criteria for model selection and refinement. The application of SAGE reveals gaps in the current XAI model design and provides opportunities for further model development. The paper then employs a functionally-grounded evaluation method to assess its effectiveness. The resulting explanation represents real-world requirements more accurately than established models.

    Wolfgang Garn (2024) Data Science Languages, In: Data Analytics for Business, pp. 93-122, Routledge

    Many business challenges require advanced, adapted or new data-mining techniques. For this endeavour, Data Science languages are needed. Often Business or Data Analytics tasks have to be automated, and programming using Data Science languages such as R or Python comes in handy.
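For illustration only (this snippet is not taken from the chapter): a few lines of Python of the kind such automation typically involves, using the pandas library; the file name, column names and threshold are hypothetical.

```python
# Hypothetical example: summarise revenue by product and flag weak performers.
# File name, column names and the threshold are invented for illustration.
import pandas as pd

sales = pd.read_csv("sales.csv")                       # hypothetical input file
summary = (sales.groupby("product")["revenue"]
                .agg(["count", "mean", "sum"])
                .sort_values("sum", ascending=False))
low_performers = summary[summary["mean"] < 1000]       # arbitrary threshold
print(summary.head())
print(low_performers)
```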

    Wolfgang Garn (2024) Business Intelligence, In: Data Analytics for Business, pp. 43-89, Routledge

    Business Intelligence (BI) uses Data Analytics to provide insights to make business decisions. Tools like Power Business Intelligence (PBI) and Tableau allow us to gain an understanding of a business scenario rapidly. This is achieved by first allowing easy access to all kinds of data sources (e.g. databases, text files, Web). Next, the data can be visualised via graphs and tables, which are connected and interactive. Then, analytical components such as forecasts, key influencers and descriptive statistics are built in, and access to the most essential Data Science languages (R and Python) is integrated.

    Wolfgang Garn (2024) Introduction, In: Data Analytics for Business, pp. 3-7, Routledge

    What would you like to get out of Data Analytics? Data processing, data mining, tools, data structuring, data insights and data storage are typical first responses. So, we definitely want to analyse, manipulate, visualise and learn about tools to help us in this endeavour. We do not want to analyse the data for the sake of analysing it; the insights need to be actionable for businesses, organisations or governments. How do we achieve this? The process of discovering knowledge in databases and CRISP-DM helps us with this. Of course, we need to know about databases. There are tools such as Power BI which allow us to transform, analyse and visualise data. So we are "analysing" the data - analysing ranges from formulating a data challenge in words to writing a simple structured query and up to applying mathematical methods to extract knowledge. Of course, the "fun" part is reflected in state-of-the-art methods implemented in data mining tools. But in Data Analytics, your mind is set to ensure your findings are actionable and relevant to the business. For instance, can we: find trading opportunities, figure out the most important products, identify relevant quality aspects and many more so that the management team can devise actions that benefit the business? This motivates the following definition: Data Analytics is the discipline of extracting actionable insights by structuring, processing, analysing and visualising data using methods and software tools. Where does Data Analytics "sit" in the area of Business Analytics? Often, Data Analytics is mentioned in conjunction with Business Analytics. Data Analytics can be seen as part of Business Analytics. Business Analytics also includes Operational Analytics. It has become fashionable to divide analytics into Descriptive, Predictive, and Prescriptive Analytics. Sometimes these terms are further refined by adding Diagnostic and Cognitive Analytics. What is what?

    Wolfgang Garn (2024) Data Analytics for Business, Routledge

    We are drowning in data but are starved for knowledge. Data Analytics is the discipline of extracting actionable insights by structuring, processing, analysing and visualising data using methods and software tools. Hence, we gain knowledge by understanding the data. A roadmap to achieve this is encapsulated in the knowledge discovery in databases (KDD) process. Databases help us store data in a structured way. The structured query language (SQL) allows us to gain first insights about business opportunities. Visualising the data using business intelligence tools and data science languages deepens our understanding of the key performance indicators and business characteristics. This can be used to create relevant classification and prediction models; for instance, to provide customers with the appropriate products or predict the eruption time of geysers. Machine learning algorithms help us in this endeavour. Moreover, we can create new classes using unsupervised learning methods, which can be used to define new market segments or group customers with similar characteristics. Finally, artificial intelligence allows us to reason under uncertainty and find optimal solutions for business challenges. All these topics are covered in this book with a hands-on process, which means we use numerous examples to introduce the concepts and several software tools to assist us. Several interactive exercises support us in deepening the understanding and keep us engaged with the material. This book is appropriate for master's students but can also be used for undergraduate students. Practitioners will also benefit from the readily available tools. The material was especially designed for Business Analytics degrees with a focus on Data Science and can also be used for machine learning or artificial intelligence classes. This entry-level book is ideally suited for a wide range of disciplines wishing to gain actionable data insights in a practical manner.

    Wolfgang Garn (2024) Unsupervised Machine Learning, In: Data Analytics for Business, pp. 199-218, Routledge

    In this chapter, unsupervised learning will be introduced. In contrast to supervised learning, no target variable is available to guide the learning via quality measures.

    Wolfgang Garn (2024) Databases, In: Data Analytics for Business, pp. 8-42, Routledge

    This chapter introduces databases and the Structured Query Language (SQL). Most of today's data is stored in databases within tables. In order to gain knowledge and make valuable business decisions, it is necessary to extract information efficiently. This is achieved using the Structured Query Language. Good relational databases avoid repetitive data using standard normalisation approaches: for instance, by splitting raw data into multiple tables. These tables (=entities) are linked (=related) using business rules. Structured queries make use of entity relationships to obtain data. This is one of the fundamental concepts of data mining of structured data.
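A minimal sketch of the normalisation and join idea described above, using Python's built-in sqlite3 module; the tables, columns and data are invented for illustration and are not taken from the chapter.

```python
# Two related tables instead of one repetitive one, queried via their relationship.
# Schema and data are illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE orders (id INTEGER PRIMARY KEY,
               customer_id INTEGER REFERENCES customer(id), amount REAL)""")
cur.executemany("INSERT INTO customer VALUES (?, ?)", [(1, "Ada"), (2, "Bob")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 120.0), (2, 1, 80.0), (3, 2, 45.0)])

# A structured query uses the entity relationship to extract information,
# e.g. the total spend per customer.
cur.execute("""SELECT c.name, SUM(o.amount)
               FROM customer c JOIN orders o ON o.customer_id = c.id
               GROUP BY c.name""")
print(cur.fetchall())   # [('Ada', 200.0), ('Bob', 45.0)]
```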

    Wolfgang Garn (2024) Data Analytics Frameworks, In: Data Analytics for Business, pp. 123-144, Routledge

    It is always good to follow a strategic roadmap. We motivate Data Analytics roadmaps by first developing a business scenario and introducing associated modelling. Here, formal definitions for real and prediction models are provided. These are of foremost importance. We will use them to derive one of the most commonly used measures - the mean squared error (MSE) - to evaluate the quality of a data mining model. Several business applications are mentioned to give an idea of which kind of projects can be built around a simple linear model. The models and quality measures provide us with a solid foundation for the frameworks. This chapter introduces the methodology of knowledge discovery in databases (KDD), which identifies essential steps in the Data Analytics life-cycle process. We discuss KDD with the help of some examples. The different stages of KDD are introduced, such as data preprocessing and data modelling. We explore the cross-industry standard process for data mining (CRISP-DM).
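The mean squared error referred to above has the standard form (generic notation, not quoted from the chapter):

```latex
% MSE of a prediction model \hat{f} over n observations with targets y_i
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i - \hat{y}_i \bigr)^2,
\qquad \hat{y}_i = \hat{f}(x_i)
```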

    Wolfgang Garn (2024) Artificial Intelligence, In: Data Analytics for Business, pp. 221-260, Routledge

    Artificial intelligence (AI) is concerned with: Enabling computers to "think"; Creating machines that are "intelligent"; Enabling computers to perceive, reason and act; Synthesising intelligent behaviour in artefacts. Previously, we looked at statistical and machine learning. Learning is an essential area of AI. But this is only one part; it also includes problem solving (e.g. optimisations), planning, communicating, acting and reasoning.

    Wolfgang Garn (2024) Statistical Learning, In: Data Analytics for Business, pp. 147-182, Routledge

    Classic statistical learning methods are linear and logistic regression. Both methods are supervised learners. That means the target and input features are known. Linear regression is used for predictions, whilst logistic regression is a classifier. These methods are well-established and have many benefits. For instance, a lot of the theory of the methods is understood and the resulting models are easy to interpret. To evaluate the quality of the prediction and classification models, we need to understand the errors. This chapter uses an example to introduce the concept of errors.
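For reference, the two supervised learners named above have the following textbook model forms (generic notation, not copied from the chapter):

```latex
% Linear regression: a numeric prediction from p input features
\hat{y} = \beta_0 + \beta_1 x_1 + \dots + \beta_p x_p

% Logistic regression: the probability of the positive class, used as a classifier
\Pr(y = 1 \mid x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \dots + \beta_p x_p)}}
```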

    Wolfgang Garn (2024) Supervised Machine Learning, In: Data Analytics for Business, pp. 183-198, Routledge

    Machine learning (ML) is mainly concerned with identifying patterns in data in order to predict and classify. Previously, we used linear regression and logistic regression to achieve this. However, we heard about them in the context of statistical learning (SL) since they are widely used for inference about the population and hypotheses. Typically, SL relies on assumptions such as normality, homoscedasticity, independent variables and others, whereas ML often ignores these. We continue with supervised learning (i.e. the response is known) approaches. Please be sure you have done the statistical learning chapter. Particularly, we will look at tree-based methods such as the decision tree and random forest. Then we will look at a nearest neighbour approach.
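As an illustration only (not the chapter's own example), the three supervised learners mentioned above can be fitted and compared in a few lines of Python with scikit-learn on a bundled toy dataset:

```python
# Illustrative comparison of a decision tree, a random forest and k-nearest
# neighbours on scikit-learn's bundled iris data; hyperparameters are arbitrary.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (DecisionTreeClassifier(max_depth=3),
              RandomForestClassifier(n_estimators=200, random_state=0),
              KNeighborsClassifier(n_neighbors=5)):
    model.fit(X_train, y_train)
    print(type(model).__name__, round(model.score(X_test, y_test), 3))
```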

    Christopher Turner, John Oyekan, Wolfgang Garn, Cian Duggan, Khaled Abdou (2022) Industry 5.0 and the Circular Economy: Utilizing LCA with Intelligent Products, In: Sustainability, 14(22), 14847

    While the move towards Industry 4.0 has motivated a re-evaluation of how a manufacturing organization should operate in light of the availability of a new generation of digital production equipment, the new emphasis is on human worker inclusion to provide decision making activities or physical actions (at decision nodes) within an otherwise automated process flow; termed by some authors as Industry 5.0 and seen as related to the earlier Japanese Society 5.0 concept (seeking to address wider social and environmental problems with the latest developments in digital systems, artificial intelligence and automation solutions). As motivated by the EU, the Industry 5.0 paradigm can be seen as a movement to address infrastructural resilience, employee and environmental concerns in industrial settings. This is coupled with a greater awareness of environmental issues, especially those related to carbon output during production and throughout manufactured products' lifecycles. This paper proposes the concept of dynamic Life Cycle Assessment (LCA), enabled by the functionality possible with intelligent products. A particular focus of this paper is that of human-in-the-loop assisted decision making for end-of-life disassembly of products and the role intelligent products can perform in achieving sustainable reuse of components and materials. It is concluded by this research that intelligent products must provide auditable data to support the achievement of net zero carbon and circular economy goals. The role of the human in moving towards net zero production, through the increased understanding and arbitration powers over information and decisions, is paramount; this opportunity is further enabled through the use of intelligent products.

    The mTSP is solved using an exact method and two heuristics that balance the number of nodes per route. The first heuristic uses a nearest node approach and the second assigns the closest salesman. A comparison of the heuristics on test-instances in the Euclidean plane showed that the closest node approach delivers better solutions and a faster runtime. On average, the closest node solutions are approximately one percent better than the other heuristic. Furthermore, it is found that increasing the number of salesmen or customers results in a distance growth for uniformly distributed nodes in a Euclidean grid plane. The distance growth is almost proportional to the square root of the number of customers (nodes). In this context we reviewed the expected distance between two uniformly distributed random (real and integer) points. The minimum distance of a node to n uniformly distributed random (real and integer) points was derived and expressed as a functional relationship. This gives theoretical underpinnings for previously empirical insights into how distance grows with the number of salesmen.
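Schematically, the square-root growth observation above is of the familiar form below; the constant is estimated empirically in the paper and is not reproduced here.

```latex
% Total route distance L for n customers uniformly distributed over a region of
% area A; k is an empirical constant (schematic form only, not the paper's exact model)
L(n) \approx k \sqrt{n A}
```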

    Vasilis Nikolaou, Sebastiano Massaro, Masoud Fakhimi, Wolfgang Garn (2024) Identifying Biomarkers of Cardiovascular Diseases with Machine Learning: Evidence from The UK Household Longitudinal Study, In: Medical Research Archives, 12(1)

    Cardiovascular diseases are a significant global health concern, responsible for one-third of deaths worldwide and posing a substantial burden on society and national healthcare systems. To effectively address this challenge and develop targeted intervention strategies, the ability to predict cardiovascular diseases from standardized assessments, such as occupational health encounters or national surveys, is critical. This study aims to assist these efforts by identifying a set of biomarkers which, together with known risk factors, can predict cardiovascular diseases at their onset. We used a sample of 7,767 individuals from the UK household longitudinal study ‘Understanding Society’ to train several machine learning models able to pinpoint biomarkers and risk factors at baseline that predict cardiovascular diseases at a ten-year follow-up. A logistic regression model was trained for comparison. A Gaussian naïve Bayes classifier returned 82% recall, in contrast to 48% for the logistic regression, allowing us to identify the most prominent biomarkers predicting cardiovascular diseases. These findings show the opportunity to use machine learning to identify a wide range of previously overlooked biomarkers associated with cardiovascular disease onset and thus encourage the implementation of such a model in the early diagnosis and prevention of cardiovascular diseases in future research and practice.

    Eleanor Ruth Mill, Wolfgang Garn, Nicholas F Ryman-Tubb, Christopher Turner (2023) Opportunities in Real Time Fraud Detection: An Explainable Artificial Intelligence (XAI) Research Agenda, In: International Journal of Advanced Computer Science and Applications, 14(5), pp. 1172-1186, SAI Organization

    Regulatory and technological changes have recently transformed the digital footprint of credit card transactions, providing at least ten times the amount of data available for fraud detection practices that were previously available for analysis. This newly enhanced dataset challenges the scalability of traditional rule-based fraud detection methods and creates an opportunity for wider adoption of artificial intelligence (AI) techniques. However, the opacity of AI models, combined with the high stakes involved in the finance industry, means practitioners have been slow to adapt. In response, this paper argues for more researchers to engage with investigations into the use of Explainable Artificial Intelligence (XAI) techniques for credit card fraud detection. Firstly, it sheds light on recent regulatory changes which are pivotal in driving the adoption of new machine learning (ML) techniques. Secondly, it examines the operating environment for credit card transactions, an understanding of which is crucial for the ability to operationalise solutions. Finally, it proposes a research agenda comprised of four key areas of investigation for XAI, arguing that further work would contribute towards a step-change in fraud detection practices.

    Eleanor Ruth Mill, Wolfgang Garn, Nicholas F Ryman-Tubb, Christopher Turner (2024) The SAGE Framework for Explaining Context in Explainable Artificial Intelligence, In: Applied Artificial Intelligence (AAI), 38(1), 2318670

    Scholars often recommend incorporating context into the design of an explainable artificial intelligence (XAI) model in order to deliver the successful integration of an explainable agent into a real-world operating environment. However, few in the field of XAI have expanded upon the meaning of context, or provided clarification as to what they consider its constituent parts. This paper answers that question by providing a thematic review of the extant literature, revealing an interaction between the contextual elements of Setting, Audience, Goals and Ethics (SAGE). This paper therefore proposes SAGE as a conceptual framework that enables researchers to build audience-centric and context-sensitive XAI, thereby strengthening the prospects for successful adoption of an XAI solution.

    Wolfgang Garn, James Aitken (2015) Agile factorial production for a single manufacturing line with multiple products, In: L Peccati (ed.), European Journal of Operational Research, 245(3), pp. 754-766, Elsevier

    Industrial practices and experiences highlight that demand is dynamic and non-stationary. Research however has historically taken the perspective that stochastic demand is stationary, therefore limiting its impact for practitioners. Manufacturers require schedules for multiple products that decide the quantity to be produced over a required time span. This work investigated the challenges for production in the framework of a single manufacturing line with multiple products and varying demand. The nature of varying demand of numerous products lends itself naturally to an agile manufacturing approach. We propose a new algorithm that iteratively refines production windows and adds products. This algorithm controls parallel genetic algorithms (pGA) that find production schedules whilst minimizing costs. The configuration of such a pGA was essential in influencing the quality of results. In particular, providing initial solutions was an important factor. Two novel methods are proposed that generate initial solutions by transforming a production schedule into one with refined production windows. The first method is called the factorial generation method and the second the fractional generation method. A case study compares the two methods and shows that the factorial method outperforms the fractional one in terms of costs.

    Wolfgang Garn, James Aitken (2015) Splitting hybrid Make-To-Order and Make-To-Stock demand profiles, In: arXiv

    In this paper a demand time series is analysed to support Make-To-Stock (MTS) and Make-To-Order (MTO) production decisions. Using a purely MTS production strategy based on the given demand can lead to unnecessarily high inventory levels; it is therefore necessary to identify likely MTO episodes. This research proposes a novel outlier detection algorithm based on special density measures. We divide the time series' histogram into three clusters. One, with frequent low volumes, covers MTS items, whilst a second accounts for high volumes and is dedicated to MTO items. The third cluster resides between the previous two, with its elements being assigned to either the MTO or MTS class. The algorithm can be applied to a variety of time series, such as stationary and non-stationary ones. We use empirical data from manufacturing to study the extent of inventory savings. The percentage of MTO items is reflected in the inventory savings, which were shown to be an average of 18.1%.
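The following toy sketch illustrates the general idea of splitting demand volumes into a low-volume (MTS) cluster, a high-volume (MTO) cluster and an in-between cluster; it uses simple quantile cut-offs and is not the paper's density-based algorithm.

```python
# Toy illustration only -- NOT the paper's special density measures.
# Split observed demand volumes into MTS-like, middle and MTO-like groups.
import numpy as np

rng = np.random.default_rng(1)
demand = np.concatenate([rng.poisson(20, 200),    # frequent low volumes (MTS-like)
                         rng.poisson(200, 15)])   # rare large spikes (MTO-like)

low_cut, high_cut = np.quantile(demand, [0.80, 0.95])   # arbitrary cut-offs
mts    = demand[demand <= low_cut]
middle = demand[(demand > low_cut) & (demand < high_cut)]
mto    = demand[demand >= high_cut]
print(len(mts), len(middle), len(mto))
```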

    Eleanor Mill, Wolfgang Garn, Nick Ryman-Tubb (2022) Managing Sustainability Tensions in Artificial Intelligence: Insights from Paradox Theory, In: AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pp. 491-498, Association for Computing Machinery (ACM)

    This paper offers preliminary reflections on the sustainability tensions present in Artificial Intelligence (AI) and suggests that Paradox Theory, an approach borrowed from the strategic management literature, may help guide scholars towards innovative solutions. The benefits of AI to our society are well documented. Yet those benefits come at environmental and sociological cost, a fact which is often overlooked by mainstream scholars and practitioners. After examining the nascent corpus of literature on the sustainability tensions present in AI, this paper introduces the Accuracy-Energy Paradox and suggests how the principles of paradox theory can guide the AI community to a more sustainable solution.

    Carmen Barletta, Wolfgang Garn, Christopher Turner, Saber Fallah (2023) Hybrid fleet capacitated vehicle routing problem with flexible Monte Carlo Tree search, In: International Journal of Systems Science: Operations & Logistics, 10(1), Taylor & Francis

    The rise in EVs' popularity, combined with reducing emissions and cutting costs, has encouraged delivery companies to integrate them into their fleets. The fleet heterogeneity brings new challenges to the Capacitated Vehicle Routing Problem (CVRP). Driving range and different vehicles' capacity constraints must be considered. A cluster-first, route-second heuristic approach is proposed to maximise the number of parcels delivered. Clustering is achieved with two algorithms: a capacitated k-median algorithm that groups parcel drop-offs based on customer location and parcel weight; and a hierarchical constrained minimum weight matching clustering algorithm which considers EVs' range. This reduces the solution space in a meaningful way for the routing. The routing heuristic introduces a novel Monte-Carlo Tree Search enriched with a custom objective function, rollout policy and a tree pruning technique. Moreover, EVs are preferred over other vehicles when assigning parcels to vehicles. A Tabu Search step further optimises the solution. This two-step procedure allows problems with thousands of customers and hundreds of vehicles to be solved, accommodating customised vehicle constraints. The solution quality is similar to other CVRP implementations when applied to classic VRP test-instances; and its quality is superior in real-world scenarios when constraints for EVs must be used.

    Wolfgang Garn, James Aitken, Roger Schmenner (2024) Smoothly pass the parcel: implementing the theory of swift, even flow, EDP Sciences

    This research examines the application of the Theory of Swift, Even Flow (TSEF) by a distribution company to improve the performance of its processes for parcels. TSEF was deployed by the company after experiencing lean improvement fatigue and diminishing returns from the time and effort invested. This case study combined quantitative and qualitative approaches to develop a good understanding of the operation. This approach enabled the business to utilise Discrete Event Simulation (DES), which facilitated the implementation of TSEF. From this study, the development of a novel DES application revealed the primacy of process variation and throughput time, key factors in TSEF, in driving improvements. The derived DES approach is reproducible and demonstrates its utility with production improvement frameworks. TSEF, through the visualisations and analysis provided by DES, broadened the scope of improvements to an enterprise level, therefore assisting the business managers in driving forward when lean improvement techniques stagnated. The impact of the research is not limited to the theoretical contribution, as the combination of DES and TSEF led to significant managerial insights on how to overcome obstacles and substantiate change.

    Scheduling multiple products with limited resources and varying demands remains a critical challenge for many industries. This work presents mixed integer programs (MIPs) that solve the Economic Lot Sizing Problem (ELSP) and other Dynamic Lot-Sizing (DLS) models with multiple items. DLS systems are classified, extended and formulated as MIPs. In particular, logical constraints are a key ingredient in succeeding in this endeavour. They were used to formulate the setup/changeover of items on the production line. Minimising the holding, shortage and setup costs is the primary objective for ELSPs. This is achieved by finding an optimal production schedule taking into account the limited manufacturing capacity. Case studies for a production plant are used to demonstrate the functionality of the MIPs. Optimal DLS and ELSP solutions are given for a set of test-instances. Insights into the runtime and solution quality are given.
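For orientation, a textbook single-item dynamic lot-sizing MIP is sketched below; the paper's multi-item ELSP formulations with changeover logic are richer than this.

```latex
% Single-item dynamic lot-sizing (textbook form, not the paper's model):
% x_t = production, I_t = inventory, y_t = setup indicator, d_t = demand,
% s = setup cost, h = holding cost, M = a sufficiently large constant.
\min \sum_{t=1}^{T} \left( s\, y_t + h\, I_t \right)
\quad \text{s.t.} \quad
I_t = I_{t-1} + x_t - d_t, \qquad
x_t \le M\, y_t, \qquad
x_t, I_t \ge 0, \quad y_t \in \{0,1\}, \quad t = 1, \dots, T
```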

    Athary Alwasel, Lampros Stergioulas, Masoud Fakhimi, Wolfgang Garn (2021) Assessing Patient Engagement in Health Care: Proposal for a Modeling and Simulation Framework for Behavioral Analysis, In: JMIR Research Protocols, 10(12), pp. e30092-e30092, JMIR

    Vasilis Nikolaou, Sebastiano Massaro, Masoud Fakhimi, Lampros Stergioulas, Wolfgang Garn (2021) COVID-19 diagnosis from chest x-rays: developing a simple, fast, and accurate neural network, In: Health Information Science and Systems, 9, 36, Springer

    Purpose Chest x-rays are a fast and inexpensive test that may potentially diagnose COVID-19, the disease caused by the novel coronavirus. However, chest imaging is not a first-line test for COVID-19 due to low diagnostic accuracy and confounding with other viral pneumonias. Recent research using deep learning may help overcome this issue as convolutional neural networks (CNNs) have demonstrated high accuracy of COVID-19 diagnosis at an early stage. Methods We used the COVID-19 Radiography database [36], which contains x-ray images of COVID-19, other viral pneumonia, and normal lungs. We developed a CNN in which we added a dense layer on top of a pre-trained baseline CNN (EfficientNetB0), and we trained, validated, and tested the model on 15,153 X-ray images. We used data augmentation to avoid overfitting and address class imbalance; we used fine-tuning to improve the model’s performance. From the external test dataset, we calculated the model’s accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1-score. Results Our model differentiated COVID-19 from normal lungs with 95% accuracy, 90% sensitivity, and 97% specificity; it differentiated COVID-19 from other viral pneumonia and normal lungs with 93% accuracy, 94% sensitivity, and 95% specificity. Conclusions Our parsimonious CNN shows that it is possible to differentiate COVID-19 from other viral pneumonia and normal lungs on x-ray images with high accuracy. Our method may assist clinicians with making more accurate diagnostic decisions and support chest X-rays as a valuable screening tool for the early, rapid diagnosis of COVID-19.
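A sketch of the transfer-learning setup described above (a dense classification head on a pre-trained EfficientNetB0) is shown below; the input size, head and training details are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch: pre-trained EfficientNetB0 base with a small classification
# head for three classes (COVID-19 / other viral pneumonia / normal).
# Input size and head are assumptions, not the paper's exact settings.
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False            # train the head first; fine-tune the base later

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```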

    Vasilis Nikolaou, Sebastiano Massaro, Wolfgang Garn, Masoud Fakhimi, Lampros Stergioulas, David B Price (2021) Fast decliner phenotype of chronic obstructive pulmonary disease (COPD): Applying machine learning for predicting lung function loss, In: BMJ Open Respiratory Research, 8(1), e000980, BMJ Publishing Group

    Background: Chronic obstructive pulmonary disease (COPD) is a heterogeneous group of lung conditions challenging to diagnose and treat. Identification of phenotypes of patients with lung function loss may allow early intervention and improve disease management. We characterised patients with the ‘fast decliner’ phenotype, determined its reproducibility and predicted lung function decline after COPD diagnosis. Methods: A prospective 4-year observational study that applies machine learning tools to identify COPD phenotypes among 13,260 patients from the UK Royal College of General Practitioners and Surveillance Centre database. The phenotypes were identified prior to diagnosis (training data set), and their reproducibility was assessed after COPD diagnosis (validation data set). Results: Three COPD phenotypes were identified, the most common of which was the ‘fast decliner’, characterised by patients of younger age with the lowest number of COPD exacerbations and better lung function, yet a fast decline in lung function with an increasing number of exacerbations. The other two phenotypes were characterised by (a) patients with the highest prevalence of COPD severity and (b) patients of older age, mostly men and the highest prevalence of diabetes, cardiovascular comorbidities and hypertension. These phenotypes were reproduced in the validation data set with 80% accuracy. Gender, COPD severity and exacerbations were the most important risk factors for lung function decline in the most common phenotype. Conclusions: In this study, three COPD phenotypes were identified prior to patients being diagnosed with COPD. The reproducibility of those phenotypes in a blind data set following COPD diagnosis suggests their generalisability among different populations.

    Wolfgang Garn (2021) Balanced dynamic multiple travelling salesmen: Algorithms and continuous approximations, In: Computers and Operations Research, 136, 105509, Elsevier

    Dynamic routing occurs when customers are not known in advance, e.g. for real-time routing. Two heuristics are proposed that solve the balanced dynamic multiple travelling salesmen problem (BD-mTSP). These heuristics represent operational (tactical) tools for dynamic (online, real-time) routing. Several types and scopes of dynamics are proposed. Particular attention is given to sequential dynamics. The balanced dynamic closest vehicle heuristic (BD-CVH) and the balanced dynamic assignment vehicle heuristic (BD-AVH) are applied to this type of dynamics. The algorithms are applied to a wide range of test instances. Taxi services and palette transfers in warehouses demonstrate how to use the BD-mTSP algorithms in real-world scenarios. Continuous approximation models for the BD-mTSPs are derived and serve as strategic tools for dynamic routing. The models express route lengths using vehicles, customers, and dynamic scopes without the need to run an algorithm. A machine learning approach was used to obtain regression models. The mean absolute percentage error of two of these models is below 3%.
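The mean absolute percentage error quoted above has the standard definition (generic notation):

```latex
% MAPE of a regression model over n observations with targets y_i
\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|
```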

    H Van Der Heijden, W Garn (2013) Profitability in the car industry: New measures for estimating targets and target directions, In: European Journal of Operational Research, 225(3), pp. 420-428, Elsevier

    In this paper we study the profitability of car manufacturers in relation to industry-wide profitability targets such as industry averages. Specifically, we are interested in whether firms adjust their profitability in the direction of these targets, whether it is possible to detect any such change, and, if so, what the precise nature is of these changes. This paper introduces several novel methods to assess the trajectory of profitability over time. In doing so we make two contributions to the current body of knowledge regarding the dynamics of profitability. First, we develop a method to identify multiple profitability targets. We define these targets in addition to the commonly used industry average target. Second, we develop new methods to express movements in the profitability space from t to t + j, and define a notion of agreement between one movement and another. We use empirical data from the car industry to study the extent to which actual movements are in alignment with these targets. Here we calculate the three targets that we have previously identified, and contrast them with the actual profitability movements using our new agreement measure. We find that firms tend to move more towards the new targets we have identified than to the common industry average.

    Decision making is often based on Bayesian networks. The building blocks for Bayesian networks are its conditional probability tables (CPTs). These tables are obtained by parameter estimation methods, or they are elicited from subject matter experts (SMEs). Some of these knowledge representations are insufficient approximations. Using knowledge fusion of cause and effect observations leads to better predictive decisions. We propose three new methods to generate CPTs, which even work when only soft evidence is provided. The first two are novel ways of mapping conditional expectations to the probability space. The third is a column extraction method, which obtains CPTs from nonlinear functions such as the multinomial logistic regression. Case studies on military effects and burnt forest desertification have demonstrated that so-derived CPTs have highly reliable predictive power, including superiority over the CPTs obtained from SMEs. In this context, new quality measures for determining the goodness of a CPT and for comparing CPTs with each other have been introduced. The predictive power and enhanced reliability of decision making based on the novel CPT generation methods presented in this paper have been confirmed and validated within the context of the case studies.

    George Kireulishvili, Wolfgang Garn, James Aitken, Jane Hemsley-Brown (2018) Prediction methods improve bus services profitability, In: Proceedings of EURO 2018 - 29th European Conference on Operational Research, EURO 2018

    Since bus deregulation (Transport Act 1985), patronage for bus services has been decreasing in a county in the South of England. Hence, methods that increase patronage, focus subsidies and stimulate the bus industry are required. Our surveys and market research identified and quantified essential factors. The top three factors are price, frequency, and dependability. The model was further enhanced by taking into account real-time passenger information (RTPI), socio-demographics and ticket machine data along targeted bus routes. These allowed the design of predictive models. Here, feature engineering was essential to boost the solution quality. We compared several models such as regression, decision trees and random forest. Additionally, traditional price elasticity formulas have been confirmed. Our results indicate that more accuracy can be gained using prediction methods based on the engineered features. This allows routes with the potential to increase in profitability to be identified, enabling a more focused subsidy strategy.

    W Garn (2010) Issues in Operations Management, Pearson

    This book explores a set of critical areas of Operations Management in depth. The contents cover topics such as:

    - Waiting Line Models (e.g. Multiple Server Waiting Line)
    - Transportation and Network Models (e.g. Shortest Route)
    - Inventory Management (e.g. Economic Order Quantity model)

    This will enable the reader:

    - To increase the efficiency and productivity of business firms
    - To observe and define “challenges” in a concise, precise and logical manner
    - To be familiar with a selected number of classical and state-of-the-art Management Science methods and tools to solve management problems
    - To create solution models, and to develop and create procedures that offer competitive advantage to the business/organisation
    - To communicate and provide results to the management for decision making and implementation
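For reference, the Economic Order Quantity model named above is the classical result (standard textbook form, not quoted from the book):

```latex
% EOQ: with annual demand D, ordering cost S per order and holding cost H per
% unit per year, the cost-minimising order quantity is
Q^{*} = \sqrt{\frac{2 D S}{H}}
```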

    Wolfgang Garn, James Aitken, Roger Schmenner (2020) Smoothly Pass the Parcel: Implementing the Theory of Swift, Even Flow

    This research examines the application of the Theory of Swift, Even Flow (TSEF) by a distribution company to improve the performance of its processes for parcels. TSEF was deployed by the company after experiencing improvement fatigue and diminishing returns from the time and effort invested. The fatigue was resolved through the deployment of swift, even flow and the adoption of "focused factories". The case study conducted semi-structured interviews, mapped the parcel processes and applied Discrete Event Simulation (DES). From this study we not only documented the value of TSEF as a strategic tool but also developed insights into the challenges that the firm encountered when utilising the concept. DES confirmed the feasibility of change and its cost savings. This research demonstrates DES as a tool for TSEF to stimulate management thinking about productivity.

    Drone delivery services (DDS) are an upcoming reality. Companies such as Amazon, DHL and Google are investing in developments in this area. Germany's parcel delivery company DHL has begun applying them commercially. This emphasises the importance of having insights into the economic benefits of drone delivery services. This study compares drone delivery services with traditional delivery services. It looks at the cost effectiveness of integrating drones into a delivery service's operations. The study identifies and categorises factors that control the delivery operations. It develops a mathematical model that compares the 3D flight paths of a fleet of drones with the 2D routes of a fleet of vans. The developed method can be seen as an extension of the Vehicle Routing Problem (VRP). Efficiency savings of drone services for real-world rural postcode sectors are analysed. The study limits itself to the use of small unmanned aerial vehicles (UAVs) with a low payload, which are GPS controlled and autonomous. The case study shows time, distance and cost savings when using drones rather than delivery vans. The model reveals efficiency factors to operate DDS. The study shows the economic necessity for delivering low weight goods via DDS. The primary methodological novelty of this study is a model that integrates factors relevant to drones into the VRP.

    Wolfgang Garn, Yin Hu, Paul Nicholson, Bevan Jones, Hongying Tang (2018) LSTM network time series predicts high-risk tenants, In: Proceedings of EURO 2018 - 29th European Conference on Operational Research, EURO 2018

    In the United Kingdom, local councils and housing associations provide social housing as secure, low-rent housing options to those most in need. Occasionally tenants have difficulties in paying their rent on time and fall into arrears. The lost revenue can cause financial burden and stress to tenants. An efficient arrears management scheme targets those who are most at risk of falling into long-term arrears so that interventions can avoid lost revenue. In our research, a Long Short-Term Memory Network (LSTM) based time series prediction model is implemented to differentiate high-risk tenants from temporary ones. The model measures the arrears risk for each individual tenant and differentiates between short-term and long-term arrears risk. Furthermore, it predicts the trajectory of arrears for each individual tenant. The arrears analysis investigates factors that help trigger preventative assistance for tenants before their debt becomes unmanageable. A five-year rent arrears dataset is used to train and evaluate the proposed model. The root mean squared error (RMSE), which penalises large errors, measures the differences between observed and predicted arrears. The novel model benefits the sector by allowing a decrease in lost revenue, an increase in efficiency, and protection of tenants from unmanageable debt.
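A minimal sketch of an LSTM regressor for arrears time series, in the spirit of the model described above, is given below; the window length, layer sizes and metric setup are illustrative assumptions, not the paper's actual configuration.

```python
# Illustrative LSTM regressor for a tenant's arrears series (assumed shapes).
import tensorflow as tf

window, n_features = 12, 1            # e.g. 12 past periods of arrears per tenant

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),         # predicted arrears for the next period
])
model.compile(optimizer="adam",
              loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.summary()
# model.fit(X_train, y_train, ...) would follow with real (anonymised) tenant data.
```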

    Vasileios Nikolaou, Sebastiano Massaro, Wolfgang Garn, Masoud Fakhimi, Lampros Stergioulas, David Price (2021) The cardiovascular phenotype of Chronic Obstructive Pulmonary Disease (COPD): Applying machine learning to the prediction of cardiovascular comorbidities, In: Respiratory Medicine, 186, 106528, Elsevier

    Background: Chronic Obstructive Pulmonary Disease (COPD) is a heterogeneous group of lung conditions that are challenging to diagnose and treat. As the presence of comorbidities often exacerbates this scenario, the characterization of patients with COPD and cardiovascular comorbidities may allow early intervention and improve disease management and care. Methods: We analysed a 4-year observational cohort of 6883 UK patients who were ultimately diagnosed with COPD and at least one cardiovascular comorbidity. The cohort was extracted from the UK Royal College of General Practitioners and Surveillance Centre database. The COPD phenotypes were identified prior to diagnosis and their reproducibility was assessed following COPD diagnosis. We then developed four classifiers for predicting cardiovascular comorbidities. Results: Three subtypes of the COPD cardiovascular phenotype were identified prior to diagnosis. Phenotype A was characterised by a higher prevalence of severe COPD, emphysema and hypertension. Phenotype B was characterised by a larger male majority, a lower prevalence of hypertension, and the highest prevalence of the other cardiovascular comorbidities and diabetes. Finally, phenotype C was characterised by universal hypertension, a higher prevalence of mild COPD and a low prevalence of COPD exacerbations. These phenotypes were reproduced after diagnosis with 92% accuracy. The random forest model was highly accurate for predicting hypertension while ruling out less prevalent comorbidities. Conclusions: This study identified three subtypes of the COPD cardiovascular phenotype that may generalize to other populations. Among the four models tested, the random forest classifier was the most accurate at predicting cardiovascular comorbidities in COPD patients with the cardiovascular phenotype.

    Daniel Boos, Nikolaos Karampatsas, Wolfgang Garn, Lampros K. Stergioulas (2023) Predicting corporate restructuring and financial distress in banks: The case of the Swiss banking industry, In: The Journal of Financial Research, Routledge

    The global financial crisis of 2007–2009 is widely regarded as the worst since the Great Depression and threatened the global financial system with a total collapse. This article distinguishes itself from the vast literature of bankruptcy, bank failure, and bank exit prediction models by introducing novel categorical parameters inspired by Switzerland's banking landscape. We evaluate data from 274 banks in Switzerland from 2007 to 2017 using generalized linear model logit and multinomial logit regressions and examine the determinants of corporate restructuring and financial distress. We complement our results with a robustness test via a Bayesian inference framework. We find that total assets and net interest margin affect bank exit and mergers and acquisitions, and that banks operating in the Zurich area have a higher likelihood of exiting and becoming takeover targets relative to banks operating in the Geneva area.

    Lean and Swift-Even-Flow (SEF) operations are compared in the context of sorting facilities. Lean approaches tend to attack parts of their processes for improvement and waste reduction, sometimes overlooking the impact this will have on their overall pipeline. A SEF approach, on the other hand, is driven by a desire to reduce variations by enabling practitioners to visualise themselves as the material that flows through the system, thus unearthing all the problems that occur in the process as a whole. This study integrates Discrete Event Simulations (DES) into the lean and SEF framework. A real-world case study with high levels of variation is used to gain insights and to derive relevant simulation models. The models were used to find the optimal configuration of machines and labour such that the operational costs are minimised. It was found that DES and SEF have a common basis. Lean processes as well as SEF processes both converge to similar solutions. However, SEF arrives faster at a near optimum solution. DES is a valuable tool to model, support and implement the lean and SEF approach. The SEF approach is superior to lean processes in the initial phases of a business process optimisation. The primary novelty of this study is the usage of DES to compare the lean and SEF approaches. This study presents a systematic approach of how DES and optimisation can be applied to lean and SEF operations.

    James Aitken, C Bozarth, Wolfgang Garn (2016) To eliminate or absorb supply chain complexity: A conceptual model and case study, In: Supply Chain Management, 21(6), pp. 759-774, Emerald Group Publishing Limited

    Existing works in the supply chain complexity area have either focused on the overall behavior of multi-firm complex adaptive systems (CAS) or on listing specific tools and techniques that business units (BUs) can use to manage supply chain complexity, but without providing a thorough discussion about when and why they should be deployed. This research seeks to address this gap by developing a conceptually sound model, based on the literature, regarding how an individual BU should reduce versus absorb supply chain complexity. This research synthesizes the supply chain complexity and organizational design literature to present a conceptual model of how a BU should respond to supply chain complexity. We illustrate the model through a longitudinal case study analysis of a packaged foods manufacturer. Regardless of its type or origin, supply chain complexity can arise due to the strategic business requirements of the BU (strategic) or due to suboptimal business practices (dysfunctional complexity). Consistent with the proposed conceptual model, the illustrative case study showed that a firm must first distinguish between strategic and dysfunctional drivers prior to choosing an organizational response. Furthermore, it was found that efforts to address supply chain complexity can reveal other system weaknesses that lie dormant until the system is stressed. The case study provides empirical support for the literature-derived conceptual model. Nevertheless, any findings derived from a single, in-depth case study require further research to produce generalizable results. The conceptual model presented here provides a more granular view of supply chain complexity, and how an individual BU should respond, than what can be found in the existing literature. The model recognizes that an individual BU can simultaneously face both strategic and dysfunctional complexity drivers, each requiring a different organizational response. We are aware of no other research works that have synthesized the supply chain complexity and organizational design literature to present a conceptual model of how an individual business unit (BU) should respond to supply chain complexity. As such, this paper furthers our understanding of supply chain complexity effects and provides a basis for future research, as well as guidance for BUs facing complexity challenges.

    Wolfgang Garn, Christopher Turner, George Kireulishvili, Vasiliki Panagi (2019) The impact of catchment areas in predicting bus journeys

    The catchment area along a bus route is key in predicting bus journeys. In particular, the aggregated number of households within the catchment area is used in the prediction model. The model uses other factors, such as headway, day-of-week and others. The focus of this study was to classify types of catchment areas and analyse the impact of varying their sizes on the quality of predicting the number of bus passengers. Machine learning techniques (Random Forest, Neural Networks and C5.0 Decision Trees) were compared regarding the solution quality of predictions. The study discusses the sensitivity of catchment area size variations. Bus routes in the county of Surrey in the United Kingdom were used to test the quality of the methods. The findings show that the quality of predicting bus journeys depends on the size of the catchment area.

    In this paper we study the optimality of production schedules in the food industry. Specifically, we are interested in whether stochastic economic lot scheduling based on aggregated forecasts outperforms other lot sizing approaches. Empirical data on the operation's customer side, such as product variety, demand and inventory, is used. Hybrid demand profiles are split into make-to-order (MTO) and make-to-stock (MTS) time series. We find that the MTS demand aggregation stabilizes, minimizes change-overs, and optimizes manufacturing.

    W Garn (2009) Chess with "Greedy Edi", MATLAB Central

    You can play against "Edi", a chess program written in MATLAB. It uses a greedy heuristic to find the "best" move.

    Vasilis Nikolaou, Sebastiano Massaro, Masoud Fakhimi, Wolfgang Garn (2022) Using Machine Learning to Detect Theranostic Biomarkers Predicting Respiratory Treatment Response, In: Life, 12(6), 775, MDPI

    Background: Theranostic approaches—the use of diagnostics for developing targeted therapies—are gaining popularity in the field of precision medicine. They are predominately used in cancer research, whereas there is little evidence of their use in respiratory medicine. This study aims to detect theranostic biomarkers associated with respiratory-treatment responses. This will advance theory and practice on the use of biomarkers in the diagnosis of respiratory diseases and contribute to developing targeted treatments. Methods: We performed a cross-sectional analysis on a sample of 13,102 adults from the UK household longitudinal study ‘Understanding Society’. We used recursive feature selection to identify 16 biomarkers associated with respiratory treatment responses. We then implemented several machine learning algorithms using the identified biomarkers as well as age, sex, body mass index, and lung function to predict treatment response. Results: Our analysis shows that subjects with increased levels of alkaline phosphatase, glycated haemoglobin, high-density lipoprotein cholesterol, c-reactive protein, triglycerides, hemoglobin, and Clauss fibrinogen are more likely to receive respiratory treatments, adjusting for age, sex, body mass index, and lung function. Conclusions: These findings offer a valuable blueprint on why and how the use of biomarkers as diagnostic tools can prove beneficial in guiding treatment management in respiratory diseases.

    Nick F. Ryman-Tubb, Paul Krause, Wolfgang Garn (2018) How Artificial Intelligence and machine learning research impacts payment card fraud detection: A survey and industry benchmark, In: Engineering Applications of Artificial Intelligence, 76, pp. 130-157, Elsevier

    The core goal of this paper is to identify guidance on how the research community can better transition their research into payment card fraud detection towards a transformation away from the current unacceptable levels of payment card fraud. Payment card fraud is a serious and long-term threat to society (Ryman-Tubb and d’Avila Garcez, 2010) with an economic impact forecast to be $416bn in 2017 (see Appendix A). The proceeds of this fraud are known to finance terrorism, arms and drug crime. Until recently the patterns of fraud (fraud vectors) have slowly evolved and the criminals' modus operandi (MO) has remained unsophisticated. Disruptive technologies such as smartphones, mobile payments, cloud computing and contactless payments have emerged almost simultaneously with large-scale data breaches. This has led to a growth in new fraud vectors, so that the existing methods for detection are becoming less effective. This in turn makes further research in this domain important. In this context, a timely survey of published methods for payment card fraud detection is presented with the focus on methods that use AI and machine learning. The purpose of the survey is to consistently benchmark payment card fraud detection methods for industry using transactional volumes in 2017. This benchmark will show that only eight methods have a practical performance to be deployed in industry despite the body of research. The key challenges in the application of artificial intelligence and machine learning to fraud detection are discerned. Future directions are discussed and it is suggested that a cognitive computing approach is a promising research direction while encouraging industry data philanthropy.

    Chris J Turner, Wolfgang Garn (2022) Next generation DES simulation: A research agenda for human centric manufacturing systems, In: Journal of Industrial Information Integration, 28, 100354, Elsevier Inc

    In this paper we introduce a research agenda to guide the development of the next generation of Discrete Event Simulation (DES) systems. Interfaces to digital twins are projected to go beyond physical representations to become blueprints for the actual “objects” and an active dashboard for their control. The role and importance of real-time interactive animations presented in an Extended Reality (XR) format will be explored. The need for using game engines, particularly their physics engines and AI, within interactive simulated Extended Reality is expanded on. Importing and scanning real-world environments is assumed to become more efficient when using AR. Exporting to VR and AR is recommended to be a default feature. A technology framework for the next generation simulators is presented along with a proposed set of implementation guidelines. The need for more human-centric technology approaches, nascent in Industry 4.0, is now central to the emerging Industry 5.0 paradigm; an agenda that is discussed in this research as part of a human-in-the-loop future, supported by DES. The potential role of Explainable Artificial Intelligence is also explored along with an audit trail approach to provide a justification of complex and automated decision-making systems in relation to DES. A technology framework is proposed, which brings the above together and can serve as a guide for the next generation of holistic simulators for manufacturing.

    As a first contribution, the mTSP is solved using an exact method and two heuristics, where the number of nodes per route is balanced. The first heuristic uses a nearest node approach and the second assigns the closest vehicle (salesman). A comparison of the heuristics on test-instances in the Euclidean plane showed similar solution quality and runtime. On average, the nearest node solutions are approximately one percent better. The closest vehicle heuristic is especially important when the nodes (customers) are not known in advance, e.g. for online routing, whilst the nearest node is preferable when one vehicle has to be used multiple times to service all customers. The second contribution is a closed form formula that describes the mTSP distance dependent on the number of vehicles and customers. Increasing the number of salesmen results in an approximately linear distance growth for uniformly distributed nodes in a Euclidean grid plane. The distance growth is almost proportional to the square root of the number of customers (nodes). These two insights are combined in a single formula. The minimum distance of a node to n uniformly distributed random (real and integer) points was derived and expressed as a functional relationship dependent on the number of vehicles. This gives theoretical underpinnings and is in agreement with the distances found via the previous mTSP heuristics. Hence, all expected mTSP distances can be computed without running the heuristics.