Dr Mehdi Toloo
Academic and research departments
Business Analytics and Operations, Faculty of Arts, Business and Social Sciences.
About
Biography
Mehdi Toloo is a Reader in Business Analytics at Surrey Business School, UK. Before that, he held professorships in Systems Engineering and Informatics at the Technical University of Ostrava, Czech Republic, and in Operations Management at Sultan Qaboos University, Muscat, Oman. His areas of interest include Business Analytics, Operational Analytics, Operations Research/Management, Decision Analysis, Performance Evaluation, Multi-Objective Programming, and Mathematical Modelling. He has contributed to numerous international conferences as a chair, keynote speaker, presenter, track/session chair, workshop organizer, and member of the scientific committee. Mehdi is recognized among the top 2% of scientists worldwide in Business Analytics & Operations Research for 2020-2024, an annual ranking published by Stanford University in collaboration with Elsevier.
Mehdi has extensive experience in leading and collaborating on many international research projects. He acts as an editor for Computers & Industrial Engineering (Elsevier), Decision Analytics (Elsevier), Healthcare Analytics (Elsevier), Journal of Business Logistics (Wiley), RAIRO-Operations Research (EDP Sciences), and Central European Journal of Operations Research (Springer). He has written fifteen books, and his research has mainly been published in top-tier (4*/3*) journals including European Journal of Operational Research, OMEGA, Energy, Computers & Industrial Engineering, Journal of the Operational Research Society, and Annals of Operations Research.
Research
Research interests
- Operational Analytics
- Business Analytics
- Operations Research/Management
- Performance Evaluation
- Data Envelopment Analysis
- Decision Analysis
- Risk Analysis
- Multiple Criteria Decision Making
- Mathematical Modelling
Research projects
Performance evaluation of a system is the main theme of data envelopment analysis (DEA). Non-parametricity, data-driven modelling, and an axiomatic framework are the most essential properties of DEA. Indeed, DEA is a mathematical programming approach for assessing the relative efficiency of systems by estimating a best-practice frontier from all observations. Performance factors are classified into input and output groups, and the efficiency of a system is defined as the ratio of a weighted sum of its outputs to a weighted sum of its inputs. In some applications there are unclassified factors, which can simultaneously play input and output roles. The main aim of our research project is to overcome DEA implementation problems when unclassified factors are present. With unclassified factors, we need to revisit the axiomatic framework of DEA, extend some non-oriented DEA models, handle data irregularities, and develop some DEA models with the inclusion of weight restrictions. We aim to scrutinize the properties and validity of the proposed models from both theoretical and practical standpoints.
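The ratio definition above can be made concrete with a small numerical sketch. The following solves a standard input-oriented CCR envelopment model with an off-the-shelf LP solver on hypothetical data (three units, one input, one output); it is an illustrative textbook model, not one of the project's proposed formulations:

```python
# Illustrative sketch on hypothetical data: input-oriented CCR envelopment
# model under constant returns to scale, solved as a linear programme.
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = DMUs, columns = factors
X = np.array([[2.0], [4.0], [3.0]])  # inputs
Y = np.array([[2.0], [2.0], [3.0]])  # outputs

def ccr_efficiency(o, X, Y):
    """Efficiency of DMU o: min theta s.t. the composite unit built from
    lambdas uses at most theta * inputs of o and produces at least o's outputs."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]               # variables: theta, lambda_1..n
    # Input constraints: sum_j lambda_j x_ij <= theta * x_io
    A_in = np.hstack([-X[[o]].T, X.T])
    # Output constraints: sum_j lambda_j y_rj >= y_ro
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # optimal theta in (0, 1]; 1 means efficient

print([round(ccr_efficiency(o, X, Y), 3) for o in range(3)])
```

With a single input and output under constant returns to scale, the optimal theta is each unit's output-input ratio divided by the best observed ratio, so the first and third units score 1 (efficient) and the second scores 0.5.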
The aim of the project is to investigate the ability of particular models for network systems, especially two-stage ones, to estimate suitable measures of efficiency. The DEA models available for determining the existence of economies of scope will then be analyzed. The specific aim of the project is to present an approach by which the existence of economies of scope for two-stage production systems can be studied. To reach this aim, the team will: (i) create a comprehensive report on the results of models of economies of scope using DEA and of two-stage systems using network DEA, and analyze and compare the recent models; (ii) validate the formulated models on more complicated decision-making units involving various internal processes.
Data Envelopment Analysis (DEA) seeks a frontier that envelops all the data, with the data playing a critical role in the process, and in this way measures the relative efficiency of each Decision Making Unit (DMU) in comparison with the other units. If the number of performance measures is high relative to the number of units, then a large percentage of the units will be deemed efficient, which is obviously a questionable result. In this project, some new DEA models are formulated for selecting performance measures. To this end, we (i) extend some selecting approaches; (ii) extend them to accommodate some special data sets; (iii) formulate some new hybrid models that consider both selective and flexible measures; and (iv) develop some multiplicative DEA models associated with the selecting approaches.
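The discrimination problem described above is easy to reproduce numerically. The sketch below uses hypothetical random data and the standard input-oriented CCR model (not the selection models the project formulates) to count how many of ten DMUs come out efficient as candidate output measures are added one at a time:

```python
# Illustrative sketch: as performance measures grow relative to the number
# of DMUs, the share of units classified as efficient grows too.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 10                                   # number of DMUs
X = rng.uniform(1, 10, size=(n, 1))      # one input
Y_all = rng.uniform(1, 10, size=(n, 5))  # up to five candidate outputs

def is_efficient(o, X, Y, tol=1e-6):
    """True if DMU o attains theta = 1 in the input-oriented CCR model."""
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-X[[o]].T, X.T])
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun > 1 - tol

for s in range(1, 6):                    # grow the output set
    Y = Y_all[:, :s]
    count = sum(is_efficient(o, X, Y) for o in range(n))
    print(f"{s} output(s): {count}/{n} DMUs efficient")
```

Each extra output only adds a constraint to the envelopment LP, so a unit's efficiency score can never decrease when a measure is added; the count of efficient units therefore grows monotonically with the measure set, illustrating why measure selection matters.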
Due to global competitive conditions and economic crises, significant changes in product processing play an increasingly important role in organizations maintaining a competitive and effective position. Traditional Data Envelopment Analysis (DEA) represents a convenient way to analyze the efficiency of a set of Decision Making Units (DMUs) and to rank them according to their efficiency levels. The project adopts DEA to identify the key factors of efficiency evaluation in network organizations. Within the project, the ability of particular integrated network DEA (NDEA) models to estimate allocative efficiency scores will be analyzed. The newly developed models will represent an effective way to identify existing issues and their causes in network organizations, and thus determine the degree of change required for optimal utilization of resources in order to increase an organization's efficiency score. The conclusions and the validity of the NDEA method's results will be verified in real situations in the Moravian-Silesian Region.
Research collaborations
Multiple Criteria Decision Making Modelling: Novel Weighting Methods and Hybrid Approaches, Czech Science Foundation, Czech Republic, ČACR project number 17-22662S, 2017-2018.
Research Team for Modelling of Economic and Financial Processes at VŠB-TU Ostrava, Technical University of Ostrava, Ostrava, Czech Republic. European Social Project CZ.1.07/2.3.00/20.0296, 2013-2015.
Supervision
Postgraduate research supervision
- E. K. Mensah, Robust Optimization in Data Envelopment Analysis: Extended Theory and Applications, Department of Economics, University of Insubria, Varese, Italy, 2019.
- H. Naseri, Stochastic Noise and Heavy-Tailed (Stable) Distribution in DEA, Department of Industrial Engineering, Islamic Azad University, Science and Research, Tehran, Iran, 2018.
- E. Keshavarz, Solving the multi-objective network flow optimization problems by a DEA methodology, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2015.
- A. Mahmoodirad, Modeling and solving a multi-product fixed charge solid transportation problem in a supply chain by meta-heuristics, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2015.
- A. Masoumzade, Increasing discriminating power in DEA, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2014.
- S. Nalchigar, A new methodology to develop and evaluate context-aware recommender systems based on data mining, Department of Information Technology Management, University of Tehran, Iran, 2014.
- S. Banihashemi, Optimization modelling of portfolio and sensitivity analysis supply chains, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2014.
- L.P. Navas, Application of DEA to public sectors of Colombia with some extensions of model development, Department of Industrial Engineering, Universidad de los Andes, Bogotá, Colombia, 2018.
- M. Barat, Quantitative data in data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2006.
- M. Ahmadzadeh, Using lexicographic parametric programming for identifying efficient units in data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2006.
- E. Sabertahan, A new framework in solving data envelopment analysis models, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2006.
- S. Bahiraee, Evaluation of information technology investment: a data envelopment analysis approach, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2006.
- E. Sarfi, Performance measuring and data categorizing in data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2007.
- N. Aghaee, Overall efficiency and effectiveness measuring in data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2007.
- M. Yekkalam Tash, Decomposition in data envelopment analysis: a relational network, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2008.
- M. Shadab, Data envelopment analysis based on auctions, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2008.
- S. Ghorbani, Ranking of units on the DEA frontier with common set of weights, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2008.
- A. Hashemi, Balanced score card and data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2008.
- S. Soleimani Nadaf, Two-level optimization and data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2010.
- S. Ranjbar, Two-stage processes in data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2010.
- Z. Dinarvand, Classifying inputs and outputs based on distance function, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- E. Falatouri, Data mining and data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- M. Maleklou, Evaluation of credits risk using data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- A. Zandi, Neural network and its application in data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- Z. Molaee, Presenting data envelopment analysis graphically, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- L. Narimisa, Multi-objective problems and data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- H. Gharaee, SBM models in two-stage network structures, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- N. Chalambari, Two-stage network structures in DEA: a game theory approach, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- S. Hassan Nejad, Ratio data in data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- Z. Khoshhal, Supplier selection using data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
- V. Choobkar, Undesirable input and output modelling in efficiency analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2005.
- S. Sadeghi, A model for decision making ranking with sum-zero profit and comparing with some ranking models, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2005.
- M. Mirsadeghpoor, Network DEA: evaluating the efficiency of organization with complex internal structure, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2006.
- M. Sahraee, Models for performance evaluation and cost efficiency with price uncertainty and its application to bank branches assessment, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2006.
- Gh. Rozbehi, Interval efficiency measurement with imprecise data, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2007.
- Kh. Nasrollahzadeh, Optimal paths and costs of adjustment in dynamic DEA models and application, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2007.
- S. Joshaghani, Centralized resource allocation models: a data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2008.
- H. Saleh, A fuzzy DEA/AR approach to the selection of flexible manufacturing systems, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2008.
- S. Nalchigar, A new framework for ranking associate rules of data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2009.
- M. Izadkhah, Integrating DEA-oriented performance assessment and target setting using interactive MOLP methods, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2010.
- P. Madhooshi, The improved OWA model and determining the most preferred OWA Operator, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2010.
- S. Rahmatfam, Cross-efficiency and determination of ultimate cross efficiency weights, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2010.
- S. Mamizadeh, Supply chain management in data envelopment analysis, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2010.
- R. Motefaker Fard, Measurement of multi-period aggregative efficiency, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2010.
- E. Keramati, Some extensions about integrating DEA-oriented performance assessment, Department of Mathematics and Statistics, Islamic Azad University, Central Tehran Branch, Tehran, Iran, 2011.
Teaching
- Fundamentals of Computers, Foundations of Algorithms, Data Structures, Programming with Pascal, Programming with C, Advanced Programming with Visual Basic, Microsoft Excel, Microsoft Access, Mathematica, MATLAB, Operations Research, Calculus, Graph Theory, Statistics and Probability. Faculty of Basic Sciences, Islamic Azad University, Tehran, Iran, 1999-2013.
- Microsoft Excel, Microsoft PowerPoint, Microsoft Access, Mathematica, Operations Research, Faculty of Management, University of Tehran, Tehran, Iran, 2001-2011.
- Operations Research, Faculty of Economics, VSB-TU Ostrava, Czech Republic, 2016-2017.
- Statistics for Business, School of Management, University of Turin, Italy, 2018.
- Mathematics for Business and Finance, School of Management, University of Turin, Italy, 2019.
- Mathematics for Business and Finance, Faculty of Business, University of Economics, Prague, Czech Republic, 2019.
- Statistics for Business, Department of Operations Management & Business Statistics, Sultan Qaboos University, Oman, 2020.
- Time Series Forecasting for Business, Department of Operations Management & Business Statistics, Sultan Qaboos University, Oman, 2020.
- Applied Optimization Methods, Department of Operations Management & Business Statistics, Sultan Qaboos University, Oman, 2020.
Postgraduate
- Computer Simulation, Fuzzy Sets, Network Flows, Advanced Operations Research, Linear Programming, Integer Programming, Non-Linear Programming, Dynamic Programming, Multi-Criteria Decision Making. Faculty of Basic Sciences, Islamic Azad University, Tehran, Iran, 2005-2013.
- Business Diagnostics, Special Seminar for Diploma Thesis, Faculty of Economics, VSB-TU Ostrava, Czech Republic, 2016-2017.
- Quantitative Methods in Decision Making (QMDM), School of Management, University of Turin, Italy, 2018.
- Special Topics in Operations Management, Sultan Qaboos University, Muscat, Oman, 2020.
- Operational Management, Sultan Qaboos University, Muscat, Oman, 2021.
- Business Modelling and Optimization, Sultan Qaboos University, Muscat, Oman, 2021.
- Operational Analytics, Surrey Business School, University of Surrey, Guildford, UK, 2022.
- Evaluation of Performance and Efficiency (Data Envelopment Analysis), Advanced Linear Programming, Advanced Dynamic Programming, Advanced Non-Linear Programming, Faculty of Basic Sciences, Islamic Azad University, Tehran, Iran, 2010-2013.
- Quantitative Methods of Economic Analysis (QMEA), Faculty of Economics, VSB-TU Ostrava, 2016-present.
Publications
The COVID-19 pandemic, whose new mutations are still spreading all around the world, remains a worldwide healthcare challenge. The best approach to forestall this pandemic is to avoid exposure to the virus. Therefore, medical protective equipment is essential for fighting this pandemic. This underlines the chief role of having a sustainable supply chain network (SCN) for producing and distributing personal protective equipment to avoid shortages and rising costs. The reality, however, is that the COVID-19 pandemic has led to many developments in countries, and this study is another step towards these purposes. This research develops a multi-period, multi-objective, multi-echelon, and multi-product medical protective equipment sustainable SCN considering production, distribution, allocation, and inventory with a "risk pooling" strategy effect, with the aim of filling the existing gaps in health SCN research during the COVID-19 pandemic. By applying the risk pooling strategy, lower inventory levels or higher service levels can be achieved without increasing inventory costs. This model explores the possibility of lateral transshipments between distribution centers as a way to increase the reliability of SCN performance. We model a new production, inventory, distribution, location, and allocation problem and consider four objectives for our suggested model: (i) minimizing total SCN costs, (ii) minimizing environmental effects, (iii) minimizing social impacts, and (iv) maximizing the reliability of demand delivery. The proposed model simultaneously examines all three dimensions of sustainability (economic, environmental, and social) as well as the reliability of demand delivery. Considering all of these decisions and assumptions brings the studied problem closer to reality.
We employ various algorithms to solve the developed model at different sizes: the improved version of the augmented ε-constraint (AUGMECON2) algorithm for small and medium-sized problems, and two meta-heuristic algorithms, the Multi-Objective Whale Optimization Algorithm (MOWOA) and the Multi-Objective Variable Neighborhood Search (MOVNS) algorithm, for large-sized ones. The Taguchi approach is used to tune the parameters of the meta-heuristic algorithms, and a comparison is performed using four evaluation metrics: Mean Ideal Distance (MID), Number of Pareto Solutions (NPS), Maximum Spread (MS), and Spread of Non-Dominance Solution (SNS). The solution methods proposed for the studied problem, and the comparisons between them, are another innovation of this study. A couple of numerical examples are provided to illustrate the applicability of the presented solution methods. Finally, sensitivity analysis of the problem parameters is performed to validate our suggested model. Our study reveals the superiority of the MOWOA over the other algorithms.
Adopting various pricing policies has been highly regarded in recent years for setting prices and increasing firms' profits. One of the most common steps in pricing is to identify costs. Since a significant part of costs is related to the corresponding supply chain, many researchers in different fields have used decision-making for simultaneous pricing and network design. However, there is no such approach in the field of healthcare. This paper tries to fill this gap by formulating a mixed-integer nonlinear bi-level programming model examining the interaction of hospitals and their medicine suppliers. At the upper level, there is a competitive market where a new firm (entrant) intends to enter the market and faces the challenge of pricing medicines and making network design decisions. At the lower level, there is a private hospital competing with a public hospital, and it also struggles with healthcare services pricing and supplier selection. A comprehensive utility function that considers healthcare service prices, quality, waiting time, health insurance, readmission rate, and referral rate is extended at this level. Three novel bi-level meta-heuristic algorithms are recommended, namely improved fruit fly optimization, jellyfish optimization, and forensic-based investigation optimization, to solve the presented complex mathematical problem.
This study proposes a novel multiple criteria decision making (MCDM) framework aimed at selecting refrigeration technologies that are both carbon- and energy-efficient, aligning with the UK's net-zero policies and the UN's Sustainable Development Goals (SDGs). Addressing the challenge of a limited number of competing technologies and the need to incorporate diverse stakeholders' perspectives, we design a hybrid DEA-TOPSIS approach utilizing the Feasible Super-Efficiency Slacks-Based Algorithm (FSESBA). FSESBA proves invaluable, especially in scenarios involving super-efficiency or efficiency trend measurement, where addressing undesirable factors may lead to the well-known infeasibility problem. While we establish the theoretical soundness of the DEA-TOPSIS model, we validate the efficacy of our proposed approach through comparative analysis with conventional methods. Subsequently, we evaluate the choices of present and upcoming refrigeration technologies at a leading UK supermarket. Our findings reveal a shift from prevalent HFO-based technologies in 2020 to CO2-based technologies by 2050, attributed to their lower energy usage and GHG emissions. In addition, maintaining current refrigeration systems could contribute to achieving international and national targets to decrease F-Gas refrigerant usage, although net-zero targets will remain out of reach. In summary, our research findings underscore the potential of the introduced model to reinforce the adoption of novel refrigeration system technology, offering valuable support for the UK SDGs taskforces and net-zero policy formulation.
Highlights:
- This study focuses on the selection of sustainable refrigeration technology.
- Criteria are selected by aligning the UK SDGs and Net-Zero Policies.
- We propose a DEA-TOPSIS model to address the issue of small DMUs.
- Greener technologies become more viable with stricter environmental regulations.
- Results contribute to UK SDG 2, SDG 12, and SDG 13, as well as Net-Zero Policies.
Highlights:
- Propose a novel approach for Opinion Leader (OL) detection in social networks using DEA.
- DEA minimizes data needs, improving OL identification efficiency.
- Propose a new framework for balanced OL algorithm assessment.
- Show DEA's superior OL detection performance through comparative analysis.
- DEA outperforms social network analysis in accuracy, precision, recall, and F1-score.
Through Online Social Networks (OSNs) such as Instagram, X (Twitter), and Facebook, employing Opinion Leaders (OLs) is becoming integral to companies' strategies for influencing the public. Graph-based methods are among the most important approaches for finding OLs in OSNs. Social Network Analysis (SNA)-based OL-finding methods deal with a considerable amount of data because they use the entire set of relationships between all users in a network, which makes the algorithms time-consuming. Our main goal is to introduce a new method of OL discovery that works with less data while maintaining or improving performance metrics. Consequently, a new application of the Data Envelopment Analysis (DEA) method is presented here for OL identification in social media. Another contribution of this paper is the introduction of a new framework (OL-Finder Evaluator, or OLFE) for validating OL detection algorithms under imbalanced datasets. DEA methods, when compared with SNA methods, have the advantage of being applicable to non-graph-based datasets and of working with substantially smaller datasets. In contrast, SNA methods require transparent relationships between people. In this study, we compare both DEA (including CCR and BCC) and SNA measures (including "Betweenness (BC)," "Degree (DC)," "Page Rank (PRC)," "Closeness (CC)," and "Eigenvector (EC)" centralities) on a real Instagram network for OL detection. Compared with SNA, our proposed method can identify OLs with considerably less data. Besides the advantage of DEA in time-saving, there is close competition between the DEA and SNA methods. On average, DEA performs better on the accuracy, precision, recall, and F1-score performance metrics.
Owing to today’s highly competitive market environments, substantial attention has been focused on sustainably resilient supply chains (SCs) over the last few years. Nevertheless, very few studies have focused on the efficiency evaluation analysis of the sustainability and resilience of SCs as an inevitable essential in any profitable business. This study aims to address this issue by proposing a novel fuzzy chance-constrained two-stage data envelopment analysis (DEA) model as an advanced and rigorous approach in the performance evaluation of sustainably resilient SCs. To the best of our knowledge, the current study is pioneering as it introduces a new fuzzy chance-constrained two-stage method that can be used to undertake the deterministic non-fuzzy programming of the proposed model. The proposed approach is validated and applied to evaluate a real case study including 21 major public transport providers in three megacities. The results demonstrate the advantages of the proposed approach in comparison to the existing approaches in the literature.
Highlights:
- A novel fuzzy chance-constrained two-stage data envelopment analysis model is developed.
- The sustainably resilient supply chains of 21 major public transport providers in three megacities are investigated.
- The results illustrate the superiority of our proposed model over the black-box model.
This paper presents a framework in which data envelopment analysis (DEA) is used to measure overall profit efficiency with interval data. Specifically, it is shown that as the input, output, and price vectors each vary within intervals, the DMUs cannot be easily evaluated. Thus, presenting a new method for computing the efficiency of DMUs with interval data, an interval is defined for the efficiency score of each unit. In addition, all the DMUs are divided into three groups, defined according to the interval obtained for the efficiency value of the DMUs.
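One standard way to obtain such an efficiency interval is to score each DMU once at its most favourable data realisation and once at its least favourable one. The sketch below illustrates this bounding idea on hypothetical interval data with a plain CCR model; it is a generic interval-DEA bounding scheme, not necessarily this paper's exact formulation (which also involves interval price vectors):

```python
# Illustrative sketch (hypothetical data): lower/upper efficiency bounds
# for DMUs whose inputs and outputs vary within intervals.
import numpy as np
from scipy.optimize import linprog

# Hypothetical interval data: [lower, upper] for one input and one output
X_lo = np.array([[1.5], [3.5]]); X_hi = np.array([[2.5], [4.5]])
Y_lo = np.array([[1.8], [1.5]]); Y_hi = np.array([[2.2], [2.5]])

def ccr(o, X, Y):
    """Input-oriented CCR efficiency of DMU o at a fixed data realisation."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    A = np.vstack([np.hstack([-X[[o]].T, X.T]),
                   np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])])
    b = np.r_[np.zeros(X.shape[1]), -Y[o]]
    return linprog(c, A_ub=A, b_ub=b,
                   bounds=[(None, None)] + [(0, None)] * n).fun

def interval_efficiency(o):
    # Upper bound: o at its best data (low input, high output), rivals at worst
    Xu, Yu = X_hi.copy(), Y_lo.copy()
    Xu[o], Yu[o] = X_lo[o], Y_hi[o]
    # Lower bound: o at its worst data, rivals at their best
    Xl, Yl = X_lo.copy(), Y_hi.copy()
    Xl[o], Yl[o] = X_hi[o], Y_lo[o]
    return ccr(o, Xl, Yl), ccr(o, Xu, Yu)

for o in range(2):
    lo, hi = interval_efficiency(o)
    print(f"DMU {o}: efficiency in [{lo:.3f}, {hi:.3f}]")
```

The lower-bound scenario is, by construction, at least as hard for the evaluated unit as the upper-bound scenario, so the two scores always bracket the unit's attainable efficiencies; classifying units by whether their interval lies at, spans, or falls below 1 yields a three-group split of the kind described above.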
Data envelopment analysis (DEA) is a data-driven benchmarking tool for evaluating the relative efficiency of production units with multiple outputs and inputs. Conventional DEA models are based on a production system that converts inputs to outputs through input-transformation-output processes. However, in some situations it is unavoidable to consider assessment factors, referred to as dual-role factors, which can simultaneously play input and output roles in DEA. The observed data are often assumed to be precise, although uncertainty is an inherent part of most real-world applications. Dealing with imprecise data is a perpetual challenge in DEA that can be treated by presenting the data as intervals. This paper develops an imprecise DEA approach with dual-role factors based on revised production possibility sets. The resulting models are a pair of mixed binary linear programming problems that yield the possible relative efficiencies in the form of intervals. In addition, a procedure is presented to assign the optimal designation to a dual-role factor and specify whether it is a nondiscretionary input or output. Given the interval efficiencies, the production units are categorized into efficient and inefficient sets. Beyond this dichotomized classification, a practical ranking approach is also adopted to achieve incremental discrimination through the evaluation analysis. Finally, an application to third-party reverse logistics providers is studied to illustrate the efficacy and applicability of the proposed approach.
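The conventional efficiency measure underlying these models, the ratio of a weighted sum of outputs to a weighted sum of inputs, can be computed for precise data with a standard input-oriented CCR multiplier model. The following is a minimal sketch with illustrative toy data (not the paper's imprecise dual-role formulation), using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs, 2 inputs (X) and 1 output (Y) -- illustrative values only.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR multiplier model for DMU `o` (constant returns to scale)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: output weights u (s of them) followed by input weights v (m).
    c = np.concatenate([-Y[o], np.zeros(m)])           # maximise u . y_o
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # normalisation: v . x_o = 1
    A_ub = np.hstack([Y, -X])                          # u . y_j - v . x_j <= 0, all j
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun                                    # efficiency score in (0, 1]

scores = [ccr_efficiency(o, X, Y) for o in range(4)]
```

With these data the fourth DMU is dominated, so its score falls below one while the other three are rated efficient.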
Data Envelopment Analysis (DEA) is a non-parametric technique for evaluating a set of homogeneous decision-making units (DMUs) with multiple inputs and multiple outputs. Various DEA methods have been proposed to rank all the DMUs or to select a single efficient DMU with a single constant input and multiple outputs [i.e., without explicit inputs (WEI)] as well as multiple inputs and a single constant output [i.e., without explicit outputs (WEO)]. However, the majority of these methods are computationally complex and difficult to use. This study proposes an efficient method for finding a single efficient DMU, known as the most efficient DMU, under WEI and WEO conditions. Two compact forms are introduced to determine the most efficient DMU without solving an optimization model under the DEA-WEI and DEA-WEO conditions. A comparative analysis shows a significant reduction in the computational complexity of the proposed method over previous studies. Four numerical examples from different contexts are presented to demonstrate the applicability and exhibit the effectiveness of the proposed compact forms.
Toloo and Tichý (2015), with the aim of satisfying the rule of thumb in data envelopment analysis, developed a pair of models that optimally choose some inputs and outputs among selective measures under the variable returns to scale assumption. Their approach involves a lower bound for the input and output weights in the multiplier model and a penalty term in the objective function of the envelopment model. These models involve an epsilon which, on the one hand, makes the selecting envelopment model non-linear and, on the other hand, increases the computational burden of solving the selecting multiplier models. Selecting an improper value for the epsilon may cause infeasibility and unboundedness issues for the multiplier and envelopment models, respectively. This paper demonstrates that the method of Toloo and Tichý (2015) is valid even when the epsilon is excluded. The method is extended to a generalized returns to scale model which covers the other returns to scale assumptions, i.e. non-increasing, constant, and non-decreasing. The obtained results indicate that the simplified approach is more stable and reliable and substantially reduces the required calculations.
Supplier selection, a multi-criteria decision making (MCDM) problem, is one of the most important strategic issues in supply chain management (SCM). A good solution to this problem contributes significantly to the overall supply chain performance. This paper proposes a new integrated mixed integer programming and data envelopment analysis (MIP-DEA) model for finding the most efficient suppliers in the presence of imprecise data. Using this model, a new method for the full ranking of units is introduced. This method tackles some drawbacks of previous methods and is computationally more efficient. The applicability of the proposed model is illustrated, and the results and performance are compared with previous studies.
This paper proposes a new integrated mixed integer programming and data envelopment analysis (MIP-DEA) model to improve the integrated DEA model introduced by Toloo & Nalchigar [M. Toloo, S. Nalchigar. A new integrated DEA model for finding most BCC–efficient DMU. Appl. Math. Model. 33 (2009) 597–60]. In this study, some problems in applying Toloo & Nalchigar's model are addressed. A new integrated MIP-DEA model is then introduced to determine the most BCC-efficient decision making unit (DMU). Moreover, it is mathematically proved that the new model identifies a single BCC-efficient DMU using a common set of optimal weights. To show the applicability of the proposed model, a numerical example containing a real data set of nineteen facility layout designs (FLDs) is used.
Cost efficiency (CE) assesses the ability to produce the current output at minimal cost. Several models have been introduced to measure cost efficiency with certain and uncertain input prices. Normally, using data envelopment analysis (DEA) models, more than one cost-efficient decision making unit (DMU) is recognized. The main contribution of this paper is the extension of the model proposed by Amin and Toloo (2007) to several models for finding the most cost efficient DMU in various situations of input prices. These models find the most cost efficient DMU by solving only one mixed integer linear programming (MILP) problem in each case.
Measurement of performance is an important activity for identifying weaknesses in managerial efficiency and devising goals for improvement. Data envelopment analysis (DEA) is a quantitative mathematical approach for measuring the performance of a set of similar units. Toloo (2013) extended a DEA approach for finding the most efficient unit for data sets without explicit inputs. The aim of this paper is to develop DEA models without explicit outputs, henceforth called DEA-WEO, to find the most efficient unit when outputs are not directly considered. The suggested models utilize the data directly, without the need to add a virtual output whose value is equal to one for all units. A real data set involving 139 different alternatives for long-term asset financing provided by Czech banks and leasing companies is used to illustrate the potential application of the proposed approach.
•The role of epsilon in classifier data envelopment analysis models has been investigated.
•An epsilon-finder model in the envelopment form has been formulated.
•A pair of epsilon-based multiplier and envelopment classifier models have been developed.
•An approach for finding a suitable epsilon value for our developed classifier models has been extended.
•A case study of the Iranian Space Research Center has been employed to illustrate the applicability of our new epsilon-based approach.
Some input-output classifier data envelopment analysis (DEA) models in multiplier and envelopment forms were developed to designate the status of flexible measures, which play either input or output roles. These models ignore the role of the non-Archimedean epsilon in the input-output classification process. We show that these epsilon-free models may ignore some flexible measures in the performance evaluation process, and hence the status of such flexible measure(s) can be randomly and inappropriately identified. To fill this gap, we develop a pair of epsilon-based multiplier and envelopment classifier models. We also develop an approach to find a suitable epsilon value for our developed classifier models. A case study of the supplier selection problem at the Iranian Space Research Center (ISRC) is provided to illustrate the potential application of our new epsilon-based approach.
You et al. (2013) indicated two errors in Amin and Toloo (2007). The first error was the infeasibility of Amin and Toloo's (2007) model and the second drawback was the lack of a suitable value for the non-Archimedean epsilon in the proposed approach of Amin and Toloo (2007). This paper deals with the raised issues and proves that the model of Amin and Toloo (2007) is always feasible. In addition, we also formulate a new model for finding a suitable value for the epsilon.
This paper presents an improved MCDM data envelopment analysis (DEA) model to evaluate the most efficient DMUs in Advanced Manufacturing Technology (AMT). The model is capable of ranking the next most efficient DMUs after removing the previous best one.
Data envelopment analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of decision-making units (DMUs). The need for large computational resources, in terms of memory and CPU time, is inevitable in DEA for large-scale data sets, especially those with negative measures. In recent years, a wide range of studies has been conducted on combined artificial neural network and DEA methods. In this study, a supervised feed-forward neural network is proposed to evaluate the efficiency and productivity of large-scale data sets with negative values, in contrast to the corresponding DEA method. Results indicate that the proposed network has some computational advantages over the corresponding DEA models; therefore, it can be considered a useful tool for measuring the efficiency of DMUs with (large-scale) negative data.
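The DEA-ANN idea can be illustrated with a toy sketch: train a small feed-forward network on known efficiency targets so that new DMUs can be scored without solving an optimization model per unit. Everything below (the single-input single-output ratio-efficiency target, the network size, and the training settings) is an illustrative assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy DMUs: one input, one output; the target is a simple ratio efficiency
# normalised by the best observed ratio (an illustrative stand-in for DEA scores).
x = rng.uniform(1.0, 10.0, (500, 1))
y = rng.uniform(1.0, 10.0, (500, 1))
eff = (y / x) / (y / x).max()

feats = np.hstack([x, y]) / 10.0          # scale features into (0, 1]
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def mse():
    pred = np.tanh(feats @ W1 + b1) @ W2 + b2
    return float(((pred - eff) ** 2).mean())

start = mse()
lr = 0.05
for _ in range(5000):                      # plain full-batch gradient descent
    h = np.tanh(feats @ W1 + b1)
    g = 2.0 * ((h @ W2 + b2) - eff) / len(eff)   # d(MSE)/d(pred)
    gh = (g @ W2.T) * (1.0 - h ** 2)             # backprop through tanh
    W2 -= lr * h.T @ g;     b2 -= lr * g.sum(0)
    W1 -= lr * feats.T @ gh; b1 -= lr * gh.sum(0)
end = mse()
```

Once trained, scoring a new DMU is a single forward pass, which is the computational advantage the abstract refers to.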
Several mixed binary linear programming models have been proposed in the literature to rank decision-making units (DMUs) in data envelopment analysis (DEA). However, some of these models fail to consider the decision-makers' preferences. We propose a new mixed binary linear DEA model for finding the most efficient DMU by considering the decision-makers' preferences. The model proposed in this study is motivated by the approach introduced by Toloo and Salahi (2018). We extend their model by introducing additional assurance region type I (ARI) weight restrictions (WRs) based on the decision-makers' preferences. We show that direct addition of assurance region type II (ARII) and absolute WRs in traditional DEA models leads to infeasibility and free production problems, and we prove ARI eliminates these problems. We also show our epsilon-free model is less complicated and requires less effort to determine the best efficient unit compared with the existing epsilon-based models in the literature. We provide two real-life applications to show the applicability and exhibit the efficacy of our model.
Data envelopment analysis (DEA) is a very popular method for measuring and benchmarking the relative efficiency of decision making units (DMUs) with multiple inputs and multiple outputs. DEA and discriminant analysis (DA) are similar in classifying units as exhibiting either good or poor performance. On the other hand, selecting the most efficient unit among several efficient ones is one of the main issues in multi-criteria decision making (MCDM). Several authors have suggested approaches and claimed that their methodologies have the discriminating power to determine the most efficient DMU without explicit inputs. This paper focuses on a weakness of a recent methodology of this kind and, to avoid this drawback, presents a mixed integer programming (MIP) approach. To illustrate this drawback and compare the discriminating power of the recent methodology with that of our new approach, a real data set containing 40 professional tennis players is utilized.
Data envelopment analysis (DEA) has been the benchmark model for measuring the efficiency of banks over the years. However, inherent noise and uncertainties in the data are rarely considered, so the resulting efficiency scores are not robust. The disadvantage is that a small perturbation in the uncertain parameters can render the efficient solutions highly infeasible. This paper introduces robust DEA into the measurement of bank efficiency. The proposed robust approach is based on the robust counterpart optimization of Ben-Tal & Nemirovski (2000) and is implemented in the traditional DEA models germane to the performance measurement of banks. A preliminary result from data on banks in the Czech Republic indicates that efficiency scores measured with the robust DEA model provide a truer and more stable performance measure than the nominal DEA model.
Data envelopment analysis (DEA) is a non-parametric data-driven approach for evaluating the efficiency of a set of homogeneous decision-making units (DMUs) with multiple inputs and multiple outputs. The number of performance factors (inputs and outputs) plays a crucial role when applying DEA to real-world applications. In other words, if the number of performance factors is significantly greater than the number of DMUs, a large proportion of DMUs is likely to be rated efficient, which may become problematic in practice due to the lack of ample discrimination among DMUs. The current research develops an array of selecting DEA models to narrow down the performance factors based upon a rule of thumb. To this end, we show that the input- and output-oriented selecting DEA models may select different factors, and we then present integrated models to identify a set of common factors for both orientations. In addition to efficiency evaluation at the individual level, we study structural efficiency with a single production unit at the industry level. Finally, a case study on the EU countries is presented to give insight into business innovation, social economy, and growth with regard to the efficiency of the EU countries and the entire EU.
The rapid growth of advanced technologies such as cloud computing in the Industry 4.0 era has provided numerous advantages. Cloud computing is one of the most significant technologies of Industry 4.0 for sustainable development. Numerous providers have developed various new services, which have become a crucial ingredient of information systems in many organizations. One of the challenges for cloud computing customers is evaluating potential providers. To date, considerable research has been undertaken on evaluating the efficiency of cloud service providers (CSPs). However, no study addresses the efficiency of providers in the context of an entire supply chain, where multiple services interact to achieve a business objective or goal. Data envelopment analysis (DEA) is a powerful method for efficiency measurement problems; however, the current models ignore undesirable outputs and integer-valued and stochastic data, which can lead to inaccurate results. As such, the primary objective of this paper is to design a decision support system that accurately evaluates the efficiency of multiple CSPs in a supply chain. The current study incorporates undesirable outputs and integer-valued and stochastic data in a network DEA model for the efficiency measurement of service providers. The results from a case study illustrate the applicability of our new system. The results also show how taking undesirable outputs and integer-valued and stochastic data into account changes the efficiency of service providers. The system is also able to provide the optimal composition of CSPs to suit a customer's priorities and requirements.
Cook and Zhu (2007) introduced an innovative method to deal with flexible measures. Toloo (2009) found a computational problem in their approach and tackled this issue. Amirteimoori and Emrouznejad (2012) claimed that both the Cook and Zhu (2007) and Toloo (2009) models overestimate efficiency. In this response, we prove that their claim is incorrect and that there is no overestimation in these approaches.
Information system (IS) project selection is a critical decision-making task that can significantly impact the operational excellence and competitive advantage of modern enterprises and can involve them in a long-term commitment. This decision making is complicated by the availability of numerous IS projects, their increasing complexity, the importance of timely decisions in a dynamic environment, and the existence of multiple qualitative and quantitative criteria. This paper proposes a data envelopment analysis approach to find the most efficient IS projects while considering the subjective opinions and intuitive senses of decision makers. The proposed approach is validated by a real-world case study involving 41 IS projects at a large financial institution as well as 18 artificial projects defined by the decision makers.
In the traditional data envelopment analysis (DEA) approach, for a set of n decision making units (DMUs) a standard DEA model is solved n times, once for each DMU. As the number of DMUs increases, the running time to solve the standard model rises sharply. In this study, a new framework is proposed to significantly decrease the required DEA calculation time, in comparison with the existing methodologies, when a large set of DMUs (e.g., 20,000 DMUs or more) is present. The framework includes five steps: (i) selecting a subsample of DMUs using a proposed algorithm, (ii) finding the best-practice DMUs in the selected subsample, (iii) finding the DMUs exterior to the hull of the selected subsample, (iv) identifying the set of all efficient DMUs, and (v) measuring the performance scores of the DMUs as those arising from the traditional DEA approach. The variable returns to scale technology is assumed, and several simulation experiments are designed to estimate the running time of the proposed method for big data. The obtained results indicate that the running time is decreased by up to 99.9% in comparison with the existing techniques. In addition, we illustrate the computation time of the proposed method as a function of the number of DMUs (cardinality), the number of inputs and outputs (dimension), and the proportion of efficient DMUs (density). The methods are also compared on a real data set consisting of 30,099 electric power plants in the United States from 1996 to 2016.
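The screening idea behind steps (i)-(iv) can be illustrated with a much-simplified sketch that uses plain Pareto dominance in place of the paper's variable returns to scale hull tests; the data, subsample size, and dominance rule below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(1.0, 10.0, (n, 2))   # two inputs (less is better)
Y = rng.uniform(1.0, 10.0, (n, 2))   # two outputs (more is better)

def dominated_by(Xc, Yc, x, y):
    """True if some DMU in (Xc, Yc) weakly dominates (x, y) with a strict gap."""
    weak = (Xc <= x).all(axis=1) & (Yc >= y).all(axis=1)
    strict = (Xc < x).any(axis=1) | (Yc > y).any(axis=1)
    return bool((weak & strict).any())

# Step (i): draw a small subsample.
sub = rng.choice(n, size=int(np.sqrt(n)), replace=False)
# Steps (ii)-(iii): cheap screening pass against the subsample only; any DMU
# dominated by a subsample member cannot be efficient overall.
candidates = [i for i in range(n) if not dominated_by(X[sub], Y[sub], X[i], Y[i])]
# Step (iv): exact pass restricted to the survivors (dominance is transitive,
# so no efficient DMU is lost during screening).
Xc, Yc = X[candidates], Y[candidates]
efficient = [i for k, i in enumerate(candidates)
             if not dominated_by(np.delete(Xc, k, 0), np.delete(Yc, k, 0),
                                 X[i], Y[i])]
```

The expensive pairwise pass now runs only on the candidates that survive the cheap subsample screen, which is the source of the speed-up the framework exploits.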
•An index is constructed using a common weight approach representing the hierarchical structure of indicators.
•The suggested model successfully addresses the reality, complexity, discrimination power, and fairness issues.
•A case study of a road user behavior index for 13 European countries is provided.
In recent years, composite indicators have become increasingly recognized as a useful tool for performance evaluation, benchmarking, and decision-making by summarizing complex and multidimensional issues. In this study, we focus on the application of data envelopment analysis (DEA) to index construction in the context of road safety and highlight the shortcomings of using the classical DEA models. The DEA method assigns a weight to each indicator by selecting the best set of weights for the unit under evaluation. The flexibility in selecting the weights in the classical DEA approach may lead to two interrelated problems: compensability and unfairness. These shortcomings are traditionally overcome by imposing weight restrictions and applying a common weights approach, respectively. However, the problem of evaluating a layered hierarchy of indicators with a common set of weights (CSW) has not been addressed in the literature. To fill this gap, we propose a new approach to index construction that determines an optimal CSW to assess all units simultaneously while reflecting the hierarchical structure of the indicators in the model. The applicability of the suggested common-weight approach is illustrated by a case study on constructing a road user behavior index for a set of European countries. From a theoretical point of view, our approach provides a fair and identical basis for the evaluation and comparison of countries in terms of driver behavior and, from a practical point of view, it significantly reduces the computational burden of solving the formulated model.
The obtained results demonstrate the sharper discrimination power of our model compared to the other methods in the literature.
This paper presents a new algorithm for computing the non-Archimedean ε in DEA models. It is shown that this algorithm is polynomial-time of order O(n), where n is the number of decision making units (DMUs). It is also proved that, using only the inputs and outputs of the DMUs, the non-Archimedean ε can be found such that the optimal values of the CCR models corresponding to all DMUs are bounded, and an assurance value is obtained.
All models in data envelopment analysis (DEA) are built on the foundation of performance factors. Performance factors in DEA are conventionally divided into input and output measures. In some situations, we are confronted with dual-role factors, which can simultaneously play input and output roles. Traditionally, all performance factors are considered to be precise values, while in some real-world problems they are characterized by imprecise values. In this paper, we evaluate the performance of 18 third-party reverse logistics (3PL) providers in the presence of dual-role factors and under uncertainty. We illustrate the superiority of the employed DEA approach over an approach suggested in the literature.
•A new method for target setting in mergers is proposed.
•The method combines goal programming and inverse data envelopment analysis.
•The method allows decision makers to save desired resources.
•An application in the banking sector is presented.
This paper suggests a novel method for target setting in mergers using goal programming (GP) and inverse data envelopment analysis (InvDEA). A conventional DEA model obtains the relative efficiency of decision making units (DMUs) given multiple inputs and multiple outputs for each DMU. In contrast, InvDEA aims to identify the quantities of inputs and outputs when an efficiency score is given as a target. This study provides an effective method that allows decision makers to incorporate their preferences into the target setting of a merger, for saving specific input(s) or producing certain output(s) as much as possible. The proposed method is validated through an illustrative application in the banking industry.
Park proposed a pair of mathematical data envelopment analysis (DEA) models to estimate the lower and upper bounds of efficiency scores in the presence of imprecise data. This article illustrates that his approach suffers from several drawbacks: (i) it may convert weak ordinal data into an incorrect set of precise data; (ii) it utilizes various production frontiers to obtain an interval efficiency score for each decision making unit (DMU); (iii) in the absence of exact output data (pure ordinal output data), the approach leads to a meaningless model; (iv) the built model is infeasible with pure ordinal input data; (v) it may include free or unlimited production output, which results in unreliable and suspicious results. Moreover, the models utilized by Park involve a positive lower bound (non-Archimedean epsilon) for the weights to deter them from being zero. However, the author ignored the requirement of determining a suitable value for the epsilon. This study constructs two new DEA models with a fixed and unified production frontier (the same constraint set) to compute the upper and lower bounds of efficiency. It is demonstrated that the suggested models successfully capture the aforementioned shortcomings. Although these models are also epsilon-based, a new model is developed to obtain a suitable epsilon value for the proposed models. It is proved that the suggested approach effectively eliminates all the weaknesses. Additionally, a case study of the Iranian Space Agency (ISA) is taken as an example to illustrate the superiority of the new approach over the previous ones.
To find hyperparameters for a machine learning model, algorithms such as grid search or random search are used over the space of possible values of the model's hyperparameters. These search algorithms select the solution that minimizes a specific cost function. In language models, perplexity is one of the most popular cost functions. In this study, we propose a fractional nonlinear programming model that finds the optimal perplexity value. The special structure of the model allows us to approximate it by a linear programming model that can be solved using the well-known simplex algorithm. To the best of our knowledge, this is the first attempt to use optimization techniques to find perplexity values in the language modeling literature. We apply our model to find the hyperparameters of a language model, compare it to the grid search algorithm, and illustrate that it results in lower perplexity values. We perform this experiment on a real-world dataset from SwiftKey to validate our proposed approach.
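The grid search baseline that the proposed model is compared against can be sketched as follows; the interpolated add-one bigram model, the toy corpus, and the grid resolution are all illustrative assumptions, not the paper's setup:

```python
import math
from collections import Counter

# Tiny toy corpus standing in for real training and development text.
train = "the cat sat on the mat and the cat ate the fish".split()
dev = "the cat sat on the mat".split()

uni = Counter(train)
big = Counter(zip(train, train[1:]))
V, N = len(uni), len(train)

def perplexity(lam):
    """Dev-set perplexity of a bigram-unigram interpolation with add-one smoothing."""
    logp = 0.0
    for w1, w2 in zip(dev, dev[1:]):
        p_uni = (uni[w2] + 1) / (N + V)              # smoothed unigram probability
        p_big = (big[(w1, w2)] + 1) / (uni[w1] + V)  # smoothed bigram probability
        logp += math.log(lam * p_big + (1 - lam) * p_uni)
    return math.exp(-logp / (len(dev) - 1))

# Grid search: evaluate the cost function at each grid point, keep the minimizer.
best_pp, best_lam = min((perplexity(l / 100), l / 100) for l in range(101))
```

Grid search evaluates the cost at every grid point, whereas the paper's contribution is to replace this exhaustive sweep with a single optimization model.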
The concept of sustainability consists of three main dimensions: environmental, techno-economic, and social. Measuring the sustainability status of a system or technology is a significant challenge, especially when a large number of attributes must be considered in each dimension of sustainability. In this study, we first propose a hybrid approach, involving data envelopment analysis (DEA) and multi-attribute decision making (MADM) methodologies, for computing an index for each dimension of sustainability, and then define the overall sustainability index as the mean of the three measured indexes. Towards this end, we define the new concepts of efficiency and cross-efficiency of order (p, q), where p and q are the number of inputs and outputs, respectively. For a given (p, q), we address the problem of finding the efficiency of order (p, q) by developing a novel DEA-based selecting method. Finally, we define the sustainability index as a weighted sum of all possible cross-efficiencies of order (p, q). From a computational viewpoint, the proposed selecting model significantly decreases the computational burden in comparison with successively solving traditional DEA models. A case study of electricity-generation technologies in the United Kingdom is taken as a real-world example to illustrate the potential application of our method.
Cook and Zhu [Cook, W.D., Zhu, J., 2007. Classifying inputs and outputs in data envelopment analysis. European Journal of Operational Research 180, 692–699] introduced a new method to determine whether a measure is an input or an output. In practice, however, their method may produce incorrect efficiency scores due to a computational problem arising from the introduction of a large positive number into the model. This note introduces a revised model that does not need such a large positive number.
The dataset contains financial indicators from the financial statements of 250 banks operating in Europe, collated for the 2015 accounting year. First, the dataset is split into input and output measures. Then the preferred number of inputs and outputs relative to the total number of units is selected according to the rule of thumb in data envelopment analysis (DEA). The dataset is related to the research article entitled "Robust optimization with nonnegative decision variables: A DEA approach" (Toloo and Mensah, 2018) [1]. The dataset can be used to evaluate the performance of banks and bank efficiency under uncertainty.
In many applications of the widely recognized DEA technique, finding the most efficient DMU is desirable for the decision maker. Using basic DEA models, the decision maker is not able to identify the most efficient DMU. Amin and Toloo [Gholam R. Amin, M. Toloo, Finding the most efficient DMUs in DEA: an improved integrated model. Comput. Ind. Eng. 52 (2007) 71–77] introduced an integrated DEA model for finding the most CCR-efficient DMU. In this paper, we propose a new integrated model for determining the most BCC-efficient DMU by solving only one linear programming (LP) problem. This model is useful for situations in which returns to scale are variable, and so has a wider range of application than models that find the most CCR-efficient DMU. The applicability of the proposed integrated model is illustrated using a real data set from a case study consisting of 19 facility layout alternatives.
Multiobjective combinatorial optimization problems appear in a wide range of applications including operations research/management, engineering, the biological sciences, and computer science. This work presents a brief analysis of the main concepts and studies of solution approaches applied to multiobjective combinatorial optimization problems. A detailed scientometric analysis, an influential tool for bibliometric studies, is performed on data about multiobjective combinatorial optimization problems and their solution approaches drawn from the Scopus databases. To this end, we address social networks, keywords, and subject areas by employing two well-known tools: VOSviewer and Mendeley. Finally, the conclusion and discussion are provided, along with a couple of directions for future research.
Owing to the increasing importance of sustainable supply chain management (SSCM), it has received much attention from both industry and academia over the past decade. SSCM performance evaluation plays a crucial role in organizational success. One of the practical techniques that can be used for SSCM performance assessment is network data envelopment analysis (NDEA). This paper develops a new NDEA model for the performance evaluation of SSCM in the presence of stochastic data. The proposed model can evaluate the efficiency of SSCM under uncertain conditions. A case study in the soft drinks industry is presented to demonstrate the efficacy of the proposed method.
This article presents a dataset of healthcare system indicators for 120 countries during 2010-2017, which is related to the research article "Cross-efficiency evaluation in the presence of flexible measures with an application to healthcare systems" [1]. The data are collected from the World Bank for the 120 countries. Depending on their role in the performance of the healthcare systems, the indicators are categorized into inputs (I), outputs (O), and flexible measures (FM), where a flexible measure can play either the role of an input or an output in the healthcare system. The dataset can be used to perform efficiency as well as cross-efficiency analysis of healthcare systems using methods such as data envelopment analysis (DEA) in the presence of flexible measures.
The strategic vendor selection problem (VSP) has been investigated in the purchasing literature over the last two decades. Indeed, senior purchasing managers regularly deal with such crucial decisions. Manufacturing managers in the global market are faced with challenging and complex tasks very similar to the VSP. Increasing outsourcing and the opportunities provided by the automotive industry in worldwide markets make these decisions even more complex. Various methodologies, from simple weighted scoring methods to complex mathematical programming models, have been introduced to tackle the VSP. Data envelopment analysis (DEA) is a non-parametric method in operations research and economics for evaluating the productive efficiency of decision-making units (DMUs). This study utilizes the approach proposed in Toloo and Ertay (2014) to develop a method for finding the most cost efficient DMU when the prices are fixed and known. A case study of an automotive company located in Turkey is adapted from the literature to illustrate the potential application of the suggested approach.
Computational science (or scientific computing) is concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practice, it typically applies computer simulation and other forms of computation from numerical analysis and theoretical computer science to problems in various scientific disciplines. Pascal is an influential imperative and procedural programming language, designed in 1968-1969 and published in 1970 by Niklaus Wirth as a small and efficient language intended to encourage good programming practices using structured programming and data structuring. An object-oriented derivative, known as Object Pascal, was developed in 1985. Today, Pascal is largely abandoned by industry and scientific teams; however, it has influenced both the syntax and data structures of the Java programming language, the most expressive language for scientific computing to date.
Linear fractional programming has been an important planning tool for the past four decades. The main contribution of this study is to show, under some assumptions, that a linear programming problem has two different dual problems (one a linear program and the other a linear fractional functional program) that are equivalent. In other words, we formulate a linear programming problem that is equivalent to the general linear fractional functional programming problem. These equivalent models have some interesting properties which allow the related duality theorems to be proved easily. A traditional data envelopment analysis (DEA) model is taken as an instance to illustrate the applicability of the proposed approach.
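For context, the classical Charnes-Cooper transformation (1962) sketches how a linear fractional program can be rewritten as an equivalent linear program; the generic notation below is illustrative and is not taken from the paper itself:

\[
\max_{x}\ \frac{c^{\top}x+\alpha}{d^{\top}x+\beta}
\quad\text{s.t.}\quad Ax\le b,\ x\ge 0,
\]

assuming \(d^{\top}x+\beta>0\) over the feasible region. Introducing \(t=1/(d^{\top}x+\beta)\) and \(y=tx\) yields the equivalent linear program

\[
\max_{y,\,t}\ c^{\top}y+\alpha t
\quad\text{s.t.}\quad Ay-bt\le 0,\quad d^{\top}y+\beta t=1,\quad y\ge 0,\ t\ge 0,
\]

from whose optimum \((y^{*},t^{*})\) the fractional optimum is recovered as \(x^{*}=y^{*}/t^{*}\).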
Data envelopment analysis (DEA) is a data-based mathematical approach which handles large numbers of variables, constraints, and data; hence, data play a critical role in DEA. Given a set of decision making units (DMUs) and identified inputs and outputs (performance measures), DEA evaluates each DMU in comparison with all DMUs. According to some statistical and empirical rules, a balance should exist between the number of DMUs and the number of performance measures. However, in some situations the number of performance measures is relatively large in comparison with the number of DMUs. These cases lead us to choose some inputs and outputs in a way that produces acceptable results; we refer to these selected inputs and outputs as selective measures. This paper presents an approach for handling a large number of inputs and outputs. Individual-DMU and aggregate models are recommended and developed separately to implement the idea of selective measures. The practical aspect of the new approach is illustrated by two applications with real data sets.
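As a minimal sketch of the underlying computation (the basic CCR model, not the selective-measures models of this paper), the input-oriented CCR multiplier model for the simplest single-input, single-output case can be solved with an off-the-shelf LP solver; the data below are purely illustrative:

```python
from scipy.optimize import linprog

def ccr_efficiency(x, y, j0):
    """Input-oriented CCR multiplier model for single-input/single-output data:
    maximise u*y[j0] subject to v*x[j0] = 1 and u*y[j] - v*x[j] <= 0 for all j."""
    n = len(x)
    c = [-y[j0], 0.0]                         # linprog minimises, so negate the objective
    A_ub = [[y[j], -x[j]] for j in range(n)]  # u*y_j - v*x_j <= 0
    b_ub = [0.0] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[0.0, x[j0]]], b_eq=[1.0],  # normalisation: v*x_j0 = 1
                  bounds=[(0, None), (0, None)])
    return -res.fun                           # efficiency score of DMU j0

x = [2.0, 4.0, 3.0]   # single input per DMU (illustrative data)
y = [2.0, 2.0, 3.0]   # single output per DMU
scores = [round(ccr_efficiency(x, y, j), 4) for j in range(3)]
print(scores)  # DMUs 1 and 3 are efficient (score 1.0); DMU 2 scores 0.5
```

In practice a small lower bound (a non-Archimedean epsilon) is often imposed on the weights; it is omitted here for brevity.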
The assignment problem (AP) is one of the best-known and most studied combinatorial optimization problems. The single-objective AP is an integer programming problem that can be solved with efficient algorithms such as the Hungarian method or the successive shortest paths method. On the other hand, finding and classifying all efficient assignments for a Multicriteria AP (MCAP) remains a controversial issue in Multicriteria Decision Making (MCDM). In this chapter, we tackle the issue using data envelopment analysis (DEA) models. In particular, we focus on identifying the efficiency status of assignments using a two-phase algorithm. In Phase I, a mixed-integer linear programming (MILP) model based on the Free Disposal Hull (FDH) model is used to determine a minimal complete set (MCS) of efficient assignments. In Phase II, the DEA-BCC model is used to classify efficient assignments as supported or non-supported. A numerical example is provided to illustrate the presented approach.
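To make the single-objective baseline concrete, SciPy's Hungarian-style solver finds an optimal assignment for a small, hypothetical cost matrix (this is the classical single-criterion AP, not the MCAP procedure of the chapter):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 3x3 cost matrix: cost[i, j] = cost of assigning worker i to task j.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
total = cost[rows, cols].sum()
print(list(cols), int(total))  # task chosen for each worker, and the total cost
```

For this matrix the optimal total cost is 5 (worker 1 to task 2, worker 2 to task 1, worker 3 to task 3).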
To deal with the problem of finding the most efficient unit, Lam (2015) recently built a new integrated mixed integer linear programming model closely related to the super-efficiency model. The suggested model involves a non-Archimedean epsilon as the lower bound for the input and output weights. Selecting a suitable value for epsilon is a challenging issue in DEA (Data Envelopment Analysis). Lam (2015) suggested a value for epsilon which guarantees the feasibility of his model; however, this paper illustrates that the model may fail to find the most efficient unit due to an unsuitably selected value for epsilon. To cope with this issue, a new model is formulated which provides the maximum epsilon value for the model of Lam (2015). The proposed model guarantees that when epsilon is maximal, Lam's model identifies exactly one DMU (Decision Making Unit) as the most efficient unit, with the maximum discrimination distance from the other DMUs.
Finding and classifying all efficient assignments for a Multi-Criteria Assignment Problem (MCAP) is one of the controversial issues in Multi-Criteria Decision Making (MCDM). The main aim of this study is to utilize Data Envelopment Analysis (DEA) methodology to tackle this issue. Toward this end, we first state and prove some theorems to clarify the relationships between DEA and MCAP, and then design a new two-phase approach to find and classify a set of efficient assignments. In Phase I, we formulate a new Mixed Integer Linear Programming (MILP) model, based on the Additive Free Disposal Hull (FDH) model, to obtain an efficient assignment, and then extend it to determine a Minimal Complete Set (MCS) of efficient assignments. In Phase II, we use the BCC model to classify all efficient solutions obtained from Phase I as supported or non-supported. A 4 x 4 assignment problem, with two cost-type and one profit-type objective function, is solved using the presented approach.
Data envelopment analysis (DEA) assesses the relative efficiency of each decision making unit (DMU) under its most favourable conditions and divides a homogeneous group of DMUs into two categories, efficient and inefficient, but traditional DEA models cannot rank the efficient DMUs. Although several models have been introduced for ranking efficient DMUs, Liu and Peng (2008) proposed a common weights analysis (CWA) approach for this purpose, in which DMUs are ranked according to the efficiency score weighted by the common set of weights and shadow prices. This study shows that in some cases the shadow prices of efficient DMUs are equal, so this method cannot rank them. We then propose a new method for ranking units with equal shadow prices.
Our business environment is changing rapidly, and the globalization of organizations has made them more complex. Organizations should therefore codify their strategic plans and executive methods more accurately. However, some executive methods do not properly fulfil the organization's strategic priorities. This paper proposes a comprehensive framework for evaluating and prioritizing strategies and ranking executive methods. First, the strategic plans are developed with SWOT (Strength, Weakness, Opportunity, Threat) analysis; the plans are then weighted and narrowed down using the FQSPM-Gap (Fuzzy Quantitative Strategic Planning Matrix) model. Finally, the executive methods of the company are prioritized with a QFD (Quality Function Deployment) matrix to accomplish its strategic plans. The model is implemented in a textile and clothing company.
Data envelopment analysis (DEA) is a mathematical approach that deals with the performance evaluation problem. Traditional DEA models partition the set of units into two distinct sets, efficient and inefficient, but fail to provide further information about the efficient units; yet in some applications, known as selection-based problems, the concern is to select only a single efficient unit. To address this problem, several mixed integer linear/nonlinear programming DEA models have been developed in the literature, all aiming to formulate a model with more discriminating power. This paper presents a new nonlinear mixed integer programming model with significantly higher discriminating power than the existing ones. The suggested model lets the efficiency score of only a single unit be strictly greater than one. It is observed that the discrimination power of the model is high enough to fully rank all units. More importantly, a linearization technique is used to formulate an equivalent mixed integer linear programming model, which significantly decreases the computational burden. Finally, to validate the proposed model and compare it with some recent approaches, two numerical examples from the literature are utilized. Our findings point out the superiority of our model over all previously suggested models from both theoretical and practical standpoints.
Data envelopment analysis (DEA) deals with evaluating the efficiency scores of peer decision making units (DMUs) and divides them into two mutually exclusive sets: efficient and inefficient. Various ranking methods exist to obtain more information about the efficient units. Nevertheless, finding the most efficient unit is a scientific challenge and has therefore been the subject of numerous studies. Here, the main contribution is an integrated model that determines the most efficient unit under a common set of weights. The current research formulates a new minimax mixed integer linear programming (MILP) model for finding the most efficient DMU. Three case studies from different contexts are taken as numerical examples to compare the proposed model with other methods; they also illustrate the various potential applications of the suggested model.
In some situations, determining a single decision making unit (DMU) as the most efficient unit is of particular interest to decision makers. Several integrated mixed integer linear programming (MILP) and mixed integer nonlinear programming (MINLP) data envelopment analysis (DEA) models have been proposed to find a single efficient unit via an optimal common set of weights. In conventional DEA models, the non-Archimedean infinitesimal epsilon, which prevents weights from being zero, is unnecessary if one utilizes the well-known two-phase method. Nevertheless, this approach is inapplicable to integrated DEA models. Unfortunately, in some proposed integrated DEA models the epsilon is neither considered nor determined; more importantly, some approaches have been built on top of this omission and inherit the drawback. In this paper, some drawbacks of these models are first discussed. Indeed, it is shown that if the non-Archimedean epsilon is ignored, these models can neither find the most efficient unit nor rank the extreme efficient units. Next, we formulate some new models to overcome these drawbacks and hence obtain assurance regions. Finally, a real data set of 53 professional tennis players is applied to illustrate the applicability of the suggested models.
Countries need robust long-term plans to keep up with the global pace of transitioning from pollutant fossil fuels towards clean, renewable energies. Renewable energy generation expansion plans can be centralized, decentralized, or a combination of the two. This paper presents a novel approach to obtaining an optimal multi-period plan for generating each type of renewable energy (solar, wind, hydro, geothermal, and biomass) via multi-objective mathematical modeling. The proposed model is integrated with the Autoregressive Integrated Moving Average (ARIMA) econometric method to forecast the country's demand during the planning horizon. The optimal energy mix based on several socio-economic aspects of renewable sources is obtained using the Passive and Active Compensability Multicriteria ANalysis (PACMAN) multi-attribute decision-making method. The model is solved by the Non-dominated Sorting Genetic Algorithm II (NSGA-II) metaheuristic. Each solution in the Pareto front contains a plan for each electricity generation region under a certain combination of centralization and decentralization strategies.
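For illustration only, the notion of a Pareto front used above can be sketched with a plain non-dominated filter (the construction of NSGA-II's first front); the two-objective plan data here are hypothetical:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised):
    a is no worse in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical plans scored on two objectives to minimise, e.g. (cost, emissions).
plans = [(3.0, 5.0), (4.0, 4.0), (5.0, 3.0), (5.0, 5.0)]
print(pareto_front(plans))  # → [(3.0, 5.0), (4.0, 4.0), (5.0, 3.0)]
```

The plan (5.0, 5.0) is dominated by (4.0, 4.0) and is therefore excluded; NSGA-II applies this sorting repeatedly, together with crowding-distance selection, inside a genetic search.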
The slacks-based measure (SBM) model divides the set of observations into two mutually exclusive and collectively exhaustive sets: efficient and inefficient. However, it fails to provide further details about efficient DMUs, which reveals the lack of discrimination power in the SBM model. To address this issue, the super SBM (SupSBM) model has been suggested, which can rank the SBM-efficient DMUs but provides no useful information about SBM-inefficient DMUs. As a result, in order to fully rank both efficient and inefficient DMUs, one needs to run both the SBM and SupSBM models, which significantly increases the number of required computations. This paper tackles this problem and modifies the SBM model so that it measures the SBM-efficiency score for inefficient DMUs and the SupSBM-efficiency score for strongly efficient DMUs simultaneously. Finally, a simulation study is presented to illustrate the superiority of our proposed model over the existing models across various problem sizes.
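For reference, the standard slacks-based measure of Tone (2001), to which the abstract refers, evaluates DMU \(o\) with \(m\) inputs and \(s\) outputs as (standard notation, not reproduced from this paper):

\[
\rho^{*}=\min_{\lambda,\,s^{-},\,s^{+}}\ \frac{1-\frac{1}{m}\sum_{i=1}^{m}s_{i}^{-}/x_{io}}{1+\frac{1}{s}\sum_{r=1}^{s}s_{r}^{+}/y_{ro}}
\quad\text{s.t.}\quad x_{o}=X\lambda+s^{-},\quad y_{o}=Y\lambda-s^{+},\quad \lambda,\ s^{-},\ s^{+}\ge 0,
\]

where a DMU is SBM-efficient if and only if \(\rho^{*}=1\) (all slacks zero); the SupSBM model then assigns SBM-efficient DMUs scores of at least one, which is what allows them to be ranked.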
In recent years, most countries around the world have struggled with the consequences of budget cuts in health expenditure, obliging them to utilize their resources efficiently. In this context, performance evaluation facilitates the decision-making process in improving the efficiency of the healthcare system. However, the performance evaluation of many sectors, including healthcare systems, is on the one hand a challenging issue and on the other a useful tool for decision-making aimed at optimizing the use of resources. This study proposes a new methodology comprising two well-known analytical approaches: (i) data envelopment analysis (DEA) to measure the efficiencies and (ii) data science to complement the DEA model in providing insightful recommendations for strategic decision making on productivity enhancement. The suggested method is a first attempt to combine two DEA extensions: flexible measures and cross-efficiency. We develop a pair of benevolent and aggressive scenarios aimed at evaluating cross-efficiency in the presence of flexible measures. Next, we perform a data-mining cluster analysis to create groups of homogeneous countries. Organizing the data into similar groups facilitates identifying a set of benchmarks that perform similarly in terms of operating conditions. By comparing the benchmark set with poorly performing countries, we obtain attainable goals for performance enhancement which will assist policymakers in acting strategically upon them. A case study of healthcare systems in 120 countries is taken as an example to illustrate the potential application of our new method.
Data envelopment analysis (DEA) is a non-parametric, data-oriented method for evaluating the relative efficiency of a number of decision making units (DMUs) based on pre-selected inputs and outputs. In some real DEA applications, a large number of inputs and outputs relative to the number of DMUs is a pitfall that can strongly influence the efficiency scores. Recently, an approach was introduced that iteratively aggregates the collected inputs and outputs in order to reduce their number. The purpose of this paper is to show that this approach has three drawbacks: instability due to the existence of an infinitesimal epsilon; an iterative procedure that can be reduced to a single iteration; and the production of non-radial inputs and outputs. We then address each of these drawbacks. To illustrate the applicability of the improved approach, a real data set involving 14 large branches of the National Iranian Gas Company (NIGC) is utilized.
Data envelopment analysis-discriminant analysis (DEA-DA) has been used for predicting the cluster membership of decision-making units (DMUs). One possible application of DEA-DA is in marketing research. This paper uses cluster analysis to cluster customers into two clusters, Gold and Lead, and then applies DEA-DA to predict the cluster membership of new customers. DEA-DA incorporates an arbitrary parameter (η) that imposes a small gap between the two clusters. It is shown that different values of η lead to different prediction accuracy levels, since an unsuitable value of η leads to an incorrect classification of DMUs; we show that even a data set with no overlap between the two clusters can be misclassified. This paper illustrates these computational difficulties in previous DEA-DA approaches and proposes a new DEA-DA model to overcome them. A case study demonstrates the efficacy of the proposed model.
In microeconomics, a production function is a mathematical function that transforms combinations of inputs of an entity, firm or organization into output. Given the set of all technically feasible combinations of outputs and inputs, only the combinations yielding a maximum output for a specified set of inputs constitute the production function. Data Envelopment Analysis (DEA), originally introduced by Charnes, Cooper and Rhodes in 1978, is a well-known non-parametric mathematical method for estimating the production function. In fact, DEA evaluates the relative performance of a set of homogeneous decision making units with multiple inputs and multiple outputs. This book covers some basic DEA models and disregards more complicated ones, such as network DEA; it mainly stresses the importance of weights in DEA and some of their applications. As a result, the book mainly considers the multiplier form of DEA models to develop some new approaches, although the envelopment forms are introduced where possible. The book also deals with some innovative uses of binary variables in extended DEA model formulations. These auxiliary variables enable us to formulate Mixed Integer Programming (MIP) DEA models for finding a single most efficient DMU and for ranking efficient DMUs. In some cases, the status of an input or output measure is unknown, and binary variables are utilized to accommodate these flexible measures. Furthermore, the binary-variable approach tackles the problem of selecting input or output measures. The book also stresses the mathematical aspects of selected DEA models and their extensions so as to illustrate their potential uses with applications to different contexts, such as the banking industry in the Czech Republic, a financing decision problem, a technology selection problem, a facility layout design problem, and selecting the best tennis player.
In addition, the majority of the extended models in this book can be extended to some other DEA models, such as slacks-based measures, hybrid, non-discretionary, and fuzzy DEA, which are applicable in other contexts. This research-based book contains six chapters, as follows. The first chapter (General Discussion) starts with a simple numerical example to explain the concept of relative efficiency and to clarify the importance of input and output weights in measuring the efficiency score; these basic concepts are then extended to more complex cases. Efficient frontiers and projection points are illustrated by means of constructive and insightful graphs. The second chapter (Basic DEA Models) presents both the envelopment and multiplier forms of the DEA models in the presence of multiple inputs and multiple outputs, although this book mainly focuses on the multiplier form. In addition, this chapter illustrates the role of each axiom in constructing the production possibility set (PPS). It is also concerned with DEA models for pure input data as well as pure output data sets. Apart from basic input- and output-oriented DEA models with different returns to scale, the chapter includes a model that combines both orientations. Three case studies involving the banking industry, technology selection, and asset financing are provided in this section. In Chapter 3 (GAMS Software), we briefly introduce the General Algebraic Modeling System (GAMS), a modelling system for linear, nonlinear and mixed integer optimization problems, for solving DEA models. Chapter 4 (Weights in DEA) treats the weights in DEA and their importance, along with various weight restrictions and common set of weights (CSW) approaches. The chapter includes the Assurance Region (AR) and Assurance Region Global (ARG) methods to restrict weight flexibility in DEA. Two DEA models with different types of efficiency, i.e.
minsum and minimax, together with their integrated versions, are introduced in this chapter. The evaluation of a facility layout design problem is addressed as a numerical example. Chapter 5 (Best Efficient Unit) considers CSW and binary-variable approaches as the main tools for developing models that can find the most efficient DMU and also rank DMUs. We cover WEI/WEO data sets along with multiple-input and multiple-output data sets. Some epsilon-free DEA models are introduced to overcome the problem of finding a set of positive weights. The problem of finding the most cost-efficient DMU under certain and uncertain input prices is also discussed. Two real data sets, involving professional tennis players and a Turkish automotive company, are used to validate the approaches in this chapter. Chapter 6 (Data Selection in DEA) closes the book by considering the data selection problem in DEA and presenting some modifications of the standard DEA models to accommodate flexible and selective measures. To deal with these problems, multiplier and envelopment DEA models are developed, where each contains two alternative approaches: individual and integrated models. The individual approach classifies flexible measures and identifies selective measures for each DMU, and the aggregate approach accommodates these measures using integrated DEA models. We present three case studies to examine and validate the approaches in this chapter. Evidently, my deepest gratitude and love go to my family, Laleh and Arad, for supporting me in writing this book. Ronak Azizi saved me a lot of trouble by tackling all formatting issues in Microsoft Word.
Last, but certainly not least, I would like to extend my thanks to my friend, Dr Adel Hatami-Marbini, for helping me edit the book and for his invaluable ideas and comments. This publication has been elaborated in the framework of the project "Support research and development in the Moravian-Silesian Region 2013 DT 1 - International research teams" (02613/2013/RRC), financed from the budget of the Moravian-Silesian Region.
Effective air quality monitoring network (AQMN) design plays a prominent role in environmental engineering. An optimal AQMN design should consider stations' mutual information and system uncertainties to be effective. This study develops a novel optimization model using the non-dominated sorting genetic algorithm II (NSGA-II). The Bayesian maximum entropy (BME) method generates potential stations as the input of a framework based on the transinformation entropy (TE) method to maximize coverage and minimize the probability of selecting stations. The fuzzy degree of membership and nonlinear interval number programming (NINP) approaches are also used to assess the uncertainty of the joint information. To obtain the best Pareto optimal solution of the AQMN characterization, a robust ranking technique, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), is utilized to select the most appropriate AQMN properties. The methodology is applied to Los Angeles, Long Beach, and Anaheim in California, USA. Results suggest using 4, 4, and 5 stations to monitor CO, NO2, and ozone, respectively; however, implementing this recommendation reduces coverage by factors of 3.75, 3.75, and 3 for CO, NO2, and ozone, respectively. On the positive side, it substantially decreases the TE for CO, NO2, and ozone concentrations by factors of 8.25, 5.86, and 4.75, respectively.
Efficiency analyses are crucial to managerial competency for evaluating the degree to which resources are consumed in the production process of obtaining desired services or products. Among the vast literature on performance analysis, Data Envelopment Analysis (DEA) has become a popular and practical approach for assessing the relative efficiency of Decision-Making Units (DMUs) which employ multiple inputs to produce multiple outputs. However, in addition to inputs and outputs, some situations might involve certain factors that simultaneously play the role of both inputs and outputs. Contrary to conventional DEA models, which assume precise values for inputs, outputs and dual-role factors, we develop a methodology for quantitatively handling imprecision and uncertainty where the degree of imprecision cannot simply be ignored in efficiency analysis. In this regard, we first construct a pair of interval DEA models based on the pessimistic and optimistic standpoints to measure interval efficiencies, where some or all of the observed inputs, outputs and dual-role factors are assumed to be characterized by interval measures. The optimal multipliers associated with the dual-role factors are then used to determine whether a factor is designated as an output, an input, or is in equilibrium, even though the status of the dual-role factors may not be unique across the pessimistic and optimistic standpoints. To deal with this problem, we present a new model which integrates both the pessimistic and optimistic models. The integrated model enables us to identify a unique status for each imprecise dual-role factor as well as to develop a structure for calculating an optimal reallocation of each dual-role factor among the DMUs. As another method to investigate the role of dual-role factors, we introduce a fuzzy decision making model which evaluates all DMUs simultaneously.
We finally present an application to a data set of 20 banks to showcase the applicability and efficacy of the proposed procedures and algorithm.
The Russell measure is one of the non-radial measures for the efficiency evaluation of decision making units in data envelopment analysis. Due to the nonlinearity of its objective function, an enhanced version has been proposed that can be linearized using the well-known Charnes-Cooper change of variables. In this article, we give equivalent formulations of the robust Russell measure and its enhanced models under interval and ellipsoidal uncertainty in their best and worst cases. We show that the resulting formulations remain convex for both the best and worst cases under interval uncertainty, as well as for the worst case under ellipsoidal uncertainty; in other words, they are nonconvex only for the best case under ellipsoidal uncertainty. Some illustrative examples are provided to validate the new models.
Two-stage data envelopment analysis (DEA) efficiency models identify the efficient frontier of a two-stage production process. In some two-stage processes, the inputs to the first stage are also used by the second stage; these are known as shared inputs. This paper proposes a new relational linear DEA model for measuring the efficiency scores of two-stage processes with shared inputs under the constant returns-to-scale assumption. Two case studies, from the banking industry and university operations, illustrate the potential applications of the proposed approach.
Conventional data envelopment analysis evaluates the relative efficiency of a set of homogeneous decision making units (DMUs), where the DMUs are evaluated in terms of a specified set of inputs and outputs. In some situations, however, a performance factor can serve as either an output or an input; such factors are referred to as dual-role factors. The presence of a dual-role factor among the performance factors raises the issue of how to fairly designate the input/output status of that factor. Several studies, of both a methodological and an applied nature, have treated dual-role factors. One approach to this problem views the dual-role factor as nondiscretionary and connects it to returns-to-scale concepts. It is argued that classifying a factor as an input or an output within a single model cannot capture the causality relationships between inputs and outputs. In this paper we present a mixed integer linear programming approach for dealing with the dual-role factor. A model structure is developed for determining the status of a dual-role factor by solving a single model while considering the causality relationships between inputs and outputs. It is shown that the new model can designate the status of a dual-role factor with half the calculations of the previous model. Both individual and aggregate points of view are suggested for deriving the most appropriate designation of the dual-role factor. A data set involving 18 supplier selections is adapted from the literature to illustrate the efficacy of the proposed models and to compare the new approach with previous ones.
Measuring and managing financial risks is an essential part of the management of financial institutions, and appropriate risk management should lead to an efficient allocation of available funds. Approaches based on the Value at Risk (VaR) measure have been used for measuring market risk since the late 20th century, although regulators now suggest applying the more complex Expected Shortfall method. While evaluating models for market risk estimation based on Value at Risk is relatively simple and involves a so-called backtesting procedure, no similar procedure exists for Expected Shortfall. In this article we therefore focus on an alternative method for the comprehensive evaluation of VaR models at various significance levels by means of data envelopment analysis (DEA). This approach should lead to the adoption of a model which is also suitable in terms of the Expected Shortfall criterion. Based on illustrative results from the US stock market, we conclude that the NIG model and historical simulation should be preferred to the normal distribution and the GARCH model. We also recommend estimating the parameters from a period slightly shorter than two years.
The success of a supply chain is highly dependent on the selection of the best suppliers, and these decisions are an important component of production and logistics management for many firms. Little attention has been given in the literature to the simultaneous consideration of cardinal and ordinal data in the supplier selection process. This paper proposes a new integrated data envelopment analysis (DEA) model which is able to identify the most efficient supplier in the presence of both cardinal and ordinal data. Utilizing this model, an innovative method for prioritizing suppliers under multiple criteria is then proposed. As an advantage, our method identifies the best supplier by solving only one mixed integer linear programming (MILP) model. The applicability of the proposed method is demonstrated using a data set comprising the specifications of 18 suppliers.
Flexibility in selecting the weights of inputs and outputs in data envelopment analysis models, together with the uncertainty associated with the data, might lead to unreliable efficiency scores. To avoid these problems, we first discuss a robust Charnes, Cooper and Rhodes (CCR) model under the Bertsimas and Sim approach. The robust CCR solutions are then used to find a robust common set of weights under the norm-1 and Bertsimas and Sim approaches. Finally, on two real-world numerical examples, the performance of the proposed approach is compared with a similar recent approach from the literature to show the advantages of the new method and its applicability.
Over the last twenty years, access to higher education has grown extraordinarily in Latin America. Higher education systems have been challenged to improve their efficiency while strengthening quality assurance processes. In Colombia, the government and researchers have developed models to assess the performance of Higher Education Institutions (HEIs). Nevertheless, the current scholarship lacks a model that allows the system to measure multiple efficiencies in a diverse environment. In this study, we address the challenge of evaluating the efficiency of HEIs taking into account the different goals of the Colombian education system. To this aim, we extend a cross-efficiency data envelopment analysis (DEA) approach to evaluate the efficiency of Colombian HEIs in the presence of flexible measures. While some HEIs are efficient in terms of teaching or employment, others are efficient in terms of research. Therefore, the model suggests broader policies to achieve the efficiency of the institutions under multiple goals.
A fundamental problem that usually appears in linear systems is to find a vector x satisfying Ax = b. This linear system is encountered in many research applications and, more importantly, it must be solved in many contexts in applied mathematics. The LU decomposition method, based on Gaussian elimination, is particularly well suited for sparse and large-scale problems. Linear programming (LP) is a mathematical method for obtaining optimal solutions for a linear system that has received growing attention in various fields of study in recent decades. The simplex algorithm is one of the most widely used mathematical techniques for solving LP problems. Data envelopment analysis (DEA) is a non-parametric approach based on linear programming to evaluate the relative efficiency of decision making units (DMUs). The number of LP models that have to be solved in DEA is at least the same as the number of DMUs. Toloo et al. (Comput Econ 45(2):323-326, 2015) proposed an initial basic feasible solution for DEA models which practically reduces the whole computation by at least 50%. The main contribution of this paper is in utilizing this solution to implement the LU decomposition technique on the basic DEA models, which is more accurate and numerically stable. It is shown that the number of computations in applying the Gaussian elimination method is fairly reduced due to the special structure of basic DEA models. Potential uses are illustrated with an application to a hospital data set.
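The LU technique referred to above can be sketched in a few lines. This is a textbook Doolittle factorization without pivoting, shown only to make the Ax = b pipeline concrete; it is not the paper's DEA-specific implementation, and the 2x2 system is invented.

```python
# Minimal sketch: Doolittle LU decomposition (A = LU, L unit lower triangular),
# then forward substitution (L y = b) and back substitution (U x = y).
def lu_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    y = [0.0] * n                      # forward substitution: L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n                      # back substitution: U x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Invented example: 4x + 3y = 10, 6x + 3y = 12  =>  x = 1, y = 2
x = lu_solve([[4.0, 3.0], [6.0, 3.0]], [10.0, 12.0])
```

The practical payoff claimed in the paper is that the factorization cost is paid once and reused across the many DEA right-hand sides, instead of re-running elimination per LP; production code would add partial pivoting for stability.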
Several researchers have adapted data envelopment analysis (DEA) models to deal with two inter-related problems: weak discriminating power and unrealistic weight distribution. The former problem arises in applications where decision-makers seek a complete ranking of units, and the latter refers to situations in which the basic DEA model rates units 100% efficient on account of irrational input and/or output weights and an insufficient number of degrees of freedom. Simultaneously improving discriminating power and yielding a more reasonable dispersion of input and output weights remains a challenge for DEA and multiple criteria DEA (MCDEA) models. This paper puts emphasis on weight restrictions to boost discriminating power as well as to generate true weight dispersion in MCDEA when a priori information about the weights is not available. To this end, we modify a very recent MCDEA model from the literature by determining an optimum lower bound for input and output weights.
The contribution of this paper is sevenfold: first, we show that a larger lower bound on weights often improves discriminating power and yields more realistic weights in MCDEA models, owing to the imposition of tighter weight restrictions; second, a sensitivity analysis procedure is designed to define stability for the weights of each evaluation criterion; third, we extend a weighted MCDEA model to three evaluation criteria based on the maximum lower bound for input and output weights; fourth, we develop a super-efficiency model for efficient units under the proposed MCDEA model; fifth, we extend an epsilon-based minsum BCC-DEA model to pursue our research objectives under variable returns to scale (VRS); sixth, we present a simulation study to statistically analyze weight dispersion and rankings across five different methods using non-parametric tests; and seventh, we demonstrate the applicability of the proposed models with an application to European Union member countries.
Traditionally, data envelopment analysis (DEA) evaluates the performance of decision-making units (DMUs) with the most favorable weights on the best-practice frontier. In this regard, less emphasis is placed on non-performing or distressed DMUs. To identify the worst performers in risk-taking industries, the worst-practice frontier (WPF) DEA model has been proposed. However, that model does not accommodate evaluation under an uncertain environment. In this paper, we examine WPF-DEA from the basics and propose novel robust WPF-DEA models in the presence of interval data uncertainty and non-discretionary factors. The proposed approach is based on robust optimization, where uncertain input and output data are constrained in an uncertainty set. We first discuss the applicability of worst-practice DEA models to a broad range of application domains and then consider the selection of the worst-performing suppliers in supply chain decision analysis, where some factors are unknown and not under the discretion of management. Using Monte-Carlo simulation, we compute the conformity of rankings in the interval efficiency as well as determine the price of robustness for selecting the worst-performing suppliers.
One of the main objectives in restructuring the power industry is enhancing the efficiency of power facilities. However, the power generation industry, which plays a key role in the power sector, accounts for a noticeable share of emissions among all emission-generating sectors. In this study, we develop some new Data Envelopment Analysis models to find efficient power plants based on less fuel consumption, combusting less polluting fuel types, and incorporating emission factors in order to measure the ecological efficiency trend. We then apply these models to measure eco-efficiency during an eight-year period of power industry restructuring in Iran. Results reveal that there has been a significant improvement in the eco-efficiency, cost efficiency and allocative efficiency of the power plants during the restructuring period. It is also shown that although the hydro power plants look eco-efficient, the combined-cycle ones have been more allocatively efficient than the other power generation technologies used in Iran.
Dynamic data envelopment analysis (DEA) models are built on the idea that single period optimization is not fully appropriate to evaluate the performance of decision making units (DMUs) through time. As a result, these models provide a suitable framework to incorporate the different cumulative processes determining the evolution and strategic behavior of firms in the economics and business literatures. In the current paper, we incorporate two distinct complementary types of sequentially cumulative processes within a dynamic slacks-based measure DEA model. In particular, human capital and knowledge, constituting fundamental intangible inputs, exhibit a cumulative effect that goes beyond the corresponding factor endowment per period. At the same time, carry-over activities between consecutive periods will be used to define the pervasive effect that technology and infrastructures have on the productive capacity and efficiency of DMUs. The resulting dynamic DEA model accounts for the evolution of the knowledge accumulation and technological development processes of DMUs when evaluating both their overall and per period efficiency. Several numerical examples and a case study are included to demonstrate the applicability and efficacy of the proposed method. (C) 2018 Elsevier B.V. All rights reserved.
Data envelopment analysis seeks a frontier that envelops all data, with the data playing a critical role in the process, and in this way measures the relative efficiency of each decision making unit in comparison with the other units. There is a statistical and empirical rule that if the number of performance measures is high in comparison with the number of units, then a large percentage of the units will be determined as efficient, which is obviously a questionable result. It also implies that the selection of performance measures is crucial for successful applications. In this paper, we extend both the multiplier and envelopment forms of data envelopment analysis models and propose two alternative approaches for selecting performance measures under variable returns to scale. The multiplier form of the selecting model leads to the maximum efficiency scores, while the maximum discrimination between efficient units is achieved by applying the envelopment form. Individual-unit and aggregate models are also formulated separately to develop the idea of selective measures. Finally, in order to illustrate the potential of the proposed approaches, a case study using data from the banking industry in the Czech Republic is presented. (C) 2014 Elsevier Ltd. All rights reserved.
•Traditional data envelopment analysis measures technical (radial) efficiency.
•The input and output status of each performance measure is assumed known.
•The data associated with each performance measure is assumed non-negative.
•A new non-radial directional distance model is proposed to relax these assumptions.
•A case study in the automotive industry demonstrates the efficacy of this approach.

Data envelopment analysis (DEA) is a mathematical approach for evaluating the efficiency of decision-making units that convert multiple inputs into multiple outputs. Traditional DEA models measure technical (radial) efficiencies by assuming the input and output status of each performance measure is known, and the data associated with the performance measures are non-negative. These assumptions are restrictive and limit the applications of DEA to real-world problems. We propose a new extended non-radial directional distance model, which is a variant of the weighted additive model, to cope with negative data. We then extend our model and use flexible measures, which play the role of both inputs and outputs, to cope with the unknown status of the performance measures. We also present a case study in the automotive industry to exhibit the efficacy of the models proposed in this study.
Vendor performance evaluation is an important subject with strategic implications for managing an efficient company. There are many important criteria for a prospering company, and these criteria may conflict with one another: while one criterion is improved, another may worsen. Indeed, like manufacturing managers in the global market, purchasing managers, whose decisions have significant practical implications, must deal with this issue. The vendor selection problem (VSP) is obviously affected by complexity and uncertainty due to the lack of information associated with the business environments of countries in a global market. Moreover, in the automotive industry, which plays an important role in the worldwide market, these decisions are exacerbated by increasing outsourcing and opportunities. There is a variety of techniques, from simple weighted scoring methods to complex mathematical programming, for handling the VSP. In this study, we propose a new cost efficiency data envelopment analysis (CE-DEA) approach with price uncertainty for finding the most cost-efficient unit. Potential uses are then illustrated with an application to the automotive industry involving 73 vendors in Turkey.
Efficient solutions of Multi-Objective Integer Linear Programming (MOILP) problems are categorized into two distinct types: supported and non-supported. Many researchers have sought conditions to determine whether a feasible solution is efficient; nevertheless, there has been no attempt to identify the efficiency status of a given efficient solution, i.e. whether it is supported or non-supported. In this paper, we first verify the relationships between Data Envelopment Analysis (DEA) and MOILP and then design two distinct practical procedures: the first specifies whether or not an arbitrary feasible solution is efficient, while the second, as the main aim of this study, determines the efficiency status of an efficient solution. Finally, as a contribution of the suggested approach, we illustrate a drawback of Chen and Lu's methodology (Chen and Lu, 2007), which was developed for solving an extended assignment problem. (C) 2014 Elsevier Inc. All rights reserved.
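The supported/non-supported distinction above can be illustrated on a tiny bi-objective maximization problem whose feasible objective vectors are enumerated explicitly. This sketch is not the paper's DEA-based procedure: it filters the non-dominated points directly, then uses a heuristic weight-grid check (an exact classification would need an LP or convex-hull test) to flag the points that maximize some positive weighted sum as supported; the point set is invented.

```python
# Hedged sketch: efficient = non-dominated; supported = optimal for some
# strictly positive weighted sum of the two (maximized) objectives.
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def efficient(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def supported(points, eff, steps=100):
    sup = set()
    for i in range(1, steps):                  # sample weights w in (0, 1)
        w = i / steps
        best = max(w * p[0] + (1 - w) * p[1] for p in points)
        for p in eff:
            if abs(w * p[0] + (1 - w) * p[1] - best) < 1e-12:
                sup.add(p)                     # p is weighted-sum optimal
    return sup

# Invented objective vectors: (1, 2) lies strictly below the segment joining
# (0, 4) and (4, 0), so it is efficient but non-supported.
pts = [(0, 4), (1, 2), (4, 0)]
eff = efficient(pts)                           # all three are non-dominated
sup = supported(pts, eff)                      # {(0, 4), (4, 0)}
```

The gap between the two sets is exactly why weighted-sum scalarization alone cannot recover every efficient solution of an integer program.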
Data mining techniques, which extract patterns from large databases, have become widespread in business. Using these techniques, various rules may be obtained, and only a small number of these rules may be selected for implementation due, at least in part, to limitations of budget and resources. Evaluating and ranking the interestingness or usefulness of association rules is important in data mining. This paper proposes a new integrated data envelopment analysis (DEA) model which is able to find the most efficient association rule by solving only one mixed-integer linear programming (MILP) model. Then, utilizing this model, a new method for prioritizing association rules by considering multiple criteria is proposed. As an advantage, the proposed method is computationally more efficient than previous works. Using an example of market basket analysis, the applicability of our DEA-based method for measuring the efficiency of association rules with multiple criteria is illustrated. (C) 2008 Elsevier Ltd. All rights reserved.
Data envelopment analysis (DEA), considering the best condition for each decision making unit (DMU), assesses relative efficiency and partitions DMUs into two sets: efficient and inefficient. In practice, traditional DEA models recognize more than one DMU as efficient and cannot rank these efficient DMUs. Some studies have aimed at ranking efficient DMUs, although in some cases only identifying the most efficient unit is desirable. Furthermore, several investigations have been devoted to finding the most CCR-efficient DMU. The basic idea of most of them is to introduce an integrated model which achieves an optimal common set of weights (CSW). These weights help us identify the most efficient unit under identical conditions. Recently, Toloo (2012) [13] proposed a new mixed integer programming (MIP) model to find the most BCC-efficient unit. Based on this study, we propose a new basic integrated linear programming (LP) model to identify candidate DMUs for being the most efficient unit; next, a new MIP integrated DEA model is introduced for determining the most efficient DMU. Moreover, these models exclude the non-Archimedean epsilon, and consequently their optimal solutions can be obtained straightforwardly. We claim that the most efficient unit obtained from any other integrated model has to be one of the candidates produced by the basic integrated LP model. Two numerical examples illustrate the use of these models in different important cases. (C) 2013 Elsevier Inc. All rights reserved.
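The role of a common set of weights can be made concrete with a toy scoring step. The DMU data and the weights below are invented for illustration; finding the optimal CSW is the job of the integrated LP/MIP models described above, but once it is fixed, every DMU is scored with the same weights and the most efficient unit is simply the argmax of the ratio.

```python
# Hedged sketch: scoring DMUs with a fixed common set of weights (CSW).
# Efficiency of a DMU = (weighted sum of outputs) / (weighted sum of inputs),
# evaluated with the SAME weights for every unit, so scores are comparable.
dmus = {                       # name: (inputs, outputs) -- invented data
    "A": ([2.0, 3.0], [4.0]),
    "B": ([1.0, 5.0], [3.0]),
    "C": ([3.0, 2.0], [6.0]),
}
v = [0.5, 0.5]                 # common input weights (assumed, not optimized here)
u = [1.0]                      # common output weight (assumed, not optimized here)

def efficiency(x, y):
    return sum(ui * yi for ui, yi in zip(u, y)) / sum(vi * xi for vi, xi in zip(v, x))

scores = {name: efficiency(x, y) for name, (x, y) in dmus.items()}
best = max(scores, key=scores.get)   # "C" under these assumed weights
```

Unlike standard DEA, where each DMU picks its own most favorable weights, the shared weights are what make a single "most efficient" unit well defined.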
•Data envelopment analysis (DEA) is challenged by imprecise, uncertain, and stochastic data.
•Two DEA adaptations (interval and robust) are developed with uncertain data and undesirable outputs.
•An epsilon-based robust interval cross-efficiency model is extended.
•An example and a real-world application are presented to compare our method with an interval method.
•The ability of our method to improve discernibility among DMUs is demonstrated.

Degenerate optimal weights and uncertain data are two challenging problems in conventional data envelopment analysis (DEA). Cross-efficiency and robust optimization are commonly used to handle such problems. We develop two DEA adaptations to rank decision-making units (DMUs) characterized by uncertain data and undesirable outputs. The first adaptation is an interval approach, where we propose lower- and upper-bounds for the efficiency scores and apply a robust cross-efficiency model to avoid problems of non-unique optimal weights and uncertain data. We initially use the proposed interval approach and categorize DMUs into fully efficient, efficient, and inefficient groups. The second adaptation is a robust approach, where we rank the DMUs, with a measure of cross-efficiency that extends the traditional classification of efficient and inefficient units. Results show that we can obtain higher discriminatory power and higher-ranking stability compared with the interval models. We present an example from the literature and a real-world application in the banking industry to demonstrate this capability.
Finding and classifying all efficient solutions for a Bi-Objective Integer Linear Programming (BOILP) problem is one of the controversial issues in Multi-Criteria Decision Making problems. The main aim of this study is to utilize the well-known Data Envelopment Analysis (DEA) methodology to tackle this issue. Toward this end, we first state some propositions to clarify the relationships between the efficient solutions of a BOILP and efficient Decision Making Units (DMUs) in DEA and next design a new two-stage approach to find and classify a set of efficient solutions. Stage I formulates a two-phase Mixed Integer Linear Programming (MILP) model, based on the Free Disposal Hull (FDH) model in DEA, to gain a Minimal Complete Set of efficient solutions. Stage II uses a variable returns to scale DEA model to classify the obtained efficient solutions from Stage I as supported and non-supported. A BOILP model containing 6 integer variables and 4 constraints is solved as an example to illustrate the applicability of the proposed approach.
The problem of ranking efficient decision making units (DMUs) is of interest from both theoretical and practical points of view. In this paper, we propose an integrated data envelopment analysis and mixed integer non-linear programming (MINLP) model to find the most efficient DMU using a common set of weights. We linearize the MINLP model to an equivalent mixed integer linear programming (MILP) model by eliminating the non-linear constraints in which the products of variables are incorporated. The formulated MILP model is simpler and computationally more efficient. In addition, we introduce a model for finding the value of epsilon, since the improper choice of the non-Archimedean epsilon may result in infeasible conditions. We use a real-life facility layout problem to demonstrate the applicability and exhibit the efficacy of the proposed model.
Data envelopment analysis (DEA) is a well-known data-driven mathematical modeling approach that aims at evaluating the relative efficiency of a set of comparable decision making units (DMUs) with multiple inputs and multiple outputs. The number of inputs and outputs (performance factors) plays a vital role for successful applications of DEA. There is a statistical and empirical rule in DEA that if the number of performance factors is high in comparison with the number of DMUs, then a large percentage of the units will be determined as efficient, which is questionable and unacceptable in the performance evaluation context. However, in some real-world applications, the number of performance factors is relatively larger than the number of DMUs. To cope with this issue, selecting models have been developed to select a subset of performance factors that lead to acceptable results. In this paper, we extend a pair of optimistic and pessimistic approaches, involving two alternative individual and summative selecting models, based on the slacks-based model. We mathematically validate the proposed models with some theorems and lemmas and illustrate the applicability of our models using 18 active auto part companies in the largest stock exchange in Iran.
Russell measure (RM) and enhanced Russell measure (ERM) are popular non-radial measures for efficiency assessment of decision-making units (DMUs) in data envelopment analysis (DEA). Input and output data of both original RM and ERM are assumed to be deterministic. However, this assumption may not be valid in some situations because of data uncertainty arising from measurement errors, data staleness, and multiple repeated measurements. Interval DEA (IDEA) has been proposed to measure the interval efficiencies from the optimistic and pessimistic viewpoints while the robustness of the assessment is questionable. This paper draws on a class of robust optimisation models to surmount uncertainty with a high degree of robustness in the RM and ERM models. The contribution of this paper is fivefold: (1) we develop new robust non-radial DEA models to measure the robust efficiency of DMUs under data uncertainty, which are adjustable based upon conservatism levels, (2) we use Monte-Carlo simulation in an attempt to identify an appropriate range for the budget of uncertainty in terms of the highest conformity of ranking results, (3) we introduce the concept of the price of robustness to scrutinise the effectiveness and robustness of the proposed models, (4) we compare the developed robust models in this paper with other existing approaches, both radial and non-radial models, and (5) we explore an application to assess the efficiency of Master of Business Administration (MBA) programmes where data uncertainties influence the quality and reliability of results.
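The interval-DEA idea that this robust approach builds on can be sketched with a fixed-weight ratio. In the toy version below, a unit's optimistic efficiency uses its lowest inputs and highest outputs, and its pessimistic efficiency the opposite; the data intervals and weights are invented, and real IDEA/robust models optimise the weights via LPs rather than fixing them.

```python
# Hedged sketch: optimistic and pessimistic efficiency bounds for one DMU
# whose inputs and outputs are only known to lie in intervals.
def interval_efficiency(x_lo, x_hi, y_lo, y_hi, v, u):
    # best case: smallest weighted inputs, largest weighted outputs
    upper = sum(ui * yi for ui, yi in zip(u, y_hi)) / sum(vi * xi for vi, xi in zip(v, x_lo))
    # worst case: largest weighted inputs, smallest weighted outputs
    lower = sum(ui * yi for ui, yi in zip(u, y_lo)) / sum(vi * xi for vi, xi in zip(v, x_hi))
    return lower, upper

# Invented data: one input in [2.0, 2.5], one output in [4.0, 5.0], unit weights.
lo, hi = interval_efficiency([2.0], [2.5], [4.0], [5.0], [1.0], [1.0])
```

A robust model with a budget of uncertainty interpolates between these two extremes instead of always assuming the worst case, which is what the paper's conservatism levels control.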
•We develop robust equivalents for fractional DEA models.
•The proposed models give a proper interpretation of robust efficiency.
•The superiority of our approach over existing models is investigated.
•Duality in robust DEA is established according to the "primal worst equals dual best" theorem in robust optimization.
•We show an equivalent relation between robust input- and output-oriented models.
•We illustrate our proposed models with a study of the largest airports in Europe.

Robust Data Envelopment Analysis (RDEA) is a DEA-based conservative approach used for modeling uncertainties in the input and output data of Decision-Making Units (DMUs) to guarantee stable and reliable performance evaluation. The RDEA models proposed in the literature apply robust optimization techniques to the linear and conventional DEA models, which leads to the difficulty of obtaining a robust efficient DMU. To overcome this difficulty, this paper tackles uncertainty in DMUs from the original fractional DEA model. We propose a robust fractional DEA (RFDEA) model in both input and output orientation which enables us to overcome the deficiency of existing RDEA models. The linearized models of the fractional DEA are further used to establish duality relations from a pessimistic and optimistic view of the data. We show that the primal worst of the multiplier model is equivalent to the dual best of the envelopment model. Furthermore, we show that robust efficiency in the input- and output-oriented DEA models remains equivalent in the new approach, which is not the case in conventional RDEA models. We finally present a study of the largest airports in Europe to illustrate the efficacy of the proposed models. The proposed RDEA is found to provide an effective management evaluation strategy under uncertain environments.
•Our approach addresses the risk-sharing problem between government and investors.
•It includes constraints that limit pollution effects on population centers.
•It considers social responsibility, economic factors, and the benefits of waste recycling.

The public-private partnership (PPP) is a practical and standard model that has been at the center of attention over the past two decades. Sharing risk between government and investors has been a challenging issue in recent years. This study formulates a model that aims to capture investors' aspirations and allocate risks to the government within a logical range. Besides, in some real-world conditions, foreign investors with lower cost, higher quality, and better technology than domestic investors partner with the government. Under these conditions, it is essential to consider disruption risks due to sanctions and currency price fluctuations. Furthermore, the limited budget of the government for investing in infrastructure projects is taken into account. In this paper, the government's disruption risks and limited budget are added to the risk-sharing ratio model for the first time in the literature. Moreover, Pythagorean fuzzy sets (PFSs) are applied to cope with the uncertainty of real-world conditions. PFSs are more powerful than classical and intuitionistic fuzzy sets (IFSs) in dealing with uncertainty: they provide membership, non-membership, and hesitancy degrees for experts to better address the uncertainty of real-world conditions, and, compared with IFSs, they cover a larger space and thus give more freedom in addressing it. Finally, a case study is presented to illustrate the applicability and sensitivity of the suggested model. As disruption risks increase, the general utility degree, government utility, and investor's effort decrease, and the guarantee risk ratio borne by the government increases.
Note that the investor's effort decreases because the government is forced to give the unfinished project to the domestic investor; consequently, exclusive terms arise for the domestic investor.
•Propose a framework for measuring the maturity level of performance-based budgeting.
•Develop a parallel network data envelopment analysis model.
•Consider the hierarchical configuration of performance indicators.
•Use fuzzy set theory to deal with vagueness and ambiguity.
•Present a case study to demonstrate the applicability of the developed framework.

Performance-based budgeting (PBB) aims to formulate and manage public budgetary resources so as to improve managerial decisions based on actual performance measures of agencies. Although the PBB system has been widely adopted by various agencies, the progress and maturity of its implementation are, at large, not satisfactory. It is therefore warranted to evaluate and improve the performance of organisations in implementing a PBB system. To do so, composite indicators (CIs) have been proposed to aggregate the multiple indicators associated with the PBB system, but their use is contentious, as they often lean on ad-hoc and troublesome assumptions. Data envelopment analysis (DEA) methods, as a powerful and established tool, help contend with the key limitations of CIs. However, the original DEA method ignores the internal production process, while knowledge of the internal structure of PBB systems and indicators is important for providing further insights when assessing the performance of PBB systems. In this paper, we present a budget assessment framework that breaks a PBB system into two parallel stages, operations performance (OP) and financial performance enhancement (FPE), to open up the black-box structure of the system and consider the hierarchical configuration of the indicators of each stage. For such hierarchical configurations of indicators, we develop a multilayer parallel network DEA-based CI model to measure the PBB maturity levels of the system and its stages.
It is shown that the discrimination power of the proposed multilayer model is better than that of existing one-layer models, and in situations with a relatively small number of DMUs the model developed in this paper can be a good solution for the dimension reduction of indicators. Moreover, this research leverages fuzzy logic to surmount the subjective information that is often present when collecting indicators of PBB systems. A major contribution of this research is a case study of a PBB maturity award in Iran, a developing country with myriad financial challenges, which adopts a PBB maturity model and points towards the efficacy and applicability of the proposed framework in practice.
The role of medicines in health systems is increasing day by day. The medicine supply chain is a part of the health system; if it is not properly addressed, health in the community is unlikely to improve significantly. To fill gaps and address open challenges in the medicine supply chain network (MSCN), this paper proposes a location-production-distribution-transportation-inventory holding problem for a multi-echelon, multi-product, multi-period, bi-objective MSCN under a production technology policy. To design the network, a mixed-integer linear programming (MILP) model capable of minimizing the total costs of the network and the total transportation time is developed. As the developed model is NP-hard, several meta-heuristic algorithms are used, and two heuristic algorithms, namely Improved Ant Colony Optimization (IACO) and Improved Harmony Search (IHS), are developed to solve the MSCN model on different problems. Then, some experiments are designed and solved both by an optimization solver, GAMS (CPLEX), and by the presented algorithms to validate the model and the effectiveness of the algorithms. A comparison of the results provided by the presented algorithms with the exact solutions indicates the high efficiency and performance of the proposed algorithms in finding near-optimal solutions within reasonable computational time. Hence, the results are compared with the commercial solver (GAMS) on the small-sized problems, and the results of the proposed meta-heuristic and heuristic algorithms are compared with each other on the large-sized problems. To tune and control the parameters of the proposed algorithms, the Taguchi method is utilized. To validate the proposed algorithms and the MSCN model, assessment metrics are used and a few sensitivity analyses are reported.
The results demonstrate the high quality of the proposed IACO algorithm.
Fractional programming (FP) refers to a family of optimization problems whose objective function is a ratio of two functions. FP has been studied extensively in economics, management science, information theory, optics, graph theory, communication, and computer science, among other fields. This paper presents a bibliometric review of FP-related publications over the past five decades in order to track research outputs and scholarly trends in the field. The review is conducted through the Science Citation Index Expanded (SCI-EXPANDED) database of the Web of Science Core Collection (Clarivate Analytics). Based on the bibliometric analysis of 1811 documents, various theme-related research indicators are described, such as the most prominent authors and the most commonly cited papers, journals, institutions, and countries. Three research directions emerged: Electrical and Electronic Engineering, Telecommunications, and Applied Mathematics.
In this paper, a metaheuristic-based design approach is developed for the structural design optimization of large-scale steel frame structures. Although academics have introduced form-dominant methods, using artificial intelligence in structural design has remained one of the most critical challenges in recent years. Here, the Charged System Search (CSS) is utilized as the primary optimization approach and is improved using the main principles of quantum mechanics and fuzzy logic systems. In the proposed Fuzzy Adaptive Quantum-Inspired CSS algorithm, the position-updating procedure of the standard algorithm is developed by implementing the center of potential energy from quantum mechanics into the general formulation of CSS to enhance the convergence capability of the algorithm. Simultaneously, a fuzzy logic-based parameter tuning process is conducted to enhance the exploitation and exploration rates of the standard optimization algorithm. Two steel frame structures, of 10 and 60 stories with 1026 and 8272 structural members, respectively, are utilized as design examples to assess the performance of the developed algorithm on complex optimization problems. The overall capability of the presented approach is compared with the standard Charged System Search and other metaheuristic optimization algorithms. The results show that the proposed enhanced algorithm produces better results than the other metaheuristics.
- Infeasibility of the super-efficiency problem is aggravated under nonconvexity.
- The new super-efficiency cost frontier is feasible under constant returns to scale.
- The super-efficiency cost frontier may be infeasible under variable returns to scale.
- The super-efficiency decomposition is new in the literature.
- A new cost super-efficiency model under incomplete price data is proposed.
This contribution extends the literature on super-efficiency by focusing on ranking cost-efficient observations. To the best of our knowledge, the focus has always been on technical super-efficiency, and this focus on ranking cost-efficient observations may well open up a new topic. Furthermore, since the convexity axiom has an impact on both technical and cost efficiency, we pay particular attention to the effect of nonconvexity on both super-efficiency notions. Apart from a numerical example, we use a secondary data set guaranteeing replication to illustrate these efficiency and super-efficiency concepts. Two empirical conclusions emerge. First, the cost super-efficiency notion ranks differently from the technical super-efficiency concept. Second, both cost and technical super-efficiency notions rank differently under convex and nonconvex technologies.
Data envelopment analysis (DEA) is a data-oriented mathematical programming approach that evaluates a set of peer decision making units (DMUs) dealing directly with the observed inputs and outputs (performance measures). Empirically, in order to have a logical assessment, there should be a balance between the number of performance measures and the number of DMUs. Accordingly, applying an appropriate method for selecting performance measures is crucial for successful applications. In this paper, we suggest the envelopment form of the selecting model under constant returns to scale (CRS) from both individual and aggregate points of view. We also show that applying these selecting models leads to the maximum discrimination between efficient units.
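The envelopment form referred to above amounts to solving one linear program per DMU. As an illustrative sketch only (the standard input-oriented CCR model under CRS, not this paper's selecting models), it can be solved with `scipy.optimize.linprog`; the data and function name are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (CRS) envelopment score of DMU `o`.
    X: (m, n) inputs, Y: (s, n) outputs; columns index the DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    # decision vector: [theta, lambda_1 .. lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    A_in = np.hstack([-X[:, [o]], X])           # sum_j lambda_j x_j <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j lambda_j y_j >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# toy data: 3 DMUs, one input, one output
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 4.0, 4.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(3)]
```

Here DMU 3 uses twice the input of DMU 2 for the same output, so its score is 0.5 while the first two DMUs are efficient.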
Multi-Objective Combinatorial Optimization Problems and Solution Methods discusses recent achievements in multi-objective combinatorial optimization, covering metaheuristic, mathematical programming, heuristic, hyper-heuristic, and hybrid approaches. In other words, the book presents various multi-objective combinatorial optimization problems that may benefit from different methods in theory and practice. Combinatorial optimization problems appear in a wide range of applications in operations research, engineering, the biological sciences, and computer science; hence, many optimization approaches have been developed that link the discrete universe to the continuous universe through geometric, analytic, and algebraic techniques. This book covers this important topic as computational optimization has become increasingly popular, and as design optimization and its applications in engineering and industry have become ever more important due to the more stringent design requirements of modern engineering practice.
Conventional data envelopment analysis (DEA) methods are useful for estimating the performance of decision making units (DMUs) in which each DMU uses multiple inputs to produce multiple outputs, without considering any partial impacts between inputs and outputs. Nevertheless, there are some real-world situations where DMUs may possess several production lines with a two-stage network structure, in which each production line uses inputs according to its needs. The current paper extends the recent work by Ma (Expert Syst Appl Int J 42:4339-4347, 2015) to consider partial impacts between inputs and outputs for two-stage network production systems. Toward this end, we consider several input-output bundles in each stage for the production lines. We formulate a couple of new mathematical programming models in the DEA framework with the aim of considering partial impacts between inputs and outputs, calculating the aggregate, overall, and subunit efficiencies along with the resource usage of the production lines for a two-stage production system. Finally, an application in the refinery industry is provided as an example to illustrate the potential application of the proposed method.
- A non-radial, non-oriented method is developed to deal with flexible measures.
- Optimistic and pessimistic approaches are proposed.
- Each approach contains individual and integrated models.
- A case study of 61 banks in the Visegrad Four region validates the new models.
The original Data Envelopment Analysis (DEA) models require the assumption that the status of all inputs and outputs is known exactly, whereas we may face cases with flexible performance measures whose status is unknown. Several classifier approaches have been proposed to deal with flexible measures. This contribution develops a new classifier non-radial directional distance method that takes input contraction and output expansion into account simultaneously in the presence of flexible measures. To make the most appropriate decision for flexible measures, we suggest pessimistic and optimistic approaches from both individual and summative points of view. Finally, a real numerical example from the banking systems of the Visegrad Four countries (the Czech Republic, Hungary, Poland, and Slovakia) is presented to illustrate the applicability of the proposed method.
Accurate evaluation of emission governance efficiency can lay the foundation for developing haze control strategies towards sustainable development. Based on the features of haze, we view the haze formation stage as the first sub-process and the haze control stage as the second sub-process. This paper proposes an additive aggregation network data envelopment analysis (DEA) model with undesirable intermediate measures and undesirable outputs, which have not been thoroughly studied in the previous literature. We found that the newly developed network DEA model is nonlinear and cannot be converted into a linear program, and we therefore developed an improved second-order cone programming approach to solve it. After analyzing data on haze control in China, we drew the following conclusions. First, different preference weights for the two sub-processes can lead to variation in the overall efficiency: under different preference weights, the efficiency of haze formation changes very little in some provinces, while the efficiency of haze control changes considerably. Second, decision makers can achieve the goal of reducing haze by adjusting their preferences over the haze formation and haze control stages, which is helpful for policy making in haze control strategy and sustainable development.
In conventional data envelopment analysis (DEA) models, the status of each performance measure, whether input or output, usually has to be known. Nevertheless, in some cases the type of a performance measure is not clear, and several models have been introduced to accommodate such flexible measures. In this paper, it is shown that the alternative optimal solutions of these models have to be considered when dealing with flexible measures; otherwise, incorrect results might occur. In practice, the efficiency scores of a DMU can be equal whether the flexible measure is considered as an input or as an output. Such cases are introduced and specifically referred to as share cases in this study. It is shown that share cases must not be taken into account for classifying inputs and outputs. A new mixed integer linear programming (MILP) model is proposed to overcome the problem of ignoring the alternative optimal solutions of classifier models. Finally, the applicability of the proposed model is illustrated with a real data set.
Robust optimization has become the state-of-the-art approach for solving linear optimization problems with uncertain data. Though relatively young, the robust approach has proven to be essential in many real-world applications. Under this approach, robust counterparts to prescribed uncertainty sets are constructed for general solutions to the corresponding uncertain linear programming problems. Remarkably, in most practical problems the variables represent physical quantities and must be nonnegative. In this paper, we propose alternative robust counterparts with nonnegative decision variables: a reduced robust approach that attempts to minimize model complexity. The new framework is extended to robust Data Envelopment Analysis (DEA) with the aim of reducing the computational burden. In the DEA methodology, we first deal with the equality in the normalization constraint, and then a robust DEA model based on the reduced robust counterpart is proposed. The proposed model is examined with numerical data from 250 European banks operating across the globe. The results indicate that the proposed approach (i) reduces by almost 50% the computational burden required to solve DEA problems with nonnegative decision variables, and (ii) retains only essential (non-redundant) constraints and decision variables without altering the optimal value.
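A toy illustration of why nonnegativity simplifies robust counterparts (the numbers are invented, and this is not the paper's reduced DEA model): with interval uncertainty on a constraint's coefficients and x ≥ 0, the worst case is attained at the upper bounds of the intervals, so the robust counterpart is a plain LP with tightened coefficients and no auxiliary variables:

```python
import numpy as np
from scipy.optimize import linprog

# Nominal problem: max 3*x1 + 2*x2  s.t.  a1*x1 + a2*x2 <= 10, x >= 0,
# where the coefficients a are only known to lie in a_nom +/- a_dev.
c = np.array([-3.0, -2.0])            # linprog minimizes, so negate
a_nom = np.array([2.0, 1.0])
a_dev = np.array([0.5, 0.2])

# Because x >= 0, the worst case of a.x <= 10 over the intervals is
# attained at a_nom + a_dev, so no extra variables are needed.
nominal = linprog(c, A_ub=[a_nom], b_ub=[10.0])
robust = linprog(c, A_ub=[a_nom + a_dev], b_ub=[10.0])

nominal_value = -nominal.fun          # optimal value of the nominal LP
robust_value = -robust.fun            # robust value; never exceeds nominal
```

The robust optimum is necessarily no better than the nominal one, which is the usual price of immunizing against the uncertainty.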
While conventional data envelopment analysis (DEA) models set targets separately for each decision making unit (DMU), Lozano and Villa (2004) introduced the concept of "centralized" DEA models, which aim at optimizing the combined resource consumption by all units in an organization rather than considering the consumption of each unit separately. In these models, a centralized decision maker (DM) supervises all DMUs, and the main aim is to optimize total input consumption and total output production. In this paper, we first present a centralized output model. We then introduce a parametric centralized additive model which, in a single phase, simultaneously minimizes total input consumption and maximizes total output production in the direction of an optimization vector. Numerical examples of the proposed models and their results are presented.
This article investigates a just-in-time (JIT) single machine scheduling problem with periodic preventive maintenance. To maintain product quality, there is also a limit on the maximum number of allowable jobs in each period. The proposed bi-objective mixed integer model minimizes total earliness-tardiness and makespan simultaneously. Due to the computational complexity of the problem, a multi-objective particle swarm optimization (MOPSO) algorithm is implemented, and two other optimization algorithms are used to compare the results. Finally, the Taguchi method with metrics analysis is used to tune the algorithms' parameters, and a multiple criteria decision making (MCDM) technique based on the technique for order of preference by similarity to ideal solution (TOPSIS) is applied to choose the best algorithm. The comparison results confirm the superiority of MOPSO over the other algorithms.
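TOPSIS, the MCDM technique mentioned above, ranks alternatives by their relative closeness to an ideal solution. A minimal sketch (the scores and weights below are invented, not the paper's experimental data):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: (alternatives x criteria); benefit[j] is True if larger is better."""
    M = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalize each column
    V = M * weights                               # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)     # distance to ideal point
    d_neg = np.linalg.norm(V - anti, axis=1)      # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)                # closeness: higher is better

# hypothetical scores for 3 algorithms on 2 metrics:
# column 0 is a quality metric (higher better), column 1 a cost (lower better)
scores = topsis(np.array([[0.9, 0.1],
                          [0.7, 0.2],
                          [0.5, 0.4]]),
                weights=np.array([0.6, 0.4]),
                benefit=np.array([True, False]))
best = int(np.argmax(scores))
```

In this toy data the first alternative dominates on both criteria, so it coincides with the ideal point and gets a closeness of 1.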
Data envelopment analysis (DEA) evaluates the relative efficiency of a set of comparable decision making units (DMUs) with multiple performance measures (inputs and outputs). Classical DEA models rely on the assumption that each DMU can improve its performance by increasing its current output levels and decreasing its current input levels. However, undesirable outputs (such as wastes and pollutants) may often be produced together with desirable outputs and have to be minimized. On the other hand, in some real-world situations we may encounter specific performance measures with more than one value, measured by various standards. In this study, we refer to such measures as multi-valued measures, of which only one value should be selected. For instance, the unemployment rate is a multi-valued measure in economic applications, since there are several definitions or standards for measuring it. As a result, selecting a suitable value for a multi-valued measure is a challenging issue and is crucial for the successful application of DEA. The aim of this study is to accommodate multi-valued measures in the presence of undesirable outputs. In doing so, we formulate individual and summative selecting directional distance models and develop a pair of multiplier- and envelopment-based selecting approaches. Finally, we illustrate the applicability of the proposed method using real data on 183 NUTS 2 regions in 23 selected EU-28 countries.
In this race for productivity, the most successful leaders in the banking industry are those with high efficiency and a competitive edge. Data envelopment analysis is one of the most widely used methods for measuring efficiency in organizations. In this study, we use the ideal point concept and propose a common weights model with fuzzy data and non-discretionary inputs. The proposed model considers environmental criteria with uncertain data to produce a full ranking of homogeneous decision-making units. We use the proposed model to investigate the efficiency-based leaders in the Russian banking industry. The results show that the unidimensional and unilateral assessment of leading organizations solely according to corporate size is insufficient to characterize industry leaders effectively. In response, we recommend a multilevel, multicomponent, and multidisciplinary evaluation framework for a more reliable and realistic investigation of leadership at the network level of analysis.
This study proposes a new fuzzy adaptive Charged System Search (CSS) for global optimization. The suggested algorithm includes a parameter tuning process based on fuzzy logic with the aim of improving its performance. In this regard, four linguistic variables are defined, which configure a fuzzy system for parameter identification of the standard CSS algorithm. This process lets the algorithm focus on higher levels of global search in the initial iterations, while local search is emphasized in the final iterations. Twenty mathematical benchmark functions, the CEC 2020 benchmark of the Competitions on Evolutionary Computation (CEC), three well-known constrained problems, and two engineering problems are utilized to validate the new algorithm. Moreover, the performance of the new algorithm is compared and contrasted with other metaheuristic algorithms. The obtained results reveal the superiority of the proposed approach in dealing with different unconstrained, constrained, and engineering design problems.
- The Fuzzy Adaptive Charged System Search (FACSS) algorithm is presented.
- FACSS is investigated through mathematical and engineering problems.
- Statistical analysis proves the superiority of the FACSS algorithm.
Algorithms and computer programs that run faster and occupy less memory are of special importance, so researchers have always sought suitable strategies and algorithms requiring the least computation. Since linear programming (LP) was introduced, interest in it has spread rapidly among scientists. The simplex method was developed to solve LPs, and since then many researchers have contributed to the extension and progression of LP and, of course, the simplex method. A vast literature has grown out of this original method, covering mathematical theory, new algorithms, and applications. Solving an LP via the simplex method requires an initial basic feasible solution (IBFS), but in many situations such a solution is not readily available, so artificial variables are resorted to. These artificial variables must be driven to zero, if possible. There are two main methods for eliminating the artificial variables: the two-phase method and the Big-M method. Data envelopment analysis (DEA) solves an individual LP to evaluate the performance of each decision making unit; consequently, an IBFS must be on hand for each of these LPs. The main contribution of this paper is to introduce a closed form of the IBFS for conventional DEA models, which allows us to avoid dealing with artificial variables directly. We apply the proposed form to a real data set to illustrate the applicability of the new approach. The results of this study indicate that using the closed form of the IBFS can reduce the overall computation by at least 50%.
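The Big-M method mentioned above can be sketched in a few lines; here a generic LP solver stands in for a hand-rolled simplex implementation, and the numbers are illustrative only:

```python
import numpy as np
from scipy.optimize import linprog

# A standard-form LP with an equality row has no obvious basic feasible
# solution, so the Big-M method appends an artificial variable `a` with a
# large penalty M and minimizes  c.x + M*a  over  A x + a = b, x >= 0, a >= 0.
M = 1e6
c = np.array([2.0, 3.0])                 # original objective: min 2*x1 + 3*x2
A_eq = np.array([[1.0, 1.0]])            # constraint: x1 + x2 = 10
b_eq = np.array([10.0])

# augmented problem: one artificial variable appended to x
c_big = np.append(c, M)
A_big = np.hstack([A_eq, np.eye(1)])
res = linprog(c_big, A_eq=A_big, b_eq=b_eq)

x, artificial = res.x[:2], res.x[2]
# when the original problem is feasible, the penalty drives `artificial` to zero
```

At the optimum the artificial variable vanishes, leaving the solution of the original LP; the paper's closed-form IBFS avoids introducing such variables at all.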
- We criticize some DEA models developed to deal with ratio data.
- We make modifications to explicitly overcome the flaws.
- We provide a case study in the education sector to validate our proposed approach.
The performance evaluation of for-profit and not-for-profit organisations is a unique tool to support the continuous improvement of processes. Data envelopment analysis (DEA) is widely regarded as an effective technique for efficiency measurement. However, its inability to handle ratio measures is an ongoing challenge. The convexity axiom embedded in standard DEA models cannot be fully satisfied when the dataset includes ratio measures, and the results obtained from such models may not be correct and reliable. There is a typical approach to dealing with ratio measures in DEA, in particular when the numerators and denominators of the ratio data are available. In this paper, we show that the current solutions may fail to preserve the principal properties of DEA and may introduce further flaws. We make modifications to explicitly overcome these flaws and measure the performance of a set of operating units for the input and output orientations regardless of the assumed technology. Finally, a case study in the education sector is presented to illustrate the strengths and limitations of the proposed approach.
- Two two-stage data envelopment analysis models, one linear and one nonlinear, are compared.
- A relationship between these two models is developed.
- It is shown that the linear model is more computationally efficient.
- The linear model avoids the estimation error of the nonlinear model.
- The linear and nonlinear models are compared on real and simulated data.
This paper develops a relationship between linear and nonlinear data envelopment analysis (DEA) models previously developed for the joint measurement of the efficiency and effectiveness of decision making units (DMUs). It is shown that a DMU is overall efficient by the nonlinear model if and only if it is overall efficient by the linear model. We compare these two models and demonstrate that the linear model is an efficient alternative algorithm for the nonlinear model: it is more computationally efficient, it does not have the potential estimation error of the heuristic search procedure used in the nonlinear model, and it determines global rather than local optimum solutions. Using 11 different data sets from published papers and 1000 simulated data sets, we explore and compare the two models. On the data set most frequently used in the published papers, the nonlinear model with a step size of 0.00001 requires running 1,955,573 linear programs (LPs) to measure the efficiency of 24 DMUs, compared with only 24 LPs for the linear model. Similarly, for a very small data set of only 5 DMUs, the nonlinear model requires running 7861 LPs with a step size of 0.0001, whereas the linear model needs just 5 LPs.
- This paper studies the project selection problem.
- We develop a new project selection method under resource limitations.
- The advantage of the new method is that it accomplishes both individual evaluation and selection.
- A case study of information system projects at the Iran e-commerce development center validates the new method.
The project selection problem plays a vital role in an organization's ability to attain its competitive advantages and corporate strategies. The problem is further exacerbated if the decision-maker takes resource limitations into consideration. In essence, the project selection problem deals with choosing a set of the best feasible proposals from a large pool while making the best use of available resources. It is assumed that each proposal employs various resources, such as personnel, capital, equipment, and facilities. Each subset of feasible proposals constitutes a single, composite project that utilizes a set of available but limited resources to produce various outputs. It is desired to select the best subset of proposals with the aim of using the available resources as fully as possible. Data envelopment analysis (DEA) is commonly used as a prioritization method to evaluate each feasible composite project. This paper develops a new project selection method based on the performance of each contained proposal, requiring the solution of only a single linear DEA model. Finally, we provide a real dataset containing 21 information system proposals at the Iran e-commerce development center to illustrate the potential application of the suggested method.
Presents the introductory welcome message from the conference proceedings. May include the conference officers' congratulations to all involved with the conference event and publication of the proceedings record.
This paper suggests new data envelopment analysis (DEA) models for input and output scaling in advanced manufacturing technology (AMT). For a given group of AMT observations using the traditional DEA models, it is not possible to evaluate the units when a specified input (or specified output) is required to be scaled for all units. The paper provides theoretical results for obtaining the relationship between the original AMT observations and the corresponding scaled data. Also, the paper uses numerical illustrations to show the usefulness of the suggested contribution.
In many applications of DEA, finding the most efficient DMUs is desirable. This paper presents an improved integrated DEA model for detecting the most efficient DMUs. The proposed integrated DEA model does not use trial and error in the objective function. It is also able to find the most efficient DMUs without solving the model n times (one linear program (LP) for each DMU), and therefore allows the user to obtain results faster. It is shown that the improved integrated DEA model is always feasible and capable of ranking the most efficient DMU. To illustrate the model's capability, the proposed methodology is applied to a real data set consisting of 19 facility layout alternatives.
A new research issue in the context of production theory is production without explicit inputs. In such systems, input consumption is not important to the decision-maker and the focus is on output production. In the presence of desirable and undesirable outputs, modelling undesirable outputs is an important problem. This paper discusses the problem of weak disposability in the absence of explicit inputs. A linear production technology is constructed axiomatically to handle desirable and undesirable outputs in production systems without explicit inputs. A simple linear formulation of weak disposability in such systems is proposed that enables us to reduce undesirable production outputs.
Environmental issues and the depletion of fossil energy resources have triggered a sense among researchers and practitioners to seek ways of substituting fossil energy resources with renewable ones. Biodiesel is a green fuel produced from various oleaginous biomass. Nevertheless, producing biodiesel from edible feedstock is strongly criticized by the Food and Agriculture Organization. Recently, microalgae have been identified as a source that can both purify wastewater and serve as an appropriate feedstock for biodiesel production. The high potential of microalgae to produce biodiesel and the low cost of recovery in large-scale production encourage investors to utilize microalgae for biodiesel production. Accordingly, selecting the best locations for microalgae cultivation has a great impact on the economic viability of biodiesel production from microalgae. This paper studies the application of a data envelopment analysis (DEA) approach to selecting the best locations for microalgae cultivation based on ecological and economic factors. The DEA method is applied to a real case in Iran. Moreover, the well-known principal component analysis and numerical taxonomy methods are used for verification and validation of the applied DEA approach. The results confirm the applicability of the DEA approach in selecting suitable locations for microalgae cultivation.
The convergence of computing and communication has resulted in a society that feeds on information. There is an exponentially increasing amount of information locked up in databases: information that is potentially important but has not yet been discovered or articulated (Witten & Frank, 2005). Data mining, the extraction of implicit, previously unknown, and potentially useful information from data, can be viewed as a result of the natural evolution of Information Technology (IT). The database field has evolved from data collection and database creation to data management, data analysis, and understanding. According to Han & Kamber (2001), the major reason that data mining has attracted a great deal of attention in the information industry in recent years is the wide availability of huge amounts of data and the imminent need for turning such data into useful information and knowledge. The information and knowledge gained can be used for applications ranging from business management, production control, and market analysis to engineering design and science exploration. In other words, in today's business environment it is essential to mine vast volumes of data to extract patterns that support superior decision-making, so the importance of data mining is becoming increasingly obvious. Many data mining techniques have been presented for various applications, such as association rule mining, sequential pattern mining, classification, clustering, and other statistical methods (Chen & Weng, 2008).
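To make concrete the support and confidence measures that underpin the association rule mining mentioned above, a minimal sketch (the basket data are invented):

```python
# Toy market-basket data; support and confidence are the two standard
# measures behind association rules of the form "antecedent => consequent".
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) over the transactions."""
    return support(antecedent | consequent) / support(antecedent)

s = support({"bread", "milk"})        # both items appear in 2 of 4 baskets
c = confidence({"bread"}, {"milk"})   # of 3 bread baskets, 2 contain milk
```

Rule discovery then amounts to keeping the rules whose support and confidence exceed user-chosen thresholds.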
In this paper, a modified composite index is developed to measure digital inclusion for a group of cities and regions. The developed model, in contrast to the existing benefit-of-the-doubt (BoD) composite index literature, considers the subindexes as non-compensatory. This new way of modeling results in three important properties: (i) all subindexes are taken into account when assessing the digital inclusion of regions and are not removed (substituted) from the composite index, (ii) in addition to an overall composite index (aggregation of the subindexes), partial indexes (aggregated scores for each subindex) are also provided so that weak performances can be detected more effectively than when only the overall index is measured, and (iii) compared with current BoD models, the developed model has improved discriminatory power. To demonstrate the developed model, we use the Australian digital inclusion index as a real-world example.
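For context, the classic benefit-of-the-doubt model that the paper modifies lets each unit choose the subindex weights most favourable to itself, subject to no unit scoring above one. A minimal sketch of that classic model (the subindex data are invented; this is not the non-compensatory model of the paper):

```python
import numpy as np
from scipy.optimize import linprog

def bod_index(Y, o):
    """Classic benefit-of-the-doubt composite index of unit `o`.
    Y: (subindexes x units); each unit picks its most favourable weights."""
    s, n = Y.shape
    c = -Y[:, o]                        # maximize w . y_o  (linprog minimizes)
    A_ub = Y.T                          # w . y_j <= 1 for every unit j
    b_ub = np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * s)
    return -res.fun

# hypothetical subindex scores for 3 regions on 2 subindexes
Y = np.array([[0.8, 0.6, 0.4],
              [0.5, 0.9, 0.3]])
indexes = [bod_index(Y, j) for j in range(3)]
```

Under this classic model a weak subindex can be fully compensated (even weighted zero), which is exactly the compensatory behaviour the paper's modification removes.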
Additional publications
He is the author/editor of several books including:
- Multi-Objective Combinatorial Optimization Problems and Solution Methods, ELSEVIER, ISBN: 978-0-12-823799-1, 2022.
- Optimization Problems in Economics and Finance, series on advanced economic issues (SAEI), Vol. 40. Ostrava: VSB-TU Ostrava, ISBN: 978-80-248-3837-3, 2015.
- Data Envelopment Analysis with selected models and applications, series on advanced economic issues (SAEI), Vol. 30, Ostrava: VSB-TU Ostrava, ISBN: 978-80-248-3738-3, 2014.
- Operations Research II, Modaresan Sharif, ISBN:978-964-187-609-0, 2012.
- Solution Manual of Problems in Operations Research, Azarakhsh, ISBN: 964-6294-68-5.
- MATHEMATICA Applications in Calculus, Azarakhsh, ISBN: 964-6294-72-2.
- 100 Programs in PASCAL, Azad University Press.
- 101 Programs in C++, Azad University Press, ISBN: 978-964-6493-83-4.
- GAMS User Guide with DEA Models, Nasher Kotob Daneshgahi, ISBN: 978-600-510-42-5.
- Introduction to Scientific Computing, Scholars' Press, ISBN: 978-3639511161.
- Visual FOXPRO User Guide, Azarakhsh, ISBN: 964-6294-33-2. (Translated to Persian)
- EXCEL for Beginners, Azarakhsh, ISBN: 964-6294-33-2. (Translated to Persian)
- Introduction to Operations Research, 6th Edition, Azarakhsh, ISBN: 64-6294-53-7. (Translated to Persian)
- Introduction to Operations Research, 7th Edition, Nasher Daneshgahi, ISBN: 978-964-01-1317-2. (Translated to Persian)
- Schaum’s Outline of Operations Research, University of Tehran Press, ISBN: 978-964-01-1317-2. (Translated to Persian)
Book chapters:
- Multi-Objective Combinatorial Optimization Problems and Solution Methods, Chapter 01, Multiobjective combinatorial optimization problems: social, keywords, and journal maps, ELSEVIER, ISBN: 978-0-12-823799-1, 2022.
- Multi-Objective Combinatorial Optimization Problems and Solution Methods, Chapter 10, Finding efficient solutions of the multicriteria assignment problem, ELSEVIER, ISBN: 978-0-12-823799-1, 2022.
- New Fundamental Technologies in Data Mining, Chapter 23, On Ranking Discovered Rules of Data Mining by Data Envelopment Analysis: Some New Models with Wider Applications, InTech Publisher, ISBN 978-953-307-547-1, 2011.