Seminar series

Room: 202C Peñalolén A200 Viña

Abstract

The design of questionnaires for performance assessment gives rise to several combinatorial optimization problems whose study is interesting because of their applications. This work focuses on one of these problems, in which questions carry a weight representing their difficulty and are initially divided into sets. The questionnaires must be built so that each contains one question from every set, minimizing the difference between the total difficulties of the resulting exams. This work establishes the complexity of the problem and presents several solution procedures: a constructive heuristic, a hybrid genetic algorithm, and a variable neighborhood search.
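As a rough illustration of the constructive-heuristic idea (a hedged sketch, not the authors' actual procedure), a greedy rule can repeatedly hand the hardest unassigned question of each set to the currently easiest exam; the instance below is made up:

    def build_questionnaires(question_sets, n_exams):
        """Greedy sketch: each exam receives exactly one question per set;
        within a set, harder questions go to the exams with lower totals.
        Assumes each set holds exactly n_exams difficulty weights."""
        totals = [0.0] * n_exams
        exams = [[] for _ in range(n_exams)]
        for weights in question_sets:  # one list of difficulty weights per set
            lightest_first = sorted(range(n_exams), key=totals.__getitem__)
            for w, j in zip(sorted(weights, reverse=True), lightest_first):
                exams[j].append(w)
                totals[j] += w
        return exams, max(totals) - min(totals)

    # toy instance: 3 sets of 4 questions each, distributed over 4 exams
    exams, spread = build_questionnaires([[5, 3, 2, 1], [4, 4, 2, 2], [6, 5, 3, 1]], 4)
    print(exams, spread)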

Bio:
Mariona Vilà Bonilla holds a degree in Chemical Engineering from the Universitat Politècnica de Catalunya (Spain) and a Ph.D. in Business Administration and Management from the same university. She has worked as a lecturer in the Department of Business Organization (DOE) of the Escuela Universitaria de Ingeniería Técnica Industrial de Barcelona since September 2011, where she teaches Production Management. Her research focuses on combinatorial optimization problems, such as packing problems and assembly line balancing.

Room: 307C Peñalolén A200 Viña (via Webcast)

Abstract

While Vehicle Routing Problems have now been studied extensively for more than 50 years, those in which some parameters are uncertain at the time when the routes are planned have received significantly less attention, in spite of the fact that there are many real-life settings where key parameters are not known with certainty.
In the first part of this talk, we will examine the main classes of Stochastic Vehicle Routing Problems: problems with stochastic demands, stochastic customers, and stochastic service or travel times. We will emphasize the main approaches for modeling and tackling uncertainty: a priori models, a posteriori approaches, and chance-constrained models. The second part of the talk will be devoted to a presentation of some of our recent work in the area.

Bio:
Michel Gendreau is Professor of Operations Research at the Department of Applied Mathematics and Industrial Engineering of École Polytechnique de Montréal (Canada). His main research area is the application of operations research methods to transportation and logistics systems planning and operation. Dr. Gendreau has published around 300 papers in peer-reviewed journals and conference proceedings. He is also the co-editor of six books dealing with transportation planning and scheduling, as well as with metaheuristics.
Dr. Gendreau was the Director of the Centre for Research on Transportation (formerly CRT and now CIRRELT) from 1999 to 2007. He completed his six-year term as Editor-in-Chief of Transportation Science at the end of 2014. In 2001, he received the Merit Award of the Canadian Operational Research Society in recognition of his contributions to the development of O.R. in Canada. He was elected Fellow of INFORMS in 2010. In 2015, Dr. Gendreau received the prestigious Robert Herman Lifetime Achievement Award of the Transportation Science & Logistics Society of INFORMS.

Room: TBD Peñalolén A200 Viña (via Webcast)

Abstract

Modeling nonstationarity in marginal distributions has been the focus of much recent literature in applied extreme value modeling. The simplest approach was popularized long ago by Davison and Smith (1990), and it is based on indexing the location and scale parameters of the generalized extreme value distribution by a predictor. But how should one model nonstationary multivariate or spatial extremes, if one must? Surprisingly, by comparison with the marginal case, approaches to modeling nonstationarity in the extremal dependence structure have received relatively little attention.
In this talk, I will discuss approaches for modeling nonstationary extremal dependence structures. The approaches to be discussed can be regarded as natural extensions of the Davison--Smith paradigm to the multivariate and spatial contexts.
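In the marginal setting the talk builds on, the covariate indexing takes a simple generic form (our illustration in standard notation, not necessarily the speaker's exact specification):

\[
Y(x) \sim \mathrm{GEV}\big(\mu(x), \sigma(x), \xi\big), \qquad
\mu(x) = \beta_0 + \beta^{\top} z(x), \qquad
\log \sigma(x) = \gamma_0 + \gamma^{\top} z(x),
\]

where z(x) is a vector of predictors; the talk concerns the analogous indexing of the extremal dependence structure itself.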

Bio:
Miguel de Carvalho is Assistant Professor of Applied Statistics at the Pontificia Universidad Católica de Chile. Before moving to Chile he was a post-doctoral fellow at EPFL (Swiss Federal Institute of Technology). He is an applied mathematical statistician with a variety of interdisciplinary interests, inter alia, Biostatistics, Econometrics, and Statistics of Extremes. Beyond serving in academia he is also a regular academic consultant of Banco de Portugal (the Portuguese central bank). He is currently an Associate Editor of the Annals of Applied Statistics (IMS) and of Statistics and Public Policy (ASA).

Room: 307C Peñalolén A200 Viña (via Webcast)

Abstract

In this talk I will give an overview of recent representative works from my two lines of research: (1) swarm intelligence and (2) the hybridization of metaheuristics with other techniques for optimization. The first part will show recent examples of taking inspiration from nature for designing distributed algorithms. The second part will focus on large neighborhood search and a new hybrid called Construct, Merge, Solve & Adapt in the context of combinatorial optimization.

Bio:
Christian Blum holds a doctorate in Applied Sciences from the Université Libre de Bruxelles and is a research professor of the Ikerbasque foundation for science in San Sebastián. His main research areas are hybrid metaheuristics and swarm intelligence algorithms (ant colony algorithms and particle swarm optimization). He is the author of several books, as well as an editor of various operations research and artificial intelligence journals.

Room: 201D Peñalolén A200 Viña (via Webcast)

Abstract

Agglomeration of facilities belonging to firms that compete with each other is common in practice, which suggests the existence of forces driving these facilities to locate in clusters. Shopping centers and food courts are everyday examples. For two facilities, Hotelling found in 1929 that the stable equilibrium locations, at which neither facility could increase its profit by relocating, correspond to both of them being sited beside each other at the center of the market. This has been called the principle of Minimum Differentiation, and it was thought to explain agglomeration. However, this principle depends critically on some of the settings which, when changed, lead to quite different results. This is because the agglomeration forces considered by Hotelling do not drive facilities to locate close to each other, but to locate closer to customers than the competitor.
The prevalence of competitive clusters in practice can be better explained by forces that push facilities toward each other. These forces have their origin in customer behaviors, including comparison shopping and multipurpose trips, and in firms' reactions to these behaviors. Although these forces have been adequately analyzed and explained in the economic literature, competitive location models in operational research have not taken them into consideration to date. This is particularly troublesome, as the locations prescribed by these models are rather dispersed, in disagreement with the examples that can be observed in real life.
We present a selective review of the economic literature dealing with agglomeration forces acting in a linear market. Our goal is to show how prescriptive models have included only "weak" agglomeration forces, and our hope is to attract the interest of operational research locators to including "strong" agglomeration forces in our models. We finally discuss the difficulties of including strong forces in prescriptive models.

Bio:
Vladimir Marianov is a Professor at the Department of Electrical Engineering of the Pontificia Universidad Católica de Chile. He earned M.S.E. and Ph.D. degrees from The Johns Hopkins University in 1987 and 1989, respectively, as well as an Electrical Engineering degree from Universidad de Chile in 1978. His teaching and research interests are in facility location and modeling, with applications in communications networks, nature reserve selection, location of public services, location under congestion, and location of competitive facilities. A member of the Editorial Advisory Board of Computers & Operations Research, OR Spectrum and TOP, he has published more than 60 papers in journals such as Computers & Operations Research, Annals of Operations Research, European Journal of Operational Research, Papers in Regional Science, Socio-Economic Planning Sciences, IIE Transactions, Journal of Regional Science, Journal of the Operational Research Society, Location Science, Environment and Planning B, Computers and Education, Computer Communications, Journal of Computer Assisted Learning, RAIRO Operations Research, OR Letters, and INFOR Journal (Canada). He has co-edited two books published by Springer, on facility location foundations and applications, respectively, and he is the author of several book chapters on the location of emergency systems, location in the public sector, the p-median model, Lagrangian relaxation, location of jails, and competitive location.

Room: 401C Peñalolén A200 Viña (via Webcast)

Abstract

Three data-driven mineral exploration approaches are studied and applied to define new target areas for copper ore deposit exploration in the north of Chile. The three approaches are motivated by three data availability scenarios: 1) information on the locations of existing deposits only; 2) mineral exploration data (geophysics, geology, faults, magnetics, etc.), but with a reduced number of known deposit locations; 3) the complete data case, but still with the class imbalance problem. For 1) we developed a deposit distribution density, for 2) we used self-organizing maps, and for 3) we used support vector machines (SVM) in combination with SMOTE, an over-sampling approach for the minority class. The results using data from the north of Chile suggest that attributes from the Magnetics, Geologic Unit, and Gravity data layers are most favorable for deposit prediction. All three approaches produced candidate target areas for exploration; in particular, the trained SVM was capable of identifying the existing deposits as well as proposing 9 new target areas.
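A minimal sketch of the SMOTE-plus-SVM combination for scenario 3), using scikit-learn and imbalanced-learn; the arrays X and y below are random placeholders, not the talk's geophysical data:

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))             # stand-in attribute layers (magnetics, gravity, ...)
    y = (rng.random(500) < 0.05).astype(int)  # heavily imbalanced labels: 1 = known deposit

    # over-sample the minority (deposit) class, then fit an SVM on the balanced data
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
    clf = SVC(kernel="rbf", probability=True).fit(X_res, y_res)

    # rank cells by predicted deposit probability to shortlist target areas
    scores = clf.predict_proba(X)[:, 1]
    print("top candidate indices:", np.argsort(scores)[::-1][:10])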

Bio:
Gonzalo is an Associate Professor at the Faculty of Engineering and Sciences of Universidad Adolfo Ibáñez in Santiago, Chile. He holds a B.S. and an M.S. in Electrical Engineering from the University of Chile, and a Ph.D. in Machine Learning from Cardiff University, UK. His research interests are learning algorithms for Bayesian networks, theoretical and computational aspects of Boolean networks (and their applications in gene regulatory network modeling), and data mining applications in industry.

Room: 406C Peñalolén A200 Viña (via Webcast)

Abstract

The Quadratic Eigenvalue Problem is to find the eigenvalues and eigenvectors of a quadratic matrix pencil of the form P(λ) = λ²M + λC + K, where M, C, and K are square matrices. Unfortunately, the problem has not been widely studied because of the intrinsic difficulties of solving it in a numerically effective way. Indeed, the state-of-the-art computational techniques are capable of computing only a few extremal eigenvalues and eigenvectors, especially if the matrices are large and sparse, which is often the case in practical applications. The inverse quadratic eigenvalue problem, on the other hand, refers to constructing the matrices M, C, and K, given the complete or partial spectrum and the associated eigenvectors. The inverse quadratic eigenvalue problem is equally important and arises in a wide variety of engineering applications, including mechanical vibrations, aerospace engineering, design of space structures, structural dynamics, etc.
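For context, the standard first companion linearization (a textbook device, not specific to this talk) rewrites P(λ)x = 0 as a generalized eigenvalue problem of twice the dimension:

\[
\begin{bmatrix} 0 & I \\ -K & -C \end{bmatrix}
\begin{bmatrix} x \\ \lambda x \end{bmatrix}
= \lambda
\begin{bmatrix} I & 0 \\ 0 & M \end{bmatrix}
\begin{bmatrix} x \\ \lambda x \end{bmatrix},
\]

whose 2n eigenvalues are exactly those of the pencil; this doubling of dimension is one reason large sparse quadratic problems are hard to handle directly.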
Of special practical importance is to construct the coefficient matrices from knowledge of only a partial spectrum and the associated eigenvectors. The greatest computational challenge is to solve the partial quadratic inverse eigenvalue problem using only the small number of eigenvalues and eigenvectors that are computable with the state-of-the-art techniques. Furthermore, computational techniques must be able to take advantage of exploitable physical properties, such as symmetry, positive definiteness, sparsity, etc., which are computational assets for the solution of large and sparse problems.
This talk will deal with two special quadratic inverse eigenvalue problems that arise in mechanical vibration and structural dynamics. The first one, the Quadratic Partial Eigenvalue Assignment Problem (QPEVAP), arises in controlling dangerous vibrations in mechanical structures. Mathematically, the problem is to find two control feedback matrices such that a small number of the eigenvalues of the associated quadratic eigenvalue problem, which are responsible for dangerous vibrations, are reassigned to suitably chosen ones, while keeping the remaining large number of eigenvalues and eigenvectors unchanged. Additionally, for robust and economic control design, these feedback matrices must be found in such a way that their norms are as small as possible and the condition number of the modified quadratic inverse problem is minimized. These considerations give rise to two nonlinear unconstrained optimization problems, known, respectively, as the Robust Quadratic Partial Eigenvalue Assignment Problem (RQPEVAP) and the Minimum Norm Quadratic Partial Eigenvalue Assignment Problem (MNQPEVAP). The other one, the Finite Element Model Updating Problem (FEMUP), arising in the design and analysis of structural dynamics, refers to updating an analytical finite element model so that a set of measured eigenvalues and eigenvectors from a real-life structure is reproduced and the physical and structural properties of the original model are preserved. A properly updated model can be used with confidence for future designs and constructions. Another major application of FEMUP is damage detection in structures. Solution of FEMUP also gives rise to several constrained nonlinear optimization problems. I will give an overview of recent developments on computational methods for these difficult nonlinear optimization problems and discuss directions for future research, including some open problems. The talk is interdisciplinary in nature and will be of interest to computational and applied mathematicians, control and vibration engineers, and optimization experts.

Bio:
Biswa Nath Datta is a Distinguished Research Professor at Northern Illinois University. He is a Professor in the Mathematical Sciences Department and an adjunct professor in the Electrical and Mechanical Engineering Departments at NIU. He has served as the Director of the Applications Involvement Component (AIC) of the Doctoral Program at NIU. He developed the current "Computational Mathematics" program and took a major active role in the development of the Mathematical Sciences Ph.D. program at NIU. Datta has also held visiting professorships at the University of Illinois at Urbana-Champaign, Pennsylvania State University, Southern Illinois University, the University of California at San Diego, and the State University of Campinas, Brazil, as well as at many other universities, research organizations, and companies around the world, including the Boeing Company and Wolfram Research, Inc.
His research interests are interdisciplinary, blending numerical linear algebra and scientific computing (including large-scale and high-performance computing) with control and vibration engineering. His research has produced computationally viable algorithms and high-quality engineering and scientific software packages for scientific computing and for the design and analysis of electrical and vibration control systems. He also pioneered research on large-scale and parallel computations in control. His current research involves the development of computationally viable and mathematically sound algorithms for "Active Vibration Control" and "Model Updating" of large vibration systems modeled by finite element techniques. The thrusts of this research are its applications to real-life problems arising in vibration industries, including automobiles, buildings, bridges, highways, and air and space craft, and the mathematical justification of many ad hoc industrial techniques which lack solid mathematical foundations. Datta has authored more than 115 research papers, two books, Numerical Methods for Linear Control Systems Design and Analysis and Numerical Linear Algebra and Applications, and several software packages, including the MATLAB-based toolboxes MATCONTROL and MATCOM, and the MATHEMATICA-based Advanced Numerical Methods. These packages are routinely used for classroom instruction and for academic and industrial research and development.
His research has been supported by NSF, the Air Force Office of Scientific Research, the US Department of Education, the Office of Naval Research (Japan), the Boeing Company, Wolfram Research, Inc., and numerous overseas funding agencies. In recognition of his research contributions, he has received several prestigious honors and awards. These include election as an IEEE Fellow in 2000, induction as an Academician of the Academy of Nonlinear Sciences (Russia) in 2002, Senior Fulbright Specialist Awards in 2006 and 2009, and several IEEE plaques and medals of honor. He is an IEEE Distinguished Lecturer and has been honored by several IEEE-sponsored conferences. Datta has served on the editorial boards of premier mathematics journals in his areas of expertise, such as SIAM Journal on Matrix Analysis and Applications and Linear Algebra and its Applications (Special Editor), and is currently serving on the editorial boards of about a dozen mathematics and engineering journals, including Numerical Linear Algebra with Applications, Mechanical Systems and Signal Processing, and Dynamical Systems. Datta has served as the vice-chair of the SIAM Linear Algebra Activity Group and has organized several successful interdisciplinary conferences sponsored by the American Mathematical Society, SIAM, MTNS (Mathematical Theory of Networks and Systems), and IEEE. He has also co-edited several proceedings books of these conferences.

Room: 303C (Peñalolén) and A210 (Viña)

Abstract

Motivated by the increasing adoption of wind and solar power generation in power systems, we present a multistage adaptive robust optimization model for the unit commitment problem, considering uncertainty in nodal net electricity loads. The key aspect of the proposed model is that it takes into account the time causality of dispatch decisions as uncertain parameters unfold in the hour-to-hour operation of the power system. To deal with large-scale instances, we explore the idea of simplified affine policies and develop a solution method based on constraint generation. Computational experiments on a 2736-bus system demonstrate the effectiveness of the proposed algorithm in practice and show that the proposed model can outperform deterministic and two-stage robust models in both operational cost and system reliability. This is joint work with Andy Sun, Eugene Litvinov and Tongxin Zheng.
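As a generic illustration of the affine decision rule idea (the talk's "simplified affine policies" are a structured restriction of this, and the notation below is ours, not the speakers'), dispatch can be made an affine function of the uncertain net loads revealed so far:

\[
p_{g,t}(d) \;=\; w_{g,t} \;+\; \sum_{b}\sum_{s \le t} W_{g,t}^{\,b,s}\, d_{b,s},
\]

where d_{b,s} is the uncertain net load at bus b in hour s, and the coefficients w and W become first-stage decision variables; restricting which W entries may be nonzero trades solution quality for tractability.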

Bio:
Álvaro Lorca is a Ph.D. Candidate in Operations Research at the School of Industrial and Systems Engineering at Georgia Tech. Previously, he received a degree in Industrial and Mathematical Engineering, and an M.Sc. degree in Industrial Engineering, from Pontificia Universidad Católica de Chile. His main research interests are in robust and stochastic optimization, power systems, and post-disaster logistics.

Room: 303C (Peñalolén), A210 Viña

Abstract

This talk will present one result of research started in 2001, following the 9/11 attacks on the World Trade Center in New York City, USA: a decision technology called MUNICIPAL (Multi-Network Interdependent Critical Infrastructure Program for the Analysis of Lifelines). This technology supports decision makers in the restoration of critical infrastructure systems after an extreme event. MUNICIPAL consists of four components: a vulnerability simulator, which predicts damage to infrastructure components given a specific disaster scenario; an optimization module, which produces a restoration plan given a damage scenario; a GIS interface to visualize and manipulate the data; and a database structured to support the data needs and integration of the other three modules. In addition, several research tasks currently being undertaken, or whose results are being prepared for publication, will be discussed.

Bio:
Professor William (Al) Wallace is Yamada Corporation Professor in the Industrial and Systems Engineering Department, with a joint appointment in the Civil and Environmental Engineering Department, at Rensselaer Polytechnic Institute, and was Director of Rensselaer's Center for Infrastructure and Transportation Studies. As a researcher and consultant in management science and information systems, he has over 40 years of experience in research on the development of decision technologies for industry and government. He is presently engaged in research on the application of agent-based technology to problems in incident management and emergency response, issues in trust and ethical decision making, resilient supply networks, and the study of emergent and improvisational behavior in social media immediately before and following a disaster. Professor Wallace's research has been supported by agencies and organizations such as the U.S. National Science Foundation, the U.S. Department of Homeland Security (including the U.S. Coast Guard), the U.S. Department of Transportation, and the Army Research Office, and has resulted in over 200 archival publications. He was a member of the National Research Council's Board on Infrastructure and the Built Environment and served on the National Research Council Committee on Social Science Research on Disasters. Professor Wallace received the International Emergency Management and Engineering Conference Award for Outstanding Long-Term Dedication to the Field of Emergency Management and the Institute of Electrical and Electronics Engineers (IEEE) Third Millennium Medal; he is a Fellow of the IEEE and received the 2004 INFORMS President's Award for work that advances the welfare of society. In addition, he was either Project Director or co-Project Director for research that resulted in the ITS-America "Best of ITS" award in the area of Research and Innovation and four project-of-the-year awards from ITS-New York.

Room: TBD

Abstract

In this talk we study properties of the t-branch split cuts introduced by Li and Richard (2008). A t-branch split cut is an inequality that is valid for the set obtained by removing from a polyhedron the union of t split sets. This notion is a generalization of split cuts (1-branch split cuts). Cook et al. (1990) showed that the split closure of a rational polyhedron is again a polyhedron, and Dash et al. (2013) showed that cross cuts (2-branch split cuts) also yield closures that are rational polyhedra. We further extend these results and show that the t-branch split closure is a polyhedron for all t = 1, 2, .... This is joint work with Sanjeeb Dash and Oktay Günlük (IBM Research).
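In standard notation (our summary, following the usual definitions), a split set is

\[
S(\pi, \pi_0) \;=\; \{\, x \in \mathbb{R}^n \;:\; \pi_0 < \pi^{\top} x < \pi_0 + 1 \,\}, \qquad (\pi, \pi_0) \in \mathbb{Z}^{n} \times \mathbb{Z},
\]

and a t-branch split cut is any inequality valid for \( P \setminus \bigcup_{i=1}^{t} S(\pi^i, \pi^i_0) \); the closure result says that intersecting all (infinitely many) such cuts still leaves a polyhedron.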

Bio:
Diego Morán is an Assistant Professor in the Grado Department of Industrial and Systems Engineering at Virginia Tech. He earned his Ph.D. in Operations Research from the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. Previously he attended the Universidad de Chile, earning a Mathematical Engineer degree (equivalent to a master’s degree in the U.S.) in 2009, and a Master of Science degree in Operations Management in 2009. His main research interest is Optimization with a focus on the theory and applications of mixed-integer programming.

Room: 309C Peñalolén and A210 Viña (via Webcast)

Abstract

Typical healthcare appointment scheduling problems assume that patients arrive punctually at their assigned appointment times, which is rarely true in practice, especially in outpatient clinics. Unpunctual arrivals lead to over-congested waiting rooms and to long physician idle times and overtime. In this paper we study the design of healthcare appointment systems when patient arrivals deviate from the scheduled appointment times by random amounts. We use a network flow model to capture the dynamics of the system and develop a copositive optimization model to solve the appointment scheduling problem. Our analysis using clinical data suggests it is important to account for unpunctual patient arrivals in the design of appointment policies.

Bio:
Qingxia Kong is an Assistant Professor of Operations at the UAI Business School. She graduated from the National University of Singapore in 2012. Her current research focuses on appointment scheduling in health care management, cancer screening guidelines, and behavioral decision making.

Room: 404C Peñalolén and A210 Viña (via Webcast)

Abstract

In this talk we introduce a new relaxation for the elementary shortest path problem under resource constraints. The elementarity constraint is replaced by partial elementarity using a novel cycle-breaking strategy.
We present ad hoc dominance rules and labeling algorithms, as well as preliminary results on some challenging instances of the capacitated vehicle routing problem (CVRP).

Bio:
Claudio Contardo is an Assistant Professor at the Business School of the Université du Québec à Montréal (ESG UQAM). He also holds a Mathematical Civil Engineering degree from the Universidad de Chile and a Ph.D. in Computer Science from the Université de Montréal. His research interests are in vehicle routing and scheduling, with applications in transportation.

Room: 404C Peñalolén and A210 Viña (via Webcast)

Abstract

An important problem in electronic commerce is that of finding optimal pricing mechanisms to sell a single item when the number of buyers is random and they arrive over time. In this paper we combine ideas from auction theory and recent work on pricing with strategic consumers to derive the optimal continuous pricing scheme in this situation. Our main assumption is that buyers are split between those who have a high valuation and those who have a low valuation for the item. We conclude that, depending on the specific instance, it is optimal either to use a fixed-price strategy or to use steep markdowns by the end of the selling season. To complement this optimality result, we prove that under a large family of price functions an equilibrium for the buyers exists. Finally, we tackle the general valuation situation using an approach based on optimal control theory.

Bio:
José Correa is an Associate Professor at the Industrial Engineering Department at Universidad de Chile. His research interests include Algorithmic Game Theory, Combinatorial Optimization, Scheduling, and Operations Research. He is an Associate Editor of Operations Research, RAIRO - Operations Research, and Mathematical Programming B.
José holds a B.S. and an M.S. in Mathematical Engineering from Universidad de Chile and a Ph.D. in Operations Research from MIT.

Room: 309C Peñalolén and A210 Viña (via Webcast)

Abstract

In a report published in 1937, Doeblin - who is regarded as the father of the "coupling method" - introduced an ergodicity coefficient that provided the first necessary and sufficient condition for the weak ergodicity of non-homogeneous Markov chains over finite state spaces. In today's jargon, this coefficient corresponds to the maximal coupling of the probability transition kernels associated with a Markov chain, and the Monte Carlo literature has (often implicitly) used it to draw perfectly from the stationary distribution of a homogeneous Markov chain over a Polish state space. Motivated by pattern problems in computational biology, in this talk I will show how Doeblin's ergodicity coefficient can be used to approximate the distribution of additive functionals of homogeneous Markov chains - rather than to characterize asymptotic objects such as stationary distributions. The proposed methodology (which is based on analytic combinatorics) leads to explicit, easy-to-compute error bounds in total variation distance, and may give access to approximations of additive functionals that are too long for exact calculation but also too short to rely on normal approximations or the stationarity assumptions underlying Poisson approximations. This research was partially funded by the NSF DMS Grant #0805950.
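For reference (a standard formulation, in our notation rather than the speaker's), for a transition kernel P on a finite state space, Doeblin's coefficient can be written as

\[
\alpha(P) \;=\; \min_{x,\,y} \sum_{z} \min\{P(x,z),\, P(y,z)\},
\]

and α(P) > 0 gives the total-variation contraction ||μP − νP||_TV ≤ (1 − α(P)) ||μ − ν||_TV, the property underlying both weak ergodicity criteria and coupling constructions.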

Bio:
Manuel Lladser is a mathematical civil engineer from Universidad de Chile. In 2003, he obtained his PhD in mathematics from the Ohio State University, and in 2012 he became an associate professor in applied mathematics at the University of Colorado in Boulder. Manuel is a probabilist; for the last several years, however, he has worked on problems related to computational biology.

Room: 402C Peñalolén and TBD Viña (via Webcast)

Abstract

Phi-divergences (Kullback-Leibler divergence, chi-squared distance, etc.) provide a natural way to create a set of distributions that are centered around a nominal distribution. In a data-driven setting, the nominal distribution is often determined by collected observations, expert opinions, results of simulations, etc. In this talk, we investigate the properties of phi-divergences for their use in data-driven stochastic optimization. In particular, we focus on two-stage ambiguous stochastic programs, a class of distributionally robust optimization problems. We present a condition for assessing the value of collecting additional data, and we discuss properties of the phi-divergence-based ambiguous program as more data are collected. A classification of phi-divergences elucidates their use for models with different properties and different sources of data. A decomposition-based solution algorithm to solve the resulting model is given. The results are demonstrated on a real-world multi-period water allocation problem with various sources of uncertain data including (1) future population growth, (2) availability of water, and (3) climate variability and how it relates to water usage.
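For reference, the phi-divergence between a distribution p and a nominal distribution q on m outcomes is

\[
D_{\varphi}(p \,\|\, q) \;=\; \sum_{i=1}^{m} q_i\, \varphi\!\left( \frac{p_i}{q_i} \right),
\]

with φ convex and φ(1) = 0; the choice φ(t) = t log t recovers the Kullback-Leibler divergence, and φ(t) = (t − 1)² the chi-squared distance. The ambiguity set is then {p : D_φ(p||q) ≤ ρ} around the nominal q, with the radius ρ shrinking as more data are collected.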

Bio:
Güzin Bayraksan is an associate professor in the Integrated Systems Engineering Department at the Ohio State University. Prior to joining OSU, she was a member of the Systems and Industrial Engineering Department and the Graduate Interdisciplinary Program in Applied Mathematics at the University of Arizona. She received her Ph.D. in Operations Research and Industrial Engineering from the University of Texas at Austin and her B.S. in Industrial Engineering from Bosphorus (Bogazici) University in Istanbul, Turkey. Her research interests are in stochastic optimization, particularly Monte Carlo sampling-based and data-driven methods for stochastic programming, with applications in water resources management. She is the recipient of the 2012 NSF CAREER award, the 2012 Five Star Faculty Award (UA), and the 2008 INFORMS best case study award. She currently serves as an elected member and treasurer of the Committee on Stochastic Programming (COSP), as president of the Forum for Women in Operations Research and Management Science (WORMS), and on the editorial board of IIE Transactions.

Room: 303C Peñalolén and A102 Viña (via Webcast)

Abstract

Hierarchies of mathematical programming formulations with different numbers of variables and constraints can have a major impact on the solutions obtained when these formulations are fed to a commercial solver. But even if the dimensions are literally the same, differences between formulations may affect solvability and the quality of results. We first show this by means of the Multiple-choice Multidimensional Knapsack Problem (MMKP), a well-known combinatorial optimization problem from the knapsack problem family. This includes relating the MMKP to a generalized set partitioning problem. We illustrate by means of one or two applications from reliability engineering (redundancy allocation) and maritime shipping and logistics (berth allocation).
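In its usual form (the standard textbook statement, included here only for reference), the MMKP selects exactly one item j from each class i so as to maximize value under multidimensional capacity constraints:

\[
\max \; \sum_{i}\sum_{j} v_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{i}\sum_{j} w_{ij}^{k}\, x_{ij} \le c^{k} \;\; \forall k, \qquad
\sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
x_{ij} \in \{0,1\};
\]

the choice constraints are what link it to set partitioning.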

Bio:
Prof. Stefan Voß is Chair and Director of the Institute of Information Systems within the Faculty of Business, Economics and Social Sciences at the University of Hamburg. He holds degrees in Mathematics (diploma) and Economics from the University of Hamburg, and a Ph.D. and the habilitation from the University of Technology Darmstadt. The main focus of Prof. Voß's interests lies in the fields of Information Systems, Supply Chain Management and Logistics, and Intelligent Search. He has an international reputation as a result of numerous publications in these fields. Current research projects consider, among others, problem formulations in the fields of Information Systems in Supply Chain Management and Computational Logistics, as well as Metaheuristics and Intelligent Search Algorithms in practical applications. In various areas the research of Prof. Voß and the members of his institute may be rated as world-class; this includes recent work related to container terminals as well as developments in metaheuristics. Prof. Voß is editor of Netnomics and Public Transport. He is the organizer of the ICCL (International Conference on Computational Logistics), among other international conferences.

Room: 309C Peñalolén and A210 Viña (via Webcast)

Abstract

In this talk, we give online and offline approximation algorithms for energy-efficient circuit routing protocols for a network of components that are speed-scalable and that may be shut down when idle. We consider both the case where the speed-scalable components are edges and the case where they are nodes. For the case of edges, we describe a polynomial-time offline algorithm that has a poly-log approximation ratio. The key step of the algorithm design is a random sampling technique that we call hallucination. The algorithm extends rather naturally to an online algorithm. The analysis of the online algorithm introduces a natural "priority" multicommodity flow problem, and bounds the priority multicommodity flow-cut gap. We will then explain the additional difficulty faced if the power management components are vertices, and explain how to surmount this difficulty in the offline case. This work spans several papers and is joint with Antonios Antoniadis, Nikhil Bansal, Sungjin Im, Ravishankar Krishnaswamy, Benjamin Moseley, Viswanath Nagarajan, and Kirk Pruhs.

Bio:
Clifford Stein is a Professor of Industrial Engineering and Operations Research and of Computer Science at Columbia University. He received his B.S.E. from Princeton University in 1987 and his Ph.D. from MIT in 1992, and was a faculty member at Dartmouth College from 1992 to 2001. His research interests include the design and analysis of algorithms, combinatorial optimization, operations research, scheduling, algorithm engineering, and internet algorithms. In addition to his many scientific papers, he has held a variety of editorial positions, including at the journals ACM Transactions on Algorithms, Mathematical Programming, Journal of Algorithms, SIAM Journal on Discrete Mathematics, and Operations Research Letters. He has been the recipient of an NSF CAREER Award and an Alfred P. Sloan Research Fellowship, and is a Fellow of the ACM. He is the chair of the Steering Committee for the Symposium on Discrete Algorithms and is the co-author of two textbooks: Introduction to Algorithms, with T. Cormen, C. Leiserson and R. Rivest, and Discrete Math for Computer Science, with K. Bogart and S. Drysdale.

Room: A210 Viña and 302C Peñalolén (via Webcast)

Abstract

The purpose of this lecture is to introduce humanitarian logistics and supply chain management (HLSCM) and discuss how it differs from its commercial counterpart. HLSCM covers both continuous aid and disaster relief, with the focus of the lecture being on relief after sudden-onset disasters. Disaster relief is a joint effort in which volunteers, companies, humanitarian organizations and armed forces aim at alleviating the suffering of those impacted by a disaster. Scholars within HLSCM aim at improving the efficiency and effectiveness of disaster relief by bringing lessons from the commercial side into the humanitarian context. While increasing the performance of disaster relief supply chains can lead to more lives saved with the same amount of resources, the application of commercial principles in the humanitarian domain is not straightforward. There is no customer demand - there are human needs. There are no companies competing over demand; there are organizations trying to meet basic needs in collaboration, while potentially competing for donations. Furthermore, will the focus on efficiency blur what is really effective for the long-term recovery of a community impacted by a disaster?

Bio:
Eija Susanna Meriläinen is a doctoral student at the Hanken School of Economics in Finland. She is also affiliated with the Humanitarian Logistics and Supply Chain Research Institute (HUMLOG) at the same institution. Through her research she tries to build a bridge between humanitarian logistics and supply chain management on the one hand and community resilience on the other, in order to shed light on how disaster relief can support the long-term recovery of a disaster-impacted community. The focus of her research is especially on the role of local actors in disaster relief and over the life cycle of a disaster. She has previously studied the reconstruction phase after the 2010/2011 Christchurch earthquakes and, prior to that, worked in various supply chain management posts within the FMCG industry.

Room: 307C (Edificio Postgrados) - Webcast to Room A210 Viña

Abstract

Optimization models are among the most widely used methodologies for decision support. In many situations, however, the resulting model still poses algorithmic challenges for efficient computational solution. Among the alternatives in the literature, so-called decomposition methods stand out, column generation being one of them. This presentation will review the background of this method and its application to the solution of an agricultural planning problem. The problem consists of partitioning an agricultural field into rectangular management zones that are as homogeneous as possible, based on soil data obtained with precision-agriculture tools. Such a partition enables sound agricultural management, which is fundamental to the operational aspects of working the field. The talk will cover the background of the problem, the proposed model, aspects of the algorithmic implementation of the solution method, the results achieved, and the conclusions obtained.
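To fix ideas, here is a self-contained column generation loop on the classic cutting-stock problem, meant only to illustrate the restricted-master/pricing interplay (the talk's agricultural zoning model is different, and all data below are made up):

    import numpy as np
    from scipy.optimize import linprog

    W = 10                                    # roll width
    sizes = np.array([3, 4, 5])               # item sizes (hypothetical)
    demand = np.array([30, 20, 15])

    # initial restricted master: one single-item pattern per item type
    patterns = [np.eye(len(sizes), dtype=int)[i] * (W // sizes[i])
                for i in range(len(sizes))]

    for _ in range(50):
        A = np.array(patterns).T              # columns = cutting patterns
        res = linprog(c=np.ones(A.shape[1]),  # minimize number of rolls used
                      A_ub=-A, b_ub=-demand,  # i.e. A x >= demand
                      method="highs")
        duals = -np.asarray(res.ineqlin.marginals)  # prices of the demand rows

        # pricing: unbounded knapsack max duals.a s.t. sizes.a <= W (DP on capacity)
        best = np.zeros(W + 1)
        last = np.full(W + 1, -1)             # last item added at each capacity
        for cap in range(1, W + 1):
            best[cap] = best[cap - 1]
            for i, s in enumerate(sizes):
                if s <= cap and best[cap - s] + duals[i] > best[cap]:
                    best[cap] = best[cap - s] + duals[i]
                    last[cap] = i

        if best[W] <= 1 + 1e-9:               # no column with negative reduced cost
            break
        new_col, cap = np.zeros(len(sizes), dtype=int), W
        while cap > 0:                        # rebuild the best pattern found
            i = last[cap]
            if i == -1:
                cap -= 1
            else:
                new_col[i] += 1
                cap -= sizes[i]
        patterns.append(new_col)

    print("LP bound: %.2f rolls, %d patterns generated" % (res.fun, len(patterns)))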

Bio:
Dr. Víctor M. Albornoz S. holds a degree in Mathematics from the Pontificia Universidad Católica de Chile, as well as graduate degrees from the same university: a Master of Science and a Doctorate in Engineering Sciences. Since 1999 he has been a full-time faculty member in the Departamento de Industrias of the Universidad Técnica Federico Santa María, where he regularly teaches undergraduate and graduate courses in operations research and operations management, and has on numerous occasions received distinctions as an outstanding and excellent teacher based on the evaluations of his own students.

His research areas lie within operations research, with special emphasis on modeling and solving problems in planning and stochastic programming. He is the author or coauthor of some twenty publications in indexed journals and of around 60 papers in the proceedings of national and international conferences. He has participated in and led research projects funded both internally and externally to his university. He is a member of several academic associations, including the Instituto Chileno de Investigación Operativa (ICHIO), of which he was previously president, and has served as vice-president of the Asociación Latinoamericana de Investigación Operativa (ALIO). He is currently an Associate Editor of the journal Pesquisa Operacional (SOBRAPO) and regularly participates in conferences sponsored by ALIO, EURO, and INFORMS.

Room: 202C (Edificio Postgrados) - Webcast to Room A210 Viña

Abstract

The complex optimization problems arising in the scheduling of operating rooms have received considerable attention in the recent scientific literature because of their impact on patient health on the one hand and on hospital costs and revenues on the other. In large part, the complexity stems from the stochastic nature of the problem, which in practice often leads to dynamic adjustments of daily schedules as the day progresses. However, the scientific literature on such adaptive scheduling approaches is scarce. Our contribution is to formally define various practically relevant adaptive scheduling problems and corresponding adaptive policies. The approaches take into account that surgical procedures and patients need preparation, and introduce the concept of committing. The core of the adaptive policies is formed by a pseudopolynomial algorithm that solves a general class of static stochastic operating room scheduling problems. Moreover, we present an extensive computational analysis, based on data derived from the literature as well as recent operating room schedules from the largest academic medical center in The Netherlands, and show that the benefit of the proposed adaptive policies over static ones is significant.

This is joint work with Guanlian Xiao and Willem van Jaarsveld.

Bio:
Joris van de Klundert (1967, Losser, The Netherlands) holds an MSc in Management Informatics from the Erasmus School of Economics, Erasmus University Rotterdam, and a Ph.D. in Operations Research from the School of Business & Economics, Maastricht University. He continued his academic career at Maastricht University, where he held appointments at the Department of Mathematics of the Faculty of Sciences and was appointed full professor of Value Chain Optimization in 2007 at the School of Business & Economics. During these years he founded and directed the university spin-off company MateUM, where he became involved in the optimization of health services. In 2009 Joris moved to the Institute of Health Policy and Management of Erasmus University Rotterdam to chair the Department of Health Services Management & Organisation. From 2011 until 2014 he served as Director of Education. Joris is a former president of the Dutch Operations Research Society. Presently, his main research interests are optimization problems concerning the effectiveness of health service networks.

Room: 202C (Edificio Postgrados) - Webcast to Room A210 Viña

Abstract

Strategic mine planning consists of designing a life-of-mine plan (20-50 years) for an ore deposit. That is, it specifies which part of a deposit will be extracted, how it will be extracted over time, and which products will be produced from this extraction. Mine planning is, in essence, a project management problem distinguished by its large scale (measured in the amount of data involved and the length of the time horizon), as well as by the great uncertainty present in the business (ore grades, commodity prices). In this seminar I broadly describe the effort of a group of researchers from different universities to understand, evaluate, and improve the way in which mining engineers currently carry out strategic mine planning. In particular, I will discuss how things are done in practice today, proposals for more modern methodologies, studies showing the advantages of those methodologies, and remaining challenges. This is part of work developed jointly with several researchers, including Daniel Espinoza, Eduardo Moreno, Gonzalo Muñoz, Alexandra Newman, Orlando Rivera, numerous summer interns, and others.

Bio:
Marcos Goycoolea holds a Doctorate in Industrial Engineering (2006) from the Georgia Institute of Technology, United States, and a Mathematical Civil Engineering degree (2001) from the Universidad de Chile. He has been a full-time faculty member of the Business School at Universidad Adolfo Ibáñez since 2006, where he currently serves as coordinator of the operations area. He teaches operations management courses at the undergraduate, master's, MBA, and Ph.D. levels.

His specialty is the development of computational methods for large-scale optimization, with particular emphasis on natural resource management. His research topics include the generation of cutting planes for mixed-integer programming, the traveling salesman problem, forest harvest planning, and mine operations planning.

For more information, visit his web page: http://mgoycool.uai.cl.

Room: 202C (Edificio Postgrados) - Webcast to Room A210 Viña

Abstract

Kidney exchanges enable end-stage renal disease patients with an incompatible living donor to exchange donors so that the patients involved can receive a transplant. Typically, the allocation of donors to patients is determined by a central authority. The allocation policy used by this authority has a substantial effect on the outcomes of the exchanges: it determines not only which patient-donor pairs are involved in an exchange but also with whom they exchange. In this paper we analyze the health outcomes of various allocation policies proposed in the literature and propose a new policy intended to maximize health value. We present a novel Markov model for the dynamic health status of patients by employing match-dependent transition probabilities. Using this model, we conduct long-term simulations with kidney exchange data from the Netherlands. In order to maximize health outcomes, we couple the Markov chain model to a branch-and-price algorithm that solves policy-specific integer programming models. Policies are evaluated in terms of quality-adjusted life years, equity, and number of transplants. We find that conventional allocation rules and criteria do not increase health value compared to a simple policy intended to maximize the number of transplants. However, under our newly proposed policy an improvement in quality-adjusted life years of 6% over current practice is possible. In particular, an improvement of 31% is possible for the group of patients that are left unmatched. Finally, we calculate an upper bound on the maximum health value that can be achieved by any allocation policy and show that our newly proposed policy comes 32% closer to this bound than existing policies. This is joint work with Kristiaan Glorie, Guanlian Xiao, and Joris van de Klundert.

Bio:
Joris van de Klundert (1967, Losser, The Netherlands) holds an MSc in Management Informatics from the Erasmus School of Economics, Erasmus University Rotterdam, and a Ph.D. in Operations Research from the School of Business & Economics, Maastricht University. He continued his academic career at Maastricht University, where he held appointments at the Department of Mathematics of the Faculty of Sciences and was appointed full professor of Value Chain Optimization in 2007 at the School of Business & Economics. During these years he founded and directed the university spin-off company MateUM, where he became involved in the optimization of health services. In 2009 Joris moved to the Institute of Health Policy and Management of Erasmus University Rotterdam to chair the Department of Health Services Management & Organisation. From 2011 until 2014 he served as Director of Education. Joris is a former president of the Dutch Operations Research Society. Presently, his main research interests are optimization problems concerning the effectiveness of health service networks.

Room: 309C (Edificio Postgrados)

Abstract

This work deals with a facility location problem in which the location and allocation policy is defined in two stages, such that the first-stage solution should be robust against the possible realizations (scenarios) of the input data that are only revealed in the second stage. This solution should be robust enough that it can be recovered promptly and at low cost in the second stage. In contrast to some related modeling approaches from the literature, this new recoverable robust model is more general in terms of the data uncertainty considered; it can address situations in which uncertainty may be present in any of the following four categories: provider-side uncertainty, receiver-side uncertainty, uncertainty in-between, and uncertainty with respect to the cost parameters.

For this novel problem, a sophisticated algorithmic framework based on a Benders decomposition approach is designed and complemented by several non-trivial enhancements, including dual lifting, branching priorities, matheuristics, and zero-half cuts. Two large sets of realistic instances that incorporate spatial and demographic information of countries such as Germany and the US (transportation) and Bangladesh and the Philippines (disaster management) are introduced. They are used to analyze in detail the characteristics of the proposed model and the obtained solutions, as well as the effectiveness, behavior, and limitations of the designed algorithm. (This is joint work with E. Álvarez-Miranda and E. Fernández.)

Bio:
Ivana Ljubic holds a PhD in computer science from the Vienna University of Technology (2004) and a master's degree in mathematics from the University of Belgrade (2000). She worked for two years in a company dealing with portfolio optimization before continuing her university career as a post-doc researcher at the University of Vienna (2007-2010). She was an Assistant Professor at the Vienna University of Technology (2010-2011), Visiting Assistant Professor at the University of Maryland (2011-2012), Visiting Researcher in the Research Group on Algorithm Engineering at TU Dortmund (summer semester 2012), Visiting Researcher in the COGA group at TU Berlin (2012-2013), and Visiting Professor at LAMSADE, University of Paris-Dauphine. She currently holds a tenure-track Assistant Professor position at the University of Vienna. She completed her habilitation in January 2013.

Ivana Ljubic's research interests include network design problems, combinatorial optimization, and optimization under uncertainty. She uses tools and methods of mixed integer programming, (meta-)heuristics, and their combinations for solving optimization problems with applications in telecommunications, the design of data and distribution networks, and bioinformatics. She received the PhD Fellowship of the Austrian Academy of Sciences (DOC Fellowship, 2003-2004), the PhD award of the Austrian Society for Operations Research (2005), the Hertha-Firnberg Post-Doc Fellowship of the Austrian Science Fund (2007-2010), and the APART Fellowship of the Austrian Academy of Sciences (2011-2013).

2014

Room: 309C (Edificio Postgrados)

Abstract

Stochastic transmission and generation expansion planning models are receiving increasing attention among researchers today. They are used to explicitly model uncertainties that result from the increasing penetration of renewable energy technologies, as well as from long-term market and regulatory conditions. However, existing commercial planning tools still lack stochastic capabilities. I propose a two-stage investment-planning model that takes the aforementioned uncertainties into account, and I describe scalable decomposition algorithms to solve large-scale problems. An application of the algorithms is illustrated using a 240-bus network representation of the Western Electricity Coordinating Council. I discuss their performance when implemented on both the Red Mesa/Sky high-performance computer and a commodity multi-core workstation.
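Generically (a textbook form, not necessarily the speaker's exact model), such two-stage stochastic expansion planning problems read

\[
\min_{x \in X} \; c^{\top} x + \sum_{s} p_s\, Q(x, \xi_s), \qquad
Q(x, \xi) = \min_{y \ge 0} \big\{\, q(\xi)^{\top} y \;:\; W y = h(\xi) - T(\xi)\, x \,\big\},
\]

where x collects investment decisions, y the operational recourse, and the scenarios ξ_s (with probabilities p_s) capture renewable, market, and regulatory uncertainty; decomposition algorithms exploit the fact that Q separates by scenario.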

Bio:
Francisco Munoz is a Postdoctoral Appointee in the Analytics Department at Sandia National Laboratories in Albuquerque, New Mexico, where he works on the research and development of short- and long-term planning tools for large-scale power systems. His interests include market design, stochastic programming, decision analysis, software development, power systems economics, and the application of operations research methods to investment and operations planning problems that leverage high-performance computers. Francisco holds a Ph.D. in Energy Economics and Operations Research from The Johns Hopkins University. He received a Bachelor's degree in Mechanical Engineering from the Universidad de Chile and an M.S.E. in Environmental Management and Economics from The Johns Hopkins University.

Room: 309C (Edificio Postgrados)

Abstract

Ad Exchanges are emerging Internet markets where advertisers may purchase display ad placements, in real time and based on specific viewer information, directly from publishers via a simple auction mechanism. Advertisers join these markets with a pre-specified budget and participate in multiple second-price auctions over the length of a campaign. This paper studies the competitive landscape that arises in Ad Exchanges and the implications for publishers' decisions. The presence of budgets introduces dynamic interactions among advertisers that need to be taken into account when attempting to characterize the bidding landscape or the impact of changes in the auction design. To this end, we introduce the notion of a Fluid Mean Field Equilibrium (FMFE), which is behaviorally appealing and computationally tractable, and which in some important cases yields a closed-form characterization. We establish that an FMFE approximates well the rational behavior of advertisers in these markets. We then show how this framework may be used to provide sharp prescriptions for key auction design decisions that publishers face in these markets. In particular, we show that ignoring budgets, a common practice in this literature, can result in significant profit losses for the publisher when setting the reserve price. (Joint work with Santiago Balseiro and Omar Besbes.)

Bio:
Gabriel Weintraub is the Sidney Taurel Associate Professor of Business at Columbia Business School. He holds a PhD in Management Science and Engineering and a MA in Economics, both from Stanford University. His research covers several subjects that lie in the intersection between operations/management science and microeconomics. He is particularly interested in developing mathematical and computational models for the economic analysis of problems in operations; as well as making contributions to industrial organization, computational economics, and market design.

Room: 112D (Edificio Talleres)

Abstract

The traveling salesman problem is easy to state: given a number of cities along with the cost of travel between each pair of them, find the cheapest way to visit them all and return to your starting point. Easy to state, but difficult to solve. In this talk we discuss the history, applications, and general impact of this fascinating problem.

Bio:
William Cook is a professor of Combinatorics and Optimization at the University of Waterloo, a member of the National Academy of Engineering, a Fellow of the American Mathematical Society, a Fellow of the Society for Industrial and Applied Mathematics (SIAM), and a Fellow of INFORMS. Together with David Applegate, Robert Bixby, and Vasek Chvatal, Cook created the Concorde computer code for the traveling salesman problem.

Room: 309C (Edificio Postgrados)

Abstract

Stochastic integer programming (SIP) is widely applicable because many problems require discrete decisions to be made in the face of uncertainty. Decomposition algorithms are usually required to solve stochastic programs because a large number of scenarios may be needed to represent the uncertainty. Benders decomposition can be used to efficiently solve the linear programming (LP) relaxation of an SIP, but the resulting LP relaxation may be weak, leading to a large branch-and-bound tree. In this talk, I discuss techniques for improving the LP relaxation when using a Benders decomposition approach. I will begin with a motivating application in service systems scheduling and discuss a novel application of Mixed-Integer Rounding that we found effective for solving the resulting two-stage stochastic integer program. I will then discuss our results on the use of split cuts within Benders decomposition in two different ways: directly in the master problem, or within the subproblems. We investigate the relative strengths and weaknesses of the two approaches and present numerical results to illustrate these insights.

This is joint work with Merve Bodur, Sanjeeb Dash, and Oktay Gunluk.
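
As background for the talk, here is a minimal sketch of the Benders (L-shaped) loop it builds on, for a toy two-stage problem: order x units at unit cost c, then pay a penalty q per unit of unmet scenario demand. The toy data are invented, the recourse subproblem is solved in closed form, and none of the MIR or split-cut strengthening discussed in the talk is included; scipy.optimize.linprog solves the master.

import numpy as np
from scipy.optimize import linprog

# Toy data: unit order cost, shortage penalty, equiprobable demand scenarios.
c, q = 1.0, 3.0
demands = np.array([4.0, 7.0, 10.0])
probs = np.full(3, 1 / 3)

def expected_recourse(x):
    """E[Q(x)] for Q(x, d) = q * max(d - x, 0), plus a subgradient in x."""
    shortfall = np.maximum(demands - x, 0.0)
    value = q * float(probs @ shortfall)
    subgrad = -q * float(probs @ (demands > x))
    return value, subgrad

cuts, x_k, ub = [], 0.0, float("inf")
for it in range(20):
    val, g = expected_recourse(x_k)
    ub = min(ub, c * x_k + val)                  # upper bound at x_k
    cuts.append((val - g * x_k, g))              # theta >= intercept + g*x
    # Master over (x, theta): min c*x + theta s.t. all cuts collected so far.
    A = [[g_i, -1.0] for (_, g_i) in cuts]       # g_i*x - theta <= -intercept
    b = [-intercept for (intercept, _) in cuts]
    res = linprog([c, 1.0], A_ub=A, b_ub=b, bounds=[(0, 20), (0, None)])
    x_k, lb = res.x[0], res.fun                  # lower bound from master
    if ub - lb < 1e-6:
        break
print("order quantity:", round(x_k, 3), " optimal cost:", round(ub, 3))

Each iteration adds one optimality cut, a supporting hyperplane of the expected recourse function, and stops when the master's lower bound meets the best upper bound.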

Bio:
Jim Luedtke is an Associate Professor in the Department of Industrial and Systems Engineering at the University of Wisconsin-Madison. Luedtke earned his Ph.D. at Georgia Tech and did a postdoc at the IBM T.J. Watson Research Center. His research lies in the areas of stochastic and mixed-integer programming, with interest in applications such as service systems and energy. Luedtke is a recipient of an NSF CAREER award, was a finalist in the INFORMS JFIG Best Paper competition, and was awarded the 2013 INFORMS Optimization Society Prize for Young Researchers. He is the INFORMS Optimization Society Secretary/Treasurer and served as an organizer of the 2012 Mixed Integer Programming Workshop and as chair of the 2013 edition.

Room: 202C (Edificio Postgrados)

Abstract

This seminar presents the mathematical formulation and control architecture of a stochastic-predictive energy management system for isolated microgrids. The proposed strategy addresses uncertainty using a two-stage decision process combined with a receding horizon approach. The first-stage decision variables (Unit Commitment) are determined using a stochastic Mixed-Integer Linear Programming formulation, whereas the second-stage variables (Optimal Power Flow) are refined using a Nonlinear Programming formulation. This novel approach was tested on a modified CIGRE test system under different configurations, and the results were compared with those of a deterministic approach. The results show that the method properly accounts for uncertainty in the power forecast.
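
The following Python sketch shows the shape of such a receding-horizon, two-stage loop on a deliberately tiny example: a single dispatchable unit is committed against sampled renewable scenarios, then redispatched once the actual output is observed. The load profile, costs, and uniform forecast are all invented, and the real system's MILP/NLP stages are reduced here to enumeration and a closed-form dispatch.

import random

random.seed(1)
CAP, FUEL, FIXED, SHED = 10.0, 2.0, 12.0, 50.0   # unit size and costs
loads = [2.0] * 8 + [9.0] * 12 + [2.0] * 4       # toy 24-hour load profile

def stage_cost(on, load, renewable):
    """Second stage: dispatch the committed unit against realized
    renewable output, paying a high penalty for unserved load."""
    gen = min(CAP, max(load - renewable, 0.0)) if on else 0.0
    unserved = max(load - renewable - gen, 0.0)
    return (FIXED if on else 0.0) + FUEL * gen + SHED * unserved

def commit(load, scenarios):
    """First stage: choose on/off minimizing expected dispatch cost."""
    return min((False, True),
               key=lambda on: sum(stage_cost(on, load, w)
                                  for w in scenarios) / len(scenarios))

total = 0.0
for load in loads:                               # receding-horizon loop
    scenarios = [random.uniform(0, 10) for _ in range(200)]  # forecast draws
    on = commit(load, scenarios)
    realized = random.uniform(0, 10)             # uncertainty revealed
    total += stage_cost(on, load, realized)
print("24-hour cost:", round(total, 2))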

Bio:
Daniel E. Olivares was born in Santiago, Chile. He received the B.Sc. and Engineer degrees in electrical engineering from the University of Chile, Santiago, in 2006 and 2008, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Waterloo, Waterloo, ON, Canada, in 2014. He is currently an Assistant Professor in the Electrical Engineering Department of the Pontificia Universidad Católica de Chile, Santiago, Chile. His research interests include the modelling, simulation, control and optimization of power systems in the context of smart grids.

Room: 202C (Edificio Postgrados)

Abstract

The goal of this talk is to present an approach, together with the tools that derive from it, for addressing the sustainability and recovery of certain renewable natural resources. The general setting is that of (deterministic) discrete-time controlled dynamical systems, in which the state variables describe the resource(s) over time and the controls represent the extraction actions. In this framework, sustainability is defined using tools from Viability Theory: it is associated with requirements (defined by stakeholders) that the sequence of states and controls must satisfy over time. If these requirements can be met starting from the current state of a resource, we say the resource is sustainable with respect to the requirements considered. Conversely, given the state of the resource, it is important to know which requirements could be imposed so that a sustainable situation is achieved. If a given stakeholder group (industrial or artisanal fishers, scientific committees, the government, environmentalists, etc.) wants certain requirements to be satisfied over time, but this is impossible from the current state of the resource, then we face a recovery problem: steering the system so that, within some time, the state of the resource can meet the stakeholders' stated needs in a sustainable way. The recovery problem, for over-exploited resources for example, is posed in terms of the state variables reaching a target set defined through the sustainability concepts introduced earlier, and several optimization problems arise from this stage. Finally, I will describe the conditions under which we have developed this approach and the mathematical challenges involved in tackling a wider range of problems with more complex models and objectives. Some of the results and developments to be presented concern the recovery of the common hake fishery in the Valparaíso Region.
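
A minimal sketch of the viability computation behind this framework, for a hypothetical single-stock model: the state is the biomass, the control is the catch, and the "requirements" are a minimum catch and a biological floor. The dynamics, thresholds, and grid resolution are all invented; the backward fixed-point iteration below approximates the viability kernel, i.e., the set of sustainable states.

import numpy as np

# Toy stock dynamics: x(t+1) = x(t) + r*x(t)*(1 - x(t)/K) - h(t).
r, K = 0.6, 100.0
H_MIN, H_MAX = 10.0, 12.0   # required minimum catch, gear capacity
X_MIN = 20.0                # biological safety threshold

grid = np.linspace(0.0, K, 401)
controls = np.linspace(H_MIN, H_MAX, 21)

def step(x, h):
    return x + r * x * (1 - x / K) - h

def nearest(v):
    return np.abs(grid - np.clip(v, 0.0, K)).argmin()

# Backward fixed point: keep the states from which some admissible catch
# stays above X_MIN and lands back inside the current candidate kernel.
viable = grid >= X_MIN
while True:
    nxt = np.array([v and any(step(x, h) >= X_MIN and viable[nearest(step(x, h))]
                              for h in controls)
                    for x, v in zip(grid, viable)])
    if np.array_equal(nxt, viable):
        break
    viable = nxt
xs = grid[viable]
print("sustainable states: roughly [%.1f, %.1f]" % (xs.min(), xs.max())
      if xs.size else "no state satisfies the requirements")

States outside the computed interval cannot satisfy the requirements forever, which is exactly where the recovery problem described above begins.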

Bio:
Pedro Gajardo is an Associate Professor in the Department of Mathematics at Universidad Técnica Federico Santa María (UTFSM). From 2011 until September 2014 he served as the university's Vice-Rector for Economic Affairs. He earned his degree in Mathematical Engineering (2001) from the Universidad de Chile and then completed a Ph.D. in Applied Mathematics (2004) at the same university, jointly supervised with the University of Avignon (France). His research topics include variational analysis, the analysis and optimization of bioprocesses, and the management of fishery resources.

Room: 309C (Edificio Postgrados)

Abstract

We study the design of healthcare appointment systems to minimize the weighted sum of the expected total waiting time of patients and the expected physician overtime when service durations are random. The challenge is to find the best sequencing rule for a group of mixed patients. This is a notoriously difficult problem: the optimal sequence is known only for two patients, using tools from stochastic ordering. On the other hand, numerous studies report that sequencing patients in increasing order of the variance of their service durations (the Smallest-Variance-First, or SVF, rule) performs extremely well in many environments. While this seems intuitive, we use results from random walk theory to argue that, in general, it is not true. In particular, we show that the optimal sequence depends on the mix of patients in the problem: it may be advantageous to schedule some patients with higher variability early in the session, so that succeeding patients can cushion the impact on other patients later in the session. This trade-off partly explains why past attempts to apply the theory of stochastic ordering to this problem have failed to produce a reasonable answer. We also introduce a deterministic variant of the appointment sequencing problem and prove that even this version (in which service durations are known) is NP-hard.
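
The trade-off is easy to probe numerically. The sketch below evaluates every sequence of four hypothetical patients by simulation, using truncated-normal service times and common random numbers, and compares the SVF order with the best order found by brute force; all parameters are invented.

import itertools, random

random.seed(7)
SLOT, OT_WEIGHT, REPS = 10.0, 2.0, 2000
means, sds = [10, 10, 10, 10], [1, 4, 8, 8]   # four hypothetical patients

# Common random numbers: pre-draw service times so that every sequence
# is evaluated on exactly the same scenarios.
scenarios = [[max(0.0, random.gauss(means[i], sds[i])) for i in range(4)]
             for _ in range(REPS)]

def cost(seq):
    """Expected total waiting plus weighted overtime when patients are
    given back-to-back appointment slots of length SLOT."""
    total = 0.0
    for s in scenarios:
        finish = wait = 0.0
        for k, i in enumerate(seq):
            start = max(finish, k * SLOT)    # patient i is booked at k*SLOT
            wait += start - k * SLOT
            finish = start + s[i]
        total += wait + OT_WEIGHT * max(0.0, finish - len(seq) * SLOT)
    return total / REPS

svf = tuple(sorted(range(4), key=lambda i: sds[i]))   # Smallest-Variance-First
best = min(itertools.permutations(range(4)), key=cost)
print("SVF sequence :", svf, round(cost(svf), 2))
print("best sequence:", best, round(cost(best), 2))

Depending on the mix, the brute-force optimum can differ from SVF, which is the phenomenon the talk explains through random walk theory.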

Bio:
Qingxia Kong is an Assistant Professor of Operations at the UAI Business School. She graduated from the National University of Singapore. Her current research focuses on appointment scheduling in health care management, cancer prevention guidelines, and behavioral decision making.

Room: 309C (Edificio Postgrados)

Abstract

For large wine-making companies, bottling is a critical process. It can determine the financial success or failure of a company because it immobilizes a large amount of working capital in materials (bottles, capsules, labels, and boxes), equipment (bottling and labeling), and labor. Bottles and packaging material can account for over 30% of the total cost of the final product (Gonzalez 2006), and bottling is a labor-intensive activity requiring crews of up to six workers per shift. Moreover, inefficient use of the bottling lines may cause delays in meeting customers' orders, which can lead to the loss of clients. All of this makes planning and sequencing the bottling lines important for the success of the business.

To support the winery bottling decision process, we have developed a model that produces solutions to the wine bottling lot-sizing and scheduling problem with sequence-dependent setup times within a time frame adequate for implementation by large wineries. The model incorporates aspects particular to the wine bottling problem: major/minor setups, sequence-dependent setup times, crewing limitations, and sanitation and traceability constraints. In addition, using a bi-criteria approach previously employed by Ehrgott and Ryan (2002), we introduce a robust scheduling approach.

Finally, we implemented an effective decomposition algorithm that exploits the structure of the problem to produce good solutions, and that can be applied to other families of sequence-dependent scheduling and lot-sizing problems. We use the major/minor setup structure to decompose the problem into a two-stage iterative optimization problem. This decomposition allows us to parallelize the optimization, which significantly reduces the solution time. Our computational results indicate that the model achieves reductions of 30% in total plan costs with respect to the wineries' current plans. Introducing demand and capacity robustness produces solutions that are stable and greatly reduces the need for rescheduling after momentary line breakdowns or the arrival of emergency orders, without significantly increasing the optimal plan cost. Finally, we present a visualization and solution-intervention decision support system that is currently being implemented by a large winery.
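
A schematic of the two-stage idea, with invented data and changeover rules (the real model, of course, also handles lot sizes, crews, and sanitation constraints): a first stage fixes which wine types run in which order, each incurring a major setup, and a second stage sequences each type's SKUs independently to minimize minor setups, which is what makes the subproblems parallelizable.

import itertools

# Hypothetical orders grouped by wine type, as (bottle size, label) SKUs.
orders = {
    "cabernet":  [("750ml", "export"), ("750ml", "domestic"), ("magnum", "export")],
    "merlot":    [("750ml", "domestic"), ("375ml", "domestic")],
    "carmenere": [("375ml", "export"), ("750ml", "export"), ("375ml", "domestic")],
}
MAJOR = 45   # minutes for tank change and sanitation between wine types

def minor(a, b):
    """Minor setup: a label swap (5 min) if the bottle size is unchanged,
    otherwise a size changeover (15 min)."""
    return 5 if a[0] == b[0] else 15

def sequence_block(skus):
    """Second stage: sequence one type's SKUs to minimize minor setups.
    The blocks are independent, which is what makes them parallelizable."""
    return min(itertools.permutations(skus),
               key=lambda p: sum(minor(p[i], p[i + 1]) for i in range(len(p) - 1)))

total = 0   # the first stage here is simply the fixed order of wine types
for wine, skus in orders.items():
    seq = sequence_block(skus)
    setup = MAJOR + sum(minor(seq[i], seq[i + 1]) for i in range(len(seq) - 1))
    total += setup
    print(wine, "->", [f"{size}/{label}" for size, label in seq], "| setup:", setup)
print("total setup minutes:", total)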

Bio:
Alejandro Mac Cawley is an Assistant Professor at the Pontificia Universidad Católica de Chile, with a joint appointment in the Department of Industrial and Systems Engineering of the School of Engineering and the Department of Agricultural Economics of the Faculty of Agronomy and Forestry. He holds an Agronomy degree from the Pontificia Universidad Católica and a Ph.D. from the School of Industrial and Systems Engineering at the Georgia Institute of Technology. His research focuses on supply chain optimization in the wine, fresh produce, dairy, and flower industries, and he has carried out a range of consulting projects with food-producing companies. He has taught courses on agricultural supply chains and Operations Research for agronomy students, and teaches Supply Chain Management in the Wine MBA in Bordeaux, France. He has written numerous articles in national and international journals and has lectured, both in Chile and abroad, on food operations and logistics.

Room: 406C (Edificio Postgrados)

Abstract

We consider a natural scheduling problem where we need to assign a set of jobs to machines in order to minimize the makespan, i.e., the time at which the last job finishes. In our variant, jobs may be split into parts, and each part can be (simultaneously) processed on a different machine. Splitting jobs can help balance the load of the machines, but it is not free: each part requires a setup time, which increases the processing requirement of the job. I will give an introduction to the topic and present algorithms based on rounding and linear programming that yield solutions with guaranteed quality. Our main result is an algorithm whose solutions have makespan at most 1 + φ times the optimum, where φ ≈ 1.618 is the golden ratio. No knowledge of the area will be assumed.
This is joint work with J. R. Correa, A. Marchetti-Spaccamela, J. Matuschke, O. Svensson, L. Stougie and V. Verdugo.
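
To see the trade-off the abstract describes, here is a simple Python sketch (not the LP-rounding algorithm from the talk): a wrap-around heuristic that splits jobs greedily across machines, with every part paying a setup s, combined with a binary search on the target makespan. The data are invented.

def schedule(jobs, m, s, T):
    """Greedily wrap jobs across m machines with deadline T; every part
    placed on a machine costs its processing time plus a setup of s."""
    loads, i = [0.0] * m, 0
    for p in jobs:
        while p > 1e-9:
            if i >= m:
                return None                  # does not fit within T
            room = T - loads[i] - s          # capacity left after one setup
            if room <= 1e-9:
                i += 1
                continue
            part = min(p, room)
            loads[i] += s + part
            p -= part
    return loads

def min_makespan(jobs, m, s):
    """Binary search for the smallest deadline the heuristic can pack."""
    lo, hi = s, sum(jobs) + s * len(jobs)
    for _ in range(60):
        mid = (lo + hi) / 2
        if schedule(jobs, m, s, mid) is None:
            lo = mid
        else:
            hi = mid
    return hi

jobs, machines, setup = [7.0, 5.0, 4.0, 4.0, 2.0], 3, 1.0
T = min_makespan(jobs, machines, setup)
print("heuristic makespan:", round(T, 3))
print("machine loads:", [round(l, 3) for l in schedule(jobs, machines, setup, T)])

Setting setup = 0 recovers McNaughton-style perfect balancing; as s grows, splitting becomes progressively less attractive, which is the tension the 1 + φ algorithm navigates with LP techniques.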

Bio:

José Verschae is an assistant professor with a joint position in the Department of Mathematics and the Department of Industrial Engineering at the Pontifical Catholic University of Chile. His research area is combinatorial optimization, in particular approximation and online algorithms for scheduling and network design problems. He finished his Ph.D. at TU Berlin in 2012, and until 2014 he held a Fondecyt postdoctoral project hosted at the University of Chile.

Room: 202C (Edificio Postgrados)

Abstract

The development of services such as LinkedIn, Facebook, and Twitter has fueled interest in analyzing massive graphs, where the edges represent a particular relation between the participating nodes. Because of their size, the computations exceed the capacity of a single centralized computer, and one must resort to distributed computation on clusters. In the search for a suitable computational paradigm, the nodes appear to be natural units of distributed computation. The edges, since they do not represent communication channels, do not restrict the flow of communication. However, because of the relative cost of exchanging messages among the computers in a cluster, the size of the basic unit of communication is one of the main factors under study. In 2010 Google introduced Pregel. Despite its success in processing simple transactional data on clusters of commodity computers, the MapReduce model had proven inefficient for solving analytic problems on massive graphs; the iterative character of the algorithms involved and data-locality issues made the ideas implemented in Pregel gain popularity. Since then we have witnessed a series of developments that, in addition to the synchronous computation model of Pregel, explore asynchronous models of computation, as in GraphLab, for example. These practical developments reinforce the theoretical interest. Our objective is to model and analyze the computational power of this kind of distributed system. Communication takes place through a shared "blackboard": each node can read the contents of the blackboard and, when activated, write a short message as a function of its local knowledge. When the protocol ends, its output is computed from the final contents written on the blackboard. Analyzing the effects of synchrony and of the sequencing of the nodes' computations, we describe four synchronization models for access to the blackboard, and we show that message size and synchronization power constitute two orthogonal hierarchies for these systems.
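
As a point of reference for the vertex-centric style mentioned above, here is a minimal Pregel-like superstep loop in plain Python, computing connected components by iterated label propagation on an invented toy graph. It illustrates "think like a vertex" with synchronous barriers, not the blackboard model analyzed in the talk, where nodes would instead write short messages to a shared board.

# Every vertex repeatedly broadcasts the smallest vertex id it has seen;
# when no vertex learns anything new, labels identify the components.
edges = [(0, 1), (1, 2), (3, 4)]
neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, []).append(v)
    neighbors.setdefault(v, []).append(u)

label = {v: v for v in neighbors}      # initial value: own id
active = set(neighbors)                # vertices with new information
superstep = 0
while active:                          # barrier between supersteps
    inbox = {v: [] for v in neighbors}
    for u in active:                   # active vertices send messages
        for v in neighbors[u]:
            inbox[v].append(label[u])
    active = set()
    for v, msgs in inbox.items():      # vertices process their inboxes
        if msgs and min(msgs) < label[v]:
            label[v] = min(msgs)
            active.add(v)
    superstep += 1
print("components after", superstep, "supersteps:", label)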

Bio:
Karol Suchan is an Associate Professor in the Faculty of Engineering and Sciences (FIC), UAI. He received his master's degree in applied mathematics from the AGH University of Science and Technology in Krakow (Poland, 2003) and his doctorate in computer science from the University of Orléans (France, 2006). After two years as a postdoc in the Department of Mathematical Engineering at the Universidad de Chile, he joined the FIC UAI. As of this year, he is Director of the ICT Research Center of the same faculty. His current theoretical research focuses on efficient computation and models of distributed computation, while his applied work addresses data analysis of large (especially social) systems and the use of networks of smart devices.

Room: 303C (Edificio Postgrados)

Abstract

Long-term investments such as Private Equity (PE) present timing differences between cash inflows and outflows. They are also more illiquid than public equity quoted in the market. When the standard internal rate of return (IRR) is used to measure performance, key information is lost because of these timing issues. As a consequence, one can arrive at allocations that leave no liquidity to meet future PE commitments.
To avoid this, we present a multistage stochastic optimization model that allows assets to be included as free-cash-flow projects. With this model we analyze the amount allocated to PE relative to public equity under different settings. In particular, we examine how allocation and performance change with investors' target returns and risk-aversion levels, with the volatility of PE cash flows, and with the correlation between PE cash flows and public equity returns.

Bio:
Lorenzo Reus is currently a professor in the Faculty of Engineering and Sciences at Universidad Adolfo Ibanez, Chile. He graduated as an Industrial Civil Engineer from the Universidad de Chile in 2008 and holds a master's degree in operations management from the same university. In 2013 he obtained a Ph.D. in Operations Research and Financial Engineering from Princeton University. His main interests are optimization models and tools under uncertainty for financial asset allocation problems, including robust and stochastic optimization models for portfolios with Private Equity and Hidden Markov Model methods for currency futures strategies. He also has experience in modeling and solving real-world problems, such as designing aircraft fleet itineraries under flight disruptions and vehicle routing with delivery time constraints.

Room: 308C (Edificio Postgrados)

Abstract

The building block for estimating customer choice is the specification of a choice model, either parametric or nonparametric. In this work, we take a non-parametric approach. We use a simple, though quite general, non-parametric choice model in which customer types are defined by their rank ordering of all alternatives (along with the no-purchase alternative). When faced with a choice from an offer set, a customer is assumed to purchase the available product that ranks highest in her preference list -- or she does not purchase at all if the no-purchase alternative ranks higher than any available product. Customers that share similar preference lists constitute a “type”. With a finite number of alternatives, there is a finite number of rankings and hence a finite number of customer types. Demand is then described by a discrete probability mass function (pmf) on the set of customer types.
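
The choice rule just described fits in a few lines of Python. In this hedged sketch, the product names, preference lists, and type probabilities are invented; the point is only the mechanics of ranking-based choice.

NO_BUY = None
types = {   # preference lists (most preferred first) and their pmf
    ("a", "b", NO_BUY): 0.5,
    ("b", NO_BUY, "a"): 0.3,
    (NO_BUY, "a", "b"): 0.2,
}

def choose(ranking, offer_set):
    """A customer buys the highest-ranked available product, or nothing
    if the no-purchase option outranks everything on offer."""
    for item in ranking:
        if item is NO_BUY:
            return NO_BUY
        if item in offer_set:
            return item
    return NO_BUY

def choice_probabilities(offer_set):
    """Aggregate demand: mix the deterministic type choices by the pmf."""
    probs = {}
    for ranking, weight in types.items():
        pick = choose(ranking, offer_set)
        probs[pick] = probs.get(pick, 0.0) + weight
    return probs

print(choice_probabilities({"a"}))        # the "b"-first type walks away
print(choice_probabilities({"a", "b"}))

Aggregated this way, the type pmf fully determines the demand for any offer set, which is what the estimation procedure in the talk recovers from panel data.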

We assume that a set of customers make repeated purchases from the firm. The canonical example is customers buying groceries on a weekly basis from a grocery retailer, but more broadly the setting includes any application in which customers exhibit loyalty through repeated purchases be it apparel, hotel, airline, etc. The goal of the retailer is to use the information from the repeated interactions to learn the preferences of the customer and customize the offering (both the assortment and the prices) in response to the learned preferences. We assume that the retailer has collected purchase transactions tagged by customer id. This type of data, popularly referred to as “panel data”, is very common in practical settings because of the proliferation of loyalty cards, personalized discount coupons, and other marketing programs.

Our goal in this paper is to use such panel data collected by the retailer to construct the preferences of individual users. The additional information provided by panel data allows us to build models with better predictive accuracy. At the same time, these models should be tractable enough to support practical operational decisions. Access to individual-level data, as opposed to just aggregate data, allows us to personalize assortment and pricing decisions. Our primary focus in this presentation is estimating the choice-based demand model from data. We propose a methodology to estimate customer preferences and illustrate our approach through an exhaustive set of numerical experiments.

Bio:
Gustavo Vulcano is an Associate Professor at the Leonard N. Stern School of Business, New York University. He obtained his B.S. (1994) and M.S. (1997) in Computer Science from the University of Buenos Aires, Argentina, and his Ph.D. (2003) in Decision, Risk and Operations from the Graduate School of Business, Columbia University. His research interests are primarily in revenue management, including pricing mechanisms and capacity control. Currently, his focus is on three research streams: models for customer choice and strategic behavior, computational methods for network revenue management, and competition and airline alliances. He has published in leading journals of his field, such as Management Science, Operations Research, and MSOM, and serves as an associate editor of Management Science and Operations Research.

Room: 309 C Edificio Postgrados

Abstract

In many settings where optimization models are used, decisions must be made over different time horizons. Operations planning is one example: planning is first done at a tactical level and then at a detailed operational level. Because of various sources of uncertainty, tactical plans are sometimes incompatible with short-term production needs, whereas a "robust" plan should guarantee a high probability of consistency between the different decision levels. In this work we show how to model this through multistage problems, and we also show how certain stability properties of the optimization problems involved could be used to predict the degree of robustness of the decision process. We will also briefly discuss how these problems have served to test some new subgradient-based First-Order Methods for the stochastic problem.

Bio:
Professor Jorge Vera holds a degree in Mathematical Engineering from the Universidad de Chile and a doctorate in Operations Research from Cornell University, United States. He has been a professor at the Universidad Católica since 1998 and was a Visiting Professor at the Sloan School of Management of the Massachusetts Institute of Technology (MIT) from August 2007 to June 2008. He served as Department Head from April 2004 to June 2007. His research and teaching are in Operations Research and Operations Management, with particular interest in the use of optimization models to support business decisions. His research has focused on understanding the factors that affect the sensitivity and robustness of the solutions provided by optimization models, especially those used in decision making. He has also carried out applied research and development in the forestry and wine industries, among others, and has published in the leading journals of the field. He is a member of the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), the Mathematical Programming Society, and the Chilean Institute of Operations Research (ICHIO). He is currently Director of the Master in Industrial Engineering (MII), the Department's executive graduate program.

Room: 309 C Edificio Postgrados

Abstract

Engineering ethics education is taking on increasing importance due to globalization, as current and new global issues need to be addressed with both technologies and values. Multiculturalism, in particular, is part of the process of globalization, and it can be seen in U.S. universities, whose students have different nationalities, ethnicities, and cultures. Yet we know very little about the impact engineering ethics education has on students' ethical values or attitudes, and about whether diversity contributes to it.

This research examines the impact of a stand-alone ethics course offered by a U.S. university on students' moral values. Statistical tools such as analysis of variance, non-parametric tests for independent data, and non-parametric tests for paired data are used to assess the relative impact of the ethics class on students' individual and inherent moral values, and to determine whether the demographic characteristics of the students, as individuals and as a group, contribute to this impact.

Bio:

Ruth Murrugarra is an Assistant Professor in the School of Engineering and Sciences at Universidad Adolfo Ibañez. She received her Ph.D. in Decision Sciences and Engineering from Rensselaer Polytechnic Institute and her Master of Engineering in Applied Operations Research from Cornell University. Her areas of interest are applied statistics (data/information fusion from multiple sources and formats; statistical analysis of static and time-series data, including the importance of choosing the correct distribution when building confidence intervals; and statistical text analysis of open-ended questions), applied operations research, and engineering ethics (a non-prescriptive, case-based educational approach to ethics in engineering curricula that enhances ethical decision making and recognizes differences in culture and values).

Room: 309C

Abstract

In this talk we consider risk-neutral and risk-averse approaches to multistage stochastic programming. In the risk-neutral approach the total cost is minimized on average, and it is possible to write dynamic programming equations in a natural way. The situation is considerably more involved when the optimization involves various measures of risk.
We discuss several concepts of time consistency suggested in the literature and illustrate them with the example of a multistage inventory model.
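
For orientation, the equations at stake (a sketch of the usual textbook formulation, not necessarily the exact notation of the talk) are the risk-neutral recursion

V_t(x_t) = \min_{u_t \in \mathcal{U}_t(x_t)} \Big\{ c_t(x_t, u_t) + \mathbb{E}\big[ V_{t+1}(x_{t+1}) \mid x_t, u_t \big] \Big\},

and its risk-averse analogue, in which the conditional expectation is replaced by a one-step conditional risk measure \rho_t, for example \rho_t(\cdot) = (1-\lambda)\,\mathbb{E}[\cdot] + \lambda\,\mathrm{CVaR}_\alpha(\cdot):

V_t(x_t) = \min_{u_t \in \mathcal{U}_t(x_t)} \Big\{ c_t(x_t, u_t) + \rho_t\big( V_{t+1}(x_{t+1}) \big) \Big\}.

Time consistency then concerns whether the overall objective can be written as the nested composition \rho_1 \circ \rho_2 \circ \cdots \circ \rho_{T-1}, so that a policy that is optimal today remains optimal when the recursion is re-solved tomorrow.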

Bio:
Alexander Shapiro is a Professor in the School of Industrial and Systems Engineering at the Georgia Institute of Technology. He has published more than 120 research articles in peer-reviewed journals and is a coauthor of several books. His research is widely cited, and he was listed as an ISI Highly Cited Researcher in 2004 (ISI: Institute for Scientific Information). He is on the editorial boards of several professional journals, such as Mathematics of Operations Research; ESAIM: Control, Optimization and Calculus of Variations; and Computational Management Science. He was an area editor (Optimization) of Operations Research, and he is currently the Editor-in-Chief of Mathematical Programming, Series A. He has given numerous invited keynote and plenary talks, including an invited sectional talk (in the Control Theory & Optimization section) at the International Congress of Mathematicians 2010 in Hyderabad, India. In 2013 he received the Khachiyan Prize awarded by the INFORMS Optimization Society.

Room: TBD

Abstract

The multicommodity fixed-charge network flow problem (MFCFP) is an important optimization problem arising in numerous applications such as transportation, telecommunications, manufacturing, and logistics. The problem consists of sending flow through a capacitated network for a set of origin-destination pairs representing different commodities, satisfying the commodities' demands at the minimum possible cost. One important aspect of this problem is that there is a fixed cost for using an arc to send flow, which increases the combinatorial difficulty of the problem. In this talk, we will introduce an extended formulation for the MFCFP and, using Lagrangian relaxation, an algorithm that finds valid lower bounds for the problem (which can be stronger than those of two other well-studied Lagrangian relaxations). This approach can also provide feasible solutions. We will present some preliminary computational results.
This is joint work with G. Nemhauser and S. Ahmed.
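
For reference, a standard arc-based formulation (the notation here is assumed, since the abstract does not fix one): with y_{ij} = 1 if arc (i,j) is opened at fixed cost f_{ij}, and x_{ij}^k the flow of commodity k on that arc,

\min \sum_{(i,j)\in A} f_{ij}\, y_{ij} + \sum_{k\in K} \sum_{(i,j)\in A} c_{ij}^k\, x_{ij}^k

subject to

\sum_{j:(i,j)\in A} x_{ij}^k - \sum_{j:(j,i)\in A} x_{ji}^k = b_i^k \quad \forall i \in N,\ k \in K,
\sum_{k\in K} x_{ij}^k \le u_{ij}\, y_{ij} \quad \forall (i,j)\in A,
x \ge 0, \quad y \in \{0,1\}^{|A|},

where b_i^k encodes the supply or demand of commodity k at node i. Dualizing, for instance, the linking capacity constraints with nonnegative multipliers yields a Lagrangian whose value bounds the optimum from below; the talk's contribution is a relaxation built on an extended formulation whose bounds can dominate the classical ones.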

Bio:
Rodolfo Carvajal is a Ph.D. candidate in Operations Research at the Georgia Institute of Technology and an Instructor at the School of Business of Universidad Adolfo Ibáñez. His interests are the theory, applications, and computational methodology of integer programming, particularly parallel computing applied to mixed-integer programming and applications of mixed-integer programming to natural resources.

Room: 303C

Abstract

Congestion pricing has been gaining popularity among transportation agencies as a way to reduce congestion at peak hours. London, Singapore, and Stockholm are well-known examples of the policy's success. In other cities, such as New York City (NYC), the policy has shown limited success. The limitation in NYC is related to the capacity of the users of the transportation network to change their time of travel or to switch to an alternative mode. At the heart of this issue are the differences between compulsory and non-compulsory trips. While in the latter the traveler has great freedom to change travel patterns, the same cannot be said of the former because of more rigid travel constraints. It is the demand generator (e.g., companies, schools) that controls the behavior of the travelers, and when this type of asymmetry is present, the first two welfare theorems do not hold. This poses a major challenge because in the peak hours (and peak periods), where strategies such as congestion pricing are justified, the bulk of the traffic (passenger and freight) is of a compulsory nature, implying that travelers have little or no means to change their arrival times by themselves.

This presentation: (1) reviews a set of strategies for rescheduling work (i.e., staggered work hours and flextime) that correct this market failure by inducing the demand generator to relax the arrival constraints; (2) presents theoretical models used to derive conditions for the feasibility of these strategies; (3) presents optimization models for rescheduling work activities; and (4) discusses the challenges in measuring the effects on firms, which is key for the sustainability of these strategies.

Bio:
Wilfredo Yushimito is currently an Assistant Professor at the Universidad Adolfo Ibáñez (UAI) in Chile and an affiliated researcher of Embarq Andino. He received a Ph.D. in Transportation Engineering and an MS in Applied Mathematics from Rensselaer Polytechnic Institute (RPI), an MS in Industrial Engineering from the University of Puerto Rico at Mayagüez, and a BS in Industrial Engineering from the Pontificia Universidad Católica del Perú. His current research interests are transportation demand management, transportation systems optimization models, and freight transportation.

Room: 309 Edificio C - Postgrados

Abstract

Topological optimization of reliable networks is a problem that has been studied in depth: find the network topology that maximizes reliability subject to a budget constraint. Since even computing the reliability of a given network is NP-hard, proposed solutions are based on metaheuristics or sampled approximations. Most of these studies, however, assume independence among link failures, while recent literature shows empirically that correlations between failures are significant. In this work, we examine the impact of failure correlations on the optimal topological design of a network. We use copula-based failure models, which allow us to explicitly incorporate a correlation matrix for the failures, and we show that correlation can change the optimal solution drastically. We also propose a sample-average-approximation integer programming model that solves the design problem while accounting for correlated failures. Joint work with Héctor Cancela (Universidad de la República, Uruguay) and Eduardo Moreno (UAI).
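
The following Python sketch shows the copula mechanism on an invented 4-node ring: correlated Gaussians are thresholded so that each link fails with the same marginal probability, and two-terminal reliability is estimated by simulation under independence and under equicorrelation. It is only meant to make the modeling idea tangible, not to reproduce the paper's optimization model.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-node ring
p_fail, rho, m = 0.1, 0.6, 4
equicorr = np.full((m, m), rho) + (1 - rho) * np.eye(m)

def connected(up_edges, n=4, s=0, t=2):
    """Is t reachable from s using only the surviving edges?"""
    adj = {v: [] for v in range(n)}
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return t in seen

def reliability(corr, reps=20000):
    """Gaussian copula: correlated normals thresholded so that each edge
    fails with marginal probability p_fail."""
    z = rng.multivariate_normal(np.zeros(m), corr, size=reps)
    fails = z < norm.ppf(p_fail)
    ok = sum(connected([e for e, f in zip(edges, row) if not f])
             for row in fails)
    return ok / reps

print("independent failures:", reliability(np.eye(m)))
print("correlated failures :", reliability(equicorr))

Because the marginals are identical in both runs, any gap between the two estimates is purely an effect of the dependence structure, which is exactly what the design model must account for.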

Bio:
Javiera is an Assistant Professor in the Faculty of Engineering at Universidad Adolfo Ibáñez (UAI). She completed her Ph.D. studies at the Laboratoire MAP5 of Université Paris 5 René Descartes and at the School of Engineering of the Universidad de Chile, under the supervision of professors Servet Martínez and Bernard Ycart. Javiera holds an undergraduate degree in Mathematical Engineering from the School of Engineering of the Universidad de Chile. Her research interests are probability theory, stochastic processes, the analysis of algorithms, and telecommunications.


2013

Room: TBD

Abstract

This paper is motivated by the following question: how can one construct a good approximation to the distribution of the optimal value of a linear optimization problem when the objective function is random? More generally, we consider any mixed zero-one linear optimization problem and develop an approach to approximate the distribution of its optimal value when the random objective coefficients follow a multivariate normal distribution. Linking our model to the classical Stein's identity, we show that the least-squares normal approximation of the random optimal value can be computed by solving the persistency problem, first introduced by Bertsimas et al. (2006). We further extend our method to construct a least-squares quadratic estimator that improves the accuracy of the approximation, in particular by capturing the skewness of the objective. We use this approach to construct good estimators for (a) the fill rate of an inventory system over a finite horizon; (b) the waiting time distribution of the nth customer in a G/G/1 system when the arrival rate equals the service rate; and (c) the project completion time distribution.
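
To make the object being approximated concrete, the sketch below samples the exact distribution of the optimal value of a toy zero-one problem (a handful of fixed feasible solutions with jointly normal costs, all invented) and measures how far it is from a moment-matched normal. The paper, by contrast, obtains the least-squares normal approximation analytically, via the persistency problem, without sampling.

import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(1)
# A handful of fixed feasible 0/1 solutions; z(c) = min_j c @ X[j].
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)
mu = np.array([1.0, 1.2, 0.9, 1.1])
cov = 0.09 * (0.5 * np.eye(4) + 0.5 * np.ones((4, 4)))   # correlated costs

c = rng.multivariate_normal(mu, cov, size=100_000)
z = (c @ X.T).min(axis=1)          # optimal value of each sampled problem
fit = norm(z.mean(), z.std())      # moment-matched normal approximation
print("mean/std of z(c):", round(float(z.mean()), 4), round(float(z.std()), 4))
print("KS distance to fitted normal:",
      round(float(kstest(z, fit.cdf).statistic), 4))

The residual Kolmogorov-Smirnov distance reflects the skewness of the minimum of correlated normals, which is what the quadratic estimator in the paper is designed to capture.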

Bio:
Prof. Daniel Zhichao Zheng is an assistant professor of operations management at Singapore Management University. He received his PhD from the NUS Business School, National University of Singapore. His current research interests lie in the area of prediction and planning under uncertainty. He applies his research in various industrial domains, including healthcare operations management, aircraft spares logistics service planning and contracting, and liner shipping scheduling.

Room: 302 C Edificio de Postgrados

Abstract

Motivated by retailers' frequent introduction of new items to refresh product lines and maintain their market shares, we present the assortment packing problem, in which a firm must decide, in advance, the release date of each product in a given collection over a selling season. Our formulation models the trade-offs among profit margins, preference weights, and limited life cycles. A key aspect of the problem is that each product is short-lived in the sense that, once introduced, its attractiveness lasts only a few periods and vanishes over time. The objective is to determine when to introduce each product so as to maximize the total profit over the selling season. Even for two periods, the corresponding optimization problem is shown to be NP-complete. As a result, we study a continuous relaxation that approximates the problem well when the number of products is large. When margins are identical and product preferences decay exponentially, its solution can be characterized: it is optimal to introduce products with slower decays earlier. The structural properties of the relaxation also help us develop several heuristics, for which we establish performance guarantees. We test our heuristics with data on sales and release dates of women's handbags from an accessories retailer. The numerical experiments show that the heuristics perform very well and can yield significant improvements in profitability.
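
A toy numerical illustration of the "slower decays first" structure, with three hypothetical products whose attractiveness decays exponentially after release and a logit-style purchase probability; all rates and the season length are invented, and the model omits the margin and weight heterogeneity the paper handles.

import itertools, math

T = 6                                            # selling season (periods)
decay = {"slow": 0.2, "mid": 0.6, "fast": 1.2}   # hypothetical decay rates

def season_profit(release):
    """Identical margins and initial weights; each product's weight starts
    at 1 on release and decays exponentially afterwards."""
    profit = 0.0
    for t in range(T):
        w = sum(math.exp(-lam * (t - release[p]))
                for p, lam in decay.items() if release[p] <= t)
        profit += w / (1.0 + w)                  # purchase probability, margin 1
    return profit

for name, rel in [("all at once", {"slow": 0, "mid": 0, "fast": 0}),
                  ("slow first ", {"slow": 0, "mid": 2, "fast": 4}),
                  ("fast first ", {"fast": 0, "mid": 2, "slow": 4})]:
    print(name, round(season_profit(rel), 3))
best = max(itertools.product(range(T), repeat=3),
           key=lambda rel: season_profit(dict(zip(decay, rel))))
print("brute force", dict(zip(decay, best)),
      round(season_profit(dict(zip(decay, best))), 3))

On instances like this, staggering releases beats launching everything at once (spreading attractiveness over the season exploits the concavity of the purchase probability), and releasing the slow-decaying product first tends to do best, in line with the characterization above.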

Bio:
Felipe Caro is an Associate Professor of Decisions, Operations, and Technology Management at the UCLA Anderson School of Management. He holds a Ph.D. in Operations Management from the Massachusetts Institute of Technology and an Industrial Engineering degree from the University of Chile. His research interests span retail operations, supply chain management, carbon footprinting and natural resources, with a strong emphasis on practical applications.

Room: 201 C – Edificio Postgrados

Abstract

The increasing reliance on intermittent renewable power is pushing the development of new market designs and computational models capable of mitigating high levels of uncertainty. In this talk, we review recent advances in stochastic optimization for power grid systems under uncertain supply. We discuss pricing inconsistencies that result from deterministic market-clearing models and propose a new, consistent stochastic model. Finally, we discuss the scalability limitations of existing decomposition techniques and propose a new scalable clustering-based strategy.

Bio:
Victor M. Zavala is an assistant computational mathematician in the Mathematics and Computer Science Division at Argonne National Laboratory and a fellow of the Computation Institute at the University of Chicago.
He received his B.Sc. degree from Universidad Iberoamericana (2003) and his Ph.D. degree from Carnegie Mellon University (2008), both in chemical engineering. He is currently a recipient of the DOE Office of Science Early Career Award under which he develops scalable algorithms for optimization under uncertainty. He also leads an advanced grid modeling project funded by DOE Office of Electricity to develop and test large-scale power grid models and he participates in the Multifaceted Mathematics for Complex Energy Systems project funded by DOE Office of Science. His research interests are in the areas of mathematical modeling of energy and power systems, uncertainty modeling, stochastic optimization, and real-time operations.