Tutorial Sessions
Tutorial sessions address state-of-the-art control theory and industrial applications. While session formats vary, tutorial sessions often start with a longer 40- or 60-minute talk on the underlying theory or application area. After the lead presentation, there are usually several 20-minute talks highlighting particular aspects or applications of the topic area in further detail.
We are pleased to offer 12 tutorial sessions this year.
MoA19 Airborne Experimental Test Platforms: From Theory to Flight
Presenters:
Andrei Dorobantu & Brian Taylor (University of Minnesota)
David Cox (NASA Langley Research Center)
Brian Holm-Hansen (Lockheed Martin)
Gabriel Hugh Elkaim (UC Santa Cruz)
Vladimir Dobrokhodov (Naval Postgraduate School)
Time: Monday, June 17, 9:30 a.m. – 11:30 a.m.
Location: Grand Ballroom Central
Reliable and accessible experimental test platforms are key enablers for the transition of theoretical research into practice. In the aerospace domain, these platforms include a flight test system as well as simulation environments for aircraft dynamics, software-in-the-loop (SIL) testing, and hardware-in-the-loop (HIL) testing. It is important to invest in and develop these infrastructures so that new theory can be efficiently guided through a process of verification, validation, and, ultimately, application. Over the past decade, fixed-wing unmanned aerial vehicles (UAVs) have become critical in the aerospace community as experimental test platforms for transitioning new theory to real systems. Areas of particular research interest have included novel algorithms for guidance, navigation, control, and fault detection. This tutorial session, along with an affiliated invited session in the afternoon (MoB19), is organized to provide an opportunity for sharing ideas on recent UAV infrastructure innovation and capabilities.
Successful UAV experimental test platforms often take advantage of components developed in-house by the research groups themselves, and it is important to bring the aerospace community together to discuss approaches to these innovations. This tutorial/invited session pair gathers experts from both academia and industry to discuss their philosophies and approaches to UAV infrastructure development. The sessions will focus on high-fidelity simulations, software- and hardware-in-the-loop setups, real-time flight software, and flight test systems.
The tutorial/invited session pair is designed as a forum to discuss UAV infrastructure development. The primary objective is to provide an overview of recent advancements behind UAV test platforms developed by the international aerospace community. Academic and industry researchers will have the opportunity to share ideas and gain a broader understanding of current and future directions for UAV experimental test infrastructures.
MoB20 Optical Frequency Stabilization and Optical Phase Locked Loops
Presenters:
Matthew Taubman (PNNL)
Wei Zhang (JILA, NIST and University of Colorado)
David Leibrandt (National Institute of Standards and Technology)
Ronald Holzwarth (Max-Planck-Institute for Quantum Optics)
Time: Monday, June 17, 1:30 p.m. – 3:30 p.m.
Location: Grand Ballroom North
Much progress has been made in optical frequency stabilization and optical phase-locked loops since the invention of the laser. This research led to the exact definition of the speed of light and made the SI second the most precisely measured physical quantity. The main objective of this tutorial session is to introduce some remarkable research results, and some related control challenges, to the control research community. This tutorial session consists of four talks.
The lead tutorial presentation, "Optical Frequency Stabilization and Optical Phase Locked Loops: Golden Threads of Precision Measurement," will introduce the basics of optical frequency stabilization and the optical phase-locked loop. It will give an overview of this research field, from early "heroic" work to the latest 10⁻¹⁶ thermal-noise-limited precision attained for a stable laser, and the ongoing quest for ever finer precision and accuracy. The issues of understanding and measuring line widths and shapes will also be examined in some depth, highlighting implications for servo design for sub-Hz linewidths.
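To give a flavor of the feedback problem at the heart of these systems, the sketch below simulates a bare-bones digital phase-locked loop; the sample rate, PI gains, frequency offset, and noise level are illustrative assumptions, not parameters from the talks.

```python
import numpy as np

# Bare-bones digital PLL: a PI loop filter steers the local oscillator (LO)
# phase increment so the LO tracks a noisy reference sitting 0.1% away from
# the LO's nominal frequency. All values are assumed for illustration.
fs = 1.0e6                        # sample rate, Hz
f_ref, f_nom = 1.0e3, 0.999e3     # reference and nominal LO frequencies, Hz
kp, ki = 0.05, 2.0e-4             # PI gains on the phase-detector output

rng = np.random.default_rng(0)
phi_ref = phi_lo = integ = 0.0
err = np.empty(100_000)
for k in range(err.size):
    phi_ref += 2 * np.pi * f_ref / fs + 1e-4 * rng.standard_normal()  # noisy reference
    e = np.sin(phi_ref - phi_lo)         # mixer-like phase detector
    integ += ki * e                      # integral action absorbs the frequency offset
    phi_lo += 2 * np.pi * f_nom / fs + kp * e + integ
    err[k] = e
print("rms phase error, last half:", err[err.size // 2:].std())
```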
Optical interferometers, e.g., Fabry-Perot cavities, often serve as the frequency references for optical frequency stabilization. The ultimate limit of this type of optical frequency reference is the thermal-noise-induced fluctuation of the cavity length. The talk "Crystalline Coatings for Thermal Noise Reduction in Optical Interferometers" will demonstrate a tenfold reduction of Brownian noise using high-reflectivity monocrystalline AlGaAs multilayer mirrors. This novel optical coating technology can significantly improve the stability of optical interferometers.
The talk "Ultra-Stable Laser Local Oscillators" will discuss performing laser frequency stabilization to spherical Fabry-Perot cavities and spectral holes in cryogenically cooled crystals. From a control perspective, these lasers are highly nonlinear, multiple input multiple output (MIMO) dynamical systems. The talk will discuss the implementation of high-speed feedback (500 kHz) and digital feedforward using field programmable gate arrays (FPGAs).
Optical frequency combs play a very important role in optical frequency metrology and many other precision measurements. The talk "Application of Stabilized Optical Frequency Combs" will discuss the methods of generating and stabilizing optical frequency combs. Various applications of the optical frequency combs will also be covered.
MoC19 Identification of Nonlinear Parameter-Varying Systems: Theory and Applications
Presenters:
Wallace E. Larimore (Adaptics, Inc)
Michael Buchholz (University of Ulm)
Jürgen Remmlinger (University of Ulm)
Time: Monday, June 17, 4:00 p.m. – 6:00 p.m.
Location: Grand Ballroom Central
In this tutorial session, recent major results are developed in an extended tutorial on the system identification of linear parameter-varying (LPV) and nonlinear state-affine systems using the well-developed and well-understood subspace methods for linear time-invariant (LTI) systems. Specifically, the canonical variate analysis (CVA) method of subspace system identification for LTI systems is developed from first principles and then extended to LPV and nonlinear systems.
A basic LPV concept is that certain known scheduling functions of the operating conditions (e.g., temperatures, pressures, RPM, speed) describe how the coefficients of the state equations vary with those conditions. The products of the scheduling functions with the input data, output data, and states of the system are time-invariant quantities from which the 'past' and 'corrected future' are defined. From these quantities, the standard LTI CVA method is applied: first an ARX-LPV model is fitted and used to calculate the corrected future, and then a time-invariant CVA of the past and corrected future determines the state of the LPV system. The procedure is then extended to state-affine nonlinear systems that may also include LPV structure. This provides the recipe for identifying the state equations from observational data, given the scheduling functions. The CVA method avoids the computational difficulties suffered by currently available system identification methods for LPV systems.
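As a rough illustration of the LTI building block underlying the method (a sketch only, not Adaptics' implementation, and with the LPV data augmentation and ARX-LPV correction steps omitted), a canonical variate analysis of past and future data can be written as follows; the lag, regularization, scalar signals, and helper name cva_states are assumptions for illustration.

```python
import numpy as np

# Minimal CVA sketch for scalar-input, scalar-output LTI data. For LPV
# identification, u and y would first be augmented with their products with
# the scheduling functions, and the future corrected via an ARX-LPV fit,
# as described above.
def cva_states(u, y, lag=5, n_states=2):
    u, y = np.asarray(u, float), np.asarray(y, float)
    rows = range(lag, len(y) - lag)
    past = np.array([np.r_[u[t - lag:t][::-1], y[t - lag:t][::-1]] for t in rows])
    future = np.array([y[t:t + lag] for t in rows])
    Spp, Sff, Spf = past.T @ past, future.T @ future, past.T @ future
    Lp = np.linalg.cholesky(Spp + 1e-9 * np.eye(2 * lag))
    Lf = np.linalg.cholesky(Sff + 1e-9 * np.eye(lag))
    # SVD of the normalized cross-covariance Lp^{-1} Spf Lf^{-T}
    U, s, _ = np.linalg.svd(np.linalg.solve(Lp, Spf) @ np.linalg.inv(Lf).T)
    J = np.linalg.solve(Lp.T, U[:, :n_states])   # memory: maps past to states
    return past @ J, s   # state sequence estimate and canonical correlations
```

The decay of the returned canonical correlations s suggests the state order, and the estimated state sequence can then be regressed against the data to obtain the state-space matrices.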
The CVA-LPV method is demonstrated on several simulated and real data sets, including a pitch-plunge aircraft wing flutter simulation in which the dynamics change with aircraft speed and air density. It is shown that, using data from only a small part of the speed-altitude operating envelope, a highly accurate model for a much larger region is identified. In a second example, a nonlinear model is identified for high-power lithium-ion cells in hybrid vehicles that is accurate over the whole operating range, including temperature variation. A third application, to automotive engine modeling, discusses the identification of models for the intake manifold and fuel injection subsystems of a combustion engine.
These new results have major implications for modeling and control, greatly extending the possible applications of subspace identification to closed-loop LPV and affine nonlinear systems for monitoring, fault detection, control design, and robust and adaptive control. The precise statistical theory gives tight bounds on model accuracy that can be used in robust control analysis and design. Precise distribution theory is also available for tests of hypotheses on model state order and structure, process changes, and faults.
The intended audience includes practitioners who are primarily interested in applying system identification and monitoring techniques, engineers who desire an introduction to the advanced concepts of LPV and nonlinear system identification and monitoring, and faculty members and graduate students who wish to pursue research into some of the more advanced topics.
MoC20 Discriminative Sparse Representations with Applications
Presenters:
Vishal Monga (Pennsylvania State University)
Trac Tran (Johns Hopkins University)
Time: Monday, June 17, 4:00 p.m. – 6:00 p.m.
Location: Grand Ballroom North
Significant advances in compressive sensing and sparse signal encoding have provided a rich set of mathematical tools for signal analysis and representation. In addition to novel formulations for enabling sparse solutions to underdetermined systems, exciting progress has taken place in efficiently solving these problems from an optimization theoretic viewpoint.
The focus of the wide body of literature in compressive sensing/sparse signal representation has, however, been on the problem of signal recovery from a small number of measurements (equivalently, a sparse coefficient vector). This tutorial will discuss the design of sparse signal representations explicitly for the purpose of signal classification. The tutorial will focus on and build upon two significant recent advances. The first is the work of Wright et al., which advocates the use of a dictionary (or basis) matrix comprising class-specific training sub-dictionaries. In this framework, a test signal is modeled as a sparse linear combination of training vectors in the dictionary, sparsity being enforced by the assertion that only coefficients corresponding to one class (the one from which the test signal is drawn) ought to be active. The second set of ideas we leverage comprises recent key contributions in model-based compressive sensing, where prior information or constraints on sparse coefficients are used to enhance signal recovery. These ideas will be combined in an exposition of current trends: namely, the development of class-specific priors or constraints that capture structure on sparse coefficients and help explicitly distinguish between signal classes.
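To make the classification rule concrete, here is a toy sketch in the spirit of that framework (not Wright et al.'s exact algorithm): the dictionary D stacks class-labeled training columns, a simple iterative soft-thresholding (ISTA) loop stands in for the l1 solver, and the parameter values are assumptions.

```python
import numpy as np

# Toy sparse-representation classification: represent a test sample x as a
# sparse combination of training columns of D, then assign the class whose
# columns best explain it. labels is a numpy array of class ids, one per
# column of D. ISTA approximately solves min 0.5*||D a - x||^2 + lam*||a||_1.
def ista(D, x, lam=0.05, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - D.T @ (D @ a - x) / L              # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

def classify(D, labels, x):
    a = ista(D, x)
    residual = {c: np.linalg.norm(x - D[:, labels == c] @ a[labels == c])
                for c in np.unique(labels)}
    return min(residual, key=residual.get)         # smallest class residual wins
```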
In summary, the central goal of the tutorial is to introduce ideas in sparse signal recovery, representation, and classification to the controls audience. Because of natural ties to optimization theory and algorithms, controls researchers have the necessary background to absorb the key ideas and emerge as potential contributors to this exciting research area. The practical impact of the theory and algorithms introduced in the main one-hour tutorial will be illustrated by three accompanying talks on applications to medical imaging, video recovery, and hyperspectral data analysis for defense.
TuB19 Laser Interferometry for Precision Measurements
Presenters:
Daniel Y. Abramovitch (Agilent Laboratories)
Eric Johnstone (Agilent Technologies)
Russell Loughridge (Agilent Technologies)
Xu Chen (University of California at Berkeley)
Masayoshi Tomizuka (University of California at Berkeley)
Vasishta Ganguly (University of North Carolina at Charlotte)
Tony Schmitz (University of North Carolina at Charlotte)
Janet Yun (Agilent Technologies)
Time: Tuesday, June 18, 1:30 p.m. – 3:30 p.m.
Location: Grand Ballroom Central
Laser interferometers determine displacement by measuring the phase difference between two interference signals. A reference signal is derived directly from the laser source. A measurement signal is obtained from the interference between two beams, one of which travels to a moving target while the other travels to a fixed (reference) target. The phase between the reference and measurement signals is used to determine the displacement of the moving target. The utility of these methods is that the measurement can be made over long distances with high resolution. However, as the required accuracy of the target applications has increased, interferometers have been adjusted to desensitize them to an increasing number of effects. Interferometers today are high-sample-rate, high-fidelity, non-contact, multi-axis position measurement instruments that can be used in precision motion systems.
This tutorial session will introduce more controls engineers to the field of interferometer measurements and their use in feedback systems in general. We will show how the required accuracy and bandwidth of high precision motion systems push interferometer methods to the limit. We will describe both traditional and new techniques to satisfy the increasingly demanding performance requirements.
In the leadoff talk, "A Tutorial on Laser Interferometry for Precision Measurements," Russ Loughridge and Danny Abramovitch will introduce the field of laser interferometry, going from the wave equation to the change in phase of interference patterns at the detector. They will show how these phase measurements become displacement calculations, and then lead the audience through a list of measurement issues and subsequent fixes.
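As a pocket-sized example of that phase-to-displacement step (the exact conversion factor depends on the optical configuration; a single-pass setup is assumed here, in which the measurement beam travels to the target and back, so the optical path changes by twice the displacement):

```python
import numpy as np

# Phase-to-displacement conversion for an assumed single-pass interferometer:
# a round trip to the target gives phase = 4*pi*d/lambda, so d = lambda*phase/(4*pi).
wavelength = 632.8e-9                               # He-Ne wavelength, metres
phase = np.unwrap([0.0, 1.5, 3.0, 4.6, 6.3, 8.1])   # measured phase samples, rad
displacement_nm = 1e9 * wavelength * phase / (4 * np.pi)
print(displacement_nm)                              # ~0 to ~408 nm for these samples
```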
In the second talk, "Control Methodologies for Precision Positioning Systems," Xu Chen and Masayoshi Tomizuka will describe advances in control techniques for precision motion systems that use laser interferometers as the position measurement.
The third talk, "Periodic Error Correction in Heterodyne Interferometry," by Vasishta Ganguly, Tony Schmitz, Janet Yun, and Russ Loughridge, will describe periodic errors and their identification.
The final talk, "Quintessential Phase: A Method of Mitigating Turbulence Effects in Interferometer Measurements of Precision Motion," by Eric Johnstone and Danny Abramovitch, will discuss a new method of correcting for turbulence, one of the most pernicious issues in measuring displacement through air. The method uses a unique combination of a multi-segment optical detector (to give observability) and an extended Kalman filter to detect, model, and remove turbulence from the displacement estimate.
TuC19 Automated Steady and Transient State Identification in Noisy Processes
Presenters:
R. Russell Rhinehart (Oklahoma State University)
Ting Huang (Oklahoma State University)
Anand N. Venavelli (Fractionation Research, Inc.)
Mike R. Resetarits (Fractionation Research, Inc.)
Time: Tuesday, June 18, 4:00 p.m. – 6:00 p.m.
Location: Grand Ballroom Central
A computationally simple method is developed, analyzed and demonstrated for automated identification of steady state and transient state in noisy process signals. The method acts as a "virtual employee" in managing a process, and is based on automatic identification of probably steady and probably transient conditions by a statistical method in noisy multivariable processes. The method is insensitive to noise variance. The tutorial will develop the equations, reveal the execution code (5 lines, explicit calculations, low storage), discuss implementation and critical values, and compare the approach to other approaches (computational burden, sensitivity, average run length, etc.).
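As a preview of how compact such a statistic can be, here is a sketch of a filter-ratio method of the kind the tutorial develops (after Cao and Rhinehart); the filter factors and initialization are illustrative assumptions, and the critical values for declaring "probably steady" versus "probably transient" are a topic of the session.

```python
import numpy as np

# Filter-ratio steady-state statistic: two exponentially filtered variance
# estimates are compared. Their ratio is near 1 at steady state and rises
# well above 1 during a transient, largely independent of the noise variance.
def r_statistic(x, l1=0.2, l2=0.1, l3=0.1):
    xf, v2, d2 = x[0], 1e-6, 1e-6
    r = np.ones(len(x))
    for i in range(1, len(x)):
        v2 = l2 * (x[i] - xf) ** 2 + (1 - l2) * v2        # spread about filtered mean
        xf = l1 * x[i] + (1 - l1) * xf                    # filtered mean update
        d2 = l3 * (x[i] - x[i - 1]) ** 2 + (1 - l3) * d2  # spread of successive differences
        r[i] = (2 - l1) * v2 / d2
    return r
```

On a steady noisy signal, both variance estimates agree and R hovers near one; a ramp or step inflates the numerator first, pushing R above its critical value.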
The tutorial presentation will be followed by three application demonstrations. The first, from a commercial-scale distillation unit, will reveal the ability to handle autocorrelated, multivariable, and sometimes noiseless variables confounded by discrimination error. The second involves a pilot-scale, orifice-measured flow rate in which the process noise depends strongly on the flow rate; it will also reveal the detection of a poorly tuned controller. The third will show the application of probable steady-state detection to automate the identification of convergence in nonlinear regression.
Steady-state (SS) models are widely used in process control, analysis, and optimization, but the use and adjustment of SS models should only be triggered when the process is at SS. Additionally, detection of SS triggers the collection of data for process model adjustment, process analysis, fault detection, data reconciliation, neural network training, the end of an experimental trial (collect data and implement the next set of conditions), etc.
In contrast, transient, time-dependent, or dynamic models are used in control, forecasting, and scheduling applications. These often include parameters representing time constants, which can only be adjusted during transient-state (TS) conditions. Detection of TS triggers the collection of data for dynamic modeling. Additionally, detection of TS provides recognition of points of change, wake-up of data recording, etc.
The detection of both SS and TS can be useful in automating a sequence of experimental conditions. Often, engineers run a sequence of experiments to collect data throughout a range of operating conditions, and process operators sequence the next stage of a process. Each sampling event is initiated when the operator observes that steady conditions are met. Automated real-time SS and TS identification would be useful to trigger the next stage of an experimental plan or process phase.
WeA18 Pathways Toward Smart, Flexible and Efficient Power Systems
Presenters:
Pramod Khargonekar (National Science Foundation and the University of Florida)
Steven Low (California Institute of Technology)
Dennice Gayme (Johns Hopkins University)
Ufuk Topcu (University of Pennsylvania)
Time: Wednesday, June 19, 9:30 a.m. – 11:30 a.m.
Location: Grand Ballroom South
The power system is rapidly changing due to increasing demand; renewable energy mandates altering the make-up of generation resources; and new technologies, such as hybrid electric vehicles and advanced metering, that have the potential to shift usage patterns. The traditional grid is not equipped to deal with additional variability from intermittent renewable power sources or rapidly changing load patterns, whether from electric vehicles connecting and disconnecting from the grid in an ad hoc manner or from demand response and other consumer-based programs. This tutorial session discusses the changing state of the power grid and provides several perspectives on the technologies, algorithms, and systems required to facilitate a transition towards a reliable and efficient electric grid that is flexible enough to adapt to future changes in the ways that energy is procured and used. The lead talk will provide an exposition of the key factors driving the development of this emerging power system, along with an overview of some related research challenges. This is followed by three talks describing recent relevant results in these areas. The first talk presents an overview of a series of works developing tractable methods for computing optimal power flow in both transmission and distribution systems. This is followed by a discussion of optimal power flow based methods for storage allocation in a transmission system. The session concludes with a discussion of control strategies for electric vehicle and consumer-based energy programs.
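As a small taste of the optimization machinery behind these talks, the toy problem below is a two-bus DC optimal power flow posed as a linear program; the costs, load, and line limit are invented for illustration.

```python
from scipy.optimize import linprog

# Two-bus DC-OPF sketch: a cheap generator at bus 1 serves a load at bus 2
# over a limited line; an expensive local unit covers the remainder.
# Decision variables: [g1, g2] in MW. All numbers are assumed.
cost = [20.0, 50.0]                                  # $/MWh
load, line_limit = 150.0, 100.0                      # MW
res = linprog(c=cost,
              A_ub=[[1.0, 0.0]], b_ub=[line_limit],  # line flow = g1 <= limit
              A_eq=[[1.0, 1.0]], b_eq=[load],        # power balance
              bounds=[(0, 120.0), (0, 120.0)])       # generator capacities
print(res.x)   # -> [100., 50.]: the congested line forces costlier local generation
```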
WeA20 Semiconductor Equipment Design
Presenters:
Upendra Ummethala (KLA-Tencor)
Pradeep Subrahmanyan (KLA-Tencor)
Anne van Lievenoogen (Philips Innovation Services)
John Hench (KLA-Tencor)
Time: Wednesday, June 19, 9:30 a.m. – 11:30 a.m.
Location: Grand Ballroom North
Semiconductor manufacturing equipment, such as lithography and inspection machines, comprises highly sophisticated instruments with average selling prices on the order of $50M and above. These machines incorporate many types of cutting-edge technology, including light optics, charged-particle optics, sophisticated high-power lasers, and precision magnetically levitated stages.
Key control systems challenges arise in multiple subsystems of these tools. Among the most important are the sophisticated stages that position wafers or reticles: the required positioning accuracies are on the order of single nanometers while moving at linear speeds on the order of 1 m/s. In addition, there are stringent requirements on the long-term dimensional stability of these tools. Controls and signal processing techniques are central to several of the subsystems involved. The presentations in this tutorial session show examples of challenging technologies that have been implemented in a few machines. The objective is to give the community a flavor for the types of issues involved in architecting these systems and to give some specific examples of methods used in solving key technical problems.
The overview presentation will give an example of a system used in direct-write electron beam lithography, showing some of the key components of the platform used in this tool. Examples will be provided of magnetically levitated wafer stages that control the position of wafers in all six degrees of freedom while controlling the flexible dynamics of the stages. Methods for precisely measuring the positions of these stages will be shown. Some of the key thermal challenges will also be presented.
The second presentation will show an approach to architecting platforms using systematic analyses and dynamic systems modeling to optimize control-structure interactions in the presence of disturbances.
The third presentation will show examples of the actuators in these maglev stages and the feedback linearization techniques used to design controllers for these highly nonlinear actuators.
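For readers unfamiliar with the technique, a minimal single-axis sketch of feedback linearization for a magnetic actuator follows; the quadratic force model and all constants are assumptions, and the actual stage controllers are MIMO and considerably more elaborate.

```python
import numpy as np

# Assumed actuator model: attractive force F = k * i**2 / gap**2. Commanding
# i = gap * sqrt(F_des / k) inverts the nonlinearity, so the outer position
# loop sees a (nearly) linear force source.
K_ACT = 1.0e-5                             # actuator constant, N*m^2/A^2 (assumed)

def current_command(f_des, gap):
    f_des = max(f_des, 0.0)                # an electromagnet can only pull
    return gap * np.sqrt(f_des / K_ACT)

def pd_force(z, z_dot, z_ref, mass=1.0, kp=400.0, kd=40.0):
    # Linear outer loop supplying the desired force to the inversion above.
    return mass * (kp * (z_ref - z) - kd * z_dot)
```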
The fourth presentation gives an overview of the mathematics of the shape recovery algorithm used in Reflective Electron Beam Lithography as well as a guide for some of the practical issues that arise in shape estimation and stage design, not the least of which is the angular placement of the sensors.
WeB18 Online Ad Systems
Presenters:
Richard E. Chatwin (Adchemy, Inc.)
Niklas Karlsson (AOL Networks)
Ayman Farahat (Adobe)
James G. Shanahan (Church and Duncan Group, Inc.)
Time: Wednesday, June 19, 1:30 p.m. – 3:30 p.m.
Location: Grand Ballroom South
Online advertising is a large and rapidly growing business that affords advertisers both the opportunity to target their advertising based on consumers' behavior and demographics and the ability to measure the success of that advertising. These aspects of online advertising engender a broad range of fascinating problems in the areas of performance estimation, campaign optimization, and marketplace design that are amenable to formulation and solution via the techniques of control and optimization theory.
The main objective of this session is to provide an introduction to and overview of the domain of online advertising for the controls and optimization community. As such, the overview presentation will introduce conference attendees to the application of control, estimation, and optimization techniques in the arena of online advertising. The goal will be to highlight the challenges faced by the main players, namely advertisers, online publishers, and ad exchanges. Examples from a variety of sources will demonstrate how these challenges can be met by formulating them as optimization problems of various types (convex programs, stochastic control problems, etc.) and, utilizing the latest research in the field, solving these problems so as to generate actionable decisions.
Specific applications to be discussed include performance estimation methods in support of campaign allocation decisions by advertisers; topic discovery methods for attribute extraction to facilitate the generation of targeted text ads; and acceptance strategies for publishers in regard to advanced sales contracts with guaranteed delivery terms.
The supporting presentations will describe specific industry applications: control strategies for matchmaking between advertisers and publishers in an ad network; accurate estimation of the impact of targeted ads; and control strategies for real-time bidding in display advertising.
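To give a miniature feel for one such control strategy, the sketch below paces a campaign's spend with a multiplicative feedback law; the market-response model, gain, and budget are invented, and production ad-network controllers are far more sophisticated.

```python
# Toy budget-pacing controller: nudge the bid so delivered spend tracks the
# ideal spend-to-date line over a 24-hour campaign. All numbers are invented.
def pace_bid(bid, spent, elapsed_frac, budget, gain=0.3):
    target = budget * elapsed_frac                 # ideal spend so far
    error = (target - spent) / budget              # normalized pacing error
    return max(bid * (1.0 + gain * error), 0.0)    # bid up if behind, down if ahead

bid, spent, budget = 1.00, 0.0, 1200.0
for hour in range(1, 25):
    spent += 50.0 * bid                            # crude linear market response (assumed)
    bid = pace_bid(bid, spent, hour / 24.0, budget)
print(spent)                                       # lands on the 1200.0 budget
```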
WeB20 Optimization Problem Challenges in Physical Design
Presenters:
Ismail Bustany (Mentor Graphics)
Igor Markov (University of Michigan)
Martin Wong (University of Illinois at Urbana Champaign)
Time: Wednesday, June 19, 2013, 1:30 p.m. – 3:30 p.m.
Location: Grand Ballroom North
Think of the growing consumer demand for iPhones, iPads, Kindles, online gaming, online video streaming, hybrid cars, cloud storage, etc. The appetite for smaller, more powerful, energy-efficient computing, graphics, and wireless communication chips is insatiable. This all translates to a technology roadmap that demands aggressive shrinking of transistor feature size. According to the 2012 International Technology Roadmap for Semiconductors report, "the semiconductor industry's ability to follow Moore's law has been the engine of a virtuous cycle: through transistor scaling one obtains a better performance to cost ratio of products which induces an exponential growth of the semiconductor market." This in turn allows further investments in new technologies, which will fuel further scaling. In addition to performance, there is a pervasive need for designs that allow for increased bandwidth and reduced energy consumption.
The main objective of this tutorial session is to introduce optimization and control engineers to the optimization challenges present in the physical design synthesis step. Throughout the session, we shall highlight the nature of the optimization problems encountered in the different parts of physical design synthesis. The goal is to engender awareness and an understanding of these challenging problems and to stoke an interest in developing efficient, scalable optimization algorithms for solving them.
The physical synthesis step is the link between the logical description of a circuit and its physical realization on a silicon die. The input to physical synthesis is a logical net-list, generated by the logical design synthesis step, that describes the logical connections among physical components (logic gates, macro/IP blocks, input/output pins, etc.). The physical synthesis step generates an "optimized" net-list along with a description of a physically realizable layout. This requires solving a multi-objective optimization problem to ensure the circuit meets specified timing, area, power, and routability requirements subject to various process fabrication and operational mode variations. The design variables are diverse and mostly discrete. They include logic gate placement positions, gate flipping and re-orientation, buffer insertion, routing path layer assignments, discrete gate sizing, combinational logic remapping, etc. With today's 22nm transistor feature size, designs typically contain 1 million to 100 million gates, depending upon the chip's application. The problem is thus large-scale, nonlinear, constrained, multi-objective, and discrete in nature; in short, it is NP-hard, and no efficient algorithm is known for solving it. Given these complexity challenges, it has been the practice in physical synthesis to break the optimization problem into a sequence of interacting sub-problems with different design variables, approximately solve these sub-problems separately in a block coordinate descent fashion, and orchestrate a flow that ties the approximate solutions from one sub-problem to another to arrive at a sub-optimal feasible solution.
These sub-problems are classified as follows: logic partitioning, floor-planning, placement, clock tree synthesis, routing, and timing closure. They are manifestations of continuous nonlinear optimization problems, linear and convex problems, integer programming problems, min-cost max-flow problems, partitioning and assignment problems, and many other graph-theoretic problems. In this tutorial, we shall go over these problem instances.
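As a concrete taste of one such sub-problem, the sketch below solves a toy instance of quadratic (analytic) placement, where squared-wirelength minimization with fixed I/O pads reduces to a linear solve with the netlist Laplacian; the three-gate netlist is invented, and production placers solve sparse systems with millions of unknowns.

```python
import numpy as np

# Quadratic placement in one dimension: gates 0-2 are movable and connected
# in a chain; gate 0 also connects to a pad at x=0.0 and gate 2 to a pad at
# x=10.0. Minimizing total squared wirelength gives the linear system L x = b.
edges = [(0, 1), (1, 2)]                 # movable-to-movable two-pin nets
pad_edges = {0: [0.0], 2: [10.0]}        # gate -> fixed pad positions
n = 3
L, b = np.zeros((n, n)), np.zeros(n)
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1; L[i, j] -= 1; L[j, i] -= 1
for g, pads in pad_edges.items():
    for p in pads:
        L[g, g] += 1; b[g] += p
print(np.linalg.solve(L, b))             # -> [2.5, 5.0, 7.5], evenly spread
```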
WeC18 Wide Area Control of Large Power Systems
Presenters:
Aranya Chakrabortty (North Carolina State University)
Pramod Khargonekar (University of Florida)
Anjan Bose (Washington State University)
Joe Chow (Rensselaer Polytechnic Institute)
Christopher DeMarco (University of Wisconsin-Madison)
Time: Wednesday, June 19, 4:00 p.m. – 5:40 p.m.
Location: Grand Ballroom South
A key element in the development of smart power transmission systems over the past decade is the tremendous advancement of the Wide-Area Measurement System (WAMS) technology, also commonly referred to as the Synchrophasor technology. Sophisticated digital recording devices called Phasor Measurement Units or PMUs are currently being installed at different points in the North American grid, especially under the smart grid initiatives of the US Department of Energy, to record and communicate GPS-synchronized, high sampling rate (6-60 samples/sec), dynamic power system data. Significant research efforts have been made on techniques to use WAMS for monitoring large power networks dispersed across wide geographical areas. In contrast, the use of WAMS for automatic feedback control has received less attention from the research community. The objective of this tutorial session is to bridge this gap by formulating wide-area control problems in light of various control-theoretic topics, and by educating control engineers, especially graduate students and future researchers, about the tremendous potential of control theory in impacting this emerging area of smart grid research. Our goal will be to clearly define several representative mathematical problems that evolve from the basic needs of making a power system smart (or, self-automated) using Synchrophasors, and to point out how different aspects of control theory such as modeling, identification, network theory, stability analysis, and most importantly, robust and optimal control, can play an instrumental role in solving those needs.
The lead tutorial talk will formulate the wide-area control problem in terms of four primary applications, namely oscillation damping control, voltage control, wide-area protection, and disturbance localization. Various mathematical algorithms recently reported in the literature, as well as several future directions of research on these four applications, will be presented. The second talk of the session will discuss the ongoing research activities on wide-area monitoring and control in the NSF/DOE Center on Ultra-wide-area Resilient Electric Energy Transmission Networks (CURENT) at the University of Tennessee. Results and design challenges for dynamic state estimation, system frequency control with high renewable penetration, wide-area damping control, remedial action schemes (RAS), and voltage stability control will be discussed. The third talk of the session will provide a detailed description of wide-area protection using model-based control and protection. Special protection schemes (SPS) that are currently used in different parts of the US grid to keep the power system stable after the occurrence of certain short circuits will be presented to motivate the discussion. The final talk will focus on the data-mining aspects of Synchrophasors for gaining insight into grid power flow properties in real time, and for using such insights for control actuation.
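As a miniature example of the monitoring side of these problems, the sketch below recovers the frequency and damping of a synthetic inter-area oscillation from simulated PMU samples using the log decrement of successive peaks; the mode parameters and reporting rate are illustrative assumptions.

```python
import numpy as np

# Simulated ringdown y(t) = exp(-sigma*t) * cos(2*pi*f0*t) sampled at a
# PMU-like 30 samples/s; estimate f0 and sigma from successive peaks.
fs, f0, sigma = 30.0, 0.4, 0.05          # Hz, Hz, 1/s (assumed)
t = np.arange(0, 20, 1 / fs)
y = np.exp(-sigma * t) * np.cos(2 * np.pi * f0 * t)

peaks = [i for i in range(1, len(y) - 1) if y[i] > y[i - 1] and y[i] > y[i + 1]]
T = np.mean(np.diff(t[peaks]))                                # period between crests
sigma_hat = np.mean(np.log(y[peaks][:-1] / y[peaks][1:])) / T  # log decrement
print(1 / T, sigma_hat)                                       # ~0.4 Hz, ~0.05 1/s
```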
We believe that this is the perfect time for control engineers to delve into wide-area control problems for power systems, and also to investigate how WAMS can be used for mathematical modeling of new types of loads (such as plug-in hybrid vehicles) and generators (such as wind-fed induction generators with associated power electronics and battery storage devices), each of which contributes its own share of dynamics to grid operations. We strongly envision that this session will initiate coherent thinking along these entirely new lines of research and carry these ideas forward to solve the emerging technical challenges in power and energy.
WeC20 A Tutorial on Optimization Methods for Cancer Radiation Treatment Planning
Presenter: Haitham Hindi (Walmart Labs)
Time: Wednesday, June 19, 4:00 p.m. – 5:40 p.m.
Location: Grand Ballroom North
Every year, thousands of cancer patients receive radiation treatment. Radiation beams are delivered to the patient from different directions, with high precision, with the objective of maximizing dose to the tumor while minimizing damage to the surrounding healthy tissue. The first part of this two-part tutorial is an introduction to the basic optimization problem underlying radiation treatment planning. Specifically, we show how the computation of optimal beam directions and intensities can be formulated as a convex optimization problem. We discuss some common metrics and constraints used in radiation treatment planning, using methods from optimization, medical physics, and finance and risk management. We also review some effective parallelizable methods that have been developed for solving this inherently large-scale, multi-objective optimization problem. The second part will cover two main areas: (a) more details on modeling and implementation, with the goal of preparing the audience to develop a simple simulator that can be used in an academic or industrial environment for the purposes of exploring and benchmarking new algorithms; and (b) further discussion of more advanced optimization algorithms from practice and from the literature, including large-scale, robust, and multi-objective optimization.
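To make the basic formulation concrete, here is a toy fluence-map optimization posed as a linear program (a special case of the convex formulations discussed in the tutorial); the dose-deposition matrices and prescription below are random stand-ins, not clinical data.

```python
import numpy as np
from scipy.optimize import linprog

# Choose nonnegative beamlet intensities x so every tumor voxel receives at
# least the prescribed dose while total healthy-tissue dose is minimized.
rng = np.random.default_rng(1)
A_tumor = rng.uniform(0.5, 1.0, (4, 6))      # dose per unit intensity (assumed)
A_healthy = rng.uniform(0.0, 0.3, (8, 6))    # rows: voxels, columns: beamlets
d_rx = 60.0                                  # prescription dose, Gy (assumed)

res = linprog(c=A_healthy.sum(axis=0),                   # total healthy dose
              A_ub=-A_tumor, b_ub=-d_rx * np.ones(4),    # tumor dose >= d_rx
              bounds=[(0, None)] * 6)
print(res.x, res.fun)   # beamlet intensities and the healthy dose attained
```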