OPTIMIZATION ALGORITHMS

Many algorithms exist for running optimization, and many different procedures apply when optimization is coupled with Monte Carlo simulation. In Risk Simulator, there are three distinct optimization procedures and optimization types as well as different decision variable types. For instance, Risk Simulator can handle Continuous Decision Variables (e.g., 1.2535, 0.2215, and so forth), Integer Decision Variables (e.g., 1, 2, 3, 4, and so forth), Binary Decision Variables (1 and 0 for go and no-go decisions), and Mixed Decision Variables (both integers and continuous variables). On top of that, Risk Simulator can handle Linear Optimization (i.e., when both the objective and constraints are all linear equations and functions) and Nonlinear Optimization (i.e., when the objective and constraints are a mixture of linear and nonlinear functions and equations).

As far as the optimization process is concerned, Risk Simulator can be used to run a Discrete Optimization, that is, an optimization performed on a discrete or static model in which no simulations are run. In other words, all the inputs in the model are static and unchanging. This optimization type is applicable when the model is assumed to be known and no uncertainties exist. Also, a discrete optimization can first be run to determine the optimal portfolio and its corresponding optimal allocation of decision variables before more advanced optimization procedures are applied. For instance, before running a stochastic optimization problem, a discrete optimization is first run to determine if solutions to the optimization problem exist before a more protracted analysis is performed.
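
To make the idea concrete, the following is a minimal Python sketch of a discrete (static) optimization: every input is a fixed point estimate, no simulation is involved, and a standard solver searches for the best allocation. The four assets, their return and risk figures, the 10% to 40% allocation limits, and the use of SciPy's SLSQP routine are illustrative assumptions only, not Risk Simulator's internal algorithm.

# A minimal sketch of a discrete (static) optimization: all model inputs are
# fixed point estimates and no simulation is run. Asset figures, allocation
# limits, and the SciPy solver are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

mean_returns = np.array([0.08, 0.11, 0.14, 0.10])  # assumed known point estimates
cov_matrix = np.diag([0.02, 0.05, 0.09, 0.04])     # assumed known (no uncertainty)

def neg_sharpe(weights):
    """Objective: maximize portfolio return / risk (minimize its negative)."""
    port_return = weights @ mean_returns
    port_risk = np.sqrt(weights @ cov_matrix @ weights)
    return -port_return / port_risk

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # weights sum to 100%
bounds = [(0.10, 0.40)] * 4                                     # allocation limits per asset

result = minimize(neg_sharpe, x0=np.full(4, 0.25),
                  bounds=bounds, constraints=constraints, method="SLSQP")
print("Optimal static allocation:", result.x.round(4))

Because nothing in this model is uncertain, rerunning the optimization always returns the same single answer, which is what makes it a useful quick check before the more protracted procedures described next.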

Next, Dynamic Optimization is applied when Monte Carlo simulation is used together with optimization. Another name for such a procedure is Simulation-Optimization. That is, a simulation is first run, then the results of the simulation are applied in the Excel model, and then an optimization is applied to the simulated values. In other words, a simulation is run for N trials, and then an optimization process is run for M iterations until the optimal results are obtained or an infeasible set is found. Using Risk Simulator’s optimization module, you can choose which forecast and assumption statistics to use and replace in the model after the simulation is run. Then, these forecast statistics can be applied in the optimization process. This approach is useful when you have a large model with many interacting assumptions and forecasts, and when some of the forecast statistics are required in the optimization. For example, if the standard deviation of an assumption or forecast is required in the optimization model (e.g., computing the Sharpe Ratio in asset allocation and optimization problems where we have the mean divided by the standard deviation of the portfolio), then this approach should be used.
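
A minimal Python sketch of this simulate-then-optimize sequence follows. The return distributions, the number of trials, and the use of NumPy and SciPy's SLSQP solver are illustrative assumptions; the point is only the ordering: one simulation produces the forecast statistics, and a single optimization then consumes them (here, through a Sharpe-ratio-style objective of portfolio return divided by portfolio risk).

# A minimal sketch of dynamic (simulation-optimization): simulate N trials first,
# collect the forecast statistics (means and covariance here), then run a single
# optimization on those statistics. All inputs are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
N = 10_000                                           # simulation trials
true_means = np.array([0.08, 0.11, 0.14, 0.10])      # assumed return distributions
true_stdevs = np.array([0.12, 0.20, 0.28, 0.16])
trials = rng.normal(true_means, true_stdevs, size=(N, 4))

# Step 1: forecast statistics extracted from the simulation
sim_means = trials.mean(axis=0)
sim_cov = np.cov(trials, rowvar=False)

# Step 2: a single optimization (the solver iterates internally) using those statistics
def neg_sharpe(weights):
    ret = weights @ sim_means
    risk = np.sqrt(weights @ sim_cov @ weights)
    return -ret / risk

result = minimize(neg_sharpe, x0=np.full(4, 0.25),
                  bounds=[(0.10, 0.40)] * 4,
                  constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
                  method="SLSQP")
print("Optimal dynamic allocation:", result.x.round(4))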

The Stochastic Optimization process, in contrast, is similar to the dynamic optimization procedure with the exception that the entire dynamic optimization process is repeated T times. That is, a simulation with N trials is run, and then an optimization is run with M iterations to obtain the optimal results. Then the process is replicated T times. The result is a forecast chart for each decision variable, each with T values. In other words, a simulation is run, and the forecast or assumption statistics are used in the optimization model to find the optimal allocation of decision variables. Then another simulation is run, generating different forecast statistics, and these new updated values are then optimized, and so forth. Hence, the final decision variables will each have their own forecast chart, indicating the range of the optimal decision variables. For instance, instead of the single-point estimates obtained in the dynamic optimization procedure, you now obtain a distribution of the decision variables and, hence, a range of optimal values for each decision variable; this is also known as a stochastic optimization.
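
Continuing the same illustrative setup, the Python sketch below repeats the simulate-then-optimize cycle T times, so each decision variable ends up with a distribution of optimal values rather than a single point estimate. The distributions, trial counts, and SciPy solver are again assumptions for illustration only.

# A minimal sketch of stochastic optimization: the simulate-then-optimize cycle
# is repeated T times, producing a distribution (not a single point estimate)
# for each optimal decision variable. All inputs are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
true_means = np.array([0.08, 0.11, 0.14, 0.10])
true_stdevs = np.array([0.12, 0.20, 0.28, 0.16])
T, N = 50, 2_000                                     # optimization runs, trials per run

def optimal_weights(sim_means, sim_cov):
    neg_sharpe = lambda w: -(w @ sim_means) / np.sqrt(w @ sim_cov @ w)
    res = minimize(neg_sharpe, np.full(4, 0.25),
                   bounds=[(0.10, 0.40)] * 4,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
                   method="SLSQP")
    return res.x

all_runs = []
for _ in range(T):                                   # each pass: new simulation, new optimization
    trials = rng.normal(true_means, true_stdevs, size=(N, 4))
    all_runs.append(optimal_weights(trials.mean(axis=0), np.cov(trials, rowvar=False)))

all_runs = np.array(all_runs)
print("Mean optimal weights  :", all_runs.mean(axis=0).round(4))
print("Std of optimal weights:", all_runs.std(axis=0).round(4))

The standard deviation across the T runs plays the role of the forecast chart described above: a wide spread signals that the optimal allocation is sensitive to simulation noise.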

Finally, an Efficient Frontier optimization procedure applies the concepts of marginal increments and shadow pricing in optimization. That is, what would happen to the results of the optimization if one of the constraints were relaxed slightly? Say, for instance, that the budget constraint is set at $1 million: what would happen to the portfolio's outcome and optimal decisions if the constraint were raised to $1.5 million, or $2 million, and so forth? This is the concept of the Markowitz efficient frontier in investment finance, where, if the portfolio standard deviation is allowed to increase slightly, the question is what additional returns the portfolio will generate. This process is similar to the dynamic optimization process with the exception that one of the constraints is allowed to change, and with each change, the simulation and optimization process is rerun. In Risk Simulator, the process can be run either manually (rerunning the optimization several times) or automatically (using Risk Simulator's changing constraint and efficient frontier functionality). As an example, the manual process is to run a dynamic or stochastic optimization, then rerun another optimization with a new constraint, and repeat that procedure several times. This manual process is important because, by changing the constraint, the analyst can determine whether the results are similar or different, and, hence, whether additional analysis is warranted, or determine how large a marginal increase in the constraint is needed to obtain a significant change in the objective and decision variables. This is done by comparing the forecast distribution of each decision variable after running a stochastic optimization. Alternatively, the automated efficient frontier approach is shown later in the chapter.
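
The manual sweep can be mimicked with the short Python sketch below: one constraint (here, an illustrative cap on portfolio risk rather than a budget) is relaxed in steps, the optimization is rerun at each step, and the resulting objective values trace out an efficient frontier. The figures and the SciPy solver are assumptions and do not represent Risk Simulator's changing-constraint functionality.

# A minimal sketch of an efficient frontier sweep: relax one constraint (the
# maximum allowed portfolio risk) in small steps and rerun the optimization at
# each step, recording how the optimal objective changes. Inputs are illustrative.
import numpy as np
from scipy.optimize import minimize

mean_returns = np.array([0.08, 0.11, 0.14, 0.10])
cov_matrix = np.diag([0.02, 0.05, 0.09, 0.04])

def max_return_given_risk(risk_cap):
    """Maximize portfolio return subject to a ceiling on portfolio risk."""
    cons = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: risk_cap - np.sqrt(w @ cov_matrix @ w)},
    ]
    res = minimize(lambda w: -(w @ mean_returns), np.full(4, 0.25),
                   bounds=[(0.0, 1.0)] * 4, constraints=cons, method="SLSQP")
    return -res.fun

for cap in np.arange(0.15, 0.31, 0.03):              # progressively relax the risk constraint
    print(f"risk cap {cap:.2f} -> best attainable return {max_return_given_risk(cap):.4f}")

Where the printed returns stop improving, further relaxing the constraint buys little additional value, which is exactly the marginal-increment question the efficient frontier is meant to answer.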

One item is worthy of consideration. Other software products exist that supposedly perform stochastic optimization, but, in fact, they do not. For instance, after a simulation is run, one iteration of the optimization process is generated, then another simulation is run, then the second optimization iteration is generated, and so forth. This process is simply a waste of time and resources: in optimization, the model is put through a rigorous set of algorithms, where multiple iterations (ranging from several to thousands) are required to obtain the optimal results, so generating one iteration at a time defeats the purpose. The same portfolio can be solved using Risk Simulator in under a minute as compared to multiple hours using such a backward approach. Also, such a simulation-optimization approach will typically yield poor results and is not a true stochastic optimization approach. Be extremely careful of such methodologies when applying optimization to your models.

The following are two example optimization problems. One uses continuous decision variables while the other uses discrete integer decision variables. In either model, you can apply discrete optimization, dynamic optimization, or stochastic optimization, or even manually generate efficient frontiers with shadow pricing. Any of these approaches can be used for these two examples. Therefore, for simplicity, only the model setup is illustrated, and it is up to the user to decide which optimization process to run. Also, the continuous decision variable example uses the nonlinear optimization approach (because the computed portfolio risk is a nonlinear function, and the objective is a nonlinear function of portfolio returns divided by portfolio risk), while the second example, an integer optimization, is a linear optimization model (its objective and all of its constraints are linear). Therefore, these two examples encapsulate all of the procedures mentioned above.
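
As a preview of the second example's structure, the Python sketch below solves a small binary (go/no-go) project-selection problem as an integer linear optimization. The project values, costs, budget, and the use of SciPy's milp solver (available in SciPy 1.9 and later) are illustrative assumptions and are not the chapter's actual model.

# A minimal sketch of an integer (binary) linear optimization: choose the
# go/no-go project mix that maximizes total value within a budget. All
# figures are illustrative assumptions.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

npv = np.array([120.0, 95.0, 180.0, 60.0, 150.0])     # value of each project
cost = np.array([400.0, 250.0, 600.0, 150.0, 500.0])  # implementation cost
budget = 1_000.0

result = milp(
    c=-npv,                                            # milp minimizes, so negate to maximize NPV
    constraints=LinearConstraint(cost.reshape(1, -1), ub=budget),  # stay within budget
    integrality=np.ones_like(npv),                     # decision variables are integers...
    bounds=Bounds(0, 1),                               # ...restricted to 0/1 (go / no-go)
)
selected = result.x.round().astype(int)
print("Go/no-go decisions:", selected)
print("Total NPV:", npv @ selected, " Total cost:", cost @ selected)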
