Calculation aspects of the European rebalanced basket option using Monte Carlo methods: Valuation

Extra premiums may be charged to a client to guarantee a minimum payout of a contract on a portfolio that is rebalanced back to fixed proportions on a regular basis. The valuation of this premium can be treated as the pricing of a European put option with the rebalanced portfolio as underlying. This paper finds the most efficient estimators for the value of this path-dependent multi-asset put option using different Monte Carlo methods. With the help of a refined method, the computational time of the valuation decreased significantly. Furthermore, variance reduction techniques and Quasi-Monte Carlo methods delivered more accurate and faster converging estimates.


Introduction
A wide variety of products exist in life insurance and pension fund companies. Some of these products offer the holder of the product a minimum payout guarantee by charging them an extra premium. This extra guarantee forms a liability to the insurer that needs to be managed in terms of risks and must be valued daily. Due to the implementation of Solvency II across the European Union (for more information see Financial Services Authority, 2011) most nations have started adopting the same principles, with South Africa adopting the Solvency Assessment and Management (SAM) programme (Financial Services Board, 2010). According to Pillar I of these programmes, all Solvency Capital Requirements (SCR) need to be accurately measured and kept as a reserve. A standard formula provided by the regulators may be used to help calculate the SCR, or an internal model may be used to estimate these requirements. Adopting an internal model brings the advantage of more accurate valuation and therefore, in most cases, a smaller SCR reserve.
The product considered in this paper is a portfolio, consisting of a assets, that is rebalanced back to fixed proportions, v_j, every τ years. Rebalancing is done by selling the better performing assets and buying the poorer performing assets. A minimum payout guarantee is offered to the client on this product, and this guarantee forms the main focus of this paper.
Given that the client will receive a payout of Π_T at the end of the life of the contract (the value of the portfolio at time T), this could be guaranteed to be a minimum of K. Therefore, at time T the payout of this contract, which would have been Π_T, becomes max{Π_T, K}. That is, max{Π_T, K} = Π_T + max{K − Π_T, 0}, with the second term of this expression exactly the payoff of a European put option. This problem may therefore be seen as the valuation of a European put option with underlying Π.
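The decomposition above can be checked numerically. The following Python sketch is purely illustrative (the paper's own implementation is in R, and all names and numbers here are hypothetical):

```python
# Numerical check of the guarantee decomposition max{Pi_T, K} = Pi_T + max{K - Pi_T, 0}.
def guaranteed_payout(pi_T, K):
    """Contract payout with a minimum payout guarantee of K."""
    return max(pi_T, K)

def put_payoff(pi_T, K):
    """Payoff of the embedded European put (the ERBPO payoff)."""
    return max(K - pi_T, 0.0)

K = 1000.0
for pi_T in (800.0, 1000.0, 1300.0):
    # The guaranteed payout equals the unguaranteed payout plus a put payoff.
    assert guaranteed_payout(pi_T, K) == pi_T + put_payoff(pi_T, K)
```

The extra premium charged for the guarantee is thus exactly the value of this embedded put.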
In this paper the price (value) of this put option is estimated using different Monte Carlo (MC) methods in order to find the most efficient method. Due to the large and increasing computational power of corporations' clusters of servers, simulation becomes a feasible numerical method for calculating aspects of options where no closed-form solution or formulae exist; therefore the focus of this paper will only be MC numerical methods.
Although general methods to apply MC simulations to path-dependent and multi-asset options exist, currently no literature on this specific type of option exists. As such, this new option will from here on be referred to as the European Rebalanced Basket Call/Put Option (ERBCO/ERBPO). Only the ERBPO will be considered, but the put-call parity for the European Rebalanced Basket Option (ERBO) is given in the concluding section and may be used to calculate the value of the ERBCO once the price of the ERBPO is found. Only a put option with an underlying portfolio that consists of two assets is considered in the results sections, but the methods extend easily to an a-asset ERBO.
The focus of the next section is the valuation of the ERBPO using general MC methods. First, a simplistic method is used, which is then substantially improved by using a new formula to simulate the value of the underlying portfolio. These are then compared in terms of computational time. This is followed by a section which improves the error of the estimates with the help of Variance Reduction Techniques (VRTs). These methods are compared in terms of the estimates' standard error (SE) given a fixed simulation time. The fourth section improves the convergence of the estimates using Quasi-Monte Carlo (QMC) techniques, and the different methods are also compared in terms of different performance criteria.

General Monte Carlo
When used to value options, MC simulation uses the risk-neutral valuation result: the premium that needs to be charged for an option can be estimated by sampling paths for its underlying distribution to obtain the expected payoff in a risk-neutral world, and then discounting this payoff at the risk-free rate. The literature in this section is from Glasserman (2004, pp. 39-95), which builds on the work done by Boyle (1977).
A brief overview of the general MC framework for option valuation is provided in the following subsection. This is followed by the derivation of the Simplistic MC (SMC) approach to valuing the ERBPO, which is refined in the subsection that follows. The section concludes with a comparison of the two methods in terms of value, error, and computational time.

Monte Carlo Framework for option valuation
Let X be a given random variable with E[X] = λ, where the true value is unknown, and V(X) = σ². In MC simulation, given the distribution of X, n independent observations of X, i.e. {X_i : i = 1, . . ., n}, are generated. The parameter λ is estimated by the sample mean λ(n) = (1/n) Σ_{i=1}^{n} X_i. As the number of simulations, n, increases, λ(n) becomes a better estimator of λ as a consequence of the Law of Large Numbers (LLN) (Rice 2007, p. 175).
The payoff at time T for the ERBPO is max{K − Π_T, 0}. This needs to be discounted back to time t = 0 to obtain the value today. That is, in terms of the previous paragraph: X = e^{−rT} max{K − Π_T, 0} with E[X] = α, the price of the option, and r the zero-coupon risk-free rate. It is important to note that throughout this paper it will be assumed that the term structure of risk-free rates is flat. It is, however, not difficult to incorporate a non-flat term structure of interest rates, as one simply needs to calculate the forward rates, r_{t,t+∆} = ((t + ∆) × r_{t+∆} − t × r_t)/∆ (with ∆ the length of the time step being simulated), when simulating between time steps.
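The forward-rate formula above can be sketched in a few lines. This is a hypothetical Python illustration (the paper works in R; the curve and parameter names are assumptions for the example):

```python
def forward_rate(t, delta, zero_rate):
    """Continuously compounded forward rate for [t, t + delta] implied by a
    zero-coupon curve, r_{t,t+delta} = ((t + delta) r_{t+delta} - t r_t) / delta.
    zero_rate is any callable t -> zero rate (hypothetical curve)."""
    return ((t + delta) * zero_rate(t + delta) - t * zero_rate(t)) / delta

# Under a flat term structure every forward rate collapses back to the flat
# rate, which is why a single r can be used for discounting throughout.
flat = lambda t: 0.03
assert abs(forward_rate(2.0, 1.0, flat) - 0.03) < 1e-12
```

With a non-flat curve, each simulated time step would simply use its own forward rate in place of r.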
Therefore, the problem changes to simulating the variate Π_T. Once this has been simulated, the payoff can easily be discounted. The simulation of Π_T forms the focus of the next two subsections.

Simplistic Monte Carlo
Before considering the simulation of an a-asset option, the process followed by the underlying stocks needs to be discussed. Each stock has a continuous dividend yield, q_j, with j = 1, . . ., a. For multi-asset options the correlated underlying stocks are assumed to follow the Geometric Brownian motion process (considering that in risk-neutral valuation all assets are assumed to have the same return, r), resulting in dS_j/S_j = (r − q_j) dt + σ_j dz_j, with Ê[dz_j dz_k] = ρ_{jk} dt, Ê the expected value in the risk-neutral world, and ρ_{jk} the correlation between assets j and k. In sampling the paths of these assets (for j = 1, . . ., a with correlation matrix Σ), the correlated increments can be simulated with the use of Cholesky factorisation. This will be explained in terms of generating a correlated normally distributed variables ε_1, ε_2, . . ., ε_a. A sequence of a uncorrelated normally distributed variables Z_1, Z_2, . . ., Z_a can be generated and transformed with ε = MZ, where ε^T = (ε_1, . . ., ε_a) and Z^T = (Z_1, . . ., Z_a) are column vectors. The matrix M : a × a must satisfy MM^T = Σ, with Σ : a × a the correlation matrix. This can be confirmed by taking the expectation E[εε^T] = M E[ZZ^T] M^T = MM^T = Σ. Therefore, to simulate paths for the underlying stocks, only a uncorrelated N(0, 1) variables are needed. These are simulated by generating U ∼ U(0, 1)^a and applying the Inverse Probability Integral Transform (IPIT) (Rice 2007, pp. 352-358). It is important to note that the whole simulation process originates from U ∼ U(0, 1)^a. Note, however, that the dimensionality increases as more time steps are simulated: say 5 jumps need to be simulated for each asset and there are 4 assets, then the dimensionality of one simulation (that is, the portfolio value at time T) depends on 4 independent U ∼ U(0, 1)^5 variates, or simply U ∼ U(0, 1)^20.
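For the two-asset case the Cholesky factor is simply M = [[1, 0], [ρ, √(1 − ρ²)]]. The following Python sketch illustrates the transform (the paper uses R and the IPIT on uniforms; here `random.gauss` supplies the equivalent N(0, 1) draws, and all names are illustrative):

```python
import math
import random

def correlated_normals(rho, rng):
    """One draw of (eps_1, eps_2) ~ MVN_2(0; Sigma) for a 2x2 correlation
    matrix, via the Cholesky factor M = [[1, 0], [rho, sqrt(1 - rho^2)]]."""
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)  # uncorrelated N(0, 1)
    return z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2

# Sanity check: the sample correlation should be close to rho.
rng = random.Random(42)
pairs = [correlated_normals(0.5, rng) for _ in range(200_000)]
sample_corr = sum(a * b for a, b in pairs) / len(pairs)
assert abs(sample_corr - 0.5) < 0.02
```

Since both marginals have unit variance, the mean of the products estimates ρ directly.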
The portfolio Π gets rebalanced every τ years, with the option expiring at T. The underlying assets need to be simulated at the times {t = ℓτ : ℓ = 1, . . ., ⌊T/τ⌋} ∪ {T}, which implies that the number of jumps that needs to be simulated for each stock is ⌈T/τ⌉. Thus, the dimension of the problem changes to a · ⌈T/τ⌉, with a the number of assets underlying the portfolio.
To simulate the value of the portfolio (in order to calculate the discounted payoff), the following needs to be considered first: the value of the portfolio at any given time t is expressed as Π_t = w_{1,ℓτ} S_{1,t} + . . . + w_{a,ℓτ} S_{a,t}, with ℓτ the timestamp of the rebalancing prior to time t and

w_{j,ℓτ} = v_j Π_{ℓτ} / S_{j,ℓτ}    (1)

the number of units of asset j held in Π at that point in time. Further, note that Π_{ℓτ−δt} = Π_{ℓτ} when δt → 0; that is, the value of the portfolio just before rebalancing is exactly the same as just after rebalancing.
Values for {w_{j,⌊T/τ⌋τ} : j = 1, . . ., a} are found by means of equation (1), after which a possible value of Π_T = Σ_{j=1}^{a} w_{j,⌊T/τ⌋τ} S_{j,T} can be simulated. This is used to simulate n independent values of X, {X_i : i = 1, . . ., n}, such that the estimator for the price of the ERBPO is given by

α_SMC = (1/n) Σ_{i=1}^{n} e^{−rT} max{K − Π_{i,T}, 0}.
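A minimal Python sketch of the simplistic approach for two assets follows. It is an illustration only (the paper's implementation is in R; the function signature, the choice to start both assets at Π_0, and the parameter values are assumptions for the example):

```python
import math
import random

def price_erbpo_smc(n_paths, Pi0, K, r, sigmas, rho, T, tau, v, seed=1):
    """Simplistic MC price of a two-asset ERBPO: simulate both asset paths,
    rebalance the portfolio to proportions v at every step, then discount
    the average put payoff."""
    rng = random.Random(seed)
    steps = math.ceil(T / tau)
    dts = [tau] * steps
    dts[-1] = T - tau * (steps - 1)          # last step may be shorter (T mod tau)
    total = 0.0
    for _ in range(n_paths):
        S = [Pi0, Pi0]                       # both assets started at Pi0 for simplicity
        pi = Pi0
        w = [v[j] * pi / S[j] for j in range(2)]       # units held: w_j = v_j Pi / S_j
        for dt in dts:
            z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
            eps = (z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
            for j in range(2):
                S[j] *= math.exp((r - 0.5 * sigmas[j] ** 2) * dt
                                 + sigmas[j] * eps[j] * math.sqrt(dt))
            pi = w[0] * S[0] + w[1] * S[1]             # value just before rebalancing
            w = [v[j] * pi / S[j] for j in range(2)]   # rebalance back to proportions v
        total += math.exp(-r * T) * max(K - pi, 0.0)
    return total / n_paths

price = price_erbpo_smc(2_000, 1000.0, 1000.0, 0.03, (0.3, 0.3), 0.5, 10.0, 1.0, (0.5, 0.5))
```

Note that the full vector of asset prices and rebalancing weights is carried along every path, which is what the refined method below avoids.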

Refined Monte Carlo
It can be proved by means of mathematical induction that the value of Π_T for a portfolio that consists of two non-dividend paying assets, with ∆t^T = (τ, . . ., τ, T mod τ), can be calculated with

Π_T = Π_0 ∏_{ℓ=1}^{⌈T/τ⌉} [ v_1 exp((r − σ_1²/2)∆t_ℓ + σ_1 ε_{1,ℓ} √∆t_ℓ) + v_2 exp((r − σ_2²/2)∆t_ℓ + σ_2 ε_{2,ℓ} √∆t_ℓ) ].    (2)

For a portfolio that consists of a dividend paying assets, with dividend yields q_j, (2) can be generalised to

Π_T = Π_0 ∏_{ℓ=1}^{⌈T/τ⌉} Σ_{j=1}^{a} v_j exp((r − q_j − σ_j²/2)∆t_ℓ + σ_j ε_{j,ℓ} √∆t_ℓ).

This formula may be seen as the initial portfolio value, Π_0, growing proportionally to the growth rates of the different underlying correlated assets.
Hence, to simulate the price of this portfolio one only needs an observed value of U ∼ U(0, 1)^d, with d = a · ⌈T/τ⌉, to obtain ⌈T/τ⌉ correlated normally distributed vectors ε_ℓ ∼ MVN_a(0; Σ). This method delivers exactly the same results, but in considerably less computational time.
The final Refined Monte Carlo (RMC) estimator is given by

α_RMC = (1/n) Σ_{i=1}^{n} e^{−rT} max{K − Π_{i,T}, 0},

with each Π_{i,T} computed from (2) using ⌈T/τ⌉ independently identically distributed (i.i.d.) vectors ε_ℓ ∼ MVN_a(0; Σ). The vectors are obtained by generating U_ℓ ∼ U(0, 1)^a, using the IPIT to obtain Z_ℓ ∼ MVN_a(0, I), and finally transforming them to ε_ℓ = MZ_ℓ.
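The refined update can be sketched as follows in Python (again an illustration, not the paper's R code; the signature and parameter values are hypothetical). The portfolio value is updated directly by the v-weighted gross returns, so individual asset prices and rebalancing weights are never stored:

```python
import math
import random

def price_erbpo_rmc(n_paths, Pi0, K, r, sigmas, rho, T, tau, v,
                    qs=(0.0, 0.0), seed=1):
    """Refined MC price of a two-asset ERBPO: per step,
    Pi *= sum_j v_j * exp((r - q_j - sigma_j^2/2) dt + sigma_j eps_j sqrt(dt))."""
    rng = random.Random(seed)
    steps = math.ceil(T / tau)
    dts = [tau] * steps
    dts[-1] = T - tau * (steps - 1)          # last step may be shorter (T mod tau)
    total = 0.0
    for _ in range(n_paths):
        pi = Pi0
        for dt in dts:
            z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
            eps = (z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
            # weighted gross return of the rebalanced portfolio over this step
            pi *= sum(v[j] * math.exp((r - qs[j] - 0.5 * sigmas[j] ** 2) * dt
                                      + sigmas[j] * eps[j] * math.sqrt(dt))
                      for j in range(2))
        total += math.exp(-r * T) * max(K - pi, 0.0)
    return total / n_paths
```

Since Π Σ_j v_j (S_{j,new}/S_{j,old}) equals Σ_j w_j S_{j,new} with w_j from (1), this is mathematically equivalent to the simplistic path simulation, only cheaper per step.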
R provides its Random Number Generators (RNGs) in its base package, documented under Random (R Development Core Team and contributors worldwide 2011). The user may choose which algorithm is used to generate the random uniformly distributed variables, with the Mersenne-Twister as the default.
The seed consists of a vector of integers whose length depends on the algorithm chosen, and it would therefore be cumbersome to include here. Note that, whenever simulations are performed for a specific result, R enables the user to keep that initial seed constant, and this was done throughout all simulations.

Results for the comparison of the simplistic vs. the refined Monte Carlo approach
This section provides a comparison of the SMC to the RMC approach. Simulations were performed over 9 combinations of ρ ∈ {−0.5, 0, 0.5}, Π_0 ∈ {500, 1 000, 1 500} and n ∈ {500, 50 000, 500 000}, for a two-asset ERBPO. The other parameters were held fixed as follows: τ = 1, v_1 = v_2 = 0.5, T = 10, S_{1,0} = 15, S_{2,0} = 20, r = 0.03, σ_1 = σ_2 = 0.3, q_1 = q_2 = 0 and K = 1 000. The initial seed for the RNG was also fixed so that results from different methods could be compared. The price and SE were found to be exactly the same for both approaches, with the only difference being the computational time for the two methods.
Note that the combinations of the inputs used were chosen arbitrarily; any other parameter could have been chosen as part of the different combinations. The most important parameter incorporated here is the number of simulations, n, as it significantly affects the computational time of each combination, while the other inputs simply group the simulations.
Figure 1 shows the different methods in terms of computational time on a logarithmic scale, with a distinction made between n ∈ {500, 50 000, 500 000}. Note that these differences will increase/decrease as the dimensionality of the problem increases/decreases. The computational time is considerably less for the refined method. In nominal terms, for n = 500 000 the simplistic approach took approximately 340 seconds, whereas the refined method only took approximately 8 seconds, a saving of approximately 5.5 minutes. In relative terms, the refined method only took approximately 2% of the computational time of the simplistic method, a reduction of approximately 98%. The decrease in computational time becomes especially important in practice when valuing a large number of contracts at the same time.

Variance reduction techniques
In this section three different VRTs are applied to the RMC simulation of the ERBPO. The methodology behind each VRT in terms of the ERBPO will briefly be described in each subsection, after which the estimator of the ERBPO will be given. The section concludes with a comparison of results, facilitating a choice with regard to the superior method. A brief discussion of the methodology behind VRTs is supplied first.
In MC simulation, λ = E[X] is estimated by generating a sample {X_i : i = 1, . . ., n} and then determining λ(n) = (1/n) Σ_{i=1}^{n} X_i. Furthermore, the SE of the estimator is σ/√n, with σ² the variance of X. Note that there are two elements that influence the SE, namely √n and σ. The first element can easily be interpreted: the more simulations that are performed, the smaller the SE will become, and the more accurate the estimate will be. The other element is the square root of the variance of the simulated variable X. Therefore, to make the SE smaller, the variance of X should be reduced.
The SE is directly related to the width of the confidence intervals (CIs) constructed after simulations are performed. If the SE can be reduced, then smaller CIs can be obtained. In this section methods are thus introduced to reduce the size of σ. The best method will be chosen on the basis of a fixed computational time and the size of the SE. That is, given a fixed amount of time, which method yields the smallest SE and therefore the smallest CI? Some VRTs increase the computational time of the simulation, but this was taken into account since simulations across all VRTs were performed over a fixed amount of time. A VRT that increases computational time will thus be simulated a smaller number of times.

Antithetic variables
Antithetic variables (AVs) attempt to reduce the variability of the simulations as follows: one set of simulations is produced with the help of uniformly distributed random variables, and a second set is produced with uniformly distributed variables that are perfectly negatively correlated with the first set.
Thus, using these in a monotonic function H(·) causes the two sets of outputs to be negatively correlated. Generate n/2 observations of the pair average to obtain

α_AV = (2/n) Σ_{i=1}^{n/2} (X_i + Y_i)/2,

with X_i and Y_i (i.i.d.) simulations of X and Y as given above (using negatively correlated underlying uniformly distributed variables). Hence V((X_i + Y_i)/2) = (σ²/2)(1 + ρ_{X,Y}). It is clear that the SE reduces when ρ_{X,Y} < 0.
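The effect can be demonstrated with a small Python experiment on a hypothetical monotone payoff, a discounted one-step put driven by a single uniform through the IPIT (all parameters are illustrative; the paper's VRT experiments are in R):

```python
import math
import random
from statistics import NormalDist

def h(u):
    """Hypothetical monotone payoff: discounted one-step put on a GBM,
    driven by one uniform u via the IPIT."""
    z = NormalDist().inv_cdf(u)
    s_T = 100.0 * math.exp((0.03 - 0.5 * 0.3 ** 2) + 0.3 * z)
    return math.exp(-0.03) * max(100.0 - s_T, 0.0)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(7)
plain = [h(rng.random()) for _ in range(20_000)]              # n plain draws
pairs = [0.5 * (h(u) + h(1.0 - u))                            # n/2 antithetic pairs
         for u in (rng.random() for _ in range(10_000))]

# Each pair costs two evaluations, yet its variance is (sigma^2/2)(1 + rho),
# which is below sigma^2/2 because the payoff is monotone in u, so rho < 0.
assert variance(pairs) < 0.5 * variance(plain)
```

The factor 1/2 in the assertion accounts for the double cost of a pair, so the inequality shows a genuine SE reduction per unit of work.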

Control variates
Control Variates (CVs) incorporate a variate, of which the true value is known, into the simulation process. This subsection starts by explaining mathematically why this may reduce the variability of the simulated variables. A more detailed summary of CVs may be found in Chan and Wong (2006, pp. 104-109); the technique is applied here to the pricing of the ERBPO.
The CV, Y, will be chosen as a basket of a options, each being a plain vanilla European put option (this changes to call options when working with the ERBCO) on asset j, with j = 1, . . ., a. That is, Y_CV = Σ_{j=1}^{a} Y_j, with Y_j the discounted payoff from each of these put options. For simplicity, all assets are assumed to have an initial value of Π_0, such that Y_j = e^{−rT} max{K − S_{j,T}, 0}, with S_{j,T} the value of the single dividend paying stock at time T.
The true value for each of these a options can be found with the help of the Black-Scholes (BS) option pricing formulae (Black and Scholes 1973), since the volatility of each of the underlying assets is known. The true value of the basket is denoted by µ_CV = Σ_{j=1}^{a} µ_j. The observations of the CVs, Y_j, are computed from the already simulated observations of Π_T by splitting the calculation of Π_T.
Define the per-period growth factors

Θ_{j,ℓ} = exp((r − q_j − σ_j²/2)∆t_ℓ + σ_j ε_{j,ℓ} √∆t_ℓ),

with j = 1, . . ., a and ℓ = 1, . . ., ⌈T/τ⌉. Using Θ_{j,ℓ}, the prices of the dividend paying assets with initial prices Π_0 are easily computed with S_{j,T} = Π_0 ∏_{ℓ=1}^{⌈T/τ⌉} Θ_{j,ℓ}, such that S_{j,T}, with j = 1, . . ., a, may be used in the simulation of the prices of the vanilla puts Y_j. Hence the true values are calculated with

µ_j = Ke^{−rT} Φ(−d_{2,j}) − Π_0 e^{−q_j T} Φ(−d_{1,j}),

where d_{1,j} = (ln(Π_0/K) + (r − q_j + σ_j²/2)T)/(σ_j √T), d_{2,j} = d_{1,j} − σ_j √T, and Φ(·) is the cumulative standard normal distribution function (Black and Scholes 1973).
The final estimator of the price of the ERBPO using CVs is given by

α_CV = X̄ + ĉ*(Ȳ − µ_CV),

with X_i the i.i.d. observations of the discounted simulated payoffs of the ERBPO, Y_i the i.i.d. observations of Y_CV = Σ_{j=1}^{a} e^{−rT} max{K − S_{j,T}, 0}, µ_CV the true value of the sum of the a put options found using the BS pricing formulae, and X̄ and Ȳ the sample averages of the simulated observations.
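A toy Python version of this estimator follows, for a one-period two-asset case so that no rebalancing occurs (an illustrative sketch, not the paper's R code; the helper names and parameter values are assumptions). The control is the basket of vanilla puts, whose true value comes from the BS formula:

```python
import math
import random
from statistics import NormalDist

Phi = NormalDist().cdf

def bs_put(S0, K, r, q, sigma, T):
    """Black-Scholes price of a European put (known mean of the control)."""
    d1 = (math.log(S0 / K) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * Phi(-d2) - S0 * math.exp(-q * T) * Phi(-d1)

def cv_estimate(xs, ys, mu_y):
    """alpha_CV = mean(X) + c_hat * (mean(Y) - mu_Y), c_hat = -cov(X, Y)/var(Y)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
    var_y = sum((y - ybar) ** 2 for y in ys) / (n - 1)
    return xbar + (-cov / var_y) * (ybar - mu_y)

rng = random.Random(3)
Pi0 = K = 1000.0
r, sig, rho, T = 0.03, 0.3, 0.5, 1.0
xs, ys = [], []
for _ in range(20_000):
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    eps = (z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    S = [Pi0 * math.exp((r - 0.5 * sig ** 2) * T + sig * e * math.sqrt(T)) for e in eps]
    xs.append(math.exp(-r * T) * max(K - 0.5 * (S[0] + S[1]), 0.0))   # ERBPO payoff
    ys.append(sum(math.exp(-r * T) * max(K - s, 0.0) for s in S))     # control basket
mu_cv = 2.0 * bs_put(Pi0, K, r, 0.0, sig, T)   # true value of the control
price = cv_estimate(xs, ys, mu_cv)
```

Because the ERBPO payoff and the basket of vanilla puts are strongly positively correlated, the correction term removes much of the sampling noise.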

Latin hypercube sampling
Latin Hypercube Sampling (LHS) is a method of systematic sampling for higher dimensions. It was first introduced by McKay, Conover, and Beckman (1979) and further analysed in Stein (1987). LHS treats all coordinates equally and avoids the exponential growth in sample size resulting from full stratification by stratifying only the one-dimensional marginals of a multi-dimensional joint distribution. The method helps with the reduction of variance by sampling systematically (evenly spread out) throughout the unit hypercube.
The package lhs (Carnell 2009) for the statistical program R (R Development Core Team 2009) has a built-in function for generating an LHS of size B for dimension d. This was used to generate the underlying U ∼ U(0, 1)^d for the simulations.
If n samples are generated from the unit hypercube (0, 1)^d using LHS, they could as well have been a realisation of n samples from the unit hypercube (0, 1)^d using ordinary pseudorandom numbers; the only difference is that the LHS samples are spread evenly through the unit hypercube. Therefore, the naive SE of this method's estimate will be almost exactly the same as for the normal refined MC method, and no variance reduction will be visible from it (Glasserman 2004, p. 242).
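The construction itself is simple. The following minimal stdlib Python sketch builds one LHS (the paper instead uses the R package lhs; the function name here is hypothetical): each coordinate is split into B equal strata, each stratum receives exactly one point, and the strata are paired across coordinates by independent random permutations.

```python
import random

def latin_hypercube(B, d, rng):
    """One LHS of B points in (0, 1)^d: every one-dimensional marginal hits
    each stratum [k/B, (k+1)/B) exactly once."""
    cols = []
    for _ in range(d):
        perm = list(range(B))
        rng.shuffle(perm)                    # random pairing of strata across coordinates
        cols.append([(k + rng.random()) / B for k in perm])  # one point per stratum
    return list(zip(*cols))                  # B points, each of dimension d

pts = latin_hypercube(8, 2, random.Random(0))
for j in range(2):
    strata = sorted(int(p[j] * 8) for p in pts)
    assert strata == list(range(8))          # each stratum occupied exactly once
```

The even one-dimensional coverage is what drives the variance reduction, while the random permutations and offsets keep the estimator unbiased.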

Results for measuring the efficiency of different VRTs
To determine the efficiency of the VRTs, the various methods were compared with each other. In total, 1 620 two-asset ERBPO problem instances were constructed through all combinations of the following parameters: the correlation ρ ∈ {−1, −0.5, 0, 0.5, 1}; the initial value of the portfolio Π_0 ∈ {500, 1 000, 1 500}; the time between rebalancing τ ∈ {0.25, 0.5, 1}; the proportion invested in the first asset v_1 ∈ {0.1, 0.5}; the time till maturity T ∈ {2, 10, 15}; the risk-free rate r ∈ {0.01, 0.03, 0.08}; the standard deviation of the first asset σ_1 ∈ {0.05, 0.3}; the standard deviation of the second asset σ_2 = 0.3; and the strike price K = 1 000. Each of the inputs used in the combinations affects the price of the ERBO. As an example, if the option were far out-of-the-money (Π_0 >> K), most of the simulations would return a payoff of 0, and therefore the SE would be negligible and no variance reduction effect would be visible.
The ERBPO was priced using the RMC (Normal) method as well as the RMC method with AV, CV and LHS as VRTs, using all possible combinations of the above parameters.
The number of repetitions for each of the 1 620 combinations was adjusted so that the computational time lasted approximately one second. That is, given one second, which method yields, relative to the other methods, the smallest SE and therefore the smallest CI? Due to the extent of the results, only a summary is provided here.
Figure 2 gives the percentage of times a certain VRT outperformed the others, i.e. produced the smallest SE in one second. The second column (Smallest) counts the percentage of times a VRT outperformed the other methods. The NA (Not Available) row indicates the situations where there was no clear winner. The next five columns indicate the percentage of times a certain VRT outperformed the second best VRT by more than a certain threshold. These thresholds are 0.05, 0.1, 0.2, 0.5 and 0.75. For example, in 29% of the 1 620 instances the CV method produced an SE that was more than 0.05 smaller than that of the second best method. At first glance, AV outperformed all of the other VRTs. However, when considering by how much AV reduces the variance with respect to the other methods, this advantage fades: the percentage of times AV performed best falls off much faster than that of CV as the margin over the second most efficient VRT increases. CV is therefore chosen as the better VRT; when applying it, the SE of the estimate should mostly be smaller than for the other methods.
In the next section, QMC techniques will be used to price the options.

Quasi-Monte Carlo techniques
In this section the application of QMC techniques to the normal RMC method is considered. QMC techniques attempt to accelerate the convergence of the method by using low discrepancy sequences (LDSs), which are deterministically chosen sequences. The methodology behind the implementation of QMC for the ERBPO problem is first discussed, followed by two ways to evaluate the performance of these methods: one with fixed dimensionality and the other over increasing dimensionality.

LDS and RLDS
Several QMC techniques exist that are implemented in the pricing of the ERBO. These may be split up into two different classes of sequences, namely normal LDSs and randomised LDSs (RLDSs). Randomising QMC points opens the possibility of measuring error through a CI while preserving much of the accuracy of pure QMC. Randomised QMC (RQMC), which uses RLDSs, thus seeks to combine the best features of ordinary MC and QMC. Randomisation sometimes improves accuracy as well.
The three LDSs that were implemented were the Halton (HS), Faure (FS) and Sobol' (SS) sequences (Halton 1960, Hammersley 1960, Faure 1982, Sobol' 1967). The algorithm used for pricing the ERBPO was exactly the same as for the RMC method, except that LDSs were used as the U ∼ U(0, 1)^d, with d = a · ⌈T/τ⌉, instead of generating them with an RNG. Four RLDSs were also implemented, all based on the SS, since this is the best performing sequence of the three normal LDSs discussed. In the first RLDS, U ∼ U(0, 1)^d is generated with the RNG, Random (R Development Core Team and contributors worldwide 2011), and then added, modulo 1, to each point of the SS (Sobol' R) (Glasserman 2004, pp. 320-321). The next three RLDSs are built into R, and include: Owen type scrambling (Sobol' R1); Faure-Tezuka type scrambling (Sobol' R2); and both Owen and Faure-Tezuka type scrambling (Sobol' R3) (Owen 1998, Tezuka and Faure 2003). These can all be found in the package fOptions (Wuertz 2010) in R (R Development Core Team 2009).
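The random-shift construction behind Sobol' R is easy to sketch. The following Python illustration applies it to a Halton point set, since a Halton generator fits in a few lines (an assumption for the example; the paper applies the shift to the Sobol' sequence via R's fOptions package):

```python
import random

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base;
    coordinate j of a Halton point uses the j-th prime as base."""
    x, f = 0.0, 1.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

def random_shift(points, shift):
    """Random-shift RQMC: add one uniform draw to every point, modulo 1,
    which preserves the low discrepancy while making the points random."""
    return [tuple((c + s) % 1.0 for c, s in zip(p, shift)) for p in points]

pts = [(halton(i, 2), halton(i, 3)) for i in range(1, 17)]
rng = random.Random(5)
shifted = random_shift(pts, (rng.random(), rng.random()))
assert all(0.0 <= c < 1.0 for p in shifted for c in p)
assert abs(halton(1, 2) - 0.5) < 1e-12 and abs(halton(3, 2) - 0.75) < 1e-12
```

Repeating the pricing over several independent shifts yields i.i.d. estimates from which a CI can be built, which is exactly the advantage RQMC has over pure QMC.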

Measuring efficiency and results for QMC techniques
The results section is divided into two parts. The first compares the different LDSs and RLDSs over different inputs for the ERBPO with the dimension kept constant at 4 and the number of points used to estimate the price as the factor over which they are compared. The second compares the different LDSs and RLDSs over different inputs for the ERBPO with the number of points used to estimate the price kept constant and the dimension as the factor over which the sequences are compared.
The first method uses the Root Mean Squared Error (RMSE) to compare the different values for the number of points used. After this, the Relative RMSE (RRMSE) is used to compare the different values for the dimension.

Constant dimensionality
The first results are obtained with the equation

RMSE(n) = √( (1/m) Σ_{i=1}^{m} (α̂_i(n) − α_i)² ),

for m ERBPO problems with true values α_1, . . ., α_m and n-point approximations denoted by α̂_1(n), . . ., α̂_m(n). Unfortunately, the true values are not known and have to be estimated with MC. That is, let the true values be denoted by α_i and be estimated with α_{MC,i}(N*) with N* → ∞. The LLN implies that this will be arbitrarily close to the true value (given N* is sufficiently large). The RMSE equation now changes to

RMSE(n) = √( (1/m) Σ_{i=1}^{m} (α̂_i(n) − α_{MC,i}(N*))² ).

Thus, RMSE(n) → d_{m,N*} as n → ∞ and RMSE(n) → 0 as N* → ∞. In reality, the RMSE will always converge to a number d_{m,N*} due to the fact that N* → ∞ is computationally impossible. Nevertheless, the different methods may still be compared on how fast they converge to this value. Figure 3 provides this comparison for the different LDSs together with some RLDSs. Note that α_i was estimated with α_{MC,i}(N*) using the normal refined MC algorithm with N* = 10^7. The other parameters were chosen as follows: ρ ∈ {−1, −0.5, 0, 0.5, 1}, Π_0 ∈ {500, 1 000, 1 500}, v_1 ∈ {0.1, 0.25, 0.5}, r ∈ {0.01, 0.03, 0.08}, σ_1 ∈ {0.05, 0.1, 0.3, 0.5}, K = 1 000, T = 10, τ = 2 and σ_2 = 0.3. The values for n were chosen carefully, as powers of four, to optimally fill the unit hypercube: n ∈ {4^4, . . ., 4^7, 4^8, 4^9}. Here, the most important input to the combinations is once again the number of simulations, n. The other parameters were simply chosen to group the simulations.
Figure 3 displays the graph for the first part of the results for this section. From the graph it is clear that the best performing sequences are the randomised SSs (R1, R2 or R3), as they converge much faster to the value d_{m,N*}. This gives two advantages: faster convergence, and, because these are randomised sequences, the possibility of constructing a CI.
From this result it is interesting to note that, to obtain the same RMSE of 0.9, the normal MC simulation has to use approximately 100 000 simulations compared to the Sobol' R3 method that only needs approximately 1 000.That is approximately a 99% reduction in the number of simulations.

Increasing dimensionality
The second part of this results section states how the LDSs and RLDSs performed over different values of the dimension. Glasserman (2004, p. 327) suggests using the RRMSE to compare the different sequences when considering increasing dimensionality. The formula is given by

RRMSE(n) = √( (1/m) Σ_{i=1}^{m} ((α̂_i(n) − α_i)/α_i)² ),

for m ERBPO problems with true values α_1, . . ., α_m and n-point approximations denoted by α̂_1(n), . . ., α̂_m(n). Unfortunately, the true values are not known, and have to be estimated with MC. That is, let the true values be denoted by α_i and be estimated by α_{MC,i}(N*) with N* → ∞.
In the results given in Figure 4, n was chosen as 5 120 and N* as 900 000. The time between rebalancing was carefully chosen such that the dimension of the problems changes from 20 upwards. The FS and HS do not perform well at all due to the nature of these sequences (n has to be chosen carefully to obtain better results). Comparing the other sequences, it is clear that they all produce smaller RRMSEs than the normal MC method. However, the efficiency decreases as the dimension increases. For example, when d = 10 the Sobol' R1 method produced an RRMSE which was significantly smaller than that of the normal MC method, but when d = 200 they produced almost the same RRMSE.

Conclusion
Monte Carlo techniques may be used to price a variety of different exotic options. This article aimed to find the best Monte Carlo technique to price the ERBO. A simplistic approach was refined using a mathematical proof, after which different VRTs were applied to help reduce the size of the error. Convergence was then accelerated with the help of different Quasi-Monte Carlo and Randomised Quasi-Monte Carlo techniques. The final combined algorithm for optimal pricing of the ERBO is given in the Appendix. This can be programmed in any mathematical/statistical package; R (R Development Core Team 2009) is advised, as the Randomised Quasi-Monte Carlo techniques are readily available there. Thus, by using the Refined Monte Carlo method with option prices as Control Variates, together with Owen and Faure-Tezuka type randomised Sobol' sequences as a Quasi-Monte Carlo method, more efficient methods to price this option are obtained.
Only the pricing of the ERBPO was discussed in this paper, but the algorithm is easily adapted to price the ERBCO. Note that the price of the ERBCO can also be found with the help of the put-call parity, which can easily be derived using the normal arguments for the plain vanilla put-call parity. That is,

c_t + Ke^{−r(T−t)} = p_t + Π_t + Π_t Σ_{j=1}^{a} v_j (1 − e^{−q_j(T−t)}),

with c_t and p_t the prices at time t of the ERBCO and ERBPO respectively.
Although the initial research question was answered, some open questions remain which can serve as future research topics. Further research could be performed on combining different VRTs to possibly improve on the classical VRTs. Another possibility includes (a) determining whether different input parameters should be considered, and (b) finding a method to predict which VRT would reduce the SE the most. This could be done with Linear Discriminant Analysis, Classification and Regression Trees or other data-mining techniques applied to previous simulations.
Finally, this article showed that the Refined Monte Carlo method decreased the computational time of the valuation of the ERBO by approximately 98% (compared to the Simplistic Monte Carlo method); that the error of the estimates was smaller than for normal Monte Carlo approximately 95% of the time when using the different Variance Reduction Techniques; and that, by applying Quasi-Monte Carlo methods, the number of simulations needed to obtain the same accuracy as normal Monte Carlo decreased by approximately 99%. Hence, advanced simulation procedures are worthwhile to implement when pricing exotic type derivatives using Monte Carlo simulation.

Figure 1: A graph of the computational times for the simplistic and refined MC methods for three different number of simulations.
When α = E[X] is estimated with MC simulation, a CV, Y, may be introduced. This variable has a known mean µ_Y = E[Y]. For any given constant c, the quantity X_CV = X + c(Y − µ_Y) can be used to construct an unbiased estimator of α, i.e. E[X_CV] = α. By taking the derivative of V(X_CV) = σ²_X + c²σ²_Y + 2cσ_{X,Y} and setting it equal to 0, the optimal c, called c*, is found as c* = −σ_{X,Y}/σ²_Y. This c* is estimated as ĉ* = −σ̂_{X,Y}/σ̂²_Y, with σ̂_{X,Y} the sample covariance and σ̂²_Y the sample variance of Y.

Glasserman (2004, p. 242) suggests generating i.i.d. estimators α̂_1(B), . . ., α̂_m(B), each based on an LHS of size B, such that the estimator, with n = m · B, is α̂_LHS = (1/m) Σ_{k=1}^{m} α̂_k(B), with each α̂_k(B) generated from B uniformly distributed variables on (0, 1)^d, generated using m different LHSs. Note that E[α̂_LHS] = α, and that the SE of this estimator is the standard deviation of α̂_1(B), . . ., α̂_m(B) divided by √m.

Figure 3: RMSE for different RLDSs over increasing values of n.

Figure 4: RRMSE for different RLDSs over increasing sizes of the dimension.