“The impact of sectorial and geographical segmentation on risk-based asset allocation techniques”

In recent decades, risk-based portfolio construction techniques have enjoyed widespread diffusion in the financial community. This study evaluates how these approaches produce different results depending on whether the segmentation of the stock market investment universe is based on sectorial or geographical criteria. An empirical analysis, applied to the global equity market, is carried out using the typical and most advanced statistical and financial evaluation measures. Geographical segmentation is carried out in relation to the listing market, while sectorial segmentation is based on the productive sectors to which individual companies belong. Our comparative analysis provides substantially coherent results, demonstrating a significant preference for the sectorial criterion over the geographic one. This result can be attributed to the subdivision of the investment universe into sectorial indices characterized by greater internal coherence and better external differentiation, in addition to the lower concentration of sectorial segmentation compared to the geographical one.


INTRODUCTION
Using an empirical analysis, this study identifies the segmentation criterion most suited to the implementation of risk-based portfolio construction strategies in the "equity" asset class.
The investment process in a top-down approach starts with the division of the investment universe into different asset classes, with each asset class representing a set of financial assets characterized by high similarity in terms of their risk-return combination. The significance of the identification of the asset classes is clear, since the forecasting process of the market variables is achieved via the formulation of expectations concerning the evolution of the general economic scenario to produce the forecasts regarding the future of single-market sectors, which are specifically distributed into as many asset classes.
The composition criteria are different for equities, bonds, or money market instruments. However, in every case, the selected asset classes must comply with the following requirements: completeness, internal consistency, and external differentiation. Completeness implies that the selected asset classes should cover the entire investment universe. Internal consistency is satisfied if each asset class consists of financial instruments that are as homogeneous as possible and similarly subject to systematic risk factors. Finally, external differentiation requires that different asset classes have distinct exposures to sources of systematic risk, such as macroeconomic and political factors (Basile & Ferrari, 2016).
In the equity market, asset classes are usually defined using the sectorial or geographic criteria. Sectorial criteria are based on the assumption that securities of firms in the same industry move in a similar way, since the company's industry determines the degree of sensitivity to macroeconomic and political factors. These factors include technological advancements and the consequent changes in production processes, the competitive structure of the market, economies of scale and infrastructural needs, the evolution of consumer preferences, the dynamics of the global economic cycle, and the commodities market. Geographic criteria are based on the assumption that securities listed in the same market tend to be correlated as companies operate with the same currency, have the same basic interest rates, and are subject to the same economic policy and country risk.
This paper is divided into three sections: the first provides a review of the literature on risk-based strategies; the second explains the methodology of the analysis and the chosen sample; and the third focuses on measurement and interpretation of the results.

RISK-BASED STRATEGIES: REVIEW OF LITERATURE
For decades, and with a greater focus in the years following the 2008 financial crisis, scholars, practitioners, and institutional investors have been reevaluating approaches to asset allocation with "ancient" origins or promoting new approaches to the construction of portfolios that avoid the optimization of the trade-off between expected return and risk, and, consequently, the application of the mean-variance optimization.
The fundamental and common characteristic of these alternative approaches to portfolio construction is the removal of the expected returns from the set of inputs; thus, they are defined as μ-free strategies. The reasons underlying this choice can be traced to the literature concerning estimation risk (Best & Grauer, 1991). Chopra and Ziemba (1993) show that an investor with average risk aversion can incur losses, measured in terms of lower utility, eleven times higher in the event of a wrong estimation of the means compared to an identical estimation error of the variances. Notwithstanding the advantage derived from the simplification of estimating inputs, some studies have criticized these models because of the absence of a clearly defined objective function (Lee, 2011; Scherer, 2011).
Therefore, the implementation of risk-based strategies requires only the estimation of the risk measures (volatilities and correlations or, equivalently, the covariance matrix), as they are the only inputs relevant to the asset allocation process (Braga, 2016). In the following subsections, the most widespread risk-based techniques, such as optimal risk parity, global minimum variance, most diversified portfolio, and equal weighting, are studied in depth.

The optimal risk parity
After certain pioneering contributions by asset managers (Qian, 2005, 2006; Neurich, 2008), the theoretical foundation of risk parity was defined and formalized for the first time by Maillard, Roncalli, and Teiletche (2010). It is based on the principle of risk budgeting, which allows the portfolio construction process to be set up in terms of risk allocation rather than asset allocation (Denault, 2001). The idea behind the optimal risk parity approach is to prevent the concentration of portfolio risks in a limited number of dominant positions. Thus, the risk allocation is defined such that each component of the portfolio offers the same ex ante risk contribution, namely, an equal contribution to the formation of the overall portfolio risk.
Portfolio weights, therefore, are identified through an optimization process subject to the following constraints (Roncalli, 2014):

RC_i(x) = b_i · σ(x)  for all i,
Σ_i x_i = 1,  x_i > 0,
Σ_i b_i = 1,  b_i > 0,

where x_i refers to the portfolio weights, and b_i refers to the relative budgets of risk, predefined by the manager based on the risk exposure objectives. Both are subject to the budget constraint (i.e., they must sum to 1) and the non-negativity constraint. Moreover, the first constraint requires that the risk contribution RC_i of each i-th asset class correspond to the objectives of risk budgeting. We can observe that the first constraint implicitly does not allow weights to take a value equal to zero. Consequently, the procedure does not exclude any component of the investment universe from the portfolio. Furthermore, the allocation of negative risk budgets to one or more constituents of the portfolio would concentrate the entire risk exposure on the other components of the investment universe; thus, the relative risk budgets are subject to the non-negativity constraint.
The optimal risk parity technique is a type of risk budget portfolio in which all the components of the investment universe are expected to have the same risk contribution. Thus, the constraint on risk budgets becomes

b_i = 1/n,  i = 1, …, n,

where n is the number of asset classes into which the investment universe is subdivided.
The constrained optimization problem does not provide a closed-form solution, but a numerical solution can be derived by minimization of the following objective function:

f(x) = Σ_i Σ_j (RC_i(x) − RC_j(x))².

Given the aforementioned constraints, it is a matter of solving a constrained non-linear programming problem, which only admits a numerical solution, through an iterative process using a sequential quadratic programming algorithm (Basile & Ferrari, 2016). It should be noted that this technique, while widely used and easily implemented, is not the only one that can be applied to optimal risk parity. For further details on this topic, refer to Chaves et al. (2012).

Roncalli (2014) and Scherer (2015) note that the tangency portfolio satisfies the condition that the ratio between the marginal expected excess return and the marginal risk is identical for all the components of the portfolio, with a consequent proportional relation between the expected excess returns and marginal risks of the constituents of the tangency portfolio. This assertion can be formalized as follows:

(μ_i − r_f) / (∂σ(x)/∂x_i) = (μ_j − r_f) / (∂σ(x)/∂x_j)  for all i, j.

Therefore, the impact on the return of an increase in the weight of a portfolio component is counterbalanced by the additional risk from the extension of its position in the same way across asset classes. Evidently, in the case of the tangency portfolio, a change in the allocation cannot provide an enhancement in the portfolio risk-adjusted performance. Thus, the ratio in the previous equation is also the Sharpe ratio of the tangency portfolio, which every investor should prefer. This portfolio, consequently, also verifies the following equation:

μ_i − r_f = SR(x) · ∂σ(x)/∂x_i,

namely, the excess return of each constituent, implied by the allocations of the tangency portfolio, is proportional to its marginal risk.
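The iterative solution described above can be sketched in a few lines. The example below is a hypothetical illustration (not the authors' code): it uses scipy's SLSQP solver to minimize the dispersion of the risk contributions, with strictly positive bounds standing in for the x_i > 0 constraint.

```python
import numpy as np
from scipy.optimize import minimize

def risk_parity_weights(cov):
    """Equal-risk-contribution weights via sequential quadratic programming."""
    n = cov.shape[0]

    def objective(x):
        # RC_i is proportional to x_i * (Sigma x)_i; we penalize the sum of
        # squared pairwise differences, normalized by the portfolio variance.
        rc = x * (cov @ x)
        return np.sum((rc[:, None] - rc[None, :]) ** 2) / (x @ cov @ x)

    return minimize(
        objective,
        np.full(n, 1.0 / n),                  # equal-weight starting point
        method="SLSQP",
        bounds=[(1e-6, 1.0)] * n,             # x_i > 0: no component excluded
        constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],  # budget
    ).x
```

For two uncorrelated assets with volatilities of 20% and 10%, the routine recovers the textbook result that the weights are inversely proportional to volatility.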

The global minimum variance portfolio
The objective of the global minimum variance approach is to minimize the total portfolio risk. Among the risk-based approaches, this is the only one that selects a portfolio lying on the ex-ante efficient frontier (Markowitz, 1952; Clark et al., 2011). In fact, the results of the optimization process are the portfolio weights that minimize the portfolio variance, with the variance formula acting as the objective function. Therefore, the sole inputs of the process are the elements of the covariance matrix. The exclusion of the expected returns from the portfolio construction process justifies the inclusion of the global minimum variance portfolio in the class of risk-based strategies.
The constraints on the optimization process remain the budget and the no-short-selling constraints. In this case, the quadratic programming problem assumes the following matrix formulation:

min_w  w'Σw   subject to  1'w = 1,  w ≥ 0,

where Σ is the covariance matrix and w is the vector of portfolio weights. The solution to the problem admits the existence of weights equal to zero, so the global minimum variance approach can exclude some of the constituents of the investment universe from the portfolio. The minimization of the total risk is achieved when all the components included in the portfolio have equal marginal risks, that is, when it is verified that:

∂σ(w)/∂w_i = ∂σ(w)/∂w_j  for all i, j included in the portfolio.
Given the equality of the marginal risks, to have an equal risk contribution, each portfolio constituent should have the same weight. However, as the weights are usually different from each other, so are the relative risk contributions. Additionally, the equality of the marginal risks means that the percentage of risk contribution (i.e., the percentage of the risk derived from the exposure in each component) corresponds to the respective portfolio weight (Braga, 2016).
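A minimal numerical sketch of the long-only minimum variance problem, again with scipy's SLSQP (illustrative helper name; the paper does not publish code):

```python
import numpy as np
from scipy.optimize import minimize

def gmv_weights(cov):
    """Long-only global minimum variance: min w' Sigma w s.t. 1'w = 1, w >= 0."""
    n = cov.shape[0]
    return minimize(
        lambda w: w @ cov @ w,                 # portfolio variance (objective)
        np.full(n, 1.0 / n),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,               # no short selling; zeros allowed
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    ).x
```

With uncorrelated assets, the solution weights each one in inverse proportion to its variance.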

The most diversified portfolio
In the most diversified portfolio approach (Choueifaty & Coignard, 2008), asset allocation is based on maximizing the degree of diversification, which is measured by the diversification ratio. This measure is calculated as the ratio between the weighted average of the standard deviations of the portfolio constituents and the standard deviation of the portfolio itself. The matrix formulation of the diversification ratio is as follows:

DR(w) = (w'σ) / √(w'Σw),

where σ is the vector of the estimated standard deviations of the constituents' returns, Σ is the covariance matrix, and w is the portfolio weight vector.
The numerator of the diversification ratio corresponds to the standard deviation of the portfolio if all the constituents have a correlation of +1. In this case, the standard deviation is at the highest possible level. Consequently, for a long-only portfolio, the minimum value that the diversification ratio can take is equal to 1, if all correlations are perfectly positive (alternatively, in the case of a portfolio with a single constituent).
The objective of the most diversified portfolio approach is to maximize the diversification ratio; therefore, this index acts as the objective function in the corresponding constrained optimization problem, whose matrix representation is as follows:

max_w  (w'σ) / √(w'Σw)   subject to  1'w = 1,  w ≥ 0.

Also in this case, we find the budget and no-short-selling constraints common to the other approaches. The solution is represented by the portfolio weight vector and is derived by an iterative numerical process. This optimization does not require all constituents of the investment universe to be included in the portfolio; therefore, the procedure can set the weights of some of them equal to zero. Furthermore, the strategy does not use risk budgeting tools; thus, the most diversified portfolio approach does not guarantee ex ante a balanced investment in terms of risk or asset allocation.
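The maximization can be sketched numerically in the same way as the other optimizations (hypothetical helper; scipy's SLSQP on the negated diversification ratio):

```python
import numpy as np
from scipy.optimize import minimize

def mdp_weights(cov):
    """Most diversified portfolio: maximize DR = (w' sigma) / sqrt(w' Sigma w)."""
    n = cov.shape[0]
    vol = np.sqrt(np.diag(cov))               # constituents' standard deviations

    def neg_dr(w):
        return -(w @ vol) / np.sqrt(w @ cov @ w)

    return minimize(
        neg_dr,
        np.full(n, 1.0 / n),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,              # long-only; zero weights allowed
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    ).x
```

For two uncorrelated assets, the maximum diversification ratio is √2, attained when each weight is inversely proportional to the asset's volatility.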

The equal-weighted portfolio
The equal-weighted approach, or equal weighting, is a simple heuristic strategy, which assigns the same weight, equal to 1/N, to each of the N components of the investment universe. A similarity with the optimal risk parity strategy can be noted, although the latter is more sophisticated. To illustrate, the equal-weighted approach applies the 1/N rule to asset allocation, whereas the optimal risk parity approach applies it to risk allocation. However, another aspect that unites the two approaches is the selection of assets. Unlike the global minimum variance and maximum diversification approaches, the equal weighting and optimal risk parity approaches guarantee that the entire investment universe is always included in the portfolio selected by the investor (Clark et al., 2013).
Therefore, the equal weighting approach does not require any statistical analysis of returns or any estimation; however, despite being very simplified, it is still considered a risk-based strategy, as the allocation mechanism seeks a strong diversification of the risks.

METHODOLOGY OF THE EMPIRICAL ANALYSIS
The evaluation of the efficiency of the different asset class breakdown techniques in the risk-based portfolio construction models requires the measurement of their out-of-sample performance. These data can be obtained by implementing a rolling-window procedure, which allows the simulation of the behavior of a portfolio constructed by an investor who performs portfolio optimization based on the available data at the time of the allocation, measures the statistical characteristics of the portfolio, and rebalances its weights according to predefined techniques.
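The rolling-window procedure can be sketched as follows. The 60-month estimation window and quarterly rebalancing mirror the setup described later in the section; the function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def rolling_backtest(returns, weight_fn, window=60, rebalance_every=3):
    """Out-of-sample simulation: at each rebalancing date, estimate the
    covariance matrix on the trailing `window` months, recompute the weights
    with `weight_fn`, and hold them until the next rebalancing."""
    T, _ = returns.shape
    out_of_sample = []
    w = None
    for t in range(window, T):
        if w is None or (t - window) % rebalance_every == 0:
            cov = np.cov(returns[t - window:t].T)
            w = weight_fn(cov)
        out_of_sample.append(returns[t] @ w)   # realized portfolio return
    return np.array(out_of_sample)
```

With 240 months of data and a 60-month window, this yields 180 monthly out-of-sample observations.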
In the subsequent empirical analyses, the optimization processes are carried out using the returns on equity investments in excess of the risk-free rate. Since we intend to carry out the empirical analysis from the perspective of a Eurozone investor, the benchmarks used are all denominated in euro. Accordingly, the 12-month Euribor rate is assumed as the risk-free rate.

Descriptive statistics of the data sample
Regarding the alternative criteria for segmentation of the global stock market, represented by the MSCI All Country World Index (ACWI) benchmark, the sectorial approach identifies eleven sectors, while the geographic approach designates six main geographical areas. We choose not to use a higher number of geographical segments for two reasons. The first is to avoid assigning high weights to marginal areas in the actual composition of the global market. The second is to limit the number of parameters to be estimated and, therefore, the estimation risk (Basile & Ferrari, 2016). Table 1 shows the actual weights of the MSCI ACWI global benchmark in October 2018, broken down sectorially and geographically. From the comparison between the two methodologies, we can observe a lower degree of concentration under the first criterion with respect to the second, an element that, as we shall see, is decisive in ensuring the superior efficiency of the risk-based portfolios using sector indices.
The indices used are total return, gross of taxes, and free-float weighted. For each index, the sample comprises a time series of 240 monthly returns from November 1998 to October 2018. We choose a long sample so as to include different market phases and the occurrence of extreme events: the first five years of data provide the initial estimation window, leaving an out-of-sample time span of 15 years for evaluating the performances of the risk-based portfolios.
The first four sample moments of the entire dataset are shown in Table 2. Overall, there is only one case of positive skewness, whereas all the other stock indices are characterized by negative skewness; moreover, all the empirical distributions are leptokurtic.
Regarding the sectorial and the geographic criteria, there are no negative correlations between the excess returns (Tables 3 and 4), stressing that the stock markets tend to move in the same direction when shocks caused by global risk factors occur.

Tests of deviations from normality for the data sample
In light of the sample moments shown in Table 2, it is necessary to test the deviations from normality of the time series. To this end, the following aspects are analyzed: normality, autocorrelation, heteroscedasticity, and stationarity of distributions, with a significance level of 5% being selected. The results of the statistical tests are summarized in Table 5.

The normality hypothesis is verified by two tests. The first, the Jarque-Bera test (Jarque & Bera, 1987), is more suitable when used on large samples; when applied to a not very large sample, as in this case, it is preferable to combine it with the Lilliefors normality test (Lilliefors, 1967). The two tests produce similar results. According to the Jarque-Bera test, the hypothesis of Gaussian distribution is accepted in only two cases, while the Lilliefors test accepts this hypothesis in only four cases. The conclusion of non-normality holds for both the sectorial and geographic segmentation approaches.
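As a quick illustration of the normality check (on synthetic data; the paper applies it to the index excess returns), scipy's implementation of the Jarque-Bera test can be used:

```python
import numpy as np
from scipy.stats import jarque_bera

# Hypothetical example: a clearly non-Gaussian (exponential) sample with
# n = 240, the same length as the monthly series in the study.
rng = np.random.default_rng(42)
sample = rng.exponential(size=240)
stat, p_value = jarque_bera(sample)
reject_normality = p_value < 0.05            # 5% significance level
```

The test combines sample skewness and excess kurtosis, so a markedly skewed distribution produces a large statistic and a p-value near zero.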
The presence of autocorrelation is verified by the Ljung-Box test (Box et al., 2015), which is used to verify the hypothesis that the correlation coefficients between a variable and its first m lags are all null. The statistic has an asymptotic Chi-square distribution with m degrees of freedom, and the rejection area corresponds to the right tail of the distribution. The number of lags is determined following Tsay (2000), who suggests using a number close to the natural logarithm of the number of observations in the time series. As the sample size in this case is 240, the test is performed considering five lags.
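The Q statistic is simple enough to compute directly. Below is a minimal numpy/scipy sketch of the Ljung-Box test with m lags (illustrative; statistical packages provide equivalent routines):

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(x, m=5):
    """Ljung-Box test: Q = n(n+2) * sum_{k=1..m} rho_k^2 / (n-k), where rho_k
    is the lag-k sample autocorrelation; Q is asymptotically Chi-square(m)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    rho = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in range(1, m + 1)])
    q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, m + 1)))
    return q, chi2.sf(q, df=m)               # right-tail p-value
```

A strongly autocorrelated series yields a large Q and a p-value near zero, rejecting the hypothesis of absence of autocorrelation.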
The Ljung-Box test (Table 5) reports ten cases in which the hypothesis of absence of autocorrelation is accepted and seven cases in which it is rejected, across both the sectorial and geographic criteria, with a higher incidence of rejections under the latter.
Heteroscedasticity is verified through two different tests: the Engle's ARCH test (Engle, 1982) and the Ljung-Box test on the squared residuals. Both require the definition of the number of lags. As for the previous Ljung-Box test, this parameter is set equal to five.
As reported in Table 5, the Engle's ARCH test accepts the hypothesis of absence of heteroscedasticity in only three cases, while the Ljung-Box test on the squared residuals accepts this hypothesis in only two cases. Therefore, these empirical analyses indicate that volatility is not constant over time, but tends to change depending on the past values of the variable.
The time series analysis further requires verifying stationarity, a necessary condition to assume the constancy of the estimated distribution parameters.
For this purpose, we use the augmented Dickey-Fuller (ADF) test, whose null hypothesis is that the stochastic process has a unit root and, therefore, does not satisfy the stationarity condition (Dickey & Fuller, 1979). Also for this test, five lags are taken into account.
As seen in Table 5, in all cases, the hypothesis of a unit root is rejected; therefore, the time series of each index can be considered stationary.
The statistical analyses presented here highlight severe deviations from the hypotheses formulated in the portfolio theory and the capital asset pricing model for the construction of efficient portfolios. Consequently, risk-based allocation techniques appear more suitable, as they are more parsimonious in terms of estimates necessary for their implementation.

The implementation of the empirical analysis
The strategies of portfolio construction subjected to analysis are the following:
• the optimal risk parity, using the standard deviation as the measure of risk;
• the global minimum variance portfolio;
• the most diversified portfolio;
• the equal-weighted portfolio;
• the optimal risk parity, using the expected shortfall at the 95% confidence level as the measure of risk;
• the minimization of the expected shortfall (95%);
• the maximization of the Sharpe ratio, included for comparison with risk-based techniques.
Each strategy is implemented using both the sectorial and the geographic criteria to determine which of the two is preferable.
Given that risk decomposition techniques can be applied using different risk measures, empirical analyses are also carried out with the objective of comparing risk-based strategies that differ from each other in this element, with the selected risk measures being the standard deviation and the expected shortfall (95%). The first is chosen because it is used in traditional asset allocation models, while the use of the expected shortfall is consistent with the presence of significant deviations from the Gaussian distribution, as previously verified empirically.
The sample estimates of the standard deviations are based on the 60 most recent observations, according to a rolling-window procedure. Similarly, the expected shortfall is estimated by the method of historical simulations using the 60 most recent observations and following a rolling-window procedure.
The budget and the no short-selling constraints are imposed on the optimization processes. We decided not to use additional constraints, both in terms of portfolio allocation and risk allocation, as their presence would attenuate the distinctive features of the different approaches, making them more similar to each other, which would lead to significant difficulties in the comparative assessment.
Calendar rebalancing, with a quarterly frequency, is chosen for the empirical analysis. Therefore, given the sample length of 240 months and the 60 months used in sample estimates, 60 quarterly portfolios are processed for each strategy examined with the rolling-window procedure, for a total of 180 monthly out-of-sample observations. At each quarterly rebalancing date, the portfolio optimization process is carried out (based on the data provided by the rolling-window procedure) for each portfolio construction strategy, and the previous portfolio weights are modified.
The quantification of the transaction costs has an important role, since some strategies are less stable than others. A lower stability means higher rebalancing costs, and therefore, lower net returns. In this case, a uniform cost of 0.2% is defined for each component of the portfolio, since the financial instruments that allow the replication of the return of each asset class are characterized by a similar level of liquidity.
The formula used to represent the value of a portfolio considering the costs incurred to carry out each rebalancing is as follows:

v_i^reb = v_i · (1 − c · Σ_j |x_i,j^reb − x_i,j|),

where v_i represents the money value of the i-th portfolio, c is the transaction cost, and x_i and x_i^reb represent the vectors of the weights of the i-th portfolio before and after the rebalancing, respectively.
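Under the proportional-cost assumption, the value after a rebalancing can be computed as in this sketch (hypothetical helper name; c = 0.2% as in the text):

```python
import numpy as np

def rebalanced_value(v, x_before, x_reb, c=0.002):
    """Portfolio money value net of transaction costs: each unit of turnover
    generated by the rebalancing is charged the proportional cost c."""
    turnover = np.abs(np.asarray(x_reb) - np.asarray(x_before)).sum()
    return v * (1.0 - c * turnover)
```

Shifting 10% of the portfolio from one asset class to another generates a turnover of 0.2 and therefore a cost of 4 basis points of portfolio value.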

Descriptive statistics of the out-of-sample portfolios
Firstly, we perform an analysis of the sample moments. Table 6 summarizes the out-of-sample first four moments of the excess returns of the different strategies. We observe that the strategies are suitable for the construction of portfolios with performance objectives that are also significantly higher than the risk-free rate; thus, the investor's choice is not limited to defensive portfolios only. In general, we observe that all the distributions present negative skewness and leptokurtosis, with the results being similar to those found in the analyses of the asset class benchmarks.

Tests of deviations from normality for the out-of-sample portfolios

The hypothesis of normality of the excess returns of risk-based portfolios is tested using the same procedures and methods set for the individual indices. The results are reported in Table 7.
According to the Jarque-Bera test, we observe that in no case is the hypothesis of Gaussian distribution accepted, while with the Lilliefors test there is only one portfolio with a p-value higher than the significance level of 5%, thus, the Gaussian distribution hypothesis of excess returns is accepted only in this case.
The presence of autocorrelation is verified using the Ljung-Box test, whose results indicate six cases in which the hypothesis of absence of autocorrelation is accepted, and eight cases in which the hypothesis is rejected.
The presence of heteroscedasticity is verified using the Engle's ARCH test and the Ljung-Box test on the squared residuals. The results, summarized in Table 7, indicate that homoscedasticity is rejected in every portfolio.
The verification of stationarity is carried out using the Augmented Dickey-Fuller (ADF) test. In all cases, the hypothesis of unit root is rejected; thus, all the time series are stationary.
In general, all the hypothesis tests carried out on the time series of excess returns validate the characteristics observed in benchmarks.

Comparative analysis of the risk-based strategies
The identification of the segmentation technique most suitable for risk-based portfolios requires the evaluation of different elements such as the portfolio risk, portfolio efficiency, and the higher moments of the distribution of excess returns.
The first element considered is the portfolio risk. It is a highly distinctive characteristic of risk-based strategies, which are based above all on it. Risk is assessed using the measures previously considered in the portfolio optimization process, namely, the standard deviation and the expected shortfall (95%).
The second element considered is efficiency, as the identification of the portfolio with the best risk-adjusted performance is the investor's primary objective. The evaluation of portfolios' efficiency is carried out using three metrics, namely, the Sharpe ratio (Sharpe, 1994), the Sortino ratio (Sortino & Van der Meer, 1991), and the conditional Sharpe ratio at 95% (Bacon, 2008) (i.e., the ratio between the mean excess return and the expected shortfall at the 95% confidence level).
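The three efficiency metrics can be expressed compactly. The sketch below assumes monthly excess returns with the target rate already netted out; the function names and the tail-size ceiling convention are illustrative choices, not the paper's code:

```python
import numpy as np

def sharpe_ratio(excess):
    """Mean excess return per unit of total risk (Sharpe, 1994)."""
    return excess.mean() / excess.std(ddof=1)

def sortino_ratio(excess):
    """Mean excess return per unit of downside deviation; the target rate is
    the risk-free rate, so the input is already net of the target."""
    downside = np.sqrt(np.mean(np.minimum(excess, 0.0) ** 2))
    return excess.mean() / downside

def conditional_sharpe_ratio(excess, alpha=0.95):
    """Mean excess return divided by the historical expected shortfall."""
    r = np.sort(excess)
    k = max(1, int(np.ceil((1 - alpha) * len(r))))   # tail observations
    es = -r[:k].mean()
    return excess.mean() / es
```

All three ratios share the same numerator and differ only in the risk measure placed in the denominator, which is why their rankings often agree, as observed below.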
The third element is represented by the higher moments of the distributions. As the tests carried out previously on the time series reject the hypothesis of normality, it is necessary to consider skewness and kurtosis.
It is possible to make some observations based on the sample standard deviation (Table 6), measured out-of-sample on a monthly basis. Firstly, the sectorial criterion is significantly superior to the geographical criterion, as shown by the comparative results for each strategy that uses it, with the exception of equal weighting. Secondly, the traditional strategy of maximizing the Sharpe ratio produces unsatisfactory results.

Table 8 reports the expected shortfall (95%) measured for the different strategies. In line with the observations on the standard deviation, the sectorial criterion appears to be systematically preferable to the geographic criterion across all allocation techniques. In particular, the two techniques of segmentation of the investable equity universe produce antithetical results, especially when applied to the minimization of the expected shortfall (95%). In fact, the portfolio based on sector indices has a lower level of ex post risk than almost every other strategy, with the exception of the global minimum variance. On the contrary, the geographic strategy of minimization of the expected shortfall (95%) achieves an extremely negative result, among the worst in the sample, demonstrating the inconsistency of the ex post asset allocation with respect to the ex ante measured inputs.

According to the traditional capital asset pricing model, the measure that enables the identification of the most efficient portfolio is the Sharpe ratio. Thus, the efficiency analysis begins with this index, whose values are reported in Table 8. Also in this case, the sectorial criterion is significantly preferable to the geographic one. Furthermore, the traditional strategy, which selects the portfolio with the maximum ex ante Sharpe ratio, produces a positive ex post result; however, the most efficient portfolio is the one constructed by the global minimum variance strategy using sector indices.
The non-normality of distributions makes the Sharpe ratio a suboptimal indicator of portfolio efficiency. Therefore, the Sortino ratio is also used, as it is calculated as the ratio between the mean excess return and the downside deviation. Additionally, the Sortino ratio represents the extra yield compared to the objective rate of return per unit of asymmetric risk (i.e., of downside risk). In this analysis, the target rate is set equal to the risk-free rate, considering the risk-free return as the opportunity cost.
The values of the Sortino ratio are reported in Table 8. Despite the different measurement methodology, its ranking replicates the one achieved previously with the Sharpe ratio. The previous considerations are also verified, indicating that the sectorial criterion is significantly preferable to the geographic one.
The conditional Sharpe ratio (95%) aims to consider investors' preference in preventing extreme negative events (i.e., "tail risk"). The results presented in Table 8 show a very strong similarity with those drawn from the other two risk-adjusted performance measures. Also in this third case, we can verify the considerations discussed above and the superiority of the sector segmentation to the geographic one.
The preceding empirical analyses show that both the time series of the excess returns of the benchmarks and of the risk-based strategies are subject to negative skewness and leptokurtosis.
The values of the skewness are shown in Table 7.
The fact that this measure is not considered in the portfolio optimization processes causes a certain degree of randomness in the results; therefore, we cannot infer the dominance of a segmentation criterion.
The investors' interest in risk-based strategies is due to their conservative nature and their focus on the risk alone. Hence, kurtosis can be particularly important, given its effect on determining the probability of extreme events. As for the skewness, it must be considered that the parameter is not included in the inputs of the optimization processes.
The levels of kurtosis are presented in Table 7. In this case, the best result is produced by the strategy of minimization of the expected shortfall (95%) using sector indices.

CONCLUSION
Regarding the two alternative approaches for segmentation of the stock market, our comparative analysis provides substantially coherent results, demonstrating a significant preference for the sectorial criterion compared to the geographic one. This result can be attributed to the subdivision of the investment universe into sectorial indices characterized by greater internal coherence and better external differentiation, in addition to the lower concentration of sectorial segmentation compared to the geographical one. In fact, this last characteristic ensures that the outcome of the risk-based strategies is not strongly linked to the relative performance of the markets characterized by a greater weight, as happens in the geographic decomposition.
Risk-based strategies aim to provide a solution to the critical elements of traditional asset allocation models. Based on the results of the empirical analysis, we observe that the strategies based on the minimization of a risk measure show overall superior results when applied to sector indices. In particular, the strategy that has shown the best results is the global minimum variance with sectorial segmentation, which particularly benefits from the considerable capacity for diversification inherent in this decomposition technique.
Conversely, in all cases, strategies based on the optimal risk parity do not rank in the top positions of the various evaluation methods employed. Nevertheless, for these techniques, the results produced using the sector criterion dominate those produced using the geographic alternative.
This empirical evidence can be interpreted starting from the theoretical foundations on which the optimal risk parity approach is based. The approach is designed under the assumption of a high estimation error in the parameters: it imposes tight constraints on the portfolio construction process, since all the components of the investment universe must have the same ex ante percentage risk contribution, and therefore no component can be excluded from the asset allocation. The constraints imposed have two purposes: the first is to avoid the concentration of risk in a limited number of assets, and the second is the containment of transaction costs due to rebalancing. If, as in the present case, the estimation error is not sufficiently severe, these constraints make it impossible to reach an optimal allocation in the mean-variance space, empirically verifying the criticisms formulated by Lee (2011) and Scherer (2011).