• Title/Summary/Keyword: optimization problem

Efficient Utilization of Private Resources for the National Defense - Focused on maintenance, supply, transportation, training & education - (국방분야 민간자원의 효율적 활용방안 - 정비, 보급, 수송, 교육훈련분야를 중심으로 -)

  • Park, Kyun-Yong
    • Journal of National Security and Military Science
    • /
    • s.9
    • /
    • pp.313-340
    • /
    • 2011
  • The National Defense Reform plan "National Defense Reform 2020", which had been repeatedly disputed and revised by the government, went through several rounds of complementary measures after North Korea's sinking of the Republic of Korea (ROK) naval vessel Cheonan. The final outcome of this reform, also known as the 307 Plan, was announced on 8 March. The reformed plan reduces the number of units and military personnel through military structure reform. However, for the National Defense Reform to succeed, the use of privatized civilian resources is essential. Accordingly, the ROK Ministry of National Defense (MND) has selected the use of privatized resources as one of the core agenda items for National Defense Reform management, and under this agenda the MND plans to further expand the use of private resources. In particular, the MND plans to minimize the personnel assigned to non-combat areas and redeploy the freed-up personnel where they are most needed. To this end, the MND has initiated the necessary analyses across the whole national defense sector, drawing on the various projects and acquisition requests of each service and of civilian research institutions. However, for efficient management of privatized civilian resources, those private resources that can actually deliver efficiency gains must first be identified, and second, the legislation governing the use of private resources needs continuous, systematic reinforcement. Furthermore, the possibility of labor disputes arising from expanded privatization must be considered. Therefore, full legal and institutional complementary measures are required in every area where issues could affect the combat readiness posture.
Another problem is a large increase in operating expenses, because the reduction of standing forces only cuts the number of enlisted soldiers and fills those billets with more expensive commissioned officers. To overcome this, the number of positions reserved for active officers should be reduced and filled with military reserve personnel who have prior working experience in the related positions (thereby guaranteeing active officers re-employment after completing active service). This would in turn maintain the combat readiness posture while reducing the new financial burden. The areas of maintenance, supply, transportation, and training & education, which benefit most from privatized resources, should be transformed from military-managed to civilian-managed systems. Maintenance can be handled by integrating it into a National Maintenance Support System; to do so, maintenance units suitable for privatization must be identified, which will reduce the military personnel required, improve service quality, and prevent duplicate investment. For supply, an Integrated Military Logistics Center should be established in connection with national and civilian logistics systems, reducing the logistics time frame as well as the required personnel and equipment. For transportation, the renting and leasing system should be further expanded by integrating the National Defense Transportation Information System, reducing the required personnel and budget. Finally, for training and education, retired military personnel can be employed as training instructors, and the number of civilian professors at the military academies can be expanded in connection with the National Defense Reform.
In other words, privatized civilian resources need to be managed and used more actively for National Defense Reform.

Dynamic Economic Load Dispatch Problem Applying Valve-Point Balance and Swap Optimization Method (밸브지점 균형과 교환 최적화 방법을 적용한 동적경제급전문제)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.253-262
    • /
    • 2016
  • This paper proposes a balance-swap method for the dynamic economic load dispatch problem. On the premise that all generators should be operated at valve points, the proposed algorithm initially sets each generator to its maximum output, $P_i{\leftarrow}P_i^{max}$. For the generator $i$ with the maximum unit cost $\max c_i$, where $c_i=\frac{F(P_i)-F(P_{iv_k})}{P_i-P_{iv_k}}$ is the cost incurred when its output is reduced to the valve point $v_k$, the algorithm reduces $i$'s output to the valve-point level $P_{iv_k}$. While ${\Sigma}P_i-P_d>0$, it reduces the output of the generator with the maximum $c_i=F(P_i)-F(P_i-1)$ by one unit, $P_i{\leftarrow}P_i-1$, so as to restore the balance ${\Sigma}P_i=P_d$. The algorithm subsequently optimizes with an adult-step method, in which power in the range $\min\{\max(P_i-P_i^{min}),\;\max(P_i^{max}-P_i)\}>{\alpha}{\geq}10$ is shifted in steps of 10; a baby-step method, in which power in the range $10>{\alpha}{\geq}1$ is shifted in steps of 1; and a swap method that, whenever $\max[F(P_i)-F(P_i-{\alpha})]>\min[F(P_j+{\alpha})-F(P_j)]$ for $i{\neq}j$, swaps power as $P_i=P_i-{\alpha}$, $P_j=P_j+{\alpha}$. It finally executes a fine-grained swap process for ${\alpha}=0.1, 0.01, 0.001, 0.0001$. When applied to various experimental cases of the dynamic economic load dispatch problem, the proposed algorithm proved to maximize economic benefits by significantly reducing the optimal operating cost relative to extant heuristic algorithms.
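The balance-then-swap idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the quadratic cost functions, limits, and demand are invented for the example, and the valve-point stage and adult/baby step sizes are collapsed into a single unit-reduction balance phase followed by swaps at decreasing step sizes.

```python
# Minimal sketch of a balance-then-swap dispatch heuristic.
# Cost coefficients and limits below are illustrative, not from the paper.

def make_cost(a, b, c):
    return lambda p: a + b * p + c * p * p

def balance_swap(costs, p_min, p_max, demand, alphas=(1.0, 0.1, 0.01)):
    # Start every generator at its maximum output.
    p = list(p_max)
    # Balance phase: repeatedly trim the generator whose last unit
    # is most expensive until total output matches demand.
    step = 1.0
    while sum(p) - demand > 1e-9:
        i = max(
            (k for k in range(len(p)) if p[k] - step >= p_min[k]),
            key=lambda k: costs[k](p[k]) - costs[k](p[k] - step),
        )
        p[i] -= min(step, sum(p) - demand)
    # Swap phase: move alpha units from the generator that saves the most
    # to the one that pays the least, as long as total cost drops.
    for alpha in alphas:
        improved = True
        while improved:
            improved = False
            for i in range(len(p)):
                for j in range(len(p)):
                    if i == j or p[i] - alpha < p_min[i] or p[j] + alpha > p_max[j]:
                        continue
                    save = costs[i](p[i]) - costs[i](p[i] - alpha)
                    pay = costs[j](p[j] + alpha) - costs[j](p[j])
                    if save > pay + 1e-12:
                        p[i] -= alpha
                        p[j] += alpha
                        improved = True
    return p

costs = [make_cost(100, 2.0, 0.02), make_cost(120, 1.5, 0.04)]
p = balance_swap(costs, p_min=[10, 10], p_max=[100, 100], demand=150)
print(round(sum(p), 6))  # total output equals the demand
```

For these two quadratic generators the swaps drive the marginal costs together, which is exactly the equal-incremental-cost condition a dispatch optimum must satisfy.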

Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.117-137
    • /
    • 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on the basis of data reflecting current road and traffic conditions as well as distance-based data between locations. Thanks to the rapid development of ubiquitous computing, tremendous amounts of context data have become readily available, making vehicle route planning easier than ever. Previous research on optimizing vehicle route planning focused merely on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to resolve optimal routing problems in distance-based route planning, because such information has little impact on routing until a complex traffic situation arises. Further, it was not easy to take full account of traffic contexts in optimal routing because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of contexts reflecting moving-cost data has emerged. Hence, this research proposes a framework designed to resolve the optimal route planning problem by taking full account of additional moving costs such as road traffic cost and weather cost. Recent technological development, particularly in the ubiquitous computing environment, has facilitated the collection of such data. The framework is based on the contexts of time, traffic, and environment, and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity, and estimate the optimal moving cost with dynamic programming that accounts for the context cost as contexts vary.
Second, the velocity reduction rate is applied to find the optimal route (shortest path) using context data on the current traffic condition. The velocity reduction rate refers to the degree to which a moving vehicle's velocity drops under given road and traffic contexts, derived from statistical or experimental data. Knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it to the previously used distance-based shortest path. A vehicle's optimal route may change because its velocity varies with unexpected but potential dynamic situations on the road. This study includes the context variables 'road congestion', 'work', 'accident', and 'weather', which can alter the traffic condition and thereby a moving vehicle's velocity. Since all of these context variables except 'weather' relate to road conditions, the relevant data were provided by the Korea Expressway Corporation; the 'weather' data were obtained from the Korea Meteorological Administration. The aware contexts are those classified as causing a reduction in vehicle velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduce the velocity reduction rate into the calculation of a vehicle's velocity, reflecting composite contexts when one event coincides with another. We then propose a context-based optimal route (shortest path) algorithm based on dynamic programming. The algorithm is composed of three steps. In the first, initialization step, the departure and destination locations are given and the path step is initialized to 0.
In the second step, moving costs between locations on the path, taking composite contexts into account, are estimated using the velocity reduction rate per context as the path step increases. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the proposed research model, we designed a framework to account for context awareness, moving-cost estimation (for both composite and single contexts), and an optimal route (shortest path) algorithm based on dynamic programming. Through illustrative experimentation using the Wilcoxon signed rank test, we show that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solution (shortest path) from distance-based route planning may not be optimal in real situations, because road conditions are highly dynamic and unpredictable while affecting most vehicles' moving costs. Although more information is needed for a more accurate estimation of moving vehicles' costs, this study remains viable for applications that reduce moving costs by effective route planning. For instance, it could support deliverers' decision making, enhancing their decision satisfaction when they meet unpredictable dynamic situations on the road. Overall, we conclude that taking contexts into account as part of the cost is a meaningful and sensible approach to resolving the optimal route problem.
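The core mechanism described above, inflating each edge's travel time by a context-dependent velocity reduction rate and then running a shortest-path search, can be sketched as follows. The reduction rates, graph, and multiplicative composition of overlapping contexts are illustrative assumptions, not the paper's calibrated values, and Dijkstra's algorithm stands in for the paper's dynamic-programming formulation.

```python
import heapq

# Hypothetical velocity reduction rates per context (not the paper's values).
REDUCTION = {"congestion": 0.4, "accident": 0.6, "rain": 0.2}

def edge_cost(base_minutes, contexts):
    # Compose reductions multiplicatively when events overlap,
    # then inflate travel time by the lost velocity.
    speed_factor = 1.0
    for c in contexts:
        speed_factor *= 1.0 - REDUCTION[c]
    return base_minutes / speed_factor

def shortest_path(graph, src, dst):
    # Dijkstra over context-adjusted costs; returns (cost, path).
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base, ctx in graph.get(u, []):
            nd = d + edge_cost(base, ctx)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

graph = {
    "A": [("B", 10, []), ("C", 12, [])],
    "B": [("D", 10, ["congestion", "rain"])],  # 10 / (0.6 * 0.8) ≈ 20.8 min
    "C": [("D", 12, [])],
}
cost, path = shortest_path(graph, "A", "D")
print(path)  # context-aware plan avoids the congested edge: ['A', 'C', 'D']
```

A purely distance-based planner would pick A-B-D (20 vs. 24 minutes of base time); once the congestion and rain contexts are priced in, A-C-D becomes cheaper, which is the paper's central point.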

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have shown prominent applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling.
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. Pooling layers simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, encompassing vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, making learning in early layers extremely slow. The problem is actually worse in RNNs, since gradients are propagated backward not just through layers but also through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
It has been possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
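The chain-rule computation that backpropagation performs can be made concrete on the smallest possible "network". The two-weight scalar model, the input, and the target below are invented for illustration; the point is that the analytic gradient obtained by propagating the error backward matches a finite-difference estimate.

```python
import math

# Backpropagation on a two-layer scalar "network" y = w2 * sigmoid(w1 * x).
# All numbers are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)
    return h, w2 * h

def gradients(x, t, w1, w2):
    # Backward pass: chain rule from the squared error E = (y - t)^2
    # down to each weight.
    h, y = forward(x, w1, w2)
    dy = 2.0 * (y - t)             # dE/dy
    dw2 = dy * h                   # dE/dw2
    dh = dy * w2                   # dE/dh
    dw1 = dh * h * (1.0 - h) * x   # sigmoid'(z) = h * (1 - h)
    return dw1, dw2

# Check the analytic gradient against a finite-difference estimate.
x, t, w1, w2 = 0.5, 1.0, 0.3, -0.7
g1, g2 = gradients(x, t, w1, w2)
eps = 1e-6
num_g1 = ((forward(x, w1 + eps, w2)[1] - t) ** 2
          - (forward(x, w1 - eps, w2)[1] - t) ** 2) / (2 * eps)
print(abs(g1 - num_g1) < 1e-6)  # True: backprop matches the numerical gradient
```

A gradient-descent trainer then simply repeats `w1 -= lr * dw1; w2 -= lr * dw2`, which is the update loop the abstract describes.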

Optimum Population in Korea : An Economic Perspective (한국의 적정인구: 경제학적 관점)

  • Koo, Sung-Yeal
    • Korea journal of population studies
    • /
    • v.28 no.2
    • /
    • pp.1-32
    • /
    • 2005
  • The optimum population of a society or country can be defined as 'the population growth path that maximizes the welfare level of the society over all generations, present and future, among the paths allowed by its endowments of production factors such as technology, capital, and labor'. Thus, the optimum size or growth rate of population depends on: (i) the social welfare function; (ii) the production function; and (iii) the demographic-economic interrelationship, which defines how national income is divided between consumption (including the bearing and education of children) and savings, and how the demographic and economic changes induced thereby, in turn, affect production capacities. The optimum population growth path can then be derived by dynamically optimizing (i) under the constraints of (ii) and (iii), which yields the optimum population growth rate as a function of the model's parameters. This paper estimates the optimum population growth rate of Korea by specifying (i), (ii), and (iii) based on recent developments in economic theory, solving the dynamic optimization problem, and inserting empirical estimates for Korea as the parameter values. The result shows that the optimum path of population growth in Korea is around TFR = 1.81, and that it is most sensitive, in terms of the partial elasticities around the optimum path, to the cost of children, the share of capital income, the consumption rate, time preference, and the population elasticity of the utility function. According to a follow-up survey, the perceived cost of children, the time-preference rate, and the population elasticity of utility vary significantly across socio-economic classes in Korea, implying that, compared to their counterparts, the older generation and more highly educated classes prefer a higher growth path for the population of Korea.
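The dynamic optimization the abstract outlines can be put in a standard optimal-growth form. The following sketch uses our own notation and a generic specification; the paper's exact welfare and production functions are not reproduced here:

```latex
\max_{\{c_t,\;n_t\}} \; W \;=\; \sum_{t=0}^{\infty} \beta^{t}\, N_t^{\varepsilon}\, u(c_t)
\quad\text{s.t.}\quad
K_{t+1} \;=\; F(K_t,\,N_t) \;-\; N_t\,c_t \;-\; N_t\,\phi(n_t),
\qquad
N_{t+1} \;=\; (1+n_t)\,N_t ,
```

where $u$ is per-capita utility, $\varepsilon$ the population elasticity of the welfare function, $\phi(n)$ the per-capita cost of bearing and educating children, and $\beta$ the time-preference factor. The first-order conditions of such a problem express the optimum growth rate $n^*$ as a function of these parameters, which is the object the paper estimates empirically for Korea.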

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is all the more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage detects the zero anaphor. In the second stage, the antecedent search is carried out over the candidates. If the antecedent search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique in previously proposed methods is to perform binary classification over all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected.
In this paper, however, we propose to view antecedent search as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM, which receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent, the other that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; the performance of our system therefore depends on that of the syntactic analyzer, which is a limitation. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means a state-of-the-art system can be developed with our technique. Future work enabling the system to utilize semantic information is expected to lead to significant performance improvement.
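The Pegasos-style subgradient optimization mentioned above can be illustrated on its simplest instance, a plain linear binary SVM (the structural case replaces the per-example hinge loss with a structured loss over label sequences, which is not reproduced here). The toy data and hyperparameters are invented for the sketch.

```python
import random

# Pegasos-style stochastic subgradient descent for a linear binary SVM:
# minimize lam/2 * ||w||^2 + average hinge loss. Toy data, illustrative only.

def pegasos(data, lam=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size, as in Pegasos
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # Subgradient step: always shrink w (regularizer); add the
            # hinge term only when the margin constraint is violated.
            for i in range(dim):
                w[i] *= 1.0 - eta * lam
                if margin < 1.0:
                    w[i] += eta * y * x[i]
    return w

# Linearly separable toy set: label is the sign of the first coordinate.
data = [((1.0, 0.2), 1), ((0.8, -0.1), 1), ((-1.0, 0.3), -1), ((-0.7, -0.2), -1)]
w = pegasos(data)
correct = sum(
    1 for x, y in data
    if y * sum(wi * xi for wi, xi in zip(w, x)) > 0
)
print(correct)  # all four toy points end up on the correct side
```

The structural variant keeps the same update schedule but computes the subgradient from the highest-scoring incorrect label sequence instead of a single misclassified point.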

The Optimal Configuration of Arch Structures Using Force Approximate Method (부재력(部材力) 근사해법(近似解法)을 이용(利用)한 아치구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;Ro, Min Lae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.13 no.2
    • /
    • pp.95-109
    • /
    • 1993
  • In this study, the optimal configuration of arch structures is obtained with a decomposition technique. The object of this study is to provide a method for optimizing the shapes of both two-hinged and fixed arches. The optimal-configuration problem for arch structures includes the interaction formulas, the working-stress constraints, and the buckling-stress constraints, on the assumption that arch ribs can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the relation between the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is decreased by approximating member forces through sensitivity analysis in the design-space approach. The objective function is formulated as the total weight of the structure, and the constraints comprise the working stress, the buckling stress, and the side limits. On the second level, the nodal point coordinates of the arch structure are used as design variables, with the weight again taken as the objective function. By treating the nodal point coordinates as design variables, the optimization problem reduces to an unconstrained optimal design problem, which is easy to solve. Numerical comparisons with results obtained from tests on several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of the constraint types and the configuration of the arch structures, and the optimal configurations obtained in this study are almost identical to those of other studies. The total weight could be decreased by 17.7%-91.7% when an optimal configuration is achieved.

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded recently, malicious attacks and hacking of networked systems occur frequently, and these intrusions can cause fatal damage in government agencies, public offices, and companies operating various systems. For such reasons, there is growing interest in and demand for intrusion detection systems (IDS): security systems for detecting, identifying, and appropriately responding to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal situations, but show poor performance when they meet a new or unknown pattern of network attack. For this reason, several recent studies have adopted various artificial intelligence techniques that can proactively respond to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations such as the risk of overfitting, the requirement of a large sample size, and the opacity of the prediction process (the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses an SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider the asymmetric error cost by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection.
The first error type is the False-Positive Error (FPE), in which normal activity is misjudged as an intrusion, possibly resulting in unnecessary fixes. The second error type is the False-Negative Error (FNE), which misjudges malware as a normal program. Compared to FPE, FNE is more fatal; thus, when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, we designed our proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. In this case a conventional SVM cannot be applied, because it is designed to generate a discrete output (i.e., a class). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, the ANN using Neuroshell 4.0, and the SVM using LIBSVM v2.90, a freeware SVM trainer. Empirical results showed that our proposed SVM-based model outperformed all the comparative models in detecting network intrusions in terms of accuracy, and that it reduced the total misclassification cost compared to the ANN-based intrusion detection model.
As a result, it is expected that the intrusion detection model proposed in this paper would not only enhance the performance of IDS, but also lead to better management of FNE.

Convolution-Superposition Based IMRT Plan Study for the PTV Containing the Air Region: A Prostate Cancer Case (Convolution-Superposition 알고리즘을 이용한 치료계획시스템에서 공기가 포함된 표적체적에 대한 IMRT 플랜: 전립선 케이스)

  • Kang, Sei-Kwon;Yoon, Jai-Woong;Park, Soah;Hwang, Taejin;Cheong, Kwang-Ho;Han, Taejin;Kim, Haeyoung;Lee, Me-Yeon;Kim, Kyoung Ju;Bae, Hoonsik
    • Progress in Medical Physics
    • /
    • v.24 no.4
    • /
    • pp.271-277
    • /
    • 2013
  • In prostate IMRT planning, the planning target volume (PTV), extended from the clinical target volume (CTV), often contains an overlapping air volume from the rectum, which poses a problem in optimization and prescription. This study aimed to establish a planning method for such a case. There are three options as to which volume should be considered the target during the optimization process: the PTV including the air volume at air density ('airOpt'); the PTV including the air volume with its density set to one, mimicking tissue ('density1Opt'); and the PTV excluding the air volume ('noAirOpt'). Using 10 MV photon beams, seven-field IMRT plans for each target were created under the same parameter conditions. For these three cases, DVHs for the PTV, bladder, and rectum were compared. The dose coverage was also evaluated for the CTV and for a shifted CTV, a copied virtual CTV translated toward the rectum inside the PTV so as to occupy the initial position of the overlapping air volume, simulating the worst case for target dose coverage. Among the three options, only the density1Opt plan gave a clinically acceptable result in terms of target coverage and maximum dose. The airOpt plan gave excessively high dose and dose coverage for the target volume, whereas the noAirOpt plan underdosed the shifted CTV. Therefore, for a prostate IMRT plan with an air region in the PTV, modifying the density of the included air to a value of one is suggested prior to optimization and prescription for the PTV. This idea applies equally to any case, including head and neck cancer, where the PTV contains an overlapping air region. A further study is in progress.

Distributed Throughput-Maximization Using the Up- and Downlink Duality in Wireless Networks (무선망에서의 상하향 링크 쌍대성 성질을 활용한 분산적 수율 최대화 기법)

  • Park, Jung-Min;Kim, Seong-Lyun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.11A
    • /
    • pp.878-891
    • /
    • 2011
  • We consider the throughput-maximization problem for both the up- and downlink in a wireless network with interference channels. For this purpose, we design an iterative and distributed uplink algorithm based on Lagrangian relaxation. Using the uplink power prices and network duality, we achieve throughput maximization in the dual downlink, which has a symmetric channel and an equal power budget compared to the uplink. The network duality we prove here is a generalized version of previous results [10], [11]. Computational tests show that the up- and downlink throughput of our algorithms is close to the optimal value for channel orthogonality factors ${\theta}{\in}$(0.5, 1]. On the other hand, when the channels are only slightly orthogonal (${\theta}{\in}$(0, 0.5]), we observe some throughput degradation in the downlink. We have extended our analysis to the real downlink, which has a nonsymmetric channel and an unequal power budget compared to the uplink, and shown that the modified duality-based approach applies fully to it. Considering the complexity of the algorithms in [6] and [18], we conclude that these results are quite encouraging in terms of both performance and practical applicability of the generalized duality theorem.
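The paper's Lagrangian-relaxation algorithm is not reproduced here, but the flavor of distributed, iterative uplink power updates can be shown with the classic Foschini-Miljanic iteration, in which each link rescales its power by target-SINR / measured-SINR using only local information. The gain matrix, noise, and SINR targets are illustrative assumptions.

```python
# Distributed uplink power control sketch (Foschini-Miljanic iteration):
# each link i updates p_i <- (target_i / SINR_i) * p_i independently,
# and the powers converge whenever the targets are jointly feasible.

G = [[1.0, 0.1],   # G[i][j]: channel gain from transmitter j to receiver i
     [0.2, 1.0]]   # illustrative two-link interference channel
noise = [0.1, 0.1]
target = [2.0, 2.0]  # per-link SINR targets (illustrative)

def sinr(p, i):
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / (noise[i] + interference)

p = [1.0, 1.0]
for _ in range(100):
    # Each link needs only its own measured SINR, so the update
    # runs distributively, one multiplication per link per slot.
    p = [target[i] / sinr(p, i) * p[i] for i in range(len(p))]

print(all(abs(sinr(p, i) - target[i]) < 1e-6 for i in range(2)))  # True
```

The per-link prices in the Lagrangian approach play an analogous coordinating role: local updates driven by locally observable quantities, with convergence guaranteed by properties of the interference coupling (here, the spectral radius of the normalized gain matrix being below one).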