• Title/Summary/Keyword: Gradient-based Method


TACAN modulation generator for antenna purpose that precisely adjusts factor of modulation (변조도를 정밀하게 조정 하는 TACAN 안테나용 변조신호발생기)

  • Kim, Jong-Won;Son, Kyong-Sik;Lim, Jae-Hyun
    • Journal of Digital Convergence
    • /
    • v.15 no.4
    • /
    • pp.275-284
    • /
    • 2017
  • TACAN (TACtical Air Navigation) was created to support short-range navigation (200~300 miles) for military aircraft. TACAN must satisfy MIL-STD-291C, the U.S. military standard, which requires the modulation factors of the 15 Hz and 135 Hz components to be $21\pm9\%$ each, with their sum within 55%. In existing TACAN antennas, the 15 Hz and 135 Hz modulation factors are created differently depending on the antenna's diameter, wavelength, gradient angle, internal modulation method, and operating frequency code. Because the existing TACAN antenna has no adjustment function, operation must be stopped for repair whenever the modulating signal exceeds the MIL-STD-291C limits. Hence, this study designs and builds a modulating signal generator using an FPGA and examines how the 15 Hz and 135 Hz modulation factors change with the values set for each criterion. Moreover, the generator adjusts itself automatically based on the monitoring signal emitted by the antenna and sounds an alarm if the signal exceeds the standard.
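
The MIL-STD-291C criterion quoted above is easy to check numerically. The sketch below is not the authors' FPGA implementation; it is a minimal Python illustration, under assumed sampling parameters, of estimating the 15 Hz and 135 Hz modulation factors from an amplitude envelope and testing them against the $21\pm9\%$ and 55%-sum limits.

```python
import numpy as np

def modulation_factors(envelope: np.ndarray, fs: float) -> tuple[float, float]:
    """Estimate the 15 Hz and 135 Hz modulation factors (percent) of an
    amplitude envelope sampled at fs Hz, as single-bin DFT amplitudes
    relative to the DC (carrier) level."""
    n = len(envelope)
    spectrum = np.fft.rfft(envelope) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    dc = abs(spectrum[0])
    def factor(f_target: float) -> float:
        idx = int(np.argmin(np.abs(freqs - f_target)))
        return 2.0 * abs(spectrum[idx]) / dc * 100.0
    return factor(15.0), factor(135.0)

def within_mil_std_291c(m15: float, m135: float) -> bool:
    """Each factor must be 21 +/- 9 percent; the sum must stay within 55 percent."""
    return all(12.0 <= m <= 30.0 for m in (m15, m135)) and (m15 + m135) <= 55.0

# Synthetic test envelope with 21 percent modulation at both 15 Hz and 135 Hz.
fs = 5400.0
t = np.arange(0.0, 1.0, 1.0 / fs)
env = 1.0 + 0.21 * np.cos(2 * np.pi * 15 * t) + 0.21 * np.cos(2 * np.pi * 135 * t)
m15, m135 = modulation_factors(env, fs)
print(f"15 Hz: {m15:.1f}%  135 Hz: {m135:.1f}%  pass: {within_mil_std_291c(m15, m135)}")
```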

A Fast-Learning Algorithm for MLP in Pattern Recognition (패턴인식의 MLP 고속학습 알고리즘)

  • Lee, Tae-Seung;Choi, Ho-Jin
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.3
    • /
    • pp.344-355
    • /
    • 2002
  • Having a variety of good characteristics compared with other pattern recognition techniques, the Multilayer Perceptron (MLP) has been used in a wide range of applications. However, the Error Backpropagation (EBP) algorithm that MLP uses for learning is known to require a relatively long learning time. Because learning data in pattern recognition contain abundant redundancies, online learning methods, which update the parameters of the MLP pattern by pattern, are very effective for increasing learning speed. The typical online EBP algorithm applies a fixed learning rate for each parameter update. Although a large speedup can be obtained with online EBP by choosing an appropriate fixed rate, a fixed rate cannot respond effectively as the learning phases change and the learning pattern areas vary. To solve this problem, this paper divides learning into three phases and proposes an Instant Learning by Varying Rate and Skipping (ILVRS) method that reflects only the necessary patterns as the phases change. The basic concept of ILVRS is as follows: to discriminate and use the necessary patterns, which change as learning proceeds, (1) ILVRS uses a variable learning rate, derived from the error calculated for each pattern and suppressed within a proper range, and (2) ILVRS bypasses unnecessary patterns in the learning phases. In this paper, an experiment on speaker verification, as an application of pattern recognition, is conducted, and the results are presented to verify the performance of ILVRS.
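
As a rough illustration of the two ILVRS ingredients described above, the following sketch runs online EBP on a toy network with an error-driven, clamped learning rate and pattern skipping. The network size, gain, clamp range, and skip threshold are all assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-4-1 MLP on XOR, purely for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

GAIN, RATE_MIN, RATE_MAX = 8.0, 0.1, 2.0   # assumed, not the paper's values
SKIP_THRESHOLD = 1e-3                      # assumed skip criterion

for epoch in range(5000):
    for i in rng.permutation(len(X)):          # online: pattern-by-pattern updates
        h = sigmoid(X[i] @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = 0.5 * float((y[i] - out) ** 2)
        if err < SKIP_THRESHOLD:               # (2) bypass well-learned patterns
            continue
        rate = float(np.clip(GAIN * err, RATE_MIN, RATE_MAX))  # (1) error-driven rate
        d_out = (out - y[i]) * out * (1 - out)     # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)         # hidden-layer delta
        W2 -= rate * np.outer(h, d_out); b2 -= rate * d_out
        W1 -= rate * np.outer(X[i], d_h); b1 -= rate * d_h

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel())  # should approach [0, 1, 1, 0]
```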

Study on Stable Gait Generation of Quadruped Walking Robot Using Minimum-Jerk Trajectory and Body X-axis Sway (최소저크궤적과 X축-스웨이를 이용한 4족 보행로봇의 안정적 걸음새 연구)

  • Lee, Dong-Goo;Shin, Wu-Hyeon;Kim, Tae-Jung;Lee, Jeong-Ho;Lee, Young-Seok;Hwang, Heon;Choi, Sun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.2
    • /
    • pp.170-177
    • /
    • 2019
  • In this paper, three approaches for improving the stability of a quadruped robot are presented. First, the minimum-jerk trajectory is used to optimize the leg trajectory. Second, the newly proposed sine-wave sway is compared, on the basis of jerk values, with the conventional LSM (Longitudinal Stability Margin) method. Third, the optimal sway stride is calculated through repeated robot simulation using ADAMS-MATLAB co-simulation. Through this process, the improvement in the robot's walking is compared with the existing theory. First, using the minimum-jerk trajectory for the movement of the body and the end of the leg during walking reduced the average gradient at the points where the leg trajectory changes rapidly by between 1.2 and 2.9, thereby increasing walking stability. Second, using the sine-wave sway presented in this paper rather than the LSM method, the average jerk was reduced by 0.019 on the Z-axis, 0.457 on the X-axis, 0.02 on the Y-axis, and 0.479 in 3D. Third, the stride length that minimizes the jerk value was derived from the above analysis, and a stride of 20 cm was the most stable.
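
The minimum-jerk trajectory the paper applies is commonly realized as the standard fifth-order polynomial with zero boundary velocity and acceleration (Flash and Hogan's form). A minimal sketch, using the paper's 20 cm stride as the example displacement; the step duration and sway amplitude below are assumptions:

```python
import numpy as np

def minimum_jerk(x0: float, xf: float, T: float, t: np.ndarray) -> np.ndarray:
    """Fifth-order minimum-jerk profile from x0 to xf over duration T, with
    zero velocity and acceleration at both endpoints."""
    tau = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

T = 1.0                                    # assumed step duration, seconds
t = np.linspace(0.0, T, 201)
foot_x = minimum_jerk(0.0, 0.20, T, t)     # the paper's most stable 20 cm stride
sway_y = 0.03 * np.sin(2 * np.pi * t / T)  # sine-wave body sway; amplitude assumed

# Jerk is the third time derivative of position; check it numerically.
jerk = np.gradient(np.gradient(np.gradient(foot_x, t), t), t)
print(f"end of stride: {foot_x[-1]:.2f} m, peak |jerk|: {np.max(np.abs(jerk)):.2f} m/s^3")
```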

Line-Segment Feature Analysis Algorithm for Handwritten-Digits Data Reduction (필기체 숫자 데이터 차원 감소를 위한 선분 특징 분석 알고리즘)

  • Kim, Chang-Min;Lee, Woo-Beom
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.4
    • /
    • pp.125-132
    • /
    • 2021
  • As the layers of an artificial neural network deepen and the dimensionality of the input data increases, learning and recognition in the neural network (NN) demand a large number of arithmetic operations at high speed. Thus, this study proposes a method for reducing the dimensionality of the NN's input data. The proposed Line-segment Feature Analysis (LFA) algorithm applies a gradient-based edge detection algorithm using median filters to analyze the line-segment features of the objects in an image. From the extracted edge image, the eigenvalues corresponding to eight kinds of line segments are calculated using 3×3 or 5×5 detection filters whose coefficients are taken from [0, 1, 2, 4, 8, 16, 32, 64, 128]. Two one-dimensional vectors of size 256 are produced by accumulating the matching response values of the eigenvalues calculated with each detection filter, and the two vectors are summed element-wise. Two LFA256 vectors are then merged to produce a 512-sized LFA512 vector. In a comparative experiment evaluating the proposed LFA algorithm, as a data dimensionality reduction for handwritten-digit recognition, against the PCA technique using the AlexNet model, LFA256 and LFA512 achieved recognition rates of 98.7% and 99%, respectively.
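
A minimal sketch of how such a pipeline might look, under assumptions: median filtering followed by a Sobel gradient edge map, then encoding each 3×3 neighborhood of the binary edge image with the power-of-two coefficients and accumulating a 256-bin histogram (one LFA256 vector; per the abstract, two such vectors are merged into LFA512). The threshold, filter layout, and the use of Sobel are illustrative choices, not the paper's exact filters.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel, convolve

def lfa256(image: np.ndarray, edge_thresh: float = 0.2) -> np.ndarray:
    """One 256-dimensional line-segment feature vector for a grayscale image."""
    smoothed = median_filter(image.astype(float), size=3)
    # Gradient-based edge map: Sobel magnitude, thresholded to a binary image.
    grad = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
    edges = (grad > edge_thresh * grad.max()).astype(int)
    # Power-of-two weights over the 3x3 neighborhood, center weighted 0,
    # so every neighborhood pattern maps to a code in [0, 255].
    weights = np.array([[1, 2, 4], [8, 0, 16], [32, 64, 128]])
    codes = convolve(edges, weights, mode='constant')
    # Accumulate equal response values from edge pixels into 256 bins.
    return np.bincount(codes[edges == 1], minlength=256).astype(float)

# Example on a random "digit-like" image; real input would be e.g. 28x28 MNIST.
img = np.random.default_rng(0).random((28, 28))
feature = lfa256(img)
print(feature.shape)  # (256,) -- a 256-dimensional reduced representation
```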

Vegetation Classification and Ecological Characteristics of Black Locust (Robinia pseudoacacia L.) Plantations in Gyeongbuk Province, Korea (경북지방 아까시나무 조림지의 식생유형과 생태적 특성)

  • Jae-Soon Song;Hak-Yun Kim;Jun-Soo Kim;Seung-Hwan Oh;Hyun-Je Cho
    • Journal of Korean Society of Forest Science
    • /
    • v.112 no.1
    • /
    • pp.11-22
    • /
    • 2023
  • This study was conducted to provide basic information necessary for ecological management to restore the naturalness of black locust (Robinia pseudoacacia L.) plantations located in the mountains of Gyeongbuk, Korea. Using vegetation data collected from 200 black locust stands, vegetation types were classified using the TWINSPAN method, the spatial arrangement along the environmental gradient was identified through DCA analysis, and a synoptic table of communities was prepared based on diagnostic species determined from community fidelity (Φ) for each vegetation type. The vegetation was classified into seven types, namely, Quercus mongolica-Polygonatum odoratum var. pluriflorum type, Castanea crenata-Smilax china type, Clematis apiifolia-Lonicera japonica type, Rosa multiflora-Artemisia indica type, Quercus variabilis-Lindera glauca type, Ulmus parvifolia-Celtis sinensis type, and Prunus padus-Celastrus flagellaris type. These types usually reflected differences in complex factors such as altitude, moisture regime, successional stage, and disturbance regime. The mean relative importance value of the constituent species was highest for black locust (39.7), but oaks such as Quercus variabilis, Q. serrata, Q. mongolica, Q. acutissima, and Q. aliena were also identified as important constituent species with high relative importance values, indicating their potential in successional trends. In addition, the total percent cover of constituent species by vegetation type, life form composition, species diversity index, and indicator species were compared.

A Study on the Operational Method of Urban Arterial With U-Turn (U-Turn을 이용한 간선도로 운영방안)

  • 박용진;손한철
    • Journal of Korean Society of Transportation
    • /
    • v.18 no.1
    • /
    • pp.17-26
    • /
    • 2000
  • U-turns are widely allowed at intersections by local police departments, while left-turn phases have gradually been prohibited. However, no strategies for U-turn movements at signalized intersections are available. Therefore, the purpose of this study is to propose an efficient operational method for urban arterials adopting U-turn strategies. Four alternatives are evaluated: 1) U-turn movements are allowed at the adjacent intersection with an exclusive U-turn lane while the left turn on the major or the minor approach is prohibited; 2) U-turn movements are allowed at the adjacent mid-block pedestrian crossing with an exclusive U-turn lane while the left turn on the major approach is prohibited; 3) U-turn movements are allowed at the adjacent mid-block pedestrian crossing with an exclusive U-turn lane while the left turn on the minor approach is prohibited; and 4) a comparative alternative between alternatives 3 and 4. From the results of this study, it is concluded that allowing U-turn movements at the adjacent mid-block pedestrian crossing with an exclusive U-turn lane is the most effective strategy among the alternatives. Boundaries for the strategies of alternatives 1 and 4 are proposed based on the major through and left-turn volumes and the minor left-turn volume.


A Study for Drying of Sewage Sludge through Immersion Frying Using Used Oil (폐유를 이용한 하수슬러지 유중 건조 연구)

  • Shin, Mi-Soo;Kim, Hey-Suk;Hong, Ji-Eun;Jang, Dong-Soon;Ohm, Tae-In
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.30 no.7
    • /
    • pp.694-699
    • /
    • 2008
  • Considering the severe regulations on sludge treatment, such as restrictions on direct landfill and ocean dumping, there is no doubt that advanced study on the proper treatment of sludge will be urgently needed in the near future. As one viable treatment method, the fry-drying of sludge using waste oil has been investigated in this study. The fundamental mechanism of this drying method is the rapid escape of moisture from the sludge pores into the oil medium, caused by the steep pressure gradient formed between the sludge and the rapidly heated oil. As part of this research effort on fry-drying with waste oil, a series of basic experiments was conducted to obtain typical drying curves as functions of important parameters such as drying temperature, drying time, oil type, and the geometric shape of the formed sludge. Based on this study, a number of useful conclusions can be drawn, as follows. The fry-drying method by oil immersion was found to be quite effective in removing sludge moisture: in general, the moisture content decreased significantly after 10 minutes and fell below 5% after 14 minutes regardless of the drying temperature. Increasing the oil temperature up to 140°C significantly favored moisture removal, but no visible difference was observed above 140°C. As expected, decreasing the sludge diameter improved drying because of the increased surface area per unit volume. Furthermore, the effect of oil properties was noted: with engine oil, moisture evaporation was remarkably delayed compared with vegetable oil because of engine oil's higher viscosity. As drying time passed, moisture evaporation increased considerably at drying temperatures above 140°C compared with 120°C. Accordingly, it is considered desirable to keep the drying temperature above 140°C regardless of the type of used oil.
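
For a back-of-the-envelope reading of the drying curve reported above, one can fit a simple first-order decay to the stated endpoint (below 5% moisture at 14 minutes). This model and the assumed initial moisture content are purely illustrative; neither comes from the paper.

```python
import math

# First-order drying model MR(t) = M0 * exp(-k t), fitted to the abstract's
# endpoint: moisture below 5 % by 14 minutes. M0 (initial moisture, % wet
# basis) is an assumption for illustration only.
M0, Mt, t_end = 80.0, 5.0, 14.0
k = -math.log(Mt / M0) / t_end          # drying rate constant, 1/min
print(f"k ~ {k:.3f} 1/min")
for minute in (0, 5, 10, 14):           # predicted moisture over the run
    print(minute, "min:", f"{M0 * math.exp(-k * minute):.1f} %")
```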

Verification of Radiation Therapy Planning Dose Based on Electron Density Correction of CT Number: XiO Experiments (컴퓨터영상의 전자밀도보정에 근거한 치료선량확인: XiO 실험)

  • Choi Tae-Jin;Kim Jin-Hee;Kim Ok-Bae
    • Progress in Medical Physics
    • /
    • v.17 no.2
    • /
    • pp.105-113
    • /
    • 2006
  • This study was performed to confirm the corrected dose in materials of different electron density, using the superposition and FFT convolution methods in a radiotherapy planning system. A diluted $K_2HPO_4$ solution as a bone substitute, cork for lung, and n-glucose for soft tissue were used, as their effective atomic numbers are very close to those of the corresponding tissues. Image data were acquired from a CT scanner (Siemens Somatom Emotion) at 110 kVp and 130 kVp. The electron density was derived from the CT number (H) and entered into the planning system (XiO, CMS) for heterogeneity correction. A heterogeneous tissue phantom was used to compare measured doses with those delivered by the computer planning system. The results showed that the CT number is strongly affected by the photoelectric effect in high-Z materials. The electron density for a given energy spectrum showed a first-order (linear) relation as a function of H in soft tissue and in bone materials, respectively. In our experiments, the relative electron density as a function of H was 0.001026H + 1.00 for soft tissue and 0.000304H + 1.07 for bone at the 130 kVp spectrum, and 0.000274H + 1.10 for bone at the lower 110 kVp. These electron-density calibrations from the CT number were used to determine the depth and path length of photon transport. The computed superposition and FFT convolution doses agreed with measurements within 1.0% in the homogeneous phantom for 6 and 15 MV X-rays, but the FFT convolution showed a large discrepancy of -5.0% for the bone-tissue correction with 6 MV X-rays. In these experiments, the doses evaluated with the superposition method showed acceptable discrepancies, averaging -1.2% for lung and -2.9% for bone-equivalent materials with 6 MV X-rays. However, the FFT convolution method showed larger discrepancies than superposition in low-electron-density media for 6 and 15 MV X-rays. As the CT number depends on the energy spectrum of the X-rays, the gradient of the CT number-to-electron density function should be verified regularly.
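
The linear calibrations reported above convert directly into a lookup for heterogeneity correction. The sketch below applies those fitted lines; the soft-tissue/bone threshold on H is an assumption for illustration, not part of the paper's procedure.

```python
def relative_electron_density(H: float, kvp: int = 130,
                              bone_threshold: float = 100.0) -> float:
    """Relative electron density from CT number H, using the linear fits
    reported in the abstract. The H threshold separating soft tissue from
    bone is an illustrative assumption."""
    if H < bone_threshold:               # soft tissue (130 kVp fit)
        return 0.001026 * H + 1.00
    if kvp == 130:                       # bone, 130 kVp calibration
        return 0.000304 * H + 1.07
    return 0.000274 * H + 1.10           # bone, 110 kVp calibration

for H in (-50, 0, 40, 500, 1000):
    print(H, round(relative_electron_density(H), 3))
```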


A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited memory environments but also learns much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, materials, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and the reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are of limited use in practical investment because of the most fundamental question: whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. While various studies address parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new way to reduce them in an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
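
A minimal sketch of the core idea, under simplifying assumptions: forecast each asset's next-window volatility with XGBoost from simple rolling-return features, then form risk parity weights from the forecasts. The synthetic data, feature set, and the naive inverse-volatility step (a stand-in for the paper's full covariance-based risk parity optimization) are all illustrative.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_days, n_assets, window = 1500, 10, 20   # 10 sectors, 20-day out-of-sample window
returns = rng.normal(0, 0.01, (n_days, n_assets)) * (1 + 0.5 * rng.random(n_assets))

def make_dataset(r: np.ndarray):
    """Features: rolling stats of one asset's returns; target: next-window vol."""
    X, y = [], []
    for t in range(window, len(r) - window):
        past, future = r[t - window:t], r[t:t + window]
        X.append([past.mean(), past.std(), np.abs(past).mean()])
        y.append(future.std())
    return np.array(X), np.array(y)

pred_vol = np.empty(n_assets)
for a in range(n_assets):
    X, y = make_dataset(returns[:, a])
    model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
    model.fit(X[:-1], y[:-1])                 # train on all but the latest window
    pred_vol[a] = model.predict(X[-1:])[0]    # volatility forecast for next window

inv = 1.0 / pred_vol
weights = inv / inv.sum()                     # naive (inverse-vol) risk parity weights
print(np.round(weights, 3), weights.sum())
```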

Echocardiographic Diagnosis of Pulmonary Arterial Hypertension in Chronic Lung Disease with Hypoxemia (만성 저산소성 폐질환의 폐동맥 고혈압에 대한 심초음파 검사)

  • Chang, Jung-Hyun
    • Tuberculosis and Respiratory Diseases
    • /
    • v.46 no.6
    • /
    • pp.846-855
    • /
    • 1999
  • Background: Secondary pulmonary hypertension is an important final endpoint in patients with chronic hypoxic lung disease, accompanied by deterioration of pulmonary hemodynamics. The clinical diagnosis of pulmonary hypertension and/or cor pulmonale can be difficult, and simple noninvasive evaluation of pulmonary artery pressure has been a relevant clinical challenge for many years. Doppler echocardiography may be a more reliable method than M-mode echocardiography for evaluating pulmonary hemodynamics in such patients in terms of accuracy, reproducibility, and the ease of obtaining an appropriate echocardiographic window. The aim of this study was to assess echocardiographic parameters associated with pulmonary arterial hypertension, defined by an increased right ventricular systolic pressure (RVSP) calculated from the trans-tricuspid gradient, in patients with chronic hypoxic lung disease. Method: We examined 19 patients with chronic hypoxic lung disease and clinically suspected pulmonary hypertension by two-dimensional echocardiography via the left parasternal and subcostal approaches in the supine position. Doppler echocardiography measured RVSP from the tricuspid regurgitant velocity in continuous-wave mode with a 2.5 MHz transducer, and the acceleration time (AT) in the right ventricular outflow tract in pulsed-wave mode, for the estimation of pulmonary arterial pressure. Results: On echocardiography, moderate to severe pulmonary arterial hypertension was defined as an RVSP of more than 40 mmHg in the presence of tricuspid regurgitation. Increased right ventricular end-systolic diameter and shortened AT were noted in the increased-RVSP group, and increased RVSP correlated negatively with AT. Other clinical data, including pulmonary function parameters, arterial blood gas analysis, and M-mode echocardiographic parameters, did not change significantly with increased RVSP. Conclusion: These findings suggest that a shortened AT on pulsed-wave Doppler can be useful for quantifying pulmonary arterial pressure in patients with increased RVSP and chronic hypoxemic lung disease. Doppler echocardiography in the pulmonary hypertension of chronic hypoxic lung disease is a useful, noninvasive option in routine clinical practice.
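
The trans-tricuspid gradient mentioned above is conventionally obtained from the simplified Bernoulli equation, $\Delta P = 4v^2$, with $v$ the peak tricuspid regurgitant velocity from continuous-wave Doppler; adding an estimated right atrial pressure (RAP) gives RVSP. A minimal sketch, with RAP assumed at 10 mmHg for illustration:

```python
def rvsp_mmhg(tr_velocity_m_s: float, rap_mmhg: float = 10.0) -> float:
    """RVSP from the simplified Bernoulli equation: 4*v^2 plus estimated RAP.
    The RAP default is an illustrative assumption, not from the paper."""
    return 4.0 * tr_velocity_m_s ** 2 + rap_mmhg

# The paper's cutoff for moderate-to-severe pulmonary hypertension: RVSP > 40 mmHg.
for v in (2.0, 2.8, 3.5):
    p = rvsp_mmhg(v)
    print(f"TR jet {v} m/s -> RVSP ~ {p:.0f} mmHg", "(PH)" if p > 40 else "")
```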
