• Title/Summary/Keyword: Optimization Technique

PRISM-KNU Development and Monthly Precipitation Mapping in South Korea (PRISM-KNU의 개발과 남한 월강수량 분포도 작성)

  • PARK, Jong-Chul;KIM, Man-Kyu
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.19 no.2
    • /
    • pp.27-46
    • /
    • 2016
  • In this study, the parameter-elevation regressions on independent slopes model-Kongju National University (PRISM-KNU) system was developed to interpolate monthly precipitation data. One feature of PRISM-KNU is that it adjusts the allowable range of the slope according to the elevation range in the equation representing the linear relationship between precipitation and elevation. The parameter values of the model were determined using an optimization technique, and the model was applied to produce monthly precipitation data with a spatial resolution of 1 km × 1 km for South Korea from 2000 to 2014. The Kling-Gupta Efficiency used for model evaluation was over 0.7 in 86% of all simulated cases. In addition, a dramatic change in the spatial pattern of the precipitation data was observed in the output of the Modified Korean PRISM, whereas no such phenomenon occurred in the output of PRISM-KNU. This study confirmed the appropriateness of PRISM-KNU and showed that the spatial consistency of the data produced by the model improved over that produced by the Modified Korean PRISM. PRISM-KNU and its output are expected to be utilized in various future studies.
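
The two computational ingredients named in this abstract are easy to sketch: the elevation-precipitation regression whose slope is clamped to an allowable range, and the Kling-Gupta Efficiency used for evaluation. Below is a minimal Python sketch; the `slope_range` rule is a hypothetical stand-in for PRISM-KNU's elevation-dependent bounds, and the KGE is the common Gupta et al. (2009) form, since the abstract does not state the exact variant used.

```python
import numpy as np

def clamped_precip_elevation_slope(elev, precip, slope_range):
    """Fit precip = a + b * elev, then clamp b to an allowable range.

    `slope_range` is a hypothetical callable (elevation span in the
    window -> (b_min, b_max)) standing in for PRISM-KNU's rule that the
    allowable slope range depends on the elevation range.
    """
    elev, precip = np.asarray(elev, float), np.asarray(precip, float)
    b, a = np.polyfit(elev, precip, 1)           # slope, intercept
    b_min, b_max = slope_range(elev.max() - elev.min())
    b = np.clip(b, b_min, b_max)
    a = precip.mean() - b * elev.mean()          # re-anchor the intercept
    return a, b

def kling_gupta_efficiency(sim, obs):
    """Standard KGE (Gupta et al., 2009); 1 indicates a perfect match."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]              # linear correlation
    alpha = sim.std() / obs.std()                # variability ratio
    beta = sim.mean() / obs.mean()               # bias ratio
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)
```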

A Study on the Enhancement of Isolation of the MIMO Antenna for LTE/DCS1800/USPCS1900 Handset (LTE/DCS1800/USPCS1900 단말기용 MIMO 안테나의 격리도 개선에 관한 연구)

  • Cho, Dong-Ki;Son, Ho-Cheol;Lee, Jin-Woo;Lee, Sang-Woon;Lee, Mun-Soo
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.47 no.10
    • /
    • pp.80-85
    • /
    • 2010
  • In this paper, a MIMO antenna is proposed for LTE/DCS1800/USPCS1900 handset applications. The proposed antenna is based on the IFA, and its wide bandwidth is obtained by using a stagger tuning technique. To improve the isolation, a suspended line is connected to the shorting points of the two antennas, and capacitors and inductors are added to the connected suspended line. Two identical antennas, each with a volume of 2.8 cc (40 × 10 × 7 mm), are mounted on the two end lines of the system ground plane (40 × 60 mm). Analysis and optimization of the antenna performance are performed using CST Microwave Studio. The bandwidths satisfy LTE band class 13 (746-787 MHz), class 14 (758-798 MHz), and the DCS1800/USPCS1900 band (1710-1990 MHz). The isolation between the two antennas is about -12 dB for the LTE band and -10 dB for the DCS1800/USPCS1900 band, and the radiation efficiency of each antenna is about 33% for the LTE band and 45% for the DCS1800/USPCS1900 band, respectively.
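
For readers unfamiliar with the reported figures, isolation is simply the inter-port coupling coefficient S21 expressed in decibels. A minimal sketch; the S21 value here is illustrative, not taken from the paper's CST simulation.

```python
import numpy as np

def isolation_db(s21):
    """Isolation between two MIMO elements as 20*log10(|S21|).

    `s21` is the (possibly complex) coupling coefficient from a field
    solver such as CST; values near -12 dB (LTE) and -10 dB
    (DCS1800/USPCS1900) match the levels reported in the abstract.
    """
    return 20 * np.log10(np.abs(s21))

print(isolation_db(0.25))   # illustrative input, prints ≈ -12 dB
```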

Improvement of Address Pointer Assignment in DSP Code Generation (DSP용 코드 생성에서 주소 포인터 할당 성능 향상 기법)

  • Lee, Hee-Jin;Lee, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.1
    • /
    • pp.37-47
    • /
    • 2008
  • Exploitation of the address generation units (AGUs) typically provided in DSPs plays an important role in DSP code generation, since they perform fast address computation in parallel with the central data path. Offset assignment is the optimization of the memory layout of program variables that takes advantage of the capabilities of the AGUs; it consists of memory layout generation and address pointer assignment steps. In this paper, we propose an effective address pointer assignment method to minimize the number of address calculation instructions in DSP code generation. The proposed approach reduces the time complexity of a conventional address pointer assignment algorithm with fixed memory layouts by breaking minimum-cost nodes. To reduce memory size and processing time, we employ a powerful pruning technique. Moreover, our approach improves the initial solution iteratively by changing the memory layout in each iteration, because the memory layout affects the result of the address pointer assignment algorithm. We applied the proposed approach to about 3,000 sequences of the OffsetStone benchmarks to demonstrate its effectiveness. Experimental results on the benchmarks show an average improvement of 25.9% in the address codes over previous works.
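
To make the cost being minimized concrete, the sketch below counts explicit address-register loads for a fixed memory layout and access sequence, treating any step within the AGU's auto-increment/decrement range as free. It is a simple greedy illustration of the objective only, not the paper's pruning or iterative-layout algorithm; all names and values are hypothetical.

```python
def address_setup_cost(layout, accesses, pointers=2, autoinc=1):
    """Greedy estimate of explicit address-pointer load instructions.

    `layout` maps variable -> memory address; `accesses` is the access
    sequence. A step is free if some pointer can reach the next address
    via auto-increment/decrement (|delta| <= autoinc); otherwise one
    address-load instruction is counted.
    """
    regs = [None] * pointers      # current address held by each pointer
    cost = 0
    for var in accesses:
        addr = layout[var]
        # prefer a pointer already within auto-modify range
        best = next((i for i, a in enumerate(regs)
                     if a is not None and abs(addr - a) <= autoinc), None)
        if best is None:
            best = regs.index(None) if None in regs else 0
            cost += 1             # explicit address load needed
        regs[best] = addr
    return cost

# Example: layout {a:0, b:1, c:2}, access sequence a b a b c b
print(address_setup_cost({'a': 0, 'b': 1, 'c': 2}, list("ababcb")))
```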

Measurement of Viscosity Behavior in In-situ Anionic Polymerization of ε-caprolactam for Thermoplastic Reactive Resin Transfer Molding (반응액상성형에서 ε-카프로락탐의 음이온 중합에 따른 점도 거동 평가)

  • Lee, Jae Hyo;Kang, Seung In;Kim, Sang Woo;Yi, Jin Woo;Seong, Dong Gi
    • Composites Research
    • /
    • v.33 no.2
    • /
    • pp.39-43
    • /
    • 2020
  • Recently, fabrication processes for thermoplastic polyamide-based composites, which offer recyclability as well as impact, chemical, and abrasion resistance, have been widely studied. In particular, thermoplastic reactive resin transfer molding (TRTM), in which a low-viscosity monomer is injected and polymerized in-situ inside the mold, has received great attention, because thermoplastic melts can hardly impregnate a fiber preform due to their very high viscosity. However, it is difficult to optimize the processing conditions because of the high reactivity of the monomer used, ε-caprolactam, and its sensitivity to the external environment. In this study, viscosity, an important process parameter in TRTM, was measured during the in-situ anionic polymerization of ε-caprolactam, and solutions to the problems caused by the high polymerization rate and the sensitivity to moisture and oxygen were suggested. The reliability of the improved measurement technique was verified by comparing the viscosity behavior under various environmental conditions, including humidity and atmosphere, and it is expected to be helpful for the optimization of the TRTM process.

A Study on the Thermo-Mechanical Fatigue Loading for Time Reduction in Fabricating an Artificial Cracked Specimen (열-기계적 피로하중을 받는 균열시편 제작시간 단축에 관한 연구)

  • Lee, Gyu-Beom;Choi, Joo-Ho;An, Dae-Hwan;Lee, Bo-Young
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.21 no.1
    • /
    • pp.35-42
    • /
    • 2008
  • In nuclear power plants, early detection of fatigue cracks caused by cyclic thermal loads using non-destructive test (NDT) equipment is very important under strict safety regulations. To this end, many efforts are devoted to fabricating artificially cracked specimens for practicing engineers in NDT companies. A crack of this kind, however, cannot be made by conventional machining; it should be made under a cyclic thermal load close to the in-situ condition, which takes tremendous time due to the repetition. In this study, the thermal loading condition that minimizes the time for fabricating the cracked specimen is investigated using a simulation technique that predicts crack initiation and propagation behavior. A simulation and an experiment are conducted under an initially assumed condition for validation purposes. A number of simulations are then conducted under a variety of heating and cooling conditions, from which the solution achieving the minimum time for a crack of the wanted size is found. In the simulation, the general-purpose software ANSYS is used for the stress analysis, MATLAB is used to compute the crack initiation life, and ZENCRACK, a special-purpose software for crack growth prediction, is used to compute the crack propagation life. As a result, the time for the crack to reach a size of 1 mm is reduced from 418 hours under the initial condition to 319 hours under the optimum condition, a reduction of about 24%.
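
The propagation half of such a prediction is typically a Paris-law integration; a minimal sketch under generic assumptions is shown below. The constants C, m, and Y are steel-like placeholders, not the paper's specimen values, which come from the ANSYS/ZENCRACK model.

```python
import math

def paris_cycles(a0_m, af_m, dsigma_mpa, C=1e-11, m=3.0, Y=1.12, da=1e-6):
    """Numerically integrate the Paris law da/dN = C * (dK)^m.

    dK = Y * dsigma * sqrt(pi * a), with dK in MPa*sqrt(m) and crack
    length a in meters. C, m, Y are generic placeholders for a
    steel-like material, not values from the paper.
    """
    a, n = a0_m, 0.0
    while a < af_m:
        dk = Y * dsigma_mpa * math.sqrt(math.pi * a)  # stress-intensity range
        n += da / (C * dk**m)                         # cycles to grow by da
        a += da
    return n

# e.g. cycles to grow a 0.1 mm flaw to the 1 mm target crack size
print(f"{paris_cycles(1e-4, 1e-3, dsigma_mpa=200):.3g} cycles")
```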

An Efficient Clustering Algorithm based on Heuristic Evolution (휴리스틱 진화에 기반한 효율적 클러스터링 알고리즘)

  • Ryu, Joung-Woo;Kang, Myung-Ku;Kim, Myung-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.1_2
    • /
    • pp.80-90
    • /
    • 2002
  • Clustering is a useful technique for grouping data points such that points within a single group/cluster have similar characteristics. Many clustering algorithms have been developed and used in engineering applications including pattern recognition and image processing. Recently, clustering has drawn increasing attention as one of the important techniques in data mining. However, clustering algorithms such as K-means and Fuzzy C-means suffer from difficulties: the number of clusters must be determined a priori, and the clustering result depends on the initial set of clusters, which can fail to yield desirable results. In this paper, we propose a new clustering algorithm that solves these problems. In our method we use an evolutionary algorithm to solve the local-optima problem, in which clustering converges to an undesirable state when started with an inappropriate set of clusters. We also adopt a new measure that represents how well the data are clustered; the measure is determined in terms of both intra-cluster dispersion and inter-cluster separability. Using this measure, the number of clusters is automatically determined as a result of the optimization process. We also combine a heuristic embodying problem-specific knowledge with the evolutionary algorithm to speed up its search. We have experimented with our algorithm on several sets of multi-dimensional data, and the results show that it outperforms existing algorithms.
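
A minimal sketch of a fitness in the spirit of the measure described, rewarding small intra-cluster dispersion and large inter-cluster separability; the paper's exact definition is not given in the abstract.

```python
import numpy as np

def cluster_fitness(X, labels):
    """Score a partition (higher is better); assumes >= 2 clusters.

    A generic separation/compactness ratio illustrating the kind of
    measure described, not the paper's exact formula.
    """
    X, labels = np.asarray(X, float), np.asarray(labels)
    ks = np.unique(labels)
    centers = np.array([X[labels == k].mean(axis=0) for k in ks])
    # intra-cluster dispersion: mean squared distance to own center
    intra = np.mean([((X[labels == k] - c)**2).sum(axis=1).mean()
                     for k, c in zip(ks, centers)])
    # inter-cluster separability: minimum squared center-to-center distance
    d = ((centers[:, None] - centers[None, :])**2).sum(-1)
    inter = d[d > 0].min()
    return inter / (intra + 1e-12)
```

In an evolutionary loop, chromosomes encoding candidate cluster sets of varying sizes can be ranked by such a fitness, so the number of clusters falls out of the optimization rather than being fixed a priori.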

Trend of Research and Industry-Related Analysis in Data Quality Using Time Series Network Analysis (시계열 네트워크분석을 통한 데이터품질 연구경향 및 산업연관 분석)

  • Jang, Kyoung-Ae;Lee, Kwang-Suk;Kim, Woo-Je
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.6
    • /
    • pp.295-306
    • /
    • 2016
  • The purpose of this paper is both to analyze research trends and to predict industrial flows using metadata from previous studies on data quality. There have been many attempts to analyze research trends in various fields, but analysis of previous studies on data quality has produced poor results because of their vast scope and volume of data. Therefore, in this paper, we used text mining and social network analysis for a time-series network analysis of the data-quality literature, collected from the Web of Science index database for papers published in international data-quality journals over 10 years. The analysis shows decreases in Mathematical & Computational Biology, Chemistry, Health Care Sciences & Services, Biochemistry & Molecular Biology, and Medical Information Science, and increases, on the contrary, in Environmental Sciences, Water Resources, Geology, and Instruments & Instrumentation. In addition, the social network analysis results show that the subjects with high centrality are analysis, algorithm, and network, and that image, model, sensor, and optimization are growing subjects in the data quality field. Furthermore, the industrial connection analysis on data quality shows high correlation between technique, industry, health, infrastructure, and customer service, and predicts that the Environmental Sciences, Biotechnology, and Health industries will continue to develop. This paper will be useful not only for people in the data-quality industry but also for researchers who analyze research patterns and industry connections in data quality.
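
The centrality step is straightforward to reproduce on a keyword co-occurrence network. A minimal sketch with networkx; the keyword lists here are hypothetical stand-ins for the Web of Science metadata.

```python
import itertools
import networkx as nx

# Hypothetical input: one keyword list per paper
papers = [["algorithm", "network", "model"],
          ["image", "sensor", "optimization"],
          ["network", "model", "optimization"]]

G = nx.Graph()
for kws in papers:   # link keywords that co-occur within one paper
    for u, v in itertools.combinations(sorted(set(kws)), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Degree centrality ranks hub subjects such as analysis/algorithm/network
for kw, c in sorted(nx.degree_centrality(G).items(),
                    key=lambda t: -t[1])[:5]:
    print(f"{kw}: {c:.2f}")
```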

Efficient 3D Object Simplification Algorithm Using 2D Planar Sampling and Wavelet Transform (2D 평면 표본화와 웨이브릿 변환을 이용한 효율적인 3차원 객체 간소화 알고리즘)

  • 장명호;이행석;한규필;박양우
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.5_6
    • /
    • pp.297-304
    • /
    • 2004
  • In this paper, a mesh simplification algorithm based on the wavelet transform and 2D planar sampling is proposed for the efficient handling of 3D objects in computer applications. Since 3D vertices are transformed directly with wavelets in conventional mesh compression and simplification algorithms, it is difficult to solve the tiling optimization problem of reconnecting vertices into faces in the synthesis stage, which heavily depends on vertex connectivity. In the proposed algorithm, however, a 3D mesh is sampled onto 2D planes and the 2D polygons on those planes are simplified independently. Accordingly, the transform of the 2D polygons is very tractable, and their connection information is replaced with a sequence of vertices. The vertex sequence of the 2D polygons on each plane is analyzed with wavelets, and the transformed data are simplified by removing the small wavelet coefficients that do not dominate the subjective quality of the shape. Therefore, the proposed algorithm can change the mesh level-of-detail simply by controlling the distance between the 2D sampling planes and the selective removal of wavelet coefficients. Experimental results show that the proposed algorithm is a simple and efficient simplification technique with low external distortion.
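
The core per-plane operation is a wavelet analysis of a polygon's vertex sequence followed by thresholding of small coefficients. A minimal one-level Haar sketch is given below; the paper's wavelet and multi-level scheme may differ.

```python
import numpy as np

def haar_simplify(poly, keep=0.5):
    """One-level Haar analysis of a closed polygon's vertex sequence,
    dropping the smallest detail coefficients before synthesis.

    `poly` is an (n, 2) array of 2D vertices with n even. A minimal
    illustration of thresholding wavelet coefficients of a planar
    vertex sequence, applied per sampling plane in the paper.
    """
    poly = np.asarray(poly, float)
    z = poly[:, 0] + 1j * poly[:, 1]            # complex vertex sequence
    approx = (z[0::2] + z[1::2]) / np.sqrt(2)   # Haar averages
    detail = (z[0::2] - z[1::2]) / np.sqrt(2)   # Haar differences
    # zero out the (1 - keep) fraction of smallest-magnitude details
    k = int(len(detail) * (1 - keep))
    if k > 0:
        detail[np.argsort(np.abs(detail))[:k]] = 0
    out = np.empty_like(z)
    out[0::2] = (approx + detail) / np.sqrt(2)  # inverse Haar step
    out[1::2] = (approx - detail) / np.sqrt(2)
    return np.column_stack([out.real, out.imag])
```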

Vibration Analysis of Large Structures by the Component-Mode Synthesis (부분구조진동형 합성방법에 의한 대형구조계의 진동해석)

  • B.H. Kim;T.Y. Chung;K.C. Kim
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.30 no.3
    • /
    • pp.116-126
    • /
    • 1993
  • The finite element method (FEM) has been commonly used for structural dynamic analysis. However, the direct global application of the FEM to large complex structures such as ships and offshore structures requires considerable computational effort, and remarkably more in structural dynamic optimization problems. Adoption of the component-mode synthesis method is an efficient means of overcoming this difficulty. Among the three classes of component-mode synthesis methods, the free-interface mode method is recognized to have the advantages of better computational efficiency and easier incorporation of substructures' experimental results, but the disadvantage of lower accuracy in analytical results. In this paper, an advanced method to improve the accuracy of the free-interface mode method in the vibration analysis of large complex structures is presented. In order to compensate for the truncation effect of the higher modes of the substructures in the synthesis process, both residual inertia and stiffness effects are taken into account, and a frequency-shifting technique is introduced in the formulation of the residual compliance of the substructures. The introduction of the frequency shift not only excludes cumbersome manipulation of singular matrices for semi-definite substructural systems but also gives more accurate results around the specified shifting frequency. Numerical examples of typical structural models, including a ship-like two-dimensional finite element model, show that the results based on the presented method are well competitive in accuracy with those obtained by direct global FEM analysis for frequencies lower than the highest one employed in the synthesis, with remarkably higher computational efficiency, and that the presented method is more efficient and accurate than the fixed-interface mode method.
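
The frequency-shifted residual compliance can be sketched in its standard free-interface form; the paper's exact formulation may differ. Assuming mass-normalized free-interface modes φ_k with natural frequencies ω_k, m kept modes, and a shifting frequency ω_s that is not a natural frequency:

```latex
% Residual compliance of a substructure with frequency shift \omega_s
% (standard free-interface form, a sketch of the technique described)
G_{\mathrm{res}}(\omega_s)
  = \left(K - \omega_s^{2} M\right)^{-1}
    - \sum_{k=1}^{m} \frac{\phi_k \phi_k^{\mathsf{T}}}{\omega_k^{2} - \omega_s^{2}}
  = \sum_{k=m+1}^{n} \frac{\phi_k \phi_k^{\mathsf{T}}}{\omega_k^{2} - \omega_s^{2}}
```

Because K − ω_s²M is nonsingular for a free-free (semi-definite) substructure whenever ω_s avoids the natural frequencies, the rigid-body singularity of K never has to be handled explicitly, and the truncated-mode contribution is represented most accurately near ω_s, consistent with the behavior described in the abstract.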

U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images (Landsat 8 기반 SPARCS 데이터셋을 이용한 U-Net 구름탐지)

  • Kang, Jonggu;Kim, Geunah;Jeong, Yemin;Kim, Seoyeon;Youn, Youjeong;Cho, Soobin;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1149-1161
    • /
    • 2021
  • With the trend toward applying computer vision to satellite images, cloud detection using deep learning has also attracted attention recently. In this study, we built a U-Net cloud detection model using the SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow) dataset with image data augmentation, and carried out 10-fold cross-validation for an objective assessment of the model. The blind test on 1,800 datasets of 512 × 512 pixels showed relatively high performance, with an accuracy of 0.821, a precision of 0.847, a recall of 0.821, an F1-score of 0.831, and an IoU (Intersection over Union) of 0.723. Although 14.5% of actual cloud shadows were misclassified as land and 19.7% of actual clouds were misidentified as land, this can be overcome by increasing the quality and quantity of the label datasets. Moreover, a state-of-the-art DeepLab V3+ model and the NAS (Neural Architecture Search) optimization technique can help cloud detection for CAS500 (Compact Advanced Satellite 500) in South Korea.
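
The reported scores follow the standard definitions; a minimal sketch for binary cloud masks is given below. The paper's figures appear to be averaged over several classes (including cloud shadow), so this binary form is an illustration of the metrics, not a reproduction of the evaluation.

```python
import numpy as np

def binary_cloud_metrics(pred, truth):
    """Accuracy / precision / recall / F1 / IoU for binary cloud masks.

    `pred` and `truth` are boolean arrays (True = cloud). These are the
    standard definitions behind scores like those in the abstract
    (0.821 / 0.847 / 0.821 / 0.831 / 0.723).
    """
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)      # cloud predicted as cloud
    tn = np.sum(~pred & ~truth)    # clear predicted as clear
    fp = np.sum(pred & ~truth)     # clear predicted as cloud
    fn = np.sum(~pred & truth)     # cloud predicted as clear
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / pred.size,
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "iou": tp / (tp + fp + fn),
    }
```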