• Title/Summary/Keyword: Computation Time

Evaluation of the DCT-PLS Method for Spatial Gap Filling of Gridded Data (격자자료 결측복원을 위한 DCT-PLS 기법의 활용성 평가)

  • Youn, Youjeong;Kim, Seoyeon;Jeong, Yemin;Cho, Subin;Lee, Yangwon
    • Korean Journal of Remote Sensing, v.36 no.6_1, pp.1407-1419, 2020
  • Long time-series gridded data are crucial for analyses of Earth environmental change. Climate reanalyses and satellite images are now used as global-scale, periodical, and quantitative information for the atmosphere and land surface. This paper examines the feasibility of DCT-PLS (penalized least squares regression based on the discrete cosine transform) for the spatial gap filling of gridded data through experiments on multiple variables. Because gap-free data are required for an objective comparison of original with gap-filled data, we used LDAPS (Local Data Assimilation and Prediction System) daily data and MODIS (Moderate Resolution Imaging Spectroradiometer) monthly products. In the experiments for relative humidity, wind speed, LST (land surface temperature), and NDVI (normalized difference vegetation index), randomly generated gaps were filled with values very similar to the original data; the correlation coefficients were over 0.95 for all four variables. Because the DCT-PLS method requires no ancillary data and can exploit both spatial and temporal information with fast computation, it can be applied to operational systems for satellite data processing.
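
The gap-filling iteration behind DCT-PLS — smooth in the DCT domain, re-impose the observed samples, and repeat — can be illustrated in one dimension. This is a minimal sketch under assumed parameter values (the penalty weight `s`, iteration count, and function names are illustrative), not the authors' implementation:

```python
import math

def dct2(x):
    # unnormalized DCT-II (naive O(N^2); fine for a small demo)
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct2(X):
    # inverse of the unnormalized DCT-II above (a scaled DCT-III)
    N = len(X)
    return [(X[0] + 2.0 * sum(X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                              for k in range(1, N))) / N
            for n in range(N)]

def dct_pls_fill(y, observed, s=0.1, iters=200):
    """Fill gaps in a 1D series by iterating z <- IDCT(G * DCT(u)), where u
    equals the data at observed samples and the current estimate z at the
    gaps, and the gain G penalizes high frequencies (penalized least squares)."""
    N = len(y)
    mean = sum(v for v, m in zip(y, observed) if m) / sum(observed)
    z = [v if m else mean for v, m in zip(y, observed)]   # init gaps with mean
    # eigenvalues of the discrete Laplacian under the DCT drive the penalty
    gain = [1.0 / (1.0 + s * (2.0 - 2.0 * math.cos(k * math.pi / N)) ** 2)
            for k in range(N)]
    for _ in range(iters):
        u = [y[i] if observed[i] else z[i] for i in range(N)]
        z = idct2([g * c for g, c in zip(gain, dct2(u))])
    return z
```

With the smoothing penalty kept small, the filled values at the gaps closely track the surrounding signal, which is the behavior the abstract reports for its gridded variables.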

Hierarchical Particle Swarm Optimization for Multi UAV Waypoints Planning Under Various Threats (다양한 위협 하에서 복수 무인기의 경로점 계획을 위한 계층적 입자 군집 최적화)

  • Chung, Wonmo;Kim, Myunggun;Lee, Sanha;Lee, Sang-Pill;Park, Chun-Shin;Son, Hungsun
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.50 no.6, pp.385-391, 2022
  • This paper develops a path planning algorithm combining gradient descent-based path planning (GBPP) and particle swarm optimization (PSO) that accounts for prohibited flight areas, terrain information, and the characteristics of fixed-wing unmanned aerial vehicles (UAVs) in 3D space. GBPP generates paths quickly, but depending on the initial path it often converges to a local minimum and produces an unsafe path. Bio-inspired swarm intelligence algorithms, such as the genetic algorithm (GA) and PSO, can avoid the local-minimum problem by sampling several paths. However, as the number of optimization variables grows with the number of UAVs and waypoints, the number of particles must grow accordingly, requiring heavy computation. To overcome the disadvantages of the two algorithms, a hierarchical path planning algorithm based on hierarchical particle swarm optimization (HPSO) is developed by defining the initial path, which is the input of GBPP, through two sets of variables including the particle variables. The feasibility of the proposed algorithm is verified by software-in-the-loop simulation (SILS) on a flight control computer (FCC) for UAVs.
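
The PSO building block described above can be sketched with a minimal, generic particle swarm minimizer. This is a plain PSO with illustrative parameter values, not the paper's hierarchical HPSO or its path cost:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimize cost(x) over a box by tracking personal and global bests."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + attraction to personal best + attraction to global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

In the paper's setting the cost function would encode threat, terrain, and prohibited-area penalties along a candidate path; a simple sphere function can stand in for a quick check.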

A Design of Point Scalar Multiplier for Binary Edwards Curves Cryptography (이진 에드워즈 곡선 암호를 위한 점 스칼라 곱셈기 설계)

  • Kim, Min-Ju;Jeong, Young-Su;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.8, pp.1172-1179, 2022
  • This paper describes the design of a point scalar multiplier for public-key cryptography based on binary Edwards curves (BEdC). For efficient implementation of point addition (PA) and point doubling (PD) on BEdC, projective coordinates were adopted for the finite field arithmetic, improving computational performance because only one inversion is needed per point scalar multiplication (PSM). By applying optimizations to the hardware design, the storage and arithmetic steps for the finite field arithmetic in PA and PD were reduced by approximately 40%. We designed two types of point scalar multipliers for BEdC: Type-I uses one 257-b × 257-b binary multiplier, and Type-II uses eight 32-b × 32-b binary multipliers. The Type-II design uses 65% fewer LUTs than Type-I but takes about 3.5 times the PSM computation time when operating at 240 MHz. Therefore, the Type-I BEdC crypto core is suitable for applications requiring high performance, while the Type-II structure suits resource-constrained applications.
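
The Type-I versus Type-II trade-off above — one full-width 257-b multiplier versus eight 32-b multipliers — comes down to decomposing one large carry-less (GF(2)) multiplication into word-level partial products. A minimal software sketch of that decomposition (illustrative, not the paper's hardware design):

```python
def clmul(a, b):
    # carry-less (GF(2)[x]) multiplication of two nonnegative integers
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def limbs(x, w):
    # split an integer into w-bit words, least significant first
    out = []
    while True:
        out.append(x & ((1 << w) - 1))
        x >>= w
        if x == 0:
            return out

def clmul_limbwise(a, b, w=32):
    # the same product assembled from w x w partial products,
    # mirroring a design that reuses a small multiplier over many cycles
    r = 0
    for i, ai in enumerate(limbs(a, w)):
        for j, bj in enumerate(limbs(b, w)):
            r ^= clmul(ai, bj) << ((i + j) * w)
    return r

def gf2_reduce(p, mod):
    # reduce a polynomial product modulo an irreducible polynomial over GF(2)
    while p.bit_length() >= mod.bit_length():
        p ^= mod << (p.bit_length() - mod.bit_length())
    return p
```

The limb-wise version computes the same polynomial product from 32-bit pieces, which is the essence of trading area (LUTs) for extra cycles; `gf2_reduce` then folds the product back into the field.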

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering, v.26 no.6, pp.692-703, 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and point clouds require huge data storage and high-performance computing devices to support various services. Video-based Point Cloud Compression (V-PCC), currently being standardized by the international standards organization MPEG, is a projection-based method that projects the point cloud onto 2D planes and then compresses the result using 2D video codecs. V-PCC compresses point cloud objects using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information describing the relationship between the 2D planes and 3D space. When increasing the density of a point cloud or expanding an object, 3D computation is generally used, but it is complicated, takes a long time, and makes it difficult to determine the correct location of a new point. This paper proposes a method, within V-PCC, that applies 2D interpolation to the image onto which the point cloud is projected, generating additional points at more accurate locations with less computation.
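
The proposed idea — interpolating in the projected 2D geometry image instead of computing in 3D — can be illustrated with a toy densifier that bilinearly upsamples a depth map wherever the occupancy map is filled and emits the new samples as (u, v, depth) points. This is a simplified sketch of the general technique, not the V-PCC codec or the paper's exact method:

```python
def densify(depth, occ, scale=2):
    """Bilinearly resample a projected depth image at scale x resolution,
    emitting (u, v, depth) samples only where all four supporting pixels
    are marked occupied, so no depth is invented outside the patch."""
    H, W = len(depth), len(depth[0])
    points = []
    for yy in range(H * scale):
        for xx in range(W * scale):
            y, x = yy / scale, xx / scale
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
            # interpolate only inside the occupied region of the patch
            if not (occ[y0][x0] and occ[y0][x1] and occ[y1][x0] and occ[y1][x1]):
                continue
            fy, fx = y - y0, x - x0
            d = ((1 - fy) * ((1 - fx) * depth[y0][x0] + fx * depth[y0][x1])
                 + fy * ((1 - fx) * depth[y1][x0] + fx * depth[y1][x1]))
            points.append((x, y, d))   # (u, v, interpolated depth)
    return points
```

Each emitted sample would then be back-projected to 3D using the patch's projection plane and offsets, exactly as decoded geometry pixels are; the 2D interpolation itself is just a few multiply-adds per new point.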

An Improved Reliability-Based Design Optimization using Moving Least Squares Approximation (이동최소자승근사법을 이용한 개선된 신뢰도 기반 최적설계)

  • Kang, Soo-Chang;Koh, Hyun-Moo
    • KSCE Journal of Civil and Environmental Engineering Research, v.29 no.1A, pp.45-52, 2009
  • In conventional structural design, deterministic optimization that satisfies codified constraints is performed to ensure safety and maximize economy. However, uncertainties are inevitable due to the stochastic nature of structural materials and applied loads, so deterministic optimization that ignores them can lead to unreliable designs. Recently, there has been much research on reliability-based design optimization (RBDO), which considers both reliability and optimization. RBDO involves evaluating probabilistic constraints, which can be estimated using the RIA (reliability index approach) or the PMA (performance measure approach); PMA is generally known to be more stable and efficient than RIA. Despite significant advances in PMA, RBDO still requires large computation time for large-scale applications. In this paper, a new RBDO method is presented to achieve a more stable and efficient algorithm. The idea is to integrate a response surface method (RSM) with PMA, using the moving least squares (MLS) method to approximate the limit state equation. Through a mathematical example and a ten-bar truss problem, the proposed method shows better convergence and efficiency than other approaches.
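
The MLS approximation used to build the response surface can be sketched in one dimension with a linear basis and Gaussian weights (the bandwidth `h` and names are illustrative; the paper applies MLS to the limit state function in the design variable space):

```python
import math

def mls_value(xs, ys, x, h=0.5):
    """Moving least squares with a linear basis: fit a + b*(xi - x) by
    weighted least squares around x and return a, the fitted value at x."""
    w = [math.exp(-((xi - x) / h) ** 2) for xi in xs]   # Gaussian weights
    d = [xi - x for xi in xs]                           # centered abscissae
    # normal equations for the 2x2 weighted least squares system
    s0 = sum(w)
    s1 = sum(wi * di for wi, di in zip(w, d))
    s2 = sum(wi * di * di for wi, di in zip(w, d))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * di * yi for wi, di, yi in zip(w, d, ys))
    det = s0 * s2 - s1 * s1
    return (t0 * s2 - t1 * s1) / det
```

Because the basis is linear, the fit reproduces linear data exactly, while the Gaussian weight localizes the approximation around the evaluation point, which is what makes MLS attractive for sampling a limit state surface.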

Histological Validation of Cardiovascular Magnetic Resonance T1 Mapping for Assessing the Evolution of Myocardial Injury in Myocardial Infarction: An Experimental Study

  • Lu Zhang;Zhi-gang Yang;Huayan Xu;Meng-xi Yang;Rong Xu;Lin Chen;Ran Sun;Tianyu Miao;Jichun Zhao;Xiaoyue Zhou;Chuan Fu;Yingkun Guo
    • Korean Journal of Radiology, v.21 no.12, pp.1294-1304, 2020
  • Objective: To determine whether T1 mapping could monitor the dynamic changes of injury in myocardial infarction (MI) and be histologically validated. Materials and Methods: In 22 pigs, MI was induced by ligating the left anterior descending artery, and the animals underwent serial cardiovascular magnetic resonance examinations with modified Look-Locker inversion T1 mapping and extracellular volume (ECV) computation in the acute (within 24 hours, n = 22), subacute (7 days, n = 13), and chronic (3 months, n = 7) phases of MI. Masson's trichrome staining was performed for histological ECV calculation. Myocardial native T1 and ECV were obtained by region-of-interest measurement in infarcted, peri-infarct, and remote myocardium. Results: Native T1 and ECV in peri-infarct myocardium differed from remote myocardium in the acute (1181 ± 62 ms vs. 1113 ± 64 ms, p = 0.002; 24 ± 4% vs. 19 ± 4%, p = 0.031) and subacute phases (1264 ± 41 ms vs. 1171 ± 56 ms, p < 0.001; 27 ± 4% vs. 22 ± 2%, p = 0.009) but not in the chronic phase (1157 ± 57 ms vs. 1120 ± 54 ms, p = 0.934; 23 ± 2% vs. 20 ± 1%, p = 0.109). From acute to chronic MI, infarcted native T1 peaked in the subacute phase (1275 ± 63 ms vs. 1637 ± 123 ms vs. 1471 ± 98 ms, p < 0.001), while ECV progressively increased with time (35 ± 7% vs. 46 ± 6% vs. 52 ± 4%, p < 0.001). Native T1 correlated well with histological findings (R2 = 0.65 to 0.89, all p < 0.001), as did ECV (R2 = 0.73 to 0.94, all p < 0.001). Conclusion: T1 mapping allows the quantitative assessment of injury in MI and the noninvasive monitoring of tissue injury evolution, which correlates well with histological findings.
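
The ECV computation mentioned above is conventionally derived from pre- and post-contrast T1 relaxation rates of myocardium and blood, scaled by (1 − hematocrit). A minimal sketch of that commonly published formula (the study's exact processing pipeline may differ; the numbers below are illustrative):

```python
def ecv_fraction(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hematocrit):
    """Extracellular volume fraction from T1 values (same time units, e.g. ms):
    ECV = (1 - Hct) * (dR1_myocardium / dR1_blood), with R1 = 1/T1."""
    dr1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    dr1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - hematocrit) * dr1_myo / dr1_blood
```

With plausible values (native myocardial T1 1100 ms shortening to 500 ms post-contrast, blood 1600 ms to 400 ms, hematocrit 0.42) the formula yields an ECV of roughly a third, in the range the abstract reports for injured myocardium.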

A study on the design of an efficient hardware and software mixed-mode image processing system for detecting patient movement (환자움직임 감지를 위한 효율적인 하드웨어 및 소프트웨어 혼성 모드 영상처리시스템설계에 관한 연구)

  • Seungmin Jung;Euisung Jung;Myeonghwan Kim
    • Journal of Internet Computing and Services, v.25 no.1, pp.29-37, 2024
  • In this paper, we propose an efficient image processing system to detect and track the movement of specific objects such as patients. The proposed system extracts the outline area of an object from a binarized difference image by applying a thinning algorithm that enables more precise detection than previous algorithms and is well suited to mixed-mode design. The binarization and thinning steps, which require heavy computation, are designed at RTL (register transfer level) and replaced with optimized hardware blocks through logic synthesis. The designed binarization and thinning block was synthesized into a logic circuit using a standard 180 nm CMOS library, and its operation was verified through simulation. For comparison with a software-only implementation, the binarization and thinning operations were also profiled on sample images of 640 × 360 resolution in a 32-bit FPGA embedded system environment. Verification confirmed that the mixed-mode design improves the processing speed of the binarization and thinning stages by 93.8% compared with software-only processing. The proposed mixed-mode system for object recognition is expected to monitor patient movements efficiently even in edge computing environments where artificial intelligence networks are not deployed.
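
The software side of the binarization-and-thinning pipeline can be sketched as frame differencing with a fixed threshold followed by Zhang-Suen thinning. Zhang-Suen stands in for the paper's (unspecified) thinning algorithm, and the threshold is illustrative:

```python
def binarize_diff(prev, curr, thresh):
    # binarize the absolute frame difference: 1 = moving foreground
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def zhang_suen(img):
    """Thin a binary image (1 = foreground) in place using the classic
    two-subiteration Zhang-Suen algorithm; border pixels are left as-is."""
    H, W = len(img), len(img[0])

    def nbrs(y, x):   # P2..P9, clockwise from north
        return [img[y - 1][x], img[y - 1][x + 1], img[y][x + 1],
                img[y + 1][x + 1], img[y + 1][x], img[y + 1][x - 1],
                img[y][x - 1], img[y - 1][x - 1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for y in range(1, H - 1):
                for x in range(1, W - 1):
                    if not img[y][x]:
                        continue
                    P = nbrs(y, x)
                    B = sum(P)                      # nonzero neighbors
                    A = sum(P[i] == 0 and P[(i + 1) % 8] == 1
                            for i in range(8))      # 0->1 transitions
                    if step == 0:
                        c1 = P[0] * P[2] * P[4] == 0   # P2*P4*P6
                        c2 = P[2] * P[4] * P[6] == 0   # P4*P6*P8
                    else:
                        c1 = P[0] * P[2] * P[6] == 0   # P2*P4*P8
                        c2 = P[0] * P[4] * P[6] == 0   # P2*P6*P8
                    if 2 <= B <= 6 and A == 1 and c1 and c2:
                        to_clear.append((y, x))
            for y, x in to_clear:   # apply deletions after the scan
                img[y][x] = 0
                changed = True
    return img
```

Both loops are regular, per-pixel, and branch-light, which is exactly why the paper maps these two stages onto RTL hardware blocks while keeping the rest of the pipeline in software.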

Theoretical analysis of erosion degradation and safety assessment of submarine shield tunnel segment based on ion erosion

  • Xiaohan Zhou;Yangyang Yang;Zhongping Yang;Sijin Liu;Hao Wang;Weifeng Zhou
    • Geomechanics and Engineering, v.37 no.6, pp.599-614, 2024
  • To evaluate the safety status of deteriorated segments in a submarine shield tunnel during its service life, a seepage model was established based on a cross-sea shield tunnel project. This model was used to study the migration patterns of erosive ions within the shield segments, from which the degree of deterioration of the segments was determined. Using the derived analytical solution, the internal forces within the segments were calculated; finally, by applying the safety factor formula, the variation trends of the safety factors of segments with different degrees of deterioration were obtained. The findings demonstrate that corrosive seawater seeps continuously from the outside of the tunnel to the inside. When there is leakage at a joint, the nearby seepage field shows locally concentrated characteristics, and its depth and scope increase significantly. The chloride ion content decreases gradually with distance from the outer surface of the tunnel, and the penetration of erosive ions into the segment is facilitated by water pressure. The ion content over the whole ring of the segment lining follows the order vault < haunch < springing. The difference in the segments' rates of increase in chloride ion content decreases as service time increases. Based on the analytical solution, the segment's safety factor drops more when the joint leaks than when it is intact, and the change rate between the two states exhibits a general downward trend. The safety factor shows a similar pattern at different water depths and, at the same segment position, continuously decreases as the water depth increases. Its change curve follows a "spoon-shaped" rule with three phases: sudden drop, rise, and stabilization.
The analytical solution resolves the poor applicability of the indicators used in earlier studies: only the loss of effective bearing thickness of the segment lining is needed to calculate the safety factor of any cross-section of the shield tunnel. The analytical solution's results are conservative, however, and carry some safety margin. The derivation of the evaluation model indicates that the safety status of a cast-in-place concrete secondary lining can also be assessed with the analytical solution, which matters for the safe operation of tunnels and the protection of people and property, and gives the method a wide range of applications.
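
The chloride profile described above — content falling with distance from the outer surface and rising with service time — is commonly modeled with the error-function solution of Fick's second law. A minimal sketch under that assumption (not the paper's coupled seepage model; parameter names and units are illustrative):

```python
import math

def chloride_content(depth_mm, t_years, D_mm2_per_year, c_surface):
    """Fick's second-law ingress profile: C(x, t) = Cs * (1 - erf(x / (2*sqrt(D*t)))).
    depth_mm: distance from the exposed surface; D: apparent diffusion coefficient."""
    return c_surface * (1.0 - math.erf(depth_mm / (2.0 * math.sqrt(D_mm2_per_year * t_years))))
```

The model reproduces the qualitative trends in the abstract: content decays monotonically with cover depth, equals the surface concentration at the exposed face, and deepens with service time; pressure-driven seepage, joint leakage, and position around the ring would enter through the effective diffusion coefficient and boundary conditions.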

Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • Song, Hae-Jin;Ko, Byoung-Chul;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications, v.30 no.9, pp.829-842, 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunication, the front stage of a face recognition surveillance system, and video-phone applications. Since the main purpose of this paper is to track a face under various environments, we use a template-based face tracking method. To generate robust face templates, we apply the wavelet transform to an average face image and extract three types of wavelet templates from the transformed low-resolution average face. Because template matching is generally sensitive to changes in illumination, we apply min-max normalization with histogram equalization according to the variation in intensity. A tracking method is also applied to reduce computation time and to predict a precise face candidate region. Finally, facial components are detected, and the size of the facial ellipse is estimated from the relative distance between the two eyes.
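
The matching stage can be sketched as min-max normalization followed by an exhaustive template search. SSD matching here is a simple stand-in for the paper's wavelet-template pipeline, and all names are illustrative:

```python
def minmax_normalize(img):
    # rescale intensities to [0, 1] to reduce sensitivity to illumination level
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0 for _ in row] for row in img]
    return [[(v - lo) / (hi - lo) for v in row] for row in img]

def match_template(image, template):
    # exhaustive sum-of-squared-differences search over all placements;
    # returns the top-left (y, x) of the best match
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum((image[y + dy][x + dx] - template[dy][dx]) ** 2
                      for dy in range(th) for dx in range(tw))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

Restricting the search to a predicted face candidate region, as the tracking step in the abstract does, shrinks the `(y, x)` loop ranges and is where the real-time computation savings come from.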

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the support vector machine (SVM). In particular, SVM is recognized as a promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and minimizes an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM can be successfully applied in a variety of pattern recognition fields. However, three major drawbacks can degrade SVM's performance.
First, SVM was originally proposed for binary-class classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can reduce the computation time of multi-class classification, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the instances of one class greatly outnumber those of another; such data sets often yield a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning improves the performance of classification and prediction algorithms. AdaBoost, one of the most widely used ensemble techniques, constructs a composite classifier by sequentially training classifiers while increasing the weights of misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones, so boosting produces new classifiers that better predict the examples on which the current ensemble performs poorly. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each ten-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; the cross-validated folds are thus tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher geometric mean-based prediction accuracy than AdaBoost (24.65%) and SVM (15.42%). A t-test is used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
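
The two ingredients named above — geometric mean-based multiclass accuracy and AdaBoost-style reweighting of misclassified observations — can be sketched separately (illustrative helper names; not the authors' MGM-Boost implementation):

```python
import math

def gmean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls: a single badly-served class
    drags the score toward zero, unlike arithmetic-mean accuracy."""
    classes = sorted(set(y_true))
    prod = 1.0
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        prod *= sum(y_pred[i] == c for i in idx) / len(idx)
    return prod ** (1.0 / len(classes))

def adaboost_reweight(weights, is_correct):
    """One AdaBoost round: compute the weighted error, derive the classifier
    weight alpha, raise the weights of misclassified samples, and renormalize."""
    err = sum(w for w, c in zip(weights, is_correct) if not c) / sum(weights)
    alpha = 0.5 * math.log((1.0 - err) / err)
    new = [w * math.exp(-alpha if c else alpha)
           for w, c in zip(weights, is_correct)]
    s = sum(new)
    return [w / s for w in new], alpha
```

The geometric mean penalizes classifiers that sacrifice a minority rating class, which is why it is the natural score for the imbalanced bond-rating setting described above; the reweighting step is what lets boosting concentrate later rounds on those minority-class mistakes.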