• Title/Summary/Keyword: Computation

Approximate Dynamic Programming Based Interceptor Fire Control and Effectiveness Analysis for M-To-M Engagement (근사적 동적계획을 활용한 요격통제 및 동시교전 효과분석)

  • Lee, Changseok; Kim, Ju-Hyun; Choi, Bong Wan; Kim, Kyeongtaek
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.50 no.4, pp.287-295, 2022
  • As the low-altitude, long-range artillery threat has grown, development of an anti-artillery interception system to protect assets against such attacks is set to begin. We view the defense against long-range artillery attacks as a typical dynamic weapon target assignment (DWTA) problem. DWTA is a sequential decision process in which decisions made under uncertain future attacks affect subsequent decisions and their outcomes; these are the defining characteristics of a Markov decision process (MDP). We formulate the problem as an MDP model to examine the assignment policy for the defender. The proximity of the capital of South Korea to the North Korean border limits the computation time for a solution to a few seconds, and within that interval it is impossible to compute the exact optimal solution. We therefore apply an approximate dynamic programming (ADP) approach and check whether it can solve the MDP model within the processing time limit. We employ a Shoot-Shoot-Look policy as a baseline strategy and compare it with the ADP approach for three scenarios. Simulation results show that the ADP approach provides better solutions than the baseline strategy.
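
The abstract does not spell out the ADP formulation, so the following is only a minimal, hypothetical sketch of one-step-lookahead assignment with a crude value approximation, contrasted with a Shoot-Shoot-Look style baseline. The kill probability, cost weights, and function names are illustrative assumptions, not the authors' model.

```python
import itertools

# Hypothetical sketch of ADP-style one-step lookahead for dynamic weapon-target
# assignment (DWTA). Kill probability, value approximation, and the
# Shoot-Shoot-Look baseline below are illustrative assumptions only.

P_KILL = 0.7          # assumed single-shot kill probability
N_INTERCEPTORS = 4    # interceptors available in this decision epoch


def value_estimate(exp_leakers, interceptors_left):
    """Crude approximation of future cost: expected damage from threats that
    leak through now, minus the option value of unused interceptors."""
    return 10.0 * exp_leakers - 1.0 * interceptors_left


def expected_cost(assignment, threats):
    """One-step expected cost of a shot allocation (shots per threat)."""
    exp_leakers = sum((1.0 - P_KILL) ** shots
                      for _, shots in zip(threats, assignment))
    return value_estimate(exp_leakers, N_INTERCEPTORS - sum(assignment))


def assign_adp(threats):
    """Enumerate feasible shot allocations and pick the one that minimizes the
    one-step-lookahead approximate cost."""
    best, best_cost = None, float("inf")
    for alloc in itertools.product(range(3), repeat=len(threats)):
        if sum(alloc) > N_INTERCEPTORS:
            continue
        cost = expected_cost(alloc, threats)
        if cost < best_cost:
            best, best_cost = alloc, cost
    return best


def assign_shoot_shoot_look(threats):
    """Baseline: fire two shots at each threat in arrival order while
    interceptors remain, then look (assess) before re-engaging."""
    remaining, alloc = N_INTERCEPTORS, []
    for _ in threats:
        shots = min(2, remaining)
        alloc.append(shots)
        remaining -= shots
    return tuple(alloc)


if __name__ == "__main__":
    threats = ["rocket_1", "rocket_2", "rocket_3"]
    print("ADP assignment:", assign_adp(threats))
    print("SSL assignment:", assign_shoot_shoot_look(threats))
```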

A new warp scheduling technique for improving the performance of GPUs by utilizing MSHR information (GPU 성능 향상을 위한 MSHR 정보 기반 워프 스케줄링 기법)

  • Kim, Gwang Bok; Kim, Jong Myon; Kim, Cheol Hong
    • The Journal of Korean Institute of Next Generation Computing, v.13 no.3, pp.72-83, 2017
  • GPUs can provide high throughput with latency hiding by executing many warps in parallel. The MSHRs (Miss Status Holding Registers) of the L1 data cache track cache miss requests until the required data is serviced from lower-level memory. In recent GPUs, excessive requests for cache resources cause underutilization of GPU resources due to cache resource reservation failures. In this paper, we propose a new warp scheduling technique to reduce stall cycles when MSHR resources are scarce. The cache miss rate of each warp is predicted based on the observation that a warp shows a similar cache miss rate over long periods. Warps with low predicted miss rates, or computation-intensive warps, are given high issue priority when the MSHR is full. Our proposal improves GPU performance by using cache resources more efficiently, based on cache miss rate prediction and monitoring of the MSHR entries. According to our experimental results, reservation-fail cycles are reduced by 25.7% and IPC is increased by 6.2% with the proposed scheduling technique compared to a loose round-robin scheduler.
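
A minimal, hypothetical sketch of the scheduling idea described above: when the MSHR is full, warps with lower predicted miss rates are issued first; otherwise a loose round-robin order is used. The class names, the moving-average miss-rate predictor, and the numeric values are illustrative assumptions, not the paper's hardware implementation.

```python
from dataclasses import dataclass
from collections import deque

# Hypothetical sketch of MSHR-aware warp scheduling: prioritize warps with low
# predicted L1 miss rates when MSHR entries are exhausted, otherwise fall back
# to loose round-robin. The predictor and values below are illustrative only.


@dataclass
class Warp:
    wid: int
    miss_rate_pred: float = 0.0  # exponential moving average of observed misses

    def record_access(self, missed: bool, alpha: float = 0.1) -> None:
        """Update the per-warp miss-rate prediction after each L1 access."""
        self.miss_rate_pred = (1 - alpha) * self.miss_rate_pred + alpha * float(missed)


class MSHRAwareScheduler:
    def __init__(self, warps, mshr_capacity: int):
        self.rr_queue = deque(warps)          # loose round-robin order
        self.mshr_capacity = mshr_capacity

    def select(self, mshr_in_use: int) -> Warp:
        if mshr_in_use >= self.mshr_capacity:
            # MSHR full: issue the warp least likely to generate another miss.
            chosen = min(self.rr_queue, key=lambda w: w.miss_rate_pred)
            self.rr_queue.remove(chosen)
        else:
            chosen = self.rr_queue.popleft()  # normal round-robin issue
        self.rr_queue.append(chosen)
        return chosen


if __name__ == "__main__":
    warps = [Warp(wid=i) for i in range(4)]
    warps[0].miss_rate_pred = 0.80   # memory-intensive warp
    warps[1].miss_rate_pred = 0.55
    warps[2].miss_rate_pred = 0.60
    warps[3].miss_rate_pred = 0.05   # computation-intensive warp
    sched = MSHRAwareScheduler(warps, mshr_capacity=32)
    print("MSHR full ->", sched.select(mshr_in_use=32).wid)   # picks warp 3
    print("MSHR free ->", sched.select(mshr_in_use=10).wid)   # round-robin: warp 0
```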

Analysis of Principal Stress Distribution Difference of Tensile Plate with Partial Through-hole (부분 관통 구멍이 있는 인장판의 주응력 분포 차이 해석)

  • Park, Sang Hyun; Kim, Young Chul; Kim, Myung Soo; Baek, Tae Hyun
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology, v.7 no.2, pp.437-444, 2017
  • Stress concentrations around discontinuities, such as a hole in the cross section of a structural member, are of great importance because most material failures occur around such regions. The stress at a point under a concentrated load reaches a much larger value than the average stress in the member. In this paper, stress analysis was performed for a plate with a partial through-hole to find the difference of the principal stress distribution. In photoelasticity, the difference between the maximum and minimum principal stresses equals the isochromatic fringe order multiplied by the material fringe constant and divided by the distance through which the light passes, that is, the thickness of the specimen. Since the principal stress difference is proportional to the photoelastic fringe order, the distribution of the principal stress difference obtained by finite element analysis can be compared with the photoelastic experimental result. ANSYS Workbench, a finite element software package, is used to compute the differences of principal stresses at specific points on the measured lines. The values computed by ANSYS are compared with the experimental measurements from photoelasticity, and the two results agree well with each other. In addition, the stress concentration factor is obtained from the stress distributions analyzed for varying hole depths; the stress concentration factor increases as the depth of the hole increases.
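
The relation stated in prose in the abstract is the standard stress-optic law of photoelasticity. A minimal sketch of that computation is shown below; the symbol names are the conventional ones and the numerical values are illustrative only, not the specimen properties from the paper.

```python
# Stress-optic law of photoelasticity: the principal stress difference equals
# the fringe order times the material fringe constant divided by the specimen
# thickness (the light path length). Numerical values below are illustrative.

def principal_stress_difference(N: float, f_sigma: float, t: float) -> float:
    """sigma_1 - sigma_2 = N * f_sigma / t
    N       : isochromatic fringe order (dimensionless)
    f_sigma : material fringe constant (N/mm per fringe)
    t       : specimen thickness, i.e. light path length (mm)
    Returns the principal stress difference in MPa (N/mm^2)."""
    return N * f_sigma / t


if __name__ == "__main__":
    print(principal_stress_difference(N=2.5, f_sigma=7.0, t=6.0))  # ~2.92 MPa
```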

A Study about Learning Graph Representation on Farmhouse Apple Quality Images with Graph Transformer (그래프 트랜스포머 기반 농가 사과 품질 이미지의 그래프 표현 학습 연구)

  • Ji Hun Bae; Ju Hwan Lee; Gwang Hyun Yu; Gyeong Ju Kwon; Jin Young Kim
    • Smart Media Journal, v.12 no.1, pp.9-16, 2023
  • Recently, convolutional neural network (CNN) based systems have been developed to overcome the limitations of human resources in apple quality classification on farms. However, since convolutional neural networks accept only images of the same size, preprocessing such as sampling may be required, and oversampling causes loss of information from the original image, such as quality degradation and blurring. In this paper, to minimize this problem, we generate an image-patch-based graph from the original image and propose a random-walk-based positional encoding method for applying a graph transformer model. The method learns position embedding information for patches, which have no inherent positional information, based on the random walk algorithm, and finds the optimal graph structure by aggregating useful node information through the self-attention mechanism of the graph transformer. As a result, it is robust and performs well even on new graph structures with random node order and on arbitrary graph structures arising from the location of an object in an image. In experiments on five apple quality datasets, the learning accuracy was higher than that of other GNN models by 1.3% to 4.7%, and the number of parameters was 3.59M, about 15% of the 23.52M of the ResNet18 model. The model therefore shows fast inference speed owing to the reduced amount of computation, demonstrating its effectiveness.
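
The random-walk positional encoding itself is not spelled out in the abstract; the sketch below shows the commonly used formulation, in which the encoding of node i is the sequence of return probabilities on the diagonal of successive random-walk matrix powers. The 4-neighbor patch-grid graph is a toy assumption, not the paper's exact graph construction.

```python
import numpy as np

# Minimal sketch of random-walk positional encoding (RWPE) for a patch graph.
# The encoding of node i is the diagonal entry (P^k)_{ii} of the random-walk
# matrix P = D^{-1} A for k = 1..K. The 4-neighbor grid of image patches is a
# toy assumption for illustration.


def grid_adjacency(rows: int, cols: int) -> np.ndarray:
    """4-neighbor adjacency for an image split into rows x cols patches."""
    n = rows * cols
    A = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                A[i, i + 1] = A[i + 1, i] = 1.0
            if r + 1 < rows:
                A[i, i + cols] = A[i + cols, i] = 1.0
    return A


def random_walk_pe(A: np.ndarray, k: int = 8) -> np.ndarray:
    """Return an (n, k) matrix of random-walk positional encodings."""
    deg = A.sum(axis=1)
    P = A / deg[:, None]            # row-normalized random-walk matrix
    pe, Pk = [], np.eye(A.shape[0])
    for _ in range(k):
        Pk = Pk @ P                 # P^1, P^2, ..., P^k
        pe.append(np.diag(Pk))      # return probability of each node
    return np.stack(pe, axis=1)


if __name__ == "__main__":
    A = grid_adjacency(4, 4)        # 16 image patches on a 4x4 grid
    pe = random_walk_pe(A, k=8)
    print(pe.shape)                 # (16, 8): one positional feature vector per patch
```

Because the encoding depends only on the graph structure around each node, it is invariant to the order in which nodes are listed, which matches the robustness to random node order claimed above.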

Estimation of Measure of Alarmness of Drivers in Ubiquitous Transport Based on Fuzzy Set Theory (퍼지이론에 기초한 유비쿼터스 교통시대 첨단차량 운전자의 불안감도 산정)

  • Park, Hee Je; Bae, Sang Hoon; Kim, Young Seup
    • KSCE Journal of Civil and Environmental Engineering Research, v.28 no.1D, pp.11-19, 2008
  • Most existing car-following models, which are among the basic components of advanced vehicle systems, are developed from the physical relation between two vehicles and neglect the driver's behavior and environmental factors. However, considering the driver's characteristics and the driving environment is essential for practical application. Hence, as a preliminary study toward a new car-following model for advanced vehicles, we propose calibrating the degree of a driver's discomfort while driving. The degree of driver discomfort (Measure of Alarmness, MOA) is measured from the relationship between the following and preceding vehicles, environmental factors, and driver characteristics in a ubiquitous traffic environment. We surveyed drivers to obtain a general and objective measurement of MOA, and a fuzzy logic model for estimating MOA was constructed based on the survey results. We verified the suitability of the fuzzy logic model by computing MOA for several scenarios, and quantified the driver's discomfort in car following with respect to the factors that affect drivers. According to this study, a car-following model that incorporates the driver's MOA will support the practical application of advanced vehicles more effectively than existing models. Finally, we expect the measurement of a driver's MOA to be useful for evaluating driver safety and comfort while driving.
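
The abstract does not give the membership functions or rule base, so the sketch below is a minimal, hypothetical fuzzy inference over gap and closing speed that produces a MOA score. All membership ranges, rules, and output levels are illustrative assumptions, not the model calibrated from the paper's driver survey.

```python
# Hypothetical sketch of a fuzzy-logic Measure-of-Alarmness (MOA) estimator.
# The membership functions (simple ramps) and the rule base are illustrative
# assumptions, not the calibrated model from the paper's driver survey.


def clamp01(x: float) -> float:
    return min(1.0, max(0.0, x))


def moa(gap_m: float, closing_speed_mps: float) -> float:
    """Return a MOA score in [0, 1]: 0 = comfortable, 1 = highly alarmed."""
    # Fuzzify inputs with assumed linguistic terms.
    gap_small = clamp01(1.0 - gap_m / 30.0)            # fully "small" at 0 m
    gap_large = clamp01((gap_m - 10.0) / 50.0)          # fully "large" at 60 m
    closing_fast = clamp01(closing_speed_mps / 10.0)    # fully "fast" at 10 m/s
    closing_slow = 1.0 - closing_fast

    # Assumed rule base: (firing strength via min-AND, output alarmness level).
    rules = [
        (min(gap_small, closing_fast), 1.0),  # small gap AND closing fast -> high MOA
        (min(gap_small, closing_slow), 0.6),  # small gap AND closing slow -> medium MOA
        (min(gap_large, closing_fast), 0.4),  # large gap AND closing fast -> mild MOA
        (min(gap_large, closing_slow), 0.1),  # large gap AND closing slow -> low MOA
    ]
    # Weighted-average defuzzification.
    num = sum(strength * level for strength, level in rules)
    den = sum(strength for strength, _ in rules) or 1.0
    return num / den


if __name__ == "__main__":
    print(round(moa(gap_m=8.0, closing_speed_mps=6.0), 2))   # 0.84: tight gap, closing fast
    print(round(moa(gap_m=50.0, closing_speed_mps=1.0), 2))  # 0.13: comfortable following
```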

Estimation of Harbor Operating Ratio Based on Moored Ship Motion (계류선박의 동요에 기초한 항만가동률 산정)

  • Kwak, Moonsu; Chung, Jaewan; Ahn, Sungphil; Pyun, Chongkun
    • KSCE Journal of Civil and Environmental Engineering Research, v.26 no.6B, pp.651-660, 2006
  • Although a harbor may be constructed with harbor calmness in mind, satisfying the design standard, it is frequently reported that the motion of moored ships disrupts cargo handling. This is because the current design standard considers only the wave height when deciding whether cargo handling is possible, so a new method for estimating the operating ratio for calmness based on the motion of moored ships is needed. In this research, a computational method for the harbor operating ratio was put forward by relating the allowable motion for cargo handling to the motion of a ship moored at a wharf, computed with a moored-ship motion analysis model. The new estimation method was applied to Onsan harbor and compared with the current estimation method, and the difference between the two methods was shown. The harbor operating ratio obtained by the new method dropped by 2~11% for the ENE and NE directions compared with the operating ratio based on the current design standard. Accordingly, when a harbor structure layout is designed, an operating-ratio analysis based on wave height and one that considers the motion of moored ships should be carried out side by side in the harbor design process.
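
The paper's exact computation is not given in the abstract; the sketch below is a minimal, hypothetical way to estimate an operating ratio as the fraction of sea states in which every computed moored-ship motion component stays within an assumed allowable limit for cargo handling. The thresholds and sample records are illustrative, not the paper's values.

```python
# Hypothetical sketch of a moored-ship-motion based operating ratio: the ratio
# is the fraction of sea states (or time windows) in which every motion
# component stays within an assumed allowable limit for cargo handling.
# All limits and sample records below are illustrative only.

ALLOWABLE = {"surge_m": 1.0, "sway_m": 0.75, "heave_m": 0.5, "yaw_deg": 1.0}


def operating_ratio(records):
    """records: list of dicts of computed moored-ship motion amplitudes,
    one per sea state. Returns the operating ratio in percent."""
    workable = sum(
        all(rec[key] <= limit for key, limit in ALLOWABLE.items())
        for rec in records
    )
    return 100.0 * workable / len(records)


if __name__ == "__main__":
    sea_states = [
        {"surge_m": 0.4, "sway_m": 0.3, "heave_m": 0.2, "yaw_deg": 0.5},
        {"surge_m": 1.2, "sway_m": 0.6, "heave_m": 0.3, "yaw_deg": 0.8},  # surge exceeds limit
        {"surge_m": 0.8, "sway_m": 0.7, "heave_m": 0.4, "yaw_deg": 0.9},
        {"surge_m": 0.9, "sway_m": 0.9, "heave_m": 0.6, "yaw_deg": 1.4},  # several exceedances
    ]
    print(f"{operating_ratio(sea_states):.1f}%")  # 50.0%
```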

The Estimation of Soil Moisture Index by SWAT Model and Drought Monitoring (SWAT 모형을 이용한 토양수분지수 산정과 가뭄감시)

  • Hwang, Tae Ha; Kim, Byung Sik; Kim, Hung Soo; Seoh, Byung Ha
    • KSCE Journal of Civil and Environmental Engineering Research, v.26 no.4B, pp.345-354, 2006
  • Drought, in contrast to flood, causes long-term damage, economic losses in the affected region, and ecological and environmental disruption. Drought is one of the major natural disasters and imposes a painful hardship on human beings, so efforts have been made to quantify droughts in order to reduce drought damage, and drought indices have been developed for drought monitoring and management. The Palmer drought severity index (PDSI) is widely used for drought monitoring, but, as many researchers have pointed out, it has disadvantages and limitations in that it is estimated from climate conditions alone. This study therefore uses the SWAT model, which can consider soil conditions such as soil type and land use in addition to climate conditions. We estimate soil water (SW) and a soil moisture index (SMI) with SWAT, a long-term runoff simulation model. We apply the SWAT model to the Soyang dam watershed for SMI estimation and compare SMI with PDSI for drought analysis. Specifically, we calibrate and validate the SWAT model against daily inflows at the Soyang dam site and estimate long-term daily soil water. The estimated soil water is used to compute SMI based on the soil moisture deficit, and SMI is compared with PDSI. As a result, we obtained a coefficient of determination of 0.651, which indicates that the SWAT model is applicable for drought monitoring, and drought can be monitored at higher resolution by using GIS. We therefore suggest that SMI based on the soil moisture deficit can be used for drought monitoring and management.
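
The abstract does not give the exact SMI formula. One commonly used deficit-based scaling maps soil water between the wilting point and field capacity onto a fixed index range; the sketch below uses that assumed form, and the range, parameters, and sample values are illustrative rather than necessarily the definition used in the paper.

```python
# Hypothetical sketch of a deficit-based soil moisture index (SMI). The scaling
# below maps daily soil water between the wilting point and field capacity onto
# the range [-5, 5]; the formula, parameters, and sample values are assumptions
# for illustration, not necessarily the definition used in the paper.

FIELD_CAPACITY_MM = 250.0   # assumed plant-available soil water at field capacity
WILTING_POINT_MM = 80.0     # assumed soil water at the wilting point


def soil_moisture_index(soil_water_mm: float) -> float:
    """Scale soil water to an SMI in [-5, 5]; -5 = driest, +5 = wettest."""
    frac = (soil_water_mm - WILTING_POINT_MM) / (FIELD_CAPACITY_MM - WILTING_POINT_MM)
    frac = min(max(frac, 0.0), 1.0)          # clamp to the physical range
    return -5.0 + 10.0 * frac


if __name__ == "__main__":
    for sw in (90.0, 165.0, 240.0):          # dry, average, wet soil water (mm)
        print(sw, "->", round(soil_moisture_index(sw), 2))
```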

Automated Satellite Image Co-Registration using Pre-Qualified Area Matching and Studentized Outlier Detection (사전검수영역기반정합법과 't-분포 과대오차검출법'을 이용한 위성영상의 '자동 영상좌표 상호등록')

  • Kim, Jong Hong; Heo, Joon; Sohn, Hong Gyoo
    • KSCE Journal of Civil and Environmental Engineering Research, v.26 no.4D, pp.687-693, 2006
  • Image co-registration is the process of overlaying two images of the same scene, where one serves as the reference image and the other is geometrically transformed to match it. In order to improve the efficiency and effectiveness of the co-registration approach, the authors propose a pre-qualified area matching algorithm composed of feature extraction with the Canny operator and area matching based on the cross-correlation coefficient. For refining the matching points, outlier detection using studentized residuals is applied, iteratively removing outliers beyond the level of three standard deviations. Through the pre-qualification and refinement processes, the computation time was significantly reduced and the registration accuracy was enhanced. A prototype of the proposed algorithm was implemented, and a performance test on three Landsat images of Korea showed that: (1) the average RMSE of the approach was 0.435 pixel; (2) the average number of matching points was over 25,573; (3) the average processing time was 4.2 minutes per image on a regular workstation equipped with a 3 GHz Intel Pentium 4 CPU and 1 GB of RAM. The proposed approach achieved robustness, full automation, and time efficiency.
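
A minimal, hypothetical sketch of the two refinement ideas named above: area matching with the normalized cross-correlation coefficient, and iterative rejection of tie points whose residuals against a fitted transform exceed three standard deviations. The affine model, window handling, synthetic data, and the simplified standardization of residuals (in place of a formal studentized residual) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of (1) area matching via the normalized cross-correlation
# coefficient and (2) iterative rejection of tie points whose standardized
# residuals against a fitted 2D affine transform exceed 3. Illustrative only.


def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation coefficient of two equal-size windows."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2D affine transform mapping src -> dst (both Nx2)."""
    A = np.hstack([src, np.ones((len(src), 1))])      # rows [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 parameter matrix
    return coeffs


def refine_matches(src: np.ndarray, dst: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Iteratively drop the worst tie point while its standardized residual > k."""
    keep = np.ones(len(src), dtype=bool)
    while keep.sum() > 3:
        coeffs = fit_affine(src[keep], dst[keep])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ coeffs
        resid = np.linalg.norm(dst - pred, axis=1)
        worst = int(np.argmax(np.where(keep, resid, -np.inf)))
        scale = resid[keep].std(ddof=1) or 1.0
        if (resid[worst] - resid[keep].mean()) / scale <= k:
            break                                      # no remaining outliers
        keep[worst] = False
    return keep


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 1000, size=(30, 2))           # candidate feature locations
    dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, size=src.shape)
    dst[7] += 40.0                                     # inject one gross mismatch
    kept = refine_matches(src, dst)
    print("rejected indices:", np.where(~kept)[0])     # includes the injected outlier (index 7)
```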

An Improved Reliability-Based Design Optimization using Moving Least Squares Approximation (이동최소자승근사법을 이용한 개선된 신뢰도 기반 최적설계)

  • Kang, Soo-Chang; Koh, Hyun-Moo
    • KSCE Journal of Civil and Environmental Engineering Research, v.29 no.1A, pp.45-52, 2009
  • In conventional structural design, deterministic optimization that satisfies codified constraints is performed to ensure safety and maximize economic efficiency. However, uncertainties are inevitable due to the stochastic nature of structural materials and applied loads, so deterministic optimization that ignores these uncertainties can lead to unreliable designs. Recently, much research has addressed reliability-based design optimization (RBDO), which takes both reliability and optimization into consideration. RBDO involves the evaluation of probabilistic constraints, which can be estimated using the reliability index approach (RIA) or the performance measure approach (PMA); PMA is generally known to be more stable and efficient than RIA. Despite the significant advancement of PMA, RBDO still requires a large computation time for large-scale applications. In this paper, a new RBDO method is presented to achieve a more stable and efficient algorithm. The idea of the new method is to integrate a response surface method (RSM) with PMA, using the moving least squares (MLS) method to approximate the limit state equation. Through a mathematical example and a ten-bar truss problem, the proposed method shows better convergence and efficiency than other approaches.
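
A minimal sketch of a moving least squares approximation of the kind used here for the limit state function: a low-order polynomial is fitted by weighted least squares around each query point, with weights that decay with distance. The linear basis, Gaussian weight function, and sample response below are illustrative choices, not the paper's settings.

```python
import numpy as np

# Minimal sketch of a moving least squares (MLS) approximation of a scalar
# response (e.g., a limit state function) from scattered samples. The linear
# basis, Gaussian weight, and the stand-in response function are illustrative.


def mls_predict(x_query, x_samples, y_samples, radius: float = 1.0) -> float:
    """Weighted least-squares fit of a linear basis [1, x1, x2] around x_query."""
    d = np.linalg.norm(x_samples - x_query, axis=1)
    w = np.exp(-(d / radius) ** 2)                              # weights decay with distance
    B = np.hstack([np.ones((len(x_samples), 1)), x_samples])    # basis evaluated at samples
    W = np.diag(w)
    coeff = np.linalg.solve(B.T @ W @ B, B.T @ W @ y_samples)   # weighted normal equations
    return float(np.array([1.0, *x_query]) @ coeff)             # basis at the query point


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-2, 2, size=(60, 2))              # design-of-experiments samples
    g = X[:, 0] ** 2 + np.sin(X[:, 1])                # stand-in "limit state" response
    x0 = np.array([0.5, 0.5])
    print("MLS estimate:", round(mls_predict(x0, X, g, radius=0.8), 3))
    print("true value  :", round(0.5 ** 2 + np.sin(0.5), 3))
```

Because the fit is recomputed around each query point, the approximation stays local and smooth, which is what makes MLS attractive as a response surface inside a PMA search loop.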

Histological Validation of Cardiovascular Magnetic Resonance T1 Mapping for Assessing the Evolution of Myocardial Injury in Myocardial Infarction: An Experimental Study

  • Lu Zhang; Zhi-gang Yang; Huayan Xu; Meng-xi Yang; Rong Xu; Lin Chen; Ran Sun; Tianyu Miao; Jichun Zhao; Xiaoyue Zhou; Chuan Fu; Yingkun Guo
    • Korean Journal of Radiology, v.21 no.12, pp.1294-1304, 2020
  • Objective: To determine whether T1 mapping can monitor the dynamic changes of injury in myocardial infarction (MI) and be validated histologically. Materials and Methods: MI was induced in 22 pigs by ligating the left anterior descending artery, and the animals underwent serial cardiovascular magnetic resonance examinations with modified Look-Locker inversion T1 mapping and extracellular volume (ECV) computation in the acute (within 24 hours, n = 22), subacute (7 days, n = 13), and chronic (3 months, n = 7) phases of MI. Masson's trichrome staining was performed for histological ECV calculation. Myocardial native T1 and ECV were obtained by region-of-interest measurement in infarcted, peri-infarct, and remote myocardium. Results: Native T1 and ECV in peri-infarct myocardium differed from remote myocardium in the acute (1181 ± 62 ms vs. 1113 ± 64 ms, p = 0.002; 24 ± 4% vs. 19 ± 4%, p = 0.031) and subacute phases (1264 ± 41 ms vs. 1171 ± 56 ms, p < 0.001; 27 ± 4% vs. 22 ± 2%, p = 0.009) but not in the chronic phase (1157 ± 57 ms vs. 1120 ± 54 ms, p = 0.934; 23 ± 2% vs. 20 ± 1%, p = 0.109). From acute to chronic MI, infarcted native T1 peaked in the subacute phase (1275 ± 63 ms vs. 1637 ± 123 ms vs. 1471 ± 98 ms, p < 0.001), while ECV progressively increased with time (35 ± 7% vs. 46 ± 6% vs. 52 ± 4%, p < 0.001). Native T1 correlated well with histological findings (R2 = 0.65 to 0.89, all p < 0.001), as did ECV (R2 = 0.73 to 0.94, all p < 0.001). Conclusion: T1 mapping allows the quantitative assessment of injury in MI and the noninvasive monitoring of tissue injury evolution, and it correlates well with histological findings.
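
The abstract mentions ECV computation without giving the formula. The widely used cardiovascular MR expression derives ECV from the change in relaxation rate (1/T1) of myocardium relative to blood, scaled by (1 − hematocrit); the sketch below uses that standard form with illustrative numbers and is not taken from the paper itself.

```python
# Standard cardiovascular MR expression for extracellular volume (ECV):
# ECV = (1 - hematocrit) * (delta R1 of myocardium) / (delta R1 of blood),
# where R1 = 1/T1 and "delta" is post-contrast minus native. The numbers below
# are illustrative only; the measured values are those reported in the abstract.

def ecv(t1_myo_native, t1_myo_post, t1_blood_native, t1_blood_post, hematocrit):
    """Return ECV as a fraction (multiply by 100 for percent). T1 values in ms."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_native
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_native
    return (1.0 - hematocrit) * d_r1_myo / d_r1_blood


if __name__ == "__main__":
    value = ecv(t1_myo_native=1180, t1_myo_post=520,
                t1_blood_native=1650, t1_blood_post=400, hematocrit=0.42)
    print(f"ECV = {100 * value:.1f}%")
```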