• Title/Abstract/Keyword: $A^*$ algorithm

Search Results: 54,210 (Processing Time: 0.08 seconds)

A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.191-204
    • /
    • 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio of Software (SW) developers in South Korea was 25% in the 2012 fiscal year. Moreover, the unfilled recruitment ratio of highly qualified SW developers reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find young SW developers at the beginning level. However, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming a SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. Therefore, this study surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014, targeting 130 SW developers working in the IT industry in South Korea. We gathered the demographic information and characteristics of the respondents, the work environment of the SW industry, and the social position of SW developers. Afterward, a regression analysis and a decision tree method were performed to analyze the data. These two methods are widely used data mining techniques, which have explanatory power and are mutually complementary. We first performed a linear regression to find the important factors associated with the job continuity intention of SW developers.
The results showed that the 'expected age' to work as a SW developer was the most significant factor associated with the job continuity intention. We suppose that the major cause of this phenomenon is a structural problem of the IT industry in South Korea, which requires SW developers to move from development to management as they are promoted. The 'motivation' to become a SW developer and the 'personality (introverted tendency)' of a SW developer were also highly important factors associated with the job continuity intention. Next, the decision tree method was performed to extract the characteristics of highly motivated developers and less motivated ones. We used the well-known C4.5 algorithm for the decision tree analysis. The results showed that 'motivation', 'personality', and 'expected age' were again important factors influencing the job continuity intention, similar to the results of the regression analysis. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity: a person with a high ability to learn new technology tends to work as a SW developer for a longer period of time. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors influencing job continuity intentions. On the other hand, 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method. In this research, we examined the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts.
It can also help build SW developer fostering policies and address the problem of unfilled SW developer recruitment in South Korea.
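The two-step analysis the abstract describes (a decision tree over survey factors, here sketched after the regression step) can be illustrated in Python. Note that scikit-learn's `DecisionTreeClassifier` implements CART rather than C4.5; the entropy criterion is used below as an information-gain proxy, and all feature names and data are hypothetical, not the study's.

```python
# Hedged sketch of a decision-tree factor analysis in the spirit of the
# study's C4.5 step. scikit-learn provides CART, not C4.5; entropy is used
# as an information-gain stand-in. Features and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 130  # survey size reported in the abstract
# Hypothetical features: motivation, introversion, expected working age,
# ability to learn new technology (all scaled to [0, 1])
X = rng.random((n, 4))
# Hypothetical label: high vs. low job-continuity intention, driven mostly
# by "motivation" and "expected age" to mimic the reported findings
y = ((0.6 * X[:, 0] + 0.4 * X[:, 2]) > 0.5).astype(int)

tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X, y)
importances = tree.feature_importances_  # relative influence of each factor
```

Inspecting `importances` (or exporting the tree's rules) plays the role of the decision rules the study extracts.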

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is to achieve ultra-high aspect ratio contact (UHARC) profiles without anomalous behaviors such as sidewall bowing and twisting. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have been commonly used with numerous additives to obtain ideal etch profiles. However, they still suffer from formidable challenges, such as tight limits on sidewall bowing and controlling the randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray photoelectron spectroscopy. To measure plasma parameters, a self-cleaned RF Langmuir probe was used for the polymer-deposition environment on the probe tip, double-checked by the cutoff probe, which is known to be a precise diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance-potential methods using the QMS signal. Based on these experimental data, we propose a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer.
The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these studies of plasma surface reactions, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required to treat these ballistic transports realistically. To address this issue, the related computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time was improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model. Realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.
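As a rough illustration of why Monte Carlo ballistic transport dominates the computational cost in UHARC simulation, the following minimal sketch (not the paper's GPU code) estimates how many ions reach the bottom of a cylindrical hole without sidewall collisions; the geometry, Gaussian angular spread, and particle counts are invented for illustration.

```python
# Minimal Monte Carlo sketch of ballistic ion transport into a cylindrical
# contact hole. Deep holes intercept far fewer off-axis ions, which is why
# realistic UHARC transport needs very many particles (hence GPU computing).
import math
import random

def fraction_reaching_bottom(aspect_ratio, sigma_deg, n_particles=20000, seed=1):
    """Fraction of ions reaching the hole bottom without hitting a sidewall,
    for a Gaussian off-normal angular distribution with std sigma_deg."""
    random.seed(seed)
    hits = 0
    for _ in range(n_particles):
        theta = random.gauss(0.0, math.radians(sigma_deg))  # off-normal angle
        # An ion entering at the hole center reaches the bottom if its
        # lateral drift over the full depth stays inside the radius
        # (lengths measured in units of the hole diameter).
        lateral = abs(math.tan(theta)) * aspect_ratio
        if lateral <= 0.5:
            hits += 1
    return hits / n_particles

shallow = fraction_reaching_bottom(aspect_ratio=5, sigma_deg=3)
deep = fraction_reaching_bottom(aspect_ratio=50, sigma_deg=3)
```

A real simulator would also track sidewall reflection, charging, and the passivation chemistry; this sketch only shows the geometric shadowing effect.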


Prediction of Isothermal and Reacting Flows in Widely-Spaced Coaxial Jet, Diffusion-Flame Combustor (큰 지름비를 가지는 동축제트 확산화염 연소기내의 등온 및 연소 유동장의 예측)

  • O, Gun-Seop;An, Guk-Yeong;Kim, Yong-Mo;Lee, Chang-Sik
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.20 no.7
    • /
    • pp.2386-2396
    • /
    • 1996
  • A numerical simulation has been performed for isothermal and reacting flows in an axisymmetric, bluff-body research combustor. The present formulation is based on the density-weighted averaged Navier-Stokes equations together with a k-epsilon turbulence model and a modified eddy-breakup combustion model. The PISO algorithm is employed for the solution of the Navier-Stokes system. Comparisons between measurements and predictions are made for centerline axial velocities, the location of stagnation points, the strength of the recirculation zone, and temperature profiles. Even though the numerical simulation gives acceptable agreement with experimental data in many respects, the present model is deficient in predicting the recovery rate of the central near-wake region, non-isotropic turbulence effects, and the variation of the turbulent Schmidt number. Several possible explanations for these discrepancies are discussed.

A Review of Multivariate Analysis Studies Applied for Plant Morphology in Korea (국내 식물 형태 연구에 사용된 다변량분석 논문에 대한 재고)

  • Chang, Kae Sun;Oh, Hana;Kim, Hui;Lee, Heung Soo;Chang, Chin-Sung
    • Journal of Korean Society of Forest Science
    • /
    • v.98 no.3
    • /
    • pp.215-224
    • /
    • 2009
  • A review is given of the role of traditional morphometrics in plant morphological studies, using 54 studies published from 1997 to 2008 in three major journals and others in Korea, such as the Journal of Korean Forestry Society, Korean Journal of Plant Taxonomy, Korean Journal of Breeding, Korean Journal of Apiculture, Journal of Life Science, and Korean Journal of Plant Resources. The two most commonly used data analysis techniques, cluster analysis (CA) and principal components analysis (PCA), along with other statistical tests, are discussed. A common problem with PCA is its underlying assumptions, such as random sampling and a multivariate normal distribution of the data. The procedure is intended mainly for continuous data and is not efficient for data that are not well summarized by variances or covariances. Likewise, CA is most appropriate for categorical rather than continuous data. Also, CA produces clusters whether or not natural groupings exist, and the results depend on both the similarity measure chosen and the algorithm used for clustering. Additional problems with PCA and CA arise when both qualitative and quantitative data are used with a limited number of variables and/or too few samples. Some of these problems may be avoided if a sufficient number of variables (at least 20) and samples (at least 40-50) are considered for morphometric analyses, but we do not think the methods are almighty tools for data analysts. Instead, we believe that reasonable applications, combined with a focus on the objectives and limitations of each procedure, would be a step forward.
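The two techniques the review discusses can be sketched with standard libraries. The synthetic morphometric matrix below (50 specimens by 20 continuous traits, matching the minimum sizes the review recommends) is purely illustrative; it deliberately contains two real groups, whereas, as the review warns, CA would happily return clusters even if none existed.

```python
# Sketch of PCA ordination and hierarchical cluster analysis on a synthetic
# morphometric data set (50 specimens x 20 continuous traits).
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
# Two hypothetical specimen groups separated by a mean shift of 3 units
X = np.vstack([rng.normal(0, 1, (25, 20)), rng.normal(3, 1, (25, 20))])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)  # 2D ordination of the specimens

# Ward-linkage cluster analysis, cut to exactly two clusters. Note that CA
# would return two clusters here whether or not real groups existed.
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
```

Checking `pca.explained_variance_ratio_` and comparing `labels` against an independent grouping is one way to probe the assumptions the review flags.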

Improvement of 2-pass DInSAR-based DEM Generation Method from TanDEM-X bistatic SAR Images (TanDEM-X bistatic SAR 영상의 2-pass 위성영상레이더 차분간섭기법 기반 수치표고모델 생성 방법 개선)

  • Chae, Sung-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.847-860
    • /
    • 2020
  • The 2-pass DInSAR (Differential Interferometric SAR) processing steps for DEM generation consist of co-registration of the SAR image pair, interferogram generation, phase unwrapping, calculation of DEM errors, geocoding, etc. The procedure requires complicated steps, and the accuracy of data processing at each step affects the performance of the finally generated DEM. In this study, an improved method was developed for enhancing the performance of DEM generation based on the 2-pass DInSAR technique applied to TanDEM-X bistatic SAR images. The developed method can significantly reduce both the DEM error in the unwrapped phase image and the error that may occur during the geocoding step. The performance of the developed algorithm was analyzed by comparing the vertical accuracy (root mean square error, RMSE) of the existing method and the newly proposed method, using ground control points (GCPs) generated from a GPS survey. The vertical accuracy of the DInSAR-based DEM generated without correction for the unwrapped phase error and geocoding error is 39.617 m, whereas the vertical accuracy of the DEM generated through the proposed method is 2.346 m, confirming that DEM accuracy is improved by the proposed correction. Through the proposed 2-pass DInSAR-based DEM generation method, the SRTM DEM error observed by DInSAR was compensated for in the SRTM 30 m DEM (vertical accuracy 5.567 m) used as a reference. In this way, it was possible to generate a DEM with about 5 times better spatial resolution and about 2.4 times better vertical accuracy. In addition, the spatial resolution of the DEM generated through the proposed method was matched with the SRTM 30 m DEM and the TanDEM-X 90 m DEM, and the vertical accuracy was compared.
As a result, the vertical accuracy was improved by about 1.7 and 1.6 times, respectively, confirming that more accurate DEM generation is possible with the proposed method. If the method derived in this study is used to continuously update DEMs for regions with frequent morphological changes, DEMs can be updated effectively, in a short time and at low cost.
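The vertical-accuracy metric used throughout the comparison above is the RMSE between DEM heights and GPS-surveyed ground control points. A minimal sketch, with made-up heights rather than the study's data:

```python
# RMSE-based vertical accuracy check of a DEM against GPS ground control
# points, mirroring the evaluation described in the abstract. All heights
# below are illustrative.
import math

def vertical_rmse(dem_heights, gcp_heights):
    """Root mean square error between DEM and GCP heights (metres)."""
    residuals = [d - g for d, g in zip(dem_heights, gcp_heights)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical GCP heights and the DEM heights sampled at those points,
# before and after the proposed phase/geocoding corrections
gcp = [102.0, 134.5, 98.2, 120.0]
uncorrected = [140.1, 170.3, 135.9, 158.8]
corrected = [104.1, 132.3, 96.4, 122.2]

rmse_before = vertical_rmse(uncorrected, gcp)
rmse_after = vertical_rmse(corrected, gcp)
```

The study reports exactly this kind of before/after drop (39.617 m to 2.346 m) on its real GCP set.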

Functional MR Imaging of Cerebral Motor Cortex: Comparison between Conventional Gradient Echo and EPI Techniques (뇌 운동피질의 기능적 영상: 고식적 Gradient Echo기법과 EPI기법간의 비교)

  • Song, In-Chan
    • Investigative Magnetic Resonance Imaging
    • /
    • v.1 no.1
    • /
    • pp.109-113
    • /
    • 1997
  • Purpose: To evaluate the differences in functional imaging patterns between conventional spoiled gradient echo (SPGR) and echo planar imaging (EPI) methods in cerebral motor cortex activation. Materials and Methods: Functional MR imaging of cerebral motor cortex activation was examined on a 1.5 T MR unit with SPGR (TR/TE/flip angle = 50 ms/40 ms/$30^{\circ}$, FOV = 300 mm, matrix $size=256{\times}256$, slice thickness = 5 mm) and interleaved single-shot gradient-echo EPI (TR/TE/flip angle = 3000 ms/40 ms/$90^{\circ}$, FOV = 300 mm, matrix $size=128{\times}128$, slice thickness = 5 mm) techniques in five healthy male volunteers. A total of 160 images in one slice and 960 images in 6 slices were obtained with SPGR and EPI, respectively. Right finger movement was performed with a paradigm of 8 activation/8 rest periods. Cross-correlation was used as the statistical mapping algorithm. We evaluated differences in the time series and in the signal intensity changes between the rest and activation periods obtained with the two techniques. The locations and areas of the activation sites were also compared between the two techniques. Results: The activation sites in the motor cortex were accurately localized with both methods. In the signal intensity changes between the rest and activation periods at the activation regions, no significant differences were found between EPI and SPGR. The signal-to-noise ratio (SNR) of the time series data was two-fold higher in EPI than in SPGR. Also, a larger number of pixels was distributed over small p-values at the activation sites in EPI. Conclusions: Good-quality functional MR imaging of cerebral motor cortex activation could be obtained with both SPGR and EPI. However, EPI is preferable because, owing to its higher sensitivity, it provides more precise information on the hemodynamics related to neural activity than SPGR.
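The cross-correlation mapping the abstract names correlates each voxel's time series with the block-design reference waveform and flags voxels exceeding a threshold as active. A sketch on synthetic data (signal amplitude and noise level are invented; 160 volumes matches the SPGR acquisition):

```python
# Cross-correlation activation mapping on synthetic fMRI time series.
import numpy as np

n_scans = 160  # images per slice, as in the SPGR acquisition
# Block paradigm: 8 rest volumes then 8 activation volumes, repeated
reference = np.tile([0] * 8 + [1] * 8, n_scans // 16).astype(float)

rng = np.random.default_rng(7)
active_voxel = 1.5 * reference + rng.normal(0, 1.0, n_scans)  # task-driven
inactive_voxel = rng.normal(0, 1.0, n_scans)                  # pure noise

def cross_correlation(ts, ref):
    """Pearson correlation between a voxel time series and the reference."""
    return float(np.corrcoef(ts, ref)[0, 1])

r_active = cross_correlation(active_voxel, reference)
r_inactive = cross_correlation(inactive_voxel, reference)
# Thresholding r (or its p-value) over all voxels yields the activation map.
```

A higher time-series SNR, as the study found for EPI, directly widens the gap between `r_active` and `r_inactive`.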


Study of Crustal Structure in North Korea Using 3D Velocity Tomography (3차원 속도 토모그래피를 이용한 북한지역의 지각구조 연구)

  • So Gu Kim;Jong Woo Shin
    • The Journal of Engineering Geology
    • /
    • v.13 no.3
    • /
    • pp.293-308
    • /
    • 2003
  • New results on the crustal structure down to a depth of 60 km beneath North Korea were obtained using the seismic tomography method. About 1013 P- and S-wave travel times from local earthquakes recorded by Korean stations and stations in the vicinity were used. All earthquakes were relocated on the basis of an algorithm proposed in this study. Parameterization of the velocity structure is realized with a set of nodes distributed in the study volume according to the ray density; 120 nodes located at four depth levels were used to obtain the resulting P- and S-wave velocity structures. As a result, the P- and S-wave velocity anomalies of the Rangnim Massif at a depth of 8 km are found to be high and low, respectively, whereas those of the Pyongnam Basin are low down to 24 km. This indicates that the Rangnim Massif contains Archean to early Lower Proterozoic massif foldings with many faults and fractures, which may be saturated with underground water and/or hot springs. On the other hand, the Pyongyang-Sariwon area in the Pyongnam Basin is an intraplatform depression filled with sediments of Upper Proterozoic, Silurian, Upper Paleozoic, and Lower Mesozoic origin. In particular, high P- and S-wave velocity anomalies are observed at depths of 8, 16, and 24 km beneath Mt. Backdu, indicating that they may be the shallow conduits of solidified magma bodies, while the low P- and S-wave velocity anomalies at a depth of 38 km must be related to a magma chamber of low-velocity bodies with partial melting. We also found the Moho discontinuity beneath the Origin Basin, including Sariwon, to be about 55 km deep, whereas that beneath Mt. Backdu is about 38 km. The high ratio of P-wave to S-wave velocity at the Moho suggests that there must be a partially melting body near the crust-mantle boundary. Consequently, we may well consider Mt.
Backdu a dormant volcano holding an intermediate magma chamber near the Moho discontinuity. This study also yielded the interesting and important finding that materials with very high P- and S-wave velocity anomalies exist at a depth of about 40 km near the Mt. Myohyang area, at the edge of the Rangnim Massif shield.

A review of Deepwater Horizon Oil Budget Calculator for its Application to Korea (딥워터 호라이즌호 유출유 수지분석 모델의 국내 적용성 검토)

  • Kim, Choong-Ki;Oh, Jeong-Hwan;Kang, Seong-Gil
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.19 no.4
    • /
    • pp.322-331
    • /
    • 2016
  • An oil budget calculator identifies the removal pathways of spilled oil, both natural and response-driven, and estimates the remaining oil that requires response activities. An oil budget calculator was newly developed as a response tool for the Deepwater Horizon oil spill in the Gulf of Mexico in 2010 to inform cleanup decisions for the Incident Command System; it was also successfully used to communicate oil spill response activities to the media and the general public. This study analyzed the theoretical background of the oil budget calculator and explored its future application to Korea. The oil budget calculations for four catastrophic marine pollution incidents indicate that 3~8% of spilled oil was removed mechanically by skimmers, 1~5% by in-situ burning, 4.8~16% by chemical dispersion due to dispersant operations, and 37~56% by weathering processes such as evaporation, dissolution, and natural dispersion. The results show that in-situ burning and chemical dispersion remove spilled oil more effectively than mechanical removal by skimming, and that natural weathering processes are also very effective at removing spilled oil. To apply the oil budget calculator in Korea, its parameters need to be optimized for the seasonal characteristics of the marine environment, the characteristics of the spilled oil, and the available response technologies. A new algorithm also needs to be developed to estimate the oil budget attributable to shoreline cleanup activities. An oil budget calculator optimized for Korea can play a critical role in informing decisions on oil spill response activities and in communicating spill prevention and response activities to the media and the general public.
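At its core, an oil budget calculator is a mass balance: each removal pathway takes a fraction of the spilled volume, and the remainder is the oil still requiring response. The fractions below are illustrative values chosen within the ranges the review reports for the four incidents, not the calculator's actual parameters.

```python
# Mass-balance sketch of an oil budget. Pathway fractions are illustrative,
# drawn from within the ranges the review cites (skimming 3~8%, burning
# 1~5%, chemical dispersion 4.8~16%, weathering 37~56%).
def oil_budget(spilled, skimmed=0.05, burned=0.03, chem_dispersed=0.10,
               weathered=0.45):
    """Return (volume removed per pathway, remaining volume) for a spill."""
    removed = {
        "mechanical (skimmers)": spilled * skimmed,
        "in-situ burning": spilled * burned,
        "chemical dispersion": spilled * chem_dispersed,
        "natural weathering": spilled * weathered,
    }
    remaining = spilled - sum(removed.values())
    return removed, remaining

# Hypothetical 100,000-barrel spill
removed, remaining = oil_budget(100_000.0)
```

A Korea-specific calculator would replace the constant fractions with functions of season, oil type, and response capacity, and add the shoreline-cleanup term the review calls for.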

Operational Ship Monitoring Based on Multi-platforms (Satellite, UAV, HF Radar, AIS) (다중 플랫폼(위성, 무인기, AIS, HF 레이더)에 기반한 시나리오별 선박탐지 모니터링)

  • Kim, Sang-Wan;Kim, Donghan;Lee, Yoon-Kyung;Lee, Impyeong;Lee, Sangho;Kim, Junghoon;Kim, Keunyong;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_2
    • /
    • pp.379-399
    • /
    • 2020
  • The detection of illegal ships is one of the key factors in building a marine surveillance system. Effective marine surveillance requires the means for continuous monitoring over a wide area. In this study, the possibility of ship detection monitoring based on the integration of satellite SAR, HF radar, UAV, and AIS was investigated. Considering the temporal and spatial resolution characteristics of each platform, the ship monitoring scenario consists of a regular surveillance system using HF radar and AIS data, and an event monitoring system using satellites and UAVs. The regular surveillance system still has limitations in detecting small ships and in accuracy, owing to the low spatial resolution of HF radar data. However, the event monitoring system using satellite SAR data effectively detects illegal ships using AIS data, and the ship speed and heading estimated from SAR images, or ship tracking information from HF radar data, can serve as the main input for the transition to UAV monitoring. To validate the monitoring scenario, a comprehensive field experiment was conducted from June 25 to June 26, 2019, on the west side of Hongwon Port in Seocheon. KOMPSAT-5 SAR images, UAV data, HF radar data, and AIS data were successfully collected and analyzed by applying each developed algorithm. The developed system will be the basis for the regular and event ship monitoring scenarios, as well as for the visualization of the data and analysis results collected from the multiple platforms.

The Evaluation of Meteorological Inputs retrieved from MODIS for Estimation of Gross Primary Productivity in the US Corn Belt Region (MODIS 위성 영상 기반의 일차생산성 알고리즘 입력 기상 자료의 신뢰도 평가: 미국 Corn Belt 지역을 중심으로)

  • Lee, Ji-Hye;Kang, Sin-Kyu;Jang, Keun-Chang;Ko, Jong-Han;Hong, Suk-Young
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.4
    • /
    • pp.481-494
    • /
    • 2011
  • Investigation of the $CO_2$ exchange between the biosphere and the atmosphere at regional, continental, and global scales can be approached by combining remote sensing with carbon cycle processes to estimate vegetation productivity. The NASA Earth Observing System (EOS) currently produces regular global estimates of gross primary productivity (GPP) and annual net primary productivity (NPP) of the entire terrestrial earth surface at 1 km spatial resolution. While the MODIS GPP algorithm uses meteorological data provided by the NASA Data Assimilation Office (DAO), sub-pixel heterogeneity and complex terrain are generally not well reflected, owing to the coarse spatial resolution of the DAO data (a resolution of $1{\circ}\;{\times}\;1.25{\circ}$). In this study, we estimated inputs retrieved from MODIS products of the AQUA and TERRA satellites at 5 km spatial resolution for the purpose of finer GPP and/or NPP determination. The derived variables included temperature, VPD, and solar radiation. Data from seven AmeriFlux sites located in the Corn Belt region were used to evaluate the MODIS-derived inputs. MODIS-derived air temperature values showed good agreement with ground-based observations: the mean error (ME) and coefficient of correlation (R) ranged from $-0.9^{\circ}C$ to $+5.2^{\circ}C$ and from 0.83 to 0.98, respectively. VPD agreed with tower observations somewhat more coarsely (ME = -183.8 Pa ~ +382.1 Pa; R = 0.51 ~ 0.92). While MODIS-derived shortwave radiation showed good correlation with observations, it was slightly overestimated (ME = -0.4 MJ $day^{-1}$ ~ +7.9 MJ $day^{-1}$; R = 0.67 ~ 0.97). Our results indicate that inputs derived from MODIS atmosphere and land products can provide a useful tool for estimating crop GPP.
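The evaluation statistics reported above, mean error (ME) and the correlation coefficient (R) between MODIS-derived values and flux-tower observations, can be computed as follows; the daily temperatures are made up for illustration, not AmeriFlux data.

```python
# ME and Pearson R between satellite-derived estimates and tower
# observations, the two statistics the study reports per site.
import math

def mean_error(estimates, observations):
    """Mean of (estimate - observation); positive means a warm/high bias."""
    return sum(e - o for e, o in zip(estimates, observations)) / len(estimates)

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily air temperatures (deg C): tower vs. MODIS-derived
tower = [12.1, 15.4, 18.0, 21.3, 24.7, 26.2]
modis = [13.0, 16.1, 19.2, 22.0, 25.9, 27.5]

me = mean_error(modis, tower)
r = correlation(modis, tower)
```

A site whose ME sits near zero and whose R approaches 1 would fall at the favorable end of the ranges the study reports (e.g. temperature: ME $-0.9$ to $+5.2^{\circ}C$, R 0.83 to 0.98).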