• Title/Summary/Keyword: Computational burden

Search Results: 442, Processing Time: 0.03 seconds

Face and Facial Element Extraction in CCD-Camera Images Using the Snake Algorithm (스네이크 알고리즘에 의한 CCD 카메라 영상에서의 얼굴 및 얼굴 요소 추출)

  • 판데홍;김영원;김정연;전병환
• Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2002.11a
    • /
    • pp.535-542
    • /
    • 2002
  • With the recent rapid growth of the IT industry, natural interface technologies for avatar control in video conferencing, games, and chatting are increasingly in demand. This paper proposes a method that uses active contour models (snakes) to extract the contours, or locate the positions, of the face and of facial elements such as the eyes, mouth, eyebrows, and nose in color CCD camera images with complex backgrounds. Because the snake algorithm is generally sensitive to noise and its extraction performance depends heavily on how the initial model is set, it has mainly been used to extract frontal faces from images with simple backgrounds. To address these shortcomings, we first find the minimum enclosing rectangle (MER) of the face using color information based on the I component of the YIQ color model together with difference-image information, and then set the MERs of the eyes, mouth, eyebrows, and nose within this face region using geometric position information and edge information. Next, within the MER of each element, we apply a snake algorithm whose internal energy is based on the first and second derivatives and whose image energy is based on edges. To remove complex noise around the face in the edge image, a morphological dilation operation is applied to the color-information image and to the difference image, and the binary image obtained by applying dilation again to their AND-combined image is used as a filter. In experiments on 140 images, 20 near-frontal images with both eyes visible from each of 7 subjects, the MER error rates were 6.2%, 11.2%, and 9.4% for the face, eyes, and mouth, respectively. In addition, with 44 initial snake control points for the face, 16 for the eyes, and 24 for the mouth, running the snake algorithm on the images whose MER extraction succeeded yielded extracted-region error rates of 2.2%, 2.6%, and 2.5%, respectively.

  • PDF
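
The snake formulation this abstract describes (internal energy from first and second derivatives plus an edge-based image energy) follows the classic Kass-style active contour. Below is a minimal NumPy sketch of one semi-implicit snake update; the parameters alpha, beta, gamma and the external forces fx, fy are generic stand-ins, not the paper's tuned energies.

```python
import numpy as np

def snake_step(x, y, fx, fy, alpha=0.1, beta=0.1, gamma=1.0):
    """One semi-implicit update of a closed snake (Kass-style).

    x, y   : contour control-point coordinates, shape (N,)
    fx, fy : external forces at the control points (e.g. the gradient
             of an edge map pulling the contour toward edges), shape (N,)
    """
    n = len(x)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta            # stretching + bending terms
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -alpha - 4 * beta
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    step = np.linalg.inv(A + gamma * np.eye(n))   # implicit internal forces
    return step @ (gamma * x + fx), step @ (gamma * y + fy)
```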

A Performance Comparison of Spatial Scalable Encoders with the Constrained Coding Modes for T-DMB/AT-DMB Services (T-DMB/AT-DMB 서비스를 위한 부호화 모드 제한을 갖는 공간 확장성 부호기의 성능 비교)

  • Kim, Jin-Soo;Park, Jong-Kab;Kim, Kyu-Seok;Choi, Sung-Jin;Seo, Kwang-Deok;Kim, Jae-Gon
    • Journal of Broadcast Engineering
    • /
    • v.13 no.4
    • /
    • pp.501-515
    • /
    • 2008
  • Recently, as users' requests for high-quality mobile multimedia services are rapidly increasing and additional bandwidth can be provided by adopting hierarchical modulation transmission technology, research on the Advanced Terrestrial DMB (AT-DMB) service using the Scalable Video Coding (SVC) scheme is being actively pursued. However, in order to realize a compatible video service and to accelerate successful standardization and commercialization, it is necessary to simplify the compatible encoder structure. In this paper, we propose a fast mode decision method that constrains the redundant coding modes in a spatial scalable encoder keeping the current T-DMB video as the base layer. The proposed method is based on the statistical characteristics of each coding mode at the base and enhancement layers, including the inter-layer predictions, derived by investigating the macroblock-layer coding modes of the spatial scalable encoder's functional structure. Through computer simulations, it is shown that a simplified encoder model that reduces the heavy computational burden can be found while keeping the objective visual quality very high.
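
The fast mode decision amounts to pruning the enhancement-layer candidate list according to the base-layer mode. A toy sketch of that idea follows; the candidate table and mode names are invented for illustration and are not the paper's measured statistics.

```python
# Candidate table: invented for illustration, not the paper's statistics.
BASE_TO_ENH_CANDIDATES = {
    "SKIP":       ["BL_SKIP", "SKIP"],
    "INTER16x16": ["BL_PRED", "INTER16x16", "SKIP"],
    "INTRA":      ["IL_INTRA", "INTRA"],
}

def decide_enh_mode(base_mode, rd_cost):
    """Return the cheapest enhancement-layer mode among pruned candidates.
    rd_cost(mode) -> float is assumed to evaluate rate-distortion cost."""
    candidates = BASE_TO_ENH_CANDIDATES.get(base_mode, ["INTER16x16", "INTRA"])
    return min(candidates, key=rd_cost)

# Example: decide_enh_mode("SKIP", lambda m: {"BL_SKIP": 1.0, "SKIP": 1.4}[m])
```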

Classification of Magnetic Resonance Imagery Using Deterministic Relaxation of Neural Network (신경망의 결정론적 이완에 의한 자기공명영상 분류)

  • 전준철;민경필;권수일
    • Investigative Magnetic Resonance Imaging
    • /
    • v.6 no.2
    • /
    • pp.137-146
    • /
    • 2002
  • Purpose : This paper introduces an improved classification approach that adopts a deterministic relaxation method and an agglomerative clustering technique for the classification of MRI using a neural network. The proposed approach solves the problems of convergence to local optima and of the computational burden caused by a large number of input patterns when a neural network is used for image classification. Materials and methods : Hopfield neural networks have been applied to various optimization problems. However, a major problem of mapping an image classification problem onto a neural network is that the network is apt to converge to local optima, and its convergence toward the global solution with standard stochastic relaxation takes much time. Therefore, to avoid local solutions and to achieve fast convergence toward a global optimum, we apply mean field annealing (MFA) to a Hopfield network during classification. MFA replaces the stochastic nature of the simulated annealing method with a set of deterministic update rules that act on the average values of the variables. By minimizing these averages, it is possible to converge to an equilibrium state considerably faster than with the standard simulated annealing method. Moreover, the proposed agglomerative clustering algorithm, which determines the underlying clusters of the image, provides the initial input values of the Hopfield neural network. Results : The proposed approach, which uses agglomerative clustering and deterministic relaxation, resolves the problem of local optimization and achieves fast convergence toward a global optimum when a neural network is used for MRI classification. Conclusion : In this paper, we introduce a new paradigm that classifies MRI using clustering analysis and deterministic relaxation for neural networks to improve classification results.

  • PDF
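
Mean field annealing replaces stochastic spin flips with deterministic updates of average activations, which is what makes convergence fast. A generic sketch of MFA on a Hopfield-style energy, assuming a weight matrix W and bias b rather than the paper's MRI-specific network:

```python
import numpy as np

def mean_field_anneal(W, b, T0=10.0, T_min=0.01, cooling=0.9, sweeps=50):
    """Deterministic relaxation of mean activations v = tanh(u / T)."""
    n = W.shape[0]
    v = np.random.uniform(-0.1, 0.1, n)    # mean activations, not binary spins
    T = T0
    while T > T_min:
        for _ in range(sweeps):
            v = np.tanh((W @ v + b) / T)   # deterministic mean-field update
        T *= cooling                       # anneal the temperature downward
    return v
```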

MODFLOW or FEFLOW: A Case Study of Groundwater Model Selection for the Upper Waikato Catchment, New Zealand

  • Weir, Julian;Moore, Dr Catherine;Hadfield, John
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2011.05a
    • /
    • pp.14-14
    • /
    • 2011
  • Groundwater in the Waikato region is a valuable resource for agriculture, water supply, forestry and industry. The 434,000 ha study area comprises the upper Waikato River catchment from the outflow of Lake Taupo (New Zealand's largest lake) through to Lake Karapiro (a man-made hydro lake with high recreational value) (Figure 1). Water quality in the area is naturally high. However, there are indications that this quality is deteriorating as a result of land use intensification and deforestation. Compounding this concern for decision makers is the lag time between land use changes and the realisation of effects on groundwater and surface water quality. It is expected that the effects of land use changes have not yet fully manifested, and additional intensification may take decades to fully develop, further compounding the deterioration. Consequently, Environment Waikato (EW) has proposed a programme of work to develop a groundwater model to assist in managing water quality and in appropriate policy development within the catchment. One of the most important and critical decisions of any modelling exercise is the choice of the modelling platform to be used. It must not inhibit future decision making and scenario exploration, and it needs to allow as accurate a representation of reality as feasible. With this in mind, EW requested that two modelling platforms, MODFLOW/MT3DMS and FEFLOW, be assessed for their ability to deliver the long-term modelling objectives for this project. The two platforms were compared against various selection criteria, including complexity of model set-up and development, computational burden, ease and accuracy of representing surface water-groundwater interactions, precision in predictive scenarios, and ease with which the model input and output files could be interrogated. This last criterion is essential for the thorough assessment of predictive uncertainty with third-party software such as PEST. This paper will focus on the attributes of each modelling platform and the comparison of the two approaches against the key criteria in the selection process. Primarily due to the ease of handling and developing input files and interrogating output files, MODFLOW/MT3DMS was selected as the preferred platform. Other advantages and disadvantages of the two modelling platforms were somewhat balanced. A preliminary regional groundwater numerical model of the study area was subsequently constructed. The model simulates steady state groundwater and surface water flows using MODFLOW and transient contaminant transport with MT3DMS, focussing on nitrate nitrogen (as a conservative solute). Geological information for this project was provided by GNS Science. Professional peer review was completed by Dr. Vince Bidwell (of Lincoln Environmental).

  • PDF
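
Platform selection of this kind is essentially a multi-criteria comparison. A toy weighted-scoring sketch of such a trade-off follows; the weights and 1-5 scores below are invented for illustration and are not the study's actual assessment.

```python
# Weights and scores are invented for illustration only.
criteria = {  # name: (weight, MODFLOW/MT3DMS score, FEFLOW score)
    "set-up and development complexity": (0.15, 4, 3),
    "computational burden":              (0.20, 4, 3),
    "SW-GW interaction representation":  (0.20, 3, 4),
    "precision in predictive scenarios": (0.15, 4, 4),
    "file interrogation (e.g. PEST)":    (0.30, 5, 3),
}
modflow = sum(w * m for w, m, f in criteria.values())
feflow = sum(w * f for w, m, f in criteria.values())
print(f"MODFLOW/MT3DMS {modflow:.2f} vs FEFLOW {feflow:.2f}")
```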

Estimating Stability Indices from the MODIS Infrared Measurements over the Korean Peninsula (MODIS 적외 자료를 이용한 한반도 지역의 대기 안정도 지수 산출)

  • Park, Sung-Hee;Chung, Eui-Seok;Koenig, Marianne;Sohn, B.J.
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.6
    • /
    • pp.469-483
    • /
    • 2006
  • An algorithm was developed to estimate stability indices (SIs) over the Korean peninsula using Terra Moderate Resolution Imaging Spectroradiometer (MODIS) infrared brightness temperatures (TBs). A stability index describes the stability of the atmosphere in hydrostatic equilibrium with respect to vertical displacements and is used as an indicator of potential severe storm development. Using atmospheric temperature and moisture profiles from the Regional Data Assimilation and Prediction System (RDAPS) as initial guess data for a nonlinear physical relaxation method, the K index (KI), KO index (KO), lifted index (LI), and maximum buoyancy (MB) were estimated. A fast radiative transfer model, RTTOV-7, was utilized to reduce the computational burden of the physical relaxation method. The TBs estimated from the radiative transfer simulation are in good agreement with the observed MODIS TBs. To test its usefulness for short-term forecasting of severe storms, the algorithm was applied to rapidly developing convective storms. Compared with the SIs from the RDAPS forecasts and NASA products, the MODIS SI obtained in this research predicts instability better over pre-convection areas. Thus, it is expected that nowcasting and short-term forecasting can be improved by utilizing the algorithms developed in this study.
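
Two of the estimated indices have simple textbook definitions once the profile temperatures have been retrieved. A small sketch using the conventional formulas (these are the standard definitions, not the paper's retrieval code):

```python
def k_index(t850, t700, t500, td850, td700):
    """K index (degC inputs): low-level warmth/moisture vs mid-level dryness."""
    return (t850 - t500) + td850 - (t700 - td700)

def lifted_index(t500_env, t500_parcel):
    """Lifted index: environment minus lifted-parcel temperature at 500 hPa.
    Negative values indicate instability; t500_parcel must come from a
    separate parcel-lifting computation."""
    return t500_env - t500_parcel

print(k_index(t850=22, t700=8, t500=-12, td850=18, td700=2))  # 46: very unstable
```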

Determination of Volume Porosity and Permeability of Drainage Layer in Rainwater Drainage System Using 3-D Numerical Method (3차원 수치해석기법을 이용한 우수배수시스템 배수층의 체적공극과 투수도 결정)

  • Yeom, Seong Il;Park, Sung Won;Ahn, Jungkyu
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.8
    • /
    • pp.449-455
    • /
    • 2019
  • The increase in impermeable pavement from recent urbanization has resulted in an increase in surface runoff, which in turn has increased the burden on the existing drainage system. This drainage system has structural limitations in that its catchment area is reduced by waste particles transported with the surface runoff, which decreases its efficiency. To overcome these limitations, a new type of drainage system with a drainage layer was developed and applied. In this study, various volume porosities and permeabilities of the lower drainage layer were simulated using ANSYS CFX, a three-dimensional computational fluid dynamics program. The results showed that the outlet velocity at 35% volume porosity was faster than in the 20% and 50% cases, so there was no monotonic relationship between volume porosity and drainage performance. The permeability of the drainage layer can be determined from the particle size of the material, and a simulation of five conditions showed that 2 mm sand grains are most suitable for workability and usability. This study suggests appropriate values for the volume porosity and particle size of the drainage layer, which can help reduce and prevent flood damage.
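
For relating grain size to permeability, the Kozeny-Carman relation for packed spheres is the standard closed form. A small illustrative sketch; the study determined its values with CFD, and this formula is not stated to be the one it used.

```python
def kozeny_carman(d, porosity):
    """Permeability k [m^2] for grain diameter d [m] and porosity in (0, 1)."""
    return d ** 2 * porosity ** 3 / (180.0 * (1.0 - porosity) ** 2)

k = kozeny_carman(d=0.002, porosity=0.35)  # 2 mm sand at 35% porosity
print(f"k = {k:.2e} m^2")                  # ~2.3e-09 m^2
```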

Improvement of LMS Algorithm Convergence Speed with Updating Adaptive Weight in Data-Recycling Scheme (데이터-재순환 구조에서 적응 가중치 갱신을 통한 LMS 알고리즘 수렴 속도 개선)

  • Kim, Gwang-Jun;Jang, Hyok;Suk, Kyung-Hyu;Na, Sang-Dong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.9 no.4
    • /
    • pp.11-22
    • /
    • 1999
  • Least-mean-square (LMS) adaptive filters have proven to be extremely useful in a number of signal processing tasks. However, the LMS adaptive filter suffers from a slow rate of convergence for a given steady-state mean square error compared with the recursive least squares adaptive filter. In this paper, an efficient signal interference control technique is introduced to improve the convergence speed of the LMS algorithm: the tap-weight vectors are updated by reusing data that would otherwise be discarded by the adaptive transversal filter, in a scheme with data-recycling buffers. Computer simulations show that, in the experimentally computed learning curve, the proposed algorithm converges faster and reaches a lower MSE than the existing LMS as the step-size parameter μ increases. We also find that the convergence speed of the proposed algorithm increases by a factor of (B+1), where B is the number of recycled-data buffers, without added computational complexity. In experiments under the same conditions as the LMS algorithm, an adaptive transversal filter with the proposed data-recycling buffer algorithm efficiently rejected the ISI of the channel and increased the speed of convergence while avoiding the burden of additional computational complexity.
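
The data-recycling idea is that each new sample triggers one ordinary LMS update plus replays of the last B buffered samples, which is where the (B+1)-fold speed-up comes from. A minimal sketch in that spirit, not the authors' exact filter:

```python
import numpy as np

def lms_recycling(x, d, num_taps=8, mu=0.01, B=3):
    """LMS with a data-recycling buffer: every new sample gives one ordinary
    update plus replays of the last B buffered samples (B+1 updates total)."""
    w = np.zeros(num_taps)
    buf = []                                       # recycled (u, d) pairs
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]      # tap-input vector
        for u_k, d_k in [(u, d[n])] + buf:
            e = d_k - w @ u_k                      # a-priori error
            w += mu * e * u_k                      # LMS weight update
        buf = ([(u, d[n])] + buf)[:B]              # keep the B most recent pairs
    return w
```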

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is achieving an ultra-high aspect ratio contact (UHARC) profile without anomalous behaviors such as sidewall bowing and twisting profiles. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still face formidable challenges such as tight limits on sidewall bowing and controlling randomly distorted features in the nanoscale etch profile. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies to overcome these process limitations, including novel plasma chemistries and plasma sources. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for a silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistries of the etched samples were measured with an X-ray photoelectron spectrometer. To measure plasma parameters, a self-cleaned RF Langmuir probe was used to cope with the polymer deposition environment on the probe tip, and the results were double-checked with the cutoff probe, which is known to be a precise plasma diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we propose a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these studies of the plasma surface reaction, we have developed a 3D topography simulator using a multi-layer level-set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. In the case of ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required to treat this ballistic transport realistically. To address this issue, the related computational codes were efficiently parallelized for GPU (graphics processing unit) computing, so that the total computation time was improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model. Realistic etch-profile simulations that account for the sidewall polymer passivation layer were demonstrated.

  • PDF
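
The expensive part that the GPU parallelisation targets is the ballistic transport sampling inside the feature. A toy Monte Carlo sketch of ion arrival in a cylindrical contact hole, with invented geometry and angular distribution; the real simulator couples this to the level-set surface and the reaction model.

```python
import numpy as np

# Toy Monte Carlo of ballistic ion transport into a cylindrical contact hole
# (radius R, depth H): sample near-vertical ions and estimate what fraction
# reaches the bottom before grazing the sidewall. Purely illustrative, not
# the paper's GPU-parallelised simulator.
rng = np.random.default_rng(0)
R, H = 1.0, 20.0                                          # aspect ratio 20:1
n_ions = 100_000
theta = np.abs(rng.normal(0.0, np.deg2rad(3.0), n_ions))  # off-vertical angle
r0 = R * np.sqrt(rng.uniform(0.0, 1.0, n_ions))           # uniform entry radius

# Worst case: lateral drift directed straight at the nearest wall. The ion
# travels t_wall along its path before its radial coordinate reaches R.
t_wall = (R - r0) / np.maximum(np.sin(theta), 1e-12)
depth_hit = np.minimum(t_wall * np.cos(theta), H)         # capped at the bottom
print(f"fraction reaching the bottom: {np.mean(depth_hit >= H):.2f}")
```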

A User Optimal Traffic Assignment Model Reflecting Route Perceived Cost (경로인지비용을 반영한 사용자최적통행배정모형)

  • Lee, Mi-Yeong;Baek, Nam-Cheol;Mun, Byeong-Seop;Gang, Won-Ui
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.117-130
    • /
    • 2005
  • In both the deterministic User Optimal Traffic Assignment Model (UOTAM) and the stochastic UOTAM, travel time, which is the major criterion for traffic loading over a transportation network, is defined as the sum of link travel time and turn delay at intersections. In this assignment method, drivers' actual route perception processes and choice behaviors, which can be main explanatory factors, are not sufficiently considered and may therefore result in biased traffic loading. Although there have been efforts in stochastic UOTAM to reflect drivers' route perception cost by assuming a cumulative distribution function of link travel time, they have not borne fundamental fruit, resting on the unreasonable assumptions of the Probit model's truncated travel time distribution function and the Logit model's independence of inter-link congestion. The critical reason why deterministic UOTAM has not been able to reflect route perception cost is that the perception cost takes a different value for each origin, destination, and path connecting that origin and destination. Hence, to find the optimal route between an OD pair, a route enumeration problem arises in which all routes connecting the pair must be compared; this causes computational failure because an uncountable number of paths may be enumerated as the transportation network grows. The purpose of this study is to propose a method that enables UOTAM to reflect route perception cost without route enumeration between an O-D pair. For this purpose, this study defines a link as the smallest unit of a path. Since each link can then be treated as a path, in the two-link searching process of the link-label-based optimum path algorithm, route enumeration between an OD pair is reduced to finding optimum paths over all links. The computational burden of this method is no more than that of the link-label-based optimum path algorithm. Each perception cost is embedded as a quantitative value generated by comparing the sub-path from the origin to the searching link with the searched link.
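
The key device is labeling links instead of nodes, so a perception cost can be added at each two-link extension without enumerating routes. A sketch of a link-label shortest path in that style; the network encoding and the perception_cost hook are invented for illustration.

```python
import heapq

def link_label_shortest(links, out_links, origin_links, dest_links,
                        perception_cost):
    """Link-label shortest path: labels sit on links, so the perceived cost
    of extending one link by the next is added per two-link comparison.

    links:     {link_id: (travel_time, head_node)}
    out_links: {node: [link_id, ...]} links leaving each node
    """
    best = {lid: float("inf") for lid in links}
    heap = []
    for lid in origin_links:
        best[lid] = links[lid][0]
        heapq.heappush(heap, (best[lid], lid))
    while heap:
        cost, lid = heapq.heappop(heap)
        if cost > best[lid]:
            continue                                # stale label
        _, head = links[lid]
        for nxt in out_links.get(head, []):
            # two-link extension: travel time plus perceived cost of the move
            new = cost + links[nxt][0] + perception_cost(lid, nxt)
            if new < best[nxt]:
                best[nxt] = new
                heapq.heappush(heap, (new, nxt))
    return min((best[l] for l in dest_links), default=float("inf"))
```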

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. METHONTOLOGY describes a very detailed approach for building an ontology under a centralized development environment at the conceptual level. This methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational guarantees for consistency checking and classification, which are crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent characteristics. Based on the researchers' GSO development experience, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies. However, it is still difficult for such domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also whether the project will be successful. Third, METHONTOLOGY excludes any explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers might share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain the allocation of specific tasks to different developer groups, and how to combine these tasks once the assigned jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques to be applied in the conceptualization stage. Introducing methods for concept extraction from multiple informal sources, or for identifying relations, may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs additional criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition throughout the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; it can thus be considered a heavy methodology, and adopting an agile approach would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed, and the study affords insights for ontology methodology researchers who want to design a more advanced ontology development methodology.
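
As a flavour of the kind of OWL-DL modelling the GSO involves, a tiny sketch using the owlready2 Python library (assumed available); the class and property names are invented for illustration and are not from the actual GSO.

```python
from owlready2 import get_ontology, Thing, ObjectProperty

# Hypothetical graduation-screen concepts; not the actual GSO schema.
onto = get_ontology("http://example.org/gso.owl")

with onto:
    class Student(Thing): pass
    class Course(Thing): pass
    class has_completed(ObjectProperty):
        domain = [Student]
        range = [Course]

s = Student("alice")
c = Course("logic101")
s.has_completed.append(c)     # assert the object property
print(s.has_completed)        # e.g. [gso.logic101]
onto.save(file="gso.owl")     # serialise to RDF/XML
```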