• Title/Summary/Keyword: Module Extraction

A Bone Age Assessment Method Based on Normalized Shape Model (정규화된 형상 모델을 이용한 뼈 나이 측정 방법)

  • Yoo, Ju-Woan; Lee, Jong-Min; Kim, Whoi-Yul
    • Journal of Korea Multimedia Society, v.12 no.3, pp.383-396, 2009
  • Bone age assessment is widely used in pediatrics to identify endocrine problems in children. Since the number of trained doctors falls far short of demand, there have been numerous requests for automatic estimation of bone age. In this paper, we therefore propose an automatic bone age assessment method based on pattern classification techniques. The proposed method consists of three modules: a finger segmentation module, a normalized shape model generation module, and a bone age estimation module. The finger segmentation module segments fingers and epiphyseal regions by means of various image processing algorithms. The shape model generation module employs an Active Shape Model (ASM) to improve the accuracy of feature extraction for bone age estimation, and an SVM is used to estimate bone age from features that include bone lengths and ratios of bone lengths. We evaluated the proposed method through statistical analysis, comparing its assessments with those of clinical experts. In the experiments, the mean assessment error was 0.679 years, better than the average error acceptable in clinical practice.
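The final estimation step pairs an SVM with bone-length and length-ratio features. The sketch below is not the authors' implementation; it only illustrates that step with scikit-learn's SVR, and the helper length_ratio_features and all measurements are hypothetical.

```python
# Minimal sketch (not the authors' code): SVM regression of bone age from
# hypothetical bone lengths and pairwise length ratios, assuming the features
# were already produced by an ASM-based segmentation/shape-model stage.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def length_ratio_features(bone_lengths):
    """Feature vector of raw lengths plus all pairwise length ratios."""
    lengths = np.asarray(bone_lengths, dtype=float)
    ratios = [lengths[i] / lengths[j]
              for i in range(len(lengths))
              for j in range(len(lengths)) if i < j]
    return np.concatenate([lengths, ratios])

# Hypothetical training data: per-subject bone lengths (mm) and known bone ages (years).
X = np.array([length_ratio_features(l) for l in [[42.1, 25.3, 18.7],
                                                 [55.0, 33.2, 24.9],
                                                 [61.4, 36.8, 27.5]]])
y = np.array([6.0, 10.5, 13.0])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.predict([length_ratio_features([50.2, 30.1, 22.0])]))
```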

System Optimization Technique using Crosscutting Concern (크로스커팅 개념을 이용한 시스템 최적화 기법)

  • Lee, Seunghyung; Yoo, Hyun
    • Journal of Digital Convergence, v.15 no.3, pp.181-186, 2017
  • System optimization is a technique that changes the structure of a program, without changing its source code, in order to extract duplicated modules and reuse them. Structure-oriented and object-oriented development modularize core concerns efficiently, but they cannot modularize crosscutting concerns. To apply the crosscutting concept to an existing system, a technique is needed for extracting the optimization-candidate modules scattered throughout the system. This paper proposes a method for extracting redundant modules from a completed system. The proposed method analyzes the source code and identifies duplicated elements through data-dependency and control-dependency analysis. The extracted redundant elements are then used in a program dependency analysis for system optimization; the duplicated-dependency result is converted into a control flow graph, from which a minimal crosscutting module can be produced. By turning the elements identified by the dependency analysis into a crosscutting concern module, the proposed approach minimizes duplicated code within the system.
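The paper's redundant-element extraction rests on dependency analysis of the source code. As a rough stand-in, the sketch below finds duplicated function bodies by hashing normalized ASTs; the AST-hashing approach and the toy functions are illustrative assumptions, not the paper's actual dependency analysis.

```python
# Minimal sketch (not the paper's tool): group function definitions whose
# normalized AST bodies are identical, as a crude way to surface duplicated
# elements that a crosscutting module could replace.
import ast
import hashlib
from collections import defaultdict

def duplicate_functions(source: str):
    """Return groups of function names whose bodies have identical ASTs."""
    tree = ast.parse(source)
    groups = defaultdict(list)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Dump only the body, ignoring the function name, so renamed copies still match.
            body_dump = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            digest = hashlib.sha1(body_dump.encode()).hexdigest()
            groups[digest].append(node.name)
    return [names for names in groups.values() if len(names) > 1]

code = """
def pay_tax(x):   return x * 0.1 + 3
def pay_fee(x):   return x * 0.1 + 3
def pay_other(x): return x * 0.2
"""
print(duplicate_functions(code))  # [['pay_tax', 'pay_fee']]
```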

RF circuit simulation for high-power LDMOS modules

  • Fujioka, Tooru; Matsunaga, Yoshikuni; Morikawa, Masatoshi; Yoshida, Isao
    • Proceedings of the IEEK Conference, 2000.07b, pp.1119-1122, 2000
  • This paper describes an RF circuit simulation technique, focusing on RF modeling and model extraction for an LDMOS (Laterally Diffused MOS) with gate-width (Wg) dependence. Small-signal model parameters of LDMOS devices with various gate widths, extracted from S-parameter data, are used to relate RF performance to gate width. It is shown that the source inductance (Ls) does not follow the scaling rules. The extracted small-signal model parameters are also used to remove extrinsic elements in the extraction of a large-signal model (using the HP Root MOSFET model), so an additional measurement to extract the extrinsic elements can be omitted. When the large-signal model, with Ls carrying the gate-width dependence noted above, is applied to a high-power LDMOS module, the simulated performance (output power, etc.) agrees well with experimental results, confirming that the extracted model and the RF circuit simulation are accurate.
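A quick way to see the gate-width scaling question is to fit each extracted parameter against Wg and compare fit quality. The numbers below are invented for illustration and do not come from the paper; only the qualitative point (a capacitance-like parameter scales roughly linearly with Wg while Ls does not) reflects the abstract.

```python
# Minimal sketch (illustrative numbers, not the paper's data): test which
# extracted small-signal parameters follow a simple linear scaling rule in Wg.
import numpy as np

wg = np.array([1.0, 2.0, 4.0, 8.0])          # gate widths (mm), hypothetical
cgs = np.array([1.1, 2.0, 4.1, 7.9])         # gate-source capacitance (pF), roughly linear in Wg
ls = np.array([0.30, 0.22, 0.19, 0.18])      # source inductance (nH), hypothetical, not linear in Wg

def linear_fit_quality(x, y):
    """Fit y = a*x + b and return the coefficient of determination R^2."""
    a, b = np.polyfit(x, y, 1)
    residual = y - (a * x + b)
    return 1.0 - residual.var() / y.var()

print("Cgs R^2:", round(linear_fit_quality(wg, cgs), 3))  # close to 1 -> scales with Wg
print("Ls  R^2:", round(linear_fit_quality(wg, ls), 3))   # noticeably lower -> no simple scaling
```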

GUI-based HTML2XML Wrapper Using Inductive Reasoning (학습 추론을 이용한 GUI 기반의 HTML2XML 래퍼)

  • Jang, Mun-Seong; Jeong, Jae-Mok; Choe, Il-Hwan; Kim, Hyeong-Ju
    • Journal of KIISE: Databases, v.29 no.4, pp.311-320, 2002
  • A 'wrapper' is a module that extracts and processes information from a specified data source according to a pre-composed extraction rule. An HTML wrapper for XML extracts information from a web source in the form of an XML document. Since composing the extraction rule is a repetitive and tedious job, it should be made as easy and fast as possible. This paper presents a method that minimizes the rule-composing effort by integrating GUI-based training and scripting.
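A wrapper of this kind boils down to applying an extraction rule to HTML and emitting XML. The sketch below uses a toy rule table of regular expressions (the RULE mapping and the sample HTML are made up); the paper instead builds its rules through GUI-based training and inductive reasoning.

```python
# Minimal sketch (not the paper's wrapper): apply a pre-composed extraction
# rule to an HTML source and emit the extracted fields as an XML record.
import re
import xml.etree.ElementTree as ET

RULE = {
    "title": r"<h1[^>]*>(.*?)</h1>",
    "price": r'<span class="price">(.*?)</span>',
}

def wrap(html: str) -> str:
    """Extract the fields named in RULE from the HTML and return an XML string."""
    root = ET.Element("record")
    for tag, pattern in RULE.items():
        match = re.search(pattern, html, flags=re.S)
        if match:
            ET.SubElement(root, tag).text = match.group(1).strip()
    return ET.tostring(root, encoding="unicode")

html = '<html><h1>USB Cable</h1><span class="price">3,500 KRW</span></html>'
print(wrap(html))  # <record><title>USB Cable</title><price>3,500 KRW</price></record>
```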

Infrared Visual Inertial Odometry via Gaussian Mixture Model Approximation of Thermal Image Histogram (열화상 이미지 히스토그램의 가우시안 혼합 모델 근사를 통한 열화상-관성 센서 오도메트리)

  • Jaeho Shin; Myung-Hwan Jeon; Ayoung Kim
    • The Journal of Korea Robotics Society, v.18 no.3, pp.260-270, 2023
  • We introduce a novel Visual Inertial Odometry (VIO) algorithm designed to improve the performance of thermal-inertial odometry. Thermal infrared images, though advantageous for feature extraction in low-light conditions, typically suffer from a high noise level and significant information loss during 8-bit conversion. Our algorithm overcomes these limitations by approximating the 14-bit raw pixel histogram with a Gaussian mixture model. The conversion effectively emphasizes image regions rich in texture for visual tracking while reducing unnecessary background information. We incorporate robust learning-based feature extraction and matching methods, SuperPoint and SuperGlue, and a zero-velocity detection module to further reduce the uncertainty of the visual odometry. Tested across various datasets, the proposed algorithm shows improved performance compared to other state-of-the-art VIO algorithms, paving the way for robust thermal-inertial odometry.
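One way to realize the described conversion is to fit a Gaussian mixture to the 14-bit pixel histogram and pass pixels through the mixture CDF before quantizing to 8 bits. The sketch below assumes that reading of the abstract; the function name, component count, and synthetic frame are all hypothetical.

```python
# Minimal sketch (an assumption about the general idea, not the authors' code):
# approximate the 14-bit pixel distribution with a Gaussian mixture and rescale
# pixels through the mixture CDF, so the intensity ranges that hold most of the
# pixels (and hence most texture) receive more of the 8-bit output range.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def to_8bit_via_gmm(raw14: np.ndarray, n_components: int = 3) -> np.ndarray:
    pixels = raw14.reshape(-1, 1).astype(float)
    # Fit on a subsample to keep the sketch fast.
    idx = np.random.default_rng(0).choice(len(pixels), size=min(5000, len(pixels)), replace=False)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(pixels[idx])

    # Mixture CDF evaluated at every pixel value, then stretched to [0, 255].
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_).ravel()
    weights = gmm.weights_
    cdf = sum(w * norm.cdf(pixels.ravel(), m, s) for w, m, s in zip(weights, means, stds))
    out = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12) * 255.0
    return out.reshape(raw14.shape).astype(np.uint8)

raw = np.random.default_rng(1).integers(0, 2**14, size=(64, 80))  # fake 14-bit frame
converted = to_8bit_via_gmm(raw)
print(converted.dtype, converted.shape)
```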

LFFCNN: Multi-focus Image Synthesis in Light Field Camera (LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성)

  • Hyeong-Sik Kim; Ga-Bin Nam; Young-Seop Kim
    • Journal of the Semiconductor & Display Technology, v.22 no.3, pp.149-154, 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. In particular, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to handle images of various scales effectively. Experimental results demonstrate that the proposed model not only fuses multi-focus images into a single all-in-focus image effectively but also offers more efficient and robust focus fusion than existing methods.
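The abstract names SPP inside the feature-extraction module. The block below is a generic Spatial Pyramid Pooling layer in PyTorch, shown only to illustrate how inputs of different spatial sizes yield a fixed-length feature; it is not taken from LFFCNN.

```python
# Minimal sketch (an assumption, not the LFFCNN code): a Spatial Pyramid Pooling
# block that maps feature maps of any spatial size to a fixed-length vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels  # output grid sizes of the pyramid levels

    def forward(self, x):                       # x: (N, C, H, W), any H and W
        pooled = [F.adaptive_max_pool2d(x, size).flatten(1) for size in self.levels]
        return torch.cat(pooled, dim=1)         # (N, C * sum(s*s for s in levels))

spp = SPP()
for h, w in [(32, 32), (48, 40)]:               # two different input scales
    feat = spp(torch.randn(2, 16, h, w))
    print(feat.shape)                           # both: torch.Size([2, 336])
```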

Extraction of Passive Device Model Parameters Using Genetic Algorithms

  • Yun, Il-Gu; Carastro, Lawrence A.; Poddar, Ravi; Brooke, Martin A.; May, Gary S.; Hyun, Kyung-Sook; Pyun, Kwang-Eui
    • ETRI Journal, v.22 no.1, pp.38-46, 2000
  • The extraction of model parameters for embedded passive components is crucial for designing and characterizing the performance of multichip module (MCM) substrates. In this paper, a method for optimizing the extraction of these parameters using genetic algorithms is presented. The results are compared with optimization using the Levenberg-Marquardt (LM) algorithm employed in the HSPICE circuit modeling tool. A set of integrated resistor structures is fabricated, and their scattering parameters are measured over a frequency range from 45 MHz to 5 GHz. Optimal equivalent circuit models for these structures are derived from the S-parameter measurements using each algorithm. Predicted S-parameters for each optimized equivalent circuit are then obtained from HSPICE. The difference between the measured and predicted S-parameters in the frequency range of interest is used as a measure of the accuracy of the two optimization algorithms. The LM method is found to be extremely dependent on the initial starting point of the parameter search and thus prone to becoming trapped in local minima. This drawback is alleviated, and the accuracy of the obtained parameter values improved, by using genetic algorithms.
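The core idea is a genetic algorithm whose fitness is the mismatch between measured and model-predicted S-parameters. The sketch below applies that idea to a toy one-port R-L equivalent circuit with invented "measured" data; the circuit, GA settings, and numbers are assumptions, not the paper's setup.

```python
# Minimal sketch (not the paper's optimizer): a small genetic algorithm that
# fits a toy one-port equivalent circuit (R in series with L) to "measured"
# S11 data by minimizing the measured-vs-predicted S-parameter error.
import numpy as np

rng = np.random.default_rng(0)
freq = np.linspace(45e6, 5e9, 60)                 # 45 MHz .. 5 GHz
omega = 2 * np.pi * freq

def s11(params):
    r, l = params
    z = r + 1j * omega * l                        # series R-L one-port impedance
    return (z - 50.0) / (z + 50.0)                # reflection coefficient, 50-ohm reference

true_params = np.array([12.0, 2.0e-9])            # "measured" device: 12 ohm, 2 nH (invented)
measured = s11(true_params)

def fitness(params):
    return -np.mean(np.abs(s11(params) - measured) ** 2)    # higher is better

# Genetic algorithm: keep the best half, blend-crossover pairs, mutate multiplicatively.
pop = np.column_stack([rng.uniform(1, 100, 80), rng.uniform(0.1e-9, 10e-9, 80)])
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-40:]]
    kids = []
    while len(kids) < len(pop) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        kids.append(0.5 * (a + b) * rng.normal(1.0, 0.05, size=2))
    pop = np.vstack([parents, kids])

best = max(pop, key=fitness)
print("R = %.1f ohm, L = %.2f nH" % (best[0], best[1] * 1e9))
```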

Implementation of Voice Awareness Security Systems (음성인식 보안 시스템의 구현)

  • Lee, Moon-Goo
    • Proceedings of the IEEK Conference, 2006.06a, pp.799-800, 2006
  • This paper implements a voice recognition security system that is more accessible than existing security systems based on biometric authentication, is inexpensive with respect to the security device module, and has an advantage in usability. The proposed system implements an algorithm for extracting characteristics of the input speaker's voice signal for verification, together with an access control database built on the extracted output. The voice recognition security system thus provides control over the authority to access the system.
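The abstract describes extracting voice characteristics and checking them against an access-control database. The sketch below is a heavily simplified, assumed version of such a scheme (log band energies plus cosine similarity on synthetic noise); a real system would use proper speech features such as MFCCs and real enrollment utterances.

```python
# Minimal sketch (an assumption about the general scheme, not the thesis code):
# enroll a speaker as a log-spectral feature vector in an access-control table,
# then grant or deny access by cosine similarity against the enrolled template.
import numpy as np

def features(signal: np.ndarray, n_bands: int = 16) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.mean() for b in bands]))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
enrolled = {"alice": features(rng.normal(size=16000))}   # fake enrollment "utterance"

def access_granted(name: str, utterance: np.ndarray, threshold: float = 0.95) -> bool:
    template = enrolled.get(name)
    return template is not None and cosine(features(utterance), template) >= threshold

# With synthetic noise every utterance looks alike; this only demonstrates the mechanism.
print(access_granted("alice", rng.normal(size=16000)))
```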

A Study on the Integration of Information Extraction Technology for Detecting Scientific Core Entities based on Large Resources (대용량 자원 기반 과학기술 핵심개체 탐지를 위한 정보추출기술 통합에 관한 연구)

  • Choi, Yun-Soo; Cheong, Chang-Hoo; Choi, Sung-Pil; You, Beom-Jong; Kim, Jae-Hoon
    • Journal of Information Management, v.40 no.4, pp.1-22, 2009
  • Large-scale information extraction plays an important role in advanced information retrieval as well as in question answering and summarization. Information extraction can be defined as a process of converting unstructured documents into formalized, tabular information, and it consists of named-entity recognition, terminology extraction, coreference resolution, and relation extraction. Since these elementary technologies have so far been studied independently, integrating all the necessary processes of information extraction is not trivial because of the diversity of their input/output formats and operating environments. As a result, it is difficult to process scientific documents so that both named entities and technical terms are extracted at once. In this study, we define scientific core entities as a set of 10 types of named entities and technical terminologies in the biomedical domain. In order to extract these entities from scientific documents automatically and in a single pass, we develop a framework for scientific core entity extraction that embraces all the pivotal language processors: a named-entity recognizer, a coreference resolver, and a terminology extractor. Each module of the integrated system has been evaluated with various corpora as well as with KEEC 2009. The system will be utilized in various information service areas such as information retrieval, question answering (Q&A), document indexing, and dictionary construction.
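Integrating independently built extractors is essentially a pipeline problem. The sketch below shows one plausible shape for such a framework, with toy ner and term_extractor stand-ins (a coreference module would slot in the same way); none of this is the paper's actual code.

```python
# Minimal sketch (an assumption, not the paper's framework): compose
# independently developed extraction modules behind a common interface so one
# pass over a document yields both named entities and technical terms.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Document:
    text: str
    annotations: Dict[str, List[str]] = field(default_factory=dict)

# Each module is just a function Document -> Document; real modules would wrap
# existing recognizers with format converters on either side.
def ner(doc: Document) -> Document:
    doc.annotations["entities"] = [w for w in doc.text.split() if w[:1].isupper()]
    return doc

def term_extractor(doc: Document) -> Document:
    doc.annotations["terms"] = [w for w in doc.text.split() if w.endswith("ase")]
    return doc

def run_pipeline(doc: Document, modules: List[Callable[[Document], Document]]) -> Document:
    for module in modules:
        doc = module(doc)
    return doc

result = run_pipeline(Document("Aspirin inhibits cyclooxygenase in humans"),
                      [ner, term_extractor])
print(result.annotations)  # {'entities': ['Aspirin'], 'terms': ['cyclooxygenase']}
```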

An Efficient Object Extraction Scheme for Low Depth-of-Field Images (낮은 피사계 심도 영상에서 관심 물체의 효율적인 추출 방법)

  • Park Jung-Woo; Lee Jae-Ho; Kim Chang-Ick
    • Journal of Korea Multimedia Society, v.9 no.9, pp.1139-1149, 2006
  • This paper describes a novel and efficient algorithm that extracts focused objects from still images with a low depth of field (DOF). The algorithm unfolds into four modules. The first module computes a HOS map, which represents the spatial distribution of the high-frequency components in the input low-DOF image [1]. The second module finds the object-of-interest (OOI) candidate using characteristics of the HOS map. Since the resulting region may contain holes, the third module detects and fills them. To obtain the final OOI, the last module removes background pixels from the OOI candidate. The experimental results show that the proposed method is highly useful in various applications, such as image indexing for content-based retrieval from large image databases, image analysis for digital cameras, and video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
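The four modules map naturally onto a small image-processing pipeline. The sketch below substitutes a local Laplacian-energy map for the HOS map and uses simple thresholding, hole filling, and largest-component selection; the thresholds, filter sizes, and toy image are assumptions, not the paper's algorithm.

```python
# Minimal sketch (an assumption, not the paper's algorithm): the four-step
# structure described above, with a local high-frequency energy map standing
# in for the HOS map.
import numpy as np
from scipy import ndimage

def extract_focused_object(gray: np.ndarray) -> np.ndarray:
    # 1) high-frequency map: local energy of the Laplacian response
    hf = ndimage.uniform_filter(ndimage.laplace(gray.astype(float)) ** 2, size=9)
    # 2) OOI candidate: pixels whose high-frequency energy is well above average
    candidate = hf > 2.0 * hf.mean()
    # 3) fill holes inside the candidate region
    filled = ndimage.binary_fill_holes(candidate)
    # 4) drop background blobs, keeping only the largest connected component
    labels, n = ndimage.label(filled)
    if n == 0:
        return filled
    sizes = ndimage.sum(filled, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

# Toy image: blurred background plus a sharp textured square in the centre.
rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.normal(size=(120, 120)), sigma=4)
img[40:80, 40:80] += rng.normal(scale=1.0, size=(40, 40))   # in-focus texture
mask = extract_focused_object(img)
print(mask.shape, mask.sum())   # most "True" pixels should lie in the sharp square
```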
