• Title/Summary/Keyword: Semiautomatic


Automatic Road Extraction by Gradient Direction Profile Algorithm (GDPA) using High-Resolution Satellite Imagery: Experiment Study

  • Lee, Ki-Won;Yu, Young-Chul;Lee, Bong-Gyu
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.5
    • /
    • pp.393-402
    • /
    • 2003
  • With high-resolution satellite imagery now commercially available for civil use, remote sensing applications have extended into new fields and problems beyond the traditional application domains. Among them, transportation applications of this sensor data, in particular automatic or semiautomatic road extraction, are regarded as an important use of remote sensing imagery. In line with these trends, this study focuses on automatic road extraction using the Gradient Direction Profile Algorithm (GDPA) with IKONOS panchromatic imagery at 1 m resolution. The GDPA scheme and its main modules were reviewed along with their processing steps and implemented as prototype software. Using the extracted bi-level image and ground truth taken from an actual GIS layer, overall accuracy evaluation and ranking error assessment were performed. The results show that road information can be extracted automatically; however, several user-defined variables must be chosen carefully when applying high-resolution satellite imagery to dense or low-contrast areas. Furthermore, the GDPA method requires additional post-processing, because its direct results do not achieve high overall accuracy or ranking values. The main advantages of the GDPA scheme for road feature extraction are its performance and further applicability. This experimental study can be extended to practical remote sensing application fields.
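The abstract does not spell out GDPA's internals, but its core ingredient, per-pixel gradient direction from which road-edge profiles are built, can be illustrated with a minimal sketch (the function name, toy scene, and thresholds here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def gradient_direction(image):
    """Per-pixel gradient magnitude and direction (radians) from finite
    differences. Profiles across a road show opposite-signed gradient
    directions at its two parallel edges."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)          # derivatives along rows, then columns
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    return magnitude, direction

# Toy panchromatic strip: a bright horizontal "road" two pixels wide.
scene = np.zeros((5, 8))
scene[2:4, :] = 200
mag, ang = gradient_direction(scene)
# The upper road edge has a positive vertical gradient, the lower a negative one.
print(ang[1, 3], ang[4, 3])
```

On this toy scene, the direction flips sign between the two road edges, which is the signature a gradient-direction profile algorithm looks for.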

Process fault diagnostics using the integrated graph model

  • Yoon, Yeo-Hong;Nam, Dong-Soo;Jeong, Chang-Wook;Yoon, En-Sup
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1991.10b
    • /
    • pp.1705-1711
    • /
    • 1991
  • On-line fault detection and diagnosis is of increasing interest in the chemical process industry, especially for process control and automation. A chemical process needs an intelligent operation-aiding workstation that can perform tasks such as process monitoring, fault detection, fault diagnosis, and action guidance in a semiautomatic mode. These tasks can improve the performance of process operation and offer benefits in economics, safety, and reliability. Toward these goals, a series of studies has been carried out in our laboratory, whose main results are appropriate knowledge representation models and a diagnosis mechanism for fault detection and diagnosis in a chemical process. The knowledge representation schemes developed in our previous research, the symptom tree model and the fault-consequence digraph, proved effective and useful for real-time process diagnosis, especially in large and complex plants. However, the previous approach suffered from low diagnosis speed despite its high resolution, mainly because the two knowledge models were used complementarily. In the current study, a new knowledge representation scheme is developed that integrates the two previous models, the symptom tree and the fault-consequence digraph, into one. The new model is constructed from material balances, energy balances, momentum balances, and equipment constraints; controller-related constraints are also included, so the model retains the merits of both previous models. The integrated model will be tested and verified by real-time application to a BTX process or a crude unit process, and its reliability and flexibility are expected to be greatly enhanced over the previous models despite the low diagnosis speed. Nexpert Object is used as the expert system shell and a SUN4 workstation as the hardware platform; TCP/IP is used as the communication protocol, and interfacing to the dynamic simulator SPEEDUP for dynamic data generation is being studied.


Bilateral and pseudobilateral tonsilloliths: Three dimensional imaging with cone-beam computed tomography

  • Misirlioglu, Melda;Nalcaci, Rana;Adisen, Mehmet Zahit;Yardimci, Selmi
    • Imaging Science in Dentistry
    • /
    • v.43 no.3
    • /
    • pp.163-169
    • /
    • 2013
  • Purpose: Tonsilloliths are calcifications found in the crypts of the palatal tonsils and can be detected on routine panoramic examinations. This study was performed to highlight the benefits of cone-beam computed tomography (CBCT) in the diagnosis of tonsilloliths appearing bilaterally on panoramic radiographs. Materials and Methods: The sample group consisted of 7 patients who had bilateral radiopaque lesions in the area of the ascending ramus on panoramic radiographs. CBCT images of both sides of the jaw were obtained for every patient to determine the exact locations of the lesions and to rule out other calcifications. The calcifications were evaluated on the CBCT images using Ez3D2009 software. Additionally, the images were transferred in DICOM format to ITK-SNAP 2.4.0 software for semiautomatic segmentation. Segmentation was performed using contrast differences between the soft tissues and calcifications on grayscale images, and the volumes of the segmented three-dimensional models were obtained in mm³. Results: CBCT scans revealed that what appeared on panoramic radiographs as bilateral images were in fact unilateral lesions in 2 cases. The total volume of the calcifications ranged from 7.92 to 302.5 mm³. The patients with bilaterally multiple and large calcifications were found to be symptomatic. Conclusion: These cases provide evidence that tonsilloliths should be considered in the differential diagnosis of radiopaque masses involving the mandibular ramus, and they highlight the need for a CBCT scan to differentiate pseudo- or ghost images from true bilateral pathologies.

Automatic Information Extraction for Structured Web Documents (구조화된 웹 문서에 대한 자동 정보추출)

  • Yun, Bo-Hyun
    • Journal of Internet Computing and Services
    • /
    • v.6 no.3
    • /
    • pp.129-145
    • /
    • 2005
  • This paper proposes a web information extraction system that automatically extracts pre-defined information from web documents (i.e., HTML documents) and integrates the extracted information. The system recognizes unlabeled entities with a probability-based entity recognition method and extends the existing domain knowledge semiautomatically using the extracted data. Moreover, the system extracts sub-linked information linked from the base page and integrates similar results extracted from heterogeneous sources. The experimental results show that extracting sub-linked information and using probability-based entity recognition enhance precision significantly over a system using domain knowledge alone. The system can also extract more varied information precisely because it can be applied flexibly across domains. Because both the semiautomatic domain knowledge expansion and the probability-based entity recognition improve the quality of the information, the system can maximize user satisfaction. Thus, it can satisfy the intellectual curiosity of users of movie, performance, and restaurant sites, and it can be used to build various comparison shopping malls and contribute to the revitalization of e-business.


User-steered balloon: Application to Thigh Muscle Segmentation of Visible Human (사용자 조정 풍선 : Visible Human의 다리 근육 분할의 적용)

  • Lee, Jeong-Ho;Kim, Dong-Sung;Kang, Heung-Sik
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.3
    • /
    • pp.266-274
    • /
    • 2000
  • Medical image segmentation, which is essential for diagnosis and 3D reconstruction, is performed manually in most applications to produce accurate results. However, manual segmentation is time-consuming, and even the same operator finds it difficult to reproduce the same segmentation result for a region. To overcome these limitations, we propose a convenient and accurate semiautomatic segmentation method. The proposed method initially receives several control points on the boundary of a region of interest (ROI) from a human operator and then finds a boundary composed of minimum-cost paths connecting the control points, as in the Live-wire method. Next, the boundary is modified to overcome limitations of Live-wire, such as zig-zag boundaries and erosion of the ROI. Finally, the region is segmented by seeded region growing (SRG), in which the modified boundary acts as a blockage to prevent leakage. The proposed user-steered balloon method overcomes not only the limitations of Live-wire but also the leakage problem of SRG. Segmentation results for thigh muscles of the Visible Human are presented.
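The SRG step described in this abstract, growing a region from a seed while a boundary mask blocks leakage, can be sketched as a minimal 4-connected grower. The function name, toy image, and intensity-tolerance rule below are illustrative assumptions, not the paper's actual implementation:

```python
from collections import deque
import numpy as np

def seeded_region_growing(image, seed, tol, blockage):
    """Grow a 4-connected region from `seed`: a pixel joins if its intensity
    is within `tol` of the seed value and it is not on the blockage mask,
    which plays the role of the modified Live-wire boundary."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    seed_val = float(image[seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w):
            continue                          # outside the image
        if region[y, x] or blockage[y, x]:
            continue                          # already grown, or blocked
        if abs(float(image[y, x]) - seed_val) > tol:
            continue                          # intensity too different
        region[y, x] = True
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region

# Toy image: a bright 4x4 block; the top row acts as a blocking boundary.
img = np.zeros((6, 6))
img[1:5, 1:5] = 100
block = np.zeros((6, 6), dtype=bool)
block[0, :] = True
print(int(seeded_region_growing(img, (2, 2), tol=10, blockage=block).sum()))
```

The blockage mask is what distinguishes this from plain SRG: growth stops at the operator-steered boundary even where intensities alone would let the region leak.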


A Study on the Semiautomatic Construction of Domain-Specific Relation Extraction Datasets from Biomedical Abstracts - Mainly Focusing on a Genic Interaction Dataset in Alzheimer's Disease Domain - (바이오 분야 학술 문헌에서의 분야별 관계 추출 데이터셋 반자동 구축에 관한 연구 - 알츠하이머병 유관 유전자 간 상호 작용 중심으로 -)

  • Choi, Sung-Pil;Yoo, Suk-Jong;Cho, Hyun-Yang
    • Journal of Korean Library and Information Science Society
    • /
    • v.47 no.4
    • /
    • pp.289-307
    • /
    • 2016
  • This paper introduces a software system and process model for constructing domain-specific relation extraction datasets semi-automatically. The system takes a set of terms such as genes, proteins, diseases, and so forth as input and then, by exploiting a massive biological interaction database, generates a set of term pairs that are used as queries for retrieving sentences containing the pairs from scientific databases. To assess the usefulness of the proposed system, this paper applies it to the construction of a genic interaction dataset for the Alzheimer's disease domain, extracting 3,510 interaction-related sentences using 140 gene names in the area. In conclusion, the results of the case study indicate that the system and process could greatly boost the efficiency of dataset construction in various subfields of biomedical research.
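The pair-generation step of this pipeline, turning a term list into retrieval queries for co-mentioning sentences, can be sketched as follows. The gene symbols and the query syntax are illustrative assumptions, not the paper's actual inputs or query format:

```python
from itertools import combinations

def build_queries(terms):
    """Generate all unordered term pairs and turn each into a retrieval
    query for sentences mentioning both entities."""
    pairs = combinations(sorted(terms), 2)
    return ['"{}" AND "{}"'.format(a, b) for a, b in pairs]

# Hypothetical Alzheimer's-related gene symbols (illustrative only).
genes = ["APOE", "APP", "PSEN1"]
for query in build_queries(genes):
    print(query)
```

With n terms this yields n(n-1)/2 queries, which is why filtering pairs against a known interaction database first, as the paper describes, keeps retrieval tractable.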

A Study on the Wastewater and Air Pollution, Noise and Vibration Management and Discharge Control at the Industries (환경오염의 방지시설의 운영에 관한 실태조사)

  • Kim Nam Cheon;Woo Se Hong;Koo Sung Hoi
    • Journal of environmental and Sanitary engineering
    • /
    • v.1 no.1 s.1
    • /
    • pp.81-96
    • /
    • 1986
  • 510 random samples were studied from May through November 1985 at various industries, and the conclusions were as follows. 1. 43.94% of the plants studied operated with semiautomatic control systems; better efficiency was observed at plants where automatic control systems were employed, and larger industries showed a stronger tendency to adopt automatic plant control systems. 2. The overall efficiency of the treatment plants was much higher in the first and second discharge-class categories than in the lower discharge classes; at 80.79% of the plants, daily plant operation was controlled by the operator himself. 3. The main causes of plant stoppage and inefficient discharge control were found to be malfunctioning plant machinery and equipment, or inadequate management decisions made to save chemicals or electricity. 4. The study showed that 60% of the industries treated their wastewater fully, while the rest discharged it with dilution only, without any further treatment; this tendency was pronounced in the 4th and 5th discharge-class categories. 5. 66.17% of the industries had storage capacity to accommodate waste discharge during plant outages, while 92.67% of the air-pollution-discharging industries had no provisions for plant outages. 6. 56.77% of the studied industries maintained 24-hour operation of their discharge control systems, while 18.67% of the air-pollution-discharging industries and 10.53% of the wastewater-discharging industries made no control effort during the night.


Intra-Rater and Inter-Rater Reliability of Brain Surface Intensity Model (BSIM)-Based Cortical Thickness Analysis Using 3T MRI

  • Jeon, Ji Young;Moon, Won-Jin;Moon, Yeon-Sil;Han, Seol-Heui
    • Investigative Magnetic Resonance Imaging
    • /
    • v.19 no.3
    • /
    • pp.168-177
    • /
    • 2015
  • Purpose: Brain surface intensity model (BSIM)-based cortical thickness analysis does not require complicated 3D segmentation of brain gray and white matter. Instead, this technique uses the local intensity profile to compute cortical thickness. The aim of the present study was to evaluate the intra-rater and inter-rater reliability of BSIM-based cortical thickness analysis using images from elderly participants. Materials and Methods: Fifteen healthy elderly participants (ages 55-84 years) were included in this study. High-resolution 3D T1 spoiled gradient recalled-echo (SPGR) images were obtained using 3T MRI. The BSIM-based processing steps included inhomogeneity correction, intensity normalization, skull stripping, atlas registration, extraction of intensity profiles, and calculation of cortical thickness. All processing steps were automatic, with the exception of semiautomatic skull stripping. Individual cortical thicknesses were compared to a database of mean cortical thicknesses of healthy adults in order to produce Z-score thinning maps. Intra-class correlation coefficients (ICCs) were calculated to evaluate inter-rater and intra-rater reliability. Results: ICCs for intra-rater reliability were excellent, ranging from 0.751 to 0.940 in all brain regions except the right occipital, left anterior cingulate, and left and right cerebellum (ICCs = 0.65-0.741). Although ICCs for inter-rater reliability were fair to excellent in most regions, poor inter-rater correlations were observed for the cingulate and occipital regions. Processing time, including manual skull stripping, was 17.07 ± 3.43 min. Z-score maps for all participants indicated that cortical thicknesses were not significantly different from those in the comparison database of healthy adults. Conclusion: BSIM-based cortical thickness measurements provide acceptable intra-rater and inter-rater reliability. We therefore suggest BSIM-based cortical thickness analysis as an adjunct clinical tool to detect cortical atrophy.
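The Z-score thinning map mentioned in the abstract is a standard normalization against a reference database; a minimal sketch, with hypothetical regional thickness values and database statistics (none of these numbers are from the study):

```python
import numpy as np

def thinning_z_map(thickness, db_mean, db_std):
    """Z-score per region: how many standard deviations each measured
    cortical thickness lies from the healthy-adult database mean
    (negative z = thinner than the database average)."""
    t = np.asarray(thickness, dtype=float)
    m = np.asarray(db_mean, dtype=float)
    s = np.asarray(db_std, dtype=float)
    return (t - m) / s

# Hypothetical regional cortical thicknesses (mm) vs. a normative database.
subject = [2.4, 2.1, 2.8]
db_mean = [2.5, 2.5, 2.7]
db_std  = [0.2, 0.2, 0.1]
print(thinning_z_map(subject, db_mean, db_std))
```

Regions with strongly negative z-values would be flagged as atrophic relative to the database, which is how such maps serve as an adjunct screening tool.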

Use of Cardiac Computed Tomography for Ventricular Volumetry in Late Postoperative Patients with Tetralogy of Fallot

  • Kim, Ho Jin;Mun, Da Na;Goo, Hyun Woo;Yun, Tae-Jin
    • Journal of Chest Surgery
    • /
    • v.50 no.2
    • /
    • pp.71-77
    • /
    • 2017
  • Background: Cardiac computed tomography (CT) has emerged as an alternative to magnetic resonance imaging (MRI) for ventricular volumetry. However, the clinical use of cardiac CT requires external validation. Methods: Both cardiac CT and MRI were performed prior to pulmonary valve implantation (PVI) in 11 patients (median age, 19 years) who had undergone total correction of tetralogy of Fallot during infancy. The simplified contouring method (MRI) and a semiautomatic 3-dimensional region-growing method (CT) were used to measure ventricular volumes. Results: The volumetric indices measured by CT and MRI generally correlated well with each other, except for the left ventricular end-systolic volume index (LV-ESVI): the right ventricular end-diastolic volume index (RV-EDVI) (r=0.88, p<0.001), the right ventricular end-systolic volume index (RV-ESVI) (r=0.84, p=0.001), the left ventricular end-diastolic volume index (LV-EDVI) (r=0.90, p=0.001), and the LV-ESVI (r=0.55, p=0.079). While the EDVIs measured by CT were significantly larger than those measured by MRI (median RV-EDVI: 197 mL/m² vs. 175 mL/m², p=0.008; median LV-EDVI: 94 mL/m² vs. 92 mL/m², p=0.026), no significant differences were found for the RV-ESVI or LV-ESVI. Conclusion: The EDVIs measured by cardiac CT were greater than those measured by MRI, whereas the ESVIs measured by CT and MRI were comparable. The volumetric characteristics of these two diagnostic modalities should be taken into account when assessing indications for late PVI after tetralogy of Fallot repair.
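The r values reported above are Pearson correlation coefficients between paired CT and MRI measurements. A minimal sketch of that computation, using made-up paired volume indices (the numbers below are illustrative, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical paired end-diastolic volume indices (mL/m^2), one pair per patient.
ct  = [197, 180, 210, 175, 190, 205]
mri = [175, 170, 195, 160, 178, 188]
print(round(pearson_r(ct, mri), 3))
```

Note that a high r only shows the two modalities rank patients similarly; the systematic CT-vs-MRI offset the study reports is a separate question, which is why the abstract also compares medians with paired tests.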

Lung Segmentation Considering Global and Local Properties in Chest X-ray Images (흉부 X선 영상에서의 전역 및 지역 특성을 고려한 폐 영역 분할 연구)

  • Jeon, Woong-Gi;Kim, Tae-Yun;Kim, Sung Jun;Choi, Heung-Kuk;Kim, Kwang Gi
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.7
    • /
    • pp.829-840
    • /
    • 2013
  • In this paper, we propose a new lung segmentation method for chest X-ray images that takes both global and local properties into account. First, an initial lung segmentation is computed by applying the active shape model (ASM), which constrains the deformable model to the pre-learned shape while searching for image boundaries. At the second segmentation stage, the localizing region-based active contour model (LRACM) is applied to correct various regional errors in the initial segmentation. Finally, to measure similarity, we calculated the Dice coefficient between the area segmented by each semiautomatic method and the area manually segmented by a radiologist. The comparison experiments were performed using 5 chest X-ray images. In our experiment, the Dice coefficient against the manually segmented area was 95.33% ± 0.93% for the proposed method. Effective segmentation methods will be essential for the development of computer-aided diagnosis systems for more accurate early diagnosis and prognosis of lung cancer in chest X-ray images.
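The Dice coefficient used for evaluation above is 2|A∩B| / (|A| + |B|) for two binary masks A and B; a minimal sketch on toy masks (the arrays here merely stand in for an automatic and a manual lung segmentation):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy 2D masks standing in for an automatic and a manual segmentation.
auto_mask = np.zeros((8, 8), dtype=bool)
auto_mask[2:6, 2:6] = True
manual_mask = np.zeros((8, 8), dtype=bool)
manual_mask[3:7, 2:6] = True

print(round(dice_coefficient(auto_mask, manual_mask), 4))
```

A Dice value of 1.0 means perfect overlap and 0.0 means none, so the 95.33% reported above indicates the semiautomatic contour nearly coincides with the radiologist's.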