• Title/Summary/Keyword: Automatic Extraction Algorithm


Spatio-Temporal Analysis of Trajectory for Pedestrian Activity Recognition

  • Kim, Young-Nam;Park, Jin-Hee;Kim, Moon-Hyun
    • Journal of Electrical Engineering and Technology / v.13 no.2 / pp.961-968 / 2018
  • Recently, research on the automatic recognition of human activities has been carried out actively, spurred by the emergence of various intelligent systems. Since a large amount of visual data can be secured through closed-circuit television, human behavior must be recognized in dynamic rather than static situations. In this paper, we propose a new intelligent human activity recognition model that uses trajectory information extracted from video sequences. The proposed model consists of three steps: trajectory segmentation and partitioning, feature extraction, and behavior learning. First, the entire trajectory is fuzzy-partitioned according to its motion characteristics, and then temporal and spatial features are extracted. Using the extracted features, four pedestrian behaviors were modeled with a decision tree learning algorithm, and a performance evaluation was carried out. The experiments in this paper were conducted on the CAVIAR data set. Experimental results show that trajectories provide good activity recognition accuracy by capturing instantaneous properties and distinctive regional properties.
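Below is a minimal, hypothetical Python sketch of the kind of pipeline this abstract describes: extract simple temporal and spatial trajectory features, then fit a decision tree. The feature definitions and synthetic data are illustrative assumptions, not the paper's exact formulas or the CAVIAR data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def trajectory_features(traj):
    """traj: (N, 2) array of (x, y) positions sampled at a fixed frame rate."""
    v = np.diff(traj, axis=0)                    # frame-to-frame displacement
    speed = np.linalg.norm(v, axis=1)            # instantaneous speed
    heading = np.arctan2(v[:, 1], v[:, 0])       # instantaneous direction
    return np.array([
        speed.mean(), speed.std(),               # temporal properties
        np.abs(np.diff(heading)).mean(),         # direction-change proxy
        np.ptp(traj[:, 0]), np.ptp(traj[:, 1]),  # spatial extent (regional)
    ])

# Hypothetical stand-in data: slow vs. fast random-walk trajectories.
rng = np.random.default_rng(0)
trajs = [np.cumsum(rng.normal(scale=s, size=(50, 2)), axis=0)
         for s in (0.5, 2.0) for _ in range(20)]
labels = [0] * 20 + [1] * 20
X = np.stack([trajectory_features(t) for t in trajs])
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, labels)
```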

A Study on the Feature Region Segmentation for the Analysis of Eye-fundus Images (안저영상(眼底映像) 해석(解析)을 위한 특징영역(特徵領域)의 분할(分割)에 관한 연구(硏究))

  • Kang, Jeon-Kwun;Kim, Seung-Bum;Ku, Ja-Yl;Han, Young-Hwan;Hong, Hong-Seung
    • Proceedings of the KOSOMBE Conference / v.1993 no.11 / pp.27-30 / 1993
  • Information about retinal blood vessels can be used in grading disease severity or as part of the process of automated diagnosis of diseases with ocular manifestations. In this paper, we address the problem of detecting retinal blood vessels and the optic disk (papilla) in eye-fundus images. We introduce an algorithm for feature extraction based on fuzzy c-means clustering (FCM). The results are compared to those obtained with other methods. The automatic detection of retinal blood vessels and the optic disk in eye-fundus images could help physicians in diagnosing ocular diseases.
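As a companion to this abstract, here is a plain NumPy sketch of fuzzy c-means clustering applied to pixel intensities. The toy image and the two-cluster setup are assumptions; the paper's actual feature extraction is not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means. X: (n_samples, n_features). Returns (centers, U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical toy "fundus" image: a dark vessel strip on a brighter background.
rng = np.random.default_rng(1)
gray = rng.normal(100, 5, size=(64, 64))
gray[20:44, 30:34] = 30
centers, U = fuzzy_c_means(gray.reshape(-1, 1), c=2)
vessel_mask = U.argmax(axis=1).reshape(gray.shape)  # hard label per pixel
```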


Obstacles modeling method in cluttered environments using satellite images and its application to path planning for USV

  • Shi, Binghua;Su, Yixin;Zhang, Huajun;Liu, Jiawen;Wan, Lili
    • International Journal of Naval Architecture and Ocean Engineering / v.11 no.1 / pp.202-210 / 2019
  • Obstacle modeling is a fundamental and significant issue for the path planning and automatic navigation of an Unmanned Surface Vehicle (USV). In this study, we propose a novel obstacle modeling method based on high-resolution satellite images. It involves two main steps: extraction of obstacle features and construction of convex hulls. To extract the obstacle features, a series of operations such as sea-land segmentation, enhancement of obstacle details, and morphological transformations are applied. Furthermore, an efficient algorithm is proposed to enclose the obstacles in convex hulls, which mainly comprises cluster analysis of the obstacle areas and rules for determining edge points. Experimental results demonstrate that the models obtained by the proposed method closely match manually constructed ones. As an application, the model is used to find the optimal path for a USV. The study shows that the obstacle modeling method is feasible and can be applied to USV path planning.
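The following OpenCV sketch illustrates, under stated assumptions, the two-step chain the abstract names: threshold-based sea-land segmentation with morphological cleanup, then one convex hull per connected obstacle. The paper's detail-enhancement and edge-point rules are not public here, so simple stand-ins are used.

```python
import cv2
import numpy as np

def obstacle_hulls(gray, thresh=128):
    # Sea-land segmentation by global threshold (Otsu would also work).
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening/closing to suppress speckle and fill small gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # One convex hull per connected obstacle region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.convexHull(c) for c in contours]

# Synthetic stand-in for a satellite image: two bright "obstacles" on dark sea.
gray = np.zeros((200, 200), np.uint8)
cv2.circle(gray, (60, 60), 20, 255, -1)
cv2.rectangle(gray, (120, 120), (170, 150), 255, -1)
hulls = obstacle_hulls(gray)
print(len(hulls), "obstacle hulls")
```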

A Study on Heavy Rainfall Guidance Realized with the Aid of Neuro-Fuzzy and SVR Algorithm Using AWS Data (AWS자료 기반 SVR과 뉴로-퍼지 알고리즘 구현 호우주의보 가이던스 연구)

  • Kim, Hyun-Myung;Oh, Sung-Kwun;Kim, Yong-Hyuk;Lee, Yong-Hee
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.4 / pp.526-533 / 2014
  • In this study, we introduce a design methodology to develop guidance for issuing heavy rainfall warnings using both RBFNN (radial basis function neural network) and SVR (support vector regression) models, and then carry out comparative studies between the two pattern classifiers. Each classifier is designed as an architecture realized with the aid of optimization and pre-processing algorithms. Because the predictive performance of the existing heavy rainfall forecast system is strongly affected by how the meteorological data are processed, an under-sampling method is used to pre-process the input data; in addition, data discretization and feature extraction are exploited for SVR, and FCM clustering and PSO are exploited for the RBFNNs. The observed AWS (Automatic Weather Station) data supplied by the KMA (Korea Meteorological Administration) are used for training and testing the proposed classifiers. The proposed classifiers provide the information needed to issue a heavy rain warning 1 to 3 hours in advance, using the selected meteorological data and the precipitation amount accumulated over 1 to 12 hours from the AWS data. For the performance evaluation of each classifier, the ETS (Equitable Threat Score) is used as the standard verification measure of predictive ability. The comparative studies show that the neuro-fuzzy method is effective for improving performance and yields stable predictive results for heavy rainfall warning guidance.
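Here is a minimal, hypothetical sketch of the SVR half of such guidance: under-sample the dominant dry class, then regress accumulated precipitation from AWS-style features. The feature set, warning threshold, and synthetic data are assumptions, not KMA's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # e.g. temperature, humidity, wind, pressure
y = np.maximum(0, X[:, 1] * 3 + rng.normal(size=500))  # synthetic 3-h precipitation

# Under-sampling: keep all rain events plus an equal number of dry samples.
rain = y > 1.0
dry_idx = rng.choice(np.flatnonzero(~rain), size=rain.sum(), replace=False)
keep = np.concatenate([np.flatnonzero(rain), dry_idx])

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X[keep], y[keep])
warning = model.predict(X[:5]) > 1.0   # flag an advisory when forecast exceeds threshold
```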

A Review on Advanced Methodologies to Identify the Breast Cancer Classification using the Deep Learning Techniques

  • Bandaru, Satish Babu;Babu, G. Rama Mohan
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.420-426 / 2022
  • Breast cancer is among the cancers that can be cured when the disease is diagnosed early, before it has spread to other areas of the body. Automatic Analysis of Diagnostic Tests (AAT) is automated assistance for physicians that can deliver reliable findings for analyzing critically dangerous diseases. Deep learning, a family of machine learning methods, has grown at an astonishing pace in recent years and is used to search for and render diagnoses in fields from banking to medicine. We attempt to create a deep learning algorithm that can reliably diagnose breast cancer in mammograms. We want the algorithm to label an image as cancer or not cancer, allowing the use of a full testing dataset with either strong clinical annotations in the training data or only the cancer status, in which case just a few images of cancers or non-cancers are annotated. Even with this technique, the images are annotated with the condition, and an optional portion of the annotated image then acts as the mark. The final stage of the suggested system does not need any labels to be accessible during model training. Furthermore, the results of the review suggest that deep learning approaches have surpassed the state of the art in tumor identification, feature extraction, and classification. The paper explains three ways in which learning algorithms are applied: training a network from scratch, transferring certain deep learning concepts and constraints into a network, and reducing the number of parameters in the trained networks, which helps expand the scope of the networks. Researchers in economically developing countries have applied deep learning imaging devices to cancer detection, while cancer rates have risen sharply in Africa. The Convolutional Neural Network (CNN) is a kind of deep learning that can assist with a variety of other tasks, such as speech recognition, image recognition, and classification. To accomplish this goal, in this article we use a CNN to categorize and identify breast cancer images from databases available from the US Centers for Disease Control and Prevention.
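For illustration only, a small PyTorch CNN for binary mammogram classification is sketched below; the architecture, input size, and dummy batch are assumptions and do not reproduce any model from the reviewed literature.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # cancer / not cancer

    def forward(self, x):                  # x: (batch, 1, 64, 64) grayscale
        h = self.features(x)               # 64x64 -> 16x16 after two poolings
        return self.classifier(h.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 1, 64, 64))  # dummy batch of 4 images
print(logits.shape)                        # torch.Size([4, 2])
```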

An Algorithm for Translation from RDB Schema Model to XML Schema Model Considering Implicit Referential Integrity (묵시적 참조 무결성을 고려한 관계형 스키마 모델의 XML 스키마 모델 변환 알고리즘)

  • Kim, Jin-Hyung;Jeong, Dong-Won;Baik, Doo-Kwon
    • Journal of KIISE:Databases / v.33 no.5 / pp.526-537 / 2006
  • The most representative approach for efficiently storing XML data is to store it in relational databases. The merit of this approach is that it easily accommodates the realistic situation that most data are still stored in relational databases. This approach requires converting XML data into relational data or relational data into XML data. The most important issue in the translation is to reflect the structural and semantic relations of the RDB exactly in the XML schema model. Many studies have addressed this issue, but the existing methods have several problems: they do not cover structural semantics, or they support only explicit referential integrity relations. In this paper, we propose an algorithm for automatically extracting implicit referential integrities. We also design and implement the suggested algorithm and perform comparative evaluations using translated XML documents. The proposed algorithm provides several benefits, such as improved extraction and conversion of semantic information and sufficient referential integrity in the target databases. By using the suggested algorithm, we can completely guarantee not only the explicit but also the implicit referential integrities of the initial relational schema model; that is, we can create a more exact XML schema model.
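A toy version of the idea, under stated assumptions, might look like the heuristic below: flag columns that are not declared foreign keys but match another table's primary key by a naming convention and type. The paper's algorithm is considerably more elaborate.

```python
def implicit_foreign_keys(schema):
    """schema: {table: {'pk': [cols], 'cols': {col: sqltype}}} -> candidate FKs."""
    candidates = []
    for table, t in schema.items():
        for col, col_type in t['cols'].items():
            for other, o in schema.items():
                if other == table:
                    continue
                for pk in o['pk']:
                    # Heuristic: column named <other>_<pk> with a matching type.
                    if col == f"{other}_{pk}" and col_type == o['cols'][pk]:
                        candidates.append((table, col, other, pk))
    return candidates

# Hypothetical two-table schema with an undeclared emp -> dept reference.
schema = {
    'dept': {'pk': ['id'], 'cols': {'id': 'int', 'name': 'varchar'}},
    'emp':  {'pk': ['id'], 'cols': {'id': 'int', 'dept_id': 'int'}},
}
print(implicit_foreign_keys(schema))   # [('emp', 'dept_id', 'dept', 'id')]
```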

Container Image Recognition using Fuzzy-based Noise Removal Method and ART2-based Self-Organizing Supervised Learning Algorithm (퍼지 기반 잡음 제거 방법과 ART2 기반 자가 생성 지도 학습 알고리즘을 이용한 컨테이너 인식 시스템)

  • Kim, Kwang-Baek;Heo, Gyeong-Yong;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.7 / pp.1380-1386 / 2007
  • This paper proposes an automatic recognition system for shipping container identifiers using a fuzzy-based noise removal method and an ART2-based self-organizing supervised learning algorithm. Generally, the identifiers of a shipping container have the feature that the color of the characters is black or white. Considering this feature, all areas of a container image except those with black or white colors are regarded as noise, and identifier areas are discriminated from noise areas by a fuzzy-based noise detection method. Identifier areas are extracted by applying edge detection with the Sobel mask, followed by vertical and horizontal block extraction, to the noise-removed image. The extracted areas are binarized with an iterative binarization algorithm, and individual identifiers are extracted by an 8-directional contour tracking method. For identifier recognition, this paper proposes an ART2-based self-organizing supervised learning algorithm that improves learning performance by applying generalized delta learning and the Delta-bar-Delta algorithm. Experiments using real images of shipping containers show that the proposed identifier extraction method and the ART2-based self-organizing supervised learning algorithm improve on previously proposed methods.
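Below is a rough Python stand-in for two early stages of this pipeline: a simple black-or-white membership function in place of the paper's fuzzy noise rules, followed by Sobel edge detection. The membership width and the synthetic image are assumptions.

```python
import cv2
import numpy as np

def fuzzy_noise_mask(gray, width=60.0):
    """Membership to 'character color' is high near black (0) or white (255)."""
    mu_black = np.clip(1.0 - gray / width, 0.0, 1.0)
    mu_white = np.clip(1.0 - (255.0 - gray) / width, 0.0, 1.0)
    return np.maximum(mu_black, mu_white) < 0.5   # True where pixel is "noise"

# Hypothetical test image: dark characters on a mid-gray (noisy) background.
gray = np.full((100, 300), 128, np.uint8)
cv2.putText(gray, 'ABCU1234567', (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 0, 2)

cleaned = gray.copy()
cleaned[fuzzy_noise_mask(gray.astype(float))] = 255    # blank out noise pixels
edges = cv2.Sobel(cleaned, cv2.CV_64F, 1, 0, ksize=3)  # vertical strokes stand out
```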

Development and Usability Testing of a User-Centered 3D Virtual Liver Surgery Planning System

  • Yang, Xiaopeng;Yu, Hee Chul;Choi, Younggeun;Yang, Jae Do;Cho, Baik Hwan;You, Heecheon
    • Journal of the Ergonomics Society of Korea / v.36 no.1 / pp.37-52 / 2017
  • Objective: The present study developed a user-centered 3D virtual liver surgery planning (VLSP) system called Dr. Liver to provide preoperative information for safe and rational surgery. Background: Preoperative 3D VLSP is needed for patients' safety in liver surgery. Existing systems either do not provide functions specialized for liver surgery planning or do not provide functions for cross-checking the accuracy of analysis results. Method: Use scenarios of Dr. Liver were developed through literature review, benchmarking, and interviews with surgeons. User interfaces of Dr. Liver with various user-friendly features (e.g., a context-sensitive hotkey menu and a 3D view navigation box) were designed. Novel image processing algorithms (e.g., a hybrid semi-automatic algorithm for liver extraction and a customized region growing algorithm for vessel extraction) were developed for accurate and efficient liver surgery planning. Usability problems of a preliminary version of Dr. Liver were identified by surgeons and system developers, and design changes were then made to resolve them. Results: Usability testing showed that the revised version of Dr. Liver achieved a high level of satisfaction (6.1 ± 0.8 out of 7) and acceptable time efficiency (26.7 ± 0.9 min) in liver surgery planning. Conclusion: Involving usability testing in the system development process from the beginning is useful for identifying potential usability problems and for shortening the system development period and cost. Application: The development and evaluation process of Dr. Liver in this study can be referred to when designing a user-centered system.
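As a generic illustration of the vessel-extraction idea (the customized criteria in Dr. Liver are not public here), below is a plain region-growing sketch that grows from a seed while neighbor intensities stay within a tolerance of the running region mean.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a 4-connected region from seed (row, col) by intensity similarity."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    mean, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
               and abs(float(img[ny, nx]) - mean) <= tol:
                mask[ny, nx] = True
                count += 1
                mean += (float(img[ny, nx]) - mean) / count  # update running mean
                q.append((ny, nx))
    return mask

# Synthetic stand-in image: a bright "vessel" block on a darker background.
img = np.full((64, 64), 50.0)
img[20:40, 20:40] = 120.0
print(region_grow(img, (30, 30)).sum())   # 400 pixels recovered
```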

Fingerprint Image Quality Analysis for Knowledge-based Image Enhancement (지식기반 영상개선을 위한 지문영상의 품질분석)

  • 윤은경;조성배
    • Journal of KIISE:Software and Applications / v.31 no.7 / pp.911-921 / 2004
  • Accurate minutiae extraction from input fingerprint images is one of the critical modules in a robust automatic fingerprint identification system. However, the performance of minutiae extraction depends heavily on the quality of the input fingerprint images. If the preprocessing in the image enhancement step is performed according to the characteristics of the fingerprint image, the system performance will be more robust. In this paper, we propose a knowledge-based preprocessing method that extracts five features from the fingerprint images (the mean and variance of gray values, block directional difference, orientation change level, and ridge-valley thickness ratio), analyzes image quality with Ward's clustering algorithm, and enhances the images with respect to their oily/neutral/dry characteristics. Experimental results using NIST DB 4 and the Inha University DB show that the clustering algorithm distinguishes the image quality characteristics well. In addition, the performance of the proposed method is assessed using a quality index and the block directional difference. The results indicate that the proposed method improves both the quality index and the block directional difference.
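A minimal sketch of the quality-analysis step follows: cluster five-dimensional feature vectors with Ward's linkage and cut the tree into three groups (oily/neutral/dry). The feature values are fabricated stand-ins, not measurements from NIST DB 4.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical feature vectors: [gray mean, gray variance, block directional
# difference, orientation change level, ridge-valley thickness ratio].
rng = np.random.default_rng(0)
oily    = rng.normal([ 80, 900,  5, 2, 1.6], 1.0, size=(20, 5))
neutral = rng.normal([120, 650, 10, 4, 1.0], 1.0, size=(20, 5))
dry     = rng.normal([160, 400, 15, 6, 0.6], 1.0, size=(20, 5))
X = np.vstack([oily, neutral, dry])

Z = linkage(X, method='ward')                    # Ward's minimum-variance linkage
labels = fcluster(Z, t=3, criterion='maxclust')  # 3 quality groups
```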

Background Subtraction Algorithm Based on Multiple Interval Pixel Sampling (다중 구간 샘플링에 기반한 배경제거 알고리즘)

  • Lee, Dongeun;Choi, Young Kyu
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.27-34 / 2013
  • Background subtraction is one of the key techniques for automatic video content analysis, especially for visual detection and tracking of moving objects. In this paper, we present a new sample-based technique for background extraction that provides a background image as well as a background model. To handle both high-frequency and low-frequency events at the same time, multiple-interval background models are adopted. The main innovation concerns the use of a confidence factor to select the best model from the multiple-interval background models. To our knowledge, this is the first time a confidence factor has been used for merging several background models in the field of background extraction. Experimental results reveal that our approach based on multiple-interval sampling works well in complicated situations containing moving objects of various speeds and environmental changes.
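The sketch below illustrates the general idea under stated assumptions: keep one sample-based model per sampling interval and, per pixel, trust the decision of the model whose samples agree best with the current frame. The confidence measure here is an illustrative stand-in for the paper's factor.

```python
import numpy as np

class IntervalModel:
    """Sample-based background model updated once every `interval` frames."""
    def __init__(self, frame, n=10, interval=1):
        self.samples = np.repeat(frame[None].astype(float), n, axis=0)
        self.interval, self.t, self.i = interval, 0, 0
        self.confidence = np.ones(frame.shape)

    def foreground(self, frame, radius=20, min_match=2):
        match = (np.abs(self.samples - frame) < radius).sum(axis=0)
        self.confidence = match / len(self.samples)   # agreement with the model
        self.t += 1
        if self.t % self.interval == 0:               # sample at this interval
            self.samples[self.i] = frame
            self.i = (self.i + 1) % len(self.samples)
        return match < min_match                      # True = moving pixel

frame0 = np.zeros((48, 64))
fast = IntervalModel(frame0, interval=1)    # tracks high-frequency changes
slow = IntervalModel(frame0, interval=30)   # tracks slow scene evolution

frame = frame0.copy()
frame[10:20, 10:20] = 255                   # an object enters the scene
fg_fast, fg_slow = fast.foreground(frame), slow.foreground(frame)
# Per pixel, keep the decision of the more confident model.
fg = np.where(fast.confidence >= slow.confidence, fg_fast, fg_slow)
print(fg.sum())                             # 100 foreground pixels
```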