• Title/Summary/Keyword: Edge Feature Image

Search Results: 323

Localizing Head and Shoulder Line Using Statistical Learning (통계학적 학습을 이용한 머리와 어깨선의 위치 찾기)

  • Kwon, Mu-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.2C
    • /
    • pp.141-149
    • /
    • 2007
  • Associating the shoulder line with the head location of the human body is useful for verifying, localizing, and tracking persons in an image. Since the head line and the shoulder line, what we call the ${\Omega}$-shape, move together in a consistent way within a limited range of deformation, we can build a statistical shape model using the Active Shape Model (ASM). However, when the conventional ASM is applied to ${\Omega}$-shape fitting, it is very sensitive to background edges and clutter because it relies only on the local edge or gradient. Even though appearance is a good alternative feature for matching the target object to the image, it is difficult to learn the appearance of the ${\Omega}$-shape because of the significant differences between people's skin, hair, and clothes, and because appearance does not remain the same throughout the entire video. Therefore, instead of learning appearance or updating appearance as it changes, we model the discriminative appearance, where each pixel is classified into head, torso, and background classes, and update the classifier to obtain the appropriate discriminative appearance in the current frame. Accordingly, we make use of two features in fitting the ${\Omega}$-shape: the edge gradient, which is used for localization, and the discriminative appearance, which contributes to the stability of the tracker. The simulation results show that the proposed method is very robust to pose change, occlusion, and illumination change in tracking the head and shoulder line of people. Another advantage is that the proposed method operates in real time.
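The discriminative-appearance idea described above can be sketched with a minimal per-pixel classifier: per-class intensity histograms label each pixel as head, torso, or background, and the histograms are blended with each new frame so the classifier follows appearance change. The bin count, learning rate, and histogram classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class DiscriminativeAppearance:
    """Minimal sketch: classify pixels into head/torso/background from
    per-class intensity histograms, updated online each frame."""

    def __init__(self, bins=16, lr=0.2):
        self.bins = bins
        self.lr = lr                                 # illustrative update rate
        self.hist = np.full((3, bins), 1.0 / bins)   # start uniform

    def classify(self, gray):
        """Per-pixel class index map (0=head, 1=torso, 2=background)."""
        idx = (np.asarray(gray, int) * self.bins) // 256
        return np.argmax(self.hist[:, idx], axis=0)

    def update(self, gray, labels):
        """Blend in histograms measured from the current frame's labels."""
        idx = (np.asarray(gray, int) * self.bins) // 256
        for c in range(3):
            h = np.bincount(idx[labels == c].ravel(), minlength=self.bins)
            if h.sum():
                self.hist[c] = (1 - self.lr) * self.hist[c] + self.lr * h / h.sum()
```

A real tracker would combine this class map with the edge-gradient term when fitting the shape, as the abstract describes.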

Automated Satellite Image Co-Registration using Pre-Qualified Area Matching and Studentized Outlier Detection (사전검수영역기반정합법과 't-분포 과대오차검출법'을 이용한 위성영상의 '자동 영상좌표 상호등록')

  • Kim, Jong Hong;Heo, Joon;Sohn, Hong Gyoo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.4D
    • /
    • pp.687-693
    • /
    • 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which serves as a reference image while the other is geometrically transformed to match it. In order to improve the efficiency and effectiveness of the co-registration approach, the authors proposed a pre-qualified area matching algorithm composed of feature extraction with the Canny operator and area matching with the cross-correlation coefficient. To refine the matching points, outlier detection using studentized residuals was applied, iteratively removing outliers at the three-standard-deviation level. Through the pre-qualification and refining processes, the computation time was significantly reduced and the registration accuracy was enhanced. A prototype of the proposed algorithm was implemented, and a performance test on 3 Landsat images of Korea showed: (1) the average RMSE of the approach was 0.435 pixel; (2) the average number of matching points was over 25,573; (3) the average processing time was 4.2 min per image on a regular workstation equipped with a 3 GHz Intel Pentium 4 CPU and 1 GB of RAM. The proposed approach achieved robustness, full automation, and time efficiency.
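The iterative studentized-residual refinement can be sketched as follows. A simple 1-D line fit stands in for the paper's geometric transformation; the loop, the 3-sigma cutoff, and the one-worst-point-per-iteration policy are the illustrative part, not the authors' implementation.

```python
import numpy as np

def remove_outliers_studentized(x, y, threshold=3.0):
    """Iteratively fit y = a*x + b and drop the worst point whose
    (internally) studentized residual exceeds `threshold` standard
    deviations, mirroring the 3-sigma refinement step."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    keep = np.ones(len(x), dtype=bool)
    while keep.sum() > 3:
        a, b = np.polyfit(x[keep], y[keep], 1)
        resid = y[keep] - (a * x[keep] + b)
        s = resid.std(ddof=2)        # residual std, 2 fitted parameters
        if s < 1e-8:                 # numerically exact fit: nothing left to remove
            break
        t = np.abs(resid) / s        # studentized residuals
        worst = int(np.argmax(t))
        if t[worst] <= threshold:
            break
        keep[np.flatnonzero(keep)[worst]] = False
    return keep
```

Removing only the single worst point per iteration and refitting avoids masking, where one gross outlier inflates the residual scale enough to hide others.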

Virtual core point detection and ROI extraction for finger vein recognition (지정맥 인식을 위한 가상 코어점 검출 및 ROI 추출)

  • Lee, Ju-Won;Lee, Byeong-Ro
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.3
    • /
    • pp.249-255
    • /
    • 2017
  • Finger vein recognition is a technology that acquires a finger vein image by illuminating the finger with infrared light and authenticates a person through processes such as feature extraction and matching. To detect the finger edge, a 2D mask-based convolution method can be used, but it takes too much computation time when run on a low-cost microprocessor or microcontroller. To solve this problem and improve the recognition rate, this study proposed a method that extracts the region of interest based on virtual core points, using moving average filtering driven by a threshold on the absolute difference between pixels, without any 2D convolution or 2D masks. To evaluate the performance of the proposed method, 600 finger vein images were used to compare the edge extraction speed and ROI extraction accuracy of the proposed method against existing methods. The comparison showed that the processing speed of the proposed method was at least twice as fast as that of the existing methods, and its ROI extraction accuracy was 6% higher. From these results, the proposed method is expected to offer high processing speed and a high recognition rate when applied to inexpensive microprocessors.

Dual Dictionary Learning for Cell Segmentation in Bright-field Microscopy Images (명시야 현미경 영상에서의 세포 분할을 위한 이중 사전 학습 기법)

  • Lee, Gyuhyun;Quan, Tran Minh;Jeong, Won-Ki
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.21-29
    • /
    • 2016
  • Cell segmentation is an important but time-consuming and laborious task in biological image analysis. An automated, robust, and fast method is required to overcome such burdensome processes. These needs are, however, challenging due to the variety of cell shapes and intensities and to incomplete boundaries. Precise cell segmentation makes it possible to reach a pathological diagnosis from tissue samples. A vast body of literature exists on cell segmentation in microscopy images [1]. The majority of existing work is based only on input images and predefined feature models - for example, using a deformable model to extract edge boundaries in the image. Only a handful of recent methods employ data-driven approaches such as supervised learning. In this paper, we propose a novel data-driven cell segmentation algorithm for bright-field microscopy images. The proposed method minimizes an energy formula defined by two dictionaries - one for the input images and the other for their manual segmentation results - and a common sparse code, which aims to find the pixel-level classification by deploying the learned dictionaries on new images. In contrast to deformable models, we do not need prior knowledge of the objects. We also employ convolutional sparse coding and the Alternating Direction Method of Multipliers (ADMM) for fast dictionary learning and energy minimization. Unlike an existing method [1], our method trains both dictionaries concurrently and is implemented on the GPU for faster performance.

A Study on the Observation of Soil Moisture Conditions and its Applied Possibility in Agriculture Using Land Surface Temperature and NDVI from Landsat-8 OLI/TIRS Satellite Image (Landsat-8 OLI/TIRS 위성영상의 지표온도와 식생지수를 이용한 토양의 수분 상태 관측 및 농업분야에의 응용 가능성 연구)

  • Chae, Sung-Ho;Park, Sung-Hwan;Lee, Moung-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.6_1
    • /
    • pp.931-946
    • /
    • 2017
  • The purpose of this study is to observe and analyze soil moisture conditions at high resolution and to evaluate their applicability to agriculture. For this purpose, we used three Landsat-8 OLI (Operational Land Imager)/TIRS (Thermal Infrared Sensor) optical and thermal infrared satellite images taken from May to June of 2015, 2016, and 2017, covering rural areas of Jeollabuk-do, where 46% of the land is agricultural. The soil moisture condition on each date in the study area can be effectively characterized through the SPI3 (Standardized Precipitation Index) drought index; the three images correspond to near-normal, moderately wet, and moderately dry soil moisture conditions. The Temperature Vegetation Dryness Index (TVDI) was calculated to observe the soil moisture status from the Landsat-8 OLI/TIRS images with these different soil moisture conditions and to compare it against the soil moisture conditions obtained from the SPI3 drought index. TVDI is estimated from the relationship between LST (Land Surface Temperature) and NDVI (Normalized Difference Vegetation Index) calculated from the Landsat-8 OLI/TIRS satellite images. The maximum and minimum values of LST as a function of NDVI are extracted from the distribution of pixels in the LST-NDVI feature space, and the dry and wet edges of LST with respect to NDVI are determined by linear regression; the TVDI value is then obtained as the relative position of the LST value between the two edges. We classified the relative soil moisture conditions derived from the TVDI values into five stages - very wet, wet, normal, dry, and very dry - and compared them to the soil moisture conditions obtained from SPI3. Because of the rice-planting season from May to June, 62% of each image was classified as wet or very wet, owing to the paddy fields that occupy the largest proportion of the scene; the pixels classified as normal were attributed to the influence of the upland field areas in the image. The TVDI classification results for the whole image roughly corresponded to the SPI3 soil moisture condition, but they did not correspond in the subdivided classes of very dry, wet, and very wet. In addition, after extracting and classifying the agricultural areas into paddy fields and upland fields, the paddy field areas did not correspond to the SPI3 drought index in the very dry, normal, and very wet classes, and the upland field areas did not correspond in the normal class. This is considered to be a problem in dry/wet edge estimation caused by outliers such as extremely dry bare soil, very wet paddy areas, water, clouds, and mountain topography effects (shadow). However, in the agricultural areas, and especially the upland fields, it was possible from May to June to effectively observe the soil moisture conditions at the subdivided level. Applying this method with high-spatial-resolution optical satellites is expected to enable observation of temporal and spatial changes in soil moisture status in agricultural areas and forecasting of agricultural production.
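The TVDI computation described above can be sketched as follows: bin the pixels by NDVI, take the maximum and minimum LST per bin, fit the dry and wet edges by linear regression, and normalize each pixel's LST between the two fitted edges. The bin count and the simple per-bin min/max edge picking are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def tvdi(lst, ndvi, bins=20):
    """TVDI = (LST - LST_wet) / (LST_dry - LST_wet), where the dry edge
    (max LST per NDVI bin) and wet edge (min LST per bin) are fitted by
    linear regression in the LST-NDVI feature space."""
    lst, ndvi = np.asarray(lst, float), np.asarray(ndvi, float)
    edges = np.linspace(ndvi.min(), ndvi.max(), bins + 1)
    centers, lst_max, lst_min = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (ndvi >= lo) & (ndvi < hi)
        if m.any():
            centers.append((lo + hi) / 2)
            lst_max.append(lst[m].max())   # dry-edge sample for this bin
            lst_min.append(lst[m].min())   # wet-edge sample for this bin
    a_d, b_d = np.polyfit(centers, lst_max, 1)   # dry edge: LST = a_d*NDVI + b_d
    a_w, b_w = np.polyfit(centers, lst_min, 1)   # wet edge
    lst_dry = a_d * ndvi + b_d
    lst_wet = a_w * ndvi + b_w
    return (lst - lst_wet) / (lst_dry - lst_wet)
```

The sensitivity to outliers that the abstract reports is visible here: a single cloud or water pixel with extreme LST shifts the per-bin max/min and thus the fitted edges.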

3D Surface Reconstruction of Moving Object Using Multi-Laser Stripes Irradiation (멀티 레이저 라인 조사를 이용한 비등속 이동물체의 3차원 형상 복원)

  • Yi, Young-Youl;Ye, Soo-Young;Nam, Ki-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.2 s.314
    • /
    • pp.144-152
    • /
    • 2007
  • We propose a 3D modeling method for surface inspection of a non-uniformly moving object. Laser lines projected on the object reflect its surface curvature, so 3D surface information can be acquired by analyzing the projected laser lines. In this paper, we use a multi-line laser to combine the robustness of the single-stripe method with the high speed of the single-frame method. Binarization and a channel edge extraction method were used for robust laser line extraction, and a new labeling method was used to label the laser lines. We acquired correspondence information between the 3D-reconstructed frames by feature point matching and registered the frames into one whole model. We verified the superiority of the proposed method by applying it to a container damage inspection system.

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capability has advantages in complementariness and cooperation, yielding better information on the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using the monocular vision, the robot utilizes image features consisting of vertical edge lines from the input camera images, which serve as natural landmark points in the self-localization process. With the laser structured light sensor, it instead utilizes geometric features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group on its own is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously in terms of information for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments was performed, and the experimental results are discussed in detail.
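One common way to realize this kind of reliability-weighted Bayesian fusion is a tempered product of the two sensors' pose likelihoods, where each likelihood is raised to the power of that sensor's reliability before multiplication and normalization. This log-linear pooling, and the reliability values below, are illustrative assumptions; the paper's exact reliability functions are determined experimentally.

```python
import numpy as np

def fuse_pose_likelihoods(grid_vision, grid_laser, r_vision=0.7, r_laser=0.9):
    """Fuse two pose likelihood grids: temper each by its reliability
    r in [0, 1], multiply, and normalize to a posterior."""
    pv = np.asarray(grid_vision, float) ** r_vision
    pl = np.asarray(grid_laser, float) ** r_laser
    post = pv * pl                 # independent-sensor product
    return post / post.sum()       # normalize to a probability distribution
```

A reliability near 0 flattens that sensor's contribution toward uniform (it is nearly ignored), while a reliability of 1 uses its likelihood at full strength.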

Intensity Gradient filter and Median Filter based Video Sequence Deinterlacing Using Texture Detection (텍스쳐 감지를 이용한 화소값 기울기 필터 및 중간값 필터 기반의 비디오 시퀀스 디인터레이싱)

  • Kang, Kun-Hwa;Ku, Su-Il;Jeong, Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.4C
    • /
    • pp.371-379
    • /
    • 2009
  • In this paper, we propose a new de-interlacing algorithm for video data that uses an intensity gradient filter and a median filter with texture detection in each image block. We first introduce the texture detection: according to it, the current region is determined to be a smooth region or a texture region. A smooth region is interpolated by the median filter, while in a texture region the missing pixel value is calculated using the intensity gradient filter. We thus analyze the local region features using texture detection, classify each missing pixel into one of the two categories, and, based on the classification result, activate the appropriate de-interlacing algorithm to obtain the best performance. Experimental results show that the proposed algorithm performs well on a variety of moving sequences compared with conventional intra-field methods in the literature.
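The smooth/texture split above can be sketched per missing scanline: call a pixel neighborhood "texture" when its local intensity range exceeds a threshold, use a median of the vertical neighbors for smooth pixels, and interpolate texture pixels along the direction of smallest intensity difference. The 3-pixel window, the threshold value, and the ELA-style directional averaging are illustrative stand-ins for the paper's filters, not the authors' implementation.

```python
import numpy as np

def interpolate_line(above, below, texture_thresh=15):
    """Fill one missing scanline from the field lines above and below."""
    above = np.asarray(above, float)
    below = np.asarray(below, float)
    out = np.empty_like(above)
    w = len(above)
    for i in range(w):
        lo, hi = max(i - 1, 0), min(i + 1, w - 1)
        window = np.concatenate([above[lo:hi + 1], below[lo:hi + 1]])
        if window.max() - window.min() <= texture_thresh:
            # smooth region: median over the vertical neighborhood
            out[i] = np.median(window)
        else:
            # texture region: average along the direction (left diagonal,
            # vertical, right diagonal) with the smallest intensity difference
            dirs = [(above[lo], below[hi]), (above[i], below[i]),
                    (above[hi], below[lo])]
            a, b = min(dirs, key=lambda p: abs(p[0] - p[1]))
            out[i] = (a + b) / 2
    return out
```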

Text Detection and Recognition in Outdoor Korean Signboards for Mobile System Applications (모바일 시스템 응용을 위한 실외 한국어 간판 영상에서 텍스트 검출 및 인식)

  • Park, J.H.;Lee, G.S.;Kim, S.H.;Lee, M.H.;Toan, N.D.
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.46 no.2
    • /
    • pp.44-51
    • /
    • 2009
  • Text understanding in natural images has become an active research field over the past few decades. In this paper, we present an automatic recognition system for Korean signboards with complex backgrounds. The proposed algorithm includes detection, binarization, and extraction of text for the recognition of shop names. First, we use an elaborate detection algorithm to detect possible text regions based on edge histograms in the vertical and horizontal directions, and the detected text region is segmented by a clustering method. Second, the text is divided into individual characters based on connected components whose centers of mass lie below the center line, and these are recognized using a minimum distance classifier. A shape-based statistical feature adequate for Korean character recognition is adopted. The system has been implemented on a mobile phone and demonstrated to show acceptable performance.
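The edge-histogram detection step can be sketched as a projection profile: count edge pixels per row (the horizontal-direction histogram) and keep rows whose count reaches a fraction of the peak; the column-wise projection works the same way. The threshold fraction is an illustrative value, not the paper's.

```python
import numpy as np

def text_candidate_rows(edge_map, frac=0.3):
    """Flag candidate text rows in a binary edge map: rows whose edge
    count reaches `frac` of the peak row count."""
    hist = np.asarray(edge_map).sum(axis=1)       # edges per row
    return np.flatnonzero(hist >= frac * hist.max())
```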

Facial Feature Detection and Facial Contour Extraction using Snakes (얼굴 요소의 영역 추출 및 Snakes를 이용한 윤곽선 추출)

  • Lee, Kyung-Hee;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.7
    • /
    • pp.731-741
    • /
    • 2000
  • This paper proposes a method to detect a facial region and extract facial features, which is crucial for visual recognition of human faces. We extract the MER (Minimum Enclosing Rectangle) of the face and of the facial components using projection analysis on both the edge image and the binary image. We use an active contour model (snakes) to extract the contours of the eyes, mouth, eyebrows, and face, in order to reflect individual differences in facial shape and to converge quickly. The determination of the initial contour is very important for the performance of snakes: we detect the MER of each facial component and then determine the initial contour from the general shape of that component within the boundary of the obtained MER. Experimental results show that MER extraction of the eyes, mouth, and face was performed successfully, but for images with bright eyebrows, MER extraction of the eyebrows performed poorly. Good contour extraction was obtained across individual differences in facial shape. In particular, for eye contour extraction we combined edges from a first-order derivative operator and zero crossings from a second-order derivative operator in designing the energy function of the snakes, and achieved good eye contours. For face contour extraction, we used both edges and the grey-level intensity of pixels in designing the energy function, and good face contours were extracted as well.
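The eye-contour energy term described above, combining first-derivative edge strength with second-derivative zero crossings, can be sketched as a per-pixel external energy map; lower energy attracts the contour. The weights and the 4-neighbor Laplacian/zero-crossing test are illustrative assumptions, not the paper's exact energy function.

```python
import numpy as np

def eye_image_energy(gray):
    """External snake energy combining first-order edge magnitude with
    second-order (Laplacian) zero crossings."""
    g = np.asarray(gray, float)
    gy, gx = np.gradient(g)
    edge = np.hypot(gx, gy)                              # first-order edge strength
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g) # 4-neighbor Laplacian
    # zero crossing: Laplacian changes sign toward the next row or column
    zc = ((np.sign(lap) != np.sign(np.roll(lap, -1, 0))) |
          (np.sign(lap) != np.sign(np.roll(lap, -1, 1))))
    w_edge, w_zero = 1.0, 10.0                           # illustrative weights
    return -(w_edge * edge + w_zero * zc.astype(float))
```

A snake minimizing this map plus the usual internal smoothness terms is pulled toward pixels that are both strong edges and second-derivative zero crossings, i.e. well-localized boundaries.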
