• Title/Summary/Keyword: Histogram of binary image

Enhanced ART1 Algorithm for the Recognition of Student Identification Cards of the Educational Matters Administration System on the Web (웹 환경 학사관리 시스템의 학생증 인식을 위한 개선된 ART1 알고리즘)

  • Park Hyun-Jung;Kim Kwang-Baek
    • Journal of the Korea Society of Computer and Information / v.10 no.5 s.37 / pp.333-342 / 2005
  • This paper proposes a method that recognizes a student's identification card using image processing and recognition technology and manages student information on the web. The presented scheme takes the average of the brightest and darkest pixel values of the source ID-card image as a threshold, converts the image to a binary image, applies a horizontal histogram, and extracts the student number region from its location. Noise in the student number region is then removed by mode smoothing with a 3×3 mask. After noise removal, each digit is extracted using a vertical histogram and normalized, and the extracted digits are recognized with the enhanced ART1 algorithm. In this study, we propose an enhanced ART1 algorithm that differs from the conventional ART1 algorithm in that the vigilance parameter, which sets the tolerance for mismatch between input and stored patterns during clustering, is established dynamically. The experimental results showed that the recognition rate of the proposed ART1 algorithm was much higher than that of the conventional ART1 algorithm. Using the proposed recognition method for the student identification card, we develop an educational matters administration system.
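
The projection-histogram steps described in this abstract (midpoint thresholding, a horizontal histogram to locate the number line, and a vertical histogram to split the digits) can be sketched roughly as follows in Python with OpenCV; the file name, the 5% row threshold, and the single-band assumption are illustrative, and the ART1 classifier itself is not shown.

```python
import cv2
import numpy as np

# Load a grayscale ID-card image ("id_card.png" is a placeholder path).
img = cv2.imread("id_card.png", cv2.IMREAD_GRAYSCALE)

# Threshold at the midpoint between the brightest and darkest pixel values.
threshold = (int(img.max()) + int(img.min())) // 2
binary = (img < threshold).astype(np.uint8)        # 1 = dark ink, 0 = background

# Horizontal projection: ink count per row locates horizontal text bands.
row_hist = binary.sum(axis=1)
text_rows = np.where(row_hist > 0.05 * binary.shape[1])[0]

# Vertical projection inside the band: empty columns separate the digits.
band = binary[text_rows.min():text_rows.max() + 1, :]
col_hist = band.sum(axis=0)
digit_cols = np.where(col_hist > 0)[0]
print("band rows:", text_rows.min(), "-", text_rows.max(),
      "| ink columns:", digit_cols.size)
```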

An image enhancement Method for extracting multi-license plate region

  • Yun, Jong-Ho;Choi, Myung-Ryul;Lee, Sang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.6 / pp.3188-3207 / 2017
  • In this paper, we propose an image enhancement algorithm to improve the license plate extraction rate in various environments (daytime streets, nighttime streets, underground parking lots, etc.). The proposed algorithm is composed of an image enhancement algorithm and a license plate extraction algorithm. The image enhancement method improves the quality of a degraded image by utilizing histogram information and the overall gray-level distribution of the image. The proposed algorithm employs an interpolated probability distribution value (PDV) in order to control sudden changes in image brightness. The probability distribution value can be calculated using the cumulative distribution function (CDF) and probability density function (PDF) of the captured image, whose values are obtained from its brightness distribution. Also, by adjusting the enhancement factor of each sub-region based on pixel information, the method can control the gradation of the image in finer detail. The processed gray image is converted into a binary image, in which morphology operations fuse narrow breaks and long thin gulfs, eliminate small holes, and fill gaps in the contour. The license plate region is then detected based on the aspect ratio and plate size of the bounding boxes drawn around connected plate-candidate areas. The images were captured by a video camera or a personal image recorder installed in front of the cars, and include several license plates on multilane roads. Simulations were executed using OpenCV and MATLAB. The results show that the extraction success rate is higher than that of conventional algorithms.
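
A rough Python/OpenCV sketch of the generic ingredients mentioned above: a CDF-based gray-level remap, a morphological closing of the binary image, and an aspect-ratio filter on connected components. It is not the authors' PDV interpolation; the kernel size, area threshold, and aspect-ratio range are assumptions.

```python
import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical road-scene frame

# Probability density and cumulative distribution of the gray levels.
hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
pdf = hist / hist.sum()
cdf = pdf.cumsum()
enhanced = (cdf[gray] * 255).astype(np.uint8)           # equalization-style remap

# Binarize, then close narrow breaks and fill small holes with morphology.
_, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Keep connected components whose bounding box has a plate-like aspect ratio.
n, _, stats, _ = cv2.connectedComponentsWithStats(closed)
for i in range(1, n):
    x, y, w, h, area = stats[i]
    if 2.0 < w / float(h) < 6.0 and area > 500:
        print("candidate plate region:", (x, y, w, h))
```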

Automatic Edge Detection Method for Mobile Robot Application (이동로봇을 위한 영상의 자동 엣지 검출 방법)

  • Kim Dongsu;Kweon Inso;Lee Wangheon
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.423-428 / 2005
  • This paper proposes a new edge detection method using a 3×3 ideal binary pattern and a lookup table (LUT) for mobile robot localization without any parameter adjustment. We take the mean of the pixels within a 3×3 block as a threshold by which the pixels are divided into two groups. The edge magnitude and orientation are calculated by taking the difference of the average intensities of the two groups and by searching the directional code in the LUT, respectively. In addition, the input image is partitioned into multiple groups according to intensity similarity using the histogram, and the threshold of each group is determined automatically by fuzzy reasoning. Finally, the edges are determined through non-maximum suppression using an edge confidence measure and edge linking. Applying this edge detection method to mobile robot localization using the projective invariance of the cross ratio, we demonstrate the robustness of the proposed method to illumination changes in a corridor environment.
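
The core 3×3 step, splitting each neighborhood at its mean and taking the difference of the two group averages as an edge magnitude, might look like the following minimal sketch; the LUT-based orientation code, fuzzy thresholding, and non-maximum suppression are omitted.

```python
import numpy as np

def block_edge_magnitude(img: np.ndarray) -> np.ndarray:
    """Per-pixel edge magnitude from 3x3 blocks split at their mean."""
    h, w = img.shape
    mag = np.zeros((h, w), dtype=np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = img[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
            mean = block.mean()
            high = block[block >= mean]
            low = block[block < mean]
            if low.size:            # flat blocks have no "low" group, magnitude 0
                mag[y, x] = high.mean() - low.mean()
    return mag
```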

Development of Auto Sorting System for T Type Welding nut using A Vision Inspector (비전 검사기를 활용한 T형 용접너트 자동 선별시스템 개발)

  • Song, Han-Lim;Hur, Tae-Won
    • Journal of the Institute of Electronics Engineers of Korea IE / v.48 no.1 / pp.16-24 / 2011
  • In this paper, we developed an automatic sorting system for T-type welding nuts using a vision inspector. We used edge and thread detection based on the histogram of the image captured by a machine vision camera, and a binary morphology operation to detect spots. As a result, we performed numerical inspection with 0.1 mm accuracy, which was impossible with the old sorting system and naked-eye inspection. We also reduced the manufacturing unit cost to 25% and improved production efficiency to 330%.
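
A minimal sketch of spot detection with binary morphology of the kind mentioned above, assuming a grayscale capture; Otsu thresholding and the 5×5 kernel are illustrative choices, not the inspector's calibrated parameters.

```python
import cv2
import numpy as np

gray = cv2.imread("nut.png", cv2.IMREAD_GRAYSCALE)   # hypothetical camera capture
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# An opening removes structures smaller than the kernel; the difference
# against the original binary image therefore isolates small spots.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
spots = cv2.subtract(binary, opened)

print("spot pixel count:", int(np.count_nonzero(spots)))
```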

The Robust Derivative Code for Object Recognition

  • Wang, Hainan;Zhang, Baochang;Zheng, Hong;Cao, Yao;Guo, Zhenhua;Qian, Chengshan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.272-287 / 2017
  • This paper proposes new methods, named Derivative Code (DerivativeCode) and Derivative Code Pattern (DCP), for object recognition. The discriminative derivative code captures the local relationship in the input image by concatenating the binary results of the mathematical derivative values. Gabor-based DerivativeCode is directly used to solve the palmprint recognition problem, and achieves much better performance than state-of-the-art results on the PolyU palmprint database. A new local pattern method, named Derivative Code Pattern (DCP), is further introduced to calculate the local pattern feature based on DerivativeCode for object recognition. Similar to the local binary pattern (LBP), DCP can be further combined with Gabor features and modeled by a spatial histogram. To evaluate the performance of DCP and Gabor-DCP, we test them on the FERET and PolyU infrared face databases, and the experimental results show that the proposed method achieves better results than LBP and some state-of-the-art methods.
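
As an illustration only (not the paper's DerivativeCode definition), the general idea of concatenating binary derivative results and pooling them into a spatial histogram can be sketched like this; the 2-bit code and the 4×4 grid are assumptions.

```python
import numpy as np

def derivative_sign_histogram(img: np.ndarray, bins_per_cell: int = 4, grid: int = 4):
    """Encode the signs of x/y derivatives as a 2-bit code and pool per cell."""
    img = img.astype(np.float64)
    dx = np.diff(img, axis=1)[:-1, :]     # horizontal derivative
    dy = np.diff(img, axis=0)[:, :-1]     # vertical derivative
    code = (dx >= 0).astype(np.uint8) * 2 + (dy >= 0).astype(np.uint8)

    h, w = code.shape
    hists = []
    for gy in range(grid):
        for gx in range(grid):
            cell = code[gy * h // grid:(gy + 1) * h // grid,
                        gx * w // grid:(gx + 1) * w // grid]
            hists.append(np.bincount(cell.ravel(), minlength=bins_per_cell))
    return np.concatenate(hists)          # spatial histogram feature vector
```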

Implementation of a Self Controlled Mobile Robot with Intelligence to Recognize Obstacles (장애물 인식 지능을 갖춘 자율 이동로봇의 구현)

  • 류한성;최중경
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.5 / pp.312-321 / 2003
  • In this paper, we implement a robot with the ability to recognize obstacles and move automatically to a destination. We present two results: a hardware implementation of an image processing board and a software implementation of a visual feedback algorithm for a self-controlled robot. In the first part, the mobile robot depends on commands from a control board that performs the image processing. We have studied this self-controlled mobile robot system, equipped with a CCD camera, for a long time. The robot system consists of an image processing board implemented with DSPs, a stepping motor, and a CCD camera. We propose an algorithm in which commands are delivered for the robot to move along the planned path. The distance that the robot is supposed to move is calculated on the basis of the absolute coordinates and the coordinates of the target spot. The image signal acquired by the CCD camera mounted on the robot is captured at every sampling time so that the robot can automatically avoid obstacles and finally reach the destination. The image processing board consists of a DSP (TMS320VC33), ADV611, SAA7111, ADV7176A, CPLD (EPM7256ATC144), and SRAM memories. In the second part, the visual feedback control has two types of vision algorithms: obstacle avoidance and path planning. The first algorithm works on cells, parts of the image divided by blob analysis. Image preprocessing is performed to improve the input image and consists of filtering, edge detection, NOR converting, and thresholding; the main image processing includes labeling, segmentation, and pixel density calculation. In the second algorithm, after an image frame goes through preprocessing (edge detection, converting, thresholding), the histogram is measured vertically (in the y-axis direction). The binary histogram of the image then shows waveforms with only black and white variations. Here we use the fact that, since obstacles appear as sectional shapes as if they were walls, there is no variation in the histogram. The intensities of the line histogram are measured vertically at intervals of 20 pixels, so we can find uniform and non-uniform regions of the waveforms and define the period of uniform waveforms as an obstacle region. We can see that the algorithm is very useful for the robot to move while avoiding obstacles.
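
The column-histogram obstacle test described at the end of the abstract can be sketched as follows, assuming a binarized edge image; the 20-pixel sampling interval comes from the abstract, while the tolerance and minimum run length are illustrative.

```python
import numpy as np

def find_uniform_regions(binary: np.ndarray, step: int = 20,
                         tol: int = 3, min_run: int = 3):
    """Return (start_col, end_col) spans whose column histogram is nearly flat."""
    col_hist = binary.sum(axis=0)          # vertical (y-axis) projection
    samples = col_hist[::step]             # sample every `step` columns
    regions, start = [], 0
    for i in range(1, len(samples) + 1):
        # Close the current run at the end or when the histogram jumps.
        if i == len(samples) or abs(int(samples[i]) - int(samples[i - 1])) > tol:
            if i - start >= min_run:       # long flat run -> obstacle-like region
                regions.append((start * step, i * step))
            start = i
    return regions
```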

Design of OpenCV based Finger Recognition System using binary processing and histogram graph

  • Baek, Yeong-Tae;Lee, Se-Hoon;Kim, Ji-Seong
    • Journal of the Korea Society of Computer and Information / v.21 no.2 / pp.17-23 / 2016
  • NUI is a motion interface that uses the user's body to control a device without HID devices such as a mouse and keyboard. In this paper, we use a Pi Camera and sensors connected to a small embedded board, the Raspberry Pi. We use OpenCV algorithms optimized for image recognition and computer vision to implement an NUI device with a more human-friendly and intuitive interface than traditional HID equipment. Motion is detected through comparison operations, and we propose a more advanced recognition system that fuses the motion sensors connected to the Raspberry Pi.
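
A hedged Python/OpenCV sketch of the kind of binary-processing pipeline such a finger recognition system might use: skin-range thresholding to a binary mask, the largest contour, and convexity defects as finger-gap candidates. The HSV range and depth threshold are assumptions, not the paper's calibration, and a file stands in for the Pi Camera frame.

```python
import cv2

frame = cv2.imread("hand.png")                          # stand-in for a camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))    # rough skin-tone range

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)           # assume the hand is largest
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    # Deep convexity defects roughly correspond to gaps between fingers.
    gaps = 0 if defects is None else sum(1 for d in defects[:, 0] if d[3] > 10000)
    print("finger-gap candidates:", gaps)
```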

Implementation of Face Detection System on Android Platform for Real-Time Applications (실시간 응용을 위한 안드로이드 플랫폼에서의 안면 검출 시스템 구현)

  • Han, Byung-Gil;Lim, Kil-Taek
    • IEMEK Journal of Embedded Systems and Applications / v.8 no.3 / pp.137-143 / 2013
  • This paper describes an implementation of face detection technology for real-time applications on the Android platform. A Java class for face detection is provided by the Android API; however, it is not suitable for real-time applications due to inadequate detection speed and accuracy. In this paper, an AdaBoost-based classification method that utilizes a Local Binary Pattern (LBP) histogram is employed for face detection. The face detection module was developed in C/C++ for high-speed image processing and is integrated into the Android platform through the Java Native Interface (JNI). Experiments were carried out in a Java-based environment and a JNI-based environment. The results show that the JNI-based method is faster than the Java-based method and that our system is suitable for real-time applications.
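
This is not the paper's Android/JNI module; it is only a short OpenCV illustration of the same ingredients, an AdaBoost cascade over LBP features, using the lbpcascade_frontalface.xml file distributed with OpenCV (the cascade path and image are placeholders).

```python
import cv2

# Path to an LBP cascade shipped with full OpenCV installs; adjust as needed.
cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")
gray = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
for (x, y, w, h) in faces:
    print("face at", (x, y, w, h))
```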

Coated Tongue Region Extraction using the Fluorescence Response of the Tongue Coating by Ultraviolet Light Source (설태의 자외선 형광 반응을 이용한 설태 영역 추출)

  • Choi, Chang-Yur;Lee, Woo-Beom;Hong, You-Sik;Nam, Dong-Hyun;Lee, Sang-Suk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.4 / pp.181-188 / 2012
  • An effective method for extracting the coated tongue region, which is used as a diagnostic criterion in tongue diagnosis, is proposed in this paper. The proposed method uses the fluorescence response of the tongue coating induced by an ultraviolet light source. In particular, this method resolves previous problems, including the limits imposed by the diagnosis environment and the objectivity of the diagnosis results. In our method, the original tongue image is acquired under ultraviolet light, and binarization is performed by thresholding at the valley point in the histogram that corresponds to the color difference between the tongue body and the tongue coating. The final view image is presented to the oriental medicine doctor after applying the Canny edge algorithm to the binary image and adding the edge image to the original image. To evaluate the performance of the proposed method, we built a set of various tongue images and compared the true coated-tongue regions marked by an oriental medicine doctor with the regions extracted by our method. As a result, the proposed method showed an average extraction ratio of 87.87%, and the shape of the extracted coated-tongue region also showed significantly high similarity.
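
A schematic sketch of valley-point thresholding followed by a Canny-edge overlay, in the spirit of the steps described above; it ignores the ultraviolet acquisition, and the histogram smoothing width, peak selection, and Canny thresholds are illustrative.

```python
import cv2
import numpy as np

gray = cv2.imread("tongue.png", cv2.IMREAD_GRAYSCALE)      # placeholder input
hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
hist = np.convolve(hist, np.ones(9) / 9, mode="same")       # smooth the histogram

# Take the two strongest bins as stand-ins for the body/coating peaks and
# threshold at the valley between them (a full implementation would enforce
# a minimum separation between the two peaks).
p1, p2 = sorted(np.argsort(hist)[-2:])
valley = p1 + int(np.argmin(hist[p1:p2 + 1]))

mask = (gray > valley).astype(np.uint8) * 255
edges = cv2.Canny(mask, 50, 150)
overlay = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
overlay[edges > 0] = (0, 0, 255)                            # draw the edges in red
```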

Face Detection using Orientation(In-Plane Rotation) Invariant Facial Region Segmentation and Local Binary Patterns(LBP) (방향 회전에 불변한 얼굴 영역 분할과 LBP를 이용한 얼굴 검출)

  • Lee, Hee-Jae;Kim, Ha-Young;Lee, David;Lee, Sang-Goog
    • Journal of KIISE / v.44 no.7 / pp.692-702 / 2017
  • Face detection using an LBP-based feature descriptor has the issue that it cannot represent spatial information about the facial shape and facial components such as the eyes, nose, and mouth. To address this, previous research divided the facial image into a number of square sub-regions. However, because the sub-regions vary in number and size, the division criteria suitable for the database used in an experiment are ambiguous; the dimension of the LBP histogram increases in proportion to the number of sub-regions; and as the number of sub-regions increases, the sensitivity to in-plane rotation of the face increases significantly. In this paper, we present a novel facial region segmentation method that resolves the in-plane rotation issues associated with LBP-based feature descriptors and limits the dimension of the feature descriptor. As a result, the proposed method showed a detection accuracy of 99.0278% on single facial images rotated in orientation.
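
For reference, the conventional square-sub-region scheme the paper improves upon, an 8-neighbor LBP histogram per grid cell concatenated over the face, can be sketched as follows; the 4×4 grid is an arbitrary illustrative choice.

```python
import numpy as np

def lbp_grid_histogram(img: np.ndarray, grid: int = 4) -> np.ndarray:
    """Basic 8-neighbor LBP codes pooled into per-cell histograms and concatenated."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                     # center pixels
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                 img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    lbp = np.zeros_like(c)
    for bit, n in enumerate(neighbors):
        lbp |= ((n >= c).astype(np.int32) << bit)   # one bit per neighbor

    h, w = lbp.shape
    hists = []
    for gy in range(grid):
        for gx in range(grid):
            cell = lbp[gy * h // grid:(gy + 1) * h // grid,
                       gx * w // grid:(gx + 1) * w // grid]
            hists.append(np.bincount(cell.ravel(), minlength=256))
    return np.concatenate(hists)            # dimension grows with the cell count
```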