• Title/Summary/Keyword: HSV Color

Search Results: 179

Visual Touchless User Interface for Window Manipulation (윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스)

  • Kim, Jin-Woo;Jung, Kyung-Boo;Jeong, Seung-Do;Choi, Byung-Uk
    • Journal of KIISE: Software and Applications / v.36 no.6 / pp.471-478 / 2009
  • Recently, research on user interfaces has advanced remarkably, driven by the explosive growth of 3-dimensional content and applications and by the widening range of computer users. This paper proposes a novel method to manipulate windows efficiently using only intuitive hand motions. Previous methods suffer from drawbacks such as the burden of expensive devices, the high complexity of gesture recognition, and the need for additional information from markers. To overcome these defects, we propose a novel visual touchless interface. First, to control windows by hand, we detect the hand region using the hue channel of the HSV color space. The distance transform is then applied to locate the centroid of the hand, and the curvature of the hand contour is used to determine the positions of the fingertips. Finally, using the hand motion information, we recognize the gesture as one of seven predefined motions; the recognized gesture becomes a command to control the window. Because the method adopts a stereo camera, the user can manipulate windows with a sense of depth in the real environment. Intuitive manipulation is also available because the proposed method supports visual touch of the virtual object the user wants to manipulate, using only simple hand motions. Finally, the efficiency of the proposed method is verified via an application built on the proposed interface.
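The first step of the pipeline above, segmenting the hand by the hue channel of the HSV color space, can be sketched in pure Python; the skin-tone hue band used here is an illustrative assumption, not a value from the paper:

```python
def rgb_to_hue(r, g, b):
    # Hue component of HSV, in degrees [0, 360).
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    d = mx - mn
    if d == 0:
        return 0.0  # achromatic pixel: hue is undefined, treated as 0
    if mx == r:
        h = ((g - b) / d) % 6
    elif mx == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    return h * 60.0

def skin_mask(pixels, lo=0.0, hi=50.0):
    # Keep pixels whose hue falls inside an assumed skin-tone band.
    return [lo <= rgb_to_hue(r, g, b) <= hi for (r, g, b) in pixels]
```

A real implementation would also gate on saturation and value to reject shadows and highlights before applying the distance-transform and curvature steps.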

An Automatic Mobile Cell Counting System for the Analysis of Biological Image (생물학적 영상 분석을 위한 자동 모바일 셀 계수 시스템)

  • Seo, Jaejoon;Chun, Junchul;Lee, Jin-Sung
    • Journal of Internet Computing and Services / v.16 no.1 / pp.39-46 / 2015
  • This paper presents an automatic method to detect and count cells in microorganism images in a mobile environment. Cell counting is an important process in biological and pathological image analysis. In the past, cell counting was done manually, which is a tedious and time-consuming process; moreover, manual counting can lead to inconsistent and imprecise results. Therefore, an automatic method to detect and count cells in biological images is necessary to obtain accurate and consistent results. The proposed multi-step cell counting method automatically segments the cells in images of cultivated microorganisms and labels them through topological analysis of the segmented cells. To improve counting accuracy, we adopt the watershed algorithm to separate agglomerated cells from each other and morphological operations to enhance individual cell objects in the image. The system is developed with mobile environments in mind: cell images can be captured with a mobile phone, and the processed statistics of the microorganisms can be delivered to mobile devices in a ubiquitous smart space. In the experiments, comparing manual counts with the results of the proposed automatic counting demonstrates the efficiency of the developed system.
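The labeling stage of a counting pipeline like the one above can be illustrated with a 4-connected component count over a binary cell mask (a minimal pure-Python sketch; the paper's segmentation and watershed steps are not reproduced here):

```python
def count_cells(grid):
    # Count 4-connected components in a binary grid (1 = cell pixel).
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] and not seen[i][j]:
                count += 1
                # Flood-fill this component so it is counted only once.
                stack = [(i, j)]
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

Touching cells form one component here, which is exactly why the abstract applies the watershed algorithm before counting.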

Study on vision-based object recognition to improve performance of industrial manipulator (산업용 매니퓰레이터의 작업 성능 향상을 위한 영상 기반 물체 인식에 관한 연구)

  • Park, In-Cheol;Park, Jong-Ho;Ryu, Ji-Hyoung;Kim, Hyoung-Ju;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.4 / pp.358-365 / 2017
  • In this paper, we propose an object recognition method using image information to improve the efficiency of visual servoing for industrial manipulators. It is an image-processing method for responding in real time to abnormal situations or external environmental changes affecting a work object, using the camera-image information of an industrial manipulator. To improve the recognition rate of the existing Harris corner algorithm, the proposed method applies the Otsu thresholding technique to the V channel, which carries intensity information, and the S channel, from which the background is easily separated, of the HSV color space. When the work object is displaced or rotated by external factors, its position is calculated and provided to the industrial manipulator.
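Otsu's method, which the abstract applies to the S and V channels, selects the threshold that maximizes the between-class variance of a grayscale histogram; a minimal sketch:

```python
def otsu_threshold(pixels):
    # Otsu's method: pick the threshold t maximizing the between-class
    # variance w_bg * w_fg * (mu_bg - mu_fg)^2 over the 8-bit histogram.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]            # background weight (pixels <= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal channel such as S for an object against a uniform background, the returned threshold falls between the two modes.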

Implementation of a walking-aid light with machine vision-based pedestrian signal detection (머신비전 기반 보행신호등 검출 기능을 갖는 보행등 구현)

  • Jihun Koo;Juseong Lee;Hongrae Cho;Ho-Myoung An
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.1 / pp.31-37 / 2024
  • In this study, we propose a machine vision-based pedestrian signal detection algorithm that operates efficiently even in computing-resource-constrained environments. The algorithm minimizes the impact of ambient lighting problems such as glare by sequentially applying HSV color-space image processing, binarization, morphological operations, and labeling. It is deliberately kept simple so that it runs smoothly in embedded system environments with limited computing resources, and it therefore operates reliably even when resources are scarce. The proposed pedestrian signal system also incorporates IoT functionality, allowing wireless integration with a web server through which users can conveniently monitor and control the status of the signal. In addition, the system was successfully implemented to control 50 W LED pedestrian signals. The proposed system aims to provide rapid and efficient pedestrian signal detection and control in resource-constrained environments, with potential applicability in real-world road scenarios, contributing to safer and more intelligent traffic systems.
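The morphological step in the pipeline above is typically an opening (erosion followed by dilation) that removes small glare speckles from the binarized mask; a minimal 3x3 sketch on a binary grid (the structuring element and border handling are illustrative assumptions):

```python
def _apply(grid, keep_if):
    # Apply a 3x3 neighbourhood rule; out-of-bounds pixels count as 0.
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            nb = [grid[y][x]
                  for y in range(i - 1, i + 2)
                  for x in range(j - 1, j + 2)
                  if 0 <= y < rows and 0 <= x < cols]
            nb += [0] * (9 - len(nb))  # pad the window at the borders
            out[i][j] = keep_if(nb)
    return out

def erode(grid):
    # A pixel survives only if its whole 3x3 window is set.
    return _apply(grid, lambda nb: 1 if all(nb) else 0)

def dilate(grid):
    # A pixel is set if any pixel in its 3x3 window is set.
    return _apply(grid, lambda nb: 1 if any(nb) else 0)

def opening(grid):
    # Erosion then dilation: isolated noise pixels vanish,
    # while solid blobs (e.g. the signal lamp) are restored.
    return dilate(erode(grid))
```

After the opening, connected-component labeling on the cleaned mask yields the candidate signal regions.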

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.431-438 / 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour: the first step obtains the Q image from the transform of RGB into YIQ; the second step finds the lip corner points by change-point detection and splits the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying thresholds to the Q image. For each candidate contour, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first concerns the lip corner points: the variance calculation depends on many skin pixels, which decreases accuracy and affects the splitting of the Q image. Second, no color systems other than YIQ were analyzed; YIQ works well, but other color systems such as HSV, CIELUV, and YCrCb should be considered. The final problem concerns the selection of the optimal contour: the selection process uses the maximum of the average feature variance over the pixels near the contour points, which shrinks the extracted contour relative to the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, yielding a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted instead of the maximum of the average feature variance, yielding a 46% performance increase. Combining all the solutions, the proposed method is twice as accurate and stable as the conventional method.
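The Q image that the method thresholds comes from the NTSC YIQ transform of RGB; a minimal per-pixel sketch using the standard NTSC coefficients (the example pixel values below are illustrative assumptions):

```python
def rgb_to_yiq(r, g, b):
    # NTSC YIQ transform of an 8-bit RGB triple.
    # Q spans the purple-green axis, which makes reddish lip pixels
    # stand out against surrounding skin.
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q
```

Thresholding the Q channel at several levels then yields the candidate contours from which the final lip contour is selected.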

Breaking character and natural image based CAPTCHA using feature classification (특징 분리를 통한 자연 배경을 지닌 글자 기반 CAPTCHA 공격)

  • Kim, Jaehwan;Kim, Suah;Kim, Hyoung Joong
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.5 / pp.1011-1019 / 2015
  • CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a test used in computing to distinguish whether a user is a computer or a human. Most web sites use character-based CAPTCHAs consisting of digits and letters. Recently, with the development of OCR technology, simple character-based CAPTCHAs have become quite easy to break. As an alternative, many web sites add noise to make recognition harder. In this paper, we analyze a recent CAPTCHA scheme that overlays natural images to obfuscate the characters. We propose an efficient method that uses a support vector machine to separate the characters from the background image and a convolutional neural network to recognize each character. As a result, 368 out of 1000 CAPTCHAs were correctly identified, demonstrating that the current CAPTCHA is not safe.
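The character/background separation step can be illustrated with a tiny linear classifier over per-pixel features; this perceptron is only a stand-in for the paper's support vector machine, and the toy two-dimensional features are an assumption:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # Learn a linear boundary separating character pixels (+1)
    # from background pixels (-1), given feature vectors per pixel.
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge the boundary toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

An SVM additionally maximizes the margin of this boundary, which is why the paper prefers it over a plain perceptron for noisy natural backgrounds.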

Automatic segmentation of a tongue area and oriental medicine tongue diagnosis system using the learning of the area features (영역 특징 학습을 이용한 혀의 자동 영역 분리 및 한의학적 설진 시스템)

  • Lee, Min-taek;Lee, Kyu-won
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.4 / pp.826-832 / 2016
  • In this paper, we propose a tongue diagnosis system that determines the presence of cracks in specific taste areas, as a first step toward a digital tongue diagnosis system that anyone can use easily without special or expensive digital tongue diagnosis equipment. A training DB was built using Haar-like features and AdaBoost learning on the basis of 261 pictures collected in Oriental medicine. Tongue candidate regions are detected in the input image using the learning results, and the average value of the HUE component is calculated to separate only the tongue area within the detected candidate regions. The tongue area is then isolated via connected-component labeling of the detected tongue contour. The taste regions are divided according to the relative width and height of the separated tongue region. The image of each taste area is converted to a gray image and binarized with its average brightness value, and the presence or absence of cracks is determined via connected-component labeling of the binary images.
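The crack-detection step above binarizes each gray taste-area image at its own average brightness; a minimal sketch (marking pixels darker than the mean as crack candidates is an assumption about the intended polarity):

```python
def binarize_by_mean(gray):
    # Threshold a gray image (list of rows) at its own mean brightness;
    # pixels darker than the mean become 1 (crack candidates).
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return [[1 if p < mean else 0 for p in row] for row in gray]
```

Connected-component labeling of the resulting binary image then tells whether any crack-shaped component is present in the taste area.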

Extraction of Attentive Objects Using Feature Maps (특징 지도를 이용한 중요 객체 추출)

  • Park Ki-Tae;Kim Jong-Hyeok;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.5 s.311 / pp.12-21 / 2006
  • In this paper, we propose a technique for extracting attentive objects from images using feature maps, regardless of image complexity or object position. The proposed method uses feature maps with edge and color information to extract attentive objects, together with a reference map created by integrating the feature maps. To create the reference map, feature maps representing visually attentive regions of the image are constructed; three are utilized, an edge map, a CbCr map, and an H map, which capture boundary regions through differences in intensity or color. A combination map representing the meaningful boundaries is then created by integrating the reference map and the feature maps. Since the combination map represents only object boundaries, candidate object regions enclosing meaningful boundaries are extracted from it using the convex hull algorithm. Real object regions are then obtained by applying a segmentation algorithm within each candidate region to separate object from background. Experimental results show that the proposed method extracts attentive regions and objects efficiently, with a precision of 84.3% and a recall of 81.3%.
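The candidate-region step above relies on the convex hull algorithm; Andrew's monotone chain is one standard choice (the paper does not specify which variant it uses) and can be sketched as follows:

```python
def convex_hull(points):
    # Andrew's monotone chain: hull vertices in counter-clockwise order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()  # drop points that would make a right turn
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Endpoints are shared between the two chains, so drop one copy each.
    return lower[:-1] + upper[:-1]
```

Applied to the boundary pixels of the combination map, the hull encloses the candidate object region to which segmentation is then applied.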

Implementation of Intelligent Image Surveillance System based Context (컨텍스트 기반의 지능형 영상 감시 시스템 구현에 관한 연구)

  • Moon, Sung-Ryong;Shin, Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.11-22 / 2010
  • This paper studies the implementation of an intelligent image surveillance system using context information and addresses the temporal-spatial constraints that make real-time processing difficult. We propose a scene analysis algorithm that runs in real time in various environments on low-resolution video (320*240, 30 frames per second). The proposed algorithm discards the background and meaningless frames among the continuous frames, and uses the wavelet transform and an edge histogram to detect shot boundaries. Next, a representative key frame within each shot boundary is selected by a key-frame selection parameter, and the edge histogram and mathematical morphology are used to detect only the motion region. We define four basic contexts according to the angles of feature points, obtained by applying the vertical-to-horizontal ratio to the motion region of the detected object: standing, lying, sitting, and walking. Finally, we perform scene analysis by defining a simple context model composed of a general context and an emergency context, estimated from the connection status of each context, and configure a system to verify real-time processing. The proposed system achieves a recognition rate of 92.5% on low-resolution video with an average processing speed of 0.74 seconds per frame, confirming that real-time processing is possible.
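The shot-boundary step above compares edge histograms of consecutive frames; a minimal sketch using an L1 distance between normalized histograms (the threshold value is an illustrative assumption, not taken from the paper):

```python
def hist_diff(h1, h2):
    # L1 distance between two histograms after normalizing each to sum 1.
    s1, s2 = sum(h1), sum(h2)
    return sum(abs(a / s1 - b / s2) for a, b in zip(h1, h2))

def detect_shot_boundaries(hists, thresh=0.5):
    # Flag frame indices whose histogram differs sharply from the
    # previous frame's, marking them as candidate shot boundaries.
    return [i for i in range(1, len(hists))
            if hist_diff(hists[i - 1], hists[i]) > thresh]
```

A key frame for each detected shot can then be chosen from the frames between consecutive boundaries.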