• Title/Summary/Keyword: Hand image processing


Hue Preserving Color Gamut Mapping (색조 보존을 위한 칼라 색역 매핑)

  • 성영모;박은홍;임재권
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2003.06a
    • /
    • pp.106-109
    • /
    • 2003
  • This paper presents a hue-preserving gamut mapping algorithm for color monitors and printers. The gamuts of the monitor and printer are set from the profiles of the color reproduction media, which are specified by the ICC (International Color Consortium) and provided by vendors; those gamuts are then represented in the CIE xy color space. Monitor colors that fall outside the printer gamut are clipped onto the printer's gamut boundary along the line toward a reference white point, while colors inside the printer gamut are left unchanged. An image generated by the algorithm preserves the hue ratio of each pixel of the original image. The algorithm's advantages are that it is easy to implement and faster than other hue-preserving algorithms, especially those operating in the CIELAB color space.
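
The clipping step described above can be sketched as follows. This is a minimal illustration, assuming the printer gamut is approximated by a triangle of primaries in the xy plane; the triangle vertices, white point, and bisection tolerance are illustrative values, not taken from the paper.

```python
# Hue-preserving gamut clipping sketch in the CIE xy plane.
# Assumption: the printer gamut is a triangle of primary chromaticities.

def _sign(p, a, b):
    """Signed-area test: which side of segment a-b point p lies on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(p, tri):
    """True if chromaticity p lies inside the gamut triangle tri."""
    d1 = _sign(p, tri[0], tri[1])
    d2 = _sign(p, tri[1], tri[2])
    d3 = _sign(p, tri[2], tri[0])
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def clip_toward_white(c, tri, white, iters=50):
    """Clip an out-of-gamut chromaticity c onto the gamut boundary along
    the line from c toward the reference white point, preserving the hue
    angle about the white point. In-gamut colors are returned unchanged."""
    if in_gamut(c, tri):
        return c
    lo, hi = 0.0, 1.0          # t=0 -> white (inside), t=1 -> c (outside)
    for _ in range(iters):     # bisect for the boundary crossing
        mid = (lo + hi) / 2
        p = (white[0] + mid * (c[0] - white[0]),
             white[1] + mid * (c[1] - white[1]))
        if in_gamut(p, tri):
            lo = mid
        else:
            hi = mid
    return (white[0] + lo * (c[0] - white[0]),
            white[1] + lo * (c[1] - white[1]))

# Illustrative printer gamut (sRGB-like primaries) and a D65 white point.
PRINTER_TRI = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
D65 = (0.3127, 0.3290)
```

Because every clipped color stays on the straight line through the white point, its chromatic direction (hue) is unchanged, which is the property the abstract emphasizes.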


Classification of Gripping Movement in Daily Life Using EMG-based Spider Chart and Deep Learning (근전도 기반의 Spider Chart와 딥러닝을 활용한 일상생활 잡기 손동작 분류)

  • Lee, Seong Mun;Pi, Sheung Hoon;Han, Seung Ho;Jo, Yong Un;Oh, Do Chang
    • Journal of Biomedical Engineering Research
    • /
    • v.43 no.5
    • /
    • pp.299-307
    • /
    • 2022
  • In this paper, we propose a pre-processing method that converts EMG (electromyography) sensor data into Spider Chart image data for classifying gripping movements with a Convolutional Neural Network (CNN). First, raw data for six hand gestures are collected from five subjects using an 8-channel armband and converted into octagonal Spider Chart images, which are divided into several sliding windows and used for training. In classifying the six hand gestures, the classification performance of the proposed pre-processing method is compared with that of existing methods. Deep learning was performed by dividing the dataset into 70% training, 15% testing, and 15% validation. For system performance evaluation, five-fold cross-validation was applied by dividing the dataset into 80% training and 20% testing. With the Spider Chart preprocessing, the proposed method achieves 97% and 94.54% accuracy in cross-validation and general tests, respectively, which is better than the conventional methods.
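
The core of the pre-processing idea is placing the eight channel activations at the vertices of an octagon, one vertex per armband channel, so a CNN can consume a 2-D shape rather than raw signals. A hypothetical sketch of that vertex mapping is below; the normalization and rendering details are assumptions, not from the paper.

```python
# Spider (radar) chart vertex mapping sketch for N-channel EMG magnitudes.
import math

def spider_chart_vertices(channels):
    """Map N channel magnitudes to radar-chart vertex coordinates.
    Channel k is drawn at angle 2*pi*k/N with radius equal to its value."""
    n = len(channels)
    pts = []
    for k, r in enumerate(channels):
        theta = 2 * math.pi * k / n
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Rasterizing the polygon spanned by these vertices (e.g. filling it on a fixed-size canvas) would then yield the octagonal image fed to the CNN.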

Algorithm for Extract Region of Interest Using Fast Binary Image Processing (고속 이진화 영상처리를 이용한 관심영역 추출 알고리즘)

  • Cho, Young-bok;Woo, Sung-hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.4
    • /
    • pp.634-640
    • /
    • 2018
  • In this paper, we propose an automatic region-of-interest (ROI) extraction algorithm for medical x-ray images. The proposed algorithm uses segmentation, feature extraction, and reference-image matching to detect lesion sites in the input image. The extracted region is matched against lesion images in a reference DB, and the matched results are automatically extracted using Kalman-filter-based fitness feedback. To extract the growth plate, the proposed algorithm extracts the contour of the left hand from the left-hand x-ray input image and creates a candidate region using multi-scale Hessian-matrix-based segmentation. As a result, the proposed algorithm segmented rapidly, in 0.02 seconds, during the ROI segmentation phase; when extracting the ROI from the segmented image in 0.53 seconds, the reinforcement phase performed very accurate image segmentation in 0.49 seconds.
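
The fast-binarization ROI step can be illustrated with a minimal sketch: threshold a grayscale image and take the bounding box of the foreground pixels as the candidate region. The mean-based threshold is an illustrative choice; the paper's full pipeline (reference-DB matching, Kalman-filter feedback) is not shown here.

```python
# Binarization + bounding-box ROI sketch on a 2-D grayscale image,
# represented as a list of rows of 0-255 intensity values.

def binarize(image, threshold=None):
    """Binarize a grayscale image; default threshold is the global mean
    (an assumption, not the paper's thresholding rule)."""
    if threshold is None:
        flat = [v for row in image for v in row]
        threshold = sum(flat) / len(flat)
    return [[1 if v > threshold else 0 for v in row] for row in image]

def roi_bounding_box(binary):
    """Return (top, left, bottom, right) of foreground pixels, or None."""
    coords = [(y, x) for y, row in enumerate(binary)
              for x, v in enumerate(row) if v]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return (min(ys), min(xs), max(ys), max(xs))
```

Because both passes are single scans over the pixels, the cost is linear in image size, which is consistent with the sub-second timings the abstract reports.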

A Method for Structuring Digital Video

  • Lee, Jae-Yeon;Jeong, Se-Yoon;Yoon, Ho-Sub;Kim, Kyu-Heon;Bae, Younglae-J;Jang, Jong-whan
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998.06b
    • /
    • pp.92-97
    • /
    • 1998
  • For the efficient searching and browsing of digital video, it is essential to extract the internal structure of the video contents. For example, a news video consists of several sections such as politics, economics, sports, and others, and each section consists of individual topics. With this information in hand, users can more easily access the required video frames. This paper addresses the problems of automatic shot boundary detection and selection of representative frames (R-frames), which are the essential steps in recognizing the internal structure of video contents. For shot boundary detection, a new algorithm is proposed that has dual detectors designed specifically for abrupt boundaries (cuts) and gradually changing boundaries, respectively. Compared to existing algorithms, which have mostly tried to detect both types with a single mechanism, the proposed algorithm proves more robust and accurate. For R-frame selection, simple mechanical approaches such as selecting one frame every other second have been adopted in the past; however, this approach often selects too many R-frames in static shots while dropping important frames in dynamic shots. To improve the selection mechanism, a new R-frame selection algorithm is proposed that uses motion information extracted from pixel differences.
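
The dual-detector idea can be sketched under simplifying assumptions: each frame is reduced to a scalar feature (e.g. mean intensity), a cut detector fires on a single large inter-frame difference, and a gradual-transition detector fires when moderate differences accumulate over a window. The thresholds and feature choice below are illustrative, not the paper's.

```python
# Dual shot-boundary detector sketch over per-frame scalar features.

def detect_boundaries(features, cut_thr=50.0, grad_thr=50.0, window=5):
    """Return (cuts, graduals): frame indices where each detector fires."""
    diffs = [abs(b - a) for a, b in zip(features, features[1:])]
    # Cut detector: one abrupt, large inter-frame difference.
    cuts = [i + 1 for i, d in enumerate(diffs) if d >= cut_thr]
    # Gradual detector: moderate per-frame change that accumulates
    # to a large total change across the window.
    graduals = []
    for i in range(len(diffs) - window + 1):
        chunk = diffs[i:i + window]
        if all(d < cut_thr for d in chunk) and sum(chunk) >= grad_thr:
            graduals.append(i + 1)
    return cuts, graduals
```

Separating the two mechanisms is the point of the dual design: a fade never trips the cut threshold on any single frame, yet its accumulated change is unmistakable over the window.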


Image Processing Based Virtual Reality Input Method using Gesture (영상처리 기반의 제스처를 이용한 가상현실 입력기)

  • Hong, Dong-Gyun;Cheon, Mi-Hyeon;Lee, Donghwa
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.24 no.5
    • /
    • pp.129-137
    • /
    • 2019
  • Ubiquitous computing technology is emerging as information technology advances, and a number of studies are being carried out to improve device miniaturization and user convenience. Some of the proposed devices, however, are hand-held and inconvenient to operate. To address this inconvenience, this paper proposes a virtual button that can be used while watching television. A camera is installed at the top of the TV and, exploiting the fact that the user watches the video from the front, captures the top of the user's head. The background and hand regions are separated in the captured image, the contour of the extracted hand region is computed, and the fingertip point is detected. Once the fingertip is detected, a virtual button interface is rendered at the top of the image captured from the front; the detected fingertip becomes a pointer, and the button activates when the pointer is located inside the button.
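
The final activation logic reduces to a fingertip estimate plus a rectangle hit test. An illustrative sketch is below, assuming the fingertip is taken as the extreme point of the extracted hand region (here, the smallest-y pixel, i.e. closest to the top of the frame); the function names and button layout are assumptions for illustration.

```python
# Fingertip pointer + virtual-button hit-test sketch.

def fingertip(hand_pixels):
    """Fingertip as the topmost point (min y) of the hand-region pixels."""
    return min(hand_pixels, key=lambda p: p[1])

def button_pressed(pointer, button):
    """True if pointer (x, y) lies inside button (x, y, width, height)."""
    x, y = pointer
    bx, by, bw, bh = button
    return bx <= x <= bx + bw and by <= y <= by + bh
```

In the described system, the contour extraction would supply `hand_pixels` per frame, and the button rectangles would sit along the top edge of the image where the fingertip naturally points.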

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.161-170
    • /
    • 2024
  • With technological advances, humans have made great progress in ease of living, now incorporating sight, motion, sound, and speech into various application and software controls. In this paper, we explore a project in which gestures play a significant role, applying computer vision to the much-researched and still-evolving topic of gesture control. The main objective achieved in this project is controlling computer settings with hand gestures using computer vision: we create a module that acts as a volume-control program in which hand gestures control the system volume. The module uses the computer's web camera to record images or videos, processes them with OpenCV, Python, and its libraries to extract the needed information, and then, based on the recognized gesture, increases or decreases the volume of the computer. The required setup is a web camera to record the input images and videos provided by the user. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely utilized tool for image processing and computer vision applications in this domain, enjoys extensive popularity: the OpenCV community consists of over 47,000 individuals, and as of a survey conducted in 2020, the estimated number of downloads exceeded 18 million.
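
The gesture-to-volume mapping at the heart of such a module can be sketched without the camera pipeline: the distance between the thumb tip and index fingertip (as a hand-tracking stage would report them) is linearly mapped onto a 0-100 volume scale. The distance calibration range below is an assumption; actual landmark detection via OpenCV is outside this sketch.

```python
# Fingertip-distance -> volume mapping sketch.
import math

def gesture_volume(thumb, index, d_min=20.0, d_max=200.0):
    """Map fingertip separation (pixels) to a clamped 0-100 volume.
    d_min/d_max are assumed calibration bounds for pinched vs. spread."""
    d = math.hypot(index[0] - thumb[0], index[1] - thumb[1])
    t = (d - d_min) / (d_max - d_min)
    return round(100 * min(1.0, max(0.0, t)))
```

A real module would call this once per frame with tracked landmark coordinates and pass the result to the operating system's volume API.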

Hand Shape Recognition with Disparity Pattern of Multiple Model Images (복수 모델영상의 상위도 패턴을 이용한 손형상 인식)

  • 이칠우
    • Journal of Korea Multimedia Society
    • /
    • v.2 no.4
    • /
    • pp.400-408
    • /
    • 1999
  • This paper describes a method for constructing a "disparity pattern", a two-dimensional pattern of brightness differences (disparities) among multiple model images, and an algorithm that recognizes hand shape by using this pattern to measure the distance between an input image and the model images. The virtue of the algorithm is that it treats a whole image as the fundamental processing unit: simple brightness differences computed from multiple images are arranged into a two-dimensional pattern and used directly in the recognition process. Consequently, the method is also useful for other recognition algorithms requiring comparison of large-scale images, since correlation among multiple model images is applied simultaneously during recognition.
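
The matching principle can be sketched as nearest-model classification by brightness disparity: each model image is compared with the input pixel by pixel, the absolute brightness differences form the disparity pattern, and the model with the smallest summed disparity wins. Images are flat lists of equal length here, and the paper's exact weighting is not reproduced.

```python
# Disparity-pattern matching sketch over flattened grayscale images.

def disparity_pattern(img_a, img_b):
    """Per-pixel absolute brightness difference between two images."""
    return [abs(a - b) for a, b in zip(img_a, img_b)]

def classify(input_img, models):
    """Index of the model image with the minimal total disparity."""
    totals = [sum(disparity_pattern(input_img, m)) for m in models]
    return totals.index(min(totals))
```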


Interactive information process image with minute hand gestures

  • Lim, Chan
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.04a
    • /
    • pp.799-802
    • /
    • 2016
  • It is an interesting job to work with V4 to create various contents emphasizing different interfaces such as 3D graphics and multimedia including video, audio, and camera input. Moreover, beyond other interfaces, it can be applied across many aspects of sensory perception, such as visual, auditory, and tactile effects, which makes it easy to build more developed models. We intended users to feel a sense of pleasure and interaction rather than merely experiencing the work as media art.

Real-time Hand Region Detection and Tracking using Depth Information (깊이정보를 이용한 실시간 손 영역 검출 및 추적)

  • Joo, SungIl;Weon, SunHee;Choi, HyungIl
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.3
    • /
    • pp.177-186
    • /
    • 2012
  • In this paper, we propose a real-time approach for detecting and tracking a hand region by analyzing depth images. We build a hand model in advance; the model holds the shape information of a hand. The detection process extracts moving areas in an image, which are possibly caused by a hand moving in front of the camera. The moving areas are identified by analyzing accumulated difference images and applying the region-growing technique. The extracted moving areas are compared against the hand model to be verified as a hand region. The tracking process keeps track of the center points of hand regions across successive frames. For this purpose, it involves three steps. The first step is to determine a seed point, which is the point closest to the center point of the previous frame. The second step is to perform region growing to form a candidate hand region. The third step is to determine the center point of the hand to be tracked. This point is searched for by the mean-shift algorithm within a confined area whose size varies adaptively according to the depth information. To verify the effectiveness of our approach, we evaluated its performance while changing the shape and position of the hand as well as the velocity of its movement.
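
The third tracking step can be sketched in simplified form: the hand center is refined by mean-shift, repeatedly moving to the mean of the region pixels that fall inside a search window, with the window radius shrinking as the hand gets closer to the camera (the depth-adaptive behavior the abstract describes). The radius formula and constant are illustrative assumptions.

```python
# Depth-adaptive mean-shift center refinement sketch on 2-D points.
import math

def adaptive_radius(depth_mm, k=30000.0):
    """Search-window radius inversely proportional to depth (assumed)."""
    return k / max(depth_mm, 1.0)

def mean_shift(points, center, radius, iters=20):
    """Iteratively move center to the mean of the in-window points."""
    for _ in range(iters):
        near = [p for p in points
                if math.hypot(p[0] - center[0], p[1] - center[1]) <= radius]
        if not near:
            break
        nxt = (sum(x for x, _ in near) / len(near),
               sum(y for _, y in near) / len(near))
        if nxt == center:  # converged
            break
        center = nxt
    return center
```

In the described pipeline, `points` would be the hand-region pixels of the current frame and `center` the seed point carried over from the previous frame.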

Implementation of DID interface using gesture recognition (제스쳐 인식을 이용한 DID 인터페이스 구현)

  • Lee, Sang-Hun;Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of Digital Contents Society
    • /
    • v.13 no.3
    • /
    • pp.343-352
    • /
    • 2012
  • In this paper, we implemented a touchless interface for a DID (Digital Information Display) system using gesture recognition techniques covering both hand-motion and hand-shape recognition. In particular, this touchless interface requires no extra attachments and gives the user both easier usage and spatial convenience. For hand-motion recognition, two motion parameters, slope and velocity, were measured for direction-based recognition. For hand-shape recognition, hand-area extraction using the YCbCr color model and several image processing methods was adopted. These recognition methods are combined to generate various commands, such as next-page, previous-page, screen-up, screen-down, and mouse-click, in order to control the DID system. Finally, experimental results showed a command recognition rate of 93%, which is enough to confirm possible application to commercial products.
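
The hand-area extraction idea using the YCbCr color model can be sketched as follows: convert RGB to YCbCr with the standard ITU-R BT.601 formulas and keep pixels whose Cb/Cr chrominance falls in a commonly used skin range. The range bounds are typical literature values, not necessarily those used in the paper.

```python
# YCbCr skin-segmentation sketch (ITU-R BT.601, full-range).

def rgb_to_ycbcr(r, g, b):
    """Standard BT.601 RGB -> YCbCr conversion."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """True if the pixel's chrominance falls in the assumed skin range."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return (cb_range[0] <= cb <= cb_range[1]
            and cr_range[0] <= cr <= cr_range[1])
```

Thresholding on Cb/Cr rather than RGB makes the mask relatively insensitive to brightness changes, which is the usual reason this color model is chosen for hand extraction.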