• Title/Summary/Keyword: 3-D Shape Recognition

Search results: 135 items, processing time 0.03 seconds

iOS 플랫폼에서 Active Shape Model 개선을 통한 얼굴 특징 검출 (Improvement of Active Shape Model for Detecting Face Features in iOS Platform)

  • 이용환;김흥준
    • 반도체디스플레이기술학회지, Vol. 15 No. 2, pp. 61-65, 2016
  • Facial feature detection is a fundamental function in computer vision applications such as security, biometrics, 3D modeling, and face recognition. Among the many algorithms for this task, the Active Shape Model (ASM) is one of the most popular local texture models. This paper addresses issues related to face detection and implements an efficient algorithm for extracting facial feature points on the iOS platform. We extend the original ASM algorithm with four modifications to improve its performance. First, to detect a face and initialize the shape model, we apply the face detection API provided by the iOS CoreImage framework. Second, we construct a weighted local structure model for landmarks to utilize the edge points of the face contour. Third, we build a modified model definition that fits more landmarks than the classical ASM. Finally, we extend and build a two-dimensional profile model for detecting faces within input images. The proposed algorithm is evaluated on an experimental test set containing over 500 face images and successfully extracts facial feature points, clearly outperforming the original ASM.
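
The ASM fitting step that the paper extends can be illustrated with the classical one-dimensional profile search: each landmark slides along its normal to the position whose grey-level gradient profile best matches a trained mean profile. The sketch below is a minimal illustration of that baseline step (not the paper's two-dimensional profile model or its CoreImage-based initialization); the function name and parameters are assumptions.

```python
import numpy as np

def best_profile_shift(image, point, normal, mean_profile, inv_cov,
                       half_len=5, search=3):
    """Classical 1-D ASM profile search along a landmark normal.

    Returns the integer shift (in pixels along the normal) whose
    normalized gradient profile is closest, in Mahalanobis distance,
    to the trained mean profile.
    """
    def sample(center):
        ts = np.arange(-half_len, half_len + 1)
        ys = np.clip((center[1] + ts * normal[1]).astype(int), 0, image.shape[0] - 1)
        xs = np.clip((center[0] + ts * normal[0]).astype(int), 0, image.shape[1] - 1)
        grad = np.gradient(image[ys, xs].astype(float))
        return grad / (np.sum(np.abs(grad)) + 1e-8)    # normalize the profile

    best_shift, best_cost = 0, np.inf
    for s in range(-search, search + 1):
        profile = sample(np.asarray(point, dtype=float) + s * np.asarray(normal, dtype=float))
        diff = profile - mean_profile
        cost = diff @ inv_cov @ diff                    # Mahalanobis distance
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```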

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz;Wang, Jing;Fei, Zesong;Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13 No. 7, pp. 3599-3619, 2019
  • In a human activity recognition system, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods are insufficient for extracting video features and unable to quantify the contribution of the static and motion components. Our work highlights this problem and proposes a Static-Motion Fused features Descriptor (SMFD), which intelligently leverages both static and motion features in the form of a descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only trajectories located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points by using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors and guarantee an equal contribution of all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and make the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments have been conducted on three well-known datasets: UCF101, HMDB51, and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
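
The abstract names a Cholesky transformation as the fusion step but does not give the exact formula, so the following is only a plausible sketch of the idea: the lower-triangular Cholesky factor of a 2x2 coupling matrix supplies the weights that mix the static and motion descriptors so that each contributes in a controlled way. The function name and the coupling parameter rho are hypothetical.

```python
import numpy as np

def cholesky_fuse(static_vec, motion_vec, rho=0.5):
    """Mix two equal-length descriptors through a Cholesky factor.

    A 2x2 coupling matrix [[1, rho], [rho, 1]] is factored as L L^T;
    the rows of L weight the static and motion streams so that the
    fused descriptor carries a controlled contribution from each.
    (Hypothetical illustration; the paper's exact formulation may differ.)
    """
    coupling = np.array([[1.0, rho], [rho, 1.0]])
    L = np.linalg.cholesky(coupling)                 # lower-triangular factor
    stacked = np.vstack([static_vec, motion_vec])    # shape (2, d)
    fused = L @ stacked                              # mix the two streams
    return fused.reshape(-1)                         # concatenated fused vector

# Example with random 128-dimensional descriptors.
fused = cholesky_fuse(np.random.rand(128), np.random.rand(128))
print(fused.shape)                                   # (256,)
```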

Morphometric analysis of the Daphne kiusiana complex (Thymelaeaceae) using digitized herbarium specimens

  • KIM, Yoon-Su;OH, Sang-Hun
    • 식물분류학회지, Vol. 52 No. 3, pp. 144-155, 2022
  • Daphne kiusiana is an evergreen shrub with dense, head-like umbels of white flowers, distributed in southern Korea, Japan, China, and Taiwan. Plants in China and Taiwan are recognized as var. atrocaulis on the basis of a dark purple stem, elliptic leaves, and persistent bracts. Recently, plants on Jejudo Island were segregated as a separate species, D. jejudoensis, given their elliptic leaves with an acuminate apex, a long hypanthium and sepals, and a glabrous hypanthium. Morphological variation in these three closely related taxa, the D. kiusiana complex, was investigated across the distributional range to clarify the taxonomic delimitation of members of the complex. Twelve leaf and flower characters were measured from digitized herbarium specimens using the image analysis program ImageJ and included in a morphometric analysis, the results of which indicate that the level of variation in the characters is very high. A principal component analysis weakly separated D. jejudoensis from D. kiusiana according to floral characteristics such as a longer, glabrous hypanthium and larger sepals. However, some individuals of D. kiusiana, particularly those from Bigeumdo Island, clustered with D. jejudoensis. Recognition of D. kiusiana var. atrocaulis based on leaf shape was not supported by the analysis, and D. jejudoensis may be recognized as a variety of D. kiusiana. Our morphometric analysis shows that digitized herbarium specimens can serve as a useful additional resource for investigating a larger and more diverse set of specimens.
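
As a rough illustration of the morphometric workflow described above, the sketch below runs a principal component analysis on a specimens-by-characters matrix; the file name and column layout are hypothetical, whereas the actual study used ImageJ measurements of twelve leaf and flower characters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: one row per specimen, one column per measured
# character (leaf length, hypanthium length, sepal width, ...).
measurements = np.loadtxt("daphne_characters.csv", delimiter=",", skiprows=1)

# Standardize so characters measured in different units contribute equally.
scaled = StandardScaler().fit_transform(measurements)

pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)                 # specimen scores on PC1/PC2
print("explained variance ratio:", pca.explained_variance_ratio_)
print("PC1 character loadings:", pca.components_[0])
```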

An Automatic Camera Tracking System for Video Surveillance

  • Lee, Sang-Hwa;Sharma, Siddharth;Lin, Sang-Lin;Park, Jong-Il
    • 한국방송∙미디어공학회:학술대회논문집, 한국방송공학회 2010 Summer Conference, pp. 42-45, 2010
  • This paper proposes an intelligent video surveillance system for tracking human objects. The proposed system integrates object extraction, human object recognition, face detection, and camera control. First, objects in the video signal are extracted using background subtraction. Then, the object region is examined to determine whether it is human. For this recognition step, the region-based shape descriptor of MPEG-7, the angular radial transform (ART), is used to learn and train the shapes of human bodies. When the object is judged to be human, or otherwise worth investigating, the face region is detected. Finally, the face or object region is tracked in the video, and a pan/tilt/zoom (PTZ) controllable camera follows the moving object using its motion information. The simulation is performed with real CCTV cameras and their communication protocol. According to the experiments, the proposed system is able to track a moving object (human) automatically, not only in the image domain but also in real 3-D space. The proposed system reduces the need for human supervisors and improves surveillance efficiency through computer vision techniques.
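
The first stage of the pipeline, background subtraction followed by extraction of candidate object regions, can be sketched with standard OpenCV calls; the abstract does not say which background model the authors used, so the MOG2 subtractor, the file name, and the thresholds below are illustrative assumptions.

```python
import cv2

# Background-subtraction front end of a tracking pipeline (illustrative only).
capture = cv2.VideoCapture("surveillance.avi")       # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                   # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                 # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("foreground objects", frame)
    if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
        break
capture.release()
cv2.destroyAllWindows()
```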

맞춤판재 용접용 3차원 비젼 감시기 개발 (Development of 3-D Vision Monitoring System for Tailored Blank Welding)

  • 장영건;이경돈
    • 한국정밀공학회지, Vol. 14 No. 12, pp. 17-23, 1997
  • A 3-D vision system was developed to evaluate blank line-up and to monitor the gap and thickness difference between blanks in a tailored blank welding system. A structured lighting method is used for 3-D vision recognition. Images of the sheared portions of blanks vary with the roughness of the blank surface, the shape of the sheared geometry, and blurring. For such images it is difficult to obtain accurate and reliable information in real time with binary image processing or contour detection techniques. We propose a new energy integration method that is robust to blurring and changes in illumination. The method is computationally simple and uses a feature restoration concept, unlike other digital image restoration methods that aim to restore the image itself, and it may be used in conventional applications based on the structured line lighting technique. Experimental results show a measurement repeatability of ± pixel for gap and thickness difference in static and dynamic tests. The data are expected to be useful for preview gap control.
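
The abstract does not detail the proposed energy integration method, so the sketch below shows only a generic, closely related idea for structured-light images: integrating intensity column by column and taking the intensity-weighted row centroid, which is less sensitive to blur than a binary threshold or a single-pixel peak. The function name and axis convention are assumptions.

```python
import numpy as np

def stripe_profile(image):
    """Locate a structured-light stripe by integrating intensity.

    For each column, intensities are integrated with their row
    coordinates as weights, giving a sub-pixel stripe position.
    This is a generic centroid method, not necessarily the paper's
    exact energy integration scheme.
    """
    img = image.astype(float)
    rows = np.arange(img.shape[0])[:, None]          # row coordinates
    energy = img.sum(axis=0) + 1e-8                  # integrated energy per column
    centroid = (img * rows).sum(axis=0) / energy     # weighted mean row per column
    return centroid                                   # stripe height profile
```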

Three-dimensional Head Tracking Using Adaptive Local Binary Pattern in Depth Images

  • Kim, Joongrock;Yoon, Changyong
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 16 No. 2, pp. 131-139, 2016
  • Recognition of human motion has become a major area of computer vision because of its potential for human-computer interfaces (HCI) and surveillance. Among existing recognition techniques for human motion, head detection and tracking is the basis of all human motion recognition. Various approaches have been tried to detect and trace the position of the human head precisely in two-dimensional (2D) images. However, it remains a challenging problem because human appearance varies greatly with pose and images are affected by illumination changes. To enhance the performance of head detection and tracking, real-time three-dimensional (3D) data acquisition sensors such as time-of-flight cameras and the Kinect depth sensor have recently been used. In this paper, we propose an effective feature extraction method, called the adaptive local binary pattern (ALBP), for depth-image-based applications. In contrast to the well-known conventional local binary pattern (LBP), the proposed ALBP not only extracts shape information without texture in depth images but is also invariant to distance changes in range images. We apply the proposed ALBP to head detection and tracking in depth images to show its effectiveness and usefulness.
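
For reference, the conventional LBP operator that ALBP builds on can be written in a few lines of NumPy; the adaptive thresholding that makes ALBP distance-invariant is not specified in the abstract, so only the baseline 8-neighbor operator is sketched here.

```python
import numpy as np

def lbp_8neighbors(depth):
    """Conventional 8-neighbor LBP code image for a depth map.

    Each interior pixel is compared with its eight neighbors; a neighbor
    at greater or equal depth sets the corresponding bit. The adaptive
    thresholding that distinguishes ALBP from LBP is not described in
    the abstract and is therefore not reproduced here.
    """
    d = depth.astype(float)
    center = d[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = d[1 + dy:d.shape[0] - 1 + dy, 1 + dx:d.shape[1] - 1 + dx]
        code += ((neighbor >= center) * (1 << bit)).astype(np.uint8)
    return code
```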

Label Restoration Using Biquadratic Transformation

  • Le, Huy Phat;Nguyen, Toan Dinh;Lee, Guee-Sang
    • International Journal of Contents, Vol. 6 No. 1, pp. 6-11, 2010
  • Recently, there has been research on using portable digital cameras to recognize objects in natural scene images, including labels or marks on cylindrical surfaces. In many cases, the text or logo on a label can be distorted by the structure of the object on which the label resides. Since this distortion can degrade the performance of object recognition, the label should be rectified, that is, restored from its deformation. In this paper, a new method for label detection and restoration in digital images is presented. In the detection phase, the Hough transform is employed to detect the two vertical boundaries of the label, and a horizontal edge profile is analyzed to detect its upper and lower boundaries. Then, a biquadratic transformation is used to restore the rectangular shape of the label. The proposed algorithm performs restoration of 3D objects in 2D space and requires neither auxiliary hardware such as a 3D camera for constructing 3D models nor multiple cameras for capturing the object from different views. Experimental results demonstrate the effectiveness of the proposed method.
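
The detection phase relies on the standard Hough transform, which can be sketched with OpenCV as below: edges are extracted and near-vertical Hough lines are kept as candidate left/right label boundaries. The file name, Canny thresholds, and angle tolerance are illustrative assumptions.

```python
import cv2
import numpy as np

# Detection-phase sketch: find near-vertical Hough lines as candidate
# left/right label boundaries.
image = cv2.imread("bottle_label.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 50, 150)

lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)   # (rho, theta) pairs
vertical = []
if lines is not None:
    for rho, theta in lines[:, 0]:
        # theta near 0 or pi corresponds to (almost) vertical lines
        if theta < np.deg2rad(10) or theta > np.deg2rad(170):
            vertical.append((rho, theta))

print(f"candidate vertical boundaries: {len(vertical)}")
```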

WEED DETECTION BY MACHINE VISION AND ARTIFICIAL NEURAL NETWORK

  • S. I. Cho;Lee, D. S.;J. Y. Jeong
    • 한국농업기계학회:학술대회논문집, 한국농업기계학회 2000 THE THIRD INTERNATIONAL CONFERENCE ON AGRICULTURAL MACHINERY ENGINEERING, Vol. II, pp. 270-278, 2000
  • A machine vision system using a charge-coupled device (CCD) camera was developed for weed detection in a radish farm. Shape features were analyzed on binary images obtained from color images of radish and weeds. Aspect, Elongation, and PTB were selected as significant variables for the discriminant models using the STEPDISC option. The selected variables were used in the DISCRIM procedure to compute a discriminant function for classifying images into one of the two classes. Using discriminant analysis, the successful recognition rate was 92% for radish and 98% for weeds. To recognize radish and weeds more effectively than with discriminant analysis, an artificial neural network (ANN) was used. The developed ANN model distinguished radish from weeds with 100% accuracy. The performance of the ANN was improved with a regularization method to prevent overfitting and to generalize well. The successful recognition rate in the farms was 93.3% for radish and 93.8% for weeds. Overall, the machine vision system combining a CCD camera with an artificial neural network was useful for detecting weeds in radish farms.
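
The shape features named in the abstract (Aspect, Elongation, PTB) are not defined there, so the sketch below computes two commonly used variants from a binary plant mask with OpenCV; the paper's exact definitions may differ, and PTB is omitted.

```python
import cv2

def shape_features(binary_mask):
    """Compute simple shape features from a binary plant mask.

    Aspect and elongation here follow common definitions (bounding-box
    width/height and (major - minor)/(major + minor) of a fitted
    ellipse); these are stand-ins, not necessarily the paper's exact
    feature definitions.
    """
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    aspect = w / h
    (_, _), (axis_a, axis_b), _ = cv2.fitEllipse(largest)
    major, minor = max(axis_a, axis_b), min(axis_a, axis_b)
    elongation = (major - minor) / (major + minor)
    return {"aspect": aspect, "elongation": elongation}
```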

히어 캠 임베디드 플랫폼 설계 (HearCAM Embedded Platform Design)

  • 홍선학;조경순
    • 디지털산업정보학회논문지, Vol. 10 No. 4, pp. 79-87, 2014
  • In this paper, we implemented the HearCAM platform on a Raspberry Pi B+ board, an open-source platform. The Raspberry Pi B+ model consists of a dual step-down (buck) power supply with a polarity protection circuit and hot-swap protection, a Broadcom BCM2835 SoC running at 700 MHz, 512 MB of RAM soldered on top of the Broadcom chip, and a Pi camera serial connector. We used the Google speech recognition engine to recognize voice characteristics, implemented pattern matching with OpenCV, and added speech output with the SVOX TTS (text-to-speech) engine so that the matching result is spoken back to the user. The HearCAM thus identifies the voice and pattern characteristics of a target image scanned with the Pi camera while gathering temperature sensor data in an IoT environment. Speech recognition, pattern matching, and temperature sensor data logging operate over Wi-Fi wireless communication. Finally, we designed and fabricated the HearCAM enclosure with 3D printing technology.
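
The abstract states that pattern matching is implemented with OpenCV but does not say which method; normalized cross-correlation template matching is one plausible choice and is sketched below with hypothetical file names and threshold.

```python
import cv2

# Pattern-matching sketch: template matching by normalized cross-correlation.
# The file names and the confidence threshold are hypothetical.
scene = cv2.imread("pi_camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
pattern = cv2.imread("target_pattern.jpg", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(scene, pattern, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                     # empirical confidence threshold
    print(f"pattern found at {max_loc} with score {max_val:.2f}")
else:
    print("pattern not found")
```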

컴퓨터그래픽스를 이용한 사실적인 3D 인물 일러스트레이션의 표현 (Realistic Expression of 3-Dimensional Human Illustrations Using Computer Graphics)

  • 김훈
    • 디자인학연구, Vol. 19 No. 1, pp. 79-88, 2006
  • The human face is a visual symbol of identity. Each person's distinct facial appearance plays an important role in distinguishing them from others and is directly linked to personal identity. Historically, as perceptions of the face changed over time and as expressive and communication media diversified and advanced, the way the face is depicted also changed considerably. Yet never before has the face drawn as much interest and attention as it does today. Technically, the advent of computer graphics marked a turning point in facial representation. In particular, since visual images can now be produced, stored, and transmitted in digital form, temporal and spatial constraints have disappeared, and visual image information carries greater weight in communication than before. Among these, digitally created face images are finding ever wider use. Accordingly, for several years 3D (three-dimensional) representation of the face with computer graphics has made it possible to assemble elements such as the shape of each facial part and texture maps like puzzle pieces as needed, so that faces can be expressed easily without specialized skills. This study examines each production stage of 3D facial representation and, based on the results, visualizes them in a case study in order to explore how general visual designers who are not 3D specialists can effectively express faces in 3D form.
