• Title/Summary/Keyword: Computer vision technology

Search results: 669

Automatic Recognition of the Front/Back Sides and Stalk States for Mushrooms(Lentinus Edodes L.) (버섯 전후면과 꼭지부 상태의 자동 인식)

  • Hwang, H.;Lee, C.H.
    • Journal of Biosystems Engineering / v.19 no.2 / pp.124-137 / 1994
  • Visual features of a mushroom (Lentinus Edodes L.) are critical in grading and sorting, as they are for most agricultural products. Because of its complex and variable visual features, grading and sorting of mushrooms have been done manually by human experts. To realize automatic handling and grading of mushrooms in real time, a computer vision system should be utilized and efficient, robust processing of the camera-captured visual information provided. Since the visual features of a mushroom are distributed over the front and back sides, recognizing the side and the state of the stalk, including the stalk orientation, from the captured image is a prime step in the automatic task. In this paper, an efficient and robust recognition process identifying the front and back sides and the state of the stalk was developed, and its performance was compared with other recognition trials. First, recognition was attempted based on a rule set derived from experimental heuristics using quantitative features, such as geometry and texture, extracted from the segmented mushroom image. Then, neural-network-based recognition was carried out without extracting quantitative features. As network input, the segmented binary image obtained from combined-type automatic thresholding was tested first, and then the gray-valued raw camera image was used directly. The state of the stalk seriously affects the measured size of the mushroom cap; when this effect is serious, the stalk should be excluded when sizing the cap. A stalk-removal process followed by boundary regeneration of the cap image is also presented. The neural-network-based processing of the gray-valued raw image gave successful results for the recognition task. The technology developed through this research may open a new way of quality inspection and sorting, especially for agricultural products whose visual features are fuzzy and not uniquely defined.

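The neural-network approach described in this abstract, feeding the gray-valued raw image directly into a network rather than hand-extracted features, can be illustrated with a minimal sketch. The input resolution, network size, and the four side/stalk classes below are illustrative assumptions, not the authors' 1994 architecture.

```python
import torch
import torch.nn as nn

# Hypothetical class labels for side/stalk states (illustrative only).
CLASSES = ["front", "back", "back_stalk_up", "back_stalk_bent"]

class GraySideNet(nn.Module):
    """Small MLP mapping a downsampled gray-level image to side/stalk classes."""
    def __init__(self, img_size=32, n_classes=len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_size * img_size, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):            # x: (batch, 1, img_size, img_size), values in [0, 1]
        return self.net(x)

model = GraySideNet()
dummy = torch.rand(4, 1, 32, 32)     # stand-in for normalized camera images
print(model(dummy).shape)            # torch.Size([4, 4]) -> class scores
```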

Reflection-type Finger Vein Recognition for Mobile Applications

  • Zhang, Congcong;Liu, Zhi;Liu, Yi;Su, Fangqi;Chang, Jun;Zhou, Yiran;Zhao, Qijun
    • Journal of the Optical Society of Korea / v.19 no.5 / pp.467-476 / 2015
  • Finger vein recognition, which is a promising biometric method for identity authentication, has attracted significant attention. Considerable research focuses on transmission-type finger vein recognition, but this type of authentication is difficult to implement in mobile consumer devices, so reflection-type finger vein recognition needs to be developed. In the reflection-type vein recognition field, the majority of researchers concentrate on palm and palm-dorsa patterns, and only a few pay attention to reflection-type finger vein recognition. Thus, this paper presents reflection-type finger vein recognition for biometric applications that can be integrated into mobile consumer devices. A database is built to test the proposed algorithm. A novel method of region-of-interest localization for finger vein images is introduced, and a scheme for effectively extracting finger vein features is proposed. Experiments demonstrate the feasibility of reflection-type finger vein recognition.

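The paper's specific ROI localization and feature extraction schemes are not detailed in the abstract; the sketch below shows only a generic ROI-localization step (threshold the bright finger region, crop its bounding box) as one plausible starting point. The synthetic test image and all parameters are assumptions.

```python
import cv2
import numpy as np

def finger_roi(gray):
    """Crop the bounding box of the largest bright blob (assumed to be the finger)."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    finger = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(finger)
    return gray[y:y + h, x:x + w]

# Synthetic stand-in for a reflection-type finger image (bright finger on dark background).
img = np.zeros((240, 320), np.uint8)
cv2.rectangle(img, (60, 40), (260, 200), 180, -1)
print(finger_roi(img).shape)  # size of the cropped ROI
```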
Emotion Recognition Method using Physiological Signals and Gestures (생체 신호와 몸짓을 이용한 감정인식 방법)

  • Kim, Ho-Duck;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.3 / pp.322-327 / 2007
  • Researchers in the field of psychology have used electroencephalography (EEG) to record the activity of the human brain for many years. As technology has developed, the neural basis of the functional areas involved in emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language for communication, and their recognition is important because they are a useful communication medium between humans and computers; gesture recognition is typically studied with computer vision methods. Most existing emotion recognition methods use either physiological signals or gestures alone. In this paper, we use physiological signals and gestures together for emotion recognition and select driver emotion as a specific target. The experimental results show that using both physiological signals and gestures yields higher recognition rates than using physiological signals or gestures alone. For both physiological signals and gestures, Interactive Feature Selection (IFS), a method based on reinforcement learning, is used for feature selection.

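The core idea of combining physiological signals and gestures can be sketched as simple feature-level fusion; the feature dimensions, classifier, and random data below are placeholders, and the paper's reinforcement-learning-based Interactive Feature Selection (IFS) step is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
physio = rng.normal(size=(200, 8))     # stand-in for EEG-derived physiological features
gesture = rng.normal(size=(200, 5))    # stand-in for hand/head gesture features
labels = rng.integers(0, 3, size=200)  # hypothetical emotion classes

# Feature-level fusion: concatenate both modalities and train one classifier.
fused = np.hstack([physio, gesture])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
print(clf.score(fused, labels))        # training accuracy on the toy data
```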
A Method of Biofouling Population Estimation on Marine Structure (수중구조물 표면에 부착된 해양생물의 개체 수 예측 방법)

  • Choi, Hyun-Jun;Kim, Gue-Chol;Kim, Bu-Ki
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.4 / pp.845-850 / 2018
  • In this paper, we propose a method for estimating the number of biofouling organisms attached to the surface of marine structures. The method estimates the population by computing regional maxima in images taken underwater. To do this, we analyze the correlation between the regional maxima and the number of organisms; the analysis shows a significant correlation between the number of regional maxima and the number of biofouling organisms. Using this result, experiments were conducted on underwater images. The experimental results show that the more regional maxima an image contains, the larger the number of biofouling organisms in the image. The proposed method can serve as an important computer vision technique for underwater images.

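A minimal sketch of the counting idea described above: smooth the grayscale underwater image and count its regional maxima as a proxy for the number of attached organisms. The smoothing sigma and minimum peak distance are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max

def count_regional_maxima(gray, sigma=2.0, min_distance=5):
    """Return the number of regional maxima in a smoothed grayscale image."""
    smoothed = ndimage.gaussian_filter(gray.astype(float), sigma=sigma)
    peaks = peak_local_max(smoothed, min_distance=min_distance)
    return len(peaks)

# Synthetic stand-in image with three bright blobs.
img = np.zeros((100, 100))
for y, x in [(20, 20), (50, 70), (80, 30)]:
    img[y, x] = 1.0
img = ndimage.gaussian_filter(img, 3)
print(count_regional_maxima(img))  # -> 3
```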
Signaling Method for Spatial Adjacency Matrix of UWV media in MPEG Media Transport Environment (MPEG Media Transport 환경 내 UWV 미디어 공간 인접 행렬 시그널링 방안)

  • Kim, Junsik;Kang, Dongjin;Lee, Euisang;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.23 no.2 / pp.261-273 / 2018
  • As progress in image processing, computer vision, and display technologies has raised market interest in the generation and consumption of various types of media, interest in UWV media is also increasing. For the consumption of UWV media, technology that allows users to select and consume only the regions they are interested in is needed in order to manage server load and terminal resources effectively and to provide user-driven services. This paper therefore proposes a method for describing and transmitting the spatial relationships among the media that compose UWV content, by extending MPEG-CI and Layout signaling, so that users can selectively consume UWV media.

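As a rough illustration of what a spatial adjacency matrix for tiled UWV media can look like, the sketch below marks tiles that share an edge in a panorama grid; the 2x3 layout is an assumption, and the paper's actual MPEG-CI/Layout signaling syntax is not reproduced here.

```python
import numpy as np

# Entry (i, j) is 1 when tile i and tile j share an edge in the panorama grid.
rows, cols = 2, 3
n = rows * cols
adj = np.zeros((n, n), dtype=int)

for r in range(rows):
    for c in range(cols):
        i = r * cols + c
        for dr, dc in ((0, 1), (1, 0)):      # right and bottom neighbors
            rr, cc = r + dr, c + dc
            if rr < rows and cc < cols:
                j = rr * cols + cc
                adj[i, j] = adj[j, i] = 1    # adjacency is symmetric

print(adj)
```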
Efficient Image Stitching Using Fast Feature Descriptor Extraction and Matching (빠른 특징점 기술자 추출 및 정합을 이용한 효율적인 이미지 스티칭 기법)

  • Rhee, Sang-Burm
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.65-70 / 2013
  • Recently, research in the field of computer vision has been active because digital images can be generated easily thanks to the development and spread of digital camera technology. In particular, research that extracts and utilizes features in images has been actively carried out. Image stitching is a method that creates a high-resolution image by extracting and matching features, and it can be widely used for military and medical purposes as well as in various fields of everyday life. In this paper, we propose an efficient image stitching method using fast feature descriptor extraction and matching based on the SURF algorithm. Matching points can be found accurately and quickly by reducing the dimensionality of the feature descriptor; the descriptor is generated by filtering out unnecessary minutiae from the extracted features. To reduce the computation time and match features efficiently, we reduce the dimensionality of the descriptor and expand the orientation window. In our results, the processing times for feature matching and image stitching are shorter than those of previous algorithms, and the method produces natural-looking stitched images.

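The overall stitching pipeline (detect features, match descriptors, estimate a homography, warp) can be sketched with OpenCV. ORB is used here as a freely available stand-in for the paper's dimension-reduced SURF descriptor, and the file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder file names
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe features (ORB stand-in for the paper's reduced SURF descriptor).
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

# Estimate a homography from the matched points with RANSAC.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp img1 into img2's frame on a canvas wide enough for both images.
canvas = cv2.warpPerspective(img1, H, (img1.shape[1] + img2.shape[1], img2.shape[0]))
canvas[0:img2.shape[0], 0:img2.shape[1]] = img2
cv2.imwrite("stitched.jpg", canvas)
```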
A review on deep learning-based structural health monitoring of civil infrastructures

  • Ye, X.W.;Jin, T.;Yun, C.B.
    • Smart Structures and Systems / v.24 no.5 / pp.567-585 / 2019
  • In the past two decades, structural health monitoring (SHM) systems have been widely installed on various civil infrastructures to track the state of their structural health and to detect structural damage or abnormality, through long-term monitoring of environmental conditions as well as structural loadings and responses. In an SHM system, numerous sensors acquire a huge volume of monitoring data, which reflects the in-service condition of the target structure. In order to bridge the gap between SHM and structural maintenance and management (SMM), it is necessary to employ advanced data processing methods to convert the original multi-source heterogeneous field monitoring data into specific physical indicators that support effective decisions regarding inspection, maintenance, and management. Conventional approaches to data analysis are confronted with challenges from environmental noise, the volume of measurement data, the complexity of computation, etc., and these severely constrain the pervasive application of SHM technology. In recent years, with the rapid progress of computing hardware and image acquisition equipment, deep learning-based data processing offers a new channel for mining the massive data from an SHM system, towards autonomous, accurate, and robust processing of the monitoring data. Many researchers from the SHM community have explored the applications of deep learning-based approaches to structural damage detection and structural condition assessment. This paper reviews deep learning-based SHM of civil infrastructures, covering a brief history of the development of deep learning, the applications of deep learning-based data processing approaches in the SHM of many kinds of civil infrastructure, and the key challenges and future trends of deep learning-based SHM.

Recent Trends in Human Pose Estimation Based on a Single Image (단일 이미지에 기반을 둔 사람의 포즈 추정에 대한 연구 동향)

  • Cho, Jungchan
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.31-42 / 2019
  • With the recent development of deep learning technology, remarkable achievements have been made in many research areas of computer vision. Deep learning has also brought dramatic improvements to two-dimensional and three-dimensional human pose estimation from a single image, and many researchers have been expanding the scope of this problem. Human pose estimation is one of the most important research fields because it has various applications; in particular, it is a key factor in understanding the behavior, state, and intention of people in image or video analysis. Against this background, this paper surveys research trends in estimating human poses from a single image. Because there are various approaches to robust and accurate human pose estimation, this paper introduces them in two separate subsections: 2D human pose estimation and 3D human pose estimation. Moreover, this paper summarizes well-known data sets used in this field and introduces various studies that utilize human poses to solve their own problems.

A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.5 / pp.637-644 / 2021
  • In general, the importance of 3D space and 3D object detection and recognition technology has grown in the fields of computer vision, robotics, and augmented reality. In particular, since RGB images and depth images can be acquired in real time through image sensors such as Microsoft Kinect, object detection, tracking, and recognition studies have changed considerably. In this paper, we propose a method to improve the quality of 3D reconstructed images by processing images acquired through RGB-depth cameras in a multi-view camera system. We propose a method that removes noise outside the object by applying a mask obtained from the color image, and a method that applies a combined filtering operation to the differences in depth between pixels inside the object. The experimental results confirm that the proposed method can effectively remove noise and improve the quality of the 3D reconstructed image.

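The two steps named in the abstract, masking out depth noise outside the object using the color image and filtering the depth inside the object, can be sketched as follows; the simple brightness-based mask and the median/bilateral filters are illustrative choices, not the authors' exact pipeline, and the file names are placeholders.

```python
import cv2
import numpy as np

color = cv2.imread("color.png")                                  # placeholder file names
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# (1) Foreground mask from the color image (here: anything not near-black background),
#     used to zero out depth noise outside the object.
gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)
depth[mask == 0] = 0

# (2) Smooth depth variation between neighboring pixels inside the object.
depth = cv2.medianBlur(depth, 5)                                 # remove speckle noise
depth = cv2.bilateralFilter(depth, d=9, sigmaColor=30, sigmaSpace=9)

cv2.imwrite("depth_filtered.png", depth.astype(np.uint16))
```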
Violence Recognition using Deep CNN for Smart Surveillance Applications (스마트 감시 애플리케이션을 위해 Deep CNN을 이용한 폭력인식)

  • Ullah, Fath U Min;Ullah, Amin;Muhammad, Khan;Lee, Mi Young;Baik, Sung Wook
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.53-59 / 2018
  • Due to recent developments in computer vision technology, complex actions can be recognized with reasonable accuracy in smart cities. In contrast, violence recognition, such as detecting events involving fights or knives, has received less attention. The capability of visual surveillance can be used to detect fights in streets or in prisons. In this paper, we propose a deep learning-based violence recognition method for surveillance cameras. A convolutional neural network (CNN) model is trained and fine-tuned on available benchmark datasets of fights and knives for violence recognition. When an abnormal event is detected, an alarm can be sent to the nearest police station so that immediate action can be taken; when the predicted probabilities of the fight and knife classes are both very low, the situation is considered normal. The experimental results show that the proposed method outperformed other state-of-the-art CNN models by a large margin, achieving a maximum accuracy of 99.21%.
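
The fine-tuning and low-confidence-means-normal logic can be sketched with a pretrained backbone from torchvision; the ResNet-18 backbone, the 0.5 threshold, and the class names below are illustrative assumptions rather than the paper's exact model.

```python
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["fight", "knife"]    # illustrative class names
THRESHOLD = 0.5                 # illustrative "normal" cutoff

# Reuse an ImageNet-pretrained CNN and replace only its final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                                   # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))      # new trainable head

def classify(frame_batch):
    """frame_batch: (N, 3, 224, 224) normalized frames -> 'normal' or a violence class."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frame_batch), dim=1)
    top_p, top_i = probs.max(dim=1)
    return ["normal" if p < THRESHOLD else CLASSES[i] for p, i in zip(top_p, top_i)]

print(classify(torch.rand(2, 3, 224, 224)))                   # stand-in for preprocessed frames
```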