• Title/Summary/Keyword: object-image recognition


Multiple Moving Person Tracking based on the IMPRESARIO Simulator

  • Kim, Hyun-Deok;Jin, Tae-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.05a / pp.877-881 / 2008
  • In this paper, we propose a real-time people tracking system that uses multiple CCD cameras for security inside a building. The cameras are mounted on the ceiling of the laboratory so that the images of passing people fully overlap. The implemented system recognizes people moving in various directions. To track people even when their images partially overlap, the proposed system estimates and tracks a bounding box enclosing each person in the tracking region. The approximated convex hull of each individual in the tracking area is obtained to provide more accurate tracking information. To achieve this goal, we propose a method for 3D walking-human tracking based on the IMPRESARIO framework that incorporates cascaded classifiers into hypothesis evaluation. The efficiency of adaptively selecting cascaded classifiers is also presented, and we show that cascaded classifiers improve the reliability of the likelihood calculation. Experimental results show that the proposed method can smoothly and effectively detect and track walking humans through environments such as dense forests. A minimal bounding-box and convex-hull sketch follows this entry.
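
The following is a minimal sketch of the bounding-box and convex-hull step described above, using simple background subtraction with OpenCV. The function name, threshold, and minimum blob area are illustrative assumptions, not values from the IMPRESARIO system.

```python
# Minimal sketch (not the paper's IMPRESARIO code): extract a bounding box and
# an approximated convex hull for each person-sized foreground blob seen by a
# ceiling-mounted camera. Threshold and area values are illustrative only.
import cv2
import numpy as np

def person_regions(frame_gray, background_gray, thresh=30, min_area=500):
    """Return (bounding_box, convex_hull) pairs for foreground blobs."""
    diff = cv2.absdiff(frame_gray, background_gray)        # change vs. background
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
    regions = []
    for c in contours:
        if cv2.contourArea(c) < min_area:                   # skip small noise blobs
            continue
        regions.append((cv2.boundingRect(c), cv2.convexHull(c)))
    return regions
```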


A Study on the Edge Detection using Modified Expansion Mask (변형된 확장 마스크를 이용한 에지 검출에 관한 연구)

  • Lee, Chang-Young;Hwang, Yeong-Yeun;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.05a / pp.630-632 / 2012
  • Contemporary society has evolved into the digital information age, and the use of various digital images has increased accordingly. Various digital image processing methods are used to process these images. Edge detection, one of these methods, is applied in areas such as object recognition and line detection. Many edge detection methods exist, such as the Sobel, Prewitt, and Laplacian operators. Because existing methods process every image in the same way regardless of its gray-level distribution, their edge detection characteristics are insufficient. Therefore, this study proposes an algorithm using a modified expansion mask to improve on these shortcomings. A sketch of the conventional fixed-mask baseline follows this entry.
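
As a point of reference for the fixed-mask methods named above, here is a minimal sketch of Sobel edge detection with OpenCV; the proposed modified expansion mask itself is not specified in the abstract and is not reproduced here.

```python
# Minimal sketch of the conventional fixed-mask (Sobel) baseline mentioned in
# the abstract; the paper's modified expansion mask is not shown here.
import cv2
import numpy as np

def sobel_edges(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return np.uint8(np.clip(magnitude, 0, 255))       # edge strength image
```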


Development of Moving Objects Recognition and Tracking System on 360 Degree Panorama (360도 영상에서 이동 물체 감지 및 추적 시스템의 개발)

  • Ko, Kwang-Man;Joo, Su-Chong
    • Journal of Korea Multimedia Society / v.21 no.2 / pp.289-299 / 2018
  • A 360-degree panorama displays a very wide field of view on a single screen, so a large area can be seen at once. In particular, cylindrical panoramas are the most widely used spherical images; their horizontal viewing angle reaches 360 degrees, so the front, rear, left, and right can be observed simultaneously. Because all directions can be monitored at the same time, a 360-degree panorama is more effective for surveillance than other methods. In this paper, we develop a system that recognizes and tracks moving objects on a 360-degree panorama, and then present and verify the experimental results. To this end, we first developed a system that recognizes moving objects in a 360-degree panorama using the DoF (Difference of Frame) algorithm. Second, based on the TLD algorithm, we developed an application that can track a specific moving object in a 360-degree panorama and presented the experimental results. A minimal frame-differencing sketch follows this entry.
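
The following is a minimal sketch of the frame-differencing idea behind the DoF step, applied to two consecutive grayscale panorama frames; the threshold value and function name are illustrative assumptions, not the paper's implementation.

```python
# Minimal frame-differencing sketch (the "Difference of Frame" idea); the
# threshold is illustrative and not taken from the paper.
import cv2

def motion_mask(prev_gray, curr_gray, thresh=25):
    diff = cv2.absdiff(curr_gray, prev_gray)              # per-pixel change
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask                                           # 255 where motion occurred
```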

Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • Kim, Ik-Dong;Shim, Jae-Chang (김익동;심재창)
    • Journal of KIISE:Software and Applications / v.31 no.5 / pp.617-624 / 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on a 3D facial surface. In general, humans perceive how deep or shallow a region is by comparing the depth of neighboring regions of an object; the larger the depth difference between regions, the easier it is to distinguish them. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input, and ADD values are obtained by differencing two range values separated by a fixed coordinate distance, in both the horizontal and vertical directions. The ADD and the input image are analyzed to extract facial features, and the nose region, the most prominent feature of a 3D facial surface, is then localized effectively and accurately. A minimal ADD computation sketch follows this entry.
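
A minimal sketch of the ADD computation as described above: range values separated by an offset are differenced horizontally and vertically. The offset value and the use of absolute differences are assumptions for illustration, not the paper's settings.

```python
# Minimal ADD sketch: difference range values separated by an offset d in the
# horizontal and vertical directions. The offset and absolute-value choice are
# illustrative assumptions.
import numpy as np

def adjacent_depth_differences(range_img, d=5):
    z = range_img.astype(np.float32)
    add_h = np.zeros_like(z)
    add_v = np.zeros_like(z)
    add_h[:, :-d] = np.abs(z[:, d:] - z[:, :-d])   # horizontal ADD
    add_v[:-d, :] = np.abs(z[d:, :] - z[:-d, :])   # vertical ADD
    return add_h, add_v
```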

A Study on Edge Detection Algorithm using Modified Morphology (변형된 모폴로지를 이용한 에지 검출 알고리즘에 관한 연구)

  • Lee, Chang-Young;An, Young-Joo;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.929-931 / 2015
  • As digital image processing technology develops, image edges are widely utilized in various fields such as object recognition and detection. Most current edge detection methods use the fixed weighting masks of Sobel or Roberts. Such methods have the advantage of simple implementation, but the disadvantage that their edge detection characteristics are somewhat insufficient. Thus, an algorithm using modified morphology is proposed to address these problems and obtain better edge detection, and a simulation is conducted to compare it with the existing methods. A sketch of the standard morphological-gradient baseline follows this entry.
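
For context, the following is a minimal sketch of a standard morphological gradient (dilation minus erosion) used as an edge detector; the paper's modification of the morphology is not specified in the abstract and is not reproduced here. The kernel size is an illustrative assumption.

```python
# Minimal sketch of a standard morphological-gradient edge detector; the
# paper's modified morphology is not shown here. Kernel size is illustrative.
import cv2
import numpy as np

def morphological_gradient(gray, ksize=3):
    kernel = np.ones((ksize, ksize), np.uint8)
    return cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)  # dilation - erosion
```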


Nonlinear model for estimating depth map of haze removal (안개제거의 깊이 맵 추정을 위한 비선형 모델)

  • Lee, Seungmin;Ngo, Dat;Kang, Bongsoon
    • Journal of IKEEE / v.24 no.2 / pp.492-496 / 2020
  • Visibility deteriorates in hazy weather, and it is difficult to accurately recognize information captured by a camera. Research is being actively conducted on haze removal so that camera-based applications such as object localization/detection and lane recognition can operate normally even in hazy weather. In this paper, we propose a nonlinear model for depth map estimation, based on an extensive analysis showing that the difference between brightness and saturation in a hazy image increases nonlinearly with scene depth. Quantitative evaluation (MSE, SSIM, TMQI) shows that the proposed haze removal method based on the nonlinear model is superior to other state-of-the-art methods. A sketch of the brightness-saturation difference follows this entry.
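
The following is a minimal sketch of the per-pixel brightness-saturation difference that the abstract relates to scene depth; the fitted nonlinear model itself is not given in the abstract, so only the raw feature is computed here.

```python
# Minimal sketch: per-pixel difference between brightness (V) and saturation (S)
# in HSV space, the quantity the abstract relates nonlinearly to scene depth.
# The fitted nonlinear depth model is not reproduced here.
import cv2

def brightness_saturation_difference(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype("float32") / 255.0
    s, v = hsv[..., 1], hsv[..., 2]
    return v - s   # tends to increase with haze thickness / depth
```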

Effective machine learning-based haze removal technique using haze-related features (안개관련 특징을 이용한 효과적인 머신러닝 기반 안개제거 기법)

  • Lee, Ju-Hee;Kang, Bong-Soon
    • Journal of IKEEE / v.25 no.1 / pp.83-87 / 2021
  • In harsh environments such as fog or fine dust, a camera's ability to detect and recognize objects may decrease significantly. Fog removal algorithms are therefore required to obtain important information accurately even in bad weather, and research has been conducted in various directions, including computer vision-based and data-based fog removal techniques. In these techniques, estimating the amount of fog from the input image's depth information is an important step. In this paper, a linear model is presented under the assumption that the image's dark channel prior, saturation × value, and sharpness characteristics are linearly related to depth information. The proposed linear-model-based haze removal method shows superior algorithm performance in quantitative numerical evaluation. A dark channel sketch follows this entry.
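
As one example of the haze-related features listed above, here is a minimal sketch of the classic dark channel computation (per-pixel channel minimum followed by a local minimum filter); the patch size is an illustrative assumption, and the paper's linear regression onto depth is not reproduced.

```python
# Minimal dark channel sketch: per-pixel minimum over the color channels,
# followed by a local minimum filter (erosion). Patch size is illustrative.
import cv2
import numpy as np

def dark_channel(bgr, patch=15):
    min_channel = np.min(bgr, axis=2)                     # darkest channel per pixel
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_channel, kernel)                 # local minimum filter
```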

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on disrupting obstacle detection by lowering the confidence score of the object recognition model, but they are limited in that the attack works only on the target model. An attack on the sensor fusion stage, in contrast, can cascade errors into the vision tasks performed after fusion, and this risk needs to be considered. In addition, because LiDAR point cloud data is difficult to judge visually, it is hard to determine whether it has been attacked. In this study, we propose an image scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. Attack performance experiments by scaling size show that the attack caused, on average, fusion errors of more than 77%. A minimal point-scaling sketch follows this entry.
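
The following is only an illustrative sketch of the general idea of perturbing the LiDAR input by scaling point coordinates before they reach a calibration model; it is not the paper's attack on LCCNet, and the scale factors shown are arbitrary assumptions.

```python
# Illustrative sketch only: scale the coordinates of an input LiDAR point cloud
# before it is fed to a camera-LiDAR calibration model. This is not the paper's
# attack on LCCNet; the scale factors are arbitrary assumptions.
import numpy as np

def scale_point_cloud(points_xyz, scale=(1.1, 1.1, 1.1)):
    """points_xyz: (N, 3) array of LiDAR points; returns a scaled copy."""
    pts = np.asarray(points_xyz, dtype=np.float32)
    return pts * np.asarray(scale, dtype=np.float32)   # per-axis scaling
```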

On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.1 / pp.101-110 / 2004
  • The robots that will be needed in the near future are human-friendly robots that can coexist with humans and support them effectively. To realize this, humans and robots need to be in close proximity to each other as much as possible, and their interactions need to occur naturally. It is desirable for a robot to carry out human following as one such human-affinitive movement. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used to achieve these goals. An intelligent space is a 3D environment in which many sensors and intelligent devices are distributed; mobile robots exist in this space as physical agents providing services to humans. A mobile robot is controlled to follow a walking human as stably and precisely as possible using the distributed intelligent sensors. The moving object is assumed to be a point object and is projected onto an image plane to form a geometric constraint equation that provides the object's position based on the kinematics of the intelligent space. Uncertainties in the position estimate caused by the point-object assumption are compensated for using a Kalman filter. To generate the shortest-time trajectory for following the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented. A minimal Kalman filter sketch follows this entry.
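
A minimal sketch of a constant-velocity Kalman filter that smooths noisy 2D position measurements of a walking person, the role the filter plays above; the state model, time step, and noise covariances are illustrative assumptions, not the paper's intelligent-space parameters.

```python
# Minimal constant-velocity Kalman filter for noisy 2D position measurements.
# dt and the noise covariances are illustrative assumptions.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=0.1, q=0.05, r=0.2):
        self.x = np.zeros(4)                       # state: [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)                     # process noise
        self.R = r * np.eye(2)                     # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with position measurement z = [px, py]
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                          # filtered position
```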

Efficient Deep Neural Network Architecture based on Semantic Segmentation for Paved Road Detection (효율적인 비정형 도로영역 인식을 위한 Semantic segmentation 기반 심층 신경망 구조)

  • Park, Sejin;Han, Jeong Hoon;Moon, Young Shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1437-1444 / 2020
  • With the development of computer vision systems, many advances have been made in surveillance, biometrics, medical imaging, and autonomous driving. In autonomous driving in particular, deep learning-based object detection is widely used, and paved road detection is a crucial problem. Unlike the targets of the ROI detection algorithms used in general object detection, the structure of a paved road in an image is heterogeneous, so ROI-based object recognition architectures are not applicable. In this paper, we propose a deep neural network architecture for atypical paved road detection using a semantic segmentation network. In addition, we introduce a multi-scale semantic segmentation network, an architecture specialized for paved road detection. We demonstrate that the proposed method significantly improves performance. A minimal multi-scale inference sketch follows this entry.
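
The following is a minimal sketch of the multi-scale idea in general form: run a segmentation model at several input scales and average the upsampled logits. The `model` callable, scale set, and averaging scheme are assumptions for illustration, not the paper's architecture.

```python
# Minimal multi-scale segmentation inference sketch: evaluate a generic
# segmentation model at several scales and average the upsampled logits.
# `model` and the scale set are illustrative assumptions.
import torch
import torch.nn.functional as F

def multi_scale_segment(model, image, scales=(0.5, 1.0, 1.5)):
    """image: (1, 3, H, W) tensor; returns per-pixel class predictions."""
    _, _, h, w = image.shape
    logits_sum = 0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        logits = model(scaled)                              # (1, C, h', w')
        logits_sum = logits_sum + F.interpolate(
            logits, size=(h, w), mode="bilinear", align_corners=False)
    return (logits_sum / len(scales)).argmax(dim=1)         # (1, H, W) class map
```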