• Title/Summary/Keyword: Fisheye Camera

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju; Ahn, Sung-Jin; Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.2148-2161, 2019
  • In this paper, we present a real-time cattle action recognition algorithm to detect the estrus phase of cattle from a live video stream. To classify cattle movement, and specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within a support vector machine framework, various representative cattle actions, such as mounting, walking, tail wagging, and foot stamping, can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in recognition accuracy, even with a much lower-dimensional feature description.
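
The core of the method is the motion history image, a per-pixel timestamp map that keeps a fading trace of recent motion. Below is a minimal sketch of that update step in Python with OpenCV, assuming frame differencing for the motion silhouette, an arbitrary decay duration, and a hypothetical input file; the paper's actual segmentation, feature design, and SVM training are not reproduced here.

```python
import cv2
import numpy as np

MHI_DURATION = 1.0  # seconds a motion trace persists (assumed value)

def update_mhi(mhi, silhouette, timestamp):
    """Standard MHI update: stamp moving pixels with the current time,
    clear pixels whose stamp has aged past the duration window."""
    mhi[silhouette > 0] = timestamp
    mhi[(silhouette == 0) & (mhi < timestamp - MHI_DURATION)] = 0
    return mhi

cap = cv2.VideoCapture("cattle.mp4")  # hypothetical fisheye video stream
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev_gray.shape, np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Motion silhouette from simple frame differencing (illustrative only).
    _, silhouette = cv2.threshold(cv2.absdiff(gray, prev_gray),
                                  30, 1, cv2.THRESH_BINARY)
    timestamp = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
    mhi = update_mhi(mhi, silhouette, timestamp)
    prev_gray = gray
    # A low-dimensional descriptor of `mhi` (e.g., region-wise statistics)
    # would then be classified with an SVM, as the abstract describes.
```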

Development of a 360° Omnidirectional IP Camera with a High Resolution of 12 Million Pixels

  • Lee, Hee-Yeol; Lee, Sun-Gu; Lee, Seung-Ho
    • Journal of IKEEE, v.21 no.3, pp.268-271, 2017
  • In this paper, we propose the development of a high-resolution 360° omnidirectional IP camera with 12 million pixels. The proposed camera consists of a lens unit with a 360° omnidirectional viewing angle and a 12-megapixel high-resolution IP camera unit. The lens unit adopts an isochronous lens design and a catadioptric facet fabrication method to obtain images free of the peripheral distortion that inevitably arises with a fisheye lens. The IP camera unit consists of a CMOS sensor and ISP unit, a DSP unit, and an I/O unit; it converts the image input to the camera into a digital image, performs distortion correction, image correction, and image compression, and then transmits the result to an NVR (Network Video Recorder). To evaluate the performance of the proposed camera, its 12.3-megapixel imaging performance, 360° omnidirectional lens angle of view, and compliance with electromagnetic certification standards were measured.

A Study on a Fire Monitoring System for Omnidirectional Surveillance and Video Tracking

  • Baek, Dong-hyun
    • Fire Science and Engineering, v.32 no.6, pp.40-45, 2018
  • The omnidirectional surveillance camera uses an object detection algorithm to label objects by unit so that wide-area surveillance can be performed with a fisheye lens; a field experiment was then conducted with a system composed of the omnidirectional surveillance camera and a tracking (PTZ) camera. The omnidirectional camera accurately detects a moving object, marks it with a square, and tracks it in close cooperation with the tracking camera. In the field test of flame detection and temperature sensing, when a flame was detected during an auto scan, the detection camera stopped and displayed the temperature after moving the corresponding spot to the center of the screen. Flames could be detected from a distance of 1.5 km, exceeding the reference of a 2,340 kcal calorific value at 1 km. In the test of flame detection over distance, a flame exceeding 56 cm × 90 cm could be detected at 1.5 km, beyond the 1 km reference, so the system is also suitable for forest fire monitoring. The system is expected to be very useful for safety purposes such as preventing internal or surrounding fires and monitoring intrusions if installed at petroleum gas storage facilities or oil depots.

Using Omnidirectional Images for Semi-Automatically Generating IndoorGML Data

  • Claridades, Alexis Richard; Lee, Jiyeong; Blanco, Ariel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.5, pp.319-333, 2018
  • As human beings spend more time indoors, and with the growing complexity of indoor spaces, more focus is given to indoor spatial applications and services. 3D topological networks are used for various spatial applications that involve indoor navigation, such as emergency evacuation, indoor positioning, and visualization. Manually generating indoor network data is impractical and prone to errors, yet current methods of automation need expensive sensors or datasets that are difficult and costly to obtain and process. In this research, a methodology for semi-automatically generating a 3D indoor topological model based on IndoorGML (Indoor Geographic Markup Language) is proposed. The concept of a Shooting Point is defined to accommodate the use of omnidirectional images in generating IndoorGML data. Omnidirectional images were captured at selected Shooting Points in the building using a fisheye camera lens and rotator, and indoor spaces were then identified using image processing implemented in Python. Relative positions of spaces obtained from CAD (Computer-Aided Design) drawings were used to generate 3D node-relation graphs representing adjacency, connectivity, and accessibility in the study area. Subspacing is performed to more accurately depict large indoor spaces and actual pedestrian movement. Since the images provide very realistic visualization, the topological relationships were used to link them to produce an indoor virtual tour.
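
As an illustration of the node-relation graph the paper generates, here is a hedged sketch in Python using networkx; the room names, coordinates, and relation labels are hypothetical stand-ins for spaces identified from the omnidirectional images and positioned from CAD.

```python
import networkx as nx

# Hypothetical indoor spaces (IndoorGML cells) with centroids from CAD:
# (x, y, floor). Real data would come from the image-processing step.
rooms = {
    "R101": (2.0, 3.5, 1),
    "R102": (6.0, 3.5, 1),
    "CORR1": (4.0, 1.0, 1),
}

G = nx.Graph()
for name, pos in rooms.items():
    G.add_node(name, pos=pos)

# Edges carry the topological relation: 'adjacency' for a shared wall,
# 'connectivity' for a door or opening that permits passage.
G.add_edge("R101", "CORR1", relation="connectivity")
G.add_edge("R102", "CORR1", relation="connectivity")
G.add_edge("R101", "R102", relation="adjacency")

# Navigation (e.g., evacuation routing) uses only traversable edges.
nav = G.edge_subgraph(
    [(u, v) for u, v, d in G.edges(data=True)
     if d["relation"] == "connectivity"]
)
print(nx.shortest_path(nav, "R101", "R102"))  # ['R101', 'CORR1', 'R102']
```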

A Study of Selecting Sequential Viewpoints and Examining the Effectiveness of Omnidirectional Image Information in Grasping Landscape Characteristics

  • Kim, Heung Man; Lee, In Hee
    • KIEAE Journal, v.9 no.2, pp.81-90, 2009
  • Concerning the grasp of sequential landscape characteristics in light of the behavioral characteristics of the subject experiencing visual perception, this study examined the main walking-route sections for visitors to the Three Jewels temples of Korean Buddhism. In particular, as a method of obtaining data for grasping the sequentially perceived landscape, the researchers employed momentary sequential viewpoint selection at arbitrarily set marker intervals, together with fisheye-lens camera photography yielding omnidirectional visual information. In terms of viewpoint selection, factors such as the form of the approach road, changes in the circulation axis, changes in ground surface level, and the appearance of objects were verified to have an effect; among these, the form of the approach road and changes in the circulation axis were the greatest influences. In addition, when effectiveness was reviewed with test subjects, qualitative evaluation of landscape components using the VR images obtained while acquiring the omnidirectional visual information yielded positive results above threshold values for panoramic vision, scene reproduction, and three-dimensional perspective. This supports the prospect of actively pursuing the qualitative evaluation of omnidirectional image information and landscape studies based on it.

A Hardware Design for Real-Time Correction of Barrel Distortion Using the Nearest Pixels on a Corrected Image

  • Song, Namhun; Yi, Joonhwan
    • Journal of the Korea Society of Computer and Information, v.17 no.12, pp.49-60, 2012
  • In this paper, we propose a hardware design for correcting barrel distortion using the nearest coordinates in the corrected image. Because it samples at the nearest coordinates in the corrected image rather than at adjacent coordinates in the distorted image, picture quality is improved over the whole image area and the staircase artifact in the peripheral area is eliminated. However, adding bilinear interpolation to the design increases the required computation. To address this, a look-up table (LUT) structure is proposed and the coordinate rotation digital computer (CORDIC) algorithm is applied. Synthesis results using Design Compiler show that the design implementing the entire interpolation process in hardware achieves higher throughput than the previous design, and, in the case of a rear-view camera, the design combining the LUT with hardware is smaller than the all-hardware design.
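
A software analogue of the design's key idea can clarify it: iterate over the pixels of the corrected output, map each one back into the distorted source with a radial model, and sample with bilinear interpolation. The sketch below assumes a one-parameter radial model and a hypothetical input file; the paper's hardware datapath, LUT, and CORDIC stages are not modeled.

```python
import cv2
import numpy as np

def build_undistort_maps(w, h, k=3e-7):
    """Inverse mapping: for every pixel of the *corrected* image, compute
    where to sample in the distorted image (one-parameter radial model,
    assumed; a real lens needs calibrated coefficients)."""
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k * r2          # barrel distortion: sample farther out
    return cx + dx * scale, cy + dy * scale

img = cv2.imread("distorted.jpg")  # hypothetical rear-camera frame
h, w = img.shape[:2]
map_x, map_y = build_undistort_maps(w, h)
# Bilinear interpolation over the nearest source pixels, the same
# interpolation the paper implements in hardware; in the paper these
# maps would be precomputed into the LUT.
corrected = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```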

A Study on an Effective Stitching Technique for 360° Camera Images

  • Lee, Lang-Goo; Chung, Jean-Hun
    • Journal of Digital Convergence, v.16 no.2, pp.335-341, 2018
  • This study investigates an effective stitching technique for video recorded with a dual-lens 360° camera composed of two fisheye lenses. First, problems were identified in the stitching results of the camera's bundled program. The study then sought a more efficient stitching technique, closer to perfect, by comparatively analyzing the stitching results of Autopano Video Pro and Autopano Giga, professional stitching programs. The problems of the bundled program turned out to be horizontal and vertical distortion, exposure and color mismatch, and an unsmooth stitching line. The horizontal and vertical problems could be solved with the Automatic Horizon and Verticals Tools of Autopano Video Pro and Autopano Giga, the exposure and color problems with the Levels, Color, and Edit Color Anchors functions, and the stitching-line problem with the Mask function. Based on this study, it is hoped that 360° VR video content closer to perfect can be produced through efficient stitching of video recorded with dual-lens 360° cameras in the future.
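
For readers without the commercial tools, OpenCV exposes the same stitching stages (alignment, exposure compensation, seam finding) programmatically. The sketch below is a generic panorama example with hypothetical input files, not a reproduction of the dual-fisheye workflow, which additionally requires projecting each fisheye image onto a sphere.

```python
import cv2

# Hypothetical perspective views derived from the two fisheye streams.
images = [cv2.imread(p) for p in ("left.jpg", "right.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    # OpenCV's pipeline performs exposure compensation and seam finding,
    # addressing the same color-mismatch and stitching-line problems the
    # study fixes with Autopano's Levels and Mask tools.
    cv2.imwrite("pano.jpg", pano)
else:
    print("stitching failed with status", status)
```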

Mobile Robot Localization and Mapping using Scale-Invariant Features

  • Lee, Jong-Shill; Shen, Dong-Fan; Kwon, Oh-Sang; Lee, Eung-Hyuk; Hong, Seung-Hong
    • Journal of IKEEE, v.9 no.1 s.16, pp.7-18, 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately while simultaneously building a map of its environment. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level features with scale invariance; these features are used in the map-building and localization processes. As pre-processing, input images from the fisheye lens are calibrated to remove radial distortion, and then labeling and convex-hull techniques are used to segment the ceiling region from the wall region. In the initial map-building process, features are calculated for the segmented regions and stored in the map database. Features are then continuously calculated from sequential input images and matched against the existing map until map building is finished; features that do not match are added to the map. Localization is performed simultaneously with feature matching during map building: when features match the existing map, the robot's position is estimated and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes. The positioning accuracy is ±13 cm, and the average error in the robot's heading is ±3°.
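
A minimal sketch of the front end of such a pipeline, assuming OpenCV's fisheye model with made-up calibration values and SIFT as the scale-invariant feature (the 2005 paper predates today's APIs, so this is an illustration, not the authors' implementation):

```python
import cv2
import numpy as np

# Hypothetical fisheye intrinsics; real values come from offline calibration.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, -0.05, 0.001, 0.0])  # fisheye distortion coefficients

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

frames = []  # replace with ceiling-camera frames (BGR images)
prev_des = None
for frame in frames:
    # Remove radial distortion before feature extraction, as in the paper.
    undist = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)
    kp, des = sift.detectAndCompute(gray, None)
    if prev_des is not None and des is not None:
        # Lowe's ratio test keeps distinctive matches; matched landmarks
        # update the map, unmatched ones are added as new map entries.
        matches = matcher.knnMatch(des, prev_des, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    prev_des = des
```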

Fast Light Source Estimation Technique for Effective Synthesis of Mixed Reality Scene (효과적인 혼합현실 장면 생성을 위한 고속의 광원 추정 기법)

  • Shin, Seungmi; Seo, Woong; Ihm, Insung
    • Journal of the Korea Computer Graphics Society, v.22 no.3, pp.89-99, 2016
  • One of the fundamental elements in developing mixed reality applications is to effectively analyze the environmental lighting information and apply it to image synthesis. In particular, interactive applications require dynamically varying light sources to be processed in real time and reflected properly in rendering results. Previous related works are often not appropriate for this because they are usually designed to synthesize photorealistic images, generating too many, often exponentially increasing, light sources, or having too heavy a computational complexity. In this paper, we present a fast light source estimation technique that searches on the fly for the primary light sources in a sequence of video images taken by a camera equipped with a fisheye lens. In contrast to previous methods, our technique can adjust the number of detected light sources approximately to a size that the user specifies. Thus, it can be used effectively in Phong-illumination-model-based direct illumination or in soft shadow generation through light sampling over area lights.
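
The abstract's key property, a light count that tracks a user-specified size, can be illustrated by clustering the brightest environment-map pixels with k-means. This is a hedged sketch, not the paper's algorithm; the threshold, k value, and weighting scheme are assumptions.

```python
import cv2
import numpy as np

def estimate_lights(env_map, k=4, percentile=99.0):
    """Merge the brightest pixels of a fisheye environment map into k
    representative point lights (k-means stands in for the paper's
    light-merging step; threshold and k are assumed)."""
    gray = cv2.cvtColor(env_map, cv2.COLOR_BGR2GRAY).astype(np.float32)
    ys, xs = np.nonzero(gray >= np.percentile(gray, percentile))
    pts = np.column_stack([xs, ys]).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pts, k, None, criteria,
                                    5, cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()
    # Weight each light by the summed intensity of its cluster, so a
    # renderer can scale each source's contribution accordingly.
    weights = [float(gray[ys[labels == i], xs[labels == i]].sum())
               for i in range(k)]
    return centers, weights  # pixel positions and strengths of k lights
```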

Monitoring canopy phenology in a deciduous broadleaf forest using the Phenological Eyes Network (PEN)

  • Choi, Jeong-Pil; Kang, Sin-Kyu; Choi, Gwang-Yong; Nasahara, Kenlo Nishida; Motohka, Takeshi; Lim, Jong-Hwan
    • Journal of Ecology and Environment, v.34 no.2, pp.149-156, 2011
  • Phenological variables derived from remote sensing are useful in determining the seasonal cycles of ecosystems in a changing climate. Satellite remote sensing imagery is useful for spatially continuous monitoring of vegetation phenology across broad regions; however, its applications are substantially constrained by atmospheric disturbances such as clouds, dust, and aerosols. By contrast, a tower-based ground remote sensing approach at the canopy level can provide continuous information on canopy phenology at finer spatial and temporal scales, regardless of atmospheric conditions. In this study, a tower-based ground remote sensing system called the Phenological Eyes Network (PEN), installed at the Gwangneung Deciduous KoFlux (GDK) flux tower site in Korea, is introduced, and daily phenological progressions at the canopy level were assessed using ratios of red, green, and blue (RGB) spectral reflectances obtained by the PEN system. The PEN system at the GDK site consists of an automatic-capturing digital fisheye camera and a hemispherical spectroradiometer, and monitors stand canopy phenology on an hourly basis. RGB data analyses conducted between late March and early December 2009 revealed that the 2G_RB (i.e., 2G - R - B) index was lower than the G/R (i.e., G divided by R) index during the off-growing season, owing to surface reflectance effects, including soil and snow. Comparisons between the daily PEN-obtained RGB ratios and daily moderate-resolution imaging spectroradiometer (MODIS)-derived vegetation indices demonstrate that ground remote sensing data, including PEN data, can help to improve cloud-contaminated satellite remote sensing imagery.
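
The two greenness indices compared in the study are simple functions of the mean red, green, and blue digital numbers of each canopy photo. A minimal sketch follows, assuming whole-image means, whereas the study may restrict computation to a canopy region of interest.

```python
import cv2

def canopy_indices(image_path):
    """Compute the study's 2G_RB (2G - R - B) and G/R indices from the
    mean RGB digital numbers of a canopy photograph."""
    img = cv2.imread(image_path).astype(float)
    b, g, r = (img[..., c].mean() for c in range(3))  # OpenCV orders BGR
    return 2 * g - r - b, g / r  # (2G_RB, G/R)
```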