• Title/Summary/Keyword: Surveillance Camera System (감시카메라 시스템)

DEVELOPMENT OF A LYMAN-α IMAGING SOLAR TELESCOPE FOR THE SATELLITE (인공위성 탑재용 자외선 태양카메라(LIST) 개발)

  • Jang, M.;Oh, H.S.;Rim, C.S.;Park, J.S.;Kim, J.S.;Son, D.;Lee, H.S.;Kim, S.J.;Lee, D.H.;Kim, S.S.;Kim, K.H.
    • Journal of Astronomy and Space Sciences / v.22 no.3 / pp.329-352 / 2005
  • Long-term observations of full-disk Lyman-α irradiance have been made by instruments on various satellites. In addition, several sounding rockets, dating back to the 1950s and continuing to the present, have measured the Lyman-α irradiance. Previous full-disk Lyman-α images of the sun have been scientifically interesting and useful, but have been only five-minute 'snapshots' obtained on sounding-rocket flights. All of these observations to date have lacked the time resolution needed to observe changes in the chromospheric structure caused by the evolving magnetic field, and its effect on the Lyman-α intensity. The Lyman-α Imaging Solar Telescope (LIST) can, for the first time, provide a unique opportunity to study the sun in the Lyman-α region with high time and spatial resolution. Through the second year of development, the preliminary design of the optics, mechanical structure, and electronics system has been completed. Mechanical structure and thermal analyses were performed, and the structural material was chosen based on their results. The test plan and verification matrix were also fixed. Both the technical and scientific operation schemes were studied and finalized: the technical operation, the mechanical working modes for observation and safety, the scientific operation, and the processing of the acquired data. The basic techniques acquired through the development of a satellite-based solar telescope are essential for constructing a space-environment forecast system in the future. The mechanical, optical, and data-processing techniques developed through this study could be applied extensively, not only to the production of future flight models of this kind but also to related industries. The scientific achievements obtained throughout the project can also be utilized to build high-resolution photometric detectors for military and commercial purposes, and it is believed that several of the acquired techniques will be applicable to future Korean satellite projects.

A Study on the Improvement of an Aquaculture Security System to Ensure Lawful Evidence of Theft (도적행위의 법적증거확보를 위한 양식장 보안 시스템 개선에 관한 연구)

  • Yim, Jeong-Bin;Nam, Taek-Keun
    • Journal of the Korean Society of Marine Environment & Safety / v.13 no.4 / pp.55-63 / 2007
  • The Group Digital Surveillance System for Fishery Safety and Security (GDSS-F2S) provides target tracking and target identification information in order to secure a huge aquaculture farm field from thieves. These two kinds of information, however, are not enough to indict a thief, because they lack legal evidence of the criminal actions. To overcome this problem, we consider target image information as one solution, after discussing effective countermeasures against criminal actions with a scenario-based analysis according to the geographical features of the aquaculture farm field. To capture real-time images of targets trespassing in the aquaculture farm-field area, we developed an image capture system consisting of an ultra-sensitive 0.0001-lux CCD (Charge-Coupled Device) camera and supplementary devices. Field tests of GDSS-F2S with the image capture system show that highly defined images of vehicle number plates and shapes, and of persons' actions and features, are obtainable not only in the daytime but also on very dark, moonless nights. Thus it is clear that the improved GDSS-F2S with the image capture system can provide sufficient legal evidence of the targets' criminal actions.

Real Time Face Detection in Video Using Progressive Thresholding (순차 임계 설정법을 이용한 비디오에서의 실시간 얼굴검출)

  • Ye Soo-Young;Lee Seon-Bong;Kum Dae-Hyun;Kim Hyo-Sung;Nam Ki-Gon
    • Journal of the Institute of Convergence Signal Processing / v.7 no.3 / pp.95-101 / 2006
  • Face detection plays an important role in face recognition, video surveillance, and human-computer interaction. In this paper, we propose a progressive threshold method to detect human faces in real time. Consecutive face images are acquired from a camera and transformed into the YCbCr color space. Skin color in the input images is separated using a skin-color filter in the YCbCr color space, and candidate face areas are decided by connected-component analysis. Intensity equalization is performed to reduce the effect of varying conditions, and a threshold value is applied to obtain binary images. The eye area can be detected because it is clearly distinguished from other regions in the binary image. The progressive threshold method searches for an optimal eye area by progressively increasing the threshold from low values. After progressive thresholding, the eye area is normalized and verified by a back-propagation algorithm to finalize the face detection.
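
The pipeline above (YCbCr skin filtering, then progressive thresholding to expose the eye region) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the Cb/Cr skin box and the threshold schedule are commonly cited example values, and the function names are assumptions.

```python
import numpy as np

def rgb_to_ycbcr(img):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion.
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    # Keep pixels whose Cb/Cr fall in a commonly cited skin-tone box.
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def progressive_threshold(gray, start=10, step=10, min_pixels=20):
    # Raise the threshold from a low value until a dark region large
    # enough to be an eye candidate appears in the binary image.
    for t in range(start, 256, step):
        binary = gray < t
        if binary.sum() >= min_pixels:
            return t, binary
    return 255, gray < 255
```

On an equalized face candidate the eyes are among the darkest structures, so the first threshold that yields a sufficiently large dark region localizes them.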

PID Controlled UAV Monitoring System for Fire-Event Detection (PID 제어 UAV를 이용한 발화 감지 시스템의 구현)

  • Choi, Jeong-Wook;Kim, Bo-Seong;Yu, Je-Min;Choi, Ji-Hoon;Lee, Seung-Dae
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.1 / pp.1-8 / 2020
  • If a dangerous situation arises in a place beyond human reach, UAVs can be used to determine its size and location and thereby reduce further damage. With this in mind, this paper tunes the proportional, integral, and derivative (PID) values of roll, pitch, and yaw using Betaflight so that the UAV stays level and hovers smoothly, minimizing errors for safe flight. For vision, OpenCV is installed on a Raspberry Pi, and an HSV (hue, saturation, value) color-palette filter renders the image black and white except for red, the color closest to fire, so that the UAV can detect fire in aerial images in real time. Finally, it was confirmed that hovering was possible at heights of 0.5 to 5 m, and red-color recognition was possible at distances from 5 cm to 5 m.
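
The HSV red-color filter described above can be sketched as below. This is a hedged stand-in, not the paper's code: a pure-Python `colorsys` loop replaces the OpenCV HSV range filter, and the hue band and saturation/brightness floors are illustrative assumptions.

```python
import colorsys
import numpy as np

def red_mask(img, hue_band=10 / 360, sat_min=0.5, val_min=0.3):
    # Flag pixels whose hue lies near 0 degrees (red) with enough
    # saturation and brightness -- everything else is treated as
    # "black and white" background, as in the abstract.
    h, w, _ = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            r, g, b = img[i, j] / 255.0
            hue, sat, val = colorsys.rgb_to_hsv(r, g, b)
            near_red = hue <= hue_band or hue >= 1.0 - hue_band
            mask[i, j] = near_red and sat >= sat_min and val >= val_min
    return mask
```

The hue wrap-around check (`hue >= 1.0 - hue_band`) matters because red straddles 0° in HSV; an OpenCV implementation would need two `inRange` calls for the same reason.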

Using Skeleton Vector Information and RNN Learning Behavior Recognition Algorithm (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering / v.23 no.5 / pp.598-605 / 2018
  • Behavior recognition is a technology that recognizes human behavior from data and can be used in applications such as detecting risky behavior through video surveillance systems. Conventional behavior recognition algorithms have relied on 2D camera images, multi-modal sensors, multi-view setups, or 3D equipment. When two-dimensional data were used, the recognition rate for behavior in three-dimensional space was low, and the other methods suffered from complicated equipment configurations and expensive additional hardware. In this paper, we propose a method of recognizing human behavior using only CCTV RGB images, without depth information or additional equipment. First, a skeleton extraction algorithm is applied to extract the points of joints and body parts. These points are transformed into vectors, including displacement vectors and relational vectors, and the continuous vector data are trained with an RNN model. Applying the learned model to various data sets and measuring the accuracy of behavior recognition confirms that performance similar to that of existing algorithms using 3D information can be achieved with 2D information alone.
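
The displacement and relational vectors mentioned above can be illustrated with a minimal sketch. The exact feature layout, the anchor-joint choice, and the function names are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def displacement_vectors(frames):
    # frames: (T, J, 2) joint coordinates over T frames.
    # The displacement vector is each joint's motion between
    # consecutive frames, capturing temporal dynamics.
    return np.diff(frames, axis=0)

def relational_vectors(frame, anchor=0):
    # The relational vector is each joint's position relative to an
    # anchor joint (e.g. the neck), giving translation invariance.
    return frame - frame[anchor]

def feature_sequence(frames, anchor=0):
    # Concatenate both vector types per frame into the flat feature
    # sequence that an RNN would consume, one vector per time step.
    disp = displacement_vectors(frames)
    rel = np.stack([relational_vectors(f, anchor) for f in frames[1:]])
    return np.concatenate([disp, rel], axis=-1).reshape(len(frames) - 1, -1)
```

Each time step yields a fixed-length vector (4 numbers per joint here), which is the shape of input a recurrent model expects.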

Design and Implementation of Surveillance and Combat Robot Using Smart Phone (스마트폰을 이용한 정찰 및 전투 로봇의 설계와 구현)

  • Kim, Do-Hyun;Park, Young-Sik;Kwon, Sung-Gab;Yang, Yeong-Yil
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.93-98 / 2011
  • In this paper, we propose a surveillance and combat robot framework for remote monitoring and robot control on a smart phone, implemented with the fusion technology called RITS (Robot Technology & Information Technology System). In our system, the camera phone mounted on the robot generates signals to control the robot and sends images to the operator's smart phone, so the operator can monitor the robot's surroundings from the phone. Besides controlling the robot's movement, we can fire the weapons mounted on the robot by sending a fire command. Experimental results show that it is possible to control the robot, monitor its surroundings, and fire the weapons in real time wherever 3G (third-generation) mobile communication is available. In addition, we controlled the robot using 2G mobile communication or a wired phone when the robot was within visual range.

Non-Dyadic Lens Distortion Correction and Image Enhancement Based on Local Self-Similarity (자기 예제 참조기반 단계적 어안렌즈 영상보정을 통한 주변부 열화 제거)

  • Park, Jinho;Kim, Donggyun;Kim, Daehee;Kim, Chulhyun;Paik, Joonki
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.10 / pp.147-153 / 2014
  • In this paper, we present a non-dyadic lens distortion correction model and an image restoration method based on local self-similarity to remove jagging and blurring artifacts in the peripheral region of the geometrically corrected image. The proposed method can be applied in various areas, including vehicle rear-view cameras, visual surveillance systems, and medical imaging systems.

Subject Region-Based Auto-Focusing Algorithm Using Noise Robust Focus Measure (잡음에 강인한 초점 값을 이용한 피사체 중심의 자동초점 알고리듬)

  • Jeon, Jae-Hwan;Yoon, In-Hye;Lee, Jin-Hee;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.80-87 / 2011
  • In this paper, we present a subject-region-based auto-focusing algorithm using a noise-robust focus measure. The proposed algorithm automatically estimates the main subject using entropy and solves the traditional problems caused by the subject's position or by high-frequency components in the background. We also propose a new focus measure based on analysis of discrete cosine transform coefficients. Experimental results show that the proposed method is more robust to Gaussian and impulse noise than traditional methods. The proposed algorithm can be applied to pan-tilt-zoom (PTZ) cameras in intelligent video surveillance systems.
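
A DCT-based focus measure of the general kind the abstract analyzes can be sketched as follows. The AC/DC energy ratio used here is a simple, commonly used stand-in, not the paper's actual coefficient analysis.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k = np.arange(n)[:, None].astype(float)
    i = np.arange(n)[None, :].astype(float)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def dct_focus_measure(block, eps=1e-12):
    # Focus measure: ratio of AC energy to DC energy in the 2-D DCT.
    # Sharp, in-focus content concentrates energy in AC coefficients;
    # defocus blur suppresses them, lowering the ratio.
    c = dct_matrix(block.shape[0]) @ block @ dct_matrix(block.shape[1]).T
    dc = c[0, 0] ** 2
    ac = (c ** 2).sum() - dc
    return ac / (dc + eps)
```

Sweeping the lens position and picking the block (or subject region) position that maximizes such a measure is the standard contrast-based auto-focus loop.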

Space Environment Test of the Flight Model of MIRIS, the Main Payload of STSAT-3 (과학기술위성 3호 주탑재체 MIRIS의 비행모델 우주환경시험)

  • Mun, Bong-Gon;Park, Yeong-Sik;Park, Gwi-Jong;Lee, Deok-Haeng;Lee, Dae-Hui;Jeong, Ung-Seop;Nam, Uk-Won;Park, Won-Gi;Kim, Il-Jung;Cha, Won-Ho;Sin, Gu-Hwan;Lee, Sang-Hyeon;Seo, Jeong-Gi;Park, Jong-O;Lee, Seung-U;Han, Won-Yong
    • The Bulletin of The Korean Astronomical Society / v.37 no.2 / pp.205.1-205.1 / 2012
  • MIRIS (Multipurpose InfraRed Imaging System), the main payload of STSAT-3 to be launched by the Russian Dnepr launch vehicle, was developed under the leadership of the Korea Astronomy and Space Science Institute. Its component camera EOC (Earth Observation Camera) performs disaster monitoring over the Korean peninsula, while the SOC (Space Observation Camera) conducts a near-infrared survey of the Galactic plane to produce a 360° × 6° Paschen-α emission-line map and observes the infrared cosmic background radiation toward the north and south ecliptic poles using I and H band filters. The MIRIS flight model has been fabricated, and final space environment tests were performed on its components: the SOC, the EOC, and the electronics box. The flight-model space environment tests for STSAT-3 consist of vibration and thermal-vacuum tests, conducted at the acceptance level specified in the governing documents. The shock test was verified with the engineering qualification model. The thermal-vacuum test was carried out at the Korea Astronomy and Space Science Institute, and the vibration test at the Satellite Technology Research Center of KAIST. After the whole satellite was assembled, the thermal-vacuum test of STSAT-3 was performed at the Korea Aerospace Research Institute. This presentation reports the environmental test procedure and results for the MIRIS flight model, and also discusses the MIRIS vibration and thermal-vacuum test results after full satellite assembly.

Performance Analysis of Face Recognition by Face Image Resolution Using CNN without Backpropagation and LDA (역전파가 제거된 CNN과 LDA를 이용한 얼굴 영상 해상도별 얼굴 인식률 분석)

  • Moon, Hae-Min;Park, Jin-Won;Pan, Sung Bum
    • Smart Media Journal / v.5 no.1 / pp.24-29 / 2016
  • To satisfy the needs of a high-level intelligent surveillance system, it must be able to extract objects and classify them in order to identify precise information about each object. The representative method of identifying a person is face recognition, whose recognition rate changes with environmental factors such as illumination, background, and camera angle. In this paper, we analyze the robustness of face recognition as the camera-to-subject distance changes, through a variety of experiments. The experiments used real face images captured at distances of 1 m to 5 m. Face recognition based on Linear Discriminant Analysis (LDA) shows the best performance, an average of 75.4%, when a large number of face images per person is used for training. However, face recognition based on a Convolutional Neural Network (CNN) performs best, an average of 69.8%, when the number of face images per person is fewer than five. In addition, the recognition rate decreases rapidly when the face image is smaller than 15×15 pixels.
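
The LDA side of the comparison can be illustrated with a minimal two-class Fisher discriminant. This is a textbook sketch in numpy, not the paper's multi-class, image-based training setup; the function names and the regularization constant are assumptions.

```python
import numpy as np

def fisher_lda_direction(class_a, class_b, reg=1e-6):
    # Two-class Fisher LDA: w = Sw^-1 (mu_a - mu_b), the projection
    # direction that maximizes between-class separation relative to
    # within-class scatter.
    mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
    sw = (np.cov(class_a.T, bias=True) * len(class_a) +
          np.cov(class_b.T, bias=True) * len(class_b))
    sw += reg * np.eye(sw.shape[0])          # regularize for stability
    w = np.linalg.solve(sw, mu_a - mu_b)
    return w / np.linalg.norm(w)

def classify(x, w, mu_a, mu_b):
    # Assign x to the class whose projected mean is closer.
    p = x @ w
    return 'a' if abs(p - mu_a @ w) <= abs(p - mu_b @ w) else 'b'
```

For face images, each sample would be a flattened pixel vector; the within-class scatter term is exactly what needs many images per person to estimate well, consistent with LDA's advantage at larger training set sizes in the abstract.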