• Title/Summary/Keyword: Image Sensor Module


Epipolar Resampling Module for CAS500 Satellites 3D Stereo Data Processing (국토위성 3차원 데이터 생성을 위한 입체 기하 영상 생성 모듈 제작 및 테스트)

  • Oh, Jaehong;Lee, Changno
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_2
    • /
    • pp.939-948
    • /
    • 2020
  • CAS500-1 and CAS500-2 are high-resolution Earth-observation satellites being developed and scheduled for launch for land monitoring of Korea. The satellite data will be used for land-use analysis, change detection, 3D topographic monitoring, and so on. To generate 3D information, satellite images of a region of interest must be acquired in stereo mode from different positions. Accurate 3D processing and 3D display of stereo satellite data require epipolar image resampling that accounts for the pushbroom sensor geometry and the satellite trajectory. This study developed an epipolar image resampling module for CAS500 stereo data processing and verified its accuracy by testing along-track, across-track, and heterogeneous stereo data.
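
For readers unfamiliar with epipolar resampling of pushbroom imagery: unlike frame cameras, a pushbroom sensor has no straight epipolar lines, so the epipolar curve is commonly approximated by projecting an image point to the ground at several heights and reprojecting it into the other image. The sketch below illustrates this project-and-reproject idea; the sensor-model callables and their signatures are assumptions for illustration, not the actual API of the CAS500 module.

```python
# A minimal sketch of project-and-reproject epipolar-curve approximation for
# pushbroom stereo pairs. The two sensor-model callables are assumptions: any
# inverse/forward model (e.g. RPC-based) with these signatures would work.
import numpy as np

def epipolar_segment(img_to_ground_left, ground_to_img_right, line, samp,
                     h_min, h_max, n=10):
    """Approximate the right-image epipolar curve of a left-image point (line, samp)
    by projecting it to the ground at n heights between h_min and h_max and
    reprojecting each ground point into the right image.

    img_to_ground_left(line, samp, h) -> (lat, lon)    # inverse sensor model
    ground_to_img_right(lat, lon, h)  -> (line, samp)  # forward sensor model
    """
    points = []
    for h in np.linspace(h_min, h_max, n):
        lat, lon = img_to_ground_left(line, samp, h)
        points.append(ground_to_img_right(lat, lon, h))
    return np.asarray(points)  # nearly linear for well-behaved pushbroom geometry
```

Resampling then warps both images so that these nearly linear curves become conjugate rows, which is what makes row-wise matching and comfortable stereo display possible.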

Development of Image-map Generation and Visualization System Based on UAV for Real-time Disaster Monitoring (실시간 재난 모니터링을 위한 무인항공기 기반 지도생성 및 가시화 시스템 구축)

  • Cheon, Jangwoo;Choi, Kyoungah;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.2_2
    • /
    • pp.407-418
    • /
    • 2018
  • The frequency and risk of disasters are increasing due to environmental and social factors. To respond effectively to disasters that occur unexpectedly, it is very important to quickly obtain up-to-date information about the target area. An image-map generated at high speed allows the situation in the area to be judged intuitively, so a disaster can be handled quickly and effectively. In this study, we propose an image-map generation and visualization system based on UAV images for real-time disaster monitoring. The proposed system consists of an aerial segment and a ground segment. In the aerial segment, the UAV system acquires sensory data from a digital camera and a GPS/IMU sensor, and a communication module transmits them to the ground server in real time. In the ground segment, the transmitted sensor data are processed to generate image-maps, which are visualized on a geo-portal. We conducted an experiment to check the accuracy of the image-maps produced by the system, using check points obtained through a ground survey in the data-acquisition area. When the differences between adjacent image-maps were calculated, the relative accuracy was 1.58 m. We also confirmed the absolute accuracy of positions measured from individual image-maps: the image-map matched the existing map with an absolute accuracy of 0.75 m. Finally, we measured the processing time of each step up to visualization; when the image-map was generated with a GSD of 10 cm, it took 1.67 seconds to visualize. The proposed system is expected to be applicable to real-time monitoring for disaster response.
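
Rapid image-map generation of this kind typically relies on direct georeferencing: each image is projected onto the ground using the GPS/IMU exterior orientation rather than time-consuming aerial triangulation. Below is a minimal sketch of the idea, assuming a flat ground plane and a simple roll-pitch-yaw rotation convention; both are assumptions, since the paper's actual processing chain is not given in the abstract.

```python
# A minimal sketch of direct georeferencing onto a flat ground plane, assuming a
# simple roll-pitch-yaw rotation convention (illustrative, not the paper's chain).
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Camera-to-world rotation built from IMU angles in radians (Rz * Ry * Rx)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def pixel_to_ground(xy_mm, focal_mm, cam_pos, roll, pitch, yaw, ground_z=0.0):
    """Intersect the viewing ray of an image point (mm from the principal point)
    with a horizontal ground plane at height ground_z (collinearity condition)."""
    ray_cam = np.array([xy_mm[0], xy_mm[1], -focal_mm])    # ray in camera frame
    ray_world = rotation_matrix(roll, pitch, yaw) @ ray_cam
    s = (ground_z - cam_pos[2]) / ray_world[2]              # scale to reach the plane
    return cam_pos + s * ray_world

# e.g. project one image corner from 150 m flight height (illustrative numbers)
print(pixel_to_ground((-11.9, 8.9), 16.0, np.array([0.0, 0.0, 150.0]), 0.02, -0.01, 1.2))
```

Projecting the four image corners this way yields the footprint used to place each rectified image on the geo-portal; the 1.58 m relative and 0.75 m absolute accuracies quoted above are then checked against surveyed check points.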

Camera Imaging Lens Fabrication using Wafer-Scale UV Embossing Process

  • Jeong, Ho-Seop;Kim, Sung-Hwa;Shin, Dong-Ik;Lee, Seok-Cheon;Jin, Young-Su;Noh, Jung-Eun;Oh, Hye-Ran;Lee, Ki-Un;Song, Seok-Ho;Park, Woo-Je
    • Journal of the Optical Society of Korea
    • /
    • v.10 no.3
    • /
    • pp.124-129
    • /
    • 2006
  • We have developed a compact and cost-effective camera module based on wafer-scale replica processing. A multiple-layered structure of several aspheric lenses for a mobile-phone camera module is first assembled by bonding multiple glass wafers on which 2-dimensional replica arrays of identical aspheric lenses are UV-embossed, followed by dicing the stacked wafers and packaging them with image-sensor chips. This wafer-scale processing leads to at least 95% yield in mass production and potentially to a very slim phone with a camera module less than 2 mm thick. We have demonstrated a VGA camera module fabricated by wafer-scale replica processing with various UV-curable polymers having refractive indices between 1.4 and 1.6, and with three different glass wafers whose surfaces are both embossed as aspheric lenses having a 230 μm sag height and aspheric coefficients of the lens polynomials up to tenth order. We have found that precise compensation for the shrinkage of the polymer materials is one of the most critical technical challenges in achieving higher resolution in wafer-scale lenses for mobile-phone camera modules.
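
The aspheric surfaces mentioned above are normally described by the even-asphere sag equation, a conic base term plus polynomial terms up to, here, tenth order. The sketch below evaluates that equation; the numeric prescription is purely illustrative (chosen only so the maximum sag lands near the quoted ~230 μm), not the paper's actual lens data.

```python
# A sketch of the even-asphere sag equation z(r); the prescription below is
# illustrative only, not the paper's actual lens data.
import numpy as np

def aspheric_sag(r, c, k, coeffs):
    """Even-asphere sag: conic base term plus even polynomial terms.
    r      : radial coordinate (mm)
    c      : vertex curvature, 1/mm
    k      : conic constant
    coeffs : {order: coefficient}, e.g. 4th- to 10th-order terms
    """
    r = np.asarray(r, dtype=float)
    z = c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    for order, a in coeffs.items():
        z += a * r**order
    return z

r = np.linspace(0.0, 0.45, 200)   # semi-aperture in mm (illustrative)
z = aspheric_sag(r, c=1 / 0.43, k=-1.2, coeffs={4: 0.1, 6: -0.05, 8: 0.01, 10: 0.0})
print(f"max sag: {z.max() * 1000:.0f} um")   # ~228 um for these illustrative values
```

Because the UV-cured polymer shrinks, the master profile must be pre-compensated so that the replicated surface still satisfies the target z(r) after curing, which is the shrinkage-compensation challenge the authors highlight.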

Design of Miniaturized Telemetry Module for Bi-Directional Wireless Endoscopy

  • Park, H. J.;Nam, H. W.;Song, B. S.;Cho, J. H.
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.494-496
    • /
    • 2002
  • A bi-directional, multi-channel wireless telemetry capsule, 11 mm in diameter, is presented that can transmit video images from inside the human body and receive a control signal from an external control unit. The proposed telemetry capsule includes transmitting and receiving antennas, a demodulator, a decoder, four LEDs, and a CMOS image sensor, along with their driving circuits. The receiver demodulates the signal radiated from the external control unit. The decoder then receives the stream of control signals and interprets five of the binary digits as an address code; the remaining signal is interpreted as four bits of binary data. Consequently, the proposed telemetry module can demodulate external signals to control the behavior of the camera and the four LEDs during the transmission of video images. The proposed telemetry capsule can simultaneously transmit a video signal and receive a control signal determining its own behavior. As a result, the total power consumption of the telemetry capsule can be reduced by turning off the camera power during dead time and separately controlling the LEDs for proper illumination of the intestine.
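
The control path described above amounts to splitting a short received bit stream into a 5-bit address and a 4-bit command. A minimal sketch of that split follows; the bit order, framing, and example values are assumptions, since the abstract does not specify the encoding details.

```python
# A minimal sketch of the 5-bit address / 4-bit data split described above; bit
# order, framing, and the example values are assumptions.
def decode_control_packet(bits):
    """Split a 9-bit control stream into (address, command)."""
    if len(bits) != 9 or any(b not in (0, 1) for b in bits):
        raise ValueError("expected 9 binary digits")
    address = int("".join(map(str, bits[:5])), 2)   # first five bits: capsule address
    command = int("".join(map(str, bits[5:])), 2)   # remaining four bits: command word
    return address, command

# e.g. gate the camera power or switch individual LEDs according to the command
addr, cmd = decode_control_packet([1, 0, 1, 1, 0, 0, 1, 0, 1])
print(addr, cmd)   # -> 22 5
```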


Design of Measurement Algorithms in the Smart CamRuler (스마트 CamRuler 계측 알고리즘 설계)

  • Oh, Sun-Jin
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.4
    • /
    • pp.149-156
    • /
    • 2013
  • With the rapid growth of smartphone technology, various applications are being actively developed and distributed. In particular, interesting applications that use the camera module of a smartphone continue to appear, and mobile users can enjoy a variety of useful mobile services in everyday life. In this paper, we design and implement measurement algorithms that precisely measure an object photographed by the smartphone camera module. The 3-axis gyro/accelerometer sensor in the smartphone is used to obtain the distance, inclination, and rotation angle in real time when the object is photographed, so that its precise size can be derived from the picture image. The measurement algorithms proposed in this paper are analyzed and evaluated through a simulation study.
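
The underlying measurement reduces to the pinhole relation between image size, focal length, and object distance, with a correction for camera tilt obtained from the gyro/accelerometer. Below is a minimal sketch under those assumptions; the simple cosine tilt correction and the numbers are illustrative, not the paper's algorithm.

```python
# A minimal sketch of the pinhole size estimate with a simple cosine tilt
# correction; the correction model and the numbers are illustrative assumptions.
import math

def object_size_mm(pixel_extent, pixel_pitch_mm, focal_mm, distance_mm, tilt_deg=0.0):
    """Real-world extent of an object spanning pixel_extent pixels in the image."""
    size_on_sensor = pixel_extent * pixel_pitch_mm        # extent on the sensor plane
    size = size_on_sensor * distance_mm / focal_mm        # pinhole similar triangles
    return size / math.cos(math.radians(tilt_deg))        # first-order tilt correction

# e.g. 800 pixels, 1.4 um pixel pitch, 4.2 mm lens, 500 mm distance, 10 deg tilt
print(f"{object_size_mm(800, 0.0014, 4.2, 500, tilt_deg=10):.1f} mm")   # ~135.4 mm
```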

The Study for the Efficient scanning of Stereo X-ray System (스테레오 X-ray 시스템 검색기능 개선 연구)

  • Hwang, Young-Gwan;Lee, Nam-Ho;Park, Jong-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.05a
    • /
    • pp.884-886
    • /
    • 2012
  • Because existing radiation scanning systems use two-dimensional radiation images, their low accuracy has been pointed out as a problem. Two-dimensional radiation images with different disparity values are acquired from a newly designed stereo image acquisition system, which adds one line sensor to the conventional system. In this paper, we enhance the scanning efficiency of the stereo X-ray inspection system using a precision control module.
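
As a reminder of why the added line sensor helps: in a stereo arrangement, the disparity between the two views encodes depth. The sketch below shows only the generic pinhole-stereo relation Z = f·B/d for illustration; the X-ray system's fan-beam, line-sensor geometry would need its own calibrated model.

```python
# Generic pinhole-stereo relation Z = f * B / d, shown only to illustrate how
# differing disparity values encode depth; not the X-ray system's actual model.
def depth_from_disparity(disparity_px, baseline_mm, focal_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px   # depth in mm

print(depth_from_disparity(disparity_px=12.5, baseline_mm=100.0, focal_px=800.0))  # 6400.0
```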


Three-Dimensional Conjugate Heat Transfer Analysis for Infrared Target Modeling (적외선 표적 모델링을 위한 3차원 복합 열해석 기법 연구)

  • Jang, Hyunsung;Ha, Namkoo;Lee, Seungha;Choi, Taekyu;Kim, Minah
    • Journal of KIISE
    • /
    • v.44 no.4
    • /
    • pp.411-416
    • /
    • 2017
  • The spectral radiance received by an infrared (IR) sensor is mainly influenced by the surface temperature of the target itself; therefore, precise temperature prediction is important for generating an IR target image. In this paper, we implement a combined three-dimensional surface-temperature prediction module that accounts for target attitude, environment, and material properties in order to generate a realistic IR signal. To verify the calculated surface temperature, we compare the result with that of the well-known IR signature analysis software OKTAL-SE. In addition, IR signal modeling is performed from the surface-temperature result through coupling with OKTAL-SE.
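
The statement that the received spectral radiance is governed mainly by surface temperature follows from Planck's law for an emitting (gray) surface. The sketch below integrates Planck's law over an MWIR band; the band limits and emissivity are illustrative assumptions, and the paper's actual radiometric chain (through OKTAL-SE) is more involved.

```python
# A minimal sketch relating surface temperature to in-band radiance via Planck's
# law for a gray surface; band limits and emissivity are illustrative assumptions.
import numpy as np

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * KB * temp_k))

def band_radiance(temp_k, lam_lo=3e-6, lam_hi=5e-6, emissivity=0.9, n=500):
    """In-band radiance by trapezoidal integration over the 3-5 um band."""
    lam = np.linspace(lam_lo, lam_hi, n)
    b = planck_spectral_radiance(lam, temp_k)
    return emissivity * np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(lam))

# radiance rises steeply with surface temperature, which is why the temperature
# prediction dominates the realism of the synthetic IR image
print(band_radiance(300.0), band_radiance(320.0))   # W / (m^2 * sr)
```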

Robot System Design Capable of Motion Recognition and Tracking the Operator's Motion (사용자의 동작인식 및 모사를 구현하는 로봇시스템 설계)

  • Choi, Yonguk;Yoon, Sanghyun;Kim, Junsik;Ahn, YoungSeok;Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.24 no.6
    • /
    • pp.605-612
    • /
    • 2015
  • Three-dimensional (3D) position determination and motion recognition using a 3D depth camera are applied to a developed penguin-shaped robot, and their validity and tracking performance are investigated. The robot is equipped with an Asus Xtion Pro Live as the 3D depth camera and a sound module. Using the skeleton information extracted from the camera's motion-recognition data, the robot is controlled to follow three typical mode reactions formed by the operator's gestures. In this study, the extraction of skeleton joint information using the 3D depth camera is introduced, and the tracking performance for the operator's motions is explained.
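
Skeleton-based gesture recognition of this kind usually reduces to geometric tests on the tracked 3D joint positions, for example joint angles or relative joint heights, mapped to a small set of modes. Below is a toy sketch of that idea; the joint names, thresholds, and the three modes are assumptions for illustration, not the rules used in the paper.

```python
# A toy sketch of skeleton-based gesture classification into three modes; joint
# names, thresholds, and modes are illustrative assumptions.
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by the segments b->a and b->c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def classify_gesture(skeleton):
    """Map tracked 3D joint positions (meters) to one of three reaction modes."""
    elbow_angle = joint_angle(skeleton["r_shoulder"], skeleton["r_elbow"], skeleton["r_hand"])
    if skeleton["r_hand"][1] > skeleton["head"][1]:
        return "mode_1"      # hand raised above the head
    if elbow_angle > 150:
        return "mode_2"      # arm extended straight
    return "mode_3"          # otherwise

skeleton = {"head": (0.0, 1.70, 2.0), "r_shoulder": (0.20, 1.50, 2.0),
            "r_elbow": (0.45, 1.50, 2.0), "r_hand": (0.70, 1.50, 2.0)}
print(classify_gesture(skeleton))   # -> mode_2
```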

Detection of Precise Crop Locations under Vinyl Mulch using Non-integral Moving Average Applied to Thermal Distribution

  • Cho, Yongjin;Yun, Yeji;Lee, Kyou-Seung;Lee, Dong-Hoon
    • Journal of Biosystems Engineering
    • /
    • v.42 no.2
    • /
    • pp.117-125
    • /
    • 2017
  • Purpose: Damage to pulse crops by wild birds is a serious problem; the damage rate between the seeding and cotyledon stages reaches 54.6% on average. In this study, a crop-position detection method was developed in which infrared (IR) sensors are used to determine the cotyledon position under a vinyl mulch. Methods: IR sensors that measure temperature were used to locate the cotyledons below the vinyl mulch. A single IR sensor module was installed at three locations for crops (peanut, red lettuce, and crown daisy) in the cotyledon stage. The representative thermal response of a 16 × 4 pixel area was detected with this sensor at a distance of 25 cm from the target. A spatial image was obtained by applying a non-integral moving-average method to the two-dimensional temperature distribution. The collected data were first processed by taking the moving average via interpolation to determine the frame with the lowest variance for a resolution unit of 1.02 cm. Results: The temperature distribution was plotted for a distance of 10 cm between crops. A clear leaf pattern of the crop was visually confirmed, although the temperature distribution after normalization was unclear. Image-conversion and frequency-conversion graphs were obtained based on the moving average by averaging the points corresponding to a frequency of 40 Hz over 8 pixels. The optimal resolutions at locations 1, 2, and 3 were found at 3.4, 4.1, and 5.6 pixels, respectively. Conclusions: In this study, to address the damage caused by birds to crops in the cotyledon stage after seeding, the vinyl mulch is punched after seeding, so the crops in the cotyledon stage must be located accurately. By conducting experiments with the single IR sensor and a sliding mechanical device, together with the non-integral interpolation method, the crops in the cotyledon stage could be precisely located.
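
A "non-integral moving average" can be read as a moving average whose window length is not an integer number of sensor pixels, realized by first interpolating the coarse thermal readings onto a finer grid. The sketch below shows one plausible 1-D realization of that idea and a variance check on the smoothed profile; it is an interpretation for illustration, since the abstract does not give the exact scheme used on the 16 × 4 array.

```python
# One plausible 1-D reading of a "non-integral moving average": interpolate the
# coarse thermal readings onto a finer grid, then average over a window whose
# length is a non-integer number of original pixels (interpretation only).
import numpy as np

def fractional_moving_average(profile, window_px, upsample=10):
    """Smooth a 1-D temperature profile with a fractional-length window."""
    x = np.arange(len(profile))
    xf = np.linspace(0, len(profile) - 1, (len(profile) - 1) * upsample + 1)
    fine = np.interp(xf, x, profile)                  # interpolated profile
    w = max(1, int(round(window_px * upsample)))      # fractional window in fine samples
    return xf, np.convolve(fine, np.ones(w) / w, mode="same")

# e.g. smooth a 16-pixel thermal line with a 3.4-pixel window; candidate windows
# could then be compared by the variance of the smoothed result
profile = np.array([21.0, 21.2, 21.1, 23.5, 24.8, 23.9, 21.3, 21.0,
                    21.1, 20.9, 24.1, 25.0, 24.2, 21.2, 21.0, 20.8])
xf, smoothed = fractional_moving_average(profile, window_px=3.4)
print(round(smoothed.var(), 3))
```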

3D object generation based on the depth information of an active sensor (능동형 센서의 깊이 정보를 이용한 3D 객체 생성)

  • Kim, Sang-Jin;Yoo, Ji-Sang;Lee, Seung-Hyun
    • Journal of the Korea Computer Industry Society
    • /
    • v.7 no.5
    • /
    • pp.455-466
    • /
    • 2006
  • In this paper, 3D objects are created from a real scene using an active sensor that provides depth and RGB information. To obtain the depth information, the Zcam™ camera, which has a built-in active sensor module, is used. (abridged) Third, the detailed parameters are calibrated and a 3D mesh model is created from the depth information, with neighboring points connected to complete the mesh. Finally, the color image data are applied to the mesh model and mapping is carried out to create the textured 3D object. Experiments show that creating 3D objects from the data of a camera with an active sensor is possible, and that this method is easier and more practical than using a 3D range scanner.
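
The mesh-generation step described above starts from back-projecting each depth pixel into 3D with the camera intrinsics, after which neighboring points are connected into triangles and the RGB image is mapped on as texture. Below is a minimal sketch of the back-projection step; the intrinsic parameter values are assumptions, as the Zcam's calibration is not given in the abstract.

```python
# A minimal sketch of the back-projection step: depth pixels -> 3D points using
# pinhole intrinsics. The intrinsic values are illustrative assumptions.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# neighbouring pixels (u, v), (u+1, v), (u, v+1) can then be joined into mesh
# triangles, and the RGB image mapped onto the mesh as texture
depth = np.full((240, 320), 1.5)    # toy flat depth map, 1.5 m everywhere
points = depth_to_points(depth, fx=300.0, fy=300.0, cx=160.0, cy=120.0)
print(points.shape)                 # (76800, 3)
```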
