• Title/Summary/Keyword: 화각 (angle of view)


A Study on Wireless WiGig Transmission (무선 WiGig 전송 연구)

  • Choi, Sang-hyeon;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.356-359
    • /
    • 2017
  • According to an operational plan of the Ministry of Education, one or two classrooms per school were equipped with an IWB (Interactive White Board). The visual presenter and the IWB thus took over the role of the overhead projector (OHP) and screen. However, the development of imaging devices could not keep pace with that of display devices, and utilization was often low. In this study, we obtain high-resolution images using a smartphone camera and transmit them to an IWB or large-screen TV using WiGig (Wireless Gigabit) technology, without the delay of conventional wireless communication. In addition, while the smartphone camera is equipped with a wide field-of-view (FOV) lens, a microscope lens can be attached to magnify a specific region up to 400 times. As a result, this study can serve as active material for real-time 400x magnification in education and research fields.
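The field of view mentioned in this abstract is a simple geometric quantity. As an illustration (not part of the paper), the angle of view of a pinhole-model camera follows from the sensor size and focal length; the 36 mm / 50 mm figures below are hypothetical full-frame values:

```python
import math

def angle_of_view_deg(sensor_size_mm: float, focal_length_mm: float) -> float:
    """Angle of view (degrees) of a thin-lens/pinhole camera along one sensor axis."""
    return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

# Hypothetical example: a full-frame sensor (36 mm wide) with a 50 mm lens.
horizontal_fov = angle_of_view_deg(36.0, 50.0)  # roughly 39.7 degrees
```

Shorter focal lengths or larger sensors widen the FOV, which is why smartphone cameras with short lenses cover a wide field.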


Dose Distribution of 100 MeV Proton Beams in KOMAC by using Liquid Organic Scintillator (액체 섬광체를 이용한 100 MeV 양성자 빔의 선량 분포 평가)

  • Kim, Sunghwan
    • Journal of radiological science and technology
    • /
    • v.40 no.4
    • /
    • pp.621-626
    • /
    • 2017
  • In this paper, an optical dosimetry system for radiation dose measurement is developed and characterized for the 100 MeV proton beams at KOMAC (Korea Multi-purpose Accelerator Complex). The system consists of a 10 wt% solution of Ultima Gold™ liquid organic scintillator in ethanol, a camera lens (50 mm, f/1.8), and a high-sensitivity CMOS (complementary metal-oxide-semiconductor) camera (ASI120MM, ZWO Co.). The field of view (FOV) of the system is designed to be 150 mm at a distance of 2 m. The system showed good linearity in the range of 1-40 Gy for the 100 MeV proton beams at KOMAC. We also successfully obtained the percentage depth dose and the isodose curves of the 100 MeV proton beams from the captured images. Because the solvent is not a human-tissue-equivalent material, the absorbed dose in the human body cannot be measured directly. Through this study, we have established an optical dosimetry procedure and propose a new volumetric dose assessment method.
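The linear dose response reported above implies a simple calibration workflow: fit image intensity against delivered dose, then invert the fit to read out dose. A sketch with invented numbers (not measurements from the paper):

```python
import numpy as np

# Hypothetical calibration data: delivered dose (Gy) vs. mean scintillation
# image intensity, assumed linear as in the reported 1-40 Gy range.
dose_gy   = np.array([1.0, 5.0, 10.0, 20.0, 40.0])
intensity = np.array([12.0, 60.0, 120.0, 240.0, 480.0])

# Least-squares linear fit: intensity = slope * dose + offset
slope, offset = np.polyfit(dose_gy, intensity, deg=1)

def dose_from_intensity(i: float) -> float:
    """Invert the calibration to estimate dose from a measured intensity."""
    return (i - offset) / slope
```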

Implementation of Omni-directional Image Viewer Program for Effective Monitoring (효과적인 감시를 위한 전방위 영상 기반 뷰어 프로그램 구현)

  • Jeon, So-Yeon;Kim, Cheong-Hwa;Park, Goo-Man
    • Journal of Broadcast Engineering
    • /
    • v.23 no.6
    • /
    • pp.939-946
    • /
    • 2018
  • In this paper, we implement a viewer program for effective monitoring with omnidirectional images. The program consists of four modes: Normal mode, ROI (Region of Interest) mode, Tracking mode, and Auto-rotation mode, and the results of all modes are displayed simultaneously. In Normal mode, the wide-angle image is rendered as a spherical image that supports pan, tilt, and zoom. In ROI mode, a selected area is displayed enlarged. In Tracking mode, a selected object is tracked, and Auto-rotation mode maps the object's position to the rotation angle of the spherical image so that the tracked object does not leave the view. Parallel programming is used to process the multiple modes and improve the processing speed. Compared with surveillance systems that have a limited angle of view, this approach has the advantage that various angles can be observed.
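Rendering a spherical view with pan and tilt, as in the Normal mode described above, rests on mapping view directions to equirectangular pixel coordinates. A minimal sketch of that mapping (my own illustration, not the paper's code):

```python
import math

def dir_to_equirect_pixel(yaw_rad: float, pitch_rad: float,
                          width: int, height: int) -> tuple:
    """Map a view direction (yaw in [-pi, pi], pitch in [-pi/2, pi/2])
    to pixel coordinates in an equirectangular panorama."""
    u = (yaw_rad / (2.0 * math.pi) + 0.5) * width   # longitude -> column
    v = (0.5 - pitch_rad / math.pi) * height        # latitude  -> row
    return (u, v)

# Looking straight ahead (zero pan/tilt) lands at the image centre.
center = dir_to_equirect_pixel(0.0, 0.0, 4096, 2048)
```

Panning and tilting the virtual camera then amounts to sampling a window of directions around the chosen yaw/pitch.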

Design of an Around-View System Using Android-based Image Processing (안드로이드 기반 영상처리를 이용한 Around-View 시스템 설계)

  • Kim, Gyu-Hyun;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.05a
    • /
    • pp.421-424
    • /
    • 2014
  • Image-processing products such as car black boxes and CCTV are currently widespread on the market, giving convenience to users. In particular, a black box helps identify the cause of an accident that occurred while the driver was driving. However, a black box can capture images only of the front or rear of the vehicle: because of its limited angle of view, it cannot record scenes outside the driver's vision or the black box's own view. To solve this problem, AVM (Around-View Monitoring) systems, more advanced than the black box, have been developed. An AVM system combines front, rear, left, and right images into a top-view image, securing a 360° view around the vehicle. However, an AVM system must be installed in the vehicle together with hardware capable of acquiring the images. In this paper, we design a system that obtains a 360° image of the vehicle using an Android-based tablet.
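The top-view image described above is typically produced by warping each camera image to the ground plane with a homography estimated from point correspondences. A minimal direct-linear-transform (DLT) sketch with hypothetical calibration points (illustrative, not the paper's implementation):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (four or more
    point pairs) via the DLT: stack two linear constraints per pair and
    take the SVD null vector."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, x, y):
    """Apply homography H to (x, y) and dehomogenize."""
    p = H @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])

# Hypothetical calibration: four road points in the camera image mapped
# to their positions in the top-view image.
src = [(100, 400), (540, 400), (0, 480), (640, 480)]
dst = [(0, 0), (400, 0), (0, 300), (400, 300)]
H = homography_from_points(src, dst)
```

In a real AVM pipeline the four per-camera warps are blended into one composite top view; libraries such as OpenCV provide equivalent routines.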


A Study on Performance and Sensitivity Improvement of an Off-axis TMA Telescope Optical System by Changing the Aperture-stop Position (조리개 위치 변경을 통한 비축 삼반사 망원경 광학계의 성능 및 민감도 개선 연구)

  • Lee, Han-Yul;Jun, Won-Kyoun;Lee, Sang-min;Kim, Ki-hwan;Seo, Hyun-Ju;Park, Seung-Han;Jung, Mee-Suk
    • Korean Journal of Optics and Photonics
    • /
    • v.32 no.1
    • /
    • pp.9-14
    • /
    • 2021
  • In this paper, we study the optical system of an off-axis TMA telescope for satellites according to the aperture-stop position. An off-axis TMA telescope should have high resolution and a wide field of view (FOV). In addition, the optical system should have a wide tolerance range, because it is structurally located off-axis and is difficult to assemble. However, performance and sensitivity differ according to the aperture-stop position, so selecting a suitable aperture-stop position is important. Therefore, we design an off-axis TMA telescope for each aperture-stop position and analyze its performance and sensitivity to suggest a suitable aperture-stop position.

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.27 no.4
    • /
    • pp.487-498
    • /
    • 2022
  • A depth map is an image that represents distance information of 3D space on a 2D plane and is used in various 3D vision tasks. Many existing depth-estimation studies mainly use narrow-FoV images, in which a significant portion of the scene is lost. In this paper, we propose a technique for generating a 360° omnidirectional RGBD image from a sparse set of narrow-FoV images. The proposed generative-adversarial-network-based model estimates the relative FoV over the entire panorama from a small number of non-overlapping images and produces 360° RGB and depth images simultaneously. It also shows improved performance by using a network that reflects the spherical characteristics of 360° images.

A study on an artificial intelligence model for measuring object speed using road markers that can respond to external forces (외부력에 대응할 수 있는 도로 마커 활용 개체 속도 측정 인공지능 모델 연구)

  • Lim, Dong Hyun;Park, Dae-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.228-231
    • /
    • 2022
  • Most CCTVs operated by public institutions for crime prevention and parking enforcement are located on roads. The angle of view of these CCTVs often changes for various reasons, such as bolts loosening from vibration or impacts by vehicles and workers. To provide AI services effectively based on the collected video, the service target area (ROI, Region of Interest) must be available without interruption within the image. This is also related to the efficient use of computing power for image analysis. This study explains how to maximize the application of artificial-intelligence technology by setting the ROI based on markers on the road, restricting image analysis to that area, and studying the process of re-finding the ROI when the view changes.
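Restricting analysis to a marker-defined ROI, as described above, reduces to a point-in-polygon test on detections. A ray-casting sketch with hypothetical marker positions (illustrative, not the study's implementation):

```python
def point_in_roi(x, y, roi):
    """Ray-casting test: is (x, y) inside the polygon `roi`
    (a list of (x, y) vertices, e.g. detected road markers)?"""
    inside = False
    n = len(roi)
    for i in range(n):
        x1, y1 = roi[i]
        x2, y2 = roi[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical ROI built from four road-marker positions in the image.
roi = [(100, 100), (500, 120), (520, 400), (80, 380)]
```

Detections falling outside the polygon can then be discarded before any heavier AI processing, saving computing power.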


Development of an ESRGAN-based Compound-eye Image Quality Enhancement Algorithm (ESRGAN 기반의 복안영상 품질 향상 알고리즘 개발)

  • Taeyoon Lim;Yongjin Jo;Seokhaeng Heo;Jaekwan Ryu
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.2
    • /
    • pp.11-19
    • /
    • 2024
  • Demand for small biomimetic robots that can carry out reconnaissance missions in underground spaces and narrow passages without being exposed to the enemy is increasing, in order to improve the fighting power and survivability of soldiers in wartime. A small compound-eye image sensor for environmental recognition has advantages such as small size, low aberration, wide angle of view, depth estimation, and HDR, which can be exploited in many vision applications. However, because of the small lens size, the resolution is low, and the fused image obtained from an actual compound-eye image suffers from this low resolution. This paper proposes a compound-eye image-quality enhancement algorithm based on image enhancement and ESRGAN to overcome the low-resolution problem. Applying the proposed algorithm to fused compound-eye images improves resolution and image quality, so performance gains are expected in various studies using compound-eye cameras.

Infrastructure 2D Camera-based Real-time Vehicle-centered Estimation Method for Cooperative Driving Support (협력주행 지원을 위한 2D 인프라 카메라 기반의 실시간 차량 중심 추정 방법)

  • Ik-hyeon Jo;Goo-man Park
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.1
    • /
    • pp.123-133
    • /
    • 2024
  • Existing autonomous-driving technology has been developed around sensors attached to the vehicle that detect the environment and formulate driving plans. However, it has limitations such as performance degradation in specific situations, e.g., adverse weather, backlighting, and occlusion by obstructions. To address these issues, cooperative autonomous-driving technology, which extends the perception range of autonomous vehicles through support from road infrastructure, has attracted attention. Nevertheless, real-time estimation of the 3D centroids of objects, as required by international standards, is challenging with single-lens cameras. This paper proposes an approach that detects objects and estimates vehicle centroids in real time using the fixed field of view of road-infrastructure cameras and pre-measured geometric information. The proposed method was confirmed, using GPS positioning equipment, to estimate object center points effectively, and it is expected to contribute to the proliferation and adoption of cooperative autonomous-driving infrastructure technology applicable to both vehicles and road infrastructure.
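With a fixed camera and pre-measured geometry, a detected image point can be located on the road by intersecting its viewing ray with the ground plane. A simplified pinhole sketch with hypothetical camera parameters (height, tilt, focal length), not the paper's actual method:

```python
import math

def ground_point(u, v, f, cx, cy, cam_height, tilt_rad):
    """Back-project pixel (u, v) onto the ground plane (z = 0) for a pinhole
    camera at height `cam_height` metres, tilted down by `tilt_rad`, with
    focal length `f` in pixels and principal point (cx, cy). Returns (x, y)
    in metres: x to the camera's right, y along its forward direction."""
    # Pixel ray expressed in world coordinates (camera tilted down).
    rx = u - cx
    ry = -(v - cy) * math.sin(tilt_rad) + f * math.cos(tilt_rad)
    rz = -(v - cy) * math.cos(tilt_rad) - f * math.sin(tilt_rad)
    if rz >= 0:
        raise ValueError("ray does not hit the ground")
    s = cam_height / -rz  # scale so the ray descends to z = 0
    return (s * rx, s * ry)

# Hypothetical setup: camera 5 m up, tilted 45 degrees down, f = 1000 px.
x, y = ground_point(500, 500, f=1000, cx=500, cy=500,
                    cam_height=5.0, tilt_rad=math.radians(45))
# The image centre maps to the road point directly ahead of the camera.
```

Averaging such ground points over a vehicle's detected footprint gives a simple centroid estimate in road coordinates.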

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.96-110
    • /
    • 2017
  • Most vehicle-detection studies using conventional or wide-angle lenses have blind spots when detecting to the rear, and the image is vulnerable to noise and various external environments. In this paper, we propose a detection method for harsh external environments with noise, blind spots, etc. First, a fish-eye lens is used to minimize blind spots compared with a wide-angle lens. Because nonlinear radial distortion increases as the lens angle grows, calibration was performed after initializing and optimizing the distortion constant to ensure accuracy. In addition, along with calibration, the original image was analyzed to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally requires considerable computation time, so the well-known Dark Channel Prior algorithm was used to reduce it. Gamma correction was used to adjust brightness, and a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction. To reduce computation time, the evaluation used only a part of the image rather than the whole. The computed brightness and contrast values were used to decide the gamma value and to correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were registered into a single image to minimize the total computation time. Finally, the HOG feature-extraction method was used to detect vehicles in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate compared with the existing vehicle-detection method.
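The gamma-correction step above can be sketched as follows. Estimating gamma from the mean brightness of a small sample patch, so that the patch maps toward mid-grey, is one simple heuristic consistent with the abstract's patch-based evaluation; the exact rule used in the paper is not specified, so this is only an illustration:

```python
import numpy as np

def estimate_gamma(patch: np.ndarray) -> float:
    """Heuristic: choose gamma so the patch's mean brightness (in [0, 1])
    maps toward mid-grey, i.e. mean ** gamma == 0.5."""
    mean = float(np.clip(patch.mean(), 1e-3, 1 - 1e-3))
    return float(np.log(0.5) / np.log(mean))

def gamma_correct(image: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma correction to an image with values in [0, 1]."""
    return np.clip(image, 0.0, 1.0) ** gamma

# Dark hypothetical frame: mean 0.25 gives gamma = log(0.5)/log(0.25) = 0.5,
# which brightens the frame (0.25 ** 0.5 = 0.5).
frame = np.full((4, 4), 0.25)
corrected = gamma_correct(frame, estimate_gamma(frame[:2, :2]))
```

Evaluating only a corner patch rather than the full frame mirrors the paper's strategy of sampling part of the image to cut calculation time.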