• Title/Summary/Keyword: Camera Performance

Development of OIS actuator for Mobile Phone Camera (휴대폰 카메라를 위한 OIS 엑추에이터 개발)

  • Baek, Hyun-Woo;Hur, Young-Jun;Song, Myeong-Gyu;Park, No-Cheol;Park, Young-Pil;Park, Kyoung-Su;Lim, Soo-Cheol;Park, Jae-Hyuk
    • Transactions of the Society of Information Storage Systems / v.5 no.1 / pp.8-13 / 2009
  • In this study, to compensate for camera trembling caused by the vibration of the user's hand, a 2-axis voice coil actuator for an optical image stabilizer (OIS) is proposed. Considering actuating performance, OIS volume, and the use of a Hall sensor, one concept model is selected and optimized to maximize actuating performance. Two mechanism types that allow 2-axis motion and can accommodate the optimized EM circuit are proposed. Finally, both types are fabricated, and the actuating performance of the OIS actuator and the behavior of the Hall sensor are verified experimentally.
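
The abstract above describes a 2-axis voice coil actuator with Hall-sensor feedback. As a rough illustration of how such a loop fits together, a minimal per-axis sketch follows: the gyro rate is integrated into a shake-angle estimate, and a PI loop drives the coil current so the lens displacement read from the Hall sensor cancels the shake. All gains, units, and signal names are assumptions, not values from the paper.

```python
# Hedged sketch of one axis of an OIS control loop; not the paper's controller.

class AxisPI:
    """Simple PI regulator for lens position on one axis."""
    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, target_um: float, hall_um: float) -> float:
        """Return coil drive moving the lens toward target_um (Hall feedback)."""
        error = target_um - hall_um
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def ois_axis_step(gyro_rate_dps: float, shake_deg: float, hall_um: float,
                  ctrl: AxisPI, dt: float = 0.001, um_per_deg: float = 50.0):
    """One control tick: integrate the gyro rate into a shake angle, convert
    it to the lens shift that cancels it, and compute the coil drive."""
    shake_deg += gyro_rate_dps * dt
    target_um = -shake_deg * um_per_deg
    return shake_deg, ctrl.update(target_um, hall_um)
```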

Design and performance prediction of large-area hybrid gamma imaging system (LAHGIS) for localization of low-level radioactive material

  • Lee, Hyun Su;Kim, Jae Hyeon;Lee, Junyoung;Kim, Chan Hyeong
    • Nuclear Engineering and Technology / v.53 no.4 / pp.1259-1265 / 2021
  • In the present study, a large-area hybrid gamma imaging system was designed by adopting coded aperture imaging on the basis of a large-area Compton camera to achieve high imaging performance over a broad energy range (100-2000 keV). The system, consisting of a tungsten coded-aperture mask and monolithic NaI(Tl) scintillation detectors, was designed through a series of Geant4 Monte Carlo radiation transport simulations, considering both imaging sensitivity and imaging resolution. The performance of the system was then predicted by Geant4 Monte Carlo simulations for point sources under various conditions. Our simulation results show that the system provides very high imaging sensitivity (i.e., low values of minimum detectable activity, MDA), allowing imaging of low-activity sources at distances impossible with coded aperture imaging or Compton imaging alone. In addition, the imaging resolution of the system was found to be high (i.e., around 6°) over the broad energy range of 59.5-1330 keV.
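
The sensitivity result above is reported as minimum detectable activity (MDA). For reference only, here is a minimal sketch of the standard Currie MDA expression; it is not taken from the paper, and the parameter names are assumptions.

```python
import math

def mda_bq(background_counts: float, efficiency: float,
           gamma_yield: float, live_time_s: float) -> float:
    """Currie-style MDA (Bq) at ~95% confidence:
    L_D = 2.71 + 4.65*sqrt(B), scaled by efficiency, emission yield, time."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit, counts
    return l_d / (efficiency * gamma_yield * live_time_s)
```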

A Study on the Two-Dimensional Scheduling for Minimization of Moving Distance on the Remote Controllable Web-Camera (원격조정 가능한 웹 카메라의 이동거리 최소화를 위한 이차원 스케줄링에 관한 연구)

  • Cho, Soo-Young;Song, Myung-Nam;Kim, Young-Sin;Hwang, Jun
    • Journal of Internet Computing and Services / v.1 no.2 / pp.61-67 / 2000
  • For remote-controllable web cameras, which have drawn particular attention in Internet real-time broadcasting systems, a great many clients connect to the web camera's server to request service, so the scheduling method is important. Web camera systems have traditionally used the FIFO (First In First Out) or SDF (Shortest Distance First) scheduling method, but neither satisfies both the minimization of the camera's moving distance and fairness to users. In this paper, we propose a 2D scheduling method. As a result, the moving distance of the web camera decreases compared with FIFO scheduling, and, unlike SDF scheduling, no user request is starved. Thus, if remote-controllable web camera systems use the 2D scheduling method, they satisfy both the minimization of the camera's moving distance and fairness to users simultaneously, improving both user satisfaction and system performance.
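
The abstract does not detail the paper's 2D scheduling algorithm, so the sketch below only illustrates the trade-off it targets: a distance-aware scheduler with an aging term shortens camera travel relative to FIFO while preventing the starvation that pure SDF allows. The scoring rule and the weight are assumptions, not the paper's method.

```python
import math

def pick_next(current_pose, pending, now, age_weight=0.5):
    """Pick the next pan/tilt request. current_pose is (pan_deg, tilt_deg);
    pending is a list of ((pan_deg, tilt_deg), arrival_time). The score is
    angular distance minus an aging credit, so old requests cannot starve."""
    def score(req):
        (pan, tilt), arrival = req
        dist = math.hypot(pan - current_pose[0], tilt - current_pose[1])
        return dist - age_weight * (now - arrival)
    return min(pending, key=score)
```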

A Study on the Implementation of the Web-Camera System for Realtime Monitoring (실시간 영상 감시를 위한 웹 카메라 시스템의 구현에 관한 연구)

  • Ahn, Young-Min;Jin, Hyun-Joon;Park, Nho-Kyung
    • Journal of IKEEE / v.5 no.2 s.9 / pp.174-181 / 2001
  • In this study, the architecture of a web camera system for real-time monitoring over the Internet is proposed and implemented in two different structures. In the first architecture, the web server and the camera server are implemented on the same system, which transfers motion pictures compressed into JPEG files to users on the WWW (World Wide Web). In the second, the web server and the camera server are implemented on separate systems, and the motion pictures are transferred from the camera server to the web server and finally to the users. For JPEG image transfer in the web camera system, a Java applet and JavaScript are used to make the system as independent as possible of the operating system and web browser. To compare the performance of the two architectures, data traffic is measured and simulated in bytes per second.
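
The applet-based JPEG transfer described above predates modern streaming APIs. As a present-day stand-in for the same server-push idea, here is a minimal sketch of motion JPEG over HTTP using multipart/x-mixed-replace; read_jpeg_frame() is a hypothetical capture helper, and the port and boundary name are arbitrary.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_jpeg_frame() -> bytes:
    """Hypothetical camera-capture helper; replace with real acquisition."""
    raise NotImplementedError

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server-push stream: each part replaces the previous JPEG frame.
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        while True:
            jpeg = read_jpeg_frame()
            self.wfile.write(b"--frame\r\nContent-Type: image/jpeg\r\n")
            self.wfile.write(f"Content-Length: {len(jpeg)}\r\n\r\n".encode())
            self.wfile.write(jpeg + b"\r\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), MJPEGHandler).serve_forever()
```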

Development of a Coded-aperture Gamma Camera for Monitoring of Radioactive Materials (방사성 물질 감시를 위한 부호화 구경 감마카메라 개발)

  • Cho, Gye-Seong;Shin, Hyung-Joo;Chi, Yong-Ki;Yoon, Jeong-Hyoun
    • Journal of Radiation Protection and Research / v.29 no.4 / pp.257-261 / 2004
  • A coded-aperture gamma camera was developed to increase the sensitivity of a pinhole camera built from a pixelated CsI(Tl) scintillator and a position-sensitive photomultiplier tube. A modified round-hole uniformly redundant array of 13 × 11 pixels was chosen as the coded mask in consideration of the detector's spatial resolution. The performance of the coded-aperture camera was compared with that of the pinhole camera using various forms of a Tc-99m source to assess the improvement in signal-to-noise ratio, i.e., in sensitivity. Image quality improved substantially despite a slight degradation of spatial resolution. Although the camera and the tests addressed the low-energy case, the coded-aperture gamma camera concept could be used effectively for radioactive environmental monitoring and other applications.
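
Coded-aperture images are typically reconstructed by cross-correlating the recorded shadowgram with a decoding array derived from the mask. The sketch below shows that decoding step using the textbook URA-family recipe (balanced decoding array G = 2M - 1 and periodic correlation via FFT); it is not the paper's specific processing, and the mask pattern is left as an input of the same shape as the detector image.

```python
import numpy as np

def decode(shadowgram: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Reconstruct a coded-aperture image by periodic cross-correlation.
    shadowgram: recorded counts; mask: binary open/closed pattern, same shape."""
    g = 2.0 * mask - 1.0  # balanced decoding array for URA-family masks
    return np.real(np.fft.ifft2(np.fft.fft2(shadowgram) *
                                np.conj(np.fft.fft2(g))))
```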

Multi-slit prompt-gamma camera for locating of distal dose falloff in proton therapy

  • Park, Jong Hoon;Kim, Sung Hun;Ku, Youngmo;Kim, Chan Hyeong;Lee, Han Rim;Jeong, Jong Hwi;Lee, Se Byeong;Shin, Dong Ho
    • Nuclear Engineering and Technology / v.51 no.5 / pp.1406-1416 / 2019
  • In this research, a multi-slit prompt-gamma camera was developed to locate the distal dose falloff of proton beam spots in spot-scanning proton therapy. To assess the performance of the developed camera, therapeutic proton beams were delivered to a solid plate phantom, and the prompt gammas from the phantom were measured with the camera. Our results show that the camera locates the 90% distal dose falloff (= d90%) within about 2-3 mm of error for spots composed of 3.8 × 10^8 protons or more. The measured location of d90% is not very sensitive to the irradiation depth of the proton beam (i.e., the depth of the proton beam from the phantom surface toward which the camera is located). Considering the number of protons per spot for the most distal spots in typical treatment cases (i.e., a 2 Gy dose divided into 2 fields), the camera can locate d90% for only a fraction of the spots, depending on the treatment case. However, the information from those spots is still valuable in that the multi-slit prompt-gamma camera locates the distal dose falloff solely from prompt gamma measurement, i.e., without referring to Monte Carlo simulation.
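
Given a measured prompt-gamma depth profile, locating d90% reduces to finding where the signal falls to 90% of its peak on the distal side. The generic falloff locator below illustrates this; it assumes a 1-D profile that decreases past the peak and is not the paper's reconstruction chain.

```python
import numpy as np

def locate_d90(depth_mm: np.ndarray, profile: np.ndarray) -> float:
    """Depth (mm) where the profile first falls to 90% of its peak on the
    distal (deeper) side, with linear interpolation between samples."""
    i_peak = int(np.argmax(profile))
    level = 0.9 * profile[i_peak]
    for i in range(i_peak, len(profile) - 1):
        if profile[i] >= level > profile[i + 1]:
            frac = (profile[i] - level) / (profile[i] - profile[i + 1])
            return float(depth_mm[i] + frac * (depth_mm[i + 1] - depth_mm[i]))
    return float(depth_mm[-1])  # falloff not reached within the profile
```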

Vision-based dense displacement and strain estimation of miter gates with the performance evaluation using physics-based graphics models

  • Narazaki, Yasutaka;Hoskere, Vedhus;Eick, Brian A.;Smith, Matthew D.;Spencer, Billie F.
    • Smart Structures and Systems / v.24 no.6 / pp.709-721 / 2019
  • This paper investigates a framework for vision-based dense displacement and strain measurement of miter gates, together with an approach for quantitatively evaluating the expected performance. The proposed framework consists of the following steps: (i) estimation of 3D displacement and strain from images taken before and after deformation (a water-fill event), (ii) evaluation of the expected performance of the measurement, and (iii) selection of the measurement setting with the highest expected accuracy. The framework first estimates the full-field optical flow between the images before and after the water-fill event, and projects the flow onto the finite element (FE) model to estimate the 3D displacement and strain. Then, the expected displacement/strain estimation accuracy is evaluated at each node/element of the FE model. Finally, the methods and measurement settings with the highest expected accuracy are selected to achieve the best results from the field measurement. A physics-based graphics model (PBGM) of the miter gates of the Greenup Lock and Dam, with an updated texturing step, is used to simulate the vision-based measurements in a photo-realistic environment and to evaluate the expected performance of different measurement plans (camera properties, camera placement, post-processing algorithms). The framework investigated in this paper can be used to analyze and optimize measurement performance for different camera placements and post-processing steps prior to the field test.
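
Step (i) of the framework rests on dense optical flow between the before and after images. A minimal sketch of that step with OpenCV's Farneback method follows; the projection of the 2D flow onto the FE model is omitted, and the file names and flow parameters are placeholders.

```python
import cv2

# Images taken before and after the water-fill event (placeholder paths).
before = cv2.imread("gate_before.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("gate_after.png", cv2.IMREAD_GRAYSCALE)

# Dense per-pixel flow; parameters are typical defaults, not tuned values.
flow = cv2.calcOpticalFlowFarneback(
    before, after, None,
    pyr_scale=0.5, levels=4, winsize=21,
    iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

dx, dy = flow[..., 0], flow[..., 1]  # displacement field in pixels
```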

A Study on the Improvement of Human Operators' Performance in Detection of External Defects in Visual Inspection (품질 검사자의 외관검사 검출력 향상방안에 관한 연구)

  • Han, Sung-Jae;Ham, Dong-Han
    • Journal of the Korea Safety Management & Science / v.21 no.4 / pp.67-74 / 2019
  • Visual inspection is regarded as one of the critical activities for quality control in a manufacturing company; it is thus important to improve the performance of detecting a defective part or product. There are three possible working modes for visual inspection: fully automatic (by automatic machines), fully manual (by human operators), and semi-automatic (by collaboration between human operators and automatic machines). Most current studies on visual inspection have focused on improving automatic detection performance by developing better automatic machines using computer vision technologies. However, there is still a range of situations where human operators must conduct visual inspection with or without automatic machines, and in these situations the operators' performance is critical to successful quality control. Visual inspection of components assembled into a mobile camera module is one such situation. This study investigates human performance issues in visual inspection of these components, paying particular attention to human errors. For this, an Abstraction Hierarchy-based work domain modeling method was applied to examine a range of direct and indirect factors related to human errors, and their relationships, in the visual inspection of the components. Although this study was conducted in the context of manufacturing mobile camera modules, the proposed method can readily be generalized to other industries.

Development of Urban Wildlife Detection and Analysis Methodology Based on Camera Trapping Technique and YOLO-X Algorithm (카메라 트래핑 기법과 YOLO-X 알고리즘 기반의 도시 야생동물 탐지 및 분석방법론 개발)

  • Kim, Kyeong-Tae;Lee, Hyun-Jung;Jeon, Seung-Wook;Song, Won-Kyong;Kim, Whee-Moon
    • Journal of the Korean Society of Environmental Restoration Technology / v.26 no.4 / pp.17-34 / 2023
  • Camera trapping has been used as a non-invasive survey method that minimizes anthropogenic disturbance to ecosystems. Nevertheless, it is labor-intensive and time-consuming, as researchers must quantify species and populations manually. In this study, we aimed to improve the preprocessing of camera trapping data by utilizing an object detection algorithm. Wildlife monitoring using unmanned sensor cameras was conducted in an urban forest and a green space on a university campus in Cheonan City, Chungcheongnam-do, Korea. The collected camera trapping data were classified by a researcher to identify the occurrence of species, and then used to test the performance of the YOLO-X object detection algorithm for wildlife detection. Camera trapping yielded 10,500 images of the urban forest and 51,974 images of the campus green space. Of the 62,474 images in total, 52,993 (84.82%) were false triggers containing no wildlife, while 9,481 (15.18%) contained wildlife. Wildlife monitoring recorded 19 species of birds, 5 species of mammals, and 1 species of reptile within the study area. In addition, there were statistically significant differences in the frequency of occurrence of the following species according to the type of urban greenery: Parus varius (t = -3.035, p < 0.01), Parus major (t = 2.112, p < 0.05), Passer montanus (t = 2.112, p < 0.05), Paradoxornis webbianus (t = 2.112, p < 0.05), Turdus hortulorum (t = -4.026, p < 0.001), and Sitta europaea (t = -2.189, p < 0.05). The detection performance of the YOLO-X model was then analyzed: it correctly classified 94.2% of the camera trapping data, with 7,809 true positive and 51,044 true negative predictions. The model was used with a filter activated to detect 10 specific animal taxa out of the 80 classes trained on the COCO dataset, without any additional training. In future studies, training data for the key occurring species should be created and applied to make the model suitable for wildlife monitoring.
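
The class filter described above (10 animal taxa out of the 80 COCO classes, with no retraining) can be illustrated with a short post-processing sketch. The detection tuple format is an assumption; real YOLO-X output must first be decoded into (class_id, score, box) triples.

```python
# The 10 animal classes in the standard 80-class COCO label list.
COCO_ANIMALS = {14: "bird", 15: "cat", 16: "dog", 17: "horse", 18: "sheep",
                19: "cow", 20: "elephant", 21: "bear", 22: "zebra",
                23: "giraffe"}

def keep_wildlife(detections, min_score=0.3):
    """Keep detections whose class is one of the 10 COCO animal taxa.
    detections: iterable of (class_id, score, box) tuples (assumed format)."""
    return [(COCO_ANIMALS[c], s, box) for c, s, box in detections
            if c in COCO_ANIMALS and s >= min_score]
```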

SEOUL NATIONAL UNIVERSITY CAMERA II (SNUCAM-II): THE NEW SED CAMERA FOR THE LEE SANG GAK TELESCOPE (LSGT)

  • Choi, Changsu;Im, Myungshin
    • Journal of The Korean Astronomical Society / v.50 no.3 / pp.71-78 / 2017
  • We present the characteristics and performance of the new CCD camera system, SNUCAM-II (Seoul National University CAMera system II), installed on the Lee Sang Gak Telescope (LSGT) at the Siding Spring Observatory in 2016. SNUCAM-II consists of a deep-depletion chip covering a wide wavelength range from 0.3 μm to 1.1 μm with high sensitivity (QE > 80% over 0.4 to 0.9 μm). It is equipped with the SDSS ugriz filters and 13 medium-bandwidth (50 nm) filters, enabling the study of spectral energy distributions (SEDs) of diverse objects, from extragalactic sources to solar system objects. On LSGT, SNUCAM-II offers a 15.7 × 15.7 arcmin field of view (FOV) at a pixel scale of 0.92 arcsec, and 5σ limiting magnitudes of g = 19.91 AB mag and z = 18.20 AB mag for point-source detection with a 180 s exposure.
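
The quoted limiting magnitudes follow the usual 5σ point-source convention: the count level equal to five times the aperture noise, converted to an AB magnitude with the instrumental zero point. A minimal sketch under those assumptions (not SNUCAM-II's actual calibration pipeline; parameter names are assumptions):

```python
import math

def limiting_mag_ab(zeropoint_ab: float, sky_sigma_counts: float,
                    npix_aperture: float) -> float:
    """5-sigma limiting AB magnitude for a point source: sky noise summed
    over the aperture, times five, converted with the per-exposure zero point."""
    noise_counts = sky_sigma_counts * math.sqrt(npix_aperture)
    return zeropoint_ab - 2.5 * math.log10(5.0 * noise_counts)
```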