• Title/Summary/Keyword: Frame per second (FPS)


Commercially Available High-Speed Cameras Connected with a Laryngoscope for Capturing the Laryngeal Images (상용화 된 고속카메라와 후두내시경을 이용한 성대촬영 방법의 소개)

  • Nam, Do-Hyun; Choi, Hong-Shik
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.21 no.2 / pp.133-138 / 2010
  • Background and Objectives: High-speed imaging can be useful in studies of linguistic and artistic singing styles and in the laryngeal examination of patients with voice disorders, particularly those with irregular vocal fold vibrations. In this study, we introduce new laryngeal imaging systems built from commercially available high-speed cameras connected to a laryngoscope. Materials and Methods: Laryngeal images were captured with three different cameras. First, an adapter was made to connect the laryngoscope to a Casio EX-F1, which captured images under a 2×150 W halogen light source (EndoSTROB) at 1,200 fps (frames per second) at 336×96 resolution. Second, a Phantom Miro ex4 captured digital laryngeal images under a 175 W Xenon Nova light source (STORZ) at 1,920 fps (512×384). Finally, a MotionXtra N-4 connected to the laryngoscope captured images under a 250 W halogen lamp (Olympus CLH-250) at 2,000 fps (384×400). All images were transformed into kymographs using KIPS (Kay's Image Processing Software) from KayPENTAX Inc. Results: The Casio EX-F1 was too small to adjust the focus, and the screen size was reduced once images were captured, despite their high resolution. High-quality color images could be obtained with the Phantom Miro ex4, whereas the MotionXtra N-4 produced good black-and-white images. Despite some limitations of the Phantom Miro ex4 and MotionXtra N-4, namely illumination problems, limited recording time, and time-consuming procedures, these portable devices provided high-resolution images. Conclusion: All of these high-speed cameras could capture laryngeal images when connected to a laryngoscope. High-resolution images could be captured at a fixed position under good lighting. Accordingly, these techniques could be applied to observe vocal fold vibration properties in clinical practice.
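
The kymographic transformation mentioned above (done in the paper with KayPENTAX's proprietary KIPS) amounts to stacking one fixed scan line from every high-speed frame; a minimal sketch with OpenCV/NumPy, where the video path and scan row are illustrative inputs rather than values from the paper:

```python
import cv2
import numpy as np

def video_kymograph(path: str, scan_row: int) -> np.ndarray:
    """Build a digital kymograph by stacking one fixed image row
    (a line crossing the glottis) from every high-speed frame."""
    cap = cv2.VideoCapture(path)
    rows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        rows.append(gray[scan_row, :])  # one spatial line per time step
    cap.release()
    # time runs down the vertical axis, space along the horizontal axis
    return np.stack(rows, axis=0)

# e.g. kymo = video_kymograph("larynx_1200fps.avi", scan_row=48)
```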


Development of Small 360° Oral Scanner Embedded Board for Image Processing (소형 360° 구강 스캐너 영상처리용 임베디드 보드 개발)

  • Ko, Tae-Young; Lee, Sun-Gu; Lee, Seung-Ho
    • Journal of IKEEE / v.22 no.4 / pp.1214-1217 / 2018
  • In this paper, we propose the development of a small 360° oral scanner embedded board. The proposed board consists of an image-level and transfer-mode changing part, an FPGA part, a memory part, and a FIFO-to-USB transfer part. The image-level and transfer-mode changing part separates the MIPI-format oral image received from the small 360° oral image sensor into a low-power signal mode and a high-speed signal mode, distributes them to ports, and level-shifts them for transfer to the FPGA part. The FPGA part performs functions such as 360° image distortion correction, image correction, image processing, and image compression. In the FIFO-to-USB transfer part, RAW data passed through the FPGA's FIFO is transferred to a PC over USB 3.0 or USB 3.1 via a transceiver chip. To evaluate the efficiency of the proposed board, it was tested by an authorized testing institute: the frame rate exceeded 60 fps and the data transfer rate was 4.99 Gb/s.
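
As a rough plausibility check on the reported figures, the bandwidth required by an uncompressed RAW stream follows directly from frame size, bit depth, and frame rate; the resolution and bit depth below are assumptions for illustration, not values from the paper:

```python
def raw_stream_bandwidth_gbps(width: int, height: int,
                              bits_per_pixel: int, fps: float) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

# Hypothetical sensor format: 1920x1080 RAW10 at 60 fps
need = raw_stream_bandwidth_gbps(1920, 1080, 10, 60)
print(f"required: {need:.2f} Gb/s vs. reported link rate 4.99 Gb/s")
# -> required: 1.24 Gb/s, well under the reported 4.99 Gb/s
```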

Fourier Domain Optical Coherence Tomography for Retinal Imaging with 800-nm Swept Source: Real-time Resampling in k-domain

  • Lee, Sang-Won; Song, Hyun-Woo; Kim, Bong-Kyu; Jung, Moon-Youn; Kim, Seung-Hwan; Cho, Jae-Du; Kim, Chang-Seok
    • Journal of the Optical Society of Korea / v.15 no.3 / pp.293-299 / 2011
  • In this study, we demonstrated Fourier-domain/swept-source optical coherence tomography (FD/SS-OCT) at a center wavelength of 800 nm for in vivo human retinal imaging. A wavelength-swept source was constructed with a semiconductor optical amplifier, a fiber Fabry-Perot tunable filter, isolators, and a fiber coupler in a ring cavity. Our swept source produced a laser output with a tuning range of 42 nm (779 to 821 nm) and an average power of 3.9 mW. The wavelength-swept speed in this configuration with bidirectionality is 2,000 axial scans per second. In addition, we suggested a modified zero-crossing method to achieve equal sample spacing in the wavenumber (k) domain and to increase the image depth range. The FD/SS-OCT system has a sensitivity of ~89.7 dB and an axial resolution of 10.4 μm in air. When a retinal image with 2,000 A-lines/frame is obtained, an acquisition speed of 2.0 fps is achieved.
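
Equal sample spacing in k is typically achieved by remapping fringe samples, acquired non-uniformly in wavenumber, onto a uniform k grid before the FFT; a minimal NumPy sketch of that resampling, in which simple linear interpolation and an assumed per-sample wavelength calibration stand in for the paper's modified zero-crossing method:

```python
import numpy as np

def resample_to_k(fringe: np.ndarray, wavelength_nm: np.ndarray) -> np.ndarray:
    """Interpolate a spectral fringe onto an equally spaced wavenumber
    grid, then FFT to obtain the depth profile (A-line)."""
    k = 2.0 * np.pi / wavelength_nm      # wavenumber of each sample
    order = np.argsort(k)                # np.interp needs ascending x
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    fringe_k = np.interp(k_uniform, k[order], fringe[order])
    return np.abs(np.fft.fft(fringe_k))[: k.size // 2]

# Example with the paper's 42 nm sweep (779-821 nm); 1,024 samples assumed
wavelengths = np.linspace(779.0, 821.0, 1024)
a_line = resample_to_k(np.random.randn(1024), wavelengths)
```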

An Experimental Study on the Frequency Characteristics of Cloud Cavitation on Naval Ship Rudder (함정용 방향타에서 발생하는 구름(cloud) 캐비테이션의 주파수 특성에 대한 실험적 연구)

  • Paik, Bu-Geun; Ahn, Jong-Woo; Jeong, Hongseok; Seol, Hanshin; Song, Jae-Yeol; Ko, Yoon-Ho
    • Journal of the Society of Naval Architects of Korea / v.58 no.3 / pp.167-174 / 2021
  • In this study, the amount and frequency characteristics of cloud cavitation formed on a naval ship rudder were investigated through a cavitation image processing technique and cavitation noise analysis. A high-speed camera with high time resolution was used to observe the cavitation on a full-spade rudder. The deflection angle of the full-spade rudder was set in the range of 8 to 15 degrees so that cloud cavitation was generated on the rudder surface. For images taken at 10⁴ fps (frames per second), reference values for detecting cavitation were defined in the Red-Green-Blue (RGB) and Hue-Saturation-Lightness (HSL) color spaces to quantitatively analyze the amount of cavitation. Intrinsic frequency characteristics of the cloud cavitation were extracted from the time series of the cavitation amount. The frequency characteristics obtained with the image processing technique matched those found by analyzing the noise signal measured by a hydrophone installed on the hull above the rudder, with the peak in the 30-60 Hz frequency band.
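
The quantification step described above, thresholding each frame in a color space and tracking the cavity area over time before a frequency analysis, can be sketched with OpenCV/NumPy as follows; the lightness threshold and video source are illustrative assumptions, since the paper's calibrated reference values are not given here:

```python
import cv2
import numpy as np

def cavitation_area_series(path: str, lightness_thresh: int = 200) -> np.ndarray:
    """Per-frame cavitation amount as the count of bright pixels in the
    Lightness channel of HLS space (vapor clouds appear nearly white)."""
    cap = cv2.VideoCapture(path)
    areas = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
        areas.append(int((hls[:, :, 1] > lightness_thresh).sum()))  # ch. 1 = L
    cap.release()
    return np.asarray(areas, dtype=float)

def dominant_frequency(areas: np.ndarray, fps: float) -> float:
    """Peak frequency of the cavity-area time series via FFT."""
    spectrum = np.abs(np.fft.rfft(areas - areas.mean()))
    freqs = np.fft.rfftfreq(areas.size, d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# e.g. dominant_frequency(cavitation_area_series("rudder.avi"), fps=1e4)
```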

Evaluation of Robustness of Deep Learning-Based Object Detection Models for Invertebrate Grazers Detection and Monitoring (조식동물 탐지 및 모니터링을 위한 딥러닝 기반 객체 탐지 모델의 강인성 평가)

  • Suho Bak; Heung-Min Kim; Tak-Young Kim; Jae-Young Lim; Seon Woong Jang
    • Korean Journal of Remote Sensing / v.39 no.3 / pp.297-309 / 2023
  • The degradation of coastal ecosystems and fishery environments is accelerating due to the recent proliferation of invertebrate grazers. To effectively monitor this phenomenon and implement preventive measures, the adoption of remote sensing-based monitoring technology for extensive maritime areas is imperative. In this study, we compared and analyzed the robustness of deep learning-based object detection models for detecting and monitoring invertebrate grazers from underwater videos. We constructed an image dataset targeting seven representative species of invertebrate grazers in the coastal waters of South Korea and trained deep learning-based object detection models, You Only Look Once (YOLO)v7 and YOLOv8, using this dataset. We evaluated the detection performance and speed of a total of six YOLO models (YOLOv7, YOLOv7x, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) and conducted robustness evaluations considering various image distortions that may occur during underwater filming. The evaluation results showed that the YOLOv8 models achieved higher detection speed (approximately 71 to 141 FPS [frames per second]) relative to their number of parameters. In terms of detection performance, the YOLOv8 models (mean average precision [mAP] 0.848 to 0.882) outperformed the YOLOv7 models (mAP 0.847 to 0.850). Regarding robustness, the YOLOv7 models were more robust to shape distortions, while the YOLOv8 models were relatively more robust to color distortions. Given that shape distortions occur less frequently in underwater video recordings while color distortions are frequent in coastal areas, YOLOv8 models are a valid choice for invertebrate grazer detection and monitoring in coastal waters.
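
A robustness evaluation of this kind re-runs validation on copies of the test set with controlled distortions applied; a minimal sketch of the two distortion families discussed above, a color (hue) shift and a geometric warp, using OpenCV, with magnitudes that are arbitrary illustrations rather than the paper's settings:

```python
import cv2
import numpy as np

def color_distort(img: np.ndarray, hue_shift: int = 10) -> np.ndarray:
    """Hue shift mimicking the green/blue color casts of coastal water."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hsv[:, :, 0] = ((hsv[:, :, 0].astype(int) + hue_shift) % 180).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

def shape_distort(img: np.ndarray, strength: float = 0.05) -> np.ndarray:
    """Radial warp mimicking lens/turbulence shape distortion."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    cx, cy = w / 2.0, h / 2.0
    r2 = ((xs - cx) / cx) ** 2 + ((ys - cy) / cy) ** 2
    map_x = cx + (xs - cx) * (1.0 + strength * r2)
    map_y = cy + (ys - cy) * (1.0 + strength * r2)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# Compare a detector's mAP on original vs. distorted copies of the test
# set to score robustness, as the paper does for YOLOv7/YOLOv8 models.
```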

The Performance Analysis of GPU-based Cloth simulation according to the Change of Work Group Configuration (워크 그룹 구성 변화에 따른 GPU 기반 천 시뮬레이션의 성능 분석)

  • Choi, Young-Hwan; Hong, Min; Lee, Seung-Hyun; Choi, Yoo-Joo
    • Journal of Internet Computing and Services / v.18 no.3 / pp.29-36 / 2017
  • These days, 3D dynamic simulation is closely related to many industries. In the past, physically-based 3D simulation was used mainly in car crash or construction-related fields, but today it also plays an important role in movies and games. Many mathematical computations are needed to represent a 3D object realistically, but it is difficult to process such a large number of calculations on a CPU in real time. Recently, with advances in graphics hardware and architecture, GPUs can be used for general-purpose computation as well as graphics, and GPU-based approaches have been applied in various research fields. In this paper, we analyze the performance variation of two GPU-based cloth simulation algorithms according to changes in the execution properties of GPU shaders, in order to optimize the performance of GPU-based cloth simulation. Cloth simulation is implemented with a spring-centric algorithm and a node-centric algorithm using GPU parallel computing with GLSL 4.3 compute shaders. We compare the performance of these algorithms as the size and dimension of the work group are varied. Each test is repeated 10 times over 5,000 frames, and results are reported as averaged FPS. The experimental results show that the node-centric algorithm runs faster than the spring-centric algorithm.
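
The node-centric decomposition parallelizes over mass points, each gathering the forces of its incident springs, which maps naturally to one compute-shader thread per node; a vectorized NumPy sketch of one explicit-Euler step under that decomposition, where the spring constant, damping, and time step are illustrative assumptions:

```python
import numpy as np

def node_centric_step(pos, vel, springs, rest_len,
                      k=50.0, dt=1e-3, damping=0.99):
    """One explicit-Euler step of a mass-spring cloth, organized
    node-centrically: forces are accumulated per node, mirroring one
    GPU thread per mass point.  pos, vel: (N, 3); springs: (M, 2) ints."""
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]                               # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke's law along each spring, guarded against zero length
    force = k * (length - rest_len[:, None]) * d / np.maximum(length, 1e-9)
    f = np.zeros_like(pos)
    np.add.at(f, i, force)      # scatter-add: node i pulled toward j
    np.add.at(f, j, -force)     # equal and opposite force on node j
    vel = damping * (vel + dt * f)                    # unit mass assumed
    return pos + dt * vel, vel
```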

Experimental Verification on the Effect of the Gap Flow Blocking Devices Attached on the Semi-Spade Rudder using Flow Visualization Technique (유동가시화를 이용한 혼-타의 간극유동 차단장치 효과에 관한 실험적 검증)

  • Shin, Kwangho; Suh, Jung-Chun; Kim, Hyochul; Ryu, Keuksang; Oh, Jungkeun
    • Journal of the Society of Naval Architects of Korea / v.50 no.5 / pp.324-333 / 2013
  • Recently, rudder erosion due to cavitation has been frequently reported on the semi-spade rudders of high-speed large ships. This problem raises economic and safety issues in ship operation. Semi-spade rudders have a gap between the horn/pintle and the movable wing part. This gap forms a discontinuous surface around which cavitation arises, leading to unresolved problems such as rudder erosion. In this study, we made a rudder model for 2-D experiments based on the NACA0020 section and manufactured gap-flow blocking devices to insert into the model's gap. To study the gap flow characteristics at various rudder deflection angles (5°, 10°, 35°) and the effect of the gap-flow blocking devices, we carried out velocity measurements using PIV (Particle Image Velocimetry) and observed cavitation with a high-speed camera in the Seoul National University cavitation tunnel. To observe the gap cavitation on a semi-spade rudder, we slowly lowered the pressure inside the cavitation tunnel until cavitation occurred near the gap, then captured it with the high-speed camera at a frame rate of 4,300 fps (frames per second). During this procedure, the cavitation numbers and inception locations were recorded, and these experimental data were compared with CFD results calculated with the commercial code Fluent. With the gap blocked by the device, the flow showed a different character from the earlier observations without it: the flow velocity increases on the suction side and decreases on the pressure side. We therefore conclude that the gap-flow blocking device produces a lift-enhancing effect, and we also observe that cavitation inception is delayed.
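
The cavitation number recorded during such tests is the standard nondimensional margin between tunnel pressure and vapor pressure; a small helper using the textbook definition, with sample values that are illustrative rather than the paper's measurements:

```python
def cavitation_number(p_pa: float, p_vapor_pa: float,
                      rho_kg_m3: float, speed_m_s: float) -> float:
    """sigma = (p - p_v) / (0.5 * rho * V^2); inception is expected
    roughly where sigma drops to the local -Cp of the flow."""
    return (p_pa - p_vapor_pa) / (0.5 * rho_kg_m3 * speed_m_s ** 2)

# Illustrative: water at 20 C (p_v ~ 2,340 Pa), 6 m/s, tunnel at 40 kPa
print(f"sigma = {cavitation_number(40_000.0, 2_340.0, 998.0, 6.0):.2f}")
# -> sigma = 2.10; lowering tunnel pressure reduces sigma until cavitation
```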

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun; Jun, Changhyun; Kim, Hyeon-Joon; Lee, Jae Joon; Park, Hunil; Lee, Jinwook
    • Journal of Korea Water Resources Association / v.56 no.6 / pp.403-417 / 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was constructed in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that are highly irregular and variable in real environments. A total of 1,728 scenarios were designed under five experimental conditions, from which 36 scenarios totaling 97,200 frames were selected. Rain streaks were extracted using the k-nearest neighbor algorithm by calculating the difference between each image and the background. To prevent overfitting, only data whose pixel values exceeded a set threshold relative to the per-image average pixel value were selected. The area of maximum pixel variability, found by shifting a window in 10-pixel steps, was set as the representative 180×180 region of the original image. After resizing to 120×120 as input for a convolutional neural network model, image augmentation was performed under unified shooting conditions. 92% of the data fell within an absolute PBIAS of 10%. The final results of this study have the potential to enhance the accuracy and efficacy of existing real-world CCTV systems via transfer learning.
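
The preprocessing pipeline described above, background subtraction with a k-nearest-neighbor model followed by a search for the most variable 180×180 window in 10-pixel steps, can be sketched with OpenCV/NumPy as follows; the window and input sizes match the paper, while the subtractor parameters and variability measure are assumptions:

```python
import cv2
import numpy as np

# OpenCV's KNN background subtractor separates moving rain streaks from
# the static background, analogous to the paper's difference-based step.
subtractor = cv2.createBackgroundSubtractorKNN(detectShadows=False)

def rain_streak_mask(frame: np.ndarray) -> np.ndarray:
    """Foreground (rain-streak) mask for one video frame."""
    return subtractor.apply(frame)

def representative_region(variability: np.ndarray,
                          win: int = 180, step: int = 10) -> tuple:
    """Top-left corner of the win x win window with the largest summed
    variability, searched in `step`-pixel shifts; `variability` is a
    per-pixel score such as the temporal std of the streak masks."""
    best, best_yx = -1.0, (0, 0)
    h, w = variability.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            s = variability[y:y + win, x:x + win].sum()
            if s > best:
                best, best_yx = s, (y, x)
    return best_yx

# Crop the 180x180 region, then resize to the 120x120 CNN input:
# y, x = representative_region(var_map)
# inp = cv2.resize(frame[y:y+180, x:x+180], (120, 120))
```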