• Title/Summary/Keyword: Frames per second


Performance-based structural fire design of steel frames using conventional computer software

  • Chan, Y.K.;Iu, C.K.;Chan, S.L.;Albermani, F.G.
    • Steel and Composite Structures
    • /
    • v.10 no.3
    • /
    • pp.207-222
    • /
    • 2010
  • Fire incidents in buildings are common, so the fire safety design of framed structures is imperative, especially for unprotected or partly protected bare steel frames. However, software for structural fire analysis is not widely available. Performance-based structural fire design is therefore advocated on the basis of user-friendly, conventional nonlinear computer analysis programs, so that engineers need not acquire new structural analysis software for structural fire analysis and design. The tool should be capable of efficiently simulating different fire scenarios and the associated detrimental effects, including second-order P-Δ and P-δ effects and material yielding. Moreover, the nonlinear behaviour of large-scale structures becomes complicated under fire, so its simulation relies on an efficient and effective numerical analysis to cope with the intricate nonlinear effects due to fire. To this end, the present study uses the second-order elastic/plastic analysis software NIDA to predict the behaviour of bare steel framed structures at elevated temperatures. The study considers thermal expansion and material degradation due to heating. Degradation of material strength with increasing temperature is included through a set of temperature-stress-strain curves, mainly according to BS 5950 Part 8, which implicitly allow for creep deformation. The finite element stiffness formulation of the beam-column elements is derived from the fifth-order PEP element, which facilitates computer modeling with one element per member. The Newton-Raphson method is used in the nonlinear solution procedure to trace the nonlinear equilibrium path at specified elevated temperatures. Several numerical and experimental verifications of framed structures are presented and compared against solutions in the literature. The proposed method permits engineers to adopt performance-based structural fire analysis and design using typical second-order nonlinear structural analysis software.
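As an illustration of the solution procedure described above, the sketch below traces one equilibrium point by Newton-Raphson iteration on a single-degree-of-freedom member whose stiffness degrades with temperature. The softening law and load values are invented for illustration only; they are not NIDA's formulation or the BS 5950 curves.

```python
# Minimal sketch of a Newton-Raphson solve for a 1-DOF nonlinear
# "member" whose stiffness degrades with temperature, illustrating how
# an equilibrium point is found at a fixed elevated temperature.
# Stiffness law, degradation rule, and load are illustrative only.

def internal_force(u, temp):
    """Nonlinear resisting force; stiffness degrades linearly with temperature."""
    k0 = 100.0 * max(0.1, 1.0 - temp / 1000.0)  # degraded elastic stiffness
    return k0 * u + 5.0 * u**3                  # cubic hardening term

def tangent_stiffness(u, temp):
    """Derivative of internal_force with respect to u."""
    k0 = 100.0 * max(0.1, 1.0 - temp / 1000.0)
    return k0 + 15.0 * u**2

def newton_raphson(load, temp, tol=1e-10, max_iter=50):
    """Iterate u until the residual (load - internal force) vanishes."""
    u = 0.0
    for _ in range(max_iter):
        residual = load - internal_force(u, temp)
        if abs(residual) < tol:
            return u
        u += residual / tangent_stiffness(u, temp)
    raise RuntimeError("Newton-Raphson did not converge")

# Equilibrium displacements at 20 C and 600 C under the same load:
u_cold = newton_raphson(50.0, 20.0)
u_hot = newton_raphson(50.0, 600.0)
assert u_hot > u_cold  # the heated member deflects more
```

In the full structural problem the scalars become the assembled residual vector and tangent stiffness matrix, but the iteration loop has the same shape.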

Design of High-Performance Motion Estimation Circuit for H.264/AVC Video CODEC (H.264/AVC 동영상 코덱용 고성능 움직임 추정 회로 설계)

  • Lee, Seon-Young;Cho, Kyeong-Soon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.7
    • /
    • pp.53-60
    • /
    • 2009
  • Motion estimation for H.264/AVC video CODEC is very complex and requires a huge amount of computation because it uses multiple reference frames and variable block sizes. We propose the architecture of a high-performance integer-pixel motion estimation circuit based on fast algorithms for multiple reference frame selection, block matching, block mode decision and motion vector estimation. We also propose the architecture of a high-performance interpolation circuit for sub-pixel motion estimation. We described the RTL circuit in Verilog HDL and synthesized the gate-level circuit using a 130 nm standard cell library. The integer-pixel motion estimation circuit consists of 77,600 logic gates and four 32×8×32-bit dual-port SRAMs. It has a maximum operating frequency of 161 MHz and can process up to 51 D1 (720×480) color image frames per second. The fractional motion estimation circuit consists of 22,478 logic gates. It has a maximum operating frequency of 200 MHz and can process up to 69 1080HD (1,920×1,088) color image frames per second.
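The core operation such a circuit accelerates is block matching by sum of absolute differences (SAD). The toy sketch below recovers an integer-pixel motion vector by exhaustive search; the block size and search range are illustrative, and the paper's circuit uses fast algorithms and variable block sizes rather than this brute-force version.

```python
import numpy as np

# Toy integer-pixel block matching by sum of absolute differences (SAD).
# Block size and search range are illustrative, not the paper's values.

def sad(block_a, block_b):
    """Sum of absolute pixel differences between two equal-sized blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def full_search(cur, ref, bx, by, bsize=8, srange=4):
    """Return the motion vector (dx, dy) minimizing SAD within +/- srange."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] \
                    and x + bsize <= ref.shape[1]:
                cost = sad(block, ref[y:y + bsize, x:x + bsize])
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best, best_cost

# A block shifted by (dx=2, dy=1) between frames is recovered exactly:
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.zeros_like(ref)
cur[9:17, 10:18] = ref[10:18, 12:20]   # content moved by dx=2, dy=1
mv, cost = full_search(cur, ref, 10, 9)
assert mv == (2, 1) and cost == 0
```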

Analysis of the Relationships according to the Frame (f/s) Change of Cine Imaging in Coronary Angiographic System: With Focus on FOV Enlargement and Live Zoom (심장 혈관 조영장치에서의 프레임 레이트(f/s) 변화에 따른 상관 관계 분석 : FOV 확대와 Live Zoom을 중점으로)

  • Kim, Won Hyo;Song, Jong-Nam;Han, Jae-Bok
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.7
    • /
    • pp.845-852
    • /
    • 2018
  • This study aimed to investigate the difference in X-ray exposure by comparing and analyzing the absorbed dose according to changes in the number of frames in coronary angiography, as well as according to whether the zoom mode is FOV enlargement or Live Zoom. Moreover, to establish appropriate frame-selection measures for examination, the effect of frame change on image quality was assessed by measuring the noise strength expressed as the standard deviation (SD), the signal-to-noise ratio (SNR), and the contrast-to-noise ratio (CNR). The study was conducted with an anthropomorphic phantom on an angio-system. The linear relationship between the frame rate and the radiation dose was evident. On the contrary, the indices of image quality (SD, SNR, and CNR) were almost constant irrespective of the number of frames. The difference depending on the zoom mode was not statistically significant for DAP, air kerma, and SD (p > 0.05). However, SNR and CNR were statistically different between FOV enlargement and Live Zoom. In conclusion, since the image quality was not degraded significantly as the frame rate decreased from 30 to 15 to 7.5 f/s, and the radiation dose decreased in almost exactly linear proportion to the decreasing frame rate, the number of frames per second should be kept as low as reasonably achievable. As for the dependence on the zoom mode, Live Zoom showed statistically significant improvement in the image quality indices of SNR and CNR, which justifies active use of the Live Zoom mode, as it enables real-time image enlargement without additional radiation dose.
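The three image-quality indices named above have standard ROI-based definitions, sketched below; the ROI placement on the phantom images is an assumption here, not the paper's stated protocol.

```python
import numpy as np

# Common ROI-based definitions of the image-quality indices used in the
# study: SD of a uniform background region as noise strength, and SNR/CNR
# formed from the signal and background ROI means. ROI choice is assumed.

def quality_indices(signal_roi, background_roi):
    """Return (SD, SNR, CNR) from a signal ROI and a uniform background ROI."""
    sd = float(np.std(background_roi))                     # noise strength
    snr = float(np.mean(signal_roi)) / sd                  # signal-to-noise
    cnr = (float(np.mean(signal_roi))
           - float(np.mean(background_roi))) / sd          # contrast-to-noise
    return sd, snr, cnr

# Synthetic ROIs: a bright "vessel" region over a dimmer uniform background.
rng = np.random.default_rng(1)
signal = rng.normal(120.0, 5.0, (64, 64))
background = rng.normal(80.0, 5.0, (64, 64))
sd, snr, cnr = quality_indices(signal, background)
assert snr > cnr > 0
```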

Implementation of Real-Time Video Transfer System on Android Environment (안드로이드 기반의 실시간 영상전송시스템의 구현)

  • Lee, Kang-Hun;Kim, Dong-Il;Kim, Dae-Ho;Sung, Myung-Yoon;Lee, Young-Kil;Jung, Suk-Yong
    • Journal of the Korea Convergence Society
    • /
    • v.3 no.1
    • /
    • pp.1-5
    • /
    • 2012
  • In this paper, we developed a real-time video transfer system based on the Android environment. After an Android device with an embedded camera captures images, it sends the image frames to a video server system. The video server then transfers the images from the client to a peer client, which is also implemented on the Android environment. We can send 16 image frames per second without any loss in a 3G mobile network environment.
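The paper does not specify its wire format, but any such relay needs a way to delimit frames inside a byte stream. One plausible scheme, sketched here as an assumption, is a 4-byte length prefix per encoded frame.

```python
import struct

# Length-prefixed framing: one plausible (assumed) wire format for pushing
# encoded frames from client to relay server over a TCP-like byte stream.

def pack_frame(jpeg_bytes: bytes) -> bytes:
    """Prefix a frame with its 4-byte big-endian length."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frames(stream: bytes):
    """Split a concatenated byte stream back into individual frames."""
    frames, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        frames.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return frames

# Round trip: three frames survive concatenation into one stream.
sent = [b"frame-one", b"frame-two", b"frame-three"]
stream = b"".join(pack_frame(f) for f in sent)
assert unpack_frames(stream) == sent
```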

FPGA-DSP Based Implementation of Lane and Vehicle Detection (FPGA와 DSP를 이용한 실시간 차선 및 차량인식 시스템 구현)

  • Kim, Il-Ho;Kim, Gyeong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.12C
    • /
    • pp.727-737
    • /
    • 2011
  • This paper presents an implementation scheme for a real-time lane and vehicle detection system using an FPGA and a DSP. In this type of implementation, defining the functionality of each device in an efficient manner is of crucial importance. The FPGA is in charge of extracting features from the input image sequences in reduced form, and the features are provided to the DSP so that lane and vehicle tracking can be performed based on them. In addition, a way of seamlessly interconnecting the two devices is presented. The experimental results show that the system is able to process at least 15 frames per second for video image sequences with a size of 640×480.
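The point of the FPGA/DSP split is data reduction: the front end condenses each frame to a sparse feature set so the tracker touches only a fraction of the pixels. The sketch below shows that idea with a simple horizontal-gradient feature extractor; the gradient threshold and feature type are illustrative, not the paper's actual front end.

```python
import numpy as np

# Illustrative front-end data reduction: condense a frame to sparse edge
# features (lane boundaries are near-vertical, so a horizontal gradient
# responds strongly to them). Threshold and feature choice are assumptions.

def edge_features(gray, threshold=60):
    """Return (x, y) positions where the horizontal gradient is strong."""
    gx = np.abs(np.diff(gray.astype(int), axis=1))
    ys, xs = np.nonzero(gx > threshold)
    return list(zip(xs.tolist(), ys.tolist()))

# A frame with one vertical intensity edge at column 3/4 boundary:
frame = np.full((8, 8), 20, dtype=np.uint8)
frame[:, 4:] = 200
feats = edge_features(frame)
assert len(feats) == 8                  # one feature per row, not 64 pixels
assert all(x == 3 for x, _ in feats)    # all on the edge column
```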

Real-time Stabilization Method for Video acquired by Unmanned Aerial Vehicle (무인 항공기 촬영 동영상을 위한 실시간 안정화 기법)

  • Cho, Hyun-Tae;Bae, Hyo-Chul;Kim, Min-Uk;Yoon, Kyoungro
    • Journal of the Semiconductor & Display Technology
    • /
    • v.13 no.1
    • /
    • pp.27-33
    • /
    • 2014
  • Video from an unmanned aerial vehicle (UAV) is affected by the natural environment, particularly wind, because of the UAV's light weight; the UAV's shaking thus makes the video shake. The objective of this paper is to produce a stabilized video by removing the shakiness of video acquired by a UAV. The stabilizer estimates the camera's motion by calculating the optical flow between two successive frames. The estimated camera movements include intended movements as well as unintended shaking movements; the unintended movements are eliminated by a smoothing process. Experimental results showed that the proposed method performs almost as well as other offline stabilizers. However, the estimation of the camera's movements, i.e., the calculation of optical flow, becomes a bottleneck for real-time stabilization. To solve this problem, we parallelize the stabilizer, producing stabilized video at an average of 30 frames per second. The proposed method can be used for video acquired by a UAV as well as for shaky video from non-professional users, and also in any other field that requires object tracking or accurate image analysis/representation.
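The smoothing step can be sketched as a moving-average low-pass filter over the camera trajectory, with the stabilizing correction being the difference between the raw and smoothed paths. The window size is illustrative and the paper's exact smoothing method is not specified here.

```python
# Sketch of trajectory smoothing for stabilization: low-pass the estimated
# camera path with a moving average; the per-frame warp correction is the
# gap between raw and smoothed paths. Window size is an assumption.

def smooth_trajectory(motions, window=5):
    """Moving-average smoothing of a 1-D cumulative camera trajectory."""
    half = window // 2
    smoothed = []
    for i in range(len(motions)):
        lo, hi = max(0, i - half), min(len(motions), i + half + 1)
        smoothed.append(sum(motions[lo:hi]) / (hi - lo))
    return smoothed

def roughness(path):
    """Sum of squared second differences: lower means smoother motion."""
    return sum((path[i + 1] - 2 * path[i] + path[i - 1]) ** 2
               for i in range(1, len(path) - 1))

# Intended pan of +1 px/frame with high-frequency shake superimposed:
shake = [1.0, -2.0, 4.0, 0.0, 2.0, -1.0, 3.0, 1.0]
traj = [i + s for i, s in enumerate(shake)]
smoothed = smooth_trajectory(traj)
corrections = [s - t for s, t in zip(smoothed, traj)]  # per-frame warp offsets
assert roughness(smoothed) < roughness(traj)            # visibly less jitter
```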

Curved-quartic-function elements with end-springs in series for direct analysis of steel frames

  • Liu, Si-Wei;Chan, Jake Lok Yan;Bai, Rui;Chan, Siu-Lai
    • Steel and Composite Structures
    • /
    • v.29 no.5
    • /
    • pp.623-633
    • /
    • 2018
  • A robust element is essential for the successful design of steel frames with the direct analysis (DA) method. To this end, an innovative and efficient curved-quartic-function (CQF) beam-column element using a fourth-order polynomial shape function with end-springs in series is proposed for practical applications of DA. The member initial imperfection is explicitly integrated into the element formulation, and, therefore, the P-δ effect can be directly captured in the analysis. The zero-length springs placed in series at the element ends model the effects of semi-rigid joints and material yielding. A one-element-per-member model is adopted for design, bringing considerable savings in computational expense. The incremental secant stiffness method, allowing for large deflections, is used to describe the kinematic motion. Finally, several problems are studied to examine and validate the accuracy of the present formulation. The proposed element is believed to make DA simpler to use than existing elements, which is essential for its successful and widespread adoption by engineers.
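The ingredients above can be sketched in general form; the half-sine imperfection shape shown here is a common modeling assumption, not necessarily the paper's exact CQF formulation:

```latex
v(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4,
\qquad
v_0(x) = \delta_0 \sin\frac{\pi x}{L},
```

with the total lateral deflection $v(x) + v_0(x)$ entering the equilibrium equations, so that the $P\text{-}\delta$ moment $P\,(v + v_0)$ is captured within a single element per member.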

Design and Implementation of Real-time High Performance Face Detection Engine (고성능 실시간 얼굴 검출 엔진의 설계 및 구현)

  • Han, Dong-Il;Cho, Hyun-Jong;Choi, Jong-Ho;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.2
    • /
    • pp.33-44
    • /
    • 2010
  • This paper proposes a real-time face detection hardware architecture for robot vision processing applications. The proposed architecture is robust against illumination changes and operates at no less than 60 frames per second. It uses the Modified Census Transform to obtain face characteristics that are robust against illumination changes, and the AdaBoost algorithm is adopted to learn and generate the characteristics of the face data, which are then used to detect faces. The paper describes the face detection hardware, composed of a Memory Interface, Image Scaler, MCT Generator, Candidate Detector, Confidence Comparator, Position Resizer, Data Grouper, and Detected Result Display, and reports the verification results of the hardware implementation using a Xilinx Virtex5 LX330 FPGA. Verification with images from a camera showed that up to 32 faces per frame can be detected at a speed of up to 149 frames per second.
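The Modified Census Transform (MCT) underlying the MCT Generator compares each pixel of a 3×3 neighborhood against the neighborhood mean, producing a 9-bit pattern that depends only on local intensity structure; that is the source of the illumination robustness mentioned above. A reference sketch (not the hardware pipeline):

```python
import numpy as np

# Modified Census Transform: each 3x3 neighborhood is compared against its
# own mean, yielding a 9-bit code per pixel that is invariant to global
# affine brightness/contrast changes.

def mct(gray):
    """Return the MCT code image (uint16, 9 significant bits)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2].astype(float)
            bits = (patch > patch.mean()).flatten()
            out[y - 1, x - 1] = sum(int(b) << i for i, b in enumerate(bits))
    return out

# The MCT codes are unchanged under a global brightness/contrast shift:
img = np.arange(25, dtype=np.uint8).reshape(5, 5) * 3
brighter = (img.astype(int) * 2 + 40).astype(np.uint8)
assert np.array_equal(mct(img), mct(brighter))
```

The hardware version evaluates all neighborhoods in parallel rather than looping, but computes the same codes.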

A Trial Toward Marine Watch System by Image Processing

  • Shimpo, Masatoshi;Hirasawa, Masato;Ishida, Keiichi;Oshima, Masaki
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • v.1
    • /
    • pp.41-46
    • /
    • 2006
  • This paper describes a marine watch system on a ship, aided by an image processing method. The system detects other ships in a navigational image sequence to prevent oversights, and it measures their bearings to monitor their movements. The proposed method is described, the techniques for detection and bearing measurement are derived, and the results are reported. The image is divided into small regions on the basis of the brightness value and then labeled. Each region is treated as a template, and a template is assumed to be a ship. The template is then compared with frames in the original image after a selected time. A moving vector of the regions is calculated using an Excel table, and ships are detected using the characteristics of the moving vector. The video camera captures 30 frames per second. We segmented one frame into approximately 5,000 regions; from these, approximately 100 regions were presumed to be ships and treated as templates. Each template was compared with frames captured 0.33 s or 0.66 s later. To improve the accuracy, this interval was changed on the basis of the magnification of the video camera. Ships' bearings also need to be determined. The proposed method can measure the ships' bearings on the basis of three parameters: (1) the course of the own ship, (2) the arrangement between the camera and the hull, and (3) the image coordinates of the detected ships. The course of the own ship can be obtained from a gyrocompass. The camera axis is calibrated along a particular direction using a stable position on the bridge. The field of view of the video camera is measured from the size of a known structure on the hull in the image. Thus, ships' bearings can be calculated from these parameters.
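The bearing computation from the three parameters listed above can be sketched with a simple linear pixel-to-angle model; the linearity and the numbers used here are illustrative assumptions, not the paper's calibration.

```python
# Sketch of the bearing calculation: own-ship course (gyrocompass) plus
# camera-axis offset from the bow plus the in-frame angle of the detected
# ship. The linear pixel-to-angle model is an assumption.

def ship_bearing(course_deg, camera_offset_deg, px, image_width, fov_deg):
    """True bearing of a target detected at horizontal pixel position px."""
    angle_in_frame = (px - image_width / 2) * (fov_deg / image_width)
    return (course_deg + camera_offset_deg + angle_in_frame) % 360.0

# Own ship heading 045, camera aligned with the bow, target detected a
# quarter-frame right of centre with a 40-degree field of view:
bearing = ship_bearing(45.0, 0.0, 480, 640, 40.0)
assert abs(bearing - 55.0) < 1e-9
```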


Fundamental Study on Algorithm Development for Prediction of Smoke Spread Distance Based on Deep Learning (딥러닝 기반의 연기 확산거리 예측을 위한 알고리즘 개발 기초연구)

  • Kim, Byeol;Hwang, Kwang-Il
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.1
    • /
    • pp.22-28
    • /
    • 2021
  • This is a basic study on the development of deep learning-based algorithms to detect smoke before a smoke detector operates in the event of a ship fire, to analyze and utilize the detected data, and to support fire suppression and evacuation activities by predicting the spread of smoke before it reaches remote areas. The proposed algorithms were reviewed according to the following procedure. As a first step, smoke images obtained through fire simulation were applied to the YOLO (You Only Look Once) model, a deep learning-based object detection algorithm. The mean average precision (mAP) of the trained YOLO model was measured as 98.71%, and smoke was detected at a processing speed of 9 frames per second (FPS). The second step was to estimate the spread of smoke using the coordinates of the bounding boxes output by YOLO, from which the smoke geometry was extracted. This smoke geometry was then applied to a time-series prediction algorithm, long short-term memory (LSTM). Smoke-spread data obtained from the bounding-box coordinates between the estimated fire occurrence and 30 s were entered into the LSTM model to predict the smoke-spread data from 31 s to 90 s in the smoke images of a fast fire obtained from the fire simulation. The root mean square error between the estimated spread of smoke and its predicted value was 2.74.
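The data preparation implied by the two-step pipeline can be sketched as a windowing operation: distances extracted from the bounding boxes over the first 30 s form the input sequence, and the model is trained to emit the following 60 s. The 1 Hz sampling here is an assumption; the LSTM itself would consume these (input, target) pairs.

```python
# Sketch of the seq2seq windowing implied above: 30 s of smoke-spread
# distances in, 60 s of distances out. Sampling rate (1 Hz) is assumed;
# an LSTM model would be trained on the (x, y) pairs produced here.

def make_window(series, in_len=30, out_len=60):
    """Split a distance time series into one (input, target) training pair."""
    assert len(series) >= in_len + out_len
    return series[:in_len], series[in_len:in_len + out_len]

# Toy distances (m) derived from bounding-box extents, one per second,
# for a linearly spreading smoke front:
distances = [0.1 * t for t in range(90)]
x, y = make_window(distances)
assert len(x) == 30 and len(y) == 60
assert y[0] > x[-1]   # the target continues the spread beyond the input
```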