• Title/Summary/Keyword: Time Domain Features

Seismic response analysis of steel frames with post-Northridge connection

  • Mehrabian, Ali;Haldar, Achintya;Reyes-Salazar, Alfredo
    • Steel and Composite Structures, v.5 no.4, pp.271-287, 2005
  • The seismic behavior of two steel moment-resisting frames, both of which satisfy all current seismic design requirements, is evaluated and compared in the presence of pre-Northridge connections, denoted BWWF, and improved post-Northridge connections, denoted BWWF-AD. The pre-Northridge connections are first modeled as fully restrained (FR); they are then considered partially restrained (PR) to represent their behavior more realistically. The improved post-Northridge connections are modeled as PR, as proposed by the authors. A sophisticated nonlinear time-domain finite element program developed by the authors is used to evaluate the response of the frames in terms of the overall rotation of the connections and the maximum drift. The frames are excited by ten recorded earthquake time histories, which are then scaled up to produce the relevant response characteristics. The behavior of the frames is studied comprehensively with the help of 120 analyses, leading to the following important observations. The frames produce essentially similar rotations and drifts whether the connections are modeled as FR or as PR represented by BWWF-AD, indicating that the slots in the beam webs of BWWF-AD are not detrimental to the overall response behavior. When the lateral displacements of the frames are significantly large, the responses improve if BWWF-AD connections are used. This study analytically confirms many desirable features of BWWF-AD connections. PR frames have longer periods of vibration than FR frames and may attract lower inertia forces. However, the periods of the frames in this study calculated with the FEMA 350 empirical equation are longer than those calculated from the dynamic characteristics of the frames, which may result in even lower design forces and may adversely influence the design.
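
The kind of nonlinear time-domain analysis the authors perform requires step-by-step integration of the equations of motion under a recorded accelerogram. As a minimal illustration (not the authors' finite element program), the sketch below integrates a linear single-degree-of-freedom oscillator with the average-acceleration Newmark method; the mass, damping ratio, and synthetic ground motion are assumptions.

```python
# Minimal time-domain seismic response sketch: average-acceleration Newmark
# integration of a linear SDOF oscillator under ground motion. NOT the
# authors' nonlinear FE program; all parameter values are illustrative.
import numpy as np

def newmark_sdof(ag, dt, m=1.0, zeta=0.05, omega=2 * np.pi):
    """Return displacement history u(t) for ground acceleration ag (m/s^2)."""
    k = m * omega**2                     # stiffness from natural frequency
    c = 2 * zeta * m * omega             # viscous damping
    beta, gamma = 0.25, 0.5              # average-acceleration scheme
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = -ag[0]                        # equilibrium at rest
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        # effective incremental load (Chopra's incremental formulation)
        dp = (-m * (ag[i + 1] - ag[i])
              + m * (v[i] / (beta * dt) + a[i] / (2 * beta))
              + c * (gamma * v[i] / beta
                     + dt * a[i] * (gamma / (2 * beta) - 1)))
        du = dp / keff
        dv = (gamma * du / (beta * dt) - gamma * v[i] / beta
              + dt * a[i] * (1 - gamma / (2 * beta)))
        u[i + 1] = u[i] + du
        v[i + 1] = v[i] + dv
        a[i + 1] = (-m * ag[i + 1] - c * v[i + 1] - k * u[i + 1]) / m
    return u

# usage: 20 s of synthetic 1.5 Hz ground motion sampled at 100 Hz
dt = 0.01
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * np.arange(0, 20, dt))
drift = newmark_sdof(ag, dt)
print(f"peak displacement: {np.abs(drift).max():.4f} m")
```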

Travel Times of Radionuclides Released from Hypothetical Multiple Source Positions in the KURT Site (KURT 환경 자료를 이용한 가상의 다중 발생원에서의 누출 핵종의 이동 시간 평가)

  • Ko, Nak-Youl;Jeong, Jongtae;Kim, Kyung Su;Hwang, Youngtaek
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT), v.11 no.4, pp.281-291, 2013
  • A hypothetical repository was assumed to be located at the KURT (KAERI Underground Research Tunnel) site, and the travel times of radionuclides released from three source positions were calculated. The groundwater flow around the KURT site was simulated, and the groundwater pathways from the hypothetical source positions to the shallow groundwater were identified. Of these, three pathways were selected because they had highly water-conductive features. The transport travel times of the radionuclides were calculated by a TDRW (Time-Domain Random Walk) method, considering diffusion and sorption mechanisms in the host rock matrix as well as advection-dispersion mechanisms under the KURT field conditions. To reflect radioactive decay, four decay chains of radionuclides included in high-level radioactive wastes were selected. The simulation results show that the half-life and the distribution coefficient in the rock matrix, as well as the multiple pathways, influence the mass flux of the radionuclides. This reveals that, to enhance the reliability of safety assessment, identifying the history of the radionuclides contained in the high-level wastes and investigating the sorption processes between the radionuclides and the rock matrix under field conditions should be addressed first.
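
The TDRW idea can be sketched in a few lines: a particle advances along a pathway in fixed spatial hops, and the transit time of each hop is drawn from the first-passage-time distribution of 1D advection-dispersion (an inverse Gaussian), with retardation and decay applied on top. The parameter values below are illustrative placeholders, not KURT site data.

```python
# Minimal 1D Time-Domain Random Walk (TDRW) sketch. Each particle makes
# fixed-length hops; the transit time per hop is inverse-Gaussian (Wald)
# distributed, the first-passage-time law for 1D advection-dispersion.
# Retardation and decay are crude placeholders; all values are assumed.
import numpy as np

rng = np.random.default_rng(0)

L = 500.0        # pathway length (m), assumed
dx = 5.0         # hop length (m)
v = 10.0         # advective velocity (m/yr), assumed
D = 50.0         # longitudinal dispersion coefficient (m^2/yr), assumed
R = 4.0          # retardation factor from linear sorption, assumed
half_life = 1.0e4                  # yr, illustrative nuclide
lam = np.log(2.0) / half_life      # decay constant

n_particles = 100_000
n_hops = int(L / dx)

# transit time per hop ~ Wald(mean = dx/v, shape = dx^2 / (2D)), retarded by R
hop_times = rng.wald(dx / v, dx**2 / (2.0 * D), size=(n_particles, n_hops))
travel_time = R * hop_times.sum(axis=1)     # total travel time per particle
surviving = np.exp(-lam * travel_time)      # decay weight at arrival

print(f"median travel time : {np.median(travel_time):,.0f} yr")
print(f"surviving fraction : {surviving.mean():.3f}")
```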

Real Time Environmental Classification Algorithm Using Neural Network for Hearing Aids (인공 신경망을 이용한 보청기용 실시간 환경분류 알고리즘)

  • Seo, Sangwan;Yook, Sunhyun;Nam, Kyoung Won;Han, Jonghee;Kwon, See Youn;Hong, Sung Hwa;Kim, Dongwook;Lee, Sangmin;Jang, Dong Pyo;Kim, In Young
    • Journal of Biomedical Engineering Research, v.34 no.1, pp.8-13, 2013
  • Persons with sensorineural hearing impairment have trouble hearing in noisy environments because of their deteriorated hearing levels and the low spectral resolution of their auditory system; they therefore use hearing aids to compensate for weakened hearing abilities. Various algorithms for hearing loss compensation and environmental noise reduction have been implemented in hearing aids; however, the performance of these algorithms varies with the external sound situation, so it is important to tune the operation of the hearing aid appropriately to a wide variety of sound situations. In this study, a sound classification algorithm that can be applied to hearing aids is proposed. The algorithm classifies sound situations into four categories: 1) speech-only, 2) noise-only, 3) speech-in-noise, and 4) music-only. It consists of two sub-parts: a feature extractor and a sound situation classifier. The former extracts seven characteristic features - short-time energy and zero-crossing rate in the time domain; spectral centroid, spectral flux, and spectral roll-off in the frequency domain; and mel-frequency cepstral coefficients and mel-band power values - from the recent input signals of two microphones, and the latter classifies the current sound situation. The experimental results showed that the proposed algorithm could classify the sound situations with an accuracy of over 94.4%. Based on these results, we believe the proposed algorithm can be applied to hearing aids to improve speech intelligibility in noisy environments.
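
Most of the features listed above take only a few lines of NumPy each. The sketch below computes the time- and frequency-domain members of the feature set per frame (MFCCs and mel-band powers are omitted for brevity); the frame length, hop size, and 85% roll-off threshold are common defaults, not values from the paper.

```python
# Per-frame extraction of short-time energy, zero-crossing rate, spectral
# centroid, spectral roll-off, and spectral flux. Frame/hop sizes and the
# roll-off threshold are common defaults, not the paper's settings.
import numpy as np

def frame_features(x, sr=16000, frame=512, hop=256):
    feats = []
    prev_mag = None
    for start in range(0, len(x) - frame, hop):
        w = x[start:start + frame] * np.hanning(frame)
        energy = np.mean(w**2)                              # short-time energy
        zcr = np.mean(np.abs(np.diff(np.sign(w))) > 0)      # zero-crossing rate
        mag = np.abs(np.fft.rfft(w))
        freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        cum = np.cumsum(mag)
        rolloff = freqs[np.searchsorted(cum, 0.85 * cum[-1])]
        flux = 0.0 if prev_mag is None else np.sum((mag - prev_mag) ** 2)
        prev_mag = mag
        feats.append([energy, zcr, centroid, rolloff, flux])
    return np.array(feats)

# usage: 1 s of noise-like test signal
x = np.random.default_rng(0).standard_normal(16000)
print(frame_features(x).shape)   # (n_frames, 5)
```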

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.205-225, 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, Krizhevsky made a breakthrough in the 2012 ILSVRC visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains, gathering a large-scale dataset to train a ConvNet is difficult and takes a great deal of effort, and even when a large-scale dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes the feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of an image, which means that a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, because it carries more information about the image; concatenating the three fully connected layer features yields a representation with 9,192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. In the third step, therefore, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimensionality reduction. The experiments demonstrate the importance of feature selection for the multiple-layer representation. Moreover, the proposed approach achieves 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. The proposed approach also outperforms existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
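
The fixed-feature-extractor pipeline described above can be sketched with torchvision's pre-trained AlexNet: forward hooks collect the FC6/FC7/FC8 activations, which are concatenated into the 9,192-dimensional representation and reduced with PCA. The number of principal components (512) is an assumption, not the paper's setting.

```python
# Sketch of the multi-layer fixed-feature-extractor pipeline: hook the three
# fully connected layers of a pre-trained AlexNet, concatenate their
# activations (4096 + 4096 + 1000 = 9192 dims), then reduce with PCA.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

acts = {}
def grab(name):
    def hook(module, inputs, output):
        acts[name] = output.detach()
    return hook

# classifier indices 1, 4, 6 are the Linear layers FC6, FC7, FC8 in AlexNet
for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    model.classifier[idx].register_forward_hook(grab(name))

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def multi_layer_features(images):
    """images: list of PIL images -> (N, 9192) concatenated features."""
    batch = torch.stack([preprocess(im) for im in images])
    with torch.no_grad():
        model(batch)
    return torch.cat([acts["fc6"], acts["fc7"], acts["fc8"]], dim=1).numpy()

# usage, after extracting features for the whole training set:
#   feats = multi_layer_features(train_images)          # (N, 9192)
#   salient = PCA(n_components=512).fit_transform(feats)  # 512 is assumed
# a linear classifier (e.g., an SVM) is then trained on `salient`.
```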

An Intelligent Framework for Test Case Prioritization Using Evolutionary Algorithm

  • Dobuneh, Mojtaba Raeisi Nejad;Jawawi, Dayang N.A.
    • Journal of Internet Computing and Services, v.17 no.5, pp.89-95, 2016
  • In the software testing domain, test case prioritization techniques improve the performance of regression testing by arranging test cases so that the maximum number of available faults is detected in a shorter time. User sessions and cookies are unique features of web applications that are useful in regression testing because they carry precious information about the application state before and after changes to the software code; this approach is, in fact, a user-session-based technique. User sessions are collected from the database on the server side, and test cases are generated from small configuration changes of the user session data. The main challenges with existing techniques are the effectiveness of the Average Percentage of Faults Detected (APFD) rate and the time constraint, so this paper develops an intelligent framework with three new techniques that manage and group test cases by applying useful criteria for test case prioritization in web application regression testing. In the dynamic weighting approach, hybrid criteria that assign an initial weight to each criterion determine the optimal combination weights through evolutionary algorithms; the weight of each criterion is based on its effectiveness at finding faults in the application. In this research, priority is given to test cases based on the most common HTTP requests in pages, the length of HTTP request chains, and the dependency of HTTP requests. To verify the new techniques, faults were seeded in the subject application, and the prioritization criteria were then applied to the test cases to compare the effectiveness of the APFD rate with existing techniques.
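
For reference, the APFD metric the framework optimizes has a standard closed form (Rothermel and Elbaum): APFD = 1 - (TF1 + ... + TFm)/(n·m) + 1/(2n), where n is the number of test cases, m the number of faults, and TFi the position of the first test that reveals fault i. A small sketch with a made-up fault matrix:

```python
# Standard APFD metric for scoring a prioritized test ordering.
# The fault matrix below is toy data, not from the paper.
def apfd(order, fault_matrix):
    """order: test ids in execution order;
    fault_matrix: dict fault_id -> set of test ids that detect it."""
    n, m = len(order), len(fault_matrix)
    position = {test: i + 1 for i, test in enumerate(order)}   # 1-based rank
    # TF_i: rank of the first test in the ordering that exposes fault i
    tf_sum = sum(min(position[t] for t in tests if t in position)
                 for tests in fault_matrix.values())
    return 1.0 - tf_sum / (n * m) + 1.0 / (2 * n)

# toy example: 5 tests, 3 seeded faults
faults = {"f1": {"t3"}, "f2": {"t1", "t4"}, "f3": {"t5"}}
print(apfd(["t1", "t2", "t3", "t4", "t5"], faults))  # baseline order: 0.5
print(apfd(["t3", "t5", "t1", "t2", "t4"], faults))  # prioritized: 0.7
```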

Study of Optical Fiber Sensor Systems for the Simultaneous Monitoring of Fracture and Strain in Composite Laminates (복합적층판의 변형파손 동시감지를 위한 광섬유 센서 시스템에 관한 연구)

  • 방형준;강현규;홍창선;김천곤
    • Composites Research, v.16 no.3, pp.58-67, 2003
  • To perform real-time strain and fracture monitoring of smart composite structures, two optical fiber sensor systems are proposed. Two types of coherent sources were used for fracture signal detection: an EDFA with an FBG, and an EDFA with a Fabry-Perot filter. These sources were coupled to EFPI sensors embedded in composite specimens. To understand the characteristics of matrix crack signals, we first performed tensile tests using surface-attached PZT sensors while changing the thickness and width of the specimens. This paper describes the implementation of time-frequency analyses, such as the short-time Fourier transform (STFT) and the wavelet transform (WT), for the quantitative evaluation of fracture signals. The experimental results show distinctive signal features in the frequency domain due to the different specimen shapes. In the tensile load monitoring tests using the optical fiber sensor systems, the measured strain agreed with the values from an electric strain gauge, and the fracture detection system could detect the moment of damage with enough sensitivity to recognize the onset of micro-crack fracture signals.
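
As an illustration of the time-frequency analysis used here, the sketch below applies scipy's STFT to a synthetic fracture-like burst and recovers its dominant frequency; the sampling rate, burst parameters, and noise level are assumptions, not the paper's sensor data.

```python
# STFT-based time-frequency analysis of a transient "fracture-like" burst.
# The synthetic signal and 1 MHz sampling rate are illustrative assumptions;
# real acoustic emission signals come from the EFPI/PZT sensors.
import numpy as np
from scipy import signal

fs = 1_000_000                        # 1 MHz sampling (assumed)
t = np.arange(0, 0.01, 1 / fs)        # 10 ms record
noise = 0.05 * np.random.default_rng(0).standard_normal(t.size)
# burst: 150 kHz tone with exponential decay, arriving at t = 4 ms
burst = (np.exp(-(t - 0.004).clip(0) * 5000)
         * np.sin(2 * np.pi * 150e3 * t) * (t >= 0.004))
x = noise + burst

f, tau, Zxx = signal.stft(x, fs=fs, nperseg=256)
peak_f = f[np.argmax(np.abs(Zxx).max(axis=1))]   # strongest frequency bin
print(f"dominant burst frequency ~ {peak_f / 1e3:.0f} kHz")
```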

Heart Sound Recognition by Analysis of Block Integration and Statistical Variables (구간적분과 통계변수 분석에 의한 심음 인식)

  • 이상민;김인영;홍승홍
    • Journal of Biomedical Engineering Research, v.20 no.6, pp.573-581, 1999
  • Although phonocardiography by auscultation has long been used in diagnosis, recognition of heart sounds has been attempted only in restricted settings, such as the first heart sound, the second heart sound, and specific valve operations, for the purpose of analyzing a local function or operation of the heart; developments in full-cycle heart sound recognition remain quite limited. In this paper, we propose a recognition method that extracts features of the heart sound over the full cycle and classifies heart sounds. The proposed recognition algorithm is based on detecting the first and second heart sounds in the time domain. The algorithm classifies heart sounds into several classes by extracting the important time blocks and analyzing the peak positions, integration values, and statistical variables. Heart sounds are classified into normal, early systolic murmur, late systolic murmur, early diastolic murmur, late diastolic murmur, and continuous murmur. The results, which show an average recognition rate of 88 percent, verify that our algorithm is useful. Recognition errors occurred mainly for early systolic murmurs.
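
The time-domain detection step can be illustrated with a short-time energy ("block integration") envelope and peak picking; the band edges, block length, and thresholds below are illustrative assumptions, not the paper's settings.

```python
# Locating S1/S2 by short-time energy ("block integration") peaks in a
# band-passed phonocardiogram. All thresholds and the synthetic input
# are illustrative assumptions.
import numpy as np
from scipy import signal

fs = 2000                                        # 2 kHz PCG sampling, assumed

def heart_sound_peaks(pcg):
    # band-pass 25-150 Hz, where S1/S2 energy concentrates
    sos = signal.butter(4, [25, 150], btype="bandpass", fs=fs, output="sos")
    x = signal.sosfiltfilt(sos, pcg)
    # block integration: mean squared value over 20 ms blocks
    block = int(0.020 * fs)
    env = np.convolve(x**2, np.ones(block) / block, mode="same")
    # peaks at least 200 ms apart and above 30% of the maximum energy
    peaks, _ = signal.find_peaks(env, distance=int(0.2 * fs),
                                 height=0.3 * env.max())
    return peaks / fs                            # peak times in seconds

# synthetic two-beat PCG: four Gaussian-windowed 50 Hz bursts (S1 S2 S1 S2)
t = np.arange(0, 2.0, 1 / fs)
pcg = sum(np.exp(-((t - c) / 0.02) ** 2) * np.sin(2 * np.pi * 50 * t)
          for c in (0.1, 0.4, 0.9, 1.2))
print(heart_sound_peaks(pcg))                    # ~ [0.1, 0.4, 0.9, 1.2]
```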

Direct Pass-Through based GPU Virtualization for Biologic Applications (바이오 응용을 위한 직접 통로 기반의 GPU 가상화)

  • Choi, Dong Hoon;Jo, Heeseung;Lee, Myungho
    • KIPS Transactions on Software and Data Engineering, v.2 no.2, pp.113-118, 2013
  • Current GPU virtualization techniques incur large overheads when executing application programs, mainly due to fine-grain time-sharing scheduling of the GPU among multiple Virtual Machines (VMs). Besides, current techniques lack portability because they include the APIs for GPU computation in the VM monitor. In this paper, we propose a low-overhead, high-performance GPU virtualization approach on a heterogeneous HPC system based on the open-source Xen, with techniques tailored to bio applications. In our virtualization framework, once a VM is assigned a GPU, we allow it to occupy that GPU exclusively rather than relying on time-sharing; this improves both the performance of the applications and the utilization of the GPUs. Our techniques also allow a direct pass-through to the GPU by using the IOMMU virtualization features embedded in the hardware, for high portability. Experimental studies using microbiology genome analysis applications show that our direct pass-through techniques significantly reduce the overheads compared with previous Domain0-based approaches. Furthermore, our approach closely matches the bare-machine performance of the applications, and in some cases even improves on it.
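
The direct pass-through configuration can be sketched as a Xen xl domain config in which the GPU's PCI function is assigned exclusively to one VM through the IOMMU. The device address, memory size, and disk path below are placeholders; on a real host, assignable devices can be listed with `xl pci-assignable-list`, and VT-d/AMD-Vi must be enabled.

```
# Minimal Xen xl domain-config sketch of direct GPU pass-through.
# The BDF address, memory, and disk path are placeholder assumptions.
name   = "bio-vm"
memory = 8192
vcpus  = 4
disk   = [ 'phy:/dev/vg0/bio-vm,xvda,w' ]
# hand the GPU (example address 0000:03:00.0) to this domain exclusively;
# requires IOMMU support and the device made assignable beforehand
pci    = [ '0000:03:00.0' ]
```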

Implementation and Validation of EtherCAT Support in Integrated Development Environment for Synchronized Motion Control Application (동기 모션 제어 응용을 위한 통합개발환경의 EtherCAT 지원 기능 구현 및 검증)

  • Lee, Jongbo;Kim, Chaerin;Kim, Ikhwan;Kim, Youngdong;Kim, Taehyoun
    • Transactions of the Korean Society of Mechanical Engineers A, v.38 no.2, pp.211-218, 2014
  • Recently, software-based programmable logic controller (PLC) systems, implemented in standard PLC languages on general-purpose hardware, have been gaining popularity because they overcome the limitations of classical hardware PLC systems. Another noticeable trend is the growing importance of integrated development environments (IDEs), which help developers easily manage the growing complexity of modern control systems. Furthermore, industrial Ethernet, e.g., EtherCAT, is becoming widely accepted as a replacement for conventional fieldbuses in the distributed control domain because it offers favorable features such as short transmission delay, high bandwidth, and low cost. In this paper, we implemented an EtherCAT extension of the open-source IDE Beremiz for developing EtherCAT-based real-time, synchronized motion control applications. We validated the EtherCAT system management features and the real-time responsiveness of the control function using commercial EtherCAT drives and evaluation boards.
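
The cyclic process-data exchange that such a motion-control runtime must sustain can be illustrated with the third-party pysoem EtherCAT master bindings. This is a rough sketch under stated assumptions (NIC name `eth0`, a 1 ms cycle, slaves already reachable); it is not the paper's Beremiz extension, and the state transitions that bring the slaves to OP are omitted for brevity.

```python
# Rough sketch of a fixed-period EtherCAT process-data cycle using the
# third-party pysoem bindings; NOT the paper's Beremiz extension. The NIC
# name and 1 ms cycle are assumptions, and bringing the slaves up to the
# OP state (required before outputs take effect) is omitted for brevity.
import time
import pysoem

CYCLE = 0.001                          # 1 ms motion-control cycle (assumed)

master = pysoem.Master()
master.open('eth0')                    # NIC attached to the EtherCAT segment
if master.config_init() <= 0:          # enumerate slaves on the bus
    raise RuntimeError('no EtherCAT slaves found')
master.config_map()                    # build the process-data image

next_tick = time.monotonic()
for _ in range(1000):                  # exchange process data for ~1 s
    master.send_processdata()          # push outputs (e.g., target positions)
    master.receive_processdata(2000)   # collect inputs, 2000 us timeout
    next_tick += CYCLE
    time.sleep(max(0.0, next_tick - time.monotonic()))

master.close()
```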

Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN (3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향)

  • Yeongjee Chung
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.23 no.3, pp.145-151, 2023
  • 3D-CNN is one of the deep learning techniques for learning time series data. Such three-dimensional learning can generate many parameters, so that high-performance machine learning is required or can have a large impact on the learning rate. When learning dynamic hand-gestures in spatiotemporal domain, it is necessary for the improvement of the efficiency of dynamic hand-gesture learning with 3D-CNN to find the optimal conditions of input video data by analyzing the learning accuracy according to the spatiotemporal change of input video data without structural change of the 3D-CNN model. First, the time ratio between dynamic hand-gesture actions is adjusted by setting the learning interval of image frames in the dynamic hand-gesture video data. Second, through 2D cross-correlation analysis between classes, similarity between image frames of input video data is measured and normalized to obtain an average value between frames and analyze learning accuracy. Based on this analysis, this work proposed two methods to effectively select input video data for 3D-CNN deep learning of dynamic hand-gestures. Experimental results showed that the learning interval of image data frames and the similarity of image frames between classes can affect the accuracy of the learning model.