• Title/Summary/Keyword: Computational power


Relay Node Selection Method using Node-to-node Connectivity and Masking Operation in Delay Tolerant Networks (DTN에서 노드 간 연결 가능성과 마스킹 연산을 이용한 중계노드 선정 기법)

  • Jeong, Rae-jin;Jeon, Il-Kyu;Woo, Byeong-hun;Koo, Nam-kyoung;Lee, Kang-whan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.5
    • /
    • pp.1020-1030
    • /
    • 2016
  • This paper proposes an improved relay node selection method based on node-to-node connectivity. The method analyzes node mobility and uses a masking operation to identify the neighbor with the highest connectivity. Most Delay Tolerant Network (DTN) routing protocols rely on a simple forwarding approach in which message transmission depends on node mobility. In such cases, selecting an unsuitable mobile node causes delay and packet delivery loss because of each node's limited buffer size and computational power. The proposed algorithm therefore evaluates node connectivity with respect to mobility and direction, and selects the highest-connectivity node among its neighbors using a masking operation. In the simulations, the packet delivery ratio of the proposed algorithm was compared with those of PRoPHET and Epidemic routing. The proposed Enhanced Prediction-based Context-awareness Matrix (EPCM) algorithm achieves a higher packet delivery ratio by selecting relay nodes according to mobility and direction.
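
The following Python sketch illustrates the general idea of mask-based relay selection: each neighbor is scored from contact history and heading alignment, the score is quantized to a small number of levels, and the highest surviving level is picked with a bit mask. The field names, scoring weights, and level count are illustrative assumptions, not the paper's EPCM definition.

```python
# Illustrative mask-based relay selection (assumed fields and weights;
# not the paper's exact EPCM formulation).
import math
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: str
    contact_ratio: float        # fraction of recent beacon intervals heard (0..1)
    heading_deg: float          # neighbor's direction of travel
    bearing_to_dest_deg: float  # direction from the neighbor toward the destination

LEVELS = 4  # connectivity quantized into 4 levels -> 4-bit mask

def connectivity_level(n: Neighbor) -> int:
    """Combine mobility (contact ratio) and direction alignment into one level."""
    alignment = math.cos(math.radians(n.heading_deg - n.bearing_to_dest_deg))
    score = 0.5 * n.contact_ratio + 0.5 * max(alignment, 0.0)   # in [0, 1]
    return min(int(score * LEVELS), LEVELS - 1)

def select_relay(neighbors: list[Neighbor]) -> Neighbor | None:
    """Mask the neighbor set level by level, starting from the highest bit."""
    for level in range(LEVELS - 1, -1, -1):
        mask = 1 << level
        candidates = [n for n in neighbors if (1 << connectivity_level(n)) & mask]
        if candidates:
            # tie-break on raw contact ratio within the surviving level
            return max(candidates, key=lambda n: n.contact_ratio)
    return None
```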

Motion Estimation and Mode Decision Algorithm for Very Low-complexity H.264/AVC Video Encoder (초저복잡도 H.264 부호기의 움직임 추정 및 모드 결정 알고리즘)

  • Yoo Youngil;Kim Yong Tae;Lee Seung-Jun;Kang Dong Wook;Kim Ki-Doo
    • Journal of Broadcast Engineering
    • /
    • v.10 no.4 s.29
    • /
    • pp.528-539
    • /
    • 2005
  • H.264 has been adopted as the video codec for various multimedia services, such as DMB and next-generation DVD, because of its superior coding performance. However, the reference codec of the standard, the Joint Model (JM), contains quite a few algorithms that are too complex for resource-constrained embedded environments. This paper introduces a very low-complexity H.264 encoding algorithm applicable to such environments. The proposed algorithm restricts some coding tools, on the condition that doing so does not cause severe degradation of RD performance, and adds a few early termination and bypass conditions to the motion estimation and mode decision processes. When encoding a 7.5 fps QCIF sequence at 64 kbps with the proposed algorithm, the encoder yields a PSNR about 0.4 dB lower than the standard JM, but requires only 15% of the computational complexity and drastically lowers the required memory and power consumption. By porting the proposed H.264 codec to a PDA with an Intel PXA255 processor, we verified the feasibility of H.264-based MMS (Multimedia Messaging Service) on a PDA.
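
As a generic illustration of how early termination cuts motion-estimation cost, the Python sketch below stops a block-matching search as soon as the SAD of a candidate falls below a threshold. The search range, block size, and threshold value are placeholder assumptions, not the paper's conditions.

```python
# Generic early-termination sketch for block-matching motion estimation.
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> int:
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def motion_search(cur: np.ndarray, ref: np.ndarray, bx: int, by: int,
                  block: int = 16, search: int = 8, early_stop: int = 512):
    """Full search in +/-search pixels, exiting once SAD drops below early_stop."""
    cur_blk = cur[by:by + block, bx:bx + block]
    best, best_cost = (0, 0), sad(cur_blk, ref[by:by + block, bx:bx + block])
    if best_cost < early_stop:        # bypass: zero-motion predictor is good enough
        return best, best_cost
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cost = sad(cur_blk, ref[y:y + block, x:x + block])
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
                if best_cost < early_stop:   # early termination condition
                    return best, best_cost
    return best, best_cost
```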

A Study on the Combustion Flow Characteristics of an Exhaust Gas Recirculation Burner with the Change of Outlet Opening Position (배기가스 재순환 버너에서 연소가스 출구 위치에 따른 연소 유동 특성에 관한 연구)

  • Ha, Ji-Soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.8
    • /
    • pp.8-13
    • /
    • 2018
  • Nitrogen oxides (NOx) have recently become a major contributor to the generation of ultrafine dust, which is of great social interest in terms of improving the atmospheric environment. Nitrogen oxides are generated mainly by the reaction of nitrogen and oxygen in air in the high-temperature combustion gas atmosphere of combustion equipment such as thermal power plants. Recently, research has been conducted on combustion in which the exhaust gas is recirculated to a cylindrical burner through piping with a Coanda nozzle. In this study, three burner configurations were examined through computational fluid analysis: Case 1, with the combustion gas outlet on the right; Case 2, with outlets on both sides; and Case 3, with the outlet on the left. The pressure, flow, temperature, combustion reaction rate, and nitrogen oxide distribution characteristics were compared and analyzed. The combustion reaction occurred toward the right, where the combustion gas recirculation inlet is located, in the Case 1 and Case 2 burners, and near the mixed gas inlet in the Case 3 burner. Because the Case 2 burner exhausts to both sides, its outlet temperature was about $100^{\circ}C$ lower than that of the other burners. The NOx concentration at the exit of the Case 1 burner was about 20 times larger than that of the other burners. The present study shows that exhausting the combustion gas through both side outlets, or in the direction opposite to the recirculation gas inlet, is effective for NOx reduction.

PSO-Based PID Controller for AVR Systems Concerned with Design Specification (설계사양을 고려한 AVR 시스템의 PSO 기반 PID 제어기)

  • Lee, Yun-Hyung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.10
    • /
    • pp.330-338
    • /
    • 2018
  • The proportional-integral-derivative (PID) controller has been widely used in industry because of its robust performance and simple structure over a wide range of operating conditions. However, the AVR (Automatic Voltage Regulator), as a control system, is not robust to variations of the power system parameters. Therefore, a PID controller is needed to increase the stability and performance of the AVR system. In this paper, a novel design method for determining the optimal PID controller parameters of an AVR system using the particle swarm optimization (PSO) algorithm is presented. The proposed approach has superior features, including easy implementation, stable convergence characteristics, and good computational efficiency. To help evaluate the performance of the proposed PSO-PID controller, a new performance criterion function is also defined. This evaluation function is intended to reflect cases in which the maximum percentage overshoot and the settling time are given as design specifications. The ITAE-based evaluation function imposes a penalty if the design specifications are violated, so that the PSO algorithm satisfies the specifications while searching for the PID controller parameters. Finally, computer simulations show that the proposed PSO-PID controller not only satisfies the given design specifications for the terminal voltage step response, but also achieves better control performance than other similar recent studies.
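
As a rough illustration of a penalized ITAE criterion combined with a PSO update of PID gains, the Python sketch below assumes a step-response simulator (`simulate_avr`, hypothetical) exists elsewhere; the specification limits, penalty weights, and PSO coefficients are placeholders, not the paper's values.

```python
# Sketch of a penalized ITAE criterion and a PSO update for [Kp, Ki, Kd] gains.
import numpy as np

def itae_with_penalty(t, y, max_overshoot=0.05, max_settling=1.0,
                      w_os=100.0, w_ts=100.0):
    """ITAE of the unit-step error plus penalties when the specs are violated."""
    e = 1.0 - y
    itae = np.trapz(t * np.abs(e), t)
    overshoot = max(float(y.max()) - 1.0, 0.0)
    outside = np.where(np.abs(e) > 0.02)[0]        # +/-2% settling band
    settling = t[outside[-1]] if outside.size else 0.0
    cost = itae
    if overshoot > max_overshoot:
        cost += w_os * (overshoot - max_overshoot)
    if settling > max_settling:
        cost += w_ts * (settling - max_settling)
    return cost

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=np.random):
    """Standard PSO velocity/position update for a swarm of gain vectors."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# usage sketch:
# t, y = simulate_avr(Kp, Ki, Kd)    # hypothetical AVR closed-loop step response
# cost = itae_with_penalty(t, y)
```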

Internal Defect Evaluation of Spot Weld Parts and Carbon Composites using the Non-contact Air-coupled Ultrasonic Transducer Method (비접촉 초음파 탐상기법을 이용한 스폿용접부 및 탄소복합체의 내부 결함평가)

  • Kwak, Nam-Su;Lee, Seung-Chul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.11
    • /
    • pp.6432-6439
    • /
    • 2014
  • The NAUT (Non-contact Air-coupled Ultrasonic Testing) technique is an ultrasonic testing method that enables non-contact inspection by compensating for the energy loss caused by the acoustic impedance mismatch of air, using an ultrasonic pulser-receiver, pre-amplifier, and high-sensitivity transducer. Because NAUT maintains steady ultrasonic transmission and reception, it can inspect materials at high or low temperatures, or specimens with rough surfaces or narrow parts, which could not be tested with conventional contact-type techniques. In this study, the internal defects of spot welds, which are widely used in auto parts, and of CFRP parts were inspected to determine whether the NAUT technique is practical for commercial use. Because a sound spot-welded region has high ultrasonic transmissivity, it appears red in the resulting image, whereas a region with an internal defect contains a layer of air, has low transmissivity, and appears blue. In addition, the color sharpness differed depending on the PRF (pulse repetition frequency), an important factor that determines the measurement speed. With the images obtained from the CFRP specimens through an imaging device, the shape, size, and position of internal defects could be identified within a short period of time. These experiments confirmed that both internal defect detection and imaging of the defects are possible with the NAUT technique, and that NAUT can be applied to detecting internal defects in spot-welded or CFRP parts and commercialized in various fields.

A Study on BIM-based Evaluation and Process for Architectural Design Competitions - Case Study of Domestic and International BIM-based Competitions (BIM기반의 건축설계경기 평가 및 절차에 관한 연구 -국내외 BIM기반 건축설계경기 사례를 기반으로-)

  • Park, Seung-Hwa;Hong, Chang-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.2
    • /
    • pp.23-30
    • /
    • 2017
  • In the AEC (Architecture, Engineering, and Construction) industry, BIM (Building Information Modeling) technology not only helps express design intent efficiently, but also realizes object-oriented design that includes a building's life-cycle information. It can therefore manage all data created at each building stage, and the roles of BIM are greatly expanding. Contractors and designers have been trying to apply BIM to design competitions and to validate it for the best result in various respects. Computational simulation, which differs from the existing process, enables effective evaluation; this requires a modeling guideline for each kind of BIM tool and a validation system for confidential assessment. This paper describes a new design evaluation method and process using BIM technologies, following the new paradigm in the construction industry, and identifies points for improvement based on the design competition for the Korea Power Exchange (KPX) headquarters office. In conclusion, this paper provides a basic data input guideline based on open BIM for automatic assessment and interoperability between different BIM systems, and suggests a practical usage of the rule-based Model Checker.

Design of an adaptive array antenna for tracking the source of maximum power and its application to CDMA mobile communication (최대 고유치 문제의 해를 이용한 적응 안테나 어레이와 CDMA 이동통신에의 응용)

  • 오정호;윤동운;최승원
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.11
    • /
    • pp.2594-2603
    • /
    • 1997
  • A novel adaptive beamforming method is presented in this paper. The proposed technique provides a suboptimal beam pattern that increases the signal-to-noise/interference ratio (SNR/SIR), and thus the capacity of the communication channel, under the assumption that the desired signal is dominant compared with each interference component at the receiver, a condition achieved in Code Division Multiple Access (CDMA) mobile communications by the chip correlator. The main advantages of the new technique are: (1) the procedure requires neither reference signals nor a training period; (2) signal coherency does not affect the performance or complexity of the procedure; (3) the number of antennas does not have to exceed the number of signals with distinct arrival angles; (4) the procedure is iterative, so a new suboptimal beam pattern is generated upon the arrival of each new data snapshot, whose arrival angle keeps changing due to the mobility of the signal source; and (5) the total amount of computation is greatly reduced compared with most conventional beamforming techniques, so the suboptimal beam pattern can be produced at every snapshot in real time. The total computational load for generating a new set of weights, including the update of an N-by-N autocovariance matrix (N is the number of antenna elements), is $O(3N^2 + 12N)$. It can be further reduced to $O(11N)$ by approximating the matrix with the instantaneous signal vector.
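
The complexity reduction can be pictured with a simple rank-one update: instead of maintaining the full autocovariance matrix, the weight vector is refreshed from the instantaneous snapshot, in the spirit of a power-iteration step toward the dominant eigenvector. The Python sketch below is illustrative, not the authors' exact recursion; the forgetting factor is an assumption.

```python
# Rank-one weight update using the instantaneous signal vector in place of the
# full autocovariance matrix (illustrative sketch only).
import numpy as np

def update_weights(w: np.ndarray, x: np.ndarray, forget: float = 0.95) -> np.ndarray:
    """One snapshot update: w <- normalize(forget * w + x * (x^H w))."""
    xhw = np.vdot(x, w)              # scalar x^H w, O(N) multiplications
    w_new = forget * w + x * xhw     # rank-one update without forming an N-by-N matrix
    return w_new / np.linalg.norm(w_new)

# usage sketch (x: length-N complex array of despread element samples per snapshot):
# N = 8
# w = np.ones(N, dtype=complex) / np.sqrt(N)
# for x in snapshots:
#     w = update_weights(w, x)
#     y = np.vdot(w, x)              # beamformer output for this snapshot
```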

Flow analysis of the Sump Pump (흡수정의 유동해석)

  • Jung, Han-Byul;Noh, Seung-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.3
    • /
    • pp.673-680
    • /
    • 2017
  • A sump pump is a system that draws in water stored in a dam or reservoir. Sump pumps are used to move large amounts of water for the cooling systems of large power plants, such as thermal and nuclear plants. However, when the ratio between the flow and the sump pump is small, the flow rate increases around the inlet port, causing turbulent vortices or swirl flow. The turbulent flow reduces performance and can cause failure. Various methods have been devised to solve this problem, but an adequate solution has not been found for low water levels. The most efficient solution is to install an anti-vortex device (AVD) or increase the length of the sump inlet, which makes the flow uniform. This paper presents a computational fluid dynamics (CFD) analysis of the flow characteristics in a sump pump for different sump inlet lengths and AVD types. Modeling was performed in three stages based on the pump intake, sump, and pump. For accurate analysis, the grid was made denser in the intake part, and the grids for the sump pump and AVD were also refined. Between 1.2 and 1.5 million grid elements were generated using ANSYS ICEM-CFD 14.5 with a mixture of tetra and prism elements. The analysis was done using the SST turbulence model of ANSYS CFX 14.5, a commercial CFD program. The conditions were as follows: H.W.L 6.0 m, L.W.L 3.5 m, Qmax 4.000 kg/s, Qavg 3.500 kg/s, Qmin 2.500 kg/s. The results, analyzed in terms of vortex angle and velocity distribution, are as follows. A sump pump with an Ext E-type AVD was acceptable at a high water level; however, further studies are needed for low water levels using the Ext E-type AVD as a base.

Removal of Seabed Multiples in Seismic Reflection Data using Machine Learning (머신러닝을 이용한 탄성파 반사법 자료의 해저면 겹반사 제거)

  • Nam, Ho-Soo;Lim, Bo-Sung;Kweon, Il-Ryong;Kim, Ji-Soo
    • Geophysics and Geophysical Exploration
    • /
    • v.23 no.3
    • /
    • pp.168-177
    • /
    • 2020
  • Seabed multiple reflections (seabed multiples) are a main cause of misinterpreted primary reflections in both shot gathers and stack sections. Accordingly, seabed multiples need to be suppressed throughout data processing. Conventional model-driven methods, such as prediction-error deconvolution and Radon filtering, and data-driven methods, such as the surface-related multiple elimination technique, have been used to attenuate multiple reflections. However, the vast majority of processing workflows require time-consuming parameter testing and selection, in addition to computational power and skilled data-processing technique. To attenuate seabed multiples in seismic reflection data, input gathers with seabed multiples and label gathers without seabed multiples were generated via numerical modeling using the Marmousi2 velocity structure. The training data consisted of normal-moveout-corrected common-midpoint gathers fed into a U-Net neural network. The trained model effectively attenuated the seabed multiples, as judged by the image similarity between the prediction and the target data, and demonstrated good applicability to field data.
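
For a concrete picture of the network side, the following PyTorch sketch shows a compact U-Net that maps a gather containing seabed multiples to a multiple-free gather. The depth, channel counts, and loss function are illustrative assumptions, not the authors' configuration.

```python
# Compact U-Net sketch for multiple attenuation (illustrative configuration).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):                 # x: (batch, 1, time, offset)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)               # predicted multiple-free gather

# training sketch: inputs are NMO-corrected CMP gathers containing multiples,
# labels are the corresponding gathers modeled without the seabed multiple
# model, loss = SmallUNet(), nn.MSELoss()
```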

A Study on Memory Optimization for Applying Deep Learning to PCs (딥러닝을 PC에 적용하기 위한 메모리 최적화에 관한 연구)

  • Lee, Hee-Yeol;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.2
    • /
    • pp.136-141
    • /
    • 2017
  • In this paper, we propose a memory optimization algorithm for applying deep learning on a PC. The proposed algorithm minimizes memory usage and computation time by reducing the amount of computation and data required by a conventional deep learning structure on a general PC. The algorithm consists of three steps: construction of a convolution layer using random filters with discriminating power, data reduction using PCA, and creation of the CNN structure using an SVM. Because no learning is required when constructing the convolution layer from discriminating random filters, the overall training time is shortened. PCA reduces the amount of memory and computational throughput, and building the CNN structure with an SVM maximizes this reduction. To evaluate the performance of the proposed algorithm, we experimented with Yale University's Extended Yale B face database. The results show that the proposed algorithm achieves a recognition rate similar to that of an existing CNN algorithm, confirming its effectiveness. Based on the proposed algorithm, it is expected that deep learning algorithms with large data and computation requirements can be implemented on a general PC.
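
The Python sketch below illustrates the general shape of the three-step idea: fixed random convolution filters (no training), PCA for dimensionality reduction, and an SVM classifier. The filter count, pooling size, PCA dimension, and SVM settings are placeholder assumptions, not the paper's configuration.

```python
# Random-filter features -> PCA -> SVM (illustrative sketch).
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
FILTERS = rng.standard_normal((8, 5, 5))            # 8 fixed random 5x5 filters

def avg_pool(m: np.ndarray, k: int = 4) -> np.ndarray:
    """Non-overlapping k x k average pooling (crops edges not divisible by k)."""
    h, w = (m.shape[0] // k) * k, (m.shape[1] // k) * k
    return m[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def extract_features(images: np.ndarray) -> np.ndarray:
    """Convolve each image with the random filter bank, pool, and flatten."""
    feats = []
    for img in images:
        maps = [convolve2d(img, f, mode="valid") for f in FILTERS]
        feats.append(np.concatenate([avg_pool(m).ravel() for m in maps]))
    return np.asarray(feats)

# usage sketch (X_train: array of grayscale face crops, y_train: labels):
# clf = make_pipeline(PCA(n_components=100), SVC(kernel="linear"))
# clf.fit(extract_features(X_train), y_train)
```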