• Title/Summary/Keyword: amount of computation

Search Results: 604

A Possible Path per Link CBR Algorithm for Interference Avoidance in MPLS Networks

  • Sa-Ngiamsak, Wisitsak;Varakulsiripunth, Ruttikorn
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.772-776 / 2004
  • This paper proposes an interference-avoidance approach for the Constraint-Based Routing (CBR) algorithm in Multi-Protocol Label Switching (MPLS) networks. An MPLS network can integrate any layer-3 protocol with any layer-2 protocol of the OSI model. It is based on label switching, a fast and flexible switching technique that uses pre-defined Label Switched Paths (LSPs). MPLS is a solution for Traffic Engineering (TE), Quality of Service (QoS), Virtual Private Network (VPN), and Constraint-Based Routing (CBR) issues. For MPLS CBR, the routing performance requirements are on-line routing capability, high network throughput, high network utilization, high scalability, fast rerouting, a low percentage of blocked call-setup requests, and low calculation complexity. Previously proposed algorithms include the minimum hop (MH) algorithm, the widest shortest path (WSP) algorithm, and the minimum interference routing algorithm (MIRA). MIRA currently appears to be the best solution to the MPLS routing problem when a path with the minimum interference level must be selected: it achieves lower call-setup blocking, a lower interference level, higher network utilization, and higher network throughput. However, its routing calculation is so complex that practical implementation is difficult. This paper sets three objectives for the routing algorithm design: minimizing interference with other source-destination node pairs, minimizing resource usage by selecting a minimum-hop path first, and reducing calculation complexity. The proposed CBR algorithm is based on a power factor computed from the total number of possible paths per link and the residual bandwidth in the network; a path with a high power factor is treated as a minimum-interference path and selected for path setup. With the proposed algorithm, all three objectives are attained: selecting a high-power-factor path minimizes interference among all source-destination node pairs, and selecting the shortest among paths of equal power factor minimizes the use of network resources, leaving more capacity reserved for future call-setup requests. Moreover, the possible-path-per-link calculation (the interference-level indicator) runs only when the network topology changes, which reduces routing calculation complexity. Simulation results show that the proposed algorithm achieves high network utilization, a low call-setup blocking percentage, and low routing computation complexity.
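As a rough illustration of the selection rule this abstract describes, a path selector could score candidate LSPs by a power factor built from the possible-path count per link and the residual bandwidth, then break ties with hop count. The abstract does not give the exact power-factor formula; the min-of-products form, the link names, and all numbers below are assumptions for illustration only.

```python
# Hypothetical sketch: rank candidate LSPs by a "power factor" and
# break ties by preferring the shortest path. The min-of-products
# formula is an assumed stand-in for the paper's actual definition.

def power_factor(path, possible_paths, residual_bw):
    # Links with few possible paths are interference bottlenecks, so
    # the path is scored by its weakest link.
    return min(possible_paths[link] * residual_bw[link] for link in path)

def select_lsp(candidates, possible_paths, residual_bw):
    # Highest power factor first; among equals, fewest hops (-len).
    return max(candidates,
               key=lambda p: (power_factor(p, possible_paths, residual_bw),
                              -len(p)))

links = {"AB": 3, "BC": 2, "AC": 1}       # possible paths per link (toy values)
bw    = {"AB": 80, "BC": 60, "AC": 100}   # residual bandwidth, Mbps (toy values)
paths = [["AB", "BC"], ["AC"]]
best = select_lsp(paths, links, bw)
```

Recomputing `possible_paths` only on topology changes, as the abstract notes, keeps the per-request work down to this scoring pass.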


A Performance Analysis of Distributed Storage Codes for RGG/WSN (RGG/WSN을 위한 분산 저장 부호의 성능 분석)

  • Cheong, Ho-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.5 / pp.462-468 / 2017
  • In this paper, an IoT/WSN (Internet of Things / Wireless Sensor Network) is modeled as a random geometric graph, and the performance of a decentralized code for efficiently storing the data generated by the WSN is analyzed. WSNs with n = 100 or 200 nodes were modeled as random geometric graphs and simulated for performance analysis. For these network sizes, the successful decoding probability as a function of the decoding ratio η depends more on the number of source nodes k than on the total number of nodes n. In particular, the simulation results show that the successful decoding rate depends strongly on k rather than n, and that the rate exceeded 70% when η ≥ 2.0. Simulation of the operation count as a function of η also showed that the number of operations of the BP (belief propagation) decoding scheme increases exponentially with k. This is probably because the LT code becomes longer as the number of source nodes increases, so the amount of decoding computation grows sharply.
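The decoding experiment sketched in this abstract can be mimicked with a small peeling (belief-propagation) simulation: k source packets are stored as random XOR combinations (LT-style), and a decoder collects m = η·k codewords and repeatedly resolves degree-1 codewords. The degree distribution (uniform over 1..3) and all parameters below are simplifications, not the paper's setup.

```python
import random

# Toy LT-style storage + peeling decoder to estimate the successful
# decoding probability as a function of the decoding ratio eta.

def lt_encode(k, m, rng):
    # Each stored codeword covers a small random subset of the k sources.
    return [set(rng.sample(range(k), rng.randint(1, 3))) for _ in range(m)]

def peel_decode(k, codewords):
    # Peeling BP: resolve any codeword with exactly one unknown source,
    # then remove that source from all other codewords; repeat.
    recovered = set()
    progress = True
    while progress:
        progress = False
        for cw in codewords:
            pending = cw - recovered
            if len(pending) == 1:
                recovered |= pending
                progress = True
    return len(recovered) == k

def success_rate(k, eta, trials=200, seed=1):
    rng = random.Random(seed)
    m = int(eta * k)
    return sum(peel_decode(k, lt_encode(k, m, rng)) for _ in range(trials)) / trials
```

Sweeping `eta` and `k` with this sketch reproduces the qualitative trend the abstract reports: the success curve is governed mainly by k, and decoding work grows with the code length.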

A Comparative Study on Dietary Life according to the Obesity Assessment Methods of Higher Grade Elementary School Students in Jeonju (전주지역 고학년 초등학생의 비만판정 방법에 따른 식생활 비교연구)

  • Yu, Ok-Kyeong;Cha, Youn-Soo
    • Korean Journal of Human Ecology / v.9 no.4 / pp.83-93 / 2006
  • This study investigated whether eating habits and eating behaviors differ between non-obese and obese elementary school students in the Jeonju area. In total, 2,568 students (1,364 male and 1,204 female) in the 4th, 5th, and 6th grades of five elementary schools were surveyed, and the results were analyzed with the SPSS program. The results are summarized as follows: 1. Obesity was defined as a Body Mass Index (BMI) above the 85th percentile or an Obesity Index (OI) above 110. Subjects were first divided into four groups (lean, normal, overweight, and obese) and then reclassified into non-obese (lean and normal) and obese (overweight and obese) groups. The average heights of male and female students were 142.5 cm and 143.1 cm, and the average weights 36.4 kg and 37.9 kg, respectively. 2. By these criteria, 19.6% of male students were obese by BMI (overweight 11.3%, obese 8.3%) and 25.0% by OI (overweight 12.5%, obese 12.5%). In particular, the obesity rate of male students was significantly higher than that of female students under the OI method. 3. Comparing male and female students, the difference in obesity was statistically significant by OI but not by BMI; differences among grade levels (4th, 5th, 6th) were statistically significant. 4. Correlations between eating habits (and eating behaviors) and obesity were statistically significant in some cases; for example, a fast eating habit correlated significantly with obesity, and the analysis of general environments and obesity was likewise significant in some cases. These results suggest that the number of overweight students can increase with the amount and kinds of food children eat, in addition to general causes of overweight such as genetic, environmental, and psychological factors. More systematic surveys of children's eating habits and eating behaviors, work with parents, and comparisons of eating habits, eating behaviors, and nutrition knowledge between past and present are needed in the future.
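The OI grouping used in this study can be sketched as a simple classifier. The abstract states only the 110 obesity threshold; the 90 and 120 cutoffs below are the conventional Korean pediatric bands and are assumptions here, as is the standard-weight input.

```python
# Sketch of Obesity Index (OI) classification. OI compares measured
# weight with a standard weight for height. Cutoffs 90/110/120 are
# assumed conventional bands, not values confirmed by the abstract.

def obesity_index(weight_kg, standard_weight_kg):
    return weight_kg / standard_weight_kg * 100

def oi_group(oi):
    if oi < 90:
        return "lean"
    if oi < 110:
        return "normal"
    if oi < 120:
        return "overweight"
    return "obese"
```

With the study's average male weight (36.4 kg) as a stand-in standard weight, a 40 kg student would score OI ≈ 109.9 and fall in the "normal" band, just under the overweight cutoff.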


A Digital Twin Software Development Framework based on Computing Load Estimation DNN Model (컴퓨팅 부하 예측 DNN 모델 기반 디지털 트윈 소프트웨어 개발 프레임워크)

  • Kim, Dongyeon;Yun, Seongjin;Kim, Won-Tae
    • Journal of Broadcast Engineering / v.26 no.4 / pp.368-376 / 2021
  • Artificial intelligence clouds help developers build autonomous things that integrate AI and control technologies efficiently, by sharing learned models and providing execution environments. Existing development technologies for autonomous things consider only the accuracy of the AI models, at the cost of greater model complexity (more hidden layers and kernels) and consequently a large amount of computation. Because resource-constrained computing environments cannot provide sufficient computing resources for such complex models, they cause the autonomous things to violate their time criticality. In this paper, we propose a digital twin software development framework that selects AI models optimized for the target computing environment. The framework uses a load-estimation DNN model to predict the load of candidate AI models from digital twin data, selects the optimal model for the specific computing environment, and develops the control software accordingly. In experiments based on representative CNN models, the proposed load-estimation DNN shows an error rate of up to 20% compared with a formula-based load-estimation scheme.
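The selection step this abstract describes can be sketched as: predict each candidate model's load on the target device, discard those that exceed the time budget, and keep the most accurate survivor. The linear estimator below is a hypothetical stand-in for the paper's load-estimation DNN, and all names, coefficients, and candidate models are invented for illustration.

```python
# Hedged sketch of budget-constrained model selection for a
# resource-constrained device.

def predict_load(model):
    # Stand-in for the load-estimation DNN: predicted load grows with
    # layer and kernel counts (coefficients are illustrative only).
    return 0.5 * model["layers"] + 0.1 * model["kernels"]

def select_model(candidates, load_budget):
    # Keep the most accurate model whose predicted load fits the budget.
    feasible = [m for m in candidates if predict_load(m) <= load_budget]
    return max(feasible, key=lambda m: m["accuracy"]) if feasible else None

zoo = [
    {"name": "small", "layers": 8,  "kernels": 32,  "accuracy": 0.88},
    {"name": "large", "layers": 50, "kernels": 512, "accuracy": 0.95},
]
chosen = select_model(zoo, load_budget=10.0)  # "large" exceeds the budget
```

The design point is that accuracy is maximized subject to a hard load constraint, rather than unconditionally, which is what preserves time criticality on weak hardware.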

A Review of Seismic Full Waveform Inversion Based on Deep Learning (딥러닝 기반 탄성파 전파형 역산 연구 개관)

  • Pyun, Sukjoon;Park, Yunhui
    • Geophysics and Geophysical Exploration / v.25 no.4 / pp.227-241 / 2022
  • Full waveform inversion (FWI) in seismic data processing is an inversion technique used to estimate the subsurface velocity model for oil and gas exploration. Recently, deep learning (DL) has been used increasingly in seismic data processing, and its combination with FWI has attracted remarkable research effort. For example, DL-based data processing techniques have been used to preprocess input data for FWI, and FWI itself has been implemented directly with DL. DL-based FWI can be divided into the following methods: pure data-based, physics-based neural network, encoder-decoder, reparameterized FWI, and physics-informed neural network. In this review, we describe the theory and characteristics of these methods, systematized in order of their development. In the early days of DL-based FWI, a DL model predicted the velocity model from a large training data set, faithfully adopting the basic principles of data science in a purely data-based prediction model. The current research trend is to supplement the shortcomings of the pure data-based approach with loss functions built from seismic data or from physical information in the wave equation itself inside deep neural networks. Through these developments, DL-based FWI has evolved so that it no longer requires a large amount of training data, alleviates the cycle-skipping problem that is an intrinsic limitation of FWI, and reduces computation times dramatically. The value of DL-based FWI is expected to keep increasing in seismic data processing.
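The hybrid loss idea in this abstract, a data-misfit term plus a wave-equation physics term, can be illustrated on a 1D acoustic model, u_tt = c²·u_xx, with finite differences. In real DL-based FWI this loss is evaluated on network outputs; here plain arrays stand in, and the grid, receiver index, and weighting are illustrative assumptions.

```python
import numpy as np

# Illustrative physics-informed loss: seismogram misfit at a receiver
# plus the mean-squared residual of the 1D acoustic wave equation.

def physics_residual(u, c, dx, dt):
    # Interior-point residual of u_tt - c^2 * u_xx on a (time, space) grid,
    # using second-order central differences.
    u_tt = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dt**2
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    return u_tt - c**2 * u_xx

def hybrid_loss(u_pred, d_obs, receiver_idx, c, dx, dt, weight=1.0):
    data_term = np.mean((u_pred[:, receiver_idx] - d_obs) ** 2)
    physics_term = np.mean(physics_residual(u_pred, c, dx, dt) ** 2)
    return data_term + weight * physics_term
```

A travelling wave u(t, x) = sin(x − c·t) satisfies the equation exactly, so its residual vanishes up to finite-difference error; a velocity model that distorts the wavefield raises the physics term even where no receivers record data, which is what lets the physics term substitute for scarce training data.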

A Study about Learning Graph Representation on Farmhouse Apple Quality Images with Graph Transformer (그래프 트랜스포머 기반 농가 사과 품질 이미지의 그래프 표현 학습 연구)

  • Ji Hun Bae;Ju Hwan Lee;Gwang Hyun Yu;Gyeong Ju Kwon;Jin Young Kim
    • Smart Media Journal / v.12 no.1 / pp.9-16 / 2023
  • Recently, convolutional neural network (CNN) based systems have been developed to overcome the limitations of human resources in apple quality classification on farms. However, since convolutional neural networks accept only images of the same size, preprocessing such as sampling may be required, and oversampling causes loss of information from the original image, such as quality degradation and blurring. To minimize these problems, this paper generates an image-patch-based graph from the original image and proposes a random-walk-based positional encoding method for applying a graph transformer model. The method learns position-embedding information for patches, which carry no inherent positional information, based on the random-walk algorithm, and finds the optimal graph structure by aggregating useful node information through the self-attention mechanism of the graph transformer. It is therefore robust and performs well even on new graph structures with random node order and on arbitrary graph structures that depend on the location of an object in the image. In experiments on five apple quality datasets, the learning accuracy was 1.3% to 4.7% higher than that of other GNN models, and the model has 3.59M parameters, only about 15% of the 23.52M of the ResNet18 model, demonstrating fast inference from the reduced amount of computation.
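Random-walk positional encoding, as commonly defined for graph transformers, assigns each node the return probabilities diag((A·D⁻¹)ˢ) for steps s = 1..K. A minimal sketch, assuming the patch graph has already been built (the 4-node adjacency below is illustrative, not a real patch graph):

```python
import numpy as np

# Random-walk positional encoding: for each node i, collect the
# probability of a random walk returning to i after s = 1..K steps.

def random_walk_pe(adj, num_steps):
    deg = adj.sum(axis=1)
    walk = adj / deg[None, :]           # column-normalized: A D^-1
    pe, power = [], np.eye(len(adj))
    for _ in range(num_steps):
        power = power @ walk            # s-step transition matrix
        pe.append(np.diag(power))       # return probability per node
    return np.stack(pe, axis=1)         # shape: (num_nodes, num_steps)

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
pe = random_walk_pe(adj, num_steps=3)
```

Because these features depend only on the walk structure around each node, they are invariant to node ordering, which matches the abstract's robustness claim for graphs with randomly ordered nodes.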

Selection of Optimal Variables for Clustering of Seoul using Genetic Algorithm (유전자 알고리즘을 이용한 서울시 군집화 최적 변수 선정)

  • Kim, Hyung Jin;Jung, Jae Hoon;Lee, Jung Bin;Kim, Sang Min;Heo, Joon
    • Journal of Korean Society for Geospatial Information Science / v.22 no.4 / pp.175-181 / 2014
  • The Korean government proposed a new initiative, 'Government 3.0', under which the administration will open its datasets to the public before being asked. The City of Seoul is the front runner in disclosing government data. If we know which attributes are the governing factors for a given segmentation, the outcomes can be applied to real-world problems in marketing, business strategy, and administrative decision-making. For the city of Seoul, however, selecting optimal variables from an open dataset of up to several thousand attributes would require an enormous amount of computation time, since it amounts to a combinatorial optimization that maximizes the dissimilarity between clusters. In this study, we acquired a 718-attribute dataset from Statistics Korea and analyzed it to select the variables that best differentiate Gangnam from the other districts, using a genetic algorithm and Dunn's index. We also used the Microsoft Azure cloud computing system to speed up processing. As a result, 28 optimal variables were selected, and validation with the Ward's minimum-variance and K-means algorithms showed that these 28 variables effectively separate Gangnam from the other districts.
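The search loop this abstract describes can be sketched compactly: a genetic algorithm evolves a bit-mask over candidate attributes, scoring each mask by Dunn's index (minimum inter-cluster distance divided by maximum intra-cluster diameter) on the variables the mask selects. The toy data, GA parameters, and complete-pairwise distances below are illustrative assumptions, not the study's setup.

```python
import random

# GA-based variable selection scored by Dunn's index.

def dist(a, b, mask):
    return sum((x - y) ** 2 for x, y, m in zip(a, b, mask) if m) ** 0.5

def dunn_index(clusters, mask):
    inter = min(dist(a, b, mask) for i, ca in enumerate(clusters)
                for cb in clusters[i + 1:] for a in ca for b in cb)
    intra = max(dist(a, b, mask) for c in clusters for a in c for b in c)
    return inter / intra if intra else 0.0

def evolve(clusters, n_vars, generations=30, pop=12, rng=random.Random(0)):
    population = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=lambda m: dunn_index(clusters, m) if any(m) else 0.0,
                        reverse=True)
        survivors = population[: pop // 2]       # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(n_vars)] ^= 1    # single-bit mutation
            children.append(child)
        population = survivors + children
    return max(population, key=lambda m: dunn_index(clusters, m) if any(m) else 0.0)

cluster_a = [(0.0, 0.0), (0.5, 9.0)]
cluster_b = [(10.0, 0.2), (10.5, 9.1)]
best_mask = evolve([cluster_a, cluster_b], n_vars=2)
```

In this toy example, variable 0 separates the clusters while variable 1 is noise, so masks that drop variable 1 score a higher Dunn's index; at 718 attributes the same search is what makes cloud-scale computation necessary.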

A New Program to Design Residual Treatment Trains at Water Treatment Plants (정수장 배출수처리시설 설계 프로그램의 개발)

  • Bae, Byung-Uk;Her, Kuk;Joo, Dae-Sung;Jeong, Yeon-Gu;Kim, Young-Il;Ha, Chang-Won
    • Journal of Korean Society of Environmental Engineers / v.29 no.3 / pp.277-282 / 2007
  • For a more accurate and practical design of the residual treatment train at water treatment plants (WTPs), a computational program based on the commercial spreadsheet Microsoft Excel was developed. The program for the design of a residual treatment train (DRTT) works in three steps: it estimates the residual production to be treated, analyzes the mass balance, and determines the size of each unit process. Of particular interest in the DRTT program is the provision for a filter-backwash recycle system, consisting of a surge tank and a sedimentation basin, for more efficient recycling of backwash water. When the DRTT program was applied to the Chungju WTP, it proved very useful in avoiding errors that might otherwise occur during arithmetic calculations and in reducing the time needed to obtain the output. The DRTT program is expected to be useful for the design of new WTPs as well as the rehabilitation of existing ones.
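The first and third DRTT steps can be sketched with a common design approximation: dry-solids production S = Q·(k₁·T + k₂·D)/1000 from flow Q, raw-water turbidity T, and coagulant dose D, followed by a solids-loading sizing rule for the thickener. All coefficients below are placeholders for illustration; they are not the program's actual values, and the mass-balance step is collapsed to a single sizing rule.

```python
# Hedged sketch of residual-production estimation and unit sizing.

def sludge_production(flow_m3d, turbidity_ntu, coag_dose_mgL,
                      k1=1.5, k2=0.25):
    # Dry solids in kg/day; k1 converts turbidity to suspended solids,
    # k2 converts coagulant dose to precipitate (illustrative values).
    return flow_m3d * (k1 * turbidity_ntu + k2 * coag_dose_mgL) / 1000

def thickener_area(solids_kgd, loading_kg_m2d=15.0):
    # Size the thickener from an assumed allowable solids loading.
    return solids_kgd / loading_kg_m2d
```

Chaining such formulas in code (or, as in the paper, in spreadsheet cells) is what removes the arithmetic-error risk the abstract mentions.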

A Memory-efficient Partially Parallel LDPC Decoder for CMMB Standard (메모리 사용을 최적화한 부분 병렬화 구조의 CMMB 표준 지원 LDPC 복호기 설계)

  • Park, Joo-Yul;Lee, So-Jin;Chung, Ki-Seok;Cho, Seong-Min;Ha, Jin-Seok;Song, Yong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.48 no.1 / pp.22-30 / 2011
  • In this paper, we propose a memory-efficient multi-rate Low-Density Parity-Check (LDPC) decoder for China Mobile Multimedia Broadcasting (CMMB). We find the best trade-off between performance and circuit area by designing a partially parallel decoder capable of passing multiple messages in parallel. By designing an efficient address generation unit (AGU) with an index matrix, we reduce both the memory requirement and the computational complexity. The proposed regular LDPC decoder was designed in Verilog HDL and synthesized with Synopsys Design Compiler using a Chartered 0.18 μm CMOS cell library. The synthesized design has a gate count of 455K (in NAND2 equivalents). For the two code rates supported by CMMB, the rate-1/2 decoder achieves a throughput of 14.32 Mbps and the rate-3/4 decoder 26.97 Mbps. Compared with a conventional LDPC decoder for CMMB, the proposed design requires only 0.39% of the memory.
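The index-matrix idea behind such an AGU is common to quasi-cyclic LDPC codes: each base-matrix entry stores a circulant shift, and message addresses are computed arithmetically instead of storing the full parity-check matrix. A minimal software sketch, with an invented sub-matrix size and shift table (the CMMB code's actual parameters are not given in the abstract):

```python
# Hedged sketch of index-matrix address generation for a quasi-cyclic
# parity-check matrix. Z and the shift table are invented examples.

Z = 4  # circulant (sub-matrix) size
index_matrix = [
    [1, 0, -1],   # each entry is a circulant shift; -1 = all-zero block
    [-1, 2, 3],
]

def message_address(block_row, block_col, offset):
    # Map (block row, block column, row offset) to a flat memory address.
    shift = index_matrix[block_row][block_col]
    if shift < 0:
        return None                      # no connection in this block
    return block_col * Z + (offset + shift) % Z
```

Storing only the small shift table rather than the expanded matrix is the source of the memory saving; the hardware AGU evaluates the same modular arithmetic per clock cycle.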

A Degree of Difficulty in Operations Area in Elementary Mathematics (초등수학에서 연산영역의 곤란도 분석)

  • Ahn, Byoung-Gon
    • Journal of Elementary Mathematics Education in Korea / v.13 no.1 / pp.17-30 / 2009
  • This paper examines the basic skills in the four operations covered in the numbers-and-operations area from step 1 to step 3 of elementary mathematics. The evaluation results are as follows. First, addition and subtraction take the most instructional time. The average achievement rate in the operations area is 91.2%, indicating that most students understand the textbook content well. Students understand step 1 easily, although subtraction shows a lower rate than addition, and three-number mixed computation, horizontal calculation, and carrying (borrowing) are difficult areas for students. The contents of step 2 are well understood, but many mistakes appear in carrying (borrowing), and word problems are considered difficult. Second, multiplication first appears in step 2-Ga; the 'Multiplication 99' (times tables) unit takes 13 hours, the most of any unit. The rate for this unit is 89.4%, so students understand it well; however, students carry addition and subtraction errors into the multiplication process and have difficulty turning word problems into multiplication expressions. Third, division, which starts in step 3-Ga, has a rate of 89.9% and is also well understood. In conclusion, most students understand the four operations well, but accurate concepts, the relationship between multiplication and division, specific instruction in the principles of division calculation, and word problems need attention. Because the amount of content and the attainable rate depend on each school's situation, suggesting a universal standard is difficult; broader and more specific studies would help determine appropriate contents and effective teaching.
