• Title/Summary/Keyword: SA algorithm


A Tunable Transmitter - Tunable Receiver Algorithm for Accessing the Multichannel Slotted-Ring WDM Metropolitan Network under Self-Similar Traffic

  • Sombatsakulkit, Ekanun;Sa-Ngiamsak, Wisitsak;Sittichevapak, Suvepol
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.777-781
    • /
    • 2004
  • This paper presents an algorithm for the multichannel slotted-ring topology medium access control (MAC) protocol used in wavelength division multiplexing (WDM) networks. In multichannel rings, two main architectures have previously been proposed: Tunable Transmitter - Fixed Receiver (TTFR) and Fixed Transmitter - Tunable Receiver (FTTR). With TTFR, nodes can only receive packets on a fixed wavelength and can send packets on any wavelength associated with the packet's destination. The disadvantage of this architecture is that it requires as many wavelengths as there are nodes in the network, which is clearly a scalability limitation. In contrast, the FTTR architecture has the advantage that the number of nodes can be much larger than the number of wavelengths. Source nodes send packets on a fixed channel (or wavelength), and destination nodes can receive packets on any wavelength. If there are fewer wavelengths than nodes in the network, the nodes must also share all the wavelengths available for transmission. However, the fixed-wavelength approach of TTFR and FTTR results in low network utilization, because a source node with waiting data has to wait for an incoming empty slot on the corresponding wavelength. Therefore, this paper presents the Tunable Transmitter - Tunable Receiver (TTTR) approach, in which the transmitting node can send a packet over any wavelength and the receiving node can receive a packet from any wavelength. Moreover, self-similar distributed input traffic is used to evaluate the performance of the proposed algorithm; self-similar traffic reflects behavior over long durations better than the Poisson distribution, which is suited only to short durations. In order to increase bandwidth efficiency, the Destination Stripping approach is used to mark a slot that has reached its desired destination as empty immediately at the destination node, so the slot does not need to travel back to the source node to be marked as empty, as in the Source Stripping approach. A MATLAB simulator is used to evaluate the performance of FTTR, TTFR, and TTTR over 4- and 16-node ring networks. The simulation results show that the proposed algorithm achieves higher network utilization and average throughput per node and reduces the average queuing delay. As future work, mathematical analysis of these algorithms will be the main research topic.

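The TTTR access rule with destination stripping described in this abstract can be illustrated with a minimal discrete-slot model. The sketch below is an illustrative Python toy, not the paper's MATLAB simulator; the node count, wavelength count, slot count, queue contents, and rotation scheme are all assumptions.

```python
import random

NODES, WAVELENGTHS, SLOTS = 4, 2, 8          # assumed toy dimensions

# Each wavelength carries a ring of slots; a slot is None (empty) or (src, dst).
rings = [[None] * SLOTS for _ in range(WAVELENGTHS)]
queues = [[(n, (n + 1 + random.randrange(NODES - 1)) % NODES) for _ in range(3)]
          for n in range(NODES)]             # 3 queued packets per node, dst != src

def tick():
    """Advance every ring one slot and apply TTTR access with destination stripping."""
    for ring in rings:
        ring.insert(0, ring.pop())           # rotate slots around the ring
    for node in range(NODES):
        sent = False                         # one tunable transmitter per node
        for ring in rings:
            slot = ring[node]
            if slot is not None and slot[1] == node:
                ring[node] = None            # destination stripping frees the slot here
            if not sent and ring[node] is None and queues[node]:
                ring[node] = queues[node].pop(0)   # TTTR: transmit on any free wavelength
                sent = True

for _ in range(50):
    tick()
print("packets still queued:", sum(len(q) for q in queues))
```

Because the destination frees the slot immediately, the same slot can carry another packet on the remainder of its trip around the ring, which is the bandwidth-efficiency argument made in the abstract.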

Transaction Pattern Discrimination of Malicious Supply Chain using Tariff-Structured Big Data (관세 정형 빅데이터를 활용한 우범공급망 거래패턴 선별)

  • Kim, Seongchan;Song, Sa-Kwang;Cho, Minhee;Shin, Su-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.2
    • /
    • pp.121-129
    • /
    • 2021
  • In this study, we try to minimize tariff risk by constructing a hazardous cargo screening model based on Association Rule Mining, one of the data mining techniques. For this, the risk level between supply chains is calculated with the Apriori algorithm, an association analysis algorithm, using the big data of import declaration forms from the Korea Customs Service (KCS). We perform data preprocessing and association rule mining to generate a model to be used in screening the supply chain. In the preprocessing step, we extract the attributes required for rule generation from the import declaration data after removing errors. We then generate the rules by using the extracted attributes as inputs to the Apriori algorithm. The generated association rule model is loaded into the KCS screening system. When an import declaration that should be checked is received, the screening system refers to the model and returns a confidence value based on the supply chain information in the import declaration data. The result is used to determine whether to inspect the import case. Import declaration data covering 2 years and 6 months were divided into training and test data, and 5-fold cross-validation yielded 16.6% precision and 33.8% recall, about 3.4 times higher precision and 1.5 times higher recall than frequency-based methods. This confirms that the proposed method is an effective way to reduce tariff risks.
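As a rough illustration of the rule-generation and confidence-scoring steps, the sketch below runs a hand-rolled level-wise Apriori pass over toy transactions and prints rules pointing at a "risk" label. The field names, thresholds, and transactions are assumptions for illustration, not the KCS data or the paper's exact pipeline.

```python
from itertools import combinations

# Toy "import declaration" transactions: each is a set of supply-chain attributes.
transactions = [
    {"exporter:A", "origin:CN", "hscode:8471"},
    {"exporter:A", "origin:CN", "hscode:8471", "risk"},
    {"exporter:B", "origin:VN", "hscode:6109"},
    {"exporter:A", "origin:CN", "risk"},
]
MIN_SUPPORT, MIN_CONF = 0.5, 0.6             # assumed thresholds

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# Level-wise Apriori: grow candidate itemsets from the frequent sets of the previous level.
items = {frozenset([i]) for t in transactions for i in t}
frequent, level = [], {s for s in items if support(s) >= MIN_SUPPORT}
while level:
    frequent.extend(level)
    level = {a | b for a in level for b in level
             if len(a | b) == len(a) + 1 and support(a | b) >= MIN_SUPPORT}

# Rules of the form antecedent -> "risk", scored by confidence.
for itemset in frequent:
    if "risk" in itemset and len(itemset) > 1:
        antecedent = itemset - {"risk"}
        conf = support(itemset) / support(antecedent)
        if conf >= MIN_CONF:
            print(set(antecedent), "->", "risk", f"confidence={conf:.2f}")
```

The confidence value printed here plays the role of the score the screening system returns when a new declaration matches a rule's antecedent.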

A Combat Effectiveness Evaluation Algorithm Considering Technical and Human Factors in C4I System (NCW 환경에서 C4I 체계 전투력 상승효과 평가 알고리즘 : 기술 및 인적 요소 고려)

  • Jung, Whan-Sik;Park, Gun-Woo;Lee, Jae-Yeong;Lee, Sang-Hoon
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.55-72
    • /
    • 2010
  • Recently, the battlefield environment has changed from platform-centric warfare (PCW), which focuses on maneuvering forces, into network-centric warfare (NCW), which is based on the connectivity of each asset through the warfare information system, as information technology advances. In particular, the C4I (Command, Control, Communication, Computer and Intelligence) system can be an important factor in achieving NCW. It is generally used to provide direction across distributed forces and status feedback from those forces. It can deliver important information more quickly and in the correct format to friendly units, and it can achieve information superiority through SA (Situational Awareness). Most advanced countries have developed such systems and already applied them in military operations. Therefore, ROK forces have also been developing C4I systems such as KJCCS (Korea Joint Command Control System) and are increasing the budgets for establishing warfare information systems. However, it is difficult to evaluate C4I effectiveness properly owing to a lack of suitable methods, so a new combat effectiveness evaluation method suitable for NCW is needed. Existing evaluation methods lay disproportionate emphasis on technical factors and leave much to be desired with respect to human factors. Therefore, it is necessary to consider both technical and human factors when evaluating combat effectiveness. In this study, we propose a new combat effectiveness evaluation algorithm called E-TechMan (A Combat Effectiveness Evaluation Algorithm Considering Technical and Human Factors in C4I System). This algorithm uses the rule of Newton's second law ($F=\frac{m\Delta\upsilon}{\Delta t}\Rightarrow\frac{M\upsilon I}{T}\times C$). Five factors are considered in the combat effectiveness evaluation: network power (M), movement velocity (v), information accuracy (I), command and control time (T) and collaboration level (C). Previous research did not consider the value of nodes and arcs in evaluating the network power after the C4I system has been established; in addition, collaboration level, which can be a major factor in combat effectiveness, was not considered. The E-TechMan algorithm is applied to the JFOS-K (Joint Fire Operating System-Korea) system, which connects the KJCCS of the Korean armed forces with the JADOCS (Joint Automated Deep Operations Coordination System) of the U.S. armed forces and achieves a real-time sensor-to-shooter system at the JCS (Joint Chiefs of Staff) level. We compare the results of combat effectiveness evaluation by E-TechMan with those by other algorithms (e.g., C2 Theory, Newton's second law). Combat effectiveness can be evaluated more effectively and substantially by the E-TechMan algorithm. This study is meaningful because it improves the level of realism in the calculation of combat effectiveness in a C4I system. Part 2 describes the changes of the war paradigm and previous combat effectiveness evaluation methods such as C2 theory, while Part 3 explains the E-TechMan algorithm specifically. Part 4 presents the application to JFOS-K and analyzes the results against other algorithms. Part 5 provides the conclusions.
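A direct numeric reading of the relation quoted in the abstract, F = (M·v·I/T)·C, is sketched below. The input values and units are illustrative placeholders, not figures from the paper.

```python
def combat_effectiveness(network_power, velocity, info_accuracy,
                         c2_time, collaboration):
    """E-TechMan-style score per the abstract's relation: F = (M * v * I / T) * C."""
    return (network_power * velocity * info_accuracy / c2_time) * collaboration

# Illustrative placeholder inputs (not from the paper).
print(combat_effectiveness(network_power=120.0, velocity=8.0,
                           info_accuracy=0.9, c2_time=15.0, collaboration=0.8))
```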

Three-Dimensional Image Display System using Stereogram and Holographic Optical Memory Techniques (스테레오그램과 홀로그래픽 광 메모리 기술을 이용한 3차원 영상 표현 시스템)

  • 김철수;김수중
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.6B
    • /
    • pp.638-644
    • /
    • 2002
  • In this paper, we implement a three-dimensional image display system using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, the incident angle of the reference beam must be controlled in real time to store and reconstruct stereo images, so a BPH (binary phase hologram) and an LCD (liquid crystal display) are used to control the reference beam. The reference beams are obtained by Fourier transforming BPHs designed with the SA (simulated annealing) algorithm, and the BPHs are displayed on the LCD at 0.05-second intervals using application software to reconstruct the stereo images. The input images are displayed on the LCD without a polarizer/analyzer to maintain uniform beam intensity regardless of the brightness of the input images. The input images and BPHs are edited using application software (Photoshop) so that they share the same recording schedule time interval during storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera placed behind the analyzer, which converts phase information into brightness information. In the output plane, we use an LCD shutter synchronized to a monitor that displays alternating left- and right-eye images for depth perception. We demonstrate an optical experiment that repeatedly stores and reconstructs four stereo images in BaTiO$_3$ using the proposed holographic optical memory techniques.
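The BPH design step relies on simulated annealing. A compact sketch of that idea is shown below: binary phase pixels are flipped to steer the Fourier-plane intensity toward a target spot. The target pattern, hologram size, iteration count, and cooling constants are assumptions for illustration, not the paper's design parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                       # assumed hologram size
phase = rng.integers(0, 2, size=(N, N))      # binary phase pixels: 0 or pi

target = np.zeros((N, N))
target[N // 4, N // 4] = 1.0                 # assumed target: a single off-axis spot

def cost(p):
    """Mismatch between the normalized Fourier-plane intensity and the target."""
    field = np.exp(1j * np.pi * p)
    inten = np.abs(np.fft.fft2(field)) ** 2
    return np.sum((inten / inten.max() - target) ** 2)

T, alpha = 1.0, 0.97                         # initial temperature and geometric cooling
current = cost(phase)
for _ in range(3000):
    i, j = rng.integers(0, N, size=2)
    phase[i, j] ^= 1                         # propose flipping one binary pixel
    new = cost(phase)
    if new < current or rng.random() < np.exp((current - new) / T):
        current = new                        # accept: always if better, else by Metropolis rule
    else:
        phase[i, j] ^= 1                     # reject: undo the flip
    T *= alpha
print("final cost:", current)
```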

Content Analysis-based Adaptive Filtering in The Compressed Satellite Images (위성영상에서의 적응적 압축잡음 제거 알고리즘)

  • Choi, Tae-Hyeon;Ji, Jeong-Min;Park, Joon-Hoon;Choi, Myung-Jin;Lee, Sang-Keun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.5
    • /
    • pp.84-95
    • /
    • 2011
  • In this paper, we present a deblocking algorithm that removes the grid and staircase noise, called "blocking artifacts", that occurs in compressed satellite images. In particular, the given satellite images are compressed with equal quantization coefficients along each row according to region complexity, and more complicated regions are compressed more strongly. However, this approach has the problem that relatively less complicated regions lying in the same row as complicated regions also suffer blocking artifacts. Removing these artifacts with a general deblocking algorithm can blur complex regions that should be preserved, and a general filter also fails to preserve curved edges. Therefore, the proposed algorithm is an adaptive filtering scheme that removes blocking artifacts while preserving image details, including curved edges, using the given quantization step size and content analysis. In particular, a WLFPCA (weighted lowpass filter using principal component analysis) is employed to reduce the artifacts around edges. Experimental results show that the proposed method outperforms SA-DCT in terms of subjective image quality.
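As a highly simplified stand-in for the adaptive idea (not the paper's WLFPCA filter), the sketch below smooths 8x8 block boundaries only where the row's quantization step is coarse and the boundary is locally flat, which is one way to read "content analysis-based" filtering. The thresholds, block size, and synthetic image are assumptions.

```python
import numpy as np

def adaptive_deblock(img, qstep_per_row, block=8, flat_thresh=10.0):
    """Illustrative deblocking: average across vertical block boundaries
    only where the row's quantization step is coarse and the boundary is flat."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        if qstep_per_row[y] < 16:                    # assumed "fine quantization" cutoff
            continue
        for x in range(block, w, block):             # vertical block-boundary columns
            left, right = out[y, x - 1], out[y, x]
            if abs(left - right) < flat_thresh:      # flat region -> artifact, not a real edge
                avg = 0.5 * (left + right)
                out[y, x - 1] = out[y, x] = avg      # smooth across the boundary
    return out

# Toy usage with a random "image" and per-row quantization steps (assumed values).
rng = np.random.default_rng(1)
img = rng.integers(0, 255, size=(32, 32))
qsteps = np.full(32, 24)
smoothed = adaptive_deblock(img, qsteps)
print(smoothed.shape)
```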

Reduction of a Numerical Grid Dependency in High-pressure Diesel Injection Simulation Using the Lagrangian-Eulerian CFD Method (Lagrangian-Eulerian 기법을 이용한 고압 디젤 분무 시뮬레이션의 수치해석격자 의존성 저감에 관한 연구)

  • Kim, Sa-Yop;Oh, Yun-Jung;Park, Sung-Wook;Lee, Chang-Sik
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.20 no.1
    • /
    • pp.39-45
    • /
    • 2012
  • In standard CFD codes, the Lagrangian-Eulerian method is widely used to simulate a liquid spray penetrating into a gaseous phase. Though this method gives a simple solution at low computational cost, it has been reported that Lagrangian spray models exhibit numerical grid dependency, resulting in serious numerical errors. Many studies have shown that the grid dependency arises from two sources: the first is inaccurate prediction of the droplet-gas relative velocity, and the second is that the probability of binary droplet collision depends on the grid resolution. In order to solve the grid dependency problem, improved spray models are implemented in the KIVA-3V code in this study. To reduce the errors in predicting the relative velocity, the momentum transfer from the gaseous phase to the liquid particles is resolved according to gas-jet theory. In addition, an advanced droplet collision modeling algorithm that surmounts the grid dependency problem is applied. The computation is then compared with experimental results to validate the improved spray model. By simultaneously accounting for the momentum coupling and the droplet collision modeling, a successful reduction of the numerical grid dependency is accomplished in the simulation of the high-pressure diesel injection spray.
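The gas-jet correction to the droplet-gas relative velocity can be sketched schematically. The closure below uses a commonly cited equivalent-gas-jet form (centerline velocity capped near the nozzle and decaying with axial distance) rather than the exact model implemented in KIVA-3V in this paper; the entrainment constant and all inputs are illustrative assumptions.

```python
import math

def gas_jet_axial_velocity(x, u_inj, d_nozzle, rho_liquid, rho_gas, k_entrain=3.0):
    """Centerline gas velocity of an equivalent turbulent gas jet:
    equivalent diameter d_eq = d_nozzle * sqrt(rho_l / rho_g), 1/x decay
    capped at the injection velocity near the nozzle (illustrative closure)."""
    d_eq = d_nozzle * math.sqrt(rho_liquid / rho_gas)
    return u_inj * min(1.0, k_entrain * d_eq / max(x, 1e-9))

def droplet_relative_velocity(u_droplet, x, u_inj, d_nozzle, rho_l, rho_g):
    """Drive drag with (u_droplet - u_gas_jet) instead of the coarse Eulerian cell value;
    this is, in schematic form, the grid-dependency fix the abstract refers to."""
    return u_droplet - gas_jet_axial_velocity(x, u_inj, d_nozzle, rho_l, rho_g)

# Illustrative numbers only (not from the paper): 180 m/s injection, 0.15 mm nozzle.
print(droplet_relative_velocity(u_droplet=120.0, x=0.02, u_inj=180.0,
                                d_nozzle=1.5e-4, rho_l=830.0, rho_g=25.0))
```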

A Study of Cooling Schedule Parameters on Adaptive Simulated Annealing in Structural Optimization (구조 최적화에서 적응 시뮬레이티드 애닐링의 냉각변수에 대한 연구)

  • Park, Jung-Sun;Jung, Suk-Hoon;Ji, Sang-Hyun;Im, Jong-Bin
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.32 no.6
    • /
    • pp.49-55
    • /
    • 2004
  • The increase in computing power makes stochastic optimization algorithms available for structural design. One of these stochastic algorithms, the simulated annealing algorithm, has been applied to various structural optimization problems. By applying several cooling schedules, such as simulated annealing (SA), Boltzmann annealing (BA), fast annealing (FA) and adaptive simulated annealing (ASA), truss structures are optimized to improve the quality of the objective functions and reduce the number of function evaluations. In this paper, many cooling parameters are applied to the cooling schedule of ASA, and the influence of the cooling parameters is investigated to find rules of thumb for using ASA. In addition, a cooling schedule that combines BA and ASA is applied to the optimization of ten-bar and twenty-five-bar truss structures.
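The cooling schedules named in the abstract have standard textbook forms; the short sketch below compares how fast each one cools. The initial temperature T0, the ASA control parameter c, and the number of design variables D are assumed values, not the paper's settings.

```python
import math

def boltzmann(T0, k):            # BA: logarithmic cooling, T_k = T0 / ln(k + 1)
    return T0 / math.log(k + 1.0)

def fast(T0, k):                 # FA (Cauchy): harmonic cooling, T_k = T0 / (k + 1)
    return T0 / (k + 1.0)

def adaptive(T0, k, D, c=1.0):   # ASA: exponential cooling in k**(1/D)
    return T0 * math.exp(-c * k ** (1.0 / D))

T0, D = 100.0, 10                # assumed initial temperature and design-variable count
for k in (1, 10, 100, 1000):
    print(f"k={k:5d}  BA={boltzmann(T0, k):8.3f}  "
          f"FA={fast(T0, k):8.3f}  ASA={adaptive(T0, k, D):8.3f}")
```

The comparison makes the practical trade-off visible: BA cools very slowly (strong convergence guarantees, many function evaluations), while ASA cools aggressively once k grows, which is why its cooling parameters matter so much in practice.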

The Simulation of Myocardium Conduction System using DEVCS and Discrete Time CAM (DEVCS 및 Discrete Time CAM을 이용한 심근 전도 시스템의 시뮬레이션)

  • Kim, K.N.;Nam, G.K.;Son, K.S.;Lee, Y.W.;Jun, K.R.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1997 no.05
    • /
    • pp.150-155
    • /
    • 1997
  • Modelling and simulation of the activation process of the myocardium are meaningful for understanding the special excitation conduction system of the heart and for studying cardiac function. In this paper, we propose a two-dimensional cellular automata model of the activation process of the myocardium and simulate it by means of discrete-time and discrete-event algorithms. In the model, cells are classified into anatomically distinct parts of the heart: the SA node, internodal tracts, AV node, His bundle, bundle branches and four layers of ventricular muscle, each of which is a set of cells with preassigned properties, namely activation time, refractory duration and conduction time between neighboring cells. Each cell in the model has state variables representing the state of the cell and simple state transition rules, executed by a state transition function, that change the values of the state variables. The simulation results are as follows. First, the normal and abnormal activation processes of the myocardium are simulated with both the discrete-time and the discrete-event formalism. Next, we show that the simulation results of the discrete-time and discrete-event cell space models are the same. Finally, we compare the simulation times of the discrete-event and discrete-time myocardium models, show that the discrete-event model requires much less simulation time, and conclude that the discrete-event simulation method is superior in terms of simulation time when the deviation of event time intervals is large.

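A minimal flavor of such a cell model is sketched below as a discrete-time cellular automaton with resting/active/refractory states. The uniform grid, timing constants, and single pacemaker cell are assumptions standing in for the paper's anatomical cell classification and conduction times.

```python
REST, ACTIVE, REFRACT = 0, 1, 2
ACT_LEN, REF_LEN = 2, 4                       # assumed activation/refractory durations (steps)

W, H = 20, 20
state = [[REST] * W for _ in range(H)]
timer = [[0] * W for _ in range(H)]
state[0][0] = ACTIVE                          # assumed pacemaker cell (stands in for the SA node)
timer[0][0] = ACT_LEN

def step():
    """One discrete-time update: active cells excite resting neighbors, then timers advance."""
    excite = set()
    for y in range(H):
        for x in range(W):
            if state[y][x] == ACTIVE:
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and state[ny][nx] == REST:
                        excite.add((ny, nx))
    for y in range(H):
        for x in range(W):
            if state[y][x] in (ACTIVE, REFRACT):
                timer[y][x] -= 1
                if timer[y][x] == 0:
                    if state[y][x] == ACTIVE:
                        state[y][x], timer[y][x] = REFRACT, REF_LEN   # activation ends, refractory begins
                    else:
                        state[y][x] = REST                            # refractory period over
    for y, x in excite:
        state[y][x], timer[y][x] = ACTIVE, ACT_LEN

for _ in range(30):
    step()
print("active cells after 30 steps:", sum(row.count(ACTIVE) for row in state))
```

A discrete-event version of the same model would, instead of scanning every cell each step, schedule only the state-change events, which is why it wins when event times are widely spread, as the abstract concludes.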

A detection algorithm for the installations and damages on a tunnel liner using the laser scanning data (레이저 스캐닝 데이터를 이용한 터널 시설물 및 손상부위 검측 알고리즘)

  • Yoon, Jong-Suk;Lee, Jun-S.;Lee, Kyu-Sung;SaGong, Myung
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.9 no.1
    • /
    • pp.19-28
    • /
    • 2007
  • Tunnel management is a time-consuming and expensive task. In particular, visual inspection of tunnels often requires considerable time and cost and poses problems in data gathering, storage and analysis. This study proposes a new approach to extracting information for tunnel management by using laser scanning technology. A prototype tunnel laser scanner was developed and used to obtain point clouds of a railway tunnel surface. The initial processing of the laser scanning data separates the laser pulses returned from installations attached to the tunnel liner, using the radiometric and geometric characteristics of the laser returns. Once the laser returns from the installations are separated and removed, physically damaged parts of the tunnel lining are detected: based on the plane fitted to the laser scanner data, damaged parts are identified by proximity analysis. The algorithms presented in this study successfully detect the physically damaged parts, which are verified against digital photographs of the corresponding locations on the tunnel surface.

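The damage-detection step (fit a reference surface, then flag points that deviate from it) can be sketched with a least-squares plane and a distance threshold, as below. The 2 cm threshold and the synthetic liner patch are assumptions; the paper's actual surface model and proximity analysis may differ.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud: returns centroid and unit normal (via SVD)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]                    # singular vector of the smallest singular value

def flag_damage(points, threshold=0.02):
    """Flag points whose distance to the fitted plane exceeds the threshold (assumed 2 cm)."""
    centroid, normal = fit_plane(points)
    dist = np.abs((points - centroid) @ normal)
    return dist > threshold

# Synthetic liner patch: a nearly flat wall with a small dented region (illustrative only).
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 5, 1000), rng.uniform(0, 3, 1000),
                       rng.normal(0, 0.003, 1000)])
pts[:30, 2] -= 0.05                            # simulated spall, 5 cm deep
print("points flagged as damaged:", int(flag_damage(pts).sum()))
```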

Development of a Design and Analysis Program for Automatic Transmission Applications to Consider the Planetary Gear Noise and Its Adaptation (자동변속기 유성기어 소음을 고려한 시스템 분석용 프로그램 개발 및 적용에 관한 연구)

  • Lee, Hyun Ku;Lee, Sang Hwa;Kim, Moo Suk;Hong, Sa Man;Kim, Si Woong;Yoo, Dong Kyu;Kwon, Hyun Sik;Kahraman, Ahmet
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.25 no.7
    • /
    • pp.487-495
    • /
    • 2015
  • A generalized special-purpose program called planetary transmission analysis (hereinafter PTA) is developed to improve planetary gear noise in automatic transmissions. PTA is capable of analyzing any typical one-degree-of-freedom automatic transmission gear train containing any number of simple, compound or complex-compound planetary gear sets. The kinematics module in PTA computes the rotational speeds of gears and carriers and calculates the order frequencies to predict the planetary noise components. The power flow analysis module performs a complete static force analysis, providing the forces, moments or torques of gears, bearings, clutches and connections. Based on the given type and number of planetary gear sets, the search algorithm determines all possible kinematic configurations and gear tooth combinations for a required set of gear ratios, while eliminating kinematic redundancies and unfavorable clutching sequences. By using the PTA program, the internal planetary speeds of a newly developed automatic transmission are obtained early, so potential noise problems can be predicted at an early design stage. By implementing PTA in the planetary gear NVH development procedure, planetary gear noise was successfully reduced by 10 dBA.
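The core relation behind such a kinematics module for a simple planetary set is the Willis equation; a small sketch of that calculation, including the gear-mesh order frequency used for noise prediction, is given below. The tooth counts and speeds are illustrative assumptions, not values from the paper.

```python
def ring_speed(n_sun, n_carrier, z_sun, z_ring):
    """Willis equation for a simple planetary set:
    (n_sun - n_carrier) / (n_ring - n_carrier) = -z_ring / z_sun."""
    return n_carrier - (n_sun - n_carrier) * z_sun / z_ring

def mesh_frequency_hz(n_carrier, n_ring, z_ring):
    """Gear-mesh frequency: ring tooth count times the ring speed relative to the carrier."""
    return z_ring * abs(n_ring - n_carrier) / 60.0   # speeds in rpm

# Illustrative case: sun driven at 2000 rpm, ring held (a common reduction arrangement).
z_sun, z_ring = 30, 78                               # assumed tooth counts
n_sun, n_ring = 2000.0, 0.0
# Solving Willis for the carrier with the ring fixed gives n_c = n_s * z_s / (z_s + z_r).
n_carrier = n_sun * z_sun / (z_sun + z_ring)
print("carrier speed [rpm]:", round(n_carrier, 1))
print("check ring speed [rpm]:", round(ring_speed(n_sun, n_carrier, z_sun, z_ring), 6))
print("mesh frequency [Hz]:", round(mesh_frequency_hz(n_carrier, n_ring, z_ring), 1))
```

Running this over each gear state gives the speed and order-frequency table that lets planetary noise components be traced back to a specific mesh early in the design stage.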