• Title/Summary/Keyword: Speed Decision Algorithm (속도 결정 알고리즘)


Design of ATM Switch-based on a Priority Control Algorithm (우선순위 알고리즘을 적용한 상호연결 망 구조의 ATM 스위치 설계)

  • Cho Tae-Kyung;Cho Dong-Uook;Park Byoung-Soo
    • The Journal of the Korea Contents Association, v.4 no.4, pp.189-196, 2004
  • Most recent research on ATM switches has been based on multistage interconnection networks, which are known for their regularity and self-routing property. These networks can switch packets simultaneously and in parallel. However, they are blocking networks in the sense that packets can collide with each other, and the Banyan network has mainly been used as the base structure. There are several ways to reduce blocking or to increase the throughput of Banyan-type switches: increasing the internal link speed, placing buffers in each switching node, using multiple paths, distributing the load evenly in front of the Banyan network, and so on. This paper therefore proposes the use of a recirculating shuffle-exchange network to reduce blocking and to lower hardware complexity. The structure combines a recirculating shuffle-exchange network, which is simple in hardware, with a tree-structured Rank network that assigns priority numbers to packets addressed to the same destination, forwards only the packet with the highest priority to the next network, and recirculates the others to the previous one. Packets entering the Banyan network are then self-routed to their final destinations through a decomposition and composition algorithm. To analyze throughput, waiting time, and packet loss ratio as functions of buffer size, packet arrivals are modeled by a binomial distribution. At 50% load, a buffer size of more than 15 yields an acceptable packet loss ratio. The proposed design thus reduces hardware complexity by using a recirculating shuffle-exchange network instead of a bitonic sorter.
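The buffer-size analysis under binomial arrivals can be illustrated with a small slot-level simulation; the port count, the one-departure-per-slot service discipline, and the slot budget below are assumptions for illustration, not the paper's analytic model:

```python
import random

def simulate_loss(n_inputs, load, buf_size, slots=20000, seed=1):
    """Estimate the packet loss ratio of one output buffer of size buf_size.

    Each of n_inputs ports independently sends a packet per slot with
    probability load/n_inputs (so arrivals per slot are binomial); the
    buffer serves one packet per slot.  Illustrative sketch only.
    """
    rng = random.Random(seed)
    queue = 0
    arrived = lost = 0
    for _ in range(slots):
        # binomial number of arrivals this slot
        k = sum(rng.random() < load / n_inputs for _ in range(n_inputs))
        arrived += k
        accepted = min(k, buf_size - queue)   # overflow packets are lost
        lost += k - accepted
        queue += accepted
        if queue > 0:
            queue -= 1                        # one departure per slot
    return lost / arrived if arrived else 0.0
```

At 50% load a buffer of 15 shows essentially no loss in this toy model, while a buffer of 1 loses packets whenever two or more arrive in the same slot, mirroring the trend the abstract reports.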


Skin Region Detection Using Histogram Approximation Based Mean Shift Algorithm (Mean Shift 알고리즘 기반의 히스토그램 근사화를 이용한 피부 영역 검출)

  • Byun, Ki-Won;Joo, Jae-Heum;Nam, Ki-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.4, pp.21-29, 2011
  • Existing skin detection methods based on prior knowledge of skin color choose the threshold separating the background from the skin region subjectively, through repeated experiments, and the threshold is selected passively according to the background and illumination conditions. Consequently, their performance depends entirely on the threshold estimated through those experiments. To overcome this drawback, this paper proposes a skin region detection method using histogram approximation based on the mean shift algorithm. The proposed method separates the background and the skin region by applying mean shift to the histogram of a skin map, generated from the input image by measuring similarity to a standard skin color in the CbCr color space, and by actively finding the maximum to which the search converges over brightness levels. Since the histogram is a discontinuous function accumulated over pixel brightness values, it is approximated as a Gaussian Mixture Model (GMM) using Bezier curves. The proposed method thus finds the dividing point actively as the maximum located by mean shift, rather than using a manually selected threshold as existing methods do. Experiments show that the method detects skin regions effectively.
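The core step, letting mean shift climb the skin-map histogram to a maximum instead of thresholding manually, can be sketched in one dimension; the bandwidth and convergence settings are illustrative, and the GMM/Bezier approximation stage is omitted:

```python
import numpy as np

def mean_shift_mode_1d(hist, start, bandwidth=10, iters=100, tol=1e-3):
    """Return the brightness level where 1-D mean shift converges on a
    histogram.

    At each step the bins within +-bandwidth of the current position are
    averaged, weighted by their counts, until the shift falls below tol.
    A simplified sketch of the idea, not the paper's full pipeline.
    """
    x = float(start)
    for _ in range(iters):
        lo = int(max(0, x - bandwidth))
        hi = int(min(len(hist), x + bandwidth + 1))
        bins = np.arange(lo, hi)
        w = hist[lo:hi]
        if w.sum() == 0:
            break
        new_x = float((bins * w).sum() / w.sum())
        if abs(new_x - x) < tol:
            return new_x
        x = new_x
    return x
```

Starting anywhere on the slope of a histogram peak, the window mean is pulled toward the denser side each iteration, so the search converges to the local maximum, which then serves as the dividing point.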

New Frequency-domain GSC using the Modified-CFAR Algorithm (변형된 CFAR 알고리즘을 이용한 새로운 주파수영역 GSC)

  • Cho, Myeong-Je;Moon, Sung-Hoon;Han, Dong-Seog;Jung, Jin-Won;Kim, Soo-Joong
    • Journal of the Korean Institute of Telematics and Electronics S, v.36S no.2, pp.96-107, 1999
  • Generalized sidelobe cancellers (GSCs) are used to suppress interference in array radar. Frequency-domain GSCs converge faster than time-domain GSCs because they remove the correlation between interferences using a frequency-domain least mean square (LMS) algorithm. However, this advantage has not been fully exploited, since the weights of all frequency bins are conventionally updated, even interference-free bins. In this paper, we propose a new frequency-domain GSC based on a constant false-alarm rate (CFAR) detector, which adaptively determines the bins whose weights are updated according to the power in each frequency bin. The canceller updates the weight of a bin only when its power is high because of an interference signal. Computer simulations show that the new GSC reduces the number of iterations required for convergence by more than 100 compared with conventional GSCs and improves the signal-to-noise ratio (SNR) by more than 5 dB. Moreover, far fewer weight updates are required for adaptation than in the conventional scheme.
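The bin-selective update rule can be sketched as a cell-averaging CFAR gate in front of a per-bin complex LMS step; the CA-CFAR variant, parameter names, and threshold scale below are illustrative assumptions, not the paper's modified CFAR:

```python
import numpy as np

def cfar_gated_lms_update(weights, x_bins, d_bins, mu=0.05,
                          guard=1, ref=4, scale=3.0):
    """One frequency-domain LMS step that adapts only CFAR-flagged bins.

    For each bin, the noise level is estimated from `ref` reference bins on
    each side (skipping `guard` guard bins, wrapping circularly); the bin is
    adapted only when its power exceeds `scale` times that estimate.
    """
    n = len(x_bins)
    power = np.abs(x_bins) ** 2
    updated = []
    for k in range(n):
        idx = [(k + j) % n for j in range(-guard - ref, -guard)] + \
              [(k + j) % n for j in range(guard + 1, guard + ref + 1)]
        noise = power[idx].mean()
        if power[k] > scale * noise:        # interference present in bin k
            err = d_bins[k] - weights[k] * x_bins[k]
            weights[k] += mu * np.conj(x_bins[k]) * err  # complex LMS step
            updated.append(k)
    return weights, updated
```

Only the bins dominated by interference pay the adaptation cost, which is the source of the reduced weight-update count the abstract reports.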


Fast Combinatorial Programs Generating Total Data (전수데이터를 생성하는 빠른 콤비나토리얼 프로그램)

  • Jang, Jae-Soo;Won, Shin-Jae;Cheon, Hong-Sik;Suh, Chang-Jin
    • Journal of the Korea Academia-Industrial cooperation Society, v.14 no.3, pp.1451-1458, 2013
  • This paper deals with programs and algorithms that generate the full data sets satisfying the basic combinatorial requirements of combinations, permutations, and partial permutations (r-permutations), which are used for exhaustive data testing or as simulation input. We collected programs for each of these categories, selected the fastest in each, and then developed new programs that further reduce the running time. The study proceeded in three steps. First, hundreds of algorithms and programs found on the internet were collected and corrected so that they would run. Second, the running time of every working program was measured and a few fast ones were selected. Third, the fast programs were analyzed in depth and pseudo-code versions were produced. We succeeded in developing two faster programs: the combination program saves running time by removing recursive function calls, and the r-permutation program becomes faster by combining the best combination program with the best permutation program. In our performance tests, the two programs improve running speed by 22-34% and 62-226%, respectively, over the fastest collected programs. The pseudo-code provided here can easily be adapted to particular cases, used to predict the execution time of data processing, to determine the validity of the processing, and to generate total data with minimum-access programming.
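The recursion-free combination generation the paper credits for its speedup can be illustrated with a standard odometer-style iterator; this is a generic sketch of the technique, not the paper's program:

```python
def combinations_iter(n, r):
    """Yield all r-combinations of {0, ..., n-1} in lexicographic order
    without recursion, by incrementing the rightmost index that can move."""
    if r == 0:
        yield ()
        return
    c = list(range(r))               # first combination: 0, 1, ..., r-1
    while True:
        yield tuple(c)
        # find the rightmost position that has not reached its maximum
        i = r - 1
        while i >= 0 and c[i] == i + n - r:
            i -= 1
        if i < 0:
            return                   # last combination emitted
        c[i] += 1
        for j in range(i + 1, r):    # reset the tail just above c[i]
            c[j] = c[j - 1] + 1
```

An r-permutation generator can then be composed, as the abstract suggests, by feeding each combination to a (recursion-free) permutation routine.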

Analysis of the applicability of parameter estimation methods for a transient storage model (저장대모형의 매개변수 산정을 위한 최적화 기법의 적합성 분석)

  • Noh, Hyoseob;Baek, Donghae;Seo, Il Won
    • Journal of Korea Water Resources Association, v.52 no.10, pp.681-695, 2019
  • The Transient Storage Model (TSM) is one of the most widely used models accounting for complex solute transport in natural rivers, characterizing river properties with four key TSM parameters. These parameters are estimated via inverse modeling, by solving an optimization problem that fits the simulated curve to the measured curve obtained from a tracer test. Several studies have reported uncertainty in the estimates arising from the non-convexity of this problem. In this study, we assessed the best combination of optimization method and objective function for TSM parameter estimation, using tracer test data from Cheong-mi Creek. To find the optimization setting that guarantees both convergence and speed, evolutionary-algorithm (EA) based global optimization methods, such as the CCE of SCE-UA and the MCCE of SP-UCI, were compared under different error-based objective functions within the Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL) framework. Overall, the multi-EA SC-SAHEL with the Percent Mean Squared Error (PMSE) objective function was the fastest and most stable setting in terms of convergence.
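The two ingredients of such a setting, an error-based objective and an evolutionary search, can be sketched as follows. The PMSE normalization shown is one common definition and may differ from the paper's, and the tiny elitist loop is an illustrative stand-in for SC-SAHEL's CCE/MCCE operators and for a real TSM simulator:

```python
import random

def pmse(observed, simulated):
    """Percent Mean Squared Error: MSE normalized by the squared peak of
    the observed curve, in percent (one common definition)."""
    peak = max(observed)
    return 100.0 * sum((o - s) ** 2 for o, s in zip(observed, simulated)) \
        / (len(observed) * peak ** 2)

def evolve(objective, bounds, pop=30, gens=60, seed=0):
    """Minimal elitist evolutionary search over box bounds: keep the best
    third, breed children as jittered midpoints of two elites."""
    rng = random.Random(seed)
    ppl = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        ppl.sort(key=objective)
        elite = ppl[: pop // 3]
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)
            child = [min(max((x + y) / 2 + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        ppl = elite + children
    return min(ppl, key=objective)
```

In the real workflow, `objective` would run the TSM forward simulation for a candidate four-parameter vector and return the PMSE against the measured tracer curve.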

Backward Path Tracking Control of a Trailer Type Robot Using a RCGS-Based Model (RCGA 기반의 모델을 이용한 트레일러형 로봇의 후방경로 추종제어)

  • Wi, Yong-Uk;Kim, Heon-Hui;Ha, Yun-Su;Jin, Gang-Gyu
    • Journal of Institute of Control, Robotics and Systems, v.7 no.9, pp.717-722, 2001
  • This paper presents a methodology for the backward path tracking control of a trailer-type robot consisting of two parts, a tractor and a trailer. Controlling the motion of a trailer vehicle is difficult because its dynamics are non-holonomic. This paper therefore proposes modeling and parameter estimation of the system using a real-coded genetic algorithm (RCGA); a backward path tracking control algorithm is then derived from the linearized model. Experimental results verify the effectiveness of the proposed method.

  • PDF

Spatio-temporal Mode Selection Methods of Fast H.264 Using Multiple Reference Frames (다중 참조 영상을 이용한 고속 H.264의 움직임 예측 모드 선택 기법)

  • Kwon, Jae-Hyun;Kang, Min-Jung;Ryu, Chul
    • The Journal of Korean Institute of Communications and Information Sciences, v.33 no.3C, pp.247-254, 2008
  • H.264 achieves better coding efficiency than existing video coding standards such as H.263 and MPEG-4, based on multiple reference frames for variable-block-size motion estimation, quarter-pixel motion estimation and compensation, the $4{\times}4$ integer DCT, rate-distortion optimization, and so on. However, the many modules that raise its performance also increase its complexity, so fast algorithms are needed for practical implementations. In this paper, we propose a fast mode decision algorithm that skips variable-block-size motion estimation and spatial predictive coding, which account for most of the encoder complexity, by exploiting the temporal and spatial properties of fast mode selection. Experimental results demonstrate that the proposed approach saves up to 65% of encoding time compared with the H.264 reference while maintaining visual quality.
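The spatio-temporal idea, pruning the mode search when neighboring blocks agree, can be caricatured as follows; the mode names and the skip condition are toy assumptions, not the paper's actual decision rules:

```python
def candidate_modes(spatial_modes, temporal_mode,
                    all_modes=("SKIP", "16x16", "16x8", "8x16", "8x8")):
    """Choose which macroblock modes are worth testing, from the modes of
    spatial neighbours (e.g. left/top blocks) and the co-located block in
    the reference frame.

    When the whole neighbourhood used large or SKIP modes, the area is
    assumed homogeneous and small-partition motion estimation is skipped;
    otherwise fall back to the full mode search.
    """
    hints = set(spatial_modes) | {temporal_mode}
    if hints <= {"SKIP", "16x16"}:
        return [m for m in all_modes if m in ("SKIP", "16x16")]
    return list(all_modes)
```

Because motion estimation dominates encoder complexity, shrinking the candidate set this way is where the reported encoding-time savings come from.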

Distributed Hierarchical Location Placement of Core Nodes in the OCBT Multicast Protocol (OCBT 멀티캐스트 프로토콜에서 core 노드의 분산 계층 위치 결정)

  • 황경호;조동호
    • The Journal of Korean Institute of Communications and Information Sciences, v.25 no.1A, pp.90-95, 2000
  • In the Ordered Core Based Tree (OCBT) protocol, the core location is the feature that most affects performance. In this paper, the placement of cores at multiple levels is studied. In the proposed algorithm, each node in the network evaluates the sum of shortest-path costs from all other nodes, and the entire network is divided into a three-level hierarchy of regions (Small, Medium, Large). The node with the lowest cost in each S-Region becomes a core node. The core nodes in each S-Region then evaluate the sum of shortest-path costs from all other core nodes in the same M-Region, and the core node with the lowest cost becomes the upper-level core node. The highest-level core node is decided similarly in the L-Region. The proposed algorithm is compared with two conventional placement methods: the random method, which places core nodes randomly, and the center method, which locates each core node at the node nearest the center of its S-Region and the highest-level core node at the core node nearest the center of the entire network. Extensive simulations of mean tree cost and join latency show that the proposed algorithm outperforms both the random and center methods.
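The S-Region selection rule, picking the node with the lowest sum of shortest-path costs to the other nodes, can be sketched for a single region; the weighted-adjacency-dict representation and single-region scope are illustrative:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path costs from src over a weighted adjacency dict
    {node: {neighbour: cost}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def pick_core(adj, region_nodes):
    """Return the region node with the lowest sum of shortest-path costs
    to every other node in the region (the S-Region rule above)."""
    best, best_cost = None, float("inf")
    for u in region_nodes:
        dist = dijkstra(adj, u)
        cost = sum(dist[v] for v in region_nodes if v != u)
        if cost < best_cost:
            best, best_cost = u, cost
    return best
```

Applying the same rule one level up, with `region_nodes` replaced by the S-Region cores of an M-Region, yields the upper-level core, and likewise for the L-Region.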


The fabrication and evaluation of CdS sensor for diagnostic x-ray detector application (진단 X선 검출기 적용을 위한 CdS 센서 제작 및 성능 평가)

  • Park, Ji-Koon;Lee, Mi-Hyun;Choi, Young-Zoon;Jung, Bong-Zae;Choi, Il-Hong;Kang, Sang-Sik
    • Journal of the Korean Society of Radiology, v.4 no.2, pp.21-25, 2010
  • Recently, various compound semiconductors have been investigated as radiation detection materials for diagnostic X-ray detectors. In this paper, we fabricated a CdS detection sensor, which offers good photosensitivity and high X-ray absorption efficiency among compound semiconductors, and evaluated its applicability by investigating its detection properties over the energy range of a diagnostic X-ray generator. We fabricated a line voltage selector for signal acquisition from the CdS sensors, designed the voltage detection and rectifying circuits, applied a relative-relation algorithm for the X-ray exposure conditions, and built an interface board with a DAC controller. Performance was evaluated by processing, with an ANOVA program, the voltage profiles obtained from the sensor's resistance change under varying tube voltage, tube current, and exposure time of the X-ray generator. The experimental results show that the error rate decreases as tube voltage and tube current increase, reaching 6% at 90 kVp and 0.4% at 320 mA, with a coefficient of determination of 0.98 for a 1:1 relative relation. The error rate decreases exponentially with exposure time because of the delayed response of the CdS material, reaching 2.3% at 320 ms. Finally, the error rate with respect to X-ray dose is below 10%, with a high correlation (coefficient of determination 0.9898).

A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.231-252, 2021
  • Artificial intelligence (AI) is a key technology that will shape the future, affecting industry as a whole and daily life in many ways. As data availability increases, AI finds optimal solutions and makes inferences and predictions through self-learning, and research and investment in automation that discovers and solves problems on its own continue. Automation through AI offers benefits such as cost reduction and minimizing both human intervention and differences in human capability, but it also has side effects, such as erroneous results caused by algorithmic bias and limits on the AI's autonomy, and in the labor market it raises fears of job replacement. Prior studies on the use of AI have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithm errors than to human errors, and avoid an algorithm after seeing it err, a phenomenon called "algorithm aversion." Recently, AI has begun to be understood from the perspective of augmenting human intelligence, and interest has shifted from AI alone to human-AI collaboration. A study of 1,500 companies across industries found that human-AI collaboration outperformed AI alone, and in medicine, pathologist-deep learning collaboration reduced pathologists' cancer diagnosis error rate by 85%. Leading AI companies, such as IBM and Microsoft, are positioning AI as augmented intelligence. Human-AI collaboration is emphasized in decision-making because AI is superior in information-based analysis while intuition remains a uniquely human capability, so collaboration can lead to optimal decisions.
In an environment of accelerating change and increasing uncertainty, the need for AI in decision-making will grow, and active discussion of approaches that use AI for rational decision-making is expected. This study investigates the impact of AI on decision-making, focusing on human-AI collaboration and the interaction between the decision-maker's personality traits and the advisor type. Advisors were classified into three types: human, AI, and human-AI collaboration. We examined the perceived usefulness of advice, the utilization of advice in decision-making, and whether the decision-maker's personality traits are influencing factors. Three hundred and eleven adult participants, male and female, performed a task of predicting the age of faces in photographs. The results show that advisor type does not directly affect the utilization of advice; decision-makers utilize advice only when they believe it can improve prediction performance. With human-AI collaboration, decision-makers rated the perceived usefulness of the advice higher regardless of their personality traits, and utilized the advice more actively. When the advisor was AI alone, decision-makers who scored high in conscientiousness, high in extraversion, or low in neuroticism rated the advice as more useful and utilized it actively. This study is academically significant in that it focuses on human-AI collaboration, reflecting the recent growth of interest in AI's roles; it expands the research area by considering AI as an advisor in decision-making and judgment research, and in practical terms it suggests what companies should consider to enhance their AI capability.
To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also employ people who properly understand the digital information presented by AI and can add non-digital information when making decisions. To increase utilization of AI-based systems, task-oriented competencies such as analytical skills and information technology capability are important, and greater performance can be expected if employees' personality traits are also considered.