• Title/Summary/Keyword: decision algorithm


Bit Split Algorithm for Applying the Multilevel Modulation of Iterative codes (반복부호의 멀티레벨 변조방식 적용을 위한 비트분리 알고리즘)

  • Park, Tae-Doo;Kim, Min-Hyuk;Kim, Nam-Soo;Jung, Ji-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.9
    • /
    • pp.1654-1665
    • /
    • 2008
  • This paper presents bit-splitting methods for applying multilevel modulation to iterative codes such as turbo codes, low-density parity-check (LDPC) codes, and turbo product codes. The log-likelihood ratio (LLR) method splits multilevel symbols into soft-decision symbols using the received in-phase and quadrature components under a Gaussian approximation. However, it is too complicated to calculate and to implement in hardware because of the exponential and logarithm operations involved. This paper therefore presents Euclidean, MAX, sector, and center-focusing methods that reduce the high complexity of the LLR method, and proposes an optimal soft-symbol splitting method for the three kinds of iterative codes. Furthermore, a 16-APSK modulation method with a double-ring structure for the DVB-S2 system and a 16-QAM modulation method with a lattice structure for the T-DMB system are also analyzed.
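The trade-off the abstract describes can be sketched with the max-log approximation, which replaces the exponential and logarithm operations of the exact LLR with squared-distance minimisation over the constellation. The Gray mapping and amplitude levels below are illustrative assumptions, not the paper's exact constellation:

```python
# Max-log LLR bit splitting for a Gray-mapped 16-QAM symbol (illustrative sketch).
from itertools import product

# amplitude level -> 2 Gray-coded bits (an assumed mapping)
LEVELS = {-3: (0, 1), -1: (0, 0), 1: (1, 0), 3: (1, 1)}

def constellation():
    """Return a list of (complex point, 4-bit label) pairs for 16-QAM."""
    points = []
    for i, q in product(LEVELS, LEVELS):
        points.append((complex(i, q), LEVELS[i] + LEVELS[q]))
    return points

def max_log_llr(r, noise_var=1.0):
    """Split a received symbol r into 4 soft-bit metrics via the max-log
    approximation; a positive LLR favours bit value 0."""
    pts = constellation()
    llrs = []
    for b in range(4):
        d0 = min(abs(r - s) ** 2 for s, bits in pts if bits[b] == 0)
        d1 = min(abs(r - s) ** 2 for s, bits in pts if bits[b] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs
```

A symbol received exactly on the (-3, -3) point, whose assumed label is (0, 1, 0, 1), yields strongly positive LLRs for the 0-bits and negative LLRs for the 1-bits, with no exponentials computed.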

Unsupervised Image Classification through Multisensor Fusion using Fuzzy Class Vector (퍼지 클래스 벡터를 이용하는 다중센서 융합에 의한 무감독 영상분류)

  • 이상훈
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.4
    • /
    • pp.329-339
    • /
    • 2003
  • In this study, a decision-level image-fusion approach is proposed for unsupervised classification of images acquired from multiple sensors with different characteristics. The proposed method applies, separately for each sensor, an unsupervised classification scheme based on spatial region-growing segmentation that makes use of hierarchical clustering, and iteratively computes maximum-likelihood estimates of fuzzy class vectors for the segmented regions using the EM (expectation-maximization) algorithm. The fuzzy class vector is treated as an indicator vector whose elements represent the probabilities that the region belongs to each of the existing classes. The classification results of the individual sensors are then combined using the fuzzy class vectors. This approach does not require as high a precision in spatial co-registration between the images of different sensors as pixel-level fusion schemes do. The proposed method was applied to multispectral SPOT and AIRSAR data observed over the north-eastern area of Jeollabuk-do, and the experimental results show that it provides more correct information for classification than a scheme using the augmented-vector technique, the most conventional approach to pixel-level image fusion.
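The decision-level combination step can be illustrated with a minimal sketch: each sensor yields a fuzzy class vector of class-membership probabilities for a region, and one simple fusion rule, assumed here for illustration rather than taken from the paper, multiplies the vectors elementwise and renormalises:

```python
def fuse_fuzzy_class_vectors(vectors):
    """Combine per-sensor class-probability vectors by elementwise product
    and renormalisation (a naive independence assumption, for illustration)."""
    fused = [1.0] * len(vectors[0])
    for v in vectors:
        fused = [f * p for f, p in zip(fused, v)]
    total = sum(fused)
    return [f / total for f in fused]
```

With two sensors agreeing that class 0 is more likely, the fused vector sharpens toward class 0 while still summing to one.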

A Study on Condition-based Maintenance Policy using Minimum-Repair Block Replacement (최소수리 블록교체 모형을 활용한 상태기반 보전 정책 연구)

  • Lim, Jun Hyoung;Won, Dong-Yeon;Sim, Hyun Su;Park, Cheol Hong;Koh, Kwan-Ju;Kang, Jun-Gyu;Kim, Yong Soo
    • Journal of Applied Reliability
    • /
    • v.18 no.2
    • /
    • pp.114-121
    • /
    • 2018
  • Purpose: This study proposes a process for evaluating the preventive-maintenance policy of a system with degradation characteristics and for calculating an appropriate preventive-maintenance cycle using time-based and condition-based maintenance. Methods: First, the collected data are divided into maintenance-history lifetime data and degradation lifetime data, and analysis datasets are extracted through preprocessing. A particle filter algorithm is used to estimate the degradation lifetime from the analysis datasets, with prior information obtained by least-squares estimation (LSE). The suitability and cost of the existing preventive-maintenance policy are then each evaluated, based on the degradation lifetime and using a minimal-repair block replacement model of time-based maintenance. Results: The process is applied to the degradation of the reverse-osmosis (RO) membrane in a seawater reverse-osmosis (SWRO) plant to evaluate the existing preventive-maintenance policy. Conclusion: The method can be used for facilities or systems that undergo degradation and can be evaluated in terms of cost and time. It is expected to support decision-making when devising an optimal preventive-maintenance policy.
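A bootstrap particle filter of the kind named in the Methods can be sketched as follows; the random-walk degradation model and the drift and noise values are illustrative assumptions, not the paper's fitted parameters:

```python
import math
import random

def particle_filter_step(particles, obs, drift=0.1, proc_sd=0.05, obs_sd=0.2):
    """One bootstrap particle-filter step for a drifting degradation level.
    Assumed model: state_t = state_{t-1} + drift + process noise,
    observation = state + measurement noise."""
    # propagate each particle through the degradation model
    particles = [p + drift + random.gauss(0, proc_sd) for p in particles]
    # weight particles by the Gaussian likelihood of the new observation
    weights = [math.exp(-((obs - p) ** 2) / (2 * obs_sd ** 2)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # multinomial resampling back to equal weights
    return random.choices(particles, weights=weights, k=len(particles))

def degradation_estimate(particles):
    """Posterior mean of the degradation level."""
    return sum(particles) / len(particles)
```

Fed a sequence of rising degradation measurements, the particle cloud tracks the observed level, and the posterior spread can drive the cost/suitability evaluation step.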

Tax Judgment Analysis and Prediction using NLP and BiLSTM (NLP와 BiLSTM을 적용한 조세 결정문의 분석과 예측)

  • Lee, Yeong-Keun;Park, Koo-Rack;Lee, Hoo-Young
    • Journal of Digital Convergence
    • /
    • v.19 no.9
    • /
    • pp.181-188
    • /
    • 2021
  • Research on legal services that apply AI, so that difficult legal fields can be easily understood and predicted, is growing in both volume and importance. In this study, based on decisions of the Tax Tribunal in the field of tax law, a model was built through self-learning over collected and processed data; its predictions were returned in answer to user queries and their accuracy was verified. The proposed model collects information on tax decisions through web crawling, extracts useful data, and generates word vectors by applying Word2Vec's FastText algorithm to the NLP-optimized output. 11,103 cases from 2017 to 2019 were collected and classified, and the model was verified with 70% accuracy. The approach can be useful in various legal systems and as prior research toward more efficient applications.
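FastText's distinguishing feature over plain Word2Vec is subword modelling: each word is decomposed into character n-grams whose vectors are summed, which helps with the long compound tokens common in legal text. The decomposition step can be sketched as follows (the 3-to-6 n-gram range is a common default, assumed here):

```python
def subword_ngrams(word, n_min=3, n_max=6):
    """Character n-grams used in FastText-style subword embedding.
    The word is wrapped in boundary markers '<' and '>' first."""
    wrapped = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(wrapped[i:i + n] for i in range(len(wrapped) - n + 1))
    return grams
```

An out-of-vocabulary word still receives a vector because its n-grams overlap with those of seen words; that robustness is why FastText suits crawled decision texts.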

Malware Family Detection and Classification Method Using API Call Frequency (API 호출 빈도를 이용한 악성코드 패밀리 탐지 및 분류 방법)

  • Joe, Woo-Jin;Kim, Hyong-Shik
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.31 no.4
    • /
    • pp.605-616
    • /
    • 2021
  • While malware must be accurately identifiable among arbitrary programs, existing studies using classification techniques have the limitation that they apply only to limited samples. In this work, we propose a method that uses API call frequency to detect and classify malware families in arbitrary programs. The proposed method defines rules that check whether the call frequency of a particular API exceeds a threshold, and identifies a specific family using the rate at which the corresponding rules are satisfied. A decision tree algorithm is applied to derive the optimal thresholds that accurately identify a particular family from the training set. Performance measurements using 4,443 samples showed 85.1% precision and 91.3% recall for family detection, and 97.7% precision and 98.1% recall for classification, confirming that the method distinguishes malware families effectively.
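The rule idea, checking whether particular API call frequencies exceed learned thresholds and scoring each family by the fraction of its rules satisfied, can be sketched as below. The API names and threshold values are hypothetical placeholders, not the paper's learned rules:

```python
# Hypothetical rule table: family -> list of (api_name, minimum call frequency)
RULES = {
    "ransom_a": [("CryptEncrypt", 50), ("DeleteFileW", 20)],
    "keylog_b": [("GetAsyncKeyState", 100), ("SendInput", 10)],
}

def classify(api_freq):
    """Return (family, match_rate) for the family whose frequency rules
    are best satisfied by the observed API call counts."""
    best, best_rate = None, 0.0
    for family, rules in RULES.items():
        hits = sum(1 for api, thr in rules if api_freq.get(api, 0) >= thr)
        rate = hits / len(rules)
        if rate > best_rate:
            best, best_rate = family, rate
    return best, best_rate
```

In the paper the thresholds come from a decision tree fitted on the training set; here they are fixed constants so the matching logic is visible on its own.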

Kalman Filtering-based Traffic Prediction for Software Defined Intra-data Center Networks

  • Mbous, Jacques;Jiang, Tao;Tang, Ming;Fu, Songnian;Liu, Deming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.6
    • /
    • pp.2964-2985
    • /
    • 2019
  • Global data-center IP traffic is expected to reach 20.6 zettabytes (ZB) by the end of 2021, and intra-data-center networks (Intra-DCN) will account for 71.5% of data-center traffic flow, the largest portion of the traffic. The understanding of traffic distribution in Intra-DCN is still sketchy, which leaves a significant amount of bandwidth unutilized and creates avoidable choke points. Conventional transport schemes such as Optical Packet Switching (OPS) and Optical Burst Switching (OBS) allow only a one-sided view of traffic flow in the network, causing disjointed and uncoordinated decision-making at each node. For effective resource planning, distributed management needs to be joined with centralized management that anticipates the system's needs and regulates the entire network. Methods derived from Kalman filters have proved effective in planning road networks. Viewing the network's available bandwidth as data-transport highways, we propose an intelligent, SDN-enhanced concept applied to an OBS architecture. A management plane (MP) is added to the conventional control plane (CP) and data plane (DP). The MP assembles spatio-temporal traffic parameters from ingress nodes and uses a Kalman-filter prediction-based algorithm to estimate traffic demand. Prior to packet arrival at edge nodes, it regularly forwards resource-allocation updates to the CPs. Simulations were run on a hybrid (1+1) scheme and on the centralized OBS. The results demonstrate that the proposal decreases the packet-loss ratio and improves network latency and throughput, by up to 84% and 51%, respectively, versus the traditional scheme.
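The prediction core can be illustrated with a scalar Kalman filter tracking a single link's traffic demand; the random-walk model and noise variances below are illustrative assumptions, not the paper's tuned values:

```python
def kalman_predict_update(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle of a scalar Kalman filter for traffic demand.
    x: state estimate, p: estimate variance, z: new measurement,
    q: process-noise variance, r: measurement-noise variance (assumed)."""
    # predict: random-walk traffic model, so the mean carries over
    x_pred, p_pred = x, p + q
    # update: blend prediction and measurement by the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

Run per ingress node by the management plane, the filtered estimate `x` would be what gets forwarded to the control planes as the anticipated demand before packets arrive.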

A Deep Learning Part-diagnosis Platform(DLPP) based on an In-vehicle On-board gateway for an Autonomous Vehicle

  • Kim, KyungDeuk;Son, SuRak;Jeong, YiNa;Lee, ByungKwan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.4123-4141
    • /
    • 2019
  • Autonomous driving technology is divided into levels 0 to 5, where Level 5 denotes a fully autonomous vehicle that requires no human driver at all. The automobile industry has been trying to develop Level 5 vehicles that satisfy safety requirements, but commercialization has not yet been achieved, and several driving-safety problems remain to be solved. To address one of these, this paper proposes a Deep Learning Part-diagnosis Platform (DLPP) based on an in-vehicle on-board gateway for an autonomous vehicle, which diagnoses not only the parts of a vehicle and the sensors belonging to those parts, but also the influence on other parts when a fault occurs. The DLPP consists of an In-vehicle On-board Gateway (IOG) and a Part Self-diagnosis Module (PSM). Whereas an existing vehicle gateway only translates the messages occurring in a vehicle, the IOG additionally judges, by means of a loopback, whether a fault has occurred in a sensor or part. Payloads that the IOG judges to come from a normal sensor are transferred to the PSM for self-diagnosis. The PSM diagnoses the parts themselves using the payloads transferred from the IOG. Because the PSM is designed on an LSTM algorithm, it diagnoses a vehicle's faults by considering the correlation between previous diagnosis results and currently measured part data.
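The IOG's loopback judgment can be sketched very simply: a channel is considered normal only if frames sent through the loopback come back unchanged, and only payloads from normal channels are forwarded to the PSM. The frame format and byte-equality comparison are simplifying assumptions:

```python
def loopback_judge(sent_frames, echoed_frames):
    """Sketch of the IOG's loopback idea: judge a sensor/part channel
    'fault' if any frame sent through the loopback comes back altered."""
    for sent, echoed in zip(sent_frames, echoed_frames):
        if sent != echoed:
            return "fault"
    return "normal"

def route_payloads(sensor_frames, judgement):
    """Forward payloads to the Part Self-diagnosis Module (PSM) only
    when the channel was judged normal."""
    return sensor_frames if judgement == "normal" else []
```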

Detection of Frame Deletion Using Convolutional Neural Network (CNN 기반 동영상의 프레임 삭제 검출 기법)

  • Hong, Jin Hyung;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering
    • /
    • v.23 no.6
    • /
    • pp.886-895
    • /
    • 2018
  • In this paper, we introduce a technique to detect video forgery using regularities that arise in the video compression process. The proposed method exploits the hierarchical regularity lost through double compression and frame deletion. To extract such irregularities, the depth information of the CU and TU, the basic units of HEVC, is used. To improve performance, we build depth maps of the CU and TU using local information, and then create the input data by grouping them in GoP units. Whether the video has been double-compressed and forged is decided by a general three-dimensional convolutional neural network. Experimental results show that the approach detects forged videos more effectively than existing machine-learning algorithms.
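The grouping of per-frame CU/TU depth maps into GoP-sized blocks, each serving as one 3-D CNN input sample, can be sketched as follows; the GoP size of 8 is an assumed example, not the paper's setting:

```python
def group_into_gops(depth_maps, gop_size=8):
    """Group per-frame depth maps (e.g., CU depth grids) into GoP-sized
    blocks for a 3-D CNN; a trailing remainder shorter than one GoP is
    dropped in this sketch."""
    return [depth_maps[i:i + gop_size]
            for i in range(0, len(depth_maps) - gop_size + 1, gop_size)]
```

Stacking frames along a third axis is what lets a 3-D convolution see the temporal discontinuity a frame deletion leaves in the depth statistics.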

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1445-1456
    • /
    • 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks provide an unmanned ground vehicle (UGV) with driving-awareness interfaces. The speed of obstacle segmentation and surrounding-terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment-information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model, after filtering redundant and noisy data according to a redundancy-removal principle. In the environment-perception module, the registered discrete points are clustered into the ground surface and individual objects using a ground-segmentation method and a connected-component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and colored particle models are used to reconstruct the ground surface and the objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to run the computer-graphics and image-processing algorithms in parallel.
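The connected-component labeling step that separates non-ground points into individual objects can be sketched on a 2-D occupancy grid; the grid abstraction and 4-connectivity are simplifying assumptions, since the paper works on clustered 3-D point clouds:

```python
from collections import deque

def label_components(grid):
    """4-connected component labeling of an occupancy grid (1 = occupied).
    Returns (label map, component count); 0 marks free space."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                count += 1                      # start a new object
                labels[r][c] = count
                queue = deque([(r, c)])
                while queue:                    # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Each resulting label would correspond to one candidate obstacle handed to the reconstruction module; in the GPU version this flood fill is what gets parallelized.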

A Study on DEA-based Stepwise Benchmarking Target Selection Considering Resource Improvement Preferences (DEA 기반의 자원 개선 선호도를 고려한 단계적 벤치마킹 대상 탐색 연구)

  • Park, Jaehun;Sung, Si-Il
    • Journal of Korean Society for Quality Management
    • /
    • v.47 no.1
    • /
    • pp.33-46
    • /
    • 2019
  • Purpose: This study proposes a DEA (Data Envelopment Analysis)-based stepwise benchmarking-target selection method that lets an inefficient DMU (Decision-Making Unit) improve its efficiency gradually toward the efficient frontier while taking resource (DEA input and output) improvement preferences into account. Methods: The proposed method proceeds in two steps. The first step evaluates the efficiency of the DMUs using DEA, and the evaluated DMU selects a benchmarking target, an HCU (Hypothetical Composite Unit) or an RU (Real Unit), considering its resource-improvement preferences. The second step selects stepwise benchmarking targets for the inefficient DMU. To achieve this, the study develops a new DEA model that can select a benchmarking target for an inefficient DMU in view of its input or output improvement preferences, and suggests an algorithm that selects the DMU's stepwise benchmarking targets. Results: The proposed method was applied to 34 international ports for validation. In the efficiency evaluation, five ports were evaluated as most efficient and the remaining 29 as relatively inefficient. When port 34 was taken as the evaluated DMU, it could select four stepwise benchmarking targets when the preference weights assigned to the inputs (berth length, total pier area, CFS, number of loading machines) were (0.82, 1.00, 0.41, 0.00). Conclusion: For validation, the proposed method was applied to 34 major ports around the world, and stepwise benchmarking targets were selected so that an inefficient port could improve its efficiency gradually. The proposed method enables an inefficient DMU to establish a more effective and practical benchmarking strategy than conventional DEA, because it considers resource (input or output) improvement preferences when selecting benchmarking targets gradually.
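A minimal flavour of the efficiency-evaluation step can be given for the single-input, single-output case, where CCR efficiency reduces to each DMU's output/input ratio normalised by the best ratio; the full multi-resource, preference-weighted model in the paper requires linear programming and is not reproduced here:

```python
def ccr_efficiency(units):
    """Simplified single-input/single-output CCR efficiency: each DMU's
    output/input ratio divided by the best ratio. A stand-in for the
    linear-programming DEA model used in the paper.
    units: {name: (input, output)} with positive values."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}
```

A DMU scoring 1.0 lies on the frontier; the stepwise idea in the paper is to route an inefficient DMU through intermediate targets whose efficiencies sit between its own score and 1.0, rather than jumping straight to the frontier.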