• Title/Summary/Keyword: code vector


A Study on Adaptive Linear MMSE Detector for DS-CDMA Reverse Link in Rayleigh Fading Environment (레일리 페이딩 환경하에서 DS-CDMA 역방향 링크에 적용 가능한 적응 선형 MMSE 수신기의 연구)

  • 안태기;이병섭;김성락;이정구
    • The Journal of Korean Institute of Electromagnetic Engineering and Science, v.9 no.2, pp.131-140, 1998
  • MAI (Multiple-Access Interference) and fast channel variation due to fading are the major problems in mobile CDMA communication systems. Recently, interest has been growing in applying the adaptive linear MMSE detector to MAI cancellation on the CDMA reverse link. In this paper, we propose a modified adaptive linear MMSE detector structure that can be used in a long-code CDMA system in the presence of independent Rayleigh fading. We use an independent multiple tap-weight vector structure to cope with the variation of the spreading-sequence pattern between neighboring symbols caused by the long code. This structure requires more accurate channel-parameter estimation, so we adopt a coherent CDMA structure that tracks channel parameters such as amplitude and phase by exploiting the low-power pilot channel on the CDMA reverse link. (A minimal adaptive-detector sketch follows this entry.)

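The adaptive tap-weight structure described in this abstract is commonly realized with an LMS-style update driven by known pilot or training symbols. Below is a minimal NumPy sketch of a single-user LMS-adapted linear detector; the spreading gain, step size, interference model, and short spreading code are illustrative assumptions and do not reproduce the paper's long-code, multi-tap-weight design.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 31            # spreading gain (chips per symbol); an assumed value
mu = 0.01         # LMS step size; an assumed value
num_symbols = 2000

# Desired user's spreading code.  The paper handles a long code with
# multiple tap-weight vectors; a single short code keeps this sketch simple.
code = rng.choice([-1.0, 1.0], size=N)

w = code / N                          # start from the normalized matched filter
for _ in range(num_symbols):
    b = rng.choice([-1.0, 1.0])       # desired user's known pilot/training bit
    mai = 0.5 * rng.choice([-1.0, 1.0]) * rng.choice([-1.0, 1.0], size=N)
    noise = 0.1 * rng.standard_normal(N)
    r = b * code + mai + noise        # received chip vector for one symbol

    y = w @ r                         # linear detector output
    e = b - y                         # error against the known reference bit
    w += mu * e * r                   # LMS update of the tap-weight vector

# After adaptation, detect one more symbol and check the hard decision.
b = 1.0
r = b * code + 0.5 * rng.choice([-1.0, 1.0]) * rng.choice([-1.0, 1.0], size=N)
print("detected correctly:", np.sign(w @ r) == b)
```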

A Study on the Implement of Image Recognition the Road Traffic Safety Information Board using Nearest Neighborhood Decision Making Algorithm (최근접 이웃 결정방법 알고리즘을 이용한 도로교통안전표지판 영상인식의 구현)

  • Jung Jin-Yong;Kim Dong-Hyun;Lee So-Haeng
    • Management & Information Systems Review, v.4, pp.257-284, 2000
  • As the number of drivers who own cars increases, comprehensive studies on automobiles for traffic safety have become important problems. A visual recognition system for automated driving is part of the sensor processor of an unmanned autonomous vehicle system: while a driver travels on an unfamiliar highway or general road, it builds a model from the road traffic information that is input successively. The proposed system recognizes and distinguishes road traffic safety information boards automatically as one kind of road traffic information. The overall process is as follows. We photographed road traffic safety information boards with a digital camera and normalized each image to a $200{\times}200$ bitmap file with Photoshop 5.0. Because true color represents about sixteen million colors, which is large in capacity and slow to process, we converted the images to 256 colors. We applied 30 iterations of erosion and dilation to remove unnecessary image elements, and extracted the original board image with a region-splitting segmentation technique. We formed three groups (attention boards, prohibition boards, and introduction boards) by RYB (red, yellow, blue) color segmentation. We minimized the influence of board size, orientation, and rounding, and also minimized the influence of position and of lighting (brightness and darkness) using eigenvectors and eigenvalues. The sampled feature values were used to build a learning code-book database. The proposed recognition system first determines the group within the learning code-book database, and then recognizes the target board by comparing it against the boards of the same group with nearest neighborhood decision making. (A minimal nearest-neighbour matching sketch follows this entry.)

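The final stage of the pipeline above is a nearest-neighbour match of an eigen-feature vector against a learned code book, restricted to the colour group identified first. A minimal sketch of that matching step, with made-up 2-D vectors standing in for the eigen-features and a hypothetical three-group code book:

```python
import numpy as np

# Hypothetical learned code book: one feature vector per known sign,
# grouped as in the paper (attention / prohibition / introduction).
codebook = {
    ("attention", "crosswalk"):   np.array([0.9, 0.1]),
    ("prohibition", "no-entry"):  np.array([0.1, 0.8]),
    ("introduction", "hospital"): np.array([0.5, 0.5]),
}

def classify(feature, group=None):
    """Nearest-neighbour decision over the code book.

    If `group` is given, the search is restricted to that colour group,
    mirroring the paper's two-stage (group first, then sign) matching.
    """
    candidates = {k: v for k, v in codebook.items()
                  if group is None or k[0] == group}
    return min(candidates,
               key=lambda k: np.linalg.norm(candidates[k] - feature))

query = np.array([0.85, 0.15])               # eigen-feature of an unknown sign (made up)
print(classify(query, group="attention"))    # -> ('attention', 'crosswalk')
```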

Frequency-Code Domain Contention in Multi-antenna Multicarrier Wireless Networks

  • Lv, Shaohe;Zhang, Yiwei;Li, Wen;Lu, Yong;Dong, Xuan;Wang, Xiaodong;Zhou, Xingming
    • Journal of Communications and Networks, v.18 no.2, pp.218-226, 2016
  • Coordination among users is an inevitable but time-consuming operation in wireless networks, and it severely limits system performance when the data rate is high. We present FC-MAC, a novel MAC protocol that completes a contention within one contention slot over a joint frequency-code domain. When a node takes part in a contention, it randomly generates a contention vector (CV), a binary sequence whose length equals the number of available orthogonal frequency division multiplexing (OFDM) subcarriers. In FC-MAC, each user is assigned a distinct signature (i.e., a PN sequence). A node sends its signature on specific subcarriers and uses the ON/OFF pattern across all subcarriers to indicate the chosen CV. Meanwhile, every node uses its redundant antennas to detect the CVs of the other nodes, and the node with the minimum CV becomes the winner (see the sketch after this entry). The experimental results show that the collision probability of FC-MAC is as low as 0.05% when the network has 100 nodes. In comparison with IEEE 802.11, contention time is reduced by 50-80% and the throughput gain is up to 200%.
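
The core contention step can be illustrated with a toy simulation: each node draws a random binary contention vector over the OFDM subcarriers, the CVs are observable by all nodes, and the minimum CV wins. The subcarrier count, node count, and tie handling below are arbitrary choices for the sketch, not the paper's parameters.

```python
import random

NUM_SUBCARRIERS = 16   # number of OFDM subcarriers; arbitrary for the sketch
NUM_NODES = 5

random.seed(1)

# Each node draws a random contention vector: one ON/OFF bit per subcarrier.
cvs = {node: [random.randint(0, 1) for _ in range(NUM_SUBCARRIERS)]
       for node in range(NUM_NODES)}

def cv_value(bits):
    """Interpret the ON/OFF pattern across subcarriers as a binary number."""
    return int("".join(map(str, bits)), 2)

# In the paper every node observes the other CVs via its extra antennas,
# so all nodes agree on the winner: the node with the minimum CV
# (ties resolved here by the lower node id).
winner = min(cvs, key=lambda node: (cv_value(cvs[node]), node))
print("winner:", winner, "CV:", cvs[winner])
```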

Source Code Identification Using Deep Neural Network (심층신경망을 이용한 소스 코드 원작자 식별)

  • Rhim, Jisu;Abuhmed, Tamer
    • KIPS Transactions on Software and Data Engineering, v.8 no.9, pp.373-378, 2019
  • Since much program source code is open online, problems of reckless plagiarism and copyright infringement are occurring. Source code written repeatedly by the same author can carry a unique fingerprint arising from that author's programming characteristics. This paper identifies each author by training a deep neural network on Google Code Jam program sources. The authors' sources are first vectorized with a pre-processing step such as a prediction-based embedding or a frequency-based approach such as TF-IDF, and the author is then identified by the trained deep neural network (a toy sketch of this pipeline follows this entry). In addition, a language-independent learning system built on this pre-processing stage was constructed and compared with other existing learning methods. Among them, the models combining TF-IDF with deep neural networks were found to perform better than those using other pre-processing or learning methods.
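
A toy version of the TF-IDF-plus-neural-network pipeline can be sketched with scikit-learn. The four code snippets and two authors below are invented stand-ins for the Google Code Jam data, and the character n-gram range and network size are arbitrary, not the paper's settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

# Toy corpus: two hypothetical authors with distinct coding habits.
sources = [
    "for (int i=0;i<n;i++){ans+=a[i];}",
    "for (int i=0;i<m;i++){sum+=b[i];}",
    "result = sum(values)\nprint(result)",
    "total = sum(xs)\nprint(total)",
]
authors = ["author_A", "author_A", "author_B", "author_B"]

# Character n-gram TF-IDF keeps the pipeline language-independent,
# in the spirit of the paper's pre-processing stage.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
X = vectorizer.fit_transform(sources)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, authors)

query = "for (int j=0;j<k;j++){acc+=c[j];}"
print(clf.predict(vectorizer.transform([query])))   # likely ['author_A'] on this toy data
```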

Authorship Attribution Framework Using Survival Network Concept : Semantic Features and Tolerances (서바이벌 네트워크 개념을 이용한 저자 식별 프레임워크: 의미론적 특징과 특징 허용 범위)

  • Hwang, Cheol-Hun;Shin, Gun-Yoon;Kim, Dong-Wook;Han, Myung-Mook
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.6, pp.1013-1021, 2020
  • Malware authorship attribution is a research field that identifies malware by comparing the author characteristics of unknown malware with those of known malware authors. Attribution based on binaries has the advantage that target malware is easy to collect and analyze, but the range of usable features is limited compared to attribution based on source code, so accuracy drops as the number of authors grows. This study proposes 'defining semantic features from binaries' and 'defining allowable ranges for redundant features using the survival network concept' to address these limitations in binary author identification. The proposed method defines opcode-based graph features from binary information and, using the survival network concept, defines the allowable range within which unique features are selected for each author. In this way, feature definition and per-author feature selection are handled by a single technique, and experiments confirmed accuracy on a par with source-code-based analysis, a 5.0% improvement over the previous study. (A simplified per-author allowable-range sketch follows this entry.)
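
The survival-network construction itself is not reproduced here; the sketch below only illustrates the general idea of per-author allowable feature ranges, using opcode bigram frequencies as a crude stand-in for the paper's opcode-based graph features. All opcode sequences and the scoring rule are invented for illustration.

```python
from collections import Counter

def opcode_bigram_freqs(opcodes):
    """Relative frequency of each opcode bigram in one disassembled sample."""
    counts = Counter(zip(opcodes, opcodes[1:]))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

# Hypothetical training samples: opcode sequences per known author.
training = {
    "author_A": [["push", "mov", "call", "mov", "ret"],
                 ["push", "mov", "call", "ret"]],
    "author_B": [["xor", "mov", "jmp", "xor", "mov"],
                 ["xor", "mov", "jmp", "mov"]],
}

# Allowable range per author and feature: the [min, max] frequency seen in
# that author's samples (a crude stand-in for the survival-network step).
ranges = {}
for author, samples in training.items():
    freqs = [opcode_bigram_freqs(s) for s in samples]
    features = set().union(*freqs)
    ranges[author] = {f: (min(fr.get(f, 0.0) for fr in freqs),
                          max(fr.get(f, 0.0) for fr in freqs))
                      for f in features}

def attribute(opcodes):
    """Pick the author whose allowable ranges cover most of the sample's features."""
    sample = opcode_bigram_freqs(opcodes)
    def score(author):
        return sum(lo <= sample.get(f, 0.0) <= hi
                   for f, (lo, hi) in ranges[author].items())
    return max(ranges, key=score)

print(attribute(["push", "mov", "call", "mov", "ret"]))   # -> author_A
```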

A study on the development of severity-adjusted mortality prediction model for discharged patient with acute stroke using machine learning (머신러닝을 이용한 급성 뇌졸중 퇴원 환자의 중증도 보정 사망 예측 모형 개발에 관한 연구)

  • Baek, Seol-Kyung;Park, Jong-Ho;Kang, Sung-Hong;Park, Hye-Jin
    • Journal of the Korea Academia-Industrial cooperation Society, v.19 no.11, pp.126-136, 2018
  • The purpose of this study was to develop a severity-adjusted model for predicting mortality in acute stroke patients using machine learning. Using the Korean National Hospital Discharge In-depth Injury Survey from 2006 to 2015, the study population with disease codes I60-I63 (KCD-7) was extracted for analysis. Three tools were used for the severity adjustment of comorbidity: the Charlson Comorbidity Index (CCI), the Elixhauser Comorbidity Index (ECI), and the Clinical Classifications Software (CCS). The severity-adjusted models for mortality prediction in patients with acute stroke were developed using logistic regression, decision tree, neural network, and support vector machine methods. The most common comorbidity in stroke patients was uncomplicated hypertension (43.8%) under the ECI and essential hypertension (43.9%) under the CCS. Among the CCI, ECI, and CCS, the CCS yielded the highest AUC and was confirmed as the best severity-adjustment tool. Using the CCS together with main diagnosis, gender, age, admission route, and presence of surgery, the AUC was 0.808 for logistic regression, 0.785 for the decision tree, 0.809 for the neural network, and 0.830 for the support vector machine, so the best predictive power was achieved by the support vector machine (a minimal modelling sketch on synthetic data follows this entry). The results of this study can be used in the establishment of health policy in the future.
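
A minimal modelling sketch in the spirit of this study, using scikit-learn: because the discharge-survey data is not public, the records below are synthetic, the comorbidity burden is a single simulated score rather than CCS categories, and the resulting AUC values are not comparable to the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the discharge records: age, sex, admission route,
# surgery flag, and a simulated comorbidity-burden score.
n = 2000
age = rng.integers(40, 95, n)
sex = rng.integers(0, 2, n)
route = rng.integers(0, 2, n)          # 0 = outpatient, 1 = emergency
surgery = rng.integers(0, 2, n)
comorbidity = rng.poisson(2.0, n)

# Simulated mortality risk increasing with age, comorbidity burden, and route.
logit = -9.0 + 0.08 * age + 0.4 * comorbidity + 0.5 * route
death = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, sex, route, surgery, comorbidity])
X_tr, X_te, y_tr, y_te = train_test_split(X, death, test_size=0.3, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("support vector machine", SVC(probability=True))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```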

Bacterial Expression of Cytochrome $b_5$ Type III Pseudogene

  • Baek, Sun-Ah;Kim, Su-Won;Kim, Jong-Won;Yoo, Min
    • Biomedical Science Letters, v.18 no.3, pp.310-312, 2012
  • Cytochrome $b_5$ is involved in the reduction of methemoglobin back to hemoglobin, thereby maintaining the blood's normal oxygen-carrying function. A congenital abnormality of this enzyme causes a rare disease called methemoglobinemia. At least four different retropseudogenes have been reported so far for cytochrome $b_5$; the type III pseudogene has attracted the most attention because it contains an open reading frame. Although there is no evidence yet that this pseudogene is actually expressed in cells or blood, the possibility of its expression needs to be elucidated. We isolated the type III pseudogene by polymerase chain reaction, cloned it into the pGEX-4T-1 expression vector, and analyzed the product by SDS-PAGE. The protein was expressed, and its size was 28 kDa, as expected from its genetic code. This result also shows that the protein is not harmful to the viability of the host microorganism. This study may contribute to the genetic diagnosis of cardiac diseases possibly caused by cytochrome $b_5$.

Study on the Accuracy Improvement of E-ICAM in Consideration of Gouging (과절삭을 고려한 E-ICAM의 정밀도 개선에 관한 연구)

  • Son, Hwang Jin;Cho, Young Tae;Jung, Yoon Gyo
    • Journal of the Korean Society for Precision Engineering, v.32 no.8, pp.705-711, 2015
  • Five-axis machines can generate undesirable defects, such as the undercutting and overcutting errors that frequently occur in three-axis machining. It is therefore necessary to develop an NC-code generation program that takes the cutter posture into account to reduce the occurrence of such defects. In previous studies, the Easy-Impeller CAM (E-ICAM), an automatic CAM program for the five-axis machining of impellers, was developed; however, when E-ICAM is used to machine an impeller, it is possible to gouge the hub and blade. Therefore, the aim of this study is to establish, for each type of end mill, a formula that minimizes gouging as a function of the cutter posture, considering several factors that affect accuracy in impeller machining, and thereby to improve the performance and accuracy of E-ICAM in the manufacturing of impellers.

VQ Codebook Index Interpolation Method for Frame Erasure Recovery of CELP Coders in VoIP

  • Lim Jeongseok;Yang Hae Yong;Lee Kyung Hoon;Park Sang Kyu
    • The Journal of Korean Institute of Communications and Information Sciences, v.30 no.9C, pp.877-886, 2005
  • Various frame recovery algorithms have been suggested to overcome the quality degradation caused by Internet-typical impairments in Voice over IP (VoIP) communications. In this paper, we propose a new receiver-based recovery method that enhances recovered speech quality at almost no computational cost and without additional delay or bandwidth consumption. Most conventional recovery algorithms try to recover lost or erroneous speech frames by reconstructing missing coefficients or the speech signal during the decoding process, and therefore need to modify the decoder software. The proposed algorithm instead reconstructs the missing frame itself and does not require modifying the decoder. In the proposed scheme, the vector quantization (VQ) codebook indices of the erased frame are estimated directly by referring to pre-computed VQ Codebook Index Interpolation Tables (VCIIT), using the VQ indices of the adjacent (previous and next) frames; a toy sketch of this lookup follows this entry. We applied the scheme to the ITU-T G.723.1 speech coder and found that it improved reconstructed speech quality and outperformed the conventional G.723.1 loss recovery algorithm. Moreover, this simple scheme is easily applicable to practical VoIP systems because it requires very little additional computation and memory.
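
A toy sketch of the table-lookup idea: offline, for every pair of codebook indices, store the index of the codevector nearest the midpoint of the pair; at the receiver, a lost frame's index is then recovered with a single lookup from the adjacent frames' indices. The codebook size, dimension, and midpoint rule are assumptions for illustration, not the G.723.1 parameters or the paper's exact table construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook: 16 codevectors of dimension 4 (sizes are arbitrary).
codebook = rng.random((16, 4))

def nearest_index(vector):
    """Index of the codevector closest to the given vector."""
    return int(np.argmin(np.linalg.norm(codebook - vector, axis=1)))

# Offline: for every (previous, next) index pair, store the index of the
# codevector closest to the midpoint of the two codevectors (the VCIIT idea).
vciit = np.array([[nearest_index((codebook[i] + codebook[j]) / 2)
                   for j in range(len(codebook))]
                  for i in range(len(codebook))])

# At the receiver: the index of an erased frame is estimated by a single
# table lookup using the indices decoded from the surrounding good frames.
prev_idx, next_idx = 3, 11
estimated_idx = vciit[prev_idx, next_idx]
print("estimated index for the lost frame:", estimated_idx)
print("reconstructed codevector:", codebook[estimated_idx])
```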

A Study on An Error-Resilient Constant Bit Rate Video Codec (에러 환경에 강한 항등비트율 동영상 부호화기에 관한 연구)

  • 한동원;송진규;김용구;최윤식
    • The Journal of Korean Institute of Communications and Information Sciences, v.24 no.9B, pp.1721-1730, 1999
  • In this paper, an error-resilient video coding algorithm for error-prone environments such as wireless communication is suggested. The suggested algorithm adopts the classified VQ method for intra images, which reduces the search load by searching only similar code vectors (a toy classified-VQ sketch follows this entry). The Duplicate Vector Position Code is proposed for higher compression efficiency and robust decoding in error environments. As a result, the bitstream encoded by the proposed method has a constant bit rate (CBR), which prevents error propagation. An experiment that adds realistic errors to the encoded bitstream shows error robustness superior to H.263.

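A toy sketch of classified VQ for intra blocks: each block is first assigned to a class, and only that class's sub-codebook is searched, which is the search-load reduction the abstract refers to. The two classes, variance threshold, codebook contents, and block size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

BLOCK = 16          # 4x4 image block, flattened; arbitrary for the sketch
PER_CLASS = 32      # codevectors per class; arbitrary

# Two hypothetical classes: "flat" blocks and "high-activity" blocks.
codebooks = {
    "flat":   rng.normal(128, 5,  (PER_CLASS, BLOCK)),
    "active": rng.normal(128, 60, (PER_CLASS, BLOCK)),
}

def classify(block):
    """Crude classifier: blocks with low variance are treated as flat."""
    return "flat" if block.var() < 400 else "active"

def encode(block):
    """Classified VQ: search only the sub-codebook of the block's class."""
    cls = classify(block)
    cb = codebooks[cls]
    idx = int(np.argmin(np.linalg.norm(cb - block, axis=1)))
    return cls, idx                     # (class, code-vector index) to transmit

block = rng.normal(128, 4, BLOCK)       # a fairly flat test block
cls, idx = encode(block)
print(cls, idx, "-> reconstruction error:",
      float(np.linalg.norm(codebooks[cls][idx] - block)))
```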