• Title/Summary/Keyword: C4.5 Algorithm

Search results: 200

ATM Cell Encipherment Method using Rijndael Algorithm in Physical Layer (Rijndael 알고리즘을 이용한 물리 계층 ATM 셀 보안 기법)

  • Im Sung-Yeal;Chung Ki-Dong
    • The KIPS Transactions: Part C, v.13C no.1 s.104, pp.83-94, 2006
  • This paper describes an ATM cell encipherment method using the Rijndael algorithm, adopted as the AES (Advanced Encryption Standard) by NIST in 2001. ISO 9160 specifies the requirements for physical-layer data processing in encryption/decryption. To demonstrate the method, we implemented ATM data encipherment equipment that satisfies the requirements of ISO 9160 and verified encipherment/decipherment processing at the ATM STM-1 rate (155.52 Mbps). The DES algorithm processes data in 64-bit blocks with a 64-bit key (56 effective bits), whereas the Rijndael algorithm processes data in 128-bit blocks with a key length selectable among 128, 192, and 256 bits, so it is more flexible for high-bit-rate data processing and stronger in encryption strength than DES. For real-time encryption of a high-bit-rate data stream, the Rijndael algorithm was implemented in an FPGA in this experiment. The boundary of a serial UNI cell is detected by the CRC method; for a user data cell, the 48-octet (384-bit) payload is converted to parallel form and transferred to three Rijndael encipherment modules in 128-bit blocks. After encryption, the header stored in a buffer is attached to the enciphered payload and the cell is retransmitted. At the receiving end, the cell boundary is detected by the CRC method and the payload type is determined. When the payload type indicates a user data cell, the payload is transferred to the three Rijndael decryption modules in 128-bit blocks for decryption; for a maintenance cell, the payload is extracted without decryption.
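
The payload handling described above can be sketched as follows; a toy XOR cipher stands in for the Rijndael core (which the paper implements in FPGA), and the parallel transfer to three 128-bit modules is shown as a simple loop:

```python
# Sketch of the paper's payload handling: a 48-octet ATM cell payload is
# split into three 128-bit (16-byte) blocks, each enciphered independently.
# A toy XOR "cipher" stands in for the Rijndael core; a real system would
# use AES-128 on each block.

BLOCK_SIZE = 16    # 128 bits
PAYLOAD_SIZE = 48  # octets in an ATM cell payload

def toy_block_cipher(block: bytes, key: bytes) -> bytes:
    """Placeholder for a 128-bit block cipher (XOR with the key)."""
    return bytes(b ^ k for b, k in zip(block, key))

def encipher_payload(payload: bytes, key: bytes) -> bytes:
    assert len(payload) == PAYLOAD_SIZE and len(key) == BLOCK_SIZE
    blocks = [payload[i:i + BLOCK_SIZE]
              for i in range(0, PAYLOAD_SIZE, BLOCK_SIZE)]
    # The paper feeds the three blocks to three Rijndael modules in parallel.
    return b"".join(toy_block_cipher(b, key) for b in blocks)

payload = bytes(range(48))
key = bytes(range(100, 116))  # arbitrary demo key
enc = encipher_payload(payload, key)
# XOR is an involution, so enciphering twice restores the payload
assert encipher_payload(enc, key) == payload
```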

Evaluation of Planning Dose Accuracy in Case of Radiation Treatment on Inhomogeneous Organ Structure (불균질부 방사선치료 시 계획 선량의 정확성 평가)

  • Kim, Chan Yong;Lee, Jae Hee;Kwak, Yong Kook;Ha, Min Yong
    • The Journal of Korean Society for Radiation Therapy, v.25 no.2, pp.137-143, 2013
  • Purpose: To determine the difference between the dose calculated by a treatment planning system (TPS) and the measured dose in an inhomogeneous organ structure. Materials and Methods: An inhomogeneous phantom was made of a solid water phantom and cork plates, and a CT image of the phantom was acquired. A treatment plan was made with the TPS (Pinnacle3 9.2, Royal Philips Electronics, Netherlands) and the calculated dose at each point of interest was obtained. The plan was delivered to the inhomogeneous phantom by an ARTISTE linac (Siemens AG, Germany), and the measured dose at each point of interest was obtained with Gafchromic EBT2 film (International Specialty Products, US) placed in the gaps between the solid water phantom and the cork plates. To simulate lung cancer radiation treatment, an artificial paraffin tumor target was inserted into the cork volume of the phantom, and calculated and measured doses were acquired as above. Results: In the inhomogeneous phantom experiment, the measured dose was about 8.5% lower than the calculated dose at the solid water-cork gap and about 7% lower at the cork-solid water gap. In the experiment with the inserted paraffin target, the measured dose was about 5% lower at the cork-paraffin gap. There was no significant difference at same-material gaps in either experiment. Conclusion: The radiation dose at the gap between two organs with different electron densities is significantly lower than the dose calculated by the TPS. Therefore, clinicians must be aware of this dose calculation error in the TPS, and great care is needed when planning radiation treatment of inhomogeneous organ structures.

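
The dose comparison above reduces to a relative-difference calculation; a minimal sketch, with hypothetical dose values (the abstract reports only percentages):

```python
def dose_difference_pct(measured: float, calculated: float) -> float:
    """Relative difference of measured film dose vs. TPS-calculated dose."""
    return (measured - calculated) / calculated * 100.0

# Hypothetical numbers consistent with the reported ~-8.5% at the
# solid water-cork interface (units arbitrary, e.g. Gy).
diff = dose_difference_pct(measured=91.5, calculated=100.0)
assert round(diff, 1) == -8.5
```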

Design of Architecture of Programmable Stack-based Video Processor with VHDL (VHDL을 이용한 프로그램 가능한 스택 기반 영상 프로세서 구조 설계)

  • 박주현;김영민
    • Journal of the Korean Institute of Telematics and Electronics C, v.36C no.4, pp.31-43, 1999
  • The main goal of this paper is to design a high-performance SVP (Stack-based Video Processor) for network applications. The SVP is a comprehensive scheme, 'better' in the sense that it is an optimal selection of previously proposed enhancements of a stack machine and a video processor. It can process object-based video data effectively using an S-RISC (Stack-based Reduced Instruction Set Computer), a semi-general-purpose architecture with a stack buffer suited to OOP (Object-Oriented Programming) programs with many small procedures. It also includes a vector processor that improves MPEG coding speed. The vector processor in the SVP can execute the advanced-mode motion compensation, half-pixel motion prediction, and SA-DCT (Shape Adaptive Discrete Cosine Transform) of MPEG-4. Absolute-value and halving units in the vector processor make this architecture extensible to an encoder. We also designed a VLSI stack-oriented video processor using the proposed architecture for stack-oriented video decoding. It was designed with 0.5 μm 3LM standard-cell technology and has 110K logic gates and a 12 Kbit internal SRAM buffer. The operating frequency is 50 MHz. It executes video decoding algorithms for QCIF at 15 fps (frames per second), the maximum rate of VLBV (Very Low Bitrate Video) in MPEG-4.

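
The operand-stack execution model that the S-RISC core implements in hardware can be sketched in software; the instruction names below are illustrative, not the paper's actual ISA:

```python
# Minimal software sketch of a stack machine: operands are pushed onto a
# stack and operators pop their arguments from it, which favors the many
# small procedures of object-oriented code the SVP targets.

def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack[-1]

# (2 + 3) * 4 evaluated on the operand stack
assert run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]) == 20
```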

A Study on Reliability Based Design Criteria for Reinforced Concrete Bridge Superstructures (철근(鐵筋)콘크리트 도로교(道路橋) 상부구조(上部構造) 신뢰성(信賴性) 설계규준(設計規準)에 관한 연구(研究))

  • Cho, Hyo Nam
    • KSCE Journal of Civil and Environmental Engineering Research, v.2 no.3, pp.87-99, 1982
  • This study proposes reliability-based design criteria for the reinforced concrete superstructures of highway bridges. Uncertainties associated with the resistance of T and rectangular sections are investigated, and a set of appropriate uncertainties for bridge dead and traffic live loads is proposed, reflecting the domestic level of practice. The major second-moment reliability analysis and design theories, including both Cornell's MFOSM (Mean First-Order Second-Moment) methods and Lind-Hasofer's AFOSM (Advanced First-Order Second-Moment) methods, are summarized and compared, and it is found that Ellingwood's algorithm and an approximate lognormal-type reliability formula are well suited to the proposed reliability study. A target reliability index (${\beta}_0=3.5$) is selected as an optimal value, based on calibration against the current R.C. bridge design safety provisions. A set of load and resistance factors is derived from the proposed uncertainties and the methods corresponding to the target reliability. Furthermore, a set of nominal safety factors and allowable stresses is proposed for the current W.S.D. design provisions. It is asserted that the proposed L.R.F.D. reliability-based design criteria for R.C. highway bridges should be incorporated into the current R.C. bridge design codes as a design provision corresponding to the U.S.D. provisions of the current R.C. design code.

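
The approximate lognormal-type reliability formula mentioned above can be sketched as follows; the resistance and load statistics are hypothetical, chosen only to land near the paper's target β₀ = 3.5:

```python
import math

def beta_lognormal(mean_R, mean_S, cov_R, cov_S):
    """Approximate lognormal-format reliability index:
    beta = ln(mean_R / mean_S) / sqrt(V_R^2 + V_S^2),
    where V_R, V_S are the coefficients of variation of
    resistance R and load effect S."""
    return math.log(mean_R / mean_S) / math.sqrt(cov_R**2 + cov_S**2)

# Hypothetical statistics: mean resistance twice the mean load effect,
# with 15% and 12% coefficients of variation.
b = beta_lognormal(mean_R=2.0, mean_S=1.0, cov_R=0.15, cov_S=0.12)
assert 3.5 < b < 3.7  # near the calibrated target beta_0 = 3.5
```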

A Design on Radio-Communications Operation of the Fishery VMS by VHF DSC in the East Sea Area (VHF DSC에 의한 동해권 어업 VMS의 통신운용 설계)

  • Choi, Jo-Cheon;Jeong, Young-Cheol;Kim, Jeong Uk;Choi, Myeong Soo;Lee, Seong Ro
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.4, pp.371-377, 2013
  • Fishing boats of more than 5 tons are obliged to carry VHF DSC equipment under the Fishing Vessels Act and the Vessel Safety Act. The owner of a fishing vessel must be equipped with an automatic position reporting device in accordance with the notice of the Minister of Land, Transport and Maritime Affairs, to ensure navigational safety and to enable a quick response in the event of a maritime accident. Installation of VHF coast stations began in the East Sea area in 2012, and an annual installation plan is now under way for the Yellow Sea and the South Sea. For the coastal VMS construction project, the DSC is operated by web-based remote control and monitoring at the Fishery Information Communication Station. Every fishing vessel's VHF DSC, in conjunction with GPS, transmits its location information to the coast station automatically by DSC call. This paper studies the communications coverage setup and traffic operation needed to realize a roaming service, using navigation route tracking and RSSI techniques in a parallel algorithm referred to the VHF DSC coast stations in the East Sea.
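
An RSSI-based roaming decision among coast stations might look like the following sketch; the station names, readings, and hysteresis margin are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of an RSSI-based roaming decision: a vessel stays with its
# current coast station unless another station's received signal strength
# exceeds it by a hysteresis margin, which avoids rapid switching near
# coverage boundaries.

HYSTERESIS_DB = 6  # switch only when another station is clearly stronger

def select_station(current: str, rssi_dbm: dict) -> str:
    """rssi_dbm maps station name -> measured RSSI in dBm (less negative
    is stronger)."""
    best = max(rssi_dbm, key=rssi_dbm.get)
    if best != current and rssi_dbm[best] >= rssi_dbm[current] + HYSTERESIS_DB:
        return best
    return current

readings = {"Sokcho": -92, "Donghae": -81, "Pohang": -99}
assert select_station("Sokcho", readings) == "Donghae"   # roam: 11 dB better
assert select_station("Donghae", readings) == "Donghae"  # stay put
```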

An Automated Topic Specific Web Crawler Calculating Degree of Relevance (연관도를 계산하는 자동화된 주제 기반 웹 수집기)

  • Seo Hae-Sung;Choi Young-Soo;Choi Kyung-Hee;Jung Gi-Hyun;Noh Sang-Uk
    • Journal of Internet Computing and Services, v.7 no.3, pp.155-167, 2006
  • It is desirable for users surfing the Internet to find Web pages as closely related to their interests as possible. Toward this end, this paper presents a topic-specific Web crawler that computes the degree of relevance, collects a cluster of pages for a given topic, and refines the preliminary set of related Web pages using term frequency/document frequency, entropy, and compiled rules. In the experiments, we tested our topic-specific crawler in terms of classification accuracy, crawling efficiency, and crawling consistency. First, the classification accuracy using the set of rules compiled by CN2 was the best among those of the C4.5 and backpropagation learning algorithms. Second, we measured the classification efficiency to determine the best threshold value affecting the degree of relevance. In the third experiment, the consistency of the crawler was measured in terms of the number of resulting URLs overlapping across different starting URLs. The experimental results imply that our topic-specific crawler is fairly consistent, regardless of the randomly chosen starting URLs.


Missing Pattern Matching of Rough Set Based on Attribute Variations Minimization in Rough Set (속성 변동 최소화에 의한 러프집합 누락 패턴 부합)

  • Lee, Young-Cheon
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.10 no.6, pp.683-690, 2015
  • In rough set theory, missing attribute values cause several problems, such as in reduct and core estimation, and they do not yield a discernible pattern for decision tree construction. Several methods exist, such as substitution of typical attribute values, assignment of every possible value, event covering, C4.5, and the special LEMS algorithm. However, these mainly substitute frequently appearing or common attribute values, so decision rules with high information loss are derived when important attribute values are missing in pattern matching; in particular, cross-validation of the resulting decision rules is difficult. In this paper we suggest a new method that substitutes missing attribute values with values of high information gain, using the entropy variation among the given attributes, thereby completing the information table. The suggested method is validated by conducting the same rough set analysis on the incomplete information system using the ROSE software.
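
The proposed substitution can be sketched as follows: each candidate value is tried, and the one minimizing the conditional entropy of the decision attribute (i.e., maximizing information gain) is chosen. The toy table below is illustrative, not from the paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of decision labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(rows, attr, decision):
    """Entropy of the decision attribute given a condition attribute."""
    n = len(rows)
    h = 0.0
    for v in {r[attr] for r in rows}:
        group = [r[decision] for r in rows if r[attr] == v]
        h += len(group) / n * entropy(group)
    return h

def fill_missing(rows, idx, attr, candidates, decision="d"):
    """Fill rows[idx][attr] with the candidate that minimizes the
    conditional entropy of the decision (maximizes information gain)."""
    def completed(v):
        return [dict(r, **{attr: v}) if i == idx else r
                for i, r in enumerate(rows)]
    best = min(candidates,
               key=lambda v: conditional_entropy(completed(v), attr, decision))
    rows[idx][attr] = best
    return best

table = [{"a": "x", "d": 0}, {"a": "y", "d": 1},
         {"a": None, "d": 0}, {"a": "y", "d": 1}]
# "x" keeps attribute a perfectly predictive of d, so it is chosen
assert fill_missing(table, 2, "a", ["x", "y"]) == "x"
```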

Study on Analysis Algorithm of Search Direction and Concentration of Spatial Information (공간정보 탐색 방향과 집중정도 분석 알고리즘에 관한 연구)

  • Kim, Jong-Ha
    • Korean Institute of Interior Design Journal, v.25 no.4, pp.80-89, 2016
  • The analysis of spatial search direction and its concentration through eye movement can produce useful data, since it reveals the features of space elements and their effects on one another. Analysis of the search features and concentration of spatial sections through eye tracking of shops in a department store leads to the following conclusions. First, the features of the eye's 'in and out' could be estimated through the division of sections by the characteristics of the shops and the extraction of a central point based on the decision of continuous observation. The decision of continuous observation made it possible to analyze the frequency of observation data, corresponding to 'things watched for a long time', and the stared-at points, corresponding to 'things seen very often', by which the searching characteristics of spatial sections could be estimated. Second, for the eye's [in], the right shops scored 0.6 times more (3.5%) than the left ones, and for the eye's [out], the left ones scored 0.6 times more (3.5%). This indicates that [in] and [out] of the right and left shops differed by the same margin: starting from the middle of the space, more attention for [in] went to the right shops and more attention for [out] went to the left shops. Third, regarding the searching directions by section, searches from [B] to [A] (2.9 times) were more frequent than those from [A] to [B] (2.6 times). The left shops also showed more searching direction toward [C, D] than the right ones, and the searching activity at the left shops was more active.
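
The directed section-to-section transition counts underlying the third finding can be sketched as follows; the fixation sequence is made up for illustration:

```python
from collections import Counter

# Sketch: each fixation is assigned to a spatial section, and consecutive
# fixations in different sections count as one directed search transition.

def transition_counts(section_sequence):
    return Counter((a, b)
                   for a, b in zip(section_sequence, section_sequence[1:])
                   if a != b)

fixations = ["A", "A", "B", "A", "C", "B", "A"]  # hypothetical gaze trace
t = transition_counts(fixations)
assert t[("B", "A")] == 2 and t[("A", "B")] == 1  # B->A dominates A->B here
```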

Binary Connected-component Labeling with Block-based Labels and a Pixel-based Scan Mask (블록기반 라벨과 화소기반 스캔마스크를 이용한 이진 연결요소 라벨링)

  • Kim, Kyoil
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.5, pp.287-294, 2013
  • Binary connected-component labeling is widely used in image processing and computer vision. Many labeling techniques have been developed, and two-scan labeling is known as the fastest among them. Traditionally, pixel-based scan masks have been used for the first stage of the two-scan method. Recently, block-based labeling techniques were introduced by C. Grana et al. and L. He et al.; they are faster than pixel-based labeling methods. In this paper, we propose a new binary connected-component labeling technique that uses block-based labels with a pixel-based scan mask. Experimental results on various images show that the proposed method is faster than He's method, currently known as the fastest. The performance improvement averages from 3.9% to 22.4%, depending on the type of image.
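
Classic two-scan labeling with a pixel-based scan mask and union-find equivalence resolution can be sketched as follows (4-connectivity for brevity; the proposed method additionally assigns labels block-wise):

```python
# Two-scan connected-component labeling sketch: the first scan assigns
# provisional labels from the left/up scan mask and records equivalences
# in a union-find structure; the second scan resolves each label to its
# equivalence-class root.

def label(image):
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # parent[i] == i means i is a root; index 0 = background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    # First scan: provisional labels from the left/up mask
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            left = labels[y][x - 1] if x else 0
            up = labels[y - 1][x] if y else 0
            if left and up:
                labels[y][x] = min(left, up)
                union(left, up)
            elif left or up:
                labels[y][x] = left or up
            else:  # new provisional label
                parent.append(len(parent))
                labels[y][x] = len(parent) - 1

    # Second scan: replace provisional labels with their roots
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
out = label(img)
assert out[0][0] == out[1][1]   # one component on the left
assert out[0][3] == out[2][3]   # one component on the right
assert out[0][0] != out[0][3]   # and they are distinct
```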

High Quality Multi-Channel Audio System for Karaoke Using DSP (DSP를 이용한 가라오케용 고음질 멀티채널 오디오 시스템)

  • Kim, Tae-Hoon;Park, Yang-Su;Shin, Kyung-Chul;Park, Jong-In;Moon, Tae-Jung
    • The Journal of the Acoustical Society of Korea, v.28 no.1, pp.1-9, 2009
  • This paper deals with the realization of multi-channel live karaoke. In this study, 6-channel MP3 decoding and tempo/key scaling were run in real time on a TMS320C6713 DSP, a 32-bit floating-point DSP made by Texas Instruments. The six channels consist of front L/R instruments, rear L/R instruments, melody, and woofer. In the 4-channel case, the rear L/R instrument channels can be replaced with drum L/R channels. The final output is generated to suit a 5.1-channel speaker layout. The SOLA algorithm was applied for tempo scaling, and key scaling was done with interpolation and decimation in the time domain. The drum channel was excluded from key scaling by separating instruments into drums and non-drums, and in the SOLA processing, high-quality tempo scaling was made possible by varying the SOLA frame size, optimized for real-time processing. The use of six channels allows various channel configurations, and the multi-channel audio system of this study can be applied effectively wherever live music is needed.
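
A simplified SOLA (Synchronized Overlap-Add) tempo-scaling sketch is shown below; the frame, hop, and search sizes are illustrative, not the per-channel sizes the paper optimizes:

```python
import math

# Simplified SOLA sketch: analysis frames taken every hop_a samples are
# laid down every hop_s samples in the output; for each frame, a small
# shift maximizing correlation with the already-written output is searched,
# and the overlap region is cross-faded, preserving pitch while changing
# tempo.

def sola(x, tempo, frame=400, hop_a=200, search=40):
    hop_s = int(hop_a * tempo)  # tempo > 1 stretches, < 1 compresses
    out = list(x[:frame])
    pos_a, pos_s = hop_a, hop_s
    while pos_a + frame <= len(x):
        seg = x[pos_a:pos_a + frame]
        # search the shift with the best correlation in the overlap region
        best_k, best_c = 0, float("-inf")
        for k in range(-search, search + 1):
            start = pos_s + k
            if start < 0 or start >= len(out):
                continue
            ov = min(len(out) - start, frame)
            c = sum(out[start + i] * seg[i] for i in range(ov))
            if c > best_c:
                best_c, best_k = c, k
        start = pos_s + best_k
        ov = min(len(out) - start, frame)
        for i in range(ov):  # linear cross-fade over the overlap
            w = i / ov
            out[start + i] = (1 - w) * out[start + i] + w * seg[i]
        out.extend(seg[ov:])
        pos_a += hop_a
        pos_s = start + hop_s
    return out

# 1 s of a 220 Hz sine at 8 kHz, stretched ~1.5x (slower, same pitch)
x = [math.sin(2 * math.pi * 220 * n / 8000) for n in range(8000)]
y = sola(x, tempo=1.5)
assert len(y) > len(x)
```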