• Title/Summary/Keyword: number and operations


Side Channel Analysis with Low Complexity in the Diffusion Layer of Block Cipher Algorithm SEED (SEED 블록 암호 알고리즘 확산계층에서 낮은 복잡도를 갖는 부채널 분석)

  • Won, Yoo-Seung;Park, Aesun;Han, Dong-Guk
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.27 no.5
    • /
    • pp.993-1000
    • /
    • 2017
  • When the availability of an embedded device is considered, a combined countermeasure such as first-order masking plus hiding is quite attractive because security and efficiency can be provided at the same time. In particular, the combined countermeasure can be applied to the confusion and diffusion layers of the first and last rounds to keep the overhead low, while the middle rounds employ only a first-order masking countermeasure or no countermeasure at all. In this paper, we suggest a novel side channel analysis with low complexity that targets the output of the diffusion layer. In general, the output of the diffusion layer cannot be chosen as the attack target owing to its high complexity. We show that when the diffusion layer of a block cipher is composed of AND operations, the attack complexity can be reduced; here, the target algorithm is SEED. The attack complexity can then be reduced from $2^{32}$ to $2^{16}$ by exploiting the correlation between the combination of S-box outputs and the combination of diffusion-layer outputs. Moreover, compared with the usual approach whose main target is the S-box output, we demonstrate that the required number of traces can be reduced by 43~98% on simulated traces. Additionally, on a real device, we show that only 8,000 traces are enough for the suggested scheme to retrieve the correct key, whereas the general approach fails to reveal the correct key even with 100,000 traces.
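
As a rough illustration of the general attack setting (not the paper's specific combined-value attack on SEED), the sketch below shows a standard correlation power analysis loop in Python: each guessed key byte produces a hypothetical Hamming-weight leakage, which is correlated against measured traces. The `sbox` table and the leakage model are illustrative assumptions; the paper instead targets a combined S-box/diffusion-layer value to shrink the guessing space from $2^{32}$ to $2^{16}$.

```python
import numpy as np

def hamming_weight(x: np.ndarray) -> np.ndarray:
    # Popcount of 8-bit values.
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_rank_key_guesses(traces: np.ndarray, plaintexts: np.ndarray,
                         sbox: np.ndarray) -> np.ndarray:
    """traces: (n_traces, n_samples); plaintexts: (n_traces,) byte values.
    Returns the 256 key-byte guesses ranked by peak absolute correlation."""
    centered = traces - traces.mean(axis=0)
    trace_norm = np.linalg.norm(centered, axis=0) + 1e-12
    peak = np.zeros(256)
    for guess in range(256):
        # Hypothetical leakage: Hamming weight of the S-box output under this guess.
        hyp = hamming_weight(sbox[plaintexts ^ guess]).astype(float)
        hyp -= hyp.mean()
        corr = (hyp @ centered) / (np.linalg.norm(hyp) * trace_norm)
        peak[guess] = np.abs(corr).max()
    return np.argsort(peak)[::-1]
```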

Parallel Range Query processing on R-tree with Graphics Processing Units (GPU를 이용한 R-tree에서의 범위 질의의 병렬 처리)

  • Yu, Bo-Seon;Kim, Hyun-Duk;Choi, Won-Ik;Kwon, Dong-Seop
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.5
    • /
    • pp.669-680
    • /
    • 2011
  • R-trees are widely used in areas such as geographical information systems, CAD systems, and spatial databases to efficiently index multi-dimensional data. As the data sets used in these areas grow in size and complexity, however, range query operations on the R-tree need to become even faster to meet area-specific constraints. To address this problem, there have been various research efforts to accelerate query processing on the R-tree by using buffer mechanisms or by parallelizing the query processing across multiple disks and processors. As part of these efforts, approaches that parallelize query processing on the R-tree with Graphics Processing Units (GPUs) have been explored. The use of GPUs may guarantee improved performance resulting from faster calculations and reduced disk accesses, but it may also introduce overhead caused by high memory access latencies and the low data exchange rate between the GPU and the CPU. In this paper, to address these overheads and to exploit GPUs efficiently, we propose a novel approach that uses the GPU as a buffer to parallelize query processing on the R-tree. The buffering algorithm improves performance by reducing the number of disk accesses and by maximizing coalesced memory accesses, thereby minimizing GPU memory access latencies. Through extensive performance studies, we observed that the proposed approach achieves up to 5 times higher query performance than the original CPU-based R-tree.
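
To illustrate the data-parallel flavor of this kind of approach (this is not the paper's GPU implementation), the sketch below tests many bounding rectangles against a query rectangle in a single vectorized pass; NumPy stands in for the GPU kernel, and the flat coordinate arrays stand in for node data buffered in GPU memory.

```python
import numpy as np

def range_query_bulk(mbr_min: np.ndarray, mbr_max: np.ndarray,
                     q_min: np.ndarray, q_max: np.ndarray) -> np.ndarray:
    """mbr_min/mbr_max: (n, d) lower/upper corners of n bounding boxes.
    Returns indices of boxes intersecting the query box [q_min, q_max]."""
    overlap = np.all((mbr_min <= q_max) & (mbr_max >= q_min), axis=1)
    return np.nonzero(overlap)[0]

# Usage: three 2-D boxes kept in contiguous buffers, tested in one pass.
mins = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 9.0]])
maxs = np.array([[2.0, 2.0], [7.0, 7.0], [10.0, 10.0]])
print(range_query_bulk(mins, maxs, np.array([1.0, 1.0]), np.array([6.0, 6.0])))  # -> [0 1]
```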

A Use-case based Component Mining Approach for the Modernization of Legacy Systems (레거시 시스템을 현대화하기 위한 유스케이스 기반의 컴포넌트 추출 방법)

  • Kim, Hyeon-Soo;Chae, Heung-Seok;Kim, Chul-Hong
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.7
    • /
    • pp.601-611
    • /
    • 2005
  • Owing not only to proven stability and reliability but also to significant investment and years of accumulated experience and knowledge, legacy systems have supported the core business applications of many organizations over the years. The emergence of Web-based e-business environments, however, requires externalizing core business processes to the Web, which is a competitive advantage in the new economy. Consequently, organizations now need to mine the business value buried in their legacy systems for reuse in new e-business applications. In this paper we suggest a systematic approach to mining components that perform specific business services and that consist of the legacy system's assets to be leveraged on the modern platform. The proposed activities are divided into several tasks. First, use cases that realize the business processes are captured. Second, a design model is constructed for each identified use case in order to integrate use cases with similar functionalities. Third, we identify component candidates from the design model and then adjust them by considering elements common across the candidates. Business components are also divided into three more fine-grained components so that they can be deployed onto J2EE/EJB environments. Finally, we define the interfaces of the components, which expose their functionalities as operations.

Distal-extension removable partial denture with anterior implant supported fixed prostheses in a maxillary edentulous patient: Case report (상악 완전 무치악 환자에서 임플란트 고정성 보철물을 지대치로 한 후방 연장 국소의치 수복 증례)

  • Gwon, Bora;Jeon, Young-Chan;Jeong, Chang-Mo;Yun, Mi-Jung;Lee, So-Hyoun;Huh, Jung-Bo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.56 no.4
    • /
    • pp.375-383
    • /
    • 2018
  • Clinicians face many difficulties when planning prosthodontic restorations with implants in a completely edentulous patient. When fixed implant prostheses are planned, additional surgery is often necessary because of severely resorbed alveolar bone, and high treatment costs and long treatment durations may be required. In addition, a lack of interocclusal space can be a problem when planning implant-supported overdentures. In this study, we planned to place a small number of implants in the anterior maxilla and use them as abutments for a distal-extension removable partial denture on the posterior side in a maxillary fully edentulous patient. This approach reduces the need for additional invasive operations such as alveolar bone grafting, shortens the treatment time, and is a relatively light burden for elderly patients. In this case, the patient was provided with a distal-extension removable partial denture and anterior implant-supported fixed prostheses, similar to the previous restoration, and showed good adaptation; chewing efficiency and esthetics were recovered.

2N-Point FFT-Based Inter-Carrier Interference Cancellation Alamouti Coded OFDM Method for Distributed Antennas systems (분산안테나 시스템을 위한 2N-점 고속푸리에변환 기반 부반송파 간 간섭 자체제거 알라무티 부호화 직교주파수분할다중화 기법)

  • Kim, Bong-Seok;Choi, Kwonhue
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38A no.12
    • /
    • pp.1030-1038
    • /
    • 2013
  • The proposed Alamouti coded OFDM scheme effectively cancels inter-carrier interference (ICI) caused by the frequency offset between distributed antennas. Conventional Alamouti coded OFDM schemes that mitigate ICI use N-point Inverse Fast Fourier Transform/Fast Fourier Transform (IFFT/FFT) operations for OFDM modulation and demodulation with N subcarriers in total. However, performance degrades because the ICI also repeats with period N due to the property of the N-point IFFT/FFT. To avoid this problem, null data are placed on the subcarriers with large ICI, and thus the data rate decreases. The proposed scheme employs a 2N-point IFFT/FFT instead of the N-point IFFT/FFT in order to increase the sampling rate. By increasing the sampling rate, the amount of interference decreases significantly because the period of the ICI also increases. The proposed scheme therefore increases the data rate and improves performance by reducing both the amount of ICI and the number of null-data subcarriers. Furthermore, the gains in performance and data rate are more significant with higher-order modulations such as 16-Quadrature Amplitude Modulation (QAM) or 64-QAM.
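
The sketch below illustrates only the oversampling idea behind the 2N-point transform (it omits the Alamouti coding and ICI self-cancellation parts): an N-symbol block is mapped onto 2N IFFT bins with zero-padding, which doubles the time-domain sampling rate so that the periodic interference repeats with a longer period. The subcarrier mapping shown is an illustrative assumption, not the paper's exact mapping.

```python
import numpy as np

def ofdm_modulate_2n(symbols: np.ndarray) -> np.ndarray:
    """Map N frequency-domain symbols onto a 2N-point IFFT grid (zero-padded),
    producing 2N time samples, i.e. twice the sampling rate of an N-point IFFT."""
    n = symbols.size
    bins = np.zeros(2 * n, dtype=complex)
    bins[:n // 2] = symbols[:n // 2]            # positive-frequency half
    bins[-(n - n // 2):] = symbols[n // 2:]     # negative-frequency half
    return np.fft.ifft(bins)

# Usage: 64 QPSK symbols modulated onto a 128-sample time-domain block.
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)) / np.sqrt(2)
time_block = ofdm_modulate_2n(qpsk)
```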

SPQUSAR : A Large-Scale Qualitative Spatial Reasoner Using Apache Spark (SPQUSAR : Apache Spark를 이용한 대용량의 정성적 공간 추론기)

  • Kim, Jongwhan;Kim, Jonghoon;Kim, Incheol
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.12
    • /
    • pp.774-779
    • /
    • 2015
  • In this paper, we present the design and implementation of a large-scale qualitative spatial reasoner using Apache Spark, an in-memory high speed cluster computing environment, which is effective for sequencing and iterating component reasoning jobs. The proposed reasoner can not only check the integrity of a large-scale spatial knowledge base representing topological and directional relationships between spatial objects, but also expand the given knowledge base by deriving new facts in highly efficient ways. In general, qualitative reasoning on topological and directional relationships between spatial objects includes a number of composition operations on every possible pair of disjunctive relations. The proposed reasoner enhances computational efficiency by determining the minimal set of disjunctive relations for spatial reasoning and then reducing the size of the composition table to include only that set. Additionally, in order to improve performance, the proposed reasoner is designed to minimize disk I/Os during distributed reasoning jobs, which are performed on a Hadoop cluster system. In experiments with both artificial and real spatial knowledge bases, the proposed Spark-based spatial reasoner showed higher performance than the existing MapReduce-based one.
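
As a single-machine illustration of composition-based qualitative reasoning (the paper distributes this work over Spark and minimizes the composition table), the sketch below composes two disjunctive relation sets through a tiny, hypothetical composition table; the relation names and table entries are illustrative, not the full topological/directional calculus.

```python
from itertools import product

# COMP[(r1, r2)] = possible relations between A and C, given A r1 B and B r2 C.
# Hypothetical three-relation fragment; a real reasoner uses the full calculus.
COMP = {
    ("inside", "inside"):   {"inside"},
    ("inside", "overlaps"): {"inside", "overlaps", "disjoint"},
    ("overlaps", "inside"): {"inside", "overlaps"},
}
ALL = {"inside", "overlaps", "disjoint"}

def compose(rel_ab: set, rel_bc: set) -> set:
    """Compose two disjunctive relation sets via the composition table."""
    out = set()
    for r1, r2 in product(rel_ab, rel_bc):
        out |= COMP.get((r1, r2), ALL)  # unknown pair -> no constraint
    return out

# A inside B, B overlaps C  =>  candidate relations between A and C.
print(compose({"inside"}, {"overlaps"}))
```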

Bony Contusion of the Knees with Isolated Traumatic Meniscal Tears (외상성 반월상 연골 단독 손상에서 골타박)

  • Kim, Kyung-Chul;Lee, Ho-Jin;Koo, Bon-Seop
    • Journal of the Korean Arthroscopy Society
    • /
    • v.8 no.1
    • /
    • pp.9-13
    • /
    • 2004
  • Purpose: We studied the incidence rate and patterns of bony contusions of the knee in patients with isolated traumatic meniscal tears. Materials and Methods: We retrospectively analyzed MRI scans and medical records of forty-two patients (42 knees) who had undergone operations for isolated traumatic meniscal tears. The mean age was 33.7 years, and the numbers of patients with lateral, medial, or both meniscal tears were 19, 18, and 5, respectively. Bony contusions were examined according to incidence, location, and the type of meniscal tear. Results: Bony contusion was identified in 5 cases (11.9%), which had a medial meniscal tear (4 cases) or tears of both menisci (1 case). It was always located in the medial compartment of the joint. Bony contusion was found in knees with various types of traumatic meniscal tears. Conclusion: Bony contusions in knees with isolated traumatic meniscal tears have a very low incidence and seem to disappear within 12 months after the trauma. They are mainly related to medial meniscal tears and are located in the medial compartment of the joint.


A Real Time Processing Technique for Content-Based Image Retargeting (컨텐츠 기반 영상 리타겟팅을 위한 실시간 처리 기법)

  • Lee, Kang-Hee;Yoo, Jae-Wook;Park, Dae-Hyun;Kim, Yoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.5
    • /
    • pp.63-71
    • /
    • 2011
  • In this paper, we propose a new real-time image retargeting method that preserves the contents of an image. Since conventional seam carving, the best-known content-based image retargeting technique, uses dynamic programming, a repetitive update of the cumulative minimum energy map is unavoidable. This update procedure incurs a processing delay because it requires many operations over a full search of the image. The proposed method calculates the diffusion region of each seam candidate in the cumulative minimum energy map in order to reduce the update time. By using the diffusion region, several seams are extracted at the same time and the number of updates of the cumulative energy map is reduced. Therefore, fast processing is possible while the image quality remains comparable to that of the existing method. Experimental results show that the proposed method can preserve the contents of an image and adjust the image size in real time.
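
For context, the sketch below computes the conventional cumulative minimum-energy map that standard seam carving repeatedly updates; it is the dynamic-programming baseline the paper accelerates, not the proposed diffusion-region method.

```python
import numpy as np

def cumulative_energy_map(energy: np.ndarray) -> np.ndarray:
    """energy: (H, W) per-pixel energy; returns the vertical-seam accumulation map
    M[i, j] = energy[i, j] + min(M[i-1, j-1], M[i-1, j], M[i-1, j+1])."""
    h, w = energy.shape
    m = energy.astype(float).copy()
    for row in range(1, h):
        prev = m[row - 1]
        up_left = np.r_[np.inf, prev[:-1]]    # M[i-1, j-1], inf at the left border
        up_right = np.r_[prev[1:], np.inf]    # M[i-1, j+1], inf at the right border
        m[row] += np.minimum(np.minimum(up_left, prev), up_right)
    return m

# The smallest entry in the last row marks where the lowest-energy vertical seam ends.
```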

Detecting Spelling Errors by Comparison of Words within a Document (문서내 단어간 비교를 통한 철자오류 검출)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.12
    • /
    • pp.83-92
    • /
    • 2011
  • Typographical errors caused by the author's mistyping occur frequently in documents prepared with word processors, in contrast to conventionally published material. In such online documents, the most common orthographical errors are spelling errors that result from hitting keys near the intended keys on the keyboard. Typical spelling checkers detect and correct these errors using a morphological analyzer: the morphological analysis module checks the well-formedness of input words, and all words rejected by the analyzer are regarded as misspelled. However, if the morphological analyzer accepts even mistyped words, it treats them as correctly spelled. In this paper, I propose a simple method capable of detecting and correcting errors that previous methods cannot detect. The proposed method is based on the observation that typographical errors are generally not repeated and therefore tend to have very low frequency. If the words generated by deletion, exchange, and transposition operations on each phoneme of a low-frequency word appear in the list of high-frequency words, some of them are considered the correctly spelled forms. Some heuristic rules are also presented to reduce the number of candidates. The proposed method detects not syntactic errors but some semantic errors (mistyped words that are nonetheless well-formed), and it is useful for scoring candidates.
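
A minimal sketch of this candidate-generation idea is given below: a low-frequency word is expanded into single-edit variants (deletion, substitution standing in for "exchange", and transposition), and variants that appear in the document's high-frequency word list are kept as likely correct forms. The sketch operates on characters rather than Korean phonemes, and the frequency threshold is an illustrative assumption.

```python
def single_edit_variants(word: str, alphabet: str) -> set:
    """Single-edit variants: deletion, substitution ('exchange'), transposition."""
    deletions = {word[:i] + word[i + 1:] for i in range(len(word))}
    substitutions = {word[:i] + c + word[i + 1:]
                     for i in range(len(word)) for c in alphabet if c != word[i]}
    transpositions = {word[:i] + word[i + 1] + word[i] + word[i + 2:]
                      for i in range(len(word) - 1)}
    return deletions | substitutions | transpositions

def suggest_corrections(word: str, freq: dict, alphabet: str, hi_freq: int = 5) -> list:
    """Treat `word` as a possible typo only if it is rare in the document,
    then suggest high-frequency single-edit variants, most frequent first."""
    if freq.get(word, 0) >= hi_freq:
        return []                                   # frequent enough: assume correct
    candidates = [w for w in single_edit_variants(word, alphabet)
                  if freq.get(w, 0) >= hi_freq]
    return sorted(candidates, key=lambda w: -freq[w])
```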

An Experience of Cox-maze III Procedure for Chronic Atrial Fibrillation (만성 심방세동에 대한 Cox-maze III 수술의 임상경험)

  • 김삼현;박이태;서필원;박성식;류재욱;최창휴;김명아;이명용;김영권
    • Journal of Chest Surgery
    • /
    • v.31 no.7
    • /
    • pp.668-673
    • /
    • 1998
  • During the past several years, the maze operation has become the most effective method of treatment for chronic atrial fibrillation. When the maze procedure is performed concomitantly with other cardiac operations, surgeons may, in their initial experience, be concerned about the additional operative risk and the uncertainty of the results. We performed the Cox-maze III procedure in six cases of chronic atrial fibrillation associated with mitral, mitral and aortic, or coronary arterial disease. The maze III procedure was done with open mitral commissurotomy (3 cases), mitral valve replacement (1 case), aortic and mitral valve replacement (1 case), and two-vessel coronary artery bypass grafting (1 case). In spite of rather prolonged aortic cross-clamp times, cardiac recovery was uneventful in all cases. No case required re-exploration for postoperative bleeding. All patients showed regular sinus rhythm either immediately or between 2 and 20 days postoperatively. Transient postoperative supraventricular arrhythmias were easily controlled with various antiarrhythmic agents. In follow-up evaluations, all cases showed regular sinus rhythm on ECG, and right and left atrial transport function was confirmed by Doppler echocardiography in all but one. Though our experience was limited in the number of cases, the Cox-maze III procedure was effective in controlling chronic atrial fibrillation without serious additional operative risk.
