• Title/Summary/Keyword: Data Processing (데이터처리)

Search Results: 17,686 (processing time: 0.042 seconds)

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC
    • /
    • v.10C no.5
    • /
    • pp.625-634
    • /
    • 2003
  • More than a decade after its initial proposal, deployment of IP multicast has been limited by problems such as traffic control in multicast routing, multicast address allocation on the global Internet, and reliable multicast transport techniques. Recently, with the growth of multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that considers switching of the tree. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long join latency. It also tries to select the nearest node as a potential parent, but the degree limit of nodes can prevent the nearest node from being chosen, so the generated tree has low efficiency. To reduce join latency and improve the efficiency of the tree, we propose searching two levels of the tree at a time. This method forwards the join request message to a node's own children, so in the steady state there is no overhead to maintain the tree; when a join request arrives, the increased number of search messages reduces join latency, and searching more nodes helps construct more efficient trees. To evaluate the performance of our fast join mechanism, we measure metrics such as search latency, the number of searched nodes, and the number of switches as functions of the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
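The two-level descent described above can be sketched in Python. The node structure, distance metric, and function names below are illustrative assumptions for exposition, not the paper's actual protocol; a real system would measure network distance (e.g. RTT) instead of the placeholder used here:

```python
# Hypothetical sketch of the two-level join search: at each step, the current
# node, its children, and its grandchildren are all considered as potential
# parents, instead of descending one level at a time.

class Node:
    def __init__(self, name, degree_limit=2):
        self.name = name
        self.degree_limit = degree_limit
        self.children = []

def distance(a, b):
    # Placeholder metric for illustration; a real overlay would measure RTT.
    return abs(sum(map(ord, a.name)) - sum(map(ord, b.name)))

def find_parent(root, joiner):
    """Descend the tree two levels at a time, collecting the current node,
    its children, and its grandchildren as candidate parents."""
    current = root
    while True:
        candidates = [current]
        for c in current.children:
            candidates.append(c)
            candidates.extend(c.children)
        # only nodes with spare degree may accept the joiner
        eligible = [n for n in candidates if len(n.children) < n.degree_limit]
        best = min(eligible, key=lambda n: distance(n, joiner))
        if best is current:
            return best
        current = best  # may skip two levels in a single step
```

Because each iteration inspects two levels below the current node, the number of descent steps (and hence join latency) roughly halves compared with a one-level-at-a-time search, at the cost of more search messages per step.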

Finite Element Method Modeling for Individual Malocclusions: Development and Application of the Basic Algorithm (유한요소법을 이용한 환자별 교정시스템 구축의 기초 알고리즘 개발과 적용)

  • Shin, Jung-Woog;Nahm, Dong-Seok;Kim, Tae-Woo;Lee, Sung Jae
    • The korean journal of orthodontics
    • /
    • v.27 no.5 s.64
    • /
    • pp.815-824
    • /
    • 1997
  • The purpose of this study was to develop a basic algorithm for finite element modeling of individual malocclusions. Usually, a great deal of time is spent on preprocessing. To reduce the time required, we developed a standardized procedure for measuring the position of each tooth and a program to automate preprocessing. The following procedures were carried out: 1. Twenty-eight tooth morphologies were constructed three-dimensionally for finite element analysis and saved as separate files. 2. Standard brackets were attached so that the FA points coincided with the centers of the brackets. 3. A study model of the patient was made. 4. Using the study model, the crown inclination, angulation, and the vertical distance from the tip of each tooth were measured with specially designed tools. 5. The arch form was determined from a picture of the model using an image-processing technique. 6. The measured data were input as a rotation matrix. 7. The program produces an output file containing the necessary information about the three-dimensional positions of the teeth, which is applicable to several commonly used finite element programs. The program implementing the basic algorithm was written in Turbo C, and the resulting output file was applied to ANSYS. This standardized model-measuring procedure and program reduce the time required, especially for preprocessing, and can easily be applied to other malocclusions.
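Step 6, entering the measured angles as a rotation matrix, can be illustrated with a small numpy sketch. The axis conventions and function names are assumptions for illustration only; the paper's actual program was written in Turbo C:

```python
import numpy as np

def rotation_matrix(inclination_deg, angulation_deg):
    """Compose rotations about the x axis (inclination) and y axis
    (angulation); the axis assignment here is an assumed convention."""
    t = np.radians(inclination_deg)
    a = np.radians(angulation_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(t), -np.sin(t)],
                   [0, np.sin(t),  np.cos(t)]])
    Ry = np.array([[ np.cos(a), 0, np.sin(a)],
                   [ 0,         1, 0        ],
                   [-np.sin(a), 0, np.cos(a)]])
    return Ry @ Rx

def place_tooth(nodes, inclination, angulation, vertical_offset):
    """Rotate a tooth's FE mesh nodes (n x 3 array), then translate along z
    by the measured vertical distance."""
    R = rotation_matrix(inclination, angulation)
    return nodes @ R.T + np.array([0.0, 0.0, vertical_offset])
```

Each tooth's stored mesh can then be positioned from its measured angles before the assembled model is exported to the finite element package.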


A Study on Partially Applied Color Image in Black and White Moving Imagery (흑백영상의 부분 색채화에 관한 연구)

  • Yeo, Myoung;Kim, Ji-Hong
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.322-326
    • /
    • 2006
  • Although human beings can perceive a fully colored world, early photographic technology could produce only black-and-white images. Cinema likewise captured monochrome imagery until color film technology was developed. Driven by the desire to present color imagery, and enabled by advances in film and color-ink production, photography and print came to use color for its strong impact on viewers. However, as vivid color images came to saturate every printed and filmed medium, their eye-catching freshness diminished. By contrast, rendering an image in black and white, or applying color only partially, now creates a distinction from fully colored images. This style first appeared in commercials and music videos and then spread to film. Such bleached color images evoke nostalgia and provide visual differentiation, and they can be provocative to audiences, as in the commercials for "Anycall" and "Dimchae" and the films "Schindler's List" and "Sin City." Studies of partially colored images are scarce. This study therefore examines the technique through commercials and film and, on that basis, investigates its application. Survey data are collected to establish a basic framework for understanding partial color application.
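The partial-coloring effect itself is easy to demonstrate computationally: desaturate every pixel except those matching a chosen color rule. The numpy sketch below is a generic illustration of the technique, not anything from the study; the red-dominance rule is an arbitrary assumption:

```python
import numpy as np

def partial_color(img, keep_mask):
    """Convert an RGB image (h x w x 3, values in [0, 1]) to grayscale
    everywhere except where keep_mask is True."""
    # ITU-R BT.601 luma weights for the grayscale conversion
    gray = img @ np.array([0.299, 0.587, 0.114])
    out = np.repeat(gray[..., None], 3, axis=2)
    out[keep_mask] = img[keep_mask]   # retain original color where masked
    return out

def red_mask(img, margin=0.2):
    """Keep pixels whose red channel clearly dominates (illustrative rule)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r > g + margin) & (r > b + margin)
```

A single strongly colored subject against a desaturated background is the effect familiar from "Schindler's List."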


A Proposal of Quality Attributes for Hospital Information Systems (병원정보시스템 품질 항목에 대한 제안)

  • Park, Chan-Seok;Go, Seok-Ha
    • Proceedings of the Korea Society of Information Technology Applications Conference
    • /
    • 2007.05a
    • /
    • pp.300-320
    • /
    • 2007
  • With the development of information technology, software products have become essential to every industry, and interest in quality and its evaluation is steadily growing. In some industries, however, many problems with software quality evaluation have been pointed out: users' panacea-like expectations of it, a shortage of quality standards, a lack of good measurement data, and the engineering limits of software analysis and design. In Korea as well, errors in medical-industry information systems and users' inexperience in operating them have added large social costs every year and drawn attention to the quality of hospital information systems. In particular, hospital information systems, with their strong industry-specific character, require user-centered software design and the integration of diverse expert knowledge to be built successfully, and studies have argued for methodologies that use information-system quality measurement to reduce design confusion among researchers and developers. Most hospital information systems are developed and operated around fragmentary transaction processing, and analytic information-processing functions for long-term management strategy or clinical research are lacking. Objective quality criteria that could guide software redesign or further development are also scarce, and user requirements are not efficiently reflected in software design. This study therefore synthesizes recent research on the quality evaluation of hospital information systems and, based on usability, which has been used effectively in quality evaluation, proposes an evaluation method and quality metrics that incorporate the particular characteristics of the hospital industry. The International Organization for Standardization (ISO) presents functionality, reliability, usability, efficiency, maintainability, and portability as quality characteristics. In particular, as organized by Folmer & Bosch (2004), ISO 9126 classifies quality characteristics into learnability, operability, understandability, and attractiveness, while ISO 9241-11 classifies them into effectiveness, efficiency, and satisfaction. Shackel (1991) classifies them into learnability (learning and time, memory), effectiveness (errors, task time), flexibility, and attitude; Nielsen (1997) into learnability, memorability, errors, efficiency, and satisfaction; and Shneiderman (1998) treats effectiveness (task time, learning time), efficiency (retention over time, errors), and satisfaction as quality characteristics. Such software quality applies to the entire life cycle of software planning, development, growth, and decline, and can be summarized as raising satisfaction by appropriately reflecting users' information needs as the environment changes. To date, however, studies of software quality evaluation have measured generic evaluation items, presented general quality criteria, and duplicated similar measurement content. This tendency has made accurate quality measurement difficult for software with strong industry-specific characteristics and has undermined the reliability of quality measurement. The methodology that emerged to overcome these limits is the measurement of usability: how appropriately end users' requirements are reflected in the system. Usability evaluation can be summarized as identifying and improving problems by directly observing how users operate the system in their actual workplaces. ISO 9241-11 defines usability as the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. A review of the literature on hospital information system evaluation over the past decade shows that quality measurement has motivated informatization, improved medical quality, and produced strong disease-prevention effects.
However, because of problems of awareness about evaluation, the insufficient reliability of evaluation methods, difficulties in certification arising from evaluation guidelines and partial evaluations, the limited dissemination of evaluation results, and the limits of fragmentary studies, the reliability and utility of these research results were found to be low, and the frequency and scope of research on HIS were very weak. In particular, although quality attributes share the same terms, researchers attach entirely different measurement content to them, causing much confusion in presenting efficient quality indicators. This trend in quality evaluation has limited the identification of the specific, distinctive user needs and the particular characteristics of the hospital information system environment that system designers and developers require, and because evaluation has covered only parts of the system, it has failed to provide data important for enterprise-wide system design and development. To overcome these problems and limits, setting user-based evaluation metrics around standard quality attributes such as ISO's, centered on context, will provide a concrete, practical, and reliable evaluation method.


Analysis of Interactions in Multiple Genes using IFSA(Independent Feature Subspace Analysis) (IFSA 알고리즘을 이용한 유전자 상호 관계 분석)

  • Kim, Hye-Jin;Choi, Seung-Jin;Bang, Sung-Yang
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.3
    • /
    • pp.157-165
    • /
    • 2006
  • Changes in the external and internal factors of a cell require specific biological functions to maintain life. Such functions lead particular genes to interact with and regulate each other in multiple ways. Accordingly, we applied a linear decomposition model, IFSA, which derives hidden variables, called 'expression modes,' that correspond to these functions. To interpret gene interaction and regulation, we used a cross-correlation method on a given expression mode. Linear decomposition models such as principal component analysis (PCA) and independent component analysis (ICA) have been shown to be useful for analyzing high-dimensional DNA microarray data, compared to clustering methods. These methods assume that gene expression is controlled by a linear combination of uncorrelated or independent latent variables. However, they have difficulty grouping similar patterns that are slightly time-delayed or asymmetric, since only exactly matched patterns are considered. To overcome this, we employ the IFSA method of [1] to locate phase- and shift-invariant features. Membership scoring functions play an important role in classifying genes, since linear decomposition models basically aim at data reduction rather than at grouping data. We introduce a new scoring function essential to the IFSA method. In this paper we show that IFSA is useful for grouping functionally related genes in the presence of time shifts and expression-phase variance. Ultimately, we propose a new approach to investigating the interaction information of multiple genes.
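The core idea of detecting time-shifted relationships via cross-correlation can be sketched as follows. This is a generic lagged-correlation routine for illustration, not the paper's IFSA implementation; the function name and lag convention are assumptions:

```python
import numpy as np

def max_cross_correlation(x, y, max_lag):
    """Return the lag (within +/- max_lag samples) that maximizes the
    normalized cross-correlation between two expression profiles, so that
    time-delayed but similar patterns can still be grouped together."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        r = np.mean(a * b)   # correlation of the overlapping segment
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r
```

Two profiles that are identical up to a small time shift score near 1 at the matching lag, whereas a plain (zero-lag) correlation would underestimate their similarity.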

Hybrid Scheme of Data Cache Design for Reducing Energy Consumption in High Performance Embedded Processor (고성능 내장형 프로세서의 에너지 소비 감소를 위한 데이타 캐쉬 통합 설계 방법)

  • Shim, Sung-Hoon;Kim, Cheol-Hong;Jhang, Seong-Tae;Jhon, Chu-Shik
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.3
    • /
    • pp.166-177
    • /
    • 2006
  • Cache sizes tend to grow in embedded processors as technology scales to smaller transistors and lower supply voltages. However, a larger cache demands more energy, so the ratio of cache energy consumption to total processor energy is growing. Many schemes have been proposed to reduce cache energy consumption, but each previous scheme addresses only one side of the problem: dynamic cache energy only, or static cache energy only. In this paper, we propose a hybrid scheme that reduces dynamic and static cache energy simultaneously. For this hybrid scheme, we adopt two existing techniques: the drowsy cache technique to reduce static cache energy, and the way-prediction technique to reduce dynamic cache energy. Additionally, we propose an early wake-up technique based on the program counter to reduce the penalty caused by the drowsy cache technique. We focus on the level-1 data cache. The hybrid scheme reduces static and dynamic cache energy consumption simultaneously, and our early wake-up scheme reduces the extra program execution cycles that applying the hybrid scheme would otherwise cause.
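The way-prediction half of the scheme can be modeled abstractly: probe only the predicted (most-recently-used) way first, and pay for a full-set probe only on a misprediction. The toy model below counts "ways probed" as a stand-in for dynamic energy; all names and the replacement policy are illustrative assumptions, not the paper's simulator:

```python
# Toy model of way prediction in a set-associative cache.

class WayPredictedCache:
    def __init__(self, sets=4, ways=4):
        self.sets, self.ways = sets, ways
        self.tags = [[None] * ways for _ in range(sets)]
        self.mru = [0] * sets          # predicted way per set
        self.ways_probed = 0           # abstract proxy for dynamic energy

    def access(self, addr):
        s, tag = addr % self.sets, addr // self.sets
        pred = self.mru[s]
        self.ways_probed += 1          # first, probe only the predicted way
        if self.tags[s][pred] == tag:
            return True                # prediction hit: one way's energy
        self.ways_probed += self.ways - 1   # mispredict: probe the rest
        for w in range(self.ways):
            if self.tags[s][w] == tag:
                self.mru[s] = w        # update the prediction
                return True
        self.tags[s][pred] = tag       # miss: simplistic fill
        return False
```

With good locality most accesses hit in the predicted way, so only one of the four ways is activated per access; the drowsy-cache half (not modeled here) would additionally put idle lines into a low-voltage state to cut static energy.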

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.3
    • /
    • pp.192-207
    • /
    • 2006
  • Today, the SSL protocol is a core part of various computing environments and security systems, but its rigidity in operation causes several problems. First, SSL places a considerable burden on CPU utilization, lowering the performance of the security service, because it encrypts all data transferred between a server and a client. Second, SSL can be vulnerable to cryptanalysis because it uses a key within a fixed algorithm. Third, it is difficult to add and use new cryptographic algorithms. Finally, it is difficult for developers to learn the cryptography APIs (Application Program Interfaces) required for SSL. Hence we need to address these problems while providing a secure and convenient way to operate the protocol and handle data efficiently. In this paper, we propose an SSL component designed and implemented using the CBD (Component-Based Development) concept to satisfy these requirements. The SSL component provides not only data-encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Furthermore, because the component can be reused, it improves productivity and reduces development cost, and when new algorithms are added or existing algorithms are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service at the application layer. We first derive the requirements, and then design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented with EJB, which allows efficient data handling by encrypting and decrypting only the selected data, and improves usability by letting users choose the data and mechanism as they intend. In conclusion, our tests and evaluations show that the SSL component is more usable and efficient than the existing SSL protocol, because its processing time increases at a lower rate than the SSL protocol's.
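Two of the component's ideas, encrypting only selected data and keeping the cipher pluggable, can be sketched independently of EJB. The XOR "cipher" below is a deliberately trivial stand-in so the example stays dependency-free; a real component would plug in AES or another vetted algorithm behind the same interface. All names are illustrative:

```python
# Sketch: selective field encryption with a pluggable cipher component.

class XorCipher:
    """Trivial placeholder cipher (NOT secure); it only demonstrates the
    pluggable interface a real AES-based component would implement."""
    def __init__(self, key: bytes):
        self.key = key
    def encrypt(self, data: bytes) -> bytes:
        return bytes(b ^ self.key[i % len(self.key)]
                     for i, b in enumerate(data))
    decrypt = encrypt   # XOR is its own inverse

def encrypt_selected(record: dict, fields, cipher) -> dict:
    """Encrypt only the chosen fields, leaving the rest in the clear,
    instead of encrypting the whole stream as SSL does."""
    return {k: cipher.encrypt(v) if k in fields else v
            for k, v in record.items()}
```

Encrypting only sensitive fields avoids paying the CPU cost of encrypting bulk data, and swapping the cipher object swaps the algorithm without touching caller code, the reuse point the abstract emphasizes.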

A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.3
    • /
    • pp.322-331
    • /
    • 2003
  • As the complexity of a 3D game increases with the various factors of the game scenario, controlling the interrelations of game objects becomes a problem. A game system therefore needs to coordinate the responses of game objects and to control the behavior of their animations in terms of the game scenario. To produce realistic game simulations, the system must include a structure for designing interactions among game objects. This paper presents a method for designing a dynamic control mechanism for the interaction of game objects in the game scenario. For this method, we propose a game agent system as a framework based on intelligent agents that make decisions using specific rules. The game agent system is used to manage environment data, to simulate game objects, to control interactions among them, and to support a visual authoring interface that can define various interrelations of game objects. These techniques can handle the autonomy level of game objects, the associated collision-avoidance method, and so on, and they enable coherent decision-making by game objects when the scene changes. We designed rule-based behavior control to guide the simulation of game objects; the rules are predefined by the user through a visual interface for designing their interactions. The Agent State Decision Network, composed of visual elements, passes information and infers the current state of game objects. All of these methods can monitor and check variations in the motion states of game objects in real time. Finally, we present a validation of the control method together with a simple case-study example.
In this paper, we design and implement supervised classification systems for high-resolution satellite images. The systems support various interfaces and statistical data on training samples so that the most effective training data can be selected. In addition, the modularized design makes it easy to add new classification algorithms and satellite-image formats. The classifiers take into account the characteristics of the spectral bands of the selected training data and provide various supervised classification algorithms, including parallelepiped, minimum distance, Mahalanobis distance, maximum likelihood, and fuzzy-theory classifiers. We used IKONOS images as input and verified the systems on the classification of high-resolution satellite images.
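Of the listed classifiers, minimum distance is the simplest to illustrate: each pixel's spectral vector is assigned to the class whose training-sample mean is nearest. The numpy sketch below is a generic textbook version, not the system's actual code:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class with the nearest spectral mean.

    pixels:      (n, bands) array of spectral vectors
    class_means: (k, bands) array of per-class training means
    returns:     (n,) array of class indices
    """
    # Euclidean distance from every pixel to every class mean, via broadcasting
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

The other listed classifiers refine this idea: Mahalanobis distance weights the distance by each class's covariance, and maximum likelihood adds the full Gaussian density.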

The Efficient Merge Operation in Log Buffer-Based Flash Translation Layer for Enhanced Random Writing (임의쓰기 성능향상을 위한 로그블록 기반 FTL의 효율적인 합병연산)

  • Lee, Jun-Hyuk;Roh, Hong-Chan;Park, Sang-Hyun
    • The KIPS Transactions:PartD
    • /
    • v.19D no.2
    • /
    • pp.161-186
    • /
    • 2012
  • Recently, flash memory capacity has consistently increased while its price has fallen, making mass-storage SSDs (Solid State Drives) popular. Flash memory, however, has several constraints, and a special layer, the FTL (Flash Translation Layer), is needed to compensate for them. The FTL, essential for operating the hardware efficiently within its restrictions, translates the logical sector numbers of the file system into the physical sector numbers of the flash memory. Poor performance is mainly attributable to the erase-before-write restriction of flash memory, and although there are many log-block-based studies, problems remain in operating mass-storage flash memory. In FAST, a log-block-based FTL, random writes with wide locality frequently trigger merge operations even when sectors in the data block are unused; in other words, inefficient block thrashing occurs and the flash memory's performance degrades. When overwrites are absorbed by log blocks, the log blocks act like a cache, and this technique improves flash memory performance. To improve random-write performance, this study operates log blocks not only as a cache but across the entire flash memory, and reduces merge and erase operations by maintaining a distinct mapping table, the offset mapping table. We define the new FTL as XAST (eXtensively Associative Sector Translation). XAST manages the offset mapping table efficiently based on spatial and temporal locality.
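The cache-like role of a log block with an offset mapping table can be sketched abstractly: overwrites simply append a new page, and a per-sector table remembers where the newest copy lives, deferring the expensive merge. This toy model is illustrative only; names, the table layout, and the merge trigger are assumptions, not XAST's actual design:

```python
# Toy sketch of a log block with an offset mapping table: overwrites append,
# and the table always points at the newest copy of each logical sector.

SECTORS_PER_BLOCK = 4   # assumed small block size for illustration

class LogBlock:
    def __init__(self):
        self.pages = []          # physical pages, written strictly in order
        self.offset_table = {}   # logical sector -> index into pages

    def write(self, sector, data):
        """An overwrite just appends; no erase-before-write is needed
        until the block fills up."""
        self.offset_table[sector] = len(self.pages)
        self.pages.append(data)

    def read(self, sector):
        return self.pages[self.offset_table[sector]]

    def needs_merge(self):
        """Defer the costly merge until the log block is actually full."""
        return len(self.pages) >= SECTORS_PER_BLOCK
```

Because repeated writes to the same sector consume log pages but trigger no merge until the block fills, random-write workloads cause far fewer merge and erase operations than a scheme that merges on every overwrite.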

Microleakage of endodontically treated teeth restored with three different esthetic post and cores (심미적 포스트 코어의 종류에 따른 미세누출에 관한 연구)

  • Park, Ji-Geun;Park, Ji-Man;Park, Eun-Jin
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.47 no.1
    • /
    • pp.53-60
    • /
    • 2009
  • Statement of problem: As esthetic demands increase, many studies of tooth-colored posts and cores are under way. Most concern fiber posts and prefabricated zirconia posts, but few address one-piece milled zirconia posts and cores produced with the CAD/CAM (computer-aided design/computer-aided manufacturing) technique. Purpose: The objective of this study was to compare the microleakage of endodontically treated teeth restored with three different tooth-colored post-and-core systems. Material and methods: Twenty-seven extracted human maxillary incisors were cut at the cementoenamel junction and endodontically treated. The teeth were divided into three groups (n = 9), restored with a fiber post and resin core; a prefabricated zirconia post and heat-pressed ceramic core; or a CAD/CAM-milled zirconia post and core. After post-space preparation, each post was cemented with dual-polymerized resin cement (Variolink II). The teeth were thermocycled for 1,000 cycles between $5-55^{\circ}C$ and dyed in 2% methylene blue at $37^{\circ}C$ for 24 hours. The teeth were then sectioned buccolingually, microleakage was recorded, and the sections were image-analyzed using a microscope and computer software. The data were analyzed by one-way ANOVA and Scheffe's multiple range test (${\alpha}=0.05$). Results: All groups showed microleakage, with no significant differences among the groups (P>.05). The prefabricated zirconia post and heat-pressed ceramic core showed more dye penetration at the post-tooth margin but little microleakage at the end of the post. The fiber post and resin core group and the CAD/CAM-milled zirconia post and core group showed similar microleakage scores at each stage. Conclusion: The prefabricated zirconia post and heat-pressed ceramic core group demonstrated better resistance to leakage, while the fiber post and resin core group and the CAD/CAM-milled zirconia post and core group showed similar patterns. The ANOVA test indicated no significant differences in microleakage among the test groups (P>.05).
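The one-way ANOVA used on the microleakage scores compares between-group to within-group variance via the F statistic. The sketch below computes F by hand; the numbers in the usage example are made up for illustration and are not the study's measurements:

```python
# One-way ANOVA F statistic computed from first principles.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of sample lists."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: spread of samples around their group mean
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    return f, df_b, df_w
```

With three groups of nine teeth, the test compares F against the critical value of the F(2, 24) distribution at ${\alpha}=0.05$; an F below that threshold corresponds to the study's P>.05 finding of no significant group differences.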