• Title/Summary/Keyword: Natural Scientists


Self-healing Engineering Materials: I. Organic Materials (자기치유 공학재료: I. 유기 재료)

  • Choi, Eun-Ji;Wang, Jing;Yoon, Ji-Hwan;Shim, Sang-Eun;Yun, Ju-Ho;Kim, Il
    • Clean Technology, v.17 no.1, pp.1-12, 2011
  • Scientists and engineers have altered the properties of materials such as metals, alloys, polymers, and ceramics to suit the ever-changing needs of our society. Man-made engineering materials generally demonstrate excellent mechanical properties, which often far exceed those of natural materials. However, all such engineering materials lack the ability of self-healing, i.e., the ability to remove or neutralize microcracks without intentional human intervention. The damage-management paradigm observed in nature can be reproduced successfully in man-made engineering materials, provided the intrinsic character of the various types of engineering materials is taken into account. Various self-healing protocols applicable to organic materials such as polymers, ionomers, and composites can be developed by utilizing suitable chemical reactions and physical intermolecular interactions.

A Word Embedding used Word Sense and Feature Mirror Model (단어 의미와 자질 거울 모델을 이용한 단어 임베딩)

  • Lee, JuSang;Shin, JoonChoul;Ock, CheolYoung
    • KIISE Transactions on Computing Practices, v.23 no.4, pp.226-231, 2017
  • Word representation, an important area of natural language processing (NLP) that uses machine learning, is a method of representing a word not as raw text but as a distinguishable symbol (vector). Existing word-embedding methods rely on large corpora so that related words appear near one another in the training text; such corpus-based embedding requires very large corpora because of the skewed frequency of word occurrences and the growing number of words. In this paper, word embedding is performed using dictionary definitions and semantic-relation information (hypernyms and antonyms). Words are trained using the feature mirror model (FMM), a modification of Skip-Gram (Word2Vec). Words with similar senses obtain similar vectors, and the vectors of antonyms can also be distinguished.
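A minimal sketch of the general idea behind this entry: training word vectors from dictionary glosses with a Skip-Gram-style (negative-sampling) objective, where the headword predicts the words of its definition. This is not the feature mirror model itself; the toy dictionary, the function names, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dictionary": headword -> gloss words (stand-ins for real definitions)
definitions = {
    "dog": ["domestic", "animal", "bark"],
    "cat": ["domestic", "animal", "meow"],
    "car": ["vehicle", "engine", "road"],
}
vocab = sorted({w for k, v in definitions.items() for w in [k, *v]})
idx = {w: i for i, w in enumerate(vocab)}
dim = 16
W_in = rng.normal(0, 0.1, (len(vocab), dim))   # target (headword) vectors
W_out = rng.normal(0, 0.1, (len(vocab), dim))  # context (gloss-word) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(head, gloss_word, lr=0.05, negatives=3):
    """One Skip-Gram-with-negative-sampling update: the headword predicts a gloss word."""
    h, c = idx[head], idx[gloss_word]
    pairs = [(c, 1.0)] + [(int(rng.integers(len(vocab))), 0.0) for _ in range(negatives)]
    for j, label in pairs:
        score = sigmoid(W_in[h] @ W_out[j])
        grad = score - label
        grad_in = grad * W_out[j]          # gradient w.r.t. the headword vector (pre-update)
        W_out[j] -= lr * grad * W_in[h]
        W_in[h] -= lr * grad_in

for _ in range(200):
    for head, gloss in definitions.items():
        for w in gloss:
            train_step(head, w)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(W_in[idx["dog"]], W_in[idx["cat"]]))  # words with similar glosses end up closer
print(cosine(W_in[idx["dog"]], W_in[idx["car"]]))
```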

The Utilization of Local Document Information to Improve Statistical Context-Sensitive Spelling Error Correction (통계적 문맥의존 철자오류 교정 기법의 향상을 위한 지역적 문서 정보의 활용)

  • Lee, Jung-Hun;Kim, Minho;Kwon, Hyuk-Chul
    • KIISE Transactions on Computing Practices, v.23 no.7, pp.446-451, 2017
  • The statistical context-sensitive spelling correction technique in this paper is based on Shannon's noisy channel model. The correction method is improved by interpolation: conventional interpolation smooths N-gram probabilities with (N-1)-gram and (N-2)-gram probabilities, all drawn from the same statistical corpus. In the proposed method, interpolation instead combines frequency information from the statistical corpus with frequency information from the document being corrected. Using the frequencies of the correction document has two advantages. First, probabilities can be obtained for coined words that appear only in that document. Second, when two correction candidates have ambiguous probability values, the ambiguity can be resolved by referring to the correction document. The proposed method showed better precision and recall than the existing correction model.
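A minimal sketch of the interpolation idea in this entry: mixing bigram probabilities from a global corpus with bigram probabilities from the local document being corrected when scoring candidate corrections. It covers only the language-model side (the channel/error model is omitted), and the toy corpus, the weight lam, and the function names are assumptions, not the paper's code.

```python
from collections import Counter

def bigram_counts(tokens):
    return Counter(zip(tokens, tokens[1:]))

def interpolated_prob(prev, word, corpus_bi, corpus_uni, doc_bi, doc_uni, lam=0.7):
    """P(word | prev) mixed from the global corpus and the local document."""
    def cond(bi, uni):
        return bi[(prev, word)] / uni[prev] if uni[prev] else 0.0
    return lam * cond(corpus_bi, corpus_uni) + (1 - lam) * cond(doc_bi, doc_uni)

corpus = "the cat sat on the mat the dog sat on the rug".split()
doc = "the cat sat on the sofa".split()   # the document being corrected
corpus_bi, corpus_uni = bigram_counts(corpus), Counter(corpus)
doc_bi, doc_uni = bigram_counts(doc), Counter(doc)

# Choose between two correction candidates for the word following "the":
for cand in ("sofa", "soda"):
    p = interpolated_prob("the", cand, corpus_bi, corpus_uni, doc_bi, doc_uni)
    print(cand, round(p, 3))  # "sofa" wins because it occurs in the local document
```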

A Packet Loss Control Scheme based on Network Conditions and Data Priority (네트워크 상태와 데이타 중요도에 기반한 패킷 손실 제어 기법)

  • Park, Tae-Uk;Chung, Ki-Dong
    • Journal of KIISE: Information Networking, v.31 no.1, pp.1-10, 2004
  • This study discusses application-layer FEC using erasure codes. Because of their simple decoding process, erasure codes are effective in application-layer FEC for recovering packet-level losses. A large number of parity packets keeps the loss rate small but worsens network congestion; thus, a redundancy control algorithm that adjusts the number of parity packets according to network conditions is necessary. In addition, high-priority frames such as I frames should naturally receive more parity packets than low-priority frames such as P and B frames. In this paper, we propose a redundancy control algorithm that adjusts the amount of redundancy according to both network conditions and data priority, and we evaluate its performance over simple links and congested links.
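A minimal sketch of the kind of policy this entry describes: choosing the number of parity packets per FEC block from the measured loss rate and the frame's priority, while capping the overhead to avoid aggravating congestion. The priority weights, cap, and function name are illustrative assumptions, not the paper's algorithm.

```python
import math

PRIORITY_WEIGHT = {"I": 1.5, "P": 1.0, "B": 0.5}  # I frames get extra protection

def parity_packets(data_packets, loss_rate, frame_type, max_overhead=0.5):
    """Return how many parity packets to generate for one FEC block."""
    expected_losses = data_packets * loss_rate
    k = math.ceil(expected_losses * PRIORITY_WEIGHT[frame_type])
    cap = math.ceil(data_packets * max_overhead)   # bound overhead to limit congestion
    return min(k, cap)

# Example: a 20-packet I frame vs. a 20-packet B frame at 10% measured loss
print(parity_packets(20, 0.10, "I"))  # 3 parity packets
print(parity_packets(20, 0.10, "B"))  # 1 parity packet
```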

KITTEN: A Multi-thread Virtual Reality System (KITTEN: 다중 스레드 가상현실 시스템)

  • Kim, Dae-Won;Lee, Son-Ou;Whon, Kwang-Yun;Lee, Kwang-Hyung
    • Journal of KIISE: Computing Practices and Letters, v.6 no.3, pp.275-287, 2000
  • A virtual reality system must provide participants with natural interaction, sufficient immersion, and, above all, realistic images. To achieve this, it is crucial to provide a fast and uniform rendering speed regardless of the complexity of the virtual world or of the simulation. In this paper, a virtual reality system that offers improved rendering performance for complex virtual reality applications has been designed and implemented. The key idea of the proposed system is to adopt a multi-thread scheme in the design of the system modules and to execute each module in parallel. With this design, rendering, simulation, and interaction are executed independently, so in applications where the simulation is complex or the scene is very large the system can provide more uniform and faster frame rates. The proposed system has been tested in various application environments with very complex scenes and simulations.
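A minimal conceptual sketch of the multi-thread decoupling this entry describes: rendering and simulation run in independent threads over shared state, so an expensive simulation step does not stall the render loop. Python threading is used purely for illustration; the module names and timings are assumptions, not the KITTEN implementation.

```python
import threading
import time

shared_state = {"frame": 0, "sim_step": 0}
lock = threading.Lock()
stop = threading.Event()

def render_loop():
    while not stop.is_set():
        with lock:
            shared_state["frame"] += 1       # draw whatever state is currently available
        time.sleep(1 / 60)                   # target ~60 fps regardless of simulation cost

def simulation_loop():
    while not stop.is_set():
        time.sleep(0.1)                      # stand-in for an expensive simulation step
        with lock:
            shared_state["sim_step"] += 1

threads = [threading.Thread(target=render_loop), threading.Thread(target=simulation_loop)]
for t in threads:
    t.start()
time.sleep(1.0)
stop.set()
for t in threads:
    t.join()
print(shared_state)  # many more rendered frames than simulation steps
```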


Neural Predictive Coding for Text Compression Using GPGPU (GPGPU를 활용한 인공신경망 예측기반 텍스트 압축기법)

  • Kim, Jaeju;Han, Hwansoo
    • KIISE Transactions on Computing Practices, v.22 no.3, pp.127-132, 2016
  • Several methods for applying artificial neural networks to text compression have been proposed in the past, but both the networks and the target texts were limited to small sizes by the hardware then available. Modern GPUs now offer computational throughput an order of magnitude greater than CPUs, even as CPUs have become faster, making it possible to train larger and more complex neural networks in a shorter time. This paper proposes a method that transforms the distribution of the original data with a probabilistic neural predictor. Experiments were performed with a feedforward neural network and a recurrent neural network with gated recurrent units. The recurrent model outperformed the feedforward network in both compression rate and prediction accuracy.
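A minimal sketch of why prediction quality determines compressed size, which is the premise of this entry: an entropy coder spends roughly -log2 p(symbol) bits per symbol, so the compressed length is the predictor's cumulative cross-entropy over the text. The toy text and the two baseline predictors are illustrative; a neural predictor plays the same role but conditions on context.

```python
import math
from collections import Counter

text = "abracadabra abracadabra"

def bits_with_uniform(text):
    """Cost if every distinct character is predicted with equal probability."""
    p = 1 / len(set(text))
    return sum(-math.log2(p) for _ in text)

def bits_with_frequency_model(text):
    """Cost if characters are predicted by their global frequency."""
    counts = Counter(text)
    total = len(text)
    return sum(-math.log2(counts[ch] / total) for ch in text)

print(round(bits_with_uniform(text), 1))          # baseline: uniform prediction
print(round(bits_with_frequency_model(text), 1))  # sharper predictor -> fewer bits
# A neural predictor (feedforward or GRU-based) predicts each next symbol from its
# context, so its probabilities are sharper still and the coded output is smaller.
```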

Transformation-based Learning for Korean Comparative Sentence Classification (한국어 비교 문장 유형 분류를 위한 변환 기반 학습 기법)

  • Yang, Seon;Ko, Young-Joong
    • Journal of KIISE: Software and Applications, v.37 no.2, pp.155-160, 2010
  • This paper proposes a method for Korean comparative sentence classification, which is a part of comparison mining. Comparison mining, one area of text mining, analyzes comparative relations in large collections of text documents. Comparison mining requires a three-step process: 1) identifying comparative sentences in the text documents, 2) classifying those sentences into several classes, and 3) analyzing the comparative relations within each class. This paper addresses the second task. We use transformation-based learning (TBL), a well-known learning method in natural language processing. In our experiments, we classify comparative sentences into seven classes using TBL and achieve an accuracy of 80.01%.
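A minimal sketch of transformation-based learning as used in this entry: start from a default class assignment, then greedily add the single relabeling rule that repairs the most remaining errors, repeating until no rule helps. The toy sentences, class names, and word-presence rule template are assumptions, not the paper's feature set.

```python
sentences = [
    (["A", "is", "better", "than", "B"], "superiority"),
    (["A", "is", "worse", "than", "B"], "inferiority"),
    (["A", "equals", "B"], "equality"),
    (["C", "is", "better", "than", "D"], "superiority"),
]

def apply_rules(tokens, rules, default="superiority"):
    label = default                      # initial-state annotator: most frequent class
    for trigger, new_label in rules:     # rules fire in the order they were learned
        if trigger in tokens:
            label = new_label
    return label

rules = []
templates = [(w, c) for toks, c in sentences for w in toks]  # "if word w present -> class c"
for _ in range(5):
    errors = sum(apply_rules(t, rules) != c for t, c in sentences)
    best, best_gain = None, 0
    for rule in templates:
        fixed = sum(apply_rules(t, rules + [rule]) != c for t, c in sentences)
        if errors - fixed > best_gain:
            best, best_gain = rule, errors - fixed
    if best is None:
        break
    rules.append(best)

print(rules)  # learned transformations, e.g. ('worse', 'inferiority'), ('equals', 'equality')
print([apply_rules(t, rules) for t, _ in sentences])
```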

Example-based Dialog System for English Conversation Tutoring (영어 회화 교육을 위한 예제 기반 대화 시스템)

  • Lee, Sung-Jin;Lee, Cheong-Jae;Lee, Geun-Bae
    • Journal of KIISE: Software and Applications, v.37 no.2, pp.129-136, 2010
  • In this paper, we present an example-based dialogue system for English conversation tutoring. It aims to provide intelligent one-to-one English conversation tutoring instead of old-fashioned language education based on static multimedia materials. The system can understand learners' ill-formed expressions, enabling beginners to carry on a dialogue despite their limited linguistic ability, which gives students a strong motivation to learn a foreign language. The system also provides educational functions for improving linguistic ability. To achieve these goals, we developed a statistical natural language understanding module for interpreting ill-formed expressions and an example-based dialogue manager with high domain scalability, together with several effective tutoring methods.
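A minimal sketch of the example-based retrieval idea behind this entry: the learner's utterance is matched against stored example pairs and the response of the closest example is returned, so even an ill-formed utterance can keep the dialogue going. The toy example base and the token-overlap similarity are assumptions, not the system's actual matcher.

```python
example_base = [
    ("i would like to order a coffee", "Sure, what size would you like?"),
    ("where is the nearest station", "It is two blocks away, on your left."),
    ("how much does it cost", "It costs five dollars."),
]

def jaccard(a, b):
    """Token-overlap similarity between two utterances."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def respond(utterance):
    best = max(example_base, key=lambda ex: jaccard(utterance.lower(), ex[0]))
    return best[1]

print(respond("i want order coffee"))   # matches the coffee example despite missing words
```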

Research on Integrated Management of ISMS : Comparative Analysis of IT Disaster Recovery Framework (IT재해복구 연관 프레임워크 비교분석을 통한 ISMS의 통합관리방안)

  • Bak, Yurim;Kim, Byungki;Yoon, Ohjun;Khil, Ara;Shin, Yongtea
    • KIISE Transactions on Computing Practices, v.23 no.3, pp.177-182, 2017
  • As computing and communication advance in the information society, the enormous volume of data has become difficult to manage manually, and data loss caused by natural disasters or hacker attacks produces a variety of IT security incidents. Hence, there is an urgent need for an information security management system to mitigate such incidents. Several existing frameworks address IT disaster management, including the Cyber Security Framework, the Risk Management Framework, ISO/IEC 27001:2013, and COBIT 5.0. This paper analyzes and compares the IT disaster recovery items of these frameworks and describes a single integrated management scheme for fast recovery from IT disasters.

A Comparative Performance Analysis of Spark-Based Distributed Deep-Learning Frameworks (스파크 기반 딥 러닝 분산 프레임워크 성능 비교 분석)

  • Jang, Jaehee;Park, Jaehong;Kim, Hanjoo;Yoon, Sungroh
    • KIISE Transactions on Computing Practices, v.23 no.5, pp.299-303, 2017
  • By stacking hidden layers in artificial neural networks, deep learning delivers outstanding performance on high-level abstraction problems such as object/speech recognition and natural language processing. However, deep-learning users often struggle with the tremendous amounts of time and resources required to train deep neural networks. To alleviate this computational challenge, many approaches have been proposed in a diversity of areas. In this work, two existing Apache Spark-based acceleration frameworks for deep learning, SparkNet and DeepSpark, are compared and analyzed in terms of training accuracy and time requirements. In the authors' experiments with the CIFAR-10 and CIFAR-100 benchmark datasets, SparkNet showed more stable convergence behavior than DeepSpark, but in terms of training accuracy, DeepSpark delivered a classification accuracy higher by approximately 15%. In some cases, DeepSpark also outperformed the sequential implementation running on a single machine in terms of both accuracy and running time.
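A minimal conceptual sketch of the data-parallel pattern these Spark-based frameworks build on: each worker trains on its own data partition for a few steps, then the driver averages the workers' parameters and broadcasts them back. Plain NumPy with linear regression stands in for Spark and a deep network; this is not the SparkNet or DeepSpark API.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=800)

def local_sgd(w, Xp, yp, steps=20, lr=0.01):
    """A worker's local training on its own data partition (linear regression here)."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * Xp.T @ (Xp @ w - yp) / len(yp)
        w -= lr * grad
    return w

num_workers = 4
partsX, partsy = np.array_split(X, num_workers), np.array_split(y, num_workers)
w = np.zeros(10)
for _ in range(10):                           # one "round" = local training + averaging
    local = [local_sgd(w, Xp, yp) for Xp, yp in zip(partsX, partsy)]
    w = np.mean(local, axis=0)                # driver averages the workers' parameters

print(round(float(np.mean((X @ w - y) ** 2)), 4))  # training error after the averaging rounds
```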