• Title/Summary/Keyword: Interactive model for reading

Some Suggestions for Improving Environment of Chinese Reading Class: Focused on Blended Learning (중국어 읽기 수업 환경 개선을 위한 제안: 블렌디드 러닝을 중심으로)

  • Park, Chan Wook
    • Cross-Cultural Studies / v.29 / pp.413-452 / 2012
  • The purpose of this study is to examine and apply Blended Learning to Chinese reading classes and to offer suggestions for realizing the interactive model for reading in such classes. To improve learners' Chinese reading proficiency, various teaching methods need to be applied in Chinese reading classes. Among these methods, this article applies Blended Learning from the perspective of interaction, because Blended Learning fits the general trend in which most people use laptops, smartphones, and similar devices, and because it can contribute to reading as performance in foreign language learning. As a result, Blended Learning allows learners to prepare for class through online content and gives teachers and learners more opportunities for in-class interaction, thereby improving reading competence.

Interactive Collision Detection for Deformable Models using Streaming AABBs

  • Zhang, Xinyu; Kim, Young-J.
    • 한국HCI학회:학술대회논문집 / 2007.02c / pp.306-317 / 2007
  • We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we treat the set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we avoid the performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding-volume-hierarchy-based streaming algorithms. At run time, as the underlying models deform, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to retrieve only the computed results (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming encoding/decoding strategy that can be performed in a hierarchical fashion. After determining the overlapping AABBs, we perform primitive-level (e.g., triangle) intersection checks on a serial computational model such as a CPU. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for the streaming computations and an Intel dual-core 3.4 GHz processor for the serial computations. We benchmarked our algorithm with models of varying complexity, ranging from 15K up to 50K triangles, under various deformation motions, and obtained timings of 30~100 FPS depending on the complexity of the models and their relative configurations. Finally, we compared our method with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a threefold performance improvement over the earlier approach. We also compared it with a software-based AABB culling algorithm [2] and observed about a twofold improvement.
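
To make the broad phase concrete, here is a minimal CPU-side sketch (in Python, not the paper's GPU implementation) of the pairwise AABB overlap test that the streaming stage evaluates in parallel; the brute-force pairing loop and all names are illustrative only.

```python
# Illustrative sketch of the core AABB overlap test (not the paper's GPU code).
# Each AABB is ((min_x, min_y, min_z), (max_x, max_y, max_z)).

def aabb_overlap(a, b):
    """Two axis-aligned boxes overlap iff their extents overlap on every axis."""
    (a_min, a_max), (b_min, b_max) = a, b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def broad_phase(aabbs_a, aabbs_b):
    """Brute-force pairwise tests between two AABB streams; these independent
    tests are what the streaming model evaluates in parallel on the GPU.
    Returns index pairs to hand to the exact (triangle-level) narrow phase."""
    return [(i, j)
            for i, a in enumerate(aabbs_a)
            for j, b in enumerate(aabbs_b)
            if aabb_overlap(a, b)]
```

In the paper's pipeline, only the overlapping pairs reported by this stage reach the serial triangle-level intersection check, which is why avoiding read-back of the full output stream matters.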


Classification of Consumer Review Information Based on Satisfaction/Dissatisfaction with Availability/Non-availability of Information (구매후기 정보의 충족/미충족에 따른 소비자의 만족/불만족 인식 및 구매후기 정보의 유형화)

  • Hong, Hee-Sook
    • Journal of the Korean Society of Clothing and Textiles / v.35 no.9 / pp.1099-1111 / 2011
  • This study identified types of consumer review information about apparel products based on consumer satisfaction/dissatisfaction with the availability/non-availability of such information in online stores. Data were collected from 318 females in their 20s and 30s who had substantial experience reading consumer reviews posted on online stores. Consumer satisfaction/dissatisfaction with the availability or non-availability of review information differs across information about apparel product attributes, product benefits, and store attributes. Following the quality-element concept of the Kano model, two types of consumer review information were identified: must-have information (product attribute information about the size, fabric, color, and design of the apparel product; benefit information about washing & care and comfort of the apparel product; store attribute information about the responsiveness, disclosure, delivery, and after-sales service of the store) and attractive information (attribute information about price comparison; benefit information about coordination with other items, fashionability, price discounts, value for price, reactions from others, emotions experienced during the transaction, symbolic features for status, health functionality, and eco-friendly features; store attribute information about return/refund, damage compensation, the reputation/credibility of the online store, and the interactive and dynamic nature of reviews among customers). There were significant differences between the high- and low-involvement groups in their perceptions of consumer review information.
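
As a rough illustration of the Kano-style categorization used above, the sketch below assigns a piece of review information to a category from the two judgments the study elicits (satisfaction when available, dissatisfaction when unavailable); the function and category names are hypothetical, not the author's instrument.

```python
# Hypothetical sketch of Kano-style classification of a review-information item,
# based only on the two judgments described in the abstract.

def kano_category(satisfied_when_available: bool, dissatisfied_when_unavailable: bool) -> str:
    """Classify one piece of review information from the two elicited judgments."""
    if dissatisfied_when_unavailable and not satisfied_when_available:
        return "must-have information"        # expected; only its absence is felt
    if satisfied_when_available and not dissatisfied_when_unavailable:
        return "attractive information"       # delights when present, tolerated when absent
    if satisfied_when_available and dissatisfied_when_unavailable:
        return "one-dimensional information"  # classic Kano category, not among the two found here
    return "indifferent information"

# Example: size information is expected, so its absence dissatisfies (must-have);
# coordination tips add satisfaction but their absence is tolerated (attractive).
print(kano_category(satisfied_when_available=False, dissatisfied_when_unavailable=True))
print(kano_category(satisfied_when_available=True, dissatisfied_when_unavailable=False))
```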

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung; Kim, Mintae; Kim, Wooju; Shin, Dongwook; Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the suitable documents; 2) determine whether each sentence is suitable for information extraction and derive a confidence score; 3) based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries collected from SK Telecom's artificial intelligence speaker; compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must account for the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from various types of unstructured documents compared to the baseline model. Previous research has the limitation that performance is poor when extracting information from document types that differ from the training data. In addition, this study can prevent unnecessary extraction attempts on documents that do not contain the answer, through the step that predicts the suitability of documents and sentences before extraction, and it is meaningful that we provide a method by which precision can be maintained even in an actual web environment. Because information extraction for knowledge base expansion targets unstructured documents on the real web, there is no guarantee that a given document contains the correct answer, and when question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy that predicts the suitability of document- and sentence-level extraction is meaningful in that it helps maintain extraction performance even in a real web environment. The limitations of this study and future research directions are as follows. The first concerns data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can be performed improperly when the morphological analysis is inaccurate; to improve extraction results, an advanced morphological analyzer needs to be developed. The second concerns entity ambiguity: the proposed system cannot distinguish identical names that refer to different entities. If several people with the same name appear in the news, the system may not extract information about the intended query; future research needs measures to disambiguate such cases. The third concerns the evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and we built an evaluation data set of 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, and 3 Naver news articles) by judging whether each document contains the correct answer. To ensure the external validity of the study, it is desirable to evaluate the system with more queries; because this must be done manually, it is costly, and future research should evaluate the system on a larger query set. It is also necessary to develop a Korean benchmark data set for information extraction over queries against multi-source web documents, to build an environment in which results can be evaluated more objectively.
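
As a rough illustration of the sequence-tagging model named above, the sketch below shows a predicate-aware bi-directional LSTM-CRF tagger. It is not the authors' implementation: it assumes PyTorch plus the third-party pytorch-crf package, and the binary predicate-match feature, layer sizes, and class name are illustrative.

```python
# Minimal sketch (not the authors' code) of a predicate-aware BiLSTM-CRF tagger,
# assuming PyTorch and the third-party pytorch-crf package (pip install pytorch-crf).
# Tokenization (e.g., via KoNLPy), vocabulary handling, and all sizes are illustrative.
import torch
import torch.nn as nn
from torchcrf import CRF


class PredicateBiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Binary predicate feature: 1 if the token matches the query predicate, else 0.
        self.pred_emb = nn.Embedding(2, 8)
        self.lstm = nn.LSTM(emb_dim + 8, hidden,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)   # per-token BIO emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, tokens, pred_flags, tags=None, mask=None):
        x = torch.cat([self.word_emb(tokens), self.pred_emb(pred_flags)], dim=-1)
        h, _ = self.lstm(x)
        emissions = self.emit(h)
        if tags is not None:
            return -self.crf(emissions, tags, mask=mask)   # negative log-likelihood loss
        return self.crf.decode(emissions, mask=mask)        # best BIO tag sequence per sentence
```

In the pipeline described in the abstract, the document- and sentence-suitability predictions would run before such a tagger, so that only sentences judged suitable are decoded for answer spans.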