• Title/Summary/Keyword: 대용

Usage Status of Traditional Rice Cake as a Meal Substitute and Analysis on the Selection Attributes Affecting Purchase (식사대용으로 전통떡류의 이용현황 및 구매에 미치는 선택속성 분석)

  • Yoon, Suk-Ja;Oh, In-Suk
    • Culinary science and hospitality research
    • /
    • v.20 no.2
    • /
    • pp.38-53
    • /
    • 2014
  • This study surveyed the usage status of traditional rice cake as a meal substitute and analyzed the selection attributes affecting its purchase, targeting male and female adults over 20 years old living in the Seoul area, in order to acquire basic data for improving service and quality and for developing products that facilitate and generalize rice cake consumption as a meal substitute. The survey was conducted over about 7 days, from September 23rd to 30th, 2013, and 250 valid questionnaires were used for the final analysis. The results show no significant differences by demographic features such as gender, marital status, age range, average monthly income, or housing type, but a significant difference by university graduation. Taste, freshness, and store convenience were the attributes considered when purchasing traditional rice cake as a meal substitute. Therefore, the development of various products suitable as meal substitutes is expected to generalize and vitalize the consumption of traditional rice cake.

Confidence Value based Large Scale OWL Horst Ontology Reasoning (신뢰 값 기반의 대용량 OWL Horst 온톨로지 추론)

  • Lee, Wan-Gon;Park, Hyun-Kyu;Jagvaral, Batselem;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.43 no.5
    • /
    • pp.553-561
    • /
    • 2016
  • Several machine learning techniques can automatically populate ontology data from web sources, and interest in large scale ontology reasoning is increasing. However, data obtained from the web carries uncertainty, which can lead to speculative reasoning results, so the reliability of such data must be taken into account. Large scale ontology reasoning methods based on trust values are therefore required, because reliability-aware quantitative ontology reasoning is currently insufficient. In this study, we propose a large scale OWL Horst reasoning method based on confidence values using Spark, a distributed in-memory framework. We describe a method for integrating the confidence values of duplicated data, and we present a distributed parallel heuristic algorithm that addresses the resulting degradation of inference performance. To evaluate the confidence-based reasoning methods, experiments were conducted using LUBM3000. The results show that our approach performs reasoning twice as fast as existing reasoning systems such as WebPIE.
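The abstract mentions integrating the confidence values of duplicated data. A minimal single-machine sketch of that idea (the noisy-OR combination rule here is an illustrative assumption; the paper's actual integration formula is not given in the abstract):

```python
from collections import defaultdict

def merge_confidences(triples):
    """Merge duplicated (subject, predicate, object) triples into one
    entry with a combined confidence value."""
    merged = defaultdict(float)
    for s, p, o, conf in triples:
        prev = merged[(s, p, o)]
        # Noisy-OR combination: 1 - (1-a)(1-b); monotone in both inputs,
        # so seeing the same triple twice only raises its confidence.
        merged[(s, p, o)] = 1.0 - (1.0 - prev) * (1.0 - conf)
    return dict(merged)

# Hypothetical triples extracted from two web sources
data = [
    ("seoul", "type", "City", 0.8),
    ("seoul", "type", "City", 0.5),   # duplicate from a second source
    ("paris", "type", "City", 0.9),
]
result = merge_confidences(data)
print(result[("seoul", "type", "City")])  # combined: 1 - 0.2*0.5 = 0.9
```

In a Spark setting the same merge would be a `reduceByKey` over triples keyed by (s, p, o).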

Study on Distributed System for Process of A Large Amount of Science Technology Information (대용량 과학 기술 정보 처리를 위한 분산 시스템에 관한 연구)

  • Kim, Kwang-Young;Kang, Nam-Gyu;Kim, Jin-Suk;Jin, Du-Seok;Jeong, Chang-Hoo;Yun, Hwa-Muk
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.461-464
    • /
    • 2006
  • With the development of internet technologies, the internet has come to consist of a complex mix of large amounts of Web documents, science technology documents, databases, and so on. A distributed system is required to support effective retrieval and management of such large document collections: it must allow users to search quickly and accurately, and allow administrators to manage it easily. This paper designs and implements such a distributed system, which effectively manages science technology information documents; it then experiments with the system on a large amount of science technology information and analyzes the experimental results.

Large Point Cloud-based Pipe Shape Reverse Engineering Automation Method (대용량 포인트 클라우드 기반 파이프 형상 역설계 자동화 방법 연구)

  • Kang, Tae-Wook;Kim, Ji-Eum
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.3
    • /
    • pp.692-698
    • /
    • 2016
  • Recently, the market share of facility extension and maintenance has grown while that of new facility construction has declined. In this context, it is important to examine the reverse engineering of MEP (Mechanical, Electrical and Plumbing) facilities, which incur high operation and management costs in the architecture domain. The purpose of this study was to propose a large point cloud-based pipe shape reverse engineering automation method. To conduct the study, related research was surveyed and a reverse engineering automation method for pipe shapes that accounts for large point clouds was proposed. Based on the method, a prototype was developed and the results were validated. The validation results indicate that the proposed method is suitable for large data processing: the standard deviation of rendering performance when searching the massive 3D point cloud data was 0.004 seconds.

Transceiver Design Method for Finitely Large Numbers of Antenna Systems (유한 대용량 안테나 시스템에서 송수신기 설계 방법)

  • Shin, Joonwoo
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.39 no.3
    • /
    • pp.280-285
    • /
    • 2015
  • We consider a linear transceiver design method for multi-user multiple-input multiple-output (MIMO) downlink channels where a base station (BS) equipped with a finitely large number of antennas. Although a matched-filter precoder is a capacity-achieving method in massive MIMO downlink systems, it cannot guarantee to achieve the multi-user MIMO capacity in a finitely large number of antennas due to inter-user interferences. In this paper, we propose a two-stage precoder design method that maximizes the sum-rate of cell-edge users when the BS equipped with a finitely large number of antennas. At the first stage, a matched-filter precoder is adopted to exploit both beamforming gain and the reduction of the dimension of effective channels. Then, we derive the second stage precoder that maximizes the sum-rate by minimizing the weighted mean square error (WMSE). From simulation and analysis, we verify the effectiveness of the proposed method.
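The two-stage structure described in the abstract can be sketched numerically. Note the assumptions: single-antenna users, an i.i.d. Gaussian channel, and a zero-forcing inverse standing in for the paper's WMSE-derived second stage, which is not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, K = 64, 4                      # many BS antennas, few single-antenna users
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Stage 1: matched-filter precoder -- harvests the beamforming gain and
# shrinks the precoding dimension from Nt down to K.
F1 = H.conj().T                    # Nt x K
H_eff = H @ F1                     # K x K effective channel

# Stage 2: a small K x K precoder on the effective channel to suppress the
# residual inter-user interference.  The paper derives this stage by WMSE
# minimization; a zero-forcing inverse is used here purely as a stand-in.
F2 = np.linalg.inv(H_eff)
F = F1 @ F2

# With the second stage applied, the end-to-end channel H @ F is ~identity,
# i.e. each user sees (numerically) no inter-user interference.
print(np.allclose(H @ F, np.eye(K)))
```

The point of the sketch is the dimension reduction: the second-stage optimization runs over a K x K effective channel rather than the full K x Nt channel.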

Large-Memory Data Processing on a Remote Memory System using Commodity Hardware (대용량 메모리 데이타 처리를 위한 범용 하드웨어 기반의 원격 메모리 시스템)

  • Jung, Hyung-Soo;Han, Hyuck;Yeom, Heon-Y.
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.9
    • /
    • pp.445-458
    • /
    • 2007
  • This article presents a novel infrastructure for large-memory database processing using commodity hardware with operating system support. We exploit inexpensive PCs and a high-speed network capable of Remote Direct Memory Access (RDMA) operations to build a new memory hierarchy between fast volatile memory and slow disk storage. The new memory hierarchy guarantees a reasonable response time, and its storage size enables us to run large-memory database systems with little performance degradation. The proposed architecture has two main components: (1) a remote memory system inside the Linux kernel to manage other computers' memory pages efficiently, and (2) a remote memory pager responsible for handling read/write operations on remote memory pages. We argue that the proposed architecture is practical enough to support the rigorous demands of commercial in-memory database systems, and we demonstrate this with the performance of publicly available main-memory databases (e.g., MySQL) on our prototype system under the TPC-C benchmark.

High-Speed Search Mechanism based on B-Tree Index Vector for Huge Web Log Mining and Web Attack Detection (대용량 웹 로그 마이닝 및 공격탐지를 위한 B-트리 인덱스 벡터 기반 고속 검색 기법)

  • Lee, Hyung-Woo;Kim, Tae-Su
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.11
    • /
    • pp.1601-1614
    • /
    • 2008
  • The number of web service users has increased rapidly as existing services have been changed into web-based internet applications. Web log pre-processing is therefore necessary both to detect attacks on diverse web service transactions and to extract web mining information. However, existing mechanisms do not provide efficient pre-processing for huge volumes of web log data. In this paper, we propose both a field-based parsing mechanism and a high-speed log indexing mechanism based on the suggested B-tree Index Vector structure for performance enhancement. In experiments, the proposed mechanism provides efficient web log pre-processing and search functions with session classification, and is therefore useful for enhancing web attack detection.
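The pipeline the abstract outlines, field-based parsing followed by an indexed per-client lookup, can be sketched as below. A sorted vector with binary search stands in for the paper's B-tree Index Vector, and the log format and attack signature are simplified illustrations:

```python
import bisect
import re

# Field-based parsing of a simplified common-log-format line; the real
# pre-processing covers more fields plus session classification.
LOG_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d+)')

def parse(line):
    ip, ts, request, status = LOG_RE.match(line).groups()
    return {"ip": ip, "ts": ts, "request": request, "status": int(status)}

logs = [
    '10.0.0.1 - - [01/Jan/2008:00:00:01] "GET /index.html" 200',
    '10.0.0.2 - - [01/Jan/2008:00:00:02] "GET /../../etc/passwd" 404',
    '10.0.0.1 - - [01/Jan/2008:00:00:03] "GET /login" 200',
]
records = [parse(line) for line in logs]

# Sorted (key, record-position) vector; binary search over it plays the
# role of the B-tree index, answering per-client lookups in O(log n).
index = sorted((r["ip"], i) for i, r in enumerate(records))

def lookup(ip):
    lo = bisect.bisect_left(index, (ip, -1))
    hi = bisect.bisect_right(index, (ip, len(records)))
    return [records[i] for _, i in index[lo:hi]]

# Toy signature check over the parsed "request" field (path traversal).
suspicious = [r for r in records if "../" in r["request"]]
print(len(lookup("10.0.0.1")), len(suspicious))  # → 2 1
```

Parsing once into fields is what makes both the indexed search and the signature check cheap, instead of re-scanning raw log lines per query.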

Design and Implementation of Large Tag Data Transmission Protocol for 2.4GHz Multi-Channel Active RFID System (2.4GHz 다중채널 능동형 RFID시스템을 위한 대용량 태그 데이터 전송 프로토콜의 설계 및 구현)

  • Lee, Chae-Suk;Kim, Dong-Hyun;Kim, Jong-Doek
    • Journal of KIISE:Information Networking
    • /
    • v.37 no.3
    • /
    • pp.217-227
    • /
    • 2010
  • To apply active RFID technology in various kinds of industry, a large amount of data must be transmitted quickly. The ISO/IEC 18000-7 standard uses 433.92MHz as a single-channel system, and its transmission rate of just 27.8kbps is insufficient for transmitting large amounts of data. To solve this problem, we designed a new data transmission protocol using the 2.4GHz band. The designed protocol not only builds data messages of over 255 bytes using the Burst Read UDB but also transmits them efficiently. To implement this protocol, we used Texas Instruments' SmartRF04 development kit with a CC2500 transceiver as the RF module. In an evaluation transmitting 63.75kbytes of data, we demonstrate that the transmission time of Burst Read UDB is 17.95% faster than that of Read UDB in ISO/IEC 18000-7.
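As a rough sense of scale for the bandwidth problem the abstract describes (this is back-of-the-envelope arithmetic, not the paper's measured figures, and it assumes 1 kbyte = 1024 bytes and ignores protocol overhead):

```python
# Transmitting the paper's 63.75-kbyte test payload at the raw ISO/IEC
# 18000-7 rate of 27.8 kbps already takes close to 19 seconds, before
# any protocol overhead -- hence the move to a faster 2.4GHz link.
payload_bits = 63.75 * 1024 * 8        # payload size in bits
seconds = payload_bits / 27.8e3        # time at 27.8 kbps
print(round(seconds, 1))  # → 18.8
```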

Design and Implementation of Storage Manager for Real-Time Compressed Storing of Large Volume Datastream (대용량 데이터스트림 실시간 압축 저장을 위한 저장관리자 설계 및 구현)

  • Lee, Dong-Wook;Baek, Sung-Ha;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.3
    • /
    • pp.31-39
    • /
    • 2009
  • The requirements for processing and managing real-time datastreams in a ubiquitous environment are increasing. In particular, because datastreams are unbounded, high-frequency, and real-time, a storage manager specialized for a DSMS is necessary to process them. Existing DSMSs such as Coral8 support datastream processing, but they are not scalable and perform poorly when handling large-volume real-time datastreams, e.g., over 100 thousand tuples per second. Oracle10g, which is generally used in this field, supports storage and management but not real-time datastream processing. In this paper, we propose a specialized DSMS storage manager for real-time compressed storing, targeting the semiconductor and LCD production facilities of Samsung Electronics, Hynix, and HP. This paper describes the proposed system architecture and its major components, and the experiment section shows that the proposed system outperforms similar systems.
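The core compress-on-write idea can be sketched as buffering the incoming stream into fixed-size blocks and flushing each block as one compressed record. The block size, record format, and the use of zlib are illustrative assumptions; the paper's storage manager additionally handles indexing and real-time constraints:

```python
import zlib

def store_stream(items, block_size=4):
    """Buffer incoming stream tuples and flush each full block as a single
    compressed record (a minimal sketch of compress-on-write storage)."""
    blocks, buf = [], []
    for item in items:
        buf.append(item)
        if len(buf) == block_size:
            blocks.append(zlib.compress("\n".join(buf).encode()))
            buf.clear()
    if buf:                        # flush the trailing partial block
        blocks.append(zlib.compress("\n".join(buf).encode()))
    return blocks

def read_stream(blocks):
    """Decompress the stored blocks back into the original tuple order."""
    return [line for b in blocks
            for line in zlib.decompress(b).decode().split("\n")]

# Hypothetical sensor readings from a production line
stream = [f"sensor42,{i},ok" for i in range(10)]
blocks = store_stream(stream)
print(len(blocks), read_stream(blocks) == stream)  # → 3 True
```

Compressing per block rather than per tuple is the usual trade-off here: larger blocks compress better but delay when a tuple becomes durable.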

A Distributed Vertex Rearrangement Algorithm for Compressing and Mining Big Graphs (대용량 그래프 압축과 마이닝을 위한 그래프 정점 재배치 분산 알고리즘)

  • Park, Namyong;Park, Chiwan;Kang, U
    • Journal of KIISE
    • /
    • v.43 no.10
    • /
    • pp.1131-1143
    • /
    • 2016
  • How can we effectively compress big graphs composed of billions of edges? By concentrating the non-zeros in the adjacency matrix through vertex rearrangement, we can compress big graphs more efficiently and also boost the performance of several graph mining algorithms such as PageRank. SlashBurn is a state-of-the-art vertex rearrangement method that processes real-world graphs effectively by exploiting the power-law characteristic of real-world networks. However, the original SlashBurn algorithm slows down noticeably for large-scale graphs and, since it is designed to run on a single machine, cannot be used at all when a graph is too large to fit in one machine. In this paper, we propose a distributed SlashBurn algorithm to overcome these limitations. Distributed SlashBurn processes big graphs much faster than the original algorithm, and it scales well by performing the large-scale vertex rearrangement in a distributed fashion. In our experiments on real-world big graphs, distributed SlashBurn ran more than 45 times faster than its single-machine counterpart and processed graphs 16 times bigger than the original method could handle.
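For reference, the single-machine SlashBurn idea that the paper distributes can be sketched as: repeatedly cut the k highest-degree hubs (ordering them to the front), push the resulting non-giant "spoke" components to the back, and recurse on the giant connected component. This is a compact sketch of the original algorithm, not the paper's distributed version, and tie-breaking details are simplified:

```python
from collections import deque

def slashburn(graph, k=1):
    """Return a {node: new_id} vertex rearrangement in SlashBurn order."""
    adj = {u: set(vs) for u, vs in graph.items()}   # mutable copy
    front, back = [], []
    while adj:
        # 1. Cut the k highest-degree hubs; they get the smallest ids.
        hubs = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]
        for h in hubs:
            for v in adj.pop(h):
                adj.get(v, set()).discard(h)
        front.extend(hubs)
        # 2. Find the connected components of the remainder (BFS).
        seen, comps = set(), []
        for s in adj:
            if s in seen:
                continue
            comp, queue = [], deque([s])
            seen.add(s)
            while queue:
                u = queue.popleft()
                comp.append(u)
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            comps.append(comp)
        if not comps:
            break
        comps.sort(key=len)
        # 3. Spokes (every component but the giant one) go to the back;
        #    the loop then recurses on the giant component only.
        for comp in comps[:-1]:
            back.extend(comp)
            for u in comp:
                del adj[u]
    order = front + back[::-1]     # earliest spokes end up with largest ids
    return {node: rank for rank, node in enumerate(order)}

# Tiny star-like graph: node 0 is the hub
adj0 = {0: {1, 2, 3, 4, 5}, 1: {0, 2}, 2: {0, 1},
        3: {0, 4}, 4: {0, 3}, 5: {0}}
ranks = slashburn(adj0, k=1)
print(ranks[0])  # → 0  (the highest-degree hub gets the smallest id)
```

Hubs up front and spokes at the back is what concentrates the adjacency matrix's non-zeros into narrow bands, which is the source of both the compression and the PageRank speedups mentioned above.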