• Title/Summary/Keyword: Large Size Data Processing

A Method for Fuzzy-Data Processing of Cooked-rice Portion Size Estimation (식품 눈대중량 퍼지데이타의 처리방안에 관한 연구)

  • 김명희
    • Journal of Nutrition and Health / v.27 no.8 / pp.856-863 / 1994
  • To develop an optimized method for reducing the errors associated with the estimation of food portion sizes, fuzzy-data processing of portion size was performed, with cooked rice chosen as the food item. The experiment was conducted in two parts. First, to study respondents' conceptions of bowl size (large, medium, small), 11 bowls of different sizes and shapes were used and the actual weights of cooked rice were measured. Second, to study respondents' conceptions of volume (1, 1/2, 1/3, 1/4), 16 different volumes of cooked rice in bowls of the same size and shape were used. The respondents were 31 graduate students. After collecting the size and volume responses, fuzzy sets of size and volume were produced, and critical values were calculated by defuzzification (mean of maximum method, center of area method). The weights of cooked rice for various bowl sizes and volumes derived from these critical values were compared with the values calculated from the average portion sizes used in conventional methods. The results show large inter-subject variation in the conception of bowl size, especially for the large size, whereas respondents' conception of volume is relatively accurate. The conception of bowl size appears to be influenced by bowl shape; since the new fuzzy set is calculated as the Cartesian product of bowl size and volume, bowl shape should be considered when estimating bowl size to build a more accurate fuzzy set for cooked-rice portion size. The limitations of this study are discussed. If more accurate size and volume data for many other food items are collected from a larger number of respondents, computer processing systems could both reduce the errors associated with portion size estimation and process the data rapidly.

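The defuzzification step named in the abstract above reduces a fuzzy set of portion-size responses to a single representative weight. Below is a minimal Python sketch of the two rules mentioned, mean of maximum and center of area; the cooked-rice membership values are invented for illustration and are not the paper's data.

```python
def mean_of_maximum(weights, memberships):
    """Average of the domain points where membership is maximal."""
    peak = max(memberships)
    maxima = [w for w, m in zip(weights, memberships) if m == peak]
    return sum(maxima) / len(maxima)

def center_of_area(weights, memberships):
    """Membership-weighted centroid (discrete center of area)."""
    return sum(w * m for w, m in zip(weights, memberships)) / sum(memberships)

# Hypothetical fuzzy set for "one large bowl" of cooked rice, in grams.
grams      = [200, 250, 300, 350, 400]
membership = [0.2, 0.7, 1.0, 1.0, 0.4]
print(mean_of_maximum(grams, membership))  # 325.0
print(center_of_area(grams, membership))   # ~310.6
```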

Development of Very Large Image Data Service System with Web Image Processing Technology

  • Lee, Sang-Ik;Shin, Sang-Hee
    • Proceedings of the KSRS Conference / 2003.11a / pp.1200-1202 / 2003
  • Satellite and aerial images are very useful means of monitoring ecological and environmental conditions. Nowadays more and more officials at the Ministry of Environment in Korea need to access and use these image data over networks such as the internet or an intranet. However, it is very hard to manage and serve these image data over such networks because of their size. In this paper, a very large image data service system for the Ministry of Environment is constructed in a web environment using image compression and web-based image processing technology. Through this system, officials at the Ministry of Environment can not only access and use all the image data but also apply several image processing effects in the web environment. Moreover, officials can retrieve attribute information from the vector GIS data that are also integrated into the system.

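The system above combines image compression with web-based processing so that very large images never travel whole. As a hedged illustration of the serving side, the sketch below pre-cuts a large image into a compressed tile pyramid with Pillow; the tile size, level count, and JPEG quality are illustrative choices, not values from the paper.

```python
import os
from PIL import Image

def build_tile_pyramid(src_path, out_dir, tile=256, levels=4, quality=75):
    """Pre-compute compressed tiles so a web client fetches small pieces."""
    Image.MAX_IMAGE_PIXELS = None              # allow very large inputs
    os.makedirs(out_dir, exist_ok=True)
    img = Image.open(src_path)
    for level in range(levels):                # level 0 = full resolution
        scale = 2 ** level
        scaled = img.reduce(scale) if scale > 1 else img
        for y in range(0, scaled.height, tile):
            for x in range(0, scaled.width, tile):
                box = (x, y, min(x + tile, scaled.width),
                             min(y + tile, scaled.height))
                out = os.path.join(out_dir, f"{level}_{x}_{y}.jpg")
                scaled.crop(box).convert("RGB").save(out, quality=quality)
```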

Parallel Multithreaded Processing for Data Set Summarization on Multicore CPUs

  • Ordonez, Carlos;Navas, Mario;Garcia-Alvarado, Carlos
    • Journal of Computing Science and Engineering / v.5 no.2 / pp.111-120 / 2011
  • Data mining algorithms should exploit new hardware technologies to accelerate computations. Such a goal is difficult to achieve in a database management system (DBMS) due to its complex internal subsystems and because data mining numeric computations on large data sets are difficult to optimize. This paper explores taking advantage of the multithreaded capabilities of multicore CPUs, as well as caching in RAM, to efficiently compute summaries of a large data set, a fundamental data mining problem. We introduce parallel algorithms working on multiple threads, which overcome the row aggregation processing bottleneck of accessing secondary storage while maintaining linear time complexity with respect to data set size. Our proposal is based on a combination of table scans and parallel multithreaded processing among the multiple cores in the CPU. We introduce several database-style and hardware-level optimizations: caching row blocks of the input table, managing available RAM, interleaving I/O and CPU processing, and tuning the number of working threads. We experimentally benchmark our algorithms with large data sets on a DBMS running on a computer with a multicore CPU. We show that our algorithms outperform existing DBMS mechanisms in computing aggregations of multidimensional data summaries, especially as dimensionality grows. Furthermore, we show that local memory allocation (RAM block size) does not have a significant impact when the thread management algorithm distributes the workload among a fixed number of threads. Our proposal is unique in that we do not modify or require access to the DBMS source code; instead, we extend the DBMS with analytic functionality by developing user-defined functions.
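
One common concrete form of such data set summaries in this line of work is the sufficient statistics n (row count), L (per-column sums), and Q (per-column sums of squares). Assuming that form, the sketch below shows the block-partition-and-merge pattern in plain Python, with processes standing in for the paper's in-DBMS threads; the block and worker counts are illustrative.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_summary(block):
    """One worker scans one row block and returns its partial n, L, Q."""
    return len(block), block.sum(axis=0), (block ** 2).sum(axis=0)

def summarize(X, n_workers=4):
    blocks = np.array_split(X, n_workers * 4)   # more blocks than workers
    n, L, Q = 0, 0.0, 0.0
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for bn, bL, bQ in pool.map(partial_summary, blocks):
            n, L, Q = n + bn, L + bL, Q + bQ    # merge partial summaries
    return n, L, Q

if __name__ == "__main__":
    X = np.random.rand(1_000_000, 8)
    n, L, Q = summarize(X)
    print(L / n, Q / n - (L / n) ** 2)          # per-column mean, variance
```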

Two-Tier Storage DBMS for High-Performance Query Processing

  • Eo, Sang-Hun;Li, Yan;Kim, Ho-Seok;Bae, Hae-Young
    • Journal of Information Processing Systems / v.4 no.1 / pp.9-16 / 2008
  • This paper describes the design and implementation of a two-tier DBMS for handling massive data and providing fast response times. Today, the main requirements of a DBMS fall under two aspects: handling large amounts of data and providing fast response times. In practice, a traditional DBMS cannot fulfill both requirements: a disk-oriented DBMS can handle massive data, but its response time is relatively slower than that of a memory-resident DBMS, while a memory-resident DBMS provides fast response times but has inherent restrictions on database size. In this paper, to meet both requirements, a two-tier DBMS is proposed. Cold data, which do not require fast response times, are managed by a disk storage manager, and hot data, which do require fast response times, are handled by a memory storage manager as snapshots. As a result, the proposed system performs significantly better than a disk-oriented DBMS while retaining the ability to manage massive data.
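
A hedged sketch of the two-tier idea described above: every record lives in the disk tier, while recently accessed (hot) records are served from a bounded memory tier, loosely mirroring the paper's disk and memory storage managers. The LRU policy and the shelve backing store are illustrative stand-ins, and string keys are assumed.

```python
import shelve
from collections import OrderedDict

class TwoTierStore:
    """Disk tier holds everything; memory tier snapshots hot records."""

    def __init__(self, disk_path, hot_capacity=1000):
        self.disk = shelve.open(disk_path)   # cold tier (string keys)
        self.hot = OrderedDict()             # hot tier, LRU-ordered
        self.cap = hot_capacity

    def put(self, key, value):
        self.disk[key] = value               # disk always has the data
        self._promote(key, value)

    def get(self, key):
        if key in self.hot:                  # fast path: memory tier
            self.hot.move_to_end(key)
            return self.hot[key]
        value = self.disk[key]               # slow path: disk tier
        self._promote(key, value)            # the record became hot
        return value

    def _promote(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        if len(self.hot) > self.cap:         # evict least-recently used
            self.hot.popitem(last=False)
```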

Development of Very Large Image Data Service System with Web Image Processing Technology (웹 환경에서의 원격탐사기법을 이용한 대용량 영상자료 서비스 시스템개발)

  • 이상익;신상희;최윤수;고준환
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.04a / pp.215-220 / 2004
  • Satellite and aerial images are very useful means of monitoring ecological and environmental conditions. Nowadays more and more officials at the Ministry of Environment in Korea need to access and use these image data over networks such as the internet or an intranet. However, it is very hard to manage and serve these image data over such networks because of their size. In this paper, a very large image data service system for the Ministry of Environment is constructed in a web environment using image compression and web-based image processing technology. Through this system, officials at the Ministry of Environment can not only access and use all the image data but also apply several image processing effects in the web environment. Moreover, officials can retrieve attribute information from the vector GIS data that are also integrated into the system.

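This 2004 entry describes the same system as the 2003 proceedings paper above. To complement the tiling sketch given there, here is one example of the kind of "image processing effect" the abstract mentions, applied server-side to a single tile: a linear contrast stretch. The percentile bounds are illustrative, not values from the paper.

```python
import numpy as np
from PIL import Image

def contrast_stretch(tile_path, out_path, lo_pct=2, hi_pct=98):
    """Stretch a tile's grayscale histogram to the full 0-255 range."""
    px = np.asarray(Image.open(tile_path).convert("L"), dtype=float)
    lo, hi = np.percentile(px, [lo_pct, hi_pct])
    out = np.clip((px - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(out_path)
```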

A Method for Distributed Database Processing with Optimized Communication Cost in Dataflow model (데이터플로우 모델에서 통신비용 최적화를 이용한 분산 데이터베이스 처리 방법)

  • Jun, Byung-Uk
    • Journal of Internet Computing and Services / v.8 no.1 / pp.133-142 / 2007
  • Large database processing is one of the most important techniques in the information society. Since most large databases are regionally distributed, distributed database processing has come to the fore. Communication and data compression are the basic technologies for large database processing; to make the most of them, the execution time of each task, the size of the data, and the communication time between processors should all be considered. In this paper, a dataflow scheme and a vertically layered allocation algorithm are used to optimize distributed processing of a large database. The basic idea of the method is to rearrange processes in light of the communication time between processors. The paper also introduces models for measuring execution time, output data size, and communication time in order to implement the proposed scheme.

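The central trade-off in the abstract above is whether two communicating tasks gain more from running on different processors than they lose to the data transfer between them. The toy cost model below makes that decision explicit; the bandwidth, latency, and greedy co-location rule are assumptions for illustration, not the paper's vertically layered allocation algorithm itself.

```python
def transfer_time(n_bytes, bandwidth, latency):
    """Simple linear model of inter-processor communication time."""
    return latency + n_bytes / bandwidth

def should_colocate(exec_a, exec_b, bytes_a_to_b,
                    bandwidth=1e8, latency=0.002):
    """Co-locate two tasks when transfer cost outweighs the parallel gain."""
    parallel = max(exec_a, exec_b) + transfer_time(bytes_a_to_b,
                                                   bandwidth, latency)
    serial = exec_a + exec_b           # same processor: no transfer needed
    return serial <= parallel

# e.g. a 50 MB intermediate result passed between two 0.3 s tasks:
print(should_colocate(0.3, 0.3, 50e6))   # True: co-locating wins here
```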

Large-scale 3D fast Fourier transform computation on a GPU

  • Jaehong Lee;Duksu Kim
    • ETRI Journal / v.45 no.6 / pp.1035-1045 / 2023
  • We propose a novel graphics processing unit (GPU) algorithm that can handle a large-scale 3D fast Fourier transform (3D-FFT) problem whose data size exceeds the GPU's memory. A 1D-FFT-based 3D-FFT computational approach is used to address the limited device memory. Moreover, to reduce the communication overhead between the CPU and GPU, we propose a 3D data-transposition method that converts the target 1D vector into a contiguous memory layout and improves data transfer efficiency. The transposed data are communicated efficiently between the host and device memories through a pinned buffer and multiple streams. We apply our method to various large-scale benchmarks and compare its performance with the state-of-the-art multicore CPU FFT library (the fastest Fourier transform in the West, FFTW) and a prior GPU-based 3D-FFT algorithm. Our method achieves up to 2.89 times higher performance than FFTW, and the performance gap widens as the data size increases. The performance of the prior GPU algorithm degrades considerably on massive-scale problems, whereas our method's performance remains stable.
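
The decomposition the abstract describes, batched 1D FFTs with an explicit transposition so each pass reads contiguous vectors, can be checked end to end on the CPU. The NumPy sketch below stands in for the paper's GPU kernels, pinned buffers, and streams; it verifies that three 1D-FFT passes with cyclic transposes reproduce a full 3D FFT.

```python
import numpy as np

def fft3d_by_1d_passes(x):
    """3D FFT as three batched 1D-FFT passes with cyclic transposes."""
    for _ in range(3):
        x = np.fft.fft(x, axis=-1)     # batched 1D FFTs on contiguous axis
        # Rotate axes so the next axis becomes contiguous in memory.
        x = np.ascontiguousarray(np.transpose(x, (2, 0, 1)))
    return x

rng = np.random.default_rng(0)
a = rng.standard_normal((32, 32, 32))
assert np.allclose(fft3d_by_1d_passes(a), np.fft.fftn(a))
```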

Wavelet Compression Method with Minimum Delay for Mobile Tele-cardiology Applications (이동형 Tele-cardiology 시스템 적용을 위한 최저 지연을 가진 웨이브릿 압축 기법)

  • Kim Byoung-Soo;Yoo Sun-Kook;Lee Moon-Hyoung
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.11 / pp.786-792 / 2004
  • Wavelet-based ECG data compression has become an attractive and efficient method in many mobile tele-cardiology applications, but the large data size required for high compression performance leads to serious delay. In this paper, a new wavelet compression method with minimum delay is proposed. It is based on adaptively deciding the type and compression ratio (CR) of each block according to the standard deviation of the input ECG data while using a minimum block size. The compression performance of the proposed algorithm on different MIT ECG records was analyzed in comparison with other ECG compression algorithms. In addition to measuring the processing delay, compression efficiency and the sensitivity of reconstruction to errors were evaluated via random-noise simulation models. The results show that the proposed algorithm achieves both a lower PRD than the other algorithms at the same CR and minimal time for data acquisition, processing, and transmission.
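
A hedged sketch of the block-adaptive idea above: keep blocks small so buffering delay stays low, and pick each block's compression aggressiveness from its standard deviation. Coefficient thresholding with PyWavelets is a stand-in for the paper's coder; the block size, wavelet, and thresholds are invented for illustration.

```python
import numpy as np
import pywt

BLOCK = 256                                   # small blocks keep delay low

def compress_block(block):
    """Keep more coefficients for high-activity (high-std) blocks."""
    keep = 0.30 if np.std(block) > 50 else 0.10
    coeffs = pywt.wavedec(block, "db4", level=4)
    flat, slices = pywt.coeffs_to_array(coeffs)
    cutoff = np.quantile(np.abs(flat), 1 - keep)
    flat[np.abs(flat) < cutoff] = 0.0         # zero the small coefficients
    return flat, slices                       # flat is now highly sparse

def decompress_block(flat, slices):
    coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return pywt.waverec(coeffs, "db4")
```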

The Study on the GIS Software Engine based on PDA using GPS/GIS (GPS/GIS를 이용한 PDA기반 GIS 소프트웨어 엔진 연구)

  • PARK, Sung-Seok;KIM, Chang-Soo;SONG, Ha-Joo
    • Journal of Fisheries and Marine Sciences Education / v.17 no.1 / pp.76-85 / 2005
  • GIS (geographic information system) technology is a necessary function for supporting location-based services using GPS in the mobile environment. Mobile systems have basic functional limitations such as low processing speed, limited memory capacity, and small screen size. Because of these limitations, most mobile systems require a reduced digital map to overcome the problems posed by large volumes of spatial data. In this paper, we propose a reduced digital map format for providing location-based services in a PDA environment. Processing the proposed data format consists of map generation, redefinition of layers, creation of polygons, and format conversion. The proposed format reduces the data size by about 98% compared with the DXF format, based on the digital map of Busan.
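
The roughly 98% reduction above comes from the authors' purpose-built map format. As one hedged illustration of how vector map data can shrink, the sketch below combines Douglas-Peucker polyline simplification with 16-bit coordinate quantization over the layer's bounding box; the tolerance and bit depth are invented, and open (non-closed) polylines are assumed.

```python
import numpy as np

def simplify(points, tol):
    """Douglas-Peucker: drop vertices within tol of the chord."""
    if len(points) < 3:
        return points
    a = np.asarray(points[0], dtype=float)
    ab = np.asarray(points[-1], dtype=float) - a
    rel = np.asarray(points[1:-1], dtype=float) - a
    # Perpendicular distance of each interior point to the chord.
    d = np.abs(ab[0] * rel[:, 1] - ab[1] * rel[:, 0]) / np.linalg.norm(ab)
    i = int(np.argmax(d)) + 1
    if d[i - 1] <= tol:
        return [points[0], points[-1]]
    return simplify(points[:i + 1], tol)[:-1] + simplify(points[i:], tol)

def quantize(points, bbox, bits=16):
    """Store coordinates as small integers over the layer bounding box."""
    (x0, y0), (x1, y1) = bbox
    pts = np.asarray(points, dtype=float)
    q = (pts - [x0, y0]) / [x1 - x0, y1 - y0] * (2 ** bits - 1)
    return q.round().astype(np.uint16)        # 2 bytes per coordinate
```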

Clustering Algorithm Using Hashing in Classification of Multispectral Satellite Images

  • Park, Sung-Hee;Kim, Hwang-Soo;Kim, Young-Sup
    • Korean Journal of Remote Sensing / v.16 no.2 / pp.145-156 / 2000
  • Clustering is the process of partitioning a data set into meaningful clusters. As the amount of data to process increases, a faster algorithm is required than ever before. In this paper, we propose a clustering algorithm that partitions a multispectral remotely sensed image data set into several clusters using a hash search algorithm. The processing time of our algorithm is compared with that of clustering algorithms using other speed-up concepts. The experimental results are compared with respect to the number of bands, the number of clusters, and the size of the data. It is also shown that the processing time of our algorithm is shorter than that of clustering algorithms using other speed-up concepts when the data set is relatively large.
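
Multispectral imagery contains many repeated pixel vectors, which suggests one natural way hashing can speed up clustering: compute each distinct vector's nearest-cluster assignment once and reuse it via a hash table. The k-means-style assignment step below is a hedged stand-in for the paper's hash search algorithm, not a reconstruction of it.

```python
import numpy as np

def assign_with_hashing(pixels, centroids):
    """pixels: (n, bands) uint8 array; centroids: (k, bands). -> labels."""
    cache = {}                                 # pixel bytes -> cluster id
    labels = np.empty(len(pixels), dtype=np.int32)
    for i, p in enumerate(pixels):
        key = p.tobytes()                      # hashable pixel signature
        if key not in cache:                   # distance search only once
            cache[key] = int(np.argmin(((centroids - p) ** 2).sum(axis=1)))
        labels[i] = cache[key]                 # repeats hit the hash table
    return labels

# Demo with synthetic 4-band, 8-bit pixels; real imagery repeats far more.
rng = np.random.default_rng(1)
pix = rng.integers(0, 256, (100_000, 4), dtype=np.uint8)
cen = rng.integers(0, 256, (6, 4)).astype(float)
print(np.bincount(assign_with_hashing(pix, cen)))
```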