• Title/Summary/Keyword: Memory Requirements (메모리 요구량)


Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.1
    • /
    • pp.726-734
    • /
    • 2015
  • Relational databases, which manage data by structuring it, are currently the most widely used technology for data management. However, in relational databases, service slows as the amount of data grows because of constraints on the read and write operations used to store or query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configuration of hardware, CPU, memory, and network, to keep operating smoothly. In this paper, to improve Web information services that slow down as data accumulates in relational databases, we implemented a model that sends data to the Hadoop Distributed File System (HDFS), unifies and reconstructs it, and then processes the HDFS files so that large amounts of data can be extracted quickly and safely for users. We implemented our model in a Web-based civil affairs system that stores image files, a workload involving irregular data. Our proposed system's data processing was found to be 0.4 sec faster than that of a relational database system. Thus, a Hadoop-based big data processing technique can support Web information services that must handle data volumes as large as those in conventional relational databases. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for Web services that provide fast information processing for organizations whose conventional relational databases have grown to the point of requiring efficient big data processing.
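
As a rough illustration of the send-to-HDFS step the abstract describes, the sketch below writes unified records into HDFS and reads them back. It assumes a WebHDFS endpoint and the third-party Python `hdfs` client; the endpoint, user, paths, and record layout are all hypothetical, since the paper does not publish its civil-affairs system's details.

```python
from hdfs import InsecureClient  # third-party WebHDFS client (pip install hdfs)

# Hypothetical NameNode endpoint and paths, for illustration only.
client = InsecureClient("http://namenode:9870", user="webapp")

# Unify records pulled from the relational side into one line-oriented file.
records = ["doc_id,image_path,issued", "4711,/img/4711.png,2015-01-07"]
client.write("/civil/unified/records.csv",
             data="\n".join(records).encode("utf-8"),
             overwrite=True)

# Downstream jobs would process the HDFS file; here we simply read it
# back to confirm the round trip.
with client.read("/civil/unified/records.csv") as reader:
    print(reader.read().decode("utf-8"))
```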

A High Speed Block Turbo Code Decoding Algorithm and Hardware Architecture Design (고속 블록 터보 코드 복호 알고리즘 및 하드웨어 구조 설계)

  • 유경철;신형식;정윤호;김근회;김재석
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.7
    • /
    • pp.97-103
    • /
    • 2004
  • In this paper, we propose a high-speed block turbo code decoding algorithm and an efficient hardware architecture for it. Multimedia wireless data communication systems need channel codes with high-performance error-correcting capabilities. Block turbo codes support variable code rates and packet sizes, and show high performance thanks to the soft-decision iterative decoding of turbo codes. However, block turbo codes have a long decoding time because of the iterative decoding and a complicated extrinsic-information computation. The proposed algorithm reduces this long decoding time by using a threshold that represents channel information. Once the threshold has been fixed from simulation results, the algorithm skips the extrinsic-information calculation for bits with good channel information and instead assigns them a high reliability value directly. The threshold is set from the absolute mean and standard deviation of the LLR (Log Likelihood Ratio), on the assumption that the LLR distribution is Gaussian, and the skipped bits are assigned '1', the highest reliability value. The hardware design, written in Verilog HDL, reduces decoding time by about 30% compared with the conventional algorithm and requires about 20K logic gates and 32 Kbit of memory.
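
A minimal numerical sketch of the thresholding idea, assuming Gaussian-distributed LLRs as the paper does. The scale factor `k` stands in for the simulation-tuned decision and is purely hypothetical:

```python
import numpy as np

def reliable_bit_mask(llr, k=1.0):
    """Select bits whose |LLR| exceeds a channel-derived threshold.

    The threshold is built from the absolute mean and standard deviation
    of the LLRs, following the paper's Gaussian assumption. k is a
    hypothetical tuning factor (the paper fixes the threshold by
    simulation).
    """
    abs_llr = np.abs(llr)
    thr = abs_llr.mean() + k * abs_llr.std()
    return abs_llr > thr, thr

# Bits with good channel information skip the extrinsic-information
# calculation and receive the highest reliability value directly.
llr = np.random.randn(1024) * 2.0        # stand-in LLRs for illustration
reliable, thr = reliable_bit_mask(llr)
extrinsic = np.zeros_like(llr)
extrinsic[reliable] = 1.0                # highest reliability value ('1')
print(f"threshold {thr:.2f}: {reliable.sum()} of {llr.size} bits skipped")
```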

Index for Efficient Ontology Retrieval and Inference (효율적인 온톨로지 검색과 추론을 위한 인덱스)

  • Song, Seungjae;Kim, Insung;Chun, Jonghoon
    • The Journal of Society for e-Business Studies
    • /
    • v.18 no.2
    • /
    • pp.153-173
    • /
    • 2013
  • Ontologies have been attracting increasing interest with the recent rise of the semantic web and related technologies. The focus is mostly on inference query processing, which requires high-level techniques for storing and searching ontologies efficiently, and it has been actively studied in the area of semantic-based searching. The W3C recommends RDFS and OWL for representing ontologies. However, memory-based editors, inference engines, and triple storages all store an ontology as a simple set of triples, so their performance is limited, especially when a large-scale ontology must be processed. A variety of studies have proposed algorithms for efficient inference query processing, many of them built on proven relational database technology, but none has succeeded in obtaining the complete set of inference results reflecting the five characteristics of ontology properties. In this paper, we propose a new index structure, called the hyper cube index, to process inference queries efficiently. Our approach is based on the intuition that an index can speed up query processing when extensive inferencing is required.
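
The abstract names the hyper cube index without detailing its structure, so the sketch below shows only the baseline idea it improves on: position-keyed indexing of triples so that (s, p, o) patterns with wildcards can be answered without scanning the whole triple set. All identifiers are illustrative.

```python
from collections import defaultdict

class TripleIndex:
    """Toy index over RDF-style triples (illustrative stand-in only;
    the paper's hyper cube index is a different, richer structure)."""

    def __init__(self):
        self.triples = set()
        # One inverted index per triple position: term -> set of triples.
        self.by_pos = [defaultdict(set), defaultdict(set), defaultdict(set)]

    def add(self, s, p, o):
        t = (s, p, o)
        self.triples.add(t)
        for i, term in enumerate(t):
            self.by_pos[i][term].add(t)

    def match(self, s=None, p=None, o=None):
        """Answer a pattern with None as wildcard."""
        pattern = (s, p, o)
        candidates = [self.by_pos[i][term]
                      for i, term in enumerate(pattern) if term is not None]
        if not candidates:
            return set(self.triples)
        # Start from the smallest posting set, then verify the full pattern.
        base = min(candidates, key=len)
        return {t for t in base
                if all(term is None or t[i] == term
                       for i, term in enumerate(pattern))}

idx = TripleIndex()
idx.add("ex:Seoul", "rdf:type", "ex:City")
idx.add("ex:City", "rdfs:subClassOf", "ex:Place")
print(idx.match(p="rdf:type"))   # all typing triples, no full scan
```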

Energy Efficient Clustering Algorithm for Surveillance and Reconnaissance Applications in Wireless Sensor Networks (무선 센서 네트워크에서 에너지 효율적인 감시·정찰 응용의 클러스터링 알고리즘 연구)

  • Kong, Joon-Ik;Lee, Jae-Ho;Kang, Jiheon;Eom, Doo-Seop
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37C no.11
    • /
    • pp.1170-1181
    • /
    • 2012
  • Wireless Sensor Networks (WSNs) are used in diverse applications. In general, the sensor nodes that are easily deployed over specific areas have many resource constraints, such as battery power, memory size, MCU, and RF. Hence, efficient energy consumption is, first of all, strongly required in WSNs. The event-driven delivery model (e.g., surveillance and reconnaissance applications) has several characteristics in terms of event states. On such a model, clustering algorithms are widely used to manage the sensor nodes' energy efficiently, owing to the advantages of data aggregation: a designated node collects packets from its child nodes in the network topology and aggregates them into one packet before relaying, so the number of packets transmitted to the sink node is reduced. However, most clustering algorithms have been designed without considering the characteristics of the event-driven delivery model, which results in several problems. In this paper, we propose enhanced clustering algorithms for surveillance and reconnaissance applications that take into account both target movement and energy efficiency. These algorithms form clusters through local contention among nodes that have already detected a target, using a method called CHEW (Cluster Head Election Window), and thereby reduce both the cost of cluster maintenance and energy consumption; a contention sketch follows below. We analyze the traces of cluster movement according to target locations, evaluate the results, and compare our algorithms with others through simulations. Finally, we verify that our algorithms use energy efficiently.
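
The abstract names CHEW (Cluster Head Election Window) but does not give its exact rule, so the following is a hypothetical contention sketch: each node that detected the target draws a back-off delay inside the election window, shorter for higher residual energy, and the first node to fire becomes cluster head.

```python
import random

def chew_elect(detecting_nodes, window=0.10):
    """Hypothetical CHEW-style local contention (details assumed, not
    the paper's). Nodes have residual energy normalized to [0, 1]."""
    def backoff(node):
        # More residual energy -> earlier announcement; small jitter
        # breaks ties between equally charged neighbors.
        return window * (1.0 - node["energy"]) + random.uniform(0, 0.01)

    head = min(detecting_nodes, key=backoff)
    members = [n["id"] for n in detecting_nodes if n is not head]
    return head["id"], members

# Six nodes that detected the same target contend locally.
nodes = [{"id": i, "energy": random.random()} for i in range(6)]
head, members = chew_elect(nodes)
print(f"cluster head: {head}, members: {members}")
```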

Computationally Efficient Video Object Segmentation using SOM-Based Hierarchical Clustering (SOM 기반의 계층적 군집 방법을 이용한 계산 효율적 비디오 객체 분할)

  • Jung Chan-Ho;Kim Gyeong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.74-86
    • /
    • 2006
  • This paper proposes a robust and computationally efficient algorithm for automatic video object segmentation. To implement spatio-temporal segmentation, which aims at an efficient combination of motion segmentation and color segmentation, an SOM-based hierarchical clustering method is employed in which the segmentation process is regarded as clustering of feature vectors. As a result, the high computational complexity that conventional video object segmentation methods require to obtain exact segmentation results, as well as the performance degradation due to noise, are significantly reduced. A measure of motion vector reliability that employs an MRF-based MAP estimation scheme is introduced to minimize the influence of motion estimation errors. In addition, a noise elimination scheme based on the motion reliability histogram and a clustering validity index for automatically identifying the number of objects in the scene are applied. A cross projection method for effective object tracking and a dynamic memory for maintaining temporal coherency are introduced as well. A set of experiments conducted over several video sequences demonstrates the proposed algorithm's efficiency in terms of computational complexity, robustness to noise, and higher segmentation accuracy.
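
As a minimal sketch of the SOM step at the core of such clustering (the paper's hierarchical merging and motion-reliability weighting are omitted), the code below trains a small map over stand-in feature vectors and assigns each vector to its best-matching unit. Grid size, learning schedule, and features are illustrative assumptions.

```python
import numpy as np

def train_som(features, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map: per sample, find the best-matching
    unit (BMU) and pull its grid neighborhood toward the sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, features.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = features[rng.integers(len(features))]
        lr = lr0 * (1 - t / iters)                     # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 1e-3        # shrinking neighborhood
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        influence = np.exp(-d2 / (2 * sigma ** 2))
        weights += lr * influence[:, None] * (x - weights)
    return weights

# Treat each SOM node as a cluster: label every feature vector by its BMU.
feats = np.random.rand(1000, 5)          # stand-in motion+color feature vectors
w = train_som(feats)
labels = np.argmin(((feats[:, None, :] - w[None]) ** 2).sum(-1), axis=1)
```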

Super Resolution Algorithm Based on Edge Map Interpolation and Improved Fast Back Projection Method in Mobile Devices (모바일 환경을 위해 에지맵 보간과 개선된 고속 Back Projection 기법을 이용한 Super Resolution 알고리즘)

  • Lee, Doo-Hee;Park, Dae-Hyun;Kim, Yoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.2
    • /
    • pp.103-108
    • /
    • 2012
  • Recently, as high-performance mobile devices have spread and multimedia content applications have expanded, the super-resolution (SR) technique, which reconstructs low-resolution images into high-resolution images, has become important. On mobile devices, SR algorithms must be designed with computation and memory in mind because resources are restricted. In this paper, we propose a new single-frame fast SR technique suitable for mobile devices. To prevent color distortion, we convert the RGB color domain to the HSV color domain and process the brightness information V (Value) in a way that reflects the characteristics of human visual perception. First, the low-resolution image is enlarged by an improved fast back projection that takes noise elimination into account. At the same time, a reliable edge map is extracted using LoG (Laplacian of Gaussian) filtering. Finally, the high-definition picture is reconstructed using the edge information and the improved back projection result. The proposed technique effectively removes the unnatural artifacts generated during SR restoration, and edge information that could otherwise be lost is recovered and emphasized. The experimental results indicate that the proposed algorithm outperforms conventional back projection and interpolation methods.
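
A minimal sketch of the two building blocks the abstract names: plain iterative back projection and LoG edge extraction. The paper's improved fast variant and edge-guided reconstruction are not reproduced here, and the blur model, iteration count, and thresholds are assumptions.

```python
import numpy as np
from scipy import ndimage

def back_projection_sr(lr, scale=2, iters=5):
    """Classic iterative back projection: correct the HR estimate with
    the upsampled difference between the observed LR image and the HR
    estimate pushed back through an assumed imaging model."""
    hr = ndimage.zoom(lr, scale, order=3)              # initial bicubic upsample
    for _ in range(iters):
        simulated = ndimage.zoom(ndimage.gaussian_filter(hr, 1.0),
                                 1 / scale, order=3)   # assumed blur + decimate
        err = lr - simulated[:lr.shape[0], :lr.shape[1]]
        hr += ndimage.zoom(err, scale, order=3)[:hr.shape[0], :hr.shape[1]]
    return hr

def log_edge_map(img, sigma=1.0, thr=0.01):
    """LoG filtering followed by a simple magnitude threshold."""
    return np.abs(ndimage.gaussian_laplace(img, sigma)) > thr

v = np.random.rand(32, 32)               # stand-in V (brightness) channel
hr = back_projection_sr(v)
edges = log_edge_map(hr)
```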

Development of a Remotely Sensed Image Processing/Analysis System : GeoPixel Ver. 1.0 (JAVA를 이용한 위성영상처리/분석 시스템 개발 : GeoPixel Ver. 1.0)

  • 안충현;신대혁
    • Korean Journal of Remote Sensing
    • /
    • v.13 no.1
    • /
    • pp.13-30
    • /
    • 1997
  • Recent improvements in satellite remote sensing sensors, represented by hyperspectral imaging sensors and high-spatial-resolution sensors, provide a large amount of data, typically several hundred megabytes per scene. Moreover, increasing information exchange via the internet and the information superhighway demands more active service systems for processing and analyzing remote sensing data in order to provide value-added products. In this context, an advanced satellite data processing system is being developed to achieve high computing speed and efficiency when processing huge volumes of data, and to enable network computing and easy improvement, upgrading, and management of the system. The JAVA internet programming language provides several advantages for developing such software, including object-oriented programming, multi-threading, and robust memory management. Using these features, a satellite data processing system named GeoPixel has been developed in JAVA. GeoPixel adopts newly developed techniques, including an object-pipe connection method between processes and a multi-threaded structure. In other words, the system is platform independent and handles huge volumes of remote sensing data efficiently and robustly. In the evaluation of its data processing capability, satisfactory results were obtained in both computer resource utilization (CPU and memory) and processing speed.
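
As a rough analogy to the object-pipe/multi-threading design (GeoPixel itself is written in Java; this sketch uses Python threads, with queues standing in for object pipes), processing stages run concurrently and pass objects downstream:

```python
import threading, queue

def stage(name, fn, inbox, outbox):
    """One pipeline stage: objects flow in and out through queues."""
    def run():
        while True:
            item = inbox.get()
            if item is None:                 # poison pill: shut the stage down
                outbox.put(None)
                break
            outbox.put(fn(item))             # process and pass downstream
    t = threading.Thread(target=run, name=name, daemon=True)
    t.start()
    return t

# Hypothetical two-stage image-tile pipeline: read -> radiometric correct.
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    stage("read", lambda tile: tile, q1, q2),
    stage("correct", lambda tile: [p * 1.1 for p in tile], q2, q3),
]
q1.put([1.0, 2.0, 3.0])
q1.put(None)
for t in threads:
    t.join()
print(q3.get())   # the corrected tile
```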

Compression of CNN Using Low-Rank Approximation and CP Decomposition Methods (저계수 행렬 근사 및 CP 분해 기법을 이용한 CNN 압축)

  • Moon, HyeonCheol;Moon, Gihwa;Kim, Jae-Gon
    • Journal of Broadcast Engineering
    • /
    • v.26 no.2
    • /
    • pp.125-131
    • /
    • 2021
  • In recent years, Convolutional Neural Networks (CNNs) have achieved outstanding performance in computer vision fields such as image classification, object detection, and visual quality enhancement. However, because CNN models require huge amounts of computation and memory, their application in low-power environments such as mobile or IoT devices is limited. Therefore, the need has emerged for neural network compression that reduces model size while preserving task performance as much as possible. In this paper, we propose a method to compress CNN models by combining two matrix decomposition methods: LR (Low-Rank) approximation and CP (Canonical Polyadic) decomposition. Unlike conventional methods that apply a single matrix decomposition method to a CNN model, we selectively apply the two decomposition methods depending on the layer type to enhance compression performance. To evaluate the proposed method, we use image classification models such as VGG-16, ResNet50, and MobileNetV2. The experimental results show that, over the same compression range of 1.5 to 12.1 times, the proposed method gives better classification performance than the existing method that applies only the LR approximation.
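
The LR half of the scheme can be illustrated with a truncated SVD: a dense layer's weight matrix W is replaced by two thin factors, cutting parameters from m·n to r·(m+n). This is a generic low-rank sketch, not the paper's exact layer-selection rule, and the sizes below are arbitrary; CP decomposition plays the analogous role for convolutional kernels.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Truncated-SVD factorization: y = W x becomes y = U_r (V_r x),
    with U_r: m x r and V_r: r x n."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]      # absorb singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

W = np.random.randn(512, 1024)        # stand-in fully-connected weights
U_r, V_r = low_rank_factorize(W, rank=64)
x = np.random.randn(1024)
err = np.linalg.norm(W @ x - U_r @ (V_r @ x)) / np.linalg.norm(W @ x)
print(f"params: {W.size} -> {U_r.size + V_r.size}, relative error {err:.3f}")
```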

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing customized services for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize the flexible storage expansion needed to process massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after it recovers from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas cannot expand across nodes when the stored data must be distributed to various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system because it makes it easy to process unstructured log data through its flexible schema, facilitates node expansion when the amount of data grows rapidly, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies them according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB insert-performance evaluation over various chunk sizes.
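
A minimal sketch of the MongoDB side of such a system, using the `pymongo` driver: free-schema log documents are inserted as-is and later grouped per unit time, the kind of per-interval summary the log graph generator module would plot. Host, database, and field names are illustrative, not the paper's.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical connection and collection names.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["logs"]

# Schema-free insert: each log type may carry different fields.
logs.insert_one({
    "ts": datetime.now(timezone.utc),
    "branch": "seoul-01",                     # hypothetical fields
    "event": "transfer",
    "payload": {"amount": 50000, "channel": "web"},
})

# Aggregate event counts per minute for graphing.
pipeline = [
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m-%d %H:%M", "date": "$ts"}},
        "count": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},
]
for row in logs.aggregate(pipeline):
    print(row["_id"], row["count"])
```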