• Title/Summary/Keyword: Memory Requirements (메모리 요구량)


Design and Implementation of Buffer Cache for EXT3NS File System (EXT3NS 파일 시스템을 위한 버퍼 캐시의 설계 및 구현)

  • Sohn, Sung-Hoon;Jung, Sung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.12 / pp.2202-2211 / 2006
  • EXT3NS is a special-purpose file system for large-scale multimedia streaming servers. It is built on top of a streaming-acceleration hardware device called the Network-Storage card. The EXT3NS file system significantly improves streaming performance by eliminating memory-to-memory copy operations, i.e., sending video/audio from disk directly to the network interface with no main-memory buffering. In this paper, we design and implement a buffer cache mechanism, called PMEMCACHE, for the EXT3NS file system, and we propose a buffer cache replacement method, called ONS, for it. The ONS algorithm outperforms existing buffer replacement algorithms in a distributed multimedia streaming environment. In EXT3NS with PMEMCACHE, the read operation achieves 33 MB/sec and the random read operation 2.4 MB/sec, and the ONS algorithm outperforms other buffer cache replacement policies by about 600 KB/sec. As a result, PMEMCACHE and the ONS algorithm can greatly improve the performance of a multimedia streaming server that must serve multiple client requests at the same time.
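
The abstract does not spell out the internals of PMEMCACHE or the ONS policy, so the following is only a minimal sketch of the general shape of such a block-level buffer cache, with plain LRU standing in for ONS; all class and function names are illustrative.

```python
from collections import OrderedDict

class BufferCache:
    """Minimal block-level buffer cache. LRU stands in for the paper's ONS
    replacement policy, whose details the abstract does not give."""

    def __init__(self, capacity_blocks, read_block_fn):
        self.capacity = capacity_blocks
        self.read_block = read_block_fn      # fetches a block on a miss
        self.blocks = OrderedDict()          # block_no -> data, in LRU order
        self.hits = self.misses = 0

    def get(self, block_no):
        if block_no in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_no)     # mark most recently used
            return self.blocks[block_no]
        self.misses += 1
        data = self.read_block(block_no)          # miss: fetch from storage
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)       # evict least recently used
        self.blocks[block_no] = data
        return data

# Usage: a streaming-style sequential scan is all cold misses, which is why
# the replacement policy matters for multi-client streaming workloads.
cache = BufferCache(capacity_blocks=64, read_block_fn=lambda n: b"\0" * 4096)
for block in range(256):
    cache.get(block)
print(cache.hits, cache.misses)
```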

Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.1 / pp.726-734 / 2015
  • Relational databases, which manage data by structuring it, are at present the most widely used technology for data management. However, in relational databases, service slows as the amount of data grows because of constraints on the read and write operations used to store or query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configurations of hardware, CPU, memory, and network, to keep operating smoothly. In this paper, in order to improve web information services that slow down as data accumulates in relational databases, we implemented a model that extracts large amounts of data quickly and safely for users by sending the data to the Hadoop Distributed File System (HDFS), unifying and reconstructing it, and then processing the HDFS files. We applied our model to a Web-based civil-affairs system that stores image files, an irregular data-processing workload. Our proposed system's data processing was found to be 0.4 sec faster than that of a relational database system. Thus, a Hadoop-based big data processing technique can support Web information services that must process data volumes as large as those of conventional relational databases. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for Web services that provide fast information processing for organizations that need efficient big data processing as their conventional relational databases grow.
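
The paper's civil-affairs record layout is not public, so the following is a hedged sketch of the kind of HDFS batch processing described: a minimal Hadoop Streaming mapper in Python that aggregates records stored on HDFS. The field layout and paths are hypothetical.

```python
#!/usr/bin/env python3
# mapper.py -- minimal Hadoop Streaming mapper. Assumed (hypothetical)
# record layout: tab-separated [record_id, doc_type, ...]; emits "doc_type \t 1".
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) >= 2:
        print(f"{fields[1]}\t1")
```

The matching reducer sums the counts per key (Hadoop Streaming delivers the mapper output to the reducer sorted by key, so one pass suffices):

```python
#!/usr/bin/env python3
# reducer.py -- sums per-key counts emitted by mapper.py.
import sys

current_key, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key and current_key is not None:
        print(f"{current_key}\t{total}")
        total = 0
    current_key = key
    total += int(value)
if current_key is not None:
    print(f"{current_key}\t{total}")

# A typical launch (paths illustrative):
#   hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
#     -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py \
#     -input /civil/records -output /civil/summary
```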

A High Speed Block Turbo Code Decoding Algorithm and Hardware Architecture Design (고속 블록 터보 코드 복호 알고리즘 및 하드웨어 구조 설계)

  • 유경철;신형식;정윤호;김근회;김재석
    • Journal of the Institute of Electronics Engineers of Korea SD / v.41 no.7 / pp.97-103 / 2004
  • In this paper, we propose a high-speed block turbo code decoding algorithm and an efficient hardware architecture. Multimedia wireless data communication systems need channel codes with strong error-correcting capability. Block turbo codes support variable code rates and packet sizes and perform well thanks to the soft-decision iterative decoding of turbo codes; however, they have long decoding times because of the iterative decoding and the complicated extrinsic-information computation. The proposed algorithm shortens the decoding time by using a threshold that represents the channel information. Once the threshold has been determined from simulation results, the algorithm skips the extrinsic-information calculation for bits with good channel information and assigns those bits '1', the highest reliability value. The threshold is derived from the absolute mean and the standard deviation of the LLR (Log-Likelihood Ratio), under the assumption that the LLR distribution is Gaussian. The hardware design, in Verilog HDL, reduces decoding time by about 30% compared with the conventional algorithm and occupies about 20K logic gates and 32 Kbits of memory.
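
As a rough illustration of the thresholding idea described above, the sketch below derives a threshold from the absolute mean and standard deviation of toy Gaussian LLRs and marks the bits that would bypass the extrinsic-information computation; the exact combination rule (and the weight k) is an assumption of this sketch, not the paper's formula.

```python
import numpy as np

def reliable_bit_mask(llr, k=1.0):
    """Threshold derived from the absolute mean and standard deviation of
    the LLRs (assumed Gaussian, per the abstract); k is an assumed weight."""
    a = np.abs(llr)
    threshold = a.mean() + k * a.std()
    return a >= threshold

# Toy LLRs: bits the mask marks reliable skip the iterative extrinsic
# computation and get the highest reliability value ('1') directly; only
# the remaining bits go through the expensive extrinsic update.
llr = np.random.default_rng(0).normal(loc=2.0, scale=1.0, size=64)
mask = reliable_bit_mask(llr)
extrinsic = np.where(mask, np.sign(llr) * 1.0, 0.0)  # 0.0 = placeholder for computed values
print(f"{mask.sum()} of {llr.size} bits bypass the extrinsic computation")
```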

Index for Efficient Ontology Retrieval and Inference (효율적인 온톨로지 검색과 추론을 위한 인덱스)

  • Song, Seungjae;Kim, Insung;Chun, Jonghoon
    • The Journal of Society for e-Business Studies / v.18 no.2 / pp.153-173 / 2013
  • Ontologies have been attracting increasing interest with the recent rise of the semantic web and related technologies. The focus is mostly on inference query processing, which requires advanced techniques for storing and searching ontologies efficiently, and it has been actively studied in the area of semantic-based search. The W3C recommends RDFS and OWL for representing ontologies; however, memory-based editors, inference engines, and triple stores all store an ontology as a simple set of triples. Naturally, performance is limited, especially when a large-scale ontology must be processed. A variety of algorithms for efficient inference query processing have been proposed, many of them built on proven relational database technology, but none has succeeded in obtaining the complete set of inference results reflecting the five characteristics of ontology properties. In this paper, we propose a new index structure, called the hyper cube index, to process inference queries efficiently. Our approach is based on the intuition that an index can speed up query processing when extensive inferencing is required.
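
The abstract does not specify the internal structure of the hyper cube index, so the sketch below shows a generic triple-pattern index in the same spirit: every (s, p, o) binding pattern gets its own dictionary, making any triple-pattern lookup a single hash access. Names are illustrative.

```python
from collections import defaultdict
from itertools import product

class TripleIndex:
    """Indexes (s, p, o) triples under all eight binding patterns so any
    triple-pattern query is one dictionary lookup. A generic sketch in the
    spirit of the paper's hyper cube index, not its actual structure."""

    def __init__(self):
        # One dict per pattern, e.g. mask (1,1,0) maps (s, p) -> matching triples.
        self.index = {mask: defaultdict(set) for mask in product([0, 1], repeat=3)}

    def add(self, s, p, o):
        t = (s, p, o)
        for mask in self.index:
            key = tuple(v for v, m in zip(t, mask) if m)
            self.index[mask][key].add(t)

    def query(self, s=None, p=None, o=None):
        mask = tuple(int(v is not None) for v in (s, p, o))
        key = tuple(v for v in (s, p, o) if v is not None)
        return self.index[mask].get(key, set())

idx = TripleIndex()
idx.add("ex:Dog", "rdfs:subClassOf", "ex:Animal")
idx.add("ex:Cat", "rdfs:subClassOf", "ex:Animal")
print(idx.query(p="rdfs:subClassOf", o="ex:Animal"))  # both triples
```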

Energy Efficient Clustering Algorithm for Surveillance and Reconnaissance Applications in Wireless Sensor Networks (무선 센서 네트워크에서 에너지 효율적인 감시·정찰 응용의 클러스터링 알고리즘 연구)

  • Kong, Joon-Ik;Lee, Jae-Ho;Kang, Jiheon;Eom, Doo-Seop
    • The Journal of Korean Institute of Communications and Information Sciences / v.37C no.11 / pp.1170-1181 / 2012
  • Wireless Sensor Networks (WSNs) are used in diverse applications. In general, sensor nodes deployed over a given area operate under tight resource constraints, such as battery power, memory size, MCU, and RF capability, so energy-efficient operation is a primary requirement in WSNs. The event-driven delivery model (as in surveillance and reconnaissance applications) has distinctive characteristics with respect to event states. Under such a model, clustering algorithms can manage sensor nodes' energy efficiently thanks to data aggregation: a designated node collects packets from its child nodes in the network topology and aggregates them into one packet to relay at once, reducing the number of packets transmitted to the sink node. However, most clustering algorithms have been designed without considering the characteristics of the event-driven delivery model, which causes problems in such applications. In this paper, we propose enhanced clustering algorithms for surveillance and reconnaissance applications that account for both target movement and energy efficiency. The algorithms form clusters through local contention among the nodes that have detected a target, using a method called CHEW (Cluster Head Election Window), and thereby reduce both cluster-maintenance cost and energy consumption. We analyze the traces of cluster movements according to target locations, evaluate the results, and compare our algorithms with others through simulations, verifying that they use energy efficiently.
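
The CHEW election itself is not detailed in the abstract; the sketch below shows one plausible reading, in which nodes that detected a target contend within a short window using an energy-weighted back-off, so the node with the most residual energy tends to become cluster head. The back-off formula is an assumption of this sketch.

```python
import random

def elect_cluster_head(detecting_nodes, window=0.05):
    """Sketch of a CHEW-style local contention: each node that detected the
    target schedules its head announcement with a back-off that shrinks as
    its residual energy grows; the first to fire wins and the rest join its
    cluster. The formula below is assumed, not taken from the paper."""
    def backoff(node):
        # Higher residual energy -> shorter back-off (jitter breaks ties).
        return window * (1.0 - node["energy"]) + random.uniform(0, window * 0.1)
    return min(detecting_nodes, key=backoff)

nodes = [{"id": i, "energy": random.random()} for i in range(5)]
print("cluster head:", elect_cluster_head(nodes)["id"])
```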

Computationally Efficient Video Object Segmentation using SOM-Based Hierarchical Clustering (SOM 기반의 계층적 군집 방법을 이용한 계산 효율적 비디오 객체 분할)

  • Jung Chan-Ho;Kim Gyeong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.4 s.310 / pp.74-86 / 2006
  • This paper proposes a robust and computationally efficient algorithm for automatic video object segmentation. To implement spatio-temporal segmentation, which aims to combine motion segmentation and color segmentation efficiently, we employ an SOM-based hierarchical clustering method in which the segmentation process is treated as clustering of feature vectors. As a result, the high computational cost that conventional video object segmentation methods incur to obtain accurate segmentation, and the performance degradation caused by noise, are both significantly reduced. A measure of motion-vector reliability that employs an MRF-based MAP estimation scheme is introduced to minimize the influence of motion estimation errors, along with a noise-elimination scheme based on the motion reliability histogram and a clustering validity index for automatically identifying the number of objects in the scene. A cross-projection method for effective object tracking and a dynamic memory for maintaining temporal coherency are introduced as well. Experiments over several video sequences demonstrate the proposed algorithm's computational efficiency, robustness to noise, and high segmentation accuracy.
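
For readers unfamiliar with SOMs, the sketch below trains a minimal 2-D self-organizing map over feature vectors and then assigns each vector to its best-matching unit, which is the clustering step such a segmentation builds on; the grid size and hyperparameters are illustrative, not the paper's.

```python
import numpy as np

def train_som(features, grid=(4, 4), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal 2-D self-organizing map; hyperparameters are illustrative."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, features.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = features[rng.integers(len(features))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                               # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3                  # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        nbh = np.exp(-d2 / (2 * sigma ** 2))                # Gaussian neighborhood
        weights += lr * nbh[:, None] * (x - weights)
    return weights

# Cluster toy (color, motion) feature vectors into 16 prototype cells.
feats = np.random.default_rng(1).random((500, 5))
prototypes = train_som(feats)
labels = np.argmin(((feats[:, None] - prototypes[None]) ** 2).sum(-1), axis=1)
```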

Super Resolution Algorithm Based on Edge Map Interpolation and Improved Fast Back Projection Method in Mobile Devices (모바일 환경을 위해 에지맵 보간과 개선된 고속 Back Projection 기법을 이용한 Super Resolution 알고리즘)

  • Lee, Doo-Hee;Park, Dae-Hyun;Kim, Yoon
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.103-108 / 2012
  • Recently, as high-performance mobile devices have become widespread and multimedia content applications have expanded, Super Resolution (SR) techniques that reconstruct low-resolution images into high-resolution ones have grown important. On mobile devices, SR algorithms must be designed with computation and memory in mind because resources are restricted. In this paper, we propose a new fast single-frame SR technique suitable for mobile devices. To prevent color distortion, we convert from the RGB color domain to the HSV color domain and process the brightness channel V (Value), reflecting the characteristics of human visual perception. First, the low-resolution image is enlarged by an improved fast back projection that incorporates noise elimination; at the same time, a reliable edge map is extracted using LoG (Laplacian of Gaussian) filtering. Finally, the high-resolution image is reconstructed from the edge information and the improved back projection result. The proposed technique effectively removes the unnatural artifacts generated during super-resolution restoration, and edge information that would otherwise be lost is recovered and emphasized. Experimental results indicate that the proposed algorithm outperforms conventional back projection and interpolation methods.
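
The paper's improved fast back projection is not reproduced here, but the textbook baseline it improves on is short enough to sketch: iteratively downsample the current estimate, compare with the observed low-resolution image, and back-project the error, with a LoG edge map extracted alongside. Step size and iteration count are illustrative.

```python
import numpy as np
from scipy import ndimage

def iterative_back_projection(lr_img, scale=2, iters=10, step=1.0):
    """Plain iterative back projection (textbook baseline, not the paper's
    improved variant): refine an upsampled estimate by feeding back the
    error between the simulated and observed low-resolution images."""
    hr = ndimage.zoom(lr_img, scale, order=1)               # initial bilinear upsample
    for _ in range(iters):
        simulated_lr = ndimage.zoom(hr, 1.0 / scale, order=1)  # imaging model: downsample
        error = lr_img - simulated_lr
        hr += step * ndimage.zoom(error, scale, order=1)       # back-project the error
    return np.clip(hr, 0.0, 1.0)

# Edge map via LoG filtering, as used to guide the reconstruction.
img = np.random.default_rng(2).random((32, 32))
edge_map = ndimage.gaussian_laplace(img, sigma=1.0)
sr = iterative_back_projection(img, scale=2)
```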

Development of a Remotely Sensed Image Processing/Analysis System : GeoPixel Ver. 1.0 (JAVA를 이용한 위성영상처리/분석 시스템 개발 : GeoPixel Ver. 1.0)

  • 안충현;신대혁
    • Korean Journal of Remote Sensing / v.13 no.1 / pp.13-30 / 1997
  • Recent improvements in satellite remote sensing sensors, represented by hyperspectral imaging sensors and high-spatial-resolution sensors, produce large amounts of data, typically several hundred megabytes per scene. Moreover, growing information exchange via the internet and the information superhighway calls for more active service systems that process and analyze remote sensing data to provide value-added products. In this context, an advanced satellite data processing system has been developed to achieve high computing speed and efficiency in processing huge volumes of data, to enable network computing, and to ease system improvement, upgrading, and management. The JAVA internet programming language offers several advantages for such software, including object-oriented programming, multi-threading, and robust memory management. Using these features, a satellite data processing system named GeoPixel has been developed in JAVA. GeoPixel adopts newly developed techniques, including an object-pipe connection method between processing stages and a multi-threaded structure. In other words, the system is platform-independent and handles huge volumes of remote sensing data robustly and efficiently. In evaluations of its data processing capability, it showed satisfactory results in both computing-resource utilization (CPU and memory) and processing speed.
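
The object-pipe plus multi-threading design can be approximated, in Python rather than the system's JAVA, by processing stages connected with thread-safe queues; this is a hedged sketch of the pattern, not GeoPixel's actual classes.

```python
import threading
import queue

def stage(in_q, out_q, fn):
    """One processing stage; stages connected by queues approximate an
    'object-pipe' between processing steps (illustrative, not GeoPixel's API)."""
    while True:
        item = in_q.get()
        if item is None:                 # poison pill: propagate and stop
            if out_q:
                out_q.put(None)
            break
        result = fn(item)
        if out_q:
            out_q.put(result)

q1, q2 = queue.Queue(), queue.Queue()
t1 = threading.Thread(target=stage, args=(q1, q2, lambda tile: tile * 2))  # e.g. radiometric step
t2 = threading.Thread(target=stage, args=(q2, None, print))               # e.g. output step
t1.start(); t2.start()
for tile in range(3):                    # tiles flow through the pipe concurrently
    q1.put(tile)
q1.put(None)
t1.join(); t2.join()
```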

Compression of CNN Using Low-Rank Approximation and CP Decomposition Methods (저계수 행렬 근사 및 CP 분해 기법을 이용한 CNN 압축)

  • Moon, HyeonCheol;Moon, Gihwa;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.26 no.2 / pp.125-131 / 2021
  • In recent years, Convolutional Neural Networks (CNNs) have achieved outstanding performance in computer vision tasks such as image classification, object detection, and visual quality enhancement. However, because CNN models demand large amounts of computation and memory, their application in low-power environments such as mobile or IoT devices is limited. Hence the growing need for neural network compression that reduces model size while preserving task performance as much as possible. In this paper, we propose a method to compress CNN models by combining two matrix decomposition methods: LR (Low-Rank) approximation and CP (Canonical Polyadic) decomposition. Unlike conventional methods that apply a single decomposition method to a CNN model, we selectively apply the two methods depending on the layer type to improve compression performance. To evaluate the proposed method, we use image classification models including VGG-16, ResNet50, and MobileNetV2. The experimental results show that, over the same compression-ratio range of 1.5x to 12.1x, the proposed method yields better classification performance than the existing method that applies only LR approximation.
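
The LR half of the method can be illustrated with a truncated SVD of a fully connected layer's weight matrix, as in the hedged sketch below; the CP decomposition applied to convolutional layers (e.g., via a parafac routine) is omitted for brevity, and the rank choice is illustrative.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Truncated-SVD low-rank approximation of a fully connected layer's
    weight matrix: W (m x n) becomes A (m x r) @ B (r x n), cutting the
    parameter count from m*n to r*(m+n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]          # m x r (singular values folded in)
    B = Vt[:rank]                       # r x n
    return A, B

W = np.random.default_rng(3).standard_normal((1024, 512))
A, B = low_rank_factorize(W, rank=64)
compression = W.size / (A.size + B.size)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"compression {compression:.1f}x, relative error {err:.3f}")
```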

An Efficient Dual Queue Strategy for Improving Storage System Response Times (저장시스템의 응답 시간 개선을 위한 효율적인 이중 큐 전략)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence / v.10 no.3 / pp.19-24 / 2024
  • Recent advances in large-scale data processing technologies such as big data, cloud computing, and artificial intelligence have increased the demand for high-performance storage devices in data centers and enterprise environments. In particular, the response speed of storage devices is a key factor in overall system performance. Solid state drives (SSDs) based on the Non-Volatile Memory Express (NVMe) interface are gaining traction, but new bottlenecks emerge when large data I/O requests from multiple hosts are handled simultaneously. SSDs typically process host requests by queuing them sequentially in an internal queue; when requests with long transfer lengths are processed first, shorter requests wait longer, increasing the average response time. Data-transfer timeouts and data partitioning have been proposed to address this, but they do not provide a fundamental solution. In this paper, we propose a dual-queue-based scheduling scheme (DQBS) that maintains the request arrival order in one queue and orders requests by transfer length in the other; the request time and transfer length are then considered together to determine an efficient transmission order. This balances the processing of long and short requests and thus reduces the overall average response time. Simulation results show that the proposed method outperforms the existing sequential processing method. This study presents a scheduling technique that maximizes data-transfer efficiency in a high-performance SSD environment and is expected to contribute to the development of next-generation high-performance storage systems.
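
The abstract describes the two queues but not the exact dispatch rule, so the sketch below fills that gap with an assumed score that trades request age against transfer length; it shows the mechanism, not the paper's DQBS policy.

```python
import heapq
from itertools import count

class DualQueueScheduler:
    """Two views of the pending requests: arrival order (FIFO) and transfer
    length (min-heap). The dispatch rule, which serves the shortest request
    unless the oldest has waited long enough to outweigh the length gap,
    is an assumption of this sketch."""

    def __init__(self, age_weight=1.0):
        self.arrival = []            # FIFO view: (seq, request)
        self.by_length = []          # length view: (length, seq, request)
        self.seq = count()
        self.age_weight = age_weight

    def submit(self, request):       # request = {"id": ..., "length": blocks}
        s = next(self.seq)
        self.arrival.append((s, request))
        heapq.heappush(self.by_length, (request["length"], s, request))

    def dispatch(self, now):
        if not self.arrival:
            return None
        oldest_seq, oldest = self.arrival[0]
        shortest_len, _, shortest = self.by_length[0]
        # Age (submission count as a proxy for time) vs. extra transfer length.
        if self.age_weight * (now - oldest_seq) > oldest["length"] - shortest_len:
            choice = oldest
        else:
            choice = shortest
        self.arrival = [(s, r) for s, r in self.arrival if r is not choice]
        self.by_length = [e for e in self.by_length if e[2] is not choice]
        heapq.heapify(self.by_length)
        return choice

sched = DualQueueScheduler(age_weight=2.0)
for rid, length in [(0, 512), (1, 8), (2, 16)]:
    sched.submit({"id": rid, "length": length})
print([sched.dispatch(now=t)["id"] for t in range(3)])  # short requests first: [1, 2, 0]
```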