• Title/Summary/Keyword: Multimedia Applications (멀티미디어응용)

Search Results: 1,611

Implementation of a QoS routing path control based on KREONET OpenFlow Network Test-bed (KREONET OpenFlow 네트워크 테스트베드 기반의 QoS 라우팅 경로 제어 구현)

  • Kim, Seung-Ju;Min, Seok-Hong;Kim, Byung-Chul;Lee, Jae-Yong;Hong, Won-Taek
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.48 no.9
    • /
    • pp.35-46
    • /
    • 2011
  • Future Internet should support more efficient mobility management, flexible traffic engineering, and various emerging new services. Many traffic engineering techniques have therefore been suggested and developed, but it is impossible to apply them to the currently running commercial Internet. To overcome this problem, the OpenFlow protocol was proposed as a technique for controlling network equipment through a network controller running various networking applications. Because OpenFlow realizes a software-defined network, researchers can verify their own traffic engineering techniques by implementing them on the controller. In addition, for high-speed packet processing in an OpenFlow network, programmable NetFPGA cards with four 1G interfaces and commercial ProCurve OpenFlow switches can be used. In this paper, we implement an OpenFlow test-bed using hardware-accelerated NetFPGA cards and ProCurve switches on KREONET, implement the CSPF (Constraint-based Shortest Path First) algorithm, one of the most widely used QoS routing algorithms, and apply it to the large-scale test-bed to verify the performance and efficiency of the multimedia traffic engineering scheme for the Future Internet.
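
For readers unfamiliar with CSPF, its basic structure (prune links that violate the constraint, then run a shortest-path search on what remains) can be sketched in a few lines of Python. This is only an illustrative sketch under assumed names and a simple link-dictionary format, not the controller code used on the KREONET test-bed.

```python
# Hypothetical sketch of CSPF: prune links that violate the bandwidth
# constraint, then run Dijkstra on the remaining topology.
import heapq

def cspf(links, src, dst, min_bw):
    """links: dict {(u, v): {'cost': c, 'avail_bw': b}} of directed links."""
    # Step 1: constraint pruning -- keep only links with enough free bandwidth.
    adj = {}
    for (u, v), attr in links.items():
        if attr['avail_bw'] >= min_bw:
            adj.setdefault(u, []).append((v, attr['cost']))

    # Step 2: ordinary Dijkstra on the pruned graph.
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, cost in adj.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))

    if dst not in dist:
        return None                       # no feasible path under the constraint
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Example: pick a 100 Mbps-feasible path between switches 'A' and 'D'.
topo = {('A', 'B'): {'cost': 1, 'avail_bw': 50},
        ('A', 'C'): {'cost': 2, 'avail_bw': 200},
        ('C', 'D'): {'cost': 2, 'avail_bw': 150},
        ('B', 'D'): {'cost': 1, 'avail_bw': 300}}
print(cspf(topo, 'A', 'D', min_bw=100))   # ['A', 'C', 'D']
```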

Compact Field Remapping for Dynamically Allocated Structures (동적으로 할당된 구조체를 위한 압축된 필드 재배치)

  • Kim, Jeong-Eun;Han, Hwan-Soo
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.10
    • /
    • pp.1003-1012
    • /
    • 2005
  • The most significant difference between embedded systems and general-purpose systems is that embedded systems may use only limited resources, including battery and memory. In particular, the number of applications dealing with multimedia data keeps increasing. In such systems with heavy data computation, the delay of memory access is one of the major bottlenecks hurting system performance. As a result, many researchers have investigated various techniques to reduce the memory access cost. Most programs generally exhibit locality in memory references. Temporal locality means that a resource accessed at one point will be used again in the near future. Spatial locality means that the likelihood of using a resource is higher if resources near it have just been accessed. The latest embedded processors usually adopt cache memory to exploit these two types of locality. Processors access cache memory faster than off-chip memory, which reduces latency. In this paper we propose an enhanced dynamic allocation technique for structure-type data that eliminates unused memory space and reduces both the cache miss rate and the application execution time. The proposed approach aggregates fields from multiple dynamically allocated records and remaps them consecutively in memory. Experiments on the Olden benchmarks show a 13.9% drop in the L1 cache miss rate and a 15.9% drop in the L2 cache miss rate on average, compared to previously proposed techniques. We also find that execution time is reduced by 10.9% on average, compared to the previous work.
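
The layout idea behind the proposed remapping (grouping the same field of many dynamically allocated records into one contiguous region) can be mimicked in Python with parallel arrays, although the paper itself targets C structures and a custom allocator. The sketch below is only illustrative; the class, field names, and hot/cold field split are assumptions.

```python
# Instead of allocating one object per record (array-of-structures style),
# a pool stores each field of all records consecutively, so a traversal
# that touches only the "hot" fields walks a small contiguous region.
import numpy as np

class SoAPool:
    """Field-remapped pool: 'key' and 'next' of all records are contiguous."""
    def __init__(self, capacity):
        self.key = np.empty(capacity, dtype=np.int64)       # hot field
        self.next = np.full(capacity, -1, dtype=np.int64)   # hot field
        self.payload = np.empty(capacity, dtype=object)     # cold field
        self.size = 0

    def alloc(self, key, payload, nxt=-1):
        i = self.size
        self.key[i], self.payload[i], self.next[i] = key, payload, nxt
        self.size += 1
        return i                                 # index plays the role of a pointer

    def walk_keys(self, head):
        """Traverse the linked records touching only the remapped hot fields."""
        total, i = 0, head
        while i != -1:
            total += int(self.key[i])
            i = int(self.next[i])
        return total

pool = SoAPool(capacity=4)
head = pool.alloc(3, 'a')
pool.next[head] = pool.alloc(5, 'b')
print(pool.walk_keys(head))   # 8
```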

Efficient Methods for Detecting Frame Characteristics and Objects in Video Sequences (내용기반 비디오 검색을 위한 움직임 벡터 특징 추출 알고리즘)

  • Lee, Hyun-Chang;Lee, Jae-Hyun;Jang, Ok-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.1
    • /
    • pp.1-11
    • /
    • 2008
  • This paper detects motion vector characteristics to support efficient content-based video retrieval. Traditionally, the current frame of a video is divided into blocks of equal size and a block matching algorithm (BMA) is used, which predicts the motion of each block from the reference frame along the time axis. However, BMA has several restrictions, and the vectors obtained by BMA sometimes differ from the actual motion. To solve this problem, the full search method has been applied, but it requires a large amount of computation. Thus, as an alternative, this study extracts the spatio-temporal characteristics of motion vectors, called Motion Vector Spatio-Temporal Correlations (MVSTC). As a result, motion vectors can be predicted more accurately using the motion vectors of neighboring blocks. However, because there are multiple reference block vectors, such additional information must be sent to the receiving end. We therefore need to consider how to predict the motion characteristics of each block and how to define an appropriate search range. Based on the proposed algorithm, we examine motion prediction techniques for motion compensation and present the results of applying them.
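
One common way to exploit spatio-temporal motion-vector correlation is to predict a search centre from neighbouring (and, when available, co-located previous-frame) vectors and then refine it within a small window. The numpy sketch below illustrates that idea only; it is not the authors' exact MVSTC algorithm, and the block size, search radius, and names are assumptions.

```python
# Block matching with a spatio-temporally predicted search centre.
import numpy as np

def sad(a, b):
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def predict_mv(candidates):
    """Component-wise median of the available candidate vectors."""
    if not candidates:
        return (0, 0)
    return tuple(np.median(np.array(candidates), axis=0).astype(int))

def block_match(cur, ref, bs=8, radius=2, prev_mvs=None):
    h, w = cur.shape
    mvs = np.zeros((h // bs, w // bs, 2), dtype=int)
    for by in range(h // bs):
        for bx in range(w // bs):
            cand = []
            if bx > 0:               cand.append(tuple(mvs[by, bx - 1]))   # left neighbour
            if by > 0:               cand.append(tuple(mvs[by - 1, bx]))   # top neighbour
            if prev_mvs is not None: cand.append(tuple(prev_mvs[by, bx]))  # temporal
            py, px = predict_mv(cand)
            y0, x0 = by * bs, bx * bs
            block = cur[y0:y0 + bs, x0:x0 + bs]
            best, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):      # small refinement window only
                for dx in range(-radius, radius + 1):
                    ry, rx = y0 + py + dy, x0 + px + dx
                    if 0 <= ry <= h - bs and 0 <= rx <= w - bs:
                        cost = sad(block, ref[ry:ry + bs, rx:rx + bs])
                        if best is None or cost < best:
                            best, best_mv = cost, (py + dy, px + dx)
            mvs[by, bx] = best_mv
    return mvs

cur = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
ref = np.roll(cur, (1, 2), axis=(0, 1))       # reference shifted by (1, 2)
print(block_match(cur, ref)[2, 2])            # interior blocks recover the (1, 2) shift
```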

Energy-efficient Correlated Data Placement Techniques for Multi-disk-based Mobile Systems (다중 디스크 기반 모바일 시스템 대상의 에너지 효율적인 연관 데이타 배치 기법)

  • Kim, Young-Jin;Kwon, Kwon-Taek;Kim, Ji-Hong
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.3
    • /
    • pp.101-112
    • /
    • 2007
  • Hard disks have been the most prevalent secondary storage devices, and their usage is becoming ever more important in mobile computing systems due to I/O-intensive applications such as multimedia applications and games. However, the significant power consumption of disk drives still critically limits the battery lifetime of mobile systems. In this paper, we show that using several smaller disks (instead of one large disk) can be an energy-efficient secondary storage solution on typical mobile platforms without a significant performance penalty. We also propose a novel energy-efficient technique, which clusters related data into groups and migrates the correlated groups to the same disk. We compare this method with the existing data concentration scheme and also combine the two. The experiments show that our technique saves up to 34% of energy consumption when a pair of 1.8" disks is used instead of a single 2.5" disk, with a negligible increase in the average response time. The results also show that our method saves up to 14.8% of disk energy consumption and improves the average I/O response time by up to 10 times over the existing scheme.
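
A hedged sketch of the general idea, clustering files that are co-accessed within a short window and assigning whole clusters to disks, is shown below. The correlation statistic, thresholds, and names are assumptions rather than the paper's actual algorithm.

```python
# Group files that frequently appear together in the access stream, then
# place each group on a single disk so other disks can stay spun down.
from collections import defaultdict

def correlation_groups(access_log, window=5, threshold=3):
    """access_log: ordered list of file ids; two files co-occurring in the
    same sliding window at least `threshold` times are considered correlated."""
    co = defaultdict(int)
    for i in range(len(access_log)):
        for j in range(i + 1, min(i + window, len(access_log))):
            a, b = sorted((access_log[i], access_log[j]))
            if a != b:
                co[(a, b)] += 1
    # Union-find over strongly co-accessed pairs.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), cnt in co.items():
        if cnt >= threshold:
            parent[find(a)] = find(b)
    groups = defaultdict(list)
    for f in set(access_log):
        groups[find(f)].append(f)
    return list(groups.values())

def place_on_disks(groups, n_disks=2):
    """Assign whole groups to disks, largest group first."""
    disks = [[] for _ in range(n_disks)]
    for g in sorted(groups, key=len, reverse=True):
        min(disks, key=len).extend(g)          # least-loaded disk gets the group
    return disks

log = ['a', 'b', 'a', 'b', 'c', 'a', 'b', 'x', 'y', 'x', 'y', 'x', 'y']
print(place_on_disks(correlation_groups(log)))
```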

Hardware-Software Cosynthesis of Multitask Multicore SoC with Real-Time Constraints (실시간 제약조건을 갖는 다중태스크 다중코어 SoC의 하드웨어-소프트웨어 통합합성)

  • Lee, Choon-Seung;Ha, Soon-Hoi
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.9
    • /
    • pp.592-607
    • /
    • 2006
  • This paper proposes a technique to select processors and hardware IPs and to map tasks onto the selected processing elements, aiming to achieve high performance with minimal system cost when multitask applications with real-time constraints run on a multicore SoC. Such a technique is called hardware-software cosynthesis. A cosynthesis technique was already presented in our earlier work [1], where we divide the complex cosynthesis problem into three subproblems and conquer each separately: selection of appropriate processing components, mapping and scheduling of function blocks onto the selected processing components, and schedulability analysis. Despite its good features, our previous technique has a serious limitation: a task monopolizes the entire system resources to obtain the minimum schedule length. In general, however, higher performance may be obtained in a multitask multicore system if independent tasks run concurrently on different processor cores. In this paper, we present two mapping techniques, the task mapping avoidance technique (TMA) and the task mapping pinning technique (TMP), which are applicable to general cases with diverse operating policies in a multicore environment. We obtained significant performance improvements for a multimedia real-time application, a multi-channel digital video recorder system, and for randomly generated multitask graphs obtained from related work.
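
To make the mapping problem concrete, the sketch below shows a toy greedy mapper/scheduler that places each function block on the processing element finishing it earliest and checks the resulting task finish times against the real-time constraints. It is far simpler than the TMA/TMP techniques of the paper, and all names and numbers are illustrative assumptions.

```python
# Toy greedy mapping/scheduling of function blocks onto processing elements.
def greedy_map(tasks, pe_names, exec_time):
    """tasks: {task: [block, ...]} with blocks executed in order.
    exec_time[(block, pe)]: execution time of a block on that PE."""
    pe_free = {pe: 0.0 for pe in pe_names}     # earliest free time per PE
    schedule, finish = {}, {}
    for task, blocks in tasks.items():
        t = 0.0                                # blocks of a task run in sequence
        for blk in blocks:
            # choose the PE on which this block finishes earliest
            pe = min(pe_names,
                     key=lambda p: max(pe_free[p], t) + exec_time[(blk, p)])
            start = max(pe_free[pe], t)
            t = start + exec_time[(blk, pe)]
            pe_free[pe] = t
            schedule[blk] = (pe, start, t)
        finish[task] = t
    return schedule, finish

tasks = {'video_decode': ['vld', 'idct', 'mc'], 'audio_decode': ['huff', 'imdct']}
pes = ['arm', 'dsp']
cost = {('vld', 'arm'): 3, ('vld', 'dsp'): 5, ('idct', 'arm'): 6, ('idct', 'dsp'): 2,
        ('mc', 'arm'): 4, ('mc', 'dsp'): 3, ('huff', 'arm'): 2, ('huff', 'dsp'): 4,
        ('imdct', 'arm'): 5, ('imdct', 'dsp'): 2}
deadline = {'video_decode': 12, 'audio_decode': 10}
sched, fin = greedy_map(tasks, pes, cost)
for task, t in fin.items():
    print(task, 'meets deadline' if t <= deadline[task] else 'misses deadline', t)
```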

Adaptive Discrete Wavelet Transform Based on Block Energy for JPEG2000 Still Images (JPEG2000 정지영상을 위한 블록 에너지 기반 적응적 이산 웨이블릿 변환)

  • Kim, Dae-Won
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.8 no.1
    • /
    • pp.22-31
    • /
    • 2007
  • The algorithm proposed in this paper is based on wavelet decomposition and the energy computation of the composed blocks, so the amount of calculation and the complexity are minimized by adaptively replacing the DWT coefficients and managing resources effectively. We now live in a world of multimedia applications running on digital appliances and mobile devices. Among these applications, digital image compression is a very important technology for digital cameras to store digital images and transmit them to other sites, and JPEG2000 is one of the cutting-edge technologies for compressing still images efficiently. Digital camera technology mainly relies on image compression so that images can be efficiently stored locally and transferred to other sites without loss. The JPEG2000 standard is applicable to processing digital images for storage, transmission, and reception over wired and/or wireless networks. The discrete wavelet transform (DWT) is one of the main differences from previous image compression standards such as JPEG: the DWT is performed on the entire image rather than splitting it into many blocks. Several digital images were tested with this method and restored for comparison with the results of the conventional DWT, which shows that the proposed algorithm obtains better results, without any significant degradation in terms of MSE and PSNR or in the number of zero coefficients, when the energy-based adaptive DWT is applied.
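
The block-energy idea can be illustrated roughly in numpy: perform a Haar DWT, measure the energy of small coefficient blocks in the detail subbands, and zero out low-energy blocks. This is not the paper's JPEG2000 pipeline; the Haar filter, block size, and threshold are assumptions made only for the sketch.

```python
# One-level 2-D Haar DWT followed by block-energy-based coefficient pruning.
import numpy as np

def haar_dwt2(img):
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0       # row low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0       # row high-pass
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def prune_low_energy_blocks(band, block=8, energy_ratio=0.01):
    """Zero out blocks whose energy is below energy_ratio * mean block energy."""
    out = band.copy()
    h, w = band.shape
    energies = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            energies.append((y, x, np.sum(band[y:y+block, x:x+block] ** 2)))
    mean_e = np.mean([e for _, _, e in energies]) if energies else 0.0
    for y, x, e in energies:
        if e < energy_ratio * mean_e:
            out[y:y+block, x:x+block] = 0.0
    return out

img = np.full((64, 64), 128.0)
img[24:40, 24:40] += np.random.rand(16, 16) * 100     # small textured patch
ll, lh, hl, hh = haar_dwt2(img)
lh, hl, hh = (prune_low_energy_blocks(b) for b in (lh, hl, hh))
print('zero coefficients in HH:', int(np.sum(hh == 0)))
```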

Clustering of Web Objects with Similar Popularity Trends (유사한 인기도 추세를 갖는 웹 객체들의 클러스터링)

  • Loh, Woong-Kee
    • The KIPS Transactions:PartD
    • /
    • v.15D no.4
    • /
    • pp.485-494
    • /
    • 2008
  • Huge amounts of various web items such as keywords, images, and web pages are widely available on the Web. The popularity of such web items continuously changes over time, and mining temporal patterns in the popularity of web items is an important problem that is useful for several web applications. For example, temporal patterns in the popularity of search keywords help web search enterprises predict future popular keywords, enabling them to make pricing decisions when marketing search keywords to advertisers. However, the presence of millions of web items makes it difficult to scale up previous techniques for this problem. This paper proposes an efficient method for mining temporal patterns in the popularity of web items. We treat the popularity of a web item as a time series and propose the gap measure to quantify the similarity between the popularities of two web items. To reduce the computational overhead of this measure, an efficient method using the Fast Fourier Transform (FFT) is presented. We do not assume that the popularity of web items follows any particular probabilistic distribution or is periodic. To find clusters of web items with similar popularity trends, we propose to use a density-based clustering algorithm based on the gap measure. Our experiments using the popularity trends of search keywords obtained from the Google Trends web site illustrate the scalability and usefulness of the proposed approach in real-world applications.
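
The overall pipeline (a pairwise time-series similarity accelerated by the FFT, followed by density-based clustering on the resulting distance matrix) can be sketched as below. The paper's gap measure is not reproduced here; an FFT-based normalized cross-correlation distance is used as a stand-in, and the DBSCAN parameters are arbitrary assumptions.

```python
# FFT-accelerated time-series distance + density-based clustering (DBSCAN).
import numpy as np
from sklearn.cluster import DBSCAN

def xcorr_distance(x, y):
    """1 - max normalized cross-correlation, computed via the FFT."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    n = len(x) + len(y) - 1
    fsize = 1 << (n - 1).bit_length()          # next power of two
    corr = np.fft.irfft(np.fft.rfft(x, fsize) * np.conj(np.fft.rfft(y, fsize)), fsize)
    return 1.0 - corr.max() / len(x)

def cluster_popularity(series, eps=0.3, min_samples=2):
    names = list(series)
    m = len(names)
    dist = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            d = max(xcorr_distance(series[names[i]], series[names[j]]), 0.0)
            dist[i, j] = dist[j, i] = d
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric='precomputed').fit_predict(dist)
    return dict(zip(names, labels))

t = np.linspace(0, 4 * np.pi, 128)
series = {'kw_rising_a': np.sin(t) + 0.05 * np.random.randn(128),
          'kw_rising_b': np.sin(t + 0.1) + 0.05 * np.random.randn(128),
          'kw_other':    np.cos(3 * t) + 0.05 * np.random.randn(128)}
print(cluster_popularity(series))
```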

An Efficient Dynamic Network Status Update Mechanism for QoS Routing (QoS 라우팅을 위한 효율적인 동적 네트워크 상태 정보 갱신 방안)

  • Kim, Jee-Hye;Lee, Mee-Jeong
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.1
    • /
    • pp.65-76
    • /
    • 2002
  • QoS routing is a routing technique for finding a feasible path that satisfies the QoS requirements of application programs. Since QoS routing determines such paths based on dynamic network state, it satisfies application requirements and increases network utilization. However, routers incur overhead in exchanging dynamic network state information. To reduce this protocol overhead, a timer-based update mechanism has been proposed in which a router periodically checks changes in network status and exchanges state information only if the change exceeds a certain value. However, a large update period makes routing performance insensitive to the parameters that trigger network state updates at a router. In addition, a large update period may result in inaccurate network state information at routers and cause resource reservation failures. Such failures generate additional overhead to cancel the resource reservations on the partially established path. In this paper, we propose mechanisms that enhance the existing network state update policy with respect to these two problems. The performance of the proposed schemes is evaluated through simulation.
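
A minimal sketch of the timer/threshold-based update policy discussed above is given below: at each timer tick the router re-advertises only those links whose available bandwidth has changed by more than a trigger threshold since the last advertisement. Class and field names are illustrative assumptions.

```python
# Timer- and threshold-triggered link-state update policy (sketch).
class LinkStateUpdater:
    def __init__(self, trigger=0.2):
        self.trigger = trigger            # e.g. 20% relative change
        self.advertised = {}              # link id -> last advertised bandwidth

    def on_timer(self, current_bw):
        """current_bw: {link id: currently available bandwidth}.
        Returns the subset of links whose state should be re-advertised."""
        updates = {}
        for link, bw in current_bw.items():
            old = self.advertised.get(link)
            if old is None or abs(bw - old) / max(old, 1e-9) > self.trigger:
                updates[link] = bw
                self.advertised[link] = bw
        return updates

u = LinkStateUpdater(trigger=0.2)
print(u.on_timer({'r1-r2': 100.0, 'r2-r3': 80.0}))   # first tick: advertise all
print(u.on_timer({'r1-r2': 105.0, 'r2-r3': 40.0}))   # only r2-r3 changed enough
```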

Analysis of Mashup Performances based on Vector Layer of Various GeoWeb 2.0 Platform Open APIs (다양한 공간정보 웹 2.0 플랫폼 Open API의 벡터 레이어 기반 매쉬업 성능 분석)

  • Kang, Jinwon;Kim, Min-soo
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.9 no.4
    • /
    • pp.745-754
    • /
    • 2019
  • As GeoWeb 2.0 technologies become widely used, various services that mash up spatial data and user data are being developed. In particular, spatial information platforms based on GeoWeb 2.0 technologies, such as Google Maps, OpenStreetMap, Daum Map, Naver Map, olleh Map, and VWorld, support mashup services. The mashup service, supported through the Open APIs of these platforms, provides various kinds of spatial data such as 2D maps, 3D maps, and aerial images, and the application fields using it have greatly expanded. Recently, as the amount of user data to be mashed up has grown considerably, mashup performance has become a problem. However, research on improving mashup performance is currently insufficient, and even a comparison of the mashup performance of the various platforms has not been performed. In this paper, we comparatively analyze mashup performance for large amounts of user data and spatial data using the spatial information platforms available in Korea. Specifically, we propose two performance analysis indexes, mashup time and user interaction time, in order to analyze mashup performance efficiently, and we implement a system for the performance analysis. Finally, from the analysis results, we propose a spatial information platform that can be applied efficiently when user data increases greatly and user interaction occurs frequently.
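
The two proposed indexes can be measured with a simple timing harness of the kind sketched below. The functions standing in for the platform Open API calls are hypothetical placeholders (in practice the mashup runs as JavaScript in a browser); only the measurement structure is illustrated.

```python
# Timing harness for the two indexes: mashup time and user interaction time.
import time

def measure(fn, *args, repeat=5):
    """Return the average wall-clock time of fn(*args) over `repeat` runs."""
    samples = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return sum(samples) / repeat

def load_mashup(n_features):          # hypothetical: render n vector features
    time.sleep(0.001 * n_features)    # stand-in for the actual Open API rendering

def pan_and_zoom():                   # hypothetical: one user interaction
    time.sleep(0.01)                  # stand-in for the map refresh

for n in (100, 1000):
    print(f'mashup time ({n} features): {measure(load_mashup, n):.3f} s')
print(f'user interaction time: {measure(pan_and_zoom):.3f} s')
```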

RGB Channel Selection Technique for Efficient Image Segmentation (효율적인 이미지 분할을 위한 RGB 채널 선택 기법)

  • 김현종;박영배
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.10
    • /
    • pp.1332-1344
    • /
    • 2004
  • With the recent development of the information superhighway and multimedia-related technologies, more efficient technologies to transmit, store, and retrieve multimedia data are required. First, in semantic-based image retrieval it is common to annotate images separately in order to attach meanings to the image data, along with low-level property information covering color, texture, and shape. Although semantic-based information retrieval has been carried out using vocabulary dictionaries with given keywords, it has not escaped the limitations of existing keyword-based text information retrieval. The second problem is reduced retrieval performance in content-based image retrieval systems: it is difficult to separate an object from an image with a complex background, and difficult to extract a region because of excessive division of regions; it is also difficult to separate objects from an image containing multiple objects in a complex scene. To solve these problems, this paper establishes a content-based retrieval system that proceeds in five steps. The most critical of these steps extracts, among the R, G, and B channel images, the ones with the largest and the smallest background. In particular, we propose a method that extracts both the object and the background using the channel image with the largest background. To address the second problem, we also propose a method in which multiple objects are separated using an RGB channel selection technique, mitigating the excessive division of regions with the Watermerge threshold value while separating objects through RGB channel separation. The tests show that the proposed methods are superior to existing methods in retrieval performance, to the extent that they can replace methods developed for retrieving complex objects that have until now been difficult to retrieve.
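
A simplified numpy sketch of the channel-selection idea is given below: estimate how much of each R, G, and B channel is background after thresholding, pick the channel with the largest background, and segment with that channel. The thresholding rule and background estimate are simplifications for illustration, not the authors' exact five-step procedure.

```python
# RGB channel selection followed by a simple threshold-based segmentation.
import numpy as np

def channel_background_fraction(channel):
    """Fraction of pixels darker than the channel mean, used as a crude
    background estimate (assumes a bright object on a darker background)."""
    return float(np.mean(channel < channel.mean()))

def segment_by_channel_selection(rgb):
    """rgb: (H, W, 3) uint8 image.  Returns (selected channel index, object mask)."""
    fractions = [channel_background_fraction(rgb[:, :, c]) for c in range(3)]
    best = int(np.argmax(fractions))              # channel with the largest background
    ch = rgb[:, :, best].astype(np.float64)
    mask = ch >= ch.mean()                        # object = brighter-than-mean pixels
    return best, mask

# Synthetic test image: bright square on a dark background, visible mainly in R.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[20:44, 20:44, 0] = 220
best, mask = segment_by_channel_selection(img)
print('selected channel:', best, 'object pixels:', int(mask.sum()))
```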