• Title/Summary/Keyword: Network storage system


Design of Dynamic Buffer Assignment and Message model for Large-scale Process Monitoring of Personalized Health Data (개인화된 건강 데이터의 대량 처리 모니터링을 위한 메시지 모델 및 동적 버퍼 할당 설계)

  • Jeon, Young-Jun; Hwang, Hee-Joung
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.15 no.6, pp.187-193, 2015
  • The ICT healing platform sets goals that include preventing chronic diseases and issuing early disease warnings based on personal information such as bio-signals and life habits. The two-step open system (TOS) was designed as a relay between the healing platform and the storage of personal health data, and it adopted a publish/subscribe (pub/sub) service over large numbers of connections to transmit (monitor) the data-processing process in real time. In the early TOS pub/sub design, however, the same buffer was allocated to every connection for deflate encoding of messages, regardless of connection idling and message type. The dynamic buffer allocation proposed in this study works as follows: the message transmission types of each connection are first queued; features are extracted from each queue, computed, and converted into a vector through tf-idf; the vectors are grouped by k-means clustering; connections assigned to a cluster re-allocate their resources according to that cluster's resource table; the centroid of each cluster selects, in advance, a queuing pattern that represents the cluster and presents it as a resource reference table (encoding efficiency by buffer size); and the proposed design trades off computation resources against network bandwidth for the cluster and feature calculations. This allocates the TOS encoding buffer resources to network connections efficiently and thereby increases the tps (number of real-time data processing and monitoring connections per unit hour) of TOS.
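
A minimal sketch of the clustering step described above, assuming scikit-learn and toy message-type queues; the per-cluster buffer sizes in BUFFER_TABLE are hypothetical stand-ins for the resource reference table, not the authors' values:

```python
# Illustrative sketch (not the authors' implementation): group connections by their
# queued message-type patterns with tf-idf + k-means, then look up a buffer size per cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Each connection's queue is summarized as a space-separated string of message types.
queues = [
    "heartrate heartrate sleep heartrate",   # connection 0
    "steps steps steps heartrate",           # connection 1
    "sleep sleep heartrate sleep",           # connection 2
]

vectors = TfidfVectorizer().fit_transform(queues)                    # one feature vector per connection
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Hypothetical resource table: deflate buffer size (bytes) chosen per cluster,
# e.g. from encoding efficiency measured on each cluster's centroid pattern.
BUFFER_TABLE = {0: 4096, 1: 16384}
for conn_id, cluster in enumerate(kmeans.labels_):
    print(f"connection {conn_id}: cluster {cluster}, deflate buffer {BUFFER_TABLE[cluster]} B")
```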

S-XML Transformation Method for Efficient Distribution of Spatial Information on u-GIS Environment (u-GIS 환경에서 효율적인 공간 정보 유통을 위한 S-XML 변환 기법)

  • Lee, Dong-Wook; Baek, Sung-Ha; Kim, Gyoung-Bae; Bae, Hae-Young
    • Journal of Korea Spatial Information System Society, v.11 no.1, pp.55-62, 2009
  • In the u-GIS environment, spatial data are collected through sensor networks and provided as information that is processed in real time or stored. When information is requested over the Internet by web-based applications, it is transmitted in XML; in particular, when the requested information includes spatial data, documents that can carry spatial data, such as GML and S-XML, are used. In this process, real-time stream data processed in a DSMS are transformed into S-XML documents, and web-based spatial information services receive the S-XML documents over the Internet. Because most spatial application services use an existing spatial DBMS (SDBMS) as their storage system, data must be transformed between S-XML and the SDBMS. This paper proposes an S-XML transformation method that caches spatial data. The proposed method caches the spatial-data part of S-XML when transforming between S-XML and the relational spatial database, and when a transformation involving data in the same region is requested again, it reuses the cached data without additional transformation cost. A performance evaluation shows that the proposed method reduces the cost of transformation between S-XML documents and web-based spatial information services in the u-GIS environment and improves query-processing performance.
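
A minimal sketch of the caching idea, under assumed data shapes (the S-XML record is flattened so that a geometry element holds WKT text, and a region identifier keys the cache); it is not the authors' implementation:

```python
# Reuse the spatial part of a previous S-XML transformation instead of re-converting it.
import xml.etree.ElementTree as ET

_geometry_cache = {}   # region_id -> geometry already converted for the SDBMS

def sxml_to_sdbms(region_id, sxml_text):
    """Return the spatial payload for the SDBMS, converting only on a cache miss."""
    if region_id in _geometry_cache:
        return _geometry_cache[region_id]            # no transformation cost
    root = ET.fromstring(sxml_text)
    # Hypothetical S-XML layout: <record><geometry>POINT(...)</geometry>...</record>
    wkt = root.findtext("geometry")
    _geometry_cache[region_id] = wkt
    return wkt

doc = "<record><geometry>POINT(127.1 37.5)</geometry><temp>23.4</temp></record>"
print(sxml_to_sdbms("cell-42", doc))   # converts and caches
print(sxml_to_sdbms("cell-42", doc))   # served from the cache
```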


Pre-Filtering based Post-Load Shedding Method for Improving Spatial Queries Accuracy in GeoSensor Environment (GeoSensor 환경에서 공간 질의 정확도 향상을 위한 선-필터링을 이용한 후-부하제한 기법)

  • Kim, Ho; Baek, Sung-Ha; Lee, Dong-Wook; Kim, Gyoung-Bae; Bae, Hae-Young
    • Journal of Korea Spatial Information System Society, v.12 no.1, pp.18-27, 2010
  • In the u-GIS environment, a GeoSensor environment requires that dynamic data captured from various sensors be fused with static 2D or 3D feature information. GeoSensors, the core of this environment, are scattered sporadically over a wide area and continuously produce data of arbitrary volume. As a result, the restricted memory of a DSMS can be exceeded. Many studies have addressed this problem; the typical methods are random load shedding, semantic load shedding, and sampling. Random load shedding chooses and deletes data at random. Semantic load shedding prioritizes data and deletes lower-priority data first. Sampling computes a sampling rate with statistical operations and sheds load accordingly. However, these traditional methods give low accuracy because they do not consider spatial characteristics. This paper proposes a pre-filtering based post-load shedding method to improve the accuracy of spatial queries and to restrict load shedding in the DSMS. The method first limits unnecessary growth of load in the stream queue through pre-filtering, and then performs post-load shedding that considers both data and spatial status to preserve the accuracy of the result. The proposed method effectively reduces the number of load-shedding operations and improves the accuracy of spatial queries.
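
A small illustrative sketch of the two stages, under assumptions: tuples are (x, y, value) points, the query window and queue capacity are hypothetical, and distance from the window centre stands in for the paper's spatial-status criterion when choosing shedding victims:

```python
# Pre-filtering keeps irrelevant tuples out of the stream queue; post-load shedding
# drops the spatially least relevant tuples once the queue exceeds its capacity.
from collections import deque

QUERY_WINDOW = (0, 0, 100, 100)    # xmin, ymin, xmax, ymax of the spatial query (assumed)
QUEUE_CAPACITY = 1000
stream_queue = deque()

def pre_filter(tup):
    """Drop tuples that can never contribute to the spatial query (pre-filtering)."""
    x, y, _ = tup
    xmin, ymin, xmax, ymax = QUERY_WINDOW
    return xmin <= x <= xmax and ymin <= y <= ymax

def post_load_shed():
    """On overflow, shed tuples farthest from the window centre first (assumed heuristic)."""
    cx = (QUERY_WINDOW[0] + QUERY_WINDOW[2]) / 2
    cy = (QUERY_WINDOW[1] + QUERY_WINDOW[3]) / 2
    while len(stream_queue) > QUEUE_CAPACITY:
        victim = max(stream_queue, key=lambda t: (t[0] - cx) ** 2 + (t[1] - cy) ** 2)
        stream_queue.remove(victim)

def on_arrival(tup):
    if pre_filter(tup):
        stream_queue.append(tup)
        post_load_shed()
```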

Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

  • 박상현; 원정임; 윤지희; 김상욱
    • Journal of KIISE: Databases, v.31 no.5, pp.468-478, 2004
  • It is essential in various application areas of data mining and bioinformatics to retrieve occurrences of interesting patterns from sequence databases effectively. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g., a router). A typical query to find the temporal causal relationships among network events is as follows: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUP and subsequently followed by TCPConnectionClose, under the constraint that the interval between the first two events is not larger than 20 seconds and the interval between the first and third events is not larger than 40 seconds.' This paper proposes an indexing method that can answer such queries efficiently. Unlike previous methods that rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, which is proven to be efficient both in storage and in search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the multi-dimensional spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of the event type Ei in W, where n is the number of event types that can occur in the system of interest. The problem of the 'dimensionality curse' may arise when n is large, so dimension selection or event-type grouping is used to avoid it. Experimental results show that the proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
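
A short sketch of how such window vectors could be built from a timestamped event sequence; the window length, event-type list, and the use of infinity for absent types are assumptions made for illustration:

```python
# Build the n-dimensional index entry for one sliding window: the i-th element is the
# interval between the first event of the window and the first occurrence of event type Ei.
EVENT_TYPES = ["CiscoDCDLinkUp", "MLMStatusUP", "TCPConnectionClose"]   # n = 3
WINDOW = 40                       # sliding-window length in seconds (assumed)
MISSING = float("inf")            # placeholder when a type does not occur inside the window

def window_vector(events, start_index):
    """events: list of (type, ts) sorted by ts; returns one index entry for the
    window starting at events[start_index]."""
    _, t0 = events[start_index]
    vec = {e: MISSING for e in EVENT_TYPES}
    for etype, ts in events[start_index:]:
        if ts - t0 > WINDOW:
            break
        if vec.get(etype, 0) == MISSING:     # keep only the first occurrence of each type
            vec[etype] = ts - t0
    return [vec[e] for e in EVENT_TYPES]

seq = [("CiscoDCDLinkUp", 0), ("MLMStatusUP", 15), ("TCPConnectionClose", 35)]
print(window_vector(seq, 0))    # [0, 15, 35] -> satisfies the example query's 20 s / 40 s constraints
```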

CO2 Exchange in Kwangneung Broadleaf Deciduous Forest in a Hilly Terrain in the Summer of 2002 (2002년 여름철 경사진 광릉 낙엽 활엽수림에서의 이산화탄소 교환)

  • Choi, Tae-jin; Kim, Joon; Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology, v.5 no.2, pp.70-80, 2003
  • We report the first direct measurement of $CO_2$ flux over the Kwangneung broadleaf deciduous forest, one of the tower flux sites in the KoFlux network. An eddy covariance system was installed on a 30 m tower along with other meteorological instruments from June to August 2002. Although the study site was non-ideal (with valley-like terrain), turbulence characteristics for the limited wind directions (i.e., 90$\pm$45$^{\circ}$) were not significantly different from those obtained over simple, homogeneous terrain with an ideal fetch. Despite a very low rate of data retrieval, the preliminary results of our analysis are encouraging and worthy of further investigation. Ignoring the role of advection terms, the averaged net ecosystem exchange (NEE) of $CO_2$ ranged from -1.2 to 0.7 mg m$^{-2}$ s$^{-1}$ from June to August 2002. The effect of weak turbulence on nocturnal NEE was examined in terms of friction velocity (u*) along with the estimation of the storage term. The effect of low u* on nocturnal NEE was obvious, with a threshold value of about 0.2 m s$^{-1}$. The contribution of the storage term to nocturnal NEE was insignificant, suggesting that the $CO_2$ stored within the forest canopy at night was probably removed by drainage flow along the hilly terrain; this could also be an artifact of the uncertainty in calculating the storage term from a single-level concentration. Hyperbolic light-response curves explained more than 80% of the variation in the observed NEE, indicating that $CO_2$ exchange at the site was notably light-dependent. Such a relationship can be used effectively to fill the gaps in the NEE data through the season. Finally, a simple scaling analysis based on a linear flow model suggested that advection might play a significant role in evaluating NEE at this site.
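
For the gap-filling step, one common form of the hyperbolic (rectangular-hyperbola) light-response curve can be fitted to observed NEE; the exact functional form and parameter values used by the authors are not given in the abstract, so the sketch below, with invented sample data, is illustrative only:

```python
# Fit a rectangular-hyperbola light-response curve to NEE and use it to fill a gap.
import numpy as np
from scipy.optimize import curve_fit

def light_response(par, alpha, p_max, r_d):
    """NEE (mg m-2 s-1): light-saturating uptake plus daytime respiration."""
    return -(alpha * par * p_max) / (alpha * par + p_max) + r_d

par_obs = np.array([50, 200, 400, 800, 1200, 1600], dtype=float)   # PAR, umol m-2 s-1 (toy values)
nee_obs = np.array([0.1, -0.3, -0.6, -0.9, -1.0, -1.1])            # NEE, mg m-2 s-1 (toy values)

popt, _ = curve_fit(light_response, par_obs, nee_obs, p0=(0.01, 1.5, 0.2))
nee_filled = light_response(np.array([600.0]), *popt)   # estimate NEE for a missing half-hour
print(popt, nee_filled)
```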

An Efficient P2P Based Proxy Patching Scheme for Large Scale VOD Systems (대규모 VOD 시스템을 위한 효율적인 P2P 기반의 프록시 패칭 기법)

  • Kwon, Chun-Ja; Choi, Hwang-Kyu
    • The KIPS Transactions: Part A, v.12A no.5 s.95, pp.341-354, 2005
  • The main bottleneck of large-scale VOD systems is the bandwidth of storage or network I/O caused by the large number of simultaneous client requests, so efficient techniques are required to solve this bottleneck. Patching is one of the most efficient techniques for overcoming the bottleneck of a VOD system through the use of multicast. In this paper, we propose a new patching scheme, called P2P proxy patching, which improves the typical patching technique by jointly using prefix caching and a P2P proxy. In the proposed scheme, each client acts as a proxy that multicasts a regular stream to other clients requesting the same video stream. Owing to the P2P proxy and prefix caching, client requests that arrive outside the patching window range can receive the regular stream from other clients in the previous patching group, without the VOD server allocating new regular channels to those clients. In the performance study, we show that our patching scheme reduces the server bandwidth requirement by about $33\%$ compared with the existing patching technique, with respect to prefix size and request interval.
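
A toy admission sketch of the decision logic implied above (a single video, a fixed patching window, and the latest regular-stream group remembered); the names and the simplified rule are assumptions, not the authors' scheduler:

```python
# Decide whether a new request joins the ongoing multicast with a patch, takes the
# regular stream from a peer of the previous group, or opens a new server channel.
PATCH_WINDOW = 60    # seconds: requests this close to a regular stream get a patch (assumed)

class Scheduler:
    def __init__(self):
        self.last_regular_start = None    # start time of the latest regular stream
        self.last_group_peers = []        # clients relaying that stream (P2P proxies)

    def admit(self, client, now):
        if self.last_regular_start is not None and now - self.last_regular_start <= PATCH_WINDOW:
            # Inside the patching window: share the ongoing regular stream and fetch only
            # the missed prefix (patch), which the proxy holds through prefix caching.
            return {"regular": "join multicast", "patch": f"0..{now - self.last_regular_start:.0f}s"}
        # Outside the window: take the regular stream from a peer of the previous
        # patching group if one exists, instead of opening a new server channel.
        source = self.last_group_peers[0] if self.last_group_peers else "VOD server"
        self.last_regular_start, self.last_group_peers = now, [client]
        return {"regular": source, "patch": None}

s = Scheduler()
print(s.admit("c1", 0))     # first client: regular stream from the server
print(s.admit("c2", 30))    # within window: joins the multicast and receives a patch
print(s.admit("c3", 200))   # outside window: regular stream from peer c1
```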

Effects of Equivalent Weight of Epoxy Resins and Content of Catalyst on the Curing Reaction in Cationic Catalyst/Epoxy Cure System (양이온 촉매/에폭시 경화계에서 에폭시 수지의 당량 및 촉매 함량이 경화반응에 미치는 영향)

  • Kim, Youn Cheol; Park, Soo-Jin; Lee, Jae-Rock
    • Applied Chemistry for Engineering, v.8 no.6, pp.960-966, 1997
  • The effects of epoxy resin type and catalyst content on the cure characteristics, thermal properties, and rheological properties of a catalytic (N-benzylpyrazinium hexafluoroantimonate, BPH) epoxy thermosetting system were studied by FT-IR, DSC, and dynamic viscometry. Compared with the DSC results for DGEBF containing 0.5 wt% BPH, the DSC thermograms of DGEBA containing 0.5 wt% BPH indicated a faster reaction, and the conversion rate of DGEBA/BPH was higher in the initial stage of the reaction. As the concentration of BPH increases, the reaction and conversion rates show similar values in both cases. The influence of the hydroxyl groups of the epoxy resin on the gel point, defined as the crossover point of the storage modulus (G') and loss modulus (G"), can be explained by the formation of a three-dimensional network in the initial stage owing to the curing reaction between the epoxides and the hydroxyl groups of the epoxy resin. This was consistent with the gel point obtained from DSC, FT-IR, and the moduli crossover. The activation energies (Et) obtained from the crossover point (G'/G"=1) were $31-39kJ.mol^{-1}$ for the various BPH compositions in the two epoxy systems.
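
As a small illustration of the moduli-crossover criterion, the gel time can be estimated as the point where G'/G" reaches 1; the rheology data below are toy values for the example, not from the paper:

```python
# Estimate the gel point from the G'/G" crossover by linear interpolation.
import numpy as np

time_s   = np.array([0, 60, 120, 180, 240, 300], dtype=float)
g_prime  = np.array([10, 40, 150, 600, 2500, 9000], dtype=float)   # storage modulus G', Pa
g_dprime = np.array([80, 160, 320, 600, 1100, 1800], dtype=float)  # loss modulus G", Pa

ratio = g_prime / g_dprime
idx = np.argmax(ratio >= 1.0)                       # first sample at or past the crossover
t_gel = np.interp(1.0, ratio[idx - 1:idx + 1], time_s[idx - 1:idx + 1])
print(f"estimated gel point: {t_gel:.0f} s")
```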


Assessment of Agricultural Water Supply Capacity Using MODSIM-DSS Coupled with SWAT (SWAT과 MODSIM-DSS 모형을 연계한 금강유역의 농업용수 공급능력 평가)

  • Ahn, So Ra; Park, Geun Ae; Kim, Seong Joon
    • KSCE Journal of Civil and Environmental Engineering Research, v.33 no.2, pp.507-519, 2013
  • This study evaluates the agricultural water supply capacity of the Geum river basin (9,865 $km^2$), one of the five major river basins of South Korea, using the MODSIM-DSS (MODified SIMyld-Decision Support System) model. The model is a generalized river-basin decision support system and network flow model developed at Colorado State University, designed specifically to meet the growing demands and pressures of river basin management. The model was set up by dividing the basin into 14 subbasins; the irrigation facilities, namely agricultural reservoirs, pumping stations, diversions, culverts, and groundwater wells, were grouped and networked within each subbasin, and the subbasins were networked with one another, including municipal and industrial water supplies. To prepare the inflows to the agricultural reservoirs and multipurpose dams, the Soil and Water Assessment Tool (SWAT) was calibrated using six years (2005-2010) of observed dam inflow and storage data. An eight-year MODSIM run from 2004 to 2011 showed that agricultural water shortages occurred during the drought years of 2006, 2008, and 2009, calculated as 282 $10^6m^3$, 286 $10^6m^3$, and 329 $10^6m^3$, respectively.
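
MODSIM allocates water by solving a network-flow problem over the basin network; the toy below (using networkx, with invented numbers and priorities expressed as arc costs) only illustrates that idea and is not the study's model:

```python
# Meet municipal and agricultural demands from a reservoir by minimum-cost flow,
# with water-use priority expressed through arc weights.
import networkx as nx

G = nx.DiGraph()
G.add_node("reservoir", demand=-100)     # 100 units of supply (negative demand = source)
G.add_node("agriculture", demand=60)
G.add_node("municipal", demand=30)
G.add_node("spill", demand=10)           # whatever is left flows downstream
G.add_edge("reservoir", "municipal", weight=1, capacity=40)     # higher priority: cheaper arc
G.add_edge("reservoir", "agriculture", weight=2, capacity=80)
G.add_edge("reservoir", "spill", weight=10, capacity=100)

flow = nx.min_cost_flow(G)
print(flow["reservoir"])    # e.g. {'municipal': 30, 'agriculture': 60, 'spill': 10}
```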

A Design of Authentication Mechanism for Secure Communication in Smart Factory Environments (스마트 팩토리 환경에서 안전한 통신을 위한 인증 메커니즘 설계)

  • Joong-oh Park
    • Journal of Industrial Convergence, v.22 no.4, pp.1-9, 2024
  • Smart factories are production facilities in which cutting-edge information and communication technologies are fused with manufacturing processes, reflecting rapid advancements and changes in the global manufacturing sector. They integrate robotics and automation, the Internet of Things (IoT), and artificial intelligence technologies to maximize production efficiency in various manufacturing environments. However, the smart factory environment is exposed to security threats and vulnerabilities through various attack techniques. When security threats materialize in smart factories, they can lead to financial losses, damage to corporate reputation, and even human casualties, so an appropriate security response is necessary. This paper therefore proposes a security authentication mechanism for safe communication in the smart factory environment. The components of the proposed mechanism are smart devices, an internal operation management system, an authentication system, and a cloud storage server. The smart device registration process, the authentication procedure, and the anomaly detection and update procedures were designed in detail. The safety of the proposed authentication mechanism was analyzed, and a performance comparison with existing authentication mechanisms confirmed an efficiency improvement of approximately 8%. The paper also presents directions for future research on lightweight protocols and security strategies for applying the proposed technology, aiming to further enhance security.
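
The abstract does not detail the protocol, so the following is a purely illustrative challenge-response sketch between a smart device and an authentication system, using a key shared at registration time; none of it should be read as the authors' mechanism:

```python
# Generic device registration and HMAC challenge-response authentication (illustrative only).
import hmac, hashlib, os

registered_devices = {}    # device_id -> shared key, held by the authentication system

def register(device_id):
    key = os.urandom(32)
    registered_devices[device_id] = key
    return key                                # delivered to the device over a secure channel

def authenticate(device_id, respond):
    """respond(challenge) is the device-side function; returns True on success."""
    challenge = os.urandom(16)
    expected = hmac.new(registered_devices[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, respond(challenge))

key = register("sensor-07")
device_side = lambda ch: hmac.new(key, ch, hashlib.sha256).digest()
print(authenticate("sensor-07", device_side))   # True
```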

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun; Jeong, Kyo Sung; Kim, Soo Dong
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.39-58, 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big-data processing and analysis has become essential. Visualization delivers the results of analysis to people effectively and clearly, and it also serves as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To ease development and maintenance, these GUI parts should be loosely coupled from the parts that process and analyze data; implementing such a loosely coupled architecture calls for design patterns such as MVC (Model-View-Controller), which minimizes coupling between the UI part and the data-processing part. Big data can be classified as structured or unstructured, and structured data are relatively easy to visualize compared with unstructured data. Nevertheless, as the use and analysis of unstructured data has spread, visualization systems are usually developed anew for each project to overcome the limitations of traditional visualization systems built for structured data. Visualization is even more difficult for text data, which make up a large part of unstructured data, because the technologies for analyzing text, such as linguistic analysis, text mining, and social network analysis, are complex and not standardized. This makes it harder to reuse the visualization system of one project in another; we assume the reason is a lack of commonality in the design of visualization systems with expansion to other systems in mind. In this research, we suggest a common information model for visualizing text data and propose a comprehensive, reusable framework, TexVizu, for visualizing text data. We first survey representative research in text visualization, identify common elements and patterns across various cases, and review and analyze them from three viewpoints: structural, interactive, and semantic. We then design an integrated model of text data that represents the elements needed for visualization. The structural viewpoint identifies structural elements of text documents, such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements extracted from linguistic analysis of the text, represented as tags that classify entity types such as people, place or location, time, and event. We then extract and select common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, the data, and the goal (what to know). These are the key common requirements for designing a framework in which the visualization system remains loosely coupled from the data-processing or analysis system.
Finally, we designed TexVizu, a common text visualization framework that is reusable and extensible across visualization projects: it collaborates with various Text Data Loaders and Analytical Text Data Visualizers via common interfaces such as ITextDataLoader and IATDProvider, and it comprises the Analytical Text Data Model, Analytical Text Data Storage, and Analytical Text Data Controller. In this framework, the external components are specified by the interfaces required to collaborate with it. As an experiment, we adopted this framework in two text visualization systems: a social opinion mining system and an online news analysis system.
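
A sketch of how the framework's plug-in points could look in code; the abstract names the interfaces (ITextDataLoader, IATDProvider) and the Analytical Text Data components but not their methods, so the signatures below are assumptions:

```python
# Assumed plug-in interfaces and a controller that keeps loaders/providers decoupled from views.
from abc import ABC, abstractmethod

class ITextDataLoader(ABC):
    """Loads raw text documents from a project-specific source (e.g. news, SNS)."""
    @abstractmethod
    def load(self) -> list[dict]: ...

class IATDProvider(ABC):
    """Turns raw documents into the Analytical Text Data model consumed by visualizers."""
    @abstractmethod
    def provide(self, documents: list[dict]) -> dict: ...

class AnalyticalTextDataController:
    """Controller of the MVC triad: mediates between data plug-ins and the view layer."""
    def __init__(self, loader: ITextDataLoader, provider: IATDProvider):
        self.loader, self.provider = loader, provider
        self.storage = {}                         # stands in for Analytical Text Data Storage

    def refresh(self) -> dict:
        self.storage["atd"] = self.provider.provide(self.loader.load())
        return self.storage["atd"]                # handed to an Analytical Text Data Visualizer
```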