• Title/Summary/Keyword: Resource Evaluation

Search Results: 85

Optimal Sensor Location in Water Distribution Network using XGBoost Model (XGBoost 기반 상수도관망 센서 위치 최적화)

  • Hyewoon Jang;Donghwi Jung
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.217-217
    • /
    • 2023
  • A water distribution network aims to supply high-quality water to users reliably, and pressure is one of the indicators used to evaluate this. With the recent expansion of smart sensor installations, real-time data-driven analysis using machine learning techniques has become active, which makes the decision of where to collect data, that is, sensor placement, important. This study proposes a methodology for optimizing sensor locations in a large-scale water distribution network using an eXtreme Gradient Boosting (XGBoost) model. XGBoost is an ensemble model built from multiple decision trees that uses boosting, which improves performance by assigning weights according to errors. It supports distributed and parallel processing for optimal use of memory resources, trains quickly, and handles missing values internally without a separate preprocessing step. To determine the independent variables for the model, critical nodes representative of the network are selected by considering the variability and mean of the pressure data. An XGBoost model is built to predict the pressure at these critical nodes, and the optimal sensor locations are selected by considering model performance and feature importance values. Based on this methodology, results are analyzed for networks of various topologies (for example, looped and branched) and varying numbers of nodes, in order to identify trends according to network characteristics. Because the XGBoost model built in this study minimizes additional preprocessing and can be applied easily to large-scale networks, it is expected to be useful, through various combinations of input and output data, not only for sensor placement but also for other performance optimization tasks in water distribution networks.

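The selection step described in the abstract above (train a boosted-tree model to predict a critical node's pressure, then rank candidate nodes by feature importance) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual setup: scikit-learn's GradientBoostingRegressor stands in for XGBoost (both are boosted decision-tree ensembles), and the node count, noise levels, and top-k choice are all made up.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_samples, n_nodes = 500, 10

# Synthetic pressure time series for 10 candidate sensor nodes.
pressures = rng.normal(50.0, 5.0, size=(n_samples, n_nodes))
# Pressure at the "critical node" depends mostly on nodes 2 and 7.
critical = (0.6 * pressures[:, 2] + 0.3 * pressures[:, 7]
            + rng.normal(0.0, 0.5, size=n_samples))

model = GradientBoostingRegressor(random_state=0)
model.fit(pressures, critical)

# Rank candidate nodes by feature importance; the top-k nodes are
# the suggested sensor locations.
k = 2
ranking = np.argsort(model.feature_importances_)[::-1]
sensor_locations = sorted(ranking[:k].tolist())
print(sensor_locations)  # nodes 2 and 7 dominate the prediction
```

In this toy example the importance ranking recovers the two nodes that actually drive the critical node's pressure, which is the intuition behind using feature importance for sensor placement.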

Author Entity Identification using Representative Properties in Linked Data (대표 속성을 이용한 저자 개체 식별)

  • Kim, Tae-Hong;Jung, Han-Min;Sung, Won-Kyung;Kim, Pyung
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.1
    • /
    • pp.17-29
    • /
    • 2012
  • In recent years, Linked Data published under open licenses has shown an increased growth rate and come into the spotlight because of its interoperability and openness, especially in the governments of developed countries. However, there are relatively few out-links compared with the total number of links, and most links refer to a few hub datasets. This occurs because of the absence of technology for identifying entities in Linked Data. In this paper, we present an improved author entity resolution method that uses representative properties. To solve the problems of previous methods, which rely on relations to other entities (owl:sameAs, owl:differentFrom, and so on) or depend on curation, we design and evaluate an automated real-time resolution process based on multiple ontologies that respects each entity's type and logical characteristics so as to verify entity consistency. The evaluation of author entity resolution on 29 confirmed author records shows positive results (the average K-measure is 0.8533).
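The core idea above, deciding whether two author records denote the same person by comparing a small set of representative properties rather than following owl:sameAs links or relying on curation, can be sketched as below. The property names, weights, threshold, and sample records are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: weighted string similarity over representative
# properties decides whether two author records match.
from difflib import SequenceMatcher

REPRESENTATIVE_PROPERTIES = {"name": 0.5, "affiliation": 0.3, "email": 0.2}

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_author(rec1: dict, rec2: dict, threshold: float = 0.8) -> bool:
    # Weighted similarity over the representative properties only.
    score = sum(w * similarity(rec1.get(p, ""), rec2.get(p, ""))
                for p, w in REPRESENTATIVE_PROPERTIES.items())
    return score >= threshold

a = {"name": "Kim, Tae-Hong", "affiliation": "KISTI", "email": "th@kisti.re.kr"}
b = {"name": "Tae-Hong Kim", "affiliation": "KISTI", "email": "th@kisti.re.kr"}
c = {"name": "Kim, Pyung", "affiliation": "KISTI", "email": "pk@kisti.re.kr"}

print(same_author(a, b))  # True: representative properties agree closely
print(same_author(a, c))  # False
```

A real resolution pipeline would of course use ontology-aware comparison per property type; the point here is only that a few representative properties can carry the matching decision.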

3-tag-based Web Image Retrieval Technique (3-태그 기반의 웹 이미지 검색 기법)

  • Lee, Si-Hwa;Hwang, Dae-Hoon
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.9
    • /
    • pp.1165-1173
    • /
    • 2012
  • One of the most popular technologies in Web 2.0 is tagging, and it is widely applied to Web content as well as multimedia data such as images and video. Web users have expected that tags would be reused in information search and would maximize search efficiency, but wrong tags entered by irresponsible Web users have in fact produced incorrect search results. In previous papers, we gathered various information resources and tags scattered across the Web, mapped each tag onto other tags, and clustered these tags according to the correlation between them. In this paper we propose a 3-tag-based search algorithm that uses the clustered tags from those papers. For performance evaluation, the proposed algorithm is compared with the image search results of Flickr, a typical tag-based site, and is evaluated in terms of accuracy and recall.
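The accuracy/recall comparison mentioned at the end of the abstract boils down to the standard set-overlap measures. A minimal sketch, with made-up retrieved and relevant image sets:

```python
# Precision = fraction of retrieved images that are relevant;
# recall = fraction of relevant images that were retrieved.
def precision_recall(retrieved: set, relevant: set) -> tuple:
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical results: images returned for a 3-tag query vs. the
# images a human judged relevant.
retrieved = {"img1", "img2", "img3", "img4"}
relevant = {"img2", "img3", "img5"}

p, r = precision_recall(retrieved, relevant)
print(p, r)  # 0.5 precision, ~0.667 recall
```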

Development of Mobile u-Healthcare System in WSN (무선센서네트워크 환경의 모바일 u-헬스케어 시스템 개발)

  • Lee, Seung-Chul;Chung, Wan-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4C
    • /
    • pp.338-346
    • /
    • 2012
  • Wireless sensor network (WSN) technology provides a variety of medical and healthcare solutions that assist in detecting and communicating body conditions. However, data reliability inside a WSN may affect the healthcare routing protocol because of the limited hardware resources for computation, storage, and communication bandwidth. For this reason, we conducted various wireless communication experiments between nodes, using parameters such as RF strength, battery status, and deployment status, to obtain optimal performance from the mobile healthcare routing protocol. These experiments may also extend the lifetime of the nodes. Performance analysis was done to obtain important parameters in terms of distance and reception rate between nodes. Our experimental results show the optimal distance between nodes according to battery status and RF strength, or according to deployment status and RF strength. The packet reception rate according to the deployment status and RF strength of the nodes was also checked. Based on this performance evaluation, optimized sensor node battery usage and deployment in our developed mobile healthcare routing protocol are proposed.

A Study on the Evaluation of MPEG-4 Video Decoding Complexity for HDTV (HDTV를 위한 MPEG-4 비디오 디코딩 복잡도의 평가에 관한 연구)

  • Ahn, Seong-Yeol;Park, Won-Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.595-598
    • /
    • 2005
  • MPEG-4 Visual is an international standard for object-based video compression, designed to support a wide range of applications from multimedia communication to HDTV. To control the minimum decoding complexity required at the decoder, the MPEG-4 Visual standard defines the so-called video buffering mechanism, which includes three video buffer models. Among them, the VCV (Video Complexity Verifier) defines control of the processing speed for decoding a macroblock; there are two models, VCV and B-VCV, distinguishing boundary and non-boundary MBs. This paper presents an evaluation of decoding complexity obtained by measuring the decoding time of an MB for rectangular and arbitrarily shaped video objects, and for the various types of objects supporting HDTV resolution, using the optimized MPEG-4 Reference Software. The experimental results show that decoding complexity varies depending on the coding type, and that more effective use of decoding resources may be possible.


Detecting Malicious Scripts in Web Contents through Remote Code Verification (원격코드검증을 통한 웹컨텐츠의 악성스크립트 탐지)

  • Choi, Jae-Yeong;Kim, Sung-Ki;Lee, Hyuk-Jun;Min, Byoung-Joon
    • The KIPS Transactions:PartC
    • /
    • v.19C no.1
    • /
    • pp.47-54
    • /
    • 2012
  • Sharing cross-site resources has been adopted by many recent websites in the form of service mashups and social network services. With this change, exploitation of new vulnerabilities increases; these include inserting malicious code into the interaction points between clients and services instead of attacking the websites directly. In this paper, we present a system model that identifies malicious script code in web contents by means of remote verification while web contents downloaded from multiple trusted origins are executed in a client's browser space. Our system classifies verification items according to the origin of the request, based on information about the service code implementation, and stores the verification results in three databases composed of white, gray, and black lists. Through experimental evaluation, we confirmed that our system provides clients with increased security by effectively detecting malicious scripts in the mashup web environment.
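The white/gray/black-list lookup described above can be sketched as a hash-based classification step. This is an illustrative assumption about how such a lookup might work, not the paper's implementation: the list contents, the policy for unknown scripts, and the verdict labels are all made up.

```python
# Hypothetical sketch: scripts are hashed and checked against
# white/gray/black lists populated by a remote verification service.
import hashlib

WHITE, GRAY, BLACK = set(), set(), set()

def script_hash(script: str) -> str:
    return hashlib.sha256(script.encode()).hexdigest()

def verify(script: str) -> str:
    h = script_hash(script)
    if h in BLACK:
        return "block"    # known malicious
    if h in WHITE:
        return "allow"    # verified safe
    if h in GRAY:
        return "inspect"  # previously seen but unverified
    # Unknown scripts go to the gray list pending remote verification.
    GRAY.add(h)
    return "inspect"

WHITE.add(script_hash("console.log('hello');"))
BLACK.add(script_hash("document.location='http://evil.example';"))

print(verify("console.log('hello');"))                     # allow
print(verify("document.location='http://evil.example';"))  # block
print(verify("fetch('/api/data');"))                       # inspect
```

The three-way split matters because the gray list lets the client keep running while verification of never-before-seen scripts happens remotely.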

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.3
    • /
    • pp.127-143
    • /
    • 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These new features facilitate ocean modeling experimentation on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analysis of the performance and features of commercial cloud services for numerical modeling is essential in order to select appropriate systems, as this can help minimize execution time and the amount of resources used. The effect of cache memory is large in the processing structure of an ocean numerical model, which handles input/output of data in multidimensional array structures, and network speed is important because of the communication pattern in which large amounts of data move between nodes. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for the transition of other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes on virtualization-based cloud systems. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory, and that memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small grid sizes. Our analysis results will be helpful as a reference for constructing the best computing system in the cloud to minimize time and cost for numerical ocean modeling.
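The STREAM benchmark mentioned above measures sustained memory bandwidth with simple vector kernels. A toy version of its "triad" kernel can be written in a few lines; the array size and the GB/s formula here are illustrative, and a NumPy loop is far from the tuned C/Fortran benchmark the study actually used.

```python
# Toy STREAM "triad" kernel (a = b + scalar * c): the memory-bound
# pattern whose throughput the real benchmark reports as bandwidth.
import time
import numpy as np

n = 2_000_000
a = np.zeros(n)
b = np.random.rand(n)
c = np.random.rand(n)
scalar = 3.0

t0 = time.perf_counter()
a[:] = b + scalar * c  # triad: two reads + one write per element
elapsed = time.perf_counter() - t0

# Approximate bytes moved: read b, read c, write a.
bytes_moved = 3 * n * a.itemsize
print(f"triad bandwidth ~ {bytes_moved / elapsed / 1e9:.2f} GB/s")
```

Running such a kernel on candidate instances gives a quick first-order comparison of the memory subsystems that, per the study, dominate ROMS performance.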

Analysis of Research Trends in Deep Learning-Based Video Captioning (딥러닝 기반 비디오 캡셔닝의 연구동향 분석)

  • Lyu Zhi;Eunju Lee;Youngsoo Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.35-49
    • /
    • 2024
  • Video captioning technology, a significant outcome of the integration of computer vision and natural language processing, has emerged as a key research direction in the field of artificial intelligence. It aims at automatic understanding and linguistic expression of video content, enabling computers to transform the visual information in videos into textual form. This paper analyzes the research trends in deep learning-based video captioning and categorizes them into four main groups: CNN-RNN-based, RNN-RNN-based, multimodal-based, and Transformer-based models. It explains the concept of each type of video captioning model and discusses their features, pros, and cons. The paper also lists the datasets and performance evaluation methods commonly used in the video captioning field. The datasets encompass diverse domains and scenarios, offering extensive resources for training and validating video captioning models; the discussion of evaluation methods covers the major evaluation indicators and provides practical references for evaluating model performance from various angles. Finally, as future research tasks for video captioning, the paper identifies major challenges that need continued improvement, such as maintaining temporal consistency and accurately describing dynamic scenes, which increase complexity in real-world applications, and presents new tasks to be studied, such as temporal relationship modeling and multimodal data integration.

IT Convergence u-Learning Contents using Agent Based Modeling (에이전트 기반 모델링을 활용한 IT 융합 u-러닝 콘텐츠)

  • Park, Hong-Joon;Kim, Jin-Young;Jun, Young-Cook
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.4
    • /
    • pp.513-521
    • /
    • 2014
  • The purpose of this research is to develop and implement convergent educational contents based on the theoretical background of integrated education, using agent-based modeling in a ubiquitous learning environment. The contents consist of three modules designed according to the trans-disciplinary concept and situated learning theory: a convergent problem presenting module, a resource of knowledge module, and a module for learning agent-based modeling and IT tools. In a satisfaction survey of the implemented contents, out of a total value of 5, the average was 3.86 for effectiveness, 4.13 for convenience, and 3.86 for design, showing that users are generally satisfied. Using these u-learning contents, learners can experience and learn how to solve convergent problems by utilizing IT tools without limitations of device, time, or space. In addition, the proposed structural design of the contents can serve as a guideline for researchers developing convergent educational contents in the future.

An Automated Code Generation for Both Improving Performance and Detecting Error in Self-Adaptive Modules (자가 적응 모듈의 성능 개선과 오류 탐지를 위한 코드 자동 생성 기법)

  • Lee, Joon-Hoon;Park, Jeong-Min;Lee, Eun-Seok
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.9
    • /
    • pp.538-546
    • /
    • 2008
  • Because computing environments are increasingly complex, there are limits to how many system problems an administrator can handle. One response to these limits is for systems to recognize their own situation and adapt to it by themselves, but building such a Self-Adaptive System requires considerable experience and knowledge, and this difficulty has been a persistent problem. This paper proposes a technique that automatically generates the code of a Self-Adaptive System so that such systems can be built more easily. The resulting Self-Adaptive System partially resolves the problems identified in previous research: ineffective, excessive usage of system resources, and incorrect operation caused by external factors such as viruses. To evaluate the proposed approach, we applied it to the file transfer module of a video conferencing system and compared the length of the code, the number of classes created by the developers, and the development time, confirming the effectiveness of the approach.