• Title/Summary/Keyword: Resource Discovery

A New Deadlock Detection Mechanism in Wormhole Networks (웜홀 네트웍을 위한 새로운 교착상태 발견 기법)

  • Lee, Su-Jung
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.5_6
    • /
    • pp.280-289
    • /
    • 2003
  • Deadlock recovery-based routing algorithms in wormhole networks have gained attention due to their low hardware complexity and high routing adaptability. Progressive deadlock recovery techniques require a few dedicated resources to transmit deadlocked packets rather than killing them. Selection of deadlocked packets is primarily based on a time-out value, which should be carefully determined in consideration of various traffic patterns and packet lengths. By their nature, current time-out based techniques are accompanied by a non-negligible number of false deadlock detections, especially in a heavily loaded network or with long packets. Moreover, when a deadlock occurs, more than one packet may be marked as deadlocked, which saturates the resources allocated for recovery. This paper proposes a more accurate deadlock detection scheme that does not use a time-out to declare deadlock. The proposed scheme considerably reduces the probability of detecting false deadlocks. Furthermore, a single message is selected as deadlocked for each cycle of blocked messages, thereby eliminating recovery overheads.
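
As a concrete illustration of the idea in this abstract, the sketch below detects a cycle among blocked messages in a wait-for graph and marks exactly one victim per cycle, in contrast to time-out based marking. This is a hypothetical Python sketch, not the paper's mechanism; the data structures and the victim-selection rule are assumptions.

```python
# Hypothetical sketch: find a cycle of blocked messages in a wait-for
# graph and mark a single victim per cycle, avoiding the resource
# saturation caused by time-out based multiple detections.

def find_cycle(wait_for):
    """wait_for maps a blocked message id to the message id holding the
    channel it waits on. Returns one cycle as a list of ids, or None."""
    visited = set()
    for start in wait_for:
        if start in visited:
            continue
        path, seen_on_path = [], {}
        node = start
        while node is not None and node not in visited:
            if node in seen_on_path:              # found a cycle
                return path[seen_on_path[node]:]
            seen_on_path[node] = len(path)
            path.append(node)
            node = wait_for.get(node)
        visited.update(path)
    return None

def select_victims(wait_for):
    """Mark exactly one message per cycle of blocked messages."""
    victims = []
    graph = dict(wait_for)
    while (cycle := find_cycle(graph)) is not None:
        victim = min(cycle)        # deterministic choice, e.g. lowest id
        victims.append(victim)
        graph.pop(victim)          # breaking the cycle unblocks the rest
    return victims

# Example: messages 1 -> 2 -> 3 -> 1 form a cycle; 4 waits on 2 but is
# not itself deadlocked, so only one victim is selected.
print(select_victims({1: 2, 2: 3, 3: 1, 4: 2}))   # [1]
```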

A Hybrid Clustering Technique for Processing Large Data (대용량 데이터 처리를 위한 하이브리드형 클러스터링 기법)

  • Kim, Man-Sun;Lee, Sang-Yong
    • The KIPS Transactions:PartB
    • /
    • v.10B no.1
    • /
    • pp.33-40
    • /
    • 2003
  • Data mining plays an important role in the knowledge discovery process, and various data mining algorithms can be selected for a specific purpose. Most traditional hierarchical clustering methods are suitable for processing small data sets, so they have difficulty handling large data sets because of limited resources and insufficient efficiency. In this study we propose a hybrid neural-network clustering technique, called PPC (Pre-Post Clustering), that can be applied to large data sets and can find unknown patterns. PPC combines an artificial intelligence method, the SOM, with a statistical method, hierarchical clustering, and clusters data in two stages. In the pre-clustering stage, PPC digests large data sets using the SOM. In the post-clustering stage, PPC computes similarity values from cohesive distances, which capture internal features, and adjacent distances, which capture external distances between clusters. Finally, PPC clusters large data sets using these similarity values. Experiments with UCI repository data showed that PPC achieved better cohesion values than the other clustering techniques.
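
The general pre/post pattern the abstract describes can be sketched as follows: compress the large data set onto a SOM grid, then hierarchically cluster the few SOM prototypes. This is only a rough illustration under assumed parameters; it uses the third-party minisom package and SciPy, and it is not the authors' PPC similarity computation.

```python
# Rough sketch of "pre-cluster with a SOM, then hierarchically cluster
# the prototypes". Grid size, iteration counts, and cluster count are
# illustrative assumptions.
import numpy as np
from minisom import MiniSom                      # pip install minisom
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 4))              # stand-in for a large data set

# Pre-clustering: digest the large data set onto a small 8x8 SOM grid.
som = MiniSom(8, 8, data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, 1000)
prototypes = som.get_weights().reshape(-1, data.shape[1])   # 64 codebook vectors

# Post-clustering: hierarchical clustering over the few prototypes,
# which is cheap, instead of over all 10,000 original points.
Z = linkage(prototypes, method="ward")
proto_labels = fcluster(Z, t=5, criterion="maxclust")       # 5 final clusters

def cluster_of(x):
    """Map an original point to its winning prototype's cluster."""
    i, j = som.winner(x)
    return proto_labels[i * 8 + j]

labels = np.array([cluster_of(x) for x in data])
```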

A Study on the Exploration Device of the Disaster Site Using Drones (드론을 이용한 재난 현장 탐사 장치에 대한 연구)

  • Nam, Kang-Hyun;Jang, Min-Seok
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.3
    • /
    • pp.579-586
    • /
    • 2019
  • The purpose of this study is to enable rapid life-saving with drones when natural disasters such as earthquakes and fires occur. Drones equipped with lidar, temperature, and hazardous gas sensors and a wireless camera are registered with the application server for monitoring the disaster site, and real-time monitoring is performed to assess the situation on site before rescue personnel go into action. When monitoring finds a person to rescue, the application server provides real-time image information for effective life-saving.
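
A minimal, hypothetical sketch of the reporting loop this abstract implies is shown below: the drone posts its sensor readings and a camera frame reference to the application server for real-time monitoring. The endpoint, field names, and payload format are all invented for illustration.

```python
# Hypothetical drone-to-server telemetry report; not the paper's system.
import json, time, urllib.request

SERVER = "http://monitor.example.org/api/telemetry"   # invented endpoint

def report(drone_id, lidar_m, temp_c, gas_ppm, frame_url):
    payload = json.dumps({
        "drone_id": drone_id,
        "lidar_distance_m": lidar_m,     # ranging toward obstacles/terrain
        "temperature_c": temp_c,         # fire detection cue
        "hazard_gas_ppm": gas_ppm,       # hazardous gas concentration
        "camera_frame": frame_url,       # wireless camera frame reference
        "ts": time.time(),
    }).encode()
    req = urllib.request.Request(SERVER, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status               # server acknowledges the update
```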

A Test Case Generation Method for Data Distribution System of Submarine (잠수함 데이터 분산 시스템을 위한 테스트 케이스 생성 기법)

  • Son, Suik;Kang, Dongsu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.4
    • /
    • pp.137-144
    • /
    • 2019
  • Testing maturity is critical for a system under development when experience and skills in acquiring the weapon system are lacking. Defects have a huge impact on important system operations. Sharing real-time information leads to rapid command and mission capability in a submarine. The DDS (Data Distribution System) is a very important information-sharing system and an interface between various manufacturers and data formats. In this paper, we analyze the data distribution characteristics of a distributed data system to group data-specific systems, and we propose a test case generation method using postorder and preorder path searches, which are tree traversals, within a path testing method. The proposed method reduces testing resources by 73.7% compared to existing methods.
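
To make the traversal idea concrete, the sketch below generates test sequences from a hypothetical tree of data-grouped subsystems using preorder and postorder walks. The tree contents and test-case format are assumptions, not the paper's submarine DDS model.

```python
# Illustrative test-sequence generation via preorder/postorder traversal
# over an invented tree of data-grouped subsystems.

class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

def preorder(node):
    yield node.name                  # visit parent before its children
    for child in node.children:
        yield from preorder(child)

def postorder(node):
    for child in node.children:      # visit children before their parent
        yield from postorder(child)
    yield node.name

# Hypothetical grouping of data-specific subsystems under a DDS root.
dds = Node("DDS", [
    Node("navigation", [Node("gps"), Node("inertial")]),
    Node("sonar",      [Node("active"), Node("passive")]),
])

# Each traversal order yields one path-based sequence of steps to
# exercise; together they cover parent-first and child-first paths.
test_cases = {
    "preorder":  list(preorder(dds)),
    "postorder": list(postorder(dds)),
}
print(test_cases["preorder"])   # ['DDS', 'navigation', 'gps', ...]
```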

A Study of User Interests and Tag Classification related to resources in a Social Tagging System (소셜 태깅에서 관심사로 바라본 태그 특징 연구 - 소셜 북마킹 사이트 'del.icio.us'의 태그를 중심으로 -)

  • Bae, Joo-Hee;Lee, Kyung-Won
    • Korean HCI Society: Conference Proceedings
    • /
    • 2009.02a
    • /
    • pp.826-833
    • /
    • 2009
  • Currently, the rise of social tagging is changing taxonomy into folksonomy. Tags represent a new approach to organizing information. Nonhierarchical classification allows data to be freely gathered, allows easy access, and makes it possible to move directly to other content topics. Tags are expected to play a key role in clustering various types of content and to expand into networks built around the common interests of users. This paper first determines the relationships among users, tags, and resources in a social tagging system, and examines what aspects users consider when creating tags related to the features of websites. To that end, this study uses tags from the social bookmarking service 'del.icio.us' to analyze the features of the tag words applied when adding a new web page to a list. Website features are classified into seven items, referred to here as the tag classification related to resources. Experiments were conducted to test the proposed classification method in the areas of music, photography, and games. This paper investigates the perspective from which users apply a tag to a web page and establishes the potential for expanding a social service that offers the opportunity to create a new business model.
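
A toy sketch of this kind of tag classification follows: mapping a bookmark's tags onto a fixed set of resource-related categories. The category names and cue words below are invented placeholders, not the paper's seven-item taxonomy.

```python
# Toy tag classifier: count which invented categories a bookmark's
# tags fall into. Categories and cue words are placeholders.
from collections import Counter

CATEGORIES = {
    "topic":   {"music", "photography", "games"},
    "format":  {"video", "mp3", "pdf", "blog"},
    "task":    {"howto", "tutorial", "reference"},
    "opinion": {"cool", "fun", "awesome"},
    "self":    {"toread", "todo", "mystuff"},
}

def classify(tags):
    counts = Counter()
    for tag in tags:
        for category, cues in CATEGORIES.items():
            if tag.lower() in cues:
                counts[category] += 1
    return counts

print(classify(["music", "mp3", "toread", "cool"]))
# Counter({'topic': 1, 'format': 1, 'self': 1, 'opinion': 1})
```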

Implementation of an Application System using Middleware and Context Server for Handling Context-Awareness (상황인식 처리를 위한 미들웨어 및 컨텍스트 서버를 이용한 응용시스템의 구현)

  • Shim Choon-Bo;Tae Bong-Sub;Chang Jae-Woo;Kim Jeong-Ki;Park Seung-Min
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.1
    • /
    • pp.31-42
    • /
    • 2006
  • Context-awareness is a technology that facilitates information acquisition and execution by supporting interoperability between users and devices based on the users' context. It is one of the most important technologies in ubiquitous computing. In this paper, we propose a middleware and a context server for handling context-awareness in ubiquitous computing, and we implement an application system using them. The proposed middleware plays an important role in recognizing a moving node by means of Bluetooth wireless communication, as well as in executing an appropriate execution module according to the context acquired from the context server. In addition, the proposed context server functions as a manager that efficiently stores context information, such as a user's current status, the physical environment, and the resources of a computing system, into a database server. Finally, the application system implemented in our work provides a music playing service based on context information, and it verifies the usefulness of both the middleware and the context server developed in our work.
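
A minimal sketch of the two roles described, under assumed schemas, might look like the following: a context server persisting context rows to a database, and a middleware hook that selects an execution module (here, a stand-in for the music playing service) from the stored context. The table layout and rule are invented.

```python
# Minimal sketch: context server persists context; a middleware hook
# picks an execution module based on the latest stored context.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE context
              (user TEXT, location TEXT, activity TEXT, ts REAL)""")

def store_context(user, location, activity, ts):
    """Context-server role: persist a context observation."""
    db.execute("INSERT INTO context VALUES (?, ?, ?, ?)",
               (user, location, activity, ts))

def latest_context(user):
    return db.execute("""SELECT location, activity FROM context
                         WHERE user = ? ORDER BY ts DESC LIMIT 1""",
                      (user,)).fetchone()

def on_context_update(user):
    """Middleware role: choose an execution module from the context."""
    location, activity = latest_context(user)
    if location == "living_room" and activity == "resting":
        print(f"starting music service for {user}")   # stand-in module

store_context("alice", "living_room", "resting", 1.0)
on_context_update("alice")
```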

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.3
    • /
    • pp.127-143
    • /
    • 2022
  • Recently, many attempts to run numerical ocean models in cloud computing environments have been made. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These new features facilitate ocean modeling experimentation on commercial cloud computing systems. Many scientists and engineers expect cloud computing to become mainstream in the near future. Analysis of the performance and features of commercial cloud services for numerical modeling is essential in order to select appropriate systems, as this can help minimize execution time and the amount of resources utilized. The effect of cache memory is large in the processing structure of an ocean numerical model, which processes input/output of data in a multidimensional array structure, and the speed of the network is important due to the communication patterns in which a large amount of data moves. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking software package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for the transition of other ocean models into cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes on virtualization-based cloud systems. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a huge amount of memory; memory latency is also important to performance. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small ones. Our analysis results will be a helpful reference for constructing the best computing system in the cloud to minimize the time and cost of numerical ocean modeling.
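
To illustrate the kind of memory benchmark referenced here, the sketch below times a STREAM-style "triad" in NumPy. It is not the official STREAM benchmark, and NumPy's temporary array for `scalar * c` adds extra traffic, so the reported figure is only a rough lower bound on bandwidth.

```python
# STREAM-style triad sketch: a = b + scalar * c over arrays far larger
# than cache, timed best-of-5 as STREAM does.
import time
import numpy as np

N = 20_000_000                       # 3 arrays x 160 MB: well beyond cache
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

best = float("inf")
for _ in range(5):
    t0 = time.perf_counter()
    np.add(b, scalar * c, out=a)     # triad kernel
    best = min(best, time.perf_counter() - t0)

moved_bytes = 3 * N * 8              # read b, read c, write a (8-byte floats)
print(f"triad bandwidth ~ {moved_bytes / best / 1e9:.1f} GB/s")
```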

Comparison of Performance Between Incremental and Batch Learning Method for Information Analysis of Cyber Surveillance and Reconnaissance (사이버 감시정찰의 정보 분석에 적용되는 점진적 학습 방법과 일괄 학습 방법의 성능 비교)

  • Shin, Gyeong-Il;Yooun, Hosang;Shin, DongIl;Shin, DongKyoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.3
    • /
    • pp.99-106
    • /
    • 2018
  • In the process of acquiring information through cyber ISR (Intelligence, Surveillance, and Reconnaissance), and in research on agents that help decision-making, periodic communication between the C&C (Command and Control) server and the agent may not be possible. For this case, we have studied how to conduct surveillance and reconnaissance effectively. Because of the network configuration, agents planted on infiltrated computers cannot communicate seamlessly with C&C servers. In this case the agent keeps collecting data continuously and, in order to analyze the collected data within a short time once communication with the C&C server becomes possible, it must utilize limited resources and time to continue its mission without being discovered. This research shows the superiority of the incremental learning method over the batch method through experiments. In an experiment with memory restricted to 500 megabytes, the incremental learning method showed a tenfold decrease in learning time. However, in an experiment with the reuse of incorrectly classified data, relearning took twice as long.
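
The two training regimes being compared can be sketched with scikit-learn's SGDClassifier as a stand-in model: batch learning refits on all accumulated data, while incremental learning updates the same model chunk by chunk with partial_fit. The synthetic data and the model choice are assumptions for illustration, not the paper's setup.

```python
# Batch vs. incremental training over data that arrives in chunks.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
chunks = [(rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000))
          for _ in range(10)]                    # data arriving over time

# Batch: retrain from scratch on everything seen so far (memory-hungry,
# since all accumulated data must be kept around).
batch = SGDClassifier(random_state=0)
X_all, y_all = np.empty((0, 20)), np.empty(0, dtype=int)
for X, y in chunks:
    X_all = np.vstack([X_all, X])
    y_all = np.concatenate([y_all, y])
    batch.fit(X_all, y_all)

# Incremental: update the same model with each chunk only, keeping the
# memory footprint bounded -- the property the agent scenario needs.
inc = SGDClassifier(random_state=0)
for X, y in chunks:
    inc.partial_fit(X, y, classes=np.array([0, 1]))
```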

A Comparative Study on Similarity Measure Techniques for Cross-Project Defect Prediction (교차 프로젝트 결함 예측을 위한 유사도 측정 기법 비교 연구)

  • Ryu, Duksan;Baik, Jongmoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.6
    • /
    • pp.205-220
    • /
    • 2018
  • Software defect prediction helps allocate valuable project resources effectively for software quality assurance activities by focusing on the identified fault-prone modules. If the historical data collected within a company is sufficient, Within-Project Defect Prediction (WPDP) can be utilized for accurate fault-prone module prediction. In case a company does not maintain historical data, it may be helpful to build a classifier for fault-prone module prediction based on Cross-Project Defect Prediction (CPDP). Since CPDP employs project data collected from other organizations to build a classifier, the main obstacle to building an accurate classifier is that the distributions of the source and target projects are not similar. Because identifying effective similarity measure techniques is crucial to obtaining high performance in CPDP, this paper aims to identify them. We compare various similarity measure techniques and evaluate the effectiveness of the similarity weights they calculate. The results are verified using statistical significance tests and effect size tests. They show that the k-Nearest Neighbor (k-NN), LOcal Correlation Integral (LOCI), and Range methods are the top three performers. The experimental results show that the predictive performance of these three methods is comparable to that of WPDP.
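
One plausible reading of k-NN-based similarity weighting for CPDP is sketched below: each source-project instance is weighted by its closeness to the target project's instances, so that source data resembling the target dominates training. The value of k, the weighting formula, and the classifier are assumptions, not the paper's exact setup.

```python
# Similarity-weighted CPDP sketch: weight source instances by their
# k-NN distance to the target project, then train a weighted classifier.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(500, 10))    # other organizations' data
y_src = rng.integers(0, 2, 500)                 # defect labels
X_tgt = rng.normal(0.5, 1.2, size=(200, 10))    # unlabeled target project

# Similarity weight: inverse of mean distance to the k nearest target
# instances (closer to the target distribution -> larger weight).
k = 5
nn = NearestNeighbors(n_neighbors=k).fit(X_tgt)
dist, _ = nn.kneighbors(X_src)
weights = 1.0 / (1.0 + dist.mean(axis=1))

clf = LogisticRegression(max_iter=1000)
clf.fit(X_src, y_src, sample_weight=weights)    # similarity-weighted training
pred = clf.predict(X_tgt)                       # fault-proneness predictions
```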