• Title/Summary/Keyword: Data Source

Search Results: 6,566

Priority Analysis for Software Functions Using Social Network Analysis and DEA (Data Envelopment Analysis) (사회연결망 분석과 자료포락분석 기법을 이용한 소프트웨어 함수 우선순위 분석 연구)

  • Huh, Sang Moo;Kim, Woo Je
    • Journal of Information Technology Services
    • /
    • v.17 no.3
    • /
    • pp.171-189
    • /
    • 2018
  • To remove software defects and improve software performance, many developers perform code inspections and use static analysis tools. A code inspection is an activity performed manually to detect defects in the developed source code. However, there is no clear criterion for which source code should be inspected. A static analysis tool can automatically detect defects by analyzing source code without running it. However, it has the disadvantage that it analyzes only the code inside individual functions, without analyzing the relations among them. The functions in source code are interconnected and form a social network. Functions that occupy critical locations in this network can be important enough to affect the overall quality, whereas a static analysis tool merely reports which functions were called several times. In this study, core functions are elicited by applying social network analysis and DEA (Data Envelopment Analysis) to the CUBRID open-source database code. In addition, we suggest clear criteria for selecting the target sources for code inspection and ways to find the core functions, so as to minimize defects and improve performance.
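The centrality idea in the abstract above can be sketched with a toy call graph. This is a minimal illustration, not the paper's method: the function names are invented, and the paper combines several network measures with DEA, whereas this sketch computes only normalized degree centrality.

```python
from collections import defaultdict

# Hypothetical call graph: edges mean "caller -> callee".
# Function names are illustrative, not from the paper's CUBRID data.
CALL_GRAPH = {
    "main":          ["parse_query", "open_db"],
    "parse_query":   ["tokenize", "build_plan"],
    "build_plan":    ["cost_estimate", "tokenize"],
    "open_db":       ["read_page"],
    "cost_estimate": ["read_page"],
    "tokenize":      [],
    "read_page":     [],
}

def degree_centrality(graph):
    """In-degree plus out-degree, normalized by (n - 1)."""
    n = len(graph)
    score = defaultdict(float)
    for caller, callees in graph.items():
        score[caller] += len(callees)          # out-degree
        for callee in callees:
            score[callee] += 1                 # in-degree
    return {f: score[f] / (n - 1) for f in graph}

ranked = sorted(degree_centrality(CALL_GRAPH).items(),
                key=lambda kv: kv[1], reverse=True)
for fn, c in ranked:
    print(f"{fn:14s} {c:.2f}")
```

Functions ranking highest here would be candidate targets for code inspection; the paper additionally filters such candidates through DEA efficiency scores.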

High Resolution Hydroacoustic Investigation in Shallow Water for the Engineering Design of Railroad Bridge (철도교량 설계 지반조사를 위한 고분해능 수면 탄성파반사법의 응용 사례)

  • Swoboda, Ulrich
    • Proceedings of the Korean Geotechnical Society Conference
    • /
    • 2001.03a
    • /
    • pp.231-238
    • /
    • 2001
  • To investigate the underground structure beneath the shallow water of the Han River near Yangsou-Ri, high-resolution hydroacoustic measurements were carried out for the engineering design of a railroad bridge. The acoustic source was a Boomer with an energy of 90 to 280 J and a frequency range up to about 16 kHz. The reflected signals were received using both traditional hydrophones (passive elements) and a specially devised receiver unit (active element) composed mainly of piezofilms and a preamplifier. These were connected to the "SUMMIT" data acquisition system (DMT-GeoTec company), with the sampling interval set to 1/32 ms. The source position was continuously monitored by a precision DGPS system whose positioning accuracy was on the order of 1 m. For quality-control purposes, two different source-receiver geometries were used; that is, the measurements were repeated along the profile each time with a different source energy (175 J, 280 J), receiving element (passive, active), and source-receiver geometry. It was shown that the resolution of data acquired with a proper arrangement of the active hydrophone could be greatly enhanced, and hence the corresponding profile section produced by the standard data-processing system "FOCUS" accounted excellently for the underground formation below the shallow water.


Quantitative Assessment of Input and Integrated Information in GIS-based Multi-source Spatial Data Integration: A Case Study for Mineral Potential Mapping

  • Kwon, Byung-Doo;Chi, Kwang-Hoon;Lee, Ki-Won;Park, No-Wook
    • Journal of the Korean Earth Science Society
    • /
    • v.25 no.1
    • /
    • pp.10-21
    • /
    • 2004
  • Recently, spatial data integration has been regarded as an important task in various geoscientific applications of GIS. Although much research has been reported in the literature, quantitative assessment of the spatial interrelationship between input data layers and an integrated layer has not been fully considered and is still at the development stage. To address this, we propose methodologies that account for the spatial interrelationship and spatial patterns in the integration task, namely a multi-buffer zone analysis and a statistical analysis based on a contingency table. The main part of our work, the multi-buffer zone analysis, was applied to reveal the spatial pattern around geological source primitives, and the statistical analysis was performed to extract information for the assessment of an integrated layer. Mineral potential mapping using multi-source geoscience data sets from Ogdong, Korea, was carried out to illustrate the application of this methodology.
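The multi-buffer zone analysis can be illustrated by counting occurrences that fall into successive distance rings around a source primitive. All coordinates and radii below are hypothetical; the paper works with real map layers in a GIS rather than bare point lists.

```python
import math

# Hypothetical coordinates (km): one geological source primitive (a point)
# and observed mineral occurrences; none of these values come from the paper.
source = (0.0, 0.0)
occurrences = [(0.3, 0.1), (0.8, 0.2), (1.4, 0.5), (2.7, 1.0), (3.9, 2.2)]
ring_edges = [0.5, 1.0, 2.0, 4.0]          # buffer radii in km

def multi_buffer_counts(src, points, edges):
    """Count points falling in each successive buffer ring around src."""
    counts = [0] * len(edges)
    for (x, y) in points:
        d = math.hypot(x - src[0], y - src[1])
        for i, r in enumerate(edges):
            if d <= r:
                counts[i] += 1
                break                      # each point lies in exactly one ring
    return counts

print(multi_buffer_counts(source, occurrences, ring_edges))
```

A decreasing count per unit ring area with distance would indicate a spatial association between the primitive and the occurrences.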

Efficient Kernel Based 3-D Source Localization via Tensor Completion

  • Lu, Shan;Zhang, Jun;Ma, Xianmin;Kan, Changju
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.1
    • /
    • pp.206-221
    • /
    • 2019
  • Source localization in three-dimensional (3-D) wireless sensor networks (WSNs) is becoming a major research focus. Owing to the complicated air-ground environments of 3-D positioning, many traditional localization methods, such as those based on received signal strength (RSS), may have relatively poor accuracy. Benefiting from prior learning mechanisms, fingerprinting-based localization methods are less sensitive to complex conditions and can provide relatively accurate localization. However, fingerprinting-based methods require training data at each grid point to construct the fingerprint database, the overhead of which is very high, particularly for 3-D localization. In addition, some of the measured data may be unavailable due to interference from the complicated environment. In this paper, we propose an efficient kernel-based 3-D localization algorithm via tensor completion. We first exploit the spatial correlation of the RSS data and demonstrate the low-rank property of the RSS data matrix. Based on this, a new training scheme is proposed that uses tensor completion to recover the missing data of the fingerprint database. Finally, we propose a kernel-based learning technique for the matching phase to improve the sensitivity and accuracy of the final source position estimate. Simulation results show that the new method effectively eliminates the impairment caused by incomplete sensing data and improves localization performance.
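A minimal sketch of the low-rank recovery step, reduced to the matrix (2-D) case rather than the paper's tensor setting: observed entries of a synthetic rank-1 "fingerprint" matrix are kept fixed while missing entries are filled by repeated rank-1 SVD truncation. The data are synthetic, not RSS measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rank-1 fingerprint matrix (grid points x anchors);
# real RSS fingerprints are only approximately low-rank.
u = rng.uniform(1, 2, size=(8, 1))
v = rng.uniform(1, 2, size=(1, 6))
M = u @ v                                   # ground truth, rank 1

mask = rng.uniform(size=M.shape) < 0.7      # True = observed entry
X = np.where(mask, M, M[mask].mean())       # initialize missing with the mean

for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = s[0] * np.outer(U[:, 0], Vt[0]) # best rank-1 approximation
    X = np.where(mask, M, X_low)            # keep observed entries fixed

err = np.abs(X - M)[~mask].max()
print(f"max error on missing entries: {err:.4f}")
```

The paper generalizes this idea to three-way tensors and follows it with a kernel-based matching stage, both of which are omitted here.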

An Active-based Study for the Improvement of Reliability in Mobile Ad-hoc Networks (이동 애드혹 네트워크에서 신뢰성 향상을 위한 액티브 기반연구)

  • 박경배;강경인;유재휘;김진용
    • Journal of the Korea Society of Computer and Information
    • /
    • v.7 no.4
    • /
    • pp.188-198
    • /
    • 2002
  • In this paper, we propose an active network to support reliable data transmission in mobile ad-hoc networks. The active network uses DSR (Dynamic Source Routing) as its basic routing protocol and uses the source and destination nodes as the key active nodes. To improve reliability, the source node becomes a source active node with an added buffer that stores the most recently sent data, together with per-destination flow control for data transmission. The destination node becomes a destination active node that requests retransmission of any data it has not yet received, together with per-source flow control for data reception. As a result of our evaluation, the proposed active network guaranteed reliable data transmission, with an almost 100% data reception rate for a slowly moving mobile ad-hoc network, and with a reception rate above 95%, an improvement of 3.5737 percentage points over the non-active network, for a continuously fast-moving mobile ad-hoc network.
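The buffering/retransmission idea can be sketched as follows. This toy model is not the paper's protocol: there is no DSR routing or flow control here, only a source that buffers sent packets and a destination that requests retransmission of missing sequence numbers.

```python
# Toy sketch of the active-node idea: the source keeps sent packets in a
# buffer; the destination detects gaps in sequence numbers and asks the
# source to resend from that buffer.

class SourceActiveNode:
    def __init__(self):
        self.buffer = {}                  # seq -> payload kept for retransmission
    def send(self, seq, payload):
        self.buffer[seq] = payload
        return (seq, payload)
    def retransmit(self, seq):
        return (seq, self.buffer[seq])

class DestActiveNode:
    def __init__(self):
        self.received = {}
    def deliver(self, pkt):
        seq, payload = pkt
        self.received[seq] = payload
    def missing(self, expected_seqs):
        return [s for s in expected_seqs if s not in self.received]

src, dst = SourceActiveNode(), DestActiveNode()
for seq in range(5):
    pkt = src.send(seq, f"data-{seq}")
    if seq != 2:                          # simulate one lost packet
        dst.deliver(pkt)

for seq in dst.missing(range(5)):         # destination requests retransmission
    dst.deliver(src.retransmit(seq))

print(sorted(dst.received))               # all five packets eventually delivered
```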


Effects of Uncertain Spatial Data Representation on Multi-source Data Fusion: A Case Study for Landslide Hazard Mapping

  • Park No-Wook;Chi Kwang-Hoon;Kwon Byung-Doo
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.5
    • /
    • pp.393-404
    • /
    • 2005
  • As multi-source spatial data fusion deals with various types of spatial data, which are specific representations of the real world with unequal reliability and incomplete knowledge, proper data representation and uncertainty analysis become all the more important. In relation to this problem, this paper presents and applies an advanced data representation methodology for different types of spatial data, namely categorical and continuous data. To account for the uncertainties of categorical and continuous data, fuzzy boundary representation and smoothed kernel density estimation within a fuzzy logic framework are adopted, respectively. To investigate the effects of these data representations on the final fusion results, a case study for landslide hazard mapping was carried out on multi-source spatial data sets from Jangheung, Korea. The results obtained with the proposed schemes were compared with those obtained with traditional crisp boundary representation and categorized continuous data representation. The proposed scheme showed higher prediction rates than the traditional methods, and different representation settings resulted in variation of the prediction rates.
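The smoothed kernel density estimation used for continuous evidence can be sketched in a few lines. The slope samples and bandwidth below are invented for illustration; the paper embeds such densities in a fuzzy-logic fusion framework, which is omitted here.

```python
import numpy as np

# Hypothetical slope values (degrees) at past landslide locations;
# the data and bandwidth are illustrative, not from the paper.
samples = np.array([18.0, 22.0, 25.0, 26.0, 31.0])
h = 3.0                                    # kernel bandwidth

def kde(x, data, bandwidth):
    """Gaussian kernel density estimate at point(s) x."""
    u = (np.asarray(x)[..., None] - data) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=-1) / (
        len(data) * bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(10, 40, 7)
for x, d in zip(grid, kde(grid, samples, h)):
    print(f"slope {x:5.1f} deg -> density {d:.4f}")
```

Unlike categorizing the continuous variable into crisp bins, the smooth density assigns gradually varying evidence to nearby slope values.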

A Robust Vector Quantization Method against Distortion Outlier and Source Mismatch (이상 신호왜곡과 소스 불일치에 강인한 벡터 양자화 방법)

  • Noh, Myung-Hoon;Kim, Moo-Young
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.3
    • /
    • pp.74-80
    • /
    • 2012
  • In resolution-constrained quantization, the size of a Voronoi cell varies with the probability density function of the input data, which causes a large number of distortion outliers. We propose a vector quantization method that reduces distortion outliers by combining the generalized Lloyd algorithm (GLA) and the cell-size-constrained vector quantization (CCVQ) scheme. The training data are divided into inside and outside regions according to the size of the Voronoi cell, and CCVQ and GLA are then applied to the respective regions. As CCVQ is applied to the densely populated region of the source instead of GLA, the number of centroids for the outside region can be increased, so that distortion outliers are decreased. In real-world environments, source mismatch between training and test data is inevitable. For the source mismatch case, the proposed algorithm improves performance in terms of both average distortion and distortion outliers.
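The GLA half of the method is plain Lloyd iteration; a scalar sketch on a synthetic dense-plus-tail source (the situation that produces distortion outliers) might look like this. The CCVQ step and the inside/outside split are the paper's contribution and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D training source: a dense cluster plus a sparse tail,
# the case where resolution-constrained VQ yields distortion outliers.
data = np.concatenate([rng.normal(0.0, 0.2, 900),    # dense region
                       rng.normal(4.0, 1.0, 100)])   # sparse tail

def lloyd(data, k, iters=50):
    """Plain generalized Lloyd algorithm (scalar case)."""
    codebook = np.quantile(data, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        # nearest-centroid assignment, then centroid update
        idx = np.abs(data[:, None] - codebook[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(idx == j):                     # skip empty cells
                codebook[j] = data[idx == j].mean()
    return codebook

cb = lloyd(data, k=4)
idx = np.abs(data[:, None] - cb[None, :]).argmin(axis=1)
distortion = ((data - cb[idx]) ** 2).mean()
print(f"codebook: {np.round(cb, 2)}, mean distortion: {distortion:.4f}")
```

Note how GLA concentrates centroids in the dense region, leaving the tail coarsely quantized; the paper's CCVQ/GLA split frees centroids for exactly that tail.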

Database and User Interface for Pollutant Source and Load Management of Yeungsan Estuarine Lake Watershed Using GIS (GIS를 활용한 영산호 수계 오염원 데이터베이스 구축과 오염원관리 사용자 인터페이스)

  • 양홍모
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.28 no.6
    • /
    • pp.114-126
    • /
    • 2001
  • The purpose of this study is to establish databases of pollutant sources and water quality measurement data using GIS, and to build a user interface for the management of pollutant sources. Yeongsan Estuarine Lake was formed by a huge levee of 4.35 km constructed in an agricultural reclamation project. The water quality of the reservoir has gradually degraded, mainly owing to the increase of point- and non-point-source pollutant loads flowing into it from the lake's watershed of 33,374.3 km². The application of GIS to building the database was investigated for point sources such as domestic sewage, industrial wastewater, farm wastes, and fishery wastes, and for non-point sources such as residential, rice-paddy, upland-field, and forest runoff in the watershed of the lake. NT Arc/Info and ArcView were mainly used to build the database. Land use of the watershed was analyzed from LANDSAT image data for non-point-source pollutant load estimation. Pollutant loads from the watershed into the reservoir were calculated using the GIS database and the BOD, TN, and TP load units of point and non-point sources. The total BOD, TN, and TP loads reached approximately 141,715, 2,094, and 4,743 kg/day, respectively. These loads can be used as input parameters for a water quality prediction model of the reservoir. A user-friendly interface program was developed using the Dialog Designer and Avenue script of ArcView, which can perform spatial analysis of point and non-point sources, calculate pollutant inputs from the sources, update their attribute data, delete and add point sources, identify the locations and volumes of water treatment facilities, and examine water quality data at water sampling points.
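The unit-load bookkeeping behind such load estimates is simple arithmetic: each source category contributes its unit load times its magnitude. The categories and numbers below are made up for illustration, not taken from the Yeongsan watershed data.

```python
# Illustrative unit-load calculation (all numbers are hypothetical):
# load [kg/day] = sum over categories of (unit load) x (source magnitude).
bod_unit_loads = {                      # kg/day per unit
    "population (per capita)": 0.05,
    "cattle (per head)":       0.90,
    "paddy field (per ha)":    2.30,
}
magnitudes = {
    "population (per capita)": 12000,   # persons
    "cattle (per head)":       500,     # head
    "paddy field (per ha)":    300,     # ha
}
bod_load = sum(bod_unit_loads[k] * magnitudes[k] for k in bod_unit_loads)
print(f"watershed BOD load: {bod_load:.1f} kg/day")
```

In the study, the magnitudes come from GIS layers (population, land use from LANDSAT classification) rather than hand-entered figures.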


Multi-Hop Cooperative Communications using Multi-Relays in IEEE 802.11 (IEEE 802.11에서 다중 릴레이를 이용한 멀티홉 방식 협력 무선통신)

  • Lee, Sook-Hyoun;Lee, Tae-Jin
    • Journal of KIISE: Information Networking
    • /
    • v.36 no.6
    • /
    • pp.528-535
    • /
    • 2009
  • This paper proposes a mechanism to improve performance using cooperative communications in an IEEE 802.11 environment. Existing algorithms use one relay between a source and a destination, i.e., a two-hop relay. The proposed algorithm utilizes more than one relay to overcome the inefficiency of using a single relay. In the proposed mechanism, an AP maintains network (rate) information, which it uses to select the relays for a source. The AP notifies the source and its neighbor nodes of the selected relays, and the source transmits data to the relays for cooperative communication. Moreover, relays are given an opportunity to send their own data right after relaying the source's data, so the relays are compensated for the power spent on relaying and overall throughput is improved.
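Rate-based relay selection can be sketched with the standard two-hop effective-rate formula; the paper's exact AP selection rule and its multi-relay extension are not reproduced here, and the rates below are hypothetical.

```python
# Sketch of rate-based relay selection: sending one bit over two hops takes
# 1/r1 + 1/r2 seconds, so the effective rate is the harmonic combination.
def two_hop_rate(r_sr, r_rd):
    """Effective rate of source -> relay -> destination, in Mb/s."""
    return 1.0 / (1.0 / r_sr + 1.0 / r_rd)

direct_rate = 2.0                       # Mb/s, source -> destination
candidates = {                          # relay: (src->relay, relay->dst) Mb/s
    "relay_A": (11.0, 5.5),
    "relay_B": (5.5, 5.5),
    "relay_C": (11.0, 2.0),
}
best = max(candidates, key=lambda r: two_hop_rate(*candidates[r]))
use_relay = two_hop_rate(*candidates[best]) > direct_rate
print(best, round(two_hop_rate(*candidates[best]), 3), use_relay)
```

Relaying only pays off when the combined two-hop rate beats the direct rate, which is why an AP with global rate information is well placed to make the choice.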

Gaussian Processes for Source Separation: Pseudo-likelihood Maximization (유사-가능도 최대화를 통한 가우시안 프로세스 기반 음원분리)

  • Park, Sun-Ho;Choi, Seung-Jin
    • Journal of KIISE: Software and Applications
    • /
    • v.35 no.7
    • /
    • pp.417-423
    • /
    • 2008
  • In this paper we present a probabilistic method for source separation in the case where each source has a certain temporal structure. We tackle the problem of source separation by maximum pseudo-likelihood estimation, representing the latent function that characterizes the temporal structure of each source as a random process with a Gaussian prior. The resulting pseudo-likelihood of the data is Gaussian, determined by a mixing matrix as well as by the predictive mean and covariance matrix, which can easily be computed by Gaussian process (GP) regression. Gradient-based optimization is applied to estimate the demixing matrix by maximizing the log-pseudo-likelihood of the data. Numerical experiments confirm the useful behavior of our method compared to existing source separation methods.
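The predictive mean and covariance that enter the pseudo-likelihood come from standard GP regression, which can be sketched as follows. The signal, kernel parameters, and noise level are invented for illustration; the mixing-matrix estimation itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential (RBF) covariance between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

# Hypothetical temporally structured source: a smooth signal plus noise.
t_train = np.linspace(0, 5, 30)
y_train = np.sin(t_train) + 0.05 * rng.standard_normal(30)
t_test = np.array([1.0, 2.5, 4.0])

noise = 0.05 ** 2
K = rbf(t_train, t_train) + noise * np.eye(30)   # train covariance + noise
K_s = rbf(t_test, t_train)                       # test-train covariance

alpha = np.linalg.solve(K, y_train)
mean = K_s @ alpha                                        # predictive mean
cov = rbf(t_test, t_test) - K_s @ np.linalg.solve(K, K_s.T)  # predictive cov
print(np.round(mean, 3))
```

In the paper, such predictive means and covariances for each source feed a Gaussian pseudo-likelihood that is then maximized over the demixing matrix.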