• Title/Summary/Keyword: a bottom-up algorithm


An Algorithm for Ontology Merging and Alignment using Local and Global Semantic Set (지역 및 전역 의미집합을 이용한 온톨로지 병합 및 정렬 알고리즘)

  • 김재홍;이상조
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.4
    • /
    • pp.23-30
    • /
    • 2004
  • Ontologies play an important role in the Semantic Web by providing well-defined meaning to ontology consumers. But as ontologies are authored in a bottom-up, distributed manner, a large number of overlapping ontologies are created and used for similar domains. Ontology sharing and reuse have become a distinguished topic, and ontology merging and alignment are the solutions for the problem. Previously proposed ontology merging and alignment algorithms detect conflicts between concepts by making use of only local syntactic information of concept names. They also depend on a semi-automatic approach, which is tedious for ontology engineers. Consequently, the quality of merging and alignment tends to be unsatisfactory. To remedy the defects of the previous algorithms, we propose a new algorithm for ontology merging and alignment which uses the local and global semantic sets of a concept. We evaluated our algorithm with several pairs of ontologies written in OWL, and achieved around 91% precision in merging and alignment. We expect that, with the widespread use of web ontologies, the need for ontology sharing and reuse will become greater, and our proposed algorithm can significantly reduce the time required for ontology development. Our algorithm can also easily be applied to various fields, such as ontology mapping, where semantic information exchange is a requirement.
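The semantic-set comparison described in this abstract can be sketched in a few lines. The paper's exact definitions of the local and global semantic sets are not reproduced here, so the set contents, the Jaccard measure, and the 0.6/0.4 weighting below are illustrative assumptions:

```python
# Toy sketch of semantic-set matching for ontology alignment. We assume the
# local set holds a concept's direct neighbors and the global set its
# ancestors, and compare concepts with a weighted Jaccard similarity.

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def concept_similarity(local_a, global_a, local_b, global_b, w_local=0.6):
    """Weighted mix of local and global semantic-set overlap."""
    return w_local * jaccard(local_a, local_b) + (1 - w_local) * jaccard(global_a, global_b)

# Two concepts from overlapping ontologies: "Car" vs "Automobile".
car = ({"wheel", "engine", "driver"}, {"vehicle", "thing"})
auto = ({"wheel", "engine", "passenger"}, {"vehicle", "thing"})
sim = concept_similarity(*car, *auto)
print(round(sim, 2))  # prints 0.7
```

Because the global sets agree, the two concepts score highly even though their names differ, which is exactly the kind of match a purely syntactic name comparison would miss.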

Mobile Router Decision Using Multi-layered Perceptron in Nested Mobile Networks (중첩 이동 네트워크에서 Multi-layered Perceptron을 이용한 최적의 이동 라우터 지정 방안)

  • Song, Jiyoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.12
    • /
    • pp.2843-2852
    • /
    • 2013
  • In a nested mobile network environment, a mobile node selects one of multiple mobile routers. The MR (Mobile Router) chosen by existing top-down or bottom-up methods may not be optimal when the numbers of mobile nodes and routers increase substantially and the scale of the network grows drastically. Since an inappropriate MR decision causes handover or binding renewal for mobile nodes, determining the optimal MR is important for efficiency. In this paper, we propose an algorithm that decides on the optimal MR using MR QoS (Quality of Service) information, and we describe how to construct variously structured MLPs (Multi-Layered Perceptrons) based on the algorithm. In conclusion, we demonstrate the suitability of the suggested neural network for a nested mobile network through performance analysis of each trained MLP.
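The core step, scoring candidate mobile routers with an MLP over QoS features, can be sketched as a plain forward pass. The paper's trained network, feature set, and topology are not given here; the two QoS features and all weights below are illustrative placeholders, not learned values:

```python
import math

# Minimal MLP forward pass for scoring mobile routers by QoS.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_score(features, w_hidden, w_out):
    """One hidden layer, sigmoid activations, scalar suitability score."""
    hidden = [sigmoid(sum(w * f for w, f in zip(ws, features))) for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# QoS features per MR: (normalized bandwidth, 1 - normalized delay)
routers = {"MR1": (0.9, 0.8), "MR2": (0.4, 0.6)}
w_hidden = [(1.0, 0.5), (0.5, 1.0)]   # two hidden units (placeholder weights)
w_out = (1.0, 1.0)

best = max(routers, key=lambda r: mlp_score(routers[r], w_hidden, w_out))
print(best)  # MR1
```

In the paper the weights would come from training on observed handover/binding outcomes; here the point is only that the MR decision reduces to picking the argmax of a learned score.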

Costing of a State-Wide Population Based Cancer Awareness and Early Detection Campaign in a 2.67 Million Population of Punjab State in Northern India

  • Thakur, JS;Prinja, Shankar;Jeet, Gursimer;Bhatnagar, Nidhi
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.17 no.2
    • /
    • pp.791-797
    • /
    • 2016
  • Background: Punjab state in particular is reporting a rising burden of cancer. A 'door to door cancer awareness and early detection campaign' was therefore launched in Punjab covering about 2.67 million population, wherein, after initial training, accredited social health activists (ASHAs) and other health staff conducted a survey for early detection of cancer cases based on a twelve-point clinical algorithm. Objective: To ascertain the unit cost of undertaking a population-based cancer awareness and early detection campaign. Materials and Methods: Data were collected using bottom-up costing methods. Full economic costs of implementing the campaign from the health system perspective were calculated. Options to meet the likely demand for project activities were further evaluated to examine their worth from the point of view of long-term sustainability. Results: The campaign covered 97% of the state population. A total of 24,659 cases were suspected to have cancer and were referred to health facilities. At the state level, incidence and prevalence of cancer were found to be 90 and 216 per 100,000, respectively. The full economic cost of implementing the campaign in the pilot district was USD 117,524, whereas the financial cost was approximately USD 6,301. The start-up phase of the campaign was more resource intensive (63% of total) than the implementation phase. The economic cost per person contacted and per person suspected by the clinical algorithm was found to be USD 0.20 and USD 40, respectively. The cost per confirmed case under the campaign was USD 7,043. Conclusions: The campaign was able to screen a reasonably large population. The high economic cost points to the fact that the opportunity cost of the campaign put a significant burden on the health system and other programs. However, the awareness generation and early detection strategy adopted in this campaign seems promising in light of the fact that organized screening is not in place in India and in many developing countries.
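Bottom-up costing, as used in this study, sums the cost of each input and divides by output units to get unit costs. The line items and coverage figure below are hypothetical (the abstract reports only the totals), chosen so the arithmetic matches the reported USD 117,524 full economic cost and USD 0.20 per person contacted:

```python
# Bottom-up costing sketch: sum per-input costs, then divide by output units.
# Input line items and the coverage figure are illustrative assumptions,
# not the paper's actual cost breakdown.
inputs = {
    "training": 20000.0,
    "personnel_time": 70000.0,
    "materials": 15000.0,
    "supervision": 12524.0,
}
total_cost = sum(inputs.values())                 # full economic cost
persons_contacted = 587_620                       # hypothetical coverage figure

print(total_cost)                                 # 117524.0
print(round(total_cost / persons_contacted, 2))   # 0.2 per person contacted
```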

Current Status of Hyperspectral Data Processing Techniques for Monitoring Coastal Waters (연안해역 모니터링을 위한 초분광영상 처리기법 현황)

  • Kim, Sun-Hwa;Yang, Chan-Su
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.1
    • /
    • pp.48-63
    • /
    • 2015
  • In this study, we introduce various hyperspectral data processing techniques for the monitoring of shallow and coastal waters, to enlarge the application range and to improve the accuracy of the end results in Korea. Unlike over land, more accurate atmospheric correction is needed in coastal regions, which show relatively low reflectance at visible wavelengths. Sun-glint, which occurs due to the geometry of sun, sea surface, and sensor, is another issue in the processing of hyperspectral imagery for ocean applications. After preprocessing of the hyperspectral data, a semi-analytical algorithm based on a radiative transfer model and a spectral library can be used for bathymetry mapping in coastal areas, type classification and status monitoring of benthos, or substrate classification. In general, semi-analytical algorithms using the spectral information obtained from hyperspectral imagery show higher accuracy than empirical methods using multispectral data. Water depth and water quality are constraining factors in the ocean application of optical data. Although a radiative transfer model suggests a theoretical limit of about 25 m in depth for bathymetry and bottom classification, hyperspectral data have in practice been used at depths of up to 10 m in shallow and coastal waters. This means we have to focus on the maximum water depth and water quality conditions that affect the coastal applicability of hyperspectral data, and to define a spectral library of coastal waters to classify the types of benthos and substrates.
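One common way to match a hyperspectral pixel against a spectral library, as in the benthos/substrate classification the abstract mentions, is the spectral angle between the pixel spectrum and each reference spectrum. This is a generic sketch, not the paper's semi-analytical algorithm, and the four-band reflectance values are toy data:

```python
import math

# Classify a pixel spectrum against a spectral library by spectral angle:
# the reference with the smallest angle to the pixel wins.

def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra treated as vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(dot / (na * nb))

library = {  # illustrative 4-band reflectance spectra
    "sand":     (0.30, 0.35, 0.40, 0.45),
    "seagrass": (0.05, 0.20, 0.10, 0.08),
}
pixel = (0.28, 0.34, 0.41, 0.44)
label = min(library, key=lambda k: spectral_angle(pixel, library[k]))
print(label)  # sand
```

Because the angle depends on spectral shape rather than magnitude, it is relatively tolerant of the brightness changes that water depth introduces, which is one reason library-matching approaches are attractive in shallow water.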

Study on the energy-saving constant temperature and humidity machine operating characteristics (에너지 절감형 항온항습기 운전 특성에 관한 연구)

  • Cha, Insu;Ha, Minho;Jung, Gyeonghwan
    • Journal of Energy Engineering
    • /
    • v.25 no.3
    • /
    • pp.27-33
    • /
    • 2016
  • The heat recovery system applied in this study is an energy-saving type that can produce maximum cooling capacity with less power consumption. In order to control the temperature and humidity of the constant temperature and humidity machine more precisely, a fuzzy PID controller was designed and applied as the control algorithm, and the outside-air compensation device (air-cooled) demonstrated excellent ability to dehumidify moisture at $-20^{\circ}C$ in winter. The high-efficiency, low-noise sirocco fan operates quietly and is designed to fit bottom-up or top-down installation in accordance with the characteristics of the equipment. According to the experimental data, the conversion efficiency is 95% or more, the power recovery time is within 5 sec, the stop delay time is within 30 sec, the pump-down time is 10 sec, the pump delay time is 5 sec, the heating delay time is 5 sec, the temperature deviation is ${\pm}2^{\circ}C$ (cooling deviation: $2^{\circ}C$, heating deviation: $2^{\circ}C$), and the humidity deviation is ${\pm}5%$ (humidification deviation 3.0%, dehumidification deviation 3.0%). Recently, ubiquitous technology has become important, so the constant temperature and humidity machine was designed to be remotely controllable via mobile phone, with greater scalability through MMI software and an automatic interface. Furthermore, the life of the parts and equipment is extended by preventing failures.
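The temperature loop the abstract describes can be illustrated with a discrete PID controller holding temperature within the reported ±2°C deviation. This is a plain PID sketch: the fuzzy gain tuning used in the paper is omitted, and the gains and toy plant model below are illustrative, not from the study:

```python
# Discrete PID temperature loop (plain PID; the paper uses fuzzy gain tuning).
# Gains and the first-order plant response are illustrative values.

def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5):
    """One PID update; state carries (integral, previous error)."""
    integral, prev_error = state
    integral += error
    output = kp * error + ki * integral + kd * (error - prev_error)
    return output, (integral, error)

setpoint, temp = 22.0, 18.0          # target and initial temperature (deg C)
state = (0.0, 0.0)
for _ in range(50):
    u, state = pid_step(setpoint - temp, state)
    temp += 0.05 * u                 # toy first-order plant response

print(abs(setpoint - temp) < 2.0)    # True: within the reported +/-2 C band
```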

Underdetermined blind source separation using normalized spatial covariance matrix and multichannel nonnegative matrix factorization (멀티채널 비음수 행렬분해와 정규화된 공간 공분산 행렬을 이용한 미결정 블라인드 소스 분리)

  • Oh, Son-Mook;Kim, Jung-Han
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.2
    • /
    • pp.120-130
    • /
    • 2020
  • This paper solves the underdetermined convolutive mixture problem by improving on the disadvantages of the multichannel nonnegative matrix factorization technique widely used in blind source separation. In conventional research based on the Spatial Covariance Matrix (SCM), each element, composed of values such as the power gain of a single channel and inter-channel correlation, tends to degrade the quality of the separated sources due to high variance. In this paper, level and frequency normalization is performed to effectively cluster the estimated sources, and we propose a novel SCM and an effective distance function for cluster pairs. The proposed SCM is used for the initialization of the spatial model and for hierarchical agglomerative clustering in a bottom-up approach. The proposed algorithm was evaluated using the 'Signal Separation Evaluation Campaign 2008 development dataset'. As a result, improvement in most of the performance indicators was confirmed using the 'Blind Source Separation Eval toolbox', an objective source separation quality verification tool; in particular, a superiority of 1 dB to 3.5 dB in the representative SDR metric was verified.
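The bottom-up clustering step can be sketched as follows: spatial features (stand-ins for the paper's normalized SCM entries) are level-normalized to unit vectors, then merged hierarchically by a cosine-based distance until the estimated number of sources remains. The feature vectors, single-linkage rule, and stopping count are illustrative assumptions:

```python
import math

# Hierarchical agglomerative (bottom-up) clustering of level-normalized
# spatial features with a cosine distance; toy 2-channel feature data.

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cos_dist(a, b):
    return 1.0 - sum(x * y for x, y in zip(a, b))

feats = [normalize(v) for v in [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0), (0.2, 0.8)]]
clusters = [[i] for i in range(len(feats))]

while len(clusters) > 2:  # stop at the 2 estimated sources
    # find and merge the closest pair of clusters (single linkage)
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda p: min(cos_dist(feats[a], feats[b])
                          for a in clusters[p[0]] for b in clusters[p[1]]),
    )
    clusters[i] += clusters.pop(j)

print(sorted(sorted(c) for c in clusters))  # [[0, 1], [2, 3]]
```

Level normalization matters here: without it, loud and quiet time-frequency bins from the same source would sit far apart and cluster poorly.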

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been a continuous demand in various fields for market information at the specific-product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information are collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and the product groups are derived by extracting similar product names based on cosine similarity calculation. Finally, the sales data on the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. 
We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. The product names similar to KSIC index words were extracted based on cosine similarity. The market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson's correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional sampling-based methods or methods requiring multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantics-based word embedding module can be advanced by giving a proper order to the preprocessed dataset or by combining another algorithm, such as Jaccard similarity, with Word2Vec. 
Also, the methods of product group clustering can be changed to other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they can further improve the performance of the basic model conceptually proposed in this study.
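The estimation step that follows Word2Vec training can be sketched as: group product names whose embeddings are cosine-similar to a category index word, then sum their sales. The 3-dimensional embeddings, sales figures, and the 0.95 threshold below are toy stand-ins for the trained 300-dimensional vectors and the tunable threshold the study describes:

```python
import math

# Group products by cosine similarity to a category index-word vector,
# then sum sales to estimate the category's market size.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

embeddings = {  # toy stand-ins for trained Word2Vec vectors
    "led_lamp":   (0.90, 0.10, 0.20),
    "led_bulb":   (0.85, 0.15, 0.25),
    "steel_pipe": (0.10, 0.90, 0.30),
}
sales = {"led_lamp": 120.0, "led_bulb": 80.0, "steel_pipe": 500.0}
index_word = (0.88, 0.12, 0.22)  # stand-in for a KSIC index-word vector

threshold = 0.95  # similarity cut-off; raising/lowering it narrows/widens the group
group = [p for p, v in embeddings.items() if cosine(v, index_word) >= threshold]
market_size = sum(sales[p] for p in group)
print(group, market_size)  # ['led_lamp', 'led_bulb'] 200.0
```

Adjusting `threshold` is how the level of the market category is tuned: a higher cut-off yields narrower, more specific product groups.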

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding(ARSU) and the Software Re/reverse-engineering Environment(SRE) (실시간 소프트웨어의 조절적${\cdot}$단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, it is commonly very hard to understand the software during the reengineering process. However, this research facilitates scalable re/reverse-engineering of such real-time software based on the architecture of the software in three-dimensional perspectives: structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, the specification (outline) view, and the algorithm (detail) view of the software, based on hierarchically organized parent-child relationships. The basic building block of the architecture is a software unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up way. It captures the specification and algorithm views at different levels of abstraction, and also shows functional and behavioral information at these levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view contains a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, etc. This view shows the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software. 
This approach allows engineers to extract reusable components from the software during reengineering process.
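The hierarchical SWU architecture with top-down and bottom-up navigation can be sketched as a parent-child tree. The node fields below are illustrative; the actual SWU carries much richer content (specification, algorithm, functional and behavioral views):

```python
# Sketch of a hierarchical software-unit (SWU) tree navigable top-down
# (architecture -> detail) or bottom-up (detail -> architecture).

class SWU:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def top_down(self):
        """Yield this unit and all descendants, outline before detail."""
        yield self.name
        for child in self.children:
            yield from child.top_down()

    def bottom_up(self):
        """Yield ancestors from this unit up to the root."""
        node = self
        while node:
            yield node.name
            node = node.parent

root = SWU("system")
sched = SWU("scheduler", root)
isr = SWU("timer_isr", sched)
print(list(root.top_down()))   # ['system', 'scheduler', 'timer_isr']
print(list(isr.bottom_up()))   # ['timer_isr', 'scheduler', 'system']
```

Attaching per-node views (data flow graphs, state diagrams) to such a tree is what lets the engineer abstract or explode detail at any level during navigation.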
