• Title/Summary/Keyword: Query Model

Search Results: 563

An Efficient Hybrid Lookup Service Exploiting Localized Query Traffic (질의의 지역성을 이용한 효율적인 하이브리드 검색 서비스)

  • Lee, Sang-Hwan;Han, Jae-Il;Kim, Chul-Su;Hwang, Jae-Gak
    • Journal of Information Technology Services / v.8 no.3 / pp.171-184 / 2009
  • Since the development of Distributed Hash Tables (DHTs), distributed lookup services have been one of the hot topics in the networking area. The main reason for this popularity is the simplicity of the lookup structure. However, the simple key-based search mechanism makes so-called "keyword" based search difficult, if not impossible. Thus, the applicability of DHTs is limited to certain areas. In this paper, we find that DHTs can be used as the ubiquitous sensor network (USN) metadata lookup service across a large number of sensor networks. The popularity of the ubiquitous sensor network has motivated the development of USN middleware services for sensor networks. One of the key functionalities of a USN middleware service is the lookup of USN metadata, by which users obtain various information about the sensor network, such as the type of the sensor networks and/or nodes, the residual battery charge, and the type of the sensor nodes. Traditional distributed hash table based lookup systems work well for a single sensor network. However, as the number of sensor networks increases, so does the need to integrate the lookup services of many autonomous sensor networks so that they can provide users an integrated view of the entire sensor network. In this paper, we provide a hybrid lookup model, in which autonomous lookup services are combined and provide seamless service across the boundaries of a single lookup service. We show that the hybrid model can provide far better lookup performance than a single lookup system.
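The key-based lookup that DHTs provide can be illustrated with a minimal consistent-hashing ring. This is a generic sketch of the DHT idea, not the paper's hybrid model; the node names are hypothetical.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Map a string to a point on the ring via SHA-1."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Minimal DHT-style lookup: each key is owned by the first
    node whose hash point is clockwise of the key's hash point."""
    def __init__(self, nodes):
        self._ring = sorted((_hash(n), n) for n in nodes)
        self._points = [p for p, _ in self._ring]

    def lookup(self, key: str) -> str:
        # Find the first node point past the key's point, wrapping around.
        i = bisect.bisect_right(self._points, _hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = ConsistentHashRing(["sensor-net-A", "sensor-net-B", "sensor-net-C"])
owner = ring.lookup("usn:metadata:node-42")  # deterministic for a fixed ring
```

The simplicity is the point: a lookup is one hash plus a binary search, which is also why keyword search (which has no single key to hash) does not fit the structure.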

Estimation of Uncertain Moving Object Location Data

  • Ahn Yoon-Ae;Lee Do-Yeol;Hwang Ho-Young
    • Journal of the Korea Computer Industry Society / v.6 no.3 / pp.495-508 / 2005
  • Moving objects are spatiotemporal data that change their location or shape continuously over time. Their location coordinates are periodically measured and stored in the application systems. A linear function is mainly used to estimate location information that is not in the system at the query time point. However, a new method is needed to reduce the uncertainty of the location representation, because location estimation by a linear function induces estimation error. This paper proposes an application method of cubic spline interpolation in order to reduce the deviation of location estimation by a linear function. First, we define the location information of a moving object in two-dimensional space. Next, we apply cubic spline interpolation to the location estimation of the proposed data model and describe the algorithm of the estimation operation. Finally, the precision of this estimation operation model is evaluated experimentally. The experiments produce more accurate results than the linear-function method.
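The idea of replacing linear estimation with a cubic interpolant can be sketched with a Catmull-Rom segment, one common interpolating cubic spline, used here as a stand-in for the paper's formulation; the sample values are hypothetical.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom cubic segment between p1 and p2 at t in [0, 1].
    Unlike linear interpolation between p1 and p2 alone, the curve bends
    according to the neighboring samples p0 and p3, which reduces the
    estimation error on smoothly curving trajectories."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Estimate a coordinate halfway between two measured samples
# (hypothetical x positions at four equally spaced time steps).
xs = [0.0, 1.0, 2.5, 3.0]
x_est = catmull_rom(*xs, 0.5)   # position between samples 1 and 2
```

At `t = 0` and `t = 1` the segment passes exactly through the measured points `p1` and `p2`, so the spline never contradicts the stored measurements, only the estimates between them.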


Tax Judgment Analysis and Prediction using NLP and BiLSTM (NLP와 BiLSTM을 적용한 조세 결정문의 분석과 예측)

  • Lee, Yeong-Keun;Park, Koo-Rack;Lee, Hoo-Young
    • Journal of Digital Convergence / v.19 no.9 / pp.181-188 / 2021
  • Research on, and the importance of, legal services that apply AI so that difficult legal fields can be easily understood and made predictable is increasing. In this study, based on decisions of the Tax Tribunal in the field of tax law, a model was built through self-learning via information collection and data processing; its prediction results were returned in answer to user queries, and the accuracy was verified. The proposed model collects information on tax decisions and extracts useful data through web crawling, and generates word vectors by applying Word2Vec's FastText algorithm to the output optimized through NLP. 11,103 cases from 2017 to 2019 were collected and classified, and the model was verified with 70% accuracy. It can be useful in various legal systems and as prior research toward more efficient applications.
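What distinguishes FastText from plain Word2Vec is its subword model: a word vector is the sum of vectors for the word's character n-grams. A minimal sketch of that n-gram extraction (the vector table itself is omitted):

```python
def fasttext_ngrams(word: str, n: int = 3) -> list:
    """Character n-grams as used in FastText's subword model: the word
    is wrapped in boundary markers '<' and '>' and split into overlapping
    n-grams. The word's vector is then the sum of its n-gram vectors,
    which lets the model handle words unseen during training."""
    wrapped = f"<{word}>"
    return [wrapped[i:i + n] for i in range(len(wrapped) - n + 1)]

grams = fasttext_ngrams("query")  # ['<qu', 'que', 'uer', 'ery', 'ry>']
```

This subword sharing is useful for morphologically rich text such as Korean legal documents, where inflected forms of a term should land near each other in vector space.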

Numerical Web Model for Quality Management of Concrete based on Compressive Strength (압축강도 기반의 콘크리트 품질관리를 위한 웹 전산모델 개발)

  • Lee, Goon-Jae;Kim, Hak-Young;Lee, Hye-Jin;Hwang, Seung-Hyeon;Yang, Keun-Hyeok
    • Journal of the Korea Institute of Building Construction / v.21 no.3 / pp.195-202 / 2021
  • Concrete quality is mainly managed through the reliable prediction and control of compressive strength. Although related industries have established relevant datasets based on mixture proportions and compressive strength gain, these have not been shared due to various reasons, including technology leakage. Consequently, excessive cost and effort have been wasted on quality control. This study aimed to develop a web-based numerical model that presents diverse optimal values, including concrete strength predictions, to the user, and to establish a sustainable database (DB) collection system by inducing the data entered by users to be collected into the DB. The system handles the overall technology related to concrete. In particular, it predicts compressive strength with a mean accuracy of 89.2% by applying an artificial neural network method, modeled on extensive DBs.
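The shape of such a strength predictor can be sketched as a single-hidden-layer network mapping mix features to a strength value. This is a toy forward pass with hypothetical weights and features, not the study's trained model.

```python
def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer network: ReLU hidden layer, linear output.
    x holds input features (e.g., mix proportions); a real model would
    learn W1, b1, W2, b2 from the compressive-strength database."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Toy weights for a 2-feature input (all values hypothetical).
W1 = [[1.0, 0.5], [-0.5, 1.0]]
b1 = [0.0, 0.1]
W2 = [2.0, 1.0]
b2 = 5.0
strength = mlp_forward([0.4, 0.28], W1, b1, W2, b2)
```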

Approximate Top-k Labeled Subgraph Matching Scheme Based on Word Embedding (워드 임베딩 기반 근사 Top-k 레이블 서브그래프 매칭 기법)

  • Choi, Do-Jin;Oh, Young-Ho;Bok, Kyoung-Soo;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.22 no.8 / pp.33-43 / 2022
  • Labeled graphs are used to represent entities, their relationships, and their structures in real data such as knowledge graphs and protein interactions. With the rapid development of IT and the explosive increase in data, subgraph matching technology is needed to provide the information that users are interested in. In this paper, we propose an approximate top-k labeled subgraph matching scheme that considers the semantic similarity of labels and differences in graph structure. The proposed scheme utilizes a learning model using FastText in order to consider the semantic similarity of labels. In addition, a label similarity graph (LSG) is used for approximate subgraph matching by calculating similarity values between labels in advance. Through the LSG, we can resolve the limitation of existing schemes, in which subgraph expansion is possible only if labels match exactly. The scheme supports structural similarity for a query graph by performing searches up to 2 hops. Based on the similarity values, we provide k subgraph matching results. We conduct various performance evaluations to show the superiority of the proposed scheme.
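Precomputing label-to-label similarity, as the LSG does, typically reduces to pairwise cosine similarity over label embeddings. A minimal sketch with hypothetical embedding vectors (real ones would come from the FastText model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-dimensional label embeddings.
emb = {
    "doctor":    [0.90, 0.10, 0.30],
    "physician": [0.85, 0.15, 0.35],
    "car":       [0.10, 0.90, 0.00],
}
# Precompute similarity for every unordered label pair (the LSG idea):
lsg = {(a, b): cosine(emb[a], emb[b]) for a in emb for b in emb if a < b}
```

With such a table, expansion can follow a label like "physician" from a query containing "doctor" even though the strings never match exactly.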

A Study on the Practice Model for Prescription Information Management of EMR Certification standard - Focus on Data management using SQL - (처방정보관리를 위한 EMR 인증기준의 실습 모델 연구 - SQL을 이용한 데이터 관리 중심 -)

  • Joon-Young Choi
    • Journal of the Health Care and Life Science / v.10 no.1 / pp.25-38 / 2022
  • In this study, an SQL practice model is presented for understanding EMR certification standards and for data management practice by healthcare information managers. The aim is to practice prescription information management for the functionality of the EMR certification standards through a health and medical information management practice program. The data management practice according to the EMR certification criteria consists of medication master management and medication name inquiry, medication prescription inquiry, previous medication prescription inquiry after conversion of medical care information, examination result inquiry when administering medicine, and examination prescription records inquiry. Additionally, dietary prescription records, other prescription records, and return reason inquiry when administration of medicine is stopped are included. Accordingly, using the prescription management database of the MS-ACCESS-based health care information management education system, SQL statements were written that can query the contents of the prescription information management certification standards. In the EMR certification practice, learners can understand the certification standards more easily by directly extracting the query items required by the standards using SQL, and they can improve their data management and information generation skills by extracting data beyond the certification standards.
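A medication prescription inquiry of the kind described can be sketched with an in-memory SQLite database. The table and column names here are hypothetical stand-ins; the study's actual schema lives in its MS-ACCESS-based education system.

```python
import sqlite3

# Hypothetical prescription schema, illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE medication_order (
        order_id   INTEGER PRIMARY KEY,
        patient_id TEXT,
        drug_code  TEXT,
        drug_name  TEXT,
        order_date TEXT
    )""")
conn.executemany(
    "INSERT INTO medication_order VALUES (?, ?, ?, ?, ?)",
    [(1, "P001", "D100", "Aspirin 100mg",   "2022-03-01"),
     (2, "P001", "D200", "Metformin 500mg", "2022-03-05"),
     (3, "P002", "D100", "Aspirin 100mg",   "2022-03-02")])

# Medication prescription inquiry for one patient, latest order first.
rows = conn.execute(
    "SELECT drug_name, order_date FROM medication_order "
    "WHERE patient_id = ? ORDER BY order_date DESC",
    ("P001",)).fetchall()
```

The pedagogical point is the same as in the study: each certification criterion becomes a concrete SELECT statement that learners write and run against the practice database.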

Character-based Subtitle Generation by Learning of Multimodal Concept Hierarchy from Cartoon Videos (멀티모달 개념계층모델을 이용한 만화비디오 컨텐츠 학습을 통한 등장인물 기반 비디오 자막 생성)

  • Kim, Kyung-Min;Ha, Jung-Woo;Lee, Beom-Jin;Zhang, Byoung-Tak
    • Journal of KIISE / v.42 no.4 / pp.451-458 / 2015
  • Previous multimodal learning methods focus on problem-solving aspects, such as image and video search and tagging, rather than on knowledge acquisition via content modeling. In this paper, we propose the Multimodal Concept Hierarchy (MuCH), a content modeling method that uses a cartoon video dataset, together with a character-based subtitle generation method based on the learned model. The MuCH model has a multimodal hypernetwork layer, in which the patterns of words and image patches are represented, and a concept layer, in which each concept variable is represented by a probability distribution over the words and image patches. The model can learn the characteristics of the characters as concepts from video subtitles and scene images using a Bayesian learning method, and can generate character-based subtitles from the learned model when text queries are provided. As an experiment, the MuCH model learned concepts from 'Pororo' cartoon videos with a total length of 268 minutes and generated character-based subtitles. Finally, we compare the results with those of other multimodal learning models. The experimental results indicate that, given the same text query, our model generates more accurate and more character-specific subtitles than other models.

A Proposal of a Mobile Augmented Reality Service Model based on Street Data, and its Implementation (도로데이터 기반의 모바일 증강현실 서비스 모델 제안 및 시스템 구현)

  • Lee, Jeong Hwan;Lee, Jun;Kwon, Yong Jin
    • Spatial Information Research / v.23 no.5 / pp.9-19 / 2015
  • The popularity of smart devices and Location Based Services (LBSes) is increasing, in part due to users' demand for personalized information associated with their location. These services provide intuitive and realistic information by adopting Augmented Reality (AR) technology, which utilizes various sensors embedded in mobile devices. However, these services have inherent problems due to the devices' small screen size and the complexity of the real-world environment: content overlaps on the small screen, and icons are placed without considering the user's possible movement. In order to solve these problems, this paper proposes a mobile augmented reality model that applies street data. The model consists of two layers, "Real Space" and "Information Space". In the model, a user creates a query by scanning the nearby street with a camera in real space and searches for accessible content along the street through the information space. Furthermore, the results are placed on both sides of the street to solve the overlapping issue. The proposed model is implemented for the "Aenigol" region, and its efficiency and usefulness are verified.

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia Pacific Journal of Information Systems / v.18 no.1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, the retrieval of similar semantic business processes is necessary in order to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results from an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators. These variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple similarity algorithms for text retrieval, such as TF-IDF and Levenshtein edit distance, to devise our approaches, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance show better performance than the other devised methods; these two measures focus on similarity between the names and descriptions of processes. In addition, we calculate the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and derivatives of these measures, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in these two experiments. For retrieving semantic processes, it is better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
In summary, we generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the retrieval results from an exact matching engine such as SPARQL, and compare the retrieval performances of the similarity algorithms. As for limitations and future work, we need to perform experiments with datasets from other domains, and, since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
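The three textbook measures named in this abstract can be stated compactly; these are standard definitions, not the paper's composite algorithms.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: minimum number of insertions, deletions, and
    substitutions turning a into b (row-by-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def jaccard(s: set, t: set) -> float:
    """|S intersect T| / |S union T| -- overlap of process components."""
    return len(s & t) / len(s | t) if s | t else 1.0

def dice(s: set, t: set) -> float:
    """2 * |S intersect T| / (|S| + |T|) -- Dice's coefficient."""
    return 2 * len(s & t) / (len(s) + len(t)) if s or t else 1.0
```

Levenshtein operates on names and descriptions (strings), while Jaccard and Dice operate on sets of subcomponents such as part processes, goals, and exceptions, which is why the paper can combine them into measures like Lev-TFIDF-JaccardAll.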

Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.1 / pp.726-734 / 2015
  • Relational databases, which structure data, are currently the most widely used tools in data management. However, in relational databases, service becomes slower as the amount of data increases because of constraints on the read and write operations used to save or query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configuration of hardware, CPU, memory, and network, to support smooth operation. In this paper, in order to improve web information services that slow down as data in relational databases grows, we implemented a model that extracts a large amount of data quickly and safely for users by sending the data to the Hadoop Distributed File System (HDFS), unifying and reconstructing it, and then processing the HDFS files. We implemented our model in a web-based civil affairs system that stores image files, which involves irregular data processing. Our proposed system's data processing was found to be 0.4 seconds faster than that of a relational database system. Thus, we found that it is possible to support web information services with a Hadoop-based big data processing technique for processing large amounts of data, as in conventional relational databases. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for web services that provide fast information processing for organizations that require efficient processing of big data due to the growth of conventional relational databases.
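The processing pattern Hadoop distributes over HDFS can be sketched in-process as map, shuffle, reduce. This illustrates the data flow only; a real deployment runs the same three phases across a cluster, and the document names are hypothetical.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """In-process sketch of the MapReduce flow: map each record to
    (key, value) pairs, shuffle (group) by key, then reduce each group."""
    shuffled = defaultdict(list)
    for record in records:          # map phase
        for key, value in mapper(record):
            shuffled[key].append(value)
    return {key: reducer(key, values)   # reduce phase
            for key, values in shuffled.items()}

# Example: count file types in a batch of hypothetical stored documents.
docs = ["img.png", "scan.png", "form.pdf"]
counts = map_reduce(
    docs,
    mapper=lambda name: [(name.rsplit(".", 1)[-1], 1)],
    reducer=lambda key, values: sum(values))
```

Because each map call touches one record and each reduce call one key group, the work partitions naturally across machines, which is what lets throughput keep up as the dataset outgrows a single relational server.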