• Title/Summary/Keyword: Search Query

Search Results: 688

An Analysis of High School Korean Language Instruction Regarding Universal Design for Learning: Social Big Data Analysis and Survey Analysis (보편적 학습설계 측면에서의 고등학교 국어과 교수 실태: 소셜 빅데이터 및 설문조사 분석)

  • Shin, Mikyung; Lee, Okin
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.1 / pp.326-337 / 2020
  • This study examined public interest in high school Korean language instruction and universal design for learning (UDL) using social big data analysis. The 10,339 search results observed led to the conclusion that public interest in UDL was significantly lower than that in high school Korean language instruction. The big data association analysis showed that 17.22% of the related terms concerned "curriculum." In addition, a survey of 330 high school students examined how their teachers apply UDL in the classroom. Students perceived computers as the technology tool most frequently used in daily classes (38.79%), and teacher-led lectures (52.12%) were the most frequently observed method of instruction. Compared with second- and third-year students, first-year students reported more frequent use of technology tools and varied instructional media (ps < .05). Students responded relatively more positively to the items on the provision of multiple means of representation, agreeing that lesson content was easier to understand when various study methods and materials were available. First-year students were generally more positive about their teachers' incorporation of UDL.

Trajectory Indexing for Efficient Processing of Range Queries (영역 질의의 효과적인 처리를 위한 궤적 인덱싱)

  • Cha, Chang-Il; Kim, Sang-Wook; Won, Jung-Im
    • The KIPS Transactions:PartD / v.16D no.4 / pp.487-496 / 2009
  • This paper addresses an indexing scheme capable of efficiently processing range queries in a large-scale trajectory database. After discussing the drawbacks of previous indexing schemes, we propose a new scheme that divides the temporal dimension into multiple time intervals and builds an index of the line segments for each interval, together with a supplementary index of the line segments within each interval. Unlike previous schemes that store the entire index on disk, this scheme dramatically improves the performance of insert and search operations by keeping a main-memory index, particularly for the time interval containing the segments of objects that are currently moving or have just completed their movements. Each time-interval index is built as follows: first, the spatial extent is divided into multiple spatial cells to which the line segments are assigned evenly, and a 2D-tree maintains information on those cells; then, for each cell, an additional 3D R*-tree is created over the spatio-temporal space (x, y, t). This multi-level indexing strategy remedies the shortcomings of the legacy schemes. Performance results obtained from intensive experiments show that our scheme improves retrieval performance by 3 to 10 times while using much less storage space.
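
A minimal sketch of the two-level idea described above, under simplifying assumptions: the time axis is split into fixed intervals, each interval keeps a uniform grid of spatial cells, and each cell stores the line segments assigned to it. The paper uses a 2D-tree over the cells and a 3D R*-tree per cell; plain dictionaries and a brute-force bounding-box test stand in for those structures here, and all class and parameter names are illustrative.

```python
from collections import defaultdict

class IntervalGridIndex:
    """Toy two-level index: time intervals -> spatial cells -> line segments."""

    def __init__(self, interval_len=100.0, cell_size=10.0):
        self.interval_len = interval_len
        self.cell_size = cell_size
        # {interval_id: {(cx, cy): [segment, ...]}}
        self.intervals = defaultdict(lambda: defaultdict(list))

    def insert(self, seg):
        # seg = ((x1, y1, t1), (x2, y2, t2)); assigned by its start point for brevity
        (x1, y1, t1), _ = seg
        iid = int(t1 // self.interval_len)
        cell = (int(x1 // self.cell_size), int(y1 // self.cell_size))
        self.intervals[iid][cell].append(seg)

    def range_query(self, x_lo, x_hi, y_lo, y_hi, t_lo, t_hi):
        hits = []
        first, last = int(t_lo // self.interval_len), int(t_hi // self.interval_len)
        for iid in range(first, last + 1):
            for (cx, cy), segs in self.intervals.get(iid, {}).items():
                # coarse filter: skip cells whose extent misses the query window
                if (cx + 1) * self.cell_size < x_lo or cx * self.cell_size > x_hi:
                    continue
                if (cy + 1) * self.cell_size < y_lo or cy * self.cell_size > y_hi:
                    continue
                # exact bounding-box test per segment
                for (x1, y1, t1), (x2, y2, t2) in segs:
                    if (max(x1, x2) >= x_lo and min(x1, x2) <= x_hi and
                            max(y1, y2) >= y_lo and min(y1, y2) <= y_hi and
                            max(t1, t2) >= t_lo and min(t1, t2) <= t_hi):
                        hits.append(((x1, y1, t1), (x2, y2, t2)))
        return hits

idx = IntervalGridIndex()
idx.insert(((3.0, 4.0, 12.0), (5.0, 6.0, 15.0)))
idx.insert(((80.0, 80.0, 250.0), (82.0, 81.0, 255.0)))
print(idx.range_query(0, 10, 0, 10, 0, 50))   # finds only the first segment
```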

Relational Database SQL Test Auto-scoring System

  • Hur, Tai-Sung
    • Journal of the Korea Society of Computer and Information / v.24 no.11 / pp.127-133 / 2019
  • SQL is the most common language for data processing, so most colleges offer SQL in their curriculum. In this research, an automatic SQL test scoring system is proposed to make SQL education more efficient. The system performs scoring with its own algorithms instead of an expensive DBMS(Data Base Management System) and produced satisfactory results. A test question bank was built from 'personnel management' and 'academic management' schemas, and users receive a different set of test questions each time. Scoring distinguishes two kinds of statements: those that do not change a table (SELECT) and those that actually change it (UPDATE, INSERT, DELETE). For a SELECT question, the correct answer and the student's response are both executed and their results compared, so the student's answer is evaluated against the table produced by the correct answer. UPDATE, INSERT, and DELETE statements actually change the data table, so the data is restored with a ROLLBACK command after comparison. The system was implemented and tested 772 times with 88 students of the Computer Information Division of our college. The results show that the average scoring time for a test of 10 questions is 0.052 seconds, a notable performance given that a human grader cannot process multiple responses at the same time. In the near future, we plan to develop a question system that also takes the difficulty of each question into account.
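
A hedged sketch of the SELECT-comparison and ROLLBACK-restoration steps described above, using Python's sqlite3 module. The paper's system scores with its own algorithms rather than a DBMS, so this is only an illustration of the comparison logic; the table, sample questions, and function names are made up.

```python
import sqlite3

def score_select(conn, answer_sql, student_sql):
    """Score a SELECT question by comparing the two result sets."""
    try:
        expected = sorted(conn.execute(answer_sql).fetchall())
        actual = sorted(conn.execute(student_sql).fetchall())
    except sqlite3.Error:
        return 0
    return 1 if expected == actual else 0

def score_update(conn, table, answer_sql, student_sql):
    """Score a data-changing question: run it, snapshot the table, roll back."""
    def table_after(sql):
        try:
            conn.execute(sql)
            rows = sorted(conn.execute(f"SELECT * FROM {table}").fetchall())
        except sqlite3.Error:
            rows = None
        finally:
            conn.rollback()   # restore the original data, as the ROLLBACK step does
        return rows

    expected, actual = table_after(answer_sql), table_after(student_sql)
    return 1 if expected is not None and expected == actual else 0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp(id INTEGER, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, "Kim", "HR"), (2, "Lee", "IT")])
conn.commit()

print(score_select(conn, "SELECT name FROM emp WHERE dept='IT'",
                         "SELECT name FROM emp WHERE dept = 'IT'"))    # 1: same result set
print(score_update(conn, "emp",
                   "UPDATE emp SET dept='OPS' WHERE id=1",
                   "UPDATE emp SET dept='OPS' WHERE name='Kim'"))      # 1: same table afterwards
```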

A Knowledge Graph on Japanese "Comfort Women": Interlinking Fragmented Digital Archival Resources (일본군 '위안부' 지식그래프: 파편화된 디지털 기록의 연결)

  • Park, Haram; Kim, Haklae
    • Journal of Korean Society of Archives and Records Management / v.21 no.3 / pp.61-78 / 2021
  • Records on the Japanese "Comfort Women" have been managed individually by private organizations or institutions, and some are provided as digital archives on the Internet. However, the records in these digital archives differ in the composition and representation of metadata from institution to institution, and there is no consistent structure describing the relationships between and among the records, leaving them fragmented and disconnected. This paper proposes a knowledge model for interlinking digital archival resources and builds a knowledge graph by integrating records from the distributed digital archives. It derives common elements by analyzing metadata from the diverse archives and expresses them in standard vocabularies to semantically describe the entities and relationships of the archival resources. In particular, the study refines the collected data so that dispersed records can be searched and threaded together, and enriches it with external data to provide significant contextual information about the records. The knowledge graph is evaluated with queries measuring the connectivity between the distributed records. As a result, the knowledge graph can interlink and retrieve fragmented records, provide substantial contextual information through external data enrichment, and answer semantic queries that accurately match the user's intentions.
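
A small illustration of how a knowledge graph can thread records held by different archives once they share entities, using rdflib and a SPARQL connectivity query. All URIs, properties, and sample data below are hypothetical placeholders, not the paper's actual knowledge model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/cw/")
g = Graph()

# two records from different (hypothetical) archives, both about the same person
g.add((EX.record1, RDF.type, EX.Record))
g.add((EX.record1, EX.heldBy, Literal("Archive A")))
g.add((EX.record1, EX.about, EX.person1))
g.add((EX.record2, RDF.type, EX.Record))
g.add((EX.record2, EX.heldBy, Literal("Archive B")))
g.add((EX.record2, EX.about, EX.person1))
g.add((EX.person1, RDFS.label, Literal("testimony subject")))

# connectivity check: pairs of records from different holders that share an entity
q = """
PREFIX ex: <http://example.org/cw/>
SELECT ?r1 ?r2 WHERE {
  ?r1 ex:about ?e . ?r2 ex:about ?e .
  ?r1 ex:heldBy ?h1 . ?r2 ex:heldBy ?h2 .
  FILTER (?h1 != ?h2)
}
"""
for row in g.query(q):
    print(row.r1, "is linked to", row.r2)
```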

Development and Utilization of Linked Data of Port Maintenance Information for Port Facilities Based on Port BIM Standards (항만 BIM 표준 기반 항만 유지관리 정보의 링크드데이터 구축 및 활용)

  • Shin, Jaeyoung; Moon, Hyounseok
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.4 / pp.501-510 / 2023
  • The importance of using construction data is increasing with the recent trend toward smart construction. However, construction project and maintenance information is distributed across the web, and the existing BIM(Building Information Modeling) information exchange and linking method based on IFC(Industry Foundation Classes) cannot connect BIM data with web resources. This study aims to establish a BIM-based port facility data integration system using linked data(LD) technology in order to integrate BIM with heterogeneous data in the port maintenance domain. To this end, a port BIM-based ifcOWL and a port facility maintenance ontology were designed, and LD was built for the BIM and maintenance information of Busan New Port 2-1 Pier3, a BIM pilot project. In addition, service prototypes such as search, statistics, and SPARQL(SPARQL Protocol and RDF Query Language) endpoint functions were implemented on the published LD. An LD-based information system is expected to improve the reusability of information by turning the existing closed information system into an open one and by exposing BIM and maintenance data as web resources in a standard format.
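
As an illustration of the SPARQL endpoint service mentioned above, the following sketch queries an endpoint for the inspection history linked to port facility elements using SPARQLWrapper. The endpoint URL, prefixes, and property names are hypothetical placeholders rather than the project's actual schema, and the code assumes a live endpoint is available.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/port-bim/sparql")  # placeholder URL
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
PREFIX pm: <http://example.org/port-maintenance#>
SELECT ?element ?inspection ?date WHERE {
  ?element    a pm:QuayWallComponent .
  ?inspection pm:inspects ?element ;
              pm:inspectionDate ?date .
}
ORDER BY DESC(?date) LIMIT 10
""")

results = endpoint.query().convert()
for b in results["results"]["bindings"]:
    print(b["element"]["value"], b["inspection"]["value"], b["date"]["value"])
```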

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu; Park, Hyun-Jung; Park, Jin-Soo
    • Asia pacific journal of information systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available, effectively and efficiently ranking search results will become ever more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance increases further if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web(WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, since the Resource Description Framework(RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, which makes ranking algorithms for RDF knowledge bases highly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure-based ranking seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a single recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties, and consequently the ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research addressed the ranking problem for query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristic of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method to the Semantic Web. Despite these efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes must be high, or the resources must be described in sufficient detail, for their algorithm to work properly. Second, the Tightly-Knit Community(TKC) effect, the phenomenon in which pages that are less important yet densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that solves the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach adopted by previous research, under our approach a user determines the weight of a property by comparing its significance relative to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are intended to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and it turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect and further sheds light on the other limitations posed by the previous research. In addition, we propose two ways to incorporate datatype properties, which previous methods did not employ even when they have some bearing on resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of its ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, overlooked in previous research, which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
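
A simplified, hypothetical illustration of the kind of property-weighted importance propagation discussed above: each triple passes a share of its subject's score to its object, scaled by a per-property weight that the user chooses for the class being ranked. This is a sketch of the general idea only, not the authors' exact class-oriented algorithm; the triples, weights, and function name are made up.

```python
def weighted_rank(triples, weights, damping=0.85, iters=50):
    """PageRank-style propagation where each property has a user-chosen weight."""
    nodes = {s for s, _, _ in triples} | {o for _, _, o in triples}
    score = {n: 1.0 / len(nodes) for n in nodes}
    # total outgoing weight per subject, used to normalise its contribution
    out = {}
    for s, p, _ in triples:
        out[s] = out.get(s, 0.0) + weights.get(p, 0.0)
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for s, p, o in triples:
            if out.get(s, 0.0) > 0:
                new[o] += damping * score[s] * weights.get(p, 0.0) / out[s]
        score = new
    return score

# toy RDF-like triples about papers (hypothetical data)
triples = [
    ("paperA", "cites", "paperB"),
    ("paperC", "cites", "paperB"),
    ("paperB", "cites", "paperD"),
    ("paperA", "sameVenueAs", "paperC"),
]
# the user judges 'cites' far more indicative of importance than 'sameVenueAs'
weights = {"cites": 1.0, "sameVenueAs": 0.2}
for node, s in sorted(weighted_rank(triples, weights).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {s:.3f}")
```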

Prediction of infectious diseases using multiple web data and LSTM (다중 웹 데이터와 LSTM을 사용한 전염병 예측)

  • Kim, Yeongha; Kim, Inhwan; Jang, Beakcheol
    • Journal of Internet Computing and Services / v.21 no.5 / pp.139-148 / 2020
  • Infectious diseases have long plagued mankind, and predicting and preventing them has been a major challenge. For this reason, various studies have been conducted to predict infectious diseases. Most early studies relied on epidemiological data from the Centers for Disease Control and Prevention (CDC), but the data provided by the CDC is updated only once a week, making it difficult to predict the number of disease outbreaks in real time. With the emergence of various Internet media driven by the recent development of IT technology, studies have instead predicted the occurrence of infectious diseases from web data, yet most of the studies we reviewed use a single web data source. Forecasting from a single web data source, however, makes it difficult to collect large amounts of training data and to build models that accurately predict recent outbreaks such as "COVID-19". We therefore demonstrate through experiments that LSTM models using multiple web data sources to predict the occurrence of infectious diseases are more accurate than those using a single source, and we suggest a model suitable for predicting infectious diseases. In this experiment, we predicted the occurrence of "Malaria" and "Epidemic-parotitis" using single-source models and the proposed model. A total of 104 weeks of news, SNS, and search query data were collected, of which 75 weeks were used as training data and 29 weeks as validation data. When predicting the validation data with the proposed model and the single-source models, the Pearson correlation coefficients of the proposed model's predictions were the highest at 0.94 and 0.86, and its RMSE was also the lowest at 0.19 and 0.07.
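
A minimal Keras sketch of the multi-source setup described above: weekly news, SNS, and search-query signals form a multivariate sequence, and an LSTM predicts the next week's case count. The window length, layer sizes, and the random placeholder data are assumptions; the paper's actual features and hyperparameters may differ.

```python
import numpy as np
from tensorflow import keras

WINDOW, FEATURES = 4, 3          # 4-week lookback; news, SNS, search-query volumes
rng = np.random.default_rng(0)
series = rng.random((104, FEATURES))   # 104 weeks of web signals (dummy data)
cases = rng.random(104)                # weekly case counts (dummy data)

# build (samples, WINDOW, FEATURES) -> next-week target pairs
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = cases[WINDOW:]

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, FEATURES)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:75], y[:75], epochs=20, verbose=0)      # ~75 weeks for training
pred = model.predict(X[75:], verbose=0).ravel()      # remaining weeks for validation
print("RMSE:", float(np.sqrt(np.mean((pred - y[75:]) ** 2))))
```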

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin; Rho, Sang-Kyu; Yun, Ji-Young Agnes; Park, Jin-Soo
    • Asia pacific journal of information systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document, and as a result keyword extraction is limited to terms that appear in the document; it cannot generate implicit keywords that are not included in the document. According to Turney's experimental results, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this means that 10% to 36% of the keywords assigned by authors do not appear in the article and therefore cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of keywords assigned by authors are not included in the full text. This is why we adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment named IVSM(Inverse Vector Space Model). The model is based on the vector space model,
which is a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is embedded in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
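
A minimal sketch of the IVSM matching step outlined above: each candidate keyword set is a weighted term vector, the target document is a term-frequency vector, and the keyword sets with the highest cosine similarity supply the keywords. The toy keyword sets, weights, and document are hypothetical.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# (1) keyword sets with term weights (toy training data)
keyword_sets = {
    ("logistics", "port"):    {"cargo": 2.0, "terminal": 1.5, "shipping": 1.0},
    ("retrieval", "ranking"): {"query": 2.0, "document": 1.5, "relevance": 1.0},
}

# (2)-(3) preprocess the target document into a term-frequency vector
doc = "the query returns each relevant document and ranks every document by relevance"
doc_vec = Counter(doc.split())

# (4)-(5) score every keyword set and emit the keywords of the best match
scored = sorted(keyword_sets.items(), key=lambda kv: cosine(doc_vec, kv[1]), reverse=True)
best_keywords, best_vec = scored[0]
print(best_keywords, round(cosine(doc_vec, best_vec), 3))
```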