• Title/Summary/Keyword: Semantic processing

Search Results: 814

A Study on the Definition of Data Literacy for Elementary and Secondary Artificial Intelligence Education (초·중등 인공지능 교육을 위한 데이터 리터러시 정의 연구)

  • Kim, SeulKi;Kim, Taeyoung
    • Korean Association of Information Education: Conference Proceedings (한국정보교육학회:학술대회논문집)
    • /
    • 2021.08a
    • /
    • pp.59-67
    • /
    • 2021
  • The development of AI technology has brought about big changes in our lives. As AI's influence grows from everyday life to society and the economy, the importance of education on AI and data is also growing. In particular, the OECD education research report and various domestic informatics curriculum studies address data literacy and present it as an essential competency. Looking at domestic and international studies, one can see that the definition of data literacy differs in specific content and scope from researcher to researcher. Thus, the definitions used in major studies related to data literacy were analyzed from various angles. Along with word frequency analysis, the Word2vec natural language processing method was used to analyze the semantic similarity of the terms used to define data literacy, and the terms were organized around the content elements of curriculum research to derive the definition of 'understanding and using data to process information'. Based on the definition of data literacy derived in this study, we hope that its contents will be revised and supplemented, and that further research will provide a good foundation for educational research that develops students' future capabilities. (A minimal Word2vec similarity sketch follows this entry.)

  • PDF
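
The Word2vec similarity analysis mentioned in the abstract above can be illustrated with a minimal sketch. It uses gensim, and the tokenized definition sentences below are invented placeholders, not the study's actual corpus.

```python
# Minimal sketch: train Word2vec on tokenized definition sentences and
# inspect semantic similarity between candidate data-literacy terms.
# The sentences are illustrative placeholders, not the paper's corpus.
from gensim.models import Word2Vec

sentences = [
    ["data", "literacy", "ability", "understand", "use", "data"],
    ["data", "literacy", "collect", "analyze", "interpret", "data"],
    ["data", "literacy", "process", "information", "with", "data"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=200)

# Semantic similarity between two terms that appear in the definitions.
print(model.wv.similarity("understand", "use"))
# Terms most similar to "data" in this toy corpus.
print(model.wv.most_similar("data", topn=3))
```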

A study on the methodology for the automatic semantic web service composition problem (자동적인 시맨틱 웹 서비스 구성문제를 위한 방법론에 관한 연구)

  • Yang, Jin-Hyuk;Lee, Kang-Chan;Kim, Sung-Han;Min, Jae-Hong;Chung, In-Jeong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.11c
    • /
    • pp.2265-2268
    • /
    • 2002
  • Due to the exponential growth of Internet users and the explosive increase in web pages, it has become very difficult to efficiently find and use the desired information on the Internet. As an effort to fundamentally solve these problems, the Semantic Web, which machines can understand and reason over, has emerged. Among the various technologies related to the Semantic Web, Semantic Web Services aim to provide users with higher-quality services than those available in the current Internet environment. Semantic Web Services consist of the discovery, execution, and composition of web services. In this paper, as part of the effort to automate Semantic Web Services, we address the problem of composing web services on the Semantic Web. The Semantic Web Service composition problem is that of combining various web services to satisfy a user's requirements. However, existing approaches to the web service composition problem, such as WSFL, XLANG, BPEL4WS, and DAML-S, provide no means to verify user requirements or to check the quality of the resulting service. Therefore, this paper presents a methodology that can solve these problems of the Semantic Web Service composition problem. By transforming the composition problem into a constraint satisfaction problem, the proposed methodology can not only exploit the various well-known constraint satisfaction algorithms but also verify user requirements and check the quality of service. (A toy constraint-satisfaction sketch follows this entry.)

  • PDF
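
The constraint-satisfaction reformulation described in the abstract above can be illustrated with a toy sketch. It uses the python-constraint package; the candidate services, their I/O types, and the QoS bound are made-up examples, not the paper's data.

```python
# Toy sketch: web service composition cast as a constraint satisfaction problem.
# Uses the python-constraint package; services and QoS values are invented.
from constraint import Problem

# Candidate services for each composition step: (name, data_type, response_ms)
search_services = [("searchA", "isbn", 120), ("searchB", "title", 80)]
order_services  = [("orderA", "isbn", 200), ("orderB", "title", 300)]

problem = Problem()
problem.addVariable("search", search_services)
problem.addVariable("order", order_services)

# Constraint 1: the search service's output type must match the order service's input.
problem.addConstraint(lambda s, o: s[1] == o[1], ("search", "order"))
# Constraint 2 (user requirement / QoS): total response time under 350 ms.
problem.addConstraint(lambda s, o: s[2] + o[2] < 350, ("search", "order"))

for solution in problem.getSolutions():
    print(solution["search"][0], "->", solution["order"][0])
```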

Fast information extraction algorithm for object-based MPEG-4 application from MPEG-2 bit-stream (MPEG-2 비트열로부터 객체 기반 MPEG-4 응용을 위한 고속 정보 추출 알고리즘)

  • 양종호;원치선
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.12A
    • /
    • pp.2109-2119
    • /
    • 2001
  • In this paper, a fast information extraction algorithm for object-based MPEG-4 applications from an MPEG-2 bit-stream is proposed. For object-based MPEG-4 conversion, we need to extract such information as the object image, shape image, macroblock motion vectors, and header information from the MPEG-2 bit-stream. Using the extracted information, fast conversion to object-based MPEG-4 is possible. The proposed object extraction algorithm has two important steps, namely motion vector extraction from the MPEG-2 bit-stream and the watershed algorithm. The algorithm extracts objects with the user's assistance in the intra frame and tracks them in the following inter frames. If the result for a fast-moving object is unsatisfactory, the user can intervene to correct the segmentation. The proposed algorithm consists of two steps: intra-frame object extraction and inter-frame tracking. In the object extraction step, the user extracts a semantic object directly by using block classification and watersheds. The object tracking step follows the object in the subsequent frames; it is based on a boundary-fitting method using motion vectors, the object mask, and modified watersheds. Experimental results show that the proposed method can achieve a fast conversion from the MPEG-2 bit-stream to object-based MPEG-4 input. (A marker-based watershed sketch follows this entry.)

  • PDF
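
The marker-based watershed step described above can be sketched as follows, using scikit-image. Only the segmentation with user-supplied seeds is shown; the MPEG-2 bit-stream parsing and motion-vector extraction are omitted, and the frame is synthetic.

```python
# Sketch of the marker-based watershed step only (not the MPEG-2 parsing):
# user-supplied markers stand in for the block classification / user assistance,
# and the gradient of the frame drives the watershed flooding.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

frame = np.zeros((64, 64), dtype=float)
frame[20:44, 20:44] = 1.0               # toy "object" in an intra frame
frame += 0.05 * np.random.rand(64, 64)  # noise

gradient = sobel(frame)                 # edges guide the watershed

markers = np.zeros_like(frame, dtype=int)
markers[5, 5] = 1                       # background seed (user assistance)
markers[32, 32] = 2                     # object seed (user assistance)

labels = watershed(gradient, markers)   # 1 = background, 2 = object
object_mask = labels == 2               # binary shape image for the object
print("object pixels:", int(object_mask.sum()))
```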

A Study on a Service Ontology Design Scheme Using UML and OCL (UML 및 OCL을 이용한 서비스 온톨로지 설계 방안에 관한 연구)

  • Lee Yun-Su;Chung In-Jeoung
    • The KIPS Transactions:PartD
    • /
    • v.12D no.4 s.100
    • /
    • pp.627-636
    • /
    • 2005
  • The intelligent web service has been proposed for the purpose of automatic discovery, invocation, composition, inter-operation, execution monitoring, and recovery of web services through Semantic Web and agent technology. To accomplish this, an ontology is necessary so that knowledge can be reasoned over and processed by the computer. However, creating a service ontology for intelligent web services has two problems: it consumes a lot of time and cost and depends on the heuristics of the service developer, and it is hard to map completely between the service and the service ontology. Moreover, the markup language used to describe the service ontology is currently hard for a service developer to learn in a short time. This paper proposes an efficient way of designing and creating a service ontology using the MDA methodology. The proposed solution reuses the created model by designing and constructing the web service model in UML, based on MDA. After converting the platform-independent web service model into a model dependent on OWL-S, a service ontology description language, it converts the result into an OWL-S service ontology using XMI. The proposed solution has two benefits: service developers can construct the service ontology easily, and both the service and the service ontology can be created from one model. Moreover, it reduces time and cost by creating the service ontology automatically from a model, and it can flexibly cope with changes in the external environment, such as a platform change. This paper presents an example to show the validity of designing the web service model and creating the service ontology, and checks whether the created service ontology is valid. (A small sketch of the model-to-OWL-S mapping follows below.)
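
A minimal sketch of the model-to-ontology mapping is given below: a UML-like service model (reduced to a Python dict standing in for an XMI export) is turned into OWL-S triples with rdflib. The OWL-S namespace URIs and property names are assumptions based on the OWL-S 1.1 release, and the example service is hypothetical.

```python
# Minimal sketch of the model-to-ontology step: a UML-like service model
# (here just a dict) is mapped to OWL-S triples with rdflib. The OWL-S
# namespace URIs below are assumptions based on the OWL-S 1.1 release.
from rdflib import Graph, Literal, Namespace, RDF

SERVICE = Namespace("http://www.daml.org/services/owl-s/1.1/Service.owl#")
PROFILE = Namespace("http://www.daml.org/services/owl-s/1.1/Profile.owl#")
EX = Namespace("http://example.org/services#")   # hypothetical base namespace

# Platform-independent "model" exported from UML (stand-in for an XMI export).
model = {"name": "BookSearchService", "description": "Searches books by title."}

g = Graph()
g.bind("service", SERVICE)
g.bind("profile", PROFILE)

svc = EX[model["name"]]
prof = EX[model["name"] + "Profile"]
g.add((svc, RDF.type, SERVICE.Service))
g.add((svc, SERVICE.presents, prof))
g.add((prof, RDF.type, PROFILE.Profile))
g.add((prof, PROFILE.textDescription, Literal(model["description"])))

print(g.serialize(format="xml"))
```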

Neuropsychological Approaches to Mathematical Learning Disabilities and Research on the Development of Diagnostic Test (신경심리학적 이론에 근거한 수학학습장애의 유형분류 및 심층진단검사의 개발을 위한 기초연구)

  • Kim, Yon-Mi
    • Education of Primary School Mathematics
    • /
    • v.14 no.3
    • /
    • pp.237-259
    • /
    • 2011
  • Mathematics learning disability is a specific learning disorder affecting the normal acquisition of arithmetic and spatial skills. Reported prevalence rates range from 5 to 10 percent, with high rates of comorbid disabilities such as dyslexia and ADHD. In this study, the characteristics and causes of this disorder are examined. The core cause of mathematics learning disabilities is not yet clear: it may come from general cognitive problems, or a disorder of an innate intuitive number module could be the cause. Recently, researchers have tried to subdivide mathematics learning disabilities into (1) a semantic/memory type, (2) a procedural/skill type, (3) a visuospatial type, and (4) a reasoning type. Each subtype is related to specific brain areas subserving mathematical cognition. Based on these findings, the author has performed basic research to develop grade-specific diagnostic tests: number processing tests and math word problems for the lower grades and comprehensive mathematical knowledge tests for the upper grades. The results should help teachers find out students' prior knowledge and specific weaknesses and plan personalized intervention programs. The author suggests that the diagnostic tests be organized into six components: number sense, conceptual knowledge, arithmetic fact retrieval, procedural skills, mathematical reasoning/word problem solving, and visuospatial perception tests. This grouping will also help the examiner figure out the processing time for each component.

Fast information extraction algorithm for object-based MPEG-4 conversion from MPEG-1,2 (MPEG-1,2로부터 객체 기반 MPEG-4 변환을 위한 고속 정보 추출 알고리즘)

  • 양종호;박성욱
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.3
    • /
    • pp.91-102
    • /
    • 2004
  • In this paper, a fast information extraction algorithm for object-based MPEG-4 applications from MPEG-1,2 is proposed. For object-based MPEG-4 conversion, we need to extract such information as the object image, shape image, macroblock motion vectors, and header information from the MPEG-1,2 bit-stream. Using the extracted information, fast conversion to object-based MPEG-4 is possible. The proposed object extraction algorithm has two important steps, namely motion vector extraction from the MPEG-1,2 bit-stream and the watershed algorithm. The algorithm extracts objects with the user's assistance in the intra frame and tracks them in the following inter frames. If the result for a fast-moving object is unsatisfactory, the user can intervene to correct the segmentation. The proposed algorithm consists of two steps: intra-frame object extraction and inter-frame tracking. In the object extraction step, the user extracts a semantic object directly by using block classification and watersheds. The object tracking step follows the object in the subsequent frames; it is based on a boundary-fitting method using motion vectors, the object mask, and modified watersheds. Experimental results show that the proposed method can achieve a fast conversion from the MPEG-1,2 bit-stream to object-based MPEG-4 input. (A sketch of projecting the object mask with macroblock motion vectors follows this entry.)
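
The inter-frame tracking step described above can be sketched as follows: the previous frame's object mask is projected forward by per-macroblock motion vectors. The boundary refinement with modified watersheds is omitted, and the mask, motion vectors, and macroblock size are toy values.

```python
# Sketch of the inter-frame tracking idea only: the previous frame's object
# mask is projected forward by per-macroblock motion vectors taken from the
# bit-stream; boundary refinement (modified watershed) is omitted here.
import numpy as np

MB = 16  # macroblock size in pixels

def project_mask(prev_mask: np.ndarray, mv: np.ndarray) -> np.ndarray:
    """prev_mask: HxW boolean mask; mv: (H//MB, W//MB, 2) motion vectors (dy, dx)."""
    h, w = prev_mask.shape
    new_mask = np.zeros_like(prev_mask)
    for by in range(h // MB):
        for bx in range(w // MB):
            block = prev_mask[by*MB:(by+1)*MB, bx*MB:(bx+1)*MB]
            if not block.any():
                continue
            dy, dx = mv[by, bx]
            y0, x0 = by*MB + dy, bx*MB + dx
            if 0 <= y0 and y0 + MB <= h and 0 <= x0 and x0 + MB <= w:
                new_mask[y0:y0+MB, x0:x0+MB] |= block
    return new_mask

prev_mask = np.zeros((64, 64), dtype=bool)
prev_mask[16:32, 16:32] = True          # object occupies one macroblock
mv = np.zeros((4, 4, 2), dtype=int)
mv[1, 1] = (0, 16)                      # that block moves right by 16 pixels
print("tracked object pixels:", int(project_mask(prev_mask, mv).sum()))
```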

A Program Transformational Approach for Rule-Based Hangul Automatic Programming (규칙기반 한글 자동 프로그램을 위한 프로그램 변형기법)

  • Hong, Seong-Su;Lee, Sang-Rak;Sim, Jae-Hong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.1
    • /
    • pp.114-128
    • /
    • 1994
  • It is very difficult for a nonprofessional programmer in Korea to write a program in a very high-level language such as V, REFINE, GIST, or SETL, because the semantic primitives of these languages are based on predicate calculus, sets, mappings, or restricted natural language, and it takes time to become familiar with these languages. In this paper, we suggest a method to reduce such difficulties by programming with declarative, procedural, and aggregate constructs, and we design and implement an experimental knowledge-based automatic programming system called HAPS (Hangul Automatic Program System). The input of HAPS is a specification such as a Hangul abstract algorithm and data type or Hangul procedural constructs, and its output is a C program. The method of operation is based on rule-based program transformation techniques, and the problem domain is general problems. The control structure of HAPS accepts the program specification, transforms it according to the proper rule in the rule base, and stores the transformed program specification in the global database. HAPS repeats this procedure until the target C program is fully constructed. (A toy rule-based transformation sketch follows this entry.)

  • PDF
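
A toy sketch of the rule-based transformation idea is given below: an abstract specification (a small Python dict standing in for a Hangul specification) is rewritten by matching rules until only C text remains. The spec format and rules are invented for illustration and are not HAPS's actual rule base.

```python
# Toy sketch of rule-based program transformation: each spec node is matched
# against rules, and the first matching rule rewrites it into a C fragment.
def rule_sum_loop(node):
    # "sum of 1..n" -> C for-loop fragment
    if node.get("op") == "sum_range":
        return (f"int {node['out']} = 0;\n"
                f"for (int i = 1; i <= {node['n']}; i++) {node['out']} += i;")
    return None

def rule_print(node):
    if node.get("op") == "print":
        return f'printf("%d\\n", {node["var"]});'
    return None

RULES = [rule_sum_loop, rule_print]

def transform(spec):
    body = []
    for node in spec:                    # apply the first matching rule per node
        for rule in RULES:
            fragment = rule(node)
            if fragment is not None:
                body.append(fragment)
                break
    return ("#include <stdio.h>\nint main(void) {\n"
            + "\n".join(body) + "\nreturn 0;\n}")

spec = [{"op": "sum_range", "n": 100, "out": "total"},
        {"op": "print", "var": "total"}]
print(transform(spec))
```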

A Study on a Conceptualization-oriented SDSS Model for Landscape Design (조경설계를 위한 공간개념화 지향의 공간의사결정지원시스템 모델에 대한 연구)

  • Kim, Eun Hyung
    • Spatial Information Research
    • /
    • v.22 no.6
    • /
    • pp.55-65
    • /
    • 2014
  • By combining the role of current GIS technology with design behaviors viewed from a cognitive perspective, spatial conceptualization can be extended efficiently and creatively for ill-structured problems. This study elaborates a model of a conceptualization-oriented SDSS (Spatial Decision Support System) for a landscape design problem. Current information-oriented GIS technology plays only a minor role in planning and design. Three attributes of planning and design problems explain why the deficiencies of current GIS technology can be seen as a failure of the technology; they are summarized as (1) information explosion/information ignorance, (2) the dilemma of rigor and relevance, and (3) the ill-structured nature of planning and design. In order to implement the conceptualization idea in the current GIS environment, it is necessary to shift from traditional, information-oriented GISs to conceptualization-oriented SDSSs. The conceptualization-oriented SDSS model reflects the key elements of six important theories and techniques: (1) human information processing, (2) tool/theory interaction, (3) the sciences of the artificial and the epistemology of practice, (4) decision support systems (DSSs), (5) human-computer interaction (HCI), and (6) creative thinking. A future conceptualization-oriented SDSS can help planners and designers figure out "hidden organizations" in spatial planning and design and develop new ideas through its conceptualization capability. The facilitation of conceptualization is demonstrated by presenting three key ideas for the framework of the SDSS model: (1) a bubble-oriented design support system, (2) prototypes as an extension of semantic memory, and (3) scripts as an extension of episodic memory, from a cognitive psychology perspective. These three ideas provide a direction for future GIS technology in planning and design.

Broadcast Content Recommender System based on User's Viewing History (사용자 소비이력기반 방송 콘텐츠 추천 시스템)

  • Oh, Soo-Young;Oh, Yeon-Hee;Han, Sung-Hee;Kim, Hee-Jung
    • Journal of Broadcast Engineering
    • /
    • v.17 no.1
    • /
    • pp.129-139
    • /
    • 2012
  • This paper introduces a recommender system for broadcast content. Our recommender system uses the user's viewing history for personalized recommendations. Broadcast content has unique characteristics compared with books, music, and movies. There are two types of broadcast content: series programs and episode programs. A series program is composed of several programs that deal with the same topic or story, while an episode program covers a variety of topics, each program generally having a different topic. Therefore, our recommender system recommends TV programs to users according to the type of broadcast content. The recommendations are based on the user's viewing history, which is used to calculate similarity between contents. Content similarity is calculated with a collaborative filtering algorithm. Our recommender system uses a Java sparse array structure, performs memory-based processing, and stores the results in an index structure. It provides recommendation items through open APIs over the HTTP protocol. Finally, this paper describes the implementation of our recommender system and our web demo. (An item-similarity sketch follows below.)
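
The item-to-item similarity computation described above can be sketched as follows, using Python with a scipy sparse matrix in place of the paper's Java sparse array. The viewing history is a made-up toy example.

```python
# Sketch of item-to-item similarity from viewing history: a sparse user-by-program
# matrix (1 = watched) is built with scipy, and cosine similarity between program
# columns drives the recommendations. The history below is invented toy data.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import cosine_similarity

programs = ["News9", "DramaA_E1", "DramaA_E2", "QuizShow"]
# rows = users, columns = programs; 1 means the user watched the program
history = csr_matrix(np.array([
    [1, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]))

sim = cosine_similarity(history.T)        # program-to-program similarity matrix

def recommend(program: str, topn: int = 2):
    idx = programs.index(program)
    order = np.argsort(sim[idx])[::-1]
    return [programs[j] for j in order if j != idx][:topn]

print(recommend("DramaA_E1"))             # episodes of the same series rank high
```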

Detecting Errors in POS-Tagged Corpus on XGBoost and Cross Validation (XGBoost와 교차검증을 이용한 품사부착말뭉치에서의 오류 탐지)

  • Choi, Min-Seok;Kim, Chang-Hyun;Park, Ho-Min;Cheon, Min-Ah;Yoon, Ho;Namgoong, Young;Kim, Jae-Kyun;Kim, Jae-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.7
    • /
    • pp.221-228
    • /
    • 2020
  • A part-of-speech (POS) tagged corpus is a collection of electronic text in which each word is annotated with its corresponding POS tag, and it is widely used as training data for natural language processing. Such training data is generally assumed to be error-free, but in reality it includes various types of errors, which degrade the performance of systems trained on it. To alleviate this problem, we propose a novel method for detecting errors in an existing POS-tagged corpus using an XGBoost classifier and cross-validation. We first train a POS-tagging classifier on the POS-tagged corpus, which contains some errors, and then detect errors in the corpus using cross-validation. Because there is no training data for detecting POS-tagging errors directly, we detect errors by comparing the classifier's outputs (POS probabilities) with the annotated tags while adjusting hyperparameters. The hyperparameters are estimated on a small error-tagged corpus, sampled from the POS-tagged corpus, in which POS errors are marked up by experts. We use recall and precision, which are widely used in information retrieval, as evaluation metrics. Because not all detected errors can be checked, we show that the proposed method is valid by comparing the distributions of the sample (the error-tagged corpus) and the population (the POS-tagged corpus). In the near future, we will apply the proposed method to a dependency-tree-tagged corpus and a semantic-role-tagged corpus. (A small XGBoost cross-validation sketch follows below.)
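
The error-detection procedure described above can be sketched as follows. It assumes xgboost and scikit-learn; the token features, labels, and probability threshold are illustrative stand-ins, not the paper's features or tuned hyperparameters.

```python
# Sketch of the error-detection idea: an XGBoost POS classifier is evaluated with
# cross-validation, and tokens whose annotated tag receives a low predicted
# probability are flagged as suspected annotation errors.
import numpy as np
from sklearn.model_selection import cross_val_predict
from xgboost import XGBClassifier

# Toy feature vectors for tokens (e.g., hashed word/context features) and POS labels.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = rng.integers(0, 3, size=200)          # 3 POS classes; some labels may be "wrong"

clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="mlogloss")
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")

threshold = 0.2                            # hyperparameter tuned on an error-tagged sample
suspected = np.where(proba[np.arange(len(y)), y] < threshold)[0]
print("suspected error tokens:", suspected[:10])
```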