• Title/Summary/Keyword: domain-specific model


A theoretical model for the utilization of intellectual resources between science and mathematics: An empirical study (수학 및 과학 간 지적 자원의 사용: 이론적 모형에 대한 실증 연구)

  • Choi, Kyong Mi;Seo, Kyungwoon;Hand, Brian;Hwang, Jihyun
    • The Mathematical Education
    • /
    • v.59 no.4
    • /
    • pp.405-420
    • /
    • 2020
  • There have been mixed reports about whether intellectual resources developed in one discipline can be utilized across disciplinary areas. Grounded in the argument that critical thinking is not domain-specific (Mulnix, 2012; Vaughn, 2005), we developed a theoretical model of the intellectual resources (IR) that students develop and use when learning and doing mathematics and science. The theoretical model posits two parallel epistemic practices that students engage in across science and mathematics: searching for reasons and giving reasons (Bailin, 2002; 2007; Mulnix, 2012). Applying Confirmatory Factor Analysis and Structural Equation Modeling to the responses of 9,300 fourth-grade students on standardized science and mathematics assessments, we verified the theoretical model empirically: fourth graders do use the two epistemic practices, and the development of the parallel practices in science affects their development in mathematics. A fourth grader's ability to search for reasons in science affects his or her ability to search for reasons in mathematics, and the ability to give reasons in science affects the use of the same ability in mathematics. The findings indicate that educators should be open to the idea that the development of epistemic practices is shared across disciplines, because students who have developed intellectual resources can utilize them in other settings.
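
The science-to-mathematics paths described above can be expressed as a small structural equation model. Below is a minimal sketch using Python's semopy package, assuming hypothetical indicator names (s_search1, m_give1, and so on) for the assessment items; the study's actual measurement model is not reproduced here.

```python
import pandas as pd
from semopy import Model

# Hypothetical indicator names; the study's actual assessment items differ.
MODEL_DESC = """
# Measurement model: two epistemic practices per discipline
SearchSci  =~ s_search1 + s_search2 + s_search3
GiveSci    =~ s_give1 + s_give2 + s_give3
SearchMath =~ m_search1 + m_search2 + m_search3
GiveMath   =~ m_give1 + m_give2 + m_give3

# Structural paths: practices developed in science predict the
# corresponding practices in mathematics
SearchMath ~ SearchSci
GiveMath ~ GiveSci
"""

data = pd.read_csv("grade4_responses.csv")  # hypothetical file of item scores
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # parameter estimates: loadings and path coefficients
```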

Analysis of Tidal Flow using the Frequency Domain Finite Element Method (II) (有限要素法을 이용한 海水流動解析 (II))

  • Kwun, Soon-Kuk;Koh, Deuk-Koo;Cho, Kuk-Kwang;Kim, Joon-Hyun
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.34 no.2
    • /
    • pp.73-84
    • /
    • 1992
  • TIDE, a finite element model for simulating tidal flow in shallow seas, was tested for its applicability to the Saemangeum area. Several pre- and post-processors were developed to facilitate handling of the model's complicated and voluminous input and output data, and an operating scheme for running the model and the processors was established. In calibration tests using observed data collected at 9 points within the region, the linearized friction coefficients were adjusted to range from 0.0027 to 0.0072, and the water depths below mean sea level were generally increased by 1 meter at every node. Comparison of observed and simulated tidal velocities at 5 stations showed an average relative error of 11% for the maximum velocities and 22% for the minimum, with absolute errors less than 0.2 m/sec. The average RMS error between observed and simulated velocities was 0.119 m/sec and the average correlation coefficient was 0.70, showing close agreement. A further comparison showed that the RMS error between simulated and observed tidal elevations at 4 stations averaged 0.476 m, with correlation coefficients ranging from 0.96 to 0.99. Although the simulated tidal circulation pattern in the region agreed well with observations, the simulated velocities and elevations at specific points showed some errors, which were attributed mainly to the TIDE model solving only with a linearized scheme. It was concluded that improving the simulation results would require developing a fully nonlinear model, further calibration, and more reasonable generation of the finite element grid.
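
The calibration statistics reported above (relative error, RMS error, correlation coefficient) can be reproduced with a short NumPy sketch, assuming the observed and simulated series are aligned sample by sample; the example values are made up.

```python
import numpy as np

def calibration_stats(observed, simulated):
    """RMS error, mean relative error, and correlation between
    observed and simulated tidal series (hypothetical inputs)."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    rel_err = np.mean(np.abs(sim - obs) / np.abs(obs))
    corr = np.corrcoef(obs, sim)[0, 1]
    return rmse, rel_err, corr

# Example with made-up velocity samples (m/sec)
obs = [0.45, 0.60, 0.52, 0.30]
sim = [0.50, 0.55, 0.57, 0.33]
print(calibration_stats(obs, sim))
```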


The Implementation of Policy Management Tool Based on Network Security Policy Information Model (네트워크 보안 정책 정보 모델에 기반한 정책 관리 도구의 구현)

  • Kim, Geon-Lyang;Jang, Jong-Soo;Sohn, Sung-Won
    • The KIPS Transactions:PartC
    • /
    • v.9C no.5
    • /
    • pp.775-782
    • /
    • 2002
  • This paper introduces a Policy Management Tool implemented on the basis of a Policy Information Model for network security systems. A network security system consists of a policy server, which manages policies and sends them out to protect a specific domain from attackers, and policy clients, which detect and respond to intrusions using the policies the server sends. Policies exchanged between the policy server and policy clients are saved in a directory-structured database through LDAP by the Policy Management Tool, which is based on the Network Security Policy Information Model (NSPIM). NSPIM extends the IETF's PCIM and PCIMe, enabling network administrators to describe network security policies. The Policy Management Tool based on NSPIM provides not only policy management functions but also editing with reusable objects, automatic generation of object names and blocking policies, and other convenient functions for the user.
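
Because policies are stored as directory entries over LDAP, saving one can be sketched with the ldap3 Python library. The host, credentials, DN, object classes, and attributes below are placeholders, not the actual NSPIM directory schema.

```python
from ldap3 import Server, Connection

# Placeholder host, credentials, and schema; the real NSPIM directory
# layout is defined by the policy server's own schema.
server = Server("ldap://policy-server.example.com")
conn = Connection(server,
                  user="cn=admin,dc=example,dc=com",
                  password="secret",
                  auto_bind=True)

conn.add(
    "cn=blockTelnet,ou=policies,dc=example,dc=com",
    object_class=["top", "applicationProcess"],  # placeholder classes
    attributes={
        "cn": "blockTelnet",
        "description": "Block inbound telnet traffic to the protected domain",
    },
)
print(conn.result)  # LDAP result code and message for the add operation
conn.unbind()
```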

Deep Learning-based Target Masking Scheme for Understanding Meaning of Newly Coined Words

  • Nam, Gun-Min;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.10
    • /
    • pp.157-165
    • /
    • 2021
  • Recently, studies that use deep learning to analyze large amounts of text have been actively conducted. In particular, pre-trained language models, which apply what is learned from a large text corpus to the analysis of text in a specific domain, are attracting attention. Among the various pre-trained language models, BERT (Bidirectional Encoder Representations from Transformers)-based models are the most widely used. Recently, research has aimed to improve analysis performance through further pre-training with BERT's MLM (Masked Language Model). However, traditional MLM has difficulty clearly capturing the meaning of sentences containing new vocabulary such as newly coined words. Therefore, in this study we propose NTM (Newly coined words Target Masking), which performs masking only on new words. Analyzing about 700,000 movie reviews from portal 'N' with the proposed methodology confirmed that NTM outperforms existing random masking in terms of sentiment analysis accuracy.
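
The essence of NTM is that masking is applied only to newly coined words rather than to random tokens before further MLM pre-training. A minimal sketch with a Hugging Face tokenizer follows; the checkpoint name and word list are illustrative assumptions, not the paper's actual setup.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# Hypothetical list of newly coined words to be masked.
NEW_WORDS = {"꿀잼", "노잼"}

def target_mask(sentence: str) -> str:
    """Replace only newly coined words with the mask token; all other
    words are left intact (unlike random MLM masking)."""
    return " ".join(
        tokenizer.mask_token if word in NEW_WORDS else word
        for word in sentence.split()
    )

print(target_mask("이 영화 정말 꿀잼 이었다"))
# -> "이 영화 정말 [MASK] 이었다"
```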

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify a target product's reputation. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words is likely to differ across domains. For example, the sentiment word 'sad' indicates a negative meaning in most fields, but not in the movie domain. To perform accurate sentiment analysis, we need to build a sentiment dictionary for the given domain. However, building such a sentiment lexicon is time-consuming, and many sentiment vocabularies are missed unless a general-purpose sentiment lexicon is used as seed data. To address this problem, several studies have constructed sentiment lexicons for specific domains based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer in service, and SentiWordNet works poorly because of language differences in converting Korean words into English. Such general-purpose sentiment lexicons are therefore of limited use as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to support quickly constructing a sentiment dictionary for a target domain. Specifically, it derives sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from those classified as negative. Our experimental results show that the proposed sentiment classification model achieves an average accuracy of up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and we add sentiment information about frequently used coined words and emoticons that appear mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, yielding higher sentiment analysis accuracy (Teng, Z., 2016). This result indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features that improve the accuracy of deep learning models. The proposed dictionary can serve as basic data for constructing the sentiment lexicon of a particular domain, as features for deep learning models, and for automatically and quickly building large training sets for deep learning models.
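
The first step of the procedure, the Bi-LSTM gloss classifier, can be sketched in PyTorch as below; the vocabulary size, layer dimensions, and tokenization are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GlossClassifier(nn.Module):
    """Bi-LSTM that classifies a dictionary gloss as positive or negative."""

    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, 2)  # logits: positive / negative

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        _, (h, _) = self.lstm(x)            # h: (2, batch, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=1)  # forward and backward final states
        return self.fc(h)

# Usage with dummy token ids (batch of 2 glosses, length 10)
model = GlossClassifier(vocab_size=30000)
logits = model(torch.randint(1, 30000, (2, 10)))
print(logits.shape)  # torch.Size([2, 2])
```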

Ramp Activity Expert System for Scheduling and Co-ordination (공항의 계류장 관리 스케줄링 및 조정을 위한 전문가시스템)

  • Jo, Geun-Sik;Yang, Jong-Yoon
    • Journal of Advanced Navigation Technology
    • /
    • v.2 no.1
    • /
    • pp.61-67
    • /
    • 1998
  • In this paper, we describe the Ramp Activity Coordination Expert System (RACES), which solves aircraft parking problems. RACES addresses a knowledge-based scheduling problem: it assigns every daily arriving and departing flight to gates and remote spots using domain-specific knowledge and heuristics acquired from human experts. RACES handles complex scheduling constraints such as the dynamic interrelations among the characteristics of remote spots/gates and aircraft, together with various other constraints, for example customs and ground handling factors at an airport. Through user-driven modeling for end users and knowledge-driven near-optimal scheduling acquired from human experts, RACES can produce aircraft parking schedules for about 400 daily flights in about 20 seconds, whereas human experts normally take about 4 to 5 hours. Scheduling results in the form of Gantt charts produced by RACES are also accepted by the domain experts. RACES is also designed to deal with partial adjustment of the schedule when unexpected events occur. After daily scheduling is completed, aircraft-change and delay messages are reflected in the updated schedule according to the knowledge of the domain experts. By analyzing the domain experts' knowledge model, the reactive scheduling steps are effectively represented as rules, and the Graphical User Interface (GUI) scenarios are designed. Since modifications of aircraft dispositions, such as aircraft changes and flight cancellations, must be reflected in the current schedule, RACES is notified of them by the mainframe for reactive scheduling. Schedule adjustments are made semi-automatically by RACES, since partial rescheduling involves many irregularities.
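
RACES itself encodes proprietary expert heuristics, but the underlying assignment problem can be illustrated with a simple greedy sketch: process flights by arrival time and give each the first stand that is type-compatible and free. All field names are hypothetical, and the real rules are far richer.

```python
def assign_flights(flights, stands):
    """Greedy sketch of gate/remote-spot assignment: process flights by
    arrival time and pick the first stand that is type-compatible and
    free for the whole ground time. Real RACES heuristics are richer."""
    schedule = {}
    occupied = {stand["id"]: [] for stand in stands}  # intervals per stand
    for flight in sorted(flights, key=lambda f: f["arrival"]):
        for stand in stands:
            compatible = flight["aircraft"] in stand["allowed_types"]
            free = all(flight["arrival"] >= end or flight["departure"] <= start
                       for start, end in occupied[stand["id"]])
            if compatible and free:
                schedule[flight["id"]] = stand["id"]
                occupied[stand["id"]].append(
                    (flight["arrival"], flight["departure"]))
                break
    return schedule

flights = [{"id": "KE001", "aircraft": "B747", "arrival": 600, "departure": 720},
           {"id": "OZ202", "aircraft": "A320", "arrival": 630, "departure": 700}]
stands = [{"id": "G1", "allowed_types": {"B747", "A320"}},
          {"id": "R5", "allowed_types": {"A320"}}]
print(assign_flights(flights, stands))  # {'KE001': 'G1', 'OZ202': 'R5'}
```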


Design and Implementation of Feature Catalogue Builder based on the S-100 Standard (S-100 표준 기반 피처 카탈로그 제작지원 시스템의 설계 및 구현)

  • Park, Daewon;Kwon, Hyuk-Chul;Park, Suhyun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.8
    • /
    • pp.571-578
    • /
    • 2013
  • The IHO S-100 is a standard defining a universal hydrographic data model for information services that integrate various maritime data and provide appropriate information for vessel safety. S-100 is used to develop the S-10x product specifications, which are standards providing guidelines for the creation and delivery of specific maritime data sets. A product specification for feature-based data, such as ENC (Electronic Navigational Chart) data, includes a feature catalogue that describes the characteristics of the features in that data. The feature catalogue is developed by domain experts with knowledge of the target domain's data. However, it is not feasible to develop a feature catalogue conforming to the XML schema manually. The need for technology to support feature catalogue building has been discussed at IHO TSMAD committee meetings. Therefore, we present a feature catalogue builder, a GUI (Graphical User Interface) system that supports domain experts in building feature catalogues in XML. The feature catalogue builder is developed to connect with the FCD (Feature Concept Dictionary) register in the IHO (International Hydrographic Organization) GI (Geographic Information) registry. It also supports domain experts in selecting proper feature items based on the relationships between register items.
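
Emitting the XML catalogue from GUI selections can be sketched with Python's standard ElementTree module; the element and attribute names below are illustrative only, not the normative S-100 feature catalogue schema.

```python
import xml.etree.ElementTree as ET

# Illustrative element names; the normative S-100 XML schema differs.
catalogue = ET.Element("FeatureCatalogue", name="ENC", version="1.0.0")
feature = ET.SubElement(catalogue, "FeatureType", code="DepthArea")
ET.SubElement(feature, "name").text = "Depth Area"
ET.SubElement(feature, "definition").text = (
    "A water area whose depth is within a defined range of values.")

ET.ElementTree(catalogue).write(
    "feature_catalogue.xml", encoding="utf-8", xml_declaration=True)
```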

An Analysis of the Policy Making Process of Gyeonggido Cyber Library Establishment: Based on the Policy Streams Model of Kingdon (경기도사이버도서관 설립의 정책형성과정 분석: 킹던의 정책흐름모형을 중심으로)

  • Chu, Yoonmi;Kim, Giyeong
    • Journal of the Korean Society for information Management
    • /
    • v.30 no.3
    • /
    • pp.71-87
    • /
    • 2013
  • In this study, we analyze the agenda setting and policy making process behind the establishment of the Gyeonggido Cyber Library, which has played an important role in the development of public libraries in Gyeonggido since its launch, based on Kingdon's policy streams model. According to the model, policy formation results from the convergence of three streams: the problem, policy, and politics streams. When these streams converge at a specific point in time, a policy window opens and the issues become part of the policy agenda. At this moment, policy entrepreneurs propose the alternatives they have already prepared and try to pass them through the window. We identify the coupling of the streams in the policy window and the role of policy entrepreneurs in the agenda setting and selection of alternatives for the Gyeonggido Cyber Library policy. Based on the analysis, suggestions are provided for public policy formation in the public library domain.

3D Line Segment Detection using a New Hybrid Stereo Matching Technique (새로운 하이브리드 스테레오 정합기법에 의한 3차원 선소추출)

  • 이동훈;우동민;정영기
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.4
    • /
    • pp.277-285
    • /
    • 2004
  • We present a new hybrid stereo matching technique based on the cooperation of area-based and feature-based stereo. The core of our technique is that feature matching is carried out with reference to the disparity estimated by area-based stereo. Since referencing the disparity significantly reduces the number of feature matching combinations, feature matching error can be drastically reduced. One requirement is that the referenced disparity be reliable enough to use in feature matching; to measure its reliability, we employ the self-consistency of the disparity. Our technique is applied to the detection of 3D line segments by 2D line matching using the hybrid stereo matching, which can be efficiently utilized in generating rooftop models from urban imagery. We carry out experiments on the hybrid stereo matching scheme, generating synthetic images by photo-realistic simulation of the Avenches data set of Ascona aerial images. Experimental results indicate that the extracted 3D line segments have an average error of 0.5 m, verifying the proposed scheme. To apply our method to the generation of 3D models from urban imagery, we carry out preliminary experiments on rooftop generation. Since occlusions occur around building outlines, we experimentally suggest a multi-image hybrid stereo system based on the fusion of 3D line segments. With a simple domain-specific 3D grouping scheme, we observe that an accurate 3D rooftop model can be generated. In this context, we expect that an extended 3D grouping scheme using our hybrid technique can be efficiently applied to constructing 3D models with more general types of building rooftops.
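
The key idea, pruning feature match candidates using the area-based disparity, can be shown in a short NumPy sketch; the feature representation and tolerance values are assumptions.

```python
import numpy as np

def candidate_matches(left_feats, right_feats, disparity_map,
                      d_tol=2.0, y_tol=1.0):
    """Keep only right-image candidates whose offset from a left-image
    feature agrees with the area-based disparity within d_tol pixels
    and lies on roughly the same scanline. This is the pruning step
    that shrinks the feature matching search space."""
    matches = []
    for (xl, yl) in left_feats:
        d = disparity_map[int(yl), int(xl)]  # disparity from area-based stereo
        for (xr, yr) in right_feats:
            if abs(yl - yr) <= y_tol and abs((xl - xr) - d) <= d_tol:
                matches.append(((xl, yl), (xr, yr)))
    return matches

disparity_map = np.full((480, 640), 12.0)  # dummy constant-disparity field
print(candidate_matches([(100, 50)], [(88, 50), (60, 50)], disparity_map))
# -> [((100, 50), (88, 50))]  (offset 12 agrees with the disparity; 40 does not)
```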

A Study on Factors Influencing on Companies' ICT-Convergence Cluster Participation (기업의 ICT융합 클러스터 참여 촉진 요인에 관한 연구)

  • Kim, Yong-Young;Kim, Mi-Hye
    • Journal of Digital Convergence
    • /
    • v.14 no.8
    • /
    • pp.151-161
    • /
    • 2016
  • ICT-convergence clusters are considered a critical policy means because they can create higher value-added products and services in the era of the creative economy. Previous research has focused on comprehensive ICT-convergence cluster strategy based on Porter's diamond model. This paper adopts the AIDA (Attention, Interest, Desire, Action) model and investigates a specific domain of government support policies related to non-R&D support. Over two weeks, we gathered and analyzed 181 responses from companies located in Chungbuk province. The results showed that support for technology, commercialization, and participation conditions positively influences companies' interest in the ICT-convergence cluster, which in turn has a positive impact on their intention to participate. It is significant that this paper verifies the AIDA model in the Government-to-Business (G2B) context. Future research will need to adapt the AIDA model to national projects.
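
The reported chain (support factors, then interest, then participation intention) can be approximated with two regressions, sketched below with statsmodels; the CSV file and column names are hypothetical stand-ins for the survey constructs.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical construct scores per responding company.
df = pd.read_csv("ict_cluster_survey.csv")

# Stage 1: support factors -> interest in the ICT-convergence cluster
interest = smf.ols(
    "interest ~ tech_support + commercialization_support + participation_conditions",
    data=df).fit()

# Stage 2: interest -> intention to participate
intention = smf.ols("intention ~ interest", data=df).fit()

print(interest.params)   # estimated effects of each support factor
print(intention.params)  # estimated effect of interest on intention
```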