• Title/Summary/Keyword: automatic processing


Method for Improving Description of Software Metrics Using Metric Description Language Based on OCL (OCL에 바탕을 둔 메트릭 기술 언어를 이용한 메트릭의 표현 방법 개선)

  • Kim, Tae-Yeon;Kim, Yun-Kyu;Chae, Heung-Seok
    • The KIPS Transactions:PartD / v.15D no.5 / pp.629-646 / 2008
  • Because most metrics in the literature are described in natural language, they can be interpreted ambiguously. To cope with this problem, several studies have expressed metrics using the Object Constraint Language (OCL). However, because OCL was proposed to describe structural constraints on Unified Modeling Language (UML) diagrams, describing metrics with it is difficult and awkward. In this paper, we propose the Metric Description Language (MDL), a high-level language for describing metrics. MDL supports modular description of complex metrics, aggregation functions, and automatic navigation between entities. Moreover, we developed MetriUs, a tool for describing metrics in MDL and computing them automatically over UML diagrams. In a case study, we described a variety of existing metrics in MDL and found that MDL yields simpler metric expressions than OCL. (A brief illustrative sketch follows this entry.)
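As a rough, purely illustrative sketch of the kind of metric computation such tools automate (this is not MDL syntax and not the MetriUs tool; the toy UMLClass model and the wmc/dit helpers below are assumptions for the example), two classic UML class metrics with a simple aggregation might look like this in Python:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UMLClass:
    name: str
    methods: List[str] = field(default_factory=list)
    parent: Optional["UMLClass"] = None   # generalization edge

def wmc(cls: UMLClass) -> int:
    """Weighted Methods per Class (unit weights): count of methods."""
    return len(cls.methods)

def dit(cls: UMLClass) -> int:
    """Depth of Inheritance Tree: walk the generalization chain upward."""
    depth = 0
    while cls.parent is not None:
        depth += 1
        cls = cls.parent
    return depth

# Aggregation over a whole model, analogous to an aggregation function in a metric language
base = UMLClass("Shape", methods=["area"])
circle = UMLClass("Circle", methods=["area", "radius"], parent=base)
model = [base, circle]
print({c.name: (wmc(c), dit(c)) for c in model})
# {'Shape': (1, 0), 'Circle': (2, 1)}
```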

Automatic Expansion of ConceptNet by Using Neural Tensor Networks (신경 텐서망을 이용한 컨셉넷 자동 확장)

  • Choi, Yong Seok;Lee, Gyoung Ho;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.549-554 / 2016
  • ConceptNet is a common-sense knowledge base organized as a semantic graph whose nodes represent concepts and whose edges represent relationships between concepts. Because it is difficult to keep a knowledge base complete, such knowledge bases often suffer from incompleteness, and the quality of reasoning performed over them can be unreliable. This work presents neural tensor networks that alleviate the incompleteness problem by inferring new assertions and adding them to ConceptNet. The networks are trained on a collection of assertions extracted from ConceptNet; the input is a pair of concepts, and the output is a confidence score indicating how plausible a connection between the two concepts is under a specified relationship. By increasing the degree of nodes, the neural tensor networks expand the usefulness of ConceptNet. The networks reach 87.7% accuracy on the test set and predict new assertions that do not exist in ConceptNet with 85.01% accuracy. (A brief sketch of the scoring function follows this entry.)
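A minimal NumPy sketch of the bilinear scoring function typically used in neural tensor networks; the dimensions, random parameters, and toy concept embeddings below are illustrative, not the authors' trained model:

```python
import numpy as np

d, k = 50, 4                      # embedding size, number of tensor slices
rng = np.random.default_rng(0)

# Per-relation parameters: tensor W, linear map V, bias b, output weights u
W = rng.normal(scale=0.1, size=(k, d, d))
V = rng.normal(scale=0.1, size=(k, 2 * d))
b = rng.normal(scale=0.1, size=k)
u = rng.normal(scale=0.1, size=k)

def score(e1: np.ndarray, e2: np.ndarray) -> float:
    """Confidence that (e1, relation, e2) holds: u^T tanh(e1^T W e2 + V [e1; e2] + b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

cat, pet = rng.normal(size=d), rng.normal(size=d)   # toy concept embeddings
print(score(cat, pet))
```

In training, the parameters and embeddings would be fitted so that observed ConceptNet assertions score higher than corrupted ones; a threshold on the score then decides whether a new assertion is added.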

A Design on Informal Big Data Topic Extraction System Based on Spark Framework (Spark 프레임워크 기반 비정형 빅데이터 토픽 추출 시스템 설계)

  • Park, Kiejin
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.521-526 / 2016
  • Because online informal text data are massive in volume and unstructured in nature, traditional relational data models are of limited use for storing and analyzing them, and real-time analysis of user reactions over dynamically generated social data is hard to accomplish. In this paper, to capture the semantics of massive informal online documents with an unsupervised learning mechanism, we design and implement an automatic topic extraction system based on the words that make up each document. The input data are first built with an N-gram algorithm, so that multi-word expressions capture the meaning of sentences more precisely, and Hadoop and Spark (an in-memory distributed computing framework) are adopted to run the topic model. In the experiments, TB-scale input data are preprocessed and the proposed topic extraction steps are applied. The proposed system extracts meaningful topics in good time because intermediate results are served from main memory rather than read from disk (HDD). (A brief pipeline sketch follows this entry.)
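A minimal PySpark sketch of the pipeline outlined above (tokenize, build N-grams, vectorize, fit LDA in memory); the column names, N-gram order, and parameter values are illustrative assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, NGram, CountVectorizer
from pyspark.ml.clustering import LDA

spark = SparkSession.builder.appName("topic-extraction").getOrCreate()

docs = spark.createDataFrame(
    [("the service was fast and friendly",),
     ("slow delivery but good price",)],
    ["text"],
)

tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
bigrams = NGram(n=2, inputCol="words", outputCol="ngrams").transform(tokens)
cv = CountVectorizer(inputCol="ngrams", outputCol="features", vocabSize=10_000)
features = cv.fit(bigrams).transform(bigrams)

# Spark keeps intermediate RDD/DataFrame results in memory across LDA iterations,
# which is the source of the speed-up the paper reports over disk-bound processing.
lda = LDA(k=5, maxIter=20, featuresCol="features")
model = lda.fit(features)
model.describeTopics(3).show(truncate=False)
```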

A Generic Interface for Internet of Things (IoT) Platforms (IoT 플랫폼을 위한 범용 인터페이스)

  • Kim, Mi;Lee, Nam-Yong;Park, Jin-Ho
    • KIPS Transactions on Software and Data Engineering / v.7 no.1 / pp.19-24 / 2018
  • This paper presents a generic interface for IoT platforms that lets applications connect flexibly to smart devices, including automatic discovery of and collaboration with devices. Because each device and application service is connected through a different IoT platform, developing applications that work with the resulting IoT services is challenging: environment settings, resource discovery, and implementation details vary from platform to platform. A generic interface is needed to cope with this heterogeneity. The proposed interface addresses these problems and makes it possible for applications to stay connected to devices independently of the underlying platform, so that a single application can operate across diverse IoT platforms. (A brief sketch of such an adapter interface follows this entry.)
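A minimal Python sketch of what such a platform-independent interface could look like: an abstract adapter that each concrete IoT platform implements, so applications code against one API. The class and method names below are illustrative assumptions, not the paper's interface definition:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class IoTPlatformAdapter(ABC):
    """Generic interface an application codes against, regardless of the platform behind it."""

    @abstractmethod
    def discover_devices(self) -> List[str]:
        """Return identifiers of devices currently reachable on this platform."""

    @abstractmethod
    def read(self, device_id: str, resource: str) -> Any:
        """Read a named resource (e.g. a sensor value) from a device."""

    @abstractmethod
    def write(self, device_id: str, resource: str, value: Any) -> None:
        """Write a value to a device resource (e.g. an actuator command)."""

class InMemoryPlatform(IoTPlatformAdapter):
    """Toy adapter standing in for a real platform binding."""
    def __init__(self) -> None:
        self._devices: Dict[str, Dict[str, Any]] = {"lamp-1": {"power": "off"}}

    def discover_devices(self) -> List[str]:
        return list(self._devices)

    def read(self, device_id: str, resource: str) -> Any:
        return self._devices[device_id][resource]

    def write(self, device_id: str, resource: str, value: Any) -> None:
        self._devices[device_id][resource] = value

# The application never touches platform-specific details:
platform: IoTPlatformAdapter = InMemoryPlatform()
for dev in platform.discover_devices():
    platform.write(dev, "power", "on")
    print(dev, platform.read(dev, "power"))
```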

The Implementation of Policy Management Tool Based on Network Security Policy Information Model (네트워크 보안 정책 정보 모델에 기반한 정책 관리 도구의 구현)

  • Kim, Geon-Lyang;Jang, Jong-Soo;Sohn, Sung-Won
    • The KIPS Transactions:PartC / v.9C no.5 / pp.775-782 / 2002
  • This paper introduces a Policy Management Tool implemented on the basis of a policy information model for a network security system. The network security system consists of a policy server, which manages and distributes policies to protect a specific domain from attackers, and policy clients, which detect and respond to intrusions using the policies the policy server sends. Policies exchanged between the policy server and policy clients are stored in a database, in directory form, through LDAP by the Policy Management Tool, which is based on the Network Security Policy Information Model (NSPIM). NSPIM extends IETF's PCIM and PCIMe so that a network administrator can describe network security policies. The tool provides not only policy management functions but also editing with reusable objects, automatic generation of object names and blocking policies, and other functions convenient for the user. (A brief sketch of a directory-style policy entry follows this entry.)
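A minimal Python sketch of how a blocking policy might be rendered in directory (LDIF-style) form for storage through LDAP; the attribute names, object class, and DN layout below are illustrative assumptions, not NSPIM's actual schema:

```python
from dataclasses import dataclass

@dataclass
class BlockingPolicy:
    """Simplified blocking rule (fields are illustrative, not NSPIM classes)."""
    name: str
    src_ip: str
    dst_ip: str
    dst_port: int
    action: str = "deny"

def to_ldif(policy: BlockingPolicy, base_dn: str = "ou=policies,dc=example,dc=org") -> str:
    """Render the policy as an LDIF entry such as a policy server might store via LDAP."""
    return "\n".join([
        f"dn: cn={policy.name},{base_dn}",
        "objectClass: top",
        "objectClass: securityPolicyRule",   # hypothetical object class for the example
        f"cn: {policy.name}",
        f"srcIP: {policy.src_ip}",
        f"dstIP: {policy.dst_ip}",
        f"dstPort: {policy.dst_port}",
        f"action: {policy.action}",
    ])

print(to_ldif(BlockingPolicy("block-telnet", "0.0.0.0/0", "10.0.0.5", 23)))
```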

Automatic Encryption Method within Kernel Level using Various Access Control Policy in UNIX system (유닉스 시스템에서 다양한 접근제어 정책을 이용한 커널 수준의 자동 암호화 기법)

  • Lim, Jae-Deok;Yu, Joon-Suk;Kim, Jeong-Nyeo
    • The KIPS Transactions:PartC / v.10C no.4 / pp.387-396 / 2003
  • Many studies have addressed secure kernels and encryption file systems for system security. A secure kernel can protect user or system data from unauthorized or illegal access by applying various access control policies such as ACL, MAC, and RBAC, but it cannot protect data when backup media or the disk itself is stolen. In addition to access control, many studies have examined encryption file systems that encrypt file data at the system level; however, few have combined access control policies with an encryption file system. In this paper we propose a new encryption file system that is transparent to the user, achieved by integrating an encryption service into the virtual file system layer of a secure kernel that supports various access control policies. The proposed file system offers a simple key-management architecture by deriving encryption keys from the classes of the MAC policy, and it overcomes the limits of access control alone against physical theft of the media. (A brief key-selection sketch follows this entry.)
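A user-space Python sketch of the key-management idea (one key per MAC classification level rather than per file); the classification labels and the use of the cryptography package's Fernet cipher are illustrative stand-ins for the in-kernel design:

```python
from cryptography.fernet import Fernet   # pip install cryptography

# One key per MAC classification level, not per file: keys map to classes,
# which keeps key management simple (labels below are illustrative).
LEVELS = ["unclassified", "confidential", "secret"]
keys = {level: Fernet(Fernet.generate_key()) for level in LEVELS}

def write_file(path: str, data: bytes, level: str) -> None:
    """Transparently encrypt on write with the key of the file's MAC class."""
    with open(path, "wb") as fh:
        fh.write(keys[level].encrypt(data))

def read_file(path: str, level: str) -> bytes:
    """Transparently decrypt on read; stolen media without the class key stays opaque."""
    with open(path, "rb") as fh:
        return keys[level].decrypt(fh.read())

write_file("report.enc", b"quarterly figures", "secret")
print(read_file("report.enc", "secret"))
```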

Remote Sensing Information Models for Sediment and Soil

  • Ma, Ainai
    • Proceedings of the KSRS Conference / 2002.10a / pp.739-744 / 2002
  • We propose that sediments be treated separately from the lithosphere and soil separately from the biosphere, and that sediment and soil together form a mixed sediments-soil-sphere (Seso-sphere) that can be analyzed with particulate mechanics. Both erosion and sediment transport involve particulate matter moved by water or wind, and ancient sediments erode much as soil does. Today true soil has been greatly reduced; in many places only sediments remain, worked into an artificial ploughed farming layer, hence the notion of a sediments-soil-sphere. This paper discusses erosion modeling for the sediments-soil-sphere, which covers water erosion, wind erosion, melt-water erosion, gravitational water erosion, and mixed erosion. We have established geographical remote sensing information models (RSIM) for these different types of erosion, using remote sensing digital images together with ground-truth data from hydrological stations and meteorological observatories, processed with digital image processing and a geographical information system (GIS). Each RSIM takes the form of a geographical multidimensional grey non-linear equation derived through non-dimensional analysis and mathematical statistics; the mixed-erosion model is more complex, a geographical polynomial grey non-linear equation that must be solved with time-space fuzzy condition equations. RSIM is a digital image modeling approach that separates physical factors from geographical parameters. The non-dimensional factor groups form geographical analogous criteria, and RSIM can automatically adapt these criteria to maps of different scales: a small-scale map (1:1 000 000) may require only one or two analogous criteria, whereas a large-scale map (1:10 000) may require four or five, and the geographical parameters (coefficients and indexes) also change with the images. The geographical RSIM achieves higher precision than purely mathematical modeling, whether by analytical equations or by mathematical statistics. (A brief illustrative fitting sketch follows this entry.)

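As a very rough illustration of the "non-dimensional analysis plus mathematical statistics" modeling step described above, the Python sketch below fits a power-law erosion model to two invented non-dimensional factor groups by log-linear least squares; the factor groups, coefficients, and data are made up for the example and are not from the paper:

```python
import numpy as np

# Invented non-dimensional factor groups (analogous criteria) and erosion values
rng = np.random.default_rng(42)
pi1 = rng.uniform(0.1, 2.0, 50)            # e.g. a rainfall/runoff group
pi2 = rng.uniform(0.1, 2.0, 50)            # e.g. a slope/cover group
erosion = 3.0 * pi1**1.4 * pi2**0.6 * rng.lognormal(0.0, 0.05, 50)

# Fit E = a * pi1^b1 * pi2^b2 by linear least squares in log space
X = np.column_stack([np.ones_like(pi1), np.log(pi1), np.log(pi2)])
coef, *_ = np.linalg.lstsq(X, np.log(erosion), rcond=None)
a, b1, b2 = np.exp(coef[0]), coef[1], coef[2]
print(f"E ~ {a:.2f} * pi1^{b1:.2f} * pi2^{b2:.2f}")
```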

Automatic Video Genre Classification Method in MPEG compressed domain (MPEG 부호화 영역에서 Video Genre 자동 분류 방법)

  • Kim, Tae-Hee;Lee, Woong-Hee;Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.8A / pp.836-845 / 2002
  • A video summary is a tool that provides fast and effective browsing of lengthy video. It consists of key-frames, whose definition may differ depending on the genre the video belongs to; a summary built in a uniform manner can therefore be inadequate, so identifying the video genre is an important first step in generating a meaningful summary. We propose a new method that classifies the genre of video data directly in the MPEG compressed bit-stream domain. Because the method operates on the compressed bit-stream without decoding frames, it requires only simple calculations and short processing time. Only visual information is used, through spatial-temporal analysis, to classify the genre. Experiments cover six genres: cartoon, commercial, music video, news, sports, and talk show. The results show more than 90% genre-classification accuracy for well-structured video such as talk shows and sports. (A brief feature-and-classifier sketch follows this entry.)
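A toy Python sketch in the spirit of compressed-domain analysis: simple spatio-temporal statistics are computed from per-frame DC luminance and motion-vector magnitudes (assumed to be already extracted from the bit-stream) and a genre is picked by nearest centroid. The features, data, and centroids are illustrative, not the paper's method:

```python
import numpy as np

def genre_features(dc_luma: np.ndarray, motion_mag: np.ndarray) -> np.ndarray:
    """Spatio-temporal statistics over per-frame DC luminance and mean motion magnitude."""
    shot_activity = np.mean(np.abs(np.diff(dc_luma)))   # luminance change between frames
    return np.array([shot_activity, motion_mag.mean(), motion_mag.var()])

def classify(feat: np.ndarray, centroids: dict) -> str:
    """Nearest-centroid genre decision (centroids assumed learned offline)."""
    return min(centroids, key=lambda g: np.linalg.norm(feat - centroids[g]))

# Toy example with made-up per-frame measurements and centroids
rng = np.random.default_rng(1)
feat = genre_features(rng.uniform(0, 255, 300), rng.uniform(0, 4, 300))
centroids = {"talk show": np.array([2.0, 0.5, 0.1]), "sports": np.array([20.0, 3.0, 1.0])}
print(classify(feat, centroids))
```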

An Algorithm for Detecting Residual Quantity of Ringer's Solution for Automatic Replacement (링거 자동 교체를 위한 잔량 검출 알고리즘)

  • Kim, Chang-Wook;Woo, Sang-Hyo;Zia, Mohy Ud Din;Won, Chul-Ho;Hong, Jae-Pyo;Cho, Jin-Ho
    • Journal of Korea Society of Industrial Information Systems / v.13 no.1 / pp.30-36 / 2008
  • Recently, there has been much research on improving the quality of medical services, such as point of care (POC). Improving service quality requires not only good medical devices but also more manpower; in particular, the number of nurses in Korea is among the lowest of the OECD countries. If nurses' simple repetitive tasks could be removed, skilled nurses could be assigned to other work and provide better services. Replacing the Ringer's solution is one such repetitive task, and replacing it automatically requires detecting the residual quantity of the solution. In this paper, image processing is used to detect the residual quantity of Ringer's solution, and a modified self-quotient image (SQI) algorithm is used to cope with strong background light. After the modified SQI step, a simple histogram accumulation finds the residual quantity. The implemented algorithm can be used to replace the Ringer's solution automatically or to alert nurses to replace it. (A brief sketch of the level-detection idea follows this entry.)

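A minimal Python/OpenCV sketch of the idea behind the detection step: a self-quotient image suppresses the strong backlight, and a per-row accumulation locates the liquid surface. The threshold, toy frame, and helper names are illustrative assumptions, not the authors' modified SQI:

```python
import numpy as np
import cv2   # OpenCV

def self_quotient(img_gray: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """Self-quotient image: divide by a smoothed copy to suppress strong, uneven backlight."""
    img = img_gray.astype(np.float32) + 1.0
    return img / cv2.GaussianBlur(img, (0, 0), sigma)

def fluid_surface_row(img_gray: np.ndarray, dark_ratio: float = 0.95) -> int:
    """Accumulate, row by row, how many pixels look darker than their surroundings in the
    SQI, and take the first row where that count covers most of the width as the liquid
    surface; the residual quantity then scales with the number of rows below it."""
    sqi = self_quotient(img_gray)
    row_counts = (sqi < dark_ratio).sum(axis=1)
    rows = np.nonzero(row_counts > img_gray.shape[1] // 2)[0]
    return int(rows[0]) if rows.size else img_gray.shape[0]

# Toy frame: bright backlit background with fluid filling the lower half
frame = np.full((240, 320), 200, dtype=np.uint8)
frame[120:, :] = 90
surface = fluid_surface_row(frame)
print(surface, "rows of fluid:", frame.shape[0] - surface)
```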

Judgment about the Usefulness of Automatically Extracted Temporal Information from News Articles for Event Detection and Tracking (사건 탐지 및 추적을 위해 신문기사에서 자동 추출된 시간정보의 유용성 판단)

  • Kim Pyung;Myaeng Sung-Hyon
    • Journal of KIISE:Software and Applications / v.33 no.6 / pp.564-573 / 2006
  • Temporal information plays an important role in natural language processing (NLP) applications such as information extraction, discourse analysis, automatic summarization, and question answering. In topic detection and tracking (TDT), the temporal information most often used is a message's publication date, which is readily available but limited in usefulness. We developed a relatively simple NLP method for extracting temporal information from Korean news articles, with the goal of improving TDT performance. To extract temporal information we use finite state automata and a lexicon of time-revealing vocabulary, and the extracted expressions are converted into a canonical representation of a time point or a time duration. We first evaluated the accuracy of the extraction and canonicalization methods and then investigated the extent to which the extracted temporal information helps TDT tasks. The experimental results show that time information extracted from text significantly improves both precision and recall. (A brief extraction-and-canonicalization sketch follows this entry.)
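A much-simplified, English-only Python sketch of extraction plus canonicalization; regular expressions and a tiny relative-date lexicon stand in for the paper's Korean finite state automata and time lexicon:

```python
import re
from datetime import date, timedelta

# Tiny lexicon of time-revealing expressions, resolved relative to the
# article's publication date (a simplified analogue of the FSA + lexicon approach).
RELATIVE = {"today": 0, "yesterday": -1, "tomorrow": 1}
ABSOLUTE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

def canonicalize(text: str, pub_date: date) -> list:
    """Extract temporal expressions and map each to a canonical time point."""
    points = []
    for word, offset in RELATIVE.items():
        if word in text.lower():
            points.append(pub_date + timedelta(days=offset))
    for y, m, d in ABSOLUTE.findall(text):
        points.append(date(int(y), int(m), int(d)))
    return points

article = "Yesterday the two sides met again; the next round is set for 2006-06-15."
print(canonicalize(article, pub_date=date(2006, 6, 2)))
# [datetime.date(2006, 6, 1), datetime.date(2006, 6, 15)]
```

The canonical time points (rather than only the publication date) can then be compared across articles when clustering or tracking events.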