Keyword: Data Structuring

The Guideline for Re-Structuring of Information System and Case Study (정보시스템 재구축 수행 방안과 적용 사례)

  • Choi, Youn-Lak; Lee, Eun-Sang; Lee, Hyun-Jeong; Chong, Ki-Won
    • Proceedings of the Korea Information Processing Society Conference / 2001.10a / pp.473-476 / 2001
  • Recently, there has been a trend of re-structuring existing information systems into new ones that reflect the diverse requirements of customers and users and changes in the corporate environment; through this, enterprises can secure a competitive advantage and gain superior competitiveness. This paper presents a method for systematically performing process modeling and data modeling for information system re-structuring, and shows a case in which the method was actually applied. The method consists of a Process Model Analysis phase, which analyzes requirements from the perspective of the overall information system and the deficiencies of the existing system to extract informatization targets; a Logical Data Modeling phase, which converts those targets into a conceptual model; and a Physical Data Modeling phase, which stores the model on an actual computer for use.
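
The entry above describes the three modeling phases only at the methodology level. As a minimal, hypothetical sketch of the logical-to-physical step, the Python snippet below maps an invented logical entity to a physical table definition; the entity, attribute names, and type mapping are all assumptions for illustration, not the paper's actual case.

```python
# Minimal sketch: turning a logical entity into a physical table definition.
# The entity and attribute names are invented for illustration; the paper
# describes the methodology only, not a concrete schema.

LOGICAL_TO_PHYSICAL_TYPES = {
    "identifier": "INTEGER PRIMARY KEY",
    "text": "VARCHAR(255)",
    "date": "DATE",
}

def physical_ddl(entity: str, attributes: dict[str, str]) -> str:
    """Translate a logical entity description into a CREATE TABLE statement."""
    columns = ",\n  ".join(
        f"{name} {LOGICAL_TO_PHYSICAL_TYPES[kind]}" for name, kind in attributes.items()
    )
    return f"CREATE TABLE {entity} (\n  {columns}\n);"

# Logical model: a hypothetical 'customer_request' informatization target
# extracted during process model analysis.
print(physical_ddl("customer_request", {
    "request_id": "identifier",
    "description": "text",
    "requested_on": "date",
}))
```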

A Secure Healthcare System Using Holochain in a Distributed Environment

  • Jong-Sub Lee; Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication / v.15 no.4 / pp.261-269 / 2023
  • We propose the design of a Holochain-based security and privacy protection system for resource-constrained IoT healthcare systems. The proposed system consists of four main layers aimed at the secure collection, transmission, storage, and processing of important medical data in IoT healthcare environments. The first layer, the perception layer, consists of various IoT devices such as wearable devices, sensors, and other medical devices; these devices collect patient health data and pass it on to the network layer. The second layer, the network connectivity layer, assigns an IP address to the collected data and ensures that the data is transmitted reliably over the network; transmission takes place via standardized protocols, which ensures data reliability and availability. The third layer, the distributed cloud layer, is a Holochain-based distributed data store that holds important medical information collected from resource-limited IoT devices; this layer manages data integrity and access control, and allows users to share data securely. Finally, the fourth layer, the application layer, provides useful information and services to end users, patients, and healthcare professionals; the structuring and presentation of data and the interaction between applications are managed at this layer. In contrast to traditional centralized or blockchain-based systems, this structure aims to provide security, privacy, and resource efficiency suitable for IoT healthcare systems. Analysis and performance evaluation confirmed that these characteristics operate effectively in the IoT healthcare environment.
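
The abstract describes the four layers architecturally. Purely as a toy sketch of how data might flow through such layers, the snippet below uses invented class and method names and a content-addressed in-memory store standing in for the Holochain DHT; it does not use any real Holochain API.

```python
# Toy sketch of the four-layer flow described above. All class and method
# names are invented for illustration; this is not the actual Holochain API.
import hashlib
import json

class PerceptionLayer:
    def collect(self) -> dict:
        # A wearable sensor reading; the values are made up.
        return {"patient": "p-001", "heart_rate": 72}

class NetworkLayer:
    def transmit(self, reading: dict) -> dict:
        # Attach addressing metadata before reliable transmission.
        return {"src_ip": "10.0.0.7", "payload": reading}

class DistributedCloudLayer:
    def __init__(self):
        self.store: dict[str, str] = {}

    def put(self, packet: dict) -> str:
        # Content-address the record, as a distributed hash table would.
        blob = json.dumps(packet["payload"], sort_keys=True)
        key = hashlib.sha256(blob.encode()).hexdigest()
        self.store[key] = blob
        return key

class ApplicationLayer:
    def present(self, cloud: DistributedCloudLayer, key: str) -> dict:
        return json.loads(cloud.store[key])

cloud = DistributedCloudLayer()
key = cloud.put(NetworkLayer().transmit(PerceptionLayer().collect()))
print(ApplicationLayer().present(cloud, key))
```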

Anomaly Detection of Big Time Series Data Using Machine Learning (머신러닝 기법을 활용한 대용량 시계열 데이터 이상 시점탐지 방법론 : 발전기 부품신호 사례 중심)

  • Kwon, Sehyug
    • Journal of Korean Society of Industrial and Systems Engineering / v.43 no.2 / pp.33-38 / 2020
  • Machine-learning anomaly detection, such as PCA anomaly detection and CNN image classification, has mostly focused on cross-sectional data. In this paper, two approaches are suggested for applying ML techniques to identify the failure time in big time series data. PCA anomaly detection, which labels time rows as normal or abnormal, is suggested by converting the subject-identification problem to the time domain. CNN image classification is suggested to identify the failure time by re-structuring the time series data: the correlation matrix of each minute of data is computed and converted to TIFF image format. In addition, LASSO, a feature selection method, is applied to select the variables that most strongly indicate the failure status. For the empirical study, time series data were collected at one-second intervals from 214 components of a power generator for 25 minutes, including the 20 minutes before the failure time. PCA anomaly detection predicted and detected the failure 9 minutes 17 seconds in advance, but the combination of LASSO and PCA failed to detect it, because the target variable was a binary variable assigned on the basis of the failure time. CNN image classification, trained on 10 normal-status images and 5 failure-status images, detected the failure just one minute in advance.
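
A hedged sketch of the first approach (PCA anomaly detection over time rows) is given below. The synthetic data, component count, and three-sigma threshold are assumptions for illustration; the paper's generator signals and tuning are not reproduced.

```python
# Sketch of PCA anomaly detection over time rows: each second of multichannel
# data is a row, and rows with large reconstruction error are flagged.
# The synthetic data and the three-sigma threshold are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(1400, 214))      # ~23 min of per-second rows
drift = rng.normal(0, 1, size=(100, 214)) + 1.5  # final rows drift before failure
X = np.vstack([normal, drift])

pca = PCA(n_components=10).fit(normal)           # fit on known-normal rows only
recon = pca.inverse_transform(pca.transform(X))
error = np.linalg.norm(X - recon, axis=1)        # per-row reconstruction error

threshold = error[: len(normal)].mean() + 3 * error[: len(normal)].std()
first_alarm = int(np.argmax(error > threshold))  # index of first flagged row
print(f"first anomalous row: {first_alarm} (true change at {len(normal)})")
```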

Discussion : Vision and Strategy for Undergraduate Statistics Major Program (토론 : 통계학 학부전공 프로그램의 비전과 전략에 비추어)

  • 손건태; 허명회
    • The Korean Journal of Applied Statistics / v.12 no.2 / pp.705-709 / 1999
  • We discuss the paper by Cho, Shin, Lee, and Han on the "information-related" undergraduate statistics major program from the following perspectives: Recently, Korean universities have been in re-structuring turmoil. To confront the situation effectively, we need both a vision and a strategy for statistics and statistics departments. For the undergraduate statistics major program, our visions are 1) it should not be a preliminary education program targeted at graduate degrees, 2) it should be responsive to future social demand, and 3) it should incorporate the progressive identity of statistics as information and data science. As strategies, we propose 1) the effective integration of and due balance among data collection, management, and analysis, 2) the harmony and role development of computers and mathematics as statistical tools, 3) statistics education through task-oriented problem solving, and 4) an emphasis on teamwork and communication skills.

Watershed Segmentation of High-Resolution Remotely Sensed Imagery

  • WANG Ziyu; ZHAO Shuhe; CHEN Xiuwan
    • Proceedings of the KSRS Conference / 2004.10a / pp.107-109 / 2004
  • High-resolution remotely sensed data such as SPOT-5 imagery are employed to study the effectiveness of the watershed segmentation algorithm. Existing problems in this approach are identified and appropriate solutions are proposed. As a case study, a panchromatic SPOT-5 image of part of Beijing's urban area has been segmented using the MATLAB software. In segmentation, the structuring element is first created, then the gaps between objects are exaggerated and the objects of interest are converted; after that, the intensity valleys are detected and the watershed segmentation is conducted. Through this process, the objects in an image are divided into separate objects. Finally, the effectiveness of the watershed segmentation approach for high-resolution imagery is summarized, and an approach to solving problems such as over-segmentation is proposed.
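
The case study was done in MATLAB. As an analogous sketch in Python, the snippet below runs marker-based watershed segmentation with scikit-image on a synthetic two-object image; the marker strategy and parameters are illustrative assumptions, and markers are the usual remedy for the over-segmentation problem the abstract mentions.

```python
# Analogous watershed pipeline in Python (the paper used MATLAB); the
# synthetic image and all parameters are assumptions for illustration.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic "objects": two overlapping bright disks on a dark background.
img = np.zeros((80, 80))
rr, cc = np.ogrid[:80, :80]
img[(rr - 30) ** 2 + (cc - 30) ** 2 < 15 ** 2] = 1.0
img[(rr - 45) ** 2 + (cc - 50) ** 2 < 15 ** 2] = 1.0

elevation = sobel(img)                      # gradient image: valleys inside objects
distance = ndi.distance_transform_edt(img)  # one distance peak per disk centre
peaks = peak_local_max(distance, labels=img.astype(int), min_distance=10)
markers = np.zeros_like(img, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)  # one marker per object

labels = watershed(elevation, markers, mask=img.astype(bool))
print("objects found:", labels.max())       # markers curb over-segmentation
```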

Design Knowledge Management using Configuration Manager (구성 관리자를 이용한 설계지식 관리)

  • Kang, Mu-Jin; Jung, Seung-Hwan
    • Proceedings of the KSME Conference / 2000.04a / pp.890-893 / 2000
  • It is known that about 15 to 40 percent of total design time is spent retrieving information such as standard parts handbook data, engineering equations, and previous designs. This paper describes a knowledge management system for machine tool design. Product structuring, change management, and management of complex design knowledge are possible through the developed system. The system can speed up the design process by making necessary data instantly available as it is needed and by keeping track of all the relevant design information and knowledge, including individual decisions, design intentions, documents, and drawings.
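
The paper describes the system at the feature level only. As a toy illustration of making design data "instantly available", the snippet below builds an inverted keyword index over invented design records; the real system is a configuration manager, not this code.

```python
# Toy illustration of fast design-knowledge retrieval via an inverted index.
# The records and fields are invented for illustration.
from collections import defaultdict

records = [
    {"id": "D-001", "text": "spindle bearing selection for machine tool head"},
    {"id": "D-002", "text": "standard parts handbook data for ball screws"},
    {"id": "D-003", "text": "design intent notes for spindle cooling"},
]

# Map every keyword to the set of record ids that contain it.
index: dict[str, set[str]] = defaultdict(set)
for rec in records:
    for word in rec["text"].split():
        index[word].add(rec["id"])

def lookup(*words: str) -> set[str]:
    """Return ids of records containing every query word."""
    sets = [index[w] for w in words]
    return set.intersection(*sets) if sets else set()

print(lookup("spindle"))             # {'D-001', 'D-003'}
print(lookup("spindle", "cooling"))  # {'D-003'}
```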

A Structural Analysis of Dictionary Text for the Construction of Lexical Data Base (어휘정보구축을 위한 사전텍스트의 구조분석 및 변환)

  • 최병진
    • Language and Information / v.6 no.2 / pp.33-55 / 2002
  • This research aims at transforming the definition text of an English-English-Korean Dictionary (EEKD), which is encoded in EST files for publishing purposes, into a structured format for a Lexical Data Base (LDB). The construction of an LDB is very time-consuming and expensive work. In order to save time and effort in building new lexical information, the present study tries to extract useful linguistic information from an existing printed dictionary. In this paper, the process of extracting and structuring lexical information from a printed dictionary (EEKD) as a lexical resource is described. The extracted information is represented in XML format, which can be transformed into other representations for different application requirements.
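
As a hedged sketch of the extraction-and-structuring step, the snippet below converts one made-up flat dictionary line into XML with Python's xml.etree; the field layout is an invented stand-in for the EEKD's actual print format.

```python
# Sketch: converting one flat dictionary line into structured XML.
# The field layout (headword|pos|English gloss|Korean gloss) is an invented
# stand-in for the EEKD's actual print format.
import xml.etree.ElementTree as ET

raw_line = "bank|noun|an institution for receiving and lending money|은행"

def entry_to_xml(line: str) -> ET.Element:
    headword, pos, gloss_en, gloss_ko = line.split("|")
    entry = ET.Element("entry")
    ET.SubElement(entry, "headword").text = headword
    ET.SubElement(entry, "pos").text = pos
    sense = ET.SubElement(entry, "sense")
    ET.SubElement(sense, "def", {"lang": "en"}).text = gloss_en
    ET.SubElement(sense, "equiv", {"lang": "ko"}).text = gloss_ko
    return entry

print(ET.tostring(entry_to_xml(raw_line), encoding="unicode"))
```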

Problem Structuring in IT Policy: Boundary Analysis of IT Policy Problems (경계분석을 통한 정책문제 정의에 관한 연구 - 언론보도에 나타난 IT 정책문제 탐색을 중심으로 -)

  • Park, Chisung; Nam, Ki Bum
    • 한국정책학회보 / v.21 no.4 / pp.199-228 / 2012
  • Policy problems are complex due to the diverse participants and their relations in the policy processes. Defining the right problem in the first place is important, because a Type III error is likely to occur unless rival hypotheses are removed when defining the problem. This study applies the Boundary Analysis suggested by Dunn to structure IT policy problems in Korea. The time frame of the study covers the five years of the Lee administration, and data are collected from four newspapers. Using content analysis, the study first identifies a total of 2,614 policy problems from 1,908 stakeholders. After removing duplicate problems, 369 problems from 323 stakeholders are identified as the boundary of IT policy problems. Among others, failures in government policies are weighted as the most serious problems in the IT policy field. However, many significant problems raised by stakeholders date back more than a decade, and those are intrinsic problems initially caused by market distortions in the IT industry. Therefore, when interpreting the results of problem structuring, we should be cautious not to overemphasize the most conspicuous problem as the only problem in the policy field.
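
Dunn's boundary analysis estimates the problem boundary by sampling stakeholders until almost no new problems appear. The snippet below computes that cumulative new-problem curve on made-up data; the stakeholders and problems are invented, not the study's newspaper corpus.

```python
# Sketch of the boundary-estimation idea in Dunn's boundary analysis: sample
# stakeholders one by one and watch when the count of *new* problems flattens.
# The stakeholder-to-problem data below are made up for illustration.
stakeholders = {
    "ministry": {"budget cuts", "policy failure"},
    "telecom_firm": {"market distortion", "policy failure"},
    "press": {"policy failure", "privacy"},
    "academia": {"market distortion", "privacy"},  # adds nothing new
}

seen: set[str] = set()
for i, (who, problems) in enumerate(stakeholders.items(), start=1):
    new = problems - seen           # problems not raised by anyone so far
    seen |= problems
    print(f"{i}: {who:12s} new: {len(new)} cumulative: {len(seen)}")
# The boundary is estimated where the cumulative curve flattens (here, step 4).
```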

A New Focus Measure Method Based on Mathematical Morphology for 3D Shape Recovery (3차원 형상 복원을 위한 수학적 모폴로지 기반의 초점 측도 기법)

  • Mahmood, Muhammad Tariq; Choi, Young Kyu
    • KIPS Transactions on Software and Data Engineering / v.6 no.1 / pp.23-28 / 2017
  • Shape from focus (SFF) is a technique used to reconstruct the 3D shape of objects from a sequence of images obtained at different focus settings of the lens. In this paper, a new shape-from-focus method for 3D reconstruction of microscopic objects is described, based on the gradient operator of mathematical morphology. Conventionally, SFF methods use a single focus measure for measuring focus quality. Due to the complex shape and texture of microscopic objects, single-measure operators are not sufficient, so we propose morphological operators with multiple structuring elements for computing the focus values. Finally, an optimal focus measure is obtained by combining the responses of all focus measures. Experimental results show that the proposed algorithm provides more accurate depth maps than existing methods in terms of three-dimensional shape recovery.
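
A hedged sketch of the core idea follows: compute a morphological gradient (dilation minus erosion) under several structuring elements, combine the responses, and take the per-pixel argmax over the focus stack as the depth map. The element shapes and the max-combination rule are assumptions; the paper's exact operators may differ.

```python
# Sketch of a multi-structuring-element morphological focus measure for SFF.
# Element shapes and the max-combination rule are assumptions.
import numpy as np
from scipy import ndimage as ndi

def focus_measure(img: np.ndarray) -> np.ndarray:
    """Combine morphological gradients over several structuring elements."""
    elements = [
        np.ones((3, 3)),                              # square
        np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]),  # cross
        np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),  # diagonal
    ]
    grads = [
        ndi.grey_dilation(img, footprint=e) - ndi.grey_erosion(img, footprint=e)
        for e in elements
    ]
    return np.maximum.reduce(grads)  # combined focus response

# Synthetic focus stack: less-blurred frames have stronger gradients.
rng = np.random.default_rng(1)
stack = np.stack([ndi.gaussian_filter(rng.random((64, 64)), s) for s in (0.5, 1, 2)])
focus = np.stack([focus_measure(frame) for frame in stack])
depth_map = focus.argmax(axis=0)   # per-pixel index of the best-focused frame
print("depth histogram:", np.bincount(depth_map.ravel(), minlength=3))
```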

An Efficient Anchor Range Extracting Algorithm for The Unit Structuring of News Data (뉴스 정보의 단위 구조화를 위한 효율적인 앵커구간 추출 알고리즘)

  • 전승철; 박성한
    • Journal of Broadcast Engineering / v.6 no.3 / pp.260-269 / 2001
  • This paper proposes an efficient algorithm for extracting the anchor ranges that exist in news video, for the unit structuring of news. For this purpose, the paper uses the anchor's face in the frame rather than the cuts where scene changes occur. Within an anchor range, we find the end position (frame) of the range with the FRFD (Face Region Frame Difference); conversely, within a non-anchor range, we find the start position of the next anchor range by detecting the anchor's face. The face-detection process consists of two parts, to reduce the computation time spent on MPEG decoding: the first part finds anchor-face candidates through rough analysis of partially decoded MPEG frames, and the second part verifies the candidates with full decoding. The result of this process can be used as a basic step of news analysis; in particular, its fast processing and high recall rate make it suitable for real news services.
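
The paper's pipeline is MPEG-specific. The hedged sketch below mimics only the range-closing decision on synthetic frames: a range stays open while the face-region frame difference (FRFD) is small and closes when it spikes. The frames, the fixed face box, and the threshold are all invented.

```python
# Sketch of the anchor-range-closing logic: monitor the face-region frame
# difference (FRFD) and close the range when it exceeds a threshold.
# Frames, the face box, and the threshold are invented; the paper works on
# partially decoded MPEG streams, not raw arrays.
import numpy as np

FACE_BOX = (slice(10, 30), slice(20, 40))  # assumed fixed anchor-face region
THRESHOLD = 0.05                           # assumed FRFD cut-off for [0,1] pixels

def frfd(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute difference inside the face region."""
    return float(np.abs(curr[FACE_BOX] - prev[FACE_BOX]).mean())

rng = np.random.default_rng(2)
anchor = rng.random((50, 60))
frames = [anchor + rng.normal(0, 0.01, (50, 60)) for _ in range(5)]  # anchor shots
frames += [rng.random((50, 60)) for _ in range(3)]                   # report shots

start, end = 0, None   # range opens at the first anchor frame (assumed found)
for t in range(1, len(frames)):
    if frfd(frames[t - 1], frames[t]) > THRESHOLD:
        end = t        # the anchor left the frame here
        break
end = end if end is not None else len(frames)
print(f"anchor range: frames {start}..{end - 1}")
```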
