
유전적 프로그래밍과 SOM을 결합한 개선된 선박 설계용 데이터 마이닝 시스템 개발 (Development of Data Mining System for Ship Design using Combined Genetic Programming with Self Organizing Map)

  • 이경호;박종훈;한영수;최시영
    • 한국CDE학회논문집 / Vol. 14, No. 6 / pp.382-389 / 2009
  • Knowledge management has recently become a competitive necessity for companies, many of which have built Enterprise Resource Planning (ERP) systems to manage large bodies of knowledge; formalizing organizational knowledge, however, is not easy. We focus on a data mining system based on genetic programming (GP), which can derive and extract the necessary information and knowledge from large volumes of accumulated data. When there are not enough data to carry out GP's learning process, one must either reduce the number of input parameters or increase the amount of training data. This study proposes an enhanced data mining method that combines genetic programming with a self-organizing map (SOM) to reduce the number of input parameters. Experimental results from a prototype implementation are also discussed.
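
The abstract gives only the general idea of the combination, so the following is a minimal sketch of one plausible reading, assuming the SOM clusters correlated input parameters so that a single representative per cluster feeds a GP symbolic regressor. The MiniSom and gplearn libraries, the data, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from minisom import MiniSom                     # assumed SOM library
from gplearn.genetic import SymbolicRegressor   # assumed GP library

# X: (n_samples, n_params) ship-design data; y: a target quantity.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = 2.0 * X[:, 0] + X[:, 3] ** 2 + rng.normal(scale=0.1, size=200)

# Cluster the *parameters* (columns) with a 1-D SOM over their
# transposed sample vectors, then keep one representative per node.
som = MiniSom(1, 4, input_len=X.shape[0], random_seed=0)
som.train_random(X.T, num_iteration=500)
representatives = {}
for i, col in enumerate(X.T):
    representatives.setdefault(som.winner(col)[1], i)  # first column per node
X_reduced = X[:, sorted(representatives.values())]

# GP symbolic regression on the reduced parameter set.
gp = SymbolicRegressor(population_size=500, generations=20, random_state=0)
gp.fit(X_reduced, y)
print(gp._program)
```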

메가프로젝트 원가 자료 분석에 관한 연구 (A Study of cost data modeling for Megaproject)

  • 지성민;조재경;현창택
    • 한국건축시공학회:학술대회논문집 / 2009 Fall Conference / pp.253-256 / 2009
  • The success of a megaproject, which includes various complex facilities, requires establishing a database system. Developments in data collection, storage, and extraction technology have enabled iPMIS to manage varied and complex information about cost and time. In particular, considering the go/no-go decision at the feasibility stage, cost is an important and clear criterion in a megaproject, so cost data modeling is the basis of the system and a necessary process. This research focuses on the structure and definition of the CBS data collected from sites. Four tools were used to identify cause-and-effect relationships in the CBS data: Function Analysis from Value Engineering, Causal Loop Diagrams from System Dynamics, Decision Trees from data mining, and Normalization from SQL. The resulting cost data model provides iPMIS with a helpful guideline.
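
Of the four tools, normalization lends itself most directly to a code sketch. Below is a minimal pandas illustration, assuming flat CBS cost records from sites carry repeated description fields; the schema and values are hypothetical, since the paper does not publish its data model.

```python
import pandas as pd

# Flat CBS cost records as they might arrive from a site.
raw = pd.DataFrame({
    "site":     ["A", "A", "B"],
    "cbs_code": ["03-100", "03-200", "03-100"],
    "cbs_name": ["Concrete", "Rebar", "Concrete"],
    "cost":     [120_000, 45_000, 98_000],
})

# Normalization: move the repeated CBS descriptions into their own
# table so each fact is stored exactly once (roughly third normal form).
cbs_master = raw[["cbs_code", "cbs_name"]].drop_duplicates().reset_index(drop=True)
cost_facts = raw[["site", "cbs_code", "cost"]]

# Re-join only when a denormalized view is needed for analysis.
view = cost_facts.merge(cbs_master, on="cbs_code")
print(view)
```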

State Analysis and Location Tracking Technology through EEG and Position Data Analysis

  • Jo, Guk-Han;Song, Young-Joon
    • 한국정보기술학회 영문논문지 / Vol. 8, No. 2 / pp.27-39 / 2018
  • In this paper, we describe algorithms, EEG classification methods, and position data analysis methods using EEG signals acquired with the ADS1299 sensor. The volume of real-time location and EEG data must be managed and extracted efficiently, so we explain the process of extracting important information from a vast amount of data through a cloud server. The electrical signals measured from the brain are used to determine psychological state and health status, and positions are collected using a position sensor and triangulation.
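
The abstract names triangulation without giving the math, so the sketch below shows a standard least-squares trilateration from anchor positions and measured distances, one common way such a position fix is computed; the coordinates are invented for illustration.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Least-squares position fix from >= 3 anchors and measured distances.

    Subtracting the first anchor's circle equation from the others
    linearizes the problem into A x = b.
    """
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate(anchors, dists))  # ~ [3. 4.]
```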

다중 전술데이터링크 운용에 따른 데이터 루핑 방지 방안 (A Method For Preventing Data Looping in Multi Tactical Datalink Operation)

  • 우순;임재성
    • 한국군사과학기술학회지 / Vol. 16, No. 3 / pp.314-321 / 2013
  • In this paper, we propose a method to prevent data looping when multiple tactical data links are operated together. Because multi-tactical-data-link operation in Korea is becoming more complex than ever, data looping is increasingly likely to occur. To prevent it, the forwarder must manage the TQ (Track Quality) of each forwarded track message, degrading it to enlarge the correlation gate, and must discard useless track messages as determined by a minimum TQ value. Deciding the optimal formula for degrading TQ and setting the minimum TQ requires further research on track motion, correlation, and TQ management in a real system.
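
As a rough illustration of the mechanism, the sketch below degrades TQ on each forwarding hop and discards tracks below a minimum, so a looping track decays until it is dropped. The step size and threshold are placeholders: the paper explicitly leaves the optimal degradation formula and minimum TQ to further research.

```python
from dataclasses import dataclass

TQ_DEGRADE_STEP = 1   # placeholder; the optimal formula is an open question
TQ_MINIMUM = 3        # placeholder discard threshold

@dataclass
class TrackMessage:
    track_id: str
    tq: int  # Track Quality; higher means a fresher, more reliable track

def forward(msg: TrackMessage) -> TrackMessage | None:
    """Degrade TQ on each hop between data links; drop stale tracks.

    A looping track gets re-forwarded repeatedly, so its TQ decays to
    the floor and the loop starves itself out.
    """
    degraded = msg.tq - TQ_DEGRADE_STEP
    if degraded < TQ_MINIMUM:
        return None  # discard: too unreliable to re-forward
    return TrackMessage(msg.track_id, degraded)
```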

데이터마이닝을 이용한 관측적 침하해석의 신뢰성 연구 (A Study on the Reliability of Observational Settlement Analysis Using Data Mining)

  • 우철웅;장병욱
    • 한국농공학회지 / Vol. 45, No. 6 / pp.183-193 / 2003
  • Most construction works on soft ground adopt instrumentation to manage the settlement and stability of the embankment. Rapid progress in information technology and digital data acquisition for soft-ground instrumentation has produced a fast-growing amount of data. Although valuable information about the behaviour of soft ground may be hidden in the data, most of it is used only for managing settlement and stability. One of the critical issues in soft-ground instrumentation is long-term settlement prediction, for which observational settlement analysis methods are used; however, the reliability of their results remains vague. Knowledge could be discovered from a large body of experience with observational settlement analysis. In this article, we present a database for storing settlement records and a data mining procedure. A large volume of knowledge about observational settlement prediction was collected from the database by applying a filtering algorithm and a knowledge discovery algorithm. Statistical analysis revealed that the reliability of observational settlement analysis depends on the stay duration and the estimated degree of consolidation.
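
The article does not say which observational method its database evaluates; the hyperbolic method is one widely used example and is sketched below. It assumes settlement follows s(t) = t / (a + b·t), so t/s is linear in t and a, b come from a least-squares line; the monitoring record is invented for illustration.

```python
import numpy as np

# Hypothetical settlement record: days after the end of embankment
# loading and observed settlement in cm.
t = np.array([10.0, 30.0, 60.0, 120.0, 240.0, 360.0])
s = np.array([4.8, 11.0, 16.5, 21.8, 25.6, 27.1])

# Hyperbolic method: s = t / (a + b t)  =>  t/s = a + b t (a line).
b, a = np.polyfit(t, t / s, 1)

s_ultimate = 1.0 / b                      # settlement as t -> infinity
s_pred = lambda tt: tt / (a + b * tt)     # long-term prediction

print(f"ultimate settlement ~ {s_ultimate:.1f} cm")
print(f"predicted at t=720 d: {s_pred(720.0):.1f} cm")
```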

Issues in structural health monitoring employing smart sensors

  • Nagayama, T.;Sim, S.H.;Miyamori, Y.;Spencer, B.F. Jr.
    • Smart Structures and Systems / Vol. 3, No. 3 / pp.299-320 / 2007
  • Smart sensors densely distributed over structures can provide rich information for structural monitoring using their onboard wireless communication and computational capabilities. However, issues such as time synchronization error, data loss, and dealing with large amounts of harvested data have limited the implementation of full-fledged systems, and limited network resources (e.g., battery power, storage space, and bandwidth) make these issues quite challenging. This paper first investigates the effects of time synchronization error and data loss, aiming to clarify the requirements on synchronization accuracy and communication reliability in SHM applications. Coordinated computing is then examined as a way to manage large amounts of data.
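
For context on where synchronization error comes from, the sketch below shows an NTP-style two-way timestamp exchange, a standard way for a wireless node to estimate its clock offset; it is background illustration, not the protocol studied in the paper.

```python
def clock_offset(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    """NTP-style two-way exchange between a reference and a sensor node.

    t1: reference sends request   (reference clock)
    t2: sensor receives request   (sensor clock)
    t3: sensor sends reply        (sensor clock)
    t4: reference receives reply  (reference clock)
    Returns (estimated sensor clock offset, round-trip delay).
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Sensor clock runs 5 ms ahead; one-way radio delay is 2 ms.
print(clock_offset(t1=0.000, t2=0.007, t3=0.008, t4=0.005))
# -> (0.005, 0.004)
```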

Correlation Analysis of the Frequency and Death Rates in Arterial Intervention using C4.5

  • Jung, Yong Gyu;Jung, Sung-Jun;Cha, Byeong Heon
    • International journal of advanced smart convergence / Vol. 6, No. 3 / pp.22-28 / 2017
  • With the recent development of technologies for managing vast amounts of data, data mining has had a major impact on all industries. Data mining is the process of discovering useful correlations hidden in data, extracting actionable information, and using it for decision making. In other words, it is the core process of Knowledge Discovery in Databases (KDD) that transforms input data into useful information, extracting previously unknown knowledge from a large database. In this paper, the frequency and death rates of percutaneous coronary intervention for patients with heart disease were grouped by region, and the C4.5 decision tree algorithm was used to analyze the regional differences between frequency and mortality.
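
A minimal stand-in for the analysis is sketched below. True C4.5 splits on the information gain ratio; scikit-learn ships CART, so its "entropy" criterion is used here as the closest readily available approximation, and the per-region records are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-region records: [intervention frequency, death rate %].
X = np.array([[120, 1.2], [340, 2.8], [95, 0.9], [410, 3.1],
              [210, 1.8], [60, 0.7], [380, 2.9], [150, 1.4]])
y = np.array(["low", "high", "low", "high", "low", "low", "high", "low"])

# Entropy-based splits approximate C4.5's information-gain criterion.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=2, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=["frequency", "death_rate"]))
```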

DICOM 객체를 활용한 무결성 PACS Data 관리시스템 구현 (A Study for Management System of Integrity PACS Data Using DICOM Object)

  • 박범진;정재호;손기경;정영태;강희두
    • 대한디지털의료영상학회논문지 / Vol. 15, No. 1 / pp.9-20 / 2013
  • PACS is one of the most widely used medical information systems, and hospitals share information with one another through it. Data integrity means zero-defect data, a prerequisite for information system performance. That integrity is easily undermined: incorrect entries from radiologic technologists' mistakes, anonymous patients in the emergency or newborn departments, and information modified after the fact all introduce defects. Moreover, an import uses the DICOM header rather than the database record, so errors occur when the database data differ from the DICOM header information. This paper discusses resolving these problems using DICOM objects such as DICOM PR and SR, and proposes a quality management system that can guarantee patient information and manage examination history.
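
One building block of such a quality management system could be an automated header-versus-database consistency check. The pydicom-based sketch below is an assumption about how that check might look; the tag list, file name, and DB record are hypothetical, not the paper's design.

```python
import pydicom

# Fields to cross-check between the DICOM header and the hospital DB.
CHECK_TAGS = ["PatientID", "PatientName", "AccessionNumber", "StudyDate"]

def find_mismatches(dicom_path: str, db_record: dict) -> dict:
    """Return {tag: (header value, DB value)} for every disagreement."""
    ds = pydicom.dcmread(dicom_path)
    mismatches = {}
    for tag in CHECK_TAGS:
        header_value = str(getattr(ds, tag, ""))
        db_value = str(db_record.get(tag, ""))
        if header_value != db_value:
            mismatches[tag] = (header_value, db_value)
    return mismatches

# Example: a study whose patient ID was corrected in the DB only.
db_record = {"PatientID": "123456", "PatientName": "HONG^GILDONG",
             "AccessionNumber": "A2013-0042", "StudyDate": "20130115"}
print(find_mismatches("study_0001.dcm", db_record))
```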

Development of Data-Flow Control Algorithm of Wireless Network for Sewage Disposal Facility

  • Jung, Soonho;Shin, Jaekwon;Kang, Jeongjin;Lee, Seungyoun;Lee, Junghoon
    • International journal of advanced smart convergence / Vol. 4, No. 2 / pp.14-19 / 2015
  • Recently, sewage disposal facilities have become able to handle real-time data collection and record management through compact broadband modem LAN switching technology, which calls for more stable and efficient facility management and for the practical convergence of environmental facilities on a broadband integrated modem. In this paper, we propose a short-distance wireless communication network of compact broadband modems for sewage disposal facilities. Data inside the water treatment facility are received over two communication methods (IEEE 802.11x and IEEE 802.15.4x). The proposed data-flow control algorithm then prioritizes data processing when an emergency occurs, across data collection, analysis, and processing. Lastly, we demonstrate its usefulness through experiments and simulation analysis.
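
The abstract says emergency data is prioritized; a minimal sketch of that idea is a priority queue in front of the processing stage, shown below. The traffic classes and readings are assumptions, as the abstract does not define them.

```python
import heapq
import itertools

PRIORITY = {"emergency": 0, "alarm": 1, "routine": 2}  # assumed classes
_seq = itertools.count()  # tie-breaker keeps FIFO order within a class

queue: list[tuple[int, int, dict]] = []

def enqueue(reading: dict) -> None:
    """Push a sensor reading; emergency data jumps ahead of routine data."""
    heapq.heappush(queue, (PRIORITY[reading["class"]], next(_seq), reading))

def dequeue() -> dict:
    return heapq.heappop(queue)[2]

enqueue({"class": "routine",   "sensor": "flow-3", "value": 12.4})
enqueue({"class": "emergency", "sensor": "h2s-1",  "value": 55.0})
enqueue({"class": "routine",   "sensor": "ph-2",   "value": 6.9})
print(dequeue()["sensor"])  # -> h2s-1: the emergency reading is served first
```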

사례기반추론을 이용한 대용량 데이터의 실시간 처리 방법론 : 고혈압 고위험군 관리를 위한 자기학습 시스템 프레임워크 (Data Mining Approach for Real-Time Processing of Large Data Using Case-Based Reasoning : High-Risk Group Detection Data Warehouse for Patients with High Blood Pressure)

  • 박성혁;양근우
    • 한국IT서비스학회지 / Vol. 10, No. 1 / pp.135-149 / 2011
  • In this paper, we propose a high-risk group detection model for patients with high blood pressure using case-based reasoning. Public health organizations can apply the model to effectively manage knowledge related to high blood pressure and to allocate limited health care resources efficiently. The focus is on a model that can handle practical constraints: managing a large volume of data, learning automatically to adapt to external environmental changes, and operating in real time. Using real data collected from local public health centers, the optimal high-risk group detection model was derived with its optimal parameter set. Performance tests on held-out data show that the prediction accuracy of the proposed model is twice the natural risk of high blood pressure.
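
Case-based reasoning here presumably follows the usual retrieve-and-reuse cycle; the sketch below implements that cycle as a nearest-neighbour retrieval over a toy case base. The features, values, and k are illustrative, not the paper's tuned parameter set.

```python
import numpy as np

# Toy case base: [age, systolic BP, BMI] per person, with a label
# marking whether the person later became high-risk (1) or not (0).
cases = np.array([[62, 158, 29.1], [45, 128, 23.4], [58, 149, 27.8],
                  [33, 118, 21.0], [70, 165, 30.2], [51, 135, 25.5]])
labels = np.array([1, 0, 1, 0, 1, 0])

def cbr_predict(query: np.ndarray, k: int = 3) -> float:
    """Retrieve the k most similar past cases and reuse their outcomes.

    Feature differences are scaled by each feature's standard deviation
    so age, pressure, and BMI are comparable; the return value is the
    high-risk rate among the retrieved neighbours.
    """
    sigma = cases.std(axis=0)
    dist = np.linalg.norm((cases - query) / sigma, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(labels[nearest].mean())

print(cbr_predict(np.array([60.0, 152.0, 28.0])))  # e.g. 1.0 -> high-risk
```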