Title/Summary/Keyword: Automatic Use

Classification Tree Analysis to Assess Contributing Factors Influencing Biosecurity Level on Farrow-to-Finish Pig Farms in Korea (분류 트리 기법을 이용한 국내 일괄사육 양돈장의 차단방역 수준에 영향을 미치는 기여 요인 평가)

  • Kim, Kyu-Wook; Pak, Son-Il
    • Journal of Veterinary Clinics, v.33 no.2, pp.107-112, 2016
  • The objective of this study was to determine potential contributing factors associated with the biosecurity level of farrow-to-finish pig farms and to develop a classification tree model to explore how these factors relate to each other in a prediction model. To this end, the authors analyzed data (n = 193) extracted from a cross-sectional study of 344 farrow-to-finish farms, conducted between March and September 2014, that explored swine disease status at the farm level. Standardized questionnaires covering basic demographic data and management practices were completed for each farm during on-site visits by trained veterinarians. The Chi-squared Automatic Interaction Detection (CHAID) algorithm was applied to model the classification tree, with biosecurity level as the dependent variable. The misclassification risk statistic was used to evaluate the fitness of the model in terms of prediction results. Categorical multivariate input data (40 variables) were used to construct the classification tree, and the target variable was biosecurity level dichotomized into low versus high. In general, the level of biosecurity was low in the majority of the farms studied, mainly due to the limited implementation of basic on-farm biosecurity measures aimed at controlling the potential introduction and transmission of swine diseases. The CHAID model illustrated the relative importance of significant predictors in explaining the level of biosecurity: maintenance of medical records of treatment and vaccination, use of dedicated clothing to enter the farm, installation of a fence around the farm perimeter, and periodic monitoring of the herd using a written biosecurity plan. The misclassification risk estimate of the prediction model was 0.145 with a standard error of 0.025, indicating that 85.5% of the cases could be classified correctly using the decision rules of the current tree. Although the CHAID approach provides detailed information and insight about interactions among factors associated with biosecurity level, future studies should evaluate potential bias introduced during data collection. In addition, the findings still need to be validated on an external dataset with a larger sample size to improve the external validity of the current model.
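
As a rough illustration of the workflow the abstract describes, the sketch below fits a classification tree to survey-style categorical data and reports the misclassification risk (1 minus accuracy). CHAID itself is not available in scikit-learn, so a CART tree (`DecisionTreeClassifier`) stands in for it here; the file name and column names are hypothetical, not the study's data.

```python
# Sketch of a classification-tree workflow similar to the paper's. CHAID is not in
# scikit-learn, so CART stands in; "farm_survey.csv" and its columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("farm_survey.csv")                    # 193 farms, 40 categorical predictors
X = pd.get_dummies(df.drop(columns=["biosecurity"]))   # one-hot encode categorical inputs
y = df["biosecurity"]                                  # target dichotomized: "low" vs "high"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10).fit(X_tr, y_tr)

# Misclassification risk = 1 - accuracy; the paper reports 0.145 for its CHAID tree.
print(f"misclassification risk: {1.0 - tree.score(X_te, y_te):.3f}")
```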

A Technical Guide to Operational Regional Ocean Forecasting Systems in the Korea Hydrographic and Oceanographic Agency (I): Continuous Operation Strategy, Downloading External Data, and Error Notification (국립해양조사원 해양예측시스템 소개 (I): 현업 운영 전략, 외부 해양·기상 자료 내려 받기 및 오류 알림 기능)

  • BYUN, DO-SEONG; SEO, GWANG-HO; PARK, SE-YOUNG; JEONG, KWANG-YEONG; LEE, JOO YOUNG; CHOI, WON-JIN; SHIN, JAE-AM; CHOI, BYOUNG-JU
    • The Sea: Journal of the Korean Society of Oceanography, v.22 no.3, pp.103-117, 2017
  • This note provides a technical guide to three issues associated with establishing and automatically running regional ocean forecasting systems: (1) a strategy for continuous production of hourly-interval three-day ocean forecast data, (2) the daily download of ocean and atmospheric forecasting data (i.e., HYCOM and NOAA/NCEP GFS data), which are provided by outside institutions and used as initial conditions, surface forcing, and boundary data for regional ocean models, and (3) error notifications sent to numerical model managers through the Short Message Service (SMS). Guidance on these three issues is illustrated via the solutions implemented by the Korea Hydrographic and Oceanographic Agency, since in embarking on this project we found that this procedural information was not readily available elsewhere. The guide is based on our experiences and lessons learned while establishing and operating regional ocean forecasting systems for the East Sea and the Yellow and East China Seas over the five-year period 2012-2016. The fundamental approach and techniques outlined in this guide will be of use to anyone wanting to establish an automatic regional or coastal ocean forecasting system.
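
The download-and-notify pattern of items (2) and (3) can be made concrete with a small script. This is a minimal sketch under stated assumptions: the URLs and file names are placeholders, and `notify()` merely stands in for the SMS alert the agency sends to its model managers.

```python
# Hedged sketch of a scheduled download loop with retry and error notification,
# in the spirit of the note. URLs and the notify() hook are illustrative only.
import time
import urllib.request

SOURCES = {
    "hycom": "https://example.org/hycom/latest.nc",   # hypothetical URL
    "gfs":   "https://example.org/gfs/latest.grb2",   # hypothetical URL
}

def notify(message: str) -> None:
    # Stand-in for the SMS alert to the numerical model manager.
    print(f"[ALERT] {message}")

def fetch(name: str, url: str, retries: int = 3) -> bool:
    for attempt in range(1, retries + 1):
        try:
            urllib.request.urlretrieve(url, f"{name}.dat")
            return True
        except OSError:
            time.sleep(60 * attempt)   # back off before retrying
    notify(f"download of {name} failed after {retries} attempts")
    return False

for name, url in SOURCES.items():
    fetch(name, url)
```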

Accuracy Improvement of Laser Navigation System using FIS and Reliability (FIS와 신뢰도를 이용한 레이저 내비게이션의 정밀도 향상)

  • Jung, Eun-Kook; Kim, Jung-Min; Jung, Kyung-Hoon; Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems, v.21 no.3, pp.383-388, 2011
  • This paper studies accuracy improvement for laser navigation using a fuzzy inference system (FIS) and a reliability measure. In this wireless guidance system, a top-mounted laser rotates $360^{\circ}$, and phototransistors or other optical sensors read the return signal from reflectors mounted around the perimeter of the workspace. Most existing guidance systems are wire guidance systems; because of their high accuracy and fast response time, they are used in most industries. However, their installation cost is high and maintenance is difficult because the sensors are placed approximately 1 inch below the ground or embedded in the floor. Laser navigation was developed as a wireless alternative that avoids these problems: it does not require reconstructing the floor or ground, and it reduces installation and maintenance costs because changing the layout is easy. However, it is difficult to apply in industrial settings because it is easily affected by disturbances, which cause loss and corruption of data, and it has a slow response time. We therefore study how to improve the accuracy of laser navigation. The proposed method corrects position measurements using the reliability of the laser navigation, where reliability is calculated by an FIS designed from the analyzed characteristics of the system. For performance comparison, we use the original position data from the laser navigation and position data corrected by the original reliability from the laser navigation. Experimental results verify that the proposed method improves performance over the others by about 50% or more.
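
To make the reliability-based correction concrete, here is a minimal sketch assuming triangular membership functions and a two-rule Mamdani-style FIS; the paper's actual membership functions and rule base are not reproduced, and the inputs (signal strength, visible reflector count) are illustrative assumptions.

```python
# Hedged sketch of a reliability-weighted position correction of the kind the paper
# describes. Membership functions, rules, and inputs are invented for illustration.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def reliability(signal_strength, num_reflectors):
    # Toy rule: strong return signal AND enough visible reflectors -> reliable fix.
    strong = tri(signal_strength, 0.4, 1.0, 1.6)
    enough = tri(num_reflectors, 2, 6, 10)
    return min(strong, enough)             # Mamdani-style AND (min)

def correct(position, last_good, signal_strength, num_reflectors):
    r = reliability(signal_strength, num_reflectors)
    # Blend the new measurement with the last trusted position by reliability.
    return r * np.asarray(position) + (1 - r) * np.asarray(last_good)

print(correct([1.02, 2.51], [1.00, 2.50], signal_strength=0.9, num_reflectors=5))
```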

Development of Vehicle Arrival Time Prediction Algorithm Based on a Demand Volume (교통수요 기반의 도착예정시간 산출 알고리즘 개발)

  • Kim, Ji-Hong; Lee, Gyeong-Sun; Kim, Yeong-Ho; Lee, Seong-Mo
    • Journal of Korean Society of Transportation, v.23 no.2, pp.107-116, 2005
  • Travel time is one of the most important pieces of traffic information for controlling congestion efficiently. In particular, it is a major element in drivers' route choice, which presupposes that the information is highly reliable in real situations. This study developed a vehicle arrival time prediction algorithm called "VAT-DV" for six corridors totaling 6.1 km in the "Nam-san area traffic information system", in order to provide congestion information to drivers via VMS, ARS, and the web. The spatial scope of the study is a 2.5-3 km section of each corridor; because each corridor has signalized intersections at its departure and arrival points, traffic conditions vary widely over short periods, and the corridors exhibit characteristics of both interrupted and uninterrupted flow. The algorithm uses information on demand volume and queue length. The demand volume is estimated from the density at each point based on the Greenberg model, and the queue length from the density and speed at each point. To smooth variation between unit times, the algorithm's output is strategically adjusted using AVI (Automatic Vehicle Identification), a number-plate matching method. The AVI travel time information is combined in a hybrid model, with ILD data used to classify traffic flow characteristics by queue length, to produce a single representative daily travel time. According to the results, the algorithm achieves an accuracy of about 84% or more under congested conditions. In particular, evaluation of the "Nam-san area traffic information system" service showed that the information was useful to 72.6% of drivers.
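
The density-based estimation step can be illustrated with the Greenberg speed-density relation the abstract cites, v = v_m * ln(k_j / k), and the flow (demand volume) it implies, q = k * v. The parameter values below are assumptions for illustration, not the study's calibration.

```python
# Worked sketch of the Greenberg model: v = v_m * ln(k_j / k), q = k * v.
# Parameter values are illustrative assumptions, not the study's.
import math

V_M = 30.0    # speed at maximum flow, km/h (assumed)
K_J = 150.0   # jam density, veh/km (assumed)

def greenberg_speed(k: float) -> float:
    """Speed (km/h) at density k (veh/km) under the Greenberg model."""
    return V_M * math.log(K_J / k)

def demand_volume(k: float) -> float:
    """Flow q = k * v (veh/h) implied by the measured density."""
    return k * greenberg_speed(k)

k = 60.0                                  # measured point density, veh/km
v = greenberg_speed(k)
print(f"v = {v:.1f} km/h, q = {demand_volume(k):.0f} veh/h")
# A 3 km corridor at this speed gives an estimated arrival time of:
print(f"ETA = {3.0 / v * 60:.1f} minutes")
```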

Program Development for Automatic Extraction and Transformation of Standard Metadata of Geo-spatial Data (공간정보 표준 메타데이터 추출 및 변환 프로그램 개발)

  • Han, Sun-Mook; Lee, Ki-Won
    • Korean Journal of Remote Sensing, v.26 no.5, pp.549-559, 2010
  • In building and operating geo-spatial information systems, metadata is one of the crucial factors. International and domestic standardization organizations have therefore developed and distributed geo-based standard metadata to meet public demand. However, because metadata is composed of complicated elements and requires XML storage and management, individual organizations that implement and operate practical application systems are inclined to define and use their own metadata specifications. In this study, a metadata extraction program was developed that extracts metadata elements directly from geo-based file formats and processes them into XML, so that standard metadata such as ISO/TC 19115, TTAS.KO-10.0139, and TTAS.IS-19115 can be utilized easily. Furthermore, sets of geo-based images are handled with the related standard ISO/TC 19115-2. Because definitions among the standards are inconsistent or do not correspond, metadata transformation is also needed; the program therefore implements transformation modules for interoperable use across the standard metadata specifications. The program handles widely used data formats, and extension to other formats and other metadata specifications is possible; developments of this kind are expected to increase the availability of standard metadata.
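
As a toy illustration of the extract-then-serialize step, the sketch below writes a few fields pulled from a geo-based file header into ISO 19115-style XML. The element names and values are simplified stand-ins, not the full ISO/TC 19115 schema.

```python
# Hedged sketch: serialize extracted metadata fields as simplified ISO 19115-style XML.
import xml.etree.ElementTree as ET

def to_metadata_xml(fields: dict) -> bytes:
    root = ET.Element("MD_Metadata")                 # simplified ISO 19115 root
    for name, value in fields.items():
        ET.SubElement(root, name).text = str(value)  # one element per extracted field
    return ET.tostring(root, encoding="utf-8")

# Fields as they might be read from a GeoTIFF or shapefile header (illustrative values).
extracted = {"title": "sample_scene", "spatialRepresentationType": "grid",
             "westBoundLongitude": 126.7, "eastBoundLongitude": 127.2}
print(to_metadata_xml(extracted).decode())
```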

A Review of Change Detection Techniques using Multi-temporal Synthetic Aperture Radar Images (다중시기 위성 레이더 영상을 활용한 변화탐지 기술 리뷰)

  • Baek, Won-Kyung; Jung, Hyung-Sup
    • Korean Journal of Remote Sensing, v.35 no.5_1, pp.737-750, 2019
  • Information on target changes in inaccessible areas is very important for national security, and fast, accurate change detection of targets is essential for a quick response. Spaceborne synthetic aperture radar (SAR) can acquire images with high accuracy regardless of weather conditions and solar altitude. With the recent increase in the number of SAR satellites, it is possible to acquire images of the same area with a temporal resolution of less than one day, which greatly increases the usefulness of change detection for inaccessible areas. Satellite SAR commonly provides amplitude and phase information, and change detection techniques have been developed from each: Amplitude Change Detection (ACD) and Coherence Change Detection (CCD). The algorithms differ in the preprocessing required for accurate automatic classification, reflecting the different characteristics of the underlying information, and in their final detection results. By analyzing academic research trends for ACD and CCD, the two technologies can therefore complement each other. The goal of this paper is to identify current issues in SAR change detection techniques by surveying the research literature. This study should help identify the prerequisites for SAR change detection and support periodic monitoring of inaccessible areas.
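
The two detector families the review covers can be summarized in a few lines of numpy: ACD as a thresholded log-ratio of amplitudes, and CCD as low local interferometric coherence. The window size, thresholds, and random test pair below are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch of ACD (amplitude log-ratio) and CCD (local coherence) detectors.
import numpy as np
from scipy.signal import convolve2d

def acd_log_ratio(amp1, amp2, threshold=1.0):
    """Amplitude Change Detection: |log(A2/A1)| above a threshold marks change."""
    return np.abs(np.log((amp2 + 1e-6) / (amp1 + 1e-6))) > threshold

def ccd_coherence(s1, s2, win=5):
    """Local interferometric coherence of two co-registered complex SAR images."""
    k = np.ones((win, win))
    num = np.abs(convolve2d(s1 * np.conj(s2), k, mode="same"))
    den = np.sqrt(convolve2d(np.abs(s1) ** 2, k, mode="same") *
                  convolve2d(np.abs(s2) ** 2, k, mode="same"))
    return num / (den + 1e-9)            # in [0, 1]; change where coherence is low

rng = np.random.default_rng(1)           # toy complex SAR pair, not real data
s1 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
s2 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
print("ACD pixels:", int(acd_log_ratio(np.abs(s1), np.abs(s2)).sum()),
      "| CCD pixels:", int((ccd_coherence(s1, s2) < 0.3).sum()))
```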

Fuzzy Inference System Architecture for Customer Satisfaction Service (고객 만족 서비스를 위한 퍼지 추론 시스템 구조)

  • Kwon, Hee-Chul; Yoo, Jung-Sang
    • Journal of the Korea Society of Computer and Information, v.15 no.1, pp.219-226, 2010
  • Recently, most parking control systems have provided customers with various services, but most are limited to expanding parking spaces, automating parking control, and the like. To improve and diversify system services, it is essential to measure the satisfaction degree, that is, the extent to which customers are satisfied with the parking control system. The degree of satisfaction differs from customer to customer under the same conditions and can be represented with linguistic variables. In this paper, we therefore present a technique for quantifying how satisfied customers are with a parking control system, and a fuzzy inference system architecture as a solution that can help make efficient decisions for these parking problems. In this architecture, an inference engine using fuzzy logic compares context data with the rules in the fuzzy rule base, obtains the sub-results, aggregates them, and defuzzifies the aggregated result, implemented via MATLAB application programming, to obtain a crisp value. The fuzzy inference system architecture presented in this paper can be used as an efficient method for analyzing satisfaction expressed as fuzzy linguistic variables reflecting human emotion, and for improving satisfaction not only in parking systems but also in service systems in various other domains.
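
The paper implements its inference through MATLAB application programming; the sketch below is a Python analogue of the same Mamdani pipeline (fuzzify, fire rules, aggregate with max, defuzzify by centroid). The membership functions and the two rules are invented for illustration, not taken from the paper.

```python
# Hedged Python analogue of a Mamdani fuzzy inference pipeline for a satisfaction score.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function (vectorized)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer(wait_minutes: float) -> float:
    y = np.linspace(0, 10, 201)                 # satisfaction scale 0..10
    short = tri(wait_minutes, -1, 0, 10)        # fuzzify the input
    long_ = tri(wait_minutes, 5, 15, 30)
    # Rule 1: short wait -> high satisfaction; Rule 2: long wait -> low satisfaction.
    out = np.maximum(np.minimum(short, tri(y, 5, 10, 15)),   # clip, then
                     np.minimum(long_, tri(y, -5, 0, 5)))    # aggregate with max
    return float((y * out).sum() / (out.sum() + 1e-9))       # centroid defuzzification

print(f"satisfaction = {infer(wait_minutes=8):.2f} / 10")
```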

An Ontology - based Transformation Method from Feature Model to Class Model (온톨로지 기반 Feature 모델에서 Class 모델로의 변환 기법)

  • Kim, Dong-Ri; Song, Chee-Yang; Kang, Dong-Su; Baik, Doo-Kwon
    • Journal of the Korea Society of Computer and Information, v.13 no.5, pp.53-67, 2008
  • At present, to reuse similar domains between feature models and class models, research is being conducted on transformation at the model level and on transformation between the two models using ontology, but consistent transformation through a metamodel has not been achieved. Moreover, the modeling elements covered as transformation targets are insufficient and, in particular, no automatic transformation algorithm or supporting tools are provided, so domain reuse between models has not taken hold. This paper proposes a method for transforming a feature model into a class model using ontology at the metamodel level. To this end, it re-establishes the metamodels of the feature model, the class model, and the ontology, and defines the properties of the modeling elements for each metamodel. Based on these properties, it defines profiles of transformation rules between feature model and ontology, and between ontology and class model, using set theory and propositional calculus. To automate the transformation, it provides a transformation algorithm and supporting tools. The proposed transformation rules and tools are applied in practice to an Electronic Approval System. This makes it possible to transform an existing feature model into a class model and reuse it in a different development method. In particular, the ontology removes ambiguity from the semantic transformation, and automating the transformation maintains consistency between the models.
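
A drastically simplified sketch of the rule-driven mapping idea: features map to ontology concepts, concepts map to classes, and mandatory subfeatures become associations. The rule tables and dataclasses below are illustrative stand-ins, not the paper's metamodels or rule profiles.

```python
# Hedged sketch of a feature -> ontology concept -> class transformation.
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    mandatory: bool
    children: list = field(default_factory=list)

@dataclass
class UMLClass:
    name: str
    associations: list = field(default_factory=list)

FEATURE_TO_CONCEPT = {"Approval": "ApprovalProcess"}       # feature -> ontology concept
CONCEPT_TO_CLASS = {"ApprovalProcess": "ApprovalHandler"}  # concept -> class

def transform(feature: Feature) -> UMLClass:
    concept = FEATURE_TO_CONCEPT.get(feature.name, feature.name)
    cls = UMLClass(CONCEPT_TO_CLASS.get(concept, concept))
    # Mandatory subfeatures become (composition-like) associations of the class.
    cls.associations = [transform(c).name for c in feature.children if c.mandatory]
    return cls

root = Feature("Approval", True, [Feature("Sign", True), Feature("Notify", False)])
print(transform(root))
```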

Automatic Extraction of River Levee Slope Using MMS Point Cloud Data (MMS 포인트 클라우드를 활용한 하천제방 경사도 자동 추출에 관한 연구)

  • Kim, Cheolhwan; Lee, Jisang; Choi, Wonjun; Kim, Wondae; Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing, v.37 no.5_3, pp.1425-1434, 2021
  • Continuous and periodic data acquisition is a prerequisite for maintaining and managing river facilities effectively. Existing river surveying methods such as terrestrial laser scanners, total stations, and Global Navigation Satellite System (GNSS) receivers are limited in terms of cost, manpower, and time because river facilities are distributed across wide, elongated areas. The Mobile Mapping System (MMS), on the other hand, has a comparative advantage in acquiring data on river facilities because it constructs three-dimensional spatial information while moving. Using MMS, 184,646,009 points were acquired along a 4 km stretch of Anyang stream in only 20 minutes. The levee points were divided at 10 m intervals, yielding about 378 levee cross sections. In addition, the maximum and average waterside slopes could be calculated automatically by separating the slope plane from the levee point cloud, and the accuracy (RMSE) was confirmed by comparison with manually calculated slopes. The reference slope was calculated manually by plotting the point cloud of the levee slope plane and selecting two points whose location information was used to compute the slope. Comparison of the waterside slope with the slope standard in the basic river plan for Anyang stream confirmed that inspecting river facilities with MMS point clouds is preferable to the existing river survey.
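
The slope-extraction step can be sketched as a least-squares plane fit over the points of one levee section, reporting the angle of steepest gradient. The 10 m sectioning is omitted here, and the point cloud below is synthetic rather than the paper's MMS data.

```python
# Hedged sketch: fit a plane to a levee slope segment and report its slope angle.
import numpy as np

def plane_slope(points: np.ndarray) -> float:
    """Fit z = ax + by + c by least squares; return slope angle in degrees."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return np.degrees(np.arctan(np.hypot(a, b)))   # steepest-gradient angle

rng = np.random.default_rng(0)                     # synthetic slope section
xy = rng.uniform(0, 10, size=(500, 2))
z = 0.5 * xy[:, 0] + 0.05 * rng.standard_normal(500)   # true angle about 26.6 deg
section = np.c_[xy, z]
print(f"slope = {plane_slope(section):.1f} degrees")
```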

Maintenance of Platelet Counts with Low Level QC Materials and the Change in P-LCR according to Hemolysis with XN-9000 (XN-9000장비에서 Low Level QC물질에서의 혈소판 수 관리와 용혈에 따른 P-LCR의 변화)

  • Shim, Moon-Jung; Lee, Hyun-A
    • Korean Journal of Clinical Laboratory Science, v.50 no.4, pp.399-405, 2018
  • The platelet count in clinical laboratories is essential for the diagnosis and treatment of hemostasis abnormalities, and accurate platelet counting in the low count range is of prime importance for deciding whether a platelet transfusion is needed and for monitoring after chemotherapy. Quality control is designed to reduce and correct deficiencies in a clinical laboratory's internal analytical process before patient results are released. Fragmented erythrocytes are the major confounding factor in platelet counting because their size is similar to that of platelets. The authors found that the low range QC values fell outside 2 SD on a Sysmex automated analyzer during the internal quality control process. Thus far, there has been little discussion of the relationship between hemolysis and the platelet parameters. Therefore, this study examined the performance of automated platelet counts, including the PLT-F, PLT-I, and PLT-O methods, in the low platelet range using low level QC materials, and compared the five platelet parameters in hemolyzed samples. The results showed that the CV was smallest with PLT-F, and that P-LCR increased from 18.4% to 31.9% in the hemolyzed samples. These results indicate that platelet counts in the low range can be estimated more accurately with the PLT-F method than with the PLT-I method. Use of the PLT-F system improves confidence in results for low-platelet samples in a routine hematology laboratory. The results also suggest that P-LCR can serve as a new parameter for assessing specimens suspected of hemolysis and deterioration. Nevertheless, further studies using human blood specimens will be needed to establish the relationship between P-LCR and hemolysis.
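
The QC arithmetic behind the abstract is simple to state: the coefficient of variation is CV = SD / mean x 100, and a run is flagged when a result falls outside the mean ± 2 SD control limits. The QC values below are invented for illustration, not the study's data.

```python
# Worked sketch of the CV calculation and a simple 2 SD control check.
import statistics

qc_runs = [18.2, 17.9, 18.5, 18.1, 17.6, 18.3]   # low-level QC platelet counts (assumed units)
mean = statistics.mean(qc_runs)
sd = statistics.stdev(qc_runs)
cv = sd / mean * 100
print(f"mean={mean:.2f}, SD={sd:.2f}, CV={cv:.1f}%")

new_result = 19.1
if abs(new_result - mean) > 2 * sd:
    print("QC alert: result outside the 2 SD control limit")
```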