• Title/Summary/Keyword: Information Error


Precision Improvement of GPS Height Time Series by Correcting for Atmospheric Pressure Loading Displacements (대기압하중에 의한 지각변위 보정을 통한 GPS 수직좌표 시계열 정밀도 향상)

  • Kim, Kyeong-Hui; Park, Kwan-Dong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.5 / pp.599-605 / 2009
  • Changes in atmospheric pressure cause short- and long-term crustal deformations and thus become error sources in site positions estimated with space geodetic techniques. In this study, we computed daily displacements due to atmospheric pressure loading (ATML) at the 14 permanent GPS sites operated by the National Geographic Information Institute. Ten years of GPS data collected at those stations were then processed to create continuous time series of the height estimates. We then removed the ATML displacements from the GPS height time series to see whether the correction changes the site velocities and improves the precision of the time series. While the precision improved by about 4% on average, the velocity change was not significant at all. We also investigated the overall characteristics of ATML in the southern Korean peninsula by computing the ATML effects at inland grid points with a 0.5°×0.5° spatial resolution. We found that the ATML displacements show annual signals that can be fitted with sinusoidal functions. The amplitudes were in the range of 3-4 mm; they were higher at higher latitudes and lower in coastal areas.
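
For illustration, a minimal sketch of the kind of annual sinusoidal fit described above, using linear least squares on synthetic daily ATML displacements (all values and array names here are hypothetical, not the paper's data):

```python
import numpy as np

# Hypothetical daily ATML height displacements (mm) for one grid point.
t = np.arange(365.0)                          # day of year
y = 3.5 * np.sin(2 * np.pi * t / 365.25 + 0.8) + np.random.normal(0, 0.5, t.size)

# Fit y ~ a*sin(w*t) + b*cos(w*t) + c with w fixed to one cycle per year.
w = 2 * np.pi / 365.25
A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]

amplitude = np.hypot(a, b)                    # annual amplitude (the paper reports 3-4 mm)
phase = np.arctan2(b, a)
print(f"amplitude = {amplitude:.2f} mm, phase = {phase:.2f} rad")
```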

Recurrent Neural Network Modeling of Etch Tool Data: a Preliminary for Fault Inference via Bayesian Networks

  • Nawaz, Javeria; Arshad, Muhammad Zeeshan; Park, Jin-Su; Shin, Sung-Won; Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference / 2012.02a / pp.239-240 / 2012
  • With advancements in semiconductor device technologies, manufacturing processes are becoming more complex and it has become more difficult to maintain tight process control. As the number of processing steps required to fabricate complex chip structures increases, potential fault-inducing factors become more prevalent and their allowable margins are continuously reduced. Therefore, one of the keys to success in semiconductor manufacturing is highly accurate and fast fault detection and classification at each stage, to reduce any undesired variation and identify the cause of the fault. Sensors in the equipment are used to monitor the state of the process. The idea is that whenever there is a fault in the process, it appears as some variation in the output of one of the sensors monitoring the process. These sensors report information such as pressure, RF power, and gas flow in the equipment. By relating the data from these sensors to the process condition, any abnormality in the process can be identified, though only with some degree of uncertainty. Our approach in this research is to capture the features of equipment condition data from a healthy-process library. We can use the healthy data as a reference for upcoming processes, which is made possible by mathematical modeling of the acquired data. In this work we demonstrate the use of a recurrent neural network (RNN). An RNN is a dynamic neural network whose output is a function of previous inputs. In our case we have etch equipment tool-set data consisting of 22 parameters and 9 runs. The data were first synchronized using the Dynamic Time Warping (DTW) algorithm. The synchronized sensor time series were then provided to the RNN, which trains and restructures itself according to the input and then predicts a value one step ahead in time, depending on the past values of the data. Eight runs of process data were used to train the network, and the remaining run was used as a test input to check its performance. Next, a mean-squared-error-based probability-generating function was used to assign a probability of fault to each parameter by comparing the predicted and actual values of the data. In the future we will use Bayesian Networks to classify the detected faults. Bayesian Networks use directed acyclic graphs that relate different parameters through their conditional dependencies in order to perform inference among them. The relationships between parameters in the data will be used to generate the structure of the Bayesian Network, and the posterior probabilities of different faults will then be calculated using inference algorithms.
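
As a rough sketch of the last step described above, one plausible way to turn per-parameter one-step-ahead prediction errors into fault probabilities; the synthetic data, the normalization against healthy-run error levels, and the exponential mapping are assumptions for illustration, not the paper's exact probability-generating function:

```python
import numpy as np

# Hypothetical actual and RNN-predicted sensor traces for one test run:
# shape (time_steps, n_parameters); values are synthetic, for illustration only.
rng = np.random.default_rng(0)
n_steps, n_params = 200, 22
actual = rng.normal(0.0, 1.0, (n_steps, n_params))
predicted = actual + rng.normal(0.0, 0.1, (n_steps, n_params))
predicted[:, 5] += 0.8                     # pretend parameter 5 drifts away from the model

# Per-parameter mean squared one-step-ahead prediction error over the run.
mse = np.mean((actual - predicted) ** 2, axis=0)

# One simple choice of "probability-generating function": compare each
# parameter's error with an assumed healthy-run error level and squash to [0, 1).
healthy_mse = np.full(n_params, 0.01)      # assumed healthy-run error levels
excess = np.clip(mse / (healthy_mse + 1e-12) - 1.0, 0.0, None)
fault_prob = 1.0 - np.exp(-excess)         # monotone map from [0, inf) to [0, 1)

for i, p in enumerate(fault_prob):
    print(f"parameter {i:2d}: fault probability = {p:.3f}")
```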


Statistical Characteristics of the Non-tidal Components Data in Korean Coasts (한반도 연안 비조석 성분자료의 통계적 특성)

  • Cho, Hong-Yeon; Jeong, Shin-Taek; Yoon, Jong-Tae; Kim, Chang-Il
    • Journal of Korean Society of Coastal and Ocean Engineers / v.18 no.2 / pp.112-123 / 2006
  • A double-peak normal distribution function is suggested as the probability density function of the non-tidal component (NTC) data in the Korean coastal zone. Frequency distribution analysis of the NTC data was carried out using hourly tidal elevation data from ten tidal gauging stations, i.e., Incheon, Gunsan, Mokpo, Jeju, Yeosu, Masan, Gadeokdo, Busan, Pohang, and Sokcho, provided through the website of the National Ocean Research Institute. The NTC data are defined as the difference between the measured tidal elevations and the astronomical tidal elevations computed from 64 tidal constituents. Based on a comparison of RMS errors and R² values, the suggested function was found to be more appropriate as the probability density function of the NTC data than the normal distribution function. The parameters of the double-peak function were estimated optimally using the Levenberg-Marquardt method, a modification of the Newton method. The standard deviation and skewness coefficient were highly correlated with the non-tidal constants of the tidal gauging stations, except at the Mokpo, Jeju, and Sokcho stations.
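
A minimal sketch of fitting such a double-peak (two-component) normal density with the Levenberg-Marquardt algorithm, as implemented in scipy.optimize.curve_fit; the parameter names and the synthetic frequency distribution below are placeholders, not the observed NTC data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Double-peak (two-component) normal density; parameter names are illustrative.
def double_peak(x, w, mu1, s1, mu2, s2):
    g1 = np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    g2 = np.exp(-0.5 * ((x - mu2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))
    return w * g1 + (1.0 - w) * g2

# Synthetic stand-in for an observed NTC frequency distribution (cm).
x = np.linspace(-60, 60, 121)
observed = double_peak(x, 0.6, -8.0, 9.0, 12.0, 14.0) + np.random.normal(0, 2e-4, x.size)

# curve_fit defaults to the Levenberg-Marquardt method ("lm") for unbounded problems.
p0 = [0.5, -10.0, 10.0, 10.0, 10.0]
popt, _ = curve_fit(double_peak, x, observed, p0=p0, method="lm")

rmse = np.sqrt(np.mean((observed - double_peak(x, *popt)) ** 2))
print("fitted parameters:", np.round(popt, 3), " RMSE:", rmse)
```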

Analyzing Vomit of Platalea minor (Black-faced Spoonbill) to Identify Food Components using Next-Generation Sequencing and Microscopy (차세대염기서열 및 현미경 분석을 통한 저어새의 토사물 내 먹이생물 분석)

  • Kim, Hyun-Jung; Lee, Taek-Kyun; Jung, Seung Won; Kwon, In-Ki; Yoo, Jae-Won
    • Korean Journal of Environmental Biology / v.36 no.2 / pp.165-173 / 2018
  • We sampled vomit of black-faced spoonbills (Platalea minor) during the brood-rearing season (from June 2011 to June 2014) at the Namdong reservoir in Incheon and analyzed the food components in the vomit using microscopy and next-generation sequencing (NGS). Microscopic observations primarily helped in identifying osteichthyes (bony fishes), crustaceans, and polychaetes. In particular, species belonging to the families Mugilidae and Gobiidae among the fish, and Macrophthalmus japonicus among the crustaceans, were observed at high frequency. NGS analysis revealed the predominant presence of bony fish (42.58% of total reads) and crustaceans (40.75% of total reads), whereas others, such as polychaetes (12.66%), insects (0.24%), and unidentified species (3.78%), occurred in lower proportions. At the species level, NGS analysis revealed that Macrophthalmus abbreviatus and Macrobrachium sp. among the crustaceans, and Acanthogobius hasta, Tridentiger obscurus, and Pterogobius zacalles among the bony fish, made up a high proportion of the total reads. These food species are frequently found at tidal flats in the Songdo and Sihwa lakes, emphasizing the importance of these areas as potential feeding sites of the black-faced spoonbill. The food composition of the black-faced spoonbill, as evaluated by analyzing its vomit, differed depending on whether the evaluation was done by microscopic observation or by NGS analysis. Evaluation by microscopic observation is difficult and not error-free, owing to the degradation of the samples; NGS analysis is more accurate because it makes use of genetic information. Therefore, food components from morphologically indistinguishable samples can be analyzed accurately by using genetic analysis.

Accuracy of Parcel Boundary Demarcation in Agricultural Area Using UAV-Photogrammetry (무인 항공사진측량에 의한 농경지 필지 경계설정 정확도)

  • Sung, Sang Min; Lee, Jae One
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.1 / pp.53-62 / 2016
  • In recent years, UAV photogrammetry based on an ultra-light UAS (Unmanned Aerial System) equipped with a low-cost compact navigation device and camera has attracted great attention for the fast and accurate acquisition of geospatial data. In particular, UAV photogrammetry is gradually replacing traditional aerial photogrammetry, because large numbers of high-resolution images collected by a low-cost camera, together with image-processing software based on computer vision techniques, allow DEMs (Digital Elevation Models) and orthophotos to be produced rapidly. With these advantages, UAV photogrammetry has been applied to large-scale mapping and cadastral surveying, which require accurate position information. This paper presents experimental results of an accuracy performance test, using images with 4 cm GSD from a fixed-wing UAS, for demarcating parcel boundaries in an agricultural area. The accuracy of the boundary points extracted from the UAS orthoimage was better than 8 cm compared with terrestrial cadastral surveying. This means that UAV images satisfy the tolerance limit of distance error in cadastral surveying at the scale of 1:500. The area deviation is also negligibly small, about 0.2% (3.3 m²), relative to the true area of 1,969 m² determined by cadastral surveying. UAV photogrammetry is therefore a promising technology for demarcating parcel boundaries.
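
A small sketch of the kind of accuracy comparison described above: per-point horizontal error against the cadastral survey and the relative area deviation via the shoelace formula. The coordinates below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical boundary-point coordinates (m): cadastral survey vs. points
# digitized from the UAS orthoimage, listed in the same order around the parcel.
survey = np.array([[0.0, 0.0], [45.2, 1.1], [44.8, 43.9], [0.5, 43.0]])
uas    = np.array([[0.05, -0.03], [45.26, 1.06], [44.74, 43.95], [0.47, 42.96]])

# Horizontal positional error of each boundary point and the RMS error.
errors = np.linalg.norm(uas - survey, axis=1)
rms = np.sqrt(np.mean(errors ** 2))
print("per-point error (m):", np.round(errors, 3), " RMS (m):", round(rms, 3))

# Parcel area by the shoelace formula, and the relative area deviation.
def polygon_area(pts):
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

a_survey, a_uas = polygon_area(survey), polygon_area(uas)
print(f"area deviation: {abs(a_uas - a_survey) / a_survey * 100:.2f} %")
```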

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn; Zang, Byoung-Tak
    • Journal of KIISE: Software and Applications / v.29 no.4 / pp.270-277 / 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic on the basis of whether or not the document includes a certain word. We begin with an existing AdaBoost algorithm whose weak hypotheses output 1 or -1. We then extend the algorithm to use weak hypotheses with real-valued outputs, which were proposed recently to improve error reduction rates and final filtering performance. Next, we attempt to improve AdaBoost's performance further by first setting the weights randomly according to the continuous Poisson distribution, executing AdaBoost, repeating these steps several times, and then combining all the hypotheses learned. This mitigates the overfitting problem that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; this dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. Comparison results for all the participants of the TREC-8 filtering task are also provided.
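
A minimal sketch, under the usual Schapire-Singer formulation, of AdaBoost with real-valued (confidence-rated) word-presence stumps of the kind described above; the function names and the binary document-term matrix X are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def adaboost_real(X, y, n_rounds=50, eps=1e-10):
    """Confidence-rated AdaBoost over word-presence stumps.

    X: (documents x words) 0/1 matrix, y: labels in {-1, +1}.
    Returns a list of stumps (word index, output if present, output if absent).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # document weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            present = X[:, j] == 1
            # Weighted class masses for documents with / without word j.
            Wp_pos = w[present & (y == 1)].sum();  Wp_neg = w[present & (y == -1)].sum()
            Wa_pos = w[~present & (y == 1)].sum(); Wa_neg = w[~present & (y == -1)].sum()
            # Real-valued (confidence-rated) outputs of the stump.
            c_p = 0.5 * np.log((Wp_pos + eps) / (Wp_neg + eps))
            c_a = 0.5 * np.log((Wa_pos + eps) / (Wa_neg + eps))
            # Normalization factor Z; the stump minimizing Z is chosen.
            Z = 2 * (np.sqrt(Wp_pos * Wp_neg) + np.sqrt(Wa_pos * Wa_neg))
            if best is None or Z < best[0]:
                best = (Z, j, c_p, c_a)
        _, j, c_p, c_a = best
        h = np.where(X[:, j] == 1, c_p, c_a)
        w *= np.exp(-y * h)                     # reweight documents
        w /= w.sum()
        stumps.append((j, c_p, c_a))
    return stumps

def predict(stumps, X):
    """Combined real-valued score, thresholded at zero."""
    score = np.zeros(X.shape[0])
    for j, c_p, c_a in stumps:
        score += np.where(X[:, j] == 1, c_p, c_a)
    return np.sign(score)
```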

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.; Choi, D. Y.; Hwang, H.
    • Journal of Biosystems Engineering / v.26 no.1 / pp.21-28 / 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low job performance for diverse tasks, lack of robustness in task results, high system cost, loss of operator confidence, and so on. This paper proposes a scheme that could overcome the current limitations in the task abilities of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes. It was divided into two categories: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of object identification and extraction of the 3D coordinates of an object was selected. 3D coordinate information was obtained through camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving a camera a certain distance in the horizontal direction normal to the focal axis and acquiring images at the two locations. The transformation matrix for camera calibration was obtained via a least-squares approach using six known pairs of data points in the 2D image and 3D world space. 3D world coordinates were obtained from the image pixel coordinates in both camera images using the calibrated transformation matrix. As the interface system between the operator and the CCM, a touch-pad screen mounted on the monitor and a remote image-capturing system were used. Object indication was done by the operator's finger touch on the captured image using the touch-pad screen. After the touch was made, a local image-processing area of a specified size was defined, and image processing was performed on this local area to extract the desired features of the object. MS Windows-based interface software was developed using Visual C++ 6.0. The software consists of four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
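
A compact sketch of the calibration-and-triangulation idea described above: a direct linear transformation (DLT) estimated from 3D-2D point pairs, followed by two-view triangulation. The function names, and the use of an SVD solution rather than the paper's specific least-squares formulation, are assumptions for illustration:

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from >= 6 known 3D-2D point pairs."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)              # least-squares solution, defined up to scale

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its pixel coordinates in two calibrated views."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                       # dehomogenize to (x, y, z)
```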


Voting-based Intra Mode Bit Skip Using Pixel Information in Neighbor Blocks (이웃한 블록 내 화소 정보를 이용한 투표 결정 기반의 인트라 예측 모드 부호화 생략 방법)

  • Kim, Ji-Eon; Cho, Hye-Jeong; Jeong, Se-Yoon; Lee, Jin-Ho; Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.15 no.4 / pp.498-512 / 2010
  • Intra coding is an indispensable coding tool since it provides random accessibility as well as error resiliency. However, intra coding has relatively low coding efficiency compared with inter coding. Even though H.264/AVC has significantly improved intra coding performance compared with previous video standards, its encoder complexity is significantly increased, which is not suitable for low-bit-rate interactive services. In this paper, a Voting-based Intra Mode Bit Skip (V-IMBS) scheme is proposed to improve coding efficiency as well as to reduce encoding time complexity by using decoder-side prediction. If the decoder can determine the same prediction mode as the one chosen by the encoder, the encoder does not send that intra prediction mode; otherwise, conventional H.264/AVC intra coding is performed. Simulation results show overall rate savings of up to 4.44% and a peak signal-to-noise ratio gain of 0.24 dB, while the frame encoding speed of the proposed method is about 42.8% faster than that of H.264/AVC.
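
A purely illustrative sketch of the decoder-side idea: derive an intra mode from already-reconstructed neighbor pixels so that the encoder and decoder reach the same answer and the mode bit can be skipped. The three-mode gradient vote below is an invented simplification, not the paper's V-IMBS rule:

```python
import numpy as np

def estimate_mode(top_block, left_block):
    """Vote among three toy intra modes using pixel gradients in neighbor blocks."""
    votes = {"vertical": 0, "horizontal": 0, "dc": 0}
    for block in (top_block, left_block):
        gy = np.abs(np.diff(block.astype(float), axis=0)).mean()  # change down columns
        gx = np.abs(np.diff(block.astype(float), axis=1)).mean()  # change along rows
        if gy < 0.5 * gx:
            votes["vertical"] += 1        # columns nearly constant: vertical prediction fits
        elif gx < 0.5 * gy:
            votes["horizontal"] += 1      # rows nearly constant: horizontal prediction fits
        else:
            votes["dc"] += 1
    return max(votes, key=votes.get)

# Both encoder and decoder run the same estimate on reconstructed neighbors;
# the encoder skips signaling the mode whenever the estimate equals its choice.
top = np.tile(np.arange(4), (4, 1))       # toy 4x4 neighbors with constant columns
left = np.tile(np.arange(4), (4, 1))
encoder_mode = "vertical"                 # hypothetical encoder decision
send_mode_bit = estimate_mode(top, left) != encoder_mode
print("estimated:", estimate_mode(top, left), " send mode bits:", send_mode_bit)
```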

Experimental Examination of the Beer's law for Quantitative Electron Tomography (정량적 전자토모그래피를 위한 Beer's law의 실험적 검증)

  • Kim, Jin-Gyu; Song, Kyung; Lee, Su-Jeong; Jou, Hyeong-Tae; Kim, Youn-Joong
    • Applied Microscopy / v.40 no.2 / pp.117-123 / 2010
  • This study experimentally examined Beer's law, which is a precondition for quantitative electron tomography. We used carbon support film and latex spheres, which have absorption coefficients similar to those of biological samples, as test samples for acquiring tilt series of images for electron tomography. First, the 3D information of the carbon film and latex spheres was obtained by electron tomography. Then, regression analysis of the relationship between the intensities of the incident and transmitted beams in a tilt series was carried out to examine Beer's law. The regression results, with an RMS error of 0.976, show linear variation of the transmitted beam intensity as the tilt angle increased. In addition, the relative absorption coefficients of the carbon support film and latex spheres, calculated experimentally through Beer's law, were 1.71(5) and 2.67(6)/μm, respectively. The absorption coefficients remained constant within the full tilt range. Therefore, quantitative electron tomography is expected to be feasible for biological samples by applying Beer's law, provided that the exact intensity of the incident beam can be obtained under thoroughly controlled experimental conditions.
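
A small sketch of the Beer's-law relationship such a regression relies on, I = I0 * exp(-mu * t), with the effective thickness of a flat film growing as t0 / cos(tilt) in a tilt series; all numbers below are invented for illustration, not the measured intensities:

```python
import numpy as np

I0 = 1.0e4                              # incident beam intensity (arbitrary units)
t0 = 0.05                               # assumed film thickness at zero tilt (um)
mu_true = 1.7                           # assumed absorption coefficient (1/um)

tilt = np.deg2rad(np.arange(-60, 61, 5))
thickness = t0 / np.cos(tilt)           # effective path length through a flat film
I = I0 * np.exp(-mu_true * thickness) * np.random.normal(1.0, 0.01, tilt.size)

# Beer's law predicts ln(I0 / I) to be linear in thickness; the slope is mu.
y = np.log(I0 / I)
mu_est, intercept = np.polyfit(thickness, y, 1)
print(f"estimated absorption coefficient: {mu_est:.2f} /um")
```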

Shipboard Fire Evacuation Route Prediction Algorithm Development (선박 화재시 승선자 피난동선예측을 위한 알고리즘 개발 기초연구)

  • Hwang, Kwang-Il; Cho, So-Hyung; Ko, Hoo-Sang; Cho, Ik-Soon; Yun, Gwi-Ho; Kim, Byeol
    • Journal of the Korean Society of Marine Environment & Safety / v.24 no.5 / pp.519-526 / 2018
  • In this study, an algorithm to predict evacuation routes in support of shipboard lifesaving activities is presented. As the first step of algorithm development, the feasibility and necessity of an evacuation route prediction algorithm are shown numerically. The proposed algorithm can be explained in brief as follows. This system continuously obtains and analyzes passenger movement data from the ship's monitoring system during non-disaster conditions. In case of a disaster, evacuation route prediction information is derived using the previously acquired data and a prediction tool, with the results provided to rescuers to minimize casualties. In this study, evacuation-related data obtained through fire evacuation trials was filtered and analyzed using a statistical method. In a simulation using the conventional evacuation prediction tool, it was found that reliable prediction results were obtained only in the SN1 trial because of the conceptual and structural nature of the tool itself. In order to verify the validity of the algorithm proposed in this study, an industrial engineering tool was adapted for evacuation characteristics prediction. When the proposed algorithm was implemented, the predicted values for average evacuation time and route were very similar to the measured values with error ranges of 0.6-6.9 % and 0.6-3.6 %, respectively. In the future, development of a high-performance evacuation route prediction algorithm is planned based on shipboard data monitoring and analysis.