• Title/Summary/Keyword: STEP-Based Data Model


Experimental Evaluation of Distance-based and Probability-based Clustering

  • Kwon, Na Yeon;Kim, Jang Il;Dollein, Richard;Seo, Weon Joon;Jung, Yong Gyu
    • International journal of advanced smart convergence
    • /
    • v.2 no.1
    • /
    • pp.36-41
    • /
    • 2013
  • Decision-making involves extracting information that can be acted upon in the future; it refers to the process of discovering a new data model induced from the data. In other words, it means digging out the relationships hidden in the patterns of the data. Finding this information is a process of identifying useful patterns by applying modeling techniques and sophisticated statistical analysis to the data; this is called data mining, a key technology for database marketing. Accordingly, research on cluster analysis, which can extract information from large data sets without a clear prior criterion, is currently being performed actively. The EM and K-means methods are widely used in particular; this paper examines experimentally how the evaluation results of distance-based and probability-based analysis vary with the size of the data.
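The two methods the abstract contrasts can be compared in a few lines. The sketch below is illustrative only (synthetic two-cluster data, scikit-learn implementations; it is not the paper's experiment): K-means is the distance-based method, and a Gaussian mixture fitted by EM is the probability-based one.

```python
# Distance-based (K-means) vs. probability-based (EM / Gaussian mixture)
# clustering on a synthetic two-cluster data set -- a hedged sketch,
# not the paper's experimental setup.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)  # true cluster labels

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
em = GaussianMixture(n_components=2, random_state=0).fit(X)

print("K-means ARI:", adjusted_rand_score(y, km.labels_))
print("EM ARI:     ", adjusted_rand_score(y, em.predict(X)))
```

With well-separated clusters both methods agree; the interesting comparisons the paper runs are on data where cluster shape and size make the probabilistic model diverge from the distance-based one.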

Manipulation of 3D Surface Data within Web-based 3D Geo-Processing

  • Choe, Seung-Keol;Kim, Kyong-Ho;Lee, Jong-Hun;Yang, Young-Kyu
    • Proceedings of the KSRS Conference
    • /
    • 1999.11a
    • /
    • pp.80-83
    • /
    • 1999
  • Efficient modeling and management of large amounts of surface data covering a wide range of geographic information play an important role in determining the functionality of a 3D geographic information system. Many efforts have been put into designing effective ways to manage such data, considering geometry types and data structures, in order to enhance its manipulation. Recently, DEM (Digital Elevation Model) and TIN (Triangulated Irregular Network) representations have been used for surface data. In this paper, we propose a 3D data processing method that utilizes the major properties of DEM and TIN, respectively. Furthermore, by approximating a DEM with a TIN of an appropriate resolution, we can support fast and realistic surface modeling. We implement the structure with the following four stages. The first is a reduced-resolution DEM that represents the whole wide-range geographic data set. The second is the full-resolution DEM of a sub-area of the original data, selected by the user in our implementation. The third is a TIN approximation of this data at a resolution determined by its position relative to the camera. The last stage is multi-resolution TIN data whose resolution is decided dynamically according to the direction the user is currently viewing; the TIN of this stage is designed especially for real-time camera navigation. Using this structure we implemented real-time surface clipping, efficient approximation of the height field, and locally detailed surface LOD (Level of Detail). We used 10-meter-sampled DEM data of Seoul, Korea and applied the structure to an Internet-based 3D Virtual GIS.
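The third stage, picking a resolution from the camera distance, can be sketched as simple grid decimation. This is an assumed illustration of the idea, not the paper's implementation (the function name, thresholds, and toy height field are all invented here):

```python
# Hedged sketch: choose a coarser DEM sub-grid for a more distant
# camera, the level-of-detail idea behind the paper's third stage.
import numpy as np

def subsample_dem(dem: np.ndarray, camera_distance: float,
                  near: float = 100.0, max_step: int = 8) -> np.ndarray:
    """Return a decimated DEM grid; farther cameras get coarser grids."""
    step = int(np.clip(camera_distance / near, 1, max_step))
    return dem[::step, ::step]

dem = np.arange(64 * 64, dtype=float).reshape(64, 64)  # toy 64x64 height field
print(subsample_dem(dem, 50.0).shape)   # near camera: full resolution
print(subsample_dem(dem, 800.0).shape)  # distant camera: coarse grid
```

A real system would triangulate the decimated grid into a TIN and blend between levels to avoid popping; this sketch only shows the resolution choice.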


BIOLOGICALLY-BASED DOSE-RESPONSE MODEL FOR NEUROTOXICITY RISK ASSESSMENT

  • Slikker, William Jr.;Gaylor, David W.
    • Toxicological Research
    • /
    • v.6 no.2
    • /
    • pp.205-213
    • /
    • 1990
  • The regulation of neurotoxicants has usually been based upon setting reference doses by dividing a no-observed-adverse-effect level (NOAEL) by uncertainty factors that theoretically account for interspecies and intraspecies extrapolation of experimental results in animals to humans. Recently, we have proposed a four-step alternative procedure that provides quantitative estimates of risk as a function of dose. The first step is to establish a mathematical relationship between a biological effect or biomarker and the dose of chemical administered. The second step is to determine the distribution (variability) of individual measurements of biological effects or their biomarkers about the dose-response curve. The third step is to define an adverse or abnormal level of a biological effect or biomarker in an untreated population. The fourth and final step is to combine the information from the first three steps to estimate the risk (the proportion of individuals exceeding an adverse or abnormal level of a biological effect or biomarker) as a function of dose. The primary purpose of this report is to enhance the certainty of the first step of this procedure by improving our understanding of the relationship between a biomarker and the dose of administered chemical. Several factors need to be considered: 1) the pharmacokinetics of the parent chemical, 2) the target-tissue concentrations of the parent chemical or its bioactivated proximate toxicant, 3) the uptake kinetics of the parent chemical or metabolite into the target cell(s) and/or membrane interactions, and 4) the interaction of the chemical or metabolite with presumed receptor site(s). Because each of these theoretical factors contains a saturable step, owing to finite amounts of the required enzyme, reuptake mechanism, or receptor site(s), a nonlinear, saturable dose-response curve would be predicted. To exemplify this process, effects of the neurotoxicant methylenedioxymethamphetamine (MDMA) were reviewed and analyzed. Our results and those of others indicate that: 1) peak concentrations of MDMA and its metabolites are achieved in rat brain by 30 min and are negligible by 24 hr, 2) a metabolite of MDMA is probably responsible for its neurotoxic effects, and 3) pretreatment with monoamine uptake blockers prevents MDMA neurotoxicity. When data generated from rats administered MDMA were plotted as biological effect (decreases in hippocampal serotonin concentrations) versus dose, a saturation curve best described the observed relationship. These results support the hypothesis that at least one saturable step is involved in MDMA neurotoxicity. We conclude that the mathematical relationship between biological effect and dose of MDMA, the first step of our quantitative neurotoxicity risk assessment procedure, should reflect this biological model information generated from the whole of the dose-response curve.
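The saturable dose-response relationship the abstract predicts is the Michaelis-Menten form E = Emax·d/(Kd + d). The sketch below fits that curve to synthetic data (not the paper's MDMA measurements; Emax and Kd values here are arbitrary) to show what the first step of the procedure amounts to numerically:

```python
# Hedged sketch: fit a saturable dose-response curve E = Emax*d/(Kd+d)
# to synthetic effect-vs-dose data, illustrating the first step of the
# quantitative risk assessment procedure described in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def saturable(dose, emax, kd):
    return emax * dose / (kd + dose)

dose = np.array([0.5, 1, 2, 5, 10, 20, 40], dtype=float)
rng = np.random.default_rng(1)
effect = saturable(dose, 80.0, 5.0) + rng.normal(0, 1, dose.size)  # synthetic

(emax_hat, kd_hat), _ = curve_fit(saturable, dose, effect, p0=[50, 1])
print(f"Emax estimate ~ {emax_hat:.1f}, Kd estimate ~ {kd_hat:.1f}")
```

The fitted Kd marks where the curve bends toward saturation; in the paper's framework this fitted relationship, rather than a single NOAEL, carries the dose-response information forward to the later risk-estimation steps.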


EXTRACTION OF THE LEAN TISSUE BOUNDARY OF A BEEF CARCASS

  • Lee, C. H.;H. Hwang
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2000.11c
    • /
    • pp.715-721
    • /
    • 2000
  • In this research, a rule- and neural-net-based boundary extraction algorithm was developed. Extracting the boundary of the region of interest, the lean tissue, is essential for color-machine-vision-based quality evaluation of beef. Major quality features of beef are the size and marbling state of the lean tissue, the color of the fat, and the thickness of the back fat. To evaluate beef quality, extracting the loin part from the sectional image of a beef rib is the crucial first step. Since its boundary is unclear and very difficult to trace, a neural network model was developed to isolate the loin part from the entire input image. At the network training stage, normalized color image data were used. The model reference of the boundary was determined by a binary feature extraction algorithm using the R (red) channel, and 100 sub-images (11×11 masks selected from the maximum extended boundary rectangle) were used as the training data set. Each mask carries information on the curvature of the boundary, and the basic rule in boundary extraction is adaptation to the known curvature of the boundary. The structured model reference and neural-net-based boundary extraction algorithm was implemented on beef images and the results were analyzed.
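The binary feature-extraction step on the R channel can be illustrated with simple thresholding. This is an assumed sketch (the threshold value and toy image are invented; the paper's neural-network stage is not shown):

```python
# Hedged sketch: binary masking on the red channel, the kind of
# feature extraction used to build the model reference boundary.
import numpy as np

def red_channel_mask(image_rgb: np.ndarray, threshold: int = 120) -> np.ndarray:
    """Binary mask of pixels whose red channel exceeds the threshold."""
    return (image_rgb[..., 0] > threshold).astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3, 0] = 200  # toy "lean tissue" patch with strong red
print(red_channel_mask(img).sum())  # number of pixels passing the mask
```

In the paper the boundary of such a mask seeds the model reference, and the neural network then refines the trace where the red-channel boundary is ambiguous.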


Simulation of Mobile Robot Navigation based on Multi-Sensor Data Fusion by Probabilistic Model

  • Jin, Tae-seok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.21 no.4
    • /
    • pp.167-174
    • /
    • 2018
  • Presently, the exploration of unknown environments is an important task in the development of mobile robots, and mobile robots are navigated by a number of methods, using sensing systems such as sonar or vision. To fully utilize the strengths of both the sonar and visual sensing systems, multi-sensor data fusion (MSDF) has become a useful method in mobile robotics for navigation and collision avoidance, and its applicability to map building and navigation has been exploited in recent years. This work is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining ultrasonic and IR sensors, for mobile robot navigation, and to present an experimental mobile robot designed to operate autonomously within indoor environments. Simulation results with a mobile robot demonstrate the effectiveness of the discussed methods.
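The core of probabilistic MSDF for two range sensors can be sketched as inverse-variance weighting of Gaussian measurements. This is a generic textbook form, not the paper's specific model; the numbers below are invented:

```python
# Hedged sketch: probabilistic fusion of two range readings
# (e.g., ultrasonic and IR) modeled as Gaussians, combined by
# inverse-variance weighting. Generic MSDF form, not the paper's model.
def fuse(z1: float, var1: float, z2: float, var2: float):
    """Return the fused range estimate and its (smaller) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Ultrasonic reads 2.0 m (variance 0.04); IR reads 2.2 m (variance 0.01)
est, var = fuse(2.0, 0.04, 2.2, 0.01)
print(round(est, 3), round(var, 4))  # estimate pulled toward the IR reading
```

The fused variance is always below either sensor's own variance, which is why fusing complementary sensors improves obstacle ranging for navigation and collision avoidance.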

Generation of Daily High-resolution Sea Surface Temperature for the Seas around the Korean Peninsula Using Multi-satellite Data and Artificial Intelligence (다종 위성자료와 인공지능 기법을 이용한 한반도 주변 해역의 고해상도 해수면온도 자료 생산)

  • Jung, Sihun;Choo, Minki;Im, Jungho;Cho, Dongjin
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_2
    • /
    • pp.707-723
    • /
    • 2022
  • Although satellite-based sea surface temperature (SST) data are advantageous for monitoring large areas, spatiotemporal gaps frequently occur due to various environmental or mechanical causes, so it is crucial to fill in the gaps to maximize usability. In this study, daily SST composite fields with a resolution of 4 km were produced through a two-step machine learning approach using polar-orbiting and geostationary satellite SST data. The first step was SST reconstruction based on the Data Interpolating Convolutional AutoEncoder (DINCAE) using multi-satellite-derived SST data. The second step corrected the reconstructed SST toward in situ measurements using a light gradient boosting machine (LGBM) to finally produce the daily SST composite fields. The DINCAE model was validated using random masks for 50 days, whereas the LGBM model was evaluated using leave-one-year-out cross-validation (LOYOCV). The SST reconstruction accuracy was high, with an R2 of 0.98 and a root-mean-square error (RMSE) of 0.97℃. The accuracy improvement from the second step, evaluated against in situ measurements, was also substantial: an RMSE decrease of 0.21-0.29℃ and an MAE decrease of 0.17-0.24℃. The SST composite fields generated using all in situ data in this study were comparable with existing data-assimilated SST composite fields. In addition, the LGBM model in the second step greatly reduced the overfitting that was reported as a limitation of a previous study that used random forest. The spatial distribution of the corrected SST was similar to that of existing high-resolution SST composite fields, revealing that spatial details of oceanic phenomena such as fronts, eddies, and SST gradients were well simulated. This research demonstrates the potential to produce high-resolution seamless SST composite fields using multi-satellite data and artificial intelligence.

A Modeling Methodology for Analysis of Dynamic Systems Using Heuristic Search and Design of Interface for CRM (휴리스틱 탐색을 통한 동적시스템 분석을 위한 모델링 방법과 CRM 위한 인터페이스 설계)

  • Jeon, Jin-Ho;Lee, Gye-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.4
    • /
    • pp.179-187
    • /
    • 2009
  • Most real-world systems contain a series of dynamic and complex phenomena. One common way to understand such systems is to build a model and analyze their behavior. A two-step methodology, clustering followed by model creation, is proposed for the analysis of time series data. An interface is designed for CRM (Customer Relationship Management) that provides users with 1:1 customized information using system modeling. Experiments confirmed that better clustering is derived from a model-based approach than from a similarity-based one. Clustering is followed by model creation over the clustered groups, by which the future direction of time series movements can be predicted. The effectiveness of the method was validated by checking how closely values predicted by the models move together with real data such as stock prices.
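The two-step methodology can be sketched minimally: cluster the series, then fit a simple model per cluster and read off its predicted direction. The clustering method, per-cluster model, and synthetic series below are all assumptions for illustration, not the paper's heuristic search:

```python
# Hedged sketch of the two-step methodology: (1) cluster time series,
# (2) fit a simple per-cluster model whose slope predicts direction.
# Synthetic rising/falling series; not the paper's CRM data or models.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
rising = np.cumsum(rng.uniform(0.5, 1.5, (5, 20)), axis=1)
falling = -np.cumsum(rng.uniform(0.5, 1.5, (5, 20)), axis=1)
series = np.vstack([rising, falling])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(series)

t = np.arange(20)
for c in range(2):
    group_mean = series[labels == c].mean(axis=0)
    slope = np.polyfit(t, group_mean, 1)[0]  # per-cluster trend model
    print(f"cluster {c}: mean slope {slope:+.2f}")
```

The sign of each cluster's slope is the "future direction" prediction; the paper's contribution is doing the grouping with model-based rather than raw-similarity clustering.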

Reference Feature Based Cell Decomposition and Form Feature Recognition (기준 특징형상에 기반한 셀 분해 및 특징형상 인식에 관한 연구)

  • Kim, Jae-Hyun;Park, Jung-Whan
    • Korean Journal of Computational Design and Engineering
    • /
    • v.12 no.4
    • /
    • pp.245-254
    • /
    • 2007
  • This research proposes feature extraction algorithms that take STEP AP214 data as input, together with a feature parameterization process that simplifies further design changes and maintenance. The procedure starts by suppressing the blend faces of an input solid model to generate a simplified model, where both constant- and variable-radius blends are considered. Most existing cell decomposition algorithms utilize concave edges, and they usually require complex procedures and substantial computing time to recompose the cells. The proposed algorithm, which uses reference features, was found to be more efficient when tested on several sample cases. In addition, the algorithm is able to recognize depression features, another strong point compared to existing cell decomposition approaches. The proposed algorithm was implemented on a commercial CAD system and tested with selected industrial product models, along with parameterization of the recognized features for further design change.

Analysis of quantitative high throughput screening data using a robust method for nonlinear mixed effects models

  • Park, Chorong;Lee, Jongga;Lim, Changwon
    • Communications for Statistical Applications and Methods
    • /
    • v.27 no.6
    • /
    • pp.701-714
    • /
    • 2020
  • Quantitative high throughput screening (qHTS) assays are used to assess the toxicity of many chemicals in a short period by analyzing them collectively at several concentrations. Such data are routinely analyzed using nonlinear regression models; however, we propose a new method to analyze qHTS data using a nonlinear mixed effects model. qHTS data are generated by repeating the same experiment several times for each chemical; therefore, they can be viewed as repeated measures data and hence analyzed using a nonlinear mixed effects model, which accounts for both intra- and inter-individual variability. Furthermore, we apply a one-step approach incorporating robust estimation methods to estimate the fixed effect parameters and the variance-covariance structure, since outliers or influential observations are not uncommon in qHTS data. The toxicity of chemicals from a qHTS assay is classified based on the significance of a parameter related to the efficacy of the chemicals using the proposed method. We evaluate the performance of the proposed method in terms of power and false discovery rate using simulation studies, comparing it with one existing method. The proposed method is illustrated using a dataset obtained from the National Toxicology Program.
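The robust-estimation idea can be illustrated with a fixed-effects-only sketch: a Hill-type concentration-response curve fitted with a soft-L1 loss so a gross outlier has limited influence. This is not the paper's mixed-effects model (no random effects per replicate are included), and the parameter values and outlier are synthetic:

```python
# Hedged sketch: robust nonlinear fit of a Hill concentration-response
# curve using a soft-L1 loss, illustrating the robustness idea; the
# paper's full nonlinear mixed effects model is not reproduced here.
import numpy as np
from scipy.optimize import least_squares

def hill(theta, conc):
    bottom, top, ec50, slope = theta
    return bottom + (top - bottom) / (1 + (ec50 / conc) ** slope)

conc = np.logspace(-2, 2, 9)
rng = np.random.default_rng(2)
resp = hill([0.0, 100.0, 1.0, 1.2], conc) + rng.normal(0, 2, conc.size)
resp[3] += 40  # one gross outlier, as is common in qHTS plates

fit = least_squares(lambda th: hill(th, conc) - resp,
                    x0=[0, 80, 0.5, 1.0],
                    bounds=([-10, 0, 1e-3, 0.1], [50, 200, 100, 5]),
                    loss="soft_l1", f_scale=5.0)
print("EC50 estimate:", round(fit.x[2], 2))
```

With an ordinary least-squares loss the outlier drags the curve; the soft-L1 loss caps its residual contribution, which is the same motivation behind the paper's robust estimation of fixed effects and variance components.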

Biological Pathway Extension Using Microarray Gene Expression Data

  • Chung, Tae-Su;Kim, Ji-Hun;Kim, Kee-Won;Kim, Ju-Han
    • Genomics & Informatics
    • /
    • v.6 no.4
    • /
    • pp.202-209
    • /
    • 2008
  • Biological pathways are collections of knowledge about certain biological processes. Although knowledge about a pathway is quite significant for further analysis, it covers only a tiny portion of the genes that exist. In this paper, we suggest a model to extend each individual pathway using microarray expression data, based on the known knowledge about the pathway. We take the Rosetta compendium dataset to extend pathways of Saccharomyces cerevisiae obtained from the KEGG (Kyoto Encyclopedia of Genes and Genomes) database. Before applying our model, we verify the underlying assumption that microarray data reflect the interaction knowledge in the pathway, and we evaluate our scoring system by introducing a performance function. In the last step, we validate the proposed candidates with the help of another type of biological information. We introduce a pathway extension model that uses the pathway's intrinsic structure together with microarray expression data, and the model provides suitable candidate genes with which to extend each single biological pathway.
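A minimal form of such a scoring system ranks candidate genes by their expression correlation with the known pathway members. This sketch is an assumed simplification (synthetic expression matrix, mean-correlation score); the paper's actual scoring system uses the pathway's structure as well:

```python
# Hedged sketch: score each candidate gene by its mean expression
# correlation with known pathway genes across arrays -- a simplified
# stand-in for the paper's pathway-extension scoring system.
import numpy as np

rng = np.random.default_rng(3)
arrays = 30
pathway = rng.normal(0, 1, (5, arrays))                      # 5 known genes
co_reg = pathway.mean(axis=0) + rng.normal(0, 0.3, arrays)   # co-regulated
unrelated = rng.normal(0, 1, arrays)                         # background gene

def score(candidate, pathway):
    return np.mean([np.corrcoef(candidate, g)[0, 1] for g in pathway])

print("co-regulated candidate score:", round(score(co_reg, pathway), 2))
print("unrelated candidate score:   ", round(score(unrelated, pathway), 2))
```

A candidate scoring well against the whole pathway is proposed for extension, then validated against independent biological information, as in the paper's last step.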