• Title/Summary/Keyword: Feature Based Modeling System


Development of a Unified Modeler Framework for Virtual Manufacturing System

  • Lee, Deok-Ung; Hwang, Hyeon-Cheol; Choe, Byeong-Gyu
    • Proceedings of the Korean Operations and Management Science Society Conference / 2004.05a / pp.52-55 / 2004
  • A VMS (virtual manufacturing system) may be defined as a transparent interface/control mechanism that supports human decision-making via simulation and monitoring of real operating situations, through modeling of all activities in the RMS (real manufacturing system). The three main layers in a VMS are the business process layer, the manufacturing execution layer, and the facility operation layer; each layer is represented by a specific software system with its own input modeler module. The current versions of these input modelers were implemented on their own 'local' frameworks, and as a result there is neither an information-sharing mechanism nor a common user view among them. Proposed in this paper is a unified modeler framework covering the three VMS layers, in which the concept of a PPR (product-process-resource) model is employed as a common semantics framework and a 2D graphic network model is used as a syntax framework. For this purpose, the abstract classes PPRObject and GraphicObject are defined, and a subclass is inherited from each abstract class for each application layer. This makes the individual software systems easier to develop and maintain. For information sharing, XML is used as a common data format.
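
The class design described in the abstract can be sketched in a few lines. Only the names PPRObject and GraphicObject come from the paper; the FacilityResource subclass, its fields, and the XML layout are hypothetical placeholders:

```python
from abc import ABC, abstractmethod
import xml.etree.ElementTree as ET

class PPRObject(ABC):
    """Common semantics: every product/process/resource entity shares these fields."""
    def __init__(self, name, ppr_type):
        self.name = name
        self.ppr_type = ppr_type  # "product" | "process" | "resource"

    @abstractmethod
    def layer(self):
        """Which VMS layer this object belongs to."""

    def to_xml(self):
        # XML serves as the common interchange format across the three layers
        node = ET.Element("PPRObject", name=self.name,
                          type=self.ppr_type, layer=self.layer())
        return ET.tostring(node, encoding="unicode")

class GraphicObject(ABC):
    """Common syntax: every node in the 2D graphic network model has a position."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class FacilityResource(PPRObject):
    """Hypothetical subclass for the facility-operation layer."""
    def layer(self):
        return "facility-operation"
```

Each application layer would derive its own subclasses in the same way, so the shared behavior (and the XML export) lives once in the abstract base.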


Developing Data Fusion Method for Indoor Space Modeling based on IndoorGML Core Module

  • Lee, Jiyeong; Kang, Hye Young; Kim, Yun Ji
    • Spatial Information Research / v.22 no.2 / pp.31-44 / 2014
  • Depending on the purpose of an application, the application program utilizes the most suitable data model, and 3D modeling data are generated based on the selected data model. For this reason, various data sets exist that represent the same geographical features. These duplicated data sets cause serious problems of system interoperability and data compatibility, as well as financial issues for the geospatial information industry. To overcome these problems, this study proposes a spatial data fusion method that uses topological relationships among spatial objects in the feature classes, called the Topological Relation Model (TRM). The TRM is a spatial data fusion method implemented at the application level, which means that geometric data generated by two different data models are used directly, without any data exchange or conversion processes, in an application system that provides indoor LBS. The topological relationships are defined and described using the basic concepts of IndoorGML. After describing the concepts of the TRM, experimental implementations of the proposed data fusion method in 3D GIS are presented. The final section summarizes the limitations of this study and further research.
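
A minimal sketch of how such topological relations could be classified at the application level, assuming each indoor space is reduced to a set of shared cell identifiers (this cell-set representation is our simplification, not the paper's actual geometry handling):

```python
def topological_relation(a, b):
    """Classify the relation between two indoor spaces,
    each given as a set of cell IDs from a common decomposition."""
    if a == b:
        return "EQUALS"
    if a >= b:
        return "CONTAINS"
    if a <= b:
        return "WITHIN"
    if a & b:
        return "OVERLAPS"
    return "DISJOINT"
```

An application could then query across both source models through these relations alone, without converting either model's geometry.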

Data anomaly detection for structural health monitoring of bridges using shapelet transform

  • Arul, Monica; Kareem, Ahsan
    • Smart Structures and Systems / v.29 no.1 / pp.93-103 / 2022
  • With the wider availability of sensor technology through easily affordable sensor devices, many Structural Health Monitoring (SHM) systems have been deployed to monitor vital civil infrastructure. Continuous monitoring provides valuable information about the health of a structure that can support decisions on retrofits and other structural modifications. However, when the sensors are exposed to harsh environmental conditions, the data measured by SHM systems tend to be affected by multiple anomalies caused by faulty or broken sensors. Given a deluge of high-dimensional data collected continuously over time, research into using machine learning methods to detect anomalies is a topic of great interest to the SHM community. This paper contributes to this effort by applying a relatively new time series representation, the "Shapelet Transform", in combination with a Random Forest classifier to autonomously identify anomalies in SHM data. The shapelet transform is a unique time series representation based solely on the shape of the time series data. Considering the individual characteristics unique to every anomaly, this transform yields a new shape-based feature representation that can be combined with any standard machine learning algorithm to detect anomalous data with no manual intervention. In the present study, the anomaly detection framework consists of three steps: identifying unique shapes from anomalous data, using these shapes to transform the SHM data into a local shape space, and training machine learning algorithms on the transformed data to identify anomalies. The efficacy of this method is demonstrated by identifying anomalies in acceleration data from an SHM system installed on a long-span bridge in China. The results show that multiple data anomalies in SHM data can be automatically detected with high accuracy using the proposed method.
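
The core operation of the shapelet transform is the minimum distance from a candidate shapelet to any equal-length window of a series; a bare-bones version is below (the paper then feeds such distances, one per shapelet, into a Random Forest classifier, which is omitted here):

```python
def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between the shapelet and any
    equal-length sliding window of the series."""
    m = len(shapelet)
    best = float("inf")
    for i in range(len(series) - m + 1):
        d = sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)) ** 0.5
        best = min(best, d)
    return best
```

Transforming a data set means computing this distance for every (series, shapelet) pair, which yields the shape-based feature vectors the abstract describes.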

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min; Won, Sang-Chul
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.229-232 / 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the problem of object grasping, based on a stand-alone binocular system. The basic idea is to treat the vision system as a task-dedicated sensor included in a servo control loop; automatic grasping then follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing scheme ensures convergence of the image features to the desired trajectories by using the image Jacobian matrix, which is proved by Lyapunov stability theory. We also stress the importance of projectively invariant object/gripper alignment. The alignment between two solids in 3D projective space can be represented in a view-invariant way; more precisely, it can be mapped directly into an image set-point without any knowledge of the camera parameters. The main feature of this method is that the accuracy of the task is not affected by discrepancies between the Euclidean setups at the preparation and execution stages. The set-point is then computed from the projective alignment, and the robot gripper moves to the desired position under the image-based control law. In this paper we adopt a constant Jacobian during online control. The method described herein integrates vision, robotics, and automatic control; it overcomes the disadvantages of discrepancies between the different Euclidean setups and proposes a control law for the binocular stand-alone case. Experimental simulation shows that this image-based approach is effective in performing precise alignment between the robot end-effector and the object.
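
The classical image-based control law the abstract refers to, v = -λ J⁺ (s - s*), with a constant image Jacobian, can be sketched as follows (the gain value and test matrices are illustrative, not from the paper):

```python
import numpy as np

def ibvs_velocity(features, desired, jacobian, gain=0.5):
    """Image-based visual servoing: drive the image-feature error
    (s - s*) to zero via the pseudo-inverse of a constant Jacobian."""
    error = np.asarray(features, dtype=float) - np.asarray(desired, dtype=float)
    return -gain * np.linalg.pinv(jacobian) @ error
```

When the features reach the set-point the error vanishes and the commanded velocity is zero; Lyapunov arguments of the kind cited in the paper establish convergence under conditions on J.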


Detection of Faces with Partial Occlusions using Statistical Face Model

  • Seo, Jeongin; Park, Hyeyoung
    • Journal of KIISE / v.41 no.11 / pp.921-926 / 2014
  • Face detection refers to the process of extracting facial regions from an input image; it can improve the speed and accuracy of recognition or authentication systems and has diverse applications. Since conventional works have tried to detect faces based on the whole facial shape, their detection performance can be degraded by occlusions caused by accessories or body parts. In this paper we propose a method combining local feature descriptors and probabilistic modeling in order to detect partially occluded faces effectively. In the training stage, we represent an image as a set of local feature descriptors and estimate a statistical model of normal faces. When a test image is given, we find the region most similar to a face using the face model constructed in the training stage. Experimental results on a benchmark data set confirm the effectiveness of the proposed method in detecting partially occluded faces.
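
One way to realize the "statistical model for normal faces" step is a diagonal Gaussian over local descriptor vectors, scored by log-likelihood; this is our illustration of the idea, not the paper's exact estimator:

```python
import numpy as np

def fit_face_model(descriptors):
    """Fit a diagonal Gaussian to local feature descriptors of training faces."""
    X = np.asarray(descriptors, dtype=float)
    return X.mean(axis=0), X.var(axis=0) + 1e-6  # variance floor for stability

def face_score(descriptor, mean, var):
    """Log-likelihood of one descriptor under the face model; higher = more face-like."""
    d = np.asarray(descriptor, dtype=float)
    return float(-0.5 * np.sum((d - mean) ** 2 / var + np.log(2 * np.pi * var)))
```

Detection then amounts to scoring candidate regions and keeping the best one; because the score is built from local descriptors, an occluded part lowers the score only locally rather than breaking a whole-face template.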

Identifying potential mergers of globular clusters: a machine-learning approach

  • Pasquato, Mario
    • The Bulletin of The Korean Astronomical Society / v.39 no.2 / pp.89-89 / 2014
  • While the current consensus view holds that galaxy mergers are commonplace, it is sometimes speculated that Globular Clusters (GCs) may also have undergone merging events, possibly resulting in massive objects with a strong metallicity spread such as Omega Centauri. Galaxies are mostly far, unresolved systems whose mergers are most likely wet, resulting in observational as well as modeling difficulties, but GCs are resolved into stars that can be used as discrete dynamical tracers, and their mergers might have been dry, therefore easily simulated with an N-body code. It is however difficult to determine the observational parameters best suited to reveal a history of merging based on the positions and kinematics of GC stars, if evidence of merging is at all observable. To overcome this difficulty, we investigate the applicability of supervised and unsupervised machine learning to the automatic reconstruction of the dynamical history of a stellar system. In particular we test whether statistical clustering methods can classify simulated systems into monolithic versus merger products. We run direct N-body simulations of two identical King-model clusters undergoing a head-on collision resulting in a merged system, and other simulations of isolated King models with the same total number of particles as the merged system. After several relaxation times elapse, we extract a sample of snapshots of the sky-projected positions of particles from each simulation at different dynamical times, and we run a variety of clustering and classification algorithms to classify the snapshots into two subsets in a relevant feature space.
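
One candidate observational feature for separating merger products from monolithic systems is the axis ratio of the sky-projected particle distribution, since a recently merged head-on pair may stay flattened along the collision axis; this is an illustrative choice of feature, not the specific feature space used in the work above:

```python
import numpy as np

def projected_axis_ratio(positions):
    """Minor-to-major axis ratio of sky-projected positions (1.0 = round),
    from the eigenvalues of the 2x2 covariance of the particle coordinates."""
    p = np.asarray(positions, dtype=float)
    cov = np.cov((p - p.mean(axis=0)).T)
    low, high = np.linalg.eigvalsh(cov)  # eigenvalues in ascending order
    return float(np.sqrt(low / high))
```

Features like this, computed per snapshot, would form the input space on which the clustering and classification algorithms operate.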


Genetic Algorithm based hyperparameter tuned CNN for identifying IoT intrusions

  • Alexander, R.; Pradeep Mohan Kumar, K.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.3 / pp.755-778 / 2024
  • In recent years, the number of devices connected to the internet has grown enormously, as has intrusive behavior in the network. It is therefore important for intrusion detection systems to report all intrusive behavior. Using deep learning and machine learning algorithms, intrusion detection systems are able to perform well in identifying attacks. However, the concern with these deep learning algorithms is their inability to identify a suitable network architecture for the traffic volume, which requires manual changing of hyperparameters and consumes a lot of time and effort. To address this, this paper offers a solution using the extended compact genetic algorithm for automatic tuning of the hyperparameters. The novelty of this work lies in modeling the problem of identifying attacks as a multi-objective optimization problem and in the use of linkage learning to solve it. The solution is obtained using a feature-map-based Convolutional Neural Network that is encoded into genes, and using the extended compact genetic algorithm the model is optimized for detection accuracy and latency. The CIC-IDS-2017 and 2018 datasets are used to verify the hypothesis, and the most recent analysis yielded a substantial F1 score of 99.23%. Evaluations of response time, CPU, and memory consumption demonstrate the suitability of this model in a fog environment.
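
The paper's extended compact GA builds probabilistic models of gene linkage; a much simpler generational GA still conveys the encode-evaluate-evolve loop. Everything below is a toy stand-in: the search space, constants, and fitness are illustrative, and the real fitness would come from training the CNN and measuring detection accuracy and latency:

```python
import random

def tune(search_space, fitness, generations=20, pop_size=8, seed=0):
    """Toy generational GA over discrete hyperparameter choices
    (a stand-in for the paper's extended compact GA with linkage learning)."""
    rng = random.Random(seed)
    keys = list(search_space)
    pop = [{k: rng.choice(search_space[k]) for k in keys} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # elitist: keep the fitter half
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice((a[k], b[k])) for k in keys}  # uniform crossover
            if rng.random() < 0.2:               # point mutation
                k = rng.choice(keys)
                child[k] = rng.choice(search_space[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In the paper's setting each gene would encode a CNN hyperparameter (filter counts, kernel sizes, and so on), and a multi-objective fitness would trade off accuracy against latency.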

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction) as well as HCI (Human-Computer Interaction). By using facial expressions, a system can produce various reactions corresponding to the emotional state of the user, and service agents such as intelligent robots can infer suitable services to provide to the user. In this article, we address the issue of expressive face modeling using an advanced active appearance model for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most widely expressed through the eyes and mouth. To recognize a human emotion from a facial image, we need to extract feature points such as Ekman's Action Units (AUs). The Active Appearance Model (AAM) is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian network. First, we obtain the reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial parameters of the AAM from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. After several iterations we obtain a model matched to the facial feature outline, and use it to recognize the facial emotion with the Bayesian network.
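
Once AUs are extracted, the final inference step maps AU activations to one of Ekman's categories. A naive overlap score illustrates that mapping; it is not the paper's Bayesian network, and the two prototype AU sets below are just illustrative examples of FACS coding:

```python
def classify_emotion(active_aus, emotion_prototypes):
    """Pick the emotion whose prototype Action Unit set best
    overlaps the AUs detected in the face."""
    scores = {emotion: len(active_aus & aus) / len(aus)
              for emotion, aus in emotion_prototypes.items()}
    return max(scores, key=scores.get)
```

A Bayesian network generalizes this by weighting each AU's evidence probabilistically instead of counting overlaps uniformly.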

Smart-tracking Systems Development with QR-Code and 4D-BIM for Progress Monitoring of a Steel-plant Blast-furnace Revamping Project in Korea

  • Jung, In-Hye; Roh, Ho-Young; Lee, Eul-Bum
    • International conference on construction engineering and project management / 2020.12a / pp.149-156 / 2020
  • Blast-furnace revamping in the steel industry is among the most demanding types of work: complicated equipment must be completed within a short period of time while coordinating the interfaces of various work types. P company planned to build a Smart Tracking System based on a wireless tag system, with the aim of meeting the construction schedule and reducing costs, ahead of the blast-furnace revamping scheduled for construction in February of next year. The system combines detailed design data with wireless recognition technology to track the status of each member through the design, storage, and installation stages. It then graphically displays the location information of each member, comparing plan against actual status, in connection with Building Information Modeling (BIM) 4D simulation. QR codes are used as the wireless tags in order to check the receiving status of core equipment, considering the characteristics of each item. A database is built in the server system and status information is entered into it. By implementing BIM 4D simulation data using DELMIA, information on location and status is provided. As a software feature, an item-confirmation function will be added to the mobile phone screen to improve the accuracy of item tagging; accuracy also increases through simultaneous processing of storage and location tagging. The most significant effect of building this system is minimizing construction errors by preventing the erroneous handling of members. The system will be very useful for overall project management because the position and progress of each critical item can be visualized in real time, and it could eventually lead to cost reductions in project management.
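
The tag-then-query flow of such a system reduces to a small state store keyed by QR code; everything below (the class name, stage names, and fields) is a hypothetical sketch of that flow, not the paper's actual server schema:

```python
STAGES = ("design", "storage", "installation")

class TrackingDB:
    """In-memory stand-in for the server database keyed by each item's QR code."""
    def __init__(self):
        self.items = {}

    def tag(self, qr_code, stage, location):
        """Record a scan: the item's current lifecycle stage and its location."""
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.items[qr_code] = {"stage": stage, "location": location}

    def progress(self, qr_code):
        """Latest known state of an item, or None if it was never scanned."""
        return self.items.get(qr_code)
```

Linking each record's location to the corresponding BIM element is what lets the 4D simulation color members by their real-time progress.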


Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model

  • Eum, Hyukmin; Lee, Heejin; Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.471-476 / 2016
  • In this paper, a recognition system for continuous human action is presented that uses motion history images (MHI) and histograms of oriented gradients (HOG) together with a spotter model, all based on depth information; the spotter model, which performs action spotting, is proposed to improve the system's recognition performance. The system consists of three steps: pre-processing, modeling of human actions and the spotter, and continuous human action recognition. In the pre-processing step, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and sequences are generated from the extracted features. Using these sequences and hidden Markov models, an action model for each defined action and the proposed spotter model are created. Continuous recognition then performs action spotting, using the spotter model to segment meaningful from meaningless actions in a continuous sequence, and recognizes actions by comparing the model probability values for each meaningful action sequence. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system.
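
Action spotting can be reduced to comparing, frame by frame, the best action model's score against the spotter model's score, and keeping the stretches where the action models win. The sketch below assumes precomputed per-frame log-probabilities rather than the paper's full HMM decoding:

```python
def spot_actions(frame_scores, margin=0.0):
    """Split a continuous stream into meaningful segments: runs of frames
    where the best action model's score beats the spotter model's score.
    frame_scores: list of (best_action_score, spotter_score) per frame.
    Returns half-open (start, end) frame index pairs."""
    segments, start = [], None
    for i, (action_score, spotter_score) in enumerate(frame_scores):
        meaningful = action_score - spotter_score > margin
        if meaningful and start is None:
            start = i
        elif not meaningful and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(frame_scores)))
    return segments
```

Each returned segment would then be classified by picking the action HMM with the highest probability over that segment alone.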