• Title/Summary/Keyword: object term

359 search results

Gain-Tuning of Sensory Feedback for a Multi-Fingered Hand Based on Muscle Physiology

  • Bae, J.H.;Arimoto, S.;Shinsuke, N.;Ozawa, R.
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.1994-1999 / 2003
  • This paper discusses the dynamic characteristics of motion of a pair of multi-degree-of-freedom robot fingers grasping a rigid object and controlling its orientation with the aid of rolling contacts. In particular, the discussion focuses on the problem of gain-tuning of sensory feedback signals proposed from the viewpoint of sensory-motor coordination, which consist of a feedforward term, a feedback term for controlling the rotational moment of the object, and another term for controlling its rotational angle. It is found through computer simulations of the overall fingers-object dynamics subject to rolling contact constraints that certain dynamic characteristics of the torque-angular velocity relation may play an important role, similar to what has been reported experimentally in muscle physiology, and that the selection of damping gains in the angular velocity feedback, depending on the guess of the object mass, is therefore crucial. Finally, guidance for gain-tuning of each feedback term is suggested and its validity is examined through various computer simulations. (A minimal control-law sketch follows this entry.)

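The abstract above describes a controller built from a feedforward term plus moment- and angle-feedback terms, with damping gains scaled by a guessed object mass. The sketch below is only an illustration of that idea under my own assumptions; the specific control law, gain names, and mass-scaling rule are not taken from the paper.

```python
import numpy as np

def fingertip_torque(dq, theta, dtheta, theta_des, m_hat,
                     tau_ff, J_moment, J_angle,
                     k_m=2.0, k_a=5.0, c0=0.1):
    """Illustrative gain-tuned sensory-feedback law (not the paper's exact form).

    dq              : joint velocities of one finger
    theta, dtheta   : object rotation angle / angular velocity
    theta_des       : desired object angle
    m_hat           : guessed object mass used to scale damping
    tau_ff          : feedforward joint torque
    J_moment,J_angle: Jacobian-like maps from object-level errors to joint space
    """
    # Damping gain grows with the guessed object mass, echoing the
    # torque-angular-velocity behaviour discussed in the abstract.
    c_damp = c0 * (1.0 + m_hat)

    tau_moment = J_moment.T @ np.atleast_1d(-k_m * dtheta)              # moment control
    tau_angle  = J_angle.T  @ np.atleast_1d(k_a * (theta_des - theta))  # angle control
    tau_damp   = -c_damp * dq                                           # velocity damping

    return tau_ff + tau_moment + tau_angle + tau_damp
```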

Energy Minimization Based Semantic Video Object Extraction

  • Kim, Dong-Hyun;Choi, Sung-Hwan;Kim, Bong-Joe;Shin, Hyung-Chul;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.138-141 / 2010
  • In this paper, we propose a semi-automatic method for semantic video object extraction which extracts meaningful objects from an input sequence given one correctly segmented training image. Given one correctly segmented image acquired through the user's interaction in the first frame, the proposed method automatically segments and tracks the objects in the following frames. We formulate the semantic object extraction procedure as an energy minimization problem at the fragment level instead of the pixel level. The proposed energy function consists of two terms: a data term and a smoothness term. The data term is computed by considering patch similarity, color, and motion information. The smoothness term is then introduced to enforce spatial continuity. Finally, iterated conditional modes (ICM) optimization is used to minimize the energy function. The proposed semantic video object extraction method provides faithful results for various types of image sequences. (A minimal ICM sketch follows this entry.)

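For readers unfamiliar with iterated conditional modes, here is a minimal, generic ICM sketch over fragment labels with a data term and a pairwise (Potts-style) smoothness term. The energy definitions, neighbour structure, and weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def icm_segment(data_cost, neighbors, labels, smooth_weight=1.0, n_iters=10):
    """Generic ICM over fragment labels.

    data_cost : (n_fragments, n_labels) array of per-label assignment costs
    neighbors : dict {fragment: list of neighbouring fragments}
    labels    : initial label per fragment (e.g. propagated from the first frame)
    """
    labels = labels.copy()
    n_labels = data_cost.shape[1]
    for _ in range(n_iters):
        changed = False
        for f in range(data_cost.shape[0]):
            # Smoothness: penalty for disagreeing with each neighbour's label.
            smooth = np.array([
                sum(smooth_weight for nb in neighbors.get(f, []) if labels[nb] != l)
                for l in range(n_labels)
            ])
            best = int(np.argmin(data_cost[f] + smooth))
            if best != labels[f]:
                labels[f] = best
                changed = True
        if not changed:          # no fragment changed label: local minimum reached
            break
    return labels
```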

Imitation Learning of Bimanual Manipulation Skills Considering Both Position and Force Trajectory (힘과 위치를 동시에 고려한 양팔 물체 조작 솜씨의 모방학습)

  • Kwon, Woo Young;Ha, Daegeun;Suh, Il Hong
    • The Journal of Korea Robotics Society / v.8 no.1 / pp.20-28 / 2013
  • A large workspace and a strong grasping force are required when a robot manipulates big and/or heavy objects. In such situations, bimanual manipulation is more useful than unimanual manipulation. However, controlling both hands to manipulate an object requires a more complex model than unimanual manipulation. Learning from human demonstration is a useful technique for a robot to learn such a model. In this paper, we propose an imitation learning method for bimanual object manipulation from human demonstrations. For robust imitation of bimanual object manipulation, the movement trajectories of the two hands are encoded as a movement trajectory of the object and a force trajectory for grasping the object. The movement trajectory of the object is modeled using the framework of dynamic movement primitives, which represent demonstrated movements with a set of goal-directed dynamic equations. The force trajectory for grasping the object is likewise modeled as a dynamic equation with an adjustable force term, in which locally weighted regression and multiple linear regression are employed to imitate the complex non-linear movements of human demonstrations. To show the effectiveness of the proposed method, a pick-and-place movement skill is demonstrated in a simulation environment. (A minimal dynamic movement primitive sketch follows.)
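
Dynamic movement primitives are a standard formulation; the sketch below shows a single-dimension discrete DMP with a forcing term fitted by least squares, written from the textbook formulation rather than from this paper. The gains, basis-function placement, and fitting procedure are my own illustrative choices.

```python
import numpy as np

class DMP1D:
    """Minimal one-dimensional discrete dynamic movement primitive."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.alpha, self.beta, self.alpha_s = alpha, beta, alpha_s
        # Basis centers spread along the canonical (phase) variable s.
        self.centers = np.exp(-alpha_s * np.linspace(0, 1, n_basis))
        self.widths = 1.0 / np.diff(self.centers, append=self.centers[-1] * 0.5) ** 2
        self.w = np.zeros(n_basis)

    def _features(self, s):
        psi = np.exp(-self.widths * (s - self.centers) ** 2)
        return psi * s / psi.sum()

    def fit(self, x, dx, ddx, dt):
        """Learn the forcing-term weights from one demonstration."""
        g, tau = x[-1], dt * len(x)
        s = np.exp(-self.alpha_s * np.arange(len(x)) * dt / tau)
        f_target = tau**2 * ddx - self.alpha * (self.beta * (g - x) - tau * dx)
        Phi = np.array([self._features(si) for si in s])
        self.w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)

    def rollout(self, x0, g, tau, dt):
        """Integrate the learned primitive toward a (possibly new) goal g."""
        x, dx, s, traj = x0, 0.0, 1.0, []
        while s > 1e-3:
            f = self._features(s) @ self.w
            ddx = (self.alpha * (self.beta * (g - x) - tau * dx) + f) / tau**2
            dx += ddx * dt
            x += dx * dt
            s += -self.alpha_s * s * dt / tau   # canonical system
            traj.append(x)
        return np.array(traj)
```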

Host galaxy of tidal disruption object, Swift J1644+57

  • Yoon, Yongmin;Im, Myungshin
    • The Bulletin of The Korean Astronomical Society / v.38 no.2 / pp.70.1-70.1 / 2013
  • We present long-term optical to NIR data of the tidal disruption object Swift J1644+57. The data were obtained with CQUEAN and UKIRT WFCAM observations. We analyze the morphology of the host galaxy of this object and decompose the bulge component using high-resolution HST WFC3 images. We conclude that the host galaxy is bulge dominant. We also estimate the multi-band fluxes of the host galaxy through light curves based on the long-term observational data. We fit SED models to the multi-band fluxes of the host galaxy and determine its stellar mass. Finally, we estimate the mass of the central supermassive black hole, which is thought to be responsible for the tidal disruption event. The estimated stellar mass and black hole mass are $10^{9.1}M_{\odot}$ and $10^{6.8}M_{\odot}$, respectively. We compare our results with previous studies.


Host galaxy of tidal disruption object, Swift J1644+57

  • Yoon, Yongmin;Im, Myungshin;Lee, Seong-Kook;Pak, Soojong
    • The Bulletin of The Korean Astronomical Society / v.39 no.1 / pp.48.2-48.2 / 2014
  • We present long-term optical to NIR data of the tidal disruption object Swift J1644+57. The data were obtained with CQUEAN and UKIRT WFCAM observations. We analyze the morphology of the host galaxy of this object and decompose the bulge component using high-resolution HST WFC3 images. We conclude that the host galaxy is bulge dominant. We also estimate the multi-band fluxes of the host galaxy through light curves based on the long-term observational data. We fit SED models to the multi-band fluxes of the host galaxy and determine its stellar mass. Finally, we estimate the mass of the central supermassive black hole, which is responsible for the tidal disruption event. The estimated stellar mass and black hole mass are ${\sim}10^{9.1}M_{\odot}$ and ${\sim}10^{6.8}M_{\odot}$, respectively. We compare our results with previous estimates. (A hedged scaling-relation sketch follows this entry.)

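The abstracts above do not state how the black hole mass was derived from the host-galaxy properties. One common approach is a bulge-mass scaling relation; the sketch below is purely illustrative, uses the Häring & Rix (2004) calibration as an assumption, and is not claimed to reproduce the authors' ${\sim}10^{6.8}M_{\odot}$ estimate.

```python
def black_hole_mass_from_bulge(log_mbulge_solar, alpha=8.20, beta=1.12):
    """Illustrative M_BH estimate from bulge stellar mass via a scaling relation.

    log(M_BH / M_sun) = alpha + beta * log(M_bulge / 1e11 M_sun)
    The default coefficients follow Haring & Rix (2004); the papers above may
    use a different calibration, so treat this as an order-of-magnitude guide.
    """
    return alpha + beta * (log_mbulge_solar - 11.0)

# Example: a bulge-dominated host with log(M*/M_sun) ~ 9.1
print(black_hole_mass_from_bulge(9.1))   # ~6.1, i.e. M_BH ~ 10^6 M_sun
```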

The Effect of Process Models on Short-term Prediction of Moving Objects for Autonomous Driving

  • Madhavan, Raj;Schlenoff, Craig
    • International Journal of Control, Automation, and Systems / v.3 no.4 / pp.509-523 / 2005
  • We are developing a novel framework, PRIDE (PRediction In Dynamic Environments), to perform moving object prediction (MOP) for autonomous ground vehicles. The underlying concept is based on a multi-resolutional, hierarchical approach which incorporates multiple prediction algorithms into a single, unifying framework. The lower levels of the framework use estimation-theoretic short-term predictions, while the upper levels use a probabilistic prediction approach based on situation recognition with an underlying cost model. The estimation-theoretic short-term prediction is performed by an extended Kalman filter-based algorithm that uses sensor data to predict the future locations of moving objects with an associated confidence measure. The proposed estimation-theoretic approach does not incorporate a priori knowledge such as road networks and traffic signage, and it assumes an uninfluenced, constant trajectory; it is thus suited for short-term prediction in both on-road and off-road driving. In this article, we analyze the complementary role played by vehicle kinematic models in such short-term prediction of moving objects. In particular, the importance of vehicle process models and their effect on predicting the positions and orientations of moving objects for autonomous ground vehicle navigation are examined. We present results using field data obtained from different autonomous ground vehicles operating in outdoor environments. (A minimal prediction-step sketch follows.)
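
As a concrete illustration of estimation-theoretic short-term prediction with a constant-trajectory process model, here is a minimal constant-velocity Kalman prediction step in which the growing covariance serves as a confidence proxy. The state layout and noise values are my assumptions, and a plain linear Kalman form is used for this simple process model rather than the article's EKF.

```python
import numpy as np

def predict_constant_velocity(x, P, dt, q=0.5):
    """One prediction step for a 2D constant-velocity process model.

    x : state [px, py, vx, vy]
    P : 4x4 state covariance
    dt: prediction horizon in seconds
    q : process-noise intensity (tunable, illustrative value)
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    # Simple diagonal process noise; real implementations often use a
    # piecewise-constant-acceleration noise model here.
    Q = q * np.diag([dt**2, dt**2, dt, dt])
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q          # covariance grows -> lower confidence
    return x_pred, P_pred

# Example: object at (10 m, 5 m) moving at (2, 0) m/s, predicted 1 s ahead.
x0 = np.array([10.0, 5.0, 2.0, 0.0])
P0 = np.eye(4) * 0.1
print(predict_constant_velocity(x0, P0, dt=1.0)[0])   # -> [12., 5., 2., 0.]
```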

EER-ASSL: Combining Rollback Learning and Deep Learning for Rapid Adaptive Object Detection

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4776-4794 / 2020
  • We propose a rapid adaptive learning framework for streaming object detection, called EER-ASSL. The method combines expected error reduction (EER) dependent rollback learning and active semi-supervised learning (ASSL) for a rapidly adaptive CNN detector. Most CNN object detectors are built on the assumption of a static data distribution. However, in real-world environments images are often noisy and biased, and the data distribution is imbalanced. The proposed method consists of collaborative sampling and EER-ASSL. EER-ASSL combines active learning (AL) and rollback-based semi-supervised learning (SSL). AL allows us to select more informative and representative samples by measuring uncertainty and diversity. SSL divides the selected streaming image samples into bins, and each bin repeatedly transfers the discriminative knowledge of the EER and CNN models to the next bin until convergence and incorporation with the EER rollback learning algorithm is achieved. The EER models provide rapid short-term myopic adaptation, while the CNN models provide incremental long-term performance improvement. EER-ASSL can overcome noisy and biased labels under varying data distributions. Extensive experiments show that EER-ASSL obtained 70.9 mAP compared to state-of-the-art detectors such as Faster RCNN, SSD300, and YOLOv2. (A minimal sample-selection sketch follows.)
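
The abstract describes selecting informative and representative samples by measuring uncertainty and diversity. A minimal, generic way to do that (entropy for uncertainty, greedy farthest-first selection in feature space for diversity) is sketched below; the scoring and weighting are illustrative assumptions, not the EER-ASSL algorithm itself.

```python
import numpy as np

def select_samples(probs, feats, k, lam=0.5):
    """Pick k samples trading off uncertainty (entropy) and diversity.

    probs : (n, n_classes) predicted class probabilities per sample
    feats : (n, d) feature vectors per sample
    lam   : weight between uncertainty and diversity (illustrative)
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    chosen = [int(np.argmax(entropy))]            # start with the most uncertain
    for _ in range(k - 1):
        # Diversity: distance to the nearest already-chosen sample.
        d = np.min(np.linalg.norm(feats[:, None, :] - feats[None, chosen, :],
                                  axis=2), axis=1)
        score = lam * entropy + (1 - lam) * d
        score[chosen] = -np.inf                   # never re-pick a sample
        chosen.append(int(np.argmax(score)))
    return chosen
```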

A Study on Application of Illumination Models for Color Constancy of Objects (객체의 색상 항등성을 위한 조명 모델 응용에 관한 연구)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management / v.13 no.1 / pp.125-133 / 2017
  • Color in an image is determined by the illuminant and the surface reflectance, so recovering the unique color of an object requires estimating the illuminant accurately. In this study, illumination models for achieving object color constancy are examined, starting from a physical illumination model based on physical phenomena. Their characteristics and limits of application are presented, and the necessity of an extended illumination model is argued in order to recover more appropriate object colors. The extended illumination model should contain an additional term for the ambient light in order to account for the spatial variance of illumination in object images; this necessity is verified through an experiment under a simple lighting environment. Finally, a reconstruction method for recovering input images under standard white-light illumination is tested, and a useful method for computing object color reflectivity, derived from a combination of the existing illumination models, is suggested and evaluated. (A minimal reflectance-recovery sketch follows.)
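
To make the idea of an ambient term concrete, the sketch below assumes a simple per-channel model I = R * (shading * L_direct + L_ambient) and inverts it for the reflectance. The exact model form, symbols, and shading factor are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def recover_reflectance(image, L_direct, L_ambient, shading=1.0, eps=1e-6):
    """Invert a simple extended illumination model per RGB channel.

    Assumed model (illustrative, not the paper's exact formulation):
        I(x) = R(x) * (shading(x) * L_direct + L_ambient)

    image     : (H, W, 3) observed image, linear RGB in [0, 1]
    L_direct  : (3,) estimated direct illuminant color
    L_ambient : (3,) estimated ambient light color
    shading   : scalar or (H, W, 1) geometric shading factor
    """
    illum = shading * np.asarray(L_direct) + np.asarray(L_ambient)
    reflectance = image / np.maximum(illum, eps)
    return np.clip(reflectance, 0.0, 1.0)

# Example use: re-render under standard white light after recovery.
# corrected = recover_reflectance(img, L_direct_est, L_ambient_est) * np.ones(3)
```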

LSTM Network with Tracking Association for Multi-Object Tracking

  • Farhodov, Xurshedjon;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.23 no.10 / pp.1236-1249 / 2020
  • In recent object tracking research, Convolutional Neural Network and Recurrent Neural Network-based strategies have become relevant for resolving notable challenges such as occlusion, motion, object and camera viewpoint variations, changing numbers of targets, and lighting variations. In this paper, an LSTM network-based tracking association method is proposed that is capable of real-time multi-object tracking by building an LSTM network for association that supports long-term tracking while addressing these challenges. The LSTM network is defined in Keras as a sequence of layers, with the Sequential class serving as a container for these layers. The proposed network structure is built by integrating tracking association with the Keras neural network library. The tracking process is associated with the LSTM network's learned features, and the system achieves outstanding real-time detection and tracking performance. The main focus of this work is learning the locations, appearance, and motion details of trackable objects, and then predicting the feature locations of objects in boxes according to their initial positions. The performance of the joint object tracking system shows that the LSTM network is powerful and capable of supporting a real-time multi-object tracking process. (A minimal Keras LSTM sketch follows.)
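
Since the abstract mentions building the LSTM as a Keras Sequential stack of layers, here is a minimal sketch of such a model that maps a short history of bounding boxes to the next predicted box. The layer sizes, input format, and loss are my own illustrative choices, not the authors' architecture.

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN = 8      # number of past frames per track (assumed)
BOX_DIM = 4      # bounding box as (cx, cy, w, h), normalized (assumed)

# Sequential container holding an LSTM layer followed by a box regressor.
model = Sequential([
    Input(shape=(SEQ_LEN, BOX_DIM)),
    LSTM(64),
    Dense(32, activation="relu"),
    Dense(BOX_DIM),                     # predicted next box
])
model.compile(optimizer="adam", loss="mse")

# Toy usage: predict the next box for one track from dummy history.
history = np.random.rand(1, SEQ_LEN, BOX_DIM).astype("float32")
next_box = model.predict(history, verbose=0)
print(next_box.shape)                   # (1, 4)
```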