• Title/Summary/Keyword: sensor-based interaction

Search Result 191

Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.165-171 / 2004
  • Robots will be able to coexist with humans and support them effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification and robot localization in intelligent space in order to achieve such a human-centered system. The intelligent space is a space in which many intelligent devices, such as computers and sensors, are distributed; it achieves human-centered services by facilitating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module, which includes processing and networking parts, has been chosen. The Intelligent Space requires functions for identifying and tracking multiple objects in order to provide appropriate services to users in a multi-camera environment. Many camera modules are distributed to achieve seamless tracking and location estimation, which causes object-identification errors among different camera modules. This paper describes an appearance-based object representation for the distributed vision system in the Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
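The abstract does not spell out the appearance model itself; as a hedged illustration, a quantized color-histogram match is one common way to keep object labels consistent across camera modules. The bin count, RGB quantization, and histogram-intersection similarity below are assumptions for the sketch, not the paper's actual design.

```python
def color_histogram(pixels, bins=8):
    """Normalized histogram over quantized RGB pixels (assumed appearance model)."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins + (g * bins // 256)) * bins + (b * bins // 256)
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def identify(observation, models):
    """Label an observed histogram with the best-matching stored model,
    so different camera modules assign the same label to the same object."""
    return max(models, key=lambda label: histogram_intersection(observation, models[label]))
```

A camera module that newly detects an object would compute its histogram and call `identify` against the shared set of learned models, keeping labels consistent across modules.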

Damage detection for pipeline structures using optic-based active sensing

  • Lee, Hyeonseok;Sohn, Hoon
    • Smart Structures and Systems / v.9 no.5 / pp.461-472 / 2012
  • This study proposes an optics-based active sensing system for continuous monitoring of underground pipelines in nuclear power plants (NPPs). The proposed system generates and measures guided waves using a single laser source and optical cables. A tunable laser is used as a common power source for guided wave generation and sensing. The source laser beam is transmitted through an optical fiber, which is split into two: one branch actuates macro fiber composite (MFC) transducers for guided wave generation, and the other is used with fiber Bragg grating (FBG) sensors to measure the guided wave responses. The MFC transducers, placed along the circumferential direction of a pipe at one end, generate longitudinal and flexural modes, and the corresponding responses are measured using FBG sensors instrumented in the same configuration at the other end. The generated guided waves interact with a defect, and this interaction causes changes in the response signals. A damage-sensitive feature is then extracted from the response signals using the axisymmetric nature of the measured pitch-catch signals. The feasibility of the proposed system has been examined through a laboratory experiment.
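The damage-sensitive feature is described only as exploiting the axisymmetry of the pitch-catch signals. A minimal sketch of that idea (the specific deviation measure below is an assumption) compares each path's waveform against the average over all circumferential paths:

```python
def damage_index(signals):
    """RMS deviation of each pitch-catch path from the mean waveform.
    For an intact axisymmetric pipe, all circumferential paths see
    nearly identical responses, so the index stays near zero; a defect
    breaks the symmetry and raises it."""
    n, length = len(signals), len(signals[0])
    mean = [sum(s[i] for s in signals) / n for i in range(length)]
    sq_dev = sum((s[i] - mean[i]) ** 2 for s in signals for i in range(length))
    return (sq_dev / (n * length)) ** 0.5
```

In practice a threshold on this index, calibrated on baseline-free intact-pipe statistics, would flag a damaged section.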

Improving light collection efficiency using partitioned light guide on pixelated scintillator-based γ-ray imager

  • Hyeon, Suyeon;Hammig, Mark;Jeong, Manhee
    • Nuclear Engineering and Technology / v.54 no.5 / pp.1760-1768 / 2022
  • When gamma-camera sensor modules, which are key components of radiation imagers, are derived from the coupling between scintillators and photosensors, the light collection efficiency is an important factor in determining the effectiveness with which the instrument can identify nuclides via their derived gamma-ray spectra. If the pixel area of the scintillator is larger than the pixel area of the photosensor, light loss and cross-talk between pixels of the photosensor can result in information loss, thereby degrading the precision of the energy estimate and the accuracy of the position-of-interaction determination derived from each active pixel in a coded-aperture-based gamma camera. Here we present two methods to overcome the information loss associated with the loss of photons created by scintillation pixels that are coupled to an associated silicon photomultiplier (SiPM) pixel. Specifically, we detail the use of either: (1) light guides, or (2) scintillation pixel areas that match the area of the SiPM pixel. Compared with scintillator/SiPM couplings that have slightly mismatched intercept areas, the experimental results show that both methods substantially improve both the energy and spatial resolution by increasing light collection efficiency, but in terms of the image sensitivity and image quality, only slight improvements are accrued.

A Study on the Shift Register-Based Multi Channel Ultrasonic Focusing Delay Control Method using a CPLD for Ultrasonic Tactile Implementation

  • Shin, Duck-Shick;Park, Jun-Heon;Lim, Young-Cheol;Choi, Joon-Ho
    • Journal of Sensor Science and Technology / v.31 no.5 / pp.324-329 / 2022
  • This paper proposes a shift-register-based multichannel ultrasonic focusing delay control method using a complex programmable logic device (CPLD) for high-resolution ultrasonic focusing systems. The proposed method achieves ultrasonic focusing by controlling the delay of the driving signal of each ultrasonic transducer in an ultrasonic array. The delays of the driving signals of all ultrasonic channels are controlled by setting shift registers in the CPLD. Experiments verified that as the frequency of the clock used for the delay control increased, the error of the focusing point decreased, and that the diameter of the focusing point decreased as the length of the shift register increased. The proposed method uses only one CPLD for ultrasonic focusing and does not require complex hardware circuits, so the resources required for designing an ultrasonic focusing system can be reduced. The proposed method can be applied to human-computer interaction (HCI), virtual reality (VR), and augmented reality (AR).
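The abstract does not give the delay formula, but the underlying geometry is standard delay-and-sum focusing. A sketch follows, with the array layout, speed of sound, and clock frequency as illustrative assumptions; a CPLD implementation would load these tick counts into its shift registers.

```python
import math

def focusing_delays(elements, focus, c=343.0, f_clk=1_000_000):
    """Per-channel firing delays in clock ticks so that waves from all
    transducers arrive at the focal point simultaneously. The farthest
    element fires first (delay 0); nearer elements wait. A faster clock
    quantizes these delays more finely, matching the reported reduction
    in focusing-point error at higher clock frequencies."""
    dists = [math.dist(e, focus) for e in elements]
    d_max = max(dists)
    return [round((d_max - d) / c * f_clk) for d in dists]
```

For a 4-element linear array with 10 mm pitch focused 50 mm ahead of the array center, the outer elements fire immediately and the inner pair is delayed by a few microseconds' worth of ticks.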

Evaluation of Interactions Between Surface Water and Groundwater Based on Temperature, Flow Properties, and Geochemical Data

  • Jeon, Hang-Tak;Kim, Gyoo-Bum
    • The Journal of Engineering Geology / v.21 no.1 / pp.45-55 / 2011
  • We examined the interactions between surface water and groundwater through (1) flowmeter logging, (2) measurements of seasonal and vertical changes in temperature within a well, and (3) geochemical analyses of water samples from nine groundwater-monitoring wells. At two wells adjacent to a stream, subsurface water was found to flow from the stream to a surrounding alluvial fan, and the seasonal change in groundwater temperature is similar to that of surface water and air. Geochemical analyses at these two wells indicated hydro-geochemical features affected by streamwater inflow, showing seasonal variations. Accordingly, the two wells are located in an area with active interaction between surface water and groundwater. The Thermochron iButton used in the present study is useful for this type of surface water-groundwater interaction study because of its low cost and small size.

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.20-26 / 2008
  • As intelligent robots and computers become more common, interaction between them and humans grows more important, and emotion recognition and expression are indispensable for this interaction. In this paper, we first extract emotional features from speech signals and facial images. Second, we apply Bayesian Learning (BL) and Principal Component Analysis (PCA), and finally we classify five emotion patterns (neutral, happiness, anger, surprise, and sadness). We also experiment with decision fusion and feature fusion to enhance the emotion recognition rate. In the decision fusion method, a fuzzy membership function is applied to the outputs of the individual recognition systems. In the feature fusion method, superior features are selected through Sequential Forward Selection (SFS) and fed to a Multi-Layer Perceptron (MLP) neural network to classify the five emotion patterns. The recognized result is then applied to a 2D facial shape to express the emotion.
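Of the pieces listed, Sequential Forward Selection is easy to make concrete. A generic sketch follows; in the paper the score would be the recognition rate of the trained classifier on held-out data, which is abstracted here as an arbitrary callable.

```python
def sequential_forward_selection(features, score, k):
    """Greedy SFS: repeatedly add the single feature that most improves
    score(subset), stopping at k features or when nothing helps."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the score
        selected.append(best)
        remaining.remove(best)
    return selected
```

The selected subset would then be the input layer of the MLP, keeping the network small while retaining the most discriminative speech and facial features.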

Analysis of Domestic and International Biomechanics Research Trends in Shoes: Focusing on Research Published in 2015-2019

  • Back, Heeyoung;Yi, Kyungock;Lee, Jusung;Kim, Jieung;Moon, Jeheon
    • Korean Journal of Applied Biomechanics / v.30 no.2 / pp.185-195 / 2020
  • Objective: The purpose of this study was to identify recent domestic and international biomechanics research trends regarding shoes and to suggest directions for future shoe research. Method: The Web of Science, Scopus, PubMed, Korea Education and Research Information Service, and Korean Citation Index were searched to identify trends across 64 domestic and international studies. The studies were classified into interaction with the human body, usability evaluation of functional shoes, and smart shoe development, and the following suggestions for future research directions were made. Conclusion: First, studies on the coordination of muscle activity, control of motion, and prevention of injury should be pursued by developing shoes from eco-friendly materials, supported by scientific evidence on physical aspects, materials, floor shapes, and friction. Second, studies on elite athletes in various sports are needed, based on functional shoes using new materials to improve performance along with coordination of muscle activity and prevention of injury. Third, since various information and energy production are possible in real time through human behavioral information, the application of Human Machine Interface (HMI) technology through shoe-sensor-human interaction should be explored.

Design Mobility Agent Module for Healthcare Application Service (헬스케어 응용 서비스를 위한 Mobility Agent 모듈 설계)

  • Nam, Jin-Woo;Chung, Yeong-Jee
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.2 / pp.378-384 / 2008
  • Sensor networks for healthcare application services mainly sense humans or moving objects. To support inter-node interaction arising from the movement of such sensing objects, dynamic function modification, dynamic self-configuration, and energy efficiency of nodes must be considered. In this paper, the Agilla model, which supports dynamic function modification through agent migration between nodes, and the LEACH protocol, which guarantees dynamic self-configuration and energy efficiency through hierarchical inter-node clustering, are analyzed. Based on the results of the analysis, Mobility Agent Middleware that supports dynamic function modification between nodes is designed, and the LEACH_Mobile protocol, which addresses the lack of node-mobility support in the existing LEACH protocol, is proposed. A routing module that supports the LEACH_Mobile protocol is also designed, along with an interface for integration with the Mobility Agent Middleware. Simulation results show that the LEACH_Mobile protocol clearly improves the data transfer rate of mobile nodes.
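The mobility extensions of LEACH_Mobile are not detailed in the abstract, but the baseline LEACH cluster-head election it builds on is well known. A minimal sketch of that baseline scheme (per-node eligibility bookkeeping across epochs is omitted):

```python
import random

def leach_threshold(p, r):
    """Classic LEACH election threshold T(n) for round r with desired
    cluster-head fraction p, applied to nodes that have not yet served
    as head in the current epoch of 1/p rounds."""
    return p / (1 - p * (r % int(1 / p)))

def elect_heads(nodes, p, r, rng):
    """Each eligible node independently becomes a cluster head with
    probability T(n); the remaining nodes then join the nearest head."""
    t = leach_threshold(p, r)
    return [n for n in nodes if rng.random() < t]
```

The threshold rises over an epoch, so every node eventually takes a turn as head, which is what spreads the energy load across the network.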

Interactive Mobile Augmented Reality System using Muscle Sensor and Image-based Localization System through Client-Server Communication

  • Lee, Sungjin;Baik, Davin;Choi, Sangyeon;Hwang, Sung Soo
    • Journal of the HCI Society of Korea / v.13 no.4 / pp.15-23 / 2018
  • Many games are played through controller operations, such as mouse and keyboard, rather than through the user's physical movement, which limits how much the user moves. This study addresses this limitation of traditional game systems by developing a motion-based system that provides users with a more realistic experience. The system recognizes the user's position in a given space and provides a mobile augmented reality system that interacts with virtual game characters. It uses augmented reality technology to make users feel as if the virtual characters exist in real space, and it designs a mobile game system using an armband controller to interact with the virtual characters.


Hand Motion Recognition Algorithm Using Skin Color and Center of Gravity Profile

  • Park, Youngmin
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.411-417 / 2021
  • The field that studies human-computer interaction is called HCI (Human-Computer Interaction), an academic field that studies how humans and computers communicate with each other and recognize information. This study concerns hand gesture recognition for human interaction; it examines the problems of existing recognition methods and proposes an algorithm to improve the recognition rate. The hand region is extracted, based on skin color information, from an image containing the shape of a human hand, and the center-of-gravity profile is calculated using principal component analysis. The recognition rate of hand gestures is increased by comparing the obtained information with predefined shapes. The existing center-of-gravity profile misrecognizes hand gestures when the hand is deformed by rotation; in this study, the point on the contour farthest from the center of gravity is taken as the starting point of the profile, yielding a more robust algorithm. No gloves or special markers attached to a sensor are used for hand gesture recognition, and no separate blue screen is installed. To resolve misrecognition, the feature vector at the nearest distance is found, and an appropriate threshold is obtained to distinguish between success and failure.
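As a hedged sketch of the rotation-robust profile described above (skin-color segmentation and contour extraction are assumed already done; only the profile step is shown):

```python
def cog_profile(contour):
    """Distance profile from the centroid (center of gravity) to each
    contour point, rotated so it starts at the farthest point. Starting
    at the farthest point makes the profile independent of where the
    contour trace begins, i.e. robust to in-plane hand rotation."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in contour]
    start = max(range(n), key=dists.__getitem__)
    return dists[start:] + dists[:start]
```

Recognition would then compare this profile against the predefined shape profiles and accept the nearest match only if its distance falls under the threshold.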