• Title/Summary/Keyword: 시각센서

Comparison of the Vertical Data between Eulerian and Lagrangian Method (오일러와 라그랑주 관측방식의 연직 자료 비교)

  • Hyeok-Jin Bae;Byung Hyuk Kwon;Sang Jin Kim;Kyung-Hun Lee;Geon-Myeong Lee;Yu-Jin Kim;Ji-Woo Seo;Yu-Jung Koo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1009-1014
    • /
    • 2023
  • Comprehensive observations with the Eulerian and the Lagrangian method were performed in order to obtain observation data at high spatial and temporal resolution for the complex environment of a new city. The two radiosondes, which measure meteorological parameters by the Lagrangian method, produced air pressure, wind speed, and wind direction data that were generally consistent with each other even when the observation points or times differed. The temperature measured by the sensor exposed to the air during the day was relatively high with increasing altitude due to the influence of solar radiation. A temporal difference in wind direction and speed was found when comparing the Eulerian wind profiler data with the radiosonde data. When the wind field is horizontally inhomogeneous, this result implies the need to consider the advection component when comparing the data of the two observation methods. In this study, a method of using observation data at different times for each altitude section, depending on the observation period of the Eulerian instrument, is proposed to compare the data of the two observation methods effectively.
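
The altitude-section matching idea described above can be sketched as follows; the data layout and variable names are illustrative assumptions, not the paper's implementation. For each radiosonde sample, the Eulerian profiler scan closest in time to the sonde's passage through that altitude is selected for comparison.

```python
def match_by_altitude(sonde_samples, profiler_scan_times):
    """Pair each radiosonde sample with the nearest-in-time profiler scan.

    sonde_samples: list of (time_s, altitude_m, wind_ms) from the ascent.
    profiler_scan_times: times (s) at which the wind profiler completed a scan.
    Returns (altitude_m, wind_ms, matched_scan_time) triples.
    """
    matches = []
    for t, z, wind in sonde_samples:
        # the sonde reaches each altitude at a different time, so each
        # altitude section may be compared against a different profiler scan
        nearest = min(profiler_scan_times, key=lambda scan_t: abs(scan_t - t))
        matches.append((z, wind, nearest))
    return matches
```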

LiDAR Data Segmentation Using Aerial Images for Building Modeling (항공영상에 의한 LiDAR 데이터 분할에 기반한 건물 모델링)

  • Lee, Jin-Hyung;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.1
    • /
    • pp.47-56
    • /
    • 2010
  • The use of airborne LiDAR data obtained by airborne laser scanners has increased in spatial-information fields such as building modeling. LiDAR data consist of irregularly distributed 3D coordinates and lack visual and semantic information, so LiDAR data processing is complicated. This study suggested a method of segmenting LiDAR data using roof surface patches derived from aerial images. Each segmented patch was modeled by analyzing the geometric characteristics of the LiDAR data. Optimal functions could be determined from the segmented data to fit various roof-surface shapes such as flat and slanted planes and dome and arch types. However, satisfactory segmentation results were occasionally not obtained due to shadows and tonal variation in the images. Therefore, methods to remove unnecessary edges that result in incorrect segmentation are required.
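
As a minimal sketch of the per-patch modeling step, assuming a planar patch (the study also fits domes and arches), a roof plane z = ax + by + c can be fitted to the segmented LiDAR points by least squares, with the residual indicating how well the planar model fits:

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to an N x 3 list of (x, y, z) LiDAR points.

    Returns the coefficients (a, b, c) and the RMS residual; a large
    residual suggests the patch is not well modeled as a plane.
    """
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coef
    return coef, float(np.sqrt(np.mean(residuals ** 2)))
```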

Development of an EMG-based Wireless and Wearable Computer Interface (근전도기반의 무선 착용형 컴퓨터 인터페이스 개발)

  • Han, Hyo-Nyoung;Choi, Chang-Mok;Lee, Yun-Joo;Ha, Sung-Do;Kim, Jung
    • 한국HCI학회:학술대회논문집
    • /
    • 2008.02a
    • /
    • pp.240-244
    • /
    • 2008
  • This paper presents an EMG-based wireless and wearable computer interface. The wearable device contains four-channel EMG sensors and acquires EMG signals using signal processing. The obtained signals are transmitted to a host computer through wireless communication. EMG signals induced by volitional movements are acquired from four sites on the lower limb to extract the user's intention, and six classes of wrist movements are discriminated by employing an artificial neural network (ANN). This interface could provide an aid for people with limb disabilities to directly access computers and network environments without conventional computer interfaces such as a keyboard and mouse.
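
The feature stage of such a pipeline might look like the following sketch; the RMS feature and windowing shown here are common EMG practice and an assumption on our part, not details stated in the abstract. The resulting per-channel features would then be fed to the ANN classifier.

```python
import numpy as np

def rms_features(window):
    """Root-mean-square amplitude per channel for one analysis window.

    window: samples x 4 array covering the four EMG channels.
    Returns a length-4 feature vector suitable as ANN input.
    """
    w = np.asarray(window, dtype=float)
    # RMS summarizes muscle activation intensity within the window
    return np.sqrt(np.mean(w ** 2, axis=0))
```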

Provision of Effective Spatial Interaction for Users in Advanced Collaborative Environment (지능형 협업 환경에서 사용자를 위한 효과적인 공간 인터랙션 제공)

  • Ko, Su-Jin;Kim, Jong-Won
    • 한국HCI학회:학술대회논문집
    • /
    • 2009.02a
    • /
    • pp.677-684
    • /
    • 2009
  • With various sensor network and ubiquitous technologies, we can extend the interaction area from the virtual domain to the physical space domain. Spatial interaction differs from traditional interaction in that the latter is mainly processed by direct interaction with a computer that is the target machine or provides the interaction tools, whereas spatial interaction is performed indirectly between users with smart interaction tools and many distributed components of the space. This interaction thus gives users methods to control all manageable space components by registering and recognizing objects. This paper provides an effective spatial interaction method with a template-based task mapping algorithm, sorted by historical interaction data, to support the user's intended task. We then analyze how much the system performance is improved by the task mapping algorithm and conclude with an introduction of a GUI method to visualize the results of spatial interaction.
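
The history-sorted template matching could be sketched as below; the flat (interaction, task) representation is a hypothetical simplification of the paper's templates, used only to show the ranking idea.

```python
from collections import Counter

def rank_tasks(history, interaction, templates):
    """Rank candidate tasks for a recognized interaction by past frequency.

    history: list of (interaction, chosen_task) pairs from prior sessions.
    templates: mapping from interaction to its candidate task list.
    Returns the candidates sorted so the historically most frequent comes first.
    """
    counts = Counter(task for i, task in history if i == interaction)
    # Counter returns 0 for unseen tasks, so unused candidates sort last
    return sorted(templates.get(interaction, []), key=lambda t: -counts[t])
```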

Automatic measurement of voluntary reaction time after audio-visual stimulation and generation of synchronization signals for the analysis of evoked EEG (시청각자극후의 피험자의 자의적 반응시간의 자동계측과 유발뇌파분석을 위한 동기신호의 생성)

  • 김철승;엄광문;손진훈
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2003.05a
    • /
    • pp.36-40
    • /
    • 2003
  • Recently, research has been actively conducted on providing new interfaces, such as an EEG-based BCI (Brain Computer Interface), to patients who have difficulty expressing themselves due to illness. As basic research for BCI, the measurement and analysis of EEG evoked by specific stimuli play an important role in the design of EEG patterns and interfaces for BCI. The purpose of this study is to develop a system that measures a subject's reaction time after audio-visual stimulation in a form that can be linked with biosignal measurement systems such as EEG. The proposed system is functionally divided into a stimulus signal generator, a reaction time measurement unit, an evoked EEG measurement unit, and a synchronization signal generator. The stimulus signal generator, which produces the stimulus signals used in the experiment, was implemented with Flash. The reaction time measurement unit, which measures the time from the request to select an answer until the subject's response, was implemented with a microcomputer (80C31). The evoked EEG measurement unit used commercial hardware and software as-is. The synchronization signal generator, which produces the signals that synchronize the whole system, consists of a brightness signal on the screen synchronized with question presentation and answer requests, together with a photosensor that detects it. With the proposed method, a system suited to the experimenter's intention can be designed by adding only specific modules (a reaction time measurement device and a synchronization signal generator) to existing evoked potential measurement and stimulation systems, which is expected to accelerate research requiring evoked EEG and reaction time measurement.
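
The reaction-time bookkeeping can be sketched as follows; the paper implements this on an 80C31 microcomputer, so the plain numeric timestamps here are only a stand-in for its hardware timer.

```python
def reaction_times(stimulus_ts, response_ts):
    """Pair each stimulus time with the first later response, returning delays.

    stimulus_ts: sorted times at which an answer was requested.
    response_ts: sorted times at which the subject responded.
    """
    delays, r = [], 0
    for s in stimulus_ts:
        # skip any stray responses that arrived before this stimulus
        while r < len(response_ts) and response_ts[r] < s:
            r += 1
        if r < len(response_ts):
            delays.append(response_ts[r] - s)
            r += 1
    return delays
```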

A Study on the Real-Time Vision Control Method for Manipulator's Position Control in the Uncertain Circumstance (불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구)

  • Jang, W.-S.;Kim, K.-S.;Shin, K.-S.;Joo, C.;Yoon, H.-K.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.12
    • /
    • pp.87-98
    • /
    • 1999
  • This study is concentrated on the development of a real-time estimation model and vision control method as well as the experimental test. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space location of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done based on an estimation model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation as well as uncertainty of the manipulator. This vision control method is robust and reliable, and it overcomes the difficulties of conventional research, such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, evidence of the ability of the real-time vision control method for manipulator position control is provided by performing thin-rod placement in space with a two-cue test model, completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.
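
A toy version of the real-time estimation idea is sketched below, assuming a plain linear camera-space model u = q P that is refitted as each new (joint, image) sample arrives; the paper itself generalizes the manipulator kinematics rather than using a simple linear map, so this is an illustration only.

```python
import numpy as np

class CameraSpaceEstimator:
    """Refit a linear joint-to-camera-space map from accumulated samples."""

    def __init__(self):
        self.samples_q, self.samples_u = [], []

    def update(self, q, u):
        # accumulate (joint coordinates, camera-space cue location) pairs
        # and refit the map P by least squares each time one arrives
        self.samples_q.append(q)
        self.samples_u.append(u)
        self.P, *_ = np.linalg.lstsq(np.asarray(self.samples_q, float),
                                     np.asarray(self.samples_u, float),
                                     rcond=None)
        return self.P

    def predict(self, q):
        """Predicted camera-space cue location for joint coordinates q."""
        return np.asarray(q, float) @ self.P
```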

Traded control of telerobot system with an autonomous visual sensor feedback (자율적인 시각 센서 피드백 기능을 갖는 원격 로보트 시스템교환 제어)

  • 김주곤;차동혁;김승호
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1996.10b
    • /
    • pp.940-943
    • /
    • 1996
  • In teleoperation, a human operator generally controls the slave arm while watching a monitor screen obtained from a camera installed in the working environment. Because only a 2-D image can be seen on the monitor, the human operator lacks depth information and cannot work with high accuracy. In this paper, we propose a traded control method using a visual sensor to solve this problem; a teleoperation system can be controlled with precision when the proposed algorithm is used. Not only the human operator's command but also an autonomous visual-sensor feedback command is given to the slave arm in order to make the current image features coincide with the target image features. When the slave arm is far from the target position, the human operator can readily perceive the difference between the desired and current image features, but the calculated visual sensor command has large errors; when the slave arm is near the target position, the situation is reversed. With this visual sensor feedback, the human does not need to correct the fine differences between the desired and current image features, and the proposed method can work with higher accuracy than methods without sensor feedback. The effectiveness of the proposed control method is verified through a series of experiments.
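
A minimal sketch of the traded-control blend follows; the linear weighting law and the threshold are assumptions for illustration, not the paper's control law. Far from the target the human command dominates, and near convergence the visual-feedback command takes over.

```python
def traded_command(human_cmd, visual_cmd, feature_error, threshold):
    """Blend human and visual-feedback commands by image-feature error size.

    feature_error: current distance between target and current image features.
    threshold: error above which the human command is trusted entirely.
    """
    # weight 1.0 when far from the target, tapering to 0.0 as features converge
    w = min(feature_error / threshold, 1.0)
    return [w * h + (1.0 - w) * v for h, v in zip(human_cmd, visual_cmd)]
```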

Image Retrieval using Multiple Features on Mobile Platform (모바일 플랫폼에서 다중 특징 기반의 이미지 검색)

  • Lee, Yong-Hwan;Cho, Han-Jin;Lee, June-Hwan
    • Journal of Digital Convergence
    • /
    • v.12 no.6
    • /
    • pp.237-243
    • /
    • 2014
  • In this paper, we propose a mobile image retrieval system that utilizes the mobile device's sensor information and runs in a variety of environments, and implement the system on the Android platform. The proposed system introduces a new image descriptor that combines visual features with the EXIF attributes of target JPEG images, together with an image matching algorithm optimized for mobile environments. Experiments were performed on the Android platform, and the results revealed that the proposed algorithm exhibits significantly improved results on a large image database.
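
One way such a combined descriptor could score similarity is sketched below; the specific EXIF field, histogram distance, and weight are hypothetical choices, since the abstract does not name them.

```python
def combined_distance(hist_a, hist_b, exif_a, exif_b, alpha=0.7):
    """Weighted sum of a visual distance and an EXIF-attribute distance.

    hist_a, hist_b: normalized color histograms (equal length).
    exif_a, exif_b: dicts of EXIF attributes; capture hour is used here
    as one illustrative attribute, normalized to [0, 1].
    """
    visual = sum(abs(x - y) for x, y in zip(hist_a, hist_b))  # L1 distance
    exif = abs(exif_a.get("hour", 0) - exif_b.get("hour", 0)) / 24.0
    return alpha * visual + (1.0 - alpha) * exif
```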

Intelligent Navigation of a Mobile Robot based on Intention Inference of Obstacles (장애물의 의도 추론에 기초한 이동 로봇의 지능적 주행)

  • Kim, Seong-Hun;Byeon, Jeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.39 no.2
    • /
    • pp.21-34
    • /
    • 2002
  • Different from ordinary mobile robots used in a well-structured industrial workspace, a guide mobile robot for the visually impaired should be designed in consideration of moving obstacles, which mostly means pedestrians moving intentionally. Thus, the navigation of the guide robot can be facilitated if the intention of each detected obstacle is known in advance. In this paper, we propose an inference method for understanding the intention of a detected obstacle. In order to represent the environment sensed with ultrasonic sensors, a fuzzy grid-type map is first constructed. We then detect the obstacle and infer its intention for collision avoidance using the CLA (Centroid of Largest Area) point of the fuzzy grid-type map. To verify the proposed method, several experiments are performed.
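
The CLA (Centroid of Largest Area) idea can be sketched on a binary stand-in for the fuzzy grid-type map: find the largest connected free region and return its centroid as the steering target. The binarization is a simplification of the fuzzy map, made here for illustration.

```python
from collections import deque

def cla_point(grid):
    """Centroid of the largest 4-connected free region in a binary grid.

    grid: list of rows, 1 = free cell, 0 = occupied cell.
    """
    rows, cols = len(grid), len(grid[0])
    seen, best = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                # flood-fill one free region by breadth-first search
                comp, q = [], deque([(r, c)])
                seen.add((r, c))
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    ys = sum(p[0] for p in best) / len(best)
    xs = sum(p[1] for p in best) / len(best)
    return ys, xs
```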

A Study on Human-Friendly Guide Robot (인간친화적인 안내 로봇 연구)

  • Choi, Woo-Kyung;Kim, Seong-Joo;Ha, Sang-Hyung;Jeon, Hong-Tae
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.43 no.6 s.312
    • /
    • pp.9-15
    • /
    • 2006
  • Recent developments in robotics show that service robots, which interact with humans and provide specific services, have been researched continually. In particular, robots for human welfare have become the center of public concern. At present, the guide robot is a priority area of welfare robotics; it helps blind users keep to a safe path when walking outdoors. In this paper, the guide robot provides not only collision avoidance but also the best walking direction and velocity to blind users while recognizing environmental information from various kinds of sensors. In addition, it is able to provide the safest path planning on behalf of blind users.
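
A toy illustration of choosing walking direction and speed from range sensors follows; the winner-take-all rule and the thresholds are assumptions for illustration, not the paper's method.

```python
def guide_step(ranges_m, max_speed=1.2, safe_dist=2.0):
    """Pick the sensor direction with the most clearance and scale speed by it.

    ranges_m: range readings (meters), one per candidate heading.
    Returns (chosen_heading_index, walking_speed_m_s).
    """
    best = max(range(len(ranges_m)), key=lambda i: ranges_m[i])
    # walk at full speed only when the clearance exceeds the safe distance
    speed = max_speed * min(ranges_m[best] / safe_dist, 1.0)
    return best, speed
```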