• Title/Abstract/Keyword: Camera-based Interaction

Search Results: 133

Adaptive Event Clustering for Personalized Photo Browsing (사진 사용 이력을 이용한 이벤트 클러스터링 알고리즘)

  • Kim, Kee-Eung;Park, Tae-Suh;Park, Min-Kyu;Lee, Yong-Beom;Kim, Yeun-Bae;Kim, Sang-Ryong
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.711-716 / 2006
  • Since the introduction of the digital camera to the mass market, the number of digital photos owned by an individual has been growing at an alarming rate. This naturally leads to difficulties in searching and browsing a personal digital photo archive. Traditional approaches typically involve content-based image retrieval using computer vision algorithms. However, due to the performance limitations of these algorithms, at least on casual digital photos taken by non-professional photographers, more recent approaches center on time-based clustering algorithms that analyze the shot times of photos. These algorithms are based on the insight that when photos are clustered by shot-time similarity, we obtain "event clusters" that help the user browse through her photo archive. It has also been reported that one remaining problem with the time-based approach is that people perceive events at different scales. In this paper, we present an adaptive time-based clustering algorithm that exploits the usage history of digital photos to infer the user's preference for event granularity. Experiments show significant improvements in clustering accuracy.

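The time-based clustering idea above can be sketched as a simple gap rule: a new event starts whenever the interval to the previous photo exceeds a threshold. The paper's contribution is adapting that granularity from usage history; the fixed `gap` below is only a hypothetical stand-in for the adaptive threshold:

```python
from datetime import datetime, timedelta

def cluster_by_time(shot_times, gap=timedelta(hours=6)):
    """Group photo shot times into event clusters: a new cluster
    starts whenever the gap to the previous photo exceeds `gap`."""
    clusters = []
    for t in sorted(shot_times):
        if clusters and t - clusters[-1][-1] <= gap:
            clusters[-1].append(t)   # same event as the previous photo
        else:
            clusters.append([t])     # gap too large: start a new event
    return clusters

# Three photos on one morning, two photos two days later.
photos = [datetime(2006, 2, 1, h) for h in (9, 10, 11)] + \
         [datetime(2006, 2, 3, 14), datetime(2006, 2, 3, 15)]
print(len(cluster_by_time(photos)))  # 2 event clusters
```

An adaptive version would tune `gap` per user, e.g. from how often the user splits or merges the proposed clusters.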

A method of structural adhesives inspection based on single camera (싱글 카메라 기반 구조용 접착제 검사 방법)

  • Yun, Sung-Jo;Seo, Kap-Ho;Park, Yong-Sik;Park, Jung-Woo;Park, Sung-Ho;Jeon, Kwang-Woo;Jeon, Jung-Su
    • Proceedings of the Korean Society of Computer Information Conference / 2014.07a / pp.51-52 / 2014
  • This paper proposes a method for inspecting non-uniform or uncoated regions of the structural adhesive that is used to increase the durability and safety of lightweight vehicle designs. By flagging areas where the structural adhesive has not been applied uniformly before the vehicle is assembled, the method can increase the overall rigidity of the vehicle, and such inspection is increasingly becoming mandatory in vehicle assembly. The method can be implemented simply with a machine-vision camera and a board-based setup.


Touchless User Interface for Real-Time Mobile Devices (실시간 비접촉 모바일 제어 기법)

  • Jung, Il-Lyong;Nikolay, Akatyev;Jang, Won-Dong;Kim, Chang-Su
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.2 / pp.435-440 / 2011
  • A touchless user interface system for real-time mobile devices is proposed in this work. The proposed system first recognizes the position of a pattern in the scene captured by the mobile camera in order to interact with various applications. It then traces the route of the pattern, estimating its motion with the mean-shift method. Based on this information, menus and keypads of the various applications are controlled. Unlike other user interface systems, the proposed system provides a new experience even to users of low-end mobile devices, without additional hardware. Simulation results demonstrate that the proposed algorithm provides better interaction performance than the conventional method, while achieving real-time user interaction on mobile devices.
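The mean-shift step this abstract relies on can be sketched in plain Python: the tracked window centre repeatedly moves to the weighted centroid of a score map until it stops moving. In the paper the scores would come from the camera image; the synthetic `grid` blob below is only illustrative:

```python
def mean_shift(weights, cx, cy, radius=2, iters=10):
    """Move the window centre (cx, cy) to the weighted centroid of
    `weights` (a 2-D list of non-negative scores) inside the window,
    repeating until convergence or `iters` steps."""
    for _ in range(iters):
        sx = sy = sw = 0.0
        for y in range(max(0, cy - radius), min(len(weights), cy + radius + 1)):
            for x in range(max(0, cx - radius), min(len(weights[0]), cx + radius + 1)):
                w = weights[y][x]
                sx += w * x
                sy += w * y
                sw += w
        if sw == 0:
            break                    # no evidence in the window
        nx, ny = round(sx / sw), round(sy / sw)
        if (nx, ny) == (cx, cy):
            break                    # converged
        cx, cy = nx, ny
    return cx, cy

# A bright blob around (7, 6) in an otherwise dark 10x10 score map.
grid = [[0.0] * 10 for _ in range(10)]
for y in range(5, 8):
    for x in range(6, 9):
        grid[y][x] = 1.0
print(mean_shift(grid, 4, 4))        # climbs onto the blob: (7, 6)
```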

Moving object detection for biped walking robot platform (이족로봇 플랫폼을 위한 동체탐지)

  • Kang, Tae-Koo;Hwang, Sang-Hyun;Kim, Dong-Won;Park, Gui-Tae
    • Proceedings of the KIEE Conference / 2006.10c / pp.570-572 / 2006
  • This paper discusses a method of moving object detection for biped robot walking. Most research on vision-based object detection has focused on algorithms for fixed cameras. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. Methods for moving object detection have been developed for task assignment and execution by biped robots as well as for human-robot interaction (HRI) systems, but these methods are not suitable for a biped walking robot. We therefore suggest an improved method suited to the biped walking robot platform. For carrying out certain tasks, an object detection system using a modified optical flow algorithm with a wireless vision camera is implemented on a biped walking robot.

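The simplest baseline against which such a detector is usually compared is frame differencing: flag pixels whose intensity changed between consecutive frames. This is only a crude stand-in for the paper's modified optical flow, and it ignores the ego-motion a walking robot adds, which is exactly the problem the paper addresses:

```python
def moving_mask(prev, curr, thresh=30):
    """Flag pixels whose intensity changed by more than `thresh`
    between two grayscale frames (lists of lists, values 0-255).
    Nonzero entries in the returned mask mark candidate motion."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10] * 4 for _ in range(3)]   # static dark background
curr = [row[:] for row in prev]
curr[1][2] = 200                      # an object moved into this pixel
mask = moving_mask(prev, curr)
print(sum(map(sum, mask)))            # 1 changed pixel
```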

Augmented Reality based Interactive Storyboard System (증강현실 기반의 인터랙티브 스토리보드 제작 시스템)

  • Park, Jun
    • Journal of the Korea Computer Graphics Society / v.13 no.2 / pp.17-22 / 2007
  • In the early stages of film or animation production, a storyboard is used to visually describe the outline of a story. Drawings or photographs, as well as text, are employed for character and item placement and camera pose. However, commercially available storyboard tools are mainly drawing and editing tools that provide no functionality for item placement and camera control. In this paper, an Augmented Reality based storyboard tool is presented, which provides an intuitive and easy-to-use interface for storyboard development. Using the presented tool, non-expert users may compose 3D scenes in their real environments through tangible building blocks, which are used to fetch the corresponding 3D models and their poses.


A Gaze Tracking based on the Head Pose in Computer Monitor (얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적)

  • 오승환;이희영
    • Proceedings of the IEEK Conference / 2002.06c / pp.227-230 / 2002
  • In this paper we concentrate on the overall gaze direction, based on head pose, for human-computer interaction. To decide the gaze direction of a user in an image, it is important to extract the facial features exactly. For this, we binarize the input image and search for the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric information of the face in the binarized image. To decide the head orientation, we create an imaginary plane on the lines made by the features of the real face and the pinhole of the camera; we call it the virtual facial plane. The position of the virtual facial plane is estimated from the facial features projected on the image plane. We find the gaze direction using the surface normal vector of the virtual facial plane. This study, using a popular PC camera, will contribute to the practical usage of gaze tracking technology.

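The surface normal of a plane through the two eyes and the mouth is a cross product of two in-plane vectors; once normalized, its direction approximates where the head is pointing. A minimal sketch with hypothetical 3D feature coordinates (the paper estimates these from the projected image features, which is not shown here):

```python
def cross(a, b):
    """Cross product of two 3-D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def face_normal(left_eye, right_eye, mouth):
    """Unit surface normal of the plane through the two eyes and
    the mouth: a proxy for the head's facing direction."""
    u = tuple(r - l for l, r in zip(left_eye, right_eye))  # eye-to-eye
    v = tuple(m - l for l, m in zip(left_eye, mouth))      # eye-to-mouth
    n = cross(u, v)
    mag = sum(c * c for c in n) ** 0.5
    return tuple(c / mag for c in n)

# Frontal face: eyes and mouth all at depth z = 0,
# so the normal points straight along the camera axis.
print(face_normal((-3, 0, 0), (3, 0, 0), (0, -4, 0)))  # (0.0, 0.0, -1.0)
```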

Augmented Reality Service Based on Object Pose Prediction Using PnP Algorithm

  • Kim, In-Seon;Jung, Tae-Won;Jung, Kye-Dong
    • International Journal of Advanced Culture Technology / v.9 no.4 / pp.295-301 / 2021
  • Digital media technology is gradually developing along with Fourth Industrial Revolution convergence technology and mobile devices. The combination of deep learning and augmented reality can provide more convenient and lively services through the interaction of 3D virtual images with the real world. We combine deep learning-based pose prediction with augmented reality technology. We predict the eight vertices of the bounding box of the object in the image. Using the predicted eight vertices (x, y), the eight vertices (x, y, z) of the 3D mesh, and the intrinsic parameters of the smartphone camera, we compute the external parameters of the camera through the PnP algorithm. We calculate the distance to the object and its degree of rotation using the external parameters and apply them to AR content. Our method provides services in a web environment, making it highly accessible to users and easy to maintain. As we provide augmented reality services using consumers' smartphone cameras, they can be applied to various business fields.
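Once a PnP solver (e.g. OpenCV's solvePnP) has returned the external parameters, the distance and rotation the abstract mentions follow directly from them. A sketch assuming the rotation has already been converted to a 3x3 matrix R (solvePnP itself returns a rotation vector):

```python
import math

def pose_to_ar_params(R, t):
    """Camera-to-object distance and overall rotation angle from
    external parameters: R a 3x3 rotation matrix (nested lists),
    t a translation vector. Angle via acos((trace(R) - 1) / 2)."""
    distance = math.sqrt(sum(c * c for c in t))
    trace = R[0][0] + R[1][1] + R[2][2]
    # Clamp for numerical safety before acos.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, (trace - 1) / 2))))
    return distance, angle

# Object rotated 90 degrees about the vertical axis, 5 units away.
R = [[0, 0, 1],
     [0, 1, 0],
     [-1, 0, 0]]
t = [0, 0, 5]
print(pose_to_ar_params(R, t))
```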

Human-Computer Natural User Interface Based on Hand Motion Detection and Tracking

  • Xu, Wenkai;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.501-507 / 2012
  • Human body motion is a non-verbal channel of interaction that can be used to connect the real world and the virtual world. In this paper, we present a study on a natural user interface (NUI) for human hand motion recognition using the RGB color information and depth information of the Kinect camera from Microsoft Corporation. To achieve this goal, hand tracking and gesture recognition have no major dependency on the work environment, lighting, or the user's skin color; libraries for natural interaction with the Kinect device, which provides RGB images of the environment and the depth map of the scene, were used. An improved Camshift tracking algorithm is used to track hand motion; the experimental results show that it performs better than the original Camshift algorithm, with higher stability and accuracy as well.
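Camshift climbs a weight map produced by histogram back-projection: each pixel is scored by how likely its hue is under the tracked hand's hue histogram. A minimal sketch of that scoring step, with a made-up hue image and histogram (the paper's improvement to Camshift itself is not reproduced here):

```python
def back_project(hue_img, hand_hist):
    """Histogram back-projection: score each pixel by the tracked
    hand's hue histogram. The resulting weight map is what the
    mean-shift step inside Camshift climbs."""
    return [[hand_hist.get(h, 0.0) for h in row] for row in hue_img]

hand_hist = {10: 0.9, 11: 0.8}        # skin hues dominate the hand model
frame = [[10, 90, 11],
         [90, 10, 90]]                # hue values per pixel
weights = back_project(frame, hand_hist)
print(weights)  # [[0.9, 0.0, 0.8], [0.0, 0.9, 0.0]]
```

Camshift then runs mean-shift on `weights` and additionally adapts the window size and orientation to the tracked blob each frame.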

Safe and Reliable Intelligent Wheelchair Robot with Human Robot Interaction

  • Hyuk, Moon-In;Hyun, Joung-Sang;Kwang, Kum-Young
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.120.1-120 / 2001
  • This paper proposes a prototype of a safe and reliable wheelchair robot with Human Robot Interaction (HRI). Since wheelchair users are usually handicapped, the wheelchair robot must guarantee the safety and reliability of its motion while considering the user's intention. A single color CCD camera is mounted for inputting the user's commands based on human-friendly gestures, and an ultrasonic sensor array is used for sensing the external environment. We use face and hand directional gestures as the user's commands. By combining the user's command with the sensed environment configuration, the planner of the wheelchair robot selects an optimal motion. We implement a prototype wheelchair robot, MR. HURI (Mobile Robot with Human Robot Interaction) ...


Camera-based Interaction for Handheld Virtual Reality (카메라의 상대적 추적을 사용한 핸드헬드 가상현실 인터랙션)

  • Hwang, Jane;Kim, Gerard Joung-Hyun;Kim, Nam-Gyu
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.619-625 / 2006
  • A handheld virtual reality system is a system that can be carried in one hand and provides a virtual environment through built-in multimodal sensors and multimodal displays. Such handheld VR systems generally offer only limited input means (e.g. buttons, a touch screen), so performing 3D interaction with them is not easy. In this study, we therefore propose, implement, and evaluate a method for performing 3D interaction in a handheld virtual environment using the camera that is built into most ordinary handheld devices.

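One common way to turn a built-in camera into a 3D input device is to estimate the device's relative motion from how tracked feature points shift between frames. Whether the paper uses exactly this scheme is not stated in the abstract, so the sketch below is only illustrative:

```python
def camera_motion(prev_pts, curr_pts):
    """Average 2-D displacement of tracked feature points between
    two frames, used as a proxy for the handheld device's motion
    (moving the device right shifts scene features left, etc.)."""
    n = len(prev_pts)
    dx = sum(c[0] - p[0] for p, c in zip(prev_pts, curr_pts)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_pts, curr_pts)) / n
    return dx, dy

prev_pts = [(0, 0), (10, 0), (0, 10)]
curr_pts = [(2, 1), (12, 1), (2, 11)]     # every feature shifted by (2, 1)
print(camera_motion(prev_pts, curr_pts))  # (2.0, 1.0)
```

The estimated displacement can then be mapped to translation or rotation of the viewpoint in the handheld virtual environment.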