• Title/Summary/Keyword: Vision-based interface

PROTOTYPE AUTOMATIC SYSTEM FOR CONSTRUCTING 3D INTERIOR AND EXTERIOR IMAGE OF BIOLOGICAL OBJECTS

  • Park, T. H.;H. Hwang;Kim, C. S.
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2000.11b
    • /
    • pp.318-324
    • /
    • 2000
  • Ultrasonic and magnetic resonance imaging systems are used to visualize the interior of biological objects. These nondestructive methods have many advantages, but they are too expensive, do not give exact color information, and may miss some details. If it is acceptable to destroy some biological objects to obtain interior and exterior information, constructing a 3D image from a series of sliced sectional images gives more useful information at relatively low cost. In this paper, a PC-based automatic 3D model generator was developed. The system was composed of three modules. The first is the object handling and image acquisition module, which feeds and slices objects sequentially, keeps the embedding paraffin cool so that it stays solid, and captures the sectional images consecutively. The second is the system control and interface module, which controls the actuators for feeding, slicing, and image capturing. The last is the image processing and visualization module, which processes the series of acquired sectional images and generates a 3D graphic model. The handling module consists of a gripper, which grasps and feeds the object, and a cutting device, which cuts the object by moving a cutting edge forward and backward. The sliced sectional images were acquired and saved as bitmap files. After each section was segmented from the background paraffin, these 2D sectional image files were used to obtain the volumetric information and build the 3D model. Once the 3D model was constructed on the computer, the user could manipulate it with transformations such as translation, rotation, and scaling, including arbitrary sectional views.

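As a rough illustration of the reconstruction step described in this abstract, the sketch below stacks a series of sectional bitmaps into a voxel volume after masking out the paraffin background. The file pattern, threshold, and the assumption that paraffin appears near-white are illustrative choices, not details taken from the paper.

```python
# Minimal sketch: build a voxel volume from sliced sectional bitmaps (assumed file names).
import glob
import numpy as np
from PIL import Image

def load_volume(pattern="slice_*.bmp", paraffin_threshold=240):
    """Stack 2D sectional bitmaps into a 3D volume, masking out bright paraffin."""
    slices = []
    for path in sorted(glob.glob(pattern)):
        img = np.asarray(Image.open(path).convert("L"), dtype=np.uint8)
        mask = img < paraffin_threshold        # assume the paraffin shows up near-white
        slices.append(np.where(mask, img, 0))  # zero out the background paraffin
    return np.stack(slices, axis=0)            # shape: (num_slices, height, width)

volume = load_volume()
# An arbitrary sectional view is then just an index into the array,
# e.g. volume[:, 100, :] gives a vertical cut through the reconstructed object.
```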

Development of Multi-functional Tele-operative Modular Robotic System For Watermelon Cultivation in Greenhouse

  • H. Hwang;Kim, C. S.;Park, D. Y.
    • Journal of Biosystems Engineering
    • /
    • v.28 no.6
    • /
    • pp.517-524
    • /
    • 2003
  • There have been worldwide research and development efforts to automate various bio-production processes, and those efforts will expand, with priority given to tasks that require intensive labor, produce high value-added products, or take place in hostile environments. In bio-production, the versatility and robustness of automated systems have been major bottlenecks, along with economic efficiency. This paper introduces a new concept of automation based on tele-operation, which offers a way to overcome the inherent difficulties in automating bio-production processes: the operator (farmer), the computer, and the automatic machinery share roles so that each contributes its strengths to accomplish the given task. Among the tasks of greenhouse watermelon cultivation, pruning, watering, pesticide application, and harvest with loading were chosen based on their labor intensiveness and functional similarity. The developed system was composed of five major hardware modules: a wireless remote monitoring and task control module, a wireless remote image acquisition and data transmission module, a gantry system equipped with a 4-d.o.f. Cartesian robotic manipulator, exchangeable modular end-effectors, and a guided watermelon loading and storage module. The system was operated through a graphic user interface on a touch-screen monitor, with wireless data communication among operator, computer, and machine. The proposed system showed a practical and feasible way of automation for this volatile bio-production process.
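The operator/computer/machine role sharing described in this abstract can be caricatured as a small task dispatcher: the operator picks a task on the touch screen, and the computer maps it to a modular end-effector and commands the gantry manipulator. The sketch below is purely illustrative; the module names and the `Manipulator` stub are assumptions, not an API from the paper.

```python
# Illustrative sketch of tele-operative task dispatch; names are hypothetical.
class Manipulator:
    def attach(self, module: str) -> None:
        print(f"attaching end-effector: {module}")

    def execute(self, task: str) -> None:
        print(f"executing automated motions for: {task}")

TASK_TO_MODULE = {
    "pruning": "pruning_end_effector",
    "watering": "watering_end_effector",
    "pesticide": "spraying_end_effector",
    "harvest": "harvest_and_loading_end_effector",
}

def dispatch_task(task: str, arm: Manipulator) -> None:
    """Operator selects a task; the computer picks the matching modular end-effector
    and commands the gantry manipulator to run the automated part of the task."""
    arm.attach(TASK_TO_MODULE[task])
    arm.execute(task)

dispatch_task("harvest", Manipulator())
```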

A Study on a Digital Mirror System Offering Different Information by Distance (사용자와의 거리에 따라 다른 형태의 정보를 제공하는 디지털 거울 연구 - 사용자 니즈 분석을 중심으로 -)

  • Park, Ji-Eun;Lee, Moo-Heon;Hahm, Won-Sik;Soh, Yeon-Jung;Choi, Hea-Ju;Jung, Ji-Hong;Hahn, Min-Soo
    • Journal of the HCI Society of Korea
    • /
    • v.1 no.2
    • /
    • pp.43-50
    • /
    • 2006
  • A mirror is a familiar tool; people have seen themselves through it ever since it was created. As digital technology evolves, many approaches to digital mirrors, which reflect not only light but also information, have been studied. Traditional mirrors on the wall need no special control to give their automatic visual feedback of reflected light. In contrast, digital mirrors can actively provide the user with more information than traditional ones. In this paper, we propose an active digital mirror system whose functions change according to the user-mirror distance. First, we investigated users' behaviors in front of mirrors and categorized the interactions by user-mirror distance. Based on this result, we designed the user interface of the mirror and developed a prototype with three recognition modules: a distance measuring module using infrared sensor arrays, a user recognition module based on computer vision techniques, and a control perception module using an infrared sensor grid. In addition, next steps for improving the user-centered digital mirror system and the possibility of developing a mirror-shaped computer system are suggested.

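The distance-dependent behaviour described in this abstract boils down to mapping a measured user-mirror distance onto a display mode. The sketch below is a minimal illustration; the distance bands and mode names are assumptions, not the authors' categories.

```python
# Sketch: choose what the mirror shows based on the measured user-mirror distance.
def select_mirror_mode(distance_m: float) -> str:
    if distance_m < 0.5:
        return "detail"       # close up: e.g. grooming-level feedback
    elif distance_m < 1.5:
        return "mirror+info"  # mid range: reflection plus personal information
    else:
        return "ambient"      # far away: glanceable, large-format information

for d in (0.3, 1.0, 2.5):
    print(d, select_mirror_mode(d))
```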

The Design and Implementation of Object-based Bioimage Matching on a Mobile Device (모바일 장치기반의 바이오 객체 이미지 매칭 시스템 설계 및 구현)

  • Park, Chanil;Moon, Seung-jin
    • Journal of Internet Computing and Services
    • /
    • v.20 no.6
    • /
    • pp.1-10
    • /
    • 2019
  • Object-based image matching algorithms have been widely used in image processing and computer vision. A variety of applications based on image matching have recently been developed for object recognition, 3D modeling, video tracking, and biomedical informatics. One prominent example of image matching features is the Scale Invariant Feature Transform (SIFT). However, many applications using the SIFT algorithm have been implemented on a stand-alone basis rather than with a client-server architecture. In this paper, we implement a client-server system that uses the SIFT algorithm to identify and match objects in biomedical images and to provide useful information to the user on a recently released mobile platform. The major methodological contribution of this work is leveraging the convenient user interface and ubiquitous Internet connection of mobile devices for interactive delineation, segmentation, representation, matching, and retrieval of biomedical images. With these technologies, the paper showcases examples of reliable image matching across different views of an object in semantic image search applications for biomedical informatics.
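Since the abstract names SIFT as the matching feature, the snippet below shows a common way such matching is done with OpenCV (SIFT keypoints plus Lowe's ratio test). The file names are placeholders and the client-server transport is omitted; this is a generic sketch, not the paper's implementation.

```python
# Sketch: SIFT keypoint matching with a ratio test (requires OpenCV >= 4.4).
import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)      # placeholder query image
img2 = cv2.imread("database.png", cv2.IMREAD_GRAYSCALE)   # placeholder database image

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]                 # keep unambiguous matches
print(f"{len(good)} putative matches")
```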

Design and Implementation of a Language Identification System for Handwriting Input Data (필기 입력데이터에 대한 언어식별 시스템의 설계 및 구현)

  • Lim, Chae-Gyun;Kim, Kyu-Ho;Lee, Ki-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.1
    • /
    • pp.63-68
    • /
    • 2010
  • Recently, to accelerate the ubiquitous computing era, input interfaces for mobile devices have been actively researched. In addition to existing interfaces such as the keyboard and mouse, other modalities including handwriting, voice, vision, and touch are being studied as new interfaces. Especially for small mobile devices, there is an increasing need for an efficient input interface despite the small screen, because installing additional input devices is strictly limited by size. Previous studies on handwriting recognition have generally been based either on two-dimensional images or on algorithms that identify handwritten data represented as vectors, and they have focused only on improving the accuracy of the recognition algorithms. A remaining problem is that when handwriting is actually entered, the user must first select the character class (e.g., upper- or lower-case English, Hangul, numbers). To solve this problem, the present study presents a system that distinguishes languages by analyzing the shape of the entered handwritten characters. The proposed technique treats the handwritten data as sets of vector units; by analyzing the correlation and directivity of the vector units, a more efficient language identification system is obtained.
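One simple way to capture the "directivity of vector units" idea in this abstract is a direction histogram over consecutive pen points, which a classifier could then use to separate character classes. The sketch below shows only that quantization step; the actual correlation and directivity features and thresholds of the paper are not reproduced.

```python
# Sketch: quantize stroke directions into 8 bins and build a normalized histogram.
import math
from collections import Counter

def direction_histogram(points, bins=8):
    """points: [(x, y), ...] sampled along a handwritten stroke."""
    hist = Counter()
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * bins) % bins] += 1
    total = sum(hist.values()) or 1
    return [hist[b] / total for b in range(bins)]

# Example stroke: a roughly horizontal line, dominated by the 0-radian bin.
print(direction_histogram([(0, 0), (2, 0), (4, 1), (6, 1)]))
```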

Implement of Hand Gesture Interface using Ratio and Size Variation of Gesture Clipping Region (제스쳐 클리핑 영역 비율과 크기 변화를 이용한 손-동작 인터페이스 구현)

  • Choi, Chang-Yur;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.121-127
    • /
    • 2013
  • A vision-based hand-gesture interface for substituting a pointing device is proposed in this paper; it uses the ratio and size variation of the gesture region. The proposed method uses the skin hue and saturation of the hand region in the HSI color model to extract the hand region effectively. This removes non-hand regions and reduces the noise caused by the light source. Also, because the method detects not a static hand shape but the ratio and size variation of hand movement in the clipped hand region in real time, the computation is reduced and a faster response is guaranteed. To evaluate the performance of the proposed method, it was applied as a pointing device to a computerized self visual acuity testing system. As a result, the proposed method showed an average gesture recognition rate of 86% and a coordinate-movement recognition rate of 87%.
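The skin-colour clipping and region-size measurement described in this abstract can be sketched as below. Note that OpenCV exposes HSV rather than HSI, and the hue/saturation bounds here are common illustrative values, not the thresholds used in the paper.

```python
# Sketch: skin-colour clipping and clipped-region size ratio for a single frame.
import cv2
import numpy as np

def hand_region_ratio(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # Size of the clipped hand region relative to the whole frame;
    # tracking how this ratio changes frame to frame drives the pointer.
    return (w * h) / float(frame_bgr.shape[0] * frame_bgr.shape[1])
```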

Adaptive Event Clustering for Personalized Photo Browsing (사진 사용 이력을 이용한 이벤트 클러스터링 알고리즘)

  • Kim, Kee-Eung;Park, Tae-Suh;Park, Min-Kyu;Lee, Yong-Beom;Kim, Yeun-Bae;Kim, Sang-Ryong
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.711-716
    • /
    • 2006
  • Since the introduction of the digital camera to the mass market, the number of digital photos owned by an individual has been growing at an alarming rate. This naturally leads to difficulties in searching and browsing a personal digital photo archive. Traditional approaches typically involve content-based image retrieval using computer vision algorithms. However, due to the performance limitations of these algorithms, at least on casual photos taken by non-professional photographers, more recent approaches center on time-based clustering algorithms that analyze the shot times of photos. These time-based clustering algorithms rely on the insight that, when photos are clustered by shot-time similarity, the resulting "event clusters" help the user browse through her photo archive. It has also been reported that one remaining problem with the time-based approach is that people perceive events at different scales. In this paper, we present an adaptive time-based clustering algorithm that exploits the usage history of digital photos in order to infer the user's preferred event granularity. Experiments show significant improvements in clustering accuracy.

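The baseline behaviour this abstract builds on, plain time-based event clustering, groups photos whose shot times fall within a gap threshold. The sketch below shows only that baseline; the adaptive part of the paper (learning the granularity from usage history) is not reproduced, and the two-hour gap is an arbitrary illustrative value.

```python
# Sketch: cluster photos into events by shot-time gaps.
from datetime import datetime, timedelta

def cluster_by_time(shot_times, gap=timedelta(hours=2)):
    """shot_times: datetimes of the photos; returns a list of event clusters."""
    clusters = []
    for t in sorted(shot_times):
        if clusters and t - clusters[-1][-1] <= gap:
            clusters[-1].append(t)   # close enough in time: same event
        else:
            clusters.append([t])     # large gap: start a new event
    return clusters

times = [datetime(2006, 2, 1, h) for h in (9, 9, 10, 15, 16)]
print([len(c) for c in cluster_by_time(times)])  # -> [3, 2]
```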

A Miniature Humanoid Robot That Can Play Soccer

  • Lim, Seon-Ho;Cho, Jeong-San;Sung, Young-Whee;Yi, Soo-Yeong
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.628-632
    • /
    • 2003
  • An intelligent miniature humanoid robot system is designed and implemented as a platform for research on walking algorithms. The robot system consists of a mechanical robot body, a control system, a sensor system, and a human interface system. The robot has 6 DOFs per leg, 3 DOFs per arm, and 2 DOFs for the neck, giving 20 DOFs in total for dexterous motion. For the control system, a supervisory controller running on a remote host computer plans high-level robot actions based on the vision sensor data, a main controller implemented with a DSP chip generates walking trajectories for the commanded action, and an auxiliary controller implemented with an FPGA chip drives the 20 actuators. The robot has three types of sensors: a two-axis acceleration sensor and eight force-sensing resistors for acquiring information on the robot's walking status, and a color CCD camera for acquiring information on the surroundings. As an example of an intelligent robot action, experiments on playing soccer are performed.

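The three-tier control hierarchy in this abstract (supervisory planner on the host, main controller generating walking trajectories, auxiliary controller driving the 20 actuators) can be caricatured as three cooperating layers. The sketch below is an assumption-laden illustration of that structure, not the authors' firmware or its API.

```python
# Illustrative sketch of the layered control architecture; names are hypothetical.
class SupervisoryController:
    def plan(self, vision_data: dict) -> str:
        """Host-side planner: decide a high-level action from vision data."""
        return "approach_ball" if vision_data.get("ball_visible") else "search"

class MainController:
    def generate_trajectory(self, action: str) -> list:
        """DSP-side role: turn the action into joint targets for the 20 joints."""
        return [{"joint": j, "angle": 0.0} for j in range(20)]

class AuxiliaryController:
    def drive(self, targets: list) -> None:
        """FPGA-side role: command the individual actuators."""
        print(f"driving {len(targets)} actuators")

vision = {"ball_visible": True}
action = SupervisoryController().plan(vision)
AuxiliaryController().drive(MainController().generate_trajectory(action))
```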

The Hand Region Acquisition System for Gesture-based Interface (제스처 기반 인터페이스를 위한 손영역 획득 시스템)

  • 양선옥;고일주;최형일
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.4
    • /
    • pp.43-52
    • /
    • 1998
  • We extract a hand region by using color information, an important feature that human vision uses to distinguish objects. Because pixel values in images change with luminance and lighting source, it is difficult to extract a hand region exactly without prior knowledge. We generate a hand skin model at a learning stage and extract the hand region from images using the model. We also use a Kalman filter to account for changes of the pixel values in the hand skin model; the Kalman filter additionally restricts the search area for extracting the hand region in the next frame. The validity of the proposed method is demonstrated by implementing the hand-region acquisition module.

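The search-window restriction described in this abstract is a standard use of a Kalman filter on the hand centroid: predict where the hand should appear, then correct with the new measurement. The sketch below uses a constant-velocity model with illustrative noise values; these are assumptions, not the paper's parameters or its skin-model update.

```python
# Sketch: Kalman tracking of the hand centroid to restrict the next search window.
import numpy as np
import cv2

kf = cv2.KalmanFilter(4, 2)  # state: (x, y, vx, vy), measurement: (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def step(measured_xy):
    """Per frame: predict the hand centroid, then correct with the measurement."""
    predicted = kf.predict()  # centre of the restricted search window for this frame
    kf.correct(np.array(measured_xy, dtype=np.float32).reshape(2, 1))
    return float(predicted[0, 0]), float(predicted[1, 0])

print(step((120.0, 80.0)))
print(step((124.0, 82.0)))
```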

Three-dimensional Head Tracking Using Adaptive Local Binary Pattern in Depth Images

  • Kim, Joongrock;Yoon, Changyong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.131-139
    • /
    • 2016
  • Recognition of human motion has become a main area of computer vision due to its potential for human-computer interfaces (HCI) and surveillance. Among existing recognition techniques for human motion, head detection and tracking is the basis of all human motion recognition. Various approaches have been tried to precisely detect and trace the position of the human head in two-dimensional (2D) images. However, it is still a challenging problem because human appearance varies greatly with pose and images are affected by illumination change. To enhance the performance of head detection and tracking, real-time three-dimensional (3D) data acquisition sensors such as time-of-flight and the Kinect depth sensor have recently been used. In this paper, we propose an effective feature extraction method, called adaptive local binary pattern (ALBP), for depth-image based applications. In contrast to the well-known conventional local binary pattern (LBP), the proposed ALBP can not only extract shape information without texture in depth images, but is also invariant to distance change in range images. We apply the proposed ALBP to head detection and tracking in depth images to show its effectiveness and usefulness.
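In the spirit of the depth-adaptive comparison this abstract describes, the sketch below computes an LBP-like code where each pixel is compared with its 8 neighbours using a tolerance that scales with the centre depth, instead of the plain greater-than test of conventional LBP. The exact ALBP definition is given in the paper; the scaling rule below is only an assumption for illustration.

```python
# Sketch: an LBP-style code on a depth image with a distance-adaptive threshold.
import numpy as np

def adaptive_lbp(depth, rel_tol=0.02):
    """depth: 2D array of range values; returns an 8-bit code image (borders left zero)."""
    h, w = depth.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = depth[y, x]
            tol = rel_tol * c                    # tolerance grows with distance
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if depth[y + dy, x + dx] >= c + tol:
                    code |= 1 << bit
            codes[y, x] = code
    return codes

print(adaptive_lbp(np.full((5, 5), 1000.0)))  # flat surface -> all-zero codes
```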