• Title/Summary/Keyword: Visual Processing

Automatic Visual Inspection System of Remocon using Camera (카메라를 이용한 리모컨 외관검사 자동화 시스템 구현)

  • Huh, Kyung-Moo;Kang, Su-Min;Park, Se-Hyuk
    • Journal of Institute of Control, Robotics and Systems / v.13 no.11 / pp.1106-1111 / 2007
  • Visual inspection that relies on the human eye suffers from large variations in results depending on the physical and mental state of the inspector. We automate the remote-control (remocon) appearance inspection process using a CCD camera. The developed inspection system can be applied to any remocon production line without extensive operator intervention, and was implemented with a PC, a CCD camera, and Visual C++ for general-purpose workplaces. The accuracy of the proposed system was improved by about 3.2% over the conventional pattern matching method, and the processing time was reduced by about 119 ms. We also show that our inspection system is more robust to lighting conditions.
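The conventional pattern-matching baseline that the abstract compares against is typically a normalized cross-correlation between a stored template and candidate image patches. A minimal 1-D sketch (the function names and toy intensity values are illustrative assumptions, not from the paper):

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized grayscale patches."""
    n = len(template)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt) if dp and dt else 0.0

def best_match(row, template):
    """Slide the template along a 1-D image row; return (offset, score)."""
    w = len(template)
    scores = [ncc(row[i:i + w], template) for i in range(len(row) - w + 1)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

row = [10, 12, 50, 80, 50, 12, 10, 11]
template = [50, 80, 50]
print(best_match(row, template))  # the bright bump is found at offset 2, score ~1.0
```

Normalizing by the patch statistics is what gives correlation-based matching partial tolerance to lighting changes, which is the weakness the paper's method further improves on.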

Stereo Visual Odometry without Relying on RANSAC for the Measurement of Vehicle Motion (차량의 모션계측을 위한 RANSAC 의존 없는 스테레오 영상 거리계)

  • Song, Gwang-Yul;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.4 / pp.321-329 / 2015
  • This paper presents a new stereo visual odometry algorithm for measuring the ego-motion of a vehicle. The algorithm introduces an inlier grouping method based on Delaunay triangulation and vanishing point computation. Most visual odometry algorithms rely on RANSAC to choose inliers, and such algorithms fluctuate widely in processing time from image to image and vary in accuracy with the number of iterations and the level of outliers. The new approach reduces this fluctuation in processing time while providing accuracy comparable to that of RANSAC-based approaches.
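The abstract's point that RANSAC runtime depends on the outlier level can be illustrated with the standard formula for the number of iterations needed to draw at least one all-inlier sample with confidence p; the sample size of 3 point pairs below is an assumption for a minimal rigid-motion estimate, not a detail from the paper:

```python
import math

def ransac_iterations(p, outlier_ratio, sample_size):
    """Iterations needed so that, with probability p, at least one
    drawn sample of `sample_size` points is outlier-free."""
    w = (1.0 - outlier_ratio) ** sample_size  # P(single sample is all inliers)
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w))

# Required iterations at 99% confidence for various outlier ratios:
for e in (0.1, 0.3, 0.5, 0.7):
    print(e, ransac_iterations(0.99, e, 3))
```

At 99% confidence the required iterations grow from 4 at 10% outliers to 169 at 70% outliers, which is precisely the image-to-image runtime fluctuation that a deterministic inlier-grouping step avoids.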

The Study of Visual Tool for Automated Ultrasonic Examination of the Piping Welds in NPP (자동 초음파 신호평가를 위한 비쥬얼도구에 관한 연구)

  • Yoo, Hyun Joo;Choi, Sung Nam;Kim, Hyung Nam;Lee, Hee Jong
    • Transactions of the Korean Society of Pressure Vessels and Piping / v.6 no.1 / pp.9-15 / 2010
  • This paper describes a Visual Tool for automated ultrasonic examination that is under development as part of a project to build an automatic ultrasonic wave acquisition and analysis program. The tool, supported by various image processing techniques, will be used to detect flaws in component and piping welds in nuclear power plants (NPPs), thereby enhancing plant integrity. The objective of this paper is to present the Visual Tool being developed for automatic ultrasonic inspection of welds in NPPs.

Critical Steps in Building Applications with Visual Basic and UML: Focusing on Order Processing Application (Visual Basic과 UML을 사용한 애플리케이션 개발시의 핵심적 단계: 주문처리 업무를 중심으로)

  • Han, Yong-Ho
    • IE interfaces / v.16 no.2 / pp.268-279 / 2003
  • This paper presents the critical steps in building a client/server application with UML and Visual Basic, derived from the implementation of a typical order processing system. We first briefly review the software architecture, the diagrams, and the object-oriented building process in UML. In the inception phase, it is critical to define the project charter, draw use case diagrams, and construct a preliminary architecture of the application. In the elaboration phase, it is critical to identify the classes to be displayed in the class diagram, develop user interface prototypes for each use case, construct a sequence diagram for each use case, and finally design an implementation architecture; steps to construct the implementation architecture are given. In the construction phase, it is critical to design both the database and the components; steps to design these components are described in detail. A way to create the Internet interface is also suggested.

Imaging a scene from experience given verbal expressions

  • Sakai, Y.;Kitazawa, M.;Takahashi, S.
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1995.10a / pp.307-310 / 1995
  • In conventional systems, a human must have knowledge of machines and of their special language in order to communicate with them. This may be acceptable in some respects, but achieving it is laborious and is a significant cause of human error. To reduce this human load, an intelligent man-machine interface is desirable between a human operator and the machines being operated. In ordinary human communication, both linguistic and visual information are effective, each compensating for the other's shortcomings. From this viewpoint, this paper discusses the problem of translating verbal expressions into a visual image. The locational relation between any two objects in a visual scene is the key to translating verbal information into visual information, as in Fig. 1. The proposed translation system advances its knowledge with experience and consists of Japanese language processing, image processing, and Japanese-to-scene translation functions.
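The paper's key idea, that locational relations between objects drive the verbal-to-visual translation, can be sketched as a lookup from relation words to 2-D displacements. Everything here (the relation vocabulary, the coordinate convention, the object names) is a hypothetical simplification, not the paper's actual system:

```python
# Toy mapping from a verbal locational relation to a 2-D scene placement.
RELATIONS = {
    "left of":  (-1, 0),
    "right of": (1, 0),
    "above":    (0, 1),
    "below":    (0, -1),
}

def place(scene, subject, relation, reference, step=1.0):
    """Place `subject` in `scene` relative to an already-placed `reference`."""
    dx, dy = RELATIONS[relation]
    rx, ry = scene[reference]
    scene[subject] = (rx + dx * step, ry + dy * step)
    return scene

scene = {"table": (0.0, 0.0)}
place(scene, "cup", "above", "table")
place(scene, "chair", "left of", "table")
print(scene)  # {'table': (0.0, 0.0), 'cup': (0.0, 1.0), 'chair': (-1.0, 0.0)}
```

A real system of this kind would learn or refine the displacement table from experience rather than hard-coding it, as the abstract suggests.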

An Implementation of Visual OWL Editor (Visual OWL Editor 구현)

  • Ryu, Yeong-Hyeon;Sung, Ji-Hyeon;Jeon, Yang-Seung;Joung, Suck-Tae;Jeong, Young-Sik;Han, Sung-Kook
    • Proceedings of the Korea Information Processing Society Conference / 2005.11a / pp.437-440 / 2005
  • Ontology is the core element in the development of the Semantic Web, the next-generation web technology, and tools are urgently needed that make complex and difficult ontologies easier to understand and edit intuitively. This paper analyzes existing ontology development tools and implements an editing tool for authoring ontologies using OWL, which the W3C has established as the standard ontology language. The Visual OWL Editor implemented in this paper provides a visual, diagram-based graphical interface through which users can easily understand an ontology and simply create and edit OWL documents.

Multimodal Curvature Discrimination of 3D Objects

  • Kim, Kwang-Taek;Lee, Hyuk-Soo
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.212-216 / 2013
  • As virtual reality technologies advance rapidly, how to render 3D objects across modalities is becoming an important issue. This study investigates human discriminability of the curvature of 3D polygonal surfaces, focusing on vision and touch because they are the dominant senses when exploring 3D shapes. We designed a psychophysical experiment based on signal detection theory to determine curvature discrimination under three conditions: haptic only, visual only, and combined haptic and visual. The results show no statistically significant difference among the conditions, although the threshold in the haptic condition is the lowest. The results also indicate that rendering through both visual and haptic channels can degrade discrimination performance for a 3D global shape. These findings should be considered when designing multimodal rendering systems.
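The signal-detection-theory analysis such a study relies on reduces, for a yes/no discrimination task, to the sensitivity index d′ computed from hit and false-alarm rates. A minimal sketch with made-up rates (the 0.84/0.16 figures are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(F) from signal detection theory."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# e.g., one hypothetical observer's haptic-only curvature judgments:
print(round(d_prime(0.84, 0.16), 2))  # ~1.99
```

Unlike raw percent correct, d′ separates true sensitivity from response bias, which is why it is the standard measure for comparing discrimination across modalities.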

Is the Peak-Affect Important in Fast Processing of Visual Images in Printed Ads?: A Comparative Study on the Affect Integration Theories

  • Bu, Kyunghee;Lee, Luri
    • Asia Marketing Journal / v.24 no.3 / pp.96-108 / 2022
  • This study investigates how affects elicited by visual images in print ads are integrated to form a liking for the ads. Assuming a sequential rather than simultaneous processing of still-cut images, we adopt the 'think-aloud' method to capture consumers' spontaneous responses to visual images. We hypothesize that not only would consumers show mixed affects toward a still-cut visual image but that they would also integrate their serial affects heuristically rather than simply averaging the affects as suggested by the compensatory hypothesis. By comparing the effects of two contradictory affect integration hypotheses (i.e., peak-affect and mood-maintenance) with compensatory integration, using a single regression model, we found that peak-negative along with mood maintenance integration of serial affects for a print ad works best in the formation of ad liking. The results also support our initial premise that people can have mixed valence even toward a still-cut ad.
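The competing integration hypotheses that the study compares in one regression can be sketched as different summary statistics over a viewer's serial affect ratings. The mapping below (including treating mood maintenance as the final affect state) is a loose illustrative simplification; all names and numbers are assumptions, not the study's model:

```python
def affect_features(valences):
    """Candidate predictors for competing affect-integration rules.
    `valences` is a serial list of signed affect ratings for one ad."""
    return {
        "average": sum(valences) / len(valences),  # compensatory (averaging) rule
        "peak_positive": max(valences),            # peak-affect, positive pole
        "peak_negative": min(valences),            # peak-affect, negative pole
        "end": valences[-1],                       # final-state proxy
    }

# Four serial reactions to one still-cut image, mixed in valence:
print(affect_features([2, -3, 1, -1]))
```

Entering such summaries as competing predictors of ad liking in a single regression is one straightforward way to test which integration rule carries the explanatory weight.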

A Study on Migration of a Web-based Convergence Service (웹 기반 융합 서비스의 이동성 연구)

  • Song, Eun-Ji;Kim, Su-Ra;Choi, Hun-Hoi;Kim, Geun-Hyung
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.1129-1131 / 2011
  • Recently, the variety of Internet-connected devices with differing display sizes, operating systems, and hardware capabilities, such as smartphones, tablet PCs, and smart TVs, has been increasing, and web service providers are offering new web services that converge existing content and services. As personally owned devices and services multiply, technology is required that lets a converged service move freely between a user's devices. However, because seamless service migration between devices with different characteristics is difficult, device manufacturers and telecom operators offer N-screen services tied to their own devices or platforms. This paper defines the objects that can be migrated when a web service moves between devices, and shows that service migration between user devices is possible, regardless of device characteristics or platform, by using the WebSocket technology of HTML5.
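One way to picture the "migratable objects" such a paper defines is as a JSON payload pushed from the source device to the target device over an HTML5 WebSocket. The field names and helper below are hypothetical, not the paper's actual protocol:

```python
import json

def build_migration_payload(session):
    """Serialize the migratable state of a web session into a JSON
    message that could be sent to a target device over a WebSocket."""
    payload = {
        "type": "service-migration",
        "url": session["url"],                                # page being viewed
        "media_position_s": session.get("media_position_s", 0),  # playback offset
        "form_state": session.get("form_state", {}),          # partially filled forms
    }
    return json.dumps(payload)

msg = build_migration_payload({
    "url": "https://example.com/player",
    "media_position_s": 734,
    "form_state": {"query": "visual processing"},
})
restored = json.loads(msg)
print(restored["media_position_s"])  # 734
```

Because both the payload (JSON) and the transport (WebSocket) are browser standards, the receiving device can restore the session without any platform-specific N-screen infrastructure, which is the portability argument the abstract makes.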

Design of HCI System of Museum Guide Robot Based on Visual Communication Skill

  • Qingqing Liang
    • Journal of Information Processing Systems / v.20 no.3 / pp.328-336 / 2024
  • Visual communication is widely used and continually enhanced in modern society, where there is an increasing demand for spiritual and cultural enrichment. Museum robots are among the many service robots that can replace humans in providing services such as display, interpretation, and dialogue. To improve museum guide robots, this paper proposes a human-robot interaction system based on visual communication skills. The system is built on a deep neural network structure and draws on theoretical analysis from computer vision to introduce a Tiny+CBAM network structure in the gesture recognition component, which combines basic gestures and gesture states to design and evaluate gesture actions. Test results indicate that the improved Tiny+CBAM network structure increases the mean average precision by 13.56% while losing fewer than 3 frames per second during static basic gesture recognition. In tests of dynamic gesture performance, the system was more than 95% accurate for all items except double click, and 100% accurate for the action displayed on the current page.
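The mean average precision (mAP) metric quoted in the abstract is the mean, over gesture classes, of the average precision of each class's ranked detections. This toy version assumes every ground-truth instance appears somewhere in the ranked list, and the per-class detection data is invented for illustration:

```python
def average_precision(ranked_hits):
    """AP for one gesture class: `ranked_hits` is the detector's ranked
    output, True for a correct detection, False for a false positive."""
    total_pos = sum(ranked_hits)
    if total_pos == 0:
        return 0.0
    hits, ap = 0, 0.0
    for rank, correct in enumerate(ranked_hits, start=1):
        if correct:
            hits += 1
            ap += hits / rank  # precision at each recall point
    return ap / total_pos

# mAP over two hypothetical gesture classes:
classes = [[True, True, False, True], [True, False, True, False]]
print(sum(average_precision(c) for c in classes) / len(classes))  # 0.875
```

Because AP rewards ranking correct detections ahead of false positives, an architectural change such as adding a CBAM attention module can raise mAP even when the raw detection count is unchanged.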