• Title/Abstract/Keyword: Computer vision technology

생체모방 시각센서 기술동향 (Trends in Biomimetic Vision Sensor Technology)

  • 이태재;박윤재;구교인;서종모;조동일
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 12
    • /
    • pp.1178-1184
    • /
    • 2015
  • In conventional robotics, charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) cameras have been utilized for acquiring vision information. These devices have problems, such as narrow optic angles and inefficiencies in visual information processing. Recently, biomimetic vision sensors for robotic applications have been receiving much attention. These sensors are more efficient than conventional vision sensors in terms of the optic angle, power consumption, dynamic range, and redundancy suppression. This paper presents recent research trends on biomimetic vision sensors and discusses future directions.
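Redundancy suppression, one of the advantages listed above, is the event-driven behavior of retina-inspired sensors: only pixels whose brightness changes are reported, instead of full frames. The following Python/NumPy sketch (not from the paper) illustrates that idea; the contrast threshold and the toy frames are assumptions for the example.

```python
import numpy as np

def events_from_frames(prev_frame, curr_frame, threshold=0.15):
    """Emit (row, col, polarity) events only where the log-intensity change
    exceeds the contrast threshold, mimicking the redundancy suppression of a
    retina-inspired (event-based) vision sensor."""
    eps = 1e-6
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

# Toy usage: a static scene produces no events; a single local change produces one.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.9                       # only one pixel changed
print(events_from_frames(prev, curr))  # -> [(1, 2, 1)]
```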

MEAN Stack 기반의 컴퓨터 비전 플랫폼 설계 (Computer Vision Platform Design with MEAN Stack Basis)

  • 홍선학;조경순;윤진섭
    • 디지털산업정보학회논문지
    • /
    • Vol. 11, No. 3
    • /
    • pp.1-9
    • /
    • 2015
  • In this paper, we designed and implemented a computer vision platform based on the MEAN stack on a Raspberry Pi 2, an open-source hardware platform. We experimented with face recognition and with temperature and humidity sensor data logging over WiFi on the Raspberry Pi 2, and fabricated the platform enclosure with 3D printing. Face recognition uses OpenCV with a Haar-cascade feature-based machine learning algorithm, and Bluetooth was added to the wireless communication functions to interface with Android mobile devices. The resulting platform therefore identifies faces scanned by the Pi camera while gathering temperature and humidity sensor data in an IoT environment, in an enclosure produced with 3D printing technology. MongoDB was chosen to improve the platform's data handling because working with it is closer to working with objects in a programming language than with a conventional database. In future work, we plan to enhance the platform with cloud functionalities.
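As a reference for the face recognition step mentioned in the abstract, below is a minimal sketch of Haar-cascade face detection with OpenCV in Python, roughly the kind of pipeline such a platform would run; the camera index and output file name are assumptions, and the MEAN-stack/MongoDB logging side is not shown.

```python
import cv2

# Load the pretrained Haar-cascade frontal-face model shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)          # 0: default camera (Pi camera assumed)
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns (x, y, w, h) boxes for detected faces.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces.jpg", frame)
cap.release()
```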

Text-To-Vision Player - Converting Text to Vision Based on TVML Technology -

  • Hayashi, Masaki
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송공학회 2009년도 IWAIT
    • /
    • pp.799-802
    • /
    • 2009
  • We have been studying a next-generation video creation solution based on TVML (TV program Making Language) technology. TVML is a well-known scripting language for computer animation; a TVML Player interprets a script to create video content using real-time 3DCG and synthesized voices. TVML has a long history, having been proposed by NHK back in 1996, but for years the only available player was the one made by NHK. We have developed a new TVML Player from scratch and named it the T2V (Text-To-Vision) Player. Because it was developed from scratch, the code is compact, light, fast, extensible, and portable. Moreover, the new T2V Player not only plays back TVML scripts but also performs Text-To-Vision conversion from input written in XML or plain text into video, using 'Text-filters' that can be added as plug-ins to the Player. We plan to release it as freeware in early 2009 to stimulate user-generated content and various kinds of services on the Internet and in the media industry. We believe our T2V Player could be a key technology for this upcoming movement.
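To make the 'Text-filter' idea concrete, the hypothetical Python sketch below turns plain text into a small list of scene directives that a downstream player could render; the directive names and the paragraph/sentence splitting rules are invented for illustration and are not the actual TVML syntax or the T2V plug-in API.

```python
def text_to_scene_directives(plain_text):
    """Toy 'text filter': turn each paragraph of plain text into a list of
    scene directives that a downstream player could render as 3DCG + speech.
    The directive vocabulary here is invented for illustration only."""
    directives = []
    paragraphs = [p.strip() for p in plain_text.split("\n\n") if p.strip()]
    for i, paragraph in enumerate(paragraphs):
        directives.append({"cmd": "scene", "id": i})          # new scene per paragraph
        for sentence in paragraph.split(". "):
            if sentence:
                directives.append({"cmd": "speak",            # synthesized voice line
                                   "speaker": "narrator",
                                   "text": sentence.rstrip(".")})
    return directives

sample = "A robot enters the studio. It greets the audience.\n\nThe show begins."
for d in text_to_scene_directives(sample):
    print(d)
```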


Study of Intelligent Vision Sensor for the Robotic Laser Welding

  • Kim, Chang-Hyun;Choi, Tae-Yong;Lee, Ju-Jang;Suh, Jeong;Park, Kyoung-Taik;Kang, Hee-Shin
    • 한국산업융합학회 논문집
    • /
    • Vol. 22, No. 4
    • /
    • pp.447-457
    • /
    • 2019
  • An intelligent sensory system is required to ensure accurate welding performance. This paper describes the development of an intelligent vision sensor for robotic laser welding. The sensor system consists of a PC-based vision camera and a stripe-type laser diode, and a set of robust image processing algorithms is implemented. The laser-stripe sensor measures the profile of the welding object and extracts the seam line. Moreover, the working distance of the sensor can be changed, with the rest of the configuration adjusted accordingly. A robot, the seam tracking system, and a CW Nd:YAG laser make up the laser welding robot system, and a simple, efficient control scheme for the whole system is also presented. Profile measurement and seam tracking experiments were carried out to validate the operation of the system.
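A hedged sketch of the laser-stripe measurement principle follows: find the brightest stripe row in each image column, convert its offset from a reference row into height by triangulation, and take the strongest deviation as the seam. The pixel pitch, triangulation angle, and reference row are example assumptions, not values from the paper.

```python
import numpy as np

def stripe_profile(image, ref_row, pixel_pitch_mm=0.05, tri_angle_deg=30.0):
    """Extract a height profile from a laser-stripe image.
    image: 2D grayscale array where the stripe is the brightest pixel per column.
    Height is recovered from the stripe's row offset via simple triangulation:
        height = offset_pixels * pixel_pitch / tan(triangulation angle)."""
    stripe_rows = np.argmax(image, axis=0)              # brightest row per column
    offset_px = stripe_rows.astype(float) - ref_row
    return offset_px * pixel_pitch_mm / np.tan(np.radians(tri_angle_deg))

def seam_position(height_profile):
    """Locate the seam as the column deviating most from the median surface height."""
    return int(np.argmax(np.abs(height_profile - np.median(height_profile))))

# Toy example: a flat surface with a groove (seam) around columns 5-7.
img = np.zeros((20, 12))
img[10, :] = 1.0          # stripe on the flat surface
img[13, 5:8] = 2.0        # stripe displaced where the groove is
profile = stripe_profile(img, ref_row=10)
print(seam_position(profile))   # -> 5 (first groove column in this toy image)
```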

Measuring the volume of powder by vision

  • Seiji Ishikawa;Shigeru Harada;Hiroyuki Yoshinaga;Kiyoshi Kato
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1987년도 한국자동제어학술회의논문집(한일합동학술편); 한국과학기술대학, 충남; 16-17 Oct. 1987
    • /
    • pp.776-779
    • /
    • 1987
  • This paper describes a technique for measuring the volume of a pile of powder visually. The volume of a fragile object, whose shape is easily deformed by the slightest touch of another object, must be measured without any contact. This can be achieved by applying a three-dimensional shape reconstruction technique from computer vision. We have developed a measurement system that finds the volume of a pile of powder using a range finder, and performed an experiment to determine the volume of PVC powder piled on a table. The result of the experiment was satisfactory.
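The volume computation itself amounts to integrating the reconstructed height map over the table plane. A minimal sketch under assumed units (heights on a regular grid with known spacing) is shown below; the cone-shaped toy pile stands in for real range-finder data, not the 1987 experiment.

```python
import numpy as np

def powder_volume(height_map_mm, dx_mm, dy_mm):
    """Approximate the volume of a pile from a range-finder height map:
    sum of (cell height x cell footprint area), returned in cubic mm."""
    return float(np.sum(height_map_mm) * dx_mm * dy_mm)

# Toy cone-shaped pile: radius 50 mm, peak height 30 mm, 1 mm grid spacing.
x = np.arange(-60, 61)
y = np.arange(-60, 61)
xx, yy = np.meshgrid(x, y)
r = np.sqrt(xx**2 + yy**2)
heights = np.clip(30.0 * (1.0 - r / 50.0), 0.0, None)   # linear cone profile

v = powder_volume(heights, dx_mm=1.0, dy_mm=1.0)
analytic = np.pi * 50.0**2 * 30.0 / 3.0                  # cone volume = pi r^2 h / 3
print(round(v), round(analytic))                         # numeric vs. analytic volume
```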


저전력 온디바이스 비전 SW 프레임워크 기술 동향 (Trends in Low-Power On-Device Vision SW Framework Technology)

  • 이문수;배수영;김정시;석종수
    • 전자통신동향분석
    • /
    • Vol. 36, No. 2
    • /
    • pp.56-64
    • /
    • 2021
  • Many computer vision algorithms are computationally expensive and require a lot of computing resources. Recently, owing to machine learning technology and high-performance embedded systems, vision processing applications such as object detection, face recognition, and visual inspection have come into wide use. However, devices must handle demanding vision workloads with low power consumption in heterogeneous environments, using only their own on-device resources. Consequently, global manufacturers are trying to lock developers into their ecosystems by providing integrated low-power chips and dedicated vision libraries. The Khronos Group, an international standards organization, has released the OpenVX standard for high-performance, low-power vision processing in heterogeneous on-device systems. This paper describes vision libraries for embedded systems and presents the OpenVX standard along with related trends for on-device vision systems.
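The central idea of OpenVX referred to here is graph-based execution: the application declares a vision pipeline as a graph of nodes, the implementation verifies and optimizes it for the target hardware (DSP, GPU, NPU), and only then processes it. The sketch below illustrates that build/verify/execute pattern in plain Python for readability; it is not the OpenVX C API (which exposes calls such as vxCreateGraph, vxVerifyGraph, and vxProcessGraph), and the node functions are stand-ins.

```python
# Conceptual illustration of OpenVX-style deferred, graph-based execution.
class Graph:
    def __init__(self):
        self.nodes = []          # (function, input key, output key)
        self.verified = False

    def add_node(self, fn, src, dst):
        self.nodes.append((fn, src, dst))

    def verify(self):
        # A real implementation would check data formats here and let the
        # vendor back-end fuse nodes or map them to DSP/GPU/NPU kernels.
        self.verified = True

    def process(self, data):
        assert self.verified, "graph must be verified before execution"
        for fn, src, dst in self.nodes:
            data[dst] = fn(data[src])
        return data

# Example pipeline: grayscale -> blur -> edge map (toy node functions).
to_gray = lambda img: [[sum(px) // 3 for px in row] for row in img]
blur    = lambda img: img                      # placeholder kernel
edges   = lambda img: img                      # placeholder kernel

g = Graph()
g.add_node(to_gray, "input", "gray")
g.add_node(blur, "gray", "blurred")
g.add_node(edges, "blurred", "edges")
g.verify()
out = g.process({"input": [[(10, 20, 30), (40, 50, 60)]]})
print(out["edges"])
```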

조명의 변화가 심한 환경에서 자동차 부품 유무 비전검사 방법 (Auto Parts Visual Inspection in Severe Changes in the Lighting Environment)

  • 김기석;박요한;박종섭;조재수
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 12
    • /
    • pp.1109-1114
    • /
    • 2015
  • This paper presents an improved learning-based visual inspection method for auto parts under severe lighting changes. Automobile sunroof frames are produced automatically by robots in most production lines, and the manufacturing process suffers from quality problems such as missing bolts. Instead of manual sampling inspection with mechanical jig instruments, a learning-based machine vision system was proposed in previous research [1]. However, when applied to the actual sunroof frame production process, the inspection accuracy of the proposed vision system dropped considerably because of severe illumination changes. To cope with this capricious environment, selective feature vectors and cascade classifiers are used for each auto part, and inspection accuracy is further improved by re-learning on the misclassified data. The effectiveness of the proposed visual inspection method is verified through extensive experiments on a real sunroof production line.
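The re-learning step can be sketched generically: train a classifier on labeled part images, collect the samples it misclassifies, and retrain with those samples emphasized. The snippet below uses scikit-learn on synthetic feature vectors purely as an illustration; the paper's selective features and per-part cascade classifiers are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for per-part feature vectors (e.g., bolt present / missing).
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=400) > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Re-learning: find misclassified samples and retrain with them up-weighted.
pred = clf.predict(X)
wrong = pred != y
weights = np.where(wrong, 5.0, 1.0)          # emphasize the hard examples
clf_relearned = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)

print("accuracy before re-learning:", (clf.predict(X) == y).mean())
print("accuracy after re-learning :", (clf_relearned.predict(X) == y).mean())
```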

효과적인 인간-로봇 상호작용을 위한 딥러닝 기반 로봇 비전 자연어 설명문 생성 및 발화 기술 (Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction)

  • 박동건;강경민;배진우;한지형
    • 로봇학회논문지
    • /
    • Vol. 14, No. 1
    • /
    • pp.22-30
    • /
    • 2019
  • For effective human-robot interaction, a robot not only needs to understand the current situational context well, but also needs to convey its understanding to the human participant efficiently. The most convenient way to deliver the robot's understanding is for the robot to express it using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. This paper therefore proposes a deep learning-based method for converting robot vision into audio descriptions. The applied model is a pipeline of two deep learning models: one generates a natural language sentence from the robot's vision, and the other generates a voice from the generated sentence. We also conduct a real robot experiment to show the effectiveness of our method in human-robot interaction.
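The two-stage pipeline can be outlined as below. The captioning and speech-synthesis models are hypothetical placeholders, since the abstract does not name specific networks; only the hand-off between the two stages is the point of the sketch.

```python
# Hypothetical two-stage pipeline: robot vision -> sentence -> speech waveform.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class VisionToAudioPipeline:
    caption_model: Callable[[np.ndarray], str]     # video frames -> sentence
    tts_model: Callable[[str], np.ndarray]         # sentence -> audio samples

    def describe(self, frames: np.ndarray) -> tuple[str, np.ndarray]:
        sentence = self.caption_model(frames)       # stage 1: natural language
        waveform = self.tts_model(sentence)         # stage 2: synthesized voice
        return sentence, waveform

# Toy stand-ins so the sketch runs end to end (not the paper's networks).
dummy_captioner = lambda frames: "a person is handing a cup to the robot"
dummy_tts = lambda text: np.zeros(16000 * len(text.split()) // 4, dtype=np.float32)

pipeline = VisionToAudioPipeline(dummy_captioner, dummy_tts)
sentence, audio = pipeline.describe(np.zeros((8, 224, 224, 3), dtype=np.uint8))
print(sentence, audio.shape)
```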

자동 표면 결함검사 시스템에서 Retro 광학계를 이용한 3D 깊이정보 측정방법 (Linear System Depth Detection using Retro Reflector for Automatic Vision Inspection System)

  • 주영복
    • 반도체디스플레이기술학회지
    • /
    • Vol. 21, No. 4
    • /
    • pp.77-80
    • /
    • 2022
  • Automatic Vision Inspection (AVI) systems automatically detect defect features and measure their sizes via camera vision. They have become popular because of the accuracy and consistency they bring to quality control (QC) of inspection processes, and it is important to predict the performance of an AVI system in advance to meet customer specifications. AVI systems usually suffer from false negatives and false positives, which can be reduced by providing extra information such as 3D depth. Stereo vision processing has been popular for extracting depth from 2D images, but stereo methods usually take a long time to process. In this paper, a retro optical system using reflectors is proposed and tested to overcome this problem. The optical system extracts depth without special software processing; the vision sensor and optical components, such as the illumination and depth-detecting module, are integrated as one unit. The depth information can be extracted in real time and used to improve the performance of an AVI system.
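For context on why stereo-based depth extraction is comparatively slow, here is a minimal OpenCV block-matching sketch; the rectified image files, focal length, and baseline are assumed example inputs, and the paper's retro-reflector optics are not reproduced here.

```python
import cv2
import numpy as np

# Rectified stereo pair (file names are placeholders for this sketch).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: an exhaustive per-pixel disparity search, which is the
# computational cost the paper's optical approach avoids.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from disparity: Z = f * B / d (focal length and baseline are assumed).
focal_px = 700.0        # focal length in pixels (assumption)
baseline_mm = 60.0      # camera baseline in mm (assumption)
valid = disparity > 0
depth_mm = np.zeros_like(disparity)
depth_mm[valid] = focal_px * baseline_mm / disparity[valid]
print("median depth (mm):", np.median(depth_mm[valid]))
```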

스펙트럼 해석을 이용한 연삭숫돌 마멸거동 (The Behavior of Grinding Wheel Wear Using Spectrum Analysis)

  • 사승윤
    • 한국생산제조학회지
    • /
    • Vol. 8, No. 5
    • /
    • pp.20-24
    • /
    • 1999
  • In a grinding system, it is very difficult to examine the wear phenomena or dynamic characteristics closely because the system is complex and differs from a general cutting system; considering automation and precision, however, such close examination is very important. In this study, images of the grinding wheel surface are acquired with a computer vision system in order to explain the wear and loading phenomena. We investigate the relationship between wear and the Fourier spectrum of the acquired image, and observe the entropy variation during the manufacturing process.
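The two quantities tracked in the study, the Fourier spectrum of the wheel-surface image and its entropy, can be computed as in the sketch below; the synthetic texture stands in for an acquired wheel-surface image, and the normalization choice is an assumption.

```python
import numpy as np

def spectrum_entropy(gray_image):
    """2D power spectrum of a grayscale image and the Shannon entropy of the
    normalized spectrum; wheel wear and loading tend to change how power is
    distributed across spatial frequencies."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    p = spectrum / spectrum.sum()                 # normalize to a distribution
    p = p[p > 0]                                  # ignore zero bins for the log
    entropy = -np.sum(p * np.log2(p))
    return spectrum, entropy

# Synthetic stand-in for a wheel-surface image: periodic texture plus grain noise.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:128, 0:128]
surface = 0.5 * np.sin(2 * np.pi * xx / 8) + rng.normal(scale=0.3, size=(128, 128))

_, h = spectrum_entropy(surface)
print("spectral entropy (bits):", round(float(h), 2))
```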
