• Title/Abstract/Keyword: Vision data

Search results: 1,777 (processing time 0.249 s)

Visualizations of Relational Capital for Shared Vision

  • Russell, Martha G.;Still, Kaisa;Huhtamaki, Jukka;Rubens, Neil
    • World Technopolis Review
    • /
    • Vol. 5 No. 1
    • /
    • pp.47-60
    • /
    • 2016
  • In today's digital, non-linear global business environment, innovation initiatives are influenced by inter-organizational, political, economic, environmental, and technological systems, as well as by decisions made individually by key actors in these systems. Network-based structures emerge from social linkages and collaborations among various actors, creating innovation ecosystems: complex adaptive systems in which entities co-create value. A shared vision of value co-creation allows people operating individually to arrive together at the same future. Yet relationships are difficult to see, continually changing, and challenging to manage. The Innovation Ecosystem Transformation Framework comprises three core components that make innovation relationships visible and articulate networks of relational capital for the wellbeing, sustainability, and business success of innovation ecosystems: data-driven visualizations, storytelling, and shared vision. Access to data facilitates building evidence-based visualizations from relational data. This has dramatically altered the way leaders can use data-driven analysis to develop insights and provide the ongoing feedback needed to orchestrate relational capital and build a shared vision for high-quality decisions about innovation. Enabled by a shared vision, relational capital can guide decisions that catalyze, support, and sustain an ecosystemic milieu conducive to innovation for business growth. (An illustrative network-visualization sketch follows this entry.)
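
The framework's data-driven visualization component is described only at a conceptual level. As a rough, hypothetical illustration of what an evidence-based view of relational capital can look like, the sketch below builds a tiny collaboration network with networkx and matplotlib; every actor name and edge weight is invented, not taken from the paper.

```python
# Illustrative only: a tiny relational-capital network built from hypothetical
# collaboration records and rendered with networkx/matplotlib.
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical relational data: (actor_a, actor_b, number_of_joint_projects)
collaborations = [
    ("University", "Startup A", 3),
    ("Startup A", "Investor X", 1),
    ("University", "City Lab", 2),
    ("City Lab", "Startup B", 4),
    ("Startup B", "Investor X", 2),
]

G = nx.Graph()
for a, b, weight in collaborations:
    G.add_edge(a, b, weight=weight)

# Node size scaled by degree, edge width by collaboration count.
pos = nx.spring_layout(G, seed=42)
nx.draw_networkx_nodes(G, pos, node_size=[300 * G.degree(n) for n in G])
nx.draw_networkx_edges(G, pos, width=[G[u][v]["weight"] for u, v in G.edges()])
nx.draw_networkx_labels(G, pos, font_size=8)
plt.axis("off")
plt.show()
```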

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • Vol. 19 No. 6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also help in assisting elderly and frail persons and improving their lives. Human activity recognition nevertheless remains a difficult task because of the large variations in how actions are executed; here it is realized through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online to assist the person and offline to support the personal assistant. Our proposed method is robust against the various factors of variability in action execution, and the main purpose of this paper is to perform efficient and simple recognition from egocentric camera data alone using a convolutional neural network and deep learning. In terms of accuracy improvement, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera and several stationary camera data, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data. (A generic CNN sketch follows this entry.)
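
The abstract names a convolutional neural network trained on egocentric frames but gives no architecture details. The sketch below is a generic, assumed frame-classification CNN in PyTorch; the layer sizes, input resolution, and number of activity classes are placeholders, not the authors' design.

```python
# Minimal, assumed CNN for classifying single egocentric frames into activities.
# Architecture and hyperparameters are placeholders, not the paper's model.
import torch
import torch.nn as nn

class EgoActivityCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 28 * 28, num_classes)  # for 224x224 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # (N, 64, 28, 28) for 224x224 RGB frames
        x = torch.flatten(x, 1)
        return self.classifier(x)  # unnormalized class scores (logits)

model = EgoActivityCNN(num_classes=10)
logits = model(torch.randn(1, 3, 224, 224))  # one dummy frame
print(logits.shape)                          # torch.Size([1, 10])
```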

A Study on High-Speed 3D Measurement and Modeling Using a Multi-Line Laser Vision Sensor (Development of multi-line laser vision sensor and welding application)

  • 성기은;이세헌
    • Korean Society for Precision Engineering: Conference Proceedings
    • /
    • Proceedings of the 2002 KSPE Spring Conference
    • /
    • pp.169-172
    • /
    • 2002
  • A vision sensor measures range data using a laser light source. Such a sensor generally uses a patterned laser shaped as a single line, but this type of vision sensor cannot satisfy the new trend toward faster and more precise processing. The sensor's sampling rate increases as image processing time is reduced; however, the sampling rate cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range data can be measured in one image. For a camera with the same sampling rate, the number of 2D range data profiles per second is directly proportional to the number of laser lines. For example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions. (A short calculation sketch follows this entry.)

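The sampling-rate argument in this and the following multi-line laser sensor entries reduces to a single multiplication: profiles per second = camera frame rate × number of laser lines. The snippet below is purely illustrative arithmetic, not the sensors' actual software.

```python
# Illustrative arithmetic only: 2D range profiles captured per second when each
# camera frame images several laser lines at once.
def profiles_per_second(frame_rate_fps: float, num_laser_lines: int) -> float:
    """Each frame yields one range profile per visible laser line."""
    return frame_rate_fps * num_laser_lines

print(profiles_per_second(30, 1))  # single-line sensor: 30 profiles/s
print(profiles_per_second(30, 5))  # 5-line pattern: 150 profiles/s, as in the abstract
```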

DEVELOPMENT OF LASER VISION SENSOR WITH MULTI-LINE

  • Kieun Sung;Sehun Rhee;Yun, Jae-Ok
    • The Korean Welding and Joining Society: Conference Proceedings
    • /
    • Proceedings of the 2002 International Welding/Joining Conference-Korea
    • /
    • pp.324-329
    • /
    • 2002
  • Generally, the laser vision sensor makes it possible to design a highly reliable and precise range sensor at low cost. When the laser vision sensor is applied to lap joint welding, however, there are many limitations, so a specially designed hardware system has to be used. However, if multiple lines are used instead of a single line, multiple range data can be generated from one image. Even at a fixed 30 fps, the amount of generated 2D range data increases with the number of lines used. In this study, a laser vision sensor with a multi-line pattern is developed with a conventional CCD camera to carry out high-speed seam tracking in lap joint welding.


Development of Laser Vision Sensor with Multi-line for High Speed Lap Joint Welding

  • Sung, K.;Rhee, S.
    • International Journal of Korean Welding Society
    • /
    • Vol. 2 No. 2
    • /
    • pp.57-60
    • /
    • 2002
  • Generally, the laser vision sensor makes it possible to design a highly reliable and precise range sensor at low cost. When the laser vision sensor is applied to lap joint welding, however, there are many limitations, so a specially designed hardware system has to be used. However, if multiple lines are used instead of a single line, multiple range data can be generated from one image. Even at a fixed 30 fps, the amount of generated 2D range data increases with the number of lines used. In this study, a laser vision sensor with a multi-line pattern is developed with a conventional CCD camera to carry out high-speed seam tracking in lap joint welding.


High-Speed Seam Tracking Using a Multi-Line Laser Vision Sensor

  • 성기은;이세헌
    • Korean Society for Precision Engineering: Conference Proceedings
    • /
    • Proceedings of the 2002 KSPE Autumn Conference
    • /
    • pp.584-587
    • /
    • 2002
  • A vision sensor measures range data using a laser light source. Such a sensor generally uses a patterned laser shaped as a single line, but this type of vision sensor cannot satisfy the new trend toward faster and more precise processing. The sensor's sampling rate increases as image processing time is reduced; however, the sampling rate cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range data can be measured in one image. For a camera with the same sampling rate, the number of 2D range data profiles per second is directly proportional to the number of laser lines. For example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions.


Effective Real Time Tracking System using Stereo Vision

  • Lee, Hyun-Jin;Kuc, Tae-Young
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICCAS 2001
    • /
    • pp.70.1-70
    • /
    • 2001
  • Recently, research on visual control has become more essential in robotic applications, and acquiring 3D information from 2D images is becoming more important with the development of vision systems. For this application, we propose an effective way of controlling a stereo vision tracking system for tracking a target and calculating the distance between the target and the camera. In this paper we present an improved controller using a dual-loop visual servo, which is more effective than a single-loop visual servo for a stereo vision tracking system. Both speed and accuracy are important for realizing real-time tracking; however, vision processing is too slow to track an object in real time using only vision feedback data, so we use additional feedback data from the controller, which offers state feedback ... (A rough dual-loop sketch follows this entry.)

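The abstract contrasts a single-loop with a dual-loop visual servo but does not give the control law. The sketch below only illustrates the general idea of pairing a slow vision-based outer loop with a fast state-feedback inner loop; all rates, gains, and signal names are assumptions, not the authors' controller.

```python
# Assumed structure only: a slow outer loop refines the target setpoint from
# vision, while a fast inner loop servos the camera using encoder state feedback.
VISION_PERIOD = 0.10   # assumed: vision measurements arrive at ~10 Hz
INNER_PERIOD = 0.002   # assumed: encoder/state feedback at ~500 Hz
KP_OUTER, KP_INNER = 0.5, 2.0   # illustrative proportional gains

target_angle = 0.0     # desired pan angle estimated from vision (rad)
current_angle = 0.0    # pan angle measured by the motor encoder (rad)

def read_vision_error() -> float:
    """Placeholder for the image-based target offset (already converted to rad)."""
    return 0.05

last_vision_time = 0.0
for step in range(200):                          # stand-in for a real control loop
    now = step * INNER_PERIOD
    if now - last_vision_time >= VISION_PERIOD:  # slow outer loop: correct the setpoint
        target_angle += KP_OUTER * read_vision_error()
        last_vision_time = now
    # fast inner loop: track the setpoint with state (encoder) feedback only
    velocity_command = KP_INNER * (target_angle - current_angle)
    current_angle += velocity_command * INNER_PERIOD  # crude stand-in for the plant

print(round(current_angle, 4))
```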

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society
    • /
    • Vol. 10 No. 6
    • /
    • pp.716-725
    • /
    • 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When two hands or two feet cross at any position and then separate, an adaptive algorithm recognizes whether each is the left or the right one. Real motion is motion in 3D coordinates, whereas a mono image provides only 2D coordinates and no distance from the camera. With stereo vision, as with human vision, we can acquire 3D motion data such as left-right motion, motion from the bottom, and the distance of objects from the camera. Transforming to 3D coordinates requires a depth value in addition to the x-axis and y-axis coordinates of the mono image. This depth value (z axis) is calculated from the stereo disparity, using only the end effectors in the images. The positions of the inner joints are then calculated, and the 3D character can be visualized using inverse kinematics. (The standard depth-from-disparity relation is sketched after this entry.)

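The depth step the abstract relies on is the standard rectified-stereo relation Z = f·B/d (focal length times baseline over disparity). The helper below just evaluates that relation; the focal length, baseline, and disparity values are invented for illustration.

```python
# Standard rectified-stereo depth relation: Z = f * B / d.
# Focal length, baseline, and disparity values below are invented for illustration.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return distance Z (metres) of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A hand (end effector) detected at 40 px disparity with a 700 px focal length
# and a 12 cm stereo baseline lies roughly 2.1 m from the cameras.
print(depth_from_disparity(focal_px=700, baseline_m=0.12, disparity_px=40))  # ~2.1
```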

Reverse Engineering of Compound Surfaces on the Machine Tool Using a Vision Probe

  • 김경진;윤길상;초명우;권혁동;서태일
    • Korean Society for Precision Engineering: Conference Proceedings
    • /
    • Proceedings of the 2002 KSPE Spring Conference
    • /
    • pp.287-292
    • /
    • 2002
  • This paper presents a reverse engineering method for compound surfaces using a vision system. A CNC machining center equipped with a slit-beam generator and a vision probe is used as the measuring station. Since data obtained with a slit beam or laser scanner may suffer considerable data loss along the edges of compound surfaces, an algorithm is presented in this study to recover the missing geometric data in such regions. First, B-spline interpolation is applied to extract the edge information of the surface; next, B-spline approximation is applied to recover the missing geometric data; finally, a B-spline skinning method is applied to regenerate the surface information. Appropriate simulation and experimental work is performed to verify the effectiveness of the proposed methods. (A small B-spline fitting sketch follows this entry.)

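The abstract's pipeline (B-spline interpolation of the edge, B-spline approximation to fill missing data, then B-spline skinning) is not detailed. The fragment below only shows the approximation step in that spirit, fitting a smoothing B-spline through sparse 3D edge points with SciPy; the point data and smoothing factor are invented.

```python
# Sketch of the B-spline approximation step: fit a smoothing spline through
# sparse 3D edge points and resample it densely. Point data are invented.
import numpy as np
from scipy.interpolate import splprep, splev

# Sparse, noisy edge points measured along a surface boundary (x, y, z in mm).
x = np.array([0.0, 5.1, 10.2, 14.8, 20.1, 25.0])
y = np.array([0.0, 1.9, 3.1, 2.8, 1.2, 0.1])
z = np.array([10.0, 10.2, 10.5, 10.4, 10.1, 9.9])

# s > 0 requests approximation (smoothing) rather than exact interpolation.
tck, u = splprep([x, y, z], s=0.5)

# Resample 100 points along the fitted edge to stand in for the missing data.
xe, ye, ze = splev(np.linspace(0.0, 1.0, 100), tck)
print(len(xe), xe[0], xe[-1])
```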

Development of a Vision System for Quality Inspection of Automotive Parts and Comparison of Machine Learning Models

  • 박영민;정동일
    • The Journal of the Convergence on Culture Technology
    • /
    • Vol. 8 No. 1
    • /
    • pp.409-415
    • /
    • 2022
  • Computer vision acquires images of a measurement target with a camera and detects the feature values, vectors, and regions to be extracted by applying algorithms and library functions. The detected data are computed and analyzed in various forms according to the purpose of use. Computer vision is applied in many areas, and it is used particularly often to automatically recognize automotive parts or to measure their quality. In industry, computer vision is referred to as machine vision, and, combined with artificial intelligence, it is used to judge product quality or predict outcomes. In this study, a vision system for judging the quality of automotive parts was built, and five machine learning classification models were applied to the produced data and their results compared. (A generic comparison sketch follows this entry.)
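
The study compares five machine learning classifiers on inspection data, but this abstract does not list which five. The sketch below only shows the comparison pattern, cross-validating five common scikit-learn classifiers on a stand-in dataset; the actual models and vision-derived data of the study are not reproduced here.

```python
# Comparison pattern only: five common classifiers cross-validated on a stand-in
# dataset; the study's actual models and inspection data are not given here.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Placeholder for vision-derived features labelled pass/fail.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```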