• Title/Summary/Keyword: Vision data

Search results: 1,817

Visualizations of Relational Capital for Shared Vision

  • Russell, Martha G.;Still, Kaisa;Huhtamaki, Jukka;Rubens, Neil
    • World Technopolis Review / v.5 no.1 / pp.47-60 / 2016
  • In today's digital, non-linear global business environment, innovation initiatives are influenced by inter-organizational, political, economic, environmental, and technological systems, as well as by decisions made individually by key actors in these systems. Network-based structures emerge from social linkages and collaborations among various actors, creating innovation ecosystems: complex adaptive systems in which entities co-create value. A shared vision of value co-creation allows people operating individually to arrive together at the same future. Yet relationships are difficult to see, continually changing, and challenging to manage. The Innovation Ecosystem Transformation Framework includes three core components that make innovation relationships visible and articulate networks of relational capital for the wellbeing, sustainability, and business success of innovation ecosystems: data-driven visualizations, storytelling, and shared vision. Access to data facilitates building evidence-based visualizations from relational data, which has dramatically altered the way leaders can use data-driven analysis to develop insights and provide the ongoing feedback needed to orchestrate relational capital and build a shared vision for high-quality decisions about innovation. Enabled by a shared vision, relational capital can guide decisions that catalyze, support, and sustain an ecosystemic milieu conducive to innovation for business growth.

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems / v.19 no.6 / pp.730-744 / 2023
  • This paper addresses the recognition of human activities from egocentric vision, in particular video captured by body-worn cameras, which is useful for video surveillance, automatic search, and video indexing. It can also support assistance to elderly and frail persons, improving their quality of life. Human activity recognition remains a difficult task because of the large variability in how actions are executed, especially when recognition is realized through an external device, such as a robot acting as a personal assistant; the inferred information is used both online, to assist the person, and offline, to support the personal assistant. The proposed method is robust against these sources of variability, and the main purpose of the paper is an efficient and simple recognition method that uses only egocentric camera data with a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a margin of 61% when using egocentric camera data only, by more than 44% when using egocentric and several stationary cameras, and by more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
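
The abstract above describes training a convolutional neural network on egocentric camera frames. A minimal sketch of such a per-frame activity classifier in PyTorch is given below; the class count, input resolution, and layer sizes are illustrative assumptions and not the authors' architecture.

```python
# Hypothetical sketch: a small CNN that classifies single egocentric frames
# into activity categories. Class count and layer sizes are illustrative only.
import torch
import torch.nn as nn

class EgocentricActivityCNN(nn.Module):
    def __init__(self, num_classes: int = 10):  # assumed number of activities
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of RGB frames, shape (N, 3, H, W)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

if __name__ == "__main__":
    model = EgocentricActivityCNN(num_classes=10)
    frames = torch.randn(4, 3, 224, 224)   # dummy egocentric frames
    logits = model(frames)                  # (4, 10) class scores
    print(logits.argmax(dim=1))             # predicted activity per frame
```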

Development of multi-line laser vision sensor and welding application (Korean title: A Study on High-Speed 3-D Measurement and Modeling Using a Multi-line Laser Vision Sensor)

  • 성기은;이세헌
    • Proceedings of the Korean Society of Precision Engineering Conference / 2002.05a / pp.169-172 / 2002
  • A laser vision sensor measures range data using a structured laser light source, typically patterned as a single line. However, a single-line sensor cannot satisfy the trend toward faster and more precise processing. The sensor's sampling rate increases as image processing time is reduced, but it cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range profiles can be measured from a single image. With a camera of the same sampling rate, the number of 2D range profiles per second is directly proportional to the number of laser lines; for example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions.
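
The proportionality claimed at the end of the abstract (profiles per second = camera frame rate × number of laser lines) can be checked directly; the 30 fps rate and the 5-line example below are the figures quoted in the abstract.

```python
# Profiles per second for a multi-line laser vision sensor:
# each camera frame yields one 2-D range profile per projected laser line.
def profiles_per_second(frame_rate_fps: float, num_laser_lines: int) -> float:
    return frame_rate_fps * num_laser_lines

if __name__ == "__main__":
    for lines in (1, 3, 5):
        rate = profiles_per_second(30, lines)   # 30 fps camera, as in the abstract
        print(f"{lines} laser line(s): {rate:.0f} profiles/s")
    # 5 lines at 30 fps -> 150 profiles/s, matching the abstract's example
```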

DEVELOPMENT OF LASER VISION SENSOR WITH MULTI-LINE

  • Kieun Sung;Sehun Rhee;Yun, Jae-Ok
    • Proceedings of the KWS Conference / 2002.10a / pp.324-329 / 2002
  • In general, a laser vision sensor makes it possible to design a highly reliable and precise range sensor at low cost. When the laser vision sensor is applied to lap joint welding, however, there are many limitations, so a specially designed hardware system has to be used. If multiple lines are used instead of a single line, multiple range profiles can be generated from one image: even at a fixed 30 fps, the amount of 2D range data generated increases with the number of lines used. In this study, a laser vision sensor with a multi-line pattern is developed with a conventional CCD camera to carry out high-speed seam tracking in lap joint welding.

Development of Laser Vision Sensor with Multi-line for High Speed Lap Joint Welding

  • Sung, K.;Rhee, S.
    • International Journal of Korean Welding Society / v.2 no.2 / pp.57-60 / 2002
  • In general, a laser vision sensor makes it possible to design a highly reliable and precise range sensor at low cost. When the laser vision sensor is applied to lap joint welding, however, there are many limitations, so a specially designed hardware system has to be used. If multiple lines are used instead of a single line, multiple range profiles can be generated from one image: even at a fixed 30 fps, the amount of 2D range data generated increases with the number of lines used. In this study, a laser vision sensor with a multi-line pattern is developed with a conventional CCD camera to carry out high-speed seam tracking in lap joint welding.
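
Laser vision sensors of this kind recover range by triangulation: each stripe pixel in the camera image maps to a depth through the camera-laser geometry. The sketch below extracts one stripe's sub-pixel row per image column and converts it to range under an assumed, simplified geometry; the baseline, focal length, and peak-detection scheme are illustrative and not the sensor described in the paper.

```python
# Hypothetical sketch: extract a laser stripe from a grayscale image and
# convert its row positions to range by simple triangulation.
# Geometry (baseline, focal length, reference row) is assumed, not the paper's.
import numpy as np

def stripe_rows(image: np.ndarray, threshold: float = 50.0) -> np.ndarray:
    """For each column, return the intensity-weighted row of the laser stripe
    (NaN where no pixel exceeds the threshold)."""
    h, w = image.shape
    rows = np.arange(h, dtype=float).reshape(h, 1)                   # (h, 1)
    weights = np.where(image > threshold, image.astype(float), 0.0)  # (h, w)
    total = weights.sum(axis=0)                                      # (w,)
    centroid = np.full(w, np.nan)
    valid = total > 0
    centroid[valid] = (weights * rows).sum(axis=0)[valid] / total[valid]
    return centroid

def rows_to_range(rows_px: np.ndarray, focal_px: float = 800.0,
                  baseline_mm: float = 60.0, ref_row: float = 240.0) -> np.ndarray:
    """Toy triangulation: range is inversely proportional to the stripe's
    offset from a reference row (an assumed, simplified geometry)."""
    offset = rows_px - ref_row
    with np.errstate(divide="ignore", invalid="ignore"):
        return focal_px * baseline_mm / offset

if __name__ == "__main__":
    img = np.zeros((480, 640), dtype=np.uint8)
    img[300, :] = 200                      # synthetic stripe on row 300
    profile = rows_to_range(stripe_rows(img))
    print(profile[:5])                     # one 2-D range profile per line per frame
```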

High speed seam tracking using multi-line laser vision sensor (멀티 라인 레이저 비전 센서를 이용한 고속 용접선 추적 기술)

  • 성기은;이세헌
    • Proceedings of the Korean Society of Precision Engineering Conference / 2002.10a / pp.584-587 / 2002
  • A laser vision sensor measures range data using a structured laser light source, typically patterned as a single line. However, a single-line sensor cannot satisfy the trend toward faster and more precise processing. The sensor's sampling rate increases as image processing time is reduced, but it cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range profiles can be measured from a single image. With a camera of the same sampling rate, the number of 2D range profiles per second is directly proportional to the number of laser lines; for example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions.
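
The paper's title concerns seam tracking, which the abstract does not detail; as a loosely related illustration only, the sketch below locates a lap-joint step in a single range profile and converts its offset from the desired position into a lateral torch correction. The joint model, pixel pitch, and target column are assumptions, not the authors' method.

```python
# Hypothetical sketch: locate a lap joint in a range profile as the largest
# step discontinuity, then convert the offset from the profile centre into a
# lateral correction for the welding torch. All numbers are illustrative.
import numpy as np

def joint_column(profile_mm: np.ndarray) -> int:
    """Index of the largest height step in a range profile (lap-joint edge)."""
    return int(np.argmax(np.abs(np.diff(profile_mm)))) + 1

def lateral_correction(profile_mm: np.ndarray, mm_per_column: float = 0.1) -> float:
    """Signed correction (mm) to steer the torch back over the joint,
    measured relative to the centre column of the profile."""
    target_column = len(profile_mm) // 2
    return (joint_column(profile_mm) - target_column) * mm_per_column

if __name__ == "__main__":
    # Synthetic lap joint: a 2 mm step located 40 columns right of centre.
    profile = np.concatenate([np.full(360, 10.0), np.full(280, 8.0)])
    print(f"correction: {lateral_correction(profile):+.1f} mm")   # -> +4.0 mm
```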

Effective Real Time Tracking System using Stereo Vision

  • Lee, Hyun-Jin;Kuc, Tae-Young
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.70.1-70 / 2001
  • Research on visual control has recently become essential in robotic applications, and acquiring 3D information from 2D images is growing more important with the development of vision systems. We propose an effective way of controlling a stereo vision tracking system for target tracking and for calculating the distance between the target and the camera. In this paper we present an improved controller using a dual-loop visual servo, which is more effective than a single-loop visual servo for a stereo vision tracking system. Both speed and accuracy are important for realizing real-time tracking; however, vision processing is too slow to track an object in real time using vision feedback alone, so we also use feedback data from the controller, which offers state feedback ...
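
The abstract contrasts a single vision feedback loop with a dual-loop scheme in which a fast inner loop runs on controller state feedback while the slower vision loop periodically refreshes the target estimate. A minimal sketch of that loop structure is shown below; the rates, gain, and one-dimensional plant are illustrative assumptions, not the authors' controller.

```python
# Hypothetical sketch of a dual-loop visual servo: a fast inner loop uses
# state (encoder) feedback at every control tick, while the slow vision loop
# updates the target estimate only every `vision_period` ticks.
def dual_loop_servo(target_true: float, ticks: int = 200,
                    vision_period: int = 10,   # vision runs 10x slower (assumed)
                    k_inner: float = 0.2) -> float:
    position = 0.0
    target_estimate = 0.0                      # last value reported by vision
    for t in range(ticks):
        if t % vision_period == 0:             # slow outer loop: vision update
            target_estimate = target_true      # stands in for a stereo measurement
        error = target_estimate - position     # fast inner loop: state feedback
        position += k_inner * error            # proportional correction per tick
    return position

if __name__ == "__main__":
    print(f"final position: {dual_loop_servo(target_true=1.0):.3f}")  # ~1.000
```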

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society / v.10 no.6 / pp.716-725 / 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. Motion data for the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When the two hands or two feet cross at some position and separate again, an adaptive algorithm recognizes which is the left one and which is the right one. Real motion is motion in 3D coordinates, whereas a mono image provides only 2D coordinates and cannot give the distance from the camera. With stereo vision, as with human vision, 3D motion can be acquired, including left-right motion and the distance of objects from the camera. Transforming to 3D coordinates requires a depth value (z axis) in addition to the x and y coordinates of the mono image; this depth is calculated from the stereo disparity, using only the end effectors in the images. The positions of the inner joints are then calculated, and the 3D character is visualized using inverse kinematics.
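
The depth computation mentioned in the abstract follows the standard stereo relation Z = f·B/d, where f is the focal length, B the baseline between the two cameras, and d the disparity of a matched point (here, an end-effector blob). A minimal sketch with assumed calibration values:

```python
# Standard stereo triangulation: depth Z = f * B / d.
# Focal length, baseline, and pixel coordinates are assumed example values.
def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,       # assumed focal length (pixels)
                         baseline_m: float = 0.12) -> float:  # assumed baseline (metres)
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a matched point")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    # A hand blob found at column 420 in the left image and 396 in the right:
    d = 420 - 396                                        # disparity = 24 px
    print(f"depth: {depth_from_disparity(d):.2f} m")     # 700*0.12/24 = 3.50 m
```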

Reverse Engineering of Compound Surfaces on the Machine Tool using a Vision Probe (비전 프로브를 이용한 기상에서의 복합곡면의 역공학)

  • 김경진;윤길상;초명우;권혁동;서태일
    • Proceedings of the Korean Society of Precision Engineering Conference / 2002.05a / pp.287-292 / 2002
  • This paper presents a reverse engineering method for compound surfaces using a vision system. A CNC machining center equipped with a slit-beam generator and a vision probe is used as the measuring station. Since data obtained with a slit beam or laser scanner may suffer considerable loss along the edges of compound surfaces, an algorithm is presented to recover the missing geometric data in such regions. First, B-spline interpolation is applied to extract edge information of the surface; next, B-spline approximation is applied to recover the missing geometric data; finally, the B-spline skinning method is applied to regenerate the surface. Simulations and experiments are performed to verify the effectiveness of the proposed methods.
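
As a rough illustration of the B-spline approximation step, the sketch below fits a smoothing spline to a scanned profile with a gap and evaluates it across the missing region; the synthetic profile, gap location, and smoothing factor are invented for illustration, and SciPy's generic splrep/splev routines stand in for the authors' B-spline formulation.

```python
# Hypothetical sketch: recover missing points of a scanned profile by fitting
# a B-spline approximation to the valid samples and evaluating it in the gap.
# The synthetic profile and smoothing factor are illustrative only.
import numpy as np
from scipy.interpolate import splrep, splev

# Synthetic scanned cross-section with a gap near the surface edge.
x = np.linspace(0.0, 10.0, 101)
z = 0.05 * (x - 5.0) ** 2 + 0.02 * np.random.default_rng(0).normal(size=x.size)
measured = (x < 6.0) | (x > 7.5)            # samples lost between 6.0 and 7.5

# Fit a cubic B-spline approximation to the measured samples only.
tck = splrep(x[measured], z[measured], k=3, s=0.05)

# Evaluate the spline across the gap to recover the missing geometry.
gap_x = x[~measured]
recovered_z = splev(gap_x, tck)
print(np.column_stack([gap_x, recovered_z])[:3])   # first few recovered points
```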

Development of vision system for quality inspection of automotive parts and comparison of machine learning models (자동차 부품 품질검사를 위한 비전시스템 개발과 머신러닝 모델 비교)

  • Park, Youngmin;Jung, Dong-Il
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.409-415 / 2022
  • In computer vision, an image of a measurement target is acquired with a camera, and feature values, vectors, and regions are detected by applying algorithms and library functions. The detected data are then computed and analyzed in various forms depending on the purpose of use. Computer vision is applied in many areas, especially in automatically recognizing automobile parts or measuring their quality; in industry it is usually referred to as machine vision, and it is combined with artificial intelligence to judge product quality or predict results. In this study, a vision system for judging the quality of automobile parts was built, and the results of five machine learning classification models applied to the produced data were compared.
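
The abstract does not name the five classification models, so the sketch below only illustrates the comparison pattern with a few common scikit-learn classifiers on placeholder feature vectors; the model list, features, and labels are assumptions rather than the study's setup.

```python
# Hypothetical sketch: compare several classifiers on inspection feature data.
# The feature matrix, labels, and chosen models are placeholders, not the
# five models evaluated in the paper.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # e.g. 8 vision features per part
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # 0 = defect, 1 = pass (synthetic)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k-NN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name:20s} mean accuracy = {scores.mean():.3f}")
```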