• Title/Summary/Keyword: Synthetic Vision


A Novel Approach to Mugshot Based Arbitrary View Face Recognition

  • Zeng, Dan; Long, Shuqin; Li, Jing; Zhao, Qijun
    • Journal of the Optical Society of Korea / v.20 no.2 / pp.239-244 / 2016
  • Mugshot face images, routinely collected by police, usually contain both frontal and profile views. Existing automated face recognition methods have exploited mugshot databases by enlarging the gallery with synthetic multi-view face images generated from the mugshot images. This paper instead proposes to match a query face image of arbitrary view directly against the enrolled frontal and profile face images. During matching, a 3D face shape model reconstructed from the mugshot images is used to establish corresponding semantic parts between the query and gallery face images, and the comparison is carried out over these corresponding parts. The final recognition result is obtained by fusing the matching results against the frontal and profile images. Compared with previous methods, the proposed method better utilizes mugshot databases without relying on synthetic face images that may contain artifacts. Its effectiveness has been demonstrated on the Color FERET and CMU PIE databases.
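
As a loose illustration of the score-fusion step described in this abstract (not the authors' actual matcher), the sketch below fuses hypothetical per-subject similarity scores obtained against the frontal and profile galleries; all function names, scores, and weights are placeholders:

```python
import numpy as np

def fuse_scores(score_frontal, score_profile, w_frontal=0.5, w_profile=0.5):
    """Weighted-sum fusion of per-subject similarity scores from two galleries."""
    return w_frontal * np.asarray(score_frontal) + w_profile * np.asarray(score_profile)

# Hypothetical similarity scores of one query against three enrolled subjects.
s_frontal = [0.62, 0.35, 0.48]   # query vs. frontal gallery images
s_profile = [0.55, 0.40, 0.30]   # query vs. profile gallery images

fused = fuse_scores(s_frontal, s_profile)
print("fused scores:", fused, "-> identified subject:", int(np.argmax(fused)))
```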

Game Engine Driven Synthetic Data Generation for Computer Vision-Based Construction Safety Monitoring

  • Lee, Heejae; Jeon, Jongmoo; Yang, Jaehun; Park, Chansik; Lee, Dongmin
    • International conference on construction engineering and project management / 2022.06a / pp.893-903 / 2022
  • Recently, computer vision (CV)-based safety monitoring (i.e., object detection) systems have been widely researched in the construction industry. Collecting sufficient high-quality data is required to detect objects accurately, and is especially important for detecting small objects or objects viewed from different camera angles. Although several previous studies have proposed data augmentation and synthetic data generation approaches, accuracy remains limited in the dynamic construction work environment. In this study, we propose a game engine-driven synthetic data generation model to enhance the accuracy of CV-based object detection, mainly targeting small objects. In a virtual 3D environment, we generate synthetic data that complements the training images by varying the virtual camera angles. The main contribution of this paper is to confirm whether synthetic data generated in a game engine can improve the accuracy of a CV-based object detection model.
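
The camera-angle variation idea can be sketched independently of any particular game engine; in the snippet below, `render_scene` is a hypothetical placeholder standing in for the engine's render call, and the orbit parameters are arbitrary assumptions:

```python
import math

def camera_poses(radius=10.0, height=3.0, n_views=12):
    """Yield camera positions on a circle around the virtual construction scene."""
    for i in range(n_views):
        theta = 2.0 * math.pi * i / n_views
        x, y, z = radius * math.cos(theta), radius * math.sin(theta), height
        yield (x, y, z), theta

def render_scene(position, yaw):
    """Placeholder for the game-engine render call (hypothetical)."""
    return f"image rendered from {position} at yaw {yaw:.2f} rad"

synthetic_images = [render_scene(pos, yaw) for pos, yaw in camera_poses()]
print(len(synthetic_images), "synthetic training images generated")
```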


Essential Computer Vision Methods for Maximal Visual Quality of Experience on Augmented Reality

  • Heo, Suwoong; Song, Hyewon; Kim, Jinwoo; Nguyen, Anh-Duc; Lee, Sanghoon
    • Journal of International Society for Simulation Surgery / v.3 no.2 / pp.39-45 / 2016
  • Augmented reality is an environment that combines a real-world view with information drawn by a computer. Since the image a user sees through an augmented reality device is a synthetic image composed of a real view and a virtual image, it is important that the computer-generated virtual image harmonizes well with the real view. In this paper, we review several computer vision and graphics methods that give the user a realistic augmented reality experience. To generate a visually harmonized synthetic image consisting of real and virtual content, the computer must know the 3D geometry and environmental information such as lighting and material surface reflectivity. Many computer vision methods aim to estimate these quantities. We introduce approaches for acquiring geometric information, the lighting environment, and material surface properties from monocular or multi-view images. We expect this paper to give readers intuition about the computer vision methods needed to provide a realistic augmented reality experience.
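
As a toy illustration of the basic compositing step that the reviewed methods try to make photometrically consistent, a simple NumPy alpha blend of a virtual layer onto a real frame; all arrays here are synthetic stand-ins:

```python
import numpy as np

def composite(real, virtual, alpha):
    """Alpha-blend a rendered virtual layer onto the real camera image.
    real, virtual: HxWx3 float arrays in [0, 1]; alpha: HxW mask in [0, 1]."""
    return real * (1.0 - alpha[..., None]) + virtual * alpha[..., None]

real = np.random.rand(4, 4, 3)                  # stand-in for the camera frame
virtual = np.ones((4, 4, 3)) * 0.8              # stand-in for rendered content
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0   # where virtual content appears
print(composite(real, virtual, mask).shape)
```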

Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks

  • Zhai, Guanghao; Narazaki, Yasutaka; Wang, Shuo; Shajihan, Shaik Althaf V.; Spencer, Billie F. Jr.
    • Smart Structures and Systems / v.29 no.1 / pp.237-250 / 2022
  • Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have developed computer vision and machine learning techniques for SHM purposes, offering the potential to reduce the laborious nature and improve the effectiveness of field inspections. However, high-quality vision data from damaged structures is relatively difficult to obtain because damaged structures occur rarely. The lack of data is particularly acute for fatigue cracks in steel bridge girders. As a result, the lack of training data is one of the main issues hindering wider application of these powerful techniques for SHM. To address this problem, this article proposes the use of synthetic data to augment real-world datasets used for training neural networks that identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. Subsequently, this model is used to generate synthetic images under various lighting conditions and camera angles. A fully convolutional network is then trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. By employing synthetic data augmentation in the training process, the crack identification performance of the neural network on the test dataset improves from 35% to 40% for intersection over union (IoU) and from 49% to 62% for precision, demonstrating the efficacy of the proposed approach.
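
The intersection over union (IoU) and precision figures quoted above are standard pixel-wise metrics for binary masks; a minimal sketch of how they are typically computed (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def iou_and_precision(pred, gt):
    """Pixel-wise IoU and precision for binary masks (1 = crack, 0 = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return iou, precision

pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [0, 1, 1]])
print(iou_and_precision(pred, gt))   # -> (0.5, 0.666...)
```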

Occlusion Restoration of Synthetic Stereomate for Remote Sensing Imagery

  • Kim, Hye-Jin; Choi, Jae-Wan; Chang, Ho-Wook; Ryu, Ki-Yun
    • Korean Journal of Remote Sensing / v.23 no.5 / pp.439-445 / 2007
  • Stereoscopic viewing is an efficient technique not only for computer vision but also for remote sensing applications. Generally, a stereo pair acquired at the same time is necessary for 3D viewing, but a stereomate suitable for stereo viewing can be synthesized from a single image and a disparity map. Previous research has addressed the generation of synthetic stereomates from remote sensing imagery, but little work has addressed the restoration of occlusions in the stereomate. In this paper, we generate synthetic stereomates from remote sensing images with a focus on occlusion restoration. To identify appropriate restoration methods for different spatial resolutions of remote sensing imagery, we tested several methods, including general interpolation and inpainting techniques, and evaluated the results.
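
A rough sketch of the underlying idea: shift each pixel by its disparity to synthesize the mate view, then fill the exposed occlusion holes. OpenCV's generic inpainting is used here only as a stand-in for the restoration methods the paper actually compares:

```python
import numpy as np
import cv2

def synthesize_stereomate(image, disparity):
    """Warp a single image by an integer disparity map; return mate view and hole mask."""
    h, w = image.shape[:2]
    mate = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            xs = x + int(disparity[y, x])       # horizontal shift by disparity
            if 0 <= xs < w:
                mate[y, xs] = image[y, x]
                filled[y, xs] = 255
    holes = cv2.bitwise_not(filled)             # occluded pixels left unfilled
    restored = cv2.inpaint(mate, holes, 3, cv2.INPAINT_TELEA)
    return restored, holes

img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)   # stand-in image
disp = np.full((64, 64), 2, dtype=np.int32)                    # stand-in disparity map
mate, holes = synthesize_stereomate(img, disp)
```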

Learning Fuzzy Rules for Pattern Classification and High-Level Computer Vision

  • Rhee, Chung-Hoon
    • The Journal of the Acoustical Society of Korea / v.16 no.1E / pp.64-74 / 1997
  • In many decision-making systems, rule-based approaches are used to solve complex problems in the areas of pattern analysis and computer vision. In this paper, we present methods for automatically generating fuzzy IF-THEN rules from training data for pattern classification and high-level computer vision. The rules are generated by constructing minimal approximate fuzzy aggregation networks and then training the networks using gradient descent methods. The training data representing features are treated as linguistic variables that appear in the antecedent clauses of the rules. Methods to generate the corresponding linguistic labels (values) and their membership functions are presented. In addition, an inference procedure is employed to deduce conclusions from information presented to the rule base. Two experimental results involving synthetic and real data are given.
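
To make the fuzzy IF-THEN rule idea concrete, a minimal sketch of a triangular membership function and min-based rule firing; the linguistic labels and parameters below are invented for illustration, whereas the paper learns them from training data via gradient descent:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, with support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule: IF intensity is "dark" AND texture is "smooth" THEN class = background
def rule_firing(intensity, texture):
    mu_dark = tri(intensity, 0.0, 0.2, 0.5)
    mu_smooth = tri(texture, 0.0, 0.3, 0.6)
    return min(mu_dark, mu_smooth)   # min as the AND aggregation

print(rule_firing(0.25, 0.2))        # degree to which the rule fires
```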


Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong; Shin, Ok-Shik; Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences / v.11 no.1 / pp.31-40 / 2010
  • For weapon cueing and head-mounted displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve the performance of the vision processing, we separate structure estimation from motion estimation: the structure estimation tracks features belonging to the helmet model structure in the scene, and the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested with synthetic and real data, and the results show that the sensor fusion is successful.
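
A bare-bones extended Kalman filter skeleton showing the predict/update structure that such a fusion filter relies on; the state, models, and Jacobians below are generic placeholders rather than the helmet-tracking models themselves:

```python
import numpy as np

class EKF:
    def __init__(self, x0, P0):
        self.x, self.P = x0, P0

    def predict(self, f, F, Q):
        """f: state transition function, F: its Jacobian, Q: process noise."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H, R):
        """z: measurement, h: measurement function, H: its Jacobian, R: noise."""
        y = z - h(self.x)                        # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Toy constant-velocity example: state = [position, velocity]
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
ekf = EKF(np.zeros(2), np.eye(2))
ekf.predict(lambda x: F @ x, F, Q=1e-4 * np.eye(2))
ekf.update(z=np.array([0.05]), h=lambda x: x[:1], H=np.array([[1.0, 0.0]]), R=np.array([[1e-2]]))
print(ekf.x)
```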

Improving Performance of Machine Learning-based Haze Removal Algorithms with Enhanced Training Database

  • Ngo, Dat; Kang, Bongsoon
    • Journal of IKEEE / v.22 no.4 / pp.948-952 / 2018
  • Haze removal has attracted considerable research interest due to its many practical applications. Existing algorithms are founded upon histogram equalization, contrast maximization, or the growing trend of applying machine learning to image processing. Since machine learning-based algorithms solve problems from data, they usually perform better than those based on traditional image processing and computer vision techniques. However, achieving such high performance requires a large and reliable training database, which is difficult to obtain owing to the complexity of acquiring paired real hazy and haze-free images. As a result, researchers currently use synthetic databases, obtained by introducing synthetic haze drawn from the standard uniform distribution into clear images. In this paper, we propose the enhanced equidistribution, improving upon our previous study on equidistribution, and use it to build a new database for training machine learning-based haze removal algorithms. A large number of experiments verify the effectiveness of the proposed methodology.
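
The synthetic hazing recipe can be illustrated with the standard atmospheric scattering model I = J·t + A·(1 - t), drawing the transmission t from a uniform distribution; this is a sketch of the general approach, not the authors' enhanced equidistribution:

```python
import numpy as np

def add_synthetic_haze(clear, atmospheric_light=1.0, rng=None):
    """Apply the atmospheric scattering model with a uniformly drawn transmission.
    clear: HxWx3 float image in [0, 1]."""
    rng = rng or np.random.default_rng()
    t = rng.uniform(0.1, 1.0)                    # one transmission value per image
    hazy = clear * t + atmospheric_light * (1.0 - t)
    return hazy, t

clear = np.random.rand(8, 8, 3)                  # stand-in for a clear image
hazy, t = add_synthetic_haze(clear)
print("transmission used:", round(t, 3))
```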

Towing Tank Test assuming the Collision between Ice-going Ship and Ice Floe and Measurement of Ice Floe's Motion using Machine Vision Inspection (내빙선과 유빙의 충돌을 가정한 예인수조실험 및 머신비전검사를 이용한 유빙의 운동 계측)

  • Kim, Hyo-Il; Jun, Seung-Hwan
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2015.10a / pp.33-34 / 2015
  • Traffic and cargo volume passing through the Northern Sea Route (NSR) have gradually increased. Ship-ice collision is one of the biggest factors threatening the safe navigation of ice-going ships, and many researchers are trying to reveal the ship-ice collision mechanism. In this study, tests in which a model ship is forced to collide with disk-shaped synthetic ice are carried out in a towing tank, and the ice floe's motion (velocity and trajectory) is measured by machine vision inspection.
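
A minimal sketch of the kind of machine-vision measurement described: threshold the bright disk-shaped floe, compute its centroid in each frame, and difference centroids to obtain velocity. The threshold, frame rate, and pixel scale below are assumed values, not the experiment's calibration:

```python
import numpy as np
import cv2

def floe_centroid(frame_gray, thresh=200):
    """Return the centroid (x, y) of the bright disk-shaped floe, or None."""
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def velocity(c_prev, c_curr, dt, scale_m_per_px):
    """Finite-difference velocity in metres per second from two centroids."""
    dx, dy = c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]
    return (dx * scale_m_per_px / dt, dy * scale_m_per_px / dt)

# Two synthetic frames with the floe shifted by 3 pixels between them.
f1 = np.zeros((100, 100), dtype=np.uint8); cv2.circle(f1, (40, 50), 10, 255, -1)
f2 = np.zeros((100, 100), dtype=np.uint8); cv2.circle(f2, (43, 50), 10, 255, -1)
print(velocity(floe_centroid(f1), floe_centroid(f2), dt=1 / 30, scale_m_per_px=0.002))
```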


A design of window configuration for stereo matching (스테레오 매칭을 위한 Window 형상 설계)

  • 강치우; 정영덕; 이쾌희
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1991.10a / pp.1175-1180 / 1991
  • The purpose of this paper is to improve the matching accuracy in identifying corresponding points in area-based matching for stereo vision. For the selection of window size, a new method based on frequency-domain analysis is proposed, and its effectiveness is confirmed through a series of experiments. To overcome disproportionate distortion in the stereo image pair, a new matching method using a warped window is also proposed, in which the window is warped according to the imaging geometry. Experiments on a synthetic image show that the matching accuracy is improved by 14.1% and 4.2% over the rectangular-window method and the image-warping method, respectively.
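
For reference, a minimal sketch of conventional area-based matching with a fixed rectangular window and a sum-of-squared-differences cost; the warped-window method proposed in the paper would instead deform this window according to the imaging geometry:

```python
import numpy as np

def ssd_match(left, right, y, x, half=3, max_disp=16):
    """Find the disparity minimizing SSD between a window in the left image
    and candidate windows in the right image along the same scanline."""
    win_l = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - half - d < 0:
            break
        win_r = right[y - half:y + half + 1, x - half - d:x + half + 1 - d].astype(np.float64)
        cost = np.sum((win_l - win_r) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

left = np.random.randint(0, 255, (64, 64)).astype(np.uint8)
right = np.roll(left, -4, axis=1)          # synthetic pair with 4-pixel disparity
print(ssd_match(left, right, y=32, x=40))  # expected to report a disparity near 4
```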
