• Title/Summary/Keyword: Image pixel


Development of Color Recognition Algorithm for Traffic Lights using Deep Learning Data (딥러닝 데이터 활용한 신호등 색 인식 알고리즘 개발)

  • Baek, Seoha;Kim, Jongho;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.14 no.2
    • /
    • pp.45-50
    • /
    • 2022
  • The vehicle motion in an urban environment is determined by the surrounding traffic flow, which makes understanding that flow a dominant factor in the motion planning of the vehicle. The traffic flow in this urban environment is assessed using various urban infrastructure information. This paper presents a color recognition algorithm for traffic lights to perceive the traffic condition, which is a key item among the various urban infrastructure information. A deep learning based vision open source localizes the positions of traffic lights around the host vehicle. The detections are processed into input data based on whether they lie on the route of the ego vehicle. The colors of the traffic lights are estimated through pixel values from the camera image. The proposed algorithm is validated in intersection situations with traffic lights on a test track. The results show that the proposed algorithm guarantees precise recognition of the traffic lights associated with the ego vehicle's path in urban intersection scenarios.
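The pixel-value-based color estimation described in this abstract can be sketched as follows. The paper does not publish its thresholds or color space, so the HSV hue cutoffs and the `classify_light_color` helper below are illustrative assumptions, not the authors' method:

```python
import colorsys

def classify_light_color(rgb_pixels):
    """Classify a traffic-light state from sampled pixel values.

    rgb_pixels: list of (r, g, b) tuples (0-255) sampled inside the
    detected traffic-light bounding box.
    """
    # Average the sampled pixels to suppress per-pixel noise.
    n = len(rgb_pixels)
    r = sum(p[0] for p in rgb_pixels) / n
    g = sum(p[1] for p in rgb_pixels) / n
    b = sum(p[2] for p in rgb_pixels) / n

    # Convert to HSV: hue separates red/yellow/green more robustly
    # than raw RGB channels under varying brightness.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0

    if v < 0.2 or s < 0.3:
        return "off"  # too dark or too unsaturated to decide
    if hue_deg < 30 or hue_deg > 330:
        return "red"
    if 30 <= hue_deg < 75:
        return "yellow"
    if 75 <= hue_deg < 180:
        return "green"
    return "unknown"
```

In practice the pixels would be sampled from the lamp region returned by the detector, and temporal filtering over several frames would smooth out flicker and partial occlusion.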

Deformation estimation of truss bridges using two-stage optimization from cameras

  • Jau-Yu Chou;Chia-Ming Chang
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.409-419
    • /
    • 2023
  • Structural integrity can be assessed from the dynamic deformations of structures, and those deformations can be acquired with non-contact sensors such as video cameras. The Kanade-Lucas-Tomasi (KLT) algorithm is one of the most commonly used methods for motion tracking. However, averaging across the extracted features induces bias in the measurement. Pixel-wise measurements can be converted to physical units through the camera intrinsics, but the depth information remains unreachable without prior knowledge of the scene geometry. The assigned homogeneous coordinates then mismatch the manually selected feature points, resulting in measurement errors during coordinate transformation. In this study, a two-stage optimization method for video-based measurements is proposed. The manually selected feature points are first optimized by minimizing their errors with respect to the homogeneous coordinates. The optimized points are then used by the KLT algorithm to extract displacements through inverse projection. Two additional criteria are employed to eliminate outliers from KLT, yielding more reliable displacement responses. The second-stage optimization subsequently fine-tunes the geometry of the selected coordinates. The optimization process also considers the number of interpolation points at different depths of an image to reduce the effect of out-of-plane motions. The proposed method is numerically investigated using a truss bridge as a physics-based graphic model (PBGM), extracting high-accuracy displacements from recorded videos under various capturing angles and structural conditions.
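The pixel-to-physical conversion this abstract refers to follows from the pinhole camera model: a feature at depth Z that moves dx pixels on a sensor with focal length fx (in pixels) has moved roughly dx * Z / fx in metric units. The helper below is a minimal sketch of that relation, not the paper's two-stage optimization; the depth value must come from prior scene knowledge, which is exactly the limitation the paper addresses:

```python
def pixel_to_physical(dx_px, dy_px, depth_m, fx_px, fy_px):
    """Convert a pixel-space displacement to metres with the pinhole
    model. dx_px, dy_px: displacement in pixels; depth_m: feature
    depth along the optical axis; fx_px, fy_px: focal lengths in
    pixel units from the camera intrinsics.
    """
    # Valid for small in-plane motion; out-of-plane motion changes
    # depth_m and biases the result, as the abstract notes.
    return dx_px * depth_m / fx_px, dy_px * depth_m / fy_px
```

For example, a 10-pixel horizontal shift of a truss node 20 m away, seen by a camera with a 2000-pixel focal length, corresponds to about 0.1 m of physical motion.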

Automatic assessment of post-earthquake buildings based on multi-task deep learning with auxiliary tasks

  • Zhihang Li;Huamei Zhu;Mengqi Huang;Pengxuan Ji;Hongyu Huang;Qianbing Zhang
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.383-392
    • /
    • 2023
  • Post-earthquake building condition assessment is crucial for subsequent rescue and remediation and can be automated by emerging computer vision and deep learning technologies. This study is based on an entry for the 2nd International Competition of Structural Health Monitoring (IC-SHM 2021). The task package includes five image segmentation objectives: three defect types (crack/spall/rebar exposure), structural component, and damage state. The structural component and damage state tasks are identified as the priorities that can inform actionable decisions. A multi-task Convolutional Neural Network (CNN) is proposed to conduct these two major tasks simultaneously, with the remaining three sub-tasks (spall/crack/rebar exposure) incorporated as auxiliary tasks. By synchronously learning defect information (spall/crack/rebar exposure), the multi-task CNN model outperforms the counterpart single-task models in recognizing structural components and estimating damage states. In particular, the pixel-level damage state estimation shows a mIoU (mean intersection over union) improvement from 0.5855 to 0.6374. Among the defect detection tasks, rebar exposure is omitted due to its extremely biased sample distribution. The segmentation of crack and spall is automated by single-task U-Nets, with extra effort to resample the provided data. The segmentation of these small objects benefits from the resampling method, with a substantial IoU increment of nearly 10%.
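The mIoU metric quoted in this abstract (0.5855 to 0.6374) is the mean, over classes, of per-class intersection-over-union between predicted and ground-truth pixel labels. A minimal reference implementation, assuming flat label lists rather than the paper's image tensors:

```python
def miou(pred, target, num_classes):
    """Mean intersection-over-union for pixel-level segmentation.

    pred, target: flat lists of per-pixel class labels.
    Classes absent from both prediction and target are skipped
    so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

Because each class contributes equally regardless of its pixel count, mIoU rewards exactly the kind of small-object (crack/spall) improvement the resampling method targets.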

REAL-TIME 3D MODELING FOR ACCELERATED AND SAFER CONSTRUCTION USING EMERGING TECHNOLOGY

  • Jochen Teizer;Changwan Kim;Frederic Bosche;Carlos H. Caldas;Carl T. Haas
    • International conference on construction engineering and project management
    • /
    • 2005.10a
    • /
    • pp.539-543
    • /
    • 2005
  • The research presented in this paper enables real-time 3D modeling to help make construction processes ultimately faster, more predictable, and safer. Initial research efforts used an emerging sensor technology and proved its usefulness in acquiring range information for the detection and efficient representation of static and moving objects. Based on the time-of-flight principle, the sensor acquires range and intensity information for each image pixel within the sensor's entire field-of-view in real time, at frequencies of up to 30 Hz. However, real-time range data processing algorithms still need to be developed to rapidly turn range information into meaningful 3D computer models. This research ultimately targets safer heavy equipment operation. The paper compares (a) a previous research effort in convex hull modeling using sparse range point clouds from a single laser beam range finder with (b) high-frame-rate Flash LADAR (Laser Detection and Ranging) scanning for complete scene modeling. The presented research will demonstrate whether Flash LADAR technology can play an important role in the real-time modeling of infrastructure assets in the near future.
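The time-of-flight principle mentioned above reduces to one relation per pixel: the emitted pulse travels to the target and back, so the one-way range is c * t / 2. A minimal sketch (sensor-specific timing and phase-unwrapping details are omitted):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s):
    """Per-pixel range from a time-of-flight measurement.

    round_trip_s: measured round-trip travel time of the light
    pulse in seconds. The factor of 2 accounts for the out-and-back
    path, giving the one-way distance to the target.
    """
    return C * round_trip_s / 2.0
```

At 30 Hz this computation must run for every pixel in the array within a 33 ms frame budget, which is why the abstract stresses the need for fast range-processing algorithms.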


Laboratory geometric calibration simulation analysis of push-broom satellite imaging sensor

  • Hafshejani, Reza Sh.;Haghshenas, Javad
    • Advances in aircraft and spacecraft science
    • /
    • v.10 no.1
    • /
    • pp.67-82
    • /
    • 2023
  • Linear array imaging sensors are widely used in remote sensing satellites. The products of an imaging sensor can only be used once they are geometrically, radiometrically, and spectrally calibrated. Therefore, at the earliest stages of sensor design, a detailed calibration procedure must be carefully planned based on the accuracy requirements. In this paper, focusing on inherent optical distortion, a step-by-step procedure for the laboratory geometric calibration of a typical push-broom satellite imaging sensor is simulated. The basis of this work is the simulation of a laboratory procedure in which a linear imager mounted on a rotary table captures images of a pin-hole pattern at different angles. From these images and their corresponding pinhole approximation, a correction function is extracted and applied to the raw images to produce corrected ones. The simulation results illustrate that with this approach the nonlinear effects of distortion can be minimized, and the geometric positioning accuracy on the image plane can therefore be improved to the sub-pixel level. The analyses can also be used to select a proper laboratory facility based on the imaging sensor specifications and the accuracy requirements.
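The correction function described above removes nonlinear optical distortion from raw image coordinates. The paper's actual model is not given in the abstract, so the sketch below assumes the common Brown radial model and inverts it by fixed-point iteration; the coefficients k1, k2 would come from the pin-hole pattern measurements:

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Brown radial model: a normalized image point (x, y) maps to
    the distorted point (x, y) * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort_iterative(xd, yd, k1, k2=0.0, iters=10):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the distorted point by the scale evaluated at the current
    estimate of the undistorted point. Converges quickly for the
    small distortions typical of calibrated optics."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y
```

Applying `undistort_iterative` to every pixel coordinate plays the role of the abstract's correction function; residuals after a few iterations are far below one pixel, consistent with the claimed sub-pixel accuracy.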

Mineral Image Analysis Technique (광물이미지 분석 기법)

  • Shin, Kwang-seong;Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.353-354
    • /
    • 2021
  • In this study, to overcome the limitations of particle size analysis methods that use a scanner, a microscope, or a laser, and to reduce cost, high-quality sampling of micro minerals is performed using an ultra-high-pixel DSLR camera and a macro lens. Digital photographs of standard mineral particles are then analyzed to distinguish the size and shape of mineral particles at the sand-grain scale (a few mm down to 0.063 mm). In addition, various photographing techniques for producing three-dimensional images of mineral particles were explored, and an attempt was made to produce learning materials and images for mineral classification.
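Pixel-based grain sizing of the kind described above rests on one calibration step: a reference object of known length in the frame fixes the mm-per-pixel scale, which converts any particle's pixel extent to millimetres. The helpers and the 0.063 mm sand boundary below follow the size range quoted in the abstract; the function names are illustrative:

```python
def mm_per_pixel(ref_len_mm, ref_len_px):
    """Scale factor from a reference object of known physical size
    imaged in the same frame as the particles."""
    return ref_len_mm / ref_len_px

def grain_size_mm(particle_px, scale_mm_per_px):
    """Equivalent particle size in mm from its extent in pixels."""
    return particle_px * scale_mm_per_px

def classify_grain(size_mm):
    """Coarse size class using the 0.063 mm sand boundary from the
    abstract; finer Wentworth subdivisions are omitted."""
    if size_mm < 0.063:
        return "silt/clay"
    if size_mm < 2.0:
        return "sand"
    return "coarser than sand"
```

A 10 mm scale bar spanning 4000 pixels gives 0.0025 mm/px, so a particle 200 pixels across measures 0.5 mm and classifies as sand, comfortably inside the stated measurable range.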


Development of Deep Learning-based Land Monitoring Web Service (딥러닝 기반의 국토모니터링 웹 서비스 개발)

  • In-Hak Kong;Dong-Hoon Jeong;Gu-Ha Jeong
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.275-284
    • /
    • 2023
  • Land monitoring involves systematically understanding changes in land use, leveraging spatial information such as satellite imagery and aerial photographs. Recently, the integration of deep learning technologies, notably object detection and semantic segmentation, into land monitoring has spurred active research. This study developed a web service to facilitate such integrations, allowing users to analyze aerial and drone images using CNN models. The web service architecture comprises AI, WEB/WAS, and DB servers and employs three primary deep learning models: DeepLab V3, YOLO, and Rotated Mask R-CNN. Specifically, YOLO offers rapid detection capabilities, Rotated Mask R-CNN excels in detecting rotated objects, while DeepLab V3 provides pixel-wise image classification. The performance of these models fluctuates depending on the quantity and quality of the training data. Anticipated to be integrated into the LX Corporation's operational network and the Land-XI system, this service is expected to enhance the accuracy and efficiency of land monitoring.

Asymmetric Metal-Semiconductor-Metal Al0.24Ga0.76N UV Sensors with Surface Passivation Effect Under Local Joule Heating

  • Byeong-Jun Park;Sung-Ho Hahm
    • Journal of Sensor Science and Technology
    • /
    • v.32 no.6
    • /
    • pp.425-431
    • /
    • 2023
  • An asymmetric metal-semiconductor-metal Al0.24Ga0.76N ultraviolet (UV) sensor was fabricated, and the effects of local Joule heating were investigated. After dielectric breakdown, the current density under a reverse bias of 2.0 V was 1.1×10⁻⁹ A/cm², significantly lower than the 1.2×10⁻⁸ A/cm² before dielectric breakdown; moreover, the Schottky behavior of the Ti/Al/Ni/Au electrode changed to ohmic behavior under forward bias. The UV-to-visible rejection ratio (UVRR) under a reverse bias of 7.0 V was 87 before dielectric breakdown; afterwards it increased significantly to 578, in addition to providing highly reliable responsivity. Transmission electron microscopy revealed interdiffusion between adjacent layers, with nitrogen vacancies possibly formed owing to local Joule heating at the AlGaN/Ti/Al/Ni/Au interfaces. X-ray photoelectron spectroscopy results revealed decreases in the peak intensities of the O 1s binding energies associated with the Ga-O bond and OH⁻, which act as electron-trapping states on the AlGaN surface. The reduction in dark current owing to the proposed local heating method is expected to increase the sensing performance of UV optoelectronic integrated devices, such as active-pixel UV image sensors.

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.67-72
    • /
    • 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. Real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles in particular, efficient fusion of the data from these two sensor types is important for estimating the depth of objects as well as for classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LiDAR point cloud is upsampled and converted into pixel-level depth information, which is concatenated with the red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is designed to deliver both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
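The fusion step this abstract describes amounts to bringing the sparse LiDAR depth to pixel resolution and stacking it as a fourth channel on the RGB image. The sketch below uses nearest-neighbour upsampling as a stand-in for the paper's unspecified upsampling method, and plain nested lists in place of image tensors:

```python
def upsample_nearest(depth, factor):
    """Nearest-neighbour upsampling of a coarse depth grid toward
    pixel resolution (a simple stand-in for the paper's upsampling
    step). Each coarse cell is replicated factor x factor times."""
    out = []
    for row in depth:
        wide = [d for d in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

def fuse_rgbd(rgb_image, depth_map):
    """Concatenate a dense depth channel onto an RGB image, giving
    the per-pixel 4-channel (R, G, B, D) input fed to the CNN.
    rgb_image: H x W grid of (r, g, b); depth_map: H x W depths."""
    fused = []
    for rgb_row, d_row in zip(rgb_image, depth_map):
        fused.append([[r, g, b, d] for (r, g, b), d in zip(rgb_row, d_row)])
    return fused
```

The resulting H x W x 4 array is what a standard deep CNN would consume in place of a 3-channel image, letting the first convolution learn joint color-depth features.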

ShadowCam Instrument and Investigation Overview

  • Mark Southwick Robinson;Scott Michael Brylow;Michael Alan Caplinger;Lynn Marie Carter;Matthew John Clark;Brett Wilcox Denevi;Nicholas Michael Estes;David Carl Humm;Prasun Mahanti;Douglas Arden Peckham;Michael Andrew Ravine;Jacob Andrieu Schaffner;Emerson Jacob Speyerer;Robert Vernon Wagner
    • Journal of Astronomy and Space Sciences
    • /
    • v.40 no.4
    • /
    • pp.149-171
    • /
    • 2023
  • ShadowCam is a National Aeronautics and Space Administration Advanced Exploration Systems funded instrument hosted onboard the Korea Aerospace Research Institute (KARI) Korea Pathfinder Lunar Orbiter (KPLO) satellite. By collecting high-resolution images of permanently shadowed regions (PSRs), ShadowCam will provide critical information about the distribution and accessibility of water ice and other volatiles at spatial scales (1.7 m/pixel) required to mitigate risks and maximize the results of future exploration activities. The PSRs never see direct sunlight and are illuminated only by light reflected from nearby topographic highs. Since secondary illumination is very dim, ShadowCam was designed to be over 200 times more sensitive than previous imagers like the Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC). ShadowCam images thus allow for unprecedented views into the shadows, but saturate while imaging sunlit terrain.