• Title/Summary/Keyword: color coordinate

Design of Scan Conversion Processor for 3-Dimensional Mobile Graphics Application (3차원 모바일 그래픽 응용을 위한 스캔 변환 프로세서의 설계)

  • Choi, Byeong-Yoon;Ha, Chang-Soo;Salcic, Zoran
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.11 / pp.2107-2115 / 2007
  • In this paper, a scan conversion processor is designed that converts a triangle represented by three vertices into pixel-level screen coordinates, depth coordinates, and color data. The processor adopts a scan-line algorithm that decomposes the triangle into horizontal spans and then transforms each span into pixel data. By supporting the top-left filling convention, it ensures that triangles sharing an edge produce no dropouts or overlaps between adjacent polygons. It consists of about 21,400 gates and its maximum operating frequency is about 80 MHz in a 0.35-μm CMOS technology. Because its maximum pixel rate is about 80 Mpixels/sec, it is applicable to mobile graphics applications.
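
As a rough illustration of the span decomposition described in this abstract, the Python sketch below rasterizes a triangle scanline by scanline. It is an assumption-laden software analogue, not the paper's hardware design: half-open pixel spans stand in for the top-left convention, so triangles that share an edge never fill the same pixel twice.

```python
# Software sketch of span-based triangle scan conversion (illustrative only).
import math

def edge_x(y, p0, p1):
    """x of the edge p0 -> p1 at scanline y (assumes p0.y != p1.y)."""
    (x0, y0), (x1, y1) = p0, p1
    t = (y - y0) / (y1 - y0)
    return x0 + t * (x1 - x0)

def rasterize_triangle(v0, v1, v2):
    """Yield (x, y) integer pixel positions covered by the triangle."""
    vs = sorted([v0, v1, v2], key=lambda p: p[1])      # sort vertices by y
    (x0, y0), (x1, y1), (x2, y2) = vs
    for y in range(math.ceil(y0), math.ceil(y2)):      # half-open in y
        xa = edge_x(y, (x0, y0), (x2, y2))             # long edge v0 -> v2
        if y < y1:
            xb = edge_x(y, (x0, y0), (x1, y1))         # upper short edge
        else:
            xb = edge_x(y, (x1, y1), (x2, y2))         # lower short edge
        left, right = sorted((xa, xb))
        for x in range(math.ceil(left), math.ceil(right)):  # half-open span
            yield x, y

if __name__ == "__main__":
    pixels = list(rasterize_triangle((1.0, 1.0), (8.0, 2.0), (3.0, 7.0)))
    print(len(pixels), "pixels, first few:", pixels[:5])
```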

Dioxygen Binding to the Singly Alkoxo-Bridged Diferrous Complex: Properties of [Fe$^{II}_2$(N-Et-HPTB)Cl$_2$]BPh$_4$

  • 김은석;이강봉;Jang, Ho G.
    • Bulletin of the Korean Chemical Society / v.17 no.12 / pp.1127-1131 / 1996
  • [Fe$^{II}_2$(N-Et-HPTB)Cl$_2$]BPh$_4$ (1), where N-Et-HPTB is the anion of N,N,N',N'-tetrakis(N-ethyl-2-benzimidazolylmethyl)-2-hydroxy-1,3-diaminopropane, has been synthesized to model dioxygen binding to the diferrous centers of proteins. 1 has a singly bridged structure with the μ-alkoxo group of N-Et-HPTB and contains two five-coordinate iron(II) centers with two chloride ligands as exogenous ligands. 1 exhibits an electronic spectrum with a λmax at 336 nm in acetone. 1 in acetone exhibits no EPR signal at 4 K, indicating that the diiron(II) centers are antiferromagnetically coupled. Exposure of an acetone solution of 1 to O2 at -90 °C affords an intense blue intermediate showing a broad band at 586 nm. This absorption maximum of the dioxygen adduct (1/O2) lies in the same region as those of μ-1,2-peroxo diiron(III) intermediates of related complexes with pendant pyridine or benzimidazole ligand systems. However, this blue intermediate exhibits EPR signals at g = 1.93, 1.76, and 1.59 at 4 K. These g values are characteristic of an S = 1/2 system derived from an antiferromagnetically coupled high-spin Fe(II)Fe(III) unit. 1 is a unique example of a (μ-alkoxo)diferrous complex that can bind dioxygen and form a metastable mixed-valence intermediate. At ambient temperature, most of the 1/O2 intermediate decays to form a diamagnetic species. This suggests that the decay reaction of the intermediate might be bimolecular, implying the formation of a mixed-valence tetranuclear species in the transition state.

Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong;Guo, Yapeng;Xu, Yang;Li, Zhonglong
    • Smart Structures and Systems / v.23 no.4 / pp.359-371 / 2019
  • As part of a structural health monitoring system, the relative geometric relationship between a ship and a bridge has been recognized as important for bridge authorities and ship owners seeking to avoid ship-bridge collisions. This study proposes a novel computer vision method for the real-time geometric parameter identification of moving ships based on a single shot multibox detector (SSD), using transfer learning techniques and monocular vision. The identification framework consists of ship detection (coarse scale) and geometric parameter calculation (fine scale) modules. For ship detection, the SSD, a deep learning algorithm, is employed and fine-tuned with ship image samples downloaded from the Internet to obtain rectangular regions of interest at the coarse scale. Subsequently, for the geometric parameter calculation, an accurate ship contour is created using morphological operations on the saturation channel of the hue, saturation, value (HSV) color space. Furthermore, a local coordinate system is constructed using a projective geometry transformation to calculate the geometric parameters of ships, such as width, length, height, localization, and velocity. The application of the proposed method to in situ video images, obtained from cameras set on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirmed the efficiency, accuracy, and effectiveness of the proposed method.
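
The fine-scale contour step outlined above can be sketched as follows. This is an illustrative OpenCV snippet under assumed settings (Otsu thresholding, a 5x5 elliptical kernel, an (x, y, w, h) box format), not the authors' implementation.

```python
# Hedged sketch: extract an object contour from a detected ROI using the
# saturation channel of HSV plus morphological opening/closing.
import cv2
import numpy as np

def ship_contour_from_roi(frame_bgr, roi):
    x, y, w, h = roi                                     # ROI from the SSD detector (assumed format)
    patch = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]                                   # saturation channel
    _, mask = cv2.threshold(sat, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    ship = max(contours, key=cv2.contourArea)            # largest blob assumed to be the ship
    return ship + np.array([x, y])                       # back to full-frame coordinates
```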

An Enhancement Technique for Backlit Images using Laplace Pyramid Fusion (라플라스 피라미드 융합을 이용한 역광영상의 개선 방법)

  • Kim, Jin Heon
    • Journal of IKEEE / v.26 no.2 / pp.292-298 / 2022
  • There is a limit to how much image quality can be improved through global processing of images taken under backlighting, because excessively bright and dark parts are mixed in one scene. This paper introduces a method that improves the quality of a photo by creating two virtual images that enhance the dark and bright areas of a backlit photo, respectively, and fusing them with the original image using a Laplacian pyramid. The proposed method reduces the computational burden by using histogram stretching and gamma transformation, both of which can be simplified with a LUT, when creating the two virtual images. In addition, to obtain a color-enhanced image, contrast conversion is performed only on the luminance, using the HSV coordinate system. The effectiveness of the proposed technique is shown by computing several NIQA indicators on standard image data sets.
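
A rough Python/OpenCV sketch of such a pipeline is given below. The gamma values, stretching limits, well-exposedness weighting, and pyramid depth are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch: two LUT-based virtual exposures on the V channel,
# fused with the original via weighted Laplacian pyramids.
import cv2
import numpy as np

def lut_gamma(v, gamma):
    lut = np.clip(((np.arange(256) / 255.0) ** gamma) * 255.0, 0, 255).astype(np.uint8)
    return cv2.LUT(v, lut)                               # gamma correction via 256-entry LUT

def lut_stretch(v, lo=5, hi=250):
    lut = np.clip((np.arange(256) - lo) * 255.0 / max(hi - lo, 1), 0, 255).astype(np.uint8)
    return cv2.LUT(v, lut)                               # histogram stretching via LUT

def virtual_images(bgr):
    h, s, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
    bright = cv2.cvtColor(cv2.merge([h, s, lut_gamma(lut_stretch(v), 0.5)]), cv2.COLOR_HSV2BGR)
    dark = cv2.cvtColor(cv2.merge([h, s, lut_gamma(v, 2.0)]), cv2.COLOR_HSV2BGR)
    return [bgr, bright, dark]                           # original + shadow-lifted + highlight-tamed

def gauss_pyr(img, n):
    pyr = [img]
    for _ in range(n):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def lap_pyr(img, n):
    g = gauss_pyr(img, n)
    return [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1]) for i in range(n)] + [g[n]]

def fuse_backlit(bgr, levels=4):
    images = virtual_images(bgr)
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0 for im in images]
    weights = [np.exp(-((g - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6 for g in grays]  # well-exposedness
    wsum = weights[0] + weights[1] + weights[2]
    weights = [w / wsum for w in weights]
    fused = None
    for im, w in zip(images, weights):
        lp = lap_pyr(im.astype(np.float32), levels)
        wp = gauss_pyr(w, levels)
        layers = [l * wpi[..., None] for l, wpi in zip(lp, wp)]
        fused = layers if fused is None else [f + l for f, l in zip(fused, layers)]
    out = fused[-1]
    for lev in range(levels - 1, -1, -1):                # collapse the blended pyramid
        out = cv2.pyrUp(out, dstsize=fused[lev].shape[1::-1]) + fused[lev]
    return np.clip(out, 0, 255).astype(np.uint8)
```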

Image Processing-based Object Recognition Approach for Automatic Operation of Cranes

  • Zhou, Ying;Guo, Hongling;Ma, Ling;Zhang, Zhitian
    • International conference on construction engineering and project management / 2020.12a / pp.399-408 / 2020
  • The construction industry is suffering from aging workers, frequent accidents, as well as low productivity. With the rapid development of information technologies in recent years, automatic construction, especially automatic cranes, is regarded as a promising solution for the above problems and attracting more and more attention. However, in practice, limited by the complexity and dynamics of construction environment, manual inspection which is time-consuming and error-prone is still the only way to recognize the search object for the operation of crane. To solve this problem, an image-processing-based automated object recognition approach is proposed in this paper, which is a fusion of Convolutional-Neutral-Network (CNN)-based and traditional object detections. The search object is firstly extracted from the background by the trained Faster R-CNN. And then through a series of image processing including Canny, Hough and Endpoints clustering analysis, the vertices of the search object can be determined to locate it in 3D space uniquely. Finally, the features (e.g., centroid coordinate, size, and color) of the search object are extracted for further recognition. The approach presented in this paper was implemented in OpenCV, and the prototype was written in Microsoft Visual C++. This proposed approach shows great potential for the automatic operation of crane. Further researches and more extensive field experiments will follow in the future.
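
To make the traditional-detection stage concrete, here is a hedged OpenCV sketch of Canny edges plus a probabilistic Hough transform and a simple endpoint clustering inside a detection box. All thresholds, the box format, and the function name `approximate_vertices` are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: edge detection, line extraction, and endpoint clustering
# inside a region proposed by the CNN detector.
import cv2
import numpy as np

def approximate_vertices(frame_bgr, box, eps=10.0):
    x, y, w, h = box                                  # box from the Faster R-CNN stage (assumed format)
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=20, maxLineGap=5)
    if lines is None:
        return []
    pts = []                                          # line endpoints in full-frame coordinates
    for x1, y1, x2, y2 in lines[:, 0]:
        pts.append((x + x1, y + y1))
        pts.append((x + x2, y + y2))
    clusters = []                                     # greedy clustering: nearby endpoints share a vertex
    for p in pts:
        for c in clusters:
            if np.hypot(p[0] - c[0][0], p[1] - c[0][1]) < eps:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [tuple(np.mean(c, axis=0)) for c in clusters]
```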

Properties of ZnS:Cu,Cl Thick Film Electroluminescent Devices by Screen Printing Method (스크린인쇄법에 의한 ZnS:Cu,Cl 후막 전계발광소자의 특성)

  • No, Jun-Seo;Yu, Su-Ho;Jang, Ho-Jeong
    • Korean Journal of Materials Research / v.11 no.6 / pp.448-452 / 2001
  • ZnS:Cu,Cl thick-film electroluminescent devices with stacking-type (separate phosphor and insulator layers) and composite-type (mixed phosphor and insulator materials) emission layers were fabricated on ITO/glass substrates by screen printing. The optical and electrical properties were investigated as functions of applied voltage and frequency. In the stacking type, the luminance was about 58 cd/m² under an applied voltage of 200 V at 400 Hz and increased to 420 cd/m² at a driving frequency of 30 Hz. For the composite-type devices, the threshold voltage was 45 V and the maximum luminance was 670 cd/m² under a driving condition of 200 V and 30 Hz. The luminance of the composite-type device was about 1.5 times higher than that of the stacking-type device. The main emission peak was at 512 nm (a bluish-green color) at driving frequencies of 1 Hz and below, and shifted to 452 nm (a blue emission color) at driving frequencies over 5 Hz. There were no distinct differences in the main emission peaks or color coordinates between the two samples.

Performance Comparison of the Recognition Methods of a Touched Area on a Touch-Screen Panel for Embedded Systems (임베디드 시스템을 위한 터치스크린 패널의 터치 영역 인식 기법의 성능 비교)

  • Oh, Sam-Kweon;Park, Geun-Duk;Kim, Byoung-Kuk
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.9 / pp.2334-2339 / 2009
  • In the case of an embedded system having an LCD panel with touch-screen capability, various figures such as rectangles, pentagons, circles, and arrows are frequently used to deliver user-input commands. In such a case, an algorithm is needed that can recognize whether a touched location is within a figure to which a specific user-input command is assigned. Such algorithms, however, impose a considerable amount of overhead on embedded systems with a restricted amount of computing resources. This paper first describes a method for initializing and driving a touch-screen LCD and a coordinate-calibration method that converts touch-screen coordinates into LCD panel coordinates. It then introduces methods that can be used for recognizing the touched areas of rectangles, many-sided figures such as pentagons, and circles: a range-checking method for rectangles, a crossing-number checking method for many-sided figures, a distance-measurement method for circles, and a color-comparison method that can be applied to all figures. To evaluate the performance of these methods, we implement two-dimensional graphics functions for drawing figures such as triangles, rectangles, circles, and images. We then draw such figures and measure the time spent on touched-area recognition for each of them. Measurements show that range checking is the most suitable method for rectangles, distance measurement for circles, and color comparison for many-sided figures and images.
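
The four recognition strategies compared in this abstract can be expressed as short Python functions. The figure representations below (corner tuples, vertex lists, a per-figure color "hit map") are illustrative assumptions rather than the paper's embedded implementation, which would use fixed-point arithmetic and framebuffer reads.

```python
# Sketches of the four touched-area tests: range check, crossing number,
# distance measurement, and color comparison.

def in_rectangle(x, y, rect):
    """Range checking: rect = (left, top, right, bottom)."""
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def in_polygon(x, y, vertices):
    """Crossing-number (ray casting) test for many-sided figures."""
    crossings = 0
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                            # crossing lies to the right of the point
                crossings += 1
    return crossings % 2 == 1                          # odd number of crossings = inside

def in_circle(x, y, center, radius):
    """Distance measurement: compare squared distance with squared radius."""
    cx, cy = center
    return (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2

def in_figure_by_color(x, y, hit_map, figure_color):
    """Color comparison: look up the touched pixel in a per-figure color map."""
    return hit_map[y][x] == figure_color

if __name__ == "__main__":
    pentagon = [(50, 10), (90, 40), (75, 90), (25, 90), (10, 40)]
    print(in_rectangle(30, 30, (10, 10, 100, 100)),
          in_polygon(50, 50, pentagon),
          in_circle(30, 30, (25, 25), 10))
```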

Comparative Experiment of 2D and 3D DCT Point Cloud Compression (2D 및 3D DCT를 활용한 포인트 클라우드 압축 비교 실험)

  • Nam, Kwijung;Kim, Junsik;Han, Muhyen;Kim, Kyuheon;Hwang, Minkyu
    • Journal of Broadcast Engineering / v.26 no.5 / pp.553-565 / 2021
  • A point cloud is a set of points representing a 3D object; it consists of geometric information, i.e., 3D coordinates, and attribute information, such as color and reflectance. Expressed this way, it contains a vast amount of data compared to 2D images, so a compression process is required in order to transmit point cloud data or use it in various fields. Unlike a 2D image, where color information exists for every 2D geometric position, a point cloud carries attribute information such as color only in the occupied parts of 3D space, so separate processing of the geometric information is also required. Based on these characteristics of point clouds, MPEG under ISO/IEC has standardized V-PCC, which projects point clouds onto 2D images and compresses them with 2D-DCT-based 2D image compression codecs, as a compression method for high-density point cloud data. This approach is limited in how accurately it can represent 3D spatial information, since compression proceeds by converting the 3D point cloud to 2D, while utilizing a 3D DCT instead raises the difficulty of processing non-existent (unoccupied) points. Therefore, in this paper, we present 3D Discrete Cosine Transform-based Point Cloud Compression (3DCT PCC), a method that compresses point cloud data directly with a 3D DCT, and confirm the efficiency of the 3D DCT compared with 2D-DCT-based V-PCC.
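
A minimal Python/SciPy sketch of block-wise 3D DCT coding is shown below. The block size, the coefficient-keeping rule, and the toy occupancy pattern are assumptions for illustration, and the occupancy signalling that a real codec would need is omitted.

```python
# Minimal sketch: 3D DCT of a voxelized attribute block, coefficient pruning,
# and inverse transform.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep_ratio=0.05):
    """3D DCT a cubic block and zero all but the largest coefficients."""
    coeffs = dctn(block, type=2, norm="ortho")
    k = max(1, int(keep_ratio * coeffs.size))
    threshold = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

def decompress_block(sparse):
    return idctn(sparse, type=2, norm="ortho")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 16
    # Toy attribute block: a smooth field sampled only where voxels are occupied.
    xx, yy, zz = np.meshgrid(*([np.linspace(0, 1, n)] * 3), indexing="ij")
    block = 128 + 100 * np.sin(2 * np.pi * xx) * np.cos(np.pi * yy) * zz
    occupancy = rng.random((n, n, n)) < 0.3           # sparse, randomly occupied voxels
    block = block * occupancy                         # unoccupied voxels forced to zero
    rec = decompress_block(compress_block(block))
    err = np.sqrt(np.mean((rec[occupancy] - block[occupancy]) ** 2))
    print(f"RMSE on occupied voxels: {err:.2f}")
```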

Estimation of Illuminant Chromaticity by Equivalent Distance Reference Illumination Map and Color Correlation (균등거리 기준 조명 맵과 색 상관성을 이용한 조명 색도 추정)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.12 no.6 / pp.267-274 / 2023
  • In this paper, a method for estimating the illuminant chromaticity of a scene from an input image is proposed. The illuminant chromaticity is estimated using illuminant reference regions. The conventional method uses a fixed set of reference illuminant information: by comparing the chromaticity distribution of pixels from the input image with a chromaticity set prepared in advance for each reference illuminant, the reference illuminant with the largest overlapping area is regarded as the scene illuminant for the corresponding input image. In the process of calculating the overlapping area, weights for each reference illuminant are applied in the form of a Gaussian distribution, but no clear standard for the variance value has been presented. The proposed method extracts an independent reference chromaticity region for each given reference illuminant, calculates characteristic values in the r-g chromaticity plane of the RGB color coordinate system for all pixels of the input image, and then compares the independent chromaticity regions with the features obtained from the input image. The similarity is evaluated, and the illuminant with the highest similarity is estimated as the illuminant chromaticity component of the image. The performance of the proposed method was evaluated using database images and showed an average improvement of about 60% over the conventional basic method and around 53% over the conventional method with a Gaussian weight of 0.1.
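
The r-g chromaticity comparison can be sketched in Python as follows. The bin count, the binary occupancy maps, and the Jaccard-style similarity score are simplifying assumptions, not the paper's feature definition.

```python
# Hedged sketch: project image pixels into the r-g chromaticity plane and
# score each reference illuminant by region overlap.
import numpy as np

def rg_occupancy(image_rgb, bins=64):
    """Binary map over the r-g chromaticity plane occupied by image pixels."""
    rgb = image_rgb.reshape(-1, 3).astype(np.float64)
    total = rgb.sum(axis=1)
    valid = total > 0                                  # skip pure-black pixels (R+G+B = 0)
    r = rgb[valid, 0] / total[valid]
    g = rgb[valid, 1] / total[valid]
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist > 0

def estimate_illuminant(image_rgb, reference_regions):
    """reference_regions: dict name -> precomputed binary r-g occupancy map."""
    img_map = rg_occupancy(image_rgb)
    scores = {}
    for name, ref_map in reference_regions.items():
        inter = np.logical_and(img_map, ref_map).sum()
        union = np.logical_or(img_map, ref_map).sum()
        scores[name] = inter / union if union else 0.0   # Jaccard-style similarity (assumption)
    return max(scores, key=scores.get), scores
```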

Generation of Feature Map for Improving Localization of Mobile Robot based on Stereo Camera (스테레오 카메라 기반 모바일 로봇의 위치 추정 향상을 위한 특징맵 생성)

  • Kim, Eun-Kyeong;Kim, Sung-Shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.1 / pp.58-63 / 2020
  • This paper proposes a method for improving the localization accuracy of a mobile robot based on a stereo camera. To restore position information from the stereo images obtained by the camera, the point on the right image corresponding to a given pixel on the left image must be found. The general method searches for the corresponding point by calculating pixel similarity along the epipolar line. However, this has disadvantages: every pixel on the epipolar line must be evaluated, and the similarity is computed only from pixel values such as RGB. To make up for this weakness, this paper implements a method that finds the corresponding point simply by calculating the x-coordinate gap between a pair of feature points, extracted and matched by a feature-matching method, that lie on the same y-coordinate in the left and right images. In addition, the proposed method preserves as many feature points as possible by finding correspondences for unmatched features through the conventional algorithm, because the number of feature points affects the localization accuracy. The position of the mobile robot is compensated based on the 3D coordinates of the features, which are restored from the feature points and their corresponding points. Experimental results show that the proposed method increases the number of feature points available for position compensation and can compensate the position of the mobile robot better than feature extraction alone.
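
The same-scanline correspondence idea can be sketched with OpenCV feature matching as below. The ORB detector, the y-tolerance, the focal length and baseline values, and the assumption of a principal point at the image center are illustrative choices rather than the paper's setup.

```python
# Hedged sketch: match features between rectified left/right images, keep
# pairs on (nearly) the same y-coordinate, and use the x-gap as disparity.
import cv2
import numpy as np

def stereo_feature_depth(img_left, img_right, focal_px, baseline_m, y_tol=1.0):
    orb = cv2.ORB_create(nfeatures=1000)
    kpl, desl = orb.detectAndCompute(img_left, None)
    kpr, desr = orb.detectAndCompute(img_right, None)
    if desl is None or desr is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    points = []
    for m in matcher.match(desl, desr):
        xl, yl = kpl[m.queryIdx].pt
        xr, yr = kpr[m.trainIdx].pt
        if abs(yl - yr) > y_tol:                      # not on the same scanline -> skip
            continue
        disparity = xl - xr                           # x-coordinate gap between the pair
        if disparity <= 0:
            continue
        z = focal_px * baseline_m / disparity         # depth from disparity
        x = (xl - img_left.shape[1] / 2) * z / focal_px   # assumes principal point at image center
        y = (yl - img_left.shape[0] / 2) * z / focal_px
        points.append((x, y, z))
    return points
```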