• Title/Summary/Keyword: multi-camera


Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.41-50 / 2022
  • In this paper, a method is proposed to calculate the major components of skin color, such as melanin and hemoglobin, directly from the RGB signal of a camera. These components are conventionally obtained by measuring spectral reflectance with dedicated equipment and recombining the values at selected wavelengths; quantities computed this way, such as the melanin index and erythema index, require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct calculation of these components from an ordinary digital camera is difficult, and an earlier method instead estimates the concentrations of melanin and hemoglobin indirectly using independent component analysis. That method takes a region of an RGB image as input, extracts characteristic vectors for melanin and hemoglobin, and calculates their concentrations in a manner similar to principal component analysis. Its disadvantages are that per-pixel calculation is difficult because a group of pixels in a region is required as input, and that the extracted feature vectors, obtained by an optimization procedure, tend to differ each time the method is executed. The final result is produced as images of the melanin and hemoglobin components by converting back to the RGB coordinate system rather than using the feature vectors themselves. To overcome these disadvantages, the proposed method calculates the melanin and hemoglobin component values in the feature space defined by the feature vectors rather than in the RGB coordinate system, estimates the spectral reflectance corresponding to the skin color from an ordinary digital camera, and uses that reflectance to calculate the detailed pigments constituting skin color, such as melanin, oxidized hemoglobin, deoxidized hemoglobin, and carotenoid. The proposed method requires no special equipment such as a spectral reflectance measuring device or a multi-spectral camera; unlike the existing method, per-pixel calculation is possible and the same result is obtained on repeated executions. The standard deviation of the estimated melanin and hemoglobin densities was 15% of that of the conventional method, i.e., about six times more stable.
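
The conventional ICA-based separation that this abstract contrasts against can be illustrated with a minimal sketch. The array name `skin_rgb`, the log-density preprocessing, and the use of scikit-learn's FastICA are assumptions for illustration, not the paper's implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_pigment_densities(skin_rgb):
    """Estimate two unordered pigment density signals from skin RGB pixels.

    skin_rgb: (N, 3) float array with values in (0, 1].
    """
    # Optical density (negative log reflectance) is approximately linear in
    # pigment concentrations, which matches the linear-mixing assumption of ICA.
    density = -np.log(np.clip(skin_rgb, 1e-6, None))
    # The unmixing starts from a random state, so the recovered vectors can
    # differ between runs -- the instability the abstract points out.
    ica = FastICA(n_components=2)
    return ica.fit_transform(density)   # (N, 2) melanin/hemoglobin-like components
```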

Detection of Gaze Direction for the Hearing-impaired in the Intelligent Space (지능형 공간에서 청각장애인의 시선 방향 검출)

  • Oh, Young-Joon;Hong, Kwang-Jin;Kim, Jong-In;Jung, Kee-Chul
    • The KIPS Transactions:PartB / v.18B no.6 / pp.333-340 / 2011
  • Human-Computer Interaction (HCI) is the study of methods of interaction between humans and computers, merging ergonomics and information technology. The intelligent space, which is a part of HCI, is an important area for providing an effective user interface to the disabled, who are often alienated from the information-oriented society. In an intelligent space for the disabled, the way information is provided depends on the type of disability; in this paper, we support only the hearing-impaired. Gaze direction detection matters here because presenting information at the point the user is gazing at is a very efficient way to deliver information, in contrast to methods that require locating and directly contacting the hearing-impaired user. We therefore propose a gaze direction detection method needed to provide residential-life applications to the hearing-impaired. The proposed method detects the user's region from multi-view camera images, generates horizontal and vertical gaze direction candidates from each camera, and determines the user's gaze direction by comparing the sizes of the candidates. Experimental results showed a high detection rate for gaze direction and a high foot-sensing rate for the user's position, demonstrating the feasibility of the proposed scenario for the disabled.
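
A minimal sketch of one way to "compare the sizes of the candidates": accumulate the candidate-region sizes reported by each camera and pick the direction with the largest total. The direction labels and sizes are invented for illustration; the paper's actual comparison may differ:

```python
def fuse_gaze_candidates(per_camera_candidates):
    """per_camera_candidates: list over cameras of dicts mapping a
    direction label to the size of its candidate region."""
    totals = {}
    for candidates in per_camera_candidates:
        for direction, size in candidates.items():
            totals[direction] = totals.get(direction, 0.0) + size
    # The direction supported by the largest total candidate size wins.
    return max(totals, key=totals.get)

# Horizontal candidates from two cameras (direction -> candidate size).
horizontal = fuse_gaze_candidates([
    {"left": 120, "center": 340, "right": 90},
    {"left": 100, "center": 310, "right": 80},
])
print(horizontal)   # -> "center"
```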

A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung;Han, Song-Yi;Kang, Byung-Jun;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.2 / pp.1-9 / 2008
  • As the security requirements of mobile phones have been increasing, there has been extensive research using a single biometric feature (e.g., an iris, a fingerprint, or a face image) for authentication. Owing to the limitations of uni-modal biometrics, we propose a method that combines face and iris images in order to improve accuracy in mobile environments. This paper presents four advantages and contributions over previous research. First, in order to capture both face and iris images quickly and simultaneously, we use the built-in conventional mega-pixel camera of the mobile phone, revised to capture NIR (Near-InfraRed) face and iris images. Second, in order to increase the authentication accuracy of face and iris, we propose a score-level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-variation of the face and iris data, we normalize the input face and iris data, respectively. For the face, an NIR illuminator and an NIR-passing filter on the camera are used to reduce the illumination variance caused by environmental visible lighting, and the resulting saturated region in the face caused by the NIR illuminator is normalized by a low-cost logarithmic algorithm suited to the mobile phone. For the iris, a transform into polar coordinates and iris code shifting are used to obtain robust identification accuracy irrespective of the image capturing conditions. Fourth, to increase the processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments were performed with face and iris images captured by the mega-pixel camera of a mobile phone. They showed that the authentication accuracy using the SVM was better than that of the uni-modal approaches (face or iris) and of the SUM, MAX, MIN, and weighted SUM rules.
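
A minimal sketch of score-level fusion with an SVM as described in this abstract, using scikit-learn and toy score data; the score values, labels, and RBF kernel are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: one (face score, iris score) pair per authentication
# attempt, labelled 1 for genuine and 0 for impostor.
face_scores = np.array([0.91, 0.15, 0.80, 0.30, 0.88, 0.20])
iris_scores = np.array([0.88, 0.22, 0.35, 0.85, 0.90, 0.18])
labels      = np.array([1,    0,    0,    0,    1,    0])

X = np.column_stack([face_scores, iris_scores])    # 2-D score vectors
fusion = SVC(kernel="rbf").fit(X, labels)          # decision boundary in score space

# A new attempt is accepted or rejected by the fused classifier instead of
# by a fixed SUM/MAX/MIN/weighted-SUM rule.
print(fusion.predict([[0.75, 0.70]]))
```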

Land Cover Mapping and Availability Evaluation Based on Drone Images with Multi-Spectral Camera (다중분광 카메라 탑재 드론 영상 기반 토지피복도 제작 및 활용성 평가)

  • Xu, Chun Xu;Lim, Jae Hyoung;Jin, Xin Mei;Yun, Hee Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.6 / pp.589-599 / 2018
  • Land cover maps have been produced using satellite and aerial images. However, these images are limited in spatial resolution, and it is difficult to acquire imagery of an area at the desired time because of cloud cover. In addition, producing a land cover map of a small area from satellite or aerial images is costly and time-consuming. This study used a drone with a multispectral camera to acquire multi-temporal images and generate orthoimages, and the efficiency of the produced land cover maps was evaluated using time-series analysis. The results indicated that the proposed method can generate an RGB orthoimage and a multispectral orthoimage with RMSE (Root Mean Square Error) of ±10 mm, ±11 mm, ±26 mm and ±28 mm, ±27 mm, ±47 mm in X, Y, and H, respectively. The accuracy of pixel-based and object-based land cover classification was analyzed; the accuracy and Kappa coefficient of object-based classification were higher than those of pixel-based classification, at 93.75% and 92.42% in July, 92.50% and 91.20% in October, and 92.92% and 91.77% in February, respectively. Moreover, the proposed method can accurately capture the quantitative area change of objects. In summary, this study demonstrates the feasibility and efficiency of using a multispectral-camera drone in the production of land cover maps.
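
A minimal sketch of how an overall accuracy and Kappa coefficient such as the figures quoted above are computed from a classification confusion matrix; the matrix values are illustrative only, not the paper's data:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows: reference classes, columns: classified classes)."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    observed = np.trace(c) / total                                # overall accuracy
    expected = (c.sum(axis=0) * c.sum(axis=1)).sum() / total ** 2  # chance agreement
    return observed, (observed - expected) / (1.0 - expected)

acc, kappa = accuracy_and_kappa([[50, 2, 1],
                                 [3, 45, 2],
                                 [1, 2, 44]])
print(f"accuracy={acc:.4f}, kappa={kappa:.4f}")
```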

Quantitative Analysis of Snow Particles Using a Multi-Angle Snowflake Camera in the Yeongdong Region (영동지역에서 눈결정 카메라를 활용한 눈결정의 정량 분석)

  • Kim, Su-Hyun;Ko, Dae-Hong;Seong, Dae-Kyung;Eun, Seung-Hee;Kim, Byung-Gon;Kim, Baek-Jo;Park, Chang-Geun;Cha, Ju-Wan
    • Atmosphere / v.29 no.3 / pp.311-324 / 2019
  • We employed a Multi-Angle Snowflake Camera (MASC) to quantitatively analyze snow particles at ground level in the Yeongdong region of Korea. The MASC captures high-resolution photographs of hydrometeors from three angles and simultaneously measures fall speed. Based on snowflake images from several episodes in 2017 and 2018, we derived statistics of the size, aspect ratio, orientation, complexity, and fall speed of snow crystals, which generally showed characteristics similar to previous studies in other regions of the world. The dominant snow crystal habits on January 22, 2018, generated by a northerly flow, were melted aggregates when the 850 hPa temperature was about -6 to -8°C. The average fall speed of the snow crystals was 1.0 m s⁻¹, although their size gradually increased as the temperature decreased. Another snowfall event (March 8, 2018) was driven by baroclinic instability accompanied by a deep trough; the snow crystal habits were largely rimed aggregates (complexity ~1.8) and melting particles appearing as dark images. Meanwhile, in the extreme snowfall event of January 20, 2017, with a snowfall rate greater than 10 cm hr⁻¹, the main snow crystals were heavily rimed particles of relatively small size, while convective clouds developed vertically up to 9 km in association with tropopause folding. The MASC also successfully measured a decrease in snow crystal size and an increase in riming degree after AgI seeding at Daegwallyeong on March 14, 2017.

On Pattern Kernel with Multi-Resolution Architecture for a Lip Print Recognition (구순문 인식을 위한 복수 해상도 시스템의 패턴 커널에 관한 연구)

  • 김진옥;황대준;백경석;정진현
    • The Journal of Korean Institute of Communications and Information Sciences / v.26 no.12A / pp.2067-2073 / 2001
  • Biometric systems are forms of technology that use unique human physical characteristics to automatically identify a person. They have sensors that pick up a physical characteristic, convert it into a digital pattern, and compare it with stored patterns for individual identification. However, lip-print recognition has been less developed than recognition of other physical attributes such as the fingerprint, voice patterns, retinal blood vessel patterns, or the face. Lip-print recognition with a CCD camera has the merit of being easily combined with other recognition systems such as retina/iris and face recognition. A new method using a multi-resolution architecture is proposed to recognize a lip print from pattern kernels. A set of pattern kernels is a function of local lip-print masks; this function converts the information in a lip print into digital data. Recognition in the multi-resolution system is more reliable than in the single-resolution system: the multi-resolution architecture reduces the false recognition rate from 15% to 4.7%. This paper shows that the lip print is a sufficiently usable measurement for biometric systems.


High Resolution Depth-map Estimation in Real-time using Efficient Multi-threading (효율적인 멀티 쓰레딩을 이용한 고해상도 깊이지도의 실시간 획득)

  • Cho, Chil-Suk;Jun, Ji-In;Choo, Hyon-Gon;Park, Jong-Il
    • Journal of Broadcast Engineering / v.17 no.6 / pp.945-953 / 2012
  • A depth map can be obtained by projecting and capturing stripe patterns with a projector-camera system and analyzing the geometric relationship between the projected and captured patterns; this is usually called the structured light technique. In this paper, we propose a new multi-threading scheme for accelerating a conventional structured light technique. On CPUs and GPUs, multi-threading can be implemented with OpenMP and CUDA, respectively. However, their relative performance changes with the computational characteristics of the partial processes of the structured light technique: OpenMP (using multiple CPU cores) outperformed CUDA (using the GPU) in stages such as pattern decoding and depth estimation, whereas CUDA outperformed OpenMP in stages such as rectification and pattern segmentation. We therefore carefully analyze the conditions under which each outperforms the other and use the better one for the corresponding stage. As a result, the proposed method can estimate a depth map at over 25 fps on 1280×800 images.
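
A minimal sketch of the per-stage back-end choice the abstract describes: time each stage once on both back-ends and keep the faster one. The function names `cpu_impl` and `gpu_impl` are assumed stand-ins for a stage's OpenMP and CUDA implementations, not the paper's code:

```python
import time

def pick_faster(cpu_impl, gpu_impl, sample_input):
    """Time one run of each back-end on a sample input and return the
    faster callable, which is then used for that stage from then on."""
    timings = {}
    for name, impl in (("cpu", cpu_impl), ("gpu", gpu_impl)):
        start = time.perf_counter()
        impl(sample_input)
        timings[name] = time.perf_counter() - start
    return cpu_impl if timings["cpu"] <= timings["gpu"] else gpu_impl

# Per the abstract, such a split ends up running rectification and pattern
# segmentation on the GPU (CUDA) and pattern decoding and depth estimation
# on the CPU (OpenMP).
```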

Visual Multi-touch Input Device Using Vision Camera (비젼 카메라를 이용한 멀티 터치 입력 장치)

  • Seo, Hyo-Dong;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.6 / pp.718-723 / 2011
  • In this paper, we propose a visual multi-touch air input device using vision cameras. The implemented device provides a bare-handed interface that supports multi-touch operation. The proposed device is easy to apply to real-time systems because of its low computational load, and it is cheaper than existing methods using glove data or 3-dimensional data because no additional equipment is required. To achieve this, we first propose an image processing algorithm based on the HSV color model and labeling of the captured images. In addition, to improve the accuracy of hand gesture recognition, we propose a motion recognition algorithm based on geometric feature points, a skeleton model, and the Kalman filter. Finally, experiments show that the proposed device is applicable to remote controllers for video games, smart TVs, and other computer applications.
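
A minimal sketch of the first two steps the abstract names, HSV-based segmentation followed by connected-component labeling, using OpenCV; the HSV bounds and minimum blob area are illustrative assumptions, not the authors' values:

```python
import cv2

def find_hand_blobs(frame_bgr, min_area=500):
    """Return centroids of skin-coloured connected components."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed skin-tone bounds in OpenCV's HSV scale (H: 0-179).
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = []
    for i in range(1, n_labels):                   # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            blobs.append(tuple(centroids[i]))      # candidate hand/finger centres
    return blobs
```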

A Real-time Copper Foil Inspection System using Multi-thread (다중 스레드를 이용한 실시간 동판 검사 시스템)

  • Lee Chae-Kwang;Choi Dong-Hyuk
    • Journal of KIISE:Computing Practices and Letters / v.10 no.6 / pp.499-506 / 2004
  • A copper foil surface inspection system is necessary for factory automation and product quality. The developed system is composed of a high-speed line scan camera, an image capture board, and a processing computer. For efficient use of system resources and real-time processing, a multi-threaded architecture is introduced: one image capture thread, two or more defect detection threads, and one defect communication thread. To process the high-speed input image data, I/O is overlapped with processing through double buffering. Defects are first detected with a predetermined threshold, and a compensation process is applied to cope with lighting irregularity. After detection, the defect type is classified using the defect width, the eigenvalue ratio of the defect covariance matrix, and the gray level of the defect. In experiments with high-speed input image data, real-time processing was possible with the multi-threaded architecture, and 89.4% of the 141 defects were classified correctly.
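
A minimal sketch of the thread layout the abstract describes (one capture thread, two detection threads, one communication thread) using Python queues; `grab_line_image` and `detect_defects` are trivial stand-ins for the real camera grab and threshold/classification steps, and the small bounded frame queue plays the role of the double buffer:

```python
import queue
import threading

frames = queue.Queue(maxsize=2)     # small bounded queue ~ the double buffer
defects = queue.Queue()

def grab_line_image(i):             # stand-in for the line-scan camera grab
    return f"frame-{i}"

def detect_defects(img):            # stand-in for threshold + classification
    return [f"defect in {img}"] if img.endswith("0") else []

def capture(n_frames=100, n_detectors=2):
    for i in range(n_frames):
        frames.put(grab_line_image(i))
    for _ in range(n_detectors):
        frames.put(None)            # one stop token per detection thread

def detect():
    while (img := frames.get()) is not None:
        for d in detect_defects(img):
            defects.put(d)
    defects.put(None)

def communicate(n_detectors=2):
    finished = 0
    while finished < n_detectors:
        d = defects.get()
        if d is None:
            finished += 1
        else:
            print("report:", d)     # stand-in for sending the defect to the host

workers = [threading.Thread(target=capture),
           threading.Thread(target=detect),
           threading.Thread(target=detect),
           threading.Thread(target=communicate)]
for w in workers: w.start()
for w in workers: w.join()
```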

Development of small multi-copter system for indoor collision avoidance flight (실내 비행용 소형 충돌회피 멀티콥터 시스템 개발)

  • Moon, Jung-Ho
    • Journal of Aerospace System Engineering / v.15 no.1 / pp.102-110 / 2021
  • Recently, multi-copters equipped with various collision avoidance sensors have been introduced to improve flight stability. LiDAR is used to recognize three-dimensional position; multiple cameras and real-time SLAM technology are used to calculate the relative position to obstacles; and three-dimensional depth sensors combining a small processor and camera are also used. In this study, a small collision-avoidance multi-copter system capable of indoor flight was developed as a platform for the development of collision avoidance software technology. The multi-copter was equipped with a LiDAR, a 3D depth sensor, and a small image processing board. Object recognition and collision avoidance functions based on the YOLO algorithm were verified through flight tests. This paper covers recent trends in drone collision avoidance technology, the system design and manufacturing process, and the flight test results.
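
A minimal sketch of one way detection and depth information can be combined into an avoidance decision of the kind the abstract verifies in flight tests; `run_detector` and `depth_at` are hypothetical stand-ins for a YOLO inference call and a depth-sensor lookup, and the corridor and safety margin are invented values, not the paper's flight code:

```python
def run_detector(frame):
    # stand-in: would return [(label, (x, y, w, h)), ...] from a YOLO model
    return [("person", (300, 200, 80, 160))]

def depth_at(depth_image, box):
    # stand-in: median depth (in metres) inside the detection box
    return 1.2

def avoidance_command(frame, depth_image, corridor=(260, 380), safety_m=2.0):
    """Stop if any detected object sits inside the flight corridor and is
    closer than the safety margin; otherwise continue."""
    for label, (x, y, w, h) in run_detector(frame):
        center_x = x + w / 2
        if corridor[0] <= center_x <= corridor[1] and \
                depth_at(depth_image, (x, y, w, h)) < safety_m:
            return "STOP"           # obstacle ahead and too close
    return "CONTINUE"

print(avoidance_command(frame=None, depth_image=None))   # -> "STOP" with the stubs above
```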