• Title/Summary/Keyword: Camera module


Design and development of non-contact locks including face recognition function based on machine learning (머신러닝 기반 안면인식 기능을 포함한 비접촉 잠금장치 설계 및 개발)

  • Yeo Hoon Yoon;Ki Chang Kim;Whi Jin Jo;Hongjun Kim
    • Convergence Security Journal, v.22 no.1, pp.29-38, 2022
  • The importance of epidemic prevention has grown with the serious spread of infectious diseases, which has pushed attention toward non-contact technologies. In this paper, a face recognition door lock that controls access without physical contact is designed and developed. First, very simple features are combined to locate faces using a Haar-based cascade algorithm; the texture of the image is then binarized to extract features using LBPH (Local Binary Pattern Histograms). A non-contact door lock system composed of a Raspberry Pi 3B+ board, an ultrasonic sensor, a camera module, a motor, and other components is proposed. To verify actual performance and to assess the influence of light sources, various experiments were conducted. The maximum recognition rate observed was about 85.7%.
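
For reference, a minimal sketch of the Haar-cascade detection plus LBPH recognition pipeline described in this abstract, assuming OpenCV with the contrib face module; the model file, capture file, and distance threshold are illustrative assumptions rather than values from the paper:

    # Haar-cascade face detection followed by LBPH recognition (OpenCV sketch)
    import cv2

    # Pre-trained frontal-face Haar cascade shipped with OpenCV
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # LBPH recognizer; assumes a model was trained earlier and saved to this file
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read("lbph_model.yml")  # hypothetical model file

    frame = cv2.imread("door_camera.jpg")  # hypothetical capture from the camera module
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        # Smaller LBPH distance means a closer match; 60 is an assumed threshold
        if distance < 60:
            print(f"Recognized user {label}: unlock")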

Machine Vision Platform for High-Precision Detection of Disease VOC Biomarkers Using Colorimetric MOF-Based Gas Sensor Array (비색 MOF 가스센서 어레이 기반 고정밀 질환 VOCs 바이오마커 검출을 위한 머신비전 플랫폼)

  • Junyeong Lee;Seungyun Oh;Dongmin Kim;Young Wung Kim;Jungseok Heo;Dae-Sik Lee
    • Journal of Sensor Science and Technology, v.33 no.2, pp.112-116, 2024
  • Gas-sensor technology for detecting volatile organic compound (VOC) biomarkers offers significant advantages for noninvasive diagnostics, including rapid response time and low operational cost, and shows promising potential for disease diagnosis. Colorimetric gas sensors, which enable intuitive analysis of gas concentration through changes in color, offer additional benefits for the development of personal diagnostic kits. However, the traditional method of visually monitoring these sensors limits quantitative analysis and the consistency of detection-threshold evaluation, potentially affecting diagnostic accuracy. To address this, we developed a machine vision platform for colorimetric metal-organic framework (MOF) gas sensor arrays, designed to accurately detect disease-related VOC biomarkers. The platform integrates a CMOS camera module, a gas chamber, and a colorimetric MOF sensor jig to quantitatively assess color changes. A specialized machine vision algorithm identifies the color-changing region of interest (ROI) in the captured images and monitors the color trends. Performance was evaluated with four types of low-concentration standard gases, and a limit of detection (LoD) at the 100 ppb level was observed. This approach significantly enhances the potential for non-invasive and accurate disease diagnosis by detecting low-concentration VOC biomarkers and offers a novel diagnostic tool.
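
A rough sketch of the kind of ROI color-trend measurement such a platform performs, assuming OpenCV; the ROI coordinates, frame file names, and frame count below are hypothetical placeholders:

    # Track the mean colour of one sensor spot relative to a pre-exposure baseline
    import cv2
    import numpy as np

    x, y, w, h = 120, 80, 40, 40                      # assumed ROI of one MOF spot
    baseline = cv2.imread("frame_0000.png")           # frame captured before gas exposure
    base_mean = np.array(cv2.mean(baseline[y:y + h, x:x + w])[:3])

    for i in range(1, 100):                           # frames captured during exposure
        frame = cv2.imread(f"frame_{i:04d}.png")
        roi_mean = np.array(cv2.mean(frame[y:y + h, x:x + w])[:3])
        # Euclidean distance in BGR space as a simple colour-change metric
        delta = float(np.linalg.norm(roi_mean - base_mean))
        print(i, delta)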

Study on Structure Visual Inspection Technology using Drones and Image Analysis Techniques (드론과 이미지 분석기법을 활용한 구조물 외관점검 기술 연구)

  • Kim, Jong-Woo;Jung, Young-Woo;Rhim, Hong-Chul
    • Journal of the Korea Institute of Building Construction, v.17 no.6, pp.545-557, 2017
  • This study concerns an efficient alternative for the visual inspection of concrete surfaces on deteriorated infrastructure. By combining industrial drones and deep-learning-based image analysis with traditional visual inspection practice, we tried to reduce manpower, time, and cost, and to overcome access limitations posed by tall and dome-shaped structures. The on-board device mounted on the drone consists of a high-resolution camera capable of detecting cracks wider than 0.3 mm, a LiDAR sensor, and an embedded image-processing module. Mounted on an industrial drone, it captured sample images of damage on a site specimen through automatic flight navigation. The damaged parts of the site specimen were also measured directly, not only for crack width and length but also for white rust, so that these measurements could be compared with the final image-analysis detection results. Using the image analysis techniques, damage in 54 sample images was analyzed through a segmentation - feature extraction - decision making process, and the analysis parameters were extracted using the supervised mode of the deep learning platform. Image analysis of 60 newly added image samples not used in training was then performed based on the extracted parameters, yielding a damage detection rate of 90.5%.
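
As a simple illustration of the segmentation - feature extraction - decision making flow (a classical OpenCV stand-in, not the authors' deep-learning pipeline), the sketch below flags elongated dark regions as crack candidates; the file name and thresholds are assumptions:

    # Classical crack-candidate segmentation on a greyscale surface image
    import cv2

    img = cv2.imread("surface_sample.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical drone image
    blur = cv2.GaussianBlur(img, (5, 5), 0)

    # Segmentation: adaptive threshold separates thin dark cracks from the background
    mask = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 10)

    # Feature extraction + decision: keep only large, elongated connected components
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, num):
        x, y, w, h, area = stats[i]
        if area > 50 and max(w, h) >= 5 * min(w, h):
            print(f"crack candidate: bbox=({x},{y},{w},{h}), area={area} px")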

Design of FPGA Camera Module with AVB based Multi-viewer for Bus-safety (AVB 기반의 버스안전용 멀티뷰어의 FPGA 카메라모듈 설계)

  • Kim, Dong-jin;Shin, Wan-soo;Park, Jong-bae;Kang, Min-goo
    • Journal of Internet Computing and Services, v.17 no.4, pp.11-17, 2016
  • In this paper, we propose a multi-viewer system for bus safety that connects multiple HD cameras over AVB (Audio Video Bridging) Ethernet with IP networking and an FPGA (Xilinx Zynq 702). This AVB (IEEE 802.1BA) system is designed for low latency on the FPGA and transmits HD video and audio signals in real time over an in-vehicle network. The proposed multi-viewer platform can multiplex H.264 video signals from four wide-angle HD cameras over existing 1 Gbps Ethernet and 2-wire 100 Mbps cables. A Zynq 702-based low-latency H.264 AVC codec design is also proposed to minimize the time delay of HD video transmission in the car area network. The PSNR (peak signal-to-noise ratio) of the H.264 AVC codec's encoding and decoding results was analyzed against the JM reference model, and these PSNR values were confirmed both theoretically and from hardware measurements of the H.264 AVC codec signal on the Zynq 702-based multi-viewer with multiple cameras. As a result, the proposed AVB multi-viewer platform with multiple cameras can be used for audio and video surveillance around a bus for safety, owing to the low latency of the H.264 AVC codec design.
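
For reference, the PSNR figure reported above is computed from the mean squared error between a reference frame and the decoded frame; a minimal sketch assuming 8-bit images, with file names as placeholders:

    # PSNR between a reference frame and its H.264 encode/decode result
    import cv2
    import numpy as np

    ref = cv2.imread("reference_frame.png").astype(np.float64)   # uncompressed frame
    dec = cv2.imread("decoded_frame.png").astype(np.float64)     # decoded H.264 frame

    mse = np.mean((ref - dec) ** 2)
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    print(f"PSNR = {psnr:.2f} dB")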

Improved Radiochemical Yields, Reliability and Improvement of Domestic $^{18}F$-FDG Auto Synthesizer (국산 $^{18}F$-FDG Auto Synthesizer의 수율 향상과 성능 개선)

  • Park, Jun-Hyung;Im, Ki-Seop;Lee, Hong-Jin;Jeong, Kyung-Il;Lee, Byung-Chul;Lee, In-Won
    • The Korean Journal of Nuclear Medicine Technology, v.13 no.3, pp.147-151, 2009
  • Purpose: 2-[$^{18}F$]Fluoro-2-deoxy-D-glucose ([$^{18}F$]FDG) plays a particularly important role in positron emission tomography (PET) imaging in nuclear medicine. Domestic [$^{18}F$]FDG auto synthesizers were installed at Seoul National University Bundang Hospital (SNUBH) in June 2008, and until now these modules had delivered synthetic yields of only about $45{\pm}5%$ on average. To improve the yield and convenience of the domestic [$^{18}F$]FDG auto synthesizer, numerous trials varying reaction time, base concentration, pressure and temperature were performed. Materials and Methods: Several synthetic factors (temperature, time and pressure) and shortcomings were corrected based on repeated evaporation tests. Syringe dispensing of tetrabutylammonium bicarbonate (TBAB) was replaced with a micro pipette to prepare the tetrabutylammonium fluoride salt ([$^{18}F$]TBAF). The troublesome refilling of liquid nitrogen every 2 hours, which protected the vacuum system, was replaced with a charcoal cartridge serving as a base guard filter. To monitor the volume of $[^{18}O]OH_2$ delivered from the cyclotron by surveillance camera, we set a volumetric vial on the cover of the module. In addition, a recovery vial was added to the [$^{18}F$]FDG production system to recover [$^{18}F$]FDG lost through leaks of valves ($V_{13,14}$) in the [$^{18}F$]FDG purification process. Results: When a micro pipette was used to add TBAB ($30\;{\mu}L$ in 12% $H_2O$ in acetonitrile), this quantitative dispensation improved the residual fluorine-18 activity in the fluorine separation cartridge by $5.5{\pm}1.7%$ compared with syringe addition. In addition, the synthetic yield of [$^{18}F$]FDG increased to $58{\pm}2.6%$ (n=19), $58{\pm}2.9%$ (n=14), and $60{\pm}2.5%$ (n=17) over 3 months. The charcoal cartridge protecting the base vacuum lasted 3 months, in contrast to the previous need to refill liquid nitrogen every 2 hours, and an additional side separator prevented pump corrosion by organic solvent. After installation of the volumetric indicator vial, the operator could easily monitor the total volume of irradiated $[^{18}O]OH_2$ from the cyclotron. The recovery vial could be used as a stabilizer when an irregular [$^{18}F$]FDG loss was caused by valve leaks ($V_{13,14}$). Conclusions: We optimized the synthetic conditions (temperature, time, pressure) of the domestic [$^{18}F$]FDG auto synthesizer. In addition, remodeling with several accessories improved the yield of the domestic [$^{18}F$]FDG auto synthesizer with reliable reproducibility.


Implementation of Smart Shopping Cart using Object Detection Method based on Deep Learning (딥러닝 객체 탐지 기술을 사용한 스마트 쇼핑카트의 구현)

  • Oh, Jin-Seon;Chun, In-Gook
    • Journal of the Korea Academia-Industrial cooperation Society, v.21 no.7, pp.262-269, 2020
  • Recently, many attempts have been made to reduce the time required for payment in various shopping environments. In addition, in the era of the Fourth Industrial Revolution, artificial intelligence is advancing and Internet of Things (IoT) devices are becoming more compact and cheaper, so integrating these two technologies makes it easier to build unmanned environments that save people time. In this paper, we propose a smart shopping cart system based on low-cost IoT equipment and deep-learning object-detection technology. The proposed smart cart system consists of a camera for real-time product detection, an ultrasonic sensor that acts as a trigger, a weight sensor that determines whether a product is put into or taken out of the shopping cart, a smartphone application that provides a user interface for a virtual shopping cart, and a deep learning server where the learned product data are stored. Communication between the modules runs over TCP/IP and HTTP, and the server recognizes products with an object detection system built on the You Only Look Once (YOLO) Darknet library. The user can check the list of items put into the smart cart via the smartphone app and pay for them automatically. The smart cart system proposed in this paper can be applied to unmanned stores with high cost-effectiveness.
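
A minimal sketch of running a Darknet-format YOLO model with OpenCV's DNN module, analogous to the server-side product detection described above; the config, weights, and image file names are assumptions:

    # Product detection with a Darknet YOLO model via OpenCV DNN
    import cv2

    net = cv2.dnn.readNetFromDarknet("products.cfg", "products.weights")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    frame = cv2.imread("cart_camera.jpg")   # hypothetical frame from the cart camera
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    for cid, score, box in zip(class_ids, scores, boxes):
        print(f"product class {int(cid)} at {box} (score {float(score):.2f})")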

Human-likeness of an Agent's Movement-Data Loci based on Realistically Limited Perception Data (제한적 인지 데이터에 기초한 에이전트 움직임-데이터 궤적의 인간다움)

  • Han, Chang-Hee;Kim, Won-Il
    • Journal of the Institute of Electronics Engineers of Korea CI, v.47 no.4, pp.1-10, 2010
  • The goal of this paper is to show that a virtual human agent's movement-data loci based on realistically limited perception data are human-like. To determine the human-likeness of the movement-data loci, we consider the interaction between two parameters: Realistically Limited Perception (RLP) data and Incremental Movement-Path data Generation (IMPG). That is, we consider how the former (RLP), a simulated parameter of human thought, dictates the latter (IMPG), a simulated parameter of human movement behavior. A mapping DB is a prerequisite for navigation in an agent system because it functions as an interface between perception and movement behavior. Although Hill et al. studied a mapping DB methodology based on RLP, their research dealt only with a rendering camera's viewpoint data. The agent system in this paper was integrated with Hill's mapping DB module, and the interaction of the two parameters was then examined in a military reconnaissance mission with unexpected enemy appearances. Movement loci generated by the agent and by human subjects were compared with each other. The research verifies that the agent system can serve as a functional test bed for producing human-like movement-data loci, although the human-likeness finding comes from a pilot test determined by two parameters (RLP and IMPG) and only 30 subjects.

Hybrid (refractive/diffractive) lens design for the ultra-compact camera module (초소형 영상 전송 모듈용 DOE(Diffractive optical element)렌즈의 설계 및 평가)

  • Lee, Hwan-Seon;Rim, Cheon-Seog;Jo, Jae-Heung;Chang, Soo;Lim, Hyun-Kyu
    • Korean Journal of Optics and Photonics, v.12 no.3, pp.240-249, 2001
  • A high-speed, ultra-compact lens with a diffractive optical element (DOE) is designed, which can be applied to mobile communication devices such as IMT-2000 handsets, PDAs, notebook computers, etc. Compared with the specifications of a single lens, the designed hybrid lens achieves sufficiently high performance: faster than f/2.2, a compact size of 3.3 mm (first surface to image), and a field angle wider than 30 degrees. By proper choice of the aspheric surface and the DOE surface, which has very large negative dispersion, chromatic and high-order aberrations are corrected through optimization. From Seidel third-order aberration theory and Sweatt modeling, the initial data and surface configurations, that is, the combination conditions of the DOE and the aspherical surface, are obtained. However, because the diffraction efficiency of the DOE must be considered, only four cases can be chosen as optimization inputs, and the best solution is presented after evaluating and comparing those four cases. We also report a dramatic improvement in optical performance obtained by inserting another refractive lens (a so-called field flattener) that keeps the refractive power of the original DOE lens and makes the Petzval sum of the original DOE lens system zero.
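
For context, the Petzval condition referred to above can be written in standard thin-lens form (a textbook expression and Sweatt-model remark, not equations taken from the paper):

    % Petzval sum of thin elements with optical powers \phi_i and refractive indices n_i
    P = \sum_i \frac{\phi_i}{n_i}
    % In the Sweatt model a DOE is treated as a thin lens whose index tends to infinity,
    % so its own contribution \phi_{\mathrm{DOE}}/n vanishes; the field flattener is a
    % refractive element whose power and index are chosen so that the total P becomes
    % zero while the system's overall power is preserved.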


Hardware Design of Super Resolution on Human Faces for Improving Face Recognition Performance of Intelligent Video Surveillance Systems (지능형 영상 보안 시스템의 얼굴 인식 성능 향상을 위한 얼굴 영역 초해상도 하드웨어 설계)

  • Kim, Cho-Rong;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD, v.48 no.9, pp.22-30, 2011
  • Recently, rising demand for intelligent video surveillance has led to high-performance face recognition systems. A solution for the low-resolution images acquired by long-distance cameras is required to overcome the distance limits of existing face recognition systems. For that reason, this paper proposes a hardware design of an image-resolution enhancement algorithm for real-time intelligent video surveillance systems. The algorithm synthesizes a high-resolution face image from an input low-resolution image with the help of a large collection of other high-resolution face images, called a training set. When we profiled the algorithm on a 32-bit RISC microprocessor, the entire operation took about 25 seconds, which is inappropriate for real-time target applications. Based on this result, we implemented the hardware module and verified it using a Xilinx Virtex-4 FPGA and an ARM9-based embedded processor (S3C2440A). The designed hardware completes the whole operation within 33 ms, so it can handle 30 frames per second. We expect the proposed hardware to be a solution not only for real-time processing in embedded environments but also for easy integration with existing face recognition systems.

A design and implementation of Face Detection hardware (얼굴 검출을 위한 SoC 하드웨어 구현 및 검증)

  • Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD, v.44 no.4, pp.43-54, 2007
  • This paper presents the design and verification of face detection hardware for real-time applications. The face detection algorithm detects approximate face positions based on previously acquired feature parameter data. The hardware is composed of five main modules: Integral Image Calculator, Feature Coordinate Calculator, Feature Difference Calculator, Cascade Calculator, and Window Detection. It also includes on-chip Integral Image memory and Feature Parameter memory. The face detection hardware was verified using a Samsung Electronics S3C2440A CPU, a Xilinx Virtex4LX100 FPGA, and a CCD camera module. When synthesized for the Virtex4LX100 FPGA, our design uses 3,251 LUTs and takes about 0.13${\sim}$1.96 sec for face detection, depending on the sliding-window step size. When synthesized with the MagnaChip 0.25 um ASIC library, it uses about 410,000 gates (combinational area about 345,000 gates, non-combinational area about 65,000 gates) and takes less than 0.5 sec for real-time face detection. This size and performance show that it is adequate for embedded system applications. It has been fabricated as part of the XF1201 chip and proven to work.
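
As a brief illustration of what the Integral Image Calculator module computes in hardware, here is a software sketch: each entry of the integral image holds the sum of all pixels above and to the left, so any rectangular (Haar-feature) sum needs only four lookups; the frame size and window below are arbitrary stand-ins:

    # Integral image and constant-time rectangle sums (the core of Haar-feature evaluation)
    import numpy as np

    img = np.random.randint(0, 256, size=(240, 320), dtype=np.uint32)  # stand-in frame
    ii = img.cumsum(axis=0).cumsum(axis=1)                             # integral image

    def rect_sum(x, y, w, h):
        """Sum of pixels in the rectangle with top-left corner (x, y) and size w x h."""
        a = ii[y + h - 1, x + w - 1]
        b = ii[y - 1, x + w - 1] if y > 0 else 0
        c = ii[y + h - 1, x - 1] if x > 0 else 0
        d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
        return int(a - b - c + d)

    print(rect_sum(10, 20, 24, 24))   # e.g. one sliding-window position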