• Title/Summary/Keyword: multiple cameras

Search results: 228

A Study on the Hypercentric Lens Design and Optical Performance Analysis (하이퍼센트릭 렌즈 설계 방법 및 성능 분석에 대한 연구)

  • Koh, Jae Seok;Cho, Hyun Woo;Park, Tae Yang;Kim, Sang Hyun;An, Young Duk;Jung, Mee Suk
    • Korean Journal of Optics and Photonics
    • /
    • v.29 no.1
    • /
    • pp.7-12
    • /
    • 2018
  • In the field of machine vision, a variety of lenses are used to inspect products for defects. A general lens can photograph only part of an object's exterior, so optical components such as mirrors, multiple lenses, and cameras are required to inspect the entire exterior. This increases the size of the optical system and has the disadvantage of high cost. In this paper, we design a hypercentric lens that can photograph the top and sides of an object, as well as objects of various sizes, while maintaining the image size. The validity of the design is verified through performance analysis of the product.

Development of Stretchable Joint Motion Sensor for Rehabilitation based on Silver Nanoparticle Direct Printing (은 나노입자 프린팅 기반의 재활치료용 신축성 관절센서 개발)

  • Chae, Woen-Sik;Jung, Jae-Hu
    • Korean Journal of Applied Biomechanics
    • /
    • v.31 no.3
    • /
    • pp.183-188
    • /
    • 2021
  • Objective: The purpose of this study was to develop a stretchable joint motion sensor based on silver nanoparticles. The sensor can be used as rehabilitation equipment and for analyzing joint movement. Method: In this study, a precursor solution was created, and a nozzle printer (Musashi, Image Master 350PC) was used to print it onto a circuit board. A sourcemeter (Keithley, Keithley-2450) was used to evaluate changes in electric resistance as the sensor stretches. In addition, the sensor was attached to the center of the knee joint of 2 male adults, who performed knee flexion-extension for evaluation; 3 infrared cameras (100 Hz, Motion Master 100, Visol Inc., Korea) were also used for three-dimensional movement analysis. Descriptive statistics were used to compare the accuracy of joint-motion variables measured with the sensor and with 3D motion analysis. Results: The electric resistance of the sensor increased roughly 30-fold from its initial value at 50% elongation, and the resistance values at 10%, 20%, 30%, and 40% elongation were clearly distinguishable. Comparing movement variables from the sensor and the 3D cameras showed agreement of about 99% during knee joint extension, but only about 80% during the flexion phase. Conclusion: In this research, a stretchable joint motion sensor was created based on silver nanoparticles, which have high conductivity. When the sensor stretches, the distance between nanoparticles increases, gradually disconnecting the electric circuit and increasing the electric resistance. Evaluating knee joint angle by observing the sensor's electric resistance produced results and trends similar to those of the 3D motion analysis. However, the electric resistance of the stretchable sensor was unstable when it was stretched to maximum length or subjected to numerous joint movements. The sensor therefore needs improvement so that it can measure motion stably under any condition.
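
The reported resistance-elongation behavior lends itself to a simple calibration lookup: given (elongation, resistance-ratio) pairs, a measured ratio can be interpolated back to an elongation estimate. A minimal sketch follows; only the roughly 30-fold ratio at 50% elongation comes from the abstract, and the intermediate calibration values are hypothetical placeholders.

```python
# Sketch: convert a measured resistance ratio (R / R0) of the stretchable
# sensor back to an elongation estimate by piecewise-linear interpolation.
# Only the ~30x ratio at 50% elongation is reported in the abstract; the
# intermediate calibration points below are hypothetical placeholders.

CALIBRATION = [  # (elongation %, resistance ratio R/R0)
    (0, 1.0),
    (10, 3.0),    # hypothetical
    (20, 7.0),    # hypothetical
    (30, 13.0),   # hypothetical
    (40, 21.0),   # hypothetical
    (50, 30.0),   # ~30x at 50% elongation (reported)
]

def elongation_from_ratio(ratio: float) -> float:
    """Piecewise-linear inverse of the calibration curve."""
    if ratio <= CALIBRATION[0][1]:
        return 0.0
    for (e0, r0), (e1, r1) in zip(CALIBRATION, CALIBRATION[1:]):
        if ratio <= r1:
            return e0 + (e1 - e0) * (ratio - r0) / (r1 - r0)
    return CALIBRATION[-1][0]  # saturate at the last calibrated point

print(elongation_from_ratio(13.0))  # 30.0
```

In practice the calibration table would be filled with the measured ratios at 10-40% elongation, which the study found to be clearly distinguishable.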

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1445-1456
    • /
    • 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks provide unmanned ground vehicles (UGVs) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and color particle models are used to reconstruct the ground surface and objects of the 3D terrain model, respectively. To accelerate the system, we use GPU parallel computation to run the computer graphics and image processing algorithms in parallel.
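
The ground segmentation step in the perception module can be sketched in a simplified form. The paper does not spell out its exact method, so the grid-based minimum-height rule below is an illustrative assumption: a point counts as ground if it lies within a height tolerance of the lowest point in its 2D grid cell.

```python
# Sketch of a grid-based ground segmentation pass, in the spirit of the
# paper's environment perception module (not the authors' exact algorithm).
# A point is labeled ground when its height is within `tol` of the lowest
# point falling into the same 2D grid cell.

from collections import defaultdict

def segment_ground(points, cell=1.0, tol=0.2):
    """points: list of (x, y, z). Returns (ground, objects) lists."""
    min_z = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        min_z[key] = min(min_z[key], z)
    ground, objects = [], []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        (ground if z - min_z[key] <= tol else objects).append((x, y, z))
    return ground, objects

pts = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.05), (0.4, 0.2, 1.5)]  # last: obstacle
g, o = segment_ground(pts)
```

The non-ground points would then be grouped into individual obstacles by connected component labeling, as the abstract describes.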

Users' Preference and Acceptance of Smart Home Technologies (사용자의 스마트 주거 기술 선호와 수용에 관한 연구)

  • Cho, Myung Eun;Kim, Mi Jeong
    • Journal of the Architectural Institute of Korea Planning & Design
    • /
    • v.34 no.11
    • /
    • pp.75-84
    • /
    • 2018
  • This study analyzed users' acceptance of and intention to use smart home technologies, in addition to their needs and preferences, and identified differences in technology preference and acceptance across groups. The subjects were residents in their 40s and 60s living in Seoul or its suburbs; the 40s group completed questionnaires, while the 60s group was interviewed using questionnaires. A total of 105 questionnaires were used as data, and frequency, mean, cross-tabulation, independent-samples t test, one-way ANOVA, and multiple regression analyses were performed using SPSS 23. The results of this study are as follows. First, hypertension, hyperlipidemia, and hypercholesterolemia were the most common diseases among respondents, and respondents wished to continue living in their current homes as long as there was no discomfort. Therefore, smart home development should support daily living and health care so that residents can live healthy lives in their own living spaces for a long time. Second, the technologies residents needed most were control of the residential environment and monitoring of residents' health and physiological changes. The most preferred sensor types were motion sensors and speech recognition, while video cameras had a very low preference. Third, technology anxiety was the most significant factor influencing the intention to accept smart home technology: the greater the anxiety, the weaker the acceptance. Fourth, when applying smart residential technology in homes, various resident characteristics should be considered. Age and technology intimacy were the most influential variables, and preference and acceptance differed accordingly. Therefore, user-friendly smart home planning should take these results into consideration.
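
The group comparison described above (40s vs. 60s) rests on an independent-samples t test. As a minimal sketch of that statistic outside SPSS, the following computes a Welch-style t value in plain Python; the acceptance scores are hypothetical, invented only to make the example runnable.

```python
# Sketch of an independent-samples (Welch) t statistic, as used to compare
# groups in the study. The scores below are hypothetical illustration data,
# not the paper's survey results.

from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

group_40s = [4.1, 3.8, 4.4, 3.9, 4.2]  # hypothetical acceptance scores
group_60s = [3.2, 3.6, 3.0, 3.4, 3.1]
t = welch_t(group_40s, group_60s)
```

A positive t here would indicate higher mean acceptance in the first group; significance would still require the matching degrees of freedom and a p-value, which SPSS reports directly.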

Development of small multi-copter system for indoor collision avoidance flight (실내 비행용 소형 충돌회피 멀티콥터 시스템 개발)

  • Moon, Jung-Ho
    • Journal of Aerospace System Engineering
    • /
    • v.15 no.1
    • /
    • pp.102-110
    • /
    • 2021
  • Recently, multi-copters equipped with various collision avoidance sensors have been introduced to improve flight stability. LiDAR is used to recognize three-dimensional position, multiple cameras with real-time SLAM technology are used to calculate positions relative to obstacles, and three-dimensional depth sensors combining a small processor and camera are also used. In this study, a small collision-avoidance multi-copter system capable of indoor flight was developed as a platform for developing collision avoidance software. The multi-copter system was equipped with LiDAR, a 3D depth sensor, and a small image processing board. Object recognition and collision avoidance functions based on the YOLO algorithm were verified through flight tests. This paper covers recent trends in drone collision avoidance technology, the system design and manufacturing process, and flight test results.
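
The depth-sensor side of such a system reduces, at its simplest, to a steering decision from forward distances. The sketch below is an illustrative toy, not the paper's YOLO-based pipeline: it splits the field of view into thirds and steers toward the clearest side when the center is blocked.

```python
# Toy sketch of a depth-based avoidance decision (illustrative only; the
# paper's system combines YOLO object recognition with LiDAR/depth data).
# Given forward distances across the field of view, fly forward if the
# center is clear, otherwise turn toward the side with more clearance.

def avoidance_command(depth_row, threshold=1.5):
    """depth_row: list of forward distances (m) across the field of view.
    Returns 'forward', 'left', or 'right'."""
    n = len(depth_row)
    left = min(depth_row[: n // 3])
    center = min(depth_row[n // 3 : 2 * n // 3])
    right = min(depth_row[2 * n // 3 :])
    if center > threshold:
        return "forward"
    return "left" if left >= right else "right"

print(avoidance_command([3.0, 3.0, 0.8, 0.9, 4.0, 4.0]))  # "right"
```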

Defect Diagnosis and Classification of Machine Parts Based on Deep Learning

  • Kim, Hyun-Tae;Lee, Sang-Hyeop;Wesonga, Sheilla;Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.25 no.2_1
    • /
    • pp.177-184
    • /
    • 2022
  • Automatic defect sorting of machine parts is being introduced into manufacturing-process automation. In the final stage of such automation, computer vision rather than human visual judgment must determine whether a part is defective. In this paper, we introduce deep learning methods to improve classification performance on typical mechanical parts, such as welded parts, galvanized round plugs, and electro-galvanized nuts, based on experimental results. For poor welding, increasing the depth of the basic deep learning model's layers was effective; for the round plug, data surrounding the defective target area affected the result, which could be solved with an appropriate pre-processing technique. Finally, the zinc-plated nut requires data from multiple cameras because of its three-dimensional structure, so it is strongly affected by lighting and by the background image. To solve this problem, methods such as two-dimensional connectivity were applied in the object segmentation preprocessing. Although the experiments suggest the proposed methods are effective, most of the provided good/defective image data sets are relatively small, which may cause a learning-imbalance problem for the deep learning model, so we plan to secure more data in the future.
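
The "two-dimensional connectivity" mentioned for the zinc-plated nut is, in standard form, connected-component labeling of a binary mask, which lets the segmentation step separate the part from background speckle. The sketch below is a generic 4-connected labeling pass, not the authors' exact preprocessing code.

```python
# Sketch of 2D connectivity (connected-component labeling): assign each
# 4-connected foreground region of a binary mask its own label, so the
# largest region (the part) can be kept and background speckle discarded.
# Generic implementation, not the paper's exact code.

from collections import deque

def label_components(mask):
    """mask: 2D list of 0/1. Returns a same-shape list of region labels."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                current += 1            # start a new region
                labels[i][j] = current
                queue = deque([(i, j)])
                while queue:            # BFS flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels

m = [[1, 1, 0],
     [0, 0, 0],
     [0, 0, 1]]
print(label_components(m))  # [[1, 1, 0], [0, 0, 0], [0, 0, 2]]
```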

Take-off and landing assistance system for efficient operation of compact drone CCTV in remote locations (원격지의 초소형 드론 CCTV의 효율적인 운영을 위한 이착륙 보조 시스템)

  • Byoung-Kug Kim
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.3
    • /
    • pp.287-292
    • /
    • 2023
  • In the case of fixed CCTV, blind spots occur even when the visible range is maximized using the pan-tilt and zoom functions. A typical solution is to use a plurality of fixed CCTVs, but this requires a large amount of additional equipment (e.g., wires, facilities, monitors, etc.) in proportion to the number of CCTVs. Another solution is to use camera-equipped drones. However, a drone's operation time is very short; to extend coverage, multiple drones can be used, flying one at a time. In this case, drones that need to recharge their batteries return to a ready state at the drone port for the next operation. In this paper, we propose a system for precise positioning and stable landing on the drone port using a small drone equipped with a fixed, forward-facing monocular camera. We implement and operate the proposed system and verify its feasibility.
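
The core of monocular landing assistance is turning a detected port marker's position in the image into a lateral correction. The sketch below assumes a bounding box from some upstream detector (the detector itself, and any camera geometry, are outside the sketch) and returns a normalized offset from the image center.

```python
# Sketch of the landing-assist geometry: from a detected drone-port marker's
# bounding box in the forward camera image, compute a normalized offset from
# the image center to drive the position correction. The marker detector and
# pan/tilt geometry are assumed to exist elsewhere; bbox values are made up.

def landing_offset(bbox, img_w, img_h):
    """bbox: (x, y, w, h) of the detected port marker, in pixels.
    Returns (dx, dy) in [-1, 1]; (0, 0) means centered."""
    cx = bbox[0] + bbox[2] / 2.0
    cy = bbox[1] + bbox[3] / 2.0
    return (2.0 * cx / img_w - 1.0, 2.0 * cy / img_h - 1.0)

dx, dy = landing_offset((400, 300, 40, 40), 640, 480)
# dx > 0: port appears right of image center, so the drone shifts right
```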

Object detection and tracking using a high-performance artificial intelligence-based 3D depth camera: towards early detection of African swine fever

  • Ryu, Harry Wooseuk;Tai, Joo Ho
    • Journal of Veterinary Science
    • /
    • v.23 no.1
    • /
    • pp.17.1-17.10
    • /
    • 2022
  • Background: Inspection of livestock farms using surveillance cameras is emerging as a means of early detection of transboundary animal diseases such as African swine fever (ASF). Object tracking, a developing technology derived from object detection, aims at the consistent identification of individual objects on farms. Objectives: This study was conducted as a preliminary investigation for practical application to livestock farms. Using a high-performance artificial intelligence (AI)-based 3D depth camera, the aim was to establish a pathway for utilizing AI models to perform advanced object tracking. Methods: Multiple crossovers by two humans were simulated to investigate the potential of object tracking, with consistent identification after crossing over taken as evidence of successful tracking. Two AI models, a fast model and an accurate model, were tested and compared with regard to their 3D object tracking performance. Finally, a recording of a pig pen was also processed with the aforementioned AI models to test the possibility of 3D object detection. Results: Both AI models successfully provided a 3D bounding box, identification number, and distance from the camera for each individual human. The accurate detection model outperformed the fast detection model in 3D object tracking and showed potential for application to pigs as livestock. Conclusions: Preparing a custom dataset to train AI models on an appropriate farm is required for proper 3D object detection and ideal-level object tracking of pigs. This would allow farms to transition smoothly from traditional methods to ASF-preventing precision livestock farming.
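
Keeping identities consistent across crossovers is, in its simplest form, nearest-neighbor assignment of new detections to previously tracked 3D centroids. The tested AI models do this internally; the following is only an illustration of the principle, with a hypothetical gate distance.

```python
# Sketch of identity-keeping in 3D object tracking: match each new detection
# to the nearest previously tracked 3D centroid within a gate distance,
# otherwise issue a new ID. Illustrates the principle only; the evaluated
# AI models implement this internally. The 1 m gate is a made-up value.

import math

def assign_ids(tracks, detections, gate=1.0):
    """tracks: {id: (x, y, z)}; detections: list of (x, y, z).
    Returns an updated {id: (x, y, z)} with stable IDs."""
    updated, used = {}, set()
    next_id = max(tracks, default=0) + 1
    for det in detections:
        best, best_d = None, gate
        for tid, pos in tracks.items():
            if tid in used:
                continue
            d = math.dist(pos, det)
            if d < best_d:
                best, best_d = tid, d
        if best is None:          # no track close enough: new object
            best = next_id
            next_id += 1
        used.add(best)
        updated[best] = det
    return updated

tracks = {1: (0.0, 0.0, 2.0), 2: (1.0, 0.0, 3.0)}
print(assign_ids(tracks, [(1.1, 0.0, 3.0), (0.0, 0.1, 2.0)]))
```

With depth available, the matching happens in metric 3D space, which is what lets two people who cross paths in the image keep their IDs.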

Anomaly Detection Method Based on Trajectory Classification in Surveillance Systems (감시 시스템에서 궤적 분류를 이용한 이상 탐지 방법)

  • Jeonghun Seo;Jiin Hwang;Pal Abhishek;Haeun Lee;Daesik Ko;Seokil Song
    • Journal of Platform Technology
    • /
    • v.12 no.3
    • /
    • pp.62-70
    • /
    • 2024
  • Recent surveillance systems employ multiple sensors, such as cameras and radars, to enhance the accuracy of intrusion detection. However, object recognition through camera (RGB, Thermal) sensors may not always be accurate during nighttime, in adverse weather conditions, or when the intruder is camouflaged. In such situations, it is possible to detect intruders by utilizing the trajectories of objects extracted from camera or radar sensors. This paper proposes a method to detect intruders using only trajectory information in environments where object recognition is challenging. The proposed method involves training an LSTM-Attention based trajectory classification model using normal and abnormal (intrusion, loitering) trajectory data of animals and humans. This model is then used to identify abnormal human trajectories and perform intrusion detection. Finally, the validity of the proposed method is demonstrated through experiments using real data.
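
Before trajectories can be fed to an LSTM-Attention classifier, variable-length point sequences from the camera or radar typically need to be resampled to a fixed length. The sketch below shows one such preprocessing step; the sequence length of 8 and the linear resampling are illustrative choices, not details from the paper.

```python
# Sketch of trajectory preprocessing for a sequence classifier: resample a
# variable-length (x, y) trajectory to a fixed number of evenly spaced
# points by linear interpolation. Length and method are illustrative
# assumptions, not the paper's stated configuration.

def resample_trajectory(traj, length=8):
    """traj: list of (x, y) points. Returns `length` evenly spaced points."""
    if len(traj) == 1:
        return traj * length
    out = []
    for i in range(length):
        t = i * (len(traj) - 1) / (length - 1)   # fractional source index
        k = min(int(t), len(traj) - 2)
        f = t - k
        (x0, y0), (x1, y1) = traj[k], traj[k + 1]
        out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return out

fixed = resample_trajectory([(0, 0), (2, 2), (4, 0)], length=5)
```

Each fixed-length sequence would then be labeled normal or abnormal (intrusion, loitering) for training the classification model.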


The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.19-28
    • /
    • 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables audiences to interact naturally with the computing environment at exhibition venues such as galleries, museums, and parks. There are also attempts to provide additional services based on audience location information, or to improve and deploy interaction between exhibits and audiences by analyzing usage patterns. To provide multimodal interaction services at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used; it can obtain the real-time location of fast-moving subjects, making it one of the key technologies wherever location tracking is required. However, because GPS locates users via satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, indoor location tracking studies use very-short-range communication services such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings: the audience needs additional sensor devices, and the systems become difficult and expensive as the density of the target area increases. In addition, the typical exhibition environment has many obstacles to the network, which degrades system performance. Above all, interaction methods based on these older technologies cannot provide natural service to users. Moreover, because such systems rely on sensor recognition, every user must carry a device, which limits the number of users who can use the system simultaneously.
To make up for these shortcomings, this study proposes a technology that obtains exact user location information through location mapping using Wi-Fi, 3D cameras, and users' smartphones. We use the signal strength of wireless-LAN access points to build an indoor location tracking system at lower cost: APs are cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone itself serves as the tracking device. We used the Microsoft Kinect sensor as the 3D camera. Kinect is equipped with functions for discriminating depth and human information within the shooting area, so it is well suited to extracting users' body, vector, and acceleration information at low cost. We confirm the audience's location using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we eliminate the need for additional tagging devices and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain exact location and status information about the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell and obtains the users' exact locations as well as status and pattern information about the audience. The Camera Client's location mapping technique decreases the error rate of indoor location services, increases the accuracy of individual discrimination in the area through discrimination based on body information, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive appropriate interaction services through the main server.
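
The Wi-Fi cell-ID step described above can be sketched simply: the user's rough cell is taken to be the one whose access point shows the strongest received signal on the smartphone, after which the 3D camera assigned to that cell refines the position. AP names, cell IDs, and RSSI values below are hypothetical.

```python
# Sketch of the Wi-Fi cell-ID step: pick the cell whose access point has
# the strongest received signal (RSSI, in dBm, where closer to 0 means
# stronger). AP names, cell IDs, and RSSI values are hypothetical.

def locate_cell(rssi_by_ap, ap_to_cell):
    """rssi_by_ap: {ap_id: dBm}; ap_to_cell: {ap_id: cell_id}.
    Returns the cell of the strongest access point."""
    strongest = max(rssi_by_ap, key=rssi_by_ap.get)
    return ap_to_cell[strongest]

scan = {"ap-lobby": -71, "ap-hall-a": -48, "ap-hall-b": -60}  # hypothetical
cell = locate_cell(scan, {"ap-lobby": "C1", "ap-hall-a": "C2", "ap-hall-b": "C3"})
```

In the proposed system, the 3D camera covering the returned cell then performs the fine-grained individual discrimination based on body information.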