• Title/Summary/Keyword: Camera Controller

Robot Manipulator Visual Servoing via Kalman Filter-Optimized Extreme Learning Machine and Fuzzy Logic

  • Zhou, Zhiyu;Hu, Yanjun;Ji, Jiangfei;Wang, Yaming;Zhu, Zefei;Yang, Donghe;Chen, Ji
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.8
    • /
    • pp.2529-2551
    • /
    • 2022
  • Visual servoing (VS) based on the Kalman filter (KF) algorithm, as in KF-based image-based visual servoing (IBVS) systems, suffers from three problems in uncalibrated environments: perturbation noises of the robot system, errors in the noise statistics, and slow convergence. To solve these three problems, this paper combines KF-based IBVS with an African vultures optimization algorithm-enhanced extreme learning machine (AVOA-ELM) and fuzzy logic (FL). First, the KF estimates the Jacobian matrix online. We propose an AVOA-ELM error compensation model that corrects the sub-optimal KF estimate, addressing the disturbance-noise and noise-statistics problems. Next, an FL controller is designed for gain adaptation, which addresses the slow convergence of the KF-based IBVS system. We then propose a visual servoing scheme combining FL with KF-AVOA-ELM (FL-KF-AVOA-ELM). Finally, we verify the algorithm on the 6-DOF robotic manipulator PUMA 560. Compared with existing methods, our algorithm solves the three problems above without requiring camera parameters, a robot kinematics model, or target depth information. We also compared the proposed method with other KF-based IBVS methods under different disturbance-noise environments, and it achieves the best results under all three evaluation metrics.
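
The abstract above hinges on the KF estimating the image Jacobian online. A minimal sketch of that idea, assuming a random-walk model for the Jacobian entries and illustrative dimensions (this is the generic uncalibrated-IBVS formulation, not the paper's AVOA-ELM compensation):

```python
import numpy as np

# State x = vec(J): the stacked rows of the image Jacobian.
# Observation: ds = H x, where H is built from the joint increment dq,
# so each feature change observes one row of J through dq.

def kf_jacobian_update(x, P, dq, ds, Q, R):
    """One KF step: predict (random-walk model for J), then correct
    with the observed feature change ds for joint change dq."""
    m = ds.shape[0]                      # number of feature coordinates
    n = dq.shape[0]                      # number of joints
    # Observation matrix: ds = (I_m kron dq^T) vec(J)
    H = np.kron(np.eye(m), dq.reshape(1, n))
    # Predict: Jacobian modeled as a slowly drifting random walk
    P = P + Q
    # Correct
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (ds - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

With small process and measurement noise this behaves like recursive least squares and converges to the true Jacobian after a few diverse joint motions.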

Non-contact mobile inspection system for tunnels: a review (터널의 비접촉 이동식 상태점검 장비: 리뷰)

  • Chulhee Lee;Donggyou Kim
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.25 no.3
    • /
    • pp.245-259
    • /
    • 2023
  • The purpose of this paper is to examine the most recent tunnel scanning systems to obtain insights for the development of a non-contact mobile inspection system. Tunnel scanning systems are mostly developed around two main technologies: laser scanning and image scanning. Laser scanning accurately recreates the geometric characteristics of tunnel linings from point clouds, whereas image scanning employs computer vision to readily identify damage such as fine cracks and leaks on the tunnel lining surface. The analysis suggests that an image scanning system is more suitable for detecting damage on tunnel linings. A camera-based tunnel scanning system under development should include components such as lighting, data storage, power supply, and an image-capturing controller synchronized with vehicle speed.
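
The speed-synchronized capture controller mentioned at the end reduces to a simple timing rule; the function name, frame footprint, and overlap ratio below are illustrative assumptions, not values from the review:

```python
# Sketch: choose the trigger period so that consecutive frames of a
# vehicle-mounted camera overlap along the tunnel wall.

def trigger_period_s(speed_m_s, footprint_m, overlap=0.3):
    """Seconds between exposures so each frame advances by
    footprint * (1 - overlap) along the tunnel lining."""
    if speed_m_s <= 0:
        raise ValueError("speed must be positive")
    return footprint_m * (1.0 - overlap) / speed_m_s
```

For example, a 2 m frame footprint at 10 m/s with 50% overlap requires a trigger every 0.1 s.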

Counting People Walking Through Doorway using Easy-to-Install IR Infrared Sensors (설치가 간편한 IR 적외선 센서를 활용한 출입문 유동인구 계측 방법)

  • Oppokhonov, Shokirkhon;Lee, Jae-Hyun;Jung, Jae-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.35-40
    • /
    • 2021
  • People counting data is valuable for most business owners, since it yields meaningful information about customer movement within their businesses. For example, supermarket owners can open or close checkout counters depending on the number of occupants. It also has many applications in smart buildings, where it can drive a controller that adjusts heating and cooling according to the number of occupants in each room. Advanced technologies such as camera-based people counting systems can give more accurate counts, but they are expensive, hard to deploy, and privacy-invasive. In this paper, we propose a method and a hardware sensor for counting people passing through a passage or an entrance using IR infrared sensors. The proposed sensor operates at low voltage, so its low power consumption ensures long battery life. Moreover, we propose a new method that distinguishes the human body from other objects. The proposed method is inexpensive, easy to install, and, most importantly, runs in real time. In our evaluation, recall was 96% when counting people passing one by one without overlapping, and precision was 88% when people carried handbag-like objects. Our proposed method outperforms other IR-based people counting systems in terms of counting accuracy.
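
The direction logic of a two-beam IR doorway counter can be sketched as follows; the beam labels and pairing rule are illustrative assumptions, since the paper's exact firmware logic is not reproduced here:

```python
# Sketch: two IR beams across the doorway, 'A' on the outside and 'B'
# on the inside. The order in which they are broken gives direction.

def count_passages(events):
    """events: ordered list of beam IDs ('A' or 'B') as each beam is
    broken. An A->B pair counts as an entry, B->A as an exit."""
    entries = exits = 0
    prev = None
    for beam in events:
        if prev == 'A' and beam == 'B':
            entries += 1
            prev = None          # consume the pair
        elif prev == 'B' and beam == 'A':
            exits += 1
            prev = None
        else:
            prev = beam
    return entries, exits
```

Real firmware would add a timeout between the two beam events to reject objects that linger in the doorway.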

Augmented Reality Based Tangible Interface For Digital Lighting of CAID System (CAID 시스템의 디지털 라이팅을 위한 증강 현실 기반의 실체적 인터페이스에 관한 연구)

  • Hwang, Jung-Ah;Nam, Tek-Jin
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.119-128
    • /
    • 2007
  • With the development of digital technologies, CAID has become an essential part of the industrial design process. Creating photo-realistic images from a virtual scene with 3D models is one of the specialized tasks for CAID users. This task requires a complex interface for setting the positions and parameters of the camera and lights to obtain optimal rendering results. However, the user interfaces of existing CAID tools are not simple for designers, because the task is mostly accomplished in a parameter-setting dialogue window. This research addresses these interface issues, in particular those related to lighting, by developing and evaluating TLS (Tangible Lighting Studio), which uses Augmented Reality and a Tangible User Interface. Object positioning and parameter setting become tangible and are distributed in the workspace to support a more intuitive rendering task. TLS consists of markers, a physical controller, and a see-through HMD (Head Mounted Display). The user can directly control the lighting parameters in the AR workspace. In an evaluation experiment, TLS provided higher effectiveness, efficiency, and user satisfaction than the existing GUI (Graphical User Interface) method. We expect applications of TLS to extend to photography education and architectural simulation.

Driver's Status Recognition Using Multiple Wearable Sensors (다중 웨어러블 센서를 활용한 운전자 상태 인식)

  • Shin, Euiseob;Kim, Myong-Guk;Lee, Changook;Kang, Hang-Bong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.6 no.6
    • /
    • pp.271-280
    • /
    • 2017
  • In this paper, we propose a new safety system composed of wearable devices, a driver's seat belt, and an integrating controller. The wearable device and seat belt capture the driver's biological information, while the integrating controller analyzes the captured signals to alert the driver or directly control the car according to the driver's status. Previous driver-safety studies that captured physiological signals and facial information from the driver's seat, the steering wheel, or a facial camera had difficulty gathering accurate, continuous signals because the sensors required the driver to sit upright. By utilizing wearable sensors, our proposed system obtains continuous and highly accurate signals compared with previous research. Our wearable apparatus features a sensor that measures heart rate, skin conductivity, and skin temperature, and applies filters to eliminate noise generated by the automobile. Moreover, the acceleration and gyro sensors in the wearable device reduce measurement errors. Based on the collected bio-signals, we present criteria for identifying the driver's condition. An accredited certification body has verified that the device attains medical-grade accuracy. Laboratory tests and a real automobile test demonstrate that the proposed system is suitable for measuring the driver's condition.

Development of Rotation Invariant Real-Time Multiple Face-Detection Engine (회전변화에 무관한 실시간 다중 얼굴 검출 엔진 개발)

  • Han, Dong-Il;Choi, Jong-Ho;Yoo, Seong-Joon;Oh, Se-Chang;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.116-128
    • /
    • 2011
  • In this paper, we propose the structure of a high-performance face-detection engine that handles face rotation well while minimizing the required memory usage compared with previous face-detection engines. The validity of the proposed structure has been verified through an FPGA implementation. For high-performance face detection, the MCT (Modified Census Transform) method, which is robust against lighting changes, was used. The AdaBoost learning algorithm was used to create optimized learning data, and a rotation transformation stage was added to remain effective against face rotation. The proposed hardware structure comprises a Color Space Converter, Noise Filter, Memory Controller Interface, Image Rotator, Image Scaler, MCT (Modified Census Transform) unit, Candidate Detector / Confidence Mapper, Position Resizer, Data Grouper, and Overlay Processor / Color Overlay Processor. The face-detection engine was tested using a Virtex5 LX330 FPGA board, a QVGA-grade CMOS camera, and an LCD display, and demonstrated excellent performance in diverse real-life environments and on a standard face-detection database. The result is a high-performance real-time face-detection engine that processes at least 60 frames per second, is robust to lighting changes and face rotation, and can detect 32 faces of diverse sizes simultaneously.
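
The MCT step the engine relies on can be sketched in a few lines: each 3×3 neighborhood is summarized by comparing its pixels with the neighborhood mean, which makes the resulting 9-bit code invariant to additive lighting shifts. This is a minimal software illustration of the transform, not the paper's hardware pipeline:

```python
import numpy as np

def mct_3x3(patch):
    """Modified Census Transform of a 3x3 gray patch -> 9-bit code (0..511).
    Each pixel contributes one bit: 1 if it exceeds the neighborhood mean."""
    mean = patch.mean()
    code = 0
    for v in patch.flatten():        # raster order, first pixel is the MSB
        code = (code << 1) | (1 if v > mean else 0)
    return code
```

A detector then classifies faces from histograms or lookup tables of these codes, so a global brightness change leaves the features untouched.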

A study on measurement and compensation of automobile door gap using optical triangulation algorithm (광 삼각법 측정 알고리즘을 이용한 자동차 도어 간격 측정 및 보정에 관한 연구)

  • Kang, Dong-Sung;Lee, Jeong-woo;Ko, Kang-Ho;Kim, Tae-Min;Park, Kyu-Bag;Park, Jung Rae;Kim, Ji-Hun;Choi, Doo-Sun;Lim, Dong-Wook
    • Design & Manufacturing
    • /
    • v.14 no.1
    • /
    • pp.8-14
    • /
    • 2020
  • In general, automotive parts are assembled on production lines by automated mounting robots. At such production sites, quality problems often arise, such as misalignment of the parts (doors, trunks, roofs, etc.) to be assembled with the vehicle body, or collisions between assembly robots and components. To address these problems, part quality has been inspected manually using mechanical jigs outside the automated production line. Machine vision is the most commonly used automotive inspection technology; it covers surface inspections such as mounting-hole spacing, defect detection, and dents and bends in body panels, and it is also used for robot guidance, providing position information to the robot controller so the robot's path can be adjusted to improve process productivity and manufacturing flexibility. The most difficult measurement task is calibrating the surface analysis, position, and characteristics between parts from stored images of the measured part as it enters the field of view of a camera mounted on the side or top of the part. One problem for machine vision on an automobile production line is that lighting conditions inside the factory change severely with the weather (morning versus evening, rainy versus sunny days) through the exterior windows of the assembly plant. In addition, since the body parts are steel sheet, light reflection is very strong, so even a small lighting change greatly alters the quality of the captured image. In this study, the distance between the car body and the door is acquired by a measuring device combining a laser slit light source and an LED pattern light source. The result is transferred to the articulated robot that assembles the parts, and assembly is done at the optimal position by adjusting the angle and step.
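
In the simplest pinhole model, the optical triangulation at the core of such a slit-light sensor reduces to z = f·b/d; the function name and parameter values below are illustrative, not the paper's calibration:

```python
# Sketch of laser triangulation: a laser offset from the camera by a
# baseline b projects a spot whose image offset d encodes distance.

def triangulate_distance(f_px, baseline_mm, offset_px):
    """Distance to the surface from the laser-spot image offset:
    z = f * b / d (pinhole camera, parallel-axis approximation).
    f_px: focal length in pixels, baseline_mm: camera-laser baseline,
    offset_px: spot displacement from the optical axis in pixels."""
    if offset_px <= 0:
        raise ValueError("spot offset must be positive")
    return f_px * baseline_mm / offset_px
```

The door gap then follows as the difference between two such distances measured on either side of the slit line.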

Development of On-line Quality Sorting System for Dried Oak Mushroom - 3rd Prototype-

  • 김철수;김기동;조기현;이정택;김진현
    • Agricultural and Biosystems Engineering
    • /
    • v.4 no.1
    • /
    • pp.8-15
    • /
    • 2003
  • In Korea, dried oak mushrooms are quality-graded by first classifying them into more than 10 categories based on the opening of the cap, the surface pattern, and color. The mushrooms of each category are further classified into 3 or 4 groups based on shape and size, resulting in 30 to 40 grades in total. Quality evaluation and sorting based on external visual features are usually done manually. Since the visual features affecting quality are distributed over the entire surface of the mushroom, both the front (cap) and back (stem and gill) surfaces must be inspected thoroughly. In practice, it is almost impossible for a human to inspect every mushroom, especially when they are fed continuously on a conveyor. In this paper, with real-time on-line implementation in mind, image-processing algorithms based on an artificial neural network were developed for quality grading. The neural network operates directly on the raw gray-value image of the fed mushroom captured by the camera, without complex processing such as feature enhancement and extraction, to identify the feeding state and grade the mushroom. The algorithms were implemented in a prototype on-line grading and sorting system designed to simplify the system requirements and overall mechanism. The system consists of automatic mushroom feeding and handling devices, a computer vision system with a lighting chamber, a one-chip microprocessor-based controller, and pneumatic actuators. The grading scheme was tested on the prototype. The network was trained for feeding-state recognition and grading using static images: 200 samples (20 grade levels, 10 per grade) for training and 300 samples (20 grade levels, 15 per grade) for validation. By changing the orientation of each sample, 600 data sets were generated for testing, and the trained network achieved around 91% grading accuracy. Although image processing itself took less than about 0.3 s per mushroom, the actuating devices and control response brought the average to 0.6-0.7 s per mushroom, a processing capability of 5,000 to 6,000 mushrooms per hour.
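
The stated throughput follows directly from the cycle time; a quick arithmetic check:

```python
# Items per hour from seconds per item: 0.6-0.7 s per mushroom gives
# roughly 5,000-6,000 mushrooms per hour, matching the abstract.

def per_hour(seconds_per_item):
    return 3600.0 / seconds_per_item
```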

IGRINS First Light Instrumental Performance

  • Park, Chan;Yuk, In-Soo;Chun, Moo-Young;Pak, Soojong;Kim, Kang-Min;Pavel, Michael;Lee, Hanshin;Oh, Heeyoung;Jeong, Ueejeong;Sim, Chae Kyung;Lee, Hye-In;Le, Huynh Anh Nguyen;Strubhar, Joseph;Gully-Santiago, Michael;Oh, Jae Sok;Cha, Sang-Mok;Moon, Bongkon;Park, Kwijong;Brooks, Cynthia;Ko, Kyeongyeon;Han, Jeong-Yeol;Nah, Jakyuong;Hill, Peter C.;Lee, Sungho;Barnes, Stuart;Park, Byeong-Gon;T., Daniel
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.39 no.1
    • /
    • pp.52.2-52.2
    • /
    • 2014
  • The Immersion Grating Infrared Spectrometer (IGRINS) is an unprecedentedly compact infrared cross-dispersed echelle spectrograph with high-resolution, high-sensitivity optical performance. A silicon immersion grating distinguishes the instrument, a first in this field. IGRINS covers the entire ground-accessible portion of the wavelength range between 1.45 and 2.45 μm in a single exposure with a spectral resolution of 40,000. Individual volume phase holographic (VPH) gratings serve as cross-dispersing elements for separate spectrograph arms covering the H and K bands. On the 2.7 m Harlan J. Smith telescope at the McDonald Observatory, the slit size is 1″ × 15″. IGRINS has a 0.27″ per pixel plate scale on a 2048 × 2048 pixel Teledyne Scientific & Imaging HAWAII-2RG detector with a SIDECAR ASIC cryogenic controller. The instrument comprises four subsystems: a calibration unit, an input relay optics module, a slit-viewing camera, and nearly identical H and K spectrograph modules. The use of a silicon immersion grating and a compact white-pupil design keeps the spectrograph's collimated beam size to 25 mm, which permits the entire cryogenic system to be contained in a moderately sized rectangular vacuum chamber. Fabrication and assembly of the optical and mechanical hardware were completed in 2013. In this presentation, we describe the major design characteristics of the instrument and the early performance estimated from first-light commissioning at the McDonald Observatory.
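
Two of the quoted optical numbers can be cross-checked with simple arithmetic from the plate scale (a consistency check only, not data from the presentation):

```python
# From the abstract: 0.27 arcsec/pixel plate scale, 1 arcsec slit width,
# 2048-pixel detector side.

slit_px = 1.0 / 0.27            # the 1" slit spans about 3.7 pixels
fov_arcmin = 0.27 * 2048 / 60   # the detector side covers about 9.2 arcmin
```

So the slit is sampled by nearly four detector pixels, consistent with a well-sampled spectrograph design.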

Two Design Techniques of Embedded Systems Based on Ad-Hoc Network for Wireless Image Observation (애드 혹 네트워크 기반의 무선 영상 관측용 임베디드 시스템의 두 가지 설계 기법들)

  • LEE, Yong Up;Song, Chang-Yeoung;Park, Jeong-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.5
    • /
    • pp.271-279
    • /
    • 2014
  • In this paper, two design techniques for an embedded system providing wireless image observation over a temporary ad-hoc network are proposed and developed. The first is an embedded system design for near-real-time, short-range wireless observation: a remote monitoring node with a built-in image-processing function that transmits a 160 × 128 image wirelessly at a maximum rate of 1 fps (frame per second). The second uses the embedded system for general long-range wireless observation, consisting of a main node, a remote monitoring node, and a system controller with a built-in image-processing function, with a wireless image transmission rate of 1/3 fps. The proposed system uses a wireless ad-hoc network, widely accepted for short-range, low-power, bidirectional digital communication. The hardware consists of general development modules, a small digital camera, and a PC; the embedded software, based on the Zigbee stack, and the user-interface software were developed and tested on the implemented modules. An analysis of the wireless environment and the performance results are presented.
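
A back-of-envelope check that a 160 × 128 frame at 1 fps fits a Zigbee-class link; the 8-bit pixel depth is an assumption, the 250 kbit/s figure is the IEEE 802.15.4 2.4 GHz PHY maximum, and protocol overhead is ignored:

```python
# Raw transmission time for one uncompressed frame over a Zigbee-class link.

def tx_time_s(width, height, bits_per_px=8, link_bps=250_000):
    return width * height * bits_per_px / link_bps
```

A 160 × 128 8-bit frame needs about 0.66 s of raw airtime, so a 1 fps stream is tight once MAC and routing overhead are included, which is consistent with the paper's 1/3 fps rate for the multi-node configuration.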