• Title/Summary/Keyword: Movement Detection


A Study on Improving License Plate Recognition Performance Using Super-Resolution Techniques

  • Kyeongseok JANG;Kwangchul SON
    • Korean Journal of Artificial Intelligence
    • /
    • v.12 no.3
    • /
    • pp.1-7
    • /
    • 2024
  • In this paper, we propose an innovative super-resolution technique to address the issue of reduced accuracy in license plate recognition caused by low-resolution images. Conventional vehicle license plate recognition systems have relied on images obtained from fixed surveillance cameras for traffic detection to perform vehicle detection, tracking, and license plate recognition. However, during this process, image quality degradation occurred due to the physical distance between the camera and the vehicle, vehicle movement, and external environmental factors such as weather and lighting conditions. In particular, the acquisition of low-resolution images due to camera performance limitations has been a major cause of significantly reduced accuracy in license plate recognition. To solve this problem, we propose a Single Image Super-Resolution (SISR) model with a parallel structure that combines Multi-Scale and Attention Mechanism. This model is capable of effectively extracting features at various scales and focusing on important areas. Specifically, it generates feature maps of various sizes through a multi-branch structure and emphasizes the key features of license plates using an Attention Mechanism. Experimental results show that the proposed model demonstrates significantly improved recognition accuracy compared to existing vehicle license plate super-resolution methods using Bicubic Interpolation.
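
To make the parallel multi-scale/attention structure concrete, a minimal, illustrative PyTorch-style sketch is given below. The block name, channel count, kernel sizes, and the squeeze-and-excitation style channel attention are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a multi-scale block with channel attention,
# in the spirit of the parallel SISR model described in the abstract.
# Layer names, channel counts, and kernel sizes are assumptions.
import torch
import torch.nn as nn

class MultiScaleAttentionBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Parallel branches extract features at different receptive fields.
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        # Simple channel attention that emphasizes important feature maps.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([self.branch3(x), self.branch5(x), self.branch7(x)], dim=1)
        fused = self.fuse(feats)
        return x + fused * self.attention(fused)  # residual, attention-weighted
```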

Effect of Fabric Sensor Type and Measurement Location on Respiratory Detection Performance (직물센서의 종류와 측정 위치가 호흡 신호 검출 성능에 미치는 효과)

  • Cho, Hyun-Seung;Yang, Jin-Hee;Lee, Kang-Hwi;Kim, Sang-Min;Lee, Hyeok-Jae;Lee, Jeong-Hwan;Kwak, Hwi-Kuen;Ko, Yun-Su;Chae, Je-Wook;Oh, Su-Hyeon;Lee, Joo-Hyeon
    • Science of Emotion and Sensibility
    • /
    • v.22 no.4
    • /
    • pp.97-106
    • /
    • 2019
  • The purpose of this study was to investigate the effect of the type and measurement location of a fabric strain gauge sensor on the detection performance for respiratory signals. We implemented two types of sensors to measure the respiratory signal and attached them to a band to detect the respiratory signal. Eight healthy males in their 20s were the subjects of this study. They were asked to wear the two respiratory bands in turn. The subjects were measured for 30 seconds while standing comfortably, with respiration paced at 15 breaths per minute; after a 10-second break, the entire measurement was repeated. Measurement locations were the chest and the abdomen. In addition, to verify respiratory measurement performance in a movement state, the subjects were asked to walk in place at a speed of 80 strides per minute (SPM), and respiration was measured using the same method as above. Meanwhile, to acquire a reference signal, the SS5LB of BIOPAC Systems, Inc. was worn by the subjects simultaneously with the experimental sensor. The Kruskal-Wallis test and Bonferroni post hoc tests were performed using SPSS 24.0 to verify differences in measurement performance among the eight combinations of sensor type, measurement location, and movement state. In addition, the Wilcoxon test was conducted to examine whether there were differences according to sensor type, measurement location, and movement state. The results showed that respiratory signal detection performance was best when respiration was measured at the chest using the CNT-coated fabric sensor, regardless of movement state. Based on the results of this study, we will develop a chest belt-type wearable platform that can monitor various vital signs in real time without disturbing movement in outdoor environments or daily activities.
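
As a rough illustration of how such detection performance could be scored and compared, the sketch below correlates a fabric-sensor respiration signal with a reference channel and applies the nonparametric tests named in the abstract. The scoring function, group labels, and example values are hypothetical assumptions, not the study's data.

```python
# Hypothetical sketch: score fabric-sensor respiration signals against a
# reference signal and compare groups nonparametrically.
import numpy as np
from scipy.stats import kruskal, wilcoxon

def detection_score(sensor, reference):
    """Pearson correlation as a simple agreement score (an assumption)."""
    return np.corrcoef(sensor, reference)[0, 1]

# scores[condition] = per-subject scores; the values below are made up.
scores = {
    "CNT_chest_static":   [0.97, 0.95, 0.96, 0.94],
    "CNT_abdomen_static": [0.90, 0.88, 0.91, 0.89],
}
h, p_overall = kruskal(*scores.values())                 # difference among groups
w, p_pair = wilcoxon(scores["CNT_chest_static"],
                     scores["CNT_abdomen_static"])       # paired comparison
print(p_overall, p_pair)
```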

Welfare Interface using Multiple Facial Features Tracking (다중 얼굴 특징 추적을 이용한 복지형 인터페이스)

  • Ju, Jin-Sun;Shin, Yun-Hee;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.1
    • /
    • pp.75-83
    • /
    • 2008
  • We propose a welfare interface using multiple facial feature tracking, which can efficiently implement various mouse operations. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial feature tracking, and mouse control. The facial region is first obtained using a skin-color model and connected-component analysis (CCs). Thereafter, the eye regions are localized using a neural network (NN)-based texture classifier that discriminates the facial region into eye and non-eye classes, and the mouth region is localized using an edge detector. Once the eye and mouth regions are localized, they are continuously tracked by the mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse operations such as movement or click are implemented. To assess the validity of the proposed system, it was applied to an interface system for a web browser and tested on a group of 25 users. The results show that our system has an accuracy of 99% and processes more than 21 frames/sec on a PC for 320×240 input images; as such, it can provide user-friendly and convenient access to a computer in real time.
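
A minimal OpenCV sketch of the skin-color localization and mean-shift tracking stages described above is shown below. The color space, threshold values, and window handling are assumptions; the NN texture classifier and template matching steps are omitted.

```python
# Minimal OpenCV sketch: skin-color face localization via connected
# components, then mean-shift tracking of a feature window.
# Color thresholds are assumptions, not the paper's model.
import cv2
import numpy as np

def locate_face(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # rough skin range
    n, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])       # skip background
    x, y, w, h, _ = stats[largest]
    return (x, y, w, h)

def track(prev_window, frame_bgr, roi_hist):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_window = cv2.meanShift(back_proj, prev_window, criteria)
    return new_window
```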

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection is to locate the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system that places a single camera above a monitor, while the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils, and lip corners) automatically in 2D camera images. From the movement of the feature points detected in the starting images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at one position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D positions of the features. In experiments, we obtain the gaze position on a 19-inch monitor, and the accuracy between the computed positions and the real ones is about 2.01 inches of RMS error.
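
The final geometric step, computing the gaze point from the normal of the plane spanned by the tracked 3D features, can be sketched as below. Placing the monitor plane at z = 0 and anchoring the gaze ray at the feature centroid are assumptions for illustration, not the paper's calibration.

```python
# Geometric sketch: gaze point as the intersection of the facial-plane
# normal (anchored at the feature centroid) with the monitor plane z = 0.
import numpy as np

def gaze_point_on_monitor(p1, p2, p3):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)          # normal of the facial plane
    normal = normal / np.linalg.norm(normal)
    origin = (p1 + p2 + p3) / 3.0                # anchor the gaze ray here
    if abs(normal[2]) < 1e-9:
        return None                              # ray parallel to the screen
    t = -origin[2] / normal[2]                   # solve origin + t*normal, z = 0
    return origin + t * normal                   # (x, y, 0) on the monitor plane
```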

Development of Differential Diagnosis and Treatment Method of Reproductive Disorders Using Ultrasonography in Cows IV. Confirmation of Estrus Detection and Early Pregnancy Diagnosis (초음파검사에 의한 소의 번식장애 감별진단 및 치료법 개발 IV, 발정확인 및 조기 임신진단)

  • 손창호;강병규;최한선;강현구;김혁진;오기석;서국현
    • Journal of Veterinary Clinics
    • /
    • v.16 no.1
    • /
    • pp.128-137
    • /
    • 1999
  • Plasma progesterone (P4) concentrations were measured to confirm estrus observation and for early pregnancy diagnosis in 130 cows of small farmers. Ultrasonographic examinations were performed from day 30 after artificial insemination to establish the characteristic ultrasonographic appearances of gestational structures at each pregnancy stage. Of the 130 cows inseminated, 111 cows (85.4%) showed ovulatory estrus, 12 cows (9.2%) showed anovulatory estrus, and 7 cows (5.4%) were errors of estrus detection. The accuracy of early pregnancy diagnosis in the 111 ovulatory estrus cows, with the discriminatory plasma concentration at day 21 after artificial insemination set at 3.0 ng/ml, was 86.7% for positive diagnosis and 100% for negative diagnosis. Pregnancy diagnosis by ultrasonography was performed to evaluate gestational structures from day 30 after artificial insemination in 83 cows, of which 72 were pregnant. The characteristic ultrasonographic appearances of gestational structures at each gestational stage were as follows. The embryo proper was observed within anechoic fetal fluid between days 28 and 40 after insemination, and the amnion and embryonic heartbeat were also detected in this period. Between days 41 and 50, the head and body of the embryo proper could be discriminated, and forelimb and hindlimb buds were also observed. Between days 51 and 60, the head and body of the embryo proper were clearly distinguishable, and fetal movement and forelimb and hindlimb buds were observed. Between days 61 and 70, the fetus was completely developed, and the fetal skeleton, organs, and cotyledons were observed. After day 71, the fetal organs developed rapidly and the fetus was only partially visible on the screen because it had grown too large. These results indicate that plasma P4 determination at days 0, 6, and 21 after artificial insemination can be utilized to confirm estrus observation and for early pregnancy diagnosis. Also, ultrasonography is a reliable method for early pregnancy diagnosis from day 30 after artificial insemination.
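
The day-21 discriminatory threshold reported above amounts to a simple decision rule, sketched here for illustration; the function name and return labels are assumptions, and interpreting concentrations at or above the threshold as positive follows the usual use of a discriminatory level.

```python
# Illustrative decision rule from the abstract: plasma P4 at day 21 after
# artificial insemination, with 3.0 ng/ml as the discriminatory level.
def early_pregnancy_call(p4_day21_ng_per_ml, threshold=3.0):
    """Tentative call; reported accuracy was 86.7% for positive and
    100% for negative diagnoses."""
    return "positive" if p4_day21_ng_per_ml >= threshold else "negative"
```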

Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology
    • /
    • v.14 no.4
    • /
    • pp.553-561
    • /
    • 2010
  • This paper proposes a face tracking method that can be effectively applied to a robot's vision system. The proposed algorithm tracks facial areas after detecting the area of video motion. Movement detection in video images is done by computing the difference image between two consecutive frames and then removing noise with a median filter and erosion and dilation operations. To extract skin color from the moving area, the color information of sample images is used. The skin-color region and the background area are separated by evaluating similarity with membership functions generated from MIN-MAX values as fuzzy data. Within the face candidate region, the eyes are detected from the C channel of the CMY color space, and the mouth from the Q channel of the YIQ color space. The face region is then tracked by seeking the eye and mouth features detected from the knowledge base. The experiment includes 1,500 frames of video from 10 subjects, 150 frames per subject. The results show a detection rate of 95.7% (motion areas detected in 1,435 frames) and a successful face tracking rate of 97.6% (1,401 faces tracked).
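
The movement-detection front end described above (frame differencing followed by noise removal) can be sketched with OpenCV as follows. The binarization threshold, kernel size, and iteration counts are assumptions, not the paper's parameters.

```python
# Sketch of the motion-detection step: difference of two consecutive frames,
# median filtering, then erosion/dilation to clean up noise.
import cv2
import numpy as np

def motion_mask(prev_frame, curr_frame, thresh=25):
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)                       # difference image
    diff = cv2.medianBlur(diff, 5)                   # suppress impulsive noise
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)     # remove speckles
    mask = cv2.dilate(mask, kernel, iterations=2)    # restore moving regions
    return mask
```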

A Robust Marker Detection Algorithm Using Hybrid Features in Augmented Reality (증강현실 환경에서 복합특징 기반의 강인한 마커 검출 알고리즘)

  • Park, Gyu-Ho;Lee, Heng-Suk;Han, Kyu-Phil
    • The KIPS Transactions:PartA
    • /
    • v.17A no.4
    • /
    • pp.189-196
    • /
    • 2010
  • This paper presents an improved marker detection algorithm using hybrid features such as corners, line segments, regions, and adaptive threshold values. In typical augmented reality environments, marker occlusion and poor illumination are common; however, the existing ARToolkit fails to recognize the marker in these situations, especially under partial concealment of the marker by the user, large changes of illumination, and dim circumstances. To solve these problems, an adaptive threshold technique is adopted to extract the marker region, and a corner extraction method based on line segments is presented to cope with marker occlusions. In addition, a compensation method that matches the size and center of the extracted marker to the registered one is proposed to increase template matching efficiency, because the inner marker size in warped images is slightly distorted by corner movement and warping. Experimental results showed that the proposed algorithm can robustly detect the marker under severe illumination changes and occlusion, and allows similar markers to be used, because the matching efficiency was increased by almost 30%.
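
The adaptive-threshold stage used to extract the marker region under uneven illumination might look roughly like the OpenCV sketch below; the neighborhood size and offset constant are assumptions, and the corner and line-segment processing is not shown.

```python
# Sketch of adaptive thresholding for marker-region extraction under
# uneven or dim illumination. Block size and offset are assumptions.
import cv2

def marker_binary(gray_image):
    return cv2.adaptiveThreshold(
        gray_image, 255,
        cv2.ADAPTIVE_THRESH_MEAN_C,   # threshold from the local mean
        cv2.THRESH_BINARY_INV,        # marker is dark on a light background
        31, 7)                        # neighborhood size, offset constant
```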

Automatic Control System on Cardiac Output Regulation for the Moving Actuator Type Total Artificial Heart (MOVING-ACTUATOR TYPE 인공심장의 심박출 조절에 대한 자동 제어방법)

  • 김원곤
    • Journal of Chest Surgery
    • /
    • v.28 no.6
    • /
    • pp.542-548
    • /
    • 1995
  • The goal of this study is to develop an effective control system for cardiac output regulation based upon preload and afterload conditions, without any transducers or compliance chambers, in the moving actuator type total artificial heart. Motor current waveforms during the actuator movement are used as the input to the automatic control algorithm. While the current waveform analysis is performed, the stroke length and velocity of the actuator are gradually increased up to the maximum pump output level. If the diastolic filling rate of either the right or left pump begins to exceed the venous return, atrial collapse will occur. Since the diastolic suction acts as a load on the motor, this critical condition can be detected by analyzing the motor current waveforms. Every time this detection criterion is met, the control algorithm decreases the stroke velocity and length of the actuator step by step to just below the critical detection level; then they start to increase again. In this way the maximum pump output under a given venous return can be achieved. Additionally, the control algorithm provides some degree of afterload sensitivity: if the aortic pressure is detected to exceed 120 mmHg, the stroke length and velocity decrease in the same way as in the response to preload. Left-right pump output balance is maintained by proper adjustment of the asymmetry of the stroke angle. In the mock circulatory test, this control system worked well, and a considerable range of stroke volume difference was obtained by adjusting the asymmetry value. Two ovine experiments were performed. It was confirmed that the required cardiac output regulation according to the venous return could be achieved with adequate detection of diastolic suction, at least in the in vivo short-term survival cases (2-3 days). We conclude that this control algorithm is a promising method to regulate cardiac output in the moving actuator type total artificial heart.
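
The preload/afterload response described above is essentially a feedback loop on the actuator stroke. A highly simplified sketch follows; the suction-detection flag standing in for the motor-current waveform analysis, the step size, and the normalization of stroke values are all assumptions.

```python
# Highly simplified sketch of the stroke-adjustment loop described in the
# abstract. suction_detected replaces the motor-current waveform analysis
# and is an assumption; units are normalized for illustration.
def regulate_stroke(stroke_length, stroke_velocity, suction_detected,
                    aortic_pressure_mmHg, step=0.05,
                    max_length=1.0, max_velocity=1.0):
    if suction_detected or aortic_pressure_mmHg > 120:
        # Back off stepwise on diastolic suction or high afterload.
        stroke_length = max(stroke_length - step, 0.0)
        stroke_velocity = max(stroke_velocity - step, 0.0)
    else:
        # Otherwise creep back up toward maximum pump output.
        stroke_length = min(stroke_length + step, max_length)
        stroke_velocity = min(stroke_velocity + step, max_velocity)
    return stroke_length, stroke_velocity
```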

Design and Implementation of a Motor Vehicle Emergency Situation Detection System (차량용 사고 상황 감지 시스템의 설계 및 구현)

  • Kang, Moon-Seol;Kim, Yu-Sin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.11
    • /
    • pp.2677-2685
    • /
    • 2013
  • Driving data collected from a vehicle consists of native image data and the corresponding sensing data. Hence, it can be used as objective data from which events that took place outside the car can be analyzed and determined. In this paper, we designed and implemented an emergency situation detection system that senses, stores, and analyzes signals related to the vehicle's movements, the driver's various operation states, and the collision pulse when a collision accident occurs on the road. The suggested system provides information on the driver's reaction right before the collision, the operation state of the vehicle, and its physical movement. The collected and analyzed driving data can be utilized to clarify the cause of a collision accident and to handle it fairly. Besides, it can contribute to identifying and correcting the driver's bad driving habits and to saving.
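
A minimal sketch of the collision-pulse detection idea is given below; the acceleration threshold and the use of a simple magnitude test are illustrative assumptions, not the system's actual decision logic.

```python
# Illustrative sketch: flag a candidate collision when the acceleration
# magnitude exceeds a threshold. The 4 g threshold is an assumption.
import math

def collision_detected(ax_g, ay_g, az_g, threshold_g=4.0):
    magnitude_g = math.sqrt(ax_g * ax_g + ay_g * ay_g + az_g * az_g)
    return magnitude_g >= threshold_g
```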

Performance Analysis of the Active SAS Autofocus Processing for UUV Trajectory Disturbances Compensation (수중무인체 궤적교란 보상을 위한 능동 SAS 자동초점처리 성능 분석)

  • Kim, Boo-il
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.1
    • /
    • pp.215-222
    • /
    • 2017
  • An active synthetic aperture sonar mounted on a small UUV experiences various trajectory disturbances along its traveling path under the influence of the external underwater environment. As a result, phase mismatch occurs in the synthetic aperture processing of the signals reflected from seabed objects, and detection performance decreases. In this paper, we compensated the deteriorated images by active SAS autofocus processing using displaced phase centers (DPC) and analyzed the effect on detection performance when periodic trajectory disturbances occur in the side direction during constant-velocity, straight movement of the UUV. Through simulations, images deteriorated by different disturbance magnitudes and period variations of the platform were compensated using phase-difference processing of the overlapping displaced phase centers of adjacent transmitted ping signals, and we confirmed improved azimuth resolution and detection image characteristics at the 3 dB reference point.
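
The DPC phase-difference step mentioned above, estimating the residual phase between overlapping phase centers of adjacent pings, can be sketched as follows. The array layout, the aggregation over samples, and the single-scalar correction are assumptions for illustration, not the paper's processing chain.

```python
# Sketch of the DPC idea: estimate the residual phase between overlapping
# displaced phase centers of adjacent pings and apply it as a correction.
import numpy as np

def dpc_phase_error(ping_a_overlap, ping_b_overlap):
    """Complex echo samples from the overlapping phase centers of two
    adjacent pings; returns an aggregate phase error estimate."""
    cross = ping_a_overlap * np.conj(ping_b_overlap)   # per-sample correlation
    return np.angle(np.sum(cross))

def apply_correction(ping_b, phase_error):
    return ping_b * np.exp(-1j * phase_error)          # remove estimated error
```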