• Title/Summary/Keyword: low vision


Adaptive planar vision marker composed of LED arrays for sensing under low visibility

  • Kim, Kyukwang;Hyun, Jieum;Myung, Hyun
    • Advances in robotics research, v.2 no.2, pp.141-149, 2018
  • In image processing and robotics applications, two-dimensional (2D) black-and-white patterned planar markers are widely used. However, these markers are not detectable in low-visibility environments, and their patterns cannot be changed. This research proposes an active, adaptive marker node that displays 2D marker patterns on light-emitting diode (LED) arrays for easier recognition in foggy or turbid underwater environments. Because each node is made to blink at a different frequency, active LED marker nodes are distinguishable from one another at long distances without any increase in marker size. We expect that the proposed system can be used in various harsh conditions where conventional marker systems are not applicable because of low visibility. The proposed system remains compatible with conventional markers, as the displayed patterns are identical.
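
The abstract does not give implementation details, but the frequency-based identification it describes can be sketched as follows: average the pixel intensity over a crop around one marker in each frame, then read the dominant blink frequency off an FFT of that time series. The frame rate handling and the example node frequencies below are assumptions for illustration, not values from the paper.

```python
# Minimal sketch: distinguishing blinking LED marker nodes by temporal frequency.
# Assumes a stack of grayscale frames cropped around one marker and a known fps.
import numpy as np

NODE_FREQS = {1: 2.0, 2: 4.0, 3: 8.0}          # node id -> blink frequency (Hz), illustrative

def dominant_blink_frequency(frames: np.ndarray, fps: float) -> float:
    """frames: (T, H, W) intensity stack cropped around one marker."""
    signal = frames.mean(axis=(1, 2))           # mean brightness per frame
    signal = signal - signal.mean()             # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]           # strongest temporal frequency

def identify_node(frames: np.ndarray, fps: float) -> int:
    f = dominant_blink_frequency(frames, fps)
    return min(NODE_FREQS, key=lambda k: abs(NODE_FREQS[k] - f))
```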

Reconstruction of High-Resolution Facial Image Based on A Recursive Error Back-Projection

  • Park, Jeong-Seon;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference, 2004.04b, pp.715-717, 2004
  • This paper proposes a new method for reconstructing a high-resolution facial image from a low-resolution facial image, based on recursive error back-projection within top-down machine learning. A face is represented by a linear combination of prototypes of shape and texture. From the shape and texture information of the pixels in a given low-resolution facial image, we estimate optimal coefficients for the linear combinations of shape prototypes and texture prototypes by least-squares minimization. A high-resolution facial image is then obtained by applying these optimal coefficients to the corresponding high-resolution prototypes. In addition, recursive error back-projection is applied to improve the accuracy of the synthesized high-resolution image. The encouraging results show that the proposed method can improve face recognition performance by reconstructing high-resolution facial images from low-resolution ones captured at a distance.
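
As a rough illustration of the two steps named in the abstract, the sketch below solves a least-squares problem over low-resolution prototypes, applies the coefficients to high-resolution prototypes, and refines the result with an error back-projection loop. The prototype matrices and the downsample/upsample operators are placeholders, not the paper's actual bases.

```python
# Minimal sketch of prototype-based reconstruction with recursive error
# back-projection. P_low: (d_low, k) prototype matrix, P_high: (d_high, k);
# x_low is the flattened low-resolution input. All operators are placeholders.
import numpy as np

def reconstruct(x_low, P_low, P_high, downsample, upsample,
                n_iters=5, step=0.5):
    # (1) optimal coefficients: argmin_c || P_low @ c - x_low ||^2
    c, *_ = np.linalg.lstsq(P_low, x_low, rcond=None)
    x_high = P_high @ c                        # initial high-res estimate

    # (2) recursive error back-projection: push the low-res residual back
    for _ in range(n_iters):
        residual = x_low - downsample(x_high)  # error in low-res domain
        x_high = x_high + step * upsample(residual)
    return x_high
```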


A Distance Measurement System Using a Laser Pointer and a Monocular Vision Sensor (레이저포인터와 단일카메라를 이용한 거리측정 시스템)

  • Jeon, Yeongsan;Park, Jungkeun;Kang, Taesam;Lee, Jeong-Oog
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.41 no.5, pp.422-428, 2013
  • Recently, many unmanned aerial vehicle (UAV) studies have focused on small UAVs, because they are cost-effective and suitable for dangerous indoor environments where human entry is limited. Map building through distance measurement is a key technology for the autonomous flight of small UAVs. In much of the research on unmanned systems, distance is measured using laser range finders or stereo vision sensors. Although a laser range finder provides accurate distance measurements, it has the disadvantage of high cost. Calculating distance with a stereo vision sensor is straightforward, but the sensor is large and heavy, which is unsuitable for small UAVs with limited payload. This paper suggests a low-cost distance measurement system using a laser pointer and a monocular vision sensor. A method for measuring distance with the suggested system is explained, and map-building experiments are conducted using these distance measurements. The experimental results are compared to the actual data, and the reliability of the suggested system is verified.
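
The paper's exact geometry is not reproduced in the abstract; the sketch below uses the common simplifying setup for this kind of system, a laser beam mounted parallel to the camera's optical axis at a fixed baseline, where the pinhole model reduces distance estimation to a single division. The focal length and baseline values are illustrative assumptions.

```python
# Minimal sketch of laser-camera triangulation under the assumption that the
# laser beam is parallel to the optical axis at a fixed baseline. The pinhole
# model then gives: distance = focal_px * baseline / pixel_offset.
def distance_from_laser_dot(pixel_offset: float,
                            focal_px: float = 800.0,    # focal length in pixels (assumed)
                            baseline_m: float = 0.05):  # laser-camera offset in meters (assumed)
    if pixel_offset <= 0:
        raise ValueError("laser dot must be offset from the principal point")
    return focal_px * baseline_m / pixel_offset

# e.g. a dot 20 px from the principal point -> 800 * 0.05 / 20 = 2.0 m
```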

Design and Implementation of a Low-Code/No-Code System

  • Hyun, Chang Young
    • International journal of advanced smart convergence, v.8 no.4, pp.188-193, 2019
  • This paper describes an environment-based Low-Code/No-Code execution platform and execution method that combines the advantages of hybrid and native apps. It supports iPhone and Android phones simultaneously, supports various templates, and avoids developer-oriented development methods: apps are produced through a coding-free process, and the produced apps play the role of a Java virtual machine (VM). The Low-Code/No-Code (LCNC) development platform is a visual integrated development environment that allows non-technical developers to drag and drop application components to build mobile or web applications. It provides functions to manage dependencies packaged into small modules such as widgets and to load them dynamically when needed, to apply the model-view-controller (MVC) pattern, and to handle the document object model (DOM). In this system, a widget calls the AppOS API provided by the UCMS platform to deliver the necessary requests to AppOS. The AppOS API provides authentication/authorization, online-to-offline (O2O), commerce, messaging, social publishing, and vision functionality.
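
The AppOS API is proprietary and not documented in the abstract, so the sketch below only illustrates the generic pattern the abstract describes, widgets packaged as small modules that are loaded lazily on first use. The registry interface and module names are hypothetical.

```python
# Illustrative sketch only: a lazy widget-module loader in the spirit of the
# "small modules, dynamically loaded when needed" design the abstract names.
# Everything here (names, interface) is hypothetical, not the AppOS API.
import importlib

class WidgetRegistry:
    def __init__(self):
        self._modules = {}                     # widget name -> module path
        self._loaded = {}                      # cache of already-imported modules

    def register(self, name: str, module_path: str):
        self._modules[name] = module_path      # declare, but do not import yet

    def get(self, name: str):
        if name not in self._loaded:           # import only on first use
            self._loaded[name] = importlib.import_module(self._modules[name])
        return self._loaded[name]

# registry.register("login_form", "widgets.login_form")   # hypothetical module
# registry.get("login_form")                               # triggers the deferred import
```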

Mobile Web UI/UX Research for Low Vision in Visually Handicapped People (저시력 장애인을 위한 모바일 웹 UI/UX 연구)

  • Song, Seung-hun;Kim, Eui-jeong;Kang, Shin-cheon;Kim, Chang-suk;Chung, Jong-in
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2017.10a, pp.391-394, 2017
  • Low vision refers to visual impairment, caused by inherited or acquired eye disease, that cannot be remedied by medical or optical means. People with low vision number more than 240 million worldwide and retain only limited residual eyesight. We discuss how to improve information accessibility for people with low vision, and directions for future research, through a study of web UI/UX in the mobile web environment, where information is currently accessed through screen readers (TTS) and screen enlargement functions.


Adaptive Enhancement of Low-light Video Images Algorithm Based on Visual Perception (시각 감지 기반의 저조도 영상 이미지 적응 보상 증진 알고리즘)

  • Li Yuan;Byung-Won Min
    • Journal of Internet of Things and Convergence, v.10 no.2, pp.51-60, 2024
  • To address the low contrast and poor recognizability of video images captured in low-light environments, we propose an adaptive contrast compensation and enhancement algorithm based on human visual perception. First, characteristic factors of the low-light video image are extracted: the average luminance (AL) and the average bandwidth factor (ABWF). A mathematical model of human visual contrast resolution compensation (CRC) is established from the differences in the original image's grayscale/chromaticity levels, and the proportions of the three primary colors are each compensated by integration. Then, when the degree of compensation falls below the precisely distinguishable difference of bright vision, a compensation threshold is set to linearly compensate bright vision to the full bandwidth. Finally, an automatic optimization model for the compensation ratio coefficient is established by combining subjective image quality evaluation with the image characteristic factors. Experimental results show that the proposed adaptive enhancement algorithm achieves good enhancement and real-time performance, effectively recovers dark-vision information, and can be widely used in different scenes.
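
The abstract names the quantities involved (AL, ABWF, CRC) but not their formulas, so the following is only a simplified sketch of the overall shape of such an algorithm: measure the average luminance and, when it falls below a threshold, linearly stretch intensity to the full 8-bit bandwidth using one common map for all three primaries. The threshold value is an assumption.

```python
# Simplified sketch, not the paper's CRC model: stretch a dark image to the
# full 8-bit bandwidth when its average luminance (AL) is below a threshold.
import numpy as np

def enhance_low_light(img_rgb: np.ndarray, al_threshold: float = 60.0):
    """img_rgb: uint8 array of shape (H, W, 3)."""
    luma = img_rgb.astype(np.float32).mean(axis=2)
    al = luma.mean()                               # AL: average luminance
    if al >= al_threshold:
        return img_rgb                             # bright enough; leave as-is
    lo, hi = luma.min(), luma.max()
    gain = 255.0 / max(hi - lo, 1e-6)              # stretch to full bandwidth
    out = (img_rgb.astype(np.float32) - lo) * gain # one common map for all channels
    return np.clip(out, 0, 255).astype(np.uint8)
```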

Factors Influencing on Vision-related Quality of Life in Patients with Retinal Diseases Receiving Intravitreal Injections (유리체강 내 주입술을 받는 망막질환자의 시각 관련 삶의 질 영향요인)

  • Kim, Hyunyoung;Ha, Yeongmi
    • Journal of Korean Clinical Nursing Research, v.27 no.1, pp.54-65, 2021
  • Purpose: The purpose of this study was to identify factors influencing vision-related quality of life in patients with retinal diseases receiving intravitreal injections by examining the relationships among anxiety, depression, coping, eye health behaviors, and vision-related quality of life. Methods: One hundred and five outpatients diagnosed with macular degeneration or diabetic retinopathy were recruited from one university hospital from August 16, 2019 to March 25, 2020. Data were analyzed using descriptive statistics (frequency and percentage, mean, standard deviation), t-tests, ANOVA, the Scheffé test, Pearson's correlations, and stepwise multiple regression in IBM SPSS Statistics 25.0. Results: Vision-related quality of life differed significantly by age (F=3.01, p=.034), subjective economic status (F=5.83, p=.004), type of retinal disease (t=2.62, p=.010), and disease in both eyes (t=-3.04, p=.003). Vision-related quality of life showed a significant positive correlation with age (r=.24, p=.012), and negative correlations with anxiety (r=-.66, p<.001), depression (r=-.48, p<.001), and emotion-focused coping (r=-.20, p=.036). The regression analysis indicated that the factors affecting vision-related quality of life in patients with retinal diseases were anxiety and subjective economic status, accounting for 47.0% of the variance in vision-related quality of life. Conclusion: Based on our results, health professionals need to pay attention to patients with low socioeconomic status, given the burden of frequent treatments. A program to decrease anxiety should also be developed for outpatients receiving intravitreal injections, to improve their vision-related quality of life.
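
The study ran its analysis in SPSS; for readers who want to reproduce this style of analysis, a minimal equivalent multiple regression in Python might look like the following. The file and column names are hypothetical, and no study data is reproduced here.

```python
# Minimal sketch of a multiple regression of the kind the study reports.
# "vrqol_survey.csv" and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vrqol_survey.csv")            # hypothetical data file
model = smf.ols("vr_qol ~ anxiety + economic_status", data=df).fit()
print(model.summary())                          # coefficients and R-squared
```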

Development of an Automatic Unmanned Target Object Carrying System for ASV Sensor Evaluation Methods (ASV용 센서통합평가 기술을 위한 무인 타겟 이동 시스템의 개발)

  • Kim, Eunjeong;Song, Insung;Yu, Sybok;Kim, Byungsu
    • Journal of Auto-vehicle Safety Association, v.4 no.2, pp.32-36, 2012
  • The automatic unmanned target object carrying system (AUTOCS) was developed for testing road-vehicle radar and vision sensors. When developing ASV or ADAS products, it is important for the test target to present realistic target characteristics. The AUTOCS moves a pedestrian or motorcycle target at a desired speed and position, and is designed so that the sensor detects only the payload target, a manikin or a motorcycle, and not the AUTOCS itself. To keep its radar exposure low, the AUTOCS has a stealthy shape with a low radar cross section (RCS). To avoid detection by vision sensors, its outer skin carries a specially designed pattern resembling asphalt. The AUTOCS has three driving modes: remote control, path following, and replay. AUTOCS V.1 was tested to verify its radar detection characteristics, and it successfully demonstrated that it is not detected by a car radar. The results are presented in this paper.

On low cost model-based monitoring of industrial robotic arms using standard machine vision

  • Karagiannidis, Aris;Vosniakos, George C.
    • Advances in robotics research, v.1 no.1, pp.81-99, 2014
  • This paper contributes towards the development of a computer vision system for telemonitoring of industrial articulated robotic arms. The system aims to provide precise real-time measurements of the joint angles by employing low-cost cameras and visual markers on the body of the robot. To achieve this, a mathematical model connecting image features to joint angles was developed, covering rotation of a single joint whose axis is parallel to the visual projection plane. The feature examined during image processing is the varying area of a circular target placed on the body of the robot, as registered by the camera during rotation of the arm. To distinguish between rotation directions, four targets were used, placed every 90° and observed by two cameras at suitable angular distances. The results were deemed acceptable considering camera cost and the lighting conditions of the workspace. A computational error analysis explored how deviations from the ideal camera positions affect the measurements and led to appropriate corrections. The method is considered extensible to multi-joint motion of a known kinematic chain.
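
The underlying geometry can be stated compactly: a circular target rotating about an axis parallel to the image plane projects with area approximately A0·|cos θ|, so the measured area gives the joint angle up to sign, an ambiguity the paper resolves with four targets spaced 90° apart and two cameras. A minimal sketch of the inversion, under that foreshortening assumption:

```python
# Minimal sketch: recover joint angle magnitude from the projected area of a
# circular target, assuming projected_area ~= a0 * |cos(theta)|, where a0 is
# the area registered when the target faces the camera (theta = 0).
import math

def joint_angle_from_area(measured_area: float, a0: float) -> float:
    ratio = max(0.0, min(1.0, measured_area / a0))  # clamp measurement noise
    return math.degrees(math.acos(ratio))           # magnitude only, 0..90 deg

# e.g. a target at half its frontal area -> acos(0.5) = 60 degrees
```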