• Title/Abstract/Keyword: Low-vision

696 results found (processing time: 0.025 s)

Adaptive planar vision marker composed of LED arrays for sensing under low visibility

  • Kim, Kyukwang;Hyun, Jieum;Myung, Hyun
    • Advances in robotics research
    • /
    • Vol. 2 No. 2
    • /
    • pp.141-149
    • /
    • 2018
  • In image processing and robotic applications, two-dimensional (2D) black-and-white patterned planar markers are widely used. However, these markers are not detectable in low-visibility environments, and their patterns cannot be changed. This research proposes an active and adaptive marker node that displays 2D marker patterns using light-emitting diode (LED) arrays for easier recognition in foggy or turbid underwater environments. Because each node is made to blink at a different frequency, active LED marker nodes are distinguishable from one another at long distances without increasing the size of the marker. We expect that the proposed system can be used in various harsh conditions where conventional marker systems are not applicable because of low visibility. The proposed system remains compatible with conventional markers, as the displayed patterns are identical.
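The blink-frequency identification described above can be sketched as a simple spectral check: sample the brightness of a detected marker region over successive frames and match the dominant frequency of the trace against the table of node frequencies. The function below is a minimal illustration, not the authors' implementation; the frame rate, node frequencies, and matching tolerance are assumed values.

```python
import numpy as np

def identify_marker(intensity, fps, node_freqs, tol=0.2):
    """Match a marker's brightness trace to a node by its blink frequency.

    intensity  : 1-D array of marker-region brightness per frame
    fps        : camera frame rate in Hz
    node_freqs : blink frequency assigned to each node, in Hz (assumed table)
    """
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                        # drop the DC term so the blink peak dominates
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    dominant = freqs[np.argmax(spectrum)]   # strongest periodic component
    for node_id, f in enumerate(node_freqs):
        if abs(dominant - f) <= tol:
            return node_id
    return None

# a node blinking at 2 Hz, sampled at 30 fps for 3 s
t = np.arange(0, 3, 1 / 30)
trace = 0.5 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
node = identify_marker(trace, fps=30, node_freqs=[1.0, 2.0, 4.0])  # → 1
```

A longer observation window narrows the frequency bins, letting more nodes share the same camera view without ambiguity.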

Reconstruction of High-Resolution Facial Image Based on A Recursive Error Back-Projection

  • Park, Jeong-Seon;Lee, Seong-Whan
    • Korean Institute of Information Scientists and Engineers: Conference Proceedings
    • /
    • KIISE 2004 Spring Conference Proceedings, Vol. 31 No. 1 (B)
    • /
    • pp.715-717
    • /
    • 2004
  • This paper proposes a new method for reconstructing a high-resolution facial image from a low-resolution facial image, based on a recursive error back-projection of top-down machine learning. A face is represented by a linear combination of prototypes of shape and texture. From the shape and texture information of the pixels in a given low-resolution facial image, we can estimate optimal coefficients for a linear combination of shape prototypes and of texture prototypes by solving a least-squares minimization. A high-resolution facial image is then obtained by applying the optimal coefficients to a linear combination of the high-resolution prototypes. In addition, a recursive error back-projection is applied to improve the accuracy of the synthesized high-resolution facial image. The encouraging results show that the proposed method can improve the performance of face recognition by reconstructing high-resolution facial images from low-resolution ones captured at a distance.
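The two stages described above — least-squares estimation of prototype coefficients, followed by recursive error back-projection — can be sketched in a few lines. This is a schematic illustration under assumed linear down-/back-projection operators, not the paper's exact algorithm; `lr_protos` and `hr_protos` stand for the low- and high-resolution prototype matrices.

```python
import numpy as np

def reconstruct(lr_input, lr_protos, hr_protos, n_iter=5, step=0.5):
    """Reconstruct a high-resolution vector from a low-resolution one.

    lr_protos : (d_lr, k) matrix of low-resolution prototypes
    hr_protos : (d_hr, k) matrix of high-resolution prototypes
    """
    # 1) optimal prototype coefficients by least-squares minimization
    c, *_ = np.linalg.lstsq(lr_protos, lr_input, rcond=None)
    hr = hr_protos @ c
    # 2) recursive error back-projection: project the HR estimate down,
    #    compare with the observed LR input, and push the residual back up
    down = lr_protos @ np.linalg.pinv(hr_protos)   # assumed down-projection operator
    up = down.T                                    # assumed back-projection operator
    for _ in range(n_iter):
        residual = lr_input - down @ hr
        hr = hr + step * (up @ residual)
    return hr
```

When the low-resolution input lies exactly in the span of the low-resolution prototypes, the least-squares step alone recovers the coefficients and the back-projection loop leaves the estimate unchanged; the iterations matter when the input is noisy or off-manifold.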


레이저포인터와 단일카메라를 이용한 거리측정 시스템 (A Distance Measurement System Using a Laser Pointer and a Monocular Vision Sensor)

  • 전영산;박정근;강태삼;이정욱
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • Vol. 41 No. 5
    • /
    • pp.422-428
    • /
    • 2013
  • Recently, interest in small UAVs has been increasing, because they are cost-effective and well suited to disaster environments and other places that are difficult for humans to access. For such small UAVs, mapping based on distance measurement is an essential technology. Previous research on unmanned systems has mostly used laser sensors and stereo vision sensors for distance measurement. Laser sensors offer excellent accuracy and reliability, but most are expensive; stereo vision sensors are easy to implement, but their weight makes them unsuitable for mounting on a small UAV. This paper introduces a low-cost distance measurement system built from a laser pointer and a single camera. Experiments were conducted in which distances were measured with the proposed system and a map was built from them, and the reliability of the system was verified by comparison against ground-truth data.
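The geometry of such a laser-pointer rangefinder reduces to a single triangulation formula: the laser is mounted a fixed baseline away from the camera axis, and the dot's pixel offset from the image center maps to an angle through calibration constants. A minimal sketch, with assumed calibration values (not taken from the paper):

```python
import math

def laser_range(pixel_offset, baseline_m, rad_per_pixel, offset_rad=0.0):
    """Distance to the laser dot by triangulation.

    pixel_offset  : dot's offset from the image center, in pixels
    baseline_m    : separation between laser pointer and camera axis, in meters
    rad_per_pixel, offset_rad : calibration constants of the camera/laser rig
    """
    theta = pixel_offset * rad_per_pixel + offset_rad
    return baseline_m / math.tan(theta)

# with an assumed 6 cm baseline and 1 mrad/pixel, a dot 100 px from
# the image center corresponds to roughly 0.6 m
d = laser_range(100, baseline_m=0.06, rad_per_pixel=0.001)
```

Nearer surfaces push the dot farther from the image center, so range resolution is best at close range and degrades as the dot approaches the center.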

Design and Implementation of a Low-Code/No-Code System

  • Hyun, Chang Young
    • International journal of advanced smart convergence
    • /
    • Vol. 8 No. 4
    • /
    • pp.188-193
    • /
    • 2019
  • This paper describes an environment-based Low-Code/No-Code (LCNC) execution platform and execution method that combines hybrid and native apps. In detail, it describes an LCNC execution structure that combines the advantages of hybrid and native apps: it supports iPhone and Android phones simultaneously, supports various templates, and avoids developer-oriented development methods through a coding-free app production process, with the produced apps playing the role of a Java virtual machine (VM). The LCNC development platform is a visual integrated development environment that allows non-technical developers to build mobile or web applications by dragging and dropping application components. It provides functions to manage dependencies packaged into small modules such as widgets and loaded dynamically when needed, to apply the model-view-controller (MVC) pattern, and to handle the document object model (DOM). In the LCNC system, a widget calls the AppOS API provided by the UCMS platform to deliver the necessary requests to AppOS. The AppOS API provides authentication/authorization, online-to-offline (O2O), commerce, messaging, social publishing, and vision functionality.

저시력 장애인을 위한 모바일 웹 UI/UX 연구 (Mobile Web UI/UX Research for Low Vision in Visually Handicapped People)

  • 송승훈;김의정;강신천;김창석;정종인
    • Korea Institute of Information and Communication Engineering: Conference Proceedings
    • /
    • 2017 Fall Conference of the Korea Institute of Information and Communication Engineering
    • /
    • pp.391-394
    • /
    • 2017
  • Low vision refers to visual impairment or loss of visual function, caused by congenital or acquired eye disease, that cannot be corrected by medical or optical means. People with low vision number as many as 240 million worldwide and, despite having residual vision, belong to the information-underprivileged class of the smart era. Building on environments where information access is already possible through screen readers (TTS) and screen magnification, this study examines UI/UX in the mobile web environment for people with low vision, and discusses how to improve their information accessibility as well as directions for future research.


시각 감지 기반의 저조도 영상 이미지 적응 보상 증진 알고리즘 (Adaptive Enhancement of Low-light Video Images Algorithm Based on Visual Perception)

  • 이원;민병원
    • Journal of Internet of Things and Convergence
    • /
    • Vol. 10 No. 2
    • /
    • pp.51-60
    • /
    • 2024
  • To address the low contrast and poor discernibility of video images in low-light environments, we propose a contrast-adaptive compensation enhancement algorithm based on human visual perception. First, image feature factors such as average brightness and average bandwidth are extracted from video images in low-light environments, a mathematical model of human visual contrast-resolution compensation is established according to the gray/chromaticity differences of the original video, and each of the three primary colors is compensated by proportional-integral control. Next, when the compensation level is too low to properly distinguish a just-noticeable photopic difference, a thresholded linear compensation is applied over the full photopic bandwidth. Finally, an automatic optimization model for the compensation proportionality coefficients is built by combining subjective image-quality assessment with the image feature factors. Experimental results show that the adaptive video enhancement algorithm achieves good enhancement and good real-time performance, can effectively mine dark-vision information, and can be widely used in various scenarios.
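As a rough sketch of brightness-adaptive compensation, the snippet below uses a per-channel gamma as a stand-in for the paper's per-primary proportional-integral model; it is an illustration of the idea, not the authors' algorithm, and the target brightness is an assumed value.

```python
import numpy as np

def adaptive_enhance(img, target_mean=0.5):
    """Brightness-adaptive compensation of a low-light image.

    img : float RGB image in [0, 1], shape (H, W, 3)
    Each primary channel gets its own gamma, chosen so the channel's
    mean brightness moves toward target_mean.
    """
    out = np.empty_like(img)
    for c in range(3):                      # compensate each primary separately
        mean_c = np.clip(img[..., c].mean(), 1e-6, 1 - 1e-6)
        # gamma < 1 brightens a dark channel: mean_c ** gamma == target_mean
        gamma = np.log(target_mean) / np.log(mean_c)
        out[..., c] = np.clip(img[..., c] ** gamma, 0.0, 1.0)
    return out
```

Compensating the three primaries separately, as the abstract describes, keeps a color cast in the dark input from being amplified uniformly.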

유리체강 내 주입술을 받는 망막질환자의 시각 관련 삶의 질 영향요인 (Factors Influencing on Vision-related Quality of Life in Patients with Retinal Diseases Receiving Intravitreal Injections)

  • 김현영;하영미
    • Journal of Korean Clinical Nursing Research
    • /
    • Vol. 27 No. 1
    • /
    • pp.54-65
    • /
    • 2021
  • Purpose: The purpose of this study was to identify factors influencing vision-related quality of life in patients with retinal diseases receiving intravitreal injections, by examining the relationships among anxiety, depression, coping, eye health behaviors, and vision-related quality of life. Methods: One hundred and five outpatients diagnosed with macular degeneration or diabetic retinopathy were recruited from one university hospital from August 16, 2019 to March 25, 2020. Data were analyzed using descriptive statistics (frequency and percentage, mean, standard deviation), t-tests, ANOVA, Scheffé tests, Pearson's correlations, and stepwise multiple regression in IBM SPSS Statistics 25.0. Results: Vision-related quality of life differed significantly by age (F=3.01, p=.034), subjective economic status (F=5.83, p=.004), type of retinal disease (t=2.62, p=.010), and disease in both eyes (t=-3.04, p=.003). Vision-related quality of life showed a significant positive correlation with age (r=.24, p=.012), and negative correlations with anxiety (r=-.66, p<.001), depression (r=-.48, p<.001), and emotion-focused coping (r=-.20, p=.036). The regression analysis indicated that the factors affecting vision-related quality of life were anxiety and subjective economic status, together accounting for 47.0% of its variance. Conclusion: Based on these results, health professionals need to pay attention to patients with low socioeconomic status, who face frequent treatments. Also, a program to decrease anxiety needs to be developed for outpatients receiving intravitreal injections, to improve their vision-related quality of life.
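The variance-explained figure above (47.0%) is the R² of a least-squares fit. As a purely illustrative sketch (the patient data are not public, so the inputs here are hypothetical), R² for a multiple regression can be computed as:

```python
import numpy as np

def ols_r2(X, y):
    """R^2 (proportion of variance explained) of an ordinary least-squares fit.

    X : (n, k) predictor matrix (e.g. anxiety score, economic-status code)
    y : (n,) outcome (e.g. vision-related quality-of-life score)
    """
    X1 = np.column_stack([np.ones(len(y)), X])     # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid                          # residual sum of squares
    ss_tot = (y - y.mean()) @ (y - y.mean())        # total sum of squares
    return 1.0 - ss_res / ss_tot
```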

ASV용 센서통합평가 기술을 위한 무인 타겟 이동 시스템의 개발 (Development of an Automatic Unmanned Target Object Carrying System for ASV Sensor Evaluation Methods)

  • 김은정;송인성;유시복;김병수
    • Journal of Auto-vehicle Safety Association
    • /
    • Vol. 4 No. 2
    • /
    • pp.32-36
    • /
    • 2012
  • The automatic unmanned target object carrying system (AUTOCS) was developed for testing road-vehicle radar and vision sensors. When developing ASV or ADAS products, it is important that the target reflect realistic target characteristics. The AUTOCS moves a pedestrian or motorcycle target at a desired speed and position. It is designed so that only the payload target, a manikin or a motorcycle, is detected by the sensor, not the AUTOCS itself. To keep its radar exposure low, the AUTOCS has a stealthy shape with a low RCS (radar cross section). To deceive the vision sensor, its outer skin carries a specially designed pattern that resembles asphalt. The AUTOCS has three driving modes: remote control, path following, and replay. The AUTOCS V.1 was tested to verify its radar-detection characteristics, and it successfully demonstrated that it is not detected by a car radar. The results are presented in this paper.

On low cost model-based monitoring of industrial robotic arms using standard machine vision

  • Karagiannidis, Aris;Vosniakos, George C.
    • Advances in robotics research
    • /
    • Vol. 1 No. 1
    • /
    • pp.81-99
    • /
    • 2014
  • This paper contributes towards the development of a computer vision system for telemonitoring of industrial articulated robotic arms. The system aims to provide precise real-time measurements of the joint angles by employing low-cost cameras and visual markers on the body of the robot. To achieve this, a mathematical model connecting image features and joint angles was developed, covering rotation of a single joint whose axis is parallel to the visual projection plane. The feature examined during image processing is the varying area of a given circular target placed on the body of the robot, as registered by the camera during rotation of the arm. In order to distinguish between rotation directions, four targets were used, placed every 90° and observed by two cameras at suitable angular distances. The results were deemed acceptable considering camera cost and the lighting conditions of the workspace. A computational error analysis explored how deviations from the ideal camera positions affect the measurements and led to appropriate corrections. The method is deemed extensible to multiple-joint motion of a known kinematic chain.
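The core measurement model above — the projected area of a circular target shrinking with the cosine of the rotation angle — fits in a few lines. This is a minimal sketch of that relation only; the paper's multi-target, two-camera direction disambiguation and error correction are omitted.

```python
import math

def joint_angle_from_area(observed_area, frontal_area):
    """Joint rotation angle from the projected area of a circular target.

    A circle viewed head-on projects area A0; rotated by theta about an axis
    parallel to the image plane, it projects an ellipse of area A0 * cos(theta).
    """
    ratio = min(max(observed_area / frontal_area, 0.0), 1.0)  # guard noisy areas
    return math.acos(ratio)   # radians in [0, pi/2]; extra targets resolve the sign
```

Because cos(theta) is flat near theta = 0, area-based angle estimates are least sensitive when the target faces the camera, which is one motivation for spacing several targets 90° apart.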