• Title/Summary/Keyword: Low-vision

Search Results: 695

Development of Laser Vision Sensor with Multi-line

  • Kieun Sung;Sehun Rhee;Yun, Jae-Ok
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.324-329
    • /
    • 2002
  • Generally, a laser vision sensor makes it possible to design a highly reliable and precise range sensor at low cost. When the laser vision sensor is applied to lap joint welding, however, there are many limitations, so a specially designed hardware system has to be used. If multiple lines are used instead of a single line, multi-range data can be generated from one image. Even at a fixed 30 fps, the amount of generated 2D range data increases with the number of lines used. In this study, a laser vision sensor with a multi-line pattern is developed with a conventional CCD camera to carry out high-speed seam tracking in lap joint welding.

  • PDF

Tool Monitoring System using Vision System with Minimizing External Condition (환경영향을 최소화한 비전 시스템을 이용한 미세공구의 상태 감시 기술)

  • Kim, Sun-Ho;Baek, Woon-Bo
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.11 no.5
    • /
    • pp.142-147
    • /
    • 2012
  • Machining tool conditions directly affect product quality and manufacturing productivity. Many studies on tool condition monitoring in the machining process have been performed to improve quality and productivity. Conventional methods use signal characteristics of the cutting force, motor current consumption, machine tool vibration, and machining sound. Recently, machining tool diameters have become smaller to allow the miniaturization of mechanical parts. Tool condition monitoring with conventional methods is relatively difficult because micro machining with a small-diameter tool involves a low machining load and a high cutting speed. These days, direct monitoring of tool conditions using vision systems is being pursued actively, but a vision system is affected by external conditions such as the image background and illumination. In this study, a technique for minimizing the influence of external conditions using distribution analysis of image data is developed for micro machining with a small-diameter drill and tap. The image data are gathered from a vision system. Several sets of experiments were performed to verify the characteristics of the proposed monitoring technique.
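
The abstract above names distribution analysis of image data without giving details. The sketch below is only a hedged illustration of that general idea, not the paper's method: it picks a segmentation threshold from the intensity histogram (Otsu's criterion) so that a dark tool silhouette can be separated from a bright, unevenly illuminated background. The synthetic frame, noise level, and tool region are all invented for the example.

```python
# Minimal sketch (not the paper's exact method): use the intensity
# distribution (histogram) of a grayscale frame to pick a threshold that
# separates a dark tool silhouette from a bright, unevenly lit background.
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Pick a threshold by maximizing between-class variance of the histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    prob = hist.astype(float) / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic frame: bright background with an illumination gradient plus a dark "tool".
rng = np.random.default_rng(0)
gray = np.clip(180 + np.linspace(-40, 40, 200)[None, :] + rng.normal(0, 5, (200, 200)),
               0, 255).astype(np.uint8)
gray[80:120, 90:110] = 40            # dark tool region
t = otsu_threshold(gray)
tool_mask = gray < t                 # pixels classified as tool, background suppressed
print(f"threshold={t}, tool pixels={tool_mask.sum()}")
```

Because the threshold is re-derived from each frame's own histogram, moderate global changes in illumination shift the threshold rather than the classification.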

Development of Laser Vision Sensor with Multi-line for High Speed Lap Joint Welding

  • Sung, K.;Rhee, S.
    • International Journal of Korean Welding Society
    • /
    • v.2 no.2
    • /
    • pp.57-60
    • /
    • 2002
  • Generally, a laser vision sensor makes it possible to design a highly reliable and precise range sensor at low cost. When the laser vision sensor is applied to lap joint welding, however, there are many limitations, so a specially designed hardware system has to be used. If multiple lines are used instead of a single line, multi-range data can be generated from one image. Even at a fixed 30 fps, the amount of generated 2D range data increases with the number of lines used. In this study, a laser vision sensor with a multi-line pattern is developed with a conventional CCD camera to carry out high-speed seam tracking in lap joint welding. (A minimal triangulation sketch follows this entry.)

  • PDF
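
As a hedged sketch only (the camera geometry, focal length, baseline, laser angle, and image width below are hypothetical, not values from the paper), the following shows a standard stripe-triangulation relation behind a laser vision range sensor and why the amount of 2D range data at a fixed 30 fps grows linearly with the number of projected lines.

```python
# Hedged sketch (not the authors' implementation): depth of one laser-stripe
# point from its pixel offset, plus the linear growth of range data per second
# with the number of projected lines at a fixed frame rate.
import math

def depth_from_pixel(u_px: float, focal_px: float,
                     baseline_m: float, laser_angle_deg: float) -> float:
    """Depth z (m) of a stripe point imaged at horizontal pixel offset u_px.

    Assumed geometry for this sketch: camera at the origin looking along +z,
    laser emitter at x = baseline_m on the x-axis, projecting a line at
    laser_angle_deg above the x-axis in the x-z plane.
    """
    cot = 1.0 / math.tan(math.radians(laser_angle_deg))
    return baseline_m / (u_px / focal_px - cot)

# One range value per image column per stripe, so range data per frame grow
# linearly with the number of projected lines.
fps, image_width = 30, 640
for n_lines in (1, 3, 5):
    print(f"{n_lines} line(s): {fps * image_width * n_lines:,} range points/s")

print("example depth:", round(depth_from_pixel(300.0, 800.0, 0.06, 75.0), 3), "m")
```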

Development of Real-Time Vision-Based Fabric Inspection System (비전 시스템을 이용한 실시간 섬유결점 검사기 개발)

  • 조지승;정병묵;박무진
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.9
    • /
    • pp.92-99
    • /
    • 2003
  • Quality inspection of textile products is an important problem for fabric manufacturers. This paper presents an automatic vision-based system for quality control of web textile fabrics. Typical web material is 1-3 m wide and is driven at speeds ranging from 20 m/min to 200 m/min. At present, the quality assessment procedure is performed manually by experts, but a human inspector cannot detect more than 60% of the defects present and cannot inspect fabric moving faster than 30 m/min. To increase the overall quality and homogeneity of the textile, an automated visual inspection system is needed for productivity. However, existing inspection systems are too expensive for small companies to purchase. The proposed PC-based real-time inspection algorithm yields a low-cost textile inspection system with a high detection rate, good accuracy, and a low rate of false alarms. The method shows good results in the detection of several types of fabric defects.
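
The abstract does not spell out the detection algorithm, so the following is a hedged, minimal stand-in rather than the authors' method: it flags fabric patches whose mean intensity deviates strongly from the global patch-mean distribution, one simple way a PC-based system can keep up in real time. The patch size, deviation factor, and synthetic fabric are assumptions for the example.

```python
# Illustrative sketch only (not the paper's algorithm): flag fabric defects by
# comparing per-patch intensity statistics against the global texture statistics.
import numpy as np

def detect_defect_patches(gray: np.ndarray, patch: int = 16, k: float = 4.0):
    """Return (row, col) indices of patches whose mean deviates strongly
    from the global patch-mean distribution. `patch` and `k` are arbitrary."""
    h, w = gray.shape
    rows, cols = h // patch, w // patch
    means = gray[:rows * patch, :cols * patch].reshape(rows, patch, cols, patch).mean(axis=(1, 3))
    mu, sigma = means.mean(), means.std() + 1e-9
    return np.argwhere(np.abs(means - mu) > k * sigma)

# Synthetic fabric: uniform weave texture with one darker defect blob.
rng = np.random.default_rng(1)
fabric = rng.normal(128, 6, (256, 512))
fabric[100:130, 300:340] -= 60       # simulated stain/defect
print("defective patches (patch-grid coordinates):")
print(detect_defect_patches(fabric))
```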

Vision chip for edge detection with resolution improvement through simplification of unit-pixel circuit (단위 픽셀 회로의 간소화를 통해서 해상도를 향상시킨 이차원 윤곽 검출용 시각칩)

  • Sung, Dong-Kyu;Kong, Jae-Sung;Hyun, Hyo-Young;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology
    • /
    • v.17 no.1
    • /
    • pp.15-22
    • /
    • 2008
  • When designing image sensors, including a CMOS vision chip for edge detection, resolution is a significant factor in evaluating performance. It is hard to improve the resolution of a bio-inspired CMOS vision chip using a resistive network because, in addition to the photocircuits of general image sensors such as a CMOS image sensor (CIS), the vision chip contains many circuits such as the resistive network and several signal-processing circuits. Low resolution restricts the range of application systems. In this paper, we improve the resolution through layout and circuit optimization. Furthermore, we have designed a printed circuit board with an FPGA that controls the vision chip. The vision chip for edge detection was designed and fabricated using 0.35 μm double-poly four-metal CMOS technology, and its output characteristics were investigated.
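
A resistive network in such chips effectively performs spatial smoothing, and the edge signal is commonly taken as the difference between the photoreceptor output and its smoothed (surround) version. The code below is a software analogue of that center-minus-surround idea under assumed parameters; it does not model the fabricated circuit.

```python
# Software analogue (hedged sketch, not the chip's circuit): a resistive grid
# behaves like spatial smoothing, and the "edge" output is the difference
# between the raw photoreceptor signal and its smoothed (surround) version.
import numpy as np

def smooth(img: np.ndarray, iterations: int = 10, coupling: float = 0.2) -> np.ndarray:
    """Iterative 4-neighbour diffusion, a rough stand-in for a resistive grid."""
    out = img.astype(float).copy()
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:])
        out = (1 - 4 * coupling) * out + coupling * neighbours
    return out

# Synthetic scene: a bright square on a dark background.
scene = np.zeros((64, 64))
scene[20:44, 20:44] = 1.0
edges = scene - smooth(scene)        # center-minus-surround response
print("strongest edge response:", round(float(np.abs(edges).max()), 3))
```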

Reflectance estimation for infrared and visible image fusion

  • Gu, Yan;Yang, Feng;Zhao, Weijun;Guo, Yiliang;Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.2749-2763
    • /
    • 2021
  • The desirable result of infrared (IR) and visible (VIS) image fusion should have textural details from VIS images and salient targets from IR images. However, detail information in the dark regions of a VIS image has low contrast and blurry edges, resulting in performance degradation in image fusion. To resolve the problem of fuzzy details in the dark regions of VIS images, we propose a method of reflectance estimation for IR and VIS image fusion. In order to maintain and enhance details in these dark regions, dark region approximation (DRA) is proposed to optimize the Retinex model. With the improved Retinex model based on DRA, a quasi-Newton method is adopted to estimate the reflectance of a VIS image. The final fusion outcome is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method simultaneously retains the low-visibility details in VIS images and the high-contrast targets in IR images. Experimental statistics show that, compared with several advanced approaches, the proposed method is superior in detail preservation and visual quality.
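
The Retinex model underlying this method assumes an observed image is the product of reflectance and illumination, I = R · L, so reflectance can be recovered in the log domain once illumination is estimated. The sketch below is a hedged stand-in only: it estimates illumination with a Gaussian blur and fuses with a simple maximum rule, whereas the paper uses dark region approximation and a quasi-Newton estimator. It assumes SciPy is available; all image values are synthetic.

```python
# Hedged sketch of the Retinex idea behind reflectance-based fusion
# (NOT the paper's DRA + quasi-Newton estimator): I = R * L, so in the log
# domain reflectance is log(I) - log(L), with L estimated by heavy smoothing.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_reflectance(vis: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """Estimate reflectance of a visible image in [0, 1] via log-domain Retinex."""
    eps = 1e-6
    illumination = gaussian_filter(vis, sigma) + eps
    refl = np.exp(np.log(vis + eps) - np.log(illumination))
    return np.clip(refl, 0.0, None)

def fuse(vis: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Toy fusion rule: keep VIS reflectance detail, inject bright IR targets."""
    refl = retinex_reflectance(vis)
    refl = refl / (refl.max() + 1e-6)
    return np.maximum(refl, ir)       # salient IR targets dominate where hotter

# Synthetic pair: dim, low-contrast VIS scene and an IR image with one hot target.
rng = np.random.default_rng(2)
vis = np.clip(0.15 + 0.05 * rng.random((128, 128)), 0.0, 1.0)
ir = np.zeros((128, 128))
ir[60:70, 60:70] = 0.9
fused = fuse(vis, ir)
print("fused range:", round(float(fused.min()), 3), "to", round(float(fused.max()), 3))
```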

Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms (임베디드 엣지 플랫폼에서의 경량 비전 트랜스포머 성능 평가)

  • Minha Lee;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.3
    • /
    • pp.89-100
    • /
    • 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile devices and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operation, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computation and parameters than CNN models. Thus, they are not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed model compression methods or lightweight architectures for vision transformers; however, only a few studies have evaluated the effects of these compression techniques on performance. To address this, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behavior of three vision transformers: DeiT, LeViT, and MobileViT. The performance of each model was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effects of the quantization method applied to the models on latency improvement and accuracy degradation by profiling the proportion of response time occupied by major operations. In addition, we evaluated the performance of each model on GPU- and EdgeTPU-based edge devices. In our experiments, LeViT showed the best performance on CPU-based edge devices, and DeiT-small showed the highest performance improvement on GPU-based edge devices; only the MobileViT models showed performance improvement on EdgeTPU. Summarizing the profiling results, the degree of performance improvement of each vision transformer model depended strongly on the proportion of operations that could be optimized on the target edge device. In summary, to apply vision transformers to on-device AI solutions, both a proper operation composition and optimizations specific to the target edge device must be considered.
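
The measurement pattern behind such an evaluation can be sketched as follows, with heavy caveats: torchvision's ViT-B/16 is used here only as a stand-in (the paper's DeiT, LeViT, and MobileViT models are not bundled with torchvision), dynamic int8 quantization of linear layers stands in for whatever quantization the paper applied, and the input size and run counts are arbitrary.

```python
# Hedged sketch of the measurement pattern only: dynamically quantize a ViT's
# linear layers and compare CPU latency before and after. Assumes a recent
# PyTorch and torchvision; accuracy evaluation on ImageNet is omitted.
import time
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

def mean_latency_ms(model: nn.Module, runs: int = 5) -> float:
    """Average single-image CPU inference time in milliseconds."""
    model.eval()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        model(x)                      # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1000.0

fp32 = vit_b_16(weights=None)         # random weights are enough for timing
int8 = torch.quantization.quantize_dynamic(vit_b_16(weights=None),
                                           {nn.Linear}, dtype=torch.qint8)
print(f"fp32: {mean_latency_ms(fp32):.1f} ms, dynamic-int8: {mean_latency_ms(int8):.1f} ms")
```

On real edge hardware the same loop would be repeated per device (CPU, GPU, EdgeTPU) and per model variant, alongside an accuracy check.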

Functional Status and Long-Term Care Services for the Community-Dwelling Low-Income Elderly (저소득층 재가노인의 기능상태와 요구되는 요양서비스 유형 분석)

  • Jeon, Eun-Young
    • The Korean Journal of Rehabilitation Nursing
    • /
    • v.12 no.2
    • /
    • pp.92-101
    • /
    • 2009
  • Purpose: This study was conducted to explore the functional status of and long-term care services for the community-dwelling low-income elderly. Method: A descriptive research design was used. The functional status of the participants was obtained using the Minimum Data Set-Home Care Version 2.0, and the long-term care services were identified via Michigan's Choice. A total of 154 community-dwelling low-income persons aged 65 years or older completed the Korean Minimum Data Set-Home Care Version 2.0. Results: The average Activities of Daily Living score was 4.19 (range 0-55), while the average Instrumental Activities of Daily Living score was 4.85 (range 0-56). Among the subjects, 46.1% belonged to the Information and Referral group and 1.3% to the Nursing Home group. Severe daily pain was reported by 14.9%, and 76.6% of the participants had impaired vision. Activities of Daily Living differed according to living arrangement, education, vision, and depression. The long-term care services differed according to gender, pain, vision, hearing, and depression. Conclusion: Support policy for the elderly needs to focus on impaired vision and depression to enhance activities of daily living. Moreover, nursing intervention resources need to be arranged and developed for the Information and Referral group.

  • PDF

Autonomous-flight Drone Algorithm use Computer vision and GPS (컴퓨터 비전과 GPS를 이용한 드론 자율 비행 알고리즘)

  • Kim, Junghwan;Kim, Shik
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.3
    • /
    • pp.193-200
    • /
    • 2016
  • This paper introduces an algorithm for an autonomous navigation flight system for low- to mid-priced drones using computer vision and GPS. Existing drone operation methods mainly consist of entering the flight path into the drone's installed software before flight or following signals transmitted from a controller. In contrast, this paper introduces a new algorithm that allows the autonomous navigation flight system to locate a specific place, a specific shape, or a specific space within an area that the user wishes to explore. Technology originally developed for the defense industry was implemented on a lower-quality hobby drone without changing its hardware, and the proposed algorithm was used to maximize performance. When the user inputs an image of the place to be found, the camera mounted on the drone processes the incoming images and searches for the specific area of interest. Using this algorithm, the autonomous navigation flight system for low- to mid-priced drones is expected to be applicable to a variety of industries.
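
One plausible realization of "searching for the specific area of interest" from a user-supplied image is local-feature matching; the sketch below uses ORB features and a ratio test via OpenCV. This is an assumption-laden illustration, not the paper's algorithm, and the match threshold and synthetic test patch are invented.

```python
# Hedged sketch (one plausible realization, not the paper's method): match a
# user-supplied reference image of the target place against a camera frame
# with ORB features, and report whether enough good matches were found.
import cv2
import numpy as np

def looks_like_target(reference_gray: np.ndarray, frame_gray: np.ndarray,
                      min_good_matches: int = 20) -> bool:
    """Return True if enough ORB matches link the frame to the reference image."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, des_ref = orb.detectAndCompute(reference_gray, None)
    _, des_frame = orb.detectAndCompute(frame_gray, None)
    if des_ref is None or des_frame is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = 0
    for pair in matcher.knnMatch(des_ref, des_frame, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good += 1                  # Lowe's ratio test
    return good >= min_good_matches

# Toy check: a synthetic textured patch reused as both reference and frame.
rng = np.random.default_rng(3)
patch = rng.integers(0, 256, (240, 320)).astype(np.uint8)
print("target detected:", looks_like_target(patch, patch.copy()))
```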

Interface Modeling for Digital Device Control According to Disability Type in Web

  • Park, Joo Hyun;Lee, Jongwoo;Lim, Soon-Bum
    • Journal of Multimedia Information System
    • /
    • v.7 no.4
    • /
    • pp.249-256
    • /
    • 2020
  • Learning methods using various assistive and smart devices have been developed to enable independent learning by the disabled. Pointer control is the most important consideration for the disabled when controlling a device and the contents of an existing graphical user interface (GUI) environment; however, difficulties can be encountered when using a pointer, depending on the disability type. Although there are individual differences among blindness, low vision, and upper-limb disability, problems commonly arise in the accuracy of object selection and execution. A multimodal interface pilot solution is presented that enables people with various disability types to control web interactions more easily. First, we classify the types of web interaction with digital devices and derive the essential web interactions among them. Second, to solve the problems that occur when performing web interactions for each disability type, the necessary technology according to the characteristics of that type is presented. Finally, a pilot solution for the multimodal interface for each disability type is proposed. We identified three disability types and developed a solution for each. We developed a remote-control voice interface for blind people and a voice output interface applying a selective focusing technique for people with low vision. Finally, we developed a gaze-tracking and voice-command interface for GUI operation for people with upper-limb disability.
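
As a hedged sketch of the high-level mapping the abstract describes (the profile names, channel labels, and interaction list below are illustrative labels, not the authors' terminology), each disability type is routed to a different input/output modality for the same set of essential web interactions.

```python
# Illustrative configuration sketch only: route each disability type to the
# input and output modalities the pilot solution assigns to it.
from dataclasses import dataclass

@dataclass
class ModalityProfile:
    input_channel: str    # how the user issues commands
    output_channel: str   # how results are presented

PROFILES = {
    "blind": ModalityProfile("remote control + voice commands", "speech output"),
    "low_vision": ModalityProfile("pointer/keyboard", "speech output with selective focusing"),
    "upper_limb": ModalityProfile("gaze tracking + voice commands", "standard GUI"),
}

ESSENTIAL_INTERACTIONS = ["navigate", "select", "execute", "input text"]

def plan_interface(disability_type: str) -> dict:
    """Map every essential web interaction to the chosen input/output channels."""
    profile = PROFILES[disability_type]
    return {action: (profile.input_channel, profile.output_channel)
            for action in ESSENTIAL_INTERACTIONS}

print(plan_interface("low_vision"))
```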