• Title/Summary/Keyword: Vision data

A study on the automatic wafer alignment in semiconductor dicing (반도체 절단 공정의 웨이퍼 자동 정렬에 관한 연구)

  • 김형태;송창섭;양해정
    • Journal of the Korean Society for Precision Engineering / v.20 no.12 / pp.105-114 / 2003
  • In this study, a dicing machine with a vision system was built and an automatic alignment algorithm was developed for a dual-camera system with macro and micro inspection tools. The algorithm was formulated from geometric relations. When a wafer was placed on the cutting stage within a certain range, it was inspected by the vision system and compared with a standard pattern. The difference between the patterns was analyzed and evaluated, and the stage was then moved along the x, y, and $\theta$ axes to compensate for it. The amount of compensation was calculated from the vision inspection result through the automatic alignment algorithm; a sketch of this compensation step follows below. The stage was moved to the compensated position and inspected again to check the result. The accuracy and validity of the algorithm were discussed based on these data.
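
A minimal sketch of the x-y-$\theta$ compensation step the abstract describes, assuming the vision system reports two measured alignment-mark positions and the matching pair of reference positions from the standard pattern; the two-point formulation, the function name, and the sample numbers are illustrative, not taken from the paper.

```python
import numpy as np

def alignment_compensation(ref_a, ref_b, meas_a, meas_b):
    """Return (dx, dy, dtheta) that moves the measured marks onto the reference."""
    ref_a, ref_b = np.asarray(ref_a, float), np.asarray(ref_b, float)
    meas_a, meas_b = np.asarray(meas_a, float), np.asarray(meas_b, float)
    # Rotation error: angle between the mark-to-mark vectors.
    v_ref, v_meas = ref_b - ref_a, meas_b - meas_a
    dtheta = np.arctan2(v_ref[1], v_ref[0]) - np.arctan2(v_meas[1], v_meas[0])
    # Rotate the measured centroid by dtheta; the residual is the x-y offset.
    c, s = np.cos(dtheta), np.sin(dtheta)
    rotated = np.array([[c, -s], [s, c]]) @ ((meas_a + meas_b) / 2.0)
    dx, dy = (ref_a + ref_b) / 2.0 - rotated
    return dx, dy, dtheta

# Wafer placed with a small offset and rotated by roughly +1 degree:
print(alignment_compensation((0, 0), (10, 0), (0.5, 0.2), (10.498, 0.375)))
```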

Controlling Brightness Compensation of Full Color LED Vision (천연색 LED 정보표시 시스템의 휘도보정 제어장치)

  • Hwang, Hyun-Hwa;Yim, Hyung-Kun;Park, Jung-Hwan;Lee, Jong-Ha
    • Proceedings of the IEEK Conference / 2005.11a / pp.1291-1296 / 2005
  • In this paper, we address the drop in display quality caused by the brightness non-uniformity of the individual LEDs that make up an LED vision display, and we develop a control system with a brightness compensation function that also makes the installation of LEDs easy. The brightness characteristic of each LED is modeled by the linear function Y = aX + b, where Y is the brightness, X is the driving current, a is the characteristic slope obtained from the LED's characteristic curve, and b is a brightness compensation value that accounts for operating time; the driving current needed for a target brightness is calculated from this model. The procedure is as follows. First, brightness data for each pixel are obtained by photographing the red, green, and blue channels of the LED vision display. Second, the error of each pixel with respect to the average brightness of the whole display is computed. Finally, the color and brightness of the image sent to the LED display are adjusted gradually. The system also raises the overall average brightness of the display by adjusting the b value, which solves the brightness drop of LEDs that have been in use for a long time. A sketch of the compensation step follows below.
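
A minimal sketch of the Y = aX + b compensation described above: given per-pixel model parameters a and b and a captured luminance image, the drive current of every pixel is solved so that the whole panel reaches its average brightness. Array shapes, names, and the random parameters are illustrative assumptions, not the paper's data.

```python
import numpy as np

def compensate_currents(measured_luma, a, b):
    """measured_luma, a, b: HxW arrays. Returns per-pixel drive currents X
    so that every LED reaches the panel-average target brightness."""
    target = measured_luma.mean()       # uniform target: panel average
    return (target - b) / a             # invert Y = a*X + b per pixel

rng = np.random.default_rng(0)
a = rng.uniform(0.8, 1.2, (4, 4))       # slope spread between individual LEDs
b = rng.uniform(-5.0, 5.0, (4, 4))      # per-LED offset drifting with use
luma = a * 100.0 + b                    # panel driven at one common current
x = compensate_currents(luma, a, b)
print(np.allclose(a * x + b, luma.mean()))   # True: brightness is uniform
```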

A Platform-Based SoC Design for Real-Time Stereo Vision

  • Yi, Jong-Su;Park, Jae-Hwa;Kim, Jun-Seong
    • JSTS: Journal of Semiconductor Technology and Science / v.12 no.2 / pp.212-218 / 2012
  • A stereo vision system can build three-dimensional maps of its environment. It provides much more complete information than 2D image-based vision, but it must process correspondingly more data. In the past decade, real-time stereo has become a reality. Some solutions are based on reconfigurable hardware and others rely on specialized hardware, but they are designed for specific applications and their functionality is difficult to extend. This paper describes a vision system based on a System-on-a-Chip (SoC) platform. A real-time stereo image correlator is implemented using the Sum of Absolute Differences (SAD) algorithm and is integrated into the vision system through the AMBA bus protocol; a software sketch of the SAD correlator follows below. Since the system is designed on a pre-verified platform, its functionality can be extended easily, increasing design productivity. Simulation results show that the vision system is suitable for various real-time applications.
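
A minimal software sketch of the SAD window correlator the paper implements in hardware, assuming rectified grayscale input images; the window size, disparity range, and use of scipy's uniform_filter as the window sum are illustrative choices, not the paper's design.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=16, win=5):
    """left, right: HxW rectified grayscale images. Returns HxW disparities."""
    h, w = left.shape
    best = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        # Absolute difference between the left image and the right image
        # shifted by d pixels, then averaged over a win x win window (SAD).
        diff = np.abs(left[:, d:].astype(np.float32) -
                      right[:, :w - d].astype(np.float32))
        cost = np.full((h, w), np.inf, dtype=np.float32)
        cost[:, d:] = uniform_filter(diff, size=win)
        better = cost < best
        best[better] = cost[better]
        disp[better] = d
    return disp

rng = np.random.default_rng(1)
right = rng.integers(0, 255, (32, 64)).astype(np.float32)
left = np.roll(right, 4, axis=1)        # synthetic 4-pixel horizontal shift
print(np.bincount(sad_disparity(left, right).ravel()).argmax())   # -> 4
```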

Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems / v.16 no.1 / pp.155-170 / 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance of an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for image points with the same abscissa, the ordinates are linearly related to the points' actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it. The vertical distance of the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range is derived from the depth and this vertical distance; a sketch of the depth model follows below. Experimental results show that ranging by this method is more accurate than methods based on binocular vision. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m, and 1.71% when it is 3-10 m. Compared with other monocular methods, this method needs no calibration before ranging and avoids the error caused by data fitting.
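
A minimal sketch of the depth-extraction model described above, under the stated linearity: the imaging angle is assumed linear in the image ordinate, the line is fixed by two conjugate points with known angles, and the depth of a ground point follows from the camera height. The camera height, calibration pairs, and names are illustrative assumptions, not the paper's values.

```python
import math

def fit_angle_model(v1, alpha1, v2, alpha2):
    """Two (ordinate, angle) pairs determine the linear model alpha = k*v + c."""
    k = (alpha2 - alpha1) / (v2 - v1)
    return k, alpha1 - k * v1

def ground_depth(v, k, c, cam_height=1.5):
    """Depth of a point on the horizontal ground plane imaged at ordinate v."""
    alpha = k * v + c                  # imaging angle below horizontal (rad)
    return cam_height / math.tan(alpha)

k, c = fit_angle_model(200, math.radians(10), 400, math.radians(30))
print(round(ground_depth(300, k, c), 3))    # 1.5 / tan(20 deg) ≈ 4.121
```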

The Control of A Ball Beam Using Fuzzy Control in Vision (화상의 퍼지 알고리즘 처리를 통한 공과 막대 시스템 제어)

  • Park, Seung-Hun;Joo, Han-Jo;Yim, Wha-Yoeng
    • Proceedings of the KIEE Conference / 2003.11c / pp.965-967 / 2003
  • A fuzzy controller is a system that expresses a person's reasoning using membership functions and IF-THEN rules. With the help of specialists' knowledge, the rule base can be written in plain language, and a fuzzy controller is robust against disturbance. Its performance is especially prominent when the target cannot be modeled mathematically, because the fuzzy controller needs only the relation between input and output. With the growing influence of multimedia on daily life, vision plays a bigger role in both industry and personal life and is used in many areas, such as detecting and identifying objects. However, when vision is used to detect and control objects in large quantity, the computational delay makes detection and control difficult. In this paper, we show how to use a fuzzy controller to minimize the calculation, controlling the target object by moving the view window instead of applying the input variation through vision; a sketch of such a controller follows below. A ball-and-beam system, which is strongly nonlinear, was used as the target object, and the TMS320C6711 IDK from TI (Texas Instruments) was used for fast calculation and vision data processing.
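
A minimal fuzzy-controller sketch in the spirit described above: triangular membership functions fuzzify the ball's position error, a three-rule IF-THEN table maps it to a beam-angle command, and the output is defuzzified as a weighted average. The membership points, rule table, and units are illustrative assumptions, not the paper's design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_beam_angle(error):
    """Ball position error (m) -> beam angle command (deg)."""
    # Fuzzify the error as Negative / Zero / Positive.
    mu = {"N": tri(error, -0.4, -0.2, 0.0),
          "Z": tri(error, -0.2,  0.0, 0.2),
          "P": tri(error,  0.0,  0.2, 0.4)}
    # Rules: IF error is N THEN tilt +10; IF Z THEN 0; IF P THEN -10.
    out = {"N": 10.0, "Z": 0.0, "P": -10.0}
    den = sum(mu.values())
    return sum(mu[k] * out[k] for k in mu) / den if den else 0.0

print(fuzzy_beam_angle(0.1))   # -5.0: tilt the beam to roll the ball back
```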

A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • Proceedings of the KWS Conference / 2002.10a / pp.314-319 / 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques have been applied to improve the quality of the welding joints. This paper deals with the implementation of a laser-based computer vision system that guides the robotic manipulator during the welding process. Currently the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm that actuates the robotic manipulator to cancel the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser-emitting diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit projects a sharp luminous reference line whose images are captured through the video camera. The raw image data are digitized and stored in the frame grabber card for further processing using specifically written algorithms. These image-processing algorithms yield the actual welding path, the relative position between the pieces, and the required corrections; a sketch of the stripe-extraction step follows below. Two case studies are considered: the first is the joining of two flat metal pieces; the second concerns joining a cylindrically shaped piece to a flat surface. An implementation of this computer vision system using parallel processing is being studied.
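
A minimal sketch of the stripe-extraction step described above, assuming the camera sees a roughly horizontal laser line: each column's brightest row traces the stripe, a straight-line fit gives the nominal path, and the seam shows up as the largest deviation from that fit. The synthetic image and peak-picking scheme are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def seam_offset(image):
    """image: HxW grayscale frame containing a bright laser stripe.
    Returns (column, deviation_px) of the stripe's largest kink."""
    rows = np.argmax(image, axis=0).astype(float)      # stripe row per column
    cols = np.arange(image.shape[1])
    fit = np.polyval(np.polyfit(cols, rows, 1), cols)  # straight-line fit
    residual = rows - fit
    seam = int(np.argmax(np.abs(residual)))
    return seam, float(residual[seam])

img = np.zeros((60, 80))
stripe = np.full(80, 30)
stripe[38:42] = 36                       # the joint groove kinks the stripe
img[stripe, np.arange(80)] = 255.0
print(seam_offset(img))                  # kink located near column 38
```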

A Study on Type A Behavior Pattern(TABP), Stress, Depression and HIT-6 in the Patients with Chronic Headache (만성두통 환자의 성격유형 A 행태, 스트레스, 우울 및 두통영향정도의 관계 연구)

  • Cha, Nam-Hyun;Lim, Sabina;Jung, In-Tae;Kim, Su-Young;An, Kyung-Ae;Kim, Keon-Sik;Lee, Jae-Dong;Lee, Sang-Hoon;Choi, Do-Young;Lee, Yun-Ho;Lee, Doo-Ik
    • Korean Journal of Adult Nursing / v.17 no.4 / pp.539-547 / 2005
  • Purpose: To examine the estimating factors and the relationships among Type A Behavior Pattern (TABP), the Perceived Stress Questionnaire, depression, and HIT-6 in chronic headache clients. Method: Data were collected by self-reported questionnaires from 38 clients in S city, selected by IHS criteria, from October 19 to December 10, 2004. Result: 1) TABP differed significantly by Sasang constitution, stress differed significantly by age, and depression differed significantly by health status and Sasang constitution. 2) Significant correlations were found between stress and depression (r=.494, p=.002) and between depression and HIT-6 (r=.432, p=.010). 3) In regression analysis, HIT-6 was significantly influenced by depression and TABP, which explained 38% and 34% of the variance, respectively. Conclusion: The results suggest that chronic headache management should address the psychological as well as the physical aspect to enhance quality of life.

The improvement of MIRAGE I robot system (MIRAGE I 로봇 시스템의 개선)

  • 한국현;서보익;오세종
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 1997.10a / pp.605-607 / 1997
  • According to the way the robots are controlled, the robot systems of the teams participating in MIROSOT can be divided into three categories: the remote brainless system, the vision-based system, and the robot-based system. The MIRAGE I robot control system uses the last one, the robot-based system, in which the host computer with the vision system transmits only the locations of the ball and the robots; a sketch of such a message follows below. Based on this robot control method, we took part in MIROSOT '96 and MIROSOT '97.
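
A minimal sketch of the message a robot-based architecture like this implies, in which the host vision system broadcasts only poses and each robot computes its own control on board; the field names and units are illustrative assumptions, not MIRAGE I's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class VisionPacket:
    """One broadcast from the host vision system to all robots."""
    ball_xy: tuple       # ball position on the pitch, e.g. (x_cm, y_cm)
    robot_poses: list    # one (x_cm, y_cm, heading_rad) tuple per robot

pkt = VisionPacket(ball_xy=(110.0, 65.0),
                   robot_poses=[(40.0, 60.0, 0.5), (80.0, 30.0, -1.2)])
print(pkt.ball_xy)
```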

A Study on High Speed, High Precision Auto-alignment System Using Vision System (비전 시스템을 이용한 고속 고정도 자동 정렬장치 연구)

  • 홍준희;전경한
    • Proceedings of the IEEK Conference / 1998.10a / pp.457-460 / 1998
  • Recently, research on the FPD (Flat Panel Display), which is replacing the CRT (Cathode Ray Tube), has progressed widely. However, most of the equipment used for FPD production is expensive and has to be imported, and among this equipment the most important is the auto-alignment system. In this paper, we present a high-speed, high-precision auto-alignment system that includes a PLD auto-tuning algorithm, a one-dimensional CCD (Charge-Coupled Device) camera, a vision board, and a vision data processing algorithm.

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. Real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles especially, the efficient fusion of data from these two sensor types is important for estimating the depth of objects and for classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in the autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling: the LiDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with the red-green-blue data and fed into a deep CNN; a sketch of this fusion step follows below. The proposed method obtains an informative feature representation for object classification from the integrated vision and LiDAR data, and it is adopted to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
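
A minimal sketch of the input-fusion step described above: projected LiDAR returns are splatted into the image plane, densified to pixel-level depth, and stacked with the RGB channels to form the 4-channel tensor fed to the CNN. Nearest-neighbour densification stands in for the paper's upsampling, and all shapes and names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fuse_rgb_lidar(rgb, lidar_uv, lidar_depth):
    """rgb: HxWx3 image; lidar_uv: Nx2 integer (x, y) pixel coordinates of
    projected LiDAR returns; lidar_depth: N ranges. Returns an HxWx4 tensor."""
    h, w, _ = rgb.shape
    sparse = np.zeros((h, w), dtype=np.float32)
    sparse[lidar_uv[:, 1], lidar_uv[:, 0]] = lidar_depth
    # Nearest-neighbour densification: every empty pixel copies the depth of
    # its closest LiDAR return.
    _, idx = distance_transform_edt(sparse == 0, return_indices=True)
    dense = sparse[tuple(idx)]
    return np.concatenate([rgb.astype(np.float32), dense[..., None]], axis=-1)

rgb = np.zeros((4, 6, 3), dtype=np.float32)
uv = np.array([[1, 1], [4, 2]])                    # two projected returns
fused = fuse_rgb_lidar(rgb, uv, np.array([5.0, 9.0]))
print(fused.shape, fused[0, 0, 3])                 # (4, 6, 4) 5.0
```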