• Title/Summary/Keyword: PC Camera


A Study on the Improvement of the Calcium Test (A Study on Improving the Precision of the Calcium Test)

  • Han, Jin-Woo;Hwang, Jung-Yeon;Seo, Dae-Shik;Kim, Young-Hun;Moon, Dae-Kyu;Han, Jung-In
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2005.05a / pp.169-172 / 2005
  • To prevent deformation of the plastic substrate during processing, a PC (polycarbonate) substrate was pre-annealed for about 12 hours, and then SiN (silicon nitride) and PI (polyimide) layers were coated using a sputter and a spin coater, respectively. Calcium was deposited on the finished PC substrate by thermal evaporation, followed by an Al layer. Samsung Corning glass was attached to the calcium-deposited surface with UV resin, and the water vapor transmission rate was then measured at room temperature. Measurements were taken at 12-hour intervals, and to improve the accuracy of the calcium test, the samples were imaged with a CCD camera and analyzed on a computer. Images were stored in bitmap format to avoid image loss during saving, and the analysis program for improving accuracy was written in Microsoft Visual C++. Considering the processing speed of the computer system, the image processing area was set to 70*70 pixels.

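The analysis step described in this entry (lossless bitmap frames, a 70*70 processing window, oxidation tracked at 12-hour intervals) can be illustrated with a short sketch. This is not the authors' Visual C++ program; the file names, ROI position, and brightness threshold are illustrative assumptions.

```python
# Minimal sketch of the kind of bitmap analysis described above: crop a 70x70 region
# from a CCD frame of the calcium layer and estimate the fraction that has become
# transparent (oxidized). File names and threshold are assumptions, not the paper's values.
import cv2
import numpy as np

def oxidized_fraction(bmp_path, roi_origin=(0, 0), roi_size=70, threshold=128):
    img = cv2.imread(bmp_path, cv2.IMREAD_GRAYSCALE)   # bitmap keeps pixels lossless
    if img is None:
        raise FileNotFoundError(bmp_path)
    y, x = roi_origin
    roi = img[y:y + roi_size, x:x + roi_size]           # 70x70 processing area
    # Metallic calcium is opaque (dark); oxidized calcium turns transparent (bright).
    oxidized = roi >= threshold
    return float(np.count_nonzero(oxidized)) / oxidized.size

if __name__ == "__main__":
    # Frames captured every 12 hours; the growth of the oxidized fraction over time
    # is what a water-vapor-transmission estimate would be derived from.
    for frame in ["ca_000h.bmp", "ca_012h.bmp", "ca_024h.bmp"]:
        print(frame, f"{oxidized_fraction(frame):.3f}")
```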

Real-Time Pupil Detection System Using PC Camera

  • 조상규;황치규;황재정
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.8C / pp.1184-1192 / 2004
  • A real-time pupil detection system is proposed that detects pupil movement from real-time video acquired by a visible-light camera attached to a general-purpose personal computer. It is implemented in three steps: first, the face region is detected using a Haar-like feature detection scheme; then, the eye region is detected within the face region using a template-based scheme; finally, pupil movement is detected within the eye region by convolving the horizontal and vertical histogram profiles with a Gaussian filter. As a result, a detection rate of more than 90% was obtained on 2,375 simulation images, with a processing time of about 160 ms, i.e., about 7 detections per second.
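
The three-step pipeline summarized in this entry maps naturally onto standard OpenCV building blocks. The sketch below is a hedged approximation, not the authors' code: it uses OpenCV's stock Haar cascades (the paper's template-based eye step is replaced here by a Haar eye cascade), and the cascade parameters and Gaussian kernel size are assumptions.

```python
# Hedged sketch of the face -> eye -> pupil pipeline described above.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_pupil(gray):
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)          # step 1: face region
    for (fx, fy, fw, fh) in faces:
        face = gray[fy:fy + fh, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(face, 1.1, 5)        # step 2: eye region
        for (ex, ey, ew, eh) in eyes:
            eye = face[ey:ey + eh, ex:ex + ew]
            inv = 255 - eye                                      # pupil is the darkest blob
            g = cv2.getGaussianKernel(9, 2.0).ravel()
            # step 3: horizontal/vertical intensity profiles smoothed by a Gaussian
            col = np.convolve(inv.sum(axis=0), g, mode="same")
            row = np.convolve(inv.sum(axis=1), g, mode="same")
            px, py = int(np.argmax(col)), int(np.argmax(row))
            return fx + ex + px, fy + ey + py                    # pupil center in image coords
    return None

cap = cv2.VideoCapture(0)                                        # PC camera
ok, frame = cap.read()
if ok:
    print(detect_pupil(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
cap.release()
```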

The Future of CAE as a Disruptive Innovation (The Future of CAE from the Perspective of Disruptive Innovation)

  • Kim, Sangtae
    • Transactions of the KSME C: Technology and Education / v.3 no.2 / pp.149-154 / 2015
  • Disruptive innovation refers to innovation in which a new technology makes an existing technology obsolete. Like the classic examples of disruptive innovation, such as the shift from film cameras to digital cameras and from PCs to laptops, CAE can be regarded as a technology capable of driving disruptive innovation. CAE technology, which is improving dramatically along with its related technologies, is bringing innovation to the design and validation processes of the manufacturing industry. This innovation can change the rules of the market, so researchers, companies, and CAE engineers should pay attention to it. In addition to improving the technology itself, training and cultivating engineers who can utilize CAE technology is also very important and cannot be neglected.

An Interactive Aerobic Training System Using Vision and Multimedia Technologies

  • Chalidabhongse, Thanarat H.;Noichaiboon, Alongkot
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.1191-1194 / 2004
  • We describe the development of an interactive aerobic training system using vision-based motion capture and multimedia technology. Unlike traditional one-way aerobic training on TV, the proposed system allows a virtual trainer to observe and interact with the user in real time. The system is composed of a web camera connected to a PC that watches the user move. First, the animated character on the screen makes a move and instructs the user to follow it. The system applies a robust statistical background subtraction method to extract a silhouette of the moving user from the captured video. Subsequently, the principal body parts of the extracted silhouette are located using a model-based approach. The motion of these body parts is then analyzed and compared with the motion of the animated character. The system provides audio feedback to the user according to the result of the motion comparison. All animation and video processing runs in real time on a PC-based system with a consumer-grade camera. The proposed system is a good example of applying vision algorithms and multimedia technology to intelligent interactive home entertainment systems.

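The silhouette-extraction step described in this entry can be sketched with OpenCV as follows. The paper's "robust statistical background subtraction" is approximated here by the MOG2 Gaussian-mixture subtractor, and the model-based body-part localization and motion comparison with the animated trainer are omitted; all parameter values are assumptions.

```python
# Hedged sketch: statistical background subtraction -> binary silhouette of the user.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25, detectShadows=True)
cap = cv2.VideoCapture(0)                       # consumer web camera on a PC

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # per-pixel foreground test against the background model
    mask = cv2.medianBlur(mask, 5)              # suppress isolated noise pixels
    _, silhouette = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop the shadow label (127)
    contours, _ = cv2.findContours(silhouette, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        user = max(contours, key=cv2.contourArea)   # largest blob = the moving user
        cv2.drawContours(frame, [user], -1, (0, 255, 0), 2)
    cv2.imshow("silhouette", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```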

A Development of Image Transfer Remote Maintenance Monitoring System for Hand Held Device

  • Kim, Dong-Wan;Park, Sung-Won
    • The Transactions of the Korean Institute of Electrical Engineers P / v.58 no.3 / pp.276-284 / 2009
  • In this paper, we develop an image-transfer remote maintenance monitoring system for hand-held devices that can compensate for human error. Such errors often occur when workers exchange information while checking and maintaining power plant equipment under poor conditions, such as confined spaces and long distances within the plant. Workers also cannot converse easily in noisy places such as a power plant, so we built a hand-held device of portable size that enables conversation in noisy environments. The developed system can improve productivity by increasing plant operation time. It is composed of a hardware (H/W) system and a software (S/W) system. The H/W system consists of a media server unit, communication equipment for the hand-held device, a portable camera, a microphone, and a headset. The S/W system consists of a database system and a client PC (personal computer) real-time monitoring system, which includes a server GUI (graphical user interface) program, a wireless monitoring program, and a wired Ethernet communication program. The client GUI program is a total-solution program comprising a PC camera program, a voice conversation program, and so on. We analyzed the required items and investigated the applicable parts of the image-transfer remote maintenance monitoring system with the hand-held device. We also investigated the communication protocol linkage for the developed prototype and developed a two-way communication software tool and real-time recording of voice with images. The efficiency was confirmed by field tests in preventive maintenance of the power plant.
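
The PC-camera image-transfer part of such a client program might look roughly like the sketch below: grab frames, JPEG-encode them, and push them to the media server over a wired Ethernet (TCP) link. The server address, port, and length-prefixed framing are illustrative assumptions, not the protocol used in the paper.

```python
# Hedged sketch of PC-camera image transfer to a media server over TCP.
import socket
import struct
import cv2

SERVER = ("192.168.0.10", 9000)   # assumed media-server address

cap = cv2.VideoCapture(0)
with socket.create_connection(SERVER) as sock:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        if not ok:
            continue
        payload = jpg.tobytes()
        sock.sendall(struct.pack(">I", len(payload)) + payload)  # 4-byte length prefix, then JPEG
cap.release()
```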

Unmanned Patient Monitoring System Using Frame Difference Method and Decibel Threshold

  • Lee, Kee-Woo;Lee, Hyuk-Soo
    • Journal of the Institute of Convergence Signal Processing / v.8 no.1 / pp.1-5 / 2007
  • In this paper, we propose the design of an unmanned patient monitoring system and evaluate its motion capture and sound detection performance. The system can be used for comatose or deeply sedated patients who require 24-hour surveillance. For monitoring, we used a laptop, a CCTV camera (or PC camera), an A/D converter, a microphone, and a detection program. The detection program is based on the frame difference method and a sound level meter, and provides functions such as data collection and storage. The whole system was tested in several simulated emergency situations. The results suggest that the unmanned patient monitoring system can be used for emergency situations and patient care.

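A minimal sketch of the two detectors named in this entry, frame differencing for patient motion and an RMS level in decibels for sound, is given below. Threshold values, the full-scale dB reference, and the microphone capture path are assumptions.

```python
# Hedged sketch: frame-difference motion detection plus a decibel-threshold sound check.
import cv2
import numpy as np

def motion_detected(prev_gray, cur_gray, pixel_thresh=25, area_thresh=500):
    diff = cv2.absdiff(prev_gray, cur_gray)                 # frame difference method
    _, moving = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
    return int(np.count_nonzero(moving)) > area_thresh      # enough changed pixels?

def sound_exceeds(samples, db_threshold=-20.0):
    # samples: float array in [-1, 1]; level expressed in dB relative to full scale
    rms = float(np.sqrt(np.mean(np.square(samples)))) + 1e-12
    return 20.0 * np.log10(rms) > db_threshold

cap = cv2.VideoCapture(0)                                   # CCTV or PC camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if motion_detected(prev_gray, gray):
        print("motion alarm")                               # e.g. store the frame / notify staff
    prev_gray = gray
cap.release()
```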

Development of a Camera-based Position Measurement System for the RTGC with Environment Conditions (Development of a Camera-Based RTGC Position Measurement System Considering the Outdoor Driving Environment)

  • Kawai, Hideki;Kim, Young-Bok;Choi, Yong-Woon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.9 / pp.892-896 / 2011
  • This paper describes a camera-based position measurement system for automatic tracking control of a Rubber-Tired Gantry Crane (RTGC). Automatic tracking control of an RTGC depends on the ability to measure its displacement and angle from a guide line that the RTGC has to follow. The measurement system proposed in this paper is composed of a camera and a PC mounted on the upper right side of the RTGC, between the front and rear tires. The measurement accuracy of the system is affected by disturbances such as cracks and stains on the guide line, shadows, and halation due to light fluctuation. To overcome these disturbances, both edges of the guide line are detected as two straight lines in the input image taken by the camera, and the parameters of the straight lines are determined using the Hough transform. The displacement and angle of the RTGC from the guide line can be obtained from these parameters with robustness against the disturbances. In experiments with such disturbances, the measured displacement and angle from the guide line had standard deviations of 0.95 pixels and 0.22 degrees, respectively.
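
A hedged sketch of the measurement principle follows: detect the two edges of the guide line with the Hough transform and convert the averaged line parameters into a lateral displacement (in pixels) and an angle (in degrees) relative to the image centre line. The Canny/Hough thresholds and the near-vertical filter are assumptions, not the paper's values.

```python
# Hedged sketch: guide-line displacement and angle from Hough-line parameters.
import cv2
import numpy as np

def guide_line_pose(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return None
    # keep roughly vertical lines (the guide line runs along the travel direction)
    candidates = [(rho, theta) for rho, theta in lines[:, 0]
                  if abs(theta - np.pi / 2) > np.pi / 3]
    if len(candidates) < 2:
        return None
    (r1, t1), (r2, t2) = candidates[:2]                 # two strongest edges of the guide line
    rho_c, theta_c = (r1 + r2) / 2.0, (t1 + t2) / 2.0   # centre line between the two edges
    h, w = gray.shape
    # signed horizontal distance from the image centre to the centre line at mid-height
    x_at_mid = (rho_c - (h / 2.0) * np.sin(theta_c)) / np.cos(theta_c)
    displacement = x_at_mid - w / 2.0
    angle_deg = float(np.degrees(theta_c))              # 0 deg corresponds to a perfectly vertical line
    return displacement, angle_deg
```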

Development and Test of the Remote Operator Visual Support System Based on Virtual Environment

  • Song, T.G.;Park, B.S.;Choi, K.H.;Lee, S.H.
    • Korean Journal of Computational Design and Engineering / v.13 no.6 / pp.429-439 / 2008
  • With a remotely operated manipulator system, the situation at a remote site is rendered to the operator through a remote visual image. The operator can then quickly grasp the situation and control the slave manipulator by operating a master input device based on the information in the virtual image. In this study, the Remote Operator Visual Support System (ROVSS) was developed to support the viewing of a remote operator so that remote tasks can be performed effectively. A visual support model based on a virtual environment was also built and used to fulfill the needs of this study. The framework for the system was created with the Windows API on a PC and the library of a 3D graphic simulation tool, ENVISION. To realize the system, an operation test environment for a limited operating site was constructed using an experimental robot operation. A 3D virtual environment was designed to provide accurate information about the rotation of the robot manipulator and the location and distance of the operation tool through real-time synchronization. To evaluate the efficiency of the visual support, we conducted experiments with four methods: direct view, camera view, virtual view, and camera view plus virtual view. The experimental results show that the camera view plus virtual view method is about 30% more efficient than the camera view method.

Quantitative analysis of gene expression by fluorescence images using green fluorescence protein

  • Park, Yong-Doo;Kim, Jong-Won;Suh, You-Hun;Min, Byoung-Goo
    • Proceedings of the KOSOMBE Conference / v.1997 no.11 / pp.475-477 / 1997
  • We have analyzed fluorescence images obtained from green fluorescent protein (GFP). To monitor the fluorescence of a specific gene, we used the amyloid precursor protein promoter, which is known to play a major role in the development of Alzheimer's disease. The promoter region from -3.0 kb to +100 base pairs was inserted into a gene-expression-monitoring GFP vector purchased from Clontech. This construct was transfected into PC12 and fibroblast cells, and the fluorescence images were captured by two methods: one using a cheaper conventional CCD camera and the other using a SIT-CCD camera. For higher sensitivity of the fluorescence image, we developed a multiple image grabbing program. As a result, fluorescence images from the conventional CCD camera achieved a sensitivity similar to that of the SIT camera when the multiple image grabbing program was applied. With this system, it will be possible to construct a fluorescence monitoring system at lower cost and to observe gene expression in real time from the fluorescence images.

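The "multiple image grabbing" idea in this entry, averaging several consecutive frames from an inexpensive CCD camera so that weak GFP fluorescence rises above the noise, can be sketched as follows; the frame count and region of interest are assumptions.

```python
# Hedged sketch: averaging N frames suppresses random noise (roughly by sqrt(N)),
# raising the effective sensitivity of a cheap CCD camera for weak fluorescence.
import cv2
import numpy as np

def grab_averaged_frame(cap, n_frames=16):
    acc = None
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        acc = gray if acc is None else acc + gray
    return acc / n_frames if acc is not None else None

cap = cv2.VideoCapture(0)                       # CCD camera on a frame grabber
avg = grab_averaged_frame(cap)
if avg is not None:
    roi = avg[100:200, 100:200]                 # assumed region containing GFP-expressing cells
    print("mean fluorescence intensity:", float(roi.mean()))
cap.release()
```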

A Fast Motion Detection and Tracking Algorithm for Automatic Control of an Object Tracking Camera

  • 강동구;나종범
    • Journal of Broadcast Engineering / v.7 no.2 / pp.181-191 / 2002
  • Video surveillance systems based on an active camera require a fast algorithm for real-time detection and tracking of local motion in the presence of global motion. This paper presents a new fast and efficient motion detection and tracking algorithm using the displaced frame difference (DFD). In the proposed algorithm, first, a previous frame is adaptively selected according to the magnitude of the object motion, and the global motion is estimated using only a few confident matching blocks for a fast and accurate result. Then, a DFD is obtained between the current frame and the selected previous frame displaced by the global motion. Finally, a moving object is extracted from the noisy DFD by utilizing the correlation between the DFD and the current frame. We implemented this algorithm in an active camera system consisting of a pan-tilt unit and a standard PC equipped with an AMD 800 MHz processor. The system can perform an exhaustive search over a search range of 120 and achieves a processing speed of about 50 frames/sec for 320×240 video sequences, providing satisfactory tracking results.
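
The DFD-based scheme summarized in this entry can be sketched as follows: estimate the global (camera) motion from a few confident matching blocks, cancel it by shifting the previous frame, and threshold the displaced frame difference to expose local object motion. The block layout, matching-confidence test, and thresholds are assumptions, not the authors' implementation.

```python
# Hedged sketch: global-motion compensation followed by a displaced frame difference.
import cv2
import numpy as np

def global_shift(prev, cur, block=32, n_blocks=4):
    # prev, cur: grayscale uint8 frames; returns the median shift of a few confident blocks
    h, w = prev.shape
    shifts = []
    ys = np.linspace(block, h - 2 * block, n_blocks, dtype=int)
    xs = np.linspace(block, w - 2 * block, n_blocks, dtype=int)
    for y, x in zip(ys, xs):
        tmpl = prev[y:y + block, x:x + block]
        res = cv2.matchTemplate(cur, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, (bx, by) = cv2.minMaxLoc(res)
        if score > 0.8:                                   # keep only confident matching blocks
            shifts.append((bx - x, by - y))
    if not shifts:
        return 0.0, 0.0
    return tuple(np.median(np.array(shifts, dtype=np.float32), axis=0))

def moving_object_mask(prev, cur, diff_thresh=30):
    dx, dy = global_shift(prev, cur)
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    displaced_prev = cv2.warpAffine(prev, M, (prev.shape[1], prev.shape[0]))
    dfd = cv2.absdiff(cur, displaced_prev)                # displaced frame difference
    _, mask = cv2.threshold(dfd, diff_thresh, 255, cv2.THRESH_BINARY)
    return cv2.medianBlur(mask, 5)                        # clean up DFD noise
```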