• Title/Summary/Keyword: Optical information processing

Scattered Point Noise Filtering Method for Image Reconstruction Performance Enhancing of White Light Interferometry (높이영상에 산포되어 있는 점 노이즈 처리를 통한 백색광 간섭계의 영상 복원력 향상)

  • Yim, Hae-Dong;Lee, Min-Woo;Lee, Seung-Gol;Park, Se-Geun;Lee, El-Hang;O, Beom-Hoan
    • Korean Journal of Optics and Photonics, v.21 no.1, pp.21-25, 2010
  • In this paper, in order to enhance the image reconstruction performance of white light scanning interferometry (WLI), we demonstrate the scattered point noise filtering performance of post-processing methods. Median filtering is similar to using an averaging filter, but because the median value is less sensitive than the mean to extreme values, the median filter can remove scattered point noise from a height map without significantly reducing the sharpness of the image. In several specific cases, however, the median filter cannot remove the scattered point noise. We therefore propose a comparative mean filter that uses order-statistic filtering and the mean of the neighborhood pixels. The performance is demonstrated by measuring an array of metal solder balls fabricated on a PCB. The proposed method reduced the number of noise pixels by 4.4 percent.
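
A minimal sketch of the kind of scattered point (outlier) filtering described above, assuming a simple deviation-from-median test; the window size, threshold, and the exact definition of the paper's comparative mean filter are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import median_filter, convolve

def comparative_mean_filter(height_map, size=3, threshold=1.0):
    """Replace pixels that deviate strongly from their local median with the
    mean of their neighborhood (one plausible reading of a 'comparative mean
    filter' built from order-statistic filtering and neighborhood means)."""
    local_median = median_filter(height_map, size=size)        # order-statistic filter
    kernel = np.ones((size, size)) / (size * size)
    local_mean = convolve(height_map, kernel, mode='nearest')  # neighborhood mean
    outliers = np.abs(height_map - local_median) > threshold   # scattered point noise
    filtered = height_map.copy()
    filtered[outliers] = local_mean[outliers]
    return filtered

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hm = np.zeros((64, 64))
    hm[20:40, 20:40] = 5.0                          # a flat plateau (e.g. a solder ball top)
    noisy = hm.copy()
    idx = rng.integers(0, 64, size=(30, 2))         # 30 scattered noise points
    noisy[idx[:, 0], idx[:, 1]] += rng.normal(0, 10, 30)
    cleaned = comparative_mean_filter(noisy)
    print(np.sum(np.abs(noisy - hm) > 1), "->", np.sum(np.abs(cleaned - hm) > 1))
```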

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B, v.14B no.4, pp.311-320, 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected to the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by two fitting processes: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed by use of Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video images.
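
The last fitting step above (propagating control-point displacements to the surrounding non-feature points with Radial Basis Functions) can be sketched as follows; the Gaussian kernel and its width are assumptions for illustration, not the parameters used in the paper:

```python
import numpy as np

def rbf_deform(control_pts, control_disp, vertices, eps=0.05):
    """Interpolate displacements known at control (feature) points to
    arbitrary non-feature vertices with a Gaussian radial basis function."""
    def gram(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * eps ** 2))
    K = gram(control_pts, control_pts)                            # (n, n)
    w = np.linalg.solve(K + 1e-9 * np.eye(len(K)), control_disp)  # RBF weights, (n, 3)
    return gram(vertices, control_pts) @ w                        # (m, 3) displacements

if __name__ == "__main__":
    ctrl = np.array([[0.00, 0.00, 0.0], [0.10, 0.00, 0.0], [0.00, 0.10, 0.0]])
    disp = np.array([[0.00, 0.02, 0.0], [0.00, 0.01, 0.0], [0.00, 0.00, 0.0]])  # from animation parameters
    verts = np.random.default_rng(0).random((5, 3)) * 0.1         # nearby non-feature points
    deformed = verts + rbf_deform(ctrl, disp, verts)
    print(deformed)
```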

A Prototype Architecture of an Interactive Service System for Digital Hologram Videos (디지털 홀로그램 비디오를 위한 인터랙티브 서비스 시스템의 프로토타입 설계)

  • Seo, Young-Ho;Lee, Yoon-Hyuk;Yoo, Ji-Sang;Kim, Man-Bae;Choi, Hyun-Jun;Kim, Dong-Wook
    • Journal of Broadcast Engineering, v.17 no.4, pp.695-706, 2012
  • The purpose of this paper is to propose a service system for digital hologram video, which has not been reported before. This system assumes the existing service framework for 2-dimensional or 3-dimensional image/video, which includes data acquisition, processing, transmission, reception, and reconstruction. The system also includes the function of serving the digital hologram at the viewer's viewpoint by tracking the viewer's face. For this function, the image information at the virtual viewpoint corresponding to the viewer's viewpoint is generated to obtain the corresponding hologram. In this paper, only a prototype that includes the major functions is implemented: a camera system for data acquisition, camera calibration and image rectification, depth/intensity image enhancement, intermediate view generation, digital hologram generation, and holographic image reconstruction by both simulation and an optical apparatus. The implemented prototype takes about 352 ms to generate one frame of a digital hologram and reconstruct the image by simulation, or 183 ms to reconstruct the image with the optical apparatus instead of simulation.
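
The "digital hologram generation" stage can be illustrated with a simple point-cloud computer-generated hologram (CGH), where each object point contributes a spherical wave to the hologram plane; the wavelength, pixel pitch, and resolution below are assumed values, not those of the paper's prototype:

```python
import numpy as np

WAVELENGTH = 532e-9      # assumed green laser
PITCH = 8e-6             # assumed SLM pixel pitch (m)
N = 256                  # hologram resolution (N x N)

def point_cloud_cgh(points, intensities):
    """Accumulate the complex field of object points (x, y, z in metres)
    on the hologram plane z = 0 using the spherical-wave model."""
    k = 2.0 * np.pi / WAVELENGTH
    coords = (np.arange(N) - N / 2) * PITCH
    X, Y = np.meshgrid(coords, coords)
    field = np.zeros((N, N), dtype=np.complex128)
    for (x, y, z), a in zip(points, intensities):
        r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)
        field += a * np.exp(1j * k * r) / r
    return field

if __name__ == "__main__":
    # two object points taken from a depth/intensity image (hypothetical values)
    pts = [(0.0, 0.0, 0.10), (2e-4, -1e-4, 0.12)]
    hologram = point_cloud_cgh(pts, intensities=[1.0, 0.8])
    fringes = np.real(hologram)          # fringe pattern to display or reconstruct
    print(fringes.shape, float(fringes.max()))
```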

Enhancing A Neural-Network-based ISP Model through Positional Encoding (위치 정보 인코딩 기반 ISP 신경망 성능 개선)

  • DaeYeon Kim;Woohyeok Kim;Sunghyun Cho
    • Journal of the Korea Computer Graphics Society, v.30 no.3, pp.81-86, 2024
  • The Image Signal Processor (ISP) converts RAW images captured by the camera sensor into user-preferred sRGB images. While RAW images contain more meaningful information for image processing than sRGB images, RAW images are rarely shared due to their large sizes. Moreover, the actual ISP process of a camera is not disclosed, making it difficult to model the inverse process. Consequently, research on learning the conversion between sRGB and RAW has been conducted. Recently, the ParamISP[1] model, which directly incorporates camera parameters (exposure time, sensitivity, aperture size, and focal length) to mimic the operations of a real camera ISP, has been proposed, advancing beyond simple network structures. However, existing studies, including ParamISP[1], have limitations in modeling the camera ISP because they do not consider the degradation caused by lens shading, optical aberration, and lens distortion, which limits their restoration performance. This study introduces positional encoding to enable the camera ISP neural network to better handle degradations caused by the lens. The proposed positional encoding method is suitable for camera ISP neural networks that learn by dividing the image into patches. By reflecting the spatial context of the image, it allows for more precise image restoration compared to existing models.
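
The positional encoding idea can be sketched as follows: the normalized spatial coordinates of each patch are expanded with sinusoids and concatenated to the patch input, so a patch-based ISP network can account for spatially varying lens effects. The number of frequencies and the normalization below are assumptions for illustration, not the exact design of the paper:

```python
import numpy as np

def positional_encoding(y, x, height, width, num_freqs=6):
    """Encode a patch's top-left corner (y, x) within a full sensor image of
    size (height, width) as a vector of sinusoids."""
    u = np.array([y / height, x / width])            # normalized coordinates in [0, 1)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    angles = np.outer(freqs, u).ravel()
    return np.concatenate([np.sin(angles), np.cos(angles)])

if __name__ == "__main__":
    enc = positional_encoding(y=512, x=1024, height=3000, width=4000)
    print(enc.shape)   # (2 coordinates * 2 functions * num_freqs,) -> (24,)
    # this vector would be broadcast over the patch and concatenated to its
    # channels before being fed to the ISP network
```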

Classification of Handwritten and Machine-printed Korean Address Image based on Connected Component Analysis (연결요소 분석에 기반한 인쇄체 한글 주소와 필기체 한글 주소의 구분)

  • 장승익;정선화;임길택;남윤석
    • Journal of KIISE: Software and Applications, v.30 no.10, pp.904-911, 2003
  • In this paper, we propose an effective method for distinguishing between machine-printed and handwritten Korean address images. It is important to know whether an input image is handwritten or machine-printed, because methods for handwritten images are quite different from those for machine-printed images in applications such as address reading, form processing, FAX routing, and so on. Our method consists of three blocks: grouping of valid connected components, feature extraction, and classification. Features related to the width and position of groups of valid connected components are used for classification based on a neural network. An experiment with live Korean address images has demonstrated the superiority of the proposed method: the correct classification rate for 3,147 test images was about 98.85%.
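
A schematic sketch of this pipeline, assuming a binarized address image and a stand-in logistic classifier in place of the paper's neural network; the specific features below (width and vertical-position statistics of components) are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def component_features(binary_img):
    """Width/position statistics of connected components in a binary image."""
    labeled, n = label(binary_img)
    if n == 0:
        return np.zeros(4)
    widths, centers_y = [], []
    for sl in find_objects(labeled):
        widths.append(sl[1].stop - sl[1].start)             # component width
        centers_y.append((sl[0].start + sl[0].stop) / 2.0)  # vertical position
    widths, centers_y = np.array(widths, float), np.array(centers_y, float)
    # machine print tends toward uniform widths and aligned baselines;
    # handwriting shows larger variance in both
    return np.array([widths.mean(), widths.std(), centers_y.std(), float(n)])

def classify(features, w, b):
    """Stand-in for the neural-network classifier: returns 1 for handwritten."""
    return int(1.0 / (1.0 + np.exp(-(features @ w + b))) > 0.5)

if __name__ == "__main__":
    img = np.zeros((40, 120), dtype=bool)
    img[10:20, 5:15] = True                 # two printed-like blobs of equal size
    img[10:20, 25:35] = True
    feats = component_features(img)
    print(feats, classify(feats, w=np.array([0.0, 0.5, 0.5, -0.1]), b=-1.0))
```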

An Analysis of Design Elements of Silicon Avalanche LED (실리콘 애벌런치 LED의 설계요소에 대한 분석)

  • Ea, Jung-Yang
    • Journal of the Korean Vacuum Society, v.18 no.2, pp.116-126, 2009
  • It is becoming more difficult to improve device operating speed by shrinking the size of semiconductor devices. Therefore, for a new leap forward in the semiconductor industry, the advent of silicon opto-electronic devices, i.e., silicon photonics, is all the more urgently needed. The silicon avalanche LED is one of the prospective candidates for realizing practical silicon opto-electronic devices due to its simplicity of fabrication, repeatability, stability, high-speed operation, and compatibility with silicon IC processing. We measured the electrical characteristics and observed the light-emitting phenomena using optical microscopy. We analyzed the influence of design elements such as the shape of the light-emitting area and the depth of the $n^{+}-p^{+}$ junction with simple device modeling and simulation. We compared the simulation and measurement results, explained the discrepancy between them, and gave suggestions for improvement.

Enhanced Reconstruction of Heavy Occluded Objects Using Estimation of Variance in Volumetric Integral Imaging (VII) (Volumetric 집적영상에서 분산 추정을 이용한 심하게 은폐된 물체의 향상된 복원)

  • Hwang, Yong-Seok;Kim, Eun-Soo
    • Korean Journal of Optics and Photonics, v.19 no.6, pp.389-393, 2008
  • Enhanced reconstruction of heavily occluded objects is presented using estimation of variance in computational integral imaging. The system is analyzed to extract the information needed for enhanced reconstruction from a set of elemental images. To obtain elemental images with enhanced resolution, low focus error, and large depth of focus, synthetic aperture integral imaging (SAII) utilizing a digital camera has been adopted. The focused areas of the reconstructed image vary with the distance of the reconstruction plane. When an object is heavily occluded, it cannot be reconstructed simply by removing the occluding object. To reconstruct the occluded object while remedying the effect of heavy occlusion, a statistical technique has been adopted.
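
One way to read the variance-based idea is sketched below: at each pixel of the reconstruction plane, the samples contributed by different elemental images are compared, and high-variance outliers (attributed to the occluding object) are discarded before averaging. The thresholding rule is an assumption for illustration, not the paper's exact estimator:

```python
import numpy as np

def variance_filtered_reconstruction(samples, var_threshold=0.01):
    """samples: (K, H, W) array holding, for each of K elemental images, its
    back-projected contribution to one reconstruction plane."""
    med = np.median(samples, axis=0)
    var = np.var(samples, axis=0)
    # where variance is low, plain averaging is fine; where it is high, keep
    # only samples close to the median (assumed to belong to the occluded
    # object rather than the occluder)
    close = np.abs(samples - med) <= np.sqrt(var)[None] + 1e-6
    weights = np.where(var[None] > var_threshold, close, True).astype(float)
    return (samples * weights).sum(axis=0) / weights.sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    K, H, W = 9, 32, 32
    scene = np.full((H, W), 0.5)
    stack = scene[None] + rng.normal(0.0, 0.01, (K, H, W))
    stack[:4, 10:20, 10:20] = 1.0        # occluder visible in 4 of 9 elemental images
    rec = variance_filtered_reconstruction(stack)
    print(round(float(rec[15, 15]), 2))  # close to 0.5 rather than pulled toward 1.0
```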

Vision-based Obstacle Detection using Geometric Analysis (기하학적 해석을 이용한 비전 기반의 장애물 검출)

  • Lee Jong-Shill;Lee Eung-Hyuk;Kim In-Young;Kim Sun-I.
    • Journal of the Institute of Electronics Engineers of Korea SC, v.43 no.3 s.309, pp.8-15, 2006
  • Obstacle detection is an important task for many mobile robot applications. Methods using stereo vision and optical flow are computationally expensive. Therefore, this paper presents a vision-based obstacle detection method using only two view images. The method uses a single passive camera and odometry, and runs in real time. The proposed method detects obstacles through 3D reconstruction from two views. Processing begins with feature extraction for each input image using Lowe's SIFT (Scale Invariant Feature Transform) and establishing the correspondence of features across the input images. Using the extrinsic camera rotation and translation matrices provided by odometry, we calculate the 3D positions of the corresponding points by triangulation. The results of triangulation form a partial 3D reconstruction of the obstacles. The proposed method has been tested successfully on an indoor mobile robot and is able to detect obstacles in 75 ms.
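
The triangulation step above can be sketched with a linear (DLT) two-view triangulation, using the rotation R and translation t supplied by odometry; the intrinsic matrix K below is an assumed example, not calibration data from the paper:

```python
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])               # assumed pinhole intrinsics

def triangulate(p1, p2, R, t):
    """p1, p2: pixel coordinates of one matched SIFT feature in view 1 and
    view 2. Returns the 3D point in the first camera's frame."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([p1[0] * P1[2] - P1[0],
                   p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0],
                   p2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    R = np.eye(3)                             # robot translated sideways, no rotation
    t = np.array([-0.10, 0.0, 0.0])           # 10 cm baseline from odometry
    X_true = np.array([0.3, 0.1, 2.0])        # a point on an obstacle
    x1 = K @ X_true;             x1 = x1[:2] / x1[2]
    x2 = K @ (R @ X_true + t);   x2 = x2[:2] / x2[2]
    print(triangulate(x1, x2, R, t))          # ~ [0.3, 0.1, 2.0]
```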

Microstructure Change and Mechanical Properties in Binary Ti-Al Containing Ti3Al

  • Oh, Chang-Sup;Woo, Sang-Woo;Han, Chang-Suk
    • Korean Journal of Materials Research, v.26 no.12, pp.709-713, 2016
  • Grain morphology, phase stability, and mechanical properties of binary Ti-Al alloys containing 43-52 mol% Al have been investigated. Isothermal forging was used to control the grain sizes of these alloys in the range of 5 to $350{\mu}m$. Grain morphology and the volume fraction of the ${\alpha}_2$ phase were observed by optical metallography and scanning electron microscopy. Compressive properties were evaluated at room temperature, 1070 K, and 1270 K in an argon atmosphere. Work hardening is significant at room temperature, but it hardly takes place at 1070 K and 1270 K because of dynamic recrystallization. The grain morphologies were determined as functions of aluminum content and processing conditions. The transus curve of ${\alpha}$ and ${\alpha}+{\gamma}$ shifted more to the aluminum-rich side than is the case in McCullough's phase diagram. The flow stress at room temperature depends strongly on the volume fraction of the ${\alpha}_2$ phase and the grain size, whereas the flow stress at 1070 K is insensitive to the alloy composition or the grain size, and the flow stress at 1270 K depends mainly on the grain size. The ${\alpha}_2$ phase in these alloys does not increase the proof stress at high temperatures. These observations indicate that slightly Ti-rich TiAl-based alloys should be used to improve both the proof stress at high temperature and the room-temperature ductility.

MyWorkspace: VR Platform with an Immersive User Interface (MyWorkspace: 몰입형 사용자 인터페이스를 이용한 가상현실 플랫폼)

  • Yoon, Jong-Won;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Proceedings of the HCI Society of Korea Conference, 2009.02a, pp.52-55, 2009
  • With the recent development of virtual reality, the development of user interfaces for immersive interaction has been actively investigated. Immersive user interfaces improve the efficiency and capability of information processing in virtual environments that provide various services, and enable effective interaction in the field of ubiquitous and mobile computing. In this paper, we propose a virtual reality platform, "MyWorkspace," which renders a 3D virtual workspace by using an immersive user interface. We develop an interface that integrates an optical see-through head-mounted display, a Wii remote controller, and a helmet with infrared LEDs. It estimates the user's gaze direction in terms of horizontal and vertical angles based on a model of head movements. MyWorkspace expands the current monitor-based 2D workspace into a layered 3D workspace, and renders the part of the 3D virtual workspace corresponding to the gaze direction. The user can arrange various tasks in the virtual workspace and switch between tasks by moving his or her head. We also verify the performance of the immersive user interface as well as its usefulness through a usability test.
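
A rough sketch of how the estimated head yaw/pitch could select which panel of the layered 3D workspace to render; the panel layout and angular bin sizes are hypothetical values for illustration, not those of MyWorkspace:

```python
# hypothetical 2 x 3 layout of tasks arranged on the virtual workspace
PANELS = [["mail", "editor", "browser"],
          ["terminal", "music", "notes"]]
YAW_STEP, PITCH_STEP = 20.0, 15.0     # assumed degrees spanned by one panel

def panel_for_gaze(yaw_deg, pitch_deg):
    """Map the head orientation (estimated from the helmet's infrared LEDs)
    to the workspace panel that should be rendered."""
    col = min(max(1 + round(yaw_deg / YAW_STEP), 0), len(PANELS[0]) - 1)
    row = min(max(round(pitch_deg / PITCH_STEP), 0), len(PANELS) - 1)
    return PANELS[row][col]

if __name__ == "__main__":
    print(panel_for_gaze(yaw_deg=-25.0, pitch_deg=0.0))   # head turned left  -> "mail"
    print(panel_for_gaze(yaw_deg=0.0, pitch_deg=20.0))    # head tilted down  -> "music"
```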
