• Title/Abstract/Keyword: IT-fusion

Search results: 2,952 items (processing time: 0.025 s)

Transflective liquid crystal display with single cell gap and simple structure

  • Kim, Mi-Young;Lim, Young-Jin;Jeong, Eun;Chin, Mi-Hyung;Kim, Jin-Ho;Srivastava, Anoop Kumar;Lee, Seung-Hee
    • 한국정보디스플레이학회:학술대회논문집 / 한국정보디스플레이학회 2008 International Meeting on Information Display / pp.340-343 / 2008
  • This work reports the simple fabrication of a single-cell-gap transflective liquid crystal display (LCD) using a wire grid polarizer. A nano-sized wire grid polarizer was patterned on the common electrode itself in the reflective part of the FFS (fringe field switching) mode, whereas the common electrode was left unpatterned in the transmissive part. However, this structure did not show a single gamma curve, so we further improved the device by patterning the common electrode in the transmissive part as well. As a result, the V-T curve of the proposed structure shows a single gamma curve. Such a device structure is free from an in-cell retarder, compensation film, and reflector; furthermore, it is very thin and easy to fabricate.


Real-Time Visible-Infrared Image Fusion using Multi-Guided Filter

  • Jeong, Woojin;Han, Bok Gyu;Yang, Hyeon Seok;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.13 No.6 / pp.3092-3107 / 2019
  • Visible-infrared image fusion is the process of synthesizing an infrared image and a visible image into a single fused image, combining the complementary advantages of both. The infrared image can capture a target object in dark or foggy environments, but its utility is hindered by the blurry appearance of objects. On the other hand, the visible image clearly shows an object under normal lighting conditions but is not useful in dark or foggy environments. In this paper, we propose a multi-guided filter and a real-time image fusion method based on it. The proposed multi-guided filter is a modification of the guided filter that accepts multiple guidance images. The resulting fusion method is much faster than conventional image fusion methods. In experiments, we compare the proposed method and conventional methods in terms of quantitative and qualitative results, fusion speed, and flickering artifacts. The proposed method synthesizes 57.93 frames per second for an image size of 320×270, confirming that it is capable of real-time processing; in addition, it produces flicker-free video.
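As a rough illustration of how guided filtering enters such a fusion pipeline, the sketch below blends a visible and an infrared image using a weight map smoothed by a classic single-guide guided filter. This is a simplification of the paper's multi-guided filter: the saliency-based weight, window radius `r`, and regularizer `eps` are illustrative assumptions, not the authors' design.

```python
import numpy as np

def box_mean(img, r):
    # Mean over a (2r+1)x(2r+1) window via an integral image (edge-padded).
    p = np.pad(img, r, mode='edge').astype(np.float64)
    s = np.cumsum(np.cumsum(p, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))          # leading zero row/column
    h, w = img.shape
    k = 2 * r + 1
    return (s[k:k+h, k:k+w] - s[:h, k:k+w]
            - s[k:k+h, :w] + s[:h, :w]) / (k * k)

def guided_filter(I, p, r=2, eps=1e-4):
    # Classic single-guide guided filter: output is locally linear in guide I.
    mI, mp = box_mean(I, r), box_mean(p, r)
    a = (box_mean(I * p, r) - mI * mp) / (box_mean(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

def fuse(vis, ir, r=2, eps=1e-4):
    # Blend with a weight map built from local infrared saliency, smoothed by
    # the guided filter (guided by the visible image) to follow image edges.
    w = np.abs(ir - box_mean(ir, r))
    w = guided_filter(vis, w / (w.max() + 1e-8), r, eps)
    w = np.clip(w, 0.0, 1.0)
    return w * ir + (1.0 - w) * vis
```

Because the box filter runs in constant time per pixel regardless of `r`, this family of filters is what makes real-time rates like those reported above plausible.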

고부가 활용을 위한 이종영상 융합 소프트웨어(InFusion) 개발 (Development of Multi-sensor Image Fusion software(InFusion) for Value-added applications)

  • 최명진;정인협;고형균;장수민
    • 한국위성정보통신학회논문지 / Vol.12 No.3 / pp.15-21 / 2017
  • Following the successful launch of the multipurpose satellite KOMPSAT-3 in May 2012, and the subsequent launches of KOMPSAT-5 in August 2013 and KOMPSAT-3A in March 2015, Korea became able to operate optical, radar, and thermal-infrared sensors in an integrated manner, establishing a foundation for exploiting the characteristics of each sensor in combination. Multi-sensor image fusion technology, which takes the strengths of multiple sensors while compensating for their weaknesses, has emerged to overcome the limitations in application scope and product accuracy of any single sensor. This paper introduces the development of software (InFusion) for image fusion and value-added product generation using the KOMPSAT satellite series. We first describe the characteristics of each sensor and the motivation for developing the fusion software, and then describe the entire development process in detail. We aim to increase the data utilization of the KOMPSAT series at home and abroad and to demonstrate the excellence of domestic software through value-added product generation.

환경기술과 정보기술 기반의 미래도시 공간 메커니즘과 알고리즘 분석 (An Analysis on the Mechanism and Algorithm of ET·IT Based Future City Space)

  • 한주형;이상호
    • 한국산학기술학회논문지 / Vol.18 No.3 / pp.296-305 / 2017
  • This study aims to create new urban space through an analysis of the future-city mechanism of information technology (IT) and environmental technology (ET) and the corresponding algorithms. The results are as follows. First, development trends in environmental and information technology can be classified into four types: eco-friendly development, energy development, energy-saving technology development, and wide-area network development. Second, in the case of Sangam DMC, development from the Korean War until 1978 proceeded from an eco-friendly, environmental-protection perspective. Wide-area network development evolved rapidly between 1990 and 2000. Since 2010, however, urban space has again been developed through the convergence of environment and information. Sangam DMC's past development trend centered on individual environmental technologies; at present, development is driven mainly by public information technology, with some semi-convergent development centered on the environment. In the future, this convergence is expected to become integrated convergent development. Third, the mechanism structure develops through processes of generation, extinction, and convergence: generation supplements what is insufficient, extinction is implied by the convergence of insufficient parts, and such convergence in turn becomes the criterion for generation and extinction. Ultimately, new creative urban space will continue to be formed by a mechanism of symbolic patterns centered on environmental and information technology.

X-Ray Image Enhancement Using a Boundary Division Wiener Filter and Wavelet-Based Image Fusion Approach

  • Khan, Sajid Ullah;Chai, Wang Yin;See, Chai Soo;Khan, Amjad
    • Journal of Information Processing Systems / Vol.12 No.1 / pp.35-45 / 2016
  • To resolve the problems of Poisson/impulse noise, blurriness, and sharpness in degraded X-ray images, a novel and efficient enhancement algorithm based on X-ray image fusion using a discrete wavelet transform is proposed in this paper. The proposed algorithm consists of two main steps. First, it applies a boundary-division technique to detect pixels corrupted by Poisson and impulse noise and then uses a Wiener filter to restore those pixels. Second, it applies a sharpening technique to the same degraded X-ray image. This yields two source X-ray images, each preserving its own enhancement effect. The details and approximations of these source X-ray images are fused via different fusion rules in the wavelet domain. Experimental results show that the proposed algorithm successfully combines the merits of Wiener filtering and sharpening and achieves significant proficiency in enhancing degraded X-ray images exhibiting Poisson noise, blurriness, and fine edge details.
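The wavelet-domain fusion step can be sketched in NumPy. The paper's two source images are the Wiener-filtered and the sharpened versions of the same X-ray; here we simply fuse two arbitrary (even-sized) arrays using a one-level Haar transform with common fusion rules (mean for the approximation, max-magnitude for the details). These rules are a standard choice, not necessarily the paper's exact ones.

```python
import numpy as np

def haar2d(x):
    # One-level 2-D Haar DWT (even-sized input): approximation + 3 detail bands.
    def step(a):                              # transform along the last axis
        return (a[..., ::2] + a[..., 1::2]) / 2, (a[..., ::2] - a[..., 1::2]) / 2
    lo, hi = step(x)
    ll, lh = step(lo.swapaxes(-1, -2))
    hl, hh = step(hi.swapaxes(-1, -2))
    return (ll.swapaxes(-1, -2), lh.swapaxes(-1, -2),
            hl.swapaxes(-1, -2), hh.swapaxes(-1, -2))

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    def istep(lo, hi):
        out = np.empty(lo.shape[:-1] + (2 * lo.shape[-1],))
        out[..., ::2], out[..., 1::2] = lo + hi, lo - hi
        return out
    lo = istep(ll.swapaxes(-1, -2), lh.swapaxes(-1, -2)).swapaxes(-1, -2)
    hi = istep(hl.swapaxes(-1, -2), hh.swapaxes(-1, -2)).swapaxes(-1, -2)
    return istep(lo, hi)

def wavelet_fuse(a, b):
    # Average the approximations; keep the larger-magnitude detail coefficient.
    ca, cb = haar2d(a), haar2d(b)
    ll = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(ca[1:], cb[1:])]
    return ihaar2d(ll, *details)
```

The max-magnitude rule preserves whichever source carries the stronger edge at each location, which is why fusing a denoised and a sharpened image keeps both smooth regions and edge detail.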

비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정 (Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion)

  • 박진성;박영진;박윤식;홍덕화
    • 제어로봇시스템학회논문지 / Vol.17 No.6 / pp.546-551 / 2011
  • A tilt sensor is required to control the attitude of a biped robot when it walks on uneven terrain. A vision sensor, normally used for recognizing humans or detecting obstacles, can also serve as a tilt-angle sensor by comparing the current image with a reference image. However, a vision sensor alone has technological limitations for controlling a biped robot, such as a low sampling frequency and estimation time delay. To verify these limitations, an experimental setup with an inverted pendulum, which represents the pitch motion of a walking or running robot, is used, and it is shown that a vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome these limitations, a Kalman filter for multi-rate sensor fusion is applied with a low-quality gyro sensor. This resolves the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Inverted pendulum control experiments show that the tilt estimation performance of the fused sensors improves enough to control the attitude of the pendulum.
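The multi-rate fusion idea can be sketched with a two-state Kalman filter. This is a generic textbook formulation, not the paper's filter: the state, noise parameters, and the 10:1 rate ratio are illustrative assumptions.

```python
import numpy as np

def fuse_tilt(gyro_rates, vision_angles, dt, vision_every=10,
              q_angle=1e-4, q_bias=1e-6, r_vision=1e-4):
    # State x = [tilt angle, gyro bias]. The fast gyro drives the prediction
    # at every step; the slower vision measurement corrects the state every
    # `vision_every` steps (multi-rate update), which also makes the gyro
    # bias observable and removes its drift.
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle += (rate - bias) * dt
    Q = np.diag([q_angle, q_bias])
    H = np.array([[1.0, 0.0]])               # vision observes the angle only
    est, vi = [], 0
    for k, rate in enumerate(gyro_rates):
        x = F @ x + np.array([rate * dt, 0.0])     # predict with gyro
        P = F @ P @ F.T + Q
        if k % vision_every == 0 and vi < len(vision_angles):
            z = vision_angles[vi]; vi += 1          # slow vision correction
            S = H @ P @ H.T + r_vision
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

With a stationary robot and a biased gyro, pure integration would drift linearly, while the fused estimate stays pinned near the vision reading between corrections.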

다양한 콘텐츠 제공을 위한 17×17 LED 도트 매트릭스 제작 및 연구 (A Study of a 17×17 LED Dot Matrix for Offering Various Contents)

  • 배예정;권종만;정순호;박구만;차재상
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2016 Fall Conference / pp.197-198 / 2016
  • RGB LED lighting has recently been studied for its many advantages; beyond simple illumination, it also serves as a design element for information delivery and spatial presentation. Using LEDs that can express various colors through RGB mixing, we fabricated an LED dot matrix in which each LED is controlled individually so that various colors and shapes can be displayed, and we use it to output various contents. For shaped content the LEDs are controlled individually, while for other content the desired LEDs are grouped and controlled together. The LED dot matrix fabricated in this paper can deliver a large amount of information, and the variety of contents it provides can expand its commercialization and efficiency.
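The individual-versus-grouped control scheme can be modeled on the software side as a small frame buffer. This is only an illustrative sketch of the addressing logic (the buffer name and helper functions are hypothetical); real hardware would stream this buffer to the LED drivers.

```python
import numpy as np

# 17x17 RGB frame buffer: one (R, G, B) triple per LED.
FRAME = np.zeros((17, 17, 3), dtype=np.uint8)

def set_pixel(row, col, color):
    # Individual control: address a single LED.
    FRAME[row, col] = color

def set_group(mask, color):
    # Grouped control: drive every LED selected by a boolean mask at once.
    FRAME[mask] = color

# Example: light the center LED red and the outer border blue.
set_pixel(8, 8, (255, 0, 0))
border = np.zeros((17, 17), dtype=bool)
border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
set_group(border, (0, 0, 255))
```

Mask-based grouping lets arbitrary LED subsets (shapes, text, borders) be updated in one operation, matching the grouped-content mode described above.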


Transfer Learning-Based Feature Fusion Model for Classification of Maneuver Weapon Systems

  • Jinyong Hwang;You-Rak Choi;Tae-Jin Park;Ji-Hoon Bae
    • Journal of Information Processing Systems / Vol.19 No.5 / pp.673-687 / 2023
  • Convolutional neural network-based deep learning is the technology most commonly used for image identification, but it requires large-scale data for training. Application in specific fields in which data acquisition is limited, such as the military, may therefore be challenging. In particular, the identification of ground weapon systems is a very important mission for which high identification accuracy is required. Accordingly, various studies have been conducted to achieve high performance using small-scale data. Among them, the ensemble method, which achieves excellent performance by averaging the predictions of pre-trained models, is the most representative; however, it requires considerable time and effort to find the optimal combination of ensemble models, its prediction results face a performance ceiling, and it is difficult to obtain an ensemble effect from models with imbalanced classification accuracies. In this paper, we propose a transfer learning-based feature fusion technique for heterogeneous models that extracts and fuses the features of pre-trained heterogeneous models and then fine-tunes the hyperparameters of the fully connected layer to improve classification accuracy. Our experimental results indicate that the limitations of existing ensemble methods can be overcome by improving classification accuracy through feature fusion between heterogeneous models based on transfer learning.
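The core idea, freezing heterogeneous backbones, concatenating their features, and training only a small head, can be sketched without a deep learning framework. Below, two fixed random projections with different nonlinearities stand in for the pre-trained backbones, and a logistic-regression head plays the role of the fine-tuned fully connected layer; the data, dimensions, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two frozen, pre-trained heterogeneous backbones:
# fixed random projections with different nonlinearities.
W1 = rng.normal(size=(16, 32)) / 4.0
W2 = rng.normal(size=(16, 32)) / 4.0

def extract(x):
    # Feature fusion: concatenate the features of both frozen extractors.
    return np.concatenate([np.maximum(x @ W1, 0), np.tanh(x @ W2)], axis=1)

# Synthetic two-class data whose label is linear in the raw input.
X = rng.normal(size=(400, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only the fused classification head (logistic regression).
F = extract(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
    g = p - y                                 # cross-entropy gradient
    w -= 0.05 * F.T @ g / len(y)
    b -= 0.05 * g.mean()

acc = ((F @ w + b > 0) == (y > 0.5)).mean()
```

Only `w` and `b` are updated, which mirrors why this approach suits small datasets: the expensive representations come pre-trained, and only the fused head is fit to the new task.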

천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정 (Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion)

  • 신옥식;박찬국
    • 제어로봇시스템학회논문지 / Vol.18 No.1 / pp.54-61 / 2012
  • In this paper, a pose estimation method for the satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used for verifying a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats above the floor on compressed air. An EKF (Extended Kalman Filter) is used to fuse the MEMS IMU with a vision system consisting of a single camera and infrared LEDs serving as ceiling landmarks. A fusion filter generally uses the positions of feature points in the image as measurements. However, if the bias of the MEMS IMU is not properly estimated by the filter, this approach can cause position errors whenever the camera image is unavailable. Therefore, a fusion method is proposed that uses both the positions of the feature points and the camera velocity determined from their optical flow. Experiments verify that the performance of the proposed method is robust to the IMU bias compared with the method that uses only the positions of the feature points.
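A toy version of the optical-flow velocity measurement described above can be written in a few lines. It assumes a pinhole camera undergoing pure translation parallel to the ceiling, with all landmarks at a known distance; the function name and this simplified geometry are illustrative assumptions, not the paper's full EKF measurement model.

```python
import numpy as np

def camera_velocity_from_flow(pts_prev, pts_curr, dt, focal, height):
    # Pinhole model, pure translation parallel to the ceiling: a static
    # landmark at distance `height` shifts by -focal * v * dt / height pixels
    # in the image, so the average optical flow yields the camera velocity.
    flow = (pts_curr - pts_prev) / dt        # (N, 2) array, pixels per second
    return -height * flow.mean(axis=0) / focal
```

Feeding such a velocity measurement to the filter, alongside the landmark positions, is what keeps the estimate from drifting with the IMU bias during image dropouts.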