• Title/Summary/Keyword: Network Camera


LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성 (LFFCNN: Multi-focus Image Synthesis in Light Field Camera)

  • 김형식;남가빈;김영섭
    • 반도체디스플레이기술학회지 / Vol. 22, No. 3 / pp.149-154 / 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to handle images of various scales effectively. Experimental results demonstrate that the proposed model not only fuses multiple images with different focal planes into a single all-in-focus image but also offers more efficient and robust focus fusion than existing methods.

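As a rough illustration of the three-module layout named in the abstract, the sketch below wires feature extraction with spatial pyramid pooling into a fusion and reconstruction stage. It is a minimal PyTorch-style assumption, not the published LFFCNN: the channel widths, pooling scales, and two-input interface are invented for illustration.

```python
# Illustrative sketch of a three-stage multi-focus fusion CNN (assumed layout,
# not the published LFFCNN): feature extraction with spatial pyramid pooling,
# feature fusion, and image reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPBlock(nn.Module):
    """Pools features at several scales and concatenates them channel-wise."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.project = nn.Conv2d(channels * (len(scales) + 1), channels, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        pooled = [x]
        for s in self.scales:
            p = F.adaptive_avg_pool2d(x, s)                    # pool to s x s
            pooled.append(F.interpolate(p, size=(h, w), mode="bilinear",
                                        align_corners=False))  # back to full size
        return self.project(torch.cat(pooled, dim=1))

class FusionNet(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.extract = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
                                     SPPBlock(feat))
        self.fuse = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU())
        self.reconstruct = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, near_focus, far_focus):
        f1, f2 = self.extract(near_focus), self.extract(far_focus)
        return self.reconstruct(self.fuse(torch.cat([f1, f2], dim=1)))

# Usage: fuse two grayscale focal slices into one all-in-focus estimate.
model = FusionNet()
a, b = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
print(model(a, b).shape)   # torch.Size([1, 1, 128, 128])
```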

도시철도 환경에서 지능형 감시 시스템 구축 사례 (A Case Study on Intelligent Surveillance System for Urban Transit Environment)

  • 장일식;안태기;조병목;박구만
    • 한국철도학회:학술대회논문집 / 한국철도학회 2011년도 춘계학술대회 논문집 / pp.1722-1728 / 2011
  • Security in urban transit systems has been widely regarded as a matter of common concern since the fire accident at Daegu subway station. A safe urban transit system is in high demand because of the vast number of daily passengers, and building one is among the most challenging projects. We introduce a test model of an integrated security system for urban transit and built it at a subway station to demonstrate its performance. The system consists of cameras, a sensor network, and central monitoring software. The smart camera functionality is described in more detail: the proposed smart camera includes a moving-object recognition module, video analytics, a video encoder, and a server module that transmits video and audio information.

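The abstract names a moving-object recognition module inside the smart camera. The loop below is a generic background-subtraction sketch with OpenCV, offered only as an assumption of what such a module might look like; the RTSP address is a placeholder and the thresholds are arbitrary.

```python
# Generic moving-object detection loop for a network camera feed (illustrative;
# not the paper's smart camera module). The RTSP URL is a placeholder.
import cv2

cap = cv2.VideoCapture("rtsp://camera.example/stream")   # placeholder address
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                        # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                      # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("surveillance", frame)
    if cv2.waitKey(1) == 27:                              # Esc to quit
        break
cap.release()
```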

Intelligent Robot Control using Personal Digital Assistants

  • Jaeyong Seo;Kim, Seongjoo;Kim, Yongtaek;Hongtae Jeon
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2003년도 ISIS 2003 / pp.304-306 / 2003
  • In this paper, we propose an intelligent control technique for a mobile robot using a personal digital assistant (PDA). With the proposed technique, the mobile robot can follow a person at a fixed distance under remote control from the PDA. The mobile robot measures the distance to the person it must follow using multiple ultrasonic sensors and a PC camera, and then infers its own direction and velocity to keep the given distance. First, the mobile robot acquires information about its surroundings with the ultrasonic sensors and the PC camera; second, it transmits the data to the PDA over a wireless LAN; finally, the PDA recognizes the situation using fuzzy logic and a neural network and sends commands back to the mobile robot.

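The paper infers direction and velocity with fuzzy logic and a neural network; as a simpler stand-in, the sketch below shows the distance-keeping idea with plain proportional control. The target distance, gains, and velocity limits are assumptions for illustration only.

```python
# Minimal sketch of the distance-keeping idea: turn a measured distance and
# bearing to the person into velocity commands. The gains and target distance
# are illustrative assumptions; the paper uses fuzzy/neural inference instead.
TARGET_DISTANCE_M = 1.0   # desired following distance (assumed)
K_LINEAR = 0.8            # proportional gain on distance error (assumed)
K_ANGULAR = 1.5           # proportional gain on bearing error (assumed)

def follow_command(distance_m: float, bearing_rad: float) -> tuple[float, float]:
    """Return (linear velocity m/s, angular velocity rad/s) to keep the
    target distance while facing the person."""
    linear = K_LINEAR * (distance_m - TARGET_DISTANCE_M)
    angular = K_ANGULAR * bearing_rad
    # Clamp to the robot's assumed limits.
    linear = max(-0.5, min(0.5, linear))
    angular = max(-1.0, min(1.0, angular))
    return linear, angular

print(follow_command(1.6, 0.2))   # move forward and turn slightly toward the person
```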

비젼에 의한 감성인식 (Emotion Recognition by Vision System)

  • 이상윤;오재흥;주영훈;심귀보
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2001년도 추계학술대회 학술발표 논문집 / pp.203-207 / 2001
  • In this paper, we propose a neural-network-based emotion recognition method for intelligently recognizing human emotion from CCD color images. We first acquire a color image from a CCD camera and then recognize the facial expression represented by the structural correlation of facial feature points (eyebrows, eyes, nose, and mouth); extracting, separating, and recognizing the correct data in the image is the central technology. In the proposed method, human emotion is divided into four categories: surprise, anger, happiness, and sadness. The skin-color region is separated using color differences in the color space, with a method that separates the background and the face robustly against changes such as external illumination. For this, we propose an algorithm that extracts the four feature points from the face image acquired by the color CCD camera and derives a normalized face image and feature vectors from them. We then apply the back-propagation algorithm to the secondary feature vectors. Finally, we show the practical applicability of the proposed method.

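To make the back-propagation step concrete, the sketch below trains a small multilayer perceptron on feature-point vectors and predicts one of the four emotions. The feature dimensionality and the training data are synthetic placeholders, not the paper's data.

```python
# Sketch: classify four emotions (surprise, anger, happiness, sadness) from
# normalized facial feature-point vectors with a back-propagation network.
# Feature dimensionality and training data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["surprise", "anger", "happiness", "sadness"]

# Placeholder features: e.g. normalized distances/angles between eyebrow, eye,
# nose, and mouth landmarks (8 values per face, invented for illustration).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))
y_train = rng.integers(0, 4, size=200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)                     # trained via back-propagation

sample = rng.normal(size=(1, 8))
print(EMOTIONS[int(clf.predict(sample)[0])])  # predicted emotion label
```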

Recognition of Car Manufacturers using Faster R-CNN and Perspective Transformation

  • Ansari, Israfil;Lee, Yeunghak;Jeong, Yunju;Shim, Jaechang
    • 한국멀티미디어학회논문지 / Vol. 21, No. 8 / pp.888-896 / 2018
  • In this paper, we report the detection and recognition of vehicle logos in images captured by street CCTV. The image data include both front and rear views of the vehicles. The proposed method is a two-step process that combines image preprocessing with a faster region-based convolutional neural network (Faster R-CNN) for logo recognition. Without preprocessing, Faster R-CNN accuracy is high only when the image quality is good. The proposed system focuses on street CCTV cameras, whose image quality differs from that of a front-facing camera. Using perspective transformation, the top-view images are transformed into front-view images. With this system, detection and recognition accuracy are much higher than with the existing algorithm. In the experiments, the detection and recognition rate improved by 2% on daytime data, and the detection rate improved by 14% on nighttime data.
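
The perspective-transformation step can be illustrated with OpenCV's homography utilities, as in the sketch below; the corner coordinates, frame size, and output size are assumed values, and the logo detector itself (Faster R-CNN) is not shown.

```python
# Sketch of the perspective-transformation step: warp a top-view CCTV crop of a
# vehicle front/rear into an approximately frontal view before logo detection.
import cv2
import numpy as np

# Placeholder CCTV frame (a real system would read this from the camera stream).
frame = np.full((720, 1280, 3), 127, dtype=np.uint8)

# Corners of the vehicle front in the CCTV image (top-left, top-right,
# bottom-right, bottom-left) -- assumed values for one frame.
src = np.float32([[420, 180], [780, 200], [800, 520], [400, 500]])
# Target rectangle for the rectified front view.
dst = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])

M = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography
front_view = cv2.warpPerspective(frame, M, (400, 300))
print(front_view.shape)                          # (300, 400, 3), fed to the logo detector
```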

Pipelined Implementation of JPEG Baseline Encoder IP

  • Kim, Kyung-Hyun;Sonh, Seung-Il
    • Journal of information and communication convergence engineering / Vol. 6, No. 1 / pp.29-33 / 2008
  • This paper presents the proposal and hardware design of a JPEG baseline encoder. The JPEG encoder system consists of a line buffer, 2-D DCT, quantization, entropy encoding, and a packer. A fully pipelined scheme is adopted to speed up image compression. The proposed architecture was described in VHDL, synthesized with Xilinx ISE 7.1i, and simulated with ModelSim 6.1i. The results show that the performance of the designed JPEG baseline encoder exceeds that demanded by real-time applications for a 1024×768 image size. The designed JPEG encoder IP can easily be integrated into various application systems such as scanners, PC cameras, color fax machines, and network cameras.
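
The per-block arithmetic performed by the DCT and quantization stages of such a pipeline can be sketched in a few lines; the sketch below uses the standard JPEG luminance table and omits the entropy-coding and packing stages, so it is a software illustration of the data path rather than the paper's VHDL design.

```python
# One 8x8 luminance block through the first pipeline stages: level shift,
# 2-D DCT, quantization. Entropy coding and packing are omitted.
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (quality 50).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def encode_block(block_8x8: np.ndarray) -> np.ndarray:
    """Level-shift, 2-D DCT, and quantize one 8x8 block of pixel values."""
    shifted = block_8x8.astype(np.float64) - 128.0        # level shift
    coeffs = dctn(shifted, norm="ortho")                  # 2-D DCT-II
    return np.round(coeffs / Q_LUMA).astype(np.int32)     # quantization

block = np.random.default_rng(0).integers(0, 256, (8, 8))
print(encode_block(block))   # mostly zeros away from the DC corner
```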

Delta-bar-Delta 알고리즘을 이용한 ODVS의 좌표 교정 (Coordinate Calibration of the ODVS using Delta-bar-Delta Neural Network)

  • 김도현;박용민;차의영
    • 한국정보통신학회논문지 / Vol. 9, No. 3 / pp.669-675 / 2005
  • This paper presents a three-dimensional paraboloid coordinate transformation method and a coordinate calibration method based on the delta-bar-delta algorithm for converting coordinates in the omnidirectional, spherically distorted images acquired from a catadioptric camera into real-world distance coordinates. Experimental results show that the proposed coordinate transformation is accurate and reliable for a transformation that is sensitive to environmental parameters.
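
The delta-bar-delta rule adapts a separate learning rate for every weight: the rate grows additively while the current gradient agrees in sign with an exponentially averaged past gradient, and shrinks multiplicatively when the signs disagree. The sketch below shows the textbook form of this update on a toy problem; the kappa, phi, and theta values are assumptions, not the paper's settings.

```python
# Textbook delta-bar-delta update with per-weight learning rates.
import numpy as np

KAPPA, PHI, THETA = 0.01, 0.5, 0.7   # additive growth, multiplicative decay, averaging

def delta_bar_delta_step(w, grad, lr, grad_bar):
    """One update of weights w, per-weight learning rates lr, and the
    exponentially averaged gradient grad_bar."""
    same_sign = grad * grad_bar > 0
    opposite = grad * grad_bar < 0
    lr = np.where(same_sign, lr + KAPPA, lr)      # grow when signs agree
    lr = np.where(opposite, lr * PHI, lr)         # shrink when they disagree
    w = w - lr * grad                             # gradient-descent step
    grad_bar = (1 - THETA) * grad + THETA * grad_bar
    return w, lr, grad_bar

# Toy usage on the loss 0.5*||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
lr = np.full_like(w, 0.05)
grad_bar = np.zeros_like(w)
for _ in range(50):
    w, lr, grad_bar = delta_bar_delta_step(w, w, lr, grad_bar)
print(w)   # approaches [0, 0]
```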

진화전략 알고리즘을 이용한 AGV 조향제어에 관한 연구 (A Study for AGV Steering Control using Evolution Strategy)

  • 이진우;손주한;최성욱;이영진;이권순
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.149-149 / 2000
  • We carried out AGV driving tests with a color CCD camera mounted on the vehicle. This paper is divided into two parts. One is the image processing part, which measures the state of the guideline and the AGV. The other part obtains the reference steering angle from the image processing results. First, the two-dimensional image information from the vision sensor is interpreted as three-dimensional information using the angle and position of the CCD camera. Through this process, the AGV knows its driving condition. Using that information, the AGV then calculates a reference steering angle that changes with its speed: at low speed it focuses on the left/right error of the guideline, and as the speed increases it focuses on the slope of the guideline. Finally, we model the above as a PID controller and regulate its coefficients according to the speed of the AGV.

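A minimal sketch of the speed-dependent steering law described above: the lateral offset and the guideline slope are blended according to speed, and the blended error drives a PID controller. All gains and the blending rule are assumptions for illustration, not the paper's tuned values.

```python
# Speed-weighted guideline error fed into a PID steering controller (illustrative).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def steering_error(lateral_offset_m, line_slope_rad, speed_mps, v_max=2.0):
    """Weight the lateral offset more at low speed and the slope more at high speed."""
    w = min(max(speed_mps / v_max, 0.0), 1.0)
    return (1.0 - w) * lateral_offset_m + w * line_slope_rad

pid = PID(kp=1.2, ki=0.05, kd=0.3)              # illustrative gains
error = steering_error(lateral_offset_m=0.15, line_slope_rad=0.05, speed_mps=1.5)
print(pid.step(error, dt=0.05))                  # reference steering command
```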

해외 직접투자에 의한 기술혁신능력의 강화: 삼성항공의 카메라사업 (Strengthening Technological Innovation Capability through Overseas Direct Investment: The Camera Business of Samsung Aerospace)

  • 이공래;심상완
    • 기술혁신연구 / Vol. 8, No. 2 / pp.145-170 / 2000
  • This study explores the process of building innovative capability through overseas direct investment (ODI), taking the camera business of Samsung Aerospace Industry (SAI) as a case. SAI, with less than 20 years of history, has pursued aggressive ODI and developed its own camera models. It acquired Rollei in Germany and Union in Japan in 1995 to obtain advanced technology. Several factors led to the success of SAI's capability building. First, SAI effectively absorbed technological knowledge by exchanging technical personnel with foreign partners. Second, it chose the right partners, with the complementary knowledge required for advancing its technological capability. Third, it strengthened its competence to satisfy the technical standards of the partner companies in OEM trade relations. Fourth, it successfully formed a global marketing network in which subsidiary companies play a central role in each region. Finally, SAI successfully fostered mutual respect between partners so that its partners could be confident in their joint ventures.

Live Electrooptic Imaging Camera for Real-Time Visual Accesses to Electric Waves in GHz Range

  • Tsuchiya, Masahiro;Shiozawa, Takahiro
    • Journal of electromagnetic engineering and science / Vol. 11, No. 4 / pp.290-297 / 2011
  • Recent progress in the live electrooptic imaging (LEI) technique is reviewed with emphasis on its capability for real-time visual access to traveling electric waves in the GHz range. Together with the principles, configurations, and procedures for visual observation experiments with an LEI camera system, the following results are described as examples indicating the wide application range of the technique: Ku-band waves on arrayed planar antennas, waves on a Gb/s-class digital circuit, W-band waves traveling both in slab-waveguide modes and aerially, a backward-traveling wave along a composite right/left-handed transmission line, and waves in a monolithic microwave integrated circuit module case.