• Title/Summary/Keyword: Network Camera

Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee;Koo, Jungmo;Kim, Jinki;Myung, Hyun
    • Advances in robotics research
    • /
    • v.2 no.2
    • /
    • pp.129-140
    • /
    • 2018
  • With the increasing demand for human pose estimation in applications such as human-computer interaction and human activity recognition, numerous approaches have been proposed to detect the 2D poses of people in images more efficiently. Despite many years of research, estimating human poses from images still often fails to produce satisfactory results. In this study, we propose a robust 2D human upper-body pose estimation method using an RGB camera sensor. Our pose estimation method is efficient and cost-effective, since an RGB camera sensor is far less expensive than the high-priced sensors more commonly used. To estimate the upper-body joint positions, semantic segmentation with a fully convolutional network is exploited. From the acquired RGB images, joint heatmaps are used to estimate the coordinates of each joint. The network architecture is designed to learn and detect joint locations through a sequential prediction process. The proposed method was tested and validated for efficient estimation of the human upper-body pose. The results reveal the potential of a simple RGB camera sensor for human pose estimation applications.
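
A minimal sketch of the heatmap-to-coordinate step described above: the peak of each joint heatmap is taken as that joint's pixel location. The array shapes, joint count, and rescaling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def joints_from_heatmaps(heatmaps, image_size):
    """Convert per-joint heatmaps (J, H, W) into (x, y) pixel coordinates.

    heatmaps   : array of shape (num_joints, H, W), one channel per joint
    image_size : (width, height) of the original RGB image
    """
    num_joints, hm_h, hm_w = heatmaps.shape
    img_w, img_h = image_size
    coords = []
    for j in range(num_joints):
        # The peak of each heatmap is taken as the estimated joint location.
        idx = np.argmax(heatmaps[j])
        y, x = np.unravel_index(idx, (hm_h, hm_w))
        # Rescale from heatmap resolution back to image resolution.
        coords.append((x * img_w / hm_w, y * img_h / hm_h))
    return coords

# Random data standing in for network output (7 upper-body joints).
dummy = np.random.rand(7, 64, 64)
print(joints_from_heatmaps(dummy, image_size=(640, 480)))
```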

LATERAL CONTROL OF AUTONOMOUS VEHICLE USING LEVENBERG-MARQUARDT NEURAL NETWORK ALGORITHM

  • Kim, Y.-B.;Lee, K.-B.;Kim, Y.-J.;Ahn, O.-S.
    • International Journal of Automotive Technology
    • /
    • v.3 no.2
    • /
    • pp.71-78
    • /
    • 2002
  • A new control method for a vision-based autonomous vehicle is proposed, which determines the navigation direction by analyzing lane information from a camera and steers the vehicle accordingly. In this paper, characteristic feature data points are extracted from lane images using a lane recognition algorithm, and the vehicle is then controlled using a new Levenberg-Marquardt neural network algorithm. To verify the usefulness of the algorithm, another algorithm, which utilizes the geometric relation between the camera and the vehicle, is introduced for comparison. The second approach transforms image coordinates into vehicle coordinates and then determines the steering from the Ackermann angle. The steering scheme using the Ackermann angle depends heavily on correct geometric data for the vehicle and the camera, whereas the proposed neural network algorithm needs no such geometric relations and instead reflects the driving style of a human driver. The proposed method is superior to other referenced neural network algorithms, such as the conjugate gradient and gradient descent methods, in autonomous lateral control.
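
The Levenberg-Marquardt update blends gradient descent with the Gauss-Newton method through a damping term, delta = (J^T J + mu*I)^-1 J^T r. A minimal sketch of one such step on a toy curve-fitting problem follows; the exponential model and the fixed damping value are illustrative assumptions, not the paper's lateral-control network.

```python
import numpy as np

def levenberg_marquardt_step(residual_fn, jacobian_fn, params, mu):
    """One Levenberg-Marquardt update: delta = (J^T J + mu*I)^-1 J^T r."""
    r = residual_fn(params)          # residual vector
    J = jacobian_fn(params)          # Jacobian of residuals w.r.t. parameters
    A = J.T @ J + mu * np.eye(len(params))
    delta = np.linalg.solve(A, J.T @ r)
    return params - delta

# Toy example: fit y = a * exp(b * x) to noisy data.
x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * x) + 0.01 * np.random.randn(20)

residual = lambda p: p[0] * np.exp(p[1] * x) - y
def jacobian(p):
    return np.stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)], axis=1)

p = np.array([1.0, 1.0])
for _ in range(20):
    p = levenberg_marquardt_step(residual, jacobian, p, mu=1e-2)
print(p)   # should approach roughly [2.0, 1.5]
```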

Stereo Calibration Using Support Vector Machine

  • Kim, Se-Hoon;Kim, Sung-Jin;Won, Sang-Chul
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.250-255
    • /
    • 2003
  • The position of a 3-dimensional (3D) point can be measured using a calibrated stereo camera, and more accurate measurement requires more accurate camera calibration. Many calibration methods exist. Simple linear methods are usually inaccurate because of nonlinear lens distortion. Nonlinear methods are more accurate than linear ones, but they increase the computational cost and require a good initial guess. Multi-step methods need some parameters of the camera to be known in advance. In recent years, explicit model-based camera calibration has advanced with the development of more precise camera models that correct for lens distortion, but such explicit model-based calibration still has disadvantages, which has motivated implicit calibration methods. One popular implicit calibration method is to use a neural network. In this paper, we propose an implicit stereo camera calibration method for 3D reconstruction using a support vector machine (SVM). The SVM learns the relationship between 3D coordinates and image coordinates and remains robust in the presence of noise and lens distortion. Simulation results are shown in Section 4.
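
A minimal sketch of the implicit-calibration idea under simplifying assumptions: synthetic stereo pinhole data and scikit-learn's SVR stand in for real calibration measurements and the authors' exact SVM formulation.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic calibration data standing in for real measurements: 3D points projected
# by a simple stereo pinhole model (focal length f, baseline b) with mild noise.
rng = np.random.default_rng(0)
f, b, cx, cy = 500.0, 0.12, 320.0, 240.0
world = np.column_stack([rng.uniform(-0.5, 0.5, 400),
                         rng.uniform(-0.5, 0.5, 400),
                         rng.uniform(1.0, 3.0, 400)])      # X, Y, Z in metres
uL = f * world[:, 0] / world[:, 2] + cx
uR = f * (world[:, 0] - b) / world[:, 2] + cx
v  = f * world[:, 1] / world[:, 2] + cy
image = np.column_stack([uL, v, uR, v]) + rng.normal(0, 0.5, (400, 4))

# Implicit calibration: one RBF-kernel SVR per world coordinate learns the
# image-to-3D mapping directly, with no explicit camera or distortion model.
models = [SVR(kernel="rbf", C=100.0, epsilon=1e-3).fit(image, world[:, k]) for k in range(3)]

def reconstruct(stereo_pixels):
    """Predict (X, Y, Z) from a (uL, vL, uR, vR) stereo observation."""
    uv = np.atleast_2d(stereo_pixels)
    return np.concatenate([m.predict(uv) for m in models])

print(reconstruct(image[0]), "vs true", world[0])
```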

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop;Sim, Chang Hun;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.614-621
    • /
    • 2018
  • This paper proposes a method of restoring corrupted depth images captured by a depth camera through unsupervised learning with a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN) with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator are trained with the Wasserstein distance as the loss function in a minimax game. The DCGAN then restores the corrupted regions of captured facial depth images by performing an additional learning procedure using the trained generator and a new loss function.
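
A minimal sketch of the Wasserstein minimax objective mentioned above, using tiny fully connected networks and weight clipping in place of the paper's DCGAN architecture (PyTorch; all sizes are illustrative assumptions).

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and critic (the real model is a DCGAN on depth images).
latent_dim, image_dim = 64, 32 * 32
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim))
D = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, 1))  # critic: no sigmoid

opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

real = torch.randn(16, image_dim)            # stand-in for a batch of real depth images
z = torch.randn(16, latent_dim)

# Critic step: maximise E[D(real)] - E[D(fake)] (i.e. minimise its negative).
d_loss = -(D(real).mean() - D(G(z).detach()).mean())
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
for p in D.parameters():                     # weight clipping keeps the critic roughly 1-Lipschitz
    p.data.clamp_(-0.01, 0.01)

# Generator step: minimise -E[D(fake)].
g_loss = -D(G(z)).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```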

Performance Analysis of Optical Camera Communication with Applied Convolutional Neural Network (합성곱 신경망을 적용한 Optical Camera Communication 시스템 성능 분석)

  • Jong-In Kim;Hyun-Sun Park;Jung-Hyun Kim
    • Smart Media Journal
    • /
    • v.12 no.3
    • /
    • pp.49-59
    • /
    • 2023
  • Optical camera communication (OCC), regarded as a next-generation wireless communication technology, is currently under extensive research. The performance of OCC is affected by the communication environment, and various strategies are being studied to improve it. Among them, the most prominent approach is to apply convolutional neural networks (CNNs) to the OCC receiver using deep learning. In most studies, however, the CNN is used simply to detect the transmitter. In this paper, we experiment with applying convolutional neural networks not only to transmitter detection but also to the receiver's demodulation system. We hypothesize that, since the data images of an OCC system are relatively simple to classify compared to other image datasets, most CNN models will achieve high accuracy. To test this hypothesis, we designed and implemented an OCC system to collect data and applied the data to 12 different CNN models. The experimental results show that not only high-capacity CNN models with many parameters but also lightweight CNN models achieved an accuracy of over 99%. This confirms the feasibility of running the OCC system in real time on mobile devices such as smartphones.
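
A minimal sketch of demodulation as image classification: a small CNN maps cropped receiver frames to symbol classes. The input size, number of symbol classes, and architecture are assumptions for illustration, not any of the 12 models evaluated in the paper.

```python
import torch
import torch.nn as nn

# Small CNN classifying cropped OCC transmitter images into symbol classes.
num_classes = 4                                  # assumed number of OCC symbols
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, num_classes),          # 32x32 input halved twice -> 8x8
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch standing in for collected OCC frames.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(float(loss))
```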

Visual Bean Inspection Using a Neural Network

  • Kim, Taeho;Do, Yongtae
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.644-647
    • /
    • 2003
  • This paper describes a neural network based machine vision system designed for inspecting yellow beans in real time. The system consists of a camera, lights, a belt conveyor, air ejectors, and a computer. Beans are conveyed in four lines on the belt, and their images are taken by a monochrome line-scan camera as they fall from the belt. Back-lighting makes the beans easy to separate from the image background. After the image is analyzed, a decision is made by a multilayer artificial neural network (ANN) trained with the error back-propagation (EBP) algorithm. The global mean, variance, and local change of gray levels of a bean are used as the input nodes of the network. In our experiment, the designed system could process about 520 kg/hour.
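
A minimal sketch of the decision stage under the stated features: a multilayer perceptron trained by back-propagation on the global mean, variance, and local gray-level change. The synthetic feature values and the scikit-learn implementation are stand-ins, not the paper's bean data or network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic feature vectors: [global mean, variance, local gray-level change].
rng = np.random.default_rng(1)
good = np.column_stack([rng.normal(180, 5, 200), rng.normal(40, 5, 200), rng.normal(3, 1, 200)])
bad  = np.column_stack([rng.normal(150, 15, 200), rng.normal(90, 20, 200), rng.normal(12, 4, 200)])
X = np.vstack([good, bad])
y = np.array([1] * 200 + [0] * 200)          # 1 = accept, 0 = reject (air ejector fires)

# MLPClassifier is trained by back-propagation, mirroring the EBP-trained ANN.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([[178.0, 42.0, 2.5], [140.0, 110.0, 15.0]]))   # expected: [1 0]
```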

An Indoor Localization of Mobile Robot through Sensor Data Fusion (센서융합을 이용한 모바일로봇 실내 위치인식 기법)

  • Kim, Yoon-Gu;Lee, Ki-Dong
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.312-319
    • /
    • 2009
  • This paper proposes a low-complexity indoor localization method for a mobile robot in a dynamic environment, which fuses landmark image information from an ordinary camera with distance information from the sensor nodes of an indoor sensor network. The sensor network provides an effective means for the mobile robot to adapt to environmental changes and guides it across the geographical network area. To enhance localization performance, we used an ordinary CCD camera together with artificial landmarks devised for self-localization. Experimental results show that real-time localization of the mobile robot can be achieved with robustness and accuracy using the proposed method.
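
The abstract does not spell out the fusion rule, so the following is only an illustrative sketch: a least-squares position computed from sensor-node ranges is combined with a camera landmark fix by inverse-variance weighting. All positions, ranges, and variances are assumed values.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares 2D position from sensor-node anchor positions and range readings."""
    x0, y0 = anchors[0]
    d0 = distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        # Subtracting the first range equation linearizes the problem.
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]   # assumed sensor-node positions (m)
ranges = [2.9, 3.7, 3.6, 4.4]                                # noisy distances to the nodes
p_rf = trilaterate(anchors, ranges)
p_cam = np.array([2.1, 1.9])                                 # pose inferred from an artificial landmark
var_rf, var_cam = 0.25, 0.04                                 # assumed measurement variances
fused = (p_rf / var_rf + p_cam / var_cam) / (1 / var_rf + 1 / var_cam)
print(p_rf, p_cam, fused)
```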

Visual servoing of robot manipulator by fuzzy membership function based neural network (퍼지 신경망에 의한 로보트의 시각구동)

  • 김태원;서일홍;조영조
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 1992.10a
    • /
    • pp.874-879
    • /
    • 1992
  • It is shown that there exists a nonlinear mapping which transforms image features and their changes into the desired camera motion without measuring the relative distance between the camera and the part, and that this nonlinear mapping can eliminate several difficulties encountered when using the inverse of the feature Jacobian, as in conventional feature-based visual feedback control. Instead of analytically deriving the closed form of such a nonlinear mapping, a fuzzy membership function (FMF) based neural network is proposed to approximate it; the structure of the proposed network is similar to that of a radial basis function neural network, which is known to be very useful for function approximation. The proposed FMF network is trained to track moving parts throughout the workspace along the line of sight. For effective implementation of the proposed FMF networks, image feature selection is investigated and the required fuzzy membership functions are designed. Finally, several numerical examples are given to show the validity of the proposed visual servoing method.
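
A minimal sketch of the function-approximation idea: Gaussian (fuzzy-membership-like) basis functions over image-feature errors, with linear output weights mapping them to a camera velocity command. The centers, widths, feature dimension, and training data below are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(25, 2))       # basis-function centers in feature-error space
width = 0.5

def basis(x):
    """Gaussian membership values of each sample with respect to each center."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

# Training pairs: feature errors -> desired camera motion (toy linear law plus noise).
X = rng.uniform(-1, 1, size=(300, 2))
Y = X @ np.array([[0.8, -0.2], [0.3, 0.9]]) + 0.01 * rng.normal(size=(300, 2))

W, *_ = np.linalg.lstsq(basis(X), Y, rcond=None)  # fit the output weights by least squares

def camera_motion(feature_error):
    """Map an image-feature error to a camera velocity command."""
    return basis(np.atleast_2d(feature_error)) @ W

print(camera_motion([0.2, -0.1]))
```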

3-D Position Analysis of an Object using a Monocular USB port Camera through JAVA (한 대의 USB port 카메라와 자바를 이용한 3차원 정보 추출)

  • 지창호;이동엽;이만형
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2001.04a
    • /
    • pp.606-609
    • /
    • 2001
  • The purpose of this paper is to obtain 3-dimensional information using a single monocular camera. The system obtains the height of an object by trigonometry between a reference point in the environment and the object. The system can be built and set up regardless of operating system, and an inexpensive USB camera can be used anywhere without a frame-capture board. Through a Java applet and JMF, the system can also be accessed over the Internet from anywhere. The camera is assumed to be fixed. We have also developed a real-time JPEG/RTP network camera system using UDP/IP over Ethernet.
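
A minimal sketch of height estimation by trigonometry with a single fixed camera, under assumed geometry (horizontal optical axis, object standing on the ground plane, known camera height and focal length); the paper's exact reference-point construction may differ.

```python
def object_height(v_top, v_base, focal_px, cy, camera_height):
    """Estimate an object's height from a single fixed camera by similar triangles.

    Assumptions (illustrative only): the camera looks horizontally, the object
    stands on the ground plane, image rows grow downward, and the pixel rows of
    the object's base (v_base) and top (v_top) are known.
    """
    # The base row and the camera height fix the horizontal distance to the object:
    # the base appears (v_base - cy) pixels below the principal point.
    distance = camera_height * focal_px / (v_base - cy)
    # The top's angular offset from the optical axis gives its height above the camera.
    top_above_camera = distance * (cy - v_top) / focal_px
    return camera_height + top_above_camera

# Toy numbers: camera 1.2 m high, focal length 800 px, principal point at row 240.
print(object_height(v_top=100.0, v_base=400.0, focal_px=800.0, cy=240.0, camera_height=1.2))
```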

Design and Implementation of A Dual CPU Based Embedded Web Camera Streaming Server (Dual CPU 기반 임베디드 웹 카메라 스트리밍 서버의 설계 및 구현)

  • 홍진기;문종려;백승걸;정선태
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.417-420
    • /
    • 2003
  • Most embedded web camera server products currently on the market adopt JPEG for compressing the video data continuously acquired from their cameras. However, JPEG does not compress a continuous video stream efficiently and is not appropriate for the Internet, where transmission bandwidth is not guaranteed. In our previous work, we presented the design and implementation of an embedded web camera streaming server using an MPEG-4 codec, but that server did not perform well because a single CPU had to handle both compression and network transmission. In this paper, we present our efforts to improve the previous result by using dual CPUs, where a DSP is employed for data compression and a StrongARM processor handles the networking. Better performance has been observed, but further work is still needed to optimize it.
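
A rough analogue of the dual-CPU split, sketched in Python only to illustrate the architectural point: one process compresses frames while a second handles transmission, so neither blocks the other. This mirrors the DSP/StrongARM division conceptually and is not the paper's implementation.

```python
import multiprocessing as mp
import time
import zlib

def compressor(frame_queue, packet_queue):
    """Stands in for the DSP: compresses raw frames as they arrive."""
    while True:
        frame = frame_queue.get()
        if frame is None:
            packet_queue.put(None)
            break
        packet_queue.put(zlib.compress(frame))

def transmitter(packet_queue):
    """Stands in for the StrongARM: handles output of compressed packets."""
    while True:
        packet = packet_queue.get()
        if packet is None:
            break
        print(f"sent packet of {len(packet)} bytes")   # a real server would write to a socket

if __name__ == "__main__":
    frames, packets = mp.Queue(), mp.Queue()
    mp.Process(target=compressor, args=(frames, packets)).start()
    mp.Process(target=transmitter, args=(packets,)).start()
    for _ in range(5):                                  # dummy "camera" frames
        frames.put(bytes(640 * 480))
        time.sleep(0.05)
    frames.put(None)
```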
