• Title/Summary/Keyword: Situation Image

Inter-vehicular Distance Estimation Scheme Based on VLC using Image Sensor and LED Tail Lamps in Moving Situation (후미등의 가시광통신을 이용한 이동상황에서의 영상센서 기반 차량 간 거리 추정 기법)

  • Yun, Soo-Keun;Jeon, Hui-Jin;Kim, Byung Wook;Jung, Sung-Yoon
    • The Transactions of The Korean Institute of Electrical Engineers, v.66 no.6, pp.935-941, 2017
  • This paper proposes a method for estimating the distance between vehicles in a moving situation using the image ratio of the distance between the tail lamps of the front vehicle. The actual distance between the front vehicle's tail lamps is transmitted by its LED tail lamps over visible light communication. As the gap between the front and rear vehicles changes, the method computes the pixel width between the front vehicle's tail lamps as projected onto the image; the resulting values are used to derive a distance-mapping function through a non-linear regression technique, and the inter-vehicle distance in the moving situation is then estimated from this function (sketched below).
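
The abstract does not give the exact regression model, so the following is a minimal sketch of the idea, assuming a pinhole-style relation between the real tail-lamp separation W (received via VLC), the measured pixel separation w, and the inter-vehicle distance d, with the mapping coefficients fitted by non-linear least squares. All names and calibration values are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data collected at known distances:
# pixel separation of the tail lamps in the image vs. true distance in metres.
pixel_widths = np.array([210.0, 140.0, 105.0, 84.0, 70.0])   # w [px]
true_distances = np.array([5.0, 7.5, 10.0, 12.5, 15.0])      # d [m]

W_REAL = 1.4  # tail-lamp separation [m], assumed to be broadcast by the LED lamps via VLC

def distance_model(w, a, b):
    """Pinhole-style mapping d = a * W / w + b (a absorbs the focal length in pixels)."""
    return a * W_REAL / w + b

# Non-linear regression: fit the mapping coefficients to the calibration data.
(a_fit, b_fit), _ = curve_fit(distance_model, pixel_widths, true_distances)

# Estimate the inter-vehicle distance from a new measurement while moving.
w_measured = 95.0  # pixel width between the detected tail lamps in the current frame
print(f"estimated distance: {distance_model(w_measured, a_fit, b_fit):.2f} m")
```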

A Design of Emergency Medical Image Communication System EMICS based on DICOM suitable for Emergency medical system

  • Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information, v.20 no.7, pp.91-97, 2015
  • In this paper, we design EMICS, an emergency medical image communication system based on DICOM that adds the concept of an emergency medical image to the existing emergency medical information system. We also propose EMISPS, the emergency medical image object of EMICS. Using EMICS, the emergency medical technician can work together with the emergency doctor, so the patient can receive more stable care than with the existing emergency medical information system, and using EMISPS the technician can obtain exact situational information about the patient.
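
The abstract describes the system at the design level only. As a rough, generic illustration of handling DICOM objects of this kind (not the paper's EMISPS definition), the sketch below reads a DICOM file with the pydicom library and pulls out the patient and study attributes an emergency technician might consult; the file name is hypothetical.

```python
import pydicom

# Hypothetical emergency image file; EMISPS itself is defined in the paper, not here.
ds = pydicom.dcmread("emergency_case_0001.dcm")

# Standard DICOM attributes that carry patient and situation context.
print("Patient: ", ds.get("PatientName"), ds.get("PatientID"))
print("Study:   ", ds.get("StudyDate"), ds.get("StudyDescription"))
print("Modality:", ds.get("Modality"))
```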

The Desired Self-Images and the Fashion Product Unities of Male College Students according to Situation (남자대학생의 의복 착용상황별 추구이미지와 패션상품통일체)

  • Bae Hye-Jin;Chung Ihn-Hee
    • Journal of the Korean Society of Clothing and Textiles, v.30 no.7 s.155, pp.1135-1145, 2006
  • The purpose of this study was to identify the desired self-images of male college students according to situation, and to construct the fashion product unities bought by male college students for different situations. Empirical data were collected with self-administered questionnaires distributed to male students at 4 universities and 2 colleges in the Daegu and Gyeongbuk area during June 2005; 346 responses were analyzed after eliminating incomplete ones. Subjects were asked to respond to 32 desired-image words for each of 4 situations: school, meeting girlfriends, ceremonies, and exercise. Factor analysis of the desired self-image words yielded 5 factors: refined image, sporty image, classic image, natural image and simple image. Based on the desired self-image factors, the male college students were classified into 3 groups: a selective image management group, a passive image management group, and an active image management group. The fashion product unity for the school setting consisted of round shirts, jeans, running shoes, bags and watches. Aloha shirts/knitted shirts/V-neck shirts, cotton pants/jeans/semi-formal pants, formal shoes/running shoes and watches made up the fashion product unity for meeting girlfriends. For ceremonies, the fashion product unity included Y-shirts, formal dress, formal shoes, neckties and watches, and for exercise it included cotton shirts, training suits, running shoes/jogging shoes/basketball shoes, armguards and caps.

Fear and Surprise Facial Recognition Algorithm for Dangerous Situation Recognition

  • Kwak, NaeJoung;Ryu, SungPil;Hwang, IlYoung
    • International Journal of Internet, Broadcasting and Communication, v.7 no.2, pp.51-55, 2015
  • This paper proposes an algorithm for recognizing dangerous situations from facial expressions. Among the various human emotional expressions, the method recognizes surprise and fear as indicators of a dangerous situation. It first extracts the facial region from the input using Haar-like features, then detects the eye and lip regions within the extracted face. Uniform LBP is applied to each region to classify the facial expression and thereby recognize the dangerous situation (a sketch of this pipeline follows below). The method was evaluated on MUCT database images and webcam input; it classifies the expressions well, discriminates dangerous situations reliably, and achieves an average recognition rate of 91.05%.
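
The abstract names the building blocks (Haar-like face detection, eye/lip region detection, uniform LBP features) without implementation detail. The sketch below is a minimal, assumed version of that pipeline using OpenCV's bundled Haar cascades and scikit-image's uniform LBP; the lip region is approximated as the lower third of the face, and the final surprise/fear classifier is omitted.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

# OpenCV's bundled Haar cascades for face and eye detection.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def uniform_lbp_histogram(gray_patch, p=8, r=1):
    """Uniform LBP histogram of a grayscale patch (the per-region feature)."""
    lbp = local_binary_pattern(gray_patch, P=p, R=r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def extract_expression_features(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    features = []
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]
        # Eye regions from the cascade; lip region approximated as the lower third of the face.
        regions = [face[ey:ey + eh, ex:ex + ew]
                   for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face)[:2]]
        regions.append(face[2 * h // 3:, :])
        features.append(np.concatenate([uniform_lbp_histogram(reg) for reg in regions]))
    return features  # one LBP feature vector per detected face, to be fed to a classifier
```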

Facial Expression Algorithm For Risk Situation Recognition (얼굴 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-jong;Song, Teuk-Seob
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2014.10a, pp.197-200, 2014
  • This paper proposes an algorithm for recognizing risk situations from facial expressions. Among the various human emotional expressions, the method recognizes surprise and fear as indicators of a risk situation. It first extracts the facial region from the input, then detects the eye and lip regions within the extracted face. Uniform LBP is applied to each region to discriminate the facial expression and thereby recognize the risk situation. The method was evaluated on Cohn-Kanade database images; it classifies the expressions well and discriminates risk situations reliably.

Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.28 no.2, pp.101-125, 2022
  • Recently, with the development of computing technology and the improvement of cloud environments, deep learning technology has advanced, and attempts to apply deep learning to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomaly, a contextual anomaly, which requires an understanding of the overall situation, is particularly difficult to detect. In general, anomaly detection in image data is performed with a pre-trained model trained on a large dataset. However, because such pre-trained models are built around object classification, they have limited applicability to anomaly detection that must understand the complex situations created by multiple objects. Therefore, in this study, we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning so that the model understands not only individual objects but also the complicated situations they create. Specifically, the proposed methodology transfers the knowledge of a pre-trained model that has learned object classification on ImageNet data to an image captioning model and uses captions that describe the situation represented by each image. Afterwards, the weights obtained by learning situational characteristics from images and captions are extracted and fine-tuned to produce an anomaly detection model (a sketch of this transfer-and-fine-tune flow follows below). To evaluate the proposed methodology, an anomaly detection experiment was performed on 400 situational images; the results show that the proposed methodology outperforms the conventional pre-trained model in terms of anomaly detection accuracy and F1-score.
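
The abstract does not reproduce the exact architecture, so the following is a minimal PyTorch sketch of the described two-step idea, assuming a ResNet-50 ImageNet encoder, a simple GRU captioning decoder, and a binary normal/abnormal head. All layer choices and names are illustrative, and the captioning and fine-tuning training loops are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# Step 1: start from an ImageNet-pre-trained encoder (object-level knowledge).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the classification head

# Step 2 (sketched): reuse the encoder inside an image-captioning model so that it also
# learns situation-level features from (image, caption) pairs.
class CaptioningModel(nn.Module):
    def __init__(self, encoder, vocab_size, hidden=512):
        super().__init__()
        self.encoder = encoder
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.proj_img = nn.Linear(2048, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, captions):
        feat = self.encoder(images).flatten(1)             # (B, 2048) image features
        h0 = torch.tanh(self.proj_img(feat)).unsqueeze(0)  # initial decoder state from the image
        seq, _ = self.decoder(self.embed(captions), h0)
        return self.out(seq)                               # next-token logits for captioning

# Step 3: take the caption-tuned encoder and fine-tune it for anomaly detection.
class AnomalyDetector(nn.Module):
    def __init__(self, caption_tuned_encoder):
        super().__init__()
        self.encoder = caption_tuned_encoder
        self.head = nn.Linear(2048, 2)  # normal vs. abnormal situation

    def forward(self, images):
        return self.head(self.encoder(images).flatten(1))
```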

RECOGNIZING NEEDLE POSITION USING BOOK-ATTACHED CAMERA FOR SUPPORTING "WATOJI"

  • Kamon, Sayaka;Uranishi, Yuki;Sasaki, Hiroshi;Manabe, Yoshitsugu;Chihara, Kunihiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.760-763, 2009
  • "Watoji" is a traditional Japanese bookbinding technique. The aim of this research is to develop a support system with which anyone can make Watoji easily. The system recognizes the working situation by tracking the position of the needle and annotates the book directly using a mixed reality technique; in addition, a method is proposed for recognizing the working situation while the book is being sewn with the needle. A system for recognizing the needle position was built, and experiments were carried out on recognizing the needle in an image and, with Watoji set up on the system, on recognizing the needle's position. The experimental results show that the needle can be recognized in images acquired from the camera; from this, the working situation can be recognized and information suited to that situation can be presented.

Virtual Contamination Lane Image and Video Generation Method for the Performance Evaluation of the Lane Departure Warning System (차선 이탈 경고 시스템의 성능 검증을 위한 가상의 오염 차선 이미지 및 비디오 생성 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Transactions of the Korean Society of Automotive Engineers, v.24 no.6, pp.627-634, 2016
  • In this paper, an augmented video generation method for evaluating the performance of a lane departure warning system is proposed. The input is a video of a road scene with ordinary clean lane markings, and the output video has the same content but with the lane markings synthetically contaminated. Two approaches are used to synthesize the contaminated lane image: example-based image synthesis, which assumes contamination has been deposited on the lane, and background-based image synthesis, which covers the case where the lane has been erased through aging. A new contamination-pattern generation method using a Gaussian function is also proposed to produce contamination of various shapes and sizes (sketched below). The contaminated-lane video is generated by shifting the synthesized image by the lane movement amount obtained empirically. Our experiments show that the similarity between the generated contaminated lane images and real ones is over 90%. Furthermore, the reliability of the generated video is verified through an analysis of the change in lane recognition rate: the recognition rate on the video generated by the proposed method is very similar to that on real contaminated-lane video.
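
The abstract only states that the contamination patterns are generated with a Gaussian function. The sketch below is one plausible reading, assuming an elliptical 2-D Gaussian blob whose size, spread and opacity are randomized and then alpha-blended over a lane image patch; the parameter ranges and the blending rule are assumptions, not the paper's definition.

```python
import numpy as np

def gaussian_contamination_mask(height, width, rng=None):
    """Random elliptical 2-D Gaussian blob in [0, 1], used as a contamination alpha mask."""
    rng = rng or np.random.default_rng()
    cy, cx = rng.uniform(0, height), rng.uniform(0, width)          # blob centre
    sy, sx = rng.uniform(5, height / 4), rng.uniform(5, width / 4)  # blob spread (shape and size)
    strength = rng.uniform(0.4, 1.0)                                # contamination opacity
    y, x = np.mgrid[0:height, 0:width]
    return strength * np.exp(-((y - cy) ** 2 / (2 * sy ** 2) + (x - cx) ** 2 / (2 * sx ** 2)))

def apply_contamination(lane_patch, dirt_color=(90, 80, 70), rng=None):
    """Alpha-blend a dirt-coloured Gaussian blob onto a lane image patch (H x W x 3, uint8)."""
    mask = gaussian_contamination_mask(*lane_patch.shape[:2], rng=rng)[..., None]
    dirt = np.zeros_like(lane_patch)
    dirt[:] = dirt_color
    blended = (1.0 - mask) * lane_patch + mask * dirt
    return blended.astype(np.uint8)
```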

Development of Very Large Image Data Service System with Web Image Processing Technology

  • Lee, Sang-Ik;Shin, Sang-Hee
    • Proceedings of the KSRS Conference, 2003.11a, pp.1200-1202, 2003
  • Satellite and aerial images are a very useful means of monitoring ecological and environmental conditions. Nowadays, more and more officials at the Ministry of Environment in Korea need to access and use these image data over networks such as the internet or an intranet. However, it is very hard to manage and serve such image data over these networks because of their size. In this paper, a very large image data service system for the Ministry of Environment is built in a web environment using image compression and web-based image processing technology. Through this system, officials at the Ministry of Environment can not only access and use all the image data but also apply several image processing operations in the web environment. Moreover, officials can retrieve attribute information from the vector GIS data that are integrated with the system.

A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System (비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구)

  • 이진우;이영진;이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 2000.11a, pp.207-217, 2000
  • In this paper, we describe an image processing algorithm that recognizes the road lane and the relative situation between the AGV and other vehicles. AGV driving tests were carried out with a color CCD camera mounted on top of the vehicle to acquire the digital image signal. The work consists of two parts. The first is an image preprocessing stage that measures the condition of the lane and the vehicle; it extracts line information using an RGB ratio cutting algorithm, edge detection and the Hough transform (sketched below). The second obtains the situation of other vehicles using image processing and a viewport: the 2-D image information from the vision sensor is converted into 3-D information using the angle and position of the CCD camera. Once the vehicle knows the driving conditions, namely the heading angle, the distance error and the real positions of other vehicles, the reference steering angle can be calculated.
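
The abstract names an RGB ratio cut, edge detection and the Hough transform without giving parameters. The sketch below is a minimal assumed version of that preprocessing chain in OpenCV; the RGB-ratio threshold and the Hough settings are chosen for illustration only.

```python
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Rough lane-line extraction: RGB ratio cut -> edge detection -> Hough transform."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32) + 1.0)  # +1 avoids division by zero
    total = b + g + r
    # RGB ratio cut: keep bright, near-achromatic pixels (candidate white lane paint).
    ratio_mask = ((r / total > 0.30) & (g / total > 0.30) & (total > 300)).astype(np.uint8) * 255

    edges = cv2.Canny(ratio_mask, 50, 150)  # edge detection on the masked lane candidates
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # [(x1, y1, x2, y2), ...]
```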
