• Title/Summary/Keyword: image database


Design of EPG Information Player System using DCT based Blind Watermark (DCT기반의 블라인드 워터마크를 이용한 EPG 정보 재생기 설계)

  • Kim, Dae-Jin;Choi, Hong-Sub
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.4
    • /
    • pp.1-10
    • /
    • 2011
  • While broadband networks and multimedia technologies have been developing, the commercial market for digital content has also been spreading widely, most recently with IPTV. Generally, a PC player can display digital content obtained through middleware such as a set-top box, but it can only present information such as the CODEC and bitrate, which is useful only to experts. General users, however, want additional information such as the content's subject and description. Unlike previous PC players, we therefore propose a player system that can retrieve embedded EPG (Electronic Program Guide) information without a database after the content is brought to the PC through a set-top box. We also propose a DCT (Discrete Cosine Transform) based blind watermarking method for inserting the EPG information: the watermark can be extracted without the original image, and a robust watermark is embedded in proportion to the coefficients in the frequency domain. We analyzed and parsed the PSI data from the MPEG-TS, so the desired EPG information could be inserted as a watermark, and we composed a UI that extracts the EPG information from the watermarked content. Finally, we modularized the whole system into a watermark insertion/extraction application and a DirectShow filter based player, designing it so that general developers can work with it more easily and quickly.
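The blind-extraction idea (no original image needed) can be sketched with a generic coefficient-ordering scheme on 8x8 DCT blocks; this is a Zhao/Koch-style illustration, not the paper's exact embedding rule, and the coefficient positions and margin are assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical blind watermark on an 8x8 block: a bit is encoded in the
# ORDER of two mid-frequency DCT coefficients, with an embedding margin m
# controlling robustness. Positions P1/P2 and m are illustrative choices.
P1, P2 = (3, 1), (1, 3)

def embed_bit(block, bit, m=8.0):
    c = dctn(block.astype(float), norm="ortho")
    a, b = c[P1], c[P2]
    mid = (a + b) / 2
    if bit == 1 and a - b < m:        # force c[P1] > c[P2] for a '1'
        c[P1], c[P2] = mid + m / 2, mid - m / 2
    elif bit == 0 and b - a < m:      # force c[P2] > c[P1] for a '0'
        c[P1], c[P2] = mid - m / 2, mid + m / 2
    return idctn(c, norm="ortho")

def extract_bit(block):
    # Blind extraction: only the coefficient order is inspected,
    # so the original image is not required.
    c = dctn(block.astype(float), norm="ortho")
    return 1 if c[P1] > c[P2] else 0
```

A larger margin `m` trades imperceptibility for robustness, mirroring the paper's idea of scaling the watermark with the frequency-domain coefficients.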

An Advanced User-friendly Wireless Smart System for Vehicle Safety Monitoring and Accident Prevention (차량 안전 모니터링 및 사고 예방을 위한 친사용자 환경의 첨단 무선 스마트 시스템)

  • Oh, Se-Bin;Chung, Yeon-Ho;Kim, Jong-Jin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.9
    • /
    • pp.1898-1905
    • /
    • 2012
  • This paper presents an On-board Smart Device (OSD) for moving vehicles, based on a smooth integration of Android-based devices and a Micro-control Unit (MCU). The MCU is used for the acquisition and transmission of various vehicle-borne data. The OSD has three functions: Record, Report and Alarm. Based on these RRA functions, the OSD is a safety- and convenience-oriented smart device that provides alert services such as accident reporting and rescue as well as alarms on the status of the vehicle. In addition, a voice-activated interface is provided for user convenience. Vehicle data can also be uploaded to a remote server for further access and data manipulation. Unlike conventional black boxes, therefore, the developed OSD lends itself to a user-friendly smart device for vehicle safety: it stores monitoring images while driving in addition to collecting vehicle data, reports accidents, and enables subsequent rescue operations. The developed OSD can thus be considered an essential safety smart device equipped with comprehensive wireless data service, image transfer and a voice-activated interface.
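The Record/Report/Alarm split can be sketched as a simple dispatcher over MCU sensor readings; the field names and thresholds below are invented for illustration and are not from the paper:

```python
# Hypothetical RRA dispatcher: every reading is recorded, an alarm is
# raised for abnormal vehicle status, and an accident report (with rescue
# request) is raised on a strong impact. Thresholds are illustrative.
def handle_reading(reading, log, alerts, impact_g=4.0, temp_limit=110.0):
    log.append(reading)                           # Record: always store
    if reading.get("coolant_temp", 0.0) > temp_limit:
        alerts.append(("alarm", "vehicle status: coolant overheating"))
    if reading.get("accel_g", 0.0) > impact_g:    # Report: accident detected
        alerts.append(("report", "accident detected, requesting rescue"))
    return alerts
```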

DETECTION AND MASKING OF CLOUD CONTAMINATION IN HIGH-RESOLUTION SST IMAGERY: A PRACTICAL AND EFFECTIVE METHOD FOR AUTOMATION

  • Hu, Chuanmin;Muller-Karger, Frank;Murch, Brock;Myhre, Douglas;Taylor, Judd;Luerssen, Remy;Moses, Christopher;Zhang, Caiyun
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.1011-1014
    • /
    • 2006
  • Coarse-resolution (9-50 km pixels) sea surface temperature (SST) satellite data are frequently considered adequate for open-ocean research. However, coastal regions, including coral reef, estuarine and mesoscale upwelling regions, require high-resolution (1-km pixel) SST data. AVHRR SST data often suffer from navigation errors of several kilometers and still require manual navigation adjustments. A second serious problem is the faulty and ineffective cloud-detection algorithms used operationally; many of these are based on radiance thresholds and moving-window tests, and increasing their sensitivity leads to masking of valid pixels. These errors lead to significant cold-pixel biases and hamper image compositing, anomaly detection, and time-series analysis. Here, after manually navigating over 40,000 AVHRR images, we implemented a new cloud filter that differs from other published methods. The filter first compares a pixel value with a climatological value built from the historical database, and then tests it against a time-based median value derived for that pixel from all satellite passes collected within ±3 days. If the difference is larger than a predefined threshold, the pixel is flagged as cloud. We tested the method and compared it to in situ SST from several shallow-water buoys in the Florida Keys. Cloud statistics from all satellite sensors (AVHRR, MODIS) show that a climatology filter with a 4°C threshold and a median filter threshold of 2°C filter clouds effectively and accurately without masking good data. The RMS difference between concurrent in situ and satellite SST data for shallow waters (< 10 m bottom depth) is < 1°C, with only a small bias. The filter has been applied to the entire series of high-resolution SST data since 1993 (including MODIS SST data since 2003), and a climatology has been constructed to serve as the baseline for detecting anomaly events.
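The two-stage filter (climatology comparison, then a ±3-day per-pixel median test) can be sketched as follows, using the paper's 4°C and 2°C thresholds; the sign convention for the climatology test (clouds bias SST cold) and the array layout are assumptions:

```python
import numpy as np

def cloud_mask(sst, climatology, recent_passes,
               clim_thresh=4.0, median_thresh=2.0):
    """Flag a pixel as cloud if it is colder than climatology by more than
    clim_thresh, or departs from the per-pixel median of passes within
    +/-3 days by more than median_thresh (thresholds in deg C).

    sst, climatology : 2-D arrays (one scene, one climatological field)
    recent_passes    : 3-D array (n_passes, rows, cols), may contain NaN
    """
    median = np.nanmedian(recent_passes, axis=0)     # +/-3-day median field
    clim_flag = (climatology - sst) > clim_thresh    # clouds appear cold
    med_flag = np.abs(sst - median) > median_thresh
    return clim_flag | med_flag
```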


A Pixel-based Assessment of Urban Quality of Life (도시의 삶의 질을 평가하기 위한 화소기반 기법)

  • Jun, Byong-Woon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.9 no.3
    • /
    • pp.146-155
    • /
    • 2006
  • A handful of previous studies have attempted to integrate socioeconomic data and remotely sensed data for urban quality of life assessment, with the zone as their spatial unit. However, such a zone-based approach not only rests on the unrealistic assumption that all attributes of a zone are uniformly distributed throughout the zone, but also leads to serious methodological difficulties such as the modifiable areal unit problem and incompatibility with environmental data. An alternative to the zone-based approach is a pixel-based approach, which takes the pixel as its spatial unit. This paper proposes a pixel-based approach to linking remotely sensed data with socioeconomic data in GIS for urban quality of life assessment. The pixel-based approach uses dasymetric mapping and spatial interpolation to spatially disaggregate socioeconomic data, and integrates remotely sensed data with the spatially disaggregated socioeconomic data for the quality of life assessment. This approach was implemented and compared with a zone-based approach using a case study of Fulton County, Georgia. Results indicate that the pixel-based approach allows the calculation of microscale indicators in urban quality of life assessment and facilitates efficient data integration and visualization, although it requires an intermediate step with additional processing time, the disaggregation of zonal data. The results also demonstrate that the pixel-based approach opens up the potential for new databases and increased analytical capabilities in urban analysis.
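The core disaggregation step can be illustrated with a minimal dasymetric sketch: a zonal count is spread over the zone's pixels in proportion to land-cover weights derived from the remotely sensed data. The class codes and weights here are hypothetical, not the study's calibrated values:

```python
import numpy as np

def disaggregate(zone_total, landcover, weights):
    """Dasymetric disaggregation (sketch): distribute a zonal total (e.g.
    population) over pixels in proportion to per-class weights, so the
    pixel values sum back to the zonal total."""
    w = np.vectorize(weights.get)(landcover).astype(float)
    return zone_total * w / w.sum()
```

A usage example: with `weights = {0: 0.0, 1: 1.0, 2: 2.0}` (water, low-density, high-density), a zone total of 100 over pixels `[[1, 1], [2, 0]]` yields `[[25, 25], [50, 0]]`, keeping the zonal sum intact while giving each pixel its own value.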


A Study On Memory Optimization for Applying Deep Learning to PC (딥러닝을 PC에 적용하기 위한 메모리 최적화에 관한 연구)

  • Lee, Hee-Yeol;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.2
    • /
    • pp.136-141
    • /
    • 2017
  • In this paper, we propose a memory optimization algorithm for applying deep learning on a PC. The proposed algorithm minimizes memory use and computation time by reducing the amount of computation and data required by a conventional deep learning structure on a general PC. It consists of three steps: configuring a convolution layer using random filters with discriminating power, reducing data using PCA, and creating the CNN structure using an SVM. Because no learning is needed to construct the convolution layer from discriminating random filters, the overall training time is shortened. PCA reduces the amount of memory and computational throughput, and building the CNN structure with an SVM maximizes this reduction. To evaluate the performance of the proposed algorithm, we experimented with Yale University's Extended Yale B face database. The results show that the proposed algorithm achieves a recognition rate similar to that of the existing CNN algorithm, confirming its effectiveness. Based on the proposed algorithm, we expect that deep learning algorithms with heavy data and computation requirements can be implemented on a general PC.
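The three-step pipeline (untrained random-filter convolution, then PCA, then an SVM classifier) can be sketched as follows; the filter count, filter size, and component count are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 5, 5))   # fixed random filters: no training

def features(img):
    # Step 1: convolution layer with untrained random filters + ReLU,
    # flattened into one feature vector. No backpropagation is needed.
    maps = [np.maximum(convolve2d(img, f, mode="valid"), 0) for f in filters]
    return np.concatenate([m.ravel() for m in maps])

# Steps 2-3: PCA shrinks the feature dimension (less memory/compute),
# and an SVM replaces the learned fully connected layers.
model = make_pipeline(PCA(n_components=16), SVC())
```

Usage: `X = np.stack([features(i) for i in images]); model.fit(X, labels)`. Only the PCA projection and the SVM are fitted, which is where the memory and training-time savings come from.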

A Study on the Development of Standardized Nursing Care Plans for Computerized Nursing Service (간호업무 전산화를 위한 표준화된 간호계획의 개발에 관한 연구)

  • 김조자;전춘영;임영신;박지원
    • Journal of Korean Academy of Nursing
    • /
    • v.20 no.3
    • /
    • pp.368-380
    • /
    • 1990
  • A central issue in the development of nursing practice is to describe the phenomena with which nursing is concerned. Identifying the health problems that can be diagnosed and managed by the nurse is the first step in organizing and ensuring the development of nursing science. The academic world has therefore been discussing the application of nursing diagnoses in nursing practice as a means of improving quality of care. The objectives of this study were to develop standardized nursing care plans for ten selected nursing diagnoses to form a database for computerized nursing service. The research approach used in the study was (1) selection of the ten nursing diagnoses that occur most frequently on medical-surgical wards, (2) development of a standardized nursing care plan for each of the ten selected diagnoses, (3) application of the plans to hospitalized patients and evaluation of their content validity by nurses, and (4) evaluation of the clinical effects after use of the standardized nursing care plans. The subjects were 56 nurses and 395 hospitalized patients on two medical and two surgical units. The results of this study were as follows: 1) The ten nursing diagnoses selected for the development of the standardized nursing care plans were "PAIN, SLEEP DISTURBANCE, ALTERED HEALTH MAINTENANCE, ALTERATION IN NUTRITION, ANXIETY, CONSTIPATION, ALTERED PATTERNS OF URINARY ELIMINATION, DISTURBANCE IN BODY IMAGE, POTENTIAL FOR ACTIVITY INTOLERANCE AND ACTIVITY INTOLERANCE". 2) The developed standardized nursing care plans included the nursing diagnosis, definition, defining characteristics, etiologic or related factors that contribute to the condition, recording pattern, desired outcomes and nursing orders (nursing interventions). 3) The plans were used with hospitalized patients on medical-surgical wards to test for content validity. Patients' satisfaction with nursing care and nurses' job satisfaction were investigated to evaluate the clinical effects after use of the standardized nursing care plans. A comparison of patient satisfaction with nursing care before and after the introduction of the standardized nursing care plans showed a statistically significantly higher level of satisfaction with the standardized care plans. There was no difference in the level of job satisfaction expressed by the nursing staff before and after the standardized nursing care plans were introduced. However, when opinions about the use of the standardized nursing care plans were examined, a positive effect was found on clarity in defining nursing problems, determining nursing costs, more feasible goal setting, effective and systematic nursing records, and indications for nursing research. The results suggest that, in order to increase the use of nursing diagnoses in the clinical area, it would be effective to select some wards as a pilot project, train the nurses in the use of nursing diagnoses, and develop and use standardized nursing care plans. In addition to the ten diagnoses used in this study, it is recommended that nursing diagnoses appropriate to Korea be continually developed and tested for validity through standardized care plans.


A Novel Video Copy Detection Method based on Statistical Analysis (통계적 분석 기반 불법 복제 비디오 영상 감식 방법)

  • Cho, Hye-Jeong;Kim, Ji-Eun;Sohn, Chae-Bong;Chung, Kwang-Sue;Oh, Seoung-Jun
    • Journal of Broadcast Engineering
    • /
    • v.14 no.6
    • /
    • pp.661-675
    • /
    • 2009
  • Carelessly and illegally copied content is raising serious social problems as internet and multimedia technologies advance, so the development of video copy detection systems must be addressed without delay. In this paper, we propose a hierarchical video copy detection method that estimates similarity using statistical characteristics of the original video and a manipulated (transformed) copy. We rank frames according to luminance values to be robust to spatial transformations, and choose similar videos as candidate segments from the huge database to reduce processing time and complexity. Copied videos generally insert black areas at the edges of the image, so we remove the black border and decide whether a video is a copy using the statistical characteristics of the original and copied video over the center part of the frame, which contains the video's important information. Experimental results show that the proposed method has keyframe accuracy similar to the reference method but uses less memory to store feature information, because the number of keyframes is 61% less than the reference's. The proposed method also detects copies efficiently despite extensive spatial transformations such as blurring, contrast change, zoom in, zoom out, aspect-ratio change, and caption insertion.
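Two of the ideas above, cropping to the frame center to discard inserted black borders and using luminance *rank* (which survives brightness/contrast changes) as the statistic, can be sketched like this; the crop ratio and block grid are illustrative assumptions:

```python
import numpy as np

def center_crop(frame, ratio=0.8):
    """Keep the central part of the frame, dropping the border where
    copies often insert black bars; the 0.8 ratio is illustrative."""
    h, w = frame.shape
    dh, dw = int(h * (1 - ratio) / 2), int(w * (1 - ratio) / 2)
    return frame[dh:h - dh, dw:w - dw]

def rank_signature(frames, grid=4):
    """Per-frame signature: rank order of sub-block mean luminances.
    Ranks are invariant to global brightness/contrast changes, giving
    robustness to spatial transformations of the copy."""
    sig = []
    for f in frames:
        f = center_crop(f)
        h, w = f.shape
        means = [f[i * h // grid:(i + 1) * h // grid,
                   j * w // grid:(j + 1) * w // grid].mean()
                 for i in range(grid) for j in range(grid)]
        sig.append(np.argsort(np.argsort(means)))   # rank of each block
    return np.array(sig)
```

Because only the ordering of block means is kept, an original frame and a contrast-stretched copy produce identical signatures.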

Detecting near-duplication Video Using Motion and Image Pattern Descriptor (움직임과 영상 패턴 서술자를 이용한 중복 동영상 검출)

  • Jin, Ju-Kyong;Na, Sang-Il;Jenong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.107-115
    • /
    • 2011
  • In this paper, we propose a fast and efficient algorithm for detecting near-duplicate videos, based on content-based retrieval over a large-scale video database. To handle large amounts of video easily, we split each video into small segments using scene-change detection. For video services and copyright-related business models, a technology is needed that detects near-duplicates by matching longer stretches of video, rather than searching for a short part or a single frame of the original. To detect near-duplicate video, we propose a motion distribution descriptor and a frame descriptor for each video segment. The motion distribution descriptor is constructed from the motion vectors of macroblocks obtained during the video decoding process. When matching descriptors, the motion distribution descriptor is used as a filter to improve matching speed. However, the motion distribution has low discriminability, so to improve discrimination, the final identification uses frame descriptors extracted from representative frames selected within each scene segment. The proposed algorithm shows a high success rate and a low false-alarm rate. In addition, the matching speed of this descriptor is very fast, confirming that the algorithm can be useful in practical applications.
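The role of the motion distribution descriptor as a cheap pre-filter can be sketched as a normalized histogram of macroblock motion-vector directions, compared by L1 distance; the bin count and threshold are assumptions, not the paper's parameters:

```python
import numpy as np

def motion_histogram(motion_vectors, bins=8):
    """Coarse segment descriptor: normalized histogram of macroblock
    motion-vector directions. Cheap to build (vectors come free from the
    decoder) and to compare, but with low discriminability, so it serves
    only to prune candidates before frame-level matching."""
    mv = np.asarray(motion_vectors, dtype=float)
    angles = np.arctan2(mv[:, 1], mv[:, 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def candidate_filter(query, database, thresh=0.2):
    # Keep only segments whose motion histogram is close to the query's;
    # survivors then go to the (more expensive) frame-descriptor match.
    return [i for i, d in enumerate(database)
            if np.abs(query - d).sum() < thresh]
```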

Robust Face Recognition based on 2D PCA Face Distinctive Identity Feature Subspace Model (2차원 PCA 얼굴 고유 식별 특성 부분공간 모델 기반 강인한 얼굴 인식)

  • Seol, Tae-In;Chung, Sun-Tae;Kim, Sang-Hoon;Chung, Un-Dong;Cho, Seong-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.1
    • /
    • pp.35-43
    • /
    • 2010
  • 1D PCA, as utilized in appearance-based face recognition methods such as eigenface-based recognition, may lead to less face-representative power and more computational cost because the resulting 1D face appearance vector has high dimensionality. To resolve these problems of 1D PCA, 2D PCA-based face recognition methods were developed. However, the face representation model obtained by directly applying 2D PCA to a face image set includes both face-common features and face-distinctive identity features. Face-common features not only impair face recognizability but also add computational cost. In this paper, we first develop a model of a face-distinctive identity feature subspace, separated from the effects of face-common features in the feature space obtained by 2D PCA analysis. Then, a novel robust face recognition method based on this face-distinctive identity feature subspace model is proposed. Since it depends only on the face-distinctive identity features, the proposed method shows better performance than the conventional PCA-based methods (1D PCA-based and 2D PCA-based) with respect to recognition rate and processing time. This is verified through various experiments using the Yale A and IMM face databases, which consist of face images with various poses under various illumination conditions.
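The underlying 2D PCA step (the standard formulation this paper builds on, not the paper's subspace separation itself) can be sketched as follows: the covariance matrix is built directly from 2D images, so its size is only `w x w` instead of `(h*w) x (h*w)` as in 1D PCA:

```python
import numpy as np

def pca_2d(images, k):
    """Standard 2D PCA sketch: form the image covariance matrix from the
    2D images without vectorizing them, then project each image onto the
    top-k eigenvectors. Each feature is a compact (h x k) matrix rather
    than one huge 1D vector, cutting cost relative to 1D PCA."""
    X = np.asarray(images, dtype=float)          # shape (n, h, w)
    mean = X.mean(axis=0)
    G = sum((x - mean).T @ (x - mean) for x in X) / len(X)   # (w, w)
    vals, vecs = np.linalg.eigh(G)               # ascending eigenvalues
    W = vecs[:, ::-1][:, :k]                     # top-k eigenvectors
    return np.array([x @ W for x in X]), W
```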

Towards a Pedestrian Emotion Model for Navigation Support (내비게이션 지원을 목적으로 한 보행자 감성모델의 구축)

  • Kim, Don-Han
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.197-206
    • /
    • 2010
  • For implementing an emotion retrieval system to support pedestrian navigation, coordinating the pedestrian emotion model with the system user's emotion is considered a key component. This study proposes a new method for capturing a user model that corresponds to the pedestrian emotion model and examines the method's validity. In the first phase, a database comprising a set of interior images representing hypothetical destinations was developed. In the second phase, 10 subjects were recruited and asked to evaluate navigation and satisfaction for each interior image over five rounds of navigation experiments. In the last phase, the subjects' feedback data were used to adapt the pedestrian emotion model, a process called 'learning' in this study. After the subjects' evaluations, the learning effect was analyzed in four aspects: recall ratio, precision ratio, retrieval ranking, and satisfaction. The analysis verifies that all four aspects improved significantly after learning, demonstrating the effectiveness of the learning algorithm for the proposed pedestrian emotion model. Furthermore, this study demonstrates the potential of such a pedestrian emotion model to be applied in the development of various mobile content service systems dealing with visual images, such as commercial interiors, in the future.
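Two of the four evaluation aspects have standard definitions that can be stated precisely; a minimal sketch of how recall and precision ratios are computed for a retrieval result:

```python
def recall_precision(retrieved, relevant):
    """Recall ratio  = relevant items retrieved / all relevant items.
    Precision ratio = relevant items retrieved / all retrieved items."""
    hit = len(set(retrieved) & set(relevant))
    return hit / len(relevant), hit / len(retrieved)
```

For example, retrieving four images of which two are relevant, out of three relevant images in the database, gives a recall ratio of 2/3 and a precision ratio of 1/2.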
