• Title/Summary/Keyword: Generate Data


Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a facial expression state representation that expresses facial states based on facial motion data. By distributing facial expressions into an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from the facial expression space through a user interface. Approximately 2,400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a 2-dimensional plane, animators can create animations or control the expressions of 3-dimensional avatars in real time by selecting a series of expressions from the space. To distribute the roughly 2,400 facial expression data points into an intuitive space, the state of each expression must be represented from the facial expression frames; for this, a distance matrix recording the distances between pairs of facial feature points is used. The LLE algorithm is then applied to this data for visualization in the 2-dimensional plane. Animators controlled facial expressions and created animations through the system's user interface, and the paper evaluates the experimental results.
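The expression-state representation the abstract describes (pairwise distances between facial feature points) can be sketched as follows; the four-point frame is a made-up illustration, not the paper's data:

```python
import math
from itertools import combinations

def expression_state(feature_points):
    """Flatten the pairwise-distance matrix of facial feature points
    into a single state vector for one expression frame."""
    return [math.dist(p, q) for p, q in combinations(feature_points, 2)]

# Hypothetical frame with four 2-D feature points (e.g. eye and mouth corners).
frame = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
state = expression_state(frame)  # 4 points -> C(4,2) = 6 distances
```

Each of the ~2,400 frames would yield one such vector, and LLE (e.g. a manifold-learning implementation) would then embed those vectors into the 2-D expression space.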

Generation of Grid Maps of GPS Signal Delays in the Troposphere and Analysis of Relative Point Positioning Accuracy Enhancement (GPS 신호의 대류권 지연정보 격자지도 생성과 상대측위 정확도 향상 평가)

  • Kim, Dusik;Won, Jihye;Son, Eun-Seong;Park, Kwan-Dong
    • Journal of Navigation and Port Research / v.36 no.10 / pp.825-832 / 2012
  • GPS signal delay caused by dry gases and water vapor in the troposphere is a major error source in GPS point positioning and must be eliminated for precise positioning. In this paper, we generated a tropospheric delay grid map over the Korean Peninsula by post-processing data from the GPS permanent station network, in order to assess the feasibility of the tropospheric delay generation algorithm. GIPSY 5.0 was used for GPS data processing, and the nationwide AWS observation network was used to calculate the dry and wet delay components separately. In the grid map accuracy analysis, the RMSE between grid map values and GPS site values was 0.7 mm in ZHD, 7.6 mm in ZWD, and 8.5 mm in ZTD. We then applied the calculated tropospheric delay grid map to a single-frequency relative positioning algorithm and analyzed the resulting accuracy improvement. Positioning accuracy improved by up to 36% for relative positioning between Suwon (SUWN) and Mokpo (MKPO), a baseline of about 297 km.
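Reading a delay value off such a grid map at an arbitrary station typically involves interpolation between grid nodes; a minimal bilinear-interpolation sketch (grid layout, origin, and values here are assumptions, not the paper's):

```python
def interpolate_ztd(grid, lat, lon, lat0, lon0, step):
    """Bilinear interpolation of a zenith tropospheric delay (ZTD)
    value at (lat, lon) from a regular grid where grid[i][j] is the
    delay at (lat0 + i*step, lon0 + j*step), in millimetres."""
    fi, fj = (lat - lat0) / step, (lon - lon0) / step
    i, j = int(fi), int(fj)
    di, dj = fi - i, fj - j
    return ((1 - di) * (1 - dj) * grid[i][j]
            + (1 - di) * dj * grid[i][j + 1]
            + di * (1 - dj) * grid[i + 1][j]
            + di * dj * grid[i + 1][j + 1])

# Hypothetical 2x2 corner of a ZTD grid (mm), origin 34N/126E, 1-degree step.
grid = [[2300.0, 2310.0],
        [2320.0, 2330.0]]
ztd = interpolate_ztd(grid, 34.5, 126.5, 34.0, 126.0, 1.0)
```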

Preliminary Scheduling Based on Historical and Experience Data for Airport Project (초기 기획단계의 실적 및 경험자료 기반 공항사업 기준공기 산정체계)

  • Kang, Seunghee;Jung, Youngsoo;Kim, Sungrae;Lee, Ikhaeng;Lee, Changweon;Jeong, Jinhak
    • Korean Journal of Construction Engineering and Management / v.18 no.6 / pp.26-37 / 2017
  • Preliminary scheduling at the initial planning phase is usually performed with limited information and detail. The reliability and accuracy of a preliminary schedule therefore depend on the personal experience and skill of the planner, and producing one requires substantial managerial effort. Reusing historical data from similar projects makes preliminary scheduling more efficient, but understanding the structure of historical data and applying it to a new project demands considerable experience and knowledge. In this context, this paper proposes a framework and methodology for automated preliminary schedule generation based on a historical database. The proposed framework automatically generates CPM schedules for airport projects in the early planning stage, enhancing reliability and reducing workload by using structured knowledge and experience.
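The CPM schedules the framework generates rest on a standard forward-pass calculation; a minimal sketch with hypothetical airport activities (names and durations are illustrative, not from the paper's database):

```python
def cpm_duration(activities):
    """Forward-pass CPM calculation. `activities` maps an activity name
    to (duration, [predecessor names]); returns the earliest finish
    time of the whole schedule."""
    finish = {}

    def earliest_finish(name):
        if name not in finish:
            dur, preds = activities[name]
            finish[name] = dur + max((earliest_finish(p) for p in preds), default=0)
        return finish[name]

    return max(earliest_finish(a) for a in activities)

airport = {  # durations in months, purely illustrative
    "site_prep": (6, []),
    "runway":    (18, ["site_prep"]),
    "terminal":  (24, ["site_prep"]),
    "systems":   (8, ["runway", "terminal"]),
}
total = cpm_duration(airport)
```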

Improved ErtPS Scheduling Algorithm for AMR Speech Codec with CNG Mode in IEEE 802.16e Systems (IEEE 802.16e 시스템에서의 CNG 모드 AMR 음성 코덱을 위한 개선된 ErtPS 스케줄링 알고리즘)

  • Woo, Hyun-Je;Kim, Joo-Young;Lee, Mee-Jeong
    • The KIPS Transactions: Part C / v.16C no.5 / pp.661-668 / 2009
  • The Extended Real-time Polling Service (ErtPS) was proposed to support QoS for VoIP with silence suppression, which generates variable-size data packets, in IEEE 802.16e systems. When silence is suppressed, VoIP should support Comfort Noise Generation (CNG), which generates comfort noise for the receiver so the user can tell the connection is still active. In silent periods, CNG mode generates data at a lower bit rate and with longer packet transmission intervals than during talk-spurts. Therefore, if ErtPS, which is designed for service flows that generate data packets periodically, is applied during silent periods, uplink resources are used inefficiently. In this paper, we propose an Improved ErtPS algorithm for efficient resource utilization during the silent periods of VoIP traffic with CNG. In the proposed algorithm, the base station allocates bandwidth according to the voice state at the appropriate interval, with the user informing it of voice-state changes. The Improved ErtPS exploits the Channel Quality Information Channel (CQICH), an uplink subchannel that periodically delivers channel quality information to the base station in 802.16e systems. We evaluated the proposed algorithm using the OPNET simulator and validated that it improves uplink bandwidth utilization and packet transmission latency.
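The state-dependent grant sizing described above can be illustrated with typical AMR values (the 33-byte 12.2 kbps frame every 20 ms and 7-byte SID frame every 160 ms are common AMR figures used here as assumptions, not numbers from the paper):

```python
def uplink_grant(voice_state):
    """Grant size and interval the base station might use for each
    voice state in a state-aware scheme of this kind."""
    if voice_state == "talk":
        return {"bytes": 33, "interval_ms": 20}   # AMR 12.2 kbps frame
    return {"bytes": 7, "interval_ms": 160}       # CNG SID frame

def bandwidth_saving(talk, silent):
    """Fraction of uplink bytes saved over a period split into `talk`
    and `silent` milliseconds, versus always granting as in talk-spurt."""
    always = (talk + silent) / 20 * 33
    adaptive = talk / 20 * 33 + silent / 160 * 7
    return 1 - adaptive / always

saving = bandwidth_saving(talk=10_000, silent=10_000)
```

With a 50/50 talk/silence split this toy model saves roughly half the always-granted uplink bytes, which is the intuition behind adapting grants to the voice state.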

Constructing Software Structure Graph through Progressive Execution (점진적 실행을 통한 소프트웨어의 구조 그래프 생성)

  • Lee, Hye-Ryun;Shin, Seung-Hun;Choi, Kyung-Hee;Jung, Gi-Hyun;Park, Seung-Kyu
    • Journal of the Korea Society of Computer and Information / v.18 no.7 / pp.111-123 / 2013
  • To verify software vulnerabilities, methods that conjecture the structure of software and then test the software based on the conjectured structure have attracted attention. Such methods require an efficient way to conjecture software structure. Popular graph and tree representations such as the DFG (Data Flow Graph), CFG (Control Flow Graph), and CFA (Control Flow Automaton) have a serious drawback: they cannot express software hierarchically. In this paper, we propose a method to overcome this drawback. The proposed method applies various input data to a binary code, generates CFGs based on the code's behavior, and constructs an HCFG (Hierarchical Control Flow Graph) that expresses the generated CFGs in a hierarchical structure. The components required for the HCFG and a progressive algorithm to construct it are also proposed. The method is verified by constructing the software architecture of an open-source SMTP (Simple Mail Transfer Protocol) server program, and the structure generated by the proposed method is compared with the real program structure.
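The progressive part of such a construction can be caricatured as taking the union of control-flow edges observed across repeated executions with different inputs; this is a simplified stand-in for the paper's algorithm, not its actual HCFG construction:

```python
def build_cfg_progressively(trace_runs):
    """Union the control-flow edges observed over many executions:
    each run is a sequence of basic-block ids, and each consecutive
    pair of blocks contributes one directed edge to the graph."""
    edges = set()
    for run in trace_runs:
        edges.update(zip(run, run[1:]))
    return edges

# Two hypothetical runs of the same binary with different inputs.
edges = build_cfg_progressively([["A", "B", "D"], ["A", "C", "D"]])
```

Each new input can reveal edges no earlier run exercised, which is why the graph is refined progressively rather than built from a single trace.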

Instructions and Data Prefetch Mechanism using Displacement History Buffer (변위 히스토리 버퍼를 이용한 명령어 및 데이터 프리페치 기법)

  • Jeong, Yong Su;Kim, JinHyuk;Cho, Tae Hwan;Choi, SangBang
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.10 / pp.82-94 / 2015
  • In this paper, we propose a hardware prefetch mechanism with an efficient cache replacement policy that gives priority to the trigger block of a spatial region and generates the spatial region using a displacement field. Because the history is recorded per trigger block, the mechanism can take the program's access sequence into account, and because history entries are stored as displacement values, instruction and data addresses can be prefetched quickly by adding the stored displacement to the trigger address. We also propose a replacement policy that, after prioritizing trigger blocks, evicts randomly among the low-priority blocks when the cache is full. We evaluated the prefetcher using the gem5 memory simulator and the PARSEC benchmarks. Compared with an existing hardware prefetcher that generates spatial regions using a bit vector, the L1 data cache miss rate was reduced by about 44.5% on average and L1 instruction cache misses by an average of 26.1%. In addition, IPC (Instructions Per Cycle) improved by about 23.7% on average.
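A toy model of the displacement-history idea, in which later accesses in a region are recorded as block displacements from the trigger and replayed on the next trigger hit (table layout, 64-byte block size, and training policy are assumptions for illustration):

```python
class DisplacementPrefetcher:
    """Per trigger block, remember the displacements (in blocks) of the
    accesses that followed it, and replay trigger + displacement as
    prefetch addresses when the trigger block is touched again."""
    BLOCK = 64

    def __init__(self):
        self.history = {}     # trigger block -> set of displacements
        self.trigger = None   # block currently training the history

    def access(self, addr):
        block = addr // self.BLOCK
        if block in self.history:          # known trigger: replay history
            self.trigger = block
            return [(block + d) * self.BLOCK for d in sorted(self.history[block])]
        if self.trigger is None:           # first touch becomes the trigger
            self.trigger = block
            self.history[block] = set()
        else:                              # record displacement from trigger
            self.history[self.trigger].add(block - self.trigger)
        return []

pf = DisplacementPrefetcher()
for a in (0, 128, 192):        # first pass over the region trains the table
    pf.access(a)
predictions = pf.access(0)     # revisiting the trigger replays displacements
```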

Outlook of Discharge for Daecheong and Yongdam Dam Watershed Using A1B Climate Change Scenario Based RCM and SWAT Model (A1B기후변화시나리오 기반 RCM과 SWAT모형을 이용한 대청댐 및 용담댐 유역 유출량 전망)

  • Park, Jin-Hyeog;Kwon, Hyun-Han;No, Sun-Hee
    • Journal of Korea Water Resources Association / v.44 no.12 / pp.929-940 / 2011
  • In this study, future discharges are projected for the Daecheong and Yongdam Dam watersheds in the Geum River basin using the A1B-scenario-based RCM with 27 km spatial resolution from the Korea Meteorological Administration and the SWAT model. Direct use of GCM and RCM data for water resources impact assessment is impractical because their spatial and temporal scales differ from those of the watershed model. Here, the scale mismatch was resolved by spatially downscaling the RCM grid data from watershed scale to weather station scale and temporally downscaling from monthly to daily values. To generate detailed hydrologic scenarios at the watershed scale, a multi-site non-stationary downscaling method was used to examine fluctuations of rainfall events under future climate change while accounting for non-stationarity. With the SWAT model, the similarity between simulated and observed inflows and discharges at Yongdam Dam and Daecheong Dam from 2001 to 2006 was 90.1% and 84.3%, respectively, showing good agreement. The climate change analysis period spans 80 years, from 2011 to 2090; discharges increase by 6% in the 2011~2030 period. The seasonal pattern of discharge is projected to differ from the present precipitation pattern, as simulated summer discharge decreases while fall discharge increases.
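The abstract reports similarity percentages without naming the statistic; one common goodness-of-fit measure in SWAT calibration studies is the Nash-Sutcliffe efficiency, sketched here purely as an illustration (the series below are invented, and the paper may have used a different measure):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1.0 is a perfect fit, 0.0 means
    the model is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

# Invented monthly discharge series (arbitrary units) for illustration.
obs = [3.0, 5.0, 4.0, 6.0]
sim = [2.8, 5.2, 4.1, 5.9]
efficiency = nash_sutcliffe(obs, sim)
```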

Spatial Selectivity Estimation using Cumulative Wavelet Histograms (누적밀도 웨이블릿 히스토그램을 이용한 공간 선택율 추정)

  • Chi, Jeong-Hee;Jeong, Jae-Hyuk;Ryu, Keun-Ho
    • Journal of KIISE: Databases / v.32 no.5 / pp.547-557 / 2005
  • The purpose of selectivity estimation is to maintain summary data in a very small memory space while minimizing the error between the estimated value and the query result. When estimating selectivity for large spatial data, existing methods need summary information that reflects the spatial data distribution well in order to answer queries accurately, and obtaining such summary information requires a large amount of memory. In this paper, we therefore propose a new technique, the cumulative density wavelet histogram (CDW histogram), which achieves highly accurate selectivity estimation in a small memory space. The proposed method utilizes the sub-histograms created by a CD histogram; each sub-histogram is transformed by the wavelet transform to generate wavelet summary information, which yields good selectivity estimates even when memory is very small. The experimental results show that the proposed method combines the strengths of both approaches: it achieves good selectivity using 25%~50% of the memory space of the previous histogram and is superior to existing selectivity estimation techniques. The technique can be used to accurately estimate the selectivity of spatial range queries in databases with very restrictive memory.
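The two building blocks of such a technique, cumulative (prefix-sum) histograms and the Haar wavelet transform whose small coefficients can be dropped to compress the summary, can be sketched as follows (the paper's actual CD-histogram construction is not reproduced here):

```python
def cumulative(hist):
    """Prefix sums of a histogram, so that the count in any bucket
    range is a single subtraction."""
    out, total = [], 0
    for v in hist:
        total += v
        out.append(total)
    return out

def haar_transform(values):
    """Full Haar decomposition of a histogram whose length is a power
    of two; returns [overall average] followed by detail coefficients
    from coarsest to finest. Near-zero details can be discarded to
    shrink the summary."""
    details = []
    while len(values) > 1:
        avgs = [(a + b) / 2 for a, b in zip(values[::2], values[1::2])]
        details = [(a - b) / 2 for a, b in zip(values[::2], values[1::2])] + details
        values = avgs
    return values + details

cdf = cumulative([2, 2, 4, 4])
coeffs = haar_transform([2.0, 2.0, 4.0, 4.0])
```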

Analysis of Lane-Changing Distribution within Merging and Weaving Sections of Freeways (고속도로 합류 및 엇갈림구간에서의 차로변경 분포 분석에 관한 연구)

  • Kim, Yeong-Chun;Kim, Sang-Gu
    • Journal of Korean Society of Transportation / v.27 no.4 / pp.115-126 / 2009
  • Lane-change behavior consists of discretionary and mandatory lane changes. In the discretionary type, drivers change lanes selectively to maintain their desired driving conditions; in the mandatory type, drivers must leave the current lane, which occurs in recurrent congestion sections such as merging and weaving areas. Mandatory lane changes strongly affect freeway operating conditions. In this paper, we first generate data such as traffic volumes, speeds, densities, and lane-change counts within merging and weaving sections from individual vehicle data collected by time-lapse aerial photography. The data is then divided into stable and congested flow by analyzing the speed variation patterns of individual vehicles. The number of lane changes from the ramp to the mainline within each 30-meter interval is investigated before and after congestion at the study sites, and the distribution of lane changes at each 30-meter point is analyzed to identify how the lane-changing ratio varies between stable and congested flow. To capture the effect of ramp flow on the mainline, the study also analyzes the lane-changing distributions across the mainline lanes. The purpose of this paper is to present basic theory for developing a lane-changing model at freeway merging and weaving sections.
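Binning lane-change positions into the 30-meter intervals the study uses can be sketched as follows (the event positions and section length are illustrative, not the study's data):

```python
def lane_change_distribution(positions, section_length=300, bin_m=30):
    """Count lane-change events per `bin_m`-meter interval downstream
    of the merge gore; `positions` are event locations in metres."""
    bins = [0] * (section_length // bin_m)
    for x in positions:
        if 0 <= x < section_length:
            bins[int(x // bin_m)] += 1
    return bins

# Hypothetical ramp-to-mainline lane-change positions over a 90 m section.
counts = lane_change_distribution([5, 25, 40, 80], section_length=90)
```

Comparing such distributions computed separately for stable and congested flow reveals how the lane-change location shifts with traffic state.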

A Study on Classifying Sea Ice of the Summer Arctic Ocean Using Sentinel-1 A/B SAR Data and Deep Learning Models (Sentinel-1 A/B 위성 SAR 자료와 딥러닝 모델을 이용한 여름철 북극해 해빙 분류 연구)

  • Jeon, Hyungyun;Kim, Junwoo;Vadivel, Suresh Krishnan Palanisamy;Kim, Duk-jin
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.999-1009 / 2019
  • The importance of high-resolution sea ice maps of the Arctic Ocean is increasing because of the potential for pioneering North Pole routes and the need for precise climate prediction models. In this study, sea ice classification algorithms based on two deep learning models were examined using Sentinel-1 A/B SAR data to generate high-resolution sea ice classification maps. Based on current ice charts, training data sets with three classes (Open Water, First Year Ice, Multi Year Ice) were generated by Arctic sea ice and remote sensing experts. Ten sea ice classification algorithms were produced by combining the two deep learning models (a simple CNN and ResNet-50) with five cases of input bands, including incidence angles and thermal-noise-corrected HV bands. For the ten algorithms, classification results were compared with ground-truth points, and a confusion matrix and Cohen's kappa coefficient were produced for the best-performing case. The results were also compared with those of the Maximum Likelihood Classifier, which has traditionally been employed to classify sea ice. The CNN case with two convolution layers and two max pooling layers, using HV and incidence angle input bands, achieves a classification accuracy of 96.66% and a Cohen's kappa coefficient of 0.9499. All deep learning cases show better classification accuracy than the Maximum Likelihood Classifier.
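The best-performing architecture (two convolution layers, two max-pooling layers) can be illustrated with a weight-free, single-channel sketch in pure Python; the kernels and input patch are placeholders, and real training on HV and incidence-angle bands would of course use a deep learning framework:

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most deep
    learning layers) of a single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def maxpool2(img):
    """2x2 max pooling with stride 2."""
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]) - 1, 2)]
            for i in range(0, len(img) - 1, 2)]

def relu(img):
    return [[max(0.0, x) for x in row] for row in img]

def simple_cnn_features(img, k1, k2):
    """Conv -> ReLU -> pool -> conv -> ReLU -> pool, the layer order
    the abstract describes (no learned weights in this sketch)."""
    return maxpool2(relu(conv2d(maxpool2(relu(conv2d(img, k1))), k2)))

# Toy 10x10 patch and all-ones 3x3 kernels, purely for shape tracing:
# 10x10 -> conv 8x8 -> pool 4x4 -> conv 2x2 -> pool 1x1.
patch = [[1.0] * 10 for _ in range(10)]
ones3 = [[1.0] * 3 for _ in range(3)]
features = simple_cnn_features(patch, ones3, ones3)
```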