• Title/Summary/Keyword: Generate Data

Search Results: 3,066

The World as Seen from Venice (1205-1533) as a Case Study of Scalable Web-Based Automatic Narratives for Interactive Global Histories

  • NANETTI, Andrea;CHEONG, Siew Ann
    • Asian review of World Histories
    • /
    • v.4 no.1
    • /
    • pp.3-34
    • /
    • 2016
  • This introduction is both a statement of a research problem and an account of the first research results for its solution. As more historical databases come online and overlap in coverage, we need to discuss the two main issues that have prevented 'big' results from emerging so far. Firstly, historical data are seen by computer scientists as unstructured; that is, historical records cannot easily be decomposed into unambiguous fields, as population (birth and death) and taxation records can. Secondly, machine-learning tools developed for structured data cannot be applied as they are to historical research. We propose a complex-network, narrative-driven approach to mining historical databases. In such a time-integrated network obtained by overlaying records from historical databases, the nodes are actors, while the links are actions. In the case study that we present (the world as seen from Venice, 1205-1533), the actors are governments, while the actions are limited to war, trade, and treaty to keep the case study tractable. We then identify key periods and key events, and hence key actors and key locations, through a time-resolved examination of the actions. This tool allows historians to deal with historical data issues (e.g., source provenance identification, event validation, trade-conflict-diplomacy relationships). On a higher level, this automatic extraction of key narratives from a historical database allows historians to formulate hypotheses on the courses of history, and also to test these hypotheses against other actions or additional data sets. Our vision is that this narrative-driven analysis of historical data can lead to the development of multiple-scale agent-based models, which can be simulated on a computer to generate ensembles of counterfactual histories that would deepen our understanding of how our actual history developed the way it did. The generation of such narratives, automatically and in a scalable way, will revolutionize the practice of history as a discipline, because historical knowledge, that is, the treasure of human experiences (i.e., the heritage of the world), will become something that machine learning algorithms can inherit and use in smart cities to highlight and explain present ties and to illustrate potential future scenarios and visions.
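
As a rough illustration of the time-integrated network described in this abstract (governments as nodes, dated war/trade/treaty actions as links, examined in time-resolved fashion), the following Python sketch builds such a network with networkx and counts actions per period to surface candidate key periods and key actors. The records, window length, and measures are invented for illustration and are not the paper's data or method.

```python
import networkx as nx
from collections import Counter

# Illustrative records: (actor_a, actor_b, action, year) -- not the paper's data.
records = [
    ("Venice", "Byzantium", "treaty", 1205),
    ("Venice", "Genoa", "war", 1257),
    ("Venice", "Genoa", "war", 1378),
    ("Venice", "Mamluk Sultanate", "trade", 1415),
    ("Venice", "Ottoman Empire", "war", 1463),
    ("Venice", "Ottoman Empire", "treaty", 1479),
]

# Time-integrated multigraph: governments as nodes, dated actions as edges.
G = nx.MultiGraph()
for a, b, action, year in records:
    G.add_edge(a, b, action=action, year=year)

# Time-resolved view: count actions per 25-year window to flag busy periods.
window = 25
activity = Counter((d["year"] // window) * window for _, _, d in G.edges(data=True))
for start, n in sorted(activity.items()):
    print(f"{start}-{start + window - 1}: {n} actions")

# Candidate key actors: governments involved in the most actions.
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True))
```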

A Study on the Construction of Computerized Algorithm for Proper Construction Cost Estimation Method by Historical Data Analysis (실적자료 분석에 의한 적정 공사비 산정방법의 전산화 알고리즘 구축에 관한 연구)

  • Chun Jae-Youl
    • Korean Journal of Construction Engineering and Management
    • /
    • v.4 no.4 s.16
    • /
    • pp.192-200
    • /
    • 2003
  • The object of this research is to develop a computerized algorithm for a cost estimation method that forecasts the total construction cost at the bidding stage from historical, elemental work cost data. The traditional cost models used to prepare Bills of Quantities in the Korean construction industry since 1970 are not helpful for forecasting the total project cost at the bidding stage, because the BOQ is fixed by the design factors of a particular project. By contrast, statistical models can provide cost estimates more quickly and more reliably than traditional ones, provided the collected cost data are sufficient to analyze the trends of the variables. The proposed estimation system adopts a non-deterministic method, namely Monte Carlo simulation, which interprets the cost data to generate a probabilistic distribution of total cost from the sparse elemental historical cost distributions.
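
A minimal sketch of the kind of Monte Carlo aggregation the abstract describes, assuming each elemental work item's unit cost can be summarized by a triangular (low / most-likely / high) distribution estimated from sparse historical records. The items, figures, and distribution choice are illustrative assumptions, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical elemental work items: (low, most likely, high) cost in million KRW,
# stand-ins for distributions fitted to sparse historical cost records.
elements = {
    "earthwork":  (120, 150, 210),
    "structure":  (480, 560, 700),
    "finishes":   (300, 340, 450),
    "mechanical": (150, 180, 260),
}

n = 10_000  # Monte Carlo trials
total = np.zeros(n)
for low, mode, high in elements.values():
    total += rng.triangular(low, mode, high, size=n)

# Probabilistic description of the total construction cost at the bidding stage.
print(f"mean: {total.mean():.0f}")
print(f"P10 : {np.percentile(total, 10):.0f}")
print(f"P90 : {np.percentile(total, 90):.0f}")
```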

A Study on Distribution Query Conversion Method for Real-time Integrating Retrieval based on TMDR (TMDR 기반의 실시간 통합 검색을 위한 분산질의 변환 기법에 대한 연구)

  • Hwang, Chi-Gon;Shin, Hyo-Young;Jung, Kye-Dong;Choi, Young-Keun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.7
    • /
    • pp.1701-1707
    • /
    • 2010
  • This study aims to implement a system environment that can integrate and retrieve various types of data in real time by providing semantic interoperability among distributed, heterogeneous information systems. The semantic interoperability is made possible by a TMDR (Topicmaps Metadata Registry), a set of ontologies. The TMDR, built by combining an MDR (MetaData Registry) with TopicMaps and storing them in a database, can generate distributed queries and provide knowledge efficiently. The MDR is a metadata management technique for distributed data management, and TopicMaps is an ontology representation technique that takes into consideration the hierarchy and associations needed to access knowledge data. We have created a TMDR, a kind of ontology that fits any system and can detect and resolve semantic conflicts at both the data and schema levels. With this system we propose a query-processing technique for integrating and accessing heterogeneous information sources. Unlike existing retrieval methods, it makes efficient retrieval and reasoning possible by providing subject-centered associations.
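
The abstract does not give the TMDR structures themselves, but the general idea of converting one global query into source-specific queries through a metadata mapping can be sketched as below. The mapping table, source names, columns, and SQL form are all hypothetical stand-ins for what a TMDR-like registry would hold; this is not the paper's algorithm.

```python
# Hypothetical global-term -> per-source (table, column) mapping, standing in
# for the schema and terminology knowledge a metadata registry would provide.
mapping = {
    "patient_name": {"hospital_a": ("patients", "name"),
                     "hospital_b": ("person",   "full_name")},
    "birth_date":   {"hospital_a": ("patients", "birth_dt"),
                     "hospital_b": ("person",   "dob")},
}

def to_distributed_queries(global_columns, filter_column, value):
    """Rewrite a single global query into one SQL query per data source."""
    queries = {}
    sources = {src for col in global_columns for src in mapping[col]}
    for src in sources:
        cols = ", ".join(mapping[c][src][1] for c in global_columns)
        table = mapping[global_columns[0]][src][0]
        fcol = mapping[filter_column][src][1]
        queries[src] = f"SELECT {cols} FROM {table} WHERE {fcol} = '{value}'"
    return queries

for src, sql in to_distributed_queries(
        ["patient_name", "birth_date"], "patient_name", "Hong Gildong").items():
    print(src, "->", sql)
```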

Design and Implementation of Voice One-Time Password(V-OTP) based User Authentication Mechanism on Smart Phone (스마트폰에서 음성 정보를 이용한 일회용 패스워드(V-OTP) 기반 사용자 인증 메커니즘 설계 및 구현)

  • Cho, Sik-Wan;Lee, Hyung-Woo
    • The KIPS Transactions:PartC
    • /
    • v.18C no.2
    • /
    • pp.79-88
    • /
    • 2011
  • It is necessary to strengthen the security services on smart phones by using voice data in the authentication procedure. In this study, a voice-data-based one-time password generation mechanism is designed and implemented to enhance user authentication on smart phones. After receiving a PIN value from the server, the user records his or her own voice biometric data using the microphone of the smart phone. This captured voice biometric data is then used, after verification procedures, to generate a one-time token on the server side. Based on these mutual authentication steps, a voice-data-based one-time password (V-OTP) is finally generated by the client module after it receives the one-time token from the server. Using the proposed voice one-time password mechanism, it is possible to provide a more secure user authentication service on smart phones.
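
The exact token and V-OTP derivation is not specified in the abstract, so the sketch below only illustrates the general flow (a server-issued PIN/token combined with a voice-derived value on the client to produce a short one-time password) using standard hashing primitives. The function names, the feature-hash step, and the truncation scheme are assumptions, not the paper's protocol.

```python
import hashlib
import hmac
import os
import time

def voice_feature_hash(voice_samples: bytes) -> bytes:
    # Placeholder: a real system would hash stable biometric features extracted
    # from the recording, not the raw microphone bytes used here.
    return hashlib.sha256(voice_samples).digest()

def server_issue_token(pin: str, server_secret: bytes) -> bytes:
    # Server-side one-time token derived from the issued PIN (illustrative only).
    return hmac.new(server_secret, pin.encode(), hashlib.sha256).digest()

def client_generate_votp(token: bytes, voice_samples: bytes, digits: int = 6) -> str:
    # Combine the server token with the voice-derived value (plus a time step)
    # and truncate to a short numeric one-time password.
    msg = voice_feature_hash(voice_samples) + str(int(time.time()) // 60).encode()
    mac = hmac.new(token, msg, hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10 ** digits).zfill(digits)

# Hypothetical usage
server_secret = os.urandom(32)
token = server_issue_token("4821", server_secret)
print(client_generate_votp(token, b"...raw voice capture..."))
```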

Correction of Erroneous Individual Vehicle Speed Data Using Locally Weighted Regression (LWR) (국소가중다항회귀분석을 이용한 이상치제거 및 자료보정기법 개발 (GPS를 이용한 개별차량 주행속도를 중심으로))

  • Im, Hui-Seop;O, Cheol;Park, Jun-Hyeong;Lee, Geon-U
    • Journal of Korean Society of Transportation
    • /
    • v.27 no.2
    • /
    • pp.47-56
    • /
    • 2009
  • Effective detection and correction of outliers in raw traffic data collected from the field is of keen interest, because reliable traffic information depends heavily on the quality of the raw data. Global positioning system (GPS) based traffic surveillance systems are capable of producing individual vehicle speeds that are invaluable for various traffic management and information strategies. This study proposed a locally weighted regression (LWR) based filtering method for individual vehicle speed data. An important feature of this study was a technique for generating synthetic outliers, allowing a more systematic evaluation of the proposed method. Performance evaluations showed that the proposed LWR-based method outperformed exponential smoothing. The proposed method is expected to be effective for filtering outliers from raw individual vehicle speed data.
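
A minimal sketch of LWR-style filtering of an individual vehicle speed trace, using the LOWESS implementation in statsmodels as a stand-in for the paper's locally weighted regression. The synthetic trace, the injected outliers, the smoothing fraction, and the 3-sigma flagging rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)

# Synthetic GPS speed trace (km/h) at 1-second intervals, with injected outliers.
t = np.arange(300.0)
speed = 60 + 10 * np.sin(t / 40) + rng.normal(0, 1.5, t.size)
outlier_idx = rng.choice(t.size, 8, replace=False)
speed[outlier_idx] += rng.choice([-40.0, 40.0], 8)   # synthetic erroneous records

# Locally weighted regression fit of speed over time.
fitted = lowess(speed, t, frac=0.1, return_sorted=False)

# Flag points far from the local fit as outliers and replace them with the fit.
residual = speed - fitted
is_outlier = np.abs(residual) > 3 * residual.std()
corrected = np.where(is_outlier, fitted, speed)

print(f"flagged {is_outlier.sum()} of {t.size} records as outliers")
```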

Development of the UGC Support WebGIS System for Marine Spatial Data (웹 GIS 기반 해양 공간데이터의 사용자 콘텐츠 제작 지원시스템 개발)

  • Oh, Jung-Hee;Choi, Hyun-Woo;Kim, Sung-Dae;Lee, Charm
    • Spatial Information Research
    • /
    • v.19 no.5
    • /
    • pp.13-25
    • /
    • 2011
  • Until now, most Web GIS systems have been developed as one-sided services that provide pre-built spatial information to users. Recently, however, interactive web systems have been gaining attention because users can directly create spatial information content that meets their needs. In line with this trend, this study aimed to develop a UGC (User Generated Contents) support system with which marine science researchers can generate spatial data by themselves on the web. The main advantage of this system is that it provides the marine survey data and marine spatial information needed for marine science research. Furthermore, it provides functions for extracting the coastline of a marine study area as point data, for making spatial planning maps for marine field survey work, and for making marine science thematic maps for exploratory analysis after a research survey. Such an interactive UGC support system gives researchers an easier way to utilize marine spatial information, and it is therefore expected to improve the efficiency of research work as well as increase the utilization of marine spatial data.
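
A small sketch of the kind of user-generated thematic map output described above: clipping survey point data to a study-area bounding box and rendering a simple thematic map. The point data, bounding box, and styling are invented for illustration and are not tied to the system's actual interface or data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Hypothetical marine survey points: (longitude, latitude, measured value).
lon = rng.uniform(126.0, 127.0, 80)
lat = rng.uniform(34.0, 35.0, 80)
value = rng.uniform(5, 30, 80)            # e.g. temperature or chlorophyll-a

# Clip to the user's study area before mapping.
lon_min, lon_max, lat_min, lat_max = 126.2, 126.8, 34.2, 34.8
inside = (lon >= lon_min) & (lon <= lon_max) & (lat >= lat_min) & (lat <= lat_max)

fig, ax = plt.subplots()
sc = ax.scatter(lon[inside], lat[inside], c=value[inside], cmap="viridis")
fig.colorbar(sc, ax=ax, label="measured value")
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_title("Thematic map of survey points in the study area")
plt.savefig("thematic_map.png")
```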

Reversible Data Hiding based on QR Code for Binary Image (이진 이미지를 위한 QR 코드 기반의 가역적인 데이터 은닉)

  • Kim, Cheonshik
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.6
    • /
    • pp.281-288
    • /
    • 2014
  • A QR code (abbreviated from Quick Response Code) is a code system that is robust against image processing operations (skew, warp, blur, and rotation), and QR codes can store several hundred times the amount of information carried by ordinary bar codes. For these reasons, QR codes are used in various fields, e.g., air tickets (boarding control systems), food (vegetables, meat, etc.) tracking systems, contact lens management, prescription management, patient wrist bands (patient management), and so on. In this paper, we propose reversible data hiding for binary images. A reversible data hiding algorithm can recover the original image without any distortion from the marked (stego) image after the hidden data have been extracted, which makes it usable for various purposes. QR codes can be generated by anyone, so they can easily be exploited for crime. To help prevent crimes involving QR codes, reversible data hiding can embed authentication information that confirms whether a QR code is counterfeit. We demonstrate the proposed method through experiments.
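
The abstract does not detail the embedding algorithm, so the sketch below shows only a generic compression-based reversible hiding scheme on a binary image: losslessly compress the original bits of an embedding region, store them alongside the hidden authentication data, and restore them exactly on extraction. This is not the paper's method; the region size, header layout, and toy image are assumptions.

```python
import zlib
import numpy as np

def embed_reversible(img, message, region_size=4096):
    """Hide `message` in the first `region_size` pixels while keeping the original
    pixels recoverable: payload = header | compressed original bits | message."""
    flat = img.flatten()
    original = np.packbits(flat[:region_size]).tobytes()
    comp = zlib.compress(original, 9)
    payload = len(comp).to_bytes(4, "big") + len(message).to_bytes(4, "big") + comp + message
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > region_size:
        raise ValueError("embedding region not compressible enough for this message")
    stego = flat.copy()
    stego[:region_size] = 0
    stego[:bits.size] = bits
    return stego.reshape(img.shape)

def extract_and_restore(stego, region_size=4096):
    flat = stego.flatten()
    raw = np.packbits(flat[:region_size]).tobytes()
    comp_len = int.from_bytes(raw[:4], "big")
    msg_len = int.from_bytes(raw[4:8], "big")
    message = raw[8 + comp_len:8 + comp_len + msg_len]
    original_bits = np.unpackbits(
        np.frombuffer(zlib.decompress(raw[8:8 + comp_len]), dtype=np.uint8))
    restored = flat.copy()
    restored[:region_size] = original_bits[:region_size]
    return restored.reshape(stego.shape), message

# Toy binary "QR-like" image and a stand-in authentication tag.
img = np.zeros((64, 64), dtype=np.uint8)
img[10:20, 10:20] = 1
tag = b"stand-in authentication tag"
stego = embed_reversible(img, tag)
restored, extracted = extract_and_restore(stego)
assert np.array_equal(restored, img) and extracted == tag
```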

Association rule ranking function by decreased lift influence (향상도 영향 감소화에 의한 연관성 순위결정함수)

  • Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.3
    • /
    • pp.397-405
    • /
    • 2010
  • Data mining is a method for finding useful information in large amounts of data in a database, and one of its important goals is to search for and decide on associations among several variables. The task of association rule mining is to find certain association relationships among a set of data items in a database. There are three primary measures for association rules: support, confidence, and lift. In this paper we develop an association rule ranking function, based on decreasing the influence of lift, that generates association rules for items satisfying at least one of the three criteria. We compared our function with the functions suggested by Park (2010) and Wu et al. (2004) using some numerical examples. As a result, we found that our decision function was better than the functions of Park and Wu because it takes values between -1 and 1 regardless of the ranges of the three association thresholds. Our function takes the value 1 if all three association measures are greater than their thresholds and the value -1 if all three measures are smaller than their thresholds.
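
The paper's ranking function itself is not reproduced in the abstract; the sketch below only computes the three underlying measures (support, confidence, lift) from transactions and applies the two extreme cases stated above (+1 when all three measures exceed their thresholds, -1 when all fall below). The intermediate scoring is left to the paper, and the transactions and thresholds are invented.

```python
# Invented transaction database.
transactions = [
    {"milk", "bread"}, {"milk", "bread", "butter"}, {"bread", "butter"},
    {"milk", "butter"}, {"milk", "bread"}, {"bread"}, {"milk", "bread", "butter"},
]

def measures(antecedent, consequent, db):
    """Support, confidence, and lift of the rule antecedent -> consequent."""
    n = len(db)
    n_a = sum(antecedent <= t for t in db)
    n_c = sum(consequent <= t for t in db)
    n_ac = sum((antecedent | consequent) <= t for t in db)
    support = n_ac / n
    confidence = n_ac / n_a
    lift = confidence / (n_c / n)
    return support, confidence, lift

def extreme_rank(antecedent, consequent, db, thresholds=(0.3, 0.6, 1.0)):
    s, c, l = measures(antecedent, consequent, db)
    if s > thresholds[0] and c > thresholds[1] and l > thresholds[2]:
        return 1       # all three measures above their thresholds
    if s < thresholds[0] and c < thresholds[1] and l < thresholds[2]:
        return -1      # all three measures below their thresholds
    return None        # intermediate cases handled by the paper's ranking function

print(measures({"milk"}, {"bread"}, transactions))
print(extreme_rank({"milk"}, {"bread"}, transactions))
```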

Assessment of Trophic State for Yongdam Reservoir Using Satellite Imagery Data (인공위성 영상자료를 이용한 용담호의 영양상태 평가)

  • Kim, Tae Geun
    • Journal of Environmental Impact Assessment
    • /
    • v.15 no.2
    • /
    • pp.121-127
    • /
    • 2006
  • Conventional water quality measurements by point sampling provide only site-specific temporal water quality information, not synoptic geographic coverage of the water quality distribution. To circumvent these limitations in temporal and spatial measurement, remote sensing is increasingly used in water quality monitoring research. In order to assess the trophic state of Yongdam reservoir using satellite imagery data, I obtained Landsat ETM data and water quality data for 16 September and 18 October 2001. The approach involved acquiring water quality samples from boats at 33 sites on 16 September and 30 sites on 18 October 2001, simultaneously with the Landsat-7 satellite overpass. The correlation coefficients between the DN values of the imagery and the chlorophyll-a concentrations were analyzed. The visible bands (bands 1, 2, 3) and near-infrared band (band 4) of the September image showed correlation coefficients higher than 0.9, while the October image showed correlation coefficients of about 0.7 due to atmospheric effects and the low variation of chlorophyll-a concentration. Regression models between the chlorophyll-a concentration and the DN values of the Landsat imagery were developed for each image. The regression model was determined based on the spectral characteristics of chlorophyll, so the green band (band 2) and near-infrared band (band 4) were selected to generate a trophic state map. The coefficient of determination (R²) of the regression model was 0.95 for 16 September and 0.55 for 18 October. According to the trophic state map, made on the basis of Aizaki's TSI and the chlorophyll-a concentration, Yongdam reservoir was mostly in a eutrophic state during this study.
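
As a rough sketch of the band-DN regression step described above, the snippet below fits a linear model of chlorophyll-a concentration on green and near-infrared DN values and then classifies trophic state from the predicted concentration. The sample values and the chlorophyll class boundaries are illustrative assumptions, not the paper's data or Aizaki's exact TSI scale.

```python
import numpy as np

# Illustrative in-situ samples: (band-2 DN, band-4 DN, chlorophyll-a in mg/m^3).
dn_green = np.array([52, 61, 58, 70, 66, 75, 80, 73], dtype=float)
dn_nir   = np.array([18, 25, 22, 34, 30, 41, 47, 38], dtype=float)
chl      = np.array([3.1, 5.4, 4.2, 9.8, 7.5, 14.2, 18.9, 12.1])

# Ordinary least squares: chl ~ b0 + b1*green + b2*nir.
X = np.column_stack([np.ones_like(dn_green), dn_green, dn_nir])
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)
pred = X @ coef
r2 = 1 - ((chl - pred) ** 2).sum() / ((chl - chl.mean()) ** 2).sum()
print("coefficients:", coef, "R^2:", round(r2, 3))

def trophic_class(chl_mg_m3):
    # Illustrative chlorophyll-a boundaries for a trophic state map.
    if chl_mg_m3 < 2.5:
        return "oligotrophic"
    if chl_mg_m3 < 8:
        return "mesotrophic"
    return "eutrophic"

print([trophic_class(c) for c in pred])
```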

On-board Realtime Orbit Parameter Generator for Geostationary Satellite (정지궤도위성 탑재용 실시간 궤도요소 생성기)

  • Park, Bong-Kyu;Yang, Koon-Ho
    • Aerospace Engineering and Technology
    • /
    • v.8 no.2
    • /
    • pp.61-67
    • /
    • 2009
  • This paper proposes an on-board orbit data generation algorithm for geostationary satellites. The concept of the proposed algorithm is as follows. On the ground, the position and velocity deviations with respect to an assumed reference orbit are computed at 30-minute intervals over a 48-hour time span, and the generated data are uploaded to the satellite and stored. From this table, the three nearest data sets are selected to compute the position and velocity deviations for the requested epoch time by second-order polynomial interpolation. The computed position and velocity deviations are added to the reference orbit to recover the absolute orbit information. Here, the reference orbit is chosen to be an ideal geostationary orbit with zero inclination and zero eccentricity. Thanks to its very low computational burden, this algorithm allows orbit data to be generated at 1 Hz or even higher rates. In order to support 48 hours of autonomy, a maximum of 3 Kbytes of memory is required for orbit data storage. This additional memory requirement is estimated to be acceptable for geostationary satellite applications.
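
A minimal sketch of the interpolation step described above: given an uploaded table of position/velocity deviations at 30-minute intervals, pick the three nearest entries, evaluate a second-order (Lagrange) polynomial at the requested epoch, and add the result to the ideal geostationary reference orbit. The table values and the reference-orbit model are placeholders, not flight data.

```python
import numpy as np

STEP = 1800.0  # 30-minute table interval, seconds

# Uploaded deviation table: rows of [dx, dy, dz, dvx, dvy, dvz]; placeholder values.
times = np.arange(0, 48 * 3600 + STEP, STEP)
table = np.random.default_rng(7).normal(0, 50, (times.size, 6))

def deviation_at(t):
    """Second-order Lagrange interpolation over the three nearest table entries."""
    i = int(np.clip(round(t / STEP), 1, times.size - 2))
    idx = [i - 1, i, i + 1]
    result = np.zeros(6)
    for j in idx:
        w = 1.0
        for k in idx:
            if k != j:
                w *= (t - times[k]) / (times[j] - times[k])
        result += w * table[j]
    return result

def reference_orbit(t):
    # Ideal geostationary orbit (zero inclination, zero eccentricity), placeholder model.
    r = 42164.0e3                      # orbit radius, m
    w = 2 * np.pi / 86164.0            # sidereal rotation rate, rad/s
    pos = np.array([r * np.cos(w * t), r * np.sin(w * t), 0.0])
    vel = np.array([-r * w * np.sin(w * t), r * w * np.cos(w * t), 0.0])
    return np.concatenate([pos, vel])

# Absolute state = reference orbit + interpolated deviation (cheap enough for 1 Hz).
t = 12345.0
print(reference_orbit(t) + deviation_at(t))
```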
