• Title/Summary/Keyword: Time Series DB

Household Types and Changes of Work-Family Time Allocation - Adapting Fuzzy-set Ideal Type Analysis - (일-가족 시간배분에 따른 가구유형과 변화 - 퍼지셋 이상형 분석의 적용 -)

  • Kim, Jin-Wook; Choi, Young-Jun
    • Korean Journal of Social Welfare / v.64 no.2 / pp.31-54 / 2012
  • Along with mothers' increasing employment, work-family reconciliation has been recognised as a key policy agenda in contemporary welfare states. Although various policy instruments have been introduced and expanded in recent years, the allocation of time within couples remains a fundamental issue that has been largely under-researched from a micro perspective. In this context, this study aims to identify dominant types of work-family time allocation within married couples and to apply these types to the Korean case using fuzzy-set ideal type analysis. In addition, a series of multiple regression analyses is used to find the factors affecting membership in each ideal type of work-family time allocation. The analyses draw on the 1999 and 2009 Korea Time Use Survey datasets; married couples are included in the sample only when the husband works 40 hours or more per week and the couple has at least one pre-school child. The empirical analysis has three parts. First, four ideal types of work-family time allocation are constructed by intersecting two core variables: the husband's share of the couple's total (paid) working time and his share of the couple's total family time (caring plus domestic work). The four types are labelled the traditional male breadwinner model (TM: high working time, low family time), the dual burden model (DB: shared working time, low family time), the family-friendly male breadwinner model (FM: high working time, shared family time), and the adaptive partnership model (AP: shared working time, shared family time). Second, by comparing the composition of the four ideal types in 1999 and 2009, the study examines the trend of work-family time allocation in Korea. Third, the multiple regressions investigate which characteristics contribute to the fuzzy ideal-type score of each of the four models. Finally, policy implications and a further research agenda are discussed.
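
The fuzzy-set ideal type analysis summarised in the entry above can be illustrated with a minimal Python sketch: each spouse-share variable is calibrated into a fuzzy membership score, and membership in the four ideal types (TM, DB, FM, AP) is obtained with the standard fuzzy operators (min for intersection, 1 - x for negation). The calibration anchors, variable names, and example values below are illustrative assumptions, not figures taken from the paper.

```python
# Minimal sketch of fuzzy-set ideal type scoring, assuming illustrative
# calibration anchors (0.2 / 0.5 / 0.8); the paper's own calibration may differ.

def calibrate(share, full_out=0.2, crossover=0.5, full_in=0.8):
    """Map a raw share (e.g. the husband's share of the couple's paid work
    time) onto a fuzzy membership score in [0, 1] by linear interpolation
    between three qualitative anchors."""
    if share >= full_in:
        return 1.0
    if share <= full_out:
        return 0.0
    if share >= crossover:
        return 0.5 + 0.5 * (share - crossover) / (full_in - crossover)
    return 0.5 * (share - full_out) / (crossover - full_out)

def ideal_type_scores(work_share, family_share):
    """Membership of one couple in the four ideal types, combining the two
    calibrated sets with fuzzy AND (min) and fuzzy negation (1 - x)."""
    w = calibrate(work_share)    # "paid work is concentrated on the husband"
    f = calibrate(family_share)  # "family time is shared by the husband"
    return {
        "TM": min(w, 1 - f),     # traditional male breadwinner
        "DB": min(1 - w, 1 - f), # dual burden
        "FM": min(w, f),         # family-friendly male breadwinner
        "AP": min(1 - w, f),     # adaptive partnership
    }

print(ideal_type_scores(work_share=0.9, family_share=0.15))  # strongly TM
```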

Real-time Seismic Damage Estimation for Harbor Site Considering Ground Motion Amplification Characteristics (항만지역의 지반증폭 특성을 반영한 실시간 지진피해 평가방안 수립)

  • Kim, Han-Saem; Yoo, Seung-Hoon; Jang, In-Sung; Chung, Choong-Ki
    • Journal of the Korean Geotechnical Society / v.28 no.5 / pp.55-65 / 2012
  • The purpose of this study is to estimate seismic damage at harbor sites in real time while accounting for their ground-motion amplification characteristics. First, a series of ground response analyses is performed and correlation equations between rock outcrop accelerations and peak ground accelerations (PGAs) are derived. These equations are stored in a database so that, when an earthquake occurs, the site PGAs can be determined immediately from the recorded rock outcrop accelerations. The seismic damage grades of harbor structures are then determined in real time by combining the correlated PGAs with the fragility curves of those structures. In this study, seismic damage was estimated and classified into several grades for two hypothetical earthquake scenarios.
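
The real-time chain summarised in the entry above (pre-fitted site correlation equations, then fragility curves) can be sketched as below. The correlation coefficients, fragility medians, and dispersions are invented placeholders for one hypothetical structure type; in the study these come from the ground response analyses and the structures' fragility curves.

```python
# Sketch of a real-time damage-grade lookup, assuming a linear rock-to-surface
# PGA correlation and lognormal fragility curves with invented parameters.
import math

def site_pga(rock_accel_g, a=1.3, b=0.02):
    """Hypothetical pre-computed correlation: surface PGA (g) from the
    rock-outcrop acceleration (g) recorded when an earthquake occurs."""
    return a * rock_accel_g + b

def fragility(pga_g, median_g, beta):
    """Probability of reaching or exceeding a damage grade, modelled as a
    lognormal CDF of PGA (median_g in g, beta = log-standard deviation)."""
    return 0.5 * (1.0 + math.erf(math.log(pga_g / median_g) / (beta * math.sqrt(2.0))))

# Illustrative grades for one harbor-structure type: (name, median PGA in g, beta).
GRADES = [("minor", 0.15, 0.5), ("moderate", 0.30, 0.5), ("severe", 0.50, 0.5)]

def damage_grade(rock_accel_g):
    pga = site_pga(rock_accel_g)
    exceedance = {name: fragility(pga, m, b) for name, m, b in GRADES}
    # Report the most severe grade whose exceedance probability is at least 50 %.
    reached = [name for name, _, _ in GRADES if exceedance[name] >= 0.5]
    return pga, exceedance, (reached[-1] if reached else "none")

print(damage_grade(0.12))  # e.g. a rock-outcrop acceleration of 0.12 g
```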

An Informetric Analysis on the Notation of East Sea Recorded in Academic Journals ('동해' 표기에 대한 계량적 분석)

  • Han, Jong Yup
    • Journal of the Korean Society for Information Management / v.32 no.1 / pp.23-41 / 2015
  • This study presents a quantitative analysis of how the East Sea is designated in scientific journal articles, classifying the notation into three types: sole notation of 'East Sea', sole notation of 'Sea of Japan', and simultaneous notation of both. Based on a total of 4,192 records selected from the Web of Science database, the analysis examined the time-series change by notation type, the notation type by the authors' countries, and differences in research topic, impact factor, research collaboration, and co-authorship networks. The results show that the sole notation of 'Sea of Japan' accounts for the largest share, while the rates of sole notation of 'East Sea' and of simultaneous notation have increased continuously since the 1990s. Five hub nations dominate research on the East Sea: Japan, Russia, Korea, the USA, and China. For the sole notation of 'Sea of Japan', active collaborative research is carried out in the USA, Russia, and China, centred on Japan; for the sole notation of 'East Sea' and the simultaneous notation, the research rate is relatively high in the USA and Japan, centred on Korea. In the co-authorship network for the sole notation of 'Sea of Japan', a giant component connecting different groups has formed and collaborative work flows actively through it, whereas research using the sole notation of 'East Sea' is dispersed into small groups based on individual institutions.
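
A minimal sketch of the co-authorship network step mentioned in the entry above: build a co-author graph and measure how much of it sits inside the giant component. The records below are invented for illustration; the study itself analysed 4,192 Web of Science records.

```python
# Co-authorship network sketch using networkx; the author lists are placeholders.
from itertools import combinations
import networkx as nx

records = [                       # one author list per (invented) paper
    ["Kim", "Tanaka", "Ivanov"],
    ["Tanaka", "Smith"],
    ["Lee", "Park"],
]

G = nx.Graph()
for authors in records:
    for a, b in combinations(authors, 2):
        G.add_edge(a, b)          # co-authorship tie

giant = max(nx.connected_components(G), key=len)
print(f"giant component covers {len(giant)} of {G.number_of_nodes()} authors")
```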

Building an SNS Crawling System Using Python (Python을 이용한 SNS 크롤링 시스템 구축)

  • Lee, Jong-Hwa
    • Journal of Korea Society of Industrial Information Systems / v.23 no.5 / pp.61-76 / 2018
  • Modern life increasingly takes place within networks. The Internet of Things, which attaches sensors to objects, allows real-time data transfer to and from the network, and the mobile devices that are essential to modern life leave traces of everyday activity in real time. Through social network services, information-seeking and communication activities accumulate in a huge network, and from a business point of view, the analysis of customer needs begins with these SNS data. In this research, we build a system that automatically collects SNS content from the web in real time using Python, targeting Instagram, Twitter, and YouTube, which have large numbers of users worldwide, in order to support customer needs analysis. A virtual (headless) web browser running in a Python web server environment retrieves the content, which is then stored in a database after an extraction and NLP process. Using the resulting service, the desired data are collected automatically through a search function, and netizens' responses can be confirmed in real time through time-series data analysis. Since each search completed within 5 seconds, the efficiency of the proposed system is confirmed.
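
The collection pipeline described in the entry above can be sketched as follows, assuming a headless ("virtual") browser driven from Python and a local SQLite store. The target URL and CSS selector are placeholders only; real SNS pages change their markup frequently and often require login or an official API, so this is a sketch of the approach rather than the paper's implementation.

```python
# Sketch: headless-browser collection of public page text into SQLite.
# URL and selector are placeholders, not real endpoints.
import sqlite3
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

def collect(url, selector, db_path="sns.db"):
    opts = Options()
    opts.add_argument("--headless=new")        # run Chrome without a window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        texts = [e.text for e in driver.find_elements(By.CSS_SELECTOR, selector)]
    finally:
        driver.quit()

    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS posts "
        "(collected_at TEXT DEFAULT CURRENT_TIMESTAMP, body TEXT)"
    )
    con.executemany("INSERT INTO posts (body) VALUES (?)", [(t,) for t in texts])
    con.commit()
    con.close()
    return len(texts)

# Placeholder call -- substitute a real search page and selector before running.
# collect("https://example.com/search?q=keyword", "article p")
```

Storing a collection timestamp with each row keeps the table directly usable for the time-series analysis mentioned in the abstract.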

Method of extracting context from media data by using video sharing site

  • Kondoh, Satoshi; Ogawa, Takeshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.709-713 / 2009
  • Recently, much research in the fields of life-logging and sensor networks has applied data acquired from devices such as cameras and RFID tags to context-aware services. A variety of analytical techniques has been proposed to recognise information from the raw data, because video and audio contain a far larger volume of information than other sensor data. However, because these techniques generally rely on supervised learning, updating a class or adding a new one has required manually re-watching a huge amount of media data to create new supervised data, so in most cases applications could use only recognition functions based on fixed supervised data. We therefore propose a method of acquiring supervised data from a video sharing site on which users attach comments to video scenes; such sites are remarkably popular, so a large number of comments is generated. In the first step of the method, words with a high utility value are extracted by filtering the comments on a video. Second, a set of time-series feature data is calculated by applying feature-extraction functions to the media data. Finally, the learning system calculates correlation coefficients between these two kinds of data and stores them in its database. By applying the stored coefficients to new media data, other applications gain a recognition function that builds collective intelligence from Web comments. Moreover, flexible recognition that adapts to new objects becomes possible by regularly acquiring and learning both media data and comments from the video sharing site, while reducing manual work. As a result, the method can recognise not only the name of the object being viewed but also indirect information such as the impression it gives or the action taken toward it.
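
The final learning step described in the entry above, correlating a word's occurrence over time with a media feature computed over the same time bins, can be sketched as below; the comment counts and feature values are invented placeholders.

```python
# Sketch: Pearson correlation between a comment-derived word count series and
# one media feature series; in the proposed system this coefficient is what
# gets stored in the DB. All numbers below are placeholders.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

word_counts   = [0, 0, 1, 4, 6, 2, 0, 0]                  # "goal" mentions per 10 s bin
motion_energy = [0.1, 0.2, 0.5, 0.9, 1.0, 0.4, 0.2, 0.1]  # feature value per bin

print(f"correlation = {pearson(word_counts, motion_energy):.2f}")
```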
