• Title/Summary/Keyword: Subsequence


Digital Revolution and Welfare State Reforms: Revisiting Social Investment and Social Protection (기술혁명과 미래 복지국가 개혁의 논점: 다시 사회투자와 사회보호로)

  • Choi, Young Jun; Choi, Jung Eun; Ryu, Jung Min
    • 한국사회정책, v.25 no.1, pp.3-43, 2018
  • The digital revolution has brought about both positive expectations and negative concerns. Many experts predict that the current technological revolution, the so-called "Fourth Industrial Revolution", which is expected to increase productivity in a disruptive way, will have significant implications for employment and the labor market. Subsequently, the possible demise of the traditional employment system could markedly undermine the contemporary welfare state. As a result, basic income has emerged as an alternative. However, little welfare-state research has systematically reviewed the impact of the present technological revolution on employment and welfare states. In this paper, we review the gist of the digital revolution and critically examine recent studies on its effects on employment and welfare states, together with actual case studies. In particular, we investigate the experiences of the platform economies of Uber and Amazon Mechanical Turk, and the German experience with 'Work 4.0'. Finally, we discuss key issues of future welfare state reforms. This research argues that the effects of the technological revolution on employment and welfare state policies will be enormous, but that they are most likely to be mediated by domestic political and policy institutions. It emphasizes the importance of high-quality social investment that enables individuals to adapt flexibly to technological change and supports creative human capital. However, high-quality social investment cannot be sustained without a decent social protection system that universally provides security to people.

Automatic Text Summarization Based on a Selective Copy Mechanism for Addressing OOV (미등록 어휘에 대한 선택적 복사를 적용한 문서 자동요약)

  • Lee, Tae-Seok; Seon, Choong-Nyoung; Jung, Youngim; Kang, Seung-Shik
    • Smart Media Journal, v.8 no.2, pp.58-65, 2019
  • Automatic text summarization is the process of shortening a text document by either extraction or abstraction. The abstractive approach, inspired by deep learning methods that scale to large document collections, has been applied in recent work. Abstractive text summarization utilizes pre-generated word embedding information, but low-frequency, salient words such as terminologies are seldom included in the vocabulary; this is the so-called out-of-vocabulary (OOV) problem. OOV words deteriorate the performance of encoder-decoder neural network models. To address OOV words in abstractive text summarization, we propose a copy mechanism that copies new words from the source document when generating summary sentences. Unlike previous studies, the proposed approach combines accurate pointing information with a selective copy mechanism based on a bidirectional RNN and a bidirectional LSTM. In addition, a neural network gate model estimates the generation probability, and a loss function optimizes the entire abstraction model. The dataset was constructed from a collection of journal-article abstracts and titles. Experimental results demonstrate that both ROUGE-1 (based on word recall) and ROUGE-L (based on the longest common subsequence) of the proposed encoder-decoder model improved, reaching 47.01 and 29.55, respectively.
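
For reference, ROUGE-L scores a candidate summary by the longest common subsequence (LCS) it shares with a reference summary: the longer the shared in-order word sequence, the higher the score. The abstract does not include the evaluation code, so the following is a minimal Python sketch of the standard LCS-based ROUGE-L F-measure; the function names and the beta default of 1.2 are illustrative conventions, not taken from the paper.

```python
def lcs_length(ref_tokens, cand_tokens):
    """Length of the longest common subsequence, via dynamic programming."""
    m, n = len(ref_tokens), len(cand_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if ref_tokens[i - 1] == cand_tokens[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def rouge_l_f1(reference, candidate, beta=1.2):
    """LCS-based ROUGE-L F-measure; beta > 1 weights recall over precision
    (1.2 is a common reimplementation default, not specified by the paper)."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    recall, precision = lcs / len(ref), lcs / len(cand)
    return (1 + beta**2) * precision * recall / (recall + beta**2 * precision)

# Hypothetical usage with whitespace tokenization:
print(rouge_l_f1("the model copies rare terms from the source",
                 "the model copies terms from the source"))
```

ROUGE-1, the other reported metric, counts unigram overlap instead of subsequence overlap; both are recall-oriented by convention.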

An Efficient Algorithm for Streaming Time-Series Matching that Supports Normalization Transform (정규화 변환을 지원하는 스트리밍 시계열 매칭 알고리즘)

  • Loh, Woong-Kee; Moon, Yang-Sae; Kim, Young-Kuk
    • Journal of KIISE: Databases, v.33 no.6, pp.600-619, 2006
  • With recent technical advances in sensors and mobile devices, processing the data streams generated by such devices has become an important research issue. A data stream of real values obtained at continuous time points is called a streaming time-series. Because streaming time-series have unique features that differ from those of traditional time-series, the similarity matching problem on streaming time-series must be solved in a new way. In this paper, we propose an efficient algorithm for the streaming time-series matching problem that supports the normalization transform. While existing algorithms compare streaming time-series without any transform, the proposed algorithm compares them after they are normalization-transformed. The normalization transform is useful for finding time-series that have similar fluctuation trends even though their element values lie in distant ranges. The major contributions of this paper are as follows. (1) Using a theorem presented in the context of subsequence matching that supports the normalization transform [4], we propose a simple algorithm for solving the problem. (2) To improve search performance, we extend the simple algorithm to use k (≥ 1) indexes. (3) For a given k, we present an approximation method for choosing the k window sizes used to construct the k indexes so that the extended algorithm achieves optimal search performance. (4) Based on the notion of continuity [8] of streaming time-series, we further extend our algorithm so that it simultaneously obtains the search results for m (≥ 1) time points, from the present time point t_0 to the near-future time point t_0 + m - 1, by retrieving the index only once. (5) Through a series of experiments, we compare the search performance of the proposed algorithms and show their performance trends according to the values of k and m. To the best of our knowledge, no previous algorithm solves the problem presented in this paper, so we compare our algorithms against the sequential scan algorithm; the experiments show that our algorithms outperform it by up to 13.2 times, and their performance improves further as k increases.
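
To make the normalization transform concrete, the sketch below implements the sequential-scan baseline that the experiments compare against: each sliding window of the buffered stream is normalized to zero mean and unit variance and matched against the normalized query under Euclidean distance. This is a minimal illustration under assumed conventions (z-normalization, Euclidean distance, the function names shown); it does not reproduce the paper's k-index algorithms, which prune candidate windows rather than scanning them all.

```python
import numpy as np

def z_normalize(x):
    """Normalization transform: shift to zero mean and scale to unit variance,
    so sequences with similar fluctuation trends match even when their raw
    values lie in distant ranges."""
    mu, sigma = x.mean(), x.std()
    if sigma == 0.0:  # constant window: no fluctuation trend to compare
        return np.zeros_like(x)
    return (x - mu) / sigma

def sequential_scan_match(query, stream_buffer, epsilon):
    """Baseline matcher: slide a window of len(query) over the buffered stream
    and report offsets whose normalized Euclidean distance is within epsilon."""
    q = z_normalize(np.asarray(query, dtype=float))
    s = np.asarray(stream_buffer, dtype=float)
    w = len(q)
    hits = []
    for i in range(len(s) - w + 1):
        d = np.linalg.norm(z_normalize(s[i:i + w]) - q)
        if d <= epsilon:
            hits.append((i, d))
    return hits

# Hypothetical usage: the stream's windows have distant raw values but the
# same trend as the query, so both offsets match after normalization.
print(sequential_scan_match([1.0, 2.0, 3.0], [100.0, 105.0, 110.0, 115.0], 0.1))
```

The proposed algorithms replace this full scan with k window-size indexes and, per contribution (4), answer queries for m consecutive time points with a single index retrieval.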