The determination of seismic velocities in refractors for near-surface seismic refraction investigations is an ill-posed problem. Small variations in the computed time parameters can result in quite large lateral variations in the derived velocities, which are often artefacts of the inversion algorithms. Such artefacts are usually not recognized or corrected with forward modelling. Therefore, if detailed refractor models are sought with model-based inversion, then detailed starting models are required. The usual source of artefacts in seismic velocities is irregular refractors. Under most circumstances, the variable migration of the generalized reciprocal method (GRM) is able to accommodate irregular interfaces and generate detailed starting models of the refractor. However, where the very-near-surface environment of the Earth is also irregular, the efficacy of the GRM is reduced, and weathering corrections can be necessary. Standard methods for correcting for surface irregularities are usually not practical where the very-near-surface irregularities are of limited lateral extent. In such circumstances, the GRM smoothing statics method (SSM) is a simple and robust approach, which can facilitate more accurate estimates of refractor velocities. The GRM SSM generates a smoothing 'statics' correction by subtracting an average of the time-depths computed with a range of XY values from the time-depths computed with a zero XY value (where the XY value is the separation between the receivers used to compute the time-depth). The time-depths to the deeper target refractors do not vary greatly with varying XY values, and therefore an average is much the same as the optimum value. However, the time-depths for the very-near-surface irregularities migrate laterally with increasing XY values and are substantially reduced by the averaging process.
As a result, the time-depth profile averaged over a range of XY values is effectively corrected for the near-surface irregularities. In addition, the time-depths computed with a zero XY value are the sum of both the near-surface effects and the time-depths to the target refractor. Therefore, their subtraction generates an approximate 'statics' correction, which, in turn, is subtracted from the traveltimes. The GRM SSM is essentially a smoothing procedure, rather than a deterministic weathering correction approach, and it is most effective with near-surface irregularities of quite limited lateral extent. Model and case studies demonstrate that the GRM SSM substantially improves the reliability in determining detailed seismic velocities in irregular refractors.
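The averaging-and-subtraction procedure described above can be sketched numerically. In this minimal sketch, the station spacing, the XY values, and the Gaussian 'bump' standing in for a laterally migrating near-surface irregularity are all invented illustrative numbers, not values from the study:

```python
import numpy as np

# Hypothetical time-depth profiles: a smooth target refractor plus a narrow
# near-surface anomaly whose apparent position migrates laterally as XY grows.
stations = np.arange(20.0)                  # station positions (arbitrary units)
xy_values = [0.0, 5.0, 10.0, 15.0, 20.0]    # illustrative XY separations

target = 40.0 + 0.5 * stations              # smooth time-depths to the target (ms)

time_depths = {}
for xy in xy_values:
    centre = 6.0 + xy / 5.0                 # anomaly migrates with increasing XY
    bump = 8.0 * np.exp(-0.5 * (stations - centre) ** 2)
    time_depths[xy] = target + bump

# Averaging over the XY range largely removes the migrating anomaly, while
# the time-depths to the deeper target barely change with XY.
averaged = np.mean([time_depths[xy] for xy in xy_values], axis=0)

# Zero-XY time-depths contain both the target and the near-surface effect, so
# their difference is an approximate 'statics' correction, which would then be
# subtracted from the traveltimes before recomputing refractor velocities.
statics = time_depths[0.0] - averaged
```

In this toy profile the averaged time-depths deviate from the smooth target by roughly half as much as the zero-XY time-depths do, which is the smoothing effect the subtraction exploits.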
With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning-based sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These vectors have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective) and '고' (connective ending). Given the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English texts. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word-vector derivation method to sentences divided into their constituent morphemes. This raises several questions. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model?
Is it appropriate to apply a typical word vector model, which relies primarily on the surface form of words, to Korean, which has a high proportion of homonyms? Will text preprocessing, such as correcting spelling or spacing errors, affect the classification accuracy, especially when deriving morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying various deep learning models to Korean texts. As a starting point, we summarize these issues as three central research questions. First, which is more effective as the initial input to a deep learning model: morpheme vectors derived from grammatically correct texts from a domain other than the analysis target, or morpheme vectors derived from considerably ungrammatical texts from the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we attain a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these research questions, we generate various types of morpheme vectors reflecting the questions and then compare the classification accuracy through a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data from both the same domain as the target and a different domain: about 2 million cosmetics product reviews from Naver Shopping, and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria.
First, they come from two types of data source: Naver News articles with high grammatical correctness, and Naver Shopping cosmetics product reviews with low grammatical correctness. Second, they differ in the degree of data preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence splitting. Third, they differ in the form of input fed into the word vector model: either the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary in the considered range of POS tags, the minimum frequency for a morpheme to be included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, using a context window of 5 and a vector dimension of 300. The results suggest that using text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections in addition to sentence splitting, and incorporating morphemes of all POS tags, including the 'incomprehensible' category, lead to better classification accuracy. POS tag attachment, which was devised for the high proportion of homonyms in Korean, and the minimum-frequency threshold for a morpheme to be included appear to have no definite influence on the classification accuracy.
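The CBOW derivation of morpheme vectors can be sketched in miniature. This is a minimal numpy implementation, not the study's actual Word2Vec pipeline; the two toy review sentences, their segmentation, and the POS tag names are invented for illustration (following the pattern in which, e.g., '예쁘고' is segmented into '예쁘/Adjective' + '고/Eomi'):

```python
import numpy as np

# Toy corpus of POS-tagged morphemes (hypothetical segmentation that a Korean
# morphological analyser might produce for two short product reviews).
corpus = [
    ["배송/Noun", "이/Josa", "빠르/Adjective", "고/Eomi", "좋/Adjective", "아요/Eomi"],
    ["향/Noun", "이/Josa", "좋/Adjective", "고/Eomi", "예쁘/Adjective", "고/Eomi"],
]

window, dim, lr, epochs = 5, 300, 0.05, 30   # window 5 / dimension 300 as in the study
vocab = sorted({m for sent in corpus for m in sent})
idx = {m: i for i, m in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W_in = (rng.random((V, dim)) - 0.5) / dim    # input weights = the morpheme vectors
W_out = np.zeros((dim, V))                   # output weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# CBOW: average the context morpheme vectors, predict the centre morpheme.
for _ in range(epochs):
    for sent in corpus:
        for pos, centre in enumerate(sent):
            ctx = [idx[sent[j]]
                   for j in range(max(0, pos - window), min(len(sent), pos + window + 1))
                   if j != pos]
            if not ctx:
                continue
            h = W_in[ctx].mean(axis=0)       # averaged context representation
            err = softmax(h @ W_out)
            err[idx[centre]] -= 1.0          # cross-entropy gradient at the output
            grad_h = W_out @ err
            W_out -= lr * np.outer(h, err)
            for c in ctx:                    # distribute the gradient to each context vector
                W_in[c] -= lr * grad_h / len(ctx)

morpheme_vector = W_in[idx["예쁘/Adjective"]]  # a 300-dimensional morpheme vector
```

The variants compared in the study map onto this sketch directly: stripping the '/tag' suffix from the tokens gives the untagged input form, and a minimum-frequency threshold simply filters rare morphemes out of `vocab` before training.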
Sesquiterpenoids are defined as