• Title/Summary/Keyword: Time series model


The Impact of SSM Market Entry on Changes in Market Shares among Retailing Types (기업형 슈퍼마켓(SSM)의 시장진입이 소매업태간 시장점유율 변화에 미친 영향)

  • Choi, Ji-Ho;Yonn, Min-Suk;Moon, Youn-Hee;Choi, Sung-Ho
    • Journal of Distribution Research
    • /
    • v.17 no.3
    • /
    • pp.115-132
    • /
    • 2012
  • This study empirically examines the impact of SSM market entry on changes in market shares among retailing types. The data are monthly time series spanning the period from January 2000 to December 2010, and the effect of SSM market entry on the market shares of retailing types is analyzed using several key factors: the number of new SSM entrants per month, the total number of SSMs, and the proportion of new SSM entrants smaller than $165m^2$ among all new SSM entrants. According to the Korean Standard Industrial Classification codes, retailing is classified into 5 types: department stores, retail sale in other non-specialized large stores (big marts), supermarkets, convenience stores, and retail sale in other non-specialized stores with food or beverages predominating (others). The market share of each retailing type is calculated as the ratio of that type's monthly sales to total monthly retailing sales, where total retailing sales is the sum of sales across all retailing types. The empirical model controls for size effects with the number of monthly employees in each retailing type and for macroeconomic effects with M2. The empirical model employed in this study is as follows: $$MS_i=f(NewSSM,\;CumSSM,\;employ_i,\;under165,\;M2)$$ where $MS_i$ is the market share of each retailing type (department stores, big marts, supermarkets, convenience stores, and others), NewSSM is the number of new SSM entrants per month, CumSSM is the total number of SSMs, $employ_i$ is the number of monthly employees in each retailing type, and under165 is the proportion of new SSM entrants smaller than $165m^2$ among all new SSM entrants. The correlations among these variables are also reported.

    Descriptive statistics of the sample are reported as well: sales is the total monthly revenue of each retailing type, employees is the total number of monthly employees in each retailing type, area is the total floor space of each retailing type ($m^2$), number of stores is the total number of monthly stores for each retailing type, market share is the ratio of each retailing type's monthly sales to total monthly retailing sales (where total retailing sales is the sum of sales across all retailing types), new monthly SSMs is the total number of new monthly SSM entrants, and M2 is the money supply. The empirical results of the effect of new SSM market entry on changes in market shares among retailing types (department stores, retail sale in other non-specialized large stores, supermarkets, convenience stores, and retail sale in other non-specialized stores with food or beverages predominating) are as follows. The dependent variables are the market shares of department stores, big marts, supermarkets, convenience stores, and others. The results show that the impact of new SSM market entry on changes in the market share of retail sale in other non-specialized large stores (big marts) is statistically significant. The total number of monthly SSM stores has a significant effect on market share, but the magnitude and sign of the effect differ among retailing types. An increase in the number of SSM stores has a negative effect on the market shares of retail sale in other non-specialized large stores (big marts) and convenience stores, but a positive effect on the market shares of department stores, supermarkets, and retail sale in other non-specialized stores with food or beverages predominating (others). This study offers theoretical and practical implications of these findings and suggests directions for further analysis.
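As a purely illustrative companion to the specification above, the following sketch estimates $MS_i=f(NewSSM,\;CumSSM,\;employ_i,\;under165,\;M2)$ as an ordinary least squares time-series regression on toy monthly data. The column names mirror the variables defined in the abstract, but the data, functional form, and estimation details are assumptions rather than the authors' actual model.

```python
# Minimal OLS sketch of MS_i = f(NewSSM, CumSSM, employ_i, under165, M2) on toy data.
# Column names follow the abstract; the generated numbers are placeholders only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 132  # monthly observations, January 2000 - December 2010
df = pd.DataFrame({
    "NewSSM":   rng.poisson(5, n),             # new SSM entrants per month
    "CumSSM":   np.cumsum(rng.poisson(5, n)),  # cumulative number of SSMs
    "employ":   rng.normal(100, 5, n),         # monthly employees of one retailing type
    "under165": rng.uniform(0, 1, n),          # share of new entrants smaller than 165 m^2
    "M2":       np.linspace(700, 1700, n),     # money supply proxy
})
df["MS"] = 0.3 - 0.001 * df["CumSSM"] + rng.normal(0, 0.01, n)  # toy market-share series

X = sm.add_constant(df[["NewSSM", "CumSSM", "employ", "under165", "M2"]])
result = sm.OLS(df["MS"], X).fit()
print(result.summary().tables[1])  # coefficient estimates for one retailing type
```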

  • Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

    • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
      • Journal of Intelligence and Information Systems
      • /
      • v.19 no.1
      • /
      • pp.95-110
      • /
      • 2013
    • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in the increasing need to collect, store, search for, analyze, and visualize this data. This kind of data cannot be handled appropriately by using the traditional methodologies usually used for analyzing structured data because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data such as text files and log files through various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature of unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneer researchers and business practitioners. Opinion mining or sentiment analysis refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not otherwise have been solved by existing traditional approaches. One of the most representative attempts using the opinion mining technique may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published on various media is obviously a traditional example of unstructured text data. Every day, a large volume of new content is created, digitalized, and subsequently distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information. In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies including ours have utilized a sentiment dictionary to elicit sentiment polarity or sentiment value from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, sentences in a document, and the whole document. However, most traditional approaches have a common limitation in that they do not consider the flexibility of sentiment polarity; that is, the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis. It can also be contradictory in nature. This flexibility of sentiment polarity motivated us to conduct this study. In this paper, we argue that sentiment polarity should be assigned not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement our idea, we present an intelligent investment decision-support model based on opinion mining that performs the scraping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index.
In addition, we applied a domain-specific sentiment dictionary instead of a general-purpose one to classify each piece of news as either positive or negative. For the purpose of performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model. For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
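To make the dictionary-based pipeline concrete, here is a minimal sketch of how a domain-specific sentiment dictionary can score articles and vote on the next day's index direction. The dictionary entries, the whitespace tokenizer, and the majority-vote rule are hypothetical simplifications, not the model built in the paper.

```python
# Sketch: dictionary-based polarity scoring of news articles and a simple majority-vote
# prediction of the next day's index direction. The dictionary below is a toy stand-in
# for a domain-specific sentiment dictionary learned from economic news.
from collections import Counter

domain_sentiment = {"surge": +1.0, "rally": +0.8, "beat": +0.5,
                    "plunge": -1.0, "slump": -0.8, "miss": -0.5}

def article_polarity(text, dictionary):
    """Sum dictionary values over tokens; positive sum -> 'pos', negative -> 'neg'."""
    score = sum(dictionary.get(tok, 0.0) for tok in text.lower().split())
    return "pos" if score > 0 else "neg" if score < 0 else "neutral"

def predict_direction(articles, dictionary):
    """Predict next-day index direction from the majority polarity of the day's articles."""
    votes = Counter(article_polarity(a, dictionary) for a in articles)
    return "up" if votes["pos"] >= votes["neg"] else "down"

todays_news = ["Exporters rally as earnings beat forecasts",
               "Construction shares slump on rate worries"]
print(predict_direction(todays_news, domain_sentiment))  # 'up'
```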

    A Study on Movement of the Free Face During Bench Blasting (전방 자유면의 암반 이동에 관한 연구)

    • Lee, Ki-Keun;Kim, Gab-Soo;Yang, Kuk-Jung;Kang, Dae-Woo;Hur, Won-Ho
      • Explosives and Blasting
      • /
      • v.30 no.2
      • /
      • pp.29-42
      • /
      • 2012
    • Variables influencing free face movement in rock blasting include the physical and mechanical properties of the rock, in particular the discontinuity characteristics, as well as explosive type, charge weight, burden, blast-hole spacing, delay time between blast-holes or rows, and stemming conditions. These variables also affect blast vibration, air blast and fragmentation size. In the design of surface blasting, priority is given to the safety of nearby buildings. Therefore, blast vibration has to be controlled by analyzing the free face movement at surface blasting sites, and the blasting operation needs to be optimized to improve fragmentation size. High-speed digital image analysis enables analysis of the initial movement of the rock free face, stemming optimality, fragment trajectory, face movement direction and velocity, as well as the optimal detonator initiation system. Even though the high-speed image analysis technique has been widely used in foreign countries, its applications can hardly be found in Korea. This work aims to carry out a fundamental study for optimizing blast design and evaluation using high-speed digital image analysis. A series of experiments was performed at two large surface blasting sites with rock types of shale and granite, respectively. Emulsion and ANFO were the explosives used for the study. Based on the digital image analysis, displacement and velocity of the free face were scrutinized along with the analysis of fragment size distribution. In addition, AUTODYN, a 2-D FEM model, was applied to simulate detonation pressure, detonation velocity, response time for the initiation of free face movement, and face movement shape. The results show that, regardless of rock type, the free face becomes curved like a bow because displacement and movement velocity reach their maximum near the center of the charged section. Compared with ANFO, the cases with emulsion result in larger detonation pressure and velocity and faster initiation of displacement.

    Structure and Variation of Tidal Flat Temperature in Gomso Bay, West Coast of Korea (서해안 곰소만 갯벌 온도의 구조 및 변화)

    • Lee, Sang-Ho;Cho, Yang-Ki;You, Kwang-Woo;Kim, Young-Gon;Choi, Hyun-Yong
      • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
      • /
      • v.10 no.1
      • /
      • pp.100-112
      • /
      • 2005
    • Soil temperature was measured from the surface to 40 cm depth at three stations with different heights in the tidal flat of Gomso Bay, west coast of Korea, for one month in every season of 2004 to examine the thermal structure and its variation. Mean temperature in the surface layer was higher in summer and lower in winter than in the lower layer, reflecting the seasonal variation of the vertically propagating temperature structure driven by heating and cooling at the tidal flat surface. The standard deviation of temperature decreased from the surface to the lower layer. Periodic variations of solar radiation energy and tide mainly caused the short-term variation of soil temperature, which was also intermittently influenced by precipitation and wind. Time series analysis showed power spectral energy peaks at periods of 24, 12 and 8 hours, with the strongest peak at the 24-hour period. These peaks can be interpreted as temperature waves forced by variations of solar radiation, the diurnal tide, and the interaction of both variations, respectively. EOF analysis showed that the first and second modes resolved 96% of the variation of the vertical temperature structure. The first mode was interpreted as the heating and cooling from the tidal flat surface, and the second mode as the effect of the phase lag produced by temperature wave propagation in the soil. The phase of heat transfer by the 24-hour period wave, analyzed by cross spectrum, showed that the mean phase difference of the temperature wave increased almost linearly with soil depth. The time lags implied by the phase difference from the surface to 10, 20 and 40 cm were 3.2, 6.5 and 9.8 hours, respectively. The vertical thermal diffusivity of the 24-hour period temperature wave was estimated using a one-dimensional thermal diffusion model. The diffusivity averaged over soil depths and seasons was $0.70{\times}10^{-6}m^2/s$ at the middle station and $0.57{\times}10^{-6}m^2/s$ at the lowest station. The depth-averaged diffusivity was large in spring and small in summer, and the seasonal mean diffusivity increased from 2 cm to 10 cm and decreased from 10 cm to 40 cm. Thermal propagation speeds were estimated as $8.75{\times}10^{-4}cm/s$, $3.8{\times}10^{-4}cm/s$ and $1.7{\times}10^{-4}cm/s$ from 2 cm to 10 cm, 20 cm and 40 cm, respectively, indicating a reduction of speed with increasing depth from the surface.
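The diffusivity estimate reported above follows from the standard one-dimensional solution for a periodic temperature wave, $T(z,t)=A\,e^{-z/d}\cos({\omega}t-z/d)$ with damping depth $d=\sqrt{2{\kappa}/{\omega}}$. The sketch below is only a back-of-the-envelope check, not the authors' procedure: it recovers a diffusivity of the same order of magnitude from the reported 3.2-hour lag at 10 cm depth.

```python
# Illustrative estimate of vertical thermal diffusivity from the phase lag of the 24-hour
# temperature wave, using T(z,t) = A*exp(-z/d)*cos(w*t - z/d) with d = sqrt(2*kappa/w).
# The 3.2 h lag over 0.10 m is taken from the abstract; everything else is a sketch.
import math

period_s = 24 * 3600.0          # 24-hour forcing period
omega = 2 * math.pi / period_s  # angular frequency (rad/s)

depth_m = 0.10                  # depth of the 10 cm sensor
time_lag_s = 3.2 * 3600.0       # observed phase lag at 10 cm (from the abstract)

speed = depth_m / time_lag_s    # propagation speed of the wave crest (m/s)
kappa = speed ** 2 / (2 * omega)  # since speed = omega*d = sqrt(2*kappa*omega)

print(f"propagation speed ~ {speed*100:.2e} cm/s")  # ~8.7e-4 cm/s, close to the reported 8.75e-4
print(f"thermal diffusivity ~ {kappa:.2e} m^2/s")   # ~5e-7 m^2/s, same order as 0.57-0.70e-6
```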

    The Anticancer Effect and Mechanism of Photodynamic Therapy Using 9-Hydroxypheophorbide-a and 660 nm Diode Laser on Human Squamous Carcinoma Cell Line. (9-hydroxypheophorbide-a와 660 nm 다이오드 레이저를 이용한 광역학치료의 항암효과와 치료기전에 대한 연구)

    • Ahn, Jin-Chul
      • Journal of Life Science
      • /
      • v.19 no.6
      • /
      • pp.770-780
      • /
      • 2009
    • A new photosensitizer, 9-Hydroxypheophorbide-a (9-HpbD-a), was derived from Spirulina platensis. We conducted a series of experiments, in vitro and in vivo, to evaluate the anticancer effect and mechanism of photodynamic therapy (PDT) using 9-HpbD-a and a 660 nm diode laser on a squamous carcinoma cell line. We studied the cytotoxic effects of pheophytin-a, 9-HpbD-a, 9-HpbD-a red and a 660 nm diode laser in a human head and neck cancer cell line (SNU-1041). Cell growth inhibition was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) reduction assay. The effect of 9-HpbD-a was higher than that of 9-HpbD-a red or pheophytin-a in PDT. We then tested the cytotoxic effects of 9-HpbD-a in vitro. The cultured SNU-1041 cells were treated with serial concentrations of 9-HpbD-a followed by various energy doses (0, 0.1, 0.5, 3.2 J/$cm^{2}$) and various incubation times (0, 3, 6, 9, 12 hr) before laser irradiation, and then the MTT assay was applied to measure the relative inhibitory effects of PDT. The optimal laser irradiation time was 30 minutes, and the cytotoxic effect according to incubation time after 9-HpbD-a treatment increased up to 6 hours, after which it showed no further increase. To observe the cell death mechanism after PDT, SNU-1041 cells were stained with Hoechst 33342 and propidium iodide after PDT and observed under transmission electron microscopy (TEM). The principal mechanism of PDT at a low dose of 9-HpbD-a was apoptosis, and at a high dose of 9-HpbD-a it was necrosis. PDT effects were also observed in a xenografted nude mouse model. Group I (no 9-HpbD-a, no laser irradiation) and Group II (9-HpbD-a injection only) showed no response (4/4, 100%), and Group III (laser irradiation only) showed recurrence (1/4, 25%) or no response (3/4, 75%). Group IV (9-HpbD-a + laser irradiation) showed complete response (10/16, 62.5%), recurrence (4/16, 25%) or no response (2/16, 12.5%). Group IV showed a significant remission rate compared to the other groups (p<0.05). These results suggest that 9-HpbD-a is a promising photosensitizer and that further studies on biodistribution, toxicity and mechanism of action are needed before 9-HpbD-a can be used as a photosensitizer in the clinical setting.

    Text Mining-Based Emerging Trend Analysis for the Aviation Industry (항공산업 미래유망분야 선정을 위한 텍스트 마이닝 기반의 트렌드 분석)

    • Kim, Hyun-Jung;Jo, Nam-Ok;Shin, Kyung-Shik
      • Journal of Intelligence and Information Systems
      • /
      • v.21 no.1
      • /
      • pp.65-82
      • /
      • 2015
    • Recently, there has been a surge of interest in finding core issues and analyzing emerging trends for the future. This represents efforts to devise national strategies and policies based on the selection of promising areas that can create economic and social added value. The existing studies, including those dedicated to the discovery of future promising fields, have mostly been dependent on qualitative research methods such as literature review and expert judgement. Deriving results from large amounts of information under this approach is both costly and time-consuming. Efforts have been made to make up for the weaknesses of the conventional qualitative analysis approach to selecting key promising areas through the discovery of future core issues and emerging trend analysis in various areas of academic research. There needs to be a paradigm shift toward implementing qualitative research methods along with quantitative research methods, such as text mining, in a mutually complementary manner. This shift is intended to ensure objective and practical emerging trend analysis results based on large amounts of data. However, even such studies have had shortcomings related to their dependence on simple keywords for analysis, which makes it difficult to derive meaning from the data. In addition, no study has so far been carried out to identify core issues and analyze emerging trends in special domains like the aviation industry. This shift is being witnessed in recent studies in various areas such as the steel industry, the information and communications technology industry, the construction industry in architectural engineering, and so on. This study focused on retrieving aviation-related core issues and emerging trends from research papers pertaining to aviation through text mining, one of the big data analysis techniques. In this manner, the promising future areas for the air transport industry are selected based on objective data from aviation-related research papers. In order to compensate for the difficulty of grasping the meaning of single words in emerging trend analysis at the keyword level, this study adopts topic analysis, a technique used to find general themes latent in sets of text documents. The analysis leads to the extraction of topics, which represent keyword sets, thereby discovering core issues and enabling emerging trend analysis. Based on the issues, the study identified aviation-related research trends and selected the promising areas for the future. Research on core issue retrieval and emerging trend analysis for the aviation industry based on big data analysis is still in its incipient stages, so the analysis targets for this study are restricted to data from aviation-related research papers. However, the study has significance in that it prepared a quantitative analysis model for continuously monitoring the derived core issues and presenting directions regarding the areas with good prospects for the future. In the future, the scope is slated to expand to cover relevant domestic or international news articles and bidding information as well, thus increasing the reliability of the analysis results. On the basis of the topic analysis results, core issues for the aviation industry will be determined. Then, emerging trend analysis for the issues will be implemented by year in order to identify the changes they undergo over time.
Through these procedures, this study aims to prepare a system for identifying key promising areas for the future aviation industry and for enabling rapid response. Additionally, the promising areas selected based on the aforementioned results and the analysis of pertinent policy research reports will be compared with the areas in which actual government investments are made. The results of this comparative analysis are expected to serve as useful reference material for future policy development and budget establishment.
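As a toy illustration of the topic-analysis step described above, the following sketch fits LDA to a handful of made-up aviation abstracts and prints the top keywords per topic. The documents, topic count, and vectorizer settings are assumptions; the study applies the same idea to a full corpus of aviation research papers.

```python
# Toy sketch: extract latent topics from paper abstracts with LDA and read off the
# top keywords per topic, which the study interprets as core issues.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "unmanned aerial vehicle traffic management and airspace integration",
    "composite materials for lightweight aircraft structures",
    "airline demand forecasting and air traffic flow optimization",
    "additive manufacturing of turbine engine components",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")  # keyword sets interpreted as core issues
```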

    A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

    • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
      • Journal of Intelligence and Information Systems
      • /
      • v.19 no.3
      • /
      • pp.1-23
      • /
      • 2013
    • To discover significant social issues such as unemployment, economic crisis and social welfare that are urgent problems to be solved in a modern society, the existing approach is for researchers to collect opinions from professional experts and scholars through either online or offline surveys. However, such a method is not always effective. Usually, due to the problem of expense, a large number of survey replies is seldom gathered. In some cases, it is also hard to find experts dealing with specific social issues. Thus, the sample set is often small and may have some bias. Furthermore, regarding a social issue, several experts may reach totally different conclusions because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which social issues are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our proposed matching algorithm is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA yields a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 as "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each text document into paragraphs. In the meantime, using LDA, we can extract a set of topics from the text documents. Based on our matching process, each paragraph is assigned to a topic, indicating that the paragraph best matches that topic. Finally, each topic has several best matched paragraphs. Furthermore, suppose there are a topic (e.g., Unemployment Problem) and its best matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company in Seoul"). In this case, we can grasp the detailed information of the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time.
Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Through this prototype system, we have detected various social issues appearing in our society and also shown the effectiveness of our proposed methods through experimental results. Note that our proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.
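The following sketch illustrates only the matching idea: a paragraph is scored against each labeled topic by summing log-probabilities of its tokens under the topic's term distribution (with a small smoothing constant for unseen words) and is assigned to the highest-scoring topic. The second topic, the tokenizer, and the smoothing value are hypothetical; the paper's generative-model formulation may differ in detail.

```python
# Sketch of paragraph-to-topic matching: score = sum of log P(token | topic), using the
# topic term probabilities from LDA (Topic1 reuses the example given in the abstract).
import math

topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Welfare":              {"welfare": 0.5, "pension": 0.3, "budget": 0.2},
}

def match_topic(paragraph, topics, eps=1e-6):
    """Return the topic whose term distribution best explains the paragraph's tokens."""
    tokens = paragraph.lower().split()
    scores = {
        name: sum(math.log(dist.get(tok, eps)) for tok in tokens)
        for name, dist in topics.items()
    }
    return max(scores, key=scores.get)

paragraph = "Up to 300 workers faced layoff as the business cut jobs amid unemployment fears"
print(match_topic(paragraph, topics))  # 'Unemployment Problem'
```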

    Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

    • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
      • Journal of Intelligence and Information Systems
      • /
      • v.26 no.4
      • /
      • pp.127-148
      • /
      • 2020
    • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation technology for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, the proportional expansion of data center infrastructure is inevitable. Monitoring the health of these data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures of IT facilities occur irregularly because of interdependence, and it is difficult to identify their cause. Previous studies predicting failure in data centers predicted failure by looking at a single server as a single state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the focus was on analyzing complex failures occurring within servers. Server-external failures include power, cooling, user errors, and so on. Since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring inside servers is difficult to determine, and adequate prevention has not yet been achieved. In particular, this is because server failures do not occur in isolation: one server's failure may cause failures in other servers, or be triggered by something coming from other servers. In other words, while existing studies assumed a single server that does not affect other servers and analyzed failures accordingly, this study assumes that failures have effects between servers. In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures that occur for each device are sorted in chronological order, and when a failure occurs in a specific piece of equipment, if another piece of equipment fails within 5 minutes of that time, the two failures are defined as occurring simultaneously. After configuring sequences for the devices that failed at the same time, 5 devices that frequently fail simultaneously within the configured sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server setting, the Hierarchical Attention Network deep learning model structure was used, in consideration of the fact that the level of multiple failures differs for each server. This algorithm increases prediction accuracy by giving greater weight to servers whose impact on the failure is larger. The study began with defining the types of failure and selecting the analysis target.
In the first experiment, the same collected data were analyzed under both a single-server assumption and a multiple-server assumption, and the results were compared. The second experiment improved the prediction accuracy in the complex-server case by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another. As a result of this study, it was confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that the effect of each server will differ, played a role in improving the analysis. In addition, by applying a different threshold for each server, the prediction accuracy could be improved. This study shows that failures whose cause is difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
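A minimal sketch of the hierarchical-attention idea, under the assumption of one LSTM encoder per server followed by attention over server embeddings, is given below. The layer sizes, input features, and toy data are illustrative; this is not the authors' implementation.

```python
# Sketch (not the authors' code) of a hierarchical attention model for multi-server
# failure prediction: an LSTM encodes each server's resource time series, and an
# attention layer weights the server embeddings before classification.
import torch
import torch.nn as nn

class HierarchicalAttentionFailureModel(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)  # per-server sequence encoder
        self.attn = nn.Linear(hidden, 1)                              # scores each server embedding
        self.head = nn.Linear(hidden, 1)                              # failure probability (logit)

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.view(b * s, t, f))          # encode each server independently
        server_emb = h[-1].view(b, s, -1)                       # (batch, n_servers, hidden)
        weights = torch.softmax(self.attn(server_emb), dim=1)   # attention weights over servers
        context = (weights * server_emb).sum(dim=1)             # weighted summary of all servers
        return torch.sigmoid(self.head(context)).squeeze(-1)

# Toy usage: 4 samples, 5 servers, 60 time steps of 8 resource metrics each.
model = HierarchicalAttentionFailureModel()
x = torch.randn(4, 5, 60, 8)
print(model(x).shape)  # torch.Size([4]) -- predicted failure probability per sample
```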

    Analysis on the Snow Cover Variations at Mt. Kilimanjaro Using Landsat Satellite Images (Landsat 위성영상을 이용한 킬리만자로 만년설 변화 분석)

    • Park, Sung-Hwan;Lee, Moung-Jin;Jung, Hyung-Sup
      • Korean Journal of Remote Sensing
      • /
      • v.28 no.4
      • /
      • pp.409-420
      • /
      • 2012
    • Since the Industrial Revolution, CO2 levels have been increasing along with climate change. In this study, we quantitatively analyze time-series changes in snow cover and statistically predict the vanishing point of the snow cover using remote sensing. The study area is Mt. Kilimanjaro, Tanzania. Twenty-three images from Landsat-5 TM and Landsat-7 ETM+, spanning the 27 years from June 1984 to July 2011, were acquired. First, atmospheric correction was performed on each image using the COST atmospheric correction model. Second, the snow cover area was extracted using the NDSI (Normalized Difference Snow Index) algorithm. Third, the minimum height of the snow cover was determined using the SRTM DEM. Finally, the vanishing point of the snow cover was predicted using the trend line of a linear function. The analysis was performed on the full set of 23 images and separately on the 17 dry-season images. Results show that the snow cover area decreased by approximately $6.47km^2$, from $9.01km^2$ to $2.54km^2$, equivalent to a 73% reduction. The minimum height of the snow cover increased by approximately 290 m, from 4,603 m to 4,893 m. The trend lines show that the snow cover area decreased each year by approximately $0.342km^2$ in the dry season and $0.421km^2$ overall. In contrast, the annual increase in the minimum height of the snow cover was approximately 9.848 m in the dry season and 11.251 m overall. Based on this analysis, the snow cover is predicted to vanish by 2020 at the 95% confidence level. This study can be used to monitor global climate change by providing the change in snow cover area and reference data for future research on this or similar areas.
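For illustration, the two quantitative steps named in the abstract, an NDSI snow mask and a linear trend extrapolated to a vanishing year, might look like the sketch below. The 0.4 threshold and the synthetic annual series are assumptions, not the paper's calibration or data.

```python
# Sketch: NDSI-based snow area and a linear trend extrapolated to the year the area hits zero.
import numpy as np

def snow_area_km2(green, swir, pixel_area_km2, threshold=0.4):
    """NDSI = (Green - SWIR) / (Green + SWIR); pixels above the threshold count as snow."""
    ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)
    return np.count_nonzero(ndsi > threshold) * pixel_area_km2

green = np.array([[0.8, 0.2], [0.7, 0.1]])   # toy green-band reflectance
swir  = np.array([[0.2, 0.3], [0.1, 0.3]])   # toy shortwave-infrared reflectance
print(snow_area_km2(green, swir, pixel_area_km2=0.0009))  # 2 snow pixels * 0.0009 km^2

# Toy annual snow-cover series mimicking a decline from ~9 to ~2.5 km^2 over 27 years.
years = np.arange(1984, 2012)
areas = 9.0 - 0.23 * (years - 1984) + np.random.default_rng(0).normal(0, 0.2, years.size)

slope, intercept = np.polyfit(years, areas, 1)   # linear trend (km^2 per year)
vanishing_year = -intercept / slope              # year where the fitted area reaches zero
print(f"trend: {slope:.3f} km^2/yr, predicted vanishing year ~ {vanishing_year:.0f}")
```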

    Global Temperature Trends of Lower Stratosphere Derived from the Microwave Satellite Observations and GCM Reanalyses (마이크로파 위성관측과 모델 재분석에서 조사된 전지구에 대한 하부 성층권 온도의 추세)

    • Yoo, Jung-Moon;Yoon, Sun-Kyung;Kim, Kyu-Myong
      • Journal of the Korean earth science society
      • /
      • v.22 no.5
      • /
      • pp.388-404
      • /
      • 2001
    • In order to examine the relative accuracy of satellite observations and model reanalyses of lower stratospheric temperature trends, two satellite-observed Microwave Sounding Unit (MSU) channel 4 (Ch 4) brightness temperature datasets and two GCM (ECMWF and GEOS) reanalyses during 1981${\sim}$1993 have been intercompared using regression analysis of the time series. The satellite data for the period 1980${\sim}$1999 are MSU4 at nadir direction and SC4 at multiple scan angles, derived in this study and in Spencer and Christy (1993), respectively. The MSU4 temperature over the globe during the above period shows a cooling trend of -0.35 K/decade, and the cooling over the global ocean is 1.2 times as large as that over land. Lower stratospheric temperatures during the common period (1981${\sim}$1993) globally show cooling in MSU4 (-0.14 K/decade), SC4 (-0.42 K/decade) and GEOS (-0.15 K/decade), all of which have strong annual cycles. However, ECMWF shows slight warming and a weak annual cycle. The 95% confidence intervals of the lower stratospheric temperature trends are greater than those of the midtropospheric (channel 2) trends, indicating less confidence in Ch 4. The difference in trend between the above two atmospheric layers is largest over northern hemispheric land. MSU4 has low correlation with ECMWF over the globe, and a high correlation with GEOS near the Korean peninsula. Lower correlations (r < 0.6) between MSU4 and SC4 (or ECMWF) occur over the $30^{\circ}$N latitude belt, where the subtropical jet stream passes. Temporal correlation among them over the globe is generally high (r > 0.6). Four kinds of lower stratospheric temperature data near the Korean peninsula commonly show cooling trends, of which the SC4 value (-0.82 K/decade) is the largest.
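The kind of trend estimate quoted above (in K/decade, with a 95% confidence interval) can be illustrated with an ordinary least squares fit to a monthly series, as in the sketch below; the synthetic data only stand in for the MSU channel 4 record.

```python
# Sketch: OLS trend on a monthly brightness-temperature anomaly series, reported in
# K/decade with a rough 95% interval. The synthetic series is a stand-in for MSU Ch 4.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
months = np.arange(1981, 1994, 1 / 12.0)                             # 1981-1993, monthly
temps = -0.035 * (months - 1981) + rng.normal(0, 0.3, months.size)   # ~-0.35 K/decade + noise

slope, intercept, r, p, se = stats.linregress(months, temps)
print(f"trend = {slope*10:+.2f} K/decade, 95% CI ~ +/-{1.96*se*10:.2f} K/decade")
```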

