• Title/Summary/Keyword: correlated channel

Search Results: 225

Application of an Automated Time Domain Reflectometry to Solute Transport Study at Field Scale: Transport Concept (시간영역 광전자파 분석기 (Automatic TDR System)를 이용한 오염물질의 거동에 관한 연구: 오염물질 운송개념)

  • Kim, Dong-Ju
    • Economic and Environmental Geology / v.29 no.6 / pp.713-724 / 1996
  • The time-series resident solute concentrations, monitored at two field plots using the automated 144-channel TDR system by Kim (this issue), are used to investigate the dominant transport mechanism at the field scale. Two models, based on contradictory assumptions for describing solute transport in the vadose zone, are fitted to the measured mean breakthrough curves (BTCs): the deterministic one-dimensional convection-dispersion model (CDE) and the stochastic-convective lognormal transfer function model (CLT). In addition, moment analysis has been performed using the probability density functions (pdfs) of the travel time of resident concentration. The moment analysis has shown that the first and second time moments of the resident pdf are larger than those of the flux pdf. From the time moments, expressed as functions of the model parameters, the variance and dispersion of resident solute travel times are derived. The relationship between the variance or dispersion of solute travel time and depth has been found to be identical for both the time-series flux and resident concentrations. Based on these relationships, the two models have been tested. However, due to significant variations of transport properties across depth, the test has led to unreliable results. Consequently, model performance has been evaluated based on the predictability of the time-series resident BTCs at other depths after calibration at the first depth. The evaluation of model predictability leads to a clear conclusion: for both experimental sites, the CLT model gives more accurate predictions than the CDE model. This suggests that solute transport in natural field soils is more likely governed by a stream-tube concept with correlated flow than by a complete-mixing model. The poor prediction of the CDE model is attributed to its underestimation of solute spreading, which results in an overprediction of peak concentrations.

  • PDF
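The time-moment analysis described in this abstract can be illustrated with a short numerical sketch. The breakthrough-curve data and function names below are hypothetical, for illustration only, and are not the authors' code:

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal integration, written out explicitly for portability."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def btc_time_moments(t, c):
    """First time moment (mean travel time) and second central moment
    (travel-time variance) of a breakthrough curve c(t), treating the
    normalized curve as a travel-time pdf."""
    p = c / _trapezoid(c, t)                # normalize BTC to a pdf
    m1 = _trapezoid(t * p, t)               # first moment: mean travel time
    var = _trapezoid((t - m1) ** 2 * p, t)  # second central moment: variance
    return m1, var

# Synthetic BTC: Gaussian pulse centered at t = 4 with variance 0.25
t = np.linspace(0.0, 10.0, 1001)
c = np.exp(-(t - 4.0) ** 2 / (2 * 0.25))
m1, var = btc_time_moments(t, c)   # m1 ~ 4.0, var ~ 0.25
```

Comparing such moments between resident and flux pdfs is what lets the abstract conclude that the resident moments are larger.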

The Significance of Electroencephalography in the Hypothermic Circulatory Arrest in Human (인체에서 저체온 완전 순환 정지 시 뇌파검사의 의의)

  • 전양빈;이창하;나찬영;강정호
    • Journal of Chest Surgery / v.34 no.6 / pp.465-471 / 2001
  • Background: Hypothermia protects the brain by suppressing cerebral metabolism, and sufficient cooling is performed before total circulatory arrest (TCA) in operations for aortic disease. Generally, TCA has been performed based on rectal or nasopharyngeal temperature; however, there is no definite range of optimal temperature for TCA, nor an objective indicator of the temperature at which TCA is safe. In this study, we tried to determine the optimal temperature range for safe hypothermic circulatory arrest using intraoperative electroencephalography (EEG), and studied the role of EEG as an indicator of optimal hypothermia. Material and Method: Between March 1999 and August 31, 2000, 27 patients underwent graft replacement of part of the thoracic aorta using hypothermia and TCA with intraoperative EEG. Rectal and nasopharyngeal temperatures were monitored continuously from the time of anesthetic induction, and the EEG was recorded with a ten-channel portable electroencephalograph from anesthetic induction to electrocerebral silence (ECS). Result: At ECS, the rectal and nasopharyngeal temperatures were not consistent but variable (rectal 11°C-25°C, nasopharyngeal 7.7°C-23°C). The correlation between the two temperatures was not significant (p=0.171). The cooling time from the start of cardiopulmonary bypass to ECS was also variable (25-127 min), but correlated with body surface area (p=0.027). Conclusion: We found that ECS appeared at various body temperatures; thus, rectal or nasopharyngeal temperature was not useful in identifying ECS. In conclusion, cerebral protection during hypothermic circulatory arrest cannot be fully assured on the basis of body temperature alone, and intraoperative EEG is therefore a necessary method for determining the range of optimal hypothermia for safe circulatory arrest.

  • PDF

Swell Effect Correction for the High-resolution Marine Seismic Data (고해상 해저 탄성파 탐사자료에 대한 너울영향 보정)

  • Lee, Ho-Young;Koo, Nam-Hyung;Kim, Wonsik;Kim, Byoung-Yeop;Cheong, Snons;Kim, Young-Jun
    • Geophysics and Geophysical Exploration / v.16 no.4 / pp.240-249 / 2013
  • The quality of seismic data from marine geological and engineering surveys deteriorates because of sea swell. A marine survey is often conducted when the swell height is about 1~2 m. Swell-effect correction is required to enhance the horizontal continuity of the seismic data and to achieve a resolution finer than 1 m. We applied the swell correction to 8-channel high-resolution airgun seismic data and 3.5 kHz subbottom profiler (SBP) data. Correct sea-bottom detection is important for the swell correction. To detect the sea bottom, we used the maximum amplitude of the seismic signal around the expected sea bottom and picked the first rising sample larger than a threshold value related to that maximum amplitude. To find the sea bottom more easily in low-quality data, we transformed the input data into envelope data or cross-correlated it with the sea-bottom wavelet. We averaged the picked sea-bottom depths and calculated the correction values. The maximum correction for the airgun data was about 0.8 m, and the maximum corrections for the two kinds of 3.5 kHz SBP data were 0.5 m and 2.0 m, respectively. Using the appropriate swell-correction methods, we enhanced the continuity of the subsurface layers and produced high-quality seismic sections.
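The threshold-picking and averaging procedure this abstract describes can be sketched as follows. The window bounds, the threshold fraction, and the function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pick_sea_bottom(trace, window, frac=0.3):
    """Within the expected sea-bottom window, find the peak amplitude and
    return the index of the first sample whose amplitude exceeds
    frac * peak (threshold picking, as described in the abstract)."""
    lo, hi = window
    seg = np.abs(np.asarray(trace[lo:hi], dtype=float))
    thresh = frac * seg.max()
    return lo + int(np.argmax(seg >= thresh))

def swell_static_shifts(picks, half_width=10):
    """Per-trace correction: running-average pick depth minus each pick.
    Shifting every trace by its correction flattens the residual swell."""
    picks = np.asarray(picks, dtype=float)
    k = np.ones(2 * half_width + 1)
    # Normalized moving average so the edges of the line stay unbiased.
    smooth = np.convolve(picks, k, "same") / np.convolve(np.ones_like(picks), k, "same")
    return smooth - picks

# A single spike at sample 150 inside the expected window is picked correctly.
trace = np.zeros(300)
trace[150] = 1.0
pick = pick_sea_bottom(trace, (100, 200))   # -> 150
```

For low-quality traces, the same picker could be run on the envelope or on a cross-correlation with the sea-bottom wavelet, as the abstract suggests.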

An Analysis of IT Trends Using Tweet Data (트윗 데이터를 활용한 IT 트렌드 분석)

  • Yi, Jin Baek;Lee, Choong Kwon;Cha, Kyung Jin
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.143-159 / 2015
  • Predicting IT trends has long been an important subject for information systems research. IT trend prediction makes it possible to recognize emerging eras of innovation and to allocate budgets in preparation for rapidly changing technological trends. Toward the end of each year, various domestic and global organizations predict and announce IT trends for the following year. For example, Gartner predicts the top 10 IT trends for the next year, and these predictions affect IT and industry leaders and organizations' basic assumptions about technology and the future of IT; however, the accuracy of these reports is difficult to verify. Social media data can be a useful tool for verifying that accuracy. As social media services have gained in popularity, they are used in a variety of ways, from posting about personal daily life to keeping up to date with news and trends. In recent years, rates of social media activity in Korea have reached unprecedented levels. Hundreds of millions of users now participate in online social networks and share their opinions and thoughts with colleagues and friends. In particular, Twitter is currently the major microblog service; its central function, the 'tweet', lets users report their current thoughts and actions, comment on news, and engage in discussions. For the analysis of IT trends, we chose tweet data because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leading on technology. Previous studies have found that tweet data provides useful information and detects societal trends effectively, and that Twitter can track issues faster than other media such as newspapers. Therefore, this study investigates how frequently the predicted IT trends for the following year, announced by public organizations, are mentioned on social network services like Twitter.
IT trend predictions for 2013, announced near the end of 2012 by two domestic organizations, the National IT Industry Promotion Agency (NIPA) and the National Information Society Agency (NIA), were used as the basis for this research. The present study compares the Twitter data generated in Seoul (Korea) with the predictions of the two organizations to analyze the differences. Twitter data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, to process various unrefined forms of unstructured data. To overcome these challenges, we used SAS IRS (Information Retrieval Studio), developed by SAS, to capture the trends while processing big streaming datasets of Twitter in real time. The system offers a framework for crawling, normalizing, analyzing, indexing, and searching tweet data. As a result, we crawled the entire Twitter sphere in the Seoul area and obtained 21,589 tweets from 2013 to review how frequently the IT trend topics announced by the two organizations were mentioned by people in Seoul. The results show that most IT trends predicted by NIPA and NIA were frequently mentioned on Twitter, except for some topics such as 'new types of security threat', 'green IT', and 'next generation semiconductor'; these topics are non-generalized compound words, so they may be mentioned on Twitter in other words. To answer whether the IT trend tweets from Korea are related to the following year's IT trends in the real world, we compared Twitter's trending topics with those in Nara Market, Korea's online e-Procurement system, a nationwide web-based system dealing with the whole procurement process of all public organizations in Korea. The correlation analysis shows that tweet frequencies on the IT trend topics predicted by NIPA and NIA are significantly correlated with the frequencies of IT topics mentioned in project announcements by Nara Market in 2012 and 2013.
The main contribution of our research can be found in the following aspects: i) the IT topic predictions announced by NIPA and NIA can provide an effective guideline for IT professionals and researchers in Korea who are looking for verified IT topic trends in the following year, and ii) researchers can use Twitter to get useful ideas for detecting and predicting dynamic trends in technological and social issues.
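The correlation check between tweet-topic frequencies and Nara Market announcement frequencies can be sketched like this. The per-topic counts below are made-up illustrative numbers, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two frequency vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm * xm).sum() * (ym * ym).sum()))

# Hypothetical per-topic mention counts: Seoul tweets vs. procurement notices.
tweet_freq  = [120, 45, 300, 80, 210, 15, 95]
market_freq = [100, 50, 280, 70, 190, 20, 110]
r = pearson_r(tweet_freq, market_freq)   # strong positive correlation
```

A significantly positive r across topics is the kind of evidence the abstract reports for the 2012-2013 comparison.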

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.163-176 / 2014
  • Social media is becoming the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs. However, measuring television ratings has been given little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch. In a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features should be key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set for the experiment. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis.
The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings. This implies that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find that there is a time-dependency in the correlation of features between the periods before and after the broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectations for the program or their disappointment over not being able to watch it. The features highly correlated before the broadcast differ from those after the broadcast. This result shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words have their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance, despite carrying a negative meaning. Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research provides a basis for estimating the response to, or satisfaction with, broadcast programs using the time-dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
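The time-dependency finding, that a given word correlates with ratings either before or after broadcast, can be sketched as a per-word window comparison. The function name and the numbers below are hypothetical illustrations, not the study's data:

```python
import numpy as np

def best_window(counts_before, counts_after, ratings):
    """For one word, compare its rating correlation in the pre- and
    post-broadcast windows and return the more relevant window."""
    r_before = np.corrcoef(counts_before, ratings)[0, 1]
    r_after = np.corrcoef(counts_after, ratings)[0, 1]
    if abs(r_before) >= abs(r_after):
        return "before", float(r_before)
    return "after", float(r_after)

# A word whose pre-broadcast counts track ratings closely is assigned the
# "before" window; its post-broadcast counts here are unrelated.
ratings = [10.0, 12.0, 9.0, 15.0, 11.0]
window, r = best_window(ratings, [1.0, 2.0, 3.0, 4.0, 5.0], ratings)
```

Running this per candidate word would reproduce the kind of before/after split the abstract reports (145 words peaking before broadcast, 68 after).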