
Discounted Cost Model of Condition-Based Maintenance Regarding Cumulative Damage of Armor Units of Rubble-Mound Breakwaters as a Discrete-Time Stochastic Process (경사제 피복재의 누적피해를 이산시간 확률과정으로 고려한 조건기반 유지관리의 할인비용모형)

  • Lee, Cheol-Eung;Park, Dong-Heon
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.29 no.2
    • /
    • pp.109-120
    • /
    • 2017
  • A discounted cost model for the preventive maintenance of armor units of rubble-mound breakwaters is mathematically derived by combining a deterioration model, based on a discrete-time stochastic process of shock occurrences, with the cost model of a renewal process. The discounted cost model of condition-based maintenance proposed in this paper can take into account the nonlinearity of the cumulative damage process as well as the discounting effect of cost. The model is verified satisfactorily by comparing the present results with previous results. In addition, a sensitivity analysis of the model variables shows that the more important the system is, the more often preventive maintenance should be implemented; however, this tendency reverses as the interest rate increases. The present model has been applied to the armor units of rubble-mound breakwaters. The parameters of the damage intensity function have been estimated through time-dependent prediction of the expected cumulative damage level obtained from the sample path method. In particular, it is confirmed that the shock occurrences can be treated as a discrete-time stochastic process by comparing the effects of their uncertainty on the expected cumulative damage level against the homogeneous Poisson process and the doubly stochastic Poisson process, which are continuous-time stochastic processes. It can also be seen that the stochastic process of cumulative damage depends directly on the design conditions, so the preventive maintenance plan varies with them. Finally, the optimal period and scale of preventive maintenance for the armor units of rubble-mound breakwaters can be quantitatively determined from the failure limits, the levels of importance of the structure, and the interest rates.
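In outline, a discounted condition-based cost model of this type combines a discrete-time shock-damage process with the renewal-reward argument. The following generic formulation is an illustrative sketch, not necessarily the exact model derived in the paper. If $Y_i$ is the damage caused by the $i$-th shock, the cumulative damage after $n$ shocks is

\[
X_n = \sum_{i=1}^{n} Y_i , \qquad n = 0, 1, 2, \ldots
\]

A maintenance action is triggered at the first epoch $T = \min\{n : X_n \ge z_p\}$ (preventive if $X_T$ is still below the failure limit $z_f > z_p$, corrective otherwise), and with interest rate $r$ a cycle-end cost $c_T$ paid at epoch $T$ is discounted by $(1+r)^{-T}$. The standard discounted renewal-reward identity then gives the expected total cost over an unbounded horizon,

\[
\mathbb{E}[C] = \frac{\mathbb{E}\!\left[c_T\,(1+r)^{-T}\right]}{1 - \mathbb{E}\!\left[(1+r)^{-T}\right]} ,
\]

which is minimized over the preventive level $z_p$ and the maintenance period.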

Genotype $\times$ Environment Interaction of Rice Yield in Multi-location Trials (벼 재배 품종과 환경의 상호작용)

  • 양창인;양세준;정영평;최해춘;신영범
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.46 no.6
    • /
    • pp.453-458
    • /
    • 2001
  • The Rural Development Administration (RDA) of Korea operates a system called the Rice Variety Selection Tests (RVST), implemented in the Agricultural Research and Extension Services of eight provinces. RVST's objective is to provide accurate yield estimates and to select varieties well adapted to each province. Systematic evaluation of the entries included in RVST is a highly important task for selecting the varieties best adapted to a specific location and for observing the performance of entries across a wide range of test sites within a region. The rice yield data in RVST for ordinary transplanting in Kangwon province during 1997-2000 were analyzed. The experiments were carried out in three replications of a randomized complete block design with eleven entries across five locations. The Additive Main Effects and Multiplicative Interaction (AMMI) model was employed to examine the interaction between genotype and environment (G$\times$E) in biplot form. Genotype variability was as high as 66%, followed by G$\times$E interaction variability at 21% and environment variability at 13%. The G$\times$E interaction was partitioned into two significant (P<0.05) principal components. Pattern analysis was used to interpret the G$\times$E interaction and adaptability. The major determinants among the meteorological factors in the G$\times$E matrix were canopy minimum temperature, minimum relative humidity, sunshine hours, precipitation, and mean cloud amount. Odaebyeo, Obongbyeo, and Jinbubyeo were relatively stable varieties in all the regions. Furthermore, the varieties best adapted to each region, in terms of productivity, were identified.
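For reference, the AMMI model used here has the standard crop-science form (the symbols follow the usual convention, not necessarily the paper's notation):

\[
y_{ge} = \mu + \alpha_g + \beta_e + \sum_{n=1}^{N} \lambda_n \gamma_{gn} \delta_{en} + \varepsilon_{ge} ,
\]

where $\mu$ is the grand mean, $\alpha_g$ and $\beta_e$ are the genotype and environment main effects, and the G$\times$E interaction is decomposed by singular value decomposition into $N$ multiplicative terms with singular values $\lambda_n$ and genotype and environment scores $\gamma_{gn}$, $\delta_{en}$ (here $N=2$ significant components). The biplot then plots the scaled scores $\lambda_n^{1/2}\gamma_{gn}$ and $\lambda_n^{1/2}\delta_{en}$ for genotypes and environments on the same axes.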


Adhesion Characteristics and the High Pressure Resistance of Biofilm Bacteria in Seawater Reverse Osmosis Desalination Process (역삼투 해수담수화 공정 내 바이오필름 형성 미생물의 부착 및 고압내성 특성)

  • Jung, Ji-Yeon;Lee, Jin-Wook;Kim, Sung-Youn;Kim, In-S.
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.31 no.1
    • /
    • pp.51-57
    • /
    • 2009
  • Biofouling in the seawater reverse osmosis (SWRO) desalination process causes many problems, such as flux decline, biodegradation of the membrane, increased cleaning time, and increased energy consumption and operational cost. Biofouling is therefore considered the most critical problem in system operation. To control biofouling at an early stage, the bacteria most responsible for it must be detected. In this study, six model bacteria were chosen based on reports in the literature and on phylogenetic analysis of the seawater intake and a fouled RO membrane: Bacillus sp., Flavobacterium sp., Mycobacterium sp., Pseudomonas aeruginosa, Pseudomonas fluorescens, and Rhodobacter sp. The adhesion to the RO membrane, the high-pressure resistance, and the hydrophobicity of the six model bacteria were examined to determine their fouling potential. Rhodobacter sp. and Mycobacterium sp. attached to the RO membrane surface much better than the others used in this study. The hydrophobicity test revealed that bacteria with high hydrophobicity, or with a contact angle similar to that of the RO membrane ($63^{\circ}$), attached easily to the membrane surface. P. aeruginosa, which is highly hydrophilic (contact angle of $23.07^{\circ}$), showed the least adhesion among the six model bacteria. After a pressure of 800 psi was applied to the samples, Rhodobacter sp. showed the highest reduction rate, with 59-73% of the cells removed from the membrane under pressure. P. fluorescens, on the other hand, proved to be the most pressure-resistant of the six model bacteria. The difference between the reduction rates obtained by direct counting and plate counting indicates that the viability of each model bacterium was significantly affected by the high pressure: most cells subjected to high pressure were unable to form colonies even though they maintained their structural integrity.

Determination of Petroleum Aromatic Hydrocarbons in Seawater Using Headspace Solid-Phase Microextraction Coupled to Gas Chromatography/Mass Spectrometry (HS-SPME-GC/MS를 이용한 해수 내 유류계 방향족탄화수소 분석법)

  • An, Joon Geon;Shim, Won Joon;Ha, Sung Yong;Yim, Un Hyuk
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.17 no.1
    • /
    • pp.27-35
    • /
    • 2014
  • A headspace solid-phase microextraction (HS-SPME) procedure followed by gas chromatography/mass spectrometry has been developed for the simultaneous determination of petroleum aromatic hydrocarbons in seawater, including benzene, toluene, ethylbenzene, and xylene isomers (BTEX) and polycyclic aromatic hydrocarbons (PAHs). The advantages of SPME over traditional sample-preparation methods are ease of operation, reusable fibers, system portability, and minimal contamination and sample loss during transport and storage. The SPME fiber, extraction time, temperature, stirring speed, and GC desorption time were the key extraction parameters considered in this study. Among the three SPME fibers tested, PDMS ($100{\mu}m$), CAR/PDMS ($75{\mu}m$), and PDMS/DVB ($65{\mu}m$), the $65{\mu}m$ PDMS/DVB fiber showed the best extraction efficiencies over the molecular weight range of 78 to 202. The other extraction parameters were then optimized using the $65{\mu}m$ PDMS/DVB fiber. The final optimized conditions were an extraction time of 60 min, an extraction temperature of $50^{\circ}C$, a stirring speed of 750 rpm, and a GC desorption time of 3 min. When applied to artificially contaminated seawater, such as a water-accommodated fraction, the optimized HS-SPME-GC/MS method showed performance comparable to conventional methods. The proposed protocol can be an attractive alternative for the analysis of BTEX and PAHs in seawater.

A historical study on the flexibility square-format typeface and the prospects - Focused on the three-pairs fonts of hangeul - (탈네모글꼴에 관한 역사적 연구와 전망 - 세벌식 한글 글꼴을 중심으로 -)

  • Yu, Jeong-Mi
    • Archives of design research
    • /
    • v.19 no.2 s.64
    • /
    • pp.241-250
    • /
    • 2006
  • Hangeul, Korea's unique script, was invented according to explicit character-making principles and on the basis of scholars' exhaustive research. While most scripts in the world evolved naturally, Hangeul was invented from a precise linguistic analysis of its time and is therefore among the most scientific and rational scripts in the world. Nevertheless, Hangeul typeface designs have not correctly inherited this scientific and rational ideology, because square letterforms have been used intact under the influence of the Chinese characters that prevailed at the time. To design a single set of square characters, as many as 11,172 glyphs must be drawn, which means the advantages of Hangeul are not fully used; Hangeul was invented to visualize every sound through combinations of its 28 vowels and consonants. The problems of such square fonts began to be identified in the early 1900s, when typewriters were first introduced from the West. Since a typewriter lays out the component letters on its keyboard and combines them mechanically, letters can be composed easily on it. The so-called "flexibility square-format" typeface, that is, a typeface freed from the square frame, was born this way. In particular, the three-set (sebeolsik) fonts among these compose all syllables from a set of just 67 letterforms, vowels and consonants included. The three-set font system can solve the problems arising from conventional square fonts and inherit the original ideology of the invention of Hangeul. This study reviews the history of three-set font designs facilitated by the mechanical encoding of Hangeul and suggests desirable directions for future Hangeul fonts. Since non-square typefaces are expected to evolve further with the development of digital technology, they would serve the information age in terms of both function and convenience. Just as Hunminjeongeum sought literal independence from the Chinese characters, so non-square typeface designs would serve to recover the identity of Hangeul font design.
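The combinatorial point above (a small set of basic letters versus 11,172 precomposed syllables) can be illustrated with the Unicode Hangul syllable arithmetic: modern encodings use 19 initial, 21 medial, and 28 final-position values (final index 0 meaning no final consonant), which is exactly where the 11,172 figure comes from.

```python
# Sketch: why a three-set (sebeolsik-style) font needs only jamo shapes,
# while a precomposed square-format font needs all 11,172 syllables.
# Unicode Hangul syllables are laid out algorithmically from U+AC00.

LEADS, VOWELS, TAILS = 19, 21, 28   # tail index 0 = no final consonant

def compose(lead: int, vowel: int, tail: int = 0) -> str:
    """Compose one precomposed syllable from jamo indices."""
    return chr(0xAC00 + (lead * VOWELS + vowel) * TAILS + tail)

# Total precomposed syllables a square-format font must cover:
print(LEADS * VOWELS * TAILS)   # 11172
print(compose(0, 0, 0))         # '가', the first syllable in the block
```

A three-set font thus only needs drawn glyphs for the individual jamo, while a square-format font must provide a separate drawn form for every one of the 11,172 composed syllables.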


EFFECT OF INSTRUMENT COMPLIANCE ON THE POLYMERIZATION SHRINKAGE STRESS MEASUREMENTS OF DENTAL RESIN COMPOSITES (측정장치의 compliance 유무가 복합레진의 중합수축음력의 측정에 미치는 영향)

  • Seo, Deog-Gyu;Min, Sun-Hong;Lee, In-Bog
    • Restorative Dentistry and Endodontics
    • /
    • v.34 no.2
    • /
    • pp.145-153
    • /
    • 2009
  • The purpose of this study was to evaluate the effect of instrument compliance on polymerization shrinkage stress measurements of dental composites. The contraction strain and stress of composites during light curing were measured by a custom-made stress-strain analyzer consisting of a displacement sensor, a cantilever load cell, and a negative feedback mechanism. The instrument can measure the polymerization stress in two modes: a with-compliance mode, in which instrument compliance is allowed, and a without-compliance mode, in which it is not. A flowable composite (Filtek Flow: FF) and two universal hybrid composites (Z100: Z1 and Z250: Z2) were studied. A silane-treated metal rod with a diameter of 3.0 mm was fixed at the free end of the load cell, and another metal rod was fixed on the base plate. Composite 1.0 mm thick was placed between the two rods and light cured. The axial shrinkage strain and stress of the composite were recorded for 10 minutes during polymerization, and the tensile modulus of the materials was also determined with the instrument. Statistical analysis was conducted by ANOVA, paired t-test, and Tukey's test (${\alpha}<0.05$). There were significant differences between the two measurement modes and among the materials. In the with-compliance mode, the contraction stress of FF was the highest at 3.11 (0.13) MPa, followed by Z1 at 2.91 (0.10) and Z2 at 1.94 (0.09) MPa. When instrument compliance was not allowed, the contraction stress of Z1 was the highest at 17.08 (0.89) MPa, followed by FF at 10.11 (0.29) and Z2 at 9.46 (1.63) MPa. The tensile moduli of Z1, Z2, and FF were 2.31 (0.18), 2.05 (0.20), and 1.41 (0.11) GPa, respectively. In the with-compliance mode, the measured stress correlated with the axial shrinkage strain of the composite, while without compliance the elastic modulus of the materials played the dominant role in the stress measurement.

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a lot of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, often in the natural human languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can produce incorrect results far from users' intentions. Even though much progress has been made over the last years in enhancing the performance of search engines to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. It tests the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the dictionary and the corpus were evaluated both combined and as separate entities, using cross-validation. Only nouns, the targets of word sense disambiguation, were selected. 
93,522 word senses among 265,655 nouns, and 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it was tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named-entity dictionary of a Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built during the pre-processing stage, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. The experiment shows that better precision and recall are obtained with the merged corpus, demonstrating the effectiveness of merging the examples in the Korean standard unabridged dictionary with the Sejong Corpus. This suggests the approach can practically enhance the performance of Internet search engines and help to capture the more accurate meaning of a sentence in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations and/or partial combinations of the senses in a sentence. 
Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
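As an illustration of the supervised approach described above, here is a minimal Naïve Bayes sense classifier with add-one smoothing; the ambiguous word, senses, and tiny training "corpus" are hypothetical stand-ins for the dictionary and Sejong data, not the paper's actual resources.

```python
# Minimal sketch of Naive Bayes word-sense disambiguation with add-one
# (Laplace) smoothing. Senses and context words are toy illustrations.
import math
from collections import Counter, defaultdict

# training pairs (sense, context words) for the ambiguous noun "bank"
train = [
    ("finance", ["loan", "money", "interest"]),
    ("finance", ["deposit", "money", "account"]),
    ("river",   ["water", "fishing", "shore"]),
    ("river",   ["mud", "water", "erosion"]),
]

sense_counts = Counter(s for s, _ in train)
word_counts = defaultdict(Counter)
vocab = set()
for sense, ctx in train:
    word_counts[sense].update(ctx)
    vocab.update(ctx)

def disambiguate(context):
    """Pick argmax over senses of P(s) * prod_w P(w|s), in log space."""
    best, best_lp = None, -math.inf
    for sense in sense_counts:
        lp = math.log(sense_counts[sense] / len(train))
        total = sum(word_counts[sense].values())
        for w in context:
            lp += math.log((word_counts[sense][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

print(disambiguate(["money", "loan"]))   # finance
print(disambiguate(["water", "shore"]))  # river
```

The real system replaces the toy counts with sense vectors built from the merged dictionary/corpus and scores contexts in a vector space model, but the argmax-over-senses structure is the same.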

Attention to the Internet: The Impact of Active Information Search on Investment Decisions (인터넷 주의효과: 능동적 정보 검색이 투자 결정에 미치는 영향에 관한 연구)

  • Chang, Young Bong;Kwon, YoungOk;Cho, Wooje
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.117-129
    • /
    • 2015
  • As the Internet becomes ubiquitous, a large volume of information is posted on it, growing exponentially every day. Accordingly, it is not unusual for investors in stock markets to gather and compile firm-specific or market-wide information through online searches. Importantly, it has become easier for investors to acquire value-relevant information for their investment decisions with the help of powerful Internet search tools. Our study examines whether the Internet helps investors assess a firm's value better, using firm-level data over a long period spanning January 2004 to December 2013. To this end, we construct weekly search volumes for information technology (IT) services firms on the Internet. We limit our focus to IT firms since they often hold intangible assets and are relatively less recognized by the public, which makes them hard to measure. To obtain information on those firms, investors are more likely to consult the Internet and use the information to value the firms more accurately and eventually improve their investment decisions. Prior studies have shown that changes in search volumes can reflect various aspects of complex human behavior and forecast near-term values of economic indicators, including automobile sales and unemployment claims. Moreover, the search volume of firm names or stock ticker symbols has been used as a direct proxy for individual investors' attention in financial markets since, unlike indirect measures such as turnover and extreme returns, it can reveal and quantify investor interest in an objective way. Following this line of research, this study aims to gauge whether the information retrieved from the Internet is value relevant in assessing a firm. We also use search volume for our analysis but, distinguished from prior studies, explore its impact on return comovements with market returns. 
Given that a firm's returns tend to comove with market returns excessively when investors are less informed about the firm, we empirically test the value of information by examining the association between Internet searches and the extent to which a firm's returns comove. Our results show that Internet searches are negatively associated with return comovements, as expected. When the sample is split by firm size, the impact of Internet searches on return comovements is greater for large firms than for small ones. Interestingly, we find a greater impact of Internet searches on return comovements for the years 2009 to 2013 than for earlier years, possibly due to more aggressive and informed use of Internet searches in obtaining financial information. We complement our analyses by examining the association between return volatility and Internet search volumes. If Internet searches capture investors' attention associated with a change in firm-specific fundamentals, such as new product releases or stock splits, a firm's return volatility is likely to increase while search results provide value-relevant information to investors. Our results suggest that, in general, an increase in the volume of Internet searches is not positively associated with return volatility. However, we find a positive association between Internet searches and return volatility when the sample is limited to larger firms. The stronger result for larger firms implies that investors still pay less attention to information obtained from Internet searches for small firms, even though the information is value relevant in assessing stock values. However, we do not find any systematic differences across time periods in the magnitude of the impact of Internet searches on return volatility. Taken together, our results shed new light on the value of information searched from the Internet in assessing stock values. 
Given the informational role of the Internet in stock markets, we believe these results will guide investors to exploit Internet search tools to become better informed, thereby improving their investment decisions.
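The comovement measure discussed above is conventionally the R-squared of a market-model regression of firm returns on market returns; a lower R-squared means more firm-specific (idiosyncratic) variation, i.e., less excessive comovement. The sketch below illustrates this on synthetic return series; the data and parameters are invented, not the study's.

```python
# Sketch of the comovement measure: regress a firm's returns on market
# returns and take R-squared as the degree of comovement. Synthetic data.
import random

random.seed(0)
market = [random.gauss(0, 0.01) for _ in range(250)]  # ~1 year of daily returns

def r_squared(firm, mkt):
    """R^2 of a single-factor (market model) OLS regression."""
    n = len(firm)
    mf, mm = sum(firm) / n, sum(mkt) / n
    cov = sum((f - mf) * (m - mm) for f, m in zip(firm, mkt))
    var_m = sum((m - mm) ** 2 for m in mkt)
    beta = cov / var_m
    alpha = mf - beta * mm
    ss_res = sum((f - (alpha + beta * m)) ** 2 for f, m in zip(firm, mkt))
    ss_tot = sum((f - mf) ** 2 for f in firm)
    return 1 - ss_res / ss_tot

# a well-searched firm: plenty of firm-specific variation
informed = [0.5 * m + random.gauss(0, 0.01) for m in market]
# a less-searched firm: returns track the market closely
uninformed = [1.0 * m + random.gauss(0, 0.002) for m in market]

print(r_squared(informed, market) < r_squared(uninformed, market))  # True
```

The study's empirical test amounts to checking whether higher search volume predicts a lower value of this R-squared across firms and weeks.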

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

  • Jung, Min-Kyu;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.39-57
    • /
    • 2012
  • Recently, with the introduction of high-tech equipment in interactive exhibits, much attention has been focused on interactive exhibitions, which can multiply the exhibition effect through interaction with the audience. An interactive exhibition also makes it possible to measure a variety of audience reactions. Among these, this research uses changes in facial features that can be collected in an interactive exhibition space. We develop an artificial neural network-based prediction model that predicts the audience's response by measuring the change in facial features when the audience is stimulated from a non-excited state. To represent the emotional state of the audience, this research uses the Valence-Arousal model. The research proposes an overall framework composed of the following six steps. The first step collects data for modeling; the data were gathered from people who participated in the 2012 Seoul DMC Culture Open and were used for the experiments. The second step extracts 64 facial features from the collected data and compensates the facial feature values. The third step generates the independent and dependent variables of an artificial neural network model. The fourth step extracts, using statistical techniques, the independent variables that affect the dependent variable. The fifth step builds an artificial neural network model and performs the learning process using the train and test sets. The sixth and final step validates the prediction performance of the artificial neural network model using the validation data set. The proposed model was compared with a statistical prediction model to see whether it performed better. Although the data set in this experiment contained much noise, the proposed model showed better results than the multiple regression analysis model. 
If the prediction model of audience reaction were used in a real exhibition, it would be able to provide countermeasures and services appropriate to the audience's reaction while viewing the exhibits. Specifically, if the audience's arousal toward an exhibit is low, action will be taken to increase it, for instance by recommending other preferred content or by using light or sound to focus attention on the exhibit. In other words, when planning future exhibitions, it would be possible to plan the exhibition to satisfy diverse audience preferences, and a personalized environment that helps visitors concentrate on the exhibits could be fostered. However, the proposed model still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors to real exhibitions, so it was difficult to control an optimized experimental environment; the collected data therefore contain much noise, resulting in lower accuracy. In further research, data collection will be conducted in a more controlled experimental environment, and work to increase the prediction accuracy of the model will continue. Second, changes in facial expression alone are thought to be insufficient for extracting audience emotions; if facial expression were combined with other responses, such as sound or audience behavior, the results would be better.
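To make the modeling step above concrete, here is a from-scratch sketch of a one-hidden-layer neural network regressing facial-feature changes onto a (valence, arousal) pair. The feature deltas, targets, and network sizes are hypothetical illustrations, not the study's 64-feature data or its trained model.

```python
# Sketch: one-hidden-layer network (tanh hidden units, linear output,
# no bias terms) trained by plain gradient descent on toy data.
import math
import random

random.seed(1)
N_IN, N_HID, N_OUT = 4, 5, 2   # 4 feature deltas -> (valence, arousal)
LR = 0.1

# toy pairs: feature change from the non-excited state -> emotion target
data = [([0.9, 0.8, 0.1, 0.2], [0.8, 0.9]),    # strong positive reaction
        ([0.1, 0.2, 0.9, 0.8], [-0.6, 0.3]),   # negative reaction
        ([0.0, 0.1, 0.0, 0.1], [0.0, 0.1])]    # near-neutral

W1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    return h, y

for _ in range(2000):                       # per-sample gradient descent
    for x, t in data:
        h, y = forward(x)
        err = [yi - ti for yi, ti in zip(y, t)]
        # hidden gradients are computed before the output weights change
        g = [sum(err[o] * W2[o][j] for o in range(N_OUT)) * (1 - h[j] ** 2)
             for j in range(N_HID)]
        for o in range(N_OUT):
            for j in range(N_HID):
                W2[o][j] -= LR * err[o] * h[j]
        for j in range(N_HID):
            for i in range(N_IN):
                W1[j][i] -= LR * g[j] * x[i]

_, pred = forward([0.9, 0.8, 0.1, 0.2])     # approaches the target [0.8, 0.9]
```

The study's pipeline differs in scale (64 compensated features, statistically selected inputs, separate train/test/validation sets), but the regression structure is the same.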

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many computations, which can cause high computational cost and overfitting in the model; a dimension reduction process is therefore necessary to improve the model's performance. Diverse methods have been proposed, from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. On top of that, the representation and selection of text features affect the performance of classifiers for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once the feature selection algorithm marks certain words as unimportant, we assume that words similar to those words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification, conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. 
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words that are similar to the words with low information gain values and build word embeddings. The filtered text and word embeddings are then fed to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes, with a ratio of helpful votes over 70%, were classified as helpful reviews. Yelp shows only the number of helpful votes, so we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that use all the words. One of the proposed methods outperformed the embeddings with all the words: by removing unimportant words we obtained better performance, although removing too many words lowered it. For future research, diverse preprocessing approaches and in-depth analysis of word co-occurrence for measuring similarity between words should be considered. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, and the possible combinations of embedding and elimination methods could be identified.
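A minimal sketch of the two elimination steps described above, using a toy labeled document set and toy word vectors in place of the paper's review corpora and trained Word2Vec model (the words, vectors, and thresholds are invented for illustration):

```python
# Step 1: score words by information gain against the class label.
# Step 2: also drop words whose embedding is close (cosine similarity)
# to an already-dropped low-information-gain word.
import math

docs = [({"great", "plot", "the"}, 1), ({"great", "cast", "the"}, 1),
        ({"boring", "plot", "the"}, 0), ({"boring", "cast", "the"}, 0)]

def entropy(pos, total):
    out = 0.0
    for c in (pos, total - pos):
        if c:
            p = c / total
            out -= p * math.log2(p)
    return out

def info_gain(word):
    n = len(docs)
    gain = entropy(sum(y for _, y in docs), n)
    for part in ([(ws, y) for ws, y in docs if word in ws],
                 [(ws, y) for ws, y in docs if word not in ws]):
        if part:
            gain -= len(part) / n * entropy(sum(y for _, y in part), len(part))
    return gain

# toy 2-d "embeddings"; "film" sits close to the uninformative "the"
vectors = {"great": (1.0, 0.1), "boring": (-1.0, 0.1),
           "the": (0.0, 1.0), "plot": (0.1, 0.9), "cast": (0.0, 0.8),
           "film": (0.05, 0.95)}

def cos(u, v):
    return sum(a * b for a, b in zip(u, v)) / (math.hypot(*u) * math.hypot(*v))

low_ig = {w for w in ("great", "boring", "the", "plot", "cast")
          if info_gain(w) < 0.5}
removed = set(low_ig)
for w, vw in vectors.items():           # expand with near neighbours
    if w not in low_ig and any(cos(vw, vectors[d]) > 0.9 for d in low_ig):
        removed.add(w)

print(sorted(removed))  # ['cast', 'film', 'plot', 'the']
```

"great" and "boring" separate the classes perfectly (information gain 1.0) and are kept, while "film" is removed purely because its vector lies near the dropped word "the", mirroring the paper's similarity-based second filter.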