• Title/Summary/Keyword: Korea society


Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho;Lee, Donghoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.22 no.3, pp.23-43, 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous datasets for various purposes. In recent years especially, people tend to share their leisure-activity experiences online while also reading others' reviews of similar activities. By referring to others' experiences, they can gather information that may lead to better leisure activities in the future. This phenomenon appears across many kinds of leisure activity, such as movies, travel, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities. Most of these websites present information on each product in various formats depending on their purposes and perspectives. Generally, they provide the average ratings and detailed reviews of users who actually used the products or services, and these ratings and reviews can support the decisions of potential customers considering the same products or services. However, existing websites offering information on leisure activities provide ratings and reviews based on only a single level of evaluation criteria. Therefore, to identify the main issue for each evaluation criterion, as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, as most users search for the characteristics of the detailed elements of one or more specific evaluation criteria according to their priorities, they must spend a great deal of time and effort reading and understanding many reviews to obtain the desired information.
Although some websites break the evaluation criteria down and ask users to enter their reviews at different levels of criteria, the excessive number of input sections makes the whole process inconvenient. Further, problems arise if a user does not follow the instructions for the input sections or fills in the wrong section. Finally, treating such a breakdown as a realistic alternative is difficult, because identifying all the detailed criteria under each evaluation criterion is itself a challenging task. For example, reviews of a hotel tend to be written at a single level for components such as accessibility, rooms, service, or food. These may answer the most frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed information. Moreover, even if a breakdown of the evaluation criteria were provided with separate input sections, a user might fill in only the accessibility criterion, or enter the wrong information, such as room-related content under accessibility. The reliability of the segmented reviews would thus be greatly reduced. In this study, we propose an approach to overcome two limitations of existing leisure-activity information websites: (1) the low reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up each criterion. In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion from the terms frequently used for that criterion. Next, the sentences in the review documents containing the lexicon terms are decomposed into review units, which are then reorganized by evaluation criterion.
Finally, the issues of the reorganized review units are derived for each evaluation criterion, and summary results are provided along with the review units themselves. This approach thus helps users save time and effort, because they read only the information relevant to each evaluation criterion rather than the entire review text. Our methodology is based on topic modeling, which is actively used in text analysis. Each review is decomposed into sentence units rather than being treated as a single document unit; the resulting review units are reorganized by evaluation criterion and then used in the subsequent analysis. In this respect the work differs substantially from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites, decomposed them into 4,860 review units, and reorganized the units according to six evaluation criteria. By applying these review units in our methodology, we present the analysis results and demonstrate the utility of the proposed methodology.
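The lexicon-based reorganization of review units described above can be sketched roughly as follows. The criteria, lexicon terms, and sample review here are purely illustrative, not the paper's actual lexicon: a sentence is treated as one review unit and is assigned to every criterion whose lexicon it matches.

```python
# Illustrative sketch: per-criterion lexicons (hypothetical terms) used
# to reorganize sentence-level review units by evaluation criterion.
CRITERION_LEXICON = {
    "accessibility": {"subway", "station", "distance", "airport"},
    "room": {"bed", "bathroom", "clean", "view"},
    "service": {"staff", "friendly", "check-in", "reception"},
}

def split_into_units(review):
    """Decompose a review document into sentence-level review units."""
    return [s.strip() for s in review.split(".") if s.strip()]

def assign_units(reviews):
    """Group review units under each criterion whose lexicon they match."""
    by_criterion = {c: [] for c in CRITERION_LEXICON}
    for review in reviews:
        for unit in split_into_units(review):
            words = set(unit.lower().split())
            for criterion, lexicon in CRITERION_LEXICON.items():
                if words & lexicon:  # unit mentions a term of this criterion
                    by_criterion[criterion].append(unit)
    return by_criterion

reviews = ["The subway station is close. The bed was clean and the staff friendly"]
units = assign_units(reviews)
```

Note that one sentence can land under several criteria (the second sentence above touches both rooms and service), which mirrors why the paper derives issues per criterion rather than per review.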

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.71-88, 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning has developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependencies between objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit composing Korean text. We constructed language models using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend.
After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed each input vector from 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved and even worsened under some conditions. On the other hand, when the automatically generated sentences were compared, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although the completeness of the generated sentences differed slightly between models, sentence generation was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for Korean language processing and speech recognition, which underlie artificial intelligence systems.
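The input/output windowing described above (20 consecutive characters in, the 21st character out) can be sketched as follows; the toy corpus stands in for the phoneme-decomposed Old Testament text.

```python
# Sliding-window dataset construction: each training pair is seq_len
# consecutive characters as input and the next character as target.
def make_windows(text, seq_len=20):
    """Build (input, target) pairs: seq_len chars and the next char."""
    pairs = []
    for i in range(len(text) - seq_len):
        pairs.append((text[i:i + seq_len], text[i + seq_len]))
    return pairs

corpus = "abcdefghijklmnopqrstuvwxyz"  # stand-in for the real text
pairs = make_windows(corpus)           # len(text) - 20 pairs
vocab = sorted(set(corpus))            # character vocabulary
```

Applied to the paper's corpus, this scheme yields the 1,023,411 pairs mentioned above (corpus length minus 20), with a 74-character vocabulary.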

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.95-108, 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won decisively against Lee Sedol. Many people thought that a machine could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, the deep learning techniques at the core of the AlphaGo algorithm have drawn attention. Deep learning is already being applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for the binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, along with a binary target variable recording whether the customer intends to open an account.
To evaluate the applicability of deep learning algorithms to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate model performance, since it shows how well a model classifies the class of interest rather than just overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm reads adjacent values around a specific value to recognize features, but the distance between business data fields is usually irrelevant because the fields are independent. We therefore set the CNN filter size to the number of fields, so that the model learns the characteristics of the whole record at once, and added a hidden layer so decisions can be made from the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
Several findings emerged from the experiments. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNNs performed well on binary classification problems, to which they have rarely been applied, as well as in the fields where their effectiveness is proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
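The F1 score used as the evaluation metric above is the harmonic mean of precision and recall for the class of interest; a minimal computation sketch with made-up labels:

```python
# Minimal F1-score computation, focusing on the positive class rather
# than overall accuracy, as in the study's evaluation.
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for `positive`."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 1, 0, 0, 1, 0]  # illustrative ground-truth labels
y_pred = [1, 0, 0, 1, 1, 0]  # illustrative model predictions
score = f1_score(y_true, y_pred)
```

With two true positives, one false positive, and one false negative, precision and recall are both 2/3, so F1 is 2/3 as well; overall accuracy (4/6) would paint a different picture, which is the metric's point.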

A Coexistence Model in a Dynamic Platform with ICT-based Multi-Value Chains: focusing on Healthcare Service (ICT 기반 다중 가치사슬의 동적 플랫폼에서의 공존 모형: 의료서비스를 중심으로)

  • Lee, Hyun Jung;Chang, Yong Sik
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.69-93, 2017
  • The development of ICT has led to the diversification of, and changes in, supply and demand in markets, and has created a variety of values differentiated from those in existing markets. As a result, a new type of market emerges that can contain multiple value chains, both from ICT-created markets and from existing markets. We define this new type of market as a platform. On a platform, multiple value chains can coexist with multiple values. In a real market, when a new value chain enters an existing market, it generally conflicts with the existing value chain. The conflict among multiple value chains in a market arises because the chains share limited market resources, such as suppliers, consumers, services, or products. In other words, when multiple value chains share a platform, conflicts, overlaps, creations, or losses of value can occur among them. To solve this problem, we introduce coexistence factors that reduce the conflicts and move the platform toward market equilibrium. At the same time, it is possible to create values differentiated from the existing market and to increase the total value of the platform. In the early era of ICT, ICT was introduced to improve the efficiency and effectiveness of value chains in existing markets. However, as its role changed from supporting the market to promoting it, ICT came to drive variations of value chains and the creation of new values in markets. For instance, Uber created a new value chain with an ICT-based service and new resources, namely new suppliers and consumers. When Uber and traditional taxi services operate at the same time on the taxi-service platform, values may be created, or conflicts may arise, between the new and old value chains.
In this research, when conflicts occur among multiple value chains, as between Uber and traditional taxi services, it is necessary to minimize them so that the chains can coexist on the platform and create added value. It is therefore important to predict and discuss the possible conflicts between new and old value chains; these conflicts must be resolved to reach market equilibrium on a platform with multiple value chains. That is, we discuss the possibility of the coexistence of multiple value chains, comprising a variety of suppliers and customers, on one platform. To do this, we focus on healthcare markets. Healthcare markets are now popular globally as well as domestically, and they contain many different services, such as traditional, tele-, and intelligent healthcare. This means that multiple suppliers, consumers, and services act as components of different value chains on the same platform, which is shared by different values that are created, overlapped, or lost through conflicts among the chains. We therefore use healthcare services to examine whether a platform can be shared by different value chains, namely traditional, tele-, and intelligent healthcare services and products, and whether the value of each chain, as well as the total value of the platform, can be increased. As a result, we show that both the value of each chain and the total value of the platform can increase. Finally, we propose a coexistence model to overcome such problems and demonstrate, through experimentation, the possibility of coexistence between the value chains.

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.47-67, 2017
  • Steel plate faults are among the important factors affecting the quality and price of steel plates. To date, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, with judgment errors above 30%. An accurate steel plate fault diagnosis system has therefore long been required in the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify the various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy there, which stems from the fact that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed diagnosis system was developed in four main stages. In the first stage, after various reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space per reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of test groups are calculated from the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization.
The overall SN ratio gain is then derived from the SN ratio and the SN ratio gain. A negative overall SN ratio gain means the variable should be removed, while a variable with a positive gain may be worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables, and an experimental test verifies the multi-class classification ability, yielding the classification accuracy; if the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California, Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based system achieves 90.79% classification accuracy, 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance for industrial application. In addition, the variable optimization process allows the proposed system to reduce the number of measurement sensors installed in the field. These results show that the proposed system not only diagnoses steel plate faults well but also reduces operation and maintenance costs.
In future work, we will apply the system in the field to validate its actual effectiveness and plan to improve its accuracy based on the results.
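The core S-MTS idea described above, one Mahalanobis space per reference class with a sample assigned to the class at the smallest distance, can be sketched as follows. The two-feature data, class names, and sample values are all hypothetical; the real system works on the UCI steel plate feature set after variable optimization.

```python
# Per-class Mahalanobis spaces (mean + covariance) and nearest-space
# classification, the essence of the S-MTS multi-class scheme.
def mean_cov_2d(samples):
    """Mean vector and 2x2 sample covariance of 2-feature samples."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    cxx = sum((s[0] - mx) ** 2 for s in samples) / (n - 1)
    cyy = sum((s[1] - my) ** 2 for s in samples) / (n - 1)
    cxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / (n - 1)
    return (mx, my), ((cxx, cxy), (cxy, cyy))

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance via the 2x2 covariance inverse."""
    (a, b), (_, d) = cov
    det = a * d - b * b
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    # inverse of [[a, b], [b, d]] is [[d, -b], [-b, a]] / det
    return (d * dx * dx - 2 * b * dx * dy + a * dy * dy) / det

def classify(x, spaces):
    """Assign x to the reference class with the smallest distance."""
    return min(spaces, key=lambda c: mahalanobis_sq(x, *spaces[c]))

spaces = {  # hypothetical reference groups of surface-defect classes
    "scratch": mean_cov_2d([(1.0, 1.1), (1.2, 0.9), (0.9, 1.0)]),
    "bump": mean_cov_2d([(5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]),
}
label = classify((1.1, 1.0), spaces)
```

Comparing the distances across all class spaces "simultaneously", rather than building a single normal space as in plain MTS, is what makes the scheme multi-class.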

Influence analysis of Internet buzz to corporate performance : Individual stock price prediction using sentiment analysis of online news (온라인 언급이 기업 성과에 미치는 영향 분석 : 뉴스 감성분석을 통한 기업별 주가 예측)

  • Jeong, Ji Seon;Kim, Dong Sung;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.37-51, 2015
  • Due to the development of Internet technology and the rapid increase of Internet data, various studies are actively investigating how to use and analyze Internet data for different purposes. In recent years in particular, a number of studies have applied text mining techniques to overcome the limitations of analyzing only structured data. Especially, there are various studies on sentiment analysis, which scores opinions based on the polarity (positivity or negativity) of the vocabulary or sentences in documents. As part of this line of work, this study attempts to predict the ups and downs of companies' stock prices by performing sentiment analysis on online news about those companies. A variety of news on companies is produced online by different economic agents, and it diffuses quickly and is easily accessed on the Internet. So, under the inefficient market hypothesis, we can expect that news about an individual company can be used to predict fluctuations in its stock price if proper data analysis techniques are applied. However, since companies' areas of business activity differ, machine learning-based analysis of text data must consider the characteristics of each company. In addition, since positive or negative news on certain companies affects other companies or industry sectors in various ways, the stock price of each company must be predicted separately. Therefore, this study attempted to predict changes in the stock prices of individual companies by applying sentiment analysis to online news data. We chose top companies in the KOSPI 200 as the subjects of the analysis and collected and analyzed two years of online news data for each company from Naver, a representative domestic search portal.
In addition, considering that the meanings of words differ across economic subjects, we aim to improve performance by building a lexicon for each individual company and applying it in the analysis. As a result, the prediction accuracy differs by company, averaging 56%. Comparing prediction accuracy across industry sectors, 'energy/chemicals', 'household consumer goods', and 'consumer discretionary' showed relatively higher stock price prediction accuracy, while sectors such as 'information technology' and 'shipbuilding/transportation' showed lower accuracy. Since only five representative companies were collected per industry, it is somewhat difficult to generalize, but we could confirm that prediction accuracy differs by industry sector. At the individual company level, companies such as Kangwon Land, KT&G, and SK Innovation showed relatively high prediction accuracy, while companies such as Young Poong, LG, Samsung Life Insurance, and Doosan had prediction accuracy below 50%. In this paper, we predicted the stock price movements of individual companies from online news information using pre-built company-specific lexicons, aiming to improve stock price prediction performance. Building on this, future work can further increase prediction accuracy by addressing the problem of unnecessary words added to the sentiment dictionaries.
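The per-company lexicon scoring described above can be sketched in miniature. The lexicon, polarity weights, and headlines here are all invented for illustration; the study builds one such lexicon per company from its own news corpus.

```python
# Hypothetical company-specific sentiment lexicon: word -> polarity.
COMPANY_LEXICON = {
    "record": 1, "growth": 1, "expands": 1,
    "lawsuit": -1, "recall": -1, "decline": -1,
}

def sentiment_score(headline, lexicon):
    """Sum the polarity of lexicon words appearing in the headline."""
    return sum(lexicon.get(w, 0) for w in headline.lower().split())

def predict_direction(headline, lexicon):
    """Predict 'up' for positive net sentiment, 'down' otherwise."""
    return "up" if sentiment_score(headline, lexicon) > 0 else "down"

direction = predict_direction("Record growth despite lawsuit", COMPANY_LEXICON)
```

Because the same word can carry different polarity for different companies (e.g. "recall" for a manufacturer versus a pollster), maintaining a separate lexicon per company is the design choice the paper argues improves accuracy.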

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.17-35, 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period. Developing precise forecasting models is considered important because corporations can make strategic decisions on new markets based on the future demand estimated by the models. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand when a market is in its early stage. Among these, the Bass model, which explains demand through two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to ensure qualified results. At the beginning of a new market, however, observations are insufficient for the models to estimate future demand precisely. For this reason, demand inferred from the most adjacent markets is often used as a reference instead. Reference markets can be those whose products are developed with the same categorical technologies. A market's demand may be expected to follow a pattern similar to that of a reference market when the adoption pattern of a product is determined mainly by the technology behind it. However, this process does not always produce satisfactory results, because judging the similarity between markets depends on intuition and/or experience. There are two major drawbacks that human experts cannot handle effectively in this approach: the abundance of candidate reference markets to consider, and the difficulty of calculating the similarity between markets. First, there can be too many markets to consider when selecting reference markets. Mostly, markets in the same category of an industrial hierarchy can serve as reference markets because they are usually based on similar technologies.
However, markets can be classified into different categories even when they are based on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Next, even domain experts cannot consistently calculate the similarity between markets with their own qualitative standards. This inconsistency implies missing adjacent reference markets, which may lead to imprecise estimation of future demand. Even when no reference markets are missed, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. For this reason, this study proposes a case-based expert system that helps experts overcome these drawbacks in discovering reference markets. First, we propose the use of the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters, and missing markets with the characteristics of each cluster are searched for. Potential candidate reference markets are extracted and recommended to users, and after iterating these steps, the definite reference markets are determined by the user's selection among the candidates. Finally, the new market's parameters are estimated from the reference markets. Two techniques are used in this procedure: the clustering data mining technique and the content-based filtering of recommender systems. The system implemented with these techniques can determine the most adjacent markets based on whether a user accepts the candidate markets. Experiments involving five ICT experts were conducted to validate the usefulness of the system. The experts were given a list of 16 ICT markets whose parameters were to be estimated; for each market, they first estimated the parameters of the growth curve models by intuition, and then with the system.
A comparison of the experimental results shows that the estimated parameters are closer to the actual parameters when the experts use the system than when they guess without it.
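The Bass growth curve whose parameters the system estimates has a closed cumulative form, N(t) = m(1 − e^{−(p+q)t}) / (1 + (q/p)e^{−(p+q)t}), where p is the innovation coefficient, q the imitation coefficient, and m the market potential. A minimal sketch, with purely illustrative parameter values:

```python
import math

# Cumulative adopters at time t under the Bass diffusion model.
def bass_cumulative(t, p, q, m):
    """N(t) = m * (1 - e^{-(p+q)t}) / (1 + (q/p) * e^{-(p+q)t})."""
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

p, q, m = 0.03, 0.38, 100_000  # hypothetical market parameters
adoption = [bass_cumulative(t, p, q, m) for t in range(0, 30)]
```

The curve starts at zero, rises in the familiar S-shape driven by innovators (p) early and imitators (q) later, and saturates at the market potential m, which is why early-stage observations alone pin the parameters down so poorly.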

Seedling Emergence of Dry-seeded Rice under Different Sowing Depths and Irrigation Regimes (건답직파에서 파종심도와 관개조건에 따른 벼 품종들의 출아특성)

  • 이변우;명을재
    • KOREAN JOURNAL OF CROP SCIENCE, v.40 no.1, pp.59-68, 1995
  • We investigated the relationships between plumule elongation characteristics and seedling emergence of 46 rice varieties, including Korean native, improved, and red rice varieties and varieties from the U.S.A., Italy, India, and Japan, sown 1, 3, and 5 cm deep under irrigated and non-irrigated conditions. The experiments were carried out in a paddy field of sandy loam. A heavy shower of 19.2 mm fell on the day after seeding, and thereafter clear, dry weather continued throughout the experimental period. Soil temperature averaged over the 30 days after seeding was 16.4°C at 3 cm depth. Soil hardness increased linearly up to 2.5 kg/cm² by the 14th day after seeding, on which date the irrigated plot was furrow-irrigated, and up to 4 kg/cm² by the 28th day in the non-irrigated plot. After irrigation, soil hardness dropped to near 0 kg/cm² and rose again to 2.5 kg/cm² by 28 days after seeding. Seedling emergence was higher in irrigated than in non-irrigated plots at all seeding depths. Korean improved varieties showed substantially lower seedling emergence under the non-irrigated condition at 1 cm sowing depth than under the irrigated condition; this poor emergence resulted mainly from delayed emergence, which exposed the seedlings to greater soil strength. Percent seedling emergence under irrigated and non-irrigated conditions showed significant correlations at 3 and 5 cm sowing depths. Korean improved varieties belonged to the group with poor seedling emergence, while Italiconaverneco, Chinsura Boro, and Weld Pally belonged to the best group under both irrigation conditions at 3 and 5 cm sowing depths. Seedling emergence showed highly significant positive correlations with the plumule lengths of mesocotyl + first internode + incomplete leaf and of mesocotyl + coleoptile.
Among the characters constituting plumule length, incomplete leaf length showed greatest positive correlation followed by coleoptile and mesocotyl under irrigated condition at 3 and 5 cm deep sowing, and highest correlation with mesocotyllength followed by first internode and incomplete leaf under non-irrigated condition. Days to 50% seedling emergence at 1 cm deep sowing with irrigation showed great varietal variation of 10 to 30 days, and showed high significant negative correlations with percent seedling emergence under both irrigation conditions except for 1 cm deep sowing with irrigation, Days to seedling emergence revealed sig-nificant negative correlations with plumule characters except 2nd internode, showing highest cor-relation with incomplete leaf length.

  • PDF

Studies on the Quality of Korean Rice (한국쌀의 품질에 관한 연구)

  • Kim, Z.U.;Lee, K.H.;Kim, D.Y.
    • Applied Biological Chemistry
    • /
    • v.15 no.1
    • /
    • pp.65-75
    • /
    • 1972
  • The rice qualities, including cooking and eating qualities, were studied using recommended Korean rice varieties (20 japonica and 3 indica-type, IR 667) grown at Suwon, Korea in 1971. The following results were obtained. 1. Amylose contents of white rice varied with variety from 21.1 to 25.5%, with an average of 23.0%. The three indica-type varieties (IR 667) showed higher amylose contents than the other japonica-type varieties except Mankyung. Among the japonica-type varieties, Palkum and Mankyung composed the group of highest amylose content, and Kimmaje was the lowest. 2. Blue values were distributed in the range of 0.38 to 0.48, with an average of 0.42. The IR 667 varieties showed the highest blue values. Among the japonica-type varieties, Jaegun showed the highest blue value, Sooseung and Shirogane showed successively lower values, and Shin #2, Nongbaik, Palkweng, Suwon #82, Mankyung, Nonglim #25, and Nongkwang showed relatively low blue values. 3. Alkali numbers were in the range of 6.0 to 7.4, with an average of 6.8. Little difference in alkali number was found between the IR 667 group and the japonica group. 4. Gelatinization temperatures ranged from 59.5 to 64.0; the IR 667 varieties showed relatively higher gelatinization temperatures than the japonica-type varieties. 5. Water uptake ratios were in the range of 2.67 to 2.92, with an average of 2.79. The IR 667 varieties belonged to the group with the highest water uptake ratio. Among the japonica-type varieties, Kimmaje, Suwon #82, Nonglim #29, Deungpan #5, Jaegun, and Jinhung belonged to the group with relatively high water uptake ratios, and Palkweng, Palkeum, and Paldal to the relatively low group. 6. Expanded volumes ranged from 29.8 to 33.7, with an average of 31.8. The IR 667 varieties showed higher expanded volumes than the japonica-type varieties. 7. Starch-iodine blue value intensities of the residual liquid ranged from 0.35 to 0.58.
Among them, the IR 667 varieties showed relatively high intensities. 8. Total solids in the residual liquids ranged from 0.605 to 0.810, with an average of 0.700. Little difference in total solids was found between the IR 667 and japonica varieties. 9. pH values of the residual liquids were in the range of 6.3 to 7.3, with an average of 6.95. The IR 667 varieties showed lower pH than the japonica-type varieties.

  • PDF

A Study on the Timing and Method of the Final Price of Air Ticket in Computerised Booking System (인터넷 항공권 예약시스템에서의 '최종가격' 표시시기와 방법 - 2015년 1월 15일 EU사법재판소 C-573/13 판결을 중심으로 -)

  • Sur, Ji-Min
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.32 no.1
    • /
    • pp.327-353
    • /
    • 2017
  • The issue submitted to the Court of Justice on the merits of case C-573/13 originated from a claim brought in the context of a dispute between Air Berlin and the German Federal Union of Consumer Organisations and Associations. The challenge concerned the way in which air fares were displayed in Air Berlin's computerised booking system. The system was organised so that, after selecting a date and a departure airport, one would find all possible flight connections in a summary table. However, the final price of the ticket was displayed only for the clicked connection, and not for all connections, thus preventing customers from comparing that price with the prices of other connections. The German Federal Union took the view that this practice did not meet the requirements laid down by Article 23 of Regulation (EC) No. 1008/2008, which requires transparency in the prices set for air services. This led it to bring an injunctive action to cause Air Berlin to discontinue the practice. The claim was upheld at both first instance and on appeal. Subsequently, Air Berlin submitted the matter to the German Federal High Court, which decided to stay the proceedings and request a preliminary ruling from the Court of Justice as to: 1. whether Article 23 of Regulation (EC) No. 1008/2008 must be interpreted as meaning that, during the computerised booking process, the final price to be paid must be indicated at all times when prices of air services are shown, including when they are shown for the first time; and 2. whether, during the computerised booking process, the final price must be indicated only for the air service specifically selected by the customer or for each air service shown. In a nutshell, the Court, in the judgment discussed here, determined that Article 23 of Regulation (EC) No. 1008/2008 must be interpreted as meaning that, in the context of a computerised air ticket booking system, the final price to be paid must be indicated not only for the air service specifically selected by the customer, but also for each air service whose fare is shown. Clearly, this judgment places air companies under an obligation to update and adjust (where needed) their computerised ticket booking and payment systems, in view of the primary need for consumers to be aware at all times of the actual price payable for a ticket and to be able to compare the price of the selected service with the prices of other air services whose fares are shown.

  • PDF
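The ruling summarised above imposes a concrete requirement on booking-system output: every connection shown in the summary table must carry its own final price, not just the clicked one. The sketch below illustrates that rule with invented field names and fare figures; it is an assumption-laden illustration of the display obligation, not Air Berlin's actual system.

```python
# Hedged sketch: per the C-573/13 reading of Article 23, Regulation (EC)
# No. 1008/2008, the final price must accompany every air service displayed.
# Field names, routes, and amounts are hypothetical illustrations.
def final_price(fare: float, taxes: float, fees: float) -> float:
    """Final price = base fare + unavoidable taxes + unavoidable charges."""
    return round(fare + taxes + fees, 2)

def render_summary(connections: list[dict]) -> list[str]:
    # Every row carries its own final price, so a customer can compare
    # connections without clicking each one in turn.
    return [
        f"{c['route']}: EUR {final_price(c['fare'], c['taxes'], c['fees']):.2f}"
        for c in connections
    ]

rows = render_summary([
    {"route": "TXL-LHR 08:00", "fare": 79.0, "taxes": 42.5, "fees": 8.0},
    {"route": "TXL-LHR 12:30", "fare": 65.0, "taxes": 42.5, "fees": 8.0},
])
print("\n".join(rows))
```

The design point is simply that the price computation runs per row at render time, rather than only in the detail view for the selected connection.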