• Title/Summary/Keyword: 이모

Search Results: 276

Effects for Growth and Chlorophyll in Old-barley and New-barley Seed exposed by X-ray (X-선이 묵은보리 씨앗과 햇보리 씨앗의 생장과 클로로필 농도에 미치는 영향)

  • Sang-Bok, Jeong;Sun-Cheol, Jeong;Mo-Kwon, Lee;Yun-Ho, Choi;Kang-Un, Byun;Su-Ah, Yu;Sang-Eun, Han;Jun-Beom, Heo;Wan-Sik, Shin;Won-Jeong, Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.1
    • /
    • pp.149-156
    • /
    • 2023
  • The purpose of this study is to compare growth and chlorophyll between old barley seeds (OBS, 2019) and new barley seeds (NBS, 2020) exposed to X-rays. After germinating the OBS and NBS, the experimental group was irradiated with 30 Gy of 6 MV X-rays using a linear accelerator (Clinac IS, VARIAN, USA) at SSD 100 cm, an 18 × 10 cm² field, and 600 MU/min. Length was measured every day until the 9th day, and chlorophyll was analyzed using a spectrophotometer (UV-1800, Shimadzu, Japan) after measuring weight on the 9th day. Data analysis was performed with the independent t-test using SPSS ver. 26.0 (Chicago, IL, USA). NBS grew faster than OBS in the control group, but OBS grew faster than NBS in the experimental group. In OBS, the length of the control group was significantly longer than that of the experimental group every day. NBS weighed more than OBS in the control group, but OBS weighed more than NBS in the experimental group. In chlorophyll density, NBS was higher than OBS in both the control and experimental groups. Growth and weight of OBS were affected more by X-rays than those of NBS, whereas NBS was affected more in chlorophyll. These results are expected to serve as basic data for future X-ray research on barley seeds.
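The independent t-test the authors ran in SPSS can be reproduced in a few lines. A minimal sketch with invented length measurements (the abstract does not give the raw data):

```python
import numpy as np

def independent_t(a, b):
    """Student's independent two-sample t statistic with pooled variance,
    the test reported in the abstract (run there via SPSS ver. 26.0)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # Pooled variance across the two groups
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical 9th-day shoot lengths (cm) for control vs. 30 Gy seedlings
control = [5.1, 5.4, 4.9, 5.6, 5.2]
irradiated = [3.8, 4.1, 3.6, 4.3, 3.9]
t = independent_t(control, irradiated)  # positive t: control grew longer
```

In practice one would also compute the p-value from the t distribution with na + nb - 2 degrees of freedom; the statistic above is the core of the comparison.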

A Study on Consumer's Emotional Consumption Value and Purchase Intention about IoT Products - Focused on the preference of using EEG - (IoT 제품에 관한 소비자의 감성적 소비가치와 구매의도에 관한 연구 - EEG를 활용한 선호도 연구를 중심으로 -)

  • Lee, Young-ae;Kim, Seung-in
    • Journal of Communication Design
    • /
    • v.68
    • /
    • pp.278-288
    • /
    • 2019
  • The purpose of this study is to analyze the effects of risk and convenience on purchase intention in the IoT market, and to examine the moderating effect of emotional consumption value. Two products were selected from three product groups. The research proceeded in three stages. First, a theoretical review. Second, a survey, with reliability analysis and factor analysis performed alongside descriptive statistics using SPSS. Third, measurement of EEG changes during in-depth interviews and indirect product experience. The hypothesis testing confirmed that convenience of use of an IoT product influences purchase intention. Risk was predicted to have a negative effect on purchase intention, but the effect was not significant in this study. This implies that when purchasing IoT products, consumers tend to discount monetary losses such as purchase cost, cost of use, and disposal cost. In-depth interviews and EEG analysis revealed a desire to purchase and try out IoT products driven by the nature of the products, the novelty of new technology, and the vague expectation that they will benefit one's life. The aesthetic, symbolic, and pleasure factors, the sub-elements of emotional consumption value, were found to have a strong influence. This is consistent with previous research showing that emotional consumption value has a positive effect on purchase intention, and the in-depth interviews and EEG analyses yielded the same result. This study shows that emotional consumption value affects the intention to purchase IoT products, suggesting that companies producing IoT products should focus their marketing on emotional consumption value.
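A moderating effect like the one examined above is conventionally tested as an interaction term in a regression. A minimal numpy sketch on synthetic data (the variable names and effect sizes are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
convenience = rng.normal(size=n)   # perceived convenience of the IoT product
emotion = rng.normal(size=n)       # emotional consumption value (the moderator)

# Synthetic purchase intention with a built-in positive interaction effect
intention = (0.5 * convenience + 0.3 * emotion
             + 0.4 * convenience * emotion
             + rng.normal(scale=0.1, size=n))

# OLS with an interaction term: intercept, main effects, and their product
X = np.column_stack([np.ones(n), convenience, emotion, convenience * emotion])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
# beta[3] estimates the moderating (interaction) effect
```

A significantly nonzero interaction coefficient (beta[3]) is what "emotional consumption value moderates the effect" means operationally.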

A Study on the Development of Educational Smart App. for Home Economics Classes(1st): Focusing on 'Clothing Preparation Planning and Selection' (가정과수업을 위한 교육용 스마트 앱(App) 개발연구(제1보): 중1 기술·가정 '의복 마련 계획과 선택'단원을 중심으로)

  • Kim, Gyuri;Wee, Eunhah
    • Journal of Korean Home Economics Education Association
    • /
    • v.35 no.3
    • /
    • pp.47-66
    • /
    • 2023
  • The purpose of this study was to develop an educational smart app by reconstructing some of the teaching-learning content on clothing preparation planning within the 'clothing preparation planning and selection' curriculum unit. To this end, a teaching-learning process plan was designed for the classes, a smart app was developed, and feedback on the developed app was collected from home economics teachers and app development experts. The developed app consists of five steps. The first step is to set up a profile using a real photo, a ZEPETO or Galaxy emoji, or an iPhone Memoji. In the second step, students make a list of clothes by identifying the types, quantities, and conditions of their existing wardrobe items. Each piece of clothing is assigned an individual registration number, and students can take pictures of the front and back, along with describing key attributes such as type, color, season-appropriateness, purchase date, and current status. Step three guides students in deciding which garments to retain and which to discard: building on the clothing inventory from the previous step, students classify items to keep and items to dispose of. In step four, deciding how to arrange clothing, students decide how to arrange their clothing by filling out an alternative scorecard. Through this process, students can preview the subsection on resource management and self-reliance, laying the foundation for future learning in 'Practice of Rational Consumption Life'. Lastly, the fifth step, determining the disposal method, develops a practical problem-oriented class on how to dispose of the clothes set aside in the third step by exploring various disposal methods, engaging in group discussions, and sharing opinions.
This study is meaningful as a case in which an instructor developed an educational smart app to align teaching plans and educational content with the achievement standards for the class. In the future, the app will need to be upgraded through user feedback.
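The wardrobe inventory of steps two and three could be modeled with a simple record type. The field names below are assumptions based on the attributes listed in the abstract, not the app's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WardrobeItem:
    reg_no: int          # individual registration number (step 2)
    kind: str            # type of garment, e.g. "t-shirt"
    color: str
    season: str          # season-appropriateness
    purchase_date: str
    status: str          # current condition
    photos: List[str] = field(default_factory=list)  # front/back photo paths
    keep: bool = True    # step 3 decision: retain or discard

def discard_list(items):
    """Step 3: return the items the student has marked for disposal."""
    return [i for i in items if not i.keep]

inventory = [
    WardrobeItem(1, "t-shirt", "white", "summer", "2023-03", "worn often"),
    WardrobeItem(2, "jeans", "blue", "all-season", "2020-01", "too small", keep=False),
]
to_dispose = discard_list(inventory)
```

The discard list would then feed step five, where students compare disposal methods for each item.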

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, the CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs are strong at interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn as a 40 × 40 pixel image, with each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing its color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. In the final step, CNN classifiers are trained using the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layer, and a 2 × 2 max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the output layer had 2 nodes (one for the prediction of an upward trend, the other for a downward trend). The activation function for the convolution layer and the hidden layers was ReLU (Rectified Linear Unit), and that of the output layer was softmax. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 trading days in eight years (2009 to 2016). To balance the two classes of the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), the A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
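Steps 1-3 of CNN-FG (windowing the series, drawing a per-variable colored graph, and reading it back as R/G/B matrices) can be sketched without any plotting library. The rendering details below are simplifications assumed for illustration, not the paper's exact drawing procedure:

```python
import numpy as np

def fluctuation_graph(window, size=40):
    """Render a 5-day window of indicator values as a size x size RGB array.
    Each row of `window` is one indicator, drawn in its own color channel
    (a simplification of the paper's one-color-per-variable graphs)."""
    window = np.asarray(window, float)
    img = np.zeros((size, size, 3))
    xs = np.linspace(0, size - 1, window.shape[1]).astype(int)
    for ch, series in enumerate(window[:3]):   # at most 3 indicators in this sketch
        lo, hi = series.min(), series.max()
        ys = ((series - lo) / (hi - lo + 1e-9) * (size - 1)).astype(int)
        img[size - 1 - ys, xs, ch] = 1.0       # plot the series as bright pixels
    return img

# Step 1: slice a synthetic price series into non-overlapping 5-day intervals
prices = np.cumsum(np.random.default_rng(1).normal(size=50)) + 100
windows = prices[: len(prices) // 5 * 5].reshape(-1, 5)

# Steps 2-3: one graph image per window, already as 40 x 40 x 3 RGB matrices
img = fluctuation_graph(windows[:1])
```

From here, the 40 × 40 × 3 arrays would be split 80/20 and fed to the CNN classifier described in the abstract.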

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media has become a platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena by analyzing the chatter of microblogs; however, measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch, and microblog users interact with each other while watching television or movies or visiting a new place. When measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change over the course of the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features is key to measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vital for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the GET search component of the Twitter REST API from January 2013 to October 2013. Our dataset contains about 300 thousand posts; after excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis.
The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time. This stems from the characteristics of a public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, implying that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the text of tweets have a relatively high correlation with TV ratings. Further, some emoticons and newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We also find a time-dependency in the correlation of features between the periods before and after the broadcasting time. Since a TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectations for the program or their disappointment at not being able to watch it. The features highly correlated before the broadcast differ from the features after broadcasting, which shows that the relevance of words to TV programs can change with the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach it after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance despite carrying a negative meaning. Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time-dependency of words in Twitter chatter.
More research is needed to refine the methodology for predicting or measuring TV ratings.
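The before/after asymmetry in word-ratings correlation can be illustrated with plain Pearson correlation. The counts and ratings below are invented to mimic the effect the paper reports, not its actual data:

```python
import numpy as np

# Hypothetical weekly data: occurrences of one content word in tweets posted
# before vs. after broadcast time, alongside the episode's TV rating (%).
before_counts = np.array([12, 30, 25, 40, 22, 35, 28])
after_counts  = np.array([ 5,  8, 30,  6, 25,  7, 27])
ratings       = np.array([7.1, 9.0, 8.3, 9.8, 7.9, 9.2, 8.5])

r_before = np.corrcoef(before_counts, ratings)[0, 1]
r_after  = np.corrcoef(after_counts, ratings)[0, 1]
# The paper's point: the same word can correlate with ratings very differently
# before vs. after broadcast, so features should be weighted by time period.
```

A ratings model would therefore keep separate feature weights per time period rather than a single pooled correlation.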

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify a target product's reputation. The basic method of sentiment analysis uses a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains. For example, the sentiment word 'sad' indicates a negative meaning in most fields, but not necessarily in movie reviews. In order to perform accurate sentiment analysis, we need to build a sentiment dictionary for the given domain. However, building such a sentiment lexicon is time-consuming, and without a general-purpose sentiment lexicon as seed data, many sentiment vocabularies are missed. To address this problem, several studies have constructed sentiment lexicons for specific domains based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer in service, and SentiWordNet does not work well because of language differences in the process of converting Korean words into English words. Such restrictions limit the use of general-purpose sentiment lexicons as seed data for building domain-specific lexicons. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons.
The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to let users quickly construct a sentiment dictionary for a target domain. In particular, it constructs sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: first, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM); second, the proposed deep learning model automatically classifies each gloss as having positive or negative meaning; third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model reaches 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and we add sentiment information for frequently used coined words and emoticons used mainly on the Web. KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, allowing sentiment analysis to be performed with higher accuracy (Teng, Z., 2016). This indicates that a sentiment dictionary is useful not only for dictionary-based sentiment analysis but also as a source of features that improve the accuracy of deep learning models.
The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
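A lexicon of mixed 1-grams and 2-grams like KNU-KSL can be applied with a longest-match-first scorer. The entries and polarity scores below are invented for the example; only the example words come from the abstract:

```python
# Toy lexicon mapping 1-gram and 2-gram entries to polarity scores.
LEXICON = {
    "thank you": 2,        # 2-gram entry
    "worthy": 1,
    "impressed": 2,
    "disappointed": -2,
}

def score(text):
    """Sum polarity over matched n-grams, preferring 2-gram matches so that
    multi-word entries like 'thank you' are not split into their parts."""
    tokens = text.lower().split()
    total, i = 0, 0
    while i < len(tokens):
        bigram = " ".join(tokens[i:i + 2])
        if bigram in LEXICON:          # longest match first
            total += LEXICON[bigram]
            i += 2
        elif tokens[i] in LEXICON:
            total += LEXICON[tokens[i]]
            i += 1
        else:
            i += 1
    return total
```

Used as seed data for a target domain, such a scorer can also label training examples in bulk, which is the "quickly build large training sets" use the abstract mentions.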