• Title/Summary/Keyword: Portal model


Calculation of Damage to Whole Crop Corn Yield by Abnormal Climate Using Machine Learning (기계학습모델을 이용한 이상기상에 따른 사일리지용 옥수수 생산량에 미치는 피해 산정)

  • Ji Yung Kim;Jae Seong Choi;Hyun Wook Jo;Moonju Kim;Byong Wan Kim;Kyung Il Sung
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.43 no.1
    • /
    • pp.11-21
    • /
    • 2023
  • This study was conducted to estimate the damage to Whole Crop Corn (WCC; Zea mays L.) under abnormal climate, using machine learning with the Representative Concentration Pathway (RCP) 4.5 scenario, and to present the damage through mapping. A total of 3,232 WCC records were collected, and climate data were obtained from the Korea Meteorological Administration's open meteorological data portal. DeepCrossing was used as the machine learning model. Damage was calculated by applying the model to climate data from the automated synoptic observing system (ASOS, 95 sites) and taking the difference between the dry matter yield under normal climate (DMYnormal) and under abnormal climate (DMYabnormal). Normal climate was defined from the 40 years of climate data corresponding to the WCC records (1978-2017), and the levels of abnormal temperature and precipitation were set according to the RCP 4.5 standard. DMYnormal ranged from 13,845 to 19,347 kg/ha. The damage to WCC differed by region and by the level of abnormal temperature and precipitation. Damage from abnormal temperature in 2050 and 2100 ranged from -263 to 360 and -1,023 to 92 kg/ha, respectively, and damage from abnormal precipitation in 2050 and 2100 ranged from -17 to 2 and -12 to 2 kg/ha, respectively. The maximum damage, 360 kg/ha, occurred under the abnormal temperature in 2050. As the average monthly temperature increases, the DMY of WCC tends to increase. The damage calculated under the RCP 4.5 standard was mapped using QGIS. Although this study applied a scenario in which greenhouse gas reduction is carried out, additional research should apply an RCP scenario in which greenhouse gas reduction is not performed.
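The damage metric described in this abstract is a per-site difference between model-predicted yields under normal and abnormal climate. A minimal sketch of that calculation follows; the site names and yield values are hypothetical illustrations, not data from the study.

```python
# Sketch of the damage calculation described in the abstract:
# damage = DMY under normal climate minus DMY under abnormal climate,
# computed per observation site. Site names and yields are hypothetical.

def damage(dmy_normal, dmy_abnormal):
    """Damage in kg/ha; positive means the abnormal climate reduced yield."""
    return dmy_normal - dmy_abnormal

# Hypothetical per-site yields (kg/ha) predicted by a trained model
sites = {
    "site_A": (15000.0, 14800.0),  # (normal, abnormal-temperature)
    "site_B": (17200.0, 17450.0),  # negative damage = yield gain
}

for name, (normal, abnormal) in sites.items():
    print(name, damage(normal, abnormal))
```

A negative value, as at the hypothetical site_B, corresponds to the abstract's negative damage ranges, where abnormal climate increased yield.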

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements through analysis of various sources of text data. To predict stock price movements, research has examined not only the relationship between text data and stock price fluctuations, but also trading strategies based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix constructed in the same way as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building a term-document matrix: words with very low frequency or importance are removed, and words are selected according to how much they contribute to correctly classifying a document. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that influence classification. In this study, we analyze the documents for each individual stock and select words that are irrelevant across all categories as neutral words. We then extract the words surrounding each selected neutral word and use them to generate the term-document matrix. The approach starts from the idea that stock movements are less related to the presence of the neutral words themselves, and that the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, and excluded, from the selected words, any words that also appeared in news articles about other stocks.
Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used the first three months of news as training data and applied the remaining one month of articles to the model to predict the next day's stock price movements. We used SVM, Boosting, and Random Forest to build models and predict the movements of stock prices. The stock market was open for a total of 80 days over the four months (2016/02/01 ~ 2016/05/31); the initial 60 days served as the training set and the remaining 20 days as the test set. The word-selection algorithm proposed in this study showed better classification performance than word selection based on sparsity. This study predicted stock price movements by collecting and analyzing news articles on the top 10 stocks by market cap. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also news about other stocks to determine which words to extract: it removes not only the words that appeared in both rises and falls but also the words that appeared commonly in news for other stocks. When prediction accuracy was compared, the suggested method was more accurate. The limitations of this study are that stock price prediction was framed as classifying rises and falls, and that the experiment covered only the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance, because stock price fluctuations and profit rates may differ. Therefore, further research is needed using more stocks and predicting returns through trading simulation.
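The neutral-word idea in this abstract can be sketched as a minimal example. The toy documents, the neutrality criterion (a word appearing in news on both rising and falling days), and the one-word context window are simplifying assumptions for illustration, not the paper's exact procedure.

```python
from collections import Counter

# Minimal sketch of the neutral-word approach: a word that appears in news
# on both rising and falling days carries little directional signal itself,
# so we instead collect the words surrounding it (a +/-1 window here) as
# features for the term-document matrix.

def neutral_words(up_docs, down_docs):
    up = set(w for d in up_docs for w in d.split())
    down = set(w for d in down_docs for w in d.split())
    return up & down  # words common to both classes

def context_counts(doc, neutrals, window=1):
    tokens = doc.split()
    ctx = Counter()
    for i, tok in enumerate(tokens):
        if tok in neutrals:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            ctx.update(t for t in tokens[lo:hi] if t not in neutrals)
    return ctx

up_docs = ["earnings beat market outlook", "strong earnings growth"]
down_docs = ["weak earnings miss", "earnings warning issued"]
neutrals = neutral_words(up_docs, down_docs)  # -> {'earnings'}
print(context_counts(up_docs[0], neutrals))
```

One row of the term-document matrix per document would then be such a context count vector; the paper additionally removes words that appear in news about other stocks.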

Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.57-71
    • /
    • 2013
  • Over a billion people in the world generate news minute by minute. People can anticipate some news, but most news arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets because they want to understand what is happening now, to predict what might happen in the near future, and to share and discuss the news. People make better daily decisions by watching the news and obtaining useful information from it. However, it is difficult for people to choose news suited to them and to obtain useful information from it, because there are so many news media, such as portal sites and broadcasters, and most news articles consist of gossip and breaking news. User interests change over time, and many people have no interest in outdated news; a personalized news service therefore needs to reflect users' recent interests, which means it should manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. Personal information about the user is essential for personalization, and a social network service is used to extract it. The proposed system constructs a dynamic user profile based on recent user information from Facebook, one of the major social network services. The user information comprises personal information, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their content and connect with people, and Facebook users can add a Facebook Page to specify their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics.
However, some Pages do not map directly to a news topic, because a Page deals with an individual object and does not provide topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match Pages to news topics via the hierarchy information of its objects. By using recent Page information and articles of Facebook users, the proposed system maintains a dynamic user profile. The generated user profile is used to measure user preferences for news. To generate news profiles, the news categories predefined by the news media are used, and keywords are extracted after analyzing news content, including title, category, and script. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each news article. User profiles and news profiles use the same format so that the similarity between user preferences and news can be measured efficiently. The proposed system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, or hyponyms, because they handle only the given words; the proposed system applies WordNet to the similarity calculation to overcome this limitation. The top-N news articles with the highest similarity values for a target user are recommended to that user. To evaluate the proposed news recommendation system, user profiles were generated from Facebook accounts with the participants' consent, and we implemented a Web crawler to extract news information from PBS, a non-profit public broadcasting television network in the United States, and constructed news profiles. We compared the performance of the proposed method with that of two benchmark algorithms: a traditional method based on TF-IDF, and the 6Sub-Vectors method, which divides the keyword scoring into six parts.
Experimental results demonstrate that, in terms of the prediction error of recommended news, the proposed system provides useful news to users by applying the user's social network information and WordNet functions.
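The profile-matching step in this abstract (TF-IDF vectors plus a similarity score) can be sketched in a few lines. The documents and the "user profile" text below are hypothetical, the WordNet expansion is omitted, and cosine similarity is assumed as the vector-space measure.

```python
import math
from collections import Counter

# Sketch of matching a user profile to news profiles with TF-IDF and
# cosine similarity. Documents and the user-profile text are hypothetical;
# the abstract's WordNet-based expansion is not modeled here.

def tfidf(docs):
    n = len(docs)
    df = Counter(w for d in docs for w in set(d.split()))  # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        vecs.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

news = ["election results analysis", "football season opener", "election polling report"]
user = "election polling interest"          # stand-in for a user profile
vecs = tfidf(news + [user])                 # vectorize news and user together
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = max(range(len(news)), key=lambda i: scores[i])
print(news[best])
```

Ranking all news profiles by such a score and taking the top-N is the recommendation step the abstract describes; WordNet would additionally credit matches between synonyms and hypernyms/hyponyms.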

Radiation Therapy Using M3 Wax Bolus in Patients with Malignant Scalp Tumors (악성 두피 종양(Scalp) 환자의 M3 Wax Bolus를 이용한 방사선치료)

  • Kwon, Da Eun;Hwang, Ji Hye;Park, In Seo;Yang, Jun Cheol;Kim, Su Jin;You, Ah Young;Won, Young Jinn;Kwon, Kyung Tae
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.1
    • /
    • pp.75-81
    • /
    • 2019
  • Purpose: Helmet-type boluses are manufactured with 3D printers to overcome the disadvantages of conventional bolus materials when photon beams are used to treat scalp malignancies. However, PLA, the material used, has a higher density than tissue-equivalent material, and wearing it is inconvenient for the patient. In this study, we treated malignant scalp tumors using an M3 wax helmet made with a 3D printer. Methods and materials: To model the helmet-type M3 wax, a head phantom was scanned by CT and the images were acquired as DICOM files. The helmet portion on the scalp was delineated with a helmet contour. The M3 wax helmet was made by melting paraffin wax, mixing in magnesium oxide and calcium carbonate, solidifying the mixture in a PLA 3D-printed helmet, and then removing the PLA helmet from the surface. The treatment plan was based on 10-portal Intensity-Modulated Radiation Therapy (IMRT) with a therapeutic dose of 200 cGy, using the Analytical Anisotropic Algorithm (AAA) of Eclipse. The dose was then verified using EBT3 film and a MOSFET (Metal Oxide Semiconductor Field Effect Transistor, USA) dosimeter, and the IMRT plan was measured three times at three points, reproducing the head phantom under the same conditions as in the CT simulation room. Results: The Hounsfield unit (HU) of the bolus measured by CT was 52±37.1. The TPS doses at M3 wax bolus measurement points A, B, and C were 186.6 cGy, 193.2 cGy, and 190.6 cGy, and the doses measured three times with the MOSFET were 179.66±2.62 cGy, 184.33±1.24 cGy, and 195.33±1.69 cGy, giving error rates of -3.71 %, -4.59 %, and 2.48 %. The doses measured with EBT3 film were 182.00±1.63 cGy, 193.66±2.05 cGy, and 196±2.16 cGy, with error rates of -2.46 %, 0.23 %, and 2.83 %. Conclusions: The thickness of the M3 wax bolus was 2 cm, which helped the treatment plan easily lower the dose to the brain.
The maximum error rate of the scalp surface dose in the treatment dose verification was within 5 %, and generally within 3 %, across the A, B, and C measurements with both the EBT3 film and MOSFET dosimeters. The M3 wax bolus is quicker and cheaper to produce than a 3D-printed one, can be reused, and, as a human-tissue-equivalent material, is very useful for treating scalp malignancies. Therefore, we expect that the use of cast M3 wax boluses, which address the production time and cost of large 3D-printed boluses and compensators, will increase.
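The error rates quoted in this abstract follow the usual (measured − planned) / planned definition. A quick sketch using the TPS doses and MOSFET mean doses from the abstract; small rounding differences from the quoted rates are possible, since the means themselves are rounded.

```python
# Sketch of the dose error-rate calculation used in the verification:
# error (%) = (measured - planned) / planned * 100. Values are the TPS
# doses and MOSFET mean doses quoted in the abstract (points A, B, C).

def error_rate(measured_cgy, planned_cgy):
    return (measured_cgy - planned_cgy) / planned_cgy * 100.0

points = {"A": (179.66, 186.6), "B": (184.33, 193.2), "C": (195.33, 190.6)}
for name, (measured, planned) in points.items():
    print(name, round(error_rate(measured, planned), 2))
```

Points B and C reproduce the abstract's -4.59 % and 2.48 % exactly; point A lands within 0.01 percentage points of the quoted -3.71 %.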