• Title/Summary/Keyword: 양하


Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.19 no.3, pp.57-71, 2013
  • Over a billion people worldwide generate news minute by minute. Some news is anticipated, but most arises from unexpected events such as natural disasters, accidents, and crimes. People spend considerable time watching the enormous volume of news delivered by many media outlets because they want to understand what is happening now, predict what might happen in the near future, and share and discuss the news. They make better daily decisions by obtaining useful information from the news they watch. However, it is difficult for people to choose news suited to them and to extract useful information from it, because there are so many news media, such as portal sites and broadcasters, and most articles consist of gossip and breaking news. User interest also changes over time, and many people have no interest in outdated news; a personalized news service therefore needs to reflect users' recent interests, which means it should manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. Personalization necessarily requires the user's personal information, and a social network service is used to extract it. The proposed system constructs a dynamic user profile based on recent user information from Facebook. This information comprises personal details, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their content and connect with people, and Facebook users can add a Page to express their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics. However, some Pages cannot be matched directly to a news topic, because a Page deals with an individual object and does not provide topic information suited to news. Freebase, a large collaborative database of well-known people, places, and things, is used to match Pages to news topics through the hierarchy information of its objects. By using Facebook users' recent Page information and articles, the proposed system maintains a dynamic user profile, which is used to measure user preferences on news. To generate news profiles, the news categories predefined by the news media are used, and keywords are extracted after analyzing news contents, including the title, category, and script. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each article. The same format is used for user profiles and news profiles so that the similarity between user preferences and news can be measured efficiently. The proposed system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, or hyponyms because they handle only the given words; the proposed system applies WordNet to the similarity calculation to overcome this limitation. The Top-N news articles with the highest similarity values for a target user are then recommended.
To evaluate the proposed news recommendation system, user profiles were generated from Facebook accounts with the participants' consent, and we implemented a Web crawler to extract news from PBS, a non-profit public broadcasting television network in the United States, to construct news profiles. We compare the performance of the proposed method with that of two benchmark algorithms: a traditional TF-IDF-based method and the 6Sub-Vectors method, which divides the points used for keyword extraction into six parts. Experimental results demonstrate, in terms of the prediction error of recommended news, that the proposed system provides useful news to users by applying the user's social network information and WordNet.
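The matching pipeline described above (TF-IDF profiles plus WordNet-aware similarity) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: `expand_with_wordnet`, the toy user profile, and the sample articles are invented for the example.

```python
# Sketch: TF-IDF profile matching with WordNet synonym expansion, so related
# words overlap in vector space instead of being missed by exact-term matching.
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_with_wordnet(text):
    """Append WordNet synonyms of each token to the text (illustrative helper)."""
    tokens = text.lower().split()
    expanded = list(tokens)
    for tok in tokens:
        for syn in wn.synsets(tok)[:2]:                      # a few senses per word
            expanded += [l.name().replace('_', ' ') for l in syn.lemmas()[:3]]
    return ' '.join(expanded)

user_profile = "hurricane flood rescue weather"              # e.g. from Facebook Pages
news_articles = ["storm damage and evacuation coverage",
                 "quarterly smartphone sales figures"]

corpus = [expand_with_wordnet(t) for t in [user_profile] + news_articles]
tfidf = TfidfVectorizer().fit_transform(corpus)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
top_n = scores.argsort()[::-1]                               # Top-N recommendation order
print([(news_articles[i], round(float(scores[i]), 3)) for i in top_n])
```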

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.69-94, 2017
  • Recently, growing demand for big data analysis has driven vigorous development of related technologies and tools. At the same time, advances in IT and the increased penetration of smart devices are producing large amounts of data. Data analysis technology is therefore rapidly becoming popular, attempts to acquire insight through data analysis keep increasing, and big data analysis will grow more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to those who requested it. However, the rising interest in big data analysis has stimulated programming education and the development of many analysis programs, so the entry barriers are gradually lowering, the technology is spreading, and big data analysis is increasingly expected to be performed by the demanders themselves. Along with this, interest in various kinds of unstructured data, and especially text data, is continually increasing. New web platforms and techniques are mass-producing text data and prompting active attempts to analyze it, and the results of text analysis are being utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis; among the many text mining techniques used for research, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is regarded as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the whole collection must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to many documents, and it creates a scalability problem: processing time increases sharply with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome it, a divide-and-conquer approach can be applied to topic modeling: a large set of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling over a large number of documents with limited system resources and improves processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed where they reside without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between local topics derived from each unit and global topics derived from the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a procedure needs to be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics must be measured.
Because of these difficulties, this approach has been studied far less than other topic modeling methods. In this paper, we propose a topic modeling approach that addresses both problems. First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirms that the proposed methodology yields results similar to topic modeling over the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
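The divide-and-conquer procedure and the local-to-global topic mapping can be illustrated with off-the-shelf LDA. A minimal sketch, assuming scikit-learn and cosine similarity over topic-word distributions as the mapping criterion; the paper's RGS construction and accuracy measurement are richer than this, and the four-document corpus is a toy.

```python
# Sketch: fit LDA locally on each sub-cluster, then map each local topic to
# its nearest global topic by comparing topic-word distributions over a
# shared vocabulary.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stock market rises on earnings", "team wins league title match",
        "central bank adjusts interest rates", "player scores in final game"]
vec = CountVectorizer()                     # one vocabulary shared by all models
X = vec.fit_transform(docs)

global_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

local_sets = [X[:2], X[2:]]                 # pretend these live on two machines
for i, X_local in enumerate(local_sets):
    local_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_local)
    # rows: local topics, cols: global topics, both as word distributions
    sim = cosine_similarity(local_lda.components_, global_lda.components_)
    mapping = sim.argmax(axis=1)            # nearest global topic per local topic
    print(f"local set {i}: local topic -> global topic {mapping}")
```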

A Study on the Improvement Plans of Police Fire Investigation (경찰화재조사의 개선방안에 관한 연구)

  • SeoMoon, Su-Cheol
    • Journal of Korean Institute of Fire Investigation, v.9 no.1, pp.103-121, 2006
  • We live in more comfortable circumstances thanks to social development and a higher standard of living, but we are also exposed to more fires, on account of larger, taller, and deeper-underground buildings and the use of various energy resources. The flooring materials of modern residences have gone through various alterations according to their use and are now applied as finished goods on the floors of apartments, houses, and shops. Among the many materials people ordinarily come into contact with, we first need to experiment on fire spread with the heated floors used in apartments and with common floor coverings. Scientific investigators encounter incidents of arson or accidental fire closely connected with petroleum products on flooring materials, which give rise to many problems. On this account, I propose that we conduct experiments on the fire patterns produced by each petroleum product and use them to discriminate accidental fires from arson. In an investigation, finding the point of smoldering may be essential to clearing up the cause of a fire, but it is not itself the cause. In addition, all kinds of fire cases and accidents are subject to legislation and standards intended to minimize damage and enable an early response: we are supposed to install electrical apparatus, automatic alarm equipment, and automatic fire extinguishers to protect ourselves from fire, to check them regularly, to escape urgently when a fire breaks out, and to use fire-resistant construction to keep flames from spreading to neighboring areas. Several factors must therefore be considered when investigating the cause of a fire-related case or accident, which means a single investigator or investigative team cannot by itself clarify both the point of origin and the cause of a fire. Accordingly, this thesis limits its explanations to the judgement and verification of the cause of a fire and of the specific fire-spread path, based on investigation at the spot where the fire broke out. Fire discernment likewise focuses on the early fire-spread path and the sources of ignition, and I consider the realities and problems of police fire investigation to remain a matter of debate. The cause of a fire must be examined through logical judgement on the basis of abundant scientific knowledge and experience covering the whole of fire phenomena. The judgement of the cause should center on the fire-spread situation at the scene, and verification should rest on situational proof tracing from the fire-spread traces back to the ignition sources. The causal relation of a fire outbreak should not be established by arbitrary opinion detached from concrete facts, and deduction from coincidence invites mistakes. It is absolutely necessary to observe objectively and grasp the fire situation during the cause investigation; examining the scene with prejudice is not allowed.
The ignition source itself is liable to be regarded as the cause of a fire, which casts doubt on results shaped by the interests of the respective investigators: the police investigate hoping it is not arson, the fire department hoping it is not a fault in installations or equipment, insurance companies hoping it is arson, electrical specialists hoping it is not an electrical defect, and gas specialists hoping it is not a gas problem. Under these conditions one cannot expect fair investigation or dispel such misgivings, because the ignition source is treated as the cause of the fire and civil or criminal responsibility attaches to it. The cause of a fire should therefore be investigated with independent research, investigation, and appraisal, and finally clarified by putting all the results together.


Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.35-48, 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue owing to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of the industry's distinctive capital structure and debt-to-equity ratios, they are harder to forecast. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates, and high leverage coupled with rising bankruptcy rates places a heavier burden on the banks that lend to them. Nevertheless, bankruptcy prediction research has concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial statements have been studied for many years, but they are intended for companies in general and may not accurately forecast companies with disproportionately large liquidity risks, such as construction companies. The construction industry is capital-intensive, requires large investments in long-term projects, and has comparatively long payback periods, so the criteria used to judge financial risk in other industries cannot be applied directly to evaluate construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It estimates the likelihood of a company going bankrupt with a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made their risk hard to forecast. With the development of machine learning, recent studies of corporate bankruptcy forecasting have applied it: pattern recognition, a representative application area of machine learning, analyzes patterns in a company's financial information and judges whether the pattern belongs to the bankruptcy-risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies when they are analyzed by size. We classified construction companies into three groups - large, medium, and small - based on the company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
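A hedged sketch of the experimental setup follows: one AdaBoost classifier trained and evaluated per capital-size group. The feature set, the size thresholds, and the synthetic labels are placeholders for illustration, not the study's data.

```python
# Sketch: per-size-group AdaBoost bankruptcy classification on toy
# financial-ratio features (1 = bankrupt in this synthetic labeling).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))        # e.g. debt ratio, liquidity, profitability, leverage
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)   # toy bankruptcy rule
capital = rng.uniform(1, 100, size=n)                       # capital in billions of won (toy)

groups = {"small": capital < 10,
          "medium": (capital >= 10) & (capital < 50),
          "large": capital >= 50}
for name, mask in groups.items():
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(clf, X[mask], y[mask], cv=5).mean()
    print(f"{name}: CV accuracy {acc:.3f}")
```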

Light and Electron Microscopy of Gill and Kidney on Adaptation of Tilapia(Oreochromis niloticus) in the Various Salinities (틸라피아의 해수순치시(海水馴致時) 아가미와 신장(腎臟)의 광학(光學) 및 전자현미경적(電子顯微鏡的) 관찰(觀察))

  • Yoon, Jong-Man;Cho, Kang-Yong;Park, Hong-Yang
    • Applied Microscopy, v.23 no.2, pp.27-40, 1993
  • This study examined the light microscopic and ultrastructural changes of the gill and kidney of female tilapia (Oreochromis niloticus) adapted to 0‰, 10‰, 20‰, and 30‰ salt concentrations, using light, scanning, and transmission electron microscopy. The results are summarized as follows. Gill chloride cell hyperplasia, gill lamellar epithelial separation, kidney glomerular shrinkage, blood congestion in the kidneys, and deposition of hyaline droplets in the kidney glomeruli and tubules were the histological alterations observed in Oreochromis niloticus. The incidence and severity of gill chloride cell hyperplasia increased rapidly with salinity, and the number of chloride cells in the gill lamellae increased rapidly in response to high external NaCl concentrations. Scanning electron microscopy (SEM) showed that the gill secondary lamellae of tilapia exposed to seawater were characterized by rough, convoluted surfaces during adaptation. Transmission electron microscopy (TEM) showed that the mitochondria in chloride cells exposed to seawater were large and elongate and contained well-developed cristae, and that the number of chloride cells increased. The presence of two mitochondria-rich cell types is discussed with regard to their possible role in the hypoosmoregulatory changes that occur during seawater adaptation. Most seawater-adapted Oreochromis niloticus had an occasional glomerulus completely filling Bowman's capsule in the kidney; glomerular shrinkage occurred more in the kidney tissues of individuals living in 10‰, 20‰, and 30‰ seawater than in those living in 0‰ freshwater, and blood congestion was more severe in the kidney tissues of individuals living in 20‰ and 30‰ seawater than in those living in 10‰ seawater. The glomerular area and the nuclear area decreased in the main segments of the nephron, and the nuclear areas of nephron cells in seawater-adapted tilapia were smaller than those of freshwater-adapted fish. Our findings demonstrate that Oreochromis niloticus tolerates a moderately saline environment and that, despite the histopathological changes, the body weight gain of fish living in 30‰ was relatively higher than that of fish living in 10‰.


Soil Surface Fixation by Direct Sowing of Zoysia japonica with Soil Improvement on the Dredged Soil Slope (해저준설토 사면에서 개량제 처리에 의한 한국들잔디 직파 지표고정 공법에 관한 연구)

  • Jeong, Yong-Ho;Lee, Im-Kyun;Seo, Kyung-Won;Lim, Joo-Hoon;Kim, Jung-Ho;Shin, Moon-Hyun
    • Journal of the Korean Society of Environmental Restoration Technology, v.14 no.4, pp.1-10, 2011
  • This study compared the growth of Zoysia japonica under different soil treatments on the Saemangeum sea dike, which is filled with dredged soil. On the control plot, Zoysia japonica was planted using the sod-pitching method; on the plots treated with forest soil and soil improvement, Zoysia japonica seeds were sprayed mechanically. Sixteen months after planting, coverage rate, leaf length, leaf width, and root length were measured and analyzed, and three Zoysia japonica samples per plot were collected for nutrient analysis. The coverage rate was 100% in the B treatment plot (dredged soil + 40 kg/m³ soil improvement + forest soil), the C treatment plots (dredged soil + 60 kg/m³ soil improvement + forest soil), and the D treatment plots (dredged soil + 60 kg/m³ soil improvement), while only 43% of the soil surface was covered with Zoysia japonica on the control plots. Leaf width was greatest in the C treatment plots (3.79 mm), followed by the D (3.49 mm), B (2.40 mm), and control plots (1.97 mm). Leaf and root length in the D treatment were 30.18 cm and 13.18 cm, the highest among the treatments. Leaf length was highest in the D treatment, followed by the C, B, and A treatments; root length was highest in the D treatment, followed by the C, A, and B treatments. The nitrogen and phosphate contents of the above-ground part of Zoysia japonica were highest in the C treatment, followed by the D, B, and A treatments, while those of the underground part were highest in the D treatment, followed by the C, A, and B treatments. The C and D treatments showed the best results in every aspect of grass growth. These results can be used to identify a cost-effective way to improve soil quality for soil surface fixation on reclaimed areas using grass species.

Physio-Ecological Studies on Stevia(Stevia rebaudiana Bertoni) (스테비아(Stevia rebaudiana Bertoni)에 관한 생리 생태적 연구)

  • Kwang-He Kang;Eun-Woong Lee
    • KOREAN JOURNAL OF CROP SCIENCE, v.26 no.1, pp.69-89, 1981
  • Stevia (Stevia rebaudiana Bertoni) is a perennial herb widely distributed in the mountainous areas of Paraguay. It belongs to the family Compositae and contains 6 to 12 percent stevioside in its leaves. Stevioside is a glucoside with a sweetening character similar to sugar, and its degree of sweetness is approximately 300 times that of sugar. Since Korea does not produce any sugar crops, and synthetic sweeteners are potentially hazardous to health, it is rather urgent to develop an economical new sweetener. The present experiments were therefore conducted to establish cultural practices for stevia, a new sweetening herb introduced into Korea in 1973. The results are summarized as follows: 1. The days from transplanting of cuttings to flower bud formation in 6 stevia lines were similar among daylengths of 8, 10, and 12 hours, but much greater at daylengths of 14 or 24 hours, and varietal differences were noticeable. All lines were photosensitive; line 77013 was the most sensitive, while 77067 and Suweon 2 were less sensitive to daylength. 2. The critical daylength of all lines appeared to be approximately 12 hours; plant growth was severely retarded at daylengths of less than 12 hours. 3. Cuttings responded to short daylength before rooting. The number of days from transplanting to flower bud formation was 20 days for 40-day-old cuttings in the nursery bed and was delayed as the nursery duration was shortened. 4. The number of days from emergence to flower bud formation was shortest when short-day treatment began 20 days after emergence, and became longer when the treatment started earlier or later than 20 days. 5. Plant height, number of branches, and top dry weight of stevia were reduced as the cutting date was delayed from March 20 to May 20. The highest yield of dry leaf was obtained with a nursery duration of 40-50 days for the March 20 cutting, 30-40 days for the April 20 cutting, and 30 days for the May 20 cutting. 6. An asymptotic relationship was observed between plant population and leaf dry weight. Yield of dry leaf increased rapidly as the plant population increased from 5,000 to 10,000 plants/10a, increased at a reduced rate from 10,000 to 20,000 plants/10a, and leveled off at populations above 20,000 plants/10a. 7. Stevia was adaptable in Suweon, Chengju, Mokpo, and Jeju, and drought was one of the main factors reducing dry leaf yield. Yield of dry leaf was reduced significantly (by approximately 30%) for June 20 transplanting compared to transplanting at the optimum time. 8. Yield of dry leaf was higher in a vinyl house than in the unprotected control at long or natural daylength, except under the short-day treatment of March 20. The higher temperature in a vinyl house had no beneficial effect for the April 20 transplanting. 9. The stevioside content was highest in the upper leaves of the plant and lowest in the plant parts 20 cm above ground. Leaf dry weight and stevioside yield came mainly from the plant parts 60 to 120 cm above ground, though varietal differences were also significant. 10. Delaying harvest until flower bud formation increased leaf dry weight remarkably; however, yield changed insignificantly when harvests were made at any time after flower bud formation. Stevioside content was highest at the time of flower bud formation, and harvests earlier or later than this gave lower contents.
The optimum harvesting time, determined by leaf dry weight and stevioside content, was the period from flower bud formation to just before flowering, which in the Suweon area would be September 10 to September 15. 11. The stevioside and rebaudioside contents in the leaves of the stevia varieties ranged from 5.4% to 14.3% and from 1.5% to 8.3%, respectively; however, no definite relationship between stevioside and rebaudioside was observed in these experiments.


Growth Efficiency, Carcass Quality Characteristics and Profitability of 'High'-Market Weight Pigs ('고체중' 출하돈의 성장효율, 도체 품질 특성 및 수익성)

  • Park, M.J.;Ha, D.M.;Shin, H.W.;Lee, S.H.;Kim, W.K.;Ha, S.H.;Yang, H.S.;Jeong, J.Y.;Joo, S.T.;Lee, C.Y.
    • Journal of Animal Science and Technology, v.49 no.4, pp.459-470, 2007
  • Domestically, finishing pigs are marketed at 110 kg on average. However, it appears feasible to increase the market weight to 120 kg or more without lowering carcass quality, because most domestic pigs for pork production descend from lean-type lineages. The present study investigated the growth efficiency and profitability of 'high'-market-weight pigs and the physicochemical characteristics and consumer acceptability of the high-weight carcass. A total of 96 (Yorkshire × Landrace) × Duroc-crossbred gilts and barrows were fed a finisher diet ad libitum in 16 pens beginning at 90 kg BW, after which the animals were slaughtered at 110 kg (control) or 'high' market weight (135 and 125 kg for gilts and barrows, respectively) and their carcasses were analyzed. Average daily gain and gain:feed did not differ between the two sexes or market weight groups, whereas average daily feed intake was greater in the barrow and high-market-weight groups than in the gilt and 110-kg groups, respectively (P<0.01). Backfat thickness of the high-market-weight gilts and barrows, corrected to 135 and 125 kg live weight, was 23.7 and 22.5 mm, respectively, greater (P<0.01) than that of their 110-kg counterparts (19.7 and 21.1 mm). Percentages of the trimmed primal cuts per total trimmed lean (w/w), except for loin, differed statistically (P<0.05) between the sexes or market weight groups, but the numerical differences were rather small. Crude protein content of the loin was greater in the high than in the 110-kg market group (P<0.01), but crude fat and moisture contents and other physicochemical characteristics, including the color of this primal cut, did not differ between the sexes or market weights. Aroma, marbling, and overall acceptability scores were greater in the high than in the 110-kg group in the sensory evaluation of fresh loin (P<0.01); however, overall acceptability of cooked loin, belly, and ham did not differ between the market weight groups. Marginal profits of the 135- and 125-kg high-market-weight gilt and barrow relative to their 110-kg counterparts were approximately -35,000 and 3,500 won per head under the current carcass grading standard and prices. Without the upper weight limits for the A- and B-grade carcasses, however, the marginal profits of the high-market-weight gilt and barrow would have amounted to 22,000 and 11,000 won per head, respectively. In summary, 120-125 kg market pigs are likely to meet consumer preferences better than 110-kg pigs and to bring a profit equal to or slightly greater than that of the latter, even under the current carcass grading standard. Moreover, if the upper weight limits of the A- and B-grade carcasses were removed or raised to accommodate the high-weight carcass, the optimum market weights for the gilt and barrow would fall at the target weights of the present study, i.e. 135 and 125 kg, respectively.

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.18 no.1, pp.1-21, 2012
  • In recent years, the mobile phone has evolved extremely fast. It is equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, along with features such as a GPS sensor and a digital compass. This evolution significantly helps application developers exploit the power of smartphones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date there are many popular location-based AR services, such as Layar and Wikitude, but these systems have a major limitation: the AR contents are rarely overlaid precisely on the real target. Other research provides context-based AR services using image recognition and tracking; there the AR contents are precisely overlaid on the real target, but real-time performance is restricted by the retrieval time, and such systems are hard to implement over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information media, such as text, pictures, and video, on their smartphone viewfinder when they point the phone at a certain building or landmark. For this, a landmark recognition technique is applied; SURF point-based features are used in the matching process because of their robustness. To ensure the image retrieval and matching processes are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction. The queried image is matched only against this selected data, so matching speed increases significantly. The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required: users can simply look for an appropriate topic by searching linked data with a keyword, which helps the system find the target URI and generate correct AR contents. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances, a procedure that builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and the user's location information restrict the retrieval range. Compared with existing research using clusters and GPS information, where retrieval takes around 70-80 ms, our approach reduces the retrieval time to around 18-20 ms on average, so the total processing time drops from 490-540 ms to 438-480 ms. The performance improvement becomes more pronounced as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
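The retrieval-narrowing step (using GPS and compass readings to keep only nearby, pointed-at landmarks before image matching) can be sketched as follows. The paper uses SURF features; ORB stands in here because SURF requires the non-free opencv-contrib build. The coordinates, field-of-view threshold, and landmark list are invented for illustration.

```python
# Sketch: filter landmark candidates by bearing relative to the phone's
# compass heading, then (optionally) match features only against survivors.
import math
import cv2

landmarks = [  # (name, lat, lon, precomputed ORB descriptors)
    ("library", 37.5665, 126.9780, None),
    ("tower",   37.5651, 126.9895, None),
]

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from point 1 to point 2."""
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(math.radians(lat2))
    x = (math.cos(math.radians(lat1)) * math.sin(math.radians(lat2))
         - math.sin(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def candidates(user_lat, user_lon, heading, fov=60.0):
    """Keep landmarks whose bearing falls inside the phone's field of view."""
    out = []
    for name, lat, lon, desc in landmarks:
        diff = abs((bearing(user_lat, user_lon, lat, lon) - heading + 180) % 360 - 180)
        if diff <= fov / 2:
            out.append((name, desc))
    return out

print(candidates(37.5660, 126.9760, heading=95.0))

# Feature-matching step against the reduced candidate set (needs real images
# and precomputed descriptors, so shown as an outline only):
# query = cv2.imread("viewfinder.jpg", cv2.IMREAD_GRAYSCALE)
# _, qdesc = cv2.ORB_create().detectAndCompute(query, None)
# matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# best = max(candidates(37.5660, 126.9760, 95.0),
#            key=lambda c: len(matcher.match(qdesc, c[1])))
```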

Multi-Dimensional Analysis Method of Product Reviews for Market Insight (마켓 인사이트를 위한 상품 리뷰의 다차원 분석 방안)

  • Park, Jeong Hyun;Lee, Seo Ho;Lim, Gyu Jin;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.57-78, 2020
  • With the development of the Internet, consumers can check product information easily through e-commerce. Product reviews used in the purchasing process are based on user experience, allowing consumers to act as producers of information as well as consumers of it. This can increase the efficiency of purchasing decisions from the consumer's perspective, and from the seller's point of view, it can help develop products and strengthen competitiveness. However, it takes a great deal of time and effort to grasp the overall assessment, and the assessment dimensions one cares about, from the vast number of product reviews offered by e-commerce sites for the products a consumer wants to compare. This is because product reviews are unstructured information whose sentiment and assessment dimensions cannot be read off immediately. For example, a consumer who wants to purchase a laptop would like to check the assessment of comparable products on each dimension, such as performance, weight, delivery, speed, and design. In this paper, we therefore propose a method to automatically generate multi-dimensional product assessment scores from the reviews of the products to be compared. The proposed method consists of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are built from reviews of the large product category. By combining word embedding with association analysis, the dimension classification model compensates for the limitation of embedding-based approaches in existing studies, which relate dimensions and words using only the distance between words in sentences. The sentiment analysis model is a CNN trained on phrase-level data tagged as positive or negative, for accurate polarity detection. The individual product scoring phase then applies these pre-built models to each review at the phrase level: phrases judged to describe a specific dimension are grouped by dimension, and multi-dimensional assessment scores are obtained by aggregating the polarity within each dimension in proportion to the reviews so grouped. In the experiments, approximately 260,000 reviews of the large product category were collected to build the dimension classification model and the sentiment analysis model, and reviews of laptops from companies S and L sold through e-commerce were collected as test data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word embedding method with an association analysis that captures the frequency between words and dimensions; this combination raised the model's accuracy by 13.7%. The sentiment analysis model analyzed assessments more closely when trained on phrase units rather than sentences, achieving an accuracy 29.4% higher than the sentence-based model. Through this study, both sellers and consumers can expect more efficient decision-making in purchasing and product development, given that products can be compared along multiple dimensions.
In addition, unstructured text reviews were transformed into objective values such as frequencies and morphemes and analyzed using word embedding together with association analysis, improving the objectivity and precision of multi-dimensional analysis. This makes the approach an attractive analysis model, not only enabling more effective service deployment amid the evolving and fiercely competitive e-commerce market, but also satisfying both customers and sellers.
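The scoring phase (classify each phrase into an assessment dimension, attach a polarity, aggregate per dimension) can be outlined as below. Seed-keyword overlap stands in for the paper's embedding-plus-association classifier, and a tiny lexicon stands in for its phrase-level CNN; the dimensions, keywords, and phrases are all toys.

```python
# Sketch: phrase-level dimension classification + polarity, aggregated into
# per-dimension assessment scores for one product.
from collections import defaultdict

DIMENSION_SEEDS = {"performance": {"fast", "slow", "lag", "speed"},
                   "weight": {"heavy", "light", "portable"},
                   "design": {"look", "design", "color"}}
POSITIVE = {"fast", "light", "great", "portable"}
NEGATIVE = {"slow", "heavy", "lag", "ugly"}

def classify(phrase):
    """Assign the phrase to the dimension whose seed words it overlaps most."""
    tokens = set(phrase.lower().split())
    scores = {d: len(tokens & seeds) for d, seeds in DIMENSION_SEEDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def polarity(phrase):
    """Toy lexicon polarity: positive hits minus negative hits."""
    tokens = set(phrase.lower().split())
    return len(tokens & POSITIVE) - len(tokens & NEGATIVE)

phrases = ["boots fast and stays fast", "a bit heavy to carry", "great design"]
agg = defaultdict(list)
for p in phrases:
    dim = classify(p)
    if dim:
        agg[dim].append(polarity(p))

# per-dimension score: mean polarity of the phrases assigned to that dimension
report = {d: sum(v) / len(v) for d, v in agg.items()}
print(report)   # {'performance': 1.0, 'weight': -1.0, 'design': 1.0}
```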