• Title/Summary/Keyword: Reflecting


A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.95-118 / 2017
  • Recently, transactions of row housing and multiplex housing have become active, particularly in downtown areas, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing nevertheless remains a blind spot for real estate information: changes in market size and information asymmetry driven by shifting demand have created social problems. Moreover, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were drawn along administrative boundaries and have been used in existing real estate studies; because they are urban-planning zones, they are not a district classification suited to real estate research. Building on prior studies, this study found that the spatial structure of Seoul needs to be redefined when estimating future housing prices. This study therefore attempted to delineate areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, the simple division by existing administrative districts has proven inefficient, so this study aims to cluster Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to real transaction price data for row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The data comprise real transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land values of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing consisted of removing underground transactions, standardizing prices per unit area, and removing outlier transaction cases (above 5 and below -5).
Through preprocessing, the data were reduced from 132,707 to 126,759 cases. The R program was used as the analysis tool. After preprocessing, a data model was constructed: first, K-Means clustering was performed; then a regression analysis using the hedonic model and a cosine similarity analysis were conducted. Based on the constructed data model, Seoul was clustered by longitude and latitude, and the result was compared with the existing districts. The goodness of fit of the model exceeded 75%, and the variables used in the hedonic model were significant. In other words, the 5 or 25 existing administrative districts were reorganized into 16 districts. This study thus derived a clustering method for row and multiplex housing in Seoul that reflects price characteristics, using the K-Means clustering algorithm and a hedonic model, and presents academic and practical implications along with the study's limitations and directions for future research. The academic implications are that the clustering reflects price characteristics to improve on the districts used by the Seoul Metropolitan Government, KAB, and existing real estate research; that, whereas apartments were the main subject of existing real estate research, this study proposes a method for classifying areas in Seoul using public information (i.e., real transaction data from MOLIT) under Government 3.0. The practical implications are that the results can serve as basic data for real estate research on row and multiplex housing, that such research is expected to be activated, and that the accuracy of models based on actual transactions is expected to increase.
Future research should conduct various analyses to overcome these limitations and pursue deeper study.
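The study ran K-Means in R; purely to illustrate the clustering step, here is a minimal pure-Python sketch of Lloyd's K-Means over (longitude, latitude) pairs. The coordinates and k are hypothetical, not the paper's data.

```python
import random

def kmeans(points, k, iters=100, seed=42):
    """Minimal Lloyd's K-Means over (longitude, latitude) tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        new_centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers, clusters
```

Called with, say, four coordinate pairs forming two geographic groups and k=2, the two groups separate into distinct clusters; in the study this step runs over all 126,759 transactions.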

Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun;Lee, Junho;Kim, Hyomin;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.149-169 / 2020
  • "The Urban Regeneration New Deal project", one of the government's major national projects, aims to develop underdeveloped areas by investing 50 trillion won in 100 locations in the first year and 500 locations over the next four years, and is drawing keen attention from the media and local governments. However, the project model divides project areas into five categories, Our Neighborhood Restoration, Housing Maintenance Support Type, General Neighborhood Type, Central Urban Type, and Economic Base Type, and thus fails to reflect the original characteristics of each area. Judging from the keywords for successful urban regeneration in Korea ("resident participation", "regional specialization", "ministerial cooperation", and "public-private cooperation"), when local governments propose urban regeneration projects to the government, it is most important to understand the characteristics of the city accurately and to push ahead with projects suited to those characteristics with the help of local residents and private companies. In addition, considering gentrification, one of the side effects of urban regeneration projects, it is important to select and implement urban regeneration types suitable for the characteristics of the area. To supplement the limitations of the "Urban Regeneration New Deal project" methodology, this study proposes a system that recommends urban regeneration types suitable for candidate sites by utilizing various machine learning algorithms, referring to the urban regeneration types of the "2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan", which was promoted based on regional characteristics. There are four types of urban regeneration in Seoul: Low-use and Low-level Development, Decline and Abandonment, Deteriorated Housing, and Specialization of Historical and Cultural Resources (Shon and Park, 2017).
To identify regional characteristics, approximately 100,000 text documents were collected for 22 regions where projects of the four urban regeneration types had been carried out. Using the collected data, key keywords for each region were extracted by urban regeneration type, and topic modeling was conducted to explore whether there were differences between types. As a result, many topics related to real estate and the economy appeared in old residential areas, while declining and underdeveloped areas showed topics reflecting their past as centers of active industrial activity. Historical and cultural resource areas, which contain traces of the past, yielded many government-related keywords, confirming political topics and cultural topics arising from various events. Finally, low-use and underdeveloped areas showed many topics on real estate and accessibility; these are mainly well-connected areas where development is planned or likely. Furthermore, a model was implemented that proposes urban regeneration types tailored to regional characteristics for regions other than Seoul. Machine learning was used to implement the model, with training and test data randomly split at an 8:2 ratio. To compare performance across models, input variables were prepared in two ways (Count Vector and TF-IDF Vector) and five classifiers were applied: SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting, yielding a performance comparison over ten models in total. The best-performing model was Gradient Boosting with TF-IDF Vector input, with an accuracy of 97%.
Therefore, the recommendation system proposed in this study is expected to recommend urban regeneration types based on the regional characteristics of new project sites in the process of carrying out urban regeneration projects.
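As a rough illustration of the two input representations the study compares (Count Vector vs. TF-IDF Vector), the sketch below builds both from tokenized documents in plain Python. The smoothed-idf formula follows a common convention (e.g., scikit-learn's); the toy keywords are hypothetical, not drawn from the collected corpus.

```python
import math
from collections import Counter

def count_vectors(docs, vocab):
    """Count Vector: raw term frequency of each vocabulary term per document."""
    return [[Counter(doc)[t] for t in vocab] for doc in docs]

def tfidf_vectors(docs, vocab):
    """TF-IDF Vector: term frequency scaled by smoothed inverse document
    frequency, idf(t) = ln((1 + N) / (1 + df(t))) + 1."""
    n = len(docs)
    df = {t: sum(1 for d in docs if t in d) for t in vocab}
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in vocab}
    return [[Counter(doc)[t] * idf[t] for t in vocab] for doc in docs]
```

A term appearing in many documents ("housing" below) gets a lower weight than an equally frequent but rarer term, which is the property that lets TF-IDF inputs separate region types better than raw counts.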

Analysis of Football Fans' Uniform Consumption: Before and After Son Heung-Min's Transfer to Tottenham Hotspur FC (국내 프로축구 팬들의 유니폼 소비 분석: 손흥민의 토트넘 홋스퍼 FC 이적 전후 비교)

  • Choi, Yeong-Hyeon;Lee, Kyu-Hye
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.91-108 / 2020
  • Korea's well-known soccer players have been performing steadily in international leagues, which has heightened Korean fans' interest in those leagues. Reflecting this growing phenomenon, this study examined overall consumer perception in the uniform consumption of domestic soccer fans and compared changes in perception following player transfers. In particular, the paper examined the consumer perception and purchase factors of soccer fans as expressed on social media, focusing on the periods before and after Heung-Min Son's move to the English Premier League club Tottenham Hotspur FC. To this end, consumer postings were collected from domestic websites and social media with the keyword "EPL uniform" using Python 3.7 and analyzed with the Ucinet 6, NodeXL 1.0.1, and SPSS 25.0 programs. The results can be summarized as follows. First, the uniform of the club that consistently topped the league gained attention as a popular uniform, and players' performance and position were identified as key factors in the purchase and search of professional football uniforms. For clubs, actual ranking and league titles were important factors, and the club's emblem and the sponsor logo attached to the uniform were also of interest to consumers. In the purchase decision process of professional soccer fans, a uniform's form, marking, authenticity, and sponsors were found to be more important than price, design, size, and logo. The official online store emerged as the major purchasing channel, followed by gifts from friends or purchase requests to acquaintances traveling to the United Kingdom.
Second, classification of key categories through a CONCOR (convergence of iterated correlations) analysis combined with the Clauset-Newman-Moore clustering algorithm showed differences in the classification of individual groups, but groups including EPL club and player keywords were identified as the key topics regarding professional football uniforms. Third, between 2002 and 2006 the central themes for professional football uniforms were the World Cup and the English Premier League, but from 2012 to 2015 the focus shifted to domestic and international players in the English Premier League, and from then on the subject changed to the uniform itself. In this context, the paper confirms that the major issues regarding professional soccer uniforms have changed since Ji-Sung Park's transfer to Manchester United and the strong performances of Sung-Yong Ki, Chung-Yong Lee, and Heung-Min Son in these leagues, and that the uniforms of the clubs to which these players transferred also drew interest. Fourth, both male and female consumers show increasing interest in the English Premier League, to which Son's club Tottenham Hotspur FC belongs; in particular, growing interest in Son tended to increase female consumers' interest in football uniforms. This study presents a variety of findings on sports consumption and has value as a consumer study identifying unique consumption patterns. It is meaningful in that interpretation accuracy was enhanced by a cluster analysis combining CONCOR with the Clauset-Newman-Moore clustering algorithm to identify the main topics. Based on these results, clubs will be able to maximize profits and maintain good relationships with fans by identifying the key drivers of consumer awareness and purchasing among professional soccer fans and establishing effective marketing strategies.
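The "convergence of iteration correlation analysis" mentioned above is commonly known as CONCOR (CONvergence of iterated CORrelations), as implemented in Ucinet. Assuming that reading, a minimal sketch: correlate a keyword profile matrix column-wise, re-correlate the result until entries approach ±1, and split items into two blocks by sign. The profile matrix below is a made-up toy, not the paper's keyword network.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def concor(matrix, iters=50):
    """Iterate column-wise correlations; entries converge toward +/-1,
    then items split into two structurally equivalent blocks by sign."""
    m, k = matrix, len(matrix)
    for _ in range(iters):
        cols = [[m[i][j] for i in range(k)] for j in range(k)]
        m = [[pearson(cols[i], cols[j]) for j in range(k)] for i in range(k)]
    block_a = [i for i in range(k) if m[0][i] > 0]
    block_b = [i for i in range(k) if m[0][i] <= 0]
    return block_a, block_b
```

Repeated application to each block yields the nested partition Ucinet reports; the Clauset-Newman-Moore step in the study plays a complementary role, grouping keywords by modularity rather than structural equivalence.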

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data total 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves both the data imbalance problem caused by the scarcity of default events, long noted as a limitation of the existing methodology, and the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. The model can therefore provide stable default risk assessment to companies that are difficult to rate with traditional credit rating models, such as small and medium-sized companies and startups. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required, given that default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are likewise required for calculation methods.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for evaluation methods to be prepared, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduced the bias of individual models by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and corporate information while retaining the advantage of machine learning-based default risk prediction models: short calculation time. To produce the forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces, and sub-models were trained on the divided sets to produce forecasts. To benchmark the stacking ensemble model, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of their forecasts were constructed. Because the Shapiro-Wilk normality test showed that no pair followed normality, the nonparametric Wilcoxon rank sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed statistically significantly from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will serve as a resource for increasing practical use by overcoming and improving the limitations of existing machine learning-based models.
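The division of training data into seven pieces to generate sub-model forecasts corresponds to out-of-fold prediction, the standard first stage of a stacking ensemble. A minimal sketch of that stage follows, with the sub-model interface (fit/predict callables) as an assumed simplification rather than the paper's actual models.

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds."""
    size, folds, start = n // k, [], 0
    for i in range(k):
        end = start + size + (1 if i < n % k else 0)
        folds.append(list(range(start, end)))
        start = end
    return folds

def oof_predictions(X, y, fit, predict, k=7):
    """Out-of-fold forecasts: for each fold, train a sub-model on the
    remaining folds and predict the held-out part, so the meta-model
    is never trained on leaked in-sample fits."""
    preds = [0.0] * len(X)
    for fold in kfold_indices(len(X), k):
        hold = set(fold)
        Xtr = [x for i, x in enumerate(X) if i not in hold]
        ytr = [t for i, t in enumerate(y) if i not in hold]
        model = fit(Xtr, ytr)
        for i in fold:
            preds[i] = predict(model, X[i])
    return preds
```

The meta-model is then trained on these leak-free forecasts (one column per sub-model, including, as the paper notes, a traditional credit rating model if desired) against the default risk targets.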

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.131-145 / 2020
  • In line with the trend of industrial innovation, IoT technology is emerging as a key element in creating new business models and providing user-friendly services in combination with big data. Data accumulated from Internet-of-Things (IoT) devices are widely used to build convenience-based smart systems, since user environment and pattern analysis enable customized intelligent services. Recently, IoT has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems with CCTV. In particular, when planning underground services or building passenger flow control systems to enhance the convenience of citizens and commuters amid congested public transportation such as subways and urban railways, the ease of securing real-time service data and the stability of security must be considered comprehensively. However, previous studies using image data suffer reduced object detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not identify individuals, and can therefore be effectively utilized to build intelligent public services for unspecified people. We use IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service used daily by many people; the temperature data measured by the sensors are transmitted in real time.
The experimental environment for collecting real-time sensor data was established at the equally spaced midpoints of a 4×4 grid in the ceiling of subway entrances with high passenger flow, measuring the temperature change of objects entering and leaving the detection spots. The measured data were preprocessed by setting reference values for the 16 areas and calculating, per unit time, the differences between the temperatures of the 16 areas and their reference values; this methodology maximizes the visibility of movement within the detection area. In addition, the values were scaled by a factor of 10 to reflect temperature differences between areas more sensitively: for example, a sensor reading of 28.5℃ was analyzed as 285. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting these characteristics of the measured, preprocessed data, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which performs well in image classification, with an LSTM, which is especially suitable for analyzing time series data. The CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas. We validated the proposed model by comparing its performance with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). In the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance.
By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response, are expected to be provided without personal information concerns. However, the data were collected from only one side of the entrances, and only data collected over a short period were used for prediction; verification in other environments remains to be carried out. In the future, the proposed model is expected to gain reliability if experimental data are collected in various environments or if training data are augmented with measurements from other sensors.
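The preprocessing described above (per-area reference values, per-unit-time temperature differences, and tenfold scaling, e.g. 28.5℃ recorded as 285) can be sketched as follows. This is one plausible reading of the described pipeline; the frame and reference values are hypothetical, not the study's sensor data.

```python
def preprocess_frame(frame, reference):
    """Difference each reading in a 4x4 temperature frame from its area's
    reference value, then scale by 10 so small changes register as integers.
    Returns a 4x4 grid of scaled differences, one per detection area."""
    return [
        [round((t - r) * 10) for t, r in zip(row, ref_row)]
        for row, ref_row in zip(frame, reference)
    ]
```

A sequence of such frames forms the time series of 4×4 "images" that the CNN-LSTM consumes: the CNN part reads each frame's spatial pattern, the LSTM part reads the sequence.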

Research on Perfusion CT in Rabbit Brain Tumor Model (토끼 뇌종양 모델에서의 관류 CT 영상에 관한 연구)

  • Ha, Bon-Chul;Kwak, Byung-Kook;Jung, Ji-Sung;Lim, Cheong-Hwan;Jung, Hong-Ryang
    • Journal of radiological science and technology / v.35 no.2 / pp.165-172 / 2012
  • We investigated the vascular characteristics of tumors and normal tissue using perfusion CT in a rabbit brain tumor model. VX2 carcinoma at a concentration of 1×10⁷ cells/ml (0.1 ml) was implanted in the brains of nine New Zealand white rabbits (weight 2.4-3.0 kg, mean 2.6 kg). Perfusion CT was performed when the tumors had grown to 5 mm. Tumor volume and perfusion values were quantitatively analyzed using a commercial workstation (Advantage Windows workstation, AW version 4.2, GE, USA). The mean volume of the implanted tumors was 316±181 mm³; the largest and smallest tumors were 497 mm³ and 195 mm³, respectively. All implanted tumors were single-nodular, and no intracranial metastasis was observed. On perfusion CT, cerebral blood volume (CBV) was 74.40±9.63, 16.08±0.64, and 15.24±3.23 ml/100g in the tumor core, ipsilateral normal brain, and contralateral normal brain, respectively (p ≤ 0.05). For cerebral blood flow (CBF), there were significant differences between the tumor core and both normal brains (p ≤ 0.05), but none between ipsilateral and contralateral normal brain (962.91±75.96 vs. 357.82±12.82 vs. 323.19±83.24 ml/100g/min). For mean transit time (MTT), there were significant differences between the tumor core and both normal brains (p ≤ 0.05), but none between ipsilateral and contralateral normal brain (4.37±0.19 vs. 3.02±0.41 vs. 2.86±0.22 sec). For permeability surface (PS), there were significant differences among the tumor core, ipsilateral, and contralateral normal brain (47.23±25.45 vs. 14.54±1.60 vs. 6.81±4.20 ml/100g/min) (p ≤ 0.05). For time to peak (TTP), there were no significant differences among the tumor core, ipsilateral, and contralateral normal brain.
For positive enhancement integral (PEI), there were significant differences among the tumor core, ipsilateral, and contralateral brain (61.56±16.07 vs. 12.58±2.61 vs. 8.26±5.55 ml/100g) (p ≤ 0.05). For maximum slope of increase (MSI), there were significant differences between the tumor core and both normal brains (p ≤ 0.05), but none between ipsilateral and contralateral normal brain (13.18±2.81 vs. 6.99±1.73 vs. 6.41±1.39 HU/sec). For maximum slope of decrease (MSD), there was a significant difference between the tumor core and contralateral normal brain (p ≤ 0.05), but none between the tumor core and ipsilateral normal brain (4.02±1.37 vs. 4.66±0.83 vs. 6.47±1.53 HU/sec). In conclusion, VX2 tumors were implanted successfully in the rabbit brain, and the stereotactic inoculation method produced single-nodular tumors without intracranial metastasis, suitable for comparative study between tumors and normal tissue. Perfusion CT would therefore be a useful diagnostic tool capable of reflecting tumor vascularity.

Geology of Athabasca Oil Sands in Canada (캐나다 아사바스카 오일샌드 지질특성)

  • Kwon, Yi-Kwon
    • The Korean Journal of Petroleum Geology / v.14 no.1 / pp.1-11 / 2008
  • As conventional oil and gas reservoirs become depleted, interest in oil sands has rapidly increased over the last decade. Oil sands are a mixture of bitumen, water, and host sediments of sand and clay; most oil sand is unconsolidated sand held together by bitumen. Bitumen has an in-situ hydrocarbon viscosity of >10,000 centipoise (cP) at reservoir conditions and an API gravity between 8 and 14°. The largest oil sand deposits are in Alberta and Saskatchewan, Canada; reserves are estimated at 1.7 trillion barrels of initial oil-in-place and 173 billion barrels of remaining established reserves. Alberta's oil sands deposits are grouped into three development areas, the Athabasca, Cold Lake, and Peace River, with the largest current bitumen production from Athabasca. The principal deposits are the McMurray Fm and Wabiskaw Mbr in the Athabasca area, the Gething and Bluesky formations in the Peace River area, and the relatively thin multi-reservoir deposits of the McMurray, Clearwater, and Grand Rapid formations in the Cold Lake area. The reservoir sediments were deposited in the foreland basin (Western Canada Sedimentary Basin) formed by collision between the Pacific and North American plates and subsequent thrusting in the Mesozoic. The deposits are underlain by basement rocks of Paleozoic carbonates with highly variable topography. The oil sands deposits formed during the Early Cretaceous transgression along the Cretaceous Interior Seaway in North America. The oil-sands-hosting McMurray and Wabiskaw deposits in the Athabasca area consist of lower fluvial and upper estuarine-offshore sediments, reflecting the broad overall transgression, and are characterized by facies heterogeneity of channelized reservoir sands and non-reservoir muds.
The main reservoir bodies of the McMurray Formation are fluvial and estuarine channel-point bar complexes interbedded with fine-grained deposits formed in floodplains, tidal flats, and estuarine bays. The Wabiskaw deposits (basal member of the Clearwater Formation) commonly comprise sheet-shaped offshore muds and sands, but occasionally incise deeply into the McMurray deposits, forming channelized reservoir sand bodies of oil sands. In Canada, bitumen in oil sands deposits is produced by surface mining or by in-situ thermal recovery. Bitumen sands recovered by surface mining are converted into synthetic crude oil through extraction and upgrading, whereas bitumen produced by in-situ thermal recovery is transported to refineries after only a bitumen blending process. In-situ thermal recovery is represented by Steam-Assisted Gravity Drainage and Cyclic Steam Stimulation, both based on injecting steam into bitumen sand reservoirs to raise in-situ reservoir temperature and bitumen mobility. In oil sands reservoirs, the efficiency of steam propagation is controlled mainly by reservoir geology; accordingly, understanding the geological factors and characteristics of oil sands reservoir deposits is a prerequisite for well-designed development planning and effective bitumen production. As significant geological factors and characteristics of oil sands reservoir deposits, this study suggests (1) pay of bitumen sands and connectivity, (2) bitumen content and saturation, (3) geologic structure, (4) distribution of mud baffles and plugs, (5) thickness and lateral continuity of mud interbeds, (6) distribution of water-saturated sands, (7) distribution of gas-saturated sands, (8) direction of lateral accretion of point bars, (9) distribution of diagenetic layers and nodules, and (10) texture and fabric changes within reservoir sand bodies.


Understanding User Motivations and Behavioral Process in Creating Video UGC: Focus on Theory of Implementation Intentions (Video UGC 제작 동기와 행위 과정에 관한 이해: 구현의도이론 (Theory of Implementation Intentions)의 적용을 중심으로)

  • Kim, Hyung-Jin;Song, Se-Min;Lee, Ho-Geun
    • Asia pacific journal of information systems / v.19 no.4 / pp.125-148 / 2009
  • UGC (User Generated Content) is emerging as the center of e-business in the Web 2.0 era. The trend reflects the changing roles of users in the production and consumption of content on websites and helps us understand the new strategies of websites such as web portals and social network sites. Nowadays, we consume content created by other non-professional users for both utilitarian (e.g., knowledge) and hedonic (e.g., fun) value. Content we produce ourselves (e.g., photos, videos) is also posted on websites so that our friends, family, and even the public can consume it. Non-professionals, who used to be a passive audience, are now creating content and sharing their UGC with others on the Web. Accessible media, tools, and applications have also reduced the difficulty and complexity of creating content. Realizing that users create plenty of material that is very interesting to other people, media companies (i.e., web portals and social networking websites) are adjusting their strategies and business models accordingly. Increased demand for UGC may lead to website visits, which are the source of advertising revenue. They therefore put more effort into making their websites open platforms where UGC can be created and shared among users without technical and methodological difficulties. Many websites have adopted new technologies such as RSS and open API, and some have even changed the structure of their web pages so that UGC is exposed more often to more visitors. This mainstream position of UGC on websites indicates that acquiring more UGC and supporting participating users have become important to media companies. Although those companies need to understand why general users have shown increasing interest in creating and posting content and what matters to them in the production process, few research results exist to address these issues.
Also, the behavioral process of creating video UGC has not been explored enough for the public to fully understand it. With a solid theoretical background (i.e., the theory of implementation intentions), parts of our proposed research model mirror the process of user behaviors in creating video contents, which consists of intention to upload, intention to edit, edit, and upload. In addition, to explain how those behavioral intentions are developed, we investigated the influences of antecedents from three motivational perspectives (i.e., intrinsic, editing software-oriented, and website network effect-oriented). First, from the intrinsic motivation perspective, we studied the roles of self-expression, enjoyment, and social attention in forming the intention to edit with preferred editing software or the intention to upload video contents to preferred websites. Second, we explored the role of editing software for non-professionals who edit video contents, in terms of how it makes the production process easier and how useful it is in that process. Finally, from the website characteristic-oriented perspective, we investigated the role of a website's network externality as an antecedent of users' intention to upload to preferred websites. The rationale is that posting UGC on websites is basically a social-oriented behavior; thus, users prefer a website with a high level of network externality for uploading contents. This study adopted a longitudinal research design: we emailed recipients twice with different questionnaires. Guided by an invitation email including a link to the web survey page, respondents answered most questions, except edit and upload, in the first survey. They were asked to provide information about the UGC editing software they mainly used and their preferred website for uploading edited contents, and then to answer related questions. 
For example, before answering questions regarding network externality, they individually had to declare the name of the website to which they would be willing to upload. At the end of the first survey, we asked whether they agreed to participate in the follow-up survey a month later. Over twenty days, 333 complete responses were gathered in the first survey. One month later, we emailed those recipients to ask for participation in the second survey; 185 of the 333 recipients (about 56 percent) answered. Personalized questionnaires were provided to remind them of the names of the editing software and website they had reported in the first survey. They reported the degree of editing with the software and the degree of uploading video contents to the website over the past month. All recipients of the two surveys received exchange tickets for books (about 5,000~10,000 Korean Won) according to their frequency of participation. PLS analysis shows that user behaviors in creating video contents are well explained by the theory of implementation intentions. In fact, intention to upload significantly influences intention to edit in the process of accomplishing the goal behavior, upload. These relationships reveal the behavioral process, previously unclear, by which users create video contents for uploading, and also highlight the important role of editing in that process. Regarding the intrinsic motivations, the results illustrate that users are likely to edit their own video contents in order to express intrinsic traits such as thoughts and feelings. Also, their intention to upload contents to a preferred website is formed because they want to attract attention from others through contents reflecting themselves. This result corresponds well to the role of the website characteristic, namely network externality. 
Based on the PLS results, the network effect of a website has a significant influence on users' intention to upload to the preferred website. This indicates that users with social attention motivations are likely to upload their video UGC to a website whose network size is big enough to realize their motivations easily. Finally, regarding editing software characteristic-oriented motivations, making exclusively provided editing software more user-friendly (i.e., ease of use, usefulness) plays an important role in leading to users' intention to edit. Our research contributes to both academic scholars and practitioners. For researchers, our results show that the theory of implementation intentions applies well to the video UGC context and is very useful for explaining the relationship between implementation intentions and goal behaviors. With the theory, this study theoretically and empirically confirmed that editing is a behavior distinct from uploading and important in its own right, and we tested the behavioral process of ordinary users in creating video UGC, focusing on significant motivational factors in each step. In addition, parts of our research model are rooted in solid theoretical backgrounds such as the technology acceptance model and the theory of network externality to explain the effects of UGC-related motivations. For practitioners, our results suggest that media companies need to restructure their websites so that users' needs for social interaction through UGC (e.g., self-expression, social attention) are well met. Also, we emphasize the strategic importance of a website's network size in leading non-professionals to upload video contents. Those websites need to find ways to utilize network effects to acquire more UGC. Finally, we suggest that ways to improve editing software be considered in order to increase editing behavior, which is a very important process leading to UGC uploading.
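The abstract's behavioral chain (intention to upload → intention to edit → edit → upload) was estimated with PLS on real survey data; as a purely illustrative sketch, the idea of estimating standardized path coefficients along such a chain can be shown with synthetic data (all variable names and values here are hypothetical, not the study's data or its PLS procedure):

```python
import numpy as np

# Hypothetical synthetic scores standing in for matched two-wave survey
# responses; the paper itself used PLS on n = 185 matched responses.
rng = np.random.default_rng(0)
n = 185
intention_upload = rng.normal(size=n)
intention_edit = 0.6 * intention_upload + rng.normal(scale=0.8, size=n)
edit = 0.5 * intention_edit + rng.normal(scale=0.8, size=n)
upload = 0.4 * edit + 0.3 * intention_upload + rng.normal(scale=0.8, size=n)

def path_coef(x, y):
    """Standardized bivariate path coefficient (Pearson correlation)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

# Each link of the implementation-intentions chain:
links = {
    "intention_upload -> intention_edit": (intention_upload, intention_edit),
    "intention_edit -> edit": (intention_edit, edit),
    "edit -> upload": (edit, upload),
}
for name, (x, y) in links.items():
    print(f"{name}: {path_coef(x, y):+.2f}")
```

A full PLS model would additionally estimate the measurement model (latent constructs from questionnaire items); this sketch only conveys how positive path coefficients along the chain support the theorized behavioral process.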

Semiweekly variation of Spring Phytoplankton Community in Relation to the Freshwater Discharges from Keum River Estuarine Weir, Korea (금강하구언 담수방류와 춘계 식물플랑크톤 군집의 단주기 변동)

  • Yih, Won-Ho;Myung, Geum-Og;Yoo, Yeong-Du;Kim, Young-Geel;Jeong, Hae-Jin
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.10 no.3
    • /
    • pp.154-163
    • /
    • 2005
  • Irregular discharges of freshwater through the water gates of the Keum River Estuarine Weir, Korea, whose construction was completed in 1998 with its water gates in operation since August 1994, drastically modified the estuarine environment. A sharp decrease in salinity, along with altered concentrations of inorganic nutrients, accompanies the irregular freshwater discharges into the estuary under the influence of the regular semi-diurnal tide. Field sampling was carried out at high tide at 2 fixed stations (St. 1 near the Estuarine Weir and St. 2 off Kunsan Ferry Station) every other day for 4 months from mid-February 2004 to investigate the semi-weekly variation of the spring phytoplankton community in relation to the freshwater discharges from the Keum River Estuarine Weir. The CV (coefficient of variation) of salinity measurements was roughly 2 times greater at St. 1 than at St. 2, reflecting the extreme salinity variation at St. 1. Among inorganic nutrients, concentrations of N-nutrients ($NO_3^-$, $NO_2^-$ and $NH_4^+$) were clearly higher at St. 1, implying more drastic changes in nutrient concentrations at St. 1 than at St. 2 following the freshwater discharges. Within the phytoplankton community, diatoms were among the top dominants in terms of both species richness and biomass. The solitary centric diatom Cyclotella meneghiniana and the chain-forming centric diatom Skeletonema costatum each dominated the phytoplankton community in turn for 5-6 weeks (Succession Intervals I and II), the latter succeeding the former from the time the water temperature reached $10^{\circ}C$. The cyanobacterial species Aphanizomenon flos-aquae and Phormidium sp., which might have been transported into the estuary with the discharged freshwater, occupied a high portion of the total biomass during Succession Interval III (mid-April to late May). 
During this period, freshwater species exclusively dominated the phytoplankton community, except for low concentrations of the two co-occurring estuarine diatoms, Cyclotella meneghiniana and Skeletonema costatum. During the 4th Succession Interval, when the water temperature was over $18^{\circ}C$, the diatom Guinardia delicatula was predominant for a week, with a highest dominance of $75\%$ in discrete samples. To summarize, during all Succession Intervals other than Interval III, which was characterized by extreme salinity variation at water temperatures cooler than $18^{\circ}C$, diatoms were the most important dominants in the spring species succession. If the scale and frequency of the freshwater discharges had been adjusted properly during Succession Interval III, the dominant species would quite possibly have been replaced by other estuarine diatom species rather than the two freshwater cyanobacteria, Aphanizomenon flos-aquae and Phormidium sp. The every-other-day field sampling scheme of the present study was concluded to be the minimal requirement for adequately exploring phytoplankton succession in an estuarine environment such as the Keum River Estuary, which is stressed by unpredictable and unavoidable freshwater discharges under the regular semi-diurnal tide.
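The coefficient of variation used above to compare salinity variability between the two stations is simply the standard deviation divided by the mean. A minimal sketch (the salinity series below are made up for illustration, not the study's measurements):

```python
import statistics

def coefficient_of_variation(values):
    """CV = standard deviation / mean (dimensionless; often reported in %)."""
    return statistics.pstdev(values) / statistics.fmean(values)

# Hypothetical salinity series (psu) illustrating the comparison:
st1 = [5.0, 28.0, 12.0, 30.0, 8.0, 25.0]    # near the weir: wide swings
st2 = [24.0, 28.0, 26.0, 30.0, 25.0, 29.0]  # off the ferry station: stabler

cv1 = coefficient_of_variation(st1)
cv2 = coefficient_of_variation(st2)
print(f"CV St.1 = {cv1:.2f}, CV St.2 = {cv2:.2f}, ratio = {cv1 / cv2:.1f}")
```

Because CV normalizes spread by the mean, it lets stations with different mean salinities be compared on an equal footing, which is why a roughly 2x higher CV at St. 1 indicates genuinely stronger discharge-driven variability there.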

4-Dimensional dose evaluation using deformable image registration in respiratory gated radiotherapy for lung cancer (폐암의 호흡동조방사선치료 시 변형영상정합을 이용한 4차원 선량평가)

  • Um, Ki Cheon;Yoo, Soon Mi;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.83-95
    • /
    • 2018
  • Purpose: After planning respiratory gated radiotherapy for lung cancer, the movement and volume change of normal structures near the target are often not considered during dose evaluation. This study carried out 4-D dose evaluation reflecting the movement of normal structures at each phase of respiratory gated radiotherapy, using deformable image registration, which is widely used for adaptive radiotherapy. Moreover, the study discusses the need for such analysis and establishes recommendations regarding normal structures' movement and volume change due to the patient's breathing pattern during the evaluation of treatment plans. Materials and methods: The subjects were 10 lung cancer patients who received respiratory gated radiotherapy. Using Eclipse (Ver. 13.6, Varian, USA), the structures seen in the top phase of the CT image were set identically via the Propagation or Segmentation Wizard menu, and each structure's movement and volume were analyzed by the center-to-center method. The image from each phase and its dose distribution were deformed onto the top-phase CT image for 4-dimensional dose evaluation via the VELOCITY program. In addition, using a $QUASAR^{TM}$ phantom (Modus Medical Devices) and $GAFCHROMIC^{TM}$ EBT3 film (Ashland, USA), the 4-D dose distribution was verified by the 4-D gamma pass rate. Result: The movement between the inspiration and expiration phases was greatest in the axial direction of the right lung ($0.989{\pm}0.34cm$) and smallest in the lateral direction of the spinal cord (-0.001 cm). The volume of the right lung showed the greatest rate of change, 33.5 %. The maximal and minimal differences between 3-dimensional and 4-dimensional dose evaluation were 0.076 and 0.021 for the PTV Conformity Index, and 0.011 and 0.0 for the Homogeneity Index, respectively. A difference of 0.0045~2.76 % was found in normal structures using 4-D dose evaluation. 
The 4-D gamma pass rate of every patient exceeded the 95 % reference. Conclusion: The PTV Conformity Index differed significantly in all patients under 4-D dose evaluation, but no significant difference was observed between the two dose evaluations for the Homogeneity Index. The 4-D dose distribution was more homogeneous than the 3-D dose distribution, because accounting for breathing movement helps fill in the PTV margin area. There was a difference of 0.004~2.76 % in the 4-D evaluation of normal structures, and a significant difference between the two evaluation methods for all normal structures except the spinal cord. This study shows that doses to normal structures could be underestimated by 3-D dose evaluation. Therefore, 4-D dose evaluation with deformable image registration should be considered when dose changes are expected in normal structures due to the patient's breathing pattern; by reflecting the movement of those structures, it is a more realistic dose evaluation method.
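The Conformity Index and Homogeneity Index compared above have several competing definitions; the abstract does not state which were used. A minimal sketch, assuming the common RTOG-style CI (prescription isodose volume over PTV volume) and the ICRU 83-style HI ((D2% - D98%) / D50%), with all numerical values hypothetical:

```python
import numpy as np

def conformity_index(prescription_isodose_volume_cc, ptv_volume_cc):
    """RTOG-style CI: volume enclosed by the prescription isodose / PTV volume.
    CI = 1 is ideal; definitions vary between reports, so this is one choice."""
    return prescription_isodose_volume_cc / ptv_volume_cc

def homogeneity_index(dose_voxels_gy):
    """ICRU 83-style HI: (D2% - D98%) / D50%; 0 means a perfectly flat dose."""
    d2 = np.percentile(dose_voxels_gy, 98)    # near-maximum dose (hottest 2 %)
    d98 = np.percentile(dose_voxels_gy, 2)    # near-minimum dose (coldest 2 %)
    d50 = np.percentile(dose_voxels_gy, 50)   # median dose
    return (d2 - d98) / d50

# Hypothetical PTV dose sample (Gy), e.g. voxels extracted from a plan:
rng = np.random.default_rng(1)
ptv_dose = rng.normal(loc=60.0, scale=1.0, size=10_000)

print(f"CI = {conformity_index(102.0, 100.0):.3f}")
print(f"HI = {homogeneity_index(ptv_dose):.3f}")
```

In a 4-D evaluation, the `ptv_dose` voxels would come from the deformed, phase-accumulated dose rather than the static 3-D plan, so a lower HI there matches the abstract's finding that the accumulated distribution is more homogeneous.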
