• Title/Summary/Keyword: Portal model


An investigation of the User Research Techniques in the User-Centered Design Framework - Focused on the on-line community services development for 13-18 Young Adults (사용자 중심 디자인 프레임워크에서 사용자 조사기법의 역할에 관한 연구 - 13-18 청소년용 온라인 커뮤니티 컨텐트 개발 프로젝트를 중심으로)

  • 이종호
    • Archives of Design Research, v.17 no.2, pp.77-86, 2004
  • The User-Centered Design approach plays an important role in dealing with usability issues when developing modern technology products. Yet it is questionable whether the User-Centered approach alone is sufficient for developing successful consumer content, since User-Centered Design originated in the software engineering field, where meeting customers' functional requirements is the most critical aspect of developing software. The modern consumer market, however, is already saturated, and in order to meet ever-increasing consumer requirements the User-Centered Design approach needs to be expanded. As a way of incorporating the User-Centered approach into consumer product development, Jordan suggested the 'Pleasure-based Approach' in the industrial design field, which typically generates multi-dimensional user requirements: 1) physical, 2) cognitive, 3) identity, and 4) social. Many portal and community service providers currently focus on fulfilling both the functional and the emotional needs of users when developing new items, content, and services. Previously, fulfilling consumers' emotional needs depended solely on the visual designer's graphical sense and capability; however, taking a customer-centered approach to drawing out consumers' latent needs is becoming critical in a competitive market environment. This paper reviews different types of user research techniques and categorizes them into six types based on Kano's (1992) product quality model. According to his theory, only performance factors, such as usability, can be identified through the user-centered design approach; the approach therefore has to be expanded to include factors such as personality, sociability, and pleasure. In order to identify performance factors as well as excitement factors through user research, a user research framework was established and tested through a case study, 'the development of a new online service for teens'. The results of the user research are summarized at the end of the paper, and the pros and cons of each research technique are analyzed.
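As a quick illustration of the Kano distinction the abstract relies on (performance vs. excitement factors), the sketch below classifies a single functional/dysfunctional answer pair using Kano's standard evaluation table. It is generic background only, not the paper's six-way categorization of research techniques.

```python
# Kano's standard evaluation table (generic background, not this paper's method).
# A = attractive (excitement), O = one-dimensional (performance), M = must-be,
# I = indifferent, R = reverse, Q = questionable.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows: answer to the functional question; columns: answer to the dysfunctional question.
KANO_TABLE = [
    ["Q", "A", "A", "A", "O"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "R", "R", "R", "Q"],
]

def kano_category(functional_answer: str, dysfunctional_answer: str) -> str:
    """Classify one respondent's answer pair for a single feature."""
    row = ANSWERS.index(functional_answer)
    col = ANSWERS.index(dysfunctional_answer)
    return KANO_TABLE[row][col]

# Example: kano_category("like", "dislike") -> "O", a performance factor such as usability.
```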


Vibrio Vulnificus Induces the Inflammation of Mouse Ileal Epithelium: Involvement of Protein Kinase C and Nuclear Factor-Kappa B (회장 상피세포에서 비브리오균(Vibrio vulnificus)의 염증 유도 기작 연구: protein kinase C와 nuclear factor kappa-B의 관련성)

  • Han, Gi Yeon;Jung, Young Hyun;Jang, Kyung Ku;Choi, Sang Ho;Lee, Sei-Jung
    • Journal of Life Science, v.24 no.6, pp.664-670, 2014
  • In the present study, we investigated the role of V. vulnificus in promoting inflammation of the mouse ileal epithelium and its related signaling pathways. ICR mice were infected orally with V. vulnificus (1 × 10^9 CFU) for 16 h as a representative model of food-borne infection. To find the major portal of entry of V. vulnificus in the mouse intestine, we measured the levels of bacterial colonization in the small intestine, colon, spleen, and liver. V. vulnificus appeared to colonize the intestine and colon in the order ileum ≫ jejunum > colon, but was absent from the duodenum, spleen, and liver. In the ileum, V. vulnificus caused severe necrotizing enteritis, with shortened villus heights accompanied by expanded villus width and inflammation compared with control mice. V. vulnificus induced ileal epithelial inflammation by activating phosphorylation of PKC and membrane translocation of PKCα. V. vulnificus induced the phosphorylation of ERK and JNK but did not affect p38 MAPK phosphorylation. Notably, V. vulnificus stimulated the IκB-dependent phosphorylation of NF-κB in the mouse ileal epithelium. Finally, ileal infection with V. vulnificus resulted in a significant increase in the expression of proinflammatory cytokines and Toll-like receptors compared with the control. Collectively, our results indicate that V. vulnificus induces ileal epithelial inflammation by increasing NF-κB phosphorylation via activation of PKC, ERK, and JNK, which is critical for the host defense mechanism in food-borne infection by V. vulnificus.

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.21 no.1, pp.83-98, 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, finding the desired information takes a long time because the web presents a great deal of irrelevant content, so this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots for interacting with users: when users click an object in the interactive video, they can instantly see additional information related to the video. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time at which it is displayed in the video; (3) set an interactive action linked to pages or hyperlinks. However, users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time on step (2). If users use wireWAX, they can save considerable time setting the object's location and display time because wireWAX uses a vision-based annotation method, but they must wait for objects to be detected and tracked. It is therefore necessary to reduce the time spent on step (2) by effectively combining the benefits of manual and vision-based annotation methods. This paper proposes a novel annotation method that allows the annotator to annotate easily based on face areas. The proposed method consists of two stages: a pre-processing stage and an annotation stage. Pre-processing is necessary so that the system can detect shots for users who want to browse the video content easily, and proceeds as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) cluster shots by similarity and align them into shot sequences; and 3) detect and track faces in every shot of a shot sequence and save the results into the shot sequence metadata for each shot. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then selects a keyframe of a shot in that sequence; 2) the annotator annotates objects at positions relative to the actor's face in the selected keyframe, and the same objects are then annotated automatically, up to the end of the shot sequence, wherever a face area has been detected; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after object annotation, such as wrongly aligned shots, wrongly detected faces, and inaccurate object locations. Furthermore, users can apply an interpolation method to recover the positions of objects deleted by the feedback. After the feedback, the user can save the annotated object data to the interactive object metadata. Finally, this paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method, which uses the presented models. The experiment analyzes object annotation time and reports a user evaluation. First, the average object annotation time shows that the proposed tool is twice as fast as existing authoring tools; occasionally the proposed tool took longer than existing tools because wrong shots were detected during pre-processing.
The usefulness and convenience of the system were measured through a user evaluation aimed at users with experience of interactive video authoring systems. Nineteen recruited experts answered 11 questions taken from the CSUQ (Computer System Usability Questionnaire), which was designed by IBM for system evaluation. The user evaluation showed that the proposed tool was rated about 10% more useful for authoring interactive video than the other interactive video authoring systems.
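The color-histogram shot boundary detection in pre-processing step 1 could look roughly like the following OpenCV sketch; the HSV histogram, correlation metric, and threshold are illustrative assumptions rather than the paper's exact settings.

```python
# A minimal sketch of color-histogram shot boundary detection (assumed parameters).
import cv2

def detect_shot_boundaries(video_path, threshold=0.6, bins=32):
    """Return frame indices where a new shot is assumed to start."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Low correlation between consecutive frame histograms -> shot change.
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```

The detected boundaries would then be grouped into shot clusters and shot sequences, as described in steps 2 and 3 of the pre-processing.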

Calculation of Damage to Whole Crop Corn Yield by Abnormal Climate Using Machine Learning (기계학습모델을 이용한 이상기상에 따른 사일리지용 옥수수 생산량에 미치는 피해 산정)

  • Ji Yung Kim;Jae Seong Choi;Hyun Wook Jo;Moonju Kim;Byong Wan Kim;Kyung Il Sung
    • Journal of The Korean Society of Grassland and Forage Science, v.43 no.1, pp.11-21, 2023
  • This study was conducted to estimate the damage to whole crop corn (WCC; Zea mays L.) caused by abnormal climate under the Representative Concentration Pathway (RCP) 4.5 scenario using machine learning, and to present the damage through mapping. A total of 3,232 WCC records were collected. The climate data were obtained from the Korea Meteorological Administration's open meteorological data portal. DeepCrossing was used as the machine learning model. The damage was calculated by the machine learning model using climate data from the automated synoptic observing system (ASOS, 95 sites). Damage was defined as the difference between the dry matter yield under normal climate (DMYnormal) and under abnormal climate (DMYabnormal). The normal climate was defined by the 40 years of climate data corresponding to the years of the WCC data (1978-2017). The levels of abnormal temperature and precipitation were set according to the RCP 4.5 standard. DMYnormal ranged from 13,845 to 19,347 kg/ha. The damage to WCC differed depending on the region and on the level of abnormal temperature and precipitation. The damage from abnormal temperature in 2050 and 2100 ranged from -263 to 360 and from -1,023 to 92 kg/ha, respectively. The damage from abnormal precipitation in 2050 and 2100 ranged from -17 to 2 and from -12 to 2 kg/ha, respectively. The maximum damage was 360 kg/ha, under abnormal temperature in 2050. As the average monthly temperature increases, the DMY of WCC tends to increase. The damage calculated under the RCP 4.5 standard was presented as a map using QGIS. Since this study applied a scenario in which greenhouse gas reduction is carried out, additional research applying an RCP scenario without greenhouse gas reduction needs to be conducted.
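The damage computation described above (DMYnormal minus DMYabnormal for a site) can be sketched as follows; the function and variable names are hypothetical, and any regressor with a predict() method stands in for the DeepCrossing model used in the paper.

```python
# Illustrative sketch (not the paper's code) of the damage calculation:
# damage = DMY_normal - DMY_abnormal, with both yields predicted by the same
# trained model fed normal-climate and abnormal-climate inputs.
import numpy as np

def estimate_damage(model, climate_normal, climate_abnormal):
    """climate_* are feature arrays for one ASOS site; model is any regressor
    with a predict() method (DeepCrossing in the paper)."""
    dmy_normal = model.predict(climate_normal)      # kg/ha under normal climate
    dmy_abnormal = model.predict(climate_abnormal)  # kg/ha under abnormal climate
    return np.mean(dmy_normal) - np.mean(dmy_abnormal)
```

A positive value corresponds to a yield loss under the abnormal climate, matching the sign convention of the damage ranges reported in the abstract.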

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.123-138, 2017
  • Since the stock market is driven by traders' expectations, studies have been conducted to predict stock price movements by analyzing various sources of text data. Research has examined not only the relationship between text data and stock price fluctuations but also trading based on news articles and social media responses. Studies predicting stock price movements have applied classification algorithms to a term-document matrix constructed in the same way as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the term-document matrix: words that appear too rarely or carry too little importance are removed, and words can also be selected according to how much they contribute to classifying a document correctly. The basic idea behind constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that influence the classification. In this study, we analyze the documents for each individual stock and select words that are irrelevant to all categories as neutral words. We then extract the words around each selected neutral word and use them to generate the term-document matrix. The neutral-word approach starts from the idea that stock movements are only weakly related to the presence of the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, and we excluded words that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used the first three months of news as training data and applied the remaining month of news articles to the model to predict the next day's stock price movements. We used SVM, boosting, and random forest to build models and predict the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the first 60 days were used as the training set and the remaining 20 days as the test set. The proposed word-based algorithm showed better classification performance than the sparsity-based word selection method. This study predicted stock price volatility by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also other news items to determine which words to extract; in other words, it removes not only the words that appear in both rising and falling cases but also the words that commonly appear in news about other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy.
One limitation of this study is that stock price prediction was framed as classifying rises and falls, and the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance because the direction of price movement and the rate of return may differ. Therefore, further research using more stocks and predicting returns through trading simulations is needed.
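A minimal sketch of the neutral-word idea, under assumed data structures: keep only the words surrounding pre-selected neutral words, build the term-document matrix from them, and train one of the classifiers the study uses (random forest here). The window size and other parameters are illustrative, not the paper's settings.

```python
# Sketch: term-document matrix built from context windows around neutral words,
# then an up/down classifier on next-day stock movements.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

def context_terms(tokens, neutral_words, window=3):
    """Collect words within `window` positions of any neutral word."""
    kept = []
    for i, tok in enumerate(tokens):
        if tok in neutral_words:
            kept.extend(tokens[max(0, i - window):i + window + 1])
    return " ".join(kept)

def train_updown_classifier(docs, labels, neutral_words):
    """docs: tokenized news articles; labels: 1 = next-day rise, 0 = fall."""
    texts = [context_terms(toks, neutral_words) for toks in docs]
    vec = CountVectorizer()
    X = vec.fit_transform(texts)          # term-document matrix
    clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
    return vec, clf
```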

Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.19 no.3, pp.57-71, 2013
  • More than a billion people around the world generate news minute by minute. Some news can be anticipated, but most news arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets because they want to understand what is happening now, to predict what might happen in the near future, and to share and discuss the news. People make better daily decisions by watching the news and obtaining useful information from it. However, it is difficult for people to choose the news that suits them and to extract useful information from it, because there are so many news media, such as portal sites and broadcasters, and most news articles consist of gossip and breaking news. User interests also change over time, and many people have no interest in outdated news, so a personalized news service must apply users' recent interests, which means it should manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. Personalization requires the user's personal information, and a social network service is used to extract it. The proposed system constructs a dynamic user profile based on recent user information from Facebook, one of the major social network services. The user information contains personal information, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their content and connect with people, and Facebook users can add a Page to indicate their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics. However, some Pages cannot be matched directly to a news topic because a Page deals with an individual object and does not provide topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match Pages to news topics through the hierarchy information of its objects. By using recent Page information and the articles of Facebook users, the proposed system maintains a dynamic user profile. The generated user profile is used to measure user preferences for news. To generate news profiles, the news categories predefined by the news media are used, and keywords are extracted from each news article after analyzing its contents, including the title, category, and script. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each news article. The same format is used for user profiles and news profiles so that the similarity between user preferences and news can be measured efficiently. The proposed system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, or hyponyms, because they only handle the given words; the proposed system applies WordNet to the similarity calculation to overcome this limitation. The top-N news articles with high similarity values for a target user are then recommended to that user.
To evaluate the proposed news recommendation system, user profiles were generated from Facebook accounts with the participants' consent, and we implemented a web crawler to extract news information from PBS, a non-profit public broadcasting television network in the United States, and constructed news profiles. We compared the performance of the proposed method with that of benchmark algorithms: one is a traditional method based on TF-IDF, and the other is the 6Sub-Vectors method, which divides the sources from which keywords are obtained into six parts. Experimental results demonstrate that, in terms of the prediction error of the recommended news, the proposed system provides useful news to users by applying their social network information and WordNet.
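The profile-matching step (TF-IDF vectors, cosine similarity, and a WordNet-based expansion so that synonyms are not treated as unrelated words) might be sketched as follows; the expansion strategy and function names are assumptions for illustration, not the paper's exact implementation.

```python
# Sketch of TF-IDF profile matching with a simple WordNet synonym expansion.
# Requires: nltk.download("wordnet") before first use.
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_with_synonyms(text):
    """Append one-word WordNet synonyms of each token to the text."""
    expanded = []
    for tok in text.split():
        expanded.append(tok)
        for syn in wn.synsets(tok)[:2]:
            expanded += [l.name() for l in syn.lemmas() if "_" not in l.name()]
    return " ".join(expanded)

def recommend(user_profile_text, news_texts, top_n=5):
    """Return indices of the top-N news articles most similar to the user profile."""
    docs = [expand_with_synonyms(user_profile_text)] + \
           [expand_with_synonyms(t) for t in news_texts]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
    return scores.argsort()[::-1][:top_n]
```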

Radiation Therapy Using M3 Wax Bolus in Patients with Malignant Scalp Tumors (악성 두피 종양(Scalp) 환자의 M3 Wax Bolus를 이용한 방사선치료)

  • Kwon, Da Eun;Hwang, Ji Hye;Park, In Seo;Yang, Jun Cheol;Kim, Su Jin;You, Ah Young;Won, Young Jinn;Kwon, Kyung Tae
    • The Journal of Korean Society for Radiation Therapy, v.31 no.1, pp.75-81, 2019
  • Purpose: Helmet-type boluses are being manufactured with 3D printers because of the disadvantages of conventional bolus materials when photon beams are used to treat scalp malignancies. However, PLA, the material typically used, has a higher density than tissue-equivalent material, and wearing it is uncomfortable for the patient. In this study, we treat malignant scalp tumors using an M3 wax helmet made with a 3D printer. Methods and materials: To model the helmet-type M3 wax, a head phantom was scanned by CT and the images were acquired as DICOM files. The helmet region over the scalp was created from the helmet contour. The M3 wax helmet was made by melting paraffin wax, mixing in magnesium oxide and calcium carbonate, solidifying the mixture in a PLA 3D-printed helmet, and then removing the PLA helmet from the surface. The treatment plan was a 10-portal intensity-modulated radiation therapy (IMRT) plan with a therapeutic dose of 200 cGy, calculated with the Analytical Anisotropic Algorithm (AAA) in Eclipse. The dose was then verified using EBT3 film and MOSFET (metal-oxide-semiconductor field-effect transistor; USA) dosimeters, and the IMRT plan was measured three times at three points by reproducing the head phantom under the same conditions as in the CT simulation room. Results: The Hounsfield unit (HU) of the bolus measured by CT was 52 ± 37.1. The TPS doses at the M3 wax bolus measurement points A, B, and C were 186.6 cGy, 193.2 cGy, and 190.6 cGy, while the doses measured three times with the MOSFET were 179.66 ± 2.62 cGy, 184.33 ± 1.24 cGy, and 195.33 ± 1.69 cGy, giving error rates of -3.71%, -4.59%, and 2.48%. The doses measured with EBT3 film were 182.00 ± 1.63 cGy, 193.66 ± 2.05 cGy, and 196 ± 2.16 cGy, giving error rates of -2.46%, 0.23%, and 2.83%. Conclusions: The thickness of the M3 wax bolus was 2 cm, which helped the treatment plan by easily lowering the dose to the brain. In the treatment dose verification, the maximum error rate of the scalp surface dose was within 5%, and generally within 3%, across the A, B, and C measurements of both the EBT3 film and the MOSFET dosimeters. The M3 wax bolus can be made more quickly and cheaply than a 3D-printed bolus, can be reused, and is very useful for the treatment of scalp malignancies as a human-tissue-equivalent material. Therefore, we expect that the use of cast-type M3 wax boluses, which compensate for the fabrication time and cost of large-volume boluses and compensators made with 3D printers, will increase in the future.
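The error rates quoted above follow from (measured dose - planned dose) / planned dose; the short check below reproduces the MOSFET figures from the abstract's numbers.

```python
# Verify the error rates reported for the MOSFET measurements at points A, B, C.
tps    = [186.6, 193.2, 190.6]      # planned (TPS) dose, cGy
mosfet = [179.66, 184.33, 195.33]   # mean measured dose, cGy

for point, planned, measured in zip("ABC", tps, mosfet):
    error_pct = (measured - planned) / planned * 100
    # Prints -3.72 %, -4.59 %, +2.48 %, matching the abstract's values up to rounding.
    print(f"point {point}: {error_pct:+.2f} %")
```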