• Title/Summary/Keyword: Convergence test


Association Between Structural and Functional Aspects of Social Networks and Health Promoting Behaviors: Focusing on Community-Dwelling Older Adults (사회적 관계망의 구조적, 기능적 측면과 건강증진행동과의 관계: 지역사회 거주 노인을 중심으로)

  • An, Hyunseo;Kim, Inhye;Yun, Sohyeon;Park, Hae Yean
    • Therapeutic Science for Rehabilitation
    • /
    • v.12 no.4
    • /
    • pp.81-96
    • /
    • 2023
  • Objective : This study aimed to examine the association between the structural and functional aspects of social networks and health-promoting behaviors in community-dwelling older adults. Methods : Social networks, assessed in terms of their structural and functional aspects, and health-promoting behaviors by lifestyle domain were measured in 226 community-dwelling adults aged 65 years and over. The collected data were analyzed with independent t-tests, one-way ANOVA, Pearson's correlation analysis, and hierarchical regression. Results : Network size was largest for the friend network, while contact frequency and social support were highest in the non-cohabiting family network. Health-promoting behaviors were highest for activities of daily living and lowest for productive and social activities. All subfactors of social networks showed significant positive correlations with health-promoting behaviors. In the hierarchical regression, social support from neighbors had a significant effect on health-promoting behaviors; gender and depression were also influencing factors, and the model explained 37% of the variance (see the regression sketch below). Conclusion : To promote healthy behaviors among older adults in the community, health promotion programs and related policies should be developed with attention to social networks, centering on social support from neighbors.
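The hierarchical (blockwise) regression described in this abstract, with demographic covariates entered first and the social-network predictor added second, could be sketched roughly as follows. The DataFrame, column names, and simulated values are hypothetical placeholders, not the study's variables or data.

```python
# Minimal sketch of a two-block hierarchical regression (assumed setup).
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data standing in for the 226 survey respondents.
rng = np.random.default_rng(0)
n = 226
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "depression": rng.normal(5, 2, n),
    "neighbor_social_support": rng.normal(3, 1, n),
})
df["health_promoting_behavior"] = (
    0.5 * df["neighbor_social_support"] - 0.3 * df["depression"] + rng.normal(0, 1, n)
)

y = df["health_promoting_behavior"]

# Block 1: demographic/psychological covariates only.
m1 = sm.OLS(y, sm.add_constant(df[["gender", "depression"]])).fit()

# Block 2: add the social-network predictor (support from neighbors).
m2 = sm.OLS(y, sm.add_constant(df[["gender", "depression", "neighbor_social_support"]])).fit()

# The R-squared gain of block 2 shows the added explanatory power of the
# social-network variable over demographics alone.
print(f"Block 1 R2 = {m1.rsquared:.3f}")
print(f"Block 2 R2 = {m2.rsquared:.3f}  (delta = {m2.rsquared - m1.rsquared:.3f})")
```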

Development of evaluation items for adolescents' dietary habits and nutritional practices reflecting eating behaviors and food environment (식행동, 식생활 환경을 반영한 청소년의 식생활·영양 실천 평가 항목 개발)

  • Jimin Lim;Hye Ji Seo;Jieun Oh
    • Journal of Nutrition and Health
    • /
    • v.57 no.1
    • /
    • pp.136-152
    • /
    • 2024
  • Purpose: A comprehensive set of evaluation items was developed to assess adolescents' dietary habits and nutritional practices, considering food intake, eating behaviors, and food culture, including social support and the food environment. Methods: Fifty-nine candidate checklist items were derived from the eighth Korea National Health and Nutrition Examination Survey, the Korea Dietary Reference Intakes, dietary guidelines for adolescents, Youth Risk Behavior Survey data, national nutrition policies and dietary guidelines, and literature reviews. Four hundred and three middle and high school students residing in metropolitan areas completed a survey using the 58-item checklist selected through expert evaluation and content validity ratio analysis. The construct validity of the assessment tool was examined by exploratory factor analysis to determine whether the checklist items were organized properly and whether the responses to each item were distributed adequately. Results: Bartlett's test of sphericity was significant for each area (p < 0.001), and the eigenvalues were greater than one. The Kaiser-Meyer-Olkin measures and cumulative proportions of explained variance were 0.765 and 56.8% for food intake, 0.544 and 64.8% for eating behaviors, and 0.699 and 62.4% for the food environment (a sketch of these item-screening statistics follows below). Twenty-two items were retained as the final evaluation items for adolescents' dietary habits and nutritional practices, grouped into three factors: food intake (10 items), eating behaviors (4 items), and food environment (8 items). Conclusion: The evaluation items for adolescents' dietary habits and nutritional practices form a useful checklist for quickly and easily assessing dietary quality while reflecting Korean adolescents' eating behaviors and the food-environment factors related to a sustainable diet.
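The item-screening statistics reported in this abstract, Bartlett's test of sphericity and the eigenvalue-greater-than-one (Kaiser) rule, can be illustrated with a minimal sketch. The response matrix below is randomly generated placeholder data, not the survey responses, and the item count is only assumed for illustration.

```python
# Sketch of Bartlett's sphericity test and the Kaiser criterion (assumed data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(403, 22)).astype(float)  # 403 students x 22 items (placeholder)

n, p = X.shape
R = np.corrcoef(X, rowvar=False)  # item correlation matrix

# Bartlett's test of sphericity: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|
sign, logdet = np.linalg.slogdet(R)
chi2 = -(n - 1 - (2 * p + 5) / 6) * logdet
dof = p * (p - 1) / 2
p_value = stats.chi2.sf(chi2, dof)

# Kaiser criterion: retain factors whose eigenvalue exceeds one.
eigenvalues = np.linalg.eigvalsh(R)[::-1]
n_factors = int((eigenvalues > 1).sum())

print(f"Bartlett chi2 = {chi2:.1f}, df = {dof:.0f}, p = {p_value:.4f}")
print(f"Eigenvalues > 1: {n_factors}")
```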

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • Large amounts of data are now available for research and business to extract knowledge from. These data can take the form of unstructured data such as audio, text, and images and can be analyzed with deep learning. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model behind these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network toward the outputs. Its layer structure is well suited to image classification: convolutional layers generate feature maps, pooling layers reduce the dimensionality of the feature maps, and fully connected layers classify the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions and show either the apparel item itself or a professional model wearing it. Such images may not be effective for training a classifier that must handle street fashion or walking images, which are taken in uncontrolled conditions and involve people's movement and unexpected poses. We therefore propose to train the model with a runway apparel image dataset that captures this mobility. This allows the classification model to be trained on far more variable data and improves its adaptation to diverse query images. To achieve both convergence and generalization, we apply transfer learning to our training network. Since transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train our architecture on the large-scale ImageNet dataset, which consists of 1.2 million images across 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Because no publicly available runway dataset existed, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and the proposed model achieved an accuracy of 67.2% on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset; we suggest training the model with images capturing all possible postures, which we denote as mobility, by using our own runway apparel image dataset.
Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow Slim, we reduced the time spent training the classifier to about six minutes per experiment. The model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image: a runway query image can support a mobile brand-search service during fashion week, a street-style query image can be classified and labeled with brand or style during editorial work, and a website query image can be processed by an e-commerce service that provides item information or recommends similar items (a transfer-learning sketch follows below).
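A rough sketch of the two-stage pre-train/fine-tune setup described above, assuming a Keras workflow. The paper used GoogLeNet with TensorFlow Slim checkpoints; this sketch substitutes Keras' InceptionV3 (an ImageNet-pre-trained Inception model), and the dataset directory, hyperparameters, and head design are hypothetical choices rather than the authors' pipeline.

```python
# Sketch of ImageNet pre-training + fine-tuning on a 32-brand runway dataset.
import tensorflow as tf

NUM_BRANDS = 32  # 32 fashion brands in the runway dataset

# Stage 1: start from ImageNet-pre-trained weights and drop the 1000-class head.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False  # freeze the convolutional features for the first pass

# Stage 2: attach a new classifier head and fine-tune it on the runway images.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_BRANDS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory of runway images arranged one folder per brand.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "runway_images/", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=5)
```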

The Ontology Based, the Movie Contents Recommendation Scheme, Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.25-44
    • /
    • 2013
  • Accessing movie contents has become easier and more common with the advent of smart TVs, IPTV, and web services that can be used to search for and watch movies. Accordingly, users increasingly search for movie contents that match their preferences. However, because the amount of available movie content is so large, users need considerable effort and time to find what they want. Hence, many studies have addressed personalized item recommendation through analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between the metadata and the user profile, and the relations between metadata items indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we select genre, actor/actress, keywords, and synopsis as the main metadata, since these influence which movies users choose. The user model contains demographic information about the user and relations between the user and the movie metadata. In our design, the movie ontology model consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For the knowledge base, we entered individual data for 14,374 movies into each concept of the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata the user is interested in, and it can retrieve similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches for candidate movies based on the user's demographic information: we assign users to groups according to demographic information in order to recommend movies for each group, define the rules that determine group membership, and generate the query used to search for candidate movies. The second component searches for candidate movies based on user preferences: since users consider metadata such as genre, actor/actress, synopsis, and keywords when choosing a movie, they enter their preferences and the system searches for movies accordingly. Unlike existing movie recommendation systems, the proposed system can retrieve similar movies through the relations between concepts. Each metadata item of the recommended candidate movies carries a weight that is later used to decide the recommendation order. The third component merges the results of the first and second components: the weight of each movie is calculated from the weight values of its metadata, and the movies are sorted by this weight (a sketch of this merging step follows below). The fourth component analyzes the result of the third component, determines the level of contribution of each metadata item, and applies the contribution weight to the metadata; the outcome of this step is used as the recommendation for users. We tested the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript, and the Protégé API.
In our experiment, we collected results from 20 men and women aged 20 to 29, using 7,418 movies with a rating of at least 7.0. We provided Top-5, Top-10, and Top-20 recommended movie lists to the users, who then chose the movies that interested them. On average, users chose 2.1 interesting movies from the Top-5 list, 3.35 from the Top-10 list, and 6.35 from the Top-20 list, which is better than the results obtained using each metadata item alone.
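The third component's merging step, summing per-metadata weights for each candidate movie and sorting to produce the Top-N list, might look roughly like the following. The candidate movies and weight values are invented placeholders, not the paper's data or code.

```python
# Sketch of merging weighted candidates from two retrieval components.
from collections import defaultdict

# candidate -> {metadata field: weight}, as produced by the first two components
demographic_candidates = {"Movie A": {"genre": 0.4}, "Movie B": {"genre": 0.3}}
preference_candidates = {"Movie A": {"actor": 0.5, "keyword": 0.2},
                         "Movie C": {"synopsis": 0.6}}

def merge_and_rank(*candidate_sets, top_n=5):
    scores = defaultdict(float)
    for candidates in candidate_sets:
        for movie, weights in candidates.items():
            scores[movie] += sum(weights.values())  # third component: sum metadata weights
    # Sort by total weight, highest first, and keep the Top-N recommendations.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(merge_and_rank(demographic_candidates, preference_candidates, top_n=5))
```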

Comparison and Evaluation of the Effectiveness between Respiratory Gating Method Applying The Flow Mode and Additional Gated Method in PET/CT Scanning. (PET/CT 검사에서 Flow mode를 적용한 Respiratory Gating Method 촬영과 추가 Gating 촬영의 비교 및 유용성 평가)

  • Jang, Donghoon;Kim, Kyunghun;Lee, Jinhyung;Cho, Hyunduk;Park, Sohyun;Park, Youngjae;Lee, Inwon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.1
    • /
    • pp.54-59
    • /
    • 2017
  • Purpose The present study aimed to assess the effectiveness of the respiratory gating method used in the flow mode together with additional localized respiratory-gated imaging, which differs from the step-and-go method. Materials and Methods Respiratory-gated imaging was performed in the flow mode for twenty patients with lung cancer (10 with stable respiratory signals and 10 with unstable signals) who underwent torso PET/CT scanning on a Biograph mCT Flow PET/CT at Seoul National University Bundang Hospital from June 2016 to September 2016. Additional images of the lungs were obtained using the respiratory gating method. SUVmax, SUVmean, and tumor volume (cm³) of the non-gating, gating, and additional lung gating images were measured with Syngo.via (Siemens, Germany). A paired t-test was performed with GraphPad Prism 6, and changes in the width of the amplitude range were compared between the two types of gating images (a paired-test sketch follows below). Results The following results were obtained for all patients when the respiratory gating method was applied: SUVmax = 9.43 ± 3.93, SUVmean = 1.77 ± 0.89, and tumor volume = 4.17 ± 2.41 cm³ for the non-gating images; SUVmax = 10.08 ± 4.07, SUVmean = 1.75 ± 0.81, and tumor volume = 3.56 ± 2.11 cm³ for the gating images; and SUVmax = 10.86 ± 4.36, SUVmean = 1.77 ± 0.85, and tumor volume = 3.36 ± 1.98 cm³ for the additional lung gating images. No statistically significant difference in SUVmean was found between the non-gating and gating images or between the gating and lung gating images (P > 0.05), whereas significant differences in SUVmax and tumor volume were found between these groups (P < 0.05). The width of the amplitude range was smaller for the lung gating images than for the gating images in 12 of the 20 patients (3 with stable signals, 9 with unstable signals). Conclusion In PET/CT scanning using the respiratory gating method in the flow mode, lesion movement caused by respiration was compensated for; therefore, more accurate measurements of SUVmax and tumor volume could be obtained from the gating images than from the non-gating images in this study. In addition, the width of the amplitude range decreased with respiratory stability to a greater degree in the additional lung gating images than in the gating images. We found that gating images provide more useful diagnostic information than non-gating images. For patients with irregular respiratory signals, additional localized scanning may be helpful if time allows.
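The paired comparison reported above (for example, non-gating versus gating SUVmax measured in the same patients) can be sketched as follows. The measurement arrays are simulated placeholders, not the study's data; the abstract's analysis was run in GraphPad Prism 6, and this sketch only reproduces the same kind of test.

```python
# Sketch of a paired t-test on per-patient SUVmax values (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical SUVmax values for the same 20 patients under both acquisitions.
suv_non_gating = rng.normal(9.4, 3.9, size=20)
suv_gating = suv_non_gating + rng.normal(0.65, 0.5, size=20)

# Paired t-test comparing the two acquisitions patient by patient.
t_stat, p_value = stats.ttest_rel(suv_gating, suv_non_gating)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```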
