• Title/Summary/Keyword: hyper method

Search results: 386

Comparison of Dietary Habits and Nutrient Intakes in Subjects with Obesity or Hyperglycemia Classified Metabolic Syndrome (비만 또는 고혈당 증상 보유에 따른 대사성증후군의 식습관 및 영양상태 비교)

  • Park Jung-A;Yoon Jin-Sook
    • Journal of Nutrition and Health
    • /
    • v.38 no.8
    • /
    • pp.672-681
    • /
    • 2005
  • Metabolic syndrome (MS) was defined as a condition in which subjects have two or more abnormalities among obesity, hyperlipidemia, hypertension, and hyperglycemia. To develop a nutritional education program for MS, this study compared the dietary habits and nutrient intakes of subjects with complex MS symptoms involving obesity or hyperglycemia. The participants were 84 normal adults, 62 MS subjects with obesity, 33 MS subjects with hyperglycemia, and 54 MS subjects with both obesity and hyperglycemia (OB + HG). A dietary survey was conducted using the 24-hour recall method. The total cholesterol level of the MS with obesity group was significantly higher than that of the other groups. WHR and systolic blood pressure showed no significant differences among the MS with obesity, hyperglycemia, and OB + HG groups. Dietary intakes of energy, Fe, Vit A, Vit B2, and Ca were less than 75% of the 7th Korean RDA in all groups. In particular, dietary intakes of Vit B2, Vit A, and Ca were less than 50% of the RDA in the MS with hyperglycemia and OB + HG groups. The other nutrient intakes of each group were also below the RDA level, except for P and Vit C. Most of the nutrient intakes in the MS with hyperglycemia and OB + HG groups were significantly lower than in the normal group. In the MS with obesity group, consumption of sweets, organ meats, and soup was higher than in the other groups. Consumption of garlic and onion in the MS with obesity, hyperglycemia, and OB + HG groups was lower than in the normal group. Consumption of soup in the MS with hyperglycemia and OB + HG groups was also higher than in the normal group. Indices of nutritional quality (INQ) for Ca, Vit A, and Vit B2 were below 1 in all groups. The food composition group score of the MS with hyperglycemia group was significantly lower than those of the normal and MS with obesity groups. Our results indicate that a nutritional education program for MS with obesity or hyperglycemia should include specific strategies to correct unsound dietary habits and inappropriate food intake.
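
The Index of Nutritional Quality used above compares nutrient adequacy to energy adequacy: a value below 1 means the diet supplies proportionally less of the nutrient than of energy. A minimal sketch of the standard INQ formula, with purely hypothetical intake and RDA values for illustration:

```python
def inq(nutrient_intake, nutrient_rda, energy_intake, energy_rda):
    """Index of Nutritional Quality.
    INQ = (nutrient intake / nutrient RDA) / (energy intake / energy RDA).
    INQ < 1: the diet covers less of the nutrient RDA than of the energy RDA.
    """
    return (nutrient_intake / nutrient_rda) / (energy_intake / energy_rda)

# Hypothetical example: Ca intake 350 mg vs. RDA 700 mg,
# energy intake 1600 kcal vs. reference 2000 kcal
print(inq(350, 700, 1600, 2000))
```

With these illustrative numbers, Ca adequacy (50%) is lower than energy adequacy (80%), so INQ is below 1, matching the pattern the study reports for Ca, Vit A, and Vit B2.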

A Scientometric Social Network Analysis of International Collaborative Publications of All India Institute of Medical Sciences, India

  • Nishavathi, E.;Jeyshankar, R.
    • Journal of Information Science Theory and Practice
    • /
    • v.8 no.3
    • /
    • pp.64-76
    • /
    • 2020
  • Scientometrics and social network analysis (SNA) measures were used to analyze the international scientific collaboration (ISC) of the All India Institute of Medical Sciences (AIIMS) over a period of 10 years (2009-2018). The dataset consists of 19,622 records retrieved from the Scopus database. The mean degree of collaboration (0.95) implies that AIIMS researchers tend to collaborate domestically (80.29%) rather than internationally (14.67%). The data exhibit a hyper-authorship pattern, and medium-sized research teams of 4 to 10 authors contributed the largest share, 62.08% (12,182) of publications. 71.97% of research findings appeared as journal articles, and the most preferred journals published 58.55% of the medical literature. An undirected collaboration network of 179 vertices and 11,938 edges was constructed in Pajek to study the ISC of AIIMS during 2009-2018. Degree centrality (Dc) identified the United States of America (Dc - 54; CC - 0.99) and the United Kingdom (Dc - 41; CC - 0.98) as the most collaborative and most influential countries in the whole network. The Louvain community detection method was used to detect influential research groups of AIIMS. The temporal evolution of the ISC of AIIMS, studied through scientometric and SNA measures, sheds light on the structure and properties of its ISC networks. It revealed that AIIMS, India has taken keen steps to enrich the quality of its research by extending and encouraging collaboration between institutions and industries at the international level.
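
The degree centrality used above is, for an undirected network, simply the number of edges incident on each vertex. A minimal sketch on a toy collaboration network (the edge list is illustrative only, not the paper's data):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Count the edges incident on each vertex of an undirected network."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return dict(degree)

# Toy collaboration network (illustrative only)
edges = [("India", "USA"), ("India", "UK"), ("USA", "UK"), ("India", "Japan")]
print(degree_centrality(edges))
```

In the paper the same quantity is computed in Pajek over 179 country vertices; the vertex with the most incident collaboration edges (the USA, Dc = 54) is the most collaborative partner.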

Bayesian Optimization Framework for Improved Cross-Version Defect Prediction (향상된 교차 버전 결함 예측을 위한 베이지안 최적화 프레임워크)

  • Choi, Jeongwhan;Ryu, Duksan
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.9
    • /
    • pp.339-348
    • /
    • 2021
  • In recent software defect prediction research, defect prediction across projects and across project versions has been actively studied. Cross-version (CV) defect prediction studies have so far assumed a within-project (WP) setting; however, previous work has not considered the distribution difference between project versions, which matters in the CV environment. In this study, we propose an automated Bayesian optimization framework that takes the distribution differences between versions into account. Based on the difference in distribution, it automatically decides whether to perform transfer learning. The framework jointly optimizes the treatment of the distribution difference between versions, the transfer learning step, and the hyper-parameters of the classifier. Experiments confirmed that automatically deciding whether to perform transfer learning based on the distribution difference is effective. Moreover, using our optimization framework improves performance and, as a result, can reduce software inspection effort. This is expected to support practical quality assurance activities for new project versions in a cross-version project environment.
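
The core idea above, gating transfer learning on how different two version distributions are, can be sketched in a few lines. The abstract does not state which distribution-difference measure the framework uses, so a simple two-sample Kolmogorov-Smirnov statistic on one metric stands in here; the threshold is likewise a hypothetical value:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    cdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in sorted(set(a) | set(b)))

def should_transfer(source_metric, target_metric, threshold=0.3):
    """Apply transfer learning only when the version distributions differ."""
    return ks_statistic(source_metric, target_metric) > threshold

# Hypothetical metric values for an old and a new project version
old_version = [0.1, 0.2, 0.2, 0.3, 0.4]
new_version = [0.6, 0.7, 0.8, 0.8, 0.9]
print(should_transfer(old_version, new_version))
```

In the actual framework this decision is one of the choices tuned by Bayesian optimization alongside the classifier hyper-parameters, rather than a fixed threshold rule.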

A Study on the Potential of Utilizing Sensible Media for Dance in 5G Network

  • Chang, So-jung
    • International Journal of Advanced Culture Technology
    • /
    • v.7 no.3
    • /
    • pp.111-115
    • /
    • 2019
  • 5G is 20 times faster than 4G, offers hyper-connectivity and low latency, and has boundless potential for medical education, transportation, entertainment, and other fields. Accordingly, it is time for the dance field to examine how 5G and sensible (immersive) media can be utilized, and to address the related issues. This study first reviews the potential of 5G and sensible media in dance and a plan for their development. Dance appears able to communicate in a three-dimensional way: utilizing sensible media can help inform people about dance and increase its fun and interest, enabling three-dimensional mutual communication. Second, in a 5G environment, viewers can choose their own viewpoint when using sensible media such as VR, AR, and holograms. For dancers and judges, for example, it becomes possible to audition and hire dancers remotely in their own countries, which benefits both sides. Third, streaming is possible without installation, buffering is reduced, and high-definition media are supported at the same time; this has enabled collaborative performances with celebrities in dance and increased concentration and engagement. The dance field should embrace 5G sensible media, seek systematic and detailed methods, and disseminate professional training and performance. As sensible media develop rapidly on 5G networks, the dance field needs to test them, build systematic dance training environments through varied trials, and make an effort toward performance situations that use advanced 5G sensible media.

Bivariate long range dependent time series forecasting using deep learning (딥러닝을 이용한 이변량 장기종속시계열 예측)

  • Kim, Jiyoung;Baek, Changryong
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.1
    • /
    • pp.69-81
    • /
    • 2019
  • We consider bivariate long range dependent (LRD) time series forecasting using a deep learning method. A long short-term memory (LSTM) network, well suited to time series data, is applied to forecast bivariate time series, and the forecasting performance is compared with bivariate fractional autoregressive integrated moving average (FARIMA) models. Out-of-sample forecasting errors are compared using various performance measures on functional MRI (fMRI) data and daily realized volatility data. The results show only a subtle difference between the predicted values of the FIVARMA and VARFIMA models. LSTM is computationally demanding due to hyper-parameter selection, but it is more stable and its forecasting performance is competitive with that of parametric long range dependent time series models.
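
The out-of-sample comparison above relies on standard forecast-error measures; the abstract does not name the exact ones used, so here is a minimal sketch of two common choices, MAE and RMSE, on hypothetical values:

```python
import math

def mae(actual, predicted):
    """Mean absolute error between actual and predicted values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error; penalizes large misses more than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical held-out values and model forecasts
actual = [1.0, 2.0, 3.0, 4.0]
forecast = [1.1, 1.9, 3.2, 3.8]
print(mae(actual, forecast), rmse(actual, forecast))
```

Comparing such errors between the LSTM forecasts and the FARIMA-family forecasts on the same held-out window is what "out-of-sample forecasting errors are compared" amounts to.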

A Study on the Prediction of Major Prices in the Shipbuilding Industry Using Time Series Analysis Model (시계열 분석 모델을 이용한 조선 산업 주요물가의 예측에 관한 연구)

  • Ham, Juh-Hyeok
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.58 no.5
    • /
    • pp.281-293
    • /
    • 2021
  • Oil and steel prices, which are major costs in the shipbuilding industry, were predicted. First, the error of the moving average line (N = 3-5) was examined; in all three error analyses, the moving average line with N = 3 gave the smallest error. Second, in linear prediction of the data based on existing theory, oil prices rise slightly and steel prices rise sharply, but in practice linear prediction from existing data was not satisfactory. Third, we identified the limitations of linear prediction methods and confirmed that oil and steel price predictions were somewhat similar to those of the moving average line. Due to the high volatility of the major price series, large errors were inevitable in the forecast interval. Through the time series analysis methods at the end of this paper, we achieved reasonable results in all analysis items relative to the artificial intelligence tool (Prophet). Predictions obtained from the eight forecasting models are expected to serve as a good research foundation for developing dedicated tools or establishing evaluation systems in the future. This study compares the basic settings of artificial intelligence programs with the results of core price prediction in the shipbuilding industry through time series prediction theory; further study of the various hyper-parameters and event effects of Prophet remains future work and leaves room for improving predictability.
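
The moving-average forecast examined above (best at N = 3) simply predicts the next value as the mean of the last N observations. A minimal sketch, with a hypothetical price series:

```python
def moving_average_forecast(series, n=3):
    """Forecast the next value as the mean of the last n observations."""
    window = series[-n:]
    return sum(window) / len(window)

# Hypothetical monthly steel-price index
prices = [100, 104, 103, 108, 110]
print(moving_average_forecast(prices, n=3))
```

Trying n = 3, 4, 5 against held-out data and comparing the resulting errors mirrors the paper's first analysis step.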

A Study of Unified Framework with Light Weight Artificial Intelligence Hardware for Broad range of Applications (다중 애플리케이션 처리를 위한 경량 인공지능 하드웨어 기반 통합 프레임워크 연구)

  • Jeon, Seok-Hun;Lee, Jae-Hack;Han, Ji-Su;Kim, Byung-Soo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.5
    • /
    • pp.969-976
    • /
    • 2019
  • Lightweight artificial intelligence hardware has made great strides in many application areas. In general, a lightweight artificial intelligence system consists of a lightweight artificial intelligence engine and a preprocessor including feature selection, generation, extraction, and normalization. To achieve optimal performance across a broad range of applications, such a system needs to choose good preprocessing functions and set their respective hyper-parameters. This paper proposes a unified framework for lightweight artificial intelligence systems and a method for finding models with optimal performance on a given dataset. The proposed unified framework can easily generate a model that combines preprocessing functions with a lightweight artificial intelligence engine. In performance evaluations using a handwritten image dataset and a fall-detection dataset measured with inertial sensors, the proposed unified framework built optimal artificial intelligence models with over 90% test accuracy.
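
Normalization, one of the preprocessing functions named above, is itself parameterized; a minimal sketch of min-max scaling where the target range plays the role of a tunable preprocessing hyper-parameter (the specific function and range are illustrative, not the paper's):

```python
def min_max_normalize(values, lo=0.0, hi=1.0):
    """Scale values into [lo, hi]; (lo, hi) acts as a preprocessing
    hyper-parameter the framework could tune per application."""
    v_min, v_max = min(values), max(values)
    if v_max == v_min:
        return [lo for _ in values]  # constant input: map everything to lo
    scale = (hi - lo) / (v_max - v_min)
    return [lo + (v - v_min) * scale for v in values]

# Hypothetical inertial-sensor readings
print(min_max_normalize([2, 4, 6, 10]))
```

The framework's job is then to search over such preprocessing choices and their settings together with the engine's own hyper-parameters.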

Precision comparison of 3D photogrammetry scans according to the number and resolution of images

  • Park, JaeWook;Kim, YunJung;Kim, Lyoung Hui;Kwon, SoonChul;Lee, SeungHyun
    • International journal of advanced smart convergence
    • /
    • v.10 no.2
    • /
    • pp.108-122
    • /
    • 2021
  • With the development of 3D graphics software and faster computer hardware, realistic expression is now possible not only in movie visual effects but also in console games. In the production of such realistic 3D models, 3D scans are increasingly used because they yield hyper-realistic results with relatively little effort. Among the various 3D scanning methods, photogrammetry requires only a camera and no additional hardware, so demand for it is rapidly increasing. Most 3D artists shoot as many images as possible, for example with a video camera, and then compute using all of those images; photogrammetry is therefore seen as a task requiring much memory and long processing times. However, research on how to obtain precise results with 3D photogrammetry scans is insufficient, and using a large number of photos increases production time and data volume while decreasing productivity. In this study, point cloud data generated with varying numbers and resolutions of photographic images were produced and compared with the original data. Precision was measured using the average distance and standard deviation over the vertices of the point cloud. By comparing and analyzing the differences in precision of 3D photogrammetry scans according to the number and resolution of images, this paper offers 3D artists a direction for obtaining the most precise and effective results.
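
The precision measure described above, the mean and standard deviation of per-vertex distances between a scanned point cloud and the original, can be sketched directly (this assumes vertices are already in correspondence, which real comparisons would establish via registration and nearest-neighbor matching):

```python
import math

def precision_stats(cloud_a, cloud_b):
    """Mean and (population) standard deviation of per-vertex distances
    between two point clouds with corresponding vertices."""
    dists = [math.dist(p, q) for p, q in zip(cloud_a, cloud_b)]
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    return mean, math.sqrt(var)

# Hypothetical original vs. scanned vertices (units arbitrary)
original = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
scanned = [(0, 0, 0.1), (1, 0, 0.1), (0, 1, 0.3)]
mean, std = precision_stats(original, scanned)
print(mean, std)
```

A lower mean distance indicates a more accurate scan overall, while a lower standard deviation indicates the error is uniform rather than concentrated in particular regions.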

Analysis of Accuracy and Loss Performance According to Hyperparameter in RNN Model (RNN모델에서 하이퍼파라미터 변화에 따른 정확도와 손실 성능 분석)

  • Kim, Joon-Yong;Park, Koo-Rack
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.7
    • /
    • pp.31-38
    • /
    • 2021
  • In this paper, to optimize the RNN model used for sentiment analysis, the behavior of each model was studied by observing trends in loss and accuracy under hyperparameter tuning. As the research method, after configuring a hidden layer with LSTM and an embedding layer best suited to processing sequential data, the loss and accuracy of each model were measured while tuning the LSTM units, batch size, and embedding size. The measurements yielded a loss of 41.9% and an accuracy of 11.4%, and the optimized model showed a consistently stable trend, confirming that hyperparameter tuning has a profound effect on the model. In addition, among the three hyperparameters, the choice of embedding size was confirmed to have the greatest influence on the model. In the future, this research will continue with work on an algorithm that allows the model to find the optimal hyperparameters directly.
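
The tuning loop described above, sweeping units, batch size, and embedding size and comparing the resulting loss, can be sketched as an exhaustive grid search (the abstract does not state the exact search procedure, and the evaluation function below is a hypothetical stand-in for actually training the LSTM):

```python
import itertools

def evaluate(units, batch_size, embedding_size):
    """Stand-in for training the RNN and returning validation loss.
    A real implementation would build and fit the LSTM model here;
    this toy surrogate just pretends loss is lowest near one setting."""
    return (abs(units - 64) / 64
            + abs(batch_size - 32) / 32
            + abs(embedding_size - 128) / 128)

grid = {"units": [32, 64, 128],
        "batch_size": [16, 32, 64],
        "embedding_size": [64, 128, 256]}

# Try every combination and keep the one with the smallest loss
best = min(itertools.product(*grid.values()), key=lambda cfg: evaluate(*cfg))
print(dict(zip(grid, best)))
```

The paper's observation that embedding size matters most corresponds, in this framing, to loss varying most strongly along that axis of the grid.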

Exploring performance improvement through split prediction in stock price prediction model (주가 예측 모델에서의 분할 예측을 통한 성능향상 탐구)

  • Yeo, Tae Geon Woo;Ryu, Dohui;Nam, Jungwon;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.4
    • /
    • pp.503-509
    • /
    • 2022
  • The purpose of this study is to predict the rate of change between the next day's market price and the previous day's price, rather than the market price itself as in previous papers. The next day's stock prices are ranked and divided into sections at regular intervals, and a market price is generated for each section. We propose a new time series prediction method that predicts the final next-day rate of change of the market price through a model that uses the rate of change as the predicted value. We then analyze how the model's performance changes with the granularity of the predicted value and the type of input data.
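
The target variable described above, the day-over-day rate of change, and its division into fixed-width sections can be sketched as follows (the bin width and prices are hypothetical, and this is one illustrative reading of the sectioning scheme, not the paper's exact construction):

```python
def rate_of_change(prev_price, next_price):
    """Relative change between the previous day's and next day's price."""
    return (next_price - prev_price) / prev_price

def binned_target(roc, bin_width=0.01):
    """Assign a rate of change to a fixed-width interval (section),
    yielding the coarser split targets the model predicts."""
    return int(roc // bin_width)

print(rate_of_change(100.0, 103.0))  # 0.03
```

A model trained on such section indices predicts a coarser quantity than the raw price; the study examines how the degree of this subdivision affects performance.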