• Title/Summary/Keyword: Multi-task

Ensembles of neural network with stochastic optimization algorithms in predicting concrete tensile strength

  • Hu, Juan;Dong, Fenghui;Qiu, Yiqi;Xi, Lei;Majdi, Ali;Ali, H. Elhosiny
    • Steel and Composite Structures / v.45 no.2 / pp.205-218 / 2022
  • Proper calculation of the splitting tensile strength (STS) of concrete has been a crucial task, given the wide use of concrete in the construction sector. Following many recent studies that have proposed various predictive models for this purpose, this study suggests and tests three hybrid models for predicting the STS from the characteristics of the mixture components, including cement compressive strength, cement tensile strength, curing age, maximum crushed-stone size, stone powder content, sand fineness modulus, water-to-binder ratio, and sand ratio. A multi-layer perceptron (MLP) neural network is combined with invasive weed optimization (IWO), the cuttlefish optimization algorithm (CFOA), and the electrostatic discharge algorithm (ESDA), which are among the newest optimization techniques. A dataset from the earlier literature is used for exploring and extrapolating the STS behavior. Results across several accuracy criteria demonstrated a strong learning capability for all three hybrid models, viz. IWO-MLP, CFOA-MLP, and ESDA-MLP. In the prediction phase, the predictions also agreed well (above 88%) with the experimental results. A comparative look, however, revealed ESDA-MLP as the most accurate predictor. In terms of the mean absolute percentage error (MAPE), the error of ESDA-MLP was 9.05%, versus 9.17% for IWO-MLP and 13.97% for CFOA-MLP. Since the combination of MLP and ESDA can be an effective tool for optimizing the concrete mixture toward a desirable STS, the last part of this study is dedicated to extracting a predictive formula from this model.
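
A minimal sketch of how such a predictor is scored with MAPE, assuming a scikit-learn MLP and synthetic stand-ins for the eight mixture features; ordinary gradient training replaces the paper's IWO/CFOA/ESDA weight searches.

```python
# Hedged sketch: score an MLP predictor of splitting tensile strength (STS)
# with the MAPE criterion reported above. The synthetic features and target
# are hypothetical stand-ins, not the study's dataset, and gradient descent
# stands in for the metaheuristic (IWO/CFOA/ESDA) training.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 8))  # 8 mixture features (e.g., w/b ratio)
y = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 6] + rng.normal(0.0, 0.1, 500)  # toy STS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
mape = np.mean(np.abs((y_te - pred) / y_te)) * 100.0  # mean absolute % error
print(f"MAPE: {mape:.2f}%")
```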

Artificial neural network for classifying with epilepsy MEG data (뇌전증 환자의 MEG 데이터에 대한 분류를 위한 인공신경망 적용 연구)

  • Yujin Han;Junsik Kim;Jaehee Kim
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.139-155 / 2024
  • This study performed a multi-class classification task to distinguish patients with mesial temporal lobe epilepsy with left hippocampal sclerosis (left mTLE), patients with mesial temporal lobe epilepsy with right hippocampal sclerosis (right mTLE), and healthy controls (HC) using magnetoencephalography (MEG) data. We applied various artificial neural networks and compared the results. Across models built with convolutional neural networks (CNN), recurrent neural networks (RNN), and graph neural networks (GNN), the average k-fold accuracy was best for the CNN-based model, followed by the GNN-based and RNN-based models, while the wall time was best for the RNN-based model, followed by the GNN-based and CNN-based models. The GNN-based model, which performs well in both accuracy and time and scales well to network data, is the most suitable model for future brain research.
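
A minimal sketch of the comparison protocol, assuming simple scikit-learn classifiers as stand-ins for the CNN/RNN/GNN architectures: each model is ranked by average k-fold accuracy and by the wall time of the cross-validation run.

```python
# Hedged sketch: rank models by average k-fold accuracy and wall time, as the
# study does for its CNN-, RNN-, and GNN-based models. Simple scikit-learn
# classifiers and synthetic 3-class data stand in for MEG and deep networks.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# 3 classes mirror left mTLE vs. right mTLE vs. healthy controls (HC)
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)

for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("mlp", MLPClassifier(max_iter=500, random_state=0))]:
    start = time.perf_counter()
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold accuracy
    wall = time.perf_counter() - start
    print(f"{name}: mean accuracy {scores.mean():.3f}, wall time {wall:.2f}s")
```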

Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.166-173 / 2024
  • This study explores lip-sync detection, focusing on the synchronization between lip movements and voices in videos. Typically, lip-sync detection techniques crop the facial area of a given video and use the lower half of the cropped box as input to a visual encoder that extracts visual features. To place more emphasis on the articulatory region of the lips for more accurate lip-sync detection, we propose using a pre-trained visual attention-based encoder. The Visual Transformer Pooling (VTP) module, originally designed for the lip-reading task of predicting a script from visual information alone, without audio, is employed as the visual encoder. Our experimental results demonstrate that, despite having fewer learning parameters, the proposed method outperforms the latest model, VocaList, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% based on five context frames. Moreover, our approach outperforms VocaList by approximately 8% in lip-sync detection accuracy even on an untrained dataset, Acappella.
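
A minimal sketch of the generic scoring step in such detectors, assuming placeholder encoders: a window of five context frames and the aligned audio are embedded and compared by cosine similarity, with a threshold deciding in/out of sync. The encoders and threshold below are illustrative, not the paper's VTP pipeline.

```python
# Hedged sketch: cosine-similarity lip-sync scoring over five context frames.
# visual_encoder/audio_encoder are hypothetical placeholders for a VTP-style
# visual encoder and an audio encoder; the 0.5 threshold is an assumption.
import numpy as np

rng = np.random.default_rng(0)

def visual_encoder(frames: np.ndarray) -> np.ndarray:
    """Placeholder: map (5, H, W) lip-region frames to a 512-d embedding."""
    return rng.normal(size=512)

def audio_encoder(mel: np.ndarray) -> np.ndarray:
    """Placeholder: map the aligned mel-spectrogram chunk to 512-d."""
    return rng.normal(size=512)

frames = np.zeros((5, 96, 96))   # five context frames, as in the abstract
mel = np.zeros((80, 16))         # audio features covering the same window

v, a = visual_encoder(frames), audio_encoder(mel)
score = float(v @ a / (np.linalg.norm(v) * np.linalg.norm(a)))  # cosine sim
print("in sync" if score > 0.5 else "out of sync", f"(score {score:.3f})")
```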

Density map estimation based on deep-learning for pest control drone optimization (드론 방제의 최적화를 위한 딥러닝 기반의 밀도맵 추정)

  • Baek-gyeom Seong;Xiongzhe Han;Seung-hwa Yu;Chun-gu Lee;Yeongho Kang;Hyun Ho Woo;Hunsuk Lee;Dae-Hyun Lee
    • Journal of Drive and Control / v.21 no.2 / pp.53-64 / 2024
  • Global population growth has increased the demand for food production. At the same time, aging rural communities have reduced the agricultural workforce, increasing the demand for automation in agriculture. Drones are particularly useful for unmanned pest control in the field. However, the current method of uniform spraying damages the environment through overuse of pesticides and wind drift. To address this issue, spraying performance must be improved through precise performance evaluation. Therefore, as a foundational study toward optimizing drone-based pest control technologies, this research evaluated water-sensitive paper (WSP) via density map estimation using convolutional neural networks (CNN) with an encoder-decoder structure. To achieve more accurate estimation, this study implemented multi-task learning, incorporating an additional classifier for image segmentation alongside the density map estimation head. The proposed model achieved an R-squared (R²) of 0.976 for coverage area on the evaluation data set, demonstrating satisfactory performance in evaluating WSP at various density levels. Further research is needed to improve the accuracy of spray result estimation and to develop a real-time assessment technology for use in the field.
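
A minimal sketch of the multi-task idea, assuming PyTorch: one shared encoder-decoder feeds both a density-map head and an auxiliary segmentation head, and the two losses are summed. Layer sizes and losses are illustrative choices, not the paper's architecture.

```python
# Hedged sketch: encoder-decoder CNN with a density-map head plus an auxiliary
# segmentation head, trained on the sum of both losses (the multi-task setup
# described above). All layer sizes and the toy batch are illustrative.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.MaxPool2d(2))
        self.decoder = nn.Sequential(nn.Upsample(scale_factor=2),
                                     nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.density_head = nn.Conv2d(16, 1, 1)  # droplet density map
        self.seg_head = nn.Conv2d(16, 1, 1)      # auxiliary WSP segmentation

    def forward(self, x):
        feats = self.decoder(self.encoder(x))
        return self.density_head(feats), torch.sigmoid(self.seg_head(feats))

net = MultiTaskNet()
img = torch.randn(2, 3, 64, 64)                  # toy batch of WSP images
density, seg = net(img)
loss = nn.functional.mse_loss(density, torch.zeros_like(density)) \
     + nn.functional.binary_cross_entropy(seg, torch.ones_like(seg))
loss.backward()                                  # joint multi-task update
print(density.shape, seg.shape, float(loss))
```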

Study on load tracking characteristics of closed Brayton conversion liquid metal cooled space nuclear power system

  • Li Ge;Huaqi Li;Jianqiang Shan
    • Nuclear Engineering and Technology / v.56 no.5 / pp.1584-1602 / 2024
  • It is vital that a space reactor power supply operating in orbit output the electrical power required by various tasks. The dynamic performance of the closed Brayton cycle thermoelectric conversion system is first studied and analyzed. On this basis, a load-tracking power regulation method is developed for the liquid metal cooled space reactor power system that takes into account the lithium inlet temperature on the hot side of the intermediate heat exchanger, the helium-xenon filling quantity, and the heat input of the heat-pipe radiator module. After comparing several methods, a power regulation method with fast response and strong system stability is obtained. The dynamic response characteristics of the ultra-small liquid metal lithium-cooled space reactor concept are analyzed under various changes in power output. During transient operation at 70% load power, the core power variation stays within 30% and the core coolant temperature remains at the set safe temperature. The helium-xenon working fluid of the second loop shows a 65 K temperature change range at a 25% filling quantity. The lithium temperature at the radiator loop outlet changes by less than ±7 K, and the system's main parameters change as expected, indicating safe operation. The core produces less power during transient operation at 30% load power. The response characteristics of the system parameters show that, under low-power operating conditions, the lithium working-fluid temperature of the radiator circuit and the operating temperature of the high-temperature heat pipes are the limiting conditions, and multiple system parameters must be coordinated so that the lithium working fluid does not condense in the radiator system or the heat pipes.
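
A minimal, loosely related sketch of a load-following loop, assuming a first-order plant under PI control; the gains, time constant, and plant model are invented for illustration and are not the paper's regulation scheme.

```python
# Hedged sketch: a PI controller drives a first-order plant toward a stepped
# power demand (normalized units), echoing the 70% load-following transient
# described above. Plant model and gains are invented, not the paper's.
dt, kp, ki, tau = 0.1, 0.8, 0.4, 5.0  # step size, PI gains, plant time constant
power = 1.0                           # start at 100% electrical power
integral = power / ki                 # start at steady state (u == power)

for step in range(601):
    t = step * dt
    demand = 1.0 if t < 20.0 else 0.7          # step down to 70% load
    err = demand - power
    integral += err * dt
    u = kp * err + ki * integral               # actuator command (abstracted)
    power += dt / tau * (u - power)            # first-order plant response
    if step % 100 == 0:
        print(f"t={t:5.1f}s  demand={demand:.2f}  power={power:.3f}")
```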

Gaze-Manipulated Data Augmentation for Gaze Estimation With Diffusion Autoencoders (디퓨전 오토인코더의 시선 조작 데이터 증강을 통한 시선 추적)

  • Kangryun Moon;Younghan Kim;Yongjun Park;Yonggyu Kim
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.51-59 / 2024
  • Collecting a dataset with labeled gaze vectors is costly in the gaze estimation field. In this paper, we suggest a data augmentation that manipulates the gaze of an original image, which improves the accuracy of the gaze estimation model when the number of available gaze labels is restricted. By conducting multi-class gaze bin classification as an auxiliary task and adjusting the latent variable of the diffusion model, the model semantically edits the gaze in the original image. We manipulate non-binary attributes, the pitch and yaw of the gaze vector, to a desired range and use the edited images as augmented training data. The improved accuracy of the gaze estimation network under semi-supervised learning validates the effectiveness of our data augmentation, especially when the number of gaze labels is 50k or less.
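
A minimal sketch of the auxiliary-task setup, assuming gaze pitch and yaw are discretized into fixed-width classes for the multi-class bin classifier; the ±42° range and 3° bin width are assumptions, not the paper's settings.

```python
# Hedged sketch: discretize continuous gaze (pitch, yaw) into class bins so an
# auxiliary multi-class classifier can be trained alongside the main model.
# The angular range and bin width are illustrative assumptions.
import numpy as np

BIN_DEG = 3.0                                    # assumed bin width (degrees)
EDGES = np.arange(-42.0, 42.0 + BIN_DEG, BIN_DEG)
N_BINS = len(EDGES) - 1                          # classes per axis

def gaze_to_bins(pitch_deg: float, yaw_deg: float) -> tuple[int, int]:
    """Map a continuous gaze vector to (pitch_bin, yaw_bin) class labels."""
    p = int(np.clip(np.digitize(pitch_deg, EDGES) - 1, 0, N_BINS - 1))
    y = int(np.clip(np.digitize(yaw_deg, EDGES) - 1, 0, N_BINS - 1))
    return p, y

print(N_BINS, gaze_to_bins(-5.2, 12.7))  # class labels for one sample
```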

The Patient Specific QA of IMRT and VMAT Through the AAPM Task Group Report 119 (AAPM TG-119 보고서를 통한 세기조절방사선치료(IMRT)와 부피적세기조절회전치료(VMAT)의 치료 전 환자별 정도관리)

  • Kang, Dong-Jin;Jung, Jae-Yong;Kim, Jong-Ha;Park, Seung;Lee, Keun-Sub;Sohn, Seung-Chang;Shin, Young-Joo;Kim, Yon-Lae
    • Journal of radiological science and technology / v.35 no.3 / pp.255-263 / 2012
  • The aim of this study was to evaluate patient-specific quality assurance (QA) results for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) following the AAPM Task Group 119 report. Both IMRT and VMAT treatment plans were created with the treatment planning system. The absolute and relative doses for the target and OARs were measured with an ion chamber and a bi-planar diode array, respectively. Plans were evaluated with dose-volume histograms (DVH), and dose verification was performed by comparing measured with calculated values. In the plan evaluation, both IMRT and VMAT met the goals for the target and OARs in the prostate case. In the H&N and multi-target cases, IMRT did not reach the target goal, whereas VMAT reached the goals for both target and OARs. In the C-shape (easy) case, both techniques reached the goals for target and OARs; in the C-shape (hard) case, both reached the target goal but not the OAR goal. In the absolute dose evaluation, the mean relative error between measured and calculated values for IMRT was 1.24±2.06% for the target and 1.4±2.9% for the OARs, with confidence limits of 3.65% and 4.39%, respectively. For VMAT, the mean relative error was 2.06±0.64% for the target and 2.21±0.74% for the OARs, with confidence limits of 4.09% and 3.04%, respectively. In the relative dose evaluation, the average gamma passing rate (3 mm/3%) was 98.3±1.5% for IMRT with a confidence limit of 3.78%, and 98.2±1.1% for VMAT with a confidence limit of 3.95%. We performed IMRT and VMAT patient-specific QA using a TG-119-based procedure, and all results satisfied the TG-119 acceptance criteria, confirming the accuracy of IMRT and VMAT at our institution.
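
A minimal sketch of the statistics quoted above, assuming TG-119's definition of the confidence limit as |mean| + 1.96·σ of the per-plan relative errors; the dose values are made-up numbers for illustration.

```python
# Hedged sketch: per-plan relative error between measured and calculated point
# doses, and the TG-119 confidence limit |mean| + 1.96*sigma. The dose values
# below are invented for illustration, not the study's measurements.
import numpy as np

measured = np.array([2.01, 1.98, 2.05, 1.96, 2.02])    # Gy, hypothetical
calculated = np.array([2.00, 2.00, 2.00, 2.00, 2.00])  # Gy, hypothetical

rel_err = (measured - calculated) / calculated * 100.0  # % error per plan
mean, sigma = rel_err.mean(), rel_err.std(ddof=1)
confidence_limit = abs(mean) + 1.96 * sigma             # TG-119 definition
print(f"mean {mean:+.2f}% (sigma {sigma:.2f}%), CL {confidence_limit:.2f}%")
```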

The Study on the Priority of First Person Shooter game Elements using Delphi Methodology (FPS게임 구성요소의 중요도 분석방법에 관한 연구 1 -델파이기법을 이용한 독립요소의 계층설계와 검증을 중심으로-)

  • Bae, Hye-Jin;Kim, Suk-Tae
    • Archives of design research / v.20 no.3 s.71 / pp.61-72 / 2007
  • Having started with "Space War", the first game, produced at MIT in the 1960s, the gaming industry expanded rapidly and grew large in a short period of time: brand-new games launched on the market contain so many different elements within a single piece of content that games are often called the most comprehensive fruit of design technologies. This also means a large increase in the number of things to consider when developing a game, complicating plans for budget, workforce, and schedule. Therefore, an approach that analyzes the elements making up a game, computes the importance of each, and assesses games to be developed is key to successful game development. Such planning requires many decision-making activities, and the decision-making task involves several difficulties: the multi-factor problem; the uncertainty problem, which impedes quantifying the elements; the complex multi-purpose problem, whose outcomes confuse decision-makers; and the problem of determining the priority order across the multiple stages leading to a decision. In this study we propose AHP (Analytic Hierarchy Process) to work out these problems comprehensively and to derive a logical and rational alternative by quantifying the uncertain data. The analysis took FPS (First Person Shooter) games, which currently dominate the gaming industry, as its subject. The most important considerations in an AHP analysis are to group the elements of the subject objectively and arrange them hierarchically, and to analyze their importance through pair-wise comparison of the elements. The study consists of two parts: analyzing the elements and computing their relative importance, and choosing an alternative. This paper focuses in particular on the Delphi-based element analysis and hierarchy design for FPS games.
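
A minimal sketch of the AHP step the study builds toward, assuming a made-up 3x3 pairwise comparison matrix on Saaty's 1-9 scale: priority weights come from the principal eigenvector, and a consistency ratio checks the judgments.

```python
# Hedged sketch: AHP priority weights from a pairwise comparison matrix via
# the principal eigenvector, with a consistency check. The matrix is a made-up
# example on Saaty's 1-9 scale, not data from the study.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # element 1 vs. elements 1, 2, 3
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # Saaty random index RI = 0.58 for n=3
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 4))
```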

An Interface Technique for Avatar-Object Behavior Control using Layered Behavior Script Representation (계층적 행위 스크립트 표현을 통한 아바타-객체 행위 제어를 위한 인터페이스 기법)

  • Choi Seung-Hyuk;Kim Jae-Kyung;Lim Soon-Bum;Choy Yoon-Chul
    • Journal of KIISE:Software and Applications / v.33 no.9 / pp.751-775 / 2006
  • In this paper, we suggested an avatar control technique using high-level behaviors. We separated behaviors into three levels according to their degree of abstraction and defined layered scripts. Layered scripts let the user control avatar behaviors at the abstract level and make scripts reusable. As the 3D environment gets more complicated, the number of required avatar behaviors increases accordingly, and controlling avatar-object behaviors becomes even more challenging. To solve this problem, we embed avatar behaviors into each environment object; the object itself specifies how the avatar can interact with it. Even with a large number of environment objects, our system can manage avatar-object interactions in an object-oriented manner. Finally, we suggest an easy-to-use interface technique that lets the user control avatars through context menus: using the behavior information embedded in an object, the system analyzes the object's state and filters the behaviors, so the context menu shows only the behaviors the avatar can currently perform. We built a virtual presentation environment and applied our model to that system.
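
A minimal sketch of this object model, assuming invented names and states: each environment object embeds the avatar behaviors it supports together with the object states in which they are valid, and the context menu is simply a state filter over that list.

```python
# Hedged sketch: environment objects embed their supported avatar behaviors,
# and the context menu filters them by the object's current state. Object
# names, states, and behaviors are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Behavior:
    name: str
    valid_states: set[str]          # object states in which it can run

@dataclass
class EnvObject:
    name: str
    state: str
    behaviors: list[Behavior] = field(default_factory=list)

    def context_menu(self) -> list[str]:
        """Only behaviors valid in the current state appear in the menu."""
        return [b.name for b in self.behaviors if self.state in b.valid_states]

door = EnvObject("door", "closed", [Behavior("open", {"closed"}),
                                    Behavior("close", {"open"}),
                                    Behavior("knock", {"closed", "open"})])
print(door.context_menu())          # ['open', 'knock'] while the door is closed
```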

External Auditing on Absorbed Dose Using a Solid Water Phantom for Domestic Radiotherapy Facilities (고체팬텀을 이용한 국내 방사선 치료시설의 흡수선량에 대한 조사)

  • Choi, Chang-Heon;Kim, Jung-In;Park, Jong-Min;Park, Yang-Kyun;Cho, Kun-Woo;Cho, Woon-Kap;Lim, Chun-Il;Ye, Sung-Joon
    • Radiation Oncology Journal / v.28 no.1 / pp.50-56 / 2010
  • Purpose: We report the results of an external audit of the absorbed dose of radiotherapy beams performed independently by third parties. For this effort, we developed a method to measure the absorbed dose to water with an easy and convenient solid water phantom setup. Materials and Methods: In 2008, 12 radiotherapy centers voluntarily participated in the external auditing program, and 47 X-ray and electron beams were independently calibrated by the third party following the American Association of Physicists in Medicine (AAPM) task group (TG)-51 protocol. Although the AAPM TG-51 protocol recommends the use of water, a water phantom has a few disadvantages, especially in a busy clinic. Instead, we used a solid water phantom for its reproducibility and its convenience of setup and transport. Dose conversion factors between solid water and water were determined for photon and electron beams of various energies by a scaling method and experimental measurements. Results: Most of the beams (74%) were within ±2% deviation under the third party's protocol. However, two of 20 X-ray beams and three of 27 electron beams were out of tolerance (±3%), including two beams with a >10% deviation. X-ray beams above 6 MV had no conversion factors, while the 6 MV absorbed dose to the solid water phantom was 0.4% less than the dose to water. The electron dose conversion factors between the solid water phantom and water were determined: the higher the electron energy, the smaller the conversion factor. The total uncertainty of the TG-51 protocol measurement using a solid water phantom was determined to be ±1.5%. Conclusion: The developed method was successfully applied in the external auditing program, which could evolve into a credentialing program for multi-institutional clinical trials. This dosimetry saved measurement time and decreased the measurement uncertainty that can arise from the reference setup in water.
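
A minimal sketch of the audit arithmetic, assuming illustrative numbers: a dose measured in the solid water phantom is converted to dose to water with an energy-specific factor, and the deviation is checked against the ±3% tolerance. Only the 6 MV relation (solid water 0.4% below water) comes from the text; everything else is invented.

```python
# Hedged sketch: apply a solid-water-to-water conversion factor, then flag
# beams whose deviation from the stated dose exceeds the ±3% tolerance.
# All numbers are illustrative except the 6 MV factor implied above
# (solid-water dose 0.4% below dose to water).
beams = {
    "6MV":  {"dose_sw": 1.000, "stated": 1.004, "factor": 1.004},
    "9MeV": {"dose_sw": 0.950, "stated": 1.000, "factor": 1.010},  # invented
}

for name, b in beams.items():
    dose_w = b["dose_sw"] * b["factor"]                 # dose-to-water estimate
    dev = (dose_w - b["stated"]) / b["stated"] * 100.0  # % deviation
    status = "OK" if abs(dev) <= 3.0 else "OUT OF TOLERANCE"
    print(f"{name}: deviation {dev:+.2f}% -> {status}")
```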