• Title/Summary/Keyword: Adaptive 시스템


A study on perceived interactivity of Dance video contents and intention to use: Focused on YouTube (무용영상콘텐츠의 정보서비스 이용에 대한 상호작용성 인식과 이용지속의도에 관한 연구- 유투브를 중심으로)

  • Jung, Sae-Bom;Won, Do-Yeon;Jang, Young-Jin
    • 한국체육학회지인문사회과학편
    • /
    • v.55 no.3
    • /
    • pp.349-363
    • /
    • 2016
  • The purpose of this study was to examine perceived interactivity in the experience of using dance video contents and the intention to continue using them. The study applied an adapted Technology Acceptance Model (TAM), focusing on YouTube. To achieve this purpose, a total of 350 questionnaires were distributed and 311 responses were finally analyzed. All data were processed with SPSS for Windows 20.0 and AMOS 18.0 using frequency analysis, reliability analysis, confirmatory factor analysis, correlation analysis, model evaluation, and structural equation modeling. The results were as follows. First, perceived interactivity did not affect perceived usefulness, but two-way communication and user control, sub-factors of perceived interactivity, had a positive effect on perceived ease of use. Second, perceived ease of use had a positive effect on perceived usefulness. Lastly, perceived usefulness did not affect intention to use, but perceived ease of use had a positive effect on intention to use.
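As a rough illustration of the analysis described above, the sketch below fits an adapted TAM path model with the open-source semopy package rather than the AMOS software the authors used; the latent variables, indicator names, and data file are hypothetical placeholders, not the study's instrument.

```python
# A minimal sketch (not the authors' AMOS model) of an adapted TAM path model
# fit with semopy; indicator names and the CSV file are hypothetical.
import pandas as pd
from semopy import Model

model_desc = """
# measurement model (hypothetical indicators)
interactivity =~ two_way_comm + user_control
ease_of_use   =~ eou1 + eou2 + eou3
usefulness    =~ use1 + use2 + use3
intention     =~ int1 + int2 + int3

# structural paths hypothesized by TAM
ease_of_use ~ interactivity
usefulness  ~ interactivity + ease_of_use
intention   ~ usefulness + ease_of_use
"""

survey = pd.read_csv("dance_video_survey.csv")  # hypothetical survey item data
sem = Model(model_desc)
sem.fit(survey)
print(sem.inspect())  # path coefficients, standard errors, p-values
```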

Estimation of the Marginal Walking Time of Bus Users in Small-Medium Cities (중·소도시 버스이용자의 한계도보시간 추정)

  • Kim, Kyung Whan;Yoo, Hwan Hee;Lee, Sang Ho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.4D
    • /
    • pp.451-457
    • /
    • 2008
  • Establishing realistic bus service coverage is needed to build optimum city bus line networks and reasonable bus service coverage areas. The purposes of this study are to understand the characteristics of the present walking time and marginal walking time of bus users in small-medium cities and to construct an ANFIS (Adaptive Neuro-Fuzzy Inference System) model that estimates the marginal walking time for a given age and income. The cities of Masan, Chongwon and Jinju were selected as study cities. The 80th percentile of the present walking time of bus users in these cities is 10.2-11.1 minutes, greater than the 5-minute maximum walking time used in the USA, and the marginal walking times of 21.1-21.8 minutes are much greater still. An ANFIS model based on pooled data from the cities was constructed to estimate the marginal walking time for small-medium cities. Analyzing the relationship between marginal walking time and age/income with the model, the marginal walking time decreases as age increases, but is nearly constant from age 25 to 35, and it is inversely proportional to income. Comparing the surveyed and estimated values, the coefficient of determination, MSE and MAE are 0.996, 0.163 and 0.333 respectively, so the explanatory power of the model may be judged to be very high. The technique developed in this study can be applied to other cities.
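The ANFIS model itself is not reproduced in the abstract, so the sketch below only illustrates the first-order Sugeno fuzzy structure that ANFIS tunes, with age and income as inputs; the membership functions and consequent coefficients are illustrative guesses, not the paper's fitted parameters.

```python
# A minimal forward-pass sketch of a first-order Sugeno fuzzy system, the
# structure ANFIS tunes by hybrid learning; all parameters below are
# illustrative placeholders, not the paper's fitted values.
import numpy as np

def gauss(x, c, s):
    """Gaussian membership degree of x for a fuzzy set centered at c."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Two fuzzy sets per input: age (young/old), income (low/high).
age_sets    = [(25.0, 10.0), (55.0, 12.0)]      # (center, spread), years
income_sets = [(150.0, 80.0), (400.0, 120.0)]   # hypothetical income units

# One linear consequent (p*age + q*income + r) per rule, 4 rules total.
consequents = np.array([
    [-0.10, -0.005, 25.0],
    [-0.05, -0.010, 22.0],
    [-0.15, -0.003, 23.0],
    [-0.08, -0.008, 20.0],
])

def marginal_walking_time(age, income):
    # Layers 1-2: rule firing strengths (product of membership degrees).
    w = np.array([gauss(age, ca, sa) * gauss(income, ci, si)
                  for ca, sa in age_sets for ci, si in income_sets])
    w_norm = w / w.sum()                    # Layer 3: normalized weights
    x = np.array([age, income, 1.0])
    rule_out = consequents @ x              # Layer 4: first-order consequents
    return float(w_norm @ rule_out)         # Layer 5: weighted sum (minutes)

print(marginal_walking_time(age=30, income=250))  # illustrative output only
```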

An Accelerated Approach to Dose Distribution Calculation in Inverse Treatment Planning for Brachytherapy (근접 치료에서 역방향 치료 계획의 선량분포 계산 가속화 방법)

  • Byungdu Jo
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.5
    • /
    • pp.633-640
    • /
    • 2023
  • With the recent development of static and dynamic modulated brachytherapy, which uses radiation shielding to modulate the delivered dose distribution, the number of parameters and the amount of data required by inverse treatment planning and plan optimization algorithms suitable for the new directional intensity-modulated brachytherapy are increasing. Although intensity-modulated brachytherapy enables accurate dose delivery, the larger parameter and data volumes increase the elapsed time required for dose calculation. In this study, a GPU-based CUDA-accelerated dose calculation algorithm was constructed to curb this increase in dose calculation time. The calculation was accelerated by parallelizing the construction of the system matrix of the volume of interest and the dose calculation itself. The developed algorithms were all run in the same computing environment, with an Intel (3.7 GHz, 6-core) CPU and a single NVIDIA GTX 1080ti graphics card, and only the dose calculation time was measured, excluding the additional time required for loading data from disk and for preprocessing. The results showed that the accelerated algorithm reduced the dose calculation time by about a factor of 30 compared with the CPU-only calculation. The accelerated dose calculation algorithm can be expected to speed up treatment planning when new plans must be created to account for daily variations in applicator position, as in adaptive radiotherapy, or when dose calculation must account for changing parameters, as in dynamically modulated brachytherapy.
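The abstract describes the dose calculation as a parallelized system-matrix operation on the GPU; the sketch below shows that core step in CuPy (not the authors' CUDA implementation), with the array sizes and a bare inverse-square kernel as stand-in assumptions.

```python
# A minimal GPU sketch of the core step: dose at each voxel is a system
# matrix times the dwell/intensity weights. Sizes and the inverse-square
# kernel are illustrative placeholders, not the authors' code.
import cupy as cp

n_voxels, n_dwells = 50_000, 300
voxels = cp.random.rand(n_voxels, 3).astype(cp.float32) * 10.0   # cm
dwells = cp.random.rand(n_dwells, 3).astype(cp.float32) * 10.0   # cm

# Build the system matrix in parallel on the GPU: one dose-rate entry per
# (voxel, dwell position) pair; here only an inverse-square term for brevity.
diff = voxels[:, None, :] - dwells[None, :, :]          # broadcast pairwise
r2 = cp.maximum(cp.sum(diff * diff, axis=2), 1e-4)      # avoid divide-by-zero
system_matrix = 1.0 / r2                                # (n_voxels, n_dwells)

weights = cp.random.rand(n_dwells).astype(cp.float32)   # dwell times / intensities
dose = system_matrix @ weights                          # GPU matrix-vector product

cp.cuda.Stream.null.synchronize()   # finish GPU work so any timing is meaningful
print(float(dose.max()))
```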

Shading Treatment-Induced Changes in Physiological Characteristics of Thermopsis lupinoides (L.) Link (차광처리에 따른 갯활량나물의 생리 특성)

  • Seungju Jo;Dong-Hak Kim;Jung-Won Yoon;Eun Ju Cheong
    • Journal of Korean Society of Forest Science
    • /
    • v.113 no.2
    • /
    • pp.198-209
    • /
    • 2024
  • This study aimed to investigate the impact of light intensity, manipulated through different shading levels, on the growth and physiological responses of Thermopsis lupinoides. To assess the effects of shading treatments, we examined leaf mass per area, chlorophyll content, chlorophyll fluorescence response, and photosynthetic characteristics. T. lupinoides exhibited adaptive responses under low light conditions (50% shading), showing increased leaf area and decreased leaf mass per area as shading levels increased. These changes indicate morpho-physiological adaptations to reduced light availability. At 50% shading, the physiological and ecological responses were favorable, with optimal photosynthetic functions including chlorophyll content, photosynthesis saturation point, photosynthetic rate, carbon fixation efficiency, stomatal conductance, transpiration rate, and water use efficiency. However, at 95% shading, the essential light conditions for growth were not met, significantly impairing photosynthetic functions. Consequently, 50% shading was determined to be the most optimal condition for T. lupinoides growth. These findings provide valuable insights for effective ex-situ conservation practices and site selection for T. lupinoides, serving as foundational data for habitat restoration efforts.

Current Status and Improvement Measures for Records Management in the National Assembly Member's Office: Focusing on the Perception of the National Assembly Aides (국회의원실 기록관리의 현황과 개선방안 - 보좌직원의 인식을 중심으로 -)

  • Yeonhee Jang;Eun-Ha Youn
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.24 no.1
    • /
    • pp.187-204
    • /
    • 2024
  • This study was conducted to examine the current status of record management in parliamentary offices and identify areas of improvement. For this, in-depth interviews were conducted primarily with parliamentary aides to investigate their perceptions and needs. The research revealed that although the responsibility for record management in parliamentary offices lies with the aides, systematic record management is lacking because of inadequate awareness. While some aides recognize the importance of record management, there is still a need for a change in perception and practice. Furthermore, the study found that there is a lack of systematic education and support for effective implementation. The perceptions of aides were classified into three types: proactive (type A), pragmatically adaptive (type B), and those emphasizing the specificity of parliamentary records (type C). In particular, the change in perception of aides in types B and C is crucial, considering their pivotal role in parliamentary office record management. In response, this study suggests education and awareness improvement programs for record management, the introduction of an integrated record management system, and the establishment of policy and institutional support as key tasks.

Single-Channel Seismic Data Processing via Singular Spectrum Analysis (특이 스펙트럼 분석 기반 단일 채널 탄성파 자료처리 연구)

  • Woodon Jeong;Chanhee Lee;Seung-Goo Kang
    • Geophysics and Geophysical Exploration
    • /
    • v.27 no.2
    • /
    • pp.91-107
    • /
    • 2024
  • Single-channel seismic exploration has proven effective in delineating subsurface geological structures using small-scale survey systems. The seismic data acquired through zero- or near-offset methods directly capture subsurface features along the vertical axis, facilitating the construction of corresponding seismic sections. However, substantial noise in single-channel seismic data hampers precise interpretation because of the low signal-to-noise ratio. This study introduces a novel approach that integrates noise reduction and signal enhancement via matrix rank optimization to address this issue. Unlike conventional rank-reduction methods, which retain selected singular values to mitigate random noise, our method optimizes the entire singular value spectrum, thus effectively tackling both the random and erratic noise commonly found in environments with low signal-to-noise ratios. Additionally, to enhance the horizontal continuity of seismic events and mitigate signal loss during noise reduction, we introduced an adaptive weighting factor computed from the eigenimage of the seismic section. To assess the robustness of the proposed method, we conducted numerical experiments using single-channel Sparker seismic data from the Chukchi Plateau in the Arctic Ocean. The results demonstrated that the processed seismic sections had significantly improved signal-to-noise ratios and minimal signal loss. These advancements hold promise for enhancing single-channel and high-resolution seismic surveys and aiding marine development and the identification of submarine geological hazards in domestic coastal areas.
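As a rough sketch of the rank-based processing described above, the code below applies basic singular spectrum analysis to a single trace: Hankel embedding, SVD, a smooth weighting of the singular-value spectrum, and anti-diagonal averaging. The simple exponential weighting stands in for the paper's full singular-value optimization and eigenimage-based adaptive weights, which the abstract does not specify.

```python
# A minimal sketch of singular-spectrum-style denoising of one seismic trace.
# The exponential damping of small singular values is an illustrative stand-in
# for the paper's full-spectrum optimization.
import numpy as np

def ssa_denoise(trace, window, damping=3.0):
    n = len(trace)
    k = n - window + 1
    # Hankel (trajectory) matrix: column j holds trace[j : j + window].
    H = np.column_stack([trace[j:j + window] for j in range(k)])

    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    # Weight the whole singular-value spectrum instead of hard truncation.
    w = np.exp(-damping * (1.0 - s / s[0]))
    H_clean = (U * (s * w)) @ Vt

    # Anti-diagonal averaging (Hankelization) recovers the denoised trace.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += H_clean[:, j]
        counts[j:j + window] += 1.0
    return out / counts

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 25 * t) * np.exp(-3 * t)   # synthetic wavelet train
noisy = clean + 0.5 * np.random.randn(t.size)
denoised = ssa_denoise(noisy, window=60)
```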

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied in recent years. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field, and gyroscope sensors are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) was proposed. The accompanying status was defined from a part of user interaction behavior: whether the user is accompanied by an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation was proposed. First, a data preprocessing method consisting of time synchronization of the multimodal data from different physical sensors, data normalization, and sequence data generation was introduced. We applied nearest interpolation to synchronize the timestamps of data collected from different sensors, normalization was performed for each x, y, z axis value of the sensor data, and the sequence data were generated with the sliding window method. The sequence data then became the input for the CNN, where feature maps representing local dependencies of the original sequence were extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, so as to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained with the adaptive moment estimation (ADAM) optimization algorithm and a mini-batch size of 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of the majority-vote classifier, support vector machine, and deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that allow models trained on the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in the data that were not considered at the model learning stage.
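A minimal PyTorch sketch of the architecture described in the abstract is given below: three convolutional layers without pooling, a two-layer LSTM with 128 cells, dropout on the LSTM input, cross-entropy loss, and Adam with a learning rate decayed by 0.99 per epoch. Kernel sizes, channel widths, and the window length are assumptions not stated in the abstract.

```python
# A sketch of the described CNN + LSTM accompanying-status classifier.
# Kernel sizes, channel widths, and window length are assumptions.
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    def __init__(self, n_sensors=9, n_classes=2):      # e.g. 3 sensors x 3 axes
        super().__init__()
        self.cnn = nn.Sequential(                       # 3 conv layers, no pooling
            nn.Conv1d(n_sensors, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.drop = nn.Dropout(0.5)                     # dropout on the LSTM input
        self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                               # x: (batch, time, sensors)
        h = self.cnn(x.transpose(1, 2))                 # (batch, 64, time)
        h = self.drop(h.transpose(1, 2))                # (batch, time, 64)
        _, (h_n, _) = self.lstm(h)
        return self.fc(h_n[-1])                         # logits; softmax is in the loss

model = AccompanyNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

x = torch.randn(128, 200, 9)             # one mini-batch of sliding windows
y = torch.randint(0, 2, (128,))           # accompanying / not accompanying
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
scheduler.step()                          # per-epoch decay in a real training loop
```

The abstract's normal(0, 0.1) weight initialization could be added by applying `torch.nn.init.normal_` to the model parameters before training.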

Usefulness of Abdominal Compressor Using Stereotactic Body Radiotherapy with Hepatocellular Carcinoma Patients (토모테라피를 이용한 간암환자의 정위적 방사선치료시 복부압박장치의 유용성 평가)

  • Woo, Joong-Yeol;Kim, Joo-Ho;Kim, Joon-Won;Baek, Jong-Geal;Park, Kwang-Soon;Lee, Jong-Min;Son, Dong-Min;Lee, Sang-Kyoo;Jeon, Byeong-Chul;Cho, Jeong-Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.24 no.2
    • /
    • pp.157-165
    • /
    • 2012
  • Purpose: We evaluated the usefulness of an abdominal compressor for stereotactic body radiotherapy (SBRT) in patients with unresectable hepatocellular carcinoma (HCC), hepato-biliary cancer, and metastatic liver cancer. Materials and Methods: From November 2011 to March 2012, we selected HCC patients who obtained a reduction of diaphragm movement of more than 1 cm with an abdominal compressor (Diaphragm Control, Elekta, Sweden) for helical tomotherapy (Hi-Art Tomotherapy, USA). Planning computed tomography (CT) images and four-dimensional (4D) images were acquired with a 4D CT scanner (Somatom Sensation, Siemens, Germany). The gross tumor volume (GTV) included the gross tumor and margins accounting for tumor movement, and the planning target volume (PTV) included a 5 to 7 mm safety margin around the GTV. Patients were classified into two groups according to the distance between the tumor and the organs at risk (OAR: stomach, duodenum, bowel). Patients with a distance of more than 1 cm formed the 1st group and received SBRT in 4 or 5 fractions; patients with a distance of less than 1 cm formed the 2nd group and received tomotherapy in 20 fractions. Megavoltage computed tomography (MVCT) was performed for 4 or 10 fractions, and the MVCT fusion was verified giving priority to the liver rather than to the bone technique. The MVCT images were sent to MIM_vista (MIM Software, ver. 5.4, USA), where the stomach, duodenum, and bowel were re-delineated as bowel_organ and the liver was delineated. First, we analyzed the MVCT images to check the setup variation. Second, we compared the dose difference between the tumor and the OAR based on the adaptive dose through the adaptive planning station and MIM_vista. Results: The average setup variation from MVCT was $-0.66{\pm}1.53$ mm (left-right), $0.39{\pm}4.17$ mm (superior-inferior), $0.71{\pm}1.74$ mm (anterior-posterior), and $-0.18{\pm}0.30$ degrees (roll). The 1st group ($d{\geq}1$) and the 2nd group ($d<1$) showed similar setup variation. For the 1st group ($d{\geq}1$), $V_{diff3\%}$ (the volume with a 3% dose difference) obtained through the adaptive planning station was $0.78{\pm}0.05\%$ for the GTV and $9.97{\pm}3.62\%$ for the PTV; $V_{diff5\%}$ was 0.0% for the GTV and $2.9{\pm}0.95\%$ for the PTV; and the maximum dose difference rate of bowel_organ was $-6.85{\pm}1.11\%$. For the 2nd group ($d<1$), $V_{diff3\%}$ was $1.62{\pm}0.55\%$ for the GTV and $8.61{\pm}2.01\%$ for the PTV; $V_{diff5\%}$ was 0.0% for the GTV and $5.33{\pm}2.32\%$ for the PTV; and the maximum dose difference rate of bowel_organ was $28.33{\pm}24.41\%$. Conclusion: Although diaphragm movement of more than 5 mm was observed with fluoroscopy after applying the abdominal compressor, the average setup variation from MVCT was less than 5 mm, so the setup error could be estimated to be within 5 mm. The target dose difference rates of the 1st group ($d{\geq}1$) and the 2nd group ($d<1$) were similar, while the maximum dose difference rates of bowel_organ differed between the groups by more than 35%, that of the 1st group ($d{\geq}1$) being smaller than that of the 2nd group ($d<1$). When applying SBRT to HCC, the abdominal compressor is useful for controlling diaphragm movement in selected patients with a tumor-to-bowel_organ distance of more than 1 cm.


Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, because of the different nature of their capital structure and debt-to-equity ratio, it is more difficult to forecast bankruptcies of construction companies than of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of projects. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, could place greater burdens on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial statements have been studied for many years in various ways, but these models are intended for companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with this unique capital structure, criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt with a simple formula, classifying the result into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood of bankruptcy; for companies in the "moderate" category it is difficult to forecast the risk. Many of the construction firm cases in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning using computers, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed from a company's financial information and then judged as belonging to the bankruptcy risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning-based bankruptcy prediction focus on companies in no specific industry, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups, large, medium, and small, based on the company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
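The sketch below illustrates the two techniques named in the abstract: the 1968 Altman Z-score with its conventional dangerous/moderate/safe cut-offs, and a scikit-learn AdaBoost classifier fit on financial-ratio features. The data file, column names, and bankruptcy label are hypothetical, not the paper's dataset.

```python
# A sketch of the classic Altman Z-score and an AdaBoost classifier on
# financial features; the CSV and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def altman_z(df):
    # Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5 (Altman, 1968)
    x1 = df["working_capital"] / df["total_assets"]
    x2 = df["retained_earnings"] / df["total_assets"]
    x3 = df["ebit"] / df["total_assets"]
    x4 = df["market_equity"] / df["total_liabilities"]
    x5 = df["sales"] / df["total_assets"]
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

firms = pd.read_csv("construction_financials.csv")   # hypothetical dataset
z = altman_z(firms)
# Conventional cut-offs: below 1.81 distress ("dangerous"), above 2.99 "safe".
firms["zone"] = pd.cut(z, [-float("inf"), 1.81, 2.99, float("inf")],
                       labels=["dangerous", "moderate", "safe"])

# AdaBoost on raw financial features, to be evaluated per capital-size group.
features = firms[["working_capital", "retained_earnings", "ebit",
                  "market_equity", "sales", "total_assets", "total_liabilities"]]
X_tr, X_te, y_tr, y_te = train_test_split(features, firms["bankrupt"],
                                          test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```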

A Study on the Effects of the Institutional Pressure on the Process of Implementation and Appropriation of System: M-EMRS in Hospital Organization (시스템의 도입과 전유 과정에 영향을 미치는 제도적 압력에 관한 연구: 병원조직의 모바일 전자의무기록 시스템을 대상으로)

  • Lee, Zoon-Ky;Shin, Ho-Kyoung;Choi, Hee-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.95-116
    • /
    • 2009
  • Institutional theory has increasingly become an important theoretical lens for decision-making processes and IT adoption in academic research. This study uses institutional theory as a lens through which to understand the factors that enable the effective appropriation of advanced information technology. It posits that the mimetic, coercive, and normative pressures existing in an institutionalized environment influence the participation of top managers or decision makers and the involvement of users toward an effective use of IT in their tasks. Since the introduction of IT, organizational members have been using IT in their daily tasks, creating and recreating rules and resources according to their own methods and needs; that is to say, the adaptation process of IT and its outcomes differ among organizations. Previous studies on the diverse use of IT refer to the appropriation of technology from the social technology view: users appropriate IT not only through the technology itself, but also through how they use it and the social practices they build around its use. In this study, the concepts of institutional pressure, appropriation, participation of decision makers, and involvement of users toward appropriation are explored in the context of the appropriation of a mobile electronic medical record system (M-EMRS), particularly in a hospital setting. Based on conceptual definitions of institutional pressure, participation, and involvement, operational measures were reconstructed, and the concept of appropriation was measured through three sub-constructs: consensus on appropriation, faithful appropriation, and attitude of use. Grounded in the relevant theories of IT appropriation, we developed a research framework in which the effects of institutional pressure, participation, and involvement on the appropriation of IT are analyzed, and we formulated several hypotheses within this framework. We developed second-order institutional pressure and appropriation constructs and, after establishing their validity and reliability, tested the hypotheses with empirical data from 101 users in 3 hospitals that had adopted and used the M-EMRS. We examined the mediating effect of the participation of decision makers and the involvement of users on the appropriation and empirically validated their relationships. The results show that mimetic, coercive, and normative institutional pressures affect the participation of decision makers and the involvement of users in the appropriation of IT, while the participation of decision makers and the involvement of users in turn affect the appropriation of IT. The results also suggest that institutional pressure and the participation of decision makers influence the involvement of users toward the appropriation of IT. Our results emphasize this mediating path: the higher the degree of participation of decision makers and involvement of users, the more effective the appropriation that users achieve. These results provide strong support for institution-based variables as predictors of appropriation. The findings also indicate that organizations should focus on the role of the participation of decision makers and the involvement of users for the purpose of effective appropriation; these are the practical implications of our study.
The theoretical contribution of this study lies in its integrated model of the effect of institutional pressure on the appropriation of IT. The results are consistent with institutional theory and support previous studies on adaptive structuration theory.