• Title/Summary/Keyword: Structural Experiment Information (구조실험정보)

Search Results: 3,591

Pharmacological Comparison of Timosaponin A III on the 5-beta Reductase and Androgen Receptor via In Silico Molecular Docking Approach (In silico 약리학적 분석을 통한 티모사포닌 A III의 5-베타 리덕타아제 단백질 및 안드로겐 수용체 단백질 활성 부위에 대한 결합 친화도 비교 연구)

  • Kim, Dong-Chan
    • Journal of Life Science / v.28 no.3 / pp.307-313 / 2018
  • Alopecia causes psychological stress because of its effect on appearance, and the global market for alopecia treatment products is therefore growing quickly. Timosaponin A III is a well-known active ingredient of Anemarrhenae Rhizoma. In this study, we compared the binding affinity of timosaponin A III with those of finasteride (a 5-beta reductase antagonist) and minoxidil (an androgen receptor antagonist) at the active sites of the target proteins using in silico computational docking. The three-dimensional crystallographic structures of 5-beta reductase (PDB ID: 3G1R) and the androgen receptor (PDB ID: 4K7A) were obtained from the PDB database. In silico docking analysis was performed with PyRx, AutoDock Vina, Discovery Studio version 4.5, and the NX-QuickPharm option, based on scoring functions. Timosaponin A III showed a stronger binding affinity (docking energy) to 5-beta reductase (-12.20 kcal/mol) than finasteride (-11.70 kcal/mol), and to the androgen receptor (-9.00 kcal/mol) than minoxidil (-7.40 kcal/mol). The centroid X, Y, Z grid position of timosaponin A III on 5-beta reductase was similar to (overlapped) that of finasteride, whereas its centroid on the androgen receptor was far from the minoxidil centroid position. These results indicate that timosaponin A III could be a more potent antagonist of 5-beta reductase and the androgen receptor. Therefore, Anemarrhenae Rhizoma extract or biomaterials containing timosaponin A III could substitute for finasteride and minoxidil and be applied to alopecia prevention products and related industrial fields.
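As a rough illustration of the centroid comparison described in this abstract, the following Python sketch computes the Euclidean distance between two docking-pose centroid grid positions and flags them as overlapping within a chosen tolerance. The coordinates, the tolerance, and the helper functions are hypothetical and are not taken from the paper.

```python
import math

def centroid_distance(a, b):
    """Euclidean distance between two (x, y, z) centroid grid positions."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def poses_overlap(a, b, tolerance=2.0):
    """Treat two docking poses as overlapping if their centroids lie within `tolerance` grid units."""
    return centroid_distance(a, b) <= tolerance

# Hypothetical centroid grid positions (not from the paper).
timosaponin_on_5br = (10.1, 22.4, 5.3)
finasteride_on_5br = (10.8, 21.9, 5.0)

print(round(centroid_distance(timosaponin_on_5br, finasteride_on_5br), 2))
print(poses_overlap(timosaponin_on_5br, finasteride_on_5br))
```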

Costume Consumption Culture for Costumeplay (코스튬플레이 의상 소비문화)

  • Jang, Nam-Kyung;Park, Soo-Kyung;Lee, Joo-Young
    • Archives of design research / v.19 no.5 s.67 / pp.203-212 / 2006
  • With the growing interest and participation in costumeplay, in which players mimic characters from cartoons or animation, costumeplay has become a notable cultural phenomenon. Using a qualitative research method, this study identified costumeplayers' costume consumption patterns and explored their meanings from the perspective of consumption culture. In doing so, the study is intended to help in understanding the costumeplayer group as consumers and to provide basic knowledge for analyzing new markets related to fashion design and marketing. The results of analyzing participant observation and in-depth interview data are as follows. First, costumeplayers usually begin costumeplay at friends' invitations or on their own and then continue participating; through costumeplay, participants gain benefits such as fun, escape from daily life, and social interaction. Second, participants acquire costumes through purchase, rental, production, or combinations of everyday wear, with purchase and rental accounting for the largest shares. Third, the consumption culture of costumeplay is characterized by repeated cycles of possession and disposal; costumeplayers are concerned with efficiency when purchasing or renting costumes, and the internet is where information search, comparison, and actual purchasing occur. Based on these results, implications for fashion design and marketing, the limitations of this study, and ideas for further research are suggested.

The research for the yachting development of Korean Marina operation plans (요트 발전을 위한 한국형 마리나 운영방안에 관한 연구)

  • Jeong Jong-Seok;Hugh Ihl
    • Journal of Navigation and Port Research / v.28 no.10 s.96 / pp.899-908 / 2004
  • Rising incomes and the introduction of the five-day work week have given Korean people more opportunities to enjoy leisure time, and many Koreans have become interested in marine sports such as yachting and in marine leisure equipment. With the popularization and development of such equipment, the scope of marine activities has been expanding in Korea, as in the advanced maritime countries. However, current conditions for these sports in Korea remain underdeveloped. In order to develop Korea's underutilized marina resources, the marina models of advanced nations need to be adapted to the specific needs and circumstances of Korea. Accordingly, we carried out a comparative analysis of how Australia, New Zealand, Singapore, Japan, and Malaysia operate their marinas, reaching the following conclusions. First, in marina operations, in order to protect private property rights and preserve the environment, membership and non-membership as well as profit and non-profit schemes must be operated separately, without regulating the dress code for entering or leaving the club house. Second, to generate greater added value, new sporting events should be hosted each year; volunteers should be used actively, greater interest in yacht tourism should be fostered, CIQ procedures for foreign yachts should be simplified, and language services should be provided. Third, a permanent yacht school should be established, with classes taught by qualified instructors; beginner, intermediate, and advanced classes should be managed separately, with special emphasis on a dinghy yacht program for children. Fourth, arrival and departure at the moorings must be regulated autonomously, and there must be systematic measures for the marina to compensate, at least in part, for loss of and damage to equipment and to provide security and surveillance once usage fees have been paid. Fifth, marine safety personnel should be organized from civilian organizations in accordance with Korea's current circumstances so that they can be used actively in benchmarking, rescue operations, and maritime searches in times of disaster at sea.

ANC Caching Technique for Replacement of Execution Code on Active Network Environment (액티브 네트워크 환경에서 실행 코드 교체를 위한 ANC 캐싱 기법)

  • Jang Chang-bok;Lee Moo-Hun;Cho Sung-Hoon;Choi Eui-In
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.9B / pp.610-618 / 2005
  • As the Internet and computing capabilities have developed, many users obtain information through the network, so user demands on the network have rapidly increased and diversified. Because the current network takes considerable time to accommodate such demands, approaches such as active networks have been studied to address this. An active node in an active network can store and execute code, in addition to the packet-forwarding capability of conventional networks. When a packet arrives at an active node, the execution code required to process it must be available; if it is not present on the node, it has to be fetched from a previous active node or from a code server. Fetching the execution code from a previous active node or a code server, however, causes delays due to code transport and increases network traffic and execution time. If execution code is instead cached and reused on the active node, execution time can be reduced and the number of code requests decreased. This paper therefore proposes the ANC caching technique, which reduces the number of execution-code requests and the code execution time by efficiently storing execution code on active nodes. The ANC caching technique can decrease network traffic and code execution time by reducing requests for execution code from previous active nodes.
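The abstract does not spell out the ANC replacement algorithm, so the following Python sketch only illustrates the general idea of caching execution code on an active node, using a simple LRU replacement policy; the class and the fetch callback are hypothetical stand-ins, not the authors' ANC method.

```python
from collections import OrderedDict

class ExecutionCodeCache:
    """A minimal LRU cache for execution code on an active node (illustrative only)."""

    def __init__(self, capacity, fetch_from_upstream):
        self.capacity = capacity
        self.fetch_from_upstream = fetch_from_upstream  # callable: code_id -> code blob
        self._store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, code_id):
        if code_id in self._store:
            self.hits += 1
            self._store.move_to_end(code_id)        # mark as recently used
            return self._store[code_id]
        self.misses += 1
        code = self.fetch_from_upstream(code_id)    # previous active node or code server
        self._store[code_id] = code
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)         # evict the least recently used code
        return code

# Example: pretend the upstream code server returns a bytecode blob for each code id.
cache = ExecutionCodeCache(capacity=2, fetch_from_upstream=lambda cid: f"<bytecode for {cid}>")
cache.get("filter-v1"); cache.get("filter-v1"); cache.get("route-v2"); cache.get("compress-v3")
print(cache.hits, cache.misses)  # 1 hit, 3 misses
```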

Analysis of Sinjido Marine Ecosystem in 1994 using a Trophic Flow Model (영양흐름모형을 이용한 1994년 신지도 해양생태계 해석)

  • Kang, Yun-Ho
    • The Sea: Journal of the Korean Society of Oceanography / v.16 no.4 / pp.180-195 / 2011
  • A balanced trophic model of the Sinjido marine ecosystem was constructed using the ECOPATH model and data obtained in the region in 1994. The model integrates available information on biomass and food spectra; analyzes ecosystem properties, the dynamics of the main species populations, and the key trophic pathways of the system; and compares these results with those of other marine environments. The model comprises 17 groups: benthic algae, phytoplankton, zooplankton, gastropods, polychaetes, bivalves, echinoderms, crustaceans, cephalopods, goby, flatfish, rays and skates, croaker, blenny, conger, flatheads, and detritus. The model shows trophic levels of 1.0~4.0, from primary producers and detritus up to the top predator, the flathead group. The model estimates a total biomass (B) of 0.1 $kgWW/m^2$, total net primary production (PP) of 1.6 $kgWW/m^2/yr$, total system throughput (TST) of 3.4 $kgWW/m^2/yr$, and TST components of consumption 7%, exports 43%, respiratory flows 4%, and flows into detritus 46%. The model also calculates PP/TR of 0.012, PP/B of 0.015, an omnivory index (OI) of 0.12, Finn's cycling index (FCI) of 0.7%, Finn's mean path length (MPL) of 2.11, ascendancy (A) of 4.1 $kgWW/m^2/yr$ bits, development capacity (C) of 8.2 $kgWW/m^2/yr$ bits, and A/C of 51%. In particular, this study focuses on the analysis of mixed trophic impacts and describes the indirect impact of one group upon another through a mediating group, based on four types. The large proportion of exports in TST indicates a higher exchange rate in the study region than in semi-enclosed basins, which appears to result from the strong tidal currents along the channels between the islands of Sinjido, Choyakdo, and Saengildo. Among the ecosystem theory and cycling indices, B, TST, PP/TR, FCI, MPL, and OI are low, indicating that the system is not fully mature according to Odum's theory. Additionally, the high A/C reveals that the maximum capacity of the region is small. In summary, the study region shows high exports of trophic flow and low capacity to develop, and is currently at a developmental stage. This is a pilot study applying trophic flow and food web analysis to Sinjido, and it may be helpful for future comparison and management of the ecosystem.
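As a quick consistency check on the flow figures quoted above, the short Python sketch below recomputes the TST component flows and the ascendancy-to-capacity ratio from the values reported in the abstract; the variable names are mine and the numbers are simply those quoted above, rounded as reported.

```python
# Values quoted in the abstract (flows in kgWW/m^2/yr; A and C in kgWW/m^2/yr bits).
tst = 3.4
components = {"consumption": 0.07, "exports": 0.43, "respiration": 0.04, "flows_to_detritus": 0.46}
ascendancy = 4.1
capacity = 8.2

# The TST component shares should sum to ~100%.
print(round(sum(components.values()), 2))    # 1.0

# Absolute flow carried by each component.
for name, share in components.items():
    print(name, round(tst * share, 2))       # e.g. exports ~ 1.46 kgWW/m^2/yr

# Relative ascendancy A/C, reported as 51% in the abstract (~0.50 from the rounded values).
print(round(ascendancy / capacity, 2))
```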

Development of Neural Network Based Cycle Length Design Model Minimizing Delay for Traffic Responsive Control (실시간 신호제어를 위한 신경망 적용 지체최소화 주기길이 설계모형 개발)

  • Lee, Jung-Youn;Kim, Jin-Tae;Chang, Myung-Soon
    • Journal of Korean Society of Transportation / v.22 no.3 s.74 / pp.145-157 / 2004
  • The cycle length design model of the Korean traffic-responsive signal control system is devised to vary the cycle length in response to changes in traffic demand in real time, using parameters specified by a system operator together with field information such as the degrees of saturation of through phases. Since no explicit guideline is provided to the system operator, the system tends to be ambiguous in terms of system optimization. In addition, the cycle lengths produced by the existing model have not yet been verified to be comparable to those that minimize delay. This paper presents studies conducted (1) to find shortcomings in the existing model by comparing the cycle lengths it produces against those that minimize delay and (2) to propose a new direction for designing a delay-minimizing cycle length that excludes such operator-dependent parameters. The study found that the cycle lengths from the existing model fail to minimize delay and lead to unsatisfactory intersection operating conditions when traffic volume is low, due to the behavior of the changed target operational volume-to-capacity ratio embedded in the model. Sixty-four neural-network-based cycle length design models were developed based on simulation data used as a surrogate for field data. The CORSIM optimal cycle lengths minimizing delay were found with the COST software developed for this study; COST searches for the CORSIM optimal cycle length minimizing delay with a heuristic search method, a hybrid genetic algorithm. Among the 64 models, the one producing cycle lengths closest to the optimal was selected through statistical tests. The verification test showed that the best model designs cycle lengths in patterns similar to those that minimize delay, and the cycle lengths from the proposed model are comparable to those from TRANSYT-7F.
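The abstract describes COST as searching for a delay-minimizing cycle length with a hybrid genetic algorithm driving CORSIM. The Python sketch below shows the general shape of such a search under stated assumptions: since CORSIM is not available here, a toy analytical delay curve stands in for the simulator, and all parameter values are illustrative rather than taken from the paper.

```python
import math
import random

def toy_delay(cycle, lost_time=16.0, a=2000.0, b=0.45):
    """Toy stand-in for a simulated average delay curve (s/veh): the first term grows as the
    cycle approaches the total lost time (capacity starvation), the second captures longer
    red waits as the cycle grows. A real application would run a CORSIM simulation here."""
    return a / (cycle - lost_time) + b * cycle

def genetic_search(objective, lo=40.0, hi=180.0, pop_size=20, generations=60, seed=0):
    """A plain genetic algorithm over a single decision variable, the cycle length (s)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                            # lower delay is better
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            child = (p1 + p2) / 2 + rng.gauss(0, 3)        # arithmetic crossover + mutation
            children.append(min(max(child, lo), hi))       # keep within bounds
        pop = survivors + children
    return min(pop, key=objective)

best = genetic_search(toy_delay)
print(f"best cycle ~ {best:.1f} s, delay ~ {toy_delay(best):.1f} s/veh")
# Analytical minimum of the toy curve, for comparison: 16 + sqrt(2000/0.45) ~ 82.7 s
print(round(16 + math.sqrt(2000 / 0.45), 1))
```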

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • Due to the recent expansion of Web 2.0-based services and the widespread adoption of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand their human relationships. In a social network service, the relations between users are represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS data are actively utilized in enterprise marketing, the analysis of social phenomena, and so on. Social network analysis (SNA) is a systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy and intensity of connection, and classify groups. Ever since social networking services (SNS) began drawing attention from millions of users, numerous studies have been conducted to analyze their user relationships and messages. Typical SNA methods include degree centrality, betweenness centrality, and closeness centrality. Degree centrality analysis does not consider the shortest path between nodes, but the shortest path is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not an issue because the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to the ever-increasing volume of SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; if the number of nodes is 10,000, the number of links can reach 49,995,000, making the analysis very expensive. Therefore, we propose a heuristic-based method for finding the shortest paths among users in the SNS user graph. Using this shortest-path finding method, we show how efficient our proposed approach can be by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and rapidly find shortest paths in a very large online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Since a large share of links is concentrated on only a few nodes in online social networks, most nodes have relatively few connections, and a node with many connections functions as a hub. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node $v_n$ as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between pairs of distinct nodes.
When such a heuristic evaluation function is used, the worst case occurs when the target node is situated at the bottom of a skewed tree; the preprocessing step is conducted to handle such target nodes. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled data on 160,000 people online, constructed a social network, and compared the proposed method with previous methods, best-first search and breadth-first search, in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, 7.4 times faster than the breadth-first-search-based method's 1,781 seconds. Moreover, in social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The method proposed in this paper shows that a large social network can be analyzed with better time performance. As a result, our method should improve the efficiency of social network analysis, making it particularly useful for studying social trends and phenomena.
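The abstract describes a best-first search that prefers high-degree (hub) nodes when looking for a target user. The Python sketch below illustrates that idea on a toy adjacency-list graph; it is a greedy best-first path finder using node degree as the heuristic, not the authors' exact algorithm, and the graph data and function name are hypothetical.

```python
import heapq

def degree_best_first_path(graph, start, goal):
    """Greedy best-first search: expand the neighbor with the highest degree first,
    on the assumption that hub nodes reach the rest of the network quickly.
    Note: greedy search does not guarantee the shortest path; the paper adds a
    preprocessing step to handle unfavorable cases such as skewed trees."""
    frontier = [(-len(graph[start]), start, [start])]   # priority = -degree, so hubs pop first
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(frontier, (-len(graph[neighbor]), neighbor, path + [neighbor]))
    return None  # no path found

# Toy undirected graph as an adjacency list (hypothetical data).
graph = {
    "alice": ["hub", "bob"],
    "bob": ["alice"],
    "hub": ["alice", "carol", "dave", "erin"],
    "carol": ["hub"],
    "dave": ["hub", "erin"],
    "erin": ["hub", "dave"],
}
print(degree_best_first_path(graph, "bob", "erin"))  # ['bob', 'alice', 'hub', 'erin']
```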

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behaviors. However, HAR tasks that recognize interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect sufficient data. In contrast, physical sensors, including the accelerometer, magnetic field sensor, and gyroscope, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data, namely the accelerometer, magnetic field sensor, and gyroscope, is proposed. The accompanying status is defined as a redefined subset of user interaction behavior: whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks is proposed for classifying accompaniment and conversation. First, a data preprocessing method is introduced that consists of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of data collected from different sensors, normalization is performed for each x, y, and z axis of the sensor data, and sequence data are generated with a sliding window. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM network consists of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model is trained with the adaptive moment estimation (ADAM) optimization algorithm and a mini-batch size of 128. Dropout is applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and the accuracy of the model were higher than those of the majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will study transfer learning methods that allow a model trained on the training data to be transferred to evaluation data that follows a different distribution. A model capable of robust recognition performance against changes in the data not considered at the model training stage is expected to be obtained.
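A minimal tf.keras sketch of the CNN-LSTM architecture described in this abstract. The three convolutional layers without pooling, the two 128-cell LSTM layers, the softmax classifier, the dropout on the LSTM inputs, and the ADAM optimizer follow the abstract; the window length, channel count, filter counts, kernel sizes, dropout rate, and the partial weight initialization shown here are illustrative assumptions, and the per-epoch learning-rate decay is only noted in a comment.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, initializers, optimizers

# Assumed shapes: sliding windows of length 128 over 9 channels
# (3 axes each for accelerometer, magnetic field, gyroscope) - assumptions, not from the paper.
WINDOW_LEN, N_CHANNELS, N_CLASSES = 128, 9, 2

init = initializers.RandomNormal(mean=0.0, stddev=0.1)   # normal(0, 0.1) initialization

model = models.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    # Three 1-D convolutional layers, no pooling, to keep the temporal resolution.
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    # Dropout on the LSTM inputs, as described in the abstract (rate is assumed).
    layers.Dropout(0.5),
    # Two LSTM layers with 128 cells each; the first returns the full sequence.
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    # Softmax classifier over the two target classes.
    layers.Dense(N_CLASSES, activation="softmax", kernel_initializer=init),
])

model.compile(
    optimizer=optimizers.Adam(learning_rate=1e-3),  # the paper also decays the rate by 0.99 per epoch
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```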

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data are multidimensional time series data, which makes it difficult to consider the characteristics of multidimensional data and of time series data at the same time. When dealing with multidimensional data, the correlation between variables should be considered, but existing approaches such as probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. In addition, time series data are commonly preprocessed with sliding window techniques and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field in which statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network techniques to it. Statistically based methods are difficult to apply when the data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression equation based on parametric statistics and detect anomalies by comparing predicted and actual values; their performance drops when the model is not solid or when the data contain noise or outliers, and they are constrained by the need for training data that is largely free of noise and outliers. An autoencoder based on artificial neural networks is trained to reproduce its input as closely as possible at its output. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions, and it can be trained in an unsupervised manner without labeled training data. However, it is limited in identifying local outliers in multidimensional data for anomaly detection, and the dimensionality of the data increases greatly because of the characteristics of time series data. In this study, we propose a conditional multimodal autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a multimodal autoencoder (MAE) to address the limitations of local outlier identification in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and images; the modalities share the autoencoder's bottleneck, where their correlations are learned. In addition, a conditional autoencoder (CAE) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs usually use categorical variables, but in this study time was used as the condition in order to learn periodicity. The proposed CMAE model was verified by comparison with a unimodal autoencoder (UAE) and a multimodal autoencoder (MAE). The reconstruction performance for 41 variables was examined for the proposed model and the comparison models. Reconstruction performance differs by variable; the Memory, Disk, and Network modalities are reconstructed well, with small loss values, in all three autoencoder models.
The Process modality did not show a significant difference across the three models, and the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance followed the order CMAE, MAE, and UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The accuracy of the model improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. From a practical standpoint, the proposed model has advantages beyond the performance improvement: techniques such as time series decomposition and sliding windows require additional procedures to manage, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
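A minimal sketch of the conditional multimodal autoencoder idea described above, built with tf.keras: each system modality has its own small encoder and decoder, they share a single bottleneck, and a time condition is appended so that daily periodicity can be learned without adding extra input dimensions. The modality split, layer widths, and condition encoding are assumptions for illustration, not the configuration reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Assumed per-modality feature counts (41 variables in total) and a time condition
# (e.g. hour-of-day encoded as sin/cos); these sizes are illustrative assumptions.
MODALS = {"cpu": 8, "memory": 8, "disk": 12, "network": 13}
COND_DIM = 2

modal_inputs, encoded_parts = {}, []
for name, dim in MODALS.items():
    x_in = layers.Input(shape=(dim,), name=f"{name}_in")
    modal_inputs[name] = x_in
    encoded_parts.append(layers.Dense(8, activation="relu", name=f"{name}_enc")(x_in))

cond_in = layers.Input(shape=(COND_DIM,), name="time_condition")

# Shared bottleneck: all modalities and the time condition meet here,
# which is where cross-modality correlation is learned.
bottleneck = layers.Dense(8, activation="relu", name="bottleneck")(
    layers.Concatenate()(encoded_parts + [cond_in])
)

# Per-modality decoders reconstruct each modality from the shared code plus the condition.
outputs = []
for name, dim in MODALS.items():
    h = layers.Dense(8, activation="relu", name=f"{name}_dec")(
        layers.Concatenate()([bottleneck, cond_in])
    )
    outputs.append(layers.Dense(dim, name=f"{name}_out")(h))

cmae = Model(inputs=list(modal_inputs.values()) + [cond_in], outputs=outputs)
cmae.compile(optimizer="adam", loss="mse")   # anomaly score = reconstruction error at inference
cmae.summary()
```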

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.1-25 / 2020
  • Sentiment analysis (SA) is a natural language processing (NLP) task that analyzes the sentiments that consumers or the public feel about an arbitrary object from written texts. Aspect-based sentiment analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Because it has more practical value for business, ABSA is drawing attention from both academia and industry. Given a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the restaurant as positive, while ABSA identifies the restaurant's 'price' aspect as negative and its 'food' aspect as positive. ABSA thus enables more specific and effective marketing strategies. To perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, aspect term sentiment classification (ATSC), and aspect category sentiment classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence included 'pasta', 'steak', or 'grilled chicken special', these could all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly inferred from an emotional word such as 'expensive', is called an implicit aspect. So far, 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and mostly use the word 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms and thus deals only with explicit aspects, while ACSC treats both explicit and implicit aspects. This study seeks answers to the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (question answering) and NLI (natural language inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair input? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on four English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived.
In addition, it was found that reflecting the output vector of the aspect category token is more effective than using only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence containing the aspect category is irrelevant to performance in the QA type. Although there may be some differences depending on the characteristics of the dataset, when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The methodology for designing ACSC models used in this study could be applied similarly to other tasks such as ATSC.
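As a rough sketch of the kind of comparison the study describes, the following Python snippet (using the Hugging Face transformers library with bert-base-uncased) builds a QA-style sentence-pair input and contrasts a [CLS]-only classification vector with one that also pools the output vectors of the auxiliary (aspect) sentence tokens. The auxiliary question wording, the pooling choice, and the untrained classifier head are illustrative assumptions, not the authors' exact models.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical example; the question is a QA-style auxiliary sentence for the aspect "price".
review = "The restaurant is expensive but the food is really fantastic."
aux = "what do you think of the price ?"

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

enc = tokenizer(review, aux, return_tensors="pt")      # sentence pair: [CLS] review [SEP] aux [SEP]
with torch.no_grad():
    out = model(**enc).last_hidden_state               # (1, seq_len, 768)

cls_vec = out[:, 0, :]                                 # option 1: [CLS] vector only

# Option 2: also pool the output vectors of the auxiliary-sentence (aspect) tokens,
# identified here by token_type_ids = 1.
aux_mask = enc["token_type_ids"].bool() & enc["attention_mask"].bool()
aspect_vec = out[aux_mask.unsqueeze(-1).expand_as(out)].view(1, -1, 768).mean(dim=1)

classifier = torch.nn.Linear(768 * 2, 3)               # positive / negative / neutral (untrained sketch)
logits = classifier(torch.cat([cls_vec, aspect_vec], dim=-1))
print(logits.shape)                                    # torch.Size([1, 3])
```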