• Title/Summary/Keyword: Hand Detection


Development of Prevotella intermedia ATCC 49046 Strain-Specific PCR Primer Based on a Pig6 DNA Probe (Pig6 DNA probe를 기반으로 하는 Prevotella intermedia ATCC 49046 균주-특이 PCR primer 개발)

  • Jeong Seung-U;Yoo So-Young;Kang Sook-Jin;Kim Mi-Kwang;Jang Hyun-Seon;Lee Kwang-Yong;Kim Byung-Ok;Kook Joong-Ki
    • Korean Journal of Microbiology
    • /
    • v.42 no.2
    • /
    • pp.89-94
    • /
    • 2006
  • The purpose of this study was to develop strain-specific PCR primers for the identification of Prevotella intermedia ATCC 49046, a strain frequently used in studies of the pathogenesis of periodontitis. HindIII-digested genomic DNA of P. intermedia ATCC 49046 was cloned by a random cloning method. The specificity of the cloned DNA fragments was determined by Southern blot analysis, and the nucleotide sequences of the cloned DNA probes were determined by the chain-termination method. PCR primers were designed based on the nucleotide sequence of the cloned DNA fragment. The data showed that the Pig6 DNA probe hybridized with genomic DNA from P. intermedia strains (ATCC 25611ᵀ and 49046) isolated from Westerners, but not with strains isolated from Koreans. The Pig6 DNA probe consisted of 813 bp. The Pig6-F3 and Pig6-R3 primers, designed based on the nucleotide sequence of the Pig6 DNA probe, were specific to both P. intermedia ATCC 25611ᵀ and P. intermedia ATCC 49046. On the other hand, the Pig6-60F and Pig6-770R primers were specific only to P. intermedia ATCC 49046. The two PCR primer sets could detect as little as 4 pg of P. intermedia chromosomal DNA. These results indicate that the Pig6-60F and Pig6-770R primers are useful for the identification of P. intermedia ATCC 49046, especially with regard to maintenance of the strain.
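The primer-pair logic described above (a forward primer binding one strand, a reverse primer binding the other, yielding a product of predictable length) can be sketched in a few lines. The sequences below are invented placeholders for illustration, not the actual Pig6 probe or the Pig6-60F/770R primers.

```python
# Hypothetical sketch of primer-pair placement on a template sequence.
# All sequences here are invented, not the study's actual Pig6 data.

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def amplicon_length(template, fwd, rev):
    """Predicted PCR product length, or None if the pair does not
    bind the template in the expected opposing orientation."""
    start = template.find(fwd)               # forward primer on plus strand
    rev_site = template.find(revcomp(rev))   # reverse primer on minus strand
    if start == -1 or rev_site == -1 or rev_site < start:
        return None
    return rev_site + len(rev) - start

# Invented 60-bp "template" with 10-mer primers taken from its ends.
template = "ATGCGTACGTTAGCCGGATCCAATTGGCATGCCGTAACGGTTCAGATCCGGTACGATTAA"
fwd = template[:10]
rev = revcomp(template[-10:])
print(amplicon_length(template, fwd, rev))  # 60
```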

The Usefulness of Diagnostic Scan Using Technetium-99m Pertechnetate Scintigraphy prior to the First Ablative Radioiodine Treatment in Patients with Well Differentiated Thyroid Carcinoma: A Comparative Study with Iodine-131 (분화된 갑상선암 수술 후 초치료에 있어서 Tc-99m Pertechnetate을 이용한 진단 스캔의 유용성: Iodine-131 스캔과의 비교)

  • Yoon, Seok-Nam;Park, Chan-H.;Hwang, Kyung-Hoon;Kim, Su-Zy;Soh, Eui-Young;Kim, Kyung-Rae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.34 no.4
    • /
    • pp.285-293
    • /
    • 2000
  • Purpose: A prospective comparison was made between imaging with Tc-99m pertechnetate (Tc-99m) and Iodine-131 (I-131) for the detection of residual and metastatic tissue after total thyroidectomy in patients with well-differentiated thyroid carcinoma. Materials and Methods: Patients first underwent imaging with Tc-99m, followed by I-131 within 3 days. The study included 21 patients who had ablation with a high dose of I-131 ranging from 100 mCi to 150 mCi. Planar and pinhole images were acquired for both Tc-99m and I-131. Diagnostic images of Tc-99m and I-131 were compared with post-therapy images, and the degree of uptake on Tc-99m and I-131 images was scored on a four-point scale and compared. Results: For Tc-99m, 16 of 19 studies (84%) were positive on simple planar images, but 19 of 20 studies (95%) were positive on pinhole images. Conventional I-131 diagnostic imaging, on the other hand, was positive in all studies (100%) on both planar and pinhole images. There was a significant difference in degree of uptake between Tc-99m and I-131 planar images (p<0.05). Only one case of Tc-99m scintigraphy was negative on both planar and pinhole studies (false negative). There were no distant metastases on the therapeutic I-131 images. Conclusion: In certain clinical situations, a Tc-99m scan using a pinhole collimator is an alternative to the I-131 scan for detecting thyroid tissue or lymph node metastasis prior to the first ablative treatment after thyroidectomy for well-differentiated thyroid carcinoma.


Detection of Pine Wilt Disease Trees Using Time-Series High Resolution Aerial Photographs - A Case Study of Kangwon National University Research Forest - (시계열 고해상도 항공영상을 이용한 소나무재선충병 감염목 탐지 - 강원대학교 학술림 일원을 대상으로 -)

  • PARK, Jeong-Mook;CHOI, In-Gyu;LEE, Jung-Soo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.2
    • /
    • pp.36-49
    • /
    • 2019
  • The objectives of this study were to extract the "Field Survey Based Infection Tree of Pine Wilt Disease (FSB_ITPWD)" and the "Object Classification Based Infection Tree of Pine Wilt Disease (OCB_ITPWD)" for the Research Forest at Kangwon National University, and to evaluate the spatial distribution characteristics and occurrence intensity of trees infested by the pine wood nematode. The optimum weights for the object-based classification (OCB) were 11 for Scale, 0.1 for Shape, 0.9 for Color, 0.9 for Compactness, and 0.1 for Smoothness. The overall classification accuracy was approximately 94%, and the Kappa coefficient was 0.85, which is very high. The OCB_ITPWD area is approximately 2.4 ha, about 0.05% of the total area. When the stand structure, distribution characteristics, and topographic and geographic factors of OCB_ITPWD were compared with those of FSB_ITPWD, age class IV was the most abundant in both FSB_ITPWD (approximately 55%) and OCB_ITPWD (approximately 44%), the latter 11 percentage points lower than the former. The diameter at breast height (DBH, measured 1.2 m above the ground) results showed that small (below 14 cm) and medium (below 28 cm) DBH trees were the majority (approximately 93%) in OCB_ITPWD, while medium and large (more than 30 cm) DBH trees were the majority (approximately 87%) in FSB_ITPWD, indicating different DBH distributions. The elevation distribution of OCB_ITPWD was concentrated between 401 and 500 m (approximately 30%), while that of FSB_ITPWD was concentrated between 301 and 400 m (approximately 45%). Accessibility from the forest road was highest at "100 m or less" for both OCB_ITPWD (24%) and FSB_ITPWD (31%), indicating that more trees were infected in stands closer to a forest road. OCB_ITPWD hotspots were compartments 31 and 32, and infected trees were concentrated in areas with higher age and DBH classes.
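The accuracy figures quoted above (overall accuracy and the Kappa coefficient) come from a confusion matrix of classified versus field-surveyed trees. A minimal sketch of both metrics for a binary case follows; the counts used here are invented for illustration, not the study's actual data.

```python
# Overall accuracy and Cohen's kappa from a binary confusion matrix.
# tp/fp/fn/tn counts below are invented placeholders.

def overall_accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def cohens_kappa(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Invented counts: 45 infected trees detected, 5 missed, 3 false alarms.
tp, fp, fn, tn = 45, 3, 5, 97
print(round(overall_accuracy(tp, fp, fn, tn), 2))  # 0.95
print(round(cohens_kappa(tp, fp, fn, tn), 2))      # 0.88
```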

LSTM Based Prediction of Ocean Mixed Layer Temperature Using Meteorological Data (기상 데이터를 활용한 LSTM 기반의 해양 혼합층 수온 예측)

  • Ko, Kwan-Seob;Kim, Young-Won;Byeon, Seong-Hyeon;Lee, Soo-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.603-614
    • /
    • 2021
  • Recently, the surface temperature of the seas around Korea has been rising continuously. This rise causes changes in fishery resources and affects leisure activities such as fishing. In particular, high temperatures lead to red tides, causing severe damage to ocean industries such as aquaculture. Changes in sea temperature are also closely related to military operations to detect submarines, because the degree of diffraction, refraction, or reflection of the sound waves used to detect submarines varies with the ocean mixed layer. Research on predicting changes in sea water temperature is currently active. However, existing work focuses on predicting only the sea surface temperature, which makes it difficult to identify fishery resources by depth or to apply the predictions to military operations such as submarine detection. Therefore, in this study, we predicted the temperature of the ocean mixed layer at a depth of 38 m by using temperature data for each water depth in the upper mixed layer together with meteorological data related to the surface temperature, such as air temperature, atmospheric pressure, and sunlight. The data used are meteorological data and sea temperature data by water depth observed from 2016 to 2020 at the IEODO Ocean Research Station. To increase the accuracy and efficiency of prediction, we used LSTM (Long Short-Term Memory), a deep learning technique known to be suitable for time series data. In the daily prediction experiment, the RMSE (Root Mean Square Error) of the model using temperature, atmospheric pressure, and sunlight data together was 0.473, while the RMSE of the model using only the surface temperature was 0.631. These results confirm that the model using meteorological data performs better in predicting the temperature of the upper ocean mixed layer.
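The LSTM update at the core of the model described above can be sketched in plain NumPy. The input dimension (four meteorological features) and the multi-day window mirror the abstract; the weights and inputs are random placeholders, not trained values.

```python
# One LSTM cell stepped over a 30-day window of 4 input features.
# Weights are random placeholders; this illustrates the update, not the
# study's trained model.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. x: input features (e.g. SST, air temperature,
    pressure, sunlight); h, c: hidden and cell state."""
    z = W @ x + U @ h + b                  # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                      # new cell state
    h = o * np.tanh(c)                     # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                         # 4 meteorological inputs, 8 hidden units
W = rng.normal(size=(4 * n_hid, n_in)) * 0.1
U = rng.normal(size=(4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for x in rng.normal(size=(30, n_in)):      # a 30-day input window
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                             # (8,)
```

A regression head on the final hidden state would then produce the 38 m temperature estimate, and RMSE against observations would compare variants as in the abstract.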

A Study on a Convergence Weapon System of Self-propelled Mobile Mines and Supercavitating Rocket Torpedoes (자항 기뢰와 초공동 어뢰의 융복합 무기체계 연구)

  • Lee, Eunsu;Shin, Jin
    • Maritime Security
    • /
    • v.7 no.1
    • /
    • pp.31-60
    • /
    • 2023
  • This study proposes a new convergence weapon system that combines the covert placement and detection abilities of self-propelled mobile mines with the rapid tracking and attack abilities of supercavitating rocket torpedoes. The system is designed to counter North Korea's new underwater weapon, 'Haeil'. The concept behind this convergence weapon system is to maximize the strengths and minimize the weaknesses of each weapon type. Self-propelled mobile mines, typically placed discreetly on the seabed or in the water, are designed to explode when a vessel or submarine passes near them. Like traditional sea mines, they are generally used to defend or control specific areas, and they can effectively limit enemy movement and channel it in a desired direction. The advantage self-propelled mines have over traditional sea mines is their ability to move independently, which ensures the survivability of the platform responsible for placing them. This allows the mines to be placed discreetly even deeper into enemy waters, significantly reducing the time and cost of mine placement while keeping the deploying platforms safe. However, to cause substantial damage, a mine needs to detonate when the target is very close, typically within a few yards, which makes the timing of the explosion crucial. Supercavitating rocket torpedoes, on the other hand, travel at groundbreaking speeds, many times faster than conventional torpedoes. This rapid movement leaves the target little room to evade, a significant advantage. However, it comes with notable drawbacks: short range, high noise levels, and guidance issues. The high noise levels and short range are serious disadvantages that can expose the platform that launched the torpedo. This research proposes a convergence weapon system that leverages the strengths of both weapons while compensating for their weaknesses.
This strategy can overcome the limitations of traditional underwater kill chains, offering swift and precise responses. By adapting the weapon acquisition criteria from the Defense Force Development Service Order, the effectiveness of the proposed system was independently analyzed and demonstrated in terms of underwater defense sustainability, survivability, and cost-efficiency. The utility of the system was further demonstrated through simulated scenarios, revealing its potential to play a critical role in future underwater kill chains. However, realizing this system presents significant technical challenges and requires further research.


Application of Westgard Multi-Rules for Improving Nuclear Medicine Blood Test Quality Control (핵의학 검체검사 정도관리의 개선을 위한 Westgard Multi-Rules의 적용)

  • Jung, Heung-Soo;Bae, Jin-Soo;Shin, Yong-Hwan;Kim, Ji-Young;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.115-118
    • /
    • 2012
  • Purpose: The Levey-Jennings chart flags measurement values that deviate from the tolerance range (mean ±2SD or ±3SD). The upgraded Westgard Multi-Rules, on the other hand, are actively recommended as a more efficient, specialized form of internal quality control for hospital certification. To apply Westgard Multi-Rules in quality control, a credible quality control substance and target value are required. However, as physical examinations commonly use the quality control substances provided within the test kit, calculating the target value is difficult because of frequent changes in concentration value and the insufficient credibility of the control substance. This study attempts to improve the professionalism and credibility of quality control by applying Westgard Multi-Rules and calculating a credible target value using a commercialized quality control substance. Materials and Methods: This study used Immunoassay Plus Control Levels 1, 2, and 3 of Company B as the quality control substance for Total T3, the thyroid test implemented at the relevant hospital. The target value was established as the mean of 295 cases collected over 1 month, excluding values that deviated beyond ±2SD. The hospital quality control calculation program was used to enter the target value. The 12s, 22s, 13s, 2 of 32s, R4s, 41s, 10x̄, and 7T Westgard Multi-Rules were applied to the Total T3 test, which was conducted 194 times over 20 days in August. Based on the applied rules, the data were classified into random error and systematic error for analysis. Results: Quality control substances 1, 2, and 3 were assigned target values of 84.2 ng/dL, 156.7 ng/dL, and 242.4 ng/dL for Total T3, with standard deviations of 11.22 ng/dL, 14.52 ng/dL, and 14.52 ng/dL, respectively.
According to the error-type analysis after applying Westgard Multi-Rules to the established target values, for random error 12s was flagged 48 times, 13s 13 times, and R4s 6 times; for systematic error, 22s was flagged 10 times, 41s 11 times, 2 of 32s 17 times, and 10x̄ 10 times, while 7T was not triggered. For uncontrollable random error types, the entire experimental process was rechecked and greater emphasis was placed on re-testing. For controllable systematic error types, this study searched for the cause of error, recorded it in the action form, and reported the information to the internal quality control committee if necessary. Conclusions: This study applied Westgard Multi-Rules using a commercialized control substance and established target values. As a result, precise analysis of random and systematic error was achieved through the 12s, 22s, 13s, 2 of 32s, R4s, 41s, 10x̄, and 7T rules, and sound quality control was achieved through analysis of all data within the range of ±3SD. In this regard, quality control based on the systematic application of Westgard Multi-Rules is more effective than the Levey-Jennings chart and can maximize error detection.
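A few of the rules applied above (1-2s, 1-3s, 2-2s, R-4s) can be expressed compactly in z-score units relative to the established target value and SD. The rule thresholds follow the standard Westgard definitions; the flag logic and data points below are a simplified illustration, not the hospital's actual program.

```python
# Simplified Westgard multi-rule checks on the latest control values,
# in z-score units relative to a target mean and SD.

def westgard_flags(values, mean, sd):
    z = [(v - mean) / sd for v in values]
    flags = []
    if abs(z[-1]) > 2:
        flags.append("12s")                          # warning rule
    if abs(z[-1]) > 3:
        flags.append("13s")                          # random error
    if len(z) >= 2 and ((z[-1] > 2 and z[-2] > 2) or
                        (z[-1] < -2 and z[-2] < -2)):
        flags.append("22s")                          # systematic error (same side)
    if len(z) >= 2 and abs(z[-1] - z[-2]) > 4:
        flags.append("R4s")                          # random error (range)
    return flags

# Target value and SD for control level 1, from the abstract.
mean, sd = 84.2, 11.22
print(westgard_flags([85.0, 110.0], mean, sd))       # ['12s'] -> warning only
```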


Detoxification of PSP and relationship between PSP toxicity and Protogonyaulax sp. (마비성패류독의 제독방법 및 패류독성과 원인플랑크톤과의 관계에 관한 연구)

  • CHANG Dong-Suck;SHIN Il-Shik;KIM Ji-Hoe;PYUN Jae-hueung;CHOE Wi-Kung
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.22 no.4
    • /
    • pp.177-188
    • /
    • 1989
  • The purpose of this study was to investigate the detoxifying effect of heat treatment on PSP-infested sea mussel, Mytilus edulis, and the correlation between PSP toxicity and the environmental conditions of the shellfish culture areas, such as temperature, pH, salinity, density of Protogonyaulax sp., and concentration of inorganic nutrients (NH4-N, NO3-N, NO2-N and PO4-P). The experiment was carried out at Sujŏng in Masan, Yangdo in Jindong, Hachŏng in Kŏjedo, and Gamchŏn Bay in Pusan from February to June of 1987~1989. The detection ratio and toxicity of PSP in sea mussel differed by year even in the same collection area. PSP was often detected when the sea water temperature was about 8.0~14.0°C. The PSP toxicity of sea mussel was closely related to the density of Protogonyaulax sp. at Gamchŏn Bay in Pusan from March to April 1989, but no relationship was observed outside that period during the study. The concentration of inorganic nutrients affected the growth of Protogonyaulax sp., with NO3-N having the strongest effect. When PSP-infested sea mussel homogenate was heated at various temperatures, the PSP toxicity did not change significantly below 70°C for 60 min, but it decreased proportionally as the heating temperature increased. For example, when the sea mussel homogenate was heated at 100°C or 121°C for 10 min, the toxicity decreased by about 67% and 90%, respectively. On the other hand, when shellstock sea mussel containing PSP of 150 μg/100 g was boiled at 100°C for 30 min with tap water, no toxicity was detected by mouse assay, but PSP of 5,400 μg/100 g was only reduced to 57 μg/100 g even after boiling for 120 min.
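The heating results above can be restated uniformly as percent toxicity reduction. The figures are taken directly from the abstract; the helper is simple arithmetic.

```python
# Percent reduction in PSP toxicity after heat treatment.

def percent_reduction(before, after):
    return 100.0 * (before - after) / before

# 5400 ug/100 g reduced to 57 ug/100 g after 120 min of boiling:
print(round(percent_reduction(5400, 57), 1))  # 98.9
```

Even this ~99% reduction left residual toxin, which is why the abstract notes that heavily contaminated mussels remained toxic after prolonged boiling.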


The Spatio-temporal Distribution of Organic Matter on the Surface Sediment and Its Origin in Gamak Bay, Korea (가막만 표층퇴적물중 유기물량의 시.공간적 분포 특성)

  • Noh Il-Hyeon;Yoon Yang-Ho;Kim Dae-Il;Park Jong-Sick
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.9 no.1
    • /
    • pp.1-13
    • /
    • 2006
  • A field survey on the spatio-temporal distribution characteristics and origins of organic matter in surface sediments was carried out monthly at six stations in Gamak Bay, South Korea, from April 2000 to March 2002. Ignition loss (IL) ranged from 4.6~11.6% (7.1±1.6%), chemical oxygen demand (CODs) from 12.25~99.26 mgO2/g-dry (30.98±19.09 mgO2/g-dry), acid volatile sulfide (AVS) from not detected (ND)~10.29 mgS/g-dry (1.02±0.58 mgS/g-dry), and phaeopigment from 6.84~116.18 μg/g-dry (23.72±21.16 μg/g-dry). Particulate organic carbon (POC) and particulate organic nitrogen (PON) ranged from 5.45~23.24 mgC/g-dry (10.34±4.40 mgC/g-dry) and 0.71~2.99 mgN/g-dry (1.37±0.58 mgN/g-dry), respectively. Water content was in the range of 43.1~77.6% (55.8±5.6%), and mud content (silt+clay) was higher than 95% at all stations. The spatial distribution of organic matter in surface sediments divided broadly into the northwestern, central and eastern, and southern entrance areas. The concentrations of almost all items were greater in the northwestern and southern entrance areas than elsewhere in Gamak Bay. Sedimentary pollution was especially serious in the northwestern area, which receives an excessive supply of organic matter from aquaculture activity and the inflow of sewage from land. These materials stay longer because of the basin-like topography and the anoxic conditions in the bottom seawater caused by the thermocline in summer. Temporal change was most prominent during periods of high water temperature in the northwestern and southern entrance areas.
On the other hand, the central and eastern areas showed no regular trend in the concentrations of each item, but concentrations mainly tended to be higher during periods of low water temperature. The exception was AVS, whose concentration was higher during periods of high water temperature at all stations. In particular, the central and eastern areas showed a large temporal increase in AVS concentration during those high-water-temperature periods in which the CODs concentration exceeded 20 mgO2/g-dry. The results show that the organic matter in the surface sediments of Gamak Bay originated mainly from autochthonous organic matter, with an average C/N ratio of eight or less, including organic matter generated by use of the ocean, rather than from terrigenous organic matter. However, the POC/phaeopigment ratio indicated that the autochthonous organic matter was derived mainly from detritus rather than living phytoplankton. In addition, the CODs/IL ratio demonstrated that the detritus was the product of artificial activities, such as leftover feed and fecal pellets of farmed organisms from aquaculture, rather than of natural ocean processes.
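The C/N-ratio screening used above to infer organic matter origin can be sketched as a simple classifier. The threshold of ~8 for autochthonous (marine) material follows the abstract; treating clearly higher ratios as terrigenous is a common convention, and the cutoff of 12 here is an assumption for illustration only.

```python
# Infer organic matter origin from the C/N ratio of surface sediment.
# marine_max follows the abstract; terrigenous_min=12 is an assumed cutoff.

def organic_matter_origin(poc_mg, pon_mg, marine_max=8.0, terrigenous_min=12.0):
    cn = poc_mg / pon_mg
    if cn <= marine_max:
        return cn, "autochthonous (marine)"
    if cn >= terrigenous_min:
        return cn, "terrigenous"
    return cn, "mixed/indeterminate"

# Mean POC and PON from the abstract: 10.34 mgC/g-dry, 1.37 mgN/g-dry.
cn, origin = organic_matter_origin(10.34, 1.37)
print(round(cn, 1), origin)  # 7.5 autochthonous (marine)
```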


Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, market timing detection means determining when to buy and sell so as to earn excess return from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, via its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data must be discretized for rough set analysis, because rough sets accept only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values lying within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data and then searches for optimized discretization thresholds through Boolean reasoning.
Although rough set analysis is promising for market timing, there has been little research on how the various data discretization methods affect trading performance under rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and status in their industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method on the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable on the validation sample; the latter also produced robust performance on both training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
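Equal frequency scaling, the first discretization method described above, can be sketched with NumPy quantiles: cuts are chosen so roughly the same number of samples falls into each interval. The indicator values below are random placeholders, not the study's KOSPI 200 data.

```python
# Equal-frequency discretization: choose cuts so each interval holds
# approximately the same number of samples.
import numpy as np

def equal_frequency_cuts(values, n_bins):
    """Interior cut points giving ~equal-count intervals."""
    qs = np.linspace(0, 1, n_bins + 1)[1:-1]
    return np.quantile(values, qs)

def discretize(values, cuts):
    """Map each value to its interval index (0..n_bins-1)."""
    return np.searchsorted(cuts, values, side="right")

rng = np.random.default_rng(1)
rsi = rng.uniform(0, 100, size=660)   # e.g. 660 trading days of an indicator
cuts = equal_frequency_cuts(rsi, 4)
bins = discretize(rsi, cuts)
print(np.bincount(bins))              # roughly 165 samples per bin
```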

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value for business, ABSA is drawing attention from both academic and industrial organizations. For a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the restaurant as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. ABSA thus enables a more specific and effective marketing strategy. To perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and judge the sentiments towards them. Accordingly, ABSA has four main areas: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). It is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the example sentence above, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect.
On the other hand, an aspect category like 'price', which has no specific aspect term but can be guessed indirectly from an emotional word like 'expensive', is called an implicit aspect. So far, 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and use 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, whereas ACSC treats both explicit and implicit aspects. This study seeks answers to the following issues, ignored in previous studies applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of tokens for aspect categories than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair configuration? To achieve these objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. It was found that reflecting the output vector of the aspect category token is more effective than using only the output vector of the [CLS] token as the classification vector. QA-type input generally provides better performance than NLI, and the order of the sentence with the aspect category in the QA type is irrelevant to performance. There may be some differences depending on the characteristics of the dataset, but when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing the ACSC model used in this study could be applied similarly to other tasks such as ATSC.
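The QA- and NLI-style sentence-pair inputs compared above pair the review with an auxiliary sentence about the aspect category. A minimal sketch of this pair construction follows; the exact templates are assumptions for illustration, not the paper's actual wording.

```python
# Build BERT-style sentence pairs for aspect category sentiment
# classification. Templates are hypothetical examples of QA/NLI styles.

def make_pair(review, aspect, style="QA", aspect_first=False):
    if style == "QA":
        aux = f"What do you think of the {aspect} of it?"   # question form
    else:  # NLI: a short pseudo-hypothesis about the aspect
        aux = f"the {aspect}"
    return (aux, review) if aspect_first else (review, aux)

review = "The restaurant is expensive but the food is really fantastic"
print(make_pair(review, "price", style="QA"))
print(make_pair(review, "food", style="NLI", aspect_first=True))
```

Each pair would then be fed to a BERT encoder as segments A and B, with a classifier over the [CLS] vector or, per the finding above, over the aspect category token's output vector.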