• Title/Summary/Keyword: Return system

Search Results: 1,387

CNN-based Recommendation Model for Classifying HS Code (HS 코드 분류를 위한 CNN 기반의 추천 모델 개발)

  • Lee, Dongju;Kim, Gunwoo;Choi, Keunho
    • Management & Information Systems Review / v.39 no.3 / pp.1-16 / 2020
  • Under the current tariff return system, the person liable for duty calculates the tax amount and pays it on his or her own responsibility. In other words, in principle, the duty and responsibility of the return-and-payment system are imposed only on the taxpayer, who is required to calculate and pay the tax accurately. If the taxpayer fails to fulfill this duty and responsibility, the tax shortfall is collected and an additional tax is imposed. For this reason, item classification, together with tariff assessment, is among the most difficult tasks and can pose a significant risk to entities if items are misclassified. Consequently, import declarations are consigned to customs brokers, who are customs experts, in return for a substantial fee. The purpose of this study is to classify the HS items to be reported upon import declaration and to indicate the HS codes to be recorded on the import declaration. HS items were classified using the images attached to the item-classification decision cases published by the Korea Customs Service. For image classification, CNN, a deep learning algorithm commonly used for image recognition, was employed, and the VGG16, VGG19, ResNet50, and Inception-V3 models were used among CNN models. To improve classification accuracy, two datasets were created: Dataset1 consisted of the five HS codes with the most images, and Dataset2 consisted of five types within Chapter 87, the chapter with the most images among the two-digit HS codes. Classification accuracy was highest when training on Dataset2 with the Inception-V3 model, and ResNet50 showed the lowest classification accuracy. The study identified the possibility of classifying HS items based on the first item image registered in an item-classification decision case, and its second contribution is that HS item classification, which had not been attempted before, was attempted with CNN models.
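The workflow described in this abstract, fine-tuning pretrained CNN backbones such as VGG16 or Inception-V3 on a small set of HS-code image classes, can be sketched roughly as follows. This is a minimal illustration assuming a Keras/TensorFlow environment and an image directory with one subfolder per HS code; the directory name, image size, and five-class setup are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: fine-tuning a pretrained Inception-V3 backbone for
# 5-class HS-code image classification (directory layout is assumed).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5
IMG_SIZE = (299, 299)  # Inception-V3 default input size

train_ds = tf.keras.utils.image_dataset_from_directory(
    "hs_code_images/train",          # hypothetical path: one subfolder per HS code
    image_size=IMG_SIZE, batch_size=32, label_mode="categorical")

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False               # freeze the backbone, train only the new head

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

The same head-replacement pattern applies to the other backbones compared in the paper (VGG16, VGG19, ResNet50) by swapping the application class and its matching preprocess_input.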

A Study on the Characteristics of Projects Following the Promotion of Private Park Special Projects (민간공원특례사업의 추진에 따른 사업특성에 관한 연구)

  • Gweon, Young-Dal;Park, Hyun-Bin;Kim, Dong-Pil
    • Journal of the Korean Institute of Landscape Architecture / v.49 no.5 / pp.112-124 / 2021
  • This study examined and analyzed, for private park special projects across the country, the local governments involved, park status, project characteristics, and implementation details, as a means of responding to the sunsetting of urban parks. As a result of the analysis, first, private park special projects were found to be implemented mainly in cities with a population of more than 100,000, so there was a limit to their application in counties (gun) or small local cities; rather than applying the special system uniformly, institutional flexibility that considers the characteristics and size of each local government was judged to be needed. Second, the current special projects under the park-creation-and-donation method show monotonous development centered on apartment housing, so development needs to be diversified by introducing a park preservation method in which park sites are purchased and donated. Third, it was found that the area standard needs to be eased to less than 50,000 m² to include parks with high utilization and good accessibility in the urban areas of large cities, as the type and area of eligible parks are currently limited. Fourth, most special projects involve mountain parks, which raises concerns about damage to the natural terrain and skyline, so separate ordinances should be established and applied, and development should proceed so that nature and parks can coexist, with detailed building guidelines set for each type of facility. As follow-up measures, first, after the nationwide private park special projects are completed, standards for appropriate returns on similar projects and institutional standards such as the recovery of excess profits should be established, and environmental reviews should be conducted. Second, local governments should institutionalize private consultative bodies to promote efficient project management through a cooperative system, and third, a roadmap for maintenance after the donation of special parks should be established.

The Theory of Chen tuan's Internal Alchemy and Intermixture of Taoism, Buddhism and Neo-Confucianism (진단의 내단이론과 삼교회통론)

  • Kim, Kyeong-Soo
    • The Journal of Korean Philosophical History / no.31 / pp.53-86 / 2011
  • Taoism exercised its influence and made much progress under the aegis of the Tang dynasty. But when external alchemy, the traditional way to eternal life that Taoists had pursued, reached its limits, they were placed in a situation where they needed to seek a new discipline. From this period to the early Northern Song dynasty, the three religions each established their own theoretical system of ascetic practice, in distinctive forms: Neo-Confucianism established the theory of moral training, Buddhism the theory of ascetic practice, and Taoism the theory of discipline. By this time, those who advocated the Intermixture of the Three Religions composed new systems of ascetic practice by taking advantage of the other religions and incorporating them into their own views. Chen tuan established the theory of internal alchemy in Taoism and was the most influential figure in the world of thought from the Northern Song dynasty onward. He clearly declared that he had accepted the merits of other religions into his theory. He added the I Ching of Confucianism to the esoteric I Ching of Taoism to close the logical gaps in the process of Taoist discipline, and took the Buddhist ascetic practice of the mind into his system while seeking a way to integrate the dual structure of body and mind. Chen tuan's theory of internal alchemy was a training schema with the stages of 'YeonJeongHwaGi', 'YeonGiHwaSin', and 'YeonSinHwanHeo', based on the concepts of essence, energy, and spirit. The internal alchemy practice Chen tuan described started from the practice of Zen, keeping the mind calm on the basis of the Taoist interpretation of the Book of Changes. When a person reached the state of being in concert with all changes at the end of silence and being full of wisdom, he finally returned to the state of BokGwiMuGeuk by taking the flow of the subtle mind and transforming it into energy. Chen tuan expressed this process by drawing the 'MuGeukDo'. Oriental philosophy categorizes the human being into 'phenomenal existence' and 'original existence', and the logic of the theory of ascetic practice has been established from these categories of existence. Whether one returns to 'original existence' or is stepped up from 'phenomenal existence' is determined by how the concept of 'self' or 'I' is formed. In this respect, Chen tuan, who established the theory of internal alchemy in Taoism, built a unique theory of internal alchemy discipline and a system of intermixture of the three religions. Today is called an 'era of self-loss' or an 'era of incurable diseases' caused by environmental pollution. It is still meaningful to review Chen tuan's theory of discipline, which connects the body and the soul, in order to heal the self, keep life healthy, and pursue a new way of discipline based on it.

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods to handle. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, so it is stable for managing large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can not only handle billions of examples in limited-memory environments but is also very fast to train compared to traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model; in doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return was 45.748%, about 5% higher than that of the risk parity model, and the estimation errors were reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will persist in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; here we suggest a new method for reducing the estimation errors of an optimized asset allocation model using machine learning. The study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
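The combination described here, predicting next-period asset risk with XGBoost and feeding it into a risk parity allocation, can be illustrated roughly as below. This is a simplified sketch under assumptions not taken from the paper: volatility is predicted per sector from lagged realized volatilities, and weights come from naive inverse-volatility risk parity rather than the full covariance-based optimization used in the study.

```python
# Simplified sketch: predict next-period volatility per sector with XGBoost,
# then form naive risk parity (inverse-volatility) weights.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def realized_vol(returns: pd.DataFrame, window: int = 20) -> pd.DataFrame:
    """Rolling realized volatility per column (sector)."""
    return returns.rolling(window).std()

def predict_next_vol(returns: pd.DataFrame, n_lags: int = 5) -> pd.Series:
    """Fit one XGBoost regressor per sector on lagged volatilities."""
    vol = realized_vol(returns).dropna()
    preds = {}
    for col in vol.columns:
        series = vol[col]
        X = np.column_stack([series.shift(i) for i in range(1, n_lags + 1)])
        y = series.values
        mask = ~np.isnan(X).any(axis=1)          # drop rows with missing lags
        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(X[mask], y[mask])
        # Features for the next period: the most recent n_lags volatilities.
        x_next = series.values[-n_lags:][::-1].reshape(1, -1)
        preds[col] = float(model.predict(x_next)[0])
    return pd.Series(preds)

def inverse_vol_weights(predicted_vol: pd.Series) -> pd.Series:
    """Naive risk parity: weight each sector by the inverse of its predicted vol."""
    inv = 1.0 / predicted_vol
    return inv / inv.sum()
```

With daily sector returns in a DataFrame `returns`, `inverse_vol_weights(predict_next_vol(returns))` would give the rebalancing weights for the next window; the paper instead plugs the predicted risks into the covariance estimate of a full risk parity optimization.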

The Obligation to Return Unjust Enrichment or to Compensate for the Use of a Flight Safety Zone -Seoul High Court Judgment 2018Na2034474, decided on 2018. 10. 11.- (비행안전구역의 사용에 대한 부당이득반환·손실 보상 의무의 존부 -서울고등법원 2018. 10. 11. 선고 2018나2034474 판결-)

  • Kwon, Chang-Young;Park, Soo-Jin
    • The Korean Journal of Air & Space Law and Policy / v.35 no.1 / pp.63-101 / 2020
  • 'Flight safety zone' means a zone that the Minister of National Defense designates under Articles 4 and 6 of the Protection of Military Bases and Installations Act (hereinafter 'PMBIA') for the safety of flight during takeoff and landing of military aircraft. The purpose of the flight safety zone is to contribute to national security by providing the measures necessary for the protection of military bases and installations and for the smooth conduct of military operations. In this case, after the State established and used a flight safety zone, the landowner claimed restitution of unjust enrichment against the State. This article analyzes the legitimacy of the plaintiff's claim on the basis of existing legal theory, and the discussion is summarized as follows. A person who, without any legal ground, derives a benefit from the property or services of another and thereby causes loss to the latter shall be bound to return such benefit (Article 741 of the Civil Act). Since the benefit at issue is an infringing profit, the defendant must prove that it has a legitimate right to retain the profit. The State holds the right to use the land designated as a flight safety zone in accordance with the legitimate procedures established by the PMBIA for the safe takeoff and landing of military aircraft. Therefore, it cannot be said that the State gained, without legal cause, an unjust enrichment equivalent to the rent on the land. Expropriation, use or restriction of private property from public necessity and compensation therefor shall be governed by Act: provided, that in such a case, just compensation shall be paid (Article 23(3) of the Constitution of the Republic of Korea). Since the PMBIA contains no provision for loss compensation in a case where a flight safety zone is established over land, as here, the next question is whether this omission is unconstitutional. Even if land is designated as a flight safety zone and its use and profits are limited, the justification of the purpose of the flight safety zone system, the appropriateness of the means, the minimization of infringement, and the balance of legal interests are still recognized; thus the mere absence of a loss compensation clause does not make the Act unconstitutional. In conclusion, the plaintiff's claim for loss compensation based on the Act on Acquisition of and Compensation for Land, etc. for Public Works Projects, which has no provision for compensation for losses arising from restrictions for public purposes, is without merit.

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detecting market timing means determining when to buy and sell in order to earn excess return from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. Some researchers, on the other hand, have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the market pattern is uncertain. The numeric data for rough set analysis must be discretized, because rough sets accept only categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals, and all values that lie within an interval are transformed into the same value. In general, four methods are used for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines the cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first finds categorical values by naïve scaling of the data and then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance with rough set analysis. In this study, we compare stock market timing models that use rough set analysis with the various discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method for the validation sample; it also produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
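As a concrete illustration of one of the discretization methods named above, equal frequency scaling can be done in a few lines; this is a generic sketch with toy indicator columns (names hypothetical), not the preprocessing code used in the paper.

```python
# Minimal sketch: equal-frequency discretization of numeric technical
# indicators before rough set analysis (column names are hypothetical).
import pandas as pd

def equal_frequency_discretize(df: pd.DataFrame, n_intervals: int = 4) -> pd.DataFrame:
    """Map each numeric column into n_intervals categories so that roughly
    the same number of samples falls into each interval (qcut picks the cuts)."""
    out = pd.DataFrame(index=df.index)
    for col in df.columns:
        out[col] = pd.qcut(df[col], q=n_intervals, labels=False, duplicates="drop")
    return out

# Toy technical indicators:
indicators = pd.DataFrame({"rsi": [28, 45, 61, 72, 55, 33, 80, 49],
                           "momentum": [-1.2, 0.3, 0.8, 1.5, 0.1, -0.7, 2.0, 0.4]})
print(equal_frequency_discretize(indicators, n_intervals=4))
```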

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents, and in this situation it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, do not yet benefit from the use of keywords. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, a controlled vocabulary is given and the aim is to match its terms to the texts; that is, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, and as a result keyword extraction is limited to terms that appear in the document; it cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of keywords assigned by the authors can be found in the full text of an article. Inversely, this also means that 10% to 36% of the keywords assigned by the authors do not appear in the article and cannot be generated through keyword extraction algorithms. Our preliminary experiment also shows that 37% of keywords assigned by the authors are not included in the full text. This is the reason why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model,
which is a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating the keywords that have high similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to that of Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents increases, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
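The five assignment steps above amount to scoring each candidate keyword set by its cosine similarity to the target document's term-frequency vector. The following is a minimal sketch of that idea with toy data; the keyword weights, vocabulary handling, and ranking cutoff are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: assign keywords by cosine similarity between keyword-set
# vectors and the target document's term-frequency vector (toy data).
from collections import Counter
import math
import re

# Hypothetical keyword sets: keyword -> {term: weight}
keyword_sets = {
    "logistics":   {"port": 0.9, "shipping": 0.8, "cargo": 0.6},
    "finance":     {"stock": 0.9, "return": 0.7, "portfolio": 0.8},
    "text mining": {"keyword": 0.9, "document": 0.8, "retrieval": 0.7},
}

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_keywords(document: str, top_k: int = 2):
    # Steps (2)-(3): parse the document and build its term-frequency vector.
    terms = re.findall(r"[a-z]+", document.lower())
    doc_vec = dict(Counter(terms))
    # Steps (1), (4)-(5): score every keyword set and return the best ones.
    scores = {kw: cosine(vec, doc_vec) for kw, vec in keyword_sets.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(assign_keywords("This document studies keyword assignment for document retrieval."))
```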

Changes of Minute Blood Flow in the Large Vessels during Orthostasis and Antiorthostasis, before and after Atropine Administration (체위변화가 두부 및 하지의 분시혈류량에 미치는 영향)

  • Park, Won-Kyun;Chae, E-Up
    • The Korean Journal of Physiology / v.19 no.2 / pp.139-153 / 1985
  • Passive tilt has been used to study the effect of orthostasis on the cardiovascular system. Orthostasis due to upright tilt has been shown to decrease venous return, cardiac output, and systemic arterial blood pressure, with a concomitant increase in heart rate through negative feedback mediated by mechanisms such as the baroreceptor reflex. Previous investigators have suggested that tolerance to orthostasis could be increased by blocking the cholinergic fibers with atropine, which prevents the vasodilation and bradycardia produced through the vasovagal reflex during orthostasis; however, this hypothesis has not been clearly confirmed. This study was undertaken to clarify the effect of atropine on the tolerance of the cardiovascular system to upright and head-down tilt, and to investigate the changes in blood flow through the head and lower leg, measured with an electromagnetic flowmeter, in both tilts before and after atropine. Fourteen anesthetized dogs of 10~14 kg were examined by tilting from the supine position to a +77° upright position (orthostasis) and then to a -90° head-down position (antiorthostasis) for 10 minutes in each test, and the same course was repeated 20 minutes after intravenous administration of 0.5 mg atropine. Blood flow (ml/min) was measured in the carotid artery, external jugular vein, femoral artery, and femoral vein. At the same time, pH, PCO2, PO2, and hematocrit (Hct) of arterial and venous blood, heart rate (HR), and respiratory rate (RR) were measured. The measurements obtained during upright and head-down tilt were compared with those in the supine position. The results are as follows. In upright tilt, blood flow in both the arteries and the veins of the head and lower leg decreased, but the decrement of blood flow through the head was greater than that through the lower leg, and atropine attenuated the decrement of blood flow in the carotid artery but not in the vessels of the lower leg. HR was moderately increased in upright tilt and slightly increased in head-down tilt; the percent change of HR after atropine was smaller than that before atropine in both tilts. Before atropine, RR decreased in upright tilt and increased in head-down tilt; after atropine, the percent change of RR was smaller than before atropine in both tilts. In upright tilt, venous PCO2 increased, while arterial and venous PO2 decreased slightly. Hct increased in both upright and head-down tilt. The findings for blood PCO2, PO2, and Hct were not affected by atropine. In conclusion, the administration of atropine is somewhat effective in improving cardiovascular tolerance to postural changes: atropine attenuates the severe diminution of blood flow to the head during orthostasis and also reduces the changes of HR and RR in both orthostasis and antiorthostasis.


Norm-referenced criteria for strength of the elbow joint for Korean high school baseball players using isokinetic equipment: (Focusing on Seoul and Gyeonggi-do) (등속성 장비를 이용하여 한국고교야구선수 주관절 근력 평가기준치 설정: (서울 및 경기도 중심으로))

  • Kim, Su-Hyun;Lee, Jin-Wook
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.10 / pp.442-447 / 2017
  • The purpose of this study was to establish norm-referenced criteria for the isokinetic strength of the elbow joint in Korean high school baseball players. Two hundred and one high school baseball players, none of whom had any medical problem with their upper limbs, participated in this study. The elbow flexion/extension test was conducted four times at an angular velocity of 60°/sec. The HUMAC NORM (CSMI, USA) system was used to obtain the values of peak torque and peak torque per body weight. The results were presented as norm-referenced criterion values using the 5-point Cajori scale, which divides the distribution into five stages (6.06%, 24.17%, 38.30%, 24.17%, and 6.06%). The peak torques of the elbow flexor and extensor at an angular velocity of 60°/sec were 37.88 ± 8.14 Nm and 44.59 ± 11.79 Nm, and the peak torques per body weight of the elbow flexor and extensor were 50.06 ± 8.66 Nm and 58.28 ± 12.84 Nm, respectively. The reference values of peak torque and peak torque per body weight of the elbow flexor and extensor were set at an angular velocity of 60°/sec. On the basis of these results, the following conclusions were drawn. There is a lack of proper studies on elbow joint strength, even though the elbow joint is the most common injury site in baseball players; therefore, standard muscle strength values need to be established in order to prevent elbow joint injuries and improve performance. The criteria for peak torque and peak torque per body weight established here will provide useful information for high school baseball players, baseball coaches, athletic trainers, and sports injury rehabilitation specialists in recovery from injury and return to play, and can be utilized as objective clinical assessment data.
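The norm-referencing described above splits a normal distribution of strength scores into five bands; with the standard Cajori cut points at ±1.5 SD and ±0.5 SD the band sizes come out close to the percentages cited (about 6%, 24%, 38%, 24%, 6%). The sketch below grades a new measurement against the reported flexor peak torque norms; the use of these exact cut points and the 1-5 grade labels are assumptions for illustration, not the paper's tabulated criteria.

```python
# Minimal sketch: grade an isokinetic peak-torque value on a 5-point
# Cajori-style scale using cut points at mean ± 1.5 SD and mean ± 0.5 SD.
def cajori_grade(value: float, mean: float, sd: float) -> int:
    """Return a grade from 1 (lowest band) to 5 (highest band)."""
    cuts = [mean - 1.5 * sd, mean - 0.5 * sd, mean + 0.5 * sd, mean + 1.5 * sd]
    grade = 1
    for cut in cuts:
        if value >= cut:
            grade += 1
    return grade

# Reported elbow flexor peak torque at 60 deg/sec: 37.88 ± 8.14 Nm
print(cajori_grade(45.0, mean=37.88, sd=8.14))  # 45 Nm falls in the 4th band
```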

User-Centered Interface Design of Web-based Attention Testing Tools: Inhibition of Return (IOR) and Graphic UI (웹 기반 주의력 검사의 사용자 인터페이스 설계: 회귀억제 과제와 그래픽 UI를 중심으로)

  • Kwahk, Ji-Eun;Kwak, Ho-Wan
    • Korean Journal of Cognitive Science / v.19 no.4 / pp.331-367 / 2008
  • This study aims to validate a web-based neuropsychological testing tool developed by Kwak (2007) and to suggest solutions to potential problems that could deteriorate its validity. When it targets a wider range of subjects, a web-based neuropsychological testing tool is challenged by high drop-out rates, lack of motivation, lack of interactivity with the experimenter, fear of computers, and so on. As a possible solution to these threats, this study redesigns the user interface of a web-based attention testing tool through three phases. In Study 1, an extensive analysis of Kwak's (2007) attention testing tool was conducted to identify potential usability problems; the Heuristic Walkthrough (HW) method was used by three usability experts to review the design features. Many problems were found throughout the tool, and the findings indicated that the design of the instructions, user information survey forms, task screens, results screens, etc. did not conform to the needs of users and their tasks. In Study 2, 11 guidelines for the design of web-based attention testing tools were established based on the findings of Study 1. The guidelines were used to optimize the design and organization of the tool so that it fits the user and task needs, and the resulting new design was implemented as a working prototype using the Java programming language. In Study 3, a comparative study was conducted to demonstrate the superiority of the new design (the graphic style tool) over the existing design (the text style tool). A total of 60 subjects participated in user testing sessions in which their error frequency, error patterns, and subjective satisfaction were measured through performance observation and questionnaires. In the task performance measurement, user errors of various types were observed with the existing text style tool. The questionnaire results also supported the new graphic style tool: users rated it higher than the existing text style tool in terms of overall satisfaction, screen design, terms and system information, ease of learning, and system performance.
