• Title/Summary/Keyword: Task-Based Design


Design and Implementation of Real-time Implanted Kernel, RTiK to Support Real-time for a Test Set based on Windows (윈도우 기반의 점검장비에 실시간성을 지원하는 실시간 이식 커널의 설계 및 구현)

  • Lee, Jin-Wook;Cho, Moon-Haeng;Kim, Jong-Jin;Jo, Han-Moo;Park, Young-Soo;Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.10
    • /
    • pp.36-44
    • /
    • 2010
  • Recently, as new weapons are developed, the test equipment that verifies their functions inevitably requires real-time features. However, since test equipment based on Windows cannot meet real-time requirements, developers have no choice but to use third-party solutions such as RTX or INtime, which increases the development cost of each test set. This paper suggests a real-time implanted kernel (RTiK) that operates as a device driver on Windows. RTiK provides an additional timer using the Local APIC of x86 microprocessors, and it supports real-time requirements by periodically executing the required services with Windows-independent timer interrupts, thereby guaranteeing task deadlines. To reduce interrupt latency, we used the deferred procedure calls (DPCs) provided by Windows. We also used an export driver so that user-defined functions can be implemented and modified without accessing the RTiK internals. Measurements with an oscilloscope show that the proposed RTiK guarantees periods as short as 0.1 ms.

A Study on the Design of a Fake News Management Platform Based on Citizen Science (시민과학 기반 가짜뉴스 관리 플랫폼 연구)

  • KIM, Ji Yeon;SHIM, Jae Chul;KIM, Gyu Tae;KIM, Yoo Hyang
    • Journal of Science and Technology Studies
    • /
    • v.20 no.1
    • /
    • pp.39-85
    • /
    • 2020
  • With the development of information technology, fake news is becoming a serious social problem. Individual measures to manage the problem, such as fact-checking by the media, legal regulation, or technical solutions, have not been successful. The flood of fake news has undermined not only trust in the media but also the general credibility of social institutions, and is even threatening the foundations of democracy. This is why fake news cannot be left unchecked, though managing it is certainly a difficult task. The problem of fake news is not about simply judging its veracity, as no news is completely fake or unquestionably real and there is much uncertainty. Therefore, managing fake news does not mean removing it completely. Nor can the problem be left to individuals' capacity for rational judgment, since recurring fake news can easily disrupt individual decision making; this raises the need for socio-technical measures and multidisciplinary collaboration. In this study, we introduce a new public online platform for fake news management, which incorporates a multidimensional and multidisciplinary approach based on citizen science. Our proposed platform fundamentally redesigns the existing process for collecting and analyzing fake news and engaging with user reactions. People in various fields can participate in and contribute to this platform by mobilizing their own expertise and capabilities.

Similarity checking between XML tags through expanding synonym vector (유사어 벡터 확장을 통한 XML태그의 유사성 검사)

  • Lee, Jung-Won;Lee, Hye-Soo;Lee, Ki-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.9
    • /
    • pp.676-683
    • /
    • 2002
  • The success of XML (eXtensible Markup Language) is primarily based on its flexibility: everybody can define the structure of XML documents that represent information in whatever form he or she desires. XML is so flexible that XML documents cannot automatically be provided with an underlying semantics. Different tag sets, different names for elements or attributes, and different document structures in general mislead the task of classifying and clustering XML documents precisely. In this paper, we design and implement a system for checking the semantic-based similarity between XML tags. First, the system extracts the underlying semantics of tags and then expands the synonym set of each tag using a WordNet thesaurus and a user-defined word library that supports abbreviations and compound words in XML tags. Second, considering the relative importance of XML tags within XML documents, we extend the conventional vector space model, the document model most widely used in the Information Retrieval field. Using this method, we were able to check the similarity between XML tags even when the same concept is represented by different tag names.
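The synonym-expansion idea above can be sketched in a few lines. This is a minimal illustration, not the authors' system: the `SYNONYMS` table is a hypothetical stand-in for the WordNet thesaurus and the user-defined abbreviation library, and simple set overlap (Jaccard) stands in for the paper's extended, importance-weighted vector space model.

```python
# Hypothetical synonym/abbreviation table standing in for WordNet plus a
# user-defined word library (the paper's actual resources).
SYNONYMS = {
    "author": {"writer", "creator"},
    "addr": {"address"},   # abbreviation expansion
    "price": {"cost"},
}

def expand(tag):
    """Return the tag together with its synonym set."""
    return {tag} | SYNONYMS.get(tag, set())

def tag_similarity(tag_a, tag_b):
    """Jaccard overlap of the expanded synonym sets; a cosine on binary
    tag vectors would rank pairs similarly."""
    a, b = expand(tag_a), expand(tag_b)
    return len(a & b) / len(a | b)

print(tag_similarity("author", "writer"))  # non-zero despite different tag names
print(tag_similarity("author", "price"))   # unrelated tags score 0.0
```

Differently named tags such as `author` and `writer` now match through their shared synonym set, which is the core of the paper's approach.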

Development of Scaffolding Strategies Model by Information Search Process (ISP) (정보탐색과정(ISP)에 의한 스캐폴딩 전략 모형 개발)

  • Jeong-Hoon Lim
    • Journal of Korean Library and Information Science Society
    • /
    • v.54 no.1
    • /
    • pp.143-165
    • /
    • 2023
  • This study proposes a scaffolding strategy that can be applied to the information search process, based on Kuhlthau's ISP model, which offers design and implementation strategies for the mediating role in the learning process. To this end, the relevant literature was reviewed to categorize scaffolding strategies, and impressions were collected through student surveys after 150 middle school students in the Daejeon area took a project class that applied the ISP-based scaffolding strategy. The collected data were preprocessed into a form suitable for analysis so that word frequencies could be extracted, and topic analysis was performed using STM (Structural Topic Modeling). After determining the optimal number of topics and extracting topics for each stage of the ISP model, the extracted topics were classified into three types: cognitive domain-macro perspective, cognitive domain-micro perspective, and emotional domain perspective. In this process, we focused on cognitive and emotional verbs among the words extracted through text mining, and presented a scaffolding strategy model for each topic by reviewing representative document cases. Based on these results, providing an appropriate scaffolding strategy at each ISP model stage can be expected to have a positive effect on learners' self-directed task solving.
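The word-frequency preprocessing step described above can be sketched as follows. This is a minimal English-language illustration with made-up survey responses and a hand-picked stopword list; the study itself preprocessed Korean survey text and then applied STM, which this sketch does not reproduce.

```python
from collections import Counter

# Hypothetical stopword list; the study's preprocessing targeted Korean text.
STOPWORDS = {"the", "a", "to", "and", "of", "i"}

def word_frequencies(responses):
    """Lowercase, tokenize on whitespace, strip punctuation, drop stopwords,
    and count the remaining tokens across all responses."""
    counts = Counter()
    for text in responses:
        for token in text.lower().split():
            token = token.strip(".,!?")
            if token and token not in STOPWORDS:
                counts[token] += 1
    return counts

responses = [
    "I felt confused at the start of the search.",
    "The search became easier after I found a good keyword.",
]
freqs = word_frequencies(responses)
print(freqs.most_common(2))  # 'search' appears in both responses
```

A frequency table like this is the usual input to a document-term matrix, on top of which topic models such as STM (or LDA) are fitted.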

A Study on Improving Performance of Software Requirements Classification Models by Handling Imbalanced Data (불균형 데이터 처리를 통한 소프트웨어 요구사항 분류 모델의 성능 개선에 관한 연구)

  • Jong-Woo Choi;Young-Jun Lee;Chae-Gyun Lim;Ho-Jin Choi
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.7
    • /
    • pp.295-302
    • /
    • 2023
  • Software requirements written in natural language may be interpreted differently by different stakeholders. When designing an architecture based on quality attributes, it is necessary to classify quality attribute requirements accurately, because an efficient design is possible only when appropriate architectural tactics are selected for each quality attribute. Consequently, although many natural language processing models have been studied for requirements classification, which is a high-cost task, few have addressed improving classification performance on imbalanced quality attribute datasets. In this study, we first show through experiments that a classification model can automatically classify a Korean requirements dataset. Based on these results, we explain how data augmentation with EDA (Easy Data Augmentation) techniques and undersampling strategies can mitigate the imbalance of quality attribute datasets, and show that they are effective for requirements classification. The F1-score improved by 5.24%p, indicating that handling imbalanced data helps classification models classify Korean requirements. Furthermore, detailed experiments on EDA illustrate which operations help improve classification performance.
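The two imbalance-handling techniques named above can be sketched independently of any model. This is a minimal illustration, not the authors' pipeline: `random_swap` and `random_deletion` are two of the four EDA operations, `undersample` trims every class to the size of the smallest, and the example requirements and labels are made up.

```python
import random

random.seed(0)  # reproducible sketch

def random_swap(tokens, n=1):
    """EDA random swap: exchange two random token positions n times."""
    tokens = tokens[:]
    for _ in range(n):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.2):
    """EDA random deletion: drop each token with probability p."""
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]  # never return an empty sentence

def undersample(dataset):
    """Trim every class down to the size of the smallest class."""
    by_label = {}
    for text, label in dataset:
        by_label.setdefault(label, []).append((text, label))
    n = min(len(v) for v in by_label.values())
    return [ex for v in by_label.values() for ex in random.sample(v, n)]

# Made-up imbalanced requirement dataset: 6 performance vs 2 usability.
data = [("the system shall respond fast", "performance")] * 6 + \
       [("the UI shall be intuitive", "usability")] * 2
balanced = undersample(data)
print(len(balanced))  # 4 examples, 2 per class
```

In practice one would augment the minority class with EDA variants first and then undersample the majority class, but the two operations are independent and can be combined in either order.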

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. Since no standard ontology development methodology exists, one of the existing methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach for building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial for developing coherent and useful ontological models of very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform independence. Based on the researchers' experience of developing GSO, several issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies. However, it is still difficult for such domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a pre-development stage such as a feasibility study. This stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to justify starting an ontology-building project, but also that the project is likely to succeed.
Third, METHONTOLOGY does not explain the use and integration of existing ontologies. If an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how specific tasks are allocated to different developer groups, and how these tasks are combined once each is completed. Fifth, METHONTOLOGY does not sufficiently specify the methods and techniques to apply in the conceptualization stage. Introducing methods for extracting concepts from multiple informal sources, or for identifying relations, could enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition throughout the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage, so it can be considered a heavyweight methodology; adopting an agile methodology would reinforce active communication among developers and reduce the documentation burden. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed.
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.

A Comparative Case Study on the Adaptation Process of Advanced Information Technology: A Grounded Theory Approach for the Appropriation Process (신기술 사용 과정에 관한 비교 사례 연구: 기술 전유 과정의 근거이론적 접근)

  • Choi, Hee-Jae;Lee, Zoon-Ky
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.99-124
    • /
    • 2009
  • Many firms in Korea have adopted and used advanced information technology in an effort to boost efficiency. The process of adapting to the new technology, however, can vary from one firm to another. This research therefore focuses on several relevant factors, especially the role of social interaction as a key variable that influences the technology adaptation process and its outcomes. Thus far, how a firm goes through the adaptation process to a new technology has not been fully explored. Previous studies on the changes a firm or organization undergoes due to information technology have been pursued from various theoretical points of view, evolving from technological and institutional views to integrated socio-technical views. The technology adaptation process has been understood as something that evolves over time, regarded as cycles between misalignments and alignments that gradually approach a stable aligned state. Poole and DeSanctis (1994) defined this adaptation process as "appropriation." They suggested that the process is not automatically determined by the technology design itself; rather, people actively select how technology structures should be used, and adoption practices therefore vary. However, the concepts of the appropriation process in these studies are not precise, and the suggested propositions are not clear enough to apply in practice. Furthermore, these studies do not substantially identify which factors change during the appropriation process or what should be done to bring about effective outcomes. The research objectives of this study therefore lie in finding the causes of differences in how advanced information technology has been used and adopted among organizations. The study also explores how a firm's interaction with social as well as technological factors leads to different organizational changes.
The detailed objectives of this study are as follows. First, this paper focuses on the long-run appropriation process of advanced information technology and looks into the reasons for the diverse types of usage. Second, it categorizes the phases of the appropriation process and clarifies what changes occur and how they evolve during each phase. Third, it suggests guidelines for determining which strategies are needed at the individual, group, and organizational levels. To this end, a substantive grounded theory that can be applied to organizational practice was developed from a longitudinal comparative case study. The technology appropriation process was explored based on Structuration Theory (Giddens, 1984; Orlikowski and Robey, 1991) and Adaptive Structuration Theory (Poole and DeSanctis, 1994), which exemplify socio-technical views of technology-driven organizational change. Data were obtained from interviews, observations of medical treatment tasks, and questionnaires administered to group members who use the technology. Data coding was executed in three steps following the grounded theory approach. First, concepts and categories were developed from the interview and observation data in open coding. Next, in axial coding, categories were related to subcategories along their properties and dimensions through the paradigm model. Finally, the grounded theory of the appropriation process was developed through the conditional/consequential matrix in selective coding. In this study, eight hypotheses about the adaptation process have been clearly articulated. We also found that the appropriation process evolves through three phases: "direct appropriation," "cooperating with related structures," and "interpreting and making judgments." As appropriation moves to higher phases, users exhibit more varied types of instrumental use and attitudes.
Moreover, prior structures such as "knowledge and experience," "belief that other members know and accept the use of the technology," "horizontal communication," and "embodiment of the opinion-collection process" evolve to higher degrees along their property dimensions. Furthermore, users continuously create new spirits and structures while removing some of the previous ones. Thus, from a longitudinal view, faithful and unfaithful appropriation methods appear recursively, but faithful appropriation gradually prevails. In other words, the spirits and structures change over the course of the adaptation process for the purpose of alignment between the task and other structures. These findings call for a revised or extended model of structural adaptation in the IS (Information Systems) literature, now that the vague adaptation process of previous studies has been clarified through this in-depth qualitative study, which identifies each phase with accuracy. In addition, based on these results, guidelines can be set up to help determine which strategies are needed at the individual, group, and organizational levels for effective technology appropriation. In practice, managers can focus on the changes of spirits and the elevation of structural dimensions to achieve effective technology use.

The Study on the Priority of First Person Shooter game Elements using Delphi Methodology (FPS게임 구성요소의 중요도 분석방법에 관한 연구 1 -델파이기법을 이용한 독립요소의 계층설계와 검증을 중심으로-)

  • Bae, Hye-Jin;Kim, Suk-Tae
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.61-72
    • /
    • 2007
  • Since "Spacewar!", the first game, produced at MIT in the 1960s, the gaming industry has expanded rapidly and grown to a large size over a short period of time. Brand-new games launched on the market contain so many different elements within a single piece of content that games are often called the most comprehensive, ultimate fruit of design technologies. This also means a large increase in the number of things to consider when developing a game, complicating plans for the budget, workforce, and schedule. Therefore, an approach that analyzes the elements making up a game, computes the importance of each, and assesses games to be developed in the future is key to successful game development. Such planning requires many decision-making activities, which involve several difficulties: the multi-factor problem; the uncertainty problem, which impedes the elements from being quantified; the complex multi-purpose problem, whose outcomes cause confusion among decision-makers; and the problem of determining the priority order across the multiple stages of the decision-making process. In this study we suggest AHP (Analytic Hierarchy Process) so that these problems can be worked out comprehensively and a logical, rational alternative can be proposed through the quantification of uncertain data. The analysis took FPS (First Person Shooter) games, which currently dominate the gaming industry, as its subject. The most important consideration in conducting an AHP analysis is to group the elements of the subject accurately and objectively, arrange them hierarchically, and analyze their importance through pair-wise comparison between the elements.
The study comprises two parts: analyzing these elements and computing the importance among them, and choosing an alternative. Of these, this paper focuses in particular on the Delphi-based objective analysis and hierarchical arrangement of the elements of FPS games.
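The pair-wise comparison step at the heart of AHP can be sketched numerically. This is a minimal illustration, not data from the study: the 3x3 matrix below compares three hypothetical FPS elements on Saaty's 1-9 scale, and the geometric-mean method is used as a standard approximation of the principal-eigenvector priority weights.

```python
import math

# Hypothetical pairwise comparison matrix for three made-up FPS elements
# (gunplay, maps, sound). Entry [i][j] says how much more important
# element i is than element j on Saaty's 1-9 scale; [j][i] is its reciprocal.
matrix = [
    [1.0, 3.0, 5.0],   # gunplay vs (gunplay, maps, sound)
    [1/3, 1.0, 2.0],   # maps
    [1/5, 1/2, 1.0],   # sound
]

def ahp_weights(m):
    """Geometric mean of each row, normalized to sum to 1 — a common
    approximation of the principal eigenvector used in AHP."""
    gms = [math.prod(row) ** (1 / len(row)) for row in m]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(matrix)
print([round(w, 3) for w in weights])  # the largest weight goes to gunplay
```

A full AHP analysis would also compute the consistency ratio of the matrix and reject judgments that are too inconsistent, but the weight derivation above is the core computation.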


A Study of Competency for R&D Engineer on Semiconductor Company (반도체 기술 R&D 연구인력의 역량연구 -H사 기업부설연구소를 중심으로)

  • Yun, Hye-Lim;Yoon, Gwan-Sik;Jeon, Hwa-Ick
    • 대한공업교육학회지
    • /
    • v.38 no.2
    • /
    • pp.267-286
    • /
    • 2013
  • Recently, advanced companies have spared no effort in improving the core knowledge and technology needed to achieve outstanding work performance. In this rapidly changing knowledge-based society, companies confront the task of creating high value-added knowledge, and the role of the R&D workforce, which corresponds to the characteristics and role of knowledge workers, is becoming more significant. As the life cycle of technical knowledge and skills shortens, in every industry technical knowledge and skills have become essential elements for successful business. It is difficult to improve a company's competitiveness without enhancing the competency of individuals and the organization. As competency development, a part of human resource management, spreads through companies, research is needed to determine the necessary competencies and to analyze the competencies of a core organization within a research institute. 'H' is a semiconductor manufacturing company with an affiliated research institute staffed by its own R&D engineers. Based on focus group interviews and job analysis data, the institute's vision and necessary competencies were confirmed. To confirm whether the required competencies differ by job, the analysis divided members into those in charge of circuit design and design prior to process development, and those in charge of process actualization and process development. The research also included members' awareness of the importance of the identified competencies. The interview and job analysis data were integrated and analyzed after being arranged by group and content, and the results were re-sorted after comparative analysis against the competency dictionary of Spencer & Spencer and competency models developed in prior research.
The main competencies derived were: challenge, responsibility, prediction/responsiveness, planning a new business, achievement orientation, training, cooperation, self-development, analytic thinking, scheduling, motivation, communication, commercialization of technology, information gathering, professionalism on the job, and professionalism outside of work. The most highly required competency for both jobs was professionalism. 'Attitude', 'Performance Management', and 'Teamwork' were recognized as required for workers in charge of circuit design, while 'Challenge', 'Training', 'Professionalism on the job', and 'Communication' were recognized as required for those in charge of process actualization and process development. With the above results, this research determined the competencies that the 'H' company's affiliated research institute needs and found differences in required competencies by job. It also suggests more engaging and varied education methods, based on the confirmed importance awareness of each competency and individuals' levels of awareness of the competencies.

Morphological Characteristics Optimizing Pocketability and Text Readability for Mobile Information Devices (모바일 정보기기의 소지용이성과 텍스트 가독성을 최적화하기 위한 형태적 특성)

  • Kim, Yeon-Ji;Lee, Woo-Hun
    • Archives of design research
    • /
    • v.19 no.2 s.64
    • /
    • pp.323-332
    • /
    • 2006
  • Information devices such as cellular phones, smart phones, and PDAs have become small enough for people to put them into their pockets without any difficulty. This drastic miniaturization deteriorates the readability of text-based contents. The morphological characteristics of size and proportion are assumed to have close relationships with the pocketability and text readability of mobile information devices. This research investigated the optimal morphological characteristics to satisfy these two usability factors together. For this purpose, we conducted a controlled experiment designed to evaluate pocketability according to the size (4000 mm²/8000 mm²), proportion (1:1/2:1/3:1), and weight (100 g/200 g) of information devices, as well as participants' pose and carrying method. Male participants putting the device models into their pockets preferred the 2:1 proportion, while female participants carrying the models in their hands preferred the 2:1 proportion (size: 4000 mm² × 2 mm) and the 3:1 proportion (size: 8000 mm² × 20 mm). For devices of 4000 mm², the weight of the device was found to have a significant effect on pocketability. Consequently, the 2:1 proportion is optimal for better pocketability. The second experiment examined how text readability is affected by the size (2000 mm²/4000 mm²/8000 mm²) and proportion (1:1/2:1/3:1) of information devices, as well as the interlinear spacing of the displayed text (135%/200%). This experiment found that reading speed increased as line length increased. In the subjective assessment of the reading task, the 2:1 proportion was strongly preferred. Based on these results, we suggest the 2:1 proportion as the optimal proportion to satisfy both the pocketability of mobile information devices and the readability of text displayed on the screen.
To apply these research outputs efficiently to practical design work, it is important to take into account that space for input devices is required in addition to the display screen.
