• Title/Summary/Keyword: digital designing

An Analysis on the Revision Process and Main Contents of the International Standard ISO 16175 Sets (국제표준 ISO 16175의 개정과정과 주요내용 분석)

  • Lee, Gemma
    • The Korean Journal of Archival Studies / no.67 / pp.5-55 / 2021
  • The purpose of this study is to promote future research and practical application in the field of records systems by introducing the revision of the ISO 16175 standard set, which has been widely used as a set of functional requirements for records management, and by analyzing its main contents. Based on the experience of participating in the development of this International Standard since 2015, this study analyzes the context and process of the revision and the main contents of the standard, draws implications, and proposes follow-up research connected to the practice of Korean records systems. The previous ISO 16175 set, published as ISO 16175-1, -2, and -3 in 2010-2011, was restructured and revised into the new ISO 16175-1 and -2 in 2020, in line with the revision of ISO 15489 and changes in the digital environment. The main title of the International Standard is "Processes and functional requirements for software for managing records": Part 1 provides high-level functional requirements and associated guidance for applications that manage digital records, and Part 2 provides guidance for selecting, designing, implementing and maintaining software for managing records. The standard assumes that the records system does not necessarily have to be a single system or software dedicated solely to records management, and that it should be able to perform records management functions in any form.

The Moderating Effects of Social Support and Self-Efficacy on the Relationship between Resilience and Burnout of Visiting Caregivers in Seoul (서울지역 방문요양보호사의 회복탄력성과 직무소진의 관계에서 사회적 지지와 자기효능감의 조절효과에 관한 연구)

  • Nam, Kyung-Ok;Chae, Jae-Eun
    • Journal of Digital Convergence / v.19 no.1 / pp.59-66 / 2021
  • As the demand for elderly care services rises with the aging of the population, the role of visiting caregivers becomes increasingly important. In this context, this study examines whether social support and self-efficacy have moderating effects on the relationship between the resilience of visiting caregivers and their burnout. Hierarchical regression analysis was used for the statistical analysis. The findings are as follows. First, resilience has a negative effect on burnout: as resilience increases, burnout decreases. Second, social support moderates the relationship between a resilience sub-variable (positivity) and burnout. Third, self-efficacy moderates the relationship between a resilience sub-variable (self-regulation) and burnout. These findings suggest that nursing facilities should consider the psychological characteristics of visiting caregivers, as well as their job characteristics, when designing burnout prevention programs for them (a sketch of the moderation test is shown below).
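
As a purely illustrative aside, moderation hypotheses of the kind described above are typically tested by adding an interaction term in a hierarchical regression. The sketch below uses hypothetical variable names and made-up scores with statsmodels; it does not reproduce the study's variables or data.

```python
# Minimal sketch (hypothetical variable names and data, not the study's):
# hierarchical regression with an interaction term to test moderation.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "resilience":     [3.2, 2.8, 4.1, 3.7, 2.5, 3.9, 4.4, 2.9, 3.3, 3.0],
    "social_support": [3.5, 2.9, 4.0, 3.8, 2.7, 3.6, 4.2, 3.1, 3.4, 2.8],
    "burnout":        [2.8, 3.4, 1.9, 2.2, 3.7, 2.1, 1.7, 3.3, 2.6, 3.5],
})

# Step 1: main effects only.
m1 = smf.ols("burnout ~ resilience + social_support", data=df).fit()

# Step 2: add the interaction term; a significant coefficient on
# resilience:social_support (and an R-squared increase over Step 1)
# indicates a moderating effect of social support.
m2 = smf.ols("burnout ~ resilience * social_support", data=df).fit()

print(m2.rsquared - m1.rsquared)               # change in explained variance
print(m2.params["resilience:social_support"])  # interaction coefficient
```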

A Design and Effect of Maker Education Using Educational Artificial Intelligence Tools in Elementary Online Environment (초등 온라인 환경에서 교육용 인공지능 도구를 활용한 메이커 수업 설계 및 효과)

  • Kim, Keun-Jae;Han, Hyeong-Jong
    • Journal of Digital Convergence / v.19 no.6 / pp.61-71 / 2021
  • With online learning expanding due to COVID-19, current maker education has limitations when applied to classes. This study designs an online maker education class using educational artificial intelligence tools in elementary school, identifies responses to it, and examines whether it helps improve learners' computational thinking and creative problem-solving ability. The class was designed through a literature review and a redesign of the curriculum. The responses of the instructor and the learners were identified through interviews, and pre- and post-tests were compared using a paired-samples t-test. As a result, the class consisted of ten steps, including empathizing, defining making problems, identifying the characteristics of materials and tools, designing algorithms, and coding using remixes. Statistically significant differences were found for both computational thinking and creative problem-solving ability (a sketch of the pre/post comparison is shown below). This study is significant in showing that practical maker activities using educational artificial intelligence tools can be applied in elementary education even in an online environment.
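
As an illustrative aside, a paired-samples t-test of the kind mentioned above compares each learner's pre- and post-test scores. The sketch below uses made-up scores, not the study's data.

```python
# Minimal sketch (made-up scores, not the study's data): paired-samples t-test
# comparing pre- and post-test results for the same learners.
from scipy import stats

pre  = [62, 70, 55, 68, 74, 60, 66, 71]   # hypothetical pre-test scores
post = [71, 78, 63, 75, 80, 69, 70, 77]   # hypothetical post-test scores

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < .05 suggests a significant gain
```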

The Framework of the Transition of UX Design Workshops into the non-Face-to-Face (UX 디자인 워크숍 비대면 전환 프레임워크 연구)

  • Seong, Dain;Ha, Kwang-Soo
    • The Journal of the Korea Contents Association / v.22 no.3 / pp.309-321 / 2022
  • As the spread of COVID-19 has compelled activities in many fields to adapt to the non-face-to-face environment, various activities have either already been transitioned to non-face-to-face methods or are searching for alternative ways to be carried out in a non-face-to-face manner. However, there are clear limits to handling this transition with pre-existing digital technology. Ironically, these limitations are more apparent in the UX design field, which has thus far emphasized solutions based on digital technology. The reason lies in the nature of UX design, which strongly emphasizes collaboration. In particular, problems are expected to surface in communication and collaboration during workshops, which are a productive means of collecting stakeholders' ideas and generating new ones. Based on this need, this study assesses the characteristics of workshops in the UX design field and suggests an effective method for transitioning UX workshops to a non-face-to-face environment. In this process, the study created a standard process for design workshops involving the active creation, suggestion, and acceptance of ideas, one of the various workshop types defined by the Nielsen Norman Group. The study then developed a framework for non-face-to-face workshops by combining the standard process with methodologies for workshop facilitation and with non-face-to-face services for communication and design activities, and confirmed the adaptability and effectiveness of the transition across various types of workshops. Applying the results of this study is expected to lead the transition to the non-face-to-face environment effectively and to improve stakeholder collaboration through workshops.

Guidelines for big data projects in artificial intelligence mathematics education (인공지능 수학 교육을 위한 빅데이터 프로젝트 과제 가이드라인)

  • Lee, Junghwa;Han, Chaereen;Lim, Woong
    • The Mathematical Education / v.62 no.2 / pp.289-302 / 2023
  • In today's digital information society, students' knowledge and skills in analyzing big data and making informed decisions have become an important goal of school mathematics. Integrating big data statistical projects with digital technologies in the high school <Artificial Intelligence> mathematics course has the potential to provide students with a high-impact learning experience that develops these essential skills. This paper proposes a set of guidelines for designing effective big data statistical project-based tasks and evaluates the tasks in artificial intelligence mathematics textbooks against these criteria. The proposed guidelines recommend that projects should: (1) align knowledge and skills with the national school mathematics curriculum; (2) use preprocessed massive datasets; (3) employ data scientists' problem-solving methods; (4) encourage decision-making; (5) leverage technological tools; and (6) promote collaborative learning. The findings indicate that few textbooks fully align with these guidelines, with most failing to incorporate elements corresponding to Guideline 2 in their project tasks. In addition, most tasks in the textbooks overlook or omit data preprocessing, either by using smaller datasets or by using big data without any form of preprocessing, which can lead to misconceptions among students about the nature of big data (a minimal preprocessing sketch is shown below). Furthermore, this paper discusses the mathematical knowledge and skills relevant to artificial intelligence, as well as the potential benefits and pedagogical considerations of integrating technology into big data tasks. This research sheds light on teaching mathematical concepts with machine learning algorithms and on the effective use of technology tools in big data education.
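
As an illustrative aside on Guideline 2, preprocessing typically means cleaning and reducing a massive raw dataset before students work with it. The sketch below uses a hypothetical file and column names; it is not taken from the textbooks or the paper.

```python
# Minimal sketch (hypothetical file and columns): the kind of preprocessing
# assumed by Guideline 2 before a dataset is handed to students.
import pandas as pd

raw = pd.read_csv("air_quality_raw.csv")          # hypothetical large raw file

clean = (
    raw.dropna(subset=["pm25", "station"])        # drop records missing key fields
       .query("pm25 >= 0")                        # remove impossible sensor values
       .assign(date=lambda d: pd.to_datetime(d["date"]))
)

# Aggregate to a size students can explore: daily means per station.
daily = clean.groupby(["station", pd.Grouper(key="date", freq="D")])["pm25"].mean()
daily.reset_index().to_csv("air_quality_daily.csv", index=False)
```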

A Study on Metaverse Framework Design for Education and Training of Hydrogen Fuel Cell Engineers (수소 연료전지 엔지니어 양성을 위한 메타버스 교육훈련 플랫폼에 관한 연구)

  • Yang Zhen;Kyung Min Gwak;Young J. Rho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.207-212 / 2024
  • The importance of hydrogen fuel cells continues to be emphasized, and demand for education and training in this field is growing. Among various educational environments, metaverse-based education is opening a new era of change in the global education industry, especially in adapting to remote learning. The most significant change the metaverse has brought to education is the shift from one-way, instructor-centered, static teaching approaches to multi-directional, dynamic ones. The metaverse is expected to be effectively utilized in hydrogen fuel cell engineer education, not only enhancing the effectiveness of education by enabling learning and training anytime and anywhere, but also reducing the costs associated with engineering education. Motivated by these ideas, this research designs a fuel cell education platform that combines theoretical and practical training using the metaverse. Key aspects of the research include the development of educational training content to increase learner engagement, the configuration of user interfaces for improved usability, the creation of environments for interacting with objects in the virtual world, and support for convergence services in the form of digital twins.

An Empirical Study on How the Moderating Effects of Individual Cultural Characteristics towards a Specific Target Affects User Experience: Based on the Survey Results of Four Types of Digital Device Users in the US, Germany, and Russia (특정 대상에 대한 개인 수준의 문화적 성향이 사용자 경험에 미치는 조절효과에 대한 실증적 연구: 미국, 독일, 러시아의 4개 디지털 기기 사용자를 대상으로)

  • Lee, In-Seong;Choi, Gi-Woong;Kim, So-Lyung;Lee, Ki-Ho;Kim, Jin-Woo
    • Asia pacific journal of information systems / v.19 no.1 / pp.113-145 / 2009
  • Recently, due to the globalization of the IT (Information Technology) market, devices and systems designed in one country are used in other countries as well. This phenomenon is becoming a key driver of the growing interest in cross-cultural, or cross-national, research within the IT field. However, as the IT market becomes bigger and more globalized, a great number of IT practitioners have difficulty designing and developing devices or systems that provide an optimal user experience, because the user experience of a device or system is affected not only by tangible factors such as language and a country's economic or industrial power but also by invisible and intangible factors. Among such intangible factors, the cultural characteristics of users from different countries may affect the user experience of a device or system, because cultural characteristics shape how users understand and interpret it. In other words, when users evaluate the quality of the overall user experience, their cultural characteristics act as a perceptual lens that leads them to focus on certain elements of the experience. Therefore, the IT field needs to consider cultural characteristics when designing or developing devices or systems and when planning localization strategies. In this environment, existing IS studies equate culture with country, emphasize the importance of culture at the national level, and assume that users within the same country share the same cultural characteristics. Under these assumptions, such studies focus on the moderating effects of national-level cultural characteristics within a given theoretical framework, following cross-cultural studies by scholars such as Hofstede (1980), which provide numerical results and measurement items for cultural characteristics and thereby increase the efficiency of research. However, national-level culture has limitations in forecasting and explaining individual-level behaviors such as voluntary device or system usage, because individual cultural characteristics are the outcome not only of the national culture but also of the cultures of a race, company, local area, family, and other groups formed through interaction within those groups. Therefore, national or nationally dominant cultural characteristics have limitations in forecasting and explaining the cultural characteristics of an individual. Moreover, past studies in psychology suggest that different cultural characteristics may exist within a single individual depending on the subject being measured or its context. For example, with regard to individualism vs. collectivism, one of the major cultural dimensions, an individual may show collectivistic characteristics with family or friends but individualistic characteristics in the workplace. Acknowledging these limitations of past studies, this study investigated, within the framework of a previously developed 'theoretically integrated model of user satisfaction and emotional attachment', how the effects of different experience elements on emotional attachment or user satisfaction differ depending on individual cultural characteristics related to system or device usage.
To do this, the study hypothesized the moderating effects of four cultural dimensions (uncertainty avoidance, individualism vs. collectivism, masculinity vs. femininity, and power distance), as suggested by Hofstede (1980), within the theoretically integrated model of emotional attachment and user satisfaction. These moderating effects were then tested statistically through surveys of users of four digital devices (mobile phone, MP3 player, LCD TV, and refrigerator) in three countries (the US, Germany, and Russia). To explain and forecast the behavior of individual device or system users, individual cultural characteristics must be measured, and they must be measured independently for each target device or system. Through this suggestion, this study hopes to provide new and useful perspectives for future IS research.

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.975-976 / 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology, in which the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS), storing and executing 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by table lookup; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN DO E, and THEN DO F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN DO E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment; high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules when specialized instructions, i.e., min and max instructions, are introduced. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, and such instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small set of programs, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed for a MIPS R3000 microprocessor, a hypothetical MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even when we resort to a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

Table I. Inference time with 51 rules:
  • MIPS R3000 (regular): 125 s for 6,000 inferences; 20.8 ms per inference; 48 FLIPS
  • MIPS R3000 (with min/max instructions): 49 s for 6,000 inferences; 8.2 ms per inference; 122 FLIPS
  • UNC/MCNC ASIC chip: 0.0038 s for 6,000 inferences; 6.4 µs per inference; 156,250 FLIPS
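
As an illustrative aside, the max-min compositional inference with Mamdani implication that both chips realize in hardware can be expressed in a few lines of software. The sketch below, with an assumed toy rule base of the IF A and B THEN DO E format and membership functions discretized to 64 points, is not the chips' implementation, only the underlying mechanism.

```python
# Minimal sketch (not the chips' implementation): Mamdani-style max-min fuzzy
# inference over discretized fuzzy sets. The rule base and membership
# functions are illustrative assumptions; 64 points mirrors the chip's arrays.
import numpy as np

N = 64                                   # points per fuzzy set
x = np.linspace(0.0, 1.0, N)             # common universe of discourse

def tri(a, b, c):
    """Triangular membership function sampled on x."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

# Two rules of the form IF A and B THEN DO E.
rules = [
    {"A": tri(0.0, 0.2, 0.4), "B": tri(0.1, 0.3, 0.5), "E": tri(0.0, 0.25, 0.5)},
    {"A": tri(0.4, 0.6, 0.8), "B": tri(0.5, 0.7, 0.9), "E": tri(0.5, 0.75, 1.0)},
]

def infer(a_in, b_in):
    """Crisp inputs -> crisp output via max-min inference and centroid defuzzification."""
    fused = np.zeros(N)
    for r in rules:
        # Fuzzify the crisp inputs, then combine antecedents with min (fuzzy AND).
        w = min(np.interp(a_in, x, r["A"]), np.interp(b_in, x, r["B"]))
        # Mamdani implication: clip the consequent at the firing strength,
        # and aggregate the rules with max.
        fused = np.maximum(fused, np.minimum(w, r["E"]))
    # Centroid defuzzification, as the UNC/MCNC chip does on-chip.
    return float((x * fused).sum() / (fused.sum() + 1e-9))

print(infer(0.25, 0.35))   # low inputs -> output near the low consequent
```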

Market Segmentation of Converging New Media Advertising: The Interpretative Approach Based on Consumer Subjectivity (융합형 뉴미디어 광고의 시장세분화 연구: 소비자 주관성에 근거한 해석적 관점에서)

  • Seo, Kyoung-Jin;Hwang, Jin-Ha;Jeung, Jang-Hun;Kim, Ki-Youn
    • Journal of Internet Computing and Services / v.15 no.4 / pp.91-102 / 2014
  • The purpose of this research is to develop a consumer typology for converging new media advertising, in which the IT and advertising industries are fused, and to propose a theoretical definition of the consumer characteristics needed for market segmentation strategies from a business marketing perspective. To this end, the Q methodology was applied: a qualitative study of 'subjectivity' that builds new theory by interpreting respondents' inner systems of thinking, preference, opinion, and perception. In contrast to quantitative research that pursues hypothesis verification, Q methodology does not depend on operational definitions proposed by the researcher but pursues analysis that fully reflects the respondents' own testimony. For this reason, a Q study can analyze in depth the actual consumer types that appear at the initial market-formation stage of a new service, which makes it suitable for theorizing consumer characteristics as a form of advanced research. This study extracted thirty statements on 'IT-converged digital advertising' (the Q sample) from a thorough literature review and interviews, and discovered a total of four consumer types by analyzing the Q-sorting data of 40 respondents (the P sample); a sketch of the Q-sort analysis is shown below. By interpreting the intrinsic characteristics of each subdivided group, the four types were named the 'multi-channel digital advertisement pursuit type', the 'emotional advertisement pursuit type', the 'new media advertisement pursuit type', and the 'Web 2.0 advertisement pursuit type'. The results of this study are expected to be valuable as advanced academic and industrial research on the emerging digital advertising industry, and as basic research for R&D, marketing programs, and the design of advertising creative strategies and related policies.
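
As an illustrative aside, the analytic core of a Q study is a by-person factor analysis: respondents' Q sorts are correlated with each other, and the person-by-person correlation matrix is factored so that respondents who sorted the statements similarly load on the same type. The sketch below uses random stand-in data with the study's dimensions (30 statements, 40 respondents, four candidate types); it is not the study's analysis.

```python
# Minimal sketch (random stand-in data, not the study's): by-person factor
# extraction, the core computation behind Q-methodology typing.
import numpy as np

rng = np.random.default_rng(0)
q_sorts = rng.integers(-4, 5, size=(30, 40))      # 30 statements x 40 respondents

# By-person correlation: how similarly did respondents rank the statements?
person_corr = np.corrcoef(q_sorts, rowvar=False)  # 40 x 40 matrix

# Principal-component extraction on the person correlations; respondents
# loading on the same component form one consumer "type".
eigvals, eigvecs = np.linalg.eigh(person_corr)
loadings = eigvecs[:, ::-1][:, :4] * np.sqrt(eigvals[::-1][:4])
print(loadings.shape)   # (40 respondents, 4 candidate types)
```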

GIS based Development of Module and Algorithm for Automatic Catchment Delineation Using Korean Reach File (GIS 기반의 하천망분석도 집수구역 자동 분할을 위한 알고리듬 및 모듈 개발)

  • PARK, Yong-Gil;KIM, Kye-Hyun;YOO, Jae-Hyun
    • Journal of the Korean Association of Geographic Information Studies / v.20 no.4 / pp.126-138 / 2017
  • Recently, national interest in the environment has been increasing, and the demand to facilitate the analysis of water environment data using GIS is growing so that water environment issues can be dealt with swiftly and accurately. To meet these demands, the Korean Reach File (KRF), a spatial-network-based stream network analysis map supporting spatial analysis of water environment data, was developed and is being provided. However, it remains difficult to delineate catchment areas, which are the basis for supplying spatial data and related information frequently required by users, for example when establishing remediation measures against water pollution accidents. Therefore, a computer program was developed in this study. The development process included designing a delineation method and developing an algorithm and modules. A DEM (Digital Elevation Model) and an FDR (Flow Direction) grid were used as the major inputs for automatically delineating catchment areas. The delineation algorithm was developed in three stages: catchment area grid extraction, boundary point extraction, and boundary line division. An add-in catchment delineation module based on ESRI ArcGIS was also developed in consideration of the productivity and utility of the program. Using the developed program, catchment areas were delineated and compared to those currently used by the government. The results show that catchment areas can be delineated efficiently using digital elevation data; in regions with clear topographic slopes they were delineated accurately and swiftly. Although catchment areas were not segmented accurately in some regions with flat paddy fields, downtown areas, or well-organized drainage facilities, the program definitely reduces the processing time needed to delineate catchment areas. In the future, the algorithm should be enhanced to handle higher-precision digital elevation data and to reduce the calculation time for large data volumes. A sketch of flow-direction-based catchment extraction is shown below.
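
As an illustrative aside, the heart of catchment delineation from an FDR grid is tracing every cell that drains to an outlet along the D8 flow directions, roughly corresponding to the first stage (catchment area grid extraction) described above. The sketch below uses a toy grid and ESRI-style direction codes as assumptions; it is not the paper's ArcGIS add-in module.

```python
# Minimal sketch (toy data, not the paper's module): extracting the catchment
# (all upstream contributing cells) of an outlet cell from a D8 flow direction
# (FDR) grid, using ESRI-style direction codes 1, 2, 4, ..., 128.
import numpy as np

# D8 code -> (row offset, col offset) of the downstream neighbor.
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

fdr = np.array([[2,   4,   4,  8],
                [1,   2,   4, 16],
                [1,   1,   4, 16],
                [64, 128,  1,  1]])    # toy flow direction grid

def catchment(fdr, outlet):
    """Return a boolean mask of all cells that drain to the outlet cell."""
    rows, cols = fdr.shape
    mask = np.zeros_like(fdr, dtype=bool)
    mask[outlet] = True
    stack = [outlet]
    while stack:                        # grow upstream from the outlet
        r, c = stack.pop()
        for code, (dr, dc) in D8.items():
            ur, uc = r - dr, c - dc     # candidate upstream neighbor
            if 0 <= ur < rows and 0 <= uc < cols and not mask[ur, uc] \
                    and fdr[ur, uc] == code:   # neighbor flows into (r, c)
                mask[ur, uc] = True
                stack.append((ur, uc))
    return mask

print(catchment(fdr, (2, 2)).astype(int))   # 1 marks cells draining to the outlet
```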